Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Conflicts:
drivers/net/ethernet/intel/e1000e/param.c
drivers/net/wireless/iwlwifi/iwl-agn-rx.c
drivers/net/wireless/iwlwifi/iwl-trans-pcie-rx.c
drivers/net/wireless/iwlwifi/iwl-trans.h

Resolved the iwlwifi conflict with mainline using the 3-way diff posted
by John Linville and Stephen Rothwell. In 'net' we added a bug
fix to make iwlwifi report a more accurate skb->truesize, but this
conflicted with RX path changes that happened meanwhile in net-next.

In e1000e a conflict arose in the validation code for settings of
adapter->itr. 'net-next' had more sophisticated logic, so that
logic was used.

Signed-off-by: David S. Miller <davem@davemloft.net>

+4086 -2392
+19
Documentation/ABI/testing/sysfs-bus-hsi
··· 1 + What: /sys/bus/hsi 2 + Date: April 2012 3 + KernelVersion: 3.4 4 + Contact: Carlos Chinea <carlos.chinea@nokia.com> 5 + Description: 6 + High Speed Synchronous Serial Interface (HSI) is a 7 + serial interface mainly used for connecting application 8 + engines (APE) with cellular modem engines (CMT) in cellular 9 + handsets. 10 + The bus will be populated with devices (hsi_clients) representing 11 + the protocols available in the system. Bus drivers implement 12 + those protocols. 13 + 14 + What: /sys/bus/hsi/devices/.../modalias 15 + Date: April 2012 16 + KernelVersion: 3.4 17 + Contact: Carlos Chinea <carlos.chinea@nokia.com> 18 + Description: Stores the same MODALIAS value emitted by uevent 19 + Format: hsi:<hsi_client device name>
+2 -3
Documentation/devicetree/bindings/ata/calxeda-sata.txt → Documentation/devicetree/bindings/ata/ahci-platform.txt
··· 1 - * Calxeda SATA Controller 1 + * AHCI SATA Controller 2 2 3 3 SATA nodes are defined to describe on-chip Serial ATA controllers. 4 4 Each SATA controller should have its own node. 5 5 6 6 Required properties: 7 - - compatible : compatible list, contains "calxeda,hb-ahci" 7 + - compatible : compatible list, contains "calxeda,hb-ahci" or "snps,spear-ahci" 8 8 - interrupts : <interrupt mapping for SATA IRQ> 9 9 - reg : <registers mapping> 10 10 ··· 14 14 reg = <0xffe08000 0x1000>; 15 15 interrupts = <115>; 16 16 }; 17 -
+2 -2
Documentation/networking/ip-sysctl.txt
··· 147 147 (if tcp_adv_win_scale > 0) or bytes-bytes/2^(-tcp_adv_win_scale), 148 148 if it is <= 0. 149 149 Possible values are [-31, 31], inclusive. 150 - Default: 2 150 + Default: 1 151 151 152 152 tcp_allowed_congestion_control - STRING 153 153 Show/set the congestion control choices available to non-privileged ··· 424 424 net.core.rmem_max. Calling setsockopt() with SO_RCVBUF disables 425 425 automatic tuning of that socket's receive buffer size, in which 426 426 case this value is ignored. 427 - Default: between 87380B and 4MB, depending on RAM size. 427 + Default: between 87380B and 6MB, depending on RAM size. 428 428 429 429 tcp_sack - BOOLEAN 430 430 Enable select acknowledgments (SACKS).
+19 -18
Documentation/power/freezing-of-tasks.txt
··· 9 9 10 10 II. How does it work? 11 11 12 - There are four per-task flags used for that, PF_NOFREEZE, PF_FROZEN, TIF_FREEZE 12 + There are three per-task flags used for that, PF_NOFREEZE, PF_FROZEN 13 13 and PF_FREEZER_SKIP (the last one is auxiliary). The tasks that have 14 14 PF_NOFREEZE unset (all user space processes and some kernel threads) are 15 15 regarded as 'freezable' and treated in a special way before the system enters a ··· 17 17 we only consider hibernation, but the description also applies to suspend). 18 18 19 19 Namely, as the first step of the hibernation procedure the function 20 - freeze_processes() (defined in kernel/power/process.c) is called. It executes 21 - try_to_freeze_tasks() that sets TIF_FREEZE for all of the freezable tasks and 22 - either wakes them up, if they are kernel threads, or sends fake signals to them, 23 - if they are user space processes. A task that has TIF_FREEZE set, should react 24 - to it by calling the function called __refrigerator() (defined in 25 - kernel/freezer.c), which sets the task's PF_FROZEN flag, changes its state 26 - to TASK_UNINTERRUPTIBLE and makes it loop until PF_FROZEN is cleared for it. 27 - Then, we say that the task is 'frozen' and therefore the set of functions 28 - handling this mechanism is referred to as 'the freezer' (these functions are 29 - defined in kernel/power/process.c, kernel/freezer.c & include/linux/freezer.h). 30 - User space processes are generally frozen before kernel threads. 20 + freeze_processes() (defined in kernel/power/process.c) is called. A system-wide 21 + variable system_freezing_cnt (as opposed to a per-task flag) is used to indicate 22 + whether the system is to undergo a freezing operation. And freeze_processes() 23 + sets this variable. After this, it executes try_to_freeze_tasks() that sends a 24 + fake signal to all user space processes, and wakes up all the kernel threads. 
25 + All freezable tasks must react to that by calling try_to_freeze(), which 26 + results in a call to __refrigerator() (defined in kernel/freezer.c), which sets 27 + the task's PF_FROZEN flag, changes its state to TASK_UNINTERRUPTIBLE and makes 28 + it loop until PF_FROZEN is cleared for it. Then, we say that the task is 29 + 'frozen' and therefore the set of functions handling this mechanism is referred 30 + to as 'the freezer' (these functions are defined in kernel/power/process.c, 31 + kernel/freezer.c & include/linux/freezer.h). User space processes are generally 32 + frozen before kernel threads. 31 33 32 34 __refrigerator() must not be called directly. Instead, use the 33 35 try_to_freeze() function (defined in include/linux/freezer.h), that checks 34 - the task's TIF_FREEZE flag and makes the task enter __refrigerator() if the 35 - flag is set. 36 + if the task is to be frozen and makes the task enter __refrigerator(). 36 37 37 38 For user space processes try_to_freeze() is called automatically from the 38 39 signal-handling code, but the freezable kernel threads need to call it 39 40 explicitly in suitable places or use the wait_event_freezable() or 40 41 wait_event_freezable_timeout() macros (defined in include/linux/freezer.h) 41 - that combine interruptible sleep with checking if TIF_FREEZE is set and calling 42 - try_to_freeze(). The main loop of a freezable kernel thread may look like the 43 - following one: 42 + that combine interruptible sleep with checking if the task is to be frozen and 43 + calling try_to_freeze(). The main loop of a freezable kernel thread may look 44 + like the following one: 44 45 45 46 set_freezable(); 46 47 do { ··· 54 53 (from drivers/usb/core/hub.c::hub_thread()). 
55 54 56 55 If a freezable kernel thread fails to call try_to_freeze() after the freezer has 57 - set TIF_FREEZE for it, the freezing of tasks will fail and the entire 56 + initiated a freezing operation, the freezing of tasks will fail and the entire 58 57 hibernation operation will be cancelled. For this reason, freezable kernel 59 58 threads must call try_to_freeze() somewhere or use one of the 60 59 wait_event_freezable() and wait_event_freezable_timeout() macros.
+13 -1
Documentation/security/keys.txt
··· 123 123 124 124 The key service provides a number of features besides keys: 125 125 126 - (*) The key service defines two special key types: 126 + (*) The key service defines three special key types: 127 127 128 128 (+) "keyring" 129 129 ··· 136 136 A key of this type has a description and a payload that are arbitrary 137 137 blobs of data. These can be created, updated and read by userspace, 138 138 and aren't intended for use by kernel services. 139 + 140 + (+) "logon" 141 + 142 + Like a "user" key, a "logon" key has a payload that is an arbitrary 143 + blob of data. It is intended as a place to store secrets which are 144 + accessible to the kernel but not to userspace programs. 145 + 146 + The description can be arbitrary, but must be prefixed with a non-zero 147 + length string that describes the key "subclass". The subclass is 148 + separated from the rest of the description by a ':'. "logon" keys can 149 + be created and updated from userspace, but the payload is only 150 + readable from kernel space. 139 151 140 152 (*) Each process subscribes to three keyrings: a thread-specific keyring, a 141 153 process-specific keyring, and a session-specific keyring.
+2 -2
MAINTAINERS
··· 5887 5887 F: drivers/scsi/st* 5888 5888 5889 5889 SCTP PROTOCOL 5890 - M: Vlad Yasevich <vladislav.yasevich@hp.com> 5890 + M: Vlad Yasevich <vyasevich@gmail.com> 5891 5891 M: Sridhar Samudrala <sri@us.ibm.com> 5892 5892 L: linux-sctp@vger.kernel.org 5893 5893 W: http://lksctp.sourceforge.net 5894 - S: Supported 5894 + S: Maintained 5895 5895 F: Documentation/networking/sctp.txt 5896 5896 F: include/linux/sctp.h 5897 5897 F: include/net/sctp/
+1 -1
Makefile
··· 1 1 VERSION = 3 2 2 PATCHLEVEL = 4 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc4 4 + EXTRAVERSION = -rc5 5 5 NAME = Saber-toothed Squirrel 6 6 7 7 # *DOCUMENTATION*
+9
arch/arm/Kconfig
··· 1186 1186 source "arch/arm/Kconfig-nommu" 1187 1187 endif 1188 1188 1189 + config ARM_ERRATA_326103 1190 + bool "ARM errata: FSR write bit incorrect on a SWP to read-only memory" 1191 + depends on CPU_V6 1192 + help 1193 + Executing a SWP instruction to read-only memory does not set bit 11 1194 + of the FSR on the ARM 1136 prior to r1p0. This causes the kernel to 1195 + treat the access as a read, preventing a COW from occurring and 1196 + causing the faulting task to livelock. 1197 + 1189 1198 config ARM_ERRATA_411920 1190 1199 bool "ARM errata: Invalidation of the Instruction Cache operation can fail" 1191 1200 depends on CPU_V6 || CPU_V6K
+2 -2
arch/arm/boot/dts/msm8660-surf.dts
··· 10 10 intc: interrupt-controller@02080000 { 11 11 compatible = "qcom,msm-8660-qgic"; 12 12 interrupt-controller; 13 - #interrupt-cells = <1>; 13 + #interrupt-cells = <3>; 14 14 reg = < 0x02080000 0x1000 >, 15 15 < 0x02081000 0x1000 >; 16 16 }; ··· 19 19 compatible = "qcom,msm-hsuart", "qcom,msm-uart"; 20 20 reg = <0x19c40000 0x1000>, 21 21 <0x19c00000 0x1000>; 22 - interrupts = <195>; 22 + interrupts = <0 195 0x0>; 23 23 }; 24 24 };
+1 -1
arch/arm/boot/dts/versatile-ab.dts
··· 173 173 mmc@5000 { 174 174 compatible = "arm,primecell"; 175 175 reg = < 0x5000 0x1000>; 176 - interrupts = <22>; 176 + interrupts = <22 34>; 177 177 }; 178 178 kmi@6000 { 179 179 compatible = "arm,pl050", "arm,primecell";
+1 -1
arch/arm/boot/dts/versatile-pb.dts
··· 41 41 mmc@b000 { 42 42 compatible = "arm,primecell"; 43 43 reg = <0xb000 0x1000>; 44 - interrupts = <23>; 44 + interrupts = <23 34>; 45 45 }; 46 46 }; 47 47 };
+2
arch/arm/configs/mini2440_defconfig
··· 14 14 # CONFIG_BLK_DEV_BSG is not set 15 15 CONFIG_BLK_DEV_INTEGRITY=y 16 16 CONFIG_ARCH_S3C24XX=y 17 + # CONFIG_CPU_S3C2410 is not set 18 + CONFIG_CPU_S3C2440=y 17 19 CONFIG_S3C_ADC=y 18 20 CONFIG_S3C24XX_PWM=y 19 21 CONFIG_MACH_MINI2440=y
+7
arch/arm/include/asm/thread_info.h
··· 118 118 extern void vfp_sync_hwstate(struct thread_info *); 119 119 extern void vfp_flush_hwstate(struct thread_info *); 120 120 121 + struct user_vfp; 122 + struct user_vfp_exc; 123 + 124 + extern int vfp_preserve_user_clear_hwstate(struct user_vfp __user *, 125 + struct user_vfp_exc __user *); 126 + extern int vfp_restore_user_hwstate(struct user_vfp __user *, 127 + struct user_vfp_exc __user *); 121 128 #endif 122 129 123 130 /*
+4
arch/arm/include/asm/tls.h
··· 7 7 8 8 .macro set_tls_v6k, tp, tmp1, tmp2 9 9 mcr p15, 0, \tp, c13, c0, 3 @ set TLS register 10 + mov \tmp1, #0 11 + mcr p15, 0, \tmp1, c13, c0, 2 @ clear user r/w TLS register 10 12 .endm 11 13 12 14 .macro set_tls_v6, tp, tmp1, tmp2 ··· 17 15 mov \tmp2, #0xffff0fff 18 16 tst \tmp1, #HWCAP_TLS @ hardware TLS available? 19 17 mcrne p15, 0, \tp, c13, c0, 3 @ yes, set TLS register 18 + movne \tmp1, #0 19 + mcrne p15, 0, \tmp1, c13, c0, 2 @ clear user r/w TLS register 20 20 streq \tp, [\tmp2, #-15] @ set TLS value at 0xffff0ff0 21 21 .endm 22 22
+3 -3
arch/arm/kernel/irq.c
··· 155 155 } 156 156 157 157 c = irq_data_get_irq_chip(d); 158 - if (c->irq_set_affinity) 159 - c->irq_set_affinity(d, affinity, true); 160 - else 158 + if (!c->irq_set_affinity) 161 159 pr_debug("IRQ%u: unable to set affinity\n", d->irq); 160 + else if (c->irq_set_affinity(d, affinity, true) == IRQ_SET_MASK_OK && ret) 161 + cpumask_copy(d->affinity, affinity); 162 162 163 163 return ret; 164 164 }
+4 -51
arch/arm/kernel/signal.c
··· 180 180 181 181 static int preserve_vfp_context(struct vfp_sigframe __user *frame) 182 182 { 183 - struct thread_info *thread = current_thread_info(); 184 - struct vfp_hard_struct *h = &thread->vfpstate.hard; 185 183 const unsigned long magic = VFP_MAGIC; 186 184 const unsigned long size = VFP_STORAGE_SIZE; 187 185 int err = 0; 188 186 189 - vfp_sync_hwstate(thread); 190 187 __put_user_error(magic, &frame->magic, err); 191 188 __put_user_error(size, &frame->size, err); 192 189 193 - /* 194 - * Copy the floating point registers. There can be unused 195 - * registers see asm/hwcap.h for details. 196 - */ 197 - err |= __copy_to_user(&frame->ufp.fpregs, &h->fpregs, 198 - sizeof(h->fpregs)); 199 - /* 200 - * Copy the status and control register. 201 - */ 202 - __put_user_error(h->fpscr, &frame->ufp.fpscr, err); 190 + if (err) 191 + return -EFAULT; 203 192 204 - /* 205 - * Copy the exception registers. 206 - */ 207 - __put_user_error(h->fpexc, &frame->ufp_exc.fpexc, err); 208 - __put_user_error(h->fpinst, &frame->ufp_exc.fpinst, err); 209 - __put_user_error(h->fpinst2, &frame->ufp_exc.fpinst2, err); 210 - 211 - return err ? -EFAULT : 0; 193 + return vfp_preserve_user_clear_hwstate(&frame->ufp, &frame->ufp_exc); 212 194 } 213 195 214 196 static int restore_vfp_context(struct vfp_sigframe __user *frame) 215 197 { 216 - struct thread_info *thread = current_thread_info(); 217 - struct vfp_hard_struct *h = &thread->vfpstate.hard; 218 198 unsigned long magic; 219 199 unsigned long size; 220 - unsigned long fpexc; 221 200 int err = 0; 222 201 223 202 __get_user_error(magic, &frame->magic, err); ··· 207 228 if (magic != VFP_MAGIC || size != VFP_STORAGE_SIZE) 208 229 return -EINVAL; 209 230 210 - vfp_flush_hwstate(thread); 211 - 212 - /* 213 - * Copy the floating point registers. There can be unused 214 - * registers see asm/hwcap.h for details. 
215 - */ 216 - err |= __copy_from_user(&h->fpregs, &frame->ufp.fpregs, 217 - sizeof(h->fpregs)); 218 - /* 219 - * Copy the status and control register. 220 - */ 221 - __get_user_error(h->fpscr, &frame->ufp.fpscr, err); 222 - 223 - /* 224 - * Sanitise and restore the exception registers. 225 - */ 226 - __get_user_error(fpexc, &frame->ufp_exc.fpexc, err); 227 - /* Ensure the VFP is enabled. */ 228 - fpexc |= FPEXC_EN; 229 - /* Ensure FPINST2 is invalid and the exception flag is cleared. */ 230 - fpexc &= ~(FPEXC_EX | FPEXC_FP2V); 231 - h->fpexc = fpexc; 232 - 233 - __get_user_error(h->fpinst, &frame->ufp_exc.fpinst, err); 234 - __get_user_error(h->fpinst2, &frame->ufp_exc.fpinst2, err); 235 - 236 - return err ? -EFAULT : 0; 231 + return vfp_restore_user_hwstate(&frame->ufp, &frame->ufp_exc); 237 232 } 238 233 239 234 #endif
+17 -11
arch/arm/kernel/smp.c
··· 510 510 local_fiq_disable(); 511 511 local_irq_disable(); 512 512 513 - #ifdef CONFIG_HOTPLUG_CPU 514 - platform_cpu_kill(cpu); 515 - #endif 516 - 517 513 while (1) 518 514 cpu_relax(); 519 515 } ··· 572 576 smp_cross_call(cpumask_of(cpu), IPI_RESCHEDULE); 573 577 } 574 578 579 + #ifdef CONFIG_HOTPLUG_CPU 580 + static void smp_kill_cpus(cpumask_t *mask) 581 + { 582 + unsigned int cpu; 583 + for_each_cpu(cpu, mask) 584 + platform_cpu_kill(cpu); 585 + } 586 + #else 587 + static void smp_kill_cpus(cpumask_t *mask) { } 588 + #endif 589 + 575 590 void smp_send_stop(void) 576 591 { 577 592 unsigned long timeout; 593 + struct cpumask mask; 578 594 579 - if (num_online_cpus() > 1) { 580 - struct cpumask mask; 581 - cpumask_copy(&mask, cpu_online_mask); 582 - cpumask_clear_cpu(smp_processor_id(), &mask); 583 - 584 - smp_cross_call(&mask, IPI_CPU_STOP); 585 - } 595 + cpumask_copy(&mask, cpu_online_mask); 596 + cpumask_clear_cpu(smp_processor_id(), &mask); 597 + smp_cross_call(&mask, IPI_CPU_STOP); 586 598 587 599 /* Wait up to one second for other CPUs to stop */ 588 600 timeout = USEC_PER_SEC; ··· 599 595 600 596 if (num_online_cpus() > 1) 601 597 pr_warning("SMP: failed to stop secondary CPUs\n"); 598 + 599 + smp_kill_cpus(&mask); 602 600 } 603 601 604 602 /*
+12 -12
arch/arm/mach-exynos/clock-exynos4.c
··· 497 497 .ctrlbit = (1 << 3), 498 498 }, { 499 499 .name = "hsmmc", 500 - .devname = "s3c-sdhci.0", 500 + .devname = "exynos4-sdhci.0", 501 501 .parent = &exynos4_clk_aclk_133.clk, 502 502 .enable = exynos4_clk_ip_fsys_ctrl, 503 503 .ctrlbit = (1 << 5), 504 504 }, { 505 505 .name = "hsmmc", 506 - .devname = "s3c-sdhci.1", 506 + .devname = "exynos4-sdhci.1", 507 507 .parent = &exynos4_clk_aclk_133.clk, 508 508 .enable = exynos4_clk_ip_fsys_ctrl, 509 509 .ctrlbit = (1 << 6), 510 510 }, { 511 511 .name = "hsmmc", 512 - .devname = "s3c-sdhci.2", 512 + .devname = "exynos4-sdhci.2", 513 513 .parent = &exynos4_clk_aclk_133.clk, 514 514 .enable = exynos4_clk_ip_fsys_ctrl, 515 515 .ctrlbit = (1 << 7), 516 516 }, { 517 517 .name = "hsmmc", 518 - .devname = "s3c-sdhci.3", 518 + .devname = "exynos4-sdhci.3", 519 519 .parent = &exynos4_clk_aclk_133.clk, 520 520 .enable = exynos4_clk_ip_fsys_ctrl, 521 521 .ctrlbit = (1 << 8), ··· 1202 1202 static struct clksrc_clk exynos4_clk_sclk_mmc0 = { 1203 1203 .clk = { 1204 1204 .name = "sclk_mmc", 1205 - .devname = "s3c-sdhci.0", 1205 + .devname = "exynos4-sdhci.0", 1206 1206 .parent = &exynos4_clk_dout_mmc0.clk, 1207 1207 .enable = exynos4_clksrc_mask_fsys_ctrl, 1208 1208 .ctrlbit = (1 << 0), ··· 1213 1213 static struct clksrc_clk exynos4_clk_sclk_mmc1 = { 1214 1214 .clk = { 1215 1215 .name = "sclk_mmc", 1216 - .devname = "s3c-sdhci.1", 1216 + .devname = "exynos4-sdhci.1", 1217 1217 .parent = &exynos4_clk_dout_mmc1.clk, 1218 1218 .enable = exynos4_clksrc_mask_fsys_ctrl, 1219 1219 .ctrlbit = (1 << 4), ··· 1224 1224 static struct clksrc_clk exynos4_clk_sclk_mmc2 = { 1225 1225 .clk = { 1226 1226 .name = "sclk_mmc", 1227 - .devname = "s3c-sdhci.2", 1227 + .devname = "exynos4-sdhci.2", 1228 1228 .parent = &exynos4_clk_dout_mmc2.clk, 1229 1229 .enable = exynos4_clksrc_mask_fsys_ctrl, 1230 1230 .ctrlbit = (1 << 8), ··· 1235 1235 static struct clksrc_clk exynos4_clk_sclk_mmc3 = { 1236 1236 .clk = { 1237 1237 .name = "sclk_mmc", 1238 - .devname = "s3c-sdhci.3", 1238 + .devname = "exynos4-sdhci.3", 1239 1239 .parent = &exynos4_clk_dout_mmc3.clk, 1240 1240 .enable = exynos4_clksrc_mask_fsys_ctrl, 1241 1241 .ctrlbit = (1 << 12), ··· 1340 1340 CLKDEV_INIT("exynos4210-uart.1", "clk_uart_baud0", &exynos4_clk_sclk_uart1.clk), 1341 1341 CLKDEV_INIT("exynos4210-uart.2", "clk_uart_baud0", &exynos4_clk_sclk_uart2.clk), 1342 1342 CLKDEV_INIT("exynos4210-uart.3", "clk_uart_baud0", &exynos4_clk_sclk_uart3.clk), 1343 - CLKDEV_INIT("s3c-sdhci.0", "mmc_busclk.2", &exynos4_clk_sclk_mmc0.clk), 1344 - CLKDEV_INIT("s3c-sdhci.1", "mmc_busclk.2", &exynos4_clk_sclk_mmc1.clk), 1345 - CLKDEV_INIT("s3c-sdhci.2", "mmc_busclk.2", &exynos4_clk_sclk_mmc2.clk), 1346 - CLKDEV_INIT("s3c-sdhci.3", "mmc_busclk.2", &exynos4_clk_sclk_mmc3.clk), 1343 + CLKDEV_INIT("exynos4-sdhci.0", "mmc_busclk.2", &exynos4_clk_sclk_mmc0.clk), 1344 + CLKDEV_INIT("exynos4-sdhci.1", "mmc_busclk.2", &exynos4_clk_sclk_mmc1.clk), 1345 + CLKDEV_INIT("exynos4-sdhci.2", "mmc_busclk.2", &exynos4_clk_sclk_mmc2.clk), 1346 + CLKDEV_INIT("exynos4-sdhci.3", "mmc_busclk.2", &exynos4_clk_sclk_mmc3.clk), 1347 1347 CLKDEV_INIT("exynos4-fb.0", "lcd", &exynos4_clk_fimd0), 1348 1348 CLKDEV_INIT("dma-pl330.0", "apb_pclk", &exynos4_clk_pdma0), 1349 1349 CLKDEV_INIT("dma-pl330.1", "apb_pclk", &exynos4_clk_pdma1),
+12 -12
arch/arm/mach-exynos/clock-exynos5.c
··· 455 455 .ctrlbit = (1 << 20), 456 456 }, { 457 457 .name = "hsmmc", 458 - .devname = "s3c-sdhci.0", 458 + .devname = "exynos4-sdhci.0", 459 459 .parent = &exynos5_clk_aclk_200.clk, 460 460 .enable = exynos5_clk_ip_fsys_ctrl, 461 461 .ctrlbit = (1 << 12), 462 462 }, { 463 463 .name = "hsmmc", 464 - .devname = "s3c-sdhci.1", 464 + .devname = "exynos4-sdhci.1", 465 465 .parent = &exynos5_clk_aclk_200.clk, 466 466 .enable = exynos5_clk_ip_fsys_ctrl, 467 467 .ctrlbit = (1 << 13), 468 468 }, { 469 469 .name = "hsmmc", 470 - .devname = "s3c-sdhci.2", 470 + .devname = "exynos4-sdhci.2", 471 471 .parent = &exynos5_clk_aclk_200.clk, 472 472 .enable = exynos5_clk_ip_fsys_ctrl, 473 473 .ctrlbit = (1 << 14), 474 474 }, { 475 475 .name = "hsmmc", 476 - .devname = "s3c-sdhci.3", 476 + .devname = "exynos4-sdhci.3", 477 477 .parent = &exynos5_clk_aclk_200.clk, 478 478 .enable = exynos5_clk_ip_fsys_ctrl, 479 479 .ctrlbit = (1 << 15), ··· 813 813 static struct clksrc_clk exynos5_clk_sclk_mmc0 = { 814 814 .clk = { 815 815 .name = "sclk_mmc", 816 - .devname = "s3c-sdhci.0", 816 + .devname = "exynos4-sdhci.0", 817 817 .parent = &exynos5_clk_dout_mmc0.clk, 818 818 .enable = exynos5_clksrc_mask_fsys_ctrl, 819 819 .ctrlbit = (1 << 0), ··· 824 824 static struct clksrc_clk exynos5_clk_sclk_mmc1 = { 825 825 .clk = { 826 826 .name = "sclk_mmc", 827 - .devname = "s3c-sdhci.1", 827 + .devname = "exynos4-sdhci.1", 828 828 .parent = &exynos5_clk_dout_mmc1.clk, 829 829 .enable = exynos5_clksrc_mask_fsys_ctrl, 830 830 .ctrlbit = (1 << 4), ··· 835 835 static struct clksrc_clk exynos5_clk_sclk_mmc2 = { 836 836 .clk = { 837 837 .name = "sclk_mmc", 838 - .devname = "s3c-sdhci.2", 838 + .devname = "exynos4-sdhci.2", 839 839 .parent = &exynos5_clk_dout_mmc2.clk, 840 840 .enable = exynos5_clksrc_mask_fsys_ctrl, 841 841 .ctrlbit = (1 << 8), ··· 846 846 static struct clksrc_clk exynos5_clk_sclk_mmc3 = { 847 847 .clk = { 848 848 .name = "sclk_mmc", 849 - .devname = "s3c-sdhci.3", 849 + .devname = "exynos4-sdhci.3", 850 850 .parent = &exynos5_clk_dout_mmc3.clk, 851 851 .enable = exynos5_clksrc_mask_fsys_ctrl, 852 852 .ctrlbit = (1 << 12), ··· 990 990 CLKDEV_INIT("exynos4210-uart.1", "clk_uart_baud0", &exynos5_clk_sclk_uart1.clk), 991 991 CLKDEV_INIT("exynos4210-uart.2", "clk_uart_baud0", &exynos5_clk_sclk_uart2.clk), 992 992 CLKDEV_INIT("exynos4210-uart.3", "clk_uart_baud0", &exynos5_clk_sclk_uart3.clk), 993 - CLKDEV_INIT("s3c-sdhci.0", "mmc_busclk.2", &exynos5_clk_sclk_mmc0.clk), 994 - CLKDEV_INIT("s3c-sdhci.1", "mmc_busclk.2", &exynos5_clk_sclk_mmc1.clk), 995 - CLKDEV_INIT("s3c-sdhci.2", "mmc_busclk.2", &exynos5_clk_sclk_mmc2.clk), 996 - CLKDEV_INIT("s3c-sdhci.3", "mmc_busclk.2", &exynos5_clk_sclk_mmc3.clk), 993 + CLKDEV_INIT("exynos4-sdhci.0", "mmc_busclk.2", &exynos5_clk_sclk_mmc0.clk), 994 + CLKDEV_INIT("exynos4-sdhci.1", "mmc_busclk.2", &exynos5_clk_sclk_mmc1.clk), 995 + CLKDEV_INIT("exynos4-sdhci.2", "mmc_busclk.2", &exynos5_clk_sclk_mmc2.clk), 996 + CLKDEV_INIT("exynos4-sdhci.3", "mmc_busclk.2", &exynos5_clk_sclk_mmc3.clk), 997 997 CLKDEV_INIT("dma-pl330.0", "apb_pclk", &exynos5_clk_pdma0), 998 998 CLKDEV_INIT("dma-pl330.1", "apb_pclk", &exynos5_clk_pdma1), 999 999 CLKDEV_INIT("dma-pl330.2", "apb_pclk", &exynos5_clk_mdma1),
+13 -1
arch/arm/mach-exynos/common.c
··· 326 326 s3c_fimc_setname(2, "exynos4-fimc"); 327 327 s3c_fimc_setname(3, "exynos4-fimc"); 328 328 329 + s3c_sdhci_setname(0, "exynos4-sdhci"); 330 + s3c_sdhci_setname(1, "exynos4-sdhci"); 331 + s3c_sdhci_setname(2, "exynos4-sdhci"); 332 + s3c_sdhci_setname(3, "exynos4-sdhci"); 333 + 329 334 /* The I2C bus controllers are directly compatible with s3c2440 */ 330 335 s3c_i2c0_setname("s3c2440-i2c"); 331 336 s3c_i2c1_setname("s3c2440-i2c"); ··· 348 343 s3c_device_i2c0.resource[0].end = EXYNOS5_PA_IIC(0) + SZ_4K - 1; 349 344 s3c_device_i2c0.resource[1].start = EXYNOS5_IRQ_IIC; 350 345 s3c_device_i2c0.resource[1].end = EXYNOS5_IRQ_IIC; 346 + 347 + s3c_sdhci_setname(0, "exynos4-sdhci"); 348 + s3c_sdhci_setname(1, "exynos4-sdhci"); 349 + s3c_sdhci_setname(2, "exynos4-sdhci"); 350 + s3c_sdhci_setname(3, "exynos4-sdhci"); 351 351 352 352 /* The I2C bus controllers are directly compatible with s3c2440 */ 353 353 s3c_i2c0_setname("s3c2440-i2c"); ··· 547 537 { 548 538 int irq; 549 539 550 - gic_init(0, IRQ_PPI(0), S5P_VA_GIC_DIST, S5P_VA_GIC_CPU); 540 + #ifdef CONFIG_OF 541 + of_irq_init(exynos4_dt_irq_match); 542 + #endif 551 543 552 544 for (irq = 0; irq < EXYNOS5_MAX_COMBINER_NR; irq++) { 553 545 combiner_init(irq, (void __iomem *)S5P_VA_COMBINER(irq),
+3 -10
arch/arm/mach-exynos/dev-dwmci.c
··· 16 16 #include <linux/dma-mapping.h> 17 17 #include <linux/platform_device.h> 18 18 #include <linux/interrupt.h> 19 + #include <linux/ioport.h> 19 20 #include <linux/mmc/dw_mmc.h> 20 21 21 22 #include <plat/devs.h> ··· 34 33 } 35 34 36 35 static struct resource exynos4_dwmci_resource[] = { 37 - [0] = { 38 - .start = EXYNOS4_PA_DWMCI, 39 - .end = EXYNOS4_PA_DWMCI + SZ_4K - 1, 40 - .flags = IORESOURCE_MEM, 41 - }, 42 - [1] = { 43 - .start = IRQ_DWMCI, 44 - .end = IRQ_DWMCI, 45 - .flags = IORESOURCE_IRQ, 46 - } 36 + [0] = DEFINE_RES_MEM(EXYNOS4_PA_DWMCI, SZ_4K), 37 + [1] = DEFINE_RES_IRQ(EXYNOS4_IRQ_DWMCI), 47 38 }; 48 39 49 40 static struct dw_mci_board exynos4_dwci_pdata = {
+1
arch/arm/mach-exynos/mach-nuri.c
··· 112 112 .host_caps = (MMC_CAP_8_BIT_DATA | MMC_CAP_4_BIT_DATA | 113 113 MMC_CAP_MMC_HIGHSPEED | MMC_CAP_SD_HIGHSPEED | 114 114 MMC_CAP_ERASE), 115 + .host_caps2 = MMC_CAP2_BROKEN_VOLTAGE, 115 116 .cd_type = S3C_SDHCI_CD_PERMANENT, 116 117 .clk_type = S3C_SDHCI_CLK_DIV_EXTERNAL, 117 118 };
+1
arch/arm/mach-exynos/mach-universal_c210.c
··· 747 747 .max_width = 8, 748 748 .host_caps = (MMC_CAP_8_BIT_DATA | MMC_CAP_4_BIT_DATA | 749 749 MMC_CAP_MMC_HIGHSPEED | MMC_CAP_SD_HIGHSPEED), 750 + .host_caps2 = MMC_CAP2_BROKEN_VOLTAGE, 750 751 .cd_type = S3C_SDHCI_CD_PERMANENT, 751 752 .clk_type = S3C_SDHCI_CLK_DIV_EXTERNAL, 752 753 };
+15 -10
arch/arm/mach-msm/board-msm8x60.c
··· 17 17 #include <linux/irqdomain.h> 18 18 #include <linux/of.h> 19 19 #include <linux/of_address.h> 20 + #include <linux/of_irq.h> 20 21 #include <linux/of_platform.h> 21 22 #include <linux/memblock.h> 22 23 ··· 50 49 msm_map_msm8x60_io(); 51 50 } 52 51 52 + #ifdef CONFIG_OF 53 + static struct of_device_id msm_dt_gic_match[] __initdata = { 54 + { .compatible = "qcom,msm-8660-qgic", .data = gic_of_init }, 55 + {} 56 + }; 57 + #endif 58 + 53 59 static void __init msm8x60_init_irq(void) 54 60 { 55 - gic_init(0, GIC_PPI_START, MSM_QGIC_DIST_BASE, 56 - (void *)MSM_QGIC_CPU_BASE); 61 + if (!of_have_populated_dt()) 62 + gic_init(0, GIC_PPI_START, MSM_QGIC_DIST_BASE, 63 + (void *)MSM_QGIC_CPU_BASE); 64 + #ifdef CONFIG_OF 65 + else 66 + of_irq_init(msm_dt_gic_match); 67 + #endif 57 68 58 69 /* Edge trigger PPIs except AVS_SVICINT and AVS_SVICINTSWDONE */ 59 70 writel(0xFFFFD7FF, MSM_QGIC_DIST_BASE + GIC_DIST_CONFIG + 4); ··· 86 73 {} 87 74 }; 88 75 89 - static struct of_device_id msm_dt_gic_match[] __initdata = { 90 - { .compatible = "qcom,msm-8660-qgic", }, 91 - {} 92 - }; 93 - 94 76 static void __init msm8x60_dt_init(void) 95 77 { 96 - irq_domain_generate_simple(msm_dt_gic_match, MSM8X60_QGIC_DIST_PHYS, 97 - GIC_SPI_START); 98 - 99 78 if (of_machine_is_compatible("qcom,msm8660-surf")) { 100 79 printk(KERN_INFO "Init surf UART registers\n"); 101 80 msm8x60_init_uart12dm();
+7
arch/arm/mach-pxa/include/mach/mfp-pxa2xx.h
··· 17 17 * 18 18 * bit 23 - Input/Output (PXA2xx specific) 19 19 * bit 24 - Wakeup Enable(PXA2xx specific) 20 + * bit 25 - Keep Output (PXA2xx specific) 20 21 */ 21 22 22 23 #define MFP_DIR_IN (0x0 << 23) ··· 26 25 #define MFP_DIR(x) (((x) >> 23) & 0x1) 27 26 28 27 #define MFP_LPM_CAN_WAKEUP (0x1 << 24) 28 + 29 + /* 30 + * MFP_LPM_KEEP_OUTPUT must be specified for pins that need to 31 + * retain their last output level (low or high). 32 + * Note: MFP_LPM_KEEP_OUTPUT has no effect on pins configured for input. 33 + */ 29 34 #define MFP_LPM_KEEP_OUTPUT (0x1 << 25) 30 35 31 36 #define WAKEUP_ON_EDGE_RISE (MFP_LPM_CAN_WAKEUP | MFP_LPM_EDGE_RISE)
+19 -2
arch/arm/mach-pxa/mfp-pxa2xx.c
··· 33 33 #define BANK_OFF(n) (((n) < 3) ? (n) << 2 : 0x100 + (((n) - 3) << 2)) 34 34 #define GPLR(x) __REG2(0x40E00000, BANK_OFF((x) >> 5)) 35 35 #define GPDR(x) __REG2(0x40E00000, BANK_OFF((x) >> 5) + 0x0c) 36 + #define GPSR(x) __REG2(0x40E00000, BANK_OFF((x) >> 5) + 0x18) 37 + #define GPCR(x) __REG2(0x40E00000, BANK_OFF((x) >> 5) + 0x24) 36 38 37 39 #define PWER_WE35 (1 << 24) 38 40 ··· 350 348 #ifdef CONFIG_PM 351 349 static unsigned long saved_gafr[2][4]; 352 350 static unsigned long saved_gpdr[4]; 351 + static unsigned long saved_gplr[4]; 353 352 static unsigned long saved_pgsr[4]; 354 353 355 354 static int pxa2xx_mfp_suspend(void) ··· 369 366 } 370 367 371 368 for (i = 0; i <= gpio_to_bank(pxa_last_gpio); i++) { 372 - 373 369 saved_gafr[0][i] = GAFR_L(i); 374 370 saved_gafr[1][i] = GAFR_U(i); 375 371 saved_gpdr[i] = GPDR(i * 32); 372 + saved_gplr[i] = GPLR(i * 32); 376 373 saved_pgsr[i] = PGSR(i); 377 374 378 - GPDR(i * 32) = gpdr_lpm[i]; 375 + GPSR(i * 32) = PGSR(i); 376 + GPCR(i * 32) = ~PGSR(i); 379 377 } 378 + 379 + /* set GPDR bits taking into account MFP_LPM_KEEP_OUTPUT */ 380 + for (i = 0; i < pxa_last_gpio; i++) { 381 + if ((gpdr_lpm[gpio_to_bank(i)] & GPIO_bit(i)) || 382 + ((gpio_desc[i].config & MFP_LPM_KEEP_OUTPUT) && 383 + (saved_gpdr[gpio_to_bank(i)] & GPIO_bit(i)))) 384 + GPDR(i) |= GPIO_bit(i); 385 + else 386 + GPDR(i) &= ~GPIO_bit(i); 387 + } 388 + 380 389 return 0; 381 390 } 382 391 ··· 399 384 for (i = 0; i <= gpio_to_bank(pxa_last_gpio); i++) { 400 385 GAFR_L(i) = saved_gafr[0][i]; 401 386 GAFR_U(i) = saved_gafr[1][i]; 387 + GPSR(i * 32) = saved_gplr[i]; 388 + GPCR(i * 32) = ~saved_gplr[i]; 402 389 GPDR(i * 32) = saved_gpdr[i]; 403 390 PGSR(i) = saved_pgsr[i]; 404 391 }
+5 -1
arch/arm/mach-pxa/pxa27x.c
··· 421 421 pxa_register_device(&pxa27x_device_i2c_power, info); 422 422 } 423 423 424 + static struct pxa_gpio_platform_data pxa27x_gpio_info __initdata = { 425 + .gpio_set_wake = gpio_set_wake, 426 + }; 427 + 424 428 static struct platform_device *devices[] __initdata = { 425 - &pxa_device_gpio, 426 429 &pxa27x_device_udc, 427 430 &pxa_device_pmu, 428 431 &pxa_device_i2s, ··· 461 458 register_syscore_ops(&pxa2xx_mfp_syscore_ops); 462 459 register_syscore_ops(&pxa2xx_clock_syscore_ops); 463 460 461 + pxa_register_device(&pxa_device_gpio, &pxa27x_gpio_info); 464 462 ret = platform_add_devices(devices, ARRAY_SIZE(devices)); 465 463 } 466 464
+4 -4
arch/arm/mach-s3c24xx/Kconfig
··· 111 111 help 112 112 Compile in platform device definition for Samsung TouchScreen. 113 113 114 - # cpu-specific sections 115 - 116 - if CPU_S3C2410 117 - 118 114 config S3C2410_DMA 119 115 bool 120 116 depends on S3C24XX_DMA && (CPU_S3C2410 || CPU_S3C2442) ··· 122 126 bool 123 127 help 124 128 Power Management code common to S3C2410 and better 129 + 130 + # cpu-specific sections 131 + 132 + if CPU_S3C2410 125 133 126 134 config S3C24XX_SIMTEC_NOR 127 135 bool
+2
arch/arm/mach-s5pv210/mach-goni.c
··· 25 25 #include <linux/gpio_keys.h> 26 26 #include <linux/input.h> 27 27 #include <linux/gpio.h> 28 + #include <linux/mmc/host.h> 28 29 #include <linux/interrupt.h> 29 30 30 31 #include <asm/hardware/vic.h> ··· 766 765 /* MoviNAND */ 767 766 static struct s3c_sdhci_platdata goni_hsmmc0_data __initdata = { 768 767 .max_width = 4, 768 + .host_caps2 = MMC_CAP2_BROKEN_VOLTAGE, 769 769 .cd_type = S3C_SDHCI_CD_PERMANENT, 770 770 }; 771 771
+1 -1
arch/arm/mach-sa1100/generic.c
··· 306 306 } 307 307 308 308 static struct resource sa1100_rtc_resources[] = { 309 - DEFINE_RES_MEM(0x90010000, 0x9001003f), 309 + DEFINE_RES_MEM(0x90010000, 0x40), 310 310 DEFINE_RES_IRQ_NAMED(IRQ_RTC1Hz, "rtc 1Hz"), 311 311 DEFINE_RES_IRQ_NAMED(IRQ_RTCAlrm, "rtc alarm"), 312 312 };
+4 -2
arch/arm/mach-u300/core.c
··· 1667 1667 1668 1668 for (i = 0; i < U300_VIC_IRQS_END; i++) 1669 1669 set_bit(i, (unsigned long *) &mask[0]); 1670 - vic_init((void __iomem *) U300_INTCON0_VBASE, 0, mask[0], mask[0]); 1671 - vic_init((void __iomem *) U300_INTCON1_VBASE, 32, mask[1], mask[1]); 1670 + vic_init((void __iomem *) U300_INTCON0_VBASE, IRQ_U300_INTCON0_START, 1671 + mask[0], mask[0]); 1672 + vic_init((void __iomem *) U300_INTCON1_VBASE, IRQ_U300_INTCON1_START, 1673 + mask[1], mask[1]); 1672 1674 } 1673 1675 1674 1676
+1 -8
arch/arm/mach-u300/i2c.c
··· 146 146 .min_uV = 1800000, 147 147 .max_uV = 1800000, 148 148 .valid_modes_mask = REGULATOR_MODE_NORMAL, 149 - .valid_ops_mask = 150 - REGULATOR_CHANGE_VOLTAGE | 151 - REGULATOR_CHANGE_STATUS, 152 149 .always_on = 1, 153 150 .boot_on = 1, 154 151 }, ··· 157 160 .min_uV = 2500000, 158 161 .max_uV = 2500000, 159 162 .valid_modes_mask = REGULATOR_MODE_NORMAL, 160 - .valid_ops_mask = 161 - REGULATOR_CHANGE_VOLTAGE | 162 - REGULATOR_CHANGE_STATUS, 163 163 .always_on = 1, 164 164 .boot_on = 1, 165 165 }, ··· 224 230 .max_uV = 1800000, 225 231 .valid_modes_mask = REGULATOR_MODE_NORMAL, 226 232 .valid_ops_mask = 227 - REGULATOR_CHANGE_VOLTAGE | 228 - REGULATOR_CHANGE_STATUS, 233 + REGULATOR_CHANGE_VOLTAGE, 229 234 .always_on = 1, 230 235 .boot_on = 1, 231 236 },
+75 -75
arch/arm/mach-u300/include/mach/irqs.h
··· 12 12 #ifndef __MACH_IRQS_H 13 13 #define __MACH_IRQS_H 14 14 15 - #define IRQ_U300_INTCON0_START 0 16 - #define IRQ_U300_INTCON1_START 32 15 + #define IRQ_U300_INTCON0_START 1 16 + #define IRQ_U300_INTCON1_START 33 17 17 /* These are on INTCON0 - 30 lines */ 18 - #define IRQ_U300_IRQ0_EXT 0 19 - #define IRQ_U300_IRQ1_EXT 1 20 - #define IRQ_U300_DMA 2 21 - #define IRQ_U300_VIDEO_ENC_0 3 22 - #define IRQ_U300_VIDEO_ENC_1 4 23 - #define IRQ_U300_AAIF_RX 5 24 - #define IRQ_U300_AAIF_TX 6 25 - #define IRQ_U300_AAIF_VGPIO 7 26 - #define IRQ_U300_AAIF_WAKEUP 8 27 - #define IRQ_U300_PCM_I2S0_FRAME 9 28 - #define IRQ_U300_PCM_I2S0_FIFO 10 29 - #define IRQ_U300_PCM_I2S1_FRAME 11 30 - #define IRQ_U300_PCM_I2S1_FIFO 12 31 - #define IRQ_U300_XGAM_GAMCON 13 32 - #define IRQ_U300_XGAM_CDI 14 33 - #define IRQ_U300_XGAM_CDICON 15 18 + #define IRQ_U300_IRQ0_EXT 1 19 + #define IRQ_U300_IRQ1_EXT 2 20 + #define IRQ_U300_DMA 3 21 + #define IRQ_U300_VIDEO_ENC_0 4 22 + #define IRQ_U300_VIDEO_ENC_1 5 23 + #define IRQ_U300_AAIF_RX 6 24 + #define IRQ_U300_AAIF_TX 7 25 + #define IRQ_U300_AAIF_VGPIO 8 26 + #define IRQ_U300_AAIF_WAKEUP 9 27 + #define IRQ_U300_PCM_I2S0_FRAME 10 28 + #define IRQ_U300_PCM_I2S0_FIFO 11 29 + #define IRQ_U300_PCM_I2S1_FRAME 12 30 + #define IRQ_U300_PCM_I2S1_FIFO 13 31 + #define IRQ_U300_XGAM_GAMCON 14 32 + #define IRQ_U300_XGAM_CDI 15 33 + #define IRQ_U300_XGAM_CDICON 16 34 34 #if defined(CONFIG_MACH_U300_BS2X) || defined(CONFIG_MACH_U300_BS330) 35 35 /* MMIACC not used on the DB3210 or DB3350 chips */ 36 - #define IRQ_U300_XGAM_MMIACC 16 36 + #define IRQ_U300_XGAM_MMIACC 17 37 37 #endif 38 - #define IRQ_U300_XGAM_PDI 17 39 - #define IRQ_U300_XGAM_PDICON 18 40 - #define IRQ_U300_XGAM_GAMEACC 19 41 - #define IRQ_U300_XGAM_MCIDCT 20 42 - #define IRQ_U300_APEX 21 43 - #define IRQ_U300_UART0 22 44 - #define IRQ_U300_SPI 23 45 - #define IRQ_U300_TIMER_APP_OS 24 46 - #define IRQ_U300_TIMER_APP_DD 25 47 - #define IRQ_U300_TIMER_APP_GP1 26 48 - #define IRQ_U300_TIMER_APP_GP2 27
49 - #define IRQ_U300_TIMER_OS 28 50 - #define IRQ_U300_TIMER_MS 29 51 - #define IRQ_U300_KEYPAD_KEYBF 30 52 - #define IRQ_U300_KEYPAD_KEYBR 31 38 + #define IRQ_U300_XGAM_PDI 18 39 + #define IRQ_U300_XGAM_PDICON 19 40 + #define IRQ_U300_XGAM_GAMEACC 20 41 + #define IRQ_U300_XGAM_MCIDCT 21 42 + #define IRQ_U300_APEX 22 43 + #define IRQ_U300_UART0 23 44 + #define IRQ_U300_SPI 24 45 + #define IRQ_U300_TIMER_APP_OS 25 46 + #define IRQ_U300_TIMER_APP_DD 26 47 + #define IRQ_U300_TIMER_APP_GP1 27 48 + #define IRQ_U300_TIMER_APP_GP2 28 49 + #define IRQ_U300_TIMER_OS 29 50 + #define IRQ_U300_TIMER_MS 30 51 + #define IRQ_U300_KEYPAD_KEYBF 31 52 + #define IRQ_U300_KEYPAD_KEYBR 32 53 53 /* These are on INTCON1 - 32 lines */ 54 - #define IRQ_U300_GPIO_PORT0 32 55 - #define IRQ_U300_GPIO_PORT1 33 56 - #define IRQ_U300_GPIO_PORT2 34 54 + #define IRQ_U300_GPIO_PORT0 33 55 + #define IRQ_U300_GPIO_PORT1 34 56 + #define IRQ_U300_GPIO_PORT2 35 57 57 58 58 #if defined(CONFIG_MACH_U300_BS2X) || defined(CONFIG_MACH_U300_BS330) || \ 59 59 defined(CONFIG_MACH_U300_BS335) 60 60 /* These are for DB3150, DB3200 and DB3350 */ 61 - #define IRQ_U300_WDOG 35 62 - #define IRQ_U300_EVHIST 36 63 - #define IRQ_U300_MSPRO 37 64 - #define IRQ_U300_MMCSD_MCIINTR0 38 65 - #define IRQ_U300_MMCSD_MCIINTR1 39 66 - #define IRQ_U300_I2C0 40 67 - #define IRQ_U300_I2C1 41 68 - #define IRQ_U300_RTC 42 69 - #define IRQ_U300_NFIF 43 70 - #define IRQ_U300_NFIF2 44 61 + #define IRQ_U300_WDOG 36 62 + #define IRQ_U300_EVHIST 37 63 + #define IRQ_U300_MSPRO 38 64 + #define IRQ_U300_MMCSD_MCIINTR0 39 65 + #define IRQ_U300_MMCSD_MCIINTR1 40 66 + #define IRQ_U300_I2C0 41 67 + #define IRQ_U300_I2C1 42 68 + #define IRQ_U300_RTC 43 69 + #define IRQ_U300_NFIF 44 70 + #define IRQ_U300_NFIF2 45 71 71 #endif 72 72 73 73 /* DB3150 and DB3200 have only 45 IRQs */ 74 74 #if defined(CONFIG_MACH_U300_BS2X) || defined(CONFIG_MACH_U300_BS330) 75 - #define U300_VIC_IRQS_END 45 75 + #define U300_VIC_IRQS_END 46
76 76 #endif 77 77 78 78 /* The DB3350-specific interrupt lines */ 79 79 #ifdef CONFIG_MACH_U300_BS335 80 - #define IRQ_U300_ISP_F0 45 81 - #define IRQ_U300_ISP_F1 46 82 - #define IRQ_U300_ISP_F2 47 83 - #define IRQ_U300_ISP_F3 48 84 - #define IRQ_U300_ISP_F4 49 85 - #define IRQ_U300_GPIO_PORT3 50 86 - #define IRQ_U300_SYSCON_PLL_LOCK 51 87 - #define IRQ_U300_UART1 52 88 - #define IRQ_U300_GPIO_PORT4 53 89 - #define IRQ_U300_GPIO_PORT5 54 90 - #define IRQ_U300_GPIO_PORT6 55 91 - #define U300_VIC_IRQS_END 56 80 + #define IRQ_U300_ISP_F0 46 81 + #define IRQ_U300_ISP_F1 47 82 + #define IRQ_U300_ISP_F2 48 83 + #define IRQ_U300_ISP_F3 49 84 + #define IRQ_U300_ISP_F4 50 85 + #define IRQ_U300_GPIO_PORT3 51 86 + #define IRQ_U300_SYSCON_PLL_LOCK 52 87 + #define IRQ_U300_UART1 53 88 + #define IRQ_U300_GPIO_PORT4 54 89 + #define IRQ_U300_GPIO_PORT5 55 90 + #define IRQ_U300_GPIO_PORT6 56 91 + #define U300_VIC_IRQS_END 57 92 92 #endif 93 93 94 94 /* The DB3210-specific interrupt lines */ 95 95 #ifdef CONFIG_MACH_U300_BS365 96 - #define IRQ_U300_GPIO_PORT3 35 97 - #define IRQ_U300_GPIO_PORT4 36 98 - #define IRQ_U300_WDOG 37 99 - #define IRQ_U300_EVHIST 38 100 - #define IRQ_U300_MSPRO 39 101 - #define IRQ_U300_MMCSD_MCIINTR0 40 102 - #define IRQ_U300_MMCSD_MCIINTR1 41 103 - #define IRQ_U300_I2C0 42 104 - #define IRQ_U300_I2C1 43 105 - #define IRQ_U300_RTC 44 106 - #define IRQ_U300_NFIF 45 107 - #define IRQ_U300_NFIF2 46 108 - #define IRQ_U300_SYSCON_PLL_LOCK 47 109 - #define U300_VIC_IRQS_END 48 96 + #define IRQ_U300_GPIO_PORT3 36 97 + #define IRQ_U300_GPIO_PORT4 37 98 + #define IRQ_U300_WDOG 38 99 + #define IRQ_U300_EVHIST 39 100 + #define IRQ_U300_MSPRO 40 101 + #define IRQ_U300_MMCSD_MCIINTR0 41 102 + #define IRQ_U300_MMCSD_MCIINTR1 42 103 + #define IRQ_U300_I2C0 43 104 + #define IRQ_U300_I2C1 44 105 + #define IRQ_U300_RTC 45 106 + #define IRQ_U300_NFIF 46 107 + #define IRQ_U300_NFIF2 47 108 + #define IRQ_U300_SYSCON_PLL_LOCK 48 109 + #define U300_VIC_IRQS_END 49 110 110 #endif
111 111 112 112 /* Maximum 8*7 GPIO lines */ ··· 117 117 #define IRQ_U300_GPIO_END (U300_VIC_IRQS_END) 118 118 #endif 119 119 120 - #define NR_IRQS (IRQ_U300_GPIO_END) 120 + #define NR_IRQS (IRQ_U300_GPIO_END - IRQ_U300_INTCON0_START) 121 121 122 122 #endif
+1 -1
arch/arm/mach-ux500/mbox-db5500.c
··· 168 168 return sprintf(buf, "0x%X\n", mbox_value); 169 169 } 170 170 171 - static DEVICE_ATTR(fifo, S_IWUGO | S_IRUGO, mbox_read_fifo, mbox_write_fifo); 171 + static DEVICE_ATTR(fifo, S_IWUSR | S_IRUGO, mbox_read_fifo, mbox_write_fifo); 172 172 173 173 static int mbox_show(struct seq_file *s, void *data) 174 174 {
+12 -7
arch/arm/mm/abort-ev6.S
··· 26 26 mrc p15, 0, r1, c5, c0, 0 @ get FSR 27 27 mrc p15, 0, r0, c6, c0, 0 @ get FAR 28 28 /* 29 - * Faulty SWP instruction on 1136 doesn't set bit 11 in DFSR (erratum 326103). 30 - * The test below covers all the write situations, including Java bytecodes 29 + * Faulty SWP instruction on 1136 doesn't set bit 11 in DFSR. 31 30 */ 32 - bic r1, r1, #1 << 11 @ clear bit 11 of FSR 33 - tst r5, #PSR_J_BIT @ Java? 31 + #ifdef CONFIG_ARM_ERRATA_326103 32 + ldr ip, =0x4107b36 33 + mrc p15, 0, r3, c0, c0, 0 @ get processor id 34 + teq ip, r3, lsr #4 @ r0 ARM1136? 34 35 bne do_DataAbort 35 - do_thumb_abort fsr=r1, pc=r4, psr=r5, tmp=r3 36 - ldreq r3, [r4] @ read aborted ARM instruction 36 + tst r5, #PSR_J_BIT @ Java? 37 + tsteq r5, #PSR_T_BIT @ Thumb? 38 + bne do_DataAbort 39 + bic r1, r1, #1 << 11 @ clear bit 11 of FSR 40 + ldr r3, [r4] @ read aborted ARM instruction 37 41 #ifdef CONFIG_CPU_ENDIAN_BE8 38 - reveq r3, r3 42 + rev r3, r3 39 43 #endif 40 44 do_ldrd_abort tmp=ip, insn=r3 41 45 tst r3, #1 << 20 @ L = 0 -> write 42 46 orreq r1, r1, #1 << 11 @ yes. 47 + #endif 43 48 b do_DataAbort
+14 -11
arch/arm/mm/cache-l2x0.c
··· 32 32 static DEFINE_RAW_SPINLOCK(l2x0_lock); 33 33 static u32 l2x0_way_mask; /* Bitmask of active ways */ 34 34 static u32 l2x0_size; 35 + static unsigned long sync_reg_offset = L2X0_CACHE_SYNC; 35 36 36 37 struct l2x0_regs l2x0_saved_regs; 37 38 ··· 62 61 { 63 62 void __iomem *base = l2x0_base; 64 63 65 - #ifdef CONFIG_PL310_ERRATA_753970 66 - /* write to an unmmapped register */ 67 - writel_relaxed(0, base + L2X0_DUMMY_REG); 68 - #else 69 - writel_relaxed(0, base + L2X0_CACHE_SYNC); 70 - #endif 64 + writel_relaxed(0, base + sync_reg_offset); 71 65 cache_wait(base + L2X0_CACHE_SYNC, 1); 72 66 } 73 67 ··· 81 85 } 82 86 83 87 #if defined(CONFIG_PL310_ERRATA_588369) || defined(CONFIG_PL310_ERRATA_727915) 88 + static inline void debug_writel(unsigned long val) 89 + { 90 + if (outer_cache.set_debug) 91 + outer_cache.set_debug(val); 92 + } 84 93 85 - #define debug_writel(val) outer_cache.set_debug(val) 86 - 87 - static void l2x0_set_debug(unsigned long val) 94 + static void pl310_set_debug(unsigned long val) 88 95 { 89 96 writel_relaxed(val, l2x0_base + L2X0_DEBUG_CTRL); 90 97 } ··· 97 98 { 98 99 } 99 100 100 - #define l2x0_set_debug NULL 101 + #define pl310_set_debug NULL 101 102 #endif 102 103 103 104 #ifdef CONFIG_PL310_ERRATA_588369 ··· 330 331 else 331 332 ways = 8; 332 333 type = "L310"; 334 + #ifdef CONFIG_PL310_ERRATA_753970 335 + /* Unmapped register. */ 336 + sync_reg_offset = L2X0_DUMMY_REG; 337 + #endif 338 + outer_cache.set_debug = pl310_set_debug; 333 339 break; 334 340 case L2X0_CACHE_ID_PART_L210: 335 341 ways = (aux >> 13) & 0xf; ··· 383 379 outer_cache.flush_all = l2x0_flush_all; 384 380 outer_cache.inv_all = l2x0_inv_all; 385 381 outer_cache.disable = l2x0_disable; 386 - outer_cache.set_debug = l2x0_set_debug; 387 382 388 383 printk(KERN_INFO "%s cache controller enabled\n", type); 389 384 printk(KERN_INFO "l2x0: %d ways, CACHE_ID 0x%08x, AUX_CTRL 0x%08x, Cache size: %d B\n",
+2 -2
arch/arm/mm/init.c
··· 293 293 #endif 294 294 295 295 #ifndef CONFIG_SPARSEMEM 296 - static void arm_memory_present(void) 296 + static void __init arm_memory_present(void) 297 297 { 298 298 } 299 299 #else 300 - static void arm_memory_present(void) 300 + static void __init arm_memory_present(void) 301 301 { 302 302 struct memblock_region *reg; 303 303
+2 -2
arch/arm/mm/mmu.c
··· 618 618 } 619 619 } 620 620 621 - static void alloc_init_pud(pgd_t *pgd, unsigned long addr, unsigned long end, 622 - unsigned long phys, const struct mem_type *type) 621 + static void __init alloc_init_pud(pgd_t *pgd, unsigned long addr, 622 + unsigned long end, unsigned long phys, const struct mem_type *type) 623 623 { 624 624 pud_t *pud = pud_offset(pgd, addr); 625 625 unsigned long next;
+14
arch/arm/plat-omap/dma.c
··· 916 916 l |= OMAP_DMA_CCR_BUFFERING_DISABLE; 917 917 l |= OMAP_DMA_CCR_EN; 918 918 919 + /* 920 + * As dma_write() uses IO accessors which are weakly ordered, there 921 + * is no guarantee that data in coherent DMA memory will be visible 922 + * to the DMA device. Add a memory barrier here to ensure that any 923 + * such data is visible prior to enabling DMA. 924 + */ 925 + mb(); 919 926 p->dma_write(l, CCR, lch); 920 927 921 928 dma_chan[lch].flags |= OMAP_DMA_ACTIVE; ··· 971 964 l &= ~OMAP_DMA_CCR_EN; 972 965 p->dma_write(l, CCR, lch); 973 966 } 967 + 968 + /* 969 + * Ensure that data transferred by DMA is visible to any access 970 + * after DMA has been disabled. This is important for coherent 971 + * DMA regions. 972 + */ 973 + mb(); 974 974 975 975 if (!omap_dma_in_1510_mode() && dma_chan[lch].next_lch != -1) { 976 976 int next_lch, cur_lch = lch;
+28
arch/arm/plat-samsung/include/plat/sdhci.h
··· 18 18 #ifndef __PLAT_S3C_SDHCI_H 19 19 #define __PLAT_S3C_SDHCI_H __FILE__ 20 20 21 + #include <plat/devs.h> 22 + 21 23 struct platform_device; 22 24 struct mmc_host; 23 25 struct mmc_card; ··· 357 355 static inline void exynos4_default_sdhci3(void) { } 358 356 359 357 #endif /* CONFIG_EXYNOS4_SETUP_SDHCI */ 358 + 359 + static inline void s3c_sdhci_setname(int id, char *name) 360 + { 361 + switch (id) { 362 + #ifdef CONFIG_S3C_DEV_HSMMC 363 + case 0: 364 + s3c_device_hsmmc0.name = name; 365 + break; 366 + #endif 367 + #ifdef CONFIG_S3C_DEV_HSMMC1 368 + case 1: 369 + s3c_device_hsmmc1.name = name; 370 + break; 371 + #endif 372 + #ifdef CONFIG_S3C_DEV_HSMMC2 373 + case 2: 374 + s3c_device_hsmmc2.name = name; 375 + break; 376 + #endif 377 + #ifdef CONFIG_S3C_DEV_HSMMC3 378 + case 3: 379 + s3c_device_hsmmc3.name = name; 380 + break; 381 + #endif 382 + } 383 + } 360 384 361 385 #endif /* __PLAT_S3C_SDHCI_H */
+99
arch/arm/vfp/vfpmodule.c
··· 17 17 #include <linux/sched.h> 18 18 #include <linux/smp.h> 19 19 #include <linux/init.h> 20 + #include <linux/uaccess.h> 21 + #include <linux/user.h> 20 22 21 23 #include <asm/cp15.h> 22 24 #include <asm/cputype.h> ··· 528 526 vfp_force_reload(cpu, thread); 529 527 530 528 put_cpu(); 529 + } 530 + 531 + /* 532 + * Save the current VFP state into the provided structures and prepare 533 + * for entry into a new function (signal handler). 534 + */ 535 + int vfp_preserve_user_clear_hwstate(struct user_vfp __user *ufp, 536 + struct user_vfp_exc __user *ufp_exc) 537 + { 538 + struct thread_info *thread = current_thread_info(); 539 + struct vfp_hard_struct *hwstate = &thread->vfpstate.hard; 540 + int err = 0; 541 + 542 + /* Ensure that the saved hwstate is up-to-date. */ 543 + vfp_sync_hwstate(thread); 544 + 545 + /* 546 + * Copy the floating point registers. There can be unused 547 + * registers see asm/hwcap.h for details. 548 + */ 549 + err |= __copy_to_user(&ufp->fpregs, &hwstate->fpregs, 550 + sizeof(hwstate->fpregs)); 551 + /* 552 + * Copy the status and control register. 553 + */ 554 + __put_user_error(hwstate->fpscr, &ufp->fpscr, err); 555 + 556 + /* 557 + * Copy the exception registers. 558 + */ 559 + __put_user_error(hwstate->fpexc, &ufp_exc->fpexc, err); 560 + __put_user_error(hwstate->fpinst, &ufp_exc->fpinst, err); 561 + __put_user_error(hwstate->fpinst2, &ufp_exc->fpinst2, err); 562 + 563 + if (err) 564 + return -EFAULT; 565 + 566 + /* Ensure that VFP is disabled. */ 567 + vfp_flush_hwstate(thread); 568 + 569 + /* 570 + * As per the PCS, clear the length and stride bits for function 571 + * entry. 572 + */ 573 + hwstate->fpscr &= ~(FPSCR_LENGTH_MASK | FPSCR_STRIDE_MASK); 574 + 575 + /* 576 + * Disable VFP in the hwstate so that we can detect if it gets 577 + * used. 578 + */ 579 + hwstate->fpexc &= ~FPEXC_EN; 580 + return 0; 581 + } 582 + 583 + /* Sanitise and restore the current VFP state from the provided structures. */
584 + int vfp_restore_user_hwstate(struct user_vfp __user *ufp, 585 + struct user_vfp_exc __user *ufp_exc) 586 + { 587 + struct thread_info *thread = current_thread_info(); 588 + struct vfp_hard_struct *hwstate = &thread->vfpstate.hard; 589 + unsigned long fpexc; 590 + int err = 0; 591 + 592 + /* 593 + * If VFP has been used, then disable it to avoid corrupting 594 + * the new thread state. 595 + */ 596 + if (hwstate->fpexc & FPEXC_EN) 597 + vfp_flush_hwstate(thread); 598 + 599 + /* 600 + * Copy the floating point registers. There can be unused 601 + * registers see asm/hwcap.h for details. 602 + */ 603 + err |= __copy_from_user(&hwstate->fpregs, &ufp->fpregs, 604 + sizeof(hwstate->fpregs)); 605 + /* 606 + * Copy the status and control register. 607 + */ 608 + __get_user_error(hwstate->fpscr, &ufp->fpscr, err); 609 + 610 + /* 611 + * Sanitise and restore the exception registers. 612 + */ 613 + __get_user_error(fpexc, &ufp_exc->fpexc, err); 614 + 615 + /* Ensure the VFP is enabled. */ 616 + fpexc |= FPEXC_EN; 617 + 618 + /* Ensure FPINST2 is invalid and the exception flag is cleared. */ 619 + fpexc &= ~(FPEXC_EX | FPEXC_FP2V); 620 + hwstate->fpexc = fpexc; 621 + 622 + __get_user_error(hwstate->fpinst, &ufp_exc->fpinst, err); 623 + __get_user_error(hwstate->fpinst2, &ufp_exc->fpinst2, err); 624 + 625 + return err ? -EFAULT : 0; 531 626 } 532 627 533 628 /*
+26 -27
arch/blackfin/mach-bf538/boards/ezkit.c
··· 38 38 .name = "rtc-bfin", 39 39 .id = -1, 40 40 }; 41 - #endif 41 + #endif /* CONFIG_RTC_DRV_BFIN */ 42 42 43 43 #if defined(CONFIG_SERIAL_BFIN) || defined(CONFIG_SERIAL_BFIN_MODULE) 44 44 #ifdef CONFIG_SERIAL_BFIN_UART0 ··· 100 100 .platform_data = &bfin_uart0_peripherals, /* Passed to driver */ 101 101 }, 102 102 }; 103 - #endif 103 + #endif /* CONFIG_SERIAL_BFIN_UART0 */ 104 104 #ifdef CONFIG_SERIAL_BFIN_UART1 105 105 static struct resource bfin_uart1_resources[] = { 106 106 { ··· 148 148 .platform_data = &bfin_uart1_peripherals, /* Passed to driver */ 149 149 }, 150 150 }; 151 - #endif 151 + #endif /* CONFIG_SERIAL_BFIN_UART1 */ 152 152 #ifdef CONFIG_SERIAL_BFIN_UART2 153 153 static struct resource bfin_uart2_resources[] = { 154 154 { ··· 196 196 .platform_data = &bfin_uart2_peripherals, /* Passed to driver */ 197 197 }, 198 198 }; 199 - #endif 200 - #endif 199 + #endif /* CONFIG_SERIAL_BFIN_UART2 */ 200 + #endif /* CONFIG_SERIAL_BFIN */ 201 201 202 202 #if defined(CONFIG_BFIN_SIR) || defined(CONFIG_BFIN_SIR_MODULE) 203 203 #ifdef CONFIG_BFIN_SIR0 ··· 224 224 .num_resources = ARRAY_SIZE(bfin_sir0_resources), 225 225 .resource = bfin_sir0_resources, 226 226 }; 227 - #endif 227 + #endif /* CONFIG_BFIN_SIR0 */ 228 228 #ifdef CONFIG_BFIN_SIR1 229 229 static struct resource bfin_sir1_resources[] = { 230 230 { ··· 249 249 .num_resources = ARRAY_SIZE(bfin_sir1_resources), 250 250 .resource = bfin_sir1_resources, 251 251 }; 252 - #endif 252 + #endif /* CONFIG_BFIN_SIR1 */ 253 253 #ifdef CONFIG_BFIN_SIR2 254 254 static struct resource bfin_sir2_resources[] = { 255 255 { ··· 274 274 .num_resources = ARRAY_SIZE(bfin_sir2_resources), 275 275 .resource = bfin_sir2_resources, 276 276 }; 277 - #endif 278 - #endif 277 + #endif /* CONFIG_BFIN_SIR2 */ 278 + #endif /* CONFIG_BFIN_SIR */ 279 279 280 280 #if defined(CONFIG_SERIAL_BFIN_SPORT) || defined(CONFIG_SERIAL_BFIN_SPORT_MODULE) 281 281 #ifdef CONFIG_SERIAL_BFIN_SPORT0_UART ··· 311 311 .platform_data = &bfin_sport0_peripherals, /* Passed to driver */
312 312 }, 313 313 }; 314 - #endif 314 + #endif /* CONFIG_SERIAL_BFIN_SPORT0_UART */ 315 315 #ifdef CONFIG_SERIAL_BFIN_SPORT1_UART 316 316 static struct resource bfin_sport1_uart_resources[] = { 317 317 { ··· 345 345 .platform_data = &bfin_sport1_peripherals, /* Passed to driver */ 346 346 }, 347 347 }; 348 - #endif 348 + #endif /* CONFIG_SERIAL_BFIN_SPORT1_UART */ 349 349 #ifdef CONFIG_SERIAL_BFIN_SPORT2_UART 350 350 static struct resource bfin_sport2_uart_resources[] = { 351 351 { ··· 379 379 .platform_data = &bfin_sport2_peripherals, /* Passed to driver */ 380 380 }, 381 381 }; 382 - #endif 382 + #endif /* CONFIG_SERIAL_BFIN_SPORT2_UART */ 383 383 #ifdef CONFIG_SERIAL_BFIN_SPORT3_UART 384 384 static struct resource bfin_sport3_uart_resources[] = { 385 385 { ··· 413 413 .platform_data = &bfin_sport3_peripherals, /* Passed to driver */ 414 414 }, 415 415 }; 416 - #endif 417 - #endif 416 + #endif /* CONFIG_SERIAL_BFIN_SPORT3_UART */ 417 + #endif /* CONFIG_SERIAL_BFIN_SPORT */ 418 418 419 419 #if defined(CONFIG_CAN_BFIN) || defined(CONFIG_CAN_BFIN_MODULE) 420 420 static unsigned short bfin_can_peripherals[] = { ··· 452 452 .platform_data = &bfin_can_peripherals, /* Passed to driver */ 453 453 }, 454 454 }; 455 - #endif 455 + #endif /* CONFIG_CAN_BFIN */ 456 456 457 457 /* 458 458 * USB-LAN EzExtender board ··· 488 488 .platform_data = &smc91x_info, 489 489 }, 490 490 }; 491 - #endif 491 + #endif /* CONFIG_SMC91X */ 492 492 493 493 #if defined(CONFIG_SPI_BFIN5XX) || defined(CONFIG_SPI_BFIN5XX_MODULE) 494 494 /* all SPI peripherals info goes here */ ··· 518 518 static struct bfin5xx_spi_chip spi_flash_chip_info = { 519 519 .enable_dma = 0, /* use dma transfer with this chip*/ 520 520 }; 521 - #endif 521 + #endif /* CONFIG_MTD_M25P80 */ 522 + #endif /* CONFIG_SPI_BFIN5XX */ 522 523 523 524 #if defined(CONFIG_TOUCHSCREEN_AD7879) || defined(CONFIG_TOUCHSCREEN_AD7879_MODULE) 524 525 #include <linux/spi/ad7879.h> ··· 536 535 .gpio_export = 1, /* Export GPIO to gpiolib */
537 536 .gpio_base = -1, /* Dynamic allocation */ 538 537 }; 539 - #endif 538 + #endif /* CONFIG_TOUCHSCREEN_AD7879 */ 540 539 #if defined(CONFIG_FB_BFIN_LQ035Q1) || defined(CONFIG_FB_BFIN_LQ035Q1_MODULE) 541 540 #include <asm/bfin-lq035q1.h> ··· 565 564 .platform_data = &bfin_lq035q1_data, 566 565 }, 567 566 }; 568 - #endif 567 + #endif /* CONFIG_FB_BFIN_LQ035Q1 */ 569 568 570 569 static struct spi_board_info bf538_spi_board_info[] __initdata = { 571 570 #if defined(CONFIG_MTD_M25P80) \ ··· 580 579 .controller_data = &spi_flash_chip_info, 581 580 .mode = SPI_MODE_3, 582 581 }, 583 - #endif 582 + #endif /* CONFIG_MTD_M25P80 */ 584 583 #if defined(CONFIG_TOUCHSCREEN_AD7879_SPI) || defined(CONFIG_TOUCHSCREEN_AD7879_SPI_MODULE) 585 584 { 586 585 .modalias = "ad7879", ··· 591 590 .chip_select = 1, 592 591 .mode = SPI_CPHA | SPI_CPOL, 593 592 }, 594 - #endif 593 + #endif /* CONFIG_TOUCHSCREEN_AD7879_SPI */ 595 594 #if defined(CONFIG_FB_BFIN_LQ035Q1) || defined(CONFIG_FB_BFIN_LQ035Q1_MODULE) 596 595 { 597 596 .modalias = "bfin-lq035q1-spi", ··· 600 599 .chip_select = 2, 601 600 .mode = SPI_CPHA | SPI_CPOL, 602 601 }, 603 - #endif 602 + #endif /* CONFIG_FB_BFIN_LQ035Q1 */ 604 603 #if defined(CONFIG_SPI_SPIDEV) || defined(CONFIG_SPI_SPIDEV_MODULE) 605 604 { 606 605 .modalias = "spidev", ··· 608 607 .bus_num = 0, 609 608 .chip_select = 1, 610 609 }, 611 - #endif 610 + #endif /* CONFIG_SPI_SPIDEV */ 612 611 }; 613 612 614 613 /* SPI (0) */ ··· 717 716 }, 718 717 }; 719 718 720 - #endif /* spi master and devices */ 721 - 722 719 #if defined(CONFIG_I2C_BLACKFIN_TWI) || defined(CONFIG_I2C_BLACKFIN_TWI_MODULE) 723 720 static struct resource bfin_twi0_resource[] = { 724 721 [0] = { ··· 758 759 .num_resources = ARRAY_SIZE(bfin_twi1_resource), 759 760 .resource = bfin_twi1_resource, 760 761 }; 761 - #endif 762 - #endif 762 + #endif /* CONFIG_BF542 */ 763 + #endif /* CONFIG_I2C_BLACKFIN_TWI */ 763 764 764 765 #if defined(CONFIG_KEYBOARD_GPIO) || defined(CONFIG_KEYBOARD_GPIO_MODULE)
765 766 #include <linux/gpio_keys.h>
+1
arch/hexagon/kernel/dma.c
··· 22 22 #include <linux/bootmem.h> 23 23 #include <linux/genalloc.h> 24 24 #include <asm/dma-mapping.h> 25 + #include <linux/module.h> 25 26 26 27 struct dma_map_ops *dma_ops; 27 28 EXPORT_SYMBOL(dma_ops);
+3 -3
arch/hexagon/kernel/process.c
··· 1 1 /* 2 2 * Process creation support for Hexagon 3 3 * 4 - * Copyright (c) 2010-2011, Code Aurora Forum. All rights reserved. 4 + * Copyright (c) 2010-2012, Code Aurora Forum. All rights reserved. 5 5 * 6 6 * This program is free software; you can redistribute it and/or modify 7 7 * it under the terms of the GNU General Public License version 2 and ··· 88 88 void cpu_idle(void) 89 89 { 90 90 while (1) { 91 - tick_nohz_stop_sched_tick(1); 91 + tick_nohz_idle_enter(); 92 92 local_irq_disable(); 93 93 while (!need_resched()) { 94 94 idle_sleep(); ··· 97 97 local_irq_disable(); 98 98 } 99 99 local_irq_enable(); 100 - tick_nohz_restart_sched_tick(); 100 + tick_nohz_idle_exit(); 101 101 schedule(); 102 102 } 103 103 }
+1
arch/hexagon/kernel/ptrace.c
··· 28 28 #include <linux/ptrace.h> 29 29 #include <linux/regset.h> 30 30 #include <linux/user.h> 31 + #include <linux/elf.h> 31 32 32 33 #include <asm/user.h> 33 34
+7 -1
arch/hexagon/kernel/smp.c
··· 1 1 /* 2 2 * SMP support for Hexagon 3 3 * 4 - * Copyright (c) 2010-2011, Code Aurora Forum. All rights reserved. 4 + * Copyright (c) 2010-2012, Code Aurora Forum. All rights reserved. 5 5 * 6 6 * This program is free software; you can redistribute it and/or modify 7 7 * it under the terms of the GNU General Public License version 2 and ··· 28 28 #include <linux/sched.h> 29 29 #include <linux/smp.h> 30 30 #include <linux/spinlock.h> 31 + #include <linux/cpu.h> 31 32 32 33 #include <asm/time.h> /* timer_interrupt */ 33 34 #include <asm/hexagon_vm.h> ··· 178 177 179 178 printk(KERN_INFO "%s cpu %d\n", __func__, current_thread_info()->cpu); 180 179 180 + notify_cpu_starting(cpu); 181 + 182 + ipi_call_lock(); 181 183 set_cpu_online(cpu, true); 184 + ipi_call_unlock(); 185 + 182 186 local_irq_enable(); 183 187 184 188 cpu_idle();
+1
arch/hexagon/kernel/time.c
··· 28 28 #include <linux/of.h> 29 29 #include <linux/of_address.h> 30 30 #include <linux/of_irq.h> 31 + #include <linux/module.h> 31 32 32 33 #include <asm/timer-regs.h> 33 34 #include <asm/hexagon_vm.h>
+1
arch/hexagon/kernel/vdso.c
··· 21 21 #include <linux/err.h> 22 22 #include <linux/mm.h> 23 23 #include <linux/vmalloc.h> 24 + #include <linux/binfmts.h> 24 25 25 26 #include <asm/vdso.h> 26 27
+1 -1
arch/mips/ath79/dev-wmac.c
··· 58 58 59 59 static int ar933x_wmac_reset(void) 60 60 { 61 - ath79_device_reset_clear(AR933X_RESET_WMAC); 62 61 ath79_device_reset_set(AR933X_RESET_WMAC); 62 + ath79_device_reset_clear(AR933X_RESET_WMAC); 63 63 64 64 return 0; 65 65 }
+1 -1
arch/mips/include/asm/mach-jz4740/irq.h
··· 45 45 #define JZ4740_IRQ_LCD JZ4740_IRQ(30) 46 46 47 47 /* 2nd-level interrupts */ 48 - #define JZ4740_IRQ_DMA(x) (JZ4740_IRQ(32) + (X)) 48 + #define JZ4740_IRQ_DMA(x) (JZ4740_IRQ(32) + (x)) 49 49 50 50 #define JZ4740_IRQ_INTC_GPIO(x) (JZ4740_IRQ_GPIO0 - (x)) 51 51 #define JZ4740_IRQ_GPIO(x) (JZ4740_IRQ(48) + (x))
-6
arch/mips/include/asm/mmu_context.h
··· 37 37 write_c0_xcontext((unsigned long) smp_processor_id() << 51); \ 38 38 } while (0) 39 39 40 - 41 - static inline unsigned long get_current_pgd(void) 42 - { 43 - return PHYS_TO_XKSEG_CACHED((read_c0_context() >> 11) & ~0xfffUL); 44 - } 45 - 46 40 #else /* CONFIG_MIPS_PGD_C0_CONTEXT: using pgd_current*/ 47 41 48 42 /*
+5 -22
arch/mips/kernel/signal.c
··· 257 257 return -EFAULT; 258 258 sigdelsetmask(&newset, ~_BLOCKABLE); 259 259 260 - spin_lock_irq(&current->sighand->siglock); 261 260 current->saved_sigmask = current->blocked; 262 - current->blocked = newset; 263 - recalc_sigpending(); 264 - spin_unlock_irq(&current->sighand->siglock); 261 + set_current_blocked(&newset); 265 262 266 263 current->state = TASK_INTERRUPTIBLE; 267 264 schedule(); ··· 283 286 return -EFAULT; 284 287 sigdelsetmask(&newset, ~_BLOCKABLE); 285 288 286 - spin_lock_irq(&current->sighand->siglock); 287 289 current->saved_sigmask = current->blocked; 288 - current->blocked = newset; 289 - recalc_sigpending(); 290 - spin_unlock_irq(&current->sighand->siglock); 290 + set_current_blocked(&newset); 291 291 292 292 current->state = TASK_INTERRUPTIBLE; 293 293 schedule(); ··· 356 362 goto badframe; 357 363 358 364 sigdelsetmask(&blocked, ~_BLOCKABLE); 359 - spin_lock_irq(&current->sighand->siglock); 360 - current->blocked = blocked; 361 - recalc_sigpending(); 362 - spin_unlock_irq(&current->sighand->siglock); 365 + set_current_blocked(&blocked); 363 366 364 367 sig = restore_sigcontext(&regs, &frame->sf_sc); 365 368 if (sig < 0) ··· 392 401 goto badframe; 393 402 394 403 sigdelsetmask(&set, ~_BLOCKABLE); 395 - spin_lock_irq(&current->sighand->siglock); 396 - current->blocked = set; 397 - recalc_sigpending(); 398 - spin_unlock_irq(&current->sighand->siglock); 404 + set_current_blocked(&set); 399 405 400 406 sig = restore_sigcontext(&regs, &frame->rs_uc.uc_mcontext); 401 407 if (sig < 0) ··· 568 580 if (ret) 569 581 return ret; 570 582 571 - spin_lock_irq(&current->sighand->siglock); 572 - sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask); 573 - if (!(ka->sa.sa_flags & SA_NODEFER)) 574 - sigaddset(&current->blocked, sig); 575 - recalc_sigpending(); 576 - spin_unlock_irq(&current->sighand->siglock); 583 + block_sigmask(ka, sig); 577 584 578 585 return ret; 579 586 }
+4 -16
arch/mips/kernel/signal32.c
··· 290 290 return -EFAULT; 291 291 sigdelsetmask(&newset, ~_BLOCKABLE); 292 292 293 - spin_lock_irq(&current->sighand->siglock); 294 293 current->saved_sigmask = current->blocked; 295 - current->blocked = newset; 296 - recalc_sigpending(); 297 - spin_unlock_irq(&current->sighand->siglock); 294 + set_current_blocked(&newset); 298 295 299 296 current->state = TASK_INTERRUPTIBLE; 300 297 schedule(); ··· 315 318 return -EFAULT; 316 319 sigdelsetmask(&newset, ~_BLOCKABLE); 317 320 318 - spin_lock_irq(&current->sighand->siglock); 319 321 current->saved_sigmask = current->blocked; 320 - current->blocked = newset; 321 - recalc_sigpending(); 322 - spin_unlock_irq(&current->sighand->siglock); 322 + set_current_blocked(&newset); 323 323 324 324 current->state = TASK_INTERRUPTIBLE; 325 325 schedule(); ··· 482 488 goto badframe; 483 489 484 490 sigdelsetmask(&blocked, ~_BLOCKABLE); 485 - spin_lock_irq(&current->sighand->siglock); 486 - current->blocked = blocked; 487 - recalc_sigpending(); 488 - spin_unlock_irq(&current->sighand->siglock); 491 + set_current_blocked(&blocked); 489 492 490 493 sig = restore_sigcontext32(&regs, &frame->sf_sc); 491 494 if (sig < 0) ··· 520 529 goto badframe; 521 530 522 531 sigdelsetmask(&set, ~_BLOCKABLE); 523 - spin_lock_irq(&current->sighand->siglock); 524 - current->blocked = set; 525 - recalc_sigpending(); 526 - spin_unlock_irq(&current->sighand->siglock); 532 + set_current_blocked(&set); 527 533 528 534 sig = restore_sigcontext32(&regs, &frame->rs_uc.uc_mcontext); 529 535 if (sig < 0)
+2 -8
arch/mips/kernel/signal_n32.c
··· 93 93 sigset_from_compat(&newset, &uset); 94 94 sigdelsetmask(&newset, ~_BLOCKABLE); 95 95 96 - spin_lock_irq(&current->sighand->siglock); 97 96 current->saved_sigmask = current->blocked; 98 - current->blocked = newset; 99 - recalc_sigpending(); 100 - spin_unlock_irq(&current->sighand->siglock); 97 + set_current_blocked(&newset); 101 98 102 99 current->state = TASK_INTERRUPTIBLE; 103 100 schedule(); ··· 118 121 goto badframe; 119 122 120 123 sigdelsetmask(&set, ~_BLOCKABLE); 121 - spin_lock_irq(&current->sighand->siglock); 122 - current->blocked = set; 123 - recalc_sigpending(); 124 - spin_unlock_irq(&current->sighand->siglock); 124 + set_current_blocked(&set); 125 125 126 126 sig = restore_sigcontext(&regs, &frame->rs_uc.uc_mcontext); 127 127 if (sig < 0)
-4
arch/powerpc/include/asm/irq.h
··· 18 18 #include <linux/atomic.h> 19 19 20 20 21 - /* Define a way to iterate across irqs. */ 22 - #define for_each_irq(i) \ 23 - for ((i) = 0; (i) < NR_IRQS; ++(i)) 24 - 25 21 extern atomic_t ppc_n_lost_interrupts; 26 22 27 23 /* This number is used when no interrupt has been assigned */
+1 -5
arch/powerpc/kernel/irq.c
··· 330 330 331 331 alloc_cpumask_var(&mask, GFP_KERNEL); 332 332 333 - for_each_irq(irq) { 333 + for_each_irq_desc(irq, desc) { 334 334 struct irq_data *data; 335 335 struct irq_chip *chip; 336 - 337 - desc = irq_to_desc(irq); 338 - if (!desc) 339 - continue; 340 336 341 337 data = irq_desc_get_irq_data(desc); 342 338 if (irqd_is_per_cpu(data))
+2 -5
arch/powerpc/kernel/machine_kexec.c
··· 23 23 24 24 void machine_kexec_mask_interrupts(void) { 25 25 unsigned int i; 26 + struct irq_desc *desc; 26 27 27 - for_each_irq(i) { 28 - struct irq_desc *desc = irq_to_desc(i); 28 + for_each_irq_desc(i, desc) { 29 29 struct irq_chip *chip; 30 - 31 - if (!desc) 32 - continue; 33 30 34 31 chip = irq_desc_get_chip(desc); 35 32 if (!chip)
+7 -1
arch/powerpc/net/bpf_jit.h
··· 48 48 /* 49 49 * Assembly helpers from arch/powerpc/net/bpf_jit.S: 50 50 */ 51 - extern u8 sk_load_word[], sk_load_half[], sk_load_byte[], sk_load_byte_msh[]; 51 + #define DECLARE_LOAD_FUNC(func) \ 52 + extern u8 func[], func##_negative_offset[], func##_positive_offset[] 53 + 54 + DECLARE_LOAD_FUNC(sk_load_word); 55 + DECLARE_LOAD_FUNC(sk_load_half); 56 + DECLARE_LOAD_FUNC(sk_load_byte); 57 + DECLARE_LOAD_FUNC(sk_load_byte_msh); 52 58 53 59 #define FUNCTION_DESCR_SIZE 24 54 60
+95 -13
arch/powerpc/net/bpf_jit_64.S
··· 31 31 * then branch directly to slow_path_XXX if required. (In fact, could 32 32 * load a spare GPR with the address of slow_path_generic and pass size 33 33 * as an argument, making the call site a mtlr, li and bllr.) 34 - * 35 - * Technically, the "is addr < 0" check is unnecessary & slowing down 36 - * the ABS path, as it's statically checked on generation. 37 34 */ 38 35 .globl sk_load_word 39 36 sk_load_word: 40 37 cmpdi r_addr, 0 41 - blt bpf_error 38 + blt bpf_slow_path_word_neg 39 + .globl sk_load_word_positive_offset 40 + sk_load_word_positive_offset: 42 41 /* Are we accessing past headlen? */ 43 42 subi r_scratch1, r_HL, 4 44 43 cmpd r_scratch1, r_addr ··· 50 51 .globl sk_load_half 51 52 sk_load_half: 52 53 cmpdi r_addr, 0 53 - blt bpf_error 54 + blt bpf_slow_path_half_neg 55 + .globl sk_load_half_positive_offset 56 + sk_load_half_positive_offset: 54 57 subi r_scratch1, r_HL, 2 55 58 cmpd r_scratch1, r_addr 56 59 blt bpf_slow_path_half ··· 62 61 .globl sk_load_byte 63 62 sk_load_byte: 64 63 cmpdi r_addr, 0 65 - blt bpf_error 64 + blt bpf_slow_path_byte_neg 65 + .globl sk_load_byte_positive_offset 66 + sk_load_byte_positive_offset: 66 67 cmpd r_HL, r_addr 67 68 ble bpf_slow_path_byte 68 69 lbzx r_A, r_D, r_addr ··· 72 69 73 70 /* 74 71 * BPF_S_LDX_B_MSH: ldxb 4*([offset]&0xf) 75 - * r_addr is the offset value, already known positive 72 + * r_addr is the offset value 76 73 */ 77 74 .globl sk_load_byte_msh 78 75 sk_load_byte_msh: 76 + cmpdi r_addr, 0 77 + blt bpf_slow_path_byte_msh_neg 78 + .globl sk_load_byte_msh_positive_offset 79 + sk_load_byte_msh_positive_offset: 79 80 cmpd r_HL, r_addr 80 81 ble bpf_slow_path_byte_msh 81 82 lbzx r_X, r_D, r_addr 82 83 rlwinm r_X, r_X, 2, 32-4-2, 31-2 83 - blr 84 - 85 - bpf_error: 86 - /* Entered with cr0 = lt */ 87 - li r3, 0 88 - /* Generated code will 'blt epilogue', returning 0. */
89 84 blr 90 85 91 86 /* Call out to skb_copy_bits: ··· 136 135 bpf_slow_path_common(1) 137 136 lbz r_X, BPF_PPC_STACK_BASIC+(2*8)(r1) 138 137 rlwinm r_X, r_X, 2, 32-4-2, 31-2 138 + blr 139 + 140 + /* Call out to bpf_internal_load_pointer_neg_helper: 141 + * We'll need to back up our volatile regs first; we have 142 + * local variable space at r1+(BPF_PPC_STACK_BASIC). 143 + * Allocate a new stack frame here to remain ABI-compliant in 144 + * stashing LR. 145 + */ 146 + #define sk_negative_common(SIZE) \ 147 + mflr r0; \ 148 + std r0, 16(r1); \ 149 + /* R3 goes in parameter space of caller's frame */ \ 150 + std r_skb, (BPF_PPC_STACKFRAME+48)(r1); \ 151 + std r_A, (BPF_PPC_STACK_BASIC+(0*8))(r1); \ 152 + std r_X, (BPF_PPC_STACK_BASIC+(1*8))(r1); \ 153 + stdu r1, -BPF_PPC_SLOWPATH_FRAME(r1); \ 154 + /* R3 = r_skb, as passed */ \ 155 + mr r4, r_addr; \ 156 + li r5, SIZE; \ 157 + bl bpf_internal_load_pointer_neg_helper; \ 158 + /* R3 != 0 on success */ \ 159 + addi r1, r1, BPF_PPC_SLOWPATH_FRAME; \ 160 + ld r0, 16(r1); \ 161 + ld r_A, (BPF_PPC_STACK_BASIC+(0*8))(r1); \ 162 + ld r_X, (BPF_PPC_STACK_BASIC+(1*8))(r1); \ 163 + mtlr r0; \ 164 + cmpldi r3, 0; \ 165 + beq bpf_error_slow; /* cr0 = EQ */ \ 166 + mr r_addr, r3; \ 167 + ld r_skb, (BPF_PPC_STACKFRAME+48)(r1); \ 168 + /* Great success! */
169 + 170 + bpf_slow_path_word_neg: 171 + lis r_scratch1,-32 /* SKF_LL_OFF */ 172 + cmpd r_addr, r_scratch1 /* addr < SKF_* */ 173 + blt bpf_error /* cr0 = LT */ 174 + .globl sk_load_word_negative_offset 175 + sk_load_word_negative_offset: 176 + sk_negative_common(4) 177 + lwz r_A, 0(r_addr) 178 + blr 179 + 180 + bpf_slow_path_half_neg: 181 + lis r_scratch1,-32 /* SKF_LL_OFF */ 182 + cmpd r_addr, r_scratch1 /* addr < SKF_* */ 183 + blt bpf_error /* cr0 = LT */ 184 + .globl sk_load_half_negative_offset 185 + sk_load_half_negative_offset: 186 + sk_negative_common(2) 187 + lhz r_A, 0(r_addr) 188 + blr 189 + 190 + bpf_slow_path_byte_neg: 191 + lis r_scratch1,-32 /* SKF_LL_OFF */ 192 + cmpd r_addr, r_scratch1 /* addr < SKF_* */ 193 + blt bpf_error /* cr0 = LT */ 194 + .globl sk_load_byte_negative_offset 195 + sk_load_byte_negative_offset: 196 + sk_negative_common(1) 197 + lbz r_A, 0(r_addr) 198 + blr 199 + 200 + bpf_slow_path_byte_msh_neg: 201 + lis r_scratch1,-32 /* SKF_LL_OFF */ 202 + cmpd r_addr, r_scratch1 /* addr < SKF_* */ 203 + blt bpf_error /* cr0 = LT */ 204 + .globl sk_load_byte_msh_negative_offset 205 + sk_load_byte_msh_negative_offset: 206 + sk_negative_common(1) 207 + lbz r_X, 0(r_addr) 208 + rlwinm r_X, r_X, 2, 32-4-2, 31-2 209 + blr 210 + 211 + bpf_error_slow: 212 + /* fabricate a cr0 = lt */ 213 + li r_scratch1, -1 214 + cmpdi r_scratch1, 0 215 + bpf_error: 216 + /* Entered with cr0 = lt */ 217 + li r3, 0 218 + /* Generated code will 'blt epilogue', returning 0. */ 139 219 blr
+9 -17
arch/powerpc/net/bpf_jit_comp.c
··· 127 127 PPC_BLR(); 128 128 } 129 129 130 + #define CHOOSE_LOAD_FUNC(K, func) \ 131 + ((int)K < 0 ? ((int)K >= SKF_LL_OFF ? func##_negative_offset : func) : func##_positive_offset) 132 + 130 133 /* Assemble the body code between the prologue & epilogue. */ 131 134 static int bpf_jit_build_body(struct sk_filter *fp, u32 *image, 132 135 struct codegen_context *ctx, ··· 394 391 395 392 /*** Absolute loads from packet header/data ***/ 396 393 case BPF_S_LD_W_ABS: 397 - func = sk_load_word; 394 + func = CHOOSE_LOAD_FUNC(K, sk_load_word); 398 395 goto common_load; 399 396 case BPF_S_LD_H_ABS: 400 - func = sk_load_half; 397 + func = CHOOSE_LOAD_FUNC(K, sk_load_half); 401 398 goto common_load; 402 399 case BPF_S_LD_B_ABS: 403 - func = sk_load_byte; 400 + func = CHOOSE_LOAD_FUNC(K, sk_load_byte); 404 401 common_load: 405 - /* 406 - * Load from [K]. Reference with the (negative) 407 - * SKF_NET_OFF/SKF_LL_OFF offsets is unsupported. 408 - */ 402 + /* Load from [K]. */ 409 403 ctx->seen |= SEEN_DATAREF; 410 - if ((int)K < 0) 411 - return -ENOTSUPP; 412 404 PPC_LI64(r_scratch1, func); 413 405 PPC_MTLR(r_scratch1); 414 406 PPC_LI32(r_addr, K); ··· 427 429 common_load_ind: 428 430 /* 429 431 * Load from [X + K]. Negative offsets are tested for 430 - * in the helper functions, and result in a 'ret 0'. 432 + * in the helper functions. 431 433 */ 432 434 ctx->seen |= SEEN_DATAREF | SEEN_XREG; 433 435 PPC_LI64(r_scratch1, func); ··· 441 443 break; 442 444 443 445 case BPF_S_LDX_B_MSH: 444 - /* 445 - * x86 version drops packet (RET 0) when K<0, whereas 446 - * interpreter does allow K<0 (__load_pointer, special 447 - * ancillary data). common_load returns ENOTSUPP if K<0, 448 - * so we fall back to interpreter & filter works. 449 - */ 450 - func = sk_load_byte_msh; 446 + func = CHOOSE_LOAD_FUNC(K, sk_load_byte_msh); 451 447 goto common_load; 452 448 break; 453 449
+3 -5
arch/powerpc/platforms/cell/axon_msi.c
··· 114 114 pr_devel("axon_msi: woff %x roff %x msi %x\n", 115 115 write_offset, msic->read_offset, msi); 116 116 117 - if (msi < NR_IRQS && irq_get_chip_data(msi) == msic) { 117 + if (msi < nr_irqs && irq_get_chip_data(msi) == msic) { 118 118 generic_handle_irq(msi); 119 119 msic->fifo_virt[idx] = cpu_to_le32(0xffffffff); 120 120 } else { ··· 276 276 if (rc) 277 277 return rc; 278 278 279 - /* We rely on being able to stash a virq in a u16 */ 280 - BUILD_BUG_ON(NR_IRQS > 65536); 281 - 282 279 list_for_each_entry(entry, &dev->msi_list, list) { 283 280 virq = irq_create_direct_mapping(msic->irq_domain); 284 281 if (virq == NO_IRQ) { ··· 389 392 } 390 393 memset(msic->fifo_virt, 0xff, MSIC_FIFO_SIZE_BYTES); 391 394 392 - msic->irq_domain = irq_domain_add_nomap(dn, 0, &msic_host_ops, msic); 395 + /* We rely on being able to stash a virq in a u16, so limit irqs to < 65536 */ 396 + msic->irq_domain = irq_domain_add_nomap(dn, 65536, &msic_host_ops, msic); 393 397 if (!msic->irq_domain) { 394 398 printk(KERN_ERR "axon_msi: couldn't allocate irq_domain for %s\n", 395 399 dn->full_name);
+1 -1
arch/powerpc/platforms/cell/beat_interrupt.c
··· 248 248 { 249 249 int i; 250 250 251 - for (i = 1; i < NR_IRQS; i++) 251 + for (i = 1; i < nr_irqs; i++) 252 252 beat_destruct_irq_plug(i); 253 253 }
+3 -3
arch/powerpc/platforms/powermac/pic.c
··· 57 57 58 58 static DEFINE_RAW_SPINLOCK(pmac_pic_lock); 59 59 60 - #define NR_MASK_WORDS ((NR_IRQS + 31) / 32) 61 - static unsigned long ppc_lost_interrupts[NR_MASK_WORDS]; 62 - static unsigned long ppc_cached_irq_mask[NR_MASK_WORDS]; 60 + /* The max irq number this driver deals with is 128; see max_irqs */ 61 + static DECLARE_BITMAP(ppc_lost_interrupts, 128); 62 + static DECLARE_BITMAP(ppc_cached_irq_mask, 128); 63 63 static int pmac_irq_cascade = -1; 64 64 static struct irq_domain *pmac_pic_host; 65 65
+2 -2
arch/powerpc/platforms/pseries/Kconfig
··· 30 30 two or more partitions. 31 31 32 32 config EEH 33 - bool "PCI Extended Error Handling (EEH)" if EXPERT 33 + bool 34 34 depends on PPC_PSERIES && PCI 35 - default y if !EXPERT 35 + default y 36 36 37 37 config PSERIES_MSI 38 38 bool
+1 -2
arch/powerpc/sysdev/cpm2_pic.c
··· 51 51 static intctl_cpm2_t __iomem *cpm2_intctl; 52 52 53 53 static struct irq_domain *cpm2_pic_host; 54 - #define NR_MASK_WORDS ((NR_IRQS + 31) / 32) 55 - static unsigned long ppc_cached_irq_mask[NR_MASK_WORDS]; 54 + static unsigned long ppc_cached_irq_mask[2]; /* 2 32-bit registers */ 56 55 57 56 static const u_char irq_to_siureg[] = { 58 57 1, 1, 1, 1, 1, 1, 1, 1,
+20 -41
arch/powerpc/sysdev/mpc8xx_pic.c
··· 18 18 extern int cpm_get_irq(struct pt_regs *regs); 19 19 20 20 static struct irq_domain *mpc8xx_pic_host; 21 - #define NR_MASK_WORDS ((NR_IRQS + 31) / 32) 22 - static unsigned long ppc_cached_irq_mask[NR_MASK_WORDS]; 21 + static unsigned long mpc8xx_cached_irq_mask; 23 22 static sysconf8xx_t __iomem *siu_reg; 24 23 25 - int cpm_get_irq(struct pt_regs *regs); 24 + static inline unsigned long mpc8xx_irqd_to_bit(struct irq_data *d) 25 + { 26 + return 0x80000000 >> irqd_to_hwirq(d); 27 + } 26 28 27 29 static void mpc8xx_unmask_irq(struct irq_data *d) 28 30 { 29 - int bit, word; 30 - unsigned int irq_nr = (unsigned int)irqd_to_hwirq(d); 31 - 32 - bit = irq_nr & 0x1f; 33 - word = irq_nr >> 5; 34 - 35 - ppc_cached_irq_mask[word] |= (1 << (31-bit)); 36 - out_be32(&siu_reg->sc_simask, ppc_cached_irq_mask[word]); 31 + mpc8xx_cached_irq_mask |= mpc8xx_irqd_to_bit(d); 32 + out_be32(&siu_reg->sc_simask, mpc8xx_cached_irq_mask); 37 33 } 38 34 39 35 static void mpc8xx_mask_irq(struct irq_data *d) 40 36 { 41 - int bit, word; 42 - unsigned int irq_nr = (unsigned int)irqd_to_hwirq(d); 43 - 44 - bit = irq_nr & 0x1f; 45 - word = irq_nr >> 5; 46 - 47 - ppc_cached_irq_mask[word] &= ~(1 << (31-bit)); 48 - out_be32(&siu_reg->sc_simask, ppc_cached_irq_mask[word]); 37 + mpc8xx_cached_irq_mask &= ~mpc8xx_irqd_to_bit(d); 38 + out_be32(&siu_reg->sc_simask, mpc8xx_cached_irq_mask); 49 39 } 50 40 51 41 static void mpc8xx_ack(struct irq_data *d) 52 42 { 53 - int bit; 54 - unsigned int irq_nr = (unsigned int)irqd_to_hwirq(d); 55 - 56 - bit = irq_nr & 0x1f; 57 - out_be32(&siu_reg->sc_sipend, 1 << (31-bit)); 43 + out_be32(&siu_reg->sc_sipend, mpc8xx_irqd_to_bit(d)); 58 44 } 59 45 60 46 static void mpc8xx_end_irq(struct irq_data *d) 61 47 { 62 - int bit, word; 63 - unsigned int irq_nr = (unsigned int)irqd_to_hwirq(d); 64 - 65 - bit = irq_nr & 0x1f; 66 - word = irq_nr >> 5; 67 - 68 - ppc_cached_irq_mask[word] |= (1 << (31-bit)); 69 - out_be32(&siu_reg->sc_simask, ppc_cached_irq_mask[word]); 48 + mpc8xx_cached_irq_mask |= mpc8xx_irqd_to_bit(d);
49 + out_be32(&siu_reg->sc_simask, mpc8xx_cached_irq_mask); 70 50 } 71 51 72 52 static int mpc8xx_set_irq_type(struct irq_data *d, unsigned int flow_type) 73 53 { 74 - if (flow_type & IRQ_TYPE_EDGE_FALLING) { 75 - irq_hw_number_t hw = (unsigned int)irqd_to_hwirq(d); 54 + /* only external IRQ senses are programmable */ 55 + if ((flow_type & IRQ_TYPE_EDGE_FALLING) && !(irqd_to_hwirq(d) & 1)) { 76 56 unsigned int siel = in_be32(&siu_reg->sc_siel); 77 - 78 - /* only external IRQ senses are programmable */ 79 - if ((hw & 1) == 0) { 80 - siel |= (0x80000000 >> hw); 81 - out_be32(&siu_reg->sc_siel, siel); 82 - __irq_set_handler_locked(d->irq, handle_edge_irq); 83 - } 57 + siel |= mpc8xx_irqd_to_bit(d); 58 + out_be32(&siu_reg->sc_siel, siel); 59 + __irq_set_handler_locked(d->irq, handle_edge_irq); 84 60 } 85 61 return 0; 86 62 } ··· 107 131 IRQ_TYPE_LEVEL_HIGH, 108 132 IRQ_TYPE_EDGE_FALLING, 109 133 }; 134 + 135 + if (intspec[0] > 0x1f) 136 + return 0; 110 137 *out_hwirq = intspec[0]; 112 139 if (intsize > 1 && intspec[1] < 4)
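The new mpc8xx_irqd_to_bit() helper collapses the old word/bit arithmetic because the SIU registers number bits from the most-significant end. A user-space sketch of the same mapping (taking a raw hwirq instead of an irq_data, and with an invented function name):

```c
#include <assert.h>

/* MSB-first bit numbering: hwirq 0 is bit 31 of the 32-bit register,
 * hwirq 31 is bit 0.  Mirrors the shape of mpc8xx_irqd_to_bit(). */
static unsigned long irq_to_siu_bit(unsigned int hwirq)
{
	return 0x80000000UL >> hwirq;
}
```

Masks for several irqs compose by OR-ing the individual bits, which is how the single cached mask word is maintained.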
+1
arch/powerpc/sysdev/scom.c
··· 22 22 #include <linux/debugfs.h> 23 23 #include <linux/slab.h> 24 24 #include <linux/export.h> 25 + #include <asm/debug.h> 25 26 #include <asm/prom.h> 26 27 #include <asm/scom.h> 27 28
+3 -4
arch/powerpc/sysdev/xics/xics-common.c
··· 188 188 { 189 189 int cpu = smp_processor_id(), hw_cpu = hard_smp_processor_id(); 190 190 unsigned int irq, virq; 191 + struct irq_desc *desc; 191 192 192 193 /* If we used to be the default server, move to the new "boot_cpuid" */ 193 194 if (hw_cpu == xics_default_server) ··· 203 202 /* Allow IPIs again... */ 204 203 icp_ops->set_priority(DEFAULT_PRIORITY); 205 204 206 - for_each_irq(virq) { 207 - struct irq_desc *desc; 205 + for_each_irq_desc(virq, desc) { 208 206 struct irq_chip *chip; 209 207 long server; 210 208 unsigned long flags; ··· 212 212 /* We can't set affinity on ISA interrupts */ 213 213 if (virq < NUM_ISA_INTERRUPTS) 214 214 continue; 215 - desc = irq_to_desc(virq); 216 215 /* We only need to migrate enabled IRQS */ 217 - if (!desc || !desc->action) 216 + if (!desc->action) 218 217 continue; 219 218 if (desc->irq_data.domain != xics_host) 220 219 continue;
+1 -1
arch/sh/include/asm/atomic.h
··· 11 11 #include <linux/types.h> 12 12 #include <asm/cmpxchg.h> 13 13 14 - #define ATOMIC_INIT(i) ( (atomic_t) { (i) } ) 14 + #define ATOMIC_INIT(i) { (i) } 15 15 16 16 #define atomic_read(v) (*(volatile int *)&(v)->counter) 17 17 #define atomic_set(v,i) ((v)->counter = (i))
+1 -1
arch/sh/mm/fault_32.c
··· 86 86 pte_t *pte_k; 87 87 88 88 /* Make sure we are in vmalloc/module/P3 area: */ 89 - if (!(address >= VMALLOC_START && address < P3_ADDR_MAX)) 89 + if (!(address >= P3SEG && address < P3_ADDR_MAX)) 90 90 return -1; 91 91 92 92 /*
+2 -2
arch/tile/include/asm/pci.h
··· 47 47 */ 48 48 #define PCI_DMA_BUS_IS_PHYS 1 49 49 50 - int __devinit tile_pci_init(void); 51 - int __devinit pcibios_init(void); 50 + int __init tile_pci_init(void); 51 + int __init pcibios_init(void); 52 52 53 53 static inline void pci_iounmap(struct pci_dev *dev, void __iomem *addr) {} 54 54
+2 -2
arch/tile/kernel/pci.c
··· 141 141 * 142 142 * Returns the number of controllers discovered. 143 143 */ 144 - int __devinit tile_pci_init(void) 144 + int __init tile_pci_init(void) 145 145 { 146 146 int i; 147 147 ··· 287 287 * The controllers have been set up by the time we get here, by a call to 288 288 * tile_pci_init. 289 289 */ 290 - int __devinit pcibios_init(void) 290 + int __init pcibios_init(void) 291 291 { 292 292 int i; 293 293
+11 -3
arch/x86/boot/compressed/head_32.S
··· 33 33 __HEAD 34 34 ENTRY(startup_32) 35 35 #ifdef CONFIG_EFI_STUB 36 + jmp preferred_addr 37 + 38 + .balign 0x10 36 39 /* 37 40 * We don't need the return address, so set up the stack so 38 41 * efi_main() can find its arguments. ··· 44 41 45 42 call efi_main 46 43 cmpl $0, %eax 47 - je preferred_addr 48 44 movl %eax, %esi 49 - call 1f 45 + jne 2f 50 46 1: 47 + /* EFI init failed, so hang. */ 48 + hlt 49 + jmp 1b 50 + 2: 51 + call 3f 52 + 3: 51 53 popl %eax 52 - subl $1b, %eax 54 + subl $3b, %eax 53 55 subl BP_pref_address(%esi), %eax 54 56 add BP_code32_start(%esi), %eax 55 57 leal preferred_addr(%eax), %eax
+16 -6
arch/x86/boot/compressed/head_64.S
··· 200 200 * entire text+data+bss and hopefully all of memory. 201 201 */ 202 202 #ifdef CONFIG_EFI_STUB 203 - pushq %rsi 203 + /* 204 + * The entry point for the PE/COFF executable is 0x210, so only 205 + * legacy boot loaders will execute this jmp. 206 + */ 207 + jmp preferred_addr 208 + 209 + .org 0x210 204 210 mov %rcx, %rdi 205 211 mov %rdx, %rsi 206 212 call efi_main 207 - popq %rsi 208 - cmpq $0,%rax 209 - je preferred_addr 210 213 movq %rax,%rsi 211 - call 1f 214 + cmpq $0,%rax 215 + jne 2f 212 216 1: 217 + /* EFI init failed, so hang. */ 218 + hlt 219 + jmp 1b 220 + 2: 221 + call 3f 222 + 3: 213 223 popq %rax 214 - subq $1b, %rax 224 + subq $3b, %rax 215 225 subq BP_pref_address(%rsi), %rax 216 226 add BP_code32_start(%esi), %eax 217 227 leaq preferred_addr(%rax), %rax
+11 -4
arch/x86/boot/tools/build.c
··· 205 205 put_unaligned_le32(file_sz, &buf[pe_header + 0x50]); 206 206 207 207 #ifdef CONFIG_X86_32 208 - /* Address of entry point */ 209 - put_unaligned_le32(i, &buf[pe_header + 0x28]); 208 + /* 209 + * Address of entry point. 210 + * 211 + * The EFI stub entry point is +16 bytes from the start of 212 + * the .text section. 213 + */ 214 + put_unaligned_le32(i + 16, &buf[pe_header + 0x28]); 210 215 211 216 /* .text size */ 212 217 put_unaligned_le32(file_sz, &buf[pe_header + 0xb0]); ··· 222 217 /* 223 218 * Address of entry point. startup_32 is at the beginning and 224 219 * the 64-bit entry point (startup_64) is always 512 bytes 225 - * after. 220 + * after. The EFI stub entry point is 16 bytes after that, as 221 + * the first instruction allows legacy loaders to jump over 222 + * the EFI stub initialisation 226 223 */ 227 - put_unaligned_le32(i + 512, &buf[pe_header + 0x28]); 224 + put_unaligned_le32(i + 528, &buf[pe_header + 0x28]); 228 225 229 226 /* .text size */ 230 227 put_unaligned_le32(file_sz, &buf[pe_header + 0xc0]);
+3 -3
arch/x86/include/asm/posix_types.h
··· 7 7 #else 8 8 # ifdef __i386__ 9 9 # include "posix_types_32.h" 10 - # elif defined(__LP64__) 11 - # include "posix_types_64.h" 12 - # else 10 + # elif defined(__ILP32__) 13 11 # include "posix_types_x32.h" 12 + # else 13 + # include "posix_types_64.h" 14 14 # endif 15 15 #endif
+1 -1
arch/x86/include/asm/sigcontext.h
··· 257 257 __u64 oldmask; 258 258 __u64 cr2; 259 259 struct _fpstate __user *fpstate; /* zero when no FPU context */ 260 - #ifndef __LP64__ 260 + #ifdef __ILP32__ 261 261 __u32 __fpstate_pad; 262 262 #endif 263 263 __u64 reserved1[8];
+7 -1
arch/x86/include/asm/siginfo.h
··· 2 2 #define _ASM_X86_SIGINFO_H 3 3 4 4 #ifdef __x86_64__ 5 - # define __ARCH_SI_PREAMBLE_SIZE (4 * sizeof(int)) 5 + # ifdef __ILP32__ /* x32 */ 6 + typedef long long __kernel_si_clock_t __attribute__((aligned(4))); 7 + # define __ARCH_SI_CLOCK_T __kernel_si_clock_t 8 + # define __ARCH_SI_ATTRIBUTES __attribute__((aligned(8))) 9 + # else /* x86-64 */ 10 + # define __ARCH_SI_PREAMBLE_SIZE (4 * sizeof(int)) 11 + # endif 6 12 #endif 7 13 8 14 #include <asm-generic/siginfo.h>
+3 -3
arch/x86/include/asm/unistd.h
··· 63 63 #else 64 64 # ifdef __i386__ 65 65 # include <asm/unistd_32.h> 66 - # elif defined(__LP64__) 67 - # include <asm/unistd_64.h> 68 - # else 66 + # elif defined(__ILP32__) 69 67 # include <asm/unistd_x32.h> 68 + # else 69 + # include <asm/unistd_64.h> 70 70 # endif 71 71 #endif 72 72
-1
arch/x86/include/asm/x86_init.h
··· 195 195 196 196 extern void x86_init_noop(void); 197 197 extern void x86_init_uint_noop(unsigned int unused); 198 - extern void x86_default_fixup_cpu_id(struct cpuinfo_x86 *c, int node); 199 198 200 199 #endif
+4
arch/x86/kernel/acpi/sleep.c
··· 24 24 static char temp_stack[4096]; 25 25 #endif 26 26 27 + asmlinkage void acpi_enter_s3(void) 28 + { 29 + acpi_enter_sleep_state(3, wake_sleep_flags); 30 + } 27 31 /** 28 32 * acpi_suspend_lowlevel - save kernel state 29 33 *
+4
arch/x86/kernel/acpi/sleep.h
··· 3 3 */ 4 4 5 5 #include <asm/trampoline.h> 6 + #include <linux/linkage.h> 6 7 7 8 extern unsigned long saved_video_mode; 8 9 extern long saved_magic; 9 10 10 11 extern int wakeup_pmode_return; 12 + 13 + extern u8 wake_sleep_flags; 14 + extern asmlinkage void acpi_enter_s3(void); 11 15 12 16 extern unsigned long acpi_copy_wakeup_routine(unsigned long); 13 17 extern void wakeup_long64(void);
+1 -3
arch/x86/kernel/acpi/wakeup_32.S
··· 74 74 ENTRY(do_suspend_lowlevel) 75 75 call save_processor_state 76 76 call save_registers 77 - pushl $3 78 - call acpi_enter_sleep_state 79 - addl $4, %esp 77 + call acpi_enter_s3 80 78 81 79 # In case of S3 failure, we'll emerge here. Jump 82 80 # to ret_point to recover
+1 -3
arch/x86/kernel/acpi/wakeup_64.S
··· 71 71 movq %rsi, saved_rsi 72 72 73 73 addq $8, %rsp 74 - movl $3, %edi 75 - xorl %eax, %eax 76 - call acpi_enter_sleep_state 74 + call acpi_enter_s3 77 75 /* in case something went wrong, restore the machine status and go on */ 78 76 jmp resume_point 79 77
+20 -14
arch/x86/kernel/apic/apic.c
··· 1637 1637 mp_lapic_addr = APIC_DEFAULT_PHYS_BASE; 1638 1638 1639 1639 /* The BIOS may have set up the APIC at some other address */ 1640 - rdmsr(MSR_IA32_APICBASE, l, h); 1641 - if (l & MSR_IA32_APICBASE_ENABLE) 1642 - mp_lapic_addr = l & MSR_IA32_APICBASE_BASE; 1640 + if (boot_cpu_data.x86 >= 6) { 1641 + rdmsr(MSR_IA32_APICBASE, l, h); 1642 + if (l & MSR_IA32_APICBASE_ENABLE) 1643 + mp_lapic_addr = l & MSR_IA32_APICBASE_BASE; 1644 + } 1643 1645 1644 1646 pr_info("Found and enabled local APIC!\n"); 1645 1647 return 0; ··· 1659 1657 * MSR. This can only be done in software for Intel P6 or later 1660 1658 * and AMD K7 (Model > 1) or later. 1661 1659 */ 1662 - rdmsr(MSR_IA32_APICBASE, l, h); 1663 - if (!(l & MSR_IA32_APICBASE_ENABLE)) { 1664 - pr_info("Local APIC disabled by BIOS -- reenabling.\n"); 1665 - l &= ~MSR_IA32_APICBASE_BASE; 1666 - l |= MSR_IA32_APICBASE_ENABLE | addr; 1667 - wrmsr(MSR_IA32_APICBASE, l, h); 1668 - enabled_via_apicbase = 1; 1660 + if (boot_cpu_data.x86 >= 6) { 1661 + rdmsr(MSR_IA32_APICBASE, l, h); 1662 + if (!(l & MSR_IA32_APICBASE_ENABLE)) { 1663 + pr_info("Local APIC disabled by BIOS -- reenabling.\n"); 1664 + l &= ~MSR_IA32_APICBASE_BASE; 1665 + l |= MSR_IA32_APICBASE_ENABLE | addr; 1666 + wrmsr(MSR_IA32_APICBASE, l, h); 1667 + enabled_via_apicbase = 1; 1668 + } 1669 1669 } 1670 1670 return apic_verify(); 1671 1671 } ··· 2213 2209 * FIXME! This will be wrong if we ever support suspend on 2214 2210 * SMP! We'll need to do this as part of the CPU restore! 2215 2211 */ 2216 - rdmsr(MSR_IA32_APICBASE, l, h); 2217 - l &= ~MSR_IA32_APICBASE_BASE; 2218 - l |= MSR_IA32_APICBASE_ENABLE | mp_lapic_addr; 2219 - wrmsr(MSR_IA32_APICBASE, l, h); 2212 + if (boot_cpu_data.x86 >= 6) { 2213 + rdmsr(MSR_IA32_APICBASE, l, h); 2214 + l &= ~MSR_IA32_APICBASE_BASE; 2215 + l |= MSR_IA32_APICBASE_ENABLE | mp_lapic_addr; 2216 + wrmsr(MSR_IA32_APICBASE, l, h); 2217 + } 2220 2218 } 2221 2219 2222 2220 maxlvt = lapic_get_maxlvt();
+5 -2
arch/x86/kernel/apic/apic_numachip.c
··· 207 207 208 208 static void fixup_cpu_id(struct cpuinfo_x86 *c, int node) 209 209 { 210 - c->phys_proc_id = node; 211 - per_cpu(cpu_llc_id, smp_processor_id()) = node; 210 + 211 + if (c->phys_proc_id != node) { 212 + c->phys_proc_id = node; 213 + per_cpu(cpu_llc_id, smp_processor_id()) = node; 214 + } 212 215 } 213 216 214 217 static int __init numachip_system_init(void)
+6
arch/x86/kernel/apic/x2apic_phys.c
··· 24 24 { 25 25 if (x2apic_phys) 26 26 return x2apic_enabled(); 27 + else if ((acpi_gbl_FADT.header.revision >= FADT2_REVISION_ID) && 28 + (acpi_gbl_FADT.flags & ACPI_FADT_APIC_PHYSICAL) && 29 + x2apic_enabled()) { 30 + printk(KERN_DEBUG "System requires x2apic physical mode\n"); 31 + return 1; 32 + } 27 33 else 28 34 return 0; 29 35 }
+6 -5
arch/x86/kernel/cpu/amd.c
··· 26 26 * contact AMD for precise details and a CPU swap. 27 27 * 28 28 * See http://www.multimania.com/poulot/k6bug.html 29 - * http://www.amd.com/K6/k6docs/revgd.html 29 + * and section 2.6.2 of "AMD-K6 Processor Revision Guide - Model 6" 30 + * (Publication # 21266 Issue Date: August 1998) 30 31 * 31 32 * The following test is erm.. interesting. AMD neglected to up 32 33 * the chip setting when fixing the bug but they also tweaked some ··· 95 94 "system stability may be impaired when more than 32 MB are used.\n"); 96 95 else 97 96 printk(KERN_CONT "probably OK (after B9730xxxx).\n"); 98 - printk(KERN_INFO "Please see http://membres.lycos.fr/poulot/k6bug.html\n"); 99 97 } 100 98 101 99 /* K6 with old style WHCR */ ··· 353 353 node = per_cpu(cpu_llc_id, cpu); 354 354 355 355 /* 356 - * If core numbers are inconsistent, it's likely a multi-fabric platform, 357 - * so invoke platform-specific handler 356 + * On multi-fabric platform (e.g. Numascale NumaChip) a 357 + * platform-specific handler needs to be called to fixup some 358 + * IDs of the CPU. 358 359 */ 359 - if (c->phys_proc_id != node) 360 + if (x86_cpuinit.fixup_cpu_id) 360 361 x86_cpuinit.fixup_cpu_id(c, node); 361 362 362 363 if (!node_online(node)) {
-9
arch/x86/kernel/cpu/common.c
··· 1163 1163 #endif /* ! CONFIG_KGDB */ 1164 1164 1165 1165 /* 1166 - * Prints an error where the NUMA and configured core-number mismatch and the 1167 - * platform didn't override this to fix it up 1168 - */ 1169 - void __cpuinit x86_default_fixup_cpu_id(struct cpuinfo_x86 *c, int node) 1170 - { 1171 - pr_err("NUMA core number %d differs from configured core number %d\n", node, c->phys_proc_id); 1172 - } 1173 - 1174 - /* 1175 1166 * cpu_init() initializes state that is per-CPU. Some data is already 1176 1167 * initialized (naturally) in the bootstrap process, such as the GDT 1177 1168 * and IDT. We reload them nevertheless, this function acts as a
+4 -4
arch/x86/kernel/cpu/intel_cacheinfo.c
··· 433 433 /* check if @slot is already used or the index is already disabled */ 434 434 ret = amd_get_l3_disable_slot(nb, slot); 435 435 if (ret >= 0) 436 - return -EINVAL; 436 + return -EEXIST; 437 437 438 438 if (index > nb->l3_cache.indices) 439 439 return -EINVAL; 440 440 441 441 /* check whether the other slot has disabled the same index already */ 442 442 if (index == amd_get_l3_disable_slot(nb, !slot)) 443 - return -EINVAL; 443 + return -EEXIST; 444 444 445 445 amd_l3_disable_index(nb, cpu, slot, index); 446 446 ··· 468 468 err = amd_set_l3_disable_slot(this_leaf->base.nb, cpu, slot, val); 469 469 if (err) { 470 470 if (err == -EEXIST) 471 - printk(KERN_WARNING "L3 disable slot %d in use!\n", 472 - slot); 471 + pr_warning("L3 slot %d in use/index already disabled!\n", 472 + slot); 473 473 return err; 474 474 } 475 475 return count;
+1
arch/x86/kernel/i387.c
··· 235 235 if (tsk_used_math(tsk)) { 236 236 if (HAVE_HWFP && tsk == current) 237 237 unlazy_fpu(tsk); 238 + tsk->thread.fpu.last_cpu = ~0; 238 239 return 0; 239 240 } 240 241
+7 -5
arch/x86/kernel/microcode_amd.c
··· 82 82 { 83 83 struct cpuinfo_x86 *c = &cpu_data(cpu); 84 84 85 - if (c->x86_vendor != X86_VENDOR_AMD || c->x86 < 0x10) { 86 - pr_warning("CPU%d: family %d not supported\n", cpu, c->x86); 87 - return -1; 88 - } 89 - 90 85 csig->rev = c->microcode; 91 86 pr_info("CPU%d: patch_level=0x%08x\n", cpu, csig->rev); 92 87 ··· 375 380 376 381 struct microcode_ops * __init init_amd_microcode(void) 377 382 { 383 + struct cpuinfo_x86 *c = &cpu_data(0); 384 + 385 + if (c->x86_vendor != X86_VENDOR_AMD || c->x86 < 0x10) { 386 + pr_warning("AMD CPU family 0x%x not supported\n", c->x86); 387 + return NULL; 388 + } 389 + 378 390 patch = (void *)get_zeroed_page(GFP_KERNEL); 379 391 if (!patch) 380 392 return NULL;
+4 -6
arch/x86/kernel/microcode_core.c
··· 419 419 if (err) 420 420 return err; 421 421 422 - if (microcode_init_cpu(cpu) == UCODE_ERROR) { 423 - sysfs_remove_group(&dev->kobj, &mc_attr_group); 422 + if (microcode_init_cpu(cpu) == UCODE_ERROR) 424 423 return -EINVAL; 425 - } 426 424 427 425 return err; 428 426 } ··· 526 528 microcode_ops = init_intel_microcode(); 527 529 else if (c->x86_vendor == X86_VENDOR_AMD) 528 530 microcode_ops = init_amd_microcode(); 529 - 530 - if (!microcode_ops) { 531 + else 531 532 pr_err("no support for this CPU vendor\n"); 533 + 534 + if (!microcode_ops) 532 535 return -ENODEV; 533 - } 534 536 535 537 microcode_pdev = platform_device_register_simple("microcode", -1, 536 538 NULL, 0);
-1
arch/x86/kernel/x86_init.c
··· 93 93 struct x86_cpuinit_ops x86_cpuinit __cpuinitdata = { 94 94 .early_percpu_clock_init = x86_init_noop, 95 95 .setup_percpu_clockev = setup_secondary_APIC_clock, 96 - .fixup_cpu_id = x86_default_fixup_cpu_id, 97 96 }; 98 97 99 98 static void default_nmi_init(void) { };
+2 -2
arch/x86/platform/mrst/mrst.c
··· 805 805 } else 806 806 i2c_register_board_info(i2c_bus[i], i2c_devs[i], 1); 807 807 } 808 - intel_scu_notifier_post(SCU_AVAILABLE, 0L); 808 + intel_scu_notifier_post(SCU_AVAILABLE, NULL); 809 809 } 810 810 EXPORT_SYMBOL_GPL(intel_scu_devices_create); 811 811 ··· 814 814 { 815 815 int i; 816 816 817 - intel_scu_notifier_post(SCU_DOWN, 0L); 817 + intel_scu_notifier_post(SCU_DOWN, NULL); 818 818 819 819 for (i = 0; i < ipc_next_dev; i++) 820 820 platform_device_del(ipc_devs[i]);
+2 -2
arch/x86/xen/enlighten.c
··· 261 261 262 262 static bool __init xen_check_mwait(void) 263 263 { 264 - #ifdef CONFIG_ACPI 264 + #if defined(CONFIG_ACPI) && !defined(CONFIG_ACPI_PROCESSOR_AGGREGATOR) && \ 265 + !defined(CONFIG_ACPI_PROCESSOR_AGGREGATOR_MODULE) 265 266 struct xen_platform_op op = { 266 267 .cmd = XENPF_set_processor_pminfo, 267 268 .u.set_pminfo.id = -1, ··· 350 349 /* Xen will set CR4.OSXSAVE if supported and not disabled by force */ 351 350 if ((cx & xsave_mask) != xsave_mask) 352 351 cpuid_leaf1_ecx_mask &= ~xsave_mask; /* disable XSAVE & OSXSAVE */ 353 - 354 352 if (xen_check_mwait()) 355 353 cpuid_leaf1_ecx_set_mask = (1 << (X86_FEATURE_MWAIT % 32)); 356 354 }
+15
arch/x86/xen/smp.c
··· 178 178 static void __init xen_filter_cpu_maps(void) 179 179 { 180 180 int i, rc; 181 + unsigned int subtract = 0; 181 182 182 183 if (!xen_initial_domain()) 183 184 return; ··· 193 192 } else { 194 193 set_cpu_possible(i, false); 195 194 set_cpu_present(i, false); 195 + subtract++; 196 196 } 197 197 } 198 + #ifdef CONFIG_HOTPLUG_CPU 199 + /* This is akin to using 'nr_cpus' on the Linux command line. 200 + * Which is OK as when we use 'dom0_max_vcpus=X' we can only 201 + * have up to X, while nr_cpu_ids is greater than X. This 202 + * normally is not a problem, except when CPU hotplugging 203 + * is involved and then there might be more than X CPUs 204 + * in the guest - which will not work as there is no 205 + * hypercall to expand the max number of VCPUs an already 206 + * running guest has. So cap it up to X. */ 207 + if (subtract) 208 + nr_cpu_ids = nr_cpu_ids - subtract; 209 + #endif 210 + 198 211 } 199 212 200 213 static void __init xen_smp_prepare_boot_cpu(void)
+1 -1
arch/x86/xen/xen-asm.S
··· 96 96 97 97 /* check for unmasked and pending */ 98 98 cmpw $0x0001, PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_pending 99 - jz 1f 99 + jnz 1f 100 100 2: call check_events 101 101 1: 102 102 ENDPATCH(xen_restore_fl_direct)
-3
arch/xtensa/include/asm/hardirq.h
··· 11 11 #ifndef _XTENSA_HARDIRQ_H 12 12 #define _XTENSA_HARDIRQ_H 13 13 14 - void ack_bad_irq(unsigned int irq); 15 - #define ack_bad_irq ack_bad_irq 16 - 17 14 #include <asm-generic/hardirq.h> 18 15 19 16 #endif /* _XTENSA_HARDIRQ_H */
+1
arch/xtensa/include/asm/io.h
··· 14 14 #ifdef __KERNEL__ 15 15 #include <asm/byteorder.h> 16 16 #include <asm/page.h> 17 + #include <linux/bug.h> 17 18 #include <linux/kernel.h> 18 19 19 20 #include <linux/types.h>
+1
arch/xtensa/kernel/signal.c
··· 496 496 signr = get_signal_to_deliver(&info, &ka, regs, NULL); 497 497 498 498 if (signr > 0) { 499 + int ret; 499 500 500 501 /* Are we from a system call? */ 501 502
+30 -26
drivers/acpi/sleep.c
··· 28 28 #include "internal.h" 29 29 #include "sleep.h" 30 30 31 + u8 wake_sleep_flags = ACPI_NO_OPTIONAL_METHODS; 31 32 static unsigned int gts, bfs; 32 - module_param(gts, uint, 0644); 33 - module_param(bfs, uint, 0644); 33 + static int set_param_wake_flag(const char *val, struct kernel_param *kp) 34 + { 35 + int ret = param_set_int(val, kp); 36 + 37 + if (ret) 38 + return ret; 39 + 40 + if (kp->arg == (const char *)&gts) { 41 + if (gts) 42 + wake_sleep_flags |= ACPI_EXECUTE_GTS; 43 + else 44 + wake_sleep_flags &= ~ACPI_EXECUTE_GTS; 45 + } 46 + if (kp->arg == (const char *)&bfs) { 47 + if (bfs) 48 + wake_sleep_flags |= ACPI_EXECUTE_BFS; 49 + else 50 + wake_sleep_flags &= ~ACPI_EXECUTE_BFS; 51 + } 52 + return ret; 53 + } 54 + module_param_call(gts, set_param_wake_flag, param_get_int, &gts, 0644); 55 + module_param_call(bfs, set_param_wake_flag, param_get_int, &bfs, 0644); 34 56 MODULE_PARM_DESC(gts, "Enable evaluation of _GTS on suspend."); 35 57 MODULE_PARM_DESC(bfs, "Enable evaluation of _BFS on resume".); 36 - 37 - static u8 wake_sleep_flags(void) 38 - { 39 - u8 flags = ACPI_NO_OPTIONAL_METHODS; 40 - 41 - if (gts) 42 - flags |= ACPI_EXECUTE_GTS; 43 - if (bfs) 44 - flags |= ACPI_EXECUTE_BFS; 45 - 46 - return flags; 47 - } 48 58 49 59 static u8 sleep_states[ACPI_S_STATE_COUNT]; 50 60 ··· 273 263 { 274 264 acpi_status status = AE_OK; 275 265 u32 acpi_state = acpi_target_sleep_state; 276 - u8 flags = wake_sleep_flags(); 277 266 int error; 278 267 279 268 ACPI_FLUSH_CPU_CACHE(); ··· 280 271 switch (acpi_state) { 281 272 case ACPI_STATE_S1: 282 273 barrier(); 283 - status = acpi_enter_sleep_state(acpi_state, flags); 274 + status = acpi_enter_sleep_state(acpi_state, wake_sleep_flags); 284 275 break; 285 276 286 277 case ACPI_STATE_S3: ··· 295 286 acpi_write_bit_register(ACPI_BITREG_SCI_ENABLE, 1); 296 287 297 288 /* Reprogram control registers and execute _BFS */ 298 - acpi_leave_sleep_state_prep(acpi_state, flags); 289 + acpi_leave_sleep_state_prep(acpi_state, 
wake_sleep_flags); 299 290 300 291 /* ACPI 3.0 specs (P62) says that it's the responsibility 301 292 * of the OSPM to clear the status bit [ implying that the ··· 559 550 560 551 static int acpi_hibernation_enter(void) 561 552 { 562 - u8 flags = wake_sleep_flags(); 563 553 acpi_status status = AE_OK; 564 554 565 555 ACPI_FLUSH_CPU_CACHE(); 566 556 567 557 /* This shouldn't return. If it returns, we have a problem */ 568 - status = acpi_enter_sleep_state(ACPI_STATE_S4, flags); 558 + status = acpi_enter_sleep_state(ACPI_STATE_S4, wake_sleep_flags); 569 559 /* Reprogram control registers and execute _BFS */ 570 - acpi_leave_sleep_state_prep(ACPI_STATE_S4, flags); 560 + acpi_leave_sleep_state_prep(ACPI_STATE_S4, wake_sleep_flags); 571 561 572 562 return ACPI_SUCCESS(status) ? 0 : -EFAULT; 573 563 } 574 564 575 565 static void acpi_hibernation_leave(void) 576 566 { 577 - u8 flags = wake_sleep_flags(); 578 - 579 567 /* 580 568 * If ACPI is not enabled by the BIOS and the boot kernel, we need to 581 569 * enable it here. 582 570 */ 583 571 acpi_enable(); 584 572 /* Reprogram control registers and execute _BFS */ 585 - acpi_leave_sleep_state_prep(ACPI_STATE_S4, flags); 573 + acpi_leave_sleep_state_prep(ACPI_STATE_S4, wake_sleep_flags); 586 574 /* Check the hardware signature */ 587 575 if (facs && s4_hardware_signature != facs->hardware_signature) { 588 576 printk(KERN_EMERG "ACPI: Hardware changed while hibernated, " ··· 834 828 835 829 static void acpi_power_off(void) 836 830 { 837 - u8 flags = wake_sleep_flags(); 838 - 839 831 /* acpi_sleep_prepare(ACPI_STATE_S5) should have already been called */ 840 832 printk(KERN_DEBUG "%s called\n", __func__); 841 833 local_irq_disable(); 842 - acpi_enter_sleep_state(ACPI_STATE_S5, flags); 834 + acpi_enter_sleep_state(ACPI_STATE_S5, wake_sleep_flags); 843 835 } 844 836 845 837 /*
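Reviewer note: the sleep.c change replaces the on-demand `wake_sleep_flags()` helper with a global byte that a `module_param_call` setter keeps in sync whenever `gts` or `bfs` changes, so the sleep path just reads a value instead of recomputing it. The bit maintenance each setter invocation performs can be sketched in plain C (the flag values here are illustrative, not taken from ACPICA headers):

```c
#include <stdint.h>

/* Illustrative flag values; the real ones come from ACPICA headers. */
#define EXECUTE_GTS 0x01
#define EXECUTE_BFS 0x02

/* What each set_param_wake_flag() call does: mirror one boolean module
 * parameter into its bit of the wake/sleep flags byte. */
uint8_t update_wake_flag(uint8_t flags, uint8_t bit, int enabled)
{
    if (enabled)
        flags |= bit;
    else
        flags &= (uint8_t)~bit;
    return flags;
}
```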
+2
drivers/ata/ahci.c
··· 394 394 .driver_data = board_ahci_yes_fbs }, /* 88se9128 */ 395 395 { PCI_DEVICE(0x1b4b, 0x9125), 396 396 .driver_data = board_ahci_yes_fbs }, /* 88se9125 */ 397 + { PCI_DEVICE(0x1b4b, 0x917a), 398 + .driver_data = board_ahci_yes_fbs }, /* 88se9172 */ 397 399 { PCI_DEVICE(0x1b4b, 0x91a3), 398 400 .driver_data = board_ahci_yes_fbs }, 399 401
+1
drivers/ata/ahci_platform.c
··· 280 280 281 281 static const struct of_device_id ahci_of_match[] = { 282 282 { .compatible = "calxeda,hb-ahci", }, 283 + { .compatible = "snps,spear-ahci", }, 283 284 {}, 284 285 }; 285 286 MODULE_DEVICE_TABLE(of, ahci_of_match);
+1 -1
drivers/ata/libata-core.c
··· 95 95 static void ata_dev_xfermask(struct ata_device *dev); 96 96 static unsigned long ata_dev_blacklisted(const struct ata_device *dev); 97 97 98 - atomic_t ata_print_id = ATOMIC_INIT(1); 98 + atomic_t ata_print_id = ATOMIC_INIT(0); 99 99 100 100 struct ata_force_param { 101 101 const char *name;
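Reviewer note: the `ATOMIC_INIT(1)` → `ATOMIC_INIT(0)` change is safe because every consumer now obtains its ID via `atomic_inc_return(&ata_print_id)`, which increments first and then returns, so the first port still prints as 1. A C11 sketch of that allocation pattern:

```c
#include <stdatomic.h>

static atomic_uint print_id;  /* zero-initialized, like ATOMIC_INIT(0) */

/* Equivalent of the kernel's atomic_inc_return(): add one and return
 * the new value, so the first caller sees 1. */
unsigned int next_print_id(void)
{
    return atomic_fetch_add(&print_id, 1u) + 1u;
}
```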
+2 -1
drivers/ata/libata-eh.c
··· 3501 3501 u64 now = get_jiffies_64(); 3502 3502 int *trials = void_arg; 3503 3503 3504 - if (ent->timestamp < now - min(now, interval)) 3504 + if ((ent->eflags & ATA_EFLAG_OLD_ER) || 3505 + (ent->timestamp < now - min(now, interval))) 3505 3506 return -1; 3506 3507 3507 3508 (*trials)++;
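Reviewer note: the extra `ATA_EFLAG_OLD_ER` test makes entries explicitly marked stale terminate the trial walk just like entries older than the probe interval. The age check itself is worth a second look: `now - min(now, interval)` clamps the cutoff so the unsigned subtraction cannot wrap when `now < interval`. A sketch (the flag value is an illustrative stand-in):

```c
#include <stdint.h>

#define EFLAG_OLD_ER 0x1  /* illustrative stand-in for ATA_EFLAG_OLD_ER */

/* Nonzero means "stop the walk here": the entry is either marked stale
 * or its timestamp predates the interval window.  The min() clamp keeps
 * now - interval from wrapping below zero in unsigned arithmetic. */
int er_entry_too_old(unsigned int eflags, uint64_t timestamp,
                     uint64_t now, uint64_t interval)
{
    uint64_t cutoff = now - (now < interval ? now : interval);

    return (eflags & EFLAG_OLD_ER) || timestamp < cutoff;
}
```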
+23 -17
drivers/ata/libata-scsi.c
··· 3399 3399 */ 3400 3400 shost->max_host_blocked = 1; 3401 3401 3402 - rc = scsi_add_host(ap->scsi_host, &ap->tdev); 3402 + rc = scsi_add_host_with_dma(ap->scsi_host, 3403 + &ap->tdev, ap->host->dev); 3403 3404 if (rc) 3404 3405 goto err_add; 3405 3406 } ··· 3839 3838 } 3840 3839 EXPORT_SYMBOL_GPL(ata_sas_port_stop); 3841 3840 3842 - int ata_sas_async_port_init(struct ata_port *ap) 3841 + /** 3842 + * ata_sas_async_probe - simply schedule probing and return 3843 + * @ap: Port to probe 3844 + * 3845 + * For batch scheduling of probe for sas attached ata devices, assumes 3846 + * the port has already been through ata_sas_port_init() 3847 + */ 3848 + void ata_sas_async_probe(struct ata_port *ap) 3843 3849 { 3844 - int rc = ap->ops->port_start(ap); 3845 - 3846 - if (!rc) { 3847 - ap->print_id = atomic_inc_return(&ata_print_id); 3848 - __ata_port_probe(ap); 3849 - } 3850 - 3851 - return rc; 3850 + __ata_port_probe(ap); 3852 3851 } 3853 - EXPORT_SYMBOL_GPL(ata_sas_async_port_init); 3852 + EXPORT_SYMBOL_GPL(ata_sas_async_probe); 3853 + 3854 + int ata_sas_sync_probe(struct ata_port *ap) 3855 + { 3856 + return ata_port_probe(ap); 3857 + } 3858 + EXPORT_SYMBOL_GPL(ata_sas_sync_probe); 3859 + 3854 3860 3855 3861 /** 3856 3862 * ata_sas_port_init - Initialize a SATA device ··· 3874 3866 { 3875 3867 int rc = ap->ops->port_start(ap); 3876 3868 3877 - if (!rc) { 3878 - ap->print_id = atomic_inc_return(&ata_print_id); 3879 - rc = ata_port_probe(ap); 3880 - } 3881 - 3882 - return rc; 3869 + if (rc) 3870 + return rc; 3871 + ap->print_id = atomic_inc_return(&ata_print_id); 3872 + return 0; 3883 3873 } 3884 3874 EXPORT_SYMBOL_GPL(ata_sas_port_init); 3885 3875
+1 -3
drivers/ata/pata_arasan_cf.c
··· 943 943 944 944 return 0; 945 945 } 946 + #endif 946 947 947 948 static SIMPLE_DEV_PM_OPS(arasan_cf_pm_ops, arasan_cf_suspend, arasan_cf_resume); 948 - #endif 949 949 950 950 static struct platform_driver arasan_cf_driver = { 951 951 .probe = arasan_cf_probe, ··· 953 953 .driver = { 954 954 .name = DRIVER_NAME, 955 955 .owner = THIS_MODULE, 956 - #ifdef CONFIG_PM 957 956 .pm = &arasan_cf_pm_ops, 958 - #endif 959 957 }, 960 958 }; 961 959
+4
drivers/bluetooth/ath3k.c
··· 75 75 { USB_DEVICE(0x0CF3, 0x311D) }, 76 76 { USB_DEVICE(0x13d3, 0x3375) }, 77 77 { USB_DEVICE(0x04CA, 0x3005) }, 78 + { USB_DEVICE(0x13d3, 0x3362) }, 79 + { USB_DEVICE(0x0CF3, 0xE004) }, 78 80 79 81 /* Atheros AR5BBU12 with sflash firmware */ 80 82 { USB_DEVICE(0x0489, 0xE02C) }, ··· 96 94 { USB_DEVICE(0x0cf3, 0x311D), .driver_info = BTUSB_ATH3012 }, 97 95 { USB_DEVICE(0x13d3, 0x3375), .driver_info = BTUSB_ATH3012 }, 98 96 { USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 }, 97 + { USB_DEVICE(0x13d3, 0x3362), .driver_info = BTUSB_ATH3012 }, 98 + { USB_DEVICE(0x0cf3, 0xe004), .driver_info = BTUSB_ATH3012 }, 99 99 100 100 { } /* Terminating entry */ 101 101 };
+6
drivers/bluetooth/btusb.c
··· 101 101 { USB_DEVICE(0x0c10, 0x0000) }, 102 102 103 103 /* Broadcom BCM20702A0 */ 104 + { USB_DEVICE(0x0489, 0xe042) }, 104 105 { USB_DEVICE(0x0a5c, 0x21e3) }, 105 106 { USB_DEVICE(0x0a5c, 0x21e6) }, 106 107 { USB_DEVICE(0x0a5c, 0x21e8) }, 107 108 { USB_DEVICE(0x0a5c, 0x21f3) }, 108 109 { USB_DEVICE(0x413c, 0x8197) }, 110 + 111 + /* Foxconn - Hon Hai */ 112 + { USB_DEVICE(0x0489, 0xe033) }, 109 113 110 114 { } /* Terminating entry */ 111 115 }; ··· 137 133 { USB_DEVICE(0x0cf3, 0x311d), .driver_info = BTUSB_ATH3012 }, 138 134 { USB_DEVICE(0x13d3, 0x3375), .driver_info = BTUSB_ATH3012 }, 139 135 { USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 }, 136 + { USB_DEVICE(0x13d3, 0x3362), .driver_info = BTUSB_ATH3012 }, 137 + { USB_DEVICE(0x0cf3, 0xe004), .driver_info = BTUSB_ATH3012 }, 140 138 141 139 /* Atheros AR5BBU12 with sflash firmware */ 142 140 { USB_DEVICE(0x0489, 0xe02c), .driver_info = BTUSB_IGNORE },
+1
drivers/dma/amba-pl08x.c
··· 1429 1429 * signal 1430 1430 */ 1431 1431 release_phy_channel(plchan); 1432 + plchan->phychan_hold = 0; 1432 1433 } 1433 1434 /* Dequeue jobs and free LLIs */ 1434 1435 if (plchan->at) {
-4
drivers/dma/at_hdmac.c
··· 221 221 222 222 vdbg_dump_regs(atchan); 223 223 224 - /* clear any pending interrupt */ 225 - while (dma_readl(atdma, EBCISR)) 226 - cpu_relax(); 227 - 228 224 channel_writel(atchan, SADDR, 0); 229 225 channel_writel(atchan, DADDR, 0); 230 226 channel_writel(atchan, CTRLA, 0);
+6 -3
drivers/dma/imx-dma.c
··· 571 571 if (desc->desc.callback) 572 572 desc->desc.callback(desc->desc.callback_param); 573 573 574 - dma_cookie_complete(&desc->desc); 575 - 576 - /* If we are dealing with a cyclic descriptor keep it on ld_active */ 574 + /* If we are dealing with a cyclic descriptor keep it on ld_active 575 + * and dont mark the descripor as complete. 576 + * Only in non-cyclic cases it would be marked as complete 577 + */ 577 578 if (imxdma_chan_is_doing_cyclic(imxdmac)) 578 579 goto out; 580 + else 581 + dma_cookie_complete(&desc->desc); 579 582 580 583 /* Free 2D slot if it was an interleaved transfer */ 581 584 if (imxdmac->enabled_2d) {
+3 -7
drivers/dma/mxs-dma.c
··· 201 201 202 202 static dma_cookie_t mxs_dma_tx_submit(struct dma_async_tx_descriptor *tx) 203 203 { 204 - struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(tx->chan); 205 - 206 - mxs_dma_enable_chan(mxs_chan); 207 - 208 204 return dma_cookie_assign(tx); 209 205 } 210 206 ··· 554 558 555 559 static void mxs_dma_issue_pending(struct dma_chan *chan) 556 560 { 557 - /* 558 - * Nothing to do. We only have a single descriptor. 559 - */ 561 + struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan); 562 + 563 + mxs_dma_enable_chan(mxs_chan); 560 564 } 561 565 562 566 static int __init mxs_dma_init(struct mxs_dma_engine *mxs_dma)
+15 -10
drivers/dma/pl330.c
··· 2225 2225 { 2226 2226 struct dma_pl330_dmac *pdmac; 2227 2227 struct dma_pl330_desc *desc; 2228 - struct dma_pl330_chan *pch; 2228 + struct dma_pl330_chan *pch = NULL; 2229 2229 unsigned long flags; 2230 - 2231 - if (list_empty(list)) 2232 - return; 2233 2230 2234 2231 /* Finish off the work list */ 2235 2232 list_for_each_entry(desc, list, node) { ··· 2244 2247 desc->pchan = NULL; 2245 2248 } 2246 2249 2250 + /* pch will be unset if list was empty */ 2251 + if (!pch) 2252 + return; 2253 + 2247 2254 pdmac = pch->dmac; 2248 2255 2249 2256 spin_lock_irqsave(&pdmac->pool_lock, flags); ··· 2258 2257 static inline void handle_cyclic_desc_list(struct list_head *list) 2259 2258 { 2260 2259 struct dma_pl330_desc *desc; 2261 - struct dma_pl330_chan *pch; 2260 + struct dma_pl330_chan *pch = NULL; 2262 2261 unsigned long flags; 2263 - 2264 - if (list_empty(list)) 2265 - return; 2266 2262 2267 2263 list_for_each_entry(desc, list, node) { 2268 2264 dma_async_tx_callback callback; ··· 2271 2273 if (callback) 2272 2274 callback(desc->txd.callback_param); 2273 2275 } 2276 + 2277 + /* pch will be unset if list was empty */ 2278 + if (!pch) 2279 + return; 2274 2280 2275 2281 spin_lock_irqsave(&pch->lock, flags); 2276 2282 list_splice_tail_init(list, &pch->work_list); ··· 2928 2926 INIT_LIST_HEAD(&pd->channels); 2929 2927 2930 2928 /* Initialize channel parameters */ 2931 - num_chan = max(pdat ? pdat->nr_valid_peri : (u8)pi->pcfg.num_peri, 2932 - (u8)pi->pcfg.num_chan); 2929 + if (pdat) 2930 + num_chan = max_t(int, pdat->nr_valid_peri, pi->pcfg.num_chan); 2931 + else 2932 + num_chan = max_t(int, pi->pcfg.num_peri, pi->pcfg.num_chan); 2933 + 2933 2934 pdmac->peripherals = kzalloc(num_chan * sizeof(*pch), GFP_KERNEL); 2934 2935 2935 2936 for (i = 0; i < num_chan; i++) {
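Reviewer note: two patterns in the pl330 hunk are worth calling out. First, `pch` is now initialized to NULL and checked after the loop, because `list_for_each_entry` leaves its cursor unusable when the list was empty. Second, the old `max(..., (u8)pi->pcfg.num_chan)` compared the counts at `u8` width, silently truncating anything above 255; `max_t(int, ...)` widens both sides first. The truncation difference can be shown directly (hypothetical helpers, not kernel code):

```c
#include <stdint.h>

/* The old pattern: both operands forced down to u8 before comparing. */
unsigned int max_at_u8(unsigned int a, unsigned int b)
{
    uint8_t ua = (uint8_t)a, ub = (uint8_t)b;

    return ua > ub ? ua : ub;
}

/* The fixed pattern, what max_t(int, ...) does: widen first, compare. */
int max_at_int(unsigned int a, unsigned int b)
{
    return (int)a > (int)b ? (int)a : (int)b;
}
```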
+204 -125
drivers/dma/ste_dma40.c
··· 18 18 #include <linux/pm_runtime.h> 19 19 #include <linux/err.h> 20 20 #include <linux/amba/bus.h> 21 + #include <linux/regulator/consumer.h> 21 22 22 23 #include <plat/ste_dma40.h> 23 24 ··· 67 66 D40_DMA_RUN = 1, 68 67 D40_DMA_SUSPEND_REQ = 2, 69 68 D40_DMA_SUSPENDED = 3 69 + }; 70 + 71 + /* 72 + * enum d40_events - The different Event Enables for the event lines. 73 + * 74 + * @D40_DEACTIVATE_EVENTLINE: De-activate Event line, stopping the logical chan. 75 + * @D40_ACTIVATE_EVENTLINE: Activate the Event line, to start a logical chan. 76 + * @D40_SUSPEND_REQ_EVENTLINE: Requesting for suspending a event line. 77 + * @D40_ROUND_EVENTLINE: Status check for event line. 78 + */ 79 + 80 + enum d40_events { 81 + D40_DEACTIVATE_EVENTLINE = 0, 82 + D40_ACTIVATE_EVENTLINE = 1, 83 + D40_SUSPEND_REQ_EVENTLINE = 2, 84 + D40_ROUND_EVENTLINE = 3 70 85 }; 71 86 72 87 /* ··· 887 870 } 888 871 #endif 889 872 890 - static int d40_channel_execute_command(struct d40_chan *d40c, 891 - enum d40_command command) 873 + static int __d40_execute_command_phy(struct d40_chan *d40c, 874 + enum d40_command command) 892 875 { 893 876 u32 status; 894 877 int i; ··· 896 879 int ret = 0; 897 880 unsigned long flags; 898 881 u32 wmask; 882 + 883 + if (command == D40_DMA_STOP) { 884 + ret = __d40_execute_command_phy(d40c, D40_DMA_SUSPEND_REQ); 885 + if (ret) 886 + return ret; 887 + } 899 888 900 889 spin_lock_irqsave(&d40c->base->execmd_lock, flags); 901 890 ··· 996 973 } 997 974 998 975 d40c->pending_tx = 0; 999 - d40c->busy = false; 1000 976 } 1001 977 1002 - static void __d40_config_set_event(struct d40_chan *d40c, bool enable, 1003 - u32 event, int reg) 978 + static void __d40_config_set_event(struct d40_chan *d40c, 979 + enum d40_events event_type, u32 event, 980 + int reg) 1004 981 { 1005 982 void __iomem *addr = chan_base(d40c) + reg; 1006 983 int tries; 984 + u32 status; 1007 985 1008 - if (!enable) { 986 + switch (event_type) { 987 + 988 + case D40_DEACTIVATE_EVENTLINE: 989 + 1009 990 
writel((D40_DEACTIVATE_EVENTLINE << D40_EVENTLINE_POS(event)) 1010 991 | ~D40_EVENTLINE_MASK(event), addr); 1011 - return; 1012 - } 992 + break; 1013 993 994 + case D40_SUSPEND_REQ_EVENTLINE: 995 + status = (readl(addr) & D40_EVENTLINE_MASK(event)) >> 996 + D40_EVENTLINE_POS(event); 997 + 998 + if (status == D40_DEACTIVATE_EVENTLINE || 999 + status == D40_SUSPEND_REQ_EVENTLINE) 1000 + break; 1001 + 1002 + writel((D40_SUSPEND_REQ_EVENTLINE << D40_EVENTLINE_POS(event)) 1003 + | ~D40_EVENTLINE_MASK(event), addr); 1004 + 1005 + for (tries = 0 ; tries < D40_SUSPEND_MAX_IT; tries++) { 1006 + 1007 + status = (readl(addr) & D40_EVENTLINE_MASK(event)) >> 1008 + D40_EVENTLINE_POS(event); 1009 + 1010 + cpu_relax(); 1011 + /* 1012 + * Reduce the number of bus accesses while 1013 + * waiting for the DMA to suspend. 1014 + */ 1015 + udelay(3); 1016 + 1017 + if (status == D40_DEACTIVATE_EVENTLINE) 1018 + break; 1019 + } 1020 + 1021 + if (tries == D40_SUSPEND_MAX_IT) { 1022 + chan_err(d40c, 1023 + "unable to stop the event_line chl %d (log: %d)" 1024 + "status %x\n", d40c->phy_chan->num, 1025 + d40c->log_num, status); 1026 + } 1027 + break; 1028 + 1029 + case D40_ACTIVATE_EVENTLINE: 1014 1030 /* 1015 1031 * The hardware sometimes doesn't register the enable when src and dst 1016 1032 * event lines are active on the same logical channel. Retry to ensure 1017 1033 * it does. Usually only one retry is sufficient. 
1018 1034 */ 1019 - tries = 100; 1020 - while (--tries) { 1021 - writel((D40_ACTIVATE_EVENTLINE << D40_EVENTLINE_POS(event)) 1022 - | ~D40_EVENTLINE_MASK(event), addr); 1035 + tries = 100; 1036 + while (--tries) { 1037 + writel((D40_ACTIVATE_EVENTLINE << 1038 + D40_EVENTLINE_POS(event)) | 1039 + ~D40_EVENTLINE_MASK(event), addr); 1023 1040 1024 - if (readl(addr) & D40_EVENTLINE_MASK(event)) 1025 - break; 1041 + if (readl(addr) & D40_EVENTLINE_MASK(event)) 1042 + break; 1043 + } 1044 + 1045 + if (tries != 99) 1046 + dev_dbg(chan2dev(d40c), 1047 + "[%s] workaround enable S%cLNK (%d tries)\n", 1048 + __func__, reg == D40_CHAN_REG_SSLNK ? 'S' : 'D', 1049 + 100 - tries); 1050 + 1051 + WARN_ON(!tries); 1052 + break; 1053 + 1054 + case D40_ROUND_EVENTLINE: 1055 + BUG(); 1056 + break; 1057 + 1026 1058 } 1027 - 1028 - if (tries != 99) 1029 - dev_dbg(chan2dev(d40c), 1030 - "[%s] workaround enable S%cLNK (%d tries)\n", 1031 - __func__, reg == D40_CHAN_REG_SSLNK ? 'S' : 'D', 1032 - 100 - tries); 1033 - 1034 - WARN_ON(!tries); 1035 1059 } 1036 1060 1037 - static void d40_config_set_event(struct d40_chan *d40c, bool do_enable) 1061 + static void d40_config_set_event(struct d40_chan *d40c, 1062 + enum d40_events event_type) 1038 1063 { 1039 - unsigned long flags; 1040 - 1041 - spin_lock_irqsave(&d40c->phy_chan->lock, flags); 1042 - 1043 1064 /* Enable event line connected to device (or memcpy) */ 1044 1065 if ((d40c->dma_cfg.dir == STEDMA40_PERIPH_TO_MEM) || 1045 1066 (d40c->dma_cfg.dir == STEDMA40_PERIPH_TO_PERIPH)) { 1046 1067 u32 event = D40_TYPE_TO_EVENT(d40c->dma_cfg.src_dev_type); 1047 1068 1048 - __d40_config_set_event(d40c, do_enable, event, 1069 + __d40_config_set_event(d40c, event_type, event, 1049 1070 D40_CHAN_REG_SSLNK); 1050 1071 } 1051 1072 1052 1073 if (d40c->dma_cfg.dir != STEDMA40_PERIPH_TO_MEM) { 1053 1074 u32 event = D40_TYPE_TO_EVENT(d40c->dma_cfg.dst_dev_type); 1054 1075 1055 - __d40_config_set_event(d40c, do_enable, event, 1076 + 
__d40_config_set_event(d40c, event_type, event, 1056 1077 D40_CHAN_REG_SDLNK); 1057 1078 } 1058 - 1059 - spin_unlock_irqrestore(&d40c->phy_chan->lock, flags); 1060 1079 } 1061 1080 1062 1081 static u32 d40_chan_has_events(struct d40_chan *d40c) ··· 1110 1045 val |= readl(chanbase + D40_CHAN_REG_SDLNK); 1111 1046 1112 1047 return val; 1048 + } 1049 + 1050 + static int 1051 + __d40_execute_command_log(struct d40_chan *d40c, enum d40_command command) 1052 + { 1053 + unsigned long flags; 1054 + int ret = 0; 1055 + u32 active_status; 1056 + void __iomem *active_reg; 1057 + 1058 + if (d40c->phy_chan->num % 2 == 0) 1059 + active_reg = d40c->base->virtbase + D40_DREG_ACTIVE; 1060 + else 1061 + active_reg = d40c->base->virtbase + D40_DREG_ACTIVO; 1062 + 1063 + 1064 + spin_lock_irqsave(&d40c->phy_chan->lock, flags); 1065 + 1066 + switch (command) { 1067 + case D40_DMA_STOP: 1068 + case D40_DMA_SUSPEND_REQ: 1069 + 1070 + active_status = (readl(active_reg) & 1071 + D40_CHAN_POS_MASK(d40c->phy_chan->num)) >> 1072 + D40_CHAN_POS(d40c->phy_chan->num); 1073 + 1074 + if (active_status == D40_DMA_RUN) 1075 + d40_config_set_event(d40c, D40_SUSPEND_REQ_EVENTLINE); 1076 + else 1077 + d40_config_set_event(d40c, D40_DEACTIVATE_EVENTLINE); 1078 + 1079 + if (!d40_chan_has_events(d40c) && (command == D40_DMA_STOP)) 1080 + ret = __d40_execute_command_phy(d40c, command); 1081 + 1082 + break; 1083 + 1084 + case D40_DMA_RUN: 1085 + 1086 + d40_config_set_event(d40c, D40_ACTIVATE_EVENTLINE); 1087 + ret = __d40_execute_command_phy(d40c, command); 1088 + break; 1089 + 1090 + case D40_DMA_SUSPENDED: 1091 + BUG(); 1092 + break; 1093 + } 1094 + 1095 + spin_unlock_irqrestore(&d40c->phy_chan->lock, flags); 1096 + return ret; 1097 + } 1098 + 1099 + static int d40_channel_execute_command(struct d40_chan *d40c, 1100 + enum d40_command command) 1101 + { 1102 + if (chan_is_logical(d40c)) 1103 + return __d40_execute_command_log(d40c, command); 1104 + else 1105 + return __d40_execute_command_phy(d40c, 
command); 1113 1106 } 1114 1107 1115 1108 static u32 d40_get_prmo(struct d40_chan *d40c) ··· 1272 1149 spin_lock_irqsave(&d40c->lock, flags); 1273 1150 1274 1151 res = d40_channel_execute_command(d40c, D40_DMA_SUSPEND_REQ); 1275 - if (res == 0) { 1276 - if (chan_is_logical(d40c)) { 1277 - d40_config_set_event(d40c, false); 1278 - /* Resume the other logical channels if any */ 1279 - if (d40_chan_has_events(d40c)) 1280 - res = d40_channel_execute_command(d40c, 1281 - D40_DMA_RUN); 1282 - } 1283 - } 1152 + 1284 1153 pm_runtime_mark_last_busy(d40c->base->dev); 1285 1154 pm_runtime_put_autosuspend(d40c->base->dev); 1286 1155 spin_unlock_irqrestore(&d40c->lock, flags); ··· 1289 1174 1290 1175 spin_lock_irqsave(&d40c->lock, flags); 1291 1176 pm_runtime_get_sync(d40c->base->dev); 1292 - if (d40c->base->rev == 0) 1293 - if (chan_is_logical(d40c)) { 1294 - res = d40_channel_execute_command(d40c, 1295 - D40_DMA_SUSPEND_REQ); 1296 - goto no_suspend; 1297 - } 1298 1177 1299 1178 /* If bytes left to transfer or linked tx resume job */ 1300 - if (d40_residue(d40c) || d40_tx_is_linked(d40c)) { 1301 - 1302 - if (chan_is_logical(d40c)) 1303 - d40_config_set_event(d40c, true); 1304 - 1179 + if (d40_residue(d40c) || d40_tx_is_linked(d40c)) 1305 1180 res = d40_channel_execute_command(d40c, D40_DMA_RUN); 1306 - } 1307 1181 1308 - no_suspend: 1309 1182 pm_runtime_mark_last_busy(d40c->base->dev); 1310 1183 pm_runtime_put_autosuspend(d40c->base->dev); 1311 1184 spin_unlock_irqrestore(&d40c->lock, flags); 1312 1185 return res; 1313 - } 1314 - 1315 - static int d40_terminate_all(struct d40_chan *chan) 1316 - { 1317 - unsigned long flags; 1318 - int ret = 0; 1319 - 1320 - ret = d40_pause(chan); 1321 - if (!ret && chan_is_physical(chan)) 1322 - ret = d40_channel_execute_command(chan, D40_DMA_STOP); 1323 - 1324 - spin_lock_irqsave(&chan->lock, flags); 1325 - d40_term_all(chan); 1326 - spin_unlock_irqrestore(&chan->lock, flags); 1327 - 1328 - return ret; 1329 1186 } 1330 1187 1331 1188 static 
dma_cookie_t d40_tx_submit(struct dma_async_tx_descriptor *tx) ··· 1319 1232 1320 1233 static int d40_start(struct d40_chan *d40c) 1321 1234 { 1322 - if (d40c->base->rev == 0) { 1323 - int err; 1324 - 1325 - if (chan_is_logical(d40c)) { 1326 - err = d40_channel_execute_command(d40c, 1327 - D40_DMA_SUSPEND_REQ); 1328 - if (err) 1329 - return err; 1330 - } 1331 - } 1332 - 1333 - if (chan_is_logical(d40c)) 1334 - d40_config_set_event(d40c, true); 1335 - 1336 1235 return d40_channel_execute_command(d40c, D40_DMA_RUN); 1337 1236 } 1338 1237 ··· 1331 1258 d40d = d40_first_queued(d40c); 1332 1259 1333 1260 if (d40d != NULL) { 1334 - if (!d40c->busy) 1261 + if (!d40c->busy) { 1335 1262 d40c->busy = true; 1336 - 1337 - pm_runtime_get_sync(d40c->base->dev); 1263 + pm_runtime_get_sync(d40c->base->dev); 1264 + } 1338 1265 1339 1266 /* Remove from queue */ 1340 1267 d40_desc_remove(d40d); ··· 1461 1388 1462 1389 return; 1463 1390 1464 - err: 1465 - /* Rescue manoeuvre if receiving double interrupts */ 1391 + err: 1392 + /* Rescue manouver if receiving double interrupts */ 1466 1393 if (d40c->pending_tx > 0) 1467 1394 d40c->pending_tx--; 1468 1395 spin_unlock_irqrestore(&d40c->lock, flags); ··· 1843 1770 return 0; 1844 1771 } 1845 1772 1846 - 1847 1773 static int d40_free_dma(struct d40_chan *d40c) 1848 1774 { 1849 1775 ··· 1878 1806 } 1879 1807 1880 1808 pm_runtime_get_sync(d40c->base->dev); 1881 - res = d40_channel_execute_command(d40c, D40_DMA_SUSPEND_REQ); 1882 - if (res) { 1883 - chan_err(d40c, "suspend failed\n"); 1884 - goto out; 1885 - } 1886 - 1887 - if (chan_is_logical(d40c)) { 1888 - /* Release logical channel, deactivate the event line */ 1889 - 1890 - d40_config_set_event(d40c, false); 1891 - d40c->base->lookup_log_chans[d40c->log_num] = NULL; 1892 - 1893 - /* 1894 - * Check if there are more logical allocation 1895 - * on this phy channel. 
1896 - */ 1897 - if (!d40_alloc_mask_free(phy, is_src, event)) { 1898 - /* Resume the other logical channels if any */ 1899 - if (d40_chan_has_events(d40c)) { 1900 - res = d40_channel_execute_command(d40c, 1901 - D40_DMA_RUN); 1902 - if (res) 1903 - chan_err(d40c, 1904 - "Executing RUN command\n"); 1905 - } 1906 - goto out; 1907 - } 1908 - } else { 1909 - (void) d40_alloc_mask_free(phy, is_src, 0); 1910 - } 1911 - 1912 - /* Release physical channel */ 1913 1809 res = d40_channel_execute_command(d40c, D40_DMA_STOP); 1914 1810 if (res) { 1915 - chan_err(d40c, "Failed to stop channel\n"); 1811 + chan_err(d40c, "stop failed\n"); 1916 1812 goto out; 1917 1813 } 1814 + 1815 + d40_alloc_mask_free(phy, is_src, chan_is_logical(d40c) ? event : 0); 1816 + 1817 + if (chan_is_logical(d40c)) 1818 + d40c->base->lookup_log_chans[d40c->log_num] = NULL; 1819 + else 1820 + d40c->base->lookup_phy_chans[phy->num] = NULL; 1918 1821 1919 1822 if (d40c->busy) { 1920 1823 pm_runtime_mark_last_busy(d40c->base->dev); ··· 1899 1852 d40c->busy = false; 1900 1853 d40c->phy_chan = NULL; 1901 1854 d40c->configured = false; 1902 - d40c->base->lookup_phy_chans[phy->num] = NULL; 1903 1855 out: 1904 1856 1905 1857 pm_runtime_mark_last_busy(d40c->base->dev); ··· 2116 2070 if (sg_next(&sg_src[sg_len - 1]) == sg_src) 2117 2071 desc->cyclic = true; 2118 2072 2119 - if (direction != DMA_NONE) { 2073 + if (direction != DMA_TRANS_NONE) { 2120 2074 dma_addr_t dev_addr = d40_get_dev_addr(chan, direction); 2121 2075 2122 2076 if (direction == DMA_DEV_TO_MEM) ··· 2417 2371 spin_unlock_irqrestore(&d40c->lock, flags); 2418 2372 } 2419 2373 2374 + static void d40_terminate_all(struct dma_chan *chan) 2375 + { 2376 + unsigned long flags; 2377 + struct d40_chan *d40c = container_of(chan, struct d40_chan, chan); 2378 + int ret; 2379 + 2380 + spin_lock_irqsave(&d40c->lock, flags); 2381 + 2382 + pm_runtime_get_sync(d40c->base->dev); 2383 + ret = d40_channel_execute_command(d40c, D40_DMA_STOP); 2384 + if (ret) 2385 + 
chan_err(d40c, "Failed to stop channel\n"); 2386 + 2387 + d40_term_all(d40c); 2388 + pm_runtime_mark_last_busy(d40c->base->dev); 2389 + pm_runtime_put_autosuspend(d40c->base->dev); 2390 + if (d40c->busy) { 2391 + pm_runtime_mark_last_busy(d40c->base->dev); 2392 + pm_runtime_put_autosuspend(d40c->base->dev); 2393 + } 2394 + d40c->busy = false; 2395 + 2396 + spin_unlock_irqrestore(&d40c->lock, flags); 2397 + } 2398 + 2420 2399 static int 2421 2400 dma40_config_to_halfchannel(struct d40_chan *d40c, 2422 2401 struct stedma40_half_channel_info *info, ··· 2622 2551 2623 2552 switch (cmd) { 2624 2553 case DMA_TERMINATE_ALL: 2625 - return d40_terminate_all(d40c); 2554 + d40_terminate_all(chan); 2555 + return 0; 2626 2556 case DMA_PAUSE: 2627 2557 return d40_pause(d40c); 2628 2558 case DMA_RESUME: ··· 2980 2908 dev_info(&pdev->dev, "hardware revision: %d @ 0x%x\n", 2981 2909 rev, res->start); 2982 2910 2911 + if (rev < 2) { 2912 + d40_err(&pdev->dev, "hardware revision: %d is not supported", 2913 + rev); 2914 + goto failure; 2915 + } 2916 + 2983 2917 plat_data = pdev->dev.platform_data; 2984 2918 2985 2919 /* Count the number of logical channels in use */ ··· 3076 2998 3077 2999 if (base) { 3078 3000 kfree(base->lcla_pool.alloc_map); 3001 + kfree(base->reg_val_backup_chan); 3079 3002 kfree(base->lookup_log_chans); 3080 3003 kfree(base->lookup_phy_chans); 3081 3004 kfree(base->phy_res);
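Reviewer note: the new `D40_SUSPEND_REQ_EVENTLINE` branch polls the event-line status up to `D40_SUSPEND_MAX_IT` times, pausing between reads (`cpu_relax()` plus a 3 µs delay to cut bus traffic) and logging if the channel never reaches the deactivated state. The retry skeleton, with the successive register reads modelled here as an array of status values:

```c
/* Bounded poll: scan successive status reads until "deactivated" (0)
 * appears or the retry budget runs out.  Returns the number of reads
 * consumed on success, -1 when the budget is exhausted. */
int poll_until_deactivated(const int *reads, int nreads, int max_it)
{
    int tries;

    for (tries = 0; tries < max_it && tries < nreads; tries++) {
        if (reads[tries] == 0)
            return tries + 1;
        /* the driver inserts cpu_relax() + udelay(3) here */
    }
    return -1;  /* caller logs "unable to stop the event_line" */
}
```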
-2
drivers/dma/ste_dma40_ll.h
··· 62 62 #define D40_SREG_ELEM_LOG_LIDX_MASK (0xFF << D40_SREG_ELEM_LOG_LIDX_POS) 63 63 64 64 /* Link register */ 65 - #define D40_DEACTIVATE_EVENTLINE 0x0 66 - #define D40_ACTIVATE_EVENTLINE 0x1 67 65 #define D40_EVENTLINE_POS(i) (2 * i) 68 66 #define D40_EVENTLINE_MASK(i) (0x3 << D40_EVENTLINE_POS(i)) 69 67
+196
drivers/firmware/efivars.c
··· 191 191 } 192 192 } 193 193 194 + static bool 195 + validate_device_path(struct efi_variable *var, int match, u8 *buffer, 196 + unsigned long len) 197 + { 198 + struct efi_generic_dev_path *node; 199 + int offset = 0; 200 + 201 + node = (struct efi_generic_dev_path *)buffer; 202 + 203 + if (len < sizeof(*node)) 204 + return false; 205 + 206 + while (offset <= len - sizeof(*node) && 207 + node->length >= sizeof(*node) && 208 + node->length <= len - offset) { 209 + offset += node->length; 210 + 211 + if ((node->type == EFI_DEV_END_PATH || 212 + node->type == EFI_DEV_END_PATH2) && 213 + node->sub_type == EFI_DEV_END_ENTIRE) 214 + return true; 215 + 216 + node = (struct efi_generic_dev_path *)(buffer + offset); 217 + } 218 + 219 + /* 220 + * If we're here then either node->length pointed past the end 221 + * of the buffer or we reached the end of the buffer without 222 + * finding a device path end node. 223 + */ 224 + return false; 225 + } 226 + 227 + static bool 228 + validate_boot_order(struct efi_variable *var, int match, u8 *buffer, 229 + unsigned long len) 230 + { 231 + /* An array of 16-bit integers */ 232 + if ((len % 2) != 0) 233 + return false; 234 + 235 + return true; 236 + } 237 + 238 + static bool 239 + validate_load_option(struct efi_variable *var, int match, u8 *buffer, 240 + unsigned long len) 241 + { 242 + u16 filepathlength; 243 + int i, desclength = 0, namelen; 244 + 245 + namelen = utf16_strnlen(var->VariableName, sizeof(var->VariableName)); 246 + 247 + /* Either "Boot" or "Driver" followed by four digits of hex */ 248 + for (i = match; i < match+4; i++) { 249 + if (var->VariableName[i] > 127 || 250 + hex_to_bin(var->VariableName[i] & 0xff) < 0) 251 + return true; 252 + } 253 + 254 + /* Reject it if there's 4 digits of hex and then further content */ 255 + if (namelen > match + 4) 256 + return false; 257 + 258 + /* A valid entry must be at least 8 bytes */ 259 + if (len < 8) 260 + return false; 261 + 262 + filepathlength = buffer[4] | buffer[5] 
<< 8; 263 + 264 + /* 265 + * There's no stored length for the description, so it has to be 266 + * found by hand 267 + */ 268 + desclength = utf16_strsize((efi_char16_t *)(buffer + 6), len - 6) + 2; 269 + 270 + /* Each boot entry must have a descriptor */ 271 + if (!desclength) 272 + return false; 273 + 274 + /* 275 + * If the sum of the length of the description, the claimed filepath 276 + * length and the original header are greater than the length of the 277 + * variable, it's malformed 278 + */ 279 + if ((desclength + filepathlength + 6) > len) 280 + return false; 281 + 282 + /* 283 + * And, finally, check the filepath 284 + */ 285 + return validate_device_path(var, match, buffer + desclength + 6, 286 + filepathlength); 287 + } 288 + 289 + static bool 290 + validate_uint16(struct efi_variable *var, int match, u8 *buffer, 291 + unsigned long len) 292 + { 293 + /* A single 16-bit integer */ 294 + if (len != 2) 295 + return false; 296 + 297 + return true; 298 + } 299 + 300 + static bool 301 + validate_ascii_string(struct efi_variable *var, int match, u8 *buffer, 302 + unsigned long len) 303 + { 304 + int i; 305 + 306 + for (i = 0; i < len; i++) { 307 + if (buffer[i] > 127) 308 + return false; 309 + 310 + if (buffer[i] == 0) 311 + return true; 312 + } 313 + 314 + return false; 315 + } 316 + 317 + struct variable_validate { 318 + char *name; 319 + bool (*validate)(struct efi_variable *var, int match, u8 *data, 320 + unsigned long len); 321 + }; 322 + 323 + static const struct variable_validate variable_validate[] = { 324 + { "BootNext", validate_uint16 }, 325 + { "BootOrder", validate_boot_order }, 326 + { "DriverOrder", validate_boot_order }, 327 + { "Boot*", validate_load_option }, 328 + { "Driver*", validate_load_option }, 329 + { "ConIn", validate_device_path }, 330 + { "ConInDev", validate_device_path }, 331 + { "ConOut", validate_device_path }, 332 + { "ConOutDev", validate_device_path }, 333 + { "ErrOut", validate_device_path }, 334 + { "ErrOutDev", 
validate_device_path }, 335 + { "Timeout", validate_uint16 }, 336 + { "Lang", validate_ascii_string }, 337 + { "PlatformLang", validate_ascii_string }, 338 + { "", NULL }, 339 + }; 340 + 341 + static bool 342 + validate_var(struct efi_variable *var, u8 *data, unsigned long len) 343 + { 344 + int i; 345 + u16 *unicode_name = var->VariableName; 346 + 347 + for (i = 0; variable_validate[i].validate != NULL; i++) { 348 + const char *name = variable_validate[i].name; 349 + int match; 350 + 351 + for (match = 0; ; match++) { 352 + char c = name[match]; 353 + u16 u = unicode_name[match]; 354 + 355 + /* All special variables are plain ascii */ 356 + if (u > 127) 357 + return true; 358 + 359 + /* Wildcard in the matching name means we've matched */ 360 + if (c == '*') 361 + return variable_validate[i].validate(var, 362 + match, data, len); 363 + 364 + /* Case sensitive match */ 365 + if (c != u) 366 + break; 367 + 368 + /* Reached the end of the string while matching */ 369 + if (!c) 370 + return variable_validate[i].validate(var, 371 + match, data, len); 372 + } 373 + } 374 + 375 + return true; 376 + } 377 + 194 378 static efi_status_t 195 379 get_var_data_locked(struct efivars *efivars, struct efi_variable *var) 196 380 { ··· 505 321 506 322 if ((new_var->DataSize <= 0) || (new_var->Attributes == 0)){ 507 323 printk(KERN_ERR "efivars: DataSize & Attributes must be valid!\n"); 324 + return -EINVAL; 325 + } 326 + 327 + if ((new_var->Attributes & ~EFI_VARIABLE_MASK) != 0 || 328 + validate_var(new_var, new_var->Data, new_var->DataSize) == false) { 329 + printk(KERN_ERR "efivars: Malformed variable content\n"); 508 330 return -EINVAL; 509 331 } 510 332 ··· 815 625 816 626 if (!capable(CAP_SYS_ADMIN)) 817 627 return -EACCES; 628 + 629 + if ((new_var->Attributes & ~EFI_VARIABLE_MASK) != 0 || 630 + validate_var(new_var, new_var->Data, new_var->DataSize) == false) { 631 + printk(KERN_ERR "efivars: Malformed variable content\n"); 632 + return -EINVAL; 633 + } 818 634 819 635 
spin_lock(&efivars->lock); 820 636
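Reviewer note: `validate_var()` walks a table of ASCII pattern names, where a trailing `*` matches entries such as `Boot0001`, comparing each pattern against the variable's UTF-16 name. A standalone sketch of just the matcher (returning 1 for "this pattern applies"; note the kernel instead treats non-ASCII names as unconditionally acceptable rather than as mismatches):

```c
#include <stdint.h>

/* Match an ASCII pattern, optionally ending in '*', against a
 * NUL-terminated UTF-16 name.  Case sensitive, like the kernel table. */
int efi_name_match(const char *pattern, const uint16_t *name)
{
    int i;

    for (i = 0; ; i++) {
        char c = pattern[i];
        uint16_t u = name[i];

        if (u > 127)
            return 0;  /* special variables are plain ASCII */
        if (c == '*')
            return 1;  /* wildcard: the prefix matched */
        if ((uint16_t)(unsigned char)c != u)
            return 0;  /* mismatch */
        if (!c)
            return 1;  /* both strings ended together */
    }
}
```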
+19 -2
drivers/gpio/gpio-pxa.c
··· 64 64 unsigned long irq_mask; 65 65 unsigned long irq_edge_rise; 66 66 unsigned long irq_edge_fall; 67 + int (*set_wake)(unsigned int gpio, unsigned int on); 67 68 68 69 #ifdef CONFIG_PM 69 70 unsigned long saved_gplr; ··· 270 269 (value ? GPSR_OFFSET : GPCR_OFFSET)); 271 270 } 272 271 273 - static int __devinit pxa_init_gpio_chip(int gpio_end) 272 + static int __devinit pxa_init_gpio_chip(int gpio_end, 273 + int (*set_wake)(unsigned int, unsigned int)) 274 274 { 275 275 int i, gpio, nbanks = gpio_to_bank(gpio_end) + 1; 276 276 struct pxa_gpio_chip *chips; ··· 287 285 288 286 sprintf(chips[i].label, "gpio-%d", i); 289 287 chips[i].regbase = gpio_reg_base + BANK_OFF(i); 288 + chips[i].set_wake = set_wake; 290 289 291 290 c->base = gpio; 292 291 c->label = chips[i].label; ··· 415 412 writel_relaxed(gfer, c->regbase + GFER_OFFSET); 416 413 } 417 414 415 + static int pxa_gpio_set_wake(struct irq_data *d, unsigned int on) 416 + { 417 + int gpio = pxa_irq_to_gpio(d->irq); 418 + struct pxa_gpio_chip *c = gpio_to_pxachip(gpio); 419 + 420 + if (c->set_wake) 421 + return c->set_wake(gpio, on); 422 + else 423 + return 0; 424 + } 425 + 418 426 static void pxa_unmask_muxed_gpio(struct irq_data *d) 419 427 { 420 428 int gpio = pxa_irq_to_gpio(d->irq); ··· 441 427 .irq_mask = pxa_mask_muxed_gpio, 442 428 .irq_unmask = pxa_unmask_muxed_gpio, 443 429 .irq_set_type = pxa_gpio_irq_type, 430 + .irq_set_wake = pxa_gpio_set_wake, 444 431 }; 445 432 446 433 static int pxa_gpio_nums(void) ··· 486 471 struct pxa_gpio_chip *c; 487 472 struct resource *res; 488 473 struct clk *clk; 474 + struct pxa_gpio_platform_data *info; 489 475 int gpio, irq, ret; 490 476 int irq0 = 0, irq1 = 0, irq_mux, gpio_offset = 0; 491 477 ··· 532 516 } 533 517 534 518 /* Initialize GPIO chips */ 535 - pxa_init_gpio_chip(pxa_last_gpio); 519 + info = dev_get_platdata(&pdev->dev); 520 + pxa_init_gpio_chip(pxa_last_gpio, info ? 
info->gpio_set_wake : NULL); 536 521 537 522 /* clear all GPIO edge detects */ 538 523 for_each_gpio_chip(gpio, c) {
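The gpio-pxa change above threads an optional platform `set_wake` callback through the chip structure and treats a missing callback as success. A minimal sketch of that optional-callback pattern, with illustrative names (not the kernel's actual types):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical chip structure: set_wake may be left NULL by platforms
 * with no wakeup routing, mirroring the gpio-pxa change above. */
struct chip {
    int (*set_wake)(unsigned int gpio, unsigned int on);
};

/* Treat a missing callback as "nothing to do, success" so callers
 * need not special-case platforms without wakeup support. */
static int chip_set_wake(struct chip *c, unsigned int gpio, unsigned int on)
{
    if (c->set_wake)
        return c->set_wake(gpio, on);
    return 0;
}

/* Sample callback used below: always reports failure. */
static int fail_wake(unsigned int gpio, unsigned int on)
{
    (void)gpio;
    (void)on;
    return -1; /* stand-in for a -errno value */
}
```

This keeps the NULL check in one helper rather than at every call site, which is the shape `pxa_gpio_set_wake()` takes in the patch.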
+5 -25
drivers/gpu/drm/exynos/exynos_drm_gem.c
··· 149 149 unsigned long pfn; 150 150 151 151 if (exynos_gem_obj->flags & EXYNOS_BO_NONCONTIG) { 152 - unsigned long usize = buf->size; 153 - 154 152 if (!buf->pages) 155 153 return -EINTR; 156 154 157 - while (usize > 0) { 158 - pfn = page_to_pfn(buf->pages[page_offset++]); 159 - vm_insert_mixed(vma, f_vaddr, pfn); 160 - f_vaddr += PAGE_SIZE; 161 - usize -= PAGE_SIZE; 162 - } 163 - 164 - return 0; 165 - } 166 - 167 - pfn = (buf->dma_addr >> PAGE_SHIFT) + page_offset; 155 + pfn = page_to_pfn(buf->pages[page_offset++]); 156 + } else 157 + pfn = (buf->dma_addr >> PAGE_SHIFT) + page_offset; 168 158 169 159 return vm_insert_mixed(vma, f_vaddr, pfn); 170 160 } ··· 514 524 if (!buffer->pages) 515 525 return -EINVAL; 516 526 527 + vma->vm_flags |= VM_MIXEDMAP; 528 + 517 529 do { 518 530 ret = vm_insert_page(vma, uaddr, buffer->pages[i++]); 519 531 if (ret) { ··· 702 710 int exynos_drm_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf) 703 711 { 704 712 struct drm_gem_object *obj = vma->vm_private_data; 705 - struct exynos_drm_gem_obj *exynos_gem_obj = to_exynos_gem_obj(obj); 706 713 struct drm_device *dev = obj->dev; 707 714 unsigned long f_vaddr; 708 715 pgoff_t page_offset; ··· 713 722 714 723 mutex_lock(&dev->struct_mutex); 715 724 716 - /* 717 - * allocate all pages as desired size if user wants to allocate 718 - * physically non-continuous memory. 719 - */ 720 - if (exynos_gem_obj->flags & EXYNOS_BO_NONCONTIG) { 721 - ret = exynos_drm_gem_get_pages(obj); 722 - if (ret < 0) 723 - goto err; 724 - } 725 - 726 725 ret = exynos_drm_gem_map_pages(obj, vma, f_vaddr, page_offset); 727 726 if (ret < 0) 728 727 DRM_ERROR("failed to map pages.\n"); 729 728 730 - err: 731 729 mutex_unlock(&dev->struct_mutex); 732 730 733 731 return convert_to_vm_err_msg(ret);
+7 -1
drivers/gpu/drm/i915/i915_gem_execbuffer.c
··· 1133 1133 return -EINVAL; 1134 1134 } 1135 1135 1136 + if (args->num_cliprects > UINT_MAX / sizeof(*cliprects)) { 1137 + DRM_DEBUG("execbuf with %u cliprects\n", 1138 + args->num_cliprects); 1139 + return -EINVAL; 1140 + } 1136 1141 cliprects = kmalloc(args->num_cliprects * sizeof(*cliprects), 1137 1142 GFP_KERNEL); 1138 1143 if (cliprects == NULL) { ··· 1409 1404 struct drm_i915_gem_exec_object2 *exec2_list = NULL; 1410 1405 int ret; 1411 1406 1412 - if (args->buffer_count < 1) { 1407 + if (args->buffer_count < 1 || 1408 + args->buffer_count > UINT_MAX / sizeof(*exec2_list)) { 1413 1409 DRM_DEBUG("execbuf2 with %d buffers\n", args->buffer_count); 1414 1410 return -EINVAL; 1415 1411 }
+1
drivers/gpu/drm/i915/i915_reg.h
··· 568 568 #define CM0_MASK_SHIFT 16 569 569 #define CM0_IZ_OPT_DISABLE (1<<6) 570 570 #define CM0_ZR_OPT_DISABLE (1<<5) 571 + #define CM0_STC_EVICT_DISABLE_LRA_SNB (1<<5) 571 572 #define CM0_DEPTH_EVICT_DISABLE (1<<4) 572 573 #define CM0_COLOR_EVICT_DISABLE (1<<3) 573 574 #define CM0_DEPTH_WRITE_DISABLE (1<<1)
+11 -18
drivers/gpu/drm/i915/intel_crt.c
··· 430 430 { 431 431 struct drm_device *dev = connector->dev; 432 432 struct intel_crt *crt = intel_attached_crt(connector); 433 - struct drm_crtc *crtc; 434 433 enum drm_connector_status status; 434 + struct intel_load_detect_pipe tmp; 435 435 436 436 if (I915_HAS_HOTPLUG(dev)) { 437 437 if (intel_crt_detect_hotplug(connector)) { ··· 450 450 return connector->status; 451 451 452 452 /* for pre-945g platforms use load detect */ 453 - crtc = crt->base.base.crtc; 454 - if (crtc && crtc->enabled) { 455 - status = intel_crt_load_detect(crt); 456 - } else { 457 - struct intel_load_detect_pipe tmp; 458 - 459 - if (intel_get_load_detect_pipe(&crt->base, connector, NULL, 460 - &tmp)) { 461 - if (intel_crt_detect_ddc(connector)) 462 - status = connector_status_connected; 463 - else 464 - status = intel_crt_load_detect(crt); 465 - intel_release_load_detect_pipe(&crt->base, connector, 466 - &tmp); 467 - } else 468 - status = connector_status_unknown; 469 - } 453 + if (intel_get_load_detect_pipe(&crt->base, connector, NULL, 454 + &tmp)) { 455 + if (intel_crt_detect_ddc(connector)) 456 + status = connector_status_connected; 457 + else 458 + status = intel_crt_load_detect(crt); 459 + intel_release_load_detect_pipe(&crt->base, connector, 460 + &tmp); 461 + } else 462 + status = connector_status_unknown; 470 463 471 464 return status; 472 465 }
+8
drivers/gpu/drm/i915/intel_ringbuffer.c
··· 401 401 if (INTEL_INFO(dev)->gen >= 6) { 402 402 I915_WRITE(INSTPM, 403 403 INSTPM_FORCE_ORDERING << 16 | INSTPM_FORCE_ORDERING); 404 + 405 + /* From the Sandybridge PRM, volume 1 part 3, page 24: 406 + * "If this bit is set, STCunit will have LRA as replacement 407 + * policy. [...] This bit must be reset. LRA replacement 408 + * policy is not supported." 409 + */ 410 + I915_WRITE(CACHE_MODE_0, 411 + CM0_STC_EVICT_DISABLE_LRA_SNB << CM0_MASK_SHIFT); 404 412 } 405 413 406 414 return ret;
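CACHE_MODE_0, like INSTPM above it, is a masked register: the high 16 bits of the written value select which of the low 16 bits actually change, which is why the patch shifts the enable bit by `CM0_MASK_SHIFT` and leaves the data bit zero to clear it, as the PRM requires. A hedged model of how such a write updates register state (this is our simplification, not hardware documentation):

```c
#include <assert.h>
#include <stdint.h>

/* Model of a 16-bit masked register: bits 31:16 of the written value
 * are a per-bit write enable for bits 15:0. Only enabled bits change;
 * everything else keeps its previous value. */
static uint32_t masked_reg_write(uint32_t reg, uint32_t val)
{
    uint32_t mask = val >> 16;    /* which bits to update */
    uint32_t bits = val & 0xffff; /* their new values */

    return (reg & ~mask) | (bits & mask);
}
```

The convenience is that a driver can flip one bit with a single write, without a read-modify-write cycle on the register.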
+18 -16
drivers/gpu/drm/i915/intel_sdvo.c
··· 731 731 uint16_t width, height; 732 732 uint16_t h_blank_len, h_sync_len, v_blank_len, v_sync_len; 733 733 uint16_t h_sync_offset, v_sync_offset; 734 + int mode_clock; 734 735 735 736 width = mode->crtc_hdisplay; 736 737 height = mode->crtc_vdisplay; ··· 746 745 h_sync_offset = mode->crtc_hsync_start - mode->crtc_hblank_start; 747 746 v_sync_offset = mode->crtc_vsync_start - mode->crtc_vblank_start; 748 747 749 - dtd->part1.clock = mode->clock / 10; 748 + mode_clock = mode->clock; 749 + mode_clock /= intel_mode_get_pixel_multiplier(mode) ?: 1; 750 + mode_clock /= 10; 751 + dtd->part1.clock = mode_clock; 752 + 750 753 dtd->part1.h_active = width & 0xff; 751 754 dtd->part1.h_blank = h_blank_len & 0xff; 752 755 dtd->part1.h_high = (((width >> 8) & 0xf) << 4) | ··· 1001 996 struct intel_sdvo *intel_sdvo = to_intel_sdvo(encoder); 1002 997 u32 sdvox; 1003 998 struct intel_sdvo_in_out_map in_out; 1004 - struct intel_sdvo_dtd input_dtd; 999 + struct intel_sdvo_dtd input_dtd, output_dtd; 1005 1000 int pixel_multiplier = intel_mode_get_pixel_multiplier(adjusted_mode); 1006 1001 int rate; 1007 1002 ··· 1026 1021 intel_sdvo->attached_output)) 1027 1022 return; 1028 1023 1029 - /* We have tried to get input timing in mode_fixup, and filled into 1030 - * adjusted_mode. 1031 - */ 1032 - if (intel_sdvo->is_tv || intel_sdvo->is_lvds) { 1033 - input_dtd = intel_sdvo->input_dtd; 1034 - } else { 1035 - /* Set the output timing to the screen */ 1036 - if (!intel_sdvo_set_target_output(intel_sdvo, 1037 - intel_sdvo->attached_output)) 1038 - return; 1039 - 1040 - intel_sdvo_get_dtd_from_mode(&input_dtd, adjusted_mode); 1041 - (void) intel_sdvo_set_output_timing(intel_sdvo, &input_dtd); 1042 - } 1024 + /* lvds has a special fixed output timing. 
*/ 1025 + if (intel_sdvo->is_lvds) 1026 + intel_sdvo_get_dtd_from_mode(&output_dtd, 1027 + intel_sdvo->sdvo_lvds_fixed_mode); 1028 + else 1029 + intel_sdvo_get_dtd_from_mode(&output_dtd, mode); 1030 + (void) intel_sdvo_set_output_timing(intel_sdvo, &output_dtd); 1043 1031 1044 1032 /* Set the input timing to the screen. Assume always input 0. */ 1045 1033 if (!intel_sdvo_set_target_input(intel_sdvo)) ··· 1050 1052 !intel_sdvo_set_tv_format(intel_sdvo)) 1051 1053 return; 1052 1054 1055 + /* We have tried to get input timing in mode_fixup, and filled into 1056 + * adjusted_mode. 1057 + */ 1058 + intel_sdvo_get_dtd_from_mode(&input_dtd, adjusted_mode); 1053 1059 (void) intel_sdvo_set_input_timing(intel_sdvo, &input_dtd); 1054 1060 1055 1061 switch (pixel_multiplier) {
+1 -1
drivers/gpu/drm/nouveau/nouveau_acpi.c
··· 270 270 struct acpi_buffer buffer = {sizeof(acpi_method_name), acpi_method_name}; 271 271 struct pci_dev *pdev = NULL; 272 272 int has_dsm = 0; 273 - int has_optimus; 273 + int has_optimus = 0; 274 274 int vga_count = 0; 275 275 bool guid_valid; 276 276 int retval;
+7 -3
drivers/gpu/drm/nouveau/nouveau_bios.c
··· 6156 6156 6157 6157 /* heuristic: if we ever get a non-zero connector field, assume 6158 6158 * that all the indices are valid and we don't need fake them. 6159 + * 6160 + * and, as usual, a blacklist of boards with bad bios data.. 6159 6161 */ 6160 - for (i = 0; i < dcbt->entries; i++) { 6161 - if (dcbt->entry[i].connector) 6162 - return; 6162 + if (!nv_match_device(bios->dev, 0x0392, 0x107d, 0x20a2)) { 6163 + for (i = 0; i < dcbt->entries; i++) { 6164 + if (dcbt->entry[i].connector) 6165 + return; 6166 + } 6163 6167 } 6164 6168 6165 6169 /* no useful connector info available, we need to make it up
+3 -1
drivers/gpu/drm/nouveau/nouveau_hdmi.c
··· 32 32 hdmi_sor(struct drm_encoder *encoder) 33 33 { 34 34 struct drm_nouveau_private *dev_priv = encoder->dev->dev_private; 35 - if (dev_priv->chipset < 0xa3) 35 + if (dev_priv->chipset < 0xa3 || 36 + dev_priv->chipset == 0xaa || 37 + dev_priv->chipset == 0xac) 36 38 return false; 37 39 return true; 38 40 }
+1 -1
drivers/gpu/drm/nouveau/nv10_gpio.c
··· 65 65 if (line < 10) { 66 66 line = (line - 2) * 4; 67 67 reg = NV_PCRTC_GPIO_EXT; 68 - mask = 0x00000003 << ((line - 2) * 4); 68 + mask = 0x00000003; 69 69 data = (dir << 1) | out; 70 70 } else 71 71 if (line < 14) {
+5
drivers/gpu/drm/nouveau/nvc0_fb.c
··· 54 54 nvc0_mfb_subp_isr(dev, unit, subp); 55 55 units &= ~(1 << unit); 56 56 } 57 + 58 + /* we do something horribly wrong and upset PMFB a lot, so mask off 59 + * interrupts from it after the first one until it's fixed 60 + */ 61 + nv_mask(dev, 0x000640, 0x02000000, 0x00000000); 57 62 } 58 63 59 64 static void
+5 -2
drivers/gpu/drm/radeon/atombios_crtc.c
··· 575 575 576 576 if (rdev->family < CHIP_RV770) 577 577 pll->flags |= RADEON_PLL_PREFER_MINM_OVER_MAXP; 578 + /* use frac fb div on APUs */ 579 + if (ASIC_IS_DCE41(rdev) || ASIC_IS_DCE61(rdev)) 580 + pll->flags |= RADEON_PLL_USE_FRAC_FB_DIV; 578 581 } else { 579 582 pll->flags |= RADEON_PLL_LEGACY; 580 583 ··· 958 955 break; 959 956 } 960 957 961 - if (radeon_encoder->active_device & 962 - (ATOM_DEVICE_LCD_SUPPORT | ATOM_DEVICE_DFP_SUPPORT)) { 958 + if ((radeon_encoder->active_device & (ATOM_DEVICE_LCD_SUPPORT | ATOM_DEVICE_DFP_SUPPORT)) || 959 + (radeon_encoder_get_dp_bridge_encoder_id(encoder) != ENCODER_OBJECT_ID_NONE)) { 963 960 struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; 964 961 struct drm_connector *connector = 965 962 radeon_get_connector_for_encoder(encoder);
+2 -1
drivers/gpu/drm/radeon/radeon_display.c
··· 533 533 radeon_legacy_init_crtc(dev, radeon_crtc); 534 534 } 535 535 536 - static const char *encoder_names[36] = { 536 + static const char *encoder_names[37] = { 537 537 "NONE", 538 538 "INTERNAL_LVDS", 539 539 "INTERNAL_TMDS1", ··· 570 570 "INTERNAL_UNIPHY2", 571 571 "NUTMEG", 572 572 "TRAVIS", 573 + "INTERNAL_VCE" 573 574 }; 574 575 575 576 static const char *connector_names[15] = {
+1 -1
drivers/hsi/clients/hsi_char.c
··· 123 123 static unsigned int hsc_major; 124 124 /* Maximum buffer size that hsi_char will accept from userspace */ 125 125 static unsigned int max_data_size = 0x1000; 126 - module_param(max_data_size, uint, S_IRUSR | S_IWUSR); 126 + module_param(max_data_size, uint, 0); 127 127 MODULE_PARM_DESC(max_data_size, "max read/write data size [4,8..65536] (^2)"); 128 128 129 129 static void hsc_add_tail(struct hsc_channel *channel, struct hsi_msg *msg,
+118 -105
drivers/hsi/hsi.c
··· 21 21 */ 22 22 #include <linux/hsi/hsi.h> 23 23 #include <linux/compiler.h> 24 - #include <linux/rwsem.h> 25 24 #include <linux/list.h> 26 - #include <linux/spinlock.h> 27 25 #include <linux/kobject.h> 28 26 #include <linux/slab.h> 29 27 #include <linux/string.h> 28 + #include <linux/notifier.h> 30 29 #include "hsi_core.h" 31 - 32 - static struct device_type hsi_ctrl = { 33 - .name = "hsi_controller", 34 - }; 35 - 36 - static struct device_type hsi_cl = { 37 - .name = "hsi_client", 38 - }; 39 - 40 - static struct device_type hsi_port = { 41 - .name = "hsi_port", 42 - }; 43 30 44 31 static ssize_t modalias_show(struct device *dev, 45 32 struct device_attribute *a __maybe_unused, char *buf) ··· 41 54 42 55 static int hsi_bus_uevent(struct device *dev, struct kobj_uevent_env *env) 43 56 { 44 - if (dev->type == &hsi_cl) 45 - add_uevent_var(env, "MODALIAS=hsi:%s", dev_name(dev)); 57 + add_uevent_var(env, "MODALIAS=hsi:%s", dev_name(dev)); 46 58 47 59 return 0; 48 60 } ··· 66 80 static void hsi_new_client(struct hsi_port *port, struct hsi_board_info *info) 67 81 { 68 82 struct hsi_client *cl; 69 - unsigned long flags; 70 83 71 84 cl = kzalloc(sizeof(*cl), GFP_KERNEL); 72 85 if (!cl) 73 86 return; 74 - cl->device.type = &hsi_cl; 75 87 cl->tx_cfg = info->tx_cfg; 76 88 cl->rx_cfg = info->rx_cfg; 77 89 cl->device.bus = &hsi_bus_type; ··· 77 93 cl->device.release = hsi_client_release; 78 94 dev_set_name(&cl->device, info->name); 79 95 cl->device.platform_data = info->platform_data; 80 - spin_lock_irqsave(&port->clock, flags); 81 - list_add_tail(&cl->link, &port->clients); 82 - spin_unlock_irqrestore(&port->clock, flags); 83 96 if (info->archdata) 84 97 cl->device.archdata = *info->archdata; 85 98 if (device_register(&cl->device) < 0) { 86 99 pr_err("hsi: failed to register client: %s\n", info->name); 87 - kfree(cl); 100 + put_device(&cl->device); 88 101 } 89 102 } 90 103 ··· 101 120 102 121 static int hsi_remove_client(struct device *dev, void *data __maybe_unused) 103 
122 { 104 - struct hsi_client *cl = to_hsi_client(dev); 105 - struct hsi_port *port = to_hsi_port(dev->parent); 106 - unsigned long flags; 107 - 108 - spin_lock_irqsave(&port->clock, flags); 109 - list_del(&cl->link); 110 - spin_unlock_irqrestore(&port->clock, flags); 111 123 device_unregister(dev); 112 124 113 125 return 0; ··· 114 140 return 0; 115 141 } 116 142 117 - static void hsi_controller_release(struct device *dev __maybe_unused) 143 + static void hsi_controller_release(struct device *dev) 118 144 { 145 + struct hsi_controller *hsi = to_hsi_controller(dev); 146 + 147 + kfree(hsi->port); 148 + kfree(hsi); 119 149 } 120 150 121 - static void hsi_port_release(struct device *dev __maybe_unused) 151 + static void hsi_port_release(struct device *dev) 122 152 { 153 + kfree(to_hsi_port(dev)); 123 154 } 124 155 125 156 /** ··· 149 170 unsigned int i; 150 171 int err; 151 172 152 - hsi->device.type = &hsi_ctrl; 153 - hsi->device.bus = &hsi_bus_type; 154 - hsi->device.release = hsi_controller_release; 155 - err = device_register(&hsi->device); 173 + err = device_add(&hsi->device); 156 174 if (err < 0) 157 175 return err; 158 176 for (i = 0; i < hsi->num_ports; i++) { 159 - hsi->port[i].device.parent = &hsi->device; 160 - hsi->port[i].device.bus = &hsi_bus_type; 161 - hsi->port[i].device.release = hsi_port_release; 162 - hsi->port[i].device.type = &hsi_port; 163 - INIT_LIST_HEAD(&hsi->port[i].clients); 164 - spin_lock_init(&hsi->port[i].clock); 165 - err = device_register(&hsi->port[i].device); 177 + hsi->port[i]->device.parent = &hsi->device; 178 + err = device_add(&hsi->port[i]->device); 166 179 if (err < 0) 167 180 goto out; 168 181 } ··· 163 192 164 193 return 0; 165 194 out: 166 - hsi_unregister_controller(hsi); 195 + while (i-- > 0) 196 + device_del(&hsi->port[i]->device); 197 + device_del(&hsi->device); 167 198 168 199 return err; 169 200 } ··· 196 223 } 197 224 198 225 /** 226 + * hsi_put_controller - Free an HSI controller 227 + * 228 + * @hsi: Pointer to the 
HSI controller to freed 229 + * 230 + * HSI controller drivers should only use this function if they need 231 + * to free their allocated hsi_controller structures before a successful 232 + * call to hsi_register_controller. Other use is not allowed. 233 + */ 234 + void hsi_put_controller(struct hsi_controller *hsi) 235 + { 236 + unsigned int i; 237 + 238 + if (!hsi) 239 + return; 240 + 241 + for (i = 0; i < hsi->num_ports; i++) 242 + if (hsi->port && hsi->port[i]) 243 + put_device(&hsi->port[i]->device); 244 + put_device(&hsi->device); 245 + } 246 + EXPORT_SYMBOL_GPL(hsi_put_controller); 247 + 248 + /** 199 249 * hsi_alloc_controller - Allocate an HSI controller and its ports 200 250 * @n_ports: Number of ports on the HSI controller 201 251 * @flags: Kernel allocation flags ··· 228 232 struct hsi_controller *hsi_alloc_controller(unsigned int n_ports, gfp_t flags) 229 233 { 230 234 struct hsi_controller *hsi; 231 - struct hsi_port *port; 235 + struct hsi_port **port; 232 236 unsigned int i; 233 237 234 238 if (!n_ports) 235 239 return NULL; 236 240 237 - port = kzalloc(sizeof(*port)*n_ports, flags); 238 - if (!port) 239 - return NULL; 240 241 hsi = kzalloc(sizeof(*hsi), flags); 241 242 if (!hsi) 242 - goto out; 243 - for (i = 0; i < n_ports; i++) { 244 - dev_set_name(&port[i].device, "port%d", i); 245 - port[i].num = i; 246 - port[i].async = hsi_dummy_msg; 247 - port[i].setup = hsi_dummy_cl; 248 - port[i].flush = hsi_dummy_cl; 249 - port[i].start_tx = hsi_dummy_cl; 250 - port[i].stop_tx = hsi_dummy_cl; 251 - port[i].release = hsi_dummy_cl; 252 - mutex_init(&port[i].lock); 243 + return NULL; 244 + port = kzalloc(sizeof(*port)*n_ports, flags); 245 + if (!port) { 246 + kfree(hsi); 247 + return NULL; 253 248 } 254 249 hsi->num_ports = n_ports; 255 250 hsi->port = port; 251 + hsi->device.release = hsi_controller_release; 252 + device_initialize(&hsi->device); 253 + 254 + for (i = 0; i < n_ports; i++) { 255 + port[i] = kzalloc(sizeof(**port), flags); 256 + if (port[i] == 
NULL) 257 + goto out; 258 + port[i]->num = i; 259 + port[i]->async = hsi_dummy_msg; 260 + port[i]->setup = hsi_dummy_cl; 261 + port[i]->flush = hsi_dummy_cl; 262 + port[i]->start_tx = hsi_dummy_cl; 263 + port[i]->stop_tx = hsi_dummy_cl; 264 + port[i]->release = hsi_dummy_cl; 265 + mutex_init(&port[i]->lock); 266 + ATOMIC_INIT_NOTIFIER_HEAD(&port[i]->n_head); 267 + dev_set_name(&port[i]->device, "port%d", i); 268 + hsi->port[i]->device.release = hsi_port_release; 269 + device_initialize(&hsi->port[i]->device); 270 + } 256 271 257 272 return hsi; 258 273 out: 259 - kfree(port); 274 + hsi_put_controller(hsi); 260 275 261 276 return NULL; 262 277 } 263 278 EXPORT_SYMBOL_GPL(hsi_alloc_controller); 264 - 265 - /** 266 - * hsi_free_controller - Free an HSI controller 267 - * @hsi: Pointer to HSI controller 268 - */ 269 - void hsi_free_controller(struct hsi_controller *hsi) 270 - { 271 - if (!hsi) 272 - return; 273 - 274 - kfree(hsi->port); 275 - kfree(hsi); 276 - } 277 - EXPORT_SYMBOL_GPL(hsi_free_controller); 278 279 279 280 /** 280 281 * hsi_free_msg - Free an HSI message ··· 407 414 } 408 415 EXPORT_SYMBOL_GPL(hsi_release_port); 409 416 410 - static int hsi_start_rx(struct hsi_client *cl, void *data __maybe_unused) 417 + static int hsi_event_notifier_call(struct notifier_block *nb, 418 + unsigned long event, void *data __maybe_unused) 411 419 { 412 - if (cl->hsi_start_rx) 413 - (*cl->hsi_start_rx)(cl); 420 + struct hsi_client *cl = container_of(nb, struct hsi_client, nb); 421 + 422 + (*cl->ehandler)(cl, event); 414 423 415 424 return 0; 416 425 } 417 426 418 - static int hsi_stop_rx(struct hsi_client *cl, void *data __maybe_unused) 427 + /** 428 + * hsi_register_port_event - Register a client to receive port events 429 + * @cl: HSI client that wants to receive port events 430 + * @cb: Event handler callback 431 + * 432 + * Clients should register a callback to be able to receive 433 + * events from the ports. Registration should happen after 434 + * claiming the port. 
435 + * The handler can be called in interrupt context. 436 + * 437 + * Returns -errno on error, or 0 on success. 438 + */ 439 + int hsi_register_port_event(struct hsi_client *cl, 440 + void (*handler)(struct hsi_client *, unsigned long)) 419 441 { 420 - if (cl->hsi_stop_rx) 421 - (*cl->hsi_stop_rx)(cl); 442 + struct hsi_port *port = hsi_get_port(cl); 422 443 423 - return 0; 444 + if (!handler || cl->ehandler) 445 + return -EINVAL; 446 + if (!hsi_port_claimed(cl)) 447 + return -EACCES; 448 + cl->ehandler = handler; 449 + cl->nb.notifier_call = hsi_event_notifier_call; 450 + 451 + return atomic_notifier_chain_register(&port->n_head, &cl->nb); 424 452 } 453 + EXPORT_SYMBOL_GPL(hsi_register_port_event); 425 454 426 - static int hsi_port_for_each_client(struct hsi_port *port, void *data, 427 - int (*fn)(struct hsi_client *cl, void *data)) 455 + /** 456 + * hsi_unregister_port_event - Stop receiving port events for a client 457 + * @cl: HSI client that wants to stop receiving port events 458 + * 459 + * Clients should call this function before releasing their associated 460 + * port. 461 + * 462 + * Returns -errno on error, or 0 on success. 
463 + */ 464 + int hsi_unregister_port_event(struct hsi_client *cl) 428 465 { 429 - struct hsi_client *cl; 466 + struct hsi_port *port = hsi_get_port(cl); 467 + int err; 430 468 431 - spin_lock(&port->clock); 432 - list_for_each_entry(cl, &port->clients, link) { 433 - spin_unlock(&port->clock); 434 - (*fn)(cl, data); 435 - spin_lock(&port->clock); 436 - } 437 - spin_unlock(&port->clock); 469 + WARN_ON(!hsi_port_claimed(cl)); 438 470 439 - return 0; 471 + err = atomic_notifier_chain_unregister(&port->n_head, &cl->nb); 472 + if (!err) 473 + cl->ehandler = NULL; 474 + 475 + return err; 440 476 } 477 + EXPORT_SYMBOL_GPL(hsi_unregister_port_event); 441 478 442 479 /** 443 480 * hsi_event -Notifies clients about port events ··· 481 458 * Events: 482 459 * HSI_EVENT_START_RX - Incoming wake line high 483 460 * HSI_EVENT_STOP_RX - Incoming wake line down 461 + * 462 + * Returns -errno on error, or 0 on success. 484 463 */ 485 - void hsi_event(struct hsi_port *port, unsigned int event) 464 + int hsi_event(struct hsi_port *port, unsigned long event) 486 465 { 487 - int (*fn)(struct hsi_client *cl, void *data); 488 - 489 - switch (event) { 490 - case HSI_EVENT_START_RX: 491 - fn = hsi_start_rx; 492 - break; 493 - case HSI_EVENT_STOP_RX: 494 - fn = hsi_stop_rx; 495 - break; 496 - default: 497 - return; 498 - } 499 - hsi_port_for_each_client(port, NULL, fn); 466 + return atomic_notifier_call_chain(&port->n_head, event, NULL); 500 467 } 501 468 EXPORT_SYMBOL_GPL(hsi_event); 502 469
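The hsi.c rework above replaces a hand-rolled client list walked under a spinlock with an atomic notifier chain, so `hsi_event()` collapses to a single `atomic_notifier_call_chain()` call. A toy, single-threaded model of the register-then-broadcast pattern (the kernel's `atomic_notifier_*` helpers also handle locking; this sketch deliberately does not):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal notifier chain: clients register a callback once, and an
 * event is delivered by walking the chain. */
struct notifier_block {
    int (*call)(struct notifier_block *nb, unsigned long event);
    struct notifier_block *next;
};

struct notifier_head {
    struct notifier_block *head;
};

static void chain_register(struct notifier_head *h, struct notifier_block *nb)
{
    nb->next = h->head;
    h->head = nb;
}

static int chain_call(struct notifier_head *h, unsigned long event)
{
    struct notifier_block *nb;
    int ret = 0;

    for (nb = h->head; nb; nb = nb->next)
        ret = nb->call(nb, event);
    return ret;
}

/* Sample callback used below: records the last event seen. */
static unsigned long last_event;

static int record_event(struct notifier_block *nb, unsigned long event)
{
    (void)nb;
    last_event = event;
    return 0;
}
```

The design win mirrors the patch: event fan-out logic lives in shared infrastructure, and each client only supplies a callback.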
+5 -7
drivers/hwmon/ad7314.c
··· 47 47 u16 rx ____cacheline_aligned; 48 48 }; 49 49 50 - static int ad7314_spi_read(struct ad7314_data *chip, s16 *data) 50 + static int ad7314_spi_read(struct ad7314_data *chip) 51 51 { 52 52 int ret; 53 53 ··· 57 57 return ret; 58 58 } 59 59 60 - *data = be16_to_cpu(chip->rx); 61 - 62 - return ret; 60 + return be16_to_cpu(chip->rx); 63 61 } 64 62 65 63 static ssize_t ad7314_show_temperature(struct device *dev, ··· 68 70 s16 data; 69 71 int ret; 70 72 71 - ret = ad7314_spi_read(chip, &data); 73 + ret = ad7314_spi_read(chip); 72 74 if (ret < 0) 73 75 return ret; 74 76 switch (spi_get_device_id(chip->spi_dev)->driver_data) { 75 77 case ad7314: 76 - data = (data & AD7314_TEMP_MASK) >> AD7314_TEMP_OFFSET; 78 + data = (ret & AD7314_TEMP_MASK) >> AD7314_TEMP_OFFSET; 77 79 data = (data << 6) >> 6; 78 80 79 81 return sprintf(buf, "%d\n", 250 * data); ··· 84 86 * with a sign bit - which is a 14 bit 2's complement 85 87 * register. 1lsb - 31.25 milli degrees centigrade 86 88 */ 87 - data &= ADT7301_TEMP_MASK; 89 + data = ret & ADT7301_TEMP_MASK; 88 90 data = (data << 2) >> 2; 89 91 90 92 return sprintf(buf, "%d\n",
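The ad7314 change folds the sample into the return code: a negative return is an error and anything non-negative is the data, which removes the separate output parameter. An illustrative model of that value-or-errno convention (names and values are ours, not the driver's):

```c
#include <assert.h>

#define EIO_ERR (-5) /* stand-in for -EIO */

/* On failure returns a negative errno-style code; on success returns
 * the (non-negative) raw sample, as ad7314_spi_read() now does. */
static int read_sample(int fail)
{
    if (fail)
        return EIO_ERR;
    return 0x1234; /* hypothetical raw 16-bit sample */
}

static int use_sample(int fail, int *out)
{
    int ret = read_sample(fail);

    if (ret < 0)
        return ret;      /* propagate the error unchanged */
    *out = ret & 0x3fff; /* success path: mask down to the data bits */
    return 0;
}
```

This convention works only while the valid data range never collides with negative error codes, which holds here because the sample is at most 16 bits.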
+5 -1
drivers/hwmon/coretemp.c
··· 52 52 MODULE_PARM_DESC(tjmax, "TjMax value in degrees Celsius"); 53 53 54 54 #define BASE_SYSFS_ATTR_NO 2 /* Sysfs Base attr no for coretemp */ 55 - #define NUM_REAL_CORES 16 /* Number of Real cores per cpu */ 55 + #define NUM_REAL_CORES 32 /* Number of Real cores per cpu */ 56 56 #define CORETEMP_NAME_LENGTH 17 /* String Length of attrs */ 57 57 #define MAX_CORE_ATTRS 4 /* Maximum no of basic attrs */ 58 58 #define TOTAL_ATTRS (MAX_CORE_ATTRS + 1) ··· 708 708 pdata = platform_get_drvdata(pdev); 709 709 710 710 indx = TO_ATTR_NO(cpu); 711 + 712 + /* The core id is too big, just return */ 713 + if (indx > MAX_CORE_DATA - 1) 714 + return; 711 715 712 716 if (pdata->core_data[indx] && pdata->core_data[indx]->cpu == cpu) 713 717 coretemp_remove_core(pdata, &pdev->dev, indx);
+6 -3
drivers/hwmon/fam15h_power.c
··· 128 128 * counter saturations resulting in bogus power readings. 129 129 * We correct this value ourselves to cope with older BIOSes. 130 130 */ 131 + static DEFINE_PCI_DEVICE_TABLE(affected_device) = { 132 + { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_15H_NB_F4) }, 133 + { 0 } 134 + }; 135 + 131 136 static void __devinit tweak_runavg_range(struct pci_dev *pdev) 132 137 { 133 138 u32 val; 134 - const struct pci_device_id affected_device = { 135 - PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_15H_NB_F4) }; 136 139 137 140 /* 138 141 * let this quirk apply only to the current version of the 139 142 * northbridge, since future versions may change the behavior 140 143 */ 141 - if (!pci_match_id(&affected_device, pdev)) 144 + if (!pci_match_id(affected_device, pdev)) 142 145 return; 143 146 144 147 pci_bus_read_config_dword(pdev->bus,
+2 -2
drivers/i2c/busses/i2c-eg20t.c
··· 324 324 { 325 325 long ret; 326 326 ret = wait_event_timeout(pch_event, 327 - (adap->pch_event_flag != 0), msecs_to_jiffies(50)); 327 + (adap->pch_event_flag != 0), msecs_to_jiffies(1000)); 328 328 329 329 if (ret == 0) { 330 330 pch_err(adap, "timeout: %x\n", adap->pch_event_flag); ··· 1063 1063 1064 1064 MODULE_DESCRIPTION("Intel EG20T PCH/LAPIS Semico ML7213/ML7223/ML7831 IOH I2C"); 1065 1065 MODULE_LICENSE("GPL"); 1066 - MODULE_AUTHOR("Tomoya MORINAGA. <tomoya-linux@dsn.lapis-semi.com>"); 1066 + MODULE_AUTHOR("Tomoya MORINAGA. <tomoya.rohm@gmail.com>"); 1067 1067 module_param(pch_i2c_speed, int, (S_IRUSR | S_IWUSR)); 1068 1068 module_param(pch_clk, int, (S_IRUSR | S_IWUSR));
+4 -4
drivers/i2c/busses/i2c-mxs.c
··· 227 227 return -EINVAL; 228 228 229 229 init_completion(&i2c->cmd_complete); 230 + i2c->cmd_err = 0; 230 231 231 232 flags = stop ? MXS_I2C_CTRL0_POST_SEND_STOP : 0; 232 233 ··· 253 252 254 253 if (i2c->cmd_err == -ENXIO) 255 254 mxs_i2c_reset(i2c); 255 + else 256 + writel(MXS_I2C_QUEUECTRL_QUEUE_RUN, 257 + i2c->regs + MXS_I2C_QUEUECTRL_CLR); 256 258 257 259 dev_dbg(i2c->dev, "Done with err=%d\n", i2c->cmd_err); 258 260 ··· 303 299 MXS_I2C_CTRL1_SLAVE_STOP_IRQ | MXS_I2C_CTRL1_SLAVE_IRQ)) 304 300 /* MXS_I2C_CTRL1_OVERSIZE_XFER_TERM_IRQ is only for slaves */ 305 301 i2c->cmd_err = -EIO; 306 - else 307 - i2c->cmd_err = 0; 308 302 309 303 is_last_cmd = (readl(i2c->regs + MXS_I2C_QUEUESTAT) & 310 304 MXS_I2C_QUEUESTAT_WRITE_QUEUE_CNT_MASK) == 0; ··· 386 384 if (ret) 387 385 return -EBUSY; 388 386 389 - writel(MXS_I2C_QUEUECTRL_QUEUE_RUN, 390 - i2c->regs + MXS_I2C_QUEUECTRL_CLR); 391 387 writel(MXS_I2C_CTRL0_SFTRST, i2c->regs + MXS_I2C_CTRL0_SET); 392 388 393 389 platform_set_drvdata(pdev, NULL);
+1 -2
drivers/i2c/busses/i2c-pnx.c
··· 546 546 { 547 547 struct i2c_pnx_algo_data *alg_data = platform_get_drvdata(pdev); 548 548 549 - /* FIXME: shouldn't this be clk_disable? */ 550 - clk_enable(alg_data->clk); 549 + clk_disable(alg_data->clk); 551 550 552 551 return 0; 553 552 }
+8
drivers/i2c/busses/i2c-tegra.c
··· 516 516 if (likely(i2c_dev->msg_err == I2C_ERR_NONE)) 517 517 return 0; 518 518 519 + /* 520 + * NACK interrupt is generated before the I2C controller generates the 521 + * STOP condition on the bus. So wait for 2 clock periods before resetting 522 + * the controller so that STOP condition has been delivered properly. 523 + */ 524 + if (i2c_dev->msg_err == I2C_ERR_NO_ACK) 525 + udelay(DIV_ROUND_UP(2 * 1000000, i2c_dev->bus_clk_rate)); 526 + 519 527 tegra_i2c_init(i2c_dev); 520 528 if (i2c_dev->msg_err == I2C_ERR_NO_ACK) { 521 529 if (msg->flags & I2C_M_IGNORE_NAK)
+5 -3
drivers/infiniband/core/mad.c
··· 1854 1854 response->mad.mad.mad_hdr.method = IB_MGMT_METHOD_GET_RESP; 1855 1855 response->mad.mad.mad_hdr.status = 1856 1856 cpu_to_be16(IB_MGMT_MAD_STATUS_UNSUPPORTED_METHOD_ATTRIB); 1857 + if (recv->mad.mad.mad_hdr.mgmt_class == IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE) 1858 + response->mad.mad.mad_hdr.status |= IB_SMP_DIRECTION; 1857 1859 1858 1860 return true; 1859 1861 } else { ··· 1871 1869 struct ib_mad_list_head *mad_list; 1872 1870 struct ib_mad_agent_private *mad_agent; 1873 1871 int port_num; 1872 + int ret = IB_MAD_RESULT_SUCCESS; 1874 1873 1875 1874 mad_list = (struct ib_mad_list_head *)(unsigned long)wc->wr_id; 1876 1875 qp_info = mad_list->mad_queue->qp_info; ··· 1955 1952 local: 1956 1953 /* Give driver "right of first refusal" on incoming MAD */ 1957 1954 if (port_priv->device->process_mad) { 1958 - int ret; 1959 - 1960 1955 ret = port_priv->device->process_mad(port_priv->device, 0, 1961 1956 port_priv->port_num, 1962 1957 wc, &recv->grh, ··· 1982 1981 * or via recv_handler in ib_mad_complete_recv() 1983 1982 */ 1984 1983 recv = NULL; 1985 - } else if (generate_unmatched_resp(recv, response)) { 1984 + } else if ((ret & IB_MAD_RESULT_SUCCESS) && 1985 + generate_unmatched_resp(recv, response)) { 1986 1986 agent_send_response(&response->mad.mad, &recv->grh, wc, 1987 1987 port_priv->device, port_num, qp_info->qp->qp_num); 1988 1988 }
+1 -1
drivers/infiniband/hw/mlx4/main.c
··· 247 247 err = mlx4_MAD_IFC(to_mdev(ibdev), 1, 1, port, 248 248 NULL, NULL, in_mad, out_mad); 249 249 if (err) 250 - return err; 250 + goto out; 251 251 252 252 /* Checking LinkSpeedActive for FDR-10 */ 253 253 if (out_mad->data[15] & 0x1)
+2 -1
drivers/input/mouse/synaptics.c
··· 274 274 static unsigned char param = 0xc8; 275 275 struct synaptics_data *priv = psmouse->private; 276 276 277 - if (!SYN_CAP_ADV_GESTURE(priv->ext_cap_0c)) 277 + if (!(SYN_CAP_ADV_GESTURE(priv->ext_cap_0c) || 278 + SYN_CAP_IMAGE_SENSOR(priv->ext_cap_0c))) 278 279 return 0; 279 280 280 281 if (psmouse_sliced_command(psmouse, SYN_QUE_MODEL))
+1
drivers/mfd/omap-usb-host.c
··· 25 25 #include <linux/clk.h> 26 26 #include <linux/dma-mapping.h> 27 27 #include <linux/spinlock.h> 28 + #include <plat/cpu.h> 28 29 #include <plat/usb.h> 29 30 #include <linux/pm_runtime.h> 30 31
+3
drivers/mmc/host/mxs-mmc.c
··· 363 363 goto out; 364 364 365 365 dmaengine_submit(desc); 366 + dma_async_issue_pending(host->dmach); 366 367 return; 367 368 368 369 out: ··· 404 403 goto out; 405 404 406 405 dmaengine_submit(desc); 406 + dma_async_issue_pending(host->dmach); 407 407 return; 408 408 409 409 out: ··· 533 531 goto out; 534 532 535 533 dmaengine_submit(desc); 534 + dma_async_issue_pending(host->dmach); 536 535 return; 537 536 out: 538 537 dev_warn(mmc_dev(host->mmc),
+1
drivers/mtd/nand/gpmi-nand/gpmi-nand.c
··· 266 266 desc->callback = dma_irq_callback; 267 267 desc->callback_param = this; 268 268 dmaengine_submit(desc); 269 + dma_async_issue_pending(get_dma_chan(this)); 269 270 270 271 /* Wait for the interrupt from the DMA block. */ 271 272 err = wait_for_completion_timeout(dma_c, msecs_to_jiffies(1000));
+22 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 9485 9485 return bnx2x_prev_mcp_done(bp); 9486 9486 } 9487 9487 9488 + /* previous driver DMAE transaction may have occurred when pre-boot stage ended 9489 + * and boot began, or when kdump kernel was loaded. Either case would invalidate 9490 + * the addresses of the transaction, resulting in was-error bit set in the pci 9491 + * causing all hw-to-host pcie transactions to timeout. If this happened we want 9492 + * to clear the interrupt which detected this from the pglueb and the was done 9493 + * bit 9494 + */ 9495 + static void __devinit bnx2x_prev_interrupted_dmae(struct bnx2x *bp) 9496 + { 9497 + u32 val = REG_RD(bp, PGLUE_B_REG_PGLUE_B_INT_STS); 9498 + if (val & PGLUE_B_PGLUE_B_INT_STS_REG_WAS_ERROR_ATTN) { 9499 + BNX2X_ERR("was error bit was found to be set in pglueb upon startup. Clearing"); 9500 + REG_WR(bp, PGLUE_B_REG_WAS_ERROR_PF_7_0_CLR, 1 << BP_FUNC(bp)); 9501 + } 9502 + } 9503 + 9488 9504 static int __devinit bnx2x_prev_unload(struct bnx2x *bp) 9489 9505 { 9490 9506 int time_counter = 10; 9491 9507 u32 rc, fw, hw_lock_reg, hw_lock_val; 9492 9508 BNX2X_DEV_INFO("Entering Previous Unload Flow\n"); 9493 9509 9494 - /* Release previously held locks */ 9510 + /* clear hw from errors which may have resulted from an interrupted 9511 + * dmae transaction. 9512 + */ 9513 + bnx2x_prev_interrupted_dmae(bp); 9514 + 9515 + /* Release previously held locks */ 9495 9516 hw_lock_reg = (BP_FUNC(bp) <= 5) ? 9496 9517 (MISC_REG_DRIVER_CONTROL_1 + BP_FUNC(bp) * 8) : 9497 9518 (MISC_REG_DRIVER_CONTROL_7 + (BP_FUNC(bp) - 6) * 8);
+16 -2
drivers/net/ethernet/broadcom/tg3.c
··· 888 888 if (sblk->status & SD_STATUS_LINK_CHG) 889 889 work_exists = 1; 890 890 } 891 - /* check for RX/TX work to do */ 892 - if (sblk->idx[0].tx_consumer != tnapi->tx_cons || 891 + 892 + /* check for TX work to do */ 893 + if (sblk->idx[0].tx_consumer != tnapi->tx_cons) 894 + work_exists = 1; 895 + 896 + /* check for RX work to do */ 897 + if (tnapi->rx_rcb_prod_idx && 893 898 *(tnapi->rx_rcb_prod_idx) != tnapi->rx_rcb_ptr) 894 899 work_exists = 1; 895 900 ··· 6177 6172 return work_done; 6178 6173 } 6179 6174 6175 + if (!tnapi->rx_rcb_prod_idx) 6176 + return work_done; 6177 + 6180 6178 /* run RX thread, within the bounds set by NAPI. 6181 6179 * All RX "locking" is done by ensuring outside 6182 6180 * code synchronizes with tg3->napi.poll() ··· 7629 7621 */ 7630 7622 switch (i) { 7631 7623 default: 7624 + if (tg3_flag(tp, ENABLE_RSS)) { 7625 + tnapi->rx_rcb_prod_idx = NULL; 7626 + break; 7627 + } 7628 + /* Fall through */ 7629 + case 1: 7632 7630 tnapi->rx_rcb_prod_idx = &sblk->idx[0].rx_producer; 7633 7631 break; 7634 7632 case 2:
+46 -46
drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
··· 1150 1150 } 1151 1151 1152 1152 /** 1153 + * t3_synchronize_rx - wait for current Rx processing on a port to complete 1154 + * @adap: the adapter 1155 + * @p: the port 1156 + * 1157 + * Ensures that current Rx processing on any of the queues associated with 1158 + * the given port completes before returning. We do this by acquiring and 1159 + * releasing the locks of the response queues associated with the port. 1160 + */ 1161 + static void t3_synchronize_rx(struct adapter *adap, const struct port_info *p) 1162 + { 1163 + int i; 1164 + 1165 + for (i = p->first_qset; i < p->first_qset + p->nqsets; i++) { 1166 + struct sge_rspq *q = &adap->sge.qs[i].rspq; 1167 + 1168 + spin_lock_irq(&q->lock); 1169 + spin_unlock_irq(&q->lock); 1170 + } 1171 + } 1172 + 1173 + static void cxgb_vlan_mode(struct net_device *dev, netdev_features_t features) 1174 + { 1175 + struct port_info *pi = netdev_priv(dev); 1176 + struct adapter *adapter = pi->adapter; 1177 + 1178 + if (adapter->params.rev > 0) { 1179 + t3_set_vlan_accel(adapter, 1 << pi->port_id, 1180 + features & NETIF_F_HW_VLAN_RX); 1181 + } else { 1182 + /* single control for all ports */ 1183 + unsigned int i, have_vlans = features & NETIF_F_HW_VLAN_RX; 1184 + 1185 + for_each_port(adapter, i) 1186 + have_vlans |= 1187 + adapter->port[i]->features & NETIF_F_HW_VLAN_RX; 1188 + 1189 + t3_set_vlan_accel(adapter, 1, have_vlans); 1190 + } 1191 + t3_synchronize_rx(adapter, pi); 1192 + } 1193 + 1194 + /** 1153 1195 * cxgb_up - enable the adapter 1154 1196 * @adapter: adapter being enabled 1155 1197 * ··· 1203 1161 */ 1204 1162 static int cxgb_up(struct adapter *adap) 1205 1163 { 1206 - int err; 1164 + int i, err; 1207 1165 1208 1166 if (!(adap->flags & FULL_INIT_DONE)) { 1209 1167 err = t3_check_fw_version(adap); ··· 1239 1197 err = setup_sge_qsets(adap); 1240 1198 if (err) 1241 1199 goto out; 1200 + 1201 + for_each_port(adap, i) 1202 + cxgb_vlan_mode(adap->port[i], adap->port[i]->features); 1242 1203 1243 1204 setup_rss(adap); 
1244 1205 if (!(adap->flags & NAPI_INIT)) ··· 2553 2508 return 0; 2554 2509 } 2555 2510 2556 - /** 2557 - * t3_synchronize_rx - wait for current Rx processing on a port to complete 2558 - * @adap: the adapter 2559 - * @p: the port 2560 - * 2561 - * Ensures that current Rx processing on any of the queues associated with 2562 - * the given port completes before returning. We do this by acquiring and 2563 - * releasing the locks of the response queues associated with the port. 2564 - */ 2565 - static void t3_synchronize_rx(struct adapter *adap, const struct port_info *p) 2566 - { 2567 - int i; 2568 - 2569 - for (i = p->first_qset; i < p->first_qset + p->nqsets; i++) { 2570 - struct sge_rspq *q = &adap->sge.qs[i].rspq; 2571 - 2572 - spin_lock_irq(&q->lock); 2573 - spin_unlock_irq(&q->lock); 2574 - } 2575 - } 2576 - 2577 - static void cxgb_vlan_mode(struct net_device *dev, netdev_features_t features) 2578 - { 2579 - struct port_info *pi = netdev_priv(dev); 2580 - struct adapter *adapter = pi->adapter; 2581 - 2582 - if (adapter->params.rev > 0) { 2583 - t3_set_vlan_accel(adapter, 1 << pi->port_id, 2584 - features & NETIF_F_HW_VLAN_RX); 2585 - } else { 2586 - /* single control for all ports */ 2587 - unsigned int i, have_vlans = features & NETIF_F_HW_VLAN_RX; 2588 - 2589 - for_each_port(adapter, i) 2590 - have_vlans |= 2591 - adapter->port[i]->features & NETIF_F_HW_VLAN_RX; 2592 - 2593 - t3_set_vlan_accel(adapter, 1, have_vlans); 2594 - } 2595 - t3_synchronize_rx(adapter, pi); 2596 - } 2597 - 2598 2511 static netdev_features_t cxgb_fix_features(struct net_device *dev, 2599 2512 netdev_features_t features) 2600 2513 { ··· 3355 3352 3356 3353 err = sysfs_create_group(&adapter->port[0]->dev.kobj, 3357 3354 &cxgb3_attr_group); 3358 - 3359 - for_each_port(adapter, i) 3360 - cxgb_vlan_mode(adapter->port[i], adapter->port[i]->features); 3361 3355 3362 3356 print_port_info(adapter, ai); 3363 3357 return 0;
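The `t3_synchronize_rx()` helper moved in the hunk above waits for in-flight Rx processing by taking and immediately dropping each response queue's lock: once the caller has briefly held a lock, any critical section that began before the call must have completed. The same quiescence-barrier idiom can be sketched in user space with pthreads (hypothetical helper name, not driver code):

```c
#include <pthread.h>

/* Wait for every critical section guarded by the given locks to drain,
 * mirroring t3_synchronize_rx: acquire then immediately release each
 * lock. Any thread inside a lock when we start must exit its critical
 * section before we can take that lock, so after the loop no processing
 * that predates the call is still running. */
static void synchronize_under_locks(pthread_mutex_t *locks, int n)
{
    for (int i = 0; i < n; i++) {
        pthread_mutex_lock(&locks[i]);
        pthread_mutex_unlock(&locks[i]);
    }
}
```

Note this only waits for sections already in progress; it does not block new ones, which is all the Rx path needs here.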
+9 -43
drivers/net/ethernet/dlink/dl2k.c
··· 1254 1254 { 1255 1255 int phy_addr; 1256 1256 struct netdev_private *np = netdev_priv(dev); 1257 - struct mii_data *miidata = (struct mii_data *) &rq->ifr_ifru; 1258 - 1259 - struct netdev_desc *desc; 1260 - int i; 1257 + struct mii_ioctl_data *miidata = if_mii(rq); 1261 1258 1262 1259 phy_addr = np->phy_addr; 1263 1260 switch (cmd) { 1264 - case SIOCDEVPRIVATE: 1261 + case SIOCGMIIPHY: 1262 + miidata->phy_id = phy_addr; 1265 1263 break; 1266 - 1267 - case SIOCDEVPRIVATE + 1: 1268 - miidata->out_value = mii_read (dev, phy_addr, miidata->reg_num); 1264 + case SIOCGMIIREG: 1265 + miidata->val_out = mii_read (dev, phy_addr, miidata->reg_num); 1269 1266 break; 1270 - case SIOCDEVPRIVATE + 2: 1271 - mii_write (dev, phy_addr, miidata->reg_num, miidata->in_value); 1267 + case SIOCSMIIREG: 1268 + if (!capable(CAP_NET_ADMIN)) 1269 + return -EPERM; 1270 + mii_write (dev, phy_addr, miidata->reg_num, miidata->val_in); 1272 1271 break; 1273 - case SIOCDEVPRIVATE + 3: 1274 - break; 1275 - case SIOCDEVPRIVATE + 4: 1276 - break; 1277 - case SIOCDEVPRIVATE + 5: 1278 - netif_stop_queue (dev); 1279 - break; 1280 - case SIOCDEVPRIVATE + 6: 1281 - netif_wake_queue (dev); 1282 - break; 1283 - case SIOCDEVPRIVATE + 7: 1284 - printk 1285 - ("tx_full=%x cur_tx=%lx old_tx=%lx cur_rx=%lx old_rx=%lx\n", 1286 - netif_queue_stopped(dev), np->cur_tx, np->old_tx, np->cur_rx, 1287 - np->old_rx); 1288 - break; 1289 - case SIOCDEVPRIVATE + 8: 1290 - printk("TX ring:\n"); 1291 - for (i = 0; i < TX_RING_SIZE; i++) { 1292 - desc = &np->tx_ring[i]; 1293 - printk 1294 - ("%02x:cur:%08x next:%08x status:%08x frag1:%08x frag0:%08x", 1295 - i, 1296 - (u32) (np->tx_ring_dma + i * sizeof (*desc)), 1297 - (u32)le64_to_cpu(desc->next_desc), 1298 - (u32)le64_to_cpu(desc->status), 1299 - (u32)(le64_to_cpu(desc->fraginfo) >> 32), 1300 - (u32)le64_to_cpu(desc->fraginfo)); 1301 - printk ("\n"); 1302 - } 1303 - printk ("\n"); 1304 - break; 1305 - 1306 1272 default: 1307 1273 return -EOPNOTSUPP; 1308 1274 }
-7
drivers/net/ethernet/dlink/dl2k.h
··· 348 348 char *data; 349 349 }; 350 350 351 - struct mii_data { 352 - __u16 reserved; 353 - __u16 reg_num; 354 - __u16 in_value; 355 - __u16 out_value; 356 - }; 357 - 358 351 /* The Rx and Tx buffer descriptors. */ 359 352 struct netdev_desc { 360 353 __le64 next_desc;
+3 -3
drivers/net/ethernet/freescale/ucc_geth.c
··· 116 116 .maxGroupAddrInHash = 4, 117 117 .maxIndAddrInHash = 4, 118 118 .prel = 7, 119 - .maxFrameLength = 1518, 119 + .maxFrameLength = 1518+16, /* Add extra bytes for VLANs etc. */ 120 120 .minFrameLength = 64, 121 - .maxD1Length = 1520, 122 - .maxD2Length = 1520, 121 + .maxD1Length = 1520+16, /* Add extra bytes for VLANs etc. */ 122 + .maxD2Length = 1520+16, /* Add extra bytes for VLANs etc. */ 123 123 .vlantype = 0x8100, 124 124 .ecamptr = ((uint32_t) NULL), 125 125 .eventRegMask = UCCE_OTHER,
+1 -1
drivers/net/ethernet/freescale/ucc_geth.h
··· 877 877 878 878 /* Driver definitions */ 879 879 #define TX_BD_RING_LEN 0x10 880 - #define RX_BD_RING_LEN 0x10 880 + #define RX_BD_RING_LEN 0x20 881 881 882 882 #define TX_RING_MOD_MASK(size) (size-1) 883 883 #define RX_RING_MOD_MASK(size) (size-1)
+34 -26
drivers/net/ethernet/ibm/ehea/ehea_main.c
··· 290 290 291 291 arr[i].adh = adapter->handle; 292 292 arr[i].port_id = port->logical_port_id; 293 - arr[i].reg_type = EHEA_BCMC_SCOPE_ALL | 294 - EHEA_BCMC_MULTICAST | 293 + arr[i].reg_type = EHEA_BCMC_MULTICAST | 295 294 EHEA_BCMC_UNTAGGED; 295 + if (mc_entry->macaddr == 0) 296 + arr[i].reg_type |= EHEA_BCMC_SCOPE_ALL; 296 297 arr[i++].macaddr = mc_entry->macaddr; 297 298 298 299 arr[i].adh = adapter->handle; 299 300 arr[i].port_id = port->logical_port_id; 300 - arr[i].reg_type = EHEA_BCMC_SCOPE_ALL | 301 - EHEA_BCMC_MULTICAST | 301 + arr[i].reg_type = EHEA_BCMC_MULTICAST | 302 302 EHEA_BCMC_VLANID_ALL; 303 + if (mc_entry->macaddr == 0) 304 + arr[i].reg_type |= EHEA_BCMC_SCOPE_ALL; 303 305 arr[i++].macaddr = mc_entry->macaddr; 304 306 num_registrations -= 2; 305 307 } ··· 1840 1838 u64 hret; 1841 1839 u8 reg_type; 1842 1840 1843 - reg_type = EHEA_BCMC_SCOPE_ALL | EHEA_BCMC_MULTICAST 1844 - | EHEA_BCMC_UNTAGGED; 1841 + reg_type = EHEA_BCMC_MULTICAST | EHEA_BCMC_UNTAGGED; 1842 + if (mc_mac_addr == 0) 1843 + reg_type |= EHEA_BCMC_SCOPE_ALL; 1845 1844 1846 1845 hret = ehea_h_reg_dereg_bcmc(port->adapter->handle, 1847 1846 port->logical_port_id, ··· 1850 1847 if (hret) 1851 1848 goto out; 1852 1849 1853 - reg_type = EHEA_BCMC_SCOPE_ALL | EHEA_BCMC_MULTICAST 1854 - | EHEA_BCMC_VLANID_ALL; 1850 + reg_type = EHEA_BCMC_MULTICAST | EHEA_BCMC_VLANID_ALL; 1851 + if (mc_mac_addr == 0) 1852 + reg_type |= EHEA_BCMC_SCOPE_ALL; 1855 1853 1856 1854 hret = ehea_h_reg_dereg_bcmc(port->adapter->handle, 1857 1855 port->logical_port_id, ··· 1902 1898 netdev_err(dev, 1903 1899 "failed enabling IFF_ALLMULTI\n"); 1904 1900 } 1905 - } else 1901 + } else { 1906 1902 if (!enable) { 1907 1903 /* Disable ALLMULTI */ 1908 1904 hret = ehea_multicast_reg_helper(port, 0, H_DEREG_BCMC); ··· 1912 1908 netdev_err(dev, 1913 1909 "failed disabling IFF_ALLMULTI\n"); 1914 1910 } 1911 + } 1915 1912 } 1916 1913 1917 1914 static void ehea_add_multicast_entry(struct ehea_port *port, u8 *mc_mac_addr) ··· 
1946 1941 struct netdev_hw_addr *ha; 1947 1942 int ret; 1948 1943 1949 - if (port->promisc) { 1950 - ehea_promiscuous(dev, 1); 1951 - return; 1952 - } 1953 - ehea_promiscuous(dev, 0); 1944 + ehea_promiscuous(dev, !!(dev->flags & IFF_PROMISC)); 1954 1945 1955 1946 if (dev->flags & IFF_ALLMULTI) { 1956 1947 ehea_allmulti(dev, 1); ··· 2464 2463 return 0; 2465 2464 2466 2465 ehea_drop_multicast_list(dev); 2466 + ehea_allmulti(dev, 0); 2467 2467 ehea_broadcast_reg_helper(port, H_DEREG_BCMC); 2468 2468 2469 2469 ehea_free_interrupts(dev); ··· 3263 3261 struct ehea_adapter *adapter; 3264 3262 const u64 *adapter_handle; 3265 3263 int ret; 3264 + int i; 3266 3265 3267 3266 if (!dev || !dev->dev.of_node) { 3268 3267 pr_err("Invalid ibmebus device probed\n"); ··· 3317 3314 tasklet_init(&adapter->neq_tasklet, ehea_neq_tasklet, 3318 3315 (unsigned long)adapter); 3319 3316 3320 - ret = ibmebus_request_irq(adapter->neq->attr.ist1, 3321 - ehea_interrupt_neq, IRQF_DISABLED, 3322 - "ehea_neq", adapter); 3323 - if (ret) { 3324 - dev_err(&dev->dev, "requesting NEQ IRQ failed\n"); 3325 - goto out_kill_eq; 3326 - } 3327 - 3328 3317 ret = ehea_create_device_sysfs(dev); 3329 3318 if (ret) 3330 - goto out_free_irq; 3319 + goto out_kill_eq; 3331 3320 3332 3321 ret = ehea_setup_ports(adapter); 3333 3322 if (ret) { ··· 3327 3332 goto out_rem_dev_sysfs; 3328 3333 } 3329 3334 3335 + ret = ibmebus_request_irq(adapter->neq->attr.ist1, 3336 + ehea_interrupt_neq, IRQF_DISABLED, 3337 + "ehea_neq", adapter); 3338 + if (ret) { 3339 + dev_err(&dev->dev, "requesting NEQ IRQ failed\n"); 3340 + goto out_shutdown_ports; 3341 + } 3342 + 3343 + 3330 3344 ret = 0; 3331 3345 goto out; 3332 3346 3347 + out_shutdown_ports: 3348 + for (i = 0; i < EHEA_MAX_PORTS; i++) 3349 + if (adapter->port[i]) { 3350 + ehea_shutdown_single_port(adapter->port[i]); 3351 + adapter->port[i] = NULL; 3352 + } 3353 + 3333 3354 out_rem_dev_sysfs: 3334 3355 ehea_remove_device_sysfs(dev); 3335 - 3336 - out_free_irq: 3337 - 
ibmebus_free_irq(adapter->neq->attr.ist1, adapter); 3338 3356 3339 3357 out_kill_eq: 3340 3358 ehea_destroy_eq(adapter->neq);
+1 -1
drivers/net/ethernet/ibm/ehea/ehea_phyp.h
··· 450 450 void *cb_addr); 451 451 452 452 #define H_REGBCMC_PN EHEA_BMASK_IBM(48, 63) 453 - #define H_REGBCMC_REGTYPE EHEA_BMASK_IBM(61, 63) 453 + #define H_REGBCMC_REGTYPE EHEA_BMASK_IBM(60, 63) 454 454 #define H_REGBCMC_MACADDR EHEA_BMASK_IBM(16, 63) 455 455 #define H_REGBCMC_VLANID EHEA_BMASK_IBM(52, 63) 456 456
+2 -2
drivers/net/ethernet/intel/e1000/e1000_main.c
··· 3400 3400 for (i = 0; tx_ring->desc && (i < tx_ring->count); i++) { 3401 3401 struct e1000_tx_desc *tx_desc = E1000_TX_DESC(*tx_ring, i); 3402 3402 struct e1000_buffer *buffer_info = &tx_ring->buffer_info[i]; 3403 - struct my_u { u64 a; u64 b; }; 3403 + struct my_u { __le64 a; __le64 b; }; 3404 3404 struct my_u *u = (struct my_u *)tx_desc; 3405 3405 const char *type; 3406 3406 ··· 3444 3444 for (i = 0; rx_ring->desc && (i < rx_ring->count); i++) { 3445 3445 struct e1000_rx_desc *rx_desc = E1000_RX_DESC(*rx_ring, i); 3446 3446 struct e1000_buffer *buffer_info = &rx_ring->buffer_info[i]; 3447 - struct my_u { u64 a; u64 b; }; 3447 + struct my_u { __le64 a; __le64 b; }; 3448 3448 struct my_u *u = (struct my_u *)rx_desc; 3449 3449 const char *type; 3450 3450
+1 -1
drivers/net/ethernet/intel/e1000e/netdev.c
··· 3778 3778 /* fire an unusual interrupt on the test handler */ 3779 3779 ew32(ICS, E1000_ICS_RXSEQ); 3780 3780 e1e_flush(); 3781 - msleep(50); 3781 + msleep(100); 3782 3782 3783 3783 e1000_irq_disable(adapter); 3784 3784
+43 -2
drivers/net/ethernet/intel/e1000e/param.c
··· 106 106 /* 107 107 * Interrupt Throttle Rate (interrupts/sec) 108 108 * 109 - * Valid Range: 100-100000 (0=off, 1=dynamic, 3=dynamic conservative) 109 + * Valid Range: 100-100000 or one of: 0=off, 1=dynamic, 3=dynamic conservative 110 110 */ 111 111 E1000_PARAM(InterruptThrottleRate, "Interrupt Throttling Rate"); 112 112 #define DEFAULT_ITR 3 ··· 389 389 break; 390 390 } 391 391 } else { 392 - adapter->itr_setting = opt.def; 392 + /* 393 + * If no option specified, use default value and go 394 + * through the logic below to adjust itr/itr_setting 395 + */ 396 + adapter->itr = opt.def; 397 + 398 + /* 399 + * Make sure a message is printed for non-special 400 + * default values 401 + */ 402 + if (adapter->itr > 40) 403 + e_info("%s set to default %d\n", opt.name, 404 + adapter->itr); 405 + } 406 + 407 + adapter->itr_setting = adapter->itr; 408 + switch (adapter->itr) { 409 + case 0: 410 + e_info("%s turned off\n", opt.name); 411 + break; 412 + case 1: 413 + e_info("%s set to dynamic mode\n", opt.name); 393 414 adapter->itr = 20000; 415 + break; 416 + case 3: 417 + e_info("%s set to dynamic conservative mode\n", 418 + opt.name); 419 + adapter->itr = 20000; 420 + break; 421 + case 4: 422 + e_info("%s set to simplified (2000-8000 ints) mode\n", 423 + opt.name); 424 + break; 425 + default: 426 + /* 427 + * Save the setting, because the dynamic bits 428 + * change itr. 429 + * 430 + * Clear the lower two bits because 431 + * they are used as control. 432 + */ 433 + adapter->itr_setting &= ~3; 434 + break; 394 435 } 395 436 } 396 437 { /* Interrupt Mode */
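The reworked e1000e logic above routes both user-supplied and default `InterruptThrottleRate` values through one switch: 0, 1, 3 and 4 are special modes (1 and 3 also force the working rate to 20000), while any other value is kept as a literal rate with the low two control bits masked out of the saved setting. A standalone sketch of that mapping (hypothetical helper, simplified from the driver):

```c
/* Model of the e1000e ITR switch: returns the effective interrupt rate
 * and stores the bookkeeping value in *setting. Values 0/1/3/4 select
 * special modes; anything else is a literal rate whose low two bits
 * (used as control flags by the dynamic code) are cleared in *setting. */
static unsigned int itr_apply(unsigned int itr, unsigned int *setting)
{
    *setting = itr;
    switch (itr) {
    case 0:                 /* throttling off */
    case 4:                 /* simplified (2000-8000 ints) mode */
        return itr;
    case 1:                 /* dynamic */
    case 3:                 /* dynamic conservative */
        return 20000;
    default:
        *setting &= ~3u;    /* low two bits are mode flags, not rate */
        return itr;
    }
}
```

Running the default through the same path is the point of the patch: previously only explicit module-parameter values got this treatment.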
+2 -2
drivers/net/ethernet/intel/igb/igb_main.c
··· 2649 2649 2650 2650 txdctl |= E1000_TXDCTL_QUEUE_ENABLE; 2651 2651 wr32(E1000_TXDCTL(reg_idx), txdctl); 2652 - 2653 - netdev_tx_reset_queue(txring_txq(ring)); 2654 2652 } 2655 2653 2656 2654 /** ··· 3157 3159 buffer_info = &tx_ring->tx_buffer_info[i]; 3158 3160 igb_unmap_and_free_tx_resource(tx_ring, buffer_info); 3159 3161 } 3162 + 3163 + netdev_tx_reset_queue(txring_txq(tx_ring)); 3160 3164 3161 3165 size = sizeof(struct igb_tx_buffer) * tx_ring->count; 3162 3166 memset(tx_ring->tx_buffer_info, 0, size);
+2 -2
drivers/net/ethernet/intel/igbvf/netdev.c
··· 2731 2731 netdev->addr_len); 2732 2732 } 2733 2733 2734 - if (!is_valid_ether_addr(netdev->perm_addr)) { 2734 + if (!is_valid_ether_addr(netdev->dev_addr)) { 2735 2735 dev_err(&pdev->dev, "Invalid MAC Address: %pM\n", 2736 2736 netdev->dev_addr); 2737 2737 err = -EIO; 2738 2738 goto err_hw_init; 2739 2739 } 2740 2740 2741 - memcpy(netdev->perm_addr, adapter->hw.mac.addr, netdev->addr_len); 2741 + memcpy(netdev->perm_addr, netdev->dev_addr, netdev->addr_len); 2742 2742 2743 2743 setup_timer(&adapter->watchdog_timer, &igbvf_watchdog, 2744 2744 (unsigned long) adapter);
-3
drivers/net/ethernet/intel/ixgbe/ixgbe.h
··· 598 598 extern struct ixgbe_info ixgbe_X540_info; 599 599 #ifdef CONFIG_IXGBE_DCB 600 600 extern const struct dcbnl_rtnl_ops dcbnl_ops; 601 - extern int ixgbe_copy_dcb_cfg(struct ixgbe_dcb_config *src_dcb_cfg, 602 - struct ixgbe_dcb_config *dst_dcb_cfg, 603 - int tc_max); 604 601 #endif 605 602 606 603 extern char ixgbe_driver_name[];
+20 -23
drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c
··· 44 44 #define DCB_NO_HW_CHG 1 /* DCB configuration did not change */ 45 45 #define DCB_HW_CHG 2 /* DCB configuration changed, no reset */ 46 46 47 - int ixgbe_copy_dcb_cfg(struct ixgbe_dcb_config *scfg, 48 - struct ixgbe_dcb_config *dcfg, int tc_max) 47 + static int ixgbe_copy_dcb_cfg(struct ixgbe_adapter *adapter, int tc_max) 49 48 { 49 + struct ixgbe_dcb_config *scfg = &adapter->temp_dcb_cfg; 50 + struct ixgbe_dcb_config *dcfg = &adapter->dcb_cfg; 50 51 struct tc_configuration *src = NULL; 51 52 struct tc_configuration *dst = NULL; 52 53 int i, j; 53 54 int tx = DCB_TX_CONFIG; 54 55 int rx = DCB_RX_CONFIG; 55 56 int changes = 0; 57 + #ifdef IXGBE_FCOE 58 + struct dcb_app app = { 59 + .selector = DCB_APP_IDTYPE_ETHTYPE, 60 + .protocol = ETH_P_FCOE, 61 + }; 62 + u8 up = dcb_getapp(adapter->netdev, &app); 56 63 57 - if (!scfg || !dcfg) 58 - return changes; 64 + if (up && !(up & (1 << adapter->fcoe.up))) 65 + changes |= BIT_APP_UPCHG; 66 + #endif 59 67 60 68 for (i = DCB_PG_ATTR_TC_0; i < tc_max + DCB_PG_ATTR_TC_0; i++) { 61 69 src = &scfg->tc_config[i - DCB_PG_ATTR_TC_0]; ··· 340 332 struct ixgbe_adapter *adapter = netdev_priv(netdev); 341 333 int ret = DCB_NO_HW_CHG; 342 334 int i; 343 - #ifdef IXGBE_FCOE 344 - struct dcb_app app = { 345 - .selector = DCB_APP_IDTYPE_ETHTYPE, 346 - .protocol = ETH_P_FCOE, 347 - }; 348 - u8 up; 349 - 350 - /* In IEEE mode, use the IEEE Ethertype selector value */ 351 - if (adapter->dcbx_cap & DCB_CAP_DCBX_VER_IEEE) { 352 - app.selector = IEEE_8021QAZ_APP_SEL_ETHERTYPE; 353 - up = dcb_ieee_getapp_mask(netdev, &app); 354 - } else { 355 - up = dcb_getapp(netdev, &app); 356 - } 357 - #endif 358 335 359 336 /* Fail command if not in CEE mode */ 360 337 if (!(adapter->dcbx_cap & DCB_CAP_DCBX_VER_CEE)) 361 338 return ret; 362 339 363 - adapter->dcb_set_bitmap |= ixgbe_copy_dcb_cfg(&adapter->temp_dcb_cfg, 364 - &adapter->dcb_cfg, 340 + adapter->dcb_set_bitmap |= ixgbe_copy_dcb_cfg(adapter, 365 341 MAX_TRAFFIC_CLASS); 366 342 if 
(!adapter->dcb_set_bitmap) 367 343 return ret; ··· 432 440 * FCoE is using changes. This happens if the APP info 433 441 * changes or the up2tc mapping is updated. 434 442 */ 435 - if ((up && !(up & (1 << adapter->fcoe.up))) || 436 - (adapter->dcb_set_bitmap & BIT_APP_UPCHG)) { 443 + if (adapter->dcb_set_bitmap & BIT_APP_UPCHG) { 444 + struct dcb_app app = { 445 + .selector = DCB_APP_IDTYPE_ETHTYPE, 446 + .protocol = ETH_P_FCOE, 447 + }; 448 + u8 up = dcb_getapp(netdev, &app); 449 + 437 450 adapter->fcoe.up = ffs(up) - 1; 438 451 ixgbe_dcbnl_devreset(netdev); 439 452 ret = DCB_HW_CHG_RST;
+2
drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
··· 1780 1780 rx_desc = IXGBE_RX_DESC(rx_ring, rx_ntc); 1781 1781 } 1782 1782 1783 + netdev_tx_reset_queue(txring_txq(tx_ring)); 1784 + 1783 1785 /* re-map buffers to ring, store next to clean values */ 1784 1786 ixgbe_alloc_rx_buffers(rx_ring, count); 1785 1787 rx_ring->next_to_clean = rx_ntc;
+1
drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c
··· 437 437 */ 438 438 if ((fh->fh_r_ctl == FC_RCTL_DD_SOL_DATA) && 439 439 (fctl & FC_FC_END_SEQ)) { 440 + skb_linearize(skb); 440 441 crc = (struct fcoe_crc_eof *)skb_put(skb, sizeof(*crc)); 441 442 crc->fcoe_eof = FC_EOF_T; 442 443 }
+11 -8
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 2675 2675 /* enable queue */ 2676 2676 IXGBE_WRITE_REG(hw, IXGBE_TXDCTL(reg_idx), txdctl); 2677 2677 2678 - netdev_tx_reset_queue(txring_txq(ring)); 2679 - 2680 2678 /* TXDCTL.EN will return 0 on 82598 if link is down, so skip it */ 2681 2679 if (hw->mac.type == ixgbe_mac_82598EB && 2682 2680 !(IXGBE_READ_REG(hw, IXGBE_LINKS) & IXGBE_LINKS_UP)) ··· 4142 4144 ixgbe_unmap_and_free_tx_resource(tx_ring, tx_buffer_info); 4143 4145 } 4144 4146 4147 + netdev_tx_reset_queue(txring_txq(tx_ring)); 4148 + 4145 4149 size = sizeof(struct ixgbe_tx_buffer) * tx_ring->count; 4146 4150 memset(tx_ring->tx_buffer_info, 0, size); 4147 4151 ··· 4395 4395 adapter->dcb_cfg.pfc_mode_enable = false; 4396 4396 adapter->dcb_set_bitmap = 0x00; 4397 4397 adapter->dcbx_cap = DCB_CAP_DCBX_HOST | DCB_CAP_DCBX_VER_CEE; 4398 - ixgbe_copy_dcb_cfg(&adapter->dcb_cfg, &adapter->temp_dcb_cfg, 4399 - MAX_TRAFFIC_CLASS); 4398 + memcpy(&adapter->temp_dcb_cfg, &adapter->dcb_cfg, 4399 + sizeof(adapter->temp_dcb_cfg)); 4400 4400 4401 4401 #endif 4402 4402 ··· 4843 4843 netif_device_detach(netdev); 4844 4844 4845 4845 if (netif_running(netdev)) { 4846 + rtnl_lock(); 4846 4847 ixgbe_down(adapter); 4847 4848 ixgbe_free_irq(adapter); 4848 4849 ixgbe_free_all_tx_resources(adapter); 4849 4850 ixgbe_free_all_rx_resources(adapter); 4851 + rtnl_unlock(); 4850 4852 } 4851 4853 4852 4854 ixgbe_clear_interrupt_scheme(adapter); 4853 - #ifdef CONFIG_DCB 4854 - kfree(adapter->ixgbe_ieee_pfc); 4855 - kfree(adapter->ixgbe_ieee_ets); 4856 - #endif 4857 4855 4858 4856 #ifdef CONFIG_PM 4859 4857 retval = pci_save_state(pdev); ··· 7297 7299 7298 7300 ixgbe_release_hw_control(adapter); 7299 7301 7302 + #ifdef CONFIG_DCB 7303 + kfree(adapter->ixgbe_ieee_pfc); 7304 + kfree(adapter->ixgbe_ieee_ets); 7305 + 7306 + #endif 7300 7307 iounmap(adapter->hw.hw_addr); 7301 7308 pci_release_selected_regions(pdev, pci_select_bars(pdev, 7302 7309 IORESOURCE_MEM));
+20 -11
drivers/net/ethernet/marvell/sky2.c
··· 2494 2494 skb_copy_from_linear_data(re->skb, skb->data, length); 2495 2495 skb->ip_summed = re->skb->ip_summed; 2496 2496 skb->csum = re->skb->csum; 2497 + skb->rxhash = re->skb->rxhash; 2498 + skb->vlan_tci = re->skb->vlan_tci; 2499 + 2497 2500 pci_dma_sync_single_for_device(sky2->hw->pdev, re->data_addr, 2498 2501 length, PCI_DMA_FROMDEVICE); 2502 + re->skb->vlan_tci = 0; 2503 + re->skb->rxhash = 0; 2499 2504 re->skb->ip_summed = CHECKSUM_NONE; 2500 2505 skb_put(skb, length); 2501 2506 } ··· 2585 2580 struct sk_buff *skb = NULL; 2586 2581 u16 count = (status & GMR_FS_LEN) >> 16; 2587 2582 2588 - if (status & GMR_FS_VLAN) 2589 - count -= VLAN_HLEN; /* Account for vlan tag */ 2590 - 2591 2583 netif_printk(sky2, rx_status, KERN_DEBUG, dev, 2592 2584 "rx slot %u status 0x%x len %d\n", 2593 2585 sky2->rx_next, status, length); 2594 2586 2595 2587 sky2->rx_next = (sky2->rx_next + 1) % sky2->rx_pending; 2596 2588 prefetch(sky2->rx_ring + sky2->rx_next); 2589 + 2590 + if (vlan_tx_tag_present(re->skb)) 2591 + count -= VLAN_HLEN; /* Account for vlan tag */ 2597 2592 2598 2593 /* This chip has hardware problems that generates bogus status. 
2599 2594 * So do only marginal checking and expect higher level protocols ··· 2652 2647 } 2653 2648 2654 2649 static inline void sky2_skb_rx(const struct sky2_port *sky2, 2655 - u32 status, struct sk_buff *skb) 2650 + struct sk_buff *skb) 2656 2651 { 2657 - if (status & GMR_FS_VLAN) 2658 - __vlan_hwaccel_put_tag(skb, be16_to_cpu(sky2->rx_tag)); 2659 - 2660 2652 if (skb->ip_summed == CHECKSUM_NONE) 2661 2653 netif_receive_skb(skb); 2662 2654 else ··· 2705 2703 sky2_write32(sky2->hw, Q_ADDR(rxqaddr[sky2->port], Q_CSR), 2706 2704 BMU_DIS_RX_CHKSUM); 2707 2705 } 2706 + } 2707 + 2708 + static void sky2_rx_tag(struct sky2_port *sky2, u16 length) 2709 + { 2710 + struct sk_buff *skb; 2711 + 2712 + skb = sky2->rx_ring[sky2->rx_next].skb; 2713 + __vlan_hwaccel_put_tag(skb, be16_to_cpu(length)); 2708 2714 } 2709 2715 2710 2716 static void sky2_rx_hash(struct sky2_port *sky2, u32 status) ··· 2773 2763 } 2774 2764 2775 2765 skb->protocol = eth_type_trans(skb, dev); 2776 - 2777 - sky2_skb_rx(sky2, status, skb); 2766 + sky2_skb_rx(sky2, skb); 2778 2767 2779 2768 /* Stop after net poll weight */ 2780 2769 if (++work_done >= to_do) ··· 2781 2772 break; 2782 2773 2783 2774 case OP_RXVLAN: 2784 - sky2->rx_tag = length; 2775 + sky2_rx_tag(sky2, length); 2785 2776 break; 2786 2777 2787 2778 case OP_RXCHKSVLAN: 2788 - sky2->rx_tag = length; 2779 + sky2_rx_tag(sky2, length); 2789 2780 /* fall through */ 2790 2781 case OP_RXCHKS: 2791 2782 if (likely(dev->features & NETIF_F_RXCSUM))
-1
drivers/net/ethernet/marvell/sky2.h
··· 2241 2241 u16 rx_pending; 2242 2242 u16 rx_data_size; 2243 2243 u16 rx_nfrags; 2244 - u16 rx_tag; 2245 2244 2246 2245 struct { 2247 2246 unsigned long last;
+1 -1
drivers/net/ethernet/sun/sungem.c
··· 2339 2339 netif_device_detach(dev); 2340 2340 2341 2341 /* Switch off chip, remember WOL setting */ 2342 - gp->asleep_wol = gp->wake_on_lan; 2342 + gp->asleep_wol = !!gp->wake_on_lan; 2343 2343 gem_do_stop(dev, gp->asleep_wol); 2344 2344 2345 2345 /* Unlock the network stack */
+1 -1
drivers/net/ethernet/ti/davinci_emac.c
··· 1512 1512 1513 1513 static int match_first_device(struct device *dev, void *data) 1514 1514 { 1515 - return 1; 1515 + return !strncmp(dev_name(dev), "davinci_mdio", 12); 1516 1516 } 1517 1517 1518 1518 /**
+1 -1
drivers/net/ethernet/ti/tlan.c
··· 228 228 unsigned long addr; 229 229 230 230 addr = tag->buffer[9].address; 231 - addr |= (tag->buffer[8].address << 16) << 16; 231 + addr |= ((unsigned long) tag->buffer[8].address << 16) << 16; 232 232 return (struct sk_buff *) addr; 233 233 } 234 234
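The tlan fix above widens `tag->buffer[8].address` to `unsigned long` before the double 16-bit shift. Without the cast, both shifts happen in 32-bit arithmetic and the high word is simply discarded. A small generic demonstration (not the driver's struct):

```c
#include <stdint.h>

/* Rebuild a 64-bit value from two 32-bit halves the way tlan does.
 * The cast to uint64_t before shifting is the whole fix: without it,
 * ((hi << 16) << 16) is evaluated in 32 bits and always yields the
 * low half alone, losing the upper 32 bits of the address. */
static uint64_t join_halves(uint32_t hi, uint32_t lo)
{
    uint64_t addr = lo;
    addr |= ((uint64_t)hi << 16) << 16;
    return addr;
}
```

(The two `<< 16` steps rather than one `<< 32` also sidestep undefined behavior when the type really is only 32 bits wide.)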
+2 -2
drivers/net/usb/asix.c
··· 355 355 u32 packet_len; 356 356 u32 padbytes = 0xffff0000; 357 357 358 - padlen = ((skb->len + 4) % 512) ? 0 : 4; 358 + padlen = ((skb->len + 4) & (dev->maxpacket - 1)) ? 0 : 4; 359 359 360 360 if ((!skb_cloned(skb)) && 361 361 ((headroom + tailroom) >= (4 + padlen))) { ··· 377 377 cpu_to_le32s(&packet_len); 378 378 skb_copy_to_linear_data(skb, &packet_len, sizeof(packet_len)); 379 379 380 - if ((skb->len % 512) == 0) { 380 + if (padlen) { 381 381 cpu_to_le32s(&padbytes); 382 382 memcpy(skb_tail_pointer(skb), &padbytes, sizeof(padbytes)); 383 383 skb_put(skb, sizeof(padbytes));
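The asix change above stops hard-coding a 512-byte bulk endpoint: the 4 pad bytes are needed only when the frame plus its 4-byte length header would land exactly on a `wMaxPacketSize` boundary, where the device would otherwise expect a zero-length packet. The mask trick assumes the max packet size is a power of two, as USB bulk endpoints are. Sketched as a pure helper (hypothetical name):

```c
/* Pad rule from the asix tx_fixup hunk: add 4 pad bytes only when the
 * frame length plus the 4-byte header is an exact multiple of the
 * endpoint's max packet size. Assumes maxpacket is a power of two, so
 * (len & (maxpacket - 1)) == 0 tests divisibility without a modulo. */
static unsigned int asix_padlen(unsigned int skb_len, unsigned int maxpacket)
{
    return ((skb_len + 4) & (maxpacket - 1)) ? 0 : 4;
}
```

With the old fixed `% 512` test, a high-speed device was handled, but a full-speed endpoint (64-byte packets) could hit the ZLP boundary without getting its pad.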
+12 -2
drivers/net/usb/cdc_ether.c
··· 83 83 struct cdc_state *info = (void *) &dev->data; 84 84 int status; 85 85 int rndis; 86 + bool android_rndis_quirk = false; 86 87 struct usb_driver *driver = driver_of(intf); 87 88 struct usb_cdc_mdlm_desc *desc = NULL; 88 89 struct usb_cdc_mdlm_detail_desc *detail = NULL; ··· 196 195 info->control, 197 196 info->u->bSlaveInterface0, 198 197 info->data); 198 + /* fall back to hard-wiring for RNDIS */ 199 + if (rndis) { 200 + android_rndis_quirk = true; 201 + goto next_desc; 202 + } 199 203 goto bad_desc; 200 204 } 201 205 if (info->control != intf) { ··· 277 271 /* Microsoft ActiveSync based and some regular RNDIS devices lack the 278 272 * CDC descriptors, so we'll hard-wire the interfaces and not check 279 273 * for descriptors. 274 + * 275 + * Some Android RNDIS devices have a CDC Union descriptor pointing 276 + * to non-existing interfaces. Ignore that and attempt the same 277 + * hard-wired 0 and 1 interfaces. 280 278 */ 281 - if (rndis && !info->u) { 279 + if (rndis && (!info->u || android_rndis_quirk)) { 282 280 info->control = usb_ifnum_to_if(dev->udev, 0); 283 281 info->data = usb_ifnum_to_if(dev->udev, 1); 284 - if (!info->control || !info->data) { 282 + if (!info->control || !info->data || info->control != intf) { 285 283 dev_dbg(&intf->dev, 286 284 "rndis: master #0/%p slave #1/%p\n", 287 285 info->control,
+24 -11
drivers/net/usb/smsc75xx.c
··· 98 98 99 99 if (unlikely(ret < 0)) 100 100 netdev_warn(dev->net, 101 - "Failed to read register index 0x%08x", index); 101 + "Failed to read reg index 0x%08x: %d", index, ret); 102 102 103 103 le32_to_cpus(buf); 104 104 *data = *buf; ··· 128 128 129 129 if (unlikely(ret < 0)) 130 130 netdev_warn(dev->net, 131 - "Failed to write register index 0x%08x", index); 131 + "Failed to write reg index 0x%08x: %d", index, ret); 132 132 133 133 kfree(buf); 134 134 ··· 171 171 idx &= dev->mii.reg_num_mask; 172 172 addr = ((phy_id << MII_ACCESS_PHY_ADDR_SHIFT) & MII_ACCESS_PHY_ADDR) 173 173 | ((idx << MII_ACCESS_REG_ADDR_SHIFT) & MII_ACCESS_REG_ADDR) 174 - | MII_ACCESS_READ; 174 + | MII_ACCESS_READ | MII_ACCESS_BUSY; 175 175 ret = smsc75xx_write_reg(dev, MII_ACCESS, addr); 176 176 check_warn_goto_done(ret, "Error writing MII_ACCESS"); 177 177 ··· 210 210 idx &= dev->mii.reg_num_mask; 211 211 addr = ((phy_id << MII_ACCESS_PHY_ADDR_SHIFT) & MII_ACCESS_PHY_ADDR) 212 212 | ((idx << MII_ACCESS_REG_ADDR_SHIFT) & MII_ACCESS_REG_ADDR) 213 - | MII_ACCESS_WRITE; 213 + | MII_ACCESS_WRITE | MII_ACCESS_BUSY; 214 214 ret = smsc75xx_write_reg(dev, MII_ACCESS, addr); 215 215 check_warn_goto_done(ret, "Error writing MII_ACCESS"); 216 216 ··· 508 508 u16 lcladv, rmtadv; 509 509 int ret; 510 510 511 - /* clear interrupt status */ 511 + /* read and write to clear phy interrupt status */ 512 512 ret = smsc75xx_mdio_read(dev->net, mii->phy_id, PHY_INT_SRC); 513 513 check_warn_return(ret, "Error reading PHY_INT_SRC"); 514 + smsc75xx_mdio_write(dev->net, mii->phy_id, PHY_INT_SRC, 0xffff); 514 515 515 516 ret = smsc75xx_write_reg(dev, INT_STS, INT_STS_CLEAR_ALL); 516 517 check_warn_return(ret, "Error writing INT_STS"); ··· 644 643 645 644 static int smsc75xx_phy_initialize(struct usbnet *dev) 646 645 { 647 - int bmcr, timeout = 0; 646 + int bmcr, ret, timeout = 0; 648 647 649 648 /* Initialize MII structure */ 650 649 dev->mii.dev = dev->net; ··· 652 651 dev->mii.mdio_write = smsc75xx_mdio_write; 
653 652 dev->mii.phy_id_mask = 0x1f; 654 653 dev->mii.reg_num_mask = 0x1f; 654 + dev->mii.supports_gmii = 1; 655 655 dev->mii.phy_id = SMSC75XX_INTERNAL_PHY_ID; 656 656 657 657 /* reset phy and wait for reset to complete */ ··· 663 661 bmcr = smsc75xx_mdio_read(dev->net, dev->mii.phy_id, MII_BMCR); 664 662 check_warn_return(bmcr, "Error reading MII_BMCR"); 665 663 timeout++; 666 - } while ((bmcr & MII_BMCR) && (timeout < 100)); 664 + } while ((bmcr & BMCR_RESET) && (timeout < 100)); 667 665 668 666 if (timeout >= 100) { 669 667 netdev_warn(dev->net, "timeout on PHY Reset"); ··· 673 671 smsc75xx_mdio_write(dev->net, dev->mii.phy_id, MII_ADVERTISE, 674 672 ADVERTISE_ALL | ADVERTISE_CSMA | ADVERTISE_PAUSE_CAP | 675 673 ADVERTISE_PAUSE_ASYM); 674 + smsc75xx_mdio_write(dev->net, dev->mii.phy_id, MII_CTRL1000, 675 + ADVERTISE_1000FULL); 676 676 677 - /* read to clear */ 678 - smsc75xx_mdio_read(dev->net, dev->mii.phy_id, PHY_INT_SRC); 679 - check_warn_return(bmcr, "Error reading PHY_INT_SRC"); 677 + /* read and write to clear phy interrupt status */ 678 + ret = smsc75xx_mdio_read(dev->net, dev->mii.phy_id, PHY_INT_SRC); 679 + check_warn_return(ret, "Error reading PHY_INT_SRC"); 680 + smsc75xx_mdio_write(dev->net, dev->mii.phy_id, PHY_INT_SRC, 0xffff); 680 681 681 682 smsc75xx_mdio_write(dev->net, dev->mii.phy_id, PHY_INT_MASK, 682 683 PHY_INT_MASK_DEFAULT); ··· 951 946 ret = smsc75xx_write_reg(dev, INT_EP_CTL, buf); 952 947 check_warn_return(ret, "Failed to write INT_EP_CTL: %d", ret); 953 948 949 + /* allow mac to detect speed and duplex from phy */ 950 + ret = smsc75xx_read_reg(dev, MAC_CR, &buf); 951 + check_warn_return(ret, "Failed to read MAC_CR: %d", ret); 952 + 953 + buf |= (MAC_CR_ADD | MAC_CR_ASD); 954 + ret = smsc75xx_write_reg(dev, MAC_CR, buf); 955 + check_warn_return(ret, "Failed to write MAC_CR: %d", ret); 956 + 954 957 ret = smsc75xx_read_reg(dev, MAC_TX, &buf); 955 958 check_warn_return(ret, "Failed to read MAC_TX: %d", ret); 956 959 ··· 1225 1212 
.rx_fixup = smsc75xx_rx_fixup, 1226 1213 .tx_fixup = smsc75xx_tx_fixup, 1227 1214 .status = smsc75xx_status, 1228 - .flags = FLAG_ETHER | FLAG_SEND_ZLP, 1215 + .flags = FLAG_ETHER | FLAG_SEND_ZLP | FLAG_LINK_INTR, 1229 1216 }; 1230 1217 1231 1218 static const struct usb_device_id products[] = {
+2 -1
drivers/net/usb/smsc95xx.c
··· 1017 1017 dev->net->ethtool_ops = &smsc95xx_ethtool_ops; 1018 1018 dev->net->flags |= IFF_MULTICAST; 1019 1019 dev->net->hard_header_len += SMSC95XX_TX_OVERHEAD_CSUM; 1020 + dev->hard_mtu = dev->net->mtu + dev->net->hard_header_len; 1020 1021 return 0; 1021 1022 } 1022 1023 ··· 1192 1191 .rx_fixup = smsc95xx_rx_fixup, 1193 1192 .tx_fixup = smsc95xx_tx_fixup, 1194 1193 .status = smsc95xx_status, 1195 - .flags = FLAG_ETHER | FLAG_SEND_ZLP, 1194 + .flags = FLAG_ETHER | FLAG_SEND_ZLP | FLAG_LINK_INTR, 1196 1195 }; 1197 1196 1198 1197 static const struct usb_device_id products[] = {
+4 -1
drivers/net/usb/usbnet.c
··· 210 210 } else { 211 211 usb_fill_int_urb(dev->interrupt, dev->udev, pipe, 212 212 buf, maxp, intr_complete, dev, period); 213 + dev->interrupt->transfer_flags |= URB_FREE_BUFFER; 213 214 dev_dbg(&intf->dev, 214 215 "status ep%din, %d bytes period %d\n", 215 216 usb_pipeendpoint(pipe), maxp, period); ··· 1445 1444 1446 1445 status = register_netdev (net); 1447 1446 if (status) 1448 - goto out3; 1447 + goto out4; 1449 1448 netif_info(dev, probe, dev->net, 1450 1449 "register '%s' at usb-%s-%s, %s, %pM\n", 1451 1450 udev->dev.driver->name, ··· 1463 1462 1464 1463 return 0; 1465 1464 1465 + out4: 1466 + usb_free_urb(dev->interrupt); 1466 1467 out3: 1467 1468 if (info->unbind) 1468 1469 info->unbind (dev, udev);
+1
drivers/net/wireless/ath/ath5k/ahb.c
··· 220 220 } 221 221 222 222 ath5k_deinit_ah(ah); 223 + iounmap(ah->iobase); 223 224 platform_set_drvdata(pdev, NULL); 224 225 ieee80211_free_hw(hw); 225 226
+1 -1
drivers/net/wireless/ath/ath9k/ar5008_phy.c
··· 859 859 ar5008_hw_set_channel_regs(ah, chan); 860 860 ar5008_hw_init_chain_masks(ah); 861 861 ath9k_olc_init(ah); 862 - ath9k_hw_apply_txpower(ah, chan); 862 + ath9k_hw_apply_txpower(ah, chan, false); 863 863 864 864 /* Write analog registers */ 865 865 if (!ath9k_hw_set_rf_regs(ah, chan, freqIndex)) {
+1 -1
drivers/net/wireless/ath/ath9k/ar9003_paprd.c
··· 54 54 55 55 if (val) { 56 56 ah->paprd_table_write_done = true; 57 - ath9k_hw_apply_txpower(ah, chan); 57 + ath9k_hw_apply_txpower(ah, chan, false); 58 58 } 59 59 60 60 REG_RMW_FIELD(ah, AR_PHY_PAPRD_CTRL0_B0,
+3 -3
drivers/net/wireless/ath/ath9k/ar9003_phy.c
··· 373 373 else 374 374 spur_subchannel_sd = 0; 375 375 376 - spur_freq_sd = (freq_offset << 9) / 11; 376 + spur_freq_sd = ((freq_offset + 10) << 9) / 11; 377 377 378 378 } else { 379 379 if (REG_READ_FIELD(ah, AR_PHY_GEN_CTRL, ··· 382 382 else 383 383 spur_subchannel_sd = 1; 384 384 385 - spur_freq_sd = (freq_offset << 9) / 11; 385 + spur_freq_sd = ((freq_offset - 10) << 9) / 11; 386 386 387 387 } 388 388 ··· 680 680 ar9003_hw_override_ini(ah); 681 681 ar9003_hw_set_channel_regs(ah, chan); 682 682 ar9003_hw_set_chain_masks(ah, ah->rxchainmask, ah->txchainmask); 683 - ath9k_hw_apply_txpower(ah, chan); 683 + ath9k_hw_apply_txpower(ah, chan, false); 684 684 685 685 if (AR_SREV_9462(ah)) { 686 686 if (REG_READ_FIELD(ah, AR_PHY_TX_IQCAL_CONTROL_0,
+2
drivers/net/wireless/ath/ath9k/eeprom_9287.c
··· 798 798 regulatory->max_power_level = ratesArray[i]; 799 799 } 800 800 801 + ath9k_hw_update_regulatory_maxpower(ah); 802 + 801 803 if (test) 802 804 return; 803 805
+5 -4
drivers/net/wireless/ath/ath9k/hw.c
··· 1522 1522 return false; 1523 1523 } 1524 1524 ath9k_hw_set_clockrate(ah); 1525 - ath9k_hw_apply_txpower(ah, chan); 1525 + ath9k_hw_apply_txpower(ah, chan, false); 1526 1526 ath9k_hw_rfbus_done(ah); 1527 1527 1528 1528 if (IS_CHAN_OFDM(chan) || IS_CHAN_HT(chan)) ··· 2797 2797 return ah->eep_ops->get_eeprom(ah, gain_param); 2798 2798 } 2799 2799 2800 - void ath9k_hw_apply_txpower(struct ath_hw *ah, struct ath9k_channel *chan) 2800 + void ath9k_hw_apply_txpower(struct ath_hw *ah, struct ath9k_channel *chan, 2801 + bool test) 2801 2802 { 2802 2803 struct ath_regulatory *reg = ath9k_hw_regulatory(ah); 2803 2804 struct ieee80211_channel *channel; ··· 2819 2818 2820 2819 ah->eep_ops->set_txpower(ah, chan, 2821 2820 ath9k_regd_get_ctl(reg, chan), 2822 - ant_reduction, new_pwr, false); 2821 + ant_reduction, new_pwr, test); 2823 2822 } 2824 2823 2825 2824 void ath9k_hw_set_txpowerlimit(struct ath_hw *ah, u32 limit, bool test) ··· 2832 2831 if (test) 2833 2832 channel->max_power = MAX_RATE_POWER / 2; 2834 2833 2835 - ath9k_hw_apply_txpower(ah, chan); 2834 + ath9k_hw_apply_txpower(ah, chan, test); 2836 2835 2837 2836 if (test) 2838 2837 channel->max_power = DIV_ROUND_UP(reg->max_power_level, 2);
+2 -1
drivers/net/wireless/ath/ath9k/hw.h
··· 985 985 /* PHY */ 986 986 void ath9k_hw_get_delta_slope_vals(struct ath_hw *ah, u32 coef_scaled, 987 987 u32 *coef_mantissa, u32 *coef_exponent); 988 - void ath9k_hw_apply_txpower(struct ath_hw *ah, struct ath9k_channel *chan); 988 + void ath9k_hw_apply_txpower(struct ath_hw *ah, struct ath9k_channel *chan, 989 + bool test); 989 990 990 991 /* 991 992 * Code Specific to AR5008, AR9001 or AR9002,
+8 -2
drivers/net/wireless/b43/main.c
··· 4841 4841 out_mutex_unlock: 4842 4842 mutex_unlock(&wl->mutex); 4843 4843 4844 - /* reload configuration */ 4845 - b43_op_config(hw, ~0); 4844 + /* 4845 + * Configuration may have been overwritten during initialization. 4846 + * Reload the configuration, but only if initialization was 4847 + * successful. Reloading the configuration after a failed init 4848 + * may hang the system. 4849 + */ 4850 + if (!err) 4851 + b43_op_config(hw, ~0); 4846 4852 4847 4853 return err; 4848 4854 }
+7 -1
drivers/net/wireless/brcm80211/brcmfmac/bcmsdh_sdmmc.c
··· 108 108 sdio_release_host(sdfunc); 109 109 } 110 110 } else if (regaddr == SDIO_CCCR_ABORT) { 111 + sdfunc = kmemdup(sdiodev->func[0], sizeof(struct sdio_func), 112 + GFP_KERNEL); 113 + if (!sdfunc) 114 + return -ENOMEM; 115 + sdfunc->num = 0; 111 116 sdio_claim_host(sdfunc); 112 117 sdio_writeb(sdfunc, *byte, regaddr, &err_ret); 113 118 sdio_release_host(sdfunc); 119 + kfree(sdfunc); 114 120 } else if (regaddr < 0xF0) { 115 121 brcmf_dbg(ERROR, "F0 Wr:0x%02x: write disallowed\n", regaddr); 116 122 err_ret = -EPERM; ··· 492 486 kfree(bus_if); 493 487 return -ENOMEM; 494 488 } 495 - sdiodev->func[0] = func->card->sdio_func[0]; 489 + sdiodev->func[0] = func; 496 490 sdiodev->func[1] = func; 497 491 sdiodev->bus_if = bus_if; 498 492 bus_if->bus_priv.sdio = sdiodev;
+52 -12
drivers/net/wireless/brcm80211/brcmfmac/dhd_sdio.c
··· 574 574 575 575 struct task_struct *dpc_tsk; 576 576 struct completion dpc_wait; 577 + struct list_head dpc_tsklst; 578 + spinlock_t dpc_tl_lock; 577 579 578 580 struct semaphore sdsem; 579 581 ··· 2596 2594 return resched; 2597 2595 } 2598 2596 2597 + static inline void brcmf_sdbrcm_adddpctsk(struct brcmf_sdio *bus) 2598 + { 2599 + struct list_head *new_hd; 2600 + unsigned long flags; 2601 + 2602 + if (in_interrupt()) 2603 + new_hd = kzalloc(sizeof(struct list_head), GFP_ATOMIC); 2604 + else 2605 + new_hd = kzalloc(sizeof(struct list_head), GFP_KERNEL); 2606 + if (new_hd == NULL) 2607 + return; 2608 + 2609 + spin_lock_irqsave(&bus->dpc_tl_lock, flags); 2610 + list_add_tail(new_hd, &bus->dpc_tsklst); 2611 + spin_unlock_irqrestore(&bus->dpc_tl_lock, flags); 2612 + } 2613 + 2599 2614 static int brcmf_sdbrcm_dpc_thread(void *data) 2600 2615 { 2601 2616 struct brcmf_sdio *bus = (struct brcmf_sdio *) data; 2617 + struct list_head *cur_hd, *tmp_hd; 2618 + unsigned long flags; 2602 2619 2603 2620 allow_signal(SIGTERM); 2604 2621 /* Run until signal received */ 2605 2622 while (1) { 2606 2623 if (kthread_should_stop()) 2607 2624 break; 2608 - if (!wait_for_completion_interruptible(&bus->dpc_wait)) { 2609 - /* Call bus dpc unless it indicated down 2610 - (then clean stop) */ 2611 - if (bus->sdiodev->bus_if->state != BRCMF_BUS_DOWN) { 2612 - if (brcmf_sdbrcm_dpc(bus)) 2613 - complete(&bus->dpc_wait); 2614 - } else { 2625 + 2626 + if (list_empty(&bus->dpc_tsklst)) 2627 + if (wait_for_completion_interruptible(&bus->dpc_wait)) 2628 + break; 2629 + 2630 + spin_lock_irqsave(&bus->dpc_tl_lock, flags); 2631 + list_for_each_safe(cur_hd, tmp_hd, &bus->dpc_tsklst) { 2632 + spin_unlock_irqrestore(&bus->dpc_tl_lock, flags); 2633 + 2634 + if (bus->sdiodev->bus_if->state == BRCMF_BUS_DOWN) { 2615 2635 /* after stopping the bus, exit thread */ 2616 2636 brcmf_sdbrcm_bus_stop(bus->sdiodev->dev); 2617 2637 bus->dpc_tsk = NULL; 2638 + spin_lock_irqsave(&bus->dpc_tl_lock, flags); 2618 2639 
break; 2619 2640 } 2620 - } else 2621 - break; 2641 + 2642 + if (brcmf_sdbrcm_dpc(bus)) 2643 + brcmf_sdbrcm_adddpctsk(bus); 2644 + 2645 + spin_lock_irqsave(&bus->dpc_tl_lock, flags); 2646 + list_del(cur_hd); 2647 + kfree(cur_hd); 2648 + } 2649 + spin_unlock_irqrestore(&bus->dpc_tl_lock, flags); 2622 2650 } 2623 2651 return 0; 2624 2652 } ··· 2701 2669 /* Schedule DPC if needed to send queued packet(s) */ 2702 2670 if (!bus->dpc_sched) { 2703 2671 bus->dpc_sched = true; 2704 - if (bus->dpc_tsk) 2672 + if (bus->dpc_tsk) { 2673 + brcmf_sdbrcm_adddpctsk(bus); 2705 2674 complete(&bus->dpc_wait); 2675 + } 2706 2676 } 2707 2677 2708 2678 return ret; ··· 3548 3514 brcmf_dbg(ERROR, "isr w/o interrupt configured!\n"); 3549 3515 3550 3516 bus->dpc_sched = true; 3551 - if (bus->dpc_tsk) 3517 + if (bus->dpc_tsk) { 3518 + brcmf_sdbrcm_adddpctsk(bus); 3552 3519 complete(&bus->dpc_wait); 3520 + } 3553 3521 } 3554 3522 3555 3523 static bool brcmf_sdbrcm_bus_watchdog(struct brcmf_sdio *bus) ··· 3595 3559 bus->ipend = true; 3596 3560 3597 3561 bus->dpc_sched = true; 3598 - if (bus->dpc_tsk) 3562 + if (bus->dpc_tsk) { 3563 + brcmf_sdbrcm_adddpctsk(bus); 3599 3564 complete(&bus->dpc_wait); 3565 + } 3600 3566 } 3601 3567 } 3602 3568 ··· 3935 3897 } 3936 3898 /* Initialize DPC thread */ 3937 3899 init_completion(&bus->dpc_wait); 3900 + INIT_LIST_HEAD(&bus->dpc_tsklst); 3901 + spin_lock_init(&bus->dpc_tl_lock); 3938 3902 bus->dpc_tsk = kthread_run(brcmf_sdbrcm_dpc_thread, 3939 3903 bus, "brcmf_dpc"); 3940 3904 if (IS_ERR(bus->dpc_tsk)) {
+1 -2
drivers/net/wireless/brcm80211/brcmsmac/main.c
··· 847 847 */ 848 848 if (!(txs->status & TX_STATUS_AMPDU) 849 849 && (txs->status & TX_STATUS_INTERMEDIATE)) { 850 - wiphy_err(wlc->wiphy, "%s: INTERMEDIATE but not AMPDU\n", 851 - __func__); 850 + BCMMSG(wlc->wiphy, "INTERMEDIATE but not AMPDU\n"); 852 851 return false; 853 852 } 854 853
+12 -1
drivers/net/wireless/ipw2x00/ipw2200.c
··· 2192 2192 { 2193 2193 int rc = 0; 2194 2194 unsigned long flags; 2195 + unsigned long now, end; 2195 2196 2196 2197 spin_lock_irqsave(&priv->lock, flags); 2197 2198 if (priv->status & STATUS_HCMD_ACTIVE) { ··· 2234 2233 } 2235 2234 spin_unlock_irqrestore(&priv->lock, flags); 2236 2235 2236 + now = jiffies; 2237 + end = now + HOST_COMPLETE_TIMEOUT; 2238 + again: 2237 2239 rc = wait_event_interruptible_timeout(priv->wait_command_queue, 2238 2240 !(priv-> 2239 2241 status & STATUS_HCMD_ACTIVE), 2240 - HOST_COMPLETE_TIMEOUT); 2242 + end - now); 2243 + if (rc < 0) { 2244 + now = jiffies; 2245 + if (time_before(now, end)) 2246 + goto again; 2247 + rc = 0; 2248 + } 2249 + 2241 2250 if (rc == 0) { 2242 2251 spin_lock_irqsave(&priv->lock, flags); 2243 2252 if (priv->status & STATUS_HCMD_ACTIVE) {
+4 -4
drivers/net/wireless/iwlwifi/iwl-1000.c
··· 32 32 #include "iwl-agn-hw.h" 33 33 34 34 /* Highest firmware API version supported */ 35 - #define IWL1000_UCODE_API_MAX 6 36 - #define IWL100_UCODE_API_MAX 6 35 + #define IWL1000_UCODE_API_MAX 5 36 + #define IWL100_UCODE_API_MAX 5 37 37 38 38 /* Oldest version we won't warn about */ 39 39 #define IWL1000_UCODE_API_OK 5 ··· 122 122 IWL_DEVICE_100, 123 123 }; 124 124 125 - MODULE_FIRMWARE(IWL1000_MODULE_FIRMWARE(IWL1000_UCODE_API_MAX)); 126 - MODULE_FIRMWARE(IWL100_MODULE_FIRMWARE(IWL100_UCODE_API_MAX)); 125 + MODULE_FIRMWARE(IWL1000_MODULE_FIRMWARE(IWL1000_UCODE_API_OK)); 126 + MODULE_FIRMWARE(IWL100_MODULE_FIRMWARE(IWL100_UCODE_API_OK));
+8 -8
drivers/net/wireless/iwlwifi/iwl-2000.c
··· 38 38 #define IWL135_UCODE_API_MAX 6 39 39 40 40 /* Oldest version we won't warn about */ 41 - #define IWL2030_UCODE_API_OK 5 42 - #define IWL2000_UCODE_API_OK 5 43 - #define IWL105_UCODE_API_OK 5 44 - #define IWL135_UCODE_API_OK 5 41 + #define IWL2030_UCODE_API_OK 6 42 + #define IWL2000_UCODE_API_OK 6 43 + #define IWL105_UCODE_API_OK 6 44 + #define IWL135_UCODE_API_OK 6 45 45 46 46 /* Lowest firmware API version supported */ 47 47 #define IWL2030_UCODE_API_MIN 5 ··· 219 219 .ht_params = &iwl2000_ht_params, 220 220 }; 221 221 222 - MODULE_FIRMWARE(IWL2000_MODULE_FIRMWARE(IWL2000_UCODE_API_MAX)); 223 - MODULE_FIRMWARE(IWL2030_MODULE_FIRMWARE(IWL2030_UCODE_API_MAX)); 224 - MODULE_FIRMWARE(IWL105_MODULE_FIRMWARE(IWL105_UCODE_API_MAX)); 225 - MODULE_FIRMWARE(IWL135_MODULE_FIRMWARE(IWL135_UCODE_API_MAX)); 222 + MODULE_FIRMWARE(IWL2000_MODULE_FIRMWARE(IWL2000_UCODE_API_OK)); 223 + MODULE_FIRMWARE(IWL2030_MODULE_FIRMWARE(IWL2030_UCODE_API_OK)); 224 + MODULE_FIRMWARE(IWL105_MODULE_FIRMWARE(IWL105_UCODE_API_OK)); 225 + MODULE_FIRMWARE(IWL135_MODULE_FIRMWARE(IWL135_UCODE_API_OK));
+9 -2
drivers/net/wireless/iwlwifi/iwl-5000.c
··· 35 35 #define IWL5000_UCODE_API_MAX 5 36 36 #define IWL5150_UCODE_API_MAX 2 37 37 38 + /* Oldest version we won't warn about */ 39 + #define IWL5000_UCODE_API_OK 5 40 + #define IWL5150_UCODE_API_OK 2 41 + 38 42 /* Lowest firmware API version supported */ 39 43 #define IWL5000_UCODE_API_MIN 1 40 44 #define IWL5150_UCODE_API_MIN 1 ··· 74 70 #define IWL_DEVICE_5000 \ 75 71 .fw_name_pre = IWL5000_FW_PRE, \ 76 72 .ucode_api_max = IWL5000_UCODE_API_MAX, \ 73 + .ucode_api_ok = IWL5000_UCODE_API_OK, \ 77 74 .ucode_api_min = IWL5000_UCODE_API_MIN, \ 78 75 .device_family = IWL_DEVICE_FAMILY_5000, \ 79 76 .max_inst_size = IWLAGN_RTC_INST_SIZE, \ ··· 120 115 .name = "Intel(R) WiMAX/WiFi Link 5350 AGN", 121 116 .fw_name_pre = IWL5000_FW_PRE, 122 117 .ucode_api_max = IWL5000_UCODE_API_MAX, 118 + .ucode_api_ok = IWL5000_UCODE_API_OK, 123 119 .ucode_api_min = IWL5000_UCODE_API_MIN, 124 120 .device_family = IWL_DEVICE_FAMILY_5000, 125 121 .max_inst_size = IWLAGN_RTC_INST_SIZE, ··· 136 130 #define IWL_DEVICE_5150 \ 137 131 .fw_name_pre = IWL5150_FW_PRE, \ 138 132 .ucode_api_max = IWL5150_UCODE_API_MAX, \ 133 + .ucode_api_ok = IWL5150_UCODE_API_OK, \ 139 134 .ucode_api_min = IWL5150_UCODE_API_MIN, \ 140 135 .device_family = IWL_DEVICE_FAMILY_5150, \ 141 136 .max_inst_size = IWLAGN_RTC_INST_SIZE, \ ··· 160 153 IWL_DEVICE_5150, 161 154 }; 162 155 163 - MODULE_FIRMWARE(IWL5000_MODULE_FIRMWARE(IWL5000_UCODE_API_MAX)); 164 - MODULE_FIRMWARE(IWL5150_MODULE_FIRMWARE(IWL5150_UCODE_API_MAX)); 156 + MODULE_FIRMWARE(IWL5000_MODULE_FIRMWARE(IWL5000_UCODE_API_OK)); 157 + MODULE_FIRMWARE(IWL5150_MODULE_FIRMWARE(IWL5150_UCODE_API_OK));
+6 -4
drivers/net/wireless/iwlwifi/iwl-6000.c
··· 39 39 /* Oldest version we won't warn about */ 40 40 #define IWL6000_UCODE_API_OK 4 41 41 #define IWL6000G2_UCODE_API_OK 5 42 + #define IWL6050_UCODE_API_OK 5 43 + #define IWL6000G2B_UCODE_API_OK 6 42 44 43 45 /* Lowest firmware API version supported */ 44 46 #define IWL6000_UCODE_API_MIN 4 ··· 192 190 #define IWL_DEVICE_6030 \ 193 191 .fw_name_pre = IWL6030_FW_PRE, \ 194 192 .ucode_api_max = IWL6000G2_UCODE_API_MAX, \ 195 - .ucode_api_ok = IWL6000G2_UCODE_API_OK, \ 193 + .ucode_api_ok = IWL6000G2B_UCODE_API_OK, \ 196 194 .ucode_api_min = IWL6000G2_UCODE_API_MIN, \ 197 195 .device_family = IWL_DEVICE_FAMILY_6030, \ 198 196 .max_inst_size = IWL60_RTC_INST_SIZE, \ ··· 358 356 }; 359 357 360 358 MODULE_FIRMWARE(IWL6000_MODULE_FIRMWARE(IWL6000_UCODE_API_OK)); 361 - MODULE_FIRMWARE(IWL6050_MODULE_FIRMWARE(IWL6050_UCODE_API_MAX)); 362 - MODULE_FIRMWARE(IWL6005_MODULE_FIRMWARE(IWL6000G2_UCODE_API_MAX)); 363 - MODULE_FIRMWARE(IWL6030_MODULE_FIRMWARE(IWL6000G2_UCODE_API_MAX)); 359 + MODULE_FIRMWARE(IWL6050_MODULE_FIRMWARE(IWL6050_UCODE_API_OK)); 360 + MODULE_FIRMWARE(IWL6005_MODULE_FIRMWARE(IWL6000G2_UCODE_API_OK)); 361 + MODULE_FIRMWARE(IWL6030_MODULE_FIRMWARE(IWL6000G2B_UCODE_API_OK));
+15 -7
drivers/net/wireless/iwlwifi/iwl-agn-rx.c
··· 737 737 struct sk_buff *skb; 738 738 __le16 fc = hdr->frame_control; 739 739 struct iwl_rxon_context *ctx; 740 - struct page *p; 741 - int offset; 740 + unsigned int hdrlen, fraglen; 742 741 743 742 /* We only process data packets if the interface is open */ 744 743 if (unlikely(!priv->is_open)) { ··· 751 752 iwlagn_set_decrypted_flag(priv, hdr, ampdu_status, stats)) 752 753 return; 753 754 754 - skb = dev_alloc_skb(128); 755 + /* Dont use dev_alloc_skb(), we'll have enough headroom once 756 + * ieee80211_hdr pulled. 757 + */ 758 + skb = alloc_skb(128, GFP_ATOMIC); 755 759 if (!skb) { 756 - IWL_ERR(priv, "dev_alloc_skb failed\n"); 760 + IWL_ERR(priv, "alloc_skb failed\n"); 757 761 return; 758 762 } 763 + hdrlen = min_t(unsigned int, len, skb_tailroom(skb)); 764 + memcpy(skb_put(skb, hdrlen), hdr, hdrlen); 765 + fraglen = len - hdrlen; 759 766 760 - offset = (void *)hdr - rxb_addr(rxb) + rxb_offset(rxb); 761 - p = rxb_steal_page(rxb); 762 - skb_add_rx_frag(skb, 0, p, offset, len, len); 767 + if (fraglen) { 768 + int offset = (void *)hdr - rxb_addr(rxb) + rxb_offset(rxb); 769 + 770 + skb_add_rx_frag(skb, 0, rxb_steal_page(rxb), offset, 771 + fraglen, rxb->truesize); 772 + } 763 773 764 774 /* 765 775 * Wake any queues that were stopped due to a passive channel tx
-3
drivers/net/wireless/iwlwifi/iwl-agn.c
··· 975 975 976 976 void iwlagn_prepare_restart(struct iwl_priv *priv) 977 977 { 978 - struct iwl_rxon_context *ctx; 979 978 bool bt_full_concurrent; 980 979 u8 bt_ci_compliance; 981 980 u8 bt_load; ··· 984 985 985 986 lockdep_assert_held(&priv->mutex); 986 987 987 - for_each_context(priv, ctx) 988 - ctx->vif = NULL; 989 988 priv->is_open = 0; 990 989 991 990 /*
+18 -4
drivers/net/wireless/iwlwifi/iwl-fh.h
··· 104 104 * (see struct iwl_tfd_frame). These 16 pointer registers are offset by 0x04 105 105 * bytes from one another. Each TFD circular buffer in DRAM must be 256-byte 106 106 * aligned (address bits 0-7 must be 0). 107 + * Later devices have 20 (5000 series) or 30 (higher) queues, but the registers 108 + * for them are in different places. 107 109 * 108 110 * Bit fields in each pointer register: 109 111 * 27-0: TFD CB physical base address [35:8], must be 256-byte aligned 110 112 */ 111 - #define FH_MEM_CBBC_LOWER_BOUND (FH_MEM_LOWER_BOUND + 0x9D0) 112 - #define FH_MEM_CBBC_UPPER_BOUND (FH_MEM_LOWER_BOUND + 0xA10) 113 + #define FH_MEM_CBBC_0_15_LOWER_BOUND (FH_MEM_LOWER_BOUND + 0x9D0) 114 + #define FH_MEM_CBBC_0_15_UPPER_BOUND (FH_MEM_LOWER_BOUND + 0xA10) 115 + #define FH_MEM_CBBC_16_19_LOWER_BOUND (FH_MEM_LOWER_BOUND + 0xBF0) 116 + #define FH_MEM_CBBC_16_19_UPPER_BOUND (FH_MEM_LOWER_BOUND + 0xC00) 117 + #define FH_MEM_CBBC_20_31_LOWER_BOUND (FH_MEM_LOWER_BOUND + 0xB20) 118 + #define FH_MEM_CBBC_20_31_UPPER_BOUND (FH_MEM_LOWER_BOUND + 0xB80) 113 119 114 - /* Find TFD CB base pointer for given queue (range 0-15). */ 115 - #define FH_MEM_CBBC_QUEUE(x) (FH_MEM_CBBC_LOWER_BOUND + (x) * 0x4) 120 + /* Find TFD CB base pointer for given queue */ 121 + static inline unsigned int FH_MEM_CBBC_QUEUE(unsigned int chnl) 122 + { 123 + if (chnl < 16) 124 + return FH_MEM_CBBC_0_15_LOWER_BOUND + 4 * chnl; 125 + if (chnl < 20) 126 + return FH_MEM_CBBC_16_19_LOWER_BOUND + 4 * (chnl - 16); 127 + WARN_ON_ONCE(chnl >= 32); 128 + return FH_MEM_CBBC_20_31_LOWER_BOUND + 4 * (chnl - 20); 129 + } 116 130 117 131 118 132 /**
+9 -1
drivers/net/wireless/iwlwifi/iwl-mac80211.c
··· 1273 1273 struct iwl_rxon_context *tmp, *ctx = NULL; 1274 1274 int err; 1275 1275 enum nl80211_iftype viftype = ieee80211_vif_type_p2p(vif); 1276 + bool reset = false; 1276 1277 1277 1278 IWL_DEBUG_MAC80211(priv, "enter: type %d, addr %pM\n", 1278 1279 viftype, vif->addr); ··· 1295 1294 tmp->interface_modes | tmp->exclusive_interface_modes; 1296 1295 1297 1296 if (tmp->vif) { 1297 + /* On reset we need to add the same interface again */ 1298 + if (tmp->vif == vif) { 1299 + reset = true; 1300 + ctx = tmp; 1301 + break; 1302 + } 1303 + 1298 1304 /* check if this busy context is exclusive */ 1299 1305 if (tmp->exclusive_interface_modes & 1300 1306 BIT(tmp->vif->type)) { ··· 1328 1320 ctx->vif = vif; 1329 1321 1330 1322 err = iwl_setup_interface(priv, ctx); 1331 - if (!err) 1323 + if (!err || reset) 1332 1324 goto out; 1333 1325 1334 1326 ctx->vif = NULL;
+24 -3
drivers/net/wireless/iwlwifi/iwl-prph.h
··· 223 223 #define SCD_AIT (SCD_BASE + 0x0c) 224 224 #define SCD_TXFACT (SCD_BASE + 0x10) 225 225 #define SCD_ACTIVE (SCD_BASE + 0x14) 226 - #define SCD_QUEUE_WRPTR(x) (SCD_BASE + 0x18 + (x) * 4) 227 - #define SCD_QUEUE_RDPTR(x) (SCD_BASE + 0x68 + (x) * 4) 228 226 #define SCD_QUEUECHAIN_SEL (SCD_BASE + 0xe8) 229 227 #define SCD_AGGR_SEL (SCD_BASE + 0x248) 230 228 #define SCD_INTERRUPT_MASK (SCD_BASE + 0x108) 231 - #define SCD_QUEUE_STATUS_BITS(x) (SCD_BASE + 0x10c + (x) * 4) 229 + 230 + static inline unsigned int SCD_QUEUE_WRPTR(unsigned int chnl) 231 + { 232 + if (chnl < 20) 233 + return SCD_BASE + 0x18 + chnl * 4; 234 + WARN_ON_ONCE(chnl >= 32); 235 + return SCD_BASE + 0x284 + (chnl - 20) * 4; 236 + } 237 + 238 + static inline unsigned int SCD_QUEUE_RDPTR(unsigned int chnl) 239 + { 240 + if (chnl < 20) 241 + return SCD_BASE + 0x68 + chnl * 4; 242 + WARN_ON_ONCE(chnl >= 32); 243 + return SCD_BASE + 0x2B4 + (chnl - 20) * 4; 244 + } 245 + 246 + static inline unsigned int SCD_QUEUE_STATUS_BITS(unsigned int chnl) 247 + { 248 + if (chnl < 20) 249 + return SCD_BASE + 0x10c + chnl * 4; 250 + WARN_ON_ONCE(chnl >= 32); 251 + return SCD_BASE + 0x384 + (chnl - 20) * 4; 252 + } 232 253 233 254 /*********************** END TX SCHEDULER *************************************/ 234 255
+1
drivers/net/wireless/iwlwifi/iwl-trans-pcie-rx.c
··· 385 385 ._offset = offset, 386 386 ._page = rxb->page, 387 387 ._page_stolen = false, 388 + .truesize = max_len, 388 389 }; 389 390 390 391 pkt = rxb_addr(&rxcb);
+1
drivers/net/wireless/iwlwifi/iwl-trans.h
··· 258 258 struct page *_page; 259 259 int _offset; 260 260 bool _page_stolen; 261 + unsigned int truesize; 261 262 }; 262 263 263 264 static inline void *rxb_addr(struct iwl_rx_cmd_buffer *r)
+1
drivers/net/wireless/rtlwifi/pci.c
··· 1943 1943 rtl_deinit_deferred_work(hw); 1944 1944 rtlpriv->intf_ops->adapter_stop(hw); 1945 1945 } 1946 + rtlpriv->cfg->ops->disable_interrupt(hw); 1946 1947 1947 1948 /*deinit rfkill */ 1948 1949 rtl_deinit_rfkill(hw);
+1
drivers/net/wireless/ti/wl1251/main.c
··· 479 479 cancel_work_sync(&wl->irq_work); 480 480 cancel_work_sync(&wl->tx_work); 481 481 cancel_work_sync(&wl->filter_work); 482 + cancel_delayed_work_sync(&wl->elp_work); 482 483 483 484 mutex_lock(&wl->mutex); 484 485
+1 -1
drivers/net/wireless/ti/wl1251/sdio.c
··· 315 315 316 316 if (wl->irq) 317 317 free_irq(wl->irq, wl); 318 - kfree(wl_sdio); 319 318 wl1251_free_hw(wl); 319 + kfree(wl_sdio); 320 320 321 321 sdio_claim_host(func); 322 322 sdio_release_irq(func);
+1
drivers/pci/Makefile
··· 42 42 obj-$(CONFIG_PARISC) += setup-bus.o 43 43 obj-$(CONFIG_SUPERH) += setup-bus.o setup-irq.o 44 44 obj-$(CONFIG_PPC) += setup-bus.o 45 + obj-$(CONFIG_FRV) += setup-bus.o 45 46 obj-$(CONFIG_MIPS) += setup-bus.o setup-irq.o 46 47 obj-$(CONFIG_X86_VISWS) += setup-irq.o 47 48 obj-$(CONFIG_MN10300) += setup-bus.o
+45 -22
drivers/platform/x86/acerhdf.c
··· 50 50 */ 51 51 #undef START_IN_KERNEL_MODE 52 52 53 - #define DRV_VER "0.5.24" 53 + #define DRV_VER "0.5.26" 54 54 55 55 /* 56 56 * According to the Atom N270 datasheet, ··· 83 83 #endif 84 84 85 85 static unsigned int interval = 10; 86 - static unsigned int fanon = 63000; 87 - static unsigned int fanoff = 58000; 86 + static unsigned int fanon = 60000; 87 + static unsigned int fanoff = 53000; 88 88 static unsigned int verbose; 89 89 static unsigned int fanstate = ACERHDF_FAN_AUTO; 90 90 static char force_bios[16]; ··· 150 150 {"Acer", "AOA150", "v0.3308", 0x55, 0x58, {0x20, 0x00} }, 151 151 {"Acer", "AOA150", "v0.3309", 0x55, 0x58, {0x20, 0x00} }, 152 152 {"Acer", "AOA150", "v0.3310", 0x55, 0x58, {0x20, 0x00} }, 153 + /* LT1005u */ 154 + {"Acer", "LT-10Q", "v0.3310", 0x55, 0x58, {0x20, 0x00} }, 153 155 /* Acer 1410 */ 154 156 {"Acer", "Aspire 1410", "v0.3108", 0x55, 0x58, {0x9e, 0x00} }, 155 157 {"Acer", "Aspire 1410", "v0.3113", 0x55, 0x58, {0x9e, 0x00} }, ··· 163 161 {"Acer", "Aspire 1410", "v1.3303", 0x55, 0x58, {0x9e, 0x00} }, 164 162 {"Acer", "Aspire 1410", "v1.3308", 0x55, 0x58, {0x9e, 0x00} }, 165 163 {"Acer", "Aspire 1410", "v1.3310", 0x55, 0x58, {0x9e, 0x00} }, 164 + {"Acer", "Aspire 1410", "v1.3314", 0x55, 0x58, {0x9e, 0x00} }, 166 165 /* Acer 1810xx */ 167 166 {"Acer", "Aspire 1810TZ", "v0.3108", 0x55, 0x58, {0x9e, 0x00} }, 168 167 {"Acer", "Aspire 1810T", "v0.3108", 0x55, 0x58, {0x9e, 0x00} }, ··· 186 183 {"Acer", "Aspire 1810TZ", "v1.3310", 0x55, 0x58, {0x9e, 0x00} }, 187 184 {"Acer", "Aspire 1810T", "v1.3310", 0x55, 0x58, {0x9e, 0x00} }, 188 185 {"Acer", "Aspire 1810TZ", "v1.3314", 0x55, 0x58, {0x9e, 0x00} }, 186 + {"Acer", "Aspire 1810T", "v1.3314", 0x55, 0x58, {0x9e, 0x00} }, 189 187 /* Acer 531 */ 188 + {"Acer", "AO531h", "v0.3104", 0x55, 0x58, {0x20, 0x00} }, 190 189 {"Acer", "AO531h", "v0.3201", 0x55, 0x58, {0x20, 0x00} }, 190 + {"Acer", "AO531h", "v0.3304", 0x55, 0x58, {0x20, 0x00} }, 191 + /* Acer 751 */ 192 + {"Acer", "AO751h", "V0.3212", 
0x55, 0x58, {0x21, 0x00} }, 193 + /* Acer 1825 */ 194 + {"Acer", "Aspire 1825PTZ", "V1.3118", 0x55, 0x58, {0x9e, 0x00} }, 195 + {"Acer", "Aspire 1825PTZ", "V1.3127", 0x55, 0x58, {0x9e, 0x00} }, 196 + /* Acer TravelMate 7730 */ 197 + {"Acer", "TravelMate 7730G", "v0.3509", 0x55, 0x58, {0xaf, 0x00} }, 191 198 /* Gateway */ 192 - {"Gateway", "AOA110", "v0.3103", 0x55, 0x58, {0x21, 0x00} }, 193 - {"Gateway", "AOA150", "v0.3103", 0x55, 0x58, {0x20, 0x00} }, 194 - {"Gateway", "LT31", "v1.3103", 0x55, 0x58, {0x9e, 0x00} }, 195 - {"Gateway", "LT31", "v1.3201", 0x55, 0x58, {0x9e, 0x00} }, 196 - {"Gateway", "LT31", "v1.3302", 0x55, 0x58, {0x9e, 0x00} }, 199 + {"Gateway", "AOA110", "v0.3103", 0x55, 0x58, {0x21, 0x00} }, 200 + {"Gateway", "AOA150", "v0.3103", 0x55, 0x58, {0x20, 0x00} }, 201 + {"Gateway", "LT31", "v1.3103", 0x55, 0x58, {0x9e, 0x00} }, 202 + {"Gateway", "LT31", "v1.3201", 0x55, 0x58, {0x9e, 0x00} }, 203 + {"Gateway", "LT31", "v1.3302", 0x55, 0x58, {0x9e, 0x00} }, 204 + {"Gateway", "LT31", "v1.3303t", 0x55, 0x58, {0x9e, 0x00} }, 197 205 /* Packard Bell */ 198 - {"Packard Bell", "DOA150", "v0.3104", 0x55, 0x58, {0x21, 0x00} }, 199 - {"Packard Bell", "DOA150", "v0.3105", 0x55, 0x58, {0x20, 0x00} }, 200 - {"Packard Bell", "AOA110", "v0.3105", 0x55, 0x58, {0x21, 0x00} }, 201 - {"Packard Bell", "AOA150", "v0.3105", 0x55, 0x58, {0x20, 0x00} }, 202 - {"Packard Bell", "DOTMU", "v1.3303", 0x55, 0x58, {0x9e, 0x00} }, 203 - {"Packard Bell", "DOTMU", "v0.3120", 0x55, 0x58, {0x9e, 0x00} }, 204 - {"Packard Bell", "DOTMU", "v0.3108", 0x55, 0x58, {0x9e, 0x00} }, 205 - {"Packard Bell", "DOTMU", "v0.3113", 0x55, 0x58, {0x9e, 0x00} }, 206 - {"Packard Bell", "DOTMU", "v0.3115", 0x55, 0x58, {0x9e, 0x00} }, 207 - {"Packard Bell", "DOTMU", "v0.3117", 0x55, 0x58, {0x9e, 0x00} }, 208 - {"Packard Bell", "DOTMU", "v0.3119", 0x55, 0x58, {0x9e, 0x00} }, 209 - {"Packard Bell", "DOTMU", "v1.3204", 0x55, 0x58, {0x9e, 0x00} }, 210 - {"Packard Bell", "DOTMA", "v1.3201", 0x55, 0x58, {0x9e, 0x00} 
}, 211 - {"Packard Bell", "DOTMA", "v1.3302", 0x55, 0x58, {0x9e, 0x00} }, 206 + {"Packard Bell", "DOA150", "v0.3104", 0x55, 0x58, {0x21, 0x00} }, 207 + {"Packard Bell", "DOA150", "v0.3105", 0x55, 0x58, {0x20, 0x00} }, 208 + {"Packard Bell", "AOA110", "v0.3105", 0x55, 0x58, {0x21, 0x00} }, 209 + {"Packard Bell", "AOA150", "v0.3105", 0x55, 0x58, {0x20, 0x00} }, 210 + {"Packard Bell", "ENBFT", "V1.3118", 0x55, 0x58, {0x9e, 0x00} }, 211 + {"Packard Bell", "ENBFT", "V1.3127", 0x55, 0x58, {0x9e, 0x00} }, 212 + {"Packard Bell", "DOTMU", "v1.3303", 0x55, 0x58, {0x9e, 0x00} }, 213 + {"Packard Bell", "DOTMU", "v0.3120", 0x55, 0x58, {0x9e, 0x00} }, 214 + {"Packard Bell", "DOTMU", "v0.3108", 0x55, 0x58, {0x9e, 0x00} }, 215 + {"Packard Bell", "DOTMU", "v0.3113", 0x55, 0x58, {0x9e, 0x00} }, 216 + {"Packard Bell", "DOTMU", "v0.3115", 0x55, 0x58, {0x9e, 0x00} }, 217 + {"Packard Bell", "DOTMU", "v0.3117", 0x55, 0x58, {0x9e, 0x00} }, 218 + {"Packard Bell", "DOTMU", "v0.3119", 0x55, 0x58, {0x9e, 0x00} }, 219 + {"Packard Bell", "DOTMU", "v1.3204", 0x55, 0x58, {0x9e, 0x00} }, 220 + {"Packard Bell", "DOTMA", "v1.3201", 0x55, 0x58, {0x9e, 0x00} }, 221 + {"Packard Bell", "DOTMA", "v1.3302", 0x55, 0x58, {0x9e, 0x00} }, 222 + {"Packard Bell", "DOTMA", "v1.3303t", 0x55, 0x58, {0x9e, 0x00} }, 223 + {"Packard Bell", "DOTVR46", "v1.3308", 0x55, 0x58, {0x9e, 0x00} }, 212 224 /* pewpew-terminator */ 213 225 {"", "", "", 0, 0, {0, 0} } 214 226 }; ··· 719 701 MODULE_AUTHOR("Peter Feuerer"); 720 702 MODULE_DESCRIPTION("Aspire One temperature and fan driver"); 721 703 MODULE_ALIAS("dmi:*:*Acer*:pnAOA*:"); 704 + MODULE_ALIAS("dmi:*:*Acer*:pnAO751h*:"); 722 705 MODULE_ALIAS("dmi:*:*Acer*:pnAspire*1410*:"); 723 706 MODULE_ALIAS("dmi:*:*Acer*:pnAspire*1810*:"); 707 + MODULE_ALIAS("dmi:*:*Acer*:pnAspire*1825PTZ:"); 724 708 MODULE_ALIAS("dmi:*:*Acer*:pnAO531*:"); 709 + MODULE_ALIAS("dmi:*:*Acer*:TravelMate*7730G:"); 725 710 MODULE_ALIAS("dmi:*:*Gateway*:pnAOA*:"); 726 711 
MODULE_ALIAS("dmi:*:*Gateway*:pnLT31*:"); 727 712 MODULE_ALIAS("dmi:*:*Packard*Bell*:pnAOA*:"); 728 713 MODULE_ALIAS("dmi:*:*Packard*Bell*:pnDOA*:"); 729 714 MODULE_ALIAS("dmi:*:*Packard*Bell*:pnDOTMU*:"); 715 + MODULE_ALIAS("dmi:*:*Packard*Bell*:pnENBFT*:"); 730 716 MODULE_ALIAS("dmi:*:*Packard*Bell*:pnDOTMA*:"); 717 + MODULE_ALIAS("dmi:*:*Packard*Bell*:pnDOTVR46*:"); 731 718 732 719 module_init(acerhdf_init); 733 720 module_exit(acerhdf_exit);
+1
drivers/platform/x86/dell-laptop.c
··· 212 212 }, 213 213 .driver_data = &quirk_dell_vostro_v130, 214 214 }, 215 + { } 215 216 }; 216 217 217 218 static struct calling_interface_buffer *buffer;
+1 -1
drivers/platform/x86/intel_ips.c
··· 1565 1565 ips->poll_turbo_status = true; 1566 1566 1567 1567 if (!ips_get_i915_syms(ips)) { 1568 - dev_err(&dev->dev, "failed to get i915 symbols, graphics turbo disabled\n"); 1568 + dev_info(&dev->dev, "failed to get i915 symbols, graphics turbo disabled until i915 loads\n"); 1569 1569 ips->gpu_turbo_enabled = false; 1570 1570 } else { 1571 1571 dev_dbg(&dev->dev, "graphics turbo enabled\n");
+1
drivers/rtc/rtc-ds1307.c
··· 902 902 } 903 903 ds1307->nvram->attr.name = "nvram"; 904 904 ds1307->nvram->attr.mode = S_IRUGO | S_IWUSR; 905 + sysfs_bin_attr_init(ds1307->nvram); 905 906 ds1307->nvram->read = ds1307_nvram_read, 906 907 ds1307->nvram->write = ds1307_nvram_write, 907 908 ds1307->nvram->size = chip->nvram_size;
+4 -2
drivers/s390/net/qeth_core_main.c
··· 1672 1672 { 1673 1673 QETH_DBF_TEXT(SETUP, 2, "cfgblkt"); 1674 1674 1675 - if (prcd[74] == 0xF0 && prcd[75] == 0xF0 && prcd[76] == 0xF5) { 1675 + if (prcd[74] == 0xF0 && prcd[75] == 0xF0 && 1676 + (prcd[76] == 0xF5 || prcd[76] == 0xF6)) { 1676 1677 card->info.blkt.time_total = 250; 1677 1678 card->info.blkt.inter_packet = 5; 1678 1679 card->info.blkt.inter_packet_jumbo = 15; ··· 4541 4540 goto out_offline; 4542 4541 } 4543 4542 qeth_configure_unitaddr(card, prcd); 4544 - qeth_configure_blkt_default(card, prcd); 4543 + if (ddev_offline) 4544 + qeth_configure_blkt_default(card, prcd); 4545 4545 kfree(prcd); 4546 4546 4547 4547 rc = qdio_get_ssqd_desc(ddev, &card->ssqd);
+5 -1
drivers/scsi/ipr.c
··· 4549 4549 ENTER; 4550 4550 if (sdev->sdev_target) 4551 4551 sata_port = sdev->sdev_target->hostdata; 4552 - if (sata_port) 4552 + if (sata_port) { 4553 4553 rc = ata_sas_port_init(sata_port->ap); 4554 + if (rc == 0) 4555 + rc = ata_sas_sync_probe(sata_port->ap); 4556 + } 4557 + 4554 4558 if (rc) 4555 4559 ipr_slave_destroy(sdev); 4556 4560
+7 -5
drivers/scsi/libfc/fc_lport.c
··· 1742 1742 1743 1743 mfs = ntohs(flp->fl_csp.sp_bb_data) & 1744 1744 FC_SP_BB_DATA_MASK; 1745 - if (mfs >= FC_SP_MIN_MAX_PAYLOAD && 1746 - mfs <= lport->mfs) { 1747 - lport->mfs = mfs; 1748 - fc_host_maxframe_size(lport->host) = mfs; 1749 - } else { 1745 + 1746 + if (mfs < FC_SP_MIN_MAX_PAYLOAD || mfs > FC_SP_MAX_MAX_PAYLOAD) { 1750 1747 FC_LPORT_DBG(lport, "FLOGI bad mfs:%hu response, " 1751 1748 "lport->mfs:%hu\n", mfs, lport->mfs); 1752 1749 fc_lport_error(lport, fp); 1753 1750 goto err; 1751 + } 1752 + 1753 + if (mfs <= lport->mfs) { 1754 + lport->mfs = mfs; 1755 + fc_host_maxframe_size(lport->host) = mfs; 1754 1756 } 1755 1757 1756 1758 csp_flags = ntohs(flp->fl_csp.sp_features);
+10 -23
drivers/scsi/libsas/sas_ata.c
··· 546 546 .port_ops = &sas_sata_ops 547 547 }; 548 548 549 - int sas_ata_init_host_and_port(struct domain_device *found_dev) 549 + int sas_ata_init(struct domain_device *found_dev) 550 550 { 551 551 struct sas_ha_struct *ha = found_dev->port->ha; 552 552 struct Scsi_Host *shost = ha->core.shost; 553 553 struct ata_port *ap; 554 + int rc; 554 555 555 556 ata_host_init(&found_dev->sata_dev.ata_host, 556 557 ha->dev, ··· 568 567 ap->private_data = found_dev; 569 568 ap->cbl = ATA_CBL_SATA; 570 569 ap->scsi_host = shost; 571 - /* publish initialized ata port */ 572 - smp_wmb(); 570 + rc = ata_sas_port_init(ap); 571 + if (rc) { 572 + ata_sas_port_destroy(ap); 573 + return rc; 574 + } 573 575 found_dev->sata_dev.ap = ap; 574 576 575 577 return 0; ··· 652 648 void sas_probe_sata(struct asd_sas_port *port) 653 649 { 654 650 struct domain_device *dev, *n; 655 - int err; 656 651 657 652 mutex_lock(&port->ha->disco_mutex); 658 - list_for_each_entry_safe(dev, n, &port->disco_list, disco_list_node) { 653 + list_for_each_entry(dev, &port->disco_list, disco_list_node) { 659 654 if (!dev_is_sata(dev)) 660 655 continue; 661 656 662 - err = sas_ata_init_host_and_port(dev); 663 - if (err) 664 - sas_fail_probe(dev, __func__, err); 665 - else 666 - ata_sas_async_port_init(dev->sata_dev.ap); 657 + ata_sas_async_probe(dev->sata_dev.ap); 667 658 } 668 659 mutex_unlock(&port->ha->disco_mutex); 669 660 ··· 717 718 sas_put_device(dev); 718 719 } 719 720 720 - static bool sas_ata_dev_eh_valid(struct domain_device *dev) 721 - { 722 - struct ata_port *ap; 723 - 724 - if (!dev_is_sata(dev)) 725 - return false; 726 - ap = dev->sata_dev.ap; 727 - /* consume fully initialized ata ports */ 728 - smp_rmb(); 729 - return !!ap; 730 - } 731 - 732 721 void sas_ata_strategy_handler(struct Scsi_Host *shost) 733 722 { 734 723 struct sas_ha_struct *sas_ha = SHOST_TO_SAS_HA(shost); ··· 740 753 741 754 spin_lock(&port->dev_list_lock); 742 755 list_for_each_entry(dev, &port->dev_list, dev_list_node) { 743 - if (!sas_ata_dev_eh_valid(dev)) 756 + if (!dev_is_sata(dev)) 744 757 continue; 745 758 async_schedule_domain(async_sas_ata_eh, dev, &async); 746 759 }
+35 -26
drivers/scsi/libsas/sas_discover.c
··· 72 72 struct asd_sas_phy *phy; 73 73 struct sas_rphy *rphy; 74 74 struct domain_device *dev; 75 + int rc = -ENODEV; 75 76 76 77 dev = sas_alloc_device(); 77 78 if (!dev) ··· 111 110 112 111 sas_init_dev(dev); 113 112 113 + dev->port = port; 114 114 switch (dev->dev_type) { 115 - case SAS_END_DEV: 116 115 case SATA_DEV: 116 + rc = sas_ata_init(dev); 117 + if (rc) { 118 + rphy = NULL; 119 + break; 120 + } 121 + /* fall through */ 122 + case SAS_END_DEV: 117 123 rphy = sas_end_device_alloc(port->port); 118 124 break; 119 125 case EDGE_DEV: ··· 139 131 140 132 if (!rphy) { 141 133 sas_put_device(dev); 142 - return -ENODEV; 134 + return rc; 143 135 } 144 136 145 - spin_lock_irq(&port->phy_list_lock); 146 - list_for_each_entry(phy, &port->phy_list, port_phy_el) 147 - sas_phy_set_target(phy, dev); 148 - spin_unlock_irq(&port->phy_list_lock); 149 137 rphy->identify.phy_identifier = phy->phy->identify.phy_identifier; 150 138 memcpy(dev->sas_addr, port->attached_sas_addr, SAS_ADDR_SIZE); 151 139 sas_fill_in_rphy(dev, rphy); 152 140 sas_hash_addr(dev->hashed_sas_addr, dev->sas_addr); 153 141 port->port_dev = dev; 154 - dev->port = port; 155 142 dev->linkrate = port->linkrate; 156 143 dev->min_linkrate = port->linkrate; 157 144 dev->max_linkrate = port->linkrate; ··· 158 155 sas_device_set_phy(dev, port->port); 159 156 160 157 dev->rphy = rphy; 158 + get_device(&dev->rphy->dev); 161 159 162 160 if (dev_is_sata(dev) || dev->dev_type == SAS_END_DEV) 163 161 list_add_tail(&dev->disco_list_node, &port->disco_list); ··· 167 163 list_add_tail(&dev->dev_list_node, &port->dev_list); 168 164 spin_unlock_irq(&port->dev_list_lock); 169 165 } 166 + 167 + spin_lock_irq(&port->phy_list_lock); 168 + list_for_each_entry(phy, &port->phy_list, port_phy_el) 169 + sas_phy_set_target(phy, dev); 170 + spin_unlock_irq(&port->phy_list_lock); 170 171 171 172 return 0; 172 173 } ··· 214 205 static void sas_probe_devices(struct work_struct *work) 215 206 { 216 207 struct domain_device *dev, *n; 217 - struct sas_discovery_event *ev = 218 - container_of(work, struct sas_discovery_event, work); 208 + struct sas_discovery_event *ev = to_sas_discovery_event(work); 219 209 struct asd_sas_port *port = ev->port; 220 210 221 211 clear_bit(DISCE_PROBE, &port->disc.pending); ··· 263 255 { 264 256 struct domain_device *dev = container_of(kref, typeof(*dev), kref); 265 257 258 + put_device(&dev->rphy->dev); 259 + dev->rphy = NULL; 260 + 266 261 if (dev->parent) 267 262 sas_put_device(dev->parent); 268 263 ··· 302 291 static void sas_destruct_devices(struct work_struct *work) 303 292 { 304 293 struct domain_device *dev, *n; 305 - struct sas_discovery_event *ev = 306 - container_of(work, struct sas_discovery_event, work); 294 + struct sas_discovery_event *ev = to_sas_discovery_event(work); 307 295 struct asd_sas_port *port = ev->port; 308 296 309 297 clear_bit(DISCE_DESTRUCT, &port->disc.pending); ··· 312 302 313 303 sas_remove_children(&dev->rphy->dev); 314 304 sas_rphy_delete(dev->rphy); 315 - dev->rphy = NULL; 316 305 sas_unregister_common_dev(port, dev); 317 306 } 318 307 } ··· 323 314 /* this rphy never saw sas_rphy_add */ 324 315 list_del_init(&dev->disco_list_node); 325 316 sas_rphy_free(dev->rphy); 326 - dev->rphy = NULL; 327 317 sas_unregister_common_dev(port, dev); 318 + return; 328 319 } 329 320 330 - if (dev->rphy && !test_and_set_bit(SAS_DEV_DESTROY, &dev->state)) { 321 + if (!test_and_set_bit(SAS_DEV_DESTROY, &dev->state)) { 331 322 sas_rphy_unlink(dev->rphy); 332 323 list_move_tail(&dev->disco_list_node, &port->destroy_list); 333 324 sas_discover_event(dev->port, DISCE_DESTRUCT); ··· 386 377 { 387 378 struct domain_device *dev; 388 379 int error = 0; 389 - struct sas_discovery_event *ev = 390 - container_of(work, struct sas_discovery_event, work); 380 + struct sas_discovery_event *ev = to_sas_discovery_event(work); 391 381 struct asd_sas_port *port = ev->port; 392 382 393 383 clear_bit(DISCE_DISCOVER_DOMAIN, &port->disc.pending); ··· 427 419 428 420 if (error) { 429 421 sas_rphy_free(dev->rphy); 430 - dev->rphy = NULL; 431 - 432 422 list_del_init(&dev->disco_list_node); 433 423 spin_lock_irq(&port->dev_list_lock); 434 424 list_del_init(&dev->dev_list_node); ··· 443 437 static void sas_revalidate_domain(struct work_struct *work) 444 438 { 445 439 int res = 0; 446 - struct sas_discovery_event *ev = 447 - container_of(work, struct sas_discovery_event, work); 440 + struct sas_discovery_event *ev = to_sas_discovery_event(work); 448 441 struct asd_sas_port *port = ev->port; 449 442 struct sas_ha_struct *ha = port->ha; 450 443 ··· 471 466 472 467 /* ---------- Events ---------- */ 473 468 474 - static void sas_chain_work(struct sas_ha_struct *ha, struct work_struct *work) 469 + static void sas_chain_work(struct sas_ha_struct *ha, struct sas_work *sw) 475 470 { 476 - /* chained work is not subject to SA_HA_DRAINING or SAS_HA_REGISTERED */ 477 - scsi_queue_work(ha->core.shost, work); 471 + /* chained work is not subject to SA_HA_DRAINING or 472 + * SAS_HA_REGISTERED, because it is either submitted in the 473 + * workqueue, or known to be submitted from a context that is 474 + * not racing against draining 475 + */ 476 + scsi_queue_work(ha->core.shost, &sw->work); 478 477 } 479 478 480 479 static void sas_chain_event(int event, unsigned long *pending, 481 - struct work_struct *work, 480 + struct sas_work *sw, 482 481 struct sas_ha_struct *ha) 483 482 { 484 483 if (!test_and_set_bit(event, pending)) { 485 484 unsigned long flags; 486 485 487 486 spin_lock_irqsave(&ha->state_lock, flags); 488 - sas_chain_work(ha, work); 487 + sas_chain_work(ha, sw); 489 488 spin_unlock_irqrestore(&ha->state_lock, flags); 490 489 } 491 490 } ··· 528 519 529 520 disc->pending = 0; 530 521 for (i = 0; i < DISC_NUM_EVENTS; i++) { 531 - INIT_WORK(&disc->disc_work[i].work, sas_event_fns[i]); 522 + INIT_SAS_WORK(&disc->disc_work[i].work, sas_event_fns[i]); 532 523 disc->disc_work[i].port = port; 533 524 } 534 525 }
+13 -11
drivers/scsi/libsas/sas_event.c
··· 27 27 #include "sas_internal.h" 28 28 #include "sas_dump.h" 29 29 30 - void sas_queue_work(struct sas_ha_struct *ha, struct work_struct *work) 30 + void sas_queue_work(struct sas_ha_struct *ha, struct sas_work *sw) 31 31 { 32 32 if (!test_bit(SAS_HA_REGISTERED, &ha->state)) 33 33 return; 34 34 35 - if (test_bit(SAS_HA_DRAINING, &ha->state)) 36 - list_add(&work->entry, &ha->defer_q); 37 - else 38 - scsi_queue_work(ha->core.shost, work); 35 + if (test_bit(SAS_HA_DRAINING, &ha->state)) { 36 + /* add it to the defer list, if not already pending */ 37 + if (list_empty(&sw->drain_node)) 38 + list_add(&sw->drain_node, &ha->defer_q); 39 + } else 40 + scsi_queue_work(ha->core.shost, &sw->work); 39 41 } 40 42 41 43 static void sas_queue_event(int event, unsigned long *pending, 42 - struct work_struct *work, 44 + struct sas_work *work, 43 45 struct sas_ha_struct *ha) 44 46 { 45 47 if (!test_and_set_bit(event, pending)) { ··· 57 55 void __sas_drain_work(struct sas_ha_struct *ha) 58 56 { 59 57 struct workqueue_struct *wq = ha->core.shost->work_q; 60 - struct work_struct *w, *_w; 58 + struct sas_work *sw, *_sw; 61 59 62 60 set_bit(SAS_HA_DRAINING, &ha->state); 63 61 /* flush submitters */ ··· 68 66 69 67 spin_lock_irq(&ha->state_lock); 70 68 clear_bit(SAS_HA_DRAINING, &ha->state); 71 - list_for_each_entry_safe(w, _w, &ha->defer_q, entry) { 72 - list_del_init(&w->entry); 73 - sas_queue_work(ha, w); 69 + list_for_each_entry_safe(sw, _sw, &ha->defer_q, drain_node) { 70 + list_del_init(&sw->drain_node); 71 + sas_queue_work(ha, sw); 74 72 } 75 73 spin_unlock_irq(&ha->state_lock); 76 74 } ··· 153 151 int i; 154 152 155 153 for (i = 0; i < HA_NUM_EVENTS; i++) { 156 - INIT_WORK(&sas_ha->ha_events[i].work, sas_ha_event_fns[i]); 154 + INIT_SAS_WORK(&sas_ha->ha_events[i].work, sas_ha_event_fns[i]); 157 155 sas_ha->ha_events[i].ha = sas_ha; 158 156 } 159 157
+44 -12
drivers/scsi/libsas/sas_expander.c
··· 202 202 u8 sas_addr[SAS_ADDR_SIZE]; 203 203 struct smp_resp *resp = rsp; 204 204 struct discover_resp *dr = &resp->disc; 205 + struct sas_ha_struct *ha = dev->port->ha; 205 206 struct expander_device *ex = &dev->ex_dev; 206 207 struct ex_phy *phy = &ex->ex_phy[phy_id]; 207 208 struct sas_rphy *rphy = dev->rphy; ··· 210 209 char *type; 211 210 212 211 if (new_phy) { 212 + if (WARN_ON_ONCE(test_bit(SAS_HA_ATA_EH_ACTIVE, &ha->state))) 213 + return; 213 214 phy->phy = sas_phy_alloc(&rphy->dev, phy_id); 214 215 215 216 /* FIXME: error_handling */ ··· 236 233 memcpy(sas_addr, phy->attached_sas_addr, SAS_ADDR_SIZE); 237 234 238 235 phy->attached_dev_type = to_dev_type(dr); 236 + if (test_bit(SAS_HA_ATA_EH_ACTIVE, &ha->state)) 237 + goto out; 239 238 phy->phy_id = phy_id; 240 239 phy->linkrate = dr->linkrate; 241 240 phy->attached_sata_host = dr->attached_sata_host; ··· 245 240 phy->attached_sata_ps = dr->attached_sata_ps; 246 241 phy->attached_iproto = dr->iproto << 1; 247 242 phy->attached_tproto = dr->tproto << 1; 248 - memcpy(phy->attached_sas_addr, dr->attached_sas_addr, SAS_ADDR_SIZE); 243 + /* help some expanders that fail to zero sas_address in the 'no 244 + * device' case 245 + */ 246 + if (phy->attached_dev_type == NO_DEVICE || 247 + phy->linkrate < SAS_LINK_RATE_1_5_GBPS) 248 + memset(phy->attached_sas_addr, 0, SAS_ADDR_SIZE); 249 + else 250 + memcpy(phy->attached_sas_addr, dr->attached_sas_addr, SAS_ADDR_SIZE); 249 251 phy->attached_phy_id = dr->attached_phy_id; 250 252 phy->phy_change_count = dr->change_count; 251 253 phy->routing_attr = dr->routing_attr; ··· 278 266 return; 279 267 } 280 268 269 + out: 281 270 switch (phy->attached_dev_type) { 282 271 case SATA_PENDING: 283 272 type = "stp pending"; ··· 317 304 else 318 305 return; 319 306 320 - SAS_DPRINTK("ex %016llx phy%02d:%c:%X attached: %016llx (%s)\n", 307 + /* if the attached device type changed and ata_eh is active, 308 + * make sure we run revalidation when eh completes (see: 309 sas_enable_revalidation) 310 */ 311 + if (test_bit(SAS_HA_ATA_EH_ACTIVE, &ha->state)) 312 + set_bit(DISCE_REVALIDATE_DOMAIN, &dev->port->disc.pending); 313 + 314 + SAS_DPRINTK("%sex %016llx phy%02d:%c:%X attached: %016llx (%s)\n", 315 + test_bit(SAS_HA_ATA_EH_ACTIVE, &ha->state) ? "ata: " : "", 321 316 SAS_ADDR(dev->sas_addr), phy->phy_id, 322 317 sas_route_char(dev, phy), phy->linkrate, 323 318 SAS_ADDR(phy->attached_sas_addr), type); ··· 797 776 if (res) 798 777 goto out_free; 799 778 779 + sas_init_dev(child); 780 + res = sas_ata_init(child); 781 + if (res) 782 + goto out_free; 800 783 rphy = sas_end_device_alloc(phy->port); 801 - if (unlikely(!rphy)) 784 + if (!rphy) 802 785 goto out_free; 803 786 804 - sas_init_dev(child); 805 - 806 787 child->rphy = rphy; 788 + get_device(&rphy->dev); 807 789 808 790 list_add_tail(&child->disco_list_node, &parent->port->disco_list); 809 791 ··· 830 806 sas_init_dev(child); 831 807 832 808 child->rphy = rphy; 809 + get_device(&rphy->dev); 833 810 sas_fill_in_rphy(child, rphy); 834 811 835 812 list_add_tail(&child->disco_list_node, &parent->port->disco_list); ··· 855 830 856 831 out_list_del: 857 832 sas_rphy_free(child->rphy); 858 - child->rphy = NULL; 859 - 860 833 list_del(&child->disco_list_node); 861 834 spin_lock_irq(&parent->port->dev_list_lock); 862 835 list_del(&child->dev_list_node); ··· 934 911 } 935 912 port = parent->port; 936 913 child->rphy = rphy; 914 + get_device(&rphy->dev); 937 915 edev = rphy_to_expander_device(rphy); 938 916 child->dev_type = phy->attached_dev_type; 939 917 kref_get(&parent->kref); ··· 958 934 959 935 res = sas_discover_expander(child); 960 936 if (res) { 937 + sas_rphy_delete(rphy); 961 938 spin_lock_irq(&parent->port->dev_list_lock); 962 939 list_del(&child->dev_list_node); 963 940 spin_unlock_irq(&parent->port->dev_list_lock); ··· 1743 1718 int phy_change_count = 0; 1744 1719 1745 1720 res = sas_get_phy_change_count(dev, i, &phy_change_count); 1746 - if (res) 1747 - goto out; 1748 - else if (phy_change_count != ex->ex_phy[i].phy_change_count) { 1721 + switch (res) { 1722 + case SMP_RESP_PHY_VACANT: 1723 + case SMP_RESP_NO_PHY: 1724 + continue; 1725 + case SMP_RESP_FUNC_ACC: 1726 + break; 1727 + default: 1728 + return res; 1729 + } 1730 + 1731 + if (phy_change_count != ex->ex_phy[i].phy_change_count) { 1749 1732 if (update) 1750 1733 ex->ex_phy[i].phy_change_count = 1751 1734 phy_change_count; ··· 1761 1728 return 0; 1762 1729 } 1763 1730 } 1764 - out: 1765 - return res; 1731 + return 0; 1766 1732 } 1767 1733 1768 1734 static int sas_get_ex_change_count(struct domain_device *dev, int *ecc)
+5 -6
drivers/scsi/libsas/sas_init.c
··· 94 94 95 95 void sas_hae_reset(struct work_struct *work) 96 96 { 97 - struct sas_ha_event *ev = 98 - container_of(work, struct sas_ha_event, work); 97 + struct sas_ha_event *ev = to_sas_ha_event(work); 99 98 struct sas_ha_struct *ha = ev->ha; 100 99 101 100 clear_bit(HAE_RESET, &ha->pending); ··· 368 369 369 370 static void phy_reset_work(struct work_struct *work) 370 371 { 371 - struct sas_phy_data *d = container_of(work, typeof(*d), reset_work); 372 + struct sas_phy_data *d = container_of(work, typeof(*d), reset_work.work); 372 373 373 374 d->reset_result = transport_sas_phy_reset(d->phy, d->hard_reset); 374 375 } 375 376 376 377 static void phy_enable_work(struct work_struct *work) 377 378 { 378 - struct sas_phy_data *d = container_of(work, typeof(*d), enable_work); 379 + struct sas_phy_data *d = container_of(work, typeof(*d), enable_work.work); 379 380 380 381 d->enable_result = sas_phy_enable(d->phy, d->enable); 381 382 } ··· 388 389 return -ENOMEM; 389 390 390 391 mutex_init(&d->event_lock); 391 - INIT_WORK(&d->reset_work, phy_reset_work); 392 - INIT_WORK(&d->enable_work, phy_enable_work); 392 + INIT_SAS_WORK(&d->reset_work, phy_reset_work); 393 + INIT_SAS_WORK(&d->enable_work, phy_enable_work); 393 394 d->phy = phy; 394 395 phy->hostdata = d; 395 396
+3 -3
drivers/scsi/libsas/sas_internal.h
··· 45 45 struct mutex event_lock; 46 46 int hard_reset; 47 47 int reset_result; 48 - struct work_struct reset_work; 48 + struct sas_work reset_work; 49 49 int enable; 50 50 int enable_result; 51 - struct work_struct enable_work; 51 + struct sas_work enable_work; 52 52 }; 53 53 54 54 void sas_scsi_recover_host(struct Scsi_Host *shost); ··· 80 80 void sas_porte_link_reset_err(struct work_struct *work); 81 81 void sas_porte_timer_event(struct work_struct *work); 82 82 void sas_porte_hard_reset(struct work_struct *work); 83 - void sas_queue_work(struct sas_ha_struct *ha, struct work_struct *work); 83 + void sas_queue_work(struct sas_ha_struct *ha, struct sas_work *sw); 84 84 85 85 int sas_notify_lldd_dev_found(struct domain_device *); 86 86 void sas_notify_lldd_dev_gone(struct domain_device *);
+7 -14
drivers/scsi/libsas/sas_phy.c
··· 32 32 33 33 static void sas_phye_loss_of_signal(struct work_struct *work) 34 34 { 35 - struct asd_sas_event *ev = 36 - container_of(work, struct asd_sas_event, work); 35 + struct asd_sas_event *ev = to_asd_sas_event(work); 37 36 struct asd_sas_phy *phy = ev->phy; 38 37 39 38 clear_bit(PHYE_LOSS_OF_SIGNAL, &phy->phy_events_pending); ··· 42 43 43 44 static void sas_phye_oob_done(struct work_struct *work) 44 45 { 45 - struct asd_sas_event *ev = 46 - container_of(work, struct asd_sas_event, work); 46 + struct asd_sas_event *ev = to_asd_sas_event(work); 47 47 struct asd_sas_phy *phy = ev->phy; 48 48 49 49 clear_bit(PHYE_OOB_DONE, &phy->phy_events_pending); ··· 51 53 52 54 static void sas_phye_oob_error(struct work_struct *work) 53 55 { 54 - struct asd_sas_event *ev = 55 - container_of(work, struct asd_sas_event, work); 56 + struct asd_sas_event *ev = to_asd_sas_event(work); 56 57 struct asd_sas_phy *phy = ev->phy; 57 58 struct sas_ha_struct *sas_ha = phy->ha; 58 59 struct asd_sas_port *port = phy->port; ··· 82 85 83 86 static void sas_phye_spinup_hold(struct work_struct *work) 84 87 { 85 - struct asd_sas_event *ev = 86 - container_of(work, struct asd_sas_event, work); 88 + struct asd_sas_event *ev = to_asd_sas_event(work); 87 89 struct asd_sas_phy *phy = ev->phy; 88 90 struct sas_ha_struct *sas_ha = phy->ha; 89 91 struct sas_internal *i = ··· 123 127 phy->error = 0; 124 128 INIT_LIST_HEAD(&phy->port_phy_el); 125 129 for (k = 0; k < PORT_NUM_EVENTS; k++) { 126 - INIT_WORK(&phy->port_events[k].work, 127 - sas_port_event_fns[k]); 130 + INIT_SAS_WORK(&phy->port_events[k].work, sas_port_event_fns[k]); 128 131 phy->port_events[k].phy = phy; 129 132 } 130 133 131 134 for (k = 0; k < PHY_NUM_EVENTS; k++) { 132 - INIT_WORK(&phy->phy_events[k].work, 133 - sas_phy_event_fns[k]); 135 + INIT_SAS_WORK(&phy->phy_events[k].work, sas_phy_event_fns[k]); 134 136 phy->phy_events[k].phy = phy; 135 137 } 136 138 ··· 138 144 spin_lock_init(&phy->sas_prim_lock); 139 145 phy->frame_rcvd_size = 0; 140 146 141 - phy->phy = sas_phy_alloc(&sas_ha->core.shost->shost_gendev, 142 - i); 147 + phy->phy = sas_phy_alloc(&sas_ha->core.shost->shost_gendev, i); 143 148 if (!phy->phy) 144 149 return -ENOMEM; 145 150
+6 -11
drivers/scsi/libsas/sas_port.c
··· 123 123 spin_unlock_irqrestore(&sas_ha->phy_port_lock, flags); 124 124 125 125 if (!port->port) { 126 - port->port = sas_port_alloc(phy->phy->dev.parent, phy->id); 126 + port->port = sas_port_alloc(phy->phy->dev.parent, port->id); 127 127 BUG_ON(!port->port); 128 128 sas_port_add(port->port); 129 129 } ··· 208 208 209 209 void sas_porte_bytes_dmaed(struct work_struct *work) 210 210 { 211 - struct asd_sas_event *ev = 212 - container_of(work, struct asd_sas_event, work); 211 + struct asd_sas_event *ev = to_asd_sas_event(work); 213 212 struct asd_sas_phy *phy = ev->phy; 214 213 215 214 clear_bit(PORTE_BYTES_DMAED, &phy->port_events_pending); ··· 218 219 219 220 void sas_porte_broadcast_rcvd(struct work_struct *work) 220 221 { 221 - struct asd_sas_event *ev = 222 - container_of(work, struct asd_sas_event, work); 222 + struct asd_sas_event *ev = to_asd_sas_event(work); 223 223 struct asd_sas_phy *phy = ev->phy; 224 224 unsigned long flags; 225 225 u32 prim; ··· 235 237 236 238 void sas_porte_link_reset_err(struct work_struct *work) 237 239 { 238 - struct asd_sas_event *ev = 239 - container_of(work, struct asd_sas_event, work); 240 + struct asd_sas_event *ev = to_asd_sas_event(work); 240 241 struct asd_sas_phy *phy = ev->phy; 241 242 242 243 clear_bit(PORTE_LINK_RESET_ERR, &phy->port_events_pending); ··· 245 248 246 249 void sas_porte_timer_event(struct work_struct *work) 247 250 { 248 - struct asd_sas_event *ev = 249 - container_of(work, struct asd_sas_event, work); 251 + struct asd_sas_event *ev = to_asd_sas_event(work); 250 252 struct asd_sas_phy *phy = ev->phy; 251 253 252 254 clear_bit(PORTE_TIMER_EVENT, &phy->port_events_pending); ··· 255 259 256 260 void sas_porte_hard_reset(struct work_struct *work) 257 261 { 258 - struct asd_sas_event *ev = 259 - container_of(work, struct asd_sas_event, work); 262 + struct asd_sas_event *ev = to_asd_sas_event(work); 260 263 struct asd_sas_phy *phy = ev->phy; 261 264 262 265 clear_bit(PORTE_HARD_RESET, &phy->port_events_pending);
+1 -1
drivers/scsi/scsi_lib.c
··· 1638 1638 request_fn_proc *request_fn) 1639 1639 { 1640 1640 struct request_queue *q; 1641 - struct device *dev = shost->shost_gendev.parent; 1641 + struct device *dev = shost->dma_dev; 1642 1642 1643 1643 q = blk_init_queue(request_fn, NULL); 1644 1644 if (!q)
+1 -1
drivers/spi/Kconfig
··· 74 74 This selects a driver for the Atmel SPI Controller, present on 75 75 many AT32 (AVR32) and AT91 (ARM) chips. 76 76 77 - config SPI_BFIN 77 + config SPI_BFIN5XX 78 78 tristate "SPI controller driver for ADI Blackfin5xx" 79 79 depends on BLACKFIN 80 80 help
+1 -1
drivers/spi/Makefile
··· 15 15 obj-$(CONFIG_SPI_ATH79) += spi-ath79.o 16 16 obj-$(CONFIG_SPI_AU1550) += spi-au1550.o 17 17 obj-$(CONFIG_SPI_BCM63XX) += spi-bcm63xx.o 18 - obj-$(CONFIG_SPI_BFIN) += spi-bfin5xx.o 18 + obj-$(CONFIG_SPI_BFIN5XX) += spi-bfin5xx.o 19 19 obj-$(CONFIG_SPI_BFIN_SPORT) += spi-bfin-sport.o 20 20 obj-$(CONFIG_SPI_BITBANG) += spi-bitbang.o 21 21 obj-$(CONFIG_SPI_BUTTERFLY) += spi-butterfly.o
+93 -72
drivers/spi/spi-bcm63xx.c
··· 1 1 /* 2 2 * Broadcom BCM63xx SPI controller support 3 3 * 4 - * Copyright (C) 2009-2011 Florian Fainelli <florian@openwrt.org> 4 + * Copyright (C) 2009-2012 Florian Fainelli <florian@openwrt.org> 5 5 * Copyright (C) 2010 Tanguy Bouzeloc <tanguy.bouzeloc@efixo.com> 6 6 * 7 7 * This program is free software; you can redistribute it and/or ··· 30 30 #include <linux/spi/spi.h> 31 31 #include <linux/completion.h> 32 32 #include <linux/err.h> 33 + #include <linux/workqueue.h> 34 + #include <linux/pm_runtime.h> 33 35 34 36 #include <bcm63xx_dev_spi.h> 35 37 ··· 39 37 #define DRV_VER "0.1.2" 40 38 41 39 struct bcm63xx_spi { 42 - spinlock_t lock; 43 - int stopping; 44 40 struct completion done; 45 41 46 42 void __iomem *regs; ··· 96 96 { 391000, SPI_CLK_0_391MHZ } 97 97 }; 98 98 99 - static int bcm63xx_spi_setup_transfer(struct spi_device *spi, 100 - struct spi_transfer *t) 99 + static int bcm63xx_spi_check_transfer(struct spi_device *spi, 100 + struct spi_transfer *t) 101 101 { 102 - struct bcm63xx_spi *bs = spi_master_get_devdata(spi->master); 103 102 u8 bits_per_word; 104 - u8 clk_cfg, reg; 105 - u32 hz; 106 - int i; 107 103 108 104 bits_per_word = (t) ? t->bits_per_word : spi->bits_per_word; 109 - hz = (t) ? t->speed_hz : spi->max_speed_hz; 110 105 if (bits_per_word != 8) { 111 106 dev_err(&spi->dev, "%s, unsupported bits_per_word=%d\n", 112 107 __func__, bits_per_word); ··· 113 118 __func__, spi->chip_select); 114 119 return -EINVAL; 115 120 } 121 + 122 + return 0; 123 + } 124 + 125 + static void bcm63xx_spi_setup_transfer(struct spi_device *spi, 126 + struct spi_transfer *t) 127 + { 128 + struct bcm63xx_spi *bs = spi_master_get_devdata(spi->master); 129 + u32 hz; 130 + u8 clk_cfg, reg; 131 + int i; 132 + 133 + hz = (t) ? t->speed_hz : spi->max_speed_hz; 116 134 117 135 /* Find the closest clock configuration */ 118 136 for (i = 0; i < SPI_CLK_MASK; i++) { ··· 147 139 bcm_spi_writeb(bs, reg, SPI_CLK_CFG); 148 140 dev_dbg(&spi->dev, "Setting clock register to %02x (hz %d)\n", 149 141 clk_cfg, hz); 150 - 151 - return 0; 152 142 } 153 143 154 144 /* the spi->mode bits understood by this driver: */ ··· 159 153 160 154 bs = spi_master_get_devdata(spi->master); 161 155 162 - if (bs->stopping) 163 - return -ESHUTDOWN; 164 - 165 156 if (!spi->bits_per_word) 166 157 spi->bits_per_word = 8; 167 158 ··· 168 165 return -EINVAL; 169 166 } 170 167 171 - ret = bcm63xx_spi_setup_transfer(spi, NULL); 168 + ret = bcm63xx_spi_check_transfer(spi, NULL); 172 169 if (ret < 0) { 173 170 dev_err(&spi->dev, "setup: unsupported mode bits %x\n", 174 171 spi->mode & ~MODEBITS); ··· 193 190 bs->remaining_bytes -= size; 194 191 } 195 192 196 - static int bcm63xx_txrx_bufs(struct spi_device *spi, struct spi_transfer *t) 193 + static unsigned int bcm63xx_txrx_bufs(struct spi_device *spi, 194 + struct spi_transfer *t) 197 195 { 198 196 struct bcm63xx_spi *bs = spi_master_get_devdata(spi->master); 199 197 u16 msg_ctl; 200 198 u16 cmd; 199 + 200 + /* Disable the CMD_DONE interrupt */ 201 + bcm_spi_writeb(bs, 0, SPI_INT_MASK); 201 202 202 203 dev_dbg(&spi->dev, "txrx: tx %p, rx %p, len %d\n", 203 204 t->tx_buf, t->rx_buf, t->len); ··· 209 202 /* Transmitter is inhibited */ 210 203 bs->tx_ptr = t->tx_buf; 211 204 bs->rx_ptr = t->rx_buf; 212 - init_completion(&bs->done); 213 205 214 206 if (t->tx_buf) { 215 207 bs->remaining_bytes = t->len; 216 208 bcm63xx_spi_fill_tx_fifo(bs); 217 209 } 218 210 219 - /* Enable the command done interrupt which 220 - * we use to determine completion of a command */ 221 - bcm_spi_writeb(bs, SPI_INTR_CMD_DONE, SPI_INT_MASK); 211 + init_completion(&bs->done); 222 212 223 213 /* Fill in the Message control register */ 224 214 msg_ctl = (t->len << SPI_BYTE_CNT_SHIFT); ··· 234 230 cmd |= (0 << SPI_CMD_PREPEND_BYTE_CNT_SHIFT); 235 231 cmd |= (spi->chip_select << SPI_CMD_DEVICE_ID_SHIFT); 236 232 bcm_spi_writew(bs, cmd, SPI_CMD); 237 - wait_for_completion(&bs->done); 238 233 239 - /* Disable the CMD_DONE interrupt */ 240 - bcm_spi_writeb(bs, 0, SPI_INT_MASK); 234 + /* Enable the CMD_DONE interrupt */ 235 + bcm_spi_writeb(bs, SPI_INTR_CMD_DONE, SPI_INT_MASK); 241 236 242 237 return t->len - bs->remaining_bytes; 243 238 } 244 239 245 - static int bcm63xx_transfer(struct spi_device *spi, struct spi_message *m) 240 + static int bcm63xx_spi_prepare_transfer(struct spi_master *master) 246 241 { 247 - struct bcm63xx_spi *bs = spi_master_get_devdata(spi->master); 242 + struct bcm63xx_spi *bs = spi_master_get_devdata(master); 243 + 244 + pm_runtime_get_sync(&bs->pdev->dev); 245 + 246 + return 0; 247 + } 248 + 249 + static int bcm63xx_spi_unprepare_transfer(struct spi_master *master) 250 + { 251 + struct bcm63xx_spi *bs = spi_master_get_devdata(master); 252 + 253 + pm_runtime_put(&bs->pdev->dev); 254 + 255 + return 0; 256 + } 257 + 258 + static int bcm63xx_spi_transfer_one(struct spi_master *master, 259 + struct spi_message *m) 260 + { 261 + struct bcm63xx_spi *bs = spi_master_get_devdata(master); 248 262 struct spi_transfer *t; 249 - int ret = 0; 250 - 251 - if (unlikely(list_empty(&m->transfers))) 252 - return -EINVAL; 253 - 254 - if (bs->stopping) 255 - return -ESHUTDOWN; 263 + struct spi_device *spi = m->spi; 264 + int status = 0; 265 + unsigned int timeout = 0; 256 266 257 267 list_for_each_entry(t, &m->transfers, transfer_list) { 258 - ret += bcm63xx_txrx_bufs(spi, t); 268 + unsigned int len = t->len; 269 + u8 rx_tail; 270 + 271 + status = bcm63xx_spi_check_transfer(spi, t); 272 + if (status < 0) 273 + goto exit; 274 + 275 + /* configure adapter for a new transfer */ 276 + bcm63xx_spi_setup_transfer(spi, t); 277 + 278 + while (len) { 279 + /* send the data */ 280 + len -= bcm63xx_txrx_bufs(spi, t); 281 + 282 + timeout = wait_for_completion_timeout(&bs->done, HZ); 283 + if (!timeout) { 284 + status = -ETIMEDOUT; 285 + goto exit; 286 + } 287 + 288 + /* read out all data */ 289 + rx_tail = bcm_spi_readb(bs, SPI_RX_TAIL); 290 + 291 + /* Read out all the data */ 292 + if (rx_tail) 293 + memcpy_fromio(bs->rx_ptr, bs->rx_io, rx_tail); 294 + } 295 + 296 + m->actual_length += t->len; 259 297 } 298 + exit: 299 + m->status = status; 300 + spi_finalize_current_message(master); 260 301 261 - m->complete(m->context); 262 - 263 - return ret; 302 + return 0; 264 303 } 265 304 266 305 /* This driver supports single master mode only. Hence ··· 314 267 struct spi_master *master = (struct spi_master *)dev_id; 315 268 struct bcm63xx_spi *bs = spi_master_get_devdata(master); 316 269 u8 intr; 317 - u16 cmd; 318 270 319 271 /* Read interupts and clear them immediately */ 320 272 intr = bcm_spi_readb(bs, SPI_INT_STATUS); 321 273 bcm_spi_writeb(bs, SPI_INTR_CLEAR_ALL, SPI_INT_STATUS); 322 274 bcm_spi_writeb(bs, 0, SPI_INT_MASK); 323 275 324 - /* A tansfer completed */ 325 - if (intr & SPI_INTR_CMD_DONE) { 326 - u8 rx_tail; 327 - 328 - rx_tail = bcm_spi_readb(bs, SPI_RX_TAIL); 329 - 330 - /* Read out all the data */ 331 - if (rx_tail) 332 - memcpy_fromio(bs->rx_ptr, bs->rx_io, rx_tail); 333 - 334 - /* See if there is more data to send */ 335 - if (bs->remaining_bytes > 0) { 336 - bcm63xx_spi_fill_tx_fifo(bs); 337 - 338 - /* Start the transfer */ 339 - bcm_spi_writew(bs, SPI_HD_W << SPI_MSG_TYPE_SHIFT, 340 - SPI_MSG_CTL); 341 - cmd = bcm_spi_readw(bs, SPI_CMD); 342 - cmd |= SPI_CMD_START_IMMEDIATE; 343 - cmd |= (0 << SPI_CMD_PREPEND_BYTE_CNT_SHIFT); 344 - bcm_spi_writeb(bs, SPI_INTR_CMD_DONE, SPI_INT_MASK); 345 - bcm_spi_writew(bs, cmd, SPI_CMD); 346 - } else { 347 - complete(&bs->done); 348 - } 349 - } 276 - /* A transfer completed */ 277 - if (intr & SPI_INTR_CMD_DONE) 278 - complete(&bs->done); 350 279 351 280 return IRQ_HANDLED; 352 281 } ··· 368 345 } 369 346 370 347 bs = spi_master_get_devdata(master); 371 - init_completion(&bs->done); 372 348 373 349 platform_set_drvdata(pdev, master); 374 350 bs->pdev = pdev; ··· 401 379 master->bus_num = pdata->bus_num; 402 380 master->num_chipselect = pdata->num_chipselect; 403 381 master->setup = bcm63xx_spi_setup; 404 - master->transfer = bcm63xx_transfer; 382 + master->prepare_transfer_hardware = bcm63xx_spi_prepare_transfer; 383 + master->unprepare_transfer_hardware = bcm63xx_spi_unprepare_transfer; 384 + master->transfer_one_message = bcm63xx_spi_transfer_one; 385 + master->mode_bits = MODEBITS; 405 386 bs->speed_hz = pdata->speed_hz; 406 - bs->stopping = 0; 407 387 bs->tx_io = (u8 *)(bs->regs + bcm63xx_spireg(SPI_MSG_DATA)); 408 388 bs->rx_io = (const u8 *)(bs->regs + bcm63xx_spireg(SPI_RX_DATA)); 409 - spin_lock_init(&bs->lock); 410 389 411 390 /* Initialize hardware */ 412 391 clk_enable(bs->clk); ··· 441 418 struct spi_master *master = platform_get_drvdata(pdev); 442 419 struct bcm63xx_spi *bs = spi_master_get_devdata(master); 443 420 421 + spi_unregister_master(master); 422 + 444 423 /* reset spi block */ 445 424 bcm_spi_writeb(bs, 0, SPI_INT_MASK); 446 - spin_lock(&bs->lock); 447 - bs->stopping = 1; 448 425 449 426 /* HW shutdown */ 450 427 clk_disable(bs->clk); 451 428 clk_put(bs->clk); 452 429 453 - spin_unlock(&bs->lock); 454 430 platform_set_drvdata(pdev, 0); 455 - spi_unregister_master(master); 456 431 457 432 return 0; 458 433 }
+11 -10
drivers/spi/spi-bfin-sport.c
··· 252 252 bfin_sport_spi_restore_state(struct bfin_sport_spi_master_data *drv_data) 253 253 { 254 254 struct bfin_sport_spi_slave_data *chip = drv_data->cur_chip; 255 - unsigned int bits = (drv_data->ops == &bfin_sport_transfer_ops_u8 ? 7 : 15); 256 255 257 256 bfin_sport_spi_disable(drv_data); 258 257 dev_dbg(drv_data->dev, "restoring spi ctl state\n"); 259 258 260 259 bfin_write(&drv_data->regs->tcr1, chip->ctl_reg); 261 - bfin_write(&drv_data->regs->tcr2, bits); 262 260 bfin_write(&drv_data->regs->tclkdiv, chip->baud); 263 - bfin_write(&drv_data->regs->tfsdiv, bits); 264 261 SSYNC(); 265 262 266 263 bfin_write(&drv_data->regs->rcr1, chip->ctl_reg & ~(ITCLK | ITFS)); 267 - bfin_write(&drv_data->regs->rcr2, bits); 268 264 SSYNC(); 269 265 270 266 bfin_sport_spi_cs_active(chip); ··· 416 420 drv_data->cs_change = transfer->cs_change; 417 421 418 422 /* Bits per word setup */ 419 - bits_per_word = transfer->bits_per_word ? : message->spi->bits_per_word; 420 - if (bits_per_word == 8) 421 - drv_data->ops = &bfin_sport_transfer_ops_u8; 422 - else 423 + bits_per_word = transfer->bits_per_word ? : 424 + message->spi->bits_per_word ? : 8; 425 + if (bits_per_word % 16 == 0) 423 426 drv_data->ops = &bfin_sport_transfer_ops_u16; 427 + else 428 + drv_data->ops = &bfin_sport_transfer_ops_u8; 429 + bfin_write(&drv_data->regs->tcr2, bits_per_word - 1); 430 + bfin_write(&drv_data->regs->tfsdiv, bits_per_word - 1); 431 + bfin_write(&drv_data->regs->rcr2, bits_per_word - 1); 424 432 425 433 drv_data->state = RUNNING_STATE; 426 434 ··· 598 598 } 599 599 chip->cs_chg_udelay = chip_info->cs_chg_udelay; 600 600 chip->idle_tx_val = chip_info->idle_tx_val; 601 - spi->bits_per_word = chip_info->bits_per_word; 602 601 } 603 602 } 604 603 605 - if (spi->bits_per_word != 8 && spi->bits_per_word != 16) { 604 + if (spi->bits_per_word % 8) { 605 + dev_err(&spi->dev, "%d bits_per_word is not supported\n", 606 + spi->bits_per_word); 606 607 ret = -EINVAL; 607 608 goto error; 608 609 }
+8 -6
drivers/spi/spi-bfin5xx.c
··· 396 396 /* last read */ 397 397 if (drv_data->rx) { 398 398 dev_dbg(&drv_data->pdev->dev, "last read\n"); 399 - if (n_bytes % 2) { 399 + if (!(n_bytes % 2)) { 400 400 u16 *buf = (u16 *)drv_data->rx; 401 401 for (loop = 0; loop < n_bytes / 2; loop++) 402 402 *buf++ = bfin_read(&drv_data->regs->rdbr); ··· 424 424 if (drv_data->rx && drv_data->tx) { 425 425 /* duplex */ 426 426 dev_dbg(&drv_data->pdev->dev, "duplex: write_TDBR\n"); 427 - if (n_bytes % 2) { 427 + if (!(n_bytes % 2)) { 428 428 u16 *buf = (u16 *)drv_data->rx; 429 429 u16 *buf2 = (u16 *)drv_data->tx; 430 430 for (loop = 0; loop < n_bytes / 2; loop++) { ··· 442 442 } else if (drv_data->rx) { 443 443 /* read */ 444 444 dev_dbg(&drv_data->pdev->dev, "read: write_TDBR\n"); 445 - if (n_bytes % 2) { 445 + if (!(n_bytes % 2)) { 446 446 u16 *buf = (u16 *)drv_data->rx; 447 447 for (loop = 0; loop < n_bytes / 2; loop++) { 448 448 *buf++ = bfin_read(&drv_data->regs->rdbr); ··· 458 458 } else if (drv_data->tx) { 459 459 /* write */ 460 460 dev_dbg(&drv_data->pdev->dev, "write: write_TDBR\n"); 461 - if (n_bytes % 2) { 461 + if (!(n_bytes % 2)) { 462 462 u16 *buf = (u16 *)drv_data->tx; 463 463 for (loop = 0; loop < n_bytes / 2; loop++) { 464 464 bfin_read(&drv_data->regs->rdbr); ··· 587 587 if (message->state == DONE_STATE) { 588 588 dev_dbg(&drv_data->pdev->dev, "transfer: all done!\n"); 589 589 message->status = 0; 590 + bfin_spi_flush(drv_data); 590 591 bfin_spi_giveback(drv_data); 591 592 return; 592 593 } ··· 871 870 message->actual_length += drv_data->len_in_bytes; 872 871 /* Move to next transfer of this msg */ 873 872 message->state = bfin_spi_next_transfer(drv_data); 874 - if (drv_data->cs_change) 873 + if (drv_data->cs_change && message->state != DONE_STATE) { 874 + bfin_spi_flush(drv_data); 875 875 bfin_spi_cs_deactive(drv_data, chip); 876 + } 876 877 } 877 878 878 879 /* Schedule next transfer tasklet */ ··· 1029 1026 chip->cs_chg_udelay = chip_info->cs_chg_udelay; 1030 1027 chip->idle_tx_val = chip_info->idle_tx_val; 1031 1028 chip->pio_interrupt = chip_info->pio_interrupt; 1032 - spi->bits_per_word = chip_info->bits_per_word; 1033 1029 } else { 1034 1030 /* force a default base state */ 1035 1031 chip->ctl_reg &= bfin_ctl_reg;
+10 -14
drivers/spi/spi-ep93xx.c
··· 545 545 * in case of failure. 546 546 */ 547 547 static struct dma_async_tx_descriptor * 548 - ep93xx_spi_dma_prepare(struct ep93xx_spi *espi, enum dma_data_direction dir) 548 + ep93xx_spi_dma_prepare(struct ep93xx_spi *espi, enum dma_transfer_direction dir) 549 549 { 550 550 struct spi_transfer *t = espi->current_msg->state; 551 551 struct dma_async_tx_descriptor *txd; 552 552 enum dma_slave_buswidth buswidth; 553 553 struct dma_slave_config conf; 554 - enum dma_transfer_direction slave_dirn; 555 554 struct scatterlist *sg; 556 555 struct sg_table *sgt; 557 556 struct dma_chan *chan; ··· 566 567 memset(&conf, 0, sizeof(conf)); 567 568 conf.direction = dir; 568 569 569 - if (dir == DMA_FROM_DEVICE) { 570 + if (dir == DMA_DEV_TO_MEM) { 570 571 chan = espi->dma_rx; 571 572 buf = t->rx_buf; 572 573 sgt = &espi->rx_sgt; 573 574 574 575 conf.src_addr = espi->sspdr_phys; 575 576 conf.src_addr_width = buswidth; 576 - slave_dirn = DMA_DEV_TO_MEM; 577 577 } else { 578 578 chan = espi->dma_tx; 579 579 buf = t->tx_buf; ··· 580 582 581 583 conf.dst_addr = espi->sspdr_phys; 582 584 conf.dst_addr_width = buswidth; 583 - slave_dirn = DMA_MEM_TO_DEV; 584 585 } 585 586 586 587 ret = dmaengine_slave_config(chan, &conf); ··· 630 633 if (!nents) 631 634 return ERR_PTR(-ENOMEM); 632 635 633 - txd = dmaengine_prep_slave_sg(chan, sgt->sgl, nents, 634 - slave_dirn, DMA_CTRL_ACK); 636 + txd = dmaengine_prep_slave_sg(chan, sgt->sgl, nents, dir, DMA_CTRL_ACK); 635 637 if (!txd) { 636 638 dma_unmap_sg(chan->device->dev, sgt->sgl, sgt->nents, dir); 637 639 return ERR_PTR(-ENOMEM); ··· 647 651 * unmapped. 
648 652 */ 649 653 static void ep93xx_spi_dma_finish(struct ep93xx_spi *espi, 650 - enum dma_data_direction dir) 654 + enum dma_transfer_direction dir) 651 655 { 652 656 struct dma_chan *chan; 653 657 struct sg_table *sgt; 654 658 655 - if (dir == DMA_FROM_DEVICE) { 659 + if (dir == DMA_DEV_TO_MEM) { 656 660 chan = espi->dma_rx; 657 661 sgt = &espi->rx_sgt; 658 662 } else { ··· 673 677 struct spi_message *msg = espi->current_msg; 674 678 struct dma_async_tx_descriptor *rxd, *txd; 675 679 676 - rxd = ep93xx_spi_dma_prepare(espi, DMA_FROM_DEVICE); 680 + rxd = ep93xx_spi_dma_prepare(espi, DMA_DEV_TO_MEM); 677 681 if (IS_ERR(rxd)) { 678 682 dev_err(&espi->pdev->dev, "DMA RX failed: %ld\n", PTR_ERR(rxd)); 679 683 msg->status = PTR_ERR(rxd); 680 684 return; 681 685 } 682 686 683 - txd = ep93xx_spi_dma_prepare(espi, DMA_TO_DEVICE); 687 + txd = ep93xx_spi_dma_prepare(espi, DMA_MEM_TO_DEV); 684 688 if (IS_ERR(txd)) { 685 - ep93xx_spi_dma_finish(espi, DMA_FROM_DEVICE); 689 + ep93xx_spi_dma_finish(espi, DMA_DEV_TO_MEM); 686 690 dev_err(&espi->pdev->dev, "DMA TX failed: %ld\n", PTR_ERR(rxd)); 687 691 msg->status = PTR_ERR(txd); 688 692 return; ··· 701 705 702 706 wait_for_completion(&espi->wait); 703 707 704 - ep93xx_spi_dma_finish(espi, DMA_TO_DEVICE); 705 - ep93xx_spi_dma_finish(espi, DMA_FROM_DEVICE); 708 + ep93xx_spi_dma_finish(espi, DMA_MEM_TO_DEV); 709 + ep93xx_spi_dma_finish(espi, DMA_DEV_TO_MEM); 706 710 } 707 711 708 712 /**
+34 -24
drivers/spi/spi-pl022.c
··· 1667 1667 /* cpsdvsr = 254 & scr = 255 */ 1668 1668 min_tclk = spi_rate(rate, CPSDVR_MAX, SCR_MAX); 1669 1669 1670 - if (!((freq <= max_tclk) && (freq >= min_tclk))) { 1670 + if (freq > max_tclk) 1671 + dev_warn(&pl022->adev->dev, 1672 + "Max speed that can be programmed is %d Hz, you requested %d\n", 1673 + max_tclk, freq); 1674 + 1675 + if (freq < min_tclk) { 1671 1676 dev_err(&pl022->adev->dev, 1672 - "controller data is incorrect: out of range frequency"); 1677 + "Requested frequency: %d Hz is less than minimum possible %d Hz\n", 1678 + freq, min_tclk); 1673 1679 return -EINVAL; 1674 1680 } 1675 1681 ··· 1687 1681 while (scr <= SCR_MAX) { 1688 1682 tmp = spi_rate(rate, cpsdvsr, scr); 1689 1683 1690 - if (tmp > freq) 1684 + if (tmp > freq) { 1685 + /* we need lower freq */ 1691 1686 scr++; 1687 + continue; 1688 + } 1689 + 1692 1690 /* 1693 - * If found exact value, update and break. 1694 - * If found more closer value, update and continue. 1691 + * If found exact value, mark found and break. 1692 + * If found more closer value, update and break. 
1695 1693 */ 1696 - else if ((tmp == freq) || (tmp > best_freq)) { 1694 + if (tmp > best_freq) { 1697 1695 best_freq = tmp; 1698 1696 best_cpsdvsr = cpsdvsr; 1699 1697 best_scr = scr; 1700 1698 1701 1699 if (tmp == freq) 1702 - break; 1700 + found = 1; 1703 1701 } 1704 - scr++; 1702 + /* 1703 + * increased scr will give lower rates, which are not 1704 + * required 1705 + */ 1706 + break; 1705 1707 } 1706 1708 cpsdvsr += 2; 1707 1709 scr = SCR_MIN; 1708 1710 } 1711 + 1712 + WARN(!best_freq, "pl022: Matching cpsdvsr and scr not found for %d Hz rate \n", 1713 + freq); 1709 1714 1710 1715 clk_freq->cpsdvsr = (u8) (best_cpsdvsr & 0xFF); 1711 1716 clk_freq->scr = (u8) (best_scr & 0xFF); ··· 1840 1823 } else 1841 1824 chip->cs_control = chip_info->cs_control; 1842 1825 1843 - if (bits <= 3) { 1844 - /* PL022 doesn't support less than 4-bits */ 1826 + /* Check bits per word with vendor specific range */ 1827 + if ((bits <= 3) || (bits > pl022->vendor->max_bpw)) { 1845 1828 status = -ENOTSUPP; 1829 + dev_err(&spi->dev, "illegal data size for this controller!\n"); 1830 + dev_err(&spi->dev, "This controller can only handle 4 <= n <= %d bit words\n", 1831 + pl022->vendor->max_bpw); 1846 1832 goto err_config_params; 1847 1833 } else if (bits <= 8) { 1848 1834 dev_dbg(&spi->dev, "4 <= n <=8 bits per word\n"); ··· 1858 1838 chip->read = READING_U16; 1859 1839 chip->write = WRITING_U16; 1860 1840 } else { 1861 - if (pl022->vendor->max_bpw >= 32) { 1862 - dev_dbg(&spi->dev, "17 <= n <= 32 bits per word\n"); 1863 - chip->n_bytes = 4; 1864 - chip->read = READING_U32; 1865 - chip->write = WRITING_U32; 1866 - } else { 1867 - dev_err(&spi->dev, 1868 - "illegal data size for this controller!\n"); 1869 - dev_err(&spi->dev, 1870 - "a standard pl022 can only handle " 1871 - "1 <= n <= 16 bit words\n"); 1872 - status = -ENOTSUPP; 1873 - goto err_config_params; 1874 - } 1841 + dev_dbg(&spi->dev, "17 <= n <= 32 bits per word\n"); 1842 + chip->n_bytes = 4; 1843 + chip->read = READING_U32; 1844 + chip->write = WRITING_U32; 1875 1845 } 1876 1846 1877 1847 /* Now Initialize all register settings required for this chip */
+1
drivers/staging/octeon/ethernet-rx.c
··· 36 36 #include <linux/prefetch.h> 37 37 #include <linux/ratelimit.h> 38 38 #include <linux/smp.h> 39 + #include <linux/interrupt.h> 39 40 #include <net/dst.h> 40 41 #ifdef CONFIG_XFRM 41 42 #include <linux/xfrm.h>
+1
drivers/staging/octeon/ethernet-tx.c
··· 32 32 #include <linux/ip.h> 33 33 #include <linux/ratelimit.h> 34 34 #include <linux/string.h> 35 + #include <linux/interrupt.h> 35 36 #include <net/dst.h> 36 37 #ifdef CONFIG_XFRM 37 38 #include <linux/xfrm.h>
+1
drivers/staging/octeon/ethernet.c
··· 31 31 #include <linux/etherdevice.h> 32 32 #include <linux/phy.h> 33 33 #include <linux/slab.h> 34 + #include <linux/interrupt.h> 34 35 35 36 #include <net/dst.h> 36 37
-2
drivers/staging/ozwpan/ozpd.c
··· 383 383 pd->tx_pool = &f->link; 384 384 pd->tx_pool_count++; 385 385 f = 0; 386 - } else { 387 - kfree(f); 388 386 } 389 387 spin_unlock_bh(&pd->tx_frame_lock); 390 388 if (f)
+12 -8
drivers/staging/tidspbridge/core/tiomap3430.c
··· 79 79 #define OMAP343X_CONTROL_IVA2_BOOTADDR (OMAP2_CONTROL_GENERAL + 0x0190) 80 80 #define OMAP343X_CONTROL_IVA2_BOOTMOD (OMAP2_CONTROL_GENERAL + 0x0194) 81 81 82 - #define OMAP343X_CTRL_REGADDR(reg) \ 83 - OMAP2_L4_IO_ADDRESS(OMAP343X_CTRL_BASE + (reg)) 84 - 85 - 86 82 /* Forward Declarations: */ 87 83 static int bridge_brd_monitor(struct bridge_dev_context *dev_ctxt); 88 84 static int bridge_brd_read(struct bridge_dev_context *dev_ctxt, ··· 414 418 415 419 /* Assert RST1 i.e only the RST only for DSP megacell */ 416 420 if (!status) { 421 + /* 422 + * XXX: ioremapping MUST be removed once ctrl 423 + * function is made available. 424 + */ 425 + void __iomem *ctrl = ioremap(OMAP343X_CTRL_BASE, SZ_4K); 426 + if (!ctrl) 427 + return -ENOMEM; 428 + 417 429 (*pdata->dsp_prm_rmw_bits)(OMAP3430_RST1_IVA2_MASK, 418 430 OMAP3430_RST1_IVA2_MASK, OMAP3430_IVA2_MOD, 419 431 OMAP2_RM_RSTCTRL); 420 432 /* Mask address with 1K for compatibility */ 421 433 __raw_writel(dsp_addr & OMAP3_IVA2_BOOTADDR_MASK, 422 - OMAP343X_CTRL_REGADDR( 423 - OMAP343X_CONTROL_IVA2_BOOTADDR)); 434 + ctrl + OMAP343X_CONTROL_IVA2_BOOTADDR); 424 435 /* 425 436 * Set bootmode to self loop if dsp_debug flag is true 426 437 */ 427 438 __raw_writel((dsp_debug) ? OMAP3_IVA2_BOOTMOD_IDLE : 0, 428 - OMAP343X_CTRL_REGADDR( 429 - OMAP343X_CONTROL_IVA2_BOOTMOD)); 439 + ctrl + OMAP343X_CONTROL_IVA2_BOOTMOD); 440 + 441 + iounmap(ctrl); 430 442 } 431 443 } 432 444 if (!status) {
+7 -1
drivers/staging/tidspbridge/core/wdt.c
··· 53 53 int ret = 0; 54 54 55 55 dsp_wdt.sm_wdt = NULL; 56 - dsp_wdt.reg_base = OMAP2_L4_IO_ADDRESS(OMAP34XX_WDT3_BASE); 56 + dsp_wdt.reg_base = ioremap(OMAP34XX_WDT3_BASE, SZ_4K); 57 + if (!dsp_wdt.reg_base) 58 + return -ENOMEM; 59 + 57 60 tasklet_init(&dsp_wdt.wdt3_tasklet, dsp_wdt_dpc, 0); 58 61 59 62 dsp_wdt.fclk = clk_get(NULL, "wdt3_fck"); ··· 102 99 dsp_wdt.fclk = NULL; 103 100 dsp_wdt.iclk = NULL; 104 101 dsp_wdt.sm_wdt = NULL; 102 + 103 + if (dsp_wdt.reg_base) 104 + iounmap(dsp_wdt.reg_base); 105 105 dsp_wdt.reg_base = NULL; 106 106 } 107 107
+1 -1
drivers/staging/zcache/Kconfig
··· 2 2 bool "Dynamic compression of swap pages and clean pagecache pages" 3 3 # X86 dependency is because zsmalloc uses non-portable pte/tlb 4 4 # functions 5 - depends on (CLEANCACHE || FRONTSWAP) && CRYPTO && X86 5 + depends on (CLEANCACHE || FRONTSWAP) && CRYPTO=y && X86 6 6 select ZSMALLOC 7 7 select CRYPTO_LZO 8 8 default n
+3 -3
drivers/tty/serial/pmac_zilog.c
··· 469 469 tty = NULL; 470 470 if (r3 & (CHAEXT | CHATxIP | CHARxIP)) { 471 471 if (!ZS_IS_OPEN(uap_a)) { 472 - pmz_debug("ChanA interrupt while open !\n"); 472 + pmz_debug("ChanA interrupt while not open !\n"); 473 473 goto skip_a; 474 474 } 475 475 write_zsreg(uap_a, R0, RES_H_IUS); ··· 493 493 spin_lock(&uap_b->port.lock); 494 494 tty = NULL; 495 495 if (r3 & (CHBEXT | CHBTxIP | CHBRxIP)) { 496 - if (!ZS_IS_OPEN(uap_a)) { 497 - pmz_debug("ChanB interrupt while open !\n"); 496 + if (!ZS_IS_OPEN(uap_b)) { 497 + pmz_debug("ChanB interrupt while not open !\n"); 498 498 goto skip_b; 499 499 } 500 500 write_zsreg(uap_b, R0, RES_H_IUS);
+19 -7
drivers/tty/vt/keyboard.c
··· 1085 1085 * 1086 1086 * Handle console start. This is a wrapper for the VT layer 1087 1087 * so that we can keep kbd knowledge internal 1088 + * 1089 + * FIXME: We eventually need to hold the kbd lock here to protect 1090 + * the LED updating. We can't do it yet because fn_hold calls stop_tty 1091 + * and start_tty under the kbd_event_lock, while normal tty paths 1092 + * don't hold the lock. We probably need to split out an LED lock 1093 + * but not during an -rc release! 1088 1094 */ 1089 1095 void vt_kbd_con_start(int console) 1090 1096 { 1091 1097 struct kbd_struct * kbd = kbd_table + console; 1092 - unsigned long flags; 1093 - spin_lock_irqsave(&kbd_event_lock, flags); 1098 + /* unsigned long flags; */ 1099 + /* spin_lock_irqsave(&kbd_event_lock, flags); */ 1094 1100 clr_vc_kbd_led(kbd, VC_SCROLLOCK); 1095 1101 set_leds(); 1096 - spin_unlock_irqrestore(&kbd_event_lock, flags); 1102 + /* spin_unlock_irqrestore(&kbd_event_lock, flags); */ 1097 1103 } 1098 1104 1099 1105 /** ··· 1108 1102 * 1109 1103 * Handle console stop. This is a wrapper for the VT layer 1110 1104 * so that we can keep kbd knowledge internal 1105 + * 1106 + * FIXME: We eventually need to hold the kbd lock here to protect 1107 + * the LED updating. We can't do it yet because fn_hold calls stop_tty 1108 + * and start_tty under the kbd_event_lock, while normal tty paths 1109 + * don't hold the lock. We probably need to split out an LED lock 1110 + * but not during an -rc release! 
1111 1111 */ 1112 1112 void vt_kbd_con_stop(int console) 1113 1113 { 1114 1114 struct kbd_struct * kbd = kbd_table + console; 1115 - unsigned long flags; 1116 - spin_lock_irqsave(&kbd_event_lock, flags); 1115 + /* unsigned long flags; */ 1116 + /* spin_lock_irqsave(&kbd_event_lock, flags); */ 1117 1117 set_vc_kbd_led(kbd, VC_SCROLLOCK); 1118 1118 set_leds(); 1119 - spin_unlock_irqrestore(&kbd_event_lock, flags); 1119 + /* spin_unlock_irqrestore(&kbd_event_lock, flags); */ 1120 1120 } 1121 1121 1122 1122 /* 1123 1123 * This is the tasklet that updates LED state on all keyboards 1124 1124 * attached to the box. The reason we use tasklet is that we 1125 1125 * need to handle the scenario when keyboard handler is not 1126 - * registered yet but we already getting updates form VT to 1126 + * registered yet but we already getting updates from the VT to 1127 1127 * update led state. 1128 1128 */ 1129 1129 static void kbd_bh(unsigned long dummy)
+5 -2
drivers/usb/class/cdc-wdm.c
··· 157 157 spin_lock(&desc->iuspin); 158 158 desc->werr = urb->status; 159 159 spin_unlock(&desc->iuspin); 160 - clear_bit(WDM_IN_USE, &desc->flags); 161 160 kfree(desc->outbuf); 161 + desc->outbuf = NULL; 162 + clear_bit(WDM_IN_USE, &desc->flags); 162 163 wake_up(&desc->wait); 163 164 } 164 165 ··· 339 338 if (we < 0) 340 339 return -EIO; 341 340 342 - desc->outbuf = buf = kmalloc(count, GFP_KERNEL); 341 + buf = kmalloc(count, GFP_KERNEL); 343 342 if (!buf) { 344 343 rv = -ENOMEM; 345 344 goto outnl; ··· 407 406 req->wIndex = desc->inum; 408 407 req->wLength = cpu_to_le16(count); 409 408 set_bit(WDM_IN_USE, &desc->flags); 409 + desc->outbuf = buf; 410 410 411 411 rv = usb_submit_urb(desc->command, GFP_KERNEL); 412 412 if (rv < 0) { 413 413 kfree(buf); 414 + desc->outbuf = NULL; 414 415 clear_bit(WDM_IN_USE, &desc->flags); 415 416 dev_err(&desc->intf->dev, "Tx URB error: %d\n", rv); 416 417 } else {
+9
drivers/usb/core/hcd-pci.c
··· 493 493 494 494 pci_save_state(pci_dev); 495 495 496 + /* 497 + * Some systems crash if an EHCI controller is in D3 during 498 + * a sleep transition. We have to leave such controllers in D0. 499 + */ 500 + if (hcd->broken_pci_sleep) { 501 + dev_dbg(dev, "Staying in PCI D0\n"); 502 + return retval; 503 + } 504 + 496 505 /* If the root hub is dead rather than suspended, disallow remote 497 506 * wakeup. usb_hc_died() should ensure that both hosts are marked as 498 507 * dying, so we only need to check the primary roothub.
-1
drivers/usb/gadget/dummy_hcd.c
··· 927 927 928 928 dum->driver = NULL; 929 929 930 - dummy_pullup(&dum->gadget, 0); 931 930 return 0; 932 931 } 933 932
+1 -1
drivers/usb/gadget/f_mass_storage.c
··· 2189 2189 common->data_size_from_cmnd = 0; 2190 2190 sprintf(unknown, "Unknown x%02x", common->cmnd[0]); 2191 2191 reply = check_command(common, common->cmnd_size, 2192 - DATA_DIR_UNKNOWN, 0xff, 0, unknown); 2192 + DATA_DIR_UNKNOWN, ~0, 0, unknown); 2193 2193 if (reply == 0) { 2194 2194 common->curlun->sense_data = SS_INVALID_COMMAND; 2195 2195 reply = -EINVAL;
+1 -1
drivers/usb/gadget/file_storage.c
··· 2579 2579 fsg->data_size_from_cmnd = 0; 2580 2580 sprintf(unknown, "Unknown x%02x", fsg->cmnd[0]); 2581 2581 if ((reply = check_command(fsg, fsg->cmnd_size, 2582 - DATA_DIR_UNKNOWN, 0xff, 0, unknown)) == 0) { 2582 + DATA_DIR_UNKNOWN, ~0, 0, unknown)) == 0) { 2583 2583 fsg->curlun->sense_data = SS_INVALID_COMMAND; 2584 2584 reply = -EINVAL; 2585 2585 }
+2 -2
drivers/usb/gadget/udc-core.c
··· 263 263 264 264 if (udc_is_newstyle(udc)) { 265 265 udc->driver->disconnect(udc->gadget); 266 - udc->driver->unbind(udc->gadget); 267 266 usb_gadget_disconnect(udc->gadget); 267 + udc->driver->unbind(udc->gadget); 268 268 usb_gadget_udc_stop(udc->gadget, udc->driver); 269 269 } else { 270 270 usb_gadget_stop(udc->gadget, udc->driver); ··· 415 415 usb_gadget_udc_start(udc->gadget, udc->driver); 416 416 usb_gadget_connect(udc->gadget); 417 417 } else if (sysfs_streq(buf, "disconnect")) { 418 + usb_gadget_disconnect(udc->gadget); 418 419 if (udc_is_newstyle(udc)) 419 420 usb_gadget_udc_stop(udc->gadget, udc->driver); 420 - usb_gadget_disconnect(udc->gadget); 421 421 } else { 422 422 dev_err(dev, "unsupported command '%s'\n", buf); 423 423 return -EINVAL;
+1 -1
drivers/usb/gadget/uvc.h
··· 28 28 29 29 struct uvc_request_data 30 30 { 31 - unsigned int length; 31 + __s32 length; 32 32 __u8 data[60]; 33 33 }; 34 34
+1 -1
drivers/usb/gadget/uvc_v4l2.c
··· 39 39 if (data->length < 0) 40 40 return usb_ep_set_halt(cdev->gadget->ep0); 41 41 42 - req->length = min(uvc->event_length, data->length); 42 + req->length = min_t(unsigned int, uvc->event_length, data->length); 43 43 req->zero = data->length < uvc->event_length; 44 44 req->dma = DMA_ADDR_INVALID; 45 45
+8
drivers/usb/host/ehci-pci.c
··· 144 144 hcd->has_tt = 1; 145 145 tdi_reset(ehci); 146 146 } 147 + if (pdev->subsystem_vendor == PCI_VENDOR_ID_ASUSTEK) { 148 + /* EHCI #1 or #2 on 6 Series/C200 Series chipset */ 149 + if (pdev->device == 0x1c26 || pdev->device == 0x1c2d) { 150 + ehci_info(ehci, "broken D3 during system sleep on ASUS\n"); 151 + hcd->broken_pci_sleep = 1; 152 + device_set_wakeup_capable(&pdev->dev, false); 153 + } 154 + } 147 155 break; 148 156 case PCI_VENDOR_ID_TDI: 149 157 if (pdev->device == PCI_DEVICE_ID_TDI_EHCI) {
+195 -181
drivers/usb/host/ehci-tegra.c
··· 24 24 #include <linux/gpio.h> 25 25 #include <linux/of.h> 26 26 #include <linux/of_gpio.h> 27 + #include <linux/pm_runtime.h> 27 28 28 29 #include <mach/usb_phy.h> 29 30 #include <mach/iomap.h>
··· 38 37 struct clk *emc_clk; 39 38 struct usb_phy *transceiver; 40 39 int host_resumed; 41 - int bus_suspended; 42 40 int port_resuming; 43 - int power_down_on_bus_suspend; 44 41 enum tegra_usb_phy_port_speed port_speed; 45 42 }; 46 43
··· 272 273 up_write(&ehci_cf_port_reset_rwsem); 273 274 } 274 275 275 - static int tegra_usb_suspend(struct usb_hcd *hcd) 276 - { 277 - struct tegra_ehci_hcd *tegra = dev_get_drvdata(hcd->self.controller); 278 - struct ehci_regs __iomem *hw = tegra->ehci->regs; 279 - unsigned long flags; 280 - 281 - spin_lock_irqsave(&tegra->ehci->lock, flags); 282 - 283 - tegra->port_speed = (readl(&hw->port_status[0]) >> 26) & 0x3; 284 - ehci_halt(tegra->ehci); 285 - clear_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags); 286 - 287 - spin_unlock_irqrestore(&tegra->ehci->lock, flags); 288 - 289 - tegra_ehci_power_down(hcd); 290 - return 0; 291 - } 292 - 293 - static int tegra_usb_resume(struct usb_hcd *hcd) 294 - { 295 - struct tegra_ehci_hcd *tegra = dev_get_drvdata(hcd->self.controller); 296 - struct ehci_hcd *ehci = hcd_to_ehci(hcd); 297 - struct ehci_regs __iomem *hw = ehci->regs; 298 - unsigned long val; 299 - 300 - set_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags); 301 - tegra_ehci_power_up(hcd); 302 - 303 - if (tegra->port_speed > TEGRA_USB_PHY_PORT_SPEED_HIGH) { 304 - /* Wait for the phy to detect new devices 305 - * before we restart the controller */ 306 - msleep(10); 307 - goto restart; 308 - } 309 - 310 - /* Force the phy to keep data lines in suspend state */ 311 - tegra_ehci_phy_restore_start(tegra->phy, tegra->port_speed); 312 - 313 - /* Enable host mode */ 314 - tdi_reset(ehci); 315 - 316 - /* Enable Port Power */ 317 - val = readl(&hw->port_status[0]); 318 - val |= PORT_POWER; 319 - writel(val, &hw->port_status[0]); 320 - udelay(10); 321 - 322 - /* Check if the phy resume from LP0. When the phy resume from LP0 323 - * USB register will be reset. */ 324 - if (!readl(&hw->async_next)) { 325 - /* Program the field PTC based on the saved speed mode */ 326 - val = readl(&hw->port_status[0]); 327 - val &= ~PORT_TEST(~0); 328 - if (tegra->port_speed == TEGRA_USB_PHY_PORT_SPEED_HIGH) 329 - val |= PORT_TEST_FORCE; 330 - else if (tegra->port_speed == TEGRA_USB_PHY_PORT_SPEED_FULL) 331 - val |= PORT_TEST(6); 332 - else if (tegra->port_speed == TEGRA_USB_PHY_PORT_SPEED_LOW) 333 - val |= PORT_TEST(7); 334 - writel(val, &hw->port_status[0]); 335 - udelay(10); 336 - 337 - /* Disable test mode by setting PTC field to NORMAL_OP */ 338 - val = readl(&hw->port_status[0]); 339 - val &= ~PORT_TEST(~0); 340 - writel(val, &hw->port_status[0]); 341 - udelay(10); 342 - } 343 - 344 - /* Poll until CCS is enabled */ 345 - if (handshake(ehci, &hw->port_status[0], PORT_CONNECT, 346 - PORT_CONNECT, 2000)) { 347 - pr_err("%s: timeout waiting for PORT_CONNECT\n", __func__); 348 - goto restart; 349 - } 350 - 351 - /* Poll until PE is enabled */ 352 - if (handshake(ehci, &hw->port_status[0], PORT_PE, 353 - PORT_PE, 2000)) { 354 - pr_err("%s: timeout waiting for USB_PORTSC1_PE\n", __func__); 355 - goto restart; 356 - } 357 - 358 - /* Clear the PCI status, to avoid an interrupt taken upon resume */ 359 - val = readl(&hw->status); 360 - val |= STS_PCD; 361 - writel(val, &hw->status); 362 - 363 - /* Put controller in suspend mode by writing 1 to SUSP bit of PORTSC */ 364 - val = readl(&hw->port_status[0]); 365 - if ((val & PORT_POWER) && (val & PORT_PE)) { 366 - val |= PORT_SUSPEND; 367 - writel(val, &hw->port_status[0]); 368 - 369 - /* Wait until port suspend completes */ 370 - if (handshake(ehci, &hw->port_status[0], PORT_SUSPEND, 371 - PORT_SUSPEND, 1000)) { 372 - pr_err("%s: timeout waiting for PORT_SUSPEND\n", 373 - __func__); 374 - goto restart; 375 - } 376 - } 377 - 378 - tegra_ehci_phy_restore_end(tegra->phy); 379 - return 0; 380 - 381 - restart: 382 - if (tegra->port_speed <= TEGRA_USB_PHY_PORT_SPEED_HIGH) 383 - tegra_ehci_phy_restore_end(tegra->phy); 384 - 385 - tegra_ehci_restart(hcd); 386 - return 0; 387 - } 388 - 389 276 static void tegra_ehci_shutdown(struct usb_hcd *hcd) 390 277 { 391 278 struct tegra_ehci_hcd *tegra = dev_get_drvdata(hcd->self.controller);
··· 318 433 ehci_port_power(ehci, 1); 319 434 return retval; 320 435 } 321 - 322 - #ifdef CONFIG_PM 323 - static int tegra_ehci_bus_suspend(struct usb_hcd *hcd) 324 - { 325 - struct tegra_ehci_hcd *tegra = dev_get_drvdata(hcd->self.controller); 326 - int error_status = 0; 327 - 328 - error_status = ehci_bus_suspend(hcd); 329 - if (!error_status && tegra->power_down_on_bus_suspend) { 330 - tegra_usb_suspend(hcd); 331 - tegra->bus_suspended = 1; 332 - } 333 - 334 - return error_status; 335 - } 336 - 337 - static int tegra_ehci_bus_resume(struct usb_hcd *hcd) 338 - { 339 - struct tegra_ehci_hcd *tegra = dev_get_drvdata(hcd->self.controller); 340 - 341 - if (tegra->bus_suspended && tegra->power_down_on_bus_suspend) { 342 - tegra_usb_resume(hcd); 343 - tegra->bus_suspended = 0; 344 - } 345 - 346 - tegra_usb_phy_preresume(tegra->phy); 347 - tegra->port_resuming = 1; 348 - return ehci_bus_resume(hcd); 349 - } 350 - #endif 351 436 352 437 struct temp_buffer { 353 438 void *kmalloc_ptr;
··· 429 574 .hub_control = tegra_ehci_hub_control, 430 575 .clear_tt_buffer_complete = ehci_clear_tt_buffer_complete, 431 576 #ifdef CONFIG_PM 432 - .bus_suspend = tegra_ehci_bus_suspend, 433 - .bus_resume = tegra_ehci_bus_resume, 577 + .bus_suspend = ehci_bus_suspend, 578 + .bus_resume = ehci_bus_resume, 434 579 #endif 435 580 .relinquish_port = ehci_relinquish_port, 436 581 .port_handed_over = ehci_port_handed_over,
··· 458 603 dev_err(&pdev->dev, "can't enable vbus\n"); 459 604 return err; 460 605 } 461 - gpio_set_value(gpio, 1); 462 606 463 607 return err; 464 608 } 609 + 610 + #ifdef CONFIG_PM 611 + 612 + static int controller_suspend(struct device *dev) 613 + { 614 + struct tegra_ehci_hcd *tegra = 615 + platform_get_drvdata(to_platform_device(dev)); 616 + struct ehci_hcd *ehci = tegra->ehci; 617 + struct usb_hcd *hcd = ehci_to_hcd(ehci); 618 + struct ehci_regs __iomem *hw = ehci->regs; 619 + unsigned long flags; 620 + 621 + if (time_before(jiffies, ehci->next_statechange)) 622 + msleep(10); 623 + 624 + spin_lock_irqsave(&ehci->lock, flags); 625 + 626 + tegra->port_speed = (readl(&hw->port_status[0]) >> 26) & 0x3; 627 + ehci_halt(ehci); 628 + clear_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags); 629 + 630 + spin_unlock_irqrestore(&ehci->lock, flags); 631 + 632 + tegra_ehci_power_down(hcd); 633 + return 0; 634 + } 635 + 636 + static int controller_resume(struct device *dev) 637 + { 638 + struct tegra_ehci_hcd *tegra = 639 + platform_get_drvdata(to_platform_device(dev)); 640 + struct ehci_hcd *ehci = tegra->ehci; 641 + struct usb_hcd *hcd = ehci_to_hcd(ehci); 642 + struct ehci_regs __iomem *hw = ehci->regs; 643 + unsigned long val; 644 + 645 + set_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags); 646 + tegra_ehci_power_up(hcd); 647 + 648 + if (tegra->port_speed > TEGRA_USB_PHY_PORT_SPEED_HIGH) { 649 + /* Wait for the phy to detect new devices 650 + * before we restart the controller */ 651 + msleep(10); 652 + goto restart; 653 + } 654 + 655 + /* Force the phy to keep data lines in suspend state */ 656 + tegra_ehci_phy_restore_start(tegra->phy, tegra->port_speed); 657 + 658 + /* Enable host mode */ 659 + tdi_reset(ehci); 660 + 661 + /* Enable Port Power */ 662 + val = readl(&hw->port_status[0]); 663 + val |= PORT_POWER; 664 + writel(val, &hw->port_status[0]); 665 + udelay(10); 666 + 667 + /* Check if the phy resume from LP0. When the phy resume from LP0 668 + * USB register will be reset. */ 669 + if (!readl(&hw->async_next)) { 670 + /* Program the field PTC based on the saved speed mode */ 671 + val = readl(&hw->port_status[0]); 672 + val &= ~PORT_TEST(~0); 673 + if (tegra->port_speed == TEGRA_USB_PHY_PORT_SPEED_HIGH) 674 + val |= PORT_TEST_FORCE; 675 + else if (tegra->port_speed == TEGRA_USB_PHY_PORT_SPEED_FULL) 676 + val |= PORT_TEST(6); 677 + else if (tegra->port_speed == TEGRA_USB_PHY_PORT_SPEED_LOW) 678 + val |= PORT_TEST(7); 679 + writel(val, &hw->port_status[0]); 680 + udelay(10); 681 + 682 + /* Disable test mode by setting PTC field to NORMAL_OP */ 683 + val = readl(&hw->port_status[0]); 684 + val &= ~PORT_TEST(~0); 685 + writel(val, &hw->port_status[0]); 686 + udelay(10); 687 + } 688 + 689 + /* Poll until CCS is enabled */ 690 + if (handshake(ehci, &hw->port_status[0], PORT_CONNECT, 691 + PORT_CONNECT, 2000)) { 692 + pr_err("%s: timeout waiting for PORT_CONNECT\n", __func__); 693 + goto restart; 694 + } 695 + 696 + /* Poll until PE is enabled */ 697 + if (handshake(ehci, &hw->port_status[0], PORT_PE, 698 + PORT_PE, 2000)) { 699 + pr_err("%s: timeout waiting for USB_PORTSC1_PE\n", __func__); 700 + goto restart; 701 + } 702 + 703 + /* Clear the PCI status, to avoid an interrupt taken upon resume */ 704 + val = readl(&hw->status); 705 + val |= STS_PCD; 706 + writel(val, &hw->status); 707 + 708 + /* Put controller in suspend mode by writing 1 to SUSP bit of PORTSC */ 709 + val = readl(&hw->port_status[0]); 710 + if ((val & PORT_POWER) && (val & PORT_PE)) { 711 + val |= PORT_SUSPEND; 712 + writel(val, &hw->port_status[0]); 713 + 714 + /* Wait until port suspend completes */ 715 + if (handshake(ehci, &hw->port_status[0], PORT_SUSPEND, 716 + PORT_SUSPEND, 1000)) { 717 + pr_err("%s: timeout waiting for PORT_SUSPEND\n", 718 + __func__); 719 + goto restart; 720 + } 721 + } 722 + 723 + tegra_ehci_phy_restore_end(tegra->phy); 724 + goto done; 725 + 726 + restart: 727 + if (tegra->port_speed <= TEGRA_USB_PHY_PORT_SPEED_HIGH) 728 + tegra_ehci_phy_restore_end(tegra->phy); 729 + 730 + tegra_ehci_restart(hcd); 731 + 732 + done: 733 + tegra_usb_phy_preresume(tegra->phy); 734 + tegra->port_resuming = 1; 735 + return 0; 736 + } 737 + 738 + static int tegra_ehci_suspend(struct device *dev) 739 + { 740 + struct tegra_ehci_hcd *tegra = 741 + platform_get_drvdata(to_platform_device(dev)); 742 + struct usb_hcd *hcd = ehci_to_hcd(tegra->ehci); 743 + int rc = 0; 744 + 745 + /* 746 + * When system sleep is supported and USB controller wakeup is 747 + * implemented: If the controller is runtime-suspended and the 748 + * wakeup setting needs to be changed, call pm_runtime_resume(). 749 + */ 750 + if (HCD_HW_ACCESSIBLE(hcd)) 751 + rc = controller_suspend(dev); 752 + return rc; 753 + } 754 + 755 + static int tegra_ehci_resume(struct device *dev) 756 + { 757 + int rc; 758 + 759 + rc = controller_resume(dev); 760 + if (rc == 0) { 761 + pm_runtime_disable(dev); 762 + pm_runtime_set_active(dev); 763 + pm_runtime_enable(dev); 764 + } 765 + return rc; 766 + } 767 + 768 + static int tegra_ehci_runtime_suspend(struct device *dev) 769 + { 770 + return controller_suspend(dev); 771 + } 772 + 773 + static int tegra_ehci_runtime_resume(struct device *dev) 774 + { 775 + return controller_resume(dev); 776 + } 777 + 778 + static const struct dev_pm_ops tegra_ehci_pm_ops = { 779 + .suspend = tegra_ehci_suspend, 780 + .resume = tegra_ehci_resume, 781 + .runtime_suspend = tegra_ehci_runtime_suspend, 782 + .runtime_resume = tegra_ehci_runtime_resume, 783 + }; 784 + 785 + #endif 465 786 466 787 static u64 tegra_ehci_dma_mask = DMA_BIT_MASK(32); 467 788
··· 753 722 } 754 723 755 724 tegra->host_resumed = 1; 756 - tegra->power_down_on_bus_suspend = pdata->power_down_on_bus_suspend; 757 725 tegra->ehci = hcd_to_ehci(hcd); 758 726 759 727 irq = platform_get_irq(pdev, 0);
··· 776 746 goto fail; 777 747 } 778 748 749 + pm_runtime_set_active(&pdev->dev); 750 + pm_runtime_get_noresume(&pdev->dev); 751 + 752 + /* Don't skip the
pm_runtime_forbid call if wakeup isn't working */ 753 + /* if (!pdata->power_down_on_bus_suspend) */ 754 + pm_runtime_forbid(&pdev->dev); 755 + pm_runtime_enable(&pdev->dev); 756 + pm_runtime_put_sync(&pdev->dev); 779 757 return err; 780 758 781 759 fail: ··· 810 772 return err; 811 773 } 812 774 813 - #ifdef CONFIG_PM 814 - static int tegra_ehci_resume(struct platform_device *pdev) 815 - { 816 - struct tegra_ehci_hcd *tegra = platform_get_drvdata(pdev); 817 - struct usb_hcd *hcd = ehci_to_hcd(tegra->ehci); 818 - 819 - if (tegra->bus_suspended) 820 - return 0; 821 - 822 - return tegra_usb_resume(hcd); 823 - } 824 - 825 - static int tegra_ehci_suspend(struct platform_device *pdev, pm_message_t state) 826 - { 827 - struct tegra_ehci_hcd *tegra = platform_get_drvdata(pdev); 828 - struct usb_hcd *hcd = ehci_to_hcd(tegra->ehci); 829 - 830 - if (tegra->bus_suspended) 831 - return 0; 832 - 833 - if (time_before(jiffies, tegra->ehci->next_statechange)) 834 - msleep(10); 835 - 836 - return tegra_usb_suspend(hcd); 837 - } 838 - #endif 839 - 840 775 static int tegra_ehci_remove(struct platform_device *pdev) 841 776 { 842 777 struct tegra_ehci_hcd *tegra = platform_get_drvdata(pdev); ··· 817 806 818 807 if (tegra == NULL || hcd == NULL) 819 808 return -EINVAL; 809 + 810 + pm_runtime_get_sync(&pdev->dev); 811 + pm_runtime_disable(&pdev->dev); 812 + pm_runtime_put_noidle(&pdev->dev); 820 813 821 814 #ifdef CONFIG_USB_OTG_UTILS 822 815 if (tegra->transceiver) { ··· 862 847 static struct platform_driver tegra_ehci_driver = { 863 848 .probe = tegra_ehci_probe, 864 849 .remove = tegra_ehci_remove, 865 - #ifdef CONFIG_PM 866 - .suspend = tegra_ehci_suspend, 867 - .resume = tegra_ehci_resume, 868 - #endif 869 850 .shutdown = tegra_ehci_hcd_shutdown, 870 851 .driver = { 871 852 .name = "tegra-ehci", 872 853 .of_match_table = tegra_ehci_of_match, 854 + #ifdef CONFIG_PM 855 + .pm = &tegra_ehci_pm_ops, 856 + #endif 873 857 } 874 858 };
+2 -1
drivers/usb/musb/davinci.c
··· 386 386 usb_nop_xceiv_register(); 387 387 musb->xceiv = usb_get_transceiver(); 388 388 if (!musb->xceiv) 389 - return -ENODEV; 389 + goto unregister; 390 390 391 391 musb->mregs += DAVINCI_BASE_OFFSET; 392 392 ··· 444 444 445 445 fail: 446 446 usb_put_transceiver(musb->xceiv); 447 + unregister: 447 448 usb_nop_xceiv_unregister(); 448 449 return -ENODEV; 449 450 }
+1 -1
drivers/usb/musb/musb_core.h
··· 449 449 * We added this flag to forcefully disable double 450 450 * buffering until we get it working. 451 451 */ 452 - unsigned double_buffer_not_ok:1 __deprecated; 452 + unsigned double_buffer_not_ok:1; 453 453 454 454 struct musb_hdrc_config *config; 455 455
+14 -1
drivers/usb/otg/gpio_vbus.c
··· 96 96 struct gpio_vbus_data *gpio_vbus = 97 97 container_of(work, struct gpio_vbus_data, work); 98 98 struct gpio_vbus_mach_info *pdata = gpio_vbus->dev->platform_data; 99 - int gpio; 99 + int gpio, status; 100 100 101 101 if (!gpio_vbus->phy.otg->gadget) 102 102 return; ··· 108 108 */ 109 109 gpio = pdata->gpio_pullup; 110 110 if (is_vbus_powered(pdata)) { 111 + status = USB_EVENT_VBUS; 111 112 gpio_vbus->phy.state = OTG_STATE_B_PERIPHERAL; 113 + gpio_vbus->phy.last_event = status; 112 114 usb_gadget_vbus_connect(gpio_vbus->phy.otg->gadget); 113 115 114 116 /* drawing a "unit load" is *always* OK, except for OTG */ ··· 119 117 /* optionally enable D+ pullup */ 120 118 if (gpio_is_valid(gpio)) 121 119 gpio_set_value(gpio, !pdata->gpio_pullup_inverted); 120 + 121 + atomic_notifier_call_chain(&gpio_vbus->phy.notifier, 122 + status, gpio_vbus->phy.otg->gadget); 122 123 } else { 123 124 /* optionally disable D+ pullup */ 124 125 if (gpio_is_valid(gpio)) ··· 130 125 set_vbus_draw(gpio_vbus, 0); 131 126 132 127 usb_gadget_vbus_disconnect(gpio_vbus->phy.otg->gadget); 128 + status = USB_EVENT_NONE; 133 129 gpio_vbus->phy.state = OTG_STATE_B_IDLE; 130 + gpio_vbus->phy.last_event = status; 131 + 132 + atomic_notifier_call_chain(&gpio_vbus->phy.notifier, 133 + status, gpio_vbus->phy.otg->gadget); 134 134 } 135 135 } 136 136 ··· 297 287 irq, err); 298 288 goto err_irq; 299 289 } 290 + 291 + ATOMIC_INIT_NOTIFIER_HEAD(&gpio_vbus->phy.notifier); 292 + 300 293 INIT_WORK(&gpio_vbus->work, gpio_vbus_work); 301 294 302 295 gpio_vbus->vbus_draw = regulator_get(&pdev->dev, "vbus_draw");
+1
drivers/video/bfin-lq035q1-fb.c
··· 13 13 #include <linux/errno.h> 14 14 #include <linux/string.h> 15 15 #include <linux/fb.h> 16 + #include <linux/gpio.h> 16 17 #include <linux/slab.h> 17 18 #include <linux/init.h> 18 19 #include <linux/types.h>
+3 -3
drivers/watchdog/hpwdt.c
··· 435 435 { 436 436 reload = SECS_TO_TICKS(soft_margin); 437 437 iowrite16(reload, hpwdt_timer_reg); 438 - iowrite16(0x85, hpwdt_timer_con); 438 + iowrite8(0x85, hpwdt_timer_con); 439 439 } 440 440 441 441 static void hpwdt_stop(void) 442 442 { 443 443 unsigned long data; 444 444 445 - data = ioread16(hpwdt_timer_con); 445 + data = ioread8(hpwdt_timer_con); 446 446 data &= 0xFE; 447 - iowrite16(data, hpwdt_timer_con); 447 + iowrite8(data, hpwdt_timer_con); 448 448 } 449 449 450 450 static void hpwdt_ping(void)
+1 -1
drivers/xen/events.c
··· 274 274 275 275 static bool pirq_check_eoi_map(unsigned irq) 276 276 { 277 - return test_bit(irq, pirq_eoi_map); 277 + return test_bit(pirq_from_irq(irq), pirq_eoi_map); 278 278 } 279 279 280 280 static bool pirq_needs_eoi_flag(unsigned irq)
+4 -1
drivers/xen/xen-acpi-processor.c
··· 128 128 pr_debug(" C%d: %s %d uS\n", 129 129 cx->type, cx->desc, (u32)cx->latency); 130 130 } 131 - } else 131 + } else if (ret != -EINVAL) 132 + /* EINVAL means the ACPI ID is incorrect - meaning the ACPI 133 + * table is referencing a non-existing CPU - which can happen 134 + * with broken ACPI tables. */ 132 135 pr_err(DRV_NAME "(CX): Hypervisor error (%d) for ACPI CPU%u\n", 133 136 ret, _pr->acpi_id); 134 137
+11 -1
fs/autofs4/autofs_i.h
··· 110 110 int sub_version; 111 111 int min_proto; 112 112 int max_proto; 113 - int compat_daemon; 114 113 unsigned long exp_timeout; 115 114 unsigned int type; 116 115 int reghost_enabled; ··· 268 269 int autofs4_fill_super(struct super_block *, void *, int); 269 270 struct autofs_info *autofs4_new_ino(struct autofs_sb_info *); 270 271 void autofs4_clean_ino(struct autofs_info *); 272 + 273 + static inline int autofs_prepare_pipe(struct file *pipe) 274 + { 275 + if (!pipe->f_op || !pipe->f_op->write) 276 + return -EINVAL; 277 + if (!S_ISFIFO(pipe->f_dentry->d_inode->i_mode)) 278 + return -EINVAL; 279 + /* We want a packet pipe */ 280 + pipe->f_flags |= O_DIRECT; 281 + return 0; 282 + } 271 283 272 284 /* Queue management functions */ 273 285
+1 -2
fs/autofs4/dev-ioctl.c
··· 376 376 err = -EBADF; 377 377 goto out; 378 378 } 379 - if (!pipe->f_op || !pipe->f_op->write) { 379 + if (autofs_prepare_pipe(pipe) < 0) { 380 380 err = -EPIPE; 381 381 fput(pipe); 382 382 goto out; ··· 385 385 sbi->pipefd = pipefd; 386 386 sbi->pipe = pipe; 387 387 sbi->catatonic = 0; 388 - sbi->compat_daemon = is_compat_task(); 389 388 } 390 389 out: 391 390 mutex_unlock(&sbi->wq_mutex);
+1 -3
fs/autofs4/inode.c
··· 19 19 #include <linux/parser.h> 20 20 #include <linux/bitops.h> 21 21 #include <linux/magic.h> 22 - #include <linux/compat.h> 23 22 #include "autofs_i.h" 24 23 #include <linux/module.h> 25 24 ··· 224 225 set_autofs_type_indirect(&sbi->type); 225 226 sbi->min_proto = 0; 226 227 sbi->max_proto = 0; 227 - sbi->compat_daemon = is_compat_task(); 228 228 mutex_init(&sbi->wq_mutex); 229 229 mutex_init(&sbi->pipe_mutex); 230 230 spin_lock_init(&sbi->fs_lock); ··· 290 292 printk("autofs: could not open pipe file descriptor\n"); 291 293 goto fail_dput; 292 294 } 293 - if (!pipe->f_op || !pipe->f_op->write) 295 + if (autofs_prepare_pipe(pipe) < 0) 294 296 goto fail_fput; 295 297 sbi->pipe = pipe; 296 298 sbi->pipefd = pipefd;
+3 -19
fs/autofs4/waitq.c
··· 91 91 92 92 return (bytes > 0); 93 93 } 94 - 95 - /* 96 - * The autofs_v5 packet was misdesigned. 97 - * 98 - * The packets are identical on x86-32 and x86-64, but have different 99 - * alignment. Which means that 'sizeof()' will give different results. 100 - * Fix it up for the case of running 32-bit user mode on a 64-bit kernel. 101 - */ 102 - static noinline size_t autofs_v5_packet_size(struct autofs_sb_info *sbi) 103 - { 104 - size_t pktsz = sizeof(struct autofs_v5_packet); 105 - #if defined(CONFIG_X86_64) && defined(CONFIG_COMPAT) 106 - if (sbi->compat_daemon > 0) 107 - pktsz -= 4; 108 - #endif 109 - return pktsz; 110 - } 111 - 94 + 112 95 static void autofs4_notify_daemon(struct autofs_sb_info *sbi, 113 96 struct autofs_wait_queue *wq, 114 97 int type) ··· 155 172 { 156 173 struct autofs_v5_packet *packet = &pkt.v5_pkt.v5_packet; 157 174 158 - pktsz = autofs_v5_packet_size(sbi); 175 + pktsz = sizeof(*packet); 176 + 159 177 packet->wait_queue_token = wq->wait_queue_token; 160 178 packet->len = wq->name.len; 161 179 memcpy(packet->name, wq->name.name, wq->name.len);
+20 -7
fs/btrfs/backref.c
··· 22 22 #include "ulist.h" 23 23 #include "transaction.h" 24 24 #include "delayed-ref.h" 25 + #include "locking.h" 25 26 26 27 /* 27 28 * this structure records all encountered refs on the way up to the root ··· 894 893 s64 bytes_left = size - 1; 895 894 struct extent_buffer *eb = eb_in; 896 895 struct btrfs_key found_key; 896 + int leave_spinning = path->leave_spinning; 897 897 898 898 if (bytes_left >= 0) 899 899 dest[bytes_left] = '\0'; 900 900 901 + path->leave_spinning = 1; 901 902 while (1) { 902 903 len = btrfs_inode_ref_name_len(eb, iref); 903 904 bytes_left -= len; 904 905 if (bytes_left >= 0) 905 906 read_extent_buffer(eb, dest + bytes_left, 906 907 (unsigned long)(iref + 1), len); 907 - if (eb != eb_in) 908 + if (eb != eb_in) { 909 + btrfs_tree_read_unlock_blocking(eb); 908 910 free_extent_buffer(eb); 911 + } 909 912 ret = inode_ref_info(parent, 0, fs_root, path, &found_key); 910 913 if (ret > 0) 911 914 ret = -ENOENT; ··· 924 919 slot = path->slots[0]; 925 920 eb = path->nodes[0]; 926 921 /* make sure we can use eb after releasing the path */ 927 - if (eb != eb_in) 922 + if (eb != eb_in) { 928 923 atomic_inc(&eb->refs); 924 + btrfs_tree_read_lock(eb); 925 + btrfs_set_lock_blocking_rw(eb, BTRFS_READ_LOCK); 926 + } 929 927 btrfs_release_path(path); 930 928 931 929 iref = btrfs_item_ptr(eb, slot, struct btrfs_inode_ref); ··· 939 931 } 940 932 941 933 btrfs_release_path(path); 934 + path->leave_spinning = leave_spinning; 942 935 943 936 if (ret) 944 937 return ERR_PTR(ret); ··· 1256 1247 struct btrfs_path *path, 1257 1248 iterate_irefs_t *iterate, void *ctx) 1258 1249 { 1259 - int ret; 1250 + int ret = 0; 1260 1251 int slot; 1261 1252 u32 cur; 1262 1253 u32 len; ··· 1268 1259 struct btrfs_inode_ref *iref; 1269 1260 struct btrfs_key found_key; 1270 1261 1271 - while (1) { 1262 + while (!ret) { 1263 + path->leave_spinning = 1; 1272 1264 ret = inode_ref_info(inum, parent ? 
parent+1 : 0, fs_root, path, 1273 1265 &found_key); 1274 1266 if (ret < 0) ··· 1285 1275 eb = path->nodes[0]; 1286 1276 /* make sure we can use eb after releasing the path */ 1287 1277 atomic_inc(&eb->refs); 1278 + btrfs_tree_read_lock(eb); 1279 + btrfs_set_lock_blocking_rw(eb, BTRFS_READ_LOCK); 1288 1280 btrfs_release_path(path); 1289 1281 1290 1282 item = btrfs_item_nr(eb, slot); ··· 1300 1288 (unsigned long long)found_key.objectid, 1301 1289 (unsigned long long)fs_root->objectid); 1302 1290 ret = iterate(parent, iref, eb, ctx); 1303 - if (ret) { 1304 - free_extent_buffer(eb); 1291 + if (ret) 1305 1292 break; 1306 - } 1307 1293 len = sizeof(*iref) + name_len; 1308 1294 iref = (struct btrfs_inode_ref *)((char *)iref + len); 1309 1295 } 1296 + btrfs_tree_read_unlock_blocking(eb); 1310 1297 free_extent_buffer(eb); 1311 1298 } 1312 1299 ··· 1425 1414 1426 1415 void free_ipath(struct inode_fs_paths *ipath) 1427 1416 { 1417 + if (!ipath) 1418 + return; 1428 1419 kfree(ipath->fspath); 1429 1420 kfree(ipath); 1430 1421 }
+1 -1
fs/btrfs/ctree.h
··· 1078 1078 * is required instead of the faster short fsync log commits 1079 1079 */ 1080 1080 u64 last_trans_log_full_commit; 1081 - unsigned long mount_opt:21; 1081 + unsigned long mount_opt; 1082 1082 unsigned long compress_type:4; 1083 1083 u64 max_inline; 1084 1084 u64 alloc_start;
+11 -11
fs/btrfs/disk-io.c
··· 383 383 if (test_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags)) 384 384 break; 385 385 386 - if (!failed_mirror) { 387 - failed = 1; 388 - printk(KERN_ERR "failed mirror was %d\n", eb->failed_mirror); 389 - failed_mirror = eb->failed_mirror; 390 - } 391 - 392 386 num_copies = btrfs_num_copies(&root->fs_info->mapping_tree, 393 387 eb->start, eb->len); 394 388 if (num_copies == 1) 395 389 break; 390 + 391 + if (!failed_mirror) { 392 + failed = 1; 393 + failed_mirror = eb->read_mirror; 394 + } 396 395 397 396 mirror_num++; 398 397 if (mirror_num == failed_mirror) ··· 563 564 } 564 565 565 566 static int btree_readpage_end_io_hook(struct page *page, u64 start, u64 end, 566 - struct extent_state *state) 567 + struct extent_state *state, int mirror) 567 568 { 568 569 struct extent_io_tree *tree; 569 570 u64 found_start; ··· 588 589 if (!reads_done) 589 590 goto err; 590 591 592 + eb->read_mirror = mirror; 591 593 if (test_bit(EXTENT_BUFFER_IOERR, &eb->bflags)) { 592 594 ret = -EIO; 593 595 goto err; ··· 652 652 653 653 eb = (struct extent_buffer *)page->private; 654 654 set_bit(EXTENT_BUFFER_IOERR, &eb->bflags); 655 - eb->failed_mirror = failed_mirror; 655 + eb->read_mirror = failed_mirror; 656 656 if (test_and_clear_bit(EXTENT_BUFFER_READAHEAD, &eb->bflags)) 657 657 btree_readahead_hook(root, eb, eb->start, -EIO); 658 658 return -EIO; /* we fixed nothing */ ··· 2254 2254 goto fail_sb_buffer; 2255 2255 } 2256 2256 2257 - if (sectorsize < PAGE_SIZE) { 2258 - printk(KERN_WARNING "btrfs: Incompatible sector size " 2259 - "found on %s\n", sb->s_id); 2257 + if (sectorsize != PAGE_SIZE) { 2258 + printk(KERN_WARNING "btrfs: Incompatible sector size(%lu) " 2259 + "found on %s\n", (unsigned long)sectorsize, sb->s_id); 2260 2260 goto fail_sb_buffer; 2261 2261 } 2262 2262
+7 -8
fs/btrfs/extent-tree.c
··· 2301 2301 2302 2302 if (ret) { 2303 2303 printk(KERN_DEBUG "btrfs: run_delayed_extent_op returned %d\n", ret); 2304 + spin_lock(&delayed_refs->lock); 2304 2305 return ret; 2305 2306 } 2306 2307 ··· 2332 2331 2333 2332 if (ret) { 2334 2333 printk(KERN_DEBUG "btrfs: run_one_delayed_ref returned %d\n", ret); 2334 + spin_lock(&delayed_refs->lock); 2335 2335 return ret; 2336 2336 } 2337 2337 ··· 3771 3769 */ 3772 3770 if (current->journal_info) 3773 3771 return -EAGAIN; 3774 - ret = wait_event_interruptible(space_info->wait, 3775 - !space_info->flush); 3776 - /* Must have been interrupted, return */ 3777 - if (ret) { 3778 - printk(KERN_DEBUG "btrfs: %s returning -EINTR\n", __func__); 3772 + ret = wait_event_killable(space_info->wait, !space_info->flush); 3773 + /* Must have been killed, return */ 3774 + if (ret) 3779 3775 return -EINTR; 3780 - } 3781 3776 3782 3777 spin_lock(&space_info->lock); 3783 3778 } ··· 4214 4215 4215 4216 num_bytes = calc_global_metadata_size(fs_info); 4216 4217 4217 - spin_lock(&block_rsv->lock); 4218 4218 spin_lock(&sinfo->lock); 4219 + spin_lock(&block_rsv->lock); 4219 4220 4220 4221 block_rsv->size = num_bytes; 4221 4222 ··· 4241 4242 block_rsv->full = 1; 4242 4243 } 4243 4244 4244 - spin_unlock(&sinfo->lock); 4245 4245 spin_unlock(&block_rsv->lock); 4246 + spin_unlock(&sinfo->lock); 4246 4247 } 4247 4248 4248 4249 static void init_global_block_rsv(struct btrfs_fs_info *fs_info)
+28 -28
fs/btrfs/extent_io.c
··· 402 402 return 0; 403 403 } 404 404 405 + static struct extent_state *next_state(struct extent_state *state) 406 + { 407 + struct rb_node *next = rb_next(&state->rb_node); 408 + if (next) 409 + return rb_entry(next, struct extent_state, rb_node); 410 + else 411 + return NULL; 412 + } 413 + 405 414 /* 406 415 * utility function to clear some bits in an extent state struct. 407 - * it will optionally wake up any one waiting on this state (wake == 1), or 408 - * forcibly remove the state from the tree (delete == 1). 416 + * it will optionally wake up any one waiting on this state (wake == 1) 409 417 * 410 418 * If no bits are set on the state struct after clearing things, the 411 419 * struct is freed and removed from the tree 412 420 */ 413 - static int clear_state_bit(struct extent_io_tree *tree, 414 - struct extent_state *state, 415 - int *bits, int wake) 421 + static struct extent_state *clear_state_bit(struct extent_io_tree *tree, 422 + struct extent_state *state, 423 + int *bits, int wake) 416 424 { 425 + struct extent_state *next; 417 426 int bits_to_clear = *bits & ~EXTENT_CTLBITS; 418 - int ret = state->state & bits_to_clear; 419 427 420 428 if ((bits_to_clear & EXTENT_DIRTY) && (state->state & EXTENT_DIRTY)) { 421 429 u64 range = state->end - state->start + 1; ··· 435 427 if (wake) 436 428 wake_up(&state->wq); 437 429 if (state->state == 0) { 430 + next = next_state(state); 438 431 if (state->tree) { 439 432 rb_erase(&state->rb_node, &tree->state); 440 433 state->tree = NULL; ··· 445 436 } 446 437 } else { 447 438 merge_state(tree, state); 439 + next = next_state(state); 448 440 } 449 - return ret; 441 + return next; 450 442 } 451 443 452 444 static struct extent_state * ··· 486 476 struct extent_state *state; 487 477 struct extent_state *cached; 488 478 struct extent_state *prealloc = NULL; 489 - struct rb_node *next_node; 490 479 struct rb_node *node; 491 480 u64 last_end; 492 481 int err; ··· 537 528 WARN_ON(state->end < start); 538 529 last_end = 
state->end; 539 530 540 - if (state->end < end && !need_resched()) 541 - next_node = rb_next(&state->rb_node); 542 - else 543 - next_node = NULL; 544 - 545 531 /* the state doesn't have the wanted bits, go ahead */ 546 - if (!(state->state & bits)) 532 + if (!(state->state & bits)) { 533 + state = next_state(state); 547 534 goto next; 535 + } 548 536 549 537 /* 550 538 * | ---- desired range ---- | ··· 599 593 goto out; 600 594 } 601 595 602 - clear_state_bit(tree, state, &bits, wake); 596 + state = clear_state_bit(tree, state, &bits, wake); 603 597 next: 604 598 if (last_end == (u64)-1) 605 599 goto out; 606 600 start = last_end + 1; 607 - if (start <= end && next_node) { 608 - state = rb_entry(next_node, struct extent_state, 609 - rb_node); 601 + if (start <= end && state && !need_resched()) 610 602 goto hit_next; 611 - } 612 603 goto search_again; 613 604 614 605 out: ··· 2304 2301 u64 start; 2305 2302 u64 end; 2306 2303 int whole_page; 2307 - int failed_mirror; 2304 + int mirror; 2308 2305 int ret; 2309 2306 2310 2307 if (err) ··· 2343 2340 } 2344 2341 spin_unlock(&tree->lock); 2345 2342 2343 + mirror = (int)(unsigned long)bio->bi_bdev; 2346 2344 if (uptodate && tree->ops && tree->ops->readpage_end_io_hook) { 2347 2345 ret = tree->ops->readpage_end_io_hook(page, start, end, 2348 - state); 2346 + state, mirror); 2349 2347 if (ret) 2350 2348 uptodate = 0; 2351 2349 else 2352 2350 clean_io_failure(start, page); 2353 2351 } 2354 2352 2355 - if (!uptodate) 2356 - failed_mirror = (int)(unsigned long)bio->bi_bdev; 2357 - 2358 2353 if (!uptodate && tree->ops && tree->ops->readpage_io_failed_hook) { 2359 - ret = tree->ops->readpage_io_failed_hook(page, failed_mirror); 2354 + ret = tree->ops->readpage_io_failed_hook(page, mirror); 2360 2355 if (!ret && !err && 2361 2356 test_bit(BIO_UPTODATE, &bio->bi_flags)) 2362 2357 uptodate = 1; ··· 2369 2368 * can't handle the error it will return -EIO and we 2370 2369 * remain responsible for that page. 
2371 2370 */ 2372 - ret = bio_readpage_error(bio, page, start, end, 2373 - failed_mirror, NULL); 2371 + ret = bio_readpage_error(bio, page, start, end, mirror, NULL); 2374 2372 if (ret == 0) { 2375 2373 uptodate = 2376 2374 test_bit(BIO_UPTODATE, &bio->bi_flags); ··· 4462 4462 } 4463 4463 4464 4464 clear_bit(EXTENT_BUFFER_IOERR, &eb->bflags); 4465 - eb->failed_mirror = 0; 4465 + eb->read_mirror = 0; 4466 4466 atomic_set(&eb->io_pages, num_reads); 4467 4467 for (i = start_i; i < num_pages; i++) { 4468 4468 page = extent_buffer_page(eb, i);
+2 -2
fs/btrfs/extent_io.h
··· 79 79 u64 start, u64 end, 80 80 struct extent_state *state); 81 81 int (*readpage_end_io_hook)(struct page *page, u64 start, u64 end, 82 - struct extent_state *state); 82 + struct extent_state *state, int mirror); 83 83 int (*writepage_end_io_hook)(struct page *page, u64 start, u64 end, 84 84 struct extent_state *state, int uptodate); 85 85 void (*set_bit_hook)(struct inode *inode, struct extent_state *state, ··· 135 135 spinlock_t refs_lock; 136 136 atomic_t refs; 137 137 atomic_t io_pages; 138 - int failed_mirror; 138 + int read_mirror; 139 139 struct list_head leak_list; 140 140 struct rcu_head rcu_head; 141 141 pid_t lock_owner;
+7 -2
fs/btrfs/file.c
··· 567 567 int extent_type; 568 568 int recow; 569 569 int ret; 570 + int modify_tree = -1; 570 571 571 572 if (drop_cache) 572 573 btrfs_drop_extent_cache(inode, start, end - 1, 0); ··· 576 575 if (!path) 577 576 return -ENOMEM; 578 577 578 + if (start >= BTRFS_I(inode)->disk_i_size) 579 + modify_tree = 0; 580 + 579 581 while (1) { 580 582 recow = 0; 581 583 ret = btrfs_lookup_file_extent(trans, root, path, ino, 582 - search_start, -1); 584 + search_start, modify_tree); 583 585 if (ret < 0) 584 586 break; 585 587 if (ret > 0 && path->slots[0] > 0 && search_start == start) { ··· 638 634 } 639 635 640 636 search_start = max(key.offset, start); 641 - if (recow) { 637 + if (recow || !modify_tree) { 638 + modify_tree = -1; 642 639 btrfs_release_path(path); 643 640 continue; 644 641 }
+17 -35
fs/btrfs/inode.c
··· 1947 1947 * extent_io.c will try to find good copies for us. 1948 1948 */ 1949 1949 static int btrfs_readpage_end_io_hook(struct page *page, u64 start, u64 end, 1950 - struct extent_state *state) 1950 + struct extent_state *state, int mirror) 1951 1951 { 1952 1952 size_t offset = start - ((u64)page->index << PAGE_CACHE_SHIFT); 1953 1953 struct inode *inode = page->mapping->host; ··· 4069 4069 BTRFS_I(inode)->dummy_inode = 1; 4070 4070 4071 4071 inode->i_ino = BTRFS_EMPTY_SUBVOL_DIR_OBJECTID; 4072 - inode->i_op = &simple_dir_inode_operations; 4072 + inode->i_op = &btrfs_dir_ro_inode_operations; 4073 4073 inode->i_fop = &simple_dir_operations; 4074 4074 inode->i_mode = S_IFDIR | S_IRUGO | S_IWUSR | S_IXUGO; 4075 4075 inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME; ··· 4140 4140 static int btrfs_dentry_delete(const struct dentry *dentry) 4141 4141 { 4142 4142 struct btrfs_root *root; 4143 + struct inode *inode = dentry->d_inode; 4143 4144 4144 - if (!dentry->d_inode && !IS_ROOT(dentry)) 4145 - dentry = dentry->d_parent; 4145 + if (!inode && !IS_ROOT(dentry)) 4146 + inode = dentry->d_parent->d_inode; 4146 4147 4147 - if (dentry->d_inode) { 4148 - root = BTRFS_I(dentry->d_inode)->root; 4148 + if (inode) { 4149 + root = BTRFS_I(inode)->root; 4149 4150 if (btrfs_root_refs(&root->root_item) == 0) 4151 + return 1; 4152 + 4153 + if (btrfs_ino(inode) == BTRFS_EMPTY_SUBVOL_DIR_OBJECTID) 4150 4154 return 1; 4151 4155 } 4152 4156 return 0; ··· 4192 4188 struct btrfs_path *path; 4193 4189 struct list_head ins_list; 4194 4190 struct list_head del_list; 4195 - struct qstr q; 4196 4191 int ret; 4197 4192 struct extent_buffer *leaf; 4198 4193 int slot; ··· 4282 4279 4283 4280 while (di_cur < di_total) { 4284 4281 struct btrfs_key location; 4285 - struct dentry *tmp; 4286 4282 4287 4283 if (verify_dir_item(root, leaf, di)) 4288 4284 break; ··· 4302 4300 d_type = btrfs_filetype_table[btrfs_dir_type(leaf, di)]; 4303 4301 btrfs_dir_item_key_to_cpu(leaf, di, 
&location); 4304 4302 4305 - q.name = name_ptr; 4306 - q.len = name_len; 4307 - q.hash = full_name_hash(q.name, q.len); 4308 - tmp = d_lookup(filp->f_dentry, &q); 4309 - if (!tmp) { 4310 - struct btrfs_key *newkey; 4311 4303 4312 - newkey = kzalloc(sizeof(struct btrfs_key), 4313 - GFP_NOFS); 4314 - if (!newkey) 4315 - goto no_dentry; 4316 - tmp = d_alloc(filp->f_dentry, &q); 4317 - if (!tmp) { 4318 - kfree(newkey); 4319 - dput(tmp); 4320 - goto no_dentry; 4321 - } 4322 - memcpy(newkey, &location, 4323 - sizeof(struct btrfs_key)); 4324 - tmp->d_fsdata = newkey; 4325 - tmp->d_flags |= DCACHE_NEED_LOOKUP; 4326 - d_rehash(tmp); 4327 - dput(tmp); 4328 - } else { 4329 - dput(tmp); 4330 - } 4331 - no_dentry: 4332 4304 /* is this a reference to our own snapshot? If so 4333 - * skip it 4305 + * skip it. 4306 + * 4307 + * In contrast to old kernels, we insert the snapshot's 4308 + * dir item and dir index after it has been created, so 4309 + * we won't find a reference to our own snapshot. We 4310 + * still keep the following code for backward 4311 + * compatibility. 4334 4312 */ 4335 4313 if (location.type == BTRFS_ROOT_ITEM_KEY && 4336 4314 location.objectid == root->root_key.objectid) {
+4 -1
fs/btrfs/ioctl.c
··· 2262 2262 di_args->bytes_used = dev->bytes_used; 2263 2263 di_args->total_bytes = dev->total_bytes; 2264 2264 memcpy(di_args->uuid, dev->uuid, sizeof(di_args->uuid)); 2265 - strncpy(di_args->path, dev->name, sizeof(di_args->path)); 2265 + if (dev->name) 2266 + strncpy(di_args->path, dev->name, sizeof(di_args->path)); 2267 + else 2268 + di_args->path[0] = '\0'; 2266 2269 2267 2270 out: 2268 2271 if (ret == 0 && copy_to_user(arg, di_args, sizeof(*di_args)))
+30 -20
fs/btrfs/reada.c
··· 250 250 struct btrfs_bio *bbio) 251 251 { 252 252 int ret; 253 - int looped = 0; 254 253 struct reada_zone *zone; 255 254 struct btrfs_block_group_cache *cache = NULL; 256 255 u64 start; 257 256 u64 end; 258 257 int i; 259 258 260 - again: 261 259 zone = NULL; 262 260 spin_lock(&fs_info->reada_lock); 263 261 ret = radix_tree_gang_lookup(&dev->reada_zones, (void **)&zone, ··· 271 273 kref_put(&zone->refcnt, reada_zone_release); 272 274 spin_unlock(&fs_info->reada_lock); 273 275 } 274 - 275 - if (looped) 276 - return NULL; 277 276 278 277 cache = btrfs_lookup_block_group(fs_info, logical); 279 278 if (!cache) ··· 302 307 ret = radix_tree_insert(&dev->reada_zones, 303 308 (unsigned long)(zone->end >> PAGE_CACHE_SHIFT), 304 309 zone); 305 - spin_unlock(&fs_info->reada_lock); 306 310 307 - if (ret) { 311 + if (ret == -EEXIST) { 308 312 kfree(zone); 309 - looped = 1; 310 - goto again; 313 + ret = radix_tree_gang_lookup(&dev->reada_zones, (void **)&zone, 314 + logical >> PAGE_CACHE_SHIFT, 1); 315 + if (ret == 1) 316 + kref_get(&zone->refcnt); 311 317 } 318 + spin_unlock(&fs_info->reada_lock); 312 319 313 320 return zone; 314 321 } ··· 320 323 struct btrfs_key *top, int level) 321 324 { 322 325 int ret; 323 - int looped = 0; 324 326 struct reada_extent *re = NULL; 327 + struct reada_extent *re_exist = NULL; 325 328 struct btrfs_fs_info *fs_info = root->fs_info; 326 329 struct btrfs_mapping_tree *map_tree = &fs_info->mapping_tree; 327 330 struct btrfs_bio *bbio = NULL; 328 331 struct btrfs_device *dev; 332 + struct btrfs_device *prev_dev; 329 333 u32 blocksize; 330 334 u64 length; 331 335 int nzones = 0; 332 336 int i; 333 337 unsigned long index = logical >> PAGE_CACHE_SHIFT; 334 338 335 - again: 336 339 spin_lock(&fs_info->reada_lock); 337 340 re = radix_tree_lookup(&fs_info->reada_tree, index); 338 341 if (re) 339 342 kref_get(&re->refcnt); 340 343 spin_unlock(&fs_info->reada_lock); 341 344 342 - if (re || looped) 345 + if (re) 343 346 return re; 344 347 345 348 re = 
kzalloc(sizeof(*re), GFP_NOFS); ··· 395 398 /* insert extent in reada_tree + all per-device trees, all or nothing */ 396 399 spin_lock(&fs_info->reada_lock); 397 400 ret = radix_tree_insert(&fs_info->reada_tree, index, re); 398 - if (ret) { 401 + if (ret == -EEXIST) { 402 + re_exist = radix_tree_lookup(&fs_info->reada_tree, index); 403 + BUG_ON(!re_exist); 404 + kref_get(&re_exist->refcnt); 399 405 spin_unlock(&fs_info->reada_lock); 400 - if (ret != -ENOMEM) { 401 - /* someone inserted the extent in the meantime */ 402 - looped = 1; 403 - } 404 406 goto error; 405 407 } 408 + if (ret) { 409 + spin_unlock(&fs_info->reada_lock); 410 + goto error; 411 + } 412 + prev_dev = NULL; 406 413 for (i = 0; i < nzones; ++i) { 407 414 dev = bbio->stripes[i].dev; 415 + if (dev == prev_dev) { 416 + /* 417 + * in case of DUP, just add the first zone. As both 418 + * are on the same device, there's nothing to gain 419 + * from adding both. 420 + * Also, it wouldn't work, as the tree is per device 421 + * and adding would fail with EEXIST 422 + */ 423 + continue; 424 + } 425 + prev_dev = dev; 408 426 ret = radix_tree_insert(&dev->reada_extents, index, re); 409 427 if (ret) { 410 428 while (--i >= 0) { ··· 462 450 } 463 451 kfree(bbio); 464 452 kfree(re); 465 - if (looped) 466 - goto again; 467 - return NULL; 453 + return re_exist; 468 454 } 469 455 470 456 static void reada_kref_dummy(struct kref *kr)
+3 -1
fs/btrfs/relocation.c
··· 1279 1279 if (rb_node) 1280 1280 backref_tree_panic(rb_node, -EEXIST, node->bytenr); 1281 1281 } else { 1282 + spin_lock(&root->fs_info->trans_lock); 1282 1283 list_del_init(&root->root_list); 1284 + spin_unlock(&root->fs_info->trans_lock); 1283 1285 kfree(node); 1284 1286 } 1285 1287 return 0; ··· 3813 3811 3814 3812 ret = btrfs_block_rsv_check(rc->extent_root, rc->block_rsv, 5); 3815 3813 if (ret < 0) { 3816 - if (ret != -EAGAIN) { 3814 + if (ret != -ENOSPC) { 3817 3815 err = ret; 3818 3816 WARN_ON(1); 3819 3817 break;
-15
fs/btrfs/scrub.c
··· 1257 1257 if (memcmp(csum, on_disk_csum, sdev->csum_size)) 1258 1258 fail = 1; 1259 1259 1260 - if (fail) { 1261 - spin_lock(&sdev->stat_lock); 1262 - ++sdev->stat.csum_errors; 1263 - spin_unlock(&sdev->stat_lock); 1264 - } 1265 - 1266 1260 return fail; 1267 1261 } 1268 1262 ··· 1328 1334 btrfs_csum_final(crc, calculated_csum); 1329 1335 if (memcmp(calculated_csum, on_disk_csum, sdev->csum_size)) 1330 1336 ++crc_fail; 1331 - 1332 - if (crc_fail || fail) { 1333 - spin_lock(&sdev->stat_lock); 1334 - if (crc_fail) 1335 - ++sdev->stat.csum_errors; 1336 - if (fail) 1337 - ++sdev->stat.verify_errors; 1338 - spin_unlock(&sdev->stat_lock); 1339 - } 1340 1337 1341 1338 return fail || crc_fail; 1342 1339 }
+4 -3
fs/btrfs/super.c
··· 815 815 return 0; 816 816 } 817 817 818 - btrfs_start_delalloc_inodes(root, 0); 819 818 btrfs_wait_ordered_extents(root, 0, 0); 820 819 821 820 trans = btrfs_start_transaction(root, 0); ··· 1147 1148 if (ret) 1148 1149 goto restore; 1149 1150 } else { 1150 - if (fs_info->fs_devices->rw_devices == 0) 1151 + if (fs_info->fs_devices->rw_devices == 0) { 1151 1152 ret = -EACCES; 1152 1153 goto restore; 1154 + } 1153 1155 1154 - if (btrfs_super_log_root(fs_info->super_copy) != 0) 1156 + if (btrfs_super_log_root(fs_info->super_copy) != 0) { 1155 1157 ret = -EINVAL; 1156 1158 goto restore; 1159 + } 1157 1160 1158 1161 ret = btrfs_cleanup_fs_roots(fs_info); 1159 1162 if (ret)
+5 -1
fs/btrfs/transaction.c
··· 73 73 74 74 cur_trans = root->fs_info->running_transaction; 75 75 if (cur_trans) { 76 - if (cur_trans->aborted) 76 + if (cur_trans->aborted) { 77 + spin_unlock(&root->fs_info->trans_lock); 77 78 return cur_trans->aborted; 79 + } 78 80 atomic_inc(&cur_trans->use_count); 79 81 atomic_inc(&cur_trans->num_writers); 80 82 cur_trans->num_joined++; ··· 1402 1400 ret = commit_fs_roots(trans, root); 1403 1401 if (ret) { 1404 1402 mutex_unlock(&root->fs_info->tree_log_mutex); 1403 + mutex_unlock(&root->fs_info->reloc_mutex); 1405 1404 goto cleanup_transaction; 1406 1405 } 1407 1406 ··· 1414 1411 ret = commit_cowonly_roots(trans, root); 1415 1412 if (ret) { 1416 1413 mutex_unlock(&root->fs_info->tree_log_mutex); 1414 + mutex_unlock(&root->fs_info->reloc_mutex); 1417 1415 goto cleanup_transaction; 1418 1416 } 1419 1417
+9 -4
fs/btrfs/volumes.c
··· 3324 3324 stripe_size = devices_info[ndevs-1].max_avail; 3325 3325 num_stripes = ndevs * dev_stripes; 3326 3326 3327 - if (stripe_size * num_stripes > max_chunk_size * ncopies) { 3327 + if (stripe_size * ndevs > max_chunk_size * ncopies) { 3328 3328 stripe_size = max_chunk_size * ncopies; 3329 - do_div(stripe_size, num_stripes); 3329 + do_div(stripe_size, ndevs); 3330 3330 } 3331 3331 3332 3332 do_div(stripe_size, dev_stripes); 3333 + 3334 + /* align to BTRFS_STRIPE_LEN */ 3333 3335 do_div(stripe_size, BTRFS_STRIPE_LEN); 3334 3336 stripe_size *= BTRFS_STRIPE_LEN; 3335 3337 ··· 3807 3805 else if (mirror_num) 3808 3806 stripe_index += mirror_num - 1; 3809 3807 else { 3808 + int old_stripe_index = stripe_index; 3810 3809 stripe_index = find_live_mirror(map, stripe_index, 3811 3810 map->sub_stripes, stripe_index + 3812 3811 current->pid % map->sub_stripes); 3813 - mirror_num = stripe_index + 1; 3812 + mirror_num = stripe_index - old_stripe_index + 1; 3814 3813 } 3815 3814 } else { 3816 3815 /* ··· 4353 4350 4354 4351 ret = __btrfs_open_devices(fs_devices, FMODE_READ, 4355 4352 root->fs_info->bdev_holder); 4356 - if (ret) 4353 + if (ret) { 4354 + free_fs_devices(fs_devices); 4357 4355 goto out; 4356 + } 4358 4357 4359 4358 if (!fs_devices->seeding) { 4360 4359 __btrfs_close_devices(fs_devices);
-1
fs/buffer.c
··· 985 985 return page; 986 986 987 987 failed: 988 - BUG(); 989 988 unlock_page(page); 990 989 page_cache_release(page); 991 990 return NULL;
+8 -4
fs/cifs/cifsfs.c
··· 370 370 (int)(srcaddr->sa_family)); 371 371 } 372 372 373 - seq_printf(s, ",uid=%d", cifs_sb->mnt_uid); 373 + seq_printf(s, ",uid=%u", cifs_sb->mnt_uid); 374 374 if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_OVERR_UID) 375 375 seq_printf(s, ",forceuid"); 376 376 else 377 377 seq_printf(s, ",noforceuid"); 378 378 379 - seq_printf(s, ",gid=%d", cifs_sb->mnt_gid); 379 + seq_printf(s, ",gid=%u", cifs_sb->mnt_gid); 380 380 if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_OVERR_GID) 381 381 seq_printf(s, ",forcegid"); 382 382 else ··· 434 434 seq_printf(s, ",noperm"); 435 435 if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_STRICT_IO) 436 436 seq_printf(s, ",strictcache"); 437 + if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_CIFS_BACKUPUID) 438 + seq_printf(s, ",backupuid=%u", cifs_sb->mnt_backupuid); 439 + if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_CIFS_BACKUPGID) 440 + seq_printf(s, ",backupgid=%u", cifs_sb->mnt_backupgid); 437 441 438 - seq_printf(s, ",rsize=%d", cifs_sb->rsize); 439 - seq_printf(s, ",wsize=%d", cifs_sb->wsize); 442 + seq_printf(s, ",rsize=%u", cifs_sb->rsize); 443 + seq_printf(s, ",wsize=%u", cifs_sb->wsize); 440 444 /* convert actimeo and display it in seconds */ 441 445 seq_printf(s, ",actimeo=%lu", cifs_sb->actimeo / HZ); 442 446
+6 -6
fs/cifs/connect.c
··· 3228 3228 3229 3229 cifs_sb->mnt_uid = pvolume_info->linux_uid; 3230 3230 cifs_sb->mnt_gid = pvolume_info->linux_gid; 3231 - if (pvolume_info->backupuid_specified) 3232 - cifs_sb->mnt_backupuid = pvolume_info->backupuid; 3233 - if (pvolume_info->backupgid_specified) 3234 - cifs_sb->mnt_backupgid = pvolume_info->backupgid; 3235 3231 cifs_sb->mnt_file_mode = pvolume_info->file_mode; 3236 3232 cifs_sb->mnt_dir_mode = pvolume_info->dir_mode; 3237 3233 cFYI(1, "file mode: 0x%hx dir mode: 0x%hx", ··· 3258 3262 cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_RWPIDFORWARD; 3259 3263 if (pvolume_info->cifs_acl) 3260 3264 cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_CIFS_ACL; 3261 - if (pvolume_info->backupuid_specified) 3265 + if (pvolume_info->backupuid_specified) { 3262 3266 cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_CIFS_BACKUPUID; 3263 - if (pvolume_info->backupgid_specified) 3267 + cifs_sb->mnt_backupuid = pvolume_info->backupuid; 3268 + } 3269 + if (pvolume_info->backupgid_specified) { 3264 3270 cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_CIFS_BACKUPGID; 3271 + cifs_sb->mnt_backupgid = pvolume_info->backupgid; 3272 + } 3265 3273 if (pvolume_info->override_uid) 3266 3274 cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_OVERR_UID; 3267 3275 if (pvolume_info->override_gid)
+2 -1
fs/cifs/file.c
··· 2178 2178 unsigned long nr_pages, i; 2179 2179 size_t copied, len, cur_len; 2180 2180 ssize_t total_written = 0; 2181 - loff_t offset = *poffset; 2181 + loff_t offset; 2182 2182 struct iov_iter it; 2183 2183 struct cifsFileInfo *open_file; 2184 2184 struct cifs_tcon *tcon; ··· 2200 2200 cifs_sb = CIFS_SB(file->f_path.dentry->d_sb); 2201 2201 open_file = file->private_data; 2202 2202 tcon = tlink_tcon(open_file->tlink); 2203 + offset = *poffset; 2203 2204 2204 2205 if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD) 2205 2206 pid = open_file->pid;
+3 -1
fs/eventpoll.c
··· 1663 1663 if (op == EPOLL_CTL_ADD) { 1664 1664 if (is_file_epoll(tfile)) { 1665 1665 error = -ELOOP; 1666 - if (ep_loop_check(ep, tfile) != 0) 1666 + if (ep_loop_check(ep, tfile) != 0) { 1667 + clear_tfile_check_list(); 1667 1668 goto error_tgt_fput; 1669 + } 1668 1670 } else 1669 1671 list_add(&tfile->f_tfile_llink, &tfile_check_list); 1670 1672 }
+2
fs/ext4/super.c
··· 1597 1597 unsigned int *journal_ioprio, 1598 1598 int is_remount) 1599 1599 { 1600 + #ifdef CONFIG_QUOTA 1600 1601 struct ext4_sb_info *sbi = EXT4_SB(sb); 1602 + #endif 1601 1603 char *p; 1602 1604 substring_t args[MAX_OPT_ARGS]; 1603 1605 int token;
+7 -3
fs/gfs2/lock_dlm.c
··· 200 200 return -1; 201 201 } 202 202 203 - static u32 make_flags(const u32 lkid, const unsigned int gfs_flags, 203 + static u32 make_flags(struct gfs2_glock *gl, const unsigned int gfs_flags, 204 204 const int req) 205 205 { 206 206 u32 lkf = DLM_LKF_VALBLK; 207 + u32 lkid = gl->gl_lksb.sb_lkid; 207 208 208 209 if (gfs_flags & LM_FLAG_TRY) 209 210 lkf |= DLM_LKF_NOQUEUE; ··· 228 227 BUG(); 229 228 } 230 229 231 - if (lkid != 0) 230 + if (lkid != 0) { 232 231 lkf |= DLM_LKF_CONVERT; 232 + if (test_bit(GLF_BLOCKING, &gl->gl_flags)) 233 + lkf |= DLM_LKF_QUECVT; 234 + } 233 235 234 236 return lkf; 235 237 } ··· 254 250 char strname[GDLM_STRNAME_BYTES] = ""; 255 251 256 252 req = make_mode(req_state); 257 - lkf = make_flags(gl->gl_lksb.sb_lkid, flags, req); 253 + lkf = make_flags(gl, flags, req); 258 254 gfs2_glstats_inc(gl, GFS2_LKS_DCOUNT); 259 255 gfs2_sbstats_inc(gl, GFS2_LKS_DCOUNT); 260 256 if (gl->gl_lksb.sb_lkid) {
+1
fs/hugetlbfs/inode.c
··· 485 485 inode->i_fop = &simple_dir_operations; 486 486 /* directory inodes start off with i_nlink == 2 (for "." entry) */ 487 487 inc_nlink(inode); 488 + lockdep_annotate_inode_mutex_key(inode); 488 489 } 489 490 return inode; 490 491 }
+2 -2
fs/jbd2/commit.c
··· 723 723 if (commit_transaction->t_need_data_flush && 724 724 (journal->j_fs_dev != journal->j_dev) && 725 725 (journal->j_flags & JBD2_BARRIER)) 726 - blkdev_issue_flush(journal->j_fs_dev, GFP_KERNEL, NULL); 726 + blkdev_issue_flush(journal->j_fs_dev, GFP_NOFS, NULL); 727 727 728 728 /* Done it all: now write the commit record asynchronously. */ 729 729 if (JBD2_HAS_INCOMPAT_FEATURE(journal, ··· 859 859 if (JBD2_HAS_INCOMPAT_FEATURE(journal, 860 860 JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT) && 861 861 journal->j_flags & JBD2_BARRIER) { 862 - blkdev_issue_flush(journal->j_dev, GFP_KERNEL, NULL); 862 + blkdev_issue_flush(journal->j_dev, GFP_NOFS, NULL); 863 863 } 864 864 865 865 if (err)
+3 -1
fs/nfs/blocklayout/blocklayout.c
··· 38 38 #include <linux/buffer_head.h> /* various write calls */ 39 39 #include <linux/prefetch.h> 40 40 41 + #include "../pnfs.h" 42 + #include "../internal.h" 41 43 #include "blocklayout.h" 42 44 43 45 #define NFSDBG_FACILITY NFSDBG_PNFS_LD ··· 870 868 * GETDEVICEINFO's maxcount 871 869 */ 872 870 max_resp_sz = server->nfs_client->cl_session->fc_attrs.max_resp_sz; 873 - max_pages = max_resp_sz >> PAGE_SHIFT; 871 + max_pages = nfs_page_array_len(0, max_resp_sz); 874 872 dprintk("%s max_resp_sz %u max_pages %d\n", 875 873 __func__, max_resp_sz, max_pages); 876 874
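Both this hunk and the matching one in nfs4filelayoutdev.c replace a plain right shift with `nfs_page_array_len()`, which rounds up instead of down. A sketch of the difference, assuming a 4 KiB page size purely for illustration:

```c
#include <stddef.h>

#define PAGE_SIZE_SKETCH 4096u  /* illustrative; the real value is per-arch */

/* Rounds up, like nfs_page_array_len(base, len) in fs/nfs/internal.h:
 * pages needed to cover len bytes starting at offset base. */
static size_t pages_needed(size_t base, size_t len)
{
    return (base + len + PAGE_SIZE_SKETCH - 1) / PAGE_SIZE_SKETCH;
}

/* The old computation (max_resp_sz >> PAGE_SHIFT) rounds down and can
 * yield zero pages for a sub-page GETDEVICEINFO response. */
static size_t pages_truncated(size_t len)
{
    return len / PAGE_SIZE_SKETCH;
}
```

A server advertising a `max_resp_sz` smaller than one page previously got a zero-length page array, so the round-up is the whole point of the change.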
+3 -2
fs/nfs/client.c
··· 1729 1729 */ 1730 1730 struct nfs_server *nfs_clone_server(struct nfs_server *source, 1731 1731 struct nfs_fh *fh, 1732 - struct nfs_fattr *fattr) 1732 + struct nfs_fattr *fattr, 1733 + rpc_authflavor_t flavor) 1733 1734 { 1734 1735 struct nfs_server *server; 1735 1736 struct nfs_fattr *fattr_fsinfo; ··· 1759 1758 1760 1759 error = nfs_init_server_rpcclient(server, 1761 1760 source->client->cl_timeout, 1762 - source->client->cl_auth->au_flavor); 1761 + flavor); 1763 1762 if (error < 0) 1764 1763 goto out_free_server; 1765 1764 if (!IS_ERR(source->client_acl))
+2 -2
fs/nfs/dir.c
··· 1429 1429 } 1430 1430 1431 1431 open_flags = nd->intent.open.flags; 1432 - attr.ia_valid = 0; 1432 + attr.ia_valid = ATTR_OPEN; 1433 1433 1434 1434 ctx = create_nfs_open_context(dentry, open_flags); 1435 1435 res = ERR_CAST(ctx); ··· 1536 1536 if (IS_ERR(ctx)) 1537 1537 goto out; 1538 1538 1539 - attr.ia_valid = 0; 1539 + attr.ia_valid = ATTR_OPEN; 1540 1540 if (openflags & O_TRUNC) { 1541 1541 attr.ia_valid |= ATTR_SIZE; 1542 1542 attr.ia_size = 0;
+4
fs/nfs/idmap.c
··· 554 554 struct nfs_client *clp; 555 555 int error = 0; 556 556 557 + if (!try_module_get(THIS_MODULE)) 558 + return 0; 559 + 557 560 while ((clp = nfs_get_client_for_event(sb->s_fs_info, event))) { 558 561 error = __rpc_pipefs_event(clp, event, sb); 559 562 nfs_put_client(clp); 560 563 if (error) 561 564 break; 562 565 } 566 + module_put(THIS_MODULE); 563 567 return error; 564 568 } 565 569
+4 -4
fs/nfs/internal.h
··· 165 165 extern void nfs_free_server(struct nfs_server *server); 166 166 extern struct nfs_server *nfs_clone_server(struct nfs_server *, 167 167 struct nfs_fh *, 168 - struct nfs_fattr *); 168 + struct nfs_fattr *, 169 + rpc_authflavor_t); 169 170 extern void nfs_mark_client_ready(struct nfs_client *clp, int state); 170 171 extern int nfs4_check_client_ready(struct nfs_client *clp); 171 172 extern struct nfs_client *nfs4_set_ds_client(struct nfs_client* mds_clp, ··· 187 186 188 187 /* nfs4namespace.c */ 189 188 #ifdef CONFIG_NFS_V4 190 - extern struct vfsmount *nfs_do_refmount(struct dentry *dentry); 189 + extern struct vfsmount *nfs_do_refmount(struct rpc_clnt *client, struct dentry *dentry); 191 190 #else 192 191 static inline 193 - struct vfsmount *nfs_do_refmount(struct dentry *dentry) 192 + struct vfsmount *nfs_do_refmount(struct rpc_clnt *client, struct dentry *dentry) 194 193 { 195 194 return ERR_PTR(-ENOENT); 196 195 } ··· 235 234 /* nfs4proc.c */ 236 235 #ifdef CONFIG_NFS_V4 237 236 extern struct rpc_procinfo nfs4_procedures[]; 238 - void nfs_fixup_secinfo_attributes(struct nfs_fattr *, struct nfs_fh *); 239 237 #endif 240 238 241 239 extern int nfs4_init_ds_session(struct nfs_client *clp);
+27 -66
fs/nfs/namespace.c
··· 148 148 return pseudoflavor; 149 149 } 150 150 151 - static int nfs_negotiate_security(const struct dentry *parent, 152 - const struct dentry *dentry, 153 - rpc_authflavor_t *flavor) 151 + static struct rpc_clnt *nfs_lookup_mountpoint(struct inode *dir, 152 + struct qstr *name, 153 + struct nfs_fh *fh, 154 + struct nfs_fattr *fattr) 154 155 { 155 - struct page *page; 156 - struct nfs4_secinfo_flavors *flavors; 157 - int (*secinfo)(struct inode *, const struct qstr *, struct nfs4_secinfo_flavors *); 158 - int ret = -EPERM; 159 - 160 - secinfo = NFS_PROTO(parent->d_inode)->secinfo; 161 - if (secinfo != NULL) { 162 - page = alloc_page(GFP_KERNEL); 163 - if (!page) { 164 - ret = -ENOMEM; 165 - goto out; 166 - } 167 - flavors = page_address(page); 168 - ret = secinfo(parent->d_inode, &dentry->d_name, flavors); 169 - *flavor = nfs_find_best_sec(flavors); 170 - put_page(page); 171 - } 172 - 173 - out: 174 - return ret; 175 - } 176 - 177 - static int nfs_lookup_with_sec(struct nfs_server *server, struct dentry *parent, 178 - struct dentry *dentry, struct path *path, 179 - struct nfs_fh *fh, struct nfs_fattr *fattr, 180 - rpc_authflavor_t *flavor) 181 - { 182 - struct rpc_clnt *clone; 183 - struct rpc_auth *auth; 184 156 int err; 185 157 186 - err = nfs_negotiate_security(parent, path->dentry, flavor); 187 - if (err < 0) 188 - goto out; 189 - clone = rpc_clone_client(server->client); 190 - auth = rpcauth_create(*flavor, clone); 191 - if (!auth) { 192 - err = -EIO; 193 - goto out_shutdown; 194 - } 195 - err = server->nfs_client->rpc_ops->lookup(clone, parent->d_inode, 196 - &path->dentry->d_name, 197 - fh, fattr); 198 - out_shutdown: 199 - rpc_shutdown_client(clone); 200 - out: 201 - return err; 158 + if (NFS_PROTO(dir)->version == 4) 159 + return nfs4_proc_lookup_mountpoint(dir, name, fh, fattr); 160 + 161 + err = NFS_PROTO(dir)->lookup(NFS_SERVER(dir)->client, dir, name, fh, fattr); 162 + if (err) 163 + return ERR_PTR(err); 164 + return rpc_clone_client(NFS_SERVER(dir)->client); 202 165 } 203 166 #else /* CONFIG_NFS_V4 */ 204 - static inline int nfs_lookup_with_sec(struct nfs_server *server, 205 - struct dentry *parent, struct dentry *dentry, 206 - struct path *path, struct nfs_fh *fh, 207 - struct nfs_fattr *fattr, 208 - rpc_authflavor_t *flavor) 167 + static inline struct rpc_clnt *nfs_lookup_mountpoint(struct inode *dir, 168 + struct qstr *name, 169 + struct nfs_fh *fh, 170 + struct nfs_fattr *fattr) 209 171 { 210 - return -EPERM; 172 + int err = NFS_PROTO(dir)->lookup(NFS_SERVER(dir)->client, dir, name, fh, fattr); 173 + if (err) 174 + return ERR_PTR(err); 175 + return rpc_clone_client(NFS_SERVER(dir)->client); 211 176 } 212 177 #endif /* CONFIG_NFS_V4 */ 213 178 ··· 191 226 struct vfsmount *nfs_d_automount(struct path *path) 192 227 { 193 228 struct vfsmount *mnt; 194 - struct nfs_server *server = NFS_SERVER(path->dentry->d_inode); 195 229 struct dentry *parent; 196 230 struct nfs_fh *fh = NULL; 197 231 struct nfs_fattr *fattr = NULL; 198 - int err; 199 - rpc_authflavor_t flavor = RPC_AUTH_UNIX; 232 + struct rpc_clnt *client; 200 233 201 234 dprintk("--> nfs_d_automount()\n"); 202 235 ··· 212 249 213 250 /* Look it up again to get its attributes */ 214 251 parent = dget_parent(path->dentry); 215 - err = server->nfs_client->rpc_ops->lookup(server->client, parent->d_inode, 216 - &path->dentry->d_name, 217 - fh, fattr); 218 - if (err == -EPERM && NFS_PROTO(parent->d_inode)->secinfo != NULL) 219 - err = nfs_lookup_with_sec(server, parent, path->dentry, path, fh, fattr, &flavor); 252 + client = nfs_lookup_mountpoint(parent->d_inode, &path->dentry->d_name, fh, fattr); 220 253 dput(parent); 221 - if (err != 0) { 222 - mnt = ERR_PTR(err); 254 + if (IS_ERR(client)) { 255 + mnt = ERR_CAST(client); 223 256 goto out; 224 257 } 225 258 226 259 if (fattr->valid & NFS_ATTR_FATTR_V4_REFERRAL) 227 - mnt = nfs_do_refmount(path->dentry); 260 + mnt = nfs_do_refmount(client, path->dentry); 228 261 else 229 - mnt = nfs_do_submount(path->dentry, fh, fattr, flavor); 262 + mnt = nfs_do_submount(path->dentry, fh, fattr, client->cl_auth->au_flavor); 263 + rpc_shutdown_client(client); 264 + 230 265 if (IS_ERR(mnt)) 231 266 goto out; 232 267
+9 -2
fs/nfs/nfs4_fs.h
··· 59 59 60 60 #define NFS_SEQID_CONFIRMED 1 61 61 struct nfs_seqid_counter { 62 + ktime_t create_time; 62 63 int owner_id; 63 64 int flags; 64 65 u32 counter; ··· 205 204 extern const struct dentry_operations nfs4_dentry_operations; 206 205 extern const struct inode_operations nfs4_dir_inode_operations; 207 206 207 + /* nfs4namespace.c */ 208 + struct rpc_clnt *nfs4_create_sec_client(struct rpc_clnt *, struct inode *, struct qstr *); 209 + 208 210 /* nfs4proc.c */ 209 211 extern int nfs4_proc_setclientid(struct nfs_client *, u32, unsigned short, struct rpc_cred *, struct nfs4_setclientid_res *); 210 212 extern int nfs4_proc_setclientid_confirm(struct nfs_client *, struct nfs4_setclientid_res *arg, struct rpc_cred *); ··· 216 212 extern int nfs41_init_clientid(struct nfs_client *, struct rpc_cred *); 217 213 extern int nfs4_do_close(struct nfs4_state *state, gfp_t gfp_mask, int wait, bool roc); 218 214 extern int nfs4_server_capabilities(struct nfs_server *server, struct nfs_fh *fhandle); 219 - extern int nfs4_proc_fs_locations(struct inode *dir, const struct qstr *name, 220 - struct nfs4_fs_locations *fs_locations, struct page *page); 215 + extern int nfs4_proc_fs_locations(struct rpc_clnt *, struct inode *, const struct qstr *, 216 + struct nfs4_fs_locations *, struct page *); 217 + extern struct rpc_clnt *nfs4_proc_lookup_mountpoint(struct inode *, struct qstr *, 218 + struct nfs_fh *, struct nfs_fattr *); 219 + extern int nfs4_proc_secinfo(struct inode *, const struct qstr *, struct nfs4_secinfo_flavors *); 221 220 extern int nfs4_release_lockowner(struct nfs4_lock_state *); 222 221 extern const struct xattr_handler *nfs4_xattr_handlers[]; 223 222
+1 -1
fs/nfs/nfs4filelayoutdev.c
··· 699 699 * GETDEVICEINFO's maxcount 700 700 */ 701 701 max_resp_sz = server->nfs_client->cl_session->fc_attrs.max_resp_sz; 702 - max_pages = max_resp_sz >> PAGE_SHIFT; 702 + max_pages = nfs_page_array_len(0, max_resp_sz); 703 703 dprintk("%s inode %p max_resp_sz %u max_pages %d\n", 704 704 __func__, inode, max_resp_sz, max_pages); 705 705
+81 -5
fs/nfs/nfs4namespace.c
··· 52 52 } 53 53 54 54 /* 55 + * return the path component of "<server>:<path>" 56 + * nfspath - the "<server>:<path>" string 57 + * end - one past the last char that could contain "<server>:" 58 + * returns NULL on failure 59 + */ 60 + static char *nfs_path_component(const char *nfspath, const char *end) 61 + { 62 + char *p; 63 + 64 + if (*nfspath == '[') { 65 + /* parse [] escaped IPv6 addrs */ 66 + p = strchr(nfspath, ']'); 67 + if (p != NULL && ++p < end && *p == ':') 68 + return p + 1; 69 + } else { 70 + /* otherwise split on first colon */ 71 + p = strchr(nfspath, ':'); 72 + if (p != NULL && p < end) 73 + return p + 1; 74 + } 75 + return NULL; 76 + } 77 + 78 + /* 55 79 * Determine the mount path as a string 56 80 */ 57 81 static char *nfs4_path(struct dentry *dentry, char *buffer, ssize_t buflen) ··· 83 59 char *limit; 84 60 char *path = nfs_path(&limit, dentry, buffer, buflen); 85 61 if (!IS_ERR(path)) { 86 - char *colon = strchr(path, ':'); 87 - if (colon && colon < limit) 88 - path = colon + 1; 62 + char *path_component = nfs_path_component(path, limit); 63 + if (path_component) 64 + return path_component; 89 65 } 90 66 return path; 91 67 } ··· 130 106 ret = 0; 131 107 } 132 108 return ret; 109 + } 110 + 111 + static rpc_authflavor_t nfs4_negotiate_security(struct inode *inode, struct qstr *name) 112 + { 113 + struct page *page; 114 + struct nfs4_secinfo_flavors *flavors; 115 + rpc_authflavor_t flavor; 116 + int err; 117 + 118 + page = alloc_page(GFP_KERNEL); 119 + if (!page) 120 + return -ENOMEM; 121 + flavors = page_address(page); 122 + 123 + err = nfs4_proc_secinfo(inode, name, flavors); 124 + if (err < 0) { 125 + flavor = err; 126 + goto out; 127 + } 128 + 129 + flavor = nfs_find_best_sec(flavors); 130 + 131 + out: 132 + put_page(page); 133 + return flavor; 134 + } 135 + 136 + /* 137 + * Please call rpc_shutdown_client() when you are done with this client. 138 + */ 139 + struct rpc_clnt *nfs4_create_sec_client(struct rpc_clnt *clnt, struct inode *inode, 140 + struct qstr *name) 141 + { 142 + struct rpc_clnt *clone; 143 + struct rpc_auth *auth; 144 + rpc_authflavor_t flavor; 145 + 146 + flavor = nfs4_negotiate_security(inode, name); 147 + if (flavor < 0) 148 + return ERR_PTR(flavor); 149 + 150 + clone = rpc_clone_client(clnt); 151 + if (IS_ERR(clone)) 152 + return clone; 153 + 154 + auth = rpcauth_create(flavor, clone); 155 + if (!auth) { 156 + rpc_shutdown_client(clone); 157 + clone = ERR_PTR(-EIO); 158 + } 159 + 160 + return clone; 133 161 } 134 162 135 163 static struct vfsmount *try_location(struct nfs_clone_mount *mountdata, ··· 300 224 * @dentry - dentry of referral 301 225 * 302 226 */ 303 - struct vfsmount *nfs_do_refmount(struct dentry *dentry) 227 + struct vfsmount *nfs_do_refmount(struct rpc_clnt *client, struct dentry *dentry) 304 228 { 305 229 struct vfsmount *mnt = ERR_PTR(-ENOMEM); 306 230 struct dentry *parent; ··· 326 250 dprintk("%s: getting locations for %s/%s\n", 327 251 __func__, parent->d_name.name, dentry->d_name.name); 328 252 329 - err = nfs4_proc_fs_locations(parent->d_inode, &dentry->d_name, fs_locations, page); 253 + err = nfs4_proc_fs_locations(client, parent->d_inode, &dentry->d_name, fs_locations, page); 330 254 dput(parent); 331 255 if (err != 0 || 332 256 fs_locations->nlocations <= 0 ||
+145 -53
fs/nfs/nfs4proc.c
··· 838 838 p->o_arg.open_flags = flags; 839 839 p->o_arg.fmode = fmode & (FMODE_READ|FMODE_WRITE); 840 840 p->o_arg.clientid = server->nfs_client->cl_clientid; 841 - p->o_arg.id = sp->so_seqid.owner_id; 841 + p->o_arg.id.create_time = ktime_to_ns(sp->so_seqid.create_time); 842 + p->o_arg.id.uniquifier = sp->so_seqid.owner_id; 842 843 p->o_arg.name = &dentry->d_name; 843 844 p->o_arg.server = server; 844 845 p->o_arg.bitmask = server->attr_bitmask; ··· 1467 1466 goto unlock_no_action; 1468 1467 rcu_read_unlock(); 1469 1468 } 1470 - /* Update sequence id. */ 1471 - data->o_arg.id = sp->so_seqid.owner_id; 1469 + /* Update client id. */ 1472 1470 data->o_arg.clientid = sp->so_server->nfs_client->cl_clientid; 1473 1471 if (data->o_arg.claim == NFS4_OPEN_CLAIM_PREVIOUS) { 1474 1472 task->tk_msg.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_OPEN_NOATTR]; ··· 1954 1954 }; 1955 1955 int err; 1956 1956 do { 1957 - err = nfs4_handle_exception(server, 1958 - _nfs4_do_setattr(inode, cred, fattr, sattr, state), 1959 - &exception); 1957 + err = _nfs4_do_setattr(inode, cred, fattr, sattr, state); 1958 + switch (err) { 1959 + case -NFS4ERR_OPENMODE: 1960 + if (state && !(state->state & FMODE_WRITE)) { 1961 + err = -EBADF; 1962 + if (sattr->ia_valid & ATTR_OPEN) 1963 + err = -EACCES; 1964 + goto out; 1965 + } 1966 + } 1967 + err = nfs4_handle_exception(server, err, &exception); 1960 1968 } while (exception.retry); 1969 + out: 1961 1970 return err; 1962 1971 } ··· 2377 2368 * Note that we'll actually follow the referral later when 2378 2369 * we detect fsid mismatch in inode revalidation 2379 2370 */ 2380 - static int nfs4_get_referral(struct inode *dir, const struct qstr *name, 2381 - struct nfs_fattr *fattr, struct nfs_fh *fhandle) 2371 + static int nfs4_get_referral(struct rpc_clnt *client, struct inode *dir, 2372 + const struct qstr *name, struct nfs_fattr *fattr, 2373 + struct nfs_fh *fhandle) 2382 2374 { 2383 2375 int status = -ENOMEM; 2384 2376 struct page *page = NULL; ··· 2392 2382 if (locations == NULL) 2393 2383 goto out; 2394 2384 2395 - status = nfs4_proc_fs_locations(dir, name, locations, page); 2385 + status = nfs4_proc_fs_locations(client, dir, name, locations, page); 2396 2386 if (status != 0) 2397 2387 goto out; 2398 2388 /* Make sure server returned a different fsid for the referral */ ··· 2529 2519 return status; 2530 2520 } 2531 2521 2532 - void nfs_fixup_secinfo_attributes(struct nfs_fattr *fattr, struct nfs_fh *fh) 2522 + static void nfs_fixup_secinfo_attributes(struct nfs_fattr *fattr) 2533 2523 { 2534 - memset(fh, 0, sizeof(struct nfs_fh)); 2535 - fattr->fsid.major = 1; 2536 2524 fattr->valid |= NFS_ATTR_FATTR_TYPE | NFS_ATTR_FATTR_MODE | 2537 - NFS_ATTR_FATTR_NLINK | NFS_ATTR_FATTR_FSID | NFS_ATTR_FATTR_MOUNTPOINT; 2525 + NFS_ATTR_FATTR_NLINK | NFS_ATTR_FATTR_MOUNTPOINT; 2538 2526 fattr->mode = S_IFDIR | S_IRUGO | S_IXUGO; 2539 2527 fattr->nlink = 2; 2528 + } 2529 + 2530 + static int nfs4_proc_lookup_common(struct rpc_clnt **clnt, struct inode *dir, 2531 + struct qstr *name, struct nfs_fh *fhandle, 2532 + struct nfs_fattr *fattr) 2533 + { 2534 + struct nfs4_exception exception = { }; 2535 + struct rpc_clnt *client = *clnt; 2536 + int err; 2537 + do { 2538 + err = _nfs4_proc_lookup(client, dir, name, fhandle, fattr); 2539 + switch (err) { 2540 + case -NFS4ERR_BADNAME: 2541 + err = -ENOENT; 2542 + goto out; 2543 + case -NFS4ERR_MOVED: 2544 + err = nfs4_get_referral(client, dir, name, fattr, fhandle); 2545 + goto out; 2546 + case -NFS4ERR_WRONGSEC: 2547 + err = -EPERM; 2548 + if (client != *clnt) 2549 + goto out; 2550 + 2551 + client = nfs4_create_sec_client(client, dir, name); 2552 + if (IS_ERR(client)) 2553 + return PTR_ERR(client); 2554 + 2555 + exception.retry = 1; 2556 + break; 2557 + default: 2558 + err = nfs4_handle_exception(NFS_SERVER(dir), err, &exception); 2559 + } 2560 + } while (exception.retry); 2561 + 2562 + out: 2563 + if (err == 0) 2564 + *clnt = client; 2565 + else if (client != *clnt) 2566 + rpc_shutdown_client(client); 2567 + 2568 + return err; 2540 2569 } 2541 2570 2542 2571 static int nfs4_proc_lookup(struct rpc_clnt *clnt, struct inode *dir, struct qstr *name, 2543 2572 struct nfs_fh *fhandle, struct nfs_fattr *fattr) 2544 2573 { 2545 - struct nfs4_exception exception = { }; 2546 - int err; 2547 - do { 2548 - int status; 2574 + int status; 2575 + struct rpc_clnt *client = NFS_CLIENT(dir); 2549 2576 2550 - status = _nfs4_proc_lookup(clnt, dir, name, fhandle, fattr); 2551 - switch (status) { 2552 - case -NFS4ERR_BADNAME: 2553 - return -ENOENT; 2554 - case -NFS4ERR_MOVED: 2555 - return nfs4_get_referral(dir, name, fattr, fhandle); 2556 - case -NFS4ERR_WRONGSEC: 2557 - nfs_fixup_secinfo_attributes(fattr, fhandle); 2558 - } 2559 - err = nfs4_handle_exception(NFS_SERVER(dir), 2560 - status, &exception); 2561 - } while (exception.retry); 2562 - return err; 2577 + status = nfs4_proc_lookup_common(&client, dir, name, fhandle, fattr); 2578 + if (client != NFS_CLIENT(dir)) { 2579 + rpc_shutdown_client(client); 2580 + nfs_fixup_secinfo_attributes(fattr); 2581 + } 2582 + return status; 2583 + } 2584 + 2585 + struct rpc_clnt * 2586 + nfs4_proc_lookup_mountpoint(struct inode *dir, struct qstr *name, 2587 + struct nfs_fh *fhandle, struct nfs_fattr *fattr) 2588 + { 2589 + int status; 2590 + struct rpc_clnt *client = rpc_clone_client(NFS_CLIENT(dir)); 2591 + 2592 + status = nfs4_proc_lookup_common(&client, dir, name, fhandle, fattr); 2593 + if (status < 0) { 2594 + rpc_shutdown_client(client); 2595 + return ERR_PTR(status); 2596 + } 2597 + return client; 2563 2598 } 2564 2599 2565 2600 static int _nfs4_proc_access(struct inode *inode, struct nfs_access_entry *entry) ··· 3674 3619 return ret; 3675 3620 } 3676 3621 3677 - static void nfs4_write_cached_acl(struct inode *inode, const char *buf, size_t acl_len) 3622 + static void nfs4_write_cached_acl(struct inode *inode, struct page **pages, size_t pgbase, size_t acl_len) 3678 3623 { 3679 3624 struct nfs4_cached_acl *acl; 3680 3625 3681 - if (buf && acl_len <= PAGE_SIZE) { 3626 + if (pages && acl_len <= PAGE_SIZE) { 3682 3627 acl = kmalloc(sizeof(*acl) + acl_len, GFP_KERNEL); 3683 3628 if (acl == NULL) 3684 3629 goto out; 3685 3630 acl->cached = 1; 3686 - memcpy(acl->data, buf, acl_len); 3631 + _copy_from_pages(acl->data, pages, pgbase, acl_len); 3687 3632 } else { 3688 3633 acl = kmalloc(sizeof(*acl), GFP_KERNEL); 3689 3634 if (acl == NULL) ··· 3716 3661 struct nfs_getaclres res = { 3717 3662 .acl_len = buflen, 3718 3663 }; 3719 - void *resp_buf; 3720 3664 struct rpc_message msg = { 3721 3665 .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_GETACL], 3722 3666 .rpc_argp = &args, ··· 3729 3675 if (npages == 0) 3730 3676 npages = 1; 3731 3677 3678 + /* Add an extra page to handle the bitmap returned */ 3679 + npages++; 3680 + 3732 3681 for (i = 0; i < npages; i++) { 3733 3682 pages[i] = alloc_page(GFP_KERNEL); 3734 3683 if (!pages[i]) 3735 3684 goto out_free; 3736 3685 } 3737 - if (npages > 1) { 3738 - /* for decoding across pages */ 3739 - res.acl_scratch = alloc_page(GFP_KERNEL); 3740 - if (!res.acl_scratch) 3741 - goto out_free; 3742 - } 3686 + 3687 + /* for decoding across pages */ 3688 + res.acl_scratch = alloc_page(GFP_KERNEL); 3689 + if (!res.acl_scratch) 3690 + goto out_free; 3691 + 3743 3692 args.acl_len = npages * PAGE_SIZE; 3744 3693 args.acl_pgbase = 0; 3694 + 3745 3695 /* Let decode_getfacl know not to fail if the ACL data is larger than 3746 3696 * the page we send as a guess */ 3747 3697 if (buf == NULL) 3748 3698 res.acl_flags |= NFS4_ACL_LEN_REQUEST; 3749 - resp_buf = page_address(pages[0]); 3750 3699 3751 3700 dprintk("%s buf %p buflen %zu npages %d args.acl_len %zu\n", 3752 3701 __func__, buf, buflen, npages, args.acl_len); ··· 3760 3703 3761 3704 acl_len = res.acl_len - res.acl_data_offset; 3762 3705 if (acl_len > args.acl_len) 3763 - nfs4_write_cached_acl(inode, NULL, acl_len); 3706 + nfs4_write_cached_acl(inode, NULL, 0, acl_len); 3764 3707 else 3765 - nfs4_write_cached_acl(inode, resp_buf + res.acl_data_offset, 3708 + nfs4_write_cached_acl(inode, pages, res.acl_data_offset, 3766 3709 acl_len); 3767 3710 if (buf) { 3768 3711 ret = -ERANGE; ··· 4615 4558 static int nfs4_lock_reclaim(struct nfs4_state *state, struct file_lock *request) 4616 4559 { 4617 4560 struct nfs_server *server = NFS_SERVER(state->inode); 4618 - struct nfs4_exception exception = { }; 4561 + struct nfs4_exception exception = { 4562 + .inode = state->inode, 4563 + }; 4619 4564 int err; 4620 4565 4621 4566 do { ··· 4635 4576 static int nfs4_lock_expired(struct nfs4_state *state, struct file_lock *request) 4636 4577 { 4637 4578 struct nfs_server *server = NFS_SERVER(state->inode); 4638 - struct nfs4_exception exception = { }; 4579 + struct nfs4_exception exception = { 4580 + .inode = state->inode, 4581 + }; 4639 4582 int err; 4640 4583 4641 4584 err = nfs4_set_lock_state(state, request); ··· 4737 4676 { 4738 4677 struct nfs4_exception exception = { 4739 4678 .state = state, 4679 + .inode = state->inode, 4740 4680 }; 4741 4681 int err; 4742 4682 ··· 4783 4721 4784 4722 if (state == NULL) 4785 4723 return -ENOLCK; 4724 + /* 4725 + * Don't rely on the VFS having checked the file open mode, 4726 + * since it won't do this for flock() locks. 4727 + */ 4728 + switch (request->fl_type & (F_RDLCK|F_WRLCK|F_UNLCK)) { 4729 + case F_RDLCK: 4730 + if (!(filp->f_mode & FMODE_READ)) 4731 + return -EBADF; 4732 + break; 4733 + case F_WRLCK: 4734 + if (!(filp->f_mode & FMODE_WRITE)) 4735 + return -EBADF; 4736 + } 4737 + 4786 4738 do { 4787 4739 status = nfs4_proc_setlk(state, cmd, request); 4788 4740 if ((status != -EAGAIN) || IS_SETLK(cmd)) ··· 4967 4891 fattr->nlink = 2; 4968 4892 } 4969 4893 4970 - int nfs4_proc_fs_locations(struct inode *dir, const struct qstr *name, 4971 - struct nfs4_fs_locations *fs_locations, struct page *page) 4894 + static int _nfs4_proc_fs_locations(struct rpc_clnt *client, struct inode *dir, 4895 + const struct qstr *name, 4896 + struct nfs4_fs_locations *fs_locations, 4897 + struct page *page) 4972 4898 { 4973 4899 struct nfs_server *server = NFS_SERVER(dir); 4974 4900 u32 bitmask[2] = { ··· 5004 4926 nfs_fattr_init(&fs_locations->fattr); 5005 4927 fs_locations->server = server; 5006 4928 fs_locations->nlocations = 0; 5007 - status = nfs4_call_sync(server->client, server, &msg, &args.seq_args, &res.seq_res, 0); 4929 + status = nfs4_call_sync(client, server, &msg, &args.seq_args, &res.seq_res, 0); 5008 4930 dprintk("%s: returned status = %d\n", __func__, status); 5009 4931 return status; 4932 + } 4933 + 4934 + int nfs4_proc_fs_locations(struct rpc_clnt *client, struct inode *dir, 4935 + const struct qstr *name, 4936 + struct nfs4_fs_locations *fs_locations, 4937 + struct page *page) 4938 + { 4939 + struct nfs4_exception exception = { }; 4940 + int err; 4941 + do { 4942 + err = nfs4_handle_exception(NFS_SERVER(dir), 4943 + _nfs4_proc_fs_locations(client, dir, name, fs_locations, page), 4944 + &exception); 4945 + } while (exception.retry); 4946 + return err; 5010 4947 } 5011 4948 5012 4949 static int _nfs4_proc_secinfo(struct inode *dir, const struct qstr *name, struct nfs4_secinfo_flavors *flavors) ··· 5046 4953 return status; 5047 4954 } 5048 4955 5049 - static int nfs4_proc_secinfo(struct inode *dir, const struct qstr *name, 5050 - struct nfs4_secinfo_flavors *flavors) 4956 + int nfs4_proc_secinfo(struct inode *dir, const struct qstr *name, 4957 + struct nfs4_secinfo_flavors *flavors) 5051 4958 { 5052 4959 struct nfs4_exception exception = { }; 5053 4960 int err; ··· 5122 5029 nfs4_construct_boot_verifier(clp, &verifier); 5123 5030 5124 5031 args.id_len = scnprintf(args.id, sizeof(args.id), 5125 - "%s/%s.%s/%u", 5032 + "%s/%s/%u", 5126 5033 clp->cl_ipaddr, 5127 - init_utsname()->nodename, 5128 - init_utsname()->domainname, 5034 + clp->cl_rpcclient->cl_nodename, 5129 5035 clp->cl_rpcclient->cl_auth->au_flavor); 5130 5036 5131 5037 res.server_scope = kzalloc(sizeof(struct server_scope), GFP_KERNEL);
+19 -12
fs/nfs/nfs4state.c
··· 393 393 static void 394 394 nfs4_init_seqid_counter(struct nfs_seqid_counter *sc) 395 395 { 396 + sc->create_time = ktime_get(); 396 397 sc->flags = 0; 397 398 sc->counter = 0; 398 399 spin_lock_init(&sc->lock); ··· 435 434 static void 436 435 nfs4_drop_state_owner(struct nfs4_state_owner *sp) 437 436 { 438 - if (!RB_EMPTY_NODE(&sp->so_server_node)) { 437 + struct rb_node *rb_node = &sp->so_server_node; 438 + 439 + if (!RB_EMPTY_NODE(rb_node)) { 439 440 struct nfs_server *server = sp->so_server; 440 441 struct nfs_client *clp = server->nfs_client; 441 442 442 443 spin_lock(&clp->cl_lock); 443 - rb_erase(&sp->so_server_node, &server->state_owners); 444 - RB_CLEAR_NODE(&sp->so_server_node); 444 + if (!RB_EMPTY_NODE(rb_node)) { 445 + rb_erase(rb_node, &server->state_owners); 446 + RB_CLEAR_NODE(rb_node); 447 + } 445 448 spin_unlock(&clp->cl_lock); 446 449 } 447 450 } ··· 521 516 /** 522 517 * nfs4_put_state_owner - Release a nfs4_state_owner 523 518 * @sp: state owner data to release 519 + * 520 + * Note that we keep released state owners on an LRU 521 + * list. 522 + * This caches valid state owners so that they can be 523 + * reused, to avoid the OPEN_CONFIRM on minor version 0. 524 + * It also pins the uniquifier of dropped state owners for 525 + * a while, to ensure that those state owner names are 526 + * never reused. 524 527 */ 525 528 void nfs4_put_state_owner(struct nfs4_state_owner *sp) 526 529 { ··· 538 525 if (!atomic_dec_and_lock(&sp->so_count, &clp->cl_lock)) 539 526 return; 540 527 541 - if (!RB_EMPTY_NODE(&sp->so_server_node)) { 542 - sp->so_expires = jiffies; 543 - list_add_tail(&sp->so_lru, &server->state_owners_lru); 544 - spin_unlock(&clp->cl_lock); 545 - } else { 546 - nfs4_remove_state_owner_locked(sp); 547 - spin_unlock(&clp->cl_lock); 548 - nfs4_free_state_owner(sp); 549 - } 528 + sp->so_expires = jiffies; 529 + list_add_tail(&sp->so_lru, &server->state_owners_lru); 530 + spin_unlock(&clp->cl_lock); 550 531 } 551 532 552 533 /**
+35 -18
fs/nfs/nfs4xdr.c
··· 74 74 /* lock,open owner id: 75 75 * we currently use size 2 (u64) out of (NFS4_OPAQUE_LIMIT >> 2) 76 76 */ 77 - #define open_owner_id_maxsz (1 + 1 + 4) 77 + #define open_owner_id_maxsz (1 + 2 + 1 + 1 + 2) 78 78 #define lock_owner_id_maxsz (1 + 1 + 4) 79 79 #define decode_lockowner_maxsz (1 + XDR_QUADLEN(IDMAP_NAMESZ)) 80 80 #define compound_encode_hdr_maxsz (3 + (NFS4_MAXTAGLEN >> 2)) ··· 1340 1340 */ 1341 1341 encode_nfs4_seqid(xdr, arg->seqid); 1342 1342 encode_share_access(xdr, arg->fmode); 1343 - p = reserve_space(xdr, 32); 1343 + p = reserve_space(xdr, 36); 1344 1344 p = xdr_encode_hyper(p, arg->clientid); 1345 - *p++ = cpu_to_be32(20); 1345 + *p++ = cpu_to_be32(24); 1346 1346 p = xdr_encode_opaque_fixed(p, "open id:", 8); 1347 1347 *p++ = cpu_to_be32(arg->server->s_dev); 1348 - xdr_encode_hyper(p, arg->id); 1348 + *p++ = cpu_to_be32(arg->id.uniquifier); 1349 + xdr_encode_hyper(p, arg->id.create_time); 1349 1350 } 1350 1351 1351 1352 static inline void encode_createmode(struct xdr_stream *xdr, const struct nfs_openargs *arg) ··· 4258 4257 status = decode_attr_error(xdr, bitmap, &err); 4259 4258 if (status < 0) 4260 4259 goto xdr_error; 4261 - if (err == -NFS4ERR_WRONGSEC) 4262 - nfs_fixup_secinfo_attributes(fattr, fh); 4263 4260 4264 4261 status = decode_attr_filehandle(xdr, bitmap, fh); 4265 4262 if (status < 0) ··· 4900 4901 bitmap[3] = {0}; 4901 4902 struct kvec *iov = req->rq_rcv_buf.head; 4902 4903 int status; 4904 + size_t page_len = xdr->buf->page_len; 4903 4905 4904 4906 res->acl_len = 0; 4905 4907 if ((status = decode_op_hdr(xdr, OP_GETATTR)) != 0) 4906 4908 goto out; 4909 + 4907 4910 bm_p = xdr->p; 4911 + res->acl_data_offset = be32_to_cpup(bm_p) + 2; 4912 + res->acl_data_offset <<= 2; 4913 + /* Check if the acl data starts beyond the allocated buffer */ 4914 + if (res->acl_data_offset > page_len) 4915 + return -ERANGE; 4916 + 4908 4917 if ((status = decode_attr_bitmap(xdr, bitmap)) != 0) 4909 4918 goto out; 4910 4919 if ((status = decode_attr_length(xdr, &attrlen, &savep)) != 0) ··· 4922 4915 return -EIO; 4923 4916 if (likely(bitmap[0] & FATTR4_WORD0_ACL)) { 4924 4917 size_t hdrlen; 4925 - u32 recvd; 4926 4918 4927 4919 /* The bitmap (xdr len + bitmaps) and the attr xdr len words 4928 4920 * are stored with the acl data to handle the problem of 4929 4921 * variable length bitmaps.*/ 4930 4922 xdr->p = bm_p; 4931 - res->acl_data_offset = be32_to_cpup(bm_p) + 2; 4932 - res->acl_data_offset <<= 2; 4933 4923 4934 4924 /* We ignore &savep and don't do consistency checks on 4935 4925 * the attr length. Let userspace figure it out.... */ 4936 4926 hdrlen = (u8 *)xdr->p - (u8 *)iov->iov_base; 4937 4927 attrlen += res->acl_data_offset; 4938 - recvd = req->rq_rcv_buf.len - hdrlen; 4939 - if (attrlen > recvd) { 4928 + if (attrlen > page_len) { 4940 4929 if (res->acl_flags & NFS4_ACL_LEN_REQUEST) { 4941 4930 /* getxattr interface called with a NULL buf */ 4942 4931 res->acl_len = attrlen; 4943 4932 goto out; 4944 4933 } 4945 - dprintk("NFS: acl reply: attrlen %u > recvd %u\n", 4946 - attrlen, recvd); 4934 + dprintk("NFS: acl reply: attrlen %u > page_len %zu\n", 4935 + attrlen, page_len); 4947 4936 return -EINVAL; 4948 4937 } 4949 4938 xdr_read_pages(xdr, attrlen); ··· 5092 5089 return -EINVAL; 5093 5090 } 5094 5091 5095 - static int decode_secinfo(struct xdr_stream *xdr, struct nfs4_secinfo_res *res) 5092 + static int decode_secinfo_common(struct xdr_stream *xdr, struct nfs4_secinfo_res *res) 5096 5093 { 5097 5094 struct nfs4_secinfo_flavor *sec_flavor; 5098 5095 int status; 5099 5096 __be32 *p; 5100 5097 int i, num_flavors; 5101 5098 5102 - status = decode_op_hdr(xdr, OP_SECINFO); 5103 - if (status) 5104 - goto out; 5105 5099 p = xdr_inline_decode(xdr, 4); 5106 5100 if (unlikely(!p)) 5107 5101 goto out_overflow; ··· 5124 5124 res->flavors->num_flavors++; 5125 5125 } 5126 5126 5127 + status = 0; 5127 5128 out: 5128 5129 return status; 5129 5130 out_overflow: ··· 5132 5131 return -EIO; 5133 5132 } 5134 5133 5134 + static int decode_secinfo(struct xdr_stream *xdr, struct nfs4_secinfo_res *res) 5135 + { 5136 + int status = decode_op_hdr(xdr, OP_SECINFO); 5137 + if (status) 5138 + return status; 5139 + return decode_secinfo_common(xdr, res); 5140 + } 5141 + 5135 5142 #if defined(CONFIG_NFS_V4_1) 5143 + static int decode_secinfo_no_name(struct xdr_stream *xdr, struct nfs4_secinfo_res *res) 5144 + { 5145 + int status = decode_op_hdr(xdr, OP_SECINFO_NO_NAME); 5146 + if (status) 5147 + return status; 5148 + return decode_secinfo_common(xdr, res); 5149 + } 5150 + 5136 5151 static int decode_exchange_id(struct xdr_stream *xdr, 5137 5152 struct nfs41_exchange_id_res *res) 5138 5153 { ··· 6833 6816 status = decode_putrootfh(xdr); 6834 6817 if (status) 6835 6818 goto out; 6836 - status = decode_secinfo(xdr, res); 6819 + status = decode_secinfo_no_name(xdr, res); 6837 6820 out: 6838 6821 return status; 6839 6822 }
-2
fs/nfs/objlayout/objlayout.c
··· 604 604 { 605 605 struct objlayout_deviceinfo *odi; 606 606 struct pnfs_device pd; 607 - struct super_block *sb; 608 607 struct page *page, **pages; 609 608 u32 *p; 610 609 int err; ··· 622 623 pd.pglen = PAGE_SIZE; 623 624 pd.mincount = 0; 624 625 625 - sb = pnfslay->plh_inode->i_sb; 626 626 err = nfs4_proc_getdeviceinfo(NFS_SERVER(pnfslay->plh_inode), &pd); 627 627 dprintk("%s nfs_getdeviceinfo returned %d\n", __func__, err); 628 628 if (err)
+1 -1
fs/nfs/pnfs.c
··· 587 587 588 588 /* allocate pages for xdr post processing */ 589 589 max_resp_sz = server->nfs_client->cl_session->fc_attrs.max_resp_sz; 590 - max_pages = max_resp_sz >> PAGE_SHIFT; 590 + max_pages = nfs_page_array_len(0, max_resp_sz); 591 591 592 592 pages = kcalloc(max_pages, sizeof(struct page *), gfp_flags); 593 593 if (!pages)
+1 -1
fs/nfs/read.c
··· 322 322 while (!list_empty(res)) { 323 323 data = list_entry(res->next, struct nfs_read_data, list); 324 324 list_del(&data->list); 325 - nfs_readdata_free(data); 325 + nfs_readdata_release(data); 326 326 } 327 327 nfs_readpage_release(req); 328 328 return -ENOMEM;
+8 -4
fs/nfs/super.c
··· 2428 2428 dprintk("--> nfs_xdev_mount()\n"); 2429 2429 2430 2430 /* create a new volume representation */ 2431 - server = nfs_clone_server(NFS_SB(data->sb), data->fh, data->fattr); 2431 + server = nfs_clone_server(NFS_SB(data->sb), data->fh, data->fattr, data->authflavor); 2432 2432 if (IS_ERR(server)) { 2433 2433 error = PTR_ERR(server); 2434 2434 goto out_err_noserver; ··· 2767 2767 char *root_devname; 2768 2768 size_t len; 2769 2769 2770 - len = strlen(hostname) + 3; 2770 + len = strlen(hostname) + 5; 2771 2771 root_devname = kmalloc(len, GFP_KERNEL); 2772 2772 if (root_devname == NULL) 2773 2773 return ERR_PTR(-ENOMEM); 2774 - snprintf(root_devname, len, "%s:/", hostname); 2774 + /* Does hostname need to be enclosed in brackets? */ 2775 + if (strchr(hostname, ':')) 2776 + snprintf(root_devname, len, "[%s]:/", hostname); 2777 + else 2778 + snprintf(root_devname, len, "%s:/", hostname); 2775 2779 root_mnt = vfs_kern_mount(fs_type, flags, root_devname, data); 2776 2780 kfree(root_devname); 2777 2781 return root_mnt; ··· 2955 2951 dprintk("--> nfs4_xdev_mount()\n"); 2956 2952 2957 2953 /* create a new volume representation */ 2958 - server = nfs_clone_server(NFS_SB(data->sb), data->fh, data->fattr); 2954 + server = nfs_clone_server(NFS_SB(data->sb), data->fh, data->fattr, data->authflavor); 2959 2955 if (IS_ERR(server)) { 2960 2956 error = PTR_ERR(server); 2961 2957 goto out_err_noserver;
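The IPv6 bracketing added to the nfs/super.c hunk above is small enough to check from userspace. This is an illustrative re-creation of the new logic, not the kernel code itself: the helper name and the plain malloc()/snprintf() calls are assumptions for the sketch.

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of the hunk above: a ':' in the hostname indicates a raw IPv6
 * address, which must be bracketed so the resulting "host:/" device name
 * stays parseable. The buffer math mirrors the diff: the old
 * strlen(hostname) + 3 was exactly enough for "%s:/" plus the NUL, and
 * the two bracket characters are why it grows to + 5. */
static char *make_root_devname(const char *hostname)
{
	size_t len = strlen(hostname) + 5;
	char *name = malloc(len);

	if (name == NULL)
		return NULL;
	if (strchr(hostname, ':'))	/* likely IPv6: needs brackets */
		snprintf(name, len, "[%s]:/", hostname);
	else
		snprintf(name, len, "%s:/", hostname);
	return name;
}
```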
+3 -2
fs/nfs/write.c
··· 682 682 req->wb_bytes = rqend - req->wb_offset; 683 683 out_unlock: 684 684 spin_unlock(&inode->i_lock); 685 - nfs_clear_request_commit(req); 685 + if (req) 686 + nfs_clear_request_commit(req); 686 687 return req; 687 688 out_flushme: 688 689 spin_unlock(&inode->i_lock); ··· 1019 1018 while (!list_empty(res)) { 1020 1019 data = list_entry(res->next, struct nfs_write_data, list); 1021 1020 list_del(&data->list); 1022 - nfs_writedata_free(data); 1021 + nfs_writedata_release(data); 1023 1022 } 1024 1023 nfs_redirty_request(req); 1025 1024 return -ENOMEM;
+1 -1
fs/nfsd/nfs4recover.c
··· 577 577 struct cld_net *cn = nn->cld_net; 578 578 579 579 if (mlen != sizeof(*cmsg)) { 580 - dprintk("%s: got %lu bytes, expected %lu\n", __func__, mlen, 580 + dprintk("%s: got %zu bytes, expected %zu\n", __func__, mlen, 581 581 sizeof(*cmsg)); 582 582 return -EINVAL; 583 583 }
+29 -2
fs/pipe.c
··· 346 346 .get = generic_pipe_buf_get, 347 347 }; 348 348 349 + static const struct pipe_buf_operations packet_pipe_buf_ops = { 350 + .can_merge = 0, 351 + .map = generic_pipe_buf_map, 352 + .unmap = generic_pipe_buf_unmap, 353 + .confirm = generic_pipe_buf_confirm, 354 + .release = anon_pipe_buf_release, 355 + .steal = generic_pipe_buf_steal, 356 + .get = generic_pipe_buf_get, 357 + }; 358 + 349 359 static ssize_t 350 360 pipe_read(struct kiocb *iocb, const struct iovec *_iov, 351 361 unsigned long nr_segs, loff_t pos) ··· 417 407 ret += chars; 418 408 buf->offset += chars; 419 409 buf->len -= chars; 410 + 411 + /* Was it a packet buffer? Clean up and exit */ 412 + if (buf->flags & PIPE_BUF_FLAG_PACKET) { 413 + total_len = chars; 414 + buf->len = 0; 415 + } 416 + 420 417 if (!buf->len) { 421 418 buf->ops = NULL; 422 419 ops->release(pipe, buf); ··· 474 457 if (ret > 0) 475 458 file_accessed(filp); 476 459 return ret; 460 + } 461 + 462 + static inline int is_packetized(struct file *file) 463 + { 464 + return (file->f_flags & O_DIRECT) != 0; 477 465 } 478 466 479 467 static ssize_t ··· 615 593 buf->ops = &anon_pipe_buf_ops; 616 594 buf->offset = 0; 617 595 buf->len = chars; 596 + buf->flags = 0; 597 + if (is_packetized(filp)) { 598 + buf->ops = &packet_pipe_buf_ops; 599 + buf->flags = PIPE_BUF_FLAG_PACKET; 600 + } 618 601 pipe->nrbufs = ++bufs; 619 602 pipe->tmp_page = NULL; 620 603 ··· 1040 1013 goto err_dentry; 1041 1014 f->f_mapping = inode->i_mapping; 1042 1015 1043 - f->f_flags = O_WRONLY | (flags & O_NONBLOCK); 1016 + f->f_flags = O_WRONLY | (flags & (O_NONBLOCK | O_DIRECT)); 1044 1017 f->f_version = 0; 1045 1018 1046 1019 return f; ··· 1084 1057 int error; 1085 1058 int fdw, fdr; 1086 1059 1087 - if (flags & ~(O_CLOEXEC | O_NONBLOCK)) 1060 + if (flags & ~(O_CLOEXEC | O_NONBLOCK | O_DIRECT)) 1088 1061 return -EINVAL; 1089 1062 1090 1063 fw = create_write_pipe(flags);
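The fs/pipe.c hunks above wire O_DIRECT up to a new "packet mode": each write() becomes one PIPE_BUF_FLAG_PACKET buffer, and a read() drains at most one packet even when its buffer is larger. A userspace sketch of that behaviour, assuming a kernel new enough to carry this very feature (older kernels fail the pipe2() call instead):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Demonstrates the packet semantics added above: two writes stay two
 * distinct packets, and a large read returns only the first of them.
 * Returns 0 when the observed behaviour matches, -1 otherwise. */
static int packet_pipe_demo(void)
{
	int fds[2];
	char buf[64];

	if (pipe2(fds, O_DIRECT) != 0)
		return -1;	/* kernel predates packetized pipes */

	if (write(fds[1], "abc", 3) != 3 ||
	    write(fds[1], "defgh", 5) != 5)
		return -1;

	/* A 64-byte read drains only the first 3-byte packet... */
	if (read(fds[0], buf, sizeof(buf)) != 3 || memcmp(buf, "abc", 3))
		return -1;
	/* ...and the next read returns the second packet on its own. */
	if (read(fds[0], buf, sizeof(buf)) != 5 || memcmp(buf, "defgh", 5))
		return -1;

	close(fds[0]);
	close(fds[1]);
	return 0;
}
```

Without O_DIRECT the same two writes would be merged (anon_pipe_buf_ops has `.can_merge = 1`) and the first read would return all 8 bytes.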
-3
fs/proc/task_mmu.c
··· 597 597 if (!page) 598 598 continue; 599 599 600 - if (PageReserved(page)) 601 - continue; 602 - 603 600 /* Clear accessed and referenced bits. */ 604 601 ptep_test_and_clear_young(vma, addr, pte); 605 602 ClearPageReferenced(page);
+11 -3
include/asm-generic/siginfo.h
··· 35 35 #define __ARCH_SI_BAND_T long 36 36 #endif 37 37 38 + #ifndef __ARCH_SI_CLOCK_T 39 + #define __ARCH_SI_CLOCK_T __kernel_clock_t 40 + #endif 41 + 42 + #ifndef __ARCH_SI_ATTRIBUTES 43 + #define __ARCH_SI_ATTRIBUTES 44 + #endif 45 + 38 46 #ifndef HAVE_ARCH_SIGINFO_T 39 47 40 48 typedef struct siginfo { ··· 80 72 __kernel_pid_t _pid; /* which child */ 81 73 __ARCH_SI_UID_T _uid; /* sender's uid */ 82 74 int _status; /* exit code */ 83 - __kernel_clock_t _utime; 84 - __kernel_clock_t _stime; 75 + __ARCH_SI_CLOCK_T _utime; 76 + __ARCH_SI_CLOCK_T _stime; 85 77 } _sigchld; 86 78 87 79 /* SIGILL, SIGFPE, SIGSEGV, SIGBUS */ ··· 99 91 int _fd; 100 92 } _sigpoll; 101 93 } _sifields; 102 - } siginfo_t; 94 + } __ARCH_SI_ATTRIBUTES siginfo_t; 103 95 104 96 #endif 105 97
+11
include/linux/efi.h
··· 554 554 #define EFI_VARIABLE_NON_VOLATILE 0x0000000000000001 555 555 #define EFI_VARIABLE_BOOTSERVICE_ACCESS 0x0000000000000002 556 556 #define EFI_VARIABLE_RUNTIME_ACCESS 0x0000000000000004 557 + #define EFI_VARIABLE_HARDWARE_ERROR_RECORD 0x0000000000000008 558 + #define EFI_VARIABLE_AUTHENTICATED_WRITE_ACCESS 0x0000000000000010 559 + #define EFI_VARIABLE_TIME_BASED_AUTHENTICATED_WRITE_ACCESS 0x0000000000000020 560 + #define EFI_VARIABLE_APPEND_WRITE 0x0000000000000040 557 561 562 + #define EFI_VARIABLE_MASK (EFI_VARIABLE_NON_VOLATILE | \ 563 + EFI_VARIABLE_BOOTSERVICE_ACCESS | \ 564 + EFI_VARIABLE_RUNTIME_ACCESS | \ 565 + EFI_VARIABLE_HARDWARE_ERROR_RECORD | \ 566 + EFI_VARIABLE_AUTHENTICATED_WRITE_ACCESS | \ 567 + EFI_VARIABLE_TIME_BASED_AUTHENTICATED_WRITE_ACCESS | \ 568 + EFI_VARIABLE_APPEND_WRITE) 558 569 /* 559 570 * The type of search to perform when calling boottime->locate_handle 560 571 */
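The new attribute bits above combine into EFI_VARIABLE_MASK, which lets callers reject attribute words carrying unknown bits. A self-contained mirror of those definitions; the efi_attrs_known() helper is illustrative, not a kernel API:

```c
#include <stdint.h>

/* Mirrors the include/linux/efi.h additions above. */
#define EFI_VARIABLE_NON_VOLATILE				0x0000000000000001ULL
#define EFI_VARIABLE_BOOTSERVICE_ACCESS				0x0000000000000002ULL
#define EFI_VARIABLE_RUNTIME_ACCESS				0x0000000000000004ULL
#define EFI_VARIABLE_HARDWARE_ERROR_RECORD			0x0000000000000008ULL
#define EFI_VARIABLE_AUTHENTICATED_WRITE_ACCESS			0x0000000000000010ULL
#define EFI_VARIABLE_TIME_BASED_AUTHENTICATED_WRITE_ACCESS	0x0000000000000020ULL
#define EFI_VARIABLE_APPEND_WRITE				0x0000000000000040ULL

#define EFI_VARIABLE_MASK (EFI_VARIABLE_NON_VOLATILE | \
			   EFI_VARIABLE_BOOTSERVICE_ACCESS | \
			   EFI_VARIABLE_RUNTIME_ACCESS | \
			   EFI_VARIABLE_HARDWARE_ERROR_RECORD | \
			   EFI_VARIABLE_AUTHENTICATED_WRITE_ACCESS | \
			   EFI_VARIABLE_TIME_BASED_AUTHENTICATED_WRITE_ACCESS | \
			   EFI_VARIABLE_APPEND_WRITE)

/* Accept only attribute words whose set bits are all known to the mask. */
static int efi_attrs_known(uint64_t attrs)
{
	return (attrs & ~(uint64_t)EFI_VARIABLE_MASK) == 0;
}
```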
+6 -5
include/linux/etherdevice.h
··· 159 159 * @addr1: Pointer to a six-byte array containing the Ethernet address 160 160 * @addr2: Pointer other six-byte array containing the Ethernet address 161 161 * 162 - * Compare two ethernet addresses, returns 0 if equal 162 + * Compare two ethernet addresses, returns 0 if equal, non-zero otherwise. 163 + * Unlike memcmp(), it doesn't return a value suitable for sorting. 163 164 */ 164 165 static inline unsigned compare_ether_addr(const u8 *addr1, const u8 *addr2) 165 166 { ··· 185 184 * @addr1: Pointer to an array of 8 bytes 186 185 * @addr2: Pointer to an other array of 8 bytes 187 186 * 188 - * Compare two ethernet addresses, returns 0 if equal. 189 - * Same result than "memcmp(addr1, addr2, ETH_ALEN)" but without conditional 190 - * branches, and possibly long word memory accesses on CPU allowing cheap 191 - * unaligned memory reads. 187 + * Compare two ethernet addresses, returns 0 if equal, non-zero otherwise. 188 + * Unlike memcmp(), it doesn't return a value suitable for sorting. 189 + * The function doesn't need any conditional branches and possibly uses 190 + * word memory accesses on CPU allowing cheap unaligned memory reads. 192 191 * arrays = { byte1, byte2, byte3, byte4, byte6, byte7, pad1, pad2} 193 192 * 194 193 * Please note that alignment of addr1 & addr2 is only guaranted to be 16 bits.
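The reworded comments above stress that compare_ether_addr() only answers equal/not-equal: a non-zero result carries no ordering, unlike memcmp(). A byte-wise userspace sketch of that contract (the function name here is illustrative, not the kernel's):

```c
/* XOR each pair of bytes and OR the results together: the return value is
 * zero iff the two 6-byte addresses match, and otherwise is just "some
 * non-zero value" -- exactly the caveat the new kernel-doc text adds. */
static unsigned ether_addr_differs(const unsigned char *a,
				   const unsigned char *b)
{
	return (a[0] ^ b[0]) | (a[1] ^ b[1]) | (a[2] ^ b[2]) |
	       (a[3] ^ b[3]) | (a[4] ^ b[4]) | (a[5] ^ b[5]);
}
```

Because there are no conditional branches, this compiles to straight-line code, which is the property the 8-byte unrolled variant in the hunk also trades on.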
+4
include/linux/gpio-pxa.h
··· 13 13 14 14 extern int pxa_irq_to_gpio(int irq); 15 15 16 + struct pxa_gpio_platform_data { 17 + int (*gpio_set_wake)(unsigned int gpio, unsigned int on); 18 + }; 19 + 16 20 #endif /* __GPIO_PXA_H */
+17 -14
include/linux/hsi/hsi.h
··· 26 26 #include <linux/device.h> 27 27 #include <linux/mutex.h> 28 28 #include <linux/scatterlist.h> 29 - #include <linux/spinlock.h> 30 29 #include <linux/list.h> 31 30 #include <linux/module.h> 31 + #include <linux/notifier.h> 32 32 33 33 /* HSI message ttype */ 34 34 #define HSI_MSG_READ 0 ··· 121 121 * @device: Driver model representation of the device 122 122 * @tx_cfg: HSI TX configuration 123 123 * @rx_cfg: HSI RX configuration 124 - * @hsi_start_rx: Called after incoming wake line goes high 125 - * @hsi_stop_rx: Called after incoming wake line goes low 124 + * @e_handler: Callback for handling port events (RX Wake High/Low) 125 + * @pclaimed: Keeps tracks if the clients claimed its associated HSI port 126 + * @nb: Notifier block for port events 126 127 */ 127 128 struct hsi_client { 128 129 struct device device; 129 130 struct hsi_config tx_cfg; 130 131 struct hsi_config rx_cfg; 131 - void (*hsi_start_rx)(struct hsi_client *cl); 132 - void (*hsi_stop_rx)(struct hsi_client *cl); 133 132 /* private: */ 133 + void (*ehandler)(struct hsi_client *, unsigned long); 134 134 unsigned int pclaimed:1; 135 - struct list_head link; 135 + struct notifier_block nb; 136 136 }; 137 137 138 138 #define to_hsi_client(dev) container_of(dev, struct hsi_client, device) ··· 146 146 { 147 147 return dev_get_drvdata(&cl->device); 148 148 } 149 + 150 + int hsi_register_port_event(struct hsi_client *cl, 151 + void (*handler)(struct hsi_client *, unsigned long)); 152 + int hsi_unregister_port_event(struct hsi_client *cl); 149 153 150 154 /** 151 155 * struct hsi_client_driver - Driver associated to an HSI client ··· 218 214 * @start_tx: Callback to inform that a client wants to TX data 219 215 * @stop_tx: Callback to inform that a client no longer wishes to TX data 220 216 * @release: Callback to inform that a client no longer uses the port 221 - * @clients: List of hsi_clients using the port. 222 - * @clock: Lock to serialize access to the clients list. 
217 + * @n_head: Notifier chain for signaling port events to the clients. 223 218 */ 224 219 struct hsi_port { 225 220 struct device device; ··· 234 231 int (*start_tx)(struct hsi_client *cl); 235 232 int (*stop_tx)(struct hsi_client *cl); 236 233 int (*release)(struct hsi_client *cl); 237 - struct list_head clients; 238 - spinlock_t clock; 234 + /* private */ 235 + struct atomic_notifier_head n_head; 239 236 }; 240 237 241 238 #define to_hsi_port(dev) container_of(dev, struct hsi_port, device) 242 239 #define hsi_get_port(cl) to_hsi_port((cl)->device.parent) 243 240 244 - void hsi_event(struct hsi_port *port, unsigned int event); 241 + int hsi_event(struct hsi_port *port, unsigned long event); 245 242 int hsi_claim_port(struct hsi_client *cl, unsigned int share); 246 243 void hsi_release_port(struct hsi_client *cl); 247 244 ··· 273 270 struct module *owner; 274 271 unsigned int id; 275 272 unsigned int num_ports; 276 - struct hsi_port *port; 273 + struct hsi_port **port; 277 274 }; 278 275 279 276 #define to_hsi_controller(dev) container_of(dev, struct hsi_controller, device) 280 277 281 278 struct hsi_controller *hsi_alloc_controller(unsigned int n_ports, gfp_t flags); 282 - void hsi_free_controller(struct hsi_controller *hsi); 279 + void hsi_put_controller(struct hsi_controller *hsi); 283 280 int hsi_register_controller(struct hsi_controller *hsi); 284 281 void hsi_unregister_controller(struct hsi_controller *hsi); 285 282 ··· 297 294 static inline struct hsi_port *hsi_find_port_num(struct hsi_controller *hsi, 298 295 unsigned int num) 299 296 { 300 - return (num < hsi->num_ports) ? &hsi->port[num] : NULL; 297 + return (num < hsi->num_ports) ? hsi->port[num] : NULL; 301 298 } 302 299 303 300 /*
+2 -1
include/linux/libata.h
··· 996 996 extern void ata_sas_port_destroy(struct ata_port *); 997 997 extern struct ata_port *ata_sas_port_alloc(struct ata_host *, 998 998 struct ata_port_info *, struct Scsi_Host *); 999 - extern int ata_sas_async_port_init(struct ata_port *); 999 + extern void ata_sas_async_probe(struct ata_port *ap); 1000 + extern int ata_sas_sync_probe(struct ata_port *ap); 1000 1001 extern int ata_sas_port_init(struct ata_port *); 1001 1002 extern int ata_sas_port_start(struct ata_port *ap); 1002 1003 extern void ata_sas_port_stop(struct ata_port *ap);
+9
include/linux/netfilter_bridge.h
··· 104 104 } daddr; 105 105 }; 106 106 107 + static inline void br_drop_fake_rtable(struct sk_buff *skb) 108 + { 109 + struct dst_entry *dst = skb_dst(skb); 110 + 111 + if (dst && (dst->flags & DST_FAKE_RTABLE)) 112 + skb_dst_drop(skb); 113 + } 114 + 107 115 #else 108 116 #define nf_bridge_maybe_copy_header(skb) (0) 109 117 #define nf_bridge_pad(skb) (0) 118 + #define br_drop_fake_rtable(skb) do { } while (0) 110 119 #endif /* CONFIG_BRIDGE_NETFILTER */ 111 120 112 121 #endif /* __KERNEL__ */
+6 -1
include/linux/nfs_xdr.h
··· 312 312 int rpc_status; 313 313 }; 314 314 315 + struct stateowner_id { 316 + __u64 create_time; 317 + __u32 uniquifier; 318 + }; 319 + 315 320 /* 316 321 * Arguments to the open call. 317 322 */ ··· 326 321 int open_flags; 327 322 fmode_t fmode; 328 323 __u64 clientid; 329 - __u64 id; 324 + struct stateowner_id id; 330 325 union { 331 326 struct { 332 327 struct iattr * attrs; /* UNCHECKED, GUARDED */
+1
include/linux/pipe_fs_i.h
··· 6 6 #define PIPE_BUF_FLAG_LRU 0x01 /* page is on the LRU */ 7 7 #define PIPE_BUF_FLAG_ATOMIC 0x02 /* was atomically mapped */ 8 8 #define PIPE_BUF_FLAG_GIFT 0x04 /* page is a gift */ 9 + #define PIPE_BUF_FLAG_PACKET 0x08 /* read() as a packet */ 9 10 10 11 /** 11 12 * struct pipe_buffer - a linux kernel pipe buffer
+2 -2
include/linux/skbuff.h
··· 1036 1036 } 1037 1037 1038 1038 /** 1039 - * skb_queue_splice - join two skb lists and reinitialise the emptied list 1039 + * skb_queue_splice_init - join two skb lists and reinitialise the emptied list 1040 1040 * @list: the new list to add 1041 1041 * @head: the place to add it in the first list 1042 1042 * ··· 1067 1067 } 1068 1068 1069 1069 /** 1070 - * skb_queue_splice_tail - join two skb lists and reinitialise the emptied list 1070 + * skb_queue_splice_tail_init - join two skb lists and reinitialise the emptied list 1071 1071 * @list: the new list to add 1072 1072 * @head: the place to add it in the first list 1073 1073 *
+1 -1
include/linux/spi/spi.h
··· 254 254 * driver is finished with this message, it must call 255 255 * spi_finalize_current_message() so the subsystem can issue the next 256 256 * transfer 257 - * @prepare_transfer_hardware: there are currently no more messages on the 257 + * @unprepare_transfer_hardware: there are currently no more messages on the 258 258 * queue so the subsystem notifies the driver that it may relax the 259 259 * hardware by issuing this call 260 260 *
+2
include/linux/usb/hcd.h
··· 126 126 unsigned wireless:1; /* Wireless USB HCD */ 127 127 unsigned authorized_default:1; 128 128 unsigned has_tt:1; /* Integrated TT in root hub */ 129 + unsigned broken_pci_sleep:1; /* Don't put the 130 + controller in PCI-D3 for system sleep */ 129 131 130 132 unsigned int irq; /* irq allocated */ 131 133 void __iomem *regs; /* device memory/io */
+3 -2
include/linux/vm_event_item.h
··· 26 26 PGFREE, PGACTIVATE, PGDEACTIVATE, 27 27 PGFAULT, PGMAJFAULT, 28 28 FOR_ALL_ZONES(PGREFILL), 29 - FOR_ALL_ZONES(PGSTEAL), 29 + FOR_ALL_ZONES(PGSTEAL_KSWAPD), 30 + FOR_ALL_ZONES(PGSTEAL_DIRECT), 30 31 FOR_ALL_ZONES(PGSCAN_KSWAPD), 31 32 FOR_ALL_ZONES(PGSCAN_DIRECT), 32 33 #ifdef CONFIG_NUMA 33 34 PGSCAN_ZONE_RECLAIM_FAILED, 34 35 #endif 35 - PGINODESTEAL, SLABS_SCANNED, KSWAPD_STEAL, KSWAPD_INODESTEAL, 36 + PGINODESTEAL, SLABS_SCANNED, KSWAPD_INODESTEAL, 36 37 KSWAPD_LOW_WMARK_HIT_QUICKLY, KSWAPD_HIGH_WMARK_HIT_QUICKLY, 37 38 KSWAPD_SKIP_CONGESTION_WAIT, 38 39 PAGEOUTRUN, ALLOCSTALL, PGROTATED,
+2 -1
include/net/bluetooth/hci_core.h
··· 314 314 315 315 __u8 remote_cap; 316 316 __u8 remote_auth; 317 + bool flush_key; 317 318 318 319 unsigned int sent; 319 320 ··· 981 980 int mgmt_connectable(struct hci_dev *hdev, u8 connectable); 982 981 int mgmt_write_scan_failed(struct hci_dev *hdev, u8 scan, u8 status); 983 982 int mgmt_new_link_key(struct hci_dev *hdev, struct link_key *key, 984 - u8 persistent); 983 + bool persistent); 985 984 int mgmt_device_connected(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 link_type, 986 985 u8 addr_type, u32 flags, u8 *name, u8 name_len, 987 986 u8 *dev_class);
+1
include/net/dst.h
··· 59 59 #define DST_NOCACHE 0x0010 60 60 #define DST_NOCOUNT 0x0020 61 61 #define DST_NOPEER 0x0040 62 + #define DST_FAKE_RTABLE 0x0080 62 63 63 64 short error; 64 65 short obsolete;
+3 -1
include/net/ip_vs.h
··· 392 392 393 393 void (*exit)(struct ip_vs_protocol *pp); 394 394 395 - void (*init_netns)(struct net *net, struct ip_vs_proto_data *pd); 395 + int (*init_netns)(struct net *net, struct ip_vs_proto_data *pd); 396 396 397 397 void (*exit_netns)(struct net *net, struct ip_vs_proto_data *pd); 398 398 ··· 1201 1201 1202 1202 extern int ip_vs_use_count_inc(void); 1203 1203 extern void ip_vs_use_count_dec(void); 1204 + extern int ip_vs_register_nl_ioctl(void); 1205 + extern void ip_vs_unregister_nl_ioctl(void); 1204 1206 extern int ip_vs_control_init(void); 1205 1207 extern void ip_vs_control_cleanup(void); 1206 1208 extern struct ip_vs_dest *
+2 -2
include/net/sock.h
··· 1142 1142 struct proto *prot = sk->sk_prot; 1143 1143 1144 1144 if (mem_cgroup_sockets_enabled && sk->sk_cgrp) 1145 - return percpu_counter_sum_positive(sk->sk_cgrp->sockets_allocated); 1145 + return percpu_counter_read_positive(sk->sk_cgrp->sockets_allocated); 1146 1146 1147 - return percpu_counter_sum_positive(prot->sockets_allocated); 1147 + return percpu_counter_read_positive(prot->sockets_allocated); 1148 1148 } 1149 1149 1150 1150 static inline int
+36 -4
include/scsi/libsas.h
··· 217 217 struct kref kref; 218 218 }; 219 219 220 - struct sas_discovery_event { 220 + struct sas_work { 221 + struct list_head drain_node; 221 222 struct work_struct work; 223 + }; 224 + 225 + static inline void INIT_SAS_WORK(struct sas_work *sw, void (*fn)(struct work_struct *)) 226 + { 227 + INIT_WORK(&sw->work, fn); 228 + INIT_LIST_HEAD(&sw->drain_node); 229 + } 230 + 231 + struct sas_discovery_event { 232 + struct sas_work work; 222 233 struct asd_sas_port *port; 223 234 }; 235 + 236 + static inline struct sas_discovery_event *to_sas_discovery_event(struct work_struct *work) 237 + { 238 + struct sas_discovery_event *ev = container_of(work, typeof(*ev), work.work); 239 + 240 + return ev; 241 + } 224 242 225 243 struct sas_discovery { 226 244 struct sas_discovery_event disc_work[DISC_NUM_EVENTS]; ··· 262 244 struct list_head destroy_list; 263 245 enum sas_linkrate linkrate; 264 246 265 - struct work_struct work; 247 + struct sas_work work; 266 248 267 249 /* public: */ 268 250 int id; ··· 288 270 }; 289 271 290 272 struct asd_sas_event { 291 - struct work_struct work; 273 + struct sas_work work; 292 274 struct asd_sas_phy *phy; 293 275 }; 276 + 277 + static inline struct asd_sas_event *to_asd_sas_event(struct work_struct *work) 278 + { 279 + struct asd_sas_event *ev = container_of(work, typeof(*ev), work.work); 280 + 281 + return ev; 282 + } 294 283 295 284 /* The phy pretty much is controlled by the LLDD. 296 285 * The class only reads those fields. ··· 358 333 }; 359 334 360 335 struct sas_ha_event { 361 - struct work_struct work; 336 + struct sas_work work; 362 337 struct sas_ha_struct *ha; 363 338 }; 339 + 340 + static inline struct sas_ha_event *to_sas_ha_event(struct work_struct *work) 341 + { 342 + struct sas_ha_event *ev = container_of(work, typeof(*ev), work.work); 343 + 344 + return ev; 345 + } 364 346 365 347 enum sas_ha_state { 366 348 SAS_HA_REGISTERED,
+2 -2
include/scsi/sas_ata.h
··· 37 37 } 38 38 39 39 int sas_get_ata_info(struct domain_device *dev, struct ex_phy *phy); 40 - int sas_ata_init_host_and_port(struct domain_device *found_dev); 40 + int sas_ata_init(struct domain_device *dev); 41 41 void sas_ata_task_abort(struct sas_task *task); 42 42 void sas_ata_strategy_handler(struct Scsi_Host *shost); 43 43 void sas_ata_eh(struct Scsi_Host *shost, struct list_head *work_q, ··· 52 52 { 53 53 return 0; 54 54 } 55 - static inline int sas_ata_init_host_and_port(struct domain_device *found_dev) 55 + static inline int sas_ata_init(struct domain_device *dev) 56 56 { 57 57 return 0; 58 58 }
+13 -12
init/main.c
··· 225 225 226 226 early_param("loglevel", loglevel); 227 227 228 - /* 229 - * Unknown boot options get handed to init, unless they look like 230 - * unused parameters (modprobe will find them in /proc/cmdline). 231 - */ 232 - static int __init unknown_bootoption(char *param, char *val) 228 + /* Change NUL term back to "=", to make "param" the whole string. */ 229 + static int __init repair_env_string(char *param, char *val) 233 230 { 234 - /* Change NUL term back to "=", to make "param" the whole string. */ 235 231 if (val) { 236 232 /* param=val or param="val"? */ 237 233 if (val == param+strlen(param)+1) ··· 239 243 } else 240 244 BUG(); 241 245 } 246 + return 0; 247 + } 248 + 249 + /* 250 + * Unknown boot options get handed to init, unless they look like 251 + * unused parameters (modprobe will find them in /proc/cmdline). 252 + */ 253 + static int __init unknown_bootoption(char *param, char *val) 254 + { 255 + repair_env_string(param, val); 242 256 243 257 /* Handle obsolete-style parameters */ 244 258 if (obsolete_checksetup(param)) ··· 738 732 "late parameters", 739 733 }; 740 734 741 - static int __init ignore_unknown_bootoption(char *param, char *val) 742 - { 743 - return 0; 744 - } 745 - 746 735 static void __init do_initcall_level(int level) 747 736 { 748 737 extern const struct kernel_param __start___param[], __stop___param[]; ··· 748 747 static_command_line, __start___param, 749 748 __stop___param - __start___param, 750 749 level, level, 751 - ignore_unknown_bootoption); 750 + repair_env_string); 752 751 753 752 for (fn = initcall_levels[level]; fn < initcall_levels[level+1]; fn++) 754 753 do_one_initcall(*fn);
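The init/main.c refactor above splits the "=" restoration into repair_env_string() so it can serve both unknown_bootoption() and the per-level initcall parsing. The in-place restoration it performs can be sketched in userspace; the quoted-value branch from the diff is omitted here:

```c
#include <string.h>

/* Sketch of the unquoted case in repair_env_string() above: the option
 * parser splits "param=val" by overwriting '=' with a NUL and pointing
 * val just past it, so writing '=' back over val[-1] restores the
 * original "param=val" string in place. */
static void repair_env_string_sketch(char *param, char *val)
{
	if (val && val == param + strlen(param) + 1)
		val[-1] = '=';
}
```

This is why the kernel can hand the *whole* original option string to init after parsing has already chopped it up.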
+1 -1
kernel/events/core.c
··· 3183 3183 perf_event_for_each_child(event, func); 3184 3184 func(event); 3185 3185 list_for_each_entry(sibling, &event->sibling_list, group_entry) 3186 - perf_event_for_each_child(event, func); 3186 + perf_event_for_each_child(sibling, func); 3187 3187 mutex_unlock(&ctx->mutex); 3188 3188 } 3189 3189
+19 -19
kernel/irq/debug.h
··· 4 4 5 5 #include <linux/kallsyms.h> 6 6 7 - #define P(f) if (desc->status_use_accessors & f) printk("%14s set\n", #f) 8 - #define PS(f) if (desc->istate & f) printk("%14s set\n", #f) 7 + #define ___P(f) if (desc->status_use_accessors & f) printk("%14s set\n", #f) 8 + #define ___PS(f) if (desc->istate & f) printk("%14s set\n", #f) 9 9 /* FIXME */ 10 - #define PD(f) do { } while (0) 10 + #define ___PD(f) do { } while (0) 11 11 12 12 static inline void print_irq_desc(unsigned int irq, struct irq_desc *desc) 13 13 { ··· 23 23 print_symbol("%s\n", (unsigned long)desc->action->handler); 24 24 } 25 25 26 - P(IRQ_LEVEL); 27 - P(IRQ_PER_CPU); 28 - P(IRQ_NOPROBE); 29 - P(IRQ_NOREQUEST); 30 - P(IRQ_NOTHREAD); 31 - P(IRQ_NOAUTOEN); 26 + ___P(IRQ_LEVEL); 27 + ___P(IRQ_PER_CPU); 28 + ___P(IRQ_NOPROBE); 29 + ___P(IRQ_NOREQUEST); 30 + ___P(IRQ_NOTHREAD); 31 + ___P(IRQ_NOAUTOEN); 32 32 33 - PS(IRQS_AUTODETECT); 34 - PS(IRQS_REPLAY); 35 - PS(IRQS_WAITING); 36 - PS(IRQS_PENDING); 33 + ___PS(IRQS_AUTODETECT); 34 + ___PS(IRQS_REPLAY); 35 + ___PS(IRQS_WAITING); 36 + ___PS(IRQS_PENDING); 37 37 38 - PD(IRQS_INPROGRESS); 39 - PD(IRQS_DISABLED); 40 - PD(IRQS_MASKED); 38 + ___PD(IRQS_INPROGRESS); 39 + ___PD(IRQS_DISABLED); 40 + ___PD(IRQS_MASKED); 41 41 } 42 42 43 - #undef P 44 - #undef PS 45 - #undef PD 43 + #undef ___P 44 + #undef ___PS 45 + #undef ___PD
+22 -6
kernel/power/swap.c
··· 51 51 52 52 #define MAP_PAGE_ENTRIES (PAGE_SIZE / sizeof(sector_t) - 1) 53 53 54 + /* 55 + * Number of free pages that are not high. 56 + */ 57 + static inline unsigned long low_free_pages(void) 58 + { 59 + return nr_free_pages() - nr_free_highpages(); 60 + } 61 + 62 + /* 63 + * Number of pages required to be kept free while writing the image. Always 64 + * half of all available low pages before the writing starts. 65 + */ 66 + static inline unsigned long reqd_free_pages(void) 67 + { 68 + return low_free_pages() / 2; 69 + } 70 + 54 71 struct swap_map_page { 55 72 sector_t entries[MAP_PAGE_ENTRIES]; 56 73 sector_t next_swap; ··· 89 72 sector_t cur_swap; 90 73 sector_t first_sector; 91 74 unsigned int k; 92 - unsigned long nr_free_pages, written; 75 + unsigned long reqd_free_pages; 93 76 u32 crc32; 94 77 }; 95 78 ··· 333 316 goto err_rel; 334 317 } 335 318 handle->k = 0; 336 - handle->nr_free_pages = nr_free_pages() >> 1; 337 - handle->written = 0; 319 + handle->reqd_free_pages = reqd_free_pages(); 338 320 handle->first_sector = handle->cur_swap; 339 321 return 0; 340 322 err_rel: ··· 368 352 handle->cur_swap = offset; 369 353 handle->k = 0; 370 354 } 371 - if (bio_chain && ++handle->written > handle->nr_free_pages) { 355 + if (bio_chain && low_free_pages() <= handle->reqd_free_pages) { 372 356 error = hib_wait_on_bio_chain(bio_chain); 373 357 if (error) 374 358 goto out; 375 - handle->written = 0; 359 + handle->reqd_free_pages = reqd_free_pages(); 376 360 } 377 361 out: 378 362 return error; ··· 634 618 * Adjust number of free pages after all allocations have been done. 635 619 * We don't want to run out of pages when writing. 636 620 */ 637 - handle->nr_free_pages = nr_free_pages() >> 1; 621 + handle->reqd_free_pages = reqd_free_pages(); 638 622 639 623 /* 640 624 * Start the CRC32 thread.
-1
kernel/rcutree.c
··· 1820 1820 * a quiescent state betweentimes. 1821 1821 */ 1822 1822 local_irq_save(flags); 1823 - WARN_ON_ONCE(cpu_is_offline(smp_processor_id())); 1824 1823 rdp = this_cpu_ptr(rsp->rda); 1825 1824 1826 1825 /* Add the callback to our list. */
+16 -6
kernel/sched/core.c
··· 6405 6405 struct sd_data *sdd = &tl->data; 6406 6406 6407 6407 for_each_cpu(j, cpu_map) { 6408 - struct sched_domain *sd = *per_cpu_ptr(sdd->sd, j); 6409 - if (sd && (sd->flags & SD_OVERLAP)) 6410 - free_sched_groups(sd->groups, 0); 6411 - kfree(*per_cpu_ptr(sdd->sd, j)); 6412 - kfree(*per_cpu_ptr(sdd->sg, j)); 6413 - kfree(*per_cpu_ptr(sdd->sgp, j)); 6408 + struct sched_domain *sd; 6409 + 6410 + if (sdd->sd) { 6411 + sd = *per_cpu_ptr(sdd->sd, j); 6412 + if (sd && (sd->flags & SD_OVERLAP)) 6413 + free_sched_groups(sd->groups, 0); 6414 + kfree(*per_cpu_ptr(sdd->sd, j)); 6415 + } 6416 + 6417 + if (sdd->sg) 6418 + kfree(*per_cpu_ptr(sdd->sg, j)); 6419 + if (sdd->sgp) 6420 + kfree(*per_cpu_ptr(sdd->sgp, j)); 6414 6421 } 6415 6422 free_percpu(sdd->sd); 6423 + sdd->sd = NULL; 6416 6424 free_percpu(sdd->sg); 6425 + sdd->sg = NULL; 6417 6426 free_percpu(sdd->sgp); 6427 + sdd->sgp = NULL; 6418 6428 } 6419 6429 } 6420 6430
+10 -8
kernel/sched/fair.c
··· 784 784 update_load_add(&rq_of(cfs_rq)->load, se->load.weight); 785 785 #ifdef CONFIG_SMP 786 786 if (entity_is_task(se)) 787 - list_add_tail(&se->group_node, &rq_of(cfs_rq)->cfs_tasks); 787 + list_add(&se->group_node, &rq_of(cfs_rq)->cfs_tasks); 788 788 #endif 789 789 cfs_rq->nr_running++; 790 790 } ··· 3215 3215 3216 3216 static unsigned long task_h_load(struct task_struct *p); 3217 3217 3218 + static const unsigned int sched_nr_migrate_break = 32; 3219 + 3218 3220 /* 3219 3221 * move_tasks tries to move up to load_move weighted load from busiest to 3220 3222 * this_rq, as part of a balancing operation within domain "sd". ··· 3244 3242 3245 3243 /* take a breather every nr_migrate tasks */ 3246 3244 if (env->loop > env->loop_break) { 3247 - env->loop_break += sysctl_sched_nr_migrate; 3245 + env->loop_break += sched_nr_migrate_break; 3248 3246 env->flags |= LBF_NEED_BREAK; 3249 3247 break; 3250 3248 } ··· 3254 3252 3255 3253 load = task_h_load(p); 3256 3254 3257 - if (load < 16 && !env->sd->nr_balance_failed) 3255 + if (sched_feat(LB_MIN) && load < 16 && !env->sd->nr_balance_failed) 3258 3256 goto next; 3259 3257 3260 3258 if ((load / 2) > env->load_move) ··· 4409 4407 .dst_cpu = this_cpu, 4410 4408 .dst_rq = this_rq, 4411 4409 .idle = idle, 4412 - .loop_break = sysctl_sched_nr_migrate, 4410 + .loop_break = sched_nr_migrate_break, 4413 4411 }; 4414 4412 4415 4413 cpumask_copy(cpus, cpu_active_mask); ··· 4447 4445 * correctly treated as an imbalance. 4448 4446 */ 4449 4447 env.flags |= LBF_ALL_PINNED; 4450 - env.load_move = imbalance; 4451 - env.src_cpu = busiest->cpu; 4452 - env.src_rq = busiest; 4453 - env.loop_max = busiest->nr_running; 4448 + env.load_move = imbalance; 4449 + env.src_cpu = busiest->cpu; 4450 + env.src_rq = busiest; 4451 + env.loop_max = min_t(unsigned long, sysctl_sched_nr_migrate, busiest->nr_running); 4454 4452 4455 4453 more_balance: 4456 4454 local_irq_save(flags);
+1
kernel/sched/features.h
··· 68 68 69 69 SCHED_FEAT(FORCE_SD_OVERLAP, false) 70 70 SCHED_FEAT(RT_RUNTIME_SHARE, true) 71 + SCHED_FEAT(LB_MIN, false)
+6 -7
kernel/time/tick-broadcast.c
··· 346 346 tick_get_broadcast_mask()); 347 347 break; 348 348 case TICKDEV_MODE_ONESHOT: 349 - broadcast = tick_resume_broadcast_oneshot(bc); 349 + if (!cpumask_empty(tick_get_broadcast_mask())) 350 + broadcast = tick_resume_broadcast_oneshot(bc); 350 351 break; 351 352 } 352 353 } ··· 373 372 static int tick_broadcast_set_event(ktime_t expires, int force) 374 373 { 375 374 struct clock_event_device *bc = tick_broadcast_device.evtdev; 375 + 376 + if (bc->mode != CLOCK_EVT_MODE_ONESHOT) 377 + clockevents_set_mode(bc, CLOCK_EVT_MODE_ONESHOT); 376 378 377 379 return clockevents_program_event(bc, expires, force); 378 380 } ··· 535 531 int was_periodic = bc->mode == CLOCK_EVT_MODE_PERIODIC; 536 532 537 533 bc->event_handler = tick_handle_oneshot_broadcast; 538 - clockevents_set_mode(bc, CLOCK_EVT_MODE_ONESHOT); 539 534 540 535 /* Take the do_timer update */ 541 536 tick_do_timer_cpu = cpu; ··· 552 549 to_cpumask(tmpmask)); 553 550 554 551 if (was_periodic && !cpumask_empty(to_cpumask(tmpmask))) { 552 + clockevents_set_mode(bc, CLOCK_EVT_MODE_ONESHOT); 555 553 tick_broadcast_init_next_event(to_cpumask(tmpmask), 556 554 tick_next_period); 557 555 tick_broadcast_set_event(tick_next_period, 1); ··· 581 577 raw_spin_lock_irqsave(&tick_broadcast_lock, flags); 582 578 583 579 tick_broadcast_device.mode = TICKDEV_MODE_ONESHOT; 584 - 585 - if (cpumask_empty(tick_get_broadcast_mask())) 586 - goto end; 587 - 588 580 bc = tick_broadcast_device.evtdev; 589 581 if (bc) 590 582 tick_broadcast_setup_oneshot(bc); 591 583 592 - end: 593 584 raw_spin_unlock_irqrestore(&tick_broadcast_lock, flags); 594 585 } 595 586
+5 -3
kernel/trace/trace.c
··· 4629 4629 rb_simple_read(struct file *filp, char __user *ubuf, 4630 4630 size_t cnt, loff_t *ppos) 4631 4631 { 4632 - struct ring_buffer *buffer = filp->private_data; 4632 + struct trace_array *tr = filp->private_data; 4633 + struct ring_buffer *buffer = tr->buffer; 4633 4634 char buf[64]; 4634 4635 int r; 4635 4636 ··· 4648 4647 rb_simple_write(struct file *filp, const char __user *ubuf, 4649 4648 size_t cnt, loff_t *ppos) 4650 4649 { 4651 - struct ring_buffer *buffer = filp->private_data; 4650 + struct trace_array *tr = filp->private_data; 4651 + struct ring_buffer *buffer = tr->buffer; 4652 4652 unsigned long val; 4653 4653 int ret; 4654 4654 ··· 4736 4734 &trace_clock_fops); 4737 4735 4738 4736 trace_create_file("tracing_on", 0644, d_tracer, 4739 - global_trace.buffer, &rb_simple_fops); 4737 + &global_trace, &rb_simple_fops); 4740 4738 4741 4739 #ifdef CONFIG_DYNAMIC_FTRACE 4742 4740 trace_create_file("dyn_ftrace_total_info", 0444, d_tracer,
+2 -2
kernel/trace/trace.h
··· 836 836 filter) 837 837 #include "trace_entries.h" 838 838 839 - #ifdef CONFIG_FUNCTION_TRACER 839 + #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_FUNCTION_TRACER) 840 840 int perf_ftrace_event_register(struct ftrace_event_call *call, 841 841 enum trace_reg type, void *data); 842 842 #else 843 843 #define perf_ftrace_event_register NULL 844 - #endif /* CONFIG_FUNCTION_TRACER */ 844 + #endif 845 845 846 846 #endif /* _LINUX_KERNEL_TRACE_H */
+5
kernel/trace/trace_output.c
··· 652 652 { 653 653 u64 next_ts; 654 654 int ret; 655 + /* trace_find_next_entry will reset ent_size */ 656 + int ent_size = iter->ent_size; 655 657 struct trace_seq *s = &iter->seq; 656 658 struct trace_entry *entry = iter->ent, 657 659 *next_entry = trace_find_next_entry(iter, NULL, ··· 661 659 unsigned long verbose = (trace_flags & TRACE_ITER_VERBOSE); 662 660 unsigned long abs_usecs = ns2usecs(iter->ts - iter->tr->time_start); 663 661 unsigned long rel_usecs; 662 + 663 + /* Restore the original ent_size */ 664 + iter->ent_size = ent_size; 664 665 665 666 if (!next_entry) 666 667 next_ts = iter->ts;
+1 -1
mm/hugetlb.c
··· 532 532 struct vm_area_struct *vma, 533 533 unsigned long address, int avoid_reserve) 534 534 { 535 - struct page *page; 535 + struct page *page = NULL; 536 536 struct mempolicy *mpol; 537 537 nodemask_t *nodemask; 538 538 struct zonelist *zonelist;
+5 -12
mm/memcontrol.c
···
2476 2476 static void __mem_cgroup_commit_charge(struct mem_cgroup *memcg,
2477 2477 struct page *page,
2478 2478 unsigned int nr_pages,
2479 - struct page_cgroup *pc,
2480 2479 enum charge_type ctype,
2481 2480 bool lrucare)
2482 2481 {
2482 + struct page_cgroup *pc = lookup_page_cgroup(page);
2483 2483 struct zone *uninitialized_var(zone);
2484 2484 bool was_on_lru = false;
2485 2485 bool anon;
···
2716 2716 {
2717 2717 struct mem_cgroup *memcg = NULL;
2718 2718 unsigned int nr_pages = 1;
2719 - struct page_cgroup *pc;
2720 2719 bool oom = true;
2721 2720 int ret;
2722 2721
···
2729 2730 oom = false;
2730 2731 }
2731 2732
2732 - pc = lookup_page_cgroup(page);
2733 2733 ret = __mem_cgroup_try_charge(mm, gfp_mask, nr_pages, &memcg, oom);
2734 2734 if (ret == -ENOMEM)
2735 2735 return ret;
2736 - __mem_cgroup_commit_charge(memcg, page, nr_pages, pc, ctype, false);
2736 + __mem_cgroup_commit_charge(memcg, page, nr_pages, ctype, false);
2737 2737 return 0;
2738 2738 }
2739 2739
···
2829 2831 __mem_cgroup_commit_charge_swapin(struct page *page, struct mem_cgroup *memcg,
2830 2832 enum charge_type ctype)
2831 2833 {
2832 - struct page_cgroup *pc;
2833 -
2834 2834 if (mem_cgroup_disabled())
2835 2835 return;
2836 2836 if (!memcg)
2837 2837 return;
2838 2838 cgroup_exclude_rmdir(&memcg->css);
2839 2839
2840 - pc = lookup_page_cgroup(page);
2841 - __mem_cgroup_commit_charge(memcg, page, 1, pc, ctype, true);
2840 + __mem_cgroup_commit_charge(memcg, page, 1, ctype, true);
2842 2841 /*
2843 2842 * Now swap is on-memory. This means this page may be
2844 2843 * counted both as mem and swap....double count.
···
3293 3298 * page. In the case new page is migrated but not remapped, new page's
3294 3299 * mapcount will be finally 0 and we call uncharge in end_migration().
3295 3300 */
3296 - pc = lookup_page_cgroup(newpage);
3297 3301 if (PageAnon(page))
3298 3302 ctype = MEM_CGROUP_CHARGE_TYPE_MAPPED;
3299 3303 else if (page_is_file_cache(page))
3300 3304 ctype = MEM_CGROUP_CHARGE_TYPE_CACHE;
3301 3305 else
3302 3306 ctype = MEM_CGROUP_CHARGE_TYPE_SHMEM;
3303 - __mem_cgroup_commit_charge(memcg, newpage, 1, pc, ctype, false);
3307 + __mem_cgroup_commit_charge(memcg, newpage, 1, ctype, false);
3304 3308 return ret;
3305 3309 }
3306 3310
···
3386 3392 * the newpage may be on LRU(or pagevec for LRU) already. We lock
3387 3393 * LRU while we overwrite pc->mem_cgroup.
3388 3394 */
3389 - pc = lookup_page_cgroup(newpage);
3390 - __mem_cgroup_commit_charge(memcg, newpage, 1, pc, type, true);
3395 + __mem_cgroup_commit_charge(memcg, newpage, 1, type, true);
3391 3396 }
3392 3397
3393 3398 #ifdef CONFIG_DEBUG_VM
+7 -4
mm/mempolicy.c
··· 1361 1361 1362 1362 mm = get_task_mm(task); 1363 1363 put_task_struct(task); 1364 - if (mm) 1365 - err = do_migrate_pages(mm, old, new, 1366 - capable(CAP_SYS_NICE) ? MPOL_MF_MOVE_ALL : MPOL_MF_MOVE); 1367 - else 1364 + 1365 + if (!mm) { 1368 1366 err = -EINVAL; 1367 + goto out; 1368 + } 1369 + 1370 + err = do_migrate_pages(mm, old, new, 1371 + capable(CAP_SYS_NICE) ? MPOL_MF_MOVE_ALL : MPOL_MF_MOVE); 1369 1372 1370 1373 mmput(mm); 1371 1374 out:
+8 -8
mm/migrate.c
··· 1388 1388 mm = get_task_mm(task); 1389 1389 put_task_struct(task); 1390 1390 1391 - if (mm) { 1392 - if (nodes) 1393 - err = do_pages_move(mm, task_nodes, nr_pages, pages, 1394 - nodes, status, flags); 1395 - else 1396 - err = do_pages_stat(mm, nr_pages, pages, status); 1397 - } else 1398 - err = -EINVAL; 1391 + if (!mm) 1392 + return -EINVAL; 1393 + 1394 + if (nodes) 1395 + err = do_pages_move(mm, task_nodes, nr_pages, pages, 1396 + nodes, status, flags); 1397 + else 1398 + err = do_pages_stat(mm, nr_pages, pages, status); 1399 1399 1400 1400 mmput(mm); 1401 1401 return err;
+8 -2
mm/nobootmem.c
··· 298 298 if (WARN_ON_ONCE(slab_is_available())) 299 299 return kzalloc_node(size, GFP_NOWAIT, pgdat->node_id); 300 300 301 + again: 301 302 ptr = __alloc_memory_core_early(pgdat->node_id, size, align, 302 303 goal, -1ULL); 303 304 if (ptr) 304 305 return ptr; 305 306 306 - return __alloc_memory_core_early(MAX_NUMNODES, size, align, 307 - goal, -1ULL); 307 + ptr = __alloc_memory_core_early(MAX_NUMNODES, size, align, 308 + goal, -1ULL); 309 + if (!ptr && goal) { 310 + goal = 0; 311 + goto again; 312 + } 313 + return ptr; 308 314 } 309 315 310 316 void * __init __alloc_bootmem_node_high(pg_data_t *pgdat, unsigned long size,
+8 -3
mm/vmscan.c
··· 1568 1568 reclaim_stat->recent_scanned[0] += nr_anon; 1569 1569 reclaim_stat->recent_scanned[1] += nr_file; 1570 1570 1571 - if (current_is_kswapd()) 1572 - __count_vm_events(KSWAPD_STEAL, nr_reclaimed); 1573 - __count_zone_vm_events(PGSTEAL, zone, nr_reclaimed); 1571 + if (global_reclaim(sc)) { 1572 + if (current_is_kswapd()) 1573 + __count_zone_vm_events(PGSTEAL_KSWAPD, zone, 1574 + nr_reclaimed); 1575 + else 1576 + __count_zone_vm_events(PGSTEAL_DIRECT, zone, 1577 + nr_reclaimed); 1578 + } 1574 1579 1575 1580 putback_inactive_pages(mz, &page_list); 1576 1581
+2 -2
mm/vmstat.c
··· 738 738 "pgmajfault", 739 739 740 740 TEXTS_FOR_ZONES("pgrefill") 741 - TEXTS_FOR_ZONES("pgsteal") 741 + TEXTS_FOR_ZONES("pgsteal_kswapd") 742 + TEXTS_FOR_ZONES("pgsteal_direct") 742 743 TEXTS_FOR_ZONES("pgscan_kswapd") 743 744 TEXTS_FOR_ZONES("pgscan_direct") 744 745 ··· 748 747 #endif 749 748 "pginodesteal", 750 749 "slabs_scanned", 751 - "kswapd_steal", 752 750 "kswapd_inodesteal", 753 751 "kswapd_low_wmark_hit_quickly", 754 752 "kswapd_high_wmark_hit_quickly",
+13 -14
net/bluetooth/hci_core.c
···
1215 1215 return NULL;
1216 1216 }
1217 1217
1218 - static int hci_persistent_key(struct hci_dev *hdev, struct hci_conn *conn,
1218 + static bool hci_persistent_key(struct hci_dev *hdev, struct hci_conn *conn,
1219 1219 u8 key_type, u8 old_key_type)
1220 1220 {
1221 1221 /* Legacy key */
1222 1222 if (key_type < 0x03)
1223 - return 1;
1223 + return true;
1224 1224
1225 1225 /* Debug keys are insecure so don't store them persistently */
1226 1226 if (key_type == HCI_LK_DEBUG_COMBINATION)
1227 - return 0;
1227 + return false;
1228 1228
1229 1229 /* Changed combination key and there's no previous one */
1230 1230 if (key_type == HCI_LK_CHANGED_COMBINATION && old_key_type == 0xff)
1231 - return 0;
1231 + return false;
1232 1232
1233 1233 /* Security mode 3 case */
1234 1234 if (!conn)
1235 - return 1;
1235 + return true;
1236 1236
1237 1237 /* Neither local nor remote side had no-bonding as requirement */
1238 1238 if (conn->auth_type > 0x01 && conn->remote_auth > 0x01)
1239 - return 1;
1239 + return true;
1240 1240
1241 1241 /* Local side had dedicated bonding as requirement */
1242 1242 if (conn->auth_type == 0x02 || conn->auth_type == 0x03)
1243 - return 1;
1243 + return true;
1244 1244
1245 1245 /* Remote side had dedicated bonding as requirement */
1246 1246 if (conn->remote_auth == 0x02 || conn->remote_auth == 0x03)
1247 - return 1;
1247 + return true;
1248 1248
1249 1249 /* If none of the above criteria match, then don't store the key
1250 1250 * persistently */
1251 - return 0;
1251 + return false;
1252 1252 }
1253 1253
1254 1254 struct smp_ltk *hci_find_ltk(struct hci_dev *hdev, __le16 ediv, u8 rand[8])
···
1285 1285 bdaddr_t *bdaddr, u8 *val, u8 type, u8 pin_len)
1286 1286 {
1287 1287 struct link_key *key, *old_key;
1288 - u8 old_key_type, persistent;
1288 + u8 old_key_type;
1289 + bool persistent;
1289 1290
1290 1291 old_key = hci_find_link_key(hdev, bdaddr);
1291 1292 if (old_key) {
···
1329 1328
1330 1329 mgmt_new_link_key(hdev, key, persistent);
1331 1330
1332 - if (!persistent) {
1333 - list_del(&key->list);
1334 - kfree(key);
1335 - }
1331 + if (conn)
1332 + conn->flush_key = !persistent;
1336 1333
1337 1334 return 0;
1338 1335 }
+3
net/bluetooth/hci_event.c
··· 1901 1901 } 1902 1902 1903 1903 if (ev->status == 0) { 1904 + if (conn->type == ACL_LINK && conn->flush_key) 1905 + hci_remove_link_key(hdev, &conn->dst); 1904 1906 hci_proto_disconn_cfm(conn, ev->reason); 1905 1907 hci_conn_del(conn); 1906 1908 } ··· 2313 2311 2314 2312 case HCI_OP_USER_PASSKEY_NEG_REPLY: 2315 2313 hci_cc_user_passkey_neg_reply(hdev, skb); 2314 + break; 2316 2315 2317 2316 case HCI_OP_LE_SET_SCAN_PARAM: 2318 2317 hci_cc_le_set_scan_param(hdev, skb);
+1 -1
net/bluetooth/mgmt.c
··· 2884 2884 return 0; 2885 2885 } 2886 2886 2887 - int mgmt_new_link_key(struct hci_dev *hdev, struct link_key *key, u8 persistent) 2887 + int mgmt_new_link_key(struct hci_dev *hdev, struct link_key *key, bool persistent) 2888 2888 { 2889 2889 struct mgmt_ev_new_link_key ev; 2890 2890
+1
net/bridge/br_forward.c
··· 47 47 kfree_skb(skb); 48 48 } else { 49 49 skb_push(skb, ETH_HLEN); 50 + br_drop_fake_rtable(skb); 50 51 dev_queue_xmit(skb); 51 52 } 52 53
+2 -6
net/bridge/br_netfilter.c
··· 156 156 rt->dst.dev = br->dev; 157 157 rt->dst.path = &rt->dst; 158 158 dst_init_metrics(&rt->dst, br_dst_default_metrics, true); 159 - rt->dst.flags = DST_NOXFRM | DST_NOPEER; 159 + rt->dst.flags = DST_NOXFRM | DST_NOPEER | DST_FAKE_RTABLE; 160 160 rt->dst.ops = &fake_dst_ops; 161 161 } 162 162 ··· 694 694 const struct net_device *out, 695 695 int (*okfn)(struct sk_buff *)) 696 696 { 697 - struct rtable *rt = skb_rtable(skb); 698 - 699 - if (rt && rt == bridge_parent_rtable(in)) 700 - skb_dst_drop(skb); 701 - 697 + br_drop_fake_rtable(skb); 702 698 return NF_ACCEPT; 703 699 } 704 700
+64 -24
net/core/drop_monitor.c
···
42 42 * netlink alerts
43 43 */
44 44 static int trace_state = TRACE_OFF;
45 - static DEFINE_SPINLOCK(trace_state_lock);
45 + static DEFINE_MUTEX(trace_state_mutex);
46 46
47 47 struct per_cpu_dm_data {
48 48 struct work_struct dm_alert_work;
49 - struct sk_buff *skb;
49 + struct sk_buff __rcu *skb;
50 50 atomic_t dm_hit_count;
51 51 struct timer_list send_timer;
52 + int cpu;
52 53 };
53 54
54 55 struct dm_hw_stat_delta {
···
80 79 size_t al;
81 80 struct net_dm_alert_msg *msg;
82 81 struct nlattr *nla;
82 + struct sk_buff *skb;
83 + struct sk_buff *oskb = rcu_dereference_protected(data->skb, 1);
83 84
84 85 al = sizeof(struct net_dm_alert_msg);
85 86 al += dm_hit_limit * sizeof(struct net_dm_drop_point);
86 87 al += sizeof(struct nlattr);
87 88
88 - data->skb = genlmsg_new(al, GFP_KERNEL);
89 - genlmsg_put(data->skb, 0, 0, &net_drop_monitor_family,
90 - 0, NET_DM_CMD_ALERT);
91 - nla = nla_reserve(data->skb, NLA_UNSPEC, sizeof(struct net_dm_alert_msg));
92 - msg = nla_data(nla);
93 - memset(msg, 0, al);
94 - atomic_set(&data->dm_hit_count, dm_hit_limit);
89 + skb = genlmsg_new(al, GFP_KERNEL);
90 +
91 + if (skb) {
92 + genlmsg_put(skb, 0, 0, &net_drop_monitor_family,
93 + 0, NET_DM_CMD_ALERT);
94 + nla = nla_reserve(skb, NLA_UNSPEC,
95 + sizeof(struct net_dm_alert_msg));
96 + msg = nla_data(nla);
97 + memset(msg, 0, al);
98 + } else
99 + schedule_work_on(data->cpu, &data->dm_alert_work);
100 +
101 + /*
102 + * Don't need to lock this, since we are guaranteed to only
103 + * run this on a single cpu at a time.
104 + * Note also that we only update data->skb if the old and new skb
105 + * pointers don't match. This ensures that we don't continually call
106 + * synchornize_rcu if we repeatedly fail to alloc a new netlink message.
107 + */
108 + if (skb != oskb) {
109 + rcu_assign_pointer(data->skb, skb);
110 +
111 + synchronize_rcu();
112 +
113 + atomic_set(&data->dm_hit_count, dm_hit_limit);
114 + }
115 +
95 116 }
96 117
97 118 static void send_dm_alert(struct work_struct *unused)
98 119 {
99 120 struct sk_buff *skb;
100 - struct per_cpu_dm_data *data = &__get_cpu_var(dm_cpu_data);
121 + struct per_cpu_dm_data *data = &get_cpu_var(dm_cpu_data);
122 +
123 + WARN_ON_ONCE(data->cpu != smp_processor_id());
101 124
102 125 /*
103 126 * Grab the skb we're about to send
104 127 */
105 - skb = data->skb;
128 + skb = rcu_dereference_protected(data->skb, 1);
106 129
107 130 /*
108 131 * Replace it with a new one
···
136 111 /*
137 112 * Ship it!
138 113 */
139 - genlmsg_multicast(skb, 0, NET_DM_GRP_ALERT, GFP_KERNEL);
114 + if (skb)
115 + genlmsg_multicast(skb, 0, NET_DM_GRP_ALERT, GFP_KERNEL);
140 116
117 + put_cpu_var(dm_cpu_data);
141 118 }
142 119
143 120 /*
···
150 123 */
151 124 static void sched_send_work(unsigned long unused)
152 125 {
153 - struct per_cpu_dm_data *data = &__get_cpu_var(dm_cpu_data);
126 + struct per_cpu_dm_data *data = &get_cpu_var(dm_cpu_data);
154 127
155 - schedule_work(&data->dm_alert_work);
128 + schedule_work_on(smp_processor_id(), &data->dm_alert_work);
129 +
130 + put_cpu_var(dm_cpu_data);
156 131 }
157 132
158 133 static void trace_drop_common(struct sk_buff *skb, void *location)
···
163 134 struct nlmsghdr *nlh;
164 135 struct nlattr *nla;
165 136 int i;
166 - struct per_cpu_dm_data *data = &__get_cpu_var(dm_cpu_data);
137 + struct sk_buff *dskb;
138 + struct per_cpu_dm_data *data = &get_cpu_var(dm_cpu_data);
167 139
140 +
141 + rcu_read_lock();
142 + dskb = rcu_dereference(data->skb);
143 +
144 + if (!dskb)
145 + goto out;
168 146
169 147 if (!atomic_add_unless(&data->dm_hit_count, -1, 0)) {
170 148 /*
···
180 144 goto out;
181 145 }
182 146
183 - nlh = (struct nlmsghdr *)data->skb->data;
147 + nlh = (struct nlmsghdr *)dskb->data;
184 148 nla = genlmsg_data(nlmsg_data(nlh));
185 149 msg = nla_data(nla);
186 150 for (i = 0; i < msg->entries; i++) {
···
194 158 /*
195 159 * We need to create a new entry
196 160 */
197 - __nla_reserve_nohdr(data->skb, sizeof(struct net_dm_drop_point));
161 + __nla_reserve_nohdr(dskb, sizeof(struct net_dm_drop_point));
198 162 nla->nla_len += NLA_ALIGN(sizeof(struct net_dm_drop_point));
199 163 memcpy(msg->points[msg->entries].pc, &location, sizeof(void *));
200 164 msg->points[msg->entries].count = 1;
···
206 170 }
207 171
208 172 out:
173 + rcu_read_unlock();
174 + put_cpu_var(dm_cpu_data);
209 175 return;
210 176 }
···
252 214 struct dm_hw_stat_delta *new_stat = NULL;
253 215 struct dm_hw_stat_delta *temp;
254 216
255 - spin_lock(&trace_state_lock);
217 + mutex_lock(&trace_state_mutex);
256 218
257 219 if (state == trace_state) {
258 220 rc = -EAGAIN;
···
291 253 rc = -EINPROGRESS;
292 254
293 255 out_unlock:
294 - spin_unlock(&trace_state_lock);
256 + mutex_unlock(&trace_state_mutex);
295 257
296 258 return rc;
297 259 }
···
334 296
335 297 new_stat->dev = dev;
336 298 new_stat->last_rx = jiffies;
337 - spin_lock(&trace_state_lock);
299 + mutex_lock(&trace_state_mutex);
338 300 list_add_rcu(&new_stat->list, &hw_stats_list);
339 - spin_unlock(&trace_state_lock);
301 + mutex_unlock(&trace_state_mutex);
340 302 break;
341 303 case NETDEV_UNREGISTER:
342 - spin_lock(&trace_state_lock);
304 + mutex_lock(&trace_state_mutex);
343 305 list_for_each_entry_safe(new_stat, tmp, &hw_stats_list, list) {
344 306 if (new_stat->dev == dev) {
345 307 new_stat->dev = NULL;
···
350 312 }
351 313 }
352 314 }
353 - spin_unlock(&trace_state_lock);
315 + mutex_unlock(&trace_state_mutex);
354 316 break;
355 317 }
356 318 out:
···
406 368
407 369 for_each_present_cpu(cpu) {
408 370 data = &per_cpu(dm_cpu_data, cpu);
409 - reset_per_cpu_data(data);
371 + data->cpu = cpu;
410 372 INIT_WORK(&data->dm_alert_work, send_dm_alert);
411 373 init_timer(&data->send_timer);
412 374 data->send_timer.data = cpu;
413 375 data->send_timer.function = sched_send_work;
376 + reset_per_cpu_data(data);
414 377 }
378 +
415 379
416 380 goto out;
417 381
+38 -2
net/ieee802154/6lowpan.c
···
1059 1059 free_netdev(dev);
1060 1060 }
1061 1061
1062 + static struct wpan_phy *lowpan_get_phy(const struct net_device *dev)
1063 + {
1064 + struct net_device *real_dev = lowpan_dev_info(dev)->real_dev;
1065 + return ieee802154_mlme_ops(real_dev)->get_phy(real_dev);
1066 + }
1067 +
1068 + static u16 lowpan_get_pan_id(const struct net_device *dev)
1069 + {
1070 + struct net_device *real_dev = lowpan_dev_info(dev)->real_dev;
1071 + return ieee802154_mlme_ops(real_dev)->get_pan_id(real_dev);
1072 + }
1073 +
1074 + static u16 lowpan_get_short_addr(const struct net_device *dev)
1075 + {
1076 + struct net_device *real_dev = lowpan_dev_info(dev)->real_dev;
1077 + return ieee802154_mlme_ops(real_dev)->get_short_addr(real_dev);
1078 + }
1079 +
1062 1080 static struct header_ops lowpan_header_ops = {
1063 1081 .create = lowpan_header_create,
1064 1082 };
···
1084 1066 static const struct net_device_ops lowpan_netdev_ops = {
1085 1067 .ndo_start_xmit = lowpan_xmit,
1086 1068 .ndo_set_mac_address = eth_mac_addr,
1069 + };
1070 +
1071 + static struct ieee802154_mlme_ops lowpan_mlme = {
1072 + .get_pan_id = lowpan_get_pan_id,
1073 + .get_phy = lowpan_get_phy,
1074 + .get_short_addr = lowpan_get_short_addr,
1087 1075 };
1088 1076
1089 1077 static void lowpan_setup(struct net_device *dev)
···
1109 1085
1110 1086 dev->netdev_ops = &lowpan_netdev_ops;
1111 1087 dev->header_ops = &lowpan_header_ops;
1088 + dev->ml_priv = &lowpan_mlme;
1112 1089 dev->destructor = lowpan_dev_free;
1113 1090 }
1114 1091
···
1183 1158 list_add_tail(&entry->list, &lowpan_devices);
1184 1159 mutex_unlock(&lowpan_dev_info(dev)->dev_list_mtx);
1185 1160
1161 + spin_lock_init(&flist_lock);
1162 +
1186 1163 register_netdevice(dev);
1187 1164
1188 1165 return 0;
···
1194 1167 {
1195 1168 struct lowpan_dev_info *lowpan_dev = lowpan_dev_info(dev);
1196 1169 struct net_device *real_dev = lowpan_dev->real_dev;
1197 - struct lowpan_dev_record *entry;
1198 - struct lowpan_dev_record *tmp;
1170 + struct lowpan_dev_record *entry, *tmp;
1171 + struct lowpan_fragment *frame, *tframe;
1199 1172
1200 1173 ASSERT_RTNL();
1174 +
1175 + spin_lock(&flist_lock);
1176 + list_for_each_entry_safe(frame, tframe, &lowpan_fragments, list) {
1177 + del_timer(&frame->timer);
1178 + list_del(&frame->list);
1179 + dev_kfree_skb(frame->skb);
1180 + kfree(frame);
1181 + }
1182 + spin_unlock(&flist_lock);
1201 1183
1202 1184 mutex_lock(&lowpan_dev_info(dev)->dev_list_mtx);
1203 1185 list_for_each_entry_safe(entry, tmp, &lowpan_devices, list) {
+1 -1
net/ipv4/inet_diag.c
··· 141 141 goto rtattr_failure; 142 142 143 143 if (icsk == NULL) { 144 - r->idiag_rqueue = r->idiag_wqueue = 0; 144 + handler->idiag_get_info(sk, r, NULL); 145 145 goto out; 146 146 } 147 147
+5 -4
net/ipv4/tcp.c
··· 3515 3515 { 3516 3516 struct sk_buff *skb = NULL; 3517 3517 unsigned long limit; 3518 - int max_share, cnt; 3518 + int max_rshare, max_wshare, cnt; 3519 3519 unsigned int i; 3520 3520 unsigned long jiffy = jiffies; 3521 3521 ··· 3575 3575 tcp_init_mem(&init_net); 3576 3576 /* Set per-socket limits to no more than 1/128 the pressure threshold */ 3577 3577 limit = nr_free_buffer_pages() << (PAGE_SHIFT - 7); 3578 - max_share = min(4UL*1024*1024, limit); 3578 + max_wshare = min(4UL*1024*1024, limit); 3579 + max_rshare = min(6UL*1024*1024, limit); 3579 3580 3580 3581 sysctl_tcp_wmem[0] = SK_MEM_QUANTUM; 3581 3582 sysctl_tcp_wmem[1] = 16*1024; 3582 - sysctl_tcp_wmem[2] = max(64*1024, max_share); 3583 + sysctl_tcp_wmem[2] = max(64*1024, max_wshare); 3583 3584 3584 3585 sysctl_tcp_rmem[0] = SK_MEM_QUANTUM; 3585 3586 sysctl_tcp_rmem[1] = 87380; 3586 - sysctl_tcp_rmem[2] = max(87380, max_share); 3587 + sysctl_tcp_rmem[2] = max(87380, max_rshare); 3587 3588 3588 3589 pr_info("Hash tables configured (established %u bind %u)\n", 3589 3590 tcp_hashinfo.ehash_mask + 1, tcp_hashinfo.bhash_size);
+8 -5
net/ipv4/tcp_input.c
··· 85 85 EXPORT_SYMBOL(sysctl_tcp_ecn); 86 86 int sysctl_tcp_dsack __read_mostly = 1; 87 87 int sysctl_tcp_app_win __read_mostly = 31; 88 - int sysctl_tcp_adv_win_scale __read_mostly = 2; 88 + int sysctl_tcp_adv_win_scale __read_mostly = 1; 89 89 EXPORT_SYMBOL(sysctl_tcp_adv_win_scale); 90 90 91 91 int sysctl_tcp_stdurg __read_mostly; ··· 496 496 goto new_measure; 497 497 if (before(tp->rcv_nxt, tp->rcv_rtt_est.seq)) 498 498 return; 499 - tcp_rcv_rtt_update(tp, jiffies - tp->rcv_rtt_est.time, 1); 499 + tcp_rcv_rtt_update(tp, tcp_time_stamp - tp->rcv_rtt_est.time, 1); 500 500 501 501 new_measure: 502 502 tp->rcv_rtt_est.seq = tp->rcv_nxt + tp->rcv_wnd; ··· 2904 2904 2905 2905 /* Do not moderate cwnd if it's already undone in cwr or recovery. */ 2906 2906 if (tp->undo_marker) { 2907 - if (inet_csk(sk)->icsk_ca_state == TCP_CA_CWR) 2907 + if (inet_csk(sk)->icsk_ca_state == TCP_CA_CWR) { 2908 2908 tp->snd_cwnd = min(tp->snd_cwnd, tp->snd_ssthresh); 2909 - else /* PRR */ 2909 + tp->snd_cwnd_stamp = tcp_time_stamp; 2910 + } else if (tp->snd_ssthresh < TCP_INFINITE_SSTHRESH) { 2911 + /* PRR algorithm. */ 2910 2912 tp->snd_cwnd = tp->snd_ssthresh; 2911 - tp->snd_cwnd_stamp = tcp_time_stamp; 2913 + tp->snd_cwnd_stamp = tcp_time_stamp; 2914 + } 2912 2915 } 2913 2916 tcp_ca_event(sk, CA_EVENT_COMPLETE_CWR); 2914 2917 }
+9
net/ipv4/udp_diag.c
··· 146 146 return udp_dump_one(&udp_table, in_skb, nlh, req); 147 147 } 148 148 149 + static void udp_diag_get_info(struct sock *sk, struct inet_diag_msg *r, 150 + void *info) 151 + { 152 + r->idiag_rqueue = sk_rmem_alloc_get(sk); 153 + r->idiag_wqueue = sk_wmem_alloc_get(sk); 154 + } 155 + 149 156 static const struct inet_diag_handler udp_diag_handler = { 150 157 .dump = udp_diag_dump, 151 158 .dump_one = udp_diag_dump_one, 159 + .idiag_get_info = udp_diag_get_info, 152 160 .idiag_type = IPPROTO_UDP, 153 161 }; 154 162 ··· 175 167 static const struct inet_diag_handler udplite_diag_handler = { 176 168 .dump = udplite_diag_dump, 177 169 .dump_one = udplite_diag_dump_one, 170 + .idiag_get_info = udp_diag_get_info, 178 171 .idiag_type = IPPROTO_UDPLITE, 179 172 }; 180 173
+2 -1
net/l2tp/l2tp_ip.c
··· 393 393 394 394 daddr = lip->l2tp_addr.s_addr; 395 395 } else { 396 + rc = -EDESTADDRREQ; 396 397 if (sk->sk_state != TCP_ESTABLISHED) 397 - return -EDESTADDRREQ; 398 + goto out; 398 399 399 400 daddr = inet->inet_daddr; 400 401 connected = 1;
+1 -1
net/mac80211/ieee80211_i.h
··· 1234 1234 struct sk_buff *skb); 1235 1235 void ieee80211_sta_reset_beacon_monitor(struct ieee80211_sub_if_data *sdata); 1236 1236 void ieee80211_sta_reset_conn_monitor(struct ieee80211_sub_if_data *sdata); 1237 - void ieee80211_mgd_teardown(struct ieee80211_sub_if_data *sdata); 1237 + void ieee80211_mgd_stop(struct ieee80211_sub_if_data *sdata); 1238 1238 1239 1239 /* IBSS code */ 1240 1240 void ieee80211_ibss_notify_scan_completed(struct ieee80211_local *local);
+2 -2
net/mac80211/iface.c
··· 606 606 /* free all potentially still buffered bcast frames */ 607 607 local->total_ps_buffered -= skb_queue_len(&sdata->u.ap.ps_bc_buf); 608 608 skb_queue_purge(&sdata->u.ap.ps_bc_buf); 609 + } else if (sdata->vif.type == NL80211_IFTYPE_STATION) { 610 + ieee80211_mgd_stop(sdata); 609 611 } 610 612 611 613 if (going_down) ··· 770 768 771 769 if (ieee80211_vif_is_mesh(&sdata->vif)) 772 770 mesh_rmc_free(sdata); 773 - else if (sdata->vif.type == NL80211_IFTYPE_STATION) 774 - ieee80211_mgd_teardown(sdata); 775 771 776 772 flushed = sta_info_flush(local, sdata); 777 773 WARN_ON(flushed);
+1 -1
net/mac80211/mlme.c
··· 3523 3523 return 0; 3524 3524 } 3525 3525 3526 - void ieee80211_mgd_teardown(struct ieee80211_sub_if_data *sdata) 3526 + void ieee80211_mgd_stop(struct ieee80211_sub_if_data *sdata) 3527 3527 { 3528 3528 struct ieee80211_if_managed *ifmgd = &sdata->u.mgd; 3529 3529
+2 -1
net/mac80211/tx.c
··· 1159 1159 tx->sta = rcu_dereference(sdata->u.vlan.sta); 1160 1160 if (!tx->sta && sdata->dev->ieee80211_ptr->use_4addr) 1161 1161 return TX_DROP; 1162 - } else if (info->flags & IEEE80211_TX_CTL_INJECTED) { 1162 + } else if (info->flags & IEEE80211_TX_CTL_INJECTED || 1163 + tx->sdata->control_port_protocol == tx->skb->protocol) { 1163 1164 tx->sta = sta_info_get_bss(sdata, hdr->addr1); 1164 1165 } 1165 1166 if (!tx->sta)
+11
net/netfilter/ipvs/ip_vs_core.c
··· 1924 1924 control_fail: 1925 1925 ip_vs_estimator_net_cleanup(net); 1926 1926 estimator_fail: 1927 + net->ipvs = NULL; 1927 1928 return -ENOMEM; 1928 1929 } 1929 1930 ··· 1937 1936 ip_vs_control_net_cleanup(net); 1938 1937 ip_vs_estimator_net_cleanup(net); 1939 1938 IP_VS_DBG(2, "ipvs netns %d released\n", net_ipvs(net)->gen); 1939 + net->ipvs = NULL; 1940 1940 } 1941 1941 1942 1942 static void __net_exit __ip_vs_dev_cleanup(struct net *net) ··· 1995 1993 goto cleanup_dev; 1996 1994 } 1997 1995 1996 + ret = ip_vs_register_nl_ioctl(); 1997 + if (ret < 0) { 1998 + pr_err("can't register netlink/ioctl.\n"); 1999 + goto cleanup_hooks; 2000 + } 2001 + 1998 2002 pr_info("ipvs loaded.\n"); 1999 2003 2000 2004 return ret; 2001 2005 2006 + cleanup_hooks: 2007 + nf_unregister_hooks(ip_vs_ops, ARRAY_SIZE(ip_vs_ops)); 2002 2008 cleanup_dev: 2003 2009 unregister_pernet_device(&ipvs_core_dev_ops); 2004 2010 cleanup_sub: ··· 2022 2012 2023 2013 static void __exit ip_vs_cleanup(void) 2024 2014 { 2015 + ip_vs_unregister_nl_ioctl(); 2025 2016 nf_unregister_hooks(ip_vs_ops, ARRAY_SIZE(ip_vs_ops)); 2026 2017 unregister_pernet_device(&ipvs_core_dev_ops); 2027 2018 unregister_pernet_subsys(&ipvs_core_ops); /* free ip_vs struct */
+32 -24
net/netfilter/ipvs/ip_vs_ctl.c
···
3680 3680 return 0;
3681 3681 }
3682 3682
3683 - void __net_init ip_vs_control_net_cleanup_sysctl(struct net *net)
3683 + void __net_exit ip_vs_control_net_cleanup_sysctl(struct net *net)
3684 3684 {
3685 3685 struct netns_ipvs *ipvs = net_ipvs(net);
3686 3686
···
3692 3692 #else
3693 3693
3694 3694 int __net_init ip_vs_control_net_init_sysctl(struct net *net) { return 0; }
3695 - void __net_init ip_vs_control_net_cleanup_sysctl(struct net *net) { }
3695 + void __net_exit ip_vs_control_net_cleanup_sysctl(struct net *net) { }
3696 3696
3697 3697 #endif
3698 3698
···
3750 3750 free_percpu(ipvs->tot_stats.cpustats);
3751 3751 }
3752 3752
3753 - int __init ip_vs_control_init(void)
3753 + int __init ip_vs_register_nl_ioctl(void)
3754 3754 {
3755 - int idx;
3756 3755 int ret;
3757 -
3758 - EnterFunction(2);
3759 -
3760 - /* Initialize svc_table, ip_vs_svc_fwm_table, rs_table */
3761 - for(idx = 0; idx < IP_VS_SVC_TAB_SIZE; idx++) {
3762 - INIT_LIST_HEAD(&ip_vs_svc_table[idx]);
3763 - INIT_LIST_HEAD(&ip_vs_svc_fwm_table[idx]);
3764 - }
3765 -
3766 - smp_wmb(); /* Do we really need it now ? */
3767 3756
3768 3757 ret = nf_register_sockopt(&ip_vs_sockopts);
3769 3758 if (ret) {
···
3765 3776 pr_err("cannot register Generic Netlink interface.\n");
3766 3777 goto err_genl;
3767 3778 }
3768 -
3769 - ret = register_netdevice_notifier(&ip_vs_dst_notifier);
3770 - if (ret < 0)
3771 - goto err_notf;
3772 -
3773 - LeaveFunction(2);
3774 3779 return 0;
3775 3780
3776 - err_notf:
3777 - ip_vs_genl_unregister();
3778 3781 err_genl:
3779 3782 nf_unregister_sockopt(&ip_vs_sockopts);
3780 3783 err_sock:
3781 3784 return ret;
3785 + }
3786 +
3787 + void ip_vs_unregister_nl_ioctl(void)
3788 + {
3789 + ip_vs_genl_unregister();
3790 + nf_unregister_sockopt(&ip_vs_sockopts);
3791 + }
3792 +
3793 + int __init ip_vs_control_init(void)
3794 + {
3795 + int idx;
3796 + int ret;
3797 +
3798 + EnterFunction(2);
3799 +
3800 + /* Initialize svc_table, ip_vs_svc_fwm_table, rs_table */
3801 + for (idx = 0; idx < IP_VS_SVC_TAB_SIZE; idx++) {
3802 + INIT_LIST_HEAD(&ip_vs_svc_table[idx]);
3803 + INIT_LIST_HEAD(&ip_vs_svc_fwm_table[idx]);
3804 + }
3805 +
3806 + smp_wmb(); /* Do we really need it now ? */
3807 +
3808 + ret = register_netdevice_notifier(&ip_vs_dst_notifier);
3809 + if (ret < 0)
3810 + return ret;
3811 +
3812 + LeaveFunction(2);
3813 + return 0;
3782 3814 }
3783 3815
3784 3816
···
3807 3797 {
3808 3798 EnterFunction(2);
3809 3799 unregister_netdevice_notifier(&ip_vs_dst_notifier);
3810 - ip_vs_genl_unregister();
3811 - nf_unregister_sockopt(&ip_vs_sockopts);
3812 3800 LeaveFunction(2);
3813 3801 }
+2
net/netfilter/ipvs/ip_vs_ftp.c
··· 439 439 struct ip_vs_app *app; 440 440 struct netns_ipvs *ipvs = net_ipvs(net); 441 441 442 + if (!ipvs) 443 + return -ENOENT; 442 444 app = kmemdup(&ip_vs_ftp, sizeof(struct ip_vs_app), GFP_KERNEL); 443 445 if (!app) 444 446 return -ENOMEM;
+3
net/netfilter/ipvs/ip_vs_lblc.c
··· 551 551 { 552 552 struct netns_ipvs *ipvs = net_ipvs(net); 553 553 554 + if (!ipvs) 555 + return -ENOENT; 556 + 554 557 if (!net_eq(net, &init_net)) { 555 558 ipvs->lblc_ctl_table = kmemdup(vs_vars_table, 556 559 sizeof(vs_vars_table),
+3
net/netfilter/ipvs/ip_vs_lblcr.c
··· 745 745 { 746 746 struct netns_ipvs *ipvs = net_ipvs(net); 747 747 748 + if (!ipvs) 749 + return -ENOENT; 750 + 748 751 if (!net_eq(net, &init_net)) { 749 752 ipvs->lblcr_ctl_table = kmemdup(vs_vars_table, 750 753 sizeof(vs_vars_table),
+27 -11
net/netfilter/ipvs/ip_vs_proto.c
··· 59 59 return 0; 60 60 } 61 61 62 - #if defined(CONFIG_IP_VS_PROTO_TCP) || defined(CONFIG_IP_VS_PROTO_UDP) || \ 63 - defined(CONFIG_IP_VS_PROTO_SCTP) || defined(CONFIG_IP_VS_PROTO_AH) || \ 64 - defined(CONFIG_IP_VS_PROTO_ESP) 65 62 /* 66 63 * register an ipvs protocols netns related data 67 64 */ ··· 78 81 ipvs->proto_data_table[hash] = pd; 79 82 atomic_set(&pd->appcnt, 0); /* Init app counter */ 80 83 81 - if (pp->init_netns != NULL) 82 - pp->init_netns(net, pd); 84 + if (pp->init_netns != NULL) { 85 + int ret = pp->init_netns(net, pd); 86 + if (ret) { 87 + /* unlink an free proto data */ 88 + ipvs->proto_data_table[hash] = pd->next; 89 + kfree(pd); 90 + return ret; 91 + } 92 + } 83 93 84 94 return 0; 85 95 } 86 - #endif 87 96 88 97 /* 89 98 * unregister an ipvs protocol ··· 319 316 */ 320 317 int __net_init ip_vs_protocol_net_init(struct net *net) 321 318 { 319 + int i, ret; 320 + static struct ip_vs_protocol *protos[] = { 322 321 #ifdef CONFIG_IP_VS_PROTO_TCP 323 - register_ip_vs_proto_netns(net, &ip_vs_protocol_tcp); 322 + &ip_vs_protocol_tcp, 324 323 #endif 325 324 #ifdef CONFIG_IP_VS_PROTO_UDP 326 - register_ip_vs_proto_netns(net, &ip_vs_protocol_udp); 325 + &ip_vs_protocol_udp, 327 326 #endif 328 327 #ifdef CONFIG_IP_VS_PROTO_SCTP 329 - register_ip_vs_proto_netns(net, &ip_vs_protocol_sctp); 328 + &ip_vs_protocol_sctp, 330 329 #endif 331 330 #ifdef CONFIG_IP_VS_PROTO_AH 332 - register_ip_vs_proto_netns(net, &ip_vs_protocol_ah); 331 + &ip_vs_protocol_ah, 333 332 #endif 334 333 #ifdef CONFIG_IP_VS_PROTO_ESP 335 - register_ip_vs_proto_netns(net, &ip_vs_protocol_esp); 334 + &ip_vs_protocol_esp, 336 335 #endif 336 + }; 337 + 338 + for (i = 0; i < ARRAY_SIZE(protos); i++) { 339 + ret = register_ip_vs_proto_netns(net, protos[i]); 340 + if (ret < 0) 341 + goto cleanup; 342 + } 337 343 return 0; 344 + 345 + cleanup: 346 + ip_vs_protocol_net_cleanup(net); 347 + return ret; 338 348 } 339 349 340 350 void __net_exit ip_vs_protocol_net_cleanup(struct net *net)
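The ip_vs_proto.c hunk above replaces five `#ifdef`-guarded registration calls with a table walked in a loop, so a failure part-way through can be unwound from a single cleanup path. A minimal user-space sketch of that register-with-rollback pattern follows; the `proto` struct, the `-ENOMEM`-style return codes, and the helper names are stand-ins for illustration, not the kernel types:

```c
#include <stddef.h>
#include <assert.h>

/* Stand-in for a per-protocol registration record: an optional init
 * hook that may fail, an optional exit hook, and a flag recording
 * whether exit must run on unwind. */
struct proto {
	const char *name;
	int (*init)(void);
	void (*exit)(void);
	int registered;
};

static int register_proto(struct proto *p)
{
	int ret = p->init ? p->init() : 0;

	if (ret)
		return ret;
	p->registered = 1;
	return 0;
}

static void unregister_all(struct proto *protos, size_t n)
{
	/* Unwind in reverse registration order, skipping entries that
	 * never registered (including the one whose init failed). */
	while (n--) {
		if (protos[n].registered) {
			if (protos[n].exit)
				protos[n].exit();
			protos[n].registered = 0;
		}
	}
}

/* Register a whole table; on the first failure roll back everything
 * registered so far and propagate the error, mirroring the shape of
 * ip_vs_protocol_net_init() falling back to
 * ip_vs_protocol_net_cleanup(). */
int register_table(struct proto *protos, size_t n)
{
	size_t i;
	int ret;

	for (i = 0; i < n; i++) {
		ret = register_proto(&protos[i]);
		if (ret < 0) {
			unregister_all(protos, n);
			return ret;
		}
	}
	return 0;
}

/* Toy init hooks for exercising the two paths. */
int ok_init(void) { return 0; }
int fail_init(void) { return -12; /* -ENOMEM-like */ }
```

The same loop also absorbs the `init_netns` conversion from `void` to `int` seen in the TCP/UDP/SCTP hunks below: once every init hook can report failure, one rollback path handles them all.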
+4 -1
net/netfilter/ipvs/ip_vs_proto_sctp.c
··· 1090 1090 * timeouts is netns related now. 1091 1091 * --------------------------------------------- 1092 1092 */ 1093 - static void __ip_vs_sctp_init(struct net *net, struct ip_vs_proto_data *pd) 1093 + static int __ip_vs_sctp_init(struct net *net, struct ip_vs_proto_data *pd) 1094 1094 { 1095 1095 struct netns_ipvs *ipvs = net_ipvs(net); 1096 1096 ··· 1098 1098 spin_lock_init(&ipvs->sctp_app_lock); 1099 1099 pd->timeout_table = ip_vs_create_timeout_table((int *)sctp_timeouts, 1100 1100 sizeof(sctp_timeouts)); 1101 + if (!pd->timeout_table) 1102 + return -ENOMEM; 1103 + return 0; 1101 1104 } 1102 1105 1103 1106 static void __ip_vs_sctp_exit(struct net *net, struct ip_vs_proto_data *pd)
+4 -1
net/netfilter/ipvs/ip_vs_proto_tcp.c
··· 677 677 * timeouts is netns related now. 678 678 * --------------------------------------------- 679 679 */ 680 - static void __ip_vs_tcp_init(struct net *net, struct ip_vs_proto_data *pd) 680 + static int __ip_vs_tcp_init(struct net *net, struct ip_vs_proto_data *pd) 681 681 { 682 682 struct netns_ipvs *ipvs = net_ipvs(net); 683 683 ··· 685 685 spin_lock_init(&ipvs->tcp_app_lock); 686 686 pd->timeout_table = ip_vs_create_timeout_table((int *)tcp_timeouts, 687 687 sizeof(tcp_timeouts)); 688 + if (!pd->timeout_table) 689 + return -ENOMEM; 688 690 pd->tcp_state_table = tcp_states; 691 + return 0; 689 692 } 690 693 691 694 static void __ip_vs_tcp_exit(struct net *net, struct ip_vs_proto_data *pd)
+4 -1
net/netfilter/ipvs/ip_vs_proto_udp.c
··· 467 467 cp->timeout = pd->timeout_table[IP_VS_UDP_S_NORMAL]; 468 468 } 469 469 470 - static void __udp_init(struct net *net, struct ip_vs_proto_data *pd) 470 + static int __udp_init(struct net *net, struct ip_vs_proto_data *pd) 471 471 { 472 472 struct netns_ipvs *ipvs = net_ipvs(net); 473 473 ··· 475 475 spin_lock_init(&ipvs->udp_app_lock); 476 476 pd->timeout_table = ip_vs_create_timeout_table((int *)udp_timeouts, 477 477 sizeof(udp_timeouts)); 478 + if (!pd->timeout_table) 479 + return -ENOMEM; 480 + return 0; 478 481 } 479 482 480 483 static void __udp_exit(struct net *net, struct ip_vs_proto_data *pd)
+1 -1
net/netfilter/xt_CT.c
··· 227 227 } 228 228 229 229 #ifdef CONFIG_NF_CONNTRACK_TIMEOUT 230 - if (info->timeout) { 230 + if (info->timeout[0]) { 231 231 typeof(nf_ct_timeout_find_get_hook) timeout_find_get; 232 232 struct nf_conn_timeout *timeout_ext; 233 233
+2 -4
net/sched/sch_netem.c
··· 413 413 if (q->corrupt && q->corrupt >= get_crandom(&q->corrupt_cor)) { 414 414 if (!(skb = skb_unshare(skb, GFP_ATOMIC)) || 415 415 (skb->ip_summed == CHECKSUM_PARTIAL && 416 - skb_checksum_help(skb))) { 417 - sch->qstats.drops++; 418 - return NET_XMIT_DROP; 419 - } 416 + skb_checksum_help(skb))) 417 + return qdisc_drop(skb, sch); 420 418 421 419 skb->data[net_random() % skb_headlen(skb)] ^= 1<<(net_random() % 8); 422 420 }
+39 -11
net/sunrpc/clnt.c
··· 176 176 return 0; 177 177 } 178 178 179 - static int __rpc_pipefs_event(struct rpc_clnt *clnt, unsigned long event, 180 - struct super_block *sb) 179 + static inline int rpc_clnt_skip_event(struct rpc_clnt *clnt, unsigned long event) 180 + { 181 + if (((event == RPC_PIPEFS_MOUNT) && clnt->cl_dentry) || 182 + ((event == RPC_PIPEFS_UMOUNT) && !clnt->cl_dentry)) 183 + return 1; 184 + return 0; 185 + } 186 + 187 + static int __rpc_clnt_handle_event(struct rpc_clnt *clnt, unsigned long event, 188 + struct super_block *sb) 181 189 { 182 190 struct dentry *dentry; 183 191 int err = 0; 184 192 185 193 switch (event) { 186 194 case RPC_PIPEFS_MOUNT: 187 - if (clnt->cl_program->pipe_dir_name == NULL) 188 - break; 189 195 dentry = rpc_setup_pipedir_sb(sb, clnt, 190 196 clnt->cl_program->pipe_dir_name); 191 197 BUG_ON(dentry == NULL); ··· 214 208 return err; 215 209 } 216 210 211 + static int __rpc_pipefs_event(struct rpc_clnt *clnt, unsigned long event, 212 + struct super_block *sb) 213 + { 214 + int error = 0; 215 + 216 + for (;; clnt = clnt->cl_parent) { 217 + if (!rpc_clnt_skip_event(clnt, event)) 218 + error = __rpc_clnt_handle_event(clnt, event, sb); 219 + if (error || clnt == clnt->cl_parent) 220 + break; 221 + } 222 + return error; 223 + } 224 + 217 225 static struct rpc_clnt *rpc_get_client_for_event(struct net *net, int event) 218 226 { 219 227 struct sunrpc_net *sn = net_generic(net, sunrpc_net_id); ··· 235 215 236 216 spin_lock(&sn->rpc_client_lock); 237 217 list_for_each_entry(clnt, &sn->all_clients, cl_clients) { 238 - if (((event == RPC_PIPEFS_MOUNT) && clnt->cl_dentry) || 239 - ((event == RPC_PIPEFS_UMOUNT) && !clnt->cl_dentry)) 218 + if (clnt->cl_program->pipe_dir_name == NULL) 219 + break; 220 + if (rpc_clnt_skip_event(clnt, event)) 240 221 continue; 241 - atomic_inc(&clnt->cl_count); 222 + if (atomic_inc_not_zero(&clnt->cl_count) == 0) 223 + continue; 242 224 spin_unlock(&sn->rpc_client_lock); 243 225 return clnt; 244 226 } ··· 277 255 void 
rpc_clients_notifier_unregister(void) 278 256 { 279 257 return rpc_pipefs_notifier_unregister(&rpc_clients_block); 258 + } 259 + 260 + static void rpc_clnt_set_nodename(struct rpc_clnt *clnt, const char *nodename) 261 + { 262 + clnt->cl_nodelen = strlen(nodename); 263 + if (clnt->cl_nodelen > UNX_MAXNODENAME) 264 + clnt->cl_nodelen = UNX_MAXNODENAME; 265 + memcpy(clnt->cl_nodename, nodename, clnt->cl_nodelen); 280 266 } 281 267 282 268 static struct rpc_clnt * rpc_new_client(const struct rpc_create_args *args, struct rpc_xprt *xprt) ··· 367 337 } 368 338 369 339 /* save the nodename */ 370 - clnt->cl_nodelen = strlen(init_utsname()->nodename); 371 - if (clnt->cl_nodelen > UNX_MAXNODENAME) 372 - clnt->cl_nodelen = UNX_MAXNODENAME; 373 - memcpy(clnt->cl_nodename, init_utsname()->nodename, clnt->cl_nodelen); 340 + rpc_clnt_set_nodename(clnt, utsname()->nodename); 374 341 rpc_register_client(clnt); 375 342 return clnt; 376 343 ··· 526 499 err = rpc_setup_pipedir(new, clnt->cl_program->pipe_dir_name); 527 500 if (err != 0) 528 501 goto out_no_path; 502 + rpc_clnt_set_nodename(new, utsname()->nodename); 529 503 if (new->cl_auth) 530 504 atomic_inc(&new->cl_auth->au_count); 531 505 atomic_inc(&clnt->cl_count);
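The clnt.c hunk changes `atomic_inc(&clnt->cl_count)` to `atomic_inc_not_zero(...)` so a client whose refcount has already dropped to zero (i.e. teardown has begun) is skipped rather than resurrected. A small C11 sketch of that conditional-get semantics, using `stdatomic.h` and a stand-in `obj` type rather than the kernel's `atomic_t`:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <assert.h>

/* Stand-in object; only the refcount matters for the sketch. */
struct obj {
	atomic_int refcount;
};

/* Equivalent of the kernel's atomic_inc_not_zero(): take a reference
 * only while the object is still live (refcount > 0).  Returns false
 * when the count has already reached zero, meaning the object is being
 * destroyed and must not be revived. */
bool obj_tryget(struct obj *o)
{
	int old = atomic_load(&o->refcount);

	while (old != 0) {
		/* On failure the CAS reloads 'old' and we retry. */
		if (atomic_compare_exchange_weak(&o->refcount, &old, old + 1))
			return true;
	}
	return false;
}

/* Drop a reference; returns true when this was the last one. */
bool obj_put(struct obj *o)
{
	return atomic_fetch_sub(&o->refcount, 1) == 1;
}
```

The compare-and-swap loop is what makes the "not zero" check and the increment atomic as a unit; a plain load-test-then-increment would race with a concurrent final put.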
+2 -1
net/sunrpc/rpc_pipe.c
··· 1126 1126 return -ENOMEM; 1127 1127 dprintk("RPC: sending pipefs MOUNT notification for net %p%s\n", net, 1128 1128 NET_NAME(net)); 1129 + sn->pipefs_sb = sb; 1129 1130 err = blocking_notifier_call_chain(&rpc_pipefs_notifier_list, 1130 1131 RPC_PIPEFS_MOUNT, 1131 1132 sb); 1132 1133 if (err) 1133 1134 goto err_depopulate; 1134 1135 sb->s_fs_info = get_net(net); 1135 - sn->pipefs_sb = sb; 1136 1136 return 0; 1137 1137 1138 1138 err_depopulate: 1139 1139 blocking_notifier_call_chain(&rpc_pipefs_notifier_list, 1140 1140 RPC_PIPEFS_UMOUNT, 1141 1141 sb); 1142 + sn->pipefs_sb = NULL; 1142 1143 __rpc_depopulate(root, files, RPCAUTH_lockd, RPCAUTH_RootEOF); 1143 1144 return err; 1144 1145 }
+9 -8
net/sunrpc/sunrpc_syms.c
··· 75 75 static int __init 76 76 init_sunrpc(void) 77 77 { 78 - int err = register_rpc_pipefs(); 78 + int err = rpc_init_mempool(); 79 79 if (err) 80 80 goto out; 81 - err = rpc_init_mempool(); 82 - if (err) 83 - goto out2; 84 81 err = rpcauth_init_module(); 85 82 if (err) 86 - goto out3; 83 + goto out2; 87 84 88 85 cache_initialize(); 89 86 90 87 err = register_pernet_subsys(&sunrpc_net_ops); 88 + if (err) 89 + goto out3; 90 + 91 + err = register_rpc_pipefs(); 91 92 if (err) 92 93 goto out4; 93 94 #ifdef RPC_DEBUG ··· 99 98 return 0; 100 99 101 100 out4: 102 - rpcauth_remove_module(); 101 + unregister_pernet_subsys(&sunrpc_net_ops); 103 102 out3: 104 - rpc_destroy_mempool(); 103 + rpcauth_remove_module(); 105 104 out2: 106 - unregister_rpc_pipefs(); 105 + rpc_destroy_mempool(); 107 106 out: 108 107 return err; 109 108 }
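The init_sunrpc() hunk above reorders the error labels so that each one undoes exactly the steps completed before the failing call, in reverse order; before the patch, the labels and init calls had drifted out of sync. A runnable sketch of that goto-unwinding discipline, with `fail_at` as a test knob and the step names (`init_mempool` etc.) as stand-ins for the real `rpc_init_mempool`/`rpcauth_init_module`/`register_pernet_subsys`/`register_rpc_pipefs` pairs:

```c
#include <stdbool.h>
#include <assert.h>

/* Each subsystem records whether it is currently "up" so the unwind
 * order can be checked from outside. */
bool mempool_up, auth_up, pernet_up, pipefs_up;

int fail_at; /* 0 = no failure; N = the Nth step fails (test knob) */

static int step(int n, bool *flag)
{
	if (fail_at == n)
		return -1;
	*flag = true;
	return 0;
}

static int init_mempool(void) { return step(1, &mempool_up); }
static int init_auth(void)    { return step(2, &auth_up); }
static int init_pernet(void)  { return step(3, &pernet_up); }
static int init_pipefs(void)  { return step(4, &pipefs_up); }

static void exit_mempool(void) { mempool_up = false; }
static void exit_auth(void)    { auth_up = false; }
static void exit_pernet(void)  { pernet_up = false; }

/* Mirrors the fixed init_sunrpc(): the labels form a mirror image of
 * the init sequence, so whichever step fails, only the steps that
 * actually completed get torn down, newest first. */
int init_all(void)
{
	int err = init_mempool();
	if (err)
		goto out;
	err = init_auth();
	if (err)
		goto out2;
	err = init_pernet();
	if (err)
		goto out3;
	err = init_pipefs();
	if (err)
		goto out4;
	return 0;

out4:
	exit_pernet();
out3:
	exit_auth();
out2:
	exit_mempool();
out:
	return err;
}
```

Keeping the label bodies adjacent and in reverse init order is what makes this idiom auditable: adding a new init step means adding exactly one call, one `goto`, and one label, with no other path to update.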
+1
sound/pci/hda/patch_realtek.c
··· 6109 6109 6110 6110 static const struct snd_pci_quirk alc269_fixup_tbl[] = { 6111 6111 SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_MIC2_MUTE_LED), 6112 + SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_DMIC), 6112 6113 SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW), 6113 6114 SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC), 6114 6115 SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
+2
sound/soc/codecs/cs42l73.c
··· 929 929 930 930 /* MCLKX -> MCLK */ 931 931 mclkx_coeff = cs42l73_get_mclkx_coeff(freq); 932 + if (mclkx_coeff < 0) 933 + return mclkx_coeff; 932 934 933 935 mclk = cs42l73_mclkx_coeffs[mclkx_coeff].mclkx / 934 936 cs42l73_mclkx_coeffs[mclkx_coeff].ratio;
+223 -55
sound/soc/codecs/wm8994.c
··· 1000 1000 } 1001 1001 } 1002 1002 1003 + static int aif1clk_ev(struct snd_soc_dapm_widget *w, 1004 + struct snd_kcontrol *kcontrol, int event) 1005 + { 1006 + struct snd_soc_codec *codec = w->codec; 1007 + struct wm8994 *control = codec->control_data; 1008 + int mask = WM8994_AIF1DAC1L_ENA | WM8994_AIF1DAC1R_ENA; 1009 + int dac; 1010 + int adc; 1011 + int val; 1012 + 1013 + switch (control->type) { 1014 + case WM8994: 1015 + case WM8958: 1016 + mask |= WM8994_AIF1DAC2L_ENA | WM8994_AIF1DAC2R_ENA; 1017 + break; 1018 + default: 1019 + break; 1020 + } 1021 + 1022 + switch (event) { 1023 + case SND_SOC_DAPM_PRE_PMU: 1024 + val = snd_soc_read(codec, WM8994_AIF1_CONTROL_1); 1025 + if ((val & WM8994_AIF1ADCL_SRC) && 1026 + (val & WM8994_AIF1ADCR_SRC)) 1027 + adc = WM8994_AIF1ADC1R_ENA | WM8994_AIF1ADC2R_ENA; 1028 + else if (!(val & WM8994_AIF1ADCL_SRC) && 1029 + !(val & WM8994_AIF1ADCR_SRC)) 1030 + adc = WM8994_AIF1ADC1L_ENA | WM8994_AIF1ADC2L_ENA; 1031 + else 1032 + adc = WM8994_AIF1ADC1R_ENA | WM8994_AIF1ADC2R_ENA | 1033 + WM8994_AIF1ADC1L_ENA | WM8994_AIF1ADC2L_ENA; 1034 + 1035 + val = snd_soc_read(codec, WM8994_AIF1_CONTROL_2); 1036 + if ((val & WM8994_AIF1DACL_SRC) && 1037 + (val & WM8994_AIF1DACR_SRC)) 1038 + dac = WM8994_AIF1DAC1R_ENA | WM8994_AIF1DAC2R_ENA; 1039 + else if (!(val & WM8994_AIF1DACL_SRC) && 1040 + !(val & WM8994_AIF1DACR_SRC)) 1041 + dac = WM8994_AIF1DAC1L_ENA | WM8994_AIF1DAC2L_ENA; 1042 + else 1043 + dac = WM8994_AIF1DAC1R_ENA | WM8994_AIF1DAC2R_ENA | 1044 + WM8994_AIF1DAC1L_ENA | WM8994_AIF1DAC2L_ENA; 1045 + 1046 + snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_4, 1047 + mask, adc); 1048 + snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_5, 1049 + mask, dac); 1050 + snd_soc_update_bits(codec, WM8994_CLOCKING_1, 1051 + WM8994_AIF1DSPCLK_ENA | 1052 + WM8994_SYSDSPCLK_ENA, 1053 + WM8994_AIF1DSPCLK_ENA | 1054 + WM8994_SYSDSPCLK_ENA); 1055 + snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_4, mask, 1056 + WM8994_AIF1ADC1R_ENA | 1057 + 
WM8994_AIF1ADC1L_ENA | 1058 + WM8994_AIF1ADC2R_ENA | 1059 + WM8994_AIF1ADC2L_ENA); 1060 + snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_5, mask, 1061 + WM8994_AIF1DAC1R_ENA | 1062 + WM8994_AIF1DAC1L_ENA | 1063 + WM8994_AIF1DAC2R_ENA | 1064 + WM8994_AIF1DAC2L_ENA); 1065 + break; 1066 + 1067 + case SND_SOC_DAPM_PRE_PMD: 1068 + case SND_SOC_DAPM_POST_PMD: 1069 + snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_5, 1070 + mask, 0); 1071 + snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_4, 1072 + mask, 0); 1073 + 1074 + val = snd_soc_read(codec, WM8994_CLOCKING_1); 1075 + if (val & WM8994_AIF2DSPCLK_ENA) 1076 + val = WM8994_SYSDSPCLK_ENA; 1077 + else 1078 + val = 0; 1079 + snd_soc_update_bits(codec, WM8994_CLOCKING_1, 1080 + WM8994_SYSDSPCLK_ENA | 1081 + WM8994_AIF1DSPCLK_ENA, val); 1082 + break; 1083 + } 1084 + 1085 + return 0; 1086 + } 1087 + 1088 + static int aif2clk_ev(struct snd_soc_dapm_widget *w, 1089 + struct snd_kcontrol *kcontrol, int event) 1090 + { 1091 + struct snd_soc_codec *codec = w->codec; 1092 + int dac; 1093 + int adc; 1094 + int val; 1095 + 1096 + switch (event) { 1097 + case SND_SOC_DAPM_PRE_PMU: 1098 + val = snd_soc_read(codec, WM8994_AIF2_CONTROL_1); 1099 + if ((val & WM8994_AIF2ADCL_SRC) && 1100 + (val & WM8994_AIF2ADCR_SRC)) 1101 + adc = WM8994_AIF2ADCR_ENA; 1102 + else if (!(val & WM8994_AIF2ADCL_SRC) && 1103 + !(val & WM8994_AIF2ADCR_SRC)) 1104 + adc = WM8994_AIF2ADCL_ENA; 1105 + else 1106 + adc = WM8994_AIF2ADCL_ENA | WM8994_AIF2ADCR_ENA; 1107 + 1108 + 1109 + val = snd_soc_read(codec, WM8994_AIF2_CONTROL_2); 1110 + if ((val & WM8994_AIF2DACL_SRC) && 1111 + (val & WM8994_AIF2DACR_SRC)) 1112 + dac = WM8994_AIF2DACR_ENA; 1113 + else if (!(val & WM8994_AIF2DACL_SRC) && 1114 + !(val & WM8994_AIF2DACR_SRC)) 1115 + dac = WM8994_AIF2DACL_ENA; 1116 + else 1117 + dac = WM8994_AIF2DACL_ENA | WM8994_AIF2DACR_ENA; 1118 + 1119 + snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_4, 1120 + WM8994_AIF2ADCL_ENA | 1121 + WM8994_AIF2ADCR_ENA, adc); 
1122 + snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_5, 1123 + WM8994_AIF2DACL_ENA | 1124 + WM8994_AIF2DACR_ENA, dac); 1125 + snd_soc_update_bits(codec, WM8994_CLOCKING_1, 1126 + WM8994_AIF2DSPCLK_ENA | 1127 + WM8994_SYSDSPCLK_ENA, 1128 + WM8994_AIF2DSPCLK_ENA | 1129 + WM8994_SYSDSPCLK_ENA); 1130 + snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_4, 1131 + WM8994_AIF2ADCL_ENA | 1132 + WM8994_AIF2ADCR_ENA, 1133 + WM8994_AIF2ADCL_ENA | 1134 + WM8994_AIF2ADCR_ENA); 1135 + snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_5, 1136 + WM8994_AIF2DACL_ENA | 1137 + WM8994_AIF2DACR_ENA, 1138 + WM8994_AIF2DACL_ENA | 1139 + WM8994_AIF2DACR_ENA); 1140 + break; 1141 + 1142 + case SND_SOC_DAPM_PRE_PMD: 1143 + case SND_SOC_DAPM_POST_PMD: 1144 + snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_5, 1145 + WM8994_AIF2DACL_ENA | 1146 + WM8994_AIF2DACR_ENA, 0); 1147 + snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_5, 1148 + WM8994_AIF2ADCL_ENA | 1149 + WM8994_AIF2ADCR_ENA, 0); 1150 + 1151 + val = snd_soc_read(codec, WM8994_CLOCKING_1); 1152 + if (val & WM8994_AIF1DSPCLK_ENA) 1153 + val = WM8994_SYSDSPCLK_ENA; 1154 + else 1155 + val = 0; 1156 + snd_soc_update_bits(codec, WM8994_CLOCKING_1, 1157 + WM8994_SYSDSPCLK_ENA | 1158 + WM8994_AIF2DSPCLK_ENA, val); 1159 + break; 1160 + } 1161 + 1162 + return 0; 1163 + } 1164 + 1165 + static int aif1clk_late_ev(struct snd_soc_dapm_widget *w, 1166 + struct snd_kcontrol *kcontrol, int event) 1167 + { 1168 + struct snd_soc_codec *codec = w->codec; 1169 + struct wm8994_priv *wm8994 = snd_soc_codec_get_drvdata(codec); 1170 + 1171 + switch (event) { 1172 + case SND_SOC_DAPM_PRE_PMU: 1173 + wm8994->aif1clk_enable = 1; 1174 + break; 1175 + case SND_SOC_DAPM_POST_PMD: 1176 + wm8994->aif1clk_disable = 1; 1177 + break; 1178 + } 1179 + 1180 + return 0; 1181 + } 1182 + 1183 + static int aif2clk_late_ev(struct snd_soc_dapm_widget *w, 1184 + struct snd_kcontrol *kcontrol, int event) 1185 + { 1186 + struct snd_soc_codec *codec = w->codec; 1187 + 
struct wm8994_priv *wm8994 = snd_soc_codec_get_drvdata(codec); 1188 + 1189 + switch (event) { 1190 + case SND_SOC_DAPM_PRE_PMU: 1191 + wm8994->aif2clk_enable = 1; 1192 + break; 1193 + case SND_SOC_DAPM_POST_PMD: 1194 + wm8994->aif2clk_disable = 1; 1195 + break; 1196 + } 1197 + 1198 + return 0; 1199 + } 1200 + 1003 1201 static int late_enable_ev(struct snd_soc_dapm_widget *w, 1004 1202 struct snd_kcontrol *kcontrol, int event) 1005 1203 { ··· 1207 1009 switch (event) { 1208 1010 case SND_SOC_DAPM_PRE_PMU: 1209 1011 if (wm8994->aif1clk_enable) { 1012 + aif1clk_ev(w, kcontrol, event); 1210 1013 snd_soc_update_bits(codec, WM8994_AIF1_CLOCKING_1, 1211 1014 WM8994_AIF1CLK_ENA_MASK, 1212 1015 WM8994_AIF1CLK_ENA); 1213 1016 wm8994->aif1clk_enable = 0; 1214 1017 } 1215 1018 if (wm8994->aif2clk_enable) { 1019 + aif2clk_ev(w, kcontrol, event); 1216 1020 snd_soc_update_bits(codec, WM8994_AIF2_CLOCKING_1, 1217 1021 WM8994_AIF2CLK_ENA_MASK, 1218 1022 WM8994_AIF2CLK_ENA); ··· 1240 1040 if (wm8994->aif1clk_disable) { 1241 1041 snd_soc_update_bits(codec, WM8994_AIF1_CLOCKING_1, 1242 1042 WM8994_AIF1CLK_ENA_MASK, 0); 1043 + aif1clk_ev(w, kcontrol, event); 1243 1044 wm8994->aif1clk_disable = 0; 1244 1045 } 1245 1046 if (wm8994->aif2clk_disable) { 1246 1047 snd_soc_update_bits(codec, WM8994_AIF2_CLOCKING_1, 1247 1048 WM8994_AIF2CLK_ENA_MASK, 0); 1049 + aif2clk_ev(w, kcontrol, event); 1248 1050 wm8994->aif2clk_disable = 0; 1249 1051 } 1250 - break; 1251 - } 1252 - 1253 - return 0; 1254 - } 1255 - 1256 - static int aif1clk_ev(struct snd_soc_dapm_widget *w, 1257 - struct snd_kcontrol *kcontrol, int event) 1258 - { 1259 - struct snd_soc_codec *codec = w->codec; 1260 - struct wm8994_priv *wm8994 = snd_soc_codec_get_drvdata(codec); 1261 - 1262 - switch (event) { 1263 - case SND_SOC_DAPM_PRE_PMU: 1264 - wm8994->aif1clk_enable = 1; 1265 - break; 1266 - case SND_SOC_DAPM_POST_PMD: 1267 - wm8994->aif1clk_disable = 1; 1268 - break; 1269 - } 1270 - 1271 - return 0; 1272 - } 1273 - 1274 - static 
int aif2clk_ev(struct snd_soc_dapm_widget *w, 1275 - struct snd_kcontrol *kcontrol, int event) 1276 - { 1277 - struct snd_soc_codec *codec = w->codec; 1278 - struct wm8994_priv *wm8994 = snd_soc_codec_get_drvdata(codec); 1279 - 1280 - switch (event) { 1281 - case SND_SOC_DAPM_PRE_PMU: 1282 - wm8994->aif2clk_enable = 1; 1283 - break; 1284 - case SND_SOC_DAPM_POST_PMD: 1285 - wm8994->aif2clk_disable = 1; 1286 1052 break; 1287 1053 } 1288 1054 ··· 1551 1385 SOC_DAPM_ENUM("AIF2DACR Mux", aif2dacr_src_enum); 1552 1386 1553 1387 static const struct snd_soc_dapm_widget wm8994_lateclk_revd_widgets[] = { 1554 - SND_SOC_DAPM_SUPPLY("AIF1CLK", SND_SOC_NOPM, 0, 0, aif1clk_ev, 1388 + SND_SOC_DAPM_SUPPLY("AIF1CLK", SND_SOC_NOPM, 0, 0, aif1clk_late_ev, 1555 1389 SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMD), 1556 - SND_SOC_DAPM_SUPPLY("AIF2CLK", SND_SOC_NOPM, 0, 0, aif2clk_ev, 1390 + SND_SOC_DAPM_SUPPLY("AIF2CLK", SND_SOC_NOPM, 0, 0, aif2clk_late_ev, 1557 1391 SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMD), 1558 1392 1559 1393 SND_SOC_DAPM_PGA_E("Late DAC1L Enable PGA", SND_SOC_NOPM, 0, 0, NULL, 0, ··· 1582 1416 }; 1583 1417 1584 1418 static const struct snd_soc_dapm_widget wm8994_lateclk_widgets[] = { 1585 - SND_SOC_DAPM_SUPPLY("AIF1CLK", WM8994_AIF1_CLOCKING_1, 0, 0, NULL, 0), 1586 - SND_SOC_DAPM_SUPPLY("AIF2CLK", WM8994_AIF2_CLOCKING_1, 0, 0, NULL, 0), 1419 + SND_SOC_DAPM_SUPPLY("AIF1CLK", WM8994_AIF1_CLOCKING_1, 0, 0, aif1clk_ev, 1420 + SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_PRE_PMD), 1421 + SND_SOC_DAPM_SUPPLY("AIF2CLK", WM8994_AIF2_CLOCKING_1, 0, 0, aif2clk_ev, 1422 + SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_PRE_PMD), 1587 1423 SND_SOC_DAPM_PGA("Direct Voice", SND_SOC_NOPM, 0, 0, NULL, 0), 1588 1424 SND_SOC_DAPM_MIXER("SPKL", WM8994_POWER_MANAGEMENT_3, 8, 0, 1589 1425 left_speaker_mixer, ARRAY_SIZE(left_speaker_mixer)), ··· 1638 1470 SND_SOC_DAPM_SUPPLY("CLK_SYS", SND_SOC_NOPM, 0, 0, clk_sys_event, 1639 1471 SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_PMD), 1640 1472 1641 - 
SND_SOC_DAPM_SUPPLY("DSP1CLK", WM8994_CLOCKING_1, 3, 0, NULL, 0), 1642 - SND_SOC_DAPM_SUPPLY("DSP2CLK", WM8994_CLOCKING_1, 2, 0, NULL, 0), 1643 - SND_SOC_DAPM_SUPPLY("DSPINTCLK", WM8994_CLOCKING_1, 1, 0, NULL, 0), 1473 + SND_SOC_DAPM_SUPPLY("DSP1CLK", SND_SOC_NOPM, 3, 0, NULL, 0), 1474 + SND_SOC_DAPM_SUPPLY("DSP2CLK", SND_SOC_NOPM, 2, 0, NULL, 0), 1475 + SND_SOC_DAPM_SUPPLY("DSPINTCLK", SND_SOC_NOPM, 1, 0, NULL, 0), 1644 1476 1645 1477 SND_SOC_DAPM_AIF_OUT("AIF1ADC1L", NULL, 1646 - 0, WM8994_POWER_MANAGEMENT_4, 9, 0), 1478 + 0, SND_SOC_NOPM, 9, 0), 1647 1479 SND_SOC_DAPM_AIF_OUT("AIF1ADC1R", NULL, 1648 - 0, WM8994_POWER_MANAGEMENT_4, 8, 0), 1480 + 0, SND_SOC_NOPM, 8, 0), 1649 1481 SND_SOC_DAPM_AIF_IN_E("AIF1DAC1L", NULL, 0, 1650 - WM8994_POWER_MANAGEMENT_5, 9, 0, wm8958_aif_ev, 1482 + SND_SOC_NOPM, 9, 0, wm8958_aif_ev, 1651 1483 SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_POST_PMD), 1652 1484 SND_SOC_DAPM_AIF_IN_E("AIF1DAC1R", NULL, 0, 1653 - WM8994_POWER_MANAGEMENT_5, 8, 0, wm8958_aif_ev, 1485 + SND_SOC_NOPM, 8, 0, wm8958_aif_ev, 1654 1486 SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_POST_PMD), 1655 1487 1656 1488 SND_SOC_DAPM_AIF_OUT("AIF1ADC2L", NULL, 1657 - 0, WM8994_POWER_MANAGEMENT_4, 11, 0), 1489 + 0, SND_SOC_NOPM, 11, 0), 1658 1490 SND_SOC_DAPM_AIF_OUT("AIF1ADC2R", NULL, 1659 - 0, WM8994_POWER_MANAGEMENT_4, 10, 0), 1491 + 0, SND_SOC_NOPM, 10, 0), 1660 1492 SND_SOC_DAPM_AIF_IN_E("AIF1DAC2L", NULL, 0, 1661 - WM8994_POWER_MANAGEMENT_5, 11, 0, wm8958_aif_ev, 1493 + SND_SOC_NOPM, 11, 0, wm8958_aif_ev, 1662 1494 SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_POST_PMD), 1663 1495 SND_SOC_DAPM_AIF_IN_E("AIF1DAC2R", NULL, 0, 1664 - WM8994_POWER_MANAGEMENT_5, 10, 0, wm8958_aif_ev, 1496 + SND_SOC_NOPM, 10, 0, wm8958_aif_ev, 1665 1497 SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_POST_PMD), 1666 1498 1667 1499 SND_SOC_DAPM_MIXER("AIF1ADC1L Mixer", SND_SOC_NOPM, 0, 0, ··· 1688 1520 dac1r_mix, ARRAY_SIZE(dac1r_mix)), 1689 1521 1690 1522 SND_SOC_DAPM_AIF_OUT("AIF2ADCL", NULL, 0, 1691 - 
WM8994_POWER_MANAGEMENT_4, 13, 0), 1523 + SND_SOC_NOPM, 13, 0), 1692 1524 SND_SOC_DAPM_AIF_OUT("AIF2ADCR", NULL, 0, 1693 - WM8994_POWER_MANAGEMENT_4, 12, 0), 1525 + SND_SOC_NOPM, 12, 0), 1694 1526 SND_SOC_DAPM_AIF_IN_E("AIF2DACL", NULL, 0, 1695 - WM8994_POWER_MANAGEMENT_5, 13, 0, wm8958_aif_ev, 1527 + SND_SOC_NOPM, 13, 0, wm8958_aif_ev, 1696 1528 SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_PMD), 1697 1529 SND_SOC_DAPM_AIF_IN_E("AIF2DACR", NULL, 0, 1698 - WM8994_POWER_MANAGEMENT_5, 12, 0, wm8958_aif_ev, 1530 + SND_SOC_NOPM, 12, 0, wm8958_aif_ev, 1699 1531 SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_PMD), 1700 1532 1701 1533 SND_SOC_DAPM_AIF_IN("AIF1DACDAT", NULL, 0, SND_SOC_NOPM, 0, 0),
+3 -4
sound/soc/sh/fsi.c
··· 1001 1001 sg_dma_address(&sg) = buf; 1002 1002 sg_dma_len(&sg) = len; 1003 1003 1004 - desc = chan->device->device_prep_slave_sg(chan, &sg, 1, dir, 1005 - DMA_PREP_INTERRUPT | 1006 - DMA_CTRL_ACK); 1004 + desc = dmaengine_prep_slave_sg(chan, &sg, 1, dir, 1005 + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 1007 1006 if (!desc) { 1008 - dev_err(dai->dev, "device_prep_slave_sg() fail\n"); 1007 + dev_err(dai->dev, "dmaengine_prep_slave_sg() fail\n"); 1009 1008 return; 1010 1009 } 1011 1010
+1
sound/soc/soc-core.c
··· 3113 3113 GFP_KERNEL); 3114 3114 if (card->rtd == NULL) 3115 3115 return -ENOMEM; 3116 + card->num_rtd = 0; 3116 3117 card->rtd_aux = &card->rtd[card->num_links]; 3117 3118 3118 3119 for (i = 0; i < card->num_links; i++)
+2
sound/soc/soc-dapm.c
··· 67 67 [snd_soc_dapm_out_drv] = 10, 68 68 [snd_soc_dapm_hp] = 10, 69 69 [snd_soc_dapm_spk] = 10, 70 + [snd_soc_dapm_line] = 10, 70 71 [snd_soc_dapm_post] = 11, 71 72 }; 72 73 ··· 76 75 [snd_soc_dapm_adc] = 1, 77 76 [snd_soc_dapm_hp] = 2, 78 77 [snd_soc_dapm_spk] = 2, 78 + [snd_soc_dapm_line] = 2, 79 79 [snd_soc_dapm_out_drv] = 2, 80 80 [snd_soc_dapm_pga] = 4, 81 81 [snd_soc_dapm_mixer_named_ctl] = 5,
+2 -2
tools/perf/Makefile
··· 234 234 235 235 export PERL_PATH 236 236 237 - FLEX = $(CROSS_COMPILE)flex 238 - BISON= $(CROSS_COMPILE)bison 237 + FLEX = flex 238 + BISON= bison 239 239 240 240 $(OUTPUT)util/parse-events-flex.c: util/parse-events.l 241 241 $(QUIET_FLEX)$(FLEX) --header-file=$(OUTPUT)util/parse-events-flex.h -t util/parse-events.l > $(OUTPUT)util/parse-events-flex.c
+12 -5
tools/perf/builtin-report.c
··· 374 374 (kernel_map->dso->hit && 375 375 (kernel_kmap->ref_reloc_sym == NULL || 376 376 kernel_kmap->ref_reloc_sym->addr == 0))) { 377 - const struct dso *kdso = kernel_map->dso; 377 + const char *desc = 378 + "As no suitable kallsyms nor vmlinux was found, kernel samples\n" 379 + "can't be resolved."; 380 + 381 + if (kernel_map) { 382 + const struct dso *kdso = kernel_map->dso; 383 + if (!RB_EMPTY_ROOT(&kdso->symbols[MAP__FUNCTION])) { 384 + desc = "If some relocation was applied (e.g. " 385 + "kexec) symbols may be misresolved."; 386 + } 387 + } 378 388 379 389 ui__warning( 380 390 "Kernel address maps (/proc/{kallsyms,modules}) were restricted.\n\n" 381 391 "Check /proc/sys/kernel/kptr_restrict before running 'perf record'.\n\n%s\n\n" 382 392 "Samples in kernel modules can't be resolved as well.\n\n", 383 - RB_EMPTY_ROOT(&kdso->symbols[MAP__FUNCTION]) ? 384 - "As no suitable kallsyms nor vmlinux was found, kernel samples\n" 385 - "can't be resolved." : 386 - "If some relocation was applied (e.g. kexec) symbols may be misresolved."); 393 + desc); 387 394 } 388 395 389 396 if (dump_trace) {
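The builtin-report.c hunk restructures the warning so the code starts from the safe default message and only refines it when `kernel_map` is actually non-NULL; the old form dereferenced `kernel_map->dso` unconditionally even though the surrounding condition allows a NULL map. A compact sketch of that guard-then-refine shape, using stub types in place of perf's `map`/`dso`:

```c
#include <stddef.h>
#include <string.h>
#include <assert.h>

/* Minimal stand-ins for the perf structures involved. */
struct dso_stub { int have_symbols; };
struct map_stub { struct dso_stub *dso; };

/* Pick the warning text: default to the "no kallsyms/vmlinux" message
 * and only upgrade to the relocation hint when there is a kernel map
 * whose DSO actually has symbols.  The NULL check comes first, so the
 * function never dereferences a missing map. */
const char *kernel_warning_desc(const struct map_stub *kernel_map)
{
	const char *desc =
		"As no suitable kallsyms nor vmlinux was found, kernel samples "
		"can't be resolved.";

	if (kernel_map && kernel_map->dso->have_symbols)
		desc = "If some relocation was applied (e.g. kexec) "
		       "symbols may be misresolved.";
	return desc;
}
```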
+30
tools/perf/builtin-test.c
··· 851 851 return test__checkevent_symbolic_name(evlist); 852 852 } 853 853 854 + static int test__checkevent_exclude_host_modifier(struct perf_evlist *evlist) 855 + { 856 + struct perf_evsel *evsel = list_entry(evlist->entries.next, 857 + struct perf_evsel, node); 858 + 859 + TEST_ASSERT_VAL("wrong exclude guest", !evsel->attr.exclude_guest); 860 + TEST_ASSERT_VAL("wrong exclude host", evsel->attr.exclude_host); 861 + 862 + return test__checkevent_symbolic_name(evlist); 863 + } 864 + 865 + static int test__checkevent_exclude_guest_modifier(struct perf_evlist *evlist) 866 + { 867 + struct perf_evsel *evsel = list_entry(evlist->entries.next, 868 + struct perf_evsel, node); 869 + 870 + TEST_ASSERT_VAL("wrong exclude guest", evsel->attr.exclude_guest); 871 + TEST_ASSERT_VAL("wrong exclude host", !evsel->attr.exclude_host); 872 + 873 + return test__checkevent_symbolic_name(evlist); 874 + } 875 + 854 876 static int test__checkevent_symbolic_alias_modifier(struct perf_evlist *evlist) 855 877 { 856 878 struct perf_evsel *evsel = list_entry(evlist->entries.next, ··· 1112 1090 { 1113 1091 .name = "r1,syscalls:sys_enter_open:k,1:1:hp", 1114 1092 .check = test__checkevent_list, 1093 + }, 1094 + { 1095 + .name = "instructions:G", 1096 + .check = test__checkevent_exclude_host_modifier, 1097 + }, 1098 + { 1099 + .name = "instructions:H", 1100 + .check = test__checkevent_exclude_guest_modifier, 1115 1101 }, 1116 1102 }; 1117 1103
+1 -1
tools/perf/util/parse-events.l
··· 54 54 num_hex 0x[a-fA-F0-9]+ 55 55 num_raw_hex [a-fA-F0-9]+ 56 56 name [a-zA-Z_*?][a-zA-Z0-9_*?]* 57 - modifier_event [ukhp]{1,5} 57 + modifier_event [ukhpGH]{1,8} 58 58 modifier_bp [rwx] 59 59 60 60 %%
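The parse-events.l hunk widens the lexer rule for event modifiers from `[ukhp]{1,5}` to `[ukhpGH]{1,8}`, admitting the new guest/host modifiers tested above. The same constraint can be checked outside flex with `strspn()`, which counts the leading run of allowed characters and therefore equals the string length only when every character is valid; this helper is an illustrative stand-in, not code from the perf tree:

```c
#include <string.h>
#include <stdbool.h>
#include <assert.h>

/* Validate a perf event modifier string against the widened rule
 * [ukhpGH]{1,8}: between 1 and 8 characters, all drawn from the
 * allowed set (user, kernel, hypervisor, precise, Guest, Host). */
bool valid_event_modifier(const char *mod)
{
	size_t len = strlen(mod);

	return len >= 1 && len <= 8 && strspn(mod, "ukhpGH") == len;
}
```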
+6 -7
tools/perf/util/symbol.c
··· 977 977 * And always look at the original dso, not at debuginfo packages, that 978 978 * have the PLT data stripped out (shdr_rel_plt.sh_type == SHT_NOBITS). 979 979 */ 980 - static int dso__synthesize_plt_symbols(struct dso *dso, struct map *map, 981 - symbol_filter_t filter) 980 + static int 981 + dso__synthesize_plt_symbols(struct dso *dso, char *name, struct map *map, 982 + symbol_filter_t filter) 982 983 { 983 984 uint32_t nr_rel_entries, idx; 984 985 GElf_Sym sym; ··· 994 993 char sympltname[1024]; 995 994 Elf *elf; 996 995 int nr = 0, symidx, fd, err = 0; 997 - char name[PATH_MAX]; 998 996 999 - snprintf(name, sizeof(name), "%s%s", 1000 - symbol_conf.symfs, dso->long_name); 1001 997 fd = open(name, O_RDONLY); 1002 998 if (fd < 0) 1003 999 goto out; ··· 1701 1703 continue; 1702 1704 1703 1705 if (ret > 0) { 1704 - int nr_plt = dso__synthesize_plt_symbols(dso, map, 1705 - filter); 1706 + int nr_plt; 1707 + 1708 + nr_plt = dso__synthesize_plt_symbols(dso, name, map, filter); 1706 1709 if (nr_plt > 0) 1707 1710 ret += nr_plt; 1708 1711 break;
+9 -3
tools/testing/ktest/ktest.pl
··· 183 183 # do not force reboots on config problems 184 184 my $no_reboot = 1; 185 185 186 + # reboot on success 187 + my $reboot_success = 0; 188 + 186 189 my %option_map = ( 187 190 "MACHINE" => \$machine, 188 191 "SSH_USER" => \$ssh_user, ··· 2195 2192 } 2196 2193 2197 2194 # Are we looking for where it worked, not failed? 2198 - if ($reverse_bisect) { 2195 + if ($reverse_bisect && $ret >= 0) { 2199 2196 $ret = !$ret; 2200 2197 } 2201 2198 ··· 3472 3469 3473 3470 # Do not reboot on failing test options 3474 3471 $no_reboot = 1; 3472 + $reboot_success = 0; 3475 3473 3476 3474 $iteration = $i; 3477 3475 ··· 3558 3554 die "failed to checkout $checkout"; 3559 3555 } 3560 3556 3557 + $no_reboot = 0; 3558 + 3561 3559 # A test may opt to not reboot the box 3562 3560 if ($reboot_on_success) { 3563 - $no_reboot = 0; 3561 + $reboot_success = 1; 3564 3562 } 3565 3563 3566 3564 if ($test_type eq "bisect") { ··· 3606 3600 3607 3601 if ($opt{"POWEROFF_ON_SUCCESS"}) { 3608 3602 halt; 3609 - } elsif ($opt{"REBOOT_ON_SUCCESS"} && !do_not_reboot) { 3603 + } elsif ($opt{"REBOOT_ON_SUCCESS"} && !do_not_reboot && $reboot_success) { 3610 3604 reboot_to_good; 3611 3605 } elsif (defined($switch_to_good)) { 3612 3606 # still need to get to the good kernel