Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'devfreq-for-next' of git://git.infradead.org/users/kmpark/linux-samsung into pm-devfreq

* 'devfreq-for-next' of git://git.infradead.org/users/kmpark/linux-samsung: (765 commits)
PM/Devfreq: Add Exynos4-bus device DVFS driver for Exynos4210/4212/4412.
pci: Fix hotplug of Express Module with pci bridges
i2c-eg20t: correct the driver init order of pch_i2c_probe()
I2C: OMAP: fix FIFO usage for OMAP4
i2c-s3c2410: Fix return code of s3c24xx_i2c_parse_dt_gpio
i2c: i2c-s3c2410: Add a cpu_relax() to busy wait for bus idle
Linux 3.2-rc6
Revert "drm/i915: fix infinite recursion on unbind due to ilk vt-d w/a"
btrfs: lower the dirty balance poll interval
drm/i915/dp: Dither down to 6bpc if it makes the mode fit
drm/i915: enable semaphores on per-device defaults
drm/i915: don't set unpin_work if vblank_get fails
drm/i915: By default, enable RC6 on IVB and SNB when reasonable
iommu: Export intel_iommu_enabled to signal when iommu is in use
drm/i915/sdvo: Include LVDS panels for the IS_DIGITAL check
drm/i915: prevent division by zero when asking for chipset power
drm/i915: add PCH info to i915_capabilities
drm/i915: set the right SDVO transcoder for CPT
drm/i915: no-lvds quirk for ASUS AT5NM10T-I
sched: Fix select_idle_sibling() regression in selecting an idle SMT sibling
...

+11498 -6145
+6 -3
CREDITS
···
  N: Kees Cook
  E: kees@outflux.net
- W: http://outflux.net/
- P: 1024D/17063E6D 9FA3 C49C 23C9 D1BC 2E30 1975 1FFF 4BA9 1706 3E6D
- D: Minor updates to SCSI types, added /proc/pid/maps protection
+ E: kees@ubuntu.com
+ E: keescook@chromium.org
+ W: http://outflux.net/blog/
+ P: 4096R/DC6DC026 A5C3 F68F 229D D60F 723E 6E13 8972 F4DF DC6D C026
+ D: Various security things, bug fixes, and documentation.
  S: (ask for current address)
+ S: Portland, Oregon
  S: USA

  N: Robin Cornelius
-7
Documentation/ABI/testing/sysfs-bus-rbd
···
  $ echo <snap-name> > /sys/bus/rbd/devices/<dev-id>/snap_create

- rollback_snap
-
- Rolls back data to the specified snapshot. This goes over the entire
- list of rados blocks and sends a rollback command to each.
-
- $ echo <snap-name> > /sys/bus/rbd/devices/<dev-id>/snap_rollback
-
  snap_*

  A directory per each snapshot
+1
Documentation/devicetree/bindings/vendor-prefixes.txt
···
  ramtron	Ramtron International
  samsung	Samsung Semiconductor
  schindler	Schindler
+ sil	Silicon Image
  simtek
  sirf	SiRF Technology, Inc.
  stericsson	ST-Ericsson
+2 -2
Documentation/filesystems/btrfs.txt
···
  Userspace tools for creating and manipulating Btrfs file systems are
  available from the git repository at the following location:

- http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-progs-unstable.git
- git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs-unstable.git
+ http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-progs.git
+ git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git

  These include the following tools:
+3 -3
Documentation/kernel-parameters.txt
···
  CPU-intensive style benchmark, and it can vary highly in
  a microbenchmark depending on workload and compiler.

- 1: only for 32-bit processes
- 2: only for 64-bit processes
+ 32: only for 32-bit processes
+ 64: only for 64-bit processes
  on: enable for both 32- and 64-bit processes
  off: disable for both 32- and 64-bit processes

- amd_iommu=	[HW,X86-84]
+ amd_iommu=	[HW,X86-64]
  Pass parameters to the AMD IOMMU driver in the system.
  Possible values are:
  fullflush - enable flushing of IO/TLB entries when
+5 -5
Documentation/networking/ip-sysctl.txt
···
  Default: 0 (off)

  tcp_max_syn_backlog - INTEGER
- Maximal number of remembered connection requests, which are
- still did not receive an acknowledgment from connecting client.
- Default value is 1024 for systems with more than 128Mb of memory,
- and 128 for low memory machines. If server suffers of overload,
- try to increase this number.
+ Maximal number of remembered connection requests, which have not
+ received an acknowledgment from connecting client.
+ The minimal value is 128 for low memory machines, and it will
+ increase in proportion to the memory of machine.
+ If server suffers from overload, try increasing this number.

  tcp_max_tw_buckets - INTEGER
  Maximal number of timewait sockets held by system simultaneously.
+66 -39
Documentation/power/devices.txt
···
  Subsystem-Level Methods
  -----------------------
  The core methods to suspend and resume devices reside in struct dev_pm_ops
- pointed to by the pm member of struct bus_type, struct device_type and
- struct class. They are mostly of interest to the people writing infrastructure
- for buses, like PCI or USB, or device type and device class drivers.
+ pointed to by the ops member of struct dev_pm_domain, or by the pm member of
+ struct bus_type, struct device_type and struct class. They are mostly of
+ interest to the people writing infrastructure for platforms and buses, like PCI
+ or USB, or device type and device class drivers.

  Bus drivers implement these methods as appropriate for the hardware and the
  drivers using it; PCI works differently from USB, and so on. Not many people

···
  /sys/devices/.../power/wakeup files
  -----------------------------------
- All devices in the driver model have two flags to control handling of wakeup
- events (hardware signals that can force the device and/or system out of a low
- power state). These flags are initialized by bus or device driver code using
+ All device objects in the driver model contain fields that control the handling
+ of system wakeup events (hardware signals that can force the system out of a
+ sleep state). These fields are initialized by bus or device driver code using
  device_set_wakeup_capable() and device_set_wakeup_enable(), defined in
  include/linux/pm_wakeup.h.

- The "can_wakeup" flag just records whether the device (and its driver) can
+ The "power.can_wakeup" flag just records whether the device (and its driver) can
  physically support wakeup events. The device_set_wakeup_capable() routine
- affects this flag. The "should_wakeup" flag controls whether the device should
- try to use its wakeup mechanism. device_set_wakeup_enable() affects this flag;
- for the most part drivers should not change its value. The initial value of
- should_wakeup is supposed to be false for the majority of devices; the major
- exceptions are power buttons, keyboards, and Ethernet adapters whose WoL
- (wake-on-LAN) feature has been set up with ethtool. It should also default
- to true for devices that don't generate wakeup requests on their own but merely
- forward wakeup requests from one bus to another (like PCI bridges).
+ affects this flag. The "power.wakeup" field is a pointer to an object of type
+ struct wakeup_source used for controlling whether or not the device should use
+ its system wakeup mechanism and for notifying the PM core of system wakeup
+ events signaled by the device. This object is only present for wakeup-capable
+ devices (i.e. devices whose "can_wakeup" flags are set) and is created (or
+ removed) by device_set_wakeup_capable().

  Whether or not a device is capable of issuing wakeup events is a hardware
  matter, and the kernel is responsible for keeping track of it. By contrast,
  whether or not a wakeup-capable device should issue wakeup events is a policy
  decision, and it is managed by user space through a sysfs attribute: the
- power/wakeup file. User space can write the strings "enabled" or "disabled" to
- set or clear the "should_wakeup" flag, respectively. This file is only present
- for wakeup-capable devices (i.e. devices whose "can_wakeup" flags are set)
- and is created (or removed) by device_set_wakeup_capable(). Reads from the
- file will return the corresponding string.
+ "power/wakeup" file. User space can write the strings "enabled" or "disabled"
+ to it to indicate whether or not, respectively, the device is supposed to signal
+ system wakeup. This file is only present if the "power.wakeup" object exists
+ for the given device and is created (or removed) along with that object, by
+ device_set_wakeup_capable(). Reads from the file will return the corresponding
+ string.

- The device_may_wakeup() routine returns true only if both flags are set.
+ The "power/wakeup" file is supposed to contain the "disabled" string initially
+ for the majority of devices; the major exceptions are power buttons, keyboards,
+ and Ethernet adapters whose WoL (wake-on-LAN) feature has been set up with
+ ethtool. It should also default to "enabled" for devices that don't generate
+ wakeup requests on their own but merely forward wakeup requests from one bus to
+ another (like PCI Express ports).
+
+ The device_may_wakeup() routine returns true only if the "power.wakeup" object
+ exists and the corresponding "power/wakeup" file contains the string "enabled".
  This information is used by subsystems, like the PCI bus type code, to see
  whether or not to enable the devices' wakeup mechanisms. If device wakeup
  mechanisms are enabled or disabled directly by drivers, they also should use
  device_may_wakeup() to decide what to do during a system sleep transition.
- However for runtime power management, wakeup events should be enabled whenever
- the device and driver both support them, regardless of the should_wakeup flag.
+ Device drivers, however, are not supposed to call device_set_wakeup_enable()
+ directly in any case.

+ It ought to be noted that system wakeup is conceptually different from "remote
+ wakeup" used by runtime power management, although it may be supported by the
+ same physical mechanism. Remote wakeup is a feature allowing devices in
+ low-power states to trigger specific interrupts to signal conditions in which
+ they should be put into the full-power state. Those interrupts may or may not
+ be used to signal system wakeup events, depending on the hardware design. On
+ some systems it is impossible to trigger them from system sleep states. In any
+ case, remote wakeup should always be enabled for runtime power management for
+ all devices and drivers that support it.

  /sys/devices/.../power/control files
  ------------------------------------

···
  support all these callbacks and not all drivers use all the callbacks. The
  various phases always run after tasks have been frozen and before they are
  unfrozen. Furthermore, the *_noirq phases run at a time when IRQ handlers have
- been disabled (except for those marked with the IRQ_WAKEUP flag).
+ been disabled (except for those marked with the IRQF_NO_SUSPEND flag).

- All phases use bus, type, or class callbacks (that is, methods defined in
- dev->bus->pm, dev->type->pm, or dev->class->pm). These callbacks are mutually
- exclusive, so if the device type provides a struct dev_pm_ops object pointed to
- by its pm field (i.e. both dev->type and dev->type->pm are defined), the
- callbacks included in that object (i.e. dev->type->pm) will be used. Otherwise,
- if the class provides a struct dev_pm_ops object pointed to by its pm field
- (i.e. both dev->class and dev->class->pm are defined), the PM core will use the
- callbacks from that object (i.e. dev->class->pm). Finally, if the pm fields of
- both the device type and class objects are NULL (or those objects do not exist),
- the callbacks provided by the bus (that is, the callbacks from dev->bus->pm)
- will be used (this allows device types to override callbacks provided by bus
- types or classes if necessary).
+ All phases use PM domain, bus, type, or class callbacks (that is, methods
+ defined in dev->pm_domain->ops, dev->bus->pm, dev->type->pm, or dev->class->pm).
+ These callbacks are regarded by the PM core as mutually exclusive. Moreover,
+ PM domain callbacks always take precedence over bus, type and class callbacks,
+ while type callbacks take precedence over bus and class callbacks, and class
+ callbacks take precedence over bus callbacks. To be precise, the following
+ rules are used to determine which callback to execute in the given phase:
+
+ 1. If dev->pm_domain is present, the PM core will attempt to execute the
+    callback included in dev->pm_domain->ops. If that callback is not
+    present, no action will be carried out for the given device.
+
+ 2. Otherwise, if both dev->type and dev->type->pm are present, the callback
+    included in dev->type->pm will be executed.
+
+ 3. Otherwise, if both dev->class and dev->class->pm are present, the
+    callback included in dev->class->pm will be executed.
+
+ 4. Otherwise, if both dev->bus and dev->bus->pm are present, the callback
+    included in dev->bus->pm will be executed.
+
+ This allows PM domains and device types to override callbacks provided by bus
+ types or device classes if necessary.

  These callbacks may in turn invoke device- or driver-specific methods stored in
  dev->driver->pm, but they don't have to.

···
  After the prepare callback method returns, no new children may be
  registered below the device. The method may also prepare the device or
- driver in some way for the upcoming system power transition (for
- example, by allocating additional memory required for this purpose), but
- it should not put the device into a low-power state.
+ driver in some way for the upcoming system power transition, but it
+ should not put the device into a low-power state.

  2. The suspend methods should quiesce the device to stop it from performing
  I/O. They also may save the device registers and put it into the
+24 -16
Documentation/power/runtime_pm.txt
···
  };

  The ->runtime_suspend(), ->runtime_resume() and ->runtime_idle() callbacks
- are executed by the PM core for either the power domain, or the device type
- (if the device power domain's struct dev_pm_ops does not exist), or the class
- (if the device power domain's and type's struct dev_pm_ops object does not
- exist), or the bus type (if the device power domain's, type's and class'
- struct dev_pm_ops objects do not exist) of the given device, so the priority
- order of callbacks from high to low is that power domain callbacks, device
- type callbacks, class callbacks and bus type callbacks, and the high priority
- one will take precedence over low priority one. The bus type, device type and
- class callbacks are referred to as subsystem-level callbacks in what follows,
- and generally speaking, the power domain callbacks are used for representing
- power domains within a SoC.
+ are executed by the PM core for the device's subsystem that may be either of
+ the following:
+
+ 1. PM domain of the device, if the device's PM domain object, dev->pm_domain,
+    is present.
+
+ 2. Device type of the device, if both dev->type and dev->type->pm are present.
+
+ 3. Device class of the device, if both dev->class and dev->class->pm are
+    present.
+
+ 4. Bus type of the device, if both dev->bus and dev->bus->pm are present.
+
+ The PM core always checks which callback to use in the order given above, so the
+ priority order of callbacks from high to low is: PM domain, device type, class
+ and bus type. Moreover, the high-priority one will always take precedence over
+ a low-priority one. The PM domain, bus type, device type and class callbacks
+ are referred to as subsystem-level callbacks in what follows.

  By default, the callbacks are always invoked in process context with interrupts
  enabled. However, subsystems can use the pm_runtime_irq_safe() helper function
- to tell the PM core that a device's ->runtime_suspend() and ->runtime_resume()
- callbacks should be invoked in atomic context with interrupts disabled.
- This implies that these callback routines must not block or sleep, but it also
- means that the synchronous helper functions listed at the end of Section 4 can
- be used within an interrupt handler or in an atomic context.
+ to tell the PM core that their ->runtime_suspend(), ->runtime_resume() and
+ ->runtime_idle() callbacks may be invoked in atomic context with interrupts
+ disabled for a given device. This implies that the callback routines in
+ question must not block or sleep, but it also means that the synchronous helper
+ functions listed at the end of Section 4 may be used for that device within an
+ interrupt handler or generally in an atomic context.

  The subsystem-level suspend callback is _entirely_ _responsible_ for handling
  the suspend of the device as appropriate, which may, but need not include
+2 -4
Documentation/sound/alsa/soc/machine.txt
···
  The machine DAI configuration glues all the codec and CPU DAIs together. It can
  also be used to set up the DAI system clock and for any machine related DAI
  initialisation e.g. the machine audio map can be connected to the codec audio
- map, unconnected codec pins can be set as such. Please see corgi.c, spitz.c
- for examples.
+ map, unconnected codec pins can be set as such.

  struct snd_soc_dai_link is used to set up each DAI in your machine. e.g.

···
  The machine driver can optionally extend the codec power map and to become an
  audio power map of the audio subsystem. This allows for automatic power up/down
  of speaker/HP amplifiers, etc. Codec pins can be connected to the machines jack
- sockets in the machine init function. See soc/pxa/spitz.c and dapm.txt for
- details.
+ sockets in the machine init function.


  Machine Controls
+2 -2
Documentation/usb/linux-cdc-acm.inf
···
  [SourceDisksFiles]
  [SourceDisksNames]
  [DeviceList]
- %DESCRIPTION%=DriverInstall, USB\VID_0525&PID_A4A7, USB\VID_1D6B&PID_0104&MI_02
+ %DESCRIPTION%=DriverInstall, USB\VID_0525&PID_A4A7, USB\VID_1D6B&PID_0104&MI_02, USB\VID_1D6B&PID_0106&MI_00

  [DeviceList.NTamd64]
- %DESCRIPTION%=DriverInstall, USB\VID_0525&PID_A4A7, USB\VID_1D6B&PID_0104&MI_02
+ %DESCRIPTION%=DriverInstall, USB\VID_0525&PID_A4A7, USB\VID_1D6B&PID_0104&MI_02, USB\VID_1D6B&PID_0106&MI_00


  ;------------------------------------------------------------------------------
+26 -30
MAINTAINERS
···
  L: iommu@lists.linux-foundation.org
  T: git git://git.kernel.org/pub/scm/linux/kernel/git/joro/linux-2.6-iommu.git
  S: Supported
- F: arch/x86/kernel/amd_iommu*.c
- F: arch/x86/include/asm/amd_iommu*.h
+ F: drivers/iommu/amd_iommu*.[ch]
+ F: include/linux/amd-iommu.h

  AMD MICROCODE UPDATE SUPPORT
  M: Andreas Herrmann <andreas.herrmann3@amd.com>

···
  S: Maintained
  T: git git://git.pengutronix.de/git/imx/linux-2.6.git
  F: arch/arm/mach-mx*/
+ F: arch/arm/mach-imx/
  F: arch/arm/plat-mxc/

  ARM/FREESCALE IMX51

···
  S: Maintained
  T: git git://git.linaro.org/people/shawnguo/linux-2.6.git
  F: arch/arm/mach-imx/*imx6*
+
+ ARM/FREESCALE MXS ARM ARCHITECTURE
+ M: Shawn Guo <shawn.guo@linaro.org>
+ L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+ S: Maintained
+ T: git git://git.linaro.org/people/shawnguo/linux-2.6.git
+ F: arch/arm/mach-mxs/

  ARM/GLOMATION GESBC9312SX MACHINE SUPPORT
  M: Lennert Buytenhek <kernel@wantstofly.org>

···
  M: Ben Dooks <ben-linux@fluff.org>
  M: Kukjin Kim <kgene.kim@samsung.com>
  L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+ L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
  W: http://www.fluff.org/ben/linux/
  S: Maintained
  F: arch/arm/plat-samsung/
  F: arch/arm/plat-s3c24xx/
  F: arch/arm/plat-s5p/
+ F: arch/arm/mach-s3c24*/
+ F: arch/arm/mach-s3c64xx/
  F: drivers/*/*s3c2410*
  F: drivers/*/*/*s3c2410*
-
- ARM/S3C2410 ARM ARCHITECTURE
- M: Ben Dooks <ben-linux@fluff.org>
- L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
- W: http://www.fluff.org/ben/linux/
- S: Maintained
- F: arch/arm/mach-s3c2410/
-
- ARM/S3C244x ARM ARCHITECTURE
- M: Ben Dooks <ben-linux@fluff.org>
- L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
- W: http://www.fluff.org/ben/linux/
- S: Maintained
- F: arch/arm/mach-s3c2440/
- F: arch/arm/mach-s3c2443/
-
- ARM/S3C64xx ARM ARCHITECTURE
- M: Ben Dooks <ben-linux@fluff.org>
- L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
- W: http://www.fluff.org/ben/linux/
- S: Maintained
- F: arch/arm/mach-s3c64xx/
+ F: drivers/spi/spi-s3c*
+ F: sound/soc/samsung/*

  ARM/S5P EXYNOS ARM ARCHITECTURES
  M: Kukjin Kim <kgene.kim@samsung.com>

···
  HIGH-RESOLUTION TIMERS, CLOCKEVENTS, DYNTICKS
  M: Thomas Gleixner <tglx@linutronix.de>
+ T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git timers/core
  S: Maintained
  F: Documentation/timers/
  F: kernel/hrtimer.c

···
  IRQ SUBSYSTEM
  M: Thomas Gleixner <tglx@linutronix.de>
  S: Maintained
- T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip.git irq/core
+ T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git irq/core
  F: kernel/irq/

  ISAPNP

···
  LOCKDEP AND LOCKSTAT
  M: Peter Zijlstra <peterz@infradead.org>
  M: Ingo Molnar <mingo@redhat.com>
- T: git git://git.kernel.org/pub/scm/linux/kernel/git/peterz/linux-2.6-lockdep.git
+ T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git core/locking
  S: Maintained
  F: Documentation/lockdep*.txt
  F: Documentation/lockstat.txt

···
  F: mm/

  MEMORY RESOURCE CONTROLLER
+ M: Johannes Weiner <hannes@cmpxchg.org>
+ M: Michal Hocko <mhocko@suse.cz>
  M: Balbir Singh <bsingharora@gmail.com>
- M: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
  M: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
  L: cgroups@vger.kernel.org
  L: linux-mm@kvack.org

···
  M: Paul Mackerras <paulus@samba.org>
  M: Ingo Molnar <mingo@elte.hu>
  M: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
+ T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git perf/core
  S: Supported
  F: kernel/events/*
  F: include/linux/perf_event.h

···
  POSIX CLOCKS and TIMERS
  M: Thomas Gleixner <tglx@linutronix.de>
+ T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git timers/core
  S: Supported
  F: fs/timerfd.c
  F: include/linux/timer*

···
  F: include/media/*7146*

  SAMSUNG AUDIO (ASoC) DRIVERS
- M: Jassi Brar <jassisinghbrar@gmail.com>
  M: Sangbeom Kim <sbkim73@samsung.com>
  L: alsa-devel@alsa-project.org (moderated for non-subscribers)
  S: Supported

···
  TIMEKEEPING, NTP
  M: John Stultz <johnstul@us.ibm.com>
  M: Thomas Gleixner <tglx@linutronix.de>
+ T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git timers/core
  S: Supported
  F: include/linux/clocksource.h
  F: include/linux/time.h

···
  SCHEDULER
  M: Ingo Molnar <mingo@elte.hu>
  M: Peter Zijlstra <peterz@infradead.org>
+ T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core
  S: Maintained
  F: kernel/sched*
  F: include/linux/sched.h

···
  M: Steven Rostedt <rostedt@goodmis.org>
  M: Frederic Weisbecker <fweisbec@gmail.com>
  M: Ingo Molnar <mingo@redhat.com>
- T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip.git perf/core
+ T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git perf/core
  S: Maintained
  F: Documentation/trace/ftrace.txt
  F: arch/*/*/*/ftrace.h

···
  M: Ingo Molnar <mingo@redhat.com>
  M: "H. Peter Anvin" <hpa@zytor.com>
  M: x86@kernel.org
- T: git git://git.kernel.org/pub/scm/linux/kernel/git/x86/linux-2.6-x86.git
+ T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/core
  S: Maintained
  F: Documentation/x86/
  F: arch/x86/
+1 -1
Makefile
···
  VERSION = 3
  PATCHLEVEL = 2
  SUBLEVEL = 0
- EXTRAVERSION = -rc2
+ EXTRAVERSION = -rc6
  NAME = Saber-toothed Squirrel

  # *DOCUMENTATION*
+18 -5
arch/arm/Kconfig
···
  be avoided when possible.

  config PHYS_OFFSET
- 	hex "Physical address of main memory"
+ 	hex "Physical address of main memory" if MMU
  	depends on !ARM_PATCH_PHYS_VIRT && !NEED_MACH_MEMORY_H
+ 	default DRAM_BASE if !MMU
  	help
  	  Please provide the physical address corresponding to the
  	  location of main memory in your system.

···
  capabilities of the processor.

  config PL310_ERRATA_588369
- 	bool "Clean & Invalidate maintenance operations do not invalidate clean lines"
+ 	bool "PL310 errata: Clean & Invalidate maintenance operations do not invalidate clean lines"
  	depends on CACHE_L2X0
  	help
  	  The PL310 L2 cache controller implements three types of Clean &

···
  entries regardless of the ASID.

  config PL310_ERRATA_727915
- 	bool "Background Clean & Invalidate by Way operation can cause data corruption"
+ 	bool "PL310 errata: Background Clean & Invalidate by Way operation can cause data corruption"
  	depends on CACHE_L2X0
  	help
  	  PL310 implements the Clean & Invalidate by Way L2 cache maintenance

···
  operation is received by a CPU before the ICIALLUIS has completed,
  potentially leading to corrupted entries in the cache or TLB.

- config ARM_ERRATA_753970
- 	bool "ARM errata: cache sync operation may be faulty"
+ config PL310_ERRATA_753970
+ 	bool "PL310 errata: cache sync operation may be faulty"
  	depends on CACHE_PL310
  	help
  	  This option enables the workaround for the 753970 PL310 (r3p0) erratum.

···
  system. This workaround adds a DSB instruction before the
  relevant cache maintenance functions and sets a specific bit
  in the diagnostic control register of the SCU.
+
+ config PL310_ERRATA_769419
+ 	bool "PL310 errata: no automatic Store Buffer drain"
+ 	depends on CACHE_L2X0
+ 	help
+ 	  On revisions of the PL310 prior to r3p2, the Store Buffer does
+ 	  not automatically drain. This can cause normal, non-cacheable
+ 	  writes to be retained when the memory system is idle, leading
+ 	  to suboptimal I/O performance for drivers using coherent DMA.
+ 	  This option adds a write barrier to the cpu_idle loop so that,
+ 	  on systems with an outer cache, the store buffer is drained
+ 	  explicitly.

  endmenu
+10 -6
arch/arm/common/gic.c
···
  			sizeof(u32));
  	BUG_ON(!gic->saved_ppi_conf);

- 	cpu_pm_register_notifier(&gic_notifier_block);
+ 	if (gic == &gic_data[0])
+ 		cpu_pm_register_notifier(&gic_notifier_block);
  }
  #else
  static void __init gic_pm_init(struct gic_chip_data *gic)

···
  	 * For primary GICs, skip over SGIs.
  	 * For secondary GICs, skip over PPIs, too.
  	 */
+ 	domain->hwirq_base = 32;
  	if (gic_nr == 0) {
  		gic_cpu_base_addr = cpu_base;
- 		domain->hwirq_base = 16;
- 		if (irq_start > 0)
- 			irq_start = (irq_start & ~31) + 16;
- 	} else
- 		domain->hwirq_base = 32;
+
+ 		if ((irq_start & 31) > 0) {
+ 			domain->hwirq_base = 16;
+ 			if (irq_start != -1)
+ 				irq_start = (irq_start & ~31) + 16;
+ 		}
+ 	}

  	/*
  	 * Find out how many interrupts are supported.
+9 -3
arch/arm/common/pl330.c
···
  	ccr |= (rqc->brst_size << CC_SRCBRSTSIZE_SHFT);
  	ccr |= (rqc->brst_size << CC_DSTBRSTSIZE_SHFT);

- 	ccr |= (rqc->dcctl << CC_SRCCCTRL_SHFT);
- 	ccr |= (rqc->scctl << CC_DSTCCTRL_SHFT);
+ 	ccr |= (rqc->scctl << CC_SRCCCTRL_SHFT);
+ 	ccr |= (rqc->dcctl << CC_DSTCCTRL_SHFT);

  	ccr |= (rqc->swap << CC_SWAP_SHFT);

···
  	return -1;
  }

+ static bool _chan_ns(const struct pl330_info *pi, int i)
+ {
+ 	return pi->pcfg.irq_ns & (1 << i);
+ }
+
  /* Upon success, returns IdentityToken for the
   * allocated channel, NULL otherwise.
   */

···
  	for (i = 0; i < chans; i++) {
  		thrd = &pl330->channels[i];
- 		if (thrd->free) {
+ 		if ((thrd->free) && (!_manager_ns(thrd) ||
+ 					_chan_ns(pi, i))) {
  			thrd->ev = _alloc_event(thrd);
  			if (thrd->ev >= 0) {
  				thrd->free = false;
+15 -51
arch/arm/configs/at91cap9adk_defconfig → arch/arm/configs/at91sam9rl_defconfig
···
  # CONFIG_IOSCHED_DEADLINE is not set
  # CONFIG_IOSCHED_CFQ is not set
  CONFIG_ARCH_AT91=y
- CONFIG_ARCH_AT91CAP9=y
- CONFIG_MACH_AT91CAP9ADK=y
- CONFIG_MTD_AT91_DATAFLASH_CARD=y
+ CONFIG_ARCH_AT91SAM9RL=y
+ CONFIG_MACH_AT91SAM9RLEK=y
  CONFIG_AT91_PROGRAMMABLE_CLOCKS=y
  # CONFIG_ARM_THUMB is not set
- CONFIG_AEABI=y
- CONFIG_LEDS=y
- CONFIG_LEDS_CPU=y
  CONFIG_ZBOOT_ROM_TEXT=0x0
  CONFIG_ZBOOT_ROM_BSS=0x0
- CONFIG_CMDLINE="console=ttyS0,115200 root=/dev/ram0 rw"
+ CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,17105363 root=/dev/ram0 rw"
  CONFIG_FPE_NWFPE=y
  CONFIG_NET=y
- CONFIG_PACKET=y
  CONFIG_UNIX=y
- CONFIG_INET=y
- CONFIG_IP_PNP=y
- CONFIG_IP_PNP_BOOTP=y
- CONFIG_IP_PNP_RARP=y
- # CONFIG_INET_XFRM_MODE_TRANSPORT is not set
- # CONFIG_INET_XFRM_MODE_TUNNEL is not set
- # CONFIG_INET_XFRM_MODE_BEET is not set
- # CONFIG_INET_LRO is not set
- # CONFIG_INET_DIAG is not set
- # CONFIG_IPV6 is not set
  CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
  CONFIG_MTD=y
- CONFIG_MTD_PARTITIONS=y
  CONFIG_MTD_CMDLINE_PARTS=y
  CONFIG_MTD_CHAR=y
  CONFIG_MTD_BLOCK=y
- CONFIG_MTD_CFI=y
- CONFIG_MTD_JEDECPROBE=y
- CONFIG_MTD_CFI_AMDSTD=y
- CONFIG_MTD_PHYSMAP=y
  CONFIG_MTD_DATAFLASH=y
  CONFIG_MTD_NAND=y
  CONFIG_MTD_NAND_ATMEL=y
  CONFIG_BLK_DEV_LOOP=y
  CONFIG_BLK_DEV_RAM=y
- CONFIG_BLK_DEV_RAM_SIZE=8192
- CONFIG_ATMEL_SSC=y
+ CONFIG_BLK_DEV_RAM_COUNT=4
+ CONFIG_BLK_DEV_RAM_SIZE=24576
  CONFIG_SCSI=y
  CONFIG_BLK_DEV_SD=y
  CONFIG_SCSI_MULTI_LUN=y
- CONFIG_NETDEVICES=y
- CONFIG_NET_ETHERNET=y
- CONFIG_MII=y
- CONFIG_MACB=y
- # CONFIG_NETDEV_1000 is not set
- # CONFIG_NETDEV_10000 is not set
  # CONFIG_INPUT_MOUSEDEV_PSAUX is not set
+ CONFIG_INPUT_MOUSEDEV_SCREEN_X=320
+ CONFIG_INPUT_MOUSEDEV_SCREEN_Y=240
  CONFIG_INPUT_EVDEV=y
  # CONFIG_INPUT_KEYBOARD is not set
  # CONFIG_INPUT_MOUSE is not set
  CONFIG_INPUT_TOUCHSCREEN=y
- CONFIG_TOUCHSCREEN_ADS7846=y
+ CONFIG_TOUCHSCREEN_ATMEL_TSADCC=y
  # CONFIG_SERIO is not set
  CONFIG_SERIAL_ATMEL=y
  CONFIG_SERIAL_ATMEL_CONSOLE=y
- CONFIG_HW_RANDOM=y
+ # CONFIG_HW_RANDOM is not set
  CONFIG_I2C=y
  CONFIG_I2C_CHARDEV=y
+ CONFIG_I2C_GPIO=y
  CONFIG_SPI=y
  CONFIG_SPI_ATMEL=y
  # CONFIG_HWMON is not set
  CONFIG_WATCHDOG=y
  CONFIG_WATCHDOG_NOWAYOUT=y
+ CONFIG_AT91SAM9X_WATCHDOG=y
  CONFIG_FB=y
  CONFIG_FB_ATMEL=y
- # CONFIG_VGA_CONSOLE is not set
- CONFIG_LOGO=y
- # CONFIG_LOGO_LINUX_MONO is not set
- # CONFIG_LOGO_LINUX_CLUT224 is not set
- # CONFIG_USB_HID is not set
- CONFIG_USB=y
- CONFIG_USB_DEVICEFS=y
- CONFIG_USB_MON=y
- CONFIG_USB_OHCI_HCD=y
- CONFIG_USB_STORAGE=y
- CONFIG_USB_GADGET=y
- CONFIG_USB_ETH=m
- CONFIG_USB_FILE_STORAGE=m
  CONFIG_MMC=y
  CONFIG_MMC_AT91=m
  CONFIG_RTC_CLASS=y
  CONFIG_RTC_DRV_AT91SAM9=y
  CONFIG_EXT2_FS=y
- CONFIG_INOTIFY=y
+ CONFIG_MSDOS_FS=y
  CONFIG_VFAT_FS=y
  CONFIG_TMPFS=y
- CONFIG_JFFS2_FS=y
  CONFIG_CRAMFS=y
- CONFIG_NFS_FS=y
- CONFIG_ROOT_NFS=y
  CONFIG_NLS_CODEPAGE_437=y
  CONFIG_NLS_CODEPAGE_850=y
  CONFIG_NLS_ISO8859_1=y
- CONFIG_DEBUG_FS=y
+ CONFIG_NLS_ISO8859_15=y
+ CONFIG_NLS_UTF8=y
  CONFIG_DEBUG_KERNEL=y
  CONFIG_DEBUG_INFO=y
  CONFIG_DEBUG_USER=y
+ CONFIG_DEBUG_LL=y
+14 -33
arch/arm/configs/at91rm9200_defconfig
··· 5 5 CONFIG_IKCONFIG=y 6 6 CONFIG_IKCONFIG_PROC=y 7 7 CONFIG_LOG_BUF_SHIFT=14 8 - CONFIG_SYSFS_DEPRECATED_V2=y 9 8 CONFIG_BLK_DEV_INITRD=y 10 9 CONFIG_MODULES=y 11 10 CONFIG_MODULE_FORCE_LOAD=y ··· 55 56 CONFIG_IP_PNP_DHCP=y 56 57 CONFIG_IP_PNP_BOOTP=y 57 58 CONFIG_NET_IPIP=m 58 - CONFIG_NET_IPGRE=m 59 59 CONFIG_INET_AH=m 60 60 CONFIG_INET_ESP=m 61 61 CONFIG_INET_IPCOMP=m ··· 73 75 CONFIG_BRIDGE=m 74 76 CONFIG_VLAN_8021Q=m 75 77 CONFIG_BT=m 76 - CONFIG_BT_L2CAP=m 77 - CONFIG_BT_SCO=m 78 - CONFIG_BT_RFCOMM=m 79 - CONFIG_BT_RFCOMM_TTY=y 80 - CONFIG_BT_BNEP=m 81 - CONFIG_BT_BNEP_MC_FILTER=y 82 - CONFIG_BT_BNEP_PROTO_FILTER=y 83 - CONFIG_BT_HIDP=m 84 78 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 85 79 CONFIG_MTD=y 86 - CONFIG_MTD_CONCAT=y 87 - CONFIG_MTD_PARTITIONS=y 88 80 CONFIG_MTD_CMDLINE_PARTS=y 89 81 CONFIG_MTD_AFS_PARTS=y 90 82 CONFIG_MTD_CHAR=y ··· 96 108 CONFIG_BLK_DEV_NBD=y 97 109 CONFIG_BLK_DEV_RAM=y 98 110 CONFIG_BLK_DEV_RAM_SIZE=8192 99 - CONFIG_ATMEL_TCLIB=y 100 - CONFIG_EEPROM_LEGACY=m 101 111 CONFIG_SCSI=y 102 112 CONFIG_BLK_DEV_SD=y 103 113 CONFIG_BLK_DEV_SR=m ··· 105 119 # CONFIG_SCSI_LOWLEVEL is not set 106 120 CONFIG_NETDEVICES=y 107 121 CONFIG_TUN=m 122 + CONFIG_ARM_AT91_ETHER=y 108 123 CONFIG_PHYLIB=y 109 124 CONFIG_DAVICOM_PHY=y 110 125 CONFIG_SMSC_PHY=y 111 126 CONFIG_MICREL_PHY=y 112 - CONFIG_NET_ETHERNET=y 113 - CONFIG_ARM_AT91_ETHER=y 114 - # CONFIG_NETDEV_1000 is not set 115 - # CONFIG_NETDEV_10000 is not set 127 + CONFIG_PPP=y 128 + CONFIG_PPP_BSDCOMP=y 129 + CONFIG_PPP_DEFLATE=y 130 + CONFIG_PPP_FILTER=y 131 + CONFIG_PPP_MPPE=m 132 + CONFIG_PPP_MULTILINK=y 133 + CONFIG_PPPOE=m 134 + CONFIG_PPP_ASYNC=y 135 + CONFIG_SLIP=m 136 + CONFIG_SLIP_COMPRESSED=y 137 + CONFIG_SLIP_SMART=y 138 + CONFIG_SLIP_MODE_SLIP6=y 116 139 CONFIG_USB_CATC=m 117 140 CONFIG_USB_KAWETH=m 118 141 CONFIG_USB_PEGASUS=m ··· 134 139 CONFIG_USB_ALI_M5632=y 135 140 CONFIG_USB_AN2720=y 136 141 CONFIG_USB_EPSON2888=y 137 - CONFIG_PPP=y 138 - CONFIG_PPP_MULTILINK=y 139 - CONFIG_PPP_FILTER=y 140 - CONFIG_PPP_ASYNC=y 141 - CONFIG_PPP_DEFLATE=y 142 - CONFIG_PPP_BSDCOMP=y 143 - CONFIG_PPP_MPPE=m 144 - CONFIG_PPPOE=m 145 - CONFIG_SLIP=m 146 - CONFIG_SLIP_COMPRESSED=y 147 - CONFIG_SLIP_SMART=y 148 - CONFIG_SLIP_MODE_SLIP6=y 149 142 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set 150 143 CONFIG_INPUT_MOUSEDEV_SCREEN_X=640 151 144 CONFIG_INPUT_MOUSEDEV_SCREEN_Y=480 ··· 141 158 CONFIG_KEYBOARD_GPIO=y 142 159 # CONFIG_INPUT_MOUSE is not set 143 160 CONFIG_INPUT_TOUCHSCREEN=y 161 + CONFIG_LEGACY_PTY_COUNT=32 144 162 CONFIG_SERIAL_ATMEL=y 145 163 CONFIG_SERIAL_ATMEL_CONSOLE=y 146 - CONFIG_LEGACY_PTY_COUNT=32 147 164 CONFIG_HW_RANDOM=y 148 165 CONFIG_I2C=y 149 166 CONFIG_I2C_CHARDEV=y ··· 273 290 CONFIG_NFS_V4=y 274 291 CONFIG_ROOT_NFS=y 275 292 CONFIG_NFSD=y 276 - CONFIG_SMB_FS=m 277 293 CONFIG_CIFS=m 278 294 CONFIG_PARTITION_ADVANCED=y 279 295 CONFIG_MAC_PARTITION=y ··· 317 335 CONFIG_MAGIC_SYSRQ=y 318 336 CONFIG_DEBUG_FS=y 319 337 CONFIG_DEBUG_KERNEL=y 320 - # CONFIG_RCU_CPU_STALL_DETECTOR is not set 321 338 # CONFIG_FTRACE is not set 322 339 CONFIG_CRYPTO_PCBC=y 323 340 CONFIG_CRYPTO_SHA1=y
+61 -20
arch/arm/configs/at91sam9260ek_defconfig arch/arm/configs/at91sam9g20_defconfig
··· 11 11 # CONFIG_IOSCHED_DEADLINE is not set 12 12 # CONFIG_IOSCHED_CFQ is not set 13 13 CONFIG_ARCH_AT91=y 14 - CONFIG_ARCH_AT91SAM9260=y 15 - CONFIG_MACH_AT91SAM9260EK=y 14 + CONFIG_ARCH_AT91SAM9G20=y 15 + CONFIG_MACH_AT91SAM9G20EK=y 16 + CONFIG_MACH_AT91SAM9G20EK_2MMC=y 17 + CONFIG_MACH_CPU9G20=y 18 + CONFIG_MACH_ACMENETUSFOXG20=y 19 + CONFIG_MACH_PORTUXG20=y 20 + CONFIG_MACH_STAMP9G20=y 21 + CONFIG_MACH_PCONTROL_G20=y 22 + CONFIG_MACH_GSIA18S=y 23 + CONFIG_MACH_USB_A9G20=y 24 + CONFIG_MACH_SNAPPER_9260=y 25 + CONFIG_MACH_AT91SAM_DT=y 16 26 CONFIG_AT91_PROGRAMMABLE_CLOCKS=y 17 27 # CONFIG_ARM_THUMB is not set 28 + CONFIG_AEABI=y 29 + CONFIG_LEDS=y 30 + CONFIG_LEDS_CPU=y 18 31 CONFIG_ZBOOT_ROM_TEXT=0x0 19 32 CONFIG_ZBOOT_ROM_BSS=0x0 33 + CONFIG_ARM_APPENDED_DTB=y 34 + CONFIG_ARM_ATAG_DTB_COMPAT=y 20 35 CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,3145728 root=/dev/ram0 rw" 21 36 CONFIG_FPE_NWFPE=y 22 37 CONFIG_NET=y ··· 46 31 # CONFIG_INET_LRO is not set 47 32 # CONFIG_IPV6 is not set 48 33 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 34 + CONFIG_MTD=y 35 + CONFIG_MTD_CMDLINE_PARTS=y 36 + CONFIG_MTD_CHAR=y 37 + CONFIG_MTD_BLOCK=y 38 + CONFIG_MTD_DATAFLASH=y 39 + CONFIG_MTD_NAND=y 40 + CONFIG_MTD_NAND_ATMEL=y 41 + CONFIG_BLK_DEV_LOOP=y 49 42 CONFIG_BLK_DEV_RAM=y 50 43 CONFIG_BLK_DEV_RAM_SIZE=8192 51 - CONFIG_ATMEL_SSC=y 52 44 CONFIG_SCSI=y 53 45 CONFIG_BLK_DEV_SD=y 54 46 CONFIG_SCSI_MULTI_LUN=y 47 + # CONFIG_SCSI_LOWLEVEL is not set 55 48 CONFIG_NETDEVICES=y 56 - CONFIG_NET_ETHERNET=y 57 49 CONFIG_MII=y 58 50 CONFIG_MACB=y 59 51 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set 60 - # CONFIG_INPUT_KEYBOARD is not set 52 + CONFIG_INPUT_MOUSEDEV_SCREEN_X=320 53 + CONFIG_INPUT_MOUSEDEV_SCREEN_Y=240 54 + CONFIG_INPUT_EVDEV=y 55 + # CONFIG_KEYBOARD_ATKBD is not set 56 + CONFIG_KEYBOARD_GPIO=y 61 57 # CONFIG_INPUT_MOUSE is not set 62 - # CONFIG_SERIO is not set 58 + CONFIG_LEGACY_PTY_COUNT=16 63 59 CONFIG_SERIAL_ATMEL=y 64 60 CONFIG_SERIAL_ATMEL_CONSOLE=y 65 - # CONFIG_HW_RANDOM is not set 66 - CONFIG_I2C=y 67 - CONFIG_I2C_CHARDEV=y 68 - CONFIG_I2C_GPIO=y 61 + CONFIG_HW_RANDOM=y 62 + CONFIG_SPI=y 63 + CONFIG_SPI_ATMEL=y 64 + CONFIG_SPI_SPIDEV=y 69 65 # CONFIG_HWMON is not set 70 - CONFIG_WATCHDOG=y 71 - CONFIG_WATCHDOG_NOWAYOUT=y 72 - CONFIG_AT91SAM9X_WATCHDOG=y 73 - # CONFIG_VGA_CONSOLE is not set 74 - # CONFIG_USB_HID is not set 66 + CONFIG_SOUND=y 67 + CONFIG_SND=y 68 + CONFIG_SND_SEQUENCER=y 69 + CONFIG_SND_MIXER_OSS=y 70 + CONFIG_SND_PCM_OSS=y 71 + CONFIG_SND_SEQUENCER_OSS=y 72 + # CONFIG_SND_VERBOSE_PROCFS is not set 75 73 CONFIG_USB=y 76 74 CONFIG_USB_DEVICEFS=y 75 + # CONFIG_USB_DEVICE_CLASS is not set 77 76 CONFIG_USB_MON=y 78 77 CONFIG_USB_OHCI_HCD=y 79 78 CONFIG_USB_STORAGE=y 80 - CONFIG_USB_STORAGE_DEBUG=y 81 79 CONFIG_USB_GADGET=y 82 80 CONFIG_USB_ZERO=m 83 81 CONFIG_USB_GADGETFS=m 84 82 CONFIG_USB_FILE_STORAGE=m 85 83 CONFIG_USB_G_SERIAL=m 84 + CONFIG_MMC=y 85 + CONFIG_MMC_AT91=m 86 + CONFIG_NEW_LEDS=y 87 + CONFIG_LEDS_CLASS=y 88 + CONFIG_LEDS_GPIO=y 89 + CONFIG_LEDS_TRIGGERS=y 90 + CONFIG_LEDS_TRIGGER_TIMER=y 91 + CONFIG_LEDS_TRIGGER_HEARTBEAT=y 86 92 CONFIG_RTC_CLASS=y 87 93 CONFIG_RTC_DRV_AT91SAM9=y 88 94 CONFIG_EXT2_FS=y 89 - CONFIG_INOTIFY=y 95 + CONFIG_MSDOS_FS=y 90 96 CONFIG_VFAT_FS=y 91 97 CONFIG_TMPFS=y 98 + CONFIG_JFFS2_FS=y 99 + CONFIG_JFFS2_SUMMARY=y 92 100 CONFIG_CRAMFS=y 101 + CONFIG_NFS_FS=y 102 + CONFIG_NFS_V3=y 103 + CONFIG_ROOT_NFS=y 93 104 CONFIG_NLS_CODEPAGE_437=y 94 105 CONFIG_NLS_CODEPAGE_850=y 95 106 CONFIG_NLS_ISO8859_1=y 96 - CONFIG_DEBUG_KERNEL=y 97 - CONFIG_DEBUG_USER=y 98 - CONFIG_DEBUG_LL=y 107 + CONFIG_NLS_ISO8859_15=y 108 + CONFIG_NLS_UTF8=y 109 + # CONFIG_ENABLE_WARN_DEPRECATED is not set
+29 -44
arch/arm/configs/at91sam9g20ek_defconfig arch/arm/configs/at91cap9_defconfig
··· 11 11 # CONFIG_IOSCHED_DEADLINE is not set 12 12 # CONFIG_IOSCHED_CFQ is not set 13 13 CONFIG_ARCH_AT91=y 14 - CONFIG_ARCH_AT91SAM9G20=y 15 - CONFIG_MACH_AT91SAM9G20EK=y 16 - CONFIG_MACH_AT91SAM9G20EK_2MMC=y 14 + CONFIG_ARCH_AT91CAP9=y 15 + CONFIG_MACH_AT91CAP9ADK=y 16 + CONFIG_MTD_AT91_DATAFLASH_CARD=y 17 17 CONFIG_AT91_PROGRAMMABLE_CLOCKS=y 18 18 # CONFIG_ARM_THUMB is not set 19 19 CONFIG_AEABI=y ··· 21 21 CONFIG_LEDS_CPU=y 22 22 CONFIG_ZBOOT_ROM_TEXT=0x0 23 23 CONFIG_ZBOOT_ROM_BSS=0x0 24 - CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,3145728 root=/dev/ram0 rw" 24 + CONFIG_CMDLINE="console=ttyS0,115200 root=/dev/ram0 rw" 25 25 CONFIG_FPE_NWFPE=y 26 - CONFIG_PM=y 27 26 CONFIG_NET=y 28 27 CONFIG_PACKET=y 29 28 CONFIG_UNIX=y 30 29 CONFIG_INET=y 31 30 CONFIG_IP_PNP=y 32 31 CONFIG_IP_PNP_BOOTP=y 32 + CONFIG_IP_PNP_RARP=y 33 33 # CONFIG_INET_XFRM_MODE_TRANSPORT is not set 34 34 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 35 35 # CONFIG_INET_XFRM_MODE_BEET is not set 36 36 # CONFIG_INET_LRO is not set 37 + # CONFIG_INET_DIAG is not set 37 38 # CONFIG_IPV6 is not set 38 39 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 39 40 CONFIG_MTD=y 40 - CONFIG_MTD_CONCAT=y 41 - CONFIG_MTD_PARTITIONS=y 42 41 CONFIG_MTD_CMDLINE_PARTS=y 43 42 CONFIG_MTD_CHAR=y 44 43 CONFIG_MTD_BLOCK=y 44 + CONFIG_MTD_CFI=y 45 + CONFIG_MTD_JEDECPROBE=y 46 + CONFIG_MTD_CFI_AMDSTD=y 47 + CONFIG_MTD_PHYSMAP=y 45 48 CONFIG_MTD_DATAFLASH=y 46 49 CONFIG_MTD_NAND=y 47 50 CONFIG_MTD_NAND_ATMEL=y 48 51 CONFIG_BLK_DEV_LOOP=y 49 52 CONFIG_BLK_DEV_RAM=y 50 53 CONFIG_BLK_DEV_RAM_SIZE=8192 51 - CONFIG_ATMEL_SSC=y 52 54 CONFIG_SCSI=y 53 55 CONFIG_BLK_DEV_SD=y 54 56 CONFIG_SCSI_MULTI_LUN=y 55 - # CONFIG_SCSI_LOWLEVEL is not set 56 57 CONFIG_NETDEVICES=y 57 - CONFIG_NET_ETHERNET=y 58 58 CONFIG_MII=y 59 59 CONFIG_MACB=y 60 - # CONFIG_NETDEV_1000 is not set 61 - # CONFIG_NETDEV_10000 is not set 62 60 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set 63 - CONFIG_INPUT_MOUSEDEV_SCREEN_X=320 64 - CONFIG_INPUT_MOUSEDEV_SCREEN_Y=240 65 61 CONFIG_INPUT_EVDEV=y 66 - # CONFIG_KEYBOARD_ATKBD is not set 67 - CONFIG_KEYBOARD_GPIO=y 62 + # CONFIG_INPUT_KEYBOARD is not set 68 63 # CONFIG_INPUT_MOUSE is not set 64 + CONFIG_INPUT_TOUCHSCREEN=y 65 + CONFIG_TOUCHSCREEN_ADS7846=y 66 + # CONFIG_SERIO is not set 69 67 CONFIG_SERIAL_ATMEL=y 70 68 CONFIG_SERIAL_ATMEL_CONSOLE=y 71 - CONFIG_LEGACY_PTY_COUNT=16 72 69 CONFIG_HW_RANDOM=y 70 + CONFIG_I2C=y 71 + CONFIG_I2C_CHARDEV=y 73 72 CONFIG_SPI=y 74 73 CONFIG_SPI_ATMEL=y 75 - CONFIG_SPI_SPIDEV=y 76 74 # CONFIG_HWMON is not set 77 - # CONFIG_VGA_CONSOLE is not set 78 - CONFIG_SOUND=y 79 - CONFIG_SND=y 80 - CONFIG_SND_SEQUENCER=y 81 - CONFIG_SND_MIXER_OSS=y 82 - CONFIG_SND_PCM_OSS=y 83 - CONFIG_SND_SEQUENCER_OSS=y 84 - # CONFIG_SND_VERBOSE_PROCFS is not set 85 - CONFIG_SND_AT73C213=y 75 + CONFIG_WATCHDOG=y 76 + CONFIG_WATCHDOG_NOWAYOUT=y 77 + CONFIG_FB=y 78 + CONFIG_FB_ATMEL=y 79 + CONFIG_LOGO=y 80 + # CONFIG_LOGO_LINUX_MONO is not set 81 + # CONFIG_LOGO_LINUX_CLUT224 is not set 82 + # CONFIG_USB_HID is not set 86 83 CONFIG_USB=y 87 84 CONFIG_USB_DEVICEFS=y 88 - # CONFIG_USB_DEVICE_CLASS is not set 89 85 CONFIG_USB_MON=y 90 86 CONFIG_USB_OHCI_HCD=y 91 87 CONFIG_USB_STORAGE=y 92 88 CONFIG_USB_GADGET=y 93 - CONFIG_USB_ZERO=m 94 - CONFIG_USB_GADGETFS=m 89 + CONFIG_USB_ETH=m 95 90 CONFIG_USB_FILE_STORAGE=m 96 - CONFIG_USB_G_SERIAL=m 97 91 CONFIG_MMC=y 98 92 CONFIG_MMC_AT91=m 99 - CONFIG_NEW_LEDS=y 100 - CONFIG_LEDS_CLASS=y 101 - CONFIG_LEDS_GPIO=y 102 - CONFIG_LEDS_TRIGGERS=y 103 - CONFIG_LEDS_TRIGGER_TIMER=y 104 - CONFIG_LEDS_TRIGGER_HEARTBEAT=y 105 93 CONFIG_RTC_CLASS=y 106 94 CONFIG_RTC_DRV_AT91SAM9=y 107 95 CONFIG_EXT2_FS=y 108 - CONFIG_INOTIFY=y 109 - CONFIG_MSDOS_FS=y 110 96 CONFIG_VFAT_FS=y 111 97 CONFIG_TMPFS=y 112 98 CONFIG_JFFS2_FS=y 113 - CONFIG_JFFS2_SUMMARY=y 114 99 CONFIG_CRAMFS=y 115 100 CONFIG_NFS_FS=y 116 - CONFIG_NFS_V3=y 117 101 CONFIG_ROOT_NFS=y 118 102 CONFIG_NLS_CODEPAGE_437=y 119 103 CONFIG_NLS_CODEPAGE_850=y 
120 104 CONFIG_NLS_ISO8859_1=y 121 - CONFIG_NLS_ISO8859_15=y 122 - CONFIG_NLS_UTF8=y 123 - # CONFIG_ENABLE_WARN_DEPRECATED is not set 105 + CONFIG_DEBUG_FS=y 106 + CONFIG_DEBUG_KERNEL=y 107 + CONFIG_DEBUG_INFO=y 108 + CONFIG_DEBUG_USER=y
+2 -5
arch/arm/configs/at91sam9g45_defconfig
··· 18 18 CONFIG_ARCH_AT91=y 19 19 CONFIG_ARCH_AT91SAM9G45=y 20 20 CONFIG_MACH_AT91SAM9M10G45EK=y 21 + CONFIG_MACH_AT91SAM_DT=y 21 22 CONFIG_AT91_PROGRAMMABLE_CLOCKS=y 22 23 CONFIG_AT91_SLOW_CLOCK=y 23 24 CONFIG_AEABI=y ··· 74 73 # CONFIG_SCSI_LOWLEVEL is not set 75 74 CONFIG_NETDEVICES=y 76 75 CONFIG_MII=y 77 - CONFIG_DAVICOM_PHY=y 78 - CONFIG_NET_ETHERNET=y 79 76 CONFIG_MACB=y 80 - # CONFIG_NETDEV_1000 is not set 81 - # CONFIG_NETDEV_10000 is not set 77 + CONFIG_DAVICOM_PHY=y 82 78 CONFIG_LIBERTAS_THINFIRM=m 83 79 CONFIG_LIBERTAS_THINFIRM_USB=m 84 80 CONFIG_AT76C50X_USB=m ··· 129 131 CONFIG_SPI=y 130 132 CONFIG_SPI_ATMEL=y 131 133 # CONFIG_HWMON is not set 132 - # CONFIG_MFD_SUPPORT is not set 133 134 CONFIG_FB=y 134 135 CONFIG_FB_ATMEL=y 135 136 CONFIG_FB_UDL=m
+40 -33
arch/arm/configs/at91sam9rlek_defconfig arch/arm/configs/at91sam9260_defconfig
··· 11 11 # CONFIG_IOSCHED_DEADLINE is not set 12 12 # CONFIG_IOSCHED_CFQ is not set 13 13 CONFIG_ARCH_AT91=y 14 - CONFIG_ARCH_AT91SAM9RL=y 15 - CONFIG_MACH_AT91SAM9RLEK=y 14 + CONFIG_ARCH_AT91SAM9260=y 15 + CONFIG_ARCH_AT91SAM9260_SAM9XE=y 16 + CONFIG_MACH_AT91SAM9260EK=y 17 + CONFIG_MACH_CAM60=y 18 + CONFIG_MACH_SAM9_L9260=y 19 + CONFIG_MACH_AFEB9260=y 20 + CONFIG_MACH_USB_A9260=y 21 + CONFIG_MACH_QIL_A9260=y 22 + CONFIG_MACH_CPU9260=y 23 + CONFIG_MACH_FLEXIBITY=y 24 + CONFIG_MACH_SNAPPER_9260=y 25 + CONFIG_MACH_AT91SAM_DT=y 16 26 CONFIG_AT91_PROGRAMMABLE_CLOCKS=y 17 27 # CONFIG_ARM_THUMB is not set 18 28 CONFIG_ZBOOT_ROM_TEXT=0x0 19 29 CONFIG_ZBOOT_ROM_BSS=0x0 20 - CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,17105363 root=/dev/ram0 rw" 30 + CONFIG_ARM_APPENDED_DTB=y 31 + CONFIG_ARM_ATAG_DTB_COMPAT=y 32 + CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,3145728 root=/dev/ram0 rw" 21 33 CONFIG_FPE_NWFPE=y 22 34 CONFIG_NET=y 35 + CONFIG_PACKET=y 23 36 CONFIG_UNIX=y 37 + CONFIG_INET=y 38 + CONFIG_IP_PNP=y 39 + CONFIG_IP_PNP_BOOTP=y 40 + # CONFIG_INET_XFRM_MODE_TRANSPORT is not set 41 + # CONFIG_INET_XFRM_MODE_TUNNEL is not set 42 + # CONFIG_INET_XFRM_MODE_BEET is not set 43 + # CONFIG_INET_LRO is not set 44 + # CONFIG_IPV6 is not set 24 45 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 25 - CONFIG_MTD=y 26 - CONFIG_MTD_CONCAT=y 27 - CONFIG_MTD_PARTITIONS=y 28 - CONFIG_MTD_CMDLINE_PARTS=y 29 - CONFIG_MTD_CHAR=y 30 - CONFIG_MTD_BLOCK=y 31 - CONFIG_MTD_DATAFLASH=y 32 - CONFIG_MTD_NAND=y 33 - CONFIG_MTD_NAND_ATMEL=y 34 - CONFIG_BLK_DEV_LOOP=y 35 46 CONFIG_BLK_DEV_RAM=y 36 - CONFIG_BLK_DEV_RAM_COUNT=4 37 - CONFIG_BLK_DEV_RAM_SIZE=24576 38 - CONFIG_ATMEL_SSC=y 47 + CONFIG_BLK_DEV_RAM_SIZE=8192 39 48 CONFIG_SCSI=y 40 49 CONFIG_BLK_DEV_SD=y 41 50 CONFIG_SCSI_MULTI_LUN=y 51 + CONFIG_NETDEVICES=y 52 + CONFIG_MII=y 53 + CONFIG_MACB=y 42 54 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set 43 - CONFIG_INPUT_MOUSEDEV_SCREEN_X=320 44 - CONFIG_INPUT_MOUSEDEV_SCREEN_Y=240 45 - CONFIG_INPUT_EVDEV=y 46 55 # CONFIG_INPUT_KEYBOARD is not set 47 56 # CONFIG_INPUT_MOUSE is not set 48 - CONFIG_INPUT_TOUCHSCREEN=y 49 - CONFIG_TOUCHSCREEN_ATMEL_TSADCC=y 50 57 # CONFIG_SERIO is not set 51 58 CONFIG_SERIAL_ATMEL=y 52 59 CONFIG_SERIAL_ATMEL_CONSOLE=y ··· 61 54 CONFIG_I2C=y 62 55 CONFIG_I2C_CHARDEV=y 63 56 CONFIG_I2C_GPIO=y 64 - CONFIG_SPI=y 65 - CONFIG_SPI_ATMEL=y 66 57 # CONFIG_HWMON is not set 67 58 CONFIG_WATCHDOG=y 68 59 CONFIG_WATCHDOG_NOWAYOUT=y 69 60 CONFIG_AT91SAM9X_WATCHDOG=y 70 - CONFIG_FB=y 71 - CONFIG_FB_ATMEL=y 72 - # CONFIG_VGA_CONSOLE is not set 73 - CONFIG_MMC=y 74 - CONFIG_MMC_AT91=m 61 + # CONFIG_USB_HID is not set 62 + CONFIG_USB=y 63 + CONFIG_USB_DEVICEFS=y 64 + CONFIG_USB_MON=y 65 + CONFIG_USB_OHCI_HCD=y 66 + CONFIG_USB_STORAGE=y 67 + CONFIG_USB_STORAGE_DEBUG=y 68 + CONFIG_USB_GADGET=y 69 + CONFIG_USB_ZERO=m 70 + CONFIG_USB_GADGETFS=m 71 + CONFIG_USB_FILE_STORAGE=m 72 + CONFIG_USB_G_SERIAL=m 75 73 CONFIG_RTC_CLASS=y 76 74 CONFIG_RTC_DRV_AT91SAM9=y 77 75 CONFIG_EXT2_FS=y 78 - CONFIG_INOTIFY=y 79 - CONFIG_MSDOS_FS=y 80 76 CONFIG_VFAT_FS=y 81 77 CONFIG_TMPFS=y 82 78 CONFIG_CRAMFS=y 83 79 CONFIG_NLS_CODEPAGE_437=y 84 80 CONFIG_NLS_CODEPAGE_850=y 85 81 CONFIG_NLS_ISO8859_1=y 86 - CONFIG_NLS_ISO8859_15=y 87 - CONFIG_NLS_UTF8=y 88 82 CONFIG_DEBUG_KERNEL=y 89 - CONFIG_DEBUG_INFO=y 90 83 CONFIG_DEBUG_USER=y 91 84 CONFIG_DEBUG_LL=y
+1 -1
arch/arm/configs/ezx_defconfig
··· 287 287 # CONFIG_USB_DEVICE_CLASS is not set 288 288 CONFIG_USB_OHCI_HCD=y 289 289 CONFIG_USB_GADGET=y 290 - CONFIG_USB_GADGET_PXA27X=y 290 + CONFIG_USB_PXA27X=y 291 291 CONFIG_USB_ETH=m 292 292 # CONFIG_USB_ETH_RNDIS is not set 293 293 CONFIG_MMC=y
+1 -1
arch/arm/configs/imote2_defconfig
··· 263 263 # CONFIG_USB_DEVICE_CLASS is not set 264 264 CONFIG_USB_OHCI_HCD=y 265 265 CONFIG_USB_GADGET=y 266 - CONFIG_USB_GADGET_PXA27X=y 266 + CONFIG_USB_PXA27X=y 267 267 CONFIG_USB_ETH=m 268 268 # CONFIG_USB_ETH_RNDIS is not set 269 269 CONFIG_MMC=y
+1 -1
arch/arm/configs/magician_defconfig
··· 132 132 CONFIG_USB_OHCI_HCD=y 133 133 CONFIG_USB_GADGET=y 134 134 CONFIG_USB_GADGET_VBUS_DRAW=500 135 - CONFIG_USB_GADGET_PXA27X=y 135 + CONFIG_USB_PXA27X=y 136 136 CONFIG_USB_ETH=m 137 137 # CONFIG_USB_ETH_RNDIS is not set 138 138 CONFIG_USB_GADGETFS=m
-6
arch/arm/configs/omap1_defconfig
··· 48 48 CONFIG_MACH_NOKIA770=y 49 49 CONFIG_MACH_AMS_DELTA=y 50 50 CONFIG_MACH_OMAP_GENERIC=y 51 - CONFIG_OMAP_CLOCKS_SET_BY_BOOTLOADER=y 52 - CONFIG_OMAP_ARM_216MHZ=y 53 - CONFIG_OMAP_ARM_195MHZ=y 54 - CONFIG_OMAP_ARM_192MHZ=y 55 51 CONFIG_OMAP_ARM_182MHZ=y 56 - CONFIG_OMAP_ARM_168MHZ=y 57 - # CONFIG_OMAP_ARM_60MHZ is not set 58 52 # CONFIG_ARM_THUMB is not set 59 53 CONFIG_PCCARD=y 60 54 CONFIG_OMAP_CF=y
+6 -7
arch/arm/configs/u300_defconfig
··· 14 14 CONFIG_ARCH_U300=y 15 15 CONFIG_MACH_U300=y 16 16 CONFIG_MACH_U300_BS335=y 17 - CONFIG_MACH_U300_DUAL_RAM=y 18 - CONFIG_U300_DEBUG=y 19 17 CONFIG_MACH_U300_SPIDUMMY=y 20 18 CONFIG_NO_HZ=y 21 19 CONFIG_HIGH_RES_TIMERS=y ··· 24 26 CONFIG_CMDLINE="root=/dev/ram0 rw rootfstype=rootfs console=ttyAMA0,115200n8 lpj=515072" 25 27 CONFIG_CPU_IDLE=y 26 28 CONFIG_FPE_NWFPE=y 27 - CONFIG_PM=y 28 29 # CONFIG_SUSPEND is not set 29 30 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 30 31 # CONFIG_PREVENT_FIRMWARE_BUILD is not set 31 - # CONFIG_MISC_DEVICES is not set 32 + CONFIG_MTD=y 33 + CONFIG_MTD_CMDLINE_PARTS=y 34 + CONFIG_MTD_NAND=y 35 + CONFIG_MTD_NAND_FSMC=y 32 36 # CONFIG_INPUT_MOUSEDEV is not set 33 37 CONFIG_INPUT_EVDEV=y 34 38 # CONFIG_KEYBOARD_ATKBD is not set 35 39 # CONFIG_INPUT_MOUSE is not set 36 40 # CONFIG_SERIO is not set 41 + CONFIG_LEGACY_PTY_COUNT=16 37 42 CONFIG_SERIAL_AMBA_PL011=y 38 43 CONFIG_SERIAL_AMBA_PL011_CONSOLE=y 39 - CONFIG_LEGACY_PTY_COUNT=16 40 44 # CONFIG_HW_RANDOM is not set 41 45 CONFIG_I2C=y 42 46 # CONFIG_HWMON is not set ··· 51 51 # CONFIG_HID_SUPPORT is not set 52 52 # CONFIG_USB_SUPPORT is not set 53 53 CONFIG_MMC=y 54 + CONFIG_MMC_CLKGATE=y 54 55 CONFIG_MMC_ARMMMCI=y 55 56 CONFIG_RTC_CLASS=y 56 57 # CONFIG_RTC_HCTOSYS is not set ··· 66 65 CONFIG_NLS_ISO8859_1=y 67 66 CONFIG_PRINTK_TIME=y 68 67 CONFIG_DEBUG_FS=y 69 - CONFIG_DEBUG_KERNEL=y 70 68 # CONFIG_SCHED_DEBUG is not set 71 69 CONFIG_TIMER_STATS=y 72 70 # CONFIG_DEBUG_PREEMPT is not set 73 71 CONFIG_DEBUG_INFO=y 74 - # CONFIG_RCU_CPU_STALL_DETECTOR is not set 75 72 # CONFIG_CRC32 is not set
+5 -9
arch/arm/configs/u8500_defconfig
··· 10 10 CONFIG_ARCH_U8500=y 11 11 CONFIG_UX500_SOC_DB5500=y 12 12 CONFIG_UX500_SOC_DB8500=y 13 - CONFIG_MACH_U8500=y 13 + CONFIG_MACH_HREFV60=y 14 14 CONFIG_MACH_SNOWBALL=y 15 15 CONFIG_MACH_U5500=y 16 16 CONFIG_NO_HZ=y ··· 24 24 CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y 25 25 CONFIG_VFP=y 26 26 CONFIG_NEON=y 27 + CONFIG_PM_RUNTIME=y 27 28 CONFIG_NET=y 28 29 CONFIG_PACKET=y 29 30 CONFIG_UNIX=y ··· 42 41 CONFIG_AB8500_PWM=y 43 42 CONFIG_SENSORS_BH1780=y 44 43 CONFIG_NETDEVICES=y 45 - CONFIG_SMSC_PHY=y 46 - CONFIG_NET_ETHERNET=y 47 44 CONFIG_SMSC911X=y 48 - # CONFIG_NETDEV_1000 is not set 49 - # CONFIG_NETDEV_10000 is not set 45 + CONFIG_SMSC_PHY=y 50 46 # CONFIG_WLAN is not set 51 47 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set 52 48 CONFIG_INPUT_EVDEV=y ··· 70 72 CONFIG_SPI_PL022=y 71 73 CONFIG_GPIO_STMPE=y 72 74 CONFIG_GPIO_TC3589X=y 73 - # CONFIG_HWMON is not set 74 75 CONFIG_MFD_STMPE=y 75 76 CONFIG_MFD_TC3589X=y 77 + CONFIG_AB5500_CORE=y 76 78 CONFIG_AB8500_CORE=y 77 79 CONFIG_REGULATOR_AB8500=y 78 80 # CONFIG_HID_SUPPORT is not set 79 - CONFIG_USB_MUSB_HDRC=y 80 - CONFIG_USB_GADGET_MUSB_HDRC=y 81 - CONFIG_MUSB_PIO_ONLY=y 82 81 CONFIG_USB_GADGET=y 83 82 CONFIG_AB8500_USB=y 84 83 CONFIG_MMC=y ··· 92 97 CONFIG_STE_DMA40=y 93 98 CONFIG_STAGING=y 94 99 CONFIG_TOUCHSCREEN_SYNAPTICS_I2C_RMI4=y 100 + CONFIG_HSEM_U8500=y 95 101 CONFIG_EXT2_FS=y 96 102 CONFIG_EXT2_FS_XATTR=y 97 103 CONFIG_EXT2_FS_POSIX_ACL=y
+1 -1
arch/arm/configs/zeus_defconfig
··· 140 140 CONFIG_USB_SERIAL_GENERIC=y 141 141 CONFIG_USB_SERIAL_MCT_U232=m 142 142 CONFIG_USB_GADGET=m 143 - CONFIG_USB_GADGET_PXA27X=y 143 + CONFIG_USB_PXA27X=y 144 144 CONFIG_USB_ETH=m 145 145 CONFIG_USB_GADGETFS=m 146 146 CONFIG_USB_FILE_STORAGE=m
-10
arch/arm/include/asm/pmu.h
··· 55 55 extern void 56 56 release_pmu(enum arm_pmu_type type); 57 57 58 - /** 59 - * init_pmu() - Initialise the PMU. 60 - * 61 - * Initialise the system ready for PMU enabling. This should typically set the 62 - * IRQ affinity and nothing else. The users (oprofile/perf events etc) will do 63 - * the actual hardware initialisation. 64 - */ 65 - extern int 66 - init_pmu(enum arm_pmu_type type); 67 - 68 58 #else /* CONFIG_CPU_HAS_PMU */ 69 59 70 60 #include <linux/err.h>
+1 -1
arch/arm/include/asm/topology.h
··· 25 25 26 26 void init_cpu_topology(void); 27 27 void store_cpu_topology(unsigned int cpuid); 28 - const struct cpumask *cpu_coregroup_mask(unsigned int cpu); 28 + const struct cpumask *cpu_coregroup_mask(int cpu); 29 29 30 30 #else 31 31
+4 -12
arch/arm/include/asm/unwind.h
··· 30 30 }; 31 31 32 32 struct unwind_idx { 33 - unsigned long addr; 33 + unsigned long addr_offset; 34 34 unsigned long insn; 35 35 }; 36 36 37 37 struct unwind_table { 38 38 struct list_head list; 39 - struct unwind_idx *start; 40 - struct unwind_idx *stop; 39 + const struct unwind_idx *start; 40 + const struct unwind_idx *origin; 41 + const struct unwind_idx *stop; 41 42 unsigned long begin_addr; 42 43 unsigned long end_addr; 43 44 }; ··· 49 48 unsigned long text_size); 50 49 extern void unwind_table_del(struct unwind_table *tab); 51 50 extern void unwind_backtrace(struct pt_regs *regs, struct task_struct *tsk); 52 - 53 - #ifdef CONFIG_ARM_UNWIND 54 - extern int __init unwind_init(void); 55 - #else 56 - static inline int __init unwind_init(void) 57 - { 58 - return 0; 59 - } 60 - #endif 61 51 62 52 #endif /* !__ASSEMBLY__ */ 63 53
+1 -1
arch/arm/kernel/entry-armv.S
··· 497 497 .popsection 498 498 .pushsection __ex_table,"a" 499 499 .long 1b, 4b 500 - #if __LINUX_ARM_ARCH__ >= 7 500 + #if CONFIG_ARM_THUMB && __LINUX_ARM_ARCH__ >= 6 && CONFIG_CPU_V7 501 501 .long 2b, 4b 502 502 .long 3b, 4b 503 503 #endif
+3 -1
arch/arm/kernel/kprobes-arm.c
··· 519 519 static const union decode_item arm_cccc_0001_____1001_table[] = { 520 520 /* Synchronization primitives */ 521 521 522 + #if __LINUX_ARM_ARCH__ < 6 523 + /* Deprecated on ARMv6 and may be UNDEFINED on v7 */ 522 524 /* SMP/SWPB cccc 0001 0x00 xxxx xxxx xxxx 1001 xxxx */ 523 525 DECODE_EMULATEX (0x0fb000f0, 0x01000090, emulate_rd12rn16rm0_rwflags_nopc, 524 526 REGS(NOPC, NOPC, 0, 0, NOPC)), 525 - 527 + #endif 526 528 /* LDREX/STREX{,D,B,H} cccc 0001 1xxx xxxx xxxx xxxx 1001 xxxx */ 527 529 /* And unallocated instructions... */ 528 530 DECODE_END
+17 -10
arch/arm/kernel/kprobes-test-arm.c
··· 427 427 428 428 TEST_GROUP("Synchronization primitives") 429 429 430 - /* 431 - * Use hard coded constants for SWP instructions to avoid warnings 432 - * about deprecated instructions. 433 - */ 434 - TEST_RP( ".word 0xe108e097 @ swp lr, r",7,VAL2,", [r",8,0,"]") 435 - TEST_R( ".word 0x610d0091 @ swpvs r0, r",1,VAL1,", [sp]") 436 - TEST_RP( ".word 0xe10cd09e @ swp sp, r",14,VAL2,", [r",12,13*4,"]") 430 + #if __LINUX_ARM_ARCH__ < 6 431 + TEST_RP("swp lr, r",7,VAL2,", [r",8,0,"]") 432 + TEST_R( "swpvs r0, r",1,VAL1,", [sp]") 433 + TEST_RP("swp sp, r",14,VAL2,", [r",12,13*4,"]") 434 + #else 435 + TEST_UNSUPPORTED(".word 0xe108e097 @ swp lr, r7, [r8]") 436 + TEST_UNSUPPORTED(".word 0x610d0091 @ swpvs r0, r1, [sp]") 437 + TEST_UNSUPPORTED(".word 0xe10cd09e @ swp sp, r14 [r12]") 438 + #endif 437 439 TEST_UNSUPPORTED(".word 0xe102f091 @ swp pc, r1, [r2]") 438 440 TEST_UNSUPPORTED(".word 0xe102009f @ swp r0, pc, [r2]") 439 441 TEST_UNSUPPORTED(".word 0xe10f0091 @ swp r0, r1, [pc]") 440 - TEST_RP( ".word 0xe148e097 @ swpb lr, r",7,VAL2,", [r",8,0,"]") 441 - TEST_R( ".word 0x614d0091 @ swpvsb r0, r",1,VAL1,", [sp]") 442 + #if __LINUX_ARM_ARCH__ < 6 443 + TEST_RP("swpb lr, r",7,VAL2,", [r",8,0,"]") 444 + TEST_R( "swpvsb r0, r",1,VAL1,", [sp]") 445 + #else 446 + TEST_UNSUPPORTED(".word 0xe148e097 @ swpb lr, r7, [r8]") 447 + TEST_UNSUPPORTED(".word 0x614d0091 @ swpvsb r0, r1, [sp]") 448 + #endif 442 449 TEST_UNSUPPORTED(".word 0xe142f091 @ swpb pc, r1, [r2]") 443 450 444 451 TEST_UNSUPPORTED(".word 0xe1100090") /* Unallocated space */ ··· 557 550 TEST_RPR( "strccd r",8, VAL2,", [r",13,0, ", r",12,48,"]") 558 551 TEST_RPR( "strd r",4, VAL1,", [r",2, 24,", r",3, 48,"]!") 559 552 TEST_RPR( "strcsd r",12,VAL2,", [r",11,48,", -r",10,24,"]!") 560 - TEST_RPR( "strd r",2, VAL1,", [r",3, 24,"], r",4,48,"") 553 + TEST_RPR( "strd r",2, VAL1,", [r",5, 24,"], r",4,48,"") 561 554 TEST_RPR( "strd r",10,VAL2,", [r",9, 48,"], -r",7,24,"") 562 555 TEST_UNSUPPORTED(".word 0xe1afc0fa @ strd r12, [pc, r10]!") 563 556
+8 -8
arch/arm/kernel/kprobes-test-thumb.c
··· 222 222 DONT_TEST_IN_ITBLOCK( 223 223 TEST_BF_R( "cbnz r",0,0, ", 2f") 224 224 TEST_BF_R( "cbz r",2,-1,", 2f") 225 - TEST_BF_RX( "cbnz r",4,1, ", 2f",0x20) 226 - TEST_BF_RX( "cbz r",7,0, ", 2f",0x40) 225 + TEST_BF_RX( "cbnz r",4,1, ", 2f", SPACE_0x20) 226 + TEST_BF_RX( "cbz r",7,0, ", 2f", SPACE_0x40) 227 227 ) 228 228 TEST_R("sxth r0, r",7, HH1,"") 229 229 TEST_R("sxth r7, r",0, HH2,"") ··· 246 246 TESTCASE_START(code) \ 247 247 TEST_ARG_PTR(13, offset) \ 248 248 TEST_ARG_END("") \ 249 - TEST_BRANCH_F(code,0) \ 249 + TEST_BRANCH_F(code) \ 250 250 TESTCASE_END 251 251 252 252 TEST("push {r0}") ··· 319 319 320 320 TEST_BF( "b 2f") 321 321 TEST_BB( "b 2b") 322 - TEST_BF_X("b 2f", 0x400) 323 - TEST_BB_X("b 2b", 0x400) 322 + TEST_BF_X("b 2f", SPACE_0x400) 323 + TEST_BB_X("b 2b", SPACE_0x400) 324 324 325 325 TEST_GROUP("Testing instructions in IT blocks") 326 326 ··· 746 746 TEST_BB("bne.w 2b") 747 747 TEST_BF("bgt.w 2f") 748 748 TEST_BB("blt.w 2b") 749 - TEST_BF_X("bpl.w 2f",0x1000) 749 + TEST_BF_X("bpl.w 2f", SPACE_0x1000) 750 750 ) 751 751 752 752 TEST_UNSUPPORTED("msr cpsr, r0") ··· 786 786 787 787 TEST_BF( "b.w 2f") 788 788 TEST_BB( "b.w 2b") 789 - TEST_BF_X("b.w 2f", 0x1000) 789 + TEST_BF_X("b.w 2f", SPACE_0x1000) 790 790 791 791 TEST_BF( "bl.w 2f") 792 792 TEST_BB( "bl.w 2b") 793 - TEST_BB_X("bl.w 2b", 0x1000) 793 + TEST_BB_X("bl.w 2b", SPACE_0x1000) 794 794 795 795 TEST_X( "blx __dummy_arm_subroutine", 796 796 ".arm \n\t"
+71 -31
arch/arm/kernel/kprobes-test.h
··· 149 149 "1: "instruction" \n\t" \ 150 150 " nop \n\t" 151 151 152 - #define TEST_BRANCH_F(instruction, xtra_dist) \ 152 + #define TEST_BRANCH_F(instruction) \ 153 153 TEST_INSTRUCTION(instruction) \ 154 - ".if "#xtra_dist" \n\t" \ 155 - " b 99f \n\t" \ 156 - ".space "#xtra_dist" \n\t" \ 157 - ".endif \n\t" \ 158 154 " b 99f \n\t" \ 159 155 "2: nop \n\t" 160 156 161 - #define TEST_BRANCH_B(instruction, xtra_dist) \ 157 + #define TEST_BRANCH_B(instruction) \ 162 158 " b 50f \n\t" \ 163 159 " b 99f \n\t" \ 164 160 "2: nop \n\t" \ 165 161 " b 99f \n\t" \ 166 - ".if "#xtra_dist" \n\t" \ 167 - ".space "#xtra_dist" \n\t" \ 168 - ".endif \n\t" \ 162 + TEST_INSTRUCTION(instruction) 163 + 164 + #define TEST_BRANCH_FX(instruction, codex) \ 165 + TEST_INSTRUCTION(instruction) \ 166 + " b 99f \n\t" \ 167 + codex" \n\t" \ 168 + " b 99f \n\t" \ 169 + "2: nop \n\t" 170 + 171 + #define TEST_BRANCH_BX(instruction, codex) \ 172 + " b 50f \n\t" \ 173 + " b 99f \n\t" \ 174 + "2: nop \n\t" \ 175 + " b 99f \n\t" \ 176 + codex" \n\t" \ 169 177 TEST_INSTRUCTION(instruction) 170 178 171 179 #define TESTCASE_END \ ··· 309 301 TESTCASE_START(code1 #reg1 code2) \ 310 302 TEST_ARG_PTR(reg1, val1) \ 311 303 TEST_ARG_END("") \ 312 - TEST_BRANCH_F(code1 #reg1 code2, 0) \ 304 + TEST_BRANCH_F(code1 #reg1 code2) \ 313 305 TESTCASE_END 314 306 315 - #define TEST_BF_X(code, xtra_dist) \ 307 + #define TEST_BF(code) \ 316 308 TESTCASE_START(code) \ 317 309 TEST_ARG_END("") \ 318 - TEST_BRANCH_F(code, xtra_dist) \ 310 + TEST_BRANCH_F(code) \ 319 311 TESTCASE_END 320 312 321 - #define TEST_BB_X(code, xtra_dist) \ 313 + #define TEST_BB(code) \ 322 314 TESTCASE_START(code) \ 323 315 TEST_ARG_END("") \ 324 - TEST_BRANCH_B(code, xtra_dist) \ 316 + TEST_BRANCH_B(code) \ 325 317 TESTCASE_END 326 318 327 - #define TEST_BF_RX(code1, reg, val, code2, xtra_dist) \ 328 - TESTCASE_START(code1 #reg code2) \ 329 - TEST_ARG_REG(reg, val) \ 330 - TEST_ARG_END("") \ 331 - TEST_BRANCH_F(code1 #reg code2, xtra_dist) \ 
319 + #define TEST_BF_R(code1, reg, val, code2) \ 320 + TESTCASE_START(code1 #reg code2) \ 321 + TEST_ARG_REG(reg, val) \ 322 + TEST_ARG_END("") \ 323 + TEST_BRANCH_F(code1 #reg code2) \ 332 324 TESTCASE_END 333 325 334 - #define TEST_BB_RX(code1, reg, val, code2, xtra_dist) \ 335 - TESTCASE_START(code1 #reg code2) \ 336 - TEST_ARG_REG(reg, val) \ 337 - TEST_ARG_END("") \ 338 - TEST_BRANCH_B(code1 #reg code2, xtra_dist) \ 326 + #define TEST_BB_R(code1, reg, val, code2) \ 327 + TESTCASE_START(code1 #reg code2) \ 328 + TEST_ARG_REG(reg, val) \ 329 + TEST_ARG_END("") \ 330 + TEST_BRANCH_B(code1 #reg code2) \ 339 331 TESTCASE_END 340 - 341 - #define TEST_BF(code) TEST_BF_X(code, 0) 342 - #define TEST_BB(code) TEST_BB_X(code, 0) 343 - 344 - #define TEST_BF_R(code1, reg, val, code2) TEST_BF_RX(code1, reg, val, code2, 0) 345 - #define TEST_BB_R(code1, reg, val, code2) TEST_BB_RX(code1, reg, val, code2, 0) 346 332 347 333 #define TEST_BF_RR(code1, reg1, val1, code2, reg2, val2, code3) \ 348 334 TESTCASE_START(code1 #reg1 code2 #reg2 code3) \ 349 335 TEST_ARG_REG(reg1, val1) \ 350 336 TEST_ARG_REG(reg2, val2) \ 351 337 TEST_ARG_END("") \ 352 - TEST_BRANCH_F(code1 #reg1 code2 #reg2 code3, 0) \ 338 + TEST_BRANCH_F(code1 #reg1 code2 #reg2 code3) \ 339 + TESTCASE_END 340 + 341 + #define TEST_BF_X(code, codex) \ 342 + TESTCASE_START(code) \ 343 + TEST_ARG_END("") \ 344 + TEST_BRANCH_FX(code, codex) \ 345 + TESTCASE_END 346 + 347 + #define TEST_BB_X(code, codex) \ 348 + TESTCASE_START(code) \ 349 + TEST_ARG_END("") \ 350 + TEST_BRANCH_BX(code, codex) \ 351 + TESTCASE_END 352 + 353 + #define TEST_BF_RX(code1, reg, val, code2, codex) \ 354 + TESTCASE_START(code1 #reg code2) \ 355 + TEST_ARG_REG(reg, val) \ 356 + TEST_ARG_END("") \ 357 + TEST_BRANCH_FX(code1 #reg code2, codex) \ 353 358 TESTCASE_END 354 359 355 360 #define TEST_X(code, codex) \ ··· 391 370 " b 99f \n\t" \ 392 371 " "codex" \n\t" \ 393 372 TESTCASE_END 373 + 374 + 375 + /* 376 + * Macros for defining space directives 
spread over multiple lines. 377 + * These are required so the compiler guesses better the length of inline asm 378 + * code and will spill the literal pool early enough to avoid generating PC 379 + * relative loads with out of range offsets. 380 + */ 381 + #define TWICE(x) x x 382 + #define SPACE_0x8 TWICE(".space 4\n\t") 383 + #define SPACE_0x10 TWICE(SPACE_0x8) 384 + #define SPACE_0x20 TWICE(SPACE_0x10) 385 + #define SPACE_0x40 TWICE(SPACE_0x20) 386 + #define SPACE_0x80 TWICE(SPACE_0x40) 387 + #define SPACE_0x100 TWICE(SPACE_0x80) 388 + #define SPACE_0x200 TWICE(SPACE_0x100) 389 + #define SPACE_0x400 TWICE(SPACE_0x200) 390 + #define SPACE_0x800 TWICE(SPACE_0x400) 391 + #define SPACE_0x1000 TWICE(SPACE_0x800) 394 392 395 393 396 394 /* Various values used in test cases... */
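The `TWICE` ladder added above relies only on repeated macro pasting, so the shape of the expansion can be checked off-target. A minimal host-side sketch; the `count_spaces` helper is illustrative test scaffolding, not kernel code:

```c
#include <assert.h>
#include <string.h>

/* The doubling ladder from the diff: each SPACE_* macro repeats the
 * previous one twice, so SPACE_0x40 expands to sixteen ".space 4\n\t"
 * directives (0x40 bytes at 4 bytes each) without spelling them out.
 * Adjacent string literals concatenate, as in the kernel's inline asm. */
#define TWICE(x) x x
#define SPACE_0x8	TWICE(".space 4\n\t")
#define SPACE_0x10	TWICE(SPACE_0x8)
#define SPACE_0x20	TWICE(SPACE_0x10)
#define SPACE_0x40	TWICE(SPACE_0x20)

/* Count ".space" directives in an expanded literal (testing helper). */
static int count_spaces(const char *s)
{
	int n = 0;
	while ((s = strstr(s, ".space")) != NULL) {
		n++;
		s += 6;
	}
	return n;
}
```

Each rung doubles the emitted size, which is why the ladder reaches 0x1000 in only nine steps.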
+16 -4
arch/arm/kernel/perf_event.c
··· 343 343 { 344 344 struct perf_event *sibling, *leader = event->group_leader; 345 345 struct pmu_hw_events fake_pmu; 346 + DECLARE_BITMAP(fake_used_mask, ARMPMU_MAX_HWEVENTS); 346 347 347 - memset(&fake_pmu, 0, sizeof(fake_pmu)); 348 + /* 349 + * Initialise the fake PMU. We only need to populate the 350 + * used_mask for the purposes of validation. 351 + */ 352 + memset(fake_used_mask, 0, sizeof(fake_used_mask)); 353 + fake_pmu.used_mask = fake_used_mask; 348 354 349 355 if (!validate_event(&fake_pmu, leader)) 350 - return -ENOSPC; 356 + return -EINVAL; 351 357 352 358 list_for_each_entry(sibling, &leader->sibling_list, group_entry) { 353 359 if (!validate_event(&fake_pmu, sibling)) 354 - return -ENOSPC; 360 + return -EINVAL; 355 361 } 356 362 357 363 if (!validate_event(&fake_pmu, event)) 358 - return -ENOSPC; 364 + return -EINVAL; 359 365 360 366 return 0; 361 367 } ··· 401 395 irq_handler_t handle_irq; 402 396 int i, err, irq, irqs; 403 397 struct platform_device *pmu_device = armpmu->plat_device; 398 + 399 + if (!pmu_device) 400 + return -ENODEV; 404 401 405 402 err = reserve_pmu(armpmu->type); 406 403 if (err) { ··· 640 631 641 632 static int __devinit armpmu_device_probe(struct platform_device *pdev) 642 633 { 634 + if (!cpu_pmu) 635 + return -ENODEV; 636 + 643 637 cpu_pmu->plat_device = pdev; 644 638 return 0; 645 639 }
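The perf_event.c change above validates a whole event group against a scratch `used_mask` bitmap before touching real hardware state. A hedged sketch of that idea; the counter count, struct layout, and first-free claim policy here are illustrative, not the ARM PMU's:

```c
#include <assert.h>
#include <string.h>

/* A throwaway "fake" PMU whose used_mask is filled in event by event;
 * if any event in the group cannot claim a free counter, the group
 * does not fit on the hardware and scheduling it is rejected early. */
#define MAX_HWEVENTS 4

struct fake_pmu {
	unsigned long used_mask;	/* one bit per hardware counter */
};

/* Claim the first free counter; return 0 on success, -1 if full. */
static int validate_event(struct fake_pmu *pmu)
{
	int i;

	for (i = 0; i < MAX_HWEVENTS; i++) {
		if (!(pmu->used_mask & (1UL << i))) {
			pmu->used_mask |= 1UL << i;
			return 0;
		}
	}
	return -1;
}
```

Because the mask is local to the validation pass, a rejected group leaves the real PMU's allocation state untouched.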
+1
arch/arm/kernel/pmu.c
··· 33 33 { 34 34 clear_bit_unlock(type, pmu_lock); 35 35 } 36 + EXPORT_SYMBOL_GPL(release_pmu);
+3
arch/arm/kernel/process.c
··· 192 192 #endif 193 193 194 194 local_irq_disable(); 195 + #ifdef CONFIG_PL310_ERRATA_769419 196 + wmb(); 197 + #endif 195 198 if (hlt_counter) { 196 199 local_irq_enable(); 197 200 cpu_relax();
+6 -8
arch/arm/kernel/setup.c
··· 895 895 { 896 896 struct machine_desc *mdesc; 897 897 898 - unwind_init(); 899 - 900 898 setup_processor(); 901 899 mdesc = setup_machine_fdt(__atags_pointer); 902 900 if (!mdesc) ··· 902 904 machine_desc = mdesc; 903 905 machine_name = mdesc->name; 904 906 907 + #ifdef CONFIG_ZONE_DMA 908 + if (mdesc->dma_zone_size) { 909 + extern unsigned long arm_dma_zone_size; 910 + arm_dma_zone_size = mdesc->dma_zone_size; 911 + } 912 + #endif 905 913 if (mdesc->soft_reboot) 906 914 reboot_setup("s"); 907 915 ··· 938 934 939 935 tcm_init(); 940 936 941 - #ifdef CONFIG_ZONE_DMA 942 - if (mdesc->dma_zone_size) { 943 - extern unsigned long arm_dma_zone_size; 944 - arm_dma_zone_size = mdesc->dma_zone_size; 945 - } 946 - #endif 947 937 #ifdef CONFIG_MULTI_IRQ_HANDLER 948 938 handle_arch_irq = mdesc->handle_irq; 949 939 #endif
+1 -1
arch/arm/kernel/topology.c
··· 43 43 44 44 struct cputopo_arm cpu_topology[NR_CPUS]; 45 45 46 - const struct cpumask *cpu_coregroup_mask(unsigned int cpu) 46 + const struct cpumask *cpu_coregroup_mask(int cpu) 47 47 { 48 48 return &cpu_topology[cpu].core_sibling; 49 49 }
+86 -47
arch/arm/kernel/unwind.c
··· 67 67 68 68 struct unwind_ctrl_block { 69 69 unsigned long vrs[16]; /* virtual register set */ 70 - unsigned long *insn; /* pointer to the current instructions word */ 70 + const unsigned long *insn; /* pointer to the current instructions word */ 71 71 int entries; /* number of entries left to interpret */ 72 72 int byte; /* current byte number in the instructions word */ 73 73 }; ··· 83 83 PC = 15 84 84 }; 85 85 86 - extern struct unwind_idx __start_unwind_idx[]; 87 - extern struct unwind_idx __stop_unwind_idx[]; 86 + extern const struct unwind_idx __start_unwind_idx[]; 87 + static const struct unwind_idx *__origin_unwind_idx; 88 + extern const struct unwind_idx __stop_unwind_idx[]; 88 89 89 90 static DEFINE_SPINLOCK(unwind_lock); 90 91 static LIST_HEAD(unwind_tables); ··· 99 98 }) 100 99 101 100 /* 102 - * Binary search in the unwind index. The entries entries are 101 + * Binary search in the unwind index. The entries are 103 102 * guaranteed to be sorted in ascending order by the linker. 
103 + * 104 + * start = first entry 105 + * origin = first entry with positive offset (or stop if there is no such entry) 106 + * stop - 1 = last entry 104 107 */ 105 - static struct unwind_idx *search_index(unsigned long addr, 106 - struct unwind_idx *first, 107 - struct unwind_idx *last) 108 + static const struct unwind_idx *search_index(unsigned long addr, 109 + const struct unwind_idx *start, 110 + const struct unwind_idx *origin, 111 + const struct unwind_idx *stop) 108 112 { 109 - pr_debug("%s(%08lx, %p, %p)\n", __func__, addr, first, last); 113 + unsigned long addr_prel31; 110 114 111 - if (addr < first->addr) { 112 - pr_warning("unwind: Unknown symbol address %08lx\n", addr); 113 - return NULL; 114 - } else if (addr >= last->addr) 115 - return last; 115 + pr_debug("%s(%08lx, %p, %p, %p)\n", 116 + __func__, addr, start, origin, stop); 116 117 117 - while (first < last - 1) { 118 - struct unwind_idx *mid = first + ((last - first + 1) >> 1); 118 + /* 119 + * only search in the section with the matching sign. This way the 120 + * prel31 numbers can be compared as unsigned longs. 121 + */ 122 + if (addr < (unsigned long)start) 123 + /* negative offsets: [start; origin) */ 124 + stop = origin; 125 + else 126 + /* positive offsets: [origin; stop) */ 127 + start = origin; 119 128 120 - if (addr < mid->addr) 121 - last = mid; 122 - else 123 - first = mid; 129 + /* prel31 for address relative to start */ 130 + addr_prel31 = (addr - (unsigned long)start) & 0x7fffffff; 131 + 132 + while (start < stop - 1) { 133 + const struct unwind_idx *mid = start + ((stop - start) >> 1); 134 + 135 + /* 136 + * As addr_prel31 is relative to start an offset is needed to
138 + */ 139 + if (addr_prel31 - ((unsigned long)mid - (unsigned long)start) < 140 + mid->addr_offset) 141 + stop = mid; 142 + else { 143 + /* keep addr_prel31 relative to start */ 144 + addr_prel31 -= ((unsigned long)mid - 145 + (unsigned long)start); 146 + start = mid; 147 + } 124 148 } 125 149 126 - return first; 150 + if (likely(start->addr_offset <= addr_prel31)) 151 + return start; 152 + else { 153 + pr_warning("unwind: Unknown symbol address %08lx\n", addr); 154 + return NULL; 155 + } 127 156 } 128 157 129 - static struct unwind_idx *unwind_find_idx(unsigned long addr) 158 + static const struct unwind_idx *unwind_find_origin( 159 + const struct unwind_idx *start, const struct unwind_idx *stop) 130 160 { 131 - struct unwind_idx *idx = NULL; 161 + pr_debug("%s(%p, %p)\n", __func__, start, stop); 162 + while (start < stop) { 163 + const struct unwind_idx *mid = start + ((stop - start) >> 1); 164 + 165 + if (mid->addr_offset >= 0x40000000) 166 + /* negative offset */ 167 + start = mid + 1; 168 + else 169 + /* positive offset */ 170 + stop = mid; 171 + } 172 + pr_debug("%s -> %p\n", __func__, stop); 173 + return stop; 174 + } 175 + 176 + static const struct unwind_idx *unwind_find_idx(unsigned long addr) 177 + { 178 + const struct unwind_idx *idx = NULL; 132 179 unsigned long flags; 133 180 134 181 pr_debug("%s(%08lx)\n", __func__, addr); 135 182 136 - if (core_kernel_text(addr)) 183 + if (core_kernel_text(addr)) { 184 + if (unlikely(!__origin_unwind_idx)) 185 + __origin_unwind_idx = 186 + unwind_find_origin(__start_unwind_idx, 187 + __stop_unwind_idx); 188 + 137 189 /* main unwind table */ 138 190 idx = search_index(addr, __start_unwind_idx, 139 - __stop_unwind_idx - 1); 140 - else { 191 + __origin_unwind_idx, 192 + __stop_unwind_idx); 193 + } else { 141 194 /* module unwind tables */ 142 195 struct unwind_table *table; 143 196 ··· 200 145 if (addr >= table->begin_addr && 201 146 addr < table->end_addr) { 202 147 idx = search_index(addr, table->start, 203 - 
table->stop - 1); 148 + table->origin, 149 + table->stop); 204 150 /* Move-to-front to exploit common traces */ 205 151 list_move(&table->list, &unwind_tables); 206 152 break; ··· 330 274 int unwind_frame(struct stackframe *frame) 331 275 { 332 276 unsigned long high, low; 333 - struct unwind_idx *idx; 277 + const struct unwind_idx *idx; 334 278 struct unwind_ctrl_block ctrl; 335 279 336 280 /* only go to a higher address on the stack */ ··· 455 399 unsigned long text_size) 456 400 { 457 401 unsigned long flags; 458 - struct unwind_idx *idx; 459 402 struct unwind_table *tab = kmalloc(sizeof(*tab), GFP_KERNEL); 460 403 461 404 pr_debug("%s(%08lx, %08lx, %08lx, %08lx)\n", __func__, start, size, ··· 463 408 if (!tab) 464 409 return tab; 465 410 466 - tab->start = (struct unwind_idx *)start; 467 - tab->stop = (struct unwind_idx *)(start + size); 411 + tab->start = (const struct unwind_idx *)start; 412 + tab->stop = (const struct unwind_idx *)(start + size); 413 + tab->origin = unwind_find_origin(tab->start, tab->stop); 468 414 tab->begin_addr = text_addr; 469 415 tab->end_addr = text_addr + text_size; 470 - 471 - /* Convert the symbol addresses to absolute values */ 472 - for (idx = tab->start; idx < tab->stop; idx++) 473 - idx->addr = prel31_to_addr(&idx->addr); 474 416 475 417 spin_lock_irqsave(&unwind_lock, flags); 476 418 list_add_tail(&tab->list, &unwind_tables); ··· 488 436 spin_unlock_irqrestore(&unwind_lock, flags); 489 437 490 438 kfree(tab); 491 - } 492 - 493 - int __init unwind_init(void) 494 - { 495 - struct unwind_idx *idx; 496 - 497 - /* Convert the symbol addresses to absolute values */ 498 - for (idx = __start_unwind_idx; idx < __stop_unwind_idx; idx++) 499 - idx->addr = prel31_to_addr(&idx->addr); 500 - 501 - pr_debug("unwind: ARM stack unwinding initialised\n"); 502 - 503 - return 0; 504 439 }
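The unwind.c rewrite above stops patching the index to absolute addresses at boot and instead compares prel31 (place-relative, 31-bit signed) offsets in place. The decoding step can be sketched on its own; `prel31_to_addr` mirrors the kernel helper of the same name, forced to 32-bit types so the shift-based sign extension behaves the same when built on a 64-bit host:

```c
#include <assert.h>
#include <stdint.h>

/* prel31: the low 31 bits hold an offset relative to the word's own
 * address; bit 30 is the sign bit. Decoding shifts bit 30 up into the
 * sign position, shifts back down to sign-extend, then adds the
 * entry's address. */
static unsigned long prel31_to_addr(const uint32_t *ptr)
{
	/* sign-extend bit 30 into bit 31 */
	int32_t offset = ((int32_t)(*ptr << 1)) >> 1;

	return (unsigned long)ptr + offset;
}
```

Keeping the offsets place-relative is what lets the new `search_index` binary-search the table without a boot-time conversion pass, and why `unwind_init` could be deleted.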
+22 -4
arch/arm/lib/bitops.h
··· 1 + #include <asm/unwind.h> 2 + 1 3 #if __LINUX_ARM_ARCH__ >= 6 2 - .macro bitop, instr 4 + .macro bitop, name, instr 5 + ENTRY( \name ) 6 + UNWIND( .fnstart ) 3 7 ands ip, r1, #3 4 8 strneb r1, [ip] @ assert word-aligned 5 9 mov r2, #1 ··· 17 13 cmp r0, #0 18 14 bne 1b 19 15 bx lr 16 + UNWIND( .fnend ) 17 + ENDPROC(\name ) 20 18 .endm 21 19 22 - .macro testop, instr, store 20 + .macro testop, name, instr, store 21 + ENTRY( \name ) 22 + UNWIND( .fnstart ) 23 23 ands ip, r1, #3 24 24 strneb r1, [ip] @ assert word-aligned 25 25 mov r2, #1 ··· 42 34 cmp r0, #0 43 35 movne r0, #1 44 36 2: bx lr 37 + UNWIND( .fnend ) 38 + ENDPROC(\name ) 45 39 .endm 46 40 #else 47 - .macro bitop, instr 41 + .macro bitop, name, instr 42 + ENTRY( \name ) 43 + UNWIND( .fnstart ) 48 44 ands ip, r1, #3 49 45 strneb r1, [ip] @ assert word-aligned 50 46 and r2, r0, #31 ··· 61 49 str r2, [r1, r0, lsl #2] 62 50 restore_irqs ip 63 51 mov pc, lr 52 + UNWIND( .fnend ) 53 + ENDPROC(\name ) 64 54 .endm 65 55 66 56 /** ··· 73 59 * Note: we can trivially conditionalise the store instruction 74 60 * to avoid dirtying the data cache. 75 61 */ 76 - .macro testop, instr, store 62 + .macro testop, name, instr, store 63 + ENTRY( \name ) 64 + UNWIND( .fnstart ) 77 65 ands ip, r1, #3 78 66 strneb r1, [ip] @ assert word-aligned 79 67 and r3, r0, #31 ··· 89 73 moveq r0, #0 90 74 restore_irqs ip 91 75 mov pc, lr 76 + UNWIND( .fnend ) 77 + ENDPROC(\name ) 92 78 .endm 93 79 #endif
+1 -3
arch/arm/lib/changebit.S
··· 12 12 #include "bitops.h" 13 13 .text 14 14 15 - ENTRY(_change_bit) 16 - bitop eor 17 - ENDPROC(_change_bit) 15 + bitop _change_bit, eor
+1 -3
arch/arm/lib/clearbit.S
··· 12 12 #include "bitops.h" 13 13 .text 14 14 15 - ENTRY(_clear_bit) 16 - bitop bic 17 - ENDPROC(_clear_bit) 15 + bitop _clear_bit, bic
+1 -3
arch/arm/lib/setbit.S
··· 12 12 #include "bitops.h" 13 13 .text 14 14 15 - ENTRY(_set_bit) 16 - bitop orr 17 - ENDPROC(_set_bit) 15 + bitop _set_bit, orr
+1 -3
arch/arm/lib/testchangebit.S
··· 12 12 #include "bitops.h" 13 13 .text 14 14 15 - ENTRY(_test_and_change_bit) 16 - testop eor, str 17 - ENDPROC(_test_and_change_bit) 15 + testop _test_and_change_bit, eor, str
+1 -3
arch/arm/lib/testclearbit.S
··· 12 12 #include "bitops.h" 13 13 .text 14 14 15 - ENTRY(_test_and_clear_bit) 16 - testop bicne, strne 17 - ENDPROC(_test_and_clear_bit) 15 + testop _test_and_clear_bit, bicne, strne
+1 -3
arch/arm/lib/testsetbit.S
··· 12 12 #include "bitops.h" 13 13 .text 14 14 15 - ENTRY(_test_and_set_bit) 16 - testop orreq, streq 17 - ENDPROC(_test_and_set_bit) 15 + testop _test_and_set_bit, orreq, streq
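The bitops.h change above, together with the six one-line call sites that follow from it, moves the `ENTRY`/`ENDPROC` (and now unwind annotation) boilerplate into the generating macro itself. The same consolidation pattern in plain C, with an illustrative operator-parameterised generator rather than the kernel's asm macros:

```c
#include <assert.h>

/* Each invocation defines a complete function: the shared boilerplate
 * (signature, body shape) lives in the macro, and the call site names
 * only what varies -- the function name and the operation. */
#define DEFINE_BITOP(name, op) \
	static unsigned int name(unsigned int word, unsigned int mask) \
	{ return word op mask; }

DEFINE_BITOP(set_bits,    |)
DEFINE_BITOP(clear_mask,  &)	/* caller passes the inverted mask */
DEFINE_BITOP(change_bits, ^)
```

As in the diff, adding a per-function wrapper later (unwind markers there, say instrumentation here) then touches one macro instead of every definition.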
+1 -1
arch/arm/mach-at91/at91rm9200_devices.c
··· 83 83 * USB Device (Gadget) 84 84 * -------------------------------------------------------------------- */ 85 85 86 - #ifdef CONFIG_USB_GADGET_AT91 86 + #ifdef CONFIG_USB_AT91 87 87 static struct at91_udc_data udc_data; 88 88 89 89 static struct resource udc_resources[] = {
+3 -3
arch/arm/mach-at91/at91sam9260.c
··· 195 195 CLKDEV_CON_DEV_ID("t0_clk", "atmel_tcb.0", &tc0_clk), 196 196 CLKDEV_CON_DEV_ID("t1_clk", "atmel_tcb.0", &tc1_clk), 197 197 CLKDEV_CON_DEV_ID("t2_clk", "atmel_tcb.0", &tc2_clk), 198 - CLKDEV_CON_DEV_ID("t3_clk", "atmel_tcb.1", &tc3_clk), 199 - CLKDEV_CON_DEV_ID("t4_clk", "atmel_tcb.1", &tc4_clk), 200 - CLKDEV_CON_DEV_ID("t5_clk", "atmel_tcb.1", &tc5_clk), 198 + CLKDEV_CON_DEV_ID("t0_clk", "atmel_tcb.1", &tc3_clk), 199 + CLKDEV_CON_DEV_ID("t1_clk", "atmel_tcb.1", &tc4_clk), 200 + CLKDEV_CON_DEV_ID("t2_clk", "atmel_tcb.1", &tc5_clk), 201 201 CLKDEV_CON_DEV_ID("pclk", "ssc.0", &ssc_clk), 202 202 /* more usart lookup table for DT entries */ 203 203 CLKDEV_CON_DEV_ID("usart", "fffff200.serial", &mck),
+1 -1
arch/arm/mach-at91/at91sam9260_devices.c
··· 84 84 * USB Device (Gadget) 85 85 * -------------------------------------------------------------------- */ 86 86 87 - #ifdef CONFIG_USB_GADGET_AT91 87 + #ifdef CONFIG_USB_AT91 88 88 static struct at91_udc_data udc_data; 89 89 90 90 static struct resource udc_resources[] = {
+1 -1
arch/arm/mach-at91/at91sam9261_devices.c
··· 87 87 * USB Device (Gadget) 88 88 * -------------------------------------------------------------------- */ 89 89 90 - #ifdef CONFIG_USB_GADGET_AT91 90 + #ifdef CONFIG_USB_AT91 91 91 static struct at91_udc_data udc_data; 92 92 93 93 static struct resource udc_resources[] = {
+1 -1
arch/arm/mach-at91/at91sam9263_devices.c
··· 92 92 * USB Device (Gadget) 93 93 * -------------------------------------------------------------------- */ 94 94 95 - #ifdef CONFIG_USB_GADGET_AT91 95 + #ifdef CONFIG_USB_AT91 96 96 static struct at91_udc_data udc_data; 97 97 98 98 static struct resource udc_resources[] = {
+1 -1
arch/arm/mach-at91/include/mach/system_rev.h
··· 19 19 #define BOARD_HAVE_NAND_16BIT (1 << 31) 20 20 static inline int board_have_nand_16bit(void) 21 21 { 22 - return system_rev & BOARD_HAVE_NAND_16BIT; 22 + return (system_rev & BOARD_HAVE_NAND_16BIT) ? 1 : 0; 23 23 } 24 24 25 25 #endif /* __ARCH_SYSTEM_REV_H__ */
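The `system_rev` fix above is worth a second look: returning the raw mask of bit 31 through an `int` yields a large negative value rather than 1, which misleads callers that compare against 1 or store the result in a narrow field. A hedged host-side demo (using `1U << 31` to keep the constant well-defined):

```c
#include <assert.h>

#define BOARD_HAVE_NAND_16BIT (1U << 31)

/* Before the fix: the raw mask result, with bit 31 set, converts to a
 * large negative int rather than to 1. */
static int have_16bit_raw(unsigned int system_rev)
{
	return system_rev & BOARD_HAVE_NAND_16BIT;
}

/* After the fix: normalised to exactly 0 or 1. */
static int have_16bit(unsigned int system_rev)
{
	return (system_rev & BOARD_HAVE_NAND_16BIT) ? 1 : 0;
}
```

Only bit 31 trips this; a flag in any lower bit would survive the conversion, which is why the bug is easy to miss.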
+1 -1
arch/arm/mach-davinci/board-da850-evm.c
··· 753 753 .num_serializer = ARRAY_SIZE(da850_iis_serializer_direction), 754 754 .tdm_slots = 2, 755 755 .serial_dir = da850_iis_serializer_direction, 756 - .asp_chan_q = EVENTQ_1, 756 + .asp_chan_q = EVENTQ_0, 757 757 .version = MCASP_VERSION_2, 758 758 .txnumevt = 1, 759 759 .rxnumevt = 1,
+1 -1
arch/arm/mach-davinci/board-dm365-evm.c
··· 107 107 /* UBL (a few copies) plus U-Boot */ 108 108 .name = "bootloader", 109 109 .offset = 0, 110 - .size = 28 * NAND_BLOCK_SIZE, 110 + .size = 30 * NAND_BLOCK_SIZE, 111 111 .mask_flags = MTD_WRITEABLE, /* force read-only */ 112 112 }, { 113 113 /* U-Boot environment */
+3 -3
arch/arm/mach-davinci/board-dm646x-evm.c
··· 564 564 int val; 565 565 u32 value; 566 566 567 - if (!vpif_vsclkdis_reg || !cpld_client) 567 + if (!vpif_vidclkctl_reg || !cpld_client) 568 568 return -ENXIO; 569 569 570 570 val = i2c_smbus_read_byte(cpld_client); ··· 572 572 return val; 573 573 574 574 spin_lock_irqsave(&vpif_reg_lock, flags); 575 - value = __raw_readl(vpif_vsclkdis_reg); 575 + value = __raw_readl(vpif_vidclkctl_reg); 576 576 if (mux_mode) { 577 577 val &= VPIF_INPUT_TWO_CHANNEL; 578 578 value |= VIDCH1CLK; ··· 580 580 val |= VPIF_INPUT_ONE_CHANNEL; 581 581 value &= ~VIDCH1CLK; 582 582 } 583 - __raw_writel(value, vpif_vsclkdis_reg); 583 + __raw_writel(value, vpif_vidclkctl_reg); 584 584 spin_unlock_irqrestore(&vpif_reg_lock, flags); 585 585 586 586 err = i2c_smbus_write_byte(cpld_client, val);
-1
arch/arm/mach-davinci/dm646x.c
··· 161 161 .name = "dsp", 162 162 .parent = &pll1_sysclk1, 163 163 .lpsc = DM646X_LPSC_C64X_CPU, 164 - .flags = PSC_DSP, 165 164 .usecount = 1, /* REVISIT how to disable? */ 166 165 }; 167 166
+4 -1
arch/arm/mach-davinci/include/mach/psc.h
··· 233 233 #define PTCMD 0x120 234 234 #define PTSTAT 0x128 235 235 #define PDSTAT 0x200 236 - #define PDCTL1 0x304 236 + #define PDCTL 0x300 237 237 #define MDSTAT 0x800 238 238 #define MDCTL 0xA00 239 239 ··· 244 244 #define PSC_STATE_ENABLE 3 245 245 246 246 #define MDSTAT_STATE_MASK 0x3f 247 + #define PDSTAT_STATE_MASK 0x1f 247 248 #define MDCTL_FORCE BIT(31) 249 + #define PDCTL_NEXT BIT(1) 250 + #define PDCTL_EPCGOOD BIT(8) 248 251 249 252 #ifndef __ASSEMBLER__ 250 253
+9 -9
arch/arm/mach-davinci/psc.c
··· 52 52 void davinci_psc_config(unsigned int domain, unsigned int ctlr, 53 53 unsigned int id, bool enable, u32 flags) 54 54 { 55 - u32 epcpr, ptcmd, ptstat, pdstat, pdctl1, mdstat, mdctl; 55 + u32 epcpr, ptcmd, ptstat, pdstat, pdctl, mdstat, mdctl; 56 56 void __iomem *psc_base; 57 57 struct davinci_soc_info *soc_info = &davinci_soc_info; 58 58 u32 next_state = PSC_STATE_ENABLE; ··· 79 79 mdctl |= MDCTL_FORCE; 80 80 __raw_writel(mdctl, psc_base + MDCTL + 4 * id); 81 81 82 - pdstat = __raw_readl(psc_base + PDSTAT); 83 - if ((pdstat & 0x00000001) == 0) { 84 - pdctl1 = __raw_readl(psc_base + PDCTL1); 85 - pdctl1 |= 0x1; 86 - __raw_writel(pdctl1, psc_base + PDCTL1); 82 + pdstat = __raw_readl(psc_base + PDSTAT + 4 * domain); 83 + if ((pdstat & PDSTAT_STATE_MASK) == 0) { 84 + pdctl = __raw_readl(psc_base + PDCTL + 4 * domain); 85 + pdctl |= PDCTL_NEXT; 86 + __raw_writel(pdctl, psc_base + PDCTL + 4 * domain); 87 87 88 88 ptcmd = 1 << domain; 89 89 __raw_writel(ptcmd, psc_base + PTCMD); ··· 92 92 epcpr = __raw_readl(psc_base + EPCPR); 93 93 } while ((((epcpr >> domain) & 1) == 0)); 94 94 95 - pdctl1 = __raw_readl(psc_base + PDCTL1); 96 - pdctl1 |= 0x100; 97 - __raw_writel(pdctl1, psc_base + PDCTL1); 95 + pdctl = __raw_readl(psc_base + PDCTL + 4 * domain); 96 + pdctl |= PDCTL_EPCGOOD; 97 + __raw_writel(pdctl, psc_base + PDCTL + 4 * domain); 98 98 } else { 99 99 ptcmd = 1 << domain; 100 100 __raw_writel(ptcmd, psc_base + PTCMD);
+2
arch/arm/mach-exynos/cpuidle.c
··· 12 12 #include <linux/init.h> 13 13 #include <linux/cpuidle.h> 14 14 #include <linux/io.h> 15 + #include <linux/export.h> 16 + #include <linux/time.h> 15 17 16 18 #include <asm/proc-fns.h> 17 19
+10 -3
arch/arm/mach-exynos/mct.c
··· 44 44 char name[10]; 45 45 }; 46 46 47 - static DEFINE_PER_CPU(struct mct_clock_event_device, percpu_mct_tick); 48 - 49 47 static void exynos4_mct_write(unsigned int value, void *addr) 50 48 { 51 49 void __iomem *stat_addr; ··· 262 264 } 263 265 264 266 #ifdef CONFIG_LOCAL_TIMERS 267 + 268 + static DEFINE_PER_CPU(struct mct_clock_event_device, percpu_mct_tick); 269 + 265 270 /* Clock event handling */ 266 271 static void exynos4_mct_tick_stop(struct mct_clock_event_device *mevt) 267 272 { ··· 429 428 430 429 void local_timer_stop(struct clock_event_device *evt) 431 430 { 431 + unsigned int cpu = smp_processor_id(); 432 432 evt->set_mode(CLOCK_EVT_MODE_UNUSED, evt); 433 433 if (mct_int_type == MCT_INT_SPI) 434 - disable_irq(evt->irq); 434 + if (cpu == 0) 435 + remove_irq(evt->irq, &mct_tick0_event_irq); 436 + else 437 + remove_irq(evt->irq, &mct_tick1_event_irq); 435 438 else 436 439 disable_percpu_irq(IRQ_MCT_LOCALTIMER); 437 440 } ··· 448 443 449 444 clk_rate = clk_get_rate(mct_clk); 450 445 446 + #ifdef CONFIG_LOCAL_TIMERS 451 447 if (mct_int_type == MCT_INT_PPI) { 452 448 int err; 453 449 ··· 458 452 WARN(err, "MCT: can't request IRQ %d (%d)\n", 459 453 IRQ_MCT_LOCALTIMER, err); 460 454 } 455 + #endif /* CONFIG_LOCAL_TIMERS */ 461 456 } 462 457 463 458 static void __init exynos4_timer_init(void)
+4
arch/arm/mach-highbank/highbank.c
··· 22 22 #include <linux/of_irq.h> 23 23 #include <linux/of_platform.h> 24 24 #include <linux/of_address.h> 25 + #include <linux/smp.h> 25 26 26 27 #include <asm/cacheflush.h> 27 28 #include <asm/unified.h> ··· 73 72 74 73 void highbank_set_cpu_jump(int cpu, void *jump_addr) 75 74 { 75 + #ifdef CONFIG_SMP 76 + cpu = cpu_logical_map(cpu); 77 + #endif 76 78 writel(BSYM(virt_to_phys(jump_addr)), HB_JUMP_TABLE_VIRT(cpu)); 77 79 __cpuc_flush_dcache_area(HB_JUMP_TABLE_VIRT(cpu), 16); 78 80 outer_clean_range(HB_JUMP_TABLE_PHYS(cpu),
-13
arch/arm/mach-imx/Kconfig
··· 10 10 config HAVE_IMX_SRC 11 11 bool 12 12 13 - # 14 - # ARCH_MX31 and ARCH_MX35 are left for compatibility 15 - # Some usages assume that having one of them implies not having (e.g.) ARCH_MX2. 16 - # To easily distinguish good and reviewed from unreviewed usages new (and IMHO 17 - # more sensible) names are used: SOC_IMX31 and SOC_IMX35 18 13 config ARCH_MX1 19 14 bool 20 15 ··· 20 25 bool 21 26 22 27 config MACH_MX27 23 - bool 24 - 25 - config ARCH_MX31 26 - bool 27 - 28 - config ARCH_MX35 29 28 bool 30 29 31 30 config SOC_IMX1 ··· 61 72 select CPU_V6 62 73 select IMX_HAVE_PLATFORM_MXC_RNGA 63 74 select ARCH_MXC_AUDMUX_V2 64 - select ARCH_MX31 65 75 select MXC_AVIC 66 76 select SMP_ON_UP if SMP 67 77 ··· 70 82 select ARCH_MXC_IOMUX_V3 71 83 select ARCH_MXC_AUDMUX_V2 72 84 select HAVE_EPIT 73 - select ARCH_MX35 74 85 select MXC_AVIC 75 86 select SMP_ON_UP if SMP 76 87
+5 -2
arch/arm/mach-imx/clock-imx6q.c
··· 1953 1953 imx_map_entry(MX6Q, ANATOP, MT_DEVICE), 1954 1954 }; 1955 1955 1956 + void __init imx6q_clock_map_io(void) 1957 + { 1958 + iotable_init(imx6q_clock_desc, ARRAY_SIZE(imx6q_clock_desc)); 1959 + } 1960 + 1956 1961 int __init mx6q_clocks_init(void) 1957 1962 { 1958 1963 struct device_node *np; 1959 1964 void __iomem *base; 1960 1965 int i, irq; 1961 - 1962 - iotable_init(imx6q_clock_desc, ARRAY_SIZE(imx6q_clock_desc)); 1963 1966 1964 1967 /* retrieve the freqency of fixed clocks from device tree */ 1965 1968 for_each_compatible_node(np, NULL, "fixed-clock") {
+6 -4
arch/arm/mach-imx/mach-imx6q.c
··· 34 34 { 35 35 imx_lluart_map_io(); 36 36 imx_scu_map_io(); 37 + imx6q_clock_map_io(); 37 38 } 38 39 39 - static void __init imx6q_gpio_add_irq_domain(struct device_node *np, 40 + static int __init imx6q_gpio_add_irq_domain(struct device_node *np, 40 41 struct device_node *interrupt_parent) 41 42 { 42 - static int gpio_irq_base = MXC_GPIO_IRQ_START + ARCH_NR_GPIOS - 43 - 32 * 7; /* imx6q gets 7 gpio ports */ 43 + static int gpio_irq_base = MXC_GPIO_IRQ_START + ARCH_NR_GPIOS; 44 44 45 + gpio_irq_base -= 32; 45 46 irq_domain_add_simple(np, gpio_irq_base); 46 - gpio_irq_base += 32; 47 + 48 + return 0; 47 49 } 48 50 49 51 static const struct of_device_id imx6q_irq_match[] __initconst = {
+58 -51
arch/arm/mach-imx/mm-imx3.c
··· 33 33 static void imx3_idle(void) 34 34 { 35 35 unsigned long reg = 0; 36 - __asm__ __volatile__( 37 - /* disable I and D cache */ 38 - "mrc p15, 0, %0, c1, c0, 0\n" 39 - "bic %0, %0, #0x00001000\n" 40 - "bic %0, %0, #0x00000004\n" 41 - "mcr p15, 0, %0, c1, c0, 0\n" 42 - /* invalidate I cache */ 43 - "mov %0, #0\n" 44 - "mcr p15, 0, %0, c7, c5, 0\n" 45 - /* clear and invalidate D cache */ 46 - "mov %0, #0\n" 47 - "mcr p15, 0, %0, c7, c14, 0\n" 48 - /* WFI */ 49 - "mov %0, #0\n" 50 - "mcr p15, 0, %0, c7, c0, 4\n" 51 - "nop\n" "nop\n" "nop\n" "nop\n" 52 - "nop\n" "nop\n" "nop\n" 53 - /* enable I and D cache */ 54 - "mrc p15, 0, %0, c1, c0, 0\n" 55 - "orr %0, %0, #0x00001000\n" 56 - "orr %0, %0, #0x00000004\n" 57 - "mcr p15, 0, %0, c1, c0, 0\n" 58 - : "=r" (reg)); 36 + 37 + if (!need_resched()) 38 + __asm__ __volatile__( 39 + /* disable I and D cache */ 40 + "mrc p15, 0, %0, c1, c0, 0\n" 41 + "bic %0, %0, #0x00001000\n" 42 + "bic %0, %0, #0x00000004\n" 43 + "mcr p15, 0, %0, c1, c0, 0\n" 44 + /* invalidate I cache */ 45 + "mov %0, #0\n" 46 + "mcr p15, 0, %0, c7, c5, 0\n" 47 + /* clear and invalidate D cache */ 48 + "mov %0, #0\n" 49 + "mcr p15, 0, %0, c7, c14, 0\n" 50 + /* WFI */ 51 + "mov %0, #0\n" 52 + "mcr p15, 0, %0, c7, c0, 4\n" 53 + "nop\n" "nop\n" "nop\n" "nop\n" 54 + "nop\n" "nop\n" "nop\n" 55 + /* enable I and D cache */ 56 + "mrc p15, 0, %0, c1, c0, 0\n" 57 + "orr %0, %0, #0x00001000\n" 58 + "orr %0, %0, #0x00000004\n" 59 + "mcr p15, 0, %0, c1, c0, 0\n" 60 + : "=r" (reg)); 61 + local_irq_enable(); 59 62 } 60 63 61 64 static void __iomem *imx3_ioremap(unsigned long phys_addr, size_t size, ··· 111 108 l2x0_init(l2x0_base, 0x00030024, 0x00000000); 112 109 } 113 110 111 + #ifdef CONFIG_SOC_IMX31 114 112 static struct map_desc mx31_io_desc[] __initdata = { 115 113 imx_map_entry(MX31, X_MEMC, MT_DEVICE), 116 114 imx_map_entry(MX31, AVIC, MT_DEVICE_NONSHARED), ··· 130 126 iotable_init(mx31_io_desc, ARRAY_SIZE(mx31_io_desc)); 131 127 } 132 128 133 - static struct 
map_desc mx35_io_desc[] __initdata = { 134 - imx_map_entry(MX35, X_MEMC, MT_DEVICE), 135 - imx_map_entry(MX35, AVIC, MT_DEVICE_NONSHARED), 136 - imx_map_entry(MX35, AIPS1, MT_DEVICE_NONSHARED), 137 - imx_map_entry(MX35, AIPS2, MT_DEVICE_NONSHARED), 138 - imx_map_entry(MX35, SPBA0, MT_DEVICE_NONSHARED), 139 - }; 140 - 141 - void __init mx35_map_io(void) 142 - { 143 - iotable_init(mx35_io_desc, ARRAY_SIZE(mx35_io_desc)); 144 - } 145 - 146 129 void __init imx31_init_early(void) 147 130 { 148 131 mxc_set_cpu_type(MXC_CPU_MX31); 149 132 mxc_arch_reset_init(MX31_IO_ADDRESS(MX31_WDOG_BASE_ADDR)); 150 - imx_idle = imx3_idle; 151 - imx_ioremap = imx3_ioremap; 152 - } 153 - 154 - void __init imx35_init_early(void) 155 - { 156 - mxc_set_cpu_type(MXC_CPU_MX35); 157 - mxc_iomux_v3_init(MX35_IO_ADDRESS(MX35_IOMUXC_BASE_ADDR)); 158 - mxc_arch_reset_init(MX35_IO_ADDRESS(MX35_WDOG_BASE_ADDR)); 159 - imx_idle = imx3_idle; 133 + pm_idle = imx3_idle; 160 134 imx_ioremap = imx3_ioremap; 161 135 } 162 136 163 137 void __init mx31_init_irq(void) 164 138 { 165 139 mxc_init_irq(MX31_IO_ADDRESS(MX31_AVIC_BASE_ADDR)); 166 - } 167 - 168 - void __init mx35_init_irq(void) 169 - { 170 - mxc_init_irq(MX35_IO_ADDRESS(MX35_AVIC_BASE_ADDR)); 171 140 } 172 141 173 142 static struct sdma_script_start_addrs imx31_to1_sdma_script __initdata = { ··· 175 198 } 176 199 177 200 imx_add_imx_sdma("imx31-sdma", MX31_SDMA_BASE_ADDR, MX31_INT_SDMA, &imx31_sdma_pdata); 201 + } 202 + #endif /* ifdef CONFIG_SOC_IMX31 */ 203 + 204 + #ifdef CONFIG_SOC_IMX35 205 + static struct map_desc mx35_io_desc[] __initdata = { 206 + imx_map_entry(MX35, X_MEMC, MT_DEVICE), 207 + imx_map_entry(MX35, AVIC, MT_DEVICE_NONSHARED), 208 + imx_map_entry(MX35, AIPS1, MT_DEVICE_NONSHARED), 209 + imx_map_entry(MX35, AIPS2, MT_DEVICE_NONSHARED), 210 + imx_map_entry(MX35, SPBA0, MT_DEVICE_NONSHARED), 211 + }; 212 + 213 + void __init mx35_map_io(void) 214 + { 215 + iotable_init(mx35_io_desc, ARRAY_SIZE(mx35_io_desc)); 216 + } 217 + 218 + void 
__init imx35_init_early(void) 219 + { 220 + mxc_set_cpu_type(MXC_CPU_MX35); 221 + mxc_iomux_v3_init(MX35_IO_ADDRESS(MX35_IOMUXC_BASE_ADDR)); 222 + mxc_arch_reset_init(MX35_IO_ADDRESS(MX35_WDOG_BASE_ADDR)); 223 + pm_idle = imx3_idle; 224 + imx_ioremap = imx3_ioremap; 225 + } 226 + 227 + void __init mx35_init_irq(void) 228 + { 229 + mxc_init_irq(MX35_IO_ADDRESS(MX35_AVIC_BASE_ADDR)); 178 230 } 179 231 180 232 static struct sdma_script_start_addrs imx35_to1_sdma_script __initdata = { ··· 260 254 261 255 imx_add_imx_sdma("imx35-sdma", MX35_SDMA_BASE_ADDR, MX35_INT_SDMA, &imx35_sdma_pdata); 262 256 } 257 + #endif /* ifdef CONFIG_SOC_IMX35 */
+7
arch/arm/mach-imx/src.c
··· 14 14 #include <linux/io.h> 15 15 #include <linux/of.h> 16 16 #include <linux/of_address.h> 17 + #include <linux/smp.h> 17 18 #include <asm/unified.h> 18 19 19 20 #define SRC_SCR 0x000 ··· 24 23 25 24 static void __iomem *src_base; 26 25 26 + #ifndef CONFIG_SMP 27 + #define cpu_logical_map(cpu) 0 28 + #endif 29 + 27 30 void imx_enable_cpu(int cpu, bool enable) 28 31 { 29 32 u32 mask, val; 30 33 34 + cpu = cpu_logical_map(cpu); 31 35 mask = 1 << (BP_SRC_SCR_CORE1_ENABLE + cpu - 1); 32 36 val = readl_relaxed(src_base + SRC_SCR); 33 37 val = enable ? val | mask : val & ~mask; ··· 41 35 42 36 void imx_set_cpu_jump(int cpu, void *jump_addr) 43 37 { 38 + cpu = cpu_logical_map(cpu); 44 39 writel_relaxed(BSYM(virt_to_phys(jump_addr)), 45 40 src_base + SRC_GPR1 + cpu * 8); 46 41 }
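The `#ifndef CONFIG_SMP` fallback added above lets the same logical-to-physical CPU mapping code build on uniprocessor configs, where every logical CPU is CPU 0. The shape of that pattern, with an illustrative map for the SMP side:

```c
#include <assert.h>

/* On SMP builds cpu_logical_map() indexes a real translation table;
 * on UP builds the macro collapses to 0 so callers need no #ifdefs.
 * CONFIG_SMP and the table contents here are illustrative. */
#ifdef CONFIG_SMP
static const int __cpu_logical_map[] = { 0, 1, 2, 3 };
#define cpu_logical_map(cpu) __cpu_logical_map[cpu]
#else
#define cpu_logical_map(cpu) 0
#endif

static int jump_table_slot(int cpu)
{
	return cpu_logical_map(cpu);	/* index by physical CPU id */
}
```

The highbank.c hunk earlier guards the same call with `#ifdef CONFIG_SMP` at the call site instead; the macro fallback keeps that noise out of every caller.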
+1 -1
arch/arm/mach-mmp/gplugd.c
··· 182 182 183 183 /* on-chip devices */ 184 184 pxa168_add_uart(3); 185 - pxa168_add_ssp(0); 185 + pxa168_add_ssp(1); 186 186 pxa168_add_twsi(0, NULL, ARRAY_AND_SIZE(gplugd_i2c_board_info)); 187 187 188 188 pxa168_add_eth(&gplugd_eth_platform_data);
+1 -1
arch/arm/mach-mmp/include/mach/gpio-pxa.h
··· 7 7 #define GPIO_REGS_VIRT (APB_VIRT_BASE + 0x19000) 8 8 9 9 #define BANK_OFF(n) (((n) < 3) ? (n) << 2 : 0x100 + (((n) - 3) << 2)) 10 - #define GPIO_REG(x) (GPIO_REGS_VIRT + (x)) 10 + #define GPIO_REG(x) (*(volatile u32 *)(GPIO_REGS_VIRT + (x))) 11 11 12 12 #define NR_BUILTIN_GPIO IRQ_GPIO_NUM 13 13
+1
arch/arm/mach-msm/devices-iommu.c
··· 18 18 #include <linux/kernel.h> 19 19 #include <linux/platform_device.h> 20 20 #include <linux/bootmem.h> 21 + #include <linux/module.h> 21 22 #include <mach/irqs.h> 22 23 #include <mach/iommu.h> 23 24
+1 -1
arch/arm/mach-mx5/board-mx51_babbage.c
··· 362 362 { 363 363 iomux_v3_cfg_t usbh1stp = MX51_PAD_USBH1_STP__USBH1_STP; 364 364 iomux_v3_cfg_t power_key = NEW_PAD_CTRL(MX51_PAD_EIM_A27__GPIO2_21, 365 - PAD_CTL_SRE_FAST | PAD_CTL_DSE_HIGH | PAD_CTL_PUS_100K_UP); 365 + PAD_CTL_SRE_FAST | PAD_CTL_DSE_HIGH); 366 366 367 367 imx51_soc_init(); 368 368
+1 -1
arch/arm/mach-mx5/board-mx53_evk.c
··· 106 106 gpio_set_value(MX53_EVK_FEC_PHY_RST, 1); 107 107 } 108 108 109 - static struct fec_platform_data mx53_evk_fec_pdata = { 109 + static const struct fec_platform_data mx53_evk_fec_pdata __initconst = { 110 110 .phy = PHY_INTERFACE_MODE_RMII, 111 111 }; 112 112
+1 -1
arch/arm/mach-mx5/board-mx53_loco.c
··· 242 242 gpio_set_value(LOCO_FEC_PHY_RST, 1); 243 243 } 244 244 245 - static struct fec_platform_data mx53_loco_fec_data = { 245 + static const struct fec_platform_data mx53_loco_fec_data __initconst = { 246 246 .phy = PHY_INTERFACE_MODE_RMII, 247 247 }; 248 248
+1 -1
arch/arm/mach-mx5/board-mx53_smd.c
··· 104 104 gpio_set_value(SMD_FEC_PHY_RST, 1); 105 105 } 106 106 107 - static struct fec_platform_data mx53_smd_fec_data = { 107 + static const struct fec_platform_data mx53_smd_fec_data __initconst = { 108 108 .phy = PHY_INTERFACE_MODE_RMII, 109 109 }; 110 110
+3 -2
arch/arm/mach-mx5/cpu.c
··· 16 16 #include <linux/init.h> 17 17 #include <linux/module.h> 18 18 #include <mach/hardware.h> 19 - #include <asm/io.h> 19 + #include <linux/io.h> 20 20 21 21 static int mx5_cpu_rev = -1; 22 22 ··· 67 67 if (!cpu_is_mx51()) 68 68 return 0; 69 69 70 - if (mx51_revision() < IMX_CHIP_REVISION_3_0 && (elf_hwcap & HWCAP_NEON)) { 70 + if (mx51_revision() < IMX_CHIP_REVISION_3_0 && 71 + (elf_hwcap & HWCAP_NEON)) { 71 72 elf_hwcap &= ~HWCAP_NEON; 72 73 pr_info("Turning off NEON support, detected broken NEON implementation\n"); 73 74 }
+7 -5
arch/arm/mach-mx5/imx51-dt.c
··· 44 44 { /* sentinel */ } 45 45 }; 46 46 47 - static void __init imx51_tzic_add_irq_domain(struct device_node *np, 47 + static int __init imx51_tzic_add_irq_domain(struct device_node *np, 48 48 struct device_node *interrupt_parent) 49 49 { 50 50 irq_domain_add_simple(np, 0); 51 + return 0; 51 52 } 52 53 53 - static void __init imx51_gpio_add_irq_domain(struct device_node *np, 54 + static int __init imx51_gpio_add_irq_domain(struct device_node *np, 54 55 struct device_node *interrupt_parent) 55 56 { 56 - static int gpio_irq_base = MXC_GPIO_IRQ_START + ARCH_NR_GPIOS - 57 - 32 * 4; /* imx51 gets 4 gpio ports */ 57 + static int gpio_irq_base = MXC_GPIO_IRQ_START + ARCH_NR_GPIOS; 58 58 59 + gpio_irq_base -= 32; 59 60 irq_domain_add_simple(np, gpio_irq_base); 60 - gpio_irq_base += 32; 61 + 62 + return 0; 61 63 } 62 64 63 65 static const struct of_device_id imx51_irq_match[] __initconst = {
+7 -5
arch/arm/mach-mx5/imx53-dt.c
··· 48 48 { /* sentinel */ } 49 49 }; 50 50 51 - static void __init imx53_tzic_add_irq_domain(struct device_node *np, 51 + static int __init imx53_tzic_add_irq_domain(struct device_node *np, 52 52 struct device_node *interrupt_parent) 53 53 { 54 54 irq_domain_add_simple(np, 0); 55 + return 0; 55 56 } 56 57 57 - static void __init imx53_gpio_add_irq_domain(struct device_node *np, 58 + static int __init imx53_gpio_add_irq_domain(struct device_node *np, 58 59 struct device_node *interrupt_parent) 59 60 { 60 - static int gpio_irq_base = MXC_GPIO_IRQ_START + ARCH_NR_GPIOS - 61 - 32 * 7; /* imx53 gets 7 gpio ports */ 61 + static int gpio_irq_base = MXC_GPIO_IRQ_START + ARCH_NR_GPIOS; 62 62 63 + gpio_irq_base -= 32; 63 64 irq_domain_add_simple(np, gpio_irq_base); 64 - gpio_irq_base += 32; 65 + 66 + return 0; 65 67 } 66 68 67 69 static const struct of_device_id imx53_irq_match[] __initconst = {
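The imx51 and imx53 DT hunks above replace a precomputed per-SoC GPIO IRQ base ("imx51 gets 4 gpio ports", "imx53 gets 7") with a generic downward allocator: each probed port takes the next 32-IRQ block below the previous one, so the code no longer needs to know the port count up front. A minimal standalone sketch of that allocator; the constants and function name here are illustrative, not the kernel's:

```c
#include <assert.h>

/* Stand-ins for MXC_GPIO_IRQ_START + ARCH_NR_GPIOS and the 32 IRQs
 * per GPIO port; values are made up for the example. */
#define IRQ_END   512
#define PORT_IRQS  32

static int gpio_irq_base = IRQ_END;

/* Each call hands out the next 32-IRQ block, growing downward,
 * mirroring the "gpio_irq_base -= 32" pattern in the hunks above. */
static int alloc_gpio_irq_base(void)
{
    gpio_irq_base -= PORT_IRQS;
    return gpio_irq_base;
}
```

Because the base only ever moves down from a fixed ceiling, four ports (imx51) and seven ports (imx53) are handled by the same code path.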
+4 -2
arch/arm/mach-mx5/mm.c
··· 23 23 24 24 static void imx5_idle(void) 25 25 { 26 - mx5_cpu_lp_set(WAIT_UNCLOCKED_POWER_OFF); 26 + if (!need_resched()) 27 + mx5_cpu_lp_set(WAIT_UNCLOCKED_POWER_OFF); 28 + local_irq_enable(); 27 29 } 28 30 29 31 /* ··· 91 89 mxc_set_cpu_type(MXC_CPU_MX51); 92 90 mxc_iomux_v3_init(MX51_IO_ADDRESS(MX51_IOMUXC_BASE_ADDR)); 93 91 mxc_arch_reset_init(MX51_IO_ADDRESS(MX51_WDOG1_BASE_ADDR)); 94 - imx_idle = imx5_idle; 92 + pm_idle = imx5_idle; 95 93 } 96 94 97 95 void __init imx53_init_early(void)
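The mm.c change above brings imx5_idle() in line with the usual pm_idle contract: skip the low-power entry when a reschedule is already pending, and re-enable interrupts on every exit path. A toy model of that control flow; the stub names are ours, and the return value just encodes the two flags for checking:

```c
#include <assert.h>

static int entered_low_power, irqs_enabled;

static void mx5_cpu_lp_set_stub(void)   { entered_low_power = 1; }
static void local_irq_enable_stub(void) { irqs_enabled = 1; }

/* Mirrors the fixed imx5_idle(): enter the low-power state only when
 * no reschedule is pending, then unconditionally re-enable IRQs.
 * Returns (entered_low_power << 1) | irqs_enabled. */
static int imx5_idle_model(int need_resched_pending)
{
    entered_low_power = 0;
    irqs_enabled = 0;

    if (!need_resched_pending)
        mx5_cpu_lp_set_stub();
    local_irq_enable_stub();

    return (entered_low_power << 1) | irqs_enabled;
}
```

The key invariant is that interrupts come back on regardless of which branch ran; the unconditional low-power entry in the original code had no such guard.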
+1 -1
arch/arm/mach-mxs/clock-mx28.c
··· 404 404 reg = __raw_readl(CLKCTRL_BASE_ADDR + HW_CLKCTRL_##dr); \ 405 405 reg &= ~BM_CLKCTRL_##dr##_DIV; \ 406 406 reg |= div << BP_CLKCTRL_##dr##_DIV; \ 407 - if (reg | (1 << clk->enable_shift)) { \ 407 + if (reg & (1 << clk->enable_shift)) { \ 408 408 pr_err("%s: clock is gated\n", __func__); \ 409 409 return -EINVAL; \ 410 410 } \
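The one-character clock-mx28 fix above (`|` changed to `&`) is easy to miss: OR-ing the enable mask into the register value makes the "clock is gated" test nonzero for every possible input, so the divider write was always rejected with -EINVAL. A minimal standalone sketch of the two predicates; the bit position and function names are illustrative, not the kernel's:

```c
#include <assert.h>

/* Gate bit position; the kernel reads this from clk->enable_shift. */
#define ENABLE_SHIFT 31u

/* Buggy predicate from the original code: always true, because the
 * OR unconditionally sets the tested bit in the value. */
static int is_gated_buggy(unsigned int reg)
{
    return (reg | (1u << ENABLE_SHIFT)) != 0;
}

/* Fixed predicate: AND actually inspects the gate bit. */
static int is_gated_fixed(unsigned int reg)
{
    return (reg & (1u << ENABLE_SHIFT)) != 0;
}
```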
+2 -2
arch/arm/mach-mxs/include/mach/mx28.h
··· 104 104 #define MX28_INT_CAN1 9 105 105 #define MX28_INT_LRADC_TOUCH 10 106 106 #define MX28_INT_HSADC 13 107 - #define MX28_INT_IRADC_THRESH0 14 108 - #define MX28_INT_IRADC_THRESH1 15 107 + #define MX28_INT_LRADC_THRESH0 14 108 + #define MX28_INT_LRADC_THRESH1 15 109 109 #define MX28_INT_LRADC_CH0 16 110 110 #define MX28_INT_LRADC_CH1 17 111 111 #define MX28_INT_LRADC_CH2 18
+1
arch/arm/mach-mxs/include/mach/mxs.h
··· 30 30 */ 31 31 #define cpu_is_mx23() ( \ 32 32 machine_is_mx23evk() || \ 33 + machine_is_stmp378x() || \ 33 34 0) 34 35 #define cpu_is_mx28() ( \ 35 36 machine_is_mx28evk() || \
+1 -1
arch/arm/mach-mxs/mach-m28evk.c
··· 361 361 MACHINE_START(M28EVK, "DENX M28 EVK") 362 362 .map_io = mx28_map_io, 363 363 .init_irq = mx28_init_irq, 364 - .init_machine = m28evk_init, 365 364 .timer = &m28evk_timer, 365 + .init_machine = m28evk_init, 366 366 MACHINE_END
+1 -1
arch/arm/mach-mxs/mach-stmp378x_devb.c
··· 115 115 MACHINE_START(STMP378X, "STMP378X") 116 116 .map_io = mx23_map_io, 117 117 .init_irq = mx23_init_irq, 118 - .init_machine = stmp378x_dvb_init, 119 118 .timer = &stmp378x_dvb_timer, 119 + .init_machine = stmp378x_dvb_init, 120 120 MACHINE_END
+2 -2
arch/arm/mach-mxs/module-tx28.c
··· 66 66 MX28_PAD_ENET0_CRS__ENET1_RX_EN, 67 67 }; 68 68 69 - static struct fec_platform_data tx28_fec0_data = { 69 + static const struct fec_platform_data tx28_fec0_data __initconst = { 70 70 .phy = PHY_INTERFACE_MODE_RMII, 71 71 }; 72 72 73 - static struct fec_platform_data tx28_fec1_data = { 73 + static const struct fec_platform_data tx28_fec1_data __initconst = { 74 74 .phy = PHY_INTERFACE_MODE_RMII, 75 75 }; 76 76
-8
arch/arm/mach-omap1/Kconfig
··· 171 171 comment "OMAP CPU Speed" 172 172 depends on ARCH_OMAP1 173 173 174 - config OMAP_CLOCKS_SET_BY_BOOTLOADER 175 - bool "OMAP clocks set by bootloader" 176 - depends on ARCH_OMAP1 177 - help 178 - Enable this option to prevent the kernel from overriding the clock 179 - frequencies programmed by bootloader for MPU, DSP, MMUs, TC, 180 - internal LCD controller and MPU peripherals. 181 - 182 174 config OMAP_ARM_216MHZ 183 175 bool "OMAP ARM 216 MHz CPU (1710 only)" 184 176 depends on ARCH_OMAP1 && ARCH_OMAP16XX
+7 -3
arch/arm/mach-omap1/board-ams-delta.c
··· 302 302 omap_cfg_reg(J19_1610_CAM_D6); 303 303 omap_cfg_reg(J18_1610_CAM_D7); 304 304 305 - iotable_init(ams_delta_io_desc, ARRAY_SIZE(ams_delta_io_desc)); 306 - 307 305 omap_board_config = ams_delta_config; 308 306 omap_board_config_size = ARRAY_SIZE(ams_delta_config); 309 307 omap_serial_init(); ··· 371 373 } 372 374 arch_initcall(ams_delta_modem_init); 373 375 376 + static void __init ams_delta_map_io(void) 377 + { 378 + omap15xx_map_io(); 379 + iotable_init(ams_delta_io_desc, ARRAY_SIZE(ams_delta_io_desc)); 380 + } 381 + 374 382 MACHINE_START(AMS_DELTA, "Amstrad E3 (Delta)") 375 383 /* Maintainer: Jonathan McDowell <noodles@earth.li> */ 376 384 .atag_offset = 0x100, 377 - .map_io = omap15xx_map_io, 385 + .map_io = ams_delta_map_io, 378 386 .init_early = omap1_init_early, 379 387 .reserve = omap_reserve, 380 388 .init_irq = omap1_init_irq,
+2 -1
arch/arm/mach-omap1/clock.h
··· 17 17 18 18 #include <plat/clock.h> 19 19 20 - extern int __init omap1_clk_init(void); 20 + int omap1_clk_init(void); 21 + void omap1_clk_late_init(void); 21 22 extern int omap1_clk_enable(struct clk *clk); 22 23 extern void omap1_clk_disable(struct clk *clk); 23 24 extern long omap1_clk_round_rate(struct clk *clk, unsigned long rate);
+42 -19
arch/arm/mach-omap1/clock_data.c
··· 16 16 
 17 17 #include <linux/kernel.h> 
 18 18 #include <linux/clk.h> 
 19 + #include <linux/cpufreq.h> 
 20 + #include <linux/delay.h> 
 19 21 #include <linux/io.h> 
 20 22 
 21 23 #include <asm/mach-types.h> /* for machine_is_* */ 
··· 769 767 .clk_disable_unused = omap1_clk_disable_unused, 
 770 768 }; 
 771 769 
 770 + static void __init omap1_show_rates(void) 
 771 + { 
 772 + pr_notice("Clocking rate (xtal/DPLL1/MPU): " 
 773 + "%ld.%01ld/%ld.%01ld/%ld.%01ld MHz\n", 
 774 + ck_ref.rate / 1000000, (ck_ref.rate / 100000) % 10, 
 775 + ck_dpll1.rate / 1000000, (ck_dpll1.rate / 100000) % 10, 
 776 + arm_ck.rate / 1000000, (arm_ck.rate / 100000) % 10); 
 777 + } 
 778 + 
 772 779 int __init omap1_clk_init(void) 
 773 780 { 
 774 781 struct omap_clk *c; 
··· 846 835 /* We want to be in syncronous scalable mode */ 
 847 836 omap_writew(0x1000, ARM_SYSST); 
 848 837 
 849 - #ifdef CONFIG_OMAP_CLOCKS_SET_BY_BOOTLOADER 
 850 - /* Use values set by bootloader. Determine PLL rate and recalculate 
 851 - * dependent clocks as if kernel had changed PLL or divisors. 
 838 + 
 839 + /* 
 840 + * Initially use the values set by bootloader. Determine PLL rate and 
 841 + * recalculate dependent clocks as if kernel had changed PLL or 
 842 + * divisors. See also omap1_clk_late_init() that can reprogram dpll1 
 843 + * after the SRAM is initialized. 
 852 844 */ 
 853 845 { 
 854 846 unsigned pll_ctl_val = omap_readw(DPLL_CTL); 
··· 876 862 } 
 877 863 } 
 878 864 } 
 879 - #else 
 880 - /* Find the highest supported frequency and enable it */ 
 881 - if (omap1_select_table_rate(&virtual_ck_mpu, ~0)) { 
 882 - printk(KERN_ERR "System frequencies not set. Check your config.\n"); 
 883 - /* Guess sane values (60MHz) */ 
 884 - omap_writew(0x2290, DPLL_CTL); 
 885 - omap_writew(cpu_is_omap7xx() ? 0x3005 : 0x1005, ARM_CKCTL); 
 886 - ck_dpll1.rate = 60000000; 
 887 - } 
 888 - #endif 
 889 865 propagate_rate(&ck_dpll1); 
 890 866 /* Cache rates for clocks connected to ck_ref (not dpll1) */ 
 891 867 propagate_rate(&ck_ref); 
 892 - printk(KERN_INFO "Clocking rate (xtal/DPLL1/MPU): " 
 893 - "%ld.%01ld/%ld.%01ld/%ld.%01ld MHz\n", 
 894 - ck_ref.rate / 1000000, (ck_ref.rate / 100000) % 10, 
 895 - ck_dpll1.rate / 1000000, (ck_dpll1.rate / 100000) % 10, 
 896 - arm_ck.rate / 1000000, (arm_ck.rate / 100000) % 10); 
 897 - 
 868 + omap1_show_rates(); 
 898 869 if (machine_is_omap_perseus2() || machine_is_omap_fsample()) { 
 899 870 /* Select slicer output as OMAP input clock */ 
 900 871 omap_writew(omap_readw(OMAP7XX_PCC_UPLD_CTRL) & ~0x1, 
··· 923 924 clk_enable(&arm_gpio_ck); 
 924 925 
 925 926 return 0; 
 927 + } 
 928 + 
 929 + #define OMAP1_DPLL1_SANE_VALUE 60000000 
 930 + 
 931 + void __init omap1_clk_late_init(void) 
 932 + { 
 933 + unsigned long rate = ck_dpll1.rate; 
 934 + 
 935 + if (rate >= OMAP1_DPLL1_SANE_VALUE) 
 936 + return; 
 937 + 
 938 + /* System booting at unusable rate, force reprogramming of DPLL1 */ 
 939 + ck_dpll1_p->rate = 0; 
 940 + 
 941 + /* Find the highest supported frequency and enable it */ 
 942 + if (omap1_select_table_rate(&virtual_ck_mpu, ~0)) { 
 943 + pr_err("System frequencies not set, using default. Check your config.\n"); 
 944 + omap_writew(0x2290, DPLL_CTL); 
 945 + omap_writew(cpu_is_omap7xx() ? 0x2005 : 0x0005, ARM_CKCTL); 
 946 + ck_dpll1.rate = OMAP1_DPLL1_SANE_VALUE; 
 947 + } 
 948 + propagate_rate(&ck_dpll1); 
 949 + omap1_show_rates(); 
 950 + loops_per_jiffy = cpufreq_scale(loops_per_jiffy, rate, ck_dpll1.rate); 
 926 951 }
+3
arch/arm/mach-omap1/devices.c
··· 30 30 #include <plat/omap7xx.h> 31 31 #include <plat/mcbsp.h> 32 32 33 + #include "clock.h" 34 + 33 35 /*-------------------------------------------------------------------------*/ 34 36 35 37 #if defined(CONFIG_RTC_DRV_OMAP) || defined(CONFIG_RTC_DRV_OMAP_MODULE) ··· 295 293 return -ENODEV; 296 294 297 295 omap_sram_init(); 296 + omap1_clk_late_init(); 298 297 299 298 /* please keep these calls, and their implementations above, 300 299 * in alphabetical order so they're easier to sort through.
+1
arch/arm/mach-omap2/Kconfig
··· 334 334 config OMAP3_EMU 335 335 bool "OMAP3 debugging peripherals" 336 336 depends on ARCH_OMAP3 337 + select ARM_AMBA 337 338 select OC_ETM 338 339 help 339 340 Say Y here to enable debugging hardware of omap3
+1 -4
arch/arm/mach-omap2/Makefile
··· 4 4 5 5 # Common support 6 6 obj-y := id.o io.o control.o mux.o devices.o serial.o gpmc.o timer.o pm.o \ 7 - common.o gpio.o dma.o wd_timer.o 7 + common.o gpio.o dma.o wd_timer.o display.o 8 8 9 9 omap-2-3-common = irq.o sdrc.o 10 10 hwmod-common = omap_hwmod.o \ ··· 263 263 smsc911x-$(CONFIG_SMSC911X) := gpmc-smsc911x.o 264 264 obj-y += $(smsc911x-m) $(smsc911x-y) 265 265 obj-$(CONFIG_ARCH_OMAP4) += hwspinlock.o 266 - 267 - disp-$(CONFIG_OMAP2_DSS) := display.o 268 - obj-y += $(disp-m) $(disp-y) 269 266 270 267 obj-y += common-board-devices.o twl-common.o
+1 -1
arch/arm/mach-omap2/board-rx51-peripherals.c
··· 193 193 static void __init rx51_charger_init(void) 194 194 { 195 195 WARN_ON(gpio_request_one(RX51_USB_TRANSCEIVER_RST_GPIO, 196 - GPIOF_OUT_INIT_LOW, "isp1704_reset")); 196 + GPIOF_OUT_INIT_HIGH, "isp1704_reset")); 197 197 198 198 platform_device_register(&rx51_charger_device); 199 199 }
+1
arch/arm/mach-omap2/cpuidle34xx.c
··· 24 24 25 25 #include <linux/sched.h> 26 26 #include <linux/cpuidle.h> 27 + #include <linux/export.h> 27 28 28 29 #include <plat/prcm.h> 29 30 #include <plat/irqs.h>
+159
arch/arm/mach-omap2/display.c
··· 27 27 #include <plat/omap_hwmod.h> 
 28 28 #include <plat/omap_device.h> 
 29 29 #include <plat/omap-pm.h> 
 30 + #include <plat/common.h> 
 30 31 
 31 32 #include "control.h" 
 33 + #include "display.h" 
 34 + 
 35 + #define DISPC_CONTROL 0x0040 
 36 + #define DISPC_CONTROL2 0x0238 
 37 + #define DISPC_IRQSTATUS 0x0018 
 38 + 
 39 + #define DSS_SYSCONFIG 0x10 
 40 + #define DSS_SYSSTATUS 0x14 
 41 + #define DSS_CONTROL 0x40 
 42 + #define DSS_SDI_CONTROL 0x44 
 43 + #define DSS_PLL_CONTROL 0x48 
 44 + 
 45 + #define LCD_EN_MASK (0x1 << 0) 
 46 + #define DIGIT_EN_MASK (0x1 << 1) 
 47 + 
 48 + #define FRAMEDONE_IRQ_SHIFT 0 
 49 + #define EVSYNC_EVEN_IRQ_SHIFT 2 
 50 + #define EVSYNC_ODD_IRQ_SHIFT 3 
 51 + #define FRAMEDONE2_IRQ_SHIFT 22 
 52 + #define FRAMEDONETV_IRQ_SHIFT 24 
 53 + 
 54 + /* 
 55 + * FRAMEDONE_IRQ_TIMEOUT: how long (in milliseconds) to wait during DISPC 
 56 + * reset before deciding that something has gone wrong 
 57 + */ 
 58 + #define FRAMEDONE_IRQ_TIMEOUT 100 
 32 59 
 33 60 static struct platform_device omap_display_device = { 
 34 61 .name = "omapdss", 
··· 196 169 r = platform_device_register(&omap_display_device); 
 197 170 if (r < 0) 
 198 171 printk(KERN_ERR "Unable to register OMAP-Display device\n"); 
 172 + 
 173 + return r; 
 174 + } 
 175 + 
 176 + static void dispc_disable_outputs(void) 
 177 + { 
 178 + u32 v, irq_mask = 0; 
 179 + bool lcd_en, digit_en, lcd2_en = false; 
 180 + int i; 
 181 + struct omap_dss_dispc_dev_attr *da; 
 182 + struct omap_hwmod *oh; 
 183 + 
 184 + oh = omap_hwmod_lookup("dss_dispc"); 
 185 + if (!oh) { 
 186 + WARN(1, "display: could not disable outputs during reset - could not find dss_dispc hwmod\n"); 
 187 + return; 
 188 + } 
 189 + 
 190 + if (!oh->dev_attr) { 
 191 + pr_err("display: could not disable outputs during reset due to missing dev_attr\n"); 
 192 + return; 
 193 + } 
 194 + 
 195 + da = (struct omap_dss_dispc_dev_attr *)oh->dev_attr; 
 196 + 
 197 + /* store value of LCDENABLE and DIGITENABLE bits */ 
 198 + v = omap_hwmod_read(oh, DISPC_CONTROL); 
 199 + lcd_en = v & LCD_EN_MASK; 
 200 + digit_en = v & DIGIT_EN_MASK; 
 201 + 
 202 + /* store value of LCDENABLE for LCD2 */ 
 203 + if (da->manager_count > 2) { 
 204 + v = omap_hwmod_read(oh, DISPC_CONTROL2); 
 205 + lcd2_en = v & LCD_EN_MASK; 
 206 + } 
 207 + 
 208 + if (!(lcd_en | digit_en | lcd2_en)) 
 209 + return; /* no managers currently enabled */ 
 210 + 
 211 + /* 
 212 + * If any manager was enabled, we need to disable it before 
 213 + * DSS clocks are disabled or DISPC module is reset 
 214 + */ 
 215 + if (lcd_en) 
 216 + irq_mask |= 1 << FRAMEDONE_IRQ_SHIFT; 
 217 + 
 218 + if (digit_en) { 
 219 + if (da->has_framedonetv_irq) { 
 220 + irq_mask |= 1 << FRAMEDONETV_IRQ_SHIFT; 
 221 + } else { 
 222 + irq_mask |= 1 << EVSYNC_EVEN_IRQ_SHIFT | 
 223 + 1 << EVSYNC_ODD_IRQ_SHIFT; 
 224 + } 
 225 + } 
 226 + 
 227 + if (lcd2_en) 
 228 + irq_mask |= 1 << FRAMEDONE2_IRQ_SHIFT; 
 229 + 
 230 + /* 
 231 + * clear any previous FRAMEDONE, FRAMEDONETV, 
 232 + * EVSYNC_EVEN/ODD or FRAMEDONE2 interrupts 
 233 + */ 
 234 + omap_hwmod_write(irq_mask, oh, DISPC_IRQSTATUS); 
 235 + 
 236 + /* disable LCD and TV managers */ 
 237 + v = omap_hwmod_read(oh, DISPC_CONTROL); 
 238 + v &= ~(LCD_EN_MASK | DIGIT_EN_MASK); 
 239 + omap_hwmod_write(v, oh, DISPC_CONTROL); 
 240 + 
 241 + /* disable LCD2 manager */ 
 242 + if (da->manager_count > 2) { 
 243 + v = omap_hwmod_read(oh, DISPC_CONTROL2); 
 244 + v &= ~LCD_EN_MASK; 
 245 + omap_hwmod_write(v, oh, DISPC_CONTROL2); 
 246 + } 
 247 + 
 248 + i = 0; 
 249 + while ((omap_hwmod_read(oh, DISPC_IRQSTATUS) & irq_mask) != 
 250 + irq_mask) { 
 251 + i++; 
 252 + if (i > FRAMEDONE_IRQ_TIMEOUT) { 
 253 + pr_err("didn't get FRAMEDONE1/2 or TV interrupt\n"); 
 254 + break; 
 255 + } 
 256 + mdelay(1); 
 257 + } 
 258 + } 
 259 + 
 260 + #define MAX_MODULE_SOFTRESET_WAIT 10000 
 261 + int omap_dss_reset(struct omap_hwmod *oh) 
 262 + { 
 263 + struct omap_hwmod_opt_clk *oc; 
 264 + int c = 0; 
 265 + int i, r; 
 266 + 
 267 + if (!(oh->class->sysc->sysc_flags & SYSS_HAS_RESET_STATUS)) { 
 268 + pr_err("dss_core: hwmod data doesn't contain reset data\n"); 
 269 + return -EINVAL; 
 270 + } 
 271 + 
 272 + for (i = oh->opt_clks_cnt, oc = oh->opt_clks; i > 0; i--, oc++) 
 273 + if (oc->_clk) 
 274 + clk_enable(oc->_clk); 
 275 + 
 276 + dispc_disable_outputs(); 
 277 + 
 278 + /* clear SDI registers */ 
 279 + if (cpu_is_omap3430()) { 
 280 + omap_hwmod_write(0x0, oh, DSS_SDI_CONTROL); 
 281 + omap_hwmod_write(0x0, oh, DSS_PLL_CONTROL); 
 282 + } 
 283 + 
 284 + /* 
 285 + * clear DSS_CONTROL register to switch DSS clock sources to 
 286 + * PRCM clock, if any 
 287 + */ 
 288 + omap_hwmod_write(0x0, oh, DSS_CONTROL); 
 289 + 
 290 + omap_test_timeout((omap_hwmod_read(oh, oh->class->sysc->syss_offs) 
 291 + & SYSS_RESETDONE_MASK), 
 292 + MAX_MODULE_SOFTRESET_WAIT, c); 
 293 + 
 294 + if (c == MAX_MODULE_SOFTRESET_WAIT) 
 295 + pr_warning("dss_core: waiting for reset to finish failed\n"); 
 296 + else 
 297 + pr_debug("dss_core: softreset done\n"); 
 298 + 
 299 + for (i = oh->opt_clks_cnt, oc = oh->opt_clks; i > 0; i--, oc++) 
 300 + if (oc->_clk) 
 301 + clk_disable(oc->_clk); 
 302 + 
 303 + r = (c == MAX_MODULE_SOFTRESET_WAIT) ? -ETIMEDOUT : 0; 
 199 304 
 200 305 return r; 
 201 306 }
+29
arch/arm/mach-omap2/display.h
··· 1 + /* 2 + * display.h - OMAP2+ integration-specific DSS header 3 + * 4 + * Copyright (C) 2011 Texas Instruments, Inc. 5 + * 6 + * This program is free software; you can redistribute it and/or modify it 7 + * under the terms of the GNU General Public License version 2 as published by 8 + * the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope that it will be useful, but WITHOUT 11 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 + * more details. 14 + * 15 + * You should have received a copy of the GNU General Public License along with 16 + * this program. If not, see <http://www.gnu.org/licenses/>. 17 + */ 18 + 19 + #ifndef __ARCH_ARM_MACH_OMAP2_DISPLAY_H 20 + #define __ARCH_ARM_MACH_OMAP2_DISPLAY_H 21 + 22 + #include <linux/kernel.h> 23 + 24 + struct omap_dss_dispc_dev_attr { 25 + u8 manager_count; 26 + bool has_framedonetv_irq; 27 + }; 28 + 29 + #endif
arch/arm/mach-omap2/io.h
+3 -3
arch/arm/mach-omap2/mcbsp.c
··· 145 145 pdata->reg_size = 4; 146 146 pdata->has_ccr = true; 147 147 } 148 + pdata->set_clk_src = omap2_mcbsp_set_clk_src; 149 + if (id == 1) 150 + pdata->mux_signal = omap2_mcbsp1_mux_rx_clk; 148 151 149 152 if (oh->class->rev == MCBSP_CONFIG_TYPE3) { 150 153 if (id == 2) ··· 177 174 name, oh->name); 178 175 return PTR_ERR(pdev); 179 176 } 180 - pdata->set_clk_src = omap2_mcbsp_set_clk_src; 181 - if (id == 1) 182 - pdata->mux_signal = omap2_mcbsp1_mux_rx_clk; 183 177 omap_mcbsp_count++; 184 178 return 0; 185 179 }
+3 -3
arch/arm/mach-omap2/omap_hwmod.c
··· 749 749 ohii = &oh->mpu_irqs[i++]; 750 750 } while (ohii->irq != -1); 751 751 752 - return i; 752 + return i-1; 753 753 } 754 754 755 755 /** ··· 772 772 ohdi = &oh->sdma_reqs[i++]; 773 773 } while (ohdi->dma_req != -1); 774 774 775 - return i; 775 + return i-1; 776 776 } 777 777 778 778 /** ··· 795 795 mem = &os->addr[i++]; 796 796 } while (mem->pa_start != mem->pa_end); 797 797 798 - return i; 798 + return i-1; 799 799 } 800 800 801 801 /**
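The three `return i-1` fixes above share one pattern: the hwmod resource arrays are sentinel-terminated (-1 IRQ, -1 DMA request, zero-length address region), and the do/while loop increments past the sentinel before exiting, so the raw counter was one too high. A simplified sketch of the IRQ case; the names and sample data are illustrative, not the kernel's:

```c
#include <assert.h>

/* Count entries in a -1 terminated IRQ list, as the fixed
 * _count_mpu_irqs() does: the loop counts the sentinel too,
 * so one must be subtracted before returning. */
static int count_irqs(const int *irqs)
{
    int i = 0;

    if (!irqs)
        return 0;

    do {
        /* body intentionally empty; the condition drives the walk */
    } while (irqs[i++] != -1);

    return i - 1;
}

/* Sample sentinel-terminated lists for demonstration. */
static int demo_count_three(void)
{
    static const int irqs[] = { 64, 65, 66, -1 };
    return count_irqs(irqs);
}

static int demo_count_empty(void)
{
    static const int irqs[] = { -1 };
    return count_irqs(irqs);
}
```

Without the subtraction, every consumer of these counts would have seen one phantom resource at the end of each array.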
+14 -3
arch/arm/mach-omap2/omap_hwmod_2420_data.c
··· 875 875 }; 
 876 876 
 877 877 static struct omap_hwmod_opt_clk dss_opt_clks[] = { 
 878 + /* 
 879 + * The DSS HW needs all DSS clocks enabled during reset. The dss_core 
 880 + * driver does not use these clocks. 
 881 + */ 
 878 882 { .role = "tv_clk", .clk = "dss_54m_fck" }, 
 879 883 { .role = "sys_clk", .clk = "dss2_fck" }, 
 880 884 }; 
··· 903 899 .slaves_cnt = ARRAY_SIZE(omap2420_dss_slaves), 
 904 900 .masters = omap2420_dss_masters, 
 905 901 .masters_cnt = ARRAY_SIZE(omap2420_dss_masters), 
 906 - .flags = HWMOD_NO_IDLEST, 
 902 + .flags = HWMOD_NO_IDLEST | HWMOD_CONTROL_OPT_CLKS_IN_RESET, 
 907 903 }; 
 908 904 
 909 905 /* l4_core -> dss_dispc */ 
··· 943 939 .slaves = omap2420_dss_dispc_slaves, 
 944 940 .slaves_cnt = ARRAY_SIZE(omap2420_dss_dispc_slaves), 
 945 941 .flags = HWMOD_NO_IDLEST, 
 942 + .dev_attr = &omap2_3_dss_dispc_dev_attr 
 946 943 }; 
 947 944 
 948 945 /* l4_core -> dss_rfbi */ 
··· 966 961 &omap2420_l4_core__dss_rfbi, 
 967 962 }; 
 968 963 
 964 + static struct omap_hwmod_opt_clk dss_rfbi_opt_clks[] = { 
 965 + { .role = "ick", .clk = "dss_ick" }, 
 966 + }; 
 967 + 
 969 968 static struct omap_hwmod omap2420_dss_rfbi_hwmod = { 
 970 969 .name = "dss_rfbi", 
 971 970 .class = &omap2_rfbi_hwmod_class, 
··· 981 972 .module_offs = CORE_MOD, 
 982 973 }, 
 983 974 }, 
 975 + .opt_clks = dss_rfbi_opt_clks, 
 976 + .opt_clks_cnt = ARRAY_SIZE(dss_rfbi_opt_clks), 
 984 977 .slaves = omap2420_dss_rfbi_slaves, 
 985 978 .slaves_cnt = ARRAY_SIZE(omap2420_dss_rfbi_slaves), 
 986 979 .flags = HWMOD_NO_IDLEST, 
··· 992 981 static struct omap_hwmod_ocp_if omap2420_l4_core__dss_venc = { 
 993 982 .master = &omap2420_l4_core_hwmod, 
 994 983 .slave = &omap2420_dss_venc_hwmod, 
 995 - .clk = "dss_54m_fck", 
 984 + .clk = "dss_ick", 
 996 985 .addr = omap2_dss_venc_addrs, 
 997 986 .fw = { 
 998 987 .omap2 = { 
··· 1012 1001 static struct omap_hwmod omap2420_dss_venc_hwmod = { 
 1013 1002 .name = "dss_venc", 
 1014 1003 .class = &omap2_venc_hwmod_class, 
 1015 - .main_clk = "dss1_fck", 
 1004 + .main_clk = "dss_54m_fck", 
 1016 1005 .prcm = { 
 1017 1006 .omap2 = { 
 1018 1007 .prcm_reg_id = 1,
+14 -3
arch/arm/mach-omap2/omap_hwmod_2430_data.c
··· 942 942 }; 
 943 943 
 944 944 static struct omap_hwmod_opt_clk dss_opt_clks[] = { 
 945 + /* 
 946 + * The DSS HW needs all DSS clocks enabled during reset. The dss_core 
 947 + * driver does not use these clocks. 
 948 + */ 
 945 949 { .role = "tv_clk", .clk = "dss_54m_fck" }, 
 946 950 { .role = "sys_clk", .clk = "dss2_fck" }, 
 947 951 }; 
··· 970 966 .slaves_cnt = ARRAY_SIZE(omap2430_dss_slaves), 
 971 967 .masters = omap2430_dss_masters, 
 972 968 .masters_cnt = ARRAY_SIZE(omap2430_dss_masters), 
 973 - .flags = HWMOD_NO_IDLEST, 
 969 + .flags = HWMOD_NO_IDLEST | HWMOD_CONTROL_OPT_CLKS_IN_RESET, 
 974 970 }; 
 975 971 
 976 972 /* l4_core -> dss_dispc */ 
··· 1004 1000 .slaves = omap2430_dss_dispc_slaves, 
 1005 1001 .slaves_cnt = ARRAY_SIZE(omap2430_dss_dispc_slaves), 
 1006 1002 .flags = HWMOD_NO_IDLEST, 
 1003 + .dev_attr = &omap2_3_dss_dispc_dev_attr 
 1007 1004 }; 
 1008 1005 
 1009 1006 /* l4_core -> dss_rfbi */ 
··· 1021 1016 &omap2430_l4_core__dss_rfbi, 
 1022 1017 }; 
 1023 1018 
 1019 + static struct omap_hwmod_opt_clk dss_rfbi_opt_clks[] = { 
 1020 + { .role = "ick", .clk = "dss_ick" }, 
 1021 + }; 
 1022 + 
 1024 1023 static struct omap_hwmod omap2430_dss_rfbi_hwmod = { 
 1025 1024 .name = "dss_rfbi", 
 1026 1025 .class = &omap2_rfbi_hwmod_class, 
··· 1036 1027 .module_offs = CORE_MOD, 
 1037 1028 }, 
 1038 1029 }, 
 1030 + .opt_clks = dss_rfbi_opt_clks, 
 1031 + .opt_clks_cnt = ARRAY_SIZE(dss_rfbi_opt_clks), 
 1039 1032 .slaves = omap2430_dss_rfbi_slaves, 
 1040 1033 .slaves_cnt = ARRAY_SIZE(omap2430_dss_rfbi_slaves), 
 1041 1034 .flags = HWMOD_NO_IDLEST, 
··· 1047 1036 static struct omap_hwmod_ocp_if omap2430_l4_core__dss_venc = { 
 1048 1037 .master = &omap2430_l4_core_hwmod, 
 1049 1038 .slave = &omap2430_dss_venc_hwmod, 
 1050 - .clk = "dss_54m_fck", 
 1039 + .clk = "dss_ick", 
 1051 1040 .addr = omap2_dss_venc_addrs, 
 1052 1041 .flags = OCPIF_SWSUP_IDLE, 
 1053 1042 .user = OCP_USER_MPU | OCP_USER_SDMA, 
··· 1061 1050 static struct omap_hwmod omap2430_dss_venc_hwmod = { 
 1062 1051 .name = "dss_venc", 
 1063 1052 .class = &omap2_venc_hwmod_class, 
 1064 - .main_clk = "dss1_fck", 
 1053 + .main_clk = "dss_54m_fck", 
 1065 1054 .prcm = { 
 1066 1055 .omap2 = { 
 1067 1056 .prcm_reg_id = 1,
+4 -1
arch/arm/mach-omap2/omap_hwmod_2xxx_3xxx_ipblock_data.c
··· 11 11 #include <plat/omap_hwmod.h> 12 12 #include <plat/serial.h> 13 13 #include <plat/dma.h> 14 + #include <plat/common.h> 14 15 15 16 #include <mach/irqs.h> 16 17 ··· 44 43 .rev_offs = 0x0000, 45 44 .sysc_offs = 0x0010, 46 45 .syss_offs = 0x0014, 47 - .sysc_flags = (SYSC_HAS_SOFTRESET | SYSC_HAS_AUTOIDLE), 46 + .sysc_flags = (SYSC_HAS_SOFTRESET | SYSC_HAS_AUTOIDLE | 47 + SYSS_HAS_RESET_STATUS), 48 48 .sysc_fields = &omap_hwmod_sysc_type1, 49 49 }; 50 50 51 51 struct omap_hwmod_class omap2_dss_hwmod_class = { 52 52 .name = "dss", 53 53 .sysc = &omap2_dss_sysc, 54 + .reset = omap_dss_reset, 54 55 }; 55 56 56 57 /*
+32 -5
arch/arm/mach-omap2/omap_hwmod_3xxx_data.c
··· 1369 1369 }; 
 1370 1370 
 1371 1371 static struct omap_hwmod_opt_clk dss_opt_clks[] = { 
 1372 - { .role = "tv_clk", .clk = "dss_tv_fck" }, 
 1373 - { .role = "video_clk", .clk = "dss_96m_fck" }, 
 1372 + /* 
 1373 + * The DSS HW needs all DSS clocks enabled during reset. The dss_core 
 1374 + * driver does not use these clocks. 
 1375 + */ 
 1374 1376 { .role = "sys_clk", .clk = "dss2_alwon_fck" }, 
 1377 + { .role = "tv_clk", .clk = "dss_tv_fck" }, 
 1378 + /* required only on OMAP3430 */ 
 1379 + { .role = "tv_dac_clk", .clk = "dss_96m_fck" }, 
 1375 1380 }; 
 1376 1381 
 1377 1382 static struct omap_hwmod omap3430es1_dss_core_hwmod = { 
··· 1399 1394 .slaves_cnt = ARRAY_SIZE(omap3430es1_dss_slaves), 
 1400 1395 .masters = omap3xxx_dss_masters, 
 1401 1396 .masters_cnt = ARRAY_SIZE(omap3xxx_dss_masters), 
 1402 - .flags = HWMOD_NO_IDLEST, 
 1397 + .flags = HWMOD_NO_IDLEST | HWMOD_CONTROL_OPT_CLKS_IN_RESET, 
 1403 1398 }; 
 1404 1399 
 1405 1400 static struct omap_hwmod omap3xxx_dss_core_hwmod = { 
 1406 1401 .name = "dss_core", 
 1402 + .flags = HWMOD_CONTROL_OPT_CLKS_IN_RESET, 
 1407 1403 .class = &omap2_dss_hwmod_class, 
 1408 1404 .main_clk = "dss1_alwon_fck", /* instead of dss_fck */ 
 1409 1405 .sdma_reqs = omap3xxx_dss_sdma_chs, 
··· 1462 1456 .slaves = omap3xxx_dss_dispc_slaves, 
 1463 1457 .slaves_cnt = ARRAY_SIZE(omap3xxx_dss_dispc_slaves), 
 1464 1458 .flags = HWMOD_NO_IDLEST, 
 1459 + .dev_attr = &omap2_3_dss_dispc_dev_attr 
 1465 1460 }; 
 1466 1461 
 1467 1462 /* 
··· 1493 1486 static struct omap_hwmod_ocp_if omap3xxx_l4_core__dss_dsi1 = { 
 1494 1487 .master = &omap3xxx_l4_core_hwmod, 
 1495 1488 .slave = &omap3xxx_dss_dsi1_hwmod, 
 1489 + .clk = "dss_ick", 
 1496 1490 .addr = omap3xxx_dss_dsi1_addrs, 
 1497 1491 .fw = { 
 1498 1492 .omap2 = { 
··· 1510 1502 &omap3xxx_l4_core__dss_dsi1, 
 1511 1503 }; 
 1512 1504 
 1505 + static struct omap_hwmod_opt_clk dss_dsi1_opt_clks[] = { 
 1506 + { .role = "sys_clk", .clk = "dss2_alwon_fck" }, 
 1507 + }; 
 1508 + 
 1513 1509 static struct omap_hwmod omap3xxx_dss_dsi1_hwmod = { 
 1514 1510 .name = "dss_dsi1", 
 1515 1511 .class = &omap3xxx_dsi_hwmod_class, 
··· 1526 1514 .module_offs = OMAP3430_DSS_MOD, 
 1527 1515 }, 
 1528 1516 }, 
 1517 + .opt_clks = dss_dsi1_opt_clks, 
 1518 + .opt_clks_cnt = ARRAY_SIZE(dss_dsi1_opt_clks), 
 1529 1519 .slaves = omap3xxx_dss_dsi1_slaves, 
 1530 1520 .slaves_cnt = ARRAY_SIZE(omap3xxx_dss_dsi1_slaves), 
 1531 1521 .flags = HWMOD_NO_IDLEST, 
··· 1554 1540 &omap3xxx_l4_core__dss_rfbi, 
 1555 1541 }; 
 1556 1542 
 1543 + static struct omap_hwmod_opt_clk dss_rfbi_opt_clks[] = { 
 1544 + { .role = "ick", .clk = "dss_ick" }, 
 1545 + }; 
 1546 + 
 1557 1547 static struct omap_hwmod omap3xxx_dss_rfbi_hwmod = { 
 1558 1548 .name = "dss_rfbi", 
 1559 1549 .class = &omap2_rfbi_hwmod_class, 
··· 1569 1551 .module_offs = OMAP3430_DSS_MOD, 
 1570 1552 }, 
 1571 1553 }, 
 1554 + .opt_clks = dss_rfbi_opt_clks, 
 1555 + .opt_clks_cnt = ARRAY_SIZE(dss_rfbi_opt_clks), 
 1572 1556 .slaves = omap3xxx_dss_rfbi_slaves, 
 1573 1557 .slaves_cnt = ARRAY_SIZE(omap3xxx_dss_rfbi_slaves), 
 1574 1558 .flags = HWMOD_NO_IDLEST, 
··· 1580 1560 static struct omap_hwmod_ocp_if omap3xxx_l4_core__dss_venc = { 
 1581 1561 .master = &omap3xxx_l4_core_hwmod, 
 1582 1562 .slave = &omap3xxx_dss_venc_hwmod, 
 1583 - .clk = "dss_tv_fck", 
 1563 + .clk = "dss_ick", 
 1584 1564 .addr = omap2_dss_venc_addrs, 
 1585 1565 .fw = { 
 1586 1566 .omap2 = { 
··· 1598 1578 &omap3xxx_l4_core__dss_venc, 
 1599 1579 }; 
 1600 1580 
 1581 + static struct omap_hwmod_opt_clk dss_venc_opt_clks[] = { 
 1582 + /* required only on OMAP3430 */ 
 1583 + { .role = "tv_dac_clk", .clk = "dss_96m_fck" }, 
 1584 + }; 
 1585 + 
 1601 1586 static struct omap_hwmod omap3xxx_dss_venc_hwmod = { 
 1602 1587 .name = "dss_venc", 
 1603 1588 .class = &omap2_venc_hwmod_class, 
 1604 - .main_clk = "dss1_alwon_fck", 
 1589 + .main_clk = "dss_tv_fck", 
 1605 1590 .prcm = { 
 1606 1591 .omap2 = { 
 1607 1592 .prcm_reg_id = 1, 
··· 1614 1589 .module_offs = OMAP3430_DSS_MOD, 
 1615 1590 }, 
 1616 1591 }, 
 1592 + .opt_clks = dss_venc_opt_clks, 
 1593 + .opt_clks_cnt = ARRAY_SIZE(dss_venc_opt_clks), 
 1617 1594 .slaves = omap3xxx_dss_venc_slaves, 
 1618 1595 .slaves_cnt = ARRAY_SIZE(omap3xxx_dss_venc_slaves), 
 1619 1596 .flags = HWMOD_NO_IDLEST,
+12 -12
arch/arm/mach-omap2/omap_hwmod_44xx_data.c
··· 30 30 #include <plat/mmc.h> 
 31 31 #include <plat/i2c.h> 
 32 32 #include <plat/dmtimer.h> 
 33 + #include <plat/common.h> 
 33 34 
 34 35 #include "omap_hwmod_common_data.h" 
 35 36 
··· 1188 1187 static struct omap_hwmod_class omap44xx_dss_hwmod_class = { 
 1189 1188 .name = "dss", 
 1190 1189 .sysc = &omap44xx_dss_sysc, 
 1190 + .reset = omap_dss_reset, 
 1191 1191 }; 
 1192 1192 
 1193 1193 /* dss */ 
··· 1242 1240 static struct omap_hwmod_opt_clk dss_opt_clks[] = { 
 1243 1241 { .role = "sys_clk", .clk = "dss_sys_clk" }, 
 1244 1242 { .role = "tv_clk", .clk = "dss_tv_clk" }, 
 1245 - { .role = "dss_clk", .clk = "dss_dss_clk" }, 
 1246 - { .role = "video_clk", .clk = "dss_48mhz_clk" }, 
 1243 + { .role = "hdmi_clk", .clk = "dss_48mhz_clk" }, 
 1247 1244 }; 
 1248 1245 
 1249 1246 static struct omap_hwmod omap44xx_dss_hwmod = { 
 1250 1247 .name = "dss_core", 
 1248 + .flags = HWMOD_CONTROL_OPT_CLKS_IN_RESET, 
 1251 1249 .class = &omap44xx_dss_hwmod_class, 
 1252 1250 .clkdm_name = "l3_dss_clkdm", 
 1253 1251 .main_clk = "dss_dss_clk", 
··· 1327 1325 { } 
 1328 1326 }; 
 1329 1327 
 1328 + static struct omap_dss_dispc_dev_attr omap44xx_dss_dispc_dev_attr = { 
 1329 + .manager_count = 3, 
 1330 + .has_framedonetv_irq = 1 
 1331 + }; 
 1332 + 
 1330 1333 /* l4_per -> dss_dispc */ 
 1331 1334 static struct omap_hwmod_ocp_if omap44xx_l4_per__dss_dispc = { 
 1332 1335 .master = &omap44xx_l4_per_hwmod, 
··· 1347 1340 &omap44xx_l4_per__dss_dispc, 
 1348 1341 }; 
 1349 1342 
 1350 - static struct omap_hwmod_opt_clk dss_dispc_opt_clks[] = { 
 1351 - { .role = "sys_clk", .clk = "dss_sys_clk" }, 
 1352 - { .role = "tv_clk", .clk = "dss_tv_clk" }, 
 1353 - { .role = "hdmi_clk", .clk = "dss_48mhz_clk" }, 
 1354 - }; 
 1355 - 
 1356 1343 static struct omap_hwmod omap44xx_dss_dispc_hwmod = { 
 1357 1344 .name = "dss_dispc", 
 1358 1345 .class = &omap44xx_dispc_hwmod_class, 
··· 1360 1359 .context_offs = OMAP4_RM_DSS_DSS_CONTEXT_OFFSET, 
 1361 1360 }, 
 1362 1361 }, 
 1363 - .opt_clks = dss_dispc_opt_clks, 
 1364 - .opt_clks_cnt = ARRAY_SIZE(dss_dispc_opt_clks), 
 1365 1362 .slaves = omap44xx_dss_dispc_slaves, 
 1366 1363 .slaves_cnt = ARRAY_SIZE(omap44xx_dss_dispc_slaves), 
 1364 + .dev_attr = &omap44xx_dss_dispc_dev_attr 
 1367 1365 }; 
 1368 1366 
 1369 1367 /* 
··· 1624 1624 .clkdm_name = "l3_dss_clkdm", 
 1625 1625 .mpu_irqs = omap44xx_dss_hdmi_irqs, 
 1626 1626 .sdma_reqs = omap44xx_dss_hdmi_sdma_reqs, 
 1627 - .main_clk = "dss_dss_clk", 
 1627 + .main_clk = "dss_48mhz_clk", 
 1628 1628 .prcm = { 
 1629 1629 .omap4 = { 
 1630 1630 .clkctrl_offs = OMAP4_CM_DSS_DSS_CLKCTRL_OFFSET, 
··· 1785 1785 .name = "dss_venc", 
 1786 1786 .class = &omap44xx_venc_hwmod_class, 
 1787 1787 .clkdm_name = "l3_dss_clkdm", 
 1788 - .main_clk = "dss_dss_clk", 
 1788 + .main_clk = "dss_tv_clk", 
 1789 1789 .prcm = { 
 1790 1790 .omap4 = { 
 1791 1791 .clkctrl_offs = OMAP4_CM_DSS_DSS_CLKCTRL_OFFSET,
+4
arch/arm/mach-omap2/omap_hwmod_common_data.c
··· 49 49 .srst_shift = SYSC_TYPE2_SOFTRESET_SHIFT, 50 50 }; 51 51 52 + struct omap_dss_dispc_dev_attr omap2_3_dss_dispc_dev_attr = { 53 + .manager_count = 2, 54 + .has_framedonetv_irq = 0 55 + };
+4
arch/arm/mach-omap2/omap_hwmod_common_data.h
··· 16 16 17 17 #include <plat/omap_hwmod.h> 18 18 19 + #include "display.h" 20 + 19 21 /* Common address space across OMAP2xxx */ 20 22 extern struct omap_hwmod_addr_space omap2xxx_uart1_addr_space[]; 21 23 extern struct omap_hwmod_addr_space omap2xxx_uart2_addr_space[]; ··· 112 110 extern struct omap_hwmod_class omap2xxx_dma_hwmod_class; 113 111 extern struct omap_hwmod_class omap2xxx_mailbox_hwmod_class; 114 112 extern struct omap_hwmod_class omap2xxx_mcspi_class; 113 + 114 + extern struct omap_dss_dispc_dev_attr omap2_3_dss_dispc_dev_attr; 115 115 116 116 #endif
+1 -1
arch/arm/mach-omap2/omap_l3_noc.c
··· 237 237 static const struct of_device_id l3_noc_match[] = { 238 238 {.compatible = "ti,omap4-l3-noc", }, 239 239 {}, 240 - } 240 + }; 241 241 MODULE_DEVICE_TABLE(of, l3_noc_match); 242 242 #else 243 243 #define l3_noc_match NULL
+2 -4
arch/arm/mach-omap2/pm.c
··· 24 24 #include "powerdomain.h" 25 25 #include "clockdomain.h" 26 26 #include "pm.h" 27 + #include "twl-common.h" 27 28 28 29 static struct omap_device_pm_latency *pm_lats; 29 30 ··· 227 226 228 227 static int __init omap2_common_pm_late_init(void) 229 228 { 230 - /* Init the OMAP TWL parameters */ 231 - omap3_twl_init(); 232 - omap4_twl_init(); 233 - 234 229 /* Init the voltage layer */ 230 + omap_pmic_late_init(); 235 231 omap_voltage_late_init(); 236 232 237 233 /* Initialize the voltages */
+1 -1
arch/arm/mach-omap2/smartreflex.c
··· 139 139 sr_write_reg(sr_info, ERRCONFIG_V1, status); 140 140 } else if (sr_info->ip_type == SR_TYPE_V2) { 141 141 /* Read the status bits */ 142 - sr_read_reg(sr_info, IRQSTATUS); 142 + status = sr_read_reg(sr_info, IRQSTATUS); 143 143 144 144 /* Clear them by writing back */ 145 145 sr_write_reg(sr_info, IRQSTATUS, status);
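The smartreflex fix above is another subtle one: the IRQSTATUS read was executed but its result discarded, so the write-back cleared whatever stale value happened to be in `status` rather than the bits actually pending. A toy model of the write-1-to-clear sequence; the register model and names are ours, not the driver's:

```c
#include <assert.h>

static unsigned int irqstatus_reg;

static unsigned int sr_read_stub(void) { return irqstatus_reg; }

/* Write-1-to-clear: only bits set in the written value are cleared. */
static void sr_write_stub(unsigned int val) { irqstatus_reg &= ~val; }

/* Buggy sequence: read result dropped, stale status written back.
 * Returns the bits still pending afterwards. */
static unsigned int clear_pending_buggy(unsigned int pending)
{
    unsigned int status = 0;      /* stale local value */

    irqstatus_reg = pending;      /* hardware latches events */
    sr_read_stub();               /* return value discarded, as in the bug */
    sr_write_stub(status);        /* clears nothing */
    return irqstatus_reg;
}

/* Fixed sequence: capture the pending bits, then acknowledge them. */
static unsigned int clear_pending_fixed(unsigned int pending)
{
    unsigned int status;

    irqstatus_reg = pending;
    status = sr_read_stub();
    sr_write_stub(status);
    return irqstatus_reg;         /* 0 when all were acknowledged */
}
```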
+11
arch/arm/mach-omap2/twl-common.c
··· 30 30 #include <plat/usb.h> 31 31 32 32 #include "twl-common.h" 33 + #include "pm.h" 33 34 34 35 static struct i2c_board_info __initdata pmic_i2c_board_info = { 35 36 .addr = 0x48, ··· 47 46 pmic_i2c_board_info.platform_data = pmic_data; 48 47 49 48 omap_register_i2c_bus(bus, clkrate, &pmic_i2c_board_info, 1); 49 + } 50 + 51 + void __init omap_pmic_late_init(void) 52 + { 53 + /* Init the OMAP TWL parameters (if PMIC has been registerd) */ 54 + if (!pmic_i2c_board_info.irq) 55 + return; 56 + 57 + omap3_twl_init(); 58 + omap4_twl_init(); 50 59 } 51 60 52 61 #if defined(CONFIG_ARCH_OMAP3)
+3
arch/arm/mach-omap2/twl-common.h
··· 1 1 #ifndef __OMAP_PMIC_COMMON__ 2 2 #define __OMAP_PMIC_COMMON__ 3 3 4 + #include <plat/irqs.h> 5 + 4 6 #define TWL_COMMON_PDATA_USB (1 << 0) 5 7 #define TWL_COMMON_PDATA_BCI (1 << 1) 6 8 #define TWL_COMMON_PDATA_MADC (1 << 2) ··· 32 30 33 31 void omap_pmic_init(int bus, u32 clkrate, const char *pmic_type, int pmic_irq, 34 32 struct twl4030_platform_data *pmic_data); 33 + void omap_pmic_late_init(void); 35 34 36 35 static inline void omap2_pmic_init(const char *pmic_type, 37 36 struct twl4030_platform_data *pmic_data)
+1
arch/arm/mach-prima2/pm.c
··· 9 9 #include <linux/kernel.h> 10 10 #include <linux/suspend.h> 11 11 #include <linux/slab.h> 12 + #include <linux/module.h> 12 13 #include <linux/of.h> 13 14 #include <linux/of_address.h> 14 15 #include <linux/of_device.h>
+1
arch/arm/mach-prima2/prima2.c
··· 8 8 9 9 #include <linux/init.h> 10 10 #include <linux/kernel.h> 11 + #include <asm/sizes.h> 11 12 #include <asm/mach-types.h> 12 13 #include <asm/mach/arch.h> 13 14 #include <linux/of.h>
+1 -1
arch/arm/mach-pxa/balloon3.c
··· 307 307 /****************************************************************************** 308 308 * USB Gadget 309 309 ******************************************************************************/ 310 - #if defined(CONFIG_USB_GADGET_PXA27X)||defined(CONFIG_USB_GADGET_PXA27X_MODULE) 310 + #if defined(CONFIG_USB_PXA27X)||defined(CONFIG_USB_PXA27X_MODULE) 311 311 static void balloon3_udc_command(int cmd) 312 312 { 313 313 if (cmd == PXA2XX_UDC_CMD_CONNECT)
+1 -1
arch/arm/mach-pxa/colibri-pxa320.c
··· 146 146 static inline void __init colibri_pxa320_init_eth(void) {} 147 147 #endif /* CONFIG_AX88796 */ 148 148 149 - #if defined(CONFIG_USB_GADGET_PXA27X)||defined(CONFIG_USB_GADGET_PXA27X_MODULE) 149 + #if defined(CONFIG_USB_PXA27X)||defined(CONFIG_USB_PXA27X_MODULE) 150 150 static struct gpio_vbus_mach_info colibri_pxa320_gpio_vbus_info = { 151 151 .gpio_vbus = mfp_to_gpio(MFP_PIN_GPIO96), 152 152 .gpio_pullup = -1,
+1 -1
arch/arm/mach-pxa/gumstix.c
··· 106 106 } 107 107 #endif 108 108 109 - #ifdef CONFIG_USB_GADGET_PXA25X 109 + #ifdef CONFIG_USB_PXA25X 110 110 static struct gpio_vbus_mach_info gumstix_udc_info = { 111 111 .gpio_vbus = GPIO_GUMSTIX_USB_GPIOn, 112 112 .gpio_pullup = GPIO_GUMSTIX_USB_GPIOx,
+2 -2
arch/arm/mach-pxa/include/mach/palm27x.h
··· 37 37 #define palm27x_lcd_init(power, mode) do {} while (0) 38 38 #endif 39 39 40 - #if defined(CONFIG_USB_GADGET_PXA27X) || \ 41 - defined(CONFIG_USB_GADGET_PXA27X_MODULE) 40 + #if defined(CONFIG_USB_PXA27X) || \ 41 + defined(CONFIG_USB_PXA27X_MODULE) 42 42 extern void __init palm27x_udc_init(int vbus, int pullup, 43 43 int vbus_inverted); 44 44 #else
+2 -2
arch/arm/mach-pxa/palm27x.c
··· 164 164 /****************************************************************************** 165 165 * USB Gadget 166 166 ******************************************************************************/ 167 - #if defined(CONFIG_USB_GADGET_PXA27X) || \ 168 - defined(CONFIG_USB_GADGET_PXA27X_MODULE) 167 + #if defined(CONFIG_USB_PXA27X) || \ 168 + defined(CONFIG_USB_PXA27X_MODULE) 169 169 static struct gpio_vbus_mach_info palm27x_udc_info = { 170 170 .gpio_vbus_inverted = 1, 171 171 };
+1 -1
arch/arm/mach-pxa/palmtc.c
··· 338 338 /****************************************************************************** 339 339 * UDC 340 340 ******************************************************************************/ 341 - #if defined(CONFIG_USB_GADGET_PXA25X)||defined(CONFIG_USB_GADGET_PXA25X_MODULE) 341 + #if defined(CONFIG_USB_PXA25X)||defined(CONFIG_USB_PXA25X_MODULE) 342 342 static struct gpio_vbus_mach_info palmtc_udc_info = { 343 343 .gpio_vbus = GPIO_NR_PALMTC_USB_DETECT_N, 344 344 .gpio_vbus_inverted = 1,
+1 -1
arch/arm/mach-pxa/vpac270.c
··· 343 343 /****************************************************************************** 344 344 * USB Gadget 345 345 ******************************************************************************/ 346 - #if defined(CONFIG_USB_GADGET_PXA27X)||defined(CONFIG_USB_GADGET_PXA27X_MODULE) 346 + #if defined(CONFIG_USB_PXA27X)||defined(CONFIG_USB_PXA27X_MODULE) 347 347 static struct gpio_vbus_mach_info vpac270_gpio_vbus_info = { 348 348 .gpio_vbus = GPIO41_VPAC270_UDC_DETECT, 349 349 .gpio_pullup = -1,
+1
arch/arm/mach-s3c64xx/dev-spi.c
··· 10 10 11 11 #include <linux/kernel.h> 12 12 #include <linux/string.h> 13 + #include <linux/export.h> 13 14 #include <linux/platform_device.h> 14 15 #include <linux/dma-mapping.h> 15 16 #include <linux/gpio.h>
+1 -1
arch/arm/mach-s3c64xx/mach-crag6410-module.c
··· 8 8 * published by the Free Software Foundation. 9 9 */ 10 10 11 - #include <linux/module.h> 11 + #include <linux/export.h> 12 12 #include <linux/interrupt.h> 13 13 #include <linux/i2c.h> 14 14
+1 -1
arch/arm/mach-s3c64xx/s3c6400.c
··· 70 70 s3c64xx_init_irq(~0 & ~(0xf << 5), ~0); 71 71 } 72 72 73 - struct sysdev_class s3c6400_sysclass = { 73 + static struct sysdev_class s3c6400_sysclass = { 74 74 .name = "s3c6400-core", 75 75 }; 76 76
+1 -1
arch/arm/mach-s3c64xx/setup-fb-24bpp.c
··· 20 20 #include <plat/fb.h> 21 21 #include <plat/gpio-cfg.h> 22 22 23 - extern void s3c64xx_fb_gpio_setup_24bpp(void) 23 + void s3c64xx_fb_gpio_setup_24bpp(void) 24 24 { 25 25 s3c_gpio_cfgrange_nopull(S3C64XX_GPI(0), 16, S3C_GPIO_SFN(2)); 26 26 s3c_gpio_cfgrange_nopull(S3C64XX_GPJ(0), 12, S3C_GPIO_SFN(2));
+1
arch/arm/mach-s5pv210/mach-smdkv210.c
··· 273 273 274 274 static struct platform_pwm_backlight_data smdkv210_bl_data = { 275 275 .pwm_id = 3, 276 + .pwm_period_ns = 1000, 276 277 }; 277 278 278 279 static void __init smdkv210_map_io(void)
+2 -2
arch/arm/mach-sa1100/Makefile.boot
··· 1 - ifeq ($(CONFIG_ARCH_SA1100),y) 2 - zreladdr-$(CONFIG_SA1111) += 0xc0208000 1 + ifeq ($(CONFIG_SA1111),y) 2 + zreladdr-y += 0xc0208000 3 3 else 4 4 zreladdr-y += 0xc0008000 5 5 endif
+1 -1
arch/arm/mm/cache-l2x0.c
··· 61 61 { 62 62 void __iomem *base = l2x0_base; 63 63 64 - #ifdef CONFIG_ARM_ERRATA_753970 64 + #ifdef CONFIG_PL310_ERRATA_753970 65 65 /* write to an unmapped register */ 66 66 writel_relaxed(0, base + L2X0_DUMMY_REG); 67 67 #else
+10 -1
arch/arm/mm/dma-mapping.c
··· 168 168 pte_t *pte; 169 169 int i = 0; 170 170 unsigned long base = consistent_base; 171 - unsigned long num_ptes = (CONSISTENT_END - base) >> PGDIR_SHIFT; 171 + unsigned long num_ptes = (CONSISTENT_END - base) >> PMD_SHIFT; 172 172 173 173 consistent_pte = kmalloc(num_ptes * sizeof(pte_t), GFP_KERNEL); 174 174 if (!consistent_pte) { ··· 331 331 { 332 332 struct page *page; 333 333 void *addr; 334 + 335 + /* 336 + * Following is a work-around (a.k.a. hack) to prevent pages 337 + * with __GFP_COMP being passed to split_page() which cannot 338 + * handle them. The real problem is that this flag probably 339 + * should be 0 on ARM as it is not supported on this 340 + * platform; see CONFIG_HUGETLBFS. 341 + */ 342 + gfp &= ~(__GFP_COMP); 334 343 335 344 *handle = ~0; 336 345 size = PAGE_ALIGN(size);
+6 -17
arch/arm/mm/mmap.c
··· 9 9 #include <linux/io.h> 10 10 #include <linux/personality.h> 11 11 #include <linux/random.h> 12 - #include <asm/cputype.h> 13 - #include <asm/system.h> 12 + #include <asm/cachetype.h> 14 13 15 14 #define COLOUR_ALIGN(addr,pgoff) \ 16 15 ((((addr)+SHMLBA-1)&~(SHMLBA-1)) + \ ··· 31 32 struct mm_struct *mm = current->mm; 32 33 struct vm_area_struct *vma; 33 34 unsigned long start_addr; 34 - #if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_V6K) 35 - unsigned int cache_type; 36 - int do_align = 0, aliasing = 0; 35 + int do_align = 0; 36 + int aliasing = cache_is_vipt_aliasing(); 37 37 38 38 /* 39 39 * We only need to do colour alignment if either the I or D 40 - * caches alias. This is indicated by bits 9 and 21 of the 41 - * cache type register. 40 + * caches alias. 42 41 */ 43 - cache_type = read_cpuid_cachetype(); 44 - if (cache_type != read_cpuid_id()) { 45 - aliasing = (cache_type | cache_type >> 12) & (1 << 11); 46 - if (aliasing) 47 - do_align = filp || flags & MAP_SHARED; 48 - } 49 - #else 50 - #define do_align 0 51 - #define aliasing 0 52 - #endif 42 + if (aliasing) 43 + do_align = filp || (flags & MAP_SHARED); 53 44 54 45 /* 55 46 * We enforce the MAP_FIXED case.
+1
arch/arm/plat-mxc/cpufreq.c
··· 17 17 * the CPU clock speed on the fly. 18 18 */ 19 19 20 + #include <linux/module.h> 20 21 #include <linux/cpufreq.h> 21 22 #include <linux/clk.h> 22 23 #include <linux/err.h>
+1 -1
arch/arm/plat-mxc/include/mach/common.h
··· 85 85 }; 86 86 87 87 extern void mx5_cpu_lp_set(enum mxc_cpu_pwr_mode mode); 88 - extern void (*imx_idle)(void); 89 88 extern void imx_print_silicon_rev(const char *cpu, int srev); 90 89 91 90 void avic_handle_irq(struct pt_regs *); ··· 132 133 extern void imx53_smd_common_init(void); 133 134 extern int imx6q_set_lpm(enum mxc_cpu_pwr_mode mode); 134 135 extern void imx6q_pm_init(void); 136 + extern void imx6q_clock_map_io(void); 135 137 #endif
-14
arch/arm/plat-mxc/include/mach/mxc.h
··· 50 50 #define IMX_CHIP_REVISION_3_3 0x33 51 51 #define IMX_CHIP_REVISION_UNKNOWN 0xff 52 52 53 - #define IMX_CHIP_REVISION_1_0_STRING "1.0" 54 - #define IMX_CHIP_REVISION_1_1_STRING "1.1" 55 - #define IMX_CHIP_REVISION_1_2_STRING "1.2" 56 - #define IMX_CHIP_REVISION_1_3_STRING "1.3" 57 - #define IMX_CHIP_REVISION_2_0_STRING "2.0" 58 - #define IMX_CHIP_REVISION_2_1_STRING "2.1" 59 - #define IMX_CHIP_REVISION_2_2_STRING "2.2" 60 - #define IMX_CHIP_REVISION_2_3_STRING "2.3" 61 - #define IMX_CHIP_REVISION_3_0_STRING "3.0" 62 - #define IMX_CHIP_REVISION_3_1_STRING "3.1" 63 - #define IMX_CHIP_REVISION_3_2_STRING "3.2" 64 - #define IMX_CHIP_REVISION_3_3_STRING "3.3" 65 - #define IMX_CHIP_REVISION_UNKNOWN_STRING "unknown" 66 - 67 53 #ifndef __ASSEMBLY__ 68 54 extern unsigned int __mxc_cpu_type; 69 55 #endif
+1 -6
arch/arm/plat-mxc/include/mach/system.h
··· 17 17 #ifndef __ASM_ARCH_MXC_SYSTEM_H__ 18 18 #define __ASM_ARCH_MXC_SYSTEM_H__ 19 19 20 - extern void (*imx_idle)(void); 21 - 22 20 static inline void arch_idle(void) 23 21 { 24 - if (imx_idle != NULL) 25 - (imx_idle)(); 26 - else 27 - cpu_do_idle(); 22 + cpu_do_idle(); 28 23 } 29 24 30 25 void arch_reset(char mode, const char *cmd);
+6 -1
arch/arm/plat-mxc/pwm.c
··· 32 32 #define MX3_PWMSAR 0x0C /* PWM Sample Register */ 33 33 #define MX3_PWMPR 0x10 /* PWM Period Register */ 34 34 #define MX3_PWMCR_PRESCALER(x) (((x - 1) & 0xFFF) << 4) 35 + #define MX3_PWMCR_DOZEEN (1 << 24) 36 + #define MX3_PWMCR_WAITEN (1 << 23) 37 + #define MX3_PWMCR_DBGEN (1 << 22) 35 38 #define MX3_PWMCR_CLKSRC_IPG_HIGH (2 << 16) 36 39 #define MX3_PWMCR_CLKSRC_IPG (1 << 16) 37 40 #define MX3_PWMCR_EN (1 << 0) ··· 80 77 writel(duty_cycles, pwm->mmio_base + MX3_PWMSAR); 81 78 writel(period_cycles, pwm->mmio_base + MX3_PWMPR); 82 79 83 - cr = MX3_PWMCR_PRESCALER(prescale) | MX3_PWMCR_EN; 80 + cr = MX3_PWMCR_PRESCALER(prescale) | 81 + MX3_PWMCR_DOZEEN | MX3_PWMCR_WAITEN | 82 + MX3_PWMCR_DBGEN | MX3_PWMCR_EN; 84 83 85 84 if (cpu_is_mx25()) 86 85 cr |= MX3_PWMCR_CLKSRC_IPG;
+2 -1
arch/arm/plat-mxc/system.c
··· 21 21 #include <linux/io.h> 22 22 #include <linux/err.h> 23 23 #include <linux/delay.h> 24 + #include <linux/module.h> 24 25 25 26 #include <mach/hardware.h> 26 27 #include <mach/common.h> ··· 29 28 #include <asm/system.h> 30 29 #include <asm/mach-types.h> 31 30 32 - void (*imx_idle)(void) = NULL; 33 31 void __iomem *(*imx_ioremap)(unsigned long, size_t, unsigned int) = NULL; 32 + EXPORT_SYMBOL_GPL(imx_ioremap); 34 33 35 34 static void __iomem *wdog_base; 36 35
+1 -1
arch/arm/plat-omap/include/plat/clock.h
··· 165 165 u8 auto_recal_bit; 166 166 u8 recal_en_bit; 167 167 u8 recal_st_bit; 168 - u8 flags; 169 168 # endif 169 + u8 flags; 170 170 }; 171 171 172 172 #endif
+3
arch/arm/plat-omap/include/plat/common.h
··· 30 30 #include <linux/delay.h> 31 31 32 32 #include <plat/i2c.h> 33 + #include <plat/omap_hwmod.h> 33 34 34 35 struct sys_timer; 35 36 ··· 55 54 void am35xx_init_early(void); 56 55 void ti816x_init_early(void); 57 56 void omap4430_init_early(void); 57 + 58 + extern int omap_dss_reset(struct omap_hwmod *); 58 59 59 60 void omap_sram_init(void); 60 61
+1 -1
arch/arm/plat-s3c24xx/cpu-freq-debugfs.c
··· 12 12 */ 13 13 14 14 #include <linux/init.h> 15 - #include <linux/module.h> 15 + #include <linux/export.h> 16 16 #include <linux/interrupt.h> 17 17 #include <linux/ioport.h> 18 18 #include <linux/cpufreq.h>
+1
arch/arm/plat-s5p/sysmmu.c
··· 11 11 #include <linux/io.h> 12 12 #include <linux/interrupt.h> 13 13 #include <linux/platform_device.h> 14 + #include <linux/export.h> 14 15 15 16 #include <asm/pgtable.h> 16 17
-1
arch/arm/plat-samsung/dev-backlight.c
··· 15 15 #include <linux/slab.h> 16 16 #include <linux/io.h> 17 17 #include <linux/pwm_backlight.h> 18 - #include <linux/slab.h> 19 18 20 19 #include <plat/devs.h> 21 20 #include <plat/gpio-cfg.h>
+2
arch/arm/plat-samsung/include/plat/gpio-cfg.h
··· 24 24 #ifndef __PLAT_GPIO_CFG_H 25 25 #define __PLAT_GPIO_CFG_H __FILE__ 26 26 27 + #include <linux/types.h> 28 + 27 29 typedef unsigned int __bitwise__ samsung_gpio_pull_t; 28 30 typedef unsigned int __bitwise__ s5p_gpio_drvstr_t; 29 31
+1 -1
arch/arm/plat-samsung/pd.c
··· 11 11 */ 12 12 13 13 #include <linux/init.h> 14 - #include <linux/module.h> 14 + #include <linux/export.h> 15 15 #include <linux/platform_device.h> 16 16 #include <linux/err.h> 17 17 #include <linux/pm_runtime.h>
+1 -1
arch/arm/plat-samsung/pwm.c
··· 11 11 * the Free Software Foundation; either version 2 of the License. 12 12 */ 13 13 14 - #include <linux/module.h> 14 + #include <linux/export.h> 15 15 #include <linux/kernel.h> 16 16 #include <linux/platform_device.h> 17 17 #include <linux/slab.h>
+1
arch/arm/tools/mach-types
··· 1123 1123 thales_adc MACH_THALES_ADC THALES_ADC 3492 1124 1124 ubisys_p9d_evp MACH_UBISYS_P9D_EVP UBISYS_P9D_EVP 3493 1125 1125 atdgp318 MACH_ATDGP318 ATDGP318 3494 1126 + m28evk MACH_M28EVK M28EVK 3613 1126 1127 smdk4212 MACH_SMDK4212 SMDK4212 3638 1127 1128 smdk4412 MACH_SMDK4412 SMDK4412 3765
+3 -1
arch/m68k/include/asm/unistd.h
··· 350 350 #define __NR_clock_adjtime 342 351 351 #define __NR_syncfs 343 352 352 #define __NR_setns 344 353 + #define __NR_process_vm_readv 345 354 + #define __NR_process_vm_writev 346 353 355 354 356 #ifdef __KERNEL__ 355 357 356 - #define NR_syscalls 345 358 + #define NR_syscalls 347 357 359 358 360 #define __ARCH_WANT_IPC_PARSE_VERSION 359 361 #define __ARCH_WANT_OLD_READDIR
+2
arch/m68k/kernel/syscalltable.S
··· 365 365 .long sys_clock_adjtime 366 366 .long sys_syncfs 367 367 .long sys_setns 368 + .long sys_process_vm_readv /* 345 */ 369 + .long sys_process_vm_writev 368 370
+4 -4
arch/mips/kernel/perf_event_mipsxx.c
··· 623 623 if (!atomic_inc_not_zero(&active_events)) { 624 624 if (atomic_read(&active_events) > MIPS_MAX_HWEVENTS) { 625 625 atomic_dec(&active_events); 626 - return -ENOSPC; 626 + return -EINVAL; 627 627 } 628 628 629 629 mutex_lock(&pmu_reserve_mutex); ··· 732 732 memset(&fake_cpuc, 0, sizeof(fake_cpuc)); 733 733 734 734 if (!validate_event(&fake_cpuc, leader)) 735 - return -ENOSPC; 735 + return -EINVAL; 736 736 737 737 list_for_each_entry(sibling, &leader->sibling_list, group_entry) { 738 738 if (!validate_event(&fake_cpuc, sibling)) 739 - return -ENOSPC; 739 + return -EINVAL; 740 740 } 741 741 742 742 if (!validate_event(&fake_cpuc, event)) 743 - return -ENOSPC; 743 + return -EINVAL; 744 744 745 745 return 0; 746 746 }
+13 -4
arch/powerpc/boot/dts/p1023rds.dts
··· 449 449 interrupt-parent = <&mpic>; 450 450 interrupts = <16 2>; 451 451 interrupt-map-mask = <0xf800 0 0 7>; 452 + /* IRQ[0:3] are pulled up on board, set to active-low */ 452 453 interrupt-map = < 453 454 /* IDSEL 0x0 */ 454 455 0000 0 0 1 &mpic 0 1 ··· 489 488 interrupt-parent = <&mpic>; 490 489 interrupts = <16 2>; 491 490 interrupt-map-mask = <0xf800 0 0 7>; 491 + /* 492 + * IRQ[4:6] only for PCIe, set to active-high, 493 + * IRQ[7] is pulled up on board, set to active-low 494 + */ 492 495 interrupt-map = < 493 496 /* IDSEL 0x0 */ 494 - 0000 0 0 1 &mpic 4 1 495 - 0000 0 0 2 &mpic 5 1 496 - 0000 0 0 3 &mpic 6 1 497 + 0000 0 0 1 &mpic 4 2 498 + 0000 0 0 2 &mpic 5 2 499 + 0000 0 0 3 &mpic 6 2 497 500 0000 0 0 4 &mpic 7 1 498 501 >; 499 502 ranges = <0x2000000 0x0 0xa0000000 ··· 532 527 interrupt-parent = <&mpic>; 533 528 interrupts = <16 2>; 534 529 interrupt-map-mask = <0xf800 0 0 7>; 530 + /* 531 + * IRQ[8:10] are pulled up on board, set to active-low 532 + * IRQ[11] only for PCIe, set to active-high, 533 + */ 535 534 interrupt-map = < 536 535 /* IDSEL 0x0 */ 537 536 0000 0 0 1 &mpic 8 1 538 537 0000 0 0 2 &mpic 9 1 539 538 0000 0 0 3 &mpic 10 1 540 - 0000 0 0 4 &mpic 11 1 539 + 0000 0 0 4 &mpic 11 2 541 540 >; 542 541 ranges = <0x2000000 0x0 0x80000000 543 542 0x2000000 0x0 0x80000000
+2
arch/powerpc/configs/ppc44x_defconfig
··· 52 52 CONFIG_MTD_JEDECPROBE=y 53 53 CONFIG_MTD_CFI_AMDSTD=y 54 54 CONFIG_MTD_PHYSMAP_OF=y 55 + CONFIG_MTD_NAND=m 56 + CONFIG_MTD_NAND_NDFC=m 55 57 CONFIG_MTD_UBI=m 56 58 CONFIG_MTD_UBI_GLUEBI=m 57 59 CONFIG_PROC_DEVICETREE=y
+1
arch/powerpc/mm/hugetlbpage.c
··· 15 15 #include <linux/of_fdt.h> 16 16 #include <linux/memblock.h> 17 17 #include <linux/bootmem.h> 18 + #include <linux/moduleparam.h> 18 19 #include <asm/pgtable.h> 19 20 #include <asm/pgalloc.h> 20 21 #include <asm/tlb.h>
+1 -1
arch/powerpc/platforms/85xx/Kconfig
··· 203 203 select PPC_E500MC 204 204 select PHYS_64BIT 205 205 select SWIOTLB 206 - select MPC8xxx_GPIO 206 + select GPIO_MPC8XXX 207 207 select HAS_RAPIDIO 208 208 select PPC_EPAPR_HV_PIC 209 209 help
+1 -1
arch/powerpc/platforms/85xx/p3060_qds.c
··· 70 70 .power_save = e500_idle, 71 71 }; 72 72 73 - machine_device_initcall(p3060_qds, declare_of_platform_devices); 73 + machine_device_initcall(p3060_qds, corenet_ds_publish_devices); 74 74 75 75 #ifdef CONFIG_SWIOTLB 76 76 machine_arch_initcall(p3060_qds, swiotlb_setup_bus_notifier);
+1
arch/powerpc/sysdev/ehv_pic.c
··· 280 280 281 281 if (!ehv_pic->irqhost) { 282 282 of_node_put(np); 283 + kfree(ehv_pic); 283 284 return; 284 285 } 285 286
+1
arch/powerpc/sysdev/fsl_lbc.c
··· 328 328 err: 329 329 iounmap(fsl_lbc_ctrl_dev->regs); 330 330 kfree(fsl_lbc_ctrl_dev); 331 + fsl_lbc_ctrl_dev = NULL; 331 332 return ret; 332 333 } 333 334
+1 -1
arch/powerpc/sysdev/qe_lib/qe.c
··· 216 216 /* Errata QE_General4, which affects some MPC832x and MPC836x SOCs, says 217 217 that the BRG divisor must be even if you're not using divide-by-16 218 218 mode. */ 219 - if (!div16 && (divisor & 1)) 219 + if (!div16 && (divisor & 1) && (divisor > 3)) 220 220 divisor++; 221 221 222 222 tempval = ((divisor - 1) << QE_BRGC_DIVISOR_SHIFT) |
+4 -4
arch/s390/include/asm/pgtable.h
··· 599 599 skey = page_get_storage_key(address); 600 600 bits = skey & (_PAGE_CHANGED | _PAGE_REFERENCED); 601 601 /* Clear page changed & referenced bit in the storage key */ 602 - if (bits) { 603 - skey ^= bits; 604 - page_set_storage_key(address, skey, 1); 605 - } 602 + if (bits & _PAGE_CHANGED) 603 + page_set_storage_key(address, skey ^ bits, 1); 604 + else if (bits) 605 + page_reset_referenced(address); 606 606 /* Transfer page changed & referenced bit to guest bits in pgste */ 607 607 pgste_val(pgste) |= bits << 48; /* RCP_GR_BIT & RCP_GC_BIT */ 608 608 /* Get host changed & referenced bits from pgste */
+18 -12
arch/s390/kernel/ptrace.c
··· 296 296 ((data & PSW_MASK_EA) && !(data & PSW_MASK_BA)))) 297 297 /* Invalid psw mask. */ 298 298 return -EINVAL; 299 - if (addr == (addr_t) &dummy->regs.psw.addr) 300 - /* 301 - * The debugger changed the instruction address, 302 - * reset system call restart, see signal.c:do_signal 303 - */ 304 - task_thread_info(child)->system_call = 0; 305 - 306 299 *(addr_t *)((addr_t) &task_pt_regs(child)->psw + addr) = data; 307 300 308 301 } else if (addr < (addr_t) (&dummy->regs.orig_gpr2)) { ··· 607 614 /* Transfer 31 bit amode bit to psw mask. */ 608 615 regs->psw.mask = (regs->psw.mask & ~PSW_MASK_BA) | 609 616 (__u64)(tmp & PSW32_ADDR_AMODE); 610 - /* 611 - * The debugger changed the instruction address, 612 - * reset system call restart, see signal.c:do_signal 613 - */ 614 - task_thread_info(child)->system_call = 0; 615 617 } else { 616 618 /* gpr 0-15 */ 617 619 *(__u32*)((addr_t) &regs->psw + addr*2 + 4) = tmp; ··· 893 905 return 0; 894 906 } 895 907 908 + static int s390_last_break_set(struct task_struct *target, 909 + const struct user_regset *regset, 910 + unsigned int pos, unsigned int count, 911 + const void *kbuf, const void __user *ubuf) 912 + { 913 + return 0; 914 + } 915 + 896 916 #endif 897 917 898 918 static int s390_system_call_get(struct task_struct *target, ··· 947 951 .size = sizeof(long), 948 952 .align = sizeof(long), 949 953 .get = s390_last_break_get, 954 + .set = s390_last_break_set, 950 955 }, 951 956 #endif 952 957 [REGSET_SYSTEM_CALL] = { ··· 1113 1116 return 0; 1114 1117 } 1115 1118 1119 + static int s390_compat_last_break_set(struct task_struct *target, 1120 + const struct user_regset *regset, 1121 + unsigned int pos, unsigned int count, 1122 + const void *kbuf, const void __user *ubuf) 1123 + { 1124 + return 0; 1125 + } 1126 + 1116 1127 static const struct user_regset s390_compat_regsets[] = { 1117 1128 [REGSET_GENERAL] = { 1118 1129 .core_note_type = NT_PRSTATUS, ··· 1144 1139 .size = sizeof(long), 1145 1140 .align = sizeof(long), 1146 1141 .get = s390_compat_last_break_get, 1142 + .set = s390_compat_last_break_set, 1147 1143 }, 1148 1144 [REGSET_SYSTEM_CALL] = { 1149 1145 .core_note_type = NT_S390_SYSTEM_CALL,
+1 -1
arch/s390/kernel/setup.c
··· 579 579 *msg = "first memory chunk must be at least crashkernel size"; 580 580 return 0; 581 581 } 582 - if (is_kdump_kernel() && (crash_size == OLDMEM_SIZE)) 582 + if (OLDMEM_BASE && crash_size == OLDMEM_SIZE) 583 583 return OLDMEM_BASE; 584 584 585 585 for (i = MEMORY_CHUNKS - 1; i >= 0; i--) {
+3 -5
arch/s390/kernel/signal.c
··· 460 460 regs->svc_code >> 16); 461 461 break; 462 462 } 463 - /* No longer in a system call */ 464 - clear_thread_flag(TIF_SYSCALL); 465 463 } 464 + /* No longer in a system call */ 465 + clear_thread_flag(TIF_SYSCALL); 466 466 467 467 if ((is_compat_task() ? 468 468 handle_signal32(signr, &ka, &info, oldset, regs) : ··· 486 486 } 487 487 488 488 /* No handlers present - check for system call restart */ 489 + clear_thread_flag(TIF_SYSCALL); 489 490 if (current_thread_info()->system_call) { 490 491 regs->svc_code = current_thread_info()->system_call; 491 492 switch (regs->gprs[2]) { ··· 500 499 /* Restart system call with magic TIF bit. */ 501 500 regs->gprs[2] = regs->orig_gpr2; 502 501 set_thread_flag(TIF_SYSCALL); 503 - break; 504 - default: 505 - clear_thread_flag(TIF_SYSCALL); 506 502 break; 507 503 } 508 504 }
+2 -4
arch/sparc/kernel/ds.c
··· 1181 1181 1182 1182 dp->rcv_buf_len = 4096; 1183 1183 1184 - dp->ds_states = kzalloc(sizeof(ds_states_template), 1185 - GFP_KERNEL); 1184 + dp->ds_states = kmemdup(ds_states_template, 1185 + sizeof(ds_states_template), GFP_KERNEL); 1186 1186 if (!dp->ds_states) 1187 1187 goto out_free_rcv_buf; 1188 1188 1189 - memcpy(dp->ds_states, ds_states_template, 1190 - sizeof(ds_states_template)); 1191 1189 dp->num_ds_states = ARRAY_SIZE(ds_states_template); 1192 1190 1193 1191 for (i = 0; i < dp->num_ds_states; i++)
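The sparc/ds.c hunk above (like the prom_common.c one below it) replaces an open-coded allocate-then-copy pair with the kernel's kmemdup() helper. As a sketch of what that helper does, here is a hypothetical user-space analogue — the name `memdup` and the values in the usage note are illustrative, not from the patch:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* User-space analogue of the kernel's kmemdup(): allocate len bytes
 * and copy src into the new buffer, returning NULL on allocation
 * failure (the kernel version additionally takes a gfp_t flags arg). */
static void *memdup(const void *src, size_t len)
{
    void *p = malloc(len);

    if (p)
        memcpy(p, src, len);
    return p;
}
```

Collapsing the kzalloc()+memcpy() pair into one call also drops a redundant zero-fill, since the copy overwrites the whole buffer anyway.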
+1 -3
arch/sparc/kernel/prom_common.c
··· 58 58 void *new_val; 59 59 int err; 60 60 61 - new_val = kmalloc(len, GFP_KERNEL); 61 + new_val = kmemdup(val, len, GFP_KERNEL); 62 62 if (!new_val) 63 63 return -ENOMEM; 64 - 65 - memcpy(new_val, val, len); 66 64 67 65 err = -ENODEV; 68 66
+1 -2
arch/sparc/mm/btfixup.c
··· 302 302 case 'i': /* INT */ 303 303 if ((insn & 0xc1c00000) == 0x01000000) /* %HI */ 304 304 set_addr(addr, q[1], fmangled, (insn & 0xffc00000) | (p[1] >> 10)); 305 - else if ((insn & 0x80002000) == 0x80002000 && 306 - (insn & 0x01800000) != 0x01800000) /* %LO */ 305 + else if ((insn & 0x80002000) == 0x80002000) /* %LO */ 307 306 set_addr(addr, q[1], fmangled, (insn & 0xffffe000) | (p[1] & 0x3ff)); 308 307 else { 309 308 prom_printf(insn_i, p, addr, insn);
-10
arch/tile/include/asm/irq.h
··· 74 74 */ 75 75 void tile_irq_activate(unsigned int irq, int tile_irq_type); 76 76 77 - /* 78 - * For onboard, non-PCI (e.g. TILE_IRQ_PERCPU) devices, drivers know 79 - * how to use enable/disable_percpu_irq() to manage interrupts on each 80 - * core. We can't use the generic enable/disable_irq() because they 81 - * use a single reference count per irq, rather than per cpu per irq. 82 - */ 83 - void enable_percpu_irq(unsigned int irq); 84 - void disable_percpu_irq(unsigned int irq); 85 - 86 - 87 77 void setup_irq_regs(void); 88 78 89 79 #endif /* _ASM_TILE_IRQ_H */
+8 -8
arch/tile/kernel/irq.c
··· 152 152 * Remove an irq from the disabled mask. If we're in an interrupt 153 153 * context, defer enabling the HW interrupt until we leave. 154 154 */ 155 - void enable_percpu_irq(unsigned int irq) 155 + static void tile_irq_chip_enable(struct irq_data *d) 156 156 { 157 - get_cpu_var(irq_disable_mask) &= ~(1UL << irq); 157 + get_cpu_var(irq_disable_mask) &= ~(1UL << d->irq); 158 158 if (__get_cpu_var(irq_depth) == 0) 159 - unmask_irqs(1UL << irq); 159 + unmask_irqs(1UL << d->irq); 160 160 put_cpu_var(irq_disable_mask); 161 161 } 162 - EXPORT_SYMBOL(enable_percpu_irq); 163 162 164 163 /* 165 164 * Add an irq to the disabled mask. We disable the HW interrupt ··· 166 167 * in an interrupt context, the return path is careful to avoid 167 168 * unmasking a newly disabled interrupt. 168 169 */ 169 - void disable_percpu_irq(unsigned int irq) 170 + static void tile_irq_chip_disable(struct irq_data *d) 170 171 { 171 - get_cpu_var(irq_disable_mask) |= (1UL << irq); 172 - mask_irqs(1UL << irq); 172 + get_cpu_var(irq_disable_mask) |= (1UL << d->irq); 173 + mask_irqs(1UL << d->irq); 173 174 put_cpu_var(irq_disable_mask); 174 175 } 175 - EXPORT_SYMBOL(disable_percpu_irq); 176 176 177 177 /* Mask an interrupt. */ 178 178 static void tile_irq_chip_mask(struct irq_data *d) ··· 207 209 208 210 static struct irq_chip tile_irq_chip = { 209 211 .name = "tile_irq_chip", 212 + .irq_enable = tile_irq_chip_enable, 213 + .irq_disable = tile_irq_chip_disable, 210 214 .irq_ack = tile_irq_chip_ack, 211 215 .irq_eoi = tile_irq_chip_eoi, 212 216 .irq_mask = tile_irq_chip_mask,
+1
arch/tile/kernel/pci-dma.c
··· 15 15 #include <linux/mm.h> 16 16 #include <linux/dma-mapping.h> 17 17 #include <linux/vmalloc.h> 18 + #include <linux/export.h> 18 19 #include <asm/tlbflush.h> 19 20 #include <asm/homecache.h> 20 21
+1
arch/tile/kernel/pci.c
··· 24 24 #include <linux/irq.h> 25 25 #include <linux/io.h> 26 26 #include <linux/uaccess.h> 27 + #include <linux/export.h> 27 28 28 29 #include <asm/processor.h> 29 30 #include <asm/sections.h>
+1
arch/tile/kernel/sysfs.c
··· 18 18 #include <linux/cpu.h> 19 19 #include <linux/slab.h> 20 20 #include <linux/smp.h> 21 + #include <linux/stat.h> 21 22 #include <hv/hypervisor.h> 22 23 23 24 /* Return a string queried from the hypervisor, truncated to page size. */
+3
arch/tile/lib/exports.c
··· 39 39 EXPORT_SYMBOL(current_text_addr); 40 40 EXPORT_SYMBOL(dump_stack); 41 41 42 + /* arch/tile/kernel/head.S */ 43 + EXPORT_SYMBOL(empty_zero_page); 44 + 42 45 /* arch/tile/lib/, various memcpy files */ 43 46 EXPORT_SYMBOL(memcpy); 44 47 EXPORT_SYMBOL(__copy_to_user_inatomic);
+6 -3
arch/tile/mm/homecache.c
··· 449 449 VM_BUG_ON(!virt_addr_valid((void *)addr)); 450 450 page = virt_to_page((void *)addr); 451 451 if (put_page_testzero(page)) { 452 - int pages = (1 << order); 453 452 homecache_change_page_home(page, order, initial_page_home()); 454 - while (pages--) 455 - __free_page(page++); 453 + if (order == 0) { 454 + free_hot_cold_page(page, 0); 455 + } else { 456 + init_page_count(page); 457 + __free_pages(page, order); 458 + } 456 459 } 457 460 }
+6 -2
arch/x86/Kconfig
··· 390 390 This option compiles in support for the CE4100 SOC for settop 391 391 boxes and media devices. 392 392 393 - config X86_INTEL_MID 393 + config X86_WANT_INTEL_MID 394 394 bool "Intel MID platform support" 395 395 depends on X86_32 396 396 depends on X86_EXTENDED_PLATFORM ··· 399 399 systems which do not have the PCI legacy interfaces (Moorestown, 400 400 Medfield). If you are building for a PC class system say N here. 401 401 402 - if X86_INTEL_MID 402 + if X86_WANT_INTEL_MID 403 + 404 + config X86_INTEL_MID 405 + bool 403 406 404 407 config X86_MRST 405 408 bool "Moorestown MID platform" ··· 414 411 select SPI 415 412 select INTEL_SCU_IPC 416 413 select X86_PLATFORM_DEVICES 414 + select X86_INTEL_MID 417 415 ---help--- 418 416 Moorestown is Intel's Low Power Intel Architecture (LPIA) based Moblin 419 417 Internet Device(MID) platform. Moorestown consists of two chips:
+8 -4
arch/x86/include/asm/intel_scu_ipc.h
··· 3 3 4 4 #include <linux/notifier.h> 5 5 6 - #define IPCMSG_VRTC 0xFA /* Set vRTC device */ 6 + #define IPCMSG_WARM_RESET 0xF0 7 + #define IPCMSG_COLD_RESET 0xF1 8 + #define IPCMSG_SOFT_RESET 0xF2 9 + #define IPCMSG_COLD_BOOT 0xF3 7 10 8 - /* Command id associated with message IPCMSG_VRTC */ 9 - #define IPC_CMD_VRTC_SETTIME 1 /* Set time */ 10 - #define IPC_CMD_VRTC_SETALARM 2 /* Set alarm */ 11 + #define IPCMSG_VRTC 0xFA /* Set vRTC device */ 12 + /* Command id associated with message IPCMSG_VRTC */ 13 + #define IPC_CMD_VRTC_SETTIME 1 /* Set time */ 14 + #define IPC_CMD_VRTC_SETALARM 2 /* Set alarm */ 11 15 12 16 /* Read single register */ 13 17 int intel_scu_ipc_ioread8(u16 addr, u8 *data);
+9
arch/x86/include/asm/mrst.h
··· 31 31 }; 32 32 33 33 extern enum mrst_cpu_type __mrst_cpu_chip; 34 + 35 + #ifdef CONFIG_X86_INTEL_MID 36 + 34 37 static inline enum mrst_cpu_type mrst_identify_cpu(void) 35 38 { 36 39 return __mrst_cpu_chip; 37 40 } 41 + 42 + #else /* !CONFIG_X86_INTEL_MID */ 43 + 44 + #define mrst_identify_cpu() (0) 45 + 46 + #endif /* !CONFIG_X86_INTEL_MID */ 38 47 39 48 enum mrst_timer_options { 40 49 MRST_TIMER_DEFAULT,
+8 -1
arch/x86/include/asm/msr.h
··· 169 169 return native_write_msr_safe(msr, low, high); 170 170 } 171 171 172 - /* rdmsr with exception handling */ 172 + /* 173 + * rdmsr with exception handling. 174 + * 175 + * Please note that the exception handling works only after we've 176 + * switched to the "smart" #GP handler in trap_init() which knows about 177 + * exception tables - using this macro earlier than that causes machine 178 + * hangs on boxes which do not implement the @msr in the first argument. 179 + */ 173 180 #define rdmsr_safe(msr, p1, p2) \ 174 181 ({ \ 175 182 int __err; \
+1
arch/x86/include/asm/system.h
··· 401 401 extern void free_init_pages(char *what, unsigned long begin, unsigned long end); 402 402 403 403 void default_idle(void); 404 + bool set_pm_idle_to_default(void); 404 405 405 406 void stop_this_cpu(void *dummy); 406 407
+22 -1
arch/x86/include/asm/timer.h
··· 32 32 * (mathieu.desnoyers@polymtl.ca) 33 33 * 34 34 * -johnstul@us.ibm.com "math is hard, lets go shopping!" 35 + * 36 + * In: 37 + * 38 + * ns = cycles * cyc2ns_scale / SC 39 + * 40 + * Although we may still have enough bits to store the value of ns, 41 + * in some cases, we may not have enough bits to store cycles * cyc2ns_scale, 42 + * leading to an incorrect result. 43 + * 44 + * To avoid this, we can decompose 'cycles' into quotient and remainder 45 + * of division by SC. Then, 46 + * 47 + * ns = (quot * SC + rem) * cyc2ns_scale / SC 48 + * = quot * cyc2ns_scale + (rem * cyc2ns_scale) / SC 49 + * 50 + * - sqazi@google.com 35 51 */ 36 52 37 53 DECLARE_PER_CPU(unsigned long, cyc2ns); ··· 57 41 58 42 static inline unsigned long long __cycles_2_ns(unsigned long long cyc) 59 43 { 44 + unsigned long long quot; 45 + unsigned long long rem; 60 46 int cpu = smp_processor_id(); 61 47 unsigned long long ns = per_cpu(cyc2ns_offset, cpu); 62 - ns += cyc * per_cpu(cyc2ns, cpu) >> CYC2NS_SCALE_FACTOR; 48 + quot = (cyc >> CYC2NS_SCALE_FACTOR); 49 + rem = cyc & ((1ULL << CYC2NS_SCALE_FACTOR) - 1); 50 + ns += quot * per_cpu(cyc2ns, cpu) + 51 + ((rem * per_cpu(cyc2ns, cpu)) >> CYC2NS_SCALE_FACTOR); 63 52 return ns; 64 53 } 65 54
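The comment block added to timer.h above explains how to compute `cycles * cyc2ns_scale / SC` without forming the full 64-bit product: split the cycle count into its quotient and remainder modulo SC = 2^CYC2NS_SCALE_FACTOR. A minimal stand-alone sketch of that decomposition follows — the scale-factor value of 10 and the sample inputs are illustrative assumptions, not taken from the kernel:

```c
#include <assert.h>
#include <stdint.h>

#define CYC2NS_SCALE_FACTOR 10  /* illustrative; SC = 2^10 */

/* Compute (cyc * scale) >> CYC2NS_SCALE_FACTOR while keeping the
 * intermediate products small.  With cyc = quot * SC + rem:
 *
 *   ns = quot * scale + (rem * scale) >> CYC2NS_SCALE_FACTOR
 *
 * Only rem * scale (rem < SC) and quot * scale are formed, never the
 * full cyc * scale product that could overflow 64 bits. */
static uint64_t cycles_to_ns(uint64_t cyc, uint64_t scale)
{
    uint64_t quot = cyc >> CYC2NS_SCALE_FACTOR;
    uint64_t rem  = cyc & ((1ULL << CYC2NS_SCALE_FACTOR) - 1);

    return quot * scale + ((rem * scale) >> CYC2NS_SCALE_FACTOR);
}
```

The decomposition is exact, not an approximation: `quot * SC * scale` is a multiple of SC, so the floor of the shift distributes and both forms agree whenever the naive product fits in 64 bits.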
+1
arch/x86/include/asm/uv/uv_mmrs.h
··· 57 57 58 58 #define UV1_HUB_PART_NUMBER 0x88a5 59 59 #define UV2_HUB_PART_NUMBER 0x8eb8 60 + #define UV2_HUB_PART_NUMBER_X 0x1111 60 61 61 62 /* Compat: if this #define is present, UV headers support UV2 */ 62 63 #define UV2_HUB_IS_SUPPORTED 1
+2
arch/x86/kernel/apic/x2apic_uv_x.c
··· 93 93 94 94 if (node_id.s.part_number == UV2_HUB_PART_NUMBER) 95 95 uv_min_hub_revision_id += UV2_HUB_REVISION_BASE - 1; 96 + if (node_id.s.part_number == UV2_HUB_PART_NUMBER_X) 97 + uv_min_hub_revision_id += UV2_HUB_REVISION_BASE - 1; 96 98 97 99 uv_hub_info->hub_revision = uv_min_hub_revision_id; 98 100 pnode = (node_id.s.node_id >> 1) & ((1 << m_n_config.s.n_skt) - 1);
+4 -4
arch/x86/kernel/cpu/amd.c
··· 442 442 443 443 static void __cpuinit early_init_amd(struct cpuinfo_x86 *c) 444 444 { 445 - u32 dummy; 446 - 447 445 early_init_amd_mc(c); 448 446 449 447 /* ··· 471 473 set_cpu_cap(c, X86_FEATURE_EXTD_APICID); 472 474 } 473 475 #endif 474 - 475 - rdmsr_safe(MSR_AMD64_PATCH_LEVEL, &c->microcode, &dummy); 476 476 } 477 477 478 478 static void __cpuinit init_amd(struct cpuinfo_x86 *c) 479 479 { 480 + u32 dummy; 481 + 480 482 #ifdef CONFIG_SMP 481 483 unsigned long long value; 482 484 ··· 655 657 checking_wrmsrl(MSR_AMD64_MCx_MASK(4), mask); 656 658 } 657 659 } 660 + 661 + rdmsr_safe(MSR_AMD64_PATCH_LEVEL, &c->microcode, &dummy); 658 662 } 659 663 660 664 #ifdef CONFIG_X86_32
+2
arch/x86/kernel/cpu/mtrr/generic.c
··· 547 547 548 548 if (tmp != mask_lo) { 549 549 printk(KERN_WARNING "mtrr: your BIOS has configured an incorrect mask, fixing it.\n"); 550 + add_taint(TAINT_FIRMWARE_WORKAROUND); 550 551 mask_lo = tmp; 551 552 } 552 553 } ··· 694 693 695 694 /* Disable MTRRs, and set the default type to uncached */ 696 695 mtrr_wrmsr(MSR_MTRRdefType, deftype_lo & ~0xcff, deftype_hi); 696 + wbinvd(); 697 697 } 698 698 699 699 static void post_set(void) __releases(set_atomicity_lock)
+6 -10
arch/x86/kernel/cpu/perf_event.c
··· 312 312 return -EOPNOTSUPP; 313 313 } 314 314 315 - /* 316 - * Do not allow config1 (extended registers) to propagate, 317 - * there's no sane user-space generalization yet: 318 - */ 319 315 if (attr->type == PERF_TYPE_RAW) 320 - return 0; 316 + return x86_pmu_extra_regs(event->attr.config, event); 321 317 322 318 if (attr->type == PERF_TYPE_HW_CACHE) 323 319 return set_ext_hw_attr(hwc, event); ··· 584 588 x86_pmu.put_event_constraints(cpuc, cpuc->event_list[i]); 585 589 } 586 590 } 587 - return num ? -ENOSPC : 0; 591 + return num ? -EINVAL : 0; 588 592 } 589 593 590 594 /* ··· 603 607 604 608 if (is_x86_event(leader)) { 605 609 if (n >= max_count) 606 - return -ENOSPC; 610 + return -EINVAL; 607 611 cpuc->event_list[n] = leader; 608 612 n++; 609 613 } ··· 616 620 continue; 617 621 618 622 if (n >= max_count) 619 - return -ENOSPC; 623 + return -EINVAL; 620 624 621 625 cpuc->event_list[n] = event; 622 626 n++; ··· 1312 1316 c = x86_pmu.get_event_constraints(fake_cpuc, event); 1313 1317 1314 1318 if (!c || !c->weight) 1315 - ret = -ENOSPC; 1319 + ret = -EINVAL; 1316 1320 1317 1321 if (x86_pmu.put_event_constraints) 1318 1322 x86_pmu.put_event_constraints(fake_cpuc, event); ··· 1337 1341 { 1338 1342 struct perf_event *leader = event->group_leader; 1339 1343 struct cpu_hw_events *fake_cpuc; 1340 - int ret = -ENOSPC, n; 1344 + int ret = -EINVAL, n; 1341 1345 1342 1346 fake_cpuc = allocate_fake_cpuc(); 1343 1347 if (IS_ERR(fake_cpuc))
+18 -11
arch/x86/kernel/cpu/perf_event_amd_ibs.c
··· 199 199 goto out; 200 200 } 201 201 202 - pr_err(FW_BUG "using offset %d for IBS interrupts\n", offset); 203 - pr_err(FW_BUG "workaround enabled for IBS LVT offset\n"); 202 + pr_info("IBS: LVT offset %d assigned\n", offset); 204 203 205 204 return 0; 206 205 out: ··· 264 265 static __init int amd_ibs_init(void) 265 266 { 266 267 u32 caps; 267 - int ret; 268 + int ret = -EINVAL; 268 269 269 270 caps = __get_ibs_caps(); 270 271 if (!caps) 271 272 return -ENODEV; /* ibs not supported by the cpu */ 272 273 273 - if (!ibs_eilvt_valid()) { 274 - ret = force_ibs_eilvt_setup(); 275 - if (ret) { 276 - pr_err("Failed to setup IBS, %d\n", ret); 277 - return ret; 278 - } 279 - } 274 + /* 275 + * Force LVT offset assignment for family 10h: The offsets are 276 + * not assigned by the BIOS for this family, so the OS is 277 + * responsible for doing it. If the OS assignment fails, fall 278 + * back to BIOS settings and try to setup this. 279 + */ 280 + if (boot_cpu_data.x86 == 0x10) 281 + force_ibs_eilvt_setup(); 282 + 283 + if (!ibs_eilvt_valid()) 284 + goto out; 280 285 281 286 get_online_cpus(); 282 287 ibs_caps = caps; ··· 290 287 smp_call_function(setup_APIC_ibs, NULL, 1); 291 288 put_online_cpus(); 292 289 293 - return perf_event_ibs_init(); 290 + ret = perf_event_ibs_init(); 291 + out: 292 + if (ret) 293 + pr_err("Failed to setup IBS, %d\n", ret); 294 + return ret; 294 295 } 295 296 296 297 /* Since we need the pci subsystem to init ibs we can't do this earlier: */
+8
arch/x86/kernel/cpu/perf_event_intel.c
··· 1545 1545 x86_pmu.pebs_constraints = NULL; 1546 1546 } 1547 1547 1548 + static void intel_sandybridge_quirks(void) 1549 + { 1550 + printk(KERN_WARNING "PEBS disabled due to CPU errata.\n"); 1551 + x86_pmu.pebs = 0; 1552 + x86_pmu.pebs_constraints = NULL; 1553 + } 1554 + 1548 1555 __init int intel_pmu_init(void) 1549 1556 { 1550 1557 union cpuid10_edx edx; ··· 1701 1694 break; 1702 1695 1703 1696 case 42: /* SandyBridge */ 1697 + x86_pmu.quirks = intel_sandybridge_quirks; 1704 1698 case 45: /* SandyBridge, "Romely-EP" */ 1705 1699 memcpy(hw_cache_event_ids, snb_hw_cache_event_ids, 1706 1700 sizeof(hw_cache_event_ids));
+5 -1
arch/x86/kernel/cpu/perf_event_intel_ds.c
··· 493 493 unsigned long from = cpuc->lbr_entries[0].from; 494 494 unsigned long old_to, to = cpuc->lbr_entries[0].to; 495 495 unsigned long ip = regs->ip; 496 + int is_64bit = 0; 496 497 497 498 /* 498 499 * We don't need to fixup if the PEBS assist is fault like ··· 545 544 } else 546 545 kaddr = (void *)to; 547 546 548 - kernel_insn_init(&insn, kaddr); 547 + #ifdef CONFIG_X86_64 548 + is_64bit = kernel_ip(to) || !test_thread_flag(TIF_IA32); 549 + #endif 550 + insn_init(&insn, kaddr, is_64bit); 549 551 insn_get_length(&insn); 550 552 to += insn.length; 551 553 } while (to < ip);
+1 -1
arch/x86/kernel/cpu/perf_event_p4.c
··· 1268 1268 } 1269 1269 1270 1270 done: 1271 - return num ? -ENOSPC : 0; 1271 + return num ? -EINVAL : 0; 1272 1272 } 1273 1273 1274 1274 static __initconst const struct x86_pmu p4_pmu = {
+14 -7
arch/x86/kernel/hpet.c
··· 1049 1049 } 1050 1050 EXPORT_SYMBOL_GPL(hpet_rtc_timer_init); 1051 1051 1052 + static void hpet_disable_rtc_channel(void) 1053 + { 1054 + unsigned long cfg; 1055 + cfg = hpet_readl(HPET_T1_CFG); 1056 + cfg &= ~HPET_TN_ENABLE; 1057 + hpet_writel(cfg, HPET_T1_CFG); 1058 + } 1059 + 1052 1060 /* 1053 1061 * The functions below are called from rtc driver. 1054 1062 * Return 0 if HPET is not being used. ··· 1068 1060 return 0; 1069 1061 1070 1062 hpet_rtc_flags &= ~bit_mask; 1063 + if (unlikely(!hpet_rtc_flags)) 1064 + hpet_disable_rtc_channel(); 1065 + 1071 1066 return 1; 1072 1067 } 1073 1068 EXPORT_SYMBOL_GPL(hpet_mask_rtc_irq_bit); ··· 1136 1125 1137 1126 static void hpet_rtc_timer_reinit(void) 1138 1127 { 1139 - unsigned int cfg, delta; 1128 + unsigned int delta; 1140 1129 int lost_ints = -1; 1141 1130 1142 - if (unlikely(!hpet_rtc_flags)) { 1143 - cfg = hpet_readl(HPET_T1_CFG); 1144 - cfg &= ~HPET_TN_ENABLE; 1145 - hpet_writel(cfg, HPET_T1_CFG); 1146 - return; 1147 - } 1131 + if (unlikely(!hpet_rtc_flags)) 1132 + hpet_disable_rtc_channel(); 1148 1133 1149 1134 if (!(hpet_rtc_flags & RTC_PIE) || hpet_pie_limit) 1150 1135 delta = hpet_default_delta;
+3
arch/x86/kernel/irq_64.c
··· 38 38 #ifdef CONFIG_DEBUG_STACKOVERFLOW 39 39 u64 curbase = (u64)task_stack_page(current); 40 40 41 + if (user_mode_vm(regs)) 42 + return; 43 + 41 44 WARN_ONCE(regs->sp >= curbase && 42 45 regs->sp <= curbase + THREAD_SIZE && 43 46 regs->sp < curbase + sizeof(struct thread_info) +
+19 -9
arch/x86/kernel/microcode_core.c
··· 256 256 return 0; 257 257 } 258 258 259 - static void microcode_dev_exit(void) 259 + static void __exit microcode_dev_exit(void) 260 260 { 261 261 misc_deregister(&microcode_dev); 262 262 } ··· 519 519 520 520 microcode_pdev = platform_device_register_simple("microcode", -1, 521 521 NULL, 0); 522 - if (IS_ERR(microcode_pdev)) { 523 - microcode_dev_exit(); 522 + if (IS_ERR(microcode_pdev)) 524 523 return PTR_ERR(microcode_pdev); 525 - } 526 524 527 525 get_online_cpus(); 528 526 mutex_lock(&microcode_mutex); ··· 530 532 mutex_unlock(&microcode_mutex); 531 533 put_online_cpus(); 532 534 533 - if (error) { 534 - platform_device_unregister(microcode_pdev); 535 - return error; 536 - } 535 + if (error) 536 + goto out_pdev; 537 537 538 538 error = microcode_dev_init(); 539 539 if (error) 540 - return error; 540 + goto out_sysdev_driver; 541 541 542 542 register_syscore_ops(&mc_syscore_ops); 543 543 register_hotcpu_notifier(&mc_cpu_notifier); ··· 544 548 " <tigran@aivazian.fsnet.co.uk>, Peter Oruba\n"); 545 549 546 550 return 0; 551 + 552 + out_sysdev_driver: 553 + get_online_cpus(); 554 + mutex_lock(&microcode_mutex); 555 + 556 + sysdev_driver_unregister(&cpu_sysdev_class, &mc_sysdev_driver); 557 + 558 + mutex_unlock(&microcode_mutex); 559 + put_online_cpus(); 560 + 561 + out_pdev: 562 + platform_device_unregister(microcode_pdev); 563 + return error; 564 + 547 565 } 548 566 module_init(microcode_init); 549 567
+1 -1
arch/x86/kernel/mpparse.c
··· 95 95 } 96 96 #endif 97 97 98 + set_bit(m->busid, mp_bus_not_pci); 98 99 if (strncmp(str, BUSTYPE_ISA, sizeof(BUSTYPE_ISA) - 1) == 0) { 99 - set_bit(m->busid, mp_bus_not_pci); 100 100 #if defined(CONFIG_EISA) || defined(CONFIG_MCA) 101 101 mp_bus_id_to_type[m->busid] = MP_BUS_ISA; 102 102 #endif
+8
arch/x86/kernel/process.c
··· 403 403 EXPORT_SYMBOL(default_idle); 404 404 #endif 405 405 406 + bool set_pm_idle_to_default(void) 407 + { 408 + bool ret = !!pm_idle; 409 + 410 + pm_idle = default_idle; 411 + 412 + return ret; 413 + } 406 414 void stop_this_cpu(void *dummy) 407 415 { 408 416 local_irq_disable();
+13
arch/x86/kernel/quirks.c
··· 553 553 quirk_amd_nb_node); 554 554 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_10H_NB_LINK, 555 555 quirk_amd_nb_node); 556 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_15H_NB_F0, 557 + quirk_amd_nb_node); 558 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_15H_NB_F1, 559 + quirk_amd_nb_node); 560 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_15H_NB_F2, 561 + quirk_amd_nb_node); 562 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_15H_NB_F3, 563 + quirk_amd_nb_node); 564 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_15H_NB_F4, 565 + quirk_amd_nb_node); 566 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_15H_NB_F5, 567 + quirk_amd_nb_node); 568 + 556 569 #endif
+19 -2
arch/x86/kernel/reboot.c
··· 124 124 */ 125 125 126 126 /* 127 - * Some machines require the "reboot=b" commandline option, 127 + * Some machines require the "reboot=b" or "reboot=k" commandline options, 128 128 * this quirk makes that automatic. 129 129 */ 130 130 static int __init set_bios_reboot(const struct dmi_system_id *d) ··· 132 132 if (reboot_type != BOOT_BIOS) { 133 133 reboot_type = BOOT_BIOS; 134 134 printk(KERN_INFO "%s series board detected. Selecting BIOS-method for reboots.\n", d->ident); 135 + } 136 + return 0; 137 + } 138 + 139 + static int __init set_kbd_reboot(const struct dmi_system_id *d) 140 + { 141 + if (reboot_type != BOOT_KBD) { 142 + reboot_type = BOOT_KBD; 143 + printk(KERN_INFO "%s series board detected. Selecting KBD-method for reboot.\n", d->ident); 135 144 } 136 145 return 0; 137 146 } ··· 304 295 }, 305 296 }, 306 297 { /* Handle reboot issue on Acer Aspire one */ 307 - .callback = set_bios_reboot, 298 + .callback = set_kbd_reboot, 308 299 .ident = "Acer Aspire One A110", 309 300 .matches = { 310 301 DMI_MATCH(DMI_SYS_VENDOR, "Acer"), ··· 450 441 .matches = { 451 442 DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 452 443 DMI_MATCH(DMI_PRODUCT_NAME, "Latitude E6420"), 444 + }, 445 + }, 446 + { /* Handle problems with rebooting on the OptiPlex 990. */ 447 + .callback = set_pci_reboot, 448 + .ident = "Dell OptiPlex 990", 449 + .matches = { 450 + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 451 + DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex 990"), 453 452 }, 454 453 }, 455 454 { }
+5
arch/x86/kernel/rtc.c
··· 12 12 #include <asm/vsyscall.h> 13 13 #include <asm/x86_init.h> 14 14 #include <asm/time.h> 15 + #include <asm/mrst.h> 15 16 16 17 #ifdef CONFIG_X86_32 17 18 /* ··· 242 241 #endif 243 242 if (of_have_populated_dt()) 244 243 return 0; 244 + 245 + /* Intel MID platforms don't have ioport rtc */ 246 + if (mrst_identify_cpu()) 247 + return -ENODEV; 245 248 246 249 platform_device_register(&rtc_device); 247 250 dev_info(&rtc_device.dev,
+2
arch/x86/mm/gup.c
··· 201 201 do { 202 202 VM_BUG_ON(compound_head(page) != head); 203 203 pages[*nr] = page; 204 + if (PageTail(page)) 205 + get_huge_page_tail(page); 204 206 (*nr)++; 205 207 page++; 206 208 refs++;
+2
arch/x86/mm/highmem_32.c
··· 45 45 vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); 46 46 BUG_ON(!pte_none(*(kmap_pte-idx))); 47 47 set_pte(kmap_pte-idx, mk_pte(page, prot)); 48 + arch_flush_lazy_mmu_mode(); 48 49 49 50 return (void *)vaddr; 50 51 } ··· 89 88 */ 90 89 kpte_clear_flush(kmap_pte-idx, vaddr); 91 90 kmap_atomic_idx_pop(); 91 + arch_flush_lazy_mmu_mode(); 92 92 } 93 93 #ifdef CONFIG_DEBUG_HIGHMEM 94 94 else {
+5 -2
arch/x86/oprofile/init.c
··· 21 21 extern void op_nmi_exit(void); 22 22 extern void x86_backtrace(struct pt_regs * const regs, unsigned int depth); 23 23 24 + static int nmi_timer; 24 25 25 26 int __init oprofile_arch_init(struct oprofile_operations *ops) 26 27 { ··· 32 31 #ifdef CONFIG_X86_LOCAL_APIC 33 32 ret = op_nmi_init(ops); 34 33 #endif 34 + nmi_timer = (ret != 0); 35 35 #ifdef CONFIG_X86_IO_APIC 36 - if (ret < 0) 36 + if (nmi_timer) 37 37 ret = op_nmi_timer_init(ops); 38 38 #endif 39 39 ops->backtrace = x86_backtrace; ··· 46 44 void oprofile_arch_exit(void) 47 45 { 48 46 #ifdef CONFIG_X86_LOCAL_APIC 49 - op_nmi_exit(); 47 + if (!nmi_timer) 48 + op_nmi_exit(); 50 49 #endif 51 50 }
+2 -46
arch/x86/platform/efi/efi_32.c
··· 39 39 */ 40 40 41 41 static unsigned long efi_rt_eflags; 42 - static pgd_t efi_bak_pg_dir_pointer[2]; 43 42 44 43 void efi_call_phys_prelog(void) 45 44 { 46 - unsigned long cr4; 47 - unsigned long temp; 48 45 struct desc_ptr gdt_descr; 49 46 50 47 local_irq_save(efi_rt_eflags); 51 48 52 - /* 53 - * If I don't have PAE, I should just duplicate two entries in page 54 - * directory. If I have PAE, I just need to duplicate one entry in 55 - * page directory. 56 - */ 57 - cr4 = read_cr4_safe(); 58 - 59 - if (cr4 & X86_CR4_PAE) { 60 - efi_bak_pg_dir_pointer[0].pgd = 61 - swapper_pg_dir[pgd_index(0)].pgd; 62 - swapper_pg_dir[0].pgd = 63 - swapper_pg_dir[pgd_index(PAGE_OFFSET)].pgd; 64 - } else { 65 - efi_bak_pg_dir_pointer[0].pgd = 66 - swapper_pg_dir[pgd_index(0)].pgd; 67 - efi_bak_pg_dir_pointer[1].pgd = 68 - swapper_pg_dir[pgd_index(0x400000)].pgd; 69 - swapper_pg_dir[pgd_index(0)].pgd = 70 - swapper_pg_dir[pgd_index(PAGE_OFFSET)].pgd; 71 - temp = PAGE_OFFSET + 0x400000; 72 - swapper_pg_dir[pgd_index(0x400000)].pgd = 73 - swapper_pg_dir[pgd_index(temp)].pgd; 74 - } 75 - 76 - /* 77 - * After the lock is released, the original page table is restored. 78 - */ 49 + load_cr3(initial_page_table); 79 50 __flush_tlb_all(); 80 51 81 52 gdt_descr.address = __pa(get_cpu_gdt_table(0)); ··· 56 85 57 86 void efi_call_phys_epilog(void) 58 87 { 59 - unsigned long cr4; 60 88 struct desc_ptr gdt_descr; 61 89 62 90 gdt_descr.address = (unsigned long)get_cpu_gdt_table(0); 63 91 gdt_descr.size = GDT_SIZE - 1; 64 92 load_gdt(&gdt_descr); 65 93 66 - cr4 = read_cr4_safe(); 67 - 68 - if (cr4 & X86_CR4_PAE) { 69 - swapper_pg_dir[pgd_index(0)].pgd = 70 - efi_bak_pg_dir_pointer[0].pgd; 71 - } else { 72 - swapper_pg_dir[pgd_index(0)].pgd = 73 - efi_bak_pg_dir_pointer[0].pgd; 74 - swapper_pg_dir[pgd_index(0x400000)].pgd = 75 - efi_bak_pg_dir_pointer[1].pgd; 76 - } 77 - 78 - /* 79 - * After the lock is released, the original page table is restored. 
80 - */ 94 + load_cr3(swapper_pg_dir); 81 95 __flush_tlb_all(); 82 96 83 97 local_irq_restore(efi_rt_eflags);
+57 -11
arch/x86/platform/mrst/mrst.c
··· 76 76 EXPORT_SYMBOL_GPL(sfi_mrtc_array); 77 77 int sfi_mrtc_num; 78 78 79 + static void mrst_power_off(void) 80 + { 81 + if (__mrst_cpu_chip == MRST_CPU_CHIP_LINCROFT) 82 + intel_scu_ipc_simple_command(IPCMSG_COLD_RESET, 1); 83 + } 84 + 85 + static void mrst_reboot(void) 86 + { 87 + if (__mrst_cpu_chip == MRST_CPU_CHIP_LINCROFT) 88 + intel_scu_ipc_simple_command(IPCMSG_COLD_RESET, 0); 89 + else 90 + intel_scu_ipc_simple_command(IPCMSG_COLD_BOOT, 0); 91 + } 92 + 79 93 /* parse all the mtimer info to a static mtimer array */ 80 94 static int __init sfi_parse_mtmr(struct sfi_table_header *table) 81 95 { ··· 277 263 static int mrst_i8042_detect(void) 278 264 { 279 265 return 0; 280 - } 281 - 282 - /* Reboot and power off are handled by the SCU on a MID device */ 283 - static void mrst_power_off(void) 284 - { 285 - intel_scu_ipc_simple_command(0xf1, 1); 286 - } 287 - 288 - static void mrst_reboot(void) 289 - { 290 - intel_scu_ipc_simple_command(0xf1, 0); 291 266 } 292 267 293 268 /* ··· 487 484 return max7315; 488 485 } 489 486 487 + static void *tca6416_platform_data(void *info) 488 + { 489 + static struct pca953x_platform_data tca6416; 490 + struct i2c_board_info *i2c_info = info; 491 + int gpio_base, intr; 492 + char base_pin_name[SFI_NAME_LEN + 1]; 493 + char intr_pin_name[SFI_NAME_LEN + 1]; 494 + 495 + strcpy(i2c_info->type, "tca6416"); 496 + strcpy(base_pin_name, "tca6416_base"); 497 + strcpy(intr_pin_name, "tca6416_int"); 498 + 499 + gpio_base = get_gpio_by_name(base_pin_name); 500 + intr = get_gpio_by_name(intr_pin_name); 501 + 502 + if (gpio_base == -1) 503 + return NULL; 504 + tca6416.gpio_base = gpio_base; 505 + if (intr != -1) { 506 + i2c_info->irq = intr + MRST_IRQ_OFFSET; 507 + tca6416.irq_base = gpio_base + MRST_IRQ_OFFSET; 508 + } else { 509 + i2c_info->irq = -1; 510 + tca6416.irq_base = -1; 511 + } 512 + return &tca6416; 513 + } 514 + 515 + static void *mpu3050_platform_data(void *info) 516 + { 517 + struct i2c_board_info *i2c_info = info; 518 + int 
intr = get_gpio_by_name("mpu3050_int"); 519 + 520 + if (intr == -1) 521 + return NULL; 522 + 523 + i2c_info->irq = intr + MRST_IRQ_OFFSET; 524 + return NULL; 525 + } 526 + 490 527 static void __init *emc1403_platform_data(void *info) 491 528 { 492 529 static short intr2nd_pdata; ··· 689 646 static const struct devs_id __initconst device_ids[] = { 690 647 {"bma023", SFI_DEV_TYPE_I2C, 1, &no_platform_data}, 691 648 {"pmic_gpio", SFI_DEV_TYPE_SPI, 1, &pmic_gpio_platform_data}, 649 + {"pmic_gpio", SFI_DEV_TYPE_IPC, 1, &pmic_gpio_platform_data}, 692 650 {"spi_max3111", SFI_DEV_TYPE_SPI, 0, &max3111_platform_data}, 693 651 {"i2c_max7315", SFI_DEV_TYPE_I2C, 1, &max7315_platform_data}, 694 652 {"i2c_max7315_2", SFI_DEV_TYPE_I2C, 1, &max7315_platform_data}, 653 + {"tca6416", SFI_DEV_TYPE_I2C, 1, &tca6416_platform_data}, 695 654 {"emc1403", SFI_DEV_TYPE_I2C, 1, &emc1403_platform_data}, 696 655 {"i2c_accel", SFI_DEV_TYPE_I2C, 0, &lis331dl_platform_data}, 697 656 {"pmic_audio", SFI_DEV_TYPE_IPC, 1, &no_platform_data}, 657 + {"mpu3050", SFI_DEV_TYPE_I2C, 1, &mpu3050_platform_data}, 698 658 699 659 /* MSIC subdevices */ 700 660 {"msic_battery", SFI_DEV_TYPE_IPC, 1, &msic_battery_platform_data},
+16 -4
arch/x86/xen/setup.c
··· 173 173 domid_t domid = DOMID_SELF; 174 174 int ret; 175 175 176 - ret = HYPERVISOR_memory_op(XENMEM_maximum_reservation, &domid); 177 - if (ret > 0) 178 - max_pages = ret; 176 + /* 177 + * For the initial domain we use the maximum reservation as 178 + * the maximum page. 179 + * 180 + * For guest domains the current maximum reservation reflects 181 + * the current maximum rather than the static maximum. In this 182 + * case the e820 map provided to us will cover the static 183 + * maximum region. 184 + */ 185 + if (xen_initial_domain()) { 186 + ret = HYPERVISOR_memory_op(XENMEM_maximum_reservation, &domid); 187 + if (ret > 0) 188 + max_pages = ret; 189 + } 190 + 179 191 return min(max_pages, MAX_DOMAIN_PAGES); 180 192 } 181 193 ··· 422 410 #endif 423 411 disable_cpuidle(); 424 412 boot_option_idle_override = IDLE_HALT; 425 - 413 + WARN_ON(set_pm_idle_to_default()); 426 414 fiddle_vdso(); 427 415 }
+11 -12
block/blk-core.c
··· 366 366 if (drain_all) 367 367 blk_throtl_drain(q); 368 368 369 - __blk_run_queue(q); 369 + /* 370 + * This function might be called on a queue which failed 371 + * driver init after queue creation. Some drivers 372 + * (e.g. fd) get unhappy in such cases. Kick queue iff 373 + * dispatch queue has something on it. 374 + */ 375 + if (!list_empty(&q->queue_head)) 376 + __blk_run_queue(q); 370 377 371 378 if (drain_all) 372 379 nr_rqs = q->rq.count[0] + q->rq.count[1]; ··· 474 467 q->backing_dev_info.state = 0; 475 468 q->backing_dev_info.capabilities = BDI_CAP_MAP_COPY; 476 469 q->backing_dev_info.name = "block"; 470 + q->node = node_id; 477 471 478 472 err = bdi_init(&q->backing_dev_info); 479 473 if (err) { ··· 559 551 if (!uninit_q) 560 552 return NULL; 561 553 562 - q = blk_init_allocated_queue_node(uninit_q, rfn, lock, node_id); 554 + q = blk_init_allocated_queue(uninit_q, rfn, lock); 563 555 if (!q) 564 556 blk_cleanup_queue(uninit_q); 565 557 ··· 571 563 blk_init_allocated_queue(struct request_queue *q, request_fn_proc *rfn, 572 564 spinlock_t *lock) 573 565 { 574 - return blk_init_allocated_queue_node(q, rfn, lock, -1); 575 - } 576 - EXPORT_SYMBOL(blk_init_allocated_queue); 577 - 578 - struct request_queue * 579 - blk_init_allocated_queue_node(struct request_queue *q, request_fn_proc *rfn, 580 - spinlock_t *lock, int node_id) 581 - { 582 566 if (!q) 583 567 return NULL; 584 568 585 - q->node = node_id; 586 569 if (blk_init_free_list(q)) 587 570 return NULL; 588 571 ··· 603 604 604 605 return NULL; 605 606 } 606 - EXPORT_SYMBOL(blk_init_allocated_queue_node); 607 + EXPORT_SYMBOL(blk_init_allocated_queue); 607 608 608 609 int blk_get_queue(struct request_queue *q) 609 610 {
+14 -2
block/cfq-iosched.c
··· 3184 3184 } 3185 3185 } 3186 3186 3187 - if (ret) 3187 + if (ret && ret != -EEXIST) 3188 3188 printk(KERN_ERR "cfq: cic link failed!\n"); 3189 3189 3190 3190 return ret; ··· 3200 3200 { 3201 3201 struct io_context *ioc = NULL; 3202 3202 struct cfq_io_context *cic; 3203 + int ret; 3203 3204 3204 3205 might_sleep_if(gfp_mask & __GFP_WAIT); 3205 3206 ··· 3208 3207 if (!ioc) 3209 3208 return NULL; 3210 3209 3210 + retry: 3211 3211 cic = cfq_cic_lookup(cfqd, ioc); 3212 3212 if (cic) 3213 3213 goto out; ··· 3217 3215 if (cic == NULL) 3218 3216 goto err; 3219 3217 3220 - if (cfq_cic_link(cfqd, ioc, cic, gfp_mask)) 3218 + ret = cfq_cic_link(cfqd, ioc, cic, gfp_mask); 3219 + if (ret == -EEXIST) { 3220 + /* someone has linked cic to ioc already */ 3221 + cfq_cic_free(cic); 3222 + goto retry; 3223 + } else if (ret) 3221 3224 goto err_free; 3222 3225 3223 3226 out: ··· 4043 4036 4044 4037 if (blkio_alloc_blkg_stats(&cfqg->blkg)) { 4045 4038 kfree(cfqg); 4039 + 4040 + spin_lock(&cic_index_lock); 4041 + ida_remove(&cic_index_ida, cfqd->cic_index); 4042 + spin_unlock(&cic_index_lock); 4043 + 4046 4044 kfree(cfqd); 4047 4045 return NULL; 4048 4046 }
+22 -9
drivers/acpi/apei/erst.c
··· 932 932 static int erst_open_pstore(struct pstore_info *psi); 933 933 static int erst_close_pstore(struct pstore_info *psi); 934 934 static ssize_t erst_reader(u64 *id, enum pstore_type_id *type, 935 - struct timespec *time, struct pstore_info *psi); 935 + struct timespec *time, char **buf, 936 + struct pstore_info *psi); 936 937 static int erst_writer(enum pstore_type_id type, u64 *id, unsigned int part, 937 938 size_t size, struct pstore_info *psi); 938 939 static int erst_clearer(enum pstore_type_id type, u64 id, ··· 987 986 } 988 987 989 988 static ssize_t erst_reader(u64 *id, enum pstore_type_id *type, 990 - struct timespec *time, struct pstore_info *psi) 989 + struct timespec *time, char **buf, 990 + struct pstore_info *psi) 991 991 { 992 992 int rc; 993 993 ssize_t len = 0; 994 994 u64 record_id; 995 - struct cper_pstore_record *rcd = (struct cper_pstore_record *) 996 - (erst_info.buf - sizeof(*rcd)); 995 + struct cper_pstore_record *rcd; 996 + size_t rcd_len = sizeof(*rcd) + erst_info.bufsize; 997 997 998 998 if (erst_disable) 999 999 return -ENODEV; 1000 1000 1001 + rcd = kmalloc(rcd_len, GFP_KERNEL); 1002 + if (!rcd) { 1003 + rc = -ENOMEM; 1004 + goto out; 1005 + } 1001 1006 skip: 1002 1007 rc = erst_get_record_id_next(&reader_pos, &record_id); 1003 1008 if (rc) ··· 1011 1004 1012 1005 /* no more record */ 1013 1006 if (record_id == APEI_ERST_INVALID_RECORD_ID) { 1014 - rc = -1; 1007 + rc = -EINVAL; 1015 1008 goto out; 1016 1009 } 1017 1010 1018 - len = erst_read(record_id, &rcd->hdr, sizeof(*rcd) + 1019 - erst_info.bufsize); 1011 + len = erst_read(record_id, &rcd->hdr, rcd_len); 1020 1012 /* The record may be cleared by others, try read next record */ 1021 1013 if (len == -ENOENT) 1022 1014 goto skip; 1023 - else if (len < 0) { 1024 - rc = -1; 1015 + else if (len < sizeof(*rcd)) { 1016 + rc = -EIO; 1025 1017 goto out; 1026 1018 } 1027 1019 if (uuid_le_cmp(rcd->hdr.creator_id, CPER_CREATOR_PSTORE) != 0) 1028 1020 goto skip; 1029 1021 1022 + *buf = 
kmalloc(len, GFP_KERNEL); 1023 + if (*buf == NULL) { 1024 + rc = -ENOMEM; 1025 + goto out; 1026 + } 1027 + memcpy(*buf, rcd->data, len - sizeof(*rcd)); 1030 1028 *id = record_id; 1031 1029 if (uuid_le_cmp(rcd->sec_hdr.section_type, 1032 1030 CPER_SECTION_TYPE_DMESG) == 0) ··· 1049 1037 time->tv_nsec = 0; 1050 1038 1051 1039 out: 1040 + kfree(rcd); 1052 1041 return (rc < 0) ? rc : (len - sizeof(*rcd)); 1053 1042 } 1054 1043
+1 -1
drivers/ata/ahci_platform.c
··· 67 67 struct device *dev = &pdev->dev; 68 68 struct ahci_platform_data *pdata = dev_get_platdata(dev); 69 69 const struct platform_device_id *id = platform_get_device_id(pdev); 70 - struct ata_port_info pi = ahci_port_info[id->driver_data]; 70 + struct ata_port_info pi = ahci_port_info[id ? id->driver_data : 0]; 71 71 const struct ata_port_info *ppi[] = { &pi, NULL }; 72 72 struct ahci_host_priv *hpriv; 73 73 struct ata_host *host;
+4
drivers/ata/libata-sff.c
··· 2533 2533 if (rc) 2534 2534 goto out; 2535 2535 2536 + #ifdef CONFIG_ATA_BMDMA 2536 2537 if (bmdma) 2537 2538 /* prepare and activate BMDMA host */ 2538 2539 rc = ata_pci_bmdma_prepare_host(pdev, ppi, &host); 2539 2540 else 2541 + #endif 2540 2542 /* prepare and activate SFF host */ 2541 2543 rc = ata_pci_sff_prepare_host(pdev, ppi, &host); 2542 2544 if (rc) ··· 2546 2544 host->private_data = host_priv; 2547 2545 host->flags |= hflags; 2548 2546 2547 + #ifdef CONFIG_ATA_BMDMA 2549 2548 if (bmdma) { 2550 2549 pci_set_master(pdev); 2551 2550 rc = ata_pci_sff_activate_host(host, ata_bmdma_interrupt, sht); 2552 2551 } else 2552 + #endif 2553 2553 rc = ata_pci_sff_activate_host(host, ata_sff_interrupt, sht); 2554 2554 out: 2555 2555 if (rc == 0)
+4 -2
drivers/base/core.c
··· 1743 1743 */ 1744 1744 list_del_init(&dev->kobj.entry); 1745 1745 spin_unlock(&devices_kset->list_lock); 1746 - /* Disable all device's runtime power management */ 1747 - pm_runtime_disable(dev); 1746 + 1747 + /* Don't allow any more runtime suspends */ 1748 + pm_runtime_get_noresume(dev); 1749 + pm_runtime_barrier(dev); 1748 1750 1749 1751 if (dev->bus && dev->bus->shutdown) { 1750 1752 dev_dbg(dev, "shutdown\n");
+4 -2
drivers/block/cciss.c
··· 2601 2601 c->Request.Timeout = 0; 2602 2602 c->Request.CDB[0] = BMIC_WRITE; 2603 2603 c->Request.CDB[6] = BMIC_CACHE_FLUSH; 2604 + c->Request.CDB[7] = (size >> 8) & 0xFF; 2605 + c->Request.CDB[8] = size & 0xFF; 2604 2606 break; 2605 2607 case TEST_UNIT_READY: 2606 2608 c->Request.CDBLen = 6; ··· 4882 4880 { 4883 4881 if (h->msix_vector || h->msi_vector) { 4884 4882 if (!request_irq(h->intr[h->intr_mode], msixhandler, 4885 - IRQF_DISABLED, h->devname, h)) 4883 + 0, h->devname, h)) 4886 4884 return 0; 4887 4885 dev_err(&h->pdev->dev, "Unable to get msi irq %d" 4888 4886 " for %s\n", h->intr[h->intr_mode], ··· 4891 4889 } 4892 4890 4893 4891 if (!request_irq(h->intr[h->intr_mode], intxhandler, 4894 - IRQF_DISABLED, h->devname, h)) 4892 + IRQF_SHARED, h->devname, h)) 4895 4893 return 0; 4896 4894 dev_err(&h->pdev->dev, "Unable to get irq %d for %s\n", 4897 4895 h->intr[h->intr_mode], h->devname);
+2 -2
drivers/block/loop.c
··· 422 422 423 423 /* 424 424 * We use punch hole to reclaim the free space used by the 425 - * image a.k.a. discard. However we do support discard if 425 + * image a.k.a. discard. However we do not support discard if 426 426 * encryption is enabled, because it may give an attacker 427 427 * useful information. 428 428 */ ··· 797 797 } 798 798 799 799 q->limits.discard_granularity = inode->i_sb->s_blocksize; 800 - q->limits.discard_alignment = inode->i_sb->s_blocksize; 800 + q->limits.discard_alignment = 0; 801 801 q->limits.max_discard_sectors = UINT_MAX >> 9; 802 802 q->limits.discard_zeroes_data = 1; 803 803 queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, q);
+10 -91
drivers/block/rbd.c
··· 183 183 184 184 static int __rbd_init_snaps_header(struct rbd_device *rbd_dev); 185 185 static void rbd_dev_release(struct device *dev); 186 - static ssize_t rbd_snap_rollback(struct device *dev, 187 - struct device_attribute *attr, 188 - const char *buf, 189 - size_t size); 190 186 static ssize_t rbd_snap_add(struct device *dev, 191 187 struct device_attribute *attr, 192 188 const char *buf, ··· 456 460 int i; 457 461 u32 snap_count = le32_to_cpu(ondisk->snap_count); 458 462 int ret = -ENOMEM; 463 + 464 + if (memcmp(ondisk, RBD_HEADER_TEXT, sizeof(RBD_HEADER_TEXT))) { 465 + return -ENXIO; 466 + } 459 467 460 468 init_rwsem(&header->snap_rwsem); 461 469 header->snap_names_len = le64_to_cpu(ondisk->snap_names_len); ··· 1356 1356 } 1357 1357 1358 1358 /* 1359 - * Request sync osd rollback 1360 - */ 1361 - static int rbd_req_sync_rollback_obj(struct rbd_device *dev, 1362 - u64 snapid, 1363 - const char *obj) 1364 - { 1365 - struct ceph_osd_req_op *ops; 1366 - int ret = rbd_create_rw_ops(&ops, 1, CEPH_OSD_OP_ROLLBACK, 0); 1367 - if (ret < 0) 1368 - return ret; 1369 - 1370 - ops[0].snap.snapid = snapid; 1371 - 1372 - ret = rbd_req_sync_op(dev, NULL, 1373 - CEPH_NOSNAP, 1374 - 0, 1375 - CEPH_OSD_FLAG_WRITE | CEPH_OSD_FLAG_ONDISK, 1376 - ops, 1377 - 1, obj, 0, 0, NULL, NULL, NULL); 1378 - 1379 - rbd_destroy_ops(ops); 1380 - 1381 - return ret; 1382 - } 1383 - 1384 - /* 1385 1359 * Request sync osd read 1386 1360 */ 1387 1361 static int rbd_req_sync_exec(struct rbd_device *dev, ··· 1584 1610 goto out_dh; 1585 1611 1586 1612 rc = rbd_header_from_disk(header, dh, snap_count, GFP_KERNEL); 1587 - if (rc < 0) 1613 + if (rc < 0) { 1614 + if (rc == -ENXIO) { 1615 + pr_warning("unrecognized header format" 1616 + " for image %s", rbd_dev->obj); 1617 + } 1588 1618 goto out_dh; 1619 + } 1589 1620 1590 1621 if (snap_count != header->total_snaps) { 1591 1622 snap_count = header->total_snaps; ··· 1861 1882 static DEVICE_ATTR(refresh, S_IWUSR, NULL, rbd_image_refresh); 1862 1883 
static DEVICE_ATTR(current_snap, S_IRUGO, rbd_snap_show, NULL); 1863 1884 static DEVICE_ATTR(create_snap, S_IWUSR, NULL, rbd_snap_add); 1864 - static DEVICE_ATTR(rollback_snap, S_IWUSR, NULL, rbd_snap_rollback); 1865 1885 1866 1886 static struct attribute *rbd_attrs[] = { ··· 1871 1893 &dev_attr_current_snap.attr, 1872 1894 &dev_attr_refresh.attr, 1873 1895 &dev_attr_create_snap.attr, 1874 - &dev_attr_rollback_snap.attr, 1875 1896 NULL 1876 1897 }; ··· 2398 2421 err_unlock: 2399 2422 mutex_unlock(&ctl_mutex); 2400 2423 kfree(name); 2401 - return ret; 2402 - } 2403 - 2404 - static ssize_t rbd_snap_rollback(struct device *dev, 2405 - struct device_attribute *attr, 2406 - const char *buf, 2407 - size_t count) 2408 - { 2409 - struct rbd_device *rbd_dev = dev_to_rbd(dev); 2410 - int ret; 2411 - u64 snapid; 2412 - u64 cur_ofs; 2413 - char *seg_name = NULL; 2414 - char *snap_name = kmalloc(count + 1, GFP_KERNEL); 2415 - ret = -ENOMEM; 2416 - if (!snap_name) 2417 - return ret; 2418 - 2419 - /* parse snaps add command */ 2420 - snprintf(snap_name, count, "%s", buf); 2421 - seg_name = kmalloc(RBD_MAX_SEG_NAME_LEN + 1, GFP_NOIO); 2422 - if (!seg_name) 2423 - goto done; 2424 - 2425 - mutex_lock_nested(&ctl_mutex, SINGLE_DEPTH_NESTING); 2426 - 2427 - ret = snap_by_name(&rbd_dev->header, snap_name, &snapid, NULL); 2428 - if (ret < 0) 2429 - goto done_unlock; 2430 - 2431 - dout("snapid=%lld\n", snapid); 2432 - 2433 - cur_ofs = 0; 2434 - while (cur_ofs < rbd_dev->header.image_size) { 2435 - cur_ofs += rbd_get_segment(&rbd_dev->header, 2436 - rbd_dev->obj, 2437 - cur_ofs, (u64)-1, 2438 - seg_name, NULL); 2439 - dout("seg_name=%s\n", seg_name); 2440 - 2441 - ret = rbd_req_sync_rollback_obj(rbd_dev, snapid, seg_name); 2442 - if (ret < 0) 2443 - pr_warning("could not roll back obj %s err=%d\n", 2444 - seg_name, ret); 2445 - } 2446 - 2447 - ret = __rbd_update_snaps(rbd_dev); 2448 - if (ret < 0) 2449 - goto done_unlock; 2450 - 2451 - ret = count; 2452 - 2453 - done_unlock: 2454 - mutex_unlock(&ctl_mutex); 2455 - done: 2456 - kfree(seg_name); 2457 - kfree(snap_name); 2458 - 2459 2424 return ret; 2460 2425 } 2461 2426
+215 -147
drivers/block/swim3.c
··· 16 16 * handle GCR disks 17 17 */ 18 18 19 + #undef DEBUG 20 + 19 21 #include <linux/stddef.h> 20 22 #include <linux/kernel.h> 21 23 #include <linux/sched.h> ··· 38 36 #include <asm/machdep.h> 39 37 #include <asm/pmac_feature.h> 40 38 41 - static DEFINE_MUTEX(swim3_mutex); 42 - static struct request_queue *swim3_queue; 43 - static struct gendisk *disks[2]; 44 - static struct request *fd_req; 45 - 46 39 #define MAX_FLOPPIES 2 40 + 41 + static DEFINE_MUTEX(swim3_mutex); 42 + static struct gendisk *disks[MAX_FLOPPIES]; 47 43 48 44 enum swim_state { 49 45 idle, ··· 177 177 178 178 struct floppy_state { 179 179 enum swim_state state; 180 - spinlock_t lock; 181 180 struct swim3 __iomem *swim3; /* hardware registers */ 182 181 struct dbdma_regs __iomem *dma; /* DMA controller registers */ 183 182 int swim3_intr; /* interrupt number for SWIM3 */ ··· 203 204 int wanted; 204 205 struct macio_dev *mdev; 205 206 char dbdma_cmd_space[5 * sizeof(struct dbdma_cmd)]; 207 + int index; 208 + struct request *cur_req; 206 209 }; 210 + 211 + #define swim3_err(fmt, arg...) dev_err(&fs->mdev->ofdev.dev, "[fd%d] " fmt, fs->index, arg) 212 + #define swim3_warn(fmt, arg...) dev_warn(&fs->mdev->ofdev.dev, "[fd%d] " fmt, fs->index, arg) 213 + #define swim3_info(fmt, arg...) dev_info(&fs->mdev->ofdev.dev, "[fd%d] " fmt, fs->index, arg) 214 + 215 + #ifdef DEBUG 216 + #define swim3_dbg(fmt, arg...) dev_dbg(&fs->mdev->ofdev.dev, "[fd%d] " fmt, fs->index, arg) 217 + #else 218 + #define swim3_dbg(fmt, arg...) 
do { } while(0) 219 + #endif 207 220 208 221 static struct floppy_state floppy_states[MAX_FLOPPIES]; 209 222 static int floppy_count = 0; ··· 235 224 0, 0, 0, 0, 0, 0 236 225 }; 237 226 238 - static void swim3_select(struct floppy_state *fs, int sel); 239 - static void swim3_action(struct floppy_state *fs, int action); 240 - static int swim3_readbit(struct floppy_state *fs, int bit); 241 - static void do_fd_request(struct request_queue * q); 242 - static void start_request(struct floppy_state *fs); 243 - static void set_timeout(struct floppy_state *fs, int nticks, 244 - void (*proc)(unsigned long)); 245 - static void scan_track(struct floppy_state *fs); 246 227 static void seek_track(struct floppy_state *fs, int n); 247 228 static void init_dma(struct dbdma_cmd *cp, int cmd, void *buf, int count); 248 - static void setup_transfer(struct floppy_state *fs); 249 229 static void act(struct floppy_state *fs); 250 230 static void scan_timeout(unsigned long data); 251 231 static void seek_timeout(unsigned long data); ··· 256 254 unsigned int clearing); 257 255 static int floppy_revalidate(struct gendisk *disk); 258 256 259 - static bool swim3_end_request(int err, unsigned int nr_bytes) 257 + static bool swim3_end_request(struct floppy_state *fs, int err, unsigned int nr_bytes) 260 258 { 261 - if (__blk_end_request(fd_req, err, nr_bytes)) 259 + struct request *req = fs->cur_req; 260 + int rc; 261 + 262 + swim3_dbg(" end request, err=%d nr_bytes=%d, cur_req=%p\n", 263 + err, nr_bytes, req); 264 + 265 + if (err) 266 + nr_bytes = blk_rq_cur_bytes(req); 267 + rc = __blk_end_request(req, err, nr_bytes); 268 + if (rc) 262 269 return true; 263 - 264 - fd_req = NULL; 270 + fs->cur_req = NULL; 265 271 return false; 266 - } 267 - 268 - static bool swim3_end_request_cur(int err) 269 - { 270 - return swim3_end_request(err, blk_rq_cur_bytes(fd_req)); 271 272 } 272 273 273 274 static void swim3_select(struct floppy_state *fs, int sel) ··· 308 303 return (stat & DATA) == 0; 309 304 } 310 
305 311 - static void do_fd_request(struct request_queue * q) 312 - { 313 - int i; 314 - 315 - for(i=0; i<floppy_count; i++) { 316 - struct floppy_state *fs = &floppy_states[i]; 317 - if (fs->mdev->media_bay && 318 - check_media_bay(fs->mdev->media_bay) != MB_FD) 319 - continue; 320 - start_request(fs); 321 - } 322 - } 323 - 324 306 static void start_request(struct floppy_state *fs) 325 307 { 326 308 struct request *req; 327 309 unsigned long x; 310 + 311 + swim3_dbg("start request, initial state=%d\n", fs->state); 328 312 329 313 if (fs->state == idle && fs->wanted) { 330 314 fs->state = available; ··· 321 327 return; 322 328 } 323 329 while (fs->state == idle) { 324 - if (!fd_req) { 325 - fd_req = blk_fetch_request(swim3_queue); 326 - if (!fd_req) 330 + swim3_dbg("start request, idle loop, cur_req=%p\n", fs->cur_req); 331 + if (!fs->cur_req) { 332 + fs->cur_req = blk_fetch_request(disks[fs->index]->queue); 333 + swim3_dbg(" fetched request %p\n", fs->cur_req); 334 + if (!fs->cur_req) 327 335 break; 328 336 } 329 - req = fd_req; 330 - #if 0 331 - printk("do_fd_req: dev=%s cmd=%d sec=%ld nr_sec=%u buf=%p\n", 332 - req->rq_disk->disk_name, req->cmd, 333 - (long)blk_rq_pos(req), blk_rq_sectors(req), req->buffer); 334 - printk(" errors=%d current_nr_sectors=%u\n", 335 - req->errors, blk_rq_cur_sectors(req)); 337 + req = fs->cur_req; 338 + 339 + if (fs->mdev->media_bay && 340 + check_media_bay(fs->mdev->media_bay) != MB_FD) { 341 + swim3_dbg("%s", " media bay absent, dropping req\n"); 342 + swim3_end_request(fs, -ENODEV, 0); 343 + continue; 344 + } 345 + 346 + #if 0 /* This is really too verbose */ 347 + swim3_dbg("do_fd_req: dev=%s cmd=%d sec=%ld nr_sec=%u buf=%p\n", 348 + req->rq_disk->disk_name, req->cmd, 349 + (long)blk_rq_pos(req), blk_rq_sectors(req), 350 + req->buffer); 351 + swim3_dbg(" errors=%d current_nr_sectors=%u\n", 352 + req->errors, blk_rq_cur_sectors(req)); 336 353 #endif 337 354 338 355 if (blk_rq_pos(req) >= fs->total_secs) { 339 - 
swim3_end_request_cur(-EIO); 356 + swim3_dbg(" pos out of bounds (%ld, max is %ld)\n", 357 + (long)blk_rq_pos(req), (long)fs->total_secs); 358 + swim3_end_request(fs, -EIO, 0); 340 359 continue; 341 360 } 342 361 if (fs->ejected) { 343 - swim3_end_request_cur(-EIO); 362 + swim3_dbg("%s", " disk ejected\n"); 363 + swim3_end_request(fs, -EIO, 0); 344 364 continue; 345 365 } 346 366 ··· 362 354 if (fs->write_prot < 0) 363 355 fs->write_prot = swim3_readbit(fs, WRITE_PROT); 364 356 if (fs->write_prot) { 365 - swim3_end_request_cur(-EIO); 357 + swim3_dbg("%s", " try to write, disk write protected\n"); 358 + swim3_end_request(fs, -EIO, 0); 366 359 continue; 367 360 } 368 361 } ··· 378 369 x = ((long)blk_rq_pos(req)) % fs->secpercyl; 379 370 fs->head = x / fs->secpertrack; 380 371 fs->req_sector = x % fs->secpertrack + 1; 381 - fd_req = req; 382 372 fs->state = do_transfer; 383 373 fs->retries = 0; 384 374 ··· 385 377 } 386 378 } 387 379 380 + static void do_fd_request(struct request_queue * q) 381 + { 382 + start_request(q->queuedata); 383 + } 384 + 388 385 static void set_timeout(struct floppy_state *fs, int nticks, 389 386 void (*proc)(unsigned long)) 390 387 { 391 - unsigned long flags; 392 - 393 - spin_lock_irqsave(&fs->lock, flags); 394 388 if (fs->timeout_pending) 395 389 del_timer(&fs->timeout); 396 390 fs->timeout.expires = jiffies + nticks; ··· 400 390 fs->timeout.data = (unsigned long) fs; 401 391 add_timer(&fs->timeout); 402 392 fs->timeout_pending = 1; 403 - spin_unlock_irqrestore(&fs->lock, flags); 404 393 } 405 394 406 395 static inline void scan_track(struct floppy_state *fs) ··· 451 442 struct swim3 __iomem *sw = fs->swim3; 452 443 struct dbdma_cmd *cp = fs->dma_cmd; 453 444 struct dbdma_regs __iomem *dr = fs->dma; 445 + struct request *req = fs->cur_req; 454 446 455 - if (blk_rq_cur_sectors(fd_req) <= 0) { 456 - printk(KERN_ERR "swim3: transfer 0 sectors?\n"); 447 + if (blk_rq_cur_sectors(req) <= 0) { 448 + swim3_warn("%s", "Transfer 0 sectors ?\n"); 457 
449 return; 458 450 } 459 - if (rq_data_dir(fd_req) == WRITE) 451 + if (rq_data_dir(req) == WRITE) 460 452 n = 1; 461 453 else { 462 454 n = fs->secpertrack - fs->req_sector + 1; 463 - if (n > blk_rq_cur_sectors(fd_req)) 464 - n = blk_rq_cur_sectors(fd_req); 455 + if (n > blk_rq_cur_sectors(req)) 456 + n = blk_rq_cur_sectors(req); 465 457 } 458 + 459 + swim3_dbg(" setup xfer at sect %d (of %d) head %d for %d\n", 460 + fs->req_sector, fs->secpertrack, fs->head, n); 461 + 466 462 fs->scount = n; 467 463 swim3_select(fs, fs->head? READ_DATA_1: READ_DATA_0); 468 464 out_8(&sw->sector, fs->req_sector); 469 465 out_8(&sw->nsect, n); 470 466 out_8(&sw->gap3, 0); 471 467 out_le32(&dr->cmdptr, virt_to_bus(cp)); 472 - if (rq_data_dir(fd_req) == WRITE) { 468 + if (rq_data_dir(req) == WRITE) { 473 469 /* Set up 3 dma commands: write preamble, data, postamble */ 474 470 init_dma(cp, OUTPUT_MORE, write_preamble, sizeof(write_preamble)); 475 471 ++cp; 476 - init_dma(cp, OUTPUT_MORE, fd_req->buffer, 512); 472 + init_dma(cp, OUTPUT_MORE, req->buffer, 512); 477 473 ++cp; 478 474 init_dma(cp, OUTPUT_LAST, write_postamble, sizeof(write_postamble)); 479 475 } else { 480 - init_dma(cp, INPUT_LAST, fd_req->buffer, n * 512); 476 + init_dma(cp, INPUT_LAST, req->buffer, n * 512); 481 477 } 482 478 ++cp; 483 479 out_le16(&cp->command, DBDMA_STOP); 484 480 out_8(&sw->control_bic, DO_ACTION | WRITE_SECTORS); 485 481 in_8(&sw->error); 486 482 out_8(&sw->control_bic, DO_ACTION | WRITE_SECTORS); 487 - if (rq_data_dir(fd_req) == WRITE) 483 + if (rq_data_dir(req) == WRITE) 488 484 out_8(&sw->control_bis, WRITE_SECTORS); 489 485 in_8(&sw->intr); 490 486 out_le32(&dr->control, (RUN << 16) | RUN); ··· 502 488 static void act(struct floppy_state *fs) 503 489 { 504 490 for (;;) { 491 + swim3_dbg(" act loop, state=%d, req_cyl=%d, cur_cyl=%d\n", 492 + fs->state, fs->req_cyl, fs->cur_cyl); 493 + 505 494 switch (fs->state) { 506 495 case idle: 507 496 return; /* XXX shouldn't get here */ 508 497 509 498 
case locating: 510 499 if (swim3_readbit(fs, TRACK_ZERO)) { 500 + swim3_dbg("%s", " locate track 0\n"); 511 501 fs->cur_cyl = 0; 512 502 if (fs->req_cyl == 0) 513 503 fs->state = do_transfer; ··· 529 511 break; 530 512 } 531 513 if (fs->req_cyl == fs->cur_cyl) { 532 - printk("whoops, seeking 0\n"); 514 + swim3_warn("%s", "Whoops, seeking 0\n"); 533 515 fs->state = do_transfer; 534 516 break; 535 517 } ··· 545 527 case do_transfer: 546 528 if (fs->cur_cyl != fs->req_cyl) { 547 529 if (fs->retries > 5) { 548 - swim3_end_request_cur(-EIO); 530 + swim3_err("Wrong cylinder in transfer, want: %d got %d\n", 531 + fs->req_cyl, fs->cur_cyl); 532 + swim3_end_request(fs, -EIO, 0); 549 533 fs->state = idle; 550 534 return; 551 535 } ··· 562 542 return; 563 543 564 544 default: 565 - printk(KERN_ERR"swim3: unknown state %d\n", fs->state); 545 + swim3_err("Unknown state %d\n", fs->state); 566 546 return; 567 547 } 568 548 } ··· 572 552 { 573 553 struct floppy_state *fs = (struct floppy_state *) data; 574 554 struct swim3 __iomem *sw = fs->swim3; 555 + unsigned long flags; 575 556 557 + swim3_dbg("* scan timeout, state=%d\n", fs->state); 558 + 559 + spin_lock_irqsave(&swim3_lock, flags); 576 560 fs->timeout_pending = 0; 577 561 out_8(&sw->control_bic, DO_ACTION | WRITE_SECTORS); 578 562 out_8(&sw->select, RELAX); 579 563 out_8(&sw->intr_enable, 0); 580 564 fs->cur_cyl = -1; 581 565 if (fs->retries > 5) { 582 - swim3_end_request_cur(-EIO); 566 + swim3_end_request(fs, -EIO, 0); 583 567 fs->state = idle; 584 568 start_request(fs); 585 569 } else { 586 570 fs->state = jogging; 587 571 act(fs); 588 572 } 573 + spin_unlock_irqrestore(&swim3_lock, flags); 589 574 } 590 575 591 576 static void seek_timeout(unsigned long data) 592 577 { 593 578 struct floppy_state *fs = (struct floppy_state *) data; 594 579 struct swim3 __iomem *sw = fs->swim3; 580 + unsigned long flags; 595 581 582 + swim3_dbg("* seek timeout, state=%d\n", fs->state); 583 + 584 + spin_lock_irqsave(&swim3_lock, flags); 
596 585 fs->timeout_pending = 0; 597 586 out_8(&sw->control_bic, DO_SEEK); 598 587 out_8(&sw->select, RELAX); 599 588 out_8(&sw->intr_enable, 0); 600 - printk(KERN_ERR "swim3: seek timeout\n"); 601 - swim3_end_request_cur(-EIO); 589 + swim3_err("%s", "Seek timeout\n"); 590 + swim3_end_request(fs, -EIO, 0); 602 591 fs->state = idle; 603 592 start_request(fs); 593 + spin_unlock_irqrestore(&swim3_lock, flags); 604 594 } 605 595 606 596 static void settle_timeout(unsigned long data) 607 597 { 608 598 struct floppy_state *fs = (struct floppy_state *) data; 609 599 struct swim3 __iomem *sw = fs->swim3; 600 + unsigned long flags; 610 601 602 + swim3_dbg("* settle timeout, state=%d\n", fs->state); 603 + 604 + spin_lock_irqsave(&swim3_lock, flags); 611 605 fs->timeout_pending = 0; 612 606 if (swim3_readbit(fs, SEEK_COMPLETE)) { 613 607 out_8(&sw->select, RELAX); 614 608 fs->state = locating; 615 609 act(fs); 616 - return; 610 + goto unlock; 617 611 } 618 612 out_8(&sw->select, RELAX); 619 613 if (fs->settle_time < 2*HZ) { 620 614 ++fs->settle_time; 621 615 set_timeout(fs, 1, settle_timeout); 622 - return; 616 + goto unlock; 623 617 } 624 - printk(KERN_ERR "swim3: seek settle timeout\n"); 625 - swim3_end_request_cur(-EIO); 618 + swim3_err("%s", "Seek settle timeout\n"); 619 + swim3_end_request(fs, -EIO, 0); 626 620 fs->state = idle; 627 621 start_request(fs); 622 + unlock: 623 + spin_unlock_irqrestore(&swim3_lock, flags); 628 624 } 629 625 630 626 static void xfer_timeout(unsigned long data) ··· 648 612 struct floppy_state *fs = (struct floppy_state *) data; 649 613 struct swim3 __iomem *sw = fs->swim3; 650 614 struct dbdma_regs __iomem *dr = fs->dma; 615 + unsigned long flags; 651 616 int n; 652 617 618 + swim3_dbg("* xfer timeout, state=%d\n", fs->state); 619 + 620 + spin_lock_irqsave(&swim3_lock, flags); 653 621 fs->timeout_pending = 0; 654 622 out_le32(&dr->control, RUN << 16); 655 623 /* We must wait a bit for dbdma to stop */ ··· 662 622 out_8(&sw->intr_enable, 0); 663 
623 out_8(&sw->control_bic, WRITE_SECTORS | DO_ACTION); 664 624 out_8(&sw->select, RELAX); 665 - printk(KERN_ERR "swim3: timeout %sing sector %ld\n", 666 - (rq_data_dir(fd_req)==WRITE? "writ": "read"), 667 - (long)blk_rq_pos(fd_req)); 668 - swim3_end_request_cur(-EIO); 625 + swim3_err("Timeout %sing sector %ld\n", 626 + (rq_data_dir(fs->cur_req)==WRITE? "writ": "read"), 627 + (long)blk_rq_pos(fs->cur_req)); 628 + swim3_end_request(fs, -EIO, 0); 669 629 fs->state = idle; 670 630 start_request(fs); 631 + spin_unlock_irqrestore(&swim3_lock, flags); 671 632 } 672 633 673 634 static irqreturn_t swim3_interrupt(int irq, void *dev_id) ··· 679 638 int stat, resid; 680 639 struct dbdma_regs __iomem *dr; 681 640 struct dbdma_cmd *cp; 641 + unsigned long flags; 642 + struct request *req = fs->cur_req; 682 643 644 + swim3_dbg("* interrupt, state=%d\n", fs->state); 645 + 646 + spin_lock_irqsave(&swim3_lock, flags); 683 647 intr = in_8(&sw->intr); 684 648 err = (intr & ERROR_INTR)? in_8(&sw->error): 0; 685 649 if ((intr & ERROR_INTR) && fs->state != do_transfer) 686 - printk(KERN_ERR "swim3_interrupt, state=%d, dir=%x, intr=%x, err=%x\n", 687 - fs->state, rq_data_dir(fd_req), intr, err); 650 + swim3_err("Non-transfer error interrupt: state=%d, dir=%x, intr=%x, err=%x\n", 651 + fs->state, rq_data_dir(req), intr, err); 688 652 switch (fs->state) { 689 653 case locating: 690 654 if (intr & SEEN_SECTOR) { ··· 699 653 del_timer(&fs->timeout); 700 654 fs->timeout_pending = 0; 701 655 if (sw->ctrack == 0xff) { 702 - printk(KERN_ERR "swim3: seen sector but cyl=ff?\n"); 656 + swim3_err("%s", "Seen sector but cyl=ff?\n"); 703 657 fs->cur_cyl = -1; 704 658 if (fs->retries > 5) { 705 - swim3_end_request_cur(-EIO); 659 + swim3_end_request(fs, -EIO, 0); 706 660 fs->state = idle; 707 661 start_request(fs); 708 662 } else { ··· 714 668 fs->cur_cyl = sw->ctrack; 715 669 fs->cur_sector = sw->csect; 716 670 if (fs->expect_cyl != -1 && fs->expect_cyl != fs->cur_cyl) 717 - printk(KERN_ERR "swim3: 
expected cyl %d, got %d\n", 718 - fs->expect_cyl, fs->cur_cyl); 671 + swim3_err("Expected cyl %d, got %d\n", 672 + fs->expect_cyl, fs->cur_cyl); 719 673 fs->state = do_transfer; 720 674 act(fs); 721 675 } ··· 750 704 fs->timeout_pending = 0; 751 705 dr = fs->dma; 752 706 cp = fs->dma_cmd; 753 - if (rq_data_dir(fd_req) == WRITE) 707 + if (rq_data_dir(req) == WRITE) 754 708 ++cp; 755 709 /* 756 710 * Check that the main data transfer has finished. ··· 775 729 if (intr & ERROR_INTR) { 776 730 n = fs->scount - 1 - resid / 512; 777 731 if (n > 0) { 778 - blk_update_request(fd_req, 0, n << 9); 732 + blk_update_request(req, 0, n << 9); 779 733 fs->req_sector += n; 780 734 } 781 735 if (fs->retries < 5) { 782 736 ++fs->retries; 783 737 act(fs); 784 738 } else { 785 - printk("swim3: error %sing block %ld (err=%x)\n", 786 - rq_data_dir(fd_req) == WRITE? "writ": "read", 787 - (long)blk_rq_pos(fd_req), err); 788 - swim3_end_request_cur(-EIO); 739 + swim3_err("Error %sing block %ld (err=%x)\n", 740 + rq_data_dir(req) == WRITE? 
"writ": "read", 741 + (long)blk_rq_pos(req), err); 742 + swim3_end_request(fs, -EIO, 0); 789 743 fs->state = idle; 790 744 } 791 745 } else { 792 746 if ((stat & ACTIVE) == 0 || resid != 0) { 793 747 /* musta been an error */ 794 - printk(KERN_ERR "swim3: fd dma: stat=%x resid=%d\n", stat, resid); 795 - printk(KERN_ERR " state=%d, dir=%x, intr=%x, err=%x\n", 796 - fs->state, rq_data_dir(fd_req), intr, err); 797 - swim3_end_request_cur(-EIO); 748 + swim3_err("fd dma error: stat=%x resid=%d\n", stat, resid); 749 + swim3_err(" state=%d, dir=%x, intr=%x, err=%x\n", 750 + fs->state, rq_data_dir(req), intr, err); 751 + swim3_end_request(fs, -EIO, 0); 798 752 fs->state = idle; 799 753 start_request(fs); 800 754 break; 801 755 } 802 - if (swim3_end_request(0, fs->scount << 9)) { 756 + fs->retries = 0; 757 + if (swim3_end_request(fs, 0, fs->scount << 9)) { 803 758 fs->req_sector += fs->scount; 804 759 if (fs->req_sector > fs->secpertrack) { 805 760 fs->req_sector -= fs->secpertrack; ··· 817 770 start_request(fs); 818 771 break; 819 772 default: 820 - printk(KERN_ERR "swim3: don't know what to do in state %d\n", fs->state); 773 + swim3_err("Don't know what to do in state %d\n", fs->state); 821 774 } 775 + spin_unlock_irqrestore(&swim3_lock, flags); 822 776 return IRQ_HANDLED; 823 777 } 824 778 ··· 829 781 } 830 782 */ 831 783 784 + /* Called under the mutex to grab exclusive access to a drive */ 832 785 static int grab_drive(struct floppy_state *fs, enum swim_state state, 833 786 int interruptible) 834 787 { 835 788 unsigned long flags; 836 789 837 - spin_lock_irqsave(&fs->lock, flags); 838 - if (fs->state != idle) { 790 + swim3_dbg("%s", "-> grab drive\n"); 791 + 792 + spin_lock_irqsave(&swim3_lock, flags); 793 + if (fs->state != idle && fs->state != available) { 839 794 ++fs->wanted; 840 795 while (fs->state != available) { 796 + spin_unlock_irqrestore(&swim3_lock, flags); 841 797 if (interruptible && signal_pending(current)) { 842 798 --fs->wanted; 843 - 
spin_unlock_irqrestore(&fs->lock, flags); 844 799 return -EINTR; 845 800 } 846 801 interruptible_sleep_on(&fs->wait); 802 + spin_lock_irqsave(&swim3_lock, flags); 847 803 } 848 804 --fs->wanted; 849 805 } 850 806 fs->state = state; 851 - spin_unlock_irqrestore(&fs->lock, flags); 807 + spin_unlock_irqrestore(&swim3_lock, flags); 808 + 852 809 return 0; 853 810 } 854 811 ··· 861 808 { 862 809 unsigned long flags; 863 810 864 - spin_lock_irqsave(&fs->lock, flags); 811 + swim3_dbg("%s", "-> release drive\n"); 812 + 813 + spin_lock_irqsave(&swim3_lock, flags); 865 814 fs->state = idle; 866 815 start_request(fs); 867 - spin_unlock_irqrestore(&fs->lock, flags); 816 + spin_unlock_irqrestore(&swim3_lock, flags); 868 817 } 869 818 870 819 static int fd_eject(struct floppy_state *fs) ··· 1021 966 { 1022 967 struct floppy_state *fs = disk->private_data; 1023 968 struct swim3 __iomem *sw = fs->swim3; 969 + 1024 970 mutex_lock(&swim3_mutex); 1025 971 if (fs->ref_count > 0 && --fs->ref_count == 0) { 1026 972 swim3_action(fs, MOTOR_OFF); ··· 1087 1031 .revalidate_disk= floppy_revalidate, 1088 1032 }; 1089 1033 1034 + static void swim3_mb_event(struct macio_dev* mdev, int mb_state) 1035 + { 1036 + struct floppy_state *fs = macio_get_drvdata(mdev); 1037 + struct swim3 __iomem *sw = fs->swim3; 1038 + 1039 + if (!fs) 1040 + return; 1041 + if (mb_state != MB_FD) 1042 + return; 1043 + 1044 + /* Clear state */ 1045 + out_8(&sw->intr_enable, 0); 1046 + in_8(&sw->intr); 1047 + in_8(&sw->error); 1048 + } 1049 + 1090 1050 static int swim3_add_device(struct macio_dev *mdev, int index) 1091 1051 { 1092 1052 struct device_node *swim = mdev->ofdev.dev.of_node; 1093 1053 struct floppy_state *fs = &floppy_states[index]; 1094 1054 int rc = -EBUSY; 1095 1055 1056 + /* Do this first for message macros */ 1057 + memset(fs, 0, sizeof(*fs)); 1058 + fs->mdev = mdev; 1059 + fs->index = index; 1060 + 1096 1061 /* Check & Request resources */ 1097 1062 if (macio_resource_count(mdev) < 2) { 1098 - 
printk(KERN_WARNING "ifd%d: no address for %s\n", 1099 - index, swim->full_name); 1063 + swim3_err("%s", "No address in device-tree\n"); 1100 1064 return -ENXIO; 1101 1065 } 1102 - if (macio_irq_count(mdev) < 2) { 1103 - printk(KERN_WARNING "fd%d: no intrs for device %s\n", 1104 - index, swim->full_name); 1066 + if (macio_irq_count(mdev) < 1) { 1067 + swim3_err("%s", "No interrupt in device-tree\n"); 1068 + return -ENXIO; 1105 1069 } 1106 1070 if (macio_request_resource(mdev, 0, "swim3 (mmio)")) { 1107 - printk(KERN_ERR "fd%d: can't request mmio resource for %s\n", 1108 - index, swim->full_name); 1071 + swim3_err("%s", "Can't request mmio resource\n"); 1109 1072 return -EBUSY; 1110 1073 } 1111 1074 if (macio_request_resource(mdev, 1, "swim3 (dma)")) { 1112 - printk(KERN_ERR "fd%d: can't request dma resource for %s\n", 1113 - index, swim->full_name); 1075 + swim3_err("%s", "Can't request dma resource\n"); 1114 1076 macio_release_resource(mdev, 0); 1115 1077 return -EBUSY; 1116 1078 } ··· 1137 1063 if (mdev->media_bay == NULL) 1138 1064 pmac_call_feature(PMAC_FTR_SWIM3_ENABLE, swim, 0, 1); 1139 1065 1140 - memset(fs, 0, sizeof(*fs)); 1141 - spin_lock_init(&fs->lock); 1142 1066 fs->state = idle; 1143 1067 fs->swim3 = (struct swim3 __iomem *) 1144 1068 ioremap(macio_resource_start(mdev, 0), 0x200); 1145 1069 if (fs->swim3 == NULL) { 1146 - printk("fd%d: couldn't map registers for %s\n", 1147 - index, swim->full_name); 1070 + swim3_err("%s", "Couldn't map mmio registers\n"); 1148 1071 rc = -ENOMEM; 1149 1072 goto out_release; 1150 1073 } 1151 1074 fs->dma = (struct dbdma_regs __iomem *) 1152 1075 ioremap(macio_resource_start(mdev, 1), 0x200); 1153 1076 if (fs->dma == NULL) { 1154 - printk("fd%d: couldn't map DMA for %s\n", 1155 - index, swim->full_name); 1077 + swim3_err("%s", "Couldn't map dma registers\n"); 1156 1078 iounmap(fs->swim3); 1157 1079 rc = -ENOMEM; 1158 1080 goto out_release; ··· 1160 1090 fs->secpercyl = 36; 1161 1091 fs->secpertrack = 18; 1162 1092 
fs->total_secs = 2880; 1163 - fs->mdev = mdev; 1164 1093 init_waitqueue_head(&fs->wait); 1165 1094 1166 1095 fs->dma_cmd = (struct dbdma_cmd *) DBDMA_ALIGN(fs->dbdma_cmd_space); 1167 1096 memset(fs->dma_cmd, 0, 2 * sizeof(struct dbdma_cmd)); 1168 1097 st_le16(&fs->dma_cmd[1].command, DBDMA_STOP); 1169 1098 1099 + if (mdev->media_bay == NULL || check_media_bay(mdev->media_bay) == MB_FD) 1100 + swim3_mb_event(mdev, MB_FD); 1101 + 1170 1102 if (request_irq(fs->swim3_intr, swim3_interrupt, 0, "SWIM3", fs)) { 1171 - printk(KERN_ERR "fd%d: couldn't request irq %d for %s\n", 1172 - index, fs->swim3_intr, swim->full_name); 1103 + swim3_err("%s", "Couldn't request interrupt\n"); 1173 1104 pmac_call_feature(PMAC_FTR_SWIM3_ENABLE, swim, 0, 0); 1174 1105 goto out_unmap; 1175 1106 return -EBUSY; 1176 1107 } 1177 - /* 1178 - if (request_irq(fs->dma_intr, fd_dma_interrupt, 0, "SWIM3-dma", fs)) { 1179 - printk(KERN_ERR "Couldn't get irq %d for SWIM3 DMA", 1180 - fs->dma_intr); 1181 - return -EBUSY; 1182 - } 1183 - */ 1184 1108 1185 1109 init_timer(&fs->timeout); 1186 1110 1187 - printk(KERN_INFO "fd%d: SWIM3 floppy controller %s\n", floppy_count, 1111 + swim3_info("SWIM3 floppy controller %s\n", 1188 1112 mdev->media_bay ? "in media bay" : ""); 1189 1113 1190 1114 return 0; ··· 1196 1132 1197 1133 static int __devinit swim3_attach(struct macio_dev *mdev, const struct of_device_id *match) 1198 1134 { 1199 - int i, rc; 1200 1135 struct gendisk *disk; 1136 + int index, rc; 1137 + 1138 + index = floppy_count++; 1139 + if (index >= MAX_FLOPPIES) 1140 + return -ENXIO; 1201 1141 1202 1142 /* Add the drive */ 1203 - rc = swim3_add_device(mdev, floppy_count); 1143 + rc = swim3_add_device(mdev, index); 1204 1144 if (rc) 1205 1145 return rc; 1146 + /* Now register that disk. 
Same comment about failure handling */ 1147 + disk = disks[index] = alloc_disk(1); 1148 + if (disk == NULL) 1149 + return -ENOMEM; 1150 + disk->queue = blk_init_queue(do_fd_request, &swim3_lock); 1151 + if (disk->queue == NULL) { 1152 + put_disk(disk); 1153 + return -ENOMEM; 1154 + } 1155 + disk->queue->queuedata = &floppy_states[index]; 1206 1156 1207 - /* Now create the queue if not there yet */ 1208 - if (swim3_queue == NULL) { 1157 + if (index == 0) { 1209 1158 /* If we failed, there isn't much we can do as the driver is still 1210 1159 * too dumb to remove the device, just bail out 1211 1160 */ 1212 1161 if (register_blkdev(FLOPPY_MAJOR, "fd")) 1213 1162 return 0; 1214 - swim3_queue = blk_init_queue(do_fd_request, &swim3_lock); 1215 - if (swim3_queue == NULL) { 1216 - unregister_blkdev(FLOPPY_MAJOR, "fd"); 1217 - return 0; 1218 - } 1219 1163 } 1220 1164 1221 - /* Now register that disk. Same comment about failure handling */ 1222 - i = floppy_count++; 1223 - disk = disks[i] = alloc_disk(1); 1224 - if (disk == NULL) 1225 - return 0; 1226 - 1227 1165 disk->major = FLOPPY_MAJOR; 1228 - disk->first_minor = i; 1166 + disk->first_minor = index; 1229 1167 disk->fops = &floppy_fops; 1230 - disk->private_data = &floppy_states[i]; 1231 - disk->queue = swim3_queue; 1168 + disk->private_data = &floppy_states[index]; 1232 1169 disk->flags |= GENHD_FL_REMOVABLE; 1233 - sprintf(disk->disk_name, "fd%d", i); 1170 + sprintf(disk->disk_name, "fd%d", index); 1234 1171 set_capacity(disk, 2880); 1235 1172 add_disk(disk); 1236 1173 ··· 1259 1194 .of_match_table = swim3_match, 1260 1195 }, 1261 1196 .probe = swim3_attach, 1197 + #ifdef CONFIG_PMAC_MEDIABAY 1198 + .mediabay_event = swim3_mb_event, 1199 + #endif 1262 1200 #if 0 1263 1201 .suspend = swim3_suspend, 1264 1202 .resume = swim3_resume,
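The swim3.c rework above replaces the per-drive `fs->lock` with the single queue lock `swim3_lock`, and `grab_drive()` now drops that lock around its sleep so a waiter never blocks other drives while asleep. The same drop-the-lock-while-waiting shape can be sketched in userspace with pthreads; everything here (names, states) is illustrative, not the driver's actual code:

```c
/* Sketch only: a waiter releases the shared lock while sleeping and
 * rechecks the state after waking, mirroring the reworked grab_drive().
 * pthread_cond_wait() atomically drops and reacquires the mutex. */
#include <pthread.h>

enum state { IDLE, AVAILABLE, BUSY };

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t wait_q = PTHREAD_COND_INITIALIZER;
static enum state drive_state = IDLE;
static int wanted;

static int grab_drive(enum state new_state)
{
    pthread_mutex_lock(&lock);
    if (drive_state != IDLE && drive_state != AVAILABLE) {
        ++wanted;
        while (drive_state != AVAILABLE)
            pthread_cond_wait(&wait_q, &lock); /* lock dropped while asleep */
        --wanted;
    }
    drive_state = new_state;
    pthread_mutex_unlock(&lock);
    return 0;
}

static void release_drive(void)
{
    pthread_mutex_lock(&lock);
    drive_state = AVAILABLE;       /* the driver also restarts the queue here */
    pthread_cond_broadcast(&wait_q);
    pthread_mutex_unlock(&lock);
}
```

The kernel version uses a spinlock plus `interruptible_sleep_on()` rather than a condition variable, but the invariant is the same: the shared lock is never held across the sleep.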
+3 -3
drivers/bluetooth/Kconfig
··· 188 188 The core driver to support Marvell Bluetooth devices. 189 189 190 190 This driver is required if you want to support 191 - Marvell Bluetooth devices, such as 8688/8787. 191 + Marvell Bluetooth devices, such as 8688/8787/8797. 192 192 193 193 Say Y here to compile Marvell Bluetooth driver 194 194 into the kernel or say M to compile it as module. ··· 201 201 The driver for Marvell Bluetooth chipsets with SDIO interface. 202 202 203 203 This driver is required if you want to use Marvell Bluetooth 204 - devices with SDIO interface. Currently SD8688/SD8787 chipsets are 205 - supported. 204 + devices with SDIO interface. Currently SD8688/SD8787/SD8797 205 + chipsets are supported. 206 206 207 207 Say Y here to compile support for Marvell BT-over-SDIO driver 208 208 into the kernel or say M to compile it as module.
+13 -2
drivers/bluetooth/btmrvl_sdio.c
··· 65 65 .io_port_1 = 0x01, 66 66 .io_port_2 = 0x02, 67 67 }; 68 - static const struct btmrvl_sdio_card_reg btmrvl_reg_8787 = { 68 + static const struct btmrvl_sdio_card_reg btmrvl_reg_87xx = { 69 69 .cfg = 0x00, 70 70 .host_int_mask = 0x02, 71 71 .host_intstatus = 0x03, ··· 92 92 static const struct btmrvl_sdio_device btmrvl_sdio_sd8787 = { 93 93 .helper = NULL, 94 94 .firmware = "mrvl/sd8787_uapsta.bin", 95 - .reg = &btmrvl_reg_8787, 95 + .reg = &btmrvl_reg_87xx, 96 + .sd_blksz_fw_dl = 256, 97 + }; 98 + 99 + static const struct btmrvl_sdio_device btmrvl_sdio_sd8797 = { 100 + .helper = NULL, 101 + .firmware = "mrvl/sd8797_uapsta.bin", 102 + .reg = &btmrvl_reg_87xx, 96 103 .sd_blksz_fw_dl = 256, 97 104 }; 98 105 ··· 110 103 /* Marvell SD8787 Bluetooth device */ 111 104 { SDIO_DEVICE(SDIO_VENDOR_ID_MARVELL, 0x911A), 112 105 .driver_data = (unsigned long) &btmrvl_sdio_sd8787 }, 106 + /* Marvell SD8797 Bluetooth device */ 107 + { SDIO_DEVICE(SDIO_VENDOR_ID_MARVELL, 0x912A), 108 + .driver_data = (unsigned long) &btmrvl_sdio_sd8797 }, 113 109 114 110 { } /* Terminating entry */ 115 111 }; ··· 1086 1076 MODULE_FIRMWARE("sd8688_helper.bin"); 1087 1077 MODULE_FIRMWARE("sd8688.bin"); 1088 1078 MODULE_FIRMWARE("mrvl/sd8787_uapsta.bin"); 1079 + MODULE_FIRMWARE("mrvl/sd8797_uapsta.bin");
+1 -2
drivers/bluetooth/btusb.c
··· 777 777 usb_mark_last_busy(data->udev); 778 778 } 779 779 780 - usb_free_urb(urb); 781 - 782 780 done: 781 + usb_free_urb(urb); 783 782 return err; 784 783 } 785 784
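The btusb.c change above moves `usb_free_urb(urb)` below the `done:` label so the URB is released on the early-exit path as well as the success path. The idiom it applies — one cleanup point shared by every `goto` — can be sketched like this (the function and flags are made up for illustration, not btusb code):

```c
/* Sketch of the unified-cleanup idiom: every path, including early
 * errors, falls through the "done" label where the resource is freed,
 * just as usb_free_urb() now sits after "done:" in btusb.c. */
#include <stdlib.h>

static int submitted, freed;

static int submit(void *buf, int fail_early)
{
    int err = 0;

    if (!buf) {
        err = -1;
        goto done;      /* before the fix, a free placed later was skipped here */
    }
    if (fail_early) {
        err = -2;
        goto done;
    }
    submitted++;
done:
    free(buf);          /* reached on every path; free(NULL) is a no-op */
    freed++;
    return err;
}
```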
+7 -5
drivers/crypto/mv_cesa.c
··· 343 343 else 344 344 op.config |= CFG_MID_FRAG; 345 345 346 - writel(req_ctx->state[0], cpg->reg + DIGEST_INITIAL_VAL_A); 347 - writel(req_ctx->state[1], cpg->reg + DIGEST_INITIAL_VAL_B); 348 - writel(req_ctx->state[2], cpg->reg + DIGEST_INITIAL_VAL_C); 349 - writel(req_ctx->state[3], cpg->reg + DIGEST_INITIAL_VAL_D); 350 - writel(req_ctx->state[4], cpg->reg + DIGEST_INITIAL_VAL_E); 346 + if (first_block) { 347 + writel(req_ctx->state[0], cpg->reg + DIGEST_INITIAL_VAL_A); 348 + writel(req_ctx->state[1], cpg->reg + DIGEST_INITIAL_VAL_B); 349 + writel(req_ctx->state[2], cpg->reg + DIGEST_INITIAL_VAL_C); 350 + writel(req_ctx->state[3], cpg->reg + DIGEST_INITIAL_VAL_D); 351 + writel(req_ctx->state[4], cpg->reg + DIGEST_INITIAL_VAL_E); 352 + } 351 353 } 352 354 353 355 memcpy(cpg->sram + SRAM_CONFIG, &op, sizeof(struct sec_accel_config));
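The mv_cesa.c fix above writes the `DIGEST_INITIAL_VAL_*` registers only when `first_block` is set: a mid-fragment must continue from the digest state already accumulated in hardware, not be re-seeded. The logic reduces to the pattern below, where a single variable stands in for the engine's digest registers (all names illustrative):

```c
/* Sketch of the first-fragment-only seeding the fix enforces. Seeding
 * unconditionally (the bug) would clobber the running state between
 * fragments of a multi-fragment hash. */

static unsigned engine_state;   /* stand-in for DIGEST_INITIAL_VAL_A..E */

static void process_fragment(unsigned init_state, unsigned data, int first_block)
{
    if (first_block)            /* the bug was doing this write unconditionally */
        engine_state = init_state;
    engine_state += data;       /* stand-in for the hardware hash round */
}
```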
+13
drivers/devfreq/Kconfig
··· 65 65 66 66 comment "DEVFREQ Drivers" 67 67 68 + config ARM_EXYNOS4_BUS_DEVFREQ 69 + bool "ARM Exynos4210/4212/4412 Memory Bus DEVFREQ Driver" 70 + depends on CPU_EXYNOS4210 || CPU_EXYNOS4212 || CPU_EXYNOS4412 71 + select ARCH_HAS_OPP 72 + select DEVFREQ_GOV_SIMPLE_ONDEMAND 73 + help 74 + This adds the DEVFREQ driver for Exynos4210 memory bus (vdd_int) 75 + and Exynos4212/4412 memory interface and bus (vdd_mif + vdd_int). 76 + It reads PPMU counters of memory controllers and adjusts 77 + the operating frequencies and voltages with OPP support. 78 + To operate with optimal voltages, ASV support is required 79 + (CONFIG_EXYNOS_ASV). 80 + 68 81 endif # PM_DEVFREQ
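The Kconfig help text above says the driver reads PPMU counters and adjusts frequency and voltage through OPPs; exynos4_bus.c later defines `BUS_SATURATION_RATIO 40` for this. The core decision — scale the clock so measured utilization lands at the saturation ratio — can be sketched as below (a simplified model with illustrative names, not the driver's actual target computation):

```c
/* Sketch: derive a target bus frequency from busy/total cycle counters
 * (PPMU-style) so that utilization settles at the saturation threshold.
 * Constants and the scaling formula here are illustrative only. */

#define SATURATION_PCT 40UL   /* exynos4_bus.c assumes saturation at 40% */

static unsigned long next_bus_freq(unsigned long busy, unsigned long total,
                                   unsigned long cur_khz, unsigned long max_khz)
{
    unsigned long target;

    if (total == 0)           /* no samples: keep the current frequency */
        return cur_khz;
    /* scale current frequency so utilization moves toward SATURATION_PCT */
    target = cur_khz * (busy * 100UL / total) / SATURATION_PCT;
    if (target > max_khz)
        target = max_khz;
    return target;
}
```

In the real driver the chosen frequency is then snapped to the nearest OPP, which also supplies the matching vdd_int/vdd_mif voltage.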
+3
drivers/devfreq/Makefile
··· 3 3 obj-$(CONFIG_DEVFREQ_GOV_PERFORMANCE) += governor_performance.o 4 4 obj-$(CONFIG_DEVFREQ_GOV_POWERSAVE) += governor_powersave.o 5 5 obj-$(CONFIG_DEVFREQ_GOV_USERSPACE) += governor_userspace.o 6 + 7 + # DEVFREQ Drivers 8 + obj-$(CONFIG_ARM_EXYNOS4_BUS_DEVFREQ) += exynos4_bus.o
+1135
drivers/devfreq/exynos4_bus.c
··· 1 + /* drivers/devfreq/exynos4210_memorybus.c 2 + * 3 + * Copyright (c) 2011 Samsung Electronics Co., Ltd. 4 + * http://www.samsung.com/ 5 + * MyungJoo Ham <myungjoo.ham@samsung.com> 6 + * 7 + * EXYNOS4 - Memory/Bus clock frequency scaling support in DEVFREQ framework 8 + * This version supports EXYNOS4210 only. This changes bus frequencies 9 + * and vddint voltages. Exynos4412/4212 should be able to be supported 10 + * with minor modifications. 11 + * 12 + * This program is free software; you can redistribute it and/or modify 13 + * it under the terms of the GNU General Public License version 2 as 14 + * published by the Free Software Foundation. 15 + * 16 + */ 17 + 18 + #include <linux/io.h> 19 + #include <linux/slab.h> 20 + #include <linux/mutex.h> 21 + #include <linux/suspend.h> 22 + #include <linux/opp.h> 23 + #include <linux/devfreq.h> 24 + #include <linux/platform_device.h> 25 + #include <linux/regulator/consumer.h> 26 + #include <linux/module.h> 27 + 28 + /* Exynos4 ASV has been in the mailing list, but not upstreamed, yet. 
*/ 29 + #ifdef CONFIG_EXYNOS_ASV 30 + extern unsigned int exynos_result_of_asv; 31 + #endif 32 + 33 + #include <mach/regs-clock.h> 34 + 35 + #include <plat/map-s5p.h> 36 + 37 + #define MAX_SAFEVOLT 1200000 /* 1.2V */ 38 + 39 + enum exynos4_busf_type { 40 + TYPE_BUSF_EXYNOS4210, 41 + TYPE_BUSF_EXYNOS4x12, 42 + }; 43 + 44 + /* Assume that the bus is saturated if the utilization is 40% */ 45 + #define BUS_SATURATION_RATIO 40 46 + 47 + enum ppmu_counter { 48 + PPMU_PMNCNT0 = 0, 49 + PPMU_PMCCNT1, 50 + PPMU_PMNCNT2, 51 + PPMU_PMNCNT3, 52 + PPMU_PMNCNT_MAX, 53 + }; 54 + struct exynos4_ppmu { 55 + void __iomem *hw_base; 56 + unsigned int ccnt; 57 + unsigned int event; 58 + unsigned int count[PPMU_PMNCNT_MAX]; 59 + bool ccnt_overflow; 60 + bool count_overflow[PPMU_PMNCNT_MAX]; 61 + }; 62 + 63 + enum busclk_level_idx { 64 + LV_0 = 0, 65 + LV_1, 66 + LV_2, 67 + LV_3, 68 + LV_4, 69 + _LV_END 70 + }; 71 + #define EX4210_LV_MAX LV_2 72 + #define EX4x12_LV_MAX LV_4 73 + #define EX4210_LV_NUM (LV_2 + 1) 74 + #define EX4x12_LV_NUM (LV_4 + 1) 75 + 76 + struct busfreq_data { 77 + enum exynos4_busf_type type; 78 + struct device *dev; 79 + struct devfreq *devfreq; 80 + bool disabled; 81 + struct regulator *vdd_int; 82 + struct regulator *vdd_mif; /* Exynos4412/4212 only */ 83 + struct opp *curr_opp; 84 + struct exynos4_ppmu dmc[2]; 85 + 86 + struct notifier_block pm_notifier; 87 + struct mutex lock; 88 + 89 + /* Dividers calculated at boot/probe-time */ 90 + unsigned int dmc_divtable[_LV_END]; /* DMC0 */ 91 + unsigned int top_divtable[_LV_END]; 92 + }; 93 + 94 + struct bus_opp_table { 95 + unsigned int idx; 96 + unsigned long clk; 97 + unsigned long volt; 98 + }; 99 + 100 + /* 4210 controls clock of mif and voltage of int */ 101 + static struct bus_opp_table exynos4210_busclk_table[] = { 102 + {LV_0, 400000, 1150000}, 103 + {LV_1, 267000, 1050000}, 104 + {LV_2, 133000, 1025000}, 105 + {0, 0, 0}, 106 + }; 107 + 108 + /* 109 + * MIF is the main control knob clock for exynox4x12 MIF/INT 
110 + * clock and voltage of both mif/int are controlled. 111 + */ 112 + static struct bus_opp_table exynos4x12_mifclk_table[] = { 113 + {LV_0, 400000, 1100000}, 114 + {LV_1, 267000, 1000000}, 115 + {LV_2, 160000, 950000}, 116 + {LV_3, 133000, 950000}, 117 + {LV_4, 100000, 950000}, 118 + {0, 0, 0}, 119 + }; 120 + 121 + /* 122 + * INT is not the control knob of 4x12. LV_x is not meant to represent 123 + * the current performance. (MIF does) 124 + */ 125 + static struct bus_opp_table exynos4x12_intclk_table[] = { 126 + {LV_0, 200000, 1000000}, 127 + {LV_1, 160000, 950000}, 128 + {LV_2, 133000, 925000}, 129 + {LV_3, 100000, 900000}, 130 + {0, 0, 0}, 131 + }; 132 + 133 + /* TODO: asv volt definitions are "__initdata"? */ 134 + /* Some chips have different operating voltages */ 135 + static unsigned int exynos4210_asv_volt[][EX4210_LV_NUM] = { 136 + {1150000, 1050000, 1050000}, 137 + {1125000, 1025000, 1025000}, 138 + {1100000, 1000000, 1000000}, 139 + {1075000, 975000, 975000}, 140 + {1050000, 950000, 950000}, 141 + }; 142 + 143 + static unsigned int exynos4x12_mif_step_50[][EX4x12_LV_NUM] = { 144 + /* 400 267 160 133 100 */ 145 + {1050000, 950000, 900000, 900000, 900000}, /* ASV0 */ 146 + {1050000, 950000, 900000, 900000, 900000}, /* ASV1 */ 147 + {1050000, 950000, 900000, 900000, 900000}, /* ASV2 */ 148 + {1050000, 900000, 900000, 900000, 900000}, /* ASV3 */ 149 + {1050000, 900000, 900000, 900000, 850000}, /* ASV4 */ 150 + {1050000, 900000, 900000, 850000, 850000}, /* ASV5 */ 151 + {1050000, 900000, 850000, 850000, 850000}, /* ASV6 */ 152 + {1050000, 900000, 850000, 850000, 850000}, /* ASV7 */ 153 + {1050000, 900000, 850000, 850000, 850000}, /* ASV8 */ 154 + }; 155 + 156 + static unsigned int exynos4x12_int_volt[][EX4x12_LV_NUM] = { 157 + /* 200 160 133 100 */ 158 + {1000000, 950000, 925000, 900000}, /* ASV0 */ 159 + {975000, 925000, 925000, 900000}, /* ASV1 */ 160 + {950000, 925000, 900000, 875000}, /* ASV2 */ 161 + {950000, 900000, 900000, 875000}, /* ASV3 */ 162 + 
{925000, 875000, 875000, 875000}, /* ASV4 */ 163 + {900000, 850000, 850000, 850000}, /* ASV5 */ 164 + {900000, 850000, 850000, 850000}, /* ASV6 */ 165 + {900000, 850000, 850000, 850000}, /* ASV7 */ 166 + {900000, 850000, 850000, 850000}, /* ASV8 */ 167 + }; 168 + 169 + /*** Clock Divider Data for Exynos4210 ***/ 170 + static unsigned int exynos4210_clkdiv_dmc0[][8] = { 171 + /* 172 + * Clock divider value for following 173 + * { DIVACP, DIVACP_PCLK, DIVDPHY, DIVDMC, DIVDMCD 174 + * DIVDMCP, DIVCOPY2, DIVCORE_TIMERS } 175 + */ 176 + 177 + /* DMC L0: 400MHz */ 178 + { 3, 1, 1, 1, 1, 1, 3, 1 }, 179 + /* DMC L1: 266.7MHz */ 180 + { 4, 1, 1, 2, 1, 1, 3, 1 }, 181 + /* DMC L2: 133MHz */ 182 + { 5, 1, 1, 5, 1, 1, 3, 1 }, 183 + }; 184 + static unsigned int exynos4210_clkdiv_top[][5] = { 185 + /* 186 + * Clock divider value for following 187 + * { DIVACLK200, DIVACLK100, DIVACLK160, DIVACLK133, DIVONENAND } 188 + */ 189 + /* ACLK200 L0: 200MHz */ 190 + { 3, 7, 4, 5, 1 }, 191 + /* ACLK200 L1: 160MHz */ 192 + { 4, 7, 5, 6, 1 }, 193 + /* ACLK200 L2: 133MHz */ 194 + { 5, 7, 7, 7, 1 }, 195 + }; 196 + static unsigned int exynos4210_clkdiv_lr_bus[][2] = { 197 + /* 198 + * Clock divider value for following 199 + * { DIVGDL/R, DIVGPL/R } 200 + */ 201 + /* ACLK_GDL/R L1: 200MHz */ 202 + { 3, 1 }, 203 + /* ACLK_GDL/R L2: 160MHz */ 204 + { 4, 1 }, 205 + /* ACLK_GDL/R L3: 133MHz */ 206 + { 5, 1 }, 207 + }; 208 + 209 + /*** Clock Divider Data for Exynos4212/4412 ***/ 210 + static unsigned int exynos4x12_clkdiv_dmc0[][6] = { 211 + /* 212 + * Clock divider value for following 213 + * { DIVACP, DIVACP_PCLK, DIVDPHY, DIVDMC, DIVDMCD 214 + * DIVDMCP} 215 + */ 216 + 217 + /* DMC L0: 400MHz */ 218 + {3, 1, 1, 1, 1, 1}, 219 + /* DMC L1: 266.7MHz */ 220 + {4, 1, 1, 2, 1, 1}, 221 + /* DMC L2: 160MHz */ 222 + {5, 1, 1, 4, 1, 1}, 223 + /* DMC L3: 133MHz */ 224 + {5, 1, 1, 5, 1, 1}, 225 + /* DMC L4: 100MHz */ 226 + {7, 1, 1, 7, 1, 1}, 227 + }; 228 + static unsigned int exynos4x12_clkdiv_dmc1[][6] = { 
229 + /* 230 + * Clock divider value for following 231 + * { G2DACP, DIVC2C, DIVC2C_ACLK } 232 + */ 233 + 234 + /* DMC L0: 400MHz */ 235 + {3, 1, 1}, 236 + /* DMC L1: 266.7MHz */ 237 + {4, 2, 1}, 238 + /* DMC L2: 160MHz */ 239 + {5, 4, 1}, 240 + /* DMC L3: 133MHz */ 241 + {5, 5, 1}, 242 + /* DMC L4: 100MHz */ 243 + {7, 7, 1}, 244 + }; 245 + static unsigned int exynos4x12_clkdiv_top[][5] = { 246 + /* 247 + * Clock divider value for following 248 + * { DIVACLK266_GPS, DIVACLK100, DIVACLK160, 249 + DIVACLK133, DIVONENAND } 250 + */ 251 + 252 + /* ACLK_GDL/R L0: 200MHz */ 253 + {2, 7, 4, 5, 1}, 254 + /* ACLK_GDL/R L1: 200MHz */ 255 + {2, 7, 4, 5, 1}, 256 + /* ACLK_GDL/R L2: 160MHz */ 257 + {4, 7, 5, 7, 1}, 258 + /* ACLK_GDL/R L3: 133MHz */ 259 + {4, 7, 5, 7, 1}, 260 + /* ACLK_GDL/R L4: 100MHz */ 261 + {7, 7, 7, 7, 1}, 262 + }; 263 + static unsigned int exynos4x12_clkdiv_lr_bus[][2] = { 264 + /* 265 + * Clock divider value for following 266 + * { DIVGDL/R, DIVGPL/R } 267 + */ 268 + 269 + /* ACLK_GDL/R L0: 200MHz */ 270 + {3, 1}, 271 + /* ACLK_GDL/R L1: 200MHz */ 272 + {3, 1}, 273 + /* ACLK_GDL/R L2: 160MHz */ 274 + {4, 1}, 275 + /* ACLK_GDL/R L3: 133MHz */ 276 + {5, 1}, 277 + /* ACLK_GDL/R L4: 100MHz */ 278 + {7, 1}, 279 + }; 280 + static unsigned int exynos4x12_clkdiv_sclkip[][3] = { 281 + /* 282 + * Clock divider value for following 283 + * { DIVMFC, DIVJPEG, DIVFIMC0~3} 284 + */ 285 + 286 + /* SCLK_MFC: 200MHz */ 287 + {3, 3, 4}, 288 + /* SCLK_MFC: 200MHz */ 289 + {3, 3, 4}, 290 + /* SCLK_MFC: 160MHz */ 291 + {4, 4, 5}, 292 + /* SCLK_MFC: 133MHz */ 293 + {5, 5, 5}, 294 + /* SCLK_MFC: 100MHz */ 295 + {7, 7, 7}, 296 + }; 297 + 298 + 299 + static int exynos4210_set_busclk(struct busfreq_data *data, struct opp *opp) 300 + { 301 + unsigned int index; 302 + unsigned int tmp; 303 + 304 + for (index = LV_0; index < EX4210_LV_NUM; index++) 305 + if (opp_get_freq(opp) == exynos4210_busclk_table[index].clk) 306 + break; 307 + 308 + if (index == EX4210_LV_NUM) 309 + return 
-EINVAL; 310 + 311 + /* Change Divider - DMC0 */ 312 + tmp = data->dmc_divtable[index]; 313 + 314 + __raw_writel(tmp, S5P_CLKDIV_DMC0); 315 + 316 + do { 317 + tmp = __raw_readl(S5P_CLKDIV_STAT_DMC0); 318 + } while (tmp & 0x11111111); 319 + 320 + /* Change Divider - TOP */ 321 + tmp = data->top_divtable[index]; 322 + 323 + __raw_writel(tmp, S5P_CLKDIV_TOP); 324 + 325 + do { 326 + tmp = __raw_readl(S5P_CLKDIV_STAT_TOP); 327 + } while (tmp & 0x11111); 328 + 329 + /* Change Divider - LEFTBUS */ 330 + tmp = __raw_readl(S5P_CLKDIV_LEFTBUS); 331 + 332 + tmp &= ~(S5P_CLKDIV_BUS_GDLR_MASK | S5P_CLKDIV_BUS_GPLR_MASK); 333 + 334 + tmp |= ((exynos4210_clkdiv_lr_bus[index][0] << 335 + S5P_CLKDIV_BUS_GDLR_SHIFT) | 336 + (exynos4210_clkdiv_lr_bus[index][1] << 337 + S5P_CLKDIV_BUS_GPLR_SHIFT)); 338 + 339 + __raw_writel(tmp, S5P_CLKDIV_LEFTBUS); 340 + 341 + do { 342 + tmp = __raw_readl(S5P_CLKDIV_STAT_LEFTBUS); 343 + } while (tmp & 0x11); 344 + 345 + /* Change Divider - RIGHTBUS */ 346 + tmp = __raw_readl(S5P_CLKDIV_RIGHTBUS); 347 + 348 + tmp &= ~(S5P_CLKDIV_BUS_GDLR_MASK | S5P_CLKDIV_BUS_GPLR_MASK); 349 + 350 + tmp |= ((exynos4210_clkdiv_lr_bus[index][0] << 351 + S5P_CLKDIV_BUS_GDLR_SHIFT) | 352 + (exynos4210_clkdiv_lr_bus[index][1] << 353 + S5P_CLKDIV_BUS_GPLR_SHIFT)); 354 + 355 + __raw_writel(tmp, S5P_CLKDIV_RIGHTBUS); 356 + 357 + do { 358 + tmp = __raw_readl(S5P_CLKDIV_STAT_RIGHTBUS); 359 + } while (tmp & 0x11); 360 + 361 + return 0; 362 + } 363 + 364 + static int exynos4x12_set_busclk(struct busfreq_data *data, struct opp *opp) 365 + { 366 + unsigned int index; 367 + unsigned int tmp; 368 + 369 + for (index = LV_0; index < EX4x12_LV_NUM; index++) 370 + if (opp_get_freq(opp) == exynos4x12_mifclk_table[index].clk) 371 + break; 372 + 373 + if (index == EX4x12_LV_NUM) 374 + return -EINVAL; 375 + 376 + /* Change Divider - DMC0 */ 377 + tmp = data->dmc_divtable[index]; 378 + 379 + __raw_writel(tmp, S5P_CLKDIV_DMC0); 380 + 381 + do { 382 + tmp = __raw_readl(S5P_CLKDIV_STAT_DMC0); 383 
+ } while (tmp & 0x11111111); 384 + 385 + /* Change Divider - DMC1 */ 386 + tmp = __raw_readl(S5P_CLKDIV_DMC1); 387 + 388 + tmp &= ~(S5P_CLKDIV_DMC1_G2D_ACP_MASK | 389 + S5P_CLKDIV_DMC1_C2C_MASK | 390 + S5P_CLKDIV_DMC1_C2CACLK_MASK); 391 + 392 + tmp |= ((exynos4x12_clkdiv_dmc1[index][0] << 393 + S5P_CLKDIV_DMC1_G2D_ACP_SHIFT) | 394 + (exynos4x12_clkdiv_dmc1[index][1] << 395 + S5P_CLKDIV_DMC1_C2C_SHIFT) | 396 + (exynos4x12_clkdiv_dmc1[index][2] << 397 + S5P_CLKDIV_DMC1_C2CACLK_SHIFT)); 398 + 399 + __raw_writel(tmp, S5P_CLKDIV_DMC1); 400 + 401 + do { 402 + tmp = __raw_readl(S5P_CLKDIV_STAT_DMC1); 403 + } while (tmp & 0x111111); 404 + 405 + /* Change Divider - TOP */ 406 + tmp = __raw_readl(S5P_CLKDIV_TOP); 407 + 408 + tmp &= ~(S5P_CLKDIV_TOP_ACLK266_GPS_MASK | 409 + S5P_CLKDIV_TOP_ACLK100_MASK | 410 + S5P_CLKDIV_TOP_ACLK160_MASK | 411 + S5P_CLKDIV_TOP_ACLK133_MASK | 412 + S5P_CLKDIV_TOP_ONENAND_MASK); 413 + 414 + tmp |= ((exynos4x12_clkdiv_top[index][0] << 415 + S5P_CLKDIV_TOP_ACLK266_GPS_SHIFT) | 416 + (exynos4x12_clkdiv_top[index][1] << 417 + S5P_CLKDIV_TOP_ACLK100_SHIFT) | 418 + (exynos4x12_clkdiv_top[index][2] << 419 + S5P_CLKDIV_TOP_ACLK160_SHIFT) | 420 + (exynos4x12_clkdiv_top[index][3] << 421 + S5P_CLKDIV_TOP_ACLK133_SHIFT) | 422 + (exynos4x12_clkdiv_top[index][4] << 423 + S5P_CLKDIV_TOP_ONENAND_SHIFT)); 424 + 425 + __raw_writel(tmp, S5P_CLKDIV_TOP); 426 + 427 + do { 428 + tmp = __raw_readl(S5P_CLKDIV_STAT_TOP); 429 + } while (tmp & 0x11111); 430 + 431 + /* Change Divider - LEFTBUS */ 432 + tmp = __raw_readl(S5P_CLKDIV_LEFTBUS); 433 + 434 + tmp &= ~(S5P_CLKDIV_BUS_GDLR_MASK | S5P_CLKDIV_BUS_GPLR_MASK); 435 + 436 + tmp |= ((exynos4x12_clkdiv_lr_bus[index][0] << 437 + S5P_CLKDIV_BUS_GDLR_SHIFT) | 438 + (exynos4x12_clkdiv_lr_bus[index][1] << 439 + S5P_CLKDIV_BUS_GPLR_SHIFT)); 440 + 441 + __raw_writel(tmp, S5P_CLKDIV_LEFTBUS); 442 + 443 + do { 444 + tmp = __raw_readl(S5P_CLKDIV_STAT_LEFTBUS); 445 + } while (tmp & 0x11); 446 + 447 + /* Change Divider - RIGHTBUS */ 
448 + tmp = __raw_readl(S5P_CLKDIV_RIGHTBUS); 449 + 450 + tmp &= ~(S5P_CLKDIV_BUS_GDLR_MASK | S5P_CLKDIV_BUS_GPLR_MASK); 451 + 452 + tmp |= ((exynos4x12_clkdiv_lr_bus[index][0] << 453 + S5P_CLKDIV_BUS_GDLR_SHIFT) | 454 + (exynos4x12_clkdiv_lr_bus[index][1] << 455 + S5P_CLKDIV_BUS_GPLR_SHIFT)); 456 + 457 + __raw_writel(tmp, S5P_CLKDIV_RIGHTBUS); 458 + 459 + do { 460 + tmp = __raw_readl(S5P_CLKDIV_STAT_RIGHTBUS); 461 + } while (tmp & 0x11); 462 + 463 + /* Change Divider - MFC */ 464 + tmp = __raw_readl(S5P_CLKDIV_MFC); 465 + 466 + tmp &= ~(S5P_CLKDIV_MFC_MASK); 467 + 468 + tmp |= ((exynos4x12_clkdiv_sclkip[index][0] << 469 + S5P_CLKDIV_MFC_SHIFT)); 470 + 471 + __raw_writel(tmp, S5P_CLKDIV_MFC); 472 + 473 + do { 474 + tmp = __raw_readl(S5P_CLKDIV_STAT_MFC); 475 + } while (tmp & 0x1); 476 + 477 + /* Change Divider - JPEG */ 478 + tmp = __raw_readl(S5P_CLKDIV_CAM1); 479 + 480 + tmp &= ~(S5P_CLKDIV_CAM1_JPEG_MASK); 481 + 482 + tmp |= ((exynos4x12_clkdiv_sclkip[index][1] << 483 + S5P_CLKDIV_CAM1_JPEG_SHIFT)); 484 + 485 + __raw_writel(tmp, S5P_CLKDIV_CAM1); 486 + 487 + do { 488 + tmp = __raw_readl(S5P_CLKDIV_STAT_CAM1); 489 + } while (tmp & 0x1); 490 + 491 + /* Change Divider - FIMC0~3 */ 492 + tmp = __raw_readl(S5P_CLKDIV_CAM); 493 + 494 + tmp &= ~(S5P_CLKDIV_CAM_FIMC0_MASK | S5P_CLKDIV_CAM_FIMC1_MASK | 495 + S5P_CLKDIV_CAM_FIMC2_MASK | S5P_CLKDIV_CAM_FIMC3_MASK); 496 + 497 + tmp |= ((exynos4x12_clkdiv_sclkip[index][2] << 498 + S5P_CLKDIV_CAM_FIMC0_SHIFT) | 499 + (exynos4x12_clkdiv_sclkip[index][2] << 500 + S5P_CLKDIV_CAM_FIMC1_SHIFT) | 501 + (exynos4x12_clkdiv_sclkip[index][2] << 502 + S5P_CLKDIV_CAM_FIMC2_SHIFT) | 503 + (exynos4x12_clkdiv_sclkip[index][2] << 504 + S5P_CLKDIV_CAM_FIMC3_SHIFT)); 505 + 506 + __raw_writel(tmp, S5P_CLKDIV_CAM); 507 + 508 + do { 509 + tmp = __raw_readl(S5P_CLKDIV_STAT_CAM1); 510 + } while (tmp & 0x1111); 511 + 512 + return 0; 513 + } 514 + 515 + 516 + static void busfreq_mon_reset(struct busfreq_data *data) 517 + { 518 + unsigned int i; 519 + 
520 + for (i = 0; i < 2; i++) { 521 + void __iomem *ppmu_base = data->dmc[i].hw_base; 522 + 523 + /* Reset PPMU */ 524 + __raw_writel(0x8000000f, ppmu_base + 0xf010); 525 + __raw_writel(0x8000000f, ppmu_base + 0xf050); 526 + __raw_writel(0x6, ppmu_base + 0xf000); 527 + __raw_writel(0x0, ppmu_base + 0xf100); 528 + 529 + /* Set PPMU Event */ 530 + data->dmc[i].event = 0x6; 531 + __raw_writel(((data->dmc[i].event << 12) | 0x1), 532 + ppmu_base + 0xfc); 533 + 534 + /* Start PPMU */ 535 + __raw_writel(0x1, ppmu_base + 0xf000); 536 + } 537 + } 538 + 539 + static void exynos4_read_ppmu(struct busfreq_data *data) 540 + { 541 + int i, j; 542 + 543 + for (i = 0; i < 2; i++) { 544 + void __iomem *ppmu_base = data->dmc[i].hw_base; 545 + u32 overflow; 546 + 547 + /* Stop PPMU */ 548 + __raw_writel(0x0, ppmu_base + 0xf000); 549 + 550 + /* Update local data from PPMU */ 551 + overflow = __raw_readl(ppmu_base + 0xf050); 552 + 553 + data->dmc[i].ccnt = __raw_readl(ppmu_base + 0xf100); 554 + data->dmc[i].ccnt_overflow = overflow & (1 << 31); 555 + 556 + for (j = 0; j < PPMU_PMNCNT_MAX; j++) { 557 + data->dmc[i].count[j] = __raw_readl( 558 + ppmu_base + (0xf110 + (0x10 * j))); 559 + data->dmc[i].count_overflow[j] = overflow & (1 << j); 560 + } 561 + } 562 + 563 + busfreq_mon_reset(data); 564 + } 565 + 566 + static int exynos4x12_get_intspec(unsigned long mifclk) 567 + { 568 + int i = 0; 569 + 570 + while (exynos4x12_intclk_table[i].clk) { 571 + if (exynos4x12_intclk_table[i].clk <= mifclk) 572 + return i; 573 + i++; 574 + } 575 + 576 + return -EINVAL; 577 + } 578 + 579 + static int exynos4_bus_setvolt(struct busfreq_data *data, struct opp *opp, 580 + struct opp *oldopp) 581 + { 582 + int err = 0, tmp; 583 + unsigned long volt = opp_get_voltage(opp); 584 + 585 + switch (data->type) { 586 + case TYPE_BUSF_EXYNOS4210: 587 + /* OPP represents DMC clock + INT voltage */ 588 + err = regulator_set_voltage(data->vdd_int, volt, 589 + MAX_SAFEVOLT); 590 + break; 591 + case 
TYPE_BUSF_EXYNOS4x12: 592 + /* OPP represents MIF clock + MIF voltage */ 593 + err = regulator_set_voltage(data->vdd_mif, volt, 594 + MAX_SAFEVOLT); 595 + if (err) 596 + break; 597 + 598 + tmp = exynos4x12_get_intspec(opp_get_freq(opp)); 599 + if (tmp < 0) { 600 + err = tmp; 601 + regulator_set_voltage(data->vdd_mif, 602 + opp_get_voltage(oldopp), 603 + MAX_SAFEVOLT); 604 + break; 605 + } 606 + err = regulator_set_voltage(data->vdd_int, 607 + exynos4x12_intclk_table[tmp].volt, 608 + MAX_SAFEVOLT); 609 + /* Try to recover */ 610 + if (err) 611 + regulator_set_voltage(data->vdd_mif, 612 + opp_get_voltage(oldopp), 613 + MAX_SAFEVOLT); 614 + break; 615 + default: 616 + err = -EINVAL; 617 + } 618 + 619 + return err; 620 + } 621 + 622 + static int exynos4_bus_target(struct device *dev, unsigned long *_freq) 623 + { 624 + int err = 0; 625 + struct platform_device *pdev = container_of(dev, struct platform_device, 626 + dev); 627 + struct busfreq_data *data = platform_get_drvdata(pdev); 628 + struct opp *opp = devfreq_recommended_opp(dev, _freq); 629 + unsigned long old_freq = opp_get_freq(data->curr_opp); 630 + unsigned long freq = opp_get_freq(opp); 631 + 632 + if (old_freq == freq) 633 + return 0; 634 + 635 + dev_dbg(dev, "targeting %lukHz %luuV\n", freq, opp_get_voltage(opp)); 636 + 637 + mutex_lock(&data->lock); 638 + 639 + if (data->disabled) 640 + goto out; 641 + 642 + if (old_freq < freq) 643 + err = exynos4_bus_setvolt(data, opp, data->curr_opp); 644 + if (err) 645 + goto out; 646 + 647 + if (old_freq != freq) { 648 + switch (data->type) { 649 + case TYPE_BUSF_EXYNOS4210: 650 + err = exynos4210_set_busclk(data, opp); 651 + break; 652 + case TYPE_BUSF_EXYNOS4x12: 653 + err = exynos4x12_set_busclk(data, opp); 654 + break; 655 + default: 656 + err = -EINVAL; 657 + } 658 + } 659 + if (err) 660 + goto out; 661 + 662 + if (old_freq > freq) 663 + err = exynos4_bus_setvolt(data, opp, data->curr_opp); 664 + if (err) 665 + goto out; 666 + 667 + data->curr_opp = opp; 668 +
out: 669 + mutex_unlock(&data->lock); 670 + return err; 671 + } 672 + 673 + static int exynos4_get_busier_dmc(struct busfreq_data *data) 674 + { 675 + u64 p0 = data->dmc[0].count[0]; 676 + u64 p1 = data->dmc[1].count[0]; 677 + 678 + p0 *= data->dmc[1].ccnt; 679 + p1 *= data->dmc[0].ccnt; 680 + 681 + if (data->dmc[1].ccnt == 0) 682 + return 0; 683 + 684 + if (p0 > p1) 685 + return 0; 686 + return 1; 687 + } 688 + 689 + static int exynos4_bus_get_dev_status(struct device *dev, 690 + struct devfreq_dev_status *stat) 691 + { 692 + struct platform_device *pdev = container_of(dev, struct platform_device, 693 + dev); 694 + struct busfreq_data *data = platform_get_drvdata(pdev); 695 + int busier_dmc; 696 + int cycles_x2 = 2; /* 2 x cycles */ 697 + void __iomem *addr; 698 + u32 timing; 699 + u32 memctrl; 700 + 701 + exynos4_read_ppmu(data); 702 + busier_dmc = exynos4_get_busier_dmc(data); 703 + stat->current_frequency = opp_get_freq(data->curr_opp); 704 + 705 + if (busier_dmc) 706 + addr = S5P_VA_DMC1; 707 + else 708 + addr = S5P_VA_DMC0; 709 + 710 + memctrl = __raw_readl(addr + 0x04); /* one of DDR2/3/LPDDR2 */ 711 + timing = __raw_readl(addr + 0x38); /* CL or WL/RL values */ 712 + 713 + switch ((memctrl >> 8) & 0xf) { 714 + case 0x4: /* DDR2 */ 715 + cycles_x2 = ((timing >> 16) & 0xf) * 2; 716 + break; 717 + case 0x5: /* LPDDR2 */ 718 + case 0x6: /* DDR3 */ 719 + cycles_x2 = ((timing >> 8) & 0xf) + ((timing >> 0) & 0xf); 720 + break; 721 + default: 722 + pr_err("%s: Unknown Memory Type(%d).\n", __func__, 723 + (memctrl >> 8) & 0xf); 724 + return -EINVAL; 725 + } 726 + 727 + /* Number of cycles spent on memory access */ 728 + stat->busy_time = data->dmc[busier_dmc].count[0] / 2 * (cycles_x2 + 2); 729 + stat->busy_time *= 100 / BUS_SATURATION_RATIO; 730 + stat->total_time = data->dmc[busier_dmc].ccnt; 731 + 732 + /* If the counters have overflown, retry */ 733 + if (data->dmc[busier_dmc].ccnt_overflow || 734 + data->dmc[busier_dmc].count_overflow[0]) 735 + return -EAGAIN; 
736 + 737 + return 0; 738 + } 739 + 740 + static void exynos4_bus_exit(struct device *dev) 741 + { 742 + struct platform_device *pdev = container_of(dev, struct platform_device, 743 + dev); 744 + struct busfreq_data *data = platform_get_drvdata(pdev); 745 + 746 + devfreq_unregister_opp_notifier(dev, data->devfreq); 747 + } 748 + 749 + static struct devfreq_dev_profile exynos4_devfreq_profile = { 750 + .initial_freq = 400000, 751 + .polling_ms = 50, 752 + .target = exynos4_bus_target, 753 + .get_dev_status = exynos4_bus_get_dev_status, 754 + .exit = exynos4_bus_exit, 755 + }; 756 + 757 + static int exynos4210_init_tables(struct busfreq_data *data) 758 + { 759 + u32 tmp; 760 + int mgrp; 761 + int i, err = 0; 762 + 763 + tmp = __raw_readl(S5P_CLKDIV_DMC0); 764 + for (i = LV_0; i < EX4210_LV_NUM; i++) { 765 + tmp &= ~(S5P_CLKDIV_DMC0_ACP_MASK | 766 + S5P_CLKDIV_DMC0_ACPPCLK_MASK | 767 + S5P_CLKDIV_DMC0_DPHY_MASK | 768 + S5P_CLKDIV_DMC0_DMC_MASK | 769 + S5P_CLKDIV_DMC0_DMCD_MASK | 770 + S5P_CLKDIV_DMC0_DMCP_MASK | 771 + S5P_CLKDIV_DMC0_COPY2_MASK | 772 + S5P_CLKDIV_DMC0_CORETI_MASK); 773 + 774 + tmp |= ((exynos4210_clkdiv_dmc0[i][0] << 775 + S5P_CLKDIV_DMC0_ACP_SHIFT) | 776 + (exynos4210_clkdiv_dmc0[i][1] << 777 + S5P_CLKDIV_DMC0_ACPPCLK_SHIFT) | 778 + (exynos4210_clkdiv_dmc0[i][2] << 779 + S5P_CLKDIV_DMC0_DPHY_SHIFT) | 780 + (exynos4210_clkdiv_dmc0[i][3] << 781 + S5P_CLKDIV_DMC0_DMC_SHIFT) | 782 + (exynos4210_clkdiv_dmc0[i][4] << 783 + S5P_CLKDIV_DMC0_DMCD_SHIFT) | 784 + (exynos4210_clkdiv_dmc0[i][5] << 785 + S5P_CLKDIV_DMC0_DMCP_SHIFT) | 786 + (exynos4210_clkdiv_dmc0[i][6] << 787 + S5P_CLKDIV_DMC0_COPY2_SHIFT) | 788 + (exynos4210_clkdiv_dmc0[i][7] << 789 + S5P_CLKDIV_DMC0_CORETI_SHIFT)); 790 + 791 + data->dmc_divtable[i] = tmp; 792 + } 793 + 794 + tmp = __raw_readl(S5P_CLKDIV_TOP); 795 + for (i = LV_0; i < EX4210_LV_NUM; i++) { 796 + tmp &= ~(S5P_CLKDIV_TOP_ACLK200_MASK | 797 + S5P_CLKDIV_TOP_ACLK100_MASK | 798 + S5P_CLKDIV_TOP_ACLK160_MASK | 799 + 
S5P_CLKDIV_TOP_ACLK133_MASK | 800 + S5P_CLKDIV_TOP_ONENAND_MASK); 801 + 802 + tmp |= ((exynos4210_clkdiv_top[i][0] << 803 + S5P_CLKDIV_TOP_ACLK200_SHIFT) | 804 + (exynos4210_clkdiv_top[i][1] << 805 + S5P_CLKDIV_TOP_ACLK100_SHIFT) | 806 + (exynos4210_clkdiv_top[i][2] << 807 + S5P_CLKDIV_TOP_ACLK160_SHIFT) | 808 + (exynos4210_clkdiv_top[i][3] << 809 + S5P_CLKDIV_TOP_ACLK133_SHIFT) | 810 + (exynos4210_clkdiv_top[i][4] << 811 + S5P_CLKDIV_TOP_ONENAND_SHIFT)); 812 + 813 + data->top_divtable[i] = tmp; 814 + } 815 + 816 + #ifdef CONFIG_EXYNOS_ASV 817 + tmp = exynos_result_of_asv; 818 + #else 819 + tmp = 0; /* Max voltages for the reliability of the unknown */ 820 + #endif 821 + 822 + pr_debug("ASV Group of Exynos4 is %d\n", tmp); 823 + /* Use merged grouping for voltage */ 824 + switch (tmp) { 825 + case 0: 826 + mgrp = 0; 827 + break; 828 + case 1: 829 + case 2: 830 + mgrp = 1; 831 + break; 832 + case 3: 833 + case 4: 834 + mgrp = 2; 835 + break; 836 + case 5: 837 + case 6: 838 + mgrp = 3; 839 + break; 840 + case 7: 841 + mgrp = 4; 842 + break; 843 + default: 844 + pr_warn("Unknown ASV Group. 
Use max voltage.\n"); 845 + mgrp = 0; 846 + } 847 + 848 + for (i = LV_0; i < EX4210_LV_NUM; i++) 849 + exynos4210_busclk_table[i].volt = exynos4210_asv_volt[mgrp][i]; 850 + 851 + for (i = LV_0; i < EX4210_LV_NUM; i++) { 852 + err = opp_add(data->dev, exynos4210_busclk_table[i].clk, 853 + exynos4210_busclk_table[i].volt); 854 + if (err) { 855 + dev_err(data->dev, "Cannot add opp entries.\n"); 856 + return err; 857 + } 858 + } 859 + 860 + 861 + return 0; 862 + } 863 + 864 + static int exynos4x12_init_tables(struct busfreq_data *data) 865 + { 866 + unsigned int i; 867 + unsigned int tmp; 868 + int ret; 869 + 870 + /* Enable pause function for DREX2 DVFS */ 871 + tmp = __raw_readl(S5P_DMC_PAUSE_CTRL); 872 + tmp |= DMC_PAUSE_ENABLE; 873 + __raw_writel(tmp, S5P_DMC_PAUSE_CTRL); 874 + 875 + tmp = __raw_readl(S5P_CLKDIV_DMC0); 876 + 877 + for (i = 0; i < EX4x12_LV_NUM; i++) { 878 + tmp &= ~(S5P_CLKDIV_DMC0_ACP_MASK | 879 + S5P_CLKDIV_DMC0_ACPPCLK_MASK | 880 + S5P_CLKDIV_DMC0_DPHY_MASK | 881 + S5P_CLKDIV_DMC0_DMC_MASK | 882 + S5P_CLKDIV_DMC0_DMCD_MASK | 883 + S5P_CLKDIV_DMC0_DMCP_MASK); 884 + 885 + tmp |= ((exynos4x12_clkdiv_dmc0[i][0] << 886 + S5P_CLKDIV_DMC0_ACP_SHIFT) | 887 + (exynos4x12_clkdiv_dmc0[i][1] << 888 + S5P_CLKDIV_DMC0_ACPPCLK_SHIFT) | 889 + (exynos4x12_clkdiv_dmc0[i][2] << 890 + S5P_CLKDIV_DMC0_DPHY_SHIFT) | 891 + (exynos4x12_clkdiv_dmc0[i][3] << 892 + S5P_CLKDIV_DMC0_DMC_SHIFT) | 893 + (exynos4x12_clkdiv_dmc0[i][4] << 894 + S5P_CLKDIV_DMC0_DMCD_SHIFT) | 895 + (exynos4x12_clkdiv_dmc0[i][5] << 896 + S5P_CLKDIV_DMC0_DMCP_SHIFT)); 897 + 898 + data->dmc_divtable[i] = tmp; 899 + } 900 + 901 + #ifdef CONFIG_EXYNOS_ASV 902 + tmp = exynos_result_of_asv; 903 + #else 904 + tmp = 0; /* Max voltages for the reliability of the unknown */ 905 + #endif 906 + 907 + if (tmp > 8) 908 + tmp = 0; 909 + pr_debug("ASV Group of Exynos4x12 is %d\n", tmp); 910 + 911 + for (i = 0; i < EX4x12_LV_NUM; i++) { 912 + exynos4x12_mifclk_table[i].volt = 913 + exynos4x12_mif_step_50[tmp][i]; 
914 + exynos4x12_intclk_table[i].volt = 915 + exynos4x12_int_volt[tmp][i]; 916 + } 917 + 918 + for (i = 0; i < EX4x12_LV_NUM; i++) { 919 + ret = opp_add(data->dev, exynos4x12_mifclk_table[i].clk, 920 + exynos4x12_mifclk_table[i].volt); 921 + if (ret) { 922 + dev_err(data->dev, "Failed to add opp entries.\n"); 923 + return ret; 924 + } 925 + } 926 + 927 + return 0; 928 + } 929 + 930 + static int exynos4_busfreq_pm_notifier_event(struct notifier_block *this, 931 + unsigned long event, void *ptr) 932 + { 933 + struct busfreq_data *data = container_of(this, struct busfreq_data, 934 + pm_notifier); 935 + struct opp *opp; 936 + unsigned long maxfreq = ULONG_MAX; 937 + int err = 0; 938 + 939 + switch (event) { 940 + case PM_SUSPEND_PREPARE: 941 + /* Set Fastest and Deactivate DVFS */ 942 + mutex_lock(&data->lock); 943 + 944 + data->disabled = true; 945 + 946 + opp = opp_find_freq_floor(data->dev, &maxfreq); 947 + 948 + err = exynos4_bus_setvolt(data, opp, data->curr_opp); 949 + if (err) 950 + goto unlock; 951 + 952 + switch (data->type) { 953 + case TYPE_BUSF_EXYNOS4210: 954 + err = exynos4210_set_busclk(data, opp); 955 + break; 956 + case TYPE_BUSF_EXYNOS4x12: 957 + err = exynos4x12_set_busclk(data, opp); 958 + break; 959 + default: 960 + err = -EINVAL; 961 + } 962 + if (err) 963 + goto unlock; 964 + 965 + data->curr_opp = opp; 966 + unlock: 967 + mutex_unlock(&data->lock); 968 + if (err) 969 + return err; 970 + return NOTIFY_OK; 971 + case PM_POST_RESTORE: 972 + case PM_POST_SUSPEND: 973 + /* Reactivate */ 974 + mutex_lock(&data->lock); 975 + data->disabled = false; 976 + mutex_unlock(&data->lock); 977 + return NOTIFY_OK; 978 + } 979 + 980 + return NOTIFY_DONE; 981 + } 982 + 983 + static __devinit int exynos4_busfreq_probe(struct platform_device *pdev) 984 + { 985 + struct busfreq_data *data; 986 + struct opp *opp; 987 + struct device *dev = &pdev->dev; 988 + int err = 0; 989 + 990 + data = kzalloc(sizeof(struct busfreq_data), GFP_KERNEL); 991 + if (data == NULL) { 992 + 
dev_err(dev, "Cannot allocate memory.\n"); 993 + return -ENOMEM; 994 + } 995 + 996 + data->type = pdev->id_entry->driver_data; 997 + data->dmc[0].hw_base = S5P_VA_DMC0; 998 + data->dmc[1].hw_base = S5P_VA_DMC1; 999 + data->pm_notifier.notifier_call = exynos4_busfreq_pm_notifier_event; 1000 + data->dev = dev; 1001 + mutex_init(&data->lock); 1002 + 1003 + switch (data->type) { 1004 + case TYPE_BUSF_EXYNOS4210: 1005 + err = exynos4210_init_tables(data); 1006 + break; 1007 + case TYPE_BUSF_EXYNOS4x12: 1008 + err = exynos4x12_init_tables(data); 1009 + break; 1010 + default: 1011 + dev_err(dev, "Cannot determine the device id %d\n", data->type); 1012 + err = -EINVAL; 1013 + } 1014 + if (err) 1015 + goto err_regulator; 1016 + 1017 + data->vdd_int = regulator_get(dev, "vdd_int"); 1018 + if (IS_ERR(data->vdd_int)) { 1019 + dev_err(dev, "Cannot get the regulator \"vdd_int\"\n"); 1020 + err = PTR_ERR(data->vdd_int); 1021 + goto err_regulator; 1022 + } 1023 + if (data->type == TYPE_BUSF_EXYNOS4x12) { 1024 + data->vdd_mif = regulator_get(dev, "vdd_mif"); 1025 + if (IS_ERR(data->vdd_mif)) { 1026 + dev_err(dev, "Cannot get the regulator \"vdd_mif\"\n"); 1027 + err = PTR_ERR(data->vdd_mif); 1028 + regulator_put(data->vdd_int); 1029 + goto err_regulator; 1030 + 1031 + } 1032 + } 1033 + 1034 + opp = opp_find_freq_floor(dev, &exynos4_devfreq_profile.initial_freq); 1035 + if (IS_ERR(opp)) { 1036 + dev_err(dev, "Invalid initial frequency %lu kHz.\n", 1037 + exynos4_devfreq_profile.initial_freq); 1038 + err = PTR_ERR(opp); 1039 + goto err_opp_add; 1040 + } 1041 + data->curr_opp = opp; 1042 + 1043 + platform_set_drvdata(pdev, data); 1044 + 1045 + busfreq_mon_reset(data); 1046 + 1047 + data->devfreq = devfreq_add_device(dev, &exynos4_devfreq_profile, 1048 + &devfreq_simple_ondemand, NULL); 1049 + if (IS_ERR(data->devfreq)) { 1050 + err = PTR_ERR(data->devfreq); 1051 + goto err_opp_add; 1052 + } 1053 + 1054 + devfreq_register_opp_notifier(dev, data->devfreq); 1055 + 1056 + err = 
register_pm_notifier(&data->pm_notifier); 1057 + if (err) { 1058 + dev_err(dev, "Failed to setup pm notifier\n"); 1059 + goto err_devfreq_add; 1060 + } 1061 + 1062 + return 0; 1063 + err_devfreq_add: 1064 + devfreq_remove_device(data->devfreq); 1065 + err_opp_add: 1066 + if (data->vdd_mif) 1067 + regulator_put(data->vdd_mif); 1068 + regulator_put(data->vdd_int); 1069 + err_regulator: 1070 + kfree(data); 1071 + return err; 1072 + } 1073 + 1074 + static __devexit int exynos4_busfreq_remove(struct platform_device *pdev) 1075 + { 1076 + struct busfreq_data *data = platform_get_drvdata(pdev); 1077 + 1078 + unregister_pm_notifier(&data->pm_notifier); 1079 + devfreq_remove_device(data->devfreq); 1080 + regulator_put(data->vdd_int); 1081 + if (data->vdd_mif) 1082 + regulator_put(data->vdd_mif); 1083 + kfree(data); 1084 + 1085 + return 0; 1086 + } 1087 + 1088 + static int exynos4_busfreq_resume(struct device *dev) 1089 + { 1090 + struct platform_device *pdev = container_of(dev, struct platform_device, 1091 + dev); 1092 + struct busfreq_data *data = platform_get_drvdata(pdev); 1093 + 1094 + busfreq_mon_reset(data); 1095 + return 0; 1096 + } 1097 + 1098 + static const struct dev_pm_ops exynos4_busfreq_pm = { 1099 + .resume = exynos4_busfreq_resume, 1100 + }; 1101 + 1102 + static const struct platform_device_id exynos4_busfreq_id[] = { 1103 + { "exynos4210-busfreq", TYPE_BUSF_EXYNOS4210 }, 1104 + { "exynos4412-busfreq", TYPE_BUSF_EXYNOS4x12 }, 1105 + { "exynos4212-busfreq", TYPE_BUSF_EXYNOS4x12 }, 1106 + { }, 1107 + }; 1108 + 1109 + static struct platform_driver exynos4_busfreq_driver = { 1110 + .probe = exynos4_busfreq_probe, 1111 + .remove = __devexit_p(exynos4_busfreq_remove), 1112 + .id_table = exynos4_busfreq_id, 1113 + .driver = { 1114 + .name = "exynos4-busfreq", 1115 + .owner = THIS_MODULE, 1116 + .pm = &exynos4_busfreq_pm, 1117 + }, 1118 + }; 1119 + 1120 + static int __init exynos4_busfreq_init(void) 1121 + { 1122 + return 
platform_driver_register(&exynos4_busfreq_driver); 1123 + } 1124 + late_initcall(exynos4_busfreq_init); 1125 + 1126 + static void __exit exynos4_busfreq_exit(void) 1127 + { 1128 + platform_driver_unregister(&exynos4_busfreq_driver); 1129 + } 1130 + module_exit(exynos4_busfreq_exit); 1131 + 1132 + MODULE_LICENSE("GPL"); 1133 + MODULE_DESCRIPTION("EXYNOS4 busfreq driver with devfreq framework"); 1134 + MODULE_AUTHOR("MyungJoo Ham <myungjoo.ham@samsung.com>"); 1135 + MODULE_ALIAS("exynos4-busfreq");
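The driver's exynos4_get_busier_dmc() picks the more loaded memory controller by cross-multiplying the event/cycle counts into 64-bit products instead of dividing, which avoids integer-division rounding. A standalone sketch of that comparison (the function name and plain-integer types here are ours, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

/* Compare count0/ccnt0 vs count1/ccnt1 by cross-multiplication:
 * p0 = count0*ccnt1 and p1 = count1*ccnt0 order the two ratios
 * exactly, with no division and no rounding loss. */
static int busier_dmc(uint32_t count0, uint32_t ccnt0,
                      uint32_t count1, uint32_t ccnt1)
{
    uint64_t p0 = (uint64_t)count0 * ccnt1;
    uint64_t p1 = (uint64_t)count1 * ccnt0;

    if (ccnt1 == 0)   /* DMC1 never counted a cycle: default to DMC0 */
        return 0;

    return (p0 > p1) ? 0 : 1;
}
```

As in the driver, ties go to DMC1; the zero-cycle guard keeps a dead second controller from winning by default.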
+1 -1
drivers/edac/mpc85xx_edac.c
··· 1128 1128 { .compatible = "fsl,p1020-memory-controller", }, 1129 1129 { .compatible = "fsl,p1021-memory-controller", }, 1130 1130 { .compatible = "fsl,p2020-memory-controller", }, 1131 - { .compatible = "fsl,p4080-memory-controller", }, 1131 + { .compatible = "fsl,qoriq-memory-controller", }, 1132 1132 {}, 1133 1133 }; 1134 1134 MODULE_DEVICE_TABLE(of, mpc85xx_mc_err_of_match);
+9 -3
drivers/firmware/efivars.c
··· 457 457 } 458 458 459 459 static ssize_t efi_pstore_read(u64 *id, enum pstore_type_id *type, 460 - struct timespec *timespec, struct pstore_info *psi) 460 + struct timespec *timespec, 461 + char **buf, struct pstore_info *psi) 461 462 { 462 463 efi_guid_t vendor = LINUX_EFI_CRASH_GUID; 463 464 struct efivars *efivars = psi->data; ··· 479 478 timespec->tv_nsec = 0; 480 479 get_var_data_locked(efivars, &efivars->walk_entry->var); 481 480 size = efivars->walk_entry->var.DataSize; 482 - memcpy(psi->buf, efivars->walk_entry->var.Data, size); 481 + *buf = kmalloc(size, GFP_KERNEL); 482 + if (*buf == NULL) 483 + return -ENOMEM; 484 + memcpy(*buf, efivars->walk_entry->var.Data, 485 + size); 483 486 efivars->walk_entry = list_entry(efivars->walk_entry->list.next, 484 487 struct efivar_entry, list); 485 488 return size; ··· 581 576 } 582 577 583 578 static ssize_t efi_pstore_read(u64 *id, enum pstore_type_id *type, 584 - struct timespec *time, struct pstore_info *psi) 579 + struct timespec *timespec, 580 + char **buf, struct pstore_info *psi) 585 581 { 586 582 return -1; 587 583 }
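The efi_pstore_read() change above moves pstore from a fixed preallocated psi->buf to a record buffer the backend allocates per read. A userspace model of that contract (read_record is a hypothetical stand-in; the kernel uses kmalloc and -ENOMEM where malloc and -1 appear here):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Callee-allocated read: the backend sizes and allocates *buf for
 * each record and returns the record size; the caller frees it.
 * Records are no longer clamped to one shared buffer's size. */
static long read_record(const void *data, size_t size, char **buf)
{
    *buf = malloc(size);
    if (*buf == NULL)
        return -1;
    memcpy(*buf, data, size);
    return (long)size;
}
```

Ownership transfers on success, so the caller must free the buffer after consuming the record.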
+40 -2
drivers/firmware/iscsi_ibft.c
··· 746 746 ibft_cleanup(); 747 747 } 748 748 749 + #ifdef CONFIG_ACPI 750 + static const struct { 751 + char *sign; 752 + } ibft_signs[] = { 753 + /* 754 + * One spec says "IBFT", the other says "iBFT". We have to check 755 + * for both. 756 + */ 757 + { ACPI_SIG_IBFT }, 758 + { "iBFT" }, 759 + }; 760 + 761 + static void __init acpi_find_ibft_region(void) 762 + { 763 + int i; 764 + struct acpi_table_header *table = NULL; 765 + 766 + if (acpi_disabled) 767 + return; 768 + 769 + for (i = 0; i < ARRAY_SIZE(ibft_signs) && !ibft_addr; i++) { 770 + acpi_get_table(ibft_signs[i].sign, 0, &table); 771 + ibft_addr = (struct acpi_table_ibft *)table; 772 + } 773 + } 774 + #else 775 + static void __init acpi_find_ibft_region(void) 776 + { 777 + } 778 + #endif 779 + 749 780 /* 750 781 * ibft_init() - creates sysfs tree entries for the iBFT data. 751 782 */ ··· 784 753 { 785 754 int rc = 0; 786 755 756 + /* 757 + As on UEFI systems the setup_arch()/find_ibft_region() 758 + is called before ACPI tables are parsed and it only does 759 + legacy finding. 760 + */ 761 + if (!ibft_addr) 762 + acpi_find_ibft_region(); 763 + 787 764 if (ibft_addr) { 788 - printk(KERN_INFO "iBFT detected at 0x%llx.\n", 789 - (u64)isa_virt_to_bus(ibft_addr)); 765 + pr_info("iBFT detected.\n"); 790 766 791 767 rc = ibft_check_device(); 792 768 if (rc)
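acpi_find_ibft_region() probes each candidate signature in turn and the `!ibft_addr` loop condition makes the first successful lookup win. A minimal sketch of that first-match scan (find_table and its lookup stub are hypothetical names; lookup models acpi_get_table succeeding or not):

```c
#include <assert.h>
#include <string.h>

/* Stand-in for acpi_get_table(): "finds" the table only when the
 * probed signature matches the one actually present. */
static const char *lookup(const char *sig, const char *present)
{
    return strcmp(sig, present) == 0 ? present : NULL;
}

/* Try each candidate signature ("IBFT" vs "iBFT") until one lookup
 * succeeds; the !addr condition stops the scan at the first hit. */
static const char *find_table(const char *const sigs[], int nsigs,
                              const char *present)
{
    const char *addr = NULL;
    for (int i = 0; i < nsigs && !addr; i++)
        addr = lookup(sigs[i], present);
    return addr;
}
```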
+2 -24
drivers/firmware/iscsi_ibft_find.c
··· 45 45 static const struct { 46 46 char *sign; 47 47 } ibft_signs[] = { 48 - #ifdef CONFIG_ACPI 49 - /* 50 - * One spec says "IBFT", the other says "iBFT". We have to check 51 - * for both. 52 - */ 53 - { ACPI_SIG_IBFT }, 54 - #endif 55 48 { "iBFT" }, 56 49 { "BIFT" }, /* Broadcom iSCSI Offload */ 57 50 }; ··· 54 61 #define IBFT_END 0x100000 /* 1MB */ 55 62 #define VGA_MEM 0xA0000 /* VGA buffer */ 56 63 #define VGA_SIZE 0x20000 /* 128kB */ 57 - 58 - #ifdef CONFIG_ACPI 59 - static int __init acpi_find_ibft(struct acpi_table_header *header) 60 - { 61 - ibft_addr = (struct acpi_table_ibft *)header; 62 - return 0; 63 - } 64 - #endif /* CONFIG_ACPI */ 65 64 66 65 static int __init find_ibft_in_mem(void) 67 66 { ··· 79 94 * the table cannot be valid. */ 80 95 if (pos + len <= (IBFT_END-1)) { 81 96 ibft_addr = (struct acpi_table_ibft *)virt; 97 + pr_info("iBFT found at 0x%lx.\n", pos); 82 98 goto done; 83 99 } 84 100 } ··· 94 108 */ 95 109 unsigned long __init find_ibft_region(unsigned long *sizep) 96 110 { 97 - #ifdef CONFIG_ACPI 98 - int i; 99 - #endif 100 111 ibft_addr = NULL; 101 - 102 - #ifdef CONFIG_ACPI 103 - for (i = 0; i < ARRAY_SIZE(ibft_signs) && !ibft_addr; i++) 104 - acpi_table_parse(ibft_signs[i].sign, acpi_find_ibft); 105 - #endif /* CONFIG_ACPI */ 106 112 107 113 /* iBFT 1.03 section 1.4.3.1 mandates that UEFI machines will 108 114 * only use ACPI for this */ 109 115 110 - if (!ibft_addr && !efi_enabled) 116 + if (!efi_enabled) 111 117 find_ibft_in_mem(); 112 118 113 119 if (ibft_addr) {
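With the ACPI path moved out, `find_ibft_in_mem()` above keeps only the legacy scan: walk low memory on paragraph (16-byte) boundaries looking for a signature. A sketch of that scan over an ordinary buffer (offsets here are into the test buffer, not real physical memory):

```c
#include <stddef.h>
#include <string.h>

/*
 * Walk a memory window on 16-byte boundaries and return the offset
 * of the first 4-byte signature match, or -1 if none is found --
 * the same shape as the legacy find_ibft_in_mem() scan.
 */
static long scan_for_sig(const unsigned char *mem, size_t size,
                         const char *sig)
{
    size_t pos;

    for (pos = 0; pos + 4 <= size; pos += 16)
        if (memcmp(mem + pos, sig, 4) == 0)
            return (long)pos;
    return -1;
}
```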
+59 -24
drivers/firmware/sigma.c
··· 14 14 #include <linux/module.h> 15 15 #include <linux/sigma.h> 16 16 17 - /* Return: 0==OK, <0==error, =1 ==no more actions */ 18 - static int 19 - process_sigma_action(struct i2c_client *client, struct sigma_firmware *ssfw) 17 + static size_t sigma_action_size(struct sigma_action *sa) 20 18 { 21 - struct sigma_action *sa = (void *)(ssfw->fw->data + ssfw->pos); 19 + size_t payload = 0; 20 + 21 + switch (sa->instr) { 22 + case SIGMA_ACTION_WRITEXBYTES: 23 + case SIGMA_ACTION_WRITESINGLE: 24 + case SIGMA_ACTION_WRITESAFELOAD: 25 + payload = sigma_action_len(sa); 26 + break; 27 + default: 28 + break; 29 + } 30 + 31 + payload = ALIGN(payload, 2); 32 + 33 + return payload + sizeof(struct sigma_action); 34 + } 35 + 36 + /* 37 + * Returns a negative error value in case of an error, 0 if processing of 38 + * the firmware should be stopped after this action, 1 otherwise. 39 + */ 40 + static int 41 + process_sigma_action(struct i2c_client *client, struct sigma_action *sa) 42 + { 22 43 size_t len = sigma_action_len(sa); 23 - int ret = 0; 44 + int ret; 24 45 25 46 pr_debug("%s: instr:%i addr:%#x len:%zu\n", __func__, 26 47 sa->instr, sa->addr, len); ··· 50 29 case SIGMA_ACTION_WRITEXBYTES: 51 30 case SIGMA_ACTION_WRITESINGLE: 52 31 case SIGMA_ACTION_WRITESAFELOAD: 53 - if (ssfw->fw->size < ssfw->pos + len) 54 - return -EINVAL; 55 32 ret = i2c_master_send(client, (void *)&sa->addr, len); 56 33 if (ret < 0) 57 34 return -EINVAL; 58 35 break; 59 - 60 36 case SIGMA_ACTION_DELAY: 61 - ret = 0; 62 37 udelay(len); 63 38 len = 0; 64 39 break; 65 - 66 40 case SIGMA_ACTION_END: 67 - return 1; 68 - 41 + return 0; 69 42 default: 70 43 return -EINVAL; 71 44 } 72 45 73 - /* when arrive here ret=0 or sent data */ 74 - ssfw->pos += sigma_action_size(sa, len); 75 - return ssfw->pos == ssfw->fw->size; 46 + return 1; 76 47 } 77 48 78 49 static int 79 50 process_sigma_actions(struct i2c_client *client, struct sigma_firmware *ssfw) 80 51 { 81 - pr_debug("%s: processing %p\n", __func__, ssfw); 
52 + struct sigma_action *sa; 53 + size_t size; 54 + int ret; 82 55 83 - while (1) { 84 - int ret = process_sigma_action(client, ssfw); 56 + while (ssfw->pos + sizeof(*sa) <= ssfw->fw->size) { 57 + sa = (struct sigma_action *)(ssfw->fw->data + ssfw->pos); 58 + 59 + size = sigma_action_size(sa); 60 + ssfw->pos += size; 61 + if (ssfw->pos > ssfw->fw->size || size == 0) 62 + break; 63 + 64 + ret = process_sigma_action(client, sa); 65 + 85 66 pr_debug("%s: action returned %i\n", __func__, ret); 86 - if (ret == 1) 87 - return 0; 88 - else if (ret) 67 + 68 + if (ret <= 0) 89 69 return ret; 90 70 } 71 + 72 + if (ssfw->pos != ssfw->fw->size) 73 + return -EINVAL; 74 + 75 + return 0; 91 76 } 92 77 93 78 int process_sigma_firmware(struct i2c_client *client, const char *name) ··· 116 89 117 90 /* then verify the header */ 118 91 ret = -EINVAL; 119 - if (fw->size < sizeof(*ssfw_head)) 92 + 93 + /* 94 + * Reject too small or unreasonable large files. The upper limit has been 95 + * chosen a bit arbitrarily, but it should be enough for all practical 96 + * purposes and having the limit makes it easier to avoid integer 97 + * overflows later in the loading process. 98 + */ 99 + if (fw->size < sizeof(*ssfw_head) || fw->size >= 0x4000000) 120 100 goto done; 121 101 122 102 ssfw_head = (void *)fw->data; 123 103 if (memcmp(ssfw_head->magic, SIGMA_MAGIC, ARRAY_SIZE(ssfw_head->magic))) 124 104 goto done; 125 105 126 - crc = crc32(0, fw->data, fw->size); 106 + crc = crc32(0, fw->data + sizeof(*ssfw_head), 107 + fw->size - sizeof(*ssfw_head)); 127 108 pr_debug("%s: crc=%x\n", __func__, crc); 128 - if (crc != ssfw_head->crc) 109 + if (crc != le32_to_cpu(ssfw_head->crc)) 129 110 goto done; 130 111 131 112 ssfw.pos = sizeof(*ssfw_head);
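The key sigma.c fix is that the CRC stored in the firmware header covers only the payload, so the check now hashes `fw->data + sizeof(*ssfw_head)` rather than the whole file. A self-contained sketch of that check, using a standard bit-at-a-time reflected CRC-32 as a stand-in for the kernel's `crc32()` (whose seed/inversion conventions differ slightly):

```c
#include <stddef.h>
#include <stdint.h>

/* Standard reflected CRC-32 (poly 0xEDB88320), one bit at a time. */
static uint32_t crc32_std(uint32_t crc, const uint8_t *p, size_t len)
{
    crc = ~crc;
    while (len--) {
        int i;

        crc ^= *p++;
        for (i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1));
    }
    return ~crc;
}

/*
 * The fixed check: hash only the payload after the header and
 * compare against the CRC field stored in the header (which the
 * kernel code now byte-swaps with le32_to_cpu()).
 */
static int firmware_crc_ok(const uint8_t *fw, size_t size,
                           size_t hdr_len, uint32_t stored_crc)
{
    if (size < hdr_len)
        return 0;
    return crc32_std(0, fw + hdr_len, size - hdr_len) == stored_crc;
}
```

The same hunk also caps `fw->size` at 64 MiB up front, which is what makes the later `ssfw->pos` arithmetic safe against integer overflow.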
+1 -1
drivers/gpio/Makefile
··· 18 18 obj-$(CONFIG_GPIO_EP93XX) += gpio-ep93xx.o 19 19 obj-$(CONFIG_GPIO_IT8761E) += gpio-it8761e.o 20 20 obj-$(CONFIG_GPIO_JANZ_TTL) += gpio-janz-ttl.o 21 - obj-$(CONFIG_MACH_KS8695) += gpio-ks8695.o 21 + obj-$(CONFIG_ARCH_KS8695) += gpio-ks8695.o 22 22 obj-$(CONFIG_GPIO_LANGWELL) += gpio-langwell.o 23 23 obj-$(CONFIG_ARCH_LPC32XX) += gpio-lpc32xx.o 24 24 obj-$(CONFIG_GPIO_MAX730X) += gpio-max730x.o
+8 -13
drivers/gpio/gpio-da9052.c
··· 22 22 #include <linux/mfd/da9052/da9052.h> 23 23 #include <linux/mfd/da9052/reg.h> 24 24 #include <linux/mfd/da9052/pdata.h> 25 - #include <linux/mfd/da9052/gpio.h> 26 25 27 26 #define DA9052_INPUT 1 28 27 #define DA9052_OUTPUT_OPENDRAIN 2 ··· 42 43 #define DA9052_GPIO_MASK_UPPER_NIBBLE 0xF0 43 44 #define DA9052_GPIO_MASK_LOWER_NIBBLE 0x0F 44 45 #define DA9052_GPIO_NIBBLE_SHIFT 4 46 + #define DA9052_IRQ_GPI0 16 47 + #define DA9052_GPIO_ODD_SHIFT 7 48 + #define DA9052_GPIO_EVEN_SHIFT 3 45 49 46 50 struct da9052_gpio { 47 51 struct da9052 *da9052; ··· 106 104 static void da9052_gpio_set(struct gpio_chip *gc, unsigned offset, int value) 107 105 { 108 106 struct da9052_gpio *gpio = to_da9052_gpio(gc); 109 - unsigned char register_value = 0; 110 107 int ret; 111 108 112 109 if (da9052_gpio_port_odd(offset)) { 113 - if (value) { 114 - register_value = DA9052_GPIO_ODD_PORT_MODE; 115 110 ret = da9052_reg_update(gpio->da9052, (offset >> 1) + 116 111 DA9052_GPIO_0_1_REG, 117 112 DA9052_GPIO_ODD_PORT_MODE, 118 - register_value); 113 + value << DA9052_GPIO_ODD_SHIFT); 119 114 if (ret != 0) 120 115 dev_err(gpio->da9052->dev, 121 116 "Failed to updated gpio odd reg,%d", 122 117 ret); 123 - } 124 118 } else { 125 - if (value) { 126 - register_value = DA9052_GPIO_EVEN_PORT_MODE; 127 119 ret = da9052_reg_update(gpio->da9052, (offset >> 1) + 128 120 DA9052_GPIO_0_1_REG, 129 121 DA9052_GPIO_EVEN_PORT_MODE, 130 - register_value); 122 + value << DA9052_GPIO_EVEN_SHIFT); 131 123 if (ret != 0) 132 124 dev_err(gpio->da9052->dev, 133 125 "Failed to updated gpio even reg,%d", 134 126 ret); 135 - } 136 127 } 137 128 } 138 129 ··· 196 201 .direction_input = da9052_gpio_direction_input, 197 202 .direction_output = da9052_gpio_direction_output, 198 203 .to_irq = da9052_gpio_to_irq, 199 - .can_sleep = 1; 200 - .ngpio = 16; 201 - .base = -1; 204 + .can_sleep = 1, 205 + .ngpio = 16, 206 + .base = -1, 202 207 }; 203 208 204 209 static int __devinit da9052_gpio_probe(struct platform_device *pdev)
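The da9052 hunk replaces the "only write when value is set" branches with an unconditional masked update: the requested 0/1 is shifted into the port's mode bit and `da9052_reg_update()` rewrites only the masked bits, so clearing works too. A sketch of that pattern (bit positions are assumptions standing in for the new `DA9052_GPIO_ODD_SHIFT`/`EVEN_SHIFT` defines):

```c
#include <stdint.h>

/*
 * Hypothetical model of da9052_reg_update(): a read-modify-write in
 * which only the bits selected by 'mask' take their value from 'val'.
 */
static uint8_t reg_update(uint8_t old, uint8_t mask, uint8_t val)
{
    return (uint8_t)((old & ~mask) | (val & mask));
}

#define ODD_PORT_MODE  0x80 /* bit 7; assumed to match the header */

/* Shift the 0/1 output value into the odd port's mode bit. */
static uint8_t set_odd_port(uint8_t reg, int value)
{
    return reg_update(reg, ODD_PORT_MODE, (uint8_t)(value << 7));
}
```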
+31 -1
drivers/gpio/gpio-ml-ioh.c
··· 332 332 &chip->reg->regs[chip->ch].imask); 333 333 } 334 334 335 + static void ioh_irq_disable(struct irq_data *d) 336 + { 337 + struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); 338 + struct ioh_gpio *chip = gc->private; 339 + unsigned long flags; 340 + u32 ien; 341 + 342 + spin_lock_irqsave(&chip->spinlock, flags); 343 + ien = ioread32(&chip->reg->regs[chip->ch].ien); 344 + ien &= ~(1 << (d->irq - chip->irq_base)); 345 + iowrite32(ien, &chip->reg->regs[chip->ch].ien); 346 + spin_unlock_irqrestore(&chip->spinlock, flags); 347 + } 348 + 349 + static void ioh_irq_enable(struct irq_data *d) 350 + { 351 + struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); 352 + struct ioh_gpio *chip = gc->private; 353 + unsigned long flags; 354 + u32 ien; 355 + 356 + spin_lock_irqsave(&chip->spinlock, flags); 357 + ien = ioread32(&chip->reg->regs[chip->ch].ien); 358 + ien |= 1 << (d->irq - chip->irq_base); 359 + iowrite32(ien, &chip->reg->regs[chip->ch].ien); 360 + spin_unlock_irqrestore(&chip->spinlock, flags); 361 + } 362 + 335 363 static irqreturn_t ioh_gpio_handler(int irq, void *dev_id) 336 364 { 337 365 struct ioh_gpio *chip = dev_id; ··· 367 339 int i, j; 368 340 int ret = IRQ_NONE; 369 341 370 - for (i = 0; i < 8; i++) { 342 + for (i = 0; i < 8; i++, chip++) { 371 343 reg_val = ioread32(&chip->reg->regs[i].istatus); 372 344 for (j = 0; j < num_ports[i]; j++) { 373 345 if (reg_val & BIT(j)) { ··· 398 370 ct->chip.irq_mask = ioh_irq_mask; 399 371 ct->chip.irq_unmask = ioh_irq_unmask; 400 372 ct->chip.irq_set_type = ioh_irq_type; 373 + ct->chip.irq_disable = ioh_irq_disable; 374 + ct->chip.irq_enable = ioh_irq_enable; 401 375 402 376 irq_setup_generic_chip(gc, IRQ_MSK(num), IRQ_GC_INIT_MASK_CACHE, 403 377 IRQ_NOREQUEST | IRQ_NOPROBE, 0);
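The new `ioh_irq_enable()`/`ioh_irq_disable()` callbacks above are a classic read-modify-write of the interrupt-enable register, flipping the bit for `d->irq - chip->irq_base`. The driver does this under a spinlock with `ioread32`/`iowrite32`; a plain variable stands in for the MMIO register in this sketch:

```c
#include <stdint.h>

/* Stands in for the chip's interrupt-enable (ien) MMIO register. */
static uint32_t ien;

/* Set the enable bit for this IRQ (done under a spinlock in-kernel). */
static void irq_enable(unsigned int irq, unsigned int irq_base)
{
    ien |= 1u << (irq - irq_base);
}

/* Clear the enable bit for this IRQ. */
static void irq_disable(unsigned int irq, unsigned int irq_base)
{
    ien &= ~(1u << (irq - irq_base));
}
```

The spinlock matters in the real driver because two CPUs updating different bits of the same register without it could lose one of the updates.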
+13 -5
drivers/gpio/gpio-mpc8xxx.c
··· 132 132 return 0; 133 133 } 134 134 135 + static int mpc5121_gpio_dir_out(struct gpio_chip *gc, unsigned int gpio, int val) 136 + { 137 + /* GPIO 28..31 are input only on MPC5121 */ 138 + if (gpio >= 28) 139 + return -EINVAL; 140 + 141 + return mpc8xxx_gpio_dir_out(gc, gpio, val); 142 + } 143 + 135 144 static int mpc8xxx_gpio_to_irq(struct gpio_chip *gc, unsigned offset) 136 145 { 137 146 struct of_mm_gpio_chip *mm = to_of_mm_gpio_chip(gc); ··· 349 340 mm_gc->save_regs = mpc8xxx_gpio_save_regs; 350 341 gc->ngpio = MPC8XXX_GPIO_PINS; 351 342 gc->direction_input = mpc8xxx_gpio_dir_in; 352 - gc->direction_output = mpc8xxx_gpio_dir_out; 353 - if (of_device_is_compatible(np, "fsl,mpc8572-gpio")) 354 - gc->get = mpc8572_gpio_get; 355 - else 356 - gc->get = mpc8xxx_gpio_get; 343 + gc->direction_output = of_device_is_compatible(np, "fsl,mpc5121-gpio") ? 344 + mpc5121_gpio_dir_out : mpc8xxx_gpio_dir_out; 345 + gc->get = of_device_is_compatible(np, "fsl,mpc8572-gpio") ? 346 + mpc8572_gpio_get : mpc8xxx_gpio_get; 357 347 gc->set = mpc8xxx_gpio_set; 358 348 gc->to_irq = mpc8xxx_gpio_to_irq; 359 349
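The `mpc5121_gpio_dir_out()` wrapper above simply rejects the input-only pins (GPIO 28..31 on MPC5121) before delegating to the generic handler. A minimal sketch of that guard, with `-EINVAL` modelled as `-22` and a trivial stand-in for `mpc8xxx_gpio_dir_out()`:

```c
/* Stand-in for the generic mpc8xxx_gpio_dir_out(): always succeeds. */
static int generic_dir_out(unsigned int gpio, int val)
{
    (void)gpio;
    (void)val;
    return 0;
}

/* Refuse output mode on the input-only pins, then delegate. */
static int mpc5121_dir_out(unsigned int gpio, int val)
{
    if (gpio >= 28)
        return -22;             /* -EINVAL */
    return generic_dir_out(gpio, val);
}
```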
+2 -2
drivers/gpio/gpio-pca953x.c
··· 546 546 * Translate OpenFirmware node properties into platform_data 547 547 * WARNING: This is DEPRECATED and will be removed eventually! 548 548 */ 549 - void 549 + static void 550 550 pca953x_get_alt_pdata(struct i2c_client *client, int *gpio_base, int *invert) 551 551 { 552 552 struct device_node *node; ··· 574 574 *invert = *val; 575 575 } 576 576 #else 577 - void 577 + static void 578 578 pca953x_get_alt_pdata(struct i2c_client *client, int *gpio_base, int *invert) 579 579 { 580 580 *gpio_base = -1;
-4
drivers/gpio/gpio-pl061.c
··· 238 238 int ret, irq, i; 239 239 static DECLARE_BITMAP(init_irq, NR_IRQS); 240 240 241 - pdata = dev->dev.platform_data; 242 - if (pdata == NULL) 243 - return -ENODEV; 244 - 245 241 chip = kzalloc(sizeof(*chip), GFP_KERNEL); 246 242 if (chip == NULL) 247 243 return -ENOMEM;
+25 -2
drivers/gpu/drm/drm_crtc_helper.c
··· 456 456 EXPORT_SYMBOL(drm_crtc_helper_set_mode); 457 457 458 458 459 + static int 460 + drm_crtc_helper_disable(struct drm_crtc *crtc) 461 + { 462 + struct drm_device *dev = crtc->dev; 463 + struct drm_connector *connector; 464 + struct drm_encoder *encoder; 465 + 466 + /* Decouple all encoders and their attached connectors from this crtc */ 467 + list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) { 468 + if (encoder->crtc != crtc) 469 + continue; 470 + 471 + list_for_each_entry(connector, &dev->mode_config.connector_list, head) { 472 + if (connector->encoder != encoder) 473 + continue; 474 + 475 + connector->encoder = NULL; 476 + } 477 + } 478 + 479 + drm_helper_disable_unused_functions(dev); 480 + return 0; 481 + } 482 + 459 483 /** 460 484 * drm_crtc_helper_set_config - set a new config from userspace 461 485 * @crtc: CRTC to setup ··· 534 510 (int)set->num_connectors, set->x, set->y); 535 511 } else { 536 512 DRM_DEBUG_KMS("[CRTC:%d] [NOFB]\n", set->crtc->base.id); 537 - set->mode = NULL; 538 - set->num_connectors = 0; 513 + return drm_crtc_helper_disable(set->crtc); 539 514 } 540 515 541 516 dev = set->crtc->dev;
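The new `drm_crtc_helper_disable()` above walks every encoder feeding the crtc and detaches any connector still pointing at it, so the subsequent "disable unused functions" pass can actually power things down. A sketch of that decoupling with plain arrays standing in for the kernel's `mode_config` lists (all struct names hypothetical):

```c
#include <stddef.h>

struct crtc { int id; };
struct encoder { struct crtc *crtc; };
struct connector { struct encoder *encoder; };

/*
 * For every encoder driving 'crtc', detach any connector that still
 * references that encoder -- the same decoupling as the nested
 * list_for_each_entry() loops in drm_crtc_helper_disable().
 */
static void crtc_disable(struct crtc *crtc,
                         struct encoder *encs, size_t ne,
                         struct connector *cons, size_t nc)
{
    size_t i, j;

    for (i = 0; i < ne; i++) {
        if (encs[i].crtc != crtc)
            continue;
        for (j = 0; j < nc; j++)
            if (cons[j].encoder == &encs[i])
                cons[j].encoder = NULL;
    }
}
```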
+32 -30
drivers/gpu/drm/exynos/exynos_drm_buf.c
··· 27 27 #include "drm.h" 28 28 29 29 #include "exynos_drm_drv.h" 30 + #include "exynos_drm_gem.h" 30 31 #include "exynos_drm_buf.h" 31 32 32 - static DEFINE_MUTEX(exynos_drm_buf_lock); 33 - 34 33 static int lowlevel_buffer_allocate(struct drm_device *dev, 35 - struct exynos_drm_buf_entry *entry) 34 + struct exynos_drm_gem_buf *buffer) 36 35 { 37 36 DRM_DEBUG_KMS("%s\n", __FILE__); 38 37 39 - entry->vaddr = dma_alloc_writecombine(dev->dev, entry->size, 40 - (dma_addr_t *)&entry->paddr, GFP_KERNEL); 41 - if (!entry->paddr) { 38 + buffer->kvaddr = dma_alloc_writecombine(dev->dev, buffer->size, 39 + &buffer->dma_addr, GFP_KERNEL); 40 + if (!buffer->kvaddr) { 42 41 DRM_ERROR("failed to allocate buffer.\n"); 43 42 return -ENOMEM; 44 43 } 45 44 46 - DRM_DEBUG_KMS("allocated : vaddr(0x%x), paddr(0x%x), size(0x%x)\n", 47 - (unsigned int)entry->vaddr, entry->paddr, entry->size); 45 + DRM_DEBUG_KMS("vaddr(0x%lx), dma_addr(0x%lx), size(0x%lx)\n", 46 + (unsigned long)buffer->kvaddr, 47 + (unsigned long)buffer->dma_addr, 48 + buffer->size); 48 49 49 50 return 0; 50 51 } 51 52 52 53 static void lowlevel_buffer_deallocate(struct drm_device *dev, 53 - struct exynos_drm_buf_entry *entry) 54 + struct exynos_drm_gem_buf *buffer) 54 55 { 55 56 DRM_DEBUG_KMS("%s.\n", __FILE__); 56 57 57 - if (entry->paddr && entry->vaddr && entry->size) 58 - dma_free_writecombine(dev->dev, entry->size, entry->vaddr, 59 - entry->paddr); 58 + if (buffer->dma_addr && buffer->size) 59 + dma_free_writecombine(dev->dev, buffer->size, buffer->kvaddr, 60 + (dma_addr_t)buffer->dma_addr); 60 61 else 61 - DRM_DEBUG_KMS("entry data is null.\n"); 62 + DRM_DEBUG_KMS("buffer data are invalid.\n"); 62 63 } 63 64 64 - struct exynos_drm_buf_entry *exynos_drm_buf_create(struct drm_device *dev, 65 + struct exynos_drm_gem_buf *exynos_drm_buf_create(struct drm_device *dev, 65 66 unsigned int size) 66 67 { 67 - struct exynos_drm_buf_entry *entry; 68 + struct exynos_drm_gem_buf *buffer; 68 69 69 70 DRM_DEBUG_KMS("%s.\n", 
__FILE__); 71 + DRM_DEBUG_KMS("desired size = 0x%x\n", size); 70 72 71 - entry = kzalloc(sizeof(*entry), GFP_KERNEL); 72 - if (!entry) { 73 - DRM_ERROR("failed to allocate exynos_drm_buf_entry.\n"); 73 + buffer = kzalloc(sizeof(*buffer), GFP_KERNEL); 74 + if (!buffer) { 75 + DRM_ERROR("failed to allocate exynos_drm_gem_buf.\n"); 74 76 return ERR_PTR(-ENOMEM); 75 77 } 76 78 77 - entry->size = size; 79 + buffer->size = size; 78 80 79 81 /* 80 82 * allocate memory region with size and set the memory information 81 - * to vaddr and paddr of a entry object. 83 + * to vaddr and dma_addr of a buffer object. 82 84 */ 83 - if (lowlevel_buffer_allocate(dev, entry) < 0) { 84 - kfree(entry); 85 - entry = NULL; 85 + if (lowlevel_buffer_allocate(dev, buffer) < 0) { 86 + kfree(buffer); 87 + buffer = NULL; 86 88 return ERR_PTR(-ENOMEM); 87 89 } 88 90 89 - return entry; 91 + return buffer; 90 92 } 91 93 92 94 void exynos_drm_buf_destroy(struct drm_device *dev, 93 - struct exynos_drm_buf_entry *entry) 95 + struct exynos_drm_gem_buf *buffer) 94 96 { 95 97 DRM_DEBUG_KMS("%s.\n", __FILE__); 96 98 97 - if (!entry) { 98 - DRM_DEBUG_KMS("entry is null.\n"); 99 + if (!buffer) { 100 + DRM_DEBUG_KMS("buffer is null.\n"); 99 101 return; 100 102 } 101 103 102 - lowlevel_buffer_deallocate(dev, entry); 104 + lowlevel_buffer_deallocate(dev, buffer); 103 105 104 - kfree(entry); 105 - entry = NULL; 106 + kfree(buffer); 107 + buffer = NULL; 106 108 } 107 109 108 110 MODULE_AUTHOR("Inki Dae <inki.dae@samsung.com>");
+4 -17
drivers/gpu/drm/exynos/exynos_drm_buf.h
··· 26 26 #ifndef _EXYNOS_DRM_BUF_H_ 27 27 #define _EXYNOS_DRM_BUF_H_ 28 28 29 - /* 30 - * exynos drm buffer entry structure. 31 - * 32 - * @paddr: physical address of allocated memory. 33 - * @vaddr: kernel virtual address of allocated memory. 34 - * @size: size of allocated memory. 35 - */ 36 - struct exynos_drm_buf_entry { 37 - dma_addr_t paddr; 38 - void __iomem *vaddr; 39 - unsigned int size; 40 - }; 41 - 42 29 /* allocate physical memory. */ 43 - struct exynos_drm_buf_entry *exynos_drm_buf_create(struct drm_device *dev, 30 + struct exynos_drm_gem_buf *exynos_drm_buf_create(struct drm_device *dev, 44 31 unsigned int size); 45 32 46 - /* get physical memory information of a drm framebuffer. */ 47 - struct exynos_drm_buf_entry *exynos_drm_fb_get_buf(struct drm_framebuffer *fb); 33 + /* get memory information of a drm framebuffer. */ 34 + struct exynos_drm_gem_buf *exynos_drm_fb_get_buf(struct drm_framebuffer *fb); 48 35 49 36 /* remove allocated physical memory. */ 50 37 void exynos_drm_buf_destroy(struct drm_device *dev, 51 - struct exynos_drm_buf_entry *entry); 38 + struct exynos_drm_gem_buf *buffer); 52 39 53 40 #endif
+56 -22
drivers/gpu/drm/exynos/exynos_drm_connector.c
··· 37 37 38 38 struct exynos_drm_connector { 39 39 struct drm_connector drm_connector; 40 + uint32_t encoder_id; 41 + struct exynos_drm_manager *manager; 40 42 }; 41 43 42 44 /* convert exynos_video_timings to drm_display_mode */ ··· 49 47 DRM_DEBUG_KMS("%s\n", __FILE__); 50 48 51 49 mode->clock = timing->pixclock / 1000; 50 + mode->vrefresh = timing->refresh; 52 51 53 52 mode->hdisplay = timing->xres; 54 53 mode->hsync_start = mode->hdisplay + timing->left_margin; ··· 60 57 mode->vsync_start = mode->vdisplay + timing->upper_margin; 61 58 mode->vsync_end = mode->vsync_start + timing->vsync_len; 62 59 mode->vtotal = mode->vsync_end + timing->lower_margin; 60 + 61 + if (timing->vmode & FB_VMODE_INTERLACED) 62 + mode->flags |= DRM_MODE_FLAG_INTERLACE; 63 + 64 + if (timing->vmode & FB_VMODE_DOUBLE) 65 + mode->flags |= DRM_MODE_FLAG_DBLSCAN; 63 66 } 64 67 65 68 /* convert drm_display_mode to exynos_video_timings */ ··· 78 69 memset(timing, 0, sizeof(*timing)); 79 70 80 71 timing->pixclock = mode->clock * 1000; 81 - timing->refresh = mode->vrefresh; 72 + timing->refresh = drm_mode_vrefresh(mode); 82 73 83 74 timing->xres = mode->hdisplay; 84 75 timing->left_margin = mode->hsync_start - mode->hdisplay; ··· 101 92 102 93 static int exynos_drm_connector_get_modes(struct drm_connector *connector) 103 94 { 104 - struct exynos_drm_manager *manager = 105 - exynos_drm_get_manager(connector->encoder); 106 - struct exynos_drm_display *display = manager->display; 95 + struct exynos_drm_connector *exynos_connector = 96 + to_exynos_connector(connector); 97 + struct exynos_drm_manager *manager = exynos_connector->manager; 98 + struct exynos_drm_display_ops *display_ops = manager->display_ops; 107 99 unsigned int count; 108 100 109 101 DRM_DEBUG_KMS("%s\n", __FILE__); 110 102 111 - if (!display) { 112 - DRM_DEBUG_KMS("display is null.\n"); 103 + if (!display_ops) { 104 + DRM_DEBUG_KMS("display_ops is null.\n"); 113 105 return 0; 114 106 } 115 107 ··· 122 112 * P.S. 
in case of lcd panel, count is always 1 if success 123 113 * because lcd panel has only one mode. 124 114 */ 125 - if (display->get_edid) { 115 + if (display_ops->get_edid) { 126 116 int ret; 127 117 void *edid; 128 118 ··· 132 122 return 0; 133 123 } 134 124 135 - ret = display->get_edid(manager->dev, connector, 125 + ret = display_ops->get_edid(manager->dev, connector, 136 126 edid, MAX_EDID); 137 127 if (ret < 0) { 138 128 DRM_ERROR("failed to get edid data.\n"); ··· 150 140 struct drm_display_mode *mode = drm_mode_create(connector->dev); 151 141 struct fb_videomode *timing; 152 142 153 - if (display->get_timing) 154 - timing = display->get_timing(manager->dev); 143 + if (display_ops->get_timing) 144 + timing = display_ops->get_timing(manager->dev); 155 145 else { 156 146 drm_mode_destroy(connector->dev, mode); 157 147 return 0; ··· 172 162 static int exynos_drm_connector_mode_valid(struct drm_connector *connector, 173 163 struct drm_display_mode *mode) 174 164 { 175 - struct exynos_drm_manager *manager = 176 - exynos_drm_get_manager(connector->encoder); 177 - struct exynos_drm_display *display = manager->display; 165 + struct exynos_drm_connector *exynos_connector = 166 + to_exynos_connector(connector); 167 + struct exynos_drm_manager *manager = exynos_connector->manager; 168 + struct exynos_drm_display_ops *display_ops = manager->display_ops; 178 169 struct fb_videomode timing; 179 170 int ret = MODE_BAD; 180 171 ··· 183 172 184 173 convert_to_video_timing(&timing, mode); 185 174 186 - if (display && display->check_timing) 187 - if (!display->check_timing(manager->dev, (void *)&timing)) 175 + if (display_ops && display_ops->check_timing) 176 + if (!display_ops->check_timing(manager->dev, (void *)&timing)) 188 177 ret = MODE_OK; 189 178 190 179 return ret; ··· 192 181 193 182 struct drm_encoder *exynos_drm_best_encoder(struct drm_connector *connector) 194 183 { 184 + struct drm_device *dev = connector->dev; 185 + struct exynos_drm_connector *exynos_connector = 
186 + to_exynos_connector(connector); 187 + struct drm_mode_object *obj; 188 + struct drm_encoder *encoder; 189 + 195 190 DRM_DEBUG_KMS("%s\n", __FILE__); 196 191 197 - return connector->encoder; 192 + obj = drm_mode_object_find(dev, exynos_connector->encoder_id, 193 + DRM_MODE_OBJECT_ENCODER); 194 + if (!obj) { 195 + DRM_DEBUG_KMS("Unknown ENCODER ID %d\n", 196 + exynos_connector->encoder_id); 197 + return NULL; 198 + } 199 + 200 + encoder = obj_to_encoder(obj); 201 + 202 + return encoder; 198 203 } 199 204 200 205 static struct drm_connector_helper_funcs exynos_connector_helper_funcs = { ··· 223 196 static enum drm_connector_status 224 197 exynos_drm_connector_detect(struct drm_connector *connector, bool force) 225 198 { 226 - struct exynos_drm_manager *manager = 227 - exynos_drm_get_manager(connector->encoder); 228 - struct exynos_drm_display *display = manager->display; 199 + struct exynos_drm_connector *exynos_connector = 200 + to_exynos_connector(connector); 201 + struct exynos_drm_manager *manager = exynos_connector->manager; 202 + struct exynos_drm_display_ops *display_ops = 203 + manager->display_ops; 229 204 enum drm_connector_status status = connector_status_disconnected; 230 205 231 206 DRM_DEBUG_KMS("%s\n", __FILE__); 232 207 233 - if (display && display->is_connected) { 234 - if (display->is_connected(manager->dev)) 208 + if (display_ops && display_ops->is_connected) { 209 + if (display_ops->is_connected(manager->dev)) 235 210 status = connector_status_connected; 236 211 else 237 212 status = connector_status_disconnected; ··· 280 251 281 252 connector = &exynos_connector->drm_connector; 282 253 283 - switch (manager->display->type) { 254 + switch (manager->display_ops->type) { 284 255 case EXYNOS_DISPLAY_TYPE_HDMI: 285 256 type = DRM_MODE_CONNECTOR_HDMIA; 257 + connector->interlace_allowed = true; 258 + connector->polled = DRM_CONNECTOR_POLL_HPD; 286 259 break; 287 260 default: 288 261 type = DRM_MODE_CONNECTOR_Unknown; ··· 298 267 if (err) 299 268 
goto err_connector; 300 269 270 + exynos_connector->encoder_id = encoder->base.id; 271 + exynos_connector->manager = manager; 301 272 connector->encoder = encoder; 273 + 302 274 err = drm_mode_connector_attach_encoder(connector, encoder); 303 275 if (err) { 304 276 DRM_ERROR("failed to attach a connector to a encoder\n");
+39 -37
drivers/gpu/drm/exynos/exynos_drm_crtc.c
··· 29 29 #include "drmP.h" 30 30 #include "drm_crtc_helper.h" 31 31 32 + #include "exynos_drm_crtc.h" 32 33 #include "exynos_drm_drv.h" 33 34 #include "exynos_drm_fb.h" 34 35 #include "exynos_drm_encoder.h" 36 + #include "exynos_drm_gem.h" 35 37 #include "exynos_drm_buf.h" 36 38 37 39 #define to_exynos_crtc(x) container_of(x, struct exynos_drm_crtc,\ 38 40 drm_crtc) 39 - 40 - /* 41 - * Exynos specific crtc postion structure. 42 - * 43 - * @fb_x: offset x on a framebuffer to be displyed 44 - * - the unit is screen coordinates. 45 - * @fb_y: offset y on a framebuffer to be displayed 46 - * - the unit is screen coordinates. 47 - * @crtc_x: offset x on hardware screen. 48 - * @crtc_y: offset y on hardware screen. 49 - * @crtc_w: width of hardware screen. 50 - * @crtc_h: height of hardware screen. 51 - */ 52 - struct exynos_drm_crtc_pos { 53 - unsigned int fb_x; 54 - unsigned int fb_y; 55 - unsigned int crtc_x; 56 - unsigned int crtc_y; 57 - unsigned int crtc_w; 58 - unsigned int crtc_h; 59 - }; 60 41 61 42 /* 62 43 * Exynos specific crtc structure. 
··· 66 85 67 86 exynos_drm_fn_encoder(crtc, overlay, 68 87 exynos_drm_encoder_crtc_mode_set); 69 - exynos_drm_fn_encoder(crtc, NULL, exynos_drm_encoder_crtc_commit); 88 + exynos_drm_fn_encoder(crtc, &exynos_crtc->pipe, 89 + exynos_drm_encoder_crtc_commit); 70 90 } 71 91 72 - static int exynos_drm_overlay_update(struct exynos_drm_overlay *overlay, 73 - struct drm_framebuffer *fb, 74 - struct drm_display_mode *mode, 75 - struct exynos_drm_crtc_pos *pos) 92 + int exynos_drm_overlay_update(struct exynos_drm_overlay *overlay, 93 + struct drm_framebuffer *fb, 94 + struct drm_display_mode *mode, 95 + struct exynos_drm_crtc_pos *pos) 76 96 { 77 - struct exynos_drm_buf_entry *entry; 97 + struct exynos_drm_gem_buf *buffer; 78 98 unsigned int actual_w; 79 99 unsigned int actual_h; 80 100 81 - entry = exynos_drm_fb_get_buf(fb); 82 - if (!entry) { 83 - DRM_LOG_KMS("entry is null.\n"); 101 + buffer = exynos_drm_fb_get_buf(fb); 102 + if (!buffer) { 103 + DRM_LOG_KMS("buffer is null.\n"); 84 104 return -EFAULT; 85 105 } 86 106 87 - overlay->paddr = entry->paddr; 88 - overlay->vaddr = entry->vaddr; 107 + overlay->dma_addr = buffer->dma_addr; 108 + overlay->vaddr = buffer->kvaddr; 89 109 90 - DRM_DEBUG_KMS("vaddr = 0x%lx, paddr = 0x%lx\n", 110 + DRM_DEBUG_KMS("vaddr = 0x%lx, dma_addr = 0x%lx\n", 91 111 (unsigned long)overlay->vaddr, 92 - (unsigned long)overlay->paddr); 112 + (unsigned long)overlay->dma_addr); 93 113 94 114 actual_w = min((mode->hdisplay - pos->crtc_x), pos->crtc_w); 95 115 actual_h = min((mode->vdisplay - pos->crtc_y), pos->crtc_h); ··· 153 171 154 172 static void exynos_drm_crtc_dpms(struct drm_crtc *crtc, int mode) 155 173 { 156 - DRM_DEBUG_KMS("%s\n", __FILE__); 174 + struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc); 157 175 158 - /* TODO */ 176 + DRM_DEBUG_KMS("crtc[%d] mode[%d]\n", crtc->base.id, mode); 177 + 178 + switch (mode) { 179 + case DRM_MODE_DPMS_ON: 180 + exynos_drm_fn_encoder(crtc, &exynos_crtc->pipe, 181 + exynos_drm_encoder_crtc_commit); 
182 + break; 183 + case DRM_MODE_DPMS_STANDBY: 184 + case DRM_MODE_DPMS_SUSPEND: 185 + case DRM_MODE_DPMS_OFF: 186 + /* TODO */ 187 + exynos_drm_fn_encoder(crtc, NULL, 188 + exynos_drm_encoder_crtc_disable); 189 + break; 190 + default: 191 + DRM_DEBUG_KMS("unspecified mode %d\n", mode); 192 + break; 193 + } 159 194 } 160 195 161 196 static void exynos_drm_crtc_prepare(struct drm_crtc *crtc) ··· 184 185 185 186 static void exynos_drm_crtc_commit(struct drm_crtc *crtc) 186 187 { 188 + struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc); 189 + 187 190 DRM_DEBUG_KMS("%s\n", __FILE__); 188 191 189 - /* drm framework doesn't check NULL. */ 192 + exynos_drm_fn_encoder(crtc, &exynos_crtc->pipe, 193 + exynos_drm_encoder_crtc_commit); 190 194 } 191 195 192 196 static bool
+25
drivers/gpu/drm/exynos/exynos_drm_crtc.h
··· 35 35 int exynos_drm_crtc_enable_vblank(struct drm_device *dev, int crtc); 36 36 void exynos_drm_crtc_disable_vblank(struct drm_device *dev, int crtc); 37 37 38 + /* 39 + * Exynos specific crtc postion structure. 40 + * 41 + * @fb_x: offset x on a framebuffer to be displyed 42 + * - the unit is screen coordinates. 43 + * @fb_y: offset y on a framebuffer to be displayed 44 + * - the unit is screen coordinates. 45 + * @crtc_x: offset x on hardware screen. 46 + * @crtc_y: offset y on hardware screen. 47 + * @crtc_w: width of hardware screen. 48 + * @crtc_h: height of hardware screen. 49 + */ 50 + struct exynos_drm_crtc_pos { 51 + unsigned int fb_x; 52 + unsigned int fb_y; 53 + unsigned int crtc_x; 54 + unsigned int crtc_y; 55 + unsigned int crtc_w; 56 + unsigned int crtc_h; 57 + }; 58 + 59 + int exynos_drm_overlay_update(struct exynos_drm_overlay *overlay, 60 + struct drm_framebuffer *fb, 61 + struct drm_display_mode *mode, 62 + struct exynos_drm_crtc_pos *pos); 38 63 #endif
+5
drivers/gpu/drm/exynos/exynos_drm_drv.c
··· 27 27 28 28 #include "drmP.h" 29 29 #include "drm.h" 30 + #include "drm_crtc_helper.h" 30 31 31 32 #include <drm/exynos_drm.h> 32 33 ··· 61 60 dev->dev_private = (void *)private; 62 61 63 62 drm_mode_config_init(dev); 63 + 64 + /* init kms poll for handling hpd */ 65 + drm_kms_helper_poll_init(dev); 64 66 65 67 exynos_drm_mode_config_init(dev); 66 68 ··· 120 116 exynos_drm_fbdev_fini(dev); 121 117 exynos_drm_device_unregister(dev); 122 118 drm_vblank_cleanup(dev); 119 + drm_kms_helper_poll_fini(dev); 123 120 drm_mode_config_cleanup(dev); 124 121 kfree(dev->dev_private); 125 122
+8 -5
drivers/gpu/drm/exynos/exynos_drm_drv.h
··· 29 29 #ifndef _EXYNOS_DRM_DRV_H_ 30 30 #define _EXYNOS_DRM_DRV_H_ 31 31 32 + #include <linux/module.h> 32 33 #include "drm.h" 33 34 34 35 #define MAX_CRTC 2 ··· 80 79 * @scan_flag: interlace or progressive way. 81 80 * (it could be DRM_MODE_FLAG_*) 82 81 * @bpp: pixel size.(in bit) 83 - * @paddr: bus(accessed by dma) physical memory address to this overlay 84 - * and this is physically continuous. 82 + * @dma_addr: bus(accessed by dma) address to the memory region allocated 83 + * for a overlay. 85 84 * @vaddr: virtual memory addresss to this overlay. 86 85 * @default_win: a window to be enabled. 87 86 * @color_key: color key on or off. ··· 109 108 unsigned int scan_flag; 110 109 unsigned int bpp; 111 110 unsigned int pitch; 112 - dma_addr_t paddr; 111 + dma_addr_t dma_addr; 113 112 void __iomem *vaddr; 114 113 115 114 bool default_win; ··· 131 130 * @check_timing: check if timing is valid or not. 132 131 * @power_on: display device on or off. 133 132 */ 134 - struct exynos_drm_display { 133 + struct exynos_drm_display_ops { 135 134 enum exynos_drm_output_type type; 136 135 bool (*is_connected)(struct device *dev); 137 136 int (*get_edid)(struct device *dev, struct drm_connector *connector, ··· 147 146 * @mode_set: convert drm_display_mode to hw specific display mode and 148 147 * would be called by encoder->mode_set(). 149 148 * @commit: set current hw specific display mode to hw. 149 + * @disable: disable hardware specific display mode. 150 150 * @enable_vblank: specific driver callback for enabling vblank interrupt. 151 151 * @disable_vblank: specific driver callback for disabling vblank interrupt. 
152 152 */ 153 153 struct exynos_drm_manager_ops { 154 154 void (*mode_set)(struct device *subdrv_dev, void *mode); 155 155 void (*commit)(struct device *subdrv_dev); 156 + void (*disable)(struct device *subdrv_dev); 156 157 int (*enable_vblank)(struct device *subdrv_dev); 157 158 void (*disable_vblank)(struct device *subdrv_dev); 158 159 }; ··· 181 178 int pipe; 182 179 struct exynos_drm_manager_ops *ops; 183 180 struct exynos_drm_overlay_ops *overlay_ops; 184 - struct exynos_drm_display *display; 181 + struct exynos_drm_display_ops *display_ops; 185 182 }; 186 183 187 184 /*
+72 -11
drivers/gpu/drm/exynos/exynos_drm_encoder.c
··· 53 53 struct drm_device *dev = encoder->dev; 54 54 struct drm_connector *connector; 55 55 struct exynos_drm_manager *manager = exynos_drm_get_manager(encoder); 56 + struct exynos_drm_manager_ops *manager_ops = manager->ops; 56 57 57 58 DRM_DEBUG_KMS("%s, encoder dpms: %d\n", __FILE__, mode); 58 59 60 + switch (mode) { 61 + case DRM_MODE_DPMS_ON: 62 + if (manager_ops && manager_ops->commit) 63 + manager_ops->commit(manager->dev); 64 + break; 65 + case DRM_MODE_DPMS_STANDBY: 66 + case DRM_MODE_DPMS_SUSPEND: 67 + case DRM_MODE_DPMS_OFF: 68 + /* TODO */ 69 + if (manager_ops && manager_ops->disable) 70 + manager_ops->disable(manager->dev); 71 + break; 72 + default: 73 + DRM_ERROR("unspecified mode %d\n", mode); 74 + break; 75 + } 76 + 59 77 list_for_each_entry(connector, &dev->mode_config.connector_list, head) { 60 78 if (connector->encoder == encoder) { 61 - struct exynos_drm_display *display = manager->display; 79 + struct exynos_drm_display_ops *display_ops = 80 + manager->display_ops; 62 81 63 - if (display && display->power_on) 64 - display->power_on(manager->dev, mode); 82 + DRM_DEBUG_KMS("connector[%d] dpms[%d]\n", 83 + connector->base.id, mode); 84 + if (display_ops && display_ops->power_on) 85 + display_ops->power_on(manager->dev, mode); 65 86 } 66 87 } 67 88 } ··· 137 116 { 138 117 struct exynos_drm_manager *manager = exynos_drm_get_manager(encoder); 139 118 struct exynos_drm_manager_ops *manager_ops = manager->ops; 140 - struct exynos_drm_overlay_ops *overlay_ops = manager->overlay_ops; 141 119 142 120 DRM_DEBUG_KMS("%s\n", __FILE__); 143 121 144 122 if (manager_ops && manager_ops->commit) 145 123 manager_ops->commit(manager->dev); 146 - 147 - if (overlay_ops && overlay_ops->commit) 148 - overlay_ops->commit(manager->dev); 149 124 } 150 125 151 126 static struct drm_crtc * ··· 225 208 { 226 209 struct drm_device *dev = crtc->dev; 227 210 struct drm_encoder *encoder; 211 + struct exynos_drm_private *private = dev->dev_private; 212 + struct 
exynos_drm_manager *manager; 228 213 229 214 list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) { 230 - if (encoder->crtc != crtc) 231 - continue; 215 + /* 216 + * if crtc is detached from encoder, check pipe, 217 + * otherwise check crtc attached to encoder 218 + */ 219 + if (!encoder->crtc) { 220 + manager = to_exynos_encoder(encoder)->manager; 221 + if (manager->pipe < 0 || 222 + private->crtc[manager->pipe] != crtc) 223 + continue; 224 + } else { 225 + if (encoder->crtc != crtc) 226 + continue; 227 + } 232 228 233 229 fn(encoder, data); 234 230 } ··· 280 250 struct exynos_drm_manager *manager = 281 251 to_exynos_encoder(encoder)->manager; 282 252 struct exynos_drm_overlay_ops *overlay_ops = manager->overlay_ops; 253 + int crtc = *(int *)data; 283 254 284 - overlay_ops->commit(manager->dev); 255 + DRM_DEBUG_KMS("%s\n", __FILE__); 256 + 257 + /* 258 + * when crtc is detached from encoder, this pipe is used 259 + * to select manager operation 260 + */ 261 + manager->pipe = crtc; 262 + 263 + if (overlay_ops && overlay_ops->commit) 264 + overlay_ops->commit(manager->dev); 285 265 } 286 266 287 267 void exynos_drm_encoder_crtc_mode_set(struct drm_encoder *encoder, void *data) ··· 301 261 struct exynos_drm_overlay_ops *overlay_ops = manager->overlay_ops; 302 262 struct exynos_drm_overlay *overlay = data; 303 263 304 - overlay_ops->mode_set(manager->dev, overlay); 264 + if (overlay_ops && overlay_ops->mode_set) 265 + overlay_ops->mode_set(manager->dev, overlay); 266 + } 267 + 268 + void exynos_drm_encoder_crtc_disable(struct drm_encoder *encoder, void *data) 269 + { 270 + struct exynos_drm_manager *manager = 271 + to_exynos_encoder(encoder)->manager; 272 + struct exynos_drm_overlay_ops *overlay_ops = manager->overlay_ops; 273 + 274 + DRM_DEBUG_KMS("\n"); 275 + 276 + if (overlay_ops && overlay_ops->disable) 277 + overlay_ops->disable(manager->dev); 278 + 279 + /* 280 + * crtc is already detached from encoder and last 281 + * function for detaching is 
properly done, so 282 + * clear pipe from manager to prevent repeated call 283 + */ 284 + if (!encoder->crtc) 285 + manager->pipe = -1; 305 286 } 306 287 307 288 MODULE_AUTHOR("Inki Dae <inki.dae@samsung.com>");
+1
drivers/gpu/drm/exynos/exynos_drm_encoder.h
··· 41 41 void exynos_drm_disable_vblank(struct drm_encoder *encoder, void *data); 42 42 void exynos_drm_encoder_crtc_commit(struct drm_encoder *encoder, void *data); 43 43 void exynos_drm_encoder_crtc_mode_set(struct drm_encoder *encoder, void *data); 44 + void exynos_drm_encoder_crtc_disable(struct drm_encoder *encoder, void *data); 44 45 45 46 #endif
+39 -27
drivers/gpu/drm/exynos/exynos_drm_fb.c
··· 29 29 #include "drmP.h" 30 30 #include "drm_crtc.h" 31 31 #include "drm_crtc_helper.h" 32 + #include "drm_fb_helper.h" 32 33 34 + #include "exynos_drm_drv.h" 33 35 #include "exynos_drm_fb.h" 34 36 #include "exynos_drm_buf.h" 35 37 #include "exynos_drm_gem.h" ··· 43 41 * 44 42 * @fb: drm framebuffer obejct. 45 43 * @exynos_gem_obj: exynos specific gem object containing a gem object. 46 - * @entry: pointer to exynos drm buffer entry object. 47 - * - containing only the information to physically continuous memory 48 - * region allocated at default framebuffer creation. 44 + * @buffer: pointer to exynos_drm_gem_buffer object. 45 + * - contain the memory information to memory region allocated 46 + * at default framebuffer creation. 49 47 */ 50 48 struct exynos_drm_fb { 51 49 struct drm_framebuffer fb; 52 50 struct exynos_drm_gem_obj *exynos_gem_obj; 53 - struct exynos_drm_buf_entry *entry; 51 + struct exynos_drm_gem_buf *buffer; 54 52 }; 55 53 56 54 static void exynos_drm_fb_destroy(struct drm_framebuffer *fb) ··· 65 63 * default framebuffer has no gem object so 66 64 * a buffer of the default framebuffer should be released at here. 67 65 */ 68 - if (!exynos_fb->exynos_gem_obj && exynos_fb->entry) 69 - exynos_drm_buf_destroy(fb->dev, exynos_fb->entry); 66 + if (!exynos_fb->exynos_gem_obj && exynos_fb->buffer) 67 + exynos_drm_buf_destroy(fb->dev, exynos_fb->buffer); 70 68 71 69 kfree(exynos_fb); 72 70 exynos_fb = NULL; ··· 145 143 */ 146 144 if (!mode_cmd->handle) { 147 145 if (!file_priv) { 148 - struct exynos_drm_buf_entry *entry; 146 + struct exynos_drm_gem_buf *buffer; 149 147 150 148 /* 151 149 * in case that file_priv is NULL, it allocates 152 150 * only buffer and this buffer would be used 153 151 * for default framebuffer. 
154 152 */ 155 - entry = exynos_drm_buf_create(dev, size); 156 - if (IS_ERR(entry)) { 157 - ret = PTR_ERR(entry); 153 + buffer = exynos_drm_buf_create(dev, size); 154 + if (IS_ERR(buffer)) { 155 + ret = PTR_ERR(buffer); 158 156 goto err_buffer; 159 157 } 160 158 161 - exynos_fb->entry = entry; 159 + exynos_fb->buffer = buffer; 162 160 163 - DRM_LOG_KMS("default fb: paddr = 0x%lx, size = 0x%x\n", 164 - (unsigned long)entry->paddr, size); 161 + DRM_LOG_KMS("default: dma_addr = 0x%lx, size = 0x%x\n", 162 + (unsigned long)buffer->dma_addr, size); 165 163 166 164 goto out; 167 165 } else { 168 - exynos_gem_obj = exynos_drm_gem_create(file_priv, dev, 169 - size, 170 - &mode_cmd->handle); 166 + exynos_gem_obj = exynos_drm_gem_create(dev, file_priv, 167 + &mode_cmd->handle, 168 + size); 171 169 if (IS_ERR(exynos_gem_obj)) { 172 170 ret = PTR_ERR(exynos_gem_obj); 173 171 goto err_buffer; ··· 191 189 * so that default framebuffer has no its own gem object, 192 190 * only its own buffer object. 193 191 */ 194 - exynos_fb->entry = exynos_gem_obj->entry; 192 + exynos_fb->buffer = exynos_gem_obj->buffer; 195 193 196 - DRM_LOG_KMS("paddr = 0x%lx, size = 0x%x, gem object = 0x%x\n", 197 - (unsigned long)exynos_fb->entry->paddr, size, 194 + DRM_LOG_KMS("dma_addr = 0x%lx, size = 0x%x, gem object = 0x%x\n", 195 + (unsigned long)exynos_fb->buffer->dma_addr, size, 198 196 (unsigned int)&exynos_gem_obj->base); 199 197 200 198 out: ··· 222 220 return exynos_drm_fb_init(file_priv, dev, mode_cmd); 223 221 } 224 222 225 - struct exynos_drm_buf_entry *exynos_drm_fb_get_buf(struct drm_framebuffer *fb) 223 + struct exynos_drm_gem_buf *exynos_drm_fb_get_buf(struct drm_framebuffer *fb) 226 224 { 227 225 struct exynos_drm_fb *exynos_fb = to_exynos_fb(fb); 228 - struct exynos_drm_buf_entry *entry; 226 + struct exynos_drm_gem_buf *buffer; 229 227 230 228 DRM_DEBUG_KMS("%s\n", __FILE__); 231 229 232 - entry = exynos_fb->entry; 233 - if (!entry) 230 + buffer = exynos_fb->buffer; 231 + if (!buffer) 234 
232 return NULL; 235 233 236 - DRM_DEBUG_KMS("vaddr = 0x%lx, paddr = 0x%lx\n", 237 - (unsigned long)entry->vaddr, 238 - (unsigned long)entry->paddr); 234 + DRM_DEBUG_KMS("vaddr = 0x%lx, dma_addr = 0x%lx\n", 235 + (unsigned long)buffer->kvaddr, 236 + (unsigned long)buffer->dma_addr); 239 237 240 - return entry; 238 + return buffer; 239 + } 240 + 241 + static void exynos_drm_output_poll_changed(struct drm_device *dev) 242 + { 243 + struct exynos_drm_private *private = dev->dev_private; 244 + struct drm_fb_helper *fb_helper = private->fb_helper; 245 + 246 + if (fb_helper) 247 + drm_fb_helper_hotplug_event(fb_helper); 241 248 } 242 249 243 250 static struct drm_mode_config_funcs exynos_drm_mode_config_funcs = { 244 251 .fb_create = exynos_drm_fb_create, 252 + .output_poll_changed = exynos_drm_output_poll_changed, 245 253 }; 246 254 247 255 void exynos_drm_mode_config_init(struct drm_device *dev)
+28 -16
drivers/gpu/drm/exynos/exynos_drm_fbdev.c
··· 33 33 34 34 #include "exynos_drm_drv.h" 35 35 #include "exynos_drm_fb.h" 36 + #include "exynos_drm_gem.h" 36 37 #include "exynos_drm_buf.h" 37 38 38 39 #define MAX_CONNECTOR 4 ··· 86 85 }; 87 86 88 87 static int exynos_drm_fbdev_update(struct drm_fb_helper *helper, 89 - struct drm_framebuffer *fb, 90 - unsigned int fb_width, 91 - unsigned int fb_height) 88 + struct drm_framebuffer *fb) 92 89 { 93 90 struct fb_info *fbi = helper->fbdev; 94 91 struct drm_device *dev = helper->dev; 95 92 struct exynos_drm_fbdev *exynos_fb = to_exynos_fbdev(helper); 96 - struct exynos_drm_buf_entry *entry; 97 - unsigned int size = fb_width * fb_height * (fb->bits_per_pixel >> 3); 93 + struct exynos_drm_gem_buf *buffer; 94 + unsigned int size = fb->width * fb->height * (fb->bits_per_pixel >> 3); 98 95 unsigned long offset; 99 96 100 97 DRM_DEBUG_KMS("%s\n", __FILE__); ··· 100 101 exynos_fb->fb = fb; 101 102 102 103 drm_fb_helper_fill_fix(fbi, fb->pitch, fb->depth); 103 - drm_fb_helper_fill_var(fbi, helper, fb_width, fb_height); 104 + drm_fb_helper_fill_var(fbi, helper, fb->width, fb->height); 104 105 105 - entry = exynos_drm_fb_get_buf(fb); 106 - if (!entry) { 107 - DRM_LOG_KMS("entry is null.\n"); 106 + buffer = exynos_drm_fb_get_buf(fb); 107 + if (!buffer) { 108 + DRM_LOG_KMS("buffer is null.\n"); 108 109 return -EFAULT; 109 110 } 110 111 111 112 offset = fbi->var.xoffset * (fb->bits_per_pixel >> 3); 112 113 offset += fbi->var.yoffset * fb->pitch; 113 114 114 - dev->mode_config.fb_base = entry->paddr; 115 - fbi->screen_base = entry->vaddr + offset; 116 - fbi->fix.smem_start = entry->paddr + offset; 115 + dev->mode_config.fb_base = (resource_size_t)buffer->dma_addr; 116 + fbi->screen_base = buffer->kvaddr + offset; 117 + fbi->fix.smem_start = (unsigned long)(buffer->dma_addr + offset); 117 118 fbi->screen_size = size; 118 119 fbi->fix.smem_len = size; 119 120 ··· 170 171 goto out; 171 172 } 172 173 173 - ret = exynos_drm_fbdev_update(helper, helper->fb, sizes->fb_width, 174 - 
sizes->fb_height); 174 + ret = exynos_drm_fbdev_update(helper, helper->fb); 175 175 if (ret < 0) 176 176 fb_dealloc_cmap(&fbi->cmap); 177 177 ··· 233 235 } 234 236 235 237 helper->fb = exynos_fbdev->fb; 236 - return exynos_drm_fbdev_update(helper, helper->fb, sizes->fb_width, 237 - sizes->fb_height); 238 + return exynos_drm_fbdev_update(helper, helper->fb); 238 239 } 239 240 240 241 static int exynos_drm_fbdev_probe(struct drm_fb_helper *helper, ··· 402 405 fb_helper = private->fb_helper; 403 406 404 407 if (fb_helper) { 408 + struct list_head temp_list; 409 + 410 + INIT_LIST_HEAD(&temp_list); 411 + 412 + /* 413 + * fb_helper is reinitialized but kernel fb is reused 414 + * so kernel_fb_list needs to be backed up and restored 415 + */ 416 + if (!list_empty(&fb_helper->kernel_fb_list)) 417 + list_replace_init(&fb_helper->kernel_fb_list, 418 + &temp_list); 419 + 405 420 drm_fb_helper_fini(fb_helper); 406 421 407 422 ret = drm_fb_helper_init(dev, fb_helper, ··· 422 413 DRM_ERROR("failed to initialize drm fb helper\n"); 423 414 return ret; 424 415 } 416 + 417 + if (!list_empty(&temp_list)) 418 + list_replace(&temp_list, &fb_helper->kernel_fb_list); 425 419 426 420 ret = drm_fb_helper_single_add_all_connectors(fb_helper); 427 421 if (ret < 0) {
+53 -18
drivers/gpu/drm/exynos/exynos_drm_fimd.c
··· 64 64 unsigned int fb_width; 65 65 unsigned int fb_height; 66 66 unsigned int bpp; 67 - dma_addr_t paddr; 67 + dma_addr_t dma_addr; 68 68 void __iomem *vaddr; 69 69 unsigned int buf_offsize; 70 70 unsigned int line_size; /* bytes */ ··· 124 124 return 0; 125 125 } 126 126 127 - static struct exynos_drm_display fimd_display = { 127 + static struct exynos_drm_display_ops fimd_display_ops = { 128 128 .type = EXYNOS_DISPLAY_TYPE_LCD, 129 129 .is_connected = fimd_display_is_connected, 130 130 .get_timing = fimd_get_timing, ··· 177 177 writel(val, ctx->regs + VIDCON0); 178 178 } 179 179 180 + static void fimd_disable(struct device *dev) 181 + { 182 + struct fimd_context *ctx = get_fimd_context(dev); 183 + struct exynos_drm_subdrv *subdrv = &ctx->subdrv; 184 + struct drm_device *drm_dev = subdrv->drm_dev; 185 + struct exynos_drm_manager *manager = &subdrv->manager; 186 + u32 val; 187 + 188 + DRM_DEBUG_KMS("%s\n", __FILE__); 189 + 190 + /* fimd dma off */ 191 + val = readl(ctx->regs + VIDCON0); 192 + val &= ~(VIDCON0_ENVID | VIDCON0_ENVID_F); 193 + writel(val, ctx->regs + VIDCON0); 194 + 195 + /* 196 + * if vblank is enabled status with dma off then 197 + * it disables vsync interrupt. 198 + */ 199 + if (drm_dev->vblank_enabled[manager->pipe] && 200 + atomic_read(&drm_dev->vblank_refcount[manager->pipe])) { 201 + drm_vblank_put(drm_dev, manager->pipe); 202 + 203 + /* 204 + * if vblank_disable_allowed is 0 then disable 205 + * vsync interrupt right now else the vsync interrupt 206 + * would be disabled by drm timer once a current process 207 + * gives up ownershop of vblank event. 
208 + */ 209 + if (!drm_dev->vblank_disable_allowed) 210 + drm_vblank_off(drm_dev, manager->pipe); 211 + } 212 + } 213 + 180 214 static int fimd_enable_vblank(struct device *dev) 181 215 { 182 216 struct fimd_context *ctx = get_fimd_context(dev); ··· 254 220 255 221 static struct exynos_drm_manager_ops fimd_manager_ops = { 256 222 .commit = fimd_commit, 223 + .disable = fimd_disable, 257 224 .enable_vblank = fimd_enable_vblank, 258 225 .disable_vblank = fimd_disable_vblank, 259 226 }; ··· 286 251 win_data->ovl_height = overlay->crtc_height; 287 252 win_data->fb_width = overlay->fb_width; 288 253 win_data->fb_height = overlay->fb_height; 289 - win_data->paddr = overlay->paddr + offset; 254 + win_data->dma_addr = overlay->dma_addr + offset; 290 255 win_data->vaddr = overlay->vaddr + offset; 291 256 win_data->bpp = overlay->bpp; 292 257 win_data->buf_offsize = (overlay->fb_width - overlay->crtc_width) * ··· 298 263 DRM_DEBUG_KMS("ovl_width = %d, ovl_height = %d\n", 299 264 win_data->ovl_width, win_data->ovl_height); 300 265 DRM_DEBUG_KMS("paddr = 0x%lx, vaddr = 0x%lx\n", 301 - (unsigned long)win_data->paddr, 266 + (unsigned long)win_data->dma_addr, 302 267 (unsigned long)win_data->vaddr); 303 268 DRM_DEBUG_KMS("fb_width = %d, crtc_width = %d\n", 304 269 overlay->fb_width, overlay->crtc_width); ··· 411 376 writel(val, ctx->regs + SHADOWCON); 412 377 413 378 /* buffer start address */ 414 - val = win_data->paddr; 379 + val = (unsigned long)win_data->dma_addr; 415 380 writel(val, ctx->regs + VIDWx_BUF_START(win, 0)); 416 381 417 382 /* buffer end address */ 418 383 size = win_data->fb_width * win_data->ovl_height * (win_data->bpp >> 3); 419 - val = win_data->paddr + size; 384 + val = (unsigned long)(win_data->dma_addr + size); 420 385 writel(val, ctx->regs + VIDWx_BUF_END(win, 0)); 421 386 422 387 DRM_DEBUG_KMS("start addr = 0x%lx, end addr = 0x%lx, size = 0x%lx\n", 423 - (unsigned long)win_data->paddr, val, size); 388 + (unsigned long)win_data->dma_addr, val, size); 424 
389 DRM_DEBUG_KMS("ovl_width = %d, ovl_height = %d\n", 425 390 win_data->ovl_width, win_data->ovl_height); 426 391 ··· 482 447 static void fimd_win_disable(struct device *dev) 483 448 { 484 449 struct fimd_context *ctx = get_fimd_context(dev); 485 - struct fimd_win_data *win_data; 486 450 int win = ctx->default_win; 487 451 u32 val; 488 452 ··· 489 455 490 456 if (win < 0 || win > WINDOWS_NR) 491 457 return; 492 - 493 - win_data = &ctx->win_data[win]; 494 458 495 459 /* protect windows */ 496 460 val = readl(ctx->regs + SHADOWCON); ··· 560 528 /* VSYNC interrupt */ 561 529 writel(VIDINTCON1_INT_FRAME, ctx->regs + VIDINTCON1); 562 530 531 + /* 532 + * in case that vblank_disable_allowed is 1, it could induce 533 + * the problem that manager->pipe could be -1 because with 534 + * disable callback, vsync interrupt isn't disabled and at this moment, 535 + * vsync interrupt could occur. the vsync interrupt would be disabled 536 + * by timer handler later. 537 + */ 538 + if (manager->pipe == -1) 539 + return IRQ_HANDLED; 540 + 563 541 drm_handle_vblank(drm_dev, manager->pipe); 564 542 fimd_finish_pageflip(drm_dev, manager->pipe); 565 543 ··· 589 547 * drm framework supports only one irq handler. 590 548 */ 591 549 drm_dev->irq_enabled = 1; 592 - 593 - /* 594 - * with vblank_disable_allowed = 1, vblank interrupt will be disabled 595 - * by drm timer once a current process gives up ownership of 596 - * vblank event.(drm_vblank_put function was called) 597 - */ 598 - drm_dev->vblank_disable_allowed = 1; 599 550 600 551 return 0; 601 552 } ··· 766 731 subdrv->manager.pipe = -1; 767 732 subdrv->manager.ops = &fimd_manager_ops; 768 733 subdrv->manager.overlay_ops = &fimd_overlay_ops; 769 - subdrv->manager.display = &fimd_display; 734 + subdrv->manager.display_ops = &fimd_display_ops; 770 735 subdrv->manager.dev = dev; 771 736 772 737 platform_set_drvdata(pdev, ctx);
+52 -37
drivers/gpu/drm/exynos/exynos_drm_gem.c
··· 62 62 return (unsigned int)obj->map_list.hash.key << PAGE_SHIFT; 63 63 } 64 64 65 - struct exynos_drm_gem_obj *exynos_drm_gem_create(struct drm_file *file_priv, 66 - struct drm_device *dev, unsigned int size, 67 - unsigned int *handle) 65 + static struct exynos_drm_gem_obj 66 + *exynos_drm_gem_init(struct drm_device *drm_dev, 67 + struct drm_file *file_priv, unsigned int *handle, 68 + unsigned int size) 68 69 { 69 70 struct exynos_drm_gem_obj *exynos_gem_obj; 70 - struct exynos_drm_buf_entry *entry; 71 71 struct drm_gem_object *obj; 72 72 int ret; 73 - 74 - DRM_DEBUG_KMS("%s\n", __FILE__); 75 - 76 - size = roundup(size, PAGE_SIZE); 77 73 78 74 exynos_gem_obj = kzalloc(sizeof(*exynos_gem_obj), GFP_KERNEL); 79 75 if (!exynos_gem_obj) { ··· 77 81 return ERR_PTR(-ENOMEM); 78 82 } 79 83 80 - /* allocate the new buffer object and memory region. */ 81 - entry = exynos_drm_buf_create(dev, size); 82 - if (!entry) { 83 - kfree(exynos_gem_obj); 84 - return ERR_PTR(-ENOMEM); 85 - } 86 - 87 - exynos_gem_obj->entry = entry; 88 - 89 84 obj = &exynos_gem_obj->base; 90 85 91 - ret = drm_gem_object_init(dev, obj, size); 86 + ret = drm_gem_object_init(drm_dev, obj, size); 92 87 if (ret < 0) { 93 - DRM_ERROR("failed to initailize gem object.\n"); 94 - goto err_obj_init; 88 + DRM_ERROR("failed to initialize gem object.\n"); 89 + ret = -EINVAL; 90 + goto err_object_init; 95 91 } 96 92 97 93 DRM_DEBUG_KMS("created file object = 0x%x\n", (unsigned int)obj->filp); ··· 115 127 err_create_mmap_offset: 116 128 drm_gem_object_release(obj); 117 129 118 - err_obj_init: 119 - exynos_drm_buf_destroy(dev, exynos_gem_obj->entry); 120 - 130 + err_object_init: 121 131 kfree(exynos_gem_obj); 122 132 123 133 return ERR_PTR(ret); 124 134 } 125 135 136 + struct exynos_drm_gem_obj *exynos_drm_gem_create(struct drm_device *dev, 137 + struct drm_file *file_priv, 138 + unsigned int *handle, unsigned long size) 139 + { 140 + 141 + struct exynos_drm_gem_obj *exynos_gem_obj = NULL; 142 + struct 
exynos_drm_gem_buf *buffer; 143 + 144 + size = roundup(size, PAGE_SIZE); 145 + 146 + DRM_DEBUG_KMS("%s: size = 0x%lx\n", __FILE__, size); 147 + 148 + buffer = exynos_drm_buf_create(dev, size); 149 + if (IS_ERR(buffer)) { 150 + return ERR_CAST(buffer); 151 + } 152 + 153 + exynos_gem_obj = exynos_drm_gem_init(dev, file_priv, handle, size); 154 + if (IS_ERR(exynos_gem_obj)) { 155 + exynos_drm_buf_destroy(dev, buffer); 156 + return exynos_gem_obj; 157 + } 158 + 159 + exynos_gem_obj->buffer = buffer; 160 + 161 + return exynos_gem_obj; 162 + } 163 + 126 164 int exynos_drm_gem_create_ioctl(struct drm_device *dev, void *data, 127 - struct drm_file *file_priv) 165 + struct drm_file *file_priv) 128 166 { 129 167 struct drm_exynos_gem_create *args = data; 130 - struct exynos_drm_gem_obj *exynos_gem_obj; 168 + struct exynos_drm_gem_obj *exynos_gem_obj = NULL; 131 169 132 - DRM_DEBUG_KMS("%s : size = 0x%x\n", __FILE__, args->size); 170 + DRM_DEBUG_KMS("%s\n", __FILE__); 133 171 134 - exynos_gem_obj = exynos_drm_gem_create(file_priv, dev, args->size, 135 - &args->handle); 172 + exynos_gem_obj = exynos_drm_gem_create(dev, file_priv, 173 + &args->handle, args->size); 136 174 if (IS_ERR(exynos_gem_obj)) 137 175 return PTR_ERR(exynos_gem_obj); 138 176 ··· 189 175 { 190 176 struct drm_gem_object *obj = filp->private_data; 191 177 struct exynos_drm_gem_obj *exynos_gem_obj = to_exynos_gem_obj(obj); 192 - struct exynos_drm_buf_entry *entry; 178 + struct exynos_drm_gem_buf *buffer; 193 179 unsigned long pfn, vm_size; 194 180 195 181 DRM_DEBUG_KMS("%s\n", __FILE__); ··· 201 187 202 188 vm_size = vma->vm_end - vma->vm_start; 203 189 /* 204 - * a entry contains information to physically continuous memory 190 + * a buffer contains information to physically continuous memory 205 191 * allocated by user request or at framebuffer creation. 206 192 */ 207 - entry = exynos_gem_obj->entry; 193 + buffer = exynos_gem_obj->buffer; 208 194 209 195 /* check if user-requested size is valid. 
*/ 210 - if (vm_size > entry->size) 196 + if (vm_size > buffer->size) 211 197 return -EINVAL; 212 198 213 199 /* 214 200 * get page frame number to physical memory to be mapped 215 201 * to user space. 216 202 */ 217 - pfn = exynos_gem_obj->entry->paddr >> PAGE_SHIFT; 203 + pfn = ((unsigned long)exynos_gem_obj->buffer->dma_addr) >> PAGE_SHIFT; 218 204 219 205 DRM_DEBUG_KMS("pfn = 0x%lx\n", pfn); 220 206 ··· 295 281 296 282 exynos_gem_obj = to_exynos_gem_obj(gem_obj); 297 283 298 - exynos_drm_buf_destroy(gem_obj->dev, exynos_gem_obj->entry); 284 + exynos_drm_buf_destroy(gem_obj->dev, exynos_gem_obj->buffer); 299 285 300 286 kfree(exynos_gem_obj); 301 287 } ··· 316 302 args->pitch = args->width * args->bpp >> 3; 317 303 args->size = args->pitch * args->height; 318 304 319 - exynos_gem_obj = exynos_drm_gem_create(file_priv, dev, args->size, 320 - &args->handle); 305 + exynos_gem_obj = exynos_drm_gem_create(dev, file_priv, &args->handle, 306 + args->size); 321 307 if (IS_ERR(exynos_gem_obj)) 322 308 return PTR_ERR(exynos_gem_obj); 323 309 ··· 374 360 375 361 mutex_lock(&dev->struct_mutex); 376 362 377 - pfn = (exynos_gem_obj->entry->paddr >> PAGE_SHIFT) + page_offset; 363 + pfn = (((unsigned long)exynos_gem_obj->buffer->dma_addr) >> 364 + PAGE_SHIFT) + page_offset; 378 365 379 366 ret = vm_insert_mixed(vma, (unsigned long)vmf->virtual_address, pfn); 380 367
+22 -6
drivers/gpu/drm/exynos/exynos_drm_gem.h
··· 30 30 struct exynos_drm_gem_obj, base) 31 31 32 32 /* 33 + * exynos drm gem buffer structure. 34 + * 35 + * @kvaddr: kernel virtual address to allocated memory region. 36 + * @dma_addr: bus address(accessed by dma) to allocated memory region. 37 + * - this address could be physical address without IOMMU and 38 + * device address with IOMMU. 39 + * @size: size of allocated memory region. 40 + */ 41 + struct exynos_drm_gem_buf { 42 + void __iomem *kvaddr; 43 + dma_addr_t dma_addr; 44 + unsigned long size; 45 + }; 46 + 47 + /* 33 48 * exynos drm buffer structure. 34 49 * 35 50 * @base: a gem object. 36 51 * - a new handle to this gem object would be created 37 52 * by drm_gem_handle_create(). 38 - * @entry: pointer to exynos drm buffer entry object. 39 - * - containing the information to physically 53 + * @buffer: a pointer to exynos_drm_gem_buffer object. 54 + * - contain the information to memory region allocated 55 + * by user request or at framebuffer creation. 40 56 * continuous memory region allocated by user request 41 57 * or at framebuffer creation. 42 58 * ··· 61 45 */ 62 46 struct exynos_drm_gem_obj { 63 47 struct drm_gem_object base; 64 - struct exynos_drm_buf_entry *entry; 48 + struct exynos_drm_gem_buf *buffer; 65 49 }; 66 50 67 51 /* create a new buffer and get a new gem handle. */ 68 - struct exynos_drm_gem_obj *exynos_drm_gem_create(struct drm_file *file_priv, 69 - struct drm_device *dev, unsigned int size, 70 - unsigned int *handle); 52 + struct exynos_drm_gem_obj *exynos_drm_gem_create(struct drm_device *dev, 53 + struct drm_file *file_priv, 54 + unsigned int *handle, unsigned long size); 71 55 72 56 /* 73 57 * request gem object creation and buffer allocation as the size
+1
drivers/gpu/drm/i915/i915_debugfs.c
··· 62 62 const struct intel_device_info *info = INTEL_INFO(dev); 63 63 64 64 seq_printf(m, "gen: %d\n", info->gen); 65 + seq_printf(m, "pch: %d\n", INTEL_PCH_TYPE(dev)); 65 66 #define B(x) seq_printf(m, #x ": %s\n", yesno(info->x)) 66 67 B(is_mobile); 67 68 B(is_i85x);
+10
drivers/gpu/drm/i915/i915_dma.c
··· 1454 1454 1455 1455 diff1 = now - dev_priv->last_time1; 1456 1456 1457 + /* Prevent division-by-zero if we are asking too fast. 1458 + * Also, we don't get interesting results if we are polling 1459 + * faster than once in 10ms, so just return the saved value 1460 + * in such cases. 1461 + */ 1462 + if (diff1 <= 10) 1463 + return dev_priv->chipset_power; 1464 + 1457 1465 count1 = I915_READ(DMIEC); 1458 1466 count2 = I915_READ(DDREC); 1459 1467 count3 = I915_READ(CSIEC); ··· 1491 1483 1492 1484 dev_priv->last_count1 = total_count; 1493 1485 dev_priv->last_time1 = now; 1486 + 1487 + dev_priv->chipset_power = ret; 1494 1488 1495 1489 return ret; 1496 1490 }
+33 -10
drivers/gpu/drm/i915/i915_drv.c
··· 58 58 MODULE_PARM_DESC(powersave, 59 59 "Enable powersavings, fbc, downclocking, etc. (default: true)"); 60 60 61 - unsigned int i915_semaphores __read_mostly = 0; 61 + int i915_semaphores __read_mostly = -1; 62 62 module_param_named(semaphores, i915_semaphores, int, 0600); 63 63 MODULE_PARM_DESC(semaphores, 64 - "Use semaphores for inter-ring sync (default: false)"); 64 + "Use semaphores for inter-ring sync (default: -1 (use per-chip defaults))"); 65 65 66 - unsigned int i915_enable_rc6 __read_mostly = 0; 66 + int i915_enable_rc6 __read_mostly = -1; 67 67 module_param_named(i915_enable_rc6, i915_enable_rc6, int, 0600); 68 68 MODULE_PARM_DESC(i915_enable_rc6, 69 - "Enable power-saving render C-state 6 (default: true)"); 69 + "Enable power-saving render C-state 6 (default: -1 (use per-chip default)"); 70 70 71 71 int i915_enable_fbc __read_mostly = -1; 72 72 module_param_named(i915_enable_fbc, i915_enable_fbc, int, 0600); ··· 328 328 } 329 329 } 330 330 331 - static void __gen6_gt_force_wake_get(struct drm_i915_private *dev_priv) 331 + void __gen6_gt_force_wake_get(struct drm_i915_private *dev_priv) 332 332 { 333 333 int count; 334 334 ··· 344 344 udelay(10); 345 345 } 346 346 347 + void __gen6_gt_force_wake_mt_get(struct drm_i915_private *dev_priv) 348 + { 349 + int count; 350 + 351 + count = 0; 352 + while (count++ < 50 && (I915_READ_NOTRACE(FORCEWAKE_MT_ACK) & 1)) 353 + udelay(10); 354 + 355 + I915_WRITE_NOTRACE(FORCEWAKE_MT, (1<<16) | 1); 356 + POSTING_READ(FORCEWAKE_MT); 357 + 358 + count = 0; 359 + while (count++ < 50 && (I915_READ_NOTRACE(FORCEWAKE_MT_ACK) & 1) == 0) 360 + udelay(10); 361 + } 362 + 347 363 /* 348 364 * Generally this is called implicitly by the register read function. 
However, 349 365 * if some sequence requires the GT to not power down then this function should ··· 372 356 373 357 /* Forcewake is atomic in case we get in here without the lock */ 374 358 if (atomic_add_return(1, &dev_priv->forcewake_count) == 1) 375 - __gen6_gt_force_wake_get(dev_priv); 359 + dev_priv->display.force_wake_get(dev_priv); 376 360 } 377 361 378 - static void __gen6_gt_force_wake_put(struct drm_i915_private *dev_priv) 362 + void __gen6_gt_force_wake_put(struct drm_i915_private *dev_priv) 379 363 { 380 364 I915_WRITE_NOTRACE(FORCEWAKE, 0); 381 365 POSTING_READ(FORCEWAKE); 366 + } 367 + 368 + void __gen6_gt_force_wake_mt_put(struct drm_i915_private *dev_priv) 369 + { 370 + I915_WRITE_NOTRACE(FORCEWAKE_MT, (1<<16) | 0); 371 + POSTING_READ(FORCEWAKE_MT); 382 372 } 383 373 384 374 /* ··· 395 373 WARN_ON(!mutex_is_locked(&dev_priv->dev->struct_mutex)); 396 374 397 375 if (atomic_dec_and_test(&dev_priv->forcewake_count)) 398 - __gen6_gt_force_wake_put(dev_priv); 376 + dev_priv->display.force_wake_put(dev_priv); 399 377 } 400 378 401 379 void __gen6_gt_wait_for_fifo(struct drm_i915_private *dev_priv) ··· 925 903 /* We give fast paths for the really cool registers */ 926 904 #define NEEDS_FORCE_WAKE(dev_priv, reg) \ 927 905 (((dev_priv)->info->gen >= 6) && \ 928 - ((reg) < 0x40000) && \ 929 - ((reg) != FORCEWAKE)) 906 + ((reg) < 0x40000) && \ 907 + ((reg) != FORCEWAKE) && \ 908 + ((reg) != ECOBUS)) 930 909 931 910 #define __i915_read(x, y) \ 932 911 u##x i915_read##x(struct drm_i915_private *dev_priv, u32 reg) { \
+14 -4
drivers/gpu/drm/i915/i915_drv.h
··· 107 107 struct opregion_acpi; 108 108 struct opregion_swsci; 109 109 struct opregion_asle; 110 + struct drm_i915_private; 110 111 111 112 struct intel_opregion { 112 113 struct opregion_header *header; ··· 222 221 struct drm_i915_gem_object *obj); 223 222 int (*update_plane)(struct drm_crtc *crtc, struct drm_framebuffer *fb, 224 223 int x, int y); 224 + void (*force_wake_get)(struct drm_i915_private *dev_priv); 225 + void (*force_wake_put)(struct drm_i915_private *dev_priv); 225 226 /* clock updates for mode set */ 226 227 /* cursor updates */ 227 228 /* render clock increase/decrease */ ··· 713 710 714 711 u64 last_count1; 715 712 unsigned long last_time1; 713 + unsigned long chipset_power; 716 714 u64 last_count2; 717 715 struct timespec last_time2; 718 716 unsigned long gfx_power; ··· 1002 998 extern unsigned int i915_fbpercrtc __always_unused; 1003 999 extern int i915_panel_ignore_lid __read_mostly; 1004 1000 extern unsigned int i915_powersave __read_mostly; 1005 - extern unsigned int i915_semaphores __read_mostly; 1001 + extern int i915_semaphores __read_mostly; 1006 1002 extern unsigned int i915_lvds_downclock __read_mostly; 1007 1003 extern int i915_panel_use_ssc __read_mostly; 1008 1004 extern int i915_vbt_sdvo_panel_type __read_mostly; 1009 - extern unsigned int i915_enable_rc6 __read_mostly; 1005 + extern int i915_enable_rc6 __read_mostly; 1010 1006 extern int i915_enable_fbc __read_mostly; 1011 1007 extern bool i915_enable_hangcheck __read_mostly; 1012 1008 ··· 1312 1308 extern void intel_detect_pch(struct drm_device *dev); 1313 1309 extern int intel_trans_dp_port_sel(struct drm_crtc *crtc); 1314 1310 1311 + extern void __gen6_gt_force_wake_get(struct drm_i915_private *dev_priv); 1312 + extern void __gen6_gt_force_wake_mt_get(struct drm_i915_private *dev_priv); 1313 + extern void __gen6_gt_force_wake_put(struct drm_i915_private *dev_priv); 1314 + extern void __gen6_gt_force_wake_mt_put(struct drm_i915_private *dev_priv); 1315 + 1315 1316 /* overlay 
*/ 1316 1317 #ifdef CONFIG_DEBUG_FS 1317 1318 extern struct intel_overlay_error_state *intel_overlay_capture_error_state(struct drm_device *dev); ··· 1361 1352 /* We give fast paths for the really cool registers */ 1362 1353 #define NEEDS_FORCE_WAKE(dev_priv, reg) \ 1363 1354 (((dev_priv)->info->gen >= 6) && \ 1364 - ((reg) < 0x40000) && \ 1365 - ((reg) != FORCEWAKE)) 1355 + ((reg) < 0x40000) && \ 1356 + ((reg) != FORCEWAKE) && \ 1357 + ((reg) != ECOBUS)) 1366 1358 1367 1359 #define __i915_read(x, y) \ 1368 1360 u##x i915_read##x(struct drm_i915_private *dev_priv, u32 reg);
+18 -1
drivers/gpu/drm/i915/i915_gem_execbuffer.c
··· 32 32 #include "i915_drv.h" 33 33 #include "i915_trace.h" 34 34 #include "intel_drv.h" 35 + #include <linux/dma_remapping.h> 35 36 36 37 struct change_domains { 37 38 uint32_t invalidate_domains; ··· 747 746 return 0; 748 747 } 749 748 749 + static bool 750 + intel_enable_semaphores(struct drm_device *dev) 751 + { 752 + if (INTEL_INFO(dev)->gen < 6) 753 + return 0; 754 + 755 + if (i915_semaphores >= 0) 756 + return i915_semaphores; 757 + 758 + /* Enable semaphores on SNB when IO remapping is off */ 759 + if (INTEL_INFO(dev)->gen == 6) 760 + return !intel_iommu_enabled; 761 + 762 + return 1; 763 + } 764 + 750 765 static int 751 766 i915_gem_execbuffer_sync_rings(struct drm_i915_gem_object *obj, 752 767 struct intel_ring_buffer *to) ··· 775 758 return 0; 776 759 777 760 /* XXX gpu semaphores are implicated in various hard hangs on SNB */ 778 - if (INTEL_INFO(obj->base.dev)->gen < 6 || !i915_semaphores) 761 + if (!intel_enable_semaphores(obj->base.dev)) 779 762 return i915_gem_object_wait_rendering(obj); 780 763 781 764 idx = intel_ring_sync_index(from, to);
+26 -4
drivers/gpu/drm/i915/i915_reg.h
··· 3303 3303 /* or SDVOB */ 3304 3304 #define HDMIB 0xe1140 3305 3305 #define PORT_ENABLE (1 << 31) 3306 - #define TRANSCODER_A (0) 3307 - #define TRANSCODER_B (1 << 30) 3308 - #define TRANSCODER(pipe) ((pipe) << 30) 3309 - #define TRANSCODER_MASK (1 << 30) 3306 + #define TRANSCODER(pipe) ((pipe) << 30) 3307 + #define TRANSCODER_CPT(pipe) ((pipe) << 29) 3308 + #define TRANSCODER_MASK (1 << 30) 3309 + #define TRANSCODER_MASK_CPT (3 << 29) 3310 3310 #define COLOR_FORMAT_8bpc (0) 3311 3311 #define COLOR_FORMAT_12bpc (3 << 26) 3312 3312 #define SDVOB_HOTPLUG_ENABLE (1 << 23) ··· 3447 3447 #define EDP_LINK_TRAIN_800_1200MV_0DB_SNB_B (0x38<<22) 3448 3448 #define EDP_LINK_TRAIN_VOL_EMP_MASK_SNB (0x3f<<22) 3449 3449 3450 + /* IVB */ 3451 + #define EDP_LINK_TRAIN_400MV_0DB_IVB (0x24 <<22) 3452 + #define EDP_LINK_TRAIN_400MV_3_5DB_IVB (0x2a <<22) 3453 + #define EDP_LINK_TRAIN_400MV_6DB_IVB (0x2f <<22) 3454 + #define EDP_LINK_TRAIN_600MV_0DB_IVB (0x30 <<22) 3455 + #define EDP_LINK_TRAIN_600MV_3_5DB_IVB (0x36 <<22) 3456 + #define EDP_LINK_TRAIN_800MV_0DB_IVB (0x38 <<22) 3457 + #define EDP_LINK_TRAIN_800MV_3_5DB_IVB (0x33 <<22) 3458 + 3459 + /* legacy values */ 3460 + #define EDP_LINK_TRAIN_500MV_0DB_IVB (0x00 <<22) 3461 + #define EDP_LINK_TRAIN_1000MV_0DB_IVB (0x20 <<22) 3462 + #define EDP_LINK_TRAIN_500MV_3_5DB_IVB (0x02 <<22) 3463 + #define EDP_LINK_TRAIN_1000MV_3_5DB_IVB (0x22 <<22) 3464 + #define EDP_LINK_TRAIN_1000MV_6DB_IVB (0x23 <<22) 3465 + 3466 + #define EDP_LINK_TRAIN_VOL_EMP_MASK_IVB (0x3f<<22) 3467 + 3450 3468 #define FORCEWAKE 0xA18C 3451 3469 #define FORCEWAKE_ACK 0x130090 3470 + #define FORCEWAKE_MT 0xa188 /* multi-threaded */ 3471 + #define FORCEWAKE_MT_ACK 0x130040 3472 + #define ECOBUS 0xa180 3473 + #define FORCEWAKE_MT_ENABLE (1<<5) 3452 3474 3453 3475 #define GT_FIFO_FREE_ENTRIES 0x120008 3454 3476 #define GT_FIFO_NUM_RESERVED_ENTRIES 20
+79 -10
drivers/gpu/drm/i915/intel_display.c
··· 38 38 #include "i915_drv.h" 39 39 #include "i915_trace.h" 40 40 #include "drm_dp_helper.h" 41 - 42 41 #include "drm_crtc_helper.h" 42 + #include <linux/dma_remapping.h> 43 43 44 44 #define HAS_eDP (intel_pipe_has_type(crtc, INTEL_OUTPUT_EDP)) 45 45 ··· 4670 4670 /** 4671 4671 * intel_choose_pipe_bpp_dither - figure out what color depth the pipe should send 4672 4672 * @crtc: CRTC structure 4673 + * @mode: requested mode 4673 4674 * 4674 4675 * A pipe may be connected to one or more outputs. Based on the depth of the 4675 4676 * attached framebuffer, choose a good color depth to use on the pipe. ··· 4682 4681 * HDMI supports only 8bpc or 12bpc, so clamp to 8bpc with dither for 10bpc 4683 4682 * Displays may support a restricted set as well, check EDID and clamp as 4684 4683 * appropriate. 4684 + * DP may want to dither down to 6bpc to fit larger modes 4685 4685 * 4686 4686 * RETURNS: 4687 4687 * Dithering requirement (i.e. false if display bpc and pipe bpc match, 4688 4688 * true if they don't match). 
4689 4689 */ 4690 4690 static bool intel_choose_pipe_bpp_dither(struct drm_crtc *crtc, 4691 - unsigned int *pipe_bpp) 4691 + unsigned int *pipe_bpp, 4692 + struct drm_display_mode *mode) 4692 4693 { 4693 4694 struct drm_device *dev = crtc->dev; 4694 4695 struct drm_i915_private *dev_priv = dev->dev_private; ··· 4759 4756 display_bpc = 8; 4760 4757 } 4761 4758 } 4759 + } 4760 + 4761 + if (mode->private_flags & INTEL_MODE_DP_FORCE_6BPC) { 4762 + DRM_DEBUG_KMS("Dithering DP to 6bpc\n"); 4763 + display_bpc = 6; 4762 4764 } 4763 4765 4764 4766 /* ··· 5025 5017 pipeconf |= PIPECONF_DOUBLE_WIDE; 5026 5018 else 5027 5019 pipeconf &= ~PIPECONF_DOUBLE_WIDE; 5020 + } 5021 + 5022 + /* default to 8bpc */ 5023 + pipeconf &= ~(PIPECONF_BPP_MASK | PIPECONF_DITHER_EN); 5024 + if (is_dp) { 5025 + if (mode->private_flags & INTEL_MODE_DP_FORCE_6BPC) { 5026 + pipeconf |= PIPECONF_BPP_6 | 5027 + PIPECONF_DITHER_EN | 5028 + PIPECONF_DITHER_TYPE_SP; 5029 + } 5028 5030 } 5029 5031 5030 5032 dpll |= DPLL_VCO_ENABLE; ··· 5498 5480 /* determine panel color depth */ 5499 5481 temp = I915_READ(PIPECONF(pipe)); 5500 5482 temp &= ~PIPE_BPC_MASK; 5501 - dither = intel_choose_pipe_bpp_dither(crtc, &pipe_bpp); 5483 + dither = intel_choose_pipe_bpp_dither(crtc, &pipe_bpp, mode); 5502 5484 switch (pipe_bpp) { 5503 5485 case 18: 5504 5486 temp |= PIPE_6BPC; ··· 7207 7189 work->old_fb_obj = intel_fb->obj; 7208 7190 INIT_WORK(&work->work, intel_unpin_work_fn); 7209 7191 7192 + ret = drm_vblank_get(dev, intel_crtc->pipe); 7193 + if (ret) 7194 + goto free_work; 7195 + 7210 7196 /* We borrow the event spin lock for protecting unpin_work */ 7211 7197 spin_lock_irqsave(&dev->event_lock, flags); 7212 7198 if (intel_crtc->unpin_work) { 7213 7199 spin_unlock_irqrestore(&dev->event_lock, flags); 7214 7200 kfree(work); 7201 + drm_vblank_put(dev, intel_crtc->pipe); 7215 7202 7216 7203 DRM_DEBUG_DRIVER("flip queue: crtc already busy\n"); 7217 7204 return -EBUSY; ··· 7234 7211 drm_gem_object_reference(&obj->base); 
7235 7212 7236 7213 crtc->fb = fb; 7237 - 7238 - ret = drm_vblank_get(dev, intel_crtc->pipe); 7239 - if (ret) 7240 - goto cleanup_objs; 7241 7214 7242 7215 work->pending_flip_obj = obj; 7243 7216 ··· 7257 7238 7258 7239 cleanup_pending: 7259 7240 atomic_sub(1 << intel_crtc->plane, &work->old_fb_obj->pending_flip); 7260 - cleanup_objs: 7261 7241 drm_gem_object_unreference(&work->old_fb_obj->base); 7262 7242 drm_gem_object_unreference(&obj->base); 7263 7243 mutex_unlock(&dev->struct_mutex); ··· 7265 7247 intel_crtc->unpin_work = NULL; 7266 7248 spin_unlock_irqrestore(&dev->event_lock, flags); 7267 7249 7250 + drm_vblank_put(dev, intel_crtc->pipe); 7251 + free_work: 7268 7252 kfree(work); 7269 7253 7270 7254 return ret; ··· 7907 7887 dev_priv->corr = (lcfuse & LCFUSE_HIV_MASK); 7908 7888 } 7909 7889 7890 + static bool intel_enable_rc6(struct drm_device *dev) 7891 + { 7892 + /* 7893 + * Respect the kernel parameter if it is set 7894 + */ 7895 + if (i915_enable_rc6 >= 0) 7896 + return i915_enable_rc6; 7897 + 7898 + /* 7899 + * Disable RC6 on Ironlake 7900 + */ 7901 + if (INTEL_INFO(dev)->gen == 5) 7902 + return 0; 7903 + 7904 + /* 7905 + * Enable rc6 on Sandybridge if DMA remapping is disabled 7906 + */ 7907 + if (INTEL_INFO(dev)->gen == 6) { 7908 + DRM_DEBUG_DRIVER("Sandybridge: intel_iommu_enabled %s -- RC6 %sabled\n", 7909 + intel_iommu_enabled ? "true" : "false", 7910 + !intel_iommu_enabled ? 
"en" : "dis"); 7911 + return !intel_iommu_enabled; 7912 + } 7913 + DRM_DEBUG_DRIVER("RC6 enabled\n"); 7914 + return 1; 7915 + } 7916 + 7910 7917 void gen6_enable_rps(struct drm_i915_private *dev_priv) 7911 7918 { 7912 7919 u32 rp_state_cap = I915_READ(GEN6_RP_STATE_CAP); ··· 7970 7923 I915_WRITE(GEN6_RC6p_THRESHOLD, 100000); 7971 7924 I915_WRITE(GEN6_RC6pp_THRESHOLD, 64000); /* unused */ 7972 7925 7973 - if (i915_enable_rc6) 7926 + if (intel_enable_rc6(dev_priv->dev)) 7974 7927 rc6_mask = GEN6_RC_CTL_RC6p_ENABLE | 7975 7928 GEN6_RC_CTL_RC6_ENABLE; 7976 7929 ··· 8419 8372 /* rc6 disabled by default due to repeated reports of hanging during 8420 8373 * boot and resume. 8421 8374 */ 8422 - if (!i915_enable_rc6) 8375 + if (!intel_enable_rc6(dev)) 8423 8376 return; 8424 8377 8425 8378 mutex_lock(&dev->struct_mutex); ··· 8538 8491 8539 8492 /* For FIFO watermark updates */ 8540 8493 if (HAS_PCH_SPLIT(dev)) { 8494 + dev_priv->display.force_wake_get = __gen6_gt_force_wake_get; 8495 + dev_priv->display.force_wake_put = __gen6_gt_force_wake_put; 8496 + 8497 + /* IVB configs may use multi-threaded forcewake */ 8498 + if (IS_IVYBRIDGE(dev)) { 8499 + u32 ecobus; 8500 + 8501 + mutex_lock(&dev->struct_mutex); 8502 + __gen6_gt_force_wake_mt_get(dev_priv); 8503 + ecobus = I915_READ(ECOBUS); 8504 + __gen6_gt_force_wake_mt_put(dev_priv); 8505 + mutex_unlock(&dev->struct_mutex); 8506 + 8507 + if (ecobus & FORCEWAKE_MT_ENABLE) { 8508 + DRM_DEBUG_KMS("Using MT version of forcewake\n"); 8509 + dev_priv->display.force_wake_get = 8510 + __gen6_gt_force_wake_mt_get; 8511 + dev_priv->display.force_wake_put = 8512 + __gen6_gt_force_wake_mt_put; 8513 + } 8514 + } 8515 + 8541 8516 if (HAS_PCH_IBX(dev)) 8542 8517 dev_priv->display.init_pch_clock_gating = ibx_init_clock_gating; 8543 8518 else if (HAS_PCH_CPT(dev))
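The multi-threaded forcewake hunk above probes the `ECOBUS` register once at init and, when the MT enable bit is set on Ivybridge, swaps in the MT get/put callbacks. A sketch of that probe-then-install-function-pointers pattern, with an illustrative ops struct standing in for the `dev_priv->display` callbacks:

```c
#include <assert.h>
#include <stdbool.h>

#define FORCEWAKE_MT_ENABLE (1 << 5)   /* ECOBUS bit, as in i915_reg.h */

/* Stand-in for the force_wake_get/put callback pair. */
struct fw_ops {
        const char *name;
};

static const struct fw_ops fw_legacy = { "legacy" };
static const struct fw_ops fw_mt     = { "multi-threaded" };

static const struct fw_ops *select_forcewake(bool is_ivybridge,
                                             unsigned int ecobus)
{
        /* Default to the single-threaded FORCEWAKE path... */
        const struct fw_ops *ops = &fw_legacy;

        /* ...but IVB parts whose BIOS enabled MT forcewake must use it. */
        if (is_ivybridge && (ecobus & FORCEWAKE_MT_ENABLE))
                ops = &fw_mt;
        return ops;
}
```

Probing once and caching the result in function pointers keeps the per-register-access hot path free of capability checks.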
+133 -40
drivers/gpu/drm/i915/intel_dp.c
··· 208 208 */ 209 209 210 210 static int 211 - intel_dp_link_required(struct intel_dp *intel_dp, int pixel_clock) 211 + intel_dp_link_required(struct intel_dp *intel_dp, int pixel_clock, int check_bpp) 212 212 { 213 213 struct drm_crtc *crtc = intel_dp->base.base.crtc; 214 214 struct intel_crtc *intel_crtc = to_intel_crtc(crtc); 215 215 int bpp = 24; 216 216 217 - if (intel_crtc) 217 + if (check_bpp) 218 + bpp = check_bpp; 219 + else if (intel_crtc) 218 220 bpp = intel_crtc->bpp; 219 221 220 222 return (pixel_clock * bpp + 9) / 10; ··· 235 233 struct intel_dp *intel_dp = intel_attached_dp(connector); 236 234 int max_link_clock = intel_dp_link_clock(intel_dp_max_link_bw(intel_dp)); 237 235 int max_lanes = intel_dp_max_lane_count(intel_dp); 236 + int max_rate, mode_rate; 238 237 239 238 if (is_edp(intel_dp) && intel_dp->panel_fixed_mode) { 240 239 if (mode->hdisplay > intel_dp->panel_fixed_mode->hdisplay) ··· 245 242 return MODE_PANEL; 246 243 } 247 244 248 - if (intel_dp_link_required(intel_dp, mode->clock) 249 - > intel_dp_max_data_rate(max_link_clock, max_lanes)) 250 - return MODE_CLOCK_HIGH; 245 + mode_rate = intel_dp_link_required(intel_dp, mode->clock, 0); 246 + max_rate = intel_dp_max_data_rate(max_link_clock, max_lanes); 247 + 248 + if (mode_rate > max_rate) { 249 + mode_rate = intel_dp_link_required(intel_dp, 250 + mode->clock, 18); 251 + if (mode_rate > max_rate) 252 + return MODE_CLOCK_HIGH; 253 + else 254 + mode->private_flags |= INTEL_MODE_DP_FORCE_6BPC; 255 + } 251 256 252 257 if (mode->clock < 10000) 253 258 return MODE_CLOCK_LOW; ··· 373 362 * clock divider. 
374 363 */ 375 364 if (is_cpu_edp(intel_dp)) { 376 - if (IS_GEN6(dev)) 377 - aux_clock_divider = 200; /* SNB eDP input clock at 400Mhz */ 365 + if (IS_GEN6(dev) || IS_GEN7(dev)) 366 + aux_clock_divider = 200; /* SNB & IVB eDP input clock at 400Mhz */ 378 367 else 379 368 aux_clock_divider = 225; /* eDP input clock at 450Mhz */ 380 369 } else if (HAS_PCH_SPLIT(dev)) ··· 683 672 int lane_count, clock; 684 673 int max_lane_count = intel_dp_max_lane_count(intel_dp); 685 674 int max_clock = intel_dp_max_link_bw(intel_dp) == DP_LINK_BW_2_7 ? 1 : 0; 675 + int bpp = mode->private_flags & INTEL_MODE_DP_FORCE_6BPC ? 18 : 0; 686 676 static int bws[2] = { DP_LINK_BW_1_62, DP_LINK_BW_2_7 }; 687 677 688 678 if (is_edp(intel_dp) && intel_dp->panel_fixed_mode) { ··· 701 689 for (clock = 0; clock <= max_clock; clock++) { 702 690 int link_avail = intel_dp_max_data_rate(intel_dp_link_clock(bws[clock]), lane_count); 703 691 704 - if (intel_dp_link_required(intel_dp, mode->clock) 692 + if (intel_dp_link_required(intel_dp, mode->clock, bpp) 705 693 <= link_avail) { 706 694 intel_dp->link_bw = bws[clock]; 707 695 intel_dp->lane_count = lane_count; ··· 829 817 } 830 818 831 819 /* 832 - * There are three kinds of DP registers: 820 + * There are four kinds of DP registers: 833 821 * 834 822 * IBX PCH 835 - * CPU 823 + * SNB CPU 824 + * IVB CPU 836 825 * CPT PCH 837 826 * 838 827 * IBX PCH and CPU are the same for almost everything, ··· 886 873 887 874 /* Split out the IBX/CPU vs CPT settings */ 888 875 889 - if (!HAS_PCH_CPT(dev) || is_cpu_edp(intel_dp)) { 876 + if (is_cpu_edp(intel_dp) && IS_GEN7(dev)) { 877 + if (adjusted_mode->flags & DRM_MODE_FLAG_PHSYNC) 878 + intel_dp->DP |= DP_SYNC_HS_HIGH; 879 + if (adjusted_mode->flags & DRM_MODE_FLAG_PVSYNC) 880 + intel_dp->DP |= DP_SYNC_VS_HIGH; 881 + intel_dp->DP |= DP_LINK_TRAIN_OFF_CPT; 882 + 883 + if (intel_dp->link_configuration[1] & DP_LANE_COUNT_ENHANCED_FRAME_EN) 884 + intel_dp->DP |= DP_ENHANCED_FRAMING; 885 + 886 + intel_dp->DP |= 
intel_crtc->pipe << 29; 887 + 888 + /* don't miss out required setting for eDP */ 889 + intel_dp->DP |= DP_PLL_ENABLE; 890 + if (adjusted_mode->clock < 200000) 891 + intel_dp->DP |= DP_PLL_FREQ_160MHZ; 892 + else 893 + intel_dp->DP |= DP_PLL_FREQ_270MHZ; 894 + } else if (!HAS_PCH_CPT(dev) || is_cpu_edp(intel_dp)) { 890 895 intel_dp->DP |= intel_dp->color_range; 891 896 892 897 if (adjusted_mode->flags & DRM_MODE_FLAG_PHSYNC) ··· 1406 1375 * These are source-specific values; current Intel hardware supports 1407 1376 * a maximum voltage of 800mV and a maximum pre-emphasis of 6dB 1408 1377 */ 1409 - #define I830_DP_VOLTAGE_MAX DP_TRAIN_VOLTAGE_SWING_800 1410 - #define I830_DP_VOLTAGE_MAX_CPT DP_TRAIN_VOLTAGE_SWING_1200 1411 1378 1412 1379 static uint8_t 1413 - intel_dp_pre_emphasis_max(uint8_t voltage_swing) 1380 + intel_dp_voltage_max(struct intel_dp *intel_dp) 1414 1381 { 1415 - switch (voltage_swing & DP_TRAIN_VOLTAGE_SWING_MASK) { 1416 - case DP_TRAIN_VOLTAGE_SWING_400: 1417 - return DP_TRAIN_PRE_EMPHASIS_6; 1418 - case DP_TRAIN_VOLTAGE_SWING_600: 1419 - return DP_TRAIN_PRE_EMPHASIS_6; 1420 - case DP_TRAIN_VOLTAGE_SWING_800: 1421 - return DP_TRAIN_PRE_EMPHASIS_3_5; 1422 - case DP_TRAIN_VOLTAGE_SWING_1200: 1423 - default: 1424 - return DP_TRAIN_PRE_EMPHASIS_0; 1382 + struct drm_device *dev = intel_dp->base.base.dev; 1383 + 1384 + if (IS_GEN7(dev) && is_cpu_edp(intel_dp)) 1385 + return DP_TRAIN_VOLTAGE_SWING_800; 1386 + else if (HAS_PCH_CPT(dev) && !is_cpu_edp(intel_dp)) 1387 + return DP_TRAIN_VOLTAGE_SWING_1200; 1388 + else 1389 + return DP_TRAIN_VOLTAGE_SWING_800; 1390 + } 1391 + 1392 + static uint8_t 1393 + intel_dp_pre_emphasis_max(struct intel_dp *intel_dp, uint8_t voltage_swing) 1394 + { 1395 + struct drm_device *dev = intel_dp->base.base.dev; 1396 + 1397 + if (IS_GEN7(dev) && is_cpu_edp(intel_dp)) { 1398 + switch (voltage_swing & DP_TRAIN_VOLTAGE_SWING_MASK) { 1399 + case DP_TRAIN_VOLTAGE_SWING_400: 1400 + return DP_TRAIN_PRE_EMPHASIS_6; 1401 + case 
DP_TRAIN_VOLTAGE_SWING_600: 1402 + case DP_TRAIN_VOLTAGE_SWING_800: 1403 + return DP_TRAIN_PRE_EMPHASIS_3_5; 1404 + default: 1405 + return DP_TRAIN_PRE_EMPHASIS_0; 1406 + } 1407 + } else { 1408 + switch (voltage_swing & DP_TRAIN_VOLTAGE_SWING_MASK) { 1409 + case DP_TRAIN_VOLTAGE_SWING_400: 1410 + return DP_TRAIN_PRE_EMPHASIS_6; 1411 + case DP_TRAIN_VOLTAGE_SWING_600: 1412 + return DP_TRAIN_PRE_EMPHASIS_6; 1413 + case DP_TRAIN_VOLTAGE_SWING_800: 1414 + return DP_TRAIN_PRE_EMPHASIS_3_5; 1415 + case DP_TRAIN_VOLTAGE_SWING_1200: 1416 + default: 1417 + return DP_TRAIN_PRE_EMPHASIS_0; 1418 + } 1425 1419 } 1426 1420 } 1427 1421 1428 1422 static void 1429 1423 intel_get_adjust_train(struct intel_dp *intel_dp, uint8_t link_status[DP_LINK_STATUS_SIZE]) 1430 1424 { 1431 - struct drm_device *dev = intel_dp->base.base.dev; 1432 1425 uint8_t v = 0; 1433 1426 uint8_t p = 0; 1434 1427 int lane; 1435 1428 uint8_t *adjust_request = link_status + (DP_ADJUST_REQUEST_LANE0_1 - DP_LANE0_1_STATUS); 1436 - int voltage_max; 1429 + uint8_t voltage_max; 1430 + uint8_t preemph_max; 1437 1431 1438 1432 for (lane = 0; lane < intel_dp->lane_count; lane++) { 1439 1433 uint8_t this_v = intel_get_adjust_request_voltage(adjust_request, lane); ··· 1470 1414 p = this_p; 1471 1415 } 1472 1416 1473 - if (HAS_PCH_CPT(dev) && !is_cpu_edp(intel_dp)) 1474 - voltage_max = I830_DP_VOLTAGE_MAX_CPT; 1475 - else 1476 - voltage_max = I830_DP_VOLTAGE_MAX; 1417 + voltage_max = intel_dp_voltage_max(intel_dp); 1477 1418 if (v >= voltage_max) 1478 1419 v = voltage_max | DP_TRAIN_MAX_SWING_REACHED; 1479 1420 1480 - if (p >= intel_dp_pre_emphasis_max(v)) 1481 - p = intel_dp_pre_emphasis_max(v) | DP_TRAIN_MAX_PRE_EMPHASIS_REACHED; 1421 + preemph_max = intel_dp_pre_emphasis_max(intel_dp, v); 1422 + if (p >= preemph_max) 1423 + p = preemph_max | DP_TRAIN_MAX_PRE_EMPHASIS_REACHED; 1482 1424 1483 1425 for (lane = 0; lane < 4; lane++) 1484 1426 intel_dp->train_set[lane] = v | p; ··· 1545 1491 DRM_DEBUG_KMS("Unsupported 
voltage swing/pre-emphasis level:" 1546 1492 "0x%x\n", signal_levels); 1547 1493 return EDP_LINK_TRAIN_400_600MV_0DB_SNB_B; 1494 + } 1495 + } 1496 + 1497 + /* Gen7's DP voltage swing and pre-emphasis control */ 1498 + static uint32_t 1499 + intel_gen7_edp_signal_levels(uint8_t train_set) 1500 + { 1501 + int signal_levels = train_set & (DP_TRAIN_VOLTAGE_SWING_MASK | 1502 + DP_TRAIN_PRE_EMPHASIS_MASK); 1503 + switch (signal_levels) { 1504 + case DP_TRAIN_VOLTAGE_SWING_400 | DP_TRAIN_PRE_EMPHASIS_0: 1505 + return EDP_LINK_TRAIN_400MV_0DB_IVB; 1506 + case DP_TRAIN_VOLTAGE_SWING_400 | DP_TRAIN_PRE_EMPHASIS_3_5: 1507 + return EDP_LINK_TRAIN_400MV_3_5DB_IVB; 1508 + case DP_TRAIN_VOLTAGE_SWING_400 | DP_TRAIN_PRE_EMPHASIS_6: 1509 + return EDP_LINK_TRAIN_400MV_6DB_IVB; 1510 + 1511 + case DP_TRAIN_VOLTAGE_SWING_600 | DP_TRAIN_PRE_EMPHASIS_0: 1512 + return EDP_LINK_TRAIN_600MV_0DB_IVB; 1513 + case DP_TRAIN_VOLTAGE_SWING_600 | DP_TRAIN_PRE_EMPHASIS_3_5: 1514 + return EDP_LINK_TRAIN_600MV_3_5DB_IVB; 1515 + 1516 + case DP_TRAIN_VOLTAGE_SWING_800 | DP_TRAIN_PRE_EMPHASIS_0: 1517 + return EDP_LINK_TRAIN_800MV_0DB_IVB; 1518 + case DP_TRAIN_VOLTAGE_SWING_800 | DP_TRAIN_PRE_EMPHASIS_3_5: 1519 + return EDP_LINK_TRAIN_800MV_3_5DB_IVB; 1520 + 1521 + default: 1522 + DRM_DEBUG_KMS("Unsupported voltage swing/pre-emphasis level:" 1523 + "0x%x\n", signal_levels); 1524 + return EDP_LINK_TRAIN_500MV_0DB_IVB; 1548 1525 } 1549 1526 } 1550 1527 ··· 1684 1599 DP_LINK_CONFIGURATION_SIZE); 1685 1600 1686 1601 DP |= DP_PORT_EN; 1687 - if (HAS_PCH_CPT(dev) && !is_cpu_edp(intel_dp)) 1602 + 1603 + if (HAS_PCH_CPT(dev) && (IS_GEN7(dev) || !is_cpu_edp(intel_dp))) 1688 1604 DP &= ~DP_LINK_TRAIN_MASK_CPT; 1689 1605 else 1690 1606 DP &= ~DP_LINK_TRAIN_MASK; ··· 1699 1613 uint8_t link_status[DP_LINK_STATUS_SIZE]; 1700 1614 uint32_t signal_levels; 1701 1615 1702 - if (IS_GEN6(dev) && is_cpu_edp(intel_dp)) { 1616 + 1617 + if (IS_GEN7(dev) && is_cpu_edp(intel_dp)) { 1618 + signal_levels = 
intel_gen7_edp_signal_levels(intel_dp->train_set[0]); 1619 + DP = (DP & ~EDP_LINK_TRAIN_VOL_EMP_MASK_IVB) | signal_levels; 1620 + } else if (IS_GEN6(dev) && is_cpu_edp(intel_dp)) { 1703 1621 signal_levels = intel_gen6_edp_signal_levels(intel_dp->train_set[0]); 1704 1622 DP = (DP & ~EDP_LINK_TRAIN_VOL_EMP_MASK_SNB) | signal_levels; 1705 1623 } else { ··· 1712 1622 DP = (DP & ~(DP_VOLTAGE_MASK|DP_PRE_EMPHASIS_MASK)) | signal_levels; 1713 1623 } 1714 1624 1715 - if (HAS_PCH_CPT(dev) && !is_cpu_edp(intel_dp)) 1625 + if (HAS_PCH_CPT(dev) && (IS_GEN7(dev) || !is_cpu_edp(intel_dp))) 1716 1626 reg = DP | DP_LINK_TRAIN_PAT_1_CPT; 1717 1627 else 1718 1628 reg = DP | DP_LINK_TRAIN_PAT_1; ··· 1793 1703 break; 1794 1704 } 1795 1705 1796 - if (IS_GEN6(dev) && is_cpu_edp(intel_dp)) { 1706 + if (IS_GEN7(dev) && is_cpu_edp(intel_dp)) { 1707 + signal_levels = intel_gen7_edp_signal_levels(intel_dp->train_set[0]); 1708 + DP = (DP & ~EDP_LINK_TRAIN_VOL_EMP_MASK_IVB) | signal_levels; 1709 + } else if (IS_GEN6(dev) && is_cpu_edp(intel_dp)) { 1797 1710 signal_levels = intel_gen6_edp_signal_levels(intel_dp->train_set[0]); 1798 1711 DP = (DP & ~EDP_LINK_TRAIN_VOL_EMP_MASK_SNB) | signal_levels; 1799 1712 } else { ··· 1804 1711 DP = (DP & ~(DP_VOLTAGE_MASK|DP_PRE_EMPHASIS_MASK)) | signal_levels; 1805 1712 } 1806 1713 1807 - if (HAS_PCH_CPT(dev) && !is_cpu_edp(intel_dp)) 1714 + if (HAS_PCH_CPT(dev) && (IS_GEN7(dev) || !is_cpu_edp(intel_dp))) 1808 1715 reg = DP | DP_LINK_TRAIN_PAT_2_CPT; 1809 1716 else 1810 1717 reg = DP | DP_LINK_TRAIN_PAT_2; ··· 1845 1752 ++tries; 1846 1753 } 1847 1754 1848 - if (HAS_PCH_CPT(dev) && !is_cpu_edp(intel_dp)) 1755 + if (HAS_PCH_CPT(dev) && (IS_GEN7(dev) || !is_cpu_edp(intel_dp))) 1849 1756 reg = DP | DP_LINK_TRAIN_OFF_CPT; 1850 1757 else 1851 1758 reg = DP | DP_LINK_TRAIN_OFF; ··· 1875 1782 udelay(100); 1876 1783 } 1877 1784 1878 - if (HAS_PCH_CPT(dev) && !is_cpu_edp(intel_dp)) { 1785 + if (HAS_PCH_CPT(dev) && (IS_GEN7(dev) || !is_cpu_edp(intel_dp))) { 1879 1786 
DP &= ~DP_LINK_TRAIN_MASK_CPT; 1880 1787 I915_WRITE(intel_dp->output_reg, DP | DP_LINK_TRAIN_PAT_IDLE_CPT); 1881 1788 } else { ··· 1887 1794 msleep(17); 1888 1795 1889 1796 if (is_edp(intel_dp)) { 1890 - if (HAS_PCH_CPT(dev) && !is_cpu_edp(intel_dp)) 1797 + if (HAS_PCH_CPT(dev) && (IS_GEN7(dev) || !is_cpu_edp(intel_dp))) 1891 1798 DP |= DP_LINK_TRAIN_OFF_CPT; 1892 1799 else 1893 1800 DP |= DP_LINK_TRAIN_OFF;
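The intel_dp.c changes thread a `check_bpp` argument through `intel_dp_link_required()` so that `intel_dp_mode_valid()` can retry an oversized mode at 18 bpp (6 bpc) and tag it with `INTEL_MODE_DP_FORCE_6BPC` instead of rejecting it outright. The arithmetic — `(pixel_clock * bpp + 9) / 10` is a round-up division — and the fallback flow can be sketched as below; the return codes and flag are simplified stand-ins for the DRM ones:

```c
#include <assert.h>

enum { MODE_OK = 0, MODE_CLOCK_HIGH = 1 };   /* simplified status codes */

/* Required link rate for a mode, as in intel_dp_link_required():
 * pixel_clock (kHz) * bits-per-pixel, rounded up over the 10-bit symbol. */
static int link_required(int pixel_clock, int bpp)
{
        return (pixel_clock * bpp + 9) / 10;
}

/* Returns MODE_OK and sets *force_6bpc when the mode fits only dithered. */
static int dp_mode_valid(int pixel_clock, int max_rate, int *force_6bpc)
{
        *force_6bpc = 0;
        if (link_required(pixel_clock, 24) <= max_rate)
                return MODE_OK;                 /* fits at full 8bpc */
        if (link_required(pixel_clock, 18) <= max_rate) {
                *force_6bpc = 1;                /* dither down to 6bpc */
                return MODE_OK;
        }
        return MODE_CLOCK_HIGH;                 /* doesn't fit at all */
}
```

`intel_dp_mode_fixup()` then reads the flag back (`bpp = ... ? 18 : 0`) so link training budgets for the reduced depth.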
+1
drivers/gpu/drm/i915/intel_drv.h
··· 110 110 /* drm_display_mode->private_flags */ 111 111 #define INTEL_MODE_PIXEL_MULTIPLIER_SHIFT (0x0) 112 112 #define INTEL_MODE_PIXEL_MULTIPLIER_MASK (0xf << INTEL_MODE_PIXEL_MULTIPLIER_SHIFT) 113 + #define INTEL_MODE_DP_FORCE_6BPC (0x10) 113 114 114 115 static inline void 115 116 intel_mode_set_pixel_multiplier(struct drm_display_mode *mode,
+8
drivers/gpu/drm/i915/intel_lvds.c
··· 715 715 DMI_MATCH(DMI_PRODUCT_NAME, "EB1007"), 716 716 }, 717 717 }, 718 + { 719 + .callback = intel_no_lvds_dmi_callback, 720 + .ident = "Asus AT5NM10T-I", 721 + .matches = { 722 + DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK Computer INC."), 723 + DMI_MATCH(DMI_BOARD_NAME, "AT5NM10T-I"), 724 + }, 725 + }, 718 726 719 727 { } /* terminating entry */ 720 728 };
+5 -11
drivers/gpu/drm/i915/intel_panel.c
··· 178 178 if (HAS_PCH_SPLIT(dev)) { 179 179 max >>= 16; 180 180 } else { 181 - if (IS_PINEVIEW(dev)) { 181 + if (INTEL_INFO(dev)->gen < 4) 182 182 max >>= 17; 183 - } else { 183 + else 184 184 max >>= 16; 185 - if (INTEL_INFO(dev)->gen < 4) 186 - max &= ~1; 187 - } 188 185 189 186 if (is_backlight_combination_mode(dev)) 190 187 max *= 0xff; ··· 200 203 val = I915_READ(BLC_PWM_CPU_CTL) & BACKLIGHT_DUTY_CYCLE_MASK; 201 204 } else { 202 205 val = I915_READ(BLC_PWM_CTL) & BACKLIGHT_DUTY_CYCLE_MASK; 203 - if (IS_PINEVIEW(dev)) 206 + if (INTEL_INFO(dev)->gen < 4) 204 207 val >>= 1; 205 208 206 209 if (is_backlight_combination_mode(dev)) { 207 210 u8 lbpc; 208 211 209 - val &= ~1; 210 212 pci_read_config_byte(dev->pdev, PCI_LBPC, &lbpc); 211 213 val *= lbpc; 212 214 } ··· 242 246 } 243 247 244 248 tmp = I915_READ(BLC_PWM_CTL); 245 - if (IS_PINEVIEW(dev)) { 246 - tmp &= ~(BACKLIGHT_DUTY_CYCLE_MASK - 1); 249 + if (INTEL_INFO(dev)->gen < 4) 247 250 level <<= 1; 248 - } else 249 - tmp &= ~BACKLIGHT_DUTY_CYCLE_MASK; 251 + tmp &= ~BACKLIGHT_DUTY_CYCLE_MASK; 250 252 I915_WRITE(BLC_PWM_CTL, tmp | level); 251 253 } 252 254
+26 -10
drivers/gpu/drm/i915/intel_sdvo.c
··· 50 50 #define IS_TMDS(c) (c->output_flag & SDVO_TMDS_MASK) 51 51 #define IS_LVDS(c) (c->output_flag & SDVO_LVDS_MASK) 52 52 #define IS_TV_OR_LVDS(c) (c->output_flag & (SDVO_TV_MASK | SDVO_LVDS_MASK)) 53 + #define IS_DIGITAL(c) (c->output_flag & (SDVO_TMDS_MASK | SDVO_LVDS_MASK)) 53 54 54 55 55 56 static const char *tv_format_names[] = { ··· 1087 1086 } 1088 1087 sdvox |= (9 << 19) | SDVO_BORDER_ENABLE; 1089 1088 } 1090 - if (intel_crtc->pipe == 1) 1091 - sdvox |= SDVO_PIPE_B_SELECT; 1089 + 1090 + if (INTEL_PCH_TYPE(dev) >= PCH_CPT) 1091 + sdvox |= TRANSCODER_CPT(intel_crtc->pipe); 1092 + else 1093 + sdvox |= TRANSCODER(intel_crtc->pipe); 1094 + 1092 1095 if (intel_sdvo->has_hdmi_audio) 1093 1096 sdvox |= SDVO_AUDIO_ENABLE; 1094 1097 ··· 1319 1314 return status; 1320 1315 } 1321 1316 1317 + static bool 1318 + intel_sdvo_connector_matches_edid(struct intel_sdvo_connector *sdvo, 1319 + struct edid *edid) 1320 + { 1321 + bool monitor_is_digital = !!(edid->input & DRM_EDID_INPUT_DIGITAL); 1322 + bool connector_is_digital = !!IS_DIGITAL(sdvo); 1323 + 1324 + DRM_DEBUG_KMS("connector_is_digital? %d, monitor_is_digital? 
%d\n", 1325 + connector_is_digital, monitor_is_digital); 1326 + return connector_is_digital == monitor_is_digital; 1327 + } 1328 + 1322 1329 static enum drm_connector_status 1323 1330 intel_sdvo_detect(struct drm_connector *connector, bool force) 1324 1331 { ··· 1375 1358 if (edid == NULL) 1376 1359 edid = intel_sdvo_get_analog_edid(connector); 1377 1360 if (edid != NULL) { 1378 - if (edid->input & DRM_EDID_INPUT_DIGITAL) 1379 - ret = connector_status_disconnected; 1380 - else 1361 + if (intel_sdvo_connector_matches_edid(intel_sdvo_connector, 1362 + edid)) 1381 1363 ret = connector_status_connected; 1364 + else 1365 + ret = connector_status_disconnected; 1366 + 1382 1367 connector->display_info.raw_edid = NULL; 1383 1368 kfree(edid); 1384 1369 } else ··· 1421 1402 edid = intel_sdvo_get_analog_edid(connector); 1422 1403 1423 1404 if (edid != NULL) { 1424 - struct intel_sdvo_connector *intel_sdvo_connector = to_intel_sdvo_connector(connector); 1425 - bool monitor_is_digital = !!(edid->input & DRM_EDID_INPUT_DIGITAL); 1426 - bool connector_is_digital = !!IS_TMDS(intel_sdvo_connector); 1427 - 1428 - if (connector_is_digital == monitor_is_digital) { 1405 + if (intel_sdvo_connector_matches_edid(to_intel_sdvo_connector(connector), 1406 + edid)) { 1429 1407 drm_mode_connector_update_edid_property(connector, edid); 1430 1408 drm_add_edid_modes(connector, edid); 1431 1409 }
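The refactored `intel_sdvo_connector_matches_edid()` reduces the detect logic to one predicate: report "connected" only when the EDID's digital/analog bit agrees with the connector type (previously the TMDS path and LVDS path each open-coded half of this). A simplified stand-in:

```c
#include <assert.h>
#include <stdbool.h>

#define DRM_EDID_INPUT_DIGITAL (1 << 7)   /* bit 7 of the EDID input byte */

/* Core of intel_sdvo_connector_matches_edid(): a digital connector must
 * see a digital monitor, an analog connector an analog one. The bool
 * connector flag stands in for the IS_DIGITAL(c) output_flag test. */
static bool connector_matches_edid(bool connector_is_digital,
                                   unsigned char edid_input)
{
        bool monitor_is_digital = !!(edid_input & DRM_EDID_INPUT_DIGITAL);

        return connector_is_digital == monitor_is_digital;
}
```

Centralizing the check also lets the detect path and the mode-enumeration path agree, which the two call sites in the hunk above previously did not.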
+45
drivers/gpu/drm/nouveau/nouveau_display.c
··· 369 369 spin_unlock_irqrestore(&dev->event_lock, flags); 370 370 return 0; 371 371 } 372 + 373 + int 374 + nouveau_display_dumb_create(struct drm_file *file_priv, struct drm_device *dev, 375 + struct drm_mode_create_dumb *args) 376 + { 377 + struct nouveau_bo *bo; 378 + int ret; 379 + 380 + args->pitch = roundup(args->width * (args->bpp / 8), 256); 381 + args->size = args->pitch * args->height; 382 + args->size = roundup(args->size, PAGE_SIZE); 383 + 384 + ret = nouveau_gem_new(dev, args->size, 0, TTM_PL_FLAG_VRAM, 0, 0, &bo); 385 + if (ret) 386 + return ret; 387 + 388 + ret = drm_gem_handle_create(file_priv, bo->gem, &args->handle); 389 + drm_gem_object_unreference_unlocked(bo->gem); 390 + return ret; 391 + } 392 + 393 + int 394 + nouveau_display_dumb_destroy(struct drm_file *file_priv, struct drm_device *dev, 395 + uint32_t handle) 396 + { 397 + return drm_gem_handle_delete(file_priv, handle); 398 + } 399 + 400 + int 401 + nouveau_display_dumb_map_offset(struct drm_file *file_priv, 402 + struct drm_device *dev, 403 + uint32_t handle, uint64_t *poffset) 404 + { 405 + struct drm_gem_object *gem; 406 + 407 + gem = drm_gem_object_lookup(dev, file_priv, handle); 408 + if (gem) { 409 + struct nouveau_bo *bo = gem->driver_private; 410 + *poffset = bo->bo.addr_space_offset; 411 + drm_gem_object_unreference_unlocked(gem); 412 + return 0; 413 + } 414 + 415 + return -ENOENT; 416 + }
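`nouveau_display_dumb_create()` above sizes the buffer object with two round-ups: the pitch to 256 bytes (a VRAM alignment requirement) and the total allocation to a whole page. The arithmetic in isolation, assuming 4 KiB pages (the `PAGE_SIZE` here is an assumption, not taken from the patch):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096u   /* assumption: 4 KiB pages, as on x86 */

/* Classic integer round-up to a multiple of 'to'. */
static uint32_t roundup_u32(uint32_t x, uint32_t to)
{
        return ((x + to - 1) / to) * to;
}

/* Pitch/size math from nouveau_display_dumb_create(): the row byte-length
 * is rounded up to 256 bytes, the whole allocation to a page boundary. */
static uint32_t dumb_bo_size(uint32_t width, uint32_t height, uint32_t bpp,
                             uint32_t *pitch)
{
        *pitch = roundup_u32(width * (bpp / 8), 256);
        return roundup_u32(*pitch * height, PAGE_SIZE);
}
```

For a 1024x768 32bpp surface the pitch is already 256-aligned (4096 bytes), so only odd widths or depths pick up padding.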
+4
drivers/gpu/drm/nouveau/nouveau_drv.c
··· 433 433 .gem_open_object = nouveau_gem_object_open, 434 434 .gem_close_object = nouveau_gem_object_close, 435 435 436 + .dumb_create = nouveau_display_dumb_create, 437 + .dumb_map_offset = nouveau_display_dumb_map_offset, 438 + .dumb_destroy = nouveau_display_dumb_destroy, 439 + 436 440 .name = DRIVER_NAME, 437 441 .desc = DRIVER_DESC, 438 442 #ifdef GIT_REVISION
+6
drivers/gpu/drm/nouveau/nouveau_drv.h
··· 1418 1418 struct drm_pending_vblank_event *event); 1419 1419 int nouveau_finish_page_flip(struct nouveau_channel *, 1420 1420 struct nouveau_page_flip_state *); 1421 + int nouveau_display_dumb_create(struct drm_file *, struct drm_device *, 1422 + struct drm_mode_create_dumb *args); 1423 + int nouveau_display_dumb_map_offset(struct drm_file *, struct drm_device *, 1424 + uint32_t handle, uint64_t *offset); 1425 + int nouveau_display_dumb_destroy(struct drm_file *, struct drm_device *, 1426 + uint32_t handle); 1421 1427 1422 1428 /* nv10_gpio.c */ 1423 1429 int nv10_gpio_get(struct drm_device *dev, enum dcb_gpio_tag tag);
+1 -1
drivers/gpu/drm/nouveau/nouveau_object.c
··· 680 680 return ret; 681 681 } 682 682 683 - ret = drm_mm_init(&chan->ramin_heap, base, size); 683 + ret = drm_mm_init(&chan->ramin_heap, base, size - base); 684 684 if (ret) { 685 685 NV_ERROR(dev, "Error creating PRAMIN heap: %d\n", ret); 686 686 nouveau_gpuobj_ref(NULL, &chan->ramin);
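The one-line nouveau_object.c fix changes `drm_mm_init(&chan->ramin_heap, base, size)` to pass `size - base`: `drm_mm_init()` takes a start offset and a *length*, and the caller was handing it an end offset, overstating the heap by `base` bytes. A toy illustration of the (start, size) vs (start, end) mix-up — the helper names are illustrative only:

```c
#include <assert.h>

/* drm_mm_init(mm, start, len) manages the range [start, start + len).
 * Passing an end offset where a length is expected extends the heap
 * past where it should stop, as the buggy variant shows. */
static unsigned long heap_end_buggy(unsigned long base, unsigned long end)
{
        return base + end;           /* drm_mm_init(mm, base, end)        */
}

static unsigned long heap_end_fixed(unsigned long base, unsigned long end)
{
        return base + (end - base);  /* drm_mm_init(mm, base, end - base) */
}
```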
+3
drivers/gpu/drm/nouveau/nouveau_sgdma.c
··· 67 67 pci_unmap_page(dev->pdev, nvbe->pages[nvbe->nr_pages], 68 68 PAGE_SIZE, PCI_DMA_BIDIRECTIONAL); 69 69 } 70 + nvbe->unmap_pages = false; 70 71 } 72 + 73 + nvbe->pages = NULL; 71 74 } 72 75 73 76 static void
+2 -2
drivers/gpu/drm/nouveau/nv50_display.c
··· 616 616 struct drm_nouveau_private *dev_priv = dev->dev_private; 617 617 struct nv50_display *disp = nv50_display(dev); 618 618 u32 unk30 = nv_rd32(dev, 0x610030), mc; 619 - int i, crtc, or, type = OUTPUT_ANY; 619 + int i, crtc, or = 0, type = OUTPUT_ANY; 620 620 621 621 NV_DEBUG_KMS(dev, "0x610030: 0x%08x\n", unk30); 622 622 disp->irq.dcb = NULL; ··· 708 708 struct nv50_display *disp = nv50_display(dev); 709 709 u32 unk30 = nv_rd32(dev, 0x610030), tmp, pclk, script, mc = 0; 710 710 struct dcb_entry *dcb; 711 - int i, crtc, or, type = OUTPUT_ANY; 711 + int i, crtc, or = 0, type = OUTPUT_ANY; 712 712 713 713 NV_DEBUG_KMS(dev, "0x610030: 0x%08x\n", unk30); 714 714 dcb = disp->irq.dcb;
+2
drivers/gpu/drm/nouveau/nvc0_graph.c
··· 381 381 u8 tpnr[GPC_MAX]; 382 382 int i, gpc, tpc; 383 383 384 + nv_wr32(dev, TP_UNIT(0, 0, 0x5c), 1); /* affects TFB offset queries */ 385 + 384 386 /* 385 387 * TP ROP UNKVAL(magic_not_rop_nr) 386 388 * 450: 4/0/0/0 2 3
+1 -1
drivers/gpu/drm/nouveau/nvd0_display.c
··· 780 780 continue; 781 781 782 782 if (nv_partner != nv_encoder && 783 - nv_partner->dcb->or == nv_encoder->or) { 783 + nv_partner->dcb->or == nv_encoder->dcb->or) { 784 784 if (nv_partner->last_dpms == DRM_MODE_DPMS_ON) 785 785 return; 786 786 break;
+33 -2
drivers/gpu/drm/radeon/atombios_crtc.c
··· 1107 1107 return -EINVAL; 1108 1108 } 1109 1109 1110 - if (tiling_flags & RADEON_TILING_MACRO) 1110 + if (tiling_flags & RADEON_TILING_MACRO) { 1111 + if (rdev->family >= CHIP_CAYMAN) 1112 + tmp = rdev->config.cayman.tile_config; 1113 + else 1114 + tmp = rdev->config.evergreen.tile_config; 1115 + 1116 + switch ((tmp & 0xf0) >> 4) { 1117 + case 0: /* 4 banks */ 1118 + fb_format |= EVERGREEN_GRPH_NUM_BANKS(EVERGREEN_ADDR_SURF_4_BANK); 1119 + break; 1120 + case 1: /* 8 banks */ 1121 + default: 1122 + fb_format |= EVERGREEN_GRPH_NUM_BANKS(EVERGREEN_ADDR_SURF_8_BANK); 1123 + break; 1124 + case 2: /* 16 banks */ 1125 + fb_format |= EVERGREEN_GRPH_NUM_BANKS(EVERGREEN_ADDR_SURF_16_BANK); 1126 + break; 1127 + } 1128 + 1129 + switch ((tmp & 0xf000) >> 12) { 1130 + case 0: /* 1KB rows */ 1131 + default: 1132 + fb_format |= EVERGREEN_GRPH_TILE_SPLIT(EVERGREEN_ADDR_SURF_TILE_SPLIT_1KB); 1133 + break; 1134 + case 1: /* 2KB rows */ 1135 + fb_format |= EVERGREEN_GRPH_TILE_SPLIT(EVERGREEN_ADDR_SURF_TILE_SPLIT_2KB); 1136 + break; 1137 + case 2: /* 4KB rows */ 1138 + fb_format |= EVERGREEN_GRPH_TILE_SPLIT(EVERGREEN_ADDR_SURF_TILE_SPLIT_4KB); 1139 + break; 1140 + } 1141 + 1111 1142 fb_format |= EVERGREEN_GRPH_ARRAY_MODE(EVERGREEN_GRPH_ARRAY_2D_TILED_THIN1); 1112 - else if (tiling_flags & RADEON_TILING_MICRO) 1143 + } else if (tiling_flags & RADEON_TILING_MICRO) 1113 1144 fb_format |= EVERGREEN_GRPH_ARRAY_MODE(EVERGREEN_GRPH_ARRAY_1D_TILED_THIN1); 1114 1145 1115 1146 switch (radeon_crtc->crtc_id) {
+6 -1
drivers/gpu/drm/radeon/evergreen.c
··· 82 82 { 83 83 struct radeon_crtc *radeon_crtc = rdev->mode_info.crtcs[crtc_id]; 84 84 u32 tmp = RREG32(EVERGREEN_GRPH_UPDATE + radeon_crtc->crtc_offset); 85 + int i; 85 86 86 87 /* Lock the graphics update lock */ 87 88 tmp |= EVERGREEN_GRPH_UPDATE_LOCK; ··· 100 99 (u32)crtc_base); 101 100 102 101 /* Wait for update_pending to go high. */ 103 - while (!(RREG32(EVERGREEN_GRPH_UPDATE + radeon_crtc->crtc_offset) & EVERGREEN_GRPH_SURFACE_UPDATE_PENDING)); 102 + for (i = 0; i < rdev->usec_timeout; i++) { 103 + if (RREG32(EVERGREEN_GRPH_UPDATE + radeon_crtc->crtc_offset) & EVERGREEN_GRPH_SURFACE_UPDATE_PENDING) 104 + break; 105 + udelay(1); 106 + } 104 107 DRM_DEBUG("Update pending now high. Unlocking vupdate_lock.\n"); 105 108 106 109 /* Unlock the lock, so double-buffering can take place inside vblank */
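The evergreen.c change converts an unbounded `while (!(RREG32(...) & PENDING));` busy-wait into a poll bounded by `rdev->usec_timeout`, so a stuck latch can no longer hang the CPU. A sketch of the bounded-poll shape with an injected register-read callback in place of `RREG32()`; the bit value and helper names are stand-ins:

```c
#include <assert.h>
#include <stdbool.h>

#define GRPH_SURFACE_UPDATE_PENDING (1 << 2)   /* stand-in bit value */

/* Poll at most usec_timeout times for the pending bit; the kernel code
 * inserts udelay(1) between reads, elided here. Returns whether the bit
 * was observed before the timeout, instead of spinning forever. */
static bool wait_update_pending(unsigned int (*read_reg)(void *), void *ctx,
                                int usec_timeout)
{
        int i;

        for (i = 0; i < usec_timeout; i++) {
                if (read_reg(ctx) & GRPH_SURFACE_UPDATE_PENDING)
                        return true;    /* latch observed before timeout */
        }
        return false;                   /* timed out instead of hanging */
}

/* Fake register that asserts the bit after a fixed number of reads. */
static unsigned int fake_reg_read(void *ctx)
{
        int *reads_left = ctx;

        if (*reads_left > 0) {
                (*reads_left)--;
                return 0;
        }
        return GRPH_SURFACE_UPDATE_PENDING;
}
```

The fix deliberately proceeds after the timeout rather than failing the flip: a missed vblank beats a locked-up box.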
+123 -26
drivers/gpu/drm/radeon/evergreen_cs.c
··· 38 38 u32 group_size; 39 39 u32 nbanks; 40 40 u32 npipes; 41 + u32 row_size; 41 42 /* value we track */ 42 43 u32 nsamples; 43 44 u32 cb_color_base_last[12]; ··· 77 76 struct radeon_bo *db_s_read_bo; 78 77 struct radeon_bo *db_s_write_bo; 79 78 }; 79 + 80 + static u32 evergreen_cs_get_aray_mode(u32 tiling_flags) 81 + { 82 + if (tiling_flags & RADEON_TILING_MACRO) 83 + return ARRAY_2D_TILED_THIN1; 84 + else if (tiling_flags & RADEON_TILING_MICRO) 85 + return ARRAY_1D_TILED_THIN1; 86 + else 87 + return ARRAY_LINEAR_GENERAL; 88 + } 89 + 90 + static u32 evergreen_cs_get_num_banks(u32 nbanks) 91 + { 92 + switch (nbanks) { 93 + case 2: 94 + return ADDR_SURF_2_BANK; 95 + case 4: 96 + return ADDR_SURF_4_BANK; 97 + case 8: 98 + default: 99 + return ADDR_SURF_8_BANK; 100 + case 16: 101 + return ADDR_SURF_16_BANK; 102 + } 103 + } 104 + 105 + static u32 evergreen_cs_get_tile_split(u32 row_size) 106 + { 107 + switch (row_size) { 108 + case 1: 109 + default: 110 + return ADDR_SURF_TILE_SPLIT_1KB; 111 + case 2: 112 + return ADDR_SURF_TILE_SPLIT_2KB; 113 + case 4: 114 + return ADDR_SURF_TILE_SPLIT_4KB; 115 + } 116 + } 80 117 81 118 static void evergreen_cs_track_init(struct evergreen_cs_track *track) 82 119 { ··· 529 490 } 530 491 ib[idx] &= ~Z_ARRAY_MODE(0xf); 531 492 track->db_z_info &= ~Z_ARRAY_MODE(0xf); 493 + ib[idx] |= Z_ARRAY_MODE(evergreen_cs_get_aray_mode(reloc->lobj.tiling_flags)); 494 + track->db_z_info |= Z_ARRAY_MODE(evergreen_cs_get_aray_mode(reloc->lobj.tiling_flags)); 532 495 if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) { 533 - ib[idx] |= Z_ARRAY_MODE(ARRAY_2D_TILED_THIN1); 534 - track->db_z_info |= Z_ARRAY_MODE(ARRAY_2D_TILED_THIN1); 535 - } else { 536 - ib[idx] |= Z_ARRAY_MODE(ARRAY_1D_TILED_THIN1); 537 - track->db_z_info |= Z_ARRAY_MODE(ARRAY_1D_TILED_THIN1); 496 + ib[idx] |= DB_NUM_BANKS(evergreen_cs_get_num_banks(track->nbanks)); 497 + ib[idx] |= DB_TILE_SPLIT(evergreen_cs_get_tile_split(track->row_size)); 538 498 } 539 499 } 540 500 break; ··· 656 
618 "0x%04X\n", reg); 657 619 return -EINVAL; 658 620 } 659 - if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) { 660 - ib[idx] |= CB_ARRAY_MODE(ARRAY_2D_TILED_THIN1); 661 - track->cb_color_info[tmp] |= CB_ARRAY_MODE(ARRAY_2D_TILED_THIN1); 662 - } else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO) { 663 - ib[idx] |= CB_ARRAY_MODE(ARRAY_1D_TILED_THIN1); 664 - track->cb_color_info[tmp] |= CB_ARRAY_MODE(ARRAY_1D_TILED_THIN1); 665 - } 621 + ib[idx] |= CB_ARRAY_MODE(evergreen_cs_get_aray_mode(reloc->lobj.tiling_flags)); 622 + track->cb_color_info[tmp] |= CB_ARRAY_MODE(evergreen_cs_get_aray_mode(reloc->lobj.tiling_flags)); 666 623 } 667 624 break; 668 625 case CB_COLOR8_INFO: ··· 673 640 "0x%04X\n", reg); 674 641 return -EINVAL; 675 642 } 676 - if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) { 677 - ib[idx] |= CB_ARRAY_MODE(ARRAY_2D_TILED_THIN1); 678 - track->cb_color_info[tmp] |= CB_ARRAY_MODE(ARRAY_2D_TILED_THIN1); 679 - } else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO) { 680 - ib[idx] |= CB_ARRAY_MODE(ARRAY_1D_TILED_THIN1); 681 - track->cb_color_info[tmp] |= CB_ARRAY_MODE(ARRAY_1D_TILED_THIN1); 682 - } 643 + ib[idx] |= CB_ARRAY_MODE(evergreen_cs_get_aray_mode(reloc->lobj.tiling_flags)); 644 + track->cb_color_info[tmp] |= CB_ARRAY_MODE(evergreen_cs_get_aray_mode(reloc->lobj.tiling_flags)); 683 645 } 684 646 break; 685 647 case CB_COLOR0_PITCH: ··· 729 701 case CB_COLOR9_ATTRIB: 730 702 case CB_COLOR10_ATTRIB: 731 703 case CB_COLOR11_ATTRIB: 704 + r = evergreen_cs_packet_next_reloc(p, &reloc); 705 + if (r) { 706 + dev_warn(p->dev, "bad SET_CONTEXT_REG " 707 + "0x%04X\n", reg); 708 + return -EINVAL; 709 + } 710 + if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) { 711 + ib[idx] |= CB_NUM_BANKS(evergreen_cs_get_num_banks(track->nbanks)); 712 + ib[idx] |= CB_TILE_SPLIT(evergreen_cs_get_tile_split(track->row_size)); 713 + } 732 714 break; 733 715 case CB_COLOR0_DIM: 734 716 case CB_COLOR1_DIM: ··· 1356 1318 } 1357 1319 ib[idx+1+(i*8)+2] += 
(u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff); 1358 1320 if (!p->keep_tiling_flags) { 1359 - if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) 1360 - ib[idx+1+(i*8)+1] |= TEX_ARRAY_MODE(ARRAY_2D_TILED_THIN1); 1361 - else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO) 1362 - ib[idx+1+(i*8)+1] |= TEX_ARRAY_MODE(ARRAY_1D_TILED_THIN1); 1321 + ib[idx+1+(i*8)+1] |= 1322 + TEX_ARRAY_MODE(evergreen_cs_get_aray_mode(reloc->lobj.tiling_flags)); 1323 + if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) { 1324 + ib[idx+1+(i*8)+6] |= 1325 + TEX_TILE_SPLIT(evergreen_cs_get_tile_split(track->row_size)); 1326 + ib[idx+1+(i*8)+7] |= 1327 + TEX_NUM_BANKS(evergreen_cs_get_num_banks(track->nbanks)); 1328 + } 1363 1329 } 1364 1330 texture = reloc->robj; 1365 1331 /* tex mip base */ ··· 1464 1422 { 1465 1423 struct radeon_cs_packet pkt; 1466 1424 struct evergreen_cs_track *track; 1425 + u32 tmp; 1467 1426 int r; 1468 1427 1469 1428 if (p->track == NULL) { ··· 1473 1430 if (track == NULL) 1474 1431 return -ENOMEM; 1475 1432 evergreen_cs_track_init(track); 1476 - track->npipes = p->rdev->config.evergreen.tiling_npipes; 1477 - track->nbanks = p->rdev->config.evergreen.tiling_nbanks; 1478 - track->group_size = p->rdev->config.evergreen.tiling_group_size; 1433 + if (p->rdev->family >= CHIP_CAYMAN) 1434 + tmp = p->rdev->config.cayman.tile_config; 1435 + else 1436 + tmp = p->rdev->config.evergreen.tile_config; 1437 + 1438 + switch (tmp & 0xf) { 1439 + case 0: 1440 + track->npipes = 1; 1441 + break; 1442 + case 1: 1443 + default: 1444 + track->npipes = 2; 1445 + break; 1446 + case 2: 1447 + track->npipes = 4; 1448 + break; 1449 + case 3: 1450 + track->npipes = 8; 1451 + break; 1452 + } 1453 + 1454 + switch ((tmp & 0xf0) >> 4) { 1455 + case 0: 1456 + track->nbanks = 4; 1457 + break; 1458 + case 1: 1459 + default: 1460 + track->nbanks = 8; 1461 + break; 1462 + case 2: 1463 + track->nbanks = 16; 1464 + break; 1465 + } 1466 + 1467 + switch ((tmp & 0xf00) >> 8) { 1468 + case 0: 1469 + 
track->group_size = 256; 1470 + break; 1471 + case 1: 1472 + default: 1473 + track->group_size = 512; 1474 + break; 1475 + } 1476 + 1477 + switch ((tmp & 0xf000) >> 12) { 1478 + case 0: 1479 + track->row_size = 1; 1480 + break; 1481 + case 1: 1482 + default: 1483 + track->row_size = 2; 1484 + break; 1485 + case 2: 1486 + track->row_size = 4; 1487 + break; 1488 + } 1489 + 1479 1490 p->track = track; 1480 1491 } 1481 1492 do {
+29
drivers/gpu/drm/radeon/evergreen_reg.h
··· 42 42 # define EVERGREEN_GRPH_DEPTH_8BPP 0 43 43 # define EVERGREEN_GRPH_DEPTH_16BPP 1 44 44 # define EVERGREEN_GRPH_DEPTH_32BPP 2 45 + # define EVERGREEN_GRPH_NUM_BANKS(x) (((x) & 0x3) << 2) 46 + # define EVERGREEN_ADDR_SURF_2_BANK 0 47 + # define EVERGREEN_ADDR_SURF_4_BANK 1 48 + # define EVERGREEN_ADDR_SURF_8_BANK 2 49 + # define EVERGREEN_ADDR_SURF_16_BANK 3 50 + # define EVERGREEN_GRPH_Z(x) (((x) & 0x3) << 4) 51 + # define EVERGREEN_GRPH_BANK_WIDTH(x) (((x) & 0x3) << 6) 52 + # define EVERGREEN_ADDR_SURF_BANK_WIDTH_1 0 53 + # define EVERGREEN_ADDR_SURF_BANK_WIDTH_2 1 54 + # define EVERGREEN_ADDR_SURF_BANK_WIDTH_4 2 55 + # define EVERGREEN_ADDR_SURF_BANK_WIDTH_8 3 45 56 # define EVERGREEN_GRPH_FORMAT(x) (((x) & 0x7) << 8) 46 57 /* 8 BPP */ 47 58 # define EVERGREEN_GRPH_FORMAT_INDEXED 0 ··· 72 61 # define EVERGREEN_GRPH_FORMAT_8B_BGRA1010102 5 73 62 # define EVERGREEN_GRPH_FORMAT_RGB111110 6 74 63 # define EVERGREEN_GRPH_FORMAT_BGR101111 7 64 + # define EVERGREEN_GRPH_BANK_HEIGHT(x) (((x) & 0x3) << 11) 65 + # define EVERGREEN_ADDR_SURF_BANK_HEIGHT_1 0 66 + # define EVERGREEN_ADDR_SURF_BANK_HEIGHT_2 1 67 + # define EVERGREEN_ADDR_SURF_BANK_HEIGHT_4 2 68 + # define EVERGREEN_ADDR_SURF_BANK_HEIGHT_8 3 69 + # define EVERGREEN_GRPH_TILE_SPLIT(x) (((x) & 0x7) << 13) 70 + # define EVERGREEN_ADDR_SURF_TILE_SPLIT_64B 0 71 + # define EVERGREEN_ADDR_SURF_TILE_SPLIT_128B 1 72 + # define EVERGREEN_ADDR_SURF_TILE_SPLIT_256B 2 73 + # define EVERGREEN_ADDR_SURF_TILE_SPLIT_512B 3 74 + # define EVERGREEN_ADDR_SURF_TILE_SPLIT_1KB 4 75 + # define EVERGREEN_ADDR_SURF_TILE_SPLIT_2KB 5 76 + # define EVERGREEN_ADDR_SURF_TILE_SPLIT_4KB 6 77 + # define EVERGREEN_GRPH_MACRO_TILE_ASPECT(x) (((x) & 0x3) << 18) 78 + # define EVERGREEN_ADDR_SURF_MACRO_TILE_ASPECT_1 0 79 + # define EVERGREEN_ADDR_SURF_MACRO_TILE_ASPECT_2 1 80 + # define EVERGREEN_ADDR_SURF_MACRO_TILE_ASPECT_4 2 81 + # define EVERGREEN_ADDR_SURF_MACRO_TILE_ASPECT_8 3 75 82 # define EVERGREEN_GRPH_ARRAY_MODE(x) (((x) & 0x7) 
<< 20) 76 83 # define EVERGREEN_GRPH_ARRAY_LINEAR_GENERAL 0 77 84 # define EVERGREEN_GRPH_ARRAY_LINEAR_ALIGNED 1
+31
drivers/gpu/drm/radeon/evergreend.h
··· 899 899 #define DB_HTILE_DATA_BASE 0x28014 900 900 #define DB_Z_INFO 0x28040 901 901 # define Z_ARRAY_MODE(x) ((x) << 4) 902 + # define DB_TILE_SPLIT(x) (((x) & 0x7) << 8) 903 + # define DB_NUM_BANKS(x) (((x) & 0x3) << 12) 904 + # define DB_BANK_WIDTH(x) (((x) & 0x3) << 16) 905 + # define DB_BANK_HEIGHT(x) (((x) & 0x3) << 20) 902 906 #define DB_STENCIL_INFO 0x28044 903 907 #define DB_Z_READ_BASE 0x28048 904 908 #define DB_STENCIL_READ_BASE 0x2804c ··· 955 951 # define CB_SF_EXPORT_FULL 0 956 952 # define CB_SF_EXPORT_NORM 1 957 953 #define CB_COLOR0_ATTRIB 0x28c74 954 + # define CB_TILE_SPLIT(x) (((x) & 0x7) << 5) 955 + # define ADDR_SURF_TILE_SPLIT_64B 0 956 + # define ADDR_SURF_TILE_SPLIT_128B 1 957 + # define ADDR_SURF_TILE_SPLIT_256B 2 958 + # define ADDR_SURF_TILE_SPLIT_512B 3 959 + # define ADDR_SURF_TILE_SPLIT_1KB 4 960 + # define ADDR_SURF_TILE_SPLIT_2KB 5 961 + # define ADDR_SURF_TILE_SPLIT_4KB 6 962 + # define CB_NUM_BANKS(x) (((x) & 0x3) << 10) 963 + # define ADDR_SURF_2_BANK 0 964 + # define ADDR_SURF_4_BANK 1 965 + # define ADDR_SURF_8_BANK 2 966 + # define ADDR_SURF_16_BANK 3 967 + # define CB_BANK_WIDTH(x) (((x) & 0x3) << 13) 968 + # define ADDR_SURF_BANK_WIDTH_1 0 969 + # define ADDR_SURF_BANK_WIDTH_2 1 970 + # define ADDR_SURF_BANK_WIDTH_4 2 971 + # define ADDR_SURF_BANK_WIDTH_8 3 972 + # define CB_BANK_HEIGHT(x) (((x) & 0x3) << 16) 973 + # define ADDR_SURF_BANK_HEIGHT_1 0 974 + # define ADDR_SURF_BANK_HEIGHT_2 1 975 + # define ADDR_SURF_BANK_HEIGHT_4 2 976 + # define ADDR_SURF_BANK_HEIGHT_8 3 958 977 #define CB_COLOR0_DIM 0x28c78 959 978 /* only CB0-7 blocks have these regs */ 960 979 #define CB_COLOR0_CMASK 0x28c7c ··· 1164 1137 # define SQ_SEL_1 5 1165 1138 #define SQ_TEX_RESOURCE_WORD5_0 0x30014 1166 1139 #define SQ_TEX_RESOURCE_WORD6_0 0x30018 1140 + # define TEX_TILE_SPLIT(x) (((x) & 0x7) << 29) 1167 1141 #define SQ_TEX_RESOURCE_WORD7_0 0x3001c 1142 + # define TEX_BANK_WIDTH(x) (((x) & 0x3) << 8) 1143 + # define TEX_BANK_HEIGHT(x) (((x) & 
0x3) << 10) 1144 + # define TEX_NUM_BANKS(x) (((x) & 0x3) << 16) 1168 1145 1169 1146 #define SQ_VTX_CONSTANT_WORD0_0 0x30000 1170 1147 #define SQ_VTX_CONSTANT_WORD1_0 0x30004
+6 -1
drivers/gpu/drm/radeon/r100.c
··· 187 187 { 188 188 struct radeon_crtc *radeon_crtc = rdev->mode_info.crtcs[crtc_id]; 189 189 u32 tmp = ((u32)crtc_base) | RADEON_CRTC_OFFSET__OFFSET_LOCK; 190 + int i; 190 191 191 192 /* Lock the graphics update lock */ 192 193 /* update the scanout addresses */ 193 194 WREG32(RADEON_CRTC_OFFSET + radeon_crtc->crtc_offset, tmp); 194 195 195 196 /* Wait for update_pending to go high. */ 196 - while (!(RREG32(RADEON_CRTC_OFFSET + radeon_crtc->crtc_offset) & RADEON_CRTC_OFFSET__GUI_TRIG_OFFSET)); 197 + for (i = 0; i < rdev->usec_timeout; i++) { 198 + if (RREG32(RADEON_CRTC_OFFSET + radeon_crtc->crtc_offset) & RADEON_CRTC_OFFSET__GUI_TRIG_OFFSET) 199 + break; 200 + udelay(1); 201 + } 197 202 DRM_DEBUG("Update pending now high. Unlocking vupdate_lock.\n"); 198 203 199 204 /* Unlock the lock, so double-buffering can take place inside vblank */
+6 -5
drivers/gpu/drm/radeon/radeon_acpi.c
··· 35 35 36 36 /* Fail only if calling the method fails and ATIF is supported */ 37 37 if (ACPI_FAILURE(status) && status != AE_NOT_FOUND) { 38 - printk(KERN_DEBUG "failed to evaluate ATIF got %s\n", acpi_format_exception(status)); 38 + DRM_DEBUG_DRIVER("failed to evaluate ATIF got %s\n", 39 + acpi_format_exception(status)); 39 40 kfree(buffer.pointer); 40 41 return 1; 41 42 } ··· 51 50 acpi_handle handle; 52 51 int ret; 53 52 54 - /* No need to proceed if we're sure that ATIF is not supported */ 55 - if (!ASIC_IS_AVIVO(rdev) || !rdev->bios) 56 - return 0; 57 - 58 53 /* Get the device handle */ 59 54 handle = DEVICE_ACPI_HANDLE(&rdev->pdev->dev); 55 + 56 + /* No need to proceed if we're sure that ATIF is not supported */ 57 + if (!ASIC_IS_AVIVO(rdev) || !rdev->bios || !handle) 58 + return 0; 60 59 61 60 /* Call the ATIF method */ 62 61 ret = radeon_atif_call(handle);
+3 -4
drivers/gpu/drm/radeon/radeon_encoders.c
··· 233 233 switch (radeon_encoder->encoder_id) { 234 234 case ENCODER_OBJECT_ID_TRAVIS: 235 235 case ENCODER_OBJECT_ID_NUTMEG: 236 - return true; 236 + return radeon_encoder->encoder_id; 237 237 default: 238 - return false; 238 + return ENCODER_OBJECT_ID_NONE; 239 239 } 240 240 } 241 - 242 - return false; 241 + return ENCODER_OBJECT_ID_NONE; 243 242 } 244 243 245 244 void radeon_panel_mode_fixup(struct drm_encoder *encoder,
+6 -1
drivers/gpu/drm/radeon/rs600.c
··· 62 62 { 63 63 struct radeon_crtc *radeon_crtc = rdev->mode_info.crtcs[crtc_id]; 64 64 u32 tmp = RREG32(AVIVO_D1GRPH_UPDATE + radeon_crtc->crtc_offset); 65 + int i; 65 66 66 67 /* Lock the graphics update lock */ 67 68 tmp |= AVIVO_D1GRPH_UPDATE_LOCK; ··· 75 74 (u32)crtc_base); 76 75 77 76 /* Wait for update_pending to go high. */ 78 - while (!(RREG32(AVIVO_D1GRPH_UPDATE + radeon_crtc->crtc_offset) & AVIVO_D1GRPH_SURFACE_UPDATE_PENDING)); 77 + for (i = 0; i < rdev->usec_timeout; i++) { 78 + if (RREG32(AVIVO_D1GRPH_UPDATE + radeon_crtc->crtc_offset) & AVIVO_D1GRPH_SURFACE_UPDATE_PENDING) 79 + break; 80 + udelay(1); 81 + } 79 82 DRM_DEBUG("Update pending now high. Unlocking vupdate_lock.\n"); 80 83 81 84 /* Unlock the lock, so double-buffering can take place inside vblank */
+6 -1
drivers/gpu/drm/radeon/rv770.c
··· 47 47 { 48 48 struct radeon_crtc *radeon_crtc = rdev->mode_info.crtcs[crtc_id]; 49 49 u32 tmp = RREG32(AVIVO_D1GRPH_UPDATE + radeon_crtc->crtc_offset); 50 + int i; 50 51 51 52 /* Lock the graphics update lock */ 52 53 tmp |= AVIVO_D1GRPH_UPDATE_LOCK; ··· 67 66 (u32)crtc_base); 68 67 69 68 /* Wait for update_pending to go high. */ 70 - while (!(RREG32(AVIVO_D1GRPH_UPDATE + radeon_crtc->crtc_offset) & AVIVO_D1GRPH_SURFACE_UPDATE_PENDING)); 69 + for (i = 0; i < rdev->usec_timeout; i++) { 70 + if (RREG32(AVIVO_D1GRPH_UPDATE + radeon_crtc->crtc_offset) & AVIVO_D1GRPH_SURFACE_UPDATE_PENDING) 71 + break; 72 + udelay(1); 73 + } 71 74 DRM_DEBUG("Update pending now high. Unlocking vupdate_lock.\n"); 72 75 73 76 /* Unlock the lock, so double-buffering can take place inside vblank */
+2 -2
drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c
··· 140 140 goto out_clips; 141 141 } 142 142 143 - clips = kzalloc(num_clips * sizeof(*clips), GFP_KERNEL); 143 + clips = kcalloc(num_clips, sizeof(*clips), GFP_KERNEL); 144 144 if (clips == NULL) { 145 145 DRM_ERROR("Failed to allocate clip rect list.\n"); 146 146 ret = -ENOMEM; ··· 232 232 goto out_clips; 233 233 } 234 234 235 - clips = kzalloc(num_clips * sizeof(*clips), GFP_KERNEL); 235 + clips = kcalloc(num_clips, sizeof(*clips), GFP_KERNEL); 236 236 if (clips == NULL) { 237 237 DRM_ERROR("Failed to allocate clip rect list.\n"); 238 238 ret = -ENOMEM;
+6 -5
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
··· 1809 1809 } 1810 1810 1811 1811 rects_size = arg->num_outputs * sizeof(struct drm_vmw_rect); 1812 - rects = kzalloc(rects_size, GFP_KERNEL); 1812 + rects = kcalloc(arg->num_outputs, sizeof(struct drm_vmw_rect), 1813 + GFP_KERNEL); 1813 1814 if (unlikely(!rects)) { 1814 1815 ret = -ENOMEM; 1815 1816 goto out_unlock; ··· 1825 1824 } 1826 1825 1827 1826 for (i = 0; i < arg->num_outputs; ++i) { 1828 - if (rects->x < 0 || 1829 - rects->y < 0 || 1830 - rects->x + rects->w > mode_config->max_width || 1831 - rects->y + rects->h > mode_config->max_height) { 1827 + if (rects[i].x < 0 || 1828 + rects[i].y < 0 || 1829 + rects[i].x + rects[i].w > mode_config->max_width || 1830 + rects[i].y + rects[i].h > mode_config->max_height) { 1832 1831 DRM_ERROR("Invalid GUI layout.\n"); 1833 1832 ret = -EINVAL; 1834 1833 goto out_free;
+1 -1
drivers/hid/hid-core.c
··· 1771 1771 { HID_USB_DEVICE(USB_VENDOR_ID_ESSENTIAL_REALITY, USB_DEVICE_ID_ESSENTIAL_REALITY_P5) }, 1772 1772 { HID_USB_DEVICE(USB_VENDOR_ID_ETT, USB_DEVICE_ID_TC5UH) }, 1773 1773 { HID_USB_DEVICE(USB_VENDOR_ID_ETT, USB_DEVICE_ID_TC4UM) }, 1774 + { HID_USB_DEVICE(USB_VENDOR_ID_GENERAL_TOUCH, 0x0001) }, 1774 1775 { HID_USB_DEVICE(USB_VENDOR_ID_GENERAL_TOUCH, 0x0002) }, 1775 - { HID_USB_DEVICE(USB_VENDOR_ID_GENERAL_TOUCH, 0x0003) }, 1776 1776 { HID_USB_DEVICE(USB_VENDOR_ID_GENERAL_TOUCH, 0x0004) }, 1777 1777 { HID_USB_DEVICE(USB_VENDOR_ID_GLAB, USB_DEVICE_ID_4_PHIDGETSERVO_30) }, 1778 1778 { HID_USB_DEVICE(USB_VENDOR_ID_GLAB, USB_DEVICE_ID_1_PHIDGETSERVO_30) },
+1 -1
drivers/hid/hid-ids.h
··· 266 266 #define USB_DEVICE_ID_GAMERON_DUAL_PCS_ADAPTOR 0x0002 267 267 268 268 #define USB_VENDOR_ID_GENERAL_TOUCH 0x0dfc 269 - #define USB_DEVICE_ID_GENERAL_TOUCH_WIN7_TWOFINGERS 0x0001 269 + #define USB_DEVICE_ID_GENERAL_TOUCH_WIN7_TWOFINGERS 0x0003 270 270 271 271 #define USB_VENDOR_ID_GLAB 0x06c2 272 272 #define USB_DEVICE_ID_4_PHIDGETSERVO_30 0x0038
-1
drivers/hwmon/ad7314.c
··· 160 160 static struct spi_driver ad7314_driver = { 161 161 .driver = { 162 162 .name = "ad7314", 163 - .bus = &spi_bus_type, 164 163 .owner = THIS_MODULE, 165 164 }, 166 165 .probe = ad7314_probe,
-1
drivers/hwmon/ads7871.c
··· 227 227 static struct spi_driver ads7871_driver = { 228 228 .driver = { 229 229 .name = DEVICE_NAME, 230 - .bus = &spi_bus_type, 231 230 .owner = THIS_MODULE, 232 231 }, 233 232
+1 -11
drivers/hwmon/exynos4_tmu.c
··· 506 506 .resume = exynos4_tmu_resume, 507 507 }; 508 508 509 - static int __init exynos4_tmu_driver_init(void) 510 - { 511 - return platform_driver_register(&exynos4_tmu_driver); 512 - } 513 - module_init(exynos4_tmu_driver_init); 514 - 515 - static void __exit exynos4_tmu_driver_exit(void) 516 - { 517 - platform_driver_unregister(&exynos4_tmu_driver); 518 - } 519 - module_exit(exynos4_tmu_driver_exit); 509 + module_platform_driver(exynos4_tmu_driver); 520 510 521 511 MODULE_DESCRIPTION("EXYNOS4 TMU Driver"); 522 512 MODULE_AUTHOR("Donggeun Kim <dg77.kim@samsung.com>");
+1 -12
drivers/hwmon/gpio-fan.c
··· 539 539 }, 540 540 }; 541 541 542 - static int __init gpio_fan_init(void) 543 - { 544 - return platform_driver_register(&gpio_fan_driver); 545 - } 546 - 547 - static void __exit gpio_fan_exit(void) 548 - { 549 - platform_driver_unregister(&gpio_fan_driver); 550 - } 551 - 552 - module_init(gpio_fan_init); 553 - module_exit(gpio_fan_exit); 542 + module_platform_driver(gpio_fan_driver); 554 543 555 544 MODULE_AUTHOR("Simon Guinot <sguinot@lacie.com>"); 556 545 MODULE_DESCRIPTION("GPIO FAN driver");
+3 -13
drivers/hwmon/jz4740-hwmon.c
··· 59 59 { 60 60 struct jz4740_hwmon *hwmon = dev_get_drvdata(dev); 61 61 struct completion *completion = &hwmon->read_completion; 62 - unsigned long t; 62 + long t; 63 63 unsigned long val; 64 64 int ret; 65 65 ··· 203 203 return 0; 204 204 } 205 205 206 - struct platform_driver jz4740_hwmon_driver = { 206 + static struct platform_driver jz4740_hwmon_driver = { 207 207 .probe = jz4740_hwmon_probe, 208 208 .remove = __devexit_p(jz4740_hwmon_remove), 209 209 .driver = { ··· 212 212 }, 213 213 }; 214 214 215 - static int __init jz4740_hwmon_init(void) 216 - { 217 - return platform_driver_register(&jz4740_hwmon_driver); 218 - } 219 - module_init(jz4740_hwmon_init); 220 - 221 - static void __exit jz4740_hwmon_exit(void) 222 - { 223 - platform_driver_unregister(&jz4740_hwmon_driver); 224 - } 225 - module_exit(jz4740_hwmon_exit); 215 + module_platform_driver(jz4740_hwmon_driver); 226 216 227 217 MODULE_DESCRIPTION("JZ4740 SoC HWMON driver"); 228 218 MODULE_AUTHOR("Lars-Peter Clausen <lars@metafoo.de>");
+1 -13
drivers/hwmon/ntc_thermistor.c
··· 432 432 .id_table = ntc_thermistor_id, 433 433 }; 434 434 435 - static int __init ntc_thermistor_init(void) 436 - { 437 - return platform_driver_register(&ntc_thermistor_driver); 438 - } 439 - 440 - module_init(ntc_thermistor_init); 441 - 442 - static void __exit ntc_thermistor_cleanup(void) 443 - { 444 - platform_driver_unregister(&ntc_thermistor_driver); 445 - } 446 - 447 - module_exit(ntc_thermistor_cleanup); 435 + module_platform_driver(ntc_thermistor_driver); 448 436 449 437 MODULE_DESCRIPTION("NTC Thermistor Driver"); 450 438 MODULE_AUTHOR("MyungJoo Ham <myungjoo.ham@samsung.com>");
+1 -12
drivers/hwmon/s3c-hwmon.c
··· 393 393 .remove = __devexit_p(s3c_hwmon_remove), 394 394 }; 395 395 396 - static int __init s3c_hwmon_init(void) 397 - { 398 - return platform_driver_register(&s3c_hwmon_driver); 399 - } 400 - 401 - static void __exit s3c_hwmon_exit(void) 402 - { 403 - platform_driver_unregister(&s3c_hwmon_driver); 404 - } 405 - 406 - module_init(s3c_hwmon_init); 407 - module_exit(s3c_hwmon_exit); 396 + module_platform_driver(s3c_hwmon_driver); 408 397 409 398 MODULE_AUTHOR("Ben Dooks <ben@simtec.co.uk>"); 410 399 MODULE_DESCRIPTION("S3C ADC HWMon driver");
+1 -12
drivers/hwmon/sch5627.c
··· 590 590 .remove = sch5627_remove, 591 591 }; 592 592 593 - static int __init sch5627_init(void) 594 - { 595 - return platform_driver_register(&sch5627_driver); 596 - } 597 - 598 - static void __exit sch5627_exit(void) 599 - { 600 - platform_driver_unregister(&sch5627_driver); 601 - } 593 + module_platform_driver(sch5627_driver); 602 594 603 595 MODULE_DESCRIPTION("SMSC SCH5627 Hardware Monitoring Driver"); 604 596 MODULE_AUTHOR("Hans de Goede <hdegoede@redhat.com>"); 605 597 MODULE_LICENSE("GPL"); 606 - 607 - module_init(sch5627_init); 608 - module_exit(sch5627_exit);
+1 -12
drivers/hwmon/sch5636.c
··· 521 521 .remove = sch5636_remove, 522 522 }; 523 523 524 - static int __init sch5636_init(void) 525 - { 526 - return platform_driver_register(&sch5636_driver); 527 - } 528 - 529 - static void __exit sch5636_exit(void) 530 - { 531 - platform_driver_unregister(&sch5636_driver); 532 - } 524 + module_platform_driver(sch5636_driver); 533 525 534 526 MODULE_DESCRIPTION("SMSC SCH5636 Hardware Monitoring Driver"); 535 527 MODULE_AUTHOR("Hans de Goede <hdegoede@redhat.com>"); 536 528 MODULE_LICENSE("GPL"); 537 - 538 - module_init(sch5636_init); 539 - module_exit(sch5636_exit);
+1 -13
drivers/hwmon/twl4030-madc-hwmon.c
··· 136 136 }, 137 137 }; 138 138 139 - static int __init twl4030_madc_hwmon_init(void) 140 - { 141 - return platform_driver_register(&twl4030_madc_hwmon_driver); 142 - } 143 - 144 - module_init(twl4030_madc_hwmon_init); 145 - 146 - static void __exit twl4030_madc_hwmon_exit(void) 147 - { 148 - platform_driver_unregister(&twl4030_madc_hwmon_driver); 149 - } 150 - 151 - module_exit(twl4030_madc_hwmon_exit); 139 + module_platform_driver(twl4030_madc_hwmon_driver); 152 140 153 141 MODULE_DESCRIPTION("TWL4030 ADC Hwmon driver"); 154 142 MODULE_LICENSE("GPL");
+1 -12
drivers/hwmon/ultra45_env.c
··· 309 309 .remove = __devexit_p(env_remove), 310 310 }; 311 311 312 - static int __init env_init(void) 313 - { 314 - return platform_driver_register(&env_driver); 315 - } 316 - 317 - static void __exit env_exit(void) 318 - { 319 - platform_driver_unregister(&env_driver); 320 - } 321 - 322 - module_init(env_init); 323 - module_exit(env_exit); 312 + module_platform_driver(env_driver);
+1 -11
drivers/hwmon/wm831x-hwmon.c
··· 209 209 }, 210 210 }; 211 211 212 - static int __init wm831x_hwmon_init(void) 213 - { 214 - return platform_driver_register(&wm831x_hwmon_driver); 215 - } 216 - module_init(wm831x_hwmon_init); 217 - 218 - static void __exit wm831x_hwmon_exit(void) 219 - { 220 - platform_driver_unregister(&wm831x_hwmon_driver); 221 - } 222 - module_exit(wm831x_hwmon_exit); 212 + module_platform_driver(wm831x_hwmon_driver); 223 213 224 214 MODULE_AUTHOR("Mark Brown <broonie@opensource.wolfsonmicro.com>"); 225 215 MODULE_DESCRIPTION("WM831x Hardware Monitoring");
+1 -11
drivers/hwmon/wm8350-hwmon.c
··· 133 133 }, 134 134 }; 135 135 136 - static int __init wm8350_hwmon_init(void) 137 - { 138 - return platform_driver_register(&wm8350_hwmon_driver); 139 - } 140 - module_init(wm8350_hwmon_init); 141 - 142 - static void __exit wm8350_hwmon_exit(void) 143 - { 144 - platform_driver_unregister(&wm8350_hwmon_driver); 145 - } 146 - module_exit(wm8350_hwmon_exit); 136 + module_platform_driver(wm8350_hwmon_driver); 147 137 148 138 MODULE_AUTHOR("Mark Brown <broonie@opensource.wolfsonmicro.com>"); 149 139 MODULE_DESCRIPTION("WM8350 Hardware Monitoring");
+12 -10
drivers/i2c/busses/i2c-eg20t.c
··· 893 893 /* Set the number of I2C channel instance */ 894 894 adap_info->ch_num = id->driver_data; 895 895 896 + ret = request_irq(pdev->irq, pch_i2c_handler, IRQF_SHARED, 897 + KBUILD_MODNAME, adap_info); 898 + if (ret) { 899 + pch_pci_err(pdev, "request_irq FAILED\n"); 900 + goto err_request_irq; 901 + } 902 + 896 903 for (i = 0; i < adap_info->ch_num; i++) { 897 904 pch_adap = &adap_info->pch_data[i].pch_adapter; 898 905 adap_info->pch_i2c_suspended = false; ··· 917 910 918 911 pch_adap->dev.parent = &pdev->dev; 919 912 913 + pch_i2c_init(&adap_info->pch_data[i]); 920 914 ret = i2c_add_adapter(pch_adap); 921 915 if (ret) { 922 916 pch_pci_err(pdev, "i2c_add_adapter[ch:%d] FAILED\n", i); 923 - goto err_i2c_add_adapter; 917 + goto err_add_adapter; 924 918 } 925 - 926 - pch_i2c_init(&adap_info->pch_data[i]); 927 - } 928 - ret = request_irq(pdev->irq, pch_i2c_handler, IRQF_SHARED, 929 - KBUILD_MODNAME, adap_info); 930 - if (ret) { 931 - pch_pci_err(pdev, "request_irq FAILED\n"); 932 - goto err_i2c_add_adapter; 933 919 } 934 920 935 921 pci_set_drvdata(pdev, adap_info); 936 922 pch_pci_dbg(pdev, "returns %d.\n", ret); 937 923 return 0; 938 924 939 - err_i2c_add_adapter: 925 + err_add_adapter: 940 926 for (j = 0; j < i; j++) 941 927 i2c_del_adapter(&adap_info->pch_data[j].pch_adapter); 928 + free_irq(pdev->irq, adap_info); 929 + err_request_irq: 942 930 pci_iounmap(pdev, base_addr); 943 931 err_pci_iomap: 944 932 pci_release_regions(pdev);
+1 -1
drivers/i2c/busses/i2c-nuc900.c
··· 593 593 i2c->adap.algo_data = i2c; 594 594 i2c->adap.dev.parent = &pdev->dev; 595 595 596 - mfp_set_groupg(&pdev->dev); 596 + mfp_set_groupg(&pdev->dev, NULL); 597 597 598 598 clk_get_rate(i2c->clk); 599 599
+6 -5
drivers/i2c/busses/i2c-omap.c
··· 1047 1047 * size. This is to ensure that we can handle the status on int 1048 1048 * call back latencies. 1049 1049 */ 1050 - if (dev->rev >= OMAP_I2C_REV_ON_3530_4430) { 1051 - dev->fifo_size = 0; 1050 + 1051 + dev->fifo_size = (dev->fifo_size / 2); 1052 + 1053 + if (dev->rev >= OMAP_I2C_REV_ON_3530_4430) 1052 1054 dev->b_hw = 0; /* Disable hardware fixes */ 1053 - } else { 1054 - dev->fifo_size = (dev->fifo_size / 2); 1055 + else 1055 1056 dev->b_hw = 1; /* Enable hardware fixes */ 1056 - } 1057 + 1057 1058 /* calculate wakeup latency constraint for MPU */ 1058 1059 if (dev->set_mpu_wkup_lat != NULL) 1059 1060 dev->latency = (1000000 * dev->fifo_size) /
+2 -1
drivers/i2c/busses/i2c-s3c2410.c
··· 534 534 535 535 /* first, try busy waiting briefly */ 536 536 do { 537 + cpu_relax(); 537 538 iicstat = readl(i2c->regs + S3C2410_IICSTAT); 538 539 } while ((iicstat & S3C2410_IICSTAT_START) && --spins); 539 540 ··· 787 786 #else 788 787 static int s3c24xx_i2c_parse_dt_gpio(struct s3c24xx_i2c *i2c) 789 788 { 790 - return -EINVAL; 789 + return 0; 791 790 } 792 791 793 792 static void s3c24xx_i2c_dt_gpio_free(struct s3c24xx_i2c *i2c)
+6 -3
drivers/infiniband/core/addr.c
··· 216 216 217 217 neigh = neigh_lookup(&arp_tbl, &rt->rt_gateway, rt->dst.dev); 218 218 if (!neigh || !(neigh->nud_state & NUD_VALID)) { 219 + rcu_read_lock(); 219 220 neigh_event_send(dst_get_neighbour(&rt->dst), NULL); 221 + rcu_read_unlock(); 220 222 ret = -ENODATA; 221 223 if (neigh) 222 224 goto release; ··· 276 274 goto put; 277 275 } 278 276 277 + rcu_read_lock(); 279 278 neigh = dst_get_neighbour(dst); 280 279 if (!neigh || !(neigh->nud_state & NUD_VALID)) { 281 280 if (neigh) 282 281 neigh_event_send(neigh, NULL); 283 282 ret = -ENODATA; 284 - goto put; 283 + } else { 284 + ret = rdma_copy_addr(addr, dst->dev, neigh->ha); 285 285 } 286 - 287 - ret = rdma_copy_addr(addr, dst->dev, neigh->ha); 286 + rcu_read_unlock(); 288 287 put: 289 288 dst_release(dst); 290 289 return ret;
+4
drivers/infiniband/hw/cxgb3/iwch_cm.c
··· 1375 1375 goto reject; 1376 1376 } 1377 1377 dst = &rt->dst; 1378 + rcu_read_lock(); 1378 1379 neigh = dst_get_neighbour(dst); 1379 1380 l2t = t3_l2t_get(tdev, neigh, neigh->dev); 1381 + rcu_read_unlock(); 1380 1382 if (!l2t) { 1381 1383 printk(KERN_ERR MOD "%s - failed to allocate l2t entry!\n", 1382 1384 __func__); ··· 1948 1946 } 1949 1947 ep->dst = &rt->dst; 1950 1948 1949 + rcu_read_lock(); 1951 1950 neigh = dst_get_neighbour(ep->dst); 1952 1951 1953 1952 /* get a l2t entry */ 1954 1953 ep->l2t = t3_l2t_get(ep->com.tdev, neigh, neigh->dev); 1954 + rcu_read_unlock(); 1955 1955 if (!ep->l2t) { 1956 1956 printk(KERN_ERR MOD "%s - cannot alloc l2e.\n", __func__); 1957 1957 err = -ENOMEM;
+9 -1
drivers/infiniband/hw/cxgb4/cm.c
··· 542 542 (mpa_rev_to_use == 2 ? MPA_ENHANCED_RDMA_CONN : 0); 543 543 mpa->private_data_size = htons(ep->plen); 544 544 mpa->revision = mpa_rev_to_use; 545 - if (mpa_rev_to_use == 1) 545 + if (mpa_rev_to_use == 1) { 546 546 ep->tried_with_mpa_v1 = 1; 547 + ep->retry_with_mpa_v1 = 0; 548 + } 547 549 548 550 if (mpa_rev_to_use == 2) { 549 551 mpa->private_data_size += ··· 1596 1594 goto reject; 1597 1595 } 1598 1596 dst = &rt->dst; 1597 + rcu_read_lock(); 1599 1598 neigh = dst_get_neighbour(dst); 1600 1599 if (neigh->dev->flags & IFF_LOOPBACK) { 1601 1600 pdev = ip_dev_find(&init_net, peer_ip); ··· 1623 1620 rss_qid = dev->rdev.lldi.rxq_ids[ 1624 1621 cxgb4_port_idx(neigh->dev) * step]; 1625 1622 } 1623 + rcu_read_unlock(); 1626 1624 if (!l2t) { 1627 1625 printk(KERN_ERR MOD "%s - failed to allocate l2t entry!\n", 1628 1626 __func__); ··· 1824 1820 } 1825 1821 ep->dst = &rt->dst; 1826 1822 1823 + rcu_read_lock(); 1827 1824 neigh = dst_get_neighbour(ep->dst); 1828 1825 1829 1826 /* get a l2t entry */ ··· 1861 1856 ep->rss_qid = ep->com.dev->rdev.lldi.rxq_ids[ 1862 1857 cxgb4_port_idx(neigh->dev) * step]; 1863 1858 } 1859 + rcu_read_unlock(); 1864 1860 if (!ep->l2t) { 1865 1861 printk(KERN_ERR MOD "%s - cannot alloc l2e.\n", __func__); 1866 1862 err = -ENOMEM; ··· 2307 2301 } 2308 2302 ep->dst = &rt->dst; 2309 2303 2304 + rcu_read_lock(); 2310 2305 neigh = dst_get_neighbour(ep->dst); 2311 2306 2312 2307 /* get a l2t entry */ ··· 2346 2339 ep->retry_with_mpa_v1 = 0; 2347 2340 ep->tried_with_mpa_v1 = 0; 2348 2341 } 2342 + rcu_read_unlock(); 2349 2343 if (!ep->l2t) { 2350 2344 printk(KERN_ERR MOD "%s - cannot alloc l2e.\n", __func__); 2351 2345 err = -ENOMEM;
+1 -1
drivers/infiniband/hw/cxgb4/cq.c
··· 311 311 while (ptr != cq->sw_pidx) { 312 312 cqe = &cq->sw_queue[ptr]; 313 313 if (RQ_TYPE(cqe) && (CQE_OPCODE(cqe) != FW_RI_READ_RESP) && 314 - (CQE_QPID(cqe) == wq->rq.qid) && cqe_completes_wr(cqe, wq)) 314 + (CQE_QPID(cqe) == wq->sq.qid) && cqe_completes_wr(cqe, wq)) 315 315 (*count)++; 316 316 if (++ptr == cq->size) 317 317 ptr = 0;
+4 -2
drivers/infiniband/hw/nes/nes_cm.c
··· 1377 1377 neigh_release(neigh); 1378 1378 } 1379 1379 1380 - if ((neigh == NULL) || (!(neigh->nud_state & NUD_VALID))) 1380 + if ((neigh == NULL) || (!(neigh->nud_state & NUD_VALID))) { 1381 + rcu_read_lock(); 1381 1382 neigh_event_send(dst_get_neighbour(&rt->dst), NULL); 1382 - 1383 + rcu_read_unlock(); 1384 + } 1383 1385 ip_rt_put(rt); 1384 1386 return rc; 1385 1387 }
+9 -9
drivers/infiniband/hw/qib/qib_iba7322.c
··· 2307 2307 SYM_LSB(IBCCtrlA_0, MaxPktLen); 2308 2308 ppd->cpspec->ibcctrl_a = ibc; /* without linkcmd or linkinitcmd! */ 2309 2309 2310 - /* initially come up waiting for TS1, without sending anything. */ 2311 - val = ppd->cpspec->ibcctrl_a | (QLOGIC_IB_IBCC_LINKINITCMD_DISABLE << 2312 - QLOGIC_IB_IBCC_LINKINITCMD_SHIFT); 2313 - 2314 - ppd->cpspec->ibcctrl_a = val; 2315 2310 /* 2316 2311 * Reset the PCS interface to the serdes (and also ibc, which is still 2317 2312 * in reset from above). Writes new value of ibcctrl_a as last step. 2318 2313 */ 2319 2314 qib_7322_mini_pcs_reset(ppd); 2320 - qib_write_kreg(dd, kr_scratch, 0ULL); 2321 - /* clear the linkinit cmds */ 2322 - ppd->cpspec->ibcctrl_a &= ~SYM_MASK(IBCCtrlA_0, LinkInitCmd); 2323 2315 2324 2316 if (!ppd->cpspec->ibcctrl_b) { 2325 2317 unsigned lse = ppd->link_speed_enabled; ··· 2376 2384 /* Enable port */ 2377 2385 ppd->cpspec->ibcctrl_a |= SYM_MASK(IBCCtrlA_0, IBLinkEn); 2378 2386 set_vls(ppd); 2387 + 2388 + /* initially come up DISABLED, without sending anything. */ 2389 + val = ppd->cpspec->ibcctrl_a | (QLOGIC_IB_IBCC_LINKINITCMD_DISABLE << 2390 + QLOGIC_IB_IBCC_LINKINITCMD_SHIFT); 2391 + qib_write_kreg_port(ppd, krp_ibcctrl_a, val); 2392 + qib_write_kreg(dd, kr_scratch, 0ULL); 2393 + /* clear the linkinit cmds */ 2394 + ppd->cpspec->ibcctrl_a = val & ~SYM_MASK(IBCCtrlA_0, LinkInitCmd); 2379 2395 2380 2396 /* be paranoid against later code motion, etc. */ 2381 2397 spin_lock_irqsave(&dd->cspec->rcvmod_lock, flags); ··· 5241 5241 off */ 5242 5242 if (ppd->dd->flags & QIB_HAS_QSFP) { 5243 5243 qd->t_insert = get_jiffies_64(); 5244 - schedule_work(&qd->work); 5244 + queue_work(ib_wq, &qd->work); 5245 5245 } 5246 5246 spin_lock_irqsave(&ppd->sdma_lock, flags); 5247 5247 if (__qib_sdma_running(ppd))
-12
drivers/infiniband/hw/qib/qib_qsfp.c
··· 480 480 udelay(20); /* Generous RST dwell */ 481 481 482 482 dd->f_gpio_mod(dd, mask, mask, mask); 483 - /* Spec says module can take up to two seconds! */ 484 - mask = QSFP_GPIO_MOD_PRS_N; 485 - if (qd->ppd->hw_pidx) 486 - mask <<= QSFP_GPIO_PORT2_SHIFT; 487 - 488 - /* Do not try to wait here. Better to let event handle it */ 489 - if (!qib_qsfp_mod_present(qd->ppd)) 490 - goto bail; 491 - /* We see a module, but it may be unwise to look yet. Just schedule */ 492 - qd->t_insert = get_jiffies_64(); 493 - queue_work(ib_wq, &qd->work); 494 - bail: 495 483 return; 496 484 } 497 485
+8 -5
drivers/infiniband/ulp/ipoib/ipoib_ib.c
··· 57 57 struct ib_pd *pd, struct ib_ah_attr *attr) 58 58 { 59 59 struct ipoib_ah *ah; 60 + struct ib_ah *vah; 60 61 61 62 ah = kmalloc(sizeof *ah, GFP_KERNEL); 62 63 if (!ah) 63 - return NULL; 64 + return ERR_PTR(-ENOMEM); 64 65 65 66 ah->dev = dev; 66 67 ah->last_send = 0; 67 68 kref_init(&ah->ref); 68 69 69 - ah->ah = ib_create_ah(pd, attr); 70 - if (IS_ERR(ah->ah)) { 70 + vah = ib_create_ah(pd, attr); 71 + if (IS_ERR(vah)) { 71 72 kfree(ah); 72 - ah = NULL; 73 - } else 73 + ah = (struct ipoib_ah *)vah; 74 + } else { 75 + ah->ah = vah; 74 76 ipoib_dbg(netdev_priv(dev), "Created ah %p\n", ah->ah); 77 + } 75 78 76 79 return ah; 77 80 }
+12 -8
drivers/infiniband/ulp/ipoib/ipoib_main.c
··· 432 432 433 433 spin_lock_irqsave(&priv->lock, flags); 434 434 435 - if (ah) { 435 + if (!IS_ERR_OR_NULL(ah)) { 436 436 path->pathrec = *pathrec; 437 437 438 438 old_ah = path->ah; ··· 555 555 return 0; 556 556 } 557 557 558 + /* called with rcu_read_lock */ 558 559 static void neigh_add_path(struct sk_buff *skb, struct net_device *dev) 559 560 { 560 561 struct ipoib_dev_priv *priv = netdev_priv(dev); ··· 637 636 spin_unlock_irqrestore(&priv->lock, flags); 638 637 } 639 638 639 + /* called with rcu_read_lock */ 640 640 static void ipoib_path_lookup(struct sk_buff *skb, struct net_device *dev) 641 641 { 642 642 struct ipoib_dev_priv *priv = netdev_priv(skb->dev); ··· 722 720 struct neighbour *n = NULL; 723 721 unsigned long flags; 724 722 723 + rcu_read_lock(); 725 724 if (likely(skb_dst(skb))) 726 725 n = dst_get_neighbour(skb_dst(skb)); 727 726 728 727 if (likely(n)) { 729 728 if (unlikely(!*to_ipoib_neigh(n))) { 730 729 ipoib_path_lookup(skb, dev); 731 - return NETDEV_TX_OK; 730 + goto unlock; 732 731 } 733 732 734 733 neigh = *to_ipoib_neigh(n); ··· 752 749 ipoib_neigh_free(dev, neigh); 753 750 spin_unlock_irqrestore(&priv->lock, flags); 754 751 ipoib_path_lookup(skb, dev); 755 - return NETDEV_TX_OK; 752 + goto unlock; 756 753 } 757 754 758 755 if (ipoib_cm_get(neigh)) { 759 756 if (ipoib_cm_up(neigh)) { 760 757 ipoib_cm_send(dev, skb, ipoib_cm_get(neigh)); 761 - return NETDEV_TX_OK; 758 + goto unlock; 762 759 } 763 760 } else if (neigh->ah) { 764 761 ipoib_send(dev, skb, neigh->ah, IPOIB_QPN(n->ha)); 765 - return NETDEV_TX_OK; 762 + goto unlock; 766 763 } 767 764 768 765 if (skb_queue_len(&neigh->queue) < IPOIB_MAX_PATH_REC_QUEUE) { ··· 796 793 phdr->hwaddr + 4); 797 794 dev_kfree_skb_any(skb); 798 795 ++dev->stats.tx_dropped; 799 - return NETDEV_TX_OK; 796 + goto unlock; 800 797 } 801 798 802 799 unicast_arp_send(skb, dev, phdr); 803 800 } 804 801 } 805 - 802 + unlock: 803 + rcu_read_unlock(); 806 804 return NETDEV_TX_OK; 807 805 } 808 806 ··· 841 837 dst = skb_dst(skb); 842 838 n = NULL; 843 839 if (dst) 844 - n = dst_get_neighbour(dst); 840 + n = dst_get_neighbour_raw(dst); 845 841 if ((!dst || !n) && daddr) { 846 842 struct ipoib_pseudoheader *phdr = 847 843 (struct ipoib_pseudoheader *) skb_push(skb, sizeof *phdr);
+9 -4
drivers/infiniband/ulp/ipoib/ipoib_multicast.c
··· 240 240 av.grh.dgid = mcast->mcmember.mgid; 241 241 242 242 ah = ipoib_create_ah(dev, priv->pd, &av); 243 - if (!ah) { 244 - ipoib_warn(priv, "ib_address_create failed\n"); 243 + if (IS_ERR(ah)) { 244 + ipoib_warn(priv, "ib_address_create failed %ld\n", 245 + -PTR_ERR(ah)); 246 + /* use original error */ 247 + return PTR_ERR(ah); 245 248 } else { 246 249 spin_lock_irq(&priv->lock); 247 250 mcast->ah = ah; ··· 269 266 270 267 skb->dev = dev; 271 268 if (dst) 272 - n = dst_get_neighbour(dst); 269 + n = dst_get_neighbour_raw(dst); 273 270 if (!dst || !n) { 274 271 /* put pseudoheader back on for next time */ 275 272 skb_push(skb, sizeof (struct ipoib_pseudoheader)); ··· 725 722 if (mcast && mcast->ah) { 726 723 struct dst_entry *dst = skb_dst(skb); 727 724 struct neighbour *n = NULL; 725 + 726 + rcu_read_lock(); 728 727 if (dst) 729 728 n = dst_get_neighbour(dst); 730 729 if (n && !*to_ipoib_neigh(n)) { ··· 739 734 list_add_tail(&neigh->list, &mcast->neigh_list); 740 735 } 741 736 } 742 - 737 + rcu_read_unlock(); 743 738 spin_unlock_irqrestore(&priv->lock, flags); 744 739 ipoib_send(dev, skb, mcast->ah, IB_MULTICAST_QPN); 745 740 return;
+6 -1
drivers/iommu/intel-iommu.c
··· 405 405 int dmar_disabled = 1; 406 406 #endif /*CONFIG_INTEL_IOMMU_DEFAULT_ON*/ 407 407 408 + int intel_iommu_enabled = 0; 409 + EXPORT_SYMBOL_GPL(intel_iommu_enabled); 410 + 408 411 static int dmar_map_gfx = 1; 409 412 static int dmar_forcedac; 410 413 static int intel_iommu_strict; ··· 3527 3524 return 0; 3528 3525 } 3529 3526 3530 - int dmar_parse_rmrr_atsr_dev(void) 3527 + int __init dmar_parse_rmrr_atsr_dev(void) 3531 3528 { 3532 3529 struct dmar_rmrr_unit *rmrr, *rmrr_n; 3533 3530 struct dmar_atsr_unit *atsr, *atsr_n; ··· 3649 3646 bus_set_iommu(&pci_bus_type, &intel_iommu_ops); 3650 3647 3651 3648 bus_register_notifier(&pci_bus_type, &device_nb); 3649 + 3650 + intel_iommu_enabled = 1; 3652 3651 3653 3652 return 0; 3654 3653 }
+1 -1
drivers/iommu/intr_remapping.c
··· 773 773 return ir_supported; 774 774 } 775 775 776 - int ir_dev_scope_init(void) 776 + int __init ir_dev_scope_init(void) 777 777 { 778 778 if (!intr_remapping_enabled) 779 779 return 0;
+6
drivers/isdn/divert/divert_procfs.c
··· 242 242 case IIOCDOCFINT: 243 243 if (!divert_if.drv_to_name(dioctl.cf_ctrl.drvid)) 244 244 return (-EINVAL); /* invalid driver */ 245 + if (strnlen(dioctl.cf_ctrl.msn, sizeof(dioctl.cf_ctrl.msn)) == 246 + sizeof(dioctl.cf_ctrl.msn)) 247 + return -EINVAL; 248 + if (strnlen(dioctl.cf_ctrl.fwd_nr, sizeof(dioctl.cf_ctrl.fwd_nr)) == 249 + sizeof(dioctl.cf_ctrl.fwd_nr)) 250 + return -EINVAL; 245 251 if ((i = cf_command(dioctl.cf_ctrl.drvid, 246 252 (cmd == IIOCDOCFACT) ? 1 : (cmd == IIOCDOCFDIS) ? 0 : 2, 247 253 dioctl.cf_ctrl.cfproc,
+3
drivers/isdn/i4l/isdn_net.c
··· 2756 2756 char *c, 2757 2757 *e; 2758 2758 2759 + if (strnlen(cfg->drvid, sizeof(cfg->drvid)) == 2760 + sizeof(cfg->drvid)) 2761 + return -EINVAL; 2759 2762 drvidx = -1; 2760 2763 chidx = -1; 2761 2764 strcpy(drvid, cfg->drvid);
+4
drivers/md/bitmap.c
··· 1106 1106 */ 1107 1107 int i; 1108 1108 1109 + spin_lock_irq(&bitmap->lock); 1109 1110 for (i = 0; i < bitmap->file_pages; i++) 1110 1111 set_page_attr(bitmap, bitmap->filemap[i], 1111 1112 BITMAP_PAGE_NEEDWRITE); 1112 1113 bitmap->allclean = 0; 1114 + spin_unlock_irq(&bitmap->lock); 1113 1115 } 1114 1116 1115 1117 static void bitmap_count_page(struct bitmap *bitmap, sector_t offset, int inc) ··· 1607 1605 for (chunk = s; chunk <= e; chunk++) { 1608 1606 sector_t sec = (sector_t)chunk << CHUNK_BLOCK_SHIFT(bitmap); 1609 1607 bitmap_set_memory_bits(bitmap, sec, 1); 1608 + spin_lock_irq(&bitmap->lock); 1610 1609 bitmap_file_set_bit(bitmap, sec); 1610 + spin_unlock_irq(&bitmap->lock); 1611 1611 if (sec < bitmap->mddev->recovery_cp) 1612 1612 /* We are asserting that the array is dirty, 1613 1613 * so move the recovery_cp address back so
+23 -4
drivers/md/md.c
··· 570 570 mddev->ctime == 0 && !mddev->hold_active) { 571 571 /* Array is not configured at all, and not held active, 572 572 * so destroy it */ 573 - list_del(&mddev->all_mddevs); 573 + list_del_init(&mddev->all_mddevs); 574 574 bs = mddev->bio_set; 575 575 mddev->bio_set = NULL; 576 576 if (mddev->gendisk) { ··· 2546 2546 sep = ","; 2547 2547 } 2548 2548 if (test_bit(Blocked, &rdev->flags) || 2549 - rdev->badblocks.unacked_exist) { 2549 + (rdev->badblocks.unacked_exist 2550 + && !test_bit(Faulty, &rdev->flags))) { 2550 2551 len += sprintf(page+len, "%sblocked", sep); 2551 2552 sep = ","; 2552 2553 } ··· 3789 3788 if (err) 3790 3789 return err; 3791 3790 else { 3791 + if (mddev->hold_active == UNTIL_IOCTL) 3792 + mddev->hold_active = 0; 3792 3793 sysfs_notify_dirent_safe(mddev->sysfs_state); 3793 3794 return len; 3794 3795 } ··· 4490 4487 4491 4488 if (!entry->show) 4492 4489 return -EIO; 4490 + spin_lock(&all_mddevs_lock); 4491 + if (list_empty(&mddev->all_mddevs)) { 4492 + spin_unlock(&all_mddevs_lock); 4493 + return -EBUSY; 4494 + } 4495 + mddev_get(mddev); 4496 + spin_unlock(&all_mddevs_lock); 4497 + 4493 4498 rv = mddev_lock(mddev); 4494 4499 if (!rv) { 4495 4500 rv = entry->show(mddev, page); 4496 4501 mddev_unlock(mddev); 4497 4502 } 4503 + mddev_put(mddev); 4498 4504 return rv; 4499 4505 } 4500 4506 ··· 4519 4507 return -EIO; 4520 4508 if (!capable(CAP_SYS_ADMIN)) 4521 4509 return -EACCES; 4510 + spin_lock(&all_mddevs_lock); 4511 + if (list_empty(&mddev->all_mddevs)) { 4512 + spin_unlock(&all_mddevs_lock); 4513 + return -EBUSY; 4514 + } 4515 + mddev_get(mddev); 4516 + spin_unlock(&all_mddevs_lock); 4522 4517 rv = mddev_lock(mddev); 4523 - if (mddev->hold_active == UNTIL_IOCTL) 4524 - mddev->hold_active = 0; 4525 4518 if (!rv) { 4526 4519 rv = entry->store(mddev, page, length); 4527 4520 mddev_unlock(mddev); 4528 4521 } 4522 + mddev_put(mddev); 4529 4523 return rv; 4530 4524 } 4531 4525 ··· 7858 7840 s + rdev->data_offset, sectors, acknowledged); 7859 7841 if (rv) { 7860 7842 /* Make sure they get written out promptly */ 7843 + sysfs_notify_dirent_safe(rdev->sysfs_state); 7861 7844 set_bit(MD_CHANGE_CLEAN, &rdev->mddev->flags); 7862 7845 md_wakeup_thread(rdev->mddev->thread); 7863 7846 }
+5 -3
drivers/md/raid5.c
··· 3036 3036 if (dev->written) 3037 3037 s->written++; 3038 3038 rdev = rcu_dereference(conf->disks[i].rdev); 3039 + if (rdev && test_bit(Faulty, &rdev->flags)) 3040 + rdev = NULL; 3039 3041 if (rdev) { 3040 3042 is_bad = is_badblock(rdev, sh->sector, STRIPE_SECTORS, 3041 3043 &first_bad, &bad_sectors); ··· 3065 3063 } 3066 3064 } else if (test_bit(In_sync, &rdev->flags)) 3067 3065 set_bit(R5_Insync, &dev->flags); 3068 - else if (!test_bit(Faulty, &rdev->flags)) { 3066 + else { 3069 3067 /* in sync if before recovery_offset */ 3070 3068 if (sh->sector + STRIPE_SECTORS <= rdev->recovery_offset) 3071 3069 set_bit(R5_Insync, &dev->flags); 3072 3070 } 3073 - if (test_bit(R5_WriteError, &dev->flags)) { 3071 + if (rdev && test_bit(R5_WriteError, &dev->flags)) { 3074 3072 clear_bit(R5_Insync, &dev->flags); 3075 3073 if (!test_bit(Faulty, &rdev->flags)) { 3076 3074 s->handle_bad_blocks = 1; ··· 3078 3076 } else 3079 3077 clear_bit(R5_WriteError, &dev->flags); 3080 3078 } 3081 - if (test_bit(R5_MadeGood, &dev->flags)) { 3079 + if (rdev && test_bit(R5_MadeGood, &dev->flags)) { 3082 3080 if (!test_bit(Faulty, &rdev->flags)) { 3083 3081 s->handle_bad_blocks = 1; 3084 3082 atomic_inc(&rdev->nr_pending);
+8
drivers/mmc/card/block.c
··· 1606 1606 MMC_QUIRK_BLK_NO_CMD23), 1607 1607 MMC_FIXUP("MMC32G", 0x11, CID_OEMID_ANY, add_quirk_mmc, 1608 1608 MMC_QUIRK_BLK_NO_CMD23), 1609 + 1610 + /* 1611 + * Some Micron MMC cards needs longer data read timeout than 1612 + * indicated in CSD. 1613 + */ 1614 + MMC_FIXUP(CID_NAME_ANY, 0x13, 0x200, add_quirk_mmc, 1615 + MMC_QUIRK_LONG_READ_TIME), 1616 + 1609 1617 END_FIXUP 1610 1618 }; 1611 1619
+64 -34
drivers/mmc/core/core.c
··· 529 529 data->timeout_clks = 0; 530 530 } 531 531 } 532 + 533 + /* 534 + * Some cards require longer data read timeout than indicated in CSD. 535 + * Address this by setting the read timeout to a "reasonably high" 536 + * value. For the cards tested, 300ms has proven enough. If necessary, 537 + * this value can be increased if other problematic cards require this. 538 + */ 539 + if (mmc_card_long_read_time(card) && data->flags & MMC_DATA_READ) { 540 + data->timeout_ns = 300000000; 541 + data->timeout_clks = 0; 542 + } 543 + 532 544 /* 533 545 * Some cards need very high timeouts if driven in SPI mode. 534 546 * The worst observed timeout was 900ms after writing a ··· 1225 1213 mmc_host_clk_release(host); 1226 1214 } 1227 1215 1216 + static void mmc_poweroff_notify(struct mmc_host *host) 1217 + { 1218 + struct mmc_card *card; 1219 + unsigned int timeout; 1220 + unsigned int notify_type = EXT_CSD_NO_POWER_NOTIFICATION; 1221 + int err = 0; 1222 + 1223 + card = host->card; 1224 + 1225 + /* 1226 + * Send power notify command only if card 1227 + * is mmc and notify state is powered ON 1228 + */ 1229 + if (card && mmc_card_mmc(card) && 1230 + (card->poweroff_notify_state == MMC_POWERED_ON)) { 1231 + 1232 + if (host->power_notify_type == MMC_HOST_PW_NOTIFY_SHORT) { 1233 + notify_type = EXT_CSD_POWER_OFF_SHORT; 1234 + timeout = card->ext_csd.generic_cmd6_time; 1235 + card->poweroff_notify_state = MMC_POWEROFF_SHORT; 1236 + } else { 1237 + notify_type = EXT_CSD_POWER_OFF_LONG; 1238 + timeout = card->ext_csd.power_off_longtime; 1239 + card->poweroff_notify_state = MMC_POWEROFF_LONG; 1240 + } 1241 + 1242 + err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, 1243 + EXT_CSD_POWER_OFF_NOTIFICATION, 1244 + notify_type, timeout); 1245 + 1246 + if (err && err != -EBADMSG) 1247 + pr_err("Device failed to respond within %d poweroff " 1248 + "time. Forcefully powering down the device\n", 1249 + timeout); 1250 + 1251 + /* Set the card state to no notification after the poweroff */ 1252 + card->poweroff_notify_state = MMC_NO_POWER_NOTIFICATION; 1253 + } 1254 + } 1255 + 1228 1256 /* 1229 1257 * Apply power to the MMC stack. This is a two-stage process. 1230 1258 * First, we enable power to the card without the clock running. ··· 1321 1269 1322 1270 void mmc_power_off(struct mmc_host *host) 1323 1271 { 1324 - struct mmc_card *card; 1325 - unsigned int notify_type; 1326 - unsigned int timeout; 1327 - int err; 1328 - 1329 1272 mmc_host_clk_hold(host); 1330 1273 1331 - card = host->card; 1332 1274 host->ios.clock = 0; 1333 1275 host->ios.vdd = 0; 1334 1276 1335 - if (card && mmc_card_mmc(card) && 1336 - (card->poweroff_notify_state == MMC_POWERED_ON)) { 1337 - 1338 - if (host->power_notify_type == MMC_HOST_PW_NOTIFY_SHORT) { 1339 - notify_type = EXT_CSD_POWER_OFF_SHORT; 1340 - timeout = card->ext_csd.generic_cmd6_time; 1341 - card->poweroff_notify_state = MMC_POWEROFF_SHORT; 1342 - } else { 1343 - notify_type = EXT_CSD_POWER_OFF_LONG; 1344 - timeout = card->ext_csd.power_off_longtime; 1345 - card->poweroff_notify_state = MMC_POWEROFF_LONG; 1346 - } 1347 - 1348 - err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, 1349 - EXT_CSD_POWER_OFF_NOTIFICATION, 1350 - notify_type, timeout); 1351 - 1352 - if (err && err != -EBADMSG) 1353 - pr_err("Device failed to respond within %d poweroff " 1354 - "time. Forcefully powering down the device\n", 1355 - timeout); 1356 - 1357 - /* Set the card state to no notification after the poweroff */ 1358 - card->poweroff_notify_state = MMC_NO_POWER_NOTIFICATION; 1359 - } 1277 + mmc_poweroff_notify(host); 1360 1278 1361 1279 /* 1362 1280 * Reset ocr mask to be the highest possible voltage supported for ··· 2218 2196 2219 2197 mmc_bus_get(host); 2220 2198 2221 - if (host->bus_ops && !host->bus_dead && host->bus_ops->awake) 2199 + if (host->bus_ops && !host->bus_dead && host->bus_ops->sleep) 2222 2200 err = host->bus_ops->sleep(host); 2223 2201 2224 2202 mmc_bus_put(host); ··· 2324 2302 * pre-claim the host. 2325 2303 */ 2326 2304 if (mmc_try_claim_host(host)) { 2327 - if (host->bus_ops->suspend) 2305 + if (host->bus_ops->suspend) { 2306 + /* 2307 + * For eMMC 4.5 device send notify command 2308 + * before sleep, because in sleep state eMMC 4.5 2309 + * devices respond to only RESET and AWAKE cmd 2310 + */ 2311 + mmc_poweroff_notify(host); 2328 2312 err = host->bus_ops->suspend(host); 2313 + } 2314 + mmc_do_release_host(host); 2315 + 2329 2316 if (err == -ENOSYS || !host->bus_ops->resume) { 2330 2317 /* 2331 2318 * We simply "remove" the card in this case. ··· 2349 2318 host->pm_flags = 0; 2350 2319 err = 0; 2351 2320 } 2352 - mmc_do_release_host(host); 2353 2321 } else { 2354 2322 err = -EBUSY; 2355 2323 }
+8 -4
drivers/mmc/core/mmc.c
··· 876 876 * set the notification byte in the ext_csd register of device 877 877 */ 878 878 if ((host->caps2 & MMC_CAP2_POWEROFF_NOTIFY) && 879 - (card->poweroff_notify_state == MMC_NO_POWER_NOTIFICATION)) { 879 + (card->ext_csd.rev >= 6)) { 880 880 err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, 881 881 EXT_CSD_POWER_OFF_NOTIFICATION, 882 882 EXT_CSD_POWER_ON, 883 883 card->ext_csd.generic_cmd6_time); 884 884 if (err && err != -EBADMSG) 885 885 goto free_card; 886 - } 887 886 888 - if (!err) 889 - card->poweroff_notify_state = MMC_POWERED_ON; 887 + /* 888 + * The err can be -EBADMSG or 0, 889 + * so check for success and update the flag 890 + */ 891 + if (!err) 892 + card->poweroff_notify_state = MMC_POWERED_ON; 893 + } 890 894 891 895 /* 892 896 * Activate high speed (if supported)
+1
drivers/mmc/host/mxcmmc.c
··· 732 732 "failed to config DMA channel. Falling back to PIO\n"); 733 733 dma_release_channel(host->dma); 734 734 host->do_dma = 0; 735 + host->dma = NULL; 735 736 } 736 737 } 737 738
+5 -2
drivers/mmc/host/omap_hsmmc.c
··· 1010 1010 host->data->sg_len, 1011 1011 omap_hsmmc_get_dma_dir(host, host->data)); 1012 1012 omap_free_dma(dma_ch); 1013 + host->data->host_cookie = 0; 1013 1014 } 1014 1015 host->data = NULL; 1015 1016 } ··· 1576 1575 struct mmc_data *data = mrq->data; 1577 1576 1578 1577 if (host->use_dma) { 1579 - dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len, 1580 - omap_hsmmc_get_dma_dir(host, data)); 1578 + if (data->host_cookie) 1579 + dma_unmap_sg(mmc_dev(host->mmc), data->sg, 1580 + data->sg_len, 1581 + omap_hsmmc_get_dma_dir(host, data)); 1581 1582 data->host_cookie = 0; 1582 1583 } 1583 1584 }
+1
drivers/mmc/host/sdhci-cns3xxx.c
··· 15 15 #include <linux/delay.h> 16 16 #include <linux/device.h> 17 17 #include <linux/mmc/host.h> 18 + #include <linux/module.h> 18 19 #include <mach/cns3xxx.h> 19 20 #include "sdhci-pltfm.h" 20 21
-2
drivers/mmc/host/sdhci-s3c.c
··· 644 644 static struct platform_driver sdhci_s3c_driver = { 645 645 .probe = sdhci_s3c_probe, 646 646 .remove = __devexit_p(sdhci_s3c_remove), 647 - .suspend = sdhci_s3c_suspend, 648 - .resume = sdhci_s3c_resume, 649 647 .driver = { 650 648 .owner = THIS_MODULE, 651 649 .name = "s3c-sdhci",
+1 -1
drivers/mmc/host/sh_mmcif.c
··· 908 908 if (host->power) { 909 909 pm_runtime_put(&host->pd->dev); 910 910 host->power = false; 911 - if (p->down_pwr) 911 + if (p->down_pwr && ios->power_mode == MMC_POWER_OFF) 912 912 p->down_pwr(host->pd); 913 913 } 914 914 host->state = STATE_IDLE;
+1 -1
drivers/mmc/host/tmio_mmc_pio.c
··· 798 798 /* start bus clock */ 799 799 tmio_mmc_clk_start(host); 800 800 } else if (ios->power_mode != MMC_POWER_UP) { 801 - if (host->set_pwr) 801 + if (host->set_pwr && ios->power_mode == MMC_POWER_OFF) 802 802 host->set_pwr(host->pdev, 0); 803 803 if ((pdata->flags & TMIO_MMC_HAS_COLD_CD) && 804 804 pdata->power) {
+1 -1
drivers/net/arcnet/Kconfig
··· 4 4 5 5 menuconfig ARCNET 6 6 depends on NETDEVICES && (ISA || PCI || PCMCIA) 7 - bool "ARCnet support" 7 + tristate "ARCnet support" 8 8 ---help--- 9 9 If you have a network card of this type, say Y and check out the 10 10 (arguably) beautiful poetry in
+6 -27
drivers/net/bonding/bond_main.c
··· 2553 2553 } 2554 2554 } 2555 2555 2556 - static __be32 bond_glean_dev_ip(struct net_device *dev) 2557 - { 2558 - struct in_device *idev; 2559 - struct in_ifaddr *ifa; 2560 - __be32 addr = 0; 2561 - 2562 - if (!dev) 2563 - return 0; 2564 - 2565 - rcu_read_lock(); 2566 - idev = __in_dev_get_rcu(dev); 2567 - if (!idev) 2568 - goto out; 2569 - 2570 - ifa = idev->ifa_list; 2571 - if (!ifa) 2572 - goto out; 2573 - 2574 - addr = ifa->ifa_local; 2575 - out: 2576 - rcu_read_unlock(); 2577 - return addr; 2578 - } 2579 - 2580 2556 static int bond_has_this_ip(struct bonding *bond, __be32 ip) 2581 2557 { 2582 2558 struct vlan_entry *vlan; ··· 3298 3322 struct bonding *bond; 3299 3323 struct vlan_entry *vlan; 3300 3324 3325 + /* we only care about primary address */ 3326 + if(ifa->ifa_flags & IFA_F_SECONDARY) 3327 + return NOTIFY_DONE; 3328 + 3301 3329 list_for_each_entry(bond, &bn->dev_list, bond_list) { 3302 3330 if (bond->dev == event_dev) { 3303 3331 switch (event) { ··· 3309 3329 bond->master_ip = ifa->ifa_local; 3310 3330 return NOTIFY_OK; 3311 3331 case NETDEV_DOWN: 3312 - bond->master_ip = bond_glean_dev_ip(bond->dev); 3332 + bond->master_ip = 0; 3313 3333 return NOTIFY_OK; 3314 3334 default: 3315 3335 return NOTIFY_DONE; ··· 3325 3345 vlan->vlan_ip = ifa->ifa_local; 3326 3346 return NOTIFY_OK; 3327 3347 case NETDEV_DOWN: 3328 - vlan->vlan_ip = 3329 - bond_glean_dev_ip(vlan_dev); 3348 + vlan->vlan_ip = 0; 3330 3349 return NOTIFY_OK; 3331 3350 default: 3332 3351 return NOTIFY_DONE;
-1
drivers/net/can/sja1000/peak_pci.c
··· 20 20 */ 21 21 22 22 #include <linux/kernel.h> 23 - #include <linux/version.h> 24 23 #include <linux/module.h> 25 24 #include <linux/interrupt.h> 26 25 #include <linux/netdevice.h>
+1 -1
drivers/net/ethernet/broadcom/b44.c
··· 608 608 skb->len, 609 609 DMA_TO_DEVICE); 610 610 rp->skb = NULL; 611 - dev_kfree_skb(skb); 611 + dev_kfree_skb_irq(skb); 612 612 } 613 613 614 614 bp->tx_cons = cons;
+38 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c
··· 10327 10327 return 0; 10328 10328 } 10329 10329 10330 + 10331 + static void bnx2x_5461x_set_link_led(struct bnx2x_phy *phy, 10332 + struct link_params *params, u8 mode) 10333 + { 10334 + struct bnx2x *bp = params->bp; 10335 + u16 temp; 10336 + 10337 + bnx2x_cl22_write(bp, phy, 10338 + MDIO_REG_GPHY_SHADOW, 10339 + MDIO_REG_GPHY_SHADOW_LED_SEL1); 10340 + bnx2x_cl22_read(bp, phy, 10341 + MDIO_REG_GPHY_SHADOW, 10342 + &temp); 10343 + temp &= 0xff00; 10344 + 10345 + DP(NETIF_MSG_LINK, "54618x set link led (mode=%x)\n", mode); 10346 + switch (mode) { 10347 + case LED_MODE_FRONT_PANEL_OFF: 10348 + case LED_MODE_OFF: 10349 + temp |= 0x00ee; 10350 + break; 10351 + case LED_MODE_OPER: 10352 + temp |= 0x0001; 10353 + break; 10354 + case LED_MODE_ON: 10355 + temp |= 0x00ff; 10356 + break; 10357 + default: 10358 + break; 10359 + } 10360 + bnx2x_cl22_write(bp, phy, 10361 + MDIO_REG_GPHY_SHADOW, 10362 + MDIO_REG_GPHY_SHADOW_WR_ENA | temp); 10363 + return; 10364 + } 10365 + 10366 + 10330 10367 static void bnx2x_54618se_link_reset(struct bnx2x_phy *phy, 10331 10368 struct link_params *params) 10332 10369 { ··· 11140 11103 .config_loopback = (config_loopback_t)bnx2x_54618se_config_loopback, 11141 11104 .format_fw_ver = (format_fw_ver_t)NULL, 11142 11105 .hw_reset = (hw_reset_t)NULL, 11143 - .set_link_led = (set_link_led_t)NULL, 11106 + .set_link_led = (set_link_led_t)bnx2x_5461x_set_link_led, 11144 11107 .phy_specific_func = (phy_specific_func_t)NULL 11145 11108 }; 11146 11109 /*****************************************************************/
+1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
··· 6990 6990 #define MDIO_REG_INTR_MASK 0x1b 6991 6991 #define MDIO_REG_INTR_MASK_LINK_STATUS (0x1 << 1) 6992 6992 #define MDIO_REG_GPHY_SHADOW 0x1c 6993 + #define MDIO_REG_GPHY_SHADOW_LED_SEL1 (0x0d << 10) 6993 6994 #define MDIO_REG_GPHY_SHADOW_LED_SEL2 (0x0e << 10) 6994 6995 #define MDIO_REG_GPHY_SHADOW_WR_ENA (0x1 << 15) 6995 6996 #define MDIO_REG_GPHY_SHADOW_AUTO_DET_MED (0x1e << 10)
+1 -1
drivers/net/ethernet/davicom/dm9000.c
··· 613 613 614 614 if (!dm->wake_state) 615 615 irq_set_irq_wake(dm->irq_wake, 1); 616 - else if (dm->wake_state & !opts) 616 + else if (dm->wake_state && !opts) 617 617 irq_set_irq_wake(dm->irq_wake, 0); 618 618 } 619 619
+1
drivers/net/ethernet/freescale/Kconfig
··· 24 24 bool "FEC ethernet controller (of ColdFire and some i.MX CPUs)" 25 25 depends on (M523x || M527x || M5272 || M528x || M520x || M532x || \ 26 26 ARCH_MXC || ARCH_MXS) 27 + default ARCH_MXC || ARCH_MXS if ARM 27 28 select PHYLIB 28 29 ---help--- 29 30 Say Y here if you want to use the built-in 10/100 Fast ethernet
+7 -4
drivers/net/ethernet/freescale/fec.c
··· 232 232 struct platform_device *pdev; 233 233 234 234 int opened; 235 + int dev_id; 235 236 236 237 /* Phylib and MDIO interface */ 237 238 struct mii_bus *mii_bus; ··· 838 837 839 838 /* Adjust MAC if using macaddr */ 840 839 if (iap == macaddr) 841 - ndev->dev_addr[ETH_ALEN-1] = macaddr[ETH_ALEN-1] + fep->pdev->id; 840 + ndev->dev_addr[ETH_ALEN-1] = macaddr[ETH_ALEN-1] + fep->dev_id; 842 841 } 843 842 844 843 /* ------------------------------------------------------------------------- */ ··· 954 953 char mdio_bus_id[MII_BUS_ID_SIZE]; 955 954 char phy_name[MII_BUS_ID_SIZE + 3]; 956 955 int phy_id; 957 - int dev_id = fep->pdev->id; 956 + int dev_id = fep->dev_id; 958 957 959 958 fep->phy_dev = NULL; 960 959 ··· 1032 1031 * mdio interface in board design, and need to be configured by 1033 1032 * fec0 mii_bus. 1034 1033 */ 1035 - if ((id_entry->driver_data & FEC_QUIRK_ENET_MAC) && pdev->id > 0) { 1034 + if ((id_entry->driver_data & FEC_QUIRK_ENET_MAC) && fep->dev_id > 0) { 1036 1035 /* fec1 uses fec0 mii_bus */ 1037 1036 fep->mii_bus = fec0_mii_bus; 1038 1037 return 0; ··· 1064 1063 fep->mii_bus->read = fec_enet_mdio_read; 1065 1064 fep->mii_bus->write = fec_enet_mdio_write; 1066 1065 fep->mii_bus->reset = fec_enet_mdio_reset; 1067 - snprintf(fep->mii_bus->id, MII_BUS_ID_SIZE, "%x", pdev->id + 1); 1066 + snprintf(fep->mii_bus->id, MII_BUS_ID_SIZE, "%x", fep->dev_id + 1); 1068 1067 fep->mii_bus->priv = fep; 1069 1068 fep->mii_bus->parent = &pdev->dev; 1070 1069 ··· 1522 1521 int i, irq, ret = 0; 1523 1522 struct resource *r; 1524 1523 const struct of_device_id *of_id; 1524 + static int dev_id; 1525 1525 1526 1526 of_id = of_match_device(fec_dt_ids, &pdev->dev); 1527 1527 if (of_id) ··· 1550 1548 1551 1549 fep->hwp = ioremap(r->start, resource_size(r)); 1552 1550 fep->pdev = pdev; 1551 + fep->dev_id = dev_id++; 1553 1552 1554 1553 if (!fep->hwp) { 1555 1554 ret = -ENOMEM;
+8 -45
drivers/net/ethernet/freescale/fsl_pq_mdio.c
··· 183 183 } 184 184 EXPORT_SYMBOL_GPL(fsl_pq_mdio_bus_name); 185 185 186 - /* Scan the bus in reverse, looking for an empty spot */ 187 - static int fsl_pq_mdio_find_free(struct mii_bus *new_bus) 188 - { 189 - int i; 190 186 191 - for (i = PHY_MAX_ADDR; i > 0; i--) { 192 - u32 phy_id; 193 - 194 - if (get_phy_id(new_bus, i, &phy_id)) 195 - return -1; 196 - 197 - if (phy_id == 0xffffffff) 198 - break; 199 - } 200 - 201 - return i; 202 - } 203 - 204 - 205 - #if defined(CONFIG_GIANFAR) || defined(CONFIG_GIANFAR_MODULE) 206 187 static u32 __iomem *get_gfar_tbipa(struct fsl_pq_mdio __iomem *regs, struct device_node *np) 207 188 { 189 + #if defined(CONFIG_GIANFAR) || defined(CONFIG_GIANFAR_MODULE) 208 190 struct gfar __iomem *enet_regs; 209 191 210 192 /* ··· 202 220 } else if (of_device_is_compatible(np, "fsl,etsec2-mdio") || 203 221 of_device_is_compatible(np, "fsl,etsec2-tbi")) { 204 222 return of_iomap(np, 1); 205 - } else 206 - return NULL; 207 - } 223 + } 208 224 #endif 225 + return NULL; 226 + } 209 227 210 228 211 - #if defined(CONFIG_UCC_GETH) || defined(CONFIG_UCC_GETH_MODULE) 212 229 static int get_ucc_id_for_range(u64 start, u64 end, u32 *ucc_id) 213 230 { 231 + #if defined(CONFIG_UCC_GETH) || defined(CONFIG_UCC_GETH_MODULE) 214 232 struct device_node *np = NULL; 215 233 int err = 0; 216 234 ··· 243 261 return err; 244 262 else 245 263 return -EINVAL; 246 - } 264 + #else 265 + return -ENODEV; 247 266 #endif 267 + } 249 268 250 269 static int fsl_pq_mdio_probe(struct platform_device *ofdev) 251 270 { ··· 322 339 of_device_is_compatible(np, "fsl,etsec2-mdio") || 323 340 of_device_is_compatible(np, "fsl,etsec2-tbi") || 324 341 of_device_is_compatible(np, "gianfar")) { 325 - #if defined(CONFIG_GIANFAR) || defined(CONFIG_GIANFAR_MODULE) 326 342 tbipa = get_gfar_tbipa(regs, np); 327 343 if (!tbipa) { 328 344 err = -EINVAL; 329 345 goto err_free_irqs; 330 346 } 331 - #else 332 - err = -ENODEV; 333 - goto err_free_irqs; 334 - #endif 335 347 } else if (of_device_is_compatible(np, "fsl,ucc-mdio") || 336 348 of_device_is_compatible(np, "ucc_geth_phy")) { 337 - #if defined(CONFIG_UCC_GETH) || defined(CONFIG_UCC_GETH_MODULE) 338 349 u32 id; 339 350 static u32 mii_mng_master; 340 351 ··· 341 364 mii_mng_master = id; 342 365 ucc_set_qe_mux_mii_mng(id - 1); 343 366 } 344 - #else 345 - err = -ENODEV; 346 - goto err_free_irqs; 347 - #endif 348 367 } else { 349 368 err = -ENODEV; 350 369 goto err_free_irqs; ··· 359 386 } 360 387 361 388 if (tbiaddr == -1) { 362 - out_be32(tbipa, 0); 363 - 364 - tbiaddr = fsl_pq_mdio_find_free(new_bus); 365 - } 366 - 367 - /* 368 - * We define TBIPA at 0 to be illegal, opting to fail for boards that 369 - * have PHYs at 1-31, rather than change tbipa and rescan. 370 - */ 371 - if (tbiaddr == 0) { 372 389 err = -EBUSY; 373 390 374 391 goto err_free_irqs;
+2 -2
drivers/net/ethernet/ibm/ehea/ehea.h
··· 61 61 #ifdef EHEA_SMALL_QUEUES 62 62 #define EHEA_MAX_CQE_COUNT 1023 63 63 #define EHEA_DEF_ENTRIES_SQ 1023 64 - #define EHEA_DEF_ENTRIES_RQ1 4095 64 + #define EHEA_DEF_ENTRIES_RQ1 1023 65 65 #define EHEA_DEF_ENTRIES_RQ2 1023 66 - #define EHEA_DEF_ENTRIES_RQ3 1023 66 + #define EHEA_DEF_ENTRIES_RQ3 511 67 67 #else 68 68 #define EHEA_MAX_CQE_COUNT 4080 69 69 #define EHEA_DEF_ENTRIES_SQ 4080
+4 -2
drivers/net/ethernet/ibm/ehea/ehea_main.c
··· 371 371 out_herr: 372 372 free_page((unsigned long)cb2); 373 373 resched: 374 - schedule_delayed_work(&port->stats_work, msecs_to_jiffies(1000)); 374 + schedule_delayed_work(&port->stats_work, 375 + round_jiffies_relative(msecs_to_jiffies(1000))); 375 376 } 376 377 377 378 static void ehea_refill_rq1(struct ehea_port_res *pr, int index, int nr_of_wqes) ··· 2435 2434 } 2436 2435 2437 2436 mutex_unlock(&port->port_lock); 2438 - schedule_delayed_work(&port->stats_work, msecs_to_jiffies(1000)); 2437 + schedule_delayed_work(&port->stats_work, 2438 + round_jiffies_relative(msecs_to_jiffies(1000))); 2439 2439 2440 2440 return ret; 2441 2441 }
+1 -1
drivers/net/ethernet/ibm/iseries_veth.c
··· 1421 1421 1422 1422 /* FIXME: do we need this? */ 1423 1423 memset(local_list, 0, sizeof(local_list)); 1424 - memset(remote_list, 0, sizeof(VETH_MAX_FRAMES_PER_MSG)); 1424 + memset(remote_list, 0, sizeof(remote_list)); 1425 1425 1426 1426 /* a 0 address marks the end of the valid entries */ 1427 1427 if (senddata->addr[startchunk] == 0)
+110 -3
drivers/net/ethernet/jme.c
··· 1745 1745 } 1746 1746 1747 1747 static int 1748 + jme_phy_specreg_read(struct jme_adapter *jme, u32 specreg) 1749 + { 1750 + u32 phy_addr; 1751 + 1752 + phy_addr = JM_PHY_SPEC_REG_READ | specreg; 1753 + jme_mdio_write(jme->dev, jme->mii_if.phy_id, JM_PHY_SPEC_ADDR_REG, 1754 + phy_addr); 1755 + return jme_mdio_read(jme->dev, jme->mii_if.phy_id, 1756 + JM_PHY_SPEC_DATA_REG); 1757 + } 1758 + 1759 + static void 1760 + jme_phy_specreg_write(struct jme_adapter *jme, u32 ext_reg, u32 phy_data) 1761 + { 1762 + u32 phy_addr; 1763 + 1764 + phy_addr = JM_PHY_SPEC_REG_WRITE | ext_reg; 1765 + jme_mdio_write(jme->dev, jme->mii_if.phy_id, JM_PHY_SPEC_DATA_REG, 1766 + phy_data); 1767 + jme_mdio_write(jme->dev, jme->mii_if.phy_id, JM_PHY_SPEC_ADDR_REG, 1768 + phy_addr); 1769 + } 1770 + 1771 + static int 1772 + jme_phy_calibration(struct jme_adapter *jme) 1773 + { 1774 + u32 ctrl1000, phy_data; 1775 + 1776 + jme_phy_off(jme); 1777 + jme_phy_on(jme); 1778 + /* Enabel PHY test mode 1 */ 1779 + ctrl1000 = jme_mdio_read(jme->dev, jme->mii_if.phy_id, MII_CTRL1000); 1780 + ctrl1000 &= ~PHY_GAD_TEST_MODE_MSK; 1781 + ctrl1000 |= PHY_GAD_TEST_MODE_1; 1782 + jme_mdio_write(jme->dev, jme->mii_if.phy_id, MII_CTRL1000, ctrl1000); 1783 + 1784 + phy_data = jme_phy_specreg_read(jme, JM_PHY_EXT_COMM_2_REG); 1785 + phy_data &= ~JM_PHY_EXT_COMM_2_CALI_MODE_0; 1786 + phy_data |= JM_PHY_EXT_COMM_2_CALI_LATCH | 1787 + JM_PHY_EXT_COMM_2_CALI_ENABLE; 1788 + jme_phy_specreg_write(jme, JM_PHY_EXT_COMM_2_REG, phy_data); 1789 + msleep(20); 1790 + phy_data = jme_phy_specreg_read(jme, JM_PHY_EXT_COMM_2_REG); 1791 + phy_data &= ~(JM_PHY_EXT_COMM_2_CALI_ENABLE | 1792 + JM_PHY_EXT_COMM_2_CALI_MODE_0 | 1793 + JM_PHY_EXT_COMM_2_CALI_LATCH); 1794 + jme_phy_specreg_write(jme, JM_PHY_EXT_COMM_2_REG, phy_data); 1795 + 1796 + /* Disable PHY test mode */ 1797 + ctrl1000 = jme_mdio_read(jme->dev, jme->mii_if.phy_id, MII_CTRL1000); 1798 + ctrl1000 &= ~PHY_GAD_TEST_MODE_MSK; 1799 + jme_mdio_write(jme->dev, jme->mii_if.phy_id, MII_CTRL1000, ctrl1000); 1800 + return 0; 1801 + } 1802 + 1803 + static int 1804 + jme_phy_setEA(struct jme_adapter *jme) 1805 + { 1806 + u32 phy_comm0 = 0, phy_comm1 = 0; 1807 + u8 nic_ctrl; 1808 + 1809 + pci_read_config_byte(jme->pdev, PCI_PRIV_SHARE_NICCTRL, &nic_ctrl); 1810 + if ((nic_ctrl & 0x3) == JME_FLAG_PHYEA_ENABLE) 1811 + return 0; 1812 + 1813 + switch (jme->pdev->device) { 1814 + case PCI_DEVICE_ID_JMICRON_JMC250: 1815 + if (((jme->chip_main_rev == 5) && 1816 + ((jme->chip_sub_rev == 0) || (jme->chip_sub_rev == 1) || 1817 + (jme->chip_sub_rev == 3))) || 1818 + (jme->chip_main_rev >= 6)) { 1819 + phy_comm0 = 0x008A; 1820 + phy_comm1 = 0x4109; 1821 + } 1822 + if ((jme->chip_main_rev == 3) && 1823 + ((jme->chip_sub_rev == 1) || (jme->chip_sub_rev == 2))) 1824 + phy_comm0 = 0xE088; 1825 + break; 1826 + case PCI_DEVICE_ID_JMICRON_JMC260: 1827 + if (((jme->chip_main_rev == 5) && 1828 + ((jme->chip_sub_rev == 0) || (jme->chip_sub_rev == 1) || 1829 + (jme->chip_sub_rev == 3))) || 1830 + (jme->chip_main_rev >= 6)) { 1831 + phy_comm0 = 0x008A; 1832 + phy_comm1 = 0x4109; 1833 + } 1834 + if ((jme->chip_main_rev == 3) && 1835 + ((jme->chip_sub_rev == 1) || (jme->chip_sub_rev == 2))) 1836 + phy_comm0 = 0xE088; 1837 + if ((jme->chip_main_rev == 2) && (jme->chip_sub_rev == 0)) 1838 + phy_comm0 = 0x608A; 1839 + if ((jme->chip_main_rev == 2) && (jme->chip_sub_rev == 2)) 1840 + phy_comm0 = 0x408A; 1841 + break; 1842 + default: 1843 + return -ENODEV; 1844 + } 1845 + if (phy_comm0) 1846 + jme_phy_specreg_write(jme, JM_PHY_EXT_COMM_0_REG, phy_comm0); 1847 + if (phy_comm1) 1848 + jme_phy_specreg_write(jme, JM_PHY_EXT_COMM_1_REG, phy_comm1); 1849 + 1850 + return 0; 1851 + } 1852 + 1853 + static int 1748 1854 jme_open(struct net_device *netdev) 1749 1855 { 1750 1856 struct jme_adapter *jme = netdev_priv(netdev); ··· 1875 1769 jme_set_settings(netdev, &jme->old_ecmd); 1876 1770 else 1877 1771 jme_reset_phy_processor(jme); 1878 - 1772 + jme_phy_calibration(jme); 1773 + jme_phy_setEA(jme); 1879 1774 jme_reset_link(jme); 1880 1775 1881 1776 return 0; ··· 3291 3184 jme_set_settings(netdev, &jme->old_ecmd); 3292 3185 else 3293 3186 jme_reset_phy_processor(jme); 3294 - 3187 + jme_phy_calibration(jme); 3188 + jme_phy_setEA(jme); 3295 3189 jme_start_irq(jme); 3296 3190 netif_device_attach(netdev); 3297 3191 ··· 3347 3239 MODULE_LICENSE("GPL"); 3348 3240 MODULE_VERSION(DRV_VERSION); 3349 3241 MODULE_DEVICE_TABLE(pci, jme_pci_tbl); 3350 -
+19
drivers/net/ethernet/jme.h
··· 760 760 RXMCS_CHECKSUM, 761 761 }; 762 762 763 + /* Extern PHY common register 2 */ 764 + 765 + #define PHY_GAD_TEST_MODE_1 0x00002000 766 + #define PHY_GAD_TEST_MODE_MSK 0x0000E000 767 + #define JM_PHY_SPEC_REG_READ 0x00004000 768 + #define JM_PHY_SPEC_REG_WRITE 0x00008000 769 + #define PHY_CALIBRATION_DELAY 20 770 + #define JM_PHY_SPEC_ADDR_REG 0x1E 771 + #define JM_PHY_SPEC_DATA_REG 0x1F 772 + 773 + #define JM_PHY_EXT_COMM_0_REG 0x30 774 + #define JM_PHY_EXT_COMM_1_REG 0x31 775 + #define JM_PHY_EXT_COMM_2_REG 0x32 776 + #define JM_PHY_EXT_COMM_2_CALI_ENABLE 0x01 777 + #define JM_PHY_EXT_COMM_2_CALI_MODE_0 0x02 778 + #define JM_PHY_EXT_COMM_2_CALI_LATCH 0x10 779 + #define PCI_PRIV_SHARE_NICCTRL 0xF5 780 + #define JME_FLAG_PHYEA_ENABLE 0x2 781 + 763 782 /* 764 783 * Wakeup Frame setup interface registers 765 784 */
+2 -1
drivers/net/ethernet/pasemi/Makefile
··· 2 2 # Makefile for the A Semi network device drivers. 3 3 # 4 4 5 - obj-$(CONFIG_PASEMI_MAC) += pasemi_mac.o pasemi_mac_ethtool.o 5 + obj-$(CONFIG_PASEMI_MAC) += pasemi_mac_driver.o 6 + pasemi_mac_driver-objs := pasemi_mac.o pasemi_mac_ethtool.o
+3 -5
drivers/net/ethernet/qlogic/qlge/qlge.h
··· 58 58 59 59 60 60 #define TX_DESC_PER_IOCB 8 61 - /* The maximum number of frags we handle is based 62 - * on PAGE_SIZE... 63 - */ 64 - #if (PAGE_SHIFT == 12) || (PAGE_SHIFT == 13) /* 4k & 8k pages */ 61 + 62 + #if ((MAX_SKB_FRAGS - TX_DESC_PER_IOCB) + 2) > 0 65 63 #define TX_DESC_PER_OAL ((MAX_SKB_FRAGS - TX_DESC_PER_IOCB) + 2) 66 64 #else /* all other page sizes */ 67 65 #define TX_DESC_PER_OAL 0 ··· 1351 1353 struct ob_mac_iocb_req *queue_entry; 1352 1354 u32 index; 1353 1355 struct oal oal; 1354 - struct map_list map[MAX_SKB_FRAGS + 1]; 1356 + struct map_list map[MAX_SKB_FRAGS + 2]; 1355 1357 int map_cnt; 1356 1358 struct tx_ring_desc *next; 1357 1359 };
+20 -33
drivers/net/ethernet/realtek/r8169.c
···
1183 1183 	return value;
1184 1184 }
1185 1185 
1186 - static void rtl8169_irq_mask_and_ack(void __iomem *ioaddr)
1186 + static void rtl8169_irq_mask_and_ack(struct rtl8169_private *tp)
1187 1187 {
1188 - 	RTL_W16(IntrMask, 0x0000);
1188 + 	void __iomem *ioaddr = tp->mmio_addr;
1189 1189 
1190 - 	RTL_W16(IntrStatus, 0xffff);
1190 + 	RTL_W16(IntrMask, 0x0000);
1191 + 	RTL_W16(IntrStatus, tp->intr_event);
1192 + 	RTL_R8(ChipCmd);
1191 1193 }
1192 1194 
1193 1195 static unsigned int rtl8169_tbi_reset_pending(struct rtl8169_private *tp)
···
3935 3933 			break;
3936 3934 		udelay(100);
3937 3935 	}
3938 - 
3939 - 	rtl8169_init_ring_indexes(tp);
3940 3936 }
3941 3937 
3942 3938 static int __devinit
···
4339 4339 	void __iomem *ioaddr = tp->mmio_addr;
4340 4340 
4341 4341 	/* Disable interrupts */
4342 - 	rtl8169_irq_mask_and_ack(ioaddr);
4342 + 	rtl8169_irq_mask_and_ack(tp);
4343 4343 
4344 4344 	rtl_rx_close(tp);
4345 4345 
···
4885 4885 	RTL_W16(IntrMitigate, 0x5151);
4886 4886 
4887 4887 	/* Work around for RxFIFO overflow. */
4888 - 	if (tp->mac_version == RTL_GIGA_MAC_VER_11 ||
4889 - 	    tp->mac_version == RTL_GIGA_MAC_VER_22) {
4888 + 	if (tp->mac_version == RTL_GIGA_MAC_VER_11) {
4890 4889 		tp->intr_event |= RxFIFOOver | PCSTimeout;
4891 4890 		tp->intr_event &= ~RxOverflow;
4892 4891 	}
···
5074 5075 	struct rtl8169_private *tp = netdev_priv(dev);
5075 5076 	void __iomem *ioaddr = tp->mmio_addr;
5076 5077 	struct pci_dev *pdev = tp->pci_dev;
5078 + 
5079 + 	if (tp->mac_version >= RTL_GIGA_MAC_VER_30) {
5080 + 		tp->intr_event &= ~RxFIFOOver;
5081 + 		tp->napi_event &= ~RxFIFOOver;
5082 + 	}
5077 5083 
5078 5084 	if (tp->mac_version == RTL_GIGA_MAC_VER_13 ||
5079 5085 	    tp->mac_version == RTL_GIGA_MAC_VER_16) {
···
5346 5342 	/* Wait for any pending NAPI task to complete */
5347 5343 	napi_disable(&tp->napi);
5348 5344 
5349 - 	rtl8169_irq_mask_and_ack(ioaddr);
5345 + 	rtl8169_irq_mask_and_ack(tp);
5350 5346 
5351 5347 	tp->intr_mask = 0xffff;
5352 5348 	RTL_W16(IntrMask, tp->intr_event);
···
5393 5389 	if (!netif_running(dev))
5394 5390 		goto out_unlock;
5395 5391 
5392 + 	rtl8169_hw_reset(tp);
5393 + 
5396 5394 	rtl8169_wait_for_quiescence(dev);
5397 5395 
5398 5396 	for (i = 0; i < NUM_RX_DESC; i++)
5399 5397 		rtl8169_mark_to_asic(tp->RxDescArray + i, rx_buf_sz);
5400 5398 
5401 5399 	rtl8169_tx_clear(tp);
5400 + 	rtl8169_init_ring_indexes(tp);
5402 5401 
5403 - 	rtl8169_hw_reset(tp);
5404 5402 	rtl_hw_start(dev);
5405 5403 	netif_wake_queue(dev);
5406 5404 	rtl8169_check_link_status(dev, tp, tp->mmio_addr);
···
5413 5407 
5414 5408 static void rtl8169_tx_timeout(struct net_device *dev)
5415 5409 {
5416 - 	struct rtl8169_private *tp = netdev_priv(dev);
5417 - 
5418 - 	rtl8169_hw_reset(tp);
5419 - 
5420 - 	/* Let's wait a bit while any (async) irq lands on */
5421 5410 	rtl8169_schedule_work(dev, rtl8169_reset_task);
5422 5411 }
5423 5412 
···
5805 5804 	 */
5806 5805 	status = RTL_R16(IntrStatus);
5807 5806 	while (status && status != 0xffff) {
5807 + 		status &= tp->intr_event;
5808 + 		if (!status)
5809 + 			break;
5810 + 
5808 5811 		handled = 1;
5809 5812 
5810 5813 		/* Handle all of the error cases first. These will reset
···
5823 5818 		switch (tp->mac_version) {
5824 5819 		/* Work around for rx fifo overflow */
5825 5820 		case RTL_GIGA_MAC_VER_11:
5826 - 		case RTL_GIGA_MAC_VER_22:
5827 - 		case RTL_GIGA_MAC_VER_26:
5828 5821 			netif_stop_queue(dev);
5829 5822 			rtl8169_tx_timeout(dev);
5830 5823 			goto done;
5831 - 		/* Testers needed. */
5832 - 		case RTL_GIGA_MAC_VER_17:
5833 - 		case RTL_GIGA_MAC_VER_19:
5834 - 		case RTL_GIGA_MAC_VER_20:
5835 - 		case RTL_GIGA_MAC_VER_21:
5836 - 		case RTL_GIGA_MAC_VER_23:
5837 - 		case RTL_GIGA_MAC_VER_24:
5838 - 		case RTL_GIGA_MAC_VER_27:
5839 - 		case RTL_GIGA_MAC_VER_28:
5840 - 		case RTL_GIGA_MAC_VER_31:
5841 - 		/* Experimental science. Pktgen proof. */
5842 - 		case RTL_GIGA_MAC_VER_12:
5843 - 		case RTL_GIGA_MAC_VER_25:
5844 - 			if (status == RxFIFOOver)
5845 - 				goto done;
5846 - 			break;
5847 5824 		default:
5848 5825 			break;
5849 5826 		}
+9 -5
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 781 781 unsigned int mode = MMC_CNTRL_RESET_ON_READ | MMC_CNTRL_COUNTER_RESET | 782 782 MMC_CNTRL_PRESET | MMC_CNTRL_FULL_HALF_PRESET; 783 783 784 - /* Do not manage MMC IRQ (FIXME) */ 784 + /* Mask MMC irq, counters are managed in SW and registers 785 + * are cleared on each READ eventually. */ 785 786 dwmac_mmc_intr_all_mask(priv->ioaddr); 786 - dwmac_mmc_ctrl(priv->ioaddr, mode); 787 - memset(&priv->mmc, 0, sizeof(struct stmmac_counters)); 787 + 788 + if (priv->dma_cap.rmon) { 789 + dwmac_mmc_ctrl(priv->ioaddr, mode); 790 + memset(&priv->mmc, 0, sizeof(struct stmmac_counters)); 791 + } else 792 + pr_info(" No MAC Management Counters available"); 788 793 } 789 794 790 795 static u32 stmmac_get_synopsys_id(struct stmmac_priv *priv) ··· 1017 1012 memset(&priv->xstats, 0, sizeof(struct stmmac_extra_stats)); 1018 1013 priv->xstats.threshold = tc; 1019 1014 1020 - if (priv->dma_cap.rmon) 1021 - stmmac_mmc_setup(priv); 1015 + stmmac_mmc_setup(priv); 1022 1016 1023 1017 /* Start the ball rolling... */ 1024 1018 DBG(probe, DEBUG, "%s: DMA RX/TX processes started...\n", dev->name);
+4 -4
drivers/net/ethernet/tile/tilepro.c
··· 926 926 goto done; 927 927 928 928 /* Re-enable the ingress interrupt. */ 929 - enable_percpu_irq(priv->intr_id); 929 + enable_percpu_irq(priv->intr_id, 0); 930 930 931 931 /* HACK: Avoid the "rotting packet" problem (see above). */ 932 932 if (qup->__packet_receive_read != ··· 1296 1296 info->napi_enabled = true; 1297 1297 1298 1298 /* Enable the ingress interrupt. */ 1299 - enable_percpu_irq(priv->intr_id); 1299 + enable_percpu_irq(priv->intr_id, 0); 1300 1300 } 1301 1301 1302 1302 ··· 1697 1697 for (i = 0; i < sh->nr_frags; i++) { 1698 1698 1699 1699 skb_frag_t *f = &sh->frags[i]; 1700 - unsigned long pfn = page_to_pfn(f->page); 1700 + unsigned long pfn = page_to_pfn(skb_frag_page(f)); 1701 1701 1702 1702 /* FIXME: Compute "hash_for_home" properly. */ 1703 1703 /* ISSUE: The hypervisor checks CHIP_HAS_REV1_DMA_PACKETS(). */ ··· 1706 1706 /* FIXME: Hmmm. */ 1707 1707 if (!hash_default) { 1708 1708 void *va = pfn_to_kaddr(pfn) + f->page_offset; 1709 - BUG_ON(PageHighMem(f->page)); 1709 + BUG_ON(PageHighMem(skb_frag_page(f))); 1710 1710 finv_buffer_remote(va, f->size, 0); 1711 1711 } 1712 1712
+1 -1
drivers/net/phy/Kconfig
···
3 3 #
4 4 
5 5 menuconfig PHYLIB
6 - 	bool "PHY Device support and infrastructure"
6 + 	tristate "PHY Device support and infrastructure"
7 7 	depends on !S390
8 8 	depends on NETDEVICES
9 9 	help
+1 -3
drivers/net/ppp/pptp.c
···
423 423 	lock_sock(sk);
424 424 
425 425 	opt->src_addr = sp->sa_addr.pptp;
426 - 	if (add_chan(po)) {
427 - 		release_sock(sk);
426 + 	if (add_chan(po))
428 427 		error = -EBUSY;
429 - 	}
430 428 
431 429 	release_sock(sk);
432 430 	return error;
+2 -1
drivers/net/wireless/ath/ath9k/hw.c
···
1827 1827 	}
1828 1828 
1829 1829 	/* Clear Bit 14 of AR_WA after putting chip into Full Sleep mode. */
1830 - 	REG_WRITE(ah, AR_WA, ah->WARegVal & ~AR_WA_D3_L1_DISABLE);
1830 + 	if (AR_SREV_9300_20_OR_LATER(ah))
1831 + 		REG_WRITE(ah, AR_WA, ah->WARegVal & ~AR_WA_D3_L1_DISABLE);
1831 1832 }
1832 1833 
1833 1834 /*
+1 -1
drivers/net/wireless/ath/ath9k/main.c
··· 286 286 ath_start_ani(common); 287 287 } 288 288 289 - if (ath9k_hw_ops(ah)->antdiv_comb_conf_get && sc->ant_rx != 3) { 289 + if ((ah->caps.hw_caps & ATH9K_HW_CAP_ANT_DIV_COMB) && sc->ant_rx != 3) { 290 290 struct ath_hw_antcomb_conf div_ant_conf; 291 291 u8 lna_conf; 292 292
+1
drivers/net/wireless/iwlwifi/iwl-1000.c
··· 191 191 .chain_noise_scale = 1000, 192 192 .wd_timeout = IWL_DEF_WD_TIMEOUT, 193 193 .max_event_log_size = 128, 194 + .wd_disable = true, 194 195 }; 195 196 static struct iwl_ht_params iwl1000_ht_params = { 196 197 .ht_greenfield_support = true,
+1
drivers/net/wireless/iwlwifi/iwl-5000.c
··· 364 364 .wd_timeout = IWL_LONG_WD_TIMEOUT, 365 365 .max_event_log_size = 512, 366 366 .no_idle_support = true, 367 + .wd_disable = true, 367 368 }; 368 369 static struct iwl_ht_params iwl5000_ht_params = { 369 370 .ht_greenfield_support = true,
+23 -13
drivers/net/wireless/iwlwifi/iwl-agn-rxon.c
··· 528 528 return 0; 529 529 } 530 530 531 + void iwlagn_config_ht40(struct ieee80211_conf *conf, 532 + struct iwl_rxon_context *ctx) 533 + { 534 + if (conf_is_ht40_minus(conf)) { 535 + ctx->ht.extension_chan_offset = 536 + IEEE80211_HT_PARAM_CHA_SEC_BELOW; 537 + ctx->ht.is_40mhz = true; 538 + } else if (conf_is_ht40_plus(conf)) { 539 + ctx->ht.extension_chan_offset = 540 + IEEE80211_HT_PARAM_CHA_SEC_ABOVE; 541 + ctx->ht.is_40mhz = true; 542 + } else { 543 + ctx->ht.extension_chan_offset = 544 + IEEE80211_HT_PARAM_CHA_SEC_NONE; 545 + ctx->ht.is_40mhz = false; 546 + } 547 + } 548 + 531 549 int iwlagn_mac_config(struct ieee80211_hw *hw, u32 changed) 532 550 { 533 551 struct iwl_priv *priv = hw->priv; ··· 604 586 ctx->ht.enabled = conf_is_ht(conf); 605 587 606 588 if (ctx->ht.enabled) { 607 - if (conf_is_ht40_minus(conf)) { 608 - ctx->ht.extension_chan_offset = 609 - IEEE80211_HT_PARAM_CHA_SEC_BELOW; 610 - ctx->ht.is_40mhz = true; 611 - } else if (conf_is_ht40_plus(conf)) { 612 - ctx->ht.extension_chan_offset = 613 - IEEE80211_HT_PARAM_CHA_SEC_ABOVE; 614 - ctx->ht.is_40mhz = true; 615 - } else { 616 - ctx->ht.extension_chan_offset = 617 - IEEE80211_HT_PARAM_CHA_SEC_NONE; 618 - ctx->ht.is_40mhz = false; 619 - } 589 + /* if HT40 is used, it should not change 590 + * after associated except channel switch */ 591 + if (iwl_is_associated_ctx(ctx) && 592 + !ctx->ht.is_40mhz) 593 + iwlagn_config_ht40(conf, ctx); 620 594 } else 621 595 ctx->ht.is_40mhz = false; 622 596
-5
drivers/net/wireless/iwlwifi/iwl-agn-sta.c
··· 1268 1268 1269 1269 switch (keyconf->cipher) { 1270 1270 case WLAN_CIPHER_SUITE_TKIP: 1271 - keyconf->flags |= IEEE80211_KEY_FLAG_GENERATE_MMIC; 1272 - keyconf->flags |= IEEE80211_KEY_FLAG_GENERATE_IV; 1273 - 1274 1271 if (sta) 1275 1272 addr = sta->addr; 1276 1273 else /* station mode case only */ ··· 1280 1283 seq.tkip.iv32, p1k, CMD_SYNC); 1281 1284 break; 1282 1285 case WLAN_CIPHER_SUITE_CCMP: 1283 - keyconf->flags |= IEEE80211_KEY_FLAG_GENERATE_IV; 1284 - /* fall through */ 1285 1286 case WLAN_CIPHER_SUITE_WEP40: 1286 1287 case WLAN_CIPHER_SUITE_WEP104: 1287 1288 ret = iwlagn_send_sta_key(priv, keyconf, sta_id,
+17 -17
drivers/net/wireless/iwlwifi/iwl-agn.c
··· 2316 2316 return -EOPNOTSUPP; 2317 2317 } 2318 2318 2319 + switch (key->cipher) { 2320 + case WLAN_CIPHER_SUITE_TKIP: 2321 + key->flags |= IEEE80211_KEY_FLAG_GENERATE_MMIC; 2322 + /* fall through */ 2323 + case WLAN_CIPHER_SUITE_CCMP: 2324 + key->flags |= IEEE80211_KEY_FLAG_GENERATE_IV; 2325 + break; 2326 + default: 2327 + break; 2328 + } 2329 + 2319 2330 /* 2320 2331 * We could program these keys into the hardware as well, but we 2321 2332 * don't expect much multicast traffic in IBSS and having keys ··· 2610 2599 2611 2600 /* Configure HT40 channels */ 2612 2601 ctx->ht.enabled = conf_is_ht(conf); 2613 - if (ctx->ht.enabled) { 2614 - if (conf_is_ht40_minus(conf)) { 2615 - ctx->ht.extension_chan_offset = 2616 - IEEE80211_HT_PARAM_CHA_SEC_BELOW; 2617 - ctx->ht.is_40mhz = true; 2618 - } else if (conf_is_ht40_plus(conf)) { 2619 - ctx->ht.extension_chan_offset = 2620 - IEEE80211_HT_PARAM_CHA_SEC_ABOVE; 2621 - ctx->ht.is_40mhz = true; 2622 - } else { 2623 - ctx->ht.extension_chan_offset = 2624 - IEEE80211_HT_PARAM_CHA_SEC_NONE; 2625 - ctx->ht.is_40mhz = false; 2626 - } 2627 - } else 2602 + if (ctx->ht.enabled) 2603 + iwlagn_config_ht40(conf, ctx); 2604 + else 2628 2605 ctx->ht.is_40mhz = false; 2629 2606 2630 2607 if ((le16_to_cpu(ctx->staging.channel) != ch)) ··· 3498 3499 module_param_named(ack_check, iwlagn_mod_params.ack_check, bool, S_IRUGO); 3499 3500 MODULE_PARM_DESC(ack_check, "Check ack health (default: 0 [disabled])"); 3500 3501 3501 - module_param_named(wd_disable, iwlagn_mod_params.wd_disable, bool, S_IRUGO); 3502 + module_param_named(wd_disable, iwlagn_mod_params.wd_disable, int, S_IRUGO); 3502 3503 MODULE_PARM_DESC(wd_disable, 3503 - "Disable stuck queue watchdog timer (default: 0 [enabled])"); 3504 + "Disable stuck queue watchdog timer 0=system default, " 3505 + "1=disable, 2=enable (default: 0)"); 3504 3506 3505 3507 /* 3506 3508 * set bt_coex_active to true, uCode will do kill/defer
+2
drivers/net/wireless/iwlwifi/iwl-agn.h
··· 86 86 struct ieee80211_vif *vif, 87 87 struct ieee80211_bss_conf *bss_conf, 88 88 u32 changes); 89 + void iwlagn_config_ht40(struct ieee80211_conf *conf, 90 + struct iwl_rxon_context *ctx); 89 91 90 92 /* uCode */ 91 93 int iwlagn_rx_calib_result(struct iwl_priv *priv,
+17 -5
drivers/net/wireless/iwlwifi/iwl-core.c
··· 1810 1810 { 1811 1811 unsigned int timeout = priv->cfg->base_params->wd_timeout; 1812 1812 1813 - if (timeout && !iwlagn_mod_params.wd_disable) 1814 - mod_timer(&priv->watchdog, 1815 - jiffies + msecs_to_jiffies(IWL_WD_TICK(timeout))); 1816 - else 1817 - del_timer(&priv->watchdog); 1813 + if (!iwlagn_mod_params.wd_disable) { 1814 + /* use system default */ 1815 + if (timeout && !priv->cfg->base_params->wd_disable) 1816 + mod_timer(&priv->watchdog, 1817 + jiffies + 1818 + msecs_to_jiffies(IWL_WD_TICK(timeout))); 1819 + else 1820 + del_timer(&priv->watchdog); 1821 + } else { 1822 + /* module parameter overwrite default configuration */ 1823 + if (timeout && iwlagn_mod_params.wd_disable == 2) 1824 + mod_timer(&priv->watchdog, 1825 + jiffies + 1826 + msecs_to_jiffies(IWL_WD_TICK(timeout))); 1827 + else 1828 + del_timer(&priv->watchdog); 1829 + } 1818 1830 } 1819 1831 1820 1832 /**
+2
drivers/net/wireless/iwlwifi/iwl-core.h
··· 113 113 * @shadow_reg_enable: HW shadhow register bit 114 114 * @no_idle_support: do not support idle mode 115 115 * @hd_v2: v2 of enhanced sensitivity value, used for 2000 series and up 116 + * wd_disable: disable watchdog timer 116 117 */ 117 118 struct iwl_base_params { 118 119 int eeprom_size; ··· 135 134 const bool shadow_reg_enable; 136 135 const bool no_idle_support; 137 136 const bool hd_v2; 137 + const bool wd_disable; 138 138 }; 139 139 /* 140 140 * @advanced_bt_coexist: support advanced bt coexist
+2 -2
drivers/net/wireless/iwlwifi/iwl-shared.h
··· 120 120 * @restart_fw: restart firmware, default = 1 121 121 * @plcp_check: enable plcp health check, default = true 122 122 * @ack_check: disable ack health check, default = false 123 - * @wd_disable: enable stuck queue check, default = false 123 + * @wd_disable: enable stuck queue check, default = 0 124 124 * @bt_coex_active: enable bt coex, default = true 125 125 * @led_mode: system default, default = 0 126 126 * @no_sleep_autoadjust: disable autoadjust, default = true ··· 141 141 int restart_fw; 142 142 bool plcp_check; 143 143 bool ack_check; 144 - bool wd_disable; 144 + int wd_disable; 145 145 bool bt_coex_active; 146 146 int led_mode; 147 147 bool no_sleep_autoadjust;
+3 -2
drivers/net/wireless/p54/p54spi.c
··· 588 588 589 589 WARN_ON(priv->fw_state != FW_STATE_READY); 590 590 591 - cancel_work_sync(&priv->work); 592 - 593 591 p54spi_power_off(priv); 594 592 spin_lock_irqsave(&priv->tx_lock, flags); 595 593 INIT_LIST_HEAD(&priv->tx_pending); ··· 595 597 596 598 priv->fw_state = FW_STATE_OFF; 597 599 mutex_unlock(&priv->mutex); 600 + 601 + cancel_work_sync(&priv->work); 598 602 } 599 603 600 604 static int __devinit p54spi_probe(struct spi_device *spi) ··· 656 656 init_completion(&priv->fw_comp); 657 657 INIT_LIST_HEAD(&priv->tx_pending); 658 658 mutex_init(&priv->mutex); 659 + spin_lock_init(&priv->tx_lock); 659 660 SET_IEEE80211_DEV(hw, &spi->dev); 660 661 priv->common.open = p54spi_op_start; 661 662 priv->common.stop = p54spi_op_stop;
+1 -1
drivers/net/wireless/prism54/isl_ioctl.c
··· 778 778 dwrq->flags = 0; 779 779 dwrq->length = 0; 780 780 } 781 - essid->octets[essid->length] = '\0'; 781 + essid->octets[dwrq->length] = '\0'; 782 782 memcpy(extra, essid->octets, dwrq->length); 783 783 kfree(essid); 784 784
+1 -1
drivers/net/wireless/rt2x00/rt2800lib.c
··· 3771 3771 /* Apparently the data is read from end to start */ 3772 3772 rt2800_register_read_lock(rt2x00dev, EFUSE_DATA3, &reg); 3773 3773 /* The returned value is in CPU order, but eeprom is le */ 3774 - rt2x00dev->eeprom[i] = cpu_to_le32(reg); 3774 + *(u32 *)&rt2x00dev->eeprom[i] = cpu_to_le32(reg); 3775 3775 rt2800_register_read_lock(rt2x00dev, EFUSE_DATA2, &reg); 3776 3776 *(u32 *)&rt2x00dev->eeprom[i + 2] = cpu_to_le32(reg); 3777 3777 rt2800_register_read_lock(rt2x00dev, EFUSE_DATA1, &reg);
+9 -8
drivers/net/wireless/rtlwifi/ps.c
··· 395 395 if (mac->link_state != MAC80211_LINKED) 396 396 return; 397 397 398 - spin_lock(&rtlpriv->locks.lps_lock); 398 + spin_lock_irq(&rtlpriv->locks.lps_lock); 399 399 400 400 /* Idle for a while if we connect to AP a while ago. */ 401 401 if (mac->cnt_after_linked >= 2) { ··· 407 407 } 408 408 } 409 409 410 - spin_unlock(&rtlpriv->locks.lps_lock); 410 + spin_unlock_irq(&rtlpriv->locks.lps_lock); 411 411 } 412 412 413 413 /*Leave the leisure power save mode.*/ ··· 416 416 struct rtl_priv *rtlpriv = rtl_priv(hw); 417 417 struct rtl_ps_ctl *ppsc = rtl_psc(rtl_priv(hw)); 418 418 struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw)); 419 + unsigned long flags; 419 420 420 - spin_lock(&rtlpriv->locks.lps_lock); 421 + spin_lock_irqsave(&rtlpriv->locks.lps_lock, flags); 421 422 422 423 if (ppsc->fwctrl_lps) { 423 424 if (ppsc->dot11_psmode != EACTIVE) { ··· 439 438 rtl_lps_set_psmode(hw, EACTIVE); 440 439 } 441 440 } 442 - spin_unlock(&rtlpriv->locks.lps_lock); 441 + spin_unlock_irqrestore(&rtlpriv->locks.lps_lock, flags); 443 442 } 444 443 445 444 /* For sw LPS*/ ··· 540 539 RT_CLEAR_PS_LEVEL(ppsc, RT_PS_LEVEL_ASPM); 541 540 } 542 541 543 - spin_lock(&rtlpriv->locks.lps_lock); 542 + spin_lock_irq(&rtlpriv->locks.lps_lock); 544 543 rtl_ps_set_rf_state(hw, ERFON, RF_CHANGE_BY_PS); 545 - spin_unlock(&rtlpriv->locks.lps_lock); 544 + spin_unlock_irq(&rtlpriv->locks.lps_lock); 546 545 } 547 546 548 547 void rtl_swlps_rfon_wq_callback(void *data) ··· 575 574 if (rtlpriv->link_info.busytraffic) 576 575 return; 577 576 578 - spin_lock(&rtlpriv->locks.lps_lock); 577 + spin_lock_irq(&rtlpriv->locks.lps_lock); 579 578 rtl_ps_set_rf_state(hw, ERFSLEEP, RF_CHANGE_BY_PS); 580 - spin_unlock(&rtlpriv->locks.lps_lock); 579 + spin_unlock_irq(&rtlpriv->locks.lps_lock); 581 580 582 581 if (ppsc->reg_rfps_level & RT_RF_OFF_LEVL_ASPM && 583 582 !RT_IN_PS_LEVEL(ppsc, RT_PS_LEVEL_ASPM)) {
+1 -1
drivers/net/wireless/rtlwifi/rtl8192ce/phy.c
···
569 569 	}
570 570 	case ERFSLEEP:{
571 571 		if (ppsc->rfpwr_state == ERFOFF)
572 - 			break;
572 + 			return false;
573 573 		for (queue_id = 0, i = 0;
574 574 		     queue_id < RTL_PCI_MAX_TX_QUEUE_COUNT;) {
575 575 			ring = &pcipriv->dev.tx_ring[queue_id];
+1 -1
drivers/net/wireless/rtlwifi/rtl8192cu/phy.c
···
548 548 		break;
549 549 	case ERFSLEEP:
550 550 		if (ppsc->rfpwr_state == ERFOFF)
551 - 			break;
551 + 			return false;
552 552 		for (queue_id = 0, i = 0;
553 553 		     queue_id < RTL_PCI_MAX_TX_QUEUE_COUNT;) {
554 554 			ring = &pcipriv->dev.tx_ring[queue_id];
+1 -1
drivers/net/wireless/rtlwifi/rtl8192de/phy.c
···
3374 3374 		break;
3375 3375 	case ERFSLEEP:
3376 3376 		if (ppsc->rfpwr_state == ERFOFF)
3377 - 			break;
3377 + 			return false;
3378 3378 
3379 3379 		for (queue_id = 0, i = 0;
3380 3380 		     queue_id < RTL_PCI_MAX_TX_QUEUE_COUNT;) {
+1 -1
drivers/net/wireless/rtlwifi/rtl8192se/phy.c
···
602 602 	}
603 603 	case ERFSLEEP:
604 604 		if (ppsc->rfpwr_state == ERFOFF)
605 - 			break;
605 + 			return false;
606 606 
607 607 		for (queue_id = 0, i = 0;
608 608 		     queue_id < RTL_PCI_MAX_TX_QUEUE_COUNT;) {
+2 -2
drivers/net/xen-netback/netback.c
··· 1021 1021 pending_idx = *((u16 *)skb->data); 1022 1022 xen_netbk_idx_release(netbk, pending_idx); 1023 1023 for (j = start; j < i; j++) { 1024 - pending_idx = frag_get_pending_idx(&shinfo->frags[i]); 1024 + pending_idx = frag_get_pending_idx(&shinfo->frags[j]); 1025 1025 xen_netbk_idx_release(netbk, pending_idx); 1026 1026 } 1027 1027 ··· 1668 1668 "netback/%u", group); 1669 1669 1670 1670 if (IS_ERR(netbk->task)) { 1671 - printk(KERN_ALERT "kthread_run() fails at netback\n"); 1671 + printk(KERN_ALERT "kthread_create() fails at netback\n"); 1672 1672 del_timer(&netbk->net_timer); 1673 1673 rc = PTR_ERR(netbk->task); 1674 1674 goto failed_init;
+6 -9
drivers/of/irq.c
··· 26 26 #include <linux/string.h> 27 27 #include <linux/slab.h> 28 28 29 - /* For archs that don't support NO_IRQ (such as x86), provide a dummy value */ 30 - #ifndef NO_IRQ 31 - #define NO_IRQ 0 32 - #endif 33 - 34 29 /** 35 30 * irq_of_parse_and_map - Parse and map an interrupt into linux virq space 36 31 * @device: Device node of the device whose interrupt is to be mapped ··· 39 44 struct of_irq oirq; 40 45 41 46 if (of_irq_map_one(dev, index, &oirq)) 42 - return NO_IRQ; 47 + return 0; 43 48 44 49 return irq_create_of_mapping(oirq.controller, oirq.specifier, 45 50 oirq.size); ··· 340 345 341 346 /* Only dereference the resource if both the 342 347 * resource and the irq are valid. */ 343 - if (r && irq != NO_IRQ) { 348 + if (r && irq) { 344 349 r->start = r->end = irq; 345 350 r->flags = IORESOURCE_IRQ; 346 351 r->name = dev->full_name; ··· 358 363 { 359 364 int nr = 0; 360 365 361 - while (of_irq_to_resource(dev, nr, NULL) != NO_IRQ) 366 + while (of_irq_to_resource(dev, nr, NULL)) 362 367 nr++; 363 368 364 369 return nr; ··· 378 383 int i; 379 384 380 385 for (i = 0; i < nr_irqs; i++, res++) 381 - if (of_irq_to_resource(dev, i, res) == NO_IRQ) 386 + if (!of_irq_to_resource(dev, i, res)) 382 387 break; 383 388 384 389 return i; ··· 419 424 420 425 desc->dev = np; 421 426 desc->interrupt_parent = of_irq_find_parent(np); 427 + if (desc->interrupt_parent == np) 428 + desc->interrupt_parent = NULL; 422 429 list_add_tail(&desc->list, &intc_desc_list); 423 430 } 424 431
+24 -5
drivers/oprofile/oprof.c
··· 239 239 return err; 240 240 } 241 241 242 + static int timer_mode; 243 + 242 244 static int __init oprofile_init(void) 243 245 { 244 246 int err; 245 247 248 + /* always init architecture to setup backtrace support */ 246 249 err = oprofile_arch_init(&oprofile_ops); 247 - if (err < 0 || timer) { 248 - printk(KERN_INFO "oprofile: using timer interrupt.\n"); 250 + 251 + timer_mode = err || timer; /* fall back to timer mode on errors */ 252 + if (timer_mode) { 253 + if (!err) 254 + oprofile_arch_exit(); 249 255 err = oprofile_timer_init(&oprofile_ops); 250 256 if (err) 251 257 return err; 252 258 } 253 - return oprofilefs_register(); 259 + 260 + err = oprofilefs_register(); 261 + if (!err) 262 + return 0; 263 + 264 + /* failed */ 265 + if (timer_mode) 266 + oprofile_timer_exit(); 267 + else 268 + oprofile_arch_exit(); 269 + 270 + return err; 254 271 } 255 272 256 273 257 274 static void __exit oprofile_exit(void) 258 275 { 259 - oprofile_timer_exit(); 260 276 oprofilefs_unregister(); 261 - oprofile_arch_exit(); 277 + if (timer_mode) 278 + oprofile_timer_exit(); 279 + else 280 + oprofile_arch_exit(); 262 281 } 263 282 264 283
+1
drivers/oprofile/timer_int.c
··· 110 110 ops->start = oprofile_hrtimer_start; 111 111 ops->stop = oprofile_hrtimer_stop; 112 112 ops->cpu_type = "timer"; 113 + printk(KERN_INFO "oprofile: using timer interrupt.\n"); 113 114 return 0; 114 115 } 115 116
+1
drivers/pci/Kconfig
··· 76 76 77 77 config PCI_PRI 78 78 bool "PCI PRI support" 79 + depends on PCI 79 80 select PCI_ATS 80 81 help 81 82 PRI is the PCI Page Request Interface. It allows PCI devices that are
+1
drivers/pci/ats.c
··· 13 13 #include <linux/export.h> 14 14 #include <linux/pci-ats.h> 15 15 #include <linux/pci.h> 16 + #include <linux/slab.h> 16 17 17 18 #include "pci.h" 18 19
+19 -6
drivers/pci/hotplug/acpiphp_glue.c
··· 132 132 if (!acpi_pci_check_ejectable(pbus, handle) && !is_dock_device(handle)) 133 133 return AE_OK; 134 134 135 + pdev = pbus->self; 136 + if (pdev && pci_is_pcie(pdev)) { 137 + tmp = acpi_find_root_bridge_handle(pdev); 138 + if (tmp) { 139 + struct acpi_pci_root *root = acpi_pci_find_root(tmp); 140 + 141 + if (root && (root->osc_control_set & 142 + OSC_PCI_EXPRESS_NATIVE_HP_CONTROL)) 143 + return AE_OK; 144 + } 145 + } 146 + 135 147 acpi_evaluate_integer(handle, "_ADR", NULL, &adr); 136 148 device = (adr >> 16) & 0xffff; 137 149 function = adr & 0xffff; ··· 225 213 226 214 pdev = pci_get_slot(pbus, PCI_DEVFN(device, function)); 227 215 if (pdev) { 228 - pdev->current_state = PCI_D0; 229 216 slot->flags |= (SLOT_ENABLED | SLOT_POWEREDON); 230 217 pci_dev_put(pdev); 231 218 } ··· 1389 1378 { 1390 1379 int *count = (int *)context; 1391 1380 1392 - if (acpi_is_root_bridge(handle)) { 1393 - acpi_install_notify_handler(handle, ACPI_SYSTEM_NOTIFY, 1394 - handle_hotplug_event_bridge, NULL); 1395 - (*count)++; 1396 - } 1381 + if (!acpi_is_root_bridge(handle)) 1382 + return AE_OK; 1383 + 1384 + (*count)++; 1385 + acpi_install_notify_handler(handle, ACPI_SYSTEM_NOTIFY, 1386 + handle_hotplug_event_bridge, NULL); 1387 + 1397 1388 return AE_OK ; 1398 1389 } 1399 1390
-3
drivers/pci/hotplug/pciehp_ctrl.c
··· 213 213 goto err_exit; 214 214 } 215 215 216 - /* Wait for 1 second after checking link training status */ 217 - msleep(1000); 218 - 219 216 /* Check for a power fault */ 220 217 if (ctrl->power_fault_detected || pciehp_query_power_fault(p_slot)) { 221 218 ctrl_err(ctrl, "Power fault on slot %s\n", slot_name(p_slot));
+18 -9
drivers/pci/hotplug/pciehp_hpc.c
···
280 280 	else
281 281 		msleep(1000);
282 282 
283 + 	/*
284 + 	 * Need to wait for 1000 ms after Data Link Layer Link Active
285 + 	 * (DLLLA) bit reads 1b before sending configuration request.
286 + 	 * We need it before checking Link Training (LT) bit because
287 + 	 * LT is still set even after DLLLA bit is set on some platforms.
288 + 	 */
289 + 	msleep(1000);
290 + 
283 291 	retval = pciehp_readw(ctrl, PCI_EXP_LNKSTA, &lnk_status);
284 292 	if (retval) {
285 293 		ctrl_err(ctrl, "Cannot read LNKSTATUS register\n");
···
301 293 		retval = -1;
302 294 		return retval;
303 295 	}
296 + 
297 + 	/*
298 + 	 * If the port supports Link speeds greater than 5.0 GT/s, we
299 + 	 * must wait for 100 ms after Link training completes before
300 + 	 * sending configuration request.
301 + 	 */
302 + 	if (ctrl->pcie->port->subordinate->max_bus_speed > PCIE_SPEED_5_0GT)
303 + 		msleep(100);
304 + 
305 + 	pcie_update_link_speed(ctrl->pcie->port->subordinate, lnk_status);
304 306 
305 307 	return retval;
306 308 }
···
502 484 	u16 slot_cmd;
503 485 	u16 cmd_mask;
504 486 	u16 slot_status;
505 - 	u16 lnk_status;
506 487 	int retval = 0;
507 488 
508 489 	/* Clear sticky power-fault bit from previous power failures */
···
532 515 	}
533 516 	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
534 517 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_cmd);
535 - 
536 - 	retval = pciehp_readw(ctrl, PCI_EXP_LNKSTA, &lnk_status);
537 - 	if (retval) {
538 - 		ctrl_err(ctrl, "%s: Cannot read LNKSTA register\n",
539 - 			 __func__);
540 - 		return retval;
541 - 	}
542 - 	pcie_update_link_speed(ctrl->pcie->port->subordinate, lnk_status);
543 518 
544 519 	return retval;
545 520 }
+2 -2
drivers/pci/hotplug/shpchp_core.c
··· 278 278 279 279 static int is_shpc_capable(struct pci_dev *dev) 280 280 { 281 - if ((dev->vendor == PCI_VENDOR_ID_AMD) || (dev->device == 282 - PCI_DEVICE_ID_AMD_GOLAM_7450)) 281 + if (dev->vendor == PCI_VENDOR_ID_AMD && 282 + dev->device == PCI_DEVICE_ID_AMD_GOLAM_7450) 283 283 return 1; 284 284 if (!pci_find_capability(dev, PCI_CAP_ID_SHPC)) 285 285 return 0;
+2 -2
drivers/pci/hotplug/shpchp_hpc.c
··· 944 944 ctrl->pci_dev = pdev; /* pci_dev of the P2P bridge */ 945 945 ctrl_dbg(ctrl, "Hotplug Controller:\n"); 946 946 947 - if ((pdev->vendor == PCI_VENDOR_ID_AMD) || (pdev->device == 948 - PCI_DEVICE_ID_AMD_GOLAM_7450)) { 947 + if (pdev->vendor == PCI_VENDOR_ID_AMD && 948 + pdev->device == PCI_DEVICE_ID_AMD_GOLAM_7450) { 949 949 /* amd shpc driver doesn't use Base Offset; assume 0 */ 950 950 ctrl->mmio_base = pci_resource_start(pdev, 0); 951 951 ctrl->mmio_size = pci_resource_len(pdev, 0);
+7
drivers/pci/iov.c
··· 283 283 struct resource *res; 284 284 struct pci_dev *pdev; 285 285 struct pci_sriov *iov = dev->sriov; 286 + int bars = 0; 286 287 287 288 if (!nr_virtfn) 288 289 return 0; ··· 308 307 309 308 nres = 0; 310 309 for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) { 310 + bars |= (1 << (i + PCI_IOV_RESOURCES)); 311 311 res = dev->resource + PCI_IOV_RESOURCES + i; 312 312 if (res->parent) 313 313 nres++; ··· 323 321 324 322 if (virtfn_bus(dev, nr_virtfn - 1) > dev->bus->subordinate) { 325 323 dev_err(&dev->dev, "SR-IOV: bus number out of range\n"); 324 + return -ENOMEM; 325 + } 326 + 327 + if (pci_enable_resources(dev, bars)) { 328 + dev_err(&dev->dev, "SR-IOV: IOV BARS not allocated\n"); 326 329 return -ENOMEM; 327 330 } 328 331
+8 -1
drivers/pci/pci.c
··· 664 664 error = platform_pci_set_power_state(dev, state); 665 665 if (!error) 666 666 pci_update_current_state(dev, state); 667 + /* Fall back to PCI_D0 if native PM is not supported */ 668 + if (!dev->pm_cap) 669 + dev->current_state = PCI_D0; 667 670 } else { 668 671 error = -ENODEV; 669 672 /* Fall back to PCI_D0 if native PM is not supported */ ··· 1129 1126 if (atomic_add_return(1, &dev->enable_cnt) > 1) 1130 1127 return 0; /* already enabled */ 1131 1128 1132 - for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) 1129 + /* only skip sriov related */ 1130 + for (i = 0; i <= PCI_ROM_RESOURCE; i++) 1131 + if (dev->resource[i].flags & flags) 1132 + bars |= (1 << i); 1133 + for (i = PCI_BRIDGE_RESOURCES; i < DEVICE_COUNT_RESOURCE; i++) 1133 1134 if (dev->resource[i].flags & flags) 1134 1135 bars |= (1 << i); 1135 1136
+16 -5
drivers/platform/x86/toshiba_acpi.c
··· 121 121 int illumination_supported:1; 122 122 int video_supported:1; 123 123 int fan_supported:1; 124 + int system_event_supported:1; 124 125 125 126 struct mutex mutex; 126 127 }; ··· 725 724 u32 hci_result; 726 725 u32 value; 727 726 728 - if (!dev->key_event_valid) { 727 + if (!dev->key_event_valid && dev->system_event_supported) { 729 728 hci_read1(dev, HCI_SYSTEM_EVENT, &value, &hci_result); 730 729 if (hci_result == HCI_SUCCESS) { 731 730 dev->key_event_valid = 1; ··· 965 964 966 965 /* enable event fifo */ 967 966 hci_write1(dev, HCI_SYSTEM_EVENT, 1, &hci_result); 967 + if (hci_result == HCI_SUCCESS) 968 + dev->system_event_supported = 1; 968 969 969 970 props.type = BACKLIGHT_PLATFORM; 970 971 props.max_brightness = HCI_LCD_BRIGHTNESS_LEVELS - 1; ··· 1035 1032 { 1036 1033 struct toshiba_acpi_dev *dev = acpi_driver_data(acpi_dev); 1037 1034 u32 hci_result, value; 1035 + int retries = 3; 1038 1036 1039 - if (event != 0x80) 1037 + if (!dev->system_event_supported || event != 0x80) 1040 1038 return; 1039 + 1041 1040 do { 1042 1041 hci_read1(dev, HCI_SYSTEM_EVENT, &value, &hci_result); 1043 - if (hci_result == HCI_SUCCESS) { 1042 + switch (hci_result) { 1043 + case HCI_SUCCESS: 1044 1044 if (value == 0x100) 1045 1045 continue; 1046 1046 /* act on key press; ignore key release */ ··· 1055 1049 pr_info("Unknown key %x\n", 1056 1050 value); 1057 1051 } 1058 - } else if (hci_result == HCI_NOT_SUPPORTED) { 1052 + break; 1053 + case HCI_NOT_SUPPORTED: 1059 1054 /* This is a workaround for an unresolved issue on 1060 1055 * some machines where system events sporadically 1061 1056 * become disabled. */ 1062 1057 hci_write1(dev, HCI_SYSTEM_EVENT, 1, &hci_result); 1063 1058 pr_notice("Re-enabled hotkeys\n"); 1059 + /* fall through */ 1060 + default: 1061 + retries--; 1062 + break; 1064 1063 } 1065 - } while (hci_result != HCI_EMPTY); 1064 + } while (retries && hci_result != HCI_EMPTY); 1066 1065 } 1067 1066 1068 1067
+6 -6
drivers/power/intel_mid_battery.c
··· 61 61 #define PMIC_BATT_CHR_SBATDET_MASK (1 << 5) 62 62 #define PMIC_BATT_CHR_SDCLMT_MASK (1 << 6) 63 63 #define PMIC_BATT_CHR_SUSBOVP_MASK (1 << 7) 64 - #define PMIC_BATT_CHR_EXCPT_MASK 0xC6 64 + #define PMIC_BATT_CHR_EXCPT_MASK 0x86 65 + 65 66 #define PMIC_BATT_ADC_ACCCHRG_MASK (1 << 31) 66 67 #define PMIC_BATT_ADC_ACCCHRGVAL_MASK 0x7FFFFFFF 67 68 ··· 305 304 pbi->batt_status = POWER_SUPPLY_STATUS_NOT_CHARGING; 306 305 pmic_battery_log_event(BATT_EVENT_BATOVP_EXCPT); 307 306 batt_exception = 1; 308 - } else if (r8 & PMIC_BATT_CHR_SDCLMT_MASK) { 309 - pbi->batt_health = POWER_SUPPLY_HEALTH_OVERVOLTAGE; 310 - pbi->batt_status = POWER_SUPPLY_STATUS_NOT_CHARGING; 311 - pmic_battery_log_event(BATT_EVENT_DCLMT_EXCPT); 312 - batt_exception = 1; 313 307 } else if (r8 & PMIC_BATT_CHR_STEMP_MASK) { 314 308 pbi->batt_health = POWER_SUPPLY_HEALTH_OVERHEAT; 315 309 pbi->batt_status = POWER_SUPPLY_STATUS_NOT_CHARGING; ··· 312 316 batt_exception = 1; 313 317 } else { 314 318 pbi->batt_health = POWER_SUPPLY_HEALTH_GOOD; 319 + if (r8 & PMIC_BATT_CHR_SDCLMT_MASK) { 320 + /* PMIC will change charging current automatically */ 321 + pmic_battery_log_event(BATT_EVENT_DCLMT_EXCPT); 322 + } 315 323 } 316 324 } 317 325
+3 -1
drivers/ptp/ptp_clock.c
··· 101 101 102 102 static int ptp_clock_getres(struct posix_clock *pc, struct timespec *tp) 103 103 { 104 - return 1; /* always round timer functions to one nanosecond */ 104 + tp->tv_sec = 0; 105 + tp->tv_nsec = 1; 106 + return 0; 105 107 } 106 108 107 109 static int ptp_clock_settime(struct posix_clock *pc, const struct timespec *tp)
+21 -20
drivers/rapidio/devices/tsi721.c
··· 851 851 INIT_WORK(&priv->idb_work, tsi721_db_dpc); 852 852 853 853 /* Allocate buffer for inbound doorbells queue */ 854 - priv->idb_base = dma_alloc_coherent(&priv->pdev->dev, 854 + priv->idb_base = dma_zalloc_coherent(&priv->pdev->dev, 855 855 IDB_QSIZE * TSI721_IDB_ENTRY_SIZE, 856 856 &priv->idb_dma, GFP_KERNEL); 857 857 if (!priv->idb_base) 858 858 return -ENOMEM; 859 - 860 - memset(priv->idb_base, 0, IDB_QSIZE * TSI721_IDB_ENTRY_SIZE); 861 859 862 860 dev_dbg(&priv->pdev->dev, "Allocated IDB buffer @ %p (phys = %llx)\n", 863 861 priv->idb_base, (unsigned long long)priv->idb_dma); ··· 902 904 */ 903 905 904 906 /* Allocate space for DMA descriptors */ 905 - bd_ptr = dma_alloc_coherent(&priv->pdev->dev, 907 + bd_ptr = dma_zalloc_coherent(&priv->pdev->dev, 906 908 bd_num * sizeof(struct tsi721_dma_desc), 907 909 &bd_phys, GFP_KERNEL); 908 910 if (!bd_ptr) ··· 911 913 priv->bdma[chnum].bd_phys = bd_phys; 912 914 priv->bdma[chnum].bd_base = bd_ptr; 913 915 914 - memset(bd_ptr, 0, bd_num * sizeof(struct tsi721_dma_desc)); 915 - 916 916 dev_dbg(&priv->pdev->dev, "DMA descriptors @ %p (phys = %llx)\n", 917 917 bd_ptr, (unsigned long long)bd_phys); 918 918 ··· 918 922 sts_size = (bd_num >= TSI721_DMA_MINSTSSZ) ? 
919 923 bd_num : TSI721_DMA_MINSTSSZ; 920 924 sts_size = roundup_pow_of_two(sts_size); 921 - sts_ptr = dma_alloc_coherent(&priv->pdev->dev, 925 + sts_ptr = dma_zalloc_coherent(&priv->pdev->dev, 922 926 sts_size * sizeof(struct tsi721_dma_sts), 923 927 &sts_phys, GFP_KERNEL); 924 928 if (!sts_ptr) { ··· 933 937 priv->bdma[chnum].sts_phys = sts_phys; 934 938 priv->bdma[chnum].sts_base = sts_ptr; 935 939 priv->bdma[chnum].sts_size = sts_size; 936 - 937 - memset(sts_ptr, 0, sts_size); 938 940 939 941 dev_dbg(&priv->pdev->dev, 940 942 "desc status FIFO @ %p (phys = %llx) size=0x%x\n", ··· 1394 1400 1395 1401 /* Outbound message descriptor status FIFO allocation */ 1396 1402 priv->omsg_ring[mbox].sts_size = roundup_pow_of_two(entries + 1); 1397 - priv->omsg_ring[mbox].sts_base = dma_alloc_coherent(&priv->pdev->dev, 1403 + priv->omsg_ring[mbox].sts_base = dma_zalloc_coherent(&priv->pdev->dev, 1398 1404 priv->omsg_ring[mbox].sts_size * 1399 1405 sizeof(struct tsi721_dma_sts), 1400 1406 &priv->omsg_ring[mbox].sts_phys, GFP_KERNEL); ··· 1405 1411 rc = -ENOMEM; 1406 1412 goto out_desc; 1407 1413 } 1408 - 1409 - memset(priv->omsg_ring[mbox].sts_base, 0, 1410 - entries * sizeof(struct tsi721_dma_sts)); 1411 1414 1412 1415 /* 1413 1416 * Configure Outbound Messaging Engine ··· 2107 2116 INIT_LIST_HEAD(&mport->dbells); 2108 2117 2109 2118 rio_init_dbell_res(&mport->riores[RIO_DOORBELL_RESOURCE], 0, 0xffff); 2110 - rio_init_mbox_res(&mport->riores[RIO_INB_MBOX_RESOURCE], 0, 0); 2111 - rio_init_mbox_res(&mport->riores[RIO_OUTB_MBOX_RESOURCE], 0, 0); 2119 + rio_init_mbox_res(&mport->riores[RIO_INB_MBOX_RESOURCE], 0, 3); 2120 + rio_init_mbox_res(&mport->riores[RIO_OUTB_MBOX_RESOURCE], 0, 3); 2112 2121 strcpy(mport->name, "Tsi721 mport"); 2113 2122 2114 2123 /* Hook up interrupt handler */ ··· 2154 2163 const struct pci_device_id *id) 2155 2164 { 2156 2165 struct tsi721_device *priv; 2157 - int i; 2166 + int i, cap; 2158 2167 int err; 2159 2168 u32 regval; 2160 2169 ··· 2262 2271 
dev_info(&pdev->dev, "Unable to set consistent DMA mask\n"); 2263 2272 } 2264 2273 2265 - /* Clear "no snoop" and "relaxed ordering" bits. */ 2266 - pci_read_config_dword(pdev, 0x40 + PCI_EXP_DEVCTL, &regval); 2267 - regval &= ~(PCI_EXP_DEVCTL_RELAX_EN | PCI_EXP_DEVCTL_NOSNOOP_EN); 2268 - pci_write_config_dword(pdev, 0x40 + PCI_EXP_DEVCTL, regval); 2274 + cap = pci_pcie_cap(pdev); 2275 + BUG_ON(cap == 0); 2276 + 2277 + /* Clear "no snoop" and "relaxed ordering" bits, use default MRRS. */ 2278 + pci_read_config_dword(pdev, cap + PCI_EXP_DEVCTL, &regval); 2279 + regval &= ~(PCI_EXP_DEVCTL_READRQ | PCI_EXP_DEVCTL_RELAX_EN | 2280 + PCI_EXP_DEVCTL_NOSNOOP_EN); 2281 + regval |= 0x2 << MAX_READ_REQUEST_SZ_SHIFT; 2282 + pci_write_config_dword(pdev, cap + PCI_EXP_DEVCTL, regval); 2283 + 2284 + /* Adjust PCIe completion timeout. */ 2285 + pci_read_config_dword(pdev, cap + PCI_EXP_DEVCTL2, &regval); 2286 + regval &= ~(0x0f); 2287 + pci_write_config_dword(pdev, cap + PCI_EXP_DEVCTL2, regval | 0x2); 2269 2288 2270 2289 /* 2271 2290 * FIXUP: correct offsets of MSI-X tables in the MSI-X Capability Block
+2
drivers/rapidio/devices/tsi721.h
··· 72 72 #define TSI721_MSIXPBA_OFFSET 0x2a000 73 73 #define TSI721_PCIECFG_EPCTL 0x400 74 74 75 + #define MAX_READ_REQUEST_SZ_SHIFT 12 76 + 75 77 /* 76 78 * Event Management Registers 77 79 */
+1 -1
drivers/regulator/aat2870-regulator.c
··· 160 160 break; 161 161 } 162 162 163 - if (!ri) 163 + if (i == ARRAY_SIZE(aat2870_regulators)) 164 164 return NULL; 165 165 166 166 ri->enable_addr = AAT2870_LDO_EN;
+1 -1
drivers/regulator/core.c
··· 2799 2799 list_del(&rdev->list); 2800 2800 if (rdev->supply) 2801 2801 regulator_put(rdev->supply); 2802 - device_unregister(&rdev->dev); 2803 2802 kfree(rdev->constraints); 2803 + device_unregister(&rdev->dev); 2804 2804 mutex_unlock(&regulator_list_mutex); 2805 2805 } 2806 2806 EXPORT_SYMBOL_GPL(regulator_unregister);
+44 -2
drivers/regulator/twl-regulator.c
··· 71 71 #define VREG_TYPE 1 72 72 #define VREG_REMAP 2 73 73 #define VREG_DEDICATED 3 /* LDO control */ 74 + #define VREG_VOLTAGE_SMPS_4030 9 74 75 /* TWL6030 register offsets */ 75 76 #define VREG_TRANS 1 76 77 #define VREG_STATE 2 ··· 515 514 .get_status = twl4030reg_get_status, 516 515 }; 517 516 517 + static int 518 + twl4030smps_set_voltage(struct regulator_dev *rdev, int min_uV, int max_uV, 519 + unsigned *selector) 520 + { 521 + struct twlreg_info *info = rdev_get_drvdata(rdev); 522 + int vsel = DIV_ROUND_UP(min_uV - 600000, 12500); 523 + 524 + twlreg_write(info, TWL_MODULE_PM_RECEIVER, VREG_VOLTAGE_SMPS_4030, 525 + vsel); 526 + return 0; 527 + } 528 + 529 + static int twl4030smps_get_voltage(struct regulator_dev *rdev) 530 + { 531 + struct twlreg_info *info = rdev_get_drvdata(rdev); 532 + int vsel = twlreg_read(info, TWL_MODULE_PM_RECEIVER, 533 + VREG_VOLTAGE_SMPS_4030); 534 + 535 + return vsel * 12500 + 600000; 536 + } 537 + 538 + static struct regulator_ops twl4030smps_ops = { 539 + .set_voltage = twl4030smps_set_voltage, 540 + .get_voltage = twl4030smps_get_voltage, 541 + }; 542 + 518 543 static int twl6030ldo_list_voltage(struct regulator_dev *rdev, unsigned index) 519 544 { 520 545 struct twlreg_info *info = rdev_get_drvdata(rdev); ··· 883 856 }, \ 884 857 } 885 858 859 + #define TWL4030_ADJUSTABLE_SMPS(label, offset, num, turnon_delay, remap_conf) \ 860 + { \ 861 + .base = offset, \ 862 + .id = num, \ 863 + .delay = turnon_delay, \ 864 + .remap = remap_conf, \ 865 + .desc = { \ 866 + .name = #label, \ 867 + .id = TWL4030_REG_##label, \ 868 + .ops = &twl4030smps_ops, \ 869 + .type = REGULATOR_VOLTAGE, \ 870 + .owner = THIS_MODULE, \ 871 + }, \ 872 + } 873 + 886 874 #define TWL6030_ADJUSTABLE_LDO(label, offset, min_mVolts, max_mVolts) { \ 887 875 .base = offset, \ 888 876 .min_mV = min_mVolts, \ ··· 989 947 TWL4030_ADJUSTABLE_LDO(VINTANA2, 0x43, 12, 100, 0x08), 990 948 TWL4030_FIXED_LDO(VINTDIG, 0x47, 1500, 13, 100, 0x08), 991 949 
TWL4030_ADJUSTABLE_LDO(VIO, 0x4b, 14, 1000, 0x08), 992 - TWL4030_ADJUSTABLE_LDO(VDD1, 0x55, 15, 1000, 0x08), 993 - TWL4030_ADJUSTABLE_LDO(VDD2, 0x63, 16, 1000, 0x08), 950 + TWL4030_ADJUSTABLE_SMPS(VDD1, 0x55, 15, 1000, 0x08), 951 + TWL4030_ADJUSTABLE_SMPS(VDD2, 0x63, 16, 1000, 0x08), 994 952 TWL4030_FIXED_LDO(VUSB1V5, 0x71, 1500, 17, 100, 0x08), 995 953 TWL4030_FIXED_LDO(VUSB1V8, 0x74, 1800, 18, 100, 0x08), 996 954 TWL4030_FIXED_LDO(VUSB3V1, 0x77, 3100, 19, 150, 0x08),
+5 -5
drivers/rtc/class.c
··· 63 63 */ 64 64 delta = timespec_sub(old_system, old_rtc); 65 65 delta_delta = timespec_sub(delta, old_delta); 66 - if (abs(delta_delta.tv_sec) >= 2) { 66 + if (delta_delta.tv_sec < -2 || delta_delta.tv_sec >= 2) { 67 67 /* 68 68 * if delta_delta is too large, assume time correction 69 69 * has occured and set old_delta to the current delta. ··· 97 97 rtc_tm_to_time(&tm, &new_rtc.tv_sec); 98 98 new_rtc.tv_nsec = 0; 99 99 100 - if (new_rtc.tv_sec <= old_rtc.tv_sec) { 101 - if (new_rtc.tv_sec < old_rtc.tv_sec) 102 - pr_debug("%s: time travel!\n", dev_name(&rtc->dev)); 100 + if (new_rtc.tv_sec < old_rtc.tv_sec) { 101 + pr_debug("%s: time travel!\n", dev_name(&rtc->dev)); 103 102 return 0; 104 103 } 105 104 ··· 115 116 sleep_time = timespec_sub(sleep_time, 116 117 timespec_sub(new_system, old_system)); 117 118 118 - timekeeping_inject_sleeptime(&sleep_time); 119 + if (sleep_time.tv_sec >= 0) 120 + timekeeping_inject_sleeptime(&sleep_time); 119 121 return 0; 120 122 } 121 123
+34 -10
drivers/rtc/interface.c
··· 319 319 } 320 320 EXPORT_SYMBOL_GPL(rtc_read_alarm); 321 321 322 + static int ___rtc_set_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm) 323 + { 324 + int err; 325 + 326 + if (!rtc->ops) 327 + err = -ENODEV; 328 + else if (!rtc->ops->set_alarm) 329 + err = -EINVAL; 330 + else 331 + err = rtc->ops->set_alarm(rtc->dev.parent, alarm); 332 + 333 + return err; 334 + } 335 + 322 336 static int __rtc_set_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm) 323 337 { 324 338 struct rtc_time tm; ··· 356 342 * over right here, before we set the alarm. 357 343 */ 358 344 359 - if (!rtc->ops) 360 - err = -ENODEV; 361 - else if (!rtc->ops->set_alarm) 362 - err = -EINVAL; 363 - else 364 - err = rtc->ops->set_alarm(rtc->dev.parent, alarm); 365 - 366 - return err; 345 + return ___rtc_set_alarm(rtc, alarm); 367 346 } 368 347 369 348 int rtc_set_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm) ··· 770 763 return 0; 771 764 } 772 765 766 + static void rtc_alarm_disable(struct rtc_device *rtc) 767 + { 768 + struct rtc_wkalrm alarm; 769 + struct rtc_time tm; 770 + 771 + __rtc_read_time(rtc, &tm); 772 + 773 + alarm.time = rtc_ktime_to_tm(ktime_add(rtc_tm_to_ktime(tm), 774 + ktime_set(300, 0))); 775 + alarm.enabled = 0; 776 + 777 + ___rtc_set_alarm(rtc, &alarm); 778 + } 779 + 773 780 /** 774 781 * rtc_timer_remove - Removes a rtc_timer from the rtc_device timerqueue 775 782 * @rtc rtc device ··· 805 784 struct rtc_wkalrm alarm; 806 785 int err; 807 786 next = timerqueue_getnext(&rtc->timerqueue); 808 - if (!next) 787 + if (!next) { 788 + rtc_alarm_disable(rtc); 809 789 return; 790 + } 810 791 alarm.time = rtc_ktime_to_tm(next->expires); 811 792 alarm.enabled = 1; 812 793 err = __rtc_set_alarm(rtc, &alarm); ··· 870 847 err = __rtc_set_alarm(rtc, &alarm); 871 848 if (err == -ETIME) 872 849 goto again; 873 - } 850 + } else 851 + rtc_alarm_disable(rtc); 874 852 875 853 mutex_unlock(&rtc->ops_lock); 876 854 }
+1 -1
drivers/rtc/rtc-s3c.c
··· 202 202 void __iomem *base = s3c_rtc_base; 203 203 int year = tm->tm_year - 100; 204 204 205 - clk_enable(rtc_clk); 206 205 pr_debug("set time %04d.%02d.%02d %02d:%02d:%02d\n", 207 206 1900 + tm->tm_year, tm->tm_mon, tm->tm_mday, 208 207 tm->tm_hour, tm->tm_min, tm->tm_sec); ··· 213 214 return -EINVAL; 214 215 } 215 216 217 + clk_enable(rtc_clk); 216 218 writeb(bin2bcd(tm->tm_sec), base + S3C2410_RTCSEC); 217 219 writeb(bin2bcd(tm->tm_min), base + S3C2410_RTCMIN); 218 220 writeb(bin2bcd(tm->tm_hour), base + S3C2410_RTCHOUR);
+2 -5
drivers/s390/cio/chsc.c
··· 529 529 int chsc_chp_vary(struct chp_id chpid, int on) 530 530 { 531 531 struct channel_path *chp = chpid_to_chp(chpid); 532 - struct chp_link link; 533 532 534 - memset(&link, 0, sizeof(struct chp_link)); 535 - link.chpid = chpid; 536 533 /* Wait until previous actions have settled. */ 537 534 css_wait_for_slow_path(); 538 535 /* ··· 539 542 /* Try to update the channel path descritor. */ 540 543 chsc_determine_base_channel_path_desc(chpid, &chp->desc); 541 544 for_each_subchannel_staged(s390_subchannel_vary_chpid_on, 542 - __s390_vary_chpid_on, &link); 545 + __s390_vary_chpid_on, &chpid); 543 546 } else 544 547 for_each_subchannel_staged(s390_subchannel_vary_chpid_off, 545 - NULL, &link); 548 + NULL, &chpid); 546 549 547 550 return 0; 548 551 }
+5
drivers/s390/cio/cio.h
··· 68 68 __u8 mda[4]; /* model dependent area */ 69 69 } __attribute__ ((packed,aligned(4))); 70 70 71 + /* 72 + * When rescheduled, todo's with higher values will overwrite those 73 + * with lower values. 74 + */ 71 75 enum sch_todo { 72 76 SCH_TODO_NOTHING, 77 + SCH_TODO_EVAL, 73 78 SCH_TODO_UNREG, 74 79 }; 75 80
+59 -45
drivers/s390/cio/css.c
··· 195 195 } 196 196 EXPORT_SYMBOL_GPL(css_sch_device_unregister); 197 197 198 - static void css_sch_todo(struct work_struct *work) 199 - { 200 - struct subchannel *sch; 201 - enum sch_todo todo; 202 - 203 - sch = container_of(work, struct subchannel, todo_work); 204 - /* Find out todo. */ 205 - spin_lock_irq(sch->lock); 206 - todo = sch->todo; 207 - CIO_MSG_EVENT(4, "sch_todo: sch=0.%x.%04x, todo=%d\n", sch->schid.ssid, 208 - sch->schid.sch_no, todo); 209 - sch->todo = SCH_TODO_NOTHING; 210 - spin_unlock_irq(sch->lock); 211 - /* Perform todo. */ 212 - if (todo == SCH_TODO_UNREG) 213 - css_sch_device_unregister(sch); 214 - /* Release workqueue ref. */ 215 - put_device(&sch->dev); 216 - } 217 - 218 - /** 219 - * css_sched_sch_todo - schedule a subchannel operation 220 - * @sch: subchannel 221 - * @todo: todo 222 - * 223 - * Schedule the operation identified by @todo to be performed on the slow path 224 - * workqueue. Do nothing if another operation with higher priority is already 225 - * scheduled. Needs to be called with subchannel lock held. 226 - */ 227 - void css_sched_sch_todo(struct subchannel *sch, enum sch_todo todo) 228 - { 229 - CIO_MSG_EVENT(4, "sch_todo: sched sch=0.%x.%04x todo=%d\n", 230 - sch->schid.ssid, sch->schid.sch_no, todo); 231 - if (sch->todo >= todo) 232 - return; 233 - /* Get workqueue ref. */ 234 - if (!get_device(&sch->dev)) 235 - return; 236 - sch->todo = todo; 237 - if (!queue_work(cio_work_q, &sch->todo_work)) { 238 - /* Already queued, release workqueue ref. 
*/ 239 - put_device(&sch->dev); 240 - } 241 - } 242 - 243 198 static void ssd_from_pmcw(struct chsc_ssd_info *ssd, struct pmcw *pmcw) 244 199 { 245 200 int i; ··· 419 464 ret = css_evaluate_new_subchannel(schid, slow); 420 465 if (ret == -EAGAIN) 421 466 css_schedule_eval(schid); 467 + } 468 + 469 + /** 470 + * css_sched_sch_todo - schedule a subchannel operation 471 + * @sch: subchannel 472 + * @todo: todo 473 + * 474 + * Schedule the operation identified by @todo to be performed on the slow path 475 + * workqueue. Do nothing if another operation with higher priority is already 476 + * scheduled. Needs to be called with subchannel lock held. 477 + */ 478 + void css_sched_sch_todo(struct subchannel *sch, enum sch_todo todo) 479 + { 480 + CIO_MSG_EVENT(4, "sch_todo: sched sch=0.%x.%04x todo=%d\n", 481 + sch->schid.ssid, sch->schid.sch_no, todo); 482 + if (sch->todo >= todo) 483 + return; 484 + /* Get workqueue ref. */ 485 + if (!get_device(&sch->dev)) 486 + return; 487 + sch->todo = todo; 488 + if (!queue_work(cio_work_q, &sch->todo_work)) { 489 + /* Already queued, release workqueue ref. */ 490 + put_device(&sch->dev); 491 + } 492 + } 493 + 494 + static void css_sch_todo(struct work_struct *work) 495 + { 496 + struct subchannel *sch; 497 + enum sch_todo todo; 498 + int ret; 499 + 500 + sch = container_of(work, struct subchannel, todo_work); 501 + /* Find out todo. */ 502 + spin_lock_irq(sch->lock); 503 + todo = sch->todo; 504 + CIO_MSG_EVENT(4, "sch_todo: sch=0.%x.%04x, todo=%d\n", sch->schid.ssid, 505 + sch->schid.sch_no, todo); 506 + sch->todo = SCH_TODO_NOTHING; 507 + spin_unlock_irq(sch->lock); 508 + /* Perform todo. 
*/ 509 + switch (todo) { 510 + case SCH_TODO_NOTHING: 511 + break; 512 + case SCH_TODO_EVAL: 513 + ret = css_evaluate_known_subchannel(sch, 1); 514 + if (ret == -EAGAIN) { 515 + spin_lock_irq(sch->lock); 516 + css_sched_sch_todo(sch, todo); 517 + spin_unlock_irq(sch->lock); 518 + } 519 + break; 520 + case SCH_TODO_UNREG: 521 + css_sch_device_unregister(sch); 522 + break; 523 + } 524 + /* Release workqueue ref. */ 525 + put_device(&sch->dev); 422 526 } 423 527 424 528 static struct idset *slow_subchannel_set;
+2 -2
drivers/s390/cio/device.c
··· 1868 1868 */ 1869 1869 cdev->private->flags.resuming = 1; 1870 1870 cdev->private->path_new_mask = LPM_ANYPATH; 1871 - css_schedule_eval(sch->schid); 1871 + css_sched_sch_todo(sch, SCH_TODO_EVAL); 1872 1872 spin_unlock_irq(sch->lock); 1873 - css_complete_work(); 1873 + css_wait_for_slow_path(); 1874 1874 1875 1875 /* cdev may have been moved to a different subchannel. */ 1876 1876 sch = to_subchannel(cdev->dev.parent);
+22 -8
drivers/s390/cio/device_fsm.c
··· 496 496 cdev->private->pgid_reset_mask = 0; 497 497 } 498 498 499 - void 500 - ccw_device_verify_done(struct ccw_device *cdev, int err) 499 + static void create_fake_irb(struct irb *irb, int type) 500 + { 501 + memset(irb, 0, sizeof(*irb)); 502 + if (type == FAKE_CMD_IRB) { 503 + struct cmd_scsw *scsw = &irb->scsw.cmd; 504 + scsw->cc = 1; 505 + scsw->fctl = SCSW_FCTL_START_FUNC; 506 + scsw->actl = SCSW_ACTL_START_PEND; 507 + scsw->stctl = SCSW_STCTL_STATUS_PEND; 508 + } else if (type == FAKE_TM_IRB) { 509 + struct tm_scsw *scsw = &irb->scsw.tm; 510 + scsw->x = 1; 511 + scsw->cc = 1; 512 + scsw->fctl = SCSW_FCTL_START_FUNC; 513 + scsw->actl = SCSW_ACTL_START_PEND; 514 + scsw->stctl = SCSW_STCTL_STATUS_PEND; 515 + } 516 + } 517 + 518 + void ccw_device_verify_done(struct ccw_device *cdev, int err) 501 519 { 502 520 struct subchannel *sch; 503 521 ··· 538 520 ccw_device_done(cdev, DEV_STATE_ONLINE); 539 521 /* Deliver fake irb to device driver, if needed. */ 540 522 if (cdev->private->flags.fake_irb) { 541 - memset(&cdev->private->irb, 0, sizeof(struct irb)); 542 - cdev->private->irb.scsw.cmd.cc = 1; 543 - cdev->private->irb.scsw.cmd.fctl = SCSW_FCTL_START_FUNC; 544 - cdev->private->irb.scsw.cmd.actl = SCSW_ACTL_START_PEND; 545 - cdev->private->irb.scsw.cmd.stctl = 546 - SCSW_STCTL_STATUS_PEND; 523 + create_fake_irb(&cdev->private->irb, 524 + cdev->private->flags.fake_irb); 547 525 cdev->private->flags.fake_irb = 0; 548 526 if (cdev->handler) 549 527 cdev->handler(cdev, cdev->private->intparm,
+15 -5
drivers/s390/cio/device_ops.c
··· 198 198 if (cdev->private->state == DEV_STATE_VERIFY) { 199 199 /* Remember to fake irb when finished. */ 200 200 if (!cdev->private->flags.fake_irb) { 201 - cdev->private->flags.fake_irb = 1; 201 + cdev->private->flags.fake_irb = FAKE_CMD_IRB; 202 202 cdev->private->intparm = intparm; 203 203 return 0; 204 204 } else ··· 213 213 ret = cio_set_options (sch, flags); 214 214 if (ret) 215 215 return ret; 216 - /* Adjust requested path mask to excluded varied off paths. */ 216 + /* Adjust requested path mask to exclude unusable paths. */ 217 217 if (lpm) { 218 - lpm &= sch->opm; 218 + lpm &= sch->lpm; 219 219 if (lpm == 0) 220 220 return -EACCES; 221 221 } ··· 605 605 sch = to_subchannel(cdev->dev.parent); 606 606 if (!sch->schib.pmcw.ena) 607 607 return -EINVAL; 608 + if (cdev->private->state == DEV_STATE_VERIFY) { 609 + /* Remember to fake irb when finished. */ 610 + if (!cdev->private->flags.fake_irb) { 611 + cdev->private->flags.fake_irb = FAKE_TM_IRB; 612 + cdev->private->intparm = intparm; 613 + return 0; 614 + } else 615 + /* There's already a fake I/O around. */ 616 + return -EBUSY; 617 + } 608 618 if (cdev->private->state != DEV_STATE_ONLINE) 609 619 return -EIO; 610 - /* Adjust requested path mask to excluded varied off paths. */ 620 + /* Adjust requested path mask to exclude unusable paths. */ 611 621 if (lpm) { 612 - lpm &= sch->opm; 622 + lpm &= sch->lpm; 613 623 if (lpm == 0) 614 624 return -EACCES; 615 625 }
+4 -1
drivers/s390/cio/io_sch.h
··· 111 111 CDEV_TODO_UNREG_EVAL, 112 112 }; 113 113 114 + #define FAKE_CMD_IRB 1 115 + #define FAKE_TM_IRB 2 116 + 114 117 struct ccw_device_private { 115 118 struct ccw_device *cdev; 116 119 struct subchannel *sch; ··· 141 138 unsigned int doverify:1; /* delayed path verification */ 142 139 unsigned int donotify:1; /* call notify function */ 143 140 unsigned int recog_done:1; /* dev. recog. complete */ 144 - unsigned int fake_irb:1; /* deliver faked irb */ 141 + unsigned int fake_irb:2; /* deliver faked irb */ 145 142 unsigned int resuming:1; /* recognition while resume */ 146 143 unsigned int pgroup:1; /* pathgroup is set up */ 147 144 unsigned int mpath:1; /* multipathing is set up */
+2
drivers/s390/crypto/ap_bus.c
··· 1552 1552 rc = ap_init_queue(ap_dev->qid); 1553 1553 if (rc == -ENODEV) 1554 1554 ap_dev->unregistered = 1; 1555 + else 1556 + __ap_schedule_poll_timer(); 1555 1557 } 1556 1558 1557 1559 static int __ap_poll_device(struct ap_device *ap_dev, unsigned long *flags)
+4
drivers/s390/scsi/zfcp_scsi.c
··· 55 55 { 56 56 struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev); 57 57 58 + /* if previous slave_alloc returned early, there is nothing to do */ 59 + if (!zfcp_sdev->port) 60 + return; 61 + 58 62 zfcp_erp_lun_shutdown_wait(sdev, "scssd_1"); 59 63 put_device(&zfcp_sdev->port->dev); 60 64 }
+5 -22
drivers/sbus/char/bbc_i2c.c
··· 233 233 int ret = 0; 234 234 235 235 while (len > 0) { 236 - int err = bbc_i2c_writeb(client, *buf, off); 237 - 238 - if (err < 0) { 239 - ret = err; 236 + ret = bbc_i2c_writeb(client, *buf, off); 237 + if (ret < 0) 240 238 break; 241 - } 242 - 243 239 len--; 244 240 buf++; 245 241 off++; ··· 249 253 int ret = 0; 250 254 251 255 while (len > 0) { 252 - int err = bbc_i2c_readb(client, buf, off); 253 - if (err < 0) { 254 - ret = err; 256 + ret = bbc_i2c_readb(client, buf, off); 257 + if (ret < 0) 255 258 break; 256 - } 257 259 len--; 258 260 buf++; 259 261 off++; ··· 416 422 .remove = __devexit_p(bbc_i2c_remove), 417 423 }; 418 424 419 - static int __init bbc_i2c_init(void) 420 - { 421 - return platform_driver_register(&bbc_i2c_driver); 422 - } 423 - 424 - static void __exit bbc_i2c_exit(void) 425 - { 426 - platform_driver_unregister(&bbc_i2c_driver); 427 - } 428 - 429 - module_init(bbc_i2c_init); 430 - module_exit(bbc_i2c_exit); 425 + module_platform_driver(bbc_i2c_driver); 431 426 432 427 MODULE_LICENSE("GPL");
+1 -12
drivers/sbus/char/display7seg.c
··· 275 275 .remove = __devexit_p(d7s_remove), 276 276 }; 277 277 278 - static int __init d7s_init(void) 279 - { 280 - return platform_driver_register(&d7s_driver); 281 - } 282 - 283 - static void __exit d7s_exit(void) 284 - { 285 - platform_driver_unregister(&d7s_driver); 286 - } 287 - 288 - module_init(d7s_init); 289 - module_exit(d7s_exit); 278 + module_platform_driver(d7s_driver);
+1 -11
drivers/sbus/char/envctrl.c
··· 1138 1138 .remove = __devexit_p(envctrl_remove), 1139 1139 }; 1140 1140 1141 - static int __init envctrl_init(void) 1142 - { 1143 - return platform_driver_register(&envctrl_driver); 1144 - } 1141 + module_platform_driver(envctrl_driver); 1145 1142 1146 - static void __exit envctrl_exit(void) 1147 - { 1148 - platform_driver_unregister(&envctrl_driver); 1149 - } 1150 - 1151 - module_init(envctrl_init); 1152 - module_exit(envctrl_exit); 1153 1143 MODULE_LICENSE("GPL");
+1 -11
drivers/sbus/char/flash.c
··· 216 216 .remove = __devexit_p(flash_remove), 217 217 }; 218 218 219 - static int __init flash_init(void) 220 - { 221 - return platform_driver_register(&flash_driver); 222 - } 219 + module_platform_driver(flash_driver); 223 220 224 - static void __exit flash_cleanup(void) 225 - { 226 - platform_driver_unregister(&flash_driver); 227 - } 228 - 229 - module_init(flash_init); 230 - module_exit(flash_cleanup); 231 221 MODULE_LICENSE("GPL");
+1 -11
drivers/sbus/char/uctrl.c
··· 435 435 }; 436 436 437 437 438 - static int __init uctrl_init(void) 439 - { 440 - return platform_driver_register(&uctrl_driver); 441 - } 438 + module_platform_driver(uctrl_driver); 442 439 443 - static void __exit uctrl_exit(void) 444 - { 445 - platform_driver_unregister(&uctrl_driver); 446 - } 447 - 448 - module_init(uctrl_init); 449 - module_exit(uctrl_exit); 450 440 MODULE_LICENSE("GPL");
+3 -2
drivers/scsi/bnx2i/bnx2i_hwi.c
··· 1906 1906 spin_lock(&session->lock); 1907 1907 task = iscsi_itt_to_task(bnx2i_conn->cls_conn->dd_data, 1908 1908 cqe->itt & ISCSI_CMD_RESPONSE_INDEX); 1909 - if (!task) { 1909 + if (!task || !task->sc) { 1910 1910 spin_unlock(&session->lock); 1911 1911 return -EINVAL; 1912 1912 } 1913 1913 sc = task->sc; 1914 - spin_unlock(&session->lock); 1915 1914 1916 1915 if (!blk_rq_cpu_valid(sc->request)) 1917 1916 cpu = smp_processor_id(); 1918 1917 else 1919 1918 cpu = sc->request->cpu; 1919 + 1920 + spin_unlock(&session->lock); 1920 1921 1921 1922 p = &per_cpu(bnx2i_percpu, cpu); 1922 1923 spin_lock(&p->p_work_lock);
+116
drivers/scsi/fcoe/fcoe.c
··· 31 31 #include <linux/sysfs.h> 32 32 #include <linux/ctype.h> 33 33 #include <linux/workqueue.h> 34 + #include <net/dcbnl.h> 35 + #include <net/dcbevent.h> 34 36 #include <scsi/scsi_tcq.h> 35 37 #include <scsi/scsicam.h> 36 38 #include <scsi/scsi_transport.h> ··· 103 101 static int fcoe_ddp_target(struct fc_lport *, u16, struct scatterlist *, 104 102 unsigned int); 105 103 static int fcoe_cpu_callback(struct notifier_block *, unsigned long, void *); 104 + static int fcoe_dcb_app_notification(struct notifier_block *notifier, 105 + ulong event, void *ptr); 106 106 107 107 static bool fcoe_match(struct net_device *netdev); 108 108 static int fcoe_create(struct net_device *netdev, enum fip_state fip_mode); ··· 131 127 /* notification function for CPU hotplug events */ 132 128 static struct notifier_block fcoe_cpu_notifier = { 133 129 .notifier_call = fcoe_cpu_callback, 130 + }; 131 + 132 + /* notification function for DCB events */ 133 + static struct notifier_block dcb_notifier = { 134 + .notifier_call = fcoe_dcb_app_notification, 134 135 }; 135 136 136 137 static struct scsi_transport_template *fcoe_nport_scsi_transport; ··· 1531 1522 skb_reset_network_header(skb); 1532 1523 skb->mac_len = elen; 1533 1524 skb->protocol = htons(ETH_P_FCOE); 1525 + skb->priority = port->priority; 1526 + 1534 1527 if (fcoe->netdev->priv_flags & IFF_802_1Q_VLAN && 1535 1528 fcoe->realdev->features & NETIF_F_HW_VLAN_TX) { 1536 1529 skb->vlan_tci = VLAN_TAG_PRESENT | ··· 1635 1624 stats->InvalidCRCCount++; 1636 1625 if (stats->InvalidCRCCount < 5) 1637 1626 printk(KERN_WARNING "fcoe: dropping frame with CRC error\n"); 1627 + put_cpu(); 1638 1628 return -EINVAL; 1639 1629 } 1640 1630 ··· 1758 1746 */ 1759 1747 static void fcoe_dev_setup(void) 1760 1748 { 1749 + register_dcbevent_notifier(&dcb_notifier); 1761 1750 register_netdevice_notifier(&fcoe_notifier); 1762 1751 } 1763 1752 ··· 1767 1754 */ 1768 1755 static void fcoe_dev_cleanup(void) 1769 1756 { 1757 + 
unregister_dcbevent_notifier(&dcb_notifier); 1770 1758 unregister_netdevice_notifier(&fcoe_notifier); 1759 + } 1760 + 1761 + static struct fcoe_interface * 1762 + fcoe_hostlist_lookup_realdev_port(struct net_device *netdev) 1763 + { 1764 + struct fcoe_interface *fcoe; 1765 + struct net_device *real_dev; 1766 + 1767 + list_for_each_entry(fcoe, &fcoe_hostlist, list) { 1768 + if (fcoe->netdev->priv_flags & IFF_802_1Q_VLAN) 1769 + real_dev = vlan_dev_real_dev(fcoe->netdev); 1770 + else 1771 + real_dev = fcoe->netdev; 1772 + 1773 + if (netdev == real_dev) 1774 + return fcoe; 1775 + } 1776 + return NULL; 1777 + } 1778 + 1779 + static int fcoe_dcb_app_notification(struct notifier_block *notifier, 1780 + ulong event, void *ptr) 1781 + { 1782 + struct dcb_app_type *entry = ptr; 1783 + struct fcoe_interface *fcoe; 1784 + struct net_device *netdev; 1785 + struct fcoe_port *port; 1786 + int prio; 1787 + 1788 + if (entry->app.selector != DCB_APP_IDTYPE_ETHTYPE) 1789 + return NOTIFY_OK; 1790 + 1791 + netdev = dev_get_by_index(&init_net, entry->ifindex); 1792 + if (!netdev) 1793 + return NOTIFY_OK; 1794 + 1795 + fcoe = fcoe_hostlist_lookup_realdev_port(netdev); 1796 + dev_put(netdev); 1797 + if (!fcoe) 1798 + return NOTIFY_OK; 1799 + 1800 + if (entry->dcbx & DCB_CAP_DCBX_VER_CEE) 1801 + prio = ffs(entry->app.priority) - 1; 1802 + else 1803 + prio = entry->app.priority; 1804 + 1805 + if (prio < 0) 1806 + return NOTIFY_OK; 1807 + 1808 + if (entry->app.protocol == ETH_P_FIP || 1809 + entry->app.protocol == ETH_P_FCOE) 1810 + fcoe->ctlr.priority = prio; 1811 + 1812 + if (entry->app.protocol == ETH_P_FCOE) { 1813 + port = lport_priv(fcoe->ctlr.lp); 1814 + port->priority = prio; 1815 + } 1816 + 1817 + return NOTIFY_OK; 1771 1818 } 1772 1819 1773 1820 /** ··· 2038 1965 } 2039 1966 2040 1967 /** 1968 + * fcoe_dcb_create() - Initialize DCB attributes and hooks 1969 + * @netdev: The net_device object of the L2 link that should be queried 1970 + * @port: The fcoe_port to bind FCoE APP 
priority with 1971 + * @ 1972 + */ 1973 + static void fcoe_dcb_create(struct fcoe_interface *fcoe) 1974 + { 1975 + #ifdef CONFIG_DCB 1976 + int dcbx; 1977 + u8 fup, up; 1978 + struct net_device *netdev = fcoe->realdev; 1979 + struct fcoe_port *port = lport_priv(fcoe->ctlr.lp); 1980 + struct dcb_app app = { 1981 + .priority = 0, 1982 + .protocol = ETH_P_FCOE 1983 + }; 1984 + 1985 + /* setup DCB priority attributes. */ 1986 + if (netdev && netdev->dcbnl_ops && netdev->dcbnl_ops->getdcbx) { 1987 + dcbx = netdev->dcbnl_ops->getdcbx(netdev); 1988 + 1989 + if (dcbx & DCB_CAP_DCBX_VER_IEEE) { 1990 + app.selector = IEEE_8021QAZ_APP_SEL_ETHERTYPE; 1991 + up = dcb_ieee_getapp_mask(netdev, &app); 1992 + app.protocol = ETH_P_FIP; 1993 + fup = dcb_ieee_getapp_mask(netdev, &app); 1994 + } else { 1995 + app.selector = DCB_APP_IDTYPE_ETHTYPE; 1996 + up = dcb_getapp(netdev, &app); 1997 + app.protocol = ETH_P_FIP; 1998 + fup = dcb_getapp(netdev, &app); 1999 + } 2000 + 2001 + port->priority = ffs(up) ? ffs(up) - 1 : 0; 2002 + fcoe->ctlr.priority = ffs(fup) ? ffs(fup) - 1 : port->priority; 2003 + } 2004 + #endif 2005 + } 2006 + 2007 + /** 2041 2008 * fcoe_create() - Create a fcoe interface 2042 2009 * @netdev : The net_device object the Ethernet interface to create on 2043 2010 * @fip_mode: The FIP mode for this creation ··· 2119 2006 2120 2007 /* Make this the "master" N_Port */ 2121 2008 fcoe->ctlr.lp = lport; 2009 + 2010 + /* setup DCB priority attributes. */ 2011 + fcoe_dcb_create(fcoe); 2122 2012 2123 2013 /* add to lports list */ 2124 2014 fcoe_hostlist_add(lport);
+4
drivers/scsi/fcoe/fcoe_ctlr.c
··· 320 320 321 321 skb_put(skb, sizeof(*sol)); 322 322 skb->protocol = htons(ETH_P_FIP); 323 + skb->priority = fip->priority; 323 324 skb_reset_mac_header(skb); 324 325 skb_reset_network_header(skb); 325 326 fip->send(fip, skb); ··· 475 474 } 476 475 skb_put(skb, len); 477 476 skb->protocol = htons(ETH_P_FIP); 477 + skb->priority = fip->priority; 478 478 skb_reset_mac_header(skb); 479 479 skb_reset_network_header(skb); 480 480 fip->send(fip, skb); ··· 568 566 cap->fip.fip_dl_len = htons(dlen / FIP_BPW); 569 567 570 568 skb->protocol = htons(ETH_P_FIP); 569 + skb->priority = fip->priority; 571 570 skb_reset_mac_header(skb); 572 571 skb_reset_network_header(skb); 573 572 return 0; ··· 1914 1911 1915 1912 skb_put(skb, len); 1916 1913 skb->protocol = htons(ETH_P_FIP); 1914 + skb->priority = fip->priority; 1917 1915 skb_reset_mac_header(skb); 1918 1916 skb_reset_network_header(skb); 1919 1917
+1 -1
drivers/scsi/mpt2sas/mpt2sas_scsih.c
···
 	/* insert into event log */
 	sz = offsetof(Mpi2EventNotificationReply_t, EventData) +
 	    sizeof(Mpi2EventDataSasDeviceStatusChange_t);
-	event_reply = kzalloc(sz, GFP_KERNEL);
+	event_reply = kzalloc(sz, GFP_ATOMIC);
 	if (!event_reply) {
 		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
 		    ioc->name, __FILE__, __LINE__, __func__);
+23 -4
drivers/scsi/qla2xxx/qla_attr.c
···
 	scsi_qla_host_t *vha = shost_priv(shost);
 	struct scsi_qla_host *base_vha = pci_get_drvdata(vha->hw->pdev);
 
-	if (!base_vha->flags.online)
+	if (!base_vha->flags.online) {
 		fc_host_port_state(shost) = FC_PORTSTATE_OFFLINE;
-	else if (atomic_read(&base_vha->loop_state) == LOOP_TIMEOUT)
-		fc_host_port_state(shost) = FC_PORTSTATE_UNKNOWN;
-	else
+		return;
+	}
+
+	switch (atomic_read(&base_vha->loop_state)) {
+	case LOOP_UPDATE:
+		fc_host_port_state(shost) = FC_PORTSTATE_DIAGNOSTICS;
+		break;
+	case LOOP_DOWN:
+		if (test_bit(LOOP_RESYNC_NEEDED, &base_vha->dpc_flags))
+			fc_host_port_state(shost) = FC_PORTSTATE_DIAGNOSTICS;
+		else
+			fc_host_port_state(shost) = FC_PORTSTATE_LINKDOWN;
+		break;
+	case LOOP_DEAD:
+		fc_host_port_state(shost) = FC_PORTSTATE_LINKDOWN;
+		break;
+	case LOOP_READY:
 		fc_host_port_state(shost) = FC_PORTSTATE_ONLINE;
+		break;
+	default:
+		fc_host_port_state(shost) = FC_PORTSTATE_UNKNOWN;
+		break;
+	}
 }
 
 static int
+4 -4
drivers/scsi/qla2xxx/qla_dbg.c
···
  * | Level                        | Last Value Used | Holes           |
  * ----------------------------------------------------------------------
  * | Module Init and Probe        | 0x0116          |                 |
- * | Mailbox commands             | 0x1129          |                 |
+ * | Mailbox commands             | 0x112b          |                 |
  * | Device Discovery             | 0x2083          |                 |
  * | Queue Command and IO tracing | 0x302e          | 0x3008          |
  * | DPC Thread                   | 0x401c          |                 |
  * | Async Events                 | 0x5059          |                 |
- * | Timer Routines               | 0x600d          |                 |
+ * | Timer Routines               | 0x6010          | 0x600e,0x600f   |
  * | User Space Interactions      | 0x709d          |                 |
- * | Task Management              | 0x8041          |                 |
+ * | Task Management              | 0x8041          | 0x800b          |
  * | AER/EEH                      | 0x900f          |                 |
  * | Virtual Port                 | 0xa007          |                 |
- * | ISP82XX Specific             | 0xb051          |                 |
+ * | ISP82XX Specific             | 0xb052          |                 |
  * | MultiQ                       | 0xc00b          |                 |
  * | Misc                         | 0xd00b          |                 |
  * ----------------------------------------------------------------------
+1
drivers/scsi/qla2xxx/qla_gbl.h
···
 extern void qla82xx_chip_reset_cleanup(scsi_qla_host_t *);
 extern int qla82xx_mbx_beacon_ctl(scsi_qla_host_t *, int);
 extern char *qdev_state(uint32_t);
+extern void qla82xx_clear_pending_mbx(scsi_qla_host_t *);
 
 /* BSG related functions */
 extern int qla24xx_bsg_request(struct fc_bsg_job *);
+2 -1
drivers/scsi/qla2xxx/qla_init.c
···
 			    &ha->fw_xcb_count, NULL, NULL,
 			    &ha->max_npiv_vports, NULL);
 
-			if (!fw_major_version && ql2xallocfwdump)
+			if (!fw_major_version && ql2xallocfwdump
+			    && !IS_QLA82XX(ha))
 				qla2x00_alloc_fw_dump(vha);
 		}
 	} else {
+8 -6
drivers/scsi/qla2xxx/qla_iocb.c
···
  * Returns a pointer to the continuation type 1 IOCB packet.
  */
 static inline cont_a64_entry_t *
-qla2x00_prep_cont_type1_iocb(scsi_qla_host_t *vha)
+qla2x00_prep_cont_type1_iocb(scsi_qla_host_t *vha, struct req_que *req)
 {
 	cont_a64_entry_t *cont_pkt;
 
-	struct req_que *req = vha->req;
 	/* Adjust ring index. */
 	req->ring_index++;
 	if (req->ring_index == req->length) {
···
 			 * Five DSDs are available in the Continuation
 			 * Type 1 IOCB.
 			 */
-			cont_pkt = qla2x00_prep_cont_type1_iocb(vha);
+			cont_pkt = qla2x00_prep_cont_type1_iocb(vha, vha->req);
 			cur_dsd = (uint32_t *)cont_pkt->dseg_0_address;
 			avail_dsds = 5;
 		}
···
 			 * Five DSDs are available in the Continuation
 			 * Type 1 IOCB.
 			 */
-			cont_pkt = qla2x00_prep_cont_type1_iocb(vha);
+			cont_pkt = qla2x00_prep_cont_type1_iocb(vha, vha->req);
 			cur_dsd = (uint32_t *)cont_pkt->dseg_0_address;
 			avail_dsds = 5;
 		}
···
 			 * Five DSDs are available in the Cont.
 			 * Type 1 IOCB.
 			 */
-			cont_pkt = qla2x00_prep_cont_type1_iocb(vha);
+			cont_pkt = qla2x00_prep_cont_type1_iocb(vha,
+			    vha->hw->req_q_map[0]);
 			cur_dsd = (uint32_t *) cont_pkt->dseg_0_address;
 			avail_dsds = 5;
 			cont_iocb_prsnt = 1;
···
 	int index;
 	uint16_t tot_dsds;
 	scsi_qla_host_t *vha = sp->fcport->vha;
+	struct qla_hw_data *ha = vha->hw;
 	struct fc_bsg_job *bsg_job = ((struct srb_ctx *)sp->ctx)->u.bsg_job;
 	int loop_iterartion = 0;
 	int cont_iocb_prsnt = 0;
···
 			 * Five DSDs are available in the Cont.
 			 * Type 1 IOCB.
 			 */
-			cont_pkt = qla2x00_prep_cont_type1_iocb(vha);
+			cont_pkt = qla2x00_prep_cont_type1_iocb(vha,
+			    ha->req_q_map[0]);
 			cur_dsd = (uint32_t *) cont_pkt->dseg_0_address;
 			avail_dsds = 5;
 			cont_iocb_prsnt = 1;
+1 -1
drivers/scsi/qla2xxx/qla_isr.c
···
 		    resid, scsi_bufflen(cp));
 
 		cp->result = DID_ERROR << 16 | lscsi_status;
-		break;
+		goto check_scsi_status;
 	}
 
 	if (!lscsi_status &&
+21 -4
drivers/scsi/qla2xxx/qla_mbx.c
···
 		mcp->mb[0] = MBS_LINK_DOWN_ERROR;
 		ql_log(ql_log_warn, base_vha, 0x1004,
 		    "FW hung = %d.\n", ha->flags.isp82xx_fw_hung);
-		rval = QLA_FUNCTION_FAILED;
-		goto premature_exit;
+		return QLA_FUNCTION_TIMEOUT;
 	}
 
 	/*
···
 			    HINT_MBX_INT_PENDING) {
 				spin_unlock_irqrestore(&ha->hardware_lock,
 				    flags);
+				ha->flags.mbox_busy = 0;
 				ql_dbg(ql_dbg_mbx, base_vha, 0x1010,
 				    "Pending mailbox timeout, exiting.\n");
 				rval = QLA_FUNCTION_TIMEOUT;
···
 			    HINT_MBX_INT_PENDING) {
 				spin_unlock_irqrestore(&ha->hardware_lock,
 				    flags);
+				ha->flags.mbox_busy = 0;
 				ql_dbg(ql_dbg_mbx, base_vha, 0x1012,
 				    "Pending mailbox timeout, exiting.\n");
 				rval = QLA_FUNCTION_TIMEOUT;
···
 		if (!test_bit(ISP_ABORT_NEEDED, &vha->dpc_flags) &&
 		    !test_bit(ABORT_ISP_ACTIVE, &vha->dpc_flags) &&
 		    !test_bit(ISP_ABORT_RETRY, &vha->dpc_flags)) {
-
+			if (IS_QLA82XX(ha)) {
+				ql_dbg(ql_dbg_mbx, vha, 0x112a,
+				    "disabling pause transmit on port "
+				    "0 & 1.\n");
+				qla82xx_wr_32(ha,
+				    QLA82XX_CRB_NIU + 0x98,
+				    CRB_NIU_XG_PAUSE_CTL_P0|
+				    CRB_NIU_XG_PAUSE_CTL_P1);
+			}
 			ql_log(ql_log_info, base_vha, 0x101c,
 			    "Mailbox cmd timeout occured. "
 			    "Scheduling ISP abort eeh_busy=0x%x.\n",
···
 		if (!test_bit(ISP_ABORT_NEEDED, &vha->dpc_flags) &&
 		    !test_bit(ABORT_ISP_ACTIVE, &vha->dpc_flags) &&
 		    !test_bit(ISP_ABORT_RETRY, &vha->dpc_flags)) {
-
+			if (IS_QLA82XX(ha)) {
+				ql_dbg(ql_dbg_mbx, vha, 0x112b,
+				    "disabling pause transmit on port "
+				    "0 & 1.\n");
+				qla82xx_wr_32(ha,
+				    QLA82XX_CRB_NIU + 0x98,
+				    CRB_NIU_XG_PAUSE_CTL_P0|
+				    CRB_NIU_XG_PAUSE_CTL_P1);
+			}
 			ql_log(ql_log_info, base_vha, 0x101e,
 			    "Mailbox cmd timeout occured. "
 			    "Scheduling ISP abort.\n");
+27 -15
drivers/scsi/qla2xxx/qla_nx.c
···
 	return rval;
 }
 
+void qla82xx_clear_pending_mbx(scsi_qla_host_t *vha)
+{
+	struct qla_hw_data *ha = vha->hw;
+
+	if (ha->flags.mbox_busy) {
+		ha->flags.mbox_int = 1;
+		ha->flags.mbox_busy = 0;
+		ql_log(ql_log_warn, vha, 0x6010,
+		    "Doing premature completion of mbx command.\n");
+		if (test_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags))
+			complete(&ha->mbx_intr_comp);
+	}
+}
+
 void qla82xx_watchdog(scsi_qla_host_t *vha)
 {
 	uint32_t dev_state, halt_status;
···
 		qla2xxx_wake_dpc(vha);
 	} else {
 		if (qla82xx_check_fw_alive(vha)) {
+			ql_dbg(ql_dbg_timer, vha, 0x6011,
+			    "disabling pause transmit on port 0 & 1.\n");
+			qla82xx_wr_32(ha, QLA82XX_CRB_NIU + 0x98,
+			    CRB_NIU_XG_PAUSE_CTL_P0|CRB_NIU_XG_PAUSE_CTL_P1);
 			halt_status = qla82xx_rd_32(ha,
 			    QLA82XX_PEG_HALT_STATUS1);
-			ql_dbg(ql_dbg_timer, vha, 0x6005,
+			ql_log(ql_log_info, vha, 0x6005,
 			    "dumping hw/fw registers:.\n "
 			    " PEG_HALT_STATUS1: 0x%x, PEG_HALT_STATUS2: 0x%x,.\n "
 			    " PEG_NET_0_PC: 0x%x, PEG_NET_1_PC: 0x%x,.\n "
···
 			    QLA82XX_CRB_PEG_NET_3 + 0x3c),
 			    qla82xx_rd_32(ha,
 			    QLA82XX_CRB_PEG_NET_4 + 0x3c));
+			if (LSW(MSB(halt_status)) == 0x67)
+				ql_log(ql_log_warn, vha, 0xb052,
+				    "Firmware aborted with "
+				    "error code 0x00006700. Device is "
+				    "being reset.\n");
 			if (halt_status & HALT_STATUS_UNRECOVERABLE) {
 				set_bit(ISP_UNRECOVERABLE,
 				    &vha->dpc_flags);
···
 			}
 			qla2xxx_wake_dpc(vha);
 			ha->flags.isp82xx_fw_hung = 1;
-			if (ha->flags.mbox_busy) {
-				ha->flags.mbox_int = 1;
-				ql_log(ql_log_warn, vha, 0x6007,
-				    "Due to FW hung, doing "
-				    "premature completion of mbx "
-				    "command.\n");
-				if (test_bit(MBX_INTR_WAIT,
-				    &ha->mbx_cmd_flags))
-					complete(&ha->mbx_intr_comp);
-			}
+			ql_log(ql_log_warn, vha, 0x6007, "Firmware hung.\n");
+			qla82xx_clear_pending_mbx(vha);
 		}
 	}
 }
···
 		msleep(1000);
 		if (qla82xx_check_fw_alive(vha)) {
 			ha->flags.isp82xx_fw_hung = 1;
-			if (ha->flags.mbox_busy) {
-				ha->flags.mbox_int = 1;
-				complete(&ha->mbx_intr_comp);
-			}
+			qla82xx_clear_pending_mbx(vha);
 			break;
 		}
 	}
+4
drivers/scsi/qla2xxx/qla_nx.h
···
 
 static const int MD_MIU_TEST_AGT_RDDATA[] = { 0x410000A8, 0x410000AC,
 	0x410000B8, 0x410000BC };
+
+#define CRB_NIU_XG_PAUSE_CTL_P0	0x1
+#define CRB_NIU_XG_PAUSE_CTL_P1	0x8
+
 #endif
+10 -76
drivers/scsi/qla2xxx/qla_os.c
···
     "Set the Minidump driver capture mask level. "
     "Default is 0x7F - Can be set to 0x3, 0x7, 0xF, 0x1F, 0x7F.");
 
-int ql2xmdenable;
+int ql2xmdenable = 1;
 module_param(ql2xmdenable, int, S_IRUGO);
 MODULE_PARM_DESC(ql2xmdenable,
 		"Enable/disable MiniDump. "
-		"0 (Default) - MiniDump disabled. "
-		"1 - MiniDump enabled.");
+		"0 - MiniDump disabled. "
+		"1 (Default) - MiniDump enabled.");
 
 /*
  * SCSI host template entry points
···
 	qla25xx_delete_queues(vha);
 	destroy_workqueue(ha->wq);
 	ha->wq = NULL;
+	vha->req = ha->req_q_map[0];
 fail:
 	ha->mqenable = 0;
 	kfree(ha->req_q_map);
···
 	return return_status;
 }
 
-/*
- * qla2x00_wait_for_loop_ready
- *	 Wait for MAX_LOOP_TIMEOUT(5 min) value for loop
- *	 to be in LOOP_READY state.
- * Input:
- *	 ha - pointer to host adapter structure
- *
- * Note:
- *	 Does context switching-Release SPIN_LOCK
- *	 (if any) before calling this routine.
- *
- *
- * Return:
- *	 Success (LOOP_READY) : 0
- *	 Failed  (LOOP_NOT_READY) : 1
- */
-static inline int
-qla2x00_wait_for_loop_ready(scsi_qla_host_t *vha)
-{
-	int	 return_status = QLA_SUCCESS;
-	unsigned long loop_timeout ;
-	struct qla_hw_data *ha = vha->hw;
-	scsi_qla_host_t *base_vha = pci_get_drvdata(ha->pdev);
-
-	/* wait for 5 min at the max for loop to be ready */
-	loop_timeout = jiffies + (MAX_LOOP_TIMEOUT * HZ);
-
-	while ((!atomic_read(&base_vha->loop_down_timer) &&
-	    atomic_read(&base_vha->loop_state) == LOOP_DOWN) ||
-	    atomic_read(&base_vha->loop_state) != LOOP_READY) {
-		if (atomic_read(&base_vha->loop_state) == LOOP_DEAD) {
-			return_status = QLA_FUNCTION_FAILED;
-			break;
-		}
-		msleep(1000);
-		if (time_after_eq(jiffies, loop_timeout)) {
-			return_status = QLA_FUNCTION_FAILED;
-			break;
-		}
-	}
-	return (return_status);
-}
-
 static void
 sp_get(struct srb *sp)
 {
···
 		    "Wait for hba online failed for cmd=%p.\n", cmd);
 		goto eh_reset_failed;
 	}
-	err = 1;
-	if (qla2x00_wait_for_loop_ready(vha) != QLA_SUCCESS) {
-		ql_log(ql_log_warn, vha, 0x800b,
-		    "Wait for loop ready failed for cmd=%p.\n", cmd);
-		goto eh_reset_failed;
-	}
 	err = 2;
 	if (do_reset(fcport, cmd->device->lun, cmd->request->cpu + 1)
 	    != QLA_SUCCESS) {
···
 		goto eh_bus_reset_done;
 	}
 
-	if (qla2x00_wait_for_loop_ready(vha) == QLA_SUCCESS) {
-		if (qla2x00_loop_reset(vha) == QLA_SUCCESS)
-			ret = SUCCESS;
-	}
+	if (qla2x00_loop_reset(vha) == QLA_SUCCESS)
+		ret = SUCCESS;
+
 	if (ret == FAILED)
 		goto eh_bus_reset_done;
···
 	if (qla2x00_wait_for_reset_ready(vha) != QLA_SUCCESS)
 		goto eh_host_reset_lock;
 
-	/*
-	 * Fixme-may be dpc thread is active and processing
-	 * loop_resync,so wait a while for it to
-	 * be completed and then issue big hammer.Otherwise
-	 * it may cause I/O failure as big hammer marks the
-	 * devices as lost kicking of the port_down_timer
-	 * while dpc is stuck for the mailbox to complete.
-	 */
-	qla2x00_wait_for_loop_ready(vha);
 	if (vha != base_vha) {
 		if (qla2x00_vp_abort_isp(vha))
 			goto eh_host_reset_lock;
···
 		atomic_set(&vha->loop_state, LOOP_DOWN);
 		atomic_set(&vha->loop_down_timer, LOOP_DOWN_TIME);
 		qla2x00_mark_all_devices_lost(vha, 0);
-		qla2x00_wait_for_loop_ready(vha);
 	}
 
 	if (ha->flags.enable_lip_reset) {
 		ret = qla2x00_lip_reset(vha);
-		if (ret != QLA_SUCCESS) {
+		if (ret != QLA_SUCCESS)
 			ql_dbg(ql_dbg_taskm, vha, 0x802e,
 			    "lip_reset failed (%d).\n", ret);
-		} else
-			qla2x00_wait_for_loop_ready(vha);
 	}
 
 	/* Issue marker command only when we are going to start the I/O */
···
 	/* For ISP82XX complete any pending mailbox cmd */
 	if (IS_QLA82XX(ha)) {
 		ha->flags.isp82xx_fw_hung = 1;
-		if (ha->flags.mbox_busy) {
-			ha->flags.mbox_int = 1;
-			ql_dbg(ql_dbg_aer, vha, 0x9001,
-			    "Due to pci channel io frozen, doing premature "
-			    "completion of mbx command.\n");
-			complete(&ha->mbx_intr_comp);
-		}
+		ql_dbg(ql_dbg_aer, vha, 0x9001, "Pci channel io frozen\n");
+		qla82xx_clear_pending_mbx(vha);
 	}
 	qla2x00_free_irqs(vha);
 	pci_disable_device(pdev);
+1 -1
drivers/scsi/qla2xxx/qla_version.h
···
 /*
  * Driver version
  */
-#define QLA2XXX_VERSION      "8.03.07.07-k"
+#define QLA2XXX_VERSION      "8.03.07.12-k"
 
 #define QLA_DRIVER_MAJOR_VER	8
 #define QLA_DRIVER_MINOR_VER	3
+53 -2
drivers/scsi/qla4xxx/ql4_def.h
···
 #define ISCSI_ALIAS_SIZE		32	/* ISCSI Alias name size */
 #define ISCSI_NAME_SIZE			0xE0	/* ISCSI Name size */
 
-#define QL4_SESS_RECOVERY_TMO		30	/* iSCSI session */
+#define QL4_SESS_RECOVERY_TMO		120	/* iSCSI session */
 						/* recovery timeout */
 
 #define LSDW(x) ((u32)((u64)(x)))
···
 #define ISNS_DEREG_TOV			5
 #define HBA_ONLINE_TOV			30
 #define DISABLE_ACB_TOV			30
+#define IP_CONFIG_TOV			30
+#define LOGIN_TOV			12
 
 #define MAX_RESET_HA_RETRIES		2
···
 
 	uint16_t fw_ddb_index;	/* DDB firmware index */
 	uint32_t fw_ddb_device_state; /* F/W Device State  -- see ql4_fw.h */
+	uint16_t ddb_type;
+#define FLASH_DDB 0x01
+
+	struct dev_db_entry fw_ddb_entry;
+	int (*unblock_sess)(struct iscsi_cls_session *cls_session);
+	int (*ddb_change)(struct scsi_qla_host *ha, uint32_t fw_ddb_index,
+			  struct ddb_entry *ddb_entry, uint32_t state);
+
+	/* Driver Re-login */
+	unsigned long flags;		  /* DDB Flags */
+	uint16_t default_relogin_timeout; /* Max time to wait for
+					   * relogin to complete */
+	atomic_t retry_relogin_timer;	  /* Min Time between relogins
+					   * (4000 only) */
+	atomic_t relogin_timer;		  /* Max Time to wait for
+					   * relogin to complete */
+	atomic_t relogin_retry_count;	  /* Num of times relogin has been
+					   * retried */
+	uint32_t default_time2wait;	  /* Default Min time between
+					   * relogins (+aens) */
+
+};
+
+struct qla_ddb_index {
+	struct list_head list;
+	uint16_t fw_ddb_idx;
+	struct dev_db_entry fw_ddb;
+};
+
+#define DDB_IPADDR_LEN 64
+
+struct ql4_tuple_ddb {
+	int port;
+	int tpgt;
+	char ip_addr[DDB_IPADDR_LEN];
+	char iscsi_name[ISCSI_NAME_SIZE];
+	uint16_t options;
+#define DDB_OPT_IPV6 0x0e0e
+#define DDB_OPT_IPV4 0x0f0f
 };
 
 /*
···
 #define AF_FW_RECOVERY			19 /* 0x00080000 */
 #define AF_EEH_BUSY			20 /* 0x00100000 */
 #define AF_PCI_CHANNEL_IO_PERM_FAILURE	21 /* 0x00200000 */
-
+#define AF_BUILD_DDB_LIST		22 /* 0x00400000 */
 	unsigned long dpc_flags;
 
 #define DPC_RESET_HA			1 /* 0x00000002 */
···
 	uint16_t bootload_minor;
 	uint16_t bootload_patch;
 	uint16_t bootload_build;
+	uint16_t def_timeout;		/* Default login timeout */
 
 	uint32_t flash_state;
 #define	QLFLASH_WAITING		0
···
 	uint16_t iscsi_pci_func_cnt;
 	uint8_t model_name[16];
 	struct completion disable_acb_comp;
+	struct dma_pool *fw_ddb_dma_pool;
+#define DDB_DMA_BLOCK_SIZE 512
+	uint16_t pri_ddb_idx;
+	uint16_t sec_ddb_idx;
+	int is_reset;
 };
 
 struct ql4_task_data {
···
 /*---------------------------------------------------------------------------*/
 
 /* Defines for qla4xxx_initialize_adapter() and qla4xxx_recover_adapter() */
+
+#define INIT_ADAPTER    0
+#define RESET_ADAPTER   1
+
 #define PRESERVE_DDB_LIST	0
 #define REBUILD_DDB_LIST	1
+8
drivers/scsi/qla4xxx/ql4_fw.h
···
 #define MAX_PRST_DEV_DB_ENTRIES		64
 #define MIN_DISC_DEV_DB_ENTRY		MAX_PRST_DEV_DB_ENTRIES
 #define MAX_DEV_DB_ENTRIES		512
+#define MAX_DEV_DB_ENTRIES_40XX		256
 
 /*************************************************************************
  *
···
 	uint32_t ipv6_gw_advrt_mtu;	/* 270-273 */
 	uint8_t res14[140];	/* 274-2FF */
 };
+
+#define IP_ADDR_COUNT	4 /* Total 4 IP address supported in one interface
+			   * One IPv4, one IPv6 link local and 2 IPv6
+			   */
+
+#define IP_STATE_MASK	0x0F000000
+#define IP_STATE_SHIFT	24
 
 struct init_fw_ctrl_blk {
 	struct addr_ctrl_blk pri;
+15 -1
drivers/scsi/qla4xxx/ql4_glbl.h
···
 int qla4xxx_hw_reset(struct scsi_qla_host *ha);
 int ql4xxx_lock_drvr_wait(struct scsi_qla_host *a);
 int qla4xxx_send_command_to_isp(struct scsi_qla_host *ha, struct srb *srb);
-int qla4xxx_initialize_adapter(struct scsi_qla_host *ha);
+int qla4xxx_initialize_adapter(struct scsi_qla_host *ha, int is_reset);
 int qla4xxx_soft_reset(struct scsi_qla_host *ha);
 irqreturn_t qla4xxx_intr_handler(int irq, void *dev_id);
 
···
 			   uint32_t *mbx_sts);
 int qla4xxx_clear_ddb_entry(struct scsi_qla_host *ha, uint32_t fw_ddb_index);
 int qla4xxx_send_passthru0(struct iscsi_task *task);
+void qla4xxx_free_ddb_index(struct scsi_qla_host *ha);
 int qla4xxx_get_mgmt_data(struct scsi_qla_host *ha, uint16_t fw_ddb_index,
 			  uint16_t stats_size, dma_addr_t stats_dma);
 void qla4xxx_update_session_conn_param(struct scsi_qla_host *ha,
 				       struct ddb_entry *ddb_entry);
+void qla4xxx_update_session_conn_fwddb_param(struct scsi_qla_host *ha,
+					     struct ddb_entry *ddb_entry);
 int qla4xxx_bootdb_by_index(struct scsi_qla_host *ha,
 			    struct dev_db_entry *fw_ddb_entry,
 			    dma_addr_t fw_ddb_entry_dma, uint16_t ddb_index);
···
 int qla4xxx_restore_factory_defaults(struct scsi_qla_host *ha,
 				     uint32_t region, uint32_t field0,
 				     uint32_t field1);
+int qla4xxx_get_ddb_index(struct scsi_qla_host *ha, uint16_t *ddb_index);
+void qla4xxx_login_flash_ddb(struct iscsi_cls_session *cls_session);
+int qla4xxx_unblock_ddb(struct iscsi_cls_session *cls_session);
+int qla4xxx_unblock_flash_ddb(struct iscsi_cls_session *cls_session);
+int qla4xxx_flash_ddb_change(struct scsi_qla_host *ha, uint32_t fw_ddb_index,
+			     struct ddb_entry *ddb_entry, uint32_t state);
+int qla4xxx_ddb_change(struct scsi_qla_host *ha, uint32_t fw_ddb_index,
+		       struct ddb_entry *ddb_entry, uint32_t state);
+void qla4xxx_build_ddb_list(struct scsi_qla_host *ha, int is_reset);
 
 /* BSG Functions */
 int qla4xxx_bsg_request(struct bsg_job *bsg_job);
 int qla4xxx_process_vendor_specific(struct bsg_job *bsg_job);
+
+void qla4xxx_arm_relogin_timer(struct ddb_entry *ddb_entry);
 
 extern int ql4xextended_error_logging;
 extern int ql4xdontresethba;
+203 -40
drivers/scsi/qla4xxx/ql4_init.c
···
  * be freed so that when login happens from user space there are free DDB
  * indices available.
  **/
-static void qla4xxx_free_ddb_index(struct scsi_qla_host *ha)
+void qla4xxx_free_ddb_index(struct scsi_qla_host *ha)
 {
 	int max_ddbs;
 	int ret;
 	uint32_t idx = 0, next_idx = 0;
 	uint32_t state = 0, conn_err = 0;
 
-	max_ddbs =  is_qla40XX(ha) ? MAX_PRST_DEV_DB_ENTRIES :
+	max_ddbs =  is_qla40XX(ha) ? MAX_DEV_DB_ENTRIES_40XX :
 				     MAX_DEV_DB_ENTRIES;
 
 	for (idx = 0; idx < max_ddbs; idx = next_idx) {
 		ret = qla4xxx_get_fwddb_entry(ha, idx, NULL, 0, NULL,
 					      &next_idx, &state, &conn_err,
 					      NULL, NULL);
-		if (ret == QLA_ERROR)
+		if (ret == QLA_ERROR) {
+			next_idx++;
 			continue;
+		}
 		if (state == DDB_DS_NO_CONNECTION_ACTIVE ||
 		    state == DDB_DS_SESSION_FAILED) {
 			DEBUG2(ql4_printk(KERN_INFO, ha,
···
 	}
 }
 
-
 /**
  * qla4xxx_initialize_adapter - initiailizes hba
  * @ha: Pointer to host adapter structure.
···
  * This routine parforms all of the steps necessary to initialize the adapter.
  *
  **/
-int qla4xxx_initialize_adapter(struct scsi_qla_host *ha)
+int qla4xxx_initialize_adapter(struct scsi_qla_host *ha, int is_reset)
 {
 	int status = QLA_ERROR;
···
 	if (status == QLA_ERROR)
 		goto exit_init_hba;
 
-	qla4xxx_free_ddb_index(ha);
+	if (is_reset == RESET_ADAPTER)
+		qla4xxx_build_ddb_list(ha, is_reset);
 
 	set_bit(AF_ONLINE, &ha->flags);
 exit_init_hba:
···
 	return status;
 }
 
-/**
- * qla4xxx_process_ddb_changed - process ddb state change
- * @ha - Pointer to host adapter structure.
- * @fw_ddb_index - Firmware's device database index
- * @state - Device state
- *
- * This routine processes a Decive Database Changed AEN Event.
- **/
-int qla4xxx_process_ddb_changed(struct scsi_qla_host *ha, uint32_t fw_ddb_index,
-		uint32_t state, uint32_t conn_err)
+int qla4xxx_ddb_change(struct scsi_qla_host *ha, uint32_t fw_ddb_index,
+		       struct ddb_entry *ddb_entry, uint32_t state)
 {
-	struct ddb_entry * ddb_entry;
 	uint32_t old_fw_ddb_device_state;
 	int status = QLA_ERROR;
-
-	/* check for out of range index */
-	if (fw_ddb_index >= MAX_DDB_ENTRIES)
-		goto exit_ddb_event;
-
-	/* Get the corresponging ddb entry */
-	ddb_entry = qla4xxx_lookup_ddb_by_fw_index(ha, fw_ddb_index);
-	/* Device does not currently exist in our database. */
-	if (ddb_entry == NULL) {
-		ql4_printk(KERN_ERR, ha, "%s: No ddb_entry at FW index [%d]\n",
-			   __func__, fw_ddb_index);
-
-		if (state == DDB_DS_NO_CONNECTION_ACTIVE)
-			clear_bit(fw_ddb_index, ha->ddb_idx_map);
-
-		goto exit_ddb_event;
-	}
 
 	old_fw_ddb_device_state = ddb_entry->fw_ddb_device_state;
 	DEBUG2(ql4_printk(KERN_INFO, ha,
···
 		switch (state) {
 		case DDB_DS_SESSION_ACTIVE:
 		case DDB_DS_DISCOVERY:
-			iscsi_conn_start(ddb_entry->conn);
-			iscsi_conn_login_event(ddb_entry->conn,
-					       ISCSI_CONN_STATE_LOGGED_IN);
+			ddb_entry->unblock_sess(ddb_entry->sess);
 			qla4xxx_update_session_conn_param(ha, ddb_entry);
 			status = QLA_SUCCESS;
 			break;
···
 		switch (state) {
 		case DDB_DS_SESSION_ACTIVE:
 		case DDB_DS_DISCOVERY:
-			iscsi_conn_start(ddb_entry->conn);
-			iscsi_conn_login_event(ddb_entry->conn,
-					       ISCSI_CONN_STATE_LOGGED_IN);
+			ddb_entry->unblock_sess(ddb_entry->sess);
 			qla4xxx_update_session_conn_param(ha, ddb_entry);
 			status = QLA_SUCCESS;
 			break;
···
 				  __func__));
 		break;
 	}
+	return status;
+}
+
+void qla4xxx_arm_relogin_timer(struct ddb_entry *ddb_entry)
+{
+	/*
+	 * This triggers a relogin.  After the relogin_timer
+	 * expires, the relogin gets scheduled.  We must wait a
+	 * minimum amount of time since receiving an 0x8014 AEN
+	 * with failed device_state or a logout response before
+	 * we can issue another relogin.
+	 *
+	 * Firmware pads this timeout: (time2wait +1).
+	 * Driver retry to login should be longer than F/W.
+	 * Otherwise F/W will fail
+	 * set_ddb() mbx cmd with 0x4005 since it still
+	 * counting down its time2wait.
+	 */
+	atomic_set(&ddb_entry->relogin_timer, 0);
+	atomic_set(&ddb_entry->retry_relogin_timer,
+		   ddb_entry->default_time2wait + 4);
+
+}
+
+int qla4xxx_flash_ddb_change(struct scsi_qla_host *ha, uint32_t fw_ddb_index,
+			     struct ddb_entry *ddb_entry, uint32_t state)
+{
+	uint32_t old_fw_ddb_device_state;
+	int status = QLA_ERROR;
+
+	old_fw_ddb_device_state = ddb_entry->fw_ddb_device_state;
+	DEBUG2(ql4_printk(KERN_INFO, ha,
+			  "%s: DDB - old state = 0x%x, new state = 0x%x for "
+			  "index [%d]\n", __func__,
+			  ddb_entry->fw_ddb_device_state, state, fw_ddb_index));
+
+	ddb_entry->fw_ddb_device_state = state;
+
+	switch (old_fw_ddb_device_state) {
+	case DDB_DS_LOGIN_IN_PROCESS:
+	case DDB_DS_NO_CONNECTION_ACTIVE:
+		switch (state) {
+		case DDB_DS_SESSION_ACTIVE:
+			ddb_entry->unblock_sess(ddb_entry->sess);
+			qla4xxx_update_session_conn_fwddb_param(ha, ddb_entry);
+			status = QLA_SUCCESS;
+			break;
+		case DDB_DS_SESSION_FAILED:
+			iscsi_block_session(ddb_entry->sess);
+			if (!test_bit(DF_RELOGIN, &ddb_entry->flags))
+				qla4xxx_arm_relogin_timer(ddb_entry);
+			status = QLA_SUCCESS;
+			break;
+		}
+		break;
+	case DDB_DS_SESSION_ACTIVE:
+		switch (state) {
+		case DDB_DS_SESSION_FAILED:
+			iscsi_block_session(ddb_entry->sess);
+			if (!test_bit(DF_RELOGIN, &ddb_entry->flags))
+				qla4xxx_arm_relogin_timer(ddb_entry);
+			status = QLA_SUCCESS;
+			break;
+		}
+		break;
+	case DDB_DS_SESSION_FAILED:
+		switch (state) {
+		case DDB_DS_SESSION_ACTIVE:
+			ddb_entry->unblock_sess(ddb_entry->sess);
+			qla4xxx_update_session_conn_fwddb_param(ha, ddb_entry);
+			status = QLA_SUCCESS;
+			break;
+		case DDB_DS_SESSION_FAILED:
+			if (!test_bit(DF_RELOGIN, &ddb_entry->flags))
+				qla4xxx_arm_relogin_timer(ddb_entry);
+			status = QLA_SUCCESS;
+			break;
+		}
+		break;
+	default:
+		DEBUG2(ql4_printk(KERN_INFO, ha, "%s: Unknown Event\n",
+				  __func__));
+		break;
+	}
+	return status;
+}
+
+/**
+ * qla4xxx_process_ddb_changed - process ddb state change
+ * @ha - Pointer to host adapter structure.
+ * @fw_ddb_index - Firmware's device database index
+ * @state - Device state
+ *
+ * This routine processes a Decive Database Changed AEN Event.
+ **/
+int qla4xxx_process_ddb_changed(struct scsi_qla_host *ha,
+				uint32_t fw_ddb_index,
+				uint32_t state, uint32_t conn_err)
+{
+	struct ddb_entry *ddb_entry;
+	int status = QLA_ERROR;
+
+	/* check for out of range index */
+	if (fw_ddb_index >= MAX_DDB_ENTRIES)
+		goto exit_ddb_event;
+
+	/* Get the corresponging ddb entry */
+	ddb_entry = qla4xxx_lookup_ddb_by_fw_index(ha, fw_ddb_index);
+	/* Device does not currently exist in our database. */
+	if (ddb_entry == NULL) {
+		ql4_printk(KERN_ERR, ha, "%s: No ddb_entry at FW index [%d]\n",
+			   __func__, fw_ddb_index);
+
+		if (state == DDB_DS_NO_CONNECTION_ACTIVE)
+			clear_bit(fw_ddb_index, ha->ddb_idx_map);
+
+		goto exit_ddb_event;
+	}
+
+	ddb_entry->ddb_change(ha, fw_ddb_index, ddb_entry, state);
 
 exit_ddb_event:
 	return status;
 }
+
+/**
+ * qla4xxx_login_flash_ddb - Login to target (DDB)
+ * @cls_session: Pointer to the session to login
+ *
+ * This routine logins to the target.
+ * Issues setddb and conn open mbx
+ **/
+void qla4xxx_login_flash_ddb(struct iscsi_cls_session *cls_session)
+{
+	struct iscsi_session *sess;
+	struct ddb_entry *ddb_entry;
+	struct scsi_qla_host *ha;
+	struct dev_db_entry *fw_ddb_entry = NULL;
+	dma_addr_t fw_ddb_dma;
+	uint32_t mbx_sts = 0;
+	int ret;
+
+	sess = cls_session->dd_data;
+	ddb_entry = sess->dd_data;
+	ha = ddb_entry->ha;
+
+	if (!test_bit(AF_LINK_UP, &ha->flags))
+		return;
+
+	if (ddb_entry->ddb_type != FLASH_DDB) {
+		DEBUG2(ql4_printk(KERN_INFO, ha,
+				  "Skipping login to non FLASH DB"));
+		goto exit_login;
+	}
+
+	fw_ddb_entry = dma_pool_alloc(ha->fw_ddb_dma_pool, GFP_KERNEL,
+				      &fw_ddb_dma);
+	if (fw_ddb_entry == NULL) {
+		DEBUG2(ql4_printk(KERN_ERR, ha, "Out of memory\n"));
+		goto exit_login;
+	}
+
+	if (ddb_entry->fw_ddb_index == INVALID_ENTRY) {
+		ret = qla4xxx_get_ddb_index(ha, &ddb_entry->fw_ddb_index);
+		if (ret == QLA_ERROR)
+			goto exit_login;
+
+		ha->fw_ddb_index_map[ddb_entry->fw_ddb_index] = ddb_entry;
+		ha->tot_ddbs++;
+	}
+
+	memcpy(fw_ddb_entry, &ddb_entry->fw_ddb_entry,
+	       sizeof(struct dev_db_entry));
+	ddb_entry->sess->target_id = ddb_entry->fw_ddb_index;
+
+	ret = qla4xxx_set_ddb_entry(ha, ddb_entry->fw_ddb_index,
+				    fw_ddb_dma, &mbx_sts);
+	if (ret == QLA_ERROR) {
+		DEBUG2(ql4_printk(KERN_ERR, ha, "Set DDB failed\n"));
+		goto exit_login;
+	}
+
+	ddb_entry->fw_ddb_device_state = DDB_DS_LOGIN_IN_PROCESS;
+	ret = qla4xxx_conn_open(ha, ddb_entry->fw_ddb_index);
+	if (ret == QLA_ERROR) {
+		ql4_printk(KERN_ERR, ha, "%s: Login failed: %s\n", __func__,
+			   sess->targetname);
+		goto exit_login;
+	}
+
+exit_login:
+	if (fw_ddb_entry)
+		dma_pool_free(ha->fw_ddb_dma_pool, fw_ddb_entry, fw_ddb_dma);
+}
+
+11
drivers/scsi/qla4xxx/ql4_mbx.c
···
 		return status;
 	}
 
+	if (is_qla40XX(ha)) {
+		if (test_bit(AF_HA_REMOVAL, &ha->flags)) {
+			DEBUG2(ql4_printk(KERN_WARNING, ha, "scsi%ld: %s: "
+					  "prematurely completing mbx cmd as "
+					  "adapter removal detected\n",
+					  ha->host_no, __func__));
+			return status;
+		}
+	}
+
 	if (is_qla8022(ha)) {
 		if (test_bit(AF_FW_RECOVERY, &ha->flags)) {
 			DEBUG2(ql4_printk(KERN_WARNING, ha, "scsi%ld: %s: "
···
 	memcpy(ha->name_string, init_fw_cb->iscsi_name,
 	       min(sizeof(ha->name_string),
 		   sizeof(init_fw_cb->iscsi_name)));
+	ha->def_timeout = le16_to_cpu(init_fw_cb->def_timeout);
 	/*memcpy(ha->alias, init_fw_cb->Alias,
 	       min(sizeof(ha->alias), sizeof(init_fw_cb->Alias)));*/
 
+1030 -54
drivers/scsi/qla4xxx/ql4_os.c
···
8 8 #include <linux/slab.h>
9 9 #include <linux/blkdev.h>
10 10 #include <linux/iscsi_boot_sysfs.h>
11 + #include <linux/inet.h>
11 12 
12 13 #include <scsi/scsi_tcq.h>
13 14 #include <scsi/scsicam.h>
···
32 31 /*
33 32  * Module parameter information and variables
34 33  */
34 + int ql4xdisablesysfsboot = 1;
35 + module_param(ql4xdisablesysfsboot, int, S_IRUGO | S_IWUSR);
36 + MODULE_PARM_DESC(ql4xdisablesysfsboot,
37 + 		 "Set to disable exporting boot targets to sysfs\n"
38 + 		 " 0 - Export boot targets\n"
39 + 		 " 1 - Do not export boot targets (Default)");
40 + 
35 41 int ql4xdontresethba = 0;
36 42 module_param(ql4xdontresethba, int, S_IRUGO | S_IWUSR);
37 43 MODULE_PARM_DESC(ql4xdontresethba,
···
71 63 module_param(ql4xsess_recovery_tmo, int, S_IRUGO);
72 64 MODULE_PARM_DESC(ql4xsess_recovery_tmo,
73 65 		"Target Session Recovery Timeout.\n"
74 - 		" Default: 30 sec.");
66 + 		" Default: 120 sec.");
75 67 
76 68 static int qla4xxx_wait_for_hba_online(struct scsi_qla_host *ha);
77 69 /*
···
423 415 	qla_ep = ep->dd_data;
424 416 	ha = to_qla_host(qla_ep->host);
425 417 
426 - 	if (adapter_up(ha))
418 + 	if (adapter_up(ha) && !test_bit(AF_BUILD_DDB_LIST, &ha->flags))
427 419 		ret = 1;
428 420 
429 421 	return ret;
···
983 975 
984 976 }
985 977 
978 + int qla4xxx_get_ddb_index(struct scsi_qla_host *ha, uint16_t *ddb_index)
979 + {
980 + 	uint32_t mbx_sts = 0;
981 + 	uint16_t tmp_ddb_index;
982 + 	int ret;
983 + 
984 + get_ddb_index:
985 + 	tmp_ddb_index = find_first_zero_bit(ha->ddb_idx_map, MAX_DDB_ENTRIES);
986 + 
987 + 	if (tmp_ddb_index >= MAX_DDB_ENTRIES) {
988 + 		DEBUG2(ql4_printk(KERN_INFO, ha,
989 + 				  "Free DDB index not available\n"));
990 + 		ret = QLA_ERROR;
991 + 		goto exit_get_ddb_index;
992 + 	}
993 + 
994 + 	if (test_and_set_bit(tmp_ddb_index, ha->ddb_idx_map))
995 + 		goto get_ddb_index;
996 + 
997 + 	DEBUG2(ql4_printk(KERN_INFO, ha,
998 + 			  "Found a free DDB index at %d\n", tmp_ddb_index));
999 + 	ret = qla4xxx_req_ddb_entry(ha, tmp_ddb_index, &mbx_sts);
1000 + 	if (ret == QLA_ERROR) {
1001 + 		if (mbx_sts == MBOX_STS_COMMAND_ERROR) {
1002 + 			ql4_printk(KERN_INFO, ha,
1003 + 				   "DDB index = %d not available trying next\n",
1004 + 				   tmp_ddb_index);
1005 + 			goto get_ddb_index;
1006 + 		}
1007 + 		DEBUG2(ql4_printk(KERN_INFO, ha,
1008 + 				  "Free FW DDB not available\n"));
1009 + 	}
1010 + 
1011 + 	*ddb_index = tmp_ddb_index;
1012 + 
1013 + exit_get_ddb_index:
1014 + 	return ret;
1015 + }
1016 + 
1017 + static int qla4xxx_match_ipaddress(struct scsi_qla_host *ha,
1018 + 				   struct ddb_entry *ddb_entry,
1019 + 				   char *existing_ipaddr,
1020 + 				   char *user_ipaddr)
1021 + {
1022 + 	uint8_t dst_ipaddr[IPv6_ADDR_LEN];
1023 + 	char formatted_ipaddr[DDB_IPADDR_LEN];
1024 + 	int status = QLA_SUCCESS, ret = 0;
1025 + 
1026 + 	if (ddb_entry->fw_ddb_entry.options & DDB_OPT_IPV6_DEVICE) {
1027 + 		ret = in6_pton(user_ipaddr, strlen(user_ipaddr), dst_ipaddr,
1028 + 			       '\0', NULL);
1029 + 		if (ret == 0) {
1030 + 			status = QLA_ERROR;
1031 + 			goto out_match;
1032 + 		}
1033 + 		ret = sprintf(formatted_ipaddr, "%pI6", dst_ipaddr);
1034 + 	} else {
1035 + 		ret = in4_pton(user_ipaddr, strlen(user_ipaddr), dst_ipaddr,
1036 + 			       '\0', NULL);
1037 + 		if (ret == 0) {
1038 + 			status = QLA_ERROR;
1039 + 			goto out_match;
1040 + 		}
1041 + 		ret = sprintf(formatted_ipaddr, "%pI4", dst_ipaddr);
1042 + 	}
1043 + 
1044 + 	if (strcmp(existing_ipaddr, formatted_ipaddr))
1045 + 		status = QLA_ERROR;
1046 + 
1047 + out_match:
1048 + 	return status;
1049 + }
1050 + 
1051 + static int qla4xxx_match_fwdb_session(struct scsi_qla_host *ha,
1052 + 				      struct iscsi_cls_conn *cls_conn)
1053 + {
1054 + 	int idx = 0, max_ddbs, rval;
1055 + 	struct iscsi_cls_session *cls_sess = iscsi_conn_to_session(cls_conn);
1056 + 	struct iscsi_session *sess, *existing_sess;
1057 + 	struct iscsi_conn *conn, *existing_conn;
1058 + 	struct ddb_entry *ddb_entry;
1059 + 
1060 + 	sess = cls_sess->dd_data;
1061 + 	conn = cls_conn->dd_data;
1062 + 
1063 + 	if (sess->targetname == NULL ||
1064 + 	    conn->persistent_address == NULL ||
1065 + 	    conn->persistent_port == 0)
1066 + 		return QLA_ERROR;
1067 + 
1068 + 	max_ddbs = is_qla40XX(ha) ? MAX_DEV_DB_ENTRIES_40XX :
1069 + 				    MAX_DEV_DB_ENTRIES;
1070 + 
1071 + 	for (idx = 0; idx < max_ddbs; idx++) {
1072 + 		ddb_entry = qla4xxx_lookup_ddb_by_fw_index(ha, idx);
1073 + 		if (ddb_entry == NULL)
1074 + 			continue;
1075 + 
1076 + 		if (ddb_entry->ddb_type != FLASH_DDB)
1077 + 			continue;
1078 + 
1079 + 		existing_sess = ddb_entry->sess->dd_data;
1080 + 		existing_conn = ddb_entry->conn->dd_data;
1081 + 
1082 + 		if (existing_sess->targetname == NULL ||
1083 + 		    existing_conn->persistent_address == NULL ||
1084 + 		    existing_conn->persistent_port == 0)
1085 + 			continue;
1086 + 
1087 + 		DEBUG2(ql4_printk(KERN_INFO, ha,
1088 + 				  "IQN = %s User IQN = %s\n",
1089 + 				  existing_sess->targetname,
1090 + 				  sess->targetname));
1091 + 
1092 + 		DEBUG2(ql4_printk(KERN_INFO, ha,
1093 + 				  "IP = %s User IP = %s\n",
1094 + 				  existing_conn->persistent_address,
1095 + 				  conn->persistent_address));
1096 + 
1097 + 		DEBUG2(ql4_printk(KERN_INFO, ha,
1098 + 				  "Port = %d User Port = %d\n",
1099 + 				  existing_conn->persistent_port,
1100 + 				  conn->persistent_port));
1101 + 
1102 + 		if (strcmp(existing_sess->targetname, sess->targetname))
1103 + 			continue;
1104 + 		rval = qla4xxx_match_ipaddress(ha, ddb_entry,
1105 + 					existing_conn->persistent_address,
1106 + 					conn->persistent_address);
1107 + 		if (rval == QLA_ERROR)
1108 + 			continue;
1109 + 		if (existing_conn->persistent_port != conn->persistent_port)
1110 + 			continue;
1111 + 		break;
1112 + 	}
1113 + 
1114 + 	if (idx == max_ddbs)
1115 + 		return QLA_ERROR;
1116 + 
1117 + 	DEBUG2(ql4_printk(KERN_INFO, ha,
1118 + 			  "Match found in fwdb sessions\n"));
1119 + 	return QLA_SUCCESS;
1120 + }
1121 + 
986 1122 static struct iscsi_cls_session *
987 1123 qla4xxx_session_create(struct iscsi_endpoint *ep,
988 1124 			uint16_t cmds_max, uint16_t qdepth,
···
1136 984 	struct scsi_qla_host *ha;
1137 985 	struct qla_endpoint *qla_ep;
1138 986 	struct ddb_entry *ddb_entry;
1139 - 	uint32_t ddb_index;
1140 - 	uint32_t mbx_sts = 0;
987 + 	uint16_t ddb_index;
1141 988 	struct iscsi_session *sess;
1142 989 	struct sockaddr *dst_addr;
1143 990 	int ret;
···
1151 1000 	dst_addr = (struct sockaddr *)&qla_ep->dst_addr;
1152 1001 	ha = to_qla_host(qla_ep->host);
1153 1002 
1154 - get_ddb_index:
1155 - 	ddb_index = find_first_zero_bit(ha->ddb_idx_map, MAX_DDB_ENTRIES);
1156 - 
1157 - 	if (ddb_index >= MAX_DDB_ENTRIES) {
1158 - 		DEBUG2(ql4_printk(KERN_INFO, ha,
1159 - 				  "Free DDB index not available\n"));
1003 + 	ret = qla4xxx_get_ddb_index(ha, &ddb_index);
1004 + 	if (ret == QLA_ERROR)
1160 1005 		return NULL;
1161 - 	}
1162 - 
1163 - 	if (test_and_set_bit(ddb_index, ha->ddb_idx_map))
1164 - 		goto get_ddb_index;
1165 - 
1166 - 	DEBUG2(ql4_printk(KERN_INFO, ha,
1167 - 			  "Found a free DDB index at %d\n", ddb_index));
1168 - 	ret = qla4xxx_req_ddb_entry(ha, ddb_index, &mbx_sts);
1169 - 	if (ret == QLA_ERROR) {
1170 - 		if (mbx_sts == MBOX_STS_COMMAND_ERROR) {
1171 - 			ql4_printk(KERN_INFO, ha,
1172 - 				   "DDB index = %d not available trying next\n",
1173 - 				   ddb_index);
1174 - 			goto get_ddb_index;
1175 - 		}
1176 - 		DEBUG2(ql4_printk(KERN_INFO, ha,
1177 - 				  "Free FW DDB not available\n"));
1178 - 		return NULL;
1179 - 	}
1180 1006 
1181 1007 	cls_sess = iscsi_session_setup(&qla4xxx_iscsi_transport, qla_ep->host,
1182 1008 				       cmds_max, sizeof(struct ddb_entry),
···
1168 1040 	ddb_entry->fw_ddb_device_state = DDB_DS_NO_CONNECTION_ACTIVE;
1169 1041 	ddb_entry->ha = ha;
1170 1042 	ddb_entry->sess = cls_sess;
1043 + 	ddb_entry->unblock_sess = qla4xxx_unblock_ddb;
1044 + 	ddb_entry->ddb_change = qla4xxx_ddb_change;
1171 1045 	cls_sess->recovery_tmo = ql4xsess_recovery_tmo;
1172 1046 	ha->fw_ddb_index_map[ddb_entry->fw_ddb_index] = ddb_entry;
1173 1047 	ha->tot_ddbs++;
···
1207 1077 	DEBUG2(printk(KERN_INFO "Func: %s\n", __func__));
1208 1078 	cls_conn = iscsi_conn_setup(cls_sess, sizeof(struct qla_conn),
1209 1079 				    conn_idx);
1080 + 	if (!cls_conn)
1081 + 		return NULL;
1082 + 
1210 1083 	sess = cls_sess->dd_data;
1211 1084 	ddb_entry = sess->dd_data;
1212 1085 	ddb_entry->conn = cls_conn;
···
1242 1109 	struct iscsi_session *sess;
1243 1110 	struct ddb_entry *ddb_entry;
1244 1111 	struct scsi_qla_host *ha;
1245 - 	struct dev_db_entry *fw_ddb_entry;
1112 + 	struct dev_db_entry *fw_ddb_entry = NULL;
1246 1113 	dma_addr_t fw_ddb_entry_dma;
1247 1114 	uint32_t mbx_sts = 0;
1248 1115 	int ret = 0;
···
1253 1120 	ddb_entry = sess->dd_data;
1254 1121 	ha = ddb_entry->ha;
1255 1122 
1123 + 	/* Check if we have matching FW DDB, if yes then do not
1124 + 	 * login to this target. This could cause target to logout previous
1125 + 	 * connection
1126 + 	 */
1127 + 	ret = qla4xxx_match_fwdb_session(ha, cls_conn);
1128 + 	if (ret == QLA_SUCCESS) {
1129 + 		ql4_printk(KERN_INFO, ha,
1130 + 			   "Session already exist in FW.\n");
1131 + 		ret = -EEXIST;
1132 + 		goto exit_conn_start;
1133 + 	}
1134 + 
1256 1135 	fw_ddb_entry = dma_alloc_coherent(&ha->pdev->dev, sizeof(*fw_ddb_entry),
1257 1136 					  &fw_ddb_entry_dma, GFP_KERNEL);
1258 1137 	if (!fw_ddb_entry) {
1259 1138 		ql4_printk(KERN_ERR, ha,
1260 1139 			   "%s: Unable to allocate dma buffer\n", __func__);
1261 - 		return -ENOMEM;
1140 + 		ret = -ENOMEM;
1141 + 		goto exit_conn_start;
1262 1142 	}
1263 1143 
1264 1144 	ret = qla4xxx_set_param_ddbentry(ha, ddb_entry, cls_conn, &mbx_sts);
···
1284 1138 	if (mbx_sts)
1285 1139 		if (ddb_entry->fw_ddb_device_state ==
1286 1140 		    DDB_DS_SESSION_ACTIVE) {
1287 - 			iscsi_conn_start(ddb_entry->conn);
1288 - 			iscsi_conn_login_event(ddb_entry->conn,
1289 - 					       ISCSI_CONN_STATE_LOGGED_IN);
1141 + 			ddb_entry->unblock_sess(ddb_entry->sess);
1290 1142 			goto exit_set_param;
1291 1143 		}
1292 1144 
···
1311 1167 	ret = 0;
1312 1168 
1313 1169 exit_conn_start:
1314 - 	dma_free_coherent(&ha->pdev->dev, sizeof(*fw_ddb_entry),
1315 - 			  fw_ddb_entry, fw_ddb_entry_dma);
1170 + 	if (fw_ddb_entry)
1171 + 		dma_free_coherent(&ha->pdev->dev, sizeof(*fw_ddb_entry),
1172 + 				  fw_ddb_entry, fw_ddb_entry_dma);
1316 1173 	return ret;
1317 1174 }
1318 1175 
···
1489 1344 	return -ENOSYS;
1490 1345 }
1491 1346 
1347 + static void qla4xxx_copy_fwddb_param(struct scsi_qla_host *ha,
1348 + 				     struct dev_db_entry *fw_ddb_entry,
1349 + 				     struct iscsi_cls_session *cls_sess,
1350 + 				     struct iscsi_cls_conn *cls_conn)
1351 + {
1352 + 	int buflen = 0;
1353 + 	struct iscsi_session *sess;
1354 + 	struct iscsi_conn *conn;
1355 + 	char ip_addr[DDB_IPADDR_LEN];
1356 + 	uint16_t options = 0;
1357 + 
1358 + 	sess = cls_sess->dd_data;
1359 + 	conn = cls_conn->dd_data;
1360 + 
1361 + 	conn->max_recv_dlength = BYTE_UNITS *
1362 + 			  le16_to_cpu(fw_ddb_entry->iscsi_max_rcv_data_seg_len);
1363 + 
1364 + 	conn->max_xmit_dlength = BYTE_UNITS *
1365 + 			  le16_to_cpu(fw_ddb_entry->iscsi_max_snd_data_seg_len);
1366 + 
1367 + 	sess->initial_r2t_en =
1368 + 			    (BIT_10 & le16_to_cpu(fw_ddb_entry->iscsi_options));
1369 + 
1370 + 	sess->max_r2t = le16_to_cpu(fw_ddb_entry->iscsi_max_outsnd_r2t);
1371 + 
1372 + 	sess->imm_data_en = (BIT_11 & le16_to_cpu(fw_ddb_entry->iscsi_options));
1373 + 
1374 + 	sess->first_burst = BYTE_UNITS *
1375 + 			       le16_to_cpu(fw_ddb_entry->iscsi_first_burst_len);
1376 + 
1377 + 	sess->max_burst = BYTE_UNITS *
1378 + 				 le16_to_cpu(fw_ddb_entry->iscsi_max_burst_len);
1379 + 
1380 + 	sess->time2wait = le16_to_cpu(fw_ddb_entry->iscsi_def_time2wait);
1381 + 
1382 + 	sess->time2retain = le16_to_cpu(fw_ddb_entry->iscsi_def_time2retain);
1383 + 
1384 + 	conn->persistent_port = le16_to_cpu(fw_ddb_entry->port);
1385 + 
1386 + 	sess->tpgt = le32_to_cpu(fw_ddb_entry->tgt_portal_grp);
1387 + 
1388 + 	options = le16_to_cpu(fw_ddb_entry->options);
1389 + 	if (options & DDB_OPT_IPV6_DEVICE)
1390 + 		sprintf(ip_addr, "%pI6", fw_ddb_entry->ip_addr);
1391 + 	else
1392 + 		sprintf(ip_addr, "%pI4", fw_ddb_entry->ip_addr);
1393 + 
1394 + 	iscsi_set_param(cls_conn, ISCSI_PARAM_TARGET_NAME,
1395 + 			(char *)fw_ddb_entry->iscsi_name, buflen);
1396 + 	iscsi_set_param(cls_conn, ISCSI_PARAM_INITIATOR_NAME,
1397 + 			(char *)ha->name_string, buflen);
1398 + 	iscsi_set_param(cls_conn, ISCSI_PARAM_PERSISTENT_ADDRESS,
1399 + 			(char *)ip_addr, buflen);
1400 + }
1401 + 
1402 + void qla4xxx_update_session_conn_fwddb_param(struct scsi_qla_host *ha,
1403 + 					     struct ddb_entry *ddb_entry)
1404 + {
1405 + 	struct iscsi_cls_session *cls_sess;
1406 + 	struct iscsi_cls_conn *cls_conn;
1407 + 	uint32_t ddb_state;
1408 + 	dma_addr_t fw_ddb_entry_dma;
1409 + 	struct dev_db_entry *fw_ddb_entry;
1410 + 
1411 + 	fw_ddb_entry = dma_alloc_coherent(&ha->pdev->dev, sizeof(*fw_ddb_entry),
1412 + 					  &fw_ddb_entry_dma, GFP_KERNEL);
1413 + 	if (!fw_ddb_entry) {
1414 + 		ql4_printk(KERN_ERR, ha,
1415 + 			   "%s: Unable to allocate dma buffer\n", __func__);
1416 + 		goto exit_session_conn_fwddb_param;
1417 + 	}
1418 + 
1419 + 	if (qla4xxx_get_fwddb_entry(ha, ddb_entry->fw_ddb_index, fw_ddb_entry,
1420 + 				    fw_ddb_entry_dma, NULL, NULL, &ddb_state,
1421 + 				    NULL, NULL, NULL) == QLA_ERROR) {
1422 + 		DEBUG2(ql4_printk(KERN_ERR, ha, "scsi%ld: %s: failed "
1423 + 				  "get_ddb_entry for fw_ddb_index %d\n",
1424 + 				  ha->host_no, __func__,
1425 + 				  ddb_entry->fw_ddb_index));
1426 + 		goto exit_session_conn_fwddb_param;
1427 + 	}
1428 + 
1429 + 	cls_sess = ddb_entry->sess;
1430 + 
1431 + 	cls_conn = ddb_entry->conn;
1432 + 
1433 + 	/* Update params */
1434 + 	qla4xxx_copy_fwddb_param(ha, fw_ddb_entry, cls_sess, cls_conn);
1435 + 
1436 + exit_session_conn_fwddb_param:
1437 + 	if (fw_ddb_entry)
1438 + 		dma_free_coherent(&ha->pdev->dev, sizeof(*fw_ddb_entry),
1439 + 				  fw_ddb_entry, fw_ddb_entry_dma);
1440 + }
1441 + 
1492 1442 void qla4xxx_update_session_conn_param(struct scsi_qla_host *ha,
1493 1443 				       struct ddb_entry *ddb_entry)
1494 1444 {
···
1600 1360 	if (!fw_ddb_entry) {
1601 1361 		ql4_printk(KERN_ERR, ha,
1602 1362 			   "%s: Unable to allocate dma buffer\n", __func__);
1603 - 		return;
1363 + 		goto exit_session_conn_param;
1604 1364 	}
1605 1365 
1606 1366 	if (qla4xxx_get_fwddb_entry(ha, ddb_entry->fw_ddb_index, fw_ddb_entry,
···
1610 1370 				  "get_ddb_entry for fw_ddb_index %d\n",
1611 1371 				  ha->host_no, __func__,
1612 1372 				  ddb_entry->fw_ddb_index));
1613 - 		return;
1373 + 		goto exit_session_conn_param;
1614 1374 	}
1615 1375 
1616 1376 	cls_sess = ddb_entry->sess;
···
1618 1378 
1619 1379 	cls_conn = ddb_entry->conn;
1620 1380 	conn = cls_conn->dd_data;
1381 + 
1382 + 	/* Update timers after login */
1383 + 	ddb_entry->default_relogin_timeout =
1384 + 		le16_to_cpu(fw_ddb_entry->def_timeout);
1385 + 	ddb_entry->default_time2wait =
1386 + 		le16_to_cpu(fw_ddb_entry->iscsi_def_time2wait);
1621 1387 
1622 1388 	/* Update params */
1623 1389 	conn->max_recv_dlength = BYTE_UNITS *
···
1653 1407 
1654 1408 	memcpy(sess->initiatorname, ha->name_string,
1655 1409 	       min(sizeof(ha->name_string), sizeof(sess->initiatorname)));
1410 + 
1411 + exit_session_conn_param:
1412 + 	if (fw_ddb_entry)
1413 + 		dma_free_coherent(&ha->pdev->dev, sizeof(*fw_ddb_entry),
1414 + 				  fw_ddb_entry, fw_ddb_entry_dma);
1656 1415 }
1657 1416 
1658 1417 /*
···
1858 1607 	vfree(ha->chap_list);
1859 1608 	ha->chap_list = NULL;
1860 1609 
1610 + 	if (ha->fw_ddb_dma_pool)
1611 + 		dma_pool_destroy(ha->fw_ddb_dma_pool);
1612 + 
1861 1613 	/* release io space registers */
1862 1614 	if (is_qla8022(ha)) {
1863 1615 		if (ha->nx_pcibase)
···
1940 1686 	if (ha->chap_dma_pool == NULL) {
1941 1687 		ql4_printk(KERN_WARNING, ha,
1942 1688 			   "%s: chap_dma_pool allocation failed..\n", __func__);
1689 + 		goto mem_alloc_error_exit;
1690 + 	}
1691 + 
1692 + 	ha->fw_ddb_dma_pool = dma_pool_create("ql4_fw_ddb", &ha->pdev->dev,
1693 + 					      DDB_DMA_BLOCK_SIZE, 8, 0);
1694 + 
1695 + 	if (ha->fw_ddb_dma_pool == NULL) {
1696 + 		ql4_printk(KERN_WARNING, ha,
1697 + 			   "%s: fw_ddb_dma_pool allocation failed..\n",
1698 + 			   __func__);
1943 1699 		goto mem_alloc_error_exit;
1944 1700 	}
1945 1701 
···
2064 1800 		}
2065 1801 	}
2066 1802 
1803 + void qla4xxx_check_relogin_flash_ddb(struct iscsi_cls_session *cls_sess)
1804 + {
1805 + 	struct iscsi_session *sess;
1806 + 	struct ddb_entry *ddb_entry;
1807 + 	struct scsi_qla_host *ha;
1808 + 
1809 + 	sess = cls_sess->dd_data;
1810 + 	ddb_entry = sess->dd_data;
1811 + 	ha = ddb_entry->ha;
1812 + 
1813 + 	if (!(ddb_entry->ddb_type == FLASH_DDB))
1814 + 		return;
1815 + 
1816 + 	if (adapter_up(ha) && !test_bit(DF_RELOGIN, &ddb_entry->flags) &&
1817 + 	    !iscsi_is_session_online(cls_sess)) {
1818 + 		if (atomic_read(&ddb_entry->retry_relogin_timer) !=
1819 + 		    INVALID_ENTRY) {
1820 + 			if (atomic_read(&ddb_entry->retry_relogin_timer) ==
1821 + 					0) {
1822 + 				atomic_set(&ddb_entry->retry_relogin_timer,
1823 + 					   INVALID_ENTRY);
1824 + 				set_bit(DPC_RELOGIN_DEVICE, &ha->dpc_flags);
1825 + 				set_bit(DF_RELOGIN, &ddb_entry->flags);
1826 + 				DEBUG2(ql4_printk(KERN_INFO, ha,
1827 + 				       "%s: index [%d] login device\n",
1828 + 					__func__, ddb_entry->fw_ddb_index));
1829 + 			} else
1830 + 				atomic_dec(&ddb_entry->retry_relogin_timer);
1831 + 		}
1832 + 	}
1833 + 
1834 + 	/* Wait for relogin to timeout */
1835 + 	if (atomic_read(&ddb_entry->relogin_timer) &&
1836 + 	    (atomic_dec_and_test(&ddb_entry->relogin_timer) != 0)) {
1837 + 		/*
1838 + 		 * If the relogin times out and the device is
1839 + 		 * still NOT ONLINE then try and relogin again.
1840 + 		 */
1841 + 		if (!iscsi_is_session_online(cls_sess)) {
1842 + 			/* Reset retry relogin timer */
1843 + 			atomic_inc(&ddb_entry->relogin_retry_count);
1844 + 			DEBUG2(ql4_printk(KERN_INFO, ha,
1845 + 				"%s: index[%d] relogin timed out-retrying"
1846 + 				" relogin (%d), retry (%d)\n", __func__,
1847 + 				ddb_entry->fw_ddb_index,
1848 + 				atomic_read(&ddb_entry->relogin_retry_count),
1849 + 				ddb_entry->default_time2wait + 4));
1850 + 			set_bit(DPC_RELOGIN_DEVICE, &ha->dpc_flags);
1851 + 			atomic_set(&ddb_entry->retry_relogin_timer,
1852 + 				   ddb_entry->default_time2wait + 4);
1853 + 		}
1854 + 	}
1855 + }
1856 + 
2067 1857 /**
2068 1858  * qla4xxx_timer - checks every second for work to do.
2069 1859  * @ha: Pointer to host adapter structure.
···
2126 1808 {
2127 1809 	int start_dpc = 0;
2128 1810 	uint16_t w;
1811 + 
1812 + 	iscsi_host_for_each_session(ha->host, qla4xxx_check_relogin_flash_ddb);
2129 1813 
2130 1814 	/* If we are in the middle of AER/EEH processing
2131 1815 	 * skip any processing and reschedule the timer
···
2398 2078 	sess = cls_session->dd_data;
2399 2079 	ddb_entry = sess->dd_data;
2400 2080 	ddb_entry->fw_ddb_device_state = DDB_DS_SESSION_FAILED;
2401 - 	iscsi_session_failure(cls_session->dd_data, ISCSI_ERR_CONN_FAILED);
2081 + 
2082 + 	if (ddb_entry->ddb_type == FLASH_DDB)
2083 + 		iscsi_block_session(ddb_entry->sess);
2084 + 	else
2085 + 		iscsi_session_failure(cls_session->dd_data,
2086 + 				      ISCSI_ERR_CONN_FAILED);
2402 2087 }
2403 2088 
2404 2089 /**
···
2488 2163 
2489 2164 	/* NOTE: AF_ONLINE flag set upon successful completion of
2490 2165 	 * qla4xxx_initialize_adapter */
2491 - 	status = qla4xxx_initialize_adapter(ha);
2166 + 	status = qla4xxx_initialize_adapter(ha, RESET_ADAPTER);
2492 2167 	}
2493 2168 
2494 2169 	/* Retry failed adapter initialization, if necessary
···
2570 2245 			iscsi_unblock_session(ddb_entry->sess);
2571 2246 		} else {
2572 2247 			/* Trigger relogin */
2573 - 			iscsi_session_failure(cls_session->dd_data,
2574 - 					      ISCSI_ERR_CONN_FAILED);
2248 + 			if (ddb_entry->ddb_type == FLASH_DDB) {
2249 + 				if (!test_bit(DF_RELOGIN, &ddb_entry->flags))
2250 + 					qla4xxx_arm_relogin_timer(ddb_entry);
2251 + 			} else
2252 + 				iscsi_session_failure(cls_session->dd_data,
2253 + 						      ISCSI_ERR_CONN_FAILED);
2575 2254 		}
2576 2255 	}
2256 + }
2257 + 
2258 + int qla4xxx_unblock_flash_ddb(struct iscsi_cls_session *cls_session)
2259 + {
2260 + 	struct iscsi_session *sess;
2261 + 	struct ddb_entry *ddb_entry;
2262 + 	struct scsi_qla_host *ha;
2263 + 
2264 + 	sess = cls_session->dd_data;
2265 + 	ddb_entry = sess->dd_data;
2266 + 	ha = ddb_entry->ha;
2267 + 	ql4_printk(KERN_INFO, ha, "scsi%ld: %s: ddb[%d]"
2268 + 		   " unblock session\n", ha->host_no, __func__,
2269 + 		   ddb_entry->fw_ddb_index);
2270 + 
2271 + 	iscsi_unblock_session(ddb_entry->sess);
2272 + 
2273 + 	/* Start scan target */
2274 + 	if (test_bit(AF_ONLINE, &ha->flags)) {
2275 + 		ql4_printk(KERN_INFO, ha, "scsi%ld: %s: ddb[%d]"
2276 + 			   " start scan\n", ha->host_no, __func__,
2277 + 			   ddb_entry->fw_ddb_index);
2278 + 		scsi_queue_work(ha->host, &ddb_entry->sess->scan_work);
2279 + 	}
2280 + 	return QLA_SUCCESS;
2281 + }
2282 + 
2283 + int qla4xxx_unblock_ddb(struct iscsi_cls_session *cls_session)
2284 + {
2285 + 	struct iscsi_session *sess;
2286 + 	struct ddb_entry *ddb_entry;
2287 + 	struct scsi_qla_host *ha;
2288 + 
2289 + 	sess = cls_session->dd_data;
2290 + 	ddb_entry = sess->dd_data;
2291 + 	ha = ddb_entry->ha;
2292 + 	ql4_printk(KERN_INFO, ha, "scsi%ld: %s: ddb[%d]"
2293 + 		   " unblock user space session\n", ha->host_no, __func__,
2294 + 		   ddb_entry->fw_ddb_index);
2295 + 	iscsi_conn_start(ddb_entry->conn);
2296 + 	iscsi_conn_login_event(ddb_entry->conn,
2297 + 			       ISCSI_CONN_STATE_LOGGED_IN);
2298 + 
2299 + 	return QLA_SUCCESS;
2577 2300 }
2578 2301 
2579 2302 static void qla4xxx_relogin_all_devices(struct scsi_qla_host *ha)
2580 2303 {
2581 2304 	iscsi_host_for_each_session(ha->host, qla4xxx_relogin_devices);
2305 + }
2306 + 
2307 + static void qla4xxx_relogin_flash_ddb(struct iscsi_cls_session *cls_sess)
2308 + {
2309 + 	uint16_t relogin_timer;
2310 + 	struct iscsi_session *sess;
2311 + 	struct ddb_entry *ddb_entry;
2312 + 	struct scsi_qla_host *ha;
2313 + 
2314 + 	sess = cls_sess->dd_data;
2315 + 	ddb_entry = sess->dd_data;
2316 + 	ha = ddb_entry->ha;
2317 + 
2318 + 	relogin_timer = max(ddb_entry->default_relogin_timeout,
2319 + 			    (uint16_t)RELOGIN_TOV);
2320 + 	atomic_set(&ddb_entry->relogin_timer, relogin_timer);
2321 + 
2322 + 	DEBUG2(ql4_printk(KERN_INFO, ha,
2323 + 			  "scsi%ld: Relogin index [%d]. TOV=%d\n", ha->host_no,
2324 + 			  ddb_entry->fw_ddb_index, relogin_timer));
2325 + 
2326 + 	qla4xxx_login_flash_ddb(cls_sess);
2327 + }
2328 + 
2329 + static void qla4xxx_dpc_relogin(struct iscsi_cls_session *cls_sess)
2330 + {
2331 + 	struct iscsi_session *sess;
2332 + 	struct ddb_entry *ddb_entry;
2333 + 	struct scsi_qla_host *ha;
2334 + 
2335 + 	sess = cls_sess->dd_data;
2336 + 	ddb_entry = sess->dd_data;
2337 + 	ha = ddb_entry->ha;
2338 + 
2339 + 	if (!(ddb_entry->ddb_type == FLASH_DDB))
2340 + 		return;
2341 + 
2342 + 	if (test_and_clear_bit(DF_RELOGIN, &ddb_entry->flags) &&
2343 + 	    !iscsi_is_session_online(cls_sess)) {
2344 + 		DEBUG2(ql4_printk(KERN_INFO, ha,
2345 + 				  "relogin issued\n"));
2346 + 		qla4xxx_relogin_flash_ddb(cls_sess);
2347 + 	}
2582 2348 }
2583 2349 
2584 2350 void qla4xxx_wake_dpc(struct scsi_qla_host *ha)
···
2772 2356 	if (test_and_clear_bit(DPC_GET_DHCP_IP_ADDR, &ha->dpc_flags))
2773 2357 		qla4xxx_get_dhcp_ip_address(ha);
2774 2358 
2359 + 	/* ---- relogin device? --- */
2360 + 	if (adapter_up(ha) &&
2361 + 	    test_and_clear_bit(DPC_RELOGIN_DEVICE, &ha->dpc_flags)) {
2362 + 		iscsi_host_for_each_session(ha->host, qla4xxx_dpc_relogin);
2363 + 	}
2364 + 
2775 2365 	/* ---- link change? --- */
2776 2366 	if (test_and_clear_bit(DPC_LINK_CHANGED, &ha->dpc_flags)) {
2777 2367 		if (!test_bit(AF_LINK_UP, &ha->flags)) {
···
2790 2368 		 * fatal error recovery. Therefore, the driver must
2791 2369 		 * manually relogin to devices when recovering from
2792 2370 		 * connection failures, logouts, expired KATO, etc. */
2793 - 
2794 - 			qla4xxx_relogin_all_devices(ha);
2371 + 			if (test_and_clear_bit(AF_BUILD_DDB_LIST, &ha->flags)) {
2372 + 				qla4xxx_build_ddb_list(ha, ha->is_reset);
2373 + 				iscsi_host_for_each_session(ha->host,
2374 + 						qla4xxx_login_flash_ddb);
2375 + 			} else
2376 + 				qla4xxx_relogin_all_devices(ha);
2795 2377 		}
2796 2378 	}
2797 2379 }
···
3293 2867 			  " target ID %d\n", __func__, ddb_index[0],
3294 2868 			  ddb_index[1]));
3295 2869 
2870 + 	ha->pri_ddb_idx = ddb_index[0];
2871 + 	ha->sec_ddb_idx = ddb_index[1];
2872 + 
3296 2873 exit_boot_info_free:
3297 2874 	dma_free_coherent(&ha->pdev->dev, size, buf, buf_dma);
3298 2875 exit_boot_info:
···
3463 3034 		return ret;
3464 3035 	}
3465 3036 
3037 + 	if (ql4xdisablesysfsboot)
3038 + 		return QLA_SUCCESS;
3039 + 
3466 3040 	if (ddb_index[0] == 0xffff)
3467 3041 		goto sec_target;
3468 3042 
···
3498 3066 	struct iscsi_boot_kobj *boot_kobj;
3499 3067 
3500 3068 	if (qla4xxx_get_boot_info(ha) != QLA_SUCCESS)
3501 - 		return 0;
3069 + 		return QLA_ERROR;
3070 + 
3071 + 	if (ql4xdisablesysfsboot) {
3072 + 		ql4_printk(KERN_INFO, ha,
3073 + 			   "%s: syfsboot disabled - driver will trigger login"
3074 + 			   "and publish session for discovery .\n", __func__);
3075 + 		return QLA_SUCCESS;
3076 + 	}
3077 + 
3502 3078 
3503 3079 	ha->boot_kset = iscsi_boot_create_host_kset(ha->host->host_no);
3504 3080 	if (!ha->boot_kset)
···
3548 3108 	if (!boot_kobj)
3549 3109 		goto put_host;
3550 3110 
3551 - 	return 0;
3111 + 	return QLA_SUCCESS;
3552 3112 
3553 3113 put_host:
3554 3114 	scsi_host_put(ha->host);
···
3614 3174 exit_chap_list:
3615 3175 	dma_free_coherent(&ha->pdev->dev, chap_size,
3616 3176 			  chap_flash_data, chap_dma);
3617 - 	return;
3618 3177 }
3178 + 
3179 + static void qla4xxx_get_param_ddb(struct ddb_entry *ddb_entry,
3180 + 				  struct ql4_tuple_ddb *tddb)
3181 + {
3182 + 	struct scsi_qla_host *ha;
3183 + 	struct iscsi_cls_session *cls_sess;
3184 + 	struct iscsi_cls_conn *cls_conn;
3185 + 	struct iscsi_session *sess;
3186 + 	struct iscsi_conn *conn;
3187 + 
3188 + 	DEBUG2(printk(KERN_INFO "Func: %s\n", __func__));
3189 + 	ha = ddb_entry->ha;
3190 + 	cls_sess = ddb_entry->sess;
3191 + 	sess = cls_sess->dd_data;
3192 + 	cls_conn = ddb_entry->conn;
3193 + 	conn = cls_conn->dd_data;
3194 + 
3195 + 	tddb->tpgt = sess->tpgt;
3196 + 	tddb->port = conn->persistent_port;
3197 + 	strncpy(tddb->iscsi_name, sess->targetname, ISCSI_NAME_SIZE);
3198 + 	strncpy(tddb->ip_addr, conn->persistent_address, DDB_IPADDR_LEN);
3199 + }
3200 + 
3201 + static void qla4xxx_convert_param_ddb(struct dev_db_entry *fw_ddb_entry,
3202 + 				      struct ql4_tuple_ddb *tddb)
3203 + {
3204 + 	uint16_t options = 0;
3205 + 
3206 + 	tddb->tpgt = le32_to_cpu(fw_ddb_entry->tgt_portal_grp);
3207 + 	memcpy(&tddb->iscsi_name[0], &fw_ddb_entry->iscsi_name[0],
3208 + 	       min(sizeof(tddb->iscsi_name), sizeof(fw_ddb_entry->iscsi_name)));
3209 + 
3210 + 	options = le16_to_cpu(fw_ddb_entry->options);
3211 + 	if (options & DDB_OPT_IPV6_DEVICE)
3212 + 		sprintf(tddb->ip_addr, "%pI6", fw_ddb_entry->ip_addr);
3213 + 	else
3214 + 		sprintf(tddb->ip_addr, "%pI4", fw_ddb_entry->ip_addr);
3215 + 
3216 + 	tddb->port = le16_to_cpu(fw_ddb_entry->port);
3217 + }
3218 + 
3219 + static int qla4xxx_compare_tuple_ddb(struct scsi_qla_host *ha,
3220 + 				     struct ql4_tuple_ddb *old_tddb,
3221 + 				     struct ql4_tuple_ddb *new_tddb)
3222 + {
3223 + 	if (strcmp(old_tddb->iscsi_name, new_tddb->iscsi_name))
3224 + 		return QLA_ERROR;
3225 + 
3226 + 	if (strcmp(old_tddb->ip_addr, new_tddb->ip_addr))
3227 + 		return QLA_ERROR;
3228 + 
3229 + 	if (old_tddb->port != new_tddb->port)
3230 + 		return QLA_ERROR;
3231 + 
3232 + 	DEBUG2(ql4_printk(KERN_INFO, ha,
3233 + 			  "Match Found, fw[%d,%d,%s,%s], [%d,%d,%s,%s]",
3234 + 			  old_tddb->port, old_tddb->tpgt, old_tddb->ip_addr,
3235 + 			  old_tddb->iscsi_name, new_tddb->port, new_tddb->tpgt,
3236 + 			  new_tddb->ip_addr, new_tddb->iscsi_name));
3237 + 
3238 + 	return QLA_SUCCESS;
3239 + }
3240 + 
3241 + static int qla4xxx_is_session_exists(struct scsi_qla_host *ha,
3242 + 				     struct dev_db_entry *fw_ddb_entry)
3243 + {
3244 + 	struct ddb_entry *ddb_entry;
3245 + 	struct ql4_tuple_ddb *fw_tddb = NULL;
3246 + 	struct ql4_tuple_ddb *tmp_tddb = NULL;
3247 + 	int idx;
3248 + 	int ret = QLA_ERROR;
3249 + 
3250 + 	fw_tddb = vzalloc(sizeof(*fw_tddb));
3251 + 	if (!fw_tddb) {
3252 + 		DEBUG2(ql4_printk(KERN_WARNING, ha,
3253 + 				  "Memory Allocation failed.\n"));
3254 + 		ret = QLA_SUCCESS;
3255 + 		goto exit_check;
3256 + 	}
3257 + 
3258 + 	tmp_tddb = vzalloc(sizeof(*tmp_tddb));
3259 + 	if (!tmp_tddb) {
3260 + 		DEBUG2(ql4_printk(KERN_WARNING, ha,
3261 + 				  "Memory Allocation failed.\n"));
3262 + 		ret = QLA_SUCCESS;
3263 + 		goto exit_check;
3264 + 	}
3265 + 
3266 + 	qla4xxx_convert_param_ddb(fw_ddb_entry, fw_tddb);
3267 + 
3268 + 	for (idx = 0; idx < MAX_DDB_ENTRIES; idx++) {
3269 + 		ddb_entry = qla4xxx_lookup_ddb_by_fw_index(ha, idx);
3270 + 		if (ddb_entry == NULL)
3271 + 			continue;
3272 + 
3273 + 		qla4xxx_get_param_ddb(ddb_entry, tmp_tddb);
3274 + 		if (!qla4xxx_compare_tuple_ddb(ha, fw_tddb, tmp_tddb)) {
3275 + 			ret = QLA_SUCCESS; /* found */
3276 + 			goto exit_check;
3277 + 		}
3278 + 	}
3279 + 
3280 + exit_check:
3281 + 	if (fw_tddb)
3282 + 		vfree(fw_tddb);
3283 + 	if (tmp_tddb)
3284 + 		vfree(tmp_tddb);
3285 + 	return ret;
3286 + }
3287 + 
3288 + static int qla4xxx_is_flash_ddb_exists(struct scsi_qla_host *ha,
3289 + 				       struct list_head *list_nt,
3290 + 				       struct dev_db_entry *fw_ddb_entry)
3291 + {
3292 + 	struct qla_ddb_index *nt_ddb_idx, *nt_ddb_idx_tmp;
3293 + 	struct ql4_tuple_ddb *fw_tddb = NULL;
3294 + 	struct ql4_tuple_ddb *tmp_tddb = NULL;
3295 + 	int ret = QLA_ERROR;
3296 + 
3297 + 	fw_tddb = vzalloc(sizeof(*fw_tddb));
3298 + 	if (!fw_tddb) {
3299 + 		DEBUG2(ql4_printk(KERN_WARNING, ha,
3300 + 				  "Memory Allocation failed.\n"));
3301 + 		ret = QLA_SUCCESS;
3302 + 		goto exit_check;
3303 + 	}
3304 + 
3305 + 	tmp_tddb = vzalloc(sizeof(*tmp_tddb));
3306 + 	if (!tmp_tddb) {
3307 + 		DEBUG2(ql4_printk(KERN_WARNING, ha,
3308 + 				  "Memory Allocation failed.\n"));
3309 + 		ret = QLA_SUCCESS;
3310 + 		goto exit_check;
3311 + 	}
3312 + 
3313 + 	qla4xxx_convert_param_ddb(fw_ddb_entry, fw_tddb);
3314 + 
3315 + 	list_for_each_entry_safe(nt_ddb_idx, nt_ddb_idx_tmp, list_nt, list) {
3316 + 		qla4xxx_convert_param_ddb(&nt_ddb_idx->fw_ddb, tmp_tddb);
3317 + 		if (!qla4xxx_compare_tuple_ddb(ha, fw_tddb, tmp_tddb)) {
3318 + 			ret = QLA_SUCCESS; /* found */
3319 + 			goto exit_check;
3320 + 		}
3321 + 	}
3322 + 
3323 + exit_check:
3324 + 	if (fw_tddb)
3325 + 		vfree(fw_tddb);
3326 + 	if (tmp_tddb)
3327 + 		vfree(tmp_tddb);
3328 + 	return ret;
3329 + }
3330 + 
3331 + static void qla4xxx_free_nt_list(struct list_head *list_nt)
3332 + {
3333 + 	struct qla_ddb_index *nt_ddb_idx, *nt_ddb_idx_tmp;
3334 + 
3335 + 	/* Free up the normaltargets list */
3336 + 	list_for_each_entry_safe(nt_ddb_idx, nt_ddb_idx_tmp, list_nt, list) {
3337 + 		list_del_init(&nt_ddb_idx->list);
3338 + 		vfree(nt_ddb_idx);
3339 + 	}
3340 + 
3341 + }
3342 + 
3343 + static struct iscsi_endpoint *qla4xxx_get_ep_fwdb(struct scsi_qla_host *ha,
3344 + 					struct dev_db_entry *fw_ddb_entry)
3345 + {
3346 + 	struct iscsi_endpoint *ep;
3347 + 	struct sockaddr_in *addr;
3348 + 	struct sockaddr_in6 *addr6;
3349 + 	struct sockaddr *dst_addr;
3350 + 	char *ip;
3351 + 
3352 + 	/* TODO: need to destroy on unload iscsi_endpoint*/
3353 + 	dst_addr = vmalloc(sizeof(*dst_addr));
3354 + 	if (!dst_addr)
3355 + 		return NULL;
3356 + 
3357 + 	if (fw_ddb_entry->options & DDB_OPT_IPV6_DEVICE) {
3358 + 		dst_addr->sa_family = AF_INET6;
3359 + 		addr6 = (struct sockaddr_in6 *)dst_addr;
3360 + 		ip = (char *)&addr6->sin6_addr;
3361 + 		memcpy(ip, fw_ddb_entry->ip_addr, IPv6_ADDR_LEN);
3362 + 		addr6->sin6_port = htons(le16_to_cpu(fw_ddb_entry->port));
3363 + 
3364 + 	} else {
3365 + 		dst_addr->sa_family = AF_INET;
3366 + 		addr = (struct sockaddr_in *)dst_addr;
3367 + 		ip = (char *)&addr->sin_addr;
3368 + 		memcpy(ip, fw_ddb_entry->ip_addr, IP_ADDR_LEN);
3369 + 		addr->sin_port = htons(le16_to_cpu(fw_ddb_entry->port));
3370 + 	}
3371 + 
3372 + 	ep = qla4xxx_ep_connect(ha->host, dst_addr, 0);
3373 + 	vfree(dst_addr);
3374 + 	return ep;
3375 + }
3376 + 
3377 + static int qla4xxx_verify_boot_idx(struct scsi_qla_host *ha, uint16_t idx)
3378 + {
3379 + 	if (ql4xdisablesysfsboot)
3380 + 		return QLA_SUCCESS;
3381 + 	if (idx == ha->pri_ddb_idx || idx == ha->sec_ddb_idx)
3382 + 		return QLA_ERROR;
3383 + 	return QLA_SUCCESS;
3384 + }
3385 + 
3386 + static void qla4xxx_setup_flash_ddb_entry(struct scsi_qla_host *ha,
3387 + 					  struct ddb_entry *ddb_entry)
3388 + {
3389 + 	ddb_entry->ddb_type = FLASH_DDB;
3390 + 	ddb_entry->fw_ddb_index = INVALID_ENTRY;
3391 + 	ddb_entry->fw_ddb_device_state = DDB_DS_NO_CONNECTION_ACTIVE;
3392 + 	ddb_entry->ha = ha;
3393 + 	ddb_entry->unblock_sess = qla4xxx_unblock_flash_ddb;
3394 + 	ddb_entry->ddb_change = qla4xxx_flash_ddb_change;
3395 + 
3396 + 	atomic_set(&ddb_entry->retry_relogin_timer, INVALID_ENTRY);
3397 + 	atomic_set(&ddb_entry->relogin_timer, 0);
3398 + 	atomic_set(&ddb_entry->relogin_retry_count, 0);
3399 + 
3400 + 	ddb_entry->default_relogin_timeout =
3401 + 		le16_to_cpu(ddb_entry->fw_ddb_entry.def_timeout);
3402 + 	ddb_entry->default_time2wait =
3403 + 		le16_to_cpu(ddb_entry->fw_ddb_entry.iscsi_def_time2wait);
3404 + }
3405 + 
3406 + static void qla4xxx_wait_for_ip_configuration(struct scsi_qla_host *ha)
3407 + {
3408 + 	uint32_t idx = 0;
3409 + 	uint32_t ip_idx[IP_ADDR_COUNT] = {0, 1, 2, 3}; /* 4 IP interfaces */
3410 + 	uint32_t sts[MBOX_REG_COUNT];
3411 + 	uint32_t ip_state;
3412 + 	unsigned long wtime;
3413 + 	int ret;
3414 + 
3415 + 	wtime = jiffies + (HZ * IP_CONFIG_TOV);
3416 + 	do {
3417 + 		for (idx = 0; idx < IP_ADDR_COUNT; idx++) {
3418 + 			if (ip_idx[idx] == -1)
3419 + 				continue;
3420 + 
3421 + 			ret = qla4xxx_get_ip_state(ha, 0, ip_idx[idx], sts);
3422 + 
3423 + 			if (ret == QLA_ERROR) {
3424 + 				ip_idx[idx] = -1;
3425 + 				continue;
3426 + 			}
3427 + 
3428 + 			ip_state = (sts[1] & IP_STATE_MASK) >> IP_STATE_SHIFT;
3429 + 
3430 + 			DEBUG2(ql4_printk(KERN_INFO, ha,
3431 + 					  "Waiting for IP state for idx = %d, state = 0x%x\n",
3432 + 					  ip_idx[idx], ip_state));
3433 + 			if (ip_state == IP_ADDRSTATE_UNCONFIGURED ||
3434 + 			    ip_state == IP_ADDRSTATE_INVALID ||
3435 + 			    ip_state == IP_ADDRSTATE_PREFERRED ||
3436 + 			    ip_state == IP_ADDRSTATE_DEPRICATED ||
3437 + 			    ip_state == IP_ADDRSTATE_DISABLING)
3438 + 				ip_idx[idx] = -1;
3439 + 
3440 + 		}
3441 + 
3442 + 		/* Break if all IP states checked */
3443 + 		if ((ip_idx[0] == -1) &&
3444 + 		    (ip_idx[1] == -1) &&
3445 + 		    (ip_idx[2] == -1) &&
3446 + 		    (ip_idx[3] == -1))
3447 + 			break;
3448 + 		schedule_timeout_uninterruptible(HZ);
3449 + 	} while (time_after(wtime, jiffies));
3450 + }
3451 + 
3452 + void qla4xxx_build_ddb_list(struct scsi_qla_host *ha, int is_reset)
3453 + {
3454 + 	int max_ddbs;
3455 + 	int ret;
3456 + 	uint32_t idx = 0, next_idx = 0;
3457 + 	uint32_t state = 0, conn_err = 0;
3458 + 	uint16_t conn_id;
3459 + 	struct dev_db_entry *fw_ddb_entry;
3460 + 	struct ddb_entry *ddb_entry = NULL;
3461 + 	dma_addr_t fw_ddb_dma;
3462 + 	struct iscsi_cls_session *cls_sess;
3463 + 	struct iscsi_session *sess;
3464 + 	struct iscsi_cls_conn *cls_conn;
3465 + 	struct iscsi_endpoint *ep;
3466 + 	uint16_t cmds_max = 32, tmo = 0;
3467 + 	uint32_t initial_cmdsn = 0;
3468 + 	struct list_head list_st, list_nt; /* List of sendtargets */
3469 + 	struct qla_ddb_index *st_ddb_idx, *st_ddb_idx_tmp;
3470 + 	int fw_idx_size;
3471 + 	unsigned long wtime;
3472 + 	struct qla_ddb_index *nt_ddb_idx;
3473 + 
3474 + 	if (!test_bit(AF_LINK_UP, &ha->flags)) {
3475 + 		set_bit(AF_BUILD_DDB_LIST, &ha->flags);
3476 + 		ha->is_reset = is_reset;
3477 + 		return;
3478 + 	}
3479 + 	max_ddbs = is_qla40XX(ha) ? MAX_DEV_DB_ENTRIES_40XX :
3480 + 				    MAX_DEV_DB_ENTRIES;
3481 + 
3482 + 	fw_ddb_entry = dma_pool_alloc(ha->fw_ddb_dma_pool, GFP_KERNEL,
3483 + 				      &fw_ddb_dma);
3484 + 	if (fw_ddb_entry == NULL) {
3485 + 		DEBUG2(ql4_printk(KERN_ERR, ha, "Out of memory\n"));
3486 + 		goto exit_ddb_list;
3487 + 	}
3488 + 
3489 + 	INIT_LIST_HEAD(&list_st);
3490 + 	INIT_LIST_HEAD(&list_nt);
3491 + 	fw_idx_size = sizeof(struct qla_ddb_index);
3492 + 
3493 + 	for (idx = 0; idx < max_ddbs; idx = next_idx) {
3494 + 		ret = qla4xxx_get_fwddb_entry(ha, idx, fw_ddb_entry,
3495 + 					      fw_ddb_dma, NULL,
3496 + 					      &next_idx, &state, &conn_err,
3497 + 					      NULL, &conn_id);
3498 + 		if (ret == QLA_ERROR)
3499 + 			break;
3500 + 
3501 + 		if (qla4xxx_verify_boot_idx(ha, idx) != QLA_SUCCESS)
3502 + 			goto continue_next_st;
3503 + 
3504 + 		/* Check if ST, add to the list_st */
3505 + 		if (strlen((char *) fw_ddb_entry->iscsi_name) != 0)
3506 + 			goto continue_next_st;
3507 + 
3508 + 		st_ddb_idx = vzalloc(fw_idx_size);
3509 + 		if (!st_ddb_idx)
3510 + 			break;
3511 + 
3512 + 		st_ddb_idx->fw_ddb_idx = idx;
3513 + 
3514 + 		list_add_tail(&st_ddb_idx->list, &list_st);
3515 + continue_next_st:
3516 + 		if (next_idx == 0)
3517 + 			break;
3518 + 	}
3519 + 
3520 + 	/* Before issuing conn open mbox, ensure all IPs states are configured
3521 + 	 * Note, conn open fails if IPs are not configured
3522 + 	 */
3523 + 	qla4xxx_wait_for_ip_configuration(ha);
3524 + 
3525 + 	/* Go thru the STs and fire the sendtargets by issuing conn open mbx */
3526 + 	list_for_each_entry_safe(st_ddb_idx, st_ddb_idx_tmp, &list_st, list) {
3527 + 		qla4xxx_conn_open(ha, st_ddb_idx->fw_ddb_idx);
3528 + 	}
3529 + 
3530 + 	/* Wait to ensure all sendtargets are done for min 12 sec wait */
3531 + 	tmo = ((ha->def_timeout < LOGIN_TOV) ?
LOGIN_TOV : ha->def_timeout); 3532 + DEBUG2(ql4_printk(KERN_INFO, ha, 3533 + "Default time to wait for build ddb %d\n", tmo)); 3534 + 3535 + wtime = jiffies + (HZ * tmo); 3536 + do { 3537 + list_for_each_entry_safe(st_ddb_idx, st_ddb_idx_tmp, &list_st, 3538 + list) { 3539 + ret = qla4xxx_get_fwddb_entry(ha, 3540 + st_ddb_idx->fw_ddb_idx, 3541 + NULL, 0, NULL, &next_idx, 3542 + &state, &conn_err, NULL, 3543 + NULL); 3544 + if (ret == QLA_ERROR) 3545 + continue; 3546 + 3547 + if (state == DDB_DS_NO_CONNECTION_ACTIVE || 3548 + state == DDB_DS_SESSION_FAILED) { 3549 + list_del_init(&st_ddb_idx->list); 3550 + vfree(st_ddb_idx); 3551 + } 3552 + } 3553 + schedule_timeout_uninterruptible(HZ / 10); 3554 + } while (time_after(wtime, jiffies)); 3555 + 3556 + /* Free up the sendtargets list */ 3557 + list_for_each_entry_safe(st_ddb_idx, st_ddb_idx_tmp, &list_st, list) { 3558 + list_del_init(&st_ddb_idx->list); 3559 + vfree(st_ddb_idx); 3560 + } 3561 + 3562 + for (idx = 0; idx < max_ddbs; idx = next_idx) { 3563 + ret = qla4xxx_get_fwddb_entry(ha, idx, fw_ddb_entry, 3564 + fw_ddb_dma, NULL, 3565 + &next_idx, &state, &conn_err, 3566 + NULL, &conn_id); 3567 + if (ret == QLA_ERROR) 3568 + break; 3569 + 3570 + if (qla4xxx_verify_boot_idx(ha, idx) != QLA_SUCCESS) 3571 + goto continue_next_nt; 3572 + 3573 + /* Check if NT, then add it to the list */ 3574 + if (strlen((char *) fw_ddb_entry->iscsi_name) == 0) 3575 + goto continue_next_nt; 3576 + 3577 + if (state == DDB_DS_NO_CONNECTION_ACTIVE || 3578 + state == DDB_DS_SESSION_FAILED) { 3579 + DEBUG2(ql4_printk(KERN_INFO, ha, 3580 + "Adding DDB to session = 0x%x\n", 3581 + idx)); 3582 + if (is_reset == INIT_ADAPTER) { 3583 + nt_ddb_idx = vmalloc(fw_idx_size); 3584 + if (!nt_ddb_idx) 3585 + break; 3586 + 3587 + nt_ddb_idx->fw_ddb_idx = idx; 3588 + 3589 + memcpy(&nt_ddb_idx->fw_ddb, fw_ddb_entry, 3590 + sizeof(struct dev_db_entry)); 3591 + 3592 + if (qla4xxx_is_flash_ddb_exists(ha, &list_nt, 3593 + fw_ddb_entry) == QLA_SUCCESS) { 3594 +
vfree(nt_ddb_idx); 3595 + goto continue_next_nt; 3596 + } 3597 + list_add_tail(&nt_ddb_idx->list, &list_nt); 3598 + } else if (is_reset == RESET_ADAPTER) { 3599 + if (qla4xxx_is_session_exists(ha, 3600 + fw_ddb_entry) == QLA_SUCCESS) 3601 + goto continue_next_nt; 3602 + } 3603 + 3604 + /* Create session object, with INVALID_ENTRY, 3605 + * the target_id would get set when we issue the login 3606 + */ 3607 + cls_sess = iscsi_session_setup(&qla4xxx_iscsi_transport, 3608 + ha->host, cmds_max, 3609 + sizeof(struct ddb_entry), 3610 + sizeof(struct ql4_task_data), 3611 + initial_cmdsn, INVALID_ENTRY); 3612 + if (!cls_sess) 3613 + goto exit_ddb_list; 3614 + 3615 + /* 3616 + * iscsi_session_setup increments the driver reference 3617 + * count, which would prevent the driver from being 3618 + * unloaded, so call module_put to decrement the 3619 + * reference count. 3620 + **/ 3621 + module_put(qla4xxx_iscsi_transport.owner); 3622 + sess = cls_sess->dd_data; 3623 + ddb_entry = sess->dd_data; 3624 + ddb_entry->sess = cls_sess; 3625 + 3626 + cls_sess->recovery_tmo = ql4xsess_recovery_tmo; 3627 + memcpy(&ddb_entry->fw_ddb_entry, fw_ddb_entry, 3628 + sizeof(struct dev_db_entry)); 3629 + 3630 + qla4xxx_setup_flash_ddb_entry(ha, ddb_entry); 3631 + 3632 + cls_conn = iscsi_conn_setup(cls_sess, 3633 + sizeof(struct qla_conn), 3634 + conn_id); 3635 + if (!cls_conn) 3636 + goto exit_ddb_list; 3637 + 3638 + ddb_entry->conn = cls_conn; 3639 + 3640 + /* Setup ep, for displaying attributes in sysfs */ 3641 + ep = qla4xxx_get_ep_fwdb(ha, fw_ddb_entry); 3642 + if (ep) { 3643 + ep->conn = cls_conn; 3644 + cls_conn->ep = ep; 3645 + } else { 3646 + DEBUG2(ql4_printk(KERN_ERR, ha, 3647 + "Unable to get ep\n")); 3648 + } 3649 + 3650 + /* Update sess/conn params */ 3651 + qla4xxx_copy_fwddb_param(ha, fw_ddb_entry, cls_sess, 3652 + cls_conn); 3653 + 3654 + if (is_reset == RESET_ADAPTER) { 3655 + iscsi_block_session(cls_sess); 3656 + /* Use the relogin path to discover new devices 3657 + * by
short-circuiting the logic of setting 3658 + * timer to relogin - instead set the flags 3659 + * to initiate login right away. 3660 + */ 3661 + set_bit(DPC_RELOGIN_DEVICE, &ha->dpc_flags); 3662 + set_bit(DF_RELOGIN, &ddb_entry->flags); 3663 + } 3664 + } 3665 + continue_next_nt: 3666 + if (next_idx == 0) 3667 + break; 3668 + } 3669 + exit_ddb_list: 3670 + qla4xxx_free_nt_list(&list_nt); 3671 + if (fw_ddb_entry) 3672 + dma_pool_free(ha->fw_ddb_dma_pool, fw_ddb_entry, fw_ddb_dma); 3673 + 3674 + qla4xxx_free_ddb_index(ha); 3675 + } 3676 + 3619 3677 3620 3678 /** 3621 3679 * qla4xxx_probe_adapter - callback function to probe HBA ··· 4236 3298 * firmware 4237 3299 * NOTE: interrupts enabled upon successful completion 4238 3300 */ 4239 - status = qla4xxx_initialize_adapter(ha); 3301 + status = qla4xxx_initialize_adapter(ha, INIT_ADAPTER); 4240 3302 while ((!test_bit(AF_ONLINE, &ha->flags)) && 4241 3303 init_retry_count++ < MAX_INIT_RETRIES) { 4242 3304 ··· 4257 3319 if (ha->isp_ops->reset_chip(ha) == QLA_ERROR) 4258 3320 continue; 4259 3321 4260 - status = qla4xxx_initialize_adapter(ha); 3322 + status = qla4xxx_initialize_adapter(ha, INIT_ADAPTER); 4261 3323 } 4262 3324 4263 3325 if (!test_bit(AF_ONLINE, &ha->flags)) { ··· 4324 3386 ha->host_no, ha->firmware_version[0], ha->firmware_version[1], 4325 3387 ha->patch_number, ha->build_number); 4326 3388 4327 - qla4xxx_create_chap_list(ha); 4328 - 4329 3389 if (qla4xxx_setup_boot_info(ha)) 4330 3390 ql4_printk(KERN_ERR, ha, "%s:ISCSI boot info setup failed\n", 4331 3391 __func__); 3392 + 3393 + /* Build the ddb list and login to each */ 3394 + qla4xxx_build_ddb_list(ha, INIT_ADAPTER); 3395 + iscsi_host_for_each_session(ha->host, qla4xxx_login_flash_ddb); 3396 + 3397 + qla4xxx_create_chap_list(ha); 4332 3398 4333 3399 qla4xxx_create_ifaces(ha); 4334 3400 return 0; ··· 4391 3449 } 4392 3450 } 4393 3451 3452 + static void qla4xxx_destroy_fw_ddb_session(struct scsi_qla_host *ha) 3453 + { 3454 + struct ddb_entry *ddb_entry;
3455 + int options; 3456 + int idx; 3457 + 3458 + for (idx = 0; idx < MAX_DDB_ENTRIES; idx++) { 3459 + 3460 + ddb_entry = qla4xxx_lookup_ddb_by_fw_index(ha, idx); 3461 + if ((ddb_entry != NULL) && 3462 + (ddb_entry->ddb_type == FLASH_DDB)) { 3463 + 3464 + options = LOGOUT_OPTION_CLOSE_SESSION; 3465 + if (qla4xxx_session_logout_ddb(ha, ddb_entry, options) 3466 + == QLA_ERROR) 3467 + ql4_printk(KERN_ERR, ha, "%s: Logout failed\n", 3468 + __func__); 3469 + 3470 + qla4xxx_clear_ddb_entry(ha, ddb_entry->fw_ddb_index); 3471 + /* 3472 + * we decremented the driver's reference count when we 3473 + * set up the session, so the driver can be unloaded 3474 + * seamlessly without actually destroying the 3475 + * session 3476 + **/ 3477 + try_module_get(qla4xxx_iscsi_transport.owner); 3478 + iscsi_destroy_endpoint(ddb_entry->conn->ep); 3479 + qla4xxx_free_ddb(ha, ddb_entry); 3480 + iscsi_session_teardown(ddb_entry->sess); 3481 + } 3482 + } 3483 + } 3484 /** 4395 3485 * qla4xxx_remove_adapter - callback function to remove adapter.
4396 3486 * @pci_dev: PCI device pointer ··· 4439 3465 /* destroy iface from sysfs */ 4440 3466 qla4xxx_destroy_ifaces(ha); 4441 3467 4442 - if (ha->boot_kset) 3468 + if ((!ql4xdisablesysfsboot) && ha->boot_kset) 4443 3469 iscsi_boot_destroy_kset(ha->boot_kset); 3470 + 3471 + qla4xxx_destroy_fw_ddb_session(ha); 4444 3472 4445 3473 scsi_remove_host(ha->host); 4446 3474 ··· 5091 4115 5092 4116 qla4_8xxx_idc_unlock(ha); 5093 4117 clear_bit(AF_FW_RECOVERY, &ha->flags); 5094 - rval = qla4xxx_initialize_adapter(ha); 4118 + rval = qla4xxx_initialize_adapter(ha, RESET_ADAPTER); 5095 4119 qla4_8xxx_idc_lock(ha); 5096 4120 5097 4121 if (rval != QLA_SUCCESS) { ··· 5127 4151 if ((qla4_8xxx_rd_32(ha, QLA82XX_CRB_DEV_STATE) == 5128 4152 QLA82XX_DEV_READY)) { 5129 4153 clear_bit(AF_FW_RECOVERY, &ha->flags); 5130 - rval = qla4xxx_initialize_adapter(ha); 4154 + rval = qla4xxx_initialize_adapter(ha, RESET_ADAPTER); 5131 4155 if (rval == QLA_SUCCESS) { 5132 4156 ret = qla4xxx_request_irqs(ha); 5133 4157 if (ret) {
+1 -1
drivers/scsi/qla4xxx/ql4_version.h
··· 5 5 * See LICENSE.qla4xxx for copyright and licensing details. 6 6 */ 7 7 8 - #define QLA4XXX_DRIVER_VERSION "5.02.00-k8" 8 + #define QLA4XXX_DRIVER_VERSION "5.02.00-k9"
+2 -2
drivers/spi/Kconfig
··· 199 199 depends on FSL_SOC 200 200 201 201 config SPI_FSL_SPI 202 - tristate "Freescale SPI controller" 202 + bool "Freescale SPI controller" 203 203 depends on FSL_SOC 204 204 select SPI_FSL_LIB 205 205 help ··· 208 208 MPC8569 uses the controller in QE mode, MPC8610 in cpu mode. 209 209 210 210 config SPI_FSL_ESPI 211 - tristate "Freescale eSPI controller" 211 + bool "Freescale eSPI controller" 212 212 depends on FSL_SOC 213 213 select SPI_FSL_LIB 214 214 help
+1
drivers/spi/spi-ath79.c
··· 13 13 */ 14 14 15 15 #include <linux/kernel.h> 16 + #include <linux/module.h> 16 17 #include <linux/init.h> 17 18 #include <linux/delay.h> 18 19 #include <linux/spinlock.h>
+2 -2
drivers/spi/spi-gpio.c
··· 256 256 spi_bitbang_cleanup(spi); 257 257 } 258 258 259 - static int __init spi_gpio_alloc(unsigned pin, const char *label, bool is_in) 259 + static int __devinit spi_gpio_alloc(unsigned pin, const char *label, bool is_in) 260 260 { 261 261 int value; 262 262 ··· 270 270 return value; 271 271 } 272 272 273 - static int __init 273 + static int __devinit 274 274 spi_gpio_request(struct spi_gpio_platform_data *pdata, const char *label, 275 275 u16 *res_flags) 276 276 {
+2 -1
drivers/spi/spi-nuc900.c
··· 8 8 * 9 9 */ 10 10 11 + #include <linux/module.h> 11 12 #include <linux/init.h> 12 13 #include <linux/spinlock.h> 13 14 #include <linux/workqueue.h> ··· 427 426 goto err_clk; 428 427 } 429 428 430 - mfp_set_groupg(&pdev->dev); 429 + mfp_set_groupg(&pdev->dev, NULL); 431 430 nuc900_init_spi(hw); 432 431 433 432 err = spi_bitbang_start(&hw->bitbang);
+6 -2
drivers/ssb/driver_pcicore.c
··· 517 517 518 518 static void __devinit ssb_pcicore_init_clientmode(struct ssb_pcicore *pc) 519 519 { 520 - ssb_pcicore_fix_sprom_core_index(pc); 520 + struct ssb_device *pdev = pc->dev; 521 + struct ssb_bus *bus = pdev->bus; 522 + 523 + if (bus->bustype == SSB_BUSTYPE_PCI) 524 + ssb_pcicore_fix_sprom_core_index(pc); 521 525 522 526 /* Disable PCI interrupts. */ 523 - ssb_write32(pc->dev, SSB_INTVEC, 0); 527 + ssb_write32(pdev, SSB_INTVEC, 0); 524 528 525 529 /* Additional PCIe always once-executed workarounds */ 526 530 if (pc->dev->id.coreid == SSB_DEV_PCIE) {
+73 -23
drivers/staging/comedi/comedi_fops.c
··· 671 671 } 672 672 673 673 insns = 674 - kmalloc(sizeof(struct comedi_insn) * insnlist.n_insns, GFP_KERNEL); 674 + kcalloc(insnlist.n_insns, sizeof(struct comedi_insn), GFP_KERNEL); 675 675 if (!insns) { 676 676 DPRINTK("kmalloc failed\n"); 677 677 ret = -ENOMEM; ··· 1432 1432 return ret; 1433 1433 } 1434 1434 1435 - static void comedi_unmap(struct vm_area_struct *area) 1435 + 1436 + static void comedi_vm_open(struct vm_area_struct *area) 1437 + { 1438 + struct comedi_async *async; 1439 + struct comedi_device *dev; 1440 + 1441 + async = area->vm_private_data; 1442 + dev = async->subdevice->device; 1443 + 1444 + mutex_lock(&dev->mutex); 1445 + async->mmap_count++; 1446 + mutex_unlock(&dev->mutex); 1447 + } 1448 + 1449 + static void comedi_vm_close(struct vm_area_struct *area) 1436 1450 { 1437 1451 struct comedi_async *async; 1438 1452 struct comedi_device *dev; ··· 1460 1446 } 1461 1447 1462 1448 static struct vm_operations_struct comedi_vm_ops = { 1463 - .close = comedi_unmap, 1449 + .open = comedi_vm_open, 1450 + .close = comedi_vm_close, 1464 1451 }; 1465 1452 1466 1453 static int comedi_mmap(struct file *file, struct vm_area_struct *vma) 1467 1454 { 1468 1455 const unsigned minor = iminor(file->f_dentry->d_inode); 1469 - struct comedi_device_file_info *dev_file_info = 1470 - comedi_get_device_file_info(minor); 1471 - struct comedi_device *dev = dev_file_info->device; 1472 1456 struct comedi_async *async = NULL; 1473 1457 unsigned long start = vma->vm_start; 1474 1458 unsigned long size; ··· 1474 1462 int i; 1475 1463 int retval; 1476 1464 struct comedi_subdevice *s; 1465 + struct comedi_device_file_info *dev_file_info; 1466 + struct comedi_device *dev; 1467 + 1468 + dev_file_info = comedi_get_device_file_info(minor); 1469 + if (dev_file_info == NULL) 1470 + return -ENODEV; 1471 + dev = dev_file_info->device; 1472 + if (dev == NULL) 1473 + return -ENODEV; 1477 1474 1478 1475 mutex_lock(&dev->mutex); 1479 1476 if (!dev->attached) { ··· 1549 1528 { 1550 1529 
unsigned int mask = 0; 1551 1530 const unsigned minor = iminor(file->f_dentry->d_inode); 1552 - struct comedi_device_file_info *dev_file_info = 1553 - comedi_get_device_file_info(minor); 1554 - struct comedi_device *dev = dev_file_info->device; 1555 1531 struct comedi_subdevice *read_subdev; 1556 1532 struct comedi_subdevice *write_subdev; 1533 + struct comedi_device_file_info *dev_file_info; 1534 + struct comedi_device *dev; 1535 + dev_file_info = comedi_get_device_file_info(minor); 1536 + 1537 + if (dev_file_info == NULL) 1538 + return -ENODEV; 1539 + dev = dev_file_info->device; 1540 + if (dev == NULL) 1541 + return -ENODEV; 1557 1542 1558 1543 mutex_lock(&dev->mutex); 1559 1544 if (!dev->attached) { ··· 1605 1578 int n, m, count = 0, retval = 0; 1606 1579 DECLARE_WAITQUEUE(wait, current); 1607 1580 const unsigned minor = iminor(file->f_dentry->d_inode); 1608 - struct comedi_device_file_info *dev_file_info = 1609 - comedi_get_device_file_info(minor); 1610 - struct comedi_device *dev = dev_file_info->device; 1581 + struct comedi_device_file_info *dev_file_info; 1582 + struct comedi_device *dev; 1583 + dev_file_info = comedi_get_device_file_info(minor); 1584 + 1585 + if (dev_file_info == NULL) 1586 + return -ENODEV; 1587 + dev = dev_file_info->device; 1588 + if (dev == NULL) 1589 + return -ENODEV; 1611 1590 1612 1591 if (!dev->attached) { 1613 1592 DPRINTK("no driver configured on comedi%i\n", dev->minor); ··· 1673 1640 retval = -EAGAIN; 1674 1641 break; 1675 1642 } 1643 + schedule(); 1676 1644 if (signal_pending(current)) { 1677 1645 retval = -ERESTARTSYS; 1678 1646 break; 1679 1647 } 1680 - schedule(); 1681 1648 if (!s->busy) 1682 1649 break; 1683 1650 if (s->busy != file) { ··· 1716 1683 int n, m, count = 0, retval = 0; 1717 1684 DECLARE_WAITQUEUE(wait, current); 1718 1685 const unsigned minor = iminor(file->f_dentry->d_inode); 1719 - struct comedi_device_file_info *dev_file_info = 1720 - comedi_get_device_file_info(minor); 1721 - struct comedi_device *dev = 
dev_file_info->device; 1686 + struct comedi_device_file_info *dev_file_info; 1687 + struct comedi_device *dev; 1688 + dev_file_info = comedi_get_device_file_info(minor); 1689 + 1690 + if (dev_file_info == NULL) 1691 + return -ENODEV; 1692 + dev = dev_file_info->device; 1693 + if (dev == NULL) 1694 + return -ENODEV; 1722 1695 1723 1696 if (!dev->attached) { 1724 1697 DPRINTK("no driver configured on comedi%i\n", dev->minor); ··· 1780 1741 retval = -EAGAIN; 1781 1742 break; 1782 1743 } 1744 + schedule(); 1783 1745 if (signal_pending(current)) { 1784 1746 retval = -ERESTARTSYS; 1785 1747 break; 1786 1748 } 1787 - schedule(); 1788 1749 if (!s->busy) { 1789 1750 retval = 0; 1790 1751 break; ··· 1924 1885 static int comedi_close(struct inode *inode, struct file *file) 1925 1886 { 1926 1887 const unsigned minor = iminor(inode); 1927 - struct comedi_device_file_info *dev_file_info = 1928 - comedi_get_device_file_info(minor); 1929 - struct comedi_device *dev = dev_file_info->device; 1930 1888 struct comedi_subdevice *s = NULL; 1931 1889 int i; 1890 + struct comedi_device_file_info *dev_file_info; 1891 + struct comedi_device *dev; 1892 + dev_file_info = comedi_get_device_file_info(minor); 1893 + 1894 + if (dev_file_info == NULL) 1895 + return -ENODEV; 1896 + dev = dev_file_info->device; 1897 + if (dev == NULL) 1898 + return -ENODEV; 1932 1899 1933 1900 mutex_lock(&dev->mutex); 1934 1901 ··· 1968 1923 static int comedi_fasync(int fd, struct file *file, int on) 1969 1924 { 1970 1925 const unsigned minor = iminor(file->f_dentry->d_inode); 1971 - struct comedi_device_file_info *dev_file_info = 1972 - comedi_get_device_file_info(minor); 1926 + struct comedi_device_file_info *dev_file_info; 1927 + struct comedi_device *dev; 1928 + dev_file_info = comedi_get_device_file_info(minor); 1973 1929 1974 - struct comedi_device *dev = dev_file_info->device; 1930 + if (dev_file_info == NULL) 1931 + return -ENODEV; 1932 + dev = dev_file_info->device; 1933 + if (dev == NULL) 1934 + return 
-ENODEV; 1975 1935 1976 1936 return fasync_helper(fd, file, on, &dev->async_queue); 1977 1937 }
+4 -3
drivers/staging/comedi/drivers/usbduxsigma.c
··· 1 - #define DRIVER_VERSION "v0.5" 1 + #define DRIVER_VERSION "v0.6" 2 2 #define DRIVER_AUTHOR "Bernd Porr, BerndPorr@f2s.com" 3 3 #define DRIVER_DESC "Stirling/ITL USB-DUX SIGMA -- Bernd.Porr@f2s.com" 4 4 /* ··· 25 25 Description: University of Stirling USB DAQ & INCITE Technology Limited 26 26 Devices: [ITL] USB-DUX (usbduxsigma.o) 27 27 Author: Bernd Porr <BerndPorr@f2s.com> 28 - Updated: 21 Jul 2011 28 + Updated: 8 Nov 2011 29 29 Status: testing 30 30 */ 31 31 /* ··· 44 44 * 0.3: proper vendor ID and driver name 45 45 * 0.4: fixed D/A voltage range 46 46 * 0.5: various bug fixes, health check at startup 47 + * 0.6: corrected wrong input range 47 48 */ 48 49 49 50 /* generates loads of debug info */ ··· 176 175 /* comedi constants */ 177 176 static const struct comedi_lrange range_usbdux_ai_range = { 1, { 178 177 BIP_RANGE 179 - (2.65) 178 + (2.65/2.0) 180 179 } 181 180 }; 182 181
+9 -10
drivers/staging/iio/industrialio-core.c
··· 242 242 243 243 static int iio_event_getfd(struct iio_dev *indio_dev) 244 244 { 245 + struct iio_event_interface *ev_int = indio_dev->event_interface; 245 246 int fd; 246 247 247 - if (indio_dev->event_interface == NULL) 248 + if (ev_int == NULL) 248 249 return -ENODEV; 249 250 250 - mutex_lock(&indio_dev->event_interface->event_list_lock); 251 - if (test_and_set_bit(IIO_BUSY_BIT_POS, 252 - &indio_dev->event_interface->flags)) { 253 - mutex_unlock(&indio_dev->event_interface->event_list_lock); 251 + mutex_lock(&ev_int->event_list_lock); 252 + if (test_and_set_bit(IIO_BUSY_BIT_POS, &ev_int->flags)) { 253 + mutex_unlock(&ev_int->event_list_lock); 254 254 return -EBUSY; 255 255 } 256 - mutex_unlock(&indio_dev->event_interface->event_list_lock); 256 + mutex_unlock(&ev_int->event_list_lock); 257 257 fd = anon_inode_getfd("iio:event", 258 - &iio_event_chrdev_fileops, 259 - indio_dev->event_interface, O_RDONLY); 258 + &iio_event_chrdev_fileops, ev_int, O_RDONLY); 260 259 if (fd < 0) { 261 - mutex_lock(&indio_dev->event_interface->event_list_lock); 260 + mutex_lock(&ev_int->event_list_lock); 262 261 clear_bit(IIO_BUSY_BIT_POS, &ev_int->flags); 263 - mutex_unlock(&indio_dev->event_interface->event_list_lock); 262 + mutex_unlock(&ev_int->event_list_lock); 264 263 } 265 264 return fd; 266 265 }
+1
drivers/staging/rtl8712/usb_intf.c
··· 89 89 {USB_DEVICE(0x0DF6, 0x0045)}, 90 90 {USB_DEVICE(0x0DF6, 0x0059)}, /* 11n mode disable */ 91 91 {USB_DEVICE(0x0DF6, 0x004B)}, 92 + {USB_DEVICE(0x0DF6, 0x005D)}, 92 93 {USB_DEVICE(0x0DF6, 0x0063)}, 93 94 /* Sweex */ 94 95 {USB_DEVICE(0x177F, 0x0154)},
+1
drivers/staging/rts_pstor/rtsx.c
··· 1021 1021 th = kthread_create(rtsx_scan_thread, dev, "rtsx-scan"); 1022 1022 if (IS_ERR(th)) { 1023 1023 printk(KERN_ERR "Unable to start the device-scanning thread\n"); 1024 + complete(&dev->scanning_done); 1024 1025 quiesce_and_remove_host(dev); 1025 1026 err = PTR_ERR(th); 1026 1027 goto errout;
+12 -3
drivers/staging/tidspbridge/core/dsp-clock.c
··· 54 54 55 55 /* Bridge GPT id (1 - 4), DM Timer id (5 - 8) */ 56 56 #define DMT_ID(id) ((id) + 4) 57 + #define DM_TIMER_CLOCKS 4 57 58 58 59 /* Bridge MCBSP id (6 - 10), OMAP Mcbsp id (0 - 4) */ 59 60 #define MCBSP_ID(id) ((id) - 6) ··· 115 114 */ 116 115 void dsp_clk_exit(void) 117 116 { 117 + int i; 118 + 118 119 dsp_clock_disable_all(dsp_clocks); 120 + 121 + for (i = 0; i < DM_TIMER_CLOCKS; i++) 122 + omap_dm_timer_free(timer[i]); 119 123 120 124 clk_put(iva2_clk); 121 125 clk_put(ssi.sst_fck); ··· 136 130 void dsp_clk_init(void) 137 131 { 138 132 static struct platform_device dspbridge_device; 133 + int i, id; 139 134 140 135 dspbridge_device.dev.bus = &platform_bus_type; 136 + 137 + for (i = 0, id = 5; i < DM_TIMER_CLOCKS; i++, id++) 138 + timer[i] = omap_dm_timer_request_specific(id); 141 139 142 140 iva2_clk = clk_get(&dspbridge_device.dev, "iva2_ck"); 143 141 if (IS_ERR(iva2_clk)) ··· 214 204 clk_enable(iva2_clk); 215 205 break; 216 206 case GPT_CLK: 217 - timer[clk_id - 1] = 218 - omap_dm_timer_request_specific(DMT_ID(clk_id)); 207 + status = omap_dm_timer_start(timer[clk_id - 1]); 219 208 break; 220 209 #ifdef CONFIG_OMAP_MCBSP 221 210 case MCBSP_CLK: ··· 290 281 clk_disable(iva2_clk); 291 282 break; 292 283 case GPT_CLK: 293 - omap_dm_timer_free(timer[clk_id - 1]); 284 + status = omap_dm_timer_stop(timer[clk_id - 1]); 294 285 break; 295 286 #ifdef CONFIG_OMAP_MCBSP 296 287 case MCBSP_CLK:
-4
drivers/staging/tidspbridge/rmgr/drv_interface.c
··· 24 24 #include <linux/types.h> 25 25 #include <linux/platform_device.h> 26 26 #include <linux/pm.h> 27 - 28 - #ifdef MODULE 29 27 #include <linux/module.h> 30 - #endif 31 - 32 28 #include <linux/device.h> 33 29 #include <linux/init.h> 34 30 #include <linux/moduleparam.h>
+6 -4
drivers/staging/usbip/vhci_rx.c
··· 68 68 { 69 69 struct usbip_device *ud = &vdev->ud; 70 70 struct urb *urb; 71 + unsigned long flags; 71 72 72 73 spin_lock(&vdev->priv_lock); 73 74 urb = pickup_urb_and_free_priv(vdev, pdu->base.seqnum); ··· 102 101 103 102 usbip_dbg_vhci_rx("now giveback urb %p\n", urb); 104 103 105 - spin_lock(&the_controller->lock); 104 + spin_lock_irqsave(&the_controller->lock, flags); 106 105 usb_hcd_unlink_urb_from_ep(vhci_to_hcd(the_controller), urb); 107 - spin_unlock(&the_controller->lock); 106 + spin_unlock_irqrestore(&the_controller->lock, flags); 108 107 109 108 usb_hcd_giveback_urb(vhci_to_hcd(the_controller), urb, urb->status); 110 109 ··· 142 141 { 143 142 struct vhci_unlink *unlink; 144 143 struct urb *urb; 144 + unsigned long flags; 145 145 146 146 usbip_dump_header(pdu); 147 147 ··· 172 170 urb->status = pdu->u.ret_unlink.status; 173 171 pr_info("urb->status %d\n", urb->status); 174 172 175 - spin_lock(&the_controller->lock); 173 + spin_lock_irqsave(&the_controller->lock, flags); 176 174 usb_hcd_unlink_urb_from_ep(vhci_to_hcd(the_controller), urb); 177 - spin_unlock(&the_controller->lock); 175 + spin_unlock_irqrestore(&the_controller->lock, flags); 178 176 179 177 usb_hcd_giveback_urb(vhci_to_hcd(the_controller), urb, 180 178 urb->status);
+11 -15
drivers/target/iscsi/iscsi_target.c
··· 614 614 hdr = (struct iscsi_reject *) cmd->pdu; 615 615 hdr->reason = reason; 616 616 617 - cmd->buf_ptr = kzalloc(ISCSI_HDR_LEN, GFP_KERNEL); 617 + cmd->buf_ptr = kmemdup(buf, ISCSI_HDR_LEN, GFP_KERNEL); 618 618 if (!cmd->buf_ptr) { 619 619 pr_err("Unable to allocate memory for cmd->buf_ptr\n"); 620 620 iscsit_release_cmd(cmd); 621 621 return -1; 622 622 } 623 - memcpy(cmd->buf_ptr, buf, ISCSI_HDR_LEN); 624 623 625 624 spin_lock_bh(&conn->cmd_lock); 626 625 list_add_tail(&cmd->i_list, &conn->conn_cmd_list); ··· 660 661 hdr = (struct iscsi_reject *) cmd->pdu; 661 662 hdr->reason = reason; 662 663 663 - cmd->buf_ptr = kzalloc(ISCSI_HDR_LEN, GFP_KERNEL); 664 + cmd->buf_ptr = kmemdup(buf, ISCSI_HDR_LEN, GFP_KERNEL); 664 665 if (!cmd->buf_ptr) { 665 666 pr_err("Unable to allocate memory for cmd->buf_ptr\n"); 666 667 iscsit_release_cmd(cmd); 667 668 return -1; 668 669 } 669 - memcpy(cmd->buf_ptr, buf, ISCSI_HDR_LEN); 670 670 671 671 if (add_to_conn) { 672 672 spin_lock_bh(&conn->cmd_lock); ··· 1015 1017 " non-existent or non-exported iSCSI LUN:" 1016 1018 " 0x%016Lx\n", get_unaligned_le64(&hdr->lun)); 1017 1019 } 1018 - if (ret == PYX_TRANSPORT_OUT_OF_MEMORY_RESOURCES) 1019 - return iscsit_add_reject_from_cmd( 1020 - ISCSI_REASON_BOOKMARK_NO_RESOURCES, 1021 - 1, 1, buf, cmd); 1022 - 1023 1020 send_check_condition = 1; 1024 1021 goto attach_cmd; 1025 1022 } ··· 1037 1044 */ 1038 1045 send_check_condition = 1; 1039 1046 } else { 1047 + cmd->data_length = cmd->se_cmd.data_length; 1048 + 1040 1049 if (iscsit_decide_list_to_build(cmd, payload_length) < 0) 1041 1050 return iscsit_add_reject_from_cmd( 1042 1051 ISCSI_REASON_BOOKMARK_NO_RESOURCES, ··· 1118 1123 * the backend memory allocation. 
1119 1124 */ 1120 1125 ret = transport_generic_new_cmd(&cmd->se_cmd); 1121 - if ((ret < 0) || (cmd->se_cmd.se_cmd_flags & SCF_SE_CMD_FAILED)) { 1126 + if (ret < 0) { 1122 1127 immed_ret = IMMEDIATE_DATA_NORMAL_OPERATION; 1123 1128 dump_immediate_data = 1; 1124 1129 goto after_immediate_data; ··· 1336 1341 1337 1342 spin_lock_irqsave(&se_cmd->t_state_lock, flags); 1338 1343 if (!(se_cmd->se_cmd_flags & SCF_SUPPORTED_SAM_OPCODE) || 1339 - (se_cmd->se_cmd_flags & SCF_SE_CMD_FAILED)) 1344 + (se_cmd->se_cmd_flags & SCF_SCSI_CDB_EXCEPTION)) 1340 1345 dump_unsolicited_data = 1; 1341 1346 spin_unlock_irqrestore(&se_cmd->t_state_lock, flags); 1342 1347 ··· 2508 2513 if (hdr->flags & ISCSI_FLAG_DATA_STATUS) { 2509 2514 if (cmd->se_cmd.se_cmd_flags & SCF_OVERFLOW_BIT) { 2510 2515 hdr->flags |= ISCSI_FLAG_DATA_OVERFLOW; 2511 - hdr->residual_count = cpu_to_be32(cmd->residual_count); 2516 + hdr->residual_count = cpu_to_be32(cmd->se_cmd.residual_count); 2512 2517 } else if (cmd->se_cmd.se_cmd_flags & SCF_UNDERFLOW_BIT) { 2513 2518 hdr->flags |= ISCSI_FLAG_DATA_UNDERFLOW; 2514 - hdr->residual_count = cpu_to_be32(cmd->residual_count); 2519 + hdr->residual_count = cpu_to_be32(cmd->se_cmd.residual_count); 2515 2520 } 2516 2521 } 2517 2522 hton24(hdr->dlength, datain.length); ··· 3013 3018 hdr->flags |= ISCSI_FLAG_CMD_FINAL; 3014 3019 if (cmd->se_cmd.se_cmd_flags & SCF_OVERFLOW_BIT) { 3015 3020 hdr->flags |= ISCSI_FLAG_CMD_OVERFLOW; 3016 - hdr->residual_count = cpu_to_be32(cmd->residual_count); 3021 + hdr->residual_count = cpu_to_be32(cmd->se_cmd.residual_count); 3017 3022 } else if (cmd->se_cmd.se_cmd_flags & SCF_UNDERFLOW_BIT) { 3018 3023 hdr->flags |= ISCSI_FLAG_CMD_UNDERFLOW; 3019 - hdr->residual_count = cpu_to_be32(cmd->residual_count); 3024 + hdr->residual_count = cpu_to_be32(cmd->se_cmd.residual_count); 3020 3025 } 3021 3026 hdr->response = cmd->iscsi_response; 3022 3027 hdr->cmd_status = cmd->se_cmd.scsi_status; ··· 3128 3133 hdr = (struct iscsi_tm_rsp *) cmd->pdu; 3129 3134 
memset(hdr, 0, ISCSI_HDR_LEN); 3130 3135 hdr->opcode = ISCSI_OP_SCSI_TMFUNC_RSP; 3136 + hdr->flags = ISCSI_FLAG_CMD_FINAL; 3131 3137 hdr->response = iscsit_convert_tcm_tmr_rsp(se_tmr); 3132 3138 hdr->itt = cpu_to_be32(cmd->init_task_tag); 3133 3139 cmd->stat_sn = conn->stat_sn++;
+4 -2
drivers/target/iscsi/iscsi_target_auth.c
··· 30 30 31 31 static int chap_string_to_hex(unsigned char *dst, unsigned char *src, int len) 32 32 { 33 - int j = DIV_ROUND_UP(len, 2); 33 + int j = DIV_ROUND_UP(len, 2), rc; 34 34 35 - hex2bin(dst, src, j); 35 + rc = hex2bin(dst, src, j); 36 + if (rc < 0) 37 + pr_debug("CHAP string contains non hex digit symbols\n"); 36 38 37 39 dst[j] = '\0'; 38 40 return j;
-3
drivers/target/iscsi/iscsi_target_core.h
··· 398 398 u32 pdu_send_order; 399 399 /* Current struct iscsi_pdu in struct iscsi_cmd->pdu_list */ 400 400 u32 pdu_start; 401 - u32 residual_count; 402 401 /* Next struct iscsi_seq to send in struct iscsi_cmd->seq_list */ 403 402 u32 seq_send_order; 404 403 /* Number of struct iscsi_seq in struct iscsi_cmd->seq_list */ ··· 534 535 atomic_t connection_exit; 535 536 atomic_t connection_recovery; 536 537 atomic_t connection_reinstatement; 537 - atomic_t connection_wait; 538 538 atomic_t connection_wait_rcfr; 539 539 atomic_t sleep_on_conn_wait_comp; 540 540 atomic_t transport_failed; ··· 641 643 atomic_t session_reinstatement; 642 644 atomic_t session_stop_active; 643 645 atomic_t sleep_on_sess_wait_comp; 644 - atomic_t transport_wait_cmds; 645 646 /* connection list */ 646 647 struct list_head sess_conn_list; 647 648 struct list_head cr_active_list;
+1 -2
drivers/target/iscsi/iscsi_target_erl1.c
··· 938 938 * handle the SCF_SCSI_RESERVATION_CONFLICT case here as well. 939 939 */ 940 940 if (se_cmd->se_cmd_flags & SCF_SCSI_CDB_EXCEPTION) { 941 - if (se_cmd->se_cmd_flags & 942 - SCF_SCSI_RESERVATION_CONFLICT) { 941 + if (se_cmd->scsi_sense_reason == TCM_RESERVATION_CONFLICT) { 943 942 cmd->i_state = ISTATE_SEND_STATUS; 944 943 spin_unlock_bh(&cmd->istate_lock); 945 944 iscsit_add_cmd_to_response_queue(cmd, cmd->conn,
+8 -5
drivers/target/iscsi/iscsi_target_login.c
··· 224 224 iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, 225 225 ISCSI_LOGIN_STATUS_NO_RESOURCES); 226 226 pr_err("Could not allocate memory for session\n"); 227 - return -1; 227 + return -ENOMEM; 228 228 } 229 229 230 230 iscsi_login_set_conn_values(sess, conn, pdu->cid); ··· 250 250 pr_err("idr_pre_get() for sess_idr failed\n"); 251 251 iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, 252 252 ISCSI_LOGIN_STATUS_NO_RESOURCES); 253 - return -1; 253 + kfree(sess); 254 + return -ENOMEM; 254 255 } 255 256 spin_lock(&sess_idr_lock); 256 257 idr_get_new(&sess_idr, NULL, &sess->session_index); ··· 271 270 ISCSI_LOGIN_STATUS_NO_RESOURCES); 272 271 pr_err("Unable to allocate memory for" 273 272 " struct iscsi_sess_ops.\n"); 274 - return -1; 273 + kfree(sess); 274 + return -ENOMEM; 275 275 } 276 276 277 277 sess->se_sess = transport_init_session(); 278 - if (!sess->se_sess) { 278 + if (IS_ERR(sess->se_sess)) { 279 279 iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, 280 280 ISCSI_LOGIN_STATUS_NO_RESOURCES); 281 - return -1; 281 + kfree(sess); 282 + return -ENOMEM; 282 283 } 283 284 284 285 return 0;
+1 -2
drivers/target/iscsi/iscsi_target_nego.c
··· 981 981 return NULL; 982 982 } 983 983 984 - login->req = kzalloc(ISCSI_HDR_LEN, GFP_KERNEL); 984 + login->req = kmemdup(login_pdu, ISCSI_HDR_LEN, GFP_KERNEL); 985 985 if (!login->req) { 986 986 pr_err("Unable to allocate memory for Login Request.\n"); 987 987 iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, 988 988 ISCSI_LOGIN_STATUS_NO_RESOURCES); 989 989 goto out; 990 990 } 991 - memcpy(login->req, login_pdu, ISCSI_HDR_LEN); 992 991 993 992 login->req_buf = kzalloc(MAX_KEY_VALUE_PAIRS, GFP_KERNEL); 994 993 if (!login->req_buf) {
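Reviewer note: the iscsi_target_nego.c hunk collapses a `kzalloc()` followed by `memcpy()` into a single `kmemdup()` call. A userspace analogue of that idiom, with a malloc-based `dup_buf()` standing in for the kernel's `kmemdup()`:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Userspace stand-in for kmemdup(): allocate a buffer and copy the source
 * into it in one step, replacing the separate kzalloc() + memcpy() pair.
 * The zero-fill of kzalloc() was redundant anyway, since memcpy()
 * immediately overwrote the whole buffer. */
static void *dup_buf(const void *src, size_t len)
{
	void *p = malloc(len);

	if (p)
		memcpy(p, src, len);
	return p;
}
```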
+10 -31
drivers/target/loopback/tcm_loop.c
··· 113 113 scsi_bufflen(sc), sc->sc_data_direction, sam_task_attr, 114 114 &tl_cmd->tl_sense_buf[0]); 115 115 116 - /* 117 - * Signal BIDI usage with T_TASK(cmd)->t_tasks_bidi 118 - */ 119 116 if (scsi_bidi_cmnd(sc)) 120 - se_cmd->t_tasks_bidi = 1; 117 + se_cmd->se_cmd_flags |= SCF_BIDI; 118 + 121 119 /* 122 120 * Locate the struct se_lun pointer and attach it to struct se_cmd 123 121 */ ··· 146 148 * Allocate the necessary tasks to complete the received CDB+data 147 149 */ 148 150 ret = transport_generic_allocate_tasks(se_cmd, sc->cmnd); 149 - if (ret == -ENOMEM) { 150 - /* Out of Resources */ 151 - return PYX_TRANSPORT_LU_COMM_FAILURE; 152 - } else if (ret == -EINVAL) { 153 - /* 154 - * Handle case for SAM_STAT_RESERVATION_CONFLICT 155 - */ 156 - if (se_cmd->se_cmd_flags & SCF_SCSI_RESERVATION_CONFLICT) 157 - return PYX_TRANSPORT_RESERVATION_CONFLICT; 158 - /* 159 - * Otherwise, return SAM_STAT_CHECK_CONDITION and return 160 - * sense data. 161 - */ 162 - return PYX_TRANSPORT_USE_SENSE_REASON; 163 - } 164 - 151 + if (ret != 0) 152 + return ret; 165 153 /* 166 154 * For BIDI commands, pass in the extra READ buffer 167 155 * to transport_generic_map_mem_to_cmd() below.. 
168 156 */ 169 - if (se_cmd->t_tasks_bidi) { 157 + if (se_cmd->se_cmd_flags & SCF_BIDI) { 170 158 struct scsi_data_buffer *sdb = scsi_in(sc); 171 159 172 160 sgl_bidi = sdb->table.sgl; ··· 178 194 } 179 195 180 196 /* Tell the core about our preallocated memory */ 181 - ret = transport_generic_map_mem_to_cmd(se_cmd, scsi_sglist(sc), 197 + return transport_generic_map_mem_to_cmd(se_cmd, scsi_sglist(sc), 182 198 scsi_sg_count(sc), sgl_bidi, sgl_bidi_count); 183 - if (ret < 0) 184 - return PYX_TRANSPORT_LU_COMM_FAILURE; 185 - 186 - return 0; 187 199 } 188 200 189 201 /* ··· 1340 1360 { 1341 1361 struct tcm_loop_hba *tl_hba = container_of(wwn, 1342 1362 struct tcm_loop_hba, tl_hba_wwn); 1343 - int host_no = tl_hba->sh->host_no; 1363 + 1364 + pr_debug("TCM_Loop_ConfigFS: Deallocating emulated Target" 1365 + " SAS Address: %s at Linux/SCSI Host ID: %d\n", 1366 + tl_hba->tl_wwn_address, tl_hba->sh->host_no); 1344 1367 /* 1345 1368 * Call device_unregister() on the original tl_hba->dev. 1346 1369 * tcm_loop_fabric_scsi.c:tcm_loop_release_adapter() will 1347 1370 * release *tl_hba; 1348 1371 */ 1349 1372 device_unregister(&tl_hba->dev); 1350 - 1351 - pr_debug("TCM_Loop_ConfigFS: Deallocated emulated Target" 1352 - " SAS Address: %s at Linux/SCSI Host ID: %d\n", 1353 - config_item_name(&wwn->wwn_group.cg_item), host_no); 1354 1373 } 1355 1374 1356 1375 /* Start items for tcm_loop_cit */
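Reviewer note: the tcm_loop.c hunk retires the dedicated `t_tasks_bidi` field in favor of an `SCF_BIDI` bit in the command's existing `se_cmd_flags` word, so BIDI state is set and tested exactly like the other `SCF_*` flags. A sketch of the flag-bit pattern (the flag values below are illustrative, not the kernel's actual encoding):

```c
#include <assert.h>

/* Illustrative SCF_* bit values. */
#define SCF_SE_LUN_CMD	(1u << 0)
#define SCF_BIDI	(1u << 1)

struct se_cmd { unsigned se_cmd_flags; };

static int cmd_is_bidi(const struct se_cmd *cmd)
{
	/* One flags word; testing a bit doesn't disturb its neighbors. */
	return (cmd->se_cmd_flags & SCF_BIDI) != 0;
}
```

The other half of the hunk is the return-path simplification: `transport_generic_allocate_tasks()` and `transport_generic_map_mem_to_cmd()` now report errors as negative errnos, so the caller can propagate them directly instead of translating into `PYX_TRANSPORT_*` codes.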
+16 -11
drivers/target/target_core_alua.c
··· 191 191 int alua_access_state, primary = 0, rc; 192 192 u16 tg_pt_id, rtpi; 193 193 194 - if (!l_port) 195 - return PYX_TRANSPORT_LU_COMM_FAILURE; 196 - 194 + if (!l_port) { 195 + cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 196 + return -EINVAL; 197 + } 197 198 buf = transport_kmap_first_data_page(cmd); 198 199 199 200 /* ··· 204 203 l_tg_pt_gp_mem = l_port->sep_alua_tg_pt_gp_mem; 205 204 if (!l_tg_pt_gp_mem) { 206 205 pr_err("Unable to access l_port->sep_alua_tg_pt_gp_mem\n"); 207 - rc = PYX_TRANSPORT_UNKNOWN_SAM_OPCODE; 206 + cmd->scsi_sense_reason = TCM_UNSUPPORTED_SCSI_OPCODE; 207 + rc = -EINVAL; 208 208 goto out; 209 209 } 210 210 spin_lock(&l_tg_pt_gp_mem->tg_pt_gp_mem_lock); ··· 213 211 if (!l_tg_pt_gp) { 214 212 spin_unlock(&l_tg_pt_gp_mem->tg_pt_gp_mem_lock); 215 213 pr_err("Unable to access *l_tg_pt_gp_mem->tg_pt_gp\n"); 216 - rc = PYX_TRANSPORT_UNKNOWN_SAM_OPCODE; 214 + cmd->scsi_sense_reason = TCM_UNSUPPORTED_SCSI_OPCODE; 215 + rc = -EINVAL; 217 216 goto out; 218 217 } 219 218 rc = (l_tg_pt_gp->tg_pt_gp_alua_access_type & TPGS_EXPLICT_ALUA); ··· 223 220 if (!rc) { 224 221 pr_debug("Unable to process SET_TARGET_PORT_GROUPS" 225 222 " while TPGS_EXPLICT_ALUA is disabled\n"); 226 - rc = PYX_TRANSPORT_UNKNOWN_SAM_OPCODE; 223 + cmd->scsi_sense_reason = TCM_UNSUPPORTED_SCSI_OPCODE; 224 + rc = -EINVAL; 227 225 goto out; 228 226 } 229 227 ··· 249 245 * REQUEST, and the additional sense code set to INVALID 250 246 * FIELD IN PARAMETER LIST. 
251 247 */ 252 - rc = PYX_TRANSPORT_INVALID_PARAMETER_LIST; 248 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 249 + rc = -EINVAL; 253 250 goto out; 254 251 } 255 252 rc = -1; ··· 303 298 * throw an exception with ASCQ: INVALID_PARAMETER_LIST 304 299 */ 305 300 if (rc != 0) { 306 - rc = PYX_TRANSPORT_INVALID_PARAMETER_LIST; 301 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 302 + rc = -EINVAL; 307 303 goto out; 308 304 } 309 305 } else { ··· 341 335 * INVALID_PARAMETER_LIST 342 336 */ 343 337 if (rc != 0) { 344 - rc = PYX_TRANSPORT_INVALID_PARAMETER_LIST; 338 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 339 + rc = -EINVAL; 345 340 goto out; 346 341 } 347 342 } ··· 1191 1184 * struct t10_alua_lu_gp. 1192 1185 */ 1193 1186 spin_lock(&lu_gps_lock); 1194 - atomic_set(&lu_gp->lu_gp_shutdown, 1); 1195 1187 list_del(&lu_gp->lu_gp_node); 1196 1188 alua_lu_gps_count--; 1197 1189 spin_unlock(&lu_gps_lock); ··· 1444 1438 1445 1439 tg_pt_gp_mem->tg_pt = port; 1446 1440 port->sep_alua_tg_pt_gp_mem = tg_pt_gp_mem; 1447 - atomic_set(&port->sep_tg_pt_gp_active, 1); 1448 1441 1449 1442 return tg_pt_gp_mem; 1450 1443 }
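Reviewer note: the target_core_alua.c hunk is the clearest instance of the conversion running through this whole series — instead of encoding the SCSI failure class in a `PYX_TRANSPORT_*` return value, the function returns a plain negative errno and carries the sense reason out of band in `cmd->scsi_sense_reason`. A minimal sketch of that split, with illustrative enum values and a cut-down `cmd` struct:

```c
#include <assert.h>
#include <errno.h>

/* Illustrative subset of the TCM_* sense reasons. */
enum sense_reason {
	TCM_NO_SENSE = 0,
	TCM_INVALID_PARAMETER_LIST,
	TCM_UNSUPPORTED_SCSI_OPCODE,
};

struct cmd { enum sense_reason scsi_sense_reason; };

static int check_param_list(struct cmd *cmd, int valid)
{
	if (!valid) {
		/* Sense reason travels in *cmd; the return value is errno. */
		cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST;
		return -EINVAL;
	}
	return 0;
}
```

Callers then key off "negative means failure" uniformly, and the fabric layer reads the sense reason when it builds the CHECK CONDITION response.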
+14 -6
drivers/target/target_core_cdb.c
··· 478 478 if (cmd->data_length < 60) 479 479 return 0; 480 480 481 - buf[2] = 0x3c; 481 + buf[3] = 0x3c; 482 482 /* Set HEADSUP, ORDSUP, SIMPSUP */ 483 483 buf[5] = 0x07; 484 484 ··· 703 703 if (cmd->data_length < 4) { 704 704 pr_err("SCSI Inquiry payload length: %u" 705 705 " too small for EVPD=1\n", cmd->data_length); 706 + cmd->scsi_sense_reason = TCM_INVALID_CDB_FIELD; 706 707 return -EINVAL; 707 708 } 708 709 ··· 720 719 } 721 720 722 721 pr_err("Unknown VPD Code: 0x%02x\n", cdb[2]); 722 + cmd->scsi_sense_reason = TCM_UNSUPPORTED_SCSI_OPCODE; 723 723 ret = -EINVAL; 724 724 725 725 out_unmap: ··· 971 969 default: 972 970 pr_err("MODE SENSE: unimplemented page/subpage: 0x%02x/0x%02x\n", 973 971 cdb[2] & 0x3f, cdb[3]); 974 - return PYX_TRANSPORT_UNKNOWN_MODE_PAGE; 972 + cmd->scsi_sense_reason = TCM_UNKNOWN_MODE_PAGE; 973 + return -EINVAL; 975 974 } 976 975 offset += length; 977 976 ··· 1030 1027 if (cdb[1] & 0x01) { 1031 1028 pr_err("REQUEST_SENSE description emulation not" 1032 1029 " supported\n"); 1033 - return PYX_TRANSPORT_INVALID_CDB_FIELD; 1030 + cmd->scsi_sense_reason = TCM_INVALID_CDB_FIELD; 1031 + return -ENOSYS; 1034 1032 } 1035 1033 1036 1034 buf = transport_kmap_first_data_page(cmd); ··· 1104 1100 if (!dev->transport->do_discard) { 1105 1101 pr_err("UNMAP emulation not supported for: %s\n", 1106 1102 dev->transport->name); 1107 - return PYX_TRANSPORT_UNKNOWN_SAM_OPCODE; 1103 + cmd->scsi_sense_reason = TCM_UNSUPPORTED_SCSI_OPCODE; 1104 + return -ENOSYS; 1108 1105 } 1109 1106 1110 1107 /* First UNMAP block descriptor starts at 8 byte offset */ ··· 1162 1157 if (!dev->transport->do_discard) { 1163 1158 pr_err("WRITE_SAME emulation not supported" 1164 1159 " for: %s\n", dev->transport->name); 1165 - return PYX_TRANSPORT_UNKNOWN_SAM_OPCODE; 1160 + cmd->scsi_sense_reason = TCM_UNSUPPORTED_SCSI_OPCODE; 1161 + return -ENOSYS; 1166 1162 } 1167 1163 1168 1164 if (cmd->t_task_cdb[0] == WRITE_SAME) ··· 1199 1193 int target_emulate_synchronize_cache(struct 
se_task *task) 1200 1194 { 1201 1195 struct se_device *dev = task->task_se_cmd->se_dev; 1196 + struct se_cmd *cmd = task->task_se_cmd; 1202 1197 1203 1198 if (!dev->transport->do_sync_cache) { 1204 1199 pr_err("SYNCHRONIZE_CACHE emulation not supported" 1205 1200 " for: %s\n", dev->transport->name); 1206 - return PYX_TRANSPORT_UNKNOWN_SAM_OPCODE; 1201 + cmd->scsi_sense_reason = TCM_UNSUPPORTED_SCSI_OPCODE; 1202 + return -ENOSYS; 1207 1203 } 1208 1204 1209 1205 dev->transport->do_sync_cache(task);
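Reviewer note: the one-byte fix at the top of the target_core_cdb.c hunk (`buf[2] = 0x3c` → `buf[3] = 0x3c`) is easy to miss. In the Extended INQUIRY Data VPD page (86h), the PAGE LENGTH field sits in bytes 2-3 as a big-endian value, so the 0x3c (60-byte) length belongs in the low byte at offset 3; writing it to offset 2 would have claimed a 15360-byte page. A sketch of the intended header layout (my reading of the hunk against the SPC page format, not kernel code):

```c
#include <assert.h>
#include <string.h>

/* Build the first four bytes of the Extended INQUIRY Data VPD page (86h).
 * Bytes 2-3 form a big-endian PAGE LENGTH; 0x3c goes in the low byte. */
static void build_evpd_86_header(unsigned char *buf)
{
	memset(buf, 0, 4);
	buf[1] = 0x86;	/* page code: Extended INQUIRY Data */
	buf[3] = 0x3c;	/* PAGE LENGTH = 60 */
}
```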
-11
drivers/target/target_core_configfs.c
··· 67 67 static struct config_group alua_group; 68 68 static struct config_group alua_lu_gps_group; 69 69 70 - static DEFINE_SPINLOCK(se_device_lock); 71 - static LIST_HEAD(se_dev_list); 72 - 73 70 static inline struct se_hba * 74 71 item_to_hba(struct config_item *item) 75 72 { ··· 2738 2741 " struct se_subsystem_dev\n"); 2739 2742 goto unlock; 2740 2743 } 2741 - INIT_LIST_HEAD(&se_dev->se_dev_node); 2742 2744 INIT_LIST_HEAD(&se_dev->t10_wwn.t10_vpd_list); 2743 2745 spin_lock_init(&se_dev->t10_wwn.t10_vpd_lock); 2744 2746 INIT_LIST_HEAD(&se_dev->t10_pr.registration_list); ··· 2773 2777 " from allocate_virtdevice()\n"); 2774 2778 goto out; 2775 2779 } 2776 - spin_lock(&se_device_lock); 2777 - list_add_tail(&se_dev->se_dev_node, &se_dev_list); 2778 - spin_unlock(&se_device_lock); 2779 2780 2780 2781 config_group_init_type_name(&se_dev->se_dev_group, name, 2781 2782 &target_core_dev_cit); ··· 2866 2873 2867 2874 mutex_lock(&hba->hba_access_mutex); 2868 2875 t = hba->transport; 2869 - 2870 - spin_lock(&se_device_lock); 2871 - list_del(&se_dev->se_dev_node); 2872 - spin_unlock(&se_device_lock); 2873 2876 2874 2877 dev_stat_grp = &se_dev->dev_stat_grps.stat_group; 2875 2878 for (i = 0; dev_stat_grp->default_groups[i]; i++) {
+17 -13
drivers/target/target_core_device.c
··· 104 104 se_cmd->se_lun = deve->se_lun; 105 105 se_cmd->pr_res_key = deve->pr_res_key; 106 106 se_cmd->orig_fe_lun = unpacked_lun; 107 - se_cmd->se_orig_obj_ptr = se_cmd->se_lun->lun_se_dev; 108 107 se_cmd->se_cmd_flags |= SCF_SE_LUN_CMD; 109 108 } 110 109 spin_unlock_irqrestore(&se_sess->se_node_acl->device_list_lock, flags); ··· 136 137 se_lun = &se_sess->se_tpg->tpg_virt_lun0; 137 138 se_cmd->se_lun = &se_sess->se_tpg->tpg_virt_lun0; 138 139 se_cmd->orig_fe_lun = 0; 139 - se_cmd->se_orig_obj_ptr = se_cmd->se_lun->lun_se_dev; 140 140 se_cmd->se_cmd_flags |= SCF_SE_LUN_CMD; 141 141 } 142 142 /* ··· 198 200 se_lun = deve->se_lun; 199 201 se_cmd->pr_res_key = deve->pr_res_key; 200 202 se_cmd->orig_fe_lun = unpacked_lun; 201 - se_cmd->se_orig_obj_ptr = se_cmd->se_dev; 202 203 } 203 204 spin_unlock_irqrestore(&se_sess->se_node_acl->device_list_lock, flags); 204 205 ··· 705 708 706 709 se_task->task_scsi_status = GOOD; 707 710 transport_complete_task(se_task, 1); 708 - return PYX_TRANSPORT_SENT_TO_TRANSPORT; 711 + return 0; 709 712 } 710 713 711 714 /* se_release_device_for_hba(): ··· 954 957 return -EINVAL; 955 958 } 956 959 957 - pr_err("dpo_emulated not supported\n"); 958 - return -EINVAL; 960 + if (flag) { 961 + pr_err("dpo_emulated not supported\n"); 962 + return -EINVAL; 963 + } 964 + 965 + return 0; 959 966 } 960 967 961 968 int se_dev_set_emulate_fua_write(struct se_device *dev, int flag) ··· 969 968 return -EINVAL; 970 969 } 971 970 972 - if (dev->transport->fua_write_emulated == 0) { 971 + if (flag && dev->transport->fua_write_emulated == 0) { 973 972 pr_err("fua_write_emulated not supported\n"); 974 973 return -EINVAL; 975 974 } ··· 986 985 return -EINVAL; 987 986 } 988 987 989 - pr_err("ua read emulated not supported\n"); 990 - return -EINVAL; 988 + if (flag) { 989 + pr_err("ua read emulated not supported\n"); 990 + return -EINVAL; 991 + } 992 + 993 + return 0; 991 994 } 992 995 993 996 int se_dev_set_emulate_write_cache(struct se_device *dev, int flag) 
··· 1000 995 pr_err("Illegal value %d\n", flag); 1001 996 return -EINVAL; 1002 997 } 1003 - if (dev->transport->write_cache_emulated == 0) { 998 + if (flag && dev->transport->write_cache_emulated == 0) { 1004 999 pr_err("write_cache_emulated not supported\n"); 1005 1000 return -EINVAL; 1006 1001 } ··· 1061 1056 * We expect this value to be non-zero when generic Block Layer 1062 1057 * Discard supported is detected iblock_create_virtdevice(). 1063 1058 */ 1064 - if (!dev->se_sub_dev->se_dev_attrib.max_unmap_block_desc_count) { 1059 + if (flag && !dev->se_sub_dev->se_dev_attrib.max_unmap_block_desc_count) { 1065 1060 pr_err("Generic Block Discard not supported\n"); 1066 1061 return -ENOSYS; 1067 1062 } ··· 1082 1077 * We expect this value to be non-zero when generic Block Layer 1083 1078 * Discard supported is detected iblock_create_virtdevice(). 1084 1079 */ 1085 - if (!dev->se_sub_dev->se_dev_attrib.max_unmap_block_desc_count) { 1080 + if (flag && !dev->se_sub_dev->se_dev_attrib.max_unmap_block_desc_count) { 1086 1081 pr_err("Generic Block Discard not supported\n"); 1087 1082 return -ENOSYS; 1088 1083 } ··· 1592 1587 ret = -ENOMEM; 1593 1588 goto out; 1594 1589 } 1595 - INIT_LIST_HEAD(&se_dev->se_dev_node); 1596 1590 INIT_LIST_HEAD(&se_dev->t10_wwn.t10_vpd_list); 1597 1591 spin_lock_init(&se_dev->t10_wwn.t10_vpd_lock); 1598 1592 INIT_LIST_HEAD(&se_dev->t10_pr.registration_list);
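Reviewer note: several target_core_device.c hunks share one fix — the attribute setters used to reject writes whenever the backend lacked the emulation, which made it impossible to write `0` to an attribute that was unsupported anyway. Gating the support check on `flag` means disabling always succeeds and only enabling an unsupported feature fails. A sketch of that setter shape (names are illustrative):

```c
#include <assert.h>
#include <errno.h>

static int set_emulate_write_cache(int *attr, int flag, int backend_supports)
{
	if (flag != 0 && flag != 1)
		return -EINVAL;
	/* Only reject when the caller tries to ENABLE the feature. */
	if (flag && !backend_supports)
		return -EINVAL;
	*attr = flag;
	return 0;
}
```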
+11 -9
drivers/target/target_core_file.c
··· 289 289 return -ENOMEM; 290 290 } 291 291 292 - for (i = 0; i < task->task_sg_nents; i++) { 293 - iov[i].iov_len = sg[i].length; 294 - iov[i].iov_base = sg_virt(&sg[i]); 292 + for_each_sg(task->task_sg, sg, task->task_sg_nents, i) { 293 + iov[i].iov_len = sg->length; 294 + iov[i].iov_base = sg_virt(sg); 295 295 } 296 296 297 297 old_fs = get_fs(); ··· 342 342 return -ENOMEM; 343 343 } 344 344 345 - for (i = 0; i < task->task_sg_nents; i++) { 346 - iov[i].iov_len = sg[i].length; 347 - iov[i].iov_base = sg_virt(&sg[i]); 345 + for_each_sg(task->task_sg, sg, task->task_sg_nents, i) { 346 + iov[i].iov_len = sg->length; 347 + iov[i].iov_base = sg_virt(sg); 348 348 } 349 349 350 350 old_fs = get_fs(); ··· 438 438 if (ret > 0 && 439 439 dev->se_sub_dev->se_dev_attrib.emulate_write_cache > 0 && 440 440 dev->se_sub_dev->se_dev_attrib.emulate_fua_write > 0 && 441 - cmd->t_tasks_fua) { 441 + (cmd->se_cmd_flags & SCF_FUA)) { 442 442 /* 443 443 * We might need to be a bit smarter here 444 444 * and return some sense data to let the initiator ··· 449 449 450 450 } 451 451 452 - if (ret < 0) 452 + if (ret < 0) { 453 + cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 453 454 return ret; 455 + } 454 456 if (ret) { 455 457 task->task_scsi_status = GOOD; 456 458 transport_complete_task(task, 1); 457 459 } 458 - return PYX_TRANSPORT_SENT_TO_TRANSPORT; 460 + return 0; 459 461 } 460 462 461 463 /* fd_free_task(): (Part of se_subsystem_api_t template)
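Reviewer note: the target_core_file.c hunk replaces `sg[i]` array indexing with `for_each_sg()`. That matters because a kernel scatterlist may be chained, so consecutive entries are not guaranteed to be contiguous in memory and must be walked via `sg_next()`. A userspace model of the iteration, using an explicit `next` pointer in place of the kernel's chained-page encoding:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified scatterlist entry: explicit chaining instead of the kernel's
 * page-pointer encoding. */
struct sg { void *addr; unsigned len; struct sg *next; };

/* Walk nents entries by following the chain, never by pointer arithmetic. */
#define for_each_sg(first, sg, nents, i) \
	for ((i) = 0, (sg) = (first); (i) < (nents); (i)++, (sg) = (sg)->next)

static unsigned total_len(struct sg *first, int nents)
{
	struct sg *sg;
	unsigned sum = 0;
	int i;

	for_each_sg(first, sg, nents, i)
		sum += sg->len;
	return sum;
}
```

With `sg[i]`, the loop would silently read past the end of the first chunk once the list chains; the chained walk is correct for both layouts.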
+10 -6
drivers/target/target_core_iblock.c
··· 531 531 */ 532 532 if (dev->se_sub_dev->se_dev_attrib.emulate_write_cache == 0 || 533 533 (dev->se_sub_dev->se_dev_attrib.emulate_fua_write > 0 && 534 - task->task_se_cmd->t_tasks_fua)) 534 + (cmd->se_cmd_flags & SCF_FUA))) 535 535 rw = WRITE_FUA; 536 536 else 537 537 rw = WRITE; ··· 554 554 else { 555 555 pr_err("Unsupported SCSI -> BLOCK LBA conversion:" 556 556 " %u\n", dev->se_sub_dev->se_dev_attrib.block_size); 557 - return PYX_TRANSPORT_LU_COMM_FAILURE; 557 + cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 558 + return -ENOSYS; 558 559 } 559 560 560 561 bio = iblock_get_bio(task, block_lba, sg_num); 561 - if (!bio) 562 - return PYX_TRANSPORT_OUT_OF_MEMORY_RESOURCES; 562 + if (!bio) { 563 + cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 564 + return -ENOMEM; 565 + } 563 566 564 567 bio_list_init(&list); 565 568 bio_list_add(&list, bio); ··· 591 588 submit_bio(rw, bio); 592 589 blk_finish_plug(&plug); 593 590 594 - return PYX_TRANSPORT_SENT_TO_TRANSPORT; 591 + return 0; 595 592 596 593 fail: 597 594 while ((bio = bio_list_pop(&list))) 598 595 bio_put(bio); 599 - return PYX_TRANSPORT_OUT_OF_MEMORY_RESOURCES; 596 + cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 597 + return -ENOMEM; 600 598 } 601 599 602 600 static u32 iblock_get_device_rev(struct se_device *dev)
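Reviewer note: the write-path condition in the target_core_iblock.c hunk is subtle: `WRITE_FUA` is issued either when the device reports no write cache at all (initiators then never send SYNCHRONIZE CACHE, so every write must be forced to media) or when FUA emulation is on and the command set `SCF_FUA`. A sketch of that decision, with an illustrative flag value and enum:

```c
#include <assert.h>

#define SCF_FUA (1u << 4)	/* illustrative flag value */

enum rw_hint { RW_WRITE, RW_WRITE_FUA };

static enum rw_hint pick_write_rw(int emulate_write_cache,
				  int emulate_fua_write,
				  unsigned se_cmd_flags)
{
	/* No write cache: every write must hit stable media. Otherwise,
	 * honor FUA only when both the device and the command ask for it. */
	if (emulate_write_cache == 0 ||
	    (emulate_fua_write > 0 && (se_cmd_flags & SCF_FUA)))
		return RW_WRITE_FUA;
	return RW_WRITE;
}
```

The rest of the hunk is the same errno conversion as elsewhere in the series, with `TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE` recorded as the sense reason on the bio-allocation failure paths.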
+160 -80
drivers/target/target_core_pr.c
··· 191 191 pr_err("Received legacy SPC-2 RESERVE/RELEASE" 192 192 " while active SPC-3 registrations exist," 193 193 " returning RESERVATION_CONFLICT\n"); 194 - *ret = PYX_TRANSPORT_RESERVATION_CONFLICT; 194 + cmd->scsi_sense_reason = TCM_RESERVATION_CONFLICT; 195 195 return true; 196 196 } 197 197 ··· 252 252 (cmd->t_task_cdb[1] & 0x02)) { 253 253 pr_err("LongIO and Obselete Bits set, returning" 254 254 " ILLEGAL_REQUEST\n"); 255 - ret = PYX_TRANSPORT_ILLEGAL_REQUEST; 255 + cmd->scsi_sense_reason = TCM_UNSUPPORTED_SCSI_OPCODE; 256 + ret = -EINVAL; 256 257 goto out; 257 258 } 258 259 /* ··· 278 277 " from %s \n", cmd->se_lun->unpacked_lun, 279 278 cmd->se_deve->mapped_lun, 280 279 sess->se_node_acl->initiatorname); 281 - ret = PYX_TRANSPORT_RESERVATION_CONFLICT; 280 + cmd->scsi_sense_reason = TCM_RESERVATION_CONFLICT; 281 + ret = -EINVAL; 282 282 goto out_unlock; 283 283 } 284 284 ··· 1512 1510 tidh_new = kzalloc(sizeof(struct pr_transport_id_holder), GFP_KERNEL); 1513 1511 if (!tidh_new) { 1514 1512 pr_err("Unable to allocate tidh_new\n"); 1515 - return PYX_TRANSPORT_LU_COMM_FAILURE; 1513 + cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 1514 + return -EINVAL; 1516 1515 } 1517 1516 INIT_LIST_HEAD(&tidh_new->dest_list); 1518 1517 tidh_new->dest_tpg = tpg; ··· 1525 1522 sa_res_key, all_tg_pt, aptpl); 1526 1523 if (!local_pr_reg) { 1527 1524 kfree(tidh_new); 1528 - return PYX_TRANSPORT_LU_COMM_FAILURE; 1525 + cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 1526 + return -ENOMEM; 1529 1527 } 1530 1528 tidh_new->dest_pr_reg = local_pr_reg; 1531 1529 /* ··· 1552 1548 pr_err("SPC-3 PR: Illegal tpdl: %u + 28 byte header" 1553 1549 " does not equal CDB data_length: %u\n", tpdl, 1554 1550 cmd->data_length); 1555 - ret = PYX_TRANSPORT_INVALID_PARAMETER_LIST; 1551 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 1552 + ret = -EINVAL; 1556 1553 goto out; 1557 1554 } 1558 1555 /* ··· 1603 1598 " for tmp_tpg\n"); 1604 1599 
atomic_dec(&tmp_tpg->tpg_pr_ref_count); 1605 1600 smp_mb__after_atomic_dec(); 1606 - ret = PYX_TRANSPORT_LU_COMM_FAILURE; 1601 + cmd->scsi_sense_reason = 1602 + TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 1603 + ret = -EINVAL; 1607 1604 goto out; 1608 1605 } 1609 1606 /* ··· 1635 1628 atomic_dec(&dest_node_acl->acl_pr_ref_count); 1636 1629 smp_mb__after_atomic_dec(); 1637 1630 core_scsi3_tpg_undepend_item(tmp_tpg); 1638 - ret = PYX_TRANSPORT_LU_COMM_FAILURE; 1631 + cmd->scsi_sense_reason = 1632 + TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 1633 + ret = -EINVAL; 1639 1634 goto out; 1640 1635 } 1641 1636 ··· 1655 1646 if (!dest_tpg) { 1656 1647 pr_err("SPC-3 PR SPEC_I_PT: Unable to locate" 1657 1648 " dest_tpg\n"); 1658 - ret = PYX_TRANSPORT_INVALID_PARAMETER_LIST; 1649 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 1650 + ret = -EINVAL; 1659 1651 goto out; 1660 1652 } 1661 1653 #if 0 ··· 1670 1660 " %u for Transport ID: %s\n", tid_len, ptr); 1671 1661 core_scsi3_nodeacl_undepend_item(dest_node_acl); 1672 1662 core_scsi3_tpg_undepend_item(dest_tpg); 1673 - ret = PYX_TRANSPORT_INVALID_PARAMETER_LIST; 1663 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 1664 + ret = -EINVAL; 1674 1665 goto out; 1675 1666 } 1676 1667 /* ··· 1689 1678 1690 1679 core_scsi3_nodeacl_undepend_item(dest_node_acl); 1691 1680 core_scsi3_tpg_undepend_item(dest_tpg); 1692 - ret = PYX_TRANSPORT_INVALID_PARAMETER_LIST; 1681 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 1682 + ret = -EINVAL; 1693 1683 goto out; 1694 1684 } 1695 1685 ··· 1702 1690 smp_mb__after_atomic_dec(); 1703 1691 core_scsi3_nodeacl_undepend_item(dest_node_acl); 1704 1692 core_scsi3_tpg_undepend_item(dest_tpg); 1705 - ret = PYX_TRANSPORT_LU_COMM_FAILURE; 1693 + cmd->scsi_sense_reason = 1694 + TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 1695 + ret = -EINVAL; 1706 1696 goto out; 1707 1697 } 1708 1698 #if 0 ··· 1741 1727 core_scsi3_lunacl_undepend_item(dest_se_deve); 1742 1728 
core_scsi3_nodeacl_undepend_item(dest_node_acl); 1743 1729 core_scsi3_tpg_undepend_item(dest_tpg); 1744 - ret = PYX_TRANSPORT_LU_COMM_FAILURE; 1730 + cmd->scsi_sense_reason = 1731 + TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 1732 + ret = -ENOMEM; 1745 1733 goto out; 1746 1734 } 1747 1735 INIT_LIST_HEAD(&tidh_new->dest_list); ··· 1775 1759 core_scsi3_nodeacl_undepend_item(dest_node_acl); 1776 1760 core_scsi3_tpg_undepend_item(dest_tpg); 1777 1761 kfree(tidh_new); 1778 - ret = PYX_TRANSPORT_INVALID_PARAMETER_LIST; 1762 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 1763 + ret = -EINVAL; 1779 1764 goto out; 1780 1765 } 1781 1766 tidh_new->dest_pr_reg = dest_pr_reg; ··· 2115 2098 2116 2099 if (!se_sess || !se_lun) { 2117 2100 pr_err("SPC-3 PR: se_sess || struct se_lun is NULL!\n"); 2118 - return PYX_TRANSPORT_LU_COMM_FAILURE; 2101 + cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 2102 + return -EINVAL; 2119 2103 } 2120 2104 se_tpg = se_sess->se_tpg; 2121 2105 se_deve = &se_sess->se_node_acl->device_list[cmd->orig_fe_lun]; ··· 2135 2117 if (res_key) { 2136 2118 pr_warn("SPC-3 PR: Reservation Key non-zero" 2137 2119 " for SA REGISTER, returning CONFLICT\n"); 2138 - return PYX_TRANSPORT_RESERVATION_CONFLICT; 2120 + cmd->scsi_sense_reason = TCM_RESERVATION_CONFLICT; 2121 + return -EINVAL; 2139 2122 } 2140 2123 /* 2141 2124 * Do nothing but return GOOD status. 
2142 2125 */ 2143 2126 if (!sa_res_key) 2144 - return PYX_TRANSPORT_SENT_TO_TRANSPORT; 2127 + return 0; 2145 2128 2146 2129 if (!spec_i_pt) { 2147 2130 /* ··· 2157 2138 if (ret != 0) { 2158 2139 pr_err("Unable to allocate" 2159 2140 " struct t10_pr_registration\n"); 2160 - return PYX_TRANSPORT_INVALID_PARAMETER_LIST; 2141 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 2142 + return -EINVAL; 2161 2143 } 2162 2144 } else { 2163 2145 /* ··· 2217 2197 " 0x%016Lx\n", res_key, 2218 2198 pr_reg->pr_res_key); 2219 2199 core_scsi3_put_pr_reg(pr_reg); 2220 - return PYX_TRANSPORT_RESERVATION_CONFLICT; 2200 + cmd->scsi_sense_reason = TCM_RESERVATION_CONFLICT; 2201 + return -EINVAL; 2221 2202 } 2222 2203 } 2223 2204 if (spec_i_pt) { 2224 2205 pr_err("SPC-3 PR UNREGISTER: SPEC_I_PT" 2225 2206 " set while sa_res_key=0\n"); 2226 2207 core_scsi3_put_pr_reg(pr_reg); 2227 - return PYX_TRANSPORT_INVALID_PARAMETER_LIST; 2208 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 2209 + return -EINVAL; 2228 2210 } 2229 2211 /* 2230 2212 * An existing ALL_TG_PT=1 registration being released ··· 2237 2215 " registration exists, but ALL_TG_PT=1 bit not" 2238 2216 " present in received PROUT\n"); 2239 2217 core_scsi3_put_pr_reg(pr_reg); 2240 - return PYX_TRANSPORT_INVALID_CDB_FIELD; 2218 + cmd->scsi_sense_reason = TCM_INVALID_CDB_FIELD; 2219 + return -EINVAL; 2241 2220 } 2242 2221 /* 2243 2222 * Allocate APTPL metadata buffer used for UNREGISTER ops ··· 2250 2227 pr_err("Unable to allocate" 2251 2228 " pr_aptpl_buf\n"); 2252 2229 core_scsi3_put_pr_reg(pr_reg); 2253 - return PYX_TRANSPORT_LU_COMM_FAILURE; 2230 + cmd->scsi_sense_reason = 2231 + TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 2232 + return -EINVAL; 2254 2233 } 2255 2234 } 2256 2235 /* ··· 2266 2241 if (pr_holder < 0) { 2267 2242 kfree(pr_aptpl_buf); 2268 2243 core_scsi3_put_pr_reg(pr_reg); 2269 - return PYX_TRANSPORT_RESERVATION_CONFLICT; 2244 + cmd->scsi_sense_reason = TCM_RESERVATION_CONFLICT; 2245 + return -EINVAL; 2270 
2246 } 2271 2247 2272 2248 spin_lock(&pr_tmpl->registration_lock); ··· 2431 2405 2432 2406 if (!se_sess || !se_lun) { 2433 2407 pr_err("SPC-3 PR: se_sess || struct se_lun is NULL!\n"); 2434 - return PYX_TRANSPORT_LU_COMM_FAILURE; 2408 + cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 2409 + return -EINVAL; 2435 2410 } 2436 2411 se_tpg = se_sess->se_tpg; 2437 2412 se_deve = &se_sess->se_node_acl->device_list[cmd->orig_fe_lun]; ··· 2444 2417 if (!pr_reg) { 2445 2418 pr_err("SPC-3 PR: Unable to locate" 2446 2419 " PR_REGISTERED *pr_reg for RESERVE\n"); 2447 - return PYX_TRANSPORT_LU_COMM_FAILURE; 2420 + cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 2421 + return -EINVAL; 2448 2422 } 2449 2423 /* 2450 2424 * From spc4r17 Section 5.7.9: Reserving: ··· 2461 2433 " does not match existing SA REGISTER res_key:" 2462 2434 " 0x%016Lx\n", res_key, pr_reg->pr_res_key); 2463 2435 core_scsi3_put_pr_reg(pr_reg); 2464 - return PYX_TRANSPORT_RESERVATION_CONFLICT; 2436 + cmd->scsi_sense_reason = TCM_RESERVATION_CONFLICT; 2437 + return -EINVAL; 2465 2438 } 2466 2439 /* 2467 2440 * From spc4r17 Section 5.7.9: Reserving: ··· 2477 2448 if (scope != PR_SCOPE_LU_SCOPE) { 2478 2449 pr_err("SPC-3 PR: Illegal SCOPE: 0x%02x\n", scope); 2479 2450 core_scsi3_put_pr_reg(pr_reg); 2480 - return PYX_TRANSPORT_INVALID_PARAMETER_LIST; 2451 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 2452 + return -EINVAL; 2481 2453 } 2482 2454 /* 2483 2455 * See if we have an existing PR reservation holder pointer at ··· 2510 2480 2511 2481 spin_unlock(&dev->dev_reservation_lock); 2512 2482 core_scsi3_put_pr_reg(pr_reg); 2513 - return PYX_TRANSPORT_RESERVATION_CONFLICT; 2483 + cmd->scsi_sense_reason = TCM_RESERVATION_CONFLICT; 2484 + return -EINVAL; 2514 2485 } 2515 2486 /* 2516 2487 * From spc4r17 Section 5.7.9: Reserving: ··· 2534 2503 2535 2504 spin_unlock(&dev->dev_reservation_lock); 2536 2505 core_scsi3_put_pr_reg(pr_reg); 2537 - return 
PYX_TRANSPORT_RESERVATION_CONFLICT; 2506 + cmd->scsi_sense_reason = TCM_RESERVATION_CONFLICT; 2507 + return -EINVAL; 2538 2508 } 2539 2509 /* 2540 2510 * From spc4r17 Section 5.7.9: Reserving: ··· 2549 2517 */ 2550 2518 spin_unlock(&dev->dev_reservation_lock); 2551 2519 core_scsi3_put_pr_reg(pr_reg); 2552 - return PYX_TRANSPORT_SENT_TO_TRANSPORT; 2520 + return 0; 2553 2521 } 2554 2522 /* 2555 2523 * Otherwise, our *pr_reg becomes the PR reservation holder for said ··· 2606 2574 default: 2607 2575 pr_err("SPC-3 PR: Unknown Service Action RESERVE Type:" 2608 2576 " 0x%02x\n", type); 2609 - return PYX_TRANSPORT_INVALID_CDB_FIELD; 2577 + cmd->scsi_sense_reason = TCM_INVALID_CDB_FIELD; 2578 + return -EINVAL; 2610 2579 } 2611 2580 2612 2581 return ret; ··· 2663 2630 2664 2631 if (!se_sess || !se_lun) { 2665 2632 pr_err("SPC-3 PR: se_sess || struct se_lun is NULL!\n"); 2666 - return PYX_TRANSPORT_LU_COMM_FAILURE; 2633 + cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 2634 + return -EINVAL; 2667 2635 } 2668 2636 /* 2669 2637 * Locate the existing *pr_reg via struct se_node_acl pointers ··· 2673 2639 if (!pr_reg) { 2674 2640 pr_err("SPC-3 PR: Unable to locate" 2675 2641 " PR_REGISTERED *pr_reg for RELEASE\n"); 2676 - return PYX_TRANSPORT_LU_COMM_FAILURE; 2642 + cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 2643 + return -EINVAL; 2677 2644 } 2678 2645 /* 2679 2646 * From spc4r17 Section 5.7.11.2 Releasing: ··· 2696 2661 */ 2697 2662 spin_unlock(&dev->dev_reservation_lock); 2698 2663 core_scsi3_put_pr_reg(pr_reg); 2699 - return PYX_TRANSPORT_SENT_TO_TRANSPORT; 2664 + return 0; 2700 2665 } 2701 2666 if ((pr_res_holder->pr_res_type == PR_TYPE_WRITE_EXCLUSIVE_ALLREG) || 2702 2667 (pr_res_holder->pr_res_type == PR_TYPE_EXCLUSIVE_ACCESS_ALLREG)) ··· 2710 2675 */ 2711 2676 spin_unlock(&dev->dev_reservation_lock); 2712 2677 core_scsi3_put_pr_reg(pr_reg); 2713 - return PYX_TRANSPORT_SENT_TO_TRANSPORT; 2678 + return 0; 2714 2679 } 2715 2680 /* 2716 
2681 * From spc4r17 Section 5.7.11.2 Releasing: ··· 2732 2697 " 0x%016Lx\n", res_key, pr_reg->pr_res_key); 2733 2698 spin_unlock(&dev->dev_reservation_lock); 2734 2699 core_scsi3_put_pr_reg(pr_reg); 2735 - return PYX_TRANSPORT_RESERVATION_CONFLICT; 2700 + cmd->scsi_sense_reason = TCM_RESERVATION_CONFLICT; 2701 + return -EINVAL; 2736 2702 } 2737 2703 /* 2738 2704 * From spc4r17 Section 5.7.11.2 Releasing and above: ··· 2755 2719 2756 2720 spin_unlock(&dev->dev_reservation_lock); 2757 2721 core_scsi3_put_pr_reg(pr_reg); 2758 - return PYX_TRANSPORT_RESERVATION_CONFLICT; 2722 + cmd->scsi_sense_reason = TCM_RESERVATION_CONFLICT; 2723 + return -EINVAL; 2759 2724 } 2760 2725 /* 2761 2726 * In response to a persistent reservation release request from the ··· 2839 2802 if (!pr_reg_n) { 2840 2803 pr_err("SPC-3 PR: Unable to locate" 2841 2804 " PR_REGISTERED *pr_reg for CLEAR\n"); 2842 - return PYX_TRANSPORT_LU_COMM_FAILURE; 2805 + cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 2806 + return -EINVAL; 2843 2807 } 2844 2808 /* 2845 2809 * From spc4r17 section 5.7.11.6, Clearing: ··· 2859 2821 " existing SA REGISTER res_key:" 2860 2822 " 0x%016Lx\n", res_key, pr_reg_n->pr_res_key); 2861 2823 core_scsi3_put_pr_reg(pr_reg_n); 2862 - return PYX_TRANSPORT_RESERVATION_CONFLICT; 2824 + cmd->scsi_sense_reason = TCM_RESERVATION_CONFLICT; 2825 + return -EINVAL; 2863 2826 } 2864 2827 /* 2865 2828 * a) Release the persistent reservation, if any; ··· 3018 2979 int all_reg = 0, calling_it_nexus = 0, released_regs = 0; 3019 2980 int prh_type = 0, prh_scope = 0, ret; 3020 2981 3021 - if (!se_sess) 3022 - return PYX_TRANSPORT_LU_COMM_FAILURE; 2982 + if (!se_sess) { 2983 + cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 2984 + return -EINVAL; 2985 + } 3023 2986 3024 2987 se_deve = &se_sess->se_node_acl->device_list[cmd->orig_fe_lun]; 3025 2988 pr_reg_n = core_scsi3_locate_pr_reg(cmd->se_dev, se_sess->se_node_acl, ··· 3030 2989 pr_err("SPC-3 PR: Unable to 
locate" 3031 2990 " PR_REGISTERED *pr_reg for PREEMPT%s\n", 3032 2991 (abort) ? "_AND_ABORT" : ""); 3033 - return PYX_TRANSPORT_RESERVATION_CONFLICT; 2992 + cmd->scsi_sense_reason = TCM_RESERVATION_CONFLICT; 2993 + return -EINVAL; 3034 2994 } 3035 2995 if (pr_reg_n->pr_res_key != res_key) { 3036 2996 core_scsi3_put_pr_reg(pr_reg_n); 3037 - return PYX_TRANSPORT_RESERVATION_CONFLICT; 2997 + cmd->scsi_sense_reason = TCM_RESERVATION_CONFLICT; 2998 + return -EINVAL; 3038 2999 } 3039 3000 if (scope != PR_SCOPE_LU_SCOPE) { 3040 3001 pr_err("SPC-3 PR: Illegal SCOPE: 0x%02x\n", scope); 3041 3002 core_scsi3_put_pr_reg(pr_reg_n); 3042 - return PYX_TRANSPORT_INVALID_PARAMETER_LIST; 3003 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 3004 + return -EINVAL; 3043 3005 } 3044 3006 INIT_LIST_HEAD(&preempt_and_abort_list); 3045 3007 ··· 3056 3012 if (!all_reg && !sa_res_key) { 3057 3013 spin_unlock(&dev->dev_reservation_lock); 3058 3014 core_scsi3_put_pr_reg(pr_reg_n); 3059 - return PYX_TRANSPORT_INVALID_PARAMETER_LIST; 3015 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 3016 + return -EINVAL; 3060 3017 } 3061 3018 /* 3062 3019 * From spc4r17, section 5.7.11.4.4 Removing Registrations: ··· 3151 3106 if (!released_regs) { 3152 3107 spin_unlock(&dev->dev_reservation_lock); 3153 3108 core_scsi3_put_pr_reg(pr_reg_n); 3154 - return PYX_TRANSPORT_RESERVATION_CONFLICT; 3109 + cmd->scsi_sense_reason = TCM_RESERVATION_CONFLICT; 3110 + return -EINVAL; 3155 3111 } 3156 3112 /* 3157 3113 * For an existing all registrants type reservation ··· 3343 3297 default: 3344 3298 pr_err("SPC-3 PR: Unknown Service Action PREEMPT%s" 3345 3299 " Type: 0x%02x\n", (abort) ? 
"_AND_ABORT" : "", type); 3346 - return PYX_TRANSPORT_INVALID_CDB_FIELD; 3300 + cmd->scsi_sense_reason = TCM_INVALID_CDB_FIELD; 3301 + return -EINVAL; 3347 3302 } 3348 3303 3349 3304 return ret; ··· 3378 3331 3379 3332 if (!se_sess || !se_lun) { 3380 3333 pr_err("SPC-3 PR: se_sess || struct se_lun is NULL!\n"); 3381 - return PYX_TRANSPORT_LU_COMM_FAILURE; 3334 + cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 3335 + return -EINVAL; 3382 3336 } 3383 3337 memset(dest_iport, 0, 64); 3384 3338 memset(i_buf, 0, PR_REG_ISID_ID_LEN); ··· 3397 3349 if (!pr_reg) { 3398 3350 pr_err("SPC-3 PR: Unable to locate PR_REGISTERED" 3399 3351 " *pr_reg for REGISTER_AND_MOVE\n"); 3400 - return PYX_TRANSPORT_LU_COMM_FAILURE; 3352 + cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 3353 + return -EINVAL; 3401 3354 } 3402 3355 /* 3403 3356 * The provided reservation key much match the existing reservation key ··· 3409 3360 " res_key: 0x%016Lx does not match existing SA REGISTER" 3410 3361 " res_key: 0x%016Lx\n", res_key, pr_reg->pr_res_key); 3411 3362 core_scsi3_put_pr_reg(pr_reg); 3412 - return PYX_TRANSPORT_RESERVATION_CONFLICT; 3363 + cmd->scsi_sense_reason = TCM_RESERVATION_CONFLICT; 3364 + return -EINVAL; 3413 3365 } 3414 3366 /* 3415 3367 * The service active reservation key needs to be non zero ··· 3419 3369 pr_warn("SPC-3 PR REGISTER_AND_MOVE: Received zero" 3420 3370 " sa_res_key\n"); 3421 3371 core_scsi3_put_pr_reg(pr_reg); 3422 - return PYX_TRANSPORT_INVALID_PARAMETER_LIST; 3372 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 3373 + return -EINVAL; 3423 3374 } 3424 3375 3425 3376 /* ··· 3443 3392 " does not equal CDB data_length: %u\n", tid_len, 3444 3393 cmd->data_length); 3445 3394 core_scsi3_put_pr_reg(pr_reg); 3446 - return PYX_TRANSPORT_INVALID_PARAMETER_LIST; 3395 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 3396 + return -EINVAL; 3447 3397 } 3448 3398 3449 3399 spin_lock(&dev->se_port_lock); ··· 3469 3417 
atomic_dec(&dest_se_tpg->tpg_pr_ref_count); 3470 3418 smp_mb__after_atomic_dec(); 3471 3419 core_scsi3_put_pr_reg(pr_reg); 3472 - return PYX_TRANSPORT_LU_COMM_FAILURE; 3420 + cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 3421 + return -EINVAL; 3473 3422 } 3474 3423 3475 3424 spin_lock(&dev->se_port_lock); ··· 3483 3430 " fabric ops from Relative Target Port Identifier:" 3484 3431 " %hu\n", rtpi); 3485 3432 core_scsi3_put_pr_reg(pr_reg); 3486 - return PYX_TRANSPORT_INVALID_PARAMETER_LIST; 3433 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 3434 + return -EINVAL; 3487 3435 } 3488 3436 3489 3437 buf = transport_kmap_first_data_page(cmd); ··· 3499 3445 " from fabric: %s\n", proto_ident, 3500 3446 dest_tf_ops->get_fabric_proto_ident(dest_se_tpg), 3501 3447 dest_tf_ops->get_fabric_name()); 3502 - ret = PYX_TRANSPORT_INVALID_PARAMETER_LIST; 3448 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 3449 + ret = -EINVAL; 3503 3450 goto out; 3504 3451 } 3505 3452 if (dest_tf_ops->tpg_parse_pr_out_transport_id == NULL) { 3506 3453 pr_err("SPC-3 PR REGISTER_AND_MOVE: Fabric does not" 3507 3454 " containg a valid tpg_parse_pr_out_transport_id" 3508 3455 " function pointer\n"); 3509 - ret = PYX_TRANSPORT_LU_COMM_FAILURE; 3456 + cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 3457 + ret = -EINVAL; 3510 3458 goto out; 3511 3459 } 3512 3460 initiator_str = dest_tf_ops->tpg_parse_pr_out_transport_id(dest_se_tpg, ··· 3516 3460 if (!initiator_str) { 3517 3461 pr_err("SPC-3 PR REGISTER_AND_MOVE: Unable to locate" 3518 3462 " initiator_str from Transport ID\n"); 3519 - ret = PYX_TRANSPORT_INVALID_PARAMETER_LIST; 3463 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 3464 + ret = -EINVAL; 3520 3465 goto out; 3521 3466 } 3522 3467 ··· 3546 3489 pr_err("SPC-3 PR REGISTER_AND_MOVE: TransportID: %s" 3547 3490 " matches: %s on received I_T Nexus\n", initiator_str, 3548 3491 pr_reg_nacl->initiatorname); 3549 - ret = 
PYX_TRANSPORT_INVALID_PARAMETER_LIST; 3492 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 3493 + ret = -EINVAL; 3550 3494 goto out; 3551 3495 } 3552 3496 if (!strcmp(iport_ptr, pr_reg->pr_reg_isid)) { ··· 3555 3497 " matches: %s %s on received I_T Nexus\n", 3556 3498 initiator_str, iport_ptr, pr_reg_nacl->initiatorname, 3557 3499 pr_reg->pr_reg_isid); 3558 - ret = PYX_TRANSPORT_INVALID_PARAMETER_LIST; 3500 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 3501 + ret = -EINVAL; 3559 3502 goto out; 3560 3503 } 3561 3504 after_iport_check: ··· 3576 3517 pr_err("Unable to locate %s dest_node_acl for" 3577 3518 " TransportID%s\n", dest_tf_ops->get_fabric_name(), 3578 3519 initiator_str); 3579 - ret = PYX_TRANSPORT_INVALID_PARAMETER_LIST; 3520 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 3521 + ret = -EINVAL; 3580 3522 goto out; 3581 3523 } 3582 3524 ret = core_scsi3_nodeacl_depend_item(dest_node_acl); ··· 3587 3527 atomic_dec(&dest_node_acl->acl_pr_ref_count); 3588 3528 smp_mb__after_atomic_dec(); 3589 3529 dest_node_acl = NULL; 3590 - ret = PYX_TRANSPORT_LU_COMM_FAILURE; 3530 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 3531 + ret = -EINVAL; 3591 3532 goto out; 3592 3533 } 3593 3534 #if 0 ··· 3604 3543 if (!dest_se_deve) { 3605 3544 pr_err("Unable to locate %s dest_se_deve from RTPI:" 3606 3545 " %hu\n", dest_tf_ops->get_fabric_name(), rtpi); 3607 - ret = PYX_TRANSPORT_INVALID_PARAMETER_LIST; 3546 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 3547 + ret = -EINVAL; 3608 3548 goto out; 3609 3549 } 3610 3550 ··· 3615 3553 atomic_dec(&dest_se_deve->pr_ref_count); 3616 3554 smp_mb__after_atomic_dec(); 3617 3555 dest_se_deve = NULL; 3618 - ret = PYX_TRANSPORT_LU_COMM_FAILURE; 3556 + cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 3557 + ret = -EINVAL; 3619 3558 goto out; 3620 3559 } 3621 3560 #if 0 ··· 3635 3572 pr_warn("SPC-3 PR REGISTER_AND_MOVE: No reservation" 3636 3573 " currently held\n"); 3637 3574 
spin_unlock(&dev->dev_reservation_lock); 3638 - ret = PYX_TRANSPORT_INVALID_CDB_FIELD; 3575 + cmd->scsi_sense_reason = TCM_INVALID_CDB_FIELD; 3576 + ret = -EINVAL; 3639 3577 goto out; 3640 3578 } 3641 3579 /* ··· 3649 3585 pr_warn("SPC-3 PR REGISTER_AND_MOVE: Calling I_T" 3650 3586 " Nexus is not reservation holder\n"); 3651 3587 spin_unlock(&dev->dev_reservation_lock); 3652 - ret = PYX_TRANSPORT_RESERVATION_CONFLICT; 3588 + cmd->scsi_sense_reason = TCM_RESERVATION_CONFLICT; 3589 + ret = -EINVAL; 3653 3590 goto out; 3654 3591 } 3655 3592 /* ··· 3668 3603 " reservation for type: %s\n", 3669 3604 core_scsi3_pr_dump_type(pr_res_holder->pr_res_type)); 3670 3605 spin_unlock(&dev->dev_reservation_lock); 3671 - ret = PYX_TRANSPORT_RESERVATION_CONFLICT; 3606 + cmd->scsi_sense_reason = TCM_RESERVATION_CONFLICT; 3607 + ret = -EINVAL; 3672 3608 goto out; 3673 3609 } 3674 3610 pr_res_nacl = pr_res_holder->pr_reg_nacl; ··· 3706 3640 sa_res_key, 0, aptpl, 2, 1); 3707 3641 if (ret != 0) { 3708 3642 spin_unlock(&dev->dev_reservation_lock); 3709 - ret = PYX_TRANSPORT_INVALID_PARAMETER_LIST; 3643 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 3644 + ret = -EINVAL; 3710 3645 goto out; 3711 3646 } 3712 3647 dest_pr_reg = __core_scsi3_locate_pr_reg(dev, dest_node_acl, ··· 3838 3771 pr_err("Received PERSISTENT_RESERVE CDB while legacy" 3839 3772 " SPC-2 reservation is held, returning" 3840 3773 " RESERVATION_CONFLICT\n"); 3841 - ret = PYX_TRANSPORT_RESERVATION_CONFLICT; 3774 + cmd->scsi_sense_reason = TCM_RESERVATION_CONFLICT; 3775 + ret = -EINVAL; 3842 3776 goto out; 3843 3777 } 3844 3778 ··· 3847 3779 * FIXME: A NULL struct se_session pointer means an this is not coming from 3848 3780 * a $FABRIC_MOD's nexus, but from internal passthrough ops.
3849 3781 */ 3850 - if (!cmd->se_sess) 3851 - return PYX_TRANSPORT_LU_COMM_FAILURE; 3782 + if (!cmd->se_sess) { 3783 + cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 3784 + return -EINVAL; 3785 + } 3852 3786 3853 3787 if (cmd->data_length < 24) { 3854 3788 pr_warn("SPC-PR: Received PR OUT parameter list" 3855 3789 " length too small: %u\n", cmd->data_length); 3856 - ret = PYX_TRANSPORT_INVALID_PARAMETER_LIST; 3790 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 3791 + ret = -EINVAL; 3857 3792 goto out; 3858 3793 } 3859 3794 /* ··· 3891 3820 * SPEC_I_PT=1 is only valid for Service action: REGISTER 3892 3821 */ 3893 3822 if (spec_i_pt && ((cdb[1] & 0x1f) != PRO_REGISTER)) { 3894 - ret = PYX_TRANSPORT_INVALID_PARAMETER_LIST; 3823 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 3824 + ret = -EINVAL; 3895 3825 goto out; 3896 3826 } 3897 3827 ··· 3909 3837 (cmd->data_length != 24)) { 3910 3838 pr_warn("SPC-PR: Received PR OUT illegal parameter" 3911 3839 " list length: %u\n", cmd->data_length); 3912 - ret = PYX_TRANSPORT_INVALID_PARAMETER_LIST; 3840 + cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 3841 + ret = -EINVAL; 3913 3842 goto out; 3914 3843 } 3915 3844 /* ··· 3951 3878 default: 3952 3879 pr_err("Unknown PERSISTENT_RESERVE_OUT service" 3953 3880 " action: 0x%02x\n", cdb[1] & 0x1f); 3954 - ret = PYX_TRANSPORT_INVALID_CDB_FIELD; 3881 + cmd->scsi_sense_reason = TCM_INVALID_CDB_FIELD; 3882 + ret = -EINVAL; 3955 3883 break; 3956 3884 } 3957 3885 ··· 3980 3906 if (cmd->data_length < 8) { 3981 3907 pr_err("PRIN SA READ_KEYS SCSI Data Length: %u" 3982 3908 " too small\n", cmd->data_length); 3983 - return PYX_TRANSPORT_INVALID_CDB_FIELD; 3909 + cmd->scsi_sense_reason = TCM_INVALID_CDB_FIELD; 3910 + return -EINVAL; 3984 3911 } 3985 3912 3986 3913 buf = transport_kmap_first_data_page(cmd); ··· 4040 3965 if (cmd->data_length < 8) { 4041 3966 pr_err("PRIN SA READ_RESERVATIONS SCSI Data Length: %u" 4042 3967 " too small\n", 
cmd->data_length); 4043 - return PYX_TRANSPORT_INVALID_CDB_FIELD; 3968 + cmd->scsi_sense_reason = TCM_INVALID_CDB_FIELD; 3969 + return -EINVAL; 4044 3970 } 4045 3971 4046 3972 buf = transport_kmap_first_data_page(cmd); ··· 4123 4047 if (cmd->data_length < 6) { 4124 4048 pr_err("PRIN SA REPORT_CAPABILITIES SCSI Data Length:" 4125 4049 " %u too small\n", cmd->data_length); 4126 - return PYX_TRANSPORT_INVALID_CDB_FIELD; 4050 + cmd->scsi_sense_reason = TCM_INVALID_CDB_FIELD; 4051 + return -EINVAL; 4127 4052 } 4128 4053 4129 4054 buf = transport_kmap_first_data_page(cmd); ··· 4185 4108 if (cmd->data_length < 8) { 4186 4109 pr_err("PRIN SA READ_FULL_STATUS SCSI Data Length: %u" 4187 4110 " too small\n", cmd->data_length); 4188 - return PYX_TRANSPORT_INVALID_CDB_FIELD; 4111 + cmd->scsi_sense_reason = TCM_INVALID_CDB_FIELD; 4112 + return -EINVAL; 4189 4113 } 4190 4114 4191 4115 buf = transport_kmap_first_data_page(cmd); ··· 4333 4255 pr_err("Received PERSISTENT_RESERVE CDB while legacy" 4334 4256 " SPC-2 reservation is held, returning" 4335 4257 " RESERVATION_CONFLICT\n"); 4336 - return PYX_TRANSPORT_RESERVATION_CONFLICT; 4258 + cmd->scsi_sense_reason = TCM_RESERVATION_CONFLICT; 4259 + return -EINVAL; 4337 4260 } 4338 4261 4339 4262 switch (cmd->t_task_cdb[1] & 0x1f) { ··· 4353 4274 default: 4354 4275 pr_err("Unknown PERSISTENT_RESERVE_IN service" 4355 4276 " action: 0x%02x\n", cmd->t_task_cdb[1] & 0x1f); 4356 - ret = PYX_TRANSPORT_INVALID_CDB_FIELD; 4277 + cmd->scsi_sense_reason = TCM_INVALID_CDB_FIELD; 4278 + ret = -EINVAL; 4357 4279 break; 4358 4280 } 4359 4281
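The target_core_pr.c hunks above all follow one pattern: the legacy positive `PYX_TRANSPORT_*` return codes are replaced by recording an explicit sense reason on the command and returning a plain negative errno. A minimal userspace sketch of that convention follows; `fake_cmd`, `check_scope`, and the enum values are illustrative stand-ins, not the kernel's actual types.

```c
#include <assert.h>
#include <errno.h>

/* Stand-ins for the kernel's TCM_* sense-reason codes (values illustrative). */
enum tcm_sense_reason {
	TCM_NO_SENSE = 0,
	TCM_INVALID_PARAMETER_LIST,
	TCM_RESERVATION_CONFLICT,
};

struct fake_cmd {
	enum tcm_sense_reason scsi_sense_reason;
};

/*
 * New-style failure path: record *why* the command failed on the command
 * itself, and signal *that* it failed with a plain negative errno.  The
 * old PYX_TRANSPORT_* scheme packed both facts into one positive magic
 * number that callers had to decode.
 */
static int check_scope(struct fake_cmd *cmd, int scope)
{
	if (scope != 0x01 /* stand-in for PR_SCOPE_LU_SCOPE */) {
		cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST;
		return -EINVAL;
	}
	return 0;
}
```

With this split, callers use the idiomatic `if (ret < 0)` test, and the completion path reads `cmd->scsi_sense_reason` once to build the CHECK CONDITION payload.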
+18 -10
drivers/target/target_core_pscsi.c
··· 963 963 static int pscsi_map_sg(struct se_task *task, struct scatterlist *task_sg, 964 964 struct bio **hbio) 965 965 { 966 + struct se_cmd *cmd = task->task_se_cmd; 966 967 struct pscsi_dev_virt *pdv = task->task_se_cmd->se_dev->dev_ptr; 967 968 u32 task_sg_num = task->task_sg_nents; 968 969 struct bio *bio = NULL, *tbio = NULL; ··· 972 971 u32 data_len = task->task_size, i, len, bytes, off; 973 972 int nr_pages = (task->task_size + task_sg[0].offset + 974 973 PAGE_SIZE - 1) >> PAGE_SHIFT; 975 - int nr_vecs = 0, rc, ret = PYX_TRANSPORT_OUT_OF_MEMORY_RESOURCES; 974 + int nr_vecs = 0, rc; 976 975 int rw = (task->task_data_direction == DMA_TO_DEVICE); 977 976 978 977 *hbio = NULL; ··· 1059 1058 bio->bi_next = NULL; 1060 1059 bio_endio(bio, 0); /* XXX: should be error */ 1061 1060 } 1062 - return ret; 1061 + cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 1062 + return -ENOMEM; 1063 1063 } 1064 1064 1065 1065 static int pscsi_do_task(struct se_task *task) 1066 1066 { 1067 + struct se_cmd *cmd = task->task_se_cmd; 1067 1068 struct pscsi_dev_virt *pdv = task->task_se_cmd->se_dev->dev_ptr; 1068 1069 struct pscsi_plugin_task *pt = PSCSI_TASK(task); 1069 1070 struct request *req; ··· 1081 1078 if (!req || IS_ERR(req)) { 1082 1079 pr_err("PSCSI: blk_get_request() failed: %ld\n", 1083 1080 req ? 
IS_ERR(req) : -ENOMEM); 1084 - return PYX_TRANSPORT_LU_COMM_FAILURE; 1081 + cmd->scsi_sense_reason = 1082 + TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 1083 + return -ENODEV; 1085 1084 } 1086 1085 } else { 1087 1086 BUG_ON(!task->task_size); ··· 1092 1087 * Setup the main struct request for the task->task_sg[] payload 1093 1088 */ 1094 1089 ret = pscsi_map_sg(task, task->task_sg, &hbio); 1095 - if (ret < 0) 1096 - return PYX_TRANSPORT_LU_COMM_FAILURE; 1090 + if (ret < 0) { 1091 + cmd->scsi_sense_reason = 1092 + TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 1093 + return ret; 1094 + } 1097 1095 1098 1096 req = blk_make_request(pdv->pdv_sd->request_queue, hbio, 1099 1097 GFP_KERNEL); ··· 1123 1115 (task->task_se_cmd->sam_task_attr == MSG_HEAD_TAG), 1124 1116 pscsi_req_done); 1125 1117 1126 - return PYX_TRANSPORT_SENT_TO_TRANSPORT; 1118 + return 0; 1127 1119 1128 1120 fail: 1129 1121 while (hbio) { ··· 1132 1124 bio->bi_next = NULL; 1133 1125 bio_endio(bio, 0); /* XXX: should be error */ 1134 1126 } 1135 - return PYX_TRANSPORT_OUT_OF_MEMORY_RESOURCES; 1127 + cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 1128 + return -ENOMEM; 1136 1129 } 1137 1130 1138 1131 /* pscsi_get_sense_buffer(): ··· 1207 1198 " 0x%02x Result: 0x%08x\n", task, pt->pscsi_cdb[0], 1208 1199 pt->pscsi_result); 1209 1200 task->task_scsi_status = SAM_STAT_CHECK_CONDITION; 1210 - task->task_error_status = PYX_TRANSPORT_UNKNOWN_SAM_OPCODE; 1211 - task->task_se_cmd->transport_error_status = 1212 - PYX_TRANSPORT_UNKNOWN_SAM_OPCODE; 1201 + task->task_se_cmd->scsi_sense_reason = 1202 + TCM_UNSUPPORTED_SCSI_OPCODE; 1213 1203 transport_complete_task(task, 0); 1214 1204 break; 1215 1205 }
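In the pscsi hunks above, `pscsi_do_task()` now returns 0 on success instead of the positive `PYX_TRANSPORT_SENT_TO_TRANSPORT` sentinel. The sketch below (hypothetical names and values) shows why the old sentinel was awkward: success was nonzero, so a caller's natural `if (ret)` check could not distinguish success from failure.

```c
#include <assert.h>
#include <errno.h>

/* Old positive "success" sentinel (value illustrative). */
#define OLD_SENT_TO_TRANSPORT 1

/* Old convention: success is a *positive* magic number, failure is a
 * different positive magic number -- "if (ret)" tells you nothing. */
static int old_do_task(int fail)
{
	return fail ? 14 /* some PYX_TRANSPORT_* code */ : OLD_SENT_TO_TRANSPORT;
}

/* New convention: 0 on success, negative errno on failure, so
 * "if (ret)" and "if (ret < 0)" are equivalent failure checks. */
static int new_do_task(int fail)
{
	return fail ? -ENOMEM : 0;
}
```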
+45 -213
drivers/target/target_core_rd.c
··· 343 343 return NULL; 344 344 } 345 345 346 - /* rd_MEMCPY_read(): 347 - * 348 - * 349 - */ 350 - static int rd_MEMCPY_read(struct rd_request *req) 346 + static int rd_MEMCPY(struct rd_request *req, u32 read_rd) 351 347 { 352 348 struct se_task *task = &req->rd_task; 353 349 struct rd_dev *dev = req->rd_task.task_se_cmd->se_dev->dev_ptr; 354 350 struct rd_dev_sg_table *table; 355 - struct scatterlist *sg_d, *sg_s; 356 - void *dst, *src; 357 - u32 i = 0, j = 0, dst_offset = 0, src_offset = 0; 358 - u32 length, page_end = 0, table_sg_end; 351 + struct scatterlist *rd_sg; 352 + struct sg_mapping_iter m; 359 353 u32 rd_offset = req->rd_offset; 354 + u32 src_len; 360 355 361 356 table = rd_get_sg_table(dev, req->rd_page); 362 357 if (!table) 363 358 return -EINVAL; 364 359 365 - table_sg_end = (table->page_end_offset - req->rd_page); 366 - sg_d = task->task_sg; 367 - sg_s = &table->sg_table[req->rd_page - table->page_start_offset]; 360 + rd_sg = &table->sg_table[req->rd_page - table->page_start_offset]; 368 361 369 - pr_debug("RD[%u]: Read LBA: %llu, Size: %u Page: %u, Offset:" 370 - " %u\n", dev->rd_dev_id, task->task_lba, req->rd_size, 371 - req->rd_page, req->rd_offset); 362 + pr_debug("RD[%u]: %s LBA: %llu, Size: %u Page: %u, Offset: %u\n", 363 + dev->rd_dev_id, read_rd ? "Read" : "Write", 364 + task->task_lba, req->rd_size, req->rd_page, 365 + rd_offset); 372 366 373 - src_offset = rd_offset; 374 - 367 + src_len = PAGE_SIZE - rd_offset; 368 + sg_miter_start(&m, task->task_sg, task->task_sg_nents, 369 + read_rd ? 
SG_MITER_TO_SG : SG_MITER_FROM_SG); 375 370 while (req->rd_size) { 376 - if ((sg_d[i].length - dst_offset) < 377 - (sg_s[j].length - src_offset)) { 378 - length = (sg_d[i].length - dst_offset); 371 + u32 len; 372 + void *rd_addr; 379 373 380 - pr_debug("Step 1 - sg_d[%d]: %p length: %d" 381 - " offset: %u sg_s[%d].length: %u\n", i, 382 - &sg_d[i], sg_d[i].length, sg_d[i].offset, j, 383 - sg_s[j].length); 384 - pr_debug("Step 1 - length: %u dst_offset: %u" 385 - " src_offset: %u\n", length, dst_offset, 386 - src_offset); 374 + sg_miter_next(&m); 375 + len = min((u32)m.length, src_len); 376 + m.consumed = len; 387 377 388 - if (length > req->rd_size) 389 - length = req->rd_size; 378 + rd_addr = sg_virt(rd_sg) + rd_offset; 390 379 391 - dst = sg_virt(&sg_d[i++]) + dst_offset; 392 - BUG_ON(!dst); 380 + if (read_rd) 381 + memcpy(m.addr, rd_addr, len); 382 + else 383 + memcpy(rd_addr, m.addr, len); 393 384 394 - src = sg_virt(&sg_s[j]) + src_offset; 395 - BUG_ON(!src); 396 - 397 - dst_offset = 0; 398 - src_offset = length; 399 - page_end = 0; 400 - } else { 401 - length = (sg_s[j].length - src_offset); 402 - 403 - pr_debug("Step 2 - sg_d[%d]: %p length: %d" 404 - " offset: %u sg_s[%d].length: %u\n", i, 405 - &sg_d[i], sg_d[i].length, sg_d[i].offset, 406 - j, sg_s[j].length); 407 - pr_debug("Step 2 - length: %u dst_offset: %u" 408 - " src_offset: %u\n", length, dst_offset, 409 - src_offset); 410 - 411 - if (length > req->rd_size) 412 - length = req->rd_size; 413 - 414 - dst = sg_virt(&sg_d[i]) + dst_offset; 415 - BUG_ON(!dst); 416 - 417 - if (sg_d[i].length == length) { 418 - i++; 419 - dst_offset = 0; 420 - } else 421 - dst_offset = length; 422 - 423 - src = sg_virt(&sg_s[j++]) + src_offset; 424 - BUG_ON(!src); 425 - 426 - src_offset = 0; 427 - page_end = 1; 428 - } 429 - 430 - memcpy(dst, src, length); 431 - 432 - pr_debug("page: %u, remaining size: %u, length: %u," 433 - " i: %u, j: %u\n", req->rd_page, 434 - (req->rd_size - length), length, i, j); 435 - 436 - 
req->rd_size -= length; 385 + req->rd_size -= len; 437 386 if (!req->rd_size) 438 - return 0; 439 - 440 - if (!page_end) 441 387 continue; 442 388 443 - if (++req->rd_page <= table->page_end_offset) { 444 - pr_debug("page: %u in same page table\n", 445 - req->rd_page); 389 + src_len -= len; 390 + if (src_len) { 391 + rd_offset += len; 446 392 continue; 447 393 } 448 394 449 - pr_debug("getting new page table for page: %u\n", 450 - req->rd_page); 395 + /* rd page completed, next one please */ 396 + req->rd_page++; 397 + rd_offset = 0; 398 + src_len = PAGE_SIZE; 399 + if (req->rd_page <= table->page_end_offset) { 400 + rd_sg++; 401 + continue; 402 + } 451 403 452 404 table = rd_get_sg_table(dev, req->rd_page); 453 - if (!table) 405 + if (!table) { 406 + sg_miter_stop(&m); 454 407 return -EINVAL; 455 - 456 - sg_s = &table->sg_table[j = 0]; 457 - } 458 - 459 - return 0; 460 - } 461 - 462 - /* rd_MEMCPY_write(): 463 - * 464 - * 465 - */ 466 - static int rd_MEMCPY_write(struct rd_request *req) 467 - { 468 - struct se_task *task = &req->rd_task; 469 - struct rd_dev *dev = req->rd_task.task_se_cmd->se_dev->dev_ptr; 470 - struct rd_dev_sg_table *table; 471 - struct scatterlist *sg_d, *sg_s; 472 - void *dst, *src; 473 - u32 i = 0, j = 0, dst_offset = 0, src_offset = 0; 474 - u32 length, page_end = 0, table_sg_end; 475 - u32 rd_offset = req->rd_offset; 476 - 477 - table = rd_get_sg_table(dev, req->rd_page); 478 - if (!table) 479 - return -EINVAL; 480 - 481 - table_sg_end = (table->page_end_offset - req->rd_page); 482 - sg_d = &table->sg_table[req->rd_page - table->page_start_offset]; 483 - sg_s = task->task_sg; 484 - 485 - pr_debug("RD[%d] Write LBA: %llu, Size: %u, Page: %u," 486 - " Offset: %u\n", dev->rd_dev_id, task->task_lba, req->rd_size, 487 - req->rd_page, req->rd_offset); 488 - 489 - dst_offset = rd_offset; 490 - 491 - while (req->rd_size) { 492 - if ((sg_s[i].length - src_offset) < 493 - (sg_d[j].length - dst_offset)) { 494 - length = (sg_s[i].length - src_offset); 
495 - 496 - pr_debug("Step 1 - sg_s[%d]: %p length: %d" 497 - " offset: %d sg_d[%d].length: %u\n", i, 498 - &sg_s[i], sg_s[i].length, sg_s[i].offset, 499 - j, sg_d[j].length); 500 - pr_debug("Step 1 - length: %u src_offset: %u" 501 - " dst_offset: %u\n", length, src_offset, 502 - dst_offset); 503 - 504 - if (length > req->rd_size) 505 - length = req->rd_size; 506 - 507 - src = sg_virt(&sg_s[i++]) + src_offset; 508 - BUG_ON(!src); 509 - 510 - dst = sg_virt(&sg_d[j]) + dst_offset; 511 - BUG_ON(!dst); 512 - 513 - src_offset = 0; 514 - dst_offset = length; 515 - page_end = 0; 516 - } else { 517 - length = (sg_d[j].length - dst_offset); 518 - 519 - pr_debug("Step 2 - sg_s[%d]: %p length: %d" 520 - " offset: %d sg_d[%d].length: %u\n", i, 521 - &sg_s[i], sg_s[i].length, sg_s[i].offset, 522 - j, sg_d[j].length); 523 - pr_debug("Step 2 - length: %u src_offset: %u" 524 - " dst_offset: %u\n", length, src_offset, 525 - dst_offset); 526 - 527 - if (length > req->rd_size) 528 - length = req->rd_size; 529 - 530 - src = sg_virt(&sg_s[i]) + src_offset; 531 - BUG_ON(!src); 532 - 533 - if (sg_s[i].length == length) { 534 - i++; 535 - src_offset = 0; 536 - } else 537 - src_offset = length; 538 - 539 - dst = sg_virt(&sg_d[j++]) + dst_offset; 540 - BUG_ON(!dst); 541 - 542 - dst_offset = 0; 543 - page_end = 1; 544 408 } 545 409 546 - memcpy(dst, src, length); 547 - 548 - pr_debug("page: %u, remaining size: %u, length: %u," 549 - " i: %u, j: %u\n", req->rd_page, 550 - (req->rd_size - length), length, i, j); 551 - 552 - req->rd_size -= length; 553 - if (!req->rd_size) 554 - return 0; 555 - 556 - if (!page_end) 557 - continue; 558 - 559 - if (++req->rd_page <= table->page_end_offset) { 560 - pr_debug("page: %u in same page table\n", 561 - req->rd_page); 562 - continue; 563 - } 564 - 565 - pr_debug("getting new page table for page: %u\n", 566 - req->rd_page); 567 - 568 - table = rd_get_sg_table(dev, req->rd_page); 569 - if (!table) 570 - return -EINVAL; 571 - 572 - sg_d = &table->sg_table[j 
= 0]; 410 + /* since we increment, the first sg entry is correct */ 411 + rd_sg = table->sg_table; 573 412 } 574 - 413 + sg_miter_stop(&m); 575 414 return 0; 576 415 } 577 416 ··· 422 583 { 423 584 struct se_device *dev = task->task_se_cmd->se_dev; 424 585 struct rd_request *req = RD_REQ(task); 425 - unsigned long long lba; 586 + u64 tmp; 426 587 int ret; 427 588 428 - req->rd_page = (task->task_lba * dev->se_sub_dev->se_dev_attrib.block_size) / PAGE_SIZE; 429 - lba = task->task_lba; 430 - req->rd_offset = (do_div(lba, 431 - (PAGE_SIZE / dev->se_sub_dev->se_dev_attrib.block_size))) * 432 - dev->se_sub_dev->se_dev_attrib.block_size; 589 + tmp = task->task_lba * dev->se_sub_dev->se_dev_attrib.block_size; 590 + req->rd_offset = do_div(tmp, PAGE_SIZE); 591 + req->rd_page = tmp; 433 592 req->rd_size = task->task_size; 434 593 435 - if (task->task_data_direction == DMA_FROM_DEVICE) 436 - ret = rd_MEMCPY_read(req); 437 - else 438 - ret = rd_MEMCPY_write(req); 439 - 594 + ret = rd_MEMCPY(req, task->task_data_direction == DMA_FROM_DEVICE); 440 595 if (ret != 0) 441 596 return ret; 442 597 443 598 task->task_scsi_status = GOOD; 444 599 transport_complete_task(task, 1); 445 - 446 - return PYX_TRANSPORT_SENT_TO_TRANSPORT; 600 + return 0; 447 601 } 448 602 449 603 /* rd_free_task(): (Part of se_subsystem_api_t template)
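The `rd_execute_rw()` hunk above replaces two separate divisions with a single byte-offset computation: multiply the LBA by the block size once, then split the product into a ramdisk page index (quotient) and an offset within that page (remainder) via one `do_div()`. A userspace sketch of the equivalent arithmetic, using plain `/` and `%` in place of the kernel's `do_div()` (names and the fixed page size are stand-ins):

```c
#include <assert.h>
#include <stdint.h>

#define RD_PAGE_SIZE 4096u	/* stand-in for the kernel's PAGE_SIZE */

struct rd_pos {
	uint32_t page;		/* index into the ramdisk's sg tables */
	uint32_t offset;	/* byte offset within that page */
};

/*
 * Byte position of an LBA, split into page index and in-page offset --
 * the same result the tmp/do_div() form in the patch yields.
 */
static struct rd_pos rd_locate(uint64_t lba, uint32_t block_size)
{
	uint64_t tmp = lba * block_size;
	struct rd_pos pos;

	pos.offset = (uint32_t)(tmp % RD_PAGE_SIZE);	/* do_div remainder */
	pos.page = (uint32_t)(tmp / RD_PAGE_SIZE);	/* do_div quotient */
	return pos;
}
```

For example, with 512-byte blocks, LBA 9 starts at byte 4608, which is page 1 at offset 512.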
-4
drivers/target/target_core_tmr.c
··· 345 345 " %d t_fe_count: %d\n", (preempt_and_abort_list) ? 346 346 "Preempt" : "", cmd, cmd->t_state, 347 347 atomic_read(&cmd->t_fe_count)); 348 - /* 349 - * Signal that the command has failed via cmd->se_cmd_flags, 350 - */ 351 - transport_new_cmd_failure(cmd); 352 348 353 349 core_tmr_handle_tas_abort(tmr_nacl, cmd, tas, 354 350 atomic_read(&cmd->t_fe_count));
+91 -169
drivers/target/target_core_transport.c
··· 61 61 static int sub_api_initialized; 62 62 63 63 static struct workqueue_struct *target_completion_wq; 64 - static struct kmem_cache *se_cmd_cache; 65 64 static struct kmem_cache *se_sess_cache; 66 65 struct kmem_cache *se_tmr_req_cache; 67 66 struct kmem_cache *se_ua_cache; ··· 81 82 static void transport_put_cmd(struct se_cmd *cmd); 82 83 static void transport_remove_cmd_from_queue(struct se_cmd *cmd); 83 84 static int transport_set_sense_codes(struct se_cmd *cmd, u8 asc, u8 ascq); 84 - static void transport_generic_request_failure(struct se_cmd *, int, int); 85 + static void transport_generic_request_failure(struct se_cmd *); 85 86 static void target_complete_ok_work(struct work_struct *work); 86 87 87 88 int init_se_kmem_caches(void) 88 89 { 89 - se_cmd_cache = kmem_cache_create("se_cmd_cache", 90 - sizeof(struct se_cmd), __alignof__(struct se_cmd), 0, NULL); 91 - if (!se_cmd_cache) { 92 - pr_err("kmem_cache_create for struct se_cmd failed\n"); 93 - goto out; 94 - } 95 90 se_tmr_req_cache = kmem_cache_create("se_tmr_cache", 96 91 sizeof(struct se_tmr_req), __alignof__(struct se_tmr_req), 97 92 0, NULL); 98 93 if (!se_tmr_req_cache) { 99 94 pr_err("kmem_cache_create() for struct se_tmr_req" 100 95 " failed\n"); 101 - goto out_free_cmd_cache; 96 + goto out; 102 97 } 103 98 se_sess_cache = kmem_cache_create("se_sess_cache", 104 99 sizeof(struct se_session), __alignof__(struct se_session), ··· 175 182 kmem_cache_destroy(se_sess_cache); 176 183 out_free_tmr_req_cache: 177 184 kmem_cache_destroy(se_tmr_req_cache); 178 - out_free_cmd_cache: 179 - kmem_cache_destroy(se_cmd_cache); 180 185 out: 181 186 return -ENOMEM; 182 187 } ··· 182 191 void release_se_kmem_caches(void) 183 192 { 184 193 destroy_workqueue(target_completion_wq); 185 - kmem_cache_destroy(se_cmd_cache); 186 194 kmem_cache_destroy(se_tmr_req_cache); 187 195 kmem_cache_destroy(se_sess_cache); 188 196 kmem_cache_destroy(se_ua_cache); ··· 670 680 task->task_scsi_status = GOOD; 671 681 } else { 672 682 
task->task_scsi_status = SAM_STAT_CHECK_CONDITION; 673 - task->task_error_status = PYX_TRANSPORT_ILLEGAL_REQUEST; 674 - task->task_se_cmd->transport_error_status = 675 - PYX_TRANSPORT_ILLEGAL_REQUEST; 683 + task->task_se_cmd->scsi_sense_reason = 684 + TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 685 + 676 686 } 677 687 678 688 transport_complete_task(task, good); ··· 683 693 { 684 694 struct se_cmd *cmd = container_of(work, struct se_cmd, work); 685 695 686 - transport_generic_request_failure(cmd, 1, 1); 696 + transport_generic_request_failure(cmd); 687 697 } 688 698 689 699 /* transport_complete_task(): ··· 745 755 if (cmd->t_tasks_failed) { 746 756 if (!task->task_error_status) { 747 757 task->task_error_status = 748 - PYX_TRANSPORT_UNKNOWN_SAM_OPCODE; 749 - cmd->transport_error_status = 750 - PYX_TRANSPORT_UNKNOWN_SAM_OPCODE; 758 + TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 759 + cmd->scsi_sense_reason = 760 + TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 751 761 } 762 + 752 763 INIT_WORK(&cmd->work, target_complete_failure_work); 753 764 } else { 754 765 atomic_set(&cmd->t_transport_complete, 1); ··· 1326 1335 dev->se_hba = hba; 1327 1336 dev->se_sub_dev = se_dev; 1328 1337 dev->transport = transport; 1329 - atomic_set(&dev->active_cmds, 0); 1330 1338 INIT_LIST_HEAD(&dev->dev_list); 1331 1339 INIT_LIST_HEAD(&dev->dev_sep_list); 1332 1340 INIT_LIST_HEAD(&dev->dev_tmr_list); 1333 1341 INIT_LIST_HEAD(&dev->execute_task_list); 1334 1342 INIT_LIST_HEAD(&dev->delayed_cmd_list); 1335 - INIT_LIST_HEAD(&dev->ordered_cmd_list); 1336 1343 INIT_LIST_HEAD(&dev->state_task_list); 1337 1344 INIT_LIST_HEAD(&dev->qf_cmd_list); 1338 1345 spin_lock_init(&dev->execute_task_lock); 1339 1346 spin_lock_init(&dev->delayed_cmd_lock); 1340 - spin_lock_init(&dev->ordered_cmd_lock); 1341 - spin_lock_init(&dev->state_task_lock); 1342 - spin_lock_init(&dev->dev_alua_lock); 1343 1347 spin_lock_init(&dev->dev_reservation_lock); 1344 1348 spin_lock_init(&dev->dev_status_lock); 1345 - 
spin_lock_init(&dev->dev_status_thr_lock); 1346 1349 spin_lock_init(&dev->se_port_lock); 1347 1350 spin_lock_init(&dev->se_tmr_lock); 1348 1351 spin_lock_init(&dev->qf_cmd_lock); ··· 1492 1507 { 1493 1508 INIT_LIST_HEAD(&cmd->se_lun_node); 1494 1509 INIT_LIST_HEAD(&cmd->se_delayed_node); 1495 - INIT_LIST_HEAD(&cmd->se_ordered_node); 1496 1510 INIT_LIST_HEAD(&cmd->se_qf_node); 1497 1511 INIT_LIST_HEAD(&cmd->se_queue_node); 1498 1512 INIT_LIST_HEAD(&cmd->se_cmd_list); ··· 1557 1573 pr_err("Received SCSI CDB with command_size: %d that" 1558 1574 " exceeds SCSI_MAX_VARLEN_CDB_SIZE: %d\n", 1559 1575 scsi_command_size(cdb), SCSI_MAX_VARLEN_CDB_SIZE); 1576 + cmd->se_cmd_flags |= SCF_SCSI_CDB_EXCEPTION; 1577 + cmd->scsi_sense_reason = TCM_INVALID_CDB_FIELD; 1560 1578 return -EINVAL; 1561 1579 } 1562 1580 /* ··· 1574 1588 " %u > sizeof(cmd->__t_task_cdb): %lu ops\n", 1575 1589 scsi_command_size(cdb), 1576 1590 (unsigned long)sizeof(cmd->__t_task_cdb)); 1591 + cmd->se_cmd_flags |= SCF_SCSI_CDB_EXCEPTION; 1592 + cmd->scsi_sense_reason = 1593 + TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 1577 1594 return -ENOMEM; 1578 1595 } 1579 1596 } else ··· 1647 1658 * and call transport_generic_request_failure() if necessary.. 1648 1659 */ 1649 1660 ret = transport_generic_new_cmd(cmd); 1650 - if (ret < 0) { 1651 - cmd->transport_error_status = ret; 1652 - transport_generic_request_failure(cmd, 0, 1653 - (cmd->data_direction != DMA_TO_DEVICE)); 1654 - } 1661 + if (ret < 0) 1662 + transport_generic_request_failure(cmd); 1663 + 1655 1664 return 0; 1656 1665 } 1657 1666 EXPORT_SYMBOL(transport_handle_cdb_direct); ··· 1785 1798 /* 1786 1799 * Handle SAM-esque emulation for generic transport request failures. 
1787 1800 */ 1788 - static void transport_generic_request_failure( 1789 - struct se_cmd *cmd, 1790 - int complete, 1791 - int sc) 1801 + static void transport_generic_request_failure(struct se_cmd *cmd) 1792 1802 { 1793 1803 int ret = 0; 1794 1804 1795 1805 pr_debug("-----[ Storage Engine Exception for cmd: %p ITT: 0x%08x" 1796 1806 " CDB: 0x%02x\n", cmd, cmd->se_tfo->get_task_tag(cmd), 1797 1807 cmd->t_task_cdb[0]); 1798 - pr_debug("-----[ i_state: %d t_state: %d transport_error_status: %d\n", 1808 + pr_debug("-----[ i_state: %d t_state: %d scsi_sense_reason: %d\n", 1799 1809 cmd->se_tfo->get_cmd_state(cmd), 1800 - cmd->t_state, 1801 - cmd->transport_error_status); 1810 + cmd->t_state, cmd->scsi_sense_reason); 1802 1811 pr_debug("-----[ t_tasks: %d t_task_cdbs_left: %d" 1803 1812 " t_task_cdbs_sent: %d t_task_cdbs_ex_left: %d --" 1804 1813 " t_transport_active: %d t_transport_stop: %d" ··· 1812 1829 if (cmd->se_dev->dev_task_attr_type == SAM_TASK_ATTR_EMULATED) 1813 1830 transport_complete_task_attr(cmd); 1814 1831 1815 - if (complete) { 1816 - cmd->transport_error_status = PYX_TRANSPORT_LU_COMM_FAILURE; 1817 - } 1818 - 1819 - switch (cmd->transport_error_status) { 1820 - case PYX_TRANSPORT_UNKNOWN_SAM_OPCODE: 1821 - cmd->scsi_sense_reason = TCM_UNSUPPORTED_SCSI_OPCODE; 1832 + switch (cmd->scsi_sense_reason) { 1833 + case TCM_NON_EXISTENT_LUN: 1834 + case TCM_UNSUPPORTED_SCSI_OPCODE: 1835 + case TCM_INVALID_CDB_FIELD: 1836 + case TCM_INVALID_PARAMETER_LIST: 1837 + case TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE: 1838 + case TCM_UNKNOWN_MODE_PAGE: 1839 + case TCM_WRITE_PROTECTED: 1840 + case TCM_CHECK_CONDITION_ABORT_CMD: 1841 + case TCM_CHECK_CONDITION_UNIT_ATTENTION: 1842 + case TCM_CHECK_CONDITION_NOT_READY: 1822 1843 break; 1823 - case PYX_TRANSPORT_REQ_TOO_MANY_SECTORS: 1824 - cmd->scsi_sense_reason = TCM_SECTOR_COUNT_TOO_MANY; 1825 - break; 1826 - case PYX_TRANSPORT_INVALID_CDB_FIELD: 1827 - cmd->scsi_sense_reason = TCM_INVALID_CDB_FIELD; 1828 - break; 1829 - 
case PYX_TRANSPORT_INVALID_PARAMETER_LIST: 1830 - cmd->scsi_sense_reason = TCM_INVALID_PARAMETER_LIST; 1831 - break; 1832 - case PYX_TRANSPORT_OUT_OF_MEMORY_RESOURCES: 1833 - if (!sc) 1834 - transport_new_cmd_failure(cmd); 1835 - /* 1836 - * Currently for PYX_TRANSPORT_OUT_OF_MEMORY_RESOURCES, 1837 - * we force this session to fall back to session 1838 - * recovery. 1839 - */ 1840 - cmd->se_tfo->fall_back_to_erl0(cmd->se_sess); 1841 - cmd->se_tfo->stop_session(cmd->se_sess, 0, 0); 1842 - 1843 - goto check_stop; 1844 - case PYX_TRANSPORT_LU_COMM_FAILURE: 1845 - case PYX_TRANSPORT_ILLEGAL_REQUEST: 1846 - cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 1847 - break; 1848 - case PYX_TRANSPORT_UNKNOWN_MODE_PAGE: 1849 - cmd->scsi_sense_reason = TCM_UNKNOWN_MODE_PAGE; 1850 - break; 1851 - case PYX_TRANSPORT_WRITE_PROTECTED: 1852 - cmd->scsi_sense_reason = TCM_WRITE_PROTECTED; 1853 - break; 1854 - case PYX_TRANSPORT_RESERVATION_CONFLICT: 1844 + case TCM_RESERVATION_CONFLICT: 1855 1845 /* 1856 1846 * No SENSE Data payload for this case, set SCSI Status 1857 1847 * and queue the response to $FABRIC_MOD. ··· 1849 1893 if (ret == -EAGAIN || ret == -ENOMEM) 1850 1894 goto queue_full; 1851 1895 goto check_stop; 1852 - case PYX_TRANSPORT_USE_SENSE_REASON: 1853 - /* 1854 - * struct se_cmd->scsi_sense_reason already set 1855 - */ 1856 - break; 1857 1896 default: 1858 1897 pr_err("Unknown transport error for CDB 0x%02x: %d\n", 1859 - cmd->t_task_cdb[0], 1860 - cmd->transport_error_status); 1898 + cmd->t_task_cdb[0], cmd->scsi_sense_reason); 1861 1899 cmd->scsi_sense_reason = TCM_UNSUPPORTED_SCSI_OPCODE; 1862 1900 break; 1863 1901 } ··· 1862 1912 * transport_send_check_condition_and_sense() after handling 1863 1913 * possible unsoliticied write data payloads. 
1864 1914 */ 1865 - if (!sc && !cmd->se_tfo->new_cmd_map) 1866 - transport_new_cmd_failure(cmd); 1867 - else { 1868 - ret = transport_send_check_condition_and_sense(cmd, 1869 - cmd->scsi_sense_reason, 0); 1870 - if (ret == -EAGAIN || ret == -ENOMEM) 1871 - goto queue_full; 1872 - } 1915 + ret = transport_send_check_condition_and_sense(cmd, 1916 + cmd->scsi_sense_reason, 0); 1917 + if (ret == -EAGAIN || ret == -ENOMEM) 1918 + goto queue_full; 1873 1919 1874 1920 check_stop: 1875 1921 transport_lun_remove_cmd(cmd); ··· 1948 2002 * to allow the passed struct se_cmd list of tasks to the front of the list. 1949 2003 */ 1950 2004 if (cmd->sam_task_attr == MSG_HEAD_TAG) { 1951 - atomic_inc(&cmd->se_dev->dev_hoq_count); 1952 - smp_mb__after_atomic_inc(); 1953 2005 pr_debug("Added HEAD_OF_QUEUE for CDB:" 1954 2006 " 0x%02x, se_ordered_id: %u\n", 1955 2007 cmd->t_task_cdb[0], 1956 2008 cmd->se_ordered_id); 1957 2009 return 1; 1958 2010 } else if (cmd->sam_task_attr == MSG_ORDERED_TAG) { 1959 - spin_lock(&cmd->se_dev->ordered_cmd_lock); 1960 - list_add_tail(&cmd->se_ordered_node, 1961 - &cmd->se_dev->ordered_cmd_list); 1962 - spin_unlock(&cmd->se_dev->ordered_cmd_lock); 1963 - 1964 2011 atomic_inc(&cmd->se_dev->dev_ordered_sync); 1965 2012 smp_mb__after_atomic_inc(); 1966 2013 ··· 2015 2076 { 2016 2077 int add_tasks; 2017 2078 2018 - if (se_dev_check_online(cmd->se_orig_obj_ptr) != 0) { 2019 - cmd->transport_error_status = PYX_TRANSPORT_LU_COMM_FAILURE; 2020 - transport_generic_request_failure(cmd, 0, 1); 2079 + if (se_dev_check_online(cmd->se_dev) != 0) { 2080 + cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 2081 + transport_generic_request_failure(cmd); 2021 2082 return 0; 2022 2083 } 2023 2084 ··· 2102 2163 else 2103 2164 error = dev->transport->do_task(task); 2104 2165 if (error != 0) { 2105 - cmd->transport_error_status = error; 2106 2166 spin_lock_irqsave(&cmd->t_state_lock, flags); 2107 2167 task->task_flags &= ~TF_ACTIVE; 2108 2168 
spin_unlock_irqrestore(&cmd->t_state_lock, flags); 2109 2169 atomic_set(&cmd->t_transport_sent, 0); 2110 2170 transport_stop_tasks_for_cmd(cmd); 2111 2171 atomic_inc(&dev->depth_left); 2112 - transport_generic_request_failure(cmd, 0, 1); 2172 + transport_generic_request_failure(cmd); 2113 2173 } 2114 2174 2115 2175 goto check_depth; 2116 2176 2117 2177 return 0; 2118 - } 2119 - 2120 - void transport_new_cmd_failure(struct se_cmd *se_cmd) 2121 - { 2122 - unsigned long flags; 2123 - /* 2124 - * Any unsolicited data will get dumped for failed command inside of 2125 - * the fabric plugin 2126 - */ 2127 - spin_lock_irqsave(&se_cmd->t_state_lock, flags); 2128 - se_cmd->se_cmd_flags |= SCF_SE_CMD_FAILED; 2129 - se_cmd->se_cmd_flags |= SCF_SCSI_CDB_EXCEPTION; 2130 - spin_unlock_irqrestore(&se_cmd->t_state_lock, flags); 2131 2178 } 2132 2179 2133 2180 static inline u32 transport_get_sectors_6( ··· 2138 2213 2139 2214 /* 2140 2215 * Everything else assume TYPE_DISK Sector CDB location. 2141 - * Use 8-bit sector value. 2216 + * Use 8-bit sector value. SBC-3 says: 2217 + * 2218 + * A TRANSFER LENGTH field set to zero specifies that 256 2219 + * logical blocks shall be written. Any other value 2220 + * specifies the number of logical blocks that shall be 2221 + * written. 2142 2222 */ 2143 2223 type_disk: 2144 - return (u32)cdb[4]; 2224 + return cdb[4] ? : 256; 2145 2225 } 2146 2226 2147 2227 static inline u32 transport_get_sectors_10( ··· 2390 2460 return -1; 2391 2461 } 2392 2462 2393 - static int 2394 - transport_handle_reservation_conflict(struct se_cmd *cmd) 2395 - { 2396 - cmd->se_cmd_flags |= SCF_SCSI_CDB_EXCEPTION; 2397 - cmd->se_cmd_flags |= SCF_SCSI_RESERVATION_CONFLICT; 2398 - cmd->scsi_status = SAM_STAT_RESERVATION_CONFLICT; 2399 - /* 2400 - * For UA Interlock Code 11b, a RESERVATION CONFLICT will 2401 - * establish a UNIT ATTENTION with PREVIOUS RESERVATION 2402 - * CONFLICT STATUS. 
2403 - * 2404 - * See spc4r17, section 7.4.6 Control Mode Page, Table 349 2405 - */ 2406 - if (cmd->se_sess && 2407 - cmd->se_dev->se_sub_dev->se_dev_attrib.emulate_ua_intlck_ctrl == 2) 2408 - core_scsi3_ua_allocate(cmd->se_sess->se_node_acl, 2409 - cmd->orig_fe_lun, 0x2C, 2410 - ASCQ_2CH_PREVIOUS_RESERVATION_CONFLICT_STATUS); 2411 - return -EINVAL; 2412 - } 2413 - 2414 2463 static inline long long transport_dev_end_lba(struct se_device *dev) 2415 2464 { 2416 2465 return dev->transport->get_blocks(dev) + 1; ··· 2504 2595 */ 2505 2596 if (su_dev->t10_pr.pr_ops.t10_reservation_check(cmd, &pr_reg_type) != 0) { 2506 2597 if (su_dev->t10_pr.pr_ops.t10_seq_non_holder( 2507 - cmd, cdb, pr_reg_type) != 0) 2508 - return transport_handle_reservation_conflict(cmd); 2598 + cmd, cdb, pr_reg_type) != 0) { 2599 + cmd->se_cmd_flags |= SCF_SCSI_CDB_EXCEPTION; 2600 + cmd->se_cmd_flags |= SCF_SCSI_RESERVATION_CONFLICT; 2601 + cmd->scsi_sense_reason = TCM_RESERVATION_CONFLICT; 2602 + return -EBUSY; 2603 + } 2509 2604 /* 2510 2605 * This means the CDB is allowed for the SCSI Initiator port 2511 2606 * when said port is *NOT* holding the legacy SPC-2 or ··· 2571 2658 goto out_unsupported_cdb; 2572 2659 size = transport_get_size(sectors, cdb, cmd); 2573 2660 cmd->t_task_lba = transport_lba_32(cdb); 2574 - cmd->t_tasks_fua = (cdb[1] & 0x8); 2661 + if (cdb[1] & 0x8) 2662 + cmd->se_cmd_flags |= SCF_FUA; 2575 2663 cmd->se_cmd_flags |= SCF_SCSI_DATA_SG_IO_CDB; 2576 2664 break; 2577 2665 case WRITE_12: ··· 2581 2667 goto out_unsupported_cdb; 2582 2668 size = transport_get_size(sectors, cdb, cmd); 2583 2669 cmd->t_task_lba = transport_lba_32(cdb); 2584 - cmd->t_tasks_fua = (cdb[1] & 0x8); 2670 + if (cdb[1] & 0x8) 2671 + cmd->se_cmd_flags |= SCF_FUA; 2585 2672 cmd->se_cmd_flags |= SCF_SCSI_DATA_SG_IO_CDB; 2586 2673 break; 2587 2674 case WRITE_16: ··· 2591 2676 goto out_unsupported_cdb; 2592 2677 size = transport_get_size(sectors, cdb, cmd); 2593 2678 cmd->t_task_lba = transport_lba_64(cdb); 2594 
- cmd->t_tasks_fua = (cdb[1] & 0x8); 2679 + if (cdb[1] & 0x8) 2680 + cmd->se_cmd_flags |= SCF_FUA; 2595 2681 cmd->se_cmd_flags |= SCF_SCSI_DATA_SG_IO_CDB; 2596 2682 break; 2597 2683 case XDWRITEREAD_10: 2598 2684 if ((cmd->data_direction != DMA_TO_DEVICE) || 2599 - !(cmd->t_tasks_bidi)) 2685 + !(cmd->se_cmd_flags & SCF_BIDI)) 2600 2686 goto out_invalid_cdb_field; 2601 2687 sectors = transport_get_sectors_10(cdb, cmd, &sector_ret); 2602 2688 if (sector_ret) ··· 2616 2700 * Setup BIDI XOR callback to be run after I/O completion. 2617 2701 */ 2618 2702 cmd->transport_complete_callback = &transport_xor_callback; 2619 - cmd->t_tasks_fua = (cdb[1] & 0x8); 2703 + if (cdb[1] & 0x8) 2704 + cmd->se_cmd_flags |= SCF_FUA; 2620 2705 break; 2621 2706 case VARIABLE_LENGTH_CMD: 2622 2707 service_action = get_unaligned_be16(&cdb[8]); ··· 2645 2728 * completion. 2646 2729 */ 2647 2730 cmd->transport_complete_callback = &transport_xor_callback; 2648 - cmd->t_tasks_fua = (cdb[10] & 0x8); 2731 + if (cdb[1] & 0x8) 2732 + cmd->se_cmd_flags |= SCF_FUA; 2649 2733 break; 2650 2734 case WRITE_SAME_32: 2651 2735 sectors = transport_get_sectors_32(cdb, cmd, &sector_ret); ··· 3089 3171 " SIMPLE: %u\n", dev->dev_cur_ordered_id, 3090 3172 cmd->se_ordered_id); 3091 3173 } else if (cmd->sam_task_attr == MSG_HEAD_TAG) { 3092 - atomic_dec(&dev->dev_hoq_count); 3093 - smp_mb__after_atomic_dec(); 3094 3174 dev->dev_cur_ordered_id++; 3095 3175 pr_debug("Incremented dev_cur_ordered_id: %u for" 3096 3176 " HEAD_OF_QUEUE: %u\n", dev->dev_cur_ordered_id, 3097 3177 cmd->se_ordered_id); 3098 3178 } else if (cmd->sam_task_attr == MSG_ORDERED_TAG) { 3099 - spin_lock(&dev->ordered_cmd_lock); 3100 - list_del(&cmd->se_ordered_node); 3101 3179 atomic_dec(&dev->dev_ordered_sync); 3102 3180 smp_mb__after_atomic_dec(); 3103 - spin_unlock(&dev->ordered_cmd_lock); 3104 3181 3105 3182 dev->dev_cur_ordered_id++; 3106 3183 pr_debug("Incremented dev_cur_ordered_id: %u for ORDERED:" ··· 3408 3495 3409 3496 if 
((cmd->se_cmd_flags & SCF_SCSI_DATA_SG_IO_CDB) || 3410 3497 (cmd->se_cmd_flags & SCF_SCSI_CONTROL_SG_IO_CDB)) { 3498 + /* 3499 + * Reject SCSI data overflow with map_mem_to_cmd() as incoming 3500 + * scatterlists already have been set to follow what the fabric 3501 + * passes for the original expected data transfer length. 3502 + */ 3503 + if (cmd->se_cmd_flags & SCF_OVERFLOW_BIT) { 3504 + pr_warn("Rejecting SCSI DATA overflow for fabric using" 3505 + " SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC\n"); 3506 + cmd->se_cmd_flags |= SCF_SCSI_CDB_EXCEPTION; 3507 + cmd->scsi_sense_reason = TCM_INVALID_CDB_FIELD; 3508 + return -EINVAL; 3509 + } 3411 3510 3412 3511 cmd->t_data_sg = sgl; 3413 3512 cmd->t_data_nents = sgl_count; ··· 3738 3813 cmd->data_length) { 3739 3814 ret = transport_generic_get_mem(cmd); 3740 3815 if (ret < 0) 3741 - return ret; 3816 + goto out_fail; 3742 3817 } 3743 3818 3744 3819 /* ··· 3767 3842 task_cdbs = transport_allocate_control_task(cmd); 3768 3843 } 3769 3844 3770 - if (task_cdbs <= 0) 3845 + if (task_cdbs < 0) 3771 3846 goto out_fail; 3847 + else if (!task_cdbs && (cmd->se_cmd_flags & SCF_SCSI_DATA_SG_IO_CDB)) { 3848 + cmd->t_state = TRANSPORT_COMPLETE; 3849 + atomic_set(&cmd->t_transport_active, 1); 3850 + INIT_WORK(&cmd->work, target_complete_ok_work); 3851 + queue_work(target_completion_wq, &cmd->work); 3852 + return 0; 3853 + } 3772 3854 3773 3855 if (set_counts) { 3774 3856 atomic_inc(&cmd->t_fe_count); ··· 3861 3929 else if (ret < 0) 3862 3930 return ret; 3863 3931 3864 - return PYX_TRANSPORT_WRITE_PENDING; 3932 + return 1; 3865 3933 3866 3934 queue_full: 3867 3935 pr_debug("Handling write_pending QUEUE__FULL: se_cmd: %p\n", cmd); ··· 4534 4602 if (cmd->se_tfo->write_pending_status(cmd) != 0) { 4535 4603 atomic_inc(&cmd->t_transport_aborted); 4536 4604 smp_mb__after_atomic_inc(); 4537 - cmd->scsi_status = SAM_STAT_TASK_ABORTED; 4538 - transport_new_cmd_failure(cmd); 4539 - return; 4540 4605 } 4541 4606 } 4542 4607 cmd->scsi_status = 
SAM_STAT_TASK_ABORTED; ··· 4599 4670 struct se_cmd *cmd; 4600 4671 struct se_device *dev = (struct se_device *) param; 4601 4672 4602 - set_user_nice(current, -20); 4603 - 4604 4673 while (!kthread_should_stop()) { 4605 4674 ret = wait_event_interruptible(dev->dev_queue_obj.thread_wq, 4606 4675 atomic_read(&dev->dev_queue_obj.queue_cnt) || ··· 4625 4698 } 4626 4699 ret = cmd->se_tfo->new_cmd_map(cmd); 4627 4700 if (ret < 0) { 4628 - cmd->transport_error_status = ret; 4629 - transport_generic_request_failure(cmd, 4630 - 0, (cmd->data_direction != 4631 - DMA_TO_DEVICE)); 4701 + transport_generic_request_failure(cmd); 4632 4702 break; 4633 4703 } 4634 4704 ret = transport_generic_new_cmd(cmd); 4635 4705 if (ret < 0) { 4636 - cmd->transport_error_status = ret; 4637 - transport_generic_request_failure(cmd, 4638 - 0, (cmd->data_direction != 4639 - DMA_TO_DEVICE)); 4706 + transport_generic_request_failure(cmd); 4707 + break; 4640 4708 } 4641 4709 break; 4642 4710 case TRANSPORT_PROCESS_WRITE:
+1 -1
drivers/target/tcm_fc/tfc_cmd.c
··· 200 200 lport = ep->lp; 201 201 fp = fc_frame_alloc(lport, sizeof(*txrdy)); 202 202 if (!fp) 203 - return PYX_TRANSPORT_OUT_OF_MEMORY_RESOURCES; 203 + return -ENOMEM; /* Signal QUEUE_FULL */ 204 204 205 205 txrdy = fc_frame_payload_get(fp, sizeof(*txrdy)); 206 206 memset(txrdy, 0, sizeof(*txrdy));
+1 -2
drivers/target/tcm_fc/tfc_conf.c
··· 436 436 struct ft_lport_acl *lacl = container_of(wwn, 437 437 struct ft_lport_acl, fc_lport_wwn); 438 438 439 - pr_debug("del lport %s\n", 440 - config_item_name(&wwn->wwn_group.cg_item)); 439 + pr_debug("del lport %s\n", lacl->name); 441 440 mutex_lock(&ft_lport_lock); 442 441 list_del(&lacl->list); 443 442 mutex_unlock(&ft_lport_lock);
+10
drivers/usb/class/cdc-acm.c
··· 1458 1458 }, 1459 1459 { USB_DEVICE(0x22b8, 0x6425), /* Motorola MOTOMAGX phones */ 1460 1460 }, 1461 + /* Motorola H24 HSPA module: */ 1462 + { USB_DEVICE(0x22b8, 0x2d91) }, /* modem */ 1463 + { USB_DEVICE(0x22b8, 0x2d92) }, /* modem + diagnostics */ 1464 + { USB_DEVICE(0x22b8, 0x2d93) }, /* modem + AT port */ 1465 + { USB_DEVICE(0x22b8, 0x2d95) }, /* modem + AT port + diagnostics */ 1466 + { USB_DEVICE(0x22b8, 0x2d96) }, /* modem + NMEA */ 1467 + { USB_DEVICE(0x22b8, 0x2d97) }, /* modem + diagnostics + NMEA */ 1468 + { USB_DEVICE(0x22b8, 0x2d99) }, /* modem + AT port + NMEA */ 1469 + { USB_DEVICE(0x22b8, 0x2d9a) }, /* modem + AT port + diagnostics + NMEA */ 1470 + 1461 1471 { USB_DEVICE(0x0572, 0x1329), /* Hummingbird huc56s (Conexant) */ 1462 1472 .driver_info = NO_UNION_NORMAL, /* union descriptor misplaced on 1463 1473 data interface instead of
+1 -1
drivers/usb/gadget/amd5536udc.c
··· 1959 1959 u32 tmp; 1960 1960 1961 1961 if (!driver || !bind || !driver->setup 1962 - || driver->speed != USB_SPEED_HIGH) 1962 + || driver->speed < USB_SPEED_HIGH) 1963 1963 return -EINVAL; 1964 1964 if (!dev) 1965 1965 return -ENODEV;
+1
drivers/usb/gadget/f_mass_storage.c
··· 2975 2975 fsg_common_put(common); 2976 2976 usb_free_descriptors(fsg->function.descriptors); 2977 2977 usb_free_descriptors(fsg->function.hs_descriptors); 2978 + usb_free_descriptors(fsg->function.ss_descriptors); 2978 2979 kfree(fsg); 2979 2980 } 2980 2981
+2 -2
drivers/usb/gadget/f_serial.c
··· 131 131 } 132 132 if (!gser->port.in->desc || !gser->port.out->desc) { 133 133 DBG(cdev, "activate generic ttyGS%d\n", gser->port_num); 134 - if (!config_ep_by_speed(cdev->gadget, f, gser->port.in) || 135 - !config_ep_by_speed(cdev->gadget, f, gser->port.out)) { 134 + if (config_ep_by_speed(cdev->gadget, f, gser->port.in) || 135 + config_ep_by_speed(cdev->gadget, f, gser->port.out)) { 136 136 gser->port.in->desc = NULL; 137 137 gser->port.out->desc = NULL; 138 138 return -EINVAL;
+1 -2
drivers/usb/gadget/fsl_mxc_udc.c
··· 16 16 #include <linux/err.h> 17 17 #include <linux/fsl_devices.h> 18 18 #include <linux/platform_device.h> 19 + #include <linux/io.h> 19 20 20 21 #include <mach/hardware.h> 21 22 ··· 89 88 void fsl_udc_clk_finalize(struct platform_device *pdev) 90 89 { 91 90 struct fsl_usb2_platform_data *pdata = pdev->dev.platform_data; 92 - #if defined(CONFIG_SOC_IMX35) 93 91 if (cpu_is_mx35()) { 94 92 unsigned int v; 95 93 ··· 101 101 USBPHYCTRL_OTGBASE_OFFSET)); 102 102 } 103 103 } 104 - #endif 105 104 106 105 /* ULPI transceivers don't need usbpll */ 107 106 if (pdata->phy_mode == FSL_USB2_PHY_ULPI) {
+1 -2
drivers/usb/gadget/fsl_qe_udc.c
··· 2336 2336 if (!udc_controller) 2337 2337 return -ENODEV; 2338 2338 2339 - if (!driver || (driver->speed != USB_SPEED_FULL 2340 - && driver->speed != USB_SPEED_HIGH) 2339 + if (!driver || driver->speed < USB_SPEED_FULL 2341 2340 || !bind || !driver->disconnect || !driver->setup) 2342 2341 return -EINVAL; 2343 2342
+35 -40
drivers/usb/gadget/fsl_udc_core.c
··· 696 696 kfree(req); 697 697 } 698 698 699 - /*-------------------------------------------------------------------------*/ 699 + /* Actually add a dTD chain to an empty dQH and let go */ 700 + static void fsl_prime_ep(struct fsl_ep *ep, struct ep_td_struct *td) 701 + { 702 + struct ep_queue_head *qh = get_qh_by_ep(ep); 703 + 704 + /* Write dQH next pointer and terminate bit to 0 */ 705 + qh->next_dtd_ptr = cpu_to_hc32(td->td_dma 706 + & EP_QUEUE_HEAD_NEXT_POINTER_MASK); 707 + 708 + /* Clear active and halt bit */ 709 + qh->size_ioc_int_sts &= cpu_to_hc32(~(EP_QUEUE_HEAD_STATUS_ACTIVE 710 + | EP_QUEUE_HEAD_STATUS_HALT)); 711 + 712 + /* Ensure that updates to the QH will occur before priming. */ 713 + wmb(); 714 + 715 + /* Prime endpoint by writing correct bit to ENDPTPRIME */ 716 + fsl_writel(ep_is_in(ep) ? (1 << (ep_index(ep) + 16)) 717 + : (1 << (ep_index(ep))), &dr_regs->endpointprime); 718 + } 719 + 720 + /* Add dTD chain to the dQH of an EP */ 700 721 static void fsl_queue_td(struct fsl_ep *ep, struct fsl_req *req) 701 722 { 702 - int i = ep_index(ep) * 2 + ep_is_in(ep); 703 723 u32 temp, bitmask, tmp_stat; 704 - struct ep_queue_head *dQH = &ep->udc->ep_qh[i]; 705 724 706 725 /* VDBG("QH addr Register 0x%8x", dr_regs->endpointlistaddr); 707 726 VDBG("ep_qh[%d] addr is 0x%8x", i, (u32)&(ep->udc->ep_qh[i])); */ ··· 738 719 cpu_to_hc32(req->head->td_dma & DTD_ADDR_MASK); 739 720 /* Read prime bit, if 1 goto done */ 740 721 if (fsl_readl(&dr_regs->endpointprime) & bitmask) 741 - goto out; 722 + return; 742 723 743 724 do { 744 725 /* Set ATDTW bit in USBCMD */ ··· 755 736 fsl_writel(temp & ~USB_CMD_ATDTW, &dr_regs->usbcmd); 756 737 757 738 if (tmp_stat) 758 - goto out; 739 + return; 759 740 } 760 741 761 - /* Write dQH next pointer and terminate bit to 0 */ 762 - temp = req->head->td_dma & EP_QUEUE_HEAD_NEXT_POINTER_MASK; 763 - dQH->next_dtd_ptr = cpu_to_hc32(temp); 764 - 765 - /* Clear active and halt bit */ 766 - temp = 
cpu_to_hc32(~(EP_QUEUE_HEAD_STATUS_ACTIVE 767 - | EP_QUEUE_HEAD_STATUS_HALT)); 768 - dQH->size_ioc_int_sts &= temp; 769 - 770 - /* Ensure that updates to the QH will occur before priming. */ 771 - wmb(); 772 - 773 - /* Prime endpoint by writing 1 to ENDPTPRIME */ 774 - temp = ep_is_in(ep) 775 - ? (1 << (ep_index(ep) + 16)) 776 - : (1 << (ep_index(ep))); 777 - fsl_writel(temp, &dr_regs->endpointprime); 778 - out: 779 - return; 742 + fsl_prime_ep(ep, req->head); 780 743 } 781 744 782 745 /* Fill in the dTD structure ··· 878 877 VDBG("%s, bad ep", __func__); 879 878 return -EINVAL; 880 879 } 881 - if (ep->desc->bmAttributes == USB_ENDPOINT_XFER_ISOC) { 880 + if (usb_endpoint_xfer_isoc(ep->desc)) { 882 881 if (req->req.length > ep->ep.maxpacket) 883 882 return -EMSGSIZE; 884 883 } ··· 974 973 975 974 /* The request isn't the last request in this ep queue */ 976 975 if (req->queue.next != &ep->queue) { 977 - struct ep_queue_head *qh; 978 976 struct fsl_req *next_req; 979 977 980 - qh = ep->qh; 981 978 next_req = list_entry(req->queue.next, struct fsl_req, 982 979 queue); 983 980 984 - /* Point the QH to the first TD of next request */ 985 - fsl_writel((u32) next_req->head, &qh->curr_dtd_ptr); 981 + /* prime with dTD of next request */ 982 + fsl_prime_ep(ep, next_req->head); 986 983 } 987 - 988 - /* The request hasn't been processed, patch up the TD chain */ 984 + /* The request hasn't been processed, patch up the TD chain */ 989 985 } else { 990 986 struct fsl_req *prev_req; 991 987 992 988 prev_req = list_entry(req->queue.prev, struct fsl_req, queue); 993 - fsl_writel(fsl_readl(&req->tail->next_td_ptr), 994 - &prev_req->tail->next_td_ptr); 995 - 989 + prev_req->tail->next_td_ptr = req->tail->next_td_ptr; 996 990 } 997 991 998 992 done(ep, req, -ECONNRESET); ··· 1028 1032 goto out; 1029 1033 } 1030 1034 1031 - if (ep->desc->bmAttributes == USB_ENDPOINT_XFER_ISOC) { 1035 + if (usb_endpoint_xfer_isoc(ep->desc)) { 1032 1036 status = -EOPNOTSUPP; 1033 1037 goto out; 1034 
1038 } ··· 1064 1068 struct fsl_udc *udc; 1065 1069 int size = 0; 1066 1070 u32 bitmask; 1067 - struct ep_queue_head *d_qh; 1071 + struct ep_queue_head *qh; 1068 1072 1069 1073 ep = container_of(_ep, struct fsl_ep, ep); 1070 1074 if (!_ep || (!ep->desc && ep_index(ep) != 0)) ··· 1075 1079 if (!udc->driver || udc->gadget.speed == USB_SPEED_UNKNOWN) 1076 1080 return -ESHUTDOWN; 1077 1081 1078 - d_qh = &ep->udc->ep_qh[ep_index(ep) * 2 + ep_is_in(ep)]; 1082 + qh = get_qh_by_ep(ep); 1079 1083 1080 1084 bitmask = (ep_is_in(ep)) ? (1 << (ep_index(ep) + 16)) : 1081 1085 (1 << (ep_index(ep))); 1082 1086 1083 1087 if (fsl_readl(&dr_regs->endptstatus) & bitmask) 1084 - size = (d_qh->size_ioc_int_sts & DTD_PACKET_SIZE) 1088 + size = (qh->size_ioc_int_sts & DTD_PACKET_SIZE) 1085 1089 >> DTD_LENGTH_BIT_POS; 1086 1090 1087 1091 pr_debug("%s %u\n", __func__, size); ··· 1934 1938 if (!udc_controller) 1935 1939 return -ENODEV; 1936 1940 1937 - if (!driver || (driver->speed != USB_SPEED_FULL 1938 - && driver->speed != USB_SPEED_HIGH) 1941 + if (!driver || driver->speed < USB_SPEED_FULL 1939 1942 || !bind || !driver->disconnect || !driver->setup) 1940 1943 return -EINVAL; 1941 1944
+10
drivers/usb/gadget/fsl_usb2_udc.h
··· 569 569 * 2 + ((windex & USB_DIR_IN) ? 1 : 0)) 570 570 #define get_pipe_by_ep(EP) (ep_index(EP) * 2 + ep_is_in(EP)) 571 571 572 + static inline struct ep_queue_head *get_qh_by_ep(struct fsl_ep *ep) 573 + { 574 + /* we only have one ep0 structure but two queue heads */ 575 + if (ep_index(ep) != 0) 576 + return ep->qh; 577 + else 578 + return &ep->udc->ep_qh[(ep->udc->ep0_dir == 579 + USB_DIR_IN) ? 1 : 0]; 580 + } 581 + 572 582 struct platform_device; 573 583 #ifdef CONFIG_ARCH_MXC 574 584 int fsl_udc_clk_init(struct platform_device *pdev);
+1 -1
drivers/usb/gadget/m66592-udc.c
··· 1472 1472 int retval; 1473 1473 1474 1474 if (!driver 1475 - || driver->speed != USB_SPEED_HIGH 1475 + || driver->speed < USB_SPEED_HIGH 1476 1476 || !bind 1477 1477 || !driver->setup) 1478 1478 return -EINVAL;
+1 -1
drivers/usb/gadget/net2280.c
··· 1881 1881 * (dev->usb->xcvrdiag & FORCE_FULL_SPEED_MODE) 1882 1882 * "must not be used in normal operation" 1883 1883 */ 1884 - if (!driver || driver->speed != USB_SPEED_HIGH 1884 + if (!driver || driver->speed < USB_SPEED_HIGH 1885 1885 || !driver->setup) 1886 1886 return -EINVAL; 1887 1887
+1 -1
drivers/usb/gadget/r8a66597-udc.c
··· 1746 1746 struct r8a66597 *r8a66597 = gadget_to_r8a66597(gadget); 1747 1747 1748 1748 if (!driver 1749 - || driver->speed != USB_SPEED_HIGH 1749 + || driver->speed < USB_SPEED_HIGH 1750 1750 || !driver->setup) 1751 1751 return -EINVAL; 1752 1752 if (!r8a66597)
+1 -3
drivers/usb/gadget/s3c-hsotg.c
··· 2586 2586 return -EINVAL; 2587 2587 } 2588 2588 2589 - if (driver->speed != USB_SPEED_HIGH && 2590 - driver->speed != USB_SPEED_FULL) { 2589 + if (driver->speed < USB_SPEED_FULL) 2591 2590 dev_err(hsotg->dev, "%s: bad speed\n", __func__); 2592 - } 2593 2591 2594 2592 if (!bind || !driver->setup) { 2595 2593 dev_err(hsotg->dev, "%s: missing entry points\n", __func__);
+1 -2
drivers/usb/gadget/s3c-hsudc.c
··· 1142 1142 int ret; 1143 1143 1144 1144 if (!driver 1145 - || (driver->speed != USB_SPEED_FULL && 1146 - driver->speed != USB_SPEED_HIGH) 1145 + || driver->speed < USB_SPEED_FULL 1147 1146 || !bind 1148 1147 || !driver->unbind || !driver->disconnect || !driver->setup) 1149 1148 return -EINVAL;
+5 -4
drivers/usb/host/ehci-sched.c
··· 1475 1475 * jump until after the queue is primed. 1476 1476 */ 1477 1477 else { 1478 + int done = 0; 1478 1479 start = SCHEDULE_SLOP + (now & ~0x07); 1479 1480 1480 1481 /* NOTE: assumes URB_ISO_ASAP, to limit complexity/bugs */ ··· 1493 1492 if (stream->highspeed) { 1494 1493 if (itd_slot_ok(ehci, mod, start, 1495 1494 stream->usecs, period)) 1496 - break; 1495 + done = 1; 1497 1496 } else { 1498 1497 if ((start % 8) >= 6) 1499 1498 continue; 1500 1499 if (sitd_slot_ok(ehci, mod, stream, 1501 1500 start, sched, period)) 1502 - break; 1501 + done = 1; 1503 1502 } 1504 - } while (start > next); 1503 + } while (start > next && !done); 1505 1504 1506 1505 /* no room in the schedule */ 1507 - if (start == next) { 1506 + if (!done) { 1508 1507 ehci_dbg(ehci, "iso resched full %p (now %d max %d)\n", 1509 1508 urb, now, now + mod); 1510 1509 status = -ENOSPC;
+1 -1
drivers/usb/host/whci/qset.c
··· 124 124 { 125 125 qset->td_start = qset->td_end = qset->ntds = 0; 126 126 127 - qset->qh.link = cpu_to_le32(QH_LINK_NTDS(8) | QH_LINK_T); 127 + qset->qh.link = cpu_to_le64(QH_LINK_NTDS(8) | QH_LINK_T); 128 128 qset->qh.status = qset->qh.status & QH_STATUS_SEQ_MASK; 129 129 qset->qh.err_count = 0; 130 130 qset->qh.scratch[0] = 0;
+4 -1
drivers/usb/host/xhci.c
··· 711 711 ring = xhci->cmd_ring; 712 712 seg = ring->deq_seg; 713 713 do { 714 - memset(seg->trbs, 0, SEGMENT_SIZE); 714 + memset(seg->trbs, 0, 715 + sizeof(union xhci_trb) * (TRBS_PER_SEGMENT - 1)); 716 + seg->trbs[TRBS_PER_SEGMENT - 1].link.control &= 717 + cpu_to_le32(~TRB_CYCLE); 715 718 seg = seg->next; 716 719 } while (seg != ring->deq_seg); 717 720
-6
drivers/usb/musb/musb_core.c
··· 2301 2301 */ 2302 2302 } 2303 2303 2304 - musb_save_context(musb); 2305 - 2306 2304 spin_unlock_irqrestore(&musb->lock, flags); 2307 2305 return 0; 2308 2306 } 2309 2307 2310 2308 static int musb_resume_noirq(struct device *dev) 2311 2309 { 2312 - struct musb *musb = dev_to_musb(dev); 2313 - 2314 - musb_restore_context(musb); 2315 - 2316 2310 /* for static cmos like DaVinci, register values were preserved 2317 2311 * unless for some reason the whole soc powered down or the USB 2318 2312 * module got reset through the PSC (vs just being disabled).
+1 -1
drivers/usb/musb/musb_gadget.c
··· 1903 1903 unsigned long flags; 1904 1904 int retval = -EINVAL; 1905 1905 1906 - if (driver->speed != USB_SPEED_HIGH) 1906 + if (driver->speed < USB_SPEED_HIGH) 1907 1907 goto err0; 1908 1908 1909 1909 pm_runtime_get_sync(musb->controller);
+1 -1
drivers/usb/renesas_usbhs/mod.c
··· 349 349 if (mod->irq_attch) 350 350 intenb1 |= ATTCHE; 351 351 352 - if (mod->irq_attch) 352 + if (mod->irq_dtch) 353 353 intenb1 |= DTCHE; 354 354 355 355 if (mod->irq_sign)
+20 -27
drivers/usb/renesas_usbhs/mod_gadget.c
··· 751 751 struct usb_gadget_driver *driver) 752 752 { 753 753 struct usbhsg_gpriv *gpriv = usbhsg_gadget_to_gpriv(gadget); 754 - struct usbhs_priv *priv; 755 - struct device *dev; 756 - int ret; 754 + struct usbhs_priv *priv = usbhsg_gpriv_to_priv(gpriv); 757 755 758 756 if (!driver || 759 757 !driver->setup || 760 - driver->speed != USB_SPEED_HIGH) 758 + driver->speed < USB_SPEED_FULL) 761 759 return -EINVAL; 762 - 763 - dev = usbhsg_gpriv_to_dev(gpriv); 764 - priv = usbhsg_gpriv_to_priv(gpriv); 765 760 766 761 /* first hook up the driver ... */ 767 762 gpriv->driver = driver; 768 763 gpriv->gadget.dev.driver = &driver->driver; 769 764 770 - ret = device_add(&gpriv->gadget.dev); 771 - if (ret) { 772 - dev_err(dev, "device_add error %d\n", ret); 773 - goto add_fail; 774 - } 775 - 776 765 return usbhsg_try_start(priv, USBHSG_STATUS_REGISTERD); 777 - 778 - add_fail: 779 - gpriv->driver = NULL; 780 - gpriv->gadget.dev.driver = NULL; 781 - 782 - return ret; 783 766 } 784 767 785 768 static int usbhsg_gadget_stop(struct usb_gadget *gadget, 786 769 struct usb_gadget_driver *driver) 787 770 { 788 771 struct usbhsg_gpriv *gpriv = usbhsg_gadget_to_gpriv(gadget); 789 - struct usbhs_priv *priv; 790 - struct device *dev; 772 + struct usbhs_priv *priv = usbhsg_gpriv_to_priv(gpriv); 791 773 792 774 if (!driver || 793 775 !driver->unbind) 794 776 return -EINVAL; 795 777 796 - dev = usbhsg_gpriv_to_dev(gpriv); 797 - priv = usbhsg_gpriv_to_priv(gpriv); 798 - 799 778 usbhsg_try_stop(priv, USBHSG_STATUS_REGISTERD); 800 - device_del(&gpriv->gadget.dev); 779 + gpriv->gadget.dev.driver = NULL; 801 780 gpriv->driver = NULL; 802 781 803 782 return 0; ··· 806 827 807 828 static int usbhsg_stop(struct usbhs_priv *priv) 808 829 { 830 + struct usbhsg_gpriv *gpriv = usbhsg_priv_to_gpriv(priv); 831 + 832 + /* cable disconnect */ 833 + if (gpriv->driver && 834 + gpriv->driver->disconnect) 835 + gpriv->driver->disconnect(&gpriv->gadget); 836 + 809 837 return usbhsg_try_stop(priv, 
USBHSG_STATUS_STARTED); 810 838 } 811 839 ··· 862 876 /* 863 877 * init gadget 864 878 */ 865 - device_initialize(&gpriv->gadget.dev); 866 879 dev_set_name(&gpriv->gadget.dev, "gadget"); 867 880 gpriv->gadget.dev.parent = dev; 868 881 gpriv->gadget.name = "renesas_usbhs_udc"; 869 882 gpriv->gadget.ops = &usbhsg_gadget_ops; 870 883 gpriv->gadget.is_dualspeed = 1; 884 + ret = device_register(&gpriv->gadget.dev); 885 + if (ret < 0) 886 + goto err_add_udc; 871 887 872 888 INIT_LIST_HEAD(&gpriv->gadget.ep_list); 873 889 ··· 900 912 901 913 ret = usb_add_gadget_udc(dev, &gpriv->gadget); 902 914 if (ret) 903 - goto err_add_udc; 915 + goto err_register; 904 916 905 917 906 918 dev_info(dev, "gadget probed\n"); 907 919 908 920 return 0; 921 + 922 + err_register: 923 + device_unregister(&gpriv->gadget.dev); 909 924 err_add_udc: 910 925 kfree(gpriv->uep); 911 926 ··· 923 932 struct usbhsg_gpriv *gpriv = usbhsg_priv_to_gpriv(priv); 924 933 925 934 usb_del_gadget_udc(&gpriv->gadget); 935 + 936 + device_unregister(&gpriv->gadget.dev); 926 937 927 938 usbhsg_controller_unregister(gpriv); 928 939
+1
drivers/usb/renesas_usbhs/mod_host.c
··· 1267 1267 dev_err(dev, "Failed to create hcd\n"); 1268 1268 return -ENOMEM; 1269 1269 } 1270 + hcd->has_tt = 1; /* for low/full speed */ 1270 1271 1271 1272 pipe_info = kzalloc(sizeof(*pipe_info) * pipe_size, GFP_KERNEL); 1272 1273 if (!pipe_info) {
+1
drivers/usb/serial/ftdi_sio.c
··· 736 736 { USB_DEVICE(TML_VID, TML_USB_SERIAL_PID) }, 737 737 { USB_DEVICE(FTDI_VID, FTDI_ELSTER_UNICOM_PID) }, 738 738 { USB_DEVICE(FTDI_VID, FTDI_PROPOX_JTAGCABLEII_PID) }, 739 + { USB_DEVICE(FTDI_VID, FTDI_PROPOX_ISPCABLEIII_PID) }, 739 740 { USB_DEVICE(OLIMEX_VID, OLIMEX_ARM_USB_OCD_PID), 740 741 .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, 741 742 { USB_DEVICE(OLIMEX_VID, OLIMEX_ARM_USB_OCD_H_PID),
+1
drivers/usb/serial/ftdi_sio_ids.h
··· 112 112 113 113 /* Propox devices */ 114 114 #define FTDI_PROPOX_JTAGCABLEII_PID 0xD738 115 + #define FTDI_PROPOX_ISPCABLEIII_PID 0xD739 115 116 116 117 /* Lenz LI-USB Computer Interface. */ 117 118 #define FTDI_LENZ_LIUSB_PID 0xD780
+9
drivers/usb/serial/option.c
··· 661 661 { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_K4511, 0xff, 0x01, 0x31) }, 662 662 { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_K4511, 0xff, 0x01, 0x32) }, 663 663 { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E353, 0xff, 0x01, 0x01) }, 664 + { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E353, 0xff, 0x01, 0x02) }, 665 + { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E353, 0xff, 0x01, 0x03) }, 666 + { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E353, 0xff, 0x01, 0x10) }, 667 + { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E353, 0xff, 0x01, 0x12) }, 668 + { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E353, 0xff, 0x01, 0x13) }, 669 + { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E353, 0xff, 0x02, 0x01) }, /* E398 3G Modem */ 670 + { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E353, 0xff, 0x02, 0x02) }, /* E398 3G PC UI Interface */ 671 + { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E353, 0xff, 0x02, 0x03) }, /* E398 3G Application Interface */ 664 672 { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_V640) }, 665 673 { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_V620) }, 666 674 { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_V740) }, ··· 755 747 { USB_DEVICE(KYOCERA_VENDOR_ID, KYOCERA_PRODUCT_KPC680) }, 756 748 { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x6000)}, /* ZTE AC8700 */ 757 749 { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x6613)}, /* Onda H600/ZTE MF330 */ 750 + { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x9000)}, /* SIMCom SIM5218 */ 758 751 { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6280) }, /* BP3-USB & BP3-EXT HSDPA */ 759 752 { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6008) }, 760 753 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_UC864E) },
+7
drivers/usb/storage/unusual_devs.h
··· 1854 1854 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 1855 1855 US_FL_IGNORE_RESIDUE ), 1856 1856 1857 + /* Reported by Qinglin Ye <yestyle@gmail.com> */ 1858 + UNUSUAL_DEV( 0x13fe, 0x3600, 0x0100, 0x0100, 1859 + "Kingston", 1860 + "DT 101 G2", 1861 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 1862 + US_FL_BULK_IGNORE_TAG ), 1863 + 1857 1864 /* Reported by Francesco Foresti <frafore@tiscali.it> */ 1858 1865 UNUSUAL_DEV( 0x14cd, 0x6600, 0x0201, 0x0201, 1859 1866 "Super Top",
+14 -1
drivers/video/da8xx-fb.c
··· 116 116 /* Clock registers available only on Version 2 */ 117 117 #define LCD_CLK_ENABLE_REG 0x6c 118 118 #define LCD_CLK_RESET_REG 0x70 119 + #define LCD_CLK_MAIN_RESET BIT(3) 119 120 120 121 #define LCD_NUM_BUFFERS 2 121 122 ··· 245 244 { 246 245 u32 reg; 247 246 247 + /* Bring LCDC out of reset */ 248 + if (lcd_revision == LCD_VERSION_2) 249 + lcdc_write(0, LCD_CLK_RESET_REG); 250 + 248 251 reg = lcdc_read(LCD_RASTER_CTRL_REG); 249 252 if (!(reg & LCD_RASTER_ENABLE)) 250 253 lcdc_write(reg | LCD_RASTER_ENABLE, LCD_RASTER_CTRL_REG); ··· 262 257 reg = lcdc_read(LCD_RASTER_CTRL_REG); 263 258 if (reg & LCD_RASTER_ENABLE) 264 259 lcdc_write(reg & ~LCD_RASTER_ENABLE, LCD_RASTER_CTRL_REG); 260 + 261 + if (lcd_revision == LCD_VERSION_2) 262 + /* Write 1 to reset LCDC */ 263 + lcdc_write(LCD_CLK_MAIN_RESET, LCD_CLK_RESET_REG); 265 264 } 266 265 267 266 static void lcd_blit(int load_mode, struct da8xx_fb_par *par) ··· 593 584 lcdc_write(0, LCD_DMA_CTRL_REG); 594 585 lcdc_write(0, LCD_RASTER_CTRL_REG); 595 586 596 - if (lcd_revision == LCD_VERSION_2) 587 + if (lcd_revision == LCD_VERSION_2) { 597 588 lcdc_write(0, LCD_INT_ENABLE_SET_REG); 589 + /* Write 1 to reset */ 590 + lcdc_write(LCD_CLK_MAIN_RESET, LCD_CLK_RESET_REG); 591 + lcdc_write(0, LCD_CLK_RESET_REG); 592 + } 598 593 } 599 594 600 595 static void lcd_calc_clk_divider(struct da8xx_fb_par *par)
+1
drivers/video/omap/dispc.c
··· 19 19 * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 20 20 */ 21 21 #include <linux/kernel.h> 22 + #include <linux/module.h> 22 23 #include <linux/dma-mapping.h> 23 24 #include <linux/mm.h> 24 25 #include <linux/vmalloc.h>
+5 -6
drivers/video/omap2/dss/dispc.c
··· 1720 1720 const int maxdownscale = dss_feat_get_param_max(FEAT_PARAM_DOWNSCALE); 1721 1721 unsigned long fclk = 0; 1722 1722 1723 - if ((ovl->caps & OMAP_DSS_OVL_CAP_SCALE) == 0) { 1724 - if (width != out_width || height != out_height) 1725 - return -EINVAL; 1726 - else 1727 - return 0; 1728 - } 1723 + if (width == out_width && height == out_height) 1724 + return 0; 1725 + 1726 + if ((ovl->caps & OMAP_DSS_OVL_CAP_SCALE) == 0) 1727 + return -EINVAL; 1729 1728 1730 1729 if (out_width < width / maxdownscale || 1731 1730 out_width > width * 8)
+1 -1
drivers/video/omap2/dss/hdmi.c
··· 269 269 unsigned long hdmi_get_pixel_clock(void) 270 270 { 271 271 /* HDMI Pixel Clock in Mhz */ 272 - return hdmi.ip_data.cfg.timings.timings.pixel_clock * 10000; 272 + return hdmi.ip_data.cfg.timings.timings.pixel_clock * 1000; 273 273 } 274 274 275 275 static void hdmi_compute_pll(struct omap_dss_device *dssdev, int phy,
+2 -2
drivers/video/via/share.h
··· 559 559 #define M1200X720_R60_VSP POSITIVE 560 560 561 561 /* 1200x900@60 Sync Polarity (DCON) */ 562 - #define M1200X900_R60_HSP NEGATIVE 563 - #define M1200X900_R60_VSP NEGATIVE 562 + #define M1200X900_R60_HSP POSITIVE 563 + #define M1200X900_R60_VSP POSITIVE 564 564 565 565 /* 1280x600@60 Sync Polarity (GTF Mode) */ 566 566 #define M1280x600_R60_HSP NEGATIVE
+1 -1
drivers/virtio/Kconfig
··· 37 37 38 38 config VIRTIO_MMIO 39 39 tristate "Platform bus driver for memory mapped virtio devices (EXPERIMENTAL)" 40 - depends on EXPERIMENTAL 40 + depends on HAS_IOMEM && EXPERIMENTAL 41 41 select VIRTIO 42 42 select VIRTIO_RING 43 43 ---help---
+1 -1
drivers/virtio/virtio_mmio.c
··· 118 118 vring_transport_features(vdev); 119 119 120 120 for (i = 0; i < ARRAY_SIZE(vdev->features); i++) { 121 - writel(i, vm_dev->base + VIRTIO_MMIO_GUEST_FEATURES_SET); 121 + writel(i, vm_dev->base + VIRTIO_MMIO_GUEST_FEATURES_SEL); 122 122 writel(vdev->features[i], 123 123 vm_dev->base + VIRTIO_MMIO_GUEST_FEATURES); 124 124 }
+18
drivers/virtio/virtio_pci.c
··· 169 169 iowrite8(status, vp_dev->ioaddr + VIRTIO_PCI_STATUS); 170 170 } 171 171 172 + /* wait for pending irq handlers */ 173 + static void vp_synchronize_vectors(struct virtio_device *vdev) 174 + { 175 + struct virtio_pci_device *vp_dev = to_vp_device(vdev); 176 + int i; 177 + 178 + if (vp_dev->intx_enabled) 179 + synchronize_irq(vp_dev->pci_dev->irq); 180 + 181 + for (i = 0; i < vp_dev->msix_vectors; ++i) 182 + synchronize_irq(vp_dev->msix_entries[i].vector); 183 + } 184 + 172 185 static void vp_reset(struct virtio_device *vdev) 173 186 { 174 187 struct virtio_pci_device *vp_dev = to_vp_device(vdev); 175 188 /* 0 status means a reset. */ 176 189 iowrite8(0, vp_dev->ioaddr + VIRTIO_PCI_STATUS); 190 + /* Flush out the status write, and flush in device writes, 191 + * including MSi-X interrupts, if any. */ 192 + ioread8(vp_dev->ioaddr + VIRTIO_PCI_STATUS); 193 + /* Flush pending VQ/configuration callbacks. */ 194 + vp_synchronize_vectors(vdev); 177 195 } 178 196 179 197 /* the notify function used when creating a virt queue */
+2 -2
drivers/xen/swiotlb-xen.c
··· 166 166 /* 167 167 * Get IO TLB memory from any location. 168 168 */ 169 - xen_io_tlb_start = alloc_bootmem(bytes); 169 + xen_io_tlb_start = alloc_bootmem_pages(PAGE_ALIGN(bytes)); 170 170 if (!xen_io_tlb_start) { 171 171 m = "Cannot allocate Xen-SWIOTLB buffer!\n"; 172 172 goto error; ··· 179 179 bytes, 180 180 xen_io_tlb_nslabs); 181 181 if (rc) { 182 - free_bootmem(__pa(xen_io_tlb_start), bytes); 182 + free_bootmem(__pa(xen_io_tlb_start), PAGE_ALIGN(bytes)); 183 183 m = "Failed to get contiguous memory for DMA from Xen!\n"\ 184 184 "You either: don't have the permissions, do not have"\ 185 185 " enough free memory under 4GB, or the hypervisor memory"\
+56 -63
fs/btrfs/async-thread.c
··· 64 64 int idle; 65 65 }; 66 66 67 + static int __btrfs_start_workers(struct btrfs_workers *workers); 68 + 67 69 /* 68 70 * btrfs_start_workers uses kthread_run, which can block waiting for memory 69 71 * for a very long time. It will actually throttle on page writeback, ··· 90 88 { 91 89 struct worker_start *start; 92 90 start = container_of(work, struct worker_start, work); 93 - btrfs_start_workers(start->queue, 1); 91 + __btrfs_start_workers(start->queue); 94 92 kfree(start); 95 - } 96 - 97 - static int start_new_worker(struct btrfs_workers *queue) 98 - { 99 - struct worker_start *start; 100 - int ret; 101 - 102 - start = kzalloc(sizeof(*start), GFP_NOFS); 103 - if (!start) 104 - return -ENOMEM; 105 - 106 - start->work.func = start_new_worker_func; 107 - start->queue = queue; 108 - ret = btrfs_queue_worker(queue->atomic_worker_start, &start->work); 109 - if (ret) 110 - kfree(start); 111 - return ret; 112 93 } 113 94 114 95 /* ··· 138 153 static void check_pending_worker_creates(struct btrfs_worker_thread *worker) 139 154 { 140 155 struct btrfs_workers *workers = worker->workers; 156 + struct worker_start *start; 141 157 unsigned long flags; 142 158 143 159 rmb(); 144 160 if (!workers->atomic_start_pending) 145 161 return; 162 + 163 + start = kzalloc(sizeof(*start), GFP_NOFS); 164 + if (!start) 165 + return; 166 + 167 + start->work.func = start_new_worker_func; 168 + start->queue = workers; 146 169 147 170 spin_lock_irqsave(&workers->lock, flags); 148 171 if (!workers->atomic_start_pending) ··· 163 170 164 171 workers->num_workers_starting += 1; 165 172 spin_unlock_irqrestore(&workers->lock, flags); 166 - start_new_worker(workers); 173 + btrfs_queue_worker(workers->atomic_worker_start, &start->work); 167 174 return; 168 175 169 176 out: 177 + kfree(start); 170 178 spin_unlock_irqrestore(&workers->lock, flags); 171 179 } 172 180 ··· 325 331 run_ordered_completions(worker->workers, work); 326 332 327 333 check_pending_worker_creates(worker); 328 - 334 + 
cond_resched(); 329 335 } 330 336 331 337 spin_lock_irq(&worker->lock); ··· 456 462 * starts new worker threads. This does not enforce the max worker 457 463 * count in case you need to temporarily go past it. 458 464 */ 459 - static int __btrfs_start_workers(struct btrfs_workers *workers, 460 - int num_workers) 465 + static int __btrfs_start_workers(struct btrfs_workers *workers) 461 466 { 462 467 struct btrfs_worker_thread *worker; 463 468 int ret = 0; 464 - int i; 465 469 466 - for (i = 0; i < num_workers; i++) { 467 - worker = kzalloc(sizeof(*worker), GFP_NOFS); 468 - if (!worker) { 469 - ret = -ENOMEM; 470 - goto fail; 471 - } 472 - 473 - INIT_LIST_HEAD(&worker->pending); 474 - INIT_LIST_HEAD(&worker->prio_pending); 475 - INIT_LIST_HEAD(&worker->worker_list); 476 - spin_lock_init(&worker->lock); 477 - 478 - atomic_set(&worker->num_pending, 0); 479 - atomic_set(&worker->refs, 1); 480 - worker->workers = workers; 481 - worker->task = kthread_run(worker_loop, worker, 482 - "btrfs-%s-%d", workers->name, 483 - workers->num_workers + i); 484 - if (IS_ERR(worker->task)) { 485 - ret = PTR_ERR(worker->task); 486 - kfree(worker); 487 - goto fail; 488 - } 489 - spin_lock_irq(&workers->lock); 490 - list_add_tail(&worker->worker_list, &workers->idle_list); 491 - worker->idle = 1; 492 - workers->num_workers++; 493 - workers->num_workers_starting--; 494 - WARN_ON(workers->num_workers_starting < 0); 495 - spin_unlock_irq(&workers->lock); 470 + worker = kzalloc(sizeof(*worker), GFP_NOFS); 471 + if (!worker) { 472 + ret = -ENOMEM; 473 + goto fail; 496 474 } 475 + 476 + INIT_LIST_HEAD(&worker->pending); 477 + INIT_LIST_HEAD(&worker->prio_pending); 478 + INIT_LIST_HEAD(&worker->worker_list); 479 + spin_lock_init(&worker->lock); 480 + 481 + atomic_set(&worker->num_pending, 0); 482 + atomic_set(&worker->refs, 1); 483 + worker->workers = workers; 484 + worker->task = kthread_run(worker_loop, worker, 485 + "btrfs-%s-%d", workers->name, 486 + workers->num_workers + 1); 487 + if 
(IS_ERR(worker->task)) { 488 + ret = PTR_ERR(worker->task); 489 + kfree(worker); 490 + goto fail; 491 + } 492 + spin_lock_irq(&workers->lock); 493 + list_add_tail(&worker->worker_list, &workers->idle_list); 494 + worker->idle = 1; 495 + workers->num_workers++; 496 + workers->num_workers_starting--; 497 + WARN_ON(workers->num_workers_starting < 0); 498 + spin_unlock_irq(&workers->lock); 499 + 497 500 return 0; 498 501 fail: 499 - btrfs_stop_workers(workers); 502 + spin_lock_irq(&workers->lock); 503 + workers->num_workers_starting--; 504 + spin_unlock_irq(&workers->lock); 500 505 return ret; 501 506 } 502 507 503 - int btrfs_start_workers(struct btrfs_workers *workers, int num_workers) 508 + int btrfs_start_workers(struct btrfs_workers *workers) 504 509 { 505 510 spin_lock_irq(&workers->lock); 506 - workers->num_workers_starting += num_workers; 511 + workers->num_workers_starting++; 507 512 spin_unlock_irq(&workers->lock); 508 - return __btrfs_start_workers(workers, num_workers); 513 + return __btrfs_start_workers(workers); 509 514 } 510 515 511 516 /* ··· 561 568 struct btrfs_worker_thread *worker; 562 569 unsigned long flags; 563 570 struct list_head *fallback; 571 + int ret; 564 572 565 573 again: 566 574 spin_lock_irqsave(&workers->lock, flags); ··· 578 584 workers->num_workers_starting++; 579 585 spin_unlock_irqrestore(&workers->lock, flags); 580 586 /* we're below the limit, start another worker */ 581 - __btrfs_start_workers(workers, 1); 587 + ret = __btrfs_start_workers(workers); 588 + if (ret) 589 + goto fallback; 582 590 goto again; 583 591 } 584 592 } ··· 661 665 /* 662 666 * places a struct btrfs_work into the pending queue of one of the kthreads 663 667 */ 664 - int btrfs_queue_worker(struct btrfs_workers *workers, struct btrfs_work *work) 668 + void btrfs_queue_worker(struct btrfs_workers *workers, struct btrfs_work *work) 665 669 { 666 670 struct btrfs_worker_thread *worker; 667 671 unsigned long flags; ··· 669 673 670 674 /* don't requeue something 
already on a list */ 671 675 if (test_and_set_bit(WORK_QUEUED_BIT, &work->flags)) 672 - goto out; 676 + return; 673 677 674 678 worker = find_worker(workers); 675 679 if (workers->ordered) { ··· 708 712 if (wake) 709 713 wake_up_process(worker->task); 710 714 spin_unlock_irqrestore(&worker->lock, flags); 711 - 712 - out: 713 - return 0; 714 715 }
+2 -2
fs/btrfs/async-thread.h
··· 109 109 char *name; 110 110 }; 111 111 112 - int btrfs_queue_worker(struct btrfs_workers *workers, struct btrfs_work *work); 113 - int btrfs_start_workers(struct btrfs_workers *workers, int num_workers); 112 + void btrfs_queue_worker(struct btrfs_workers *workers, struct btrfs_work *work); 113 + int btrfs_start_workers(struct btrfs_workers *workers); 114 114 int btrfs_stop_workers(struct btrfs_workers *workers); 115 115 void btrfs_init_workers(struct btrfs_workers *workers, char *name, int max, 116 116 struct btrfs_workers *async_starter);
+5 -1
fs/btrfs/ctree.h
··· 2369 2369 int btrfs_block_rsv_refill(struct btrfs_root *root, 2370 2370 struct btrfs_block_rsv *block_rsv, 2371 2371 u64 min_reserved); 2372 + int btrfs_block_rsv_refill_noflush(struct btrfs_root *root, 2373 + struct btrfs_block_rsv *block_rsv, 2374 + u64 min_reserved); 2372 2375 int btrfs_block_rsv_migrate(struct btrfs_block_rsv *src_rsv, 2373 2376 struct btrfs_block_rsv *dst_rsv, 2374 2377 u64 num_bytes); ··· 2692 2689 int btrfs_readpage(struct file *file, struct page *page); 2693 2690 void btrfs_evict_inode(struct inode *inode); 2694 2691 int btrfs_write_inode(struct inode *inode, struct writeback_control *wbc); 2695 - void btrfs_dirty_inode(struct inode *inode, int flags); 2692 + int btrfs_dirty_inode(struct inode *inode); 2693 + int btrfs_update_time(struct file *file); 2696 2694 struct inode *btrfs_alloc_inode(struct super_block *sb); 2697 2695 void btrfs_destroy_inode(struct inode *inode); 2698 2696 int btrfs_drop_inode(struct inode *inode);
+2 -2
fs/btrfs/delayed-inode.c
··· 640 640 * Now if src_rsv == delalloc_block_rsv we'll let it just steal since 641 641 * we're accounted for. 642 642 */ 643 - if (!trans->bytes_reserved && 644 - src_rsv != &root->fs_info->delalloc_block_rsv) { 643 + if (!src_rsv || (!trans->bytes_reserved && 644 + src_rsv != &root->fs_info->delalloc_block_rsv)) { 645 645 ret = btrfs_block_rsv_add_noflush(root, dst_rsv, num_bytes); 646 646 /* 647 647 * Since we're under a transaction reserve_metadata_bytes could
+21 -13
fs/btrfs/disk-io.c
··· 2194 2194 fs_info->endio_meta_write_workers.idle_thresh = 2; 2195 2195 fs_info->readahead_workers.idle_thresh = 2; 2196 2196 2197 - btrfs_start_workers(&fs_info->workers, 1); 2198 - btrfs_start_workers(&fs_info->generic_worker, 1); 2199 - btrfs_start_workers(&fs_info->submit_workers, 1); 2200 - btrfs_start_workers(&fs_info->delalloc_workers, 1); 2201 - btrfs_start_workers(&fs_info->fixup_workers, 1); 2202 - btrfs_start_workers(&fs_info->endio_workers, 1); 2203 - btrfs_start_workers(&fs_info->endio_meta_workers, 1); 2204 - btrfs_start_workers(&fs_info->endio_meta_write_workers, 1); 2205 - btrfs_start_workers(&fs_info->endio_write_workers, 1); 2206 - btrfs_start_workers(&fs_info->endio_freespace_worker, 1); 2207 - btrfs_start_workers(&fs_info->delayed_workers, 1); 2208 - btrfs_start_workers(&fs_info->caching_workers, 1); 2209 - btrfs_start_workers(&fs_info->readahead_workers, 1); 2197 + /* 2198 + * btrfs_start_workers can really only fail because of ENOMEM so just 2199 + * return -ENOMEM if any of these fail. 
2200 + */ 2201 + ret = btrfs_start_workers(&fs_info->workers); 2202 + ret |= btrfs_start_workers(&fs_info->generic_worker); 2203 + ret |= btrfs_start_workers(&fs_info->submit_workers); 2204 + ret |= btrfs_start_workers(&fs_info->delalloc_workers); 2205 + ret |= btrfs_start_workers(&fs_info->fixup_workers); 2206 + ret |= btrfs_start_workers(&fs_info->endio_workers); 2207 + ret |= btrfs_start_workers(&fs_info->endio_meta_workers); 2208 + ret |= btrfs_start_workers(&fs_info->endio_meta_write_workers); 2209 + ret |= btrfs_start_workers(&fs_info->endio_write_workers); 2210 + ret |= btrfs_start_workers(&fs_info->endio_freespace_worker); 2211 + ret |= btrfs_start_workers(&fs_info->delayed_workers); 2212 + ret |= btrfs_start_workers(&fs_info->caching_workers); 2213 + ret |= btrfs_start_workers(&fs_info->readahead_workers); 2214 + if (ret) { 2215 + ret = -ENOMEM; 2216 + goto fail_sb_buffer; 2217 + } 2210 2218 2211 2219 fs_info->bdi.ra_pages *= btrfs_super_num_devices(disk_super); 2212 2220 fs_info->bdi.ra_pages = max(fs_info->bdi.ra_pages,
+99 -72
fs/btrfs/extent-tree.c
··· 2822 2822 btrfs_release_path(path); 2823 2823 out: 2824 2824 spin_lock(&block_group->lock); 2825 - if (!ret) 2825 + if (!ret && dcs == BTRFS_DC_SETUP) 2826 2826 block_group->cache_generation = trans->transid; 2827 2827 block_group->disk_cache_state = dcs; 2828 2828 spin_unlock(&block_group->lock); ··· 3888 3888 return ret; 3889 3889 } 3890 3890 3891 - int btrfs_block_rsv_refill(struct btrfs_root *root, 3892 - struct btrfs_block_rsv *block_rsv, 3893 - u64 min_reserved) 3891 + static inline int __btrfs_block_rsv_refill(struct btrfs_root *root, 3892 + struct btrfs_block_rsv *block_rsv, 3893 + u64 min_reserved, int flush) 3894 3894 { 3895 3895 u64 num_bytes = 0; 3896 3896 int ret = -ENOSPC; ··· 3909 3909 if (!ret) 3910 3910 return 0; 3911 3911 3912 - ret = reserve_metadata_bytes(root, block_rsv, num_bytes, 1); 3912 + ret = reserve_metadata_bytes(root, block_rsv, num_bytes, flush); 3913 3913 if (!ret) { 3914 3914 block_rsv_add_bytes(block_rsv, num_bytes, 0); 3915 3915 return 0; 3916 3916 } 3917 3917 3918 3918 return ret; 3919 + } 3920 + 3921 + int btrfs_block_rsv_refill(struct btrfs_root *root, 3922 + struct btrfs_block_rsv *block_rsv, 3923 + u64 min_reserved) 3924 + { 3925 + return __btrfs_block_rsv_refill(root, block_rsv, min_reserved, 1); 3926 + } 3927 + 3928 + int btrfs_block_rsv_refill_noflush(struct btrfs_root *root, 3929 + struct btrfs_block_rsv *block_rsv, 3930 + u64 min_reserved) 3931 + { 3932 + return __btrfs_block_rsv_refill(root, block_rsv, min_reserved, 0); 3919 3933 } 3920 3934 3921 3935 int btrfs_block_rsv_migrate(struct btrfs_block_rsv *src_rsv, ··· 4204 4190 struct btrfs_root *root = BTRFS_I(inode)->root; 4205 4191 struct btrfs_block_rsv *block_rsv = &root->fs_info->delalloc_block_rsv; 4206 4192 u64 to_reserve = 0; 4193 + u64 csum_bytes; 4207 4194 unsigned nr_extents = 0; 4195 + int extra_reserve = 0; 4208 4196 int flush = 1; 4209 4197 int ret; 4210 4198 4199 + /* Need to be holding the i_mutex here if we aren't free space cache */ 4211 4200 if 
(btrfs_is_free_space_inode(root, inode)) 4212 4201 flush = 0; 4202 + else 4203 + WARN_ON(!mutex_is_locked(&inode->i_mutex)); 4213 4204 4214 4205 if (flush && btrfs_transaction_in_commit(root->fs_info)) 4215 4206 schedule_timeout(1); ··· 4225 4206 BTRFS_I(inode)->outstanding_extents++; 4226 4207 4227 4208 if (BTRFS_I(inode)->outstanding_extents > 4228 - BTRFS_I(inode)->reserved_extents) { 4209 + BTRFS_I(inode)->reserved_extents) 4229 4210 nr_extents = BTRFS_I(inode)->outstanding_extents - 4230 4211 BTRFS_I(inode)->reserved_extents; 4231 - BTRFS_I(inode)->reserved_extents += nr_extents; 4232 - } 4233 4212 4234 4213 /* 4235 4214 * Add an item to reserve for updating the inode when we complete the ··· 4235 4218 */ 4236 4219 if (!BTRFS_I(inode)->delalloc_meta_reserved) { 4237 4220 nr_extents++; 4238 - BTRFS_I(inode)->delalloc_meta_reserved = 1; 4221 + extra_reserve = 1; 4239 4222 } 4240 4223 4241 4224 to_reserve = btrfs_calc_trans_metadata_size(root, nr_extents); 4242 4225 to_reserve += calc_csum_metadata_size(inode, num_bytes, 1); 4226 + csum_bytes = BTRFS_I(inode)->csum_bytes; 4243 4227 spin_unlock(&BTRFS_I(inode)->lock); 4244 4228 4245 4229 ret = reserve_metadata_bytes(root, block_rsv, to_reserve, flush); ··· 4250 4232 4251 4233 spin_lock(&BTRFS_I(inode)->lock); 4252 4234 dropped = drop_outstanding_extent(inode); 4253 - to_free = calc_csum_metadata_size(inode, num_bytes, 0); 4254 - spin_unlock(&BTRFS_I(inode)->lock); 4255 - to_free += btrfs_calc_trans_metadata_size(root, dropped); 4256 - 4257 4235 /* 4258 - * Somebody could have come in and twiddled with the 4259 - * reservation, so if we have to free more than we would have 4260 - * reserved from this reservation go ahead and release those 4261 - * bytes. 4236 + * If the inodes csum_bytes is the same as the original 4237 + * csum_bytes then we know we haven't raced with any free()ers 4238 + * so we can just reduce our inodes csum bytes and carry on. 
4239 + * Otherwise we have to do the normal free thing to account for 4240 + * the case that the free side didn't free up its reserve 4241 + * because of this outstanding reservation. 4262 4242 */ 4263 - to_free -= to_reserve; 4243 + if (BTRFS_I(inode)->csum_bytes == csum_bytes) 4244 + calc_csum_metadata_size(inode, num_bytes, 0); 4245 + else 4246 + to_free = calc_csum_metadata_size(inode, num_bytes, 0); 4247 + spin_unlock(&BTRFS_I(inode)->lock); 4248 + if (dropped) 4249 + to_free += btrfs_calc_trans_metadata_size(root, dropped); 4250 + 4264 4251 if (to_free) 4265 4252 btrfs_block_rsv_release(root, block_rsv, to_free); 4266 4253 return ret; 4267 4254 } 4255 + 4256 + spin_lock(&BTRFS_I(inode)->lock); 4257 + if (extra_reserve) { 4258 + BTRFS_I(inode)->delalloc_meta_reserved = 1; 4259 + nr_extents--; 4260 + } 4261 + BTRFS_I(inode)->reserved_extents += nr_extents; 4262 + spin_unlock(&BTRFS_I(inode)->lock); 4268 4263 4269 4264 block_rsv_add_bytes(block_rsv, to_reserve, 1); 4270 4265 ··· 5124 5093 struct btrfs_root *root = orig_root->fs_info->extent_root; 5125 5094 struct btrfs_free_cluster *last_ptr = NULL; 5126 5095 struct btrfs_block_group_cache *block_group = NULL; 5096 + struct btrfs_block_group_cache *used_block_group; 5127 5097 int empty_cluster = 2 * 1024 * 1024; 5128 5098 int allowed_chunk_alloc = 0; 5129 5099 int done_chunk_alloc = 0; 5130 5100 struct btrfs_space_info *space_info; 5131 - int last_ptr_loop = 0; 5132 5101 int loop = 0; 5133 5102 int index = 0; 5134 5103 int alloc_type = (data & BTRFS_BLOCK_GROUP_DATA) ? ··· 5190 5159 ideal_cache: 5191 5160 block_group = btrfs_lookup_block_group(root->fs_info, 5192 5161 search_start); 5162 + used_block_group = block_group; 5193 5163 /* 5194 5164 * we don't want to use the block group if it doesn't match our 5195 5165 * allocation bits, or if its not cached. 
··· 5228 5196 u64 offset; 5229 5197 int cached; 5230 5198 5199 + used_block_group = block_group; 5231 5200 btrfs_get_block_group(block_group); 5232 5201 search_start = block_group->key.objectid; 5233 5202 ··· 5298 5265 spin_lock(&block_group->free_space_ctl->tree_lock); 5299 5266 if (cached && 5300 5267 block_group->free_space_ctl->free_space < 5301 - num_bytes + empty_size) { 5268 + num_bytes + empty_cluster + empty_size) { 5302 5269 spin_unlock(&block_group->free_space_ctl->tree_lock); 5303 5270 goto loop; 5304 5271 } 5305 5272 spin_unlock(&block_group->free_space_ctl->tree_lock); 5306 5273 5307 5274 /* 5308 - * Ok we want to try and use the cluster allocator, so lets look 5309 - * there, unless we are on LOOP_NO_EMPTY_SIZE, since we will 5310 - * have tried the cluster allocator plenty of times at this 5311 - * point and not have found anything, so we are likely way too 5312 - * fragmented for the clustering stuff to find anything, so lets 5313 - * just skip it and let the allocator find whatever block it can 5314 - * find 5275 + * Ok we want to try and use the cluster allocator, so 5276 + * lets look there 5315 5277 */ 5316 - if (last_ptr && loop < LOOP_NO_EMPTY_SIZE) { 5278 + if (last_ptr) { 5317 5279 /* 5318 5280 * the refill lock keeps out other 5319 5281 * people trying to start a new cluster 5320 5282 */ 5321 5283 spin_lock(&last_ptr->refill_lock); 5322 - if (last_ptr->block_group && 5323 - (last_ptr->block_group->ro || 5324 - !block_group_bits(last_ptr->block_group, data))) { 5325 - offset = 0; 5284 + used_block_group = last_ptr->block_group; 5285 + if (used_block_group != block_group && 5286 + (!used_block_group || 5287 + used_block_group->ro || 5288 + !block_group_bits(used_block_group, data))) { 5289 + used_block_group = block_group; 5326 5290 goto refill_cluster; 5327 5291 } 5328 5292 5329 - offset = btrfs_alloc_from_cluster(block_group, last_ptr, 5330 - num_bytes, search_start); 5293 + if (used_block_group != block_group) 5294 + 
btrfs_get_block_group(used_block_group); 5295 + 5296 + offset = btrfs_alloc_from_cluster(used_block_group, 5297 + last_ptr, num_bytes, used_block_group->key.objectid); 5331 5298 if (offset) { 5332 5299 /* we have a block, we're done */ 5333 5300 spin_unlock(&last_ptr->refill_lock); 5334 5301 goto checks; 5335 5302 } 5336 5303 5337 - spin_lock(&last_ptr->lock); 5338 - /* 5339 - * whoops, this cluster doesn't actually point to 5340 - * this block group. Get a ref on the block 5341 - * group is does point to and try again 5342 - */ 5343 - if (!last_ptr_loop && last_ptr->block_group && 5344 - last_ptr->block_group != block_group && 5345 - index <= 5346 - get_block_group_index(last_ptr->block_group)) { 5347 - 5348 - btrfs_put_block_group(block_group); 5349 - block_group = last_ptr->block_group; 5350 - btrfs_get_block_group(block_group); 5351 - spin_unlock(&last_ptr->lock); 5352 - spin_unlock(&last_ptr->refill_lock); 5353 - 5354 - last_ptr_loop = 1; 5355 - search_start = block_group->key.objectid; 5356 - /* 5357 - * we know this block group is properly 5358 - * in the list because 5359 - * btrfs_remove_block_group, drops the 5360 - * cluster before it removes the block 5361 - * group from the list 5362 - */ 5363 - goto have_block_group; 5304 + WARN_ON(last_ptr->block_group != used_block_group); 5305 + if (used_block_group != block_group) { 5306 + btrfs_put_block_group(used_block_group); 5307 + used_block_group = block_group; 5364 5308 } 5365 - spin_unlock(&last_ptr->lock); 5366 5309 refill_cluster: 5310 + BUG_ON(used_block_group != block_group); 5311 + /* If we are on LOOP_NO_EMPTY_SIZE, we can't 5312 + * set up a new clusters, so lets just skip it 5313 + * and let the allocator find whatever block 5314 + * it can find. If we reach this point, we 5315 + * will have tried the cluster allocator 5316 + * plenty of times and not have found 5317 + * anything, so we are likely way too 5318 + * fragmented for the clustering stuff to find 5319 + * anything. 
*/ 5320 + if (loop >= LOOP_NO_EMPTY_SIZE) { 5321 + spin_unlock(&last_ptr->refill_lock); 5322 + goto unclustered_alloc; 5323 + } 5324 + 5367 5325 /* 5368 5326 * this cluster didn't work out, free it and 5369 5327 * start over 5370 5328 */ 5371 5329 btrfs_return_cluster_to_free_space(NULL, last_ptr); 5372 5330 5373 - last_ptr_loop = 0; 5374 - 5375 5331 /* allocate a cluster in this block group */ 5376 5332 ret = btrfs_find_space_cluster(trans, root, 5377 5333 block_group, last_ptr, 5378 - offset, num_bytes, 5334 + search_start, num_bytes, 5379 5335 empty_cluster + empty_size); 5380 5336 if (ret == 0) { 5381 5337 /* ··· 5400 5378 goto loop; 5401 5379 } 5402 5380 5381 + unclustered_alloc: 5403 5382 offset = btrfs_find_space_for_alloc(block_group, search_start, 5404 5383 num_bytes, empty_size); 5405 5384 /* ··· 5427 5404 search_start = stripe_align(root, offset); 5428 5405 /* move on to the next group */ 5429 5406 if (search_start + num_bytes >= search_end) { 5430 - btrfs_add_free_space(block_group, offset, num_bytes); 5407 + btrfs_add_free_space(used_block_group, offset, num_bytes); 5431 5408 goto loop; 5432 5409 } 5433 5410 5434 5411 /* move on to the next group */ 5435 5412 if (search_start + num_bytes > 5436 - block_group->key.objectid + block_group->key.offset) { 5437 - btrfs_add_free_space(block_group, offset, num_bytes); 5413 + used_block_group->key.objectid + used_block_group->key.offset) { 5414 + btrfs_add_free_space(used_block_group, offset, num_bytes); 5438 5415 goto loop; 5439 5416 } 5440 5417 ··· 5442 5419 ins->offset = num_bytes; 5443 5420 5444 5421 if (offset < search_start) 5445 - btrfs_add_free_space(block_group, offset, 5422 + btrfs_add_free_space(used_block_group, offset, 5446 5423 search_start - offset); 5447 5424 BUG_ON(offset > search_start); 5448 5425 5449 - ret = btrfs_update_reserved_bytes(block_group, num_bytes, 5426 + ret = btrfs_update_reserved_bytes(used_block_group, num_bytes, 5450 5427 alloc_type); 5451 5428 if (ret == -EAGAIN) { 5452 - 
btrfs_add_free_space(block_group, offset, num_bytes); 5429 + btrfs_add_free_space(used_block_group, offset, num_bytes); 5453 5430 goto loop; 5454 5431 } 5455 5432 ··· 5458 5435 ins->offset = num_bytes; 5459 5436 5460 5437 if (offset < search_start) 5461 - btrfs_add_free_space(block_group, offset, 5438 + btrfs_add_free_space(used_block_group, offset, 5462 5439 search_start - offset); 5463 5440 BUG_ON(offset > search_start); 5441 + if (used_block_group != block_group) 5442 + btrfs_put_block_group(used_block_group); 5464 5443 btrfs_put_block_group(block_group); 5465 5444 break; 5466 5445 loop: 5467 5446 failed_cluster_refill = false; 5468 5447 failed_alloc = false; 5469 5448 BUG_ON(index != get_block_group_index(block_group)); 5449 + if (used_block_group != block_group) 5450 + btrfs_put_block_group(used_block_group); 5470 5451 btrfs_put_block_group(block_group); 5471 5452 } 5472 5453 up_read(&space_info->groups_sem);
+36 -15
fs/btrfs/extent_io.c
··· 935 935 node = tree_search(tree, start); 936 936 if (!node) { 937 937 prealloc = alloc_extent_state_atomic(prealloc); 938 - if (!prealloc) 939 - return -ENOMEM; 938 + if (!prealloc) { 939 + err = -ENOMEM; 940 + goto out; 941 + } 940 942 err = insert_state(tree, prealloc, start, end, &bits); 941 943 prealloc = NULL; 942 944 BUG_ON(err == -EEXIST); ··· 994 992 */ 995 993 if (state->start < start) { 996 994 prealloc = alloc_extent_state_atomic(prealloc); 997 - if (!prealloc) 998 - return -ENOMEM; 995 + if (!prealloc) { 996 + err = -ENOMEM; 997 + goto out; 998 + } 999 999 err = split_state(tree, state, prealloc, start); 1000 1000 BUG_ON(err == -EEXIST); 1001 1001 prealloc = NULL; ··· 1028 1024 this_end = last_start - 1; 1029 1025 1030 1026 prealloc = alloc_extent_state_atomic(prealloc); 1031 - if (!prealloc) 1032 - return -ENOMEM; 1027 + if (!prealloc) { 1028 + err = -ENOMEM; 1029 + goto out; 1030 + } 1033 1031 1034 1032 /* 1035 1033 * Avoid to free 'prealloc' if it can be merged with ··· 1057 1051 */ 1058 1052 if (state->start <= end && state->end > end) { 1059 1053 prealloc = alloc_extent_state_atomic(prealloc); 1060 - if (!prealloc) 1061 - return -ENOMEM; 1054 + if (!prealloc) { 1055 + err = -ENOMEM; 1056 + goto out; 1057 + } 1062 1058 1063 1059 err = split_state(tree, state, prealloc, end + 1); 1064 1060 BUG_ON(err == -EEXIST); ··· 2295 2287 if (!uptodate) { 2296 2288 int failed_mirror; 2297 2289 failed_mirror = (int)(unsigned long)bio->bi_bdev; 2298 - if (tree->ops && tree->ops->readpage_io_failed_hook) 2299 - ret = tree->ops->readpage_io_failed_hook( 2300 - bio, page, start, end, 2301 - failed_mirror, state); 2302 - else 2303 - ret = bio_readpage_error(bio, page, start, end, 2304 - failed_mirror, NULL); 2290 + /* 2291 + * The generic bio_readpage_error handles errors the 2292 + * following way: If possible, new read requests are 2293 + * created and submitted and will end up in 2294 + * end_bio_extent_readpage as well (if we're lucky, not 2295 + * in the 
!uptodate case). In that case it returns 0 and 2296 + * we just go on with the next page in our bio. If it 2297 + * can't handle the error it will return -EIO and we 2298 + * remain responsible for that page. 2299 + */ 2300 + ret = bio_readpage_error(bio, page, start, end, 2301 + failed_mirror, NULL); 2305 2302 if (ret == 0) { 2303 + error_handled: 2306 2304 uptodate = 2307 2305 test_bit(BIO_UPTODATE, &bio->bi_flags); 2308 2306 if (err) 2309 2307 uptodate = 0; 2310 2308 uncache_state(&cached); 2311 2309 continue; 2310 + } 2311 + if (tree->ops && tree->ops->readpage_io_failed_hook) { 2312 + ret = tree->ops->readpage_io_failed_hook( 2313 + bio, page, start, end, 2314 + failed_mirror, state); 2315 + if (ret == 0) 2316 + goto error_handled; 2312 2317 } 2313 2318 } 2314 2319
+7 -1
fs/btrfs/file.c
··· 1167 1167 nrptrs = min((iov_iter_count(i) + PAGE_CACHE_SIZE - 1) / 1168 1168 PAGE_CACHE_SIZE, PAGE_CACHE_SIZE / 1169 1169 (sizeof(struct page *))); 1170 + nrptrs = min(nrptrs, current->nr_dirtied_pause - current->nr_dirtied); 1171 + nrptrs = max(nrptrs, 8); 1170 1172 pages = kmalloc(nrptrs * sizeof(struct page *), GFP_KERNEL); 1171 1173 if (!pages) 1172 1174 return -ENOMEM; ··· 1389 1387 goto out; 1390 1388 } 1391 1389 1392 - file_update_time(file); 1390 + err = btrfs_update_time(file); 1391 + if (err) { 1392 + mutex_unlock(&inode->i_mutex); 1393 + goto out; 1394 + } 1393 1395 BTRFS_I(inode)->sequence++; 1394 1396 1395 1397 start_pos = round_down(pos, root->sectorsize);
+2
fs/btrfs/free-space-cache.c
··· 1470 1470 { 1471 1471 info->offset = offset_to_bitmap(ctl, offset); 1472 1472 info->bytes = 0; 1473 + INIT_LIST_HEAD(&info->list); 1473 1474 link_free_space(ctl, info); 1474 1475 ctl->total_bitmaps++; 1475 1476 ··· 2320 2319 2321 2320 if (!found) { 2322 2321 start = i; 2322 + cluster->max_size = 0; 2323 2323 found = true; 2324 2324 } 2325 2325
+148 -34
fs/btrfs/inode.c
··· 38 38 #include <linux/falloc.h> 39 39 #include <linux/slab.h> 40 40 #include <linux/ratelimit.h> 41 + #include <linux/mount.h> 41 42 #include "compat.h" 42 43 #include "ctree.h" 43 44 #include "disk-io.h" ··· 2032 2031 /* insert an orphan item to track this unlinked/truncated file */ 2033 2032 if (insert >= 1) { 2034 2033 ret = btrfs_insert_orphan_item(trans, root, btrfs_ino(inode)); 2035 - BUG_ON(ret); 2034 + BUG_ON(ret && ret != -EEXIST); 2036 2035 } 2037 2036 2038 2037 /* insert an orphan item to track subvolume contains orphan files */ ··· 2159 2158 if (ret && ret != -ESTALE) 2160 2159 goto out; 2161 2160 2161 + if (ret == -ESTALE && root == root->fs_info->tree_root) { 2162 + struct btrfs_root *dead_root; 2163 + struct btrfs_fs_info *fs_info = root->fs_info; 2164 + int is_dead_root = 0; 2165 + 2166 + /* 2167 + * this is an orphan in the tree root. Currently these 2168 + * could come from 2 sources: 2169 + * a) a snapshot deletion in progress 2170 + * b) a free space cache inode 2171 + * We need to distinguish those two, as the snapshot 2172 + * orphan must not get deleted. 2173 + * find_dead_roots already ran before us, so if this 2174 + * is a snapshot deletion, we should find the root 2175 + * in the dead_roots list 2176 + */ 2177 + spin_lock(&fs_info->trans_lock); 2178 + list_for_each_entry(dead_root, &fs_info->dead_roots, 2179 + root_list) { 2180 + if (dead_root->root_key.objectid == 2181 + found_key.objectid) { 2182 + is_dead_root = 1; 2183 + break; 2184 + } 2185 + } 2186 + spin_unlock(&fs_info->trans_lock); 2187 + if (is_dead_root) { 2188 + /* prevent this orphan from being found again */ 2189 + key.offset = found_key.objectid - 1; 2190 + continue; 2191 + } 2192 + } 2162 2193 /* 2163 2194 * Inode is already gone but the orphan item is still there, 2164 2195 * kill the orphan item. 
··· 2224 2191 continue; 2225 2192 } 2226 2193 nr_truncate++; 2194 + /* 2195 + * Need to hold the imutex for reservation purposes, not 2196 + * a huge deal here but I have a WARN_ON in 2197 + * btrfs_delalloc_reserve_space to catch offenders. 2198 + */ 2199 + mutex_lock(&inode->i_mutex); 2227 2200 ret = btrfs_truncate(inode); 2201 + mutex_unlock(&inode->i_mutex); 2228 2202 } else { 2229 2203 nr_unlink++; 2230 2204 } ··· 3367 3327 u64 hint_byte = 0; 3368 3328 hole_size = last_byte - cur_offset; 3369 3329 3370 - trans = btrfs_start_transaction(root, 2); 3330 + trans = btrfs_start_transaction(root, 3); 3371 3331 if (IS_ERR(trans)) { 3372 3332 err = PTR_ERR(trans); 3373 3333 break; ··· 3377 3337 cur_offset + hole_size, 3378 3338 &hint_byte, 1); 3379 3339 if (err) { 3340 + btrfs_update_inode(trans, root, inode); 3380 3341 btrfs_end_transaction(trans, root); 3381 3342 break; 3382 3343 } ··· 3387 3346 0, hole_size, 0, hole_size, 3388 3347 0, 0, 0); 3389 3348 if (err) { 3349 + btrfs_update_inode(trans, root, inode); 3390 3350 btrfs_end_transaction(trans, root); 3391 3351 break; 3392 3352 } ··· 3395 3353 btrfs_drop_extent_cache(inode, hole_start, 3396 3354 last_byte - 1, 0); 3397 3355 3356 + btrfs_update_inode(trans, root, inode); 3398 3357 btrfs_end_transaction(trans, root); 3399 3358 } 3400 3359 free_extent_map(em); ··· 3413 3370 3414 3371 static int btrfs_setsize(struct inode *inode, loff_t newsize) 3415 3372 { 3373 + struct btrfs_root *root = BTRFS_I(inode)->root; 3374 + struct btrfs_trans_handle *trans; 3416 3375 loff_t oldsize = i_size_read(inode); 3417 3376 int ret; 3418 3377 ··· 3422 3377 return 0; 3423 3378 3424 3379 if (newsize > oldsize) { 3425 - i_size_write(inode, newsize); 3426 - btrfs_ordered_update_i_size(inode, i_size_read(inode), NULL); 3427 3380 truncate_pagecache(inode, oldsize, newsize); 3428 3381 ret = btrfs_cont_expand(inode, oldsize, newsize); 3429 - if (ret) { 3430 - btrfs_setsize(inode, oldsize); 3382 + if (ret) 3431 3383 return ret; 3432 - } 3433 
3384 3434 - mark_inode_dirty(inode); 3385 + trans = btrfs_start_transaction(root, 1); 3386 + if (IS_ERR(trans)) 3387 + return PTR_ERR(trans); 3388 + 3389 + i_size_write(inode, newsize); 3390 + btrfs_ordered_update_i_size(inode, i_size_read(inode), NULL); 3391 + ret = btrfs_update_inode(trans, root, inode); 3392 + btrfs_end_transaction_throttle(trans, root); 3435 3393 } else { 3436 3394 3437 3395 /* ··· 3474 3426 3475 3427 if (attr->ia_valid) { 3476 3428 setattr_copy(inode, attr); 3477 - mark_inode_dirty(inode); 3429 + err = btrfs_dirty_inode(inode); 3478 3430 3479 - if (attr->ia_valid & ATTR_MODE) 3431 + if (!err && attr->ia_valid & ATTR_MODE) 3480 3432 err = btrfs_acl_chmod(inode); 3481 3433 } 3482 3434 ··· 3538 3490 * doing the truncate. 3539 3491 */ 3540 3492 while (1) { 3541 - ret = btrfs_block_rsv_refill(root, rsv, min_size); 3493 + ret = btrfs_block_rsv_refill_noflush(root, rsv, min_size); 3542 3494 3543 3495 /* 3544 3496 * Try and steal from the global reserve since we will ··· 4252 4204 * FIXME, needs more benchmarking...there are no reasons other than performance 4253 4205 * to keep or drop this code. 
4254 4206 */ 4255 - void btrfs_dirty_inode(struct inode *inode, int flags) 4207 + int btrfs_dirty_inode(struct inode *inode) 4256 4208 { 4257 4209 struct btrfs_root *root = BTRFS_I(inode)->root; 4258 4210 struct btrfs_trans_handle *trans; 4259 4211 int ret; 4260 4212 4261 4213 if (BTRFS_I(inode)->dummy_inode) 4262 - return; 4214 + return 0; 4263 4215 4264 4216 trans = btrfs_join_transaction(root); 4265 - BUG_ON(IS_ERR(trans)); 4217 + if (IS_ERR(trans)) 4218 + return PTR_ERR(trans); 4266 4219 4267 4220 ret = btrfs_update_inode(trans, root, inode); 4268 4221 if (ret && ret == -ENOSPC) { 4269 4222 /* whoops, let's try again with the full transaction */ 4270 4223 btrfs_end_transaction(trans, root); 4271 4224 trans = btrfs_start_transaction(root, 1); 4272 - if (IS_ERR(trans)) { 4273 - printk_ratelimited(KERN_ERR "btrfs: fail to " 4274 - "dirty inode %llu error %ld\n", 4275 - (unsigned long long)btrfs_ino(inode), 4276 - PTR_ERR(trans)); 4277 - return; 4278 - } 4225 + if (IS_ERR(trans)) 4226 + return PTR_ERR(trans); 4279 4227 4280 4228 ret = btrfs_update_inode(trans, root, inode); 4281 - if (ret) { 4282 - printk_ratelimited(KERN_ERR "btrfs: fail to " 4283 - "dirty inode %llu error %d\n", 4284 - (unsigned long long)btrfs_ino(inode), 4285 - ret); 4286 - } 4287 4229 } 4288 4230 btrfs_end_transaction(trans, root); 4289 4231 if (BTRFS_I(inode)->delayed_node) 4290 4232 btrfs_balance_delayed_items(root); 4233 + 4234 + return ret; 4235 + } 4236 + 4237 + /* 4238 + * This is a copy of file_update_time. We need this so we can return error on 4239 + * ENOSPC for updating the inode in the case of file write and mmap writes. 
4240 + */ 4241 + int btrfs_update_time(struct file *file) 4242 + { 4243 + struct inode *inode = file->f_path.dentry->d_inode; 4244 + struct timespec now; 4245 + int ret; 4246 + enum { S_MTIME = 1, S_CTIME = 2, S_VERSION = 4 } sync_it = 0; 4247 + 4248 + /* First try to exhaust all avenues to not sync */ 4249 + if (IS_NOCMTIME(inode)) 4250 + return 0; 4251 + 4252 + now = current_fs_time(inode->i_sb); 4253 + if (!timespec_equal(&inode->i_mtime, &now)) 4254 + sync_it = S_MTIME; 4255 + 4256 + if (!timespec_equal(&inode->i_ctime, &now)) 4257 + sync_it |= S_CTIME; 4258 + 4259 + if (IS_I_VERSION(inode)) 4260 + sync_it |= S_VERSION; 4261 + 4262 + if (!sync_it) 4263 + return 0; 4264 + 4265 + /* Finally allowed to write? Takes lock. */ 4266 + if (mnt_want_write_file(file)) 4267 + return 0; 4268 + 4269 + /* Only change inode inside the lock region */ 4270 + if (sync_it & S_VERSION) 4271 + inode_inc_iversion(inode); 4272 + if (sync_it & S_CTIME) 4273 + inode->i_ctime = now; 4274 + if (sync_it & S_MTIME) 4275 + inode->i_mtime = now; 4276 + ret = btrfs_dirty_inode(inode); 4277 + if (!ret) 4278 + mark_inode_dirty_sync(inode); 4279 + mnt_drop_write(file->f_path.mnt); 4280 + return ret; 4291 4281 } 4292 4282 4293 4283 /* ··· 4641 4555 goto out_unlock; 4642 4556 } 4643 4557 4558 + /* 4559 + * If the active LSM wants to access the inode during 4560 + * d_instantiate it needs these. Smack checks to see 4561 + * if the filesystem supports xattrs by looking at the 4562 + * ops vector. 
4563 + */ 4564 + 4565 + inode->i_op = &btrfs_special_inode_operations; 4644 4566 err = btrfs_add_nondir(trans, dir, dentry, inode, 0, index); 4645 4567 if (err) 4646 4568 drop_inode = 1; 4647 4569 else { 4648 - inode->i_op = &btrfs_special_inode_operations; 4649 4570 init_special_inode(inode, inode->i_mode, rdev); 4650 4571 btrfs_update_inode(trans, root, inode); 4651 4572 } ··· 4706 4613 goto out_unlock; 4707 4614 } 4708 4615 4616 + /* 4617 + * If the active LSM wants to access the inode during 4618 + * d_instantiate it needs these. Smack checks to see 4619 + * if the filesystem supports xattrs by looking at the 4620 + * ops vector. 4621 + */ 4622 + inode->i_fop = &btrfs_file_operations; 4623 + inode->i_op = &btrfs_file_inode_operations; 4624 + 4709 4625 err = btrfs_add_nondir(trans, dir, dentry, inode, 0, index); 4710 4626 if (err) 4711 4627 drop_inode = 1; 4712 4628 else { 4713 4629 inode->i_mapping->a_ops = &btrfs_aops; 4714 4630 inode->i_mapping->backing_dev_info = &root->fs_info->bdi; 4715 - inode->i_fop = &btrfs_file_operations; 4716 - inode->i_op = &btrfs_file_inode_operations; 4717 4631 BTRFS_I(inode)->io_tree.ops = &btrfs_extent_io_ops; 4718 4632 } 4719 4633 out_unlock: ··· 6403 6303 u64 page_start; 6404 6304 u64 page_end; 6405 6305 6306 + /* Need this to keep space reservations serialized */ 6307 + mutex_lock(&inode->i_mutex); 6406 6308 ret = btrfs_delalloc_reserve_space(inode, PAGE_CACHE_SIZE); 6309 + mutex_unlock(&inode->i_mutex); 6310 + if (!ret) 6311 + ret = btrfs_update_time(vma->vm_file); 6407 6312 if (ret) { 6408 6313 if (ret == -ENOMEM) 6409 6314 ret = VM_FAULT_OOM; ··· 6620 6515 /* Just need the 1 for updating the inode */ 6621 6516 trans = btrfs_start_transaction(root, 1); 6622 6517 if (IS_ERR(trans)) { 6623 - err = PTR_ERR(trans); 6624 - goto out; 6518 + ret = err = PTR_ERR(trans); 6519 + trans = NULL; 6520 + break; 6625 6521 } 6626 6522 } 6627 6523 ··· 7182 7076 goto out_unlock; 7183 7077 } 7184 7078 7079 + /* 7080 + * If the active LSM wants 
to access the inode during 7081 + * d_instantiate it needs these. Smack checks to see 7082 + * if the filesystem supports xattrs by looking at the 7083 + * ops vector. 7084 + */ 7085 + inode->i_fop = &btrfs_file_operations; 7086 + inode->i_op = &btrfs_file_inode_operations; 7087 + 7185 7088 err = btrfs_add_nondir(trans, dir, dentry, inode, 0, index); 7186 7089 if (err) 7187 7090 drop_inode = 1; 7188 7091 else { 7189 7092 inode->i_mapping->a_ops = &btrfs_aops; 7190 7093 inode->i_mapping->backing_dev_info = &root->fs_info->bdi; 7191 - inode->i_fop = &btrfs_file_operations; 7192 - inode->i_op = &btrfs_file_inode_operations; 7193 7094 BTRFS_I(inode)->io_tree.ops = &btrfs_extent_io_ops; 7194 7095 } 7195 7096 if (drop_inode) ··· 7466 7353 .follow_link = page_follow_link_light, 7467 7354 .put_link = page_put_link, 7468 7355 .getattr = btrfs_getattr, 7356 + .setattr = btrfs_setattr, 7469 7357 .permission = btrfs_permission, 7470 7358 .setxattr = btrfs_setxattr, 7471 7359 .getxattr = btrfs_getxattr,
+5 -3
fs/btrfs/ioctl.c
··· 252 252 trans = btrfs_join_transaction(root); 253 253 BUG_ON(IS_ERR(trans)); 254 254 255 + btrfs_update_iflags(inode); 256 + inode->i_ctime = CURRENT_TIME; 255 257 ret = btrfs_update_inode(trans, root, inode); 256 258 BUG_ON(ret); 257 259 258 - btrfs_update_iflags(inode); 259 - inode->i_ctime = CURRENT_TIME; 260 260 btrfs_end_transaction(trans, root); 261 261 262 262 mnt_drop_write(file->f_path.mnt); ··· 858 858 return 0; 859 859 file_end = (isize - 1) >> PAGE_CACHE_SHIFT; 860 860 861 + mutex_lock(&inode->i_mutex); 861 862 ret = btrfs_delalloc_reserve_space(inode, 862 863 num_pages << PAGE_CACHE_SHIFT); 864 + mutex_unlock(&inode->i_mutex); 863 865 if (ret) 864 866 return ret; 865 867 again: ··· 1280 1278 } 1281 1279 ret = btrfs_grow_device(trans, device, new_size); 1282 1280 btrfs_commit_transaction(trans, root); 1283 - } else { 1281 + } else if (new_size < old_size) { 1284 1282 ret = btrfs_shrink_device(device, new_size); 1285 1283 } 1286 1284
+2
fs/btrfs/relocation.c
··· 2947 2947 index = (cluster->start - offset) >> PAGE_CACHE_SHIFT; 2948 2948 last_index = (cluster->end - offset) >> PAGE_CACHE_SHIFT; 2949 2949 while (index <= last_index) { 2950 + mutex_lock(&inode->i_mutex); 2950 2951 ret = btrfs_delalloc_reserve_metadata(inode, PAGE_CACHE_SIZE); 2952 + mutex_unlock(&inode->i_mutex); 2951 2953 if (ret) 2952 2954 goto out; 2953 2955
+11 -2
fs/btrfs/scrub.c
··· 256 256 btrfs_release_path(swarn->path); 257 257 258 258 ipath = init_ipath(4096, local_root, swarn->path); 259 + if (IS_ERR(ipath)) { 260 + ret = PTR_ERR(ipath); 261 + ipath = NULL; 262 + goto err; 263 + } 259 264 ret = paths_from_inode(inum, ipath); 260 265 261 266 if (ret < 0) ··· 1535 1530 static noinline_for_stack int scrub_workers_get(struct btrfs_root *root) 1536 1531 { 1537 1532 struct btrfs_fs_info *fs_info = root->fs_info; 1533 + int ret = 0; 1538 1534 1539 1535 mutex_lock(&fs_info->scrub_lock); 1540 1536 if (fs_info->scrub_workers_refcnt == 0) { 1541 1537 btrfs_init_workers(&fs_info->scrub_workers, "scrub", 1542 1538 fs_info->thread_pool_size, &fs_info->generic_worker); 1543 1539 fs_info->scrub_workers.idle_thresh = 4; 1544 - btrfs_start_workers(&fs_info->scrub_workers, 1); 1540 + ret = btrfs_start_workers(&fs_info->scrub_workers); 1541 + if (ret) 1542 + goto out; 1545 1543 } 1546 1544 ++fs_info->scrub_workers_refcnt; 1545 + out: 1547 1546 mutex_unlock(&fs_info->scrub_lock); 1548 1547 1549 - return 0; 1548 + return ret; 1550 1549 } 1551 1550 1552 1551 static noinline_for_stack void scrub_workers_put(struct btrfs_root *root)
+28 -10
fs/btrfs/super.c
··· 41 41 #include <linux/slab.h> 42 42 #include <linux/cleancache.h> 43 43 #include <linux/mnt_namespace.h> 44 + #include <linux/ratelimit.h> 44 45 #include "compat.h" 45 46 #include "delayed-inode.h" 46 47 #include "ctree.h" ··· 1054 1053 u64 avail_space; 1055 1054 u64 used_space; 1056 1055 u64 min_stripe_size; 1057 - int min_stripes = 1; 1056 + int min_stripes = 1, num_stripes = 1; 1058 1057 int i = 0, nr_devices; 1059 1058 int ret; 1060 1059 1061 - nr_devices = fs_info->fs_devices->rw_devices; 1060 + nr_devices = fs_info->fs_devices->open_devices; 1062 1061 BUG_ON(!nr_devices); 1063 1062 1064 1063 devices_info = kmalloc(sizeof(*devices_info) * nr_devices, ··· 1068 1067 1069 1068 /* calc min stripe number for data space alloction */ 1070 1069 type = btrfs_get_alloc_profile(root, 1); 1071 - if (type & BTRFS_BLOCK_GROUP_RAID0) 1070 + if (type & BTRFS_BLOCK_GROUP_RAID0) { 1072 1071 min_stripes = 2; 1073 - else if (type & BTRFS_BLOCK_GROUP_RAID1) 1072 + num_stripes = nr_devices; 1073 + } else if (type & BTRFS_BLOCK_GROUP_RAID1) { 1074 1074 min_stripes = 2; 1075 - else if (type & BTRFS_BLOCK_GROUP_RAID10) 1075 + num_stripes = 2; 1076 + } else if (type & BTRFS_BLOCK_GROUP_RAID10) { 1076 1077 min_stripes = 4; 1078 + num_stripes = 4; 1079 + } 1077 1080 1078 1081 if (type & BTRFS_BLOCK_GROUP_DUP) 1079 1082 min_stripe_size = 2 * BTRFS_STRIPE_LEN; 1080 1083 else 1081 1084 min_stripe_size = BTRFS_STRIPE_LEN; 1082 1085 1083 - list_for_each_entry(device, &fs_devices->alloc_list, dev_alloc_list) { 1084 - if (!device->in_fs_metadata) 1086 + list_for_each_entry(device, &fs_devices->devices, dev_list) { 1087 + if (!device->in_fs_metadata || !device->bdev) 1085 1088 continue; 1086 1089 1087 1090 avail_space = device->total_bytes - device->bytes_used; ··· 1146 1141 i = nr_devices - 1; 1147 1142 avail_space = 0; 1148 1143 while (nr_devices >= min_stripes) { 1144 + if (num_stripes > nr_devices) 1145 + num_stripes = nr_devices; 1146 + 1149 1147 if (devices_info[i].max_avail >= 
min_stripe_size) { 1150 1148 int j; 1151 1149 u64 alloc_size; 1152 1150 1153 - avail_space += devices_info[i].max_avail * min_stripes; 1151 + avail_space += devices_info[i].max_avail * num_stripes; 1154 1152 alloc_size = devices_info[i].max_avail; 1155 - for (j = i + 1 - min_stripes; j <= i; j++) 1153 + for (j = i + 1 - num_stripes; j <= i; j++) 1156 1154 devices_info[j].max_avail -= alloc_size; 1157 1155 } 1158 1156 i--; ··· 1272 1264 return 0; 1273 1265 } 1274 1266 1267 + static void btrfs_fs_dirty_inode(struct inode *inode, int flags) 1268 + { 1269 + int ret; 1270 + 1271 + ret = btrfs_dirty_inode(inode); 1272 + if (ret) 1273 + printk_ratelimited(KERN_ERR "btrfs: fail to dirty inode %Lu " 1274 + "error %d\n", btrfs_ino(inode), ret); 1275 + } 1276 + 1275 1277 static const struct super_operations btrfs_super_ops = { 1276 1278 .drop_inode = btrfs_drop_inode, 1277 1279 .evict_inode = btrfs_evict_inode, ··· 1289 1271 .sync_fs = btrfs_sync_fs, 1290 1272 .show_options = btrfs_show_options, 1291 1273 .write_inode = btrfs_write_inode, 1292 - .dirty_inode = btrfs_dirty_inode, 1274 + .dirty_inode = btrfs_fs_dirty_inode, 1293 1275 .alloc_inode = btrfs_alloc_inode, 1294 1276 .destroy_inode = btrfs_destroy_inode, 1295 1277 .statfs = btrfs_statfs,
+8 -2
fs/btrfs/volumes.c
··· 295 295 btrfs_requeue_work(&device->work); 296 296 goto done; 297 297 } 298 + /* unplug every 64 requests just for good measure */ 299 + if (batch_run % 64 == 0) { 300 + blk_finish_plug(&plug); 301 + blk_start_plug(&plug); 302 + sync_pending = 0; 303 + } 298 304 } 299 305 300 306 cond_resched(); ··· 1617 1611 if ((sb->s_flags & MS_RDONLY) && !root->fs_info->fs_devices->seeding) 1618 1612 return -EINVAL; 1619 1613 1620 - bdev = blkdev_get_by_path(device_path, FMODE_EXCL, 1614 + bdev = blkdev_get_by_path(device_path, FMODE_WRITE | FMODE_EXCL, 1621 1615 root->fs_info->bdev_holder); 1622 1616 if (IS_ERR(bdev)) 1623 1617 return PTR_ERR(bdev); ··· 3264 3258 */ 3265 3259 if (atomic_read(&bbio->error) > bbio->max_errors) { 3266 3260 err = -EIO; 3267 - } else if (err) { 3261 + } else { 3268 3262 /* 3269 3263 * this bio is actually up to date, we didn't 3270 3264 * go over the max number of errors
+4 -4
fs/ceph/addr.c
··· 87 87 snapc = ceph_get_snap_context(ci->i_snap_realm->cached_context); 88 88 89 89 /* dirty the head */ 90 - spin_lock(&inode->i_lock); 90 + spin_lock(&ci->i_ceph_lock); 91 91 if (ci->i_head_snapc == NULL) 92 92 ci->i_head_snapc = ceph_get_snap_context(snapc); 93 93 ++ci->i_wrbuffer_ref_head; ··· 100 100 ci->i_wrbuffer_ref-1, ci->i_wrbuffer_ref_head-1, 101 101 ci->i_wrbuffer_ref, ci->i_wrbuffer_ref_head, 102 102 snapc, snapc->seq, snapc->num_snaps); 103 - spin_unlock(&inode->i_lock); 103 + spin_unlock(&ci->i_ceph_lock); 104 104 105 105 /* now adjust page */ 106 106 spin_lock_irq(&mapping->tree_lock); ··· 391 391 struct ceph_snap_context *snapc = NULL; 392 392 struct ceph_cap_snap *capsnap = NULL; 393 393 394 - spin_lock(&inode->i_lock); 394 + spin_lock(&ci->i_ceph_lock); 395 395 list_for_each_entry(capsnap, &ci->i_cap_snaps, ci_item) { 396 396 dout(" cap_snap %p snapc %p has %d dirty pages\n", capsnap, 397 397 capsnap->context, capsnap->dirty_pages); ··· 407 407 dout(" head snapc %p has %d dirty pages\n", 408 408 snapc, ci->i_wrbuffer_ref_head); 409 409 } 410 - spin_unlock(&inode->i_lock); 410 + spin_unlock(&ci->i_ceph_lock); 411 411 return snapc; 412 412 } 413 413
+94 -93
fs/ceph/caps.c
··· 309 309 /* 310 310 * Find ceph_cap for given mds, if any. 311 311 * 312 - * Called with i_lock held. 312 + * Called with i_ceph_lock held. 313 313 */ 314 314 static struct ceph_cap *__get_cap_for_mds(struct ceph_inode_info *ci, int mds) 315 315 { ··· 332 332 { 333 333 struct ceph_cap *cap; 334 334 335 - spin_lock(&ci->vfs_inode.i_lock); 335 + spin_lock(&ci->i_ceph_lock); 336 336 cap = __get_cap_for_mds(ci, mds); 337 - spin_unlock(&ci->vfs_inode.i_lock); 337 + spin_unlock(&ci->i_ceph_lock); 338 338 return cap; 339 339 } 340 340 ··· 361 361 362 362 int ceph_get_cap_mds(struct inode *inode) 363 363 { 364 + struct ceph_inode_info *ci = ceph_inode(inode); 364 365 int mds; 365 - spin_lock(&inode->i_lock); 366 + spin_lock(&ci->i_ceph_lock); 366 367 mds = __ceph_get_cap_mds(ceph_inode(inode)); 367 - spin_unlock(&inode->i_lock); 368 + spin_unlock(&ci->i_ceph_lock); 368 369 return mds; 369 370 } 370 371 371 372 /* 372 - * Called under i_lock. 373 + * Called under i_ceph_lock. 373 374 */ 374 375 static void __insert_cap_node(struct ceph_inode_info *ci, 375 376 struct ceph_cap *new) ··· 416 415 * 417 416 * If I_FLUSH is set, leave the inode at the front of the list. 418 417 * 419 - * Caller holds i_lock 418 + * Caller holds i_ceph_lock 420 419 * -> we take mdsc->cap_delay_lock 421 420 */ 422 421 static void __cap_delay_requeue(struct ceph_mds_client *mdsc, ··· 458 457 /* 459 458 * Cancel delayed work on cap. 460 459 * 461 - * Caller must hold i_lock. 460 + * Caller must hold i_ceph_lock. 
462 461 */ 463 462 static void __cap_delay_cancel(struct ceph_mds_client *mdsc, 464 463 struct ceph_inode_info *ci) ··· 533 532 wanted |= ceph_caps_for_mode(fmode); 534 533 535 534 retry: 536 - spin_lock(&inode->i_lock); 535 + spin_lock(&ci->i_ceph_lock); 537 536 cap = __get_cap_for_mds(ci, mds); 538 537 if (!cap) { 539 538 if (new_cap) { 540 539 cap = new_cap; 541 540 new_cap = NULL; 542 541 } else { 543 - spin_unlock(&inode->i_lock); 542 + spin_unlock(&ci->i_ceph_lock); 544 543 new_cap = get_cap(mdsc, caps_reservation); 545 544 if (new_cap == NULL) 546 545 return -ENOMEM; ··· 626 625 627 626 if (fmode >= 0) 628 627 __ceph_get_fmode(ci, fmode); 629 - spin_unlock(&inode->i_lock); 628 + spin_unlock(&ci->i_ceph_lock); 630 629 wake_up_all(&ci->i_cap_wq); 631 630 return 0; 632 631 } ··· 793 792 struct rb_node *p; 794 793 int ret = 0; 795 794 796 - spin_lock(&inode->i_lock); 795 + spin_lock(&ci->i_ceph_lock); 797 796 for (p = rb_first(&ci->i_caps); p; p = rb_next(p)) { 798 797 cap = rb_entry(p, struct ceph_cap, ci_node); 799 798 if (__cap_is_valid(cap) && ··· 802 801 break; 803 802 } 804 803 } 805 - spin_unlock(&inode->i_lock); 804 + spin_unlock(&ci->i_ceph_lock); 806 805 dout("ceph_caps_revoking %p %s = %d\n", inode, 807 806 ceph_cap_string(mask), ret); 808 807 return ret; ··· 856 855 } 857 856 858 857 /* 859 - * called under i_lock 858 + * called under i_ceph_lock 860 859 */ 861 860 static int __ceph_is_any_caps(struct ceph_inode_info *ci) 862 861 { ··· 866 865 /* 867 866 * Remove a cap. Take steps to deal with a racing iterate_session_caps. 868 867 * 869 - * caller should hold i_lock. 868 + * caller should hold i_ceph_lock. 870 869 * caller will not hold session s_mutex if called from destroy_inode. 871 870 */ 872 871 void __ceph_remove_cap(struct ceph_cap *cap) ··· 1029 1028 1030 1029 /* 1031 1030 * Queue cap releases when an inode is dropped from our cache. Since 1032 - * inode is about to be destroyed, there is no need for i_lock. 
1031 + * inode is about to be destroyed, there is no need for i_ceph_lock. 1033 1032 */ 1034 1033 void ceph_queue_caps_release(struct inode *inode) 1035 1034 { ··· 1050 1049 1051 1050 /* 1052 1051 * Send a cap msg on the given inode. Update our caps state, then 1053 - * drop i_lock and send the message. 1052 + * drop i_ceph_lock and send the message. 1054 1053 * 1055 1054 * Make note of max_size reported/requested from mds, revoked caps 1056 1055 * that have now been implemented. ··· 1062 1061 * Return non-zero if delayed release, or we experienced an error 1063 1062 * such that the caller should requeue + retry later. 1064 1063 * 1065 - * called with i_lock, then drops it. 1064 + * called with i_ceph_lock, then drops it. 1066 1065 * caller should hold snap_rwsem (read), s_mutex. 1067 1066 */ 1068 1067 static int __send_cap(struct ceph_mds_client *mdsc, struct ceph_cap *cap, 1069 1068 int op, int used, int want, int retain, int flushing, 1070 1069 unsigned *pflush_tid) 1071 - __releases(cap->ci->vfs_inode->i_lock) 1070 + __releases(cap->ci->i_ceph_lock) 1072 1071 { 1073 1072 struct ceph_inode_info *ci = cap->ci; 1074 1073 struct inode *inode = &ci->vfs_inode; ··· 1171 1170 xattr_version = ci->i_xattrs.version; 1172 1171 } 1173 1172 1174 - spin_unlock(&inode->i_lock); 1173 + spin_unlock(&ci->i_ceph_lock); 1175 1174 1176 1175 ret = send_cap_msg(session, ceph_vino(inode).ino, cap_id, 1177 1176 op, keep, want, flushing, seq, flush_tid, issue_seq, mseq, ··· 1199 1198 * Unless @again is true, skip cap_snaps that were already sent to 1200 1199 * the MDS (i.e., during this session). 1201 1200 * 1202 - * Called under i_lock. Takes s_mutex as needed. 1201 + * Called under i_ceph_lock. Takes s_mutex as needed. 
1203 1202 */ 1204 1203 void __ceph_flush_snaps(struct ceph_inode_info *ci, 1205 1204 struct ceph_mds_session **psession, 1206 1205 int again) 1207 - __releases(ci->vfs_inode->i_lock) 1208 - __acquires(ci->vfs_inode->i_lock) 1206 + __releases(ci->i_ceph_lock) 1207 + __acquires(ci->i_ceph_lock) 1209 1208 { 1210 1209 struct inode *inode = &ci->vfs_inode; 1211 1210 int mds; ··· 1262 1261 session = NULL; 1263 1262 } 1264 1263 if (!session) { 1265 - spin_unlock(&inode->i_lock); 1264 + spin_unlock(&ci->i_ceph_lock); 1266 1265 mutex_lock(&mdsc->mutex); 1267 1266 session = __ceph_lookup_mds_session(mdsc, mds); 1268 1267 mutex_unlock(&mdsc->mutex); ··· 1276 1275 * deletion or migration. retry, and we'll 1277 1276 * get a better @mds value next time. 1278 1277 */ 1279 - spin_lock(&inode->i_lock); 1278 + spin_lock(&ci->i_ceph_lock); 1280 1279 goto retry; 1281 1280 } 1282 1281 ··· 1286 1285 list_del_init(&capsnap->flushing_item); 1287 1286 list_add_tail(&capsnap->flushing_item, 1288 1287 &session->s_cap_snaps_flushing); 1289 - spin_unlock(&inode->i_lock); 1288 + spin_unlock(&ci->i_ceph_lock); 1290 1289 1291 1290 dout("flush_snaps %p cap_snap %p follows %lld tid %llu\n", 1292 1291 inode, capsnap, capsnap->follows, capsnap->flush_tid); ··· 1303 1302 next_follows = capsnap->follows + 1; 1304 1303 ceph_put_cap_snap(capsnap); 1305 1304 1306 - spin_lock(&inode->i_lock); 1305 + spin_lock(&ci->i_ceph_lock); 1307 1306 goto retry; 1308 1307 } 1309 1308 ··· 1323 1322 1324 1323 static void ceph_flush_snaps(struct ceph_inode_info *ci) 1325 1324 { 1326 - struct inode *inode = &ci->vfs_inode; 1327 - 1328 - spin_lock(&inode->i_lock); 1325 + spin_lock(&ci->i_ceph_lock); 1329 1326 __ceph_flush_snaps(ci, NULL, 0); 1330 - spin_unlock(&inode->i_lock); 1327 + spin_unlock(&ci->i_ceph_lock); 1331 1328 } 1332 1329 1333 1330 /* ··· 1372 1373 * Add dirty inode to the flushing list. Assigned a seq number so we 1373 1374 * can wait for caps to flush without starving. 
1374 1375 * 1375 - * Called under i_lock. 1376 + * Called under i_ceph_lock. 1376 1377 */ 1377 1378 static int __mark_caps_flushing(struct inode *inode, 1378 1379 struct ceph_mds_session *session) ··· 1420 1421 struct ceph_inode_info *ci = ceph_inode(inode); 1421 1422 u32 invalidating_gen = ci->i_rdcache_gen; 1422 1423 1423 - spin_unlock(&inode->i_lock); 1424 + spin_unlock(&ci->i_ceph_lock); 1424 1425 invalidate_mapping_pages(&inode->i_data, 0, -1); 1425 - spin_lock(&inode->i_lock); 1426 + spin_lock(&ci->i_ceph_lock); 1426 1427 1427 1428 if (inode->i_data.nrpages == 0 && 1428 1429 invalidating_gen == ci->i_rdcache_gen) { ··· 1469 1470 if (mdsc->stopping) 1470 1471 is_delayed = 1; 1471 1472 1472 - spin_lock(&inode->i_lock); 1473 + spin_lock(&ci->i_ceph_lock); 1473 1474 1474 1475 if (ci->i_ceph_flags & CEPH_I_FLUSH) 1475 1476 flags |= CHECK_CAPS_FLUSH; ··· 1479 1480 __ceph_flush_snaps(ci, &session, 0); 1480 1481 goto retry_locked; 1481 1482 retry: 1482 - spin_lock(&inode->i_lock); 1483 + spin_lock(&ci->i_ceph_lock); 1483 1484 retry_locked: 1484 1485 file_wanted = __ceph_caps_file_wanted(ci); 1485 1486 used = __ceph_caps_used(ci); ··· 1633 1634 if (mutex_trylock(&session->s_mutex) == 0) { 1634 1635 dout("inverting session/ino locks on %p\n", 1635 1636 session); 1636 - spin_unlock(&inode->i_lock); 1637 + spin_unlock(&ci->i_ceph_lock); 1637 1638 if (took_snap_rwsem) { 1638 1639 up_read(&mdsc->snap_rwsem); 1639 1640 took_snap_rwsem = 0; ··· 1647 1648 if (down_read_trylock(&mdsc->snap_rwsem) == 0) { 1648 1649 dout("inverting snap/in locks on %p\n", 1649 1650 inode); 1650 - spin_unlock(&inode->i_lock); 1651 + spin_unlock(&ci->i_ceph_lock); 1651 1652 down_read(&mdsc->snap_rwsem); 1652 1653 took_snap_rwsem = 1; 1653 1654 goto retry; ··· 1663 1664 mds = cap->mds; /* remember mds, so we don't repeat */ 1664 1665 sent++; 1665 1666 1666 - /* __send_cap drops i_lock */ 1667 + /* __send_cap drops i_ceph_lock */ 1667 1668 delayed += __send_cap(mdsc, cap, CEPH_CAP_OP_UPDATE, used, 
want, 1668 1669 retain, flushing, NULL); 1669 - goto retry; /* retake i_lock and restart our cap scan. */ 1670 + goto retry; /* retake i_ceph_lock and restart our cap scan. */ 1670 1671 } 1671 1672 1672 1673 /* ··· 1680 1681 else if (!is_delayed || force_requeue) 1681 1682 __cap_delay_requeue(mdsc, ci); 1682 1683 1683 - spin_unlock(&inode->i_lock); 1684 + spin_unlock(&ci->i_ceph_lock); 1684 1685 1685 1686 if (queue_invalidate) 1686 1687 ceph_queue_invalidate(inode); ··· 1703 1704 int flushing = 0; 1704 1705 1705 1706 retry: 1706 - spin_lock(&inode->i_lock); 1707 + spin_lock(&ci->i_ceph_lock); 1707 1708 if (ci->i_ceph_flags & CEPH_I_NOFLUSH) { 1708 1709 dout("try_flush_caps skipping %p I_NOFLUSH set\n", inode); 1709 1710 goto out; ··· 1715 1716 int delayed; 1716 1717 1717 1718 if (!session) { 1718 - spin_unlock(&inode->i_lock); 1719 + spin_unlock(&ci->i_ceph_lock); 1719 1720 session = cap->session; 1720 1721 mutex_lock(&session->s_mutex); 1721 1722 goto retry; ··· 1726 1727 1727 1728 flushing = __mark_caps_flushing(inode, session); 1728 1729 1729 - /* __send_cap drops i_lock */ 1730 + /* __send_cap drops i_ceph_lock */ 1730 1731 delayed = __send_cap(mdsc, cap, CEPH_CAP_OP_FLUSH, used, want, 1731 1732 cap->issued | cap->implemented, flushing, 1732 1733 flush_tid); 1733 1734 if (!delayed) 1734 1735 goto out_unlocked; 1735 1736 1736 - spin_lock(&inode->i_lock); 1737 + spin_lock(&ci->i_ceph_lock); 1737 1738 __cap_delay_requeue(mdsc, ci); 1738 1739 } 1739 1740 out: 1740 - spin_unlock(&inode->i_lock); 1741 + spin_unlock(&ci->i_ceph_lock); 1741 1742 out_unlocked: 1742 1743 if (session && unlock_session) 1743 1744 mutex_unlock(&session->s_mutex); ··· 1752 1753 struct ceph_inode_info *ci = ceph_inode(inode); 1753 1754 int i, ret = 1; 1754 1755 1755 - spin_lock(&inode->i_lock); 1756 + spin_lock(&ci->i_ceph_lock); 1756 1757 for (i = 0; i < CEPH_CAP_BITS; i++) 1757 1758 if ((ci->i_flushing_caps & (1 << i)) && 1758 1759 ci->i_cap_flush_tid[i] <= tid) { ··· 1760 1761 ret = 0; 
1761 1762 break; 1762 1763 } 1763 - spin_unlock(&inode->i_lock); 1764 + spin_unlock(&ci->i_ceph_lock); 1764 1765 return ret; 1765 1766 } 1766 1767 ··· 1867 1868 struct ceph_mds_client *mdsc = 1868 1869 ceph_sb_to_client(inode->i_sb)->mdsc; 1869 1870 1870 - spin_lock(&inode->i_lock); 1871 + spin_lock(&ci->i_ceph_lock); 1871 1872 if (__ceph_caps_dirty(ci)) 1872 1873 __cap_delay_requeue_front(mdsc, ci); 1873 - spin_unlock(&inode->i_lock); 1874 + spin_unlock(&ci->i_ceph_lock); 1874 1875 } 1875 1876 return err; 1876 1877 } ··· 1893 1894 struct inode *inode = &ci->vfs_inode; 1894 1895 struct ceph_cap *cap; 1895 1896 1896 - spin_lock(&inode->i_lock); 1897 + spin_lock(&ci->i_ceph_lock); 1897 1898 cap = ci->i_auth_cap; 1898 1899 if (cap && cap->session == session) { 1899 1900 dout("kick_flushing_caps %p cap %p capsnap %p\n", inode, ··· 1903 1904 pr_err("%p auth cap %p not mds%d ???\n", inode, 1904 1905 cap, session->s_mds); 1905 1906 } 1906 - spin_unlock(&inode->i_lock); 1907 + spin_unlock(&ci->i_ceph_lock); 1907 1908 } 1908 1909 } 1909 1910 ··· 1920 1921 struct ceph_cap *cap; 1921 1922 int delayed = 0; 1922 1923 1923 - spin_lock(&inode->i_lock); 1924 + spin_lock(&ci->i_ceph_lock); 1924 1925 cap = ci->i_auth_cap; 1925 1926 if (cap && cap->session == session) { 1926 1927 dout("kick_flushing_caps %p cap %p %s\n", inode, ··· 1931 1932 cap->issued | cap->implemented, 1932 1933 ci->i_flushing_caps, NULL); 1933 1934 if (delayed) { 1934 - spin_lock(&inode->i_lock); 1935 + spin_lock(&ci->i_ceph_lock); 1935 1936 __cap_delay_requeue(mdsc, ci); 1936 - spin_unlock(&inode->i_lock); 1937 + spin_unlock(&ci->i_ceph_lock); 1937 1938 } 1938 1939 } else { 1939 1940 pr_err("%p auth cap %p not mds%d ???\n", inode, 1940 1941 cap, session->s_mds); 1941 - spin_unlock(&inode->i_lock); 1942 + spin_unlock(&ci->i_ceph_lock); 1942 1943 } 1943 1944 } 1944 1945 } ··· 1951 1952 struct ceph_cap *cap; 1952 1953 int delayed = 0; 1953 1954 1954 - spin_lock(&inode->i_lock); 1955 + spin_lock(&ci->i_ceph_lock); 
1955 1956 cap = ci->i_auth_cap; 1956 1957 dout("kick_flushing_inode_caps %p flushing %s flush_seq %lld\n", inode, 1957 1958 ceph_cap_string(ci->i_flushing_caps), ci->i_cap_flush_seq); ··· 1963 1964 cap->issued | cap->implemented, 1964 1965 ci->i_flushing_caps, NULL); 1965 1966 if (delayed) { 1966 - spin_lock(&inode->i_lock); 1967 + spin_lock(&ci->i_ceph_lock); 1967 1968 __cap_delay_requeue(mdsc, ci); 1968 - spin_unlock(&inode->i_lock); 1969 + spin_unlock(&ci->i_ceph_lock); 1969 1970 } 1970 1971 } else { 1971 - spin_unlock(&inode->i_lock); 1972 + spin_unlock(&ci->i_ceph_lock); 1972 1973 } 1973 1974 } 1974 1975 ··· 1977 1978 * Take references to capabilities we hold, so that we don't release 1978 1979 * them to the MDS prematurely. 1979 1980 * 1980 - * Protected by i_lock. 1981 + * Protected by i_ceph_lock. 1981 1982 */ 1982 1983 static void __take_cap_refs(struct ceph_inode_info *ci, int got) 1983 1984 { ··· 2015 2016 2016 2017 dout("get_cap_refs %p need %s want %s\n", inode, 2017 2018 ceph_cap_string(need), ceph_cap_string(want)); 2018 - spin_lock(&inode->i_lock); 2019 + spin_lock(&ci->i_ceph_lock); 2019 2020 2020 2021 /* make sure file is actually open */ 2021 2022 file_wanted = __ceph_caps_file_wanted(ci); ··· 2076 2077 ceph_cap_string(have), ceph_cap_string(need)); 2077 2078 } 2078 2079 out: 2079 - spin_unlock(&inode->i_lock); 2080 + spin_unlock(&ci->i_ceph_lock); 2080 2081 dout("get_cap_refs %p ret %d got %s\n", inode, 2081 2082 ret, ceph_cap_string(*got)); 2082 2083 return ret; ··· 2093 2094 int check = 0; 2094 2095 2095 2096 /* do we need to explicitly request a larger max_size? 
*/ 2096 - spin_lock(&inode->i_lock); 2097 + spin_lock(&ci->i_ceph_lock); 2097 2098 if ((endoff >= ci->i_max_size || 2098 2099 endoff > (inode->i_size << 1)) && 2099 2100 endoff > ci->i_wanted_max_size) { ··· 2102 2103 ci->i_wanted_max_size = endoff; 2103 2104 check = 1; 2104 2105 } 2105 - spin_unlock(&inode->i_lock); 2106 + spin_unlock(&ci->i_ceph_lock); 2106 2107 if (check) 2107 2108 ceph_check_caps(ci, CHECK_CAPS_AUTHONLY, NULL); 2108 2109 } ··· 2139 2140 */ 2140 2141 void ceph_get_cap_refs(struct ceph_inode_info *ci, int caps) 2141 2142 { 2142 - spin_lock(&ci->vfs_inode.i_lock); 2143 + spin_lock(&ci->i_ceph_lock); 2143 2144 __take_cap_refs(ci, caps); 2144 - spin_unlock(&ci->vfs_inode.i_lock); 2145 + spin_unlock(&ci->i_ceph_lock); 2145 2146 } 2146 2147 2147 2148 /* ··· 2159 2160 int last = 0, put = 0, flushsnaps = 0, wake = 0; 2160 2161 struct ceph_cap_snap *capsnap; 2161 2162 2162 - spin_lock(&inode->i_lock); 2163 + spin_lock(&ci->i_ceph_lock); 2163 2164 if (had & CEPH_CAP_PIN) 2164 2165 --ci->i_pin_ref; 2165 2166 if (had & CEPH_CAP_FILE_RD) ··· 2192 2193 } 2193 2194 } 2194 2195 } 2195 - spin_unlock(&inode->i_lock); 2196 + spin_unlock(&ci->i_ceph_lock); 2196 2197 2197 2198 dout("put_cap_refs %p had %s%s%s\n", inode, ceph_cap_string(had), 2198 2199 last ? " last" : "", put ? " put" : ""); ··· 2224 2225 int found = 0; 2225 2226 struct ceph_cap_snap *capsnap = NULL; 2226 2227 2227 - spin_lock(&inode->i_lock); 2228 + spin_lock(&ci->i_ceph_lock); 2228 2229 ci->i_wrbuffer_ref -= nr; 2229 2230 last = !ci->i_wrbuffer_ref; 2230 2231 ··· 2273 2274 } 2274 2275 } 2275 2276 2276 - spin_unlock(&inode->i_lock); 2277 + spin_unlock(&ci->i_ceph_lock); 2277 2278 2278 2279 if (last) { 2279 2280 ceph_check_caps(ci, CHECK_CAPS_AUTHONLY, NULL); ··· 2290 2291 * Handle a cap GRANT message from the MDS. (Note that a GRANT may 2291 2292 * actually be a revocation if it specifies a smaller cap set.) 2292 2293 * 2293 - * caller holds s_mutex and i_lock, we drop both. 
2294 + * caller holds s_mutex and i_ceph_lock, we drop both. 2294 2295 * 2295 2296 * return value: 2296 2297 * 0 - ok ··· 2301 2302 struct ceph_mds_session *session, 2302 2303 struct ceph_cap *cap, 2303 2304 struct ceph_buffer *xattr_buf) 2304 - __releases(inode->i_lock) 2305 + __releases(ci->i_ceph_lock) 2305 2306 { 2306 2307 struct ceph_inode_info *ci = ceph_inode(inode); 2307 2308 int mds = session->s_mds; ··· 2452 2453 } 2453 2454 BUG_ON(cap->issued & ~cap->implemented); 2454 2455 2455 - spin_unlock(&inode->i_lock); 2456 + spin_unlock(&ci->i_ceph_lock); 2456 2457 if (writeback) 2457 2458 /* 2458 2459 * queue inode for writeback: we can't actually call ··· 2482 2483 struct ceph_mds_caps *m, 2483 2484 struct ceph_mds_session *session, 2484 2485 struct ceph_cap *cap) 2485 - __releases(inode->i_lock) 2486 + __releases(ci->i_ceph_lock) 2486 2487 { 2487 2488 struct ceph_inode_info *ci = ceph_inode(inode); 2488 2489 struct ceph_mds_client *mdsc = ceph_sb_to_client(inode->i_sb)->mdsc; ··· 2538 2539 wake_up_all(&ci->i_cap_wq); 2539 2540 2540 2541 out: 2541 - spin_unlock(&inode->i_lock); 2542 + spin_unlock(&ci->i_ceph_lock); 2542 2543 if (drop) 2543 2544 iput(inode); 2544 2545 } ··· 2561 2562 dout("handle_cap_flushsnap_ack inode %p ci %p mds%d follows %lld\n", 2562 2563 inode, ci, session->s_mds, follows); 2563 2564 2564 - spin_lock(&inode->i_lock); 2565 + spin_lock(&ci->i_ceph_lock); 2565 2566 list_for_each_entry(capsnap, &ci->i_cap_snaps, ci_item) { 2566 2567 if (capsnap->follows == follows) { 2567 2568 if (capsnap->flush_tid != flush_tid) { ··· 2584 2585 capsnap, capsnap->follows); 2585 2586 } 2586 2587 } 2587 - spin_unlock(&inode->i_lock); 2588 + spin_unlock(&ci->i_ceph_lock); 2588 2589 if (drop) 2589 2590 iput(inode); 2590 2591 } ··· 2597 2598 static void handle_cap_trunc(struct inode *inode, 2598 2599 struct ceph_mds_caps *trunc, 2599 2600 struct ceph_mds_session *session) 2600 - __releases(inode->i_lock) 2601 + __releases(ci->i_ceph_lock) 2601 2602 { 2602 2603 
struct ceph_inode_info *ci = ceph_inode(inode); 2603 2604 int mds = session->s_mds; ··· 2616 2617 inode, mds, seq, truncate_size, truncate_seq); 2617 2618 queue_trunc = ceph_fill_file_size(inode, issued, 2618 2619 truncate_seq, truncate_size, size); 2619 - spin_unlock(&inode->i_lock); 2620 + spin_unlock(&ci->i_ceph_lock); 2620 2621 2621 2622 if (queue_trunc) 2622 2623 ceph_queue_vmtruncate(inode); ··· 2645 2646 dout("handle_cap_export inode %p ci %p mds%d mseq %d\n", 2646 2647 inode, ci, mds, mseq); 2647 2648 2648 - spin_lock(&inode->i_lock); 2649 + spin_lock(&ci->i_ceph_lock); 2649 2650 2650 2651 /* make sure we haven't seen a higher mseq */ 2651 2652 for (p = rb_first(&ci->i_caps); p; p = rb_next(p)) { ··· 2689 2690 } 2690 2691 /* else, we already released it */ 2691 2692 2692 - spin_unlock(&inode->i_lock); 2693 + spin_unlock(&ci->i_ceph_lock); 2693 2694 } 2694 2695 2695 2696 /* ··· 2744 2745 up_read(&mdsc->snap_rwsem); 2745 2746 2746 2747 /* make sure we re-request max_size, if necessary */ 2747 - spin_lock(&inode->i_lock); 2748 + spin_lock(&ci->i_ceph_lock); 2748 2749 ci->i_requested_max_size = 0; 2749 - spin_unlock(&inode->i_lock); 2750 + spin_unlock(&ci->i_ceph_lock); 2750 2751 } 2751 2752 2752 2753 /* ··· 2761 2762 struct ceph_mds_client *mdsc = session->s_mdsc; 2762 2763 struct super_block *sb = mdsc->fsc->sb; 2763 2764 struct inode *inode; 2765 + struct ceph_inode_info *ci; 2764 2766 struct ceph_cap *cap; 2765 2767 struct ceph_mds_caps *h; 2766 2768 int mds = session->s_mds; ··· 2815 2815 2816 2816 /* lookup ino */ 2817 2817 inode = ceph_find_inode(sb, vino); 2818 + ci = ceph_inode(inode); 2818 2819 dout(" op %s ino %llx.%llx inode %p\n", ceph_cap_op_name(op), vino.ino, 2819 2820 vino.snap, inode); 2820 2821 if (!inode) { ··· 2845 2844 } 2846 2845 2847 2846 /* the rest require a cap */ 2848 - spin_lock(&inode->i_lock); 2847 + spin_lock(&ci->i_ceph_lock); 2849 2848 cap = __get_cap_for_mds(ceph_inode(inode), mds); 2850 2849 if (!cap) { 2851 2850 dout(" no 
cap on %p ino %llx.%llx from mds%d\n", 2852 2851 inode, ceph_ino(inode), ceph_snap(inode), mds); 2853 - spin_unlock(&inode->i_lock); 2852 + spin_unlock(&ci->i_ceph_lock); 2854 2853 goto flush_cap_releases; 2855 2854 } 2856 2855 2857 - /* note that each of these drops i_lock for us */ 2856 + /* note that each of these drops i_ceph_lock for us */ 2858 2857 switch (op) { 2859 2858 case CEPH_CAP_OP_REVOKE: 2860 2859 case CEPH_CAP_OP_GRANT: ··· 2870 2869 break; 2871 2870 2872 2871 default: 2873 - spin_unlock(&inode->i_lock); 2872 + spin_unlock(&ci->i_ceph_lock); 2874 2873 pr_err("ceph_handle_caps: unknown cap op %d %s\n", op, 2875 2874 ceph_cap_op_name(op)); 2876 2875 } ··· 2963 2962 struct inode *inode = &ci->vfs_inode; 2964 2963 int last = 0; 2965 2964 2966 - spin_lock(&inode->i_lock); 2965 + spin_lock(&ci->i_ceph_lock); 2967 2966 dout("put_fmode %p fmode %d %d -> %d\n", inode, fmode, 2968 2967 ci->i_nr_by_mode[fmode], ci->i_nr_by_mode[fmode]-1); 2969 2968 BUG_ON(ci->i_nr_by_mode[fmode] == 0); 2970 2969 if (--ci->i_nr_by_mode[fmode] == 0) 2971 2970 last++; 2972 - spin_unlock(&inode->i_lock); 2971 + spin_unlock(&ci->i_ceph_lock); 2973 2972 2974 2973 if (last && ci->i_vino.snap == CEPH_NOSNAP) 2975 2974 ceph_check_caps(ci, 0, NULL); ··· 2992 2991 int used, dirty; 2993 2992 int ret = 0; 2994 2993 2995 - spin_lock(&inode->i_lock); 2994 + spin_lock(&ci->i_ceph_lock); 2996 2995 used = __ceph_caps_used(ci); 2997 2996 dirty = __ceph_caps_dirty(ci); 2998 2997 ··· 3047 3046 inode, cap, ceph_cap_string(cap->issued)); 3048 3047 } 3049 3048 } 3050 - spin_unlock(&inode->i_lock); 3049 + spin_unlock(&ci->i_ceph_lock); 3051 3050 return ret; 3052 3051 } 3053 3052 ··· 3062 3061 3063 3062 /* 3064 3063 * force an record for the directory caps if we have a dentry lease. 
3065 - * this is racy (can't take i_lock and d_lock together), but it 3064 + * this is racy (can't take i_ceph_lock and d_lock together), but it 3066 3065 * doesn't have to be perfect; the mds will revoke anything we don't 3067 3066 * release. 3068 3067 */
+12 -12
fs/ceph/dir.c
··· 281 281 } 282 282 283 283 /* can we use the dcache? */ 284 - spin_lock(&inode->i_lock); 284 + spin_lock(&ci->i_ceph_lock); 285 285 if ((filp->f_pos == 2 || fi->dentry) && 286 286 !ceph_test_mount_opt(fsc, NOASYNCREADDIR) && 287 287 ceph_snap(inode) != CEPH_SNAPDIR && 288 288 ceph_dir_test_complete(inode) && 289 289 __ceph_caps_issued_mask(ci, CEPH_CAP_FILE_SHARED, 1)) { 290 - spin_unlock(&inode->i_lock); 290 + spin_unlock(&ci->i_ceph_lock); 291 291 err = __dcache_readdir(filp, dirent, filldir); 292 292 if (err != -EAGAIN) 293 293 return err; 294 294 } else { 295 - spin_unlock(&inode->i_lock); 295 + spin_unlock(&ci->i_ceph_lock); 296 296 } 297 297 if (fi->dentry) { 298 298 err = note_last_dentry(fi, fi->dentry->d_name.name, ··· 428 428 * were released during the whole readdir, and we should have 429 429 * the complete dir contents in our cache. 430 430 */ 431 - spin_lock(&inode->i_lock); 431 + spin_lock(&ci->i_ceph_lock); 432 432 if (ci->i_release_count == fi->dir_release_count) { 433 433 ceph_dir_set_complete(inode); 434 434 ci->i_max_offset = filp->f_pos; 435 435 } 436 - spin_unlock(&inode->i_lock); 436 + spin_unlock(&ci->i_ceph_lock); 437 437 438 438 dout("readdir %p filp %p done.\n", inode, filp); 439 439 return 0; ··· 607 607 struct ceph_inode_info *ci = ceph_inode(dir); 608 608 struct ceph_dentry_info *di = ceph_dentry(dentry); 609 609 610 - spin_lock(&dir->i_lock); 610 + spin_lock(&ci->i_ceph_lock); 611 611 dout(" dir %p flags are %d\n", dir, ci->i_ceph_flags); 612 612 if (strncmp(dentry->d_name.name, 613 613 fsc->mount_options->snapdir_name, ··· 615 615 !is_root_ceph_dentry(dir, dentry) && 616 616 ceph_dir_test_complete(dir) && 617 617 (__ceph_caps_issued_mask(ci, CEPH_CAP_FILE_SHARED, 1))) { 618 - spin_unlock(&dir->i_lock); 618 + spin_unlock(&ci->i_ceph_lock); 619 619 dout(" dir %p complete, -ENOENT\n", dir); 620 620 d_add(dentry, NULL); 621 621 di->lease_shared_gen = ci->i_shared_gen; 622 622 return NULL; 623 623 } 624 - spin_unlock(&dir->i_lock); 624 
+ spin_unlock(&ci->i_ceph_lock); 625 625 } 626 626 627 627 op = ceph_snap(dir) == CEPH_SNAPDIR ? ··· 841 841 struct ceph_inode_info *ci = ceph_inode(inode); 842 842 int drop = CEPH_CAP_LINK_SHARED | CEPH_CAP_LINK_EXCL; 843 843 844 - spin_lock(&inode->i_lock); 844 + spin_lock(&ci->i_ceph_lock); 845 845 if (inode->i_nlink == 1) { 846 846 drop |= ~(__ceph_caps_wanted(ci) | CEPH_CAP_PIN); 847 847 ci->i_ceph_flags |= CEPH_I_NODELAY; 848 848 } 849 - spin_unlock(&inode->i_lock); 849 + spin_unlock(&ci->i_ceph_lock); 850 850 return drop; 851 851 } 852 852 ··· 1015 1015 struct ceph_dentry_info *di = ceph_dentry(dentry); 1016 1016 int valid = 0; 1017 1017 1018 - spin_lock(&dir->i_lock); 1018 + spin_lock(&ci->i_ceph_lock); 1019 1019 if (ci->i_shared_gen == di->lease_shared_gen) 1020 1020 valid = __ceph_caps_issued_mask(ci, CEPH_CAP_FILE_SHARED, 1); 1021 - spin_unlock(&dir->i_lock); 1021 + spin_unlock(&ci->i_ceph_lock); 1022 1022 dout("dir_lease_is_valid dir %p v%u dentry %p v%u = %d\n", 1023 1023 dir, (unsigned)ci->i_shared_gen, dentry, 1024 1024 (unsigned)di->lease_shared_gen, valid);
+12 -11
fs/ceph/file.c
··· 147 147 148 148 /* trivially open snapdir */ 149 149 if (ceph_snap(inode) == CEPH_SNAPDIR) { 150 - spin_lock(&inode->i_lock); 150 + spin_lock(&ci->i_ceph_lock); 151 151 __ceph_get_fmode(ci, fmode); 152 - spin_unlock(&inode->i_lock); 152 + spin_unlock(&ci->i_ceph_lock); 153 153 return ceph_init_file(inode, file, fmode); 154 154 } 155 155 ··· 158 158 * write) or any MDS (for read). Update wanted set 159 159 * asynchronously. 160 160 */ 161 - spin_lock(&inode->i_lock); 161 + spin_lock(&ci->i_ceph_lock); 162 162 if (__ceph_is_any_real_caps(ci) && 163 163 (((fmode & CEPH_FILE_MODE_WR) == 0) || ci->i_auth_cap)) { 164 164 int mds_wanted = __ceph_caps_mds_wanted(ci); ··· 168 168 inode, fmode, ceph_cap_string(wanted), 169 169 ceph_cap_string(issued)); 170 170 __ceph_get_fmode(ci, fmode); 171 - spin_unlock(&inode->i_lock); 171 + spin_unlock(&ci->i_ceph_lock); 172 172 173 173 /* adjust wanted? */ 174 174 if ((issued & wanted) != wanted && ··· 180 180 } else if (ceph_snap(inode) != CEPH_NOSNAP && 181 181 (ci->i_snap_caps & wanted) == wanted) { 182 182 __ceph_get_fmode(ci, fmode); 183 - spin_unlock(&inode->i_lock); 183 + spin_unlock(&ci->i_ceph_lock); 184 184 return ceph_init_file(inode, file, fmode); 185 185 } 186 - spin_unlock(&inode->i_lock); 186 + spin_unlock(&ci->i_ceph_lock); 187 187 188 188 dout("open fmode %d wants %s\n", fmode, ceph_cap_string(wanted)); 189 189 req = prepare_open_request(inode->i_sb, flags, 0); ··· 743 743 */ 744 744 int dirty; 745 745 746 - spin_lock(&inode->i_lock); 746 + spin_lock(&ci->i_ceph_lock); 747 747 dirty = __ceph_mark_dirty_caps(ci, CEPH_CAP_FILE_WR); 748 - spin_unlock(&inode->i_lock); 748 + spin_unlock(&ci->i_ceph_lock); 749 749 ceph_put_cap_refs(ci, got); 750 750 751 751 ret = generic_file_aio_write(iocb, iov, nr_segs, pos); ··· 764 764 765 765 if (ret >= 0) { 766 766 int dirty; 767 - spin_lock(&inode->i_lock); 767 + spin_lock(&ci->i_ceph_lock); 768 768 dirty = __ceph_mark_dirty_caps(ci, CEPH_CAP_FILE_WR); 769 - 
spin_unlock(&inode->i_lock); 769 + spin_unlock(&ci->i_ceph_lock); 770 770 if (dirty) 771 771 __mark_inode_dirty(inode, dirty); 772 772 } ··· 797 797 798 798 mutex_lock(&inode->i_mutex); 799 799 __ceph_do_pending_vmtruncate(inode); 800 - if (origin != SEEK_CUR || origin != SEEK_SET) { 800 + 801 + if (origin == SEEK_END || origin == SEEK_DATA || origin == SEEK_HOLE) { 801 802 ret = ceph_do_getattr(inode, CEPH_STAT_CAP_SIZE); 802 803 if (ret < 0) { 803 804 offset = ret;
+28 -25
fs/ceph/inode.c
··· 297 297 298 298 dout("alloc_inode %p\n", &ci->vfs_inode); 299 299 300 + spin_lock_init(&ci->i_ceph_lock); 301 + 300 302 ci->i_version = 0; 301 303 ci->i_time_warp_seq = 0; 302 304 ci->i_ceph_flags = 0; ··· 585 583 iinfo->xattr_len); 586 584 } 587 585 588 - spin_lock(&inode->i_lock); 586 + spin_lock(&ci->i_ceph_lock); 589 587 590 588 /* 591 589 * provided version will be odd if inode value is projected, ··· 682 680 char *sym; 683 681 684 682 BUG_ON(symlen != inode->i_size); 685 - spin_unlock(&inode->i_lock); 683 + spin_unlock(&ci->i_ceph_lock); 686 684 687 685 err = -ENOMEM; 688 686 sym = kmalloc(symlen+1, GFP_NOFS); ··· 691 689 memcpy(sym, iinfo->symlink, symlen); 692 690 sym[symlen] = 0; 693 691 694 - spin_lock(&inode->i_lock); 692 + spin_lock(&ci->i_ceph_lock); 695 693 if (!ci->i_symlink) 696 694 ci->i_symlink = sym; 697 695 else ··· 717 715 } 718 716 719 717 no_change: 720 - spin_unlock(&inode->i_lock); 718 + spin_unlock(&ci->i_ceph_lock); 721 719 722 720 /* queue truncate if we saw i_size decrease */ 723 721 if (queue_trunc) ··· 752 750 info->cap.flags, 753 751 caps_reservation); 754 752 } else { 755 - spin_lock(&inode->i_lock); 753 + spin_lock(&ci->i_ceph_lock); 756 754 dout(" %p got snap_caps %s\n", inode, 757 755 ceph_cap_string(le32_to_cpu(info->cap.caps))); 758 756 ci->i_snap_caps |= le32_to_cpu(info->cap.caps); 759 757 if (cap_fmode >= 0) 760 758 __ceph_get_fmode(ci, cap_fmode); 761 - spin_unlock(&inode->i_lock); 759 + spin_unlock(&ci->i_ceph_lock); 762 760 } 763 761 } else if (cap_fmode >= 0) { 764 762 pr_warning("mds issued no caps on %llx.%llx\n", ··· 851 849 { 852 850 struct dentry *dir = dn->d_parent; 853 851 struct inode *inode = dir->d_inode; 852 + struct ceph_inode_info *ci = ceph_inode(inode); 854 853 struct ceph_dentry_info *di; 855 854 856 855 BUG_ON(!inode); 857 856 858 857 di = ceph_dentry(dn); 859 858 860 - spin_lock(&inode->i_lock); 859 + spin_lock(&ci->i_ceph_lock); 861 860 if (!ceph_dir_test_complete(inode)) { 862 - 
spin_unlock(&inode->i_lock); 861 + spin_unlock(&ci->i_ceph_lock); 863 862 return; 864 863 } 865 864 di->offset = ceph_inode(inode)->i_max_offset++; 866 - spin_unlock(&inode->i_lock); 865 + spin_unlock(&ci->i_ceph_lock); 867 866 868 867 spin_lock(&dir->d_lock); 869 868 spin_lock_nested(&dn->d_lock, DENTRY_D_LOCK_NESTED); ··· 1311 1308 struct ceph_inode_info *ci = ceph_inode(inode); 1312 1309 int ret = 0; 1313 1310 1314 - spin_lock(&inode->i_lock); 1311 + spin_lock(&ci->i_ceph_lock); 1315 1312 dout("set_size %p %llu -> %llu\n", inode, inode->i_size, size); 1316 1313 inode->i_size = size; 1317 1314 inode->i_blocks = (size + (1 << 9) - 1) >> 9; ··· 1321 1318 (ci->i_reported_size << 1) < ci->i_max_size) 1322 1319 ret = 1; 1323 1320 1324 - spin_unlock(&inode->i_lock); 1321 + spin_unlock(&ci->i_ceph_lock); 1325 1322 return ret; 1326 1323 } 1327 1324 ··· 1379 1376 u32 orig_gen; 1380 1377 int check = 0; 1381 1378 1382 - spin_lock(&inode->i_lock); 1379 + spin_lock(&ci->i_ceph_lock); 1383 1380 dout("invalidate_pages %p gen %d revoking %d\n", inode, 1384 1381 ci->i_rdcache_gen, ci->i_rdcache_revoking); 1385 1382 if (ci->i_rdcache_revoking != ci->i_rdcache_gen) { 1386 1383 /* nevermind! 
*/ 1387 - spin_unlock(&inode->i_lock); 1384 + spin_unlock(&ci->i_ceph_lock); 1388 1385 goto out; 1389 1386 } 1390 1387 orig_gen = ci->i_rdcache_gen; 1391 - spin_unlock(&inode->i_lock); 1388 + spin_unlock(&ci->i_ceph_lock); 1392 1389 1393 1390 truncate_inode_pages(&inode->i_data, 0); 1394 1391 1395 - spin_lock(&inode->i_lock); 1392 + spin_lock(&ci->i_ceph_lock); 1396 1393 if (orig_gen == ci->i_rdcache_gen && 1397 1394 orig_gen == ci->i_rdcache_revoking) { 1398 1395 dout("invalidate_pages %p gen %d successful\n", inode, ··· 1404 1401 inode, orig_gen, ci->i_rdcache_gen, 1405 1402 ci->i_rdcache_revoking); 1406 1403 } 1407 - spin_unlock(&inode->i_lock); 1404 + spin_unlock(&ci->i_ceph_lock); 1408 1405 1409 1406 if (check) 1410 1407 ceph_check_caps(ci, 0, NULL); ··· 1463 1460 int wrbuffer_refs, wake = 0; 1464 1461 1465 1462 retry: 1466 - spin_lock(&inode->i_lock); 1463 + spin_lock(&ci->i_ceph_lock); 1467 1464 if (ci->i_truncate_pending == 0) { 1468 1465 dout("__do_pending_vmtruncate %p none pending\n", inode); 1469 - spin_unlock(&inode->i_lock); 1466 + spin_unlock(&ci->i_ceph_lock); 1470 1467 return; 1471 1468 } 1472 1469 ··· 1477 1474 if (ci->i_wrbuffer_ref_head < ci->i_wrbuffer_ref) { 1478 1475 dout("__do_pending_vmtruncate %p flushing snaps first\n", 1479 1476 inode); 1480 - spin_unlock(&inode->i_lock); 1477 + spin_unlock(&ci->i_ceph_lock); 1481 1478 filemap_write_and_wait_range(&inode->i_data, 0, 1482 1479 inode->i_sb->s_maxbytes); 1483 1480 goto retry; ··· 1487 1484 wrbuffer_refs = ci->i_wrbuffer_ref; 1488 1485 dout("__do_pending_vmtruncate %p (%d) to %lld\n", inode, 1489 1486 ci->i_truncate_pending, to); 1490 - spin_unlock(&inode->i_lock); 1487 + spin_unlock(&ci->i_ceph_lock); 1491 1488 1492 1489 truncate_inode_pages(inode->i_mapping, to); 1493 1490 1494 - spin_lock(&inode->i_lock); 1491 + spin_lock(&ci->i_ceph_lock); 1495 1492 ci->i_truncate_pending--; 1496 1493 if (ci->i_truncate_pending == 0) 1497 1494 wake = 1; 1498 - spin_unlock(&inode->i_lock); 1495 + 
spin_unlock(&ci->i_ceph_lock); 1499 1496 1500 1497 if (wrbuffer_refs == 0) 1501 1498 ceph_check_caps(ci, CHECK_CAPS_AUTHONLY, NULL); ··· 1550 1547 if (IS_ERR(req)) 1551 1548 return PTR_ERR(req); 1552 1549 1553 - spin_lock(&inode->i_lock); 1550 + spin_lock(&ci->i_ceph_lock); 1554 1551 issued = __ceph_caps_issued(ci, NULL); 1555 1552 dout("setattr %p issued %s\n", inode, ceph_cap_string(issued)); 1556 1553 ··· 1698 1695 } 1699 1696 1700 1697 release &= issued; 1701 - spin_unlock(&inode->i_lock); 1698 + spin_unlock(&ci->i_ceph_lock); 1702 1699 1703 1700 if (inode_dirty_flags) 1704 1701 __mark_inode_dirty(inode, inode_dirty_flags); ··· 1720 1717 __ceph_do_pending_vmtruncate(inode); 1721 1718 return err; 1722 1719 out: 1723 - spin_unlock(&inode->i_lock); 1720 + spin_unlock(&ci->i_ceph_lock); 1724 1721 ceph_mdsc_put_request(req); 1725 1722 return err; 1726 1723 }
+2 -2
fs/ceph/ioctl.c
··· 241 241 struct ceph_inode_info *ci = ceph_inode(inode); 242 242 243 243 if ((fi->fmode & CEPH_FILE_MODE_LAZY) == 0) { 244 - spin_lock(&inode->i_lock); 244 + spin_lock(&ci->i_ceph_lock); 245 245 ci->i_nr_by_mode[fi->fmode]--; 246 246 fi->fmode |= CEPH_FILE_MODE_LAZY; 247 247 ci->i_nr_by_mode[fi->fmode]++; 248 - spin_unlock(&inode->i_lock); 248 + spin_unlock(&ci->i_ceph_lock); 249 249 dout("ioctl_layzio: file %p marked lazy\n", file); 250 250 251 251 ceph_check_caps(ci, 0, NULL);
+17 -16
fs/ceph/mds_client.c
··· 732 732 } 733 733 } 734 734 735 - spin_lock(&inode->i_lock); 735 + spin_lock(&ci->i_ceph_lock); 736 736 cap = NULL; 737 737 if (mode == USE_AUTH_MDS) 738 738 cap = ci->i_auth_cap; 739 739 if (!cap && !RB_EMPTY_ROOT(&ci->i_caps)) 740 740 cap = rb_entry(rb_first(&ci->i_caps), struct ceph_cap, ci_node); 741 741 if (!cap) { 742 - spin_unlock(&inode->i_lock); 742 + spin_unlock(&ci->i_ceph_lock); 743 743 goto random; 744 744 } 745 745 mds = cap->session->s_mds; 746 746 dout("choose_mds %p %llx.%llx mds%d (%scap %p)\n", 747 747 inode, ceph_vinop(inode), mds, 748 748 cap == ci->i_auth_cap ? "auth " : "", cap); 749 - spin_unlock(&inode->i_lock); 749 + spin_unlock(&ci->i_ceph_lock); 750 750 return mds; 751 751 752 752 random: ··· 951 951 952 952 dout("removing cap %p, ci is %p, inode is %p\n", 953 953 cap, ci, &ci->vfs_inode); 954 - spin_lock(&inode->i_lock); 954 + spin_lock(&ci->i_ceph_lock); 955 955 __ceph_remove_cap(cap); 956 956 if (!__ceph_is_any_real_caps(ci)) { 957 957 struct ceph_mds_client *mdsc = ··· 984 984 } 985 985 spin_unlock(&mdsc->cap_dirty_lock); 986 986 } 987 - spin_unlock(&inode->i_lock); 987 + spin_unlock(&ci->i_ceph_lock); 988 988 while (drop--) 989 989 iput(inode); 990 990 return 0; ··· 1015 1015 1016 1016 wake_up_all(&ci->i_cap_wq); 1017 1017 if (arg) { 1018 - spin_lock(&inode->i_lock); 1018 + spin_lock(&ci->i_ceph_lock); 1019 1019 ci->i_wanted_max_size = 0; 1020 1020 ci->i_requested_max_size = 0; 1021 - spin_unlock(&inode->i_lock); 1021 + spin_unlock(&ci->i_ceph_lock); 1022 1022 } 1023 1023 return 0; 1024 1024 } ··· 1151 1151 if (session->s_trim_caps <= 0) 1152 1152 return -1; 1153 1153 1154 - spin_lock(&inode->i_lock); 1154 + spin_lock(&ci->i_ceph_lock); 1155 1155 mine = cap->issued | cap->implemented; 1156 1156 used = __ceph_caps_used(ci); 1157 1157 oissued = __ceph_caps_issued_other(ci, cap); ··· 1170 1170 __ceph_remove_cap(cap); 1171 1171 } else { 1172 1172 /* try to drop referring dentries */ 1173 - spin_unlock(&inode->i_lock); 1173 + 
spin_unlock(&ci->i_ceph_lock); 1174 1174 d_prune_aliases(inode); 1175 1175 dout("trim_caps_cb %p cap %p pruned, count now %d\n", 1176 1176 inode, cap, atomic_read(&inode->i_count)); ··· 1178 1178 } 1179 1179 1180 1180 out: 1181 - spin_unlock(&inode->i_lock); 1181 + spin_unlock(&ci->i_ceph_lock); 1182 1182 return 0; 1183 1183 } 1184 1184 ··· 1296 1296 i_flushing_item); 1297 1297 struct inode *inode = &ci->vfs_inode; 1298 1298 1299 - spin_lock(&inode->i_lock); 1299 + spin_lock(&ci->i_ceph_lock); 1300 1300 if (ci->i_cap_flush_seq <= want_flush_seq) { 1301 1301 dout("check_cap_flush still flushing %p " 1302 1302 "seq %lld <= %lld to mds%d\n", inode, ··· 1304 1304 session->s_mds); 1305 1305 ret = 0; 1306 1306 } 1307 - spin_unlock(&inode->i_lock); 1307 + spin_unlock(&ci->i_ceph_lock); 1308 1308 } 1309 1309 mutex_unlock(&session->s_mutex); 1310 1310 ceph_put_mds_session(session); ··· 1495 1495 pos, temp); 1496 1496 } else if (stop_on_nosnap && inode && 1497 1497 ceph_snap(inode) == CEPH_NOSNAP) { 1498 + spin_unlock(&temp->d_lock); 1498 1499 break; 1499 1500 } else { 1500 1501 pos -= temp->d_name.len; ··· 2012 2011 struct ceph_inode_info *ci = ceph_inode(inode); 2013 2012 2014 2013 dout("invalidate_dir_request %p (D_COMPLETE, lease(s))\n", inode); 2015 - spin_lock(&inode->i_lock); 2014 + spin_lock(&ci->i_ceph_lock); 2016 2015 ceph_dir_clear_complete(inode); 2017 2016 ci->i_release_count++; 2018 - spin_unlock(&inode->i_lock); 2017 + spin_unlock(&ci->i_ceph_lock); 2019 2018 2020 2019 if (req->r_dentry) 2021 2020 ceph_invalidate_dentry_lease(req->r_dentry); ··· 2423 2422 if (err) 2424 2423 goto out_free; 2425 2424 2426 - spin_lock(&inode->i_lock); 2425 + spin_lock(&ci->i_ceph_lock); 2427 2426 cap->seq = 0; /* reset cap seq */ 2428 2427 cap->issue_seq = 0; /* and issue_seq */ 2429 2428 ··· 2446 2445 rec.v1.pathbase = cpu_to_le64(pathbase); 2447 2446 reclen = sizeof(rec.v1); 2448 2447 } 2449 - spin_unlock(&inode->i_lock); 2448 + spin_unlock(&ci->i_ceph_lock); 2450 2449 2451 
2450 if (recon_state->flock) { 2452 2451 int num_fcntl_locks, num_flock_locks;
+1 -1
fs/ceph/mds_client.h
···
20   20    *
21   21    *  mdsc->snap_rwsem
22   22    *
23       -  *    inode->i_lock
     23  +  *    ci->i_ceph_lock
24   24    *      mdsc->snap_flush_lock
25   25    *      mdsc->cap_delay_lock
26   26    *
+8 -8
fs/ceph/snap.c
··· 446 446 return; 447 447 } 448 448 449 - spin_lock(&inode->i_lock); 449 + spin_lock(&ci->i_ceph_lock); 450 450 used = __ceph_caps_used(ci); 451 451 dirty = __ceph_caps_dirty(ci); 452 452 ··· 528 528 kfree(capsnap); 529 529 } 530 530 531 - spin_unlock(&inode->i_lock); 531 + spin_unlock(&ci->i_ceph_lock); 532 532 } 533 533 534 534 /* ··· 537 537 * 538 538 * If capsnap can now be flushed, add to snap_flush list, and return 1. 539 539 * 540 - * Caller must hold i_lock. 540 + * Caller must hold i_ceph_lock. 541 541 */ 542 542 int __ceph_finish_cap_snap(struct ceph_inode_info *ci, 543 543 struct ceph_cap_snap *capsnap) ··· 739 739 inode = &ci->vfs_inode; 740 740 ihold(inode); 741 741 spin_unlock(&mdsc->snap_flush_lock); 742 - spin_lock(&inode->i_lock); 742 + spin_lock(&ci->i_ceph_lock); 743 743 __ceph_flush_snaps(ci, &session, 0); 744 - spin_unlock(&inode->i_lock); 744 + spin_unlock(&ci->i_ceph_lock); 745 745 iput(inode); 746 746 spin_lock(&mdsc->snap_flush_lock); 747 747 } ··· 847 847 continue; 848 848 ci = ceph_inode(inode); 849 849 850 - spin_lock(&inode->i_lock); 850 + spin_lock(&ci->i_ceph_lock); 851 851 if (!ci->i_snap_realm) 852 852 goto skip_inode; 853 853 /* ··· 876 876 oldrealm = ci->i_snap_realm; 877 877 ci->i_snap_realm = realm; 878 878 spin_unlock(&realm->inodes_with_caps_lock); 879 - spin_unlock(&inode->i_lock); 879 + spin_unlock(&ci->i_ceph_lock); 880 880 881 881 ceph_get_snap_realm(mdsc, realm); 882 882 ceph_put_snap_realm(mdsc, oldrealm); ··· 885 885 continue; 886 886 887 887 skip_inode: 888 - spin_unlock(&inode->i_lock); 888 + spin_unlock(&ci->i_ceph_lock); 889 889 iput(inode); 890 890 } 891 891
+1 -1
fs/ceph/super.c
···
383  383          if (fsopt->rsize != CEPH_RSIZE_DEFAULT)
384  384                  seq_printf(m, ",rsize=%d", fsopt->rsize);
385  385          if (fsopt->rasize != CEPH_RASIZE_DEFAULT)
386       -                 seq_printf(m, ",rasize=%d", fsopt->rsize);
     386  +                 seq_printf(m, ",rasize=%d", fsopt->rasize);
387  387          if (fsopt->congestion_kb != default_congestion_kb())
388  388                  seq_printf(m, ",write_congestion_kb=%d", fsopt->congestion_kb);
389  389          if (fsopt->caps_wanted_delay_min != CEPH_CAPS_WANTED_DELAY_MIN_DEFAULT)
+16 -15
fs/ceph/super.h
··· 220 220 * The locking for D_COMPLETE is a bit odd: 221 221 * - we can clear it at almost any time (see ceph_d_prune) 222 222 * - it is only meaningful if: 223 - * - we hold dir inode i_lock 223 + * - we hold dir inode i_ceph_lock 224 224 * - we hold dir FILE_SHARED caps 225 225 * - the dentry D_COMPLETE is set 226 226 */ ··· 250 250 struct ceph_inode_info { 251 251 struct ceph_vino i_vino; /* ceph ino + snap */ 252 252 253 + spinlock_t i_ceph_lock; 254 + 253 255 u64 i_version; 254 256 u32 i_time_warp_seq; 255 257 ··· 273 271 274 272 struct ceph_inode_xattrs_info i_xattrs; 275 273 276 - /* capabilities. protected _both_ by i_lock and cap->session's 274 + /* capabilities. protected _both_ by i_ceph_lock and cap->session's 277 275 * s_mutex. */ 278 276 struct rb_root i_caps; /* cap list */ 279 277 struct ceph_cap *i_auth_cap; /* authoritative cap, if any */ ··· 439 437 { 440 438 struct ceph_inode_info *ci = ceph_inode(inode); 441 439 442 - spin_lock(&inode->i_lock); 440 + spin_lock(&ci->i_ceph_lock); 443 441 ci->i_ceph_flags &= ~mask; 444 - spin_unlock(&inode->i_lock); 442 + spin_unlock(&ci->i_ceph_lock); 445 443 } 446 444 447 445 static inline void ceph_i_set(struct inode *inode, unsigned mask) 448 446 { 449 447 struct ceph_inode_info *ci = ceph_inode(inode); 450 448 451 - spin_lock(&inode->i_lock); 449 + spin_lock(&ci->i_ceph_lock); 452 450 ci->i_ceph_flags |= mask; 453 - spin_unlock(&inode->i_lock); 451 + spin_unlock(&ci->i_ceph_lock); 454 452 } 455 453 456 454 static inline bool ceph_i_test(struct inode *inode, unsigned mask) ··· 458 456 struct ceph_inode_info *ci = ceph_inode(inode); 459 457 bool r; 460 458 461 - spin_lock(&inode->i_lock); 459 + spin_lock(&ci->i_ceph_lock); 462 460 r = (ci->i_ceph_flags & mask) == mask; 463 - spin_unlock(&inode->i_lock); 461 + spin_unlock(&ci->i_ceph_lock); 464 462 return r; 465 463 } 466 464 ··· 510 508 static inline int ceph_caps_issued(struct ceph_inode_info *ci) 511 509 { 512 510 int issued; 513 - 
spin_lock(&ci->vfs_inode.i_lock); 511 + spin_lock(&ci->i_ceph_lock); 514 512 issued = __ceph_caps_issued(ci, NULL); 515 - spin_unlock(&ci->vfs_inode.i_lock); 513 + spin_unlock(&ci->i_ceph_lock); 516 514 return issued; 517 515 } 518 516 ··· 520 518 int touch) 521 519 { 522 520 int r; 523 - spin_lock(&ci->vfs_inode.i_lock); 521 + spin_lock(&ci->i_ceph_lock); 524 522 r = __ceph_caps_issued_mask(ci, mask, touch); 525 - spin_unlock(&ci->vfs_inode.i_lock); 523 + spin_unlock(&ci->i_ceph_lock); 526 524 return r; 527 525 } 528 526 ··· 745 743 extern void __ceph_remove_cap(struct ceph_cap *cap); 746 744 static inline void ceph_remove_cap(struct ceph_cap *cap) 747 745 { 748 - struct inode *inode = &cap->ci->vfs_inode; 749 - spin_lock(&inode->i_lock); 746 + spin_lock(&cap->ci->i_ceph_lock); 750 747 __ceph_remove_cap(cap); 751 - spin_unlock(&inode->i_lock); 748 + spin_unlock(&cap->ci->i_ceph_lock); 752 749 } 753 750 extern void ceph_put_cap(struct ceph_mds_client *mdsc, 754 751 struct ceph_cap *cap);
+21 -21
fs/ceph/xattr.c
··· 343 343 } 344 344 345 345 static int __build_xattrs(struct inode *inode) 346 - __releases(inode->i_lock) 347 - __acquires(inode->i_lock) 346 + __releases(ci->i_ceph_lock) 347 + __acquires(ci->i_ceph_lock) 348 348 { 349 349 u32 namelen; 350 350 u32 numattr = 0; ··· 372 372 end = p + ci->i_xattrs.blob->vec.iov_len; 373 373 ceph_decode_32_safe(&p, end, numattr, bad); 374 374 xattr_version = ci->i_xattrs.version; 375 - spin_unlock(&inode->i_lock); 375 + spin_unlock(&ci->i_ceph_lock); 376 376 377 377 xattrs = kcalloc(numattr, sizeof(struct ceph_xattr *), 378 378 GFP_NOFS); ··· 387 387 goto bad_lock; 388 388 } 389 389 390 - spin_lock(&inode->i_lock); 390 + spin_lock(&ci->i_ceph_lock); 391 391 if (ci->i_xattrs.version != xattr_version) { 392 392 /* lost a race, retry */ 393 393 for (i = 0; i < numattr; i++) ··· 418 418 419 419 return err; 420 420 bad_lock: 421 - spin_lock(&inode->i_lock); 421 + spin_lock(&ci->i_ceph_lock); 422 422 bad: 423 423 if (xattrs) { 424 424 for (i = 0; i < numattr; i++) ··· 512 512 if (vxattrs) 513 513 vxattr = ceph_match_vxattr(vxattrs, name); 514 514 515 - spin_lock(&inode->i_lock); 515 + spin_lock(&ci->i_ceph_lock); 516 516 dout("getxattr %p ver=%lld index_ver=%lld\n", inode, 517 517 ci->i_xattrs.version, ci->i_xattrs.index_version); 518 518 ··· 520 520 (ci->i_xattrs.index_version >= ci->i_xattrs.version)) { 521 521 goto get_xattr; 522 522 } else { 523 - spin_unlock(&inode->i_lock); 523 + spin_unlock(&ci->i_ceph_lock); 524 524 /* get xattrs from mds (if we don't already have them) */ 525 525 err = ceph_do_getattr(inode, CEPH_STAT_CAP_XATTR); 526 526 if (err) 527 527 return err; 528 528 } 529 529 530 - spin_lock(&inode->i_lock); 530 + spin_lock(&ci->i_ceph_lock); 531 531 532 532 if (vxattr && vxattr->readonly) { 533 533 err = vxattr->getxattr_cb(ci, value, size); ··· 558 558 memcpy(value, xattr->val, xattr->val_len); 559 559 560 560 out: 561 - spin_unlock(&inode->i_lock); 561 + spin_unlock(&ci->i_ceph_lock); 562 562 return err; 563 563 } 564 
564 ··· 573 573 u32 len; 574 574 int i; 575 575 576 - spin_lock(&inode->i_lock); 576 + spin_lock(&ci->i_ceph_lock); 577 577 dout("listxattr %p ver=%lld index_ver=%lld\n", inode, 578 578 ci->i_xattrs.version, ci->i_xattrs.index_version); 579 579 ··· 581 581 (ci->i_xattrs.index_version >= ci->i_xattrs.version)) { 582 582 goto list_xattr; 583 583 } else { 584 - spin_unlock(&inode->i_lock); 584 + spin_unlock(&ci->i_ceph_lock); 585 585 err = ceph_do_getattr(inode, CEPH_STAT_CAP_XATTR); 586 586 if (err) 587 587 return err; 588 588 } 589 589 590 - spin_lock(&inode->i_lock); 590 + spin_lock(&ci->i_ceph_lock); 591 591 592 592 err = __build_xattrs(inode); 593 593 if (err < 0) ··· 619 619 } 620 620 621 621 out: 622 - spin_unlock(&inode->i_lock); 622 + spin_unlock(&ci->i_ceph_lock); 623 623 return err; 624 624 } 625 625 ··· 739 739 if (!xattr) 740 740 goto out; 741 741 742 - spin_lock(&inode->i_lock); 742 + spin_lock(&ci->i_ceph_lock); 743 743 retry: 744 744 issued = __ceph_caps_issued(ci, NULL); 745 745 if (!(issued & CEPH_CAP_XATTR_EXCL)) ··· 752 752 required_blob_size > ci->i_xattrs.prealloc_blob->alloc_len) { 753 753 struct ceph_buffer *blob = NULL; 754 754 755 - spin_unlock(&inode->i_lock); 755 + spin_unlock(&ci->i_ceph_lock); 756 756 dout(" preaallocating new blob size=%d\n", required_blob_size); 757 757 blob = ceph_buffer_new(required_blob_size, GFP_NOFS); 758 758 if (!blob) 759 759 goto out; 760 - spin_lock(&inode->i_lock); 760 + spin_lock(&ci->i_ceph_lock); 761 761 if (ci->i_xattrs.prealloc_blob) 762 762 ceph_buffer_put(ci->i_xattrs.prealloc_blob); 763 763 ci->i_xattrs.prealloc_blob = blob; ··· 770 770 dirty = __ceph_mark_dirty_caps(ci, CEPH_CAP_XATTR_EXCL); 771 771 ci->i_xattrs.dirty = true; 772 772 inode->i_ctime = CURRENT_TIME; 773 - spin_unlock(&inode->i_lock); 773 + spin_unlock(&ci->i_ceph_lock); 774 774 if (dirty) 775 775 __mark_inode_dirty(inode, dirty); 776 776 return err; 777 777 778 778 do_sync: 779 - spin_unlock(&inode->i_lock); 779 + 
spin_unlock(&ci->i_ceph_lock); 780 780 err = ceph_sync_setxattr(dentry, name, value, size, flags); 781 781 out: 782 782 kfree(newname); ··· 833 833 return -EOPNOTSUPP; 834 834 } 835 835 836 - spin_lock(&inode->i_lock); 836 + spin_lock(&ci->i_ceph_lock); 837 837 __build_xattrs(inode); 838 838 issued = __ceph_caps_issued(ci, NULL); 839 839 dout("removexattr %p issued %s\n", inode, ceph_cap_string(issued)); ··· 846 846 ci->i_xattrs.dirty = true; 847 847 inode->i_ctime = CURRENT_TIME; 848 848 849 - spin_unlock(&inode->i_lock); 849 + spin_unlock(&ci->i_ceph_lock); 850 850 if (dirty) 851 851 __mark_inode_dirty(inode, dirty); 852 852 return err; 853 853 do_sync: 854 - spin_unlock(&inode->i_lock); 854 + spin_unlock(&ci->i_ceph_lock); 855 855 err = ceph_send_removexattr(dentry, name); 856 856 return err; 857 857 }
+2
fs/cifs/connect.c
··· 441 441 smb_msg.msg_controllen = 0; 442 442 443 443 for (total_read = 0; to_read; total_read += length, to_read -= length) { 444 + try_to_freeze(); 445 + 444 446 if (server_unresponsive(server)) { 445 447 total_read = -EAGAIN; 446 448 break;
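The cifs hunk adds a `try_to_freeze()` call at the top of the receive loop so suspend does not have to wait for an entire network transfer to finish. A rough userspace analogue of that "safe point once per iteration" pattern (the freezer is replaced by a simple flag; all names below are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace analogue of try_to_freeze(): a long receive loop polls a
 * "please pause" flag once per iteration, so a suspend-like request is
 * honored at most one chunk late instead of one transfer late. */
bool freeze_requested;
int freeze_count;

void try_to_freeze_stub(void)
{
    if (freeze_requested)
        freeze_count++;    /* a real freezer would park the task here */
}

/* Shaped like the cifs loop: check the safe point first, then do one
 * chunk of I/O work per iteration. */
int receive_loop(int to_read, int chunk)
{
    int total_read = 0;

    for (; to_read > 0; to_read -= chunk, total_read += chunk)
        try_to_freeze_stub();
    return total_read;
}
```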
+26
fs/cifs/file.c
··· 702 702 lock->type, lock->netfid, conf_lock); 703 703 } 704 704 705 + /* 706 + * Check if there is another lock that prevents us to set the lock (mandatory 707 + * style). If such a lock exists, update the flock structure with its 708 + * properties. Otherwise, set the flock type to F_UNLCK if we can cache brlocks 709 + * or leave it the same if we can't. Returns 0 if we don't need to request to 710 + * the server or 1 otherwise. 711 + */ 705 712 static int 706 713 cifs_lock_test(struct cifsInodeInfo *cinode, __u64 offset, __u64 length, 707 714 __u8 type, __u16 netfid, struct file_lock *flock) ··· 746 739 mutex_unlock(&cinode->lock_mutex); 747 740 } 748 741 742 + /* 743 + * Set the byte-range lock (mandatory style). Returns: 744 + * 1) 0, if we set the lock and don't need to request to the server; 745 + * 2) 1, if no locks prevent us but we need to request to the server; 746 + * 3) -EACCESS, if there is a lock that prevents us and wait is false. 747 + */ 749 748 static int 750 749 cifs_lock_add_if(struct cifsInodeInfo *cinode, struct cifsLockInfo *lock, 751 750 bool wait) ··· 791 778 return rc; 792 779 } 793 780 781 + /* 782 + * Check if there is another lock that prevents us to set the lock (posix 783 + * style). If such a lock exists, update the flock structure with its 784 + * properties. Otherwise, set the flock type to F_UNLCK if we can cache brlocks 785 + * or leave it the same if we can't. Returns 0 if we don't need to request to 786 + * the server or 1 otherwise. 787 + */ 794 788 static int 795 789 cifs_posix_lock_test(struct file *file, struct file_lock *flock) 796 790 { ··· 820 800 return rc; 821 801 } 822 802 803 + /* 804 + * Set the byte-range lock (posix style). Returns: 805 + * 1) 0, if we set the lock and don't need to request to the server; 806 + * 2) 1, if we need to request to the server; 807 + * 3) <0, if the error occurs while setting the lock. 
808 + */ 823 809 static int 824 810 cifs_posix_lock_set(struct file *file, struct file_lock *flock) 825 811 {
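The new comments describe checking whether an existing byte-range lock conflicts with a requested one before going to the server. The core overlap-and-compatibility test those helpers perform can be sketched in isolation (a simplified structure, not the real cifs `cifsLockInfo`):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified byte-range lock; illustrative, not the cifs structures. */
struct brlock {
    unsigned long long offset;
    unsigned long long length;
    int exclusive;       /* 0 = shared (read), 1 = exclusive (write) */
};

/* Two locks conflict when their byte ranges overlap and at least one
 * side is exclusive - the mandatory-style check that cifs_lock_test()
 * runs against the per-inode lock list before asking the server. */
bool brlock_conflicts(const struct brlock *a, const struct brlock *b)
{
    bool overlap = a->offset < b->offset + b->length &&
                   b->offset < a->offset + a->length;
    return overlap && (a->exclusive || b->exclusive);
}
```

When no cached lock conflicts, the code can answer locally (return 0 in the comment's terms); otherwise it must fall through to a server request.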
+8 -2
fs/cifs/readdir.c
··· 554 554 rc); 555 555 return rc; 556 556 } 557 - cifs_save_resume_key(cifsFile->srch_inf.last_entry, cifsFile); 557 + /* FindFirst/Next set last_entry to NULL on malformed reply */ 558 + if (cifsFile->srch_inf.last_entry) 559 + cifs_save_resume_key(cifsFile->srch_inf.last_entry, 560 + cifsFile); 558 561 } 559 562 560 563 while ((index_to_find >= cifsFile->srch_inf.index_of_last_entry) && ··· 565 562 cFYI(1, "calling findnext2"); 566 563 rc = CIFSFindNext(xid, pTcon, cifsFile->netfid, 567 564 &cifsFile->srch_inf); 568 - cifs_save_resume_key(cifsFile->srch_inf.last_entry, cifsFile); 565 + /* FindFirst/Next set last_entry to NULL on malformed reply */ 566 + if (cifsFile->srch_inf.last_entry) 567 + cifs_save_resume_key(cifsFile->srch_inf.last_entry, 568 + cifsFile); 569 569 if (rc) 570 570 return -ENOENT; 571 571 }
+3 -3
fs/cifs/smbencrypt.c
··· 209 209 { 210 210 int rc; 211 211 int len; 212 - __u16 wpwd[129]; 212 + __le16 wpwd[129]; 213 213 214 214 /* Password cannot be longer than 128 characters */ 215 215 if (passwd) /* Password must be converted to NT unicode */ ··· 219 219 *wpwd = 0; /* Ensure string is null terminated */ 220 220 } 221 221 222 - rc = mdfour(p16, (unsigned char *) wpwd, len * sizeof(__u16)); 223 - memset(wpwd, 0, 129 * sizeof(__u16)); 222 + rc = mdfour(p16, (unsigned char *) wpwd, len * sizeof(__le16)); 223 + memset(wpwd, 0, 129 * sizeof(__le16)); 224 224 225 225 return rc; 226 226 }
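The smbencrypt hunk retypes `wpwd` from `__u16` to `__le16` because the NT password hash is defined over the UTF-16LE encoding of the password, independent of host byte order. The property the annotation enforces can be shown with an explicit byte-by-byte conversion (a hedged sketch, not the kernel's conversion routine):

```c
#include <assert.h>
#include <stddef.h>

/* Encode an ASCII password as UTF-16LE bytes regardless of host
 * endianness, by emitting each 16-bit code unit low byte first.
 * Returns the number of bytes written. */
size_t ascii_to_utf16le(const char *src, unsigned char *dst, size_t dst_len)
{
    size_t n = 0;

    while (*src && n + 2 <= dst_len) {
        dst[n++] = (unsigned char)*src++;   /* low byte: the ASCII code */
        dst[n++] = 0x00;                    /* high byte: always zero   */
    }
    return n;
}
```

Hashing a natively ordered `__u16` buffer on a big-endian host would feed the digest byte-swapped code units and produce a hash the server rejects, which is the bug the type change guards against.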
+1 -1
fs/configfs/inode.c
··· 292 292 return bdi_init(&configfs_backing_dev_info); 293 293 } 294 294 295 - void __exit configfs_inode_exit(void) 295 + void configfs_inode_exit(void) 296 296 { 297 297 bdi_destroy(&configfs_backing_dev_info); 298 298 }
+18 -20
fs/configfs/mount.c
··· 143 143 goto out; 144 144 145 145 config_kobj = kobject_create_and_add("config", kernel_kobj); 146 - if (!config_kobj) { 147 - kmem_cache_destroy(configfs_dir_cachep); 148 - configfs_dir_cachep = NULL; 149 - goto out; 150 - } 151 - 152 - err = register_filesystem(&configfs_fs_type); 153 - if (err) { 154 - printk(KERN_ERR "configfs: Unable to register filesystem!\n"); 155 - kobject_put(config_kobj); 156 - kmem_cache_destroy(configfs_dir_cachep); 157 - configfs_dir_cachep = NULL; 158 - goto out; 159 - } 146 + if (!config_kobj) 147 + goto out2; 160 148 161 149 err = configfs_inode_init(); 162 - if (err) { 163 - unregister_filesystem(&configfs_fs_type); 164 - kobject_put(config_kobj); 165 - kmem_cache_destroy(configfs_dir_cachep); 166 - configfs_dir_cachep = NULL; 167 - } 150 + if (err) 151 + goto out3; 152 + 153 + err = register_filesystem(&configfs_fs_type); 154 + if (err) 155 + goto out4; 156 + 157 + return 0; 158 + out4: 159 + printk(KERN_ERR "configfs: Unable to register filesystem!\n"); 160 + configfs_inode_exit(); 161 + out3: 162 + kobject_put(config_kobj); 163 + out2: 164 + kmem_cache_destroy(configfs_dir_cachep); 165 + configfs_dir_cachep = NULL; 168 166 out: 169 167 return err; 170 168 }
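The configfs rewrite collapses three duplicated cleanup sequences into one chain of labels, the standard kernel error-unwinding idiom: acquire resources in order, and on failure jump to the label that releases everything acquired so far, in reverse order. A self-contained sketch of the shape (the resources here are hypothetical stand-ins):

```c
#include <assert.h>

int acquired_a, acquired_b, acquired_c;   /* track what is currently held */

int fake_acquire(int *flag, int should_fail)
{
    if (should_fail)
        return -1;
    *flag = 1;
    return 0;
}

void fake_release(int *flag) { *flag = 0; }

/* Mirrors the reworked configfs_init(): each failure jumps to the label
 * that undoes only the steps that already succeeded, in reverse order. */
int init_chain(int fail_at)   /* 0 = no failure, 1..3 = step to fail */
{
    int err = -1;

    if (fake_acquire(&acquired_a, fail_at == 1))
        goto out;
    if (fake_acquire(&acquired_b, fail_at == 2))
        goto out_a;
    if (fake_acquire(&acquired_c, fail_at == 3))
        goto out_b;
    return 0;

out_b:
    fake_release(&acquired_b);
out_a:
    fake_release(&acquired_a);
out:
    return err;
}
```

The key invariant is that the label order is the exact reverse of the acquisition order, so any entry point into the chain releases precisely what is held.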
+44 -27
fs/dcache.c
··· 2439 2439 /** 2440 2440 * prepend_path - Prepend path string to a buffer 2441 2441 * @path: the dentry/vfsmount to report 2442 - * @root: root vfsmnt/dentry (may be modified by this function) 2442 + * @root: root vfsmnt/dentry 2443 2443 * @buffer: pointer to the end of the buffer 2444 2444 * @buflen: pointer to buffer length 2445 2445 * 2446 2446 * Caller holds the rename_lock. 2447 - * 2448 - * If path is not reachable from the supplied root, then the value of 2449 - * root is changed (without modifying refcounts). 2450 2447 */ 2451 - static int prepend_path(const struct path *path, struct path *root, 2448 + static int prepend_path(const struct path *path, 2449 + const struct path *root, 2452 2450 char **buffer, int *buflen) 2453 2451 { 2454 2452 struct dentry *dentry = path->dentry; ··· 2481 2483 dentry = parent; 2482 2484 } 2483 2485 2484 - out: 2485 2486 if (!error && !slash) 2486 2487 error = prepend(buffer, buflen, "/", 1); 2487 2488 2489 + out: 2488 2490 br_read_unlock(vfsmount_lock); 2489 2491 return error; 2490 2492 ··· 2498 2500 WARN(1, "Root dentry has weird name <%.*s>\n", 2499 2501 (int) dentry->d_name.len, dentry->d_name.name); 2500 2502 } 2501 - root->mnt = vfsmnt; 2502 - root->dentry = dentry; 2503 + if (!slash) 2504 + error = prepend(buffer, buflen, "/", 1); 2505 + if (!error) 2506 + error = vfsmnt->mnt_ns ? 1 : 2; 2503 2507 goto out; 2504 2508 } 2505 2509 2506 2510 /** 2507 2511 * __d_path - return the path of a dentry 2508 2512 * @path: the dentry/vfsmount to report 2509 - * @root: root vfsmnt/dentry (may be modified by this function) 2513 + * @root: root vfsmnt/dentry 2510 2514 * @buf: buffer to return value in 2511 2515 * @buflen: buffer length 2512 2516 * ··· 2519 2519 * 2520 2520 * "buflen" should be positive. 2521 2521 * 2522 - * If path is not reachable from the supplied root, then the value of 2523 - * root is changed (without modifying refcounts). 2522 + * If the path is not reachable from the supplied root, return %NULL. 
2524 2523 */ 2525 - char *__d_path(const struct path *path, struct path *root, 2524 + char *__d_path(const struct path *path, 2525 + const struct path *root, 2526 2526 char *buf, int buflen) 2527 2527 { 2528 2528 char *res = buf + buflen; ··· 2533 2533 error = prepend_path(path, root, &res, &buflen); 2534 2534 write_sequnlock(&rename_lock); 2535 2535 2536 - if (error) 2536 + if (error < 0) 2537 + return ERR_PTR(error); 2538 + if (error > 0) 2539 + return NULL; 2540 + return res; 2541 + } 2542 + 2543 + char *d_absolute_path(const struct path *path, 2544 + char *buf, int buflen) 2545 + { 2546 + struct path root = {}; 2547 + char *res = buf + buflen; 2548 + int error; 2549 + 2550 + prepend(&res, &buflen, "\0", 1); 2551 + write_seqlock(&rename_lock); 2552 + error = prepend_path(path, &root, &res, &buflen); 2553 + write_sequnlock(&rename_lock); 2554 + 2555 + if (error > 1) 2556 + error = -EINVAL; 2557 + if (error < 0) 2537 2558 return ERR_PTR(error); 2538 2559 return res; 2539 2560 } ··· 2562 2541 /* 2563 2542 * same as __d_path but appends "(deleted)" for unlinked files. 
2564 2543 */ 2565 - static int path_with_deleted(const struct path *path, struct path *root, 2566 - char **buf, int *buflen) 2544 + static int path_with_deleted(const struct path *path, 2545 + const struct path *root, 2546 + char **buf, int *buflen) 2567 2547 { 2568 2548 prepend(buf, buflen, "\0", 1); 2569 2549 if (d_unlinked(path->dentry)) { ··· 2601 2579 { 2602 2580 char *res = buf + buflen; 2603 2581 struct path root; 2604 - struct path tmp; 2605 2582 int error; 2606 2583 2607 2584 /* ··· 2615 2594 2616 2595 get_fs_root(current->fs, &root); 2617 2596 write_seqlock(&rename_lock); 2618 - tmp = root; 2619 - error = path_with_deleted(path, &tmp, &res, &buflen); 2620 - if (error) 2597 + error = path_with_deleted(path, &root, &res, &buflen); 2598 + if (error < 0) 2621 2599 res = ERR_PTR(error); 2622 2600 write_sequnlock(&rename_lock); 2623 2601 path_put(&root); ··· 2637 2617 { 2638 2618 char *res = buf + buflen; 2639 2619 struct path root; 2640 - struct path tmp; 2641 2620 int error; 2642 2621 2643 2622 if (path->dentry->d_op && path->dentry->d_op->d_dname) ··· 2644 2625 2645 2626 get_fs_root(current->fs, &root); 2646 2627 write_seqlock(&rename_lock); 2647 - tmp = root; 2648 - error = path_with_deleted(path, &tmp, &res, &buflen); 2649 - if (!error && !path_equal(&tmp, &root)) 2628 + error = path_with_deleted(path, &root, &res, &buflen); 2629 + if (error > 0) 2650 2630 error = prepend_unreachable(&res, &buflen); 2651 2631 write_sequnlock(&rename_lock); 2652 2632 path_put(&root); ··· 2776 2758 write_seqlock(&rename_lock); 2777 2759 if (!d_unlinked(pwd.dentry)) { 2778 2760 unsigned long len; 2779 - struct path tmp = root; 2780 2761 char *cwd = page + PAGE_SIZE; 2781 2762 int buflen = PAGE_SIZE; 2782 2763 2783 2764 prepend(&cwd, &buflen, "\0", 1); 2784 - error = prepend_path(&pwd, &tmp, &cwd, &buflen); 2765 + error = prepend_path(&pwd, &root, &cwd, &buflen); 2785 2766 write_sequnlock(&rename_lock); 2786 2767 2787 - if (error) 2768 + if (error < 0) 2788 2769 goto out; 2789 
2770 2790 2771 /* Unreachable from current root */ 2791 - if (!path_equal(&tmp, &root)) { 2772 + if (error > 0) { 2792 2773 error = prepend_unreachable(&cwd, &buflen); 2793 2774 if (error) 2794 2775 goto out;
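The dcache rework keeps `root` const and signals "unreachable from the supplied root" with a positive return value instead of mutating the caller's path. Underneath, `prepend_path()` builds the string right to left into the end of a caller-supplied buffer via `prepend()`. That backwards-building idiom can be sketched on its own (the `build_path` driver below is a hypothetical stand-in for the dentry walk):

```c
#include <assert.h>
#include <string.h>

/* Prepend 'len' bytes to the front of a buffer that is filled from the
 * end backwards - the helper the dcache code builds paths with. */
int prepend(char **buffer, int *buflen, const char *str, int len)
{
    *buflen -= len;
    if (*buflen < 0)
        return -36;            /* -ENAMETOOLONG */
    *buffer -= len;
    memcpy(*buffer, str, len);
    return 0;
}

/* Build "/a/b/..." from components, deepest component first, the way
 * prepend_path() walks from the dentry up toward the root. */
char *build_path(const char *names[], int count, char *buf, int buflen)
{
    char *end = buf + buflen;
    int i;

    prepend(&end, &buflen, "\0", 1);
    for (i = count - 1; i >= 0; i--) {
        if (prepend(&end, &buflen, names[i], (int)strlen(names[i])) ||
            prepend(&end, &buflen, "/", 1))
            return NULL;       /* buffer too small */
    }
    return end;                /* points at the start of the path */
}
```

Because the string grows leftward, the returned pointer sits somewhere inside the buffer, which is why `__d_path()` and friends return a `char *` into `buf` rather than `buf` itself.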
+14 -12
fs/ecryptfs/crypto.c
··· 967 967 968 968 /** 969 969 * ecryptfs_new_file_context 970 - * @ecryptfs_dentry: The eCryptfs dentry 970 + * @ecryptfs_inode: The eCryptfs inode 971 971 * 972 972 * If the crypto context for the file has not yet been established, 973 973 * this is where we do that. Establishing a new crypto context ··· 984 984 * 985 985 * Returns zero on success; non-zero otherwise 986 986 */ 987 - int ecryptfs_new_file_context(struct dentry *ecryptfs_dentry) 987 + int ecryptfs_new_file_context(struct inode *ecryptfs_inode) 988 988 { 989 989 struct ecryptfs_crypt_stat *crypt_stat = 990 - &ecryptfs_inode_to_private(ecryptfs_dentry->d_inode)->crypt_stat; 990 + &ecryptfs_inode_to_private(ecryptfs_inode)->crypt_stat; 991 991 struct ecryptfs_mount_crypt_stat *mount_crypt_stat = 992 992 &ecryptfs_superblock_to_private( 993 - ecryptfs_dentry->d_sb)->mount_crypt_stat; 993 + ecryptfs_inode->i_sb)->mount_crypt_stat; 994 994 int cipher_name_len; 995 995 int rc = 0; 996 996 ··· 1299 1299 } 1300 1300 1301 1301 static int 1302 - ecryptfs_write_metadata_to_contents(struct dentry *ecryptfs_dentry, 1302 + ecryptfs_write_metadata_to_contents(struct inode *ecryptfs_inode, 1303 1303 char *virt, size_t virt_len) 1304 1304 { 1305 1305 int rc; 1306 1306 1307 - rc = ecryptfs_write_lower(ecryptfs_dentry->d_inode, virt, 1307 + rc = ecryptfs_write_lower(ecryptfs_inode, virt, 1308 1308 0, virt_len); 1309 1309 if (rc < 0) 1310 1310 printk(KERN_ERR "%s: Error attempting to write header " ··· 1338 1338 1339 1339 /** 1340 1340 * ecryptfs_write_metadata 1341 - * @ecryptfs_dentry: The eCryptfs dentry 1341 + * @ecryptfs_dentry: The eCryptfs dentry, which should be negative 1342 + * @ecryptfs_inode: The newly created eCryptfs inode 1342 1343 * 1343 1344 * Write the file headers out. 
This will likely involve a userspace 1344 1345 * callout, in which the session key is encrypted with one or more ··· 1349 1348 * 1350 1349 * Returns zero on success; non-zero on error 1351 1350 */ 1352 - int ecryptfs_write_metadata(struct dentry *ecryptfs_dentry) 1351 + int ecryptfs_write_metadata(struct dentry *ecryptfs_dentry, 1352 + struct inode *ecryptfs_inode) 1353 1353 { 1354 1354 struct ecryptfs_crypt_stat *crypt_stat = 1355 - &ecryptfs_inode_to_private(ecryptfs_dentry->d_inode)->crypt_stat; 1355 + &ecryptfs_inode_to_private(ecryptfs_inode)->crypt_stat; 1356 1356 unsigned int order; 1357 1357 char *virt; 1358 1358 size_t virt_len; ··· 1393 1391 rc = ecryptfs_write_metadata_to_xattr(ecryptfs_dentry, virt, 1394 1392 size); 1395 1393 else 1396 - rc = ecryptfs_write_metadata_to_contents(ecryptfs_dentry, virt, 1394 + rc = ecryptfs_write_metadata_to_contents(ecryptfs_inode, virt, 1397 1395 virt_len); 1398 1396 if (rc) { 1399 1397 printk(KERN_ERR "%s: Error writing metadata out to lower file; " ··· 1945 1943 1946 1944 /* We could either offset on every reverse map or just pad some 0x00's 1947 1945 * at the front here */ 1948 - static const unsigned char filename_rev_map[] = { 1946 + static const unsigned char filename_rev_map[256] = { 1949 1947 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* 7 */ 1950 1948 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* 15 */ 1951 1949 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* 23 */ ··· 1961 1959 0x00, 0x26, 0x27, 0x28, 0x29, 0x2A, 0x2B, 0x2C, /* 103 */ 1962 1960 0x2D, 0x2E, 0x2F, 0x30, 0x31, 0x32, 0x33, 0x34, /* 111 */ 1963 1961 0x35, 0x36, 0x37, 0x38, 0x39, 0x3A, 0x3B, 0x3C, /* 119 */ 1964 - 0x3D, 0x3E, 0x3F 1962 + 0x3D, 0x3E, 0x3F /* 123 - 255 initialized to 0x00 */ 1965 1963 }; 1966 1964 1967 1965 /**
+3 -2
fs/ecryptfs/ecryptfs_kernel.h
··· 584 584 int ecryptfs_write_inode_size_to_metadata(struct inode *ecryptfs_inode); 585 585 int ecryptfs_encrypt_page(struct page *page); 586 586 int ecryptfs_decrypt_page(struct page *page); 587 - int ecryptfs_write_metadata(struct dentry *ecryptfs_dentry); 587 + int ecryptfs_write_metadata(struct dentry *ecryptfs_dentry, 588 + struct inode *ecryptfs_inode); 588 589 int ecryptfs_read_metadata(struct dentry *ecryptfs_dentry); 589 - int ecryptfs_new_file_context(struct dentry *ecryptfs_dentry); 590 + int ecryptfs_new_file_context(struct inode *ecryptfs_inode); 590 591 void ecryptfs_write_crypt_stat_flags(char *page_virt, 591 592 struct ecryptfs_crypt_stat *crypt_stat, 592 593 size_t *written);
+22 -1
fs/ecryptfs/file.c
··· 139 139 return rc; 140 140 } 141 141 142 + static void ecryptfs_vma_close(struct vm_area_struct *vma) 143 + { 144 + filemap_write_and_wait(vma->vm_file->f_mapping); 145 + } 146 + 147 + static const struct vm_operations_struct ecryptfs_file_vm_ops = { 148 + .close = ecryptfs_vma_close, 149 + .fault = filemap_fault, 150 + }; 151 + 152 + static int ecryptfs_file_mmap(struct file *file, struct vm_area_struct *vma) 153 + { 154 + int rc; 155 + 156 + rc = generic_file_mmap(file, vma); 157 + if (!rc) 158 + vma->vm_ops = &ecryptfs_file_vm_ops; 159 + 160 + return rc; 161 + } 162 + 142 163 struct kmem_cache *ecryptfs_file_info_cache; 143 164 144 165 /** ··· 370 349 #ifdef CONFIG_COMPAT 371 350 .compat_ioctl = ecryptfs_compat_ioctl, 372 351 #endif 373 - .mmap = generic_file_mmap, 352 + .mmap = ecryptfs_file_mmap, 374 353 .open = ecryptfs_open, 375 354 .flush = ecryptfs_flush, 376 355 .release = ecryptfs_release,
+31 -21
fs/ecryptfs/inode.c
··· 172 172 * it. It will also update the eCryptfs directory inode to mimic the 173 173 * stat of the lower directory inode. 174 174 * 175 - * Returns zero on success; non-zero on error condition 175 + * Returns the new eCryptfs inode on success; an ERR_PTR on error condition 176 176 */ 177 - static int 177 + static struct inode * 178 178 ecryptfs_do_create(struct inode *directory_inode, 179 179 struct dentry *ecryptfs_dentry, int mode) 180 180 { 181 181 int rc; 182 182 struct dentry *lower_dentry; 183 183 struct dentry *lower_dir_dentry; 184 + struct inode *inode; 184 185 185 186 lower_dentry = ecryptfs_dentry_to_lower(ecryptfs_dentry); 186 187 lower_dir_dentry = lock_parent(lower_dentry); 187 188 if (IS_ERR(lower_dir_dentry)) { 188 189 ecryptfs_printk(KERN_ERR, "Error locking directory of " 189 190 "dentry\n"); 190 - rc = PTR_ERR(lower_dir_dentry); 191 + inode = ERR_CAST(lower_dir_dentry); 191 192 goto out; 192 193 } 193 194 rc = ecryptfs_create_underlying_file(lower_dir_dentry->d_inode, ··· 196 195 if (rc) { 197 196 printk(KERN_ERR "%s: Failure to create dentry in lower fs; " 198 197 "rc = [%d]\n", __func__, rc); 198 + inode = ERR_PTR(rc); 199 199 goto out_lock; 200 200 } 201 - rc = ecryptfs_interpose(lower_dentry, ecryptfs_dentry, 202 - directory_inode->i_sb); 203 - if (rc) { 204 - ecryptfs_printk(KERN_ERR, "Failure in ecryptfs_interpose\n"); 201 + inode = __ecryptfs_get_inode(lower_dentry->d_inode, 202 + directory_inode->i_sb); 203 + if (IS_ERR(inode)) 205 204 goto out_lock; 206 - } 207 205 fsstack_copy_attr_times(directory_inode, lower_dir_dentry->d_inode); 208 206 fsstack_copy_inode_size(directory_inode, lower_dir_dentry->d_inode); 209 207 out_lock: 210 208 unlock_dir(lower_dir_dentry); 211 209 out: 212 - return rc; 210 + return inode; 213 211 } 214 212 215 213 /** ··· 219 219 * 220 220 * Returns zero on success 221 221 */ 222 - static int ecryptfs_initialize_file(struct dentry *ecryptfs_dentry) 222 + static int ecryptfs_initialize_file(struct dentry 
*ecryptfs_dentry, 223 + struct inode *ecryptfs_inode) 223 224 { 224 225 struct ecryptfs_crypt_stat *crypt_stat = 225 - &ecryptfs_inode_to_private(ecryptfs_dentry->d_inode)->crypt_stat; 226 + &ecryptfs_inode_to_private(ecryptfs_inode)->crypt_stat; 226 227 int rc = 0; 227 228 228 - if (S_ISDIR(ecryptfs_dentry->d_inode->i_mode)) { 229 + if (S_ISDIR(ecryptfs_inode->i_mode)) { 229 230 ecryptfs_printk(KERN_DEBUG, "This is a directory\n"); 230 231 crypt_stat->flags &= ~(ECRYPTFS_ENCRYPTED); 231 232 goto out; 232 233 } 233 234 ecryptfs_printk(KERN_DEBUG, "Initializing crypto context\n"); 234 - rc = ecryptfs_new_file_context(ecryptfs_dentry); 235 + rc = ecryptfs_new_file_context(ecryptfs_inode); 235 236 if (rc) { 236 237 ecryptfs_printk(KERN_ERR, "Error creating new file " 237 238 "context; rc = [%d]\n", rc); 238 239 goto out; 239 240 } 240 - rc = ecryptfs_get_lower_file(ecryptfs_dentry, 241 - ecryptfs_dentry->d_inode); 241 + rc = ecryptfs_get_lower_file(ecryptfs_dentry, ecryptfs_inode); 242 242 if (rc) { 243 243 printk(KERN_ERR "%s: Error attempting to initialize " 244 244 "the lower file for the dentry with name " ··· 246 246 ecryptfs_dentry->d_name.name, rc); 247 247 goto out; 248 248 } 249 - rc = ecryptfs_write_metadata(ecryptfs_dentry); 249 + rc = ecryptfs_write_metadata(ecryptfs_dentry, ecryptfs_inode); 250 250 if (rc) 251 251 printk(KERN_ERR "Error writing headers; rc = [%d]\n", rc); 252 - ecryptfs_put_lower_file(ecryptfs_dentry->d_inode); 252 + ecryptfs_put_lower_file(ecryptfs_inode); 253 253 out: 254 254 return rc; 255 255 } ··· 269 269 ecryptfs_create(struct inode *directory_inode, struct dentry *ecryptfs_dentry, 270 270 int mode, struct nameidata *nd) 271 271 { 272 + struct inode *ecryptfs_inode; 272 273 int rc; 273 274 274 - /* ecryptfs_do_create() calls ecryptfs_interpose() */ 275 - rc = ecryptfs_do_create(directory_inode, ecryptfs_dentry, mode); 276 - if (unlikely(rc)) { 275 + ecryptfs_inode = ecryptfs_do_create(directory_inode, ecryptfs_dentry, 276 + mode); 
277 + if (unlikely(IS_ERR(ecryptfs_inode))) { 277 278 ecryptfs_printk(KERN_WARNING, "Failed to create file in" 278 279 "lower filesystem\n"); 280 + rc = PTR_ERR(ecryptfs_inode); 279 281 goto out; 280 282 } 281 283 /* At this point, a file exists on "disk"; we need to make sure 282 284 * that this on disk file is prepared to be an ecryptfs file */ 283 - rc = ecryptfs_initialize_file(ecryptfs_dentry); 285 + rc = ecryptfs_initialize_file(ecryptfs_dentry, ecryptfs_inode); 286 + if (rc) { 287 + drop_nlink(ecryptfs_inode); 288 + unlock_new_inode(ecryptfs_inode); 289 + iput(ecryptfs_inode); 290 + goto out; 291 + } 292 + d_instantiate(ecryptfs_dentry, ecryptfs_inode); 293 + unlock_new_inode(ecryptfs_inode); 284 294 out: 285 295 return rc; 286 296 }
+1 -2
fs/ext4/extents.c
··· 1095 1095 le32_to_cpu(EXT_FIRST_INDEX(neh)->ei_block), 1096 1096 ext4_idx_pblock(EXT_FIRST_INDEX(neh))); 1097 1097 1098 - neh->eh_depth = cpu_to_le16(neh->eh_depth + 1); 1098 + neh->eh_depth = cpu_to_le16(le16_to_cpu(neh->eh_depth) + 1); 1099 1099 ext4_mark_inode_dirty(handle, inode); 1100 1100 out: 1101 1101 brelse(bh); ··· 2955 2955 /* Pre-conditions */ 2956 2956 BUG_ON(!ext4_ext_is_uninitialized(ex)); 2957 2957 BUG_ON(!in_range(map->m_lblk, ee_block, ee_len)); 2958 - BUG_ON(map->m_lblk + map->m_len > ee_block + ee_len); 2959 2958 2960 2959 /* 2961 2960 * Attempt to transfer newly initialized blocks from the currently
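The one-line ext4 fix is an endianness bug: `cpu_to_le16(neh->eh_depth + 1)` added 1 to the raw little-endian halfword and then byte-swapped the result, corrupting the depth on big-endian hosts; the fix converts to CPU order before incrementing. With explicit helpers the difference is visible even in userspace (this sketch simulates a big-endian CPU by making the conversion an unconditional byte swap):

```c
#include <assert.h>
#include <stdint.h>

/* Simulate a big-endian CPU: cpu<->le16 conversion is a byte swap.
 * (On a little-endian CPU both helpers are the identity and the bug
 * is invisible, which is exactly why it lingered.) */
uint16_t cpu_to_le16(uint16_t v) { return (uint16_t)((v >> 8) | (v << 8)); }
uint16_t le16_to_cpu(uint16_t v) { return (uint16_t)((v >> 8) | (v << 8)); }

/* Buggy: increments the raw on-disk (little-endian) representation. */
uint16_t depth_inc_buggy(uint16_t eh_depth_le)
{
    return cpu_to_le16(eh_depth_le + 1);
}

/* Fixed, as in the hunk: convert to CPU order, add, convert back. */
uint16_t depth_inc_fixed(uint16_t eh_depth_le)
{
    return cpu_to_le16(le16_to_cpu(eh_depth_le) + 1);
}
```

The general rule the fix restores: arithmetic only ever happens on CPU-order values; `__le16` fields are opaque outside the conversion helpers.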
+10 -44
fs/ext4/inode.c
··· 1339 1339 clear_buffer_unwritten(bh); 1340 1340 } 1341 1341 1342 - /* skip page if block allocation undone */ 1343 - if (buffer_delay(bh) || buffer_unwritten(bh)) 1342 + /* 1343 + * skip page if block allocation undone and 1344 + * block is dirty 1345 + */ 1346 + if (ext4_bh_delay_or_unwritten(NULL, bh)) 1344 1347 skip_page = 1; 1345 1348 bh = bh->b_this_page; 1346 1349 block_start += bh->b_size; ··· 2390 2387 pgoff_t index; 2391 2388 struct inode *inode = mapping->host; 2392 2389 handle_t *handle; 2393 - loff_t page_len; 2394 2390 2395 2391 index = pos >> PAGE_CACHE_SHIFT; 2396 2392 ··· 2436 2434 */ 2437 2435 if (pos + len > inode->i_size) 2438 2436 ext4_truncate_failed_write(inode); 2439 - } else { 2440 - page_len = pos & (PAGE_CACHE_SIZE - 1); 2441 - if (page_len > 0) { 2442 - ret = ext4_discard_partial_page_buffers_no_lock(handle, 2443 - inode, page, pos - page_len, page_len, 2444 - EXT4_DISCARD_PARTIAL_PG_ZERO_UNMAPPED); 2445 - } 2446 2437 } 2447 2438 2448 2439 if (ret == -ENOSPC && ext4_should_retry_alloc(inode->i_sb, &retries)) ··· 2478 2483 loff_t new_i_size; 2479 2484 unsigned long start, end; 2480 2485 int write_mode = (int)(unsigned long)fsdata; 2481 - loff_t page_len; 2482 2486 2483 2487 if (write_mode == FALL_BACK_TO_NONDELALLOC) { 2484 2488 if (ext4_should_order_data(inode)) { ··· 2502 2508 */ 2503 2509 2504 2510 new_i_size = pos + copied; 2505 - if (new_i_size > EXT4_I(inode)->i_disksize) { 2511 + if (copied && new_i_size > EXT4_I(inode)->i_disksize) { 2506 2512 if (ext4_da_should_update_i_disksize(page, end)) { 2507 2513 down_write(&EXT4_I(inode)->i_data_sem); 2508 2514 if (new_i_size > EXT4_I(inode)->i_disksize) { ··· 2526 2532 } 2527 2533 ret2 = generic_write_end(file, mapping, pos, len, copied, 2528 2534 page, fsdata); 2529 - 2530 - page_len = PAGE_CACHE_SIZE - 2531 - ((pos + copied - 1) & (PAGE_CACHE_SIZE - 1)); 2532 - 2533 - if (page_len > 0) { 2534 - ret = ext4_discard_partial_page_buffers_no_lock(handle, 2535 - inode, page, pos + copied - 
1, page_len, 2536 - EXT4_DISCARD_PARTIAL_PG_ZERO_UNMAPPED); 2537 - } 2538 - 2539 2535 copied = ret2; 2540 2536 if (ret2 < 0) 2541 2537 ret = ret2; ··· 2765 2781 iocb->private, io_end->inode->i_ino, iocb, offset, 2766 2782 size); 2767 2783 2784 + iocb->private = NULL; 2785 + 2768 2786 /* if not aio dio with unwritten extents, just free io and return */ 2769 2787 if (!(io_end->flag & EXT4_IO_END_UNWRITTEN)) { 2770 2788 ext4_free_io_end(io_end); 2771 - iocb->private = NULL; 2772 2789 out: 2773 2790 if (is_async) 2774 2791 aio_complete(iocb, ret, 0); ··· 2793 2808 2794 2809 /* queue the work to convert unwritten extents to written */ 2795 2810 queue_work(wq, &io_end->work); 2796 - iocb->private = NULL; 2797 2811 2798 2812 /* XXX: probably should move into the real I/O completion handler */ 2799 2813 inode_dio_done(inode); ··· 3187 3203 3188 3204 iblock = index << (PAGE_CACHE_SHIFT - inode->i_sb->s_blocksize_bits); 3189 3205 3190 - if (!page_has_buffers(page)) { 3191 - /* 3192 - * If the range to be discarded covers a partial block 3193 - * we need to get the page buffers. This is because 3194 - * partial blocks cannot be released and the page needs 3195 - * to be updated with the contents of the block before 3196 - * we write the zeros on top of it. 3197 - */ 3198 - if ((from & (blocksize - 1)) || 3199 - ((from + length) & (blocksize - 1))) { 3200 - create_empty_buffers(page, blocksize, 0); 3201 - } else { 3202 - /* 3203 - * If there are no partial blocks, 3204 - * there is nothing to update, 3205 - * so we can return now 3206 - */ 3207 - return 0; 3208 - } 3209 - } 3206 + if (!page_has_buffers(page)) 3207 + create_empty_buffers(page, blocksize, 0); 3210 3208 3211 3209 /* Find the buffer that contains "offset" */ 3212 3210 bh = page_buffers(page);
+12
fs/ext4/page-io.c
··· 385 385 386 386 block_end = block_start + blocksize; 387 387 if (block_start >= len) { 388 + /* 389 + * Comments copied from block_write_full_page_endio: 390 + * 391 + * The page straddles i_size. It must be zeroed out on 392 + * each and every writepage invocation because it may 393 + * be mmapped. "A file is mapped in multiples of the 394 + * page size. For a file that is not a multiple of 395 + * the page size, the remaining memory is zeroed when 396 + * mapped, and writes to that region are not written 397 + * out to the file." 398 + */ 399 + zero_user_segment(page, block_start, block_end); 388 400 clear_buffer_dirty(bh); 389 401 set_buffer_uptodate(bh); 390 402 continue;
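As the copied comment explains, the tail of a page that straddles i_size must be zeroed on every writepage because the page may be mmapped and the region past EOF is defined to read as zeros. A toy-sized sketch of the `zero_user_segment()` step (the 16-byte "page" and helper names are illustrative):

```c
#include <assert.h>
#include <string.h>

#define TOY_PAGE_SIZE 16   /* toy page size for illustration */

/* Mimic zero_user_segment(): clear [start, end) of a "page" so stale
 * bytes past i_size can never leak out through an mmap reader. */
void zero_segment(unsigned char *page, unsigned int start, unsigned int end)
{
    memset(page + start, 0, end - start);
}

/* Writepage-style helper: a file whose size ends 'len' bytes into its
 * final page must have everything from len to the page end zeroed. */
void prepare_final_page(unsigned char *page, unsigned int len)
{
    if (len < TOY_PAGE_SIZE)
        zero_segment(page, len, TOY_PAGE_SIZE);
}
```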
+8 -9
fs/ext4/super.c
··· 1155 1155 seq_puts(seq, ",block_validity"); 1156 1156 1157 1157 if (!test_opt(sb, INIT_INODE_TABLE)) 1158 - seq_puts(seq, ",noinit_inode_table"); 1158 + seq_puts(seq, ",noinit_itable"); 1159 1159 else if (sbi->s_li_wait_mult != EXT4_DEF_LI_WAIT_MULT) 1160 - seq_printf(seq, ",init_inode_table=%u", 1160 + seq_printf(seq, ",init_itable=%u", 1161 1161 (unsigned) sbi->s_li_wait_mult); 1162 1162 1163 1163 ext4_show_quota_options(seq, sb); ··· 1333 1333 Opt_nomblk_io_submit, Opt_block_validity, Opt_noblock_validity, 1334 1334 Opt_inode_readahead_blks, Opt_journal_ioprio, 1335 1335 Opt_dioread_nolock, Opt_dioread_lock, 1336 - Opt_discard, Opt_nodiscard, 1337 - Opt_init_inode_table, Opt_noinit_inode_table, 1336 + Opt_discard, Opt_nodiscard, Opt_init_itable, Opt_noinit_itable, 1338 1337 }; 1339 1338 1340 1339 static const match_table_t tokens = { ··· 1406 1407 {Opt_dioread_lock, "dioread_lock"}, 1407 1408 {Opt_discard, "discard"}, 1408 1409 {Opt_nodiscard, "nodiscard"}, 1409 - {Opt_init_inode_table, "init_itable=%u"}, 1410 - {Opt_init_inode_table, "init_itable"}, 1411 - {Opt_noinit_inode_table, "noinit_itable"}, 1410 + {Opt_init_itable, "init_itable=%u"}, 1411 + {Opt_init_itable, "init_itable"}, 1412 + {Opt_noinit_itable, "noinit_itable"}, 1412 1413 {Opt_err, NULL}, 1413 1414 }; 1414 1415 ··· 1891 1892 case Opt_dioread_lock: 1892 1893 clear_opt(sb, DIOREAD_NOLOCK); 1893 1894 break; 1894 - case Opt_init_inode_table: 1895 + case Opt_init_itable: 1895 1896 set_opt(sb, INIT_INODE_TABLE); 1896 1897 if (args[0].from) { 1897 1898 if (match_int(&args[0], &option)) ··· 1902 1903 return 0; 1903 1904 sbi->s_li_wait_mult = option; 1904 1905 break; 1905 - case Opt_noinit_inode_table: 1906 + case Opt_noinit_itable: 1906 1907 clear_opt(sb, INIT_INODE_TABLE); 1907 1908 break; 1908 1909 default:
+5
fs/fs-writeback.c
··· 156 156 * bdi_start_writeback - start writeback 157 157 * @bdi: the backing device to write from 158 158 * @nr_pages: the number of pages to write 159 + * @reason: reason why some writeback work was initiated 159 160 * 160 161 * Description: 161 162 * This does WB_SYNC_NONE opportunistic writeback. The IO is only ··· 1224 1223 * writeback_inodes_sb_nr - writeback dirty inodes from given super_block 1225 1224 * @sb: the superblock 1226 1225 * @nr: the number of pages to write 1226 + * @reason: reason why some writeback work initiated 1227 1227 * 1228 1228 * Start writeback on some inodes on this super_block. No guarantees are made 1229 1229 * on how many (if any) will be written, and this function does not wait ··· 1253 1251 /** 1254 1252 * writeback_inodes_sb - writeback dirty inodes from given super_block 1255 1253 * @sb: the superblock 1254 + * @reason: reason why some writeback work was initiated 1256 1255 * 1257 1256 * Start writeback on some inodes on this super_block. No guarantees are made 1258 1257 * on how many (if any) will be written, and this function does not wait ··· 1268 1265 /** 1269 1266 * writeback_inodes_sb_if_idle - start writeback if none underway 1270 1267 * @sb: the superblock 1268 + * @reason: reason why some writeback work was initiated 1271 1269 * 1272 1270 * Invoke writeback_inodes_sb if no writeback is currently underway. 1273 1271 * Returns 1 if writeback was started, 0 if not. ··· 1289 1285 * writeback_inodes_sb_if_idle - start writeback if none underway 1290 1286 * @sb: the superblock 1291 1287 * @nr: the number of pages to write 1288 + * @reason: reason why some writeback work was initiated 1292 1289 * 1293 1290 * Invoke writeback_inodes_sb if no writeback is currently underway. 1294 1291 * Returns 1 if writeback was started, 0 if not.
+2 -1
fs/fuse/dev.c
··· 1512 1512 else if (outarg->offset + num > file_size) 1513 1513 num = file_size - outarg->offset; 1514 1514 1515 - while (num) { 1515 + while (num && req->num_pages < FUSE_MAX_PAGES_PER_REQ) { 1516 1516 struct page *page; 1517 1517 unsigned int this_num; 1518 1518 ··· 1526 1526 1527 1527 num -= this_num; 1528 1528 total_len += this_num; 1529 + index++; 1529 1530 } 1530 1531 req->misc.retrieve_in.offset = outarg->offset; 1531 1532 req->misc.retrieve_in.size = total_len;
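The fuse/dev.c hunk fixes two loop bugs at once: the page-collection loop now stops at `FUSE_MAX_PAGES_PER_REQ`, and `index++` makes each iteration fetch a distinct page instead of re-fetching the same one. The corrected loop shape, reduced to a userspace sketch (constants and names are stand-ins):

```c
#include <assert.h>

#define MAX_PAGES_PER_REQ 4   /* stand-in for FUSE_MAX_PAGES_PER_REQ */
#define PAGE_SZ 4096

/* Walk 'num' bytes starting at page 'index', recording one page per
 * iteration but never exceeding the per-request cap - the bound check
 * and the index increment are the two fixes from the hunk. */
int fill_request(unsigned long index, unsigned long num,
                 unsigned long pages_out[], int max_pages)
{
    int num_pages = 0;

    while (num && num_pages < max_pages) {
        unsigned long this_num = num < PAGE_SZ ? num : PAGE_SZ;

        pages_out[num_pages++] = index;   /* a distinct page each round */
        num -= this_num;
        index++;                          /* the previously missing step */
    }
    return num_pages;
}
```

Without the cap, an oversized NOTIFY_RETRIEVE request could overrun the request's page array; without the increment, every slot held the same page.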
+5 -1
fs/fuse/file.c
··· 1556 1556 struct inode *inode = file->f_path.dentry->d_inode; 1557 1557 1558 1558 mutex_lock(&inode->i_mutex); 1559 - if (origin != SEEK_CUR || origin != SEEK_SET) { 1559 + if (origin != SEEK_CUR && origin != SEEK_SET) { 1560 1560 retval = fuse_update_attributes(inode, NULL, file, NULL); 1561 1561 if (retval) 1562 1562 goto exit; ··· 1567 1567 offset += i_size_read(inode); 1568 1568 break; 1569 1569 case SEEK_CUR: 1570 + if (offset == 0) { 1571 + retval = file->f_pos; 1572 + goto exit; 1573 + } 1570 1574 offset += file->f_pos; 1571 1575 break; 1572 1576 case SEEK_DATA:
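The fuse llseek fix is a classic De Morgan slip: `origin != SEEK_CUR || origin != SEEK_SET` is true for every origin, since no value can equal both at once, so attributes were refreshed even for the cheap cases; the fix uses `&&`. The two predicates can be compared directly:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>    /* SEEK_SET / SEEK_CUR / SEEK_END */

/* Buggy predicate from the old code: true for EVERY origin, because no
 * single value equals both SEEK_CUR and SEEK_SET. */
bool needs_update_buggy(int origin)
{
    return origin != SEEK_CUR || origin != SEEK_SET;
}

/* Fixed predicate: only origins that depend on i_size (e.g. SEEK_END)
 * force a round trip to refresh the attributes. */
bool needs_update_fixed(int origin)
{
    return origin != SEEK_CUR && origin != SEEK_SET;
}
```

The adjacent `SEEK_CUR` with `offset == 0` shortcut in the same hunk is the other half of the optimization: a pure position query never needs the attribute refresh at all.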
+12 -12
fs/fuse/inode.c
··· 1138 1138 { 1139 1139 int err; 1140 1140 1141 - err = register_filesystem(&fuse_fs_type); 1142 - if (err) 1143 - goto out; 1144 - 1145 - err = register_fuseblk(); 1146 - if (err) 1147 - goto out_unreg; 1148 - 1149 1141 fuse_inode_cachep = kmem_cache_create("fuse_inode", 1150 1142 sizeof(struct fuse_inode), 1151 1143 0, SLAB_HWCACHE_ALIGN, 1152 1144 fuse_inode_init_once); 1153 1145 err = -ENOMEM; 1154 1146 if (!fuse_inode_cachep) 1155 - goto out_unreg2; 1147 + goto out; 1148 + 1149 + err = register_fuseblk(); 1150 + if (err) 1151 + goto out2; 1152 + 1153 + err = register_filesystem(&fuse_fs_type); 1154 + if (err) 1155 + goto out3; 1156 1156 1157 1157 return 0; 1158 1158 1159 - out_unreg2: 1159 + out3: 1160 1160 unregister_fuseblk(); 1161 - out_unreg: 1162 - unregister_filesystem(&fuse_fs_type); 1161 + out2: 1162 + kmem_cache_destroy(fuse_inode_cachep); 1163 1163 out: 1164 1164 return err; 1165 1165 }
+11 -9
fs/namespace.c
··· 1048 1048 if (err) 1049 1049 goto out; 1050 1050 seq_putc(m, ' '); 1051 - seq_path_root(m, &mnt_path, &root, " \t\n\\"); 1052 - if (root.mnt != p->root.mnt || root.dentry != p->root.dentry) { 1053 - /* 1054 - * Mountpoint is outside root, discard that one. Ugly, 1055 - * but less so than trying to do that in iterator in a 1056 - * race-free way (due to renames). 1057 - */ 1058 - return SEQ_SKIP; 1059 - } 1051 + 1052 + /* mountpoints outside of chroot jail will give SEQ_SKIP on this */ 1053 + err = seq_path_root(m, &mnt_path, &root, " \t\n\\"); 1054 + if (err) 1055 + goto out; 1056 + 1060 1057 seq_puts(m, mnt->mnt_flags & MNT_READONLY ? " ro" : " rw"); 1061 1058 show_mnt_opts(m, mnt); 1062 1059 ··· 2773 2776 } 2774 2777 } 2775 2778 EXPORT_SYMBOL(kern_unmount); 2779 + 2780 + bool our_mnt(struct vfsmount *mnt) 2781 + { 2782 + return check_mnt(mnt); 2783 + }
+4 -4
fs/ncpfs/inode.c
··· 548 548 549 549 error = bdi_setup_and_register(&server->bdi, "ncpfs", BDI_CAP_MAP_COPY); 550 550 if (error) 551 - goto out_bdi; 551 + goto out_fput; 552 552 553 553 server->ncp_filp = ncp_filp; 554 554 server->ncp_sock = sock; ··· 559 559 error = -EBADF; 560 560 server->info_filp = fget(data.info_fd); 561 561 if (!server->info_filp) 562 - goto out_fput; 562 + goto out_bdi; 563 563 error = -ENOTSOCK; 564 564 sock_inode = server->info_filp->f_path.dentry->d_inode; 565 565 if (!S_ISSOCK(sock_inode->i_mode)) ··· 746 746 out_fput2: 747 747 if (server->info_filp) 748 748 fput(server->info_filp); 749 - out_fput: 750 - bdi_destroy(&server->bdi); 751 749 out_bdi: 750 + bdi_destroy(&server->bdi); 751 + out_fput: 752 752 /* 23/12/1998 Marcin Dalecki <dalecki@cs.net.pl>: 753 753 * 754 754 * The previously used put_filp(ncp_filp); was bogus, since
+1 -1
fs/ocfs2/alloc.c
··· 5699 5699 OCFS2_JOURNAL_ACCESS_WRITE); 5700 5700 if (ret) { 5701 5701 mlog_errno(ret); 5702 - goto out; 5702 + goto out_commit; 5703 5703 } 5704 5704 5705 5705 dquot_free_space_nodirty(inode,
+61 -8
fs/ocfs2/aops.c
··· 290 290 } 291 291 292 292 if (down_read_trylock(&oi->ip_alloc_sem) == 0) { 293 + /* 294 + * Unlock the page and cycle ip_alloc_sem so that we don't 295 + * busyloop waiting for ip_alloc_sem to unlock 296 + */ 293 297 ret = AOP_TRUNCATED_PAGE; 298 + unlock_page(page); 299 + unlock = 0; 300 + down_read(&oi->ip_alloc_sem); 301 + up_read(&oi->ip_alloc_sem); 294 302 goto out_inode_unlock; 295 303 } 296 304 ··· 571 563 { 572 564 struct inode *inode = iocb->ki_filp->f_path.dentry->d_inode; 573 565 int level; 566 + wait_queue_head_t *wq = ocfs2_ioend_wq(inode); 574 567 575 568 /* this io's submitter should not have unlocked this before we could */ 576 569 BUG_ON(!ocfs2_iocb_is_rw_locked(iocb)); 577 570 578 571 if (ocfs2_iocb_is_sem_locked(iocb)) 579 572 ocfs2_iocb_clear_sem_locked(iocb); 573 + 574 + if (ocfs2_iocb_is_unaligned_aio(iocb)) { 575 + ocfs2_iocb_clear_unaligned_aio(iocb); 576 + 577 + if (atomic_dec_and_test(&OCFS2_I(inode)->ip_unaligned_aio) && 578 + waitqueue_active(wq)) { 579 + wake_up_all(wq); 580 + } 581 + } 580 582 581 583 ocfs2_iocb_clear_rw_locked(iocb); 582 584 ··· 881 863 struct page *w_target_page; 882 864 883 865 /* 866 + * w_target_locked is used for page_mkwrite path indicating no unlocking 867 + * against w_target_page in ocfs2_write_end_nolock. 868 + */ 869 + unsigned int w_target_locked:1; 870 + 871 + /* 884 872 * ocfs2_write_end() uses this to know what the real range to 885 873 * write in the target should be. 886 874 */ ··· 919 895 920 896 static void ocfs2_free_write_ctxt(struct ocfs2_write_ctxt *wc) 921 897 { 898 + int i; 899 + 900 + /* 901 + * w_target_locked is only set to true in the page_mkwrite() case. 902 + * The intent is to allow us to lock the target page from write_begin() 903 + * to write_end(). The caller must hold a ref on w_target_page. 
904 + */ 905 + if (wc->w_target_locked) { 906 + BUG_ON(!wc->w_target_page); 907 + for (i = 0; i < wc->w_num_pages; i++) { 908 + if (wc->w_target_page == wc->w_pages[i]) { 909 + wc->w_pages[i] = NULL; 910 + break; 911 + } 912 + } 913 + mark_page_accessed(wc->w_target_page); 914 + page_cache_release(wc->w_target_page); 915 + } 922 916 ocfs2_unlock_and_free_pages(wc->w_pages, wc->w_num_pages); 923 917 924 918 brelse(wc->w_di_bh); ··· 1174 1132 */ 1175 1133 lock_page(mmap_page); 1176 1134 1135 + /* Exit and let the caller retry */ 1177 1136 if (mmap_page->mapping != mapping) { 1137 + WARN_ON(mmap_page->mapping); 1178 1138 unlock_page(mmap_page); 1179 - /* 1180 - * Sanity check - the locking in 1181 - * ocfs2_pagemkwrite() should ensure 1182 - * that this code doesn't trigger. 1183 - */ 1184 - ret = -EINVAL; 1185 - mlog_errno(ret); 1139 + ret = -EAGAIN; 1186 1140 goto out; 1187 1141 } 1188 1142 1189 1143 page_cache_get(mmap_page); 1190 1144 wc->w_pages[i] = mmap_page; 1145 + wc->w_target_locked = true; 1191 1146 } else { 1192 1147 wc->w_pages[i] = find_or_create_page(mapping, index, 1193 1148 GFP_NOFS); ··· 1199 1160 wc->w_target_page = wc->w_pages[i]; 1200 1161 } 1201 1162 out: 1163 + if (ret) 1164 + wc->w_target_locked = false; 1202 1165 return ret; 1203 1166 } 1204 1167 ··· 1858 1817 */ 1859 1818 ret = ocfs2_grab_pages_for_write(mapping, wc, wc->w_cpos, pos, len, 1860 1819 cluster_of_pages, mmap_page); 1861 - if (ret) { 1820 + if (ret && ret != -EAGAIN) { 1862 1821 mlog_errno(ret); 1822 + goto out_quota; 1823 + } 1824 + 1825 + /* 1826 + * ocfs2_grab_pages_for_write() returns -EAGAIN if it could not lock 1827 + * the target page. In this case, we exit with no error and no target 1828 + * page. This will trigger the caller, page_mkwrite(), to re-try 1829 + * the operation. 1830 + */ 1831 + if (ret == -EAGAIN) { 1832 + BUG_ON(wc->w_target_page); 1833 + ret = 0; 1863 1834 goto out_quota; 1864 1835 } 1865 1836
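The aops.c hunks above replace the old "should never happen" `-EINVAL` with an `-EAGAIN` retry protocol: when write_begin loses the race for the mmap'd page, the inner helper reports `-EAGAIN`, and the outer path converts that into "no error, no target page" so page_mkwrite() simply retries the fault. A sketch of the error-translation boundary (flags and helper names are illustrative, not the kernel API):

```c
#include <assert.h>
#include <errno.h>

/* Inner helper: a lost page-lock race is a retryable condition, not a
 * failure, so it is reported as -EAGAIN. */
static int grab_pages(int lost_page_race)
{
    if (lost_page_race)
        return -EAGAIN;
    return 0;
}

/* Outer path: real errors propagate; -EAGAIN is swallowed and signalled
 * to the caller as success with no target page, triggering a retry of
 * the whole fault. */
static int write_begin(int lost_page_race, int *have_target_page)
{
    int ret = grab_pages(lost_page_race);

    if (ret && ret != -EAGAIN)
        return ret;

    if (ret == -EAGAIN) {
        *have_target_page = 0;
        return 0;
    }
    *have_target_page = 1;
    return 0;
}
```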
+14
fs/ocfs2/aops.h
··· 78 78 OCFS2_IOCB_RW_LOCK = 0, 79 79 OCFS2_IOCB_RW_LOCK_LEVEL, 80 80 OCFS2_IOCB_SEM, 81 + OCFS2_IOCB_UNALIGNED_IO, 81 82 OCFS2_IOCB_NUM_LOCKS 82 83 }; 83 84 ··· 92 91 clear_bit(OCFS2_IOCB_SEM, (unsigned long *)&iocb->private) 93 92 #define ocfs2_iocb_is_sem_locked(iocb) \ 94 93 test_bit(OCFS2_IOCB_SEM, (unsigned long *)&iocb->private) 94 + 95 + #define ocfs2_iocb_set_unaligned_aio(iocb) \ 96 + set_bit(OCFS2_IOCB_UNALIGNED_IO, (unsigned long *)&iocb->private) 97 + #define ocfs2_iocb_clear_unaligned_aio(iocb) \ 98 + clear_bit(OCFS2_IOCB_UNALIGNED_IO, (unsigned long *)&iocb->private) 99 + #define ocfs2_iocb_is_unaligned_aio(iocb) \ 100 + test_bit(OCFS2_IOCB_UNALIGNED_IO, (unsigned long *)&iocb->private) 101 + 102 + #define OCFS2_IOEND_WQ_HASH_SZ 37 103 + #define ocfs2_ioend_wq(v) (&ocfs2__ioend_wq[((unsigned long)(v)) %\ 104 + OCFS2_IOEND_WQ_HASH_SZ]) 105 + extern wait_queue_head_t ocfs2__ioend_wq[OCFS2_IOEND_WQ_HASH_SZ]; 106 + 95 107 #endif /* OCFS2_FILE_H */
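The `ocfs2_ioend_wq()` macro added above avoids embedding a wait queue in every inode: the inode's address is hashed into a fixed prime-sized table of 37 shared wait queues. Distinct inodes may collide on a slot, which is harmless because each waiter re-checks its own condition (here, `ip_unaligned_aio` reaching zero) after wake-up. A sketch of the address-hash indexing (the `int` slots stand in for `wait_queue_head_t`):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Fixed table of shared "wait queues", indexed by object address modulo
 * a prime, mirroring OCFS2_IOEND_WQ_HASH_SZ / ocfs2_ioend_wq() above. */
#define WQ_HASH_SZ 37

static int wq_table[WQ_HASH_SZ];

static int *wq_for(const void *obj)
{
    return &wq_table[(uintptr_t)obj % WQ_HASH_SZ];
}
```

The same object always maps to the same slot, and every slot is within the table, so waiters and wakers agree on where to meet.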
+123 -73
fs/ocfs2/cluster/heartbeat.c
··· 216 216 217 217 struct list_head hr_all_item; 218 218 unsigned hr_unclean_stop:1, 219 + hr_aborted_start:1, 219 220 hr_item_pinned:1, 220 221 hr_item_dropped:1; 221 222 ··· 254 253 * has reached a 'steady' state. This will be fixed when we have 255 254 * a more complete api that doesn't lead to this sort of fragility. */ 256 255 atomic_t hr_steady_iterations; 256 + 257 + /* terminate o2hb thread if it does not reach steady state 258 + * (hr_steady_iterations == 0) within hr_unsteady_iterations */ 259 + atomic_t hr_unsteady_iterations; 257 260 258 261 char hr_dev_name[BDEVNAME_SIZE]; 259 262 ··· 329 324 330 325 static void o2hb_arm_write_timeout(struct o2hb_region *reg) 331 326 { 327 + /* Arm writeout only after thread reaches steady state */ 328 + if (atomic_read(&reg->hr_steady_iterations) != 0) 329 + return; 330 + 332 331 mlog(ML_HEARTBEAT, "Queue write timeout for %u ms\n", 333 332 O2HB_MAX_WRITE_TIMEOUT_MS); 334 333 ··· 546 537 return read == computed; 547 538 } 548 539 549 - /* We want to make sure that nobody is heartbeating on top of us -- 550 - * this will help detect an invalid configuration. */ 551 - static void o2hb_check_last_timestamp(struct o2hb_region *reg) 540 + /* 541 + * Compare the slot data with what we wrote in the last iteration. 542 + * If the match fails, print an appropriate error message. This is to 543 + * detect errors like... another node hearting on the same slot, 544 + * flaky device that is losing writes, etc. 545 + * Returns 1 if check succeeds, 0 otherwise. 
546 + */ 547 + static int o2hb_check_own_slot(struct o2hb_region *reg) 552 548 { 553 549 struct o2hb_disk_slot *slot; 554 550 struct o2hb_disk_heartbeat_block *hb_block; ··· 562 548 slot = &reg->hr_slots[o2nm_this_node()]; 563 549 /* Don't check on our 1st timestamp */ 564 550 if (!slot->ds_last_time) 565 - return; 551 + return 0; 566 552 567 553 hb_block = slot->ds_raw_block; 568 554 if (le64_to_cpu(hb_block->hb_seq) == slot->ds_last_time && 569 555 le64_to_cpu(hb_block->hb_generation) == slot->ds_last_generation && 570 556 hb_block->hb_node == slot->ds_node_num) 571 - return; 557 + return 1; 572 558 573 559 #define ERRSTR1 "Another node is heartbeating on device" 574 560 #define ERRSTR2 "Heartbeat generation mismatch on device" ··· 588 574 (unsigned long long)slot->ds_last_time, hb_block->hb_node, 589 575 (unsigned long long)le64_to_cpu(hb_block->hb_generation), 590 576 (unsigned long long)le64_to_cpu(hb_block->hb_seq)); 577 + 578 + return 0; 591 579 } 592 580 593 581 static inline void o2hb_prepare_block(struct o2hb_region *reg, ··· 735 719 o2nm_node_put(node); 736 720 } 737 721 738 - static void o2hb_set_quorum_device(struct o2hb_region *reg, 739 - struct o2hb_disk_slot *slot) 722 + static void o2hb_set_quorum_device(struct o2hb_region *reg) 740 723 { 741 - assert_spin_locked(&o2hb_live_lock); 742 - 743 724 if (!o2hb_global_heartbeat_active()) 744 725 return; 745 726 746 - if (test_bit(reg->hr_region_num, o2hb_quorum_region_bitmap)) 727 + /* Prevent race with o2hb_heartbeat_group_drop_item() */ 728 + if (kthread_should_stop()) 747 729 return; 730 + 731 + /* Tag region as quorum only after thread reaches steady state */ 732 + if (atomic_read(&reg->hr_steady_iterations) != 0) 733 + return; 734 + 735 + spin_lock(&o2hb_live_lock); 736 + 737 + if (test_bit(reg->hr_region_num, o2hb_quorum_region_bitmap)) 738 + goto unlock; 748 739 749 740 /* 750 741 * A region can be added to the quorum only when it sees all ··· 760 737 */ 761 738 if (memcmp(reg->hr_live_node_bitmap, 
o2hb_live_node_bitmap, 762 739 sizeof(o2hb_live_node_bitmap))) 763 - return; 740 + goto unlock; 764 741 765 - if (slot->ds_changed_samples < O2HB_LIVE_THRESHOLD) 766 - return; 767 - 768 - printk(KERN_NOTICE "o2hb: Region %s is now a quorum device\n", 769 - config_item_name(&reg->hr_item)); 742 + printk(KERN_NOTICE "o2hb: Region %s (%s) is now a quorum device\n", 743 + config_item_name(&reg->hr_item), reg->hr_dev_name); 770 744 771 745 set_bit(reg->hr_region_num, o2hb_quorum_region_bitmap); 772 746 ··· 774 754 if (o2hb_pop_count(&o2hb_quorum_region_bitmap, 775 755 O2NM_MAX_REGIONS) > O2HB_PIN_CUT_OFF) 776 756 o2hb_region_unpin(NULL); 757 + unlock: 758 + spin_unlock(&o2hb_live_lock); 777 759 } 778 760 779 761 static int o2hb_check_slot(struct o2hb_region *reg, ··· 947 925 slot->ds_equal_samples = 0; 948 926 } 949 927 out: 950 - o2hb_set_quorum_device(reg, slot); 951 - 952 928 spin_unlock(&o2hb_live_lock); 953 929 954 930 o2hb_run_event_list(&event); ··· 977 957 978 958 static int o2hb_do_disk_heartbeat(struct o2hb_region *reg) 979 959 { 980 - int i, ret, highest_node, change = 0; 960 + int i, ret, highest_node; 961 + int membership_change = 0, own_slot_ok = 0; 981 962 unsigned long configured_nodes[BITS_TO_LONGS(O2NM_MAX_NODES)]; 982 963 unsigned long live_node_bitmap[BITS_TO_LONGS(O2NM_MAX_NODES)]; 983 964 struct o2hb_bio_wait_ctxt write_wc; ··· 987 966 sizeof(configured_nodes)); 988 967 if (ret) { 989 968 mlog_errno(ret); 990 - return ret; 969 + goto bail; 991 970 } 992 971 993 972 /* ··· 1003 982 1004 983 highest_node = o2hb_highest_node(configured_nodes, O2NM_MAX_NODES); 1005 984 if (highest_node >= O2NM_MAX_NODES) { 1006 - mlog(ML_NOTICE, "ocfs2_heartbeat: no configured nodes found!\n"); 1007 - return -EINVAL; 985 + mlog(ML_NOTICE, "o2hb: No configured nodes found!\n"); 986 + ret = -EINVAL; 987 + goto bail; 1008 988 } 1009 989 1010 990 /* No sense in reading the slots of nodes that don't exist ··· 1015 993 ret = o2hb_read_slots(reg, highest_node + 1); 1016 994 
if (ret < 0) { 1017 995 mlog_errno(ret); 1018 - return ret; 996 + goto bail; 1019 997 } 1020 998 1021 999 /* With an up to date view of the slots, we can check that no 1022 1000 * other node has been improperly configured to heartbeat in 1023 1001 * our slot. */ 1024 - o2hb_check_last_timestamp(reg); 1002 + own_slot_ok = o2hb_check_own_slot(reg); 1025 1003 1026 1004 /* fill in the proper info for our next heartbeat */ 1027 1005 o2hb_prepare_block(reg, reg->hr_generation); 1028 1006 1029 - /* And fire off the write. Note that we don't wait on this I/O 1030 - * until later. */ 1031 1007 ret = o2hb_issue_node_write(reg, &write_wc); 1032 1008 if (ret < 0) { 1033 1009 mlog_errno(ret); 1034 - return ret; 1010 + goto bail; 1035 1011 } 1036 1012 1037 1013 i = -1; 1038 1014 while((i = find_next_bit(configured_nodes, 1039 1015 O2NM_MAX_NODES, i + 1)) < O2NM_MAX_NODES) { 1040 - change |= o2hb_check_slot(reg, &reg->hr_slots[i]); 1016 + membership_change |= o2hb_check_slot(reg, &reg->hr_slots[i]); 1041 1017 } 1042 1018 1043 1019 /* ··· 1050 1030 * disk */ 1051 1031 mlog(ML_ERROR, "Write error %d on device \"%s\"\n", 1052 1032 write_wc.wc_error, reg->hr_dev_name); 1053 - return write_wc.wc_error; 1033 + ret = write_wc.wc_error; 1034 + goto bail; 1054 1035 } 1055 1036 1056 - o2hb_arm_write_timeout(reg); 1037 + /* Skip disarming the timeout if own slot has stale/bad data */ 1038 + if (own_slot_ok) { 1039 + o2hb_set_quorum_device(reg); 1040 + o2hb_arm_write_timeout(reg); 1041 + } 1057 1042 1043 + bail: 1058 1044 /* let the person who launched us know when things are steady */ 1059 - if (!change && (atomic_read(&reg->hr_steady_iterations) != 0)) { 1060 - if (atomic_dec_and_test(&reg->hr_steady_iterations)) 1061 - wake_up(&o2hb_steady_queue); 1045 + if (atomic_read(&reg->hr_steady_iterations) != 0) { 1046 + if (!ret && own_slot_ok && !membership_change) { 1047 + if (atomic_dec_and_test(&reg->hr_steady_iterations)) 1048 + wake_up(&o2hb_steady_queue); 1049 + } 1062 1050 } 1063 1051 
1064 - return 0; 1052 + if (atomic_read(&reg->hr_steady_iterations) != 0) { 1053 + if (atomic_dec_and_test(&reg->hr_unsteady_iterations)) { 1054 + printk(KERN_NOTICE "o2hb: Unable to stabilize " 1055 + "heartbeart on region %s (%s)\n", 1056 + config_item_name(&reg->hr_item), 1057 + reg->hr_dev_name); 1058 + atomic_set(&reg->hr_steady_iterations, 0); 1059 + reg->hr_aborted_start = 1; 1060 + wake_up(&o2hb_steady_queue); 1061 + ret = -EIO; 1062 + } 1063 + } 1064 + 1065 + return ret; 1065 1066 } 1066 1067 1067 1068 /* Subtract b from a, storing the result in a. a *must* have a larger ··· 1136 1095 /* Pin node */ 1137 1096 o2nm_depend_this_node(); 1138 1097 1139 - while (!kthread_should_stop() && !reg->hr_unclean_stop) { 1098 + while (!kthread_should_stop() && 1099 + !reg->hr_unclean_stop && !reg->hr_aborted_start) { 1140 1100 /* We track the time spent inside 1141 1101 * o2hb_do_disk_heartbeat so that we avoid more than 1142 1102 * hr_timeout_ms between disk writes. On busy systems ··· 1145 1103 * likely to time itself out. */ 1146 1104 do_gettimeofday(&before_hb); 1147 1105 1148 - i = 0; 1149 - do { 1150 - ret = o2hb_do_disk_heartbeat(reg); 1151 - } while (ret && ++i < 2); 1106 + ret = o2hb_do_disk_heartbeat(reg); 1152 1107 1153 1108 do_gettimeofday(&after_hb); 1154 1109 elapsed_msec = o2hb_elapsed_msecs(&before_hb, &after_hb); ··· 1156 1117 after_hb.tv_sec, (unsigned long) after_hb.tv_usec, 1157 1118 elapsed_msec); 1158 1119 1159 - if (elapsed_msec < reg->hr_timeout_ms) { 1120 + if (!kthread_should_stop() && 1121 + elapsed_msec < reg->hr_timeout_ms) { 1160 1122 /* the kthread api has blocked signals for us so no 1161 1123 * need to record the return value. */ 1162 1124 msleep_interruptible(reg->hr_timeout_ms - elapsed_msec); ··· 1174 1134 * to timeout on this region when we could just as easily 1175 1135 * write a clear generation - thus indicating to them that 1176 1136 * this node has left this region. 1177 - * 1178 - * XXX: Should we skip this on unclean_stop? 
*/ 1179 - o2hb_prepare_block(reg, 0); 1180 - ret = o2hb_issue_node_write(reg, &write_wc); 1181 - if (ret == 0) { 1182 - o2hb_wait_on_io(reg, &write_wc); 1183 - } else { 1184 - mlog_errno(ret); 1137 + */ 1138 + if (!reg->hr_unclean_stop && !reg->hr_aborted_start) { 1139 + o2hb_prepare_block(reg, 0); 1140 + ret = o2hb_issue_node_write(reg, &write_wc); 1141 + if (ret == 0) 1142 + o2hb_wait_on_io(reg, &write_wc); 1143 + else 1144 + mlog_errno(ret); 1185 1145 } 1186 1146 1187 1147 /* Unpin node */ 1188 1148 o2nm_undepend_this_node(); 1189 1149 1190 - mlog(ML_HEARTBEAT|ML_KTHREAD, "hb thread exiting\n"); 1150 + mlog(ML_HEARTBEAT|ML_KTHREAD, "o2hb thread exiting\n"); 1191 1151 1192 1152 return 0; 1193 1153 } ··· 1198 1158 struct o2hb_debug_buf *db = inode->i_private; 1199 1159 struct o2hb_region *reg; 1200 1160 unsigned long map[BITS_TO_LONGS(O2NM_MAX_NODES)]; 1161 + unsigned long lts; 1201 1162 char *buf = NULL; 1202 1163 int i = -1; 1203 1164 int out = 0; ··· 1235 1194 1236 1195 case O2HB_DB_TYPE_REGION_ELAPSED_TIME: 1237 1196 reg = (struct o2hb_region *)db->db_data; 1238 - out += snprintf(buf + out, PAGE_SIZE - out, "%u\n", 1239 - jiffies_to_msecs(jiffies - 1240 - reg->hr_last_timeout_start)); 1197 + lts = reg->hr_last_timeout_start; 1198 + /* If 0, it has never been set before */ 1199 + if (lts) 1200 + lts = jiffies_to_msecs(jiffies - lts); 1201 + out += snprintf(buf + out, PAGE_SIZE - out, "%lu\n", lts); 1241 1202 goto done; 1242 1203 1243 1204 case O2HB_DB_TYPE_REGION_PINNED: ··· 1468 1425 int i; 1469 1426 struct page *page; 1470 1427 struct o2hb_region *reg = to_o2hb_region(item); 1428 + 1429 + mlog(ML_HEARTBEAT, "hb region release (%s)\n", reg->hr_dev_name); 1471 1430 1472 1431 if (reg->hr_tmp_block) 1473 1432 kfree(reg->hr_tmp_block); ··· 1837 1792 live_threshold <<= 1; 1838 1793 spin_unlock(&o2hb_live_lock); 1839 1794 } 1840 - atomic_set(&reg->hr_steady_iterations, live_threshold + 1); 1795 + ++live_threshold; 1796 + atomic_set(&reg->hr_steady_iterations, 
live_threshold); 1797 + /* unsteady_iterations is double the steady_iterations */ 1798 + atomic_set(&reg->hr_unsteady_iterations, (live_threshold << 1)); 1841 1799 1842 1800 hb_task = kthread_run(o2hb_thread, reg, "o2hb-%s", 1843 1801 reg->hr_item.ci_name); ··· 1857 1809 ret = wait_event_interruptible(o2hb_steady_queue, 1858 1810 atomic_read(&reg->hr_steady_iterations) == 0); 1859 1811 if (ret) { 1860 - /* We got interrupted (hello ptrace!). Clean up */ 1861 - spin_lock(&o2hb_live_lock); 1862 - hb_task = reg->hr_task; 1863 - reg->hr_task = NULL; 1864 - spin_unlock(&o2hb_live_lock); 1812 + atomic_set(&reg->hr_steady_iterations, 0); 1813 + reg->hr_aborted_start = 1; 1814 + } 1865 1815 1866 - if (hb_task) 1867 - kthread_stop(hb_task); 1816 + if (reg->hr_aborted_start) { 1817 + ret = -EIO; 1868 1818 goto out; 1869 1819 } 1870 1820 ··· 1879 1833 ret = -EIO; 1880 1834 1881 1835 if (hb_task && o2hb_global_heartbeat_active()) 1882 - printk(KERN_NOTICE "o2hb: Heartbeat started on region %s\n", 1883 - config_item_name(&reg->hr_item)); 1836 + printk(KERN_NOTICE "o2hb: Heartbeat started on region %s (%s)\n", 1837 + config_item_name(&reg->hr_item), reg->hr_dev_name); 1884 1838 1885 1839 out: 1886 1840 if (filp) ··· 2138 2092 2139 2093 /* stop the thread when the user removes the region dir */ 2140 2094 spin_lock(&o2hb_live_lock); 2141 - if (o2hb_global_heartbeat_active()) { 2142 - clear_bit(reg->hr_region_num, o2hb_region_bitmap); 2143 - clear_bit(reg->hr_region_num, o2hb_live_region_bitmap); 2144 - if (test_bit(reg->hr_region_num, o2hb_quorum_region_bitmap)) 2145 - quorum_region = 1; 2146 - clear_bit(reg->hr_region_num, o2hb_quorum_region_bitmap); 2147 - } 2148 2095 hb_task = reg->hr_task; 2149 2096 reg->hr_task = NULL; 2150 2097 reg->hr_item_dropped = 1; ··· 2146 2107 if (hb_task) 2147 2108 kthread_stop(hb_task); 2148 2109 2110 + if (o2hb_global_heartbeat_active()) { 2111 + spin_lock(&o2hb_live_lock); 2112 + clear_bit(reg->hr_region_num, o2hb_region_bitmap); 2113 + 
clear_bit(reg->hr_region_num, o2hb_live_region_bitmap); 2114 + if (test_bit(reg->hr_region_num, o2hb_quorum_region_bitmap)) 2115 + quorum_region = 1; 2116 + clear_bit(reg->hr_region_num, o2hb_quorum_region_bitmap); 2117 + spin_unlock(&o2hb_live_lock); 2118 + printk(KERN_NOTICE "o2hb: Heartbeat %s on region %s (%s)\n", 2119 + ((atomic_read(&reg->hr_steady_iterations) == 0) ? 2120 + "stopped" : "start aborted"), config_item_name(item), 2121 + reg->hr_dev_name); 2122 + } 2123 + 2149 2124 /* 2150 2125 * If we're racing a dev_write(), we need to wake them. They will 2151 2126 * check reg->hr_task 2152 2127 */ 2153 2128 if (atomic_read(&reg->hr_steady_iterations) != 0) { 2129 + reg->hr_aborted_start = 1; 2154 2130 atomic_set(&reg->hr_steady_iterations, 0); 2155 2131 wake_up(&o2hb_steady_queue); 2156 2132 } 2157 - 2158 - if (o2hb_global_heartbeat_active()) 2159 - printk(KERN_NOTICE "o2hb: Heartbeat stopped on region %s\n", 2160 - config_item_name(&reg->hr_item)); 2161 2133 2162 2134 config_item_put(item); 2163 2135
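The heartbeat.c changes above pair the existing `hr_steady_iterations` countdown with a new `hr_unsteady_iterations` budget (double the steady count): startup completes once enough consecutive clean iterations are seen, but is aborted outright if the region never stabilizes within the larger budget, instead of spinning forever. A simplified model of the two counters (struct and return convention are illustrative):

```c
#include <assert.h>

/* Model of the steady/unsteady countdown: `steady` clean iterations
 * are still required to finish startup; `unsteady` total iterations
 * remain before the start is aborted. */
struct hb_start {
    int steady;
    int unsteady;
    int aborted;
};

/* Returns 1 when startup completes, -1 on abort, 0 to keep iterating. */
static int hb_iterate(struct hb_start *s, int iteration_clean)
{
    if (iteration_clean && --s->steady == 0)
        return 1;
    if (--s->unsteady == 0) {
        s->aborted = 1;
        return -1;
    }
    return 0;
}
```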
+69 -33
fs/ocfs2/cluster/netdebug.c
··· 47 47 #define SC_DEBUG_NAME "sock_containers" 48 48 #define NST_DEBUG_NAME "send_tracking" 49 49 #define STATS_DEBUG_NAME "stats" 50 + #define NODES_DEBUG_NAME "connected_nodes" 50 51 51 52 #define SHOW_SOCK_CONTAINERS 0 52 53 #define SHOW_SOCK_STATS 1 ··· 56 55 static struct dentry *sc_dentry; 57 56 static struct dentry *nst_dentry; 58 57 static struct dentry *stats_dentry; 58 + static struct dentry *nodes_dentry; 59 59 60 60 static DEFINE_SPINLOCK(o2net_debug_lock); 61 61 ··· 493 491 .release = sc_fop_release, 494 492 }; 495 493 496 - int o2net_debugfs_init(void) 494 + static int o2net_fill_bitmap(char *buf, int len) 497 495 { 498 - o2net_dentry = debugfs_create_dir(O2NET_DEBUG_DIR, NULL); 499 - if (!o2net_dentry) { 500 - mlog_errno(-ENOMEM); 501 - goto bail; 502 - } 496 + unsigned long map[BITS_TO_LONGS(O2NM_MAX_NODES)]; 497 + int i = -1, out = 0; 503 498 504 - nst_dentry = debugfs_create_file(NST_DEBUG_NAME, S_IFREG|S_IRUSR, 505 - o2net_dentry, NULL, 506 - &nst_seq_fops); 507 - if (!nst_dentry) { 508 - mlog_errno(-ENOMEM); 509 - goto bail; 510 - } 499 + o2net_fill_node_map(map, sizeof(map)); 511 500 512 - sc_dentry = debugfs_create_file(SC_DEBUG_NAME, S_IFREG|S_IRUSR, 513 - o2net_dentry, NULL, 514 - &sc_seq_fops); 515 - if (!sc_dentry) { 516 - mlog_errno(-ENOMEM); 517 - goto bail; 518 - } 501 + while ((i = find_next_bit(map, O2NM_MAX_NODES, i + 1)) < O2NM_MAX_NODES) 502 + out += snprintf(buf + out, PAGE_SIZE - out, "%d ", i); 503 + out += snprintf(buf + out, PAGE_SIZE - out, "\n"); 519 504 520 - stats_dentry = debugfs_create_file(STATS_DEBUG_NAME, S_IFREG|S_IRUSR, 521 - o2net_dentry, NULL, 522 - &stats_seq_fops); 523 - if (!stats_dentry) { 524 - mlog_errno(-ENOMEM); 525 - goto bail; 526 - } 505 + return out; 506 + } 507 + 508 + static int nodes_fop_open(struct inode *inode, struct file *file) 509 + { 510 + char *buf; 511 + 512 + buf = kmalloc(PAGE_SIZE, GFP_KERNEL); 513 + if (!buf) 514 + return -ENOMEM; 515 + 516 + i_size_write(inode, o2net_fill_bitmap(buf, 
PAGE_SIZE)); 517 + 518 + file->private_data = buf; 527 519 528 520 return 0; 529 - bail: 530 - debugfs_remove(stats_dentry); 531 - debugfs_remove(sc_dentry); 532 - debugfs_remove(nst_dentry); 533 - debugfs_remove(o2net_dentry); 534 - return -ENOMEM; 535 521 } 522 + 523 + static int o2net_debug_release(struct inode *inode, struct file *file) 524 + { 525 + kfree(file->private_data); 526 + return 0; 527 + } 528 + 529 + static ssize_t o2net_debug_read(struct file *file, char __user *buf, 530 + size_t nbytes, loff_t *ppos) 531 + { 532 + return simple_read_from_buffer(buf, nbytes, ppos, file->private_data, 533 + i_size_read(file->f_mapping->host)); 534 + } 535 + 536 + static const struct file_operations nodes_fops = { 537 + .open = nodes_fop_open, 538 + .release = o2net_debug_release, 539 + .read = o2net_debug_read, 540 + .llseek = generic_file_llseek, 541 + }; 536 542 537 543 void o2net_debugfs_exit(void) 538 544 { 545 + debugfs_remove(nodes_dentry); 539 546 debugfs_remove(stats_dentry); 540 547 debugfs_remove(sc_dentry); 541 548 debugfs_remove(nst_dentry); 542 549 debugfs_remove(o2net_dentry); 550 + } 551 + 552 + int o2net_debugfs_init(void) 553 + { 554 + mode_t mode = S_IFREG|S_IRUSR; 555 + 556 + o2net_dentry = debugfs_create_dir(O2NET_DEBUG_DIR, NULL); 557 + if (o2net_dentry) 558 + nst_dentry = debugfs_create_file(NST_DEBUG_NAME, mode, 559 + o2net_dentry, NULL, &nst_seq_fops); 560 + if (nst_dentry) 561 + sc_dentry = debugfs_create_file(SC_DEBUG_NAME, mode, 562 + o2net_dentry, NULL, &sc_seq_fops); 563 + if (sc_dentry) 564 + stats_dentry = debugfs_create_file(STATS_DEBUG_NAME, mode, 565 + o2net_dentry, NULL, &stats_seq_fops); 566 + if (stats_dentry) 567 + nodes_dentry = debugfs_create_file(NODES_DEBUG_NAME, mode, 568 + o2net_dentry, NULL, &nodes_fops); 569 + if (nodes_dentry) 570 + return 0; 571 + 572 + o2net_debugfs_exit(); 573 + mlog_errno(-ENOMEM); 574 + return -ENOMEM; 543 575 } 544 576 545 577 #endif /* CONFIG_DEBUG_FS */
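The reworked o2net_debugfs_init() above replaces four separate error labels with a cascade: each `debugfs_create_file()` runs only if the previous one succeeded, and a single call to the existing exit routine tears down whatever subset was created. A userspace sketch with `malloc` standing in for dentry creation and `fail_at` injecting a failure (all names hypothetical):

```c
#include <assert.h>
#include <stdlib.h>

static int fail_at = -1;      /* which create step should fail, if any */

static void *create(int id)
{
    return (id == fail_at) ? NULL : malloc(1);
}

static void *d1, *d2, *d3;

/* Single teardown path; free(NULL) is a no-op, so partial state is safe. */
static void debug_exit(void)
{
    free(d3);
    free(d2);
    free(d1);
    d1 = d2 = d3 = NULL;
}

/* Each step runs only if its predecessor succeeded; any failure falls
 * through to the shared cleanup, as in the reworked init above. */
static int debug_init(void)
{
    d1 = create(1);
    if (d1)
        d2 = create(2);
    if (d2)
        d3 = create(3);
    if (d3)
        return 0;

    debug_exit();
    return -1;
}
```

This works because the teardown routine tolerates a partially built state, so init needs no per-step labels.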
+72 -66
fs/ocfs2/cluster/tcp.c
··· 546 546 } 547 547 548 548 if (was_valid && !valid) { 549 - printk(KERN_NOTICE "o2net: no longer connected to " 549 + printk(KERN_NOTICE "o2net: No longer connected to " 550 550 SC_NODEF_FMT "\n", SC_NODEF_ARGS(old_sc)); 551 551 o2net_complete_nodes_nsw(nn); 552 552 } ··· 556 556 cancel_delayed_work(&nn->nn_connect_expired); 557 557 printk(KERN_NOTICE "o2net: %s " SC_NODEF_FMT "\n", 558 558 o2nm_this_node() > sc->sc_node->nd_num ? 559 - "connected to" : "accepted connection from", 559 + "Connected to" : "Accepted connection from", 560 560 SC_NODEF_ARGS(sc)); 561 561 } 562 562 ··· 644 644 o2net_sc_queue_work(sc, &sc->sc_connect_work); 645 645 break; 646 646 default: 647 - printk(KERN_INFO "o2net: connection to " SC_NODEF_FMT 647 + printk(KERN_INFO "o2net: Connection to " SC_NODEF_FMT 648 648 " shutdown, state %d\n", 649 649 SC_NODEF_ARGS(sc), sk->sk_state); 650 650 o2net_sc_queue_work(sc, &sc->sc_shutdown_work); ··· 1035 1035 return ret; 1036 1036 } 1037 1037 1038 + /* Get a map of all nodes to which this node is currently connected to */ 1039 + void o2net_fill_node_map(unsigned long *map, unsigned bytes) 1040 + { 1041 + struct o2net_sock_container *sc; 1042 + int node, ret; 1043 + 1044 + BUG_ON(bytes < (BITS_TO_LONGS(O2NM_MAX_NODES) * sizeof(unsigned long))); 1045 + 1046 + memset(map, 0, bytes); 1047 + for (node = 0; node < O2NM_MAX_NODES; ++node) { 1048 + o2net_tx_can_proceed(o2net_nn_from_num(node), &sc, &ret); 1049 + if (!ret) { 1050 + set_bit(node, map); 1051 + sc_put(sc); 1052 + } 1053 + } 1054 + } 1055 + EXPORT_SYMBOL_GPL(o2net_fill_node_map); 1056 + 1038 1057 int o2net_send_message_vec(u32 msg_type, u32 key, struct kvec *caller_vec, 1039 1058 size_t caller_veclen, u8 target_node, int *status) 1040 1059 { ··· 1304 1285 struct o2net_node *nn = o2net_nn_from_num(sc->sc_node->nd_num); 1305 1286 1306 1287 if (hand->protocol_version != cpu_to_be64(O2NET_PROTOCOL_VERSION)) { 1307 - mlog(ML_NOTICE, SC_NODEF_FMT " advertised net protocol " 1308 - "version %llu but 
%llu is required, disconnecting\n", 1309 - SC_NODEF_ARGS(sc), 1310 - (unsigned long long)be64_to_cpu(hand->protocol_version), 1311 - O2NET_PROTOCOL_VERSION); 1288 + printk(KERN_NOTICE "o2net: " SC_NODEF_FMT " Advertised net " 1289 + "protocol version %llu but %llu is required. " 1290 + "Disconnecting.\n", SC_NODEF_ARGS(sc), 1291 + (unsigned long long)be64_to_cpu(hand->protocol_version), 1292 + O2NET_PROTOCOL_VERSION); 1312 1293 1313 1294 /* don't bother reconnecting if its the wrong version. */ 1314 1295 o2net_ensure_shutdown(nn, sc, -ENOTCONN); ··· 1322 1303 */ 1323 1304 if (be32_to_cpu(hand->o2net_idle_timeout_ms) != 1324 1305 o2net_idle_timeout()) { 1325 - mlog(ML_NOTICE, SC_NODEF_FMT " uses a network idle timeout of " 1326 - "%u ms, but we use %u ms locally. disconnecting\n", 1327 - SC_NODEF_ARGS(sc), 1328 - be32_to_cpu(hand->o2net_idle_timeout_ms), 1329 - o2net_idle_timeout()); 1306 + printk(KERN_NOTICE "o2net: " SC_NODEF_FMT " uses a network " 1307 + "idle timeout of %u ms, but we use %u ms locally. " 1308 + "Disconnecting.\n", SC_NODEF_ARGS(sc), 1309 + be32_to_cpu(hand->o2net_idle_timeout_ms), 1310 + o2net_idle_timeout()); 1330 1311 o2net_ensure_shutdown(nn, sc, -ENOTCONN); 1331 1312 return -1; 1332 1313 } 1333 1314 1334 1315 if (be32_to_cpu(hand->o2net_keepalive_delay_ms) != 1335 1316 o2net_keepalive_delay()) { 1336 - mlog(ML_NOTICE, SC_NODEF_FMT " uses a keepalive delay of " 1337 - "%u ms, but we use %u ms locally. disconnecting\n", 1338 - SC_NODEF_ARGS(sc), 1339 - be32_to_cpu(hand->o2net_keepalive_delay_ms), 1340 - o2net_keepalive_delay()); 1317 + printk(KERN_NOTICE "o2net: " SC_NODEF_FMT " uses a keepalive " 1318 + "delay of %u ms, but we use %u ms locally. 
" 1319 + "Disconnecting.\n", SC_NODEF_ARGS(sc), 1320 + be32_to_cpu(hand->o2net_keepalive_delay_ms), 1321 + o2net_keepalive_delay()); 1341 1322 o2net_ensure_shutdown(nn, sc, -ENOTCONN); 1342 1323 return -1; 1343 1324 } 1344 1325 1345 1326 if (be32_to_cpu(hand->o2hb_heartbeat_timeout_ms) != 1346 1327 O2HB_MAX_WRITE_TIMEOUT_MS) { 1347 - mlog(ML_NOTICE, SC_NODEF_FMT " uses a heartbeat timeout of " 1348 - "%u ms, but we use %u ms locally. disconnecting\n", 1349 - SC_NODEF_ARGS(sc), 1350 - be32_to_cpu(hand->o2hb_heartbeat_timeout_ms), 1351 - O2HB_MAX_WRITE_TIMEOUT_MS); 1328 + printk(KERN_NOTICE "o2net: " SC_NODEF_FMT " uses a heartbeat " 1329 + "timeout of %u ms, but we use %u ms locally. " 1330 + "Disconnecting.\n", SC_NODEF_ARGS(sc), 1331 + be32_to_cpu(hand->o2hb_heartbeat_timeout_ms), 1332 + O2HB_MAX_WRITE_TIMEOUT_MS); 1352 1333 o2net_ensure_shutdown(nn, sc, -ENOTCONN); 1353 1334 return -1; 1354 1335 } ··· 1559 1540 { 1560 1541 struct o2net_sock_container *sc = (struct o2net_sock_container *)data; 1561 1542 struct o2net_node *nn = o2net_nn_from_num(sc->sc_node->nd_num); 1562 - 1563 1543 #ifdef CONFIG_DEBUG_FS 1564 - ktime_t now = ktime_get(); 1544 + unsigned long msecs = ktime_to_ms(ktime_get()) - 1545 + ktime_to_ms(sc->sc_tv_timer); 1546 + #else 1547 + unsigned long msecs = o2net_idle_timeout(); 1565 1548 #endif 1566 1549 1567 - printk(KERN_NOTICE "o2net: connection to " SC_NODEF_FMT " has been idle for %u.%u " 1568 - "seconds, shutting it down.\n", SC_NODEF_ARGS(sc), 1569 - o2net_idle_timeout() / 1000, 1570 - o2net_idle_timeout() % 1000); 1571 - 1572 - #ifdef CONFIG_DEBUG_FS 1573 - mlog(ML_NOTICE, "Here are some times that might help debug the " 1574 - "situation: (Timer: %lld, Now %lld, DataReady %lld, Advance %lld-%lld, " 1575 - "Key 0x%08x, Func %u, FuncTime %lld-%lld)\n", 1576 - (long long)ktime_to_us(sc->sc_tv_timer), (long long)ktime_to_us(now), 1577 - (long long)ktime_to_us(sc->sc_tv_data_ready), 1578 - (long long)ktime_to_us(sc->sc_tv_advance_start), 1579 - 
(long long)ktime_to_us(sc->sc_tv_advance_stop), 1580 - sc->sc_msg_key, sc->sc_msg_type, 1581 - (long long)ktime_to_us(sc->sc_tv_func_start), 1582 - (long long)ktime_to_us(sc->sc_tv_func_stop)); 1583 - #endif 1550 + printk(KERN_NOTICE "o2net: Connection to " SC_NODEF_FMT " has been " 1551 + "idle for %lu.%lu secs, shutting it down.\n", SC_NODEF_ARGS(sc), 1552 + msecs / 1000, msecs % 1000); 1584 1553 1585 1554 /* 1586 1555 * Initialize the nn_timeout so that the next connection attempt ··· 1701 1694 1702 1695 out: 1703 1696 if (ret) { 1704 - mlog(ML_NOTICE, "connect attempt to " SC_NODEF_FMT " failed " 1705 - "with errno %d\n", SC_NODEF_ARGS(sc), ret); 1697 + printk(KERN_NOTICE "o2net: Connect attempt to " SC_NODEF_FMT 1698 + " failed with errno %d\n", SC_NODEF_ARGS(sc), ret); 1706 1699 /* 0 err so that another will be queued and attempted 1707 1700 * from set_nn_state */ 1708 1701 if (sc) ··· 1725 1718 1726 1719 spin_lock(&nn->nn_lock); 1727 1720 if (!nn->nn_sc_valid) { 1728 - mlog(ML_ERROR, "no connection established with node %u after " 1729 - "%u.%u seconds, giving up and returning errors.\n", 1721 + printk(KERN_NOTICE "o2net: No connection established with " 1722 + "node %u after %u.%u seconds, giving up.\n", 1730 1723 o2net_num_from_nn(nn), 1731 1724 o2net_idle_timeout() / 1000, 1732 1725 o2net_idle_timeout() % 1000); ··· 1869 1862 1870 1863 node = o2nm_get_node_by_ip(sin.sin_addr.s_addr); 1871 1864 if (node == NULL) { 1872 - mlog(ML_NOTICE, "attempt to connect from unknown node at %pI4:%d\n", 1873 - &sin.sin_addr.s_addr, ntohs(sin.sin_port)); 1865 + printk(KERN_NOTICE "o2net: Attempt to connect from unknown " 1866 + "node at %pI4:%d\n", &sin.sin_addr.s_addr, 1867 + ntohs(sin.sin_port)); 1874 1868 ret = -EINVAL; 1875 1869 goto out; 1876 1870 } 1877 1871 1878 1872 if (o2nm_this_node() >= node->nd_num) { 1879 1873 local_node = o2nm_get_node_by_num(o2nm_this_node()); 1880 - mlog(ML_NOTICE, "unexpected connect attempt seen at node '%s' (" 1881 - "%u, %pI4:%d) from 
node '%s' (%u, %pI4:%d)\n", 1882 - local_node->nd_name, local_node->nd_num, 1883 - &(local_node->nd_ipv4_address), 1884 - ntohs(local_node->nd_ipv4_port), 1885 - node->nd_name, node->nd_num, &sin.sin_addr.s_addr, 1886 - ntohs(sin.sin_port)); 1874 + printk(KERN_NOTICE "o2net: Unexpected connect attempt seen " 1875 + "at node '%s' (%u, %pI4:%d) from node '%s' (%u, " 1876 + "%pI4:%d)\n", local_node->nd_name, local_node->nd_num, 1877 + &(local_node->nd_ipv4_address), 1878 + ntohs(local_node->nd_ipv4_port), node->nd_name, 1879 + node->nd_num, &sin.sin_addr.s_addr, ntohs(sin.sin_port)); 1887 1880 ret = -EINVAL; 1888 1881 goto out; 1889 1882 } ··· 1908 1901 ret = 0; 1909 1902 spin_unlock(&nn->nn_lock); 1910 1903 if (ret) { 1911 - mlog(ML_NOTICE, "attempt to connect from node '%s' at " 1912 - "%pI4:%d but it already has an open connection\n", 1913 - node->nd_name, &sin.sin_addr.s_addr, 1914 - ntohs(sin.sin_port)); 1904 + printk(KERN_NOTICE "o2net: Attempt to connect from node '%s' " 1905 + "at %pI4:%d but it already has an open connection\n", 1906 + node->nd_name, &sin.sin_addr.s_addr, 1907 + ntohs(sin.sin_port)); 1915 1908 goto out; 1916 1909 } 1917 1910 ··· 1991 1984 1992 1985 ret = sock_create(PF_INET, SOCK_STREAM, IPPROTO_TCP, &sock); 1993 1986 if (ret < 0) { 1994 - mlog(ML_ERROR, "unable to create socket, ret=%d\n", ret); 1987 + printk(KERN_ERR "o2net: Error %d while creating socket\n", ret); 1995 1988 goto out; 1996 1989 } 1997 1990 ··· 2008 2001 sock->sk->sk_reuse = 1; 2009 2002 ret = sock->ops->bind(sock, (struct sockaddr *)&sin, sizeof(sin)); 2010 2003 if (ret < 0) { 2011 - mlog(ML_ERROR, "unable to bind socket at %pI4:%u, " 2012 - "ret=%d\n", &addr, ntohs(port), ret); 2004 + printk(KERN_ERR "o2net: Error %d while binding socket at " 2005 + "%pI4:%u\n", ret, &addr, ntohs(port)); 2013 2006 goto out; 2014 2007 } 2015 2008 2016 2009 ret = sock->ops->listen(sock, 64); 2017 - if (ret < 0) { 2018 - mlog(ML_ERROR, "unable to listen on %pI4:%u, ret=%d\n", 2019 - &addr, 
ntohs(port), ret); 2020 - } 2010 + if (ret < 0) 2011 + printk(KERN_ERR "o2net: Error %d while listening on %pI4:%u\n", 2012 + ret, &addr, ntohs(port)); 2021 2013 2022 2014 out: 2023 2015 if (ret) {
+2
fs/ocfs2/cluster/tcp.h
··· 106 106 struct list_head *unreg_list); 107 107 void o2net_unregister_handler_list(struct list_head *list); 108 108 109 + void o2net_fill_node_map(unsigned long *map, unsigned bytes); 110 + 109 111 struct o2nm_node; 110 112 int o2net_register_hb_callbacks(void); 111 113 void o2net_unregister_hb_callbacks(void);
+1 -2
fs/ocfs2/dir.c
··· 1184 1184 if (pde) 1185 1185 le16_add_cpu(&pde->rec_len, 1186 1186 le16_to_cpu(de->rec_len)); 1187 - else 1188 - de->inode = 0; 1187 + de->inode = 0; 1189 1188 dir->i_version++; 1190 1189 ocfs2_journal_dirty(handle, bh); 1191 1190 goto bail;
+12 -44
fs/ocfs2/dlm/dlmcommon.h
··· 859 859 void dlm_wait_for_recovery(struct dlm_ctxt *dlm); 860 860 void dlm_kick_recovery_thread(struct dlm_ctxt *dlm); 861 861 int dlm_is_node_dead(struct dlm_ctxt *dlm, u8 node); 862 - int dlm_wait_for_node_death(struct dlm_ctxt *dlm, u8 node, int timeout); 863 - int dlm_wait_for_node_recovery(struct dlm_ctxt *dlm, u8 node, int timeout); 862 + void dlm_wait_for_node_death(struct dlm_ctxt *dlm, u8 node, int timeout); 863 + void dlm_wait_for_node_recovery(struct dlm_ctxt *dlm, u8 node, int timeout); 864 864 865 865 void dlm_put(struct dlm_ctxt *dlm); 866 866 struct dlm_ctxt *dlm_grab(struct dlm_ctxt *dlm); ··· 877 877 kref_get(&res->refs); 878 878 } 879 879 void dlm_lockres_put(struct dlm_lock_resource *res); 880 - void __dlm_unhash_lockres(struct dlm_lock_resource *res); 881 - void __dlm_insert_lockres(struct dlm_ctxt *dlm, 882 - struct dlm_lock_resource *res); 880 + void __dlm_unhash_lockres(struct dlm_ctxt *dlm, struct dlm_lock_resource *res); 881 + void __dlm_insert_lockres(struct dlm_ctxt *dlm, struct dlm_lock_resource *res); 883 882 struct dlm_lock_resource * __dlm_lookup_lockres_full(struct dlm_ctxt *dlm, 884 883 const char *name, 885 884 unsigned int len, ··· 901 902 const char *name, 902 903 unsigned int namelen); 903 904 904 - #define dlm_lockres_set_refmap_bit(bit,res) \ 905 - __dlm_lockres_set_refmap_bit(bit,res,__FILE__,__LINE__) 906 - #define dlm_lockres_clear_refmap_bit(bit,res) \ 907 - __dlm_lockres_clear_refmap_bit(bit,res,__FILE__,__LINE__) 905 + void dlm_lockres_set_refmap_bit(struct dlm_ctxt *dlm, 906 + struct dlm_lock_resource *res, int bit); 907 + void dlm_lockres_clear_refmap_bit(struct dlm_ctxt *dlm, 908 + struct dlm_lock_resource *res, int bit); 908 909 909 - static inline void __dlm_lockres_set_refmap_bit(int bit, 910 - struct dlm_lock_resource *res, 911 - const char *file, 912 - int line) 913 - { 914 - //printk("%s:%d:%.*s: setting bit %d\n", file, line, 915 - // res->lockname.len, res->lockname.name, bit); 916 - set_bit(bit, 
res->refmap); 917 - } 918 - 919 - static inline void __dlm_lockres_clear_refmap_bit(int bit, 920 - struct dlm_lock_resource *res, 921 - const char *file, 922 - int line) 923 - { 924 - //printk("%s:%d:%.*s: clearing bit %d\n", file, line, 925 - // res->lockname.len, res->lockname.name, bit); 926 - clear_bit(bit, res->refmap); 927 - } 928 - 929 - void __dlm_lockres_drop_inflight_ref(struct dlm_ctxt *dlm, 930 - struct dlm_lock_resource *res, 931 - const char *file, 932 - int line); 933 - void __dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm, 934 - struct dlm_lock_resource *res, 935 - int new_lockres, 936 - const char *file, 937 - int line); 938 - #define dlm_lockres_drop_inflight_ref(d,r) \ 939 - __dlm_lockres_drop_inflight_ref(d,r,__FILE__,__LINE__) 940 - #define dlm_lockres_grab_inflight_ref(d,r) \ 941 - __dlm_lockres_grab_inflight_ref(d,r,0,__FILE__,__LINE__) 942 - #define dlm_lockres_grab_inflight_ref_new(d,r) \ 943 - __dlm_lockres_grab_inflight_ref(d,r,1,__FILE__,__LINE__) 910 + void dlm_lockres_drop_inflight_ref(struct dlm_ctxt *dlm, 911 + struct dlm_lock_resource *res); 912 + void dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm, 913 + struct dlm_lock_resource *res); 944 914 945 915 void dlm_queue_ast(struct dlm_ctxt *dlm, struct dlm_lock *lock); 946 916 void dlm_queue_bast(struct dlm_ctxt *dlm, struct dlm_lock *lock);
+22 -22
fs/ocfs2/dlm/dlmdomain.c
··· 157 157 158 158 static void dlm_unregister_domain_handlers(struct dlm_ctxt *dlm); 159 159 160 - void __dlm_unhash_lockres(struct dlm_lock_resource *lockres) 160 + void __dlm_unhash_lockres(struct dlm_ctxt *dlm, struct dlm_lock_resource *res) 161 161 { 162 - if (!hlist_unhashed(&lockres->hash_node)) { 163 - hlist_del_init(&lockres->hash_node); 164 - dlm_lockres_put(lockres); 165 - } 162 + if (hlist_unhashed(&res->hash_node)) 163 + return; 164 + 165 + mlog(0, "%s: Unhash res %.*s\n", dlm->name, res->lockname.len, 166 + res->lockname.name); 167 + hlist_del_init(&res->hash_node); 168 + dlm_lockres_put(res); 166 169 } 167 170 168 - void __dlm_insert_lockres(struct dlm_ctxt *dlm, 169 - struct dlm_lock_resource *res) 171 + void __dlm_insert_lockres(struct dlm_ctxt *dlm, struct dlm_lock_resource *res) 170 172 { 171 173 struct hlist_head *bucket; 172 174 struct qstr *q; ··· 182 180 dlm_lockres_get(res); 183 181 184 182 hlist_add_head(&res->hash_node, bucket); 183 + 184 + mlog(0, "%s: Hash res %.*s\n", dlm->name, res->lockname.len, 185 + res->lockname.name); 185 186 } 186 187 187 188 struct dlm_lock_resource * __dlm_lookup_lockres_full(struct dlm_ctxt *dlm, ··· 544 539 545 540 static void __dlm_print_nodes(struct dlm_ctxt *dlm) 546 541 { 547 - int node = -1; 542 + int node = -1, num = 0; 548 543 549 544 assert_spin_locked(&dlm->spinlock); 550 545 551 - printk(KERN_NOTICE "o2dlm: Nodes in domain %s: ", dlm->name); 552 - 546 + printk("( "); 553 547 while ((node = find_next_bit(dlm->domain_map, O2NM_MAX_NODES, 554 548 node + 1)) < O2NM_MAX_NODES) { 555 549 printk("%d ", node); 550 + ++num; 556 551 } 557 - printk("\n"); 552 + printk(") %u nodes\n", num); 558 553 } 559 554 560 555 static int dlm_exit_domain_handler(struct o2net_msg *msg, u32 len, void *data, ··· 571 566 572 567 node = exit_msg->node_idx; 573 568 574 - printk(KERN_NOTICE "o2dlm: Node %u leaves domain %s\n", node, dlm->name); 575 - 576 569 spin_lock(&dlm->spinlock); 577 570 clear_bit(node, dlm->domain_map); 578 
571 clear_bit(node, dlm->exit_domain_map); 572 + printk(KERN_NOTICE "o2dlm: Node %u leaves domain %s ", node, dlm->name); 579 573 __dlm_print_nodes(dlm); 580 574 581 575 /* notify anything attached to the heartbeat events */ ··· 759 755 760 756 dlm_mark_domain_leaving(dlm); 761 757 dlm_leave_domain(dlm); 758 + printk(KERN_NOTICE "o2dlm: Leaving domain %s\n", dlm->name); 762 759 dlm_force_free_mles(dlm); 763 760 dlm_complete_dlm_shutdown(dlm); 764 761 } ··· 975 970 clear_bit(assert->node_idx, dlm->exit_domain_map); 976 971 __dlm_set_joining_node(dlm, DLM_LOCK_RES_OWNER_UNKNOWN); 977 972 978 - printk(KERN_NOTICE "o2dlm: Node %u joins domain %s\n", 973 + printk(KERN_NOTICE "o2dlm: Node %u joins domain %s ", 979 974 assert->node_idx, dlm->name); 980 975 __dlm_print_nodes(dlm); 981 976 ··· 1706 1701 bail: 1707 1702 spin_lock(&dlm->spinlock); 1708 1703 __dlm_set_joining_node(dlm, DLM_LOCK_RES_OWNER_UNKNOWN); 1709 - if (!status) 1704 + if (!status) { 1705 + printk(KERN_NOTICE "o2dlm: Joining domain %s ", dlm->name); 1710 1706 __dlm_print_nodes(dlm); 1707 + } 1711 1708 spin_unlock(&dlm->spinlock); 1712 1709 1713 1710 if (ctxt) { ··· 2135 2128 if (strlen(domain) >= O2NM_MAX_NAME_LEN) { 2136 2129 ret = -ENAMETOOLONG; 2137 2130 mlog(ML_ERROR, "domain name length too long\n"); 2138 - goto leave; 2139 - } 2140 - 2141 - if (!o2hb_check_local_node_heartbeating()) { 2142 - mlog(ML_ERROR, "the local node has not been configured, or is " 2143 - "not heartbeating\n"); 2144 - ret = -EPROTO; 2145 2131 goto leave; 2146 2132 } 2147 2133
+26 -28
fs/ocfs2/dlm/dlmlock.c
··· 183 183 kick_thread = 1; 184 184 } 185 185 } 186 - /* reduce the inflight count, this may result in the lockres 187 - * being purged below during calc_usage */ 188 - if (lock->ml.node == dlm->node_num) 189 - dlm_lockres_drop_inflight_ref(dlm, res); 190 186 191 187 spin_unlock(&res->spinlock); 192 188 wake_up(&res->wq); ··· 227 231 lock->ml.type, res->lockname.len, 228 232 res->lockname.name, flags); 229 233 234 + /* 235 + * Wait if resource is getting recovered, remastered, etc. 236 + * If the resource was remastered and new owner is self, then exit. 237 + */ 230 238 spin_lock(&res->spinlock); 231 - 232 - /* will exit this call with spinlock held */ 233 239 __dlm_wait_on_lockres(res); 240 + if (res->owner == dlm->node_num) { 241 + spin_unlock(&res->spinlock); 242 + return DLM_RECOVERING; 243 + } 234 244 res->state |= DLM_LOCK_RES_IN_PROGRESS; 235 245 236 246 /* add lock to local (secondary) queue */ ··· 321 319 tmpret = o2net_send_message(DLM_CREATE_LOCK_MSG, dlm->key, &create, 322 320 sizeof(create), res->owner, &status); 323 321 if (tmpret >= 0) { 324 - // successfully sent and received 325 - ret = status; // this is already a dlm_status 322 + ret = status; 326 323 if (ret == DLM_REJECTED) { 327 - mlog(ML_ERROR, "%s:%.*s: BUG. this is a stale lockres " 328 - "no longer owned by %u. that node is coming back " 329 - "up currently.\n", dlm->name, create.namelen, 324 + mlog(ML_ERROR, "%s: res %.*s, Stale lockres no longer " 325 + "owned by node %u. 
That node is coming back up " 326 + "currently.\n", dlm->name, create.namelen, 330 327 create.name, res->owner); 331 328 dlm_print_one_lock_resource(res); 332 329 BUG(); 333 330 } 334 331 } else { 335 - mlog(ML_ERROR, "Error %d when sending message %u (key 0x%x) to " 336 - "node %u\n", tmpret, DLM_CREATE_LOCK_MSG, dlm->key, 337 - res->owner); 338 - if (dlm_is_host_down(tmpret)) { 332 + mlog(ML_ERROR, "%s: res %.*s, Error %d send CREATE LOCK to " 333 + "node %u\n", dlm->name, create.namelen, create.name, 334 + tmpret, res->owner); 335 + if (dlm_is_host_down(tmpret)) 339 336 ret = DLM_RECOVERING; 340 - mlog(0, "node %u died so returning DLM_RECOVERING " 341 - "from lock message!\n", res->owner); 342 - } else { 337 + else 343 338 ret = dlm_err_to_dlm_status(tmpret); 344 - } 345 339 } 346 340 347 341 return ret; ··· 438 440 /* zero memory only if kernel-allocated */ 439 441 lksb = kzalloc(sizeof(*lksb), GFP_NOFS); 440 442 if (!lksb) { 441 - kfree(lock); 443 + kmem_cache_free(dlm_lock_cache, lock); 442 444 return NULL; 443 445 } 444 446 kernel_allocated = 1; ··· 716 718 717 719 if (status == DLM_RECOVERING || status == DLM_MIGRATING || 718 720 status == DLM_FORWARD) { 719 - mlog(0, "retrying lock with migration/" 720 - "recovery/in progress\n"); 721 721 msleep(100); 722 - /* no waiting for dlm_reco_thread */ 723 722 if (recovery) { 724 723 if (status != DLM_RECOVERING) 725 724 goto retry_lock; 726 - 727 - mlog(0, "%s: got RECOVERING " 728 - "for $RECOVERY lock, master " 729 - "was %u\n", dlm->name, 730 - res->owner); 731 725 /* wait to see the node go down, then 732 726 * drop down and allow the lockres to 733 727 * get cleaned up. need to remaster. 
*/ ··· 730 740 goto retry_lock; 731 741 } 732 742 } 743 + 744 + /* Inflight taken in dlm_get_lock_resource() is dropped here */ 745 + spin_lock(&res->spinlock); 746 + dlm_lockres_drop_inflight_ref(dlm, res); 747 + spin_unlock(&res->spinlock); 748 + 749 + dlm_lockres_calc_usage(dlm, res); 750 + dlm_kick_thread(dlm, res); 733 751 734 752 if (status != DLM_NORMAL) { 735 753 lock->lksb->flags &= ~DLM_LKSB_GET_LVB;
+91 -90
fs/ocfs2/dlm/dlmmaster.c
··· 631 631 return NULL; 632 632 } 633 633 634 - void __dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm, 635 - struct dlm_lock_resource *res, 636 - int new_lockres, 637 - const char *file, 638 - int line) 634 + void dlm_lockres_set_refmap_bit(struct dlm_ctxt *dlm, 635 + struct dlm_lock_resource *res, int bit) 639 636 { 640 - if (!new_lockres) 641 - assert_spin_locked(&res->spinlock); 637 + assert_spin_locked(&res->spinlock); 642 638 643 - if (!test_bit(dlm->node_num, res->refmap)) { 644 - BUG_ON(res->inflight_locks != 0); 645 - dlm_lockres_set_refmap_bit(dlm->node_num, res); 646 - } 647 - res->inflight_locks++; 648 - mlog(0, "%s:%.*s: inflight++: now %u\n", 649 - dlm->name, res->lockname.len, res->lockname.name, 650 - res->inflight_locks); 639 + mlog(0, "res %.*s, set node %u, %ps()\n", res->lockname.len, 640 + res->lockname.name, bit, __builtin_return_address(0)); 641 + 642 + set_bit(bit, res->refmap); 651 643 } 652 644 653 - void __dlm_lockres_drop_inflight_ref(struct dlm_ctxt *dlm, 654 - struct dlm_lock_resource *res, 655 - const char *file, 656 - int line) 645 + void dlm_lockres_clear_refmap_bit(struct dlm_ctxt *dlm, 646 + struct dlm_lock_resource *res, int bit) 647 + { 648 + assert_spin_locked(&res->spinlock); 649 + 650 + mlog(0, "res %.*s, clr node %u, %ps()\n", res->lockname.len, 651 + res->lockname.name, bit, __builtin_return_address(0)); 652 + 653 + clear_bit(bit, res->refmap); 654 + } 655 + 656 + 657 + void dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm, 658 + struct dlm_lock_resource *res) 659 + { 660 + assert_spin_locked(&res->spinlock); 661 + 662 + res->inflight_locks++; 663 + 664 + mlog(0, "%s: res %.*s, inflight++: now %u, %ps()\n", dlm->name, 665 + res->lockname.len, res->lockname.name, res->inflight_locks, 666 + __builtin_return_address(0)); 667 + } 668 + 669 + void dlm_lockres_drop_inflight_ref(struct dlm_ctxt *dlm, 670 + struct dlm_lock_resource *res) 657 671 { 658 672 assert_spin_locked(&res->spinlock); 659 673 660 674 
BUG_ON(res->inflight_locks == 0); 675 + 661 676 res->inflight_locks--; 662 - mlog(0, "%s:%.*s: inflight--: now %u\n", 663 - dlm->name, res->lockname.len, res->lockname.name, 664 - res->inflight_locks); 665 - if (res->inflight_locks == 0) 666 - dlm_lockres_clear_refmap_bit(dlm->node_num, res); 677 + 678 + mlog(0, "%s: res %.*s, inflight--: now %u, %ps()\n", dlm->name, 679 + res->lockname.len, res->lockname.name, res->inflight_locks, 680 + __builtin_return_address(0)); 681 + 667 682 wake_up(&res->wq); 668 683 } 669 684 ··· 712 697 unsigned int hash; 713 698 int tries = 0; 714 699 int bit, wait_on_recovery = 0; 715 - int drop_inflight_if_nonlocal = 0; 716 700 717 701 BUG_ON(!lockid); 718 702 ··· 723 709 spin_lock(&dlm->spinlock); 724 710 tmpres = __dlm_lookup_lockres_full(dlm, lockid, namelen, hash); 725 711 if (tmpres) { 726 - int dropping_ref = 0; 727 - 728 712 spin_unlock(&dlm->spinlock); 729 - 730 713 spin_lock(&tmpres->spinlock); 731 - /* We wait for the other thread that is mastering the resource */ 714 + /* Wait on the thread that is mastering the resource */ 732 715 if (tmpres->owner == DLM_LOCK_RES_OWNER_UNKNOWN) { 733 716 __dlm_wait_on_lockres(tmpres); 734 717 BUG_ON(tmpres->owner == DLM_LOCK_RES_OWNER_UNKNOWN); 735 - } 736 - 737 - if (tmpres->owner == dlm->node_num) { 738 - BUG_ON(tmpres->state & DLM_LOCK_RES_DROPPING_REF); 739 - dlm_lockres_grab_inflight_ref(dlm, tmpres); 740 - } else if (tmpres->state & DLM_LOCK_RES_DROPPING_REF) 741 - dropping_ref = 1; 742 - spin_unlock(&tmpres->spinlock); 743 - 744 - /* wait until done messaging the master, drop our ref to allow 745 - * the lockres to be purged, start over. 
*/ 746 - if (dropping_ref) { 747 - spin_lock(&tmpres->spinlock); 748 - __dlm_wait_on_lockres_flags(tmpres, DLM_LOCK_RES_DROPPING_REF); 749 718 spin_unlock(&tmpres->spinlock); 750 719 dlm_lockres_put(tmpres); 751 720 tmpres = NULL; 752 721 goto lookup; 753 722 } 754 723 755 - mlog(0, "found in hash!\n"); 724 + /* Wait on the resource purge to complete before continuing */ 725 + if (tmpres->state & DLM_LOCK_RES_DROPPING_REF) { 726 + BUG_ON(tmpres->owner == dlm->node_num); 727 + __dlm_wait_on_lockres_flags(tmpres, 728 + DLM_LOCK_RES_DROPPING_REF); 729 + spin_unlock(&tmpres->spinlock); 730 + dlm_lockres_put(tmpres); 731 + tmpres = NULL; 732 + goto lookup; 733 + } 734 + 735 + /* Grab inflight ref to pin the resource */ 736 + dlm_lockres_grab_inflight_ref(dlm, tmpres); 737 + 738 + spin_unlock(&tmpres->spinlock); 756 739 if (res) 757 740 dlm_lockres_put(res); 758 741 res = tmpres; ··· 840 829 * but they might own this lockres. wait on them. */ 841 830 bit = find_next_bit(dlm->recovery_map, O2NM_MAX_NODES, 0); 842 831 if (bit < O2NM_MAX_NODES) { 843 - mlog(ML_NOTICE, "%s:%.*s: at least one node (%d) to " 844 - "recover before lock mastery can begin\n", 832 + mlog(0, "%s: res %.*s, At least one node (%d) " 833 + "to recover before lock mastery can begin\n", 845 834 dlm->name, namelen, (char *)lockid, bit); 846 835 wait_on_recovery = 1; 847 836 } ··· 854 843 855 844 /* finally add the lockres to its hash bucket */ 856 845 __dlm_insert_lockres(dlm, res); 857 - /* since this lockres is new it doesn't not require the spinlock */ 858 - dlm_lockres_grab_inflight_ref_new(dlm, res); 859 846 860 - /* if this node does not become the master make sure to drop 861 - * this inflight reference below */ 862 - drop_inflight_if_nonlocal = 1; 847 + /* Grab inflight ref to pin the resource */ 848 + spin_lock(&res->spinlock); 849 + dlm_lockres_grab_inflight_ref(dlm, res); 850 + spin_unlock(&res->spinlock); 863 851 864 852 /* get an extra ref on the mle in case this is a BLOCK 865 853 * if so, 
the creator of the BLOCK may try to put the last ··· 874 864 * dlm spinlock would be detectable be a change on the mle, 875 865 * so we only need to clear out the recovery map once. */ 876 866 if (dlm_is_recovery_lock(lockid, namelen)) { 877 - mlog(ML_NOTICE, "%s: recovery map is not empty, but " 878 - "must master $RECOVERY lock now\n", dlm->name); 867 + mlog(0, "%s: Recovery map is not empty, but must " 868 + "master $RECOVERY lock now\n", dlm->name); 879 869 if (!dlm_pre_master_reco_lockres(dlm, res)) 880 870 wait_on_recovery = 0; 881 871 else { ··· 893 883 spin_lock(&dlm->spinlock); 894 884 bit = find_next_bit(dlm->recovery_map, O2NM_MAX_NODES, 0); 895 885 if (bit < O2NM_MAX_NODES) { 896 - mlog(ML_NOTICE, "%s:%.*s: at least one node (%d) to " 897 - "recover before lock mastery can begin\n", 886 + mlog(0, "%s: res %.*s, At least one node (%d) " 887 + "to recover before lock mastery can begin\n", 898 888 dlm->name, namelen, (char *)lockid, bit); 899 889 wait_on_recovery = 1; 900 890 } else ··· 923 913 * yet, keep going until it does. this is how the 924 914 * master will know that asserts are needed back to 925 915 * the lower nodes. 
*/ 926 - mlog(0, "%s:%.*s: requests only up to %u but master " 927 - "is %u, keep going\n", dlm->name, namelen, 916 + mlog(0, "%s: res %.*s, Requests only up to %u but " 917 + "master is %u, keep going\n", dlm->name, namelen, 928 918 lockid, nodenum, mle->master); 929 919 } 930 920 } ··· 934 924 ret = dlm_wait_for_lock_mastery(dlm, res, mle, &blocked); 935 925 if (ret < 0) { 936 926 wait_on_recovery = 1; 937 - mlog(0, "%s:%.*s: node map changed, redo the " 938 - "master request now, blocked=%d\n", 939 - dlm->name, res->lockname.len, 927 + mlog(0, "%s: res %.*s, Node map changed, redo the master " 928 + "request now, blocked=%d\n", dlm->name, res->lockname.len, 940 929 res->lockname.name, blocked); 941 930 if (++tries > 20) { 942 - mlog(ML_ERROR, "%s:%.*s: spinning on " 943 - "dlm_wait_for_lock_mastery, blocked=%d\n", 931 + mlog(ML_ERROR, "%s: res %.*s, Spinning on " 932 + "dlm_wait_for_lock_mastery, blocked = %d\n", 944 933 dlm->name, res->lockname.len, 945 934 res->lockname.name, blocked); 946 935 dlm_print_one_lock_resource(res); ··· 949 940 goto redo_request; 950 941 } 951 942 952 - mlog(0, "lockres mastered by %u\n", res->owner); 943 + mlog(0, "%s: res %.*s, Mastered by %u\n", dlm->name, res->lockname.len, 944 + res->lockname.name, res->owner); 953 945 /* make sure we never continue without this */ 954 946 BUG_ON(res->owner == O2NM_MAX_NODES); 955 947 ··· 962 952 963 953 wake_waiters: 964 954 spin_lock(&res->spinlock); 965 - if (res->owner != dlm->node_num && drop_inflight_if_nonlocal) 966 - dlm_lockres_drop_inflight_ref(dlm, res); 967 955 res->state &= ~DLM_LOCK_RES_IN_PROGRESS; 968 956 spin_unlock(&res->spinlock); 969 957 wake_up(&res->wq); ··· 1434 1426 } 1435 1427 1436 1428 if (res->owner == dlm->node_num) { 1437 - mlog(0, "%s:%.*s: setting bit %u in refmap\n", 1438 - dlm->name, namelen, name, request->node_idx); 1439 - dlm_lockres_set_refmap_bit(request->node_idx, res); 1429 + dlm_lockres_set_refmap_bit(dlm, res, request->node_idx); 1440 1430 
spin_unlock(&res->spinlock); 1441 1431 response = DLM_MASTER_RESP_YES; 1442 1432 if (mle) ··· 1499 1493 * go back and clean the mles on any 1500 1494 * other nodes */ 1501 1495 dispatch_assert = 1; 1502 - dlm_lockres_set_refmap_bit(request->node_idx, res); 1503 - mlog(0, "%s:%.*s: setting bit %u in refmap\n", 1504 - dlm->name, namelen, name, 1505 - request->node_idx); 1496 + dlm_lockres_set_refmap_bit(dlm, res, 1497 + request->node_idx); 1506 1498 } else 1507 1499 response = DLM_MASTER_RESP_NO; 1508 1500 } else { ··· 1706 1702 "lockres, set the bit in the refmap\n", 1707 1703 namelen, lockname, to); 1708 1704 spin_lock(&res->spinlock); 1709 - dlm_lockres_set_refmap_bit(to, res); 1705 + dlm_lockres_set_refmap_bit(dlm, res, to); 1710 1706 spin_unlock(&res->spinlock); 1711 1707 } 1712 1708 } ··· 2191 2187 namelen = res->lockname.len; 2192 2188 BUG_ON(namelen > O2NM_MAX_NAME_LEN); 2193 2189 2194 - mlog(0, "%s:%.*s: sending deref to %d\n", 2195 - dlm->name, namelen, lockname, res->owner); 2196 2190 memset(&deref, 0, sizeof(deref)); 2197 2191 deref.node_idx = dlm->node_num; 2198 2192 deref.namelen = namelen; ··· 2199 2197 ret = o2net_send_message(DLM_DEREF_LOCKRES_MSG, dlm->key, 2200 2198 &deref, sizeof(deref), res->owner, &r); 2201 2199 if (ret < 0) 2202 - mlog(ML_ERROR, "Error %d when sending message %u (key 0x%x) to " 2203 - "node %u\n", ret, DLM_DEREF_LOCKRES_MSG, dlm->key, 2204 - res->owner); 2200 + mlog(ML_ERROR, "%s: res %.*s, error %d send DEREF to node %u\n", 2201 + dlm->name, namelen, lockname, ret, res->owner); 2205 2202 else if (r < 0) { 2206 2203 /* BAD. other node says I did not have a ref. 
*/ 2207 - mlog(ML_ERROR,"while dropping ref on %s:%.*s " 2208 - "(master=%u) got %d.\n", dlm->name, namelen, 2209 - lockname, res->owner, r); 2204 + mlog(ML_ERROR, "%s: res %.*s, DEREF to node %u got %d\n", 2205 + dlm->name, namelen, lockname, res->owner, r); 2210 2206 dlm_print_one_lock_resource(res); 2211 2207 BUG(); 2212 2208 } ··· 2260 2260 else { 2261 2261 BUG_ON(res->state & DLM_LOCK_RES_DROPPING_REF); 2262 2262 if (test_bit(node, res->refmap)) { 2263 - dlm_lockres_clear_refmap_bit(node, res); 2263 + dlm_lockres_clear_refmap_bit(dlm, res, node); 2264 2264 cleared = 1; 2265 2265 } 2266 2266 } ··· 2320 2320 BUG_ON(res->state & DLM_LOCK_RES_DROPPING_REF); 2321 2321 if (test_bit(node, res->refmap)) { 2322 2322 __dlm_wait_on_lockres_flags(res, DLM_LOCK_RES_SETREF_INPROG); 2323 - dlm_lockres_clear_refmap_bit(node, res); 2323 + dlm_lockres_clear_refmap_bit(dlm, res, node); 2324 2324 cleared = 1; 2325 2325 } 2326 2326 spin_unlock(&res->spinlock); ··· 2802 2802 BUG_ON(!list_empty(&lock->bast_list)); 2803 2803 BUG_ON(lock->ast_pending); 2804 2804 BUG_ON(lock->bast_pending); 2805 - dlm_lockres_clear_refmap_bit(lock->ml.node, res); 2805 + dlm_lockres_clear_refmap_bit(dlm, res, 2806 + lock->ml.node); 2806 2807 list_del_init(&lock->list); 2807 2808 dlm_lock_put(lock); 2808 2809 /* In a normal unlock, we would have added a ··· 2824 2823 mlog(0, "%s:%.*s: node %u had a ref to this " 2825 2824 "migrating lockres, clearing\n", dlm->name, 2826 2825 res->lockname.len, res->lockname.name, bit); 2827 - dlm_lockres_clear_refmap_bit(bit, res); 2826 + dlm_lockres_clear_refmap_bit(dlm, res, bit); 2828 2827 } 2829 2828 bit++; 2830 2829 } ··· 2917 2916 &migrate, sizeof(migrate), nodenum, 2918 2917 &status); 2919 2918 if (ret < 0) { 2920 - mlog(ML_ERROR, "Error %d when sending message %u (key " 2921 - "0x%x) to node %u\n", ret, DLM_MIGRATE_REQUEST_MSG, 2922 - dlm->key, nodenum); 2919 + mlog(ML_ERROR, "%s: res %.*s, Error %d send " 2920 + "MIGRATE_REQUEST to node %u\n", dlm->name, 2921 + 
migrate.namelen, migrate.name, ret, nodenum); 2923 2922 if (!dlm_is_host_down(ret)) { 2924 2923 mlog(ML_ERROR, "unhandled error=%d!\n", ret); 2925 2924 BUG(); ··· 2938 2937 dlm->name, res->lockname.len, res->lockname.name, 2939 2938 nodenum); 2940 2939 spin_lock(&res->spinlock); 2941 - dlm_lockres_set_refmap_bit(nodenum, res); 2940 + dlm_lockres_set_refmap_bit(dlm, res, nodenum); 2942 2941 spin_unlock(&res->spinlock); 2943 2942 } 2944 2943 } ··· 3272 3271 * mastery reference here since old_master will briefly have 3273 3272 * a reference after the migration completes */ 3274 3273 spin_lock(&res->spinlock); 3275 - dlm_lockres_set_refmap_bit(old_master, res); 3274 + dlm_lockres_set_refmap_bit(dlm, res, old_master); 3276 3275 spin_unlock(&res->spinlock); 3277 3276 3278 3277 mlog(0, "now time to do a migrate request to other nodes\n");
+82 -82
fs/ocfs2/dlm/dlmrecovery.c
··· 362 362 } 363 363 364 364 365 - int dlm_wait_for_node_death(struct dlm_ctxt *dlm, u8 node, int timeout) 365 + void dlm_wait_for_node_death(struct dlm_ctxt *dlm, u8 node, int timeout) 366 366 { 367 - if (timeout) { 368 - mlog(ML_NOTICE, "%s: waiting %dms for notification of " 369 - "death of node %u\n", dlm->name, timeout, node); 367 + if (dlm_is_node_dead(dlm, node)) 368 + return; 369 + 370 + printk(KERN_NOTICE "o2dlm: Waiting on the death of node %u in " 371 + "domain %s\n", node, dlm->name); 372 + 373 + if (timeout) 370 374 wait_event_timeout(dlm->dlm_reco_thread_wq, 371 - dlm_is_node_dead(dlm, node), 372 - msecs_to_jiffies(timeout)); 373 - } else { 374 - mlog(ML_NOTICE, "%s: waiting indefinitely for notification " 375 - "of death of node %u\n", dlm->name, node); 375 + dlm_is_node_dead(dlm, node), 376 + msecs_to_jiffies(timeout)); 377 + else 376 378 wait_event(dlm->dlm_reco_thread_wq, 377 379 dlm_is_node_dead(dlm, node)); 378 - } 379 - /* for now, return 0 */ 380 - return 0; 381 380 } 382 381 383 - int dlm_wait_for_node_recovery(struct dlm_ctxt *dlm, u8 node, int timeout) 382 + void dlm_wait_for_node_recovery(struct dlm_ctxt *dlm, u8 node, int timeout) 384 383 { 385 - if (timeout) { 386 - mlog(0, "%s: waiting %dms for notification of " 387 - "recovery of node %u\n", dlm->name, timeout, node); 384 + if (dlm_is_node_recovered(dlm, node)) 385 + return; 386 + 387 + printk(KERN_NOTICE "o2dlm: Waiting on the recovery of node %u in " 388 + "domain %s\n", node, dlm->name); 389 + 390 + if (timeout) 388 391 wait_event_timeout(dlm->dlm_reco_thread_wq, 389 - dlm_is_node_recovered(dlm, node), 390 - msecs_to_jiffies(timeout)); 391 - } else { 392 - mlog(0, "%s: waiting indefinitely for notification " 393 - "of recovery of node %u\n", dlm->name, node); 392 + dlm_is_node_recovered(dlm, node), 393 + msecs_to_jiffies(timeout)); 394 + else 394 395 wait_event(dlm->dlm_reco_thread_wq, 395 396 dlm_is_node_recovered(dlm, node)); 396 - } 397 - /* for now, return 0 */ 398 - return 0; 
399 397 } 400 398 401 399 /* callers of the top-level api calls (dlmlock/dlmunlock) should ··· 428 430 { 429 431 spin_lock(&dlm->spinlock); 430 432 BUG_ON(dlm->reco.state & DLM_RECO_STATE_ACTIVE); 433 + printk(KERN_NOTICE "o2dlm: Begin recovery on domain %s for node %u\n", 434 + dlm->name, dlm->reco.dead_node); 431 435 dlm->reco.state |= DLM_RECO_STATE_ACTIVE; 432 436 spin_unlock(&dlm->spinlock); 433 437 } ··· 440 440 BUG_ON(!(dlm->reco.state & DLM_RECO_STATE_ACTIVE)); 441 441 dlm->reco.state &= ~DLM_RECO_STATE_ACTIVE; 442 442 spin_unlock(&dlm->spinlock); 443 + printk(KERN_NOTICE "o2dlm: End recovery on domain %s\n", dlm->name); 443 444 wake_up(&dlm->reco.event); 445 + } 446 + 447 + static void dlm_print_recovery_master(struct dlm_ctxt *dlm) 448 + { 449 + printk(KERN_NOTICE "o2dlm: Node %u (%s) is the Recovery Master for the " 450 + "dead node %u in domain %s\n", dlm->reco.new_master, 451 + (dlm->node_num == dlm->reco.new_master ? "me" : "he"), 452 + dlm->reco.dead_node, dlm->name); 444 453 } 445 454 446 455 static int dlm_do_recovery(struct dlm_ctxt *dlm) ··· 514 505 } 515 506 mlog(0, "another node will master this recovery session.\n"); 516 507 } 517 - mlog(0, "dlm=%s (%d), new_master=%u, this node=%u, dead_node=%u\n", 518 - dlm->name, task_pid_nr(dlm->dlm_reco_thread_task), dlm->reco.new_master, 519 - dlm->node_num, dlm->reco.dead_node); 508 + 509 + dlm_print_recovery_master(dlm); 520 510 521 511 /* it is safe to start everything back up here 522 512 * because all of the dead node's lock resources ··· 526 518 return 0; 527 519 528 520 master_here: 529 - mlog(ML_NOTICE, "(%d) Node %u is the Recovery Master for the Dead Node " 530 - "%u for Domain %s\n", task_pid_nr(dlm->dlm_reco_thread_task), 531 - dlm->node_num, dlm->reco.dead_node, dlm->name); 521 + dlm_print_recovery_master(dlm); 532 522 533 523 status = dlm_remaster_locks(dlm, dlm->reco.dead_node); 534 524 if (status < 0) { 535 525 /* we should never hit this anymore */ 536 - mlog(ML_ERROR, "error %d 
remastering locks for node %u, " 537 - "retrying.\n", status, dlm->reco.dead_node); 526 + mlog(ML_ERROR, "%s: Error %d remastering locks for node %u, " 527 + "retrying.\n", dlm->name, status, dlm->reco.dead_node); 538 528 /* yield a bit to allow any final network messages 539 529 * to get handled on remaining nodes */ 540 530 msleep(100); ··· 573 567 BUG_ON(ndata->state != DLM_RECO_NODE_DATA_INIT); 574 568 ndata->state = DLM_RECO_NODE_DATA_REQUESTING; 575 569 576 - mlog(0, "requesting lock info from node %u\n", 570 + mlog(0, "%s: Requesting lock info from node %u\n", dlm->name, 577 571 ndata->node_num); 578 572 579 573 if (ndata->node_num == dlm->node_num) { ··· 646 640 spin_unlock(&dlm_reco_state_lock); 647 641 } 648 642 649 - mlog(0, "done requesting all lock info\n"); 643 + mlog(0, "%s: Done requesting all lock info\n", dlm->name); 650 644 651 645 /* nodes should be sending reco data now 652 646 * just need to wait */ ··· 808 802 809 803 /* negative status is handled by caller */ 810 804 if (ret < 0) 811 - mlog(ML_ERROR, "Error %d when sending message %u (key " 812 - "0x%x) to node %u\n", ret, DLM_LOCK_REQUEST_MSG, 813 - dlm->key, request_from); 814 - 805 + mlog(ML_ERROR, "%s: Error %d send LOCK_REQUEST to node %u " 806 + "to recover dead node %u\n", dlm->name, ret, 807 + request_from, dead_node); 815 808 // return from here, then 816 809 // sleep until all received or error 817 810 return ret; ··· 961 956 ret = o2net_send_message(DLM_RECO_DATA_DONE_MSG, dlm->key, &done_msg, 962 957 sizeof(done_msg), send_to, &tmpret); 963 958 if (ret < 0) { 964 - mlog(ML_ERROR, "Error %d when sending message %u (key " 965 - "0x%x) to node %u\n", ret, DLM_RECO_DATA_DONE_MSG, 966 - dlm->key, send_to); 959 + mlog(ML_ERROR, "%s: Error %d send RECO_DATA_DONE to node %u " 960 + "to recover dead node %u\n", dlm->name, ret, send_to, 961 + dead_node); 967 962 if (!dlm_is_host_down(ret)) { 968 963 BUG(); 969 964 } ··· 1132 1127 if (ret < 0) { 1133 1128 /* XXX: negative status is not 
handled. 1134 1129 * this will end up killing this node. */ 1135 - mlog(ML_ERROR, "Error %d when sending message %u (key " 1136 - "0x%x) to node %u\n", ret, DLM_MIG_LOCKRES_MSG, 1137 - dlm->key, send_to); 1130 + mlog(ML_ERROR, "%s: res %.*s, Error %d send MIG_LOCKRES to " 1131 + "node %u (%s)\n", dlm->name, mres->lockname_len, 1132 + mres->lockname, ret, send_to, 1133 + (orig_flags & DLM_MRES_MIGRATION ? 1134 + "migration" : "recovery")); 1138 1135 } else { 1139 1136 /* might get an -ENOMEM back here */ 1140 1137 ret = status; ··· 1774 1767 dlm->name, mres->lockname_len, mres->lockname, 1775 1768 from); 1776 1769 spin_lock(&res->spinlock); 1777 - dlm_lockres_set_refmap_bit(from, res); 1770 + dlm_lockres_set_refmap_bit(dlm, res, from); 1778 1771 spin_unlock(&res->spinlock); 1779 1772 added++; 1780 1773 break; ··· 1972 1965 mlog(0, "%s:%.*s: added lock for node %u, " 1973 1966 "setting refmap bit\n", dlm->name, 1974 1967 res->lockname.len, res->lockname.name, ml->node); 1975 - dlm_lockres_set_refmap_bit(ml->node, res); 1968 + dlm_lockres_set_refmap_bit(dlm, res, ml->node); 1976 1969 added++; 1977 1970 } 1978 1971 spin_unlock(&res->spinlock); ··· 2091 2084 2092 2085 list_for_each_entry_safe(res, next, &dlm->reco.resources, recovering) { 2093 2086 if (res->owner == dead_node) { 2087 + mlog(0, "%s: res %.*s, Changing owner from %u to %u\n", 2088 + dlm->name, res->lockname.len, res->lockname.name, 2089 + res->owner, new_master); 2094 2090 list_del_init(&res->recovering); 2095 2091 spin_lock(&res->spinlock); 2096 2092 /* new_master has our reference from ··· 2115 2105 for (i = 0; i < DLM_HASH_BUCKETS; i++) { 2116 2106 bucket = dlm_lockres_hash(dlm, i); 2117 2107 hlist_for_each_entry(res, hash_iter, bucket, hash_node) { 2118 - if (res->state & DLM_LOCK_RES_RECOVERING) { 2119 - if (res->owner == dead_node) { 2120 - mlog(0, "(this=%u) res %.*s owner=%u " 2121 - "was not on recovering list, but " 2122 - "clearing state anyway\n", 2123 - dlm->node_num, res->lockname.len, 2124 
- res->lockname.name, new_master); 2125 - } else if (res->owner == dlm->node_num) { 2126 - mlog(0, "(this=%u) res %.*s owner=%u " 2127 - "was not on recovering list, " 2128 - "owner is THIS node, clearing\n", 2129 - dlm->node_num, res->lockname.len, 2130 - res->lockname.name, new_master); 2131 - } else 2132 - continue; 2108 + if (!(res->state & DLM_LOCK_RES_RECOVERING)) 2109 + continue; 2133 2110 2134 - if (!list_empty(&res->recovering)) { 2135 - mlog(0, "%s:%.*s: lockres was " 2136 - "marked RECOVERING, owner=%u\n", 2137 - dlm->name, res->lockname.len, 2138 - res->lockname.name, res->owner); 2139 - list_del_init(&res->recovering); 2140 - dlm_lockres_put(res); 2141 - } 2142 - spin_lock(&res->spinlock); 2143 - /* new_master has our reference from 2144 - * the lock state sent during recovery */ 2145 - dlm_change_lockres_owner(dlm, res, new_master); 2146 - res->state &= ~DLM_LOCK_RES_RECOVERING; 2147 - if (__dlm_lockres_has_locks(res)) 2148 - __dlm_dirty_lockres(dlm, res); 2149 - spin_unlock(&res->spinlock); 2150 - wake_up(&res->wq); 2111 + if (res->owner != dead_node && 2112 + res->owner != dlm->node_num) 2113 + continue; 2114 + 2115 + if (!list_empty(&res->recovering)) { 2116 + list_del_init(&res->recovering); 2117 + dlm_lockres_put(res); 2151 2118 } 2119 + 2120 + /* new_master has our reference from 2121 + * the lock state sent during recovery */ 2122 + mlog(0, "%s: res %.*s, Changing owner from %u to %u\n", 2123 + dlm->name, res->lockname.len, res->lockname.name, 2124 + res->owner, new_master); 2125 + spin_lock(&res->spinlock); 2126 + dlm_change_lockres_owner(dlm, res, new_master); 2127 + res->state &= ~DLM_LOCK_RES_RECOVERING; 2128 + if (__dlm_lockres_has_locks(res)) 2129 + __dlm_dirty_lockres(dlm, res); 2130 + spin_unlock(&res->spinlock); 2131 + wake_up(&res->wq); 2152 2132 } 2153 2133 } 2154 2134 } ··· 2252 2252 res->lockname.len, res->lockname.name, freed, dead_node); 2253 2253 __dlm_print_one_lock_resource(res); 2254 2254 } 2255 - 
dlm_lockres_clear_refmap_bit(dead_node, res); 2255 + dlm_lockres_clear_refmap_bit(dlm, res, dead_node); 2256 2256 } else if (test_bit(dead_node, res->refmap)) { 2257 2257 mlog(0, "%s:%.*s: dead node %u had a ref, but had " 2258 2258 "no locks and had not purged before dying\n", dlm->name, 2259 2259 res->lockname.len, res->lockname.name, dead_node); 2260 - dlm_lockres_clear_refmap_bit(dead_node, res); 2260 + dlm_lockres_clear_refmap_bit(dlm, res, dead_node); 2261 2261 } 2262 2262 2263 2263 /* do not kick thread yet */ ··· 2324 2324 dlm_revalidate_lvb(dlm, res, dead_node); 2325 2325 if (res->owner == dead_node) { 2326 2326 if (res->state & DLM_LOCK_RES_DROPPING_REF) { 2327 - mlog(ML_NOTICE, "Ignore %.*s for " 2327 + mlog(ML_NOTICE, "%s: res %.*s, Skip " 2328 2328 "recovery as it is being freed\n", 2329 - res->lockname.len, 2329 + dlm->name, res->lockname.len, 2330 2330 res->lockname.name); 2331 2331 } else 2332 2332 dlm_move_lockres_to_recovery_list(dlm,
+8 -8
fs/ocfs2/dlm/dlmthread.c
··· 94 94 { 95 95 int bit; 96 96 97 + assert_spin_locked(&res->spinlock); 98 + 97 99 if (__dlm_lockres_has_locks(res)) 100 + return 0; 101 + 102 + /* Locks are in the process of being created */ 103 + if (res->inflight_locks) 98 104 return 0; 99 105 100 106 if (!list_empty(&res->dirty) || res->state & DLM_LOCK_RES_DIRTY) ··· 109 103 if (res->state & DLM_LOCK_RES_RECOVERING) 110 104 return 0; 111 105 106 + /* Another node has this resource with this node as the master */ 112 107 bit = find_next_bit(res->refmap, O2NM_MAX_NODES, 0); 113 108 if (bit < O2NM_MAX_NODES) 114 109 return 0; 115 110 116 - /* 117 - * since the bit for dlm->node_num is not set, inflight_locks better 118 - * be zero 119 - */ 120 - BUG_ON(res->inflight_locks != 0); 121 111 return 1; 122 112 } 123 113 ··· 187 185 /* clear our bit from the master's refmap, ignore errors */ 188 186 ret = dlm_drop_lockres_ref(dlm, res); 189 187 if (ret < 0) { 190 - mlog(ML_ERROR, "%s: deref %.*s failed %d\n", dlm->name, 191 - res->lockname.len, res->lockname.name, ret); 192 188 if (!dlm_is_host_down(ret)) 193 189 BUG(); 194 190 } ··· 209 209 BUG(); 210 210 } 211 211 212 - __dlm_unhash_lockres(res); 212 + __dlm_unhash_lockres(dlm, res); 213 213 214 214 /* lockres is not in the hash now. drop the flag and wake up 215 215 * any processes waiting in dlm_get_lock_resource. */
+15 -6
fs/ocfs2/dlmglue.c
··· 1692 1692 mlog(0, "inode %llu take PRMODE open lock\n", 1693 1693 (unsigned long long)OCFS2_I(inode)->ip_blkno); 1694 1694 1695 - if (ocfs2_mount_local(osb)) 1695 + if (ocfs2_is_hard_readonly(osb) || ocfs2_mount_local(osb)) 1696 1696 goto out; 1697 1697 1698 1698 lockres = &OCFS2_I(inode)->ip_open_lockres; ··· 1717 1717 mlog(0, "inode %llu try to take %s open lock\n", 1718 1718 (unsigned long long)OCFS2_I(inode)->ip_blkno, 1719 1719 write ? "EXMODE" : "PRMODE"); 1720 + 1721 + if (ocfs2_is_hard_readonly(osb)) { 1722 + if (write) 1723 + status = -EROFS; 1724 + goto out; 1725 + } 1720 1726 1721 1727 if (ocfs2_mount_local(osb)) 1722 1728 goto out; ··· 2304 2298 if (ocfs2_is_hard_readonly(osb)) { 2305 2299 if (ex) 2306 2300 status = -EROFS; 2307 - goto bail; 2301 + goto getbh; 2308 2302 } 2309 2303 2310 2304 if (ocfs2_mount_local(osb)) ··· 2362 2356 mlog_errno(status); 2363 2357 goto bail; 2364 2358 } 2365 - 2359 + getbh: 2366 2360 if (ret_bh) { 2367 2361 status = ocfs2_assign_bh(inode, ret_bh, local_bh); 2368 2362 if (status < 0) { ··· 2634 2628 2635 2629 BUG_ON(!dl); 2636 2630 2637 - if (ocfs2_is_hard_readonly(osb)) 2638 - return -EROFS; 2631 + if (ocfs2_is_hard_readonly(osb)) { 2632 + if (ex) 2633 + return -EROFS; 2634 + return 0; 2635 + } 2639 2636 2640 2637 if (ocfs2_mount_local(osb)) 2641 2638 return 0; ··· 2656 2647 struct ocfs2_dentry_lock *dl = dentry->d_fsdata; 2657 2648 struct ocfs2_super *osb = OCFS2_SB(dentry->d_sb); 2658 2649 2659 - if (!ocfs2_mount_local(osb)) 2650 + if (!ocfs2_is_hard_readonly(osb) && !ocfs2_mount_local(osb)) 2660 2651 ocfs2_cluster_unlock(osb, &dl->dl_lockres, level); 2661 2652 } 2662 2653
+96
fs/ocfs2/extent_map.c
··· 832 832 return ret; 833 833 } 834 834 835 + int ocfs2_seek_data_hole_offset(struct file *file, loff_t *offset, int origin) 836 + { 837 + struct inode *inode = file->f_mapping->host; 838 + int ret; 839 + unsigned int is_last = 0, is_data = 0; 840 + u16 cs_bits = OCFS2_SB(inode->i_sb)->s_clustersize_bits; 841 + u32 cpos, cend, clen, hole_size; 842 + u64 extoff, extlen; 843 + struct buffer_head *di_bh = NULL; 844 + struct ocfs2_extent_rec rec; 845 + 846 + BUG_ON(origin != SEEK_DATA && origin != SEEK_HOLE); 847 + 848 + ret = ocfs2_inode_lock(inode, &di_bh, 0); 849 + if (ret) { 850 + mlog_errno(ret); 851 + goto out; 852 + } 853 + 854 + down_read(&OCFS2_I(inode)->ip_alloc_sem); 855 + 856 + if (*offset >= inode->i_size) { 857 + ret = -ENXIO; 858 + goto out_unlock; 859 + } 860 + 861 + if (OCFS2_I(inode)->ip_dyn_features & OCFS2_INLINE_DATA_FL) { 862 + if (origin == SEEK_HOLE) 863 + *offset = inode->i_size; 864 + goto out_unlock; 865 + } 866 + 867 + clen = 0; 868 + cpos = *offset >> cs_bits; 869 + cend = ocfs2_clusters_for_bytes(inode->i_sb, inode->i_size); 870 + 871 + while (cpos < cend && !is_last) { 872 + ret = ocfs2_get_clusters_nocache(inode, di_bh, cpos, &hole_size, 873 + &rec, &is_last); 874 + if (ret) { 875 + mlog_errno(ret); 876 + goto out_unlock; 877 + } 878 + 879 + extoff = cpos; 880 + extoff <<= cs_bits; 881 + 882 + if (rec.e_blkno == 0ULL) { 883 + clen = hole_size; 884 + is_data = 0; 885 + } else { 886 + clen = le16_to_cpu(rec.e_leaf_clusters) - 887 + (cpos - le32_to_cpu(rec.e_cpos)); 888 + is_data = (rec.e_flags & OCFS2_EXT_UNWRITTEN) ? 
0 : 1; 889 + } 890 + 891 + if ((!is_data && origin == SEEK_HOLE) || 892 + (is_data && origin == SEEK_DATA)) { 893 + if (extoff > *offset) 894 + *offset = extoff; 895 + goto out_unlock; 896 + } 897 + 898 + if (!is_last) 899 + cpos += clen; 900 + } 901 + 902 + if (origin == SEEK_HOLE) { 903 + extoff = cpos; 904 + extoff <<= cs_bits; 905 + extlen = clen; 906 + extlen <<= cs_bits; 907 + 908 + if ((extoff + extlen) > inode->i_size) 909 + extlen = inode->i_size - extoff; 910 + extoff += extlen; 911 + if (extoff > *offset) 912 + *offset = extoff; 913 + goto out_unlock; 914 + } 915 + 916 + ret = -ENXIO; 917 + 918 + out_unlock: 919 + 920 + brelse(di_bh); 921 + 922 + up_read(&OCFS2_I(inode)->ip_alloc_sem); 923 + 924 + ocfs2_inode_unlock(inode, 0); 925 + out: 926 + if (ret && ret != -ENXIO) 927 + ret = -ENXIO; 928 + return ret; 929 + } 930 + 835 931 int ocfs2_read_virt_blocks(struct inode *inode, u64 v_block, int nr, 836 932 struct buffer_head *bhs[], int flags, 837 933 int (*validate)(struct super_block *sb,
+2
fs/ocfs2/extent_map.h
··· 53 53 int ocfs2_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo, 54 54 u64 map_start, u64 map_len); 55 55 56 + int ocfs2_seek_data_hole_offset(struct file *file, loff_t *offset, int origin); 57 + 56 58 int ocfs2_xattr_get_clusters(struct inode *inode, u32 v_cluster, 57 59 u32 *p_cluster, u32 *num_clusters, 58 60 struct ocfs2_extent_list *el,
+94 -2
fs/ocfs2/file.c
··· 1950 1950 if (ret < 0) 1951 1951 mlog_errno(ret); 1952 1952 1953 + if (file->f_flags & O_SYNC) 1954 + handle->h_sync = 1; 1955 + 1953 1956 ocfs2_commit_trans(osb, handle); 1954 1957 1955 1958 out_inode_unlock: ··· 2053 2050 } 2054 2051 out: 2055 2052 return ret; 2053 + } 2054 + 2055 + static void ocfs2_aiodio_wait(struct inode *inode) 2056 + { 2057 + wait_queue_head_t *wq = ocfs2_ioend_wq(inode); 2058 + 2059 + wait_event(*wq, (atomic_read(&OCFS2_I(inode)->ip_unaligned_aio) == 0)); 2060 + } 2061 + 2062 + static int ocfs2_is_io_unaligned(struct inode *inode, size_t count, loff_t pos) 2063 + { 2064 + int blockmask = inode->i_sb->s_blocksize - 1; 2065 + loff_t final_size = pos + count; 2066 + 2067 + if ((pos & blockmask) || (final_size & blockmask)) 2068 + return 1; 2069 + return 0; 2056 2070 } 2057 2071 2058 2072 static int ocfs2_prepare_inode_for_refcount(struct inode *inode, ··· 2250 2230 struct ocfs2_super *osb = OCFS2_SB(inode->i_sb); 2251 2231 int full_coherency = !(osb->s_mount_opt & 2252 2232 OCFS2_MOUNT_COHERENCY_BUFFERED); 2233 + int unaligned_dio = 0; 2253 2234 2254 2235 trace_ocfs2_file_aio_write(inode, file, file->f_path.dentry, 2255 2236 (unsigned long long)OCFS2_I(inode)->ip_blkno, ··· 2318 2297 goto out; 2319 2298 } 2320 2299 2300 + if (direct_io && !is_sync_kiocb(iocb)) 2301 + unaligned_dio = ocfs2_is_io_unaligned(inode, iocb->ki_left, 2302 + *ppos); 2303 + 2321 2304 /* 2322 2305 * We can't complete the direct I/O as requested, fall back to 2323 2306 * buffered I/O. ··· 2334 2309 2335 2310 direct_io = 0; 2336 2311 goto relock; 2312 + } 2313 + 2314 + if (unaligned_dio) { 2315 + /* 2316 + * Wait on previous unaligned aio to complete before 2317 + * proceeding. 
2318 + */ 2319 + ocfs2_aiodio_wait(inode); 2320 + 2321 + /* Mark the iocb as needing a decrement in ocfs2_dio_end_io */ 2322 + atomic_inc(&OCFS2_I(inode)->ip_unaligned_aio); 2323 + ocfs2_iocb_set_unaligned_aio(iocb); 2337 2324 } 2338 2325 2339 2326 /* ··· 2419 2382 if ((ret == -EIOCBQUEUED) || (!ocfs2_iocb_is_rw_locked(iocb))) { 2420 2383 rw_level = -1; 2421 2384 have_alloc_sem = 0; 2385 + unaligned_dio = 0; 2422 2386 } 2387 + 2388 + if (unaligned_dio) 2389 + atomic_dec(&OCFS2_I(inode)->ip_unaligned_aio); 2423 2390 2424 2391 out: 2425 2392 if (rw_level != -1) ··· 2632 2591 return ret; 2633 2592 } 2634 2593 2594 + /* Refer generic_file_llseek_unlocked() */ 2595 + static loff_t ocfs2_file_llseek(struct file *file, loff_t offset, int origin) 2596 + { 2597 + struct inode *inode = file->f_mapping->host; 2598 + int ret = 0; 2599 + 2600 + mutex_lock(&inode->i_mutex); 2601 + 2602 + switch (origin) { 2603 + case SEEK_SET: 2604 + break; 2605 + case SEEK_END: 2606 + offset += inode->i_size; 2607 + break; 2608 + case SEEK_CUR: 2609 + if (offset == 0) { 2610 + offset = file->f_pos; 2611 + goto out; 2612 + } 2613 + offset += file->f_pos; 2614 + break; 2615 + case SEEK_DATA: 2616 + case SEEK_HOLE: 2617 + ret = ocfs2_seek_data_hole_offset(file, &offset, origin); 2618 + if (ret) 2619 + goto out; 2620 + break; 2621 + default: 2622 + ret = -EINVAL; 2623 + goto out; 2624 + } 2625 + 2626 + if (offset < 0 && !(file->f_mode & FMODE_UNSIGNED_OFFSET)) 2627 + ret = -EINVAL; 2628 + if (!ret && offset > inode->i_sb->s_maxbytes) 2629 + ret = -EINVAL; 2630 + if (ret) 2631 + goto out; 2632 + 2633 + if (offset != file->f_pos) { 2634 + file->f_pos = offset; 2635 + file->f_version = 0; 2636 + } 2637 + 2638 + out: 2639 + mutex_unlock(&inode->i_mutex); 2640 + if (ret) 2641 + return ret; 2642 + return offset; 2643 + } 2644 + 2635 2645 const struct inode_operations ocfs2_file_iops = { 2636 2646 .setattr = ocfs2_setattr, 2637 2647 .getattr = ocfs2_getattr, ··· 2707 2615 * ocfs2_fops_no_plocks and 
ocfs2_dops_no_plocks! 2708 2616 */ 2709 2617 const struct file_operations ocfs2_fops = { 2710 - .llseek = generic_file_llseek, 2618 + .llseek = ocfs2_file_llseek, 2711 2619 .read = do_sync_read, 2712 2620 .write = do_sync_write, 2713 2621 .mmap = ocfs2_mmap, ··· 2755 2663 * the cluster. 2756 2664 */ 2757 2665 const struct file_operations ocfs2_fops_no_plocks = { 2758 - .llseek = generic_file_llseek, 2666 + .llseek = ocfs2_file_llseek, 2759 2667 .read = do_sync_read, 2760 2668 .write = do_sync_write, 2761 2669 .mmap = ocfs2_mmap,
+1 -1
fs/ocfs2/inode.c
··· 951 951 trace_ocfs2_cleanup_delete_inode( 952 952 (unsigned long long)OCFS2_I(inode)->ip_blkno, sync_data); 953 953 if (sync_data) 954 - write_inode_now(inode, 1); 954 + filemap_write_and_wait(inode->i_mapping); 955 955 truncate_inode_pages(&inode->i_data, 0); 956 956 } 957 957
+3
fs/ocfs2/inode.h
··· 43 43 /* protects extended attribute changes on this inode */ 44 44 struct rw_semaphore ip_xattr_sem; 45 45 46 + /* Number of outstanding AIO's which are not page aligned */ 47 + atomic_t ip_unaligned_aio; 48 + 46 49 /* These fields are protected by ip_lock */ 47 50 spinlock_t ip_lock; 48 51 u32 ip_open_count;
+6 -5
fs/ocfs2/ioctl.c
··· 122 122 if ((oldflags & OCFS2_IMMUTABLE_FL) || ((flags ^ oldflags) & 123 123 (OCFS2_APPEND_FL | OCFS2_IMMUTABLE_FL))) { 124 124 if (!capable(CAP_LINUX_IMMUTABLE)) 125 - goto bail_unlock; 125 + goto bail_commit; 126 126 } 127 127 128 128 ocfs2_inode->ip_attr = flags; ··· 132 132 if (status < 0) 133 133 mlog_errno(status); 134 134 135 + bail_commit: 135 136 ocfs2_commit_trans(osb, handle); 136 137 bail_unlock: 137 138 ocfs2_inode_unlock(inode, 1); ··· 382 381 if (!oifi) { 383 382 status = -ENOMEM; 384 383 mlog_errno(status); 385 - goto bail; 384 + goto out_err; 386 385 } 387 386 388 387 if (o2info_from_user(*oifi, req)) ··· 432 431 o2info_set_request_error(&oifi->ifi_req, req); 433 432 434 433 kfree(oifi); 435 - 434 + out_err: 436 435 return status; 437 436 } 438 437 ··· 667 666 if (!oiff) { 668 667 status = -ENOMEM; 669 668 mlog_errno(status); 670 - goto bail; 669 + goto out_err; 671 670 } 672 671 673 672 if (o2info_from_user(*oiff, req)) ··· 717 716 o2info_set_request_error(&oiff->iff_req, req); 718 717 719 718 kfree(oiff); 720 - 719 + out_err: 721 720 return status; 722 721 } 723 722
+20 -3
fs/ocfs2/journal.c
··· 1544 1544 /* we need to run complete recovery for offline orphan slots */ 1545 1545 ocfs2_replay_map_set_state(osb, REPLAY_NEEDED); 1546 1546 1547 - mlog(ML_NOTICE, "Recovering node %d from slot %d on device (%u,%u)\n", 1548 - node_num, slot_num, 1549 - MAJOR(osb->sb->s_dev), MINOR(osb->sb->s_dev)); 1547 + printk(KERN_NOTICE "ocfs2: Begin replay journal (node %d, slot %d) on "\ 1548 + "device (%u,%u)\n", node_num, slot_num, MAJOR(osb->sb->s_dev), 1549 + MINOR(osb->sb->s_dev)); 1550 1550 1551 1551 OCFS2_I(inode)->ip_clusters = le32_to_cpu(fe->i_clusters); 1552 1552 ··· 1601 1601 1602 1602 jbd2_journal_destroy(journal); 1603 1603 1604 + printk(KERN_NOTICE "ocfs2: End replay journal (node %d, slot %d) on "\ 1605 + "device (%u,%u)\n", node_num, slot_num, MAJOR(osb->sb->s_dev), 1606 + MINOR(osb->sb->s_dev)); 1604 1607 done: 1605 1608 /* drop the lock on this nodes journal */ 1606 1609 if (got_lock) ··· 1810 1807 * ocfs2_queue_orphan_scan calls ocfs2_queue_recovery_completion for 1811 1808 * every slot, queuing a recovery of the slot on the ocfs2_wq thread. This 1812 1809 * is done to catch any orphans that are left over in orphan directories. 1810 + * 1811 + * It scans all slots, even ones that are in use. It does so to handle the 1812 + * case described below: 1813 + * 1814 + * Node 1 has an inode it was using. The dentry went away due to memory 1815 + * pressure. Node 1 closes the inode, but it's on the free list. The node 1816 + * has the open lock. 1817 + * Node 2 unlinks the inode. It grabs the dentry lock to notify others, 1818 + * but node 1 has no dentry and doesn't get the message. It trylocks the 1819 + * open lock, sees that another node has a PR, and does nothing. 1820 + * Later node 2 runs its orphan dir. It igets the inode, trylocks the 1821 + * open lock, sees the PR still, and does nothing. 1822 + * Basically, we have to trigger an orphan iput on node 1. The only way 1823 + * for this to happen is if node 1 runs node 2's orphan dir. 
1813 1824 * 1814 1825 * ocfs2_queue_orphan_scan gets called every ORPHAN_SCAN_SCHEDULE_TIMEOUT 1815 1826 * seconds. It gets an EX lock on os_lockres and checks sequence number
+3 -2
fs/ocfs2/journal.h
··· 441 441 #define OCFS2_SIMPLE_DIR_EXTEND_CREDITS (2) 442 442 443 443 /* file update (nlink, etc) + directory mtime/ctime + dir entry block + quota 444 - * update on dir + index leaf + dx root update for free list */ 444 + * update on dir + index leaf + dx root update for free list + 445 + * previous dirblock update in the free list */ 445 446 static inline int ocfs2_link_credits(struct super_block *sb) 446 447 { 447 - return 2*OCFS2_INODE_UPDATE_CREDITS + 3 + 448 + return 2*OCFS2_INODE_UPDATE_CREDITS + 4 + 448 449 ocfs2_quota_trans_credits(sb); 449 450 } 450 451
+24 -29
fs/ocfs2/mmap.c
··· 61 61 static int __ocfs2_page_mkwrite(struct file *file, struct buffer_head *di_bh, 62 62 struct page *page) 63 63 { 64 - int ret; 64 + int ret = VM_FAULT_NOPAGE; 65 65 struct inode *inode = file->f_path.dentry->d_inode; 66 66 struct address_space *mapping = inode->i_mapping; 67 67 loff_t pos = page_offset(page); ··· 71 71 void *fsdata; 72 72 loff_t size = i_size_read(inode); 73 73 74 - /* 75 - * Another node might have truncated while we were waiting on 76 - * cluster locks. 77 - * We don't check size == 0 before the shift. This is borrowed 78 - * from do_generic_file_read. 79 - */ 80 74 last_index = (size - 1) >> PAGE_CACHE_SHIFT; 81 - if (unlikely(!size || page->index > last_index)) { 82 - ret = -EINVAL; 83 - goto out; 84 - } 85 75 86 76 /* 87 - * The i_size check above doesn't catch the case where nodes 88 - * truncated and then re-extended the file. We'll re-check the 89 - * page mapping after taking the page lock inside of 90 - * ocfs2_write_begin_nolock(). 77 + * There are cases that lead to the page no longer belongs to the 78 + * mapping. 79 + * 1) pagecache truncates locally due to memory pressure. 80 + * 2) pagecache truncates when another is taking EX lock against 81 + * inode lock. see ocfs2_data_convert_worker. 82 + * 83 + * The i_size check doesn't catch the case where nodes truncated and 84 + * then re-extended the file. We'll re-check the page mapping after 85 + * taking the page lock inside of ocfs2_write_begin_nolock(). 86 + * 87 + * Let VM retry with these cases. 91 88 */ 92 - if (!PageUptodate(page) || page->mapping != inode->i_mapping) { 93 - /* 94 - * the page has been umapped in ocfs2_data_downconvert_worker. 95 - * So return 0 here and let VFS retry. 
96 - */ 97 - ret = 0; 89 + if ((page->mapping != inode->i_mapping) || 90 + (!PageUptodate(page)) || 91 + (page_offset(page) >= size)) 98 92 goto out; 99 - } 100 93 101 94 /* 102 95 * Call ocfs2_write_begin() and ocfs2_write_end() to take ··· 109 116 if (ret) { 110 117 if (ret != -ENOSPC) 111 118 mlog_errno(ret); 119 + if (ret == -ENOMEM) 120 + ret = VM_FAULT_OOM; 121 + else 122 + ret = VM_FAULT_SIGBUS; 112 123 goto out; 113 124 } 114 125 115 - ret = ocfs2_write_end_nolock(mapping, pos, len, len, locked_page, 116 - fsdata); 117 - if (ret < 0) { 118 - mlog_errno(ret); 126 + if (!locked_page) { 127 + ret = VM_FAULT_NOPAGE; 119 128 goto out; 120 129 } 130 + ret = ocfs2_write_end_nolock(mapping, pos, len, len, locked_page, 131 + fsdata); 121 132 BUG_ON(ret != len); 122 - ret = 0; 133 + ret = VM_FAULT_LOCKED; 123 134 out: 124 135 return ret; 125 136 } ··· 165 168 166 169 out: 167 170 ocfs2_unblock_signals(&oldset); 168 - if (ret) 169 - ret = VM_FAULT_SIGBUS; 170 171 return ret; 171 172 } 172 173
+1 -1
fs/ocfs2/move_extents.c
··· 745 745 */ 746 746 ocfs2_probe_alloc_group(inode, gd_bh, &goal_bit, len, move_max_hop, 747 747 new_phys_cpos); 748 - if (!new_phys_cpos) { 748 + if (!*new_phys_cpos) { 749 749 ret = -ENOSPC; 750 750 goto out_commit; 751 751 }
+49 -2
fs/ocfs2/ocfs2.h
··· 836 836 837 837 static inline void _ocfs2_set_bit(unsigned int bit, unsigned long *bitmap) 838 838 { 839 - __test_and_set_bit_le(bit, bitmap); 839 + __set_bit_le(bit, bitmap); 840 840 } 841 841 #define ocfs2_set_bit(bit, addr) _ocfs2_set_bit((bit), (unsigned long *)(addr)) 842 842 843 843 static inline void _ocfs2_clear_bit(unsigned int bit, unsigned long *bitmap) 844 844 { 845 - __test_and_clear_bit_le(bit, bitmap); 845 + __clear_bit_le(bit, bitmap); 846 846 } 847 847 #define ocfs2_clear_bit(bit, addr) _ocfs2_clear_bit((bit), (unsigned long *)(addr)) 848 848 849 849 #define ocfs2_test_bit test_bit_le 850 850 #define ocfs2_find_next_zero_bit find_next_zero_bit_le 851 851 #define ocfs2_find_next_bit find_next_bit_le 852 + 853 + static inline void *correct_addr_and_bit_unaligned(int *bit, void *addr) 854 + { 855 + #if BITS_PER_LONG == 64 856 + *bit += ((unsigned long) addr & 7UL) << 3; 857 + addr = (void *) ((unsigned long) addr & ~7UL); 858 + #elif BITS_PER_LONG == 32 859 + *bit += ((unsigned long) addr & 3UL) << 3; 860 + addr = (void *) ((unsigned long) addr & ~3UL); 861 + #else 862 + #error "how many bits you are?!" 
863 + #endif 864 + return addr; 865 + } 866 + 867 + static inline void ocfs2_set_bit_unaligned(int bit, void *bitmap) 868 + { 869 + bitmap = correct_addr_and_bit_unaligned(&bit, bitmap); 870 + ocfs2_set_bit(bit, bitmap); 871 + } 872 + 873 + static inline void ocfs2_clear_bit_unaligned(int bit, void *bitmap) 874 + { 875 + bitmap = correct_addr_and_bit_unaligned(&bit, bitmap); 876 + ocfs2_clear_bit(bit, bitmap); 877 + } 878 + 879 + static inline int ocfs2_test_bit_unaligned(int bit, void *bitmap) 880 + { 881 + bitmap = correct_addr_and_bit_unaligned(&bit, bitmap); 882 + return ocfs2_test_bit(bit, bitmap); 883 + } 884 + 885 + static inline int ocfs2_find_next_zero_bit_unaligned(void *bitmap, int max, 886 + int start) 887 + { 888 + int fix = 0, ret, tmpmax; 889 + bitmap = correct_addr_and_bit_unaligned(&fix, bitmap); 890 + tmpmax = max + fix; 891 + start += fix; 892 + 893 + ret = ocfs2_find_next_zero_bit(bitmap, tmpmax, start) - fix; 894 + if (ret > max) 895 + return max; 896 + return ret; 897 + } 898 + 852 899 #endif /* OCFS2_H */ 853 900
+14 -9
fs/ocfs2/quota_local.c
··· 404 404 int status = 0; 405 405 struct ocfs2_quota_recovery *rec; 406 406 407 - mlog(ML_NOTICE, "Beginning quota recovery in slot %u\n", slot_num); 407 + printk(KERN_NOTICE "ocfs2: Beginning quota recovery on device (%s) for " 408 + "slot %u\n", osb->dev_str, slot_num); 409 + 408 410 rec = ocfs2_alloc_quota_recovery(); 409 411 if (!rec) 410 412 return ERR_PTR(-ENOMEM); ··· 551 549 goto out_commit; 552 550 } 553 551 lock_buffer(qbh); 554 - WARN_ON(!ocfs2_test_bit(bit, dchunk->dqc_bitmap)); 555 - ocfs2_clear_bit(bit, dchunk->dqc_bitmap); 552 + WARN_ON(!ocfs2_test_bit_unaligned(bit, dchunk->dqc_bitmap)); 553 + ocfs2_clear_bit_unaligned(bit, dchunk->dqc_bitmap); 556 554 le32_add_cpu(&dchunk->dqc_free, 1); 557 555 unlock_buffer(qbh); 558 556 ocfs2_journal_dirty(handle, qbh); ··· 598 596 struct inode *lqinode; 599 597 unsigned int flags; 600 598 601 - mlog(ML_NOTICE, "Finishing quota recovery in slot %u\n", slot_num); 599 + printk(KERN_NOTICE "ocfs2: Finishing quota recovery on device (%s) for " 600 + "slot %u\n", osb->dev_str, slot_num); 601 + 602 602 mutex_lock(&sb_dqopt(sb)->dqonoff_mutex); 603 603 for (type = 0; type < MAXQUOTAS; type++) { 604 604 if (list_empty(&(rec->r_list[type]))) ··· 616 612 /* Someone else is holding the lock? Then he must be 617 613 * doing the recovery. Just skip the file... */ 618 614 if (status == -EAGAIN) { 619 - mlog(ML_NOTICE, "skipping quota recovery for slot %d " 620 - "because quota file is locked.\n", slot_num); 615 + printk(KERN_NOTICE "ocfs2: Skipping quota recovery on " 616 + "device (%s) for slot %d because quota file is " 617 + "locked.\n", osb->dev_str, slot_num); 621 618 status = 0; 622 619 goto out_put; 623 620 } else if (status < 0) { ··· 949 944 * ol_quota_entries_per_block(sb); 950 945 } 951 946 952 - found = ocfs2_find_next_zero_bit(dchunk->dqc_bitmap, len, 0); 947 + found = ocfs2_find_next_zero_bit_unaligned(dchunk->dqc_bitmap, len, 0); 953 948 /* We failed? 
*/ 954 949 if (found == len) { 955 950 mlog(ML_ERROR, "Did not find empty entry in chunk %d with %u" ··· 1213 1208 struct ocfs2_local_disk_chunk *dchunk; 1214 1209 1215 1210 dchunk = (struct ocfs2_local_disk_chunk *)bh->b_data; 1216 - ocfs2_set_bit(*offset, dchunk->dqc_bitmap); 1211 + ocfs2_set_bit_unaligned(*offset, dchunk->dqc_bitmap); 1217 1212 le32_add_cpu(&dchunk->dqc_free, -1); 1218 1213 } 1219 1214 ··· 1294 1289 (od->dq_chunk->qc_headerbh->b_data); 1295 1290 /* Mark structure as freed */ 1296 1291 lock_buffer(od->dq_chunk->qc_headerbh); 1297 - ocfs2_clear_bit(offset, dchunk->dqc_bitmap); 1292 + ocfs2_clear_bit_unaligned(offset, dchunk->dqc_bitmap); 1298 1293 le32_add_cpu(&dchunk->dqc_free, 1); 1299 1294 unlock_buffer(od->dq_chunk->qc_headerbh); 1300 1295 ocfs2_journal_dirty(handle, od->dq_chunk->qc_headerbh);
+2 -2
fs/ocfs2/slot_map.c
··· 493 493 goto bail; 494 494 } 495 495 } else 496 - mlog(ML_NOTICE, "slot %d is already allocated to this node!\n", 497 - slot); 496 + printk(KERN_INFO "ocfs2: Slot %d on device (%s) was already " 497 + "allocated to this node!\n", slot, osb->dev_str); 498 498 499 499 ocfs2_set_slot(si, slot, osb->node_num); 500 500 osb->slot_num = slot;
+63 -8
fs/ocfs2/stack_o2cb.c
··· 28 28 #include "cluster/masklog.h" 29 29 #include "cluster/nodemanager.h" 30 30 #include "cluster/heartbeat.h" 31 + #include "cluster/tcp.h" 31 32 32 33 #include "stackglue.h" 33 34 ··· 257 256 } 258 257 259 258 /* 259 + * Check if this node is heartbeating and is connected to all other 260 + * heartbeating nodes. 261 + */ 262 + static int o2cb_cluster_check(void) 263 + { 264 + u8 node_num; 265 + int i; 266 + unsigned long hbmap[BITS_TO_LONGS(O2NM_MAX_NODES)]; 267 + unsigned long netmap[BITS_TO_LONGS(O2NM_MAX_NODES)]; 268 + 269 + node_num = o2nm_this_node(); 270 + if (node_num == O2NM_MAX_NODES) { 271 + printk(KERN_ERR "o2cb: This node has not been configured.\n"); 272 + return -EINVAL; 273 + } 274 + 275 + /* 276 + * o2dlm expects o2net sockets to be created. If not, then 277 + * dlm_join_domain() fails with a stack of errors which are both cryptic 278 + * and incomplete. The idea here is to detect upfront whether we have 279 + * managed to connect to all nodes or not. If not, then list the nodes 280 + * to allow the user to check the configuration (incorrect IP, firewall, 281 + * etc.) Yes, this is racy. But its not the end of the world. 282 + */ 283 + #define O2CB_MAP_STABILIZE_COUNT 60 284 + for (i = 0; i < O2CB_MAP_STABILIZE_COUNT; ++i) { 285 + o2hb_fill_node_map(hbmap, sizeof(hbmap)); 286 + if (!test_bit(node_num, hbmap)) { 287 + printk(KERN_ERR "o2cb: %s heartbeat has not been " 288 + "started.\n", (o2hb_global_heartbeat_active() ? 
289 + "Global" : "Local")); 290 + return -EINVAL; 291 + } 292 + o2net_fill_node_map(netmap, sizeof(netmap)); 293 + /* Force set the current node to allow easy compare */ 294 + set_bit(node_num, netmap); 295 + if (!memcmp(hbmap, netmap, sizeof(hbmap))) 296 + return 0; 297 + if (i < O2CB_MAP_STABILIZE_COUNT) 298 + msleep(1000); 299 + } 300 + 301 + printk(KERN_ERR "o2cb: This node could not connect to nodes:"); 302 + i = -1; 303 + while ((i = find_next_bit(hbmap, O2NM_MAX_NODES, 304 + i + 1)) < O2NM_MAX_NODES) { 305 + if (!test_bit(i, netmap)) 306 + printk(" %u", i); 307 + } 308 + printk(".\n"); 309 + 310 + return -ENOTCONN; 311 + } 312 + 313 + /* 260 314 * Called from the dlm when it's about to evict a node. This is how the 261 315 * classic stack signals node death. 262 316 */ ··· 319 263 { 320 264 struct ocfs2_cluster_connection *conn = data; 321 265 322 - mlog(ML_NOTICE, "o2dlm has evicted node %d from group %.*s\n", 323 - node_num, conn->cc_namelen, conn->cc_name); 266 + printk(KERN_NOTICE "o2cb: o2dlm has evicted node %d from domain %.*s\n", 267 + node_num, conn->cc_namelen, conn->cc_name); 324 268 325 269 conn->cc_recovery_handler(node_num, conn->cc_recovery_data); 326 270 } ··· 336 280 BUG_ON(conn == NULL); 337 281 BUG_ON(conn->cc_proto == NULL); 338 282 339 - /* for now we only have one cluster/node, make sure we see it 340 - * in the heartbeat universe */ 341 - if (!o2hb_check_local_node_heartbeating()) { 342 - if (o2hb_global_heartbeat_active()) 343 - mlog(ML_ERROR, "Global heartbeat not started\n"); 344 - rc = -EINVAL; 283 + /* Ensure cluster stack is up and all nodes are connected */ 284 + rc = o2cb_cluster_check(); 285 + if (rc) { 286 + printk(KERN_ERR "o2cb: Cluster check failed. Fix errors " 287 + "before retrying.\n"); 345 288 goto out; 346 289 } 347 290
+16 -9
fs/ocfs2/super.c
··· 54 54 #include "ocfs1_fs_compat.h" 55 55 56 56 #include "alloc.h" 57 + #include "aops.h" 57 58 #include "blockcheck.h" 58 59 #include "dlmglue.h" 59 60 #include "export.h" ··· 1108 1107 1109 1108 ocfs2_set_ro_flag(osb, 1); 1110 1109 1111 - printk(KERN_NOTICE "Readonly device detected. No cluster " 1112 - "services will be utilized for this mount. Recovery " 1113 - "will be skipped.\n"); 1110 + printk(KERN_NOTICE "ocfs2: Readonly device (%s) detected. " 1111 + "Cluster services will not be used for this mount. " 1112 + "Recovery will be skipped.\n", osb->dev_str); 1114 1113 } 1115 1114 1116 1115 if (!ocfs2_is_hard_readonly(osb)) { ··· 1617 1616 return 0; 1618 1617 } 1619 1618 1619 + wait_queue_head_t ocfs2__ioend_wq[OCFS2_IOEND_WQ_HASH_SZ]; 1620 + 1620 1621 static int __init ocfs2_init(void) 1621 1622 { 1622 - int status; 1623 + int status, i; 1623 1624 1624 1625 ocfs2_print_version(); 1626 + 1627 + for (i = 0; i < OCFS2_IOEND_WQ_HASH_SZ; i++) 1628 + init_waitqueue_head(&ocfs2__ioend_wq[i]); 1625 1629 1626 1630 status = init_ocfs2_uptodate_cache(); 1627 1631 if (status < 0) { ··· 1766 1760 ocfs2_extent_map_init(&oi->vfs_inode); 1767 1761 INIT_LIST_HEAD(&oi->ip_io_markers); 1768 1762 oi->ip_dir_start_lookup = 0; 1769 - 1763 + atomic_set(&oi->ip_unaligned_aio, 0); 1770 1764 init_rwsem(&oi->ip_alloc_sem); 1771 1765 init_rwsem(&oi->ip_xattr_sem); 1772 1766 mutex_init(&oi->ip_io_mutex); ··· 1980 1974 * If we failed before we got a uuid_str yet, we can't stop 1981 1975 * heartbeat. Otherwise, do it. 
1982 1976 */ 1983 - if (!mnt_err && !ocfs2_mount_local(osb) && osb->uuid_str) 1977 + if (!mnt_err && !ocfs2_mount_local(osb) && osb->uuid_str && 1978 + !ocfs2_is_hard_readonly(osb)) 1984 1979 hangup_needed = 1; 1985 1980 1986 1981 if (osb->cconn) ··· 2360 2353 mlog_errno(status); 2361 2354 goto bail; 2362 2355 } 2363 - cleancache_init_shared_fs((char *)&uuid_net_key, sb); 2356 + cleancache_init_shared_fs((char *)&di->id2.i_super.s_uuid, sb); 2364 2357 2365 2358 bail: 2366 2359 return status; ··· 2469 2462 goto finally; 2470 2463 } 2471 2464 } else { 2472 - mlog(ML_NOTICE, "File system was not unmounted cleanly, " 2473 - "recovering volume.\n"); 2465 + printk(KERN_NOTICE "ocfs2: File system on device (%s) was not " 2466 + "unmounted cleanly, recovering it.\n", osb->dev_str); 2474 2467 } 2475 2468 2476 2469 local = ocfs2_mount_local(osb);
+6 -4
fs/ocfs2/xattr.c
··· 2376 2376 } 2377 2377 2378 2378 ret = ocfs2_xattr_value_truncate(inode, vb, 0, &ctxt); 2379 - if (ret < 0) { 2380 - mlog_errno(ret); 2381 - break; 2382 - } 2383 2379 2384 2380 ocfs2_commit_trans(osb, ctxt.handle); 2385 2381 if (ctxt.meta_ac) { 2386 2382 ocfs2_free_alloc_context(ctxt.meta_ac); 2387 2383 ctxt.meta_ac = NULL; 2388 2384 } 2385 + 2386 + if (ret < 0) { 2387 + mlog_errno(ret); 2388 + break; 2389 + } 2390 + 2389 2391 } 2390 2392 2391 2393 if (ctxt.meta_ac)
+4 -3
fs/proc/meminfo.c
··· 131 131 K(i.freeswap), 132 132 K(global_page_state(NR_FILE_DIRTY)), 133 133 K(global_page_state(NR_WRITEBACK)), 134 - K(global_page_state(NR_ANON_PAGES) 135 134 #ifdef CONFIG_TRANSPARENT_HUGEPAGE 135 + K(global_page_state(NR_ANON_PAGES) 136 136 + global_page_state(NR_ANON_TRANSPARENT_HUGEPAGES) * 137 - HPAGE_PMD_NR 137 + HPAGE_PMD_NR), 138 + #else 139 + K(global_page_state(NR_ANON_PAGES)), 138 140 #endif 139 - ), 140 141 K(global_page_state(NR_FILE_MAPPED)), 141 142 K(global_page_state(NR_SHMEM)), 142 143 K(global_page_state(NR_SLAB_RECLAIMABLE) +
+3 -5
fs/proc/root.c
··· 91 91 92 92 void __init proc_root_init(void) 93 93 { 94 - struct vfsmount *mnt; 95 94 int err; 96 95 97 96 proc_init_inodecache(); 98 97 err = register_filesystem(&proc_fs_type); 99 98 if (err) 100 99 return; 101 - mnt = kern_mount_data(&proc_fs_type, &init_pid_ns); 102 - if (IS_ERR(mnt)) { 100 + err = pid_ns_prepare_proc(&init_pid_ns); 101 + if (err) { 103 102 unregister_filesystem(&proc_fs_type); 104 103 return; 105 104 } 106 105 107 - init_pid_ns.proc_mnt = mnt; 108 106 proc_symlink("mounts", NULL, "self/mounts"); 109 107 110 108 proc_net_init(); ··· 207 209 208 210 void pid_ns_release_proc(struct pid_namespace *ns) 209 211 { 210 - mntput(ns->proc_mnt); 212 + kern_unmount(ns->proc_mnt); 211 213 }
+2 -2
fs/proc/stat.c
··· 32 32 idle = kstat_cpu(cpu).cpustat.idle; 33 33 idle = cputime64_add(idle, arch_idle_time(cpu)); 34 34 } else 35 - idle = usecs_to_cputime(idle_time); 35 + idle = nsecs_to_jiffies64(1000 * idle_time); 36 36 37 37 return idle; 38 38 } ··· 46 46 /* !NO_HZ so we can rely on cpustat.iowait */ 47 47 iowait = kstat_cpu(cpu).cpustat.iowait; 48 48 else 49 - iowait = usecs_to_cputime(iowait_time); 49 + iowait = nsecs_to_jiffies64(1000 * iowait_time); 50 50 51 51 return iowait; 52 52 }
+8 -5
fs/pstore/platform.c
··· 167 167 } 168 168 169 169 psinfo = psi; 170 + mutex_init(&psinfo->read_mutex); 170 171 spin_unlock(&pstore_lock); 171 172 172 173 if (owner && !try_module_get(owner)) { ··· 196 195 void pstore_get_records(int quiet) 197 196 { 198 197 struct pstore_info *psi = psinfo; 198 + char *buf = NULL; 199 199 ssize_t size; 200 200 u64 id; 201 201 enum pstore_type_id type; 202 202 struct timespec time; 203 203 int failed = 0, rc; 204 - unsigned long flags; 205 204 206 205 if (!psi) 207 206 return; 208 207 209 - spin_lock_irqsave(&psinfo->buf_lock, flags); 208 + mutex_lock(&psi->read_mutex); 210 209 rc = psi->open(psi); 211 210 if (rc) 212 211 goto out; 213 212 214 - while ((size = psi->read(&id, &type, &time, psi)) > 0) { 215 - rc = pstore_mkfile(type, psi->name, id, psi->buf, (size_t)size, 213 + while ((size = psi->read(&id, &type, &time, &buf, psi)) > 0) { 214 + rc = pstore_mkfile(type, psi->name, id, buf, (size_t)size, 216 215 time, psi); 216 + kfree(buf); 217 + buf = NULL; 217 218 if (rc && (rc != -EEXIST || !quiet)) 218 219 failed++; 219 220 } 220 221 psi->close(psi); 221 222 out: 222 - spin_unlock_irqrestore(&psinfo->buf_lock, flags); 223 + mutex_unlock(&psi->read_mutex); 223 224 224 225 if (failed) 225 226 printk(KERN_WARNING "pstore: failed to load %d record(s) from '%s'\n",
+3 -3
fs/seq_file.c
··· 449 449 450 450 /* 451 451 * Same as seq_path, but relative to supplied root. 452 - * 453 - * root may be changed, see __d_path(). 454 452 */ 455 453 int seq_path_root(struct seq_file *m, struct path *path, struct path *root, 456 454 char *esc) ··· 461 463 char *p; 462 464 463 465 p = __d_path(path, root, buf, size); 466 + if (!p) 467 + return SEQ_SKIP; 464 468 res = PTR_ERR(p); 465 469 if (!IS_ERR(p)) { 466 470 char *end = mangle_path(buf, p, esc); ··· 474 474 } 475 475 seq_commit(m, res); 476 476 477 - return res < 0 ? res : 0; 477 + return res < 0 && res != -ENAMETOOLONG ? res : 0; 478 478 } 479 479 480 480 /*
+8 -10
fs/ubifs/super.c
··· 2264 2264 return -EINVAL; 2265 2265 } 2266 2266 2267 - err = register_filesystem(&ubifs_fs_type); 2268 - if (err) { 2269 - ubifs_err("cannot register file system, error %d", err); 2270 - return err; 2271 - } 2272 - 2273 - err = -ENOMEM; 2274 2267 ubifs_inode_slab = kmem_cache_create("ubifs_inode_slab", 2275 2268 sizeof(struct ubifs_inode), 0, 2276 2269 SLAB_MEM_SPREAD | SLAB_RECLAIM_ACCOUNT, 2277 2270 &inode_slab_ctor); 2278 2271 if (!ubifs_inode_slab) 2279 - goto out_reg; 2272 + return -ENOMEM; 2280 2273 2281 2274 register_shrinker(&ubifs_shrinker_info); 2282 2275 ··· 2281 2288 if (err) 2282 2289 goto out_compr; 2283 2290 2291 + err = register_filesystem(&ubifs_fs_type); 2292 + if (err) { 2293 + ubifs_err("cannot register file system, error %d", err); 2294 + goto out_dbg; 2295 + } 2284 2296 return 0; 2285 2297 2298 + out_dbg: 2299 + dbg_debugfs_exit(); 2286 2300 out_compr: 2287 2301 ubifs_compressors_exit(); 2288 2302 out_shrinker: 2289 2303 unregister_shrinker(&ubifs_shrinker_info); 2290 2304 kmem_cache_destroy(ubifs_inode_slab); 2291 - out_reg: 2292 - unregister_filesystem(&ubifs_fs_type); 2293 2305 return err; 2294 2306 } 2295 2307 /* late_initcall to let compressors initialize first */
+2
fs/xfs/xfs_acl.c
··· 42 42 int count, i; 43 43 44 44 count = be32_to_cpu(aclp->acl_cnt); 45 + if (count > XFS_ACL_MAX_ENTRIES) 46 + return ERR_PTR(-EFSCORRUPTED); 45 47 46 48 acl = posix_acl_alloc(count, GFP_KERNEL); 47 49 if (!acl)
+39 -25
fs/xfs/xfs_attr_leaf.c
··· 110 110 /*
111 111 * Query whether the requested number of additional bytes of extended
112 112 * attribute space will be able to fit inline.
113 + *
113 114 * Returns zero if not, else the di_forkoff fork offset to be used in the
114 115 * literal area for attribute data once the new bytes have been added.
115 116 *
··· 123 122 int offset;
124 123 int minforkoff; /* lower limit on valid forkoff locations */
125 124 int maxforkoff; /* upper limit on valid forkoff locations */
126 - int dsize;
125 + int dsize;
127 126 xfs_mount_t *mp = dp->i_mount;
128 127
129 128 offset = (XFS_LITINO(mp) - bytes) >> 3; /* rounded down */
··· 137 136 return (offset >= minforkoff) ? minforkoff : 0;
138 137 }
139 138
140 - if (!(mp->m_flags & XFS_MOUNT_ATTR2)) {
141 - if (bytes <= XFS_IFORK_ASIZE(dp))
142 - return dp->i_d.di_forkoff;
139 + /*
140 + * If the requested numbers of bytes is smaller or equal to the
141 + * current attribute fork size we can always proceed.
142 + *
143 + * Note that if_bytes in the data fork might actually be larger than
144 + * the current data fork size is due to delalloc extents. In that
145 + * case either the extent count will go down when they are converted
146 + * to real extents, or the delalloc conversion will take care of the
147 + * literal area rebalancing.
148 + */
149 + if (bytes <= XFS_IFORK_ASIZE(dp))
150 + return dp->i_d.di_forkoff;
151 +
152 + /*
153 + * For attr2 we can try to move the forkoff if there is space in the
154 + * literal area, but for the old format we are done if there is no
155 + * space in the fixed attribute fork.
156 + */
157 + if (!(mp->m_flags & XFS_MOUNT_ATTR2))
143 158 return 0;
144 - }
145 159
146 160 dsize = dp->i_df.if_bytes;
147 -
161 +
148 162 switch (dp->i_d.di_format) {
149 163 case XFS_DINODE_FMT_EXTENTS:
150 - /*
164 + /*
151 165 * If there is no attr fork and the data fork is extents,
152 - * determine if creating the default attr fork will result
153 - * in the extents form migrating to btree. If so, the
154 - * minimum offset only needs to be the space required for
166 + * determine if creating the default attr fork will result
167 + * in the extents form migrating to btree. If so, the
168 + * minimum offset only needs to be the space required for
155 169 * the btree root.
156 - */
170 + */
157 171 if (!dp->i_d.di_forkoff && dp->i_df.if_bytes >
158 172 xfs_default_attroffset(dp))
159 173 dsize = XFS_BMDR_SPACE_CALC(MINDBTPTRS);
160 174 break;
161 -
162 175 case XFS_DINODE_FMT_BTREE:
163 176 /*
164 - * If have data btree then keep forkoff if we have one,
165 - * otherwise we are adding a new attr, so then we set
166 - * minforkoff to where the btree root can finish so we have
177 + * If we have a data btree then keep forkoff if we have one,
178 + * otherwise we are adding a new attr, so then we set
179 + * minforkoff to where the btree root can finish so we have
167 180 * plenty of room for attrs
168 181 */
169 182 if (dp->i_d.di_forkoff) {
170 - if (offset < dp->i_d.di_forkoff)
183 + if (offset < dp->i_d.di_forkoff)
171 184 return 0;
172 - else
173 - return dp->i_d.di_forkoff;
174 - } else
175 - dsize = XFS_BMAP_BROOT_SPACE(dp->i_df.if_broot);
185 + return dp->i_d.di_forkoff;
186 + }
187 + dsize = XFS_BMAP_BROOT_SPACE(dp->i_df.if_broot);
176 188 break;
177 189 }
178 -
179 - /*
180 - * A data fork btree root must have space for at least
190 +
191 + /*
192 + * A data fork btree root must have space for at least
181 193 * MINDBTPTRS key/ptr pairs if the data fork is small or empty.
182 194 */
183 195 minforkoff = MAX(dsize, XFS_BMDR_SPACE_CALC(MINDBTPTRS));
··· 200 186 maxforkoff = XFS_LITINO(mp) - XFS_BMDR_SPACE_CALC(MINABTPTRS);
201 187 maxforkoff = maxforkoff >> 3; /* rounded down */
202 188
203 - if (offset >= minforkoff && offset < maxforkoff)
204 - return offset;
205 189 if (offset >= maxforkoff)
206 190 return maxforkoff;
191 + if (offset >= minforkoff)
192 + return offset;
207 193 return 0;
208 194 }
+19 -1
fs/xfs/xfs_bmap.c
··· 2383 2383 int tryagain; 2384 2384 int error; 2385 2385 2386 + ASSERT(ap->length); 2387 + 2386 2388 mp = ap->ip->i_mount; 2387 2389 align = ap->userdata ? xfs_get_extsz_hint(ap->ip) : 0; 2388 2390 if (unlikely(align)) { ··· 4631 4629 int error; 4632 4630 int rt; 4633 4631 4632 + ASSERT(bma->length > 0); 4633 + 4634 4634 rt = (whichfork == XFS_DATA_FORK) && XFS_IS_REALTIME_INODE(bma->ip); 4635 4635 4636 4636 /* ··· 4853 4849 ASSERT(*nmap <= XFS_BMAP_MAX_NMAP); 4854 4850 ASSERT(!(flags & XFS_BMAPI_IGSTATE)); 4855 4851 ASSERT(tp != NULL); 4852 + ASSERT(len > 0); 4856 4853 4857 4854 whichfork = (flags & XFS_BMAPI_ATTRFORK) ? 4858 4855 XFS_ATTR_FORK : XFS_DATA_FORK; ··· 4923 4918 bma.eof = eof; 4924 4919 bma.conv = !!(flags & XFS_BMAPI_CONVERT); 4925 4920 bma.wasdel = wasdelay; 4926 - bma.length = len; 4927 4921 bma.offset = bno; 4928 4922 4923 + /* 4924 + * There's a 32/64 bit type mismatch between the 4925 + * allocation length request (which can be 64 bits in 4926 + * length) and the bma length request, which is 4927 + * xfs_extlen_t and therefore 32 bits. Hence we have to 4928 + * check for 32-bit overflows and handle them here. 4929 + */ 4930 + if (len > (xfs_filblks_t)MAXEXTLEN) 4931 + bma.length = MAXEXTLEN; 4932 + else 4933 + bma.length = len; 4934 + 4935 + ASSERT(len > 0); 4936 + ASSERT(bma.length > 0); 4929 4937 error = xfs_bmapi_allocate(&bma, flags); 4930 4938 if (error) 4931 4939 goto error0;
+4 -4
fs/xfs/xfs_export.c
··· 98 98 switch (fileid_type) { 99 99 case FILEID_INO32_GEN_PARENT: 100 100 spin_lock(&dentry->d_lock); 101 - fid->i32.parent_ino = dentry->d_parent->d_inode->i_ino; 101 + fid->i32.parent_ino = XFS_I(dentry->d_parent->d_inode)->i_ino; 102 102 fid->i32.parent_gen = dentry->d_parent->d_inode->i_generation; 103 103 spin_unlock(&dentry->d_lock); 104 104 /*FALLTHRU*/ 105 105 case FILEID_INO32_GEN: 106 - fid->i32.ino = inode->i_ino; 106 + fid->i32.ino = XFS_I(inode)->i_ino; 107 107 fid->i32.gen = inode->i_generation; 108 108 break; 109 109 case FILEID_INO32_GEN_PARENT | XFS_FILEID_TYPE_64FLAG: 110 110 spin_lock(&dentry->d_lock); 111 - fid64->parent_ino = dentry->d_parent->d_inode->i_ino; 111 + fid64->parent_ino = XFS_I(dentry->d_parent->d_inode)->i_ino; 112 112 fid64->parent_gen = dentry->d_parent->d_inode->i_generation; 113 113 spin_unlock(&dentry->d_lock); 114 114 /*FALLTHRU*/ 115 115 case FILEID_INO32_GEN | XFS_FILEID_TYPE_64FLAG: 116 - fid64->ino = inode->i_ino; 116 + fid64->ino = XFS_I(inode)->i_ino; 117 117 fid64->gen = inode->i_generation; 118 118 break; 119 119 }
+21
fs/xfs/xfs_inode.c
··· 2835 2835 return XFS_ERROR(EFSCORRUPTED); 2836 2836 } 2837 2837 2838 + void 2839 + xfs_promote_inode( 2840 + struct xfs_inode *ip) 2841 + { 2842 + struct xfs_buf *bp; 2843 + 2844 + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL|XFS_ILOCK_SHARED)); 2845 + 2846 + bp = xfs_incore(ip->i_mount->m_ddev_targp, ip->i_imap.im_blkno, 2847 + ip->i_imap.im_len, XBF_TRYLOCK); 2848 + if (!bp) 2849 + return; 2850 + 2851 + if (XFS_BUF_ISDELAYWRITE(bp)) { 2852 + xfs_buf_delwri_promote(bp); 2853 + wake_up_process(ip->i_mount->m_ddev_targp->bt_task); 2854 + } 2855 + 2856 + xfs_buf_relse(bp); 2857 + } 2858 + 2838 2859 /* 2839 2860 * Return a pointer to the extent record at file index idx. 2840 2861 */
+1
fs/xfs/xfs_inode.h
··· 498 498 void xfs_iext_realloc(xfs_inode_t *, int, int); 499 499 void xfs_iunpin_wait(xfs_inode_t *); 500 500 int xfs_iflush(xfs_inode_t *, uint); 501 + void xfs_promote_inode(struct xfs_inode *); 501 502 void xfs_lock_inodes(xfs_inode_t **, int, uint); 502 503 void xfs_lock_two_inodes(xfs_inode_t *, xfs_inode_t *, uint); 503 504
+173 -175
fs/xfs/xfs_log.c
··· 150 150 } while (head_val != old); 151 151 } 152 152 153 + STATIC bool 154 + xlog_reserveq_wake( 155 + struct log *log, 156 + int *free_bytes) 157 + { 158 + struct xlog_ticket *tic; 159 + int need_bytes; 160 + 161 + list_for_each_entry(tic, &log->l_reserveq, t_queue) { 162 + if (tic->t_flags & XLOG_TIC_PERM_RESERV) 163 + need_bytes = tic->t_unit_res * tic->t_cnt; 164 + else 165 + need_bytes = tic->t_unit_res; 166 + 167 + if (*free_bytes < need_bytes) 168 + return false; 169 + *free_bytes -= need_bytes; 170 + 171 + trace_xfs_log_grant_wake_up(log, tic); 172 + wake_up(&tic->t_wait); 173 + } 174 + 175 + return true; 176 + } 177 + 178 + STATIC bool 179 + xlog_writeq_wake( 180 + struct log *log, 181 + int *free_bytes) 182 + { 183 + struct xlog_ticket *tic; 184 + int need_bytes; 185 + 186 + list_for_each_entry(tic, &log->l_writeq, t_queue) { 187 + ASSERT(tic->t_flags & XLOG_TIC_PERM_RESERV); 188 + 189 + need_bytes = tic->t_unit_res; 190 + 191 + if (*free_bytes < need_bytes) 192 + return false; 193 + *free_bytes -= need_bytes; 194 + 195 + trace_xfs_log_regrant_write_wake_up(log, tic); 196 + wake_up(&tic->t_wait); 197 + } 198 + 199 + return true; 200 + } 201 + 202 + STATIC int 203 + xlog_reserveq_wait( 204 + struct log *log, 205 + struct xlog_ticket *tic, 206 + int need_bytes) 207 + { 208 + list_add_tail(&tic->t_queue, &log->l_reserveq); 209 + 210 + do { 211 + if (XLOG_FORCED_SHUTDOWN(log)) 212 + goto shutdown; 213 + xlog_grant_push_ail(log, need_bytes); 214 + 215 + XFS_STATS_INC(xs_sleep_logspace); 216 + trace_xfs_log_grant_sleep(log, tic); 217 + 218 + xlog_wait(&tic->t_wait, &log->l_grant_reserve_lock); 219 + trace_xfs_log_grant_wake(log, tic); 220 + 221 + spin_lock(&log->l_grant_reserve_lock); 222 + if (XLOG_FORCED_SHUTDOWN(log)) 223 + goto shutdown; 224 + } while (xlog_space_left(log, &log->l_grant_reserve_head) < need_bytes); 225 + 226 + list_del_init(&tic->t_queue); 227 + return 0; 228 + shutdown: 229 + list_del_init(&tic->t_queue); 230 + return XFS_ERROR(EIO); 
231 + }
232 +
233 + STATIC int
234 + xlog_writeq_wait(
235 + struct log *log,
236 + struct xlog_ticket *tic,
237 + int need_bytes)
238 + {
239 + list_add_tail(&tic->t_queue, &log->l_writeq);
240 +
241 + do {
242 + if (XLOG_FORCED_SHUTDOWN(log))
243 + goto shutdown;
244 + xlog_grant_push_ail(log, need_bytes);
245 +
246 + XFS_STATS_INC(xs_sleep_logspace);
247 + trace_xfs_log_regrant_write_sleep(log, tic);
248 +
249 + xlog_wait(&tic->t_wait, &log->l_grant_write_lock);
250 + trace_xfs_log_regrant_write_wake(log, tic);
251 +
252 + spin_lock(&log->l_grant_write_lock);
253 + if (XLOG_FORCED_SHUTDOWN(log))
254 + goto shutdown;
255 + } while (xlog_space_left(log, &log->l_grant_write_head) < need_bytes);
256 +
257 + list_del_init(&tic->t_queue);
258 + return 0;
259 + shutdown:
260 + list_del_init(&tic->t_queue);
261 + return XFS_ERROR(EIO);
262 + }
263 +
153 264 static void
154 265 xlog_tic_reset_res(xlog_ticket_t *tic)
155 266 {
··· 461 350 retval = xlog_grant_log_space(log, internal_ticket);
462 351 }
463 352
353 + if (unlikely(retval)) {
354 + /*
355 + * If we are failing, make sure the ticket doesn't have any
356 + * current reservations. We don't want to add this back
357 + * when the ticket/ transaction gets cancelled.
358 + */
359 + internal_ticket->t_curr_res = 0;
360 + /* ungrant will give back unit_res * t_cnt. */
361 + internal_ticket->t_cnt = 0;
362 + }
363 +
464 364 return retval;
465 - } /* xfs_log_reserve */
365 + }
466 366
467 367
468 368 /*
··· 2603 2481 /*
2604 2482 * Atomically get the log space required for a log ticket.
2605 2483 *
2606 - * Once a ticket gets put onto the reserveq, it will only return after
2607 - * the needed reservation is satisfied.
2484 + * Once a ticket gets put onto the reserveq, it will only return after the
2485 + * needed reservation is satisfied.
2608 2486 *
2609 2487 * This function is structured so that it has a lock free fast path. This is
2610 2488 * necessary because every new transaction reservation will come through this
··· 2612 2490 * every pass.
2613 2491 *
2614 2492 * As tickets are only ever moved on and off the reserveq under the
2615 - * l_grant_reserve_lock, we only need to take that lock if we are going
2616 - * to add the ticket to the queue and sleep. We can avoid taking the lock if the
2617 - * ticket was never added to the reserveq because the t_queue list head will be
2618 - * empty and we hold the only reference to it so it can safely be checked
2619 - * unlocked.
2493 + * l_grant_reserve_lock, we only need to take that lock if we are going to add
2494 + * the ticket to the queue and sleep. We can avoid taking the lock if the ticket
2495 + * was never added to the reserveq because the t_queue list head will be empty
2496 + * and we hold the only reference to it so it can safely be checked unlocked.
2620 2497 */
2621 2498 STATIC int
2622 - xlog_grant_log_space(xlog_t *log,
2623 - xlog_ticket_t *tic)
2499 + xlog_grant_log_space(
2500 + struct log *log,
2501 + struct xlog_ticket *tic)
2624 2502 {
2625 - int free_bytes;
2626 - int need_bytes;
2503 + int free_bytes, need_bytes;
2504 + int error = 0;
2627 2505
2628 - #ifdef DEBUG
2629 - if (log->l_flags & XLOG_ACTIVE_RECOVERY)
2630 - panic("grant Recovery problem");
2631 - #endif
2506 + ASSERT(!(log->l_flags & XLOG_ACTIVE_RECOVERY));
2632 2507
2633 2508 trace_xfs_log_grant_enter(log, tic);
2634 2509
2510 + /*
2511 + * If there are other waiters on the queue then give them a chance at
2512 + * logspace before us. Wake up the first waiters, if we do not wake
2513 + * up all the waiters then go to sleep waiting for more free space,
2514 + * otherwise try to get some space for this transaction.
2515 + */ 2635 2516 need_bytes = tic->t_unit_res; 2636 2517 if (tic->t_flags & XFS_LOG_PERM_RESERV) 2637 2518 need_bytes *= tic->t_ocnt; 2638 - 2639 - /* something is already sleeping; insert new transaction at end */ 2519 + free_bytes = xlog_space_left(log, &log->l_grant_reserve_head); 2640 2520 if (!list_empty_careful(&log->l_reserveq)) { 2641 2521 spin_lock(&log->l_grant_reserve_lock); 2642 - /* recheck the queue now we are locked */ 2643 - if (list_empty(&log->l_reserveq)) { 2644 - spin_unlock(&log->l_grant_reserve_lock); 2645 - goto redo; 2646 - } 2647 - list_add_tail(&tic->t_queue, &log->l_reserveq); 2648 - 2649 - trace_xfs_log_grant_sleep1(log, tic); 2650 - 2651 - /* 2652 - * Gotta check this before going to sleep, while we're 2653 - * holding the grant lock. 2654 - */ 2655 - if (XLOG_FORCED_SHUTDOWN(log)) 2656 - goto error_return; 2657 - 2658 - XFS_STATS_INC(xs_sleep_logspace); 2659 - xlog_wait(&tic->t_wait, &log->l_grant_reserve_lock); 2660 - 2661 - /* 2662 - * If we got an error, and the filesystem is shutting down, 2663 - * we'll catch it down below. So just continue... 
2664 - */ 2665 - trace_xfs_log_grant_wake1(log, tic); 2666 - } 2667 - 2668 - redo: 2669 - if (XLOG_FORCED_SHUTDOWN(log)) 2670 - goto error_return_unlocked; 2671 - 2672 - free_bytes = xlog_space_left(log, &log->l_grant_reserve_head); 2673 - if (free_bytes < need_bytes) { 2522 + if (!xlog_reserveq_wake(log, &free_bytes) || 2523 + free_bytes < need_bytes) 2524 + error = xlog_reserveq_wait(log, tic, need_bytes); 2525 + spin_unlock(&log->l_grant_reserve_lock); 2526 + } else if (free_bytes < need_bytes) { 2674 2527 spin_lock(&log->l_grant_reserve_lock); 2675 - if (list_empty(&tic->t_queue)) 2676 - list_add_tail(&tic->t_queue, &log->l_reserveq); 2677 - 2678 - trace_xfs_log_grant_sleep2(log, tic); 2679 - 2680 - if (XLOG_FORCED_SHUTDOWN(log)) 2681 - goto error_return; 2682 - 2683 - xlog_grant_push_ail(log, need_bytes); 2684 - 2685 - XFS_STATS_INC(xs_sleep_logspace); 2686 - xlog_wait(&tic->t_wait, &log->l_grant_reserve_lock); 2687 - 2688 - trace_xfs_log_grant_wake2(log, tic); 2689 - goto redo; 2690 - } 2691 - 2692 - if (!list_empty(&tic->t_queue)) { 2693 - spin_lock(&log->l_grant_reserve_lock); 2694 - list_del_init(&tic->t_queue); 2528 + error = xlog_reserveq_wait(log, tic, need_bytes); 2695 2529 spin_unlock(&log->l_grant_reserve_lock); 2696 2530 } 2531 + if (error) 2532 + return error; 2697 2533 2698 - /* we've got enough space */ 2699 2534 xlog_grant_add_space(log, &log->l_grant_reserve_head, need_bytes); 2700 2535 xlog_grant_add_space(log, &log->l_grant_write_head, need_bytes); 2701 2536 trace_xfs_log_grant_exit(log, tic); 2702 2537 xlog_verify_grant_tail(log); 2703 2538 return 0; 2704 - 2705 - error_return_unlocked: 2706 - spin_lock(&log->l_grant_reserve_lock); 2707 - error_return: 2708 - list_del_init(&tic->t_queue); 2709 - spin_unlock(&log->l_grant_reserve_lock); 2710 - trace_xfs_log_grant_error(log, tic); 2711 - 2712 - /* 2713 - * If we are failing, make sure the ticket doesn't have any 2714 - * current reservations. 
We don't want to add this back when 2715 - * the ticket/transaction gets cancelled. 2716 - */ 2717 - tic->t_curr_res = 0; 2718 - tic->t_cnt = 0; /* ungrant will give back unit_res * t_cnt. */ 2719 - return XFS_ERROR(EIO); 2720 - } /* xlog_grant_log_space */ 2721 - 2539 + } 2722 2540 2723 2541 /* 2724 2542 * Replenish the byte reservation required by moving the grant write head. ··· 2667 2605 * free fast path. 2668 2606 */ 2669 2607 STATIC int 2670 - xlog_regrant_write_log_space(xlog_t *log, 2671 - xlog_ticket_t *tic) 2608 + xlog_regrant_write_log_space( 2609 + struct log *log, 2610 + struct xlog_ticket *tic) 2672 2611 { 2673 - int free_bytes, need_bytes; 2612 + int free_bytes, need_bytes; 2613 + int error = 0; 2674 2614 2675 2615 tic->t_curr_res = tic->t_unit_res; 2676 2616 xlog_tic_reset_res(tic); ··· 2680 2616 if (tic->t_cnt > 0) 2681 2617 return 0; 2682 2618 2683 - #ifdef DEBUG 2684 - if (log->l_flags & XLOG_ACTIVE_RECOVERY) 2685 - panic("regrant Recovery problem"); 2686 - #endif 2619 + ASSERT(!(log->l_flags & XLOG_ACTIVE_RECOVERY)); 2687 2620 2688 2621 trace_xfs_log_regrant_write_enter(log, tic); 2689 - if (XLOG_FORCED_SHUTDOWN(log)) 2690 - goto error_return_unlocked; 2691 2622 2692 - /* If there are other waiters on the queue then give them a 2693 - * chance at logspace before us. Wake up the first waiters, 2694 - * if we do not wake up all the waiters then go to sleep waiting 2695 - * for more free space, otherwise try to get some space for 2696 - * this transaction. 2623 + /* 2624 + * If there are other waiters on the queue then give them a chance at 2625 + * logspace before us. Wake up the first waiters, if we do not wake 2626 + * up all the waiters then go to sleep waiting for more free space, 2627 + * otherwise try to get some space for this transaction. 
2697 2628 */
2698 2629 need_bytes = tic->t_unit_res;
2699 - if (!list_empty_careful(&log->l_writeq)) {
2700 - struct xlog_ticket *ntic;
2701 -
2702 - spin_lock(&log->l_grant_write_lock);
2703 - free_bytes = xlog_space_left(log, &log->l_grant_write_head);
2704 - list_for_each_entry(ntic, &log->l_writeq, t_queue) {
2705 - ASSERT(ntic->t_flags & XLOG_TIC_PERM_RESERV);
2706 -
2707 - if (free_bytes < ntic->t_unit_res)
2708 - break;
2709 - free_bytes -= ntic->t_unit_res;
2710 - wake_up(&ntic->t_wait);
2711 - }
2712 -
2713 - if (ntic != list_first_entry(&log->l_writeq,
2714 - struct xlog_ticket, t_queue)) {
2715 - if (list_empty(&tic->t_queue))
2716 - list_add_tail(&tic->t_queue, &log->l_writeq);
2717 - trace_xfs_log_regrant_write_sleep1(log, tic);
2718 -
2719 - xlog_grant_push_ail(log, need_bytes);
2720 -
2721 - XFS_STATS_INC(xs_sleep_logspace);
2722 - xlog_wait(&tic->t_wait, &log->l_grant_write_lock);
2723 - trace_xfs_log_regrant_write_wake1(log, tic);
2724 - } else
2725 - spin_unlock(&log->l_grant_write_lock);
2726 - }
2727 -
2728 - redo:
2729 - if (XLOG_FORCED_SHUTDOWN(log))
2730 - goto error_return_unlocked;
2731 -
2732 2630 free_bytes = xlog_space_left(log, &log->l_grant_write_head);
2733 - if (free_bytes < need_bytes) {
2631 + if (!list_empty_careful(&log->l_writeq)) {
2734 2632 spin_lock(&log->l_grant_write_lock);
2735 - if (list_empty(&tic->t_queue))
2736 - list_add_tail(&tic->t_queue, &log->l_writeq);
2737 -
2738 - if (XLOG_FORCED_SHUTDOWN(log))
2739 - goto error_return;
2740 -
2741 - xlog_grant_push_ail(log, need_bytes);
2742 -
2743 - XFS_STATS_INC(xs_sleep_logspace);
2744 - trace_xfs_log_regrant_write_sleep2(log, tic);
2745 - xlog_wait(&tic->t_wait, &log->l_grant_write_lock);
2746 -
2747 - trace_xfs_log_regrant_write_wake2(log, tic);
2748 - goto redo;
2749 - }
2750 -
2751 - if (!list_empty(&tic->t_queue)) {
2633 + if (!xlog_writeq_wake(log, &free_bytes) ||
2634 + free_bytes < need_bytes)
2635 + error = xlog_writeq_wait(log, tic, need_bytes);
2636 + spin_unlock(&log->l_grant_write_lock);
2637 + } else if (free_bytes < need_bytes) {
2752 2638 spin_lock(&log->l_grant_write_lock);
2753 - list_del_init(&tic->t_queue);
2639 + error = xlog_writeq_wait(log, tic, need_bytes);
2754 2640 spin_unlock(&log->l_grant_write_lock);
2755 2641 }
2756 2642
2757 - /* we've got enough space */
2643 + if (error)
2644 + return error;
2645 +
2758 2646 xlog_grant_add_space(log, &log->l_grant_write_head, need_bytes);
2759 2647 trace_xfs_log_regrant_write_exit(log, tic);
2760 2648 xlog_verify_grant_tail(log);
2761 2649 return 0;
2762 -
2763 -
2764 - error_return_unlocked:
2765 - spin_lock(&log->l_grant_write_lock);
2766 - error_return:
2767 - list_del_init(&tic->t_queue);
2768 - spin_unlock(&log->l_grant_write_lock);
2769 - trace_xfs_log_regrant_write_error(log, tic);
2770 -
2771 - /*
2772 - * If we are failing, make sure the ticket doesn't have any
2773 - * current reservations. We don't want to add this back when
2774 - * the ticket/transaction gets cancelled.
2775 - */
2776 - tic->t_curr_res = 0;
2777 - tic->t_cnt = 0; /* ungrant will give back unit_res * t_cnt. */
2778 - return XFS_ERROR(EIO);
2779 - } /* xlog_regrant_write_log_space */
2780 -
2650 + }
2781 2651
2782 2652 /* The first cnt-1 times through here we don't need to
2783 2653 * move the grant write head because the permanent
+11
fs/xfs/xfs_sync.c
··· 770 770 if (!xfs_iflock_nowait(ip)) { 771 771 if (!(sync_mode & SYNC_WAIT)) 772 772 goto out; 773 + 774 + /* 775 + * If we only have a single dirty inode in a cluster there is 776 + * a fair chance that the AIL push may have pushed it into 777 + * the buffer, but xfsbufd won't touch it until 30 seconds 778 + * from now, and thus we will lock up here. 779 + * 780 + * Promote the inode buffer to the front of the delwri list 781 + * and wake up xfsbufd now. 782 + */ 783 + xfs_promote_inode(ip); 773 784 xfs_iflock(ip); 774 785 } 775 786
+4 -8
fs/xfs/xfs_trace.h
··· 834 834 DEFINE_LOGGRANT_EVENT(xfs_log_grant_enter); 835 835 DEFINE_LOGGRANT_EVENT(xfs_log_grant_exit); 836 836 DEFINE_LOGGRANT_EVENT(xfs_log_grant_error); 837 - DEFINE_LOGGRANT_EVENT(xfs_log_grant_sleep1); 838 - DEFINE_LOGGRANT_EVENT(xfs_log_grant_wake1); 839 - DEFINE_LOGGRANT_EVENT(xfs_log_grant_sleep2); 840 - DEFINE_LOGGRANT_EVENT(xfs_log_grant_wake2); 837 + DEFINE_LOGGRANT_EVENT(xfs_log_grant_sleep); 838 + DEFINE_LOGGRANT_EVENT(xfs_log_grant_wake); 841 839 DEFINE_LOGGRANT_EVENT(xfs_log_grant_wake_up); 842 840 DEFINE_LOGGRANT_EVENT(xfs_log_regrant_write_enter); 843 841 DEFINE_LOGGRANT_EVENT(xfs_log_regrant_write_exit); 844 842 DEFINE_LOGGRANT_EVENT(xfs_log_regrant_write_error); 845 - DEFINE_LOGGRANT_EVENT(xfs_log_regrant_write_sleep1); 846 - DEFINE_LOGGRANT_EVENT(xfs_log_regrant_write_wake1); 847 - DEFINE_LOGGRANT_EVENT(xfs_log_regrant_write_sleep2); 848 - DEFINE_LOGGRANT_EVENT(xfs_log_regrant_write_wake2); 843 + DEFINE_LOGGRANT_EVENT(xfs_log_regrant_write_sleep); 844 + DEFINE_LOGGRANT_EVENT(xfs_log_regrant_write_wake); 849 845 DEFINE_LOGGRANT_EVENT(xfs_log_regrant_write_wake_up); 850 846 DEFINE_LOGGRANT_EVENT(xfs_log_regrant_reserve_enter); 851 847 DEFINE_LOGGRANT_EVENT(xfs_log_regrant_reserve_exit);
+7 -1
include/asm-generic/unistd.h
··· 685 685 __SYSCALL(__NR_setns, sys_setns) 686 686 #define __NR_sendmmsg 269 687 687 __SC_COMP(__NR_sendmmsg, sys_sendmmsg, compat_sys_sendmmsg) 688 + #define __NR_process_vm_readv 270 689 + __SC_COMP(__NR_process_vm_readv, sys_process_vm_readv, \ 690 + compat_sys_process_vm_readv) 691 + #define __NR_process_vm_writev 271 692 + __SC_COMP(__NR_process_vm_writev, sys_process_vm_writev, \ 693 + compat_sys_process_vm_writev) 688 694 689 695 #undef __NR_syscalls 690 - #define __NR_syscalls 270 696 + #define __NR_syscalls 272 691 697 692 698 /* 693 699 * All syscalls below here should go away really,
+18
include/drm/drm_pciids.h
··· 182 182 {0x1002, 0x6748, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TURKS|RADEON_NEW_MEMMAP}, \
183 183 {0x1002, 0x6749, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TURKS|RADEON_NEW_MEMMAP}, \
184 184 {0x1002, 0x6750, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TURKS|RADEON_NEW_MEMMAP}, \
185 + {0x1002, 0x6751, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TURKS|RADEON_NEW_MEMMAP}, \
185 186 {0x1002, 0x6758, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TURKS|RADEON_NEW_MEMMAP}, \
186 187 {0x1002, 0x6759, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TURKS|RADEON_NEW_MEMMAP}, \
188 + {0x1002, 0x675B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TURKS|RADEON_NEW_MEMMAP}, \
189 + {0x1002, 0x675D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TURKS|RADEON_NEW_MEMMAP}, \
187 190 {0x1002, 0x675F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TURKS|RADEON_NEW_MEMMAP}, \
188 191 {0x1002, 0x6760, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CAICOS|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
189 192 {0x1002, 0x6761, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CAICOS|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
··· 198 195 {0x1002, 0x6767, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CAICOS|RADEON_NEW_MEMMAP}, \
199 196 {0x1002, 0x6768, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CAICOS|RADEON_NEW_MEMMAP}, \
200 197 {0x1002, 0x6770, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CAICOS|RADEON_NEW_MEMMAP}, \
198 + {0x1002, 0x6772, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CAICOS|RADEON_NEW_MEMMAP}, \
201 199 {0x1002, 0x6778, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CAICOS|RADEON_NEW_MEMMAP}, \
202 200 {0x1002, 0x6779, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CAICOS|RADEON_NEW_MEMMAP}, \
201 + {0x1002, 0x677B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CAICOS|RADEON_NEW_MEMMAP}, \
202 + {0x1002, 0x6840, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TURKS|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
203 + {0x1002, 0x6841, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TURKS|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
204 + {0x1002, 0x6842, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TURKS|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
205 + {0x1002, 0x6843, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TURKS|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
206 + {0x1002, 0x6849, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TURKS|RADEON_NEW_MEMMAP}, \
207 + {0x1002, 0x6850, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TURKS|RADEON_NEW_MEMMAP}, \
208 + {0x1002, 0x6858, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TURKS|RADEON_NEW_MEMMAP}, \
209 + {0x1002, 0x6859, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TURKS|RADEON_NEW_MEMMAP}, \
203 210 {0x1002, 0x6880, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CYPRESS|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
204 211 {0x1002, 0x6888, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CYPRESS|RADEON_NEW_MEMMAP}, \
205 212 {0x1002, 0x6889, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CYPRESS|RADEON_NEW_MEMMAP}, \
··· 251 238 {0x1002, 0x68f2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CEDAR|RADEON_NEW_MEMMAP}, \
252 239 {0x1002, 0x68f8, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CEDAR|RADEON_NEW_MEMMAP}, \
253 240 {0x1002, 0x68f9, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CEDAR|RADEON_NEW_MEMMAP}, \
241 + {0x1002, 0x68fa, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CEDAR|RADEON_NEW_MEMMAP}, \
254 242 {0x1002, 0x68fe, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CEDAR|RADEON_NEW_MEMMAP}, \
255 243 {0x1002, 0x7100, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R520|RADEON_NEW_MEMMAP}, \
256 244 {0x1002, 0x7101, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R520|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
··· 494 480 {0x1002, 0x9647, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SUMO|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP},\
495 481 {0x1002, 0x9648, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SUMO|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP},\
496 482 {0x1002, 0x964a, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SUMO|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
483 + {0x1002, 0x964b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SUMO|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
484 + {0x1002, 0x964c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SUMO|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
497 485 {0x1002, 0x964e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SUMO|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP},\
498 486 {0x1002, 0x964f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SUMO|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP},\
499 487 {0x1002, 0x9710, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS880|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
··· 510 494 {0x1002, 0x9805, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_PALM|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
511 495 {0x1002, 0x9806, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_PALM|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
512 496 {0x1002, 0x9807, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_PALM|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
497 + {0x1002, 0x9808, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_PALM|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
498 + {0x1002, 0x9809, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_PALM|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
513 499 {0, 0, 0}
514 500
515 501 #define r128_PCI_IDS \
+4 -5
include/drm/exynos_drm.h
··· 32 32 /** 33 33 * User-desired buffer creation information structure. 34 34 * 35 - * @size: requested size for the object. 35 + * @size: user-desired memory allocation size. 36 36 * - this size value would be page-aligned internally. 37 37 * @flags: user request for setting memory type or cache attributes. 38 - * @handle: returned handle for the object. 39 - * @pad: just padding to be 64-bit aligned. 38 + * @handle: returned a handle to created gem object. 39 + * - this handle will be set by gem module of kernel side. 40 40 */ 41 41 struct drm_exynos_gem_create { 42 - unsigned int size; 42 + uint64_t size; 43 43 unsigned int flags; 44 44 unsigned int handle; 45 - unsigned int pad; 46 45 }; 47 46 48 47 /**
-3
include/linux/blkdev.h
··· 805 805 */ 806 806 extern struct request_queue *blk_init_queue_node(request_fn_proc *rfn, 807 807 spinlock_t *lock, int node_id); 808 - extern struct request_queue *blk_init_allocated_queue_node(struct request_queue *, 809 - request_fn_proc *, 810 - spinlock_t *, int node_id); 811 808 extern struct request_queue *blk_init_queue(request_fn_proc *, spinlock_t *); 812 809 extern struct request_queue *blk_init_allocated_queue(struct request_queue *, 813 810 request_fn_proc *, spinlock_t *);
+2 -1
include/linux/clocksource.h
··· 156 156 * @mult: cycle to nanosecond multiplier 157 157 * @shift: cycle to nanosecond divisor (power of two) 158 158 * @max_idle_ns: max idle time permitted by the clocksource (nsecs) 159 + * @maxadj: maximum adjustment value to mult (~11%) 159 160 * @flags: flags describing special properties 160 161 * @archdata: arch-specific data 161 162 * @suspend: suspend function for the clocksource, if necessary ··· 173 172 u32 mult; 174 173 u32 shift; 175 174 u64 max_idle_ns; 176 - 175 + u32 maxadj; 177 176 #ifdef CONFIG_ARCH_CLOCKSOURCE_DATA 178 177 struct arch_clocksource_data archdata; 179 178 #endif
+9
include/linux/compat.h
··· 552 552 553 553 extern void __user *compat_alloc_user_space(unsigned long len); 554 554 555 + asmlinkage ssize_t compat_sys_process_vm_readv(compat_pid_t pid, 556 + const struct compat_iovec __user *lvec, 557 + unsigned long liovcnt, const struct compat_iovec __user *rvec, 558 + unsigned long riovcnt, unsigned long flags); 559 + asmlinkage ssize_t compat_sys_process_vm_writev(compat_pid_t pid, 560 + const struct compat_iovec __user *lvec, 561 + unsigned long liovcnt, const struct compat_iovec __user *rvec, 562 + unsigned long riovcnt, unsigned long flags); 563 + 555 564 #endif /* CONFIG_COMPAT */ 556 565 #endif /* _LINUX_COMPAT_H */
+2 -1
include/linux/dcache.h
··· 339 339 */ 340 340 extern char *dynamic_dname(struct dentry *, char *, int, const char *, ...); 341 341 342 - extern char *__d_path(const struct path *path, struct path *root, char *, int); 342 + extern char *__d_path(const struct path *, const struct path *, char *, int); 343 + extern char *d_absolute_path(const struct path *, char *, int); 343 344 extern char *d_path(const struct path *, char *, int); 344 345 extern char *d_path_with_unreachable(const struct path *, char *, int); 345 346 extern char *dentry_path_raw(struct dentry *, char *, int);
+2
include/linux/dma_remapping.h
··· 31 31 extern int iommu_calculate_agaw(struct intel_iommu *iommu); 32 32 extern int iommu_calculate_max_sagaw(struct intel_iommu *iommu); 33 33 extern int dmar_disabled; 34 + extern int intel_iommu_enabled; 34 35 #else 35 36 static inline int iommu_calculate_agaw(struct intel_iommu *iommu) 36 37 { ··· 45 44 { 46 45 } 47 46 #define dmar_disabled (1) 47 + #define intel_iommu_enabled (0) 48 48 #endif 49 49 50 50
+2 -1
include/linux/fs.h
··· 393 393 #include <linux/semaphore.h> 394 394 #include <linux/fiemap.h> 395 395 #include <linux/rculist_bl.h> 396 - #include <linux/shrinker.h> 397 396 #include <linux/atomic.h> 397 + #include <linux/shrinker.h> 398 398 399 399 #include <asm/byteorder.h> 400 400 ··· 1942 1942 extern int statfs_by_dentry(struct dentry *, struct kstatfs *); 1943 1943 extern int freeze_super(struct super_block *super); 1944 1944 extern int thaw_super(struct super_block *super); 1945 + extern bool our_mnt(struct vfsmount *mnt); 1945 1946 1946 1947 extern int current_umask(void); 1947 1948
+2
include/linux/ftrace_event.h
··· 172 172 TRACE_EVENT_FL_FILTERED_BIT, 173 173 TRACE_EVENT_FL_RECORDED_CMD_BIT, 174 174 TRACE_EVENT_FL_CAP_ANY_BIT, 175 + TRACE_EVENT_FL_NO_SET_FILTER_BIT, 175 176 }; 176 177 177 178 enum { ··· 180 179 TRACE_EVENT_FL_FILTERED = (1 << TRACE_EVENT_FL_FILTERED_BIT), 181 180 TRACE_EVENT_FL_RECORDED_CMD = (1 << TRACE_EVENT_FL_RECORDED_CMD_BIT), 182 181 TRACE_EVENT_FL_CAP_ANY = (1 << TRACE_EVENT_FL_CAP_ANY_BIT), 182 + TRACE_EVENT_FL_NO_SET_FILTER = (1 << TRACE_EVENT_FL_NO_SET_FILTER_BIT), 183 183 }; 184 184 185 185 struct ftrace_event_call {
+3 -1
include/linux/init_task.h
··· 126 126 # define INIT_PERF_EVENTS(tsk) 127 127 #endif 128 128 129 + #define INIT_TASK_COMM "swapper" 130 + 129 131 /* 130 132 * INIT_TASK is used to set up the first task table, touch at 131 133 * your own risk!. Base=0, limit=0x1fffff (=2MB) ··· 164 162 .group_leader = &tsk, \ 165 163 RCU_INIT_POINTER(.real_cred, &init_cred), \ 166 164 RCU_INIT_POINTER(.cred, &init_cred), \ 167 - .comm = "swapper", \ 165 + .comm = INIT_TASK_COMM, \ 168 166 .thread = INIT_THREAD, \ 169 167 .fs = &init_fs, \ 170 168 .files = &init_files, \
-1
include/linux/log2.h
··· 185 185 #define rounddown_pow_of_two(n) \ 186 186 ( \ 187 187 __builtin_constant_p(n) ? ( \ 188 - (n == 1) ? 0 : \ 189 188 (1UL << ilog2(n))) : \ 190 189 __rounddown_pow_of_two(n) \ 191 190 )
+1
include/linux/mm.h
··· 10 10 #include <linux/mmzone.h> 11 11 #include <linux/rbtree.h> 12 12 #include <linux/prio_tree.h> 13 + #include <linux/atomic.h> 13 14 #include <linux/debug_locks.h> 14 15 #include <linux/mm_types.h> 15 16 #include <linux/range.h>
+6
include/linux/mmc/card.h
··· 218 218 #define MMC_QUIRK_INAND_CMD38 (1<<6) /* iNAND devices have broken CMD38 */ 219 219 #define MMC_QUIRK_BLK_NO_CMD23 (1<<7) /* Avoid CMD23 for regular multiblock */ 220 220 #define MMC_QUIRK_BROKEN_BYTE_MODE_512 (1<<8) /* Avoid sending 512 bytes in */ 221 + #define MMC_QUIRK_LONG_READ_TIME (1<<9) /* Data read time > CSD says */ 221 222 /* byte mode */ 222 223 unsigned int poweroff_notify_state; /* eMMC4.5 notify feature */ 223 224 #define MMC_NO_POWER_NOTIFICATION 0 ··· 432 431 static inline int mmc_card_broken_byte_mode_512(const struct mmc_card *c) 433 432 { 434 433 return c->quirks & MMC_QUIRK_BROKEN_BYTE_MODE_512; 434 + } 435 + 436 + static inline int mmc_card_long_read_time(const struct mmc_card *c) 437 + { 438 + return c->quirks & MMC_QUIRK_LONG_READ_TIME; 435 439 } 436 440 437 441 #define mmc_card_name(c) ((c)->cid.prod_name)
+2
include/linux/netdevice.h
··· 2536 2536 extern void *dev_seq_start(struct seq_file *seq, loff_t *pos); 2537 2537 extern void *dev_seq_next(struct seq_file *seq, void *v, loff_t *pos); 2538 2538 extern void dev_seq_stop(struct seq_file *seq, void *v); 2539 + extern int dev_seq_open_ops(struct inode *inode, struct file *file, 2540 + const struct seq_operations *ops); 2539 2541 #endif 2540 2542 2541 2543 extern int netdev_class_create_file(struct class_attribute *class_attr);
+3 -3
include/linux/pci-ats.h
··· 12 12 unsigned int is_enabled:1; /* Enable bit is set */ 13 13 }; 14 14 15 - #ifdef CONFIG_PCI_IOV 15 + #ifdef CONFIG_PCI_ATS 16 16 17 17 extern int pci_enable_ats(struct pci_dev *dev, int ps); 18 18 extern void pci_disable_ats(struct pci_dev *dev); ··· 29 29 return dev->ats && dev->ats->is_enabled; 30 30 } 31 31 32 - #else /* CONFIG_PCI_IOV */ 32 + #else /* CONFIG_PCI_ATS */ 33 33 34 34 static inline int pci_enable_ats(struct pci_dev *dev, int ps) 35 35 { ··· 50 50 return 0; 51 51 } 52 52 53 - #endif /* CONFIG_PCI_IOV */ 53 + #endif /* CONFIG_PCI_ATS */ 54 54 55 55 #ifdef CONFIG_PCI_PRI 56 56
+1 -1
include/linux/pci.h
··· 338 338 struct list_head msi_list; 339 339 #endif 340 340 struct pci_vpd *vpd; 341 - #ifdef CONFIG_PCI_IOV 341 + #ifdef CONFIG_PCI_ATS 342 342 union { 343 343 struct pci_sriov *sriov; /* SR-IOV capability related */ 344 344 struct pci_dev *physfn; /* the PF this VF is associated with */
+4
include/linux/pci_ids.h
··· 517 517 #define PCI_DEVICE_ID_AMD_11H_NB_DRAM 0x1302 518 518 #define PCI_DEVICE_ID_AMD_11H_NB_MISC 0x1303 519 519 #define PCI_DEVICE_ID_AMD_11H_NB_LINK 0x1304 520 + #define PCI_DEVICE_ID_AMD_15H_NB_F0 0x1600 521 + #define PCI_DEVICE_ID_AMD_15H_NB_F1 0x1601 522 + #define PCI_DEVICE_ID_AMD_15H_NB_F2 0x1602 520 523 #define PCI_DEVICE_ID_AMD_15H_NB_F3 0x1603 521 524 #define PCI_DEVICE_ID_AMD_15H_NB_F4 0x1604 525 + #define PCI_DEVICE_ID_AMD_15H_NB_F5 0x1605 522 526 #define PCI_DEVICE_ID_AMD_CNB17H_F3 0x1703 523 527 #define PCI_DEVICE_ID_AMD_LANCE 0x2000 524 528 #define PCI_DEVICE_ID_AMD_LANCE_HOME 0x2001
+1
include/linux/perf_event.h
··· 822 822 int mmap_locked; 823 823 struct user_struct *mmap_user; 824 824 struct ring_buffer *rb; 825 + struct list_head rb_entry; 825 826 826 827 /* poll related */ 827 828 wait_queue_head_t waitq;
+3 -3
include/linux/pkt_sched.h
··· 30 30 */ 31 31 32 32 struct tc_stats { 33 - __u64 bytes; /* NUmber of enqueues bytes */ 33 + __u64 bytes; /* Number of enqueued bytes */ 34 34 __u32 packets; /* Number of enqueued packets */ 35 35 __u32 drops; /* Packets dropped because of lack of resources */ 36 36 __u32 overlimits; /* Number of throttle events when this ··· 297 297 __u32 debug; /* debug flags */ 298 298 299 299 /* stats */ 300 - __u32 direct_pkts; /* count of non shapped packets */ 300 + __u32 direct_pkts; /* count of non shaped packets */ 301 301 }; 302 302 enum { 303 303 TCA_HTB_UNSPEC, ··· 503 503 }; 504 504 #define NETEM_LOSS_MAX (__NETEM_LOSS_MAX - 1) 505 505 506 - /* State transition probablities for 4 state model */ 506 + /* State transition probabilities for 4 state model */ 507 507 struct tc_netem_gimodel { 508 508 __u32 p13; 509 509 __u32 p31;
+127 -88
include/linux/pm.h
··· 54 54 /** 55 55 * struct dev_pm_ops - device PM callbacks 56 56 * 57 - * Several driver power state transitions are externally visible, affecting 57 + * Several device power state transitions are externally visible, affecting 58 58 * the state of pending I/O queues and (for drivers that touch hardware) 59 59 * interrupts, wakeups, DMA, and other hardware state. There may also be 60 - * internal transitions to various low power modes, which are transparent 60 + * internal transitions to various low-power modes which are transparent 61 61 * to the rest of the driver stack (such as a driver that's ON gating off 62 62 * clocks which are not in active use). 63 63 * 64 - * The externally visible transitions are handled with the help of the following 65 - * callbacks included in this structure: 64 + * The externally visible transitions are handled with the help of callbacks 65 + * included in this structure in such a way that two levels of callbacks are 66 + * involved. First, the PM core executes callbacks provided by PM domains, 67 + * device types, classes and bus types. They are the subsystem-level callbacks 68 + * supposed to execute callbacks provided by device drivers, although they may 69 + * choose not to do that. If the driver callbacks are executed, they have to 70 + * collaborate with the subsystem-level callbacks to achieve the goals 71 + * appropriate for the given system transition, given transition phase and the 72 + * subsystem the device belongs to. 66 73 * 67 - * @prepare: Prepare the device for the upcoming transition, but do NOT change 68 - * its hardware state. Prevent new children of the device from being 69 - * registered after @prepare() returns (the driver's subsystem and 70 - * generally the rest of the kernel is supposed to prevent new calls to the 71 - * probe method from being made too once @prepare() has succeeded). If 72 - * @prepare() detects a situation it cannot handle (e.g. 
registration of a 73 - * child already in progress), it may return -EAGAIN, so that the PM core 74 - * can execute it once again (e.g. after the new child has been registered) 75 - * to recover from the race condition. This method is executed for all 76 - * kinds of suspend transitions and is followed by one of the suspend 77 - * callbacks: @suspend(), @freeze(), or @poweroff(). 78 - * The PM core executes @prepare() for all devices before starting to 79 - * execute suspend callbacks for any of them, so drivers may assume all of 80 - * the other devices to be present and functional while @prepare() is being 81 - * executed. In particular, it is safe to make GFP_KERNEL memory 82 - * allocations from within @prepare(). However, drivers may NOT assume 83 - * anything about the availability of the user space at that time and it 84 - * is not correct to request firmware from within @prepare() (it's too 85 - * late to do that). [To work around this limitation, drivers may 86 - * register suspend and hibernation notifiers that are executed before the 87 - * freezing of tasks.] 74 + * @prepare: The principal role of this callback is to prevent new children of 75 + * the device from being registered after it has returned (the driver's 76 + * subsystem and generally the rest of the kernel is supposed to prevent 77 + * new calls to the probe method from being made too once @prepare() has 78 + * succeeded). If @prepare() detects a situation it cannot handle (e.g. 79 + * registration of a child already in progress), it may return -EAGAIN, so 80 + * that the PM core can execute it once again (e.g. after a new child has 81 + * been registered) to recover from the race condition. 82 + * This method is executed for all kinds of suspend transitions and is 83 + * followed by one of the suspend callbacks: @suspend(), @freeze(), or 84 + * @poweroff(). 
The PM core executes subsystem-level @prepare() for all 85 + * devices before starting to invoke suspend callbacks for any of them, so 86 + * generally devices may be assumed to be functional or to respond to 87 + * runtime resume requests while @prepare() is being executed. However, 88 + * device drivers may NOT assume anything about the availability of user 89 + * space at that time and it is NOT valid to request firmware from within 90 + * @prepare() (it's too late to do that). It also is NOT valid to allocate 91 + * substantial amounts of memory from @prepare() in the GFP_KERNEL mode. 92 + * [To work around these limitations, drivers may register suspend and 93 + * hibernation notifiers to be executed before the freezing of tasks.] 88 94 * 89 95 * @complete: Undo the changes made by @prepare(). This method is executed for 90 96 * all kinds of resume transitions, following one of the resume callbacks: 91 97 * @resume(), @thaw(), @restore(). Also called if the state transition 92 - * fails before the driver's suspend callback (@suspend(), @freeze(), 93 - * @poweroff()) can be executed (e.g. if the suspend callback fails for one 98 + * fails before the driver's suspend callback: @suspend(), @freeze() or 99 + * @poweroff(), can be executed (e.g. if the suspend callback fails for one 94 100 * of the other devices that the PM core has unsuccessfully attempted to 95 101 * suspend earlier). 96 - * The PM core executes @complete() after it has executed the appropriate 97 - * resume callback for all devices. 102 + * The PM core executes subsystem-level @complete() after it has executed 103 + * the appropriate resume callbacks for all devices. 98 104 * 99 105 * @suspend: Executed before putting the system into a sleep state in which the 100 - * contents of main memory are preserved. Quiesce the device, put it into 101 - * a low power state appropriate for the upcoming system state (such as 102 - * PCI_D3hot), and enable wakeup events as appropriate. 
106 + * contents of main memory are preserved. The exact action to perform 107 + * depends on the device's subsystem (PM domain, device type, class or bus 108 + * type), but generally the device must be quiescent after subsystem-level 109 + * @suspend() has returned, so that it doesn't do any I/O or DMA. 110 + * Subsystem-level @suspend() is executed for all devices after invoking 111 + * subsystem-level @prepare() for all of them. 103 112 * 104 113 * @resume: Executed after waking the system up from a sleep state in which the 105 - * contents of main memory were preserved. Put the device into the 106 - * appropriate state, according to the information saved in memory by the 107 - * preceding @suspend(). The driver starts working again, responding to 108 - * hardware events and software requests. The hardware may have gone 109 - * through a power-off reset, or it may have maintained state from the 110 - * previous suspend() which the driver may rely on while resuming. On most 111 - * platforms, there are no restrictions on availability of resources like 112 - * clocks during @resume(). 114 + * contents of main memory were preserved. The exact action to perform 115 + * depends on the device's subsystem, but generally the driver is expected 116 + * to start working again, responding to hardware events and software 117 + * requests (the device itself may be left in a low-power state, waiting 118 + * for a runtime resume to occur). The state of the device at the time its 119 + * driver's @resume() callback is run depends on the platform and subsystem 120 + * the device belongs to. On most platforms, there are no restrictions on 121 + * availability of resources like clocks during @resume(). 122 + * Subsystem-level @resume() is executed for all devices after invoking 123 + * subsystem-level @resume_noirq() for all of them. 113 124 * 114 125 * @freeze: Hibernation-specific, executed before creating a hibernation image. 
115 - * Quiesce operations so that a consistent image can be created, but do NOT 116 - * otherwise put the device into a low power device state and do NOT emit 117 - * system wakeup events. Save in main memory the device settings to be 118 - * used by @restore() during the subsequent resume from hibernation or by 119 - * the subsequent @thaw(), if the creation of the image or the restoration 120 - * of main memory contents from it fails. 126 + * Analogous to @suspend(), but it should not enable the device to signal 127 + * wakeup events or change its power state. The majority of subsystems 128 + * (with the notable exception of the PCI bus type) expect the driver-level 129 + * @freeze() to save the device settings in memory to be used by @restore() 130 + * during the subsequent resume from hibernation. 131 + * Subsystem-level @freeze() is executed for all devices after invoking 132 + * subsystem-level @prepare() for all of them. 121 133 * 122 134 * @thaw: Hibernation-specific, executed after creating a hibernation image OR 123 - * if the creation of the image fails. Also executed after a failing 135 + * if the creation of an image has failed. Also executed after a failing 124 136 * attempt to restore the contents of main memory from such an image. 125 137 * Undo the changes made by the preceding @freeze(), so the device can be 126 138 * operated in the same way as immediately before the call to @freeze(). 139 + * Subsystem-level @thaw() is executed for all devices after invoking 140 + * subsystem-level @thaw_noirq() for all of them. It also may be executed 141 + * directly after @freeze() in case of a transition error. 127 142 * 128 143 * @poweroff: Hibernation-specific, executed after saving a hibernation image. 129 - * Quiesce the device, put it into a low power state appropriate for the 130 - * upcoming system state (such as PCI_D3hot), and enable wakeup events as 131 - * appropriate. 
144 + * Analogous to @suspend(), but it need not save the device's settings in 145 + * memory. 146 + * Subsystem-level @poweroff() is executed for all devices after invoking 147 + * subsystem-level @prepare() for all of them. 132 148 * 133 149 * @restore: Hibernation-specific, executed after restoring the contents of main 134 - * memory from a hibernation image. Driver starts working again, 135 - * responding to hardware events and software requests. Drivers may NOT 136 - * make ANY assumptions about the hardware state right prior to @restore(). 137 - * On most platforms, there are no restrictions on availability of 138 - * resources like clocks during @restore(). 150 + * memory from a hibernation image, analogous to @resume(). 139 151 * 140 - * @suspend_noirq: Complete the operations of ->suspend() by carrying out any 141 - * actions required for suspending the device that need interrupts to be 142 - * disabled 152 + * @suspend_noirq: Complete the actions started by @suspend(). Carry out any 153 + * additional operations required for suspending the device that might be 154 + * racing with its driver's interrupt handler, which is guaranteed not to 155 + * run while @suspend_noirq() is being executed. 156 + * It generally is expected that the device will be in a low-power state 157 + * (appropriate for the target system sleep state) after subsystem-level 158 + * @suspend_noirq() has returned successfully. If the device can generate 159 + * system wakeup signals and is enabled to wake up the system, it should be 160 + * configured to do so at that time. However, depending on the platform 161 + * and device's subsystem, @suspend() may be allowed to put the device into 162 + * the low-power state and configure it to generate wakeup signals, in 163 + * which case it generally is not necessary to define @suspend_noirq(). 
143 164 * 144 - * @resume_noirq: Prepare for the execution of ->resume() by carrying out any 145 - * actions required for resuming the device that need interrupts to be 146 - * disabled 165 + * @resume_noirq: Prepare for the execution of @resume() by carrying out any 166 + * operations required for resuming the device that might be racing with 167 + * its driver's interrupt handler, which is guaranteed not to run while 168 + * @resume_noirq() is being executed. 147 169 * 148 - * @freeze_noirq: Complete the operations of ->freeze() by carrying out any 149 - * actions required for freezing the device that need interrupts to be 150 - * disabled 170 + * @freeze_noirq: Complete the actions started by @freeze(). Carry out any 171 + * additional operations required for freezing the device that might be 172 + * racing with its driver's interrupt handler, which is guaranteed not to 173 + * run while @freeze_noirq() is being executed. 174 + * The power state of the device should not be changed by either @freeze() 175 + * or @freeze_noirq() and it should not be configured to signal system 176 + * wakeup by any of these callbacks. 151 177 * 152 - * @thaw_noirq: Prepare for the execution of ->thaw() by carrying out any 153 - * actions required for thawing the device that need interrupts to be 154 - * disabled 178 + * @thaw_noirq: Prepare for the execution of @thaw() by carrying out any 179 + * operations required for thawing the device that might be racing with its 180 + * driver's interrupt handler, which is guaranteed not to run while 181 + * @thaw_noirq() is being executed. 155 182 * 156 - * @poweroff_noirq: Complete the operations of ->poweroff() by carrying out any 157 - * actions required for handling the device that need interrupts to be 158 - * disabled 183 + * @poweroff_noirq: Complete the actions started by @poweroff(). Analogous to 184 + * @suspend_noirq(), but it need not save the device's settings in memory. 
159 185 * 160 - * @restore_noirq: Prepare for the execution of ->restore() by carrying out any 161 - * actions required for restoring the operations of the device that need 162 - * interrupts to be disabled 186 + * @restore_noirq: Prepare for the execution of @restore() by carrying out any 187 + * operations required for thawing the device that might be racing with its 188 + * driver's interrupt handler, which is guaranteed not to run while 189 + * @restore_noirq() is being executed. Analogous to @resume_noirq(). 163 190 * 164 191 * All of the above callbacks, except for @complete(), return error codes. 165 192 * However, the error codes returned by the resume operations, @resume(), 166 - * @thaw(), @restore(), @resume_noirq(), @thaw_noirq(), and @restore_noirq() do 193 + * @thaw(), @restore(), @resume_noirq(), @thaw_noirq(), and @restore_noirq(), do 167 194 * not cause the PM core to abort the resume transition during which they are 168 - * returned. The error codes returned in that cases are only printed by the PM 195 + * returned. The error codes returned in those cases are only printed by the PM 169 196 * core to the system logs for debugging purposes. Still, it is recommended 170 197 * that drivers only return error codes from their resume methods in case of an 171 198 * unrecoverable failure (i.e. when the device being handled refuses to resume ··· 201 174 * their children. 202 175 * 203 176 * It is allowed to unregister devices while the above callbacks are being 204 - * executed. However, it is not allowed to unregister a device from within any 205 - * of its own callbacks. 177 + * executed. However, a callback routine must NOT try to unregister the device 178 + * it was called for, although it may unregister children of that device (for 179 + * example, if it detects that a child was unplugged while the system was 180 + * asleep). 
206 181 * 207 - * There also are the following callbacks related to run-time power management 208 - * of devices: 182 + * Refer to Documentation/power/devices.txt for more information about the role 183 + * of the above callbacks in the system suspend process. 184 + * 185 + * There also are callbacks related to runtime power management of devices. 186 + * Again, these callbacks are executed by the PM core only for subsystems 187 + * (PM domains, device types, classes and bus types) and the subsystem-level 188 + * callbacks are supposed to invoke the driver callbacks. Moreover, the exact 189 + * actions to be performed by a device driver's callbacks generally depend on 190 + * the platform and subsystem the device belongs to. 209 191 * 210 192 * @runtime_suspend: Prepare the device for a condition in which it won't be 211 193 * able to communicate with the CPU(s) and RAM due to power management. 212 - * This need not mean that the device should be put into a low power state. 194 + * This need not mean that the device should be put into a low-power state. 213 195 * For example, if the device is behind a link which is about to be turned 214 196 * off, the device may remain at full power. If the device does go to low 215 - * power and is capable of generating run-time wake-up events, remote 216 - * wake-up (i.e., a hardware mechanism allowing the device to request a 217 - * change of its power state via a wake-up event, such as PCI PME) should 218 - * be enabled for it. 197 + * power and is capable of generating runtime wakeup events, remote wakeup 198 + * (i.e., a hardware mechanism allowing the device to request a change of 199 + * its power state via an interrupt) should be enabled for it. 219 200 * 220 201 * @runtime_resume: Put the device into the fully active state in response to a 221 - * wake-up event generated by hardware or at the request of software. 
If 222 - * necessary, put the device into the full power state and restore its 202 + * wakeup event generated by hardware or at the request of software. If 203 + * necessary, put the device into the full-power state and restore its 223 204 * registers, so that it is fully operational. 224 205 * 225 - * @runtime_idle: Device appears to be inactive and it might be put into a low 226 - * power state if all of the necessary conditions are satisfied. Check 206 + * @runtime_idle: Device appears to be inactive and it might be put into a 207 + * low-power state if all of the necessary conditions are satisfied. Check 227 208 * these conditions and handle the device as appropriate, possibly queueing 228 209 * a suspend request for it. The return value is ignored by the PM core. 210 + * 211 + * Refer to Documentation/power/runtime_pm.txt for more information about the 212 + * role of the above callbacks in device runtime power management. 213 + * 229 214 */ 230 215 231 216 struct dev_pm_ops {
+3 -1
include/linux/pstore.h
··· 35 35 spinlock_t buf_lock; /* serialize access to 'buf' */ 36 36 char *buf; 37 37 size_t bufsize; 38 + struct mutex read_mutex; /* serialize open/read/close */ 38 39 int (*open)(struct pstore_info *psi); 39 40 int (*close)(struct pstore_info *psi); 40 41 ssize_t (*read)(u64 *id, enum pstore_type_id *type, 41 - struct timespec *time, struct pstore_info *psi); 42 + struct timespec *time, char **buf, 43 + struct pstore_info *psi); 42 44 int (*write)(enum pstore_type_id type, u64 *id, 43 45 unsigned int part, size_t size, struct pstore_info *psi); 44 46 int (*erase)(enum pstore_type_id type, u64 id,
+1 -1
include/linux/shrinker.h
··· 35 35 36 36 /* These are for internal use */ 37 37 struct list_head list; 38 - long nr; /* objs pending delete */ 38 + atomic_long_t nr_in_batch; /* objs pending delete */ 39 39 }; 40 40 #define DEFAULT_SEEKS 2 /* A good number if you don't know better. */ 41 41 extern void register_shrinker(struct shrinker *);
+4 -9
include/linux/sigma.h
··· 24 24 struct sigma_firmware_header { 25 25 unsigned char magic[7]; 26 26 u8 version; 27 - u32 crc; 27 + __le32 crc; 28 28 }; 29 29 30 30 enum { ··· 40 40 struct sigma_action { 41 41 u8 instr; 42 42 u8 len_hi; 43 - u16 len; 44 - u16 addr; 43 + __le16 len; 44 + __be16 addr; 45 45 unsigned char payload[]; 46 46 }; 47 47 48 48 static inline u32 sigma_action_len(struct sigma_action *sa) 49 49 { 50 - return (sa->len_hi << 16) | sa->len; 51 - } 52 - 53 - static inline size_t sigma_action_size(struct sigma_action *sa, u32 payload_len) 54 - { 55 - return sizeof(*sa) + payload_len + (payload_len % 2); 50 + return (sa->len_hi << 16) | le16_to_cpu(sa->len); 56 51 } 57 52 58 53 extern int process_sigma_firmware(struct i2c_client *client, const char *name);
+2
include/linux/virtio_config.h
··· 85 85 * @reset: reset the device 86 86 * vdev: the virtio device 87 87 * After this, status and feature negotiation must be done again 88 + * Device must not be reset from its vq/config callbacks, or in 89 + * parallel with being added/removed. 88 90 * @find_vqs: find virtqueues and instantiate them. 89 91 * vdev: the virtio_device 90 92 * nvqs: the number of virtqueues to find
+1 -1
include/linux/virtio_mmio.h
··· 63 63 #define VIRTIO_MMIO_GUEST_FEATURES 0x020 64 64 65 65 /* Activated features set selector - Write Only */ 66 - #define VIRTIO_MMIO_GUEST_FEATURES_SET 0x024 66 + #define VIRTIO_MMIO_GUEST_FEATURES_SEL 0x024 67 67 68 68 /* Guest's memory page size in bytes - Write Only */ 69 69 #define VIRTIO_MMIO_GUEST_PAGE_SIZE 0x028
+1 -6
include/net/dst.h
··· 205 205 206 206 static inline u32 dst_mtu(const struct dst_entry *dst) 207 207 { 208 - u32 mtu = dst_metric_raw(dst, RTAX_MTU); 209 - 210 - if (!mtu) 211 - mtu = dst->ops->default_mtu(dst); 212 - 213 - return mtu; 208 + return dst->ops->mtu(dst); 214 209 } 215 210 216 211 /* RTT metrics are stored in milliseconds for user ABI, but used as jiffies */
+1 -1
include/net/dst_ops.h
··· 17 17 int (*gc)(struct dst_ops *ops); 18 18 struct dst_entry * (*check)(struct dst_entry *, __u32 cookie); 19 19 unsigned int (*default_advmss)(const struct dst_entry *); 20 - unsigned int (*default_mtu)(const struct dst_entry *); 20 + unsigned int (*mtu)(const struct dst_entry *); 21 21 u32 * (*cow_metrics)(struct dst_entry *, unsigned long); 22 22 void (*destroy)(struct dst_entry *); 23 23 void (*ifdown)(struct dst_entry *,
+2
include/net/inet_sock.h
··· 31 31 /** struct ip_options - IP Options 32 32 * 33 33 * @faddr - Saved first hop address 34 + * @nexthop - Saved nexthop address in LSRR and SSRR 34 35 * @is_data - Options in __data, rather than skb 35 36 * @is_strictroute - Strict source route 36 37 * @srr_is_hit - Packet destination addr was our one ··· 42 41 */ 43 42 struct ip_options { 44 43 __be32 faddr; 44 + __be32 nexthop; 45 45 unsigned char optlen; 46 46 unsigned char srr; 47 47 unsigned char rr;
+1
include/net/inetpeer.h
··· 35 35 36 36 u32 metrics[RTAX_MAX]; 37 37 u32 rate_tokens; /* rate limiting for ICMP */ 38 + int redirect_genid; 38 39 unsigned long rate_last; 39 40 unsigned long pmtu_expires; 40 41 u32 pmtu_orig;
+10 -9
include/net/netfilter/nf_conntrack_ecache.h
··· 67 67 int (*fcn)(unsigned int events, struct nf_ct_event *item); 68 68 }; 69 69 70 - extern struct nf_ct_event_notifier __rcu *nf_conntrack_event_cb; 71 - extern int nf_conntrack_register_notifier(struct nf_ct_event_notifier *nb); 72 - extern void nf_conntrack_unregister_notifier(struct nf_ct_event_notifier *nb); 70 + extern int nf_conntrack_register_notifier(struct net *net, struct nf_ct_event_notifier *nb); 71 + extern void nf_conntrack_unregister_notifier(struct net *net, struct nf_ct_event_notifier *nb); 73 72 74 73 extern void nf_ct_deliver_cached_events(struct nf_conn *ct); 75 74 76 75 static inline void 77 76 nf_conntrack_event_cache(enum ip_conntrack_events event, struct nf_conn *ct) 78 77 { 78 + struct net *net = nf_ct_net(ct); 79 79 struct nf_conntrack_ecache *e; 80 80 81 - if (nf_conntrack_event_cb == NULL) 81 + if (net->ct.nf_conntrack_event_cb == NULL) 82 82 return; 83 83 84 84 e = nf_ct_ecache_find(ct); ··· 95 95 int report) 96 96 { 97 97 int ret = 0; 98 + struct net *net = nf_ct_net(ct); 98 99 struct nf_ct_event_notifier *notify; 99 100 struct nf_conntrack_ecache *e; 100 101 101 102 rcu_read_lock(); 102 - notify = rcu_dereference(nf_conntrack_event_cb); 103 + notify = rcu_dereference(net->ct.nf_conntrack_event_cb); 103 104 if (notify == NULL) 104 105 goto out_unlock; 105 106 ··· 165 164 int (*fcn)(unsigned int events, struct nf_exp_event *item); 166 165 }; 167 166 168 - extern struct nf_exp_event_notifier __rcu *nf_expect_event_cb; 169 - extern int nf_ct_expect_register_notifier(struct nf_exp_event_notifier *nb); 170 - extern void nf_ct_expect_unregister_notifier(struct nf_exp_event_notifier *nb); 167 + extern int nf_ct_expect_register_notifier(struct net *net, struct nf_exp_event_notifier *nb); 168 + extern void nf_ct_expect_unregister_notifier(struct net *net, struct nf_exp_event_notifier *nb); 171 169 172 170 static inline void 173 171 nf_ct_expect_event_report(enum ip_conntrack_expect_events event, ··· 174 174 u32 pid, 175 175 int report)
176 176 { 177 + struct net *net = nf_ct_exp_net(exp); 177 178 struct nf_exp_event_notifier *notify; 178 179 struct nf_conntrack_ecache *e; 179 180 180 181 rcu_read_lock(); 181 - notify = rcu_dereference(nf_expect_event_cb); 182 + notify = rcu_dereference(net->ct.nf_expect_event_cb); 182 183 if (notify == NULL) 183 184 goto out_unlock; 184 185
+2
include/net/netns/conntrack.h
··· 18 18 struct hlist_nulls_head unconfirmed; 19 19 struct hlist_nulls_head dying; 20 20 struct ip_conntrack_stat __percpu *stat; 21 + struct nf_ct_event_notifier __rcu *nf_conntrack_event_cb; 22 + struct nf_exp_event_notifier __rcu *nf_expect_event_cb; 21 23 int sysctl_events; 22 24 unsigned int sysctl_events_retry_timeout; 23 25 int sysctl_acct;
+6 -9
include/net/red.h
··· 116 116 u32 qR; /* Cached random number */ 117 117 118 118 unsigned long qavg; /* Average queue length: A scaled */ 119 - psched_time_t qidlestart; /* Start of current idle period */ 119 + ktime_t qidlestart; /* Start of current idle period */ 120 120 }; 121 121 122 122 static inline u32 red_rmask(u8 Plog) ··· 148 148 149 149 static inline int red_is_idling(struct red_parms *p) 150 150 { 151 - return p->qidlestart != PSCHED_PASTPERFECT; 151 + return p->qidlestart.tv64 != 0; 152 152 } 153 153 154 154 static inline void red_start_of_idle_period(struct red_parms *p) 155 155 { 156 - p->qidlestart = psched_get_time(); 156 + p->qidlestart = ktime_get(); 157 157 } 158 158 159 159 static inline void red_end_of_idle_period(struct red_parms *p) 160 160 { 161 - p->qidlestart = PSCHED_PASTPERFECT; 161 + p->qidlestart.tv64 = 0; 162 162 } 163 163 164 164 static inline void red_restart(struct red_parms *p) ··· 170 170 171 171 static inline unsigned long red_calc_qavg_from_idle_time(struct red_parms *p) 172 172 { 173 - psched_time_t now; 174 - long us_idle; 173 + s64 delta = ktime_us_delta(ktime_get(), p->qidlestart); 174 + long us_idle = min_t(s64, delta, p->Scell_max); 175 175 int shift; 176 - 177 - now = psched_get_time(); 178 - us_idle = psched_tdiff_bounded(now, p->qidlestart, p->Scell_max); 179 176 180 177 /* 181 178 * The problem: ideally, average length queue recalcultion should
+2 -2
include/net/route.h
··· 71 71 struct fib_info *fi; /* for client ref to shared metrics */ 72 72 }; 73 73 74 - static inline bool rt_is_input_route(struct rtable *rt) 74 + static inline bool rt_is_input_route(const struct rtable *rt) 75 75 { 76 76 return rt->rt_route_iif != 0; 77 77 } 78 78 79 - static inline bool rt_is_output_route(struct rtable *rt) 79 + static inline bool rt_is_output_route(const struct rtable *rt) 80 80 { 81 81 return rt->rt_route_iif == 0; 82 82 }
+3
include/scsi/libfcoe.h
··· 147 147 u8 map_dest; 148 148 u8 spma; 149 149 u8 probe_tries; 150 + u8 priority; 150 151 u8 dest_addr[ETH_ALEN]; 151 152 u8 ctl_src_addr[ETH_ALEN]; 152 153 ··· 302 301 * @lport: The associated local port 303 302 * @fcoe_pending_queue: The pending Rx queue of skbs 304 303 * @fcoe_pending_queue_active: Indicates if the pending queue is active 304 + * @priority: Packet priority (DCB) 305 305 * @max_queue_depth: Max queue depth of pending queue 306 306 * @min_queue_depth: Min queue depth of pending queue 307 307 * @timer: The queue timer ··· 318 316 struct fc_lport *lport; 319 317 struct sk_buff_head fcoe_pending_queue; 320 318 u8 fcoe_pending_queue_active; 319 + u8 priority; 321 320 u32 max_queue_depth; 322 321 u32 min_queue_depth; 323 322 struct timer_list timer;
+6 -40
include/target/target_core_base.h
··· 103 103 SCF_SCSI_NON_DATA_CDB = 0x00000040, 104 104 SCF_SCSI_CDB_EXCEPTION = 0x00000080, 105 105 SCF_SCSI_RESERVATION_CONFLICT = 0x00000100, 106 - SCF_SE_CMD_FAILED = 0x00000400, 106 + SCF_FUA = 0x00000200, 107 107 SCF_SE_LUN_CMD = 0x00000800, 108 108 SCF_SE_ALLOW_EOO = 0x00001000, 109 + SCF_BIDI = 0x00002000, 109 110 SCF_SENT_CHECK_CONDITION = 0x00004000, 110 111 SCF_OVERFLOW_BIT = 0x00008000, 111 112 SCF_UNDERFLOW_BIT = 0x00010000, ··· 155 154 TCM_CHECK_CONDITION_ABORT_CMD = 0x0d, 156 155 TCM_CHECK_CONDITION_UNIT_ATTENTION = 0x0e, 157 156 TCM_CHECK_CONDITION_NOT_READY = 0x0f, 157 + TCM_RESERVATION_CONFLICT = 0x10, 158 158 }; 159 159 160 160 struct se_obj { ··· 213 211 u16 lu_gp_id; 214 212 int lu_gp_valid_id; 215 213 u32 lu_gp_members; 216 - atomic_t lu_gp_shutdown; 217 214 atomic_t lu_gp_ref_cnt; 218 215 spinlock_t lu_gp_lock; 219 216 struct config_group lu_gp_group; ··· 423 422 int sam_task_attr; 424 423 /* Transport protocol dependent state, see transport_state_table */ 425 424 enum transport_state_table t_state; 426 - /* Transport specific error status */ 427 - int transport_error_status; 428 425 /* Used to signal cmd->se_tfo->check_release_cmd() usage per cmd */ 429 - int check_release:1; 430 - int cmd_wait_set:1; 426 + unsigned check_release:1; 427 + unsigned cmd_wait_set:1; 431 428 /* See se_cmd_flags_table */ 432 429 u32 se_cmd_flags; 433 430 u32 se_ordered_id; ··· 440 441 /* Used for sense data */ 441 442 void *sense_buffer; 442 443 struct list_head se_delayed_node; 443 - struct list_head se_ordered_node; 444 444 struct list_head se_lun_node; 445 445 struct list_head se_qf_node; 446 446 struct se_device *se_dev; 447 447 struct se_dev_entry *se_deve; 448 - struct se_device *se_obj_ptr; 449 - struct se_device *se_orig_obj_ptr; 450 448 struct se_lun *se_lun; 451 449 /* Only used for internal passthrough and legacy TCM fabric modules */ 452 450 struct se_session *se_sess; ··· 459 463 unsigned char __t_task_cdb[TCM_MAX_COMMAND_SIZE];
460 464 unsigned long long t_task_lba; 461 465 int t_tasks_failed; 462 - int t_tasks_fua; 463 - bool t_tasks_bidi; 464 466 u32 t_tasks_sg_chained_no; 465 467 atomic_t t_fe_count; 466 468 atomic_t t_se_count; ··· 482 488 struct scatterlist *t_tasks_sg_chained; 483 489 484 490 struct work_struct work; 485 - 486 - /* 487 - * Used for pre-registered fabric SGL passthrough WRITE and READ 488 - * with the special SCF_PASSTHROUGH_CONTIG_TO_SG case for TCM_Loop 489 - * and other HW target mode fabric modules. 490 - */ 491 - struct scatterlist *t_task_pt_sgl; 492 - u32 t_task_pt_sgl_num; 493 491 494 492 struct scatterlist *t_data_sg; 495 493 unsigned int t_data_nents; ··· 548 562 } ____cacheline_aligned; 549 563 550 564 struct se_session { 551 - int sess_tearing_down:1; 565 + unsigned sess_tearing_down:1; 552 566 u64 sess_bin_isid; 553 567 struct se_node_acl *se_node_acl; 554 568 struct se_portal_group *se_tpg; ··· 669 683 struct t10_reservation t10_pr; 670 684 spinlock_t se_dev_lock; 671 685 void *se_dev_su_ptr; 672 - struct list_head se_dev_node; 673 686 struct config_group se_dev_group; 674 687 /* For T10 Reservations */ 675 688 struct config_group se_dev_pr_group; ··· 677 692 } ____cacheline_aligned; 678 693 679 694 struct se_device { 680 - /* Set to 1 if thread is NOT sleeping on thread_sem */ 681 - u8 thread_active; 682 - u8 dev_status_timer_flags; 683 695 /* RELATIVE TARGET PORT IDENTIFER Counter */ 684 696 u16 dev_rpti_counter; 685 697 /* Used for SAM Task Attribute ordering */ ··· 701 719 u64 write_bytes; 702 720 spinlock_t stats_lock; 703 721 /* Active commands on this virtual SE device */ 704 - atomic_t active_cmds; 705 722 atomic_t simple_cmds; 706 723 atomic_t depth_left; 707 724 atomic_t dev_ordered_id; 708 - atomic_t dev_tur_active; 709 725 atomic_t execute_tasks; 710 - atomic_t dev_status_thr_count; 711 - atomic_t dev_hoq_count; 712 726 atomic_t dev_ordered_sync; 713 727 atomic_t dev_qf_count; 714 728 struct se_obj dev_obj; ··· 712 734 struct se_obj dev_export_obj;
713 735 struct se_queue_obj dev_queue_obj; 714 736 spinlock_t delayed_cmd_lock; 715 - spinlock_t ordered_cmd_lock; 716 737 spinlock_t execute_task_lock; 717 - spinlock_t state_task_lock; 718 - spinlock_t dev_alua_lock; 719 738 spinlock_t dev_reservation_lock; 720 - spinlock_t dev_state_lock; 721 739 spinlock_t dev_status_lock; 722 - spinlock_t dev_status_thr_lock; 723 740 spinlock_t se_port_lock; 724 741 spinlock_t se_tmr_lock; 725 742 spinlock_t qf_cmd_lock; ··· 726 753 struct t10_pr_registration *dev_pr_res_holder; 727 754 struct list_head dev_sep_list; 728 755 struct list_head dev_tmr_list; 729 - struct timer_list dev_status_timer; 730 756 /* Pointer to descriptor for processing thread */ 731 757 struct task_struct *process_thread; 732 - pid_t process_thread_pid; 733 - struct task_struct *dev_mgmt_thread; 734 758 struct work_struct qf_work_queue; 735 759 struct list_head delayed_cmd_list; 736 - struct list_head ordered_cmd_list; 737 760 struct list_head execute_task_list; 738 761 struct list_head state_task_list; 739 762 struct list_head qf_cmd_list; ··· 740 771 struct se_subsystem_api *transport; 741 772 /* Linked list for struct se_hba struct se_device list */ 742 773 struct list_head dev_list; 743 - /* Linked list for struct se_global->g_se_dev_list */ 744 - struct list_head g_se_dev_list; 745 774 } ____cacheline_aligned; 746 775 747 776 struct se_hba { ··· 801 834 u32 sep_index; 802 835 struct scsi_port_stats sep_stats; 803 836 /* Used for ALUA Target Port Groups membership */ 804 - atomic_t sep_tg_pt_gp_active; 805 837 atomic_t sep_tg_pt_secondary_offline; 806 838 /* Used for PR ALL_TG_PT=1 */ 807 839 atomic_t sep_tg_pt_ref_cnt;
-24
include/target/target_core_transport.h
··· 10 10 11 11 #define PYX_TRANSPORT_STATUS_INTERVAL 5 /* In seconds */ 12 12 13 - #define PYX_TRANSPORT_SENT_TO_TRANSPORT 0 14 - #define PYX_TRANSPORT_WRITE_PENDING 1 15 - 16 - #define PYX_TRANSPORT_UNKNOWN_SAM_OPCODE -1 17 - #define PYX_TRANSPORT_HBA_QUEUE_FULL -2 18 - #define PYX_TRANSPORT_REQ_TOO_MANY_SECTORS -3 19 - #define PYX_TRANSPORT_OUT_OF_MEMORY_RESOURCES -4 20 - #define PYX_TRANSPORT_INVALID_CDB_FIELD -5 21 - #define PYX_TRANSPORT_INVALID_PARAMETER_LIST -6 22 - #define PYX_TRANSPORT_LU_COMM_FAILURE -7 23 - #define PYX_TRANSPORT_UNKNOWN_MODE_PAGE -8 24 - #define PYX_TRANSPORT_WRITE_PROTECTED -9 25 - #define PYX_TRANSPORT_RESERVATION_CONFLICT -10 26 - #define PYX_TRANSPORT_ILLEGAL_REQUEST -11 27 - #define PYX_TRANSPORT_USE_SENSE_REASON -12 28 - 29 - #ifndef SAM_STAT_RESERVATION_CONFLICT 30 - #define SAM_STAT_RESERVATION_CONFLICT 0x18 31 - #endif 32 - 33 - #define TRANSPORT_PLUGIN_FREE 0 34 - #define TRANSPORT_PLUGIN_REGISTERED 1 35 - 36 13 #define TRANSPORT_PLUGIN_PHBA_PDEV 1 37 14 #define TRANSPORT_PLUGIN_VHBA_PDEV 2 38 15 #define TRANSPORT_PLUGIN_VHBA_VDEV 3 ··· 135 158 extern int transport_handle_cdb_direct(struct se_cmd *); 136 159 extern int transport_generic_handle_cdb_map(struct se_cmd *); 137 160 extern int transport_generic_handle_data(struct se_cmd *); 138 - extern void transport_new_cmd_failure(struct se_cmd *); 139 161 extern int transport_generic_handle_tmr(struct se_cmd *); 140 162 extern bool target_stop_task(struct se_task *task, unsigned long *flags); 141 163 extern int transport_generic_map_mem_to_cmd(struct se_cmd *cmd, struct scatterlist *, u32,
-7
include/video/omapdss.h
··· 307 307 void (*dsi_disable_pads)(int dsi_id, unsigned lane_mask); 308 308 }; 309 309 310 - #if defined(CONFIG_OMAP2_DSS_MODULE) || defined(CONFIG_OMAP2_DSS) 311 310 /* Init with the board info */ 312 311 extern int omap_display_init(struct omap_dss_board_info *board_data); 313 - #else 314 - static inline int omap_display_init(struct omap_dss_board_info *board_data) 315 - { 316 - return 0; 317 - } 318 - #endif 319 312 320 313 struct omap_display_platform_data { 321 314 struct omap_dss_board_info *board_data;
+3 -5
ipc/mqueue.c
··· 1269 1269 1270 1270 void mq_put_mnt(struct ipc_namespace *ns) 1271 1271 { 1272 - mntput(ns->mq_mnt); 1272 + kern_unmount(ns->mq_mnt); 1273 1273 } 1274 1274 1275 1275 static int __init init_mqueue_fs(void) ··· 1291 1291 1292 1292 spin_lock_init(&mq_lock); 1293 1293 1294 - init_ipc_ns.mq_mnt = kern_mount_data(&mqueue_fs_type, &init_ipc_ns); 1295 - if (IS_ERR(init_ipc_ns.mq_mnt)) { 1296 - error = PTR_ERR(init_ipc_ns.mq_mnt); 1294 + error = mq_init_ns(&init_ipc_ns); 1295 + if (error) 1297 1296 goto out_filesystem; 1298 - } 1299 1297 1300 1298 return 0; 1301 1299
-5
ipc/msgutil.c
··· 27 27 */ 28 28 struct ipc_namespace init_ipc_ns = { 29 29 .count = ATOMIC_INIT(1), 30 - #ifdef CONFIG_POSIX_MQUEUE 31 - .mq_queues_max = DFLT_QUEUESMAX, 32 - .mq_msg_max = DFLT_MSGMAX, 33 - .mq_msgsize_max = DFLT_MSGSIZEMAX, 34 - #endif 35 30 .user_ns = &init_user_ns, 36 31 }; 37 32
+9 -2
kernel/cgroup_freezer.c
··· 153 153 kfree(cgroup_freezer(cgroup)); 154 154 } 155 155 156 + /* task is frozen or will freeze immediately when next it gets woken */ 157 + static bool is_task_frozen_enough(struct task_struct *task) 158 + { 159 + return frozen(task) || 160 + (task_is_stopped_or_traced(task) && freezing(task)); 161 + } 162 + 156 163 /* 157 164 * The call to cgroup_lock() in the freezer.state write method prevents 158 165 * a write to that file racing against an attach, and hence the ··· 238 231 cgroup_iter_start(cgroup, &it); 239 232 while ((task = cgroup_iter_next(cgroup, &it))) { 240 233 ntotal++; 241 - if (frozen(task)) 234 + if (is_task_frozen_enough(task)) 242 235 nfrozen++; 243 236 } 244 237 ··· 291 284 while ((task = cgroup_iter_next(cgroup, &it))) { 292 285 if (!freeze_task(task, true)) 293 286 continue; 294 - if (frozen(task)) 287 + if (is_task_frozen_enough(task)) 295 288 continue; 296 289 if (!freezing(task) && !freezer_should_skip(task)) 297 290 num_cant_freeze_now++;
+91 -4
kernel/events/core.c
··· 185 185 static void update_context_time(struct perf_event_context *ctx); 186 186 static u64 perf_event_time(struct perf_event *event); 187 187 188 + static void ring_buffer_attach(struct perf_event *event, 189 + struct ring_buffer *rb); 190 + 188 191 void __weak perf_event_print_debug(void) { } 189 192 190 193 extern __weak const char *perf_pmu_name(void) ··· 2174 2171 */ 2175 2172 cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE); 2176 2173 2177 - perf_event_sched_in(cpuctx, ctx, task); 2174 + if (ctx->nr_events) 2175 + cpuctx->task_ctx = ctx; 2178 2176 2179 - cpuctx->task_ctx = ctx; 2177 + perf_event_sched_in(cpuctx, cpuctx->task_ctx, task); 2180 2178 2181 2179 perf_pmu_enable(ctx->pmu); 2182 2180 perf_ctx_unlock(cpuctx, ctx); ··· 3194 3190 struct ring_buffer *rb; 3195 3191 unsigned int events = POLL_HUP; 3196 3192 3193 + /* 3194 + * Race between perf_event_set_output() and perf_poll(): perf_poll() 3195 + * grabs the rb reference but perf_event_set_output() overrides it. 3196 + * Here is the timeline for two threads T1, T2: 3197 + * t0: T1, rb = rcu_dereference(event->rb) 3198 + * t1: T2, old_rb = event->rb 3199 + * t2: T2, event->rb = new rb 3200 + * t3: T2, ring_buffer_detach(old_rb) 3201 + * t4: T1, ring_buffer_attach(rb1) 3202 + * t5: T1, poll_wait(event->waitq) 3203 + * 3204 + * To avoid this problem, we grab mmap_mutex in perf_poll() 3205 + * thereby ensuring that the assignment of the new ring buffer 3206 + * and the detachment of the old buffer appear atomic to perf_poll() 3207 + */ 3208 + mutex_lock(&event->mmap_mutex); 3209 + 3197 3210 rcu_read_lock(); 3198 3211 rb = rcu_dereference(event->rb); 3199 - if (rb) 3212 + if (rb) { 3213 + ring_buffer_attach(event, rb); 3200 3214 events = atomic_xchg(&rb->poll, 0); 3215 + } 3201 3216 rcu_read_unlock(); 3217 + 3218 + mutex_unlock(&event->mmap_mutex); 3202 3219 3203 3220 poll_wait(file, &event->waitq, wait); 3204 3221 ··· 3521 3496 return ret; 3522 3497 } 3523 3498
3499 + static void ring_buffer_attach(struct perf_event *event, 3500 + struct ring_buffer *rb) 3501 + { 3502 + unsigned long flags; 3503 + 3504 + if (!list_empty(&event->rb_entry)) 3505 + return; 3506 + 3507 + spin_lock_irqsave(&rb->event_lock, flags); 3508 + if (!list_empty(&event->rb_entry)) 3509 + goto unlock; 3510 + 3511 + list_add(&event->rb_entry, &rb->event_list); 3512 + unlock: 3513 + spin_unlock_irqrestore(&rb->event_lock, flags); 3514 + } 3515 + 3516 + static void ring_buffer_detach(struct perf_event *event, 3517 + struct ring_buffer *rb) 3518 + { 3519 + unsigned long flags; 3520 + 3521 + if (list_empty(&event->rb_entry)) 3522 + return; 3523 + 3524 + spin_lock_irqsave(&rb->event_lock, flags); 3525 + list_del_init(&event->rb_entry); 3526 + wake_up_all(&event->waitq); 3527 + spin_unlock_irqrestore(&rb->event_lock, flags); 3528 + } 3529 + 3530 + static void ring_buffer_wakeup(struct perf_event *event) 3531 + { 3532 + struct ring_buffer *rb; 3533 + 3534 + rcu_read_lock(); 3535 + rb = rcu_dereference(event->rb); 3536 + if (!rb) 3537 + goto unlock; 3538 + 3539 + list_for_each_entry_rcu(event, &rb->event_list, rb_entry) 3540 + wake_up_all(&event->waitq); 3541 + 3542 + unlock: 3543 + rcu_read_unlock(); 3544 + } 3545 + 3524 3546 static void rb_free_rcu(struct rcu_head *rcu_head) 3525 3547 { 3526 3548 struct ring_buffer *rb; ··· 3593 3521 3594 3522 static void ring_buffer_put(struct ring_buffer *rb) 3595 3523 { 3524 + struct perf_event *event, *n; 3525 + unsigned long flags; 3526 + 3596 3527 if (!atomic_dec_and_test(&rb->refcount)) 3597 3528 return; 3529 + 3530 + spin_lock_irqsave(&rb->event_lock, flags); 3531 + list_for_each_entry_safe(event, n, &rb->event_list, rb_entry) { 3532 + list_del_init(&event->rb_entry); 3533 + wake_up_all(&event->waitq); 3534 + } 3535 + spin_unlock_irqrestore(&rb->event_lock, flags); 3598 3536 3599 3537 call_rcu(&rb->rcu_head, rb_free_rcu); 3600 3538 } ··· 3628 3546 atomic_long_sub((size >> PAGE_SHIFT) + 1, &user->locked_vm); 3629 3547 vma->vm_mm->pinned_vm -= event->mmap_locked;
3630 3548 rcu_assign_pointer(event->rb, NULL); 3549 + ring_buffer_detach(event, rb); 3631 3550 mutex_unlock(&event->mmap_mutex); 3632 3551 3633 3552 ring_buffer_put(rb); ··· 3783 3700 3784 3701 void perf_event_wakeup(struct perf_event *event) 3785 3702 { 3786 - wake_up_all(&event->waitq); 3703 + ring_buffer_wakeup(event); 3787 3704 3788 3705 if (event->pending_kill) { 3789 3706 kill_fasync(&event->fasync, SIGIO, event->pending_kill); ··· 5905 5822 INIT_LIST_HEAD(&event->group_entry); 5906 5823 INIT_LIST_HEAD(&event->event_entry); 5907 5824 INIT_LIST_HEAD(&event->sibling_list); 5825 + INIT_LIST_HEAD(&event->rb_entry); 5826 + 5908 5827 init_waitqueue_head(&event->waitq); 5909 5828 init_irq_work(&event->pending, perf_pending_event); 5910 5829 ··· 6113 6028 6114 6029 old_rb = event->rb; 6115 6030 rcu_assign_pointer(event->rb, rb); 6031 + if (old_rb) 6032 + ring_buffer_detach(event, old_rb); 6116 6033 ret = 0; 6117 6034 unlock: 6118 6035 mutex_unlock(&event->mmap_mutex);
+3
kernel/events/internal.h
··· 22 22 local_t lost; /* nr records lost */ 23 23 24 24 long watermark; /* wakeup watermark */ 25 + /* poll crap */ 26 + spinlock_t event_lock; 27 + struct list_head event_list; 25 28 26 29 struct perf_event_mmap_page *user_page; 27 30 void *data_pages[0];
+3
kernel/events/ring_buffer.c
··· 209 209 rb->writable = 1; 210 210 211 211 atomic_set(&rb->refcount, 1); 212 + 213 + INIT_LIST_HEAD(&rb->event_list); 214 + spin_lock_init(&rb->event_lock); 212 215 } 213 216 214 217 #ifndef CONFIG_PERF_USE_VMALLOC
+4 -2
kernel/hrtimer.c
··· 885 885 struct hrtimer_clock_base *base, 886 886 unsigned long newstate, int reprogram) 887 887 { 888 + struct timerqueue_node *next_timer; 888 889 if (!(timer->state & HRTIMER_STATE_ENQUEUED)) 889 890 goto out; 890 891 891 - if (&timer->node == timerqueue_getnext(&base->active)) { 892 + next_timer = timerqueue_getnext(&base->active); 893 + timerqueue_del(&base->active, &timer->node); 894 + if (&timer->node == next_timer) { 892 895 #ifdef CONFIG_HIGH_RES_TIMERS 893 896 /* Reprogram the clock event device. if enabled */ 894 897 if (reprogram && hrtimer_hres_active()) { ··· 904 901 } 905 902 #endif 906 903 } 907 - timerqueue_del(&base->active, &timer->node); 908 904 if (!timerqueue_getnext(&base->active)) 909 905 base->cpu_base->active_bases &= ~(1 << base->index); 910 906 out:
+5 -2
kernel/irq/manage.c
··· 623 623 624 624 static int irq_wait_for_interrupt(struct irqaction *action) 625 625 { 626 + set_current_state(TASK_INTERRUPTIBLE); 627 + 626 628 while (!kthread_should_stop()) { 627 - set_current_state(TASK_INTERRUPTIBLE); 628 629 629 630 if (test_and_clear_bit(IRQTF_RUNTHREAD, 630 631 &action->thread_flags)) { ··· 633 632 return 0; 634 633 } 635 634 schedule(); 635 + set_current_state(TASK_INTERRUPTIBLE); 636 636 } 637 + __set_current_state(TASK_RUNNING); 637 638 return -1; 638 639 } 639 640 ··· 1599 1596 return -ENOMEM; 1600 1597 1601 1598 action->handler = handler; 1602 - action->flags = IRQF_PERCPU; 1599 + action->flags = IRQF_PERCPU | IRQF_NO_SUSPEND; 1603 1600 action->name = devname; 1604 1601 action->percpu_dev_id = dev_id; 1605 1602
+3 -1
kernel/irq/spurious.c
··· 84 84 */ 85 85 action = desc->action; 86 86 if (!action || !(action->flags & IRQF_SHARED) || 87 - (action->flags & __IRQF_TIMER) || !action->next) 87 + (action->flags & __IRQF_TIMER) || 88 + (action->handler(irq, action->dev_id) == IRQ_HANDLED) || 89 + !action->next) 88 90 goto out; 89 91 90 92 /* Already running on another processor */
+2 -1
kernel/jump_label.c
··· 66 66 return; 67 67 68 68 jump_label_lock(); 69 - if (atomic_add_return(1, &key->enabled) == 1) 69 + if (atomic_read(&key->enabled) == 0) 70 70 jump_label_update(key, JUMP_LABEL_ENABLE); 71 + atomic_inc(&key->enabled); 71 72 jump_label_unlock(); 72 73 } 73 74
+7 -1
kernel/lockdep.c
··· 44 44 #include <linux/stringify.h> 45 45 #include <linux/bitops.h> 46 46 #include <linux/gfp.h> 47 + #include <linux/kmemcheck.h> 47 48 48 49 #include <asm/sections.h> 49 50 ··· 2949 2948 void lockdep_init_map(struct lockdep_map *lock, const char *name, 2950 2949 struct lock_class_key *key, int subclass) 2951 2950 { 2952 - memset(lock, 0, sizeof(*lock)); 2951 + int i; 2952 + 2953 + kmemcheck_mark_initialized(lock, sizeof(*lock)); 2954 + 2955 + for (i = 0; i < NR_LOCKDEP_CACHING_CLASSES; i++) 2956 + lock->class_cache[i] = NULL; 2953 2957 2954 2958 #ifdef CONFIG_LOCK_STAT 2955 2959 lock->cpu = raw_smp_processor_id();
+2 -1
kernel/printk.c
··· 1293 1293 raw_spin_lock(&logbuf_lock); 1294 1294 if (con_start != log_end) 1295 1295 retry = 1; 1296 + raw_spin_unlock_irqrestore(&logbuf_lock, flags); 1297 + 1296 1298 if (retry && console_trylock()) 1297 1299 goto again; 1298 1300 1299 - raw_spin_unlock_irqrestore(&logbuf_lock, flags); 1300 1301 if (wake_klogd) 1301 1302 wake_up_klogd(); 1302 1303 }
+17
kernel/sched.c
··· 71 71 #include <linux/ctype.h> 72 72 #include <linux/ftrace.h> 73 73 #include <linux/slab.h> 74 + #include <linux/init_task.h> 74 75 75 76 #include <asm/tlb.h> 76 77 #include <asm/irq_regs.h> ··· 4811 4810 * This waits for either a completion of a specific task to be signaled or for a 4812 4811 * specified timeout to expire. The timeout is in jiffies. It is not 4813 4812 * interruptible. 4813 + * 4814 + * The return value is 0 if timed out, and positive (at least 1, or number of 4815 + * jiffies left till timeout) if completed. 4814 4816 */ 4815 4817 unsigned long __sched 4816 4818 wait_for_completion_timeout(struct completion *x, unsigned long timeout) ··· 4828 4824 * 4829 4825 * This waits for completion of a specific task to be signaled. It is 4830 4826 * interruptible. 4827 + * 4828 + * The return value is -ERESTARTSYS if interrupted, 0 if completed. 4831 4829 */ 4832 4830 int __sched wait_for_completion_interruptible(struct completion *x) 4833 4831 { ··· 4847 4841 * 4848 4842 * This waits for either a completion of a specific task to be signaled or for a 4849 4843 * specified timeout to expire. It is interruptible. The timeout is in jiffies. 4844 + * 4845 + * The return value is -ERESTARTSYS if interrupted, 0 if timed out, 4846 + * positive (at least 1, or number of jiffies left till timeout) if completed. 4850 4847 */ 4851 4848 long __sched 4852 4849 wait_for_completion_interruptible_timeout(struct completion *x, ··· 4865 4856 * 4866 4857 * This waits to be signaled for completion of a specific task. It can be 4867 4858 * interrupted by a kill signal. 4859 + * 4860 + * The return value is -ERESTARTSYS if interrupted, 0 if completed. 4868 4861 */ 4869 4862 int __sched wait_for_completion_killable(struct completion *x) 4870 4863 { ··· 4885 4874 * This waits for either a completion of a specific task to be 4886 4875 * signaled or for a specified timeout to expire. It can be 4887 4876 * interrupted by a kill signal. The timeout is in jiffies. 
4877 + * 4878 + * The return value is -ERESTARTSYS if interrupted, 0 if timed out, 4879 + * positive (at least 1, or number of jiffies left till timeout) if completed. 4888 4880 */ 4889 4881 long __sched 4890 4882 wait_for_completion_killable_timeout(struct completion *x, ··· 6113 6099 */ 6114 6100 idle->sched_class = &idle_sched_class; 6115 6101 ftrace_graph_init_idle_task(idle, cpu); 6102 + #if defined(CONFIG_SMP) 6103 + sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu); 6104 + #endif 6116 6105 } 6117 6106 6118 6107 /*
+127 -34
kernel/sched_fair.c
··· 772 772 list_del_leaf_cfs_rq(cfs_rq); 773 773 } 774 774 775 + static inline long calc_tg_weight(struct task_group *tg, struct cfs_rq *cfs_rq) 776 + { 777 + long tg_weight; 778 + 779 + /* 780 + * Use this CPU's actual weight instead of the last load_contribution 781 + * to gain a more accurate current total weight. See 782 + * update_cfs_rq_load_contribution(). 783 + */ 784 + tg_weight = atomic_read(&tg->load_weight); 785 + tg_weight -= cfs_rq->load_contribution; 786 + tg_weight += cfs_rq->load.weight; 787 + 788 + return tg_weight; 789 + } 790 + 775 791 static long calc_cfs_shares(struct cfs_rq *cfs_rq, struct task_group *tg) 776 792 { 777 - long load_weight, load, shares; 793 + long tg_weight, load, shares; 778 794 795 + tg_weight = calc_tg_weight(tg, cfs_rq); 779 796 load = cfs_rq->load.weight; 780 797 781 - load_weight = atomic_read(&tg->load_weight); 782 - load_weight += load; 783 - load_weight -= cfs_rq->load_contribution; 784 - 785 798 shares = (tg->shares * load); 786 - if (load_weight) 787 - shares /= load_weight; 799 + if (tg_weight) 800 + shares /= tg_weight; 788 801 789 802 if (shares < MIN_SHARES) 790 803 shares = MIN_SHARES; ··· 1756 1743 1757 1744 static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq) 1758 1745 { 1759 - if (!cfs_rq->runtime_enabled || !cfs_rq->nr_running) 1746 + if (!cfs_rq->runtime_enabled || cfs_rq->nr_running) 1760 1747 return; 1761 1748 1762 1749 __return_cfs_rq_runtime(cfs_rq); ··· 2049 2036 * Adding load to a group doesn't make a group heavier, but can cause movement 2050 2037 * of group shares between cpus. Assuming the shares were perfectly aligned one 2051 2038 * can calculate the shift in shares. 2039 + * 2040 + * Calculate the effective load difference if @wl is added (subtracted) to @tg 2041 + * on this @cpu and results in a total addition (subtraction) of @wg to the 2042 + * total group weight. 
2043 + * 2044 + * Given a runqueue weight distribution (rw_i) we can compute a shares 2045 + * distribution (s_i) using: 2046 + * 2047 + * s_i = rw_i / \Sum rw_j (1) 2048 + * 2049 + * Suppose we have 4 CPUs and our @tg is a direct child of the root group and 2050 + * has 7 equal weight tasks, distributed as below (rw_i), with the resulting 2051 + * shares distribution (s_i): 2052 + * 2053 + * rw_i = { 2, 4, 1, 0 } 2054 + * s_i = { 2/7, 4/7, 1/7, 0 } 2055 + * 2056 + * As per wake_affine() we're interested in the load of two CPUs (the CPU the 2057 + * task used to run on and the CPU the waker is running on), we need to 2058 + * compute the effect of waking a task on either CPU and, in case of a sync 2059 + * wakeup, compute the effect of the current task going to sleep. 2060 + * 2061 + * So for a change of @wl to the local @cpu with an overall group weight change 2062 + * of @wl we can compute the new shares distribution (s'_i) using: 2063 + * 2064 + * s'_i = (rw_i + @wl) / (@wg + \Sum rw_j) (2) 2065 + * 2066 + * Suppose we're interested in CPUs 0 and 1, and want to compute the load 2067 + * differences in waking a task to CPU 0. The additional task changes the 2068 + * weight and shares distributions like: 2069 + * 2070 + * rw'_i = { 3, 4, 1, 0 } 2071 + * s'_i = { 3/8, 4/8, 1/8, 0 } 2072 + * 2073 + * We can then compute the difference in effective weight by using: 2074 + * 2075 + * dw_i = S * (s'_i - s_i) (3) 2076 + * 2077 + * Where 'S' is the group weight as seen by its parent. 2078 + * 2079 + * Therefore the effective change in loads on CPU 0 would be 5/56 (3/8 - 2/7) 2080 + * times the weight of the group. The effect on CPU 1 would be -4/56 (4/8 - 2081 + * 4/7) times the weight of the group. 
2052 2082 */ 2053 2083 static long effective_load(struct task_group *tg, int cpu, long wl, long wg) 2054 2084 { 2055 2085 struct sched_entity *se = tg->se[cpu]; 2056 2086 2057 - if (!tg->parent) 2087 + if (!tg->parent) /* the trivial, non-cgroup case */ 2058 2088 return wl; 2059 2089 2060 2090 for_each_sched_entity(se) { 2061 - long lw, w; 2091 + long w, W; 2062 2092 2063 2093 tg = se->my_q->tg; 2064 - w = se->my_q->load.weight; 2065 2094 2066 - /* use this cpu's instantaneous contribution */ 2067 - lw = atomic_read(&tg->load_weight); 2068 - lw -= se->my_q->load_contribution; 2069 - lw += w + wg; 2095 + /* 2096 + * W = @wg + \Sum rw_j 2097 + */ 2098 + W = wg + calc_tg_weight(tg, se->my_q); 2070 2099 2071 - wl += w; 2100 + /* 2101 + * w = rw_i + @wl 2102 + */ 2103 + w = se->my_q->load.weight + wl; 2072 2104 2073 - if (lw > 0 && wl < lw) 2074 - wl = (wl * tg->shares) / lw; 2105 + /* 2106 + * wl = S * s'_i; see (2) 2107 + */ 2108 + if (W > 0 && w < W) 2109 + wl = (w * tg->shares) / W; 2075 2110 else 2076 2111 wl = tg->shares; 2077 2112 2078 - /* zero point is MIN_SHARES */ 2113 + /* 2114 + * Per the above, wl is the new se->load.weight value; since 2115 + * those are clipped to [MIN_SHARES, ...) do so now. See 2116 + * calc_cfs_shares(). 2117 + */ 2079 2118 if (wl < MIN_SHARES) 2080 2119 wl = MIN_SHARES; 2120 + 2121 + /* 2122 + * wl = dw_i = S * (s'_i - s_i); see (3) 2123 + */ 2081 2124 wl -= se->load.weight; 2125 + 2126 + /* 2127 + * Recursively apply this logic to all parent groups to compute 2128 + * the final effective load change on the root group. Since 2129 + * only the @tg group gets extra weight, all parent groups can 2130 + * only redistribute existing shares. @wl is the shift in shares 2131 + * resulting from this level per the above. 
2132 + */ 2082 2133 wg = 0; 2083 2134 } 2084 2135 ··· 2326 2249 int cpu = smp_processor_id(); 2327 2250 int prev_cpu = task_cpu(p); 2328 2251 struct sched_domain *sd; 2329 - int i; 2252 + struct sched_group *sg; 2253 + int i, smt = 0; 2330 2254 2331 2255 /* 2332 2256 * If the task is going to be woken-up on this cpu and if it is ··· 2347 2269 * Otherwise, iterate the domains and find an elegible idle cpu. 2348 2270 */ 2349 2271 rcu_read_lock(); 2272 + again: 2350 2273 for_each_domain(target, sd) { 2274 + if (!smt && (sd->flags & SD_SHARE_CPUPOWER)) 2275 + continue; 2276 + 2277 + if (smt && !(sd->flags & SD_SHARE_CPUPOWER)) 2278 + break; 2279 + 2351 2280 if (!(sd->flags & SD_SHARE_PKG_RESOURCES)) 2352 2281 break; 2353 2282 2354 - for_each_cpu_and(i, sched_domain_span(sd), tsk_cpus_allowed(p)) { 2355 - if (idle_cpu(i)) { 2356 - target = i; 2357 - break; 2358 - } 2359 - } 2283 + sg = sd->groups; 2284 + do { 2285 + if (!cpumask_intersects(sched_group_cpus(sg), 2286 + tsk_cpus_allowed(p))) 2287 + goto next; 2360 2288 2361 - /* 2362 - * Lets stop looking for an idle sibling when we reached 2363 - * the domain that spans the current cpu and prev_cpu. 2364 - */ 2365 - if (cpumask_test_cpu(cpu, sched_domain_span(sd)) && 2366 - cpumask_test_cpu(prev_cpu, sched_domain_span(sd))) 2367 - break; 2289 + for_each_cpu(i, sched_group_cpus(sg)) { 2290 + if (!idle_cpu(i)) 2291 + goto next; 2292 + } 2293 + 2294 + target = cpumask_first_and(sched_group_cpus(sg), 2295 + tsk_cpus_allowed(p)); 2296 + goto done; 2297 + next: 2298 + sg = sg->next; 2299 + } while (sg != sd->groups); 2368 2300 } 2301 + if (!smt) { 2302 + smt = 1; 2303 + goto again; 2304 + } 2305 + done: 2369 2306 rcu_read_unlock(); 2370 2307 2371 2308 return target; ··· 3604 3511 } 3605 3512 3606 3513 /** 3607 - * update_sd_lb_stats - Update sched_group's statistics for load balancing. 3514 + * update_sd_lb_stats - Update sched_domain's statistics for load balancing. 
3608 3515 * @sd: sched_domain whose statistics are to be updated. 3609 3516 * @this_cpu: Cpu for which load balance is currently performed. 3610 3517 * @idle: Idle status of this_cpu
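The shares math in the effective_load() comment above can be checked directly in userspace. The following is a hedged sketch of equations (1)-(3), not kernel code; share_delta() is an illustrative name, and S = 5600 is chosen so that all the 1/7 and 1/8 fractions are exact in integer arithmetic:

```c
#include <assert.h>

/*
 * share_delta() returns S * (s'_j - s_j) when a weight of 'wl' is
 * added to cpu 'i', using exact integer arithmetic; see equations
 * (1)-(3) in the effective_load() comment.
 */
static long share_delta(long S, const long *rw, int n, int i, int j, long wl)
{
    long sum = 0;

    for (int k = 0; k < n; k++)
        sum += rw[k];

    /* s'_j = (rw_j + wl-on-cpu-i) / (wl + \Sum rw_k), see (2) */
    long new_share = S * (rw[j] + (j == i ? wl : 0)) / (sum + wl);
    /* s_j = rw_j / \Sum rw_k, see (1) */
    long old_share = S * rw[j] / sum;

    return new_share - old_share;   /* dw_j = S * (s'_j - s_j), see (3) */
}
```

With the example distribution { 2, 4, 1, 0 } and one task of weight 1 waking to CPU 0, this reproduces the 5/56 and -4/56 results quoted in the comment.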
+1
kernel/sched_features.h
··· 67 67 SCHED_FEAT(TTWU_QUEUE, 1) 68 68 69 69 SCHED_FEAT(FORCE_SD_OVERLAP, 0) 70 + SCHED_FEAT(RT_RUNTIME_SHARE, 1)
+3
kernel/sched_rt.c
··· 560 560 { 561 561 int more = 0; 562 562 563 + if (!sched_feat(RT_RUNTIME_SHARE)) 564 + return more; 565 + 563 566 if (rt_rq->rt_time > rt_rq->rt_runtime) { 564 567 raw_spin_unlock(&rt_rq->rt_runtime_lock); 565 568 more = do_balance_runtime(rt_rq);
+1 -1
kernel/time/alarmtimer.c
··· 195 195 struct alarm *alarm; 196 196 ktime_t expired = next->expires; 197 197 198 - if (expired.tv64 >= now.tv64) 198 + if (expired.tv64 > now.tv64) 199 199 break; 200 200 201 201 alarm = container_of(next, struct alarm, node);
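The alarmtimer hunk changes `>=` to `>`: the run loop must stop only when the next expiry is strictly in the future, so an alarm due exactly at `now` still fires. A minimal userspace sketch of that boundary (fire_pending() is an illustrative stand-in for the timerqueue walk, not kernel code):

```c
#include <assert.h>
#include <stdint.h>

/* count alarms (sorted by expiry) that should fire at 'now' */
static int fire_pending(const int64_t *expires, int n, int64_t now)
{
    int fired = 0;

    for (int i = 0; i < n; i++) {
        if (expires[i] > now)   /* was '>=', which skipped alarms due exactly now */
            break;
        fired++;
    }
    return fired;
}
```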
+1
kernel/time/clockevents.c
··· 387 387 * released list and do a notify add later. 388 388 */ 389 389 if (old) { 390 + old->event_handler = clockevents_handle_noop; 390 391 clockevents_set_mode(old, CLOCK_EVT_MODE_UNUSED); 391 392 list_del(&old->list); 392 393 list_add(&old->list, &clockevents_released);
+50 -12
kernel/time/clocksource.c
··· 492 492 } 493 493 494 494 /** 495 + * clocksource_max_adjustment - Returns max adjustment amount 496 + * @cs: Pointer to clocksource 497 + * 498 + */ 499 + static u32 clocksource_max_adjustment(struct clocksource *cs) 500 + { 501 + u64 ret; 502 + /* 503 + * We won't try to correct for more than 11% adjustments (110,000 ppm). 504 + */ 505 + ret = (u64)cs->mult * 11; 506 + do_div(ret,100); 507 + return (u32)ret; 508 + } 509 + 510 + /** 495 511 * clocksource_max_deferment - Returns max time the clocksource can be deferred 496 512 * @cs: Pointer to clocksource 497 513 * ··· 519 503 /* 520 504 * Calculate the maximum number of cycles that we can pass to the 521 505 * cyc2ns function without overflowing a 64-bit signed result. The 522 - * maximum number of cycles is equal to ULLONG_MAX/cs->mult which 523 - * is equivalent to the below. 524 - * max_cycles < (2^63)/cs->mult 525 - * max_cycles < 2^(log2((2^63)/cs->mult)) 526 - * max_cycles < 2^(log2(2^63) - log2(cs->mult)) 527 - * max_cycles < 2^(63 - log2(cs->mult)) 528 - * max_cycles < 1 << (63 - log2(cs->mult)) 506 + * maximum number of cycles is equal to ULLONG_MAX/(cs->mult+cs->maxadj) 507 + * which is equivalent to the below. 508 + * max_cycles < (2^63)/(cs->mult + cs->maxadj) 509 + * max_cycles < 2^(log2((2^63)/(cs->mult + cs->maxadj))) 510 + * max_cycles < 2^(log2(2^63) - log2(cs->mult + cs->maxadj)) 511 + * max_cycles < 2^(63 - log2(cs->mult + cs->maxadj)) 512 + * max_cycles < 1 << (63 - log2(cs->mult + cs->maxadj)) 529 513 * Please note that we add 1 to the result of the log2 to account for 530 514 * any rounding errors, ensure the above inequality is satisfied and 531 515 * no overflow will occur. 532 516 */ 533 - max_cycles = 1ULL << (63 - (ilog2(cs->mult) + 1)); 517 + max_cycles = 1ULL << (63 - (ilog2(cs->mult + cs->maxadj) + 1)); 534 518 535 519 /* 536 520 * The actual maximum number of cycles we can defer the clocksource is 537 521 * determined by the minimum of max_cycles and cs->mask.
522 + * Note: Here we subtract the maxadj to make sure we don't sleep for 523 + * too long if there's a large negative adjustment. 538 524 */ 539 525 max_cycles = min_t(u64, max_cycles, (u64) cs->mask); 540 - max_nsecs = clocksource_cyc2ns(max_cycles, cs->mult, cs->shift); 526 + max_nsecs = clocksource_cyc2ns(max_cycles, cs->mult - cs->maxadj, 527 + cs->shift); 541 528 542 529 /* 543 530 * To ensure that the clocksource does not wrap whilst we are idle, ··· 548 529 * note a margin of 12.5% is used because this can be computed with 549 530 * a shift, versus say 10% which would require division. 550 531 */ 551 - return max_nsecs - (max_nsecs >> 5); 532 + return max_nsecs - (max_nsecs >> 3); 552 533 } 553 534 554 535 #ifndef CONFIG_ARCH_USES_GETTIMEOFFSET ··· 659 640 void __clocksource_updatefreq_scale(struct clocksource *cs, u32 scale, u32 freq) 660 641 { 661 642 u64 sec; 662 - 663 643 /* 664 644 * Calc the maximum number of seconds which we can run before 665 645 * wrapping around. For clocksources which have a mask > 32bit ··· 669 651 * ~ 0.06ppm granularity for NTP. We apply the same 12.5% 670 652 * margin as we do in clocksource_max_deferment() 671 653 */ 672 - sec = (cs->mask - (cs->mask >> 5)); 654 + sec = (cs->mask - (cs->mask >> 3)); 673 655 do_div(sec, freq); 674 656 do_div(sec, scale); 675 657 if (!sec) ··· 679 661 680 662 clocks_calc_mult_shift(&cs->mult, &cs->shift, freq, 681 663 NSEC_PER_SEC / scale, sec * scale); 664 + 665 + /* 666 + * For clocksources that have large mults, reduce mult/shift
667 + * to avoid overflow. Since mult may be adjusted by ntp, add an 668 + * extra safety margin. 669 + */ 670 + cs->maxadj = clocksource_max_adjustment(cs); 671 + while ((cs->mult + cs->maxadj < cs->mult) 672 + || (cs->mult - cs->maxadj > cs->mult)) { 673 + cs->mult >>= 1; 674 + cs->shift--; 675 + cs->maxadj = clocksource_max_adjustment(cs); 676 + } 677 + 682 678 cs->max_idle_ns = clocksource_max_deferment(cs); 683 679 } 684 680 EXPORT_SYMBOL_GPL(__clocksource_updatefreq_scale); ··· 733 701 */ 734 702 int clocksource_register(struct clocksource *cs) 735 703 { 704 + /* calculate max adjustment for given mult/shift */ 705 + cs->maxadj = clocksource_max_adjustment(cs); 706 + WARN_ONCE(cs->mult + cs->maxadj < cs->mult, 707 + "Clocksource %s might overflow on 11%% adjustment\n", 708 + cs->name); 709 + 736 710 /* calculate max idle time permitted for this clocksource */ 737 711 cs->max_idle_ns = clocksource_max_deferment(cs); 738 712
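The maxadj arithmetic and the mult/shift shrink loop from the clocksource hunk above are easy to replay in userspace. A hedged stand-alone sketch (not the kernel functions themselves):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t u32;
typedef uint64_t u64;

/* 11% (110,000 ppm) of mult, as in clocksource_max_adjustment() */
static u32 max_adjustment(u32 mult)
{
    return (u32)(((u64)mult * 11) / 100);
}

/*
 * Mirror of the guard loop in __clocksource_updatefreq_scale():
 * halve mult (and drop shift) until mult +/- maxadj can no longer
 * wrap around a u32.
 */
static void fit_mult(u32 *mult, u32 *shift)
{
    u32 maxadj = max_adjustment(*mult);

    while (*mult + maxadj < *mult || *mult - maxadj > *mult) {
        *mult >>= 1;
        (*shift)--;
        maxadj = max_adjustment(*mult);
    }
}
```

For a mult of 4,000,000,000 the 11% margin (440,000,000) overflows 32 bits, so one halving step is needed; smaller mults pass through untouched.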
+1 -1
kernel/time/tick-broadcast.c
··· 71 71 (dev->features & CLOCK_EVT_FEAT_C3STOP)) 72 72 return 0; 73 73 74 - clockevents_exchange_device(NULL, dev); 74 + clockevents_exchange_device(tick_broadcast_device.evtdev, dev); 75 75 tick_broadcast_device.evtdev = dev; 76 76 if (!cpumask_empty(tick_get_broadcast_mask())) 77 77 tick_broadcast_start_periodic(dev);
+91 -1
kernel/time/timekeeping.c
··· 249 249 secs = xtime.tv_sec + wall_to_monotonic.tv_sec; 250 250 nsecs = xtime.tv_nsec + wall_to_monotonic.tv_nsec; 251 251 nsecs += timekeeping_get_ns(); 252 + /* If arch requires, add in gettimeoffset() */ 253 + nsecs += arch_gettimeoffset(); 252 254 253 255 } while (read_seqretry(&xtime_lock, seq)); 254 256 /* ··· 282 280 *ts = xtime; 283 281 tomono = wall_to_monotonic; 284 282 nsecs = timekeeping_get_ns(); 283 + /* If arch requires, add in gettimeoffset() */ 284 + nsecs += arch_gettimeoffset(); 285 285 286 286 } while (read_seqretry(&xtime_lock, seq)); 287 287 ··· 806 802 s64 error, interval = timekeeper.cycle_interval; 807 803 int adj; 808 804 805 + /* 806 + * The point of this is to check if the error is greater than half 807 + * an interval. 808 + * 809 + * First we shift it down from NTP_SHIFT to clocksource->shifted nsecs. 810 + * 811 + * Note we subtract one in the shift, so that error is really error*2. 812 + * This "saves" dividing (shifting) interval twice, but keeps the 813 + * (error > interval) comparison as still measuring if error is 814 + * larger than half an interval. 815 + * 816 + * Note: It does not "save" on aggravation when reading the code. 817 + */ 809 818 error = timekeeper.ntp_error >> (timekeeper.ntp_error_shift - 1); 810 819 if (error > interval) { 820 + /* 821 + * We now divide error by 4 (via shift), which checks if 822 + * the error is greater than twice the interval. 823 + * If it is greater, we need a bigadjust; if it's smaller, 824 + * we can adjust by 1. 825 + */ 811 826 error >>= 2; 827 + /* 828 + * XXX - In update_wall_time, we round up to the next 829 + * nanosecond, and store the amount rounded up into 830 + * the error. This causes the likely below to be unlikely. 831 + * 832 + * The proper fix is to avoid rounding up by using 833 + * the high precision timekeeper.xtime_nsec instead of 834 + * xtime.tv_nsec everywhere. Fixing this will take some 835 + * time.
836 + */ 812 837 if (likely(error <= interval)) 813 838 adj = 1; 814 839 else 815 840 adj = timekeeping_bigadjust(error, &interval, &offset); 816 841 } else if (error < -interval) { 842 + /* See comment above, this is just switched for the negative */ 817 843 error >>= 2; 818 844 if (likely(error >= -interval)) { 819 845 adj = -1; ··· 851 817 offset = -offset; 852 818 } else 853 819 adj = timekeeping_bigadjust(error, &interval, &offset); 854 - } else 820 + } else /* No adjustment needed */ 855 821 return; 856 822 823 + WARN_ONCE(timekeeper.clock->maxadj && 824 + (timekeeper.mult + adj > timekeeper.clock->mult + 825 + timekeeper.clock->maxadj), 826 + "Adjusting %s more than 11%% (%ld vs %ld)\n", 827 + timekeeper.clock->name, (long)timekeeper.mult + adj, 828 + (long)timekeeper.clock->mult + 829 + timekeeper.clock->maxadj); 830 + /* 831 + * So the following can be confusing. 832 + * 833 + * To keep things simple, let's assume adj == 1 for now. 834 + * 835 + * When adj != 1, remember that the interval and offset values 836 + * have been appropriately scaled so the math is the same. 837 + * 838 + * The basic idea here is that we're increasing the multiplier 839 + * by one; this causes the xtime_interval to be incremented by 840 + * one cycle_interval. This is because: 841 + * xtime_interval = cycle_interval * mult 842 + * So if mult is being incremented by one: 843 + * xtime_interval = cycle_interval * (mult + 1) 844 + * It's the same as: 845 + * xtime_interval = (cycle_interval * mult) + cycle_interval 846 + * Which can be shortened to: 847 + * xtime_interval += cycle_interval 848 + * 849 + * So offset stores the non-accumulated cycles. Thus the current 850 + * time (in shifted nanoseconds) is: 851 + * now = (offset * adj) + xtime_nsec 852 + * Now, even though we're adjusting the clock frequency, we have 853 + * to keep time consistent. In other words, we can't jump back 854 + * in time, and we also want to avoid jumping forward in time.
855 + * 856 + * So given the same offset value, we need the time to be the same 857 + * both before and after the freq adjustment. 858 + * now = (offset * adj_1) + xtime_nsec_1 859 + * now = (offset * adj_2) + xtime_nsec_2 860 + * So: 861 + * (offset * adj_1) + xtime_nsec_1 = 862 + * (offset * adj_2) + xtime_nsec_2 863 + * And we know: 864 + * adj_2 = adj_1 + 1 865 + * So: 866 + * (offset * adj_1) + xtime_nsec_1 = 867 + * (offset * (adj_1+1)) + xtime_nsec_2 868 + * (offset * adj_1) + xtime_nsec_1 = 869 + * (offset * adj_1) + offset + xtime_nsec_2 870 + * Canceling the sides: 871 + * xtime_nsec_1 = offset + xtime_nsec_2 872 + * Which gives us: 873 + * xtime_nsec_2 = xtime_nsec_1 - offset 874 + * Which simplifies to: 875 + * xtime_nsec -= offset 876 + * 877 + * XXX - TODO: Doc ntp_error calculation. 878 + */ 857 879 timekeeper.mult += adj; 858 880 timekeeper.xtime_interval += interval; 859 881 timekeeper.xtime_nsec -= offset;
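The invariant derived in the timekeeping_adjust() comment above (bumping mult must be paired with `xtime_nsec -= offset` so "now" does not jump) can be verified numerically. A hedged userspace sketch for the adj == 1 case; the struct and function names here are illustrative, not kernel API:

```c
#include <assert.h>
#include <stdint.h>

struct tk { int64_t mult; int64_t xtime_nsec; };

/* current time in shifted ns for 'offset' unaccumulated cycles */
static int64_t tk_now(const struct tk *tk, int64_t offset)
{
    return offset * tk->mult + tk->xtime_nsec;
}

/* single-step frequency adjustment, as in timekeeping_adjust() */
static void tk_speed_up(struct tk *tk, int64_t offset)
{
    tk->mult += 1;            /* run faster from here on ...      */
    tk->xtime_nsec -= offset; /* ... without jumping forward now. */
}
```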
+1 -1
kernel/timer.c
··· 1368 1368 int pid; 1369 1369 1370 1370 rcu_read_lock(); 1371 - pid = task_tgid_vnr(current->real_parent); 1371 + pid = task_tgid_vnr(rcu_dereference(current->real_parent)); 1372 1372 rcu_read_unlock(); 1373 1373 1374 1374 return pid;
+3 -2
kernel/trace/ftrace.c
··· 152 152 ftrace_pid_function = ftrace_stub; 153 153 } 154 154 155 - #undef CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST 156 155 #ifndef CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST 157 156 /* 158 157 * For those archs that do not test ftrace_trace_stop in their ··· 1211 1212 if (!src->count) { 1212 1213 free_ftrace_hash_rcu(*dst); 1213 1214 rcu_assign_pointer(*dst, EMPTY_HASH); 1214 - return 0; 1215 + /* still need to update the function records */ 1216 + ret = 0; 1217 + goto out; 1215 1218 } 1216 1219 1217 1220 /*
-1
kernel/trace/trace_events.c
··· 1078 1078 /* First see if we did not already create this dir */ 1079 1079 list_for_each_entry(system, &event_subsystems, list) { 1080 1080 if (strcmp(system->name, name) == 0) { 1081 - __get_system(system); 1082 1081 system->nr_events++; 1083 1082 return system->entry; 1084 1083 }
+9 -4
kernel/trace/trace_events_filter.c
··· 1649 1649 */ 1650 1650 err = replace_preds(call, NULL, ps, filter_string, true); 1651 1651 if (err) 1652 - goto fail; 1652 + call->flags |= TRACE_EVENT_FL_NO_SET_FILTER; 1653 + else 1654 + call->flags &= ~TRACE_EVENT_FL_NO_SET_FILTER; 1653 1655 } 1654 1656 1655 1657 list_for_each_entry(call, &ftrace_events, list) { 1656 1658 struct event_filter *filter; 1657 1659 1658 1660 if (strcmp(call->class->system, system->name) != 0) 1661 + continue; 1662 + 1663 + if (call->flags & TRACE_EVENT_FL_NO_SET_FILTER) 1659 1664 continue; 1660 1665 1661 1666 filter_item = kzalloc(sizeof(*filter_item), GFP_KERNEL); ··· 1691 1686 * replace the filter for the call. 1692 1687 */ 1693 1688 filter = call->filter; 1694 - call->filter = filter_item->filter; 1689 + rcu_assign_pointer(call->filter, filter_item->filter); 1695 1690 filter_item->filter = filter; 1696 1691 1697 1692 fail = false; ··· 1746 1741 filter = call->filter; 1747 1742 if (!filter) 1748 1743 goto out_unlock; 1749 - call->filter = NULL; 1744 + RCU_INIT_POINTER(call->filter, NULL); 1750 1745 /* Make sure the filter is not being used */ 1751 1746 synchronize_sched(); 1752 1747 __free_filter(filter); ··· 1787 1782 * string 1788 1783 */ 1789 1784 tmp = call->filter; 1790 - call->filter = filter; 1785 + rcu_assign_pointer(call->filter, filter); 1791 1786 if (tmp) { 1792 1787 /* Make sure the call is done with the filter */ 1793 1788 synchronize_sched();
+1 -1
lib/dma-debug.c
··· 245 245 246 246 static bool exact_match(struct dma_debug_entry *a, struct dma_debug_entry *b) 247 247 { 248 - return ((a->dev_addr == a->dev_addr) && 248 + return ((a->dev_addr == b->dev_addr) && 249 249 (a->dev == b->dev)) ? true : false; 250 250 } 251 251
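The dma-debug one-liner above fixes a classic self-comparison: `a->dev_addr == a->dev_addr` is always true, so the predicate matched any two entries on the same device. A small userspace reproduction of both versions (the struct is a pared-down stand-in for `struct dma_debug_entry`):

```c
#include <assert.h>
#include <stdbool.h>

struct entry { unsigned long dev_addr; const void *dev; };

/* the fixed predicate: both fields compared across the two entries */
static bool exact_match(const struct entry *a, const struct entry *b)
{
    return a->dev_addr == b->dev_addr && a->dev == b->dev;
}

/* the pre-fix version: dev_addr is compared against itself */
static bool buggy_match(const struct entry *a, const struct entry *b)
{
    const struct entry *lhs = a, *rhs = a;  /* the bug: 'b' never consulted */
    return lhs->dev_addr == rhs->dev_addr && a->dev == b->dev;
}
```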
+4 -2
mm/filemap.c
··· 2407 2407 iov_iter_count(i)); 2408 2408 2409 2409 again: 2410 - 2411 2410 /* 2412 2411 * Bring in the user page that we will copy from _first_. 2413 2412 * Otherwise there's a nasty deadlock on copying from the ··· 2462 2463 written += copied; 2463 2464 2464 2465 balance_dirty_pages_ratelimited(mapping); 2465 - 2466 + if (fatal_signal_pending(current)) { 2467 + status = -EINTR; 2468 + break; 2469 + } 2466 2470 } while (iov_iter_count(i)); 2467 2471 2468 2472 return written ? written : status;
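The filemap hunk adds a fatal-signal check between write chunks so an OOM-killed task can exit promptly. The loop shape, sketched with illustrative userspace stand-ins (the error value and callback are assumptions, not the kernel helpers):

```c
#include <assert.h>
#include <stddef.h>

#define MY_EINTR 4  /* illustrative stand-in for -EINTR */

/* copy 'total' bytes in 'chunk'-sized steps, bailing out on a signal
 * but still reporting bytes already written, as the fixed loop does */
static long write_chunks(size_t total, size_t chunk, int (*fatal_signal)(void))
{
    size_t written = 0;
    long status = 0;

    do {
        size_t copied = total - written < chunk ? total - written : chunk;

        written += copied;
        if (fatal_signal()) {
            status = -MY_EINTR;
            break;
        }
    } while (written < total);

    return written ? (long)written : status;
}

static int no_signal(void) { return 0; }
static int killed(void)    { return 1; }
```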
+4 -12
mm/huge_memory.c
··· 2259 2259 2260 2260 static void khugepaged_alloc_sleep(void) 2261 2261 { 2262 - DEFINE_WAIT(wait); 2263 - add_wait_queue(&khugepaged_wait, &wait); 2264 - schedule_timeout_interruptible( 2265 - msecs_to_jiffies( 2266 - khugepaged_alloc_sleep_millisecs)); 2267 - remove_wait_queue(&khugepaged_wait, &wait); 2262 + wait_event_freezable_timeout(khugepaged_wait, false, 2263 + msecs_to_jiffies(khugepaged_alloc_sleep_millisecs)); 2268 2264 } 2269 2265 2270 2266 #ifndef CONFIG_NUMA ··· 2309 2313 if (unlikely(kthread_should_stop())) 2310 2314 break; 2311 2315 if (khugepaged_has_work()) { 2312 - DEFINE_WAIT(wait); 2313 2316 if (!khugepaged_scan_sleep_millisecs) 2314 2317 continue; 2315 - add_wait_queue(&khugepaged_wait, &wait); 2316 - schedule_timeout_interruptible( 2317 - msecs_to_jiffies( 2318 - khugepaged_scan_sleep_millisecs)); 2319 - remove_wait_queue(&khugepaged_wait, &wait); 2318 + wait_event_freezable_timeout(khugepaged_wait, false, 2319 + msecs_to_jiffies(khugepaged_scan_sleep_millisecs)); 2320 2320 } else if (khugepaged_enabled()) 2321 2321 wait_event_freezable(khugepaged_wait, 2322 2322 khugepaged_wait_event());
+1
mm/hugetlb.c
··· 576 576 __SetPageHead(page); 577 577 for (i = 1; i < nr_pages; i++, p = mem_map_next(p, page, i)) { 578 578 __SetPageTail(p); 579 + set_page_count(p, 0); 579 580 p->first_page = page; 580 581 } 581 582 }
+1 -1
mm/migrate.c
··· 871 871 872 872 if (anon_vma) 873 873 put_anon_vma(anon_vma); 874 - out: 875 874 unlock_page(hpage); 876 875 876 + out: 877 877 if (rc != -EAGAIN) { 878 878 list_del(&hpage->lru); 879 879 put_page(hpage);
+28 -4
mm/page-writeback.c
··· 411 411 * 412 412 * Returns @bdi's dirty limit in pages. The term "dirty" in the context of 413 413 * dirty balancing includes all PG_dirty, PG_writeback and NFS unstable pages. 414 - * And the "limit" in the name is not seriously taken as hard limit in 415 - * balance_dirty_pages(). 414 + * 415 + * Note that balance_dirty_pages() will only seriously take it as a hard limit 416 + * when sleeping max_pause per page is not enough to keep the dirty pages under 417 + * control. For example, when the device is completely stalled due to some error 418 + * conditions, or when there are 1000 dd tasks writing to a slow 10MB/s USB key. 419 + * In the other normal situations, it acts more gently by throttling the tasks 420 + * more (rather than completely block them) when the bdi dirty pages go high. 416 421 * 417 422 * It allocates high/low dirty limits to fast/slow devices, in order to prevent 418 423 * - starving fast devices ··· 599 594 */ 600 595 if (unlikely(bdi_thresh > thresh)) 601 596 bdi_thresh = thresh; 597 + /* 598 + * It's very possible that bdi_thresh is close to 0 not because the 599 + * device is slow, but that it has remained inactive for long time. 600 + * Honour such devices a reasonable good (hopefully IO efficient) 601 + * threshold, so that the occasional writes won't be blocked and active 602 + * writes can rampup the threshold quickly. 603 + */ 602 604 bdi_thresh = max(bdi_thresh, (limit - dirty) / 8); 603 605 /* 604 606 * scale global setpoint to bdi's: ··· 989 977 * 990 978 * 8 serves as the safety ratio. 991 979 */ 992 - if (bdi_dirty) 993 - t = min(t, bdi_dirty * HZ / (8 * bw + 1)); 980 + t = min(t, bdi_dirty * HZ / (8 * bw + 1)); 994 981 995 982 /* 996 983 * The pause time will be settled within range (max_pause/4, max_pause). ··· 1145 1134 * also keep "1000+ dd on a slow USB stick" under control. 
1146 1135 */ 1147 1136 if (task_ratelimit) 1137 + break; 1138 + 1139 + /* 1140 + * In the case of an unresponsive NFS server whose NFS dirty 1141 + * pages exceed dirty_thresh, give the other good bdi's a pipe 1142 + * to go through, so that tasks on them still remain responsive. 1143 + * 1144 + * In theory 1 page is enough to keep the consumer-producer 1145 + * pipe going: the flusher cleans 1 page => the task dirties 1 1146 + * more page. However bdi_dirty has accounting errors. So use 1147 + * the larger and more IO friendly bdi_stat_error. 1148 + */ 1149 + if (bdi_dirty <= bdi_stat_error(bdi)) 1148 1150 break; 1149 1151 1150 1152 if (fatal_signal_pending(current))
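The max-pause cap in the page-writeback hunk is simple arithmetic: never sleep longer than the time the device needs to clean its dirty pages, with 8 as the safety ratio and `+ 1` keeping the divide defined at zero bandwidth (which is what made the old `if (bdi_dirty)` guard unnecessary). A hedged sketch with an illustrative HZ value:

```c
#include <assert.h>

#define MY_HZ 100  /* illustrative tick rate */

/* cap pause 't' (jiffies) by bdi_dirty pages at 'pages_per_sec' bandwidth */
static unsigned long cap_pause(unsigned long t, unsigned long bdi_dirty,
                               unsigned long pages_per_sec)
{
    unsigned long limit = bdi_dirty * MY_HZ / (8 * pages_per_sec + 1);

    return t < limit ? t : limit;
}
```

A completely clean bdi (bdi_dirty == 0) now simply yields a zero pause instead of a special case.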
+8 -2
mm/page_alloc.c
··· 356 356 __SetPageHead(page); 357 357 for (i = 1; i < nr_pages; i++) { 358 358 struct page *p = page + i; 359 - 360 359 __SetPageTail(p); 360 + set_page_count(p, 0); 361 361 p->first_page = page; 362 362 } 363 363 } ··· 3377 3377 unsigned long block_migratetype; 3378 3378 int reserve; 3379 3379 3380 - /* Get the start pfn, end pfn and the number of blocks to reserve */ 3380 + /* 3381 + * Get the start pfn, end pfn and the number of blocks to reserve 3382 + * We have to be careful to be aligned to pageblock_nr_pages to 3383 + * make sure that we always check pfn_valid for the first page in 3384 + * the block. 3385 + */ 3381 3386 start_pfn = zone->zone_start_pfn; 3382 3387 end_pfn = start_pfn + zone->spanned_pages; 3388 + start_pfn = roundup(start_pfn, pageblock_nr_pages); 3383 3389 reserve = roundup(min_wmark_pages(zone), pageblock_nr_pages) >> 3384 3390 pageblock_order; 3385 3391
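The page_alloc hunk rounds the zone start pfn up to a pageblock boundary so every block scanned begins on one, making the first-page pfn_valid check safe. The rounding itself, sketched in userspace with an assumed pageblock size:

```c
#include <assert.h>

#define MY_PAGEBLOCK_NR_PAGES 512UL  /* illustrative pageblock_nr_pages */

/* round 'pfn' up to the next multiple of 'align' */
static unsigned long pfn_roundup(unsigned long pfn, unsigned long align)
{
    return (pfn + align - 1) / align * align;
}
```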
+8 -9
mm/percpu-vm.c
··· 50 50 51 51 if (!pages || !bitmap) { 52 52 if (may_alloc && !pages) 53 - pages = pcpu_mem_alloc(pages_size); 53 + pages = pcpu_mem_zalloc(pages_size); 54 54 if (may_alloc && !bitmap) 55 - bitmap = pcpu_mem_alloc(bitmap_size); 55 + bitmap = pcpu_mem_zalloc(bitmap_size); 56 56 if (!pages || !bitmap) 57 57 return NULL; 58 58 } 59 59 60 - memset(pages, 0, pages_size); 61 60 bitmap_copy(bitmap, chunk->populated, pcpu_unit_pages); 62 61 63 62 *bitmapp = bitmap; ··· 142 143 int page_start, int page_end) 143 144 { 144 145 flush_cache_vunmap( 145 - pcpu_chunk_addr(chunk, pcpu_first_unit_cpu, page_start), 146 - pcpu_chunk_addr(chunk, pcpu_last_unit_cpu, page_end)); 146 + pcpu_chunk_addr(chunk, pcpu_low_unit_cpu, page_start), 147 + pcpu_chunk_addr(chunk, pcpu_high_unit_cpu, page_end)); 147 148 } 148 149 149 150 static void __pcpu_unmap_pages(unsigned long addr, int nr_pages) ··· 205 206 int page_start, int page_end) 206 207 { 207 208 flush_tlb_kernel_range( 208 - pcpu_chunk_addr(chunk, pcpu_first_unit_cpu, page_start), 209 - pcpu_chunk_addr(chunk, pcpu_last_unit_cpu, page_end)); 209 + pcpu_chunk_addr(chunk, pcpu_low_unit_cpu, page_start), 210 + pcpu_chunk_addr(chunk, pcpu_high_unit_cpu, page_end)); 210 211 } 211 212 212 213 static int __pcpu_map_pages(unsigned long addr, struct page **pages, ··· 283 284 int page_start, int page_end) 284 285 { 285 286 flush_cache_vmap( 286 - pcpu_chunk_addr(chunk, pcpu_first_unit_cpu, page_start), 287 - pcpu_chunk_addr(chunk, pcpu_last_unit_cpu, page_end)); 287 + pcpu_chunk_addr(chunk, pcpu_low_unit_cpu, page_start), 288 + pcpu_chunk_addr(chunk, pcpu_high_unit_cpu, page_end)); 288 289 } 289 290 290 291 /**
+40 -22
mm/percpu.c
··· 116 116 static int pcpu_nr_slots __read_mostly; 117 117 static size_t pcpu_chunk_struct_size __read_mostly; 118 118 119 - /* cpus with the lowest and highest unit numbers */ 120 - static unsigned int pcpu_first_unit_cpu __read_mostly; 121 - static unsigned int pcpu_last_unit_cpu __read_mostly; 119 + /* cpus with the lowest and highest unit addresses */ 120 + static unsigned int pcpu_low_unit_cpu __read_mostly; 121 + static unsigned int pcpu_high_unit_cpu __read_mostly; 122 122 123 123 /* the address of the first chunk which starts with the kernel static area */ 124 124 void *pcpu_base_addr __read_mostly; ··· 273 273 (rs) = (re) + 1, pcpu_next_pop((chunk), &(rs), &(re), (end))) 274 274 275 275 /** 276 - * pcpu_mem_alloc - allocate memory 276 + * pcpu_mem_zalloc - allocate memory 277 277 * @size: bytes to allocate 278 278 * 279 279 * Allocate @size bytes. If @size is smaller than PAGE_SIZE, 280 - * kzalloc() is used; otherwise, vmalloc() is used. The returned 280 + * kzalloc() is used; otherwise, vzalloc() is used. The returned 281 281 * memory is always zeroed. 282 282 * 283 283 * CONTEXT: ··· 286 286 * RETURNS: 287 287 * Pointer to the allocated area on success, NULL on failure. 288 288 */ 289 - static void *pcpu_mem_alloc(size_t size) 289 + static void *pcpu_mem_zalloc(size_t size) 290 290 { 291 291 if (WARN_ON_ONCE(!slab_is_available())) 292 292 return NULL; ··· 302 302 * @ptr: memory to free 303 303 * @size: size of the area 304 304 * 305 - * Free @ptr. @ptr should have been allocated using pcpu_mem_alloc(). 305 + * Free @ptr. @ptr should have been allocated using pcpu_mem_zalloc(). 
306 306 */ 307 307 static void pcpu_mem_free(void *ptr, size_t size) 308 308 { ··· 384 384 size_t old_size = 0, new_size = new_alloc * sizeof(new[0]); 385 385 unsigned long flags; 386 386 387 - new = pcpu_mem_alloc(new_size); 387 + new = pcpu_mem_zalloc(new_size); 388 388 if (!new) 389 389 return -ENOMEM; 390 390 ··· 604 604 { 605 605 struct pcpu_chunk *chunk; 606 606 607 - chunk = pcpu_mem_alloc(pcpu_chunk_struct_size); 607 + chunk = pcpu_mem_zalloc(pcpu_chunk_struct_size); 608 608 if (!chunk) 609 609 return NULL; 610 610 611 - chunk->map = pcpu_mem_alloc(PCPU_DFL_MAP_ALLOC * sizeof(chunk->map[0])); 611 + chunk->map = pcpu_mem_zalloc(PCPU_DFL_MAP_ALLOC * 612 + sizeof(chunk->map[0])); 612 613 if (!chunk->map) { 613 614 kfree(chunk); 614 615 return NULL; ··· 978 977 * address. The caller is responsible for ensuring @addr stays valid 979 978 * until this function finishes. 980 979 * 980 + * percpu allocator has special setup for the first chunk, which currently 981 + * supports either embedding in linear address space or vmalloc mapping, 982 + * and, from the second one, the backing allocator (currently either vm or 983 + * km) provides translation. 984 + * 985 + * The addr can be translated simply without checking if it falls into the 986 + * first chunk. But the current code reflects better how percpu allocator 987 + * actually works, and the verification can discover both bugs in percpu 988 + * allocator itself and per_cpu_ptr_to_phys() callers. So we keep current 989 + * code. 990 + * 981 991 * RETURNS: 982 992 * The physical address for @addr.
983 993 */ ··· 996 984 { 997 985 void __percpu *base = __addr_to_pcpu_ptr(pcpu_base_addr); 998 986 bool in_first_chunk = false; 999 - unsigned long first_start, first_end; 987 + unsigned long first_low, first_high; 1000 988 unsigned int cpu; 1001 989 1002 990 /* 1003 - * The following test on first_start/end isn't strictly 991 + * The following test on unit_low/high isn't strictly 1004 992 * necessary but will speed up lookups of addresses which 1005 993 * aren't in the first chunk. 1006 994 */ 1007 - first_start = pcpu_chunk_addr(pcpu_first_chunk, pcpu_first_unit_cpu, 0); 1008 - first_end = pcpu_chunk_addr(pcpu_first_chunk, pcpu_last_unit_cpu, 1009 - pcpu_unit_pages); 1010 - if ((unsigned long)addr >= first_start && 1011 - (unsigned long)addr < first_end) { 995 + first_low = pcpu_chunk_addr(pcpu_first_chunk, pcpu_low_unit_cpu, 0); 996 + first_high = pcpu_chunk_addr(pcpu_first_chunk, pcpu_high_unit_cpu, 997 + pcpu_unit_pages); 998 + if ((unsigned long)addr >= first_low && 999 + (unsigned long)addr < first_high) { 1012 1000 for_each_possible_cpu(cpu) { 1013 1001 void *start = per_cpu_ptr(base, cpu); 1014 1002 ··· 1245 1233 1246 1234 for (cpu = 0; cpu < nr_cpu_ids; cpu++) 1247 1235 unit_map[cpu] = UINT_MAX; 1248 - pcpu_first_unit_cpu = NR_CPUS; 1236 + 1237 + pcpu_low_unit_cpu = NR_CPUS; 1238 + pcpu_high_unit_cpu = NR_CPUS; 1249 1239 1250 1240 for (group = 0, unit = 0; group < ai->nr_groups; group++, unit += i) { 1251 1241 const struct pcpu_group_info *gi = &ai->groups[group]; ··· 1267 1253 unit_map[cpu] = unit + i; 1268 1254 unit_off[cpu] = gi->base_offset + i * ai->unit_size; 1269 1255 1270 - if (pcpu_first_unit_cpu == NR_CPUS) 1271 - pcpu_first_unit_cpu = cpu; 1272 - pcpu_last_unit_cpu = cpu; 1256 + /* determine low/high unit_cpu */ 1257 + if (pcpu_low_unit_cpu == NR_CPUS || 1258 + unit_off[cpu] < unit_off[pcpu_low_unit_cpu]) 1259 + pcpu_low_unit_cpu = cpu; 1260 + if (pcpu_high_unit_cpu == NR_CPUS || 1261 + unit_off[cpu] > unit_off[pcpu_high_unit_cpu]) 1262 + 
pcpu_high_unit_cpu = cpu; 1273 1263 } 1274 1264 } 1275 1265 pcpu_nr_units = unit; ··· 1907 1889 1908 1890 BUILD_BUG_ON(size > PAGE_SIZE); 1909 1891 1910 - map = pcpu_mem_alloc(size); 1892 + map = pcpu_mem_zalloc(size); 1911 1893 BUG_ON(!map); 1912 1894 1913 1895 spin_lock_irqsave(&pcpu_lock, flags);
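The percpu hunk above switches from tracking the first/last unit *number* to the cpus whose units sit at the lowest and highest *addresses*, since group base offsets need not be ascending. The selection logic, as a stand-alone illustrative sketch:

```c
#include <assert.h>

/* pick the cpus with the lowest and highest unit offsets, as the
 * reworked pcpu_setup_first_chunk() does with unit_off[] */
static void find_low_high(const unsigned long *unit_off, int nr_cpus,
                          int *low_cpu, int *high_cpu)
{
    *low_cpu = *high_cpu = -1;

    for (int cpu = 0; cpu < nr_cpus; cpu++) {
        if (*low_cpu < 0 || unit_off[cpu] < unit_off[*low_cpu])
            *low_cpu = cpu;
        if (*high_cpu < 0 || unit_off[cpu] > unit_off[*high_cpu])
            *high_cpu = cpu;
    }
}
```

Note that with offsets { 4096, 0, 8192, 2048 } the old first/last scheme would have picked cpus 0 and 3, while the address-based one correctly picks 1 and 2.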
+4 -1
mm/slab.c
··· 595 595 PARTIAL_AC, 596 596 PARTIAL_L3, 597 597 EARLY, 598 + LATE, 598 599 FULL 599 600 } g_cpucache_up; 600 601 ··· 672 671 { 673 672 struct cache_sizes *s = malloc_sizes; 674 673 675 - if (g_cpucache_up != FULL) 674 + if (g_cpucache_up < LATE) 676 675 return; 677 676 678 677 for (s = malloc_sizes; s->cs_size != ULONG_MAX; s++) { ··· 1666 1665 void __init kmem_cache_init_late(void) 1667 1666 { 1668 1667 struct kmem_cache *cachep; 1668 + 1669 + g_cpucache_up = LATE; 1669 1670 1670 1671 /* Annotate slab for lockdep -- annotate the malloc caches */ 1671 1672 init_lock_keys();
+26 -16
mm/slub.c
··· 1862 1862 { 1863 1863 struct kmem_cache_node *n = NULL; 1864 1864 struct kmem_cache_cpu *c = this_cpu_ptr(s->cpu_slab); 1865 - struct page *page; 1865 + struct page *page, *discard_page = NULL; 1866 1866 1867 1867 while ((page = c->partial)) { 1868 1868 enum slab_modes { M_PARTIAL, M_FREE }; ··· 1904 1904 if (l == M_PARTIAL) 1905 1905 remove_partial(n, page); 1906 1906 else 1907 - add_partial(n, page, 1); 1907 + add_partial(n, page, 1908 + DEACTIVATE_TO_TAIL); 1908 1909 1909 1910 l = m; 1910 1911 } ··· 1916 1915 "unfreezing slab")); 1917 1916 1918 1917 if (m == M_FREE) { 1919 - stat(s, DEACTIVATE_EMPTY); 1920 - discard_slab(s, page); 1921 - stat(s, FREE_SLAB); 1918 + page->next = discard_page; 1919 + discard_page = page; 1922 1920 } 1923 1921 } 1924 1922 1925 1923 if (n) 1926 1924 spin_unlock(&n->list_lock); 1925 + 1926 + while (discard_page) { 1927 + page = discard_page; 1928 + discard_page = discard_page->next; 1929 + 1930 + stat(s, DEACTIVATE_EMPTY); 1931 + discard_slab(s, page); 1932 + stat(s, FREE_SLAB); 1933 + } 1927 1934 } 1928 1935 1929 1936 /* ··· 1978 1969 page->pobjects = pobjects; 1979 1970 page->next = oldpage; 1980 1971 1981 - } while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page) != oldpage); 1972 + } while (irqsafe_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page) != oldpage); 1982 1973 stat(s, CPU_PARTIAL_FREE); 1983 1974 return pobjects; 1984 1975 } ··· 4444 4435 4445 4436 for_each_possible_cpu(cpu) { 4446 4437 struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu); 4438 + int node = ACCESS_ONCE(c->node); 4447 4439 struct page *page; 4448 4440 4449 - if (!c || c->node < 0) 4441 + if (node < 0) 4450 4442 continue; 4451 - 4452 - if (c->page) { 4453 - if (flags & SO_TOTAL) 4454 - x = c->page->objects; 4443 + page = ACCESS_ONCE(c->page); 4444 + if (page) { 4445 + if (flags & SO_TOTAL) 4446 + x = page->objects; 4455 4447 else if (flags & SO_OBJECTS) 4456 - x = c->page->inuse; 4448 + x = page->inuse; 4457 4449 else 4458 4450 x = 1; 4459 
4451 4460 4452 total += x; 4461 - nodes[c->node] += x; 4453 + nodes[node] += x; 4462 4454 } 4463 4455 page = c->partial; 4464 4456 4465 4457 if (page) { 4466 4458 x = page->pobjects; 4467 - total += x; 4468 - nodes[c->node] += x; 4459 + total += x; 4460 + nodes[node] += x; 4469 4461 } 4470 - per_cpu[c->node]++; 4462 + per_cpu[node]++; 4471 4463 } 4472 4464 } 4473 4465
+2
mm/vmalloc.c
··· 1633 1633 goto fail; 1634 1634 1635 1635 addr = __vmalloc_area_node(area, gfp_mask, prot, node, caller); 1636 + if (!addr) 1637 + return NULL; 1636 1638 1637 1639 /* 1638 1640 * In this function, newly allocated vm_struct is not added
+13 -13
mm/vmscan.c
··· 183 183 */ 184 184 void register_shrinker(struct shrinker *shrinker) 185 185 { 186 - shrinker->nr = 0; 186 + atomic_long_set(&shrinker->nr_in_batch, 0); 187 187 down_write(&shrinker_rwsem); 188 188 list_add_tail(&shrinker->list, &shrinker_list); 189 189 up_write(&shrinker_rwsem); ··· 247 247 248 248 list_for_each_entry(shrinker, &shrinker_list, list) { 249 249 unsigned long long delta; 250 - unsigned long total_scan; 251 - unsigned long max_pass; 250 + long total_scan; 251 + long max_pass; 252 252 int shrink_ret = 0; 253 253 long nr; 254 254 long new_nr; 255 255 long batch_size = shrinker->batch ? shrinker->batch 256 256 : SHRINK_BATCH; 257 257 258 + max_pass = do_shrinker_shrink(shrinker, shrink, 0); 259 + if (max_pass <= 0) 260 + continue; 261 + 258 262 /* 259 263 * copy the current shrinker scan count into a local variable 260 264 * and zero it so that other concurrent shrinker invocations 261 265 * don't also do this scanning work. 262 266 */ 263 - do { 264 - nr = shrinker->nr; 265 - } while (cmpxchg(&shrinker->nr, nr, 0) != nr); 267 + nr = atomic_long_xchg(&shrinker->nr_in_batch, 0); 266 268 267 269 total_scan = nr; 268 - max_pass = do_shrinker_shrink(shrinker, shrink, 0); 269 270 delta = (4 * nr_pages_scanned) / shrinker->seeks; 270 271 delta *= max_pass; 271 272 do_div(delta, lru_pages + 1); ··· 326 325 * manner that handles concurrent updates. If we exhausted the 327 326 * scan, there is no need to do an update. 328 327 */ 329 - do { 330 - nr = shrinker->nr; 331 - new_nr = total_scan + nr; 332 - if (total_scan <= 0) 333 - break; 334 - } while (cmpxchg(&shrinker->nr, nr, new_nr) != nr); 328 + if (total_scan > 0) 329 + new_nr = atomic_long_add_return(total_scan, 330 + &shrinker->nr_in_batch); 331 + else 332 + new_nr = atomic_long_read(&shrinker->nr_in_batch); 335 333 336 334 trace_mm_shrink_slab_end(shrinker, shrink_ret, nr, new_nr); 337 335 }
+22 -5
net/batman-adv/translation-table.c
··· 245 245 if (tt_global_entry) { 246 246 /* This node is probably going to update its tt table */ 247 247 tt_global_entry->orig_node->tt_poss_change = true; 248 - /* The global entry has to be marked as PENDING and has to be 248 + /* The global entry has to be marked as ROAMING and has to be 249 249 * kept for consistency purpose */ 250 - tt_global_entry->flags |= TT_CLIENT_PENDING; 250 + tt_global_entry->flags |= TT_CLIENT_ROAM; 251 + tt_global_entry->roam_at = jiffies; 252 + 251 253 send_roam_adv(bat_priv, tt_global_entry->addr, 252 254 tt_global_entry->orig_node); 253 255 } ··· 696 694 const char *message, bool roaming) 697 695 { 698 696 struct tt_global_entry *tt_global_entry = NULL; 697 + struct tt_local_entry *tt_local_entry = NULL; 699 698 700 699 tt_global_entry = tt_global_hash_find(bat_priv, addr); 701 700 if (!tt_global_entry) ··· 704 701 705 702 if (tt_global_entry->orig_node == orig_node) { 706 703 if (roaming) { 707 - tt_global_entry->flags |= TT_CLIENT_ROAM; 708 - tt_global_entry->roam_at = jiffies; 709 - goto out; 704 + /* if we are deleting a global entry due to a roam 705 + * event, there are two possibilities: 706 + * 1) the client roamed from node A to node B => we mark 707 + * it with TT_CLIENT_ROAM, we start a timer and we 708 + * wait for node B to claim it. In case of timeout 709 + * the entry is purged. 710 + * 2) the client roamed to us => we can directly delete 711 + * the global entry, since it is useless now. 
*/ 712 + tt_local_entry = tt_local_hash_find(bat_priv, 713 + tt_global_entry->addr); 714 + if (!tt_local_entry) { 715 + tt_global_entry->flags |= TT_CLIENT_ROAM; 716 + tt_global_entry->roam_at = jiffies; 717 + goto out; 718 + } 710 719 } 711 720 _tt_global_del(bat_priv, tt_global_entry, message); 712 721 } 713 722 out: 714 723 if (tt_global_entry) 715 724 tt_global_entry_free_ref(tt_global_entry); 725 + if (tt_local_entry) 726 + tt_local_entry_free_ref(tt_local_entry); 716 727 } 717 728 718 729 void tt_global_del_orig(struct bat_priv *bat_priv,
+3 -5
net/bluetooth/bnep/core.c
··· 79 79 80 80 static void __bnep_link_session(struct bnep_session *s) 81 81 { 82 - /* It's safe to call __module_get() here because sessions are added 83 - by the socket layer which has to hold the reference to this module. 84 - */ 85 - __module_get(THIS_MODULE); 86 82 list_add(&s->list, &bnep_session_list); 87 83 } 88 84 89 85 static void __bnep_unlink_session(struct bnep_session *s) 90 86 { 91 87 list_del(&s->list); 92 - module_put(THIS_MODULE); 93 88 } 94 89 95 90 static int bnep_send(struct bnep_session *s, void *data, size_t len) ··· 525 530 526 531 up_write(&bnep_session_sem); 527 532 free_netdev(dev); 533 + module_put_and_exit(0); 528 534 return 0; 529 535 } 530 536 ··· 612 616 613 617 __bnep_link_session(s); 614 618 619 + __module_get(THIS_MODULE); 615 620 s->task = kthread_run(bnep_session, s, "kbnepd %s", dev->name); 616 621 if (IS_ERR(s->task)) { 617 622 /* Session thread start failed, gotta cleanup. */ 623 + module_put(THIS_MODULE); 618 624 unregister_netdev(dev); 619 625 __bnep_unlink_session(s); 620 626 err = PTR_ERR(s->task);
+3 -2
net/bluetooth/cmtp/core.c
··· 67 67 68 68 static void __cmtp_link_session(struct cmtp_session *session) 69 69 { 70 - __module_get(THIS_MODULE); 71 70 list_add(&session->list, &cmtp_session_list); 72 71 } 73 72 74 73 static void __cmtp_unlink_session(struct cmtp_session *session) 75 74 { 76 75 list_del(&session->list); 77 - module_put(THIS_MODULE); 78 76 } 79 77 80 78 static void __cmtp_copy_session(struct cmtp_session *session, struct cmtp_conninfo *ci) ··· 325 327 up_write(&cmtp_session_sem); 326 328 327 329 kfree(session); 330 + module_put_and_exit(0); 328 331 return 0; 329 332 } 330 333 ··· 375 376 376 377 __cmtp_link_session(session); 377 378 379 + __module_get(THIS_MODULE); 378 380 session->task = kthread_run(cmtp_session, session, "kcmtpd_ctr_%d", 379 381 session->num); 380 382 if (IS_ERR(session->task)) { 383 + module_put(THIS_MODULE); 381 384 err = PTR_ERR(session->task); 382 385 goto unlink; 383 386 }
+1 -1
net/bluetooth/hci_event.c
··· 545 545 { 546 546 hci_setup_event_mask(hdev); 547 547 548 - if (hdev->lmp_ver > 1) 548 + if (hdev->hci_ver > 1) 549 549 hci_send_cmd(hdev, HCI_OP_READ_LOCAL_COMMANDS, 0, NULL); 550 550 551 551 if (hdev->features[6] & LMP_SIMPLE_PAIR) {
+6
net/bridge/br_netlink.c
··· 18 18 #include <net/sock.h> 19 19 20 20 #include "br_private.h" 21 + #include "br_private_stp.h" 21 22 22 23 static inline size_t br_nlmsg_size(void) 23 24 { ··· 189 188 190 189 p->state = new_state; 191 190 br_log_state(p); 191 + 192 + spin_lock_bh(&p->br->lock); 193 + br_port_state_selection(p->br); 194 + spin_unlock_bh(&p->br->lock); 195 + 192 196 br_ifinfo_notify(RTM_NEWLINK, p); 193 197 194 198 return 0;
+14 -15
net/bridge/br_stp.c
··· 399 399 struct net_bridge_port *p; 400 400 unsigned int liveports = 0; 401 401 402 - /* Don't change port states if userspace is handling STP */ 403 - if (br->stp_enabled == BR_USER_STP) 404 - return; 405 - 406 402 list_for_each_entry(p, &br->port_list, list) { 407 403 if (p->state == BR_STATE_DISABLED) 408 404 continue; 409 405 410 - if (p->port_no == br->root_port) { 411 - p->config_pending = 0; 412 - p->topology_change_ack = 0; 413 - br_make_forwarding(p); 414 - } else if (br_is_designated_port(p)) { 415 - del_timer(&p->message_age_timer); 416 - br_make_forwarding(p); 417 - } else { 418 - p->config_pending = 0; 419 - p->topology_change_ack = 0; 420 - br_make_blocking(p); 406 + /* Don't change port states if userspace is handling STP */ 407 + if (br->stp_enabled != BR_USER_STP) { 408 + if (p->port_no == br->root_port) { 409 + p->config_pending = 0; 410 + p->topology_change_ack = 0; 411 + br_make_forwarding(p); 412 + } else if (br_is_designated_port(p)) { 413 + del_timer(&p->message_age_timer); 414 + br_make_forwarding(p); 415 + } else { 416 + p->config_pending = 0; 417 + p->topology_change_ack = 0; 418 + br_make_blocking(p); 419 + } 421 420 } 422 421 423 422 if (p->state == BR_STATE_FORWARDING)
+6 -5
net/caif/cffrml.c
··· 136 136 137 137 static int cffrml_transmit(struct cflayer *layr, struct cfpkt *pkt) 138 138 { 139 - int tmp; 140 139 u16 chks; 141 140 u16 len; 141 + __le16 data; 142 + 142 143 struct cffrml *this = container_obj(layr); 143 144 if (this->dofcs) { 144 145 chks = cfpkt_iterate(pkt, cffrml_checksum, 0xffff); 145 - tmp = cpu_to_le16(chks); 146 - cfpkt_add_trail(pkt, &tmp, 2); 146 + data = cpu_to_le16(chks); 147 + cfpkt_add_trail(pkt, &data, 2); 147 148 } else { 148 149 cfpkt_pad_trail(pkt, 2); 149 150 } 150 151 len = cfpkt_getlen(pkt); 151 - tmp = cpu_to_le16(len); 152 - cfpkt_add_head(pkt, &tmp, 2); 152 + data = cpu_to_le16(len); 153 + cfpkt_add_head(pkt, &data, 2); 153 154 cfpkt_info(pkt)->hdr_len += 2; 154 155 if (cfpkt_erroneous(pkt)) { 155 156 pr_err("Packet is erroneous!\n");
+13 -22
net/ceph/crush/mapper.c
··· 477 477 int i, j; 478 478 int numrep; 479 479 int firstn; 480 - int rc = -1; 481 480 482 481 BUG_ON(ruleno >= map->max_rules); 483 482 ··· 490 491 * that this may or may not correspond to the specific types 491 492 * referenced by the crush rule. 492 493 */ 493 - if (force >= 0) { 494 - if (force >= map->max_devices || 495 - map->device_parents[force] == 0) { 496 - /*dprintk("CRUSH: forcefed device dne\n");*/ 497 - rc = -1; /* force fed device dne */ 498 - goto out; 499 - } 500 - if (!is_out(map, weight, force, x)) { 501 - while (1) { 502 - force_context[++force_pos] = force; 503 - if (force >= 0) 504 - force = map->device_parents[force]; 505 - else 506 - force = map->bucket_parents[-1-force]; 507 - if (force == 0) 508 - break; 509 - } 494 + if (force >= 0 && 495 + force < map->max_devices && 496 + map->device_parents[force] != 0 && 497 + !is_out(map, weight, force, x)) { 498 + while (1) { 499 + force_context[++force_pos] = force; 500 + if (force >= 0) 501 + force = map->device_parents[force]; 502 + else 503 + force = map->bucket_parents[-1-force]; 504 + if (force == 0) 505 + break; 510 506 } 511 507 } 512 508 ··· 594 600 BUG_ON(1); 595 601 } 596 602 } 597 - rc = result_len; 598 - 599 - out: 600 - return rc; 603 + return result_len; 601 604 } 602 605 603 606
+8 -1
net/core/dev.c
··· 1396 1396 for_each_net(net) { 1397 1397 for_each_netdev(net, dev) { 1398 1398 if (dev == last) 1399 - break; 1399 + goto outroll; 1400 1400 1401 1401 if (dev->flags & IFF_UP) { 1402 1402 nb->notifier_call(nb, NETDEV_GOING_DOWN, dev); ··· 1407 1407 } 1408 1408 } 1409 1409 1410 + outroll: 1410 1411 raw_notifier_chain_unregister(&netdev_chain, nb); 1411 1412 goto unlock; 1412 1413 } ··· 4281 4280 { 4282 4281 return seq_open_net(inode, file, &dev_seq_ops, 4283 4282 sizeof(struct dev_iter_state)); 4283 + } 4284 + 4285 + int dev_seq_open_ops(struct inode *inode, struct file *file, 4286 + const struct seq_operations *ops) 4287 + { 4288 + return seq_open_net(inode, file, ops, sizeof(struct dev_iter_state)); 4284 4289 } 4285 4290 4286 4291 static const struct file_operations dev_seq_fops = {
+1 -2
net/core/dev_addr_lists.c
··· 696 696 697 697 static int dev_mc_seq_open(struct inode *inode, struct file *file) 698 698 { 699 - return seq_open_net(inode, file, &dev_mc_seq_ops, 700 - sizeof(struct seq_net_private)); 699 + return dev_seq_open_ops(inode, file, &dev_mc_seq_ops); 701 700 } 702 701 703 702 static const struct file_operations dev_mc_seq_fops = {
+4 -1
net/core/neighbour.c
··· 2397 2397 struct net *net = seq_file_net(seq); 2398 2398 struct neigh_table *tbl = state->tbl; 2399 2399 2400 - pn = pn->next; 2400 + do { 2401 + pn = pn->next; 2402 + } while (pn && !net_eq(pneigh_net(pn), net)); 2403 + 2401 2404 while (!pn) { 2402 2405 if (++state->bucket > PNEIGH_HASHMASK) 2403 2406 break;
+4 -3
net/core/request_sock.c
··· 26 26 * but then some measure against one socket starving all other sockets 27 27 * would be needed. 28 28 * 29 - * It was 128 by default. Experiments with real servers show, that 29 + * The minimum value of it is 128. Experiments with real servers show that 30 30 * it is absolutely not enough even at 100conn/sec. 256 cures most 31 - * of problems. This value is adjusted to 128 for very small machines 32 - * (<=32Mb of memory) and to 1024 on normal or better ones (>=256Mb). 31 + * of problems. 32 + * This value is adjusted to 128 for low memory machines, 33 + * and it will increase in proportion to the memory of machine. 33 34 * Note : Dont forget somaxconn that may limit backlog too. 34 35 */ 35 36 int sysctl_max_syn_backlog = 256;
+2
net/core/secure_seq.c
··· 19 19 } 20 20 late_initcall(net_secret_init); 21 21 22 + #ifdef CONFIG_INET 22 23 static u32 seq_scale(u32 seq) 23 24 { 24 25 /* ··· 34 33 */ 35 34 return seq + (ktime_to_ns(ktime_get_real()) >> 6); 36 35 } 36 + #endif 37 37 38 38 #if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE) 39 39 __u32 secure_tcpv6_sequence_number(const __be32 *saddr, const __be32 *daddr,
+1 -1
net/core/skbuff.c
··· 2230 2230 * @shiftlen: shift up to this many bytes 2231 2231 * 2232 2232 * Attempts to shift up to shiftlen worth of bytes, which may be less than 2233 - * the length of the skb, from tgt to skb. Returns number bytes shifted. 2233 + * the length of the skb, from skb to tgt. Returns number bytes shifted. 2234 2234 * It's up to caller to free skb if everything was shifted. 2235 2235 * 2236 2236 * If @tgt runs out of frags, the whole operation is aborted.
+1
net/dccp/ipv4.c
··· 111 111 rt = ip_route_newports(fl4, rt, orig_sport, orig_dport, 112 112 inet->inet_sport, inet->inet_dport, sk); 113 113 if (IS_ERR(rt)) { 114 + err = PTR_ERR(rt); 114 115 rt = NULL; 115 116 goto failure; 116 117 }
+6 -4
net/decnet/dn_route.c
··· 112 112 static int dn_dst_gc(struct dst_ops *ops); 113 113 static struct dst_entry *dn_dst_check(struct dst_entry *, __u32); 114 114 static unsigned int dn_dst_default_advmss(const struct dst_entry *dst); 115 - static unsigned int dn_dst_default_mtu(const struct dst_entry *dst); 115 + static unsigned int dn_dst_mtu(const struct dst_entry *dst); 116 116 static void dn_dst_destroy(struct dst_entry *); 117 117 static struct dst_entry *dn_dst_negative_advice(struct dst_entry *); 118 118 static void dn_dst_link_failure(struct sk_buff *); ··· 135 135 .gc = dn_dst_gc, 136 136 .check = dn_dst_check, 137 137 .default_advmss = dn_dst_default_advmss, 138 - .default_mtu = dn_dst_default_mtu, 138 + .mtu = dn_dst_mtu, 139 139 .cow_metrics = dst_cow_metrics_generic, 140 140 .destroy = dn_dst_destroy, 141 141 .negative_advice = dn_dst_negative_advice, ··· 825 825 return dn_mss_from_pmtu(dst->dev, dst_mtu(dst)); 826 826 } 827 827 828 - static unsigned int dn_dst_default_mtu(const struct dst_entry *dst) 828 + static unsigned int dn_dst_mtu(const struct dst_entry *dst) 829 829 { 830 - return dst->dev->mtu; 830 + unsigned int mtu = dst_metric_raw(dst, RTAX_MTU); 831 + 832 + return mtu ? : dst->dev->mtu; 831 833 } 832 834 833 835 static struct neighbour *dn_dst_neigh_lookup(const struct dst_entry *dst, const void *daddr)
+5 -12
net/decnet/dn_timer.c
··· 36 36 37 37 void dn_start_slow_timer(struct sock *sk) 38 38 { 39 - sk->sk_timer.expires = jiffies + SLOW_INTERVAL; 40 - sk->sk_timer.function = dn_slow_timer; 41 - sk->sk_timer.data = (unsigned long)sk; 42 - 43 - add_timer(&sk->sk_timer); 39 + setup_timer(&sk->sk_timer, dn_slow_timer, (unsigned long)sk); 40 + sk_reset_timer(sk, &sk->sk_timer, jiffies + SLOW_INTERVAL); 44 41 } 45 42 46 43 void dn_stop_slow_timer(struct sock *sk) 47 44 { 48 - del_timer(&sk->sk_timer); 45 + sk_stop_timer(sk, &sk->sk_timer); 49 46 } 50 47 51 48 static void dn_slow_timer(unsigned long arg) ··· 50 53 struct sock *sk = (struct sock *)arg; 51 54 struct dn_scp *scp = DN_SK(sk); 52 55 53 - sock_hold(sk); 54 56 bh_lock_sock(sk); 55 57 56 58 if (sock_owned_by_user(sk)) { 57 - sk->sk_timer.expires = jiffies + HZ / 10; 58 - add_timer(&sk->sk_timer); 59 + sk_reset_timer(sk, &sk->sk_timer, jiffies + HZ / 10); 59 60 goto out; 60 61 } 61 62 ··· 95 100 scp->keepalive_fxn(sk); 96 101 } 97 102 98 - sk->sk_timer.expires = jiffies + SLOW_INTERVAL; 99 - 100 - add_timer(&sk->sk_timer); 103 + sk_reset_timer(sk, &sk->sk_timer, jiffies + SLOW_INTERVAL); 101 104 out: 102 105 bh_unlock_sock(sk); 103 106 sock_put(sk);
+5
net/ipv4/devinet.c
··· 1490 1490 void __user *buffer, 1491 1491 size_t *lenp, loff_t *ppos) 1492 1492 { 1493 + int old_value = *(int *)ctl->data; 1493 1494 int ret = proc_dointvec(ctl, write, buffer, lenp, ppos); 1495 + int new_value = *(int *)ctl->data; 1494 1496 1495 1497 if (write) { 1496 1498 struct ipv4_devconf *cnf = ctl->extra1; ··· 1503 1501 1504 1502 if (cnf == net->ipv4.devconf_dflt) 1505 1503 devinet_copy_dflt_conf(net, i); 1504 + if (i == IPV4_DEVCONF_ACCEPT_LOCAL - 1) 1505 + if ((new_value == 0) && (old_value != 0)) 1506 + rt_cache_flush(net, 0); 1506 1507 } 1507 1508 1508 1509 return ret;
+2 -1
net/ipv4/igmp.c
··· 1716 1716 if (err) { 1717 1717 int j; 1718 1718 1719 - pmc->sfcount[sfmode]--; 1719 + if (!delta) 1720 + pmc->sfcount[sfmode]--; 1720 1721 for (j=0; j<i; j++) 1721 1722 (void) ip_mc_del1_src(pmc, sfmode, &psfsrc[j]); 1722 1723 } else if (isexclude != (pmc->sfcount[MCAST_EXCLUDE] != 0)) {
+9 -5
net/ipv4/inet_diag.c
··· 108 108 icsk->icsk_ca_ops->name); 109 109 } 110 110 111 - if ((ext & (1 << (INET_DIAG_TOS - 1))) && (sk->sk_family != AF_INET6)) 112 - RTA_PUT_U8(skb, INET_DIAG_TOS, inet->tos); 113 - 114 111 r->idiag_family = sk->sk_family; 115 112 r->idiag_state = sk->sk_state; 116 113 r->idiag_timer = 0; ··· 122 125 r->id.idiag_src[0] = inet->inet_rcv_saddr; 123 126 r->id.idiag_dst[0] = inet->inet_daddr; 124 127 128 + /* IPv6 dual-stack sockets use inet->tos for IPv4 connections, 129 + * hence this needs to be included regardless of socket family. 130 + */ 131 + if (ext & (1 << (INET_DIAG_TOS - 1))) 132 + RTA_PUT_U8(skb, INET_DIAG_TOS, inet->tos); 133 + 125 134 #if defined(CONFIG_IPV6) || defined (CONFIG_IPV6_MODULE) 126 135 if (r->idiag_family == AF_INET6) { 127 136 const struct ipv6_pinfo *np = inet6_sk(sk); 137 + 138 + if (ext & (1 << (INET_DIAG_TCLASS - 1))) 139 + RTA_PUT_U8(skb, INET_DIAG_TCLASS, np->tclass); 128 140 129 141 ipv6_addr_copy((struct in6_addr *)r->id.idiag_src, 130 142 &np->rcv_saddr); 131 143 ipv6_addr_copy((struct in6_addr *)r->id.idiag_dst, 132 144 &np->daddr); 133 - if (ext & (1 << (INET_DIAG_TCLASS - 1))) 134 - RTA_PUT_U8(skb, INET_DIAG_TCLASS, np->tclass); 135 145 } 136 146 #endif 137 147
+1 -1
net/ipv4/ip_forward.c
··· 84 84 85 85 rt = skb_rtable(skb); 86 86 87 - if (opt->is_strictroute && ip_hdr(skb)->daddr != rt->rt_gateway) 87 + if (opt->is_strictroute && opt->nexthop != rt->rt_gateway) 88 88 goto sr_failed; 89 89 90 90 if (unlikely(skb->len > dst_mtu(&rt->dst) && !skb_is_gso(skb) &&
+3 -2
net/ipv4/ip_options.c
··· 568 568 ) { 569 569 if (srrptr + 3 > srrspace) 570 570 break; 571 - if (memcmp(&ip_hdr(skb)->daddr, &optptr[srrptr-1], 4) == 0) 571 + if (memcmp(&opt->nexthop, &optptr[srrptr-1], 4) == 0) 572 572 break; 573 573 } 574 574 if (srrptr + 3 <= srrspace) { 575 575 opt->is_changed = 1; 576 576 ip_rt_get_source(&optptr[srrptr-1], skb, rt); 577 + ip_hdr(skb)->daddr = opt->nexthop; 577 578 optptr[2] = srrptr+4; 578 579 } else if (net_ratelimit()) 579 580 printk(KERN_CRIT "ip_forward(): Argh! Destination lost!\n"); ··· 641 640 } 642 641 if (srrptr <= srrspace) { 643 642 opt->srr_is_hit = 1; 644 - iph->daddr = nexthop; 643 + opt->nexthop = nexthop; 645 644 opt->is_changed = 1; 646 645 } 647 646 return 0;
+6 -1
net/ipv4/ipip.c
··· 285 285 if (register_netdevice(dev) < 0) 286 286 goto failed_free; 287 287 288 + strcpy(nt->parms.name, dev->name); 289 + 288 290 dev_hold(dev); 289 291 ipip_tunnel_link(ipn, nt); 290 292 return nt; ··· 761 759 struct ip_tunnel *tunnel = netdev_priv(dev); 762 760 763 761 tunnel->dev = dev; 764 - strcpy(tunnel->parms.name, dev->name); 765 762 766 763 memcpy(dev->dev_addr, &tunnel->parms.iph.saddr, 4); 767 764 memcpy(dev->broadcast, &tunnel->parms.iph.daddr, 4); ··· 826 825 static int __net_init ipip_init_net(struct net *net) 827 826 { 828 827 struct ipip_net *ipn = net_generic(net, ipip_net_id); 828 + struct ip_tunnel *t; 829 829 int err; 830 830 831 831 ipn->tunnels[0] = ipn->tunnels_wc; ··· 850 848 if ((err = register_netdev(ipn->fb_tunnel_dev))) 851 849 goto err_reg_dev; 852 850 851 + t = netdev_priv(ipn->fb_tunnel_dev); 852 + 853 + strcpy(t->parms.name, ipn->fb_tunnel_dev->name); 853 854 return 0; 854 855 855 856 err_reg_dev:
+2 -1
net/ipv4/netfilter.c
··· 64 64 /* Change in oif may mean change in hh_len. */ 65 65 hh_len = skb_dst(skb)->dev->hard_header_len; 66 66 if (skb_headroom(skb) < hh_len && 67 - pskb_expand_head(skb, hh_len - skb_headroom(skb), 0, GFP_ATOMIC)) 67 + pskb_expand_head(skb, HH_DATA_ALIGN(hh_len - skb_headroom(skb)), 68 + 0, GFP_ATOMIC)) 68 69 return -1; 69 70 70 71 return 0;
-1
net/ipv4/netfilter/Kconfig
··· 325 325 # raw + specific targets 326 326 config IP_NF_RAW 327 327 tristate 'raw table support (required for NOTRACK/TRACE)' 328 - depends on NETFILTER_ADVANCED 329 328 help 330 329 This option adds a `raw' table to iptables. This table is the very 331 330 first in the netfilter framework and hooks in at the PREROUTING
+58 -36
net/ipv4/route.c
··· 112 112 #include <net/secure_seq.h> 113 113 114 114 #define RT_FL_TOS(oldflp4) \ 115 - ((u32)(oldflp4->flowi4_tos & (IPTOS_RT_MASK | RTO_ONLINK))) 115 + ((oldflp4)->flowi4_tos & (IPTOS_RT_MASK | RTO_ONLINK)) 116 116 117 117 #define IP_MAX_MTU 0xFFF0 118 118 ··· 131 131 static int ip_rt_min_pmtu __read_mostly = 512 + 20 + 20; 132 132 static int ip_rt_min_advmss __read_mostly = 256; 133 133 static int rt_chain_length_max __read_mostly = 20; 134 + static int redirect_genid; 134 135 135 136 /* 136 137 * Interface to generic destination cache. ··· 139 138 140 139 static struct dst_entry *ipv4_dst_check(struct dst_entry *dst, u32 cookie); 141 140 static unsigned int ipv4_default_advmss(const struct dst_entry *dst); 142 - static unsigned int ipv4_default_mtu(const struct dst_entry *dst); 141 + static unsigned int ipv4_mtu(const struct dst_entry *dst); 143 142 static void ipv4_dst_destroy(struct dst_entry *dst); 144 143 static struct dst_entry *ipv4_negative_advice(struct dst_entry *dst); 145 144 static void ipv4_link_failure(struct sk_buff *skb); ··· 194 193 .gc = rt_garbage_collect, 195 194 .check = ipv4_dst_check, 196 195 .default_advmss = ipv4_default_advmss, 197 - .default_mtu = ipv4_default_mtu, 196 + .mtu = ipv4_mtu, 198 197 .cow_metrics = ipv4_cow_metrics, 199 198 .destroy = ipv4_dst_destroy, 200 199 .ifdown = ipv4_dst_ifdown, ··· 417 416 else { 418 417 struct rtable *r = v; 419 418 struct neighbour *n; 420 - int len; 419 + int len, HHUptod; 421 420 421 + rcu_read_lock(); 422 422 n = dst_get_neighbour(&r->dst); 423 + HHUptod = (n && (n->nud_state & NUD_CONNECTED)) ? 1 : 0; 424 + rcu_read_unlock(); 425 + 423 426 seq_printf(seq, "%s\t%08X\t%08X\t%8X\t%d\t%u\t%d\t" 424 427 "%08X\t%d\t%u\t%u\t%02X\t%d\t%1d\t%08X%n", 425 428 r->dst.dev ? r->dst.dev->name : "*", ··· 437 432 dst_metric(&r->dst, RTAX_RTTVAR)), 438 433 r->rt_key_tos, 439 434 -1, 440 - (n && (n->nud_state & NUD_CONNECTED)) ? 
1 : 0, 435 + HHUptod, 441 436 r->rt_spec_dst, &len); 442 437 443 438 seq_printf(seq, "%*s\n", 127 - len, ""); ··· 842 837 843 838 get_random_bytes(&shuffle, sizeof(shuffle)); 844 839 atomic_add(shuffle + 1U, &net->ipv4.rt_genid); 840 + redirect_genid++; 845 841 } 846 842 847 843 /* ··· 1310 1304 spin_unlock_bh(rt_hash_lock_addr(hash)); 1311 1305 } 1312 1306 1313 - static int check_peer_redir(struct dst_entry *dst, struct inet_peer *peer) 1307 + static void check_peer_redir(struct dst_entry *dst, struct inet_peer *peer) 1314 1308 { 1315 1309 struct rtable *rt = (struct rtable *) dst; 1316 1310 __be32 orig_gw = rt->rt_gateway; ··· 1321 1315 rt->rt_gateway = peer->redirect_learned.a4; 1322 1316 1323 1317 n = ipv4_neigh_lookup(&rt->dst, &rt->rt_gateway); 1324 - if (IS_ERR(n)) 1325 - return PTR_ERR(n); 1318 + if (IS_ERR(n)) { 1319 + rt->rt_gateway = orig_gw; 1320 + return; 1321 + } 1326 1322 old_n = xchg(&rt->dst._neighbour, n); 1327 1323 if (old_n) 1328 1324 neigh_release(old_n); 1329 - if (!n || !(n->nud_state & NUD_VALID)) { 1330 - if (n) 1331 - neigh_event_send(n, NULL); 1332 - rt->rt_gateway = orig_gw; 1333 - return -EAGAIN; 1325 + if (!(n->nud_state & NUD_VALID)) { 1326 + neigh_event_send(n, NULL); 1334 1327 } else { 1335 1328 rt->rt_flags |= RTCF_REDIRECTED; 1336 1329 call_netevent_notifiers(NETEVENT_NEIGH_UPDATE, n); 1337 1330 } 1338 - return 0; 1339 1331 } 1340 1332 1341 1333 /* called in rcu_read_lock() section */ ··· 1395 1391 1396 1392 peer = rt->peer; 1397 1393 if (peer) { 1398 - if (peer->redirect_learned.a4 != new_gw) { 1394 + if (peer->redirect_learned.a4 != new_gw || 1395 + peer->redirect_genid != redirect_genid) { 1399 1396 peer->redirect_learned.a4 = new_gw; 1397 + peer->redirect_genid = redirect_genid; 1400 1398 atomic_inc(&__rt_peer_genid); 1401 1399 } 1402 1400 check_peer_redir(&rt->dst, peer); ··· 1691 1685 } 1692 1686 1693 1687 1694 - static struct dst_entry *ipv4_dst_check(struct dst_entry *dst, u32 cookie) 1688 + static void 
ipv4_validate_peer(struct rtable *rt) 1695 1689 { 1696 - struct rtable *rt = (struct rtable *) dst; 1697 - 1698 - if (rt_is_expired(rt)) 1699 - return NULL; 1700 1690 if (rt->rt_peer_genid != rt_peer_genid()) { 1701 1691 struct inet_peer *peer; 1702 1692 ··· 1701 1699 1702 1700 peer = rt->peer; 1703 1701 if (peer) { 1704 - check_peer_pmtu(dst, peer); 1702 + check_peer_pmtu(&rt->dst, peer); 1705 1703 1704 + if (peer->redirect_genid != redirect_genid) 1705 + peer->redirect_learned.a4 = 0; 1706 1706 if (peer->redirect_learned.a4 && 1707 - peer->redirect_learned.a4 != rt->rt_gateway) { 1708 - if (check_peer_redir(dst, peer)) 1709 - return NULL; 1710 - } 1707 + peer->redirect_learned.a4 != rt->rt_gateway) 1708 + check_peer_redir(&rt->dst, peer); 1711 1709 } 1712 1710 1713 1711 rt->rt_peer_genid = rt_peer_genid(); 1714 1712 } 1713 + } 1714 + 1715 + static struct dst_entry *ipv4_dst_check(struct dst_entry *dst, u32 cookie) 1716 + { 1717 + struct rtable *rt = (struct rtable *) dst; 1718 + 1719 + if (rt_is_expired(rt)) 1720 + return NULL; 1721 + ipv4_validate_peer(rt); 1715 1722 return dst; 1716 1723 } 1717 1724 ··· 1825 1814 return advmss; 1826 1815 } 1827 1816 1828 - static unsigned int ipv4_default_mtu(const struct dst_entry *dst) 1817 + static unsigned int ipv4_mtu(const struct dst_entry *dst) 1829 1818 { 1830 - unsigned int mtu = dst->dev->mtu; 1819 + const struct rtable *rt = (const struct rtable *) dst; 1820 + unsigned int mtu = dst_metric_raw(dst, RTAX_MTU); 1821 + 1822 + if (mtu && rt_is_output_route(rt)) 1823 + return mtu; 1824 + 1825 + mtu = dst->dev->mtu; 1831 1826 1832 1827 if (unlikely(dst_metric_locked(dst, RTAX_MTU))) { 1833 - const struct rtable *rt = (const struct rtable *) dst; 1834 1828 1835 1829 if (rt->rt_gateway != rt->rt_dst && mtu > 576) 1836 1830 mtu = 576; ··· 1868 1852 dst_init_metrics(&rt->dst, peer->metrics, false); 1869 1853 1870 1854 check_peer_pmtu(&rt->dst, peer); 1855 + if (peer->redirect_genid != redirect_genid) 1856 + 
peer->redirect_learned.a4 = 0; 1871 1857 if (peer->redirect_learned.a4 && 1872 1858 peer->redirect_learned.a4 != rt->rt_gateway) { 1873 1859 rt->rt_gateway = peer->redirect_learned.a4; ··· 2375 2357 rth->rt_mark == skb->mark && 2376 2358 net_eq(dev_net(rth->dst.dev), net) && 2377 2359 !rt_is_expired(rth)) { 2360 + ipv4_validate_peer(rth); 2378 2361 if (noref) { 2379 2362 dst_use_noref(&rth->dst, jiffies); 2380 2363 skb_dst_set_noref(skb, &rth->dst); ··· 2434 2415 static struct rtable *__mkroute_output(const struct fib_result *res, 2435 2416 const struct flowi4 *fl4, 2436 2417 __be32 orig_daddr, __be32 orig_saddr, 2437 - int orig_oif, struct net_device *dev_out, 2418 + int orig_oif, __u8 orig_rtos, 2419 + struct net_device *dev_out, 2438 2420 unsigned int flags) 2439 2421 { 2440 2422 struct fib_info *fi = res->fi; 2441 - u32 tos = RT_FL_TOS(fl4); 2442 2423 struct in_device *in_dev; 2443 2424 u16 type = res->type; 2444 2425 struct rtable *rth; ··· 2489 2470 rth->rt_genid = rt_genid(dev_net(dev_out)); 2490 2471 rth->rt_flags = flags; 2491 2472 rth->rt_type = type; 2492 - rth->rt_key_tos = tos; 2473 + rth->rt_key_tos = orig_rtos; 2493 2474 rth->rt_dst = fl4->daddr; 2494 2475 rth->rt_src = fl4->saddr; 2495 2476 rth->rt_route_iif = 0; ··· 2539 2520 static struct rtable *ip_route_output_slow(struct net *net, struct flowi4 *fl4) 2540 2521 { 2541 2522 struct net_device *dev_out = NULL; 2542 - u32 tos = RT_FL_TOS(fl4); 2523 + __u8 tos = RT_FL_TOS(fl4); 2543 2524 unsigned int flags = 0; 2544 2525 struct fib_result res; 2545 2526 struct rtable *rth; ··· 2715 2696 2716 2697 make_route: 2717 2698 rth = __mkroute_output(&res, fl4, orig_daddr, orig_saddr, orig_oif, 2718 - dev_out, flags); 2699 + tos, dev_out, flags); 2719 2700 if (!IS_ERR(rth)) { 2720 2701 unsigned int hash; 2721 2702 ··· 2751 2732 (IPTOS_RT_MASK | RTO_ONLINK)) && 2752 2733 net_eq(dev_net(rth->dst.dev), net) && 2753 2734 !rt_is_expired(rth)) { 2735 + ipv4_validate_peer(rth); 2754 2736 dst_use(&rth->dst, jiffies); 
2755 2737 RT_CACHE_STAT_INC(out_hit); 2756 2738 rcu_read_unlock_bh(); ··· 2775 2755 return NULL; 2776 2756 } 2777 2757 2778 - static unsigned int ipv4_blackhole_default_mtu(const struct dst_entry *dst) 2758 + static unsigned int ipv4_blackhole_mtu(const struct dst_entry *dst) 2779 2759 { 2780 - return 0; 2760 + unsigned int mtu = dst_metric_raw(dst, RTAX_MTU); 2761 + 2762 + return mtu ? : dst->dev->mtu; 2781 2763 } 2782 2764 2783 2765 static void ipv4_rt_blackhole_update_pmtu(struct dst_entry *dst, u32 mtu) ··· 2797 2775 .protocol = cpu_to_be16(ETH_P_IP), 2798 2776 .destroy = ipv4_dst_destroy, 2799 2777 .check = ipv4_blackhole_dst_check, 2800 - .default_mtu = ipv4_blackhole_default_mtu, 2778 + .mtu = ipv4_blackhole_mtu, 2801 2779 .default_advmss = ipv4_default_advmss, 2802 2780 .update_pmtu = ipv4_rt_blackhole_update_pmtu, 2803 2781 .cow_metrics = ipv4_rt_blackhole_cow_metrics,
+8 -7
net/ipv4/udp.c
··· 1164 1164 struct inet_sock *inet = inet_sk(sk); 1165 1165 struct sockaddr_in *sin = (struct sockaddr_in *)msg->msg_name; 1166 1166 struct sk_buff *skb; 1167 - unsigned int ulen; 1167 + unsigned int ulen, copied; 1168 1168 int peeked; 1169 1169 int err; 1170 1170 int is_udplite = IS_UDPLITE(sk); ··· 1186 1186 goto out; 1187 1187 1188 1188 ulen = skb->len - sizeof(struct udphdr); 1189 - if (len > ulen) 1190 - len = ulen; 1191 - else if (len < ulen) 1189 + copied = len; 1190 + if (copied > ulen) 1191 + copied = ulen; 1192 + else if (copied < ulen) 1192 1193 msg->msg_flags |= MSG_TRUNC; 1193 1194 1194 1195 /* ··· 1198 1197 * coverage checksum (UDP-Lite), do it before the copy. 1199 1198 */ 1200 1199 1201 - if (len < ulen || UDP_SKB_CB(skb)->partial_cov) { 1200 + if (copied < ulen || UDP_SKB_CB(skb)->partial_cov) { 1202 1201 if (udp_lib_checksum_complete(skb)) 1203 1202 goto csum_copy_err; 1204 1203 } 1205 1204 1206 1205 if (skb_csum_unnecessary(skb)) 1207 1206 err = skb_copy_datagram_iovec(skb, sizeof(struct udphdr), 1208 - msg->msg_iov, len); 1207 + msg->msg_iov, copied); 1209 1208 else { 1210 1209 err = skb_copy_and_csum_datagram_iovec(skb, 1211 1210 sizeof(struct udphdr), ··· 1234 1233 if (inet->cmsg_flags) 1235 1234 ip_cmsg_recv(msg, skb); 1236 1235 1237 - err = len; 1236 + err = copied; 1238 1237 if (flags & MSG_TRUNC) 1239 1238 err = ulen; 1240 1239
+2 -1
net/ipv6/addrconf.c
··· 1805 1805 return ERR_PTR(-EACCES); 1806 1806 1807 1807 /* Add default multicast route */ 1808 - addrconf_add_mroute(dev); 1808 + if (!(dev->flags & IFF_LOOPBACK)) 1809 + addrconf_add_mroute(dev); 1809 1810 1810 1811 /* Add link local route */ 1811 1812 addrconf_add_lroute(dev);
+1 -1
net/ipv6/inet6_connection_sock.c
··· 85 85 * request_sock (formerly open request) hash tables. 86 86 */ 87 87 static u32 inet6_synq_hash(const struct in6_addr *raddr, const __be16 rport, 88 - const u32 rnd, const u16 synq_hsize) 88 + const u32 rnd, const u32 synq_hsize) 89 89 { 90 90 u32 c; 91 91
+1 -1
net/ipv6/ipv6_sockglue.c
··· 503 503 goto e_inval; 504 504 if (val > 255 || val < -1) 505 505 goto e_inval; 506 - np->mcast_hops = val; 506 + np->mcast_hops = (val == -1 ? IPV6_DEFAULT_MCASTHOPS : val); 507 507 retv = 0; 508 508 break; 509 509
+1 -1
net/ipv6/ndisc.c
··· 1571 1571 } 1572 1572 if (!rt->rt6i_peer) 1573 1573 rt6_bind_peer(rt, 1); 1574 - if (inet_peer_xrlim_allow(rt->rt6i_peer, 1*HZ)) 1574 + if (!inet_peer_xrlim_allow(rt->rt6i_peer, 1*HZ)) 1575 1575 goto release; 1576 1576 1577 1577 if (dev->addr_len) {
-1
net/ipv6/netfilter/Kconfig
··· 186 186 187 187 config IP6_NF_RAW 188 188 tristate 'raw table support (required for TRACE)' 189 - depends on NETFILTER_ADVANCED 190 189 help 191 190 This option adds a `raw' table to ip6tables. This table is the very 192 191 first in the netfilter framework and hooks in at the PREROUTING
+15 -8
net/ipv6/route.c
··· 77 77 const struct in6_addr *dest); 78 78 static struct dst_entry *ip6_dst_check(struct dst_entry *dst, u32 cookie); 79 79 static unsigned int ip6_default_advmss(const struct dst_entry *dst); 80 - static unsigned int ip6_default_mtu(const struct dst_entry *dst); 80 + static unsigned int ip6_mtu(const struct dst_entry *dst); 81 81 static struct dst_entry *ip6_negative_advice(struct dst_entry *); 82 82 static void ip6_dst_destroy(struct dst_entry *); 83 83 static void ip6_dst_ifdown(struct dst_entry *, ··· 144 144 .gc_thresh = 1024, 145 145 .check = ip6_dst_check, 146 146 .default_advmss = ip6_default_advmss, 147 - .default_mtu = ip6_default_mtu, 147 + .mtu = ip6_mtu, 148 148 .cow_metrics = ipv6_cow_metrics, 149 149 .destroy = ip6_dst_destroy, 150 150 .ifdown = ip6_dst_ifdown, ··· 155 155 .neigh_lookup = ip6_neigh_lookup, 156 156 }; 157 157 158 - static unsigned int ip6_blackhole_default_mtu(const struct dst_entry *dst) 158 + static unsigned int ip6_blackhole_mtu(const struct dst_entry *dst) 159 159 { 160 - return 0; 160 + unsigned int mtu = dst_metric_raw(dst, RTAX_MTU); 161 + 162 + return mtu ? : dst->dev->mtu; 161 163 } 162 164 163 165 static void ip6_rt_blackhole_update_pmtu(struct dst_entry *dst, u32 mtu) ··· 177 175 .protocol = cpu_to_be16(ETH_P_IPV6), 178 176 .destroy = ip6_dst_destroy, 179 177 .check = ip6_dst_check, 180 - .default_mtu = ip6_blackhole_default_mtu, 178 + .mtu = ip6_blackhole_mtu, 181 179 .default_advmss = ip6_default_advmss, 182 180 .update_pmtu = ip6_rt_blackhole_update_pmtu, 183 181 .cow_metrics = ip6_rt_blackhole_cow_metrics, ··· 728 726 int attempts = !in_softirq(); 729 727 730 728 if (!(rt->rt6i_flags&RTF_GATEWAY)) { 731 - if (rt->rt6i_dst.plen != 128 && 729 + if (ort->rt6i_dst.plen != 128 && 732 730 ipv6_addr_equal(&ort->rt6i_dst.addr, daddr)) 733 731 rt->rt6i_flags |= RTF_ANYCAST; 734 732 ipv6_addr_copy(&rt->rt6i_gateway, daddr); ··· 1043 1041 return mtu; 1044 1042 } 1045 1043 1046 - static unsigned int ip6_default_mtu(const struct dst_entry *dst) 1044 + static unsigned int ip6_mtu(const struct dst_entry *dst) 1047 1045 { 1048 - unsigned int mtu = IPV6_MIN_MTU; 1049 1046 struct inet6_dev *idev; 1047 + unsigned int mtu = dst_metric_raw(dst, RTAX_MTU); 1048 + 1049 + if (mtu) 1050 + return mtu; 1051 + 1052 + mtu = IPV6_MIN_MTU; 1050 1053 1051 1054 rcu_read_lock(); 1052 1055 idev = __in6_dev_get(dst->dev);
+6 -1
net/ipv6/sit.c
··· 263 263 if (register_netdevice(dev) < 0) 264 264 goto failed_free; 265 265 266 + strcpy(nt->parms.name, dev->name); 267 + 266 268 dev_hold(dev); 267 269 268 270 ipip6_tunnel_link(sitn, nt); ··· 1146 1144 struct ip_tunnel *tunnel = netdev_priv(dev); 1147 1145 1148 1146 tunnel->dev = dev; 1149 - strcpy(tunnel->parms.name, dev->name); 1150 1147 1151 1148 memcpy(dev->dev_addr, &tunnel->parms.iph.saddr, 4); 1152 1149 memcpy(dev->broadcast, &tunnel->parms.iph.daddr, 4); ··· 1208 1207 static int __net_init sit_init_net(struct net *net) 1209 1208 { 1210 1209 struct sit_net *sitn = net_generic(net, sit_net_id); 1210 + struct ip_tunnel *t; 1211 1211 int err; 1212 1212 1213 1213 sitn->tunnels[0] = sitn->tunnels_wc; ··· 1233 1231 if ((err = register_netdev(sitn->fb_tunnel_dev))) 1234 1232 goto err_reg_dev; 1235 1233 1234 + t = netdev_priv(sitn->fb_tunnel_dev); 1235 + 1236 + strcpy(t->parms.name, sitn->fb_tunnel_dev->name); 1236 1237 return 0; 1237 1238 1238 1239 err_reg_dev:
+7 -6
net/ipv6/tcp_ipv6.c
··· 1255 1255 if (!want_cookie || tmp_opt.tstamp_ok) 1256 1256 TCP_ECN_create_request(req, tcp_hdr(skb)); 1257 1257 1258 + treq->iif = sk->sk_bound_dev_if; 1259 + 1260 + /* So that link locals have meaning */ 1261 + if (!sk->sk_bound_dev_if && 1262 + ipv6_addr_type(&treq->rmt_addr) & IPV6_ADDR_LINKLOCAL) 1263 + treq->iif = inet6_iif(skb); 1264 + 1258 1265 if (!isn) { 1259 1266 struct inet_peer *peer = NULL; 1260 1267 ··· 1271 1264 atomic_inc(&skb->users); 1272 1265 treq->pktopts = skb; 1273 1266 } 1274 - treq->iif = sk->sk_bound_dev_if; 1275 - 1276 - /* So that link locals have meaning */ 1277 - if (!sk->sk_bound_dev_if && 1278 - ipv6_addr_type(&treq->rmt_addr) & IPV6_ADDR_LINKLOCAL) 1279 - treq->iif = inet6_iif(skb); 1280 1267 1281 1268 if (want_cookie) { 1282 1269 isn = cookie_v6_init_sequence(sk, skb, &req->mss);
+8 -7
net/ipv6/udp.c
··· 340 340 struct ipv6_pinfo *np = inet6_sk(sk); 341 341 struct inet_sock *inet = inet_sk(sk); 342 342 struct sk_buff *skb; 343 - unsigned int ulen; 343 + unsigned int ulen, copied; 344 344 int peeked; 345 345 int err; 346 346 int is_udplite = IS_UDPLITE(sk); ··· 363 363 goto out; 364 364 365 365 ulen = skb->len - sizeof(struct udphdr); 366 - if (len > ulen) 367 - len = ulen; 368 - else if (len < ulen) 366 + copied = len; 367 + if (copied > ulen) 368 + copied = ulen; 369 + else if (copied < ulen) 369 370 msg->msg_flags |= MSG_TRUNC; 370 371 371 372 is_udp4 = (skb->protocol == htons(ETH_P_IP)); ··· 377 376 * coverage checksum (UDP-Lite), do it before the copy. 378 377 */ 379 378 380 - if (len < ulen || UDP_SKB_CB(skb)->partial_cov) { 379 + if (copied < ulen || UDP_SKB_CB(skb)->partial_cov) { 381 380 if (udp_lib_checksum_complete(skb)) 382 381 goto csum_copy_err; 383 382 } 384 383 385 384 if (skb_csum_unnecessary(skb)) 386 385 err = skb_copy_datagram_iovec(skb, sizeof(struct udphdr), 387 - msg->msg_iov,len); 386 + msg->msg_iov, copied ); 388 387 else { 389 388 err = skb_copy_and_csum_datagram_iovec(skb, sizeof(struct udphdr), msg->msg_iov); 390 389 if (err == -EINVAL) ··· 433 432 datagram_recv_ctl(sk, msg, skb); 434 433 } 435 434 436 - err = len; 435 + err = copied; 437 436 if (flags & MSG_TRUNC) 438 437 err = ulen; 439 438
+1 -1
net/l2tp/l2tp_core.c
··· 1072 1072 1073 1073 /* Get routing info from the tunnel socket */ 1074 1074 skb_dst_drop(skb); 1075 - skb_dst_set(skb, dst_clone(__sk_dst_get(sk))); 1075 + skb_dst_set(skb, dst_clone(__sk_dst_check(sk, 0))); 1076 1076 1077 1077 inet = inet_sk(sk); 1078 1078 fl = &inet->cork.fl;
+80 -48
net/mac80211/agg-tx.c
··· 161 161 return -ENOENT; 162 162 } 163 163 164 + /* if we're already stopping ignore any new requests to stop */ 165 + if (test_bit(HT_AGG_STATE_STOPPING, &tid_tx->state)) { 166 + spin_unlock_bh(&sta->lock); 167 + return -EALREADY; 168 + } 169 + 164 170 if (test_bit(HT_AGG_STATE_WANT_START, &tid_tx->state)) { 165 171 /* not even started yet! */ 166 172 ieee80211_assign_tid_tx(sta, tid, NULL); ··· 175 169 return 0; 176 170 } 177 171 172 + set_bit(HT_AGG_STATE_STOPPING, &tid_tx->state); 173 + 178 174 spin_unlock_bh(&sta->lock); 179 175 180 176 #ifdef CONFIG_MAC80211_HT_DEBUG 181 177 printk(KERN_DEBUG "Tx BA session stop requested for %pM tid %u\n", 182 178 sta->sta.addr, tid); 183 179 #endif /* CONFIG_MAC80211_HT_DEBUG */ 184 - 185 - set_bit(HT_AGG_STATE_STOPPING, &tid_tx->state); 186 180 187 181 del_timer_sync(&tid_tx->addba_resp_timer); 188 182 ··· 192 186 * with locking to ensure proper access. 193 187 */ 194 188 clear_bit(HT_AGG_STATE_OPERATIONAL, &tid_tx->state); 189 + 190 + /* 191 + * There might be a few packets being processed right now (on 192 + * another CPU) that have already gotten past the aggregation 193 + * check when it was still OPERATIONAL and consequently have 194 + * IEEE80211_TX_CTL_AMPDU set. In that case, this code might 195 + * call into the driver at the same time or even before the 196 + * TX paths calls into it, which could confuse the driver. 197 + * 198 + * Wait for all currently running TX paths to finish before 199 + * telling the driver. New packets will not go through since 200 + * the aggregation session is no longer OPERATIONAL. 201 + */ 202 + synchronize_net(); 195 203 196 204 tid_tx->stop_initiator = initiator; 197 205 tid_tx->tx_stop = tx; ··· 303 283 __release(agg_queue); 304 284 } 305 285 286 + /* 287 + * splice packets from the STA's pending to the local pending, 288 + * requires a call to ieee80211_agg_splice_finish later 289 + */ 290 + static void __acquires(agg_queue) 291 + ieee80211_agg_splice_packets(struct ieee80211_local *local, 292 + struct tid_ampdu_tx *tid_tx, u16 tid) 293 + { 294 + int queue = ieee80211_ac_from_tid(tid); 295 + unsigned long flags; 296 + 297 + ieee80211_stop_queue_agg(local, tid); 298 + 299 + if (WARN(!tid_tx, "TID %d gone but expected when splicing aggregates" 300 + " from the pending queue\n", tid)) 301 + return; 302 + 303 + if (!skb_queue_empty(&tid_tx->pending)) { 304 + spin_lock_irqsave(&local->queue_stop_reason_lock, flags); 305 + /* copy over remaining packets */ 306 + skb_queue_splice_tail_init(&tid_tx->pending, 307 + &local->pending[queue]); 308 + spin_unlock_irqrestore(&local->queue_stop_reason_lock, flags); 309 + } 310 + } 311 + 312 + static void __releases(agg_queue) 313 + ieee80211_agg_splice_finish(struct ieee80211_local *local, u16 tid) 314 + { 315 + ieee80211_wake_queue_agg(local, tid); 316 + } 317 + 306 318 void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid) 307 319 { 308 320 struct tid_ampdu_tx *tid_tx; ··· 346 294 tid_tx = rcu_dereference_protected_tid_tx(sta, tid); 347 295 348 296 /* 349 - * While we're asking the driver about the aggregation, 350 - * stop the AC queue so that we don't have to worry 351 - * about frames that came in while we were doing that, 352 - * which would require us to put them to the AC pending 353 - * afterwards which just makes the code more complex. 297 + * Start queuing up packets for this aggregation session. 298 + * We're going to release them once the driver is OK with 299 + * that. 354 300 */ 355 - ieee80211_stop_queue_agg(local, tid); 356 - 357 301 clear_bit(HT_AGG_STATE_WANT_START, &tid_tx->state); 358 302 359 303 /* 360 - * make sure no packets are being processed to get 361 - * valid starting sequence number 304 + * Make sure no packets are being processed. This ensures that 305 + * we have a valid starting sequence number and that in-flight 306 + * packets have been flushed out and no packets for this TID 307 + * will go into the driver during the ampdu_action call. 362 308 */ 363 309 synchronize_net(); 364 310 ··· 370 320 " tid %d\n", tid); 371 321 #endif 372 322 spin_lock_bh(&sta->lock); 323 + ieee80211_agg_splice_packets(local, tid_tx, tid); 373 324 ieee80211_assign_tid_tx(sta, tid, NULL); 325 + ieee80211_agg_splice_finish(local, tid); 374 326 spin_unlock_bh(&sta->lock); 375 327 376 - ieee80211_wake_queue_agg(local, tid); 377 328 kfree_rcu(tid_tx, rcu_head); 378 329 return; 379 330 } 380 - 381 - /* we can take packets again now */ 382 - ieee80211_wake_queue_agg(local, tid); 383 331 384 332 /* activate the timer for the recipient's addBA response */ 385 333 mod_timer(&tid_tx->addba_resp_timer, jiffies + ADDBA_RESP_INTERVAL); ··· 493 445 return ret; 494 446 } 495 447 EXPORT_SYMBOL(ieee80211_start_tx_ba_session); 496 - 497 - /* 498 - * splice packets from the STA's pending to the local pending, 499 - * requires a call to ieee80211_agg_splice_finish later 500 - */ 501 - static void __acquires(agg_queue) 502 - ieee80211_agg_splice_packets(struct ieee80211_local *local, 503 - struct tid_ampdu_tx *tid_tx, u16 tid) 504 - { 505 - int queue = ieee80211_ac_from_tid(tid); 506 - unsigned long flags; 507 - 508 - ieee80211_stop_queue_agg(local, tid); 509 - 510 - if (WARN(!tid_tx, "TID %d gone but expected when splicing aggregates" 511 - " from the pending queue\n", tid)) 512 - return; 513 - 514 - if (!skb_queue_empty(&tid_tx->pending)) { 515 - spin_lock_irqsave(&local->queue_stop_reason_lock, flags); 516 - /* copy over remaining packets */ 517 - skb_queue_splice_tail_init(&tid_tx->pending, 518 - &local->pending[queue]); 519 - spin_unlock_irqrestore(&local->queue_stop_reason_lock, flags); 520 - } 521 - } 522 - 523 - static void __releases(agg_queue) 524 - ieee80211_agg_splice_finish(struct ieee80211_local *local, u16 tid) 525 - { 526 - ieee80211_wake_queue_agg(local, tid); 527 - } 528 448 529 449 static void ieee80211_agg_tx_operational(struct ieee80211_local *local, 530 450 struct sta_info *sta, u16 tid) ··· 773 757 goto out; 774 758 } 775 759 776 - del_timer(&tid_tx->addba_resp_timer); 760 + del_timer_sync(&tid_tx->addba_resp_timer); 777 761 778 762 #ifdef CONFIG_MAC80211_HT_DEBUG 779 763 printk(KERN_DEBUG "switched off addBA timer for tid %d\n", tid); 780 764 #endif 765 + 766 + /* 767 + * addba_resp_timer may have fired before we got here, and 768 + * caused WANT_STOP to be set. If the stop then was already 769 + * processed further, STOPPING might be set. 770 + */ 771 + if (test_bit(HT_AGG_STATE_WANT_STOP, &tid_tx->state) || 772 + test_bit(HT_AGG_STATE_STOPPING, &tid_tx->state)) { 773 + #ifdef CONFIG_MAC80211_HT_DEBUG 774 + printk(KERN_DEBUG 775 + "got addBA resp for tid %d but we already gave up\n", 776 + tid); 777 + #endif 778 + goto out; 779 + } 780 + 781 781 /* 782 782 * IEEE 802.11-2007 7.3.1.14: 783 783 * In an ADDBA Response frame, when the Status Code field
+2 -2
net/mac80211/debugfs_sta.c
··· 274 274 275 275 PRINT_HT_CAP((htc->cap & BIT(10)), "HT Delayed Block Ack"); 276 276 277 - PRINT_HT_CAP((htc->cap & BIT(11)), "Max AMSDU length: " 278 - "3839 bytes"); 279 277 PRINT_HT_CAP(!(htc->cap & BIT(11)), "Max AMSDU length: " 278 + "3839 bytes"); 279 + PRINT_HT_CAP((htc->cap & BIT(11)), "Max AMSDU length: " 280 280 "7935 bytes"); 281 281 282 282 /*
+6
net/mac80211/main.c
··· 757 757 if (!local->int_scan_req) 758 758 return -ENOMEM; 759 759 760 + for (band = 0; band < IEEE80211_NUM_BANDS; band++) { 761 + if (!local->hw.wiphy->bands[band]) 762 + continue; 763 + local->int_scan_req->rates[band] = (u32) -1; 764 + } 765 + 760 766 /* if low-level driver supports AP, we also support VLAN */ 761 767 if (local->hw.wiphy->interface_modes & BIT(NL80211_IFTYPE_AP)) { 762 768 hw->wiphy->interface_modes |= BIT(NL80211_IFTYPE_AP_VLAN);
+4 -4
net/mac80211/status.c
··· 260 260 struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data; 261 261 struct ieee80211_radiotap_header *rthdr; 262 262 unsigned char *pos; 263 - __le16 txflags; 263 + u16 txflags; 264 264 265 265 rthdr = (struct ieee80211_radiotap_header *) skb_push(skb, rtap_len); 266 266 ··· 290 290 txflags = 0; 291 291 if (!(info->flags & IEEE80211_TX_STAT_ACK) && 292 292 !is_multicast_ether_addr(hdr->addr1)) 293 - txflags |= cpu_to_le16(IEEE80211_RADIOTAP_F_TX_FAIL); 293 + txflags |= IEEE80211_RADIOTAP_F_TX_FAIL; 294 294 295 295 if ((info->status.rates[0].flags & IEEE80211_TX_RC_USE_RTS_CTS) || 296 296 (info->status.rates[0].flags & IEEE80211_TX_RC_USE_CTS_PROTECT)) 297 - txflags |= cpu_to_le16(IEEE80211_RADIOTAP_F_TX_CTS); 297 + txflags |= IEEE80211_RADIOTAP_F_TX_CTS; 298 298 else if (info->status.rates[0].flags & IEEE80211_TX_RC_USE_RTS_CTS) 299 - txflags |= cpu_to_le16(IEEE80211_RADIOTAP_F_TX_RTS); 299 + txflags |= IEEE80211_RADIOTAP_F_TX_RTS; 300 300 301 301 put_unaligned_le16(txflags, pos); 302 302 pos += 2;
-1
net/mac80211/util.c
··· 1039 1039 struct ieee80211_sub_if_data, 1040 1040 u.ap); 1041 1041 1042 - memset(&sta->sta.drv_priv, 0, hw->sta_data_size); 1043 1042 WARN_ON(drv_sta_add(local, sdata, &sta->sta)); 1044 1043 } 1045 1044 }
-2
net/netfilter/Kconfig
··· 201 201 202 202 config NF_CONNTRACK_NETBIOS_NS 203 203 tristate "NetBIOS name service protocol support" 204 - depends on NETFILTER_ADVANCED 205 204 select NF_CONNTRACK_BROADCAST 206 205 help 207 206 NetBIOS name service requests are sent as broadcast messages from an ··· 541 542 tristate '"NOTRACK" target support' 542 543 depends on IP_NF_RAW || IP6_NF_RAW 543 544 depends on NF_CONNTRACK 544 - depends on NETFILTER_ADVANCED 545 545 help 546 546 The NOTRACK target allows a select rule to specify 547 547 which packets *not* to enter the conntrack/NAT
+1 -1
net/netfilter/ipset/ip_set_hash_ipport.c
··· 158 158 const struct ip_set_hash *h = set->data; 159 159 ipset_adtfn adtfn = set->variant->adt[adt]; 160 160 struct hash_ipport4_elem data = { }; 161 - u32 ip, ip_to, p = 0, port, port_to; 161 + u32 ip, ip_to = 0, p = 0, port, port_to; 162 162 u32 timeout = h->timeout; 163 163 bool with_ports = false; 164 164 int ret;
+1 -1
net/netfilter/ipset/ip_set_hash_ipportip.c
··· 162 162 const struct ip_set_hash *h = set->data; 163 163 ipset_adtfn adtfn = set->variant->adt[adt]; 164 164 struct hash_ipportip4_elem data = { }; 165 - u32 ip, ip_to, p = 0, port, port_to; 165 + u32 ip, ip_to = 0, p = 0, port, port_to; 166 166 u32 timeout = h->timeout; 167 167 bool with_ports = false; 168 168 int ret;
+1 -1
net/netfilter/ipset/ip_set_hash_ipportnet.c
··· 184 184 const struct ip_set_hash *h = set->data; 185 185 ipset_adtfn adtfn = set->variant->adt[adt]; 186 186 struct hash_ipportnet4_elem data = { .cidr = HOST_MASK }; 187 - u32 ip, ip_to, p = 0, port, port_to; 187 + u32 ip, ip_to = 0, p = 0, port, port_to; 188 188 u32 ip2_from = 0, ip2_to, ip2_last, ip2; 189 189 u32 timeout = h->timeout; 190 190 bool with_ports = false;
+18 -19
net/netfilter/nf_conntrack_ecache.c
··· 27 27 28 28 static DEFINE_MUTEX(nf_ct_ecache_mutex); 29 29 30 - struct nf_ct_event_notifier __rcu *nf_conntrack_event_cb __read_mostly; 31 - EXPORT_SYMBOL_GPL(nf_conntrack_event_cb); 32 - 33 - struct nf_exp_event_notifier __rcu *nf_expect_event_cb __read_mostly; 34 - EXPORT_SYMBOL_GPL(nf_expect_event_cb); 35 - 36 30 /* deliver cached events and clear cache entry - must be called with locally 37 31 * disabled softirqs */ 38 32 void nf_ct_deliver_cached_events(struct nf_conn *ct) 39 33 { 34 + struct net *net = nf_ct_net(ct); 40 35 unsigned long events; 41 36 struct nf_ct_event_notifier *notify; 42 37 struct nf_conntrack_ecache *e; 43 38 44 39 rcu_read_lock(); 45 - notify = rcu_dereference(nf_conntrack_event_cb); 40 + notify = rcu_dereference(net->ct.nf_conntrack_event_cb); 46 41 if (notify == NULL) 47 42 goto out_unlock; 48 43 ··· 78 83 } 79 84 EXPORT_SYMBOL_GPL(nf_ct_deliver_cached_events); 80 85 81 - int nf_conntrack_register_notifier(struct nf_ct_event_notifier *new) 86 + int nf_conntrack_register_notifier(struct net *net, 87 + struct nf_ct_event_notifier *new) 82 88 { 83 89 int ret = 0; 84 90 struct nf_ct_event_notifier *notify; 85 91 86 92 mutex_lock(&nf_ct_ecache_mutex); 87 - notify = rcu_dereference_protected(nf_conntrack_event_cb, 93 + notify = rcu_dereference_protected(net->ct.nf_conntrack_event_cb, 88 94 lockdep_is_held(&nf_ct_ecache_mutex)); 89 95 if (notify != NULL) { 90 96 ret = -EBUSY; 91 97 goto out_unlock; 92 98 } 93 - RCU_INIT_POINTER(nf_conntrack_event_cb, new); 99 + RCU_INIT_POINTER(net->ct.nf_conntrack_event_cb, new); 94 100 mutex_unlock(&nf_ct_ecache_mutex); 95 101 return ret; 96 102 ··· 101 105 } 102 106 EXPORT_SYMBOL_GPL(nf_conntrack_register_notifier); 103 107 104 - void nf_conntrack_unregister_notifier(struct nf_ct_event_notifier *new) 108 + void nf_conntrack_unregister_notifier(struct net *net, 109 + struct nf_ct_event_notifier *new) 105 110 { 106 111 struct nf_ct_event_notifier *notify; 107 112 108 113 mutex_lock(&nf_ct_ecache_mutex); 
109 - notify = rcu_dereference_protected(nf_conntrack_event_cb, 114 + notify = rcu_dereference_protected(net->ct.nf_conntrack_event_cb, 110 115 lockdep_is_held(&nf_ct_ecache_mutex)); 111 116 BUG_ON(notify != new); 112 - RCU_INIT_POINTER(nf_conntrack_event_cb, NULL); 117 + RCU_INIT_POINTER(net->ct.nf_conntrack_event_cb, NULL); 113 118 mutex_unlock(&nf_ct_ecache_mutex); 114 119 } 115 120 EXPORT_SYMBOL_GPL(nf_conntrack_unregister_notifier); 116 121 117 - int nf_ct_expect_register_notifier(struct nf_exp_event_notifier *new) 122 + int nf_ct_expect_register_notifier(struct net *net, 123 + struct nf_exp_event_notifier *new) 118 124 { 119 125 int ret = 0; 120 126 struct nf_exp_event_notifier *notify; 121 127 122 128 mutex_lock(&nf_ct_ecache_mutex); 123 - notify = rcu_dereference_protected(nf_expect_event_cb, 129 + notify = rcu_dereference_protected(net->ct.nf_expect_event_cb, 124 130 lockdep_is_held(&nf_ct_ecache_mutex)); 125 131 if (notify != NULL) { 126 132 ret = -EBUSY; 127 133 goto out_unlock; 128 134 } 129 - RCU_INIT_POINTER(nf_expect_event_cb, new); 135 + RCU_INIT_POINTER(net->ct.nf_expect_event_cb, new); 130 136 mutex_unlock(&nf_ct_ecache_mutex); 131 137 return ret; 132 138 ··· 138 140 } 139 141 EXPORT_SYMBOL_GPL(nf_ct_expect_register_notifier); 140 142 141 - void nf_ct_expect_unregister_notifier(struct nf_exp_event_notifier *new) 143 + void nf_ct_expect_unregister_notifier(struct net *net, 144 + struct nf_exp_event_notifier *new) 142 145 { 143 146 struct nf_exp_event_notifier *notify; 144 147 145 148 mutex_lock(&nf_ct_ecache_mutex); 146 - notify = rcu_dereference_protected(nf_expect_event_cb, 149 + notify = rcu_dereference_protected(net->ct.nf_expect_event_cb, 147 150 lockdep_is_held(&nf_ct_ecache_mutex)); 148 151 BUG_ON(notify != new); 149 - RCU_INIT_POINTER(nf_expect_event_cb, NULL); 152 + RCU_INIT_POINTER(net->ct.nf_expect_event_cb, NULL); 150 153 mutex_unlock(&nf_ct_ecache_mutex); 151 154 } 152 155 EXPORT_SYMBOL_GPL(nf_ct_expect_unregister_notifier);
+52 -21
net/netfilter/nf_conntrack_netlink.c
··· 4 4 * (C) 2001 by Jay Schulist <jschlst@samba.org> 5 5 * (C) 2002-2006 by Harald Welte <laforge@gnumonks.org> 6 6 * (C) 2003 by Patrick Mchardy <kaber@trash.net> 7 - * (C) 2005-2008 by Pablo Neira Ayuso <pablo@netfilter.org> 7 + * (C) 2005-2011 by Pablo Neira Ayuso <pablo@netfilter.org> 8 8 * 9 9 * Initial connection tracking via netlink development funded and 10 10 * generally made possible by Network Robots, Inc. (www.networkrobots.com) ··· 2163 2163 MODULE_ALIAS_NFNL_SUBSYS(NFNL_SUBSYS_CTNETLINK); 2164 2164 MODULE_ALIAS_NFNL_SUBSYS(NFNL_SUBSYS_CTNETLINK_EXP); 2165 2165 2166 + static int __net_init ctnetlink_net_init(struct net *net) 2167 + { 2168 + #ifdef CONFIG_NF_CONNTRACK_EVENTS 2169 + int ret; 2170 + 2171 + ret = nf_conntrack_register_notifier(net, &ctnl_notifier); 2172 + if (ret < 0) { 2173 + pr_err("ctnetlink_init: cannot register notifier.\n"); 2174 + goto err_out; 2175 + } 2176 + 2177 + ret = nf_ct_expect_register_notifier(net, &ctnl_notifier_exp); 2178 + if (ret < 0) { 2179 + pr_err("ctnetlink_init: cannot expect register notifier.\n"); 2180 + goto err_unreg_notifier; 2181 + } 2182 + #endif 2183 + return 0; 2184 + 2185 + #ifdef CONFIG_NF_CONNTRACK_EVENTS 2186 + err_unreg_notifier: 2187 + nf_conntrack_unregister_notifier(net, &ctnl_notifier); 2188 + err_out: 2189 + return ret; 2190 + #endif 2191 + } 2192 + 2193 + static void ctnetlink_net_exit(struct net *net) 2194 + { 2195 + #ifdef CONFIG_NF_CONNTRACK_EVENTS 2196 + nf_ct_expect_unregister_notifier(net, &ctnl_notifier_exp); 2197 + nf_conntrack_unregister_notifier(net, &ctnl_notifier); 2198 + #endif 2199 + } 2200 + 2201 + static void __net_exit ctnetlink_net_exit_batch(struct list_head *net_exit_list) 2202 + { 2203 + struct net *net; 2204 + 2205 + list_for_each_entry(net, net_exit_list, exit_list) 2206 + ctnetlink_net_exit(net); 2207 + } 2208 + 2209 + static struct pernet_operations ctnetlink_net_ops = { 2210 + .init = ctnetlink_net_init, 2211 + .exit_batch = ctnetlink_net_exit_batch, 2212 + }; 2213 + 
2166 2214 static int __init ctnetlink_init(void) 2167 2215 { 2168 2216 int ret; ··· 2228 2180 goto err_unreg_subsys; 2229 2181 } 2230 2182 2231 - #ifdef CONFIG_NF_CONNTRACK_EVENTS 2232 - ret = nf_conntrack_register_notifier(&ctnl_notifier); 2233 - if (ret < 0) { 2234 - pr_err("ctnetlink_init: cannot register notifier.\n"); 2183 + if (register_pernet_subsys(&ctnetlink_net_ops)) { 2184 + pr_err("ctnetlink_init: cannot register pernet operations\n"); 2235 2185 goto err_unreg_exp_subsys; 2236 2186 } 2237 2187 2238 - ret = nf_ct_expect_register_notifier(&ctnl_notifier_exp); 2239 - if (ret < 0) { 2240 - pr_err("ctnetlink_init: cannot expect register notifier.\n"); 2241 - goto err_unreg_notifier; 2242 - } 2243 - #endif 2244 - 2245 2188 return 0; 2246 2189 2247 - #ifdef CONFIG_NF_CONNTRACK_EVENTS 2248 - err_unreg_notifier: 2249 - nf_conntrack_unregister_notifier(&ctnl_notifier); 2250 2190 err_unreg_exp_subsys: 2251 2191 nfnetlink_subsys_unregister(&ctnl_exp_subsys); 2252 - #endif 2253 2192 err_unreg_subsys: 2254 2193 nfnetlink_subsys_unregister(&ctnl_subsys); 2255 2194 err_out: ··· 2248 2213 pr_info("ctnetlink: unregistering from nfnetlink.\n"); 2249 2214 2250 2215 nf_ct_remove_userspace_expectations(); 2251 - #ifdef CONFIG_NF_CONNTRACK_EVENTS 2252 - nf_ct_expect_unregister_notifier(&ctnl_notifier_exp); 2253 - nf_conntrack_unregister_notifier(&ctnl_notifier); 2254 - #endif 2255 - 2216 + unregister_pernet_subsys(&ctnetlink_net_ops); 2256 2217 nfnetlink_subsys_unregister(&ctnl_exp_subsys); 2257 2218 nfnetlink_subsys_unregister(&ctnl_subsys); 2258 2219 }
+16 -10
net/netlabel/netlabel_kapi.c
··· 111 111 struct netlbl_domaddr_map *addrmap = NULL; 112 112 struct netlbl_domaddr4_map *map4 = NULL; 113 113 struct netlbl_domaddr6_map *map6 = NULL; 114 - const struct in_addr *addr4, *mask4; 115 - const struct in6_addr *addr6, *mask6; 116 114 117 115 entry = kzalloc(sizeof(*entry), GFP_ATOMIC); 118 116 if (entry == NULL) ··· 131 133 INIT_LIST_HEAD(&addrmap->list6); 132 134 133 135 switch (family) { 134 - case AF_INET: 135 - addr4 = addr; 136 - mask4 = mask; 136 + case AF_INET: { 137 + const struct in_addr *addr4 = addr; 138 + const struct in_addr *mask4 = mask; 137 139 map4 = kzalloc(sizeof(*map4), GFP_ATOMIC); 138 140 if (map4 == NULL) 139 141 goto cfg_unlbl_map_add_failure; ··· 146 148 if (ret_val != 0) 147 149 goto cfg_unlbl_map_add_failure; 148 150 break; 149 - case AF_INET6: 150 - addr6 = addr; 151 - mask6 = mask; 151 + } 152 + #if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE) 153 + case AF_INET6: { 154 + const struct in6_addr *addr6 = addr; 155 + const struct in6_addr *mask6 = mask; 152 156 map6 = kzalloc(sizeof(*map6), GFP_ATOMIC); 153 157 if (map6 == NULL) 154 158 goto cfg_unlbl_map_add_failure; ··· 162 162 map6->list.addr.s6_addr32[3] &= mask6->s6_addr32[3]; 163 163 ipv6_addr_copy(&map6->list.mask, mask6); 164 164 map6->list.valid = 1; 165 - ret_val = netlbl_af4list_add(&map4->list, 166 - &addrmap->list4); 165 + ret_val = netlbl_af6list_add(&map6->list, 166 + &addrmap->list6); 167 167 if (ret_val != 0) 168 168 goto cfg_unlbl_map_add_failure; 169 169 break; 170 + } 171 + #endif /* IPv6 */ 170 172 default: 171 173 goto cfg_unlbl_map_add_failure; 172 174 break; ··· 227 225 case AF_INET: 228 226 addr_len = sizeof(struct in_addr); 229 227 break; 228 + #if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE) 230 229 case AF_INET6: 231 230 addr_len = sizeof(struct in6_addr); 232 231 break; 232 + #endif /* IPv6 */ 233 233 default: 234 234 return -EPFNOSUPPORT; 235 235 } ··· 270 266 case AF_INET: 271 267 addr_len = sizeof(struct in_addr); 272 268 break; 
269 + #if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE) 273 270 case AF_INET6: 274 271 addr_len = sizeof(struct in6_addr); 275 272 break; 273 + #endif /* IPv6 */ 276 274 default: 277 275 return -EPFNOSUPPORT; 278 276 }
+1 -1
net/sched/sch_gred.c
··· 385 385 struct gred_sched_data *q; 386 386 387 387 if (table->tab[dp] == NULL) { 388 - table->tab[dp] = kzalloc(sizeof(*q), GFP_KERNEL); 388 + table->tab[dp] = kzalloc(sizeof(*q), GFP_ATOMIC); 389 389 if (table->tab[dp] == NULL) 390 390 return -ENOMEM; 391 391 }
+2 -2
net/sched/sch_red.c
··· 209 209 ctl->Plog, ctl->Scell_log, 210 210 nla_data(tb[TCA_RED_STAB])); 211 211 212 - if (skb_queue_empty(&sch->q)) 213 - red_end_of_idle_period(&q->parms); 212 + if (!q->qdisc->q.qlen) 213 + red_start_of_idle_period(&q->parms); 214 214 215 215 sch_tree_unlock(sch); 216 216 return 0;
+20 -11
net/sched/sch_teql.c
··· 225 225 226 226 227 227 static int 228 - __teql_resolve(struct sk_buff *skb, struct sk_buff *skb_res, struct net_device *dev) 228 + __teql_resolve(struct sk_buff *skb, struct sk_buff *skb_res, 229 + struct net_device *dev, struct netdev_queue *txq, 230 + struct neighbour *mn) 229 231 { 230 - struct netdev_queue *dev_queue = netdev_get_tx_queue(dev, 0); 231 - struct teql_sched_data *q = qdisc_priv(dev_queue->qdisc); 232 - struct neighbour *mn = dst_get_neighbour(skb_dst(skb)); 232 + struct teql_sched_data *q = qdisc_priv(txq->qdisc); 233 233 struct neighbour *n = q->ncache; 234 234 235 235 if (mn->tbl == NULL) ··· 262 262 } 263 263 264 264 static inline int teql_resolve(struct sk_buff *skb, 265 - struct sk_buff *skb_res, struct net_device *dev) 265 + struct sk_buff *skb_res, 266 + struct net_device *dev, 267 + struct netdev_queue *txq) 266 268 { 267 - struct netdev_queue *txq = netdev_get_tx_queue(dev, 0); 269 + struct dst_entry *dst = skb_dst(skb); 270 + struct neighbour *mn; 271 + int res; 272 + 268 273 if (txq->qdisc == &noop_qdisc) 269 274 return -ENODEV; 270 275 271 - if (dev->header_ops == NULL || 272 - skb_dst(skb) == NULL || 273 - dst_get_neighbour(skb_dst(skb)) == NULL) 276 + if (!dev->header_ops || !dst) 274 277 return 0; 275 - return __teql_resolve(skb, skb_res, dev); 278 + 279 + rcu_read_lock(); 280 + mn = dst_get_neighbour(dst); 281 + res = mn ? __teql_resolve(skb, skb_res, dev, txq, mn) : 0; 282 + rcu_read_unlock(); 283 + 284 + return res; 276 285 } 277 286 278 287 static netdev_tx_t teql_master_xmit(struct sk_buff *skb, struct net_device *dev) ··· 316 307 continue; 317 308 } 318 309 319 - switch (teql_resolve(skb, skb_res, slave)) { 310 + switch (teql_resolve(skb, skb_res, slave, slave_txq)) { 320 311 case 0: 321 312 if (__netif_tx_trylock(slave_txq)) { 322 313 unsigned int length = qdisc_pkt_len(skb);
+1 -1
net/sctp/auth.c
··· 82 82 struct sctp_auth_bytes *key; 83 83 84 84 /* Verify that we are not going to overflow INT_MAX */ 85 - if ((INT_MAX - key_len) < sizeof(struct sctp_auth_bytes)) 85 + if (key_len > (INT_MAX - sizeof(struct sctp_auth_bytes))) 86 86 return NULL; 87 87 88 88 /* Allocate the shared key */
+1 -2
net/sunrpc/xprtsock.c
··· 496 496 struct rpc_rqst *req = task->tk_rqstp; 497 497 struct rpc_xprt *xprt = req->rq_xprt; 498 498 struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt); 499 - int ret = 0; 499 + int ret = -EAGAIN; 500 500 501 501 dprintk("RPC: %5u xmit incomplete (%u left of %u)\n", 502 502 task->tk_pid, req->rq_slen - req->rq_bytes_sent, ··· 508 508 /* Don't race with disconnect */ 509 509 if (xprt_connected(xprt)) { 510 510 if (test_bit(SOCK_ASYNC_NOSPACE, &transport->sock->flags)) { 511 - ret = -EAGAIN; 512 511 /* 513 512 * Notify TCP that we're limited by the application 514 513 * window size
+4
net/unix/af_unix.c
··· 1957 1957 if ((UNIXCB(skb).pid != siocb->scm->pid) || 1958 1958 (UNIXCB(skb).cred != siocb->scm->cred)) { 1959 1959 skb_queue_head(&sk->sk_receive_queue, skb); 1960 + sk->sk_data_ready(sk, skb->len); 1960 1961 break; 1961 1962 } 1962 1963 } else { ··· 1975 1974 chunk = min_t(unsigned int, skb->len, size); 1976 1975 if (memcpy_toiovec(msg->msg_iov, skb->data, chunk)) { 1977 1976 skb_queue_head(&sk->sk_receive_queue, skb); 1977 + sk->sk_data_ready(sk, skb->len); 1978 1978 if (copied == 0) 1979 1979 copied = -EFAULT; 1980 1980 break; ··· 1993 1991 /* put the skb back if we didn't use it up.. */ 1994 1992 if (skb->len) { 1995 1993 skb_queue_head(&sk->sk_receive_queue, skb); 1994 + sk->sk_data_ready(sk, skb->len); 1996 1995 break; 1997 1996 } 1998 1997 ··· 2009 2006 2010 2007 /* put message back and return */ 2011 2008 skb_queue_head(&sk->sk_receive_queue, skb); 2009 + sk->sk_data_ready(sk, skb->len); 2012 2010 break; 2013 2011 } 2014 2012 } while (size);
+2 -2
net/wireless/nl80211.c
··· 89 89 [NL80211_ATTR_IFINDEX] = { .type = NLA_U32 }, 90 90 [NL80211_ATTR_IFNAME] = { .type = NLA_NUL_STRING, .len = IFNAMSIZ-1 }, 91 91 92 - [NL80211_ATTR_MAC] = { .type = NLA_BINARY, .len = ETH_ALEN }, 93 - [NL80211_ATTR_PREV_BSSID] = { .type = NLA_BINARY, .len = ETH_ALEN }, 92 + [NL80211_ATTR_MAC] = { .len = ETH_ALEN }, 93 + [NL80211_ATTR_PREV_BSSID] = { .len = ETH_ALEN }, 94 94 95 95 [NL80211_ATTR_KEY] = { .type = NLA_NESTED, }, 96 96 [NL80211_ATTR_KEY_DATA] = { .type = NLA_BINARY,
+33 -16
net/wireless/reg.c
··· 57 57 #define REG_DBG_PRINT(args...) 58 58 #endif 59 59 60 + static struct regulatory_request core_request_world = { 61 + .initiator = NL80211_REGDOM_SET_BY_CORE, 62 + .alpha2[0] = '0', 63 + .alpha2[1] = '0', 64 + .intersect = false, 65 + .processed = true, 66 + .country_ie_env = ENVIRON_ANY, 67 + }; 68 + 60 69 /* Receipt of information from last regulatory request */ 61 - static struct regulatory_request *last_request; 70 + static struct regulatory_request *last_request = &core_request_world; 62 71 63 72 /* To trigger userspace events */ 64 73 static struct platform_device *reg_pdev; ··· 159 150 module_param(ieee80211_regdom, charp, 0444); 160 151 MODULE_PARM_DESC(ieee80211_regdom, "IEEE 802.11 regulatory domain code"); 161 152 162 - static void reset_regdomains(void) 153 + static void reset_regdomains(bool full_reset) 163 154 { 164 155 /* avoid freeing static information or freeing something twice */ 165 156 if (cfg80211_regdomain == cfg80211_world_regdom) ··· 174 165 175 166 cfg80211_world_regdom = &world_regdom; 176 167 cfg80211_regdomain = NULL; 168 + 169 + if (!full_reset) 170 + return; 171 + 172 + if (last_request != &core_request_world) 173 + kfree(last_request); 174 + last_request = &core_request_world; 177 175 } 178 176 179 177 /* ··· 191 175 { 192 176 BUG_ON(!last_request); 193 177 194 - reset_regdomains(); 178 + reset_regdomains(false); 195 179 196 180 cfg80211_world_regdom = rd; 197 181 cfg80211_regdomain = rd; ··· 1423 1407 } 1424 1408 1425 1409 new_request: 1426 - kfree(last_request); 1410 + if (last_request != &core_request_world) 1411 + kfree(last_request); 1427 1412 1428 1413 last_request = pending_request; 1429 1414 last_request->intersect = intersect; ··· 1593 1576 static int regulatory_hint_core(const char *alpha2) 1594 1577 { 1595 1578 struct regulatory_request *request; 1596 - 1597 - kfree(last_request); 1598 - last_request = NULL; 1599 1579 1600 1580 request = kzalloc(sizeof(struct regulatory_request), 1601 1581 GFP_KERNEL); ··· 1791 
1777 mutex_lock(&cfg80211_mutex); 1792 1778 mutex_lock(&reg_mutex); 1793 1779 1794 - reset_regdomains(); 1780 + reset_regdomains(true); 1795 1781 restore_alpha2(alpha2, reset_user); 1796 1782 1797 1783 /* ··· 2051 2037 } 2052 2038 2053 2039 request_wiphy = wiphy_idx_to_wiphy(last_request->wiphy_idx); 2040 + if (!request_wiphy && 2041 + (last_request->initiator == NL80211_REGDOM_SET_BY_DRIVER || 2042 + last_request->initiator == NL80211_REGDOM_SET_BY_COUNTRY_IE)) { 2043 + schedule_delayed_work(&reg_timeout, 0); 2044 + return -ENODEV; 2045 + } 2054 2046 2055 2047 if (!last_request->intersect) { 2056 2048 int r; 2057 2049 2058 2050 if (last_request->initiator != NL80211_REGDOM_SET_BY_DRIVER) { 2059 - reset_regdomains(); 2051 + reset_regdomains(false); 2060 2052 cfg80211_regdomain = rd; 2061 2053 return 0; 2062 2054 } ··· 2083 2063 if (r) 2084 2064 return r; 2085 2065 2086 - reset_regdomains(); 2066 + reset_regdomains(false); 2087 2067 cfg80211_regdomain = rd; 2088 2068 return 0; 2089 2069 } ··· 2108 2088 2109 2089 rd = NULL; 2110 2090 2111 - reset_regdomains(); 2091 + reset_regdomains(false); 2112 2092 cfg80211_regdomain = intersected_rd; 2113 2093 2114 2094 return 0; ··· 2128 2108 kfree(rd); 2129 2109 rd = NULL; 2130 2110 2131 - reset_regdomains(); 2111 + reset_regdomains(false); 2132 2112 cfg80211_regdomain = intersected_rd; 2133 2113 2134 2114 return 0; ··· 2281 2261 mutex_lock(&cfg80211_mutex); 2282 2262 mutex_lock(&reg_mutex); 2283 2263 2284 - reset_regdomains(); 2264 + reset_regdomains(true); 2285 2265 2286 - kfree(last_request); 2287 - 2288 - last_request = NULL; 2289 2266 dev_set_uevent_suppress(&reg_pdev->dev, true); 2290 2267 2291 2268 platform_device_unregister(reg_pdev);
+6 -4
net/xfrm/xfrm_policy.c
··· 2382 2382 return dst_metric_advmss(dst->path); 2383 2383 } 2384 2384 2385 - static unsigned int xfrm_default_mtu(const struct dst_entry *dst) 2385 + static unsigned int xfrm_mtu(const struct dst_entry *dst) 2386 2386 { 2387 - return dst_mtu(dst->path); 2387 + unsigned int mtu = dst_metric_raw(dst, RTAX_MTU); 2388 + 2389 + return mtu ? : dst_mtu(dst->path); 2388 2390 } 2389 2391 2390 2392 static struct neighbour *xfrm_neigh_lookup(const struct dst_entry *dst, const void *daddr) ··· 2413 2411 dst_ops->check = xfrm_dst_check; 2414 2412 if (likely(dst_ops->default_advmss == NULL)) 2415 2413 dst_ops->default_advmss = xfrm_default_advmss; 2416 - if (likely(dst_ops->default_mtu == NULL)) 2417 - dst_ops->default_mtu = xfrm_default_mtu; 2414 + if (likely(dst_ops->mtu == NULL)) 2415 + dst_ops->mtu = xfrm_mtu; 2418 2416 if (likely(dst_ops->negative_advice == NULL)) 2419 2417 dst_ops->negative_advice = xfrm_negative_advice; 2420 2418 if (likely(dst_ops->link_failure == NULL))
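The new `xfrm_mtu()` above uses GCC's conditional with an omitted middle operand, `mtu ? : dst_mtu(dst->path)`: it yields the cached metric when it is non-zero and falls back to the path MTU otherwise, without evaluating the first operand twice. A small sketch of that fallback idiom, with the two statics as hypothetical stand-ins for `dst_metric_raw()` and `dst_mtu()`:

```c
#include <assert.h>

/* Hypothetical stand-ins for the kernel helpers. */
static unsigned int cached_metric;   /* plays dst_metric_raw(dst, RTAX_MTU) */
static unsigned int path_mtu = 1500; /* plays dst_mtu(dst->path) */

/* GNU C extension: `a ? : b` evaluates to `a` when it is non-zero,
 * otherwise to `b`; `a` is evaluated only once. */
static unsigned int effective_mtu(void)
{
	return cached_metric ? : path_mtu;
}
```

Note `? :` is a GNU extension, fine in kernel code but not portable ISO C; the portable spelling is `mtu ? mtu : dst_mtu(...)`.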
+40 -29
security/apparmor/path.c
··· 57 57 static int d_namespace_path(struct path *path, char *buf, int buflen, 58 58 char **name, int flags) 59 59 { 60 - struct path root, tmp; 61 60 char *res; 62 - int connected, error = 0; 61 + int error = 0; 62 + int connected = 1; 63 63 64 - /* Get the root we want to resolve too, released below */ 65 - if (flags & PATH_CHROOT_REL) { 66 - /* resolve paths relative to chroot */ 67 - get_fs_root(current->fs, &root); 68 - } else { 69 - /* resolve paths relative to namespace */ 70 - root.mnt = current->nsproxy->mnt_ns->root; 71 - root.dentry = root.mnt->mnt_root; 72 - path_get(&root); 64 + if (path->mnt->mnt_flags & MNT_INTERNAL) { 65 + /* it's not mounted anywhere */ 66 + res = dentry_path(path->dentry, buf, buflen); 67 + *name = res; 68 + if (IS_ERR(res)) { 69 + *name = buf; 70 + return PTR_ERR(res); 71 + } 72 + if (path->dentry->d_sb->s_magic == PROC_SUPER_MAGIC && 73 + strncmp(*name, "/sys/", 5) == 0) { 74 + /* TODO: convert over to using a per namespace 75 + * control instead of hard coded /proc 76 + */ 77 + return prepend(name, *name - buf, "/proc", 5); 78 + } 79 + return 0; 73 80 } 74 81 75 - tmp = root; 76 - res = __d_path(path, &tmp, buf, buflen); 82 + /* resolve paths relative to chroot?*/ 83 + if (flags & PATH_CHROOT_REL) { 84 + struct path root; 85 + get_fs_root(current->fs, &root); 86 + res = __d_path(path, &root, buf, buflen); 87 + if (res && !IS_ERR(res)) { 88 + /* everything's fine */ 89 + *name = res; 90 + path_put(&root); 91 + goto ok; 92 + } 93 + path_put(&root); 94 + connected = 0; 95 + } 96 + 97 + res = d_absolute_path(path, buf, buflen); 77 98 78 99 *name = res; 79 100 /* handle error conditions - and still allow a partial path to ··· 105 84 *name = buf; 106 85 goto out; 107 86 } 87 + if (!our_mnt(path->mnt)) 88 + connected = 0; 108 89 90 + ok: 109 91 /* Handle two cases: 110 92 * 1. A deleted dentry && profile is not allowing mediation of deleted 111 93 * 2. 
On some filesystems, newly allocated dentries appear to the ··· 121 97 goto out; 122 98 } 123 99 124 - /* Determine if the path is connected to the expected root */ 125 - connected = tmp.dentry == root.dentry && tmp.mnt == root.mnt; 126 - 127 - /* If the path is not connected, 100 + /* If the path is not connected to the expected root, 128 101 * check if it is a sysctl and handle specially else remove any 129 102 * leading / that __d_path may have returned. 130 103 * Unless ··· 133 112 * namespace root. 134 113 */ 135 114 if (!connected) { 136 - /* is the disconnect path a sysctl? */ 137 - if (tmp.dentry->d_sb->s_magic == PROC_SUPER_MAGIC && 138 - strncmp(*name, "/sys/", 5) == 0) { 139 - /* TODO: convert over to using a per namespace 140 - * control instead of hard coded /proc 141 - */ 142 - error = prepend(name, *name - buf, "/proc", 5); 143 - } else if (!(flags & PATH_CONNECT_PATH) && 115 + if (!(flags & PATH_CONNECT_PATH) && 144 116 !(((flags & CHROOT_NSCONNECT) == CHROOT_NSCONNECT) && 145 - (tmp.mnt == current->nsproxy->mnt_ns->root && 146 - tmp.dentry == tmp.mnt->mnt_root))) { 117 + our_mnt(path->mnt))) { 147 118 /* disconnected path, don't return pathname starting 148 119 * with '/' 149 120 */ ··· 146 133 } 147 134 148 135 out: 149 - path_put(&root); 150 - 151 136 return error; 152 137 } 153 138
+10 -3
security/tomoyo/realpath.c
··· 101 101 { 102 102 char *pos = ERR_PTR(-ENOMEM); 103 103 if (buflen >= 256) { 104 - struct path ns_root = { }; 105 104 /* go to whatever namespace root we are under */ 106 - pos = __d_path(path, &ns_root, buffer, buflen - 1); 105 + pos = d_absolute_path(path, buffer, buflen - 1); 107 106 if (!IS_ERR(pos) && *pos == '/' && pos[1]) { 108 107 struct inode *inode = path->dentry->d_inode; 109 108 if (inode && S_ISDIR(inode->i_mode)) { ··· 293 294 pos = tomoyo_get_local_path(path->dentry, buf, 294 295 buf_len - 1); 295 296 /* Get absolute name for the rest. */ 296 - else 297 + else { 297 298 pos = tomoyo_get_absolute_path(path, buf, buf_len - 1); 299 + /* 300 + * Fall back to local name if absolute name is not 301 + * available. 302 + */ 303 + if (pos == ERR_PTR(-EINVAL)) 304 + pos = tomoyo_get_local_path(path->dentry, buf, 305 + buf_len - 1); 306 + } 298 307 encode: 299 308 if (IS_ERR(pos)) 300 309 continue;
+1 -1
sound/pci/cs5535audio/cs5535audio_pcm.c
··· 148 148 struct cs5535audio_dma_desc *desc = 149 149 &((struct cs5535audio_dma_desc *) dma->desc_buf.area)[i]; 150 150 desc->addr = cpu_to_le32(addr); 151 - desc->size = cpu_to_le32(period_bytes); 151 + desc->size = cpu_to_le16(period_bytes); 152 152 desc->ctlreserved = cpu_to_le16(PRD_EOP); 153 153 desc_addr += sizeof(struct cs5535audio_dma_desc); 154 154 addr += period_bytes;
+3 -3
sound/pci/hda/hda_codec.c
··· 4046 4046 4047 4047 /* Search for codec ID */ 4048 4048 for (q = tbl; q->subvendor; q++) { 4049 - unsigned long vendorid = (q->subdevice) | (q->subvendor << 16); 4050 - 4051 - if (vendorid == codec->subsystem_id) 4049 + unsigned int mask = 0xffff0000 | q->subdevice_mask; 4050 + unsigned int id = (q->subdevice | (q->subvendor << 16)) & mask; 4051 + if ((codec->subsystem_id & mask) == id) 4052 4052 break; 4053 4053 } 4054 4054
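The hda_codec.c fix above replaces an exact subsystem-ID comparison with a masked one: the 16-bit subvendor half must match exactly, but the subdevice half is compared only in the bits set in the quirk entry's `subdevice_mask`, so one table entry can cover a family of devices. A standalone sketch of that matching logic (the function name and values are illustrative, not the kernel's API):

```c
#include <assert.h>

/* Match a 32-bit subsystem id (subvendor << 16 | subdevice) against a
 * quirk entry: the vendor half must match exactly, the device half only
 * in the bits selected by subdevice_mask. */
static int quirk_matches(unsigned int subsystem_id,
			 unsigned int subvendor,
			 unsigned int subdevice,
			 unsigned int subdevice_mask)
{
	unsigned int mask = 0xffff0000u | subdevice_mask;
	unsigned int id = (subdevice | (subvendor << 16)) & mask;

	return (subsystem_id & mask) == id;
}
```

With a full `0xffff` mask this degenerates to the old exact match; a partial mask such as `0xfff0` lets the entry ignore low revision bits of the subdevice ID.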
+19 -9
sound/pci/hda/hda_eld.c
··· 347 347 348 348 for (i = 0; i < size; i++) { 349 349 unsigned int val = hdmi_get_eld_data(codec, nid, i); 350 + /* 351 + * Graphics driver might be writing to ELD buffer right now. 352 + * Just abort. The caller will repoll after a while. 353 + */ 350 354 if (!(val & AC_ELDD_ELD_VALID)) { 351 - if (!i) { 352 - snd_printd(KERN_INFO 353 - "HDMI: invalid ELD data\n"); 354 - ret = -EINVAL; 355 - goto error; 356 - } 357 355 snd_printd(KERN_INFO 358 356 "HDMI: invalid ELD data byte %d\n", i); 359 - val = 0; 360 - } else 361 - val &= AC_ELDD_ELD_DATA; 357 + ret = -EINVAL; 358 + goto error; 359 + } 360 + val &= AC_ELDD_ELD_DATA; 361 + /* 362 + * The first byte cannot be zero. This can happen on some DVI 363 + * connections. Some Intel chips may also need some 250ms delay 364 + * to return non-zero ELD data, even when the graphics driver 365 + * correctly writes ELD content before setting ELD_valid bit. 366 + */ 367 + if (!val && !i) { 368 + snd_printdd(KERN_INFO "HDMI: 0 ELD data\n"); 369 + ret = -EINVAL; 370 + goto error; 371 + } 362 372 buf[i] = val; 363 373 } 364 374
+3 -2
sound/pci/hda/hda_intel.c
··· 2507 2507 SND_PCI_QUIRK(0x1043, 0x813d, "ASUS P5AD2", POS_FIX_LPIB), 2508 2508 SND_PCI_QUIRK(0x1043, 0x81b3, "ASUS", POS_FIX_LPIB), 2509 2509 SND_PCI_QUIRK(0x1043, 0x81e7, "ASUS M2V", POS_FIX_LPIB), 2510 + SND_PCI_QUIRK(0x1043, 0x83ce, "ASUS 1101HA", POS_FIX_LPIB), 2510 2511 SND_PCI_QUIRK(0x104d, 0x9069, "Sony VPCS11V9E", POS_FIX_LPIB), 2511 - SND_PCI_QUIRK(0x1106, 0x3288, "ASUS M2V-MX SE", POS_FIX_LPIB), 2512 2512 SND_PCI_QUIRK(0x1297, 0x3166, "Shuttle", POS_FIX_LPIB), 2513 2513 SND_PCI_QUIRK(0x1458, 0xa022, "ga-ma770-ud3", POS_FIX_LPIB), 2514 2514 SND_PCI_QUIRK(0x1462, 0x1002, "MSI Wind U115", POS_FIX_LPIB), ··· 2971 2971 /* SCH */ 2972 2972 { PCI_DEVICE(0x8086, 0x811b), 2973 2973 .driver_data = AZX_DRIVER_SCH | AZX_DCAPS_SCH_SNOOP | 2974 - AZX_DCAPS_BUFSIZE}, 2974 + AZX_DCAPS_BUFSIZE | AZX_DCAPS_POSFIX_LPIB }, /* Poulsbo */ 2975 + /* ICH */ 2975 2976 { PCI_DEVICE(0x8086, 0x2668), 2976 2977 .driver_data = AZX_DRIVER_ICH | AZX_DCAPS_OLD_SSYNC | 2977 2978 AZX_DCAPS_BUFSIZE }, /* ICH6 */
+23 -9
sound/pci/hda/patch_cirrus.c
··· 58 58 unsigned int gpio_mask; 59 59 unsigned int gpio_dir; 60 60 unsigned int gpio_data; 61 + unsigned int gpio_eapd_hp; /* EAPD GPIO bit for headphones */ 62 + unsigned int gpio_eapd_speaker; /* EAPD GPIO bit for speakers */ 61 63 62 64 struct hda_pcm pcm_rec[2]; /* PCM information */ 63 65 ··· 78 76 CS420X_MBP53, 79 77 CS420X_MBP55, 80 78 CS420X_IMAC27, 79 + CS420X_APPLE, 81 80 CS420X_AUTO, 82 81 CS420X_MODELS 83 82 }; ··· 931 928 spdif_present ? 0 : PIN_OUT); 932 929 } 933 930 } 934 - if (spec->board_config == CS420X_MBP53 || 935 - spec->board_config == CS420X_MBP55 || 936 - spec->board_config == CS420X_IMAC27) { 937 - unsigned int gpio = hp_present ? 0x02 : 0x08; 931 + if (spec->gpio_eapd_hp) { 932 + unsigned int gpio = hp_present ? 933 + spec->gpio_eapd_hp : spec->gpio_eapd_speaker; 938 934 snd_hda_codec_write(codec, 0x01, 0, 939 935 AC_VERB_SET_GPIO_DATA, gpio); 940 936 } ··· 1278 1276 [CS420X_MBP53] = "mbp53", 1279 1277 [CS420X_MBP55] = "mbp55", 1280 1278 [CS420X_IMAC27] = "imac27", 1279 + [CS420X_APPLE] = "apple", 1281 1280 [CS420X_AUTO] = "auto", 1282 1281 }; 1283 1282 ··· 1288 1285 SND_PCI_QUIRK(0x10de, 0x0d94, "MacBookAir 3,1(2)", CS420X_MBP55), 1289 1286 SND_PCI_QUIRK(0x10de, 0xcb79, "MacBookPro 5,5", CS420X_MBP55), 1290 1287 SND_PCI_QUIRK(0x10de, 0xcb89, "MacBookPro 7,1", CS420X_MBP55), 1291 - SND_PCI_QUIRK(0x8086, 0x7270, "IMac 27 Inch", CS420X_IMAC27), 1288 + /* this conflicts with too many other models */ 1289 + /*SND_PCI_QUIRK(0x8086, 0x7270, "IMac 27 Inch", CS420X_IMAC27),*/ 1290 + {} /* terminator */ 1291 + }; 1292 + 1293 + static const struct snd_pci_quirk cs420x_codec_cfg_tbl[] = { 1294 + SND_PCI_QUIRK_VENDOR(0x106b, "Apple", CS420X_APPLE), 1292 1295 {} /* terminator */ 1293 1296 }; 1294 1297 ··· 1376 1367 spec->board_config = 1377 1368 snd_hda_check_board_config(codec, CS420X_MODELS, 1378 1369 cs420x_models, cs420x_cfg_tbl); 1370 + if (spec->board_config < 0) 1371 + spec->board_config = 1372 + snd_hda_check_board_codec_sid_config(codec, 
1373 + CS420X_MODELS, NULL, cs420x_codec_cfg_tbl); 1379 1374 if (spec->board_config >= 0) 1380 1375 fix_pincfg(codec, spec->board_config, cs_pincfgs); 1381 1376 ··· 1387 1374 case CS420X_IMAC27: 1388 1375 case CS420X_MBP53: 1389 1376 case CS420X_MBP55: 1390 - /* GPIO1 = headphones */ 1391 - /* GPIO3 = speakers */ 1392 - spec->gpio_mask = 0x0a; 1393 - spec->gpio_dir = 0x0a; 1377 + case CS420X_APPLE: 1378 + spec->gpio_eapd_hp = 2; /* GPIO1 = headphones */ 1379 + spec->gpio_eapd_speaker = 8; /* GPIO3 = speakers */ 1380 + spec->gpio_mask = spec->gpio_dir = 1381 + spec->gpio_eapd_hp | spec->gpio_eapd_speaker; 1394 1382 break; 1395 1383 } 1396 1384
+10 -6
sound/pci/hda/patch_hdmi.c
··· 69 69 struct hda_codec *codec; 70 70 struct hdmi_eld sink_eld; 71 71 struct delayed_work work; 72 + int repoll_count; 72 73 }; 73 74 74 75 struct hdmi_spec { ··· 749 748 * Unsolicited events 750 749 */ 751 750 752 - static void hdmi_present_sense(struct hdmi_spec_per_pin *per_pin, bool retry); 751 + static void hdmi_present_sense(struct hdmi_spec_per_pin *per_pin, int repoll); 753 752 754 753 static void hdmi_intrinsic_event(struct hda_codec *codec, unsigned int res) 755 754 { ··· 767 766 if (pin_idx < 0) 768 767 return; 769 768 770 - hdmi_present_sense(&spec->pins[pin_idx], true); 769 + hdmi_present_sense(&spec->pins[pin_idx], 1); 771 770 } 772 771 773 772 static void hdmi_non_intrinsic_event(struct hda_codec *codec, unsigned int res) ··· 961 960 return 0; 962 961 } 963 962 964 - static void hdmi_present_sense(struct hdmi_spec_per_pin *per_pin, bool retry) 963 + static void hdmi_present_sense(struct hdmi_spec_per_pin *per_pin, int repoll) 965 964 { 966 965 struct hda_codec *codec = per_pin->codec; 967 966 struct hdmi_eld *eld = &per_pin->sink_eld; ··· 990 989 if (eld_valid) { 991 990 if (!snd_hdmi_get_eld(eld, codec, pin_nid)) 992 991 snd_hdmi_show_eld(eld); 993 - else if (retry) { 992 + else if (repoll) { 994 993 queue_delayed_work(codec->bus->workq, 995 994 &per_pin->work, 996 995 msecs_to_jiffies(300)); ··· 1005 1004 struct hdmi_spec_per_pin *per_pin = 1006 1005 container_of(to_delayed_work(work), struct hdmi_spec_per_pin, work); 1007 1006 1008 - hdmi_present_sense(per_pin, false); 1007 + if (per_pin->repoll_count++ > 6) 1008 + per_pin->repoll_count = 0; 1009 + 1010 + hdmi_present_sense(per_pin, per_pin->repoll_count); 1009 1011 } 1010 1012 1011 1013 static int hdmi_add_pin(struct hda_codec *codec, hda_nid_t pin_nid) ··· 1239 1235 if (err < 0) 1240 1236 return err; 1241 1237 1242 - hdmi_present_sense(per_pin, false); 1238 + hdmi_present_sense(per_pin, 0); 1243 1239 return 0; 1244 1240 } 1245 1241
+72 -27
sound/pci/hda/patch_realtek.c
··· 277 277 return false; 278 278 } 279 279 280 + static inline hda_nid_t get_capsrc(struct alc_spec *spec, int idx) 281 + { 282 + return spec->capsrc_nids ? 283 + spec->capsrc_nids[idx] : spec->adc_nids[idx]; 284 + } 285 + 280 286 /* select the given imux item; either unmute exclusively or select the route */ 281 287 static int alc_mux_select(struct hda_codec *codec, unsigned int adc_idx, 282 288 unsigned int idx, bool force) ··· 297 291 imux = &spec->input_mux[mux_idx]; 298 292 if (!imux->num_items && mux_idx > 0) 299 293 imux = &spec->input_mux[0]; 294 + if (!imux->num_items) 295 + return 0; 300 296 301 297 if (idx >= imux->num_items) 302 298 idx = imux->num_items - 1; ··· 311 303 adc_idx = spec->dyn_adc_idx[idx]; 312 304 } 313 305 314 - nid = spec->capsrc_nids ? 315 - spec->capsrc_nids[adc_idx] : spec->adc_nids[adc_idx]; 306 + nid = get_capsrc(spec, adc_idx); 316 307 317 308 /* no selection? */ 318 309 num_conns = snd_hda_get_conn_list(codec, nid, NULL); ··· 1061 1054 spec->imux_pins[2] = spec->dock_mic_pin; 1062 1055 for (i = 0; i < 3; i++) { 1063 1056 strcpy(imux->items[i].label, texts[i]); 1064 - if (spec->imux_pins[i]) 1057 + if (spec->imux_pins[i]) { 1058 + hda_nid_t pin = spec->imux_pins[i]; 1059 + int c; 1060 + for (c = 0; c < spec->num_adc_nids; c++) { 1061 + hda_nid_t cap = get_capsrc(spec, c); 1062 + int idx = get_connection_index(codec, cap, pin); 1063 + if (idx >= 0) { 1064 + imux->items[i].index = idx; 1065 + break; 1066 + } 1067 + } 1065 1068 imux->num_items = i + 1; 1069 + } 1066 1070 } 1067 1071 spec->num_mux_defs = 1; 1068 1072 spec->input_mux = imux; ··· 1975 1957 if (!kctl) 1976 1958 kctl = snd_hda_find_mixer_ctl(codec, "Input Source"); 1977 1959 for (i = 0; kctl && i < kctl->count; i++) { 1978 - const hda_nid_t *nids = spec->capsrc_nids; 1979 - if (!nids) 1980 - nids = spec->adc_nids; 1981 - err = snd_hda_add_nid(codec, kctl, i, nids[i]); 1960 + err = snd_hda_add_nid(codec, kctl, i, 1961 + get_capsrc(spec, i)); 1982 1962 if (err < 0) 1983 
1963 return err; 1984 1964 } ··· 2631 2615 case AUTO_PIN_SPEAKER_OUT: 2632 2616 if (cfg->line_outs == 1) 2633 2617 return "Speaker"; 2618 + if (cfg->line_outs == 2) 2619 + return ch ? "Bass Speaker" : "Speaker"; 2634 2620 break; 2635 2621 case AUTO_PIN_HP_OUT: 2636 2622 /* for multi-io case, only the primary out */ ··· 2765 2747 } 2766 2748 2767 2749 for (c = 0; c < num_adcs; c++) { 2768 - hda_nid_t cap = spec->capsrc_nids ? 2769 - spec->capsrc_nids[c] : spec->adc_nids[c]; 2750 + hda_nid_t cap = get_capsrc(spec, c); 2770 2751 idx = get_connection_index(codec, cap, pin); 2771 2752 if (idx >= 0) { 2772 2753 spec->imux_pins[imux->num_items] = pin; ··· 2906 2889 if (!nid) 2907 2890 continue; 2908 2891 if (found_in_nid_list(nid, spec->multiout.dac_nids, 2909 - spec->multiout.num_dacs)) 2892 + ARRAY_SIZE(spec->private_dac_nids))) 2910 2893 continue; 2911 2894 if (found_in_nid_list(nid, spec->multiout.hp_out_nid, 2912 2895 ARRAY_SIZE(spec->multiout.hp_out_nid))) ··· 2927 2910 return 0; 2928 2911 } 2929 2912 2913 + /* return 0 if no possible DAC is found, 1 if one or more found */ 2930 2914 static int alc_auto_fill_extra_dacs(struct hda_codec *codec, int num_outs, 2931 2915 const hda_nid_t *pins, hda_nid_t *dacs) 2932 2916 { ··· 2945 2927 if (!dacs[i]) 2946 2928 dacs[i] = alc_auto_look_for_dac(codec, pins[i]); 2947 2929 } 2948 - return 0; 2930 + return 1; 2949 2931 } 2950 2932 2951 2933 static int alc_auto_fill_multi_ios(struct hda_codec *codec, ··· 2955 2937 static int alc_auto_fill_dac_nids(struct hda_codec *codec) 2956 2938 { 2957 2939 struct alc_spec *spec = codec->spec; 2958 - const struct auto_pin_cfg *cfg = &spec->autocfg; 2940 + struct auto_pin_cfg *cfg = &spec->autocfg; 2959 2941 bool redone = false; 2960 2942 int i; 2961 2943 ··· 2966 2948 spec->multiout.extra_out_nid[0] = 0; 2967 2949 memset(spec->private_dac_nids, 0, sizeof(spec->private_dac_nids)); 2968 2950 spec->multiout.dac_nids = spec->private_dac_nids; 2951 + spec->multi_ios = 0; 2969 2952 2970 2953 /* 
fill hard-wired DACs first */ 2971 2954 if (!redone) { ··· 3000 2981 for (i = 0; i < cfg->line_outs; i++) { 3001 2982 if (spec->private_dac_nids[i]) 3002 2983 spec->multiout.num_dacs++; 3003 - else 2984 + else { 3004 2985 memmove(spec->private_dac_nids + i, 3005 2986 spec->private_dac_nids + i + 1, 3006 2987 sizeof(hda_nid_t) * (cfg->line_outs - i - 1)); 2988 + spec->private_dac_nids[cfg->line_outs - 1] = 0; 2989 + } 3007 2990 } 3008 2991 3009 2992 if (cfg->line_outs == 1 && cfg->line_out_type != AUTO_PIN_SPEAKER_OUT) { ··· 3027 3006 if (cfg->line_out_type != AUTO_PIN_HP_OUT) 3028 3007 alc_auto_fill_extra_dacs(codec, cfg->hp_outs, cfg->hp_pins, 3029 3008 spec->multiout.hp_out_nid); 3030 - if (cfg->line_out_type != AUTO_PIN_SPEAKER_OUT) 3031 - alc_auto_fill_extra_dacs(codec, cfg->speaker_outs, cfg->speaker_pins, 3032 - spec->multiout.extra_out_nid); 3009 + if (cfg->line_out_type != AUTO_PIN_SPEAKER_OUT) { 3010 + int err = alc_auto_fill_extra_dacs(codec, cfg->speaker_outs, 3011 + cfg->speaker_pins, 3012 + spec->multiout.extra_out_nid); 3013 + /* if no speaker volume is assigned, try again as the primary 3014 + * output 3015 + */ 3016 + if (!err && cfg->speaker_outs > 0 && 3017 + cfg->line_out_type == AUTO_PIN_HP_OUT) { 3018 + cfg->hp_outs = cfg->line_outs; 3019 + memcpy(cfg->hp_pins, cfg->line_out_pins, 3020 + sizeof(cfg->hp_pins)); 3021 + cfg->line_outs = cfg->speaker_outs; 3022 + memcpy(cfg->line_out_pins, cfg->speaker_pins, 3023 + sizeof(cfg->speaker_pins)); 3024 + cfg->speaker_outs = 0; 3025 + memset(cfg->speaker_pins, 0, sizeof(cfg->speaker_pins)); 3026 + cfg->line_out_type = AUTO_PIN_SPEAKER_OUT; 3027 + redone = false; 3028 + goto again; 3029 + } 3030 + } 3033 3031 3034 3032 return 0; 3035 3033 } ··· 3198 3158 } 3199 3159 3200 3160 static int alc_auto_create_extra_out(struct hda_codec *codec, hda_nid_t pin, 3201 - hda_nid_t dac, const char *pfx) 3161 + hda_nid_t dac, const char *pfx, 3162 + int cidx) 3202 3163 { 3203 3164 struct alc_spec *spec = codec->spec; 
3204 3165 hda_nid_t sw, vol; ··· 3215 3174 if (is_ctl_used(spec->sw_ctls, val)) 3216 3175 return 0; /* already created */ 3217 3176 mark_ctl_usage(spec->sw_ctls, val); 3218 - return add_pb_sw_ctrl(spec, ALC_CTL_WIDGET_MUTE, pfx, val); 3177 + return __add_pb_sw_ctrl(spec, ALC_CTL_WIDGET_MUTE, pfx, cidx, val); 3219 3178 } 3220 3179 3221 3180 sw = alc_look_for_out_mute_nid(codec, pin, dac); 3222 3181 vol = alc_look_for_out_vol_nid(codec, pin, dac); 3223 - err = alc_auto_add_stereo_vol(codec, pfx, 0, vol); 3182 + err = alc_auto_add_stereo_vol(codec, pfx, cidx, vol); 3224 3183 if (err < 0) 3225 3184 return err; 3226 - err = alc_auto_add_stereo_sw(codec, pfx, 0, sw); 3185 + err = alc_auto_add_stereo_sw(codec, pfx, cidx, sw); 3227 3186 if (err < 0) 3228 3187 return err; 3229 3188 return 0; ··· 3264 3223 hda_nid_t dac = *dacs; 3265 3224 if (!dac) 3266 3225 dac = spec->multiout.dac_nids[0]; 3267 - return alc_auto_create_extra_out(codec, *pins, dac, pfx); 3226 + return alc_auto_create_extra_out(codec, *pins, dac, pfx, 0); 3268 3227 } 3269 3228 3270 3229 if (dacs[num_pins - 1]) { 3271 3230 /* OK, we have a multi-output system with individual volumes */ 3272 3231 for (i = 0; i < num_pins; i++) { 3273 - snprintf(name, sizeof(name), "%s %s", 3274 - pfx, channel_name[i]); 3275 - err = alc_auto_create_extra_out(codec, pins[i], dacs[i], 3276 - name); 3232 + if (num_pins >= 3) { 3233 + snprintf(name, sizeof(name), "%s %s", 3234 + pfx, channel_name[i]); 3235 + err = alc_auto_create_extra_out(codec, pins[i], dacs[i], 3236 + name, 0); 3237 + } else { 3238 + err = alc_auto_create_extra_out(codec, pins[i], dacs[i], 3239 + pfx, i); 3240 + } 3277 3241 if (err < 0) 3278 3242 return err; 3279 3243 } ··· 3740 3694 if (!pin) 3741 3695 return 0; 3742 3696 for (i = 0; i < spec->num_adc_nids; i++) { 3743 - hda_nid_t cap = spec->capsrc_nids ? 
3744 - spec->capsrc_nids[i] : spec->adc_nids[i]; 3697 + hda_nid_t cap = get_capsrc(spec, i); 3745 3698 int idx; 3746 3699 3747 3700 idx = get_connection_index(codec, cap, pin);
+35 -40
sound/pci/hda/patch_sigmatel.c
··· 215 215 unsigned int gpio_mute; 216 216 unsigned int gpio_led; 217 217 unsigned int gpio_led_polarity; 218 + unsigned int vref_mute_led_nid; /* pin NID for mute-LED vref control */ 218 219 unsigned int vref_led; 219 220 220 221 /* stream */ ··· 1641 1640 SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x02a1, 1642 1641 "Alienware M17x", STAC_ALIENWARE_M17X), 1643 1642 SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x043a, 1643 + "Alienware M17x", STAC_ALIENWARE_M17X), 1644 + SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x0490, 1644 1645 "Alienware M17x", STAC_ALIENWARE_M17X), 1645 1646 {} /* terminator */ 1646 1647 }; ··· 4319 4316 spec->eapd_switch = val; 4320 4317 get_int_hint(codec, "gpio_led_polarity", &spec->gpio_led_polarity); 4321 4318 if (get_int_hint(codec, "gpio_led", &spec->gpio_led)) { 4322 - if (spec->gpio_led <= 8) { 4323 - spec->gpio_mask |= spec->gpio_led; 4324 - spec->gpio_dir |= spec->gpio_led; 4325 - if (spec->gpio_led_polarity) 4326 - spec->gpio_data |= spec->gpio_led; 4327 - } 4319 + spec->gpio_mask |= spec->gpio_led; 4320 + spec->gpio_dir |= spec->gpio_led; 4321 + if (spec->gpio_led_polarity) 4322 + spec->gpio_data |= spec->gpio_led; 4328 4323 } 4329 4324 } 4330 4325 ··· 4440 4439 int pinctl, def_conf; 4441 4440 4442 4441 /* power on when no jack detection is available */ 4443 - if (!spec->hp_detect) { 4442 + /* or when the VREF is used for controlling LED */ 4443 + if (!spec->hp_detect || 4444 + spec->vref_mute_led_nid == nid) { 4444 4445 stac_toggle_power_map(codec, nid, 1); 4445 4446 continue; 4446 4447 } ··· 4914 4911 if (sscanf(dev->name, "HP_Mute_LED_%d_%x", 4915 4912 &spec->gpio_led_polarity, 4916 4913 &spec->gpio_led) == 2) { 4917 - if (spec->gpio_led < 4) 4914 + unsigned int max_gpio; 4915 + max_gpio = snd_hda_param_read(codec, codec->afg, 4916 + AC_PAR_GPIO_CAP); 4917 + max_gpio &= AC_GPIO_IO_COUNT; 4918 + if (spec->gpio_led < max_gpio) 4918 4919 spec->gpio_led = 1 << spec->gpio_led; 4920 + else 4921 + spec->vref_mute_led_nid = spec->gpio_led; 4919 4922 return 1; 
4920 4923 } 4921 4924 if (sscanf(dev->name, "HP_Mute_LED_%d", 4922 4925 &spec->gpio_led_polarity) == 1) { 4923 4926 set_hp_led_gpio(codec); 4927 + return 1; 4928 + } 4929 + /* BIOS bug: unfilled OEM string */ 4930 + if (strstr(dev->name, "HP_Mute_LED_P_G")) { 4931 + set_hp_led_gpio(codec); 4932 + spec->gpio_led_polarity = 1; 4924 4933 return 1; 4925 4934 } 4926 4935 } ··· 5056 5041 struct sigmatel_spec *spec = codec->spec; 5057 5042 5058 5043 /* sync mute LED */ 5059 - if (spec->gpio_led) { 5060 - if (spec->gpio_led <= 8) { 5061 - stac_gpio_set(codec, spec->gpio_mask, 5062 - spec->gpio_dir, spec->gpio_data); 5063 - } else { 5064 - stac_vrefout_set(codec, 5065 - spec->gpio_led, spec->vref_led); 5066 - } 5067 - } 5068 - return 0; 5069 - } 5070 - 5071 - static int stac92xx_post_suspend(struct hda_codec *codec) 5072 - { 5073 - struct sigmatel_spec *spec = codec->spec; 5074 - if (spec->gpio_led > 8) { 5075 - /* with vref-out pin used for mute led control 5076 - * codec AFG is prevented from D3 state, but on 5077 - * system suspend it can (and should) be used 5078 - */ 5079 - snd_hda_codec_read(codec, codec->afg, 0, 5080 - AC_VERB_SET_POWER_STATE, AC_PWRST_D3); 5081 - } 5044 + if (spec->vref_mute_led_nid) 5045 + stac_vrefout_set(codec, spec->vref_mute_led_nid, 5046 + spec->vref_led); 5047 + else if (spec->gpio_led) 5048 + stac_gpio_set(codec, spec->gpio_mask, 5049 + spec->gpio_dir, spec->gpio_data); 5082 5050 return 0; 5083 5051 } 5084 5052 ··· 5072 5074 struct sigmatel_spec *spec = codec->spec; 5073 5075 5074 5076 if (power_state == AC_PWRST_D3) { 5075 - if (spec->gpio_led > 8) { 5077 + if (spec->vref_mute_led_nid) { 5076 5078 /* with vref-out pin used for mute led control 5077 5079 * codec AFG is prevented from D3 state 5078 5080 */ ··· 5125 5127 } 5126 5128 } 5127 5129 /*polarity defines *not* muted state level*/ 5128 - if (spec->gpio_led <= 8) { 5130 + if (!spec->vref_mute_led_nid) { 5129 5131 if (muted) 5130 5132 spec->gpio_data &= ~spec->gpio_led; /* orange */ 5131 
5133 else ··· 5143 5145 muted_lvl = spec->gpio_led_polarity ? 5144 5146 AC_PINCTL_VREF_GRD : AC_PINCTL_VREF_HIZ; 5145 5147 spec->vref_led = muted ? muted_lvl : notmtd_lvl; 5146 - stac_vrefout_set(codec, spec->gpio_led, spec->vref_led); 5148 + stac_vrefout_set(codec, spec->vref_mute_led_nid, 5149 + spec->vref_led); 5147 5150 } 5148 5151 return 0; 5149 5152 } ··· 5658 5659 5659 5660 #ifdef CONFIG_SND_HDA_POWER_SAVE 5660 5661 if (spec->gpio_led) { 5661 - if (spec->gpio_led <= 8) { 5662 + if (!spec->vref_mute_led_nid) { 5662 5663 spec->gpio_mask |= spec->gpio_led; 5663 5664 spec->gpio_dir |= spec->gpio_led; 5664 5665 spec->gpio_data |= spec->gpio_led; 5665 5666 } else { 5666 5667 codec->patch_ops.set_power_state = 5667 5668 stac92xx_set_power_state; 5668 - codec->patch_ops.post_suspend = 5669 - stac92xx_post_suspend; 5670 5669 } 5671 5670 codec->patch_ops.pre_resume = stac92xx_pre_resume; 5672 5671 codec->patch_ops.check_power_status = ··· 5971 5974 5972 5975 #ifdef CONFIG_SND_HDA_POWER_SAVE 5973 5976 if (spec->gpio_led) { 5974 - if (spec->gpio_led <= 8) { 5977 + if (!spec->vref_mute_led_nid) { 5975 5978 spec->gpio_mask |= spec->gpio_led; 5976 5979 spec->gpio_dir |= spec->gpio_led; 5977 5980 spec->gpio_data |= spec->gpio_led; 5978 5981 } else { 5979 5982 codec->patch_ops.set_power_state = 5980 5983 stac92xx_set_power_state; 5981 - codec->patch_ops.post_suspend = 5982 - stac92xx_post_suspend; 5983 5984 } 5984 5985 codec->patch_ops.pre_resume = stac92xx_pre_resume; 5985 5986 codec->patch_ops.check_power_status =
+45 -35
sound/pci/hda/patch_via.c
··· 208 208 /* work to check hp jack state */ 209 209 struct hda_codec *codec; 210 210 struct delayed_work vt1708_hp_work; 211 + int hp_work_active; 211 212 int vt1708_jack_detect; 212 213 int vt1708_hp_present; 213 214 ··· 306 305 static void analog_low_current_mode(struct hda_codec *codec); 307 306 static bool is_aa_path_mute(struct hda_codec *codec); 308 307 309 - static void vt1708_start_hp_work(struct via_spec *spec) 310 - { 311 - if (spec->codec_type != VT1708 || spec->autocfg.hp_pins[0] == 0) 312 - return; 313 - snd_hda_codec_write(spec->codec, 0x1, 0, 0xf81, 314 - !spec->vt1708_jack_detect); 315 - if (!delayed_work_pending(&spec->vt1708_hp_work)) 316 - schedule_delayed_work(&spec->vt1708_hp_work, 317 - msecs_to_jiffies(100)); 318 - } 308 + #define hp_detect_with_aa(codec) \ 309 + (snd_hda_get_bool_hint(codec, "analog_loopback_hp_detect") == 1 && \ 310 + !is_aa_path_mute(codec)) 319 311 320 312 static void vt1708_stop_hp_work(struct via_spec *spec) 321 313 { 322 314 if (spec->codec_type != VT1708 || spec->autocfg.hp_pins[0] == 0) 323 315 return; 324 - if (snd_hda_get_bool_hint(spec->codec, "analog_loopback_hp_detect") == 1 325 - && !is_aa_path_mute(spec->codec)) 316 + if (spec->hp_work_active) { 317 + snd_hda_codec_write(spec->codec, 0x1, 0, 0xf81, 1); 318 + cancel_delayed_work_sync(&spec->vt1708_hp_work); 319 + spec->hp_work_active = 0; 320 + } 321 + } 322 + 323 + static void vt1708_update_hp_work(struct via_spec *spec) 324 + { 325 + if (spec->codec_type != VT1708 || spec->autocfg.hp_pins[0] == 0) 326 326 return; 327 - snd_hda_codec_write(spec->codec, 0x1, 0, 0xf81, 328 - !spec->vt1708_jack_detect); 329 - cancel_delayed_work_sync(&spec->vt1708_hp_work); 327 + if (spec->vt1708_jack_detect && 328 + (spec->active_streams || hp_detect_with_aa(spec->codec))) { 329 + if (!spec->hp_work_active) { 330 + snd_hda_codec_write(spec->codec, 0x1, 0, 0xf81, 0); 331 + schedule_delayed_work(&spec->vt1708_hp_work, 332 + msecs_to_jiffies(100)); 333 + spec->hp_work_active = 1; 
334 + } 335 + } else if (!hp_detect_with_aa(spec->codec)) 336 + vt1708_stop_hp_work(spec); 330 337 } 331 338 332 339 static void set_widgets_power_state(struct hda_codec *codec) ··· 352 343 353 344 set_widgets_power_state(codec); 354 345 analog_low_current_mode(snd_kcontrol_chip(kcontrol)); 355 - if (snd_hda_get_bool_hint(codec, "analog_loopback_hp_detect") == 1) { 356 - if (is_aa_path_mute(codec)) 357 - vt1708_start_hp_work(codec->spec); 358 - else 359 - vt1708_stop_hp_work(codec->spec); 360 - } 346 + vt1708_update_hp_work(codec->spec); 361 347 return change; 362 348 } 363 349 ··· 1158 1154 spec->cur_dac_stream_tag = stream_tag; 1159 1155 spec->cur_dac_format = format; 1160 1156 mutex_unlock(&spec->config_mutex); 1161 - vt1708_start_hp_work(spec); 1157 + vt1708_update_hp_work(spec); 1162 1158 return 0; 1163 1159 } 1164 1160 ··· 1178 1174 spec->cur_hp_stream_tag = stream_tag; 1179 1175 spec->cur_hp_format = format; 1180 1176 mutex_unlock(&spec->config_mutex); 1181 - vt1708_start_hp_work(spec); 1177 + vt1708_update_hp_work(spec); 1182 1178 return 0; 1183 1179 } 1184 1180 ··· 1192 1188 snd_hda_multi_out_analog_cleanup(codec, &spec->multiout); 1193 1189 spec->active_streams &= ~STREAM_MULTI_OUT; 1194 1190 mutex_unlock(&spec->config_mutex); 1195 - vt1708_stop_hp_work(spec); 1191 + vt1708_update_hp_work(spec); 1196 1192 return 0; 1197 1193 } 1198 1194 ··· 1207 1203 snd_hda_codec_setup_stream(codec, spec->hp_dac_nid, 0, 0, 0); 1208 1204 spec->active_streams &= ~STREAM_INDEP_HP; 1209 1205 mutex_unlock(&spec->config_mutex); 1210 - vt1708_stop_hp_work(spec); 1206 + vt1708_update_hp_work(spec); 1211 1207 return 0; 1212 1208 } 1213 1209 ··· 1649 1645 int nums; 1650 1646 struct via_spec *spec = codec->spec; 1651 1647 1652 - if (!spec->hp_independent_mode && spec->autocfg.hp_pins[0]) 1648 + if (!spec->hp_independent_mode && spec->autocfg.hp_pins[0] && 1649 + (spec->codec_type != VT1708 || spec->vt1708_jack_detect)) 1653 1650 present = snd_hda_jack_detect(codec, 
spec->autocfg.hp_pins[0]); 1654 1651 1655 1652 if (spec->smart51_enabled) ··· 2617 2612 2618 2613 if (spec->codec_type != VT1708) 2619 2614 return 0; 2620 - spec->vt1708_jack_detect = 2621 - !((snd_hda_codec_read(codec, 0x1, 0, 0xf84, 0) >> 8) & 0x1); 2622 2615 ucontrol->value.integer.value[0] = spec->vt1708_jack_detect; 2623 2616 return 0; 2624 2617 } ··· 2626 2623 { 2627 2624 struct hda_codec *codec = snd_kcontrol_chip(kcontrol); 2628 2625 struct via_spec *spec = codec->spec; 2629 - int change; 2626 + int val; 2630 2627 2631 2628 if (spec->codec_type != VT1708) 2632 2629 return 0; 2633 - spec->vt1708_jack_detect = ucontrol->value.integer.value[0]; 2634 - change = (0x1 & (snd_hda_codec_read(codec, 0x1, 0, 0xf84, 0) >> 8)) 2635 - == !spec->vt1708_jack_detect; 2636 - if (spec->vt1708_jack_detect) { 2630 + val = !!ucontrol->value.integer.value[0]; 2631 + if (spec->vt1708_jack_detect == val) 2632 + return 0; 2633 + spec->vt1708_jack_detect = val; 2634 + if (spec->vt1708_jack_detect && 2635 + snd_hda_get_bool_hint(codec, "analog_loopback_hp_detect") != 1) { 2637 2636 mute_aa_path(codec, 1); 2638 2637 notify_aa_path_ctls(codec); 2639 2638 } 2640 - return change; 2639 + via_hp_automute(codec); 2640 + vt1708_update_hp_work(spec); 2641 + return 1; 2641 2642 } 2642 2643 2643 2644 static const struct snd_kcontrol_new vt1708_jack_detect_ctl = { ··· 2778 2771 via_auto_init_unsol_event(codec); 2779 2772 2780 2773 via_hp_automute(codec); 2774 + vt1708_update_hp_work(spec); 2781 2775 2782 2776 return 0; 2783 2777 } ··· 2795 2787 spec->vt1708_hp_present ^= 1; 2796 2788 via_hp_automute(spec->codec); 2797 2789 } 2798 - vt1708_start_hp_work(spec); 2790 + if (spec->vt1708_jack_detect) 2791 + schedule_delayed_work(&spec->vt1708_hp_work, 2792 + msecs_to_jiffies(100)); 2799 2793 } 2800 2794 2801 2795 static int get_mux_nids(struct hda_codec *codec)
+16 -7
sound/pci/lx6464es/lx_core.c
··· 78 78 return ioread32(address); 79 79 } 80 80 81 - void lx_dsp_reg_readbuf(struct lx6464es *chip, int port, u32 *data, u32 len) 81 + static void lx_dsp_reg_readbuf(struct lx6464es *chip, int port, u32 *data, 82 + u32 len) 82 83 { 83 - void __iomem *address = lx_dsp_register(chip, port); 84 - memcpy_fromio(data, address, len*sizeof(u32)); 84 + u32 __iomem *address = lx_dsp_register(chip, port); 85 + int i; 86 + 87 + /* we cannot use memcpy_fromio */ 88 + for (i = 0; i != len; ++i) 89 + data[i] = ioread32(address + i); 85 90 } 86 91 87 92 ··· 96 91 iowrite32(data, address); 97 92 } 98 93 99 - void lx_dsp_reg_writebuf(struct lx6464es *chip, int port, const u32 *data, 100 - u32 len) 94 + static void lx_dsp_reg_writebuf(struct lx6464es *chip, int port, 95 + const u32 *data, u32 len) 101 96 { 102 - void __iomem *address = lx_dsp_register(chip, port); 103 - memcpy_toio(address, data, len*sizeof(u32)); 97 + u32 __iomem *address = lx_dsp_register(chip, port); 98 + int i; 99 + 100 + /* we cannot use memcpy_to */ 101 + for (i = 0; i != len; ++i) 102 + iowrite32(data[i], address + i); 104 103 } 105 104 106 105
-3
sound/pci/lx6464es/lx_core.h
··· 72 72 }; 73 73 74 74 unsigned long lx_dsp_reg_read(struct lx6464es *chip, int port); 75 - void lx_dsp_reg_readbuf(struct lx6464es *chip, int port, u32 *data, u32 len); 76 75 void lx_dsp_reg_write(struct lx6464es *chip, int port, unsigned data); 77 - void lx_dsp_reg_writebuf(struct lx6464es *chip, int port, const u32 *data, 78 - u32 len); 79 76 80 77 /* plx register access */ 81 78 enum {
+1 -1
sound/pci/rme9652/hdspm.c
··· 6518 6518 hdspm->io_type = AES32; 6519 6519 hdspm->card_name = "RME AES32"; 6520 6520 hdspm->midiPorts = 2; 6521 - } else if ((hdspm->firmware_rev == 0xd5) || 6521 + } else if ((hdspm->firmware_rev == 0xd2) || 6522 6522 ((hdspm->firmware_rev >= 0xc8) && 6523 6523 (hdspm->firmware_rev <= 0xcf))) { 6524 6524 hdspm->io_type = MADI;
+54 -12
sound/pci/sis7019.c
··· 41 41 static int index = SNDRV_DEFAULT_IDX1; /* Index 0-MAX */ 42 42 static char *id = SNDRV_DEFAULT_STR1; /* ID for this card */ 43 43 static int enable = 1; 44 + static int codecs = 1; 44 45 45 46 module_param(index, int, 0444); 46 47 MODULE_PARM_DESC(index, "Index value for SiS7019 Audio Accelerator."); ··· 49 48 MODULE_PARM_DESC(id, "ID string for SiS7019 Audio Accelerator."); 50 49 module_param(enable, bool, 0444); 51 50 MODULE_PARM_DESC(enable, "Enable SiS7019 Audio Accelerator."); 51 + module_param(codecs, int, 0444); 52 + MODULE_PARM_DESC(codecs, "Set bit to indicate that codec number is expected to be present (default 1)"); 52 53 53 54 static DEFINE_PCI_DEVICE_TABLE(snd_sis7019_ids) = { 54 55 { PCI_DEVICE(PCI_VENDOR_ID_SI, 0x7019) }, ··· 143 140 dma_addr_t silence_dma_addr; 144 141 }; 145 142 143 + /* These values are also used by the module param 'codecs' to indicate 144 + * which codecs should be present. 145 + */ 146 146 #define SIS_PRIMARY_CODEC_PRESENT 0x0001 147 147 #define SIS_SECONDARY_CODEC_PRESENT 0x0002 148 148 #define SIS_TERTIARY_CODEC_PRESENT 0x0004 ··· 1084 1078 { 1085 1079 unsigned long io = sis->ioport; 1086 1080 void __iomem *ioaddr = sis->ioaddr; 1081 + unsigned long timeout; 1087 1082 u16 status; 1088 1083 int count; 1089 1084 int i; ··· 1111 1104 while ((inw(io + SIS_AC97_STATUS) & SIS_AC97_STATUS_BUSY) && --count) 1112 1105 udelay(1); 1113 1106 1114 - /* Now that we've finished the reset, find out what's attached. 1115 - */ 1116 - status = inl(io + SIS_AC97_STATUS); 1117 - if (status & SIS_AC97_STATUS_CODEC_READY) 1118 - sis->codecs_present |= SIS_PRIMARY_CODEC_PRESENT; 1119 - if (status & SIS_AC97_STATUS_CODEC2_READY) 1120 - sis->codecs_present |= SIS_SECONDARY_CODEC_PRESENT; 1121 - if (status & SIS_AC97_STATUS_CODEC3_READY) 1122 - sis->codecs_present |= SIS_TERTIARY_CODEC_PRESENT; 1123 - 1124 - /* All done, let go of the semaphore, and check for errors 1107 + /* Command complete, we can let go of the semaphore now. 
1125 1108 */ 1126 1109 outl(SIS_AC97_SEMA_RELEASE, io + SIS_AC97_SEMA); 1127 - if (!sis->codecs_present || !count) 1110 + if (!count) 1128 1111 return -EIO; 1112 + 1113 + /* Now that we've finished the reset, find out what's attached. 1114 + * There are some codec/board combinations that take an extremely 1115 + * long time to come up. 350+ ms has been observed in the field, 1116 + * so we'll give them up to 500ms. 1117 + */ 1118 + sis->codecs_present = 0; 1119 + timeout = msecs_to_jiffies(500) + jiffies; 1120 + while (time_before_eq(jiffies, timeout)) { 1121 + status = inl(io + SIS_AC97_STATUS); 1122 + if (status & SIS_AC97_STATUS_CODEC_READY) 1123 + sis->codecs_present |= SIS_PRIMARY_CODEC_PRESENT; 1124 + if (status & SIS_AC97_STATUS_CODEC2_READY) 1125 + sis->codecs_present |= SIS_SECONDARY_CODEC_PRESENT; 1126 + if (status & SIS_AC97_STATUS_CODEC3_READY) 1127 + sis->codecs_present |= SIS_TERTIARY_CODEC_PRESENT; 1128 + 1129 + if (sis->codecs_present == codecs) 1130 + break; 1131 + 1132 + msleep(1); 1133 + } 1134 + 1135 + /* All done, check for errors. 1136 + */ 1137 + if (!sis->codecs_present) { 1138 + printk(KERN_ERR "sis7019: could not find any codecs\n"); 1139 + return -EIO; 1140 + } 1141 + 1142 + if (sis->codecs_present != codecs) { 1143 + printk(KERN_WARNING "sis7019: missing codecs, found %0x, expected %0x\n", 1144 + sis->codecs_present, codecs); 1145 + } 1129 1146 1130 1147 /* Let the hardware know that the audio driver is alive, 1131 1148 * and enable PCM slots on the AC-link for L/R playback (3 & 4) and ··· 1420 1389 rc = -ENOENT; 1421 1390 if (!enable) 1422 1391 goto error_out; 1392 + 1393 + /* The user can specify which codecs should be present so that we 1394 + * can wait for them to show up if they are slow to recover from 1395 + * the AC97 cold reset. We default to a single codec, the primary. 1396 + * 1397 + * We assume that SIS_PRIMARY_*_PRESENT matches bits 0-2. 
1398 + */ 1399 + codecs &= SIS_PRIMARY_CODEC_PRESENT | SIS_SECONDARY_CODEC_PRESENT | 1400 + SIS_TERTIARY_CODEC_PRESENT; 1401 + if (!codecs) 1402 + codecs = SIS_PRIMARY_CODEC_PRESENT; 1423 1403 1424 1404 rc = snd_card_create(index, id, THIS_MODULE, sizeof(*sis), &card); 1425 1405 if (rc < 0)
+1 -20
sound/soc/atmel/Kconfig
··· 1 1 config SND_ATMEL_SOC 2 2 tristate "SoC Audio for the Atmel System-on-Chip" 3 - depends on ARCH_AT91 || AVR32 3 + depends on ARCH_AT91 4 4 help 5 5 Say Y or M if you want to add support for codecs attached to 6 6 the ATMEL SSC interface. You will also need ··· 23 23 help 24 24 Say Y if you want to add support for SoC audio on WM8731-based 25 25 AT91sam9g20 evaluation board. 26 - 27 - config SND_AT32_SOC_PLAYPAQ 28 - tristate "SoC Audio support for PlayPaq with WM8510" 29 - depends on SND_ATMEL_SOC && BOARD_PLAYPAQ && AT91_PROGRAMMABLE_CLOCKS 30 - select SND_ATMEL_SOC_SSC 31 - select SND_SOC_WM8510 32 - help 33 - Say Y or M here if you want to add support for SoC audio 34 - on the LRS PlayPaq. 35 - 36 - config SND_AT32_SOC_PLAYPAQ_SLAVE 37 - bool "Run CODEC on PlayPaq in slave mode" 38 - depends on SND_AT32_SOC_PLAYPAQ 39 - default n 40 - help 41 - Say Y if you want to run with the AT32 SSC generating the BCLK 42 - and FRAME signals on the PlayPaq. Unless you want to play 43 - with the AT32 as the SSC master, you probably want to say N here, 44 - as this will give you better sound quality. 45 26 46 27 config SND_AT91_SOC_AFEB9260 47 28 tristate "SoC Audio support for AFEB9260 board"
-4
sound/soc/atmel/Makefile
··· 8 8 # AT91 Machine Support 9 9 snd-soc-sam9g20-wm8731-objs := sam9g20_wm8731.o 10 10 11 - # AT32 Machine Support 12 - snd-soc-playpaq-objs := playpaq_wm8510.o 13 - 14 11 obj-$(CONFIG_SND_AT91_SOC_SAM9G20_WM8731) += snd-soc-sam9g20-wm8731.o 15 - obj-$(CONFIG_SND_AT32_SOC_PLAYPAQ) += snd-soc-playpaq.o 16 12 obj-$(CONFIG_SND_AT91_SOC_AFEB9260) += snd-soc-afeb9260.o
-473
sound/soc/atmel/playpaq_wm8510.c
··· 1 - /* sound/soc/at32/playpaq_wm8510.c 2 - * ASoC machine driver for PlayPaq using WM8510 codec 3 - * 4 - * Copyright (C) 2008 Long Range Systems 5 - * Geoffrey Wossum <gwossum@acm.org> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - * 11 - * This code is largely inspired by sound/soc/at91/eti_b1_wm8731.c 12 - * 13 - * NOTE: If you don't have the AT32 enhanced portmux configured (which 14 - * isn't currently in the mainline or Atmel patched kernel), you will 15 - * need to set the MCLK pin (PA30) to peripheral A in your board initialization 16 - * code. Something like: 17 - * at32_select_periph(GPIO_PIN_PA(30), GPIO_PERIPH_A, 0); 18 - * 19 - */ 20 - 21 - /* #define DEBUG */ 22 - 23 - #include <linux/module.h> 24 - #include <linux/moduleparam.h> 25 - #include <linux/kernel.h> 26 - #include <linux/errno.h> 27 - #include <linux/clk.h> 28 - #include <linux/timer.h> 29 - #include <linux/interrupt.h> 30 - #include <linux/platform_device.h> 31 - 32 - #include <sound/core.h> 33 - #include <sound/pcm.h> 34 - #include <sound/pcm_params.h> 35 - #include <sound/soc.h> 36 - 37 - #include <mach/at32ap700x.h> 38 - #include <mach/portmux.h> 39 - 40 - #include "../codecs/wm8510.h" 41 - #include "atmel-pcm.h" 42 - #include "atmel_ssc_dai.h" 43 - 44 - 45 - /*-------------------------------------------------------------------------*\ 46 - * constants 47 - \*-------------------------------------------------------------------------*/ 48 - #define MCLK_PIN GPIO_PIN_PA(30) 49 - #define MCLK_PERIPH GPIO_PERIPH_A 50 - 51 - 52 - /*-------------------------------------------------------------------------*\ 53 - * data types 54 - \*-------------------------------------------------------------------------*/ 55 - /* SSC clocking data */ 56 - struct ssc_clock_data { 57 - /* CMR div */ 58 - unsigned int cmr_div; 59 - 60 - /* Frame period 
(as needed by xCMR.PERIOD) */ 61 - unsigned int period; 62 - 63 - /* The SSC clock rate these settings where calculated for */ 64 - unsigned long ssc_rate; 65 - }; 66 - 67 - 68 - /*-------------------------------------------------------------------------*\ 69 - * module data 70 - \*-------------------------------------------------------------------------*/ 71 - static struct clk *_gclk0; 72 - static struct clk *_pll0; 73 - 74 - #define CODEC_CLK (_gclk0) 75 - 76 - 77 - /*-------------------------------------------------------------------------*\ 78 - * Sound SOC operations 79 - \*-------------------------------------------------------------------------*/ 80 - #if defined CONFIG_SND_AT32_SOC_PLAYPAQ_SLAVE 81 - static struct ssc_clock_data playpaq_wm8510_calc_ssc_clock( 82 - struct snd_pcm_hw_params *params, 83 - struct snd_soc_dai *cpu_dai) 84 - { 85 - struct at32_ssc_info *ssc_p = snd_soc_dai_get_drvdata(cpu_dai); 86 - struct ssc_device *ssc = ssc_p->ssc; 87 - struct ssc_clock_data cd; 88 - unsigned int rate, width_bits, channels; 89 - unsigned int bitrate, ssc_div; 90 - unsigned actual_rate; 91 - 92 - 93 - /* 94 - * Figure out required bitrate 95 - */ 96 - rate = params_rate(params); 97 - channels = params_channels(params); 98 - width_bits = snd_pcm_format_physical_width(params_format(params)); 99 - bitrate = rate * width_bits * channels; 100 - 101 - 102 - /* 103 - * Figure out required SSC divider and period for required bitrate 104 - */ 105 - cd.ssc_rate = clk_get_rate(ssc->clk); 106 - ssc_div = cd.ssc_rate / bitrate; 107 - cd.cmr_div = ssc_div / 2; 108 - if (ssc_div & 1) { 109 - /* round cmr_div up */ 110 - cd.cmr_div++; 111 - } 112 - cd.period = width_bits - 1; 113 - 114 - 115 - /* 116 - * Find actual rate, compare to requested rate 117 - */ 118 - actual_rate = (cd.ssc_rate / (cd.cmr_div * 2)) / (2 * (cd.period + 1)); 119 - pr_debug("playpaq_wm8510: Request rate = %u, actual rate = %u\n", 120 - rate, actual_rate); 121 - 122 - 123 - return cd; 124 - } 125 - 
#endif /* CONFIG_SND_AT32_SOC_PLAYPAQ_SLAVE */ 126 - 127 - 128 - 129 - static int playpaq_wm8510_hw_params(struct snd_pcm_substream *substream, 130 - struct snd_pcm_hw_params *params) 131 - { 132 - struct snd_soc_pcm_runtime *rtd = substream->private_data; 133 - struct snd_soc_dai *codec_dai = rtd->codec_dai; 134 - struct snd_soc_dai *cpu_dai = rtd->cpu_dai; 135 - struct at32_ssc_info *ssc_p = snd_soc_dai_get_drvdata(cpu_dai); 136 - struct ssc_device *ssc = ssc_p->ssc; 137 - unsigned int pll_out = 0, bclk = 0, mclk_div = 0; 138 - int ret; 139 - 140 - 141 - /* Due to difficulties with getting the correct clocks from the AT32's 142 - * PLL0, we're going to let the CODEC be in charge of all the clocks 143 - */ 144 - #if !defined CONFIG_SND_AT32_SOC_PLAYPAQ_SLAVE 145 - const unsigned int fmt = (SND_SOC_DAIFMT_I2S | 146 - SND_SOC_DAIFMT_NB_NF | 147 - SND_SOC_DAIFMT_CBM_CFM); 148 - #else 149 - struct ssc_clock_data cd; 150 - const unsigned int fmt = (SND_SOC_DAIFMT_I2S | 151 - SND_SOC_DAIFMT_NB_NF | 152 - SND_SOC_DAIFMT_CBS_CFS); 153 - #endif 154 - 155 - if (ssc == NULL) { 156 - pr_warning("playpaq_wm8510_hw_params: ssc is NULL!\n"); 157 - return -EINVAL; 158 - } 159 - 160 - 161 - /* 162 - * Figure out PLL and BCLK dividers for WM8510 163 - */ 164 - switch (params_rate(params)) { 165 - case 48000: 166 - pll_out = 24576000; 167 - mclk_div = WM8510_MCLKDIV_2; 168 - bclk = WM8510_BCLKDIV_8; 169 - break; 170 - 171 - case 44100: 172 - pll_out = 22579200; 173 - mclk_div = WM8510_MCLKDIV_2; 174 - bclk = WM8510_BCLKDIV_8; 175 - break; 176 - 177 - case 22050: 178 - pll_out = 22579200; 179 - mclk_div = WM8510_MCLKDIV_4; 180 - bclk = WM8510_BCLKDIV_8; 181 - break; 182 - 183 - case 16000: 184 - pll_out = 24576000; 185 - mclk_div = WM8510_MCLKDIV_6; 186 - bclk = WM8510_BCLKDIV_8; 187 - break; 188 - 189 - case 11025: 190 - pll_out = 22579200; 191 - mclk_div = WM8510_MCLKDIV_8; 192 - bclk = WM8510_BCLKDIV_8; 193 - break; 194 - 195 - case 8000: 196 - pll_out = 24576000; 197 - mclk_div = 
WM8510_MCLKDIV_12; 198 - bclk = WM8510_BCLKDIV_8; 199 - break; 200 - 201 - default: 202 - pr_warning("playpaq_wm8510: Unsupported sample rate %d\n", 203 - params_rate(params)); 204 - return -EINVAL; 205 - } 206 - 207 - 208 - /* 209 - * set CPU and CODEC DAI configuration 210 - */ 211 - ret = snd_soc_dai_set_fmt(codec_dai, fmt); 212 - if (ret < 0) { 213 - pr_warning("playpaq_wm8510: " 214 - "Failed to set CODEC DAI format (%d)\n", 215 - ret); 216 - return ret; 217 - } 218 - ret = snd_soc_dai_set_fmt(cpu_dai, fmt); 219 - if (ret < 0) { 220 - pr_warning("playpaq_wm8510: " 221 - "Failed to set CPU DAI format (%d)\n", 222 - ret); 223 - return ret; 224 - } 225 - 226 - 227 - /* 228 - * Set CPU clock configuration 229 - */ 230 - #if defined CONFIG_SND_AT32_SOC_PLAYPAQ_SLAVE 231 - cd = playpaq_wm8510_calc_ssc_clock(params, cpu_dai); 232 - pr_debug("playpaq_wm8510: cmr_div = %d, period = %d\n", 233 - cd.cmr_div, cd.period); 234 - ret = snd_soc_dai_set_clkdiv(cpu_dai, AT32_SSC_CMR_DIV, cd.cmr_div); 235 - if (ret < 0) { 236 - pr_warning("playpaq_wm8510: Failed to set CPU CMR_DIV (%d)\n", 237 - ret); 238 - return ret; 239 - } 240 - ret = snd_soc_dai_set_clkdiv(cpu_dai, AT32_SSC_TCMR_PERIOD, 241 - cd.period); 242 - if (ret < 0) { 243 - pr_warning("playpaq_wm8510: " 244 - "Failed to set CPU transmit period (%d)\n", 245 - ret); 246 - return ret; 247 - } 248 - #endif /* CONFIG_SND_AT32_SOC_PLAYPAQ_SLAVE */ 249 - 250 - 251 - /* 252 - * Set CODEC clock configuration 253 - */ 254 - pr_debug("playpaq_wm8510: " 255 - "pll_in = %ld, pll_out = %u, bclk = %x, mclk = %x\n", 256 - clk_get_rate(CODEC_CLK), pll_out, bclk, mclk_div); 257 - 258 - 259 - #if !defined CONFIG_SND_AT32_SOC_PLAYPAQ_SLAVE 260 - ret = snd_soc_dai_set_clkdiv(codec_dai, WM8510_BCLKDIV, bclk); 261 - if (ret < 0) { 262 - pr_warning 263 - ("playpaq_wm8510: Failed to set CODEC DAI BCLKDIV (%d)\n", 264 - ret); 265 - return ret; 266 - } 267 - #endif /* CONFIG_SND_AT32_SOC_PLAYPAQ_SLAVE */ 268 - 269 - 270 - ret = 
snd_soc_dai_set_pll(codec_dai, 0, 0, 271 - clk_get_rate(CODEC_CLK), pll_out); 272 - if (ret < 0) { 273 - pr_warning("playpaq_wm8510: Failed to set CODEC DAI PLL (%d)\n", 274 - ret); 275 - return ret; 276 - } 277 - 278 - 279 - ret = snd_soc_dai_set_clkdiv(codec_dai, WM8510_MCLKDIV, mclk_div); 280 - if (ret < 0) { 281 - pr_warning("playpaq_wm8510: Failed to set CODEC MCLKDIV (%d)\n", 282 - ret); 283 - return ret; 284 - } 285 - 286 - 287 - return 0; 288 - } 289 - 290 - 291 - 292 - static struct snd_soc_ops playpaq_wm8510_ops = { 293 - .hw_params = playpaq_wm8510_hw_params, 294 - }; 295 - 296 - 297 - 298 - static const struct snd_soc_dapm_widget playpaq_dapm_widgets[] = { 299 - SND_SOC_DAPM_MIC("Int Mic", NULL), 300 - SND_SOC_DAPM_SPK("Ext Spk", NULL), 301 - }; 302 - 303 - 304 - 305 - static const struct snd_soc_dapm_route intercon[] = { 306 - /* speaker connected to SPKOUT */ 307 - {"Ext Spk", NULL, "SPKOUTP"}, 308 - {"Ext Spk", NULL, "SPKOUTN"}, 309 - 310 - {"Mic Bias", NULL, "Int Mic"}, 311 - {"MICN", NULL, "Mic Bias"}, 312 - {"MICP", NULL, "Mic Bias"}, 313 - }; 314 - 315 - 316 - 317 - static int playpaq_wm8510_init(struct snd_soc_pcm_runtime *rtd) 318 - { 319 - struct snd_soc_codec *codec = rtd->codec; 320 - struct snd_soc_dapm_context *dapm = &codec->dapm; 321 - int i; 322 - 323 - /* 324 - * Add DAPM widgets 325 - */ 326 - for (i = 0; i < ARRAY_SIZE(playpaq_dapm_widgets); i++) 327 - snd_soc_dapm_new_control(dapm, &playpaq_dapm_widgets[i]); 328 - 329 - 330 - 331 - /* 332 - * Setup audio path interconnects 333 - */ 334 - snd_soc_dapm_add_routes(dapm, intercon, ARRAY_SIZE(intercon)); 335 - 336 - 337 - 338 - /* always connected pins */ 339 - snd_soc_dapm_enable_pin(dapm, "Int Mic"); 340 - snd_soc_dapm_enable_pin(dapm, "Ext Spk"); 341 - 342 - 343 - 344 - /* Make CSB show PLL rate */ 345 - snd_soc_dai_set_clkdiv(rtd->codec_dai, WM8510_OPCLKDIV, 346 - WM8510_OPCLKDIV_1 | 4); 347 - 348 - return 0; 349 - } 350 - 351 - 352 - 353 - static struct snd_soc_dai_link 
playpaq_wm8510_dai = { 354 - .name = "WM8510", 355 - .stream_name = "WM8510 PCM", 356 - .cpu_dai_name= "atmel-ssc-dai.0", 357 - .platform_name = "atmel-pcm-audio", 358 - .codec_name = "wm8510-codec.0-0x1a", 359 - .codec_dai_name = "wm8510-hifi", 360 - .init = playpaq_wm8510_init, 361 - .ops = &playpaq_wm8510_ops, 362 - }; 363 - 364 - 365 - 366 - static struct snd_soc_card snd_soc_playpaq = { 367 - .name = "LRS_PlayPaq_WM8510", 368 - .dai_link = &playpaq_wm8510_dai, 369 - .num_links = 1, 370 - }; 371 - 372 - static struct platform_device *playpaq_snd_device; 373 - 374 - 375 - static int __init playpaq_asoc_init(void) 376 - { 377 - int ret = 0; 378 - 379 - /* 380 - * Configure MCLK for WM8510 381 - */ 382 - _gclk0 = clk_get(NULL, "gclk0"); 383 - if (IS_ERR(_gclk0)) { 384 - _gclk0 = NULL; 385 - ret = PTR_ERR(_gclk0); 386 - goto err_gclk0; 387 - } 388 - _pll0 = clk_get(NULL, "pll0"); 389 - if (IS_ERR(_pll0)) { 390 - _pll0 = NULL; 391 - ret = PTR_ERR(_pll0); 392 - goto err_pll0; 393 - } 394 - ret = clk_set_parent(_gclk0, _pll0); 395 - if (ret) { 396 - pr_warning("snd-soc-playpaq: " 397 - "Failed to set PLL0 as parent for DAC clock\n"); 398 - goto err_set_clk; 399 - } 400 - clk_set_rate(CODEC_CLK, 12000000); 401 - clk_enable(CODEC_CLK); 402 - 403 - #if defined CONFIG_AT32_ENHANCED_PORTMUX 404 - at32_select_periph(MCLK_PIN, MCLK_PERIPH, 0); 405 - #endif 406 - 407 - 408 - /* 409 - * Create and register platform device 410 - */ 411 - playpaq_snd_device = platform_device_alloc("soc-audio", 0); 412 - if (playpaq_snd_device == NULL) { 413 - ret = -ENOMEM; 414 - goto err_device_alloc; 415 - } 416 - 417 - platform_set_drvdata(playpaq_snd_device, &snd_soc_playpaq); 418 - 419 - ret = platform_device_add(playpaq_snd_device); 420 - if (ret) { 421 - pr_warning("playpaq_wm8510: platform_device_add failed (%d)\n", 422 - ret); 423 - goto err_device_add; 424 - } 425 - 426 - return 0; 427 - 428 - 429 - err_device_add: 430 - if (playpaq_snd_device != NULL) { 431 - 
platform_device_put(playpaq_snd_device); 432 - playpaq_snd_device = NULL; 433 - } 434 - err_device_alloc: 435 - err_set_clk: 436 - if (_pll0 != NULL) { 437 - clk_put(_pll0); 438 - _pll0 = NULL; 439 - } 440 - err_pll0: 441 - if (_gclk0 != NULL) { 442 - clk_put(_gclk0); 443 - _gclk0 = NULL; 444 - } 445 - return ret; 446 - } 447 - 448 - 449 - static void __exit playpaq_asoc_exit(void) 450 - { 451 - if (_gclk0 != NULL) { 452 - clk_put(_gclk0); 453 - _gclk0 = NULL; 454 - } 455 - if (_pll0 != NULL) { 456 - clk_put(_pll0); 457 - _pll0 = NULL; 458 - } 459 - 460 - #if defined CONFIG_AT32_ENHANCED_PORTMUX 461 - at32_free_pin(MCLK_PIN); 462 - #endif 463 - 464 - platform_device_unregister(playpaq_snd_device); 465 - playpaq_snd_device = NULL; 466 - } 467 - 468 - module_init(playpaq_asoc_init); 469 - module_exit(playpaq_asoc_exit); 470 - 471 - MODULE_AUTHOR("Geoffrey Wossum <gwossum@acm.org>"); 472 - MODULE_DESCRIPTION("ASoC machine driver for LRS PlayPaq"); 473 - MODULE_LICENSE("GPL");
+1 -1
sound/soc/codecs/Kconfig
··· 33 33 select SND_SOC_CX20442 34 34 select SND_SOC_DA7210 if I2C 35 35 select SND_SOC_DFBMCS320 36 - select SND_SOC_JZ4740_CODEC if SOC_JZ4740 36 + select SND_SOC_JZ4740_CODEC 37 37 select SND_SOC_LM4857 if I2C 38 38 select SND_SOC_MAX98088 if I2C 39 39 select SND_SOC_MAX98095 if I2C
+1 -1
sound/soc/codecs/ad1836.h
··· 34 34 35 35 #define AD1836_ADC_CTRL2 13 36 36 #define AD1836_ADC_WORD_LEN_MASK 0x30 37 - #define AD1836_ADC_WORD_OFFSET 5 37 + #define AD1836_ADC_WORD_OFFSET 4 38 38 #define AD1836_ADC_SERFMT_MASK (7 << 6) 39 39 #define AD1836_ADC_SERFMT_PCK256 (0x4 << 6) 40 40 #define AD1836_ADC_SERFMT_PCK128 (0x5 << 6)
+1 -1
sound/soc/codecs/adau1373.c
··· 245 245 }; 246 246 247 247 static const unsigned int adau1373_bass_tlv[] = { 248 - TLV_DB_RANGE_HEAD(4), 248 + TLV_DB_RANGE_HEAD(3), 249 249 0, 2, TLV_DB_SCALE_ITEM(-600, 600, 1), 250 250 3, 4, TLV_DB_SCALE_ITEM(950, 250, 0), 251 251 5, 7, TLV_DB_SCALE_ITEM(1400, 150, 0),
+1 -9
sound/soc/codecs/cs4270.c
··· 601 601 static int cs4270_soc_resume(struct snd_soc_codec *codec) 602 602 { 603 603 struct cs4270_private *cs4270 = snd_soc_codec_get_drvdata(codec); 604 - struct i2c_client *i2c_client = to_i2c_client(codec->dev); 605 604 int reg; 606 605 607 606 regulator_bulk_enable(ARRAY_SIZE(cs4270->supplies), ··· 611 612 ndelay(500); 612 613 613 614 /* first restore the entire register cache ... */ 614 - for (reg = CS4270_FIRSTREG; reg <= CS4270_LASTREG; reg++) { 615 - u8 val = snd_soc_read(codec, reg); 616 - 617 - if (i2c_smbus_write_byte_data(i2c_client, reg, val)) { 618 - dev_err(codec->dev, "i2c write failed\n"); 619 - return -EIO; 620 - } 621 - } 615 + snd_soc_cache_sync(codec); 622 616 623 617 /* ... then disable the power-down bits */ 624 618 reg = snd_soc_read(codec, CS4270_PWRCTL);
+5 -3
sound/soc/codecs/cs4271.c
··· 434 434 { 435 435 int ret; 436 436 /* Set power-down bit */ 437 - ret = snd_soc_update_bits(codec, CS4271_MODE2, 0, CS4271_MODE2_PDN); 437 + ret = snd_soc_update_bits(codec, CS4271_MODE2, CS4271_MODE2_PDN, 438 + CS4271_MODE2_PDN); 438 439 if (ret < 0) 439 440 return ret; 440 441 return 0; ··· 502 501 return ret; 503 502 } 504 503 505 - ret = snd_soc_update_bits(codec, CS4271_MODE2, 0, 506 - CS4271_MODE2_PDN | CS4271_MODE2_CPEN); 504 + ret = snd_soc_update_bits(codec, CS4271_MODE2, 505 + CS4271_MODE2_PDN | CS4271_MODE2_CPEN, 506 + CS4271_MODE2_PDN | CS4271_MODE2_CPEN); 507 507 if (ret < 0) 508 508 return ret; 509 509 ret = snd_soc_update_bits(codec, CS4271_MODE2, CS4271_MODE2_PDN, 0);
+1 -1
sound/soc/codecs/cs42l51.c
··· 555 555 556 556 static struct snd_soc_codec_driver soc_codec_device_cs42l51 = { 557 557 .probe = cs42l51_probe, 558 - .reg_cache_size = CS42L51_NUMREGS, 558 + .reg_cache_size = CS42L51_NUMREGS + 1, 559 559 .reg_word_size = sizeof(u8), 560 560 }; 561 561
+1
sound/soc/codecs/jz4740.c
··· 15 15 #include <linux/module.h> 16 16 #include <linux/platform_device.h> 17 17 #include <linux/slab.h> 18 + #include <linux/io.h> 18 19 19 20 #include <linux/delay.h> 20 21
+5 -5
sound/soc/codecs/max9877.c
··· 106 106 unsigned int mask = mc->max; 107 107 unsigned int val = (ucontrol->value.integer.value[0] & mask); 108 108 unsigned int val2 = (ucontrol->value.integer.value[1] & mask); 109 - unsigned int change = 1; 109 + unsigned int change = 0; 110 110 111 - if (((max9877_regs[reg] >> shift) & mask) == val) 112 - change = 0; 111 + if (((max9877_regs[reg] >> shift) & mask) != val) 112 + change = 1; 113 113 114 - if (((max9877_regs[reg2] >> shift) & mask) == val2) 115 - change = 0; 114 + if (((max9877_regs[reg2] >> shift) & mask) != val2) 115 + change = 1; 116 116 117 117 if (change) { 118 118 max9877_regs[reg] &= ~(mask << shift);
+1 -1
sound/soc/codecs/rt5631.c
··· 177 177 static const DECLARE_TLV_DB_SCALE(in_vol_tlv, -3450, 150, 0); 178 178 /* {0, +20, +24, +30, +35, +40, +44, +50, +52}dB */ 179 179 static unsigned int mic_bst_tlv[] = { 180 - TLV_DB_RANGE_HEAD(6), 180 + TLV_DB_RANGE_HEAD(7), 181 181 0, 0, TLV_DB_SCALE_ITEM(0, 0, 0), 182 182 1, 1, TLV_DB_SCALE_ITEM(2000, 0, 0), 183 183 2, 2, TLV_DB_SCALE_ITEM(2400, 0, 0),
+1 -1
sound/soc/codecs/sgtl5000.c
··· 365 365 366 366 /* tlv for mic gain, 0db 20db 30db 40db */ 367 367 static const unsigned int mic_gain_tlv[] = { 368 - TLV_DB_RANGE_HEAD(4), 368 + TLV_DB_RANGE_HEAD(2), 369 369 0, 0, TLV_DB_SCALE_ITEM(0, 0, 0), 370 370 1, 3, TLV_DB_SCALE_ITEM(2000, 1000, 0), 371 371 };
+62 -1
sound/soc/codecs/sta32x.c
··· 76 76 77 77 unsigned int mclk; 78 78 unsigned int format; 79 + 80 + u32 coef_shadow[STA32X_COEF_COUNT]; 79 81 }; 80 82 81 83 static const DECLARE_TLV_DB_SCALE(mvol_tlv, -12700, 50, 1); ··· 229 227 struct snd_ctl_elem_value *ucontrol) 230 228 { 231 229 struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); 230 + struct sta32x_priv *sta32x = snd_soc_codec_get_drvdata(codec); 232 231 int numcoef = kcontrol->private_value >> 16; 233 232 int index = kcontrol->private_value & 0xffff; 234 233 unsigned int cfud; ··· 242 239 snd_soc_write(codec, STA32X_CFUD, cfud); 243 240 244 241 snd_soc_write(codec, STA32X_CFADDR2, index); 242 + for (i = 0; i < numcoef && (index + i < STA32X_COEF_COUNT); i++) 243 + sta32x->coef_shadow[index + i] = 244 + (ucontrol->value.bytes.data[3 * i] << 16) 245 + | (ucontrol->value.bytes.data[3 * i + 1] << 8) 246 + | (ucontrol->value.bytes.data[3 * i + 2]); 245 247 for (i = 0; i < 3 * numcoef; i++) 246 248 snd_soc_write(codec, STA32X_B1CF1 + i, 247 249 ucontrol->value.bytes.data[i]); ··· 258 250 return -EINVAL; 259 251 260 252 return 0; 253 + } 254 + 255 + int sta32x_sync_coef_shadow(struct snd_soc_codec *codec) 256 + { 257 + struct sta32x_priv *sta32x = snd_soc_codec_get_drvdata(codec); 258 + unsigned int cfud; 259 + int i; 260 + 261 + /* preserve reserved bits in STA32X_CFUD */ 262 + cfud = snd_soc_read(codec, STA32X_CFUD) & 0xf0; 263 + 264 + for (i = 0; i < STA32X_COEF_COUNT; i++) { 265 + snd_soc_write(codec, STA32X_CFADDR2, i); 266 + snd_soc_write(codec, STA32X_B1CF1, 267 + (sta32x->coef_shadow[i] >> 16) & 0xff); 268 + snd_soc_write(codec, STA32X_B1CF2, 269 + (sta32x->coef_shadow[i] >> 8) & 0xff); 270 + snd_soc_write(codec, STA32X_B1CF3, 271 + (sta32x->coef_shadow[i]) & 0xff); 272 + /* chip documentation does not say if the bits are 273 + * self-clearing, so do it explicitly */ 274 + snd_soc_write(codec, STA32X_CFUD, cfud); 275 + snd_soc_write(codec, STA32X_CFUD, cfud | 0x01); 276 + } 277 + return 0; 278 + } 279 + 280 + int 
sta32x_cache_sync(struct snd_soc_codec *codec) 281 + { 282 + unsigned int mute; 283 + int rc; 284 + 285 + if (!codec->cache_sync) 286 + return 0; 287 + 288 + /* mute during register sync */ 289 + mute = snd_soc_read(codec, STA32X_MMUTE); 290 + snd_soc_write(codec, STA32X_MMUTE, mute | STA32X_MMUTE_MMUTE); 291 + sta32x_sync_coef_shadow(codec); 292 + rc = snd_soc_cache_sync(codec); 293 + snd_soc_write(codec, STA32X_MMUTE, mute); 294 + return rc; 261 295 } 262 296 263 297 #define SINGLE_COEF(xname, index) \ ··· 711 661 return ret; 712 662 } 713 663 714 - snd_soc_cache_sync(codec); 664 + sta32x_cache_sync(codec); 715 665 } 716 666 717 667 /* Power up to mute */ ··· 839 789 snd_soc_update_bits(codec, STA32X_C3CFG, 840 790 STA32X_CxCFG_OM_MASK, 841 791 2 << STA32X_CxCFG_OM_SHIFT); 792 + 793 + /* initialize coefficient shadow RAM with reset values */ 794 + for (i = 4; i <= 49; i += 5) 795 + sta32x->coef_shadow[i] = 0x400000; 796 + for (i = 50; i <= 54; i++) 797 + sta32x->coef_shadow[i] = 0x7fffff; 798 + sta32x->coef_shadow[55] = 0x5a9df7; 799 + sta32x->coef_shadow[56] = 0x7fffff; 800 + sta32x->coef_shadow[59] = 0x7fffff; 801 + sta32x->coef_shadow[60] = 0x400000; 802 + sta32x->coef_shadow[61] = 0x400000; 842 803 843 804 sta32x_set_bias_level(codec, SND_SOC_BIAS_STANDBY); 844 805 /* Bias level configuration will have done an extra enable */
+1
sound/soc/codecs/sta32x.h
··· 19 19 /* STA326 register addresses */ 20 20 21 21 #define STA32X_REGISTER_COUNT 0x2d 22 + #define STA32X_COEF_COUNT 62 22 23 23 24 #define STA32X_CONFA 0x00 24 25 #define STA32X_CONFB 0x01
+2 -2
sound/soc/codecs/uda1380.c
··· 863 863 864 864 static int __init uda1380_modinit(void) 865 865 { 866 - int ret; 866 + int ret = 0; 867 867 #if defined(CONFIG_I2C) || defined(CONFIG_I2C_MODULE) 868 868 ret = i2c_add_driver(&uda1380_i2c_driver); 869 869 if (ret != 0) 870 870 pr_err("Failed to register UDA1380 I2C driver: %d\n", ret); 871 871 #endif 872 - return 0; 872 + return ret; 873 873 } 874 874 module_init(uda1380_modinit); 875 875
+1
sound/soc/codecs/wm8731.c
··· 453 453 snd_soc_write(codec, WM8731_PWR, 0xffff); 454 454 regulator_bulk_disable(ARRAY_SIZE(wm8731->supplies), 455 455 wm8731->supplies); 456 + codec->cache_sync = 1; 456 457 break; 457 458 } 458 459 codec->dapm.bias_level = level;
+3
sound/soc/codecs/wm8753.c
··· 190 190 struct wm8753_priv *wm8753 = snd_soc_codec_get_drvdata(codec); 191 191 u16 ioctl; 192 192 193 + if (wm8753->dai_func == ucontrol->value.integer.value[0]) 194 + return 0; 195 + 193 196 if (codec->active) 194 197 return -EBUSY; 195 198
+2
sound/soc/codecs/wm8958-dsp2.c
··· 60 60 } 61 61 62 62 if (memcmp(fw->data, "WMFW", 4) != 0) { 63 + memcpy(&data32, fw->data, sizeof(data32)); 64 + data32 = be32_to_cpu(data32); 63 65 dev_err(codec->dev, "%s: firmware has bad file magic %08x\n", 64 66 name, data32); 65 67 goto err;
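The wm8958-dsp2 fix copies the first four firmware bytes out and byte-swaps them before printing, instead of logging an uninitialized `data32`. A portable userspace equivalent of `be32_to_cpu` (the helper name `be32_load` is made up here) assembles the value byte by byte, so it is correct on both little- and big-endian hosts:

```c
#include <stdint.h>

/* Load a 32-bit big-endian value from a byte buffer, independent of host
 * endianness; analogous to the kernel's be32_to_cpu applied after memcpy. */
static uint32_t be32_load(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}
```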
+2 -2
sound/soc/codecs/wm8962.c
··· 1973 1973 static const DECLARE_TLV_DB_SCALE(inpga_tlv, -2325, 75, 0); 1974 1974 static const DECLARE_TLV_DB_SCALE(mixin_tlv, -1500, 300, 0); 1975 1975 static const unsigned int mixinpga_tlv[] = { 1976 - TLV_DB_RANGE_HEAD(7), 1976 + TLV_DB_RANGE_HEAD(5), 1977 1977 0, 1, TLV_DB_SCALE_ITEM(0, 600, 0), 1978 1978 2, 2, TLV_DB_SCALE_ITEM(1300, 1300, 0), 1979 1979 3, 4, TLV_DB_SCALE_ITEM(1800, 200, 0), ··· 1988 1988 static const DECLARE_TLV_DB_SCALE(out_tlv, -12100, 100, 1); 1989 1989 static const DECLARE_TLV_DB_SCALE(hp_tlv, -700, 100, 0); 1990 1990 static const unsigned int classd_tlv[] = { 1991 - TLV_DB_RANGE_HEAD(7), 1991 + TLV_DB_RANGE_HEAD(2), 1992 1992 0, 6, TLV_DB_SCALE_ITEM(0, 150, 0), 1993 1993 7, 7, TLV_DB_SCALE_ITEM(1200, 0, 0), 1994 1994 };
+1 -1
sound/soc/codecs/wm8993.c
··· 512 512 static const DECLARE_TLV_DB_SCALE(drc_comp_amp, -2250, 75, 0); 513 513 static const DECLARE_TLV_DB_SCALE(drc_min_tlv, -1800, 600, 0); 514 514 static const unsigned int drc_max_tlv[] = { 515 - TLV_DB_RANGE_HEAD(4), 515 + TLV_DB_RANGE_HEAD(2), 516 516 0, 2, TLV_DB_SCALE_ITEM(1200, 600, 0), 517 517 3, 3, TLV_DB_SCALE_ITEM(3600, 0, 0), 518 518 };
+13 -6
sound/soc/codecs/wm8994.c
··· 1325 1325 }; 1326 1326 1327 1327 static const struct snd_soc_dapm_widget wm8994_adc_revd_widgets[] = { 1328 - SND_SOC_DAPM_MUX_E("ADCL Mux", WM8994_POWER_MANAGEMENT_4, 1, 0, &adcl_mux, 1329 - adc_mux_ev, SND_SOC_DAPM_PRE_PMU), 1330 - SND_SOC_DAPM_MUX_E("ADCR Mux", WM8994_POWER_MANAGEMENT_4, 0, 0, &adcr_mux, 1331 - adc_mux_ev, SND_SOC_DAPM_PRE_PMU), 1328 + SND_SOC_DAPM_VIRT_MUX_E("ADCL Mux", WM8994_POWER_MANAGEMENT_4, 1, 0, &adcl_mux, 1329 + adc_mux_ev, SND_SOC_DAPM_PRE_PMU), 1330 + SND_SOC_DAPM_VIRT_MUX_E("ADCR Mux", WM8994_POWER_MANAGEMENT_4, 0, 0, &adcr_mux, 1331 + adc_mux_ev, SND_SOC_DAPM_PRE_PMU), 1332 1332 }; 1333 1333 1334 1334 static const struct snd_soc_dapm_widget wm8994_adc_widgets[] = { 1335 - SND_SOC_DAPM_MUX("ADCL Mux", WM8994_POWER_MANAGEMENT_4, 1, 0, &adcl_mux), 1336 - SND_SOC_DAPM_MUX("ADCR Mux", WM8994_POWER_MANAGEMENT_4, 0, 0, &adcr_mux), 1335 + SND_SOC_DAPM_VIRT_MUX("ADCL Mux", WM8994_POWER_MANAGEMENT_4, 1, 0, &adcl_mux), 1336 + SND_SOC_DAPM_VIRT_MUX("ADCR Mux", WM8994_POWER_MANAGEMENT_4, 0, 0, &adcr_mux), 1337 1337 }; 1338 1338 1339 1339 static const struct snd_soc_dapm_widget wm8994_dapm_widgets[] = { ··· 2357 2357 bclk |= best << WM8994_AIF1_BCLK_DIV_SHIFT; 2358 2358 2359 2359 lrclk = bclk_rate / params_rate(params); 2360 + if (!lrclk) { 2361 + dev_err(dai->dev, "Unable to generate LRCLK from %dHz BCLK\n", 2362 + bclk_rate); 2363 + return -EINVAL; 2364 + } 2360 2365 dev_dbg(dai->dev, "Using LRCLK rate %d for actual LRCLK %dHz\n", 2361 2366 lrclk, bclk_rate / lrclk); 2362 2367 ··· 3183 3178 switch (wm8994->revision) { 3184 3179 case 0: 3185 3180 case 1: 3181 + case 2: 3182 + case 3: 3186 3183 wm8994->hubs.dcs_codes_l = -9; 3187 3184 wm8994->hubs.dcs_codes_r = -5; 3188 3185 break;
+1
sound/soc/codecs/wm8996.c
··· 1968 1968 break; 1969 1969 case 24576000: 1970 1970 ratediv = WM8996_SYSCLK_DIV; 1971 + wm8996->sysclk /= 2; 1971 1972 case 12288000: 1972 1973 snd_soc_update_bits(codec, WM8996_AIF_RATE, 1973 1974 WM8996_SYSCLK_RATE, WM8996_SYSCLK_RATE);
+5 -5
sound/soc/codecs/wm9081.c
··· 807 807 mdelay(100); 808 808 809 809 /* Normal bias enable & soft start off */ 810 - reg |= WM9081_BIAS_ENA; 811 810 reg &= ~WM9081_VMID_RAMP; 812 811 snd_soc_write(codec, WM9081_VMID_CONTROL, reg); 813 812 ··· 817 818 } 818 819 819 820 /* VMID 2*240k */ 820 - reg = snd_soc_read(codec, WM9081_BIAS_CONTROL_1); 821 + reg = snd_soc_read(codec, WM9081_VMID_CONTROL); 821 822 reg &= ~WM9081_VMID_SEL_MASK; 822 823 reg |= 0x04; 823 824 snd_soc_write(codec, WM9081_VMID_CONTROL, reg); ··· 829 830 break; 830 831 831 832 case SND_SOC_BIAS_OFF: 832 - /* Startup bias source */ 833 + /* Startup bias source and disable bias */ 833 834 reg = snd_soc_read(codec, WM9081_BIAS_CONTROL_1); 834 835 reg |= WM9081_BIAS_SRC; 836 + reg &= ~WM9081_BIAS_ENA; 835 837 snd_soc_write(codec, WM9081_BIAS_CONTROL_1, reg); 836 838 837 - /* Disable VMID and biases with soft ramping */ 839 + /* Disable VMID with soft ramping */ 838 840 reg = snd_soc_read(codec, WM9081_VMID_CONTROL); 839 - reg &= ~(WM9081_VMID_SEL_MASK | WM9081_BIAS_ENA); 841 + reg &= ~WM9081_VMID_SEL_MASK; 840 842 reg |= WM9081_VMID_RAMP; 841 843 snd_soc_write(codec, WM9081_VMID_CONTROL, reg); 842 844
+3 -3
sound/soc/codecs/wm9090.c
··· 177 177 } 178 178 179 179 static const unsigned int in_tlv[] = { 180 - TLV_DB_RANGE_HEAD(6), 180 + TLV_DB_RANGE_HEAD(3), 181 181 0, 0, TLV_DB_SCALE_ITEM(-600, 0, 0), 182 182 1, 3, TLV_DB_SCALE_ITEM(-350, 350, 0), 183 183 4, 6, TLV_DB_SCALE_ITEM(600, 600, 0), 184 184 }; 185 185 static const unsigned int mix_tlv[] = { 186 - TLV_DB_RANGE_HEAD(4), 186 + TLV_DB_RANGE_HEAD(2), 187 187 0, 2, TLV_DB_SCALE_ITEM(-1200, 300, 0), 188 188 3, 3, TLV_DB_SCALE_ITEM(0, 0, 0), 189 189 }; 190 190 static const DECLARE_TLV_DB_SCALE(out_tlv, -5700, 100, 0); 191 191 static const unsigned int spkboost_tlv[] = { 192 - TLV_DB_RANGE_HEAD(7), 192 + TLV_DB_RANGE_HEAD(2), 193 193 0, 6, TLV_DB_SCALE_ITEM(0, 150, 0), 194 194 7, 7, TLV_DB_SCALE_ITEM(1200, 0, 0), 195 195 };
+1 -1
sound/soc/codecs/wm_hubs.c
··· 40 40 static const DECLARE_TLV_DB_SCALE(spkmixout_tlv, -1800, 600, 1); 41 41 static const DECLARE_TLV_DB_SCALE(outpga_tlv, -5700, 100, 0); 42 42 static const unsigned int spkboost_tlv[] = { 43 - TLV_DB_RANGE_HEAD(7), 43 + TLV_DB_RANGE_HEAD(2), 44 44 0, 6, TLV_DB_SCALE_ITEM(0, 150, 0), 45 45 7, 7, TLV_DB_SCALE_ITEM(1200, 0, 0), 46 46 };
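Several hunks above (wm8962, wm8993, wm9090, wm_hubs) correct `TLV_DB_RANGE_HEAD(n)` so the declared count matches the range entries that actually follow; an oversized count makes TLV consumers read past the end of the array. The sketch below is a simplified model of that layout (the `struct db_range_tlv` shape and `db_for_value` helper are hypothetical, not the ALSA TLV encoding itself), showing why the count must match:

```c
#include <stddef.h>

/* Simplified model of a dB-range TLV: a declared entry count plus one
 * { low, high, dB } record per range of control values. */
struct db_range { unsigned low, high; int db_centi; };

struct db_range_tlv {
    unsigned count;                 /* what TLV_DB_RANGE_HEAD(n) declares */
    const struct db_range *ranges;  /* must really have `count` records */
};

/* Look up the dB value for a control setting, trusting only `count`
 * entries, the way a TLV consumer walks the table. With a count larger
 * than the real table, this loop would read garbage records. */
static int db_for_value(const struct db_range_tlv *t, unsigned val, int *db)
{
    for (unsigned i = 0; i < t->count; i++) {
        if (val >= t->ranges[i].low && val <= t->ranges[i].high) {
            *db = t->ranges[i].db_centi;
            return 0;
        }
    }
    return -1; /* value not covered by any declared range */
}
```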
+1
sound/soc/fsl/fsl_ssi.c
··· 694 694 695 695 /* Initialize the device_attribute structure */ 696 696 dev_attr = &ssi_private->dev_attr; 697 + sysfs_attr_init(&dev_attr->attr); 697 698 dev_attr->attr.name = "statistics"; 698 699 dev_attr->attr.mode = S_IRUGO; 699 700 dev_attr->show = fsl_sysfs_ssi_show;
+16 -8
sound/soc/fsl/mpc8610_hpcd.c
··· 392 392 } 393 393 394 394 if (strcasecmp(sprop, "i2s-slave") == 0) { 395 - machine_data->dai_format = SND_SOC_DAIFMT_I2S; 395 + machine_data->dai_format = 396 + SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_CBM_CFM; 396 397 machine_data->codec_clk_direction = SND_SOC_CLOCK_OUT; 397 398 machine_data->cpu_clk_direction = SND_SOC_CLOCK_IN; 398 399 ··· 410 409 } 411 410 machine_data->clk_frequency = be32_to_cpup(iprop); 412 411 } else if (strcasecmp(sprop, "i2s-master") == 0) { 413 - machine_data->dai_format = SND_SOC_DAIFMT_I2S; 412 + machine_data->dai_format = 413 + SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_CBS_CFS; 414 414 machine_data->codec_clk_direction = SND_SOC_CLOCK_IN; 415 415 machine_data->cpu_clk_direction = SND_SOC_CLOCK_OUT; 416 416 } else if (strcasecmp(sprop, "lj-slave") == 0) { 417 - machine_data->dai_format = SND_SOC_DAIFMT_LEFT_J; 417 + machine_data->dai_format = 418 + SND_SOC_DAIFMT_LEFT_J | SND_SOC_DAIFMT_CBM_CFM; 418 419 machine_data->codec_clk_direction = SND_SOC_CLOCK_OUT; 419 420 machine_data->cpu_clk_direction = SND_SOC_CLOCK_IN; 420 421 } else if (strcasecmp(sprop, "lj-master") == 0) { 421 - machine_data->dai_format = SND_SOC_DAIFMT_LEFT_J; 422 + machine_data->dai_format = 423 + SND_SOC_DAIFMT_LEFT_J | SND_SOC_DAIFMT_CBS_CFS; 422 424 machine_data->codec_clk_direction = SND_SOC_CLOCK_IN; 423 425 machine_data->cpu_clk_direction = SND_SOC_CLOCK_OUT; 424 426 } else if (strcasecmp(sprop, "rj-slave") == 0) { 425 - machine_data->dai_format = SND_SOC_DAIFMT_RIGHT_J; 427 + machine_data->dai_format = 428 + SND_SOC_DAIFMT_RIGHT_J | SND_SOC_DAIFMT_CBM_CFM; 426 429 machine_data->codec_clk_direction = SND_SOC_CLOCK_OUT; 427 430 machine_data->cpu_clk_direction = SND_SOC_CLOCK_IN; 428 431 } else if (strcasecmp(sprop, "rj-master") == 0) { 429 - machine_data->dai_format = SND_SOC_DAIFMT_RIGHT_J; 432 + machine_data->dai_format = 433 + SND_SOC_DAIFMT_RIGHT_J | SND_SOC_DAIFMT_CBS_CFS; 430 434 machine_data->codec_clk_direction = SND_SOC_CLOCK_IN; 431 435 
machine_data->cpu_clk_direction = SND_SOC_CLOCK_OUT; 432 436 } else if (strcasecmp(sprop, "ac97-slave") == 0) { 433 - machine_data->dai_format = SND_SOC_DAIFMT_AC97; 437 + machine_data->dai_format = 438 + SND_SOC_DAIFMT_AC97 | SND_SOC_DAIFMT_CBM_CFM; 434 439 machine_data->codec_clk_direction = SND_SOC_CLOCK_OUT; 435 440 machine_data->cpu_clk_direction = SND_SOC_CLOCK_IN; 436 441 } else if (strcasecmp(sprop, "ac97-master") == 0) { 437 - machine_data->dai_format = SND_SOC_DAIFMT_AC97; 442 + machine_data->dai_format = 443 + SND_SOC_DAIFMT_AC97 | SND_SOC_DAIFMT_CBS_CFS; 438 444 machine_data->codec_clk_direction = SND_SOC_CLOCK_IN; 439 445 machine_data->cpu_clk_direction = SND_SOC_CLOCK_OUT; 440 446 } else {
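The mpc8610_hpcd hunks stop setting the interface format alone and instead OR in the clock-mastering half of the DAI format word (`SND_SOC_DAIFMT_CBM_CFM` for codec-master, `SND_SOC_DAIFMT_CBS_CFS` for codec-slave). The flag values and mask widths below are invented for illustration (the real `SND_SOC_DAIFMT_*` encoding lives in `include/sound/soc-dai.h`); the point is just the bitwise composition of two independent fields:

```c
/* Hypothetical encoding: low nibble carries the interface format, a
 * higher field says who masters the bit/frame clocks. */
enum { FMT_I2S = 1, FMT_LEFT_J = 2, FMT_RIGHT_J = 3 };
enum { CBM_CFM = 1 << 12, CBS_CFS = 2 << 12 }; /* codec master / slave */
#define FMT_MASK   0x000fu
#define CLOCK_MASK 0xf000u

/* Compose a full DAI format word from both halves, as the fixed driver
 * now does instead of passing the format bits alone. */
static unsigned dai_fmt(unsigned fmt, unsigned clocking)
{
    return fmt | clocking;
}
```

Both fields survive masking back out, which is exactly what the machine driver's consumers rely on when they split the word again.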
+1 -1
sound/soc/imx/Kconfig
··· 28 28 29 29 config SND_SOC_MX27VIS_AIC32X4 30 30 tristate "SoC audio support for Visstrim M10 boards" 31 - depends on MACH_IMX27_VISSTRIM_M10 31 + depends on MACH_IMX27_VISSTRIM_M10 && I2C 32 32 select SND_SOC_TLV320AIC32X4 33 33 select SND_MXC_SOC_MX2 34 34 help
+2 -1
sound/soc/kirkwood/Kconfig
··· 12 12 config SND_KIRKWOOD_SOC_OPENRD 13 13 tristate "SoC Audio support for Kirkwood Openrd Client" 14 14 depends on SND_KIRKWOOD_SOC && (MACH_OPENRD_CLIENT || MACH_OPENRD_ULTIMATE) 15 + depends on I2C 15 16 select SND_KIRKWOOD_SOC_I2S 16 17 select SND_SOC_CS42L51 17 18 help ··· 21 20 22 21 config SND_KIRKWOOD_SOC_T5325 23 22 tristate "SoC Audio support for HP t5325" 24 - depends on SND_KIRKWOOD_SOC && MACH_T5325 23 + depends on SND_KIRKWOOD_SOC && MACH_T5325 && I2C 25 24 select SND_KIRKWOOD_SOC_I2S 26 25 select SND_SOC_ALC5623 27 26 help
+3
sound/soc/mxs/mxs-pcm.c
··· 357 357 platform_driver_unregister(&mxs_pcm_driver); 358 358 } 359 359 module_exit(snd_mxs_pcm_exit); 360 + 361 + MODULE_LICENSE("GPL"); 362 + MODULE_ALIAS("platform:mxs-pcm-audio");
+1
sound/soc/mxs/mxs-sgtl5000.c
··· 171 171 MODULE_AUTHOR("Freescale Semiconductor, Inc."); 172 172 MODULE_DESCRIPTION("MXS ALSA SoC Machine driver"); 173 173 MODULE_LICENSE("GPL"); 174 + MODULE_ALIAS("platform:mxs-sgtl5000");
+2 -1
sound/soc/nuc900/nuc900-ac97.c
··· 365 365 if (ret) 366 366 goto out3; 367 367 368 - mfp_set_groupg(nuc900_audio->dev); /* enbale ac97 multifunction pin*/ 368 + /* enable ac97 multifunction pin */ 369 + mfp_set_groupg(nuc900_audio->dev, "nuc900-audio"); 369 370 370 371 return 0; 371 372
+2 -1
sound/soc/pxa/Kconfig
··· 151 151 config SND_SOC_RAUMFELD 152 152 tristate "SoC Audio support Raumfeld audio adapter" 153 153 depends on SND_PXA2XX_SOC && (MACH_RAUMFELD_SPEAKER || MACH_RAUMFELD_CONNECTOR) 154 + depends on I2C && SPI_MASTER 154 155 select SND_PXA_SOC_SSP 155 156 select SND_SOC_CS4270 156 157 select SND_SOC_AK4104 ··· 160 159 161 160 config SND_PXA2XX_SOC_HX4700 162 161 tristate "SoC Audio support for HP iPAQ hx4700" 163 - depends on SND_PXA2XX_SOC && MACH_H4700 162 + depends on SND_PXA2XX_SOC && MACH_H4700 && I2C 164 163 select SND_PXA2XX_SOC_I2S 165 164 select SND_SOC_AK4641 166 165 help
+3 -2
sound/soc/pxa/hx4700.c
··· 209 209 snd_soc_card_hx4700.dev = &pdev->dev; 210 210 ret = snd_soc_register_card(&snd_soc_card_hx4700); 211 211 if (ret) 212 - return ret; 212 + gpio_free_array(hx4700_audio_gpios, 213 + ARRAY_SIZE(hx4700_audio_gpios)); 213 214 214 - return 0; 215 + return ret; 215 216 } 216 217 217 218 static int __devexit hx4700_audio_remove(struct platform_device *pdev)
+1 -2
sound/soc/samsung/jive_wm8750.c
··· 101 101 { 102 102 struct snd_soc_codec *codec = rtd->codec; 103 103 struct snd_soc_dapm_context *dapm = &codec->dapm; 104 - int err; 105 104 106 105 /* These endpoints are not being used. */ 107 106 snd_soc_dapm_nc_pin(dapm, "LINPUT2"); ··· 130 131 .dai_link = &jive_dai, 131 132 .num_links = 1, 132 133 133 - .dapm_widgtets = wm8750_dapm_widgets, 134 + .dapm_widgets = wm8750_dapm_widgets, 134 135 .num_dapm_widgets = ARRAY_SIZE(wm8750_dapm_widgets), 135 136 .dapm_routes = audio_map, 136 137 .num_dapm_routes = ARRAY_SIZE(audio_map),
+1
sound/soc/samsung/smdk2443_wm9710.c
··· 12 12 * 13 13 */ 14 14 15 + #include <linux/module.h> 15 16 #include <sound/soc.h> 16 17 17 18 static struct snd_soc_card smdk2443;
+1
sound/soc/samsung/smdk_wm8994.c
··· 9 9 10 10 #include "../codecs/wm8994.h" 11 11 #include <sound/pcm_params.h> 12 + #include <linux/module.h> 12 13 13 14 /* 14 15 * Default CFG switch settings to use this driver:
+1 -1
sound/soc/samsung/speyside.c
··· 191 191 snd_soc_dapm_ignore_suspend(&card->dapm, "Headset Mic"); 192 192 snd_soc_dapm_ignore_suspend(&card->dapm, "Main AMIC"); 193 193 snd_soc_dapm_ignore_suspend(&card->dapm, "Main DMIC"); 194 - snd_soc_dapm_ignore_suspend(&card->dapm, "Speaker"); 194 + snd_soc_dapm_ignore_suspend(&card->dapm, "Main Speaker"); 195 195 snd_soc_dapm_ignore_suspend(&card->dapm, "WM1250 Output"); 196 196 snd_soc_dapm_ignore_suspend(&card->dapm, "WM1250 Input"); 197 197
+6
sound/soc/soc-core.c
··· 709 709 struct snd_soc_card *card = dev_get_drvdata(dev); 710 710 int i, ac97_control = 0; 711 711 712 + /* If the initialization of this soc device failed, there is no codec 713 + * associated with it. Just bail out in this case. 714 + */ 715 + if (list_empty(&card->codec_dev_list)) 716 + return 0; 717 + 712 718 /* AC97 devices might have other drivers hanging off them so 713 719 * need to resume immediately. Other drivers don't have that 714 720 * problem and may take a substantial amount of time to resume
+30 -1
sound/soc/soc-utils.c
··· 58 58 } 59 59 EXPORT_SYMBOL_GPL(snd_soc_params_to_bclk); 60 60 61 - static struct snd_soc_platform_driver dummy_platform; 61 + static const struct snd_pcm_hardware dummy_dma_hardware = { 62 + .formats = 0xffffffff, 63 + .channels_min = 1, 64 + .channels_max = UINT_MAX, 65 + 66 + /* Random values to keep userspace happy when checking constraints */ 67 + .info = SNDRV_PCM_INFO_INTERLEAVED | 68 + SNDRV_PCM_INFO_BLOCK_TRANSFER, 69 + .buffer_bytes_max = 128*1024, 70 + .period_bytes_min = PAGE_SIZE, 71 + .period_bytes_max = PAGE_SIZE*2, 72 + .periods_min = 2, 73 + .periods_max = 128, 74 + }; 75 + 76 + static int dummy_dma_open(struct snd_pcm_substream *substream) 77 + { 78 + snd_soc_set_runtime_hwparams(substream, &dummy_dma_hardware); 79 + 80 + return 0; 81 + } 82 + 83 + static struct snd_pcm_ops dummy_dma_ops = { 84 + .open = dummy_dma_open, 85 + .ioctl = snd_pcm_lib_ioctl, 86 + }; 87 + 88 + static struct snd_soc_platform_driver dummy_platform = { 89 + .ops = &dummy_dma_ops, 90 + }; 62 91 63 92 static __devinit int snd_soc_dummy_probe(struct platform_device *pdev) 64 93 {
+31
sound/usb/quirks-table.h
··· 1633 1633 } 1634 1634 }, 1635 1635 { 1636 + /* Roland GAIA SH-01 */ 1637 + USB_DEVICE(0x0582, 0x0111), 1638 + .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) { 1639 + .vendor_name = "Roland", 1640 + .product_name = "GAIA", 1641 + .ifnum = QUIRK_ANY_INTERFACE, 1642 + .type = QUIRK_COMPOSITE, 1643 + .data = (const struct snd_usb_audio_quirk[]) { 1644 + { 1645 + .ifnum = 0, 1646 + .type = QUIRK_AUDIO_STANDARD_INTERFACE 1647 + }, 1648 + { 1649 + .ifnum = 1, 1650 + .type = QUIRK_AUDIO_STANDARD_INTERFACE 1651 + }, 1652 + { 1653 + .ifnum = 2, 1654 + .type = QUIRK_MIDI_FIXED_ENDPOINT, 1655 + .data = &(const struct snd_usb_midi_endpoint_info) { 1656 + .out_cables = 0x0003, 1657 + .in_cables = 0x0003 1658 + } 1659 + }, 1660 + { 1661 + .ifnum = -1 1662 + } 1663 + } 1664 + } 1665 + }, 1666 + { 1636 1667 USB_DEVICE(0x0582, 0x0113), 1637 1668 .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) { 1638 1669 /* .vendor_name = "BOSS", */
+2 -1
tools/perf/builtin-stat.c
··· 463 463 464 464 list_for_each_entry(counter, &evsel_list->entries, node) { 465 465 if (create_perf_stat_counter(counter, first) < 0) { 466 - if (errno == EINVAL || errno == ENOSYS || errno == ENOENT) { 466 + if (errno == EINVAL || errno == ENOSYS || 467 + errno == ENOENT || errno == EOPNOTSUPP) { 467 468 if (verbose) 468 469 ui__warning("%s event is not supported by the kernel.\n", 469 470 event_name(counter));
+10
tools/perf/util/evsel.c
··· 34 34 return size; 35 35 } 36 36 37 + static void hists__init(struct hists *hists) 38 + { 39 + memset(hists, 0, sizeof(*hists)); 40 + hists->entries_in_array[0] = hists->entries_in_array[1] = RB_ROOT; 41 + hists->entries_in = &hists->entries_in_array[0]; 42 + hists->entries_collapsed = RB_ROOT; 43 + hists->entries = RB_ROOT; 44 + pthread_mutex_init(&hists->lock, NULL); 45 + } 46 + 37 47 void perf_evsel__init(struct perf_evsel *evsel, 38 48 struct perf_event_attr *attr, int idx) 39 49 {
+1 -1
tools/perf/util/header.c
··· 388 388 /* 389 389 * write event string as passed on cmdline 390 390 */ 391 - ret = do_write_string(fd, attr->name); 391 + ret = do_write_string(fd, event_name(attr)); 392 392 if (ret < 0) 393 393 return ret; 394 394 /*
-10
tools/perf/util/hist.c
··· 1211 1211 1212 1212 return ret; 1213 1213 } 1214 - 1215 - void hists__init(struct hists *hists) 1216 - { 1217 - memset(hists, 0, sizeof(*hists)); 1218 - hists->entries_in_array[0] = hists->entries_in_array[1] = RB_ROOT; 1219 - hists->entries_in = &hists->entries_in_array[0]; 1220 - hists->entries_collapsed = RB_ROOT; 1221 - hists->entries = RB_ROOT; 1222 - pthread_mutex_init(&hists->lock, NULL); 1223 - }
-2
tools/perf/util/hist.h
··· 63 63 struct callchain_cursor callchain_cursor; 64 64 }; 65 65 66 - void hists__init(struct hists *hists); 67 - 68 66 struct hist_entry *__hists__add_entry(struct hists *self, 69 67 struct addr_location *al, 70 68 struct symbol *parent, u64 period);
+4
tools/perf/util/session.c
··· 1333 1333 } 1334 1334 1335 1335 map = cpu_map__new(cpu_list); 1336 + if (map == NULL) { 1337 + pr_err("Invalid cpu_list\n"); 1338 + return -1; 1339 + } 1336 1340 1337 1341 for (i = 0; i < map->nr; i++) { 1338 1342 int cpu = map->map[i];
+2
tools/perf/util/trace-event-parse.c
··· 1537 1537 field = malloc_or_die(sizeof(*field)); 1538 1538 1539 1539 type = process_arg(event, field, &token); 1540 + while (type == EVENT_OP) 1541 + type = process_op(event, field, &token); 1540 1542 if (test_type_token(type, token, EVENT_DELIM, ",")) 1541 1543 goto out_free; 1542 1544