···
 <itemizedlist>
 <listitem><para>
+<varname>const char *name</varname>: Optional. Set this to help identify
+the memory region, it will show up in the corresponding sysfs node.
+</para></listitem>
+
+<listitem><para>
 <varname>int memtype</varname>: Required if the mapping is used. Set this to
 <varname>UIO_MEM_PHYS</varname> if you have physical memory on your
 card to be mapped. Use <varname>UIO_MEM_LOGICAL</varname> for logical
···
 </itemizedlist>

 <para>
-Please do not touch the <varname>kobj</varname> element of
+Please do not touch the <varname>map</varname> element of
 <varname>struct uio_mem</varname>! It is used by the UIO framework
 to set up sysfs files for this mapping. Simply leave it alone.
 </para>
···
 Userspace tools for creating and manipulating Btrfs file systems are
 available from the git repository at the following location:

-  http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-progs-unstable.git
-  git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs-unstable.git
+  http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-progs.git
+  git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git

 These include the following tools:
Documentation/i2c/ten-bit-addresses | +19 -17
···
 The I2C protocol knows about two kinds of device addresses: normal 7 bit
 addresses, and an extended set of 10 bit addresses. The sets of addresses
 do not intersect: the 7 bit address 0x10 is not the same as the 10 bit
-address 0x10 (though a single device could respond to both of them). You
-select a 10 bit address by adding an extra byte after the address
-byte:
-  S Addr7 Rd/Wr ....
-becomes
-  S 11110 Addr10 Rd/Wr
-S is the start bit, Rd/Wr the read/write bit, and if you count the number
-of bits, you will see the there are 8 after the S bit for 7 bit addresses,
-and 16 after the S bit for 10 bit addresses.
+address 0x10 (though a single device could respond to both of them).

-WARNING! The current 10 bit address support is EXPERIMENTAL. There are
-several places in the code that will cause SEVERE PROBLEMS with 10 bit
-addresses, even though there is some basic handling and hooks. Also,
-almost no supported adapter handles the 10 bit addresses correctly.
+I2C messages to and from 10-bit address devices have a different format.
+See the I2C specification for the details.

-As soon as a real 10 bit address device is spotted 'in the wild', we
-can and will add proper support. Right now, 10 bit address devices
-are defined by the I2C protocol, but we have never seen a single device
-which supports them.
+The current 10 bit address support is minimal. It should work, however
+you can expect some problems along the way:
+* Not all bus drivers support 10-bit addresses. Some don't because the
+  hardware doesn't support them (SMBus doesn't require 10-bit address
+  support for example), some don't because nobody bothered adding the
+  code (or it's there but not working properly.) Software implementation
+  (i2c-algo-bit) is known to work.
+* Some optional features do not support 10-bit addresses. This is the
+  case of automatic detection and instantiation of devices by their
+  drivers, for example.
+* Many user-space packages (for example i2c-tools) lack support for
+  10-bit addresses.
+
+Note that 10-bit address devices are still pretty rare, so the limitations
+listed above could stay for a long time, maybe even forever if nobody
+needs them to be fixed.
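From user space, a message is marked as addressing a 10-bit slave by setting the I2C_M_TEN flag in struct i2c_msg before handing it to the I2C_RDWR ioctl. The sketch below is a minimal userspace model of that step: the flag values and field names mirror include/uapi/linux/i2c.h, but the struct is redeclared locally so the example builds outside the kernel tree, and the address 0x10 is just an illustration.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Values mirror include/uapi/linux/i2c.h. */
#define I2C_M_TEN 0x0010  /* this is a ten-bit chip address */
#define I2C_M_RD  0x0001  /* read data, from slave to master */

/* Trimmed stand-in for the kernel's struct i2c_msg. */
struct i2c_msg {
	uint16_t addr;   /* slave address, 7- or 10-bit */
	uint16_t flags;
	uint16_t len;    /* message length in bytes */
	uint8_t *buf;    /* pointer to message data */
};

/* Build a read message for a 10-bit slave. Without I2C_M_TEN, 0x10
 * would be interpreted as the (distinct) 7-bit address 0x10. */
static struct i2c_msg ten_bit_read(uint16_t addr, uint8_t *buf, uint16_t len)
{
	struct i2c_msg msg = {
		.addr  = addr,
		.flags = I2C_M_TEN | I2C_M_RD,
		.len   = len,
		.buf   = buf,
	};
	return msg;
}
```

On a real system the resulting message would be passed to ioctl(fd, I2C_RDWR, ...) on /dev/i2c-N, subject to the bus-driver limitations listed above.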
Documentation/power/devices.txt | +66 -39
···
 Subsystem-Level Methods
 -----------------------
 The core methods to suspend and resume devices reside in struct dev_pm_ops
-pointed to by the pm member of struct bus_type, struct device_type and
-struct class.  They are mostly of interest to the people writing infrastructure
-for buses, like PCI or USB, or device type and device class drivers.
+pointed to by the ops member of struct dev_pm_domain, or by the pm member of
+struct bus_type, struct device_type and struct class.  They are mostly of
+interest to the people writing infrastructure for platforms and buses, like PCI
+or USB, or device type and device class drivers.

 Bus drivers implement these methods as appropriate for the hardware and the
 drivers using it; PCI works differently from USB, and so on.  Not many people
···

 /sys/devices/.../power/wakeup files
 -----------------------------------
-All devices in the driver model have two flags to control handling of wakeup
-events (hardware signals that can force the device and/or system out of a low
-power state).  These flags are initialized by bus or device driver code using
+All device objects in the driver model contain fields that control the handling
+of system wakeup events (hardware signals that can force the system out of a
+sleep state).  These fields are initialized by bus or device driver code using
 device_set_wakeup_capable() and device_set_wakeup_enable(), defined in
 include/linux/pm_wakeup.h.

-The "can_wakeup" flag just records whether the device (and its driver) can
+The "power.can_wakeup" flag just records whether the device (and its driver) can
 physically support wakeup events.  The device_set_wakeup_capable() routine
-affects this flag.  The "should_wakeup" flag controls whether the device should
-try to use its wakeup mechanism.  device_set_wakeup_enable() affects this flag;
-for the most part drivers should not change its value.  The initial value of
-should_wakeup is supposed to be false for the majority of devices; the major
-exceptions are power buttons, keyboards, and Ethernet adapters whose WoL
-(wake-on-LAN) feature has been set up with ethtool.  It should also default
-to true for devices that don't generate wakeup requests on their own but merely
-forward wakeup requests from one bus to another (like PCI bridges).
+affects this flag.  The "power.wakeup" field is a pointer to an object of type
+struct wakeup_source used for controlling whether or not the device should use
+its system wakeup mechanism and for notifying the PM core of system wakeup
+events signaled by the device.  This object is only present for wakeup-capable
+devices (i.e. devices whose "can_wakeup" flags are set) and is created (or
+removed) by device_set_wakeup_capable().

 Whether or not a device is capable of issuing wakeup events is a hardware
 matter, and the kernel is responsible for keeping track of it.  By contrast,
 whether or not a wakeup-capable device should issue wakeup events is a policy
 decision, and it is managed by user space through a sysfs attribute: the
-power/wakeup file.  User space can write the strings "enabled" or "disabled" to
-set or clear the "should_wakeup" flag, respectively.  This file is only present
-for wakeup-capable devices (i.e. devices whose "can_wakeup" flags are set)
-and is created (or removed) by device_set_wakeup_capable().  Reads from the
-file will return the corresponding string.
+"power/wakeup" file.  User space can write the strings "enabled" or "disabled"
+to it to indicate whether or not, respectively, the device is supposed to signal
+system wakeup.  This file is only present if the "power.wakeup" object exists
+for the given device and is created (or removed) along with that object, by
+device_set_wakeup_capable().  Reads from the file will return the corresponding
+string.

-The device_may_wakeup() routine returns true only if both flags are set.
+The "power/wakeup" file is supposed to contain the "disabled" string initially
+for the majority of devices; the major exceptions are power buttons, keyboards,
+and Ethernet adapters whose WoL (wake-on-LAN) feature has been set up with
+ethtool.  It should also default to "enabled" for devices that don't generate
+wakeup requests on their own but merely forward wakeup requests from one bus to
+another (like PCI Express ports).
+
+The device_may_wakeup() routine returns true only if the "power.wakeup" object
+exists and the corresponding "power/wakeup" file contains the string "enabled".
 This information is used by subsystems, like the PCI bus type code, to see
 whether or not to enable the devices' wakeup mechanisms.  If device wakeup
 mechanisms are enabled or disabled directly by drivers, they also should use
 device_may_wakeup() to decide what to do during a system sleep transition.
-However for runtime power management, wakeup events should be enabled whenever
-the device and driver both support them, regardless of the should_wakeup flag.
+Device drivers, however, are not supposed to call device_set_wakeup_enable()
+directly in any case.

+It ought to be noted that system wakeup is conceptually different from "remote
+wakeup" used by runtime power management, although it may be supported by the
+same physical mechanism.  Remote wakeup is a feature allowing devices in
+low-power states to trigger specific interrupts to signal conditions in which
+they should be put into the full-power state.  Those interrupts may or may not
+be used to signal system wakeup events, depending on the hardware design.  On
+some systems it is impossible to trigger them from system sleep states.  In any
+case, remote wakeup should always be enabled for runtime power management for
+all devices and drivers that support it.

 /sys/devices/.../power/control files
 ------------------------------------
···
 support all these callbacks and not all drivers use all the callbacks.  The
 various phases always run after tasks have been frozen and before they are
 unfrozen.  Furthermore, the *_noirq phases run at a time when IRQ handlers have
-been disabled (except for those marked with the IRQ_WAKEUP flag).
+been disabled (except for those marked with the IRQF_NO_SUSPEND flag).

-All phases use bus, type, or class callbacks (that is, methods defined in
-dev->bus->pm, dev->type->pm, or dev->class->pm).  These callbacks are mutually
-exclusive, so if the device type provides a struct dev_pm_ops object pointed to
-by its pm field (i.e. both dev->type and dev->type->pm are defined), the
-callbacks included in that object (i.e. dev->type->pm) will be used.  Otherwise,
-if the class provides a struct dev_pm_ops object pointed to by its pm field
-(i.e. both dev->class and dev->class->pm are defined), the PM core will use the
-callbacks from that object (i.e. dev->class->pm).  Finally, if the pm fields of
-both the device type and class objects are NULL (or those objects do not exist),
-the callbacks provided by the bus (that is, the callbacks from dev->bus->pm)
-will be used (this allows device types to override callbacks provided by bus
-types or classes if necessary).
+All phases use PM domain, bus, type, or class callbacks (that is, methods
+defined in dev->pm_domain->ops, dev->bus->pm, dev->type->pm, or dev->class->pm).
+These callbacks are regarded by the PM core as mutually exclusive.  Moreover,
+PM domain callbacks always take precedence over bus, type and class callbacks,
+while type callbacks take precedence over bus and class callbacks, and class
+callbacks take precedence over bus callbacks.  To be precise, the following
+rules are used to determine which callback to execute in the given phase:
+
+  1. If dev->pm_domain is present, the PM core will attempt to execute the
+     callback included in dev->pm_domain->ops.  If that callback is not
+     present, no action will be carried out for the given device.
+
+  2. Otherwise, if both dev->type and dev->type->pm are present, the callback
+     included in dev->type->pm will be executed.
+
+  3. Otherwise, if both dev->class and dev->class->pm are present, the
+     callback included in dev->class->pm will be executed.
+
+  4. Otherwise, if both dev->bus and dev->bus->pm are present, the callback
+     included in dev->bus->pm will be executed.
+
+This allows PM domains and device types to override callbacks provided by bus
+types or device classes if necessary.

 These callbacks may in turn invoke device- or driver-specific methods stored in
 dev->driver->pm, but they don't have to.
···
	After the prepare callback method returns, no new children may be
	registered below the device.  The method may also prepare the device or
-	driver in some way for the upcoming system power transition (for
-	example, by allocating additional memory required for this purpose), but
-	it should not put the device into a low-power state.
+	driver in some way for the upcoming system power transition, but it
+	should not put the device into a low-power state.

     2.	The suspend methods should quiesce the device to stop it from performing
	I/O.  They also may save the device registers and put it into the
Documentation/power/runtime_pm.txt | +24 -16
···
 };

 The ->runtime_suspend(), ->runtime_resume() and ->runtime_idle() callbacks
-are executed by the PM core for either the power domain, or the device type
-(if the device power domain's struct dev_pm_ops does not exist), or the class
-(if the device power domain's and type's struct dev_pm_ops object does not
-exist), or the bus type (if the device power domain's, type's and class'
-struct dev_pm_ops objects do not exist) of the given device, so the priority
-order of callbacks from high to low is that power domain callbacks, device
-type callbacks, class callbacks and bus type callbacks, and the high priority
-one will take precedence over low priority one.  The bus type, device type and
-class callbacks are referred to as subsystem-level callbacks in what follows,
-and generally speaking, the power domain callbacks are used for representing
-power domains within a SoC.
+are executed by the PM core for the device's subsystem that may be either of
+the following:
+
+  1. PM domain of the device, if the device's PM domain object, dev->pm_domain,
+     is present.
+
+  2. Device type of the device, if both dev->type and dev->type->pm are present.
+
+  3. Device class of the device, if both dev->class and dev->class->pm are
+     present.
+
+  4. Bus type of the device, if both dev->bus and dev->bus->pm are present.
+
+The PM core always checks which callback to use in the order given above, so the
+priority order of callbacks from high to low is: PM domain, device type, class
+and bus type.  Moreover, the high-priority one will always take precedence over
+a low-priority one.  The PM domain, bus type, device type and class callbacks
+are referred to as subsystem-level callbacks in what follows.

 By default, the callbacks are always invoked in process context with interrupts
 enabled.  However, subsystems can use the pm_runtime_irq_safe() helper function
-to tell the PM core that a device's ->runtime_suspend() and ->runtime_resume()
-callbacks should be invoked in atomic context with interrupts disabled.
-This implies that these callback routines must not block or sleep, but it also
-means that the synchronous helper functions listed at the end of Section 4 can
-be used within an interrupt handler or in an atomic context.
+to tell the PM core that their ->runtime_suspend(), ->runtime_resume() and
+->runtime_idle() callbacks may be invoked in atomic context with interrupts
+disabled for a given device.  This implies that the callback routines in
+question must not block or sleep, but it also means that the synchronous helper
+functions listed at the end of Section 4 may be used for that device within an
+interrupt handler or generally in an atomic context.

 The subsystem-level suspend callback is _entirely_ _responsible_ for handling
 the suspend of the device as appropriate, which may, but need not include
Documentation/serial/serial-rs485.txt | +11 -3
···

	struct serial_rs485 rs485conf;

-	/* Set RS485 mode: */
+	/* Enable RS485 mode: */
	rs485conf.flags |= SER_RS485_ENABLED;

+	/* Set logical level for RTS pin equal to 1 when sending: */
+	rs485conf.flags |= SER_RS485_RTS_ON_SEND;
+	/* or, set logical level for RTS pin equal to 0 when sending: */
+	rs485conf.flags &= ~(SER_RS485_RTS_ON_SEND);
+
+	/* Set logical level for RTS pin equal to 1 after sending: */
+	rs485conf.flags |= SER_RS485_RTS_AFTER_SEND;
+	/* or, set logical level for RTS pin equal to 0 after sending: */
+	rs485conf.flags &= ~(SER_RS485_RTS_AFTER_SEND);
+
	/* Set rts delay before send, if needed: */
-	rs485conf.flags |= SER_RS485_RTS_BEFORE_SEND;
	rs485conf.delay_rts_before_send = ...;

	/* Set rts delay after send, if needed: */
-	rs485conf.flags |= SER_RS485_RTS_AFTER_SEND;
	rs485conf.delay_rts_after_send = ...;

	/* Set this flag if you want to receive data even whilst sending data */
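The updated example above is a fragment; a complete configuration routine might look like the sketch below. The flag names and bit values mirror include/uapi/linux/serial.h, but the struct is a trimmed local stand-in so the sketch builds anywhere; on a real Linux system you would instead declare struct serial_rs485 from <linux/serial.h>, fill it the same way, and hand it to ioctl(fd, TIOCSRS485, &conf) on an open serial device.

```c
#include <assert.h>
#include <stdint.h>

/* Bit values mirror include/uapi/linux/serial.h. */
#define SER_RS485_ENABLED        (1 << 0)
#define SER_RS485_RTS_ON_SEND    (1 << 1)  /* RTS level while sending */
#define SER_RS485_RTS_AFTER_SEND (1 << 2)  /* RTS level after sending */

/* Trimmed stand-in for the kernel's struct serial_rs485. */
struct rs485_conf {
	uint32_t flags;
	uint32_t delay_rts_before_send;  /* delay in milliseconds */
	uint32_t delay_rts_after_send;   /* delay in milliseconds */
};

/* Build the configuration described in the example above: RS485 enabled,
 * RTS driven to logical 1 while sending and to logical 0 afterwards. */
static struct rs485_conf rs485_active_high_rts(uint32_t delay_before_ms,
					       uint32_t delay_after_ms)
{
	struct rs485_conf conf = {0};

	conf.flags |= SER_RS485_ENABLED;
	conf.flags |= SER_RS485_RTS_ON_SEND;      /* RTS = 1 when sending */
	conf.flags &= ~SER_RS485_RTS_AFTER_SEND;  /* RTS = 0 after sending */
	conf.delay_rts_before_send = delay_before_ms;
	conf.delay_rts_after_send = delay_after_ms;
	return conf;
}
```

The key point of the patch is that RTS polarity is now expressed by the two SER_RS485_RTS_*_SEND flags, while the delay fields stand on their own instead of being gated by flags.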
MAINTAINERS | +17 -2
···
 S:	Maintained
 T:	git git://git.pengutronix.de/git/imx/linux-2.6.git
 F:	arch/arm/mach-mx*/
+F:	arch/arm/mach-imx/
 F:	arch/arm/plat-mxc/

 ARM/FREESCALE IMX51
···
 S:	Maintained
 T:	git git://git.linaro.org/people/shawnguo/linux-2.6.git
 F:	arch/arm/mach-imx/*imx6*
+
+ARM/FREESCALE MXS ARM ARCHITECTURE
+M:	Shawn Guo <shawn.guo@linaro.org>
+L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+S:	Maintained
+T:	git git://git.linaro.org/people/shawnguo/linux-2.6.git
+F:	arch/arm/mach-mxs/

 ARM/GLOMATION GESBC9312SX MACHINE SUPPORT
 M:	Lennert Buytenhek <kernel@wantstofly.org>
···
 F:	include/net/cfg80211.h
 F:	net/wireless/*
 X:	net/wireless/wext*
+
+CHAR and MISC DRIVERS
+M:	Arnd Bergmann <arnd@arndb.de>
+M:	Greg Kroah-Hartman <greg@kroah.com>
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git
+S:	Maintained
+F:	drivers/char/*
+F:	drivers/misc/*

 CHECKPATCH
 M:	Andy Whitcroft <apw@canonical.com>
···
 F:	include/linux/jbd2.h

 JSM Neo PCI based serial card
-M:	Breno Leitao <leitao@linux.vnet.ibm.com>
+M:	Lucas Tavares <lucaskt@linux.vnet.ibm.com>
 L:	linux-serial@vger.kernel.org
 S:	Maintained
 F:	drivers/tty/serial/jsm/
···
 F:	include/media/*7146*

 SAMSUNG AUDIO (ASoC) DRIVERS
-M:	Jassi Brar <jassisinghbrar@gmail.com>
 M:	Sangbeom Kim <sbkim73@samsung.com>
 L:	alsa-devel@alsa-project.org (moderated for non-subscribers)
 S:	Supported
···
	  capabilities of the processor.

 config PL310_ERRATA_588369
-	bool "Clean & Invalidate maintenance operations do not invalidate clean lines"
+	bool "PL310 errata: Clean & Invalidate maintenance operations do not invalidate clean lines"
	depends on CACHE_L2X0
	help
	  The PL310 L2 cache controller implements three types of Clean &
···
	  entries regardless of the ASID.

 config PL310_ERRATA_727915
-	bool "Background Clean & Invalidate by Way operation can cause data corruption"
+	bool "PL310 errata: Background Clean & Invalidate by Way operation can cause data corruption"
	depends on CACHE_L2X0
	help
	  PL310 implements the Clean & Invalidate by Way L2 cache maintenance
···
	  operation is received by a CPU before the ICIALLUIS has completed,
	  potentially leading to corrupted entries in the cache or TLB.

-config ARM_ERRATA_753970
-	bool "ARM errata: cache sync operation may be faulty"
+config PL310_ERRATA_753970
+	bool "PL310 errata: cache sync operation may be faulty"
	depends on CACHE_PL310
	help
	  This option enables the workaround for the 753970 PL310 (r3p0) erratum.
···
	  system.  This workaround adds a DSB instruction before the
	  relevant cache maintenance functions and sets a specific bit
	  in the diagnostic control register of the SCU.
+
+config PL310_ERRATA_769419
+	bool "PL310 errata: no automatic Store Buffer drain"
+	depends on CACHE_L2X0
+	help
+	  On revisions of the PL310 prior to r3p2, the Store Buffer does
+	  not automatically drain.  This can cause normal, non-cacheable
+	  writes to be retained when the memory system is idle, leading
+	  to suboptimal I/O performance for drivers using coherent DMA.
+	  This option adds a write barrier to the cpu_idle loop so that,
+	  on systems with an outer cache, the store buffer is drained
+	  explicitly.

 endmenu
arch/arm/common/gic.c | +10 -6
···
				    sizeof(u32));
	BUG_ON(!gic->saved_ppi_conf);

-	cpu_pm_register_notifier(&gic_notifier_block);
+	if (gic == &gic_data[0])
+		cpu_pm_register_notifier(&gic_notifier_block);
 }
 #else
 static void __init gic_pm_init(struct gic_chip_data *gic)
···
	 * For primary GICs, skip over SGIs.
	 * For secondary GICs, skip over PPIs, too.
	 */
+	domain->hwirq_base = 32;
	if (gic_nr == 0) {
		gic_cpu_base_addr = cpu_base;
-		domain->hwirq_base = 16;
-		if (irq_start > 0)
-			irq_start = (irq_start & ~31) + 16;
-	} else
-		domain->hwirq_base = 32;
+
+		if ((irq_start & 31) > 0) {
+			domain->hwirq_base = 16;
+			if (irq_start != -1)
+				irq_start = (irq_start & ~31) + 16;
+		}
+	}

	/*
	 * Find out how many interrupts are supported.
···
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
 CONFIG_ARCH_AT91=y
-CONFIG_ARCH_AT91CAP9=y
-CONFIG_MACH_AT91CAP9ADK=y
-CONFIG_MTD_AT91_DATAFLASH_CARD=y
+CONFIG_ARCH_AT91SAM9RL=y
+CONFIG_MACH_AT91SAM9RLEK=y
 CONFIG_AT91_PROGRAMMABLE_CLOCKS=y
 # CONFIG_ARM_THUMB is not set
-CONFIG_AEABI=y
-CONFIG_LEDS=y
-CONFIG_LEDS_CPU=y
 CONFIG_ZBOOT_ROM_TEXT=0x0
 CONFIG_ZBOOT_ROM_BSS=0x0
-CONFIG_CMDLINE="console=ttyS0,115200 root=/dev/ram0 rw"
+CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,17105363 root=/dev/ram0 rw"
 CONFIG_FPE_NWFPE=y
 CONFIG_NET=y
-CONFIG_PACKET=y
 CONFIG_UNIX=y
-CONFIG_INET=y
-CONFIG_IP_PNP=y
-CONFIG_IP_PNP_BOOTP=y
-CONFIG_IP_PNP_RARP=y
-# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
-# CONFIG_INET_XFRM_MODE_TUNNEL is not set
-# CONFIG_INET_XFRM_MODE_BEET is not set
-# CONFIG_INET_LRO is not set
-# CONFIG_INET_DIAG is not set
-# CONFIG_IPV6 is not set
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
 CONFIG_MTD_CMDLINE_PARTS=y
 CONFIG_MTD_CHAR=y
 CONFIG_MTD_BLOCK=y
-CONFIG_MTD_CFI=y
-CONFIG_MTD_JEDECPROBE=y
-CONFIG_MTD_CFI_AMDSTD=y
-CONFIG_MTD_PHYSMAP=y
 CONFIG_MTD_DATAFLASH=y
 CONFIG_MTD_NAND=y
 CONFIG_MTD_NAND_ATMEL=y
 CONFIG_BLK_DEV_LOOP=y
 CONFIG_BLK_DEV_RAM=y
-CONFIG_BLK_DEV_RAM_SIZE=8192
-CONFIG_ATMEL_SSC=y
+CONFIG_BLK_DEV_RAM_COUNT=4
+CONFIG_BLK_DEV_RAM_SIZE=24576
 CONFIG_SCSI=y
 CONFIG_BLK_DEV_SD=y
 CONFIG_SCSI_MULTI_LUN=y
-CONFIG_NETDEVICES=y
-CONFIG_NET_ETHERNET=y
-CONFIG_MII=y
-CONFIG_MACB=y
-# CONFIG_NETDEV_1000 is not set
-# CONFIG_NETDEV_10000 is not set
 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=320
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=240
 CONFIG_INPUT_EVDEV=y
 # CONFIG_INPUT_KEYBOARD is not set
 # CONFIG_INPUT_MOUSE is not set
 CONFIG_INPUT_TOUCHSCREEN=y
-CONFIG_TOUCHSCREEN_ADS7846=y
+CONFIG_TOUCHSCREEN_ATMEL_TSADCC=y
 # CONFIG_SERIO is not set
 CONFIG_SERIAL_ATMEL=y
 CONFIG_SERIAL_ATMEL_CONSOLE=y
-CONFIG_HW_RANDOM=y
+# CONFIG_HW_RANDOM is not set
 CONFIG_I2C=y
 CONFIG_I2C_CHARDEV=y
+CONFIG_I2C_GPIO=y
 CONFIG_SPI=y
 CONFIG_SPI_ATMEL=y
 # CONFIG_HWMON is not set
 CONFIG_WATCHDOG=y
 CONFIG_WATCHDOG_NOWAYOUT=y
+CONFIG_AT91SAM9X_WATCHDOG=y
 CONFIG_FB=y
 CONFIG_FB_ATMEL=y
-# CONFIG_VGA_CONSOLE is not set
-CONFIG_LOGO=y
-# CONFIG_LOGO_LINUX_MONO is not set
-# CONFIG_LOGO_LINUX_CLUT224 is not set
-# CONFIG_USB_HID is not set
-CONFIG_USB=y
-CONFIG_USB_DEVICEFS=y
-CONFIG_USB_MON=y
-CONFIG_USB_OHCI_HCD=y
-CONFIG_USB_STORAGE=y
-CONFIG_USB_GADGET=y
-CONFIG_USB_ETH=m
-CONFIG_USB_FILE_STORAGE=m
 CONFIG_MMC=y
 CONFIG_MMC_AT91=m
 CONFIG_RTC_CLASS=y
 CONFIG_RTC_DRV_AT91SAM9=y
 CONFIG_EXT2_FS=y
-CONFIG_INOTIFY=y
+CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_TMPFS=y
-CONFIG_JFFS2_FS=y
 CONFIG_CRAMFS=y
-CONFIG_NFS_FS=y
-CONFIG_ROOT_NFS=y
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_CODEPAGE_850=y
 CONFIG_NLS_ISO8859_1=y
-CONFIG_DEBUG_FS=y
+CONFIG_NLS_ISO8859_15=y
+CONFIG_NLS_UTF8=y
 CONFIG_DEBUG_KERNEL=y
 CONFIG_DEBUG_INFO=y
 CONFIG_DEBUG_USER=y
+CONFIG_DEBUG_LL=y
arch/arm/configs/at91rm9200_defconfig | +14 -33
···
 CONFIG_IKCONFIG=y
 CONFIG_IKCONFIG_PROC=y
 CONFIG_LOG_BUF_SHIFT=14
-CONFIG_SYSFS_DEPRECATED_V2=y
 CONFIG_BLK_DEV_INITRD=y
 CONFIG_MODULES=y
 CONFIG_MODULE_FORCE_LOAD=y
···
 CONFIG_IP_PNP_DHCP=y
 CONFIG_IP_PNP_BOOTP=y
 CONFIG_NET_IPIP=m
-CONFIG_NET_IPGRE=m
 CONFIG_INET_AH=m
 CONFIG_INET_ESP=m
 CONFIG_INET_IPCOMP=m
···
 CONFIG_BRIDGE=m
 CONFIG_VLAN_8021Q=m
 CONFIG_BT=m
-CONFIG_BT_L2CAP=m
-CONFIG_BT_SCO=m
-CONFIG_BT_RFCOMM=m
-CONFIG_BT_RFCOMM_TTY=y
-CONFIG_BT_BNEP=m
-CONFIG_BT_BNEP_MC_FILTER=y
-CONFIG_BT_BNEP_PROTO_FILTER=y
-CONFIG_BT_HIDP=m
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_MTD=y
-CONFIG_MTD_CONCAT=y
-CONFIG_MTD_PARTITIONS=y
 CONFIG_MTD_CMDLINE_PARTS=y
 CONFIG_MTD_AFS_PARTS=y
 CONFIG_MTD_CHAR=y
···
 CONFIG_BLK_DEV_NBD=y
 CONFIG_BLK_DEV_RAM=y
 CONFIG_BLK_DEV_RAM_SIZE=8192
-CONFIG_ATMEL_TCLIB=y
-CONFIG_EEPROM_LEGACY=m
 CONFIG_SCSI=y
 CONFIG_BLK_DEV_SD=y
 CONFIG_BLK_DEV_SR=m
···
 # CONFIG_SCSI_LOWLEVEL is not set
 CONFIG_NETDEVICES=y
 CONFIG_TUN=m
+CONFIG_ARM_AT91_ETHER=y
 CONFIG_PHYLIB=y
 CONFIG_DAVICOM_PHY=y
 CONFIG_SMSC_PHY=y
 CONFIG_MICREL_PHY=y
-CONFIG_NET_ETHERNET=y
-CONFIG_ARM_AT91_ETHER=y
-# CONFIG_NETDEV_1000 is not set
-# CONFIG_NETDEV_10000 is not set
+CONFIG_PPP=y
+CONFIG_PPP_BSDCOMP=y
+CONFIG_PPP_DEFLATE=y
+CONFIG_PPP_FILTER=y
+CONFIG_PPP_MPPE=m
+CONFIG_PPP_MULTILINK=y
+CONFIG_PPPOE=m
+CONFIG_PPP_ASYNC=y
+CONFIG_SLIP=m
+CONFIG_SLIP_COMPRESSED=y
+CONFIG_SLIP_SMART=y
+CONFIG_SLIP_MODE_SLIP6=y
 CONFIG_USB_CATC=m
 CONFIG_USB_KAWETH=m
 CONFIG_USB_PEGASUS=m
···
 CONFIG_USB_ALI_M5632=y
 CONFIG_USB_AN2720=y
 CONFIG_USB_EPSON2888=y
-CONFIG_PPP=y
-CONFIG_PPP_MULTILINK=y
-CONFIG_PPP_FILTER=y
-CONFIG_PPP_ASYNC=y
-CONFIG_PPP_DEFLATE=y
-CONFIG_PPP_BSDCOMP=y
-CONFIG_PPP_MPPE=m
-CONFIG_PPPOE=m
-CONFIG_SLIP=m
-CONFIG_SLIP_COMPRESSED=y
-CONFIG_SLIP_SMART=y
-CONFIG_SLIP_MODE_SLIP6=y
 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set
 CONFIG_INPUT_MOUSEDEV_SCREEN_X=640
 CONFIG_INPUT_MOUSEDEV_SCREEN_Y=480
···
 CONFIG_KEYBOARD_GPIO=y
 # CONFIG_INPUT_MOUSE is not set
 CONFIG_INPUT_TOUCHSCREEN=y
+CONFIG_LEGACY_PTY_COUNT=32
 CONFIG_SERIAL_ATMEL=y
 CONFIG_SERIAL_ATMEL_CONSOLE=y
-CONFIG_LEGACY_PTY_COUNT=32
 CONFIG_HW_RANDOM=y
 CONFIG_I2C=y
 CONFIG_I2C_CHARDEV=y
···
 CONFIG_NFS_V4=y
 CONFIG_ROOT_NFS=y
 CONFIG_NFSD=y
-CONFIG_SMB_FS=m
 CONFIG_CIFS=m
 CONFIG_PARTITION_ADVANCED=y
 CONFIG_MAC_PARTITION=y
···
 CONFIG_MAGIC_SYSRQ=y
 CONFIG_DEBUG_FS=y
 CONFIG_DEBUG_KERNEL=y
-# CONFIG_RCU_CPU_STALL_DETECTOR is not set
 # CONFIG_FTRACE is not set
 CONFIG_CRYPTO_PCBC=y
 CONFIG_CRYPTO_SHA1=y
···
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
 CONFIG_ARCH_AT91=y
-CONFIG_ARCH_AT91SAM9260=y
-CONFIG_MACH_AT91SAM9260EK=y
+CONFIG_ARCH_AT91SAM9G20=y
+CONFIG_MACH_AT91SAM9G20EK=y
+CONFIG_MACH_AT91SAM9G20EK_2MMC=y
+CONFIG_MACH_CPU9G20=y
+CONFIG_MACH_ACMENETUSFOXG20=y
+CONFIG_MACH_PORTUXG20=y
+CONFIG_MACH_STAMP9G20=y
+CONFIG_MACH_PCONTROL_G20=y
+CONFIG_MACH_GSIA18S=y
+CONFIG_MACH_USB_A9G20=y
+CONFIG_MACH_SNAPPER_9260=y
+CONFIG_MACH_AT91SAM_DT=y
 CONFIG_AT91_PROGRAMMABLE_CLOCKS=y
 # CONFIG_ARM_THUMB is not set
+CONFIG_AEABI=y
+CONFIG_LEDS=y
+CONFIG_LEDS_CPU=y
 CONFIG_ZBOOT_ROM_TEXT=0x0
 CONFIG_ZBOOT_ROM_BSS=0x0
+CONFIG_ARM_APPENDED_DTB=y
+CONFIG_ARM_ATAG_DTB_COMPAT=y
 CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,3145728 root=/dev/ram0 rw"
 CONFIG_FPE_NWFPE=y
 CONFIG_NET=y
···
 # CONFIG_INET_LRO is not set
 # CONFIG_IPV6 is not set
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_MTD=y
+CONFIG_MTD_CMDLINE_PARTS=y
+CONFIG_MTD_CHAR=y
+CONFIG_MTD_BLOCK=y
+CONFIG_MTD_DATAFLASH=y
+CONFIG_MTD_NAND=y
+CONFIG_MTD_NAND_ATMEL=y
+CONFIG_BLK_DEV_LOOP=y
 CONFIG_BLK_DEV_RAM=y
 CONFIG_BLK_DEV_RAM_SIZE=8192
-CONFIG_ATMEL_SSC=y
 CONFIG_SCSI=y
 CONFIG_BLK_DEV_SD=y
 CONFIG_SCSI_MULTI_LUN=y
+# CONFIG_SCSI_LOWLEVEL is not set
 CONFIG_NETDEVICES=y
-CONFIG_NET_ETHERNET=y
 CONFIG_MII=y
 CONFIG_MACB=y
 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set
-# CONFIG_INPUT_KEYBOARD is not set
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=320
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=240
+CONFIG_INPUT_EVDEV=y
+# CONFIG_KEYBOARD_ATKBD is not set
+CONFIG_KEYBOARD_GPIO=y
 # CONFIG_INPUT_MOUSE is not set
-# CONFIG_SERIO is not set
+CONFIG_LEGACY_PTY_COUNT=16
 CONFIG_SERIAL_ATMEL=y
 CONFIG_SERIAL_ATMEL_CONSOLE=y
-# CONFIG_HW_RANDOM is not set
-CONFIG_I2C=y
-CONFIG_I2C_CHARDEV=y
-CONFIG_I2C_GPIO=y
+CONFIG_HW_RANDOM=y
+CONFIG_SPI=y
+CONFIG_SPI_ATMEL=y
+CONFIG_SPI_SPIDEV=y
 # CONFIG_HWMON is not set
-CONFIG_WATCHDOG=y
-CONFIG_WATCHDOG_NOWAYOUT=y
-CONFIG_AT91SAM9X_WATCHDOG=y
-# CONFIG_VGA_CONSOLE is not set
-# CONFIG_USB_HID is not set
+CONFIG_SOUND=y
+CONFIG_SND=y
+CONFIG_SND_SEQUENCER=y
+CONFIG_SND_MIXER_OSS=y
+CONFIG_SND_PCM_OSS=y
+CONFIG_SND_SEQUENCER_OSS=y
+# CONFIG_SND_VERBOSE_PROCFS is not set
 CONFIG_USB=y
 CONFIG_USB_DEVICEFS=y
+# CONFIG_USB_DEVICE_CLASS is not set
 CONFIG_USB_MON=y
 CONFIG_USB_OHCI_HCD=y
 CONFIG_USB_STORAGE=y
-CONFIG_USB_STORAGE_DEBUG=y
 CONFIG_USB_GADGET=y
 CONFIG_USB_ZERO=m
 CONFIG_USB_GADGETFS=m
 CONFIG_USB_FILE_STORAGE=m
 CONFIG_USB_G_SERIAL=m
+CONFIG_MMC=y
+CONFIG_MMC_AT91=m
+CONFIG_NEW_LEDS=y
+CONFIG_LEDS_CLASS=y
+CONFIG_LEDS_GPIO=y
+CONFIG_LEDS_TRIGGERS=y
+CONFIG_LEDS_TRIGGER_TIMER=y
+CONFIG_LEDS_TRIGGER_HEARTBEAT=y
 CONFIG_RTC_CLASS=y
 CONFIG_RTC_DRV_AT91SAM9=y
 CONFIG_EXT2_FS=y
-CONFIG_INOTIFY=y
+CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_TMPFS=y
+CONFIG_JFFS2_FS=y
+CONFIG_JFFS2_SUMMARY=y
 CONFIG_CRAMFS=y
+CONFIG_NFS_FS=y
+CONFIG_NFS_V3=y
+CONFIG_ROOT_NFS=y
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_CODEPAGE_850=y
 CONFIG_NLS_ISO8859_1=y
-CONFIG_DEBUG_KERNEL=y
-CONFIG_DEBUG_USER=y
-CONFIG_DEBUG_LL=y
+CONFIG_NLS_ISO8859_15=y
+CONFIG_NLS_UTF8=y
+# CONFIG_ENABLE_WARN_DEPRECATED is not set
···
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
 CONFIG_ARCH_AT91=y
-CONFIG_ARCH_AT91SAM9G20=y
-CONFIG_MACH_AT91SAM9G20EK=y
-CONFIG_MACH_AT91SAM9G20EK_2MMC=y
+CONFIG_ARCH_AT91CAP9=y
+CONFIG_MACH_AT91CAP9ADK=y
+CONFIG_MTD_AT91_DATAFLASH_CARD=y
 CONFIG_AT91_PROGRAMMABLE_CLOCKS=y
 # CONFIG_ARM_THUMB is not set
 CONFIG_AEABI=y
···
 CONFIG_LEDS_CPU=y
 CONFIG_ZBOOT_ROM_TEXT=0x0
 CONFIG_ZBOOT_ROM_BSS=0x0
-CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,3145728 root=/dev/ram0 rw"
+CONFIG_CMDLINE="console=ttyS0,115200 root=/dev/ram0 rw"
 CONFIG_FPE_NWFPE=y
-CONFIG_PM=y
 CONFIG_NET=y
 CONFIG_PACKET=y
 CONFIG_UNIX=y
 CONFIG_INET=y
 CONFIG_IP_PNP=y
 CONFIG_IP_PNP_BOOTP=y
+CONFIG_IP_PNP_RARP=y
 # CONFIG_INET_XFRM_MODE_TRANSPORT is not set
 # CONFIG_INET_XFRM_MODE_TUNNEL is not set
 # CONFIG_INET_XFRM_MODE_BEET is not set
 # CONFIG_INET_LRO is not set
+# CONFIG_INET_DIAG is not set
 # CONFIG_IPV6 is not set
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_MTD=y
-CONFIG_MTD_CONCAT=y
-CONFIG_MTD_PARTITIONS=y
 CONFIG_MTD_CMDLINE_PARTS=y
 CONFIG_MTD_CHAR=y
 CONFIG_MTD_BLOCK=y
+CONFIG_MTD_CFI=y
+CONFIG_MTD_JEDECPROBE=y
+CONFIG_MTD_CFI_AMDSTD=y
+CONFIG_MTD_PHYSMAP=y
 CONFIG_MTD_DATAFLASH=y
 CONFIG_MTD_NAND=y
 CONFIG_MTD_NAND_ATMEL=y
 CONFIG_BLK_DEV_LOOP=y
 CONFIG_BLK_DEV_RAM=y
 CONFIG_BLK_DEV_RAM_SIZE=8192
-CONFIG_ATMEL_SSC=y
 CONFIG_SCSI=y
 CONFIG_BLK_DEV_SD=y
 CONFIG_SCSI_MULTI_LUN=y
-# CONFIG_SCSI_LOWLEVEL is not set
 CONFIG_NETDEVICES=y
-CONFIG_NET_ETHERNET=y
 CONFIG_MII=y
 CONFIG_MACB=y
-# CONFIG_NETDEV_1000 is not set
-# CONFIG_NETDEV_10000 is not set
 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set
-CONFIG_INPUT_MOUSEDEV_SCREEN_X=320
-CONFIG_INPUT_MOUSEDEV_SCREEN_Y=240
 CONFIG_INPUT_EVDEV=y
-# CONFIG_KEYBOARD_ATKBD is not set
-CONFIG_KEYBOARD_GPIO=y
+# CONFIG_INPUT_KEYBOARD is not set
 # CONFIG_INPUT_MOUSE is not set
+CONFIG_INPUT_TOUCHSCREEN=y
+CONFIG_TOUCHSCREEN_ADS7846=y
+# CONFIG_SERIO is not set
 CONFIG_SERIAL_ATMEL=y
 CONFIG_SERIAL_ATMEL_CONSOLE=y
-CONFIG_LEGACY_PTY_COUNT=16
 CONFIG_HW_RANDOM=y
+CONFIG_I2C=y
+CONFIG_I2C_CHARDEV=y
 CONFIG_SPI=y
 CONFIG_SPI_ATMEL=y
-CONFIG_SPI_SPIDEV=y
 # CONFIG_HWMON is not set
-# CONFIG_VGA_CONSOLE is not set
-CONFIG_SOUND=y
-CONFIG_SND=y
-CONFIG_SND_SEQUENCER=y
-CONFIG_SND_MIXER_OSS=y
-CONFIG_SND_PCM_OSS=y
-CONFIG_SND_SEQUENCER_OSS=y
-# CONFIG_SND_VERBOSE_PROCFS is not set
-CONFIG_SND_AT73C213=y
+CONFIG_WATCHDOG=y
+CONFIG_WATCHDOG_NOWAYOUT=y
+CONFIG_FB=y
+CONFIG_FB_ATMEL=y
+CONFIG_LOGO=y
+# CONFIG_LOGO_LINUX_MONO is not set
+# CONFIG_LOGO_LINUX_CLUT224 is not set
+# CONFIG_USB_HID is not set
 CONFIG_USB=y
 CONFIG_USB_DEVICEFS=y
-# CONFIG_USB_DEVICE_CLASS is not set
 CONFIG_USB_MON=y
 CONFIG_USB_OHCI_HCD=y
 CONFIG_USB_STORAGE=y
 CONFIG_USB_GADGET=y
-CONFIG_USB_ZERO=m
-CONFIG_USB_GADGETFS=m
+CONFIG_USB_ETH=m
 CONFIG_USB_FILE_STORAGE=m
-CONFIG_USB_G_SERIAL=m
 CONFIG_MMC=y
 CONFIG_MMC_AT91=m
-CONFIG_NEW_LEDS=y
-CONFIG_LEDS_CLASS=y
-CONFIG_LEDS_GPIO=y
-CONFIG_LEDS_TRIGGERS=y
-CONFIG_LEDS_TRIGGER_TIMER=y
-CONFIG_LEDS_TRIGGER_HEARTBEAT=y
 CONFIG_RTC_CLASS=y
 CONFIG_RTC_DRV_AT91SAM9=y
 CONFIG_EXT2_FS=y
-CONFIG_INOTIFY=y
-CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_TMPFS=y
 CONFIG_JFFS2_FS=y
-CONFIG_JFFS2_SUMMARY=y
 CONFIG_CRAMFS=y
 CONFIG_NFS_FS=y
-CONFIG_NFS_V3=y
 CONFIG_ROOT_NFS=y
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_CODEPAGE_850=y
 CONFIG_NLS_ISO8859_1=y
-CONFIG_NLS_ISO8859_15=y
-CONFIG_NLS_UTF8=y
-# CONFIG_ENABLE_WARN_DEPRECATED is not set
+CONFIG_DEBUG_FS=y
+CONFIG_DEBUG_KERNEL=y
+CONFIG_DEBUG_INFO=y
+CONFIG_DEBUG_USER=y
+2-5
arch/arm/configs/at91sam9g45_defconfig
···1818CONFIG_ARCH_AT91=y1919CONFIG_ARCH_AT91SAM9G45=y2020CONFIG_MACH_AT91SAM9M10G45EK=y2121+CONFIG_MACH_AT91SAM_DT=y2122CONFIG_AT91_PROGRAMMABLE_CLOCKS=y2223CONFIG_AT91_SLOW_CLOCK=y2324CONFIG_AEABI=y···7473# CONFIG_SCSI_LOWLEVEL is not set7574CONFIG_NETDEVICES=y7675CONFIG_MII=y7777-CONFIG_DAVICOM_PHY=y7878-CONFIG_NET_ETHERNET=y7976CONFIG_MACB=y8080-# CONFIG_NETDEV_1000 is not set8181-# CONFIG_NETDEV_10000 is not set7777+CONFIG_DAVICOM_PHY=y8278CONFIG_LIBERTAS_THINFIRM=m8379CONFIG_LIBERTAS_THINFIRM_USB=m8480CONFIG_AT76C50X_USB=m···129131CONFIG_SPI=y130132CONFIG_SPI_ATMEL=y131133# CONFIG_HWMON is not set132132-# CONFIG_MFD_SUPPORT is not set133134CONFIG_FB=y134135CONFIG_FB_ATMEL=y135136CONFIG_FB_UDL=m
···1111# CONFIG_IOSCHED_DEADLINE is not set1212# CONFIG_IOSCHED_CFQ is not set1313CONFIG_ARCH_AT91=y1414-CONFIG_ARCH_AT91SAM9RL=y1515-CONFIG_MACH_AT91SAM9RLEK=y1414+CONFIG_ARCH_AT91SAM9260=y1515+CONFIG_ARCH_AT91SAM9260_SAM9XE=y1616+CONFIG_MACH_AT91SAM9260EK=y1717+CONFIG_MACH_CAM60=y1818+CONFIG_MACH_SAM9_L9260=y1919+CONFIG_MACH_AFEB9260=y2020+CONFIG_MACH_USB_A9260=y2121+CONFIG_MACH_QIL_A9260=y2222+CONFIG_MACH_CPU9260=y2323+CONFIG_MACH_FLEXIBITY=y2424+CONFIG_MACH_SNAPPER_9260=y2525+CONFIG_MACH_AT91SAM_DT=y1626CONFIG_AT91_PROGRAMMABLE_CLOCKS=y1727# CONFIG_ARM_THUMB is not set1828CONFIG_ZBOOT_ROM_TEXT=0x01929CONFIG_ZBOOT_ROM_BSS=0x02020-CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,17105363 root=/dev/ram0 rw"3030+CONFIG_ARM_APPENDED_DTB=y3131+CONFIG_ARM_ATAG_DTB_COMPAT=y3232+CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,3145728 root=/dev/ram0 rw"2133CONFIG_FPE_NWFPE=y2234CONFIG_NET=y3535+CONFIG_PACKET=y2336CONFIG_UNIX=y3737+CONFIG_INET=y3838+CONFIG_IP_PNP=y3939+CONFIG_IP_PNP_BOOTP=y4040+# CONFIG_INET_XFRM_MODE_TRANSPORT is not set4141+# CONFIG_INET_XFRM_MODE_TUNNEL is not set4242+# CONFIG_INET_XFRM_MODE_BEET is not set4343+# CONFIG_INET_LRO is not set4444+# CONFIG_IPV6 is not set2445CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"2525-CONFIG_MTD=y2626-CONFIG_MTD_CONCAT=y2727-CONFIG_MTD_PARTITIONS=y2828-CONFIG_MTD_CMDLINE_PARTS=y2929-CONFIG_MTD_CHAR=y3030-CONFIG_MTD_BLOCK=y3131-CONFIG_MTD_DATAFLASH=y3232-CONFIG_MTD_NAND=y3333-CONFIG_MTD_NAND_ATMEL=y3434-CONFIG_BLK_DEV_LOOP=y3546CONFIG_BLK_DEV_RAM=y3636-CONFIG_BLK_DEV_RAM_COUNT=43737-CONFIG_BLK_DEV_RAM_SIZE=245763838-CONFIG_ATMEL_SSC=y4747+CONFIG_BLK_DEV_RAM_SIZE=81923948CONFIG_SCSI=y4049CONFIG_BLK_DEV_SD=y4150CONFIG_SCSI_MULTI_LUN=y5151+CONFIG_NETDEVICES=y5252+CONFIG_MII=y5353+CONFIG_MACB=y4254# CONFIG_INPUT_MOUSEDEV_PSAUX is not set4343-CONFIG_INPUT_MOUSEDEV_SCREEN_X=3204444-CONFIG_INPUT_MOUSEDEV_SCREEN_Y=2404545-CONFIG_INPUT_EVDEV=y4655# CONFIG_INPUT_KEYBOARD is not set4756# 
CONFIG_INPUT_MOUSE is not set4848-CONFIG_INPUT_TOUCHSCREEN=y4949-CONFIG_TOUCHSCREEN_ATMEL_TSADCC=y5057# CONFIG_SERIO is not set5158CONFIG_SERIAL_ATMEL=y5259CONFIG_SERIAL_ATMEL_CONSOLE=y···6154CONFIG_I2C=y6255CONFIG_I2C_CHARDEV=y6356CONFIG_I2C_GPIO=y6464-CONFIG_SPI=y6565-CONFIG_SPI_ATMEL=y6657# CONFIG_HWMON is not set6758CONFIG_WATCHDOG=y6859CONFIG_WATCHDOG_NOWAYOUT=y6960CONFIG_AT91SAM9X_WATCHDOG=y7070-CONFIG_FB=y7171-CONFIG_FB_ATMEL=y7272-# CONFIG_VGA_CONSOLE is not set7373-CONFIG_MMC=y7474-CONFIG_MMC_AT91=m6161+# CONFIG_USB_HID is not set6262+CONFIG_USB=y6363+CONFIG_USB_DEVICEFS=y6464+CONFIG_USB_MON=y6565+CONFIG_USB_OHCI_HCD=y6666+CONFIG_USB_STORAGE=y6767+CONFIG_USB_STORAGE_DEBUG=y6868+CONFIG_USB_GADGET=y6969+CONFIG_USB_ZERO=m7070+CONFIG_USB_GADGETFS=m7171+CONFIG_USB_FILE_STORAGE=m7272+CONFIG_USB_G_SERIAL=m7573CONFIG_RTC_CLASS=y7674CONFIG_RTC_DRV_AT91SAM9=y7775CONFIG_EXT2_FS=y7878-CONFIG_INOTIFY=y7979-CONFIG_MSDOS_FS=y8076CONFIG_VFAT_FS=y8177CONFIG_TMPFS=y8278CONFIG_CRAMFS=y8379CONFIG_NLS_CODEPAGE_437=y8480CONFIG_NLS_CODEPAGE_850=y8581CONFIG_NLS_ISO8859_1=y8686-CONFIG_NLS_ISO8859_15=y8787-CONFIG_NLS_UTF8=y8882CONFIG_DEBUG_KERNEL=y8989-CONFIG_DEBUG_INFO=y9083CONFIG_DEBUG_USER=y9184CONFIG_DEBUG_LL=y
+1-1
arch/arm/configs/ezx_defconfig
···287287# CONFIG_USB_DEVICE_CLASS is not set288288CONFIG_USB_OHCI_HCD=y289289CONFIG_USB_GADGET=y290290-CONFIG_USB_GADGET_PXA27X=y290290+CONFIG_USB_PXA27X=y291291CONFIG_USB_ETH=m292292# CONFIG_USB_ETH_RNDIS is not set293293CONFIG_MMC=y
+1-1
arch/arm/configs/imote2_defconfig
···263263# CONFIG_USB_DEVICE_CLASS is not set264264CONFIG_USB_OHCI_HCD=y265265CONFIG_USB_GADGET=y266266-CONFIG_USB_GADGET_PXA27X=y266266+CONFIG_USB_PXA27X=y267267CONFIG_USB_ETH=m268268# CONFIG_USB_ETH_RNDIS is not set269269CONFIG_MMC=y
+1-1
arch/arm/configs/magician_defconfig
···132132CONFIG_USB_OHCI_HCD=y133133CONFIG_USB_GADGET=y134134CONFIG_USB_GADGET_VBUS_DRAW=500135135-CONFIG_USB_GADGET_PXA27X=y135135+CONFIG_USB_PXA27X=y136136CONFIG_USB_ETH=m137137# CONFIG_USB_ETH_RNDIS is not set138138CONFIG_USB_GADGETFS=m
···1414CONFIG_ARCH_U300=y1515CONFIG_MACH_U300=y1616CONFIG_MACH_U300_BS335=y1717-CONFIG_MACH_U300_DUAL_RAM=y1818-CONFIG_U300_DEBUG=y1917CONFIG_MACH_U300_SPIDUMMY=y2018CONFIG_NO_HZ=y2119CONFIG_HIGH_RES_TIMERS=y···2426CONFIG_CMDLINE="root=/dev/ram0 rw rootfstype=rootfs console=ttyAMA0,115200n8 lpj=515072"2527CONFIG_CPU_IDLE=y2628CONFIG_FPE_NWFPE=y2727-CONFIG_PM=y2829# CONFIG_SUSPEND is not set2930CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"3031# CONFIG_PREVENT_FIRMWARE_BUILD is not set3131-# CONFIG_MISC_DEVICES is not set3232+CONFIG_MTD=y3333+CONFIG_MTD_CMDLINE_PARTS=y3434+CONFIG_MTD_NAND=y3535+CONFIG_MTD_NAND_FSMC=y3236# CONFIG_INPUT_MOUSEDEV is not set3337CONFIG_INPUT_EVDEV=y3438# CONFIG_KEYBOARD_ATKBD is not set3539# CONFIG_INPUT_MOUSE is not set3640# CONFIG_SERIO is not set4141+CONFIG_LEGACY_PTY_COUNT=163742CONFIG_SERIAL_AMBA_PL011=y3843CONFIG_SERIAL_AMBA_PL011_CONSOLE=y3939-CONFIG_LEGACY_PTY_COUNT=164044# CONFIG_HW_RANDOM is not set4145CONFIG_I2C=y4246# CONFIG_HWMON is not set···5151# CONFIG_HID_SUPPORT is not set5252# CONFIG_USB_SUPPORT is not set5353CONFIG_MMC=y5454+CONFIG_MMC_CLKGATE=y5455CONFIG_MMC_ARMMMCI=y5556CONFIG_RTC_CLASS=y5657# CONFIG_RTC_HCTOSYS is not set···6665CONFIG_NLS_ISO8859_1=y6766CONFIG_PRINTK_TIME=y6867CONFIG_DEBUG_FS=y6969-CONFIG_DEBUG_KERNEL=y7068# CONFIG_SCHED_DEBUG is not set7169CONFIG_TIMER_STATS=y7270# CONFIG_DEBUG_PREEMPT is not set7371CONFIG_DEBUG_INFO=y7474-# CONFIG_RCU_CPU_STALL_DETECTOR is not set7572# CONFIG_CRC32 is not set
+5-9
arch/arm/configs/u8500_defconfig
···1010CONFIG_ARCH_U8500=y1111CONFIG_UX500_SOC_DB5500=y1212CONFIG_UX500_SOC_DB8500=y1313-CONFIG_MACH_U8500=y1313+CONFIG_MACH_HREFV60=y1414CONFIG_MACH_SNOWBALL=y1515CONFIG_MACH_U5500=y1616CONFIG_NO_HZ=y···2424CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y2525CONFIG_VFP=y2626CONFIG_NEON=y2727+CONFIG_PM_RUNTIME=y2728CONFIG_NET=y2829CONFIG_PACKET=y2930CONFIG_UNIX=y···4241CONFIG_AB8500_PWM=y4342CONFIG_SENSORS_BH1780=y4443CONFIG_NETDEVICES=y4545-CONFIG_SMSC_PHY=y4646-CONFIG_NET_ETHERNET=y4744CONFIG_SMSC911X=y4848-# CONFIG_NETDEV_1000 is not set4949-# CONFIG_NETDEV_10000 is not set4545+CONFIG_SMSC_PHY=y5046# CONFIG_WLAN is not set5147# CONFIG_INPUT_MOUSEDEV_PSAUX is not set5248CONFIG_INPUT_EVDEV=y···7072CONFIG_SPI_PL022=y7173CONFIG_GPIO_STMPE=y7274CONFIG_GPIO_TC3589X=y7373-# CONFIG_HWMON is not set7475CONFIG_MFD_STMPE=y7576CONFIG_MFD_TC3589X=y7777+CONFIG_AB5500_CORE=y7678CONFIG_AB8500_CORE=y7779CONFIG_REGULATOR_AB8500=y7880# CONFIG_HID_SUPPORT is not set7979-CONFIG_USB_MUSB_HDRC=y8080-CONFIG_USB_GADGET_MUSB_HDRC=y8181-CONFIG_MUSB_PIO_ONLY=y8281CONFIG_USB_GADGET=y8382CONFIG_AB8500_USB=y8483CONFIG_MMC=y···9297CONFIG_STE_DMA40=y9398CONFIG_STAGING=y9499CONFIG_TOUCHSCREEN_SYNAPTICS_I2C_RMI4=y100100+CONFIG_HSEM_U8500=y95101CONFIG_EXT2_FS=y96102CONFIG_EXT2_FS_XATTR=y97103CONFIG_EXT2_FS_POSIX_ACL=y
···5555extern void5656release_pmu(enum arm_pmu_type type);57575858-/**5959- * init_pmu() - Initialise the PMU.6060- *6161- * Initialise the system ready for PMU enabling. This should typically set the6262- * IRQ affinity and nothing else. The users (oprofile/perf events etc) will do6363- * the actual hardware initialisation.6464- */6565-extern int6666-init_pmu(enum arm_pmu_type type);6767-6858#else /* CONFIG_CPU_HAS_PMU */69597060#include <linux/err.h>
+1-1
arch/arm/include/asm/topology.h
···25252626void init_cpu_topology(void);2727void store_cpu_topology(unsigned int cpuid);2828-const struct cpumask *cpu_coregroup_mask(unsigned int cpu);2828+const struct cpumask *cpu_coregroup_mask(int cpu);29293030#else3131
···1010config HAVE_IMX_SRC1111 bool12121313-#1414-# ARCH_MX31 and ARCH_MX35 are left for compatibility1515-# Some usages assume that having one of them implies not having (e.g.) ARCH_MX2.1616-# To easily distinguish good and reviewed from unreviewed usages new (and IMHO1717-# more sensible) names are used: SOC_IMX31 and SOC_IMX351813config ARCH_MX11914 bool2015···2025 bool21262227config MACH_MX272323- bool2424-2525-config ARCH_MX312626- bool2727-2828-config ARCH_MX352928 bool30293130config SOC_IMX1···6172 select CPU_V66273 select IMX_HAVE_PLATFORM_MXC_RNGA6374 select ARCH_MXC_AUDMUX_V26464- select ARCH_MX316575 select MXC_AVIC6676 select SMP_ON_UP if SMP6777···7082 select ARCH_MXC_IOMUX_V37183 select ARCH_MXC_AUDMUX_V27284 select HAVE_EPIT7373- select ARCH_MX357485 select MXC_AVIC7586 select SMP_ON_UP if SMP7687
+5-2
arch/arm/mach-imx/clock-imx6q.c
···19531953 imx_map_entry(MX6Q, ANATOP, MT_DEVICE),19541954};1955195519561956+void __init imx6q_clock_map_io(void)19571957+{19581958+ iotable_init(imx6q_clock_desc, ARRAY_SIZE(imx6q_clock_desc));19591959+}19601960+19561961int __init mx6q_clocks_init(void)19571962{19581963 struct device_node *np;19591964 void __iomem *base;19601965 int i, irq;19611961-19621962- iotable_init(imx6q_clock_desc, ARRAY_SIZE(imx6q_clock_desc));1963196619641967 /* retrieve the frequency of fixed clocks from device tree */19651968 for_each_compatible_node(np, NULL, "fixed-clock") {
···171171comment "OMAP CPU Speed"172172 depends on ARCH_OMAP1173173174174-config OMAP_CLOCKS_SET_BY_BOOTLOADER175175- bool "OMAP clocks set by bootloader"176176- depends on ARCH_OMAP1177177- help178178- Enable this option to prevent the kernel from overriding the clock179179- frequencies programmed by bootloader for MPU, DSP, MMUs, TC,180180- internal LCD controller and MPU peripherals.181181-182174config OMAP_ARM_216MHZ183175 bool "OMAP ARM 216 MHz CPU (1710 only)"184176 depends on ARCH_OMAP1 && ARCH_OMAP16XX
···17171818#include <plat/clock.h>19192020-extern int __init omap1_clk_init(void);2020+int omap1_clk_init(void);2121+void omap1_clk_late_init(void);2122extern int omap1_clk_enable(struct clk *clk);2223extern void omap1_clk_disable(struct clk *clk);2324extern long omap1_clk_round_rate(struct clk *clk, unsigned long rate);
+34-19
arch/arm/mach-omap1/clock_data.c
···767767 .clk_disable_unused = omap1_clk_disable_unused,768768};769769770770+static void __init omap1_show_rates(void)771771+{772772+ pr_notice("Clocking rate (xtal/DPLL1/MPU): "773773+ "%ld.%01ld/%ld.%01ld/%ld.%01ld MHz\n",774774+ ck_ref.rate / 1000000, (ck_ref.rate / 100000) % 10,775775+ ck_dpll1.rate / 1000000, (ck_dpll1.rate / 100000) % 10,776776+ arm_ck.rate / 1000000, (arm_ck.rate / 100000) % 10);777777+}778778+770779int __init omap1_clk_init(void)771780{772781 struct omap_clk *c;···844835 /* We want to be in syncronous scalable mode */845836 omap_writew(0x1000, ARM_SYSST);846837847847-#ifdef CONFIG_OMAP_CLOCKS_SET_BY_BOOTLOADER848848- /* Use values set by bootloader. Determine PLL rate and recalculate849849- * dependent clocks as if kernel had changed PLL or divisors.838838+839839+ /*840840+ * Initially use the values set by bootloader. Determine PLL rate and841841+ * recalculate dependent clocks as if kernel had changed PLL or842842+ * divisors. See also omap1_clk_late_init() that can reprogram dpll1843843+ * after the SRAM is initialized.850844 */851845 {852846 unsigned pll_ctl_val = omap_readw(DPLL_CTL);···874862 }875863 }876864 }877877-#else878878- /* Find the highest supported frequency and enable it */879879- if (omap1_select_table_rate(&virtual_ck_mpu, ~0)) {880880- printk(KERN_ERR "System frequencies not set. Check your config.\n");881881- /* Guess sane values (60MHz) */882882- omap_writew(0x2290, DPLL_CTL);883883- omap_writew(cpu_is_omap7xx() ? 
0x3005 : 0x1005, ARM_CKCTL);884884- ck_dpll1.rate = 60000000;885885- }886886-#endif887865 propagate_rate(&ck_dpll1);888866 /* Cache rates for clocks connected to ck_ref (not dpll1) */889867 propagate_rate(&ck_ref);890890- printk(KERN_INFO "Clocking rate (xtal/DPLL1/MPU): "891891- "%ld.%01ld/%ld.%01ld/%ld.%01ld MHz\n",892892- ck_ref.rate / 1000000, (ck_ref.rate / 100000) % 10,893893- ck_dpll1.rate / 1000000, (ck_dpll1.rate / 100000) % 10,894894- arm_ck.rate / 1000000, (arm_ck.rate / 100000) % 10);895895-868868+ omap1_show_rates();896869 if (machine_is_omap_perseus2() || machine_is_omap_fsample()) {897870 /* Select slicer output as OMAP input clock */898871 omap_writew(omap_readw(OMAP7XX_PCC_UPLD_CTRL) & ~0x1,···921924 clk_enable(&arm_gpio_ck);922925923926 return 0;927927+}928928+929929+#define OMAP1_DPLL1_SANE_VALUE 60000000930930+931931+void __init omap1_clk_late_init(void)932932+{933933+ if (ck_dpll1.rate >= OMAP1_DPLL1_SANE_VALUE)934934+ return;935935+936936+ /* Find the highest supported frequency and enable it */937937+ if (omap1_select_table_rate(&virtual_ck_mpu, ~0)) {938938+ pr_err("System frequencies not set, using default. Check your config.\n");939939+ omap_writew(0x2290, DPLL_CTL);940940+ omap_writew(cpu_is_omap7xx() ? 0x3005 : 0x1005, ARM_CKCTL);941941+ ck_dpll1.rate = OMAP1_DPLL1_SANE_VALUE;942942+ }943943+ propagate_rate(&ck_dpll1);944944+ omap1_show_rates();924945}
+3
arch/arm/mach-omap1/devices.c
···3030#include <plat/omap7xx.h>3131#include <plat/mcbsp.h>32323333+#include "clock.h"3434+3335/*-------------------------------------------------------------------------*/34363537#if defined(CONFIG_RTC_DRV_OMAP) || defined(CONFIG_RTC_DRV_OMAP_MODULE)···295293 return -ENODEV;296294297295 omap_sram_init();296296+ omap1_clk_late_init();298297299298 /* please keep these calls, and their implementations above,300299 * in alphabetical order so they're easier to sort through.
+1
arch/arm/mach-omap2/Kconfig
···334334config OMAP3_EMU335335 bool "OMAP3 debugging peripherals"336336 depends on ARCH_OMAP3337337+ select ARM_AMBA337338 select OC_ETM338339 help339340 Say Y here to enable debugging hardware of omap3
···2727#include <plat/omap_hwmod.h>2828#include <plat/omap_device.h>2929#include <plat/omap-pm.h>3030+#include <plat/common.h>30313132#include "control.h"3333+#include "display.h"3434+3535+#define DISPC_CONTROL 0x00403636+#define DISPC_CONTROL2 0x02383737+#define DISPC_IRQSTATUS 0x00183838+3939+#define DSS_SYSCONFIG 0x104040+#define DSS_SYSSTATUS 0x144141+#define DSS_CONTROL 0x404242+#define DSS_SDI_CONTROL 0x444343+#define DSS_PLL_CONTROL 0x484444+4545+#define LCD_EN_MASK (0x1 << 0)4646+#define DIGIT_EN_MASK (0x1 << 1)4747+4848+#define FRAMEDONE_IRQ_SHIFT 04949+#define EVSYNC_EVEN_IRQ_SHIFT 25050+#define EVSYNC_ODD_IRQ_SHIFT 35151+#define FRAMEDONE2_IRQ_SHIFT 225252+#define FRAMEDONETV_IRQ_SHIFT 245353+5454+/*5555+ * FRAMEDONE_IRQ_TIMEOUT: how long (in milliseconds) to wait during DISPC5656+ * reset before deciding that something has gone wrong5757+ */5858+#define FRAMEDONE_IRQ_TIMEOUT 10032593360static struct platform_device omap_display_device = {3461 .name = "omapdss",···196169 r = platform_device_register(&omap_display_device);197170 if (r < 0)198171 printk(KERN_ERR "Unable to register OMAP-Display device\n");172172+173173+ return r;174174+}175175+176176+static void dispc_disable_outputs(void)177177+{178178+ u32 v, irq_mask = 0;179179+ bool lcd_en, digit_en, lcd2_en = false;180180+ int i;181181+ struct omap_dss_dispc_dev_attr *da;182182+ struct omap_hwmod *oh;183183+184184+ oh = omap_hwmod_lookup("dss_dispc");185185+ if (!oh) {186186+ WARN(1, "display: could not disable outputs during reset - could not find dss_dispc hwmod\n");187187+ return;188188+ }189189+190190+ if (!oh->dev_attr) {191191+ pr_err("display: could not disable outputs during reset due to missing dev_attr\n");192192+ return;193193+ }194194+195195+ da = (struct omap_dss_dispc_dev_attr *)oh->dev_attr;196196+197197+ /* store value of LCDENABLE and DIGITENABLE bits */198198+ v = omap_hwmod_read(oh, DISPC_CONTROL);199199+ lcd_en = v & LCD_EN_MASK;200200+ digit_en = v & DIGIT_EN_MASK;201201+202202+ 
/* store value of LCDENABLE for LCD2 */203203+ if (da->manager_count > 2) {204204+ v = omap_hwmod_read(oh, DISPC_CONTROL2);205205+ lcd2_en = v & LCD_EN_MASK;206206+ }207207+208208+ if (!(lcd_en | digit_en | lcd2_en))209209+ return; /* no managers currently enabled */210210+211211+ /*212212+ * If any manager was enabled, we need to disable it before213213+ * DSS clocks are disabled or DISPC module is reset214214+ */215215+ if (lcd_en)216216+ irq_mask |= 1 << FRAMEDONE_IRQ_SHIFT;217217+218218+ if (digit_en) {219219+ if (da->has_framedonetv_irq) {220220+ irq_mask |= 1 << FRAMEDONETV_IRQ_SHIFT;221221+ } else {222222+ irq_mask |= 1 << EVSYNC_EVEN_IRQ_SHIFT |223223+ 1 << EVSYNC_ODD_IRQ_SHIFT;224224+ }225225+ }226226+227227+ if (lcd2_en)228228+ irq_mask |= 1 << FRAMEDONE2_IRQ_SHIFT;229229+230230+ /*231231+ * clear any previous FRAMEDONE, FRAMEDONETV,232232+ * EVSYNC_EVEN/ODD or FRAMEDONE2 interrupts233233+ */234234+ omap_hwmod_write(irq_mask, oh, DISPC_IRQSTATUS);235235+236236+ /* disable LCD and TV managers */237237+ v = omap_hwmod_read(oh, DISPC_CONTROL);238238+ v &= ~(LCD_EN_MASK | DIGIT_EN_MASK);239239+ omap_hwmod_write(v, oh, DISPC_CONTROL);240240+241241+ /* disable LCD2 manager */242242+ if (da->manager_count > 2) {243243+ v = omap_hwmod_read(oh, DISPC_CONTROL2);244244+ v &= ~LCD_EN_MASK;245245+ omap_hwmod_write(v, oh, DISPC_CONTROL2);246246+ }247247+248248+ i = 0;249249+ while ((omap_hwmod_read(oh, DISPC_IRQSTATUS) & irq_mask) !=250250+ irq_mask) {251251+ i++;252252+ if (i > FRAMEDONE_IRQ_TIMEOUT) {253253+ pr_err("didn't get FRAMEDONE1/2 or TV interrupt\n");254254+ break;255255+ }256256+ mdelay(1);257257+ }258258+}259259+260260+#define MAX_MODULE_SOFTRESET_WAIT 10000261261+int omap_dss_reset(struct omap_hwmod *oh)262262+{263263+ struct omap_hwmod_opt_clk *oc;264264+ int c = 0;265265+ int i, r;266266+267267+ if (!(oh->class->sysc->sysc_flags & SYSS_HAS_RESET_STATUS)) {268268+ pr_err("dss_core: hwmod data doesn't contain reset data\n");269269+ return -EINVAL;270270+ 
}271271+272272+ for (i = oh->opt_clks_cnt, oc = oh->opt_clks; i > 0; i--, oc++)273273+ if (oc->_clk)274274+ clk_enable(oc->_clk);275275+276276+ dispc_disable_outputs();277277+278278+ /* clear SDI registers */279279+ if (cpu_is_omap3430()) {280280+ omap_hwmod_write(0x0, oh, DSS_SDI_CONTROL);281281+ omap_hwmod_write(0x0, oh, DSS_PLL_CONTROL);282282+ }283283+284284+ /*285285+ * clear DSS_CONTROL register to switch DSS clock sources to286286+ * PRCM clock, if any287287+ */288288+ omap_hwmod_write(0x0, oh, DSS_CONTROL);289289+290290+ omap_test_timeout((omap_hwmod_read(oh, oh->class->sysc->syss_offs)291291+ & SYSS_RESETDONE_MASK),292292+ MAX_MODULE_SOFTRESET_WAIT, c);293293+294294+ if (c == MAX_MODULE_SOFTRESET_WAIT)295295+ pr_warning("dss_core: waiting for reset to finish failed\n");296296+ else297297+ pr_debug("dss_core: softreset done\n");298298+299299+ for (i = oh->opt_clks_cnt, oc = oh->opt_clks; i > 0; i--, oc++)300300+ if (oc->_clk)301301+ clk_disable(oc->_clk);302302+303303+ r = (c == MAX_MODULE_SOFTRESET_WAIT) ? -ETIMEDOUT : 0;199304200305 return r;201306}
+29
arch/arm/mach-omap2/display.h
···11+/*22+ * display.h - OMAP2+ integration-specific DSS header33+ *44+ * Copyright (C) 2011 Texas Instruments, Inc.55+ *66+ * This program is free software; you can redistribute it and/or modify it77+ * under the terms of the GNU General Public License version 2 as published by88+ * the Free Software Foundation.99+ *1010+ * This program is distributed in the hope that it will be useful, but WITHOUT1111+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or1212+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for1313+ * more details.1414+ *1515+ * You should have received a copy of the GNU General Public License along with1616+ * this program. If not, see <http://www.gnu.org/licenses/>.1717+ */1818+1919+#ifndef __ARCH_ARM_MACH_OMAP2_DISPLAY_H2020+#define __ARCH_ARM_MACH_OMAP2_DISPLAY_H2121+2222+#include <linux/kernel.h>2323+2424+struct omap_dss_dispc_dev_attr {2525+ u8 manager_count;2626+ bool has_framedonetv_irq;2727+};2828+2929+#endif
···2424#include "powerdomain.h"2525#include "clockdomain.h"2626#include "pm.h"2727+#include "twl-common.h"27282829static struct omap_device_pm_latency *pm_lats;2930···227226228227static int __init omap2_common_pm_late_init(void)229228{230230- /* Init the OMAP TWL parameters */231231- omap3_twl_init();232232- omap4_twl_init();233233-234229 /* Init the voltage layer */230230+ omap_pmic_late_init();235231 omap_voltage_late_init();236232237233 /* Initialize the voltages */
+1-1
arch/arm/mach-omap2/smartreflex.c
···139139 sr_write_reg(sr_info, ERRCONFIG_V1, status);140140 } else if (sr_info->ip_type == SR_TYPE_V2) {141141 /* Read the status bits */142142- sr_read_reg(sr_info, IRQSTATUS);142142+ status = sr_read_reg(sr_info, IRQSTATUS);143143144144 /* Clear them by writing back */145145 sr_write_reg(sr_info, IRQSTATUS, status);
+11
arch/arm/mach-omap2/twl-common.c
···3030#include <plat/usb.h>31313232#include "twl-common.h"3333+#include "pm.h"33343435static struct i2c_board_info __initdata pmic_i2c_board_info = {3536 .addr = 0x48,···4746 pmic_i2c_board_info.platform_data = pmic_data;48474948 omap_register_i2c_bus(bus, clkrate, &pmic_i2c_board_info, 1);4949+}5050+5151+void __init omap_pmic_late_init(void)5252+{5353+ /* Init the OMAP TWL parameters (if PMIC has been registered) */5454+ if (!pmic_i2c_board_info.irq)5555+ return;5656+5757+ omap3_twl_init();5858+ omap4_twl_init();5059}51605261#if defined(CONFIG_ARCH_OMAP3)
···88 * published by the Free Software Foundation.99 */10101111-#include <linux/module.h>1111+#include <linux/export.h>1212#include <linux/interrupt.h>1313#include <linux/i2c.h>1414
+1-1
arch/arm/mm/cache-l2x0.c
···6161{6262 void __iomem *base = l2x0_base;63636464-#ifdef CONFIG_ARM_ERRATA_7539706464+#ifdef CONFIG_PL310_ERRATA_7539706565 /* write to an unmapped register */6666 writel_relaxed(0, base + L2X0_DUMMY_REG);6767#else
+10-1
arch/arm/mm/dma-mapping.c
···168168 pte_t *pte;169169 int i = 0;170170 unsigned long base = consistent_base;171171- unsigned long num_ptes = (CONSISTENT_END - base) >> PGDIR_SHIFT;171171+ unsigned long num_ptes = (CONSISTENT_END - base) >> PMD_SHIFT;172172173173 consistent_pte = kmalloc(num_ptes * sizeof(pte_t), GFP_KERNEL);174174 if (!consistent_pte) {···331331{332332 struct page *page;333333 void *addr;334334+335335+ /*336336+ * Following is a work-around (a.k.a. hack) to prevent pages337337+ * with __GFP_COMP being passed to split_page() which cannot338338+ * handle them. The real problem is that this flag probably339339+ * should be 0 on ARM as it is not supported on this340340+ * platform; see CONFIG_HUGETLBFS.341341+ */342342+ gfp &= ~(__GFP_COMP);334343335344 *handle = ~0;336345 size = PAGE_ALIGN(size);
+6-17
arch/arm/mm/mmap.c
···99#include <linux/io.h>1010#include <linux/personality.h>1111#include <linux/random.h>1212-#include <asm/cputype.h>1313-#include <asm/system.h>1212+#include <asm/cachetype.h>14131514#define COLOUR_ALIGN(addr,pgoff) \1615 ((((addr)+SHMLBA-1)&~(SHMLBA-1)) + \···3132 struct mm_struct *mm = current->mm;3233 struct vm_area_struct *vma;3334 unsigned long start_addr;3434-#if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_V6K)3535- unsigned int cache_type;3636- int do_align = 0, aliasing = 0;3535+ int do_align = 0;3636+ int aliasing = cache_is_vipt_aliasing();37373838 /*3939 * We only need to do colour alignment if either the I or D4040- * caches alias. This is indicated by bits 9 and 21 of the4141- * cache type register.4040+ * caches alias.4241 */4343- cache_type = read_cpuid_cachetype();4444- if (cache_type != read_cpuid_id()) {4545- aliasing = (cache_type | cache_type >> 12) & (1 << 11);4646- if (aliasing)4747- do_align = filp || flags & MAP_SHARED;4848- }4949-#else5050-#define do_align 05151-#define aliasing 05252-#endif4242+ if (aliasing)4343+ do_align = filp || (flags & MAP_SHARED);53445445 /*5546 * We enforce the MAP_FIXED case.
···1111 * the Free Software Foundation; either version 2 of the License.1212*/13131414-#include <linux/module.h>1414+#include <linux/export.h>1515#include <linux/kernel.h>1616#include <linux/platform_device.h>1717#include <linux/slab.h>
···11-/*22- * Copyright (C) 2006 Atmark Techno, Inc.33- *44- * This file is subject to the terms and conditions of the GNU General Public55- * License. See the file "COPYING" in the main directory of this archive66- * for more details.77- */88-99-#ifndef _ASM_MICROBLAZE_NAMEI_H1010-#define _ASM_MICROBLAZE_NAMEI_H1111-1212-#ifdef __KERNEL__1313-1414-/* This dummy routine maybe changed to something useful1515- * for /usr/gnemul/ emulation stuff.1616- * Look at asm-sparc/namei.h for details.1717- */1818-#define __emul_prefix() NULL1919-2020-#endif /* __KERNEL__ */2121-2222-#endif /* _ASM_MICROBLAZE_NAMEI_H */
+13-4
arch/powerpc/boot/dts/p1023rds.dts
···449449 interrupt-parent = <&mpic>;450450 interrupts = <16 2>;451451 interrupt-map-mask = <0xf800 0 0 7>;452452+ /* IRQ[0:3] are pulled up on board, set to active-low */452453 interrupt-map = <453454 /* IDSEL 0x0 */454455 0000 0 0 1 &mpic 0 1···489488 interrupt-parent = <&mpic>;490489 interrupts = <16 2>;491490 interrupt-map-mask = <0xf800 0 0 7>;491491+ /*492492+ * IRQ[4:6] only for PCIe, set to active-high,493493+ * IRQ[7] is pulled up on board, set to active-low494494+ */492495 interrupt-map = <493496 /* IDSEL 0x0 */494494- 0000 0 0 1 &mpic 4 1495495- 0000 0 0 2 &mpic 5 1496496- 0000 0 0 3 &mpic 6 1497497+ 0000 0 0 1 &mpic 4 2498498+ 0000 0 0 2 &mpic 5 2499499+ 0000 0 0 3 &mpic 6 2497500 0000 0 0 4 &mpic 7 1498501 >;499502 ranges = <0x2000000 0x0 0xa0000000···532527 interrupt-parent = <&mpic>;533528 interrupts = <16 2>;534529 interrupt-map-mask = <0xf800 0 0 7>;530530+ /*531531+ * IRQ[8:10] are pulled up on board, set to active-low532532+ * IRQ[11] only for PCIe, set to active-high,533533+ */535534 interrupt-map = <536535 /* IDSEL 0x0 */537536 0000 0 0 1 &mpic 8 1538537 0000 0 0 2 &mpic 9 1539538 0000 0 0 3 &mpic 10 1540540- 0000 0 0 4 &mpic 11 1539539+ 0000 0 0 4 &mpic 11 2541540 >;542541 ranges = <0x2000000 0x0 0x80000000543542 0x2000000 0x0 0x80000000
···216216 /* Errata QE_General4, which affects some MPC832x and MPC836x SOCs, says217217 that the BRG divisor must be even if you're not using divide-by-16218218 mode. */219219- if (!div16 && (divisor & 1))219219+ if (!div16 && (divisor & 1) && (divisor > 3))220220 divisor++;221221222222 tempval = ((divisor - 1) << QE_BRGC_DIVISOR_SHIFT) |
···546546 * Translate OpenFirmware node properties into platform_data547547 * WARNING: This is DEPRECATED and will be removed eventually!548548 */549549-void549549+static void550550pca953x_get_alt_pdata(struct i2c_client *client, int *gpio_base, int *invert)551551{552552 struct device_node *node;···574574 *invert = *val;575575}576576#else577577-void577577+static void578578pca953x_get_alt_pdata(struct i2c_client *client, int *gpio_base, int *invert)579579{580580 *gpio_base = -1;
+4
drivers/gpu/drm/drm_crtc.c
···18731873 }1874187418751875 if (num_clips && clips_ptr) {18761876+ if (num_clips < 0 || num_clips > DRM_MODE_FB_DIRTY_MAX_CLIPS) {18771877+ ret = -EINVAL;18781878+ goto out_err1;18791879+ }18761880 clips = kzalloc(num_clips * sizeof(*clips), GFP_KERNEL);18771881 if (!clips) {18781882 ret = -ENOMEM;
+32-30
drivers/gpu/drm/exynos/exynos_drm_buf.c
···2727#include "drm.h"28282929#include "exynos_drm_drv.h"3030+#include "exynos_drm_gem.h"3031#include "exynos_drm_buf.h"31323232-static DEFINE_MUTEX(exynos_drm_buf_lock);3333-3433static int lowlevel_buffer_allocate(struct drm_device *dev,3535- struct exynos_drm_buf_entry *entry)3434+ struct exynos_drm_gem_buf *buffer)3635{3736 DRM_DEBUG_KMS("%s\n", __FILE__);38373939- entry->vaddr = dma_alloc_writecombine(dev->dev, entry->size,4040- (dma_addr_t *)&entry->paddr, GFP_KERNEL);4141- if (!entry->paddr) {3838+ buffer->kvaddr = dma_alloc_writecombine(dev->dev, buffer->size,3939+ &buffer->dma_addr, GFP_KERNEL);4040+ if (!buffer->kvaddr) {4241 DRM_ERROR("failed to allocate buffer.\n");4342 return -ENOMEM;4443 }45444646- DRM_DEBUG_KMS("allocated : vaddr(0x%x), paddr(0x%x), size(0x%x)\n",4747- (unsigned int)entry->vaddr, entry->paddr, entry->size);4545+ DRM_DEBUG_KMS("vaddr(0x%lx), dma_addr(0x%lx), size(0x%lx)\n",4646+ (unsigned long)buffer->kvaddr,4747+ (unsigned long)buffer->dma_addr,4848+ buffer->size);48494950 return 0;5051}51525253static void lowlevel_buffer_deallocate(struct drm_device *dev,5353- struct exynos_drm_buf_entry *entry)5454+ struct exynos_drm_gem_buf *buffer)5455{5556 DRM_DEBUG_KMS("%s.\n", __FILE__);56575757- if (entry->paddr && entry->vaddr && entry->size)5858- dma_free_writecombine(dev->dev, entry->size, entry->vaddr,5959- entry->paddr);5858+ if (buffer->dma_addr && buffer->size)5959+ dma_free_writecombine(dev->dev, buffer->size, buffer->kvaddr,6060+ (dma_addr_t)buffer->dma_addr);6061 else6161- DRM_DEBUG_KMS("entry data is null.\n");6262+ DRM_DEBUG_KMS("buffer data are invalid.\n");6263}63646464-struct exynos_drm_buf_entry *exynos_drm_buf_create(struct drm_device *dev,6565+struct exynos_drm_gem_buf *exynos_drm_buf_create(struct drm_device *dev,6566 unsigned int size)6667{6767- struct exynos_drm_buf_entry *entry;6868+ struct exynos_drm_gem_buf *buffer;68696970 DRM_DEBUG_KMS("%s.\n", __FILE__);7171+ DRM_DEBUG_KMS("desired size = 0x%x\n", size);70727171- 
entry = kzalloc(sizeof(*entry), GFP_KERNEL);7272- if (!entry) {7373- DRM_ERROR("failed to allocate exynos_drm_buf_entry.\n");7373+ buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);7474+ if (!buffer) {7575+ DRM_ERROR("failed to allocate exynos_drm_gem_buf.\n");7476 return ERR_PTR(-ENOMEM);7577 }76787777- entry->size = size;7979+ buffer->size = size;78807981 /*8082 * allocate memory region with size and set the memory information8181- * to vaddr and paddr of a entry object.8383+ * to vaddr and dma_addr of a buffer object.8284 */8383- if (lowlevel_buffer_allocate(dev, entry) < 0) {8484- kfree(entry);8585- entry = NULL;8585+ if (lowlevel_buffer_allocate(dev, buffer) < 0) {8686+ kfree(buffer);8787+ buffer = NULL;8688 return ERR_PTR(-ENOMEM);8789 }88908989- return entry;9191+ return buffer;9092}91939294void exynos_drm_buf_destroy(struct drm_device *dev,9393- struct exynos_drm_buf_entry *entry)9595+ struct exynos_drm_gem_buf *buffer)9496{9597 DRM_DEBUG_KMS("%s.\n", __FILE__);96989797- if (!entry) {9898- DRM_DEBUG_KMS("entry is null.\n");9999+ if (!buffer) {100100+ DRM_DEBUG_KMS("buffer is null.\n");99101 return;100102 }101103102102- lowlevel_buffer_deallocate(dev, entry);104104+ lowlevel_buffer_deallocate(dev, buffer);103105104104- kfree(entry);105105- entry = NULL;106106+ kfree(buffer);107107+ buffer = NULL;106108}107109108110MODULE_AUTHOR("Inki Dae <inki.dae@samsung.com>");
drivers/gpu/drm/exynos/exynos_drm_buf.h (+4, -17)
···
 #ifndef _EXYNOS_DRM_BUF_H_
 #define _EXYNOS_DRM_BUF_H_
 
-/*
- * exynos drm buffer entry structure.
- *
- * @paddr: physical address of allocated memory.
- * @vaddr: kernel virtual address of allocated memory.
- * @size: size of allocated memory.
- */
-struct exynos_drm_buf_entry {
-	dma_addr_t paddr;
-	void __iomem *vaddr;
-	unsigned int size;
-};
-
 /* allocate physical memory. */
-struct exynos_drm_buf_entry *exynos_drm_buf_create(struct drm_device *dev,
+struct exynos_drm_gem_buf *exynos_drm_buf_create(struct drm_device *dev,
 		unsigned int size);
 
-/* get physical memory information of a drm framebuffer. */
-struct exynos_drm_buf_entry *exynos_drm_fb_get_buf(struct drm_framebuffer *fb);
+/* get memory information of a drm framebuffer. */
+struct exynos_drm_gem_buf *exynos_drm_fb_get_buf(struct drm_framebuffer *fb);
 
 /* remove allocated physical memory. */
 void exynos_drm_buf_destroy(struct drm_device *dev,
-		struct exynos_drm_buf_entry *entry);
+		struct exynos_drm_gem_buf *buffer);
 
 #endif
drivers/gpu/drm/exynos/exynos_drm_connector.c (+56, -22)
···
 
 struct exynos_drm_connector {
 	struct drm_connector	drm_connector;
+	uint32_t		encoder_id;
+	struct exynos_drm_manager *manager;
 };
 
 /* convert exynos_video_timings to drm_display_mode */
···
 	DRM_DEBUG_KMS("%s\n", __FILE__);
 
 	mode->clock = timing->pixclock / 1000;
+	mode->vrefresh = timing->refresh;
 
 	mode->hdisplay = timing->xres;
 	mode->hsync_start = mode->hdisplay + timing->left_margin;
···
 	mode->vsync_start = mode->vdisplay + timing->upper_margin;
 	mode->vsync_end = mode->vsync_start + timing->vsync_len;
 	mode->vtotal = mode->vsync_end + timing->lower_margin;
+
+	if (timing->vmode & FB_VMODE_INTERLACED)
+		mode->flags |= DRM_MODE_FLAG_INTERLACE;
+
+	if (timing->vmode & FB_VMODE_DOUBLE)
+		mode->flags |= DRM_MODE_FLAG_DBLSCAN;
 }
 
 /* convert drm_display_mode to exynos_video_timings */
···
 	memset(timing, 0, sizeof(*timing));
 
 	timing->pixclock = mode->clock * 1000;
-	timing->refresh = mode->vrefresh;
+	timing->refresh = drm_mode_vrefresh(mode);
 
 	timing->xres = mode->hdisplay;
 	timing->left_margin = mode->hsync_start - mode->hdisplay;
···
 
 static int exynos_drm_connector_get_modes(struct drm_connector *connector)
 {
-	struct exynos_drm_manager *manager =
-		exynos_drm_get_manager(connector->encoder);
-	struct exynos_drm_display *display = manager->display;
+	struct exynos_drm_connector *exynos_connector =
+		to_exynos_connector(connector);
+	struct exynos_drm_manager *manager = exynos_connector->manager;
+	struct exynos_drm_display_ops *display_ops = manager->display_ops;
 	unsigned int count;
 
 	DRM_DEBUG_KMS("%s\n", __FILE__);
 
-	if (!display) {
-		DRM_DEBUG_KMS("display is null.\n");
+	if (!display_ops) {
+		DRM_DEBUG_KMS("display_ops is null.\n");
 		return 0;
 	}
 
···
 	 * P.S. in case of lcd panel, count is always 1 if success
 	 * because lcd panel has only one mode.
 	 */
-	if (display->get_edid) {
+	if (display_ops->get_edid) {
 		int ret;
 		void *edid;
 
···
 			return 0;
 		}
 
-		ret = display->get_edid(manager->dev, connector,
+		ret = display_ops->get_edid(manager->dev, connector,
 				edid, MAX_EDID);
 		if (ret < 0) {
 			DRM_ERROR("failed to get edid data.\n");
···
 		struct drm_display_mode *mode = drm_mode_create(connector->dev);
 		struct fb_videomode *timing;
 
-		if (display->get_timing)
-			timing = display->get_timing(manager->dev);
+		if (display_ops->get_timing)
+			timing = display_ops->get_timing(manager->dev);
 		else {
 			drm_mode_destroy(connector->dev, mode);
 			return 0;
···
 static int exynos_drm_connector_mode_valid(struct drm_connector *connector,
 					    struct drm_display_mode *mode)
 {
-	struct exynos_drm_manager *manager =
-		exynos_drm_get_manager(connector->encoder);
-	struct exynos_drm_display *display = manager->display;
+	struct exynos_drm_connector *exynos_connector =
+		to_exynos_connector(connector);
+	struct exynos_drm_manager *manager = exynos_connector->manager;
+	struct exynos_drm_display_ops *display_ops = manager->display_ops;
 	struct fb_videomode timing;
 	int ret = MODE_BAD;
 
···
 
 	convert_to_video_timing(&timing, mode);
 
-	if (display && display->check_timing)
-		if (!display->check_timing(manager->dev, (void *)&timing))
+	if (display_ops && display_ops->check_timing)
+		if (!display_ops->check_timing(manager->dev, (void *)&timing))
 			ret = MODE_OK;
 
 	return ret;
···
 
 struct drm_encoder *exynos_drm_best_encoder(struct drm_connector *connector)
 {
+	struct drm_device *dev = connector->dev;
+	struct exynos_drm_connector *exynos_connector =
+		to_exynos_connector(connector);
+	struct drm_mode_object *obj;
+	struct drm_encoder *encoder;
+
 	DRM_DEBUG_KMS("%s\n", __FILE__);
 
-	return connector->encoder;
+	obj = drm_mode_object_find(dev, exynos_connector->encoder_id,
+				   DRM_MODE_OBJECT_ENCODER);
+	if (!obj) {
+		DRM_DEBUG_KMS("Unknown ENCODER ID %d\n",
+				exynos_connector->encoder_id);
+		return NULL;
+	}
+
+	encoder = obj_to_encoder(obj);
+
+	return encoder;
 }
 
 static struct drm_connector_helper_funcs exynos_connector_helper_funcs = {
···
 static enum drm_connector_status
 exynos_drm_connector_detect(struct drm_connector *connector, bool force)
 {
-	struct exynos_drm_manager *manager =
-		exynos_drm_get_manager(connector->encoder);
-	struct exynos_drm_display *display = manager->display;
+	struct exynos_drm_connector *exynos_connector =
+		to_exynos_connector(connector);
+	struct exynos_drm_manager *manager = exynos_connector->manager;
+	struct exynos_drm_display_ops *display_ops =
+					manager->display_ops;
 	enum drm_connector_status status = connector_status_disconnected;
 
 	DRM_DEBUG_KMS("%s\n", __FILE__);
 
-	if (display && display->is_connected) {
-		if (display->is_connected(manager->dev))
+	if (display_ops && display_ops->is_connected) {
+		if (display_ops->is_connected(manager->dev))
 			status = connector_status_connected;
 		else
 			status = connector_status_disconnected;
···
 
 	connector = &exynos_connector->drm_connector;
 
-	switch (manager->display->type) {
+	switch (manager->display_ops->type) {
 	case EXYNOS_DISPLAY_TYPE_HDMI:
 		type = DRM_MODE_CONNECTOR_HDMIA;
+		connector->interlace_allowed = true;
+		connector->polled = DRM_CONNECTOR_POLL_HPD;
 		break;
 	default:
 		type = DRM_MODE_CONNECTOR_Unknown;
···
 	if (err)
 		goto err_connector;
 
+	exynos_connector->encoder_id = encoder->base.id;
+	exynos_connector->manager = manager;
 	connector->encoder = encoder;
+
 	err = drm_mode_connector_attach_encoder(connector, encoder);
 	if (err) {
 		DRM_ERROR("failed to attach a connector to a encoder\n");
drivers/gpu/drm/exynos/exynos_drm_crtc.c (+39, -37)
···
 #include "drmP.h"
 #include "drm_crtc_helper.h"
 
+#include "exynos_drm_crtc.h"
 #include "exynos_drm_drv.h"
 #include "exynos_drm_fb.h"
 #include "exynos_drm_encoder.h"
+#include "exynos_drm_gem.h"
 #include "exynos_drm_buf.h"
 
 #define to_exynos_crtc(x)	container_of(x, struct exynos_drm_crtc,\
 				drm_crtc)
-
-/*
- * Exynos specific crtc postion structure.
- *
- * @fb_x: offset x on a framebuffer to be displyed
- *	- the unit is screen coordinates.
- * @fb_y: offset y on a framebuffer to be displayed
- *	- the unit is screen coordinates.
- * @crtc_x: offset x on hardware screen.
- * @crtc_y: offset y on hardware screen.
- * @crtc_w: width of hardware screen.
- * @crtc_h: height of hardware screen.
- */
-struct exynos_drm_crtc_pos {
-	unsigned int fb_x;
-	unsigned int fb_y;
-	unsigned int crtc_x;
-	unsigned int crtc_y;
-	unsigned int crtc_w;
-	unsigned int crtc_h;
-};
 
 /*
  * Exynos specific crtc structure.
···
 
 	exynos_drm_fn_encoder(crtc, overlay,
 			exynos_drm_encoder_crtc_mode_set);
-	exynos_drm_fn_encoder(crtc, NULL, exynos_drm_encoder_crtc_commit);
+	exynos_drm_fn_encoder(crtc, &exynos_crtc->pipe,
+			exynos_drm_encoder_crtc_commit);
 }
 
-static int exynos_drm_overlay_update(struct exynos_drm_overlay *overlay,
-				      struct drm_framebuffer *fb,
-				      struct drm_display_mode *mode,
-				      struct exynos_drm_crtc_pos *pos)
+int exynos_drm_overlay_update(struct exynos_drm_overlay *overlay,
+			      struct drm_framebuffer *fb,
+			      struct drm_display_mode *mode,
+			      struct exynos_drm_crtc_pos *pos)
 {
-	struct exynos_drm_buf_entry *entry;
+	struct exynos_drm_gem_buf *buffer;
 	unsigned int actual_w;
 	unsigned int actual_h;
 
-	entry = exynos_drm_fb_get_buf(fb);
-	if (!entry) {
-		DRM_LOG_KMS("entry is null.\n");
+	buffer = exynos_drm_fb_get_buf(fb);
+	if (!buffer) {
+		DRM_LOG_KMS("buffer is null.\n");
 		return -EFAULT;
 	}
 
-	overlay->paddr = entry->paddr;
-	overlay->vaddr = entry->vaddr;
+	overlay->dma_addr = buffer->dma_addr;
+	overlay->vaddr = buffer->kvaddr;
 
-	DRM_DEBUG_KMS("vaddr = 0x%lx, paddr = 0x%lx\n",
+	DRM_DEBUG_KMS("vaddr = 0x%lx, dma_addr = 0x%lx\n",
 			(unsigned long)overlay->vaddr,
-			(unsigned long)overlay->paddr);
+			(unsigned long)overlay->dma_addr);
 
 	actual_w = min((mode->hdisplay - pos->crtc_x), pos->crtc_w);
 	actual_h = min((mode->vdisplay - pos->crtc_y), pos->crtc_h);
···
 
 static void exynos_drm_crtc_dpms(struct drm_crtc *crtc, int mode)
 {
-	DRM_DEBUG_KMS("%s\n", __FILE__);
+	struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc);
 
-	/* TODO */
+	DRM_DEBUG_KMS("crtc[%d] mode[%d]\n", crtc->base.id, mode);
+
+	switch (mode) {
+	case DRM_MODE_DPMS_ON:
+		exynos_drm_fn_encoder(crtc, &exynos_crtc->pipe,
+				exynos_drm_encoder_crtc_commit);
+		break;
+	case DRM_MODE_DPMS_STANDBY:
+	case DRM_MODE_DPMS_SUSPEND:
+	case DRM_MODE_DPMS_OFF:
+		/* TODO */
+		exynos_drm_fn_encoder(crtc, NULL,
+				exynos_drm_encoder_crtc_disable);
+		break;
+	default:
+		DRM_DEBUG_KMS("unspecified mode %d\n", mode);
+		break;
+	}
 }
 
 static void exynos_drm_crtc_prepare(struct drm_crtc *crtc)
···
 
 static void exynos_drm_crtc_commit(struct drm_crtc *crtc)
 {
+	struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc);
+
 	DRM_DEBUG_KMS("%s\n", __FILE__);
 
-	/* drm framework doesn't check NULL. */
+	exynos_drm_fn_encoder(crtc, &exynos_crtc->pipe,
+			exynos_drm_encoder_crtc_commit);
 }
 
 static bool
drivers/gpu/drm/exynos/exynos_drm_crtc.h (+25)
···
 int exynos_drm_crtc_enable_vblank(struct drm_device *dev, int crtc);
 void exynos_drm_crtc_disable_vblank(struct drm_device *dev, int crtc);
 
+/*
+ * Exynos specific crtc postion structure.
+ *
+ * @fb_x: offset x on a framebuffer to be displyed
+ *	- the unit is screen coordinates.
+ * @fb_y: offset y on a framebuffer to be displayed
+ *	- the unit is screen coordinates.
+ * @crtc_x: offset x on hardware screen.
+ * @crtc_y: offset y on hardware screen.
+ * @crtc_w: width of hardware screen.
+ * @crtc_h: height of hardware screen.
+ */
+struct exynos_drm_crtc_pos {
+	unsigned int fb_x;
+	unsigned int fb_y;
+	unsigned int crtc_x;
+	unsigned int crtc_y;
+	unsigned int crtc_w;
+	unsigned int crtc_h;
+};
+
+int exynos_drm_overlay_update(struct exynos_drm_overlay *overlay,
+			      struct drm_framebuffer *fb,
+			      struct drm_display_mode *mode,
+			      struct exynos_drm_crtc_pos *pos);
 #endif
drivers/gpu/drm/exynos/exynos_drm_drv.h
···
 #ifndef _EXYNOS_DRM_DRV_H_
 #define _EXYNOS_DRM_DRV_H_
 
+#include <linux/module.h>
 #include "drm.h"
 
 #define MAX_CRTC	2
···
  * @scan_flag: interlace or progressive way.
  *	(it could be DRM_MODE_FLAG_*)
  * @bpp: pixel size.(in bit)
- * @paddr: bus(accessed by dma) physical memory address to this overlay
- *	and this is physically continuous.
+ * @dma_addr: bus(accessed by dma) address to the memory region allocated
+ *	for a overlay.
  * @vaddr: virtual memory addresss to this overlay.
  * @default_win: a window to be enabled.
  * @color_key: color key on or off.
···
 	unsigned int scan_flag;
 	unsigned int bpp;
 	unsigned int pitch;
-	dma_addr_t paddr;
+	dma_addr_t dma_addr;
 	void __iomem *vaddr;
 
 	bool default_win;
···
  * @check_timing: check if timing is valid or not.
  * @power_on: display device on or off.
  */
-struct exynos_drm_display {
+struct exynos_drm_display_ops {
 	enum exynos_drm_output_type type;
 	bool (*is_connected)(struct device *dev);
 	int (*get_edid)(struct device *dev, struct drm_connector *connector,
···
  * @mode_set: convert drm_display_mode to hw specific display mode and
  *	would be called by encoder->mode_set().
  * @commit: set current hw specific display mode to hw.
+ * @disable: disable hardware specific display mode.
  * @enable_vblank: specific driver callback for enabling vblank interrupt.
  * @disable_vblank: specific driver callback for disabling vblank interrupt.
  */
 struct exynos_drm_manager_ops {
 	void (*mode_set)(struct device *subdrv_dev, void *mode);
 	void (*commit)(struct device *subdrv_dev);
+	void (*disable)(struct device *subdrv_dev);
 	int (*enable_vblank)(struct device *subdrv_dev);
 	void (*disable_vblank)(struct device *subdrv_dev);
 };
···
 	int pipe;
 	struct exynos_drm_manager_ops *ops;
 	struct exynos_drm_overlay_ops *overlay_ops;
-	struct exynos_drm_display *display;
+	struct exynos_drm_display_ops *display_ops;
 };
 
 /*
drivers/gpu/drm/exynos/exynos_drm_encoder.c (+72, -11)
···
 	struct drm_device *dev = encoder->dev;
 	struct drm_connector *connector;
 	struct exynos_drm_manager *manager = exynos_drm_get_manager(encoder);
+	struct exynos_drm_manager_ops *manager_ops = manager->ops;
 
 	DRM_DEBUG_KMS("%s, encoder dpms: %d\n", __FILE__, mode);
 
+	switch (mode) {
+	case DRM_MODE_DPMS_ON:
+		if (manager_ops && manager_ops->commit)
+			manager_ops->commit(manager->dev);
+		break;
+	case DRM_MODE_DPMS_STANDBY:
+	case DRM_MODE_DPMS_SUSPEND:
+	case DRM_MODE_DPMS_OFF:
+		/* TODO */
+		if (manager_ops && manager_ops->disable)
+			manager_ops->disable(manager->dev);
+		break;
+	default:
+		DRM_ERROR("unspecified mode %d\n", mode);
+		break;
+	}
+
 	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
 		if (connector->encoder == encoder) {
-			struct exynos_drm_display *display = manager->display;
+			struct exynos_drm_display_ops *display_ops =
+							manager->display_ops;
 
-			if (display && display->power_on)
-				display->power_on(manager->dev, mode);
+			DRM_DEBUG_KMS("connector[%d] dpms[%d]\n",
+					connector->base.id, mode);
+			if (display_ops && display_ops->power_on)
+				display_ops->power_on(manager->dev, mode);
 		}
 	}
 }
···
 {
 	struct exynos_drm_manager *manager = exynos_drm_get_manager(encoder);
 	struct exynos_drm_manager_ops *manager_ops = manager->ops;
-	struct exynos_drm_overlay_ops *overlay_ops = manager->overlay_ops;
 
 	DRM_DEBUG_KMS("%s\n", __FILE__);
 
 	if (manager_ops && manager_ops->commit)
 		manager_ops->commit(manager->dev);
-
-	if (overlay_ops && overlay_ops->commit)
-		overlay_ops->commit(manager->dev);
 }
 
 static struct drm_crtc *
···
 {
 	struct drm_device *dev = crtc->dev;
 	struct drm_encoder *encoder;
+	struct exynos_drm_private *private = dev->dev_private;
+	struct exynos_drm_manager *manager;
 
 	list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
-		if (encoder->crtc != crtc)
-			continue;
+		/*
+		 * if crtc is detached from encoder, check pipe,
+		 * otherwise check crtc attached to encoder
+		 */
+		if (!encoder->crtc) {
+			manager = to_exynos_encoder(encoder)->manager;
+			if (manager->pipe < 0 ||
+					private->crtc[manager->pipe] != crtc)
+				continue;
+		} else {
+			if (encoder->crtc != crtc)
+				continue;
+		}
 
 		fn(encoder, data);
 	}
···
 	struct exynos_drm_manager *manager =
 		to_exynos_encoder(encoder)->manager;
 	struct exynos_drm_overlay_ops *overlay_ops = manager->overlay_ops;
+	int crtc = *(int *)data;
 
-	overlay_ops->commit(manager->dev);
+	DRM_DEBUG_KMS("%s\n", __FILE__);
+
+	/*
+	 * when crtc is detached from encoder, this pipe is used
+	 * to select manager operation
+	 */
+	manager->pipe = crtc;
+
+	if (overlay_ops && overlay_ops->commit)
+		overlay_ops->commit(manager->dev);
 }
 
 void exynos_drm_encoder_crtc_mode_set(struct drm_encoder *encoder, void *data)
···
 	struct exynos_drm_overlay_ops *overlay_ops = manager->overlay_ops;
 	struct exynos_drm_overlay *overlay = data;
 
-	overlay_ops->mode_set(manager->dev, overlay);
+	if (overlay_ops && overlay_ops->mode_set)
+		overlay_ops->mode_set(manager->dev, overlay);
+}
+
+void exynos_drm_encoder_crtc_disable(struct drm_encoder *encoder, void *data)
+{
+	struct exynos_drm_manager *manager =
+		to_exynos_encoder(encoder)->manager;
+	struct exynos_drm_overlay_ops *overlay_ops = manager->overlay_ops;
+
+	DRM_DEBUG_KMS("\n");
+
+	if (overlay_ops && overlay_ops->disable)
+		overlay_ops->disable(manager->dev);
+
+	/*
+	 * crtc is already detached from encoder and last
+	 * function for detaching is properly done, so
+	 * clear pipe from manager to prevent repeated call
+	 */
+	if (!encoder->crtc)
+		manager->pipe = -1;
 }
 
 MODULE_AUTHOR("Inki Dae <inki.dae@samsung.com>");
drivers/gpu/drm/exynos/exynos_drm_fb.c
···
 #include "drmP.h"
 #include "drm_crtc.h"
 #include "drm_crtc_helper.h"
+#include "drm_fb_helper.h"
 
+#include "exynos_drm_drv.h"
 #include "exynos_drm_fb.h"
 #include "exynos_drm_buf.h"
 #include "exynos_drm_gem.h"
···
  *
  * @fb: drm framebuffer obejct.
  * @exynos_gem_obj: exynos specific gem object containing a gem object.
- * @entry: pointer to exynos drm buffer entry object.
- *	- containing only the information to physically continuous memory
- *	region allocated at default framebuffer creation.
+ * @buffer: pointer to exynos_drm_gem_buffer object.
+ *	- contain the memory information to memory region allocated
+ *	at default framebuffer creation.
  */
 struct exynos_drm_fb {
 	struct drm_framebuffer		fb;
 	struct exynos_drm_gem_obj	*exynos_gem_obj;
-	struct exynos_drm_buf_entry	*entry;
+	struct exynos_drm_gem_buf	*buffer;
 };
 
 static void exynos_drm_fb_destroy(struct drm_framebuffer *fb)
···
 	 * default framebuffer has no gem object so
 	 * a buffer of the default framebuffer should be released at here.
 	 */
-	if (!exynos_fb->exynos_gem_obj && exynos_fb->entry)
-		exynos_drm_buf_destroy(fb->dev, exynos_fb->entry);
+	if (!exynos_fb->exynos_gem_obj && exynos_fb->buffer)
+		exynos_drm_buf_destroy(fb->dev, exynos_fb->buffer);
 
 	kfree(exynos_fb);
 	exynos_fb = NULL;
···
 	 */
 	if (!mode_cmd->handle) {
 		if (!file_priv) {
-			struct exynos_drm_buf_entry *entry;
+			struct exynos_drm_gem_buf *buffer;
 
 			/*
 			 * in case that file_priv is NULL, it allocates
 			 * only buffer and this buffer would be used
 			 * for default framebuffer.
 			 */
-			entry = exynos_drm_buf_create(dev, size);
-			if (IS_ERR(entry)) {
-				ret = PTR_ERR(entry);
+			buffer = exynos_drm_buf_create(dev, size);
+			if (IS_ERR(buffer)) {
+				ret = PTR_ERR(buffer);
 				goto err_buffer;
 			}
 
-			exynos_fb->entry = entry;
+			exynos_fb->buffer = buffer;
 
-			DRM_LOG_KMS("default fb: paddr = 0x%lx, size = 0x%x\n",
-					(unsigned long)entry->paddr, size);
+			DRM_LOG_KMS("default: dma_addr = 0x%lx, size = 0x%x\n",
+					(unsigned long)buffer->dma_addr, size);
 
 			goto out;
 		} else {
-			exynos_gem_obj = exynos_drm_gem_create(file_priv, dev,
-							size,
-							&mode_cmd->handle);
+			exynos_gem_obj = exynos_drm_gem_create(dev, file_priv,
+							&mode_cmd->handle,
+							size);
 			if (IS_ERR(exynos_gem_obj)) {
 				ret = PTR_ERR(exynos_gem_obj);
 				goto err_buffer;
···
 	 * so that default framebuffer has no its own gem object,
 	 * only its own buffer object.
 	 */
-	exynos_fb->entry = exynos_gem_obj->entry;
+	exynos_fb->buffer = exynos_gem_obj->buffer;
 
-	DRM_LOG_KMS("paddr = 0x%lx, size = 0x%x, gem object = 0x%x\n",
-			(unsigned long)exynos_fb->entry->paddr, size,
+	DRM_LOG_KMS("dma_addr = 0x%lx, size = 0x%x, gem object = 0x%x\n",
+			(unsigned long)exynos_fb->buffer->dma_addr, size,
 			(unsigned int)&exynos_gem_obj->base);
 
 out:
···
 	return exynos_drm_fb_init(file_priv, dev, mode_cmd);
 }
 
-struct exynos_drm_buf_entry *exynos_drm_fb_get_buf(struct drm_framebuffer *fb)
+struct exynos_drm_gem_buf *exynos_drm_fb_get_buf(struct drm_framebuffer *fb)
 {
 	struct exynos_drm_fb *exynos_fb = to_exynos_fb(fb);
-	struct exynos_drm_buf_entry *entry;
+	struct exynos_drm_gem_buf *buffer;
 
 	DRM_DEBUG_KMS("%s\n", __FILE__);
 
-	entry = exynos_fb->entry;
-	if (!entry)
+	buffer = exynos_fb->buffer;
+	if (!buffer)
 		return NULL;
 
-	DRM_DEBUG_KMS("vaddr = 0x%lx, paddr = 0x%lx\n",
-			(unsigned long)entry->vaddr,
-			(unsigned long)entry->paddr);
+	DRM_DEBUG_KMS("vaddr = 0x%lx, dma_addr = 0x%lx\n",
+			(unsigned long)buffer->kvaddr,
+			(unsigned long)buffer->dma_addr);
 
-	return entry;
+	return buffer;
+}
+
+static void exynos_drm_output_poll_changed(struct drm_device *dev)
+{
+	struct exynos_drm_private *private = dev->dev_private;
+	struct drm_fb_helper *fb_helper = private->fb_helper;
+
+	if (fb_helper)
+		drm_fb_helper_hotplug_event(fb_helper);
 }
 
 static struct drm_mode_config_funcs exynos_drm_mode_config_funcs = {
 	.fb_create = exynos_drm_fb_create,
+	.output_poll_changed = exynos_drm_output_poll_changed,
 };
 
 void exynos_drm_mode_config_init(struct drm_device *dev)
drivers/gpu/drm/exynos/exynos_drm_fbdev.c (+28, -16)
···
 
 #include "exynos_drm_drv.h"
 #include "exynos_drm_fb.h"
+#include "exynos_drm_gem.h"
 #include "exynos_drm_buf.h"
 
 #define MAX_CONNECTOR		4
···
 };
 
 static int exynos_drm_fbdev_update(struct drm_fb_helper *helper,
-				     struct drm_framebuffer *fb,
-				     unsigned int fb_width,
-				     unsigned int fb_height)
+				     struct drm_framebuffer *fb)
 {
 	struct fb_info *fbi = helper->fbdev;
 	struct drm_device *dev = helper->dev;
 	struct exynos_drm_fbdev *exynos_fb = to_exynos_fbdev(helper);
-	struct exynos_drm_buf_entry *entry;
-	unsigned int size = fb_width * fb_height * (fb->bits_per_pixel >> 3);
+	struct exynos_drm_gem_buf *buffer;
+	unsigned int size = fb->width * fb->height * (fb->bits_per_pixel >> 3);
 	unsigned long offset;
 
 	DRM_DEBUG_KMS("%s\n", __FILE__);
···
 	exynos_fb->fb = fb;
 
 	drm_fb_helper_fill_fix(fbi, fb->pitch, fb->depth);
-	drm_fb_helper_fill_var(fbi, helper, fb_width, fb_height);
+	drm_fb_helper_fill_var(fbi, helper, fb->width, fb->height);
 
-	entry = exynos_drm_fb_get_buf(fb);
-	if (!entry) {
-		DRM_LOG_KMS("entry is null.\n");
+	buffer = exynos_drm_fb_get_buf(fb);
+	if (!buffer) {
+		DRM_LOG_KMS("buffer is null.\n");
 		return -EFAULT;
 	}
 
 	offset = fbi->var.xoffset * (fb->bits_per_pixel >> 3);
 	offset += fbi->var.yoffset * fb->pitch;
 
-	dev->mode_config.fb_base = entry->paddr;
-	fbi->screen_base = entry->vaddr + offset;
-	fbi->fix.smem_start = entry->paddr + offset;
+	dev->mode_config.fb_base = (resource_size_t)buffer->dma_addr;
+	fbi->screen_base = buffer->kvaddr + offset;
+	fbi->fix.smem_start = (unsigned long)(buffer->dma_addr + offset);
 	fbi->screen_size = size;
 	fbi->fix.smem_len = size;
 
···
 		goto out;
 	}
 
-	ret = exynos_drm_fbdev_update(helper, helper->fb, sizes->fb_width,
-			sizes->fb_height);
+	ret = exynos_drm_fbdev_update(helper, helper->fb);
 	if (ret < 0)
 		fb_dealloc_cmap(&fbi->cmap);
 
···
 	}
 
 	helper->fb = exynos_fbdev->fb;
-	return exynos_drm_fbdev_update(helper, helper->fb, sizes->fb_width,
-			sizes->fb_height);
+	return exynos_drm_fbdev_update(helper, helper->fb);
 }
 
 static int exynos_drm_fbdev_probe(struct drm_fb_helper *helper,
···
 	fb_helper = private->fb_helper;
 
 	if (fb_helper) {
+		struct list_head temp_list;
+
+		INIT_LIST_HEAD(&temp_list);
+
+		/*
+		 * fb_helper is reintialized but kernel fb is reused
+		 * so kernel_fb_list need to be backuped and restored
+		 */
+		if (!list_empty(&fb_helper->kernel_fb_list))
+			list_replace_init(&fb_helper->kernel_fb_list,
+					&temp_list);
+
 		drm_fb_helper_fini(fb_helper);
 
 		ret = drm_fb_helper_init(dev, fb_helper,
···
 			DRM_ERROR("failed to initialize drm fb helper\n");
 			return ret;
 		}
+
+		if (!list_empty(&temp_list))
+			list_replace(&temp_list, &fb_helper->kernel_fb_list);
 
 		ret = drm_fb_helper_single_add_all_connectors(fb_helper);
 		if (ret < 0) {
drivers/gpu/drm/exynos/exynos_drm_fimd.c (+53, -18)
···
 	unsigned int		fb_width;
 	unsigned int		fb_height;
 	unsigned int		bpp;
-	dma_addr_t		paddr;
+	dma_addr_t		dma_addr;
 	void __iomem		*vaddr;
 	unsigned int		buf_offsize;
 	unsigned int		line_size;	/* bytes */
···
 	return 0;
 }
 
-static struct exynos_drm_display fimd_display = {
+static struct exynos_drm_display_ops fimd_display_ops = {
 	.type = EXYNOS_DISPLAY_TYPE_LCD,
 	.is_connected = fimd_display_is_connected,
 	.get_timing = fimd_get_timing,
···
 	writel(val, ctx->regs + VIDCON0);
 }
 
+static void fimd_disable(struct device *dev)
+{
+	struct fimd_context *ctx = get_fimd_context(dev);
+	struct exynos_drm_subdrv *subdrv = &ctx->subdrv;
+	struct drm_device *drm_dev = subdrv->drm_dev;
+	struct exynos_drm_manager *manager = &subdrv->manager;
+	u32 val;
+
+	DRM_DEBUG_KMS("%s\n", __FILE__);
+
+	/* fimd dma off */
+	val = readl(ctx->regs + VIDCON0);
+	val &= ~(VIDCON0_ENVID | VIDCON0_ENVID_F);
+	writel(val, ctx->regs + VIDCON0);
+
+	/*
+	 * if vblank is enabled status with dma off then
+	 * it disables vsync interrupt.
+	 */
+	if (drm_dev->vblank_enabled[manager->pipe] &&
+		atomic_read(&drm_dev->vblank_refcount[manager->pipe])) {
+		drm_vblank_put(drm_dev, manager->pipe);
+
+		/*
+		 * if vblank_disable_allowed is 0 then disable
+		 * vsync interrupt right now else the vsync interrupt
+		 * would be disabled by drm timer once a current process
+		 * gives up ownershop of vblank event.
+		 */
+		if (!drm_dev->vblank_disable_allowed)
+			drm_vblank_off(drm_dev, manager->pipe);
+	}
+}
+
 static int fimd_enable_vblank(struct device *dev)
 {
 	struct fimd_context *ctx = get_fimd_context(dev);
···
 
 static struct exynos_drm_manager_ops fimd_manager_ops = {
 	.commit = fimd_commit,
+	.disable = fimd_disable,
 	.enable_vblank = fimd_enable_vblank,
 	.disable_vblank = fimd_disable_vblank,
 };
···
 	win_data->ovl_height = overlay->crtc_height;
 	win_data->fb_width = overlay->fb_width;
 	win_data->fb_height = overlay->fb_height;
-	win_data->paddr = overlay->paddr + offset;
+	win_data->dma_addr = overlay->dma_addr + offset;
 	win_data->vaddr = overlay->vaddr + offset;
 	win_data->bpp = overlay->bpp;
 	win_data->buf_offsize = (overlay->fb_width - overlay->crtc_width) *
···
 	DRM_DEBUG_KMS("ovl_width = %d, ovl_height = %d\n",
 			win_data->ovl_width, win_data->ovl_height);
 	DRM_DEBUG_KMS("paddr = 0x%lx, vaddr = 0x%lx\n",
-			(unsigned long)win_data->paddr,
+			(unsigned long)win_data->dma_addr,
 			(unsigned long)win_data->vaddr);
 	DRM_DEBUG_KMS("fb_width = %d, crtc_width = %d\n",
 			overlay->fb_width, overlay->crtc_width);
···
 	writel(val, ctx->regs + SHADOWCON);
 
 	/* buffer start address */
-	val = win_data->paddr;
+	val = (unsigned long)win_data->dma_addr;
 	writel(val, ctx->regs + VIDWx_BUF_START(win, 0));
 
 	/* buffer end address */
 	size = win_data->fb_width * win_data->ovl_height * (win_data->bpp >> 3);
-	val = win_data->paddr + size;
+	val = (unsigned long)(win_data->dma_addr + size);
 	writel(val, ctx->regs + VIDWx_BUF_END(win, 0));
 
 	DRM_DEBUG_KMS("start addr = 0x%lx, end addr = 0x%lx, size = 0x%lx\n",
-			(unsigned long)win_data->paddr, val, size);
+			(unsigned long)win_data->dma_addr, val, size);
 	DRM_DEBUG_KMS("ovl_width = %d, ovl_height = %d\n",
 			win_data->ovl_width, win_data->ovl_height);
 
···
 static void fimd_win_disable(struct device *dev)
 {
 	struct fimd_context *ctx = get_fimd_context(dev);
-	struct fimd_win_data *win_data;
 	int win = ctx->default_win;
 	u32 val;
 
···
 
 	if (win < 0 || win > WINDOWS_NR)
 		return;
-
-	win_data = &ctx->win_data[win];
 
 	/* protect windows */
 	val = readl(ctx->regs + SHADOWCON);
···
 	/* VSYNC interrupt */
 	writel(VIDINTCON1_INT_FRAME, ctx->regs + VIDINTCON1);
 
+	/*
+	 * in case that vblank_disable_allowed is 1, it could induce
+	 * the problem that manager->pipe could be -1 because with
+	 * disable callback, vsync interrupt isn't disabled and at this moment,
+	 * vsync interrupt could occur. the vsync interrupt would be disabled
+	 * by timer handler later.
+	 */
+	if (manager->pipe == -1)
+		return IRQ_HANDLED;
+
 	drm_handle_vblank(drm_dev, manager->pipe);
 	fimd_finish_pageflip(drm_dev, manager->pipe);
 
···
 	 * drm framework supports only one irq handler.
 	 */
 	drm_dev->irq_enabled = 1;
-
-	/*
-	 * with vblank_disable_allowed = 1, vblank interrupt will be disabled
-	 * by drm timer once a current process gives up ownership of
-	 * vblank event.(drm_vblank_put function was called)
-	 */
-	drm_dev->vblank_disable_allowed = 1;
 
 	return 0;
 }
···
 	subdrv->manager.pipe = -1;
 	subdrv->manager.ops = &fimd_manager_ops;
 	subdrv->manager.overlay_ops = &fimd_overlay_ops;
-	subdrv->manager.display = &fimd_display;
+	subdrv->manager.display_ops = &fimd_display_ops;
 	subdrv->manager.dev = dev;
 
 	platform_set_drvdata(pdev, ctx);
+52-37
drivers/gpu/drm/exynos/exynos_drm_gem.c
···6262 return (unsigned int)obj->map_list.hash.key << PAGE_SHIFT;6363}64646565-struct exynos_drm_gem_obj *exynos_drm_gem_create(struct drm_file *file_priv,6666- struct drm_device *dev, unsigned int size,6767- unsigned int *handle)6565+static struct exynos_drm_gem_obj6666+ *exynos_drm_gem_init(struct drm_device *drm_dev,6767+ struct drm_file *file_priv, unsigned int *handle,6868+ unsigned int size)6869{6970 struct exynos_drm_gem_obj *exynos_gem_obj;7070- struct exynos_drm_buf_entry *entry;7171 struct drm_gem_object *obj;7272 int ret;7373-7474- DRM_DEBUG_KMS("%s\n", __FILE__);7575-7676- size = roundup(size, PAGE_SIZE);77737874 exynos_gem_obj = kzalloc(sizeof(*exynos_gem_obj), GFP_KERNEL);7975 if (!exynos_gem_obj) {···7781 return ERR_PTR(-ENOMEM);7882 }79838080- /* allocate the new buffer object and memory region. */8181- entry = exynos_drm_buf_create(dev, size);8282- if (!entry) {8383- kfree(exynos_gem_obj);8484- return ERR_PTR(-ENOMEM);8585- }8686-8787- exynos_gem_obj->entry = entry;8888-8984 obj = &exynos_gem_obj->base;90859191- ret = drm_gem_object_init(dev, obj, size);8686+ ret = drm_gem_object_init(drm_dev, obj, size);9287 if (ret < 0) {9393- DRM_ERROR("failed to initailize gem object.\n");9494- goto err_obj_init;8888+ DRM_ERROR("failed to initialize gem object.\n");8989+ ret = -EINVAL;9090+ goto err_object_init;9591 }96929793 DRM_DEBUG_KMS("created file object = 0x%x\n", (unsigned int)obj->filp);···115127err_create_mmap_offset:116128 drm_gem_object_release(obj);117129118118-err_obj_init:119119- exynos_drm_buf_destroy(dev, exynos_gem_obj->entry);120120-130130+err_object_init:121131 kfree(exynos_gem_obj);122132123133 return ERR_PTR(ret);124134}125135136136+struct exynos_drm_gem_obj *exynos_drm_gem_create(struct drm_device *dev,137137+ struct drm_file *file_priv,138138+ unsigned int *handle, unsigned long size)139139+{140140+141141+ struct exynos_drm_gem_obj *exynos_gem_obj = NULL;142142+ struct exynos_drm_gem_buf *buffer;143143+144144+ size = roundup(size, 
PAGE_SIZE);145145+146146+ DRM_DEBUG_KMS("%s: size = 0x%lx\n", __FILE__, size);147147+148148+ buffer = exynos_drm_buf_create(dev, size);149149+ if (IS_ERR(buffer)) {150150+ return ERR_CAST(buffer);151151+ }152152+153153+ exynos_gem_obj = exynos_drm_gem_init(dev, file_priv, handle, size);154154+ if (IS_ERR(exynos_gem_obj)) {155155+ exynos_drm_buf_destroy(dev, buffer);156156+ return exynos_gem_obj;157157+ }158158+159159+ exynos_gem_obj->buffer = buffer;160160+161161+ return exynos_gem_obj;162162+}163163+126164int exynos_drm_gem_create_ioctl(struct drm_device *dev, void *data,127127- struct drm_file *file_priv)165165+ struct drm_file *file_priv)128166{129167 struct drm_exynos_gem_create *args = data;130130- struct exynos_drm_gem_obj *exynos_gem_obj;168168+ struct exynos_drm_gem_obj *exynos_gem_obj = NULL;131169132132- DRM_DEBUG_KMS("%s : size = 0x%x\n", __FILE__, args->size);170170+ DRM_DEBUG_KMS("%s\n", __FILE__);133171134134- exynos_gem_obj = exynos_drm_gem_create(file_priv, dev, args->size,135135- &args->handle);172172+ exynos_gem_obj = exynos_drm_gem_create(dev, file_priv,173173+ &args->handle, args->size);136174 if (IS_ERR(exynos_gem_obj))137175 return PTR_ERR(exynos_gem_obj);138176···189175{190176 struct drm_gem_object *obj = filp->private_data;191177 struct exynos_drm_gem_obj *exynos_gem_obj = to_exynos_gem_obj(obj);192192- struct exynos_drm_buf_entry *entry;178178+ struct exynos_drm_gem_buf *buffer;193179 unsigned long pfn, vm_size;194180195181 DRM_DEBUG_KMS("%s\n", __FILE__);···201187202188 vm_size = vma->vm_end - vma->vm_start;203189 /*204204- * a entry contains information to physically continuous memory190190+ * a buffer contains information to physically continuous memory205191 * allocated by user request or at framebuffer creation.206192 */207207- entry = exynos_gem_obj->entry;193193+ buffer = exynos_gem_obj->buffer;208194209195 /* check if user-requested size is valid. 
*/210210- if (vm_size > entry->size)196196+ if (vm_size > buffer->size)211197 return -EINVAL;212198213199 /*214200 * get page frame number to physical memory to be mapped215201 * to user space.216202 */217217- pfn = exynos_gem_obj->entry->paddr >> PAGE_SHIFT;203203+ pfn = ((unsigned long)exynos_gem_obj->buffer->dma_addr) >> PAGE_SHIFT;218204219205 DRM_DEBUG_KMS("pfn = 0x%lx\n", pfn);220206···295281296282 exynos_gem_obj = to_exynos_gem_obj(gem_obj);297283298298- exynos_drm_buf_destroy(gem_obj->dev, exynos_gem_obj->entry);284284+ exynos_drm_buf_destroy(gem_obj->dev, exynos_gem_obj->buffer);299285300286 kfree(exynos_gem_obj);301287}···316302 args->pitch = args->width * args->bpp >> 3;317303 args->size = args->pitch * args->height;318304319319- exynos_gem_obj = exynos_drm_gem_create(file_priv, dev, args->size,320320- &args->handle);305305+ exynos_gem_obj = exynos_drm_gem_create(dev, file_priv, &args->handle,306306+ args->size);321307 if (IS_ERR(exynos_gem_obj))322308 return PTR_ERR(exynos_gem_obj);323309···374360375361 mutex_lock(&dev->struct_mutex);376362377377- pfn = (exynos_gem_obj->entry->paddr >> PAGE_SHIFT) + page_offset;363363+ pfn = (((unsigned long)exynos_gem_obj->buffer->dma_addr) >>364364+ PAGE_SHIFT) + page_offset;378365379366 ret = vm_insert_mixed(vma, (unsigned long)vmf->virtual_address, pfn);380367
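The exynos mmap and fault paths above both turn the buffer's DMA address into a page frame number with a right shift, and the create path rounds the requested size up to a whole number of pages. A userspace sketch of that arithmetic, assuming 4 KiB pages (PAGE_SHIFT of 12); the helper names are hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/* 4 KiB pages assumed for illustration only. */
#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Drop the in-page offset to get the page frame number, as the
 * driver does with exynos_gem_obj->buffer->dma_addr. */
static unsigned long dma_addr_to_pfn(uint64_t dma_addr)
{
	return (unsigned long)(dma_addr >> PAGE_SHIFT);
}

/* roundup(size, PAGE_SIZE) as used before allocating the buffer:
 * next multiple of the page size, no change if already aligned. */
static unsigned long roundup_page(unsigned long size)
{
	return (size + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
}
```

The shift and the mask are two views of the same alignment rule: a pfn times PAGE_SIZE is always a page-aligned address.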
+22-6
drivers/gpu/drm/exynos/exynos_drm_gem.h
···3030 struct exynos_drm_gem_obj, base)31313232/*3333+ * exynos drm gem buffer structure.3434+ *3535+ * @kvaddr: kernel virtual address to allocated memory region.3636+ * @dma_addr: bus address(accessed by dma) to allocated memory region.3737+ * - this address could be physical address without IOMMU and3838+ * device address with IOMMU.3939+ * @size: size of allocated memory region.4040+ */4141+struct exynos_drm_gem_buf {4242+ void __iomem *kvaddr;4343+ dma_addr_t dma_addr;4444+ unsigned long size;4545+};4646+4747+/*3348 * exynos drm buffer structure.3449 *3550 * @base: a gem object.3651 * - a new handle to this gem object would be created3752 * by drm_gem_handle_create().3838- * @entry: pointer to exynos drm buffer entry object.3939- * - containing the information to physically5353+ * @buffer: a pointer to exynos_drm_gem_buffer object.5454+ * - contain the information to memory region allocated5555+ * by user request or at framebuffer creation.4056 * continuous memory region allocated by user request4157 * or at framebuffer creation.4258 *···6145 */6246struct exynos_drm_gem_obj {6347 struct drm_gem_object base;6464- struct exynos_drm_buf_entry *entry;4848+ struct exynos_drm_gem_buf *buffer;6549};66506751/* create a new buffer and get a new gem handle. */6868-struct exynos_drm_gem_obj *exynos_drm_gem_create(struct drm_file *file_priv,6969- struct drm_device *dev, unsigned int size,7070- unsigned int *handle);5252+struct exynos_drm_gem_obj *exynos_drm_gem_create(struct drm_device *dev,5353+ struct drm_file *file_priv,5454+ unsigned int *handle, unsigned long size);71557256/*7357 * request gem object creation and buffer allocation as the size
+51-6
drivers/gpu/drm/i915/i915_debugfs.c
···636636 struct drm_device *dev = node->minor->dev;637637 drm_i915_private_t *dev_priv = dev->dev_private;638638 struct intel_ring_buffer *ring;639639+ int ret;639640640641 ring = &dev_priv->ring[(uintptr_t)node->info_ent->data];641642 if (ring->size == 0)642643 return 0;644644+645645+ ret = mutex_lock_interruptible(&dev->struct_mutex);646646+ if (ret)647647+ return ret;643648644649 seq_printf(m, "Ring %s:\n", ring->name);645650 seq_printf(m, " Head : %08x\n", I915_READ_HEAD(ring) & HEAD_ADDR);···658653 }659654 seq_printf(m, " Control : %08x\n", I915_READ_CTL(ring));660655 seq_printf(m, " Start : %08x\n", I915_READ_START(ring));656656+657657+ mutex_unlock(&dev->struct_mutex);661658662659 return 0;663660}···849842 struct drm_info_node *node = (struct drm_info_node *) m->private;850843 struct drm_device *dev = node->minor->dev;851844 drm_i915_private_t *dev_priv = dev->dev_private;852852- u16 crstanddelay = I915_READ16(CRSTANDVID);845845+ u16 crstanddelay;846846+ int ret;847847+848848+ ret = mutex_lock_interruptible(&dev->struct_mutex);849849+ if (ret)850850+ return ret;851851+852852+ crstanddelay = I915_READ16(CRSTANDVID);853853+854854+ mutex_unlock(&dev->struct_mutex);853855854856 seq_printf(m, "w/ctx: %d, w/o ctx: %d\n", (crstanddelay >> 8) & 0x3f, (crstanddelay & 0x3f));855857···956940 struct drm_device *dev = node->minor->dev;957941 drm_i915_private_t *dev_priv = dev->dev_private;958942 u32 delayfreq;959959- int i;943943+ int ret, i;944944+945945+ ret = mutex_lock_interruptible(&dev->struct_mutex);946946+ if (ret)947947+ return ret;960948961949 for (i = 0; i < 16; i++) {962950 delayfreq = I915_READ(PXVFREQ_BASE + i * 4);963951 seq_printf(m, "P%02dVIDFREQ: 0x%08x (VID: %d)\n", i, delayfreq,964952 (delayfreq & PXVFREQ_PX_MASK) >> PXVFREQ_PX_SHIFT);965953 }954954+955955+ mutex_unlock(&dev->struct_mutex);966956967957 return 0;968958}···984962 struct drm_device *dev = node->minor->dev;985963 drm_i915_private_t *dev_priv = dev->dev_private;986964 u32 inttoext;987987- 
int i;965965+ int ret, i;966966+967967+ ret = mutex_lock_interruptible(&dev->struct_mutex);968968+ if (ret)969969+ return ret;988970989971 for (i = 1; i <= 32; i++) {990972 inttoext = I915_READ(INTTOEXT_BASE_ILK + i * 4);991973 seq_printf(m, "INTTOEXT%02d: 0x%08x\n", i, inttoext);992974 }975975+976976+ mutex_unlock(&dev->struct_mutex);993977994978 return 0;995979}···1005977 struct drm_info_node *node = (struct drm_info_node *) m->private;1006978 struct drm_device *dev = node->minor->dev;1007979 drm_i915_private_t *dev_priv = dev->dev_private;10081008- u32 rgvmodectl = I915_READ(MEMMODECTL);10091009- u32 rstdbyctl = I915_READ(RSTDBYCTL);10101010- u16 crstandvid = I915_READ16(CRSTANDVID);980980+ u32 rgvmodectl, rstdbyctl;981981+ u16 crstandvid;982982+ int ret;983983+984984+ ret = mutex_lock_interruptible(&dev->struct_mutex);985985+ if (ret)986986+ return ret;987987+988988+ rgvmodectl = I915_READ(MEMMODECTL);989989+ rstdbyctl = I915_READ(RSTDBYCTL);990990+ crstandvid = I915_READ16(CRSTANDVID);991991+992992+ mutex_unlock(&dev->struct_mutex);10119931012994 seq_printf(m, "HD boost: %s\n", (rgvmodectl & MEMMODE_BOOST_EN) ?1013995 "yes" : "no");···12051167 struct drm_info_node *node = (struct drm_info_node *) m->private;12061168 struct drm_device *dev = node->minor->dev;12071169 drm_i915_private_t *dev_priv = dev->dev_private;11701170+ int ret;11711171+11721172+ ret = mutex_lock_interruptible(&dev->struct_mutex);11731173+ if (ret)11741174+ return ret;1208117512091176 seq_printf(m, "GFXEC: %ld\n", (unsigned long)I915_READ(0x112f4));11771177+11781178+ mutex_unlock(&dev->struct_mutex);1210117912111180 return 0;12121181}
drivers/gpu/drm/i915/i915_drv.h
···126126 struct _drm_i915_sarea *sarea_priv;127127};128128#define I915_FENCE_REG_NONE -1129129+#define I915_MAX_NUM_FENCES 16130130+/* 16 fences + sign bit for FENCE_REG_NONE */131131+#define I915_MAX_NUM_FENCE_BITS 5129132130133struct drm_i915_fence_reg {131134 struct list_head lru_list;···171168 u32 instdone1;172169 u32 seqno;173170 u64 bbaddr;174174- u64 fence[16];171171+ u64 fence[I915_MAX_NUM_FENCES];175172 struct timeval time;176173 struct drm_i915_error_object {177174 int page_count;···185182 u32 gtt_offset;186183 u32 read_domains;187184 u32 write_domain;188188- s32 fence_reg:5;185185+ s32 fence_reg:I915_MAX_NUM_FENCE_BITS;189186 s32 pinned:2;190187 u32 tiling:2;191188 u32 dirty:1;···378375 struct notifier_block lid_notifier;379376380377 int crt_ddc_pin;381381- struct drm_i915_fence_reg fence_regs[16]; /* assume 965 */378378+ struct drm_i915_fence_reg fence_regs[I915_MAX_NUM_FENCES]; /* assume 965 */382379 int fence_reg_start; /* 4 if userland hasn't ioctl'd us yet */383380 int num_fence_regs; /* 8 on pre-965, 16 otherwise */384381···509506 u8 saveAR[21];510507 u8 saveDACMASK;511508 u8 saveCR[37];512512- uint64_t saveFENCE[16];509509+ uint64_t saveFENCE[I915_MAX_NUM_FENCES];513510 u32 saveCURACNTR;514511 u32 saveCURAPOS;515512 u32 saveCURABASE;···780777 * Fence register bits (if any) for this object. 
Will be set781778 * as needed when mapped into the GTT.782779 * Protected by dev->struct_mutex.783783- *784784- * Size: 4 bits for 16 fences + sign (for FENCE_REG_NONE)785780 */786786- signed int fence_reg:5;781781+ signed int fence_reg:I915_MAX_NUM_FENCE_BITS;787782788783 /**789784 * Advice: are the backing pages purgeable?···1000999extern unsigned int i915_powersave __read_mostly;10011000extern unsigned int i915_semaphores __read_mostly;10021001extern unsigned int i915_lvds_downclock __read_mostly;10031003-extern unsigned int i915_panel_use_ssc __read_mostly;10021002+extern int i915_panel_use_ssc __read_mostly;10041003extern int i915_vbt_sdvo_panel_type __read_mostly;10051004extern unsigned int i915_enable_rc6 __read_mostly;10061006-extern unsigned int i915_enable_fbc __read_mostly;10051005+extern int i915_enable_fbc __read_mostly;10071006extern bool i915_enable_hangcheck __read_mostly;1008100710091008extern int i915_suspend(struct drm_device *dev, pm_message_t state);
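The new I915_MAX_NUM_FENCE_BITS constant makes explicit why the fence_reg bitfield needs exactly 5 bits: a signed 5-bit field spans -16..15, which covers all 16 fence register indices plus the -1 FENCE_REG_NONE sentinel. A standalone sketch (the struct name is hypothetical, standing in for drm_i915_gem_object):

```c
#include <assert.h>

#define FENCE_REG_NONE     (-1)
#define MAX_NUM_FENCES     16
#define MAX_NUM_FENCE_BITS 5	/* 16 fences + sign bit for FENCE_REG_NONE */

/* "signed int" (not plain "int") guarantees the bitfield is signed,
 * so the -1 sentinel is representable. */
struct fence_holder {
	signed int fence_reg : MAX_NUM_FENCE_BITS;
};

/* Nonzero when v survives a round-trip through the 5-bit field. */
static int fence_fits(int v)
{
	struct fence_holder h;

	h.fence_reg = v;
	return h.fence_reg == v;
}
```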
+7-5
drivers/gpu/drm/i915/i915_gem.c
···17451745 struct drm_i915_private *dev_priv = dev->dev_private;17461746 int i;1747174717481748- for (i = 0; i < 16; i++) {17481748+ for (i = 0; i < dev_priv->num_fence_regs; i++) {17491749 struct drm_i915_fence_reg *reg = &dev_priv->fence_regs[i];17501750 struct drm_i915_gem_object *obj = reg->obj;17511751···35123512 * so emit a request to do so.35133513 */35143514 request = kzalloc(sizeof(*request), GFP_KERNEL);35153515- if (request)35153515+ if (request) {35163516 ret = i915_add_request(obj->ring, NULL, request);35173517- else35173517+ if (ret)35183518+ kfree(request);35193519+ } else35183520 ret = -ENOMEM;35193521 }35203522···36153613 obj->base.write_domain = I915_GEM_DOMAIN_CPU;36163614 obj->base.read_domains = I915_GEM_DOMAIN_CPU;3617361536183618- if (IS_GEN6(dev)) {36163616+ if (IS_GEN6(dev) || IS_GEN7(dev)) {36193617 /* On Gen6, we can have the GPU use the LLC (the CPU36203618 * cache) for about a 10% performance improvement36213619 * compared to uncached. Graphics requests other than···38793877 INIT_LIST_HEAD(&dev_priv->mm.gtt_list);38803878 for (i = 0; i < I915_NUM_RINGS; i++)38813879 init_ring_lists(&dev_priv->ring[i]);38823882- for (i = 0; i < 16; i++)38803880+ for (i = 0; i < I915_MAX_NUM_FENCES; i++)38833881 INIT_LIST_HEAD(&dev_priv->fence_regs[i].lru_list);38843882 INIT_DELAYED_WORK(&dev_priv->mm.retire_work,38853883 i915_gem_retire_work_handler);
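The kzalloc/i915_add_request hunk in i915_gem.c plugs a leak: when i915_add_request() fails it has not taken ownership of the request, so the caller must free it. The ownership rule can be sketched generically (all names here are hypothetical stand-ins):

```c
#include <assert.h>
#include <stdlib.h>

struct request { int seqno; };

/* Takes ownership of req only on success, mirroring i915_add_request().
 * should_fail simulates the kernel call's error path. */
static int add_request(struct request *req, int should_fail)
{
	if (should_fail)
		return -1;	/* failure: caller still owns req */
	free(req);		/* stand-in for queueing + later retirement */
	return 0;
}

static int submit_request(int should_fail)
{
	struct request *req = calloc(1, sizeof(*req));
	int ret;

	if (!req)
		return -2;	/* -ENOMEM analogue */

	ret = add_request(req, should_fail);
	if (ret)
		free(req);	/* the fix: free when add_request() fails */
	return ret;
}
```

Without the error-path free, every failed submission would leak one request, which is exactly what the hunk corrects.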
+1
drivers/gpu/drm/i915/i915_irq.c
···824824825825 /* Fences */826826 switch (INTEL_INFO(dev)->gen) {827827+ case 7:827828 case 6:828829 for (i = 0; i < 16; i++)829830 error->fence[i] = I915_READ64(FENCE_REG_SANDYBRIDGE_0 + (i * 8));
drivers/gpu/drm/i915/i915_suspend.c
···370370371371 /* Fences */372372 switch (INTEL_INFO(dev)->gen) {373373+ case 7:373374 case 6:374375 for (i = 0; i < 16; i++)375376 dev_priv->saveFENCE[i] = I915_READ64(FENCE_REG_SANDYBRIDGE_0 + (i * 8));···405404406405 /* Fences */407406 switch (INTEL_INFO(dev)->gen) {407407+ case 7:408408 case 6:409409 for (i = 0; i < 16; i++)410410 I915_WRITE64(FENCE_REG_SANDYBRIDGE_0 + (i * 8), dev_priv->saveFENCE[i]);
+24-9
drivers/gpu/drm/i915/intel_display.c
···2933293329342934 /* For PCH DP, enable TRANS_DP_CTL */29352935 if (HAS_PCH_CPT(dev) &&29362936- intel_pipe_has_type(crtc, INTEL_OUTPUT_DISPLAYPORT)) {29362936+ (intel_pipe_has_type(crtc, INTEL_OUTPUT_DISPLAYPORT) ||29372937+ intel_pipe_has_type(crtc, INTEL_OUTPUT_EDP))) {29372938 u32 bpc = (I915_READ(PIPECONF(pipe)) & PIPE_BPC_MASK) >> 5;29382939 reg = TRANS_DP_CTL(pipe);29392940 temp = I915_READ(reg);···47124711 lvds_bpc = 6;4713471247144713 if (lvds_bpc < display_bpc) {47154715- DRM_DEBUG_DRIVER("clamping display bpc (was %d) to LVDS (%d)\n", display_bpc, lvds_bpc);47144714+ DRM_DEBUG_KMS("clamping display bpc (was %d) to LVDS (%d)\n", display_bpc, lvds_bpc);47164715 display_bpc = lvds_bpc;47174716 }47184717 continue;···47234722 unsigned int edp_bpc = dev_priv->edp.bpp / 3;4724472347254724 if (edp_bpc < display_bpc) {47264726- DRM_DEBUG_DRIVER("clamping display bpc (was %d) to eDP (%d)\n", display_bpc, edp_bpc);47254725+ DRM_DEBUG_KMS("clamping display bpc (was %d) to eDP (%d)\n", display_bpc, edp_bpc);47274726 display_bpc = edp_bpc;47284727 }47294728 continue;···47384737 /* Don't use an invalid EDID bpc value */47394738 if (connector->display_info.bpc &&47404739 connector->display_info.bpc < display_bpc) {47414741- DRM_DEBUG_DRIVER("clamping display bpc (was %d) to EDID reported max of %d\n", display_bpc, connector->display_info.bpc);47404740+ DRM_DEBUG_KMS("clamping display bpc (was %d) to EDID reported max of %d\n", display_bpc, connector->display_info.bpc);47424741 display_bpc = connector->display_info.bpc;47434742 }47444743 }···47494748 */47504749 if (intel_encoder->type == INTEL_OUTPUT_HDMI) {47514750 if (display_bpc > 8 && display_bpc < 12) {47524752- DRM_DEBUG_DRIVER("forcing bpc to 12 for HDMI\n");47514751+ DRM_DEBUG_KMS("forcing bpc to 12 for HDMI\n");47534752 display_bpc = 12;47544753 } else {47554755- DRM_DEBUG_DRIVER("forcing bpc to 8 for HDMI\n");47544754+ DRM_DEBUG_KMS("forcing bpc to 8 for HDMI\n");47564755 display_bpc = 8;47574756 }47584757 
}···4790478947914790 display_bpc = min(display_bpc, bpc);4792479147934793- DRM_DEBUG_DRIVER("setting pipe bpc to %d (max display bpc %d)\n",47944794- bpc, display_bpc);47924792+ DRM_DEBUG_KMS("setting pipe bpc to %d (max display bpc %d)\n",47934793+ bpc, display_bpc);4795479447964795 *pipe_bpp = display_bpc * 3;47974796···56725671 pipeconf &= ~PIPECONF_DITHER_TYPE_MASK;56735672 if ((is_lvds && dev_priv->lvds_dither) || dither) {56745673 pipeconf |= PIPECONF_DITHER_EN;56755675- pipeconf |= PIPECONF_DITHER_TYPE_ST1;56745674+ pipeconf |= PIPECONF_DITHER_TYPE_SP;56765675 }56775676 if (is_dp || intel_encoder_is_pch_edp(&has_edp_encoder->base)) {56785677 intel_dp_set_m_n(crtc, mode, adjusted_mode);···81488147 I915_WRITE(WM3_LP_ILK, 0);81498148 I915_WRITE(WM2_LP_ILK, 0);81508149 I915_WRITE(WM1_LP_ILK, 0);81508150+81518151+ /* According to the BSpec vol1g, bit 12 (RCPBUNIT) clock81528152+ * gating disable must be set. Failure to set it results in81538153+ * flickering pixels due to Z write ordering failures after81548154+ * some amount of runtime in the Mesa "fire" demo, and Unigine81558155+ * Sanctuary and Tropics, and apparently anything else with81568156+ * alpha test or pixel discard.81578157+ *81588158+ * According to the spec, bit 11 (RCCUNIT) must also be set,81598159+ * but we didn't debug actual testcases to find it out.81608160+ */81618161+ I915_WRITE(GEN6_UCGCTL2,81628162+ GEN6_RCPBUNIT_CLOCK_GATE_DISABLE |81638163+ GEN6_RCCUNIT_CLOCK_GATE_DISABLE);8151816481528165 /*81538166 * According to the spec the following bits should be
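The DRM_DEBUG_KMS hunks in intel_display.c all revolve around one clamping rule: each sink (LVDS, eDP, EDID-reported limits) may only lower the pipe's bits-per-component, and an HDMI output then snaps the result onto the legal 8 or 12 values. A hypothetical sketch of that logic, not the driver's actual function:

```c
#include <assert.h>

/* A sink limit of 0 means "no limit reported", matching the EDID
 * check in the hunk above; HDMI forces the 9..11 range up to 12 and
 * everything else down to 8. */
static unsigned int clamp_bpc(unsigned int display_bpc,
			      unsigned int sink_bpc, int is_hdmi)
{
	if (sink_bpc && sink_bpc < display_bpc)
		display_bpc = sink_bpc;

	if (is_hdmi)
		display_bpc = (display_bpc > 8 && display_bpc < 12) ? 12 : 8;

	return display_bpc;
}
```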
+237-174
drivers/gpu/drm/i915/intel_dp.c
···5959 struct i2c_algo_dp_aux_data algo;6060 bool is_pch_edp;6161 uint8_t train_set[4];6262- uint8_t link_status[DP_LINK_STATUS_SIZE];6362 int panel_power_up_delay;6463 int panel_power_down_delay;6564 int panel_power_cycle_delay;···6768 struct drm_display_mode *panel_fixed_mode; /* for eDP */6869 struct delayed_work panel_vdd_work;6970 bool want_panel_vdd;7070- unsigned long panel_off_jiffies;7171};72727373/**···155157static int156158intel_dp_max_lane_count(struct intel_dp *intel_dp)157159{158158- int max_lane_count = 4;159159-160160- if (intel_dp->dpcd[DP_DPCD_REV] >= 0x11) {161161- max_lane_count = intel_dp->dpcd[DP_MAX_LANE_COUNT] & 0x1f;162162- switch (max_lane_count) {163163- case 1: case 2: case 4:164164- break;165165- default:166166- max_lane_count = 4;167167- }160160+ int max_lane_count = intel_dp->dpcd[DP_MAX_LANE_COUNT] & 0x1f;161161+ switch (max_lane_count) {162162+ case 1: case 2: case 4:163163+ break;164164+ default:165165+ max_lane_count = 4;168166 }169167 return max_lane_count;170168}···762768 continue;763769764770 intel_dp = enc_to_intel_dp(encoder);765765- if (intel_dp->base.type == INTEL_OUTPUT_DISPLAYPORT) {771771+ if (intel_dp->base.type == INTEL_OUTPUT_DISPLAYPORT ||772772+ intel_dp->base.type == INTEL_OUTPUT_EDP)773773+ {766774 lane_count = intel_dp->lane_count;767767- break;768768- } else if (is_edp(intel_dp)) {769769- lane_count = dev_priv->edp.lanes;770775 break;771776 }772777 }···803810 struct drm_display_mode *adjusted_mode)804811{805812 struct drm_device *dev = encoder->dev;813813+ struct drm_i915_private *dev_priv = dev->dev_private;806814 struct intel_dp *intel_dp = enc_to_intel_dp(encoder);807815 struct drm_crtc *crtc = intel_dp->base.base.crtc;808816 struct intel_crtc *intel_crtc = to_intel_crtc(crtc);···816822 ironlake_edp_pll_off(encoder);817823 }818824819819- intel_dp->DP = DP_VOLTAGE_0_4 | DP_PRE_EMPHASIS_0;820820- intel_dp->DP |= intel_dp->color_range;825825+ /*826826+ * There are three kinds of DP registers:827827+ *828828+ * 
IBX PCH829829+ * CPU830830+ * CPT PCH831831+ *832832+ * IBX PCH and CPU are the same for almost everything,833833+ * except that the CPU DP PLL is configured in this834834+ * register835835+ *836836+ * CPT PCH is quite different, having many bits moved837837+ * to the TRANS_DP_CTL register instead. That838838+ * configuration happens (oddly) in ironlake_pch_enable839839+ */821840822822- if (adjusted_mode->flags & DRM_MODE_FLAG_PHSYNC)823823- intel_dp->DP |= DP_SYNC_HS_HIGH;824824- if (adjusted_mode->flags & DRM_MODE_FLAG_PVSYNC)825825- intel_dp->DP |= DP_SYNC_VS_HIGH;841841+ /* Preserve the BIOS-computed detected bit. This is842842+ * supposed to be read-only.843843+ */844844+ intel_dp->DP = I915_READ(intel_dp->output_reg) & DP_DETECTED;845845+ intel_dp->DP |= DP_VOLTAGE_0_4 | DP_PRE_EMPHASIS_0;826846827827- if (HAS_PCH_CPT(dev) && !is_cpu_edp(intel_dp))828828- intel_dp->DP |= DP_LINK_TRAIN_OFF_CPT;829829- else830830- intel_dp->DP |= DP_LINK_TRAIN_OFF;847847+ /* Handle DP bits in common between all three register formats */848848+849849+ intel_dp->DP |= DP_VOLTAGE_0_4 | DP_PRE_EMPHASIS_0;831850832851 switch (intel_dp->lane_count) {833852 case 1:···859852 intel_dp->DP |= DP_AUDIO_OUTPUT_ENABLE;860853 intel_write_eld(encoder, adjusted_mode);861854 }862862-863855 memset(intel_dp->link_configuration, 0, DP_LINK_CONFIGURATION_SIZE);864856 intel_dp->link_configuration[0] = intel_dp->link_bw;865857 intel_dp->link_configuration[1] = intel_dp->lane_count;866858 intel_dp->link_configuration[8] = DP_SET_ANSI_8B10B;867867-868859 /*869860 * Check for DPCD version > 1.1 and enhanced framing support870861 */871862 if (intel_dp->dpcd[DP_DPCD_REV] >= 0x11 &&872863 (intel_dp->dpcd[DP_MAX_LANE_COUNT] & DP_ENHANCED_FRAME_CAP)) {873864 intel_dp->link_configuration[1] |= DP_LANE_COUNT_ENHANCED_FRAME_EN;874874- intel_dp->DP |= DP_ENHANCED_FRAMING;875865 }876866877877- /* CPT DP's pipe select is decided in TRANS_DP_CTL */878878- if (intel_crtc->pipe == 1 && !HAS_PCH_CPT(dev))879879- 
intel_dp->DP |= DP_PIPEB_SELECT;867867+ /* Split out the IBX/CPU vs CPT settings */880868881881- if (is_cpu_edp(intel_dp)) {882882- /* don't miss out required setting for eDP */883883- intel_dp->DP |= DP_PLL_ENABLE;884884- if (adjusted_mode->clock < 200000)885885- intel_dp->DP |= DP_PLL_FREQ_160MHZ;886886- else887887- intel_dp->DP |= DP_PLL_FREQ_270MHZ;869869+ if (!HAS_PCH_CPT(dev) || is_cpu_edp(intel_dp)) {870870+ intel_dp->DP |= intel_dp->color_range;871871+872872+ if (adjusted_mode->flags & DRM_MODE_FLAG_PHSYNC)873873+ intel_dp->DP |= DP_SYNC_HS_HIGH;874874+ if (adjusted_mode->flags & DRM_MODE_FLAG_PVSYNC)875875+ intel_dp->DP |= DP_SYNC_VS_HIGH;876876+ intel_dp->DP |= DP_LINK_TRAIN_OFF;877877+878878+ if (intel_dp->link_configuration[1] & DP_LANE_COUNT_ENHANCED_FRAME_EN)879879+ intel_dp->DP |= DP_ENHANCED_FRAMING;880880+881881+ if (intel_crtc->pipe == 1)882882+ intel_dp->DP |= DP_PIPEB_SELECT;883883+884884+ if (is_cpu_edp(intel_dp)) {885885+ /* don't miss out required setting for eDP */886886+ intel_dp->DP |= DP_PLL_ENABLE;887887+ if (adjusted_mode->clock < 200000)888888+ intel_dp->DP |= DP_PLL_FREQ_160MHZ;889889+ else890890+ intel_dp->DP |= DP_PLL_FREQ_270MHZ;891891+ }892892+ } else {893893+ intel_dp->DP |= DP_LINK_TRAIN_OFF_CPT;888894 }895895+}896896+897897+#define IDLE_ON_MASK (PP_ON | 0 | PP_SEQUENCE_MASK | 0 | PP_SEQUENCE_STATE_MASK)898898+#define IDLE_ON_VALUE (PP_ON | 0 | PP_SEQUENCE_NONE | 0 | PP_SEQUENCE_STATE_ON_IDLE)899899+900900+#define IDLE_OFF_MASK (PP_ON | 0 | PP_SEQUENCE_MASK | 0 | PP_SEQUENCE_STATE_MASK)901901+#define IDLE_OFF_VALUE (0 | 0 | PP_SEQUENCE_NONE | 0 | PP_SEQUENCE_STATE_OFF_IDLE)902902+903903+#define IDLE_CYCLE_MASK (PP_ON | 0 | PP_SEQUENCE_MASK | PP_CYCLE_DELAY_ACTIVE | PP_SEQUENCE_STATE_MASK)904904+#define IDLE_CYCLE_VALUE (0 | 0 | PP_SEQUENCE_NONE | 0 | PP_SEQUENCE_STATE_OFF_IDLE)905905+906906+static void ironlake_wait_panel_status(struct intel_dp *intel_dp,907907+ u32 mask,908908+ u32 value)909909+{910910+ struct drm_device *dev = 
intel_dp->base.base.dev;911911+ struct drm_i915_private *dev_priv = dev->dev_private;912912+913913+ DRM_DEBUG_KMS("mask %08x value %08x status %08x control %08x\n",914914+ mask, value,915915+ I915_READ(PCH_PP_STATUS),916916+ I915_READ(PCH_PP_CONTROL));917917+918918+ if (_wait_for((I915_READ(PCH_PP_STATUS) & mask) == value, 5000, 10)) {919919+ DRM_ERROR("Panel status timeout: status %08x control %08x\n",920920+ I915_READ(PCH_PP_STATUS),921921+ I915_READ(PCH_PP_CONTROL));922922+ }923923+}924924+925925+static void ironlake_wait_panel_on(struct intel_dp *intel_dp)926926+{927927+ DRM_DEBUG_KMS("Wait for panel power on\n");928928+ ironlake_wait_panel_status(intel_dp, IDLE_ON_MASK, IDLE_ON_VALUE);889929}890930891931static void ironlake_wait_panel_off(struct intel_dp *intel_dp)892932{893893- unsigned long off_time;894894- unsigned long delay;895895-896933 DRM_DEBUG_KMS("Wait for panel power off time\n");934934+ ironlake_wait_panel_status(intel_dp, IDLE_OFF_MASK, IDLE_OFF_VALUE);935935+}897936898898- if (ironlake_edp_have_panel_power(intel_dp) ||899899- ironlake_edp_have_panel_vdd(intel_dp))900900- {901901- DRM_DEBUG_KMS("Panel still on, no delay needed\n");902902- return;903903- }937937+static void ironlake_wait_panel_power_cycle(struct intel_dp *intel_dp)938938+{939939+ DRM_DEBUG_KMS("Wait for panel power cycle\n");940940+ ironlake_wait_panel_status(intel_dp, IDLE_CYCLE_MASK, IDLE_CYCLE_VALUE);941941+}904942905905- off_time = intel_dp->panel_off_jiffies + msecs_to_jiffies(intel_dp->panel_power_down_delay);906906- if (time_after(jiffies, off_time)) {907907- DRM_DEBUG_KMS("Time already passed");908908- return;909909- }910910- delay = jiffies_to_msecs(off_time - jiffies);911911- if (delay > intel_dp->panel_power_down_delay)912912- delay = intel_dp->panel_power_down_delay;913913- DRM_DEBUG_KMS("Waiting an additional %ld ms\n", delay);914914- msleep(delay);943943+944944+/* Read the current pp_control value, unlocking the register if it945945+ * is locked946946+ 
*/947947+948948+static u32 ironlake_get_pp_control(struct drm_i915_private *dev_priv)949949+{950950+ u32 control = I915_READ(PCH_PP_CONTROL);951951+952952+ control &= ~PANEL_UNLOCK_MASK;953953+ control |= PANEL_UNLOCK_REGS;954954+ return control;915955}916956917957static void ironlake_edp_panel_vdd_on(struct intel_dp *intel_dp)···975921 "eDP VDD already requested on\n");976922977923 intel_dp->want_panel_vdd = true;924924+978925 if (ironlake_edp_have_panel_vdd(intel_dp)) {979926 DRM_DEBUG_KMS("eDP VDD already on\n");980927 return;981928 }982929983983- ironlake_wait_panel_off(intel_dp);984984- pp = I915_READ(PCH_PP_CONTROL);985985- pp &= ~PANEL_UNLOCK_MASK;986986- pp |= PANEL_UNLOCK_REGS;930930+ if (!ironlake_edp_have_panel_power(intel_dp))931931+ ironlake_wait_panel_power_cycle(intel_dp);932932+933933+ pp = ironlake_get_pp_control(dev_priv);987934 pp |= EDP_FORCE_VDD;988935 I915_WRITE(PCH_PP_CONTROL, pp);989936 POSTING_READ(PCH_PP_CONTROL);···1007952 u32 pp;10089531009954 if (!intel_dp->want_panel_vdd && ironlake_edp_have_panel_vdd(intel_dp)) {10101010- pp = I915_READ(PCH_PP_CONTROL);10111011- pp &= ~PANEL_UNLOCK_MASK;10121012- pp |= PANEL_UNLOCK_REGS;955955+ pp = ironlake_get_pp_control(dev_priv);1013956 pp &= ~EDP_FORCE_VDD;1014957 I915_WRITE(PCH_PP_CONTROL, pp);1015958 POSTING_READ(PCH_PP_CONTROL);···1015962 /* Make sure sequencer is idle before allowing subsequent activity */1016963 DRM_DEBUG_KMS("PCH_PP_STATUS: 0x%08x PCH_PP_CONTROL: 0x%08x\n",1017964 I915_READ(PCH_PP_STATUS), I915_READ(PCH_PP_CONTROL));10181018- intel_dp->panel_off_jiffies = jiffies;965965+966966+ msleep(intel_dp->panel_power_down_delay);1019967 }1020968}1021969···1026972 struct intel_dp, panel_vdd_work);1027973 struct drm_device *dev = intel_dp->base.base.dev;102897410291029- mutex_lock(&dev->struct_mutex);975975+ mutex_lock(&dev->mode_config.mutex);1030976 ironlake_panel_vdd_off_sync(intel_dp);10311031- mutex_unlock(&dev->struct_mutex);977977+ 
mutex_unlock(&dev->mode_config.mutex);1032978}10339791034980static void ironlake_edp_panel_vdd_off(struct intel_dp *intel_dp, bool sync)···10389841039985 DRM_DEBUG_KMS("Turn eDP VDD off %d\n", intel_dp->want_panel_vdd);1040986 WARN(!intel_dp->want_panel_vdd, "eDP VDD not forced on");10411041-987987+1042988 intel_dp->want_panel_vdd = false;10439891044990 if (sync) {···10541000 }10551001}1056100210571057-/* Returns true if the panel was already on when called */10581003static void ironlake_edp_panel_on(struct intel_dp *intel_dp)10591004{10601005 struct drm_device *dev = intel_dp->base.base.dev;10611006 struct drm_i915_private *dev_priv = dev->dev_private;10621062- u32 pp, idle_on_mask = PP_ON | PP_SEQUENCE_STATE_ON_IDLE;10071007+ u32 pp;1063100810641009 if (!is_edp(intel_dp))10651010 return;10661066- if (ironlake_edp_have_panel_power(intel_dp))10111011+10121012+ DRM_DEBUG_KMS("Turn eDP power on\n");10131013+10141014+ if (ironlake_edp_have_panel_power(intel_dp)) {10151015+ DRM_DEBUG_KMS("eDP power already on\n");10671016 return;10171017+ }1068101810691069- ironlake_wait_panel_off(intel_dp);10701070- pp = I915_READ(PCH_PP_CONTROL);10711071- pp &= ~PANEL_UNLOCK_MASK;10721072- pp |= PANEL_UNLOCK_REGS;10191019+ ironlake_wait_panel_power_cycle(intel_dp);1073102010211021+ pp = ironlake_get_pp_control(dev_priv);10741022 if (IS_GEN5(dev)) {10751023 /* ILK workaround: disable reset around power sequence */10761024 pp &= ~PANEL_POWER_RESET;···10811025 }1082102610831027 pp |= POWER_TARGET_ON;10281028+ if (!IS_GEN5(dev))10291029+ pp |= PANEL_POWER_RESET;10301030+10841031 I915_WRITE(PCH_PP_CONTROL, pp);10851032 POSTING_READ(PCH_PP_CONTROL);1086103310871087- if (wait_for((I915_READ(PCH_PP_STATUS) & idle_on_mask) == idle_on_mask,10881088- 5000))10891089- DRM_ERROR("panel on wait timed out: 0x%08x\n",10901090- I915_READ(PCH_PP_STATUS));10341034+ ironlake_wait_panel_on(intel_dp);1091103510921036 if (IS_GEN5(dev)) {10931037 pp |= PANEL_POWER_RESET; /* restore panel reset bit 
 */
···
 	}
 }

-static void ironlake_edp_panel_off(struct drm_encoder *encoder)
+static void ironlake_edp_panel_off(struct intel_dp *intel_dp)
 {
-	struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
-	struct drm_device *dev = encoder->dev;
+	struct drm_device *dev = intel_dp->base.base.dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
-	u32 pp, idle_off_mask = PP_ON | PP_SEQUENCE_MASK |
-		PP_CYCLE_DELAY_ACTIVE | PP_SEQUENCE_STATE_MASK;
+	u32 pp;

 	if (!is_edp(intel_dp))
 		return;
-	pp = I915_READ(PCH_PP_CONTROL);
-	pp &= ~PANEL_UNLOCK_MASK;
-	pp |= PANEL_UNLOCK_REGS;

-	if (IS_GEN5(dev)) {
-		/* ILK workaround: disable reset around power sequence */
-		pp &= ~PANEL_POWER_RESET;
-		I915_WRITE(PCH_PP_CONTROL, pp);
-		POSTING_READ(PCH_PP_CONTROL);
-	}
+	DRM_DEBUG_KMS("Turn eDP power off\n");

-	intel_dp->panel_off_jiffies = jiffies;
+	WARN(intel_dp->want_panel_vdd, "Cannot turn power off while VDD is on\n");

-	if (IS_GEN5(dev)) {
-		pp &= ~POWER_TARGET_ON;
-		I915_WRITE(PCH_PP_CONTROL, pp);
-		POSTING_READ(PCH_PP_CONTROL);
-		pp &= ~POWER_TARGET_ON;
-		I915_WRITE(PCH_PP_CONTROL, pp);
-		POSTING_READ(PCH_PP_CONTROL);
-		msleep(intel_dp->panel_power_cycle_delay);
+	pp = ironlake_get_pp_control(dev_priv);
+	pp &= ~(POWER_TARGET_ON | EDP_FORCE_VDD | PANEL_POWER_RESET | EDP_BLC_ENABLE);
+	I915_WRITE(PCH_PP_CONTROL, pp);
+	POSTING_READ(PCH_PP_CONTROL);

-		if (wait_for((I915_READ(PCH_PP_STATUS) & idle_off_mask) == 0, 5000))
-			DRM_ERROR("panel off wait timed out: 0x%08x\n",
-				  I915_READ(PCH_PP_STATUS));
-
-		pp |= PANEL_POWER_RESET; /* restore panel reset bit */
-		I915_WRITE(PCH_PP_CONTROL, pp);
-		POSTING_READ(PCH_PP_CONTROL);
-	}
+	ironlake_wait_panel_off(intel_dp);
 }

 static void ironlake_edp_backlight_on(struct intel_dp *intel_dp)
···
 	 * allowing it to appear.
 	 */
 	msleep(intel_dp->backlight_on_delay);
-	pp = I915_READ(PCH_PP_CONTROL);
-	pp &= ~PANEL_UNLOCK_MASK;
-	pp |= PANEL_UNLOCK_REGS;
+	pp = ironlake_get_pp_control(dev_priv);
 	pp |= EDP_BLC_ENABLE;
 	I915_WRITE(PCH_PP_CONTROL, pp);
 	POSTING_READ(PCH_PP_CONTROL);
···
 		return;

 	DRM_DEBUG_KMS("\n");
-	pp = I915_READ(PCH_PP_CONTROL);
-	pp &= ~PANEL_UNLOCK_MASK;
-	pp |= PANEL_UNLOCK_REGS;
+	pp = ironlake_get_pp_control(dev_priv);
 	pp &= ~EDP_BLC_ENABLE;
 	I915_WRITE(PCH_PP_CONTROL, pp);
 	POSTING_READ(PCH_PP_CONTROL);
···
 {
 	struct intel_dp *intel_dp = enc_to_intel_dp(encoder);

+	ironlake_edp_backlight_off(intel_dp);
+	ironlake_edp_panel_off(intel_dp);
+
 	/* Wake up the sink first */
 	ironlake_edp_panel_vdd_on(intel_dp);
 	intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
+	intel_dp_link_down(intel_dp);
 	ironlake_edp_panel_vdd_off(intel_dp, false);

 	/* Make sure the panel is off before trying to
 	 * change the mode
 	 */
-	ironlake_edp_backlight_off(intel_dp);
-	intel_dp_link_down(intel_dp);
-	ironlake_edp_panel_off(encoder);
 }

 static void intel_dp_commit(struct drm_encoder *encoder)
···
 	intel_dp_start_link_train(intel_dp);
 	ironlake_edp_panel_on(intel_dp);
 	ironlake_edp_panel_vdd_off(intel_dp, true);
-
 	intel_dp_complete_link_train(intel_dp);
 	ironlake_edp_backlight_on(intel_dp);
···
 	uint32_t dp_reg = I915_READ(intel_dp->output_reg);

 	if (mode != DRM_MODE_DPMS_ON) {
+		ironlake_edp_backlight_off(intel_dp);
+		ironlake_edp_panel_off(intel_dp);
+
 		ironlake_edp_panel_vdd_on(intel_dp);
-		if (is_edp(intel_dp))
-			ironlake_edp_backlight_off(intel_dp);
 		intel_dp_sink_dpms(intel_dp, mode);
 		intel_dp_link_down(intel_dp);
-		ironlake_edp_panel_off(encoder);
-		if (is_edp(intel_dp) && !is_pch_edp(intel_dp))
-			ironlake_edp_pll_off(encoder);
 		ironlake_edp_panel_vdd_off(intel_dp, false);
+
+		if (is_cpu_edp(intel_dp))
+			ironlake_edp_pll_off(encoder);
 	} else {
+		if (is_cpu_edp(intel_dp))
+			ironlake_edp_pll_on(encoder);
+
 		ironlake_edp_panel_vdd_on(intel_dp);
 		intel_dp_sink_dpms(intel_dp, mode);
 		if (!(dp_reg & DP_PORT_EN)) {
···
 			ironlake_edp_panel_on(intel_dp);
 			ironlake_edp_panel_vdd_off(intel_dp, true);
 			intel_dp_complete_link_train(intel_dp);
-			ironlake_edp_backlight_on(intel_dp);
 		} else
 			ironlake_edp_panel_vdd_off(intel_dp, false);
 		ironlake_edp_backlight_on(intel_dp);
···
 * link status information
 */
 static bool
-intel_dp_get_link_status(struct intel_dp *intel_dp)
+intel_dp_get_link_status(struct intel_dp *intel_dp, uint8_t link_status[DP_LINK_STATUS_SIZE])
 {
 	return intel_dp_aux_native_read_retry(intel_dp,
 					      DP_LANE0_1_STATUS,
-					      intel_dp->link_status,
+					      link_status,
 					      DP_LINK_STATUS_SIZE);
 }
···
 }

 static uint8_t
-intel_get_adjust_request_voltage(uint8_t link_status[DP_LINK_STATUS_SIZE],
+intel_get_adjust_request_voltage(uint8_t adjust_request[2],
 				 int lane)
 {
-	int	i = DP_ADJUST_REQUEST_LANE0_1 + (lane >> 1);
 	int	s = ((lane & 1) ?
 		     DP_ADJUST_VOLTAGE_SWING_LANE1_SHIFT :
 		     DP_ADJUST_VOLTAGE_SWING_LANE0_SHIFT);
-	uint8_t l = intel_dp_link_status(link_status, i);
+	uint8_t l = adjust_request[lane>>1];

 	return ((l >> s) & 3) << DP_TRAIN_VOLTAGE_SWING_SHIFT;
 }

 static uint8_t
-intel_get_adjust_request_pre_emphasis(uint8_t link_status[DP_LINK_STATUS_SIZE],
+intel_get_adjust_request_pre_emphasis(uint8_t adjust_request[2],
 				      int lane)
 {
-	int	i = DP_ADJUST_REQUEST_LANE0_1 + (lane >> 1);
 	int	s = ((lane & 1) ?
 		     DP_ADJUST_PRE_EMPHASIS_LANE1_SHIFT :
 		     DP_ADJUST_PRE_EMPHASIS_LANE0_SHIFT);
-	uint8_t l = intel_dp_link_status(link_status, i);
+	uint8_t l = adjust_request[lane>>1];

 	return ((l >> s) & 3) << DP_TRAIN_PRE_EMPHASIS_SHIFT;
 }
···
 * a maximum voltage of 800mV and a maximum pre-emphasis of 6dB
 */
 #define I830_DP_VOLTAGE_MAX	    DP_TRAIN_VOLTAGE_SWING_800
+#define I830_DP_VOLTAGE_MAX_CPT	    DP_TRAIN_VOLTAGE_SWING_1200

 static uint8_t
 intel_dp_pre_emphasis_max(uint8_t voltage_swing)
···
 }

 static void
-intel_get_adjust_train(struct intel_dp *intel_dp)
+intel_get_adjust_train(struct intel_dp *intel_dp, uint8_t link_status[DP_LINK_STATUS_SIZE])
 {
+	struct drm_device *dev = intel_dp->base.base.dev;
 	uint8_t v = 0;
 	uint8_t p = 0;
 	int lane;
+	uint8_t *adjust_request = link_status + (DP_ADJUST_REQUEST_LANE0_1 - DP_LANE0_1_STATUS);
+	int voltage_max;

 	for (lane = 0; lane < intel_dp->lane_count; lane++) {
-		uint8_t this_v = intel_get_adjust_request_voltage(intel_dp->link_status, lane);
-		uint8_t this_p = intel_get_adjust_request_pre_emphasis(intel_dp->link_status, lane);
+		uint8_t this_v = intel_get_adjust_request_voltage(adjust_request, lane);
+		uint8_t this_p = intel_get_adjust_request_pre_emphasis(adjust_request, lane);

 		if (this_v > v)
 			v = this_v;
···
 			p = this_p;
 	}

-	if (v >= I830_DP_VOLTAGE_MAX)
-		v = I830_DP_VOLTAGE_MAX | DP_TRAIN_MAX_SWING_REACHED;
+	if (HAS_PCH_CPT(dev) && !is_cpu_edp(intel_dp))
+		voltage_max = I830_DP_VOLTAGE_MAX_CPT;
+	else
+		voltage_max = I830_DP_VOLTAGE_MAX;
+	if (v >= voltage_max)
+		v = voltage_max | DP_TRAIN_MAX_SWING_REACHED;

 	if (p >= intel_dp_pre_emphasis_max(v))
 		p = intel_dp_pre_emphasis_max(v) | DP_TRAIN_MAX_PRE_EMPHASIS_REACHED;
···
 }

 static uint32_t
-intel_dp_signal_levels(uint8_t train_set, int lane_count)
+intel_dp_signal_levels(uint8_t train_set)
 {
 	uint32_t	signal_levels = 0;
···
 intel_get_lane_status(uint8_t link_status[DP_LINK_STATUS_SIZE],
 		      int lane)
 {
-	int i = DP_LANE0_1_STATUS + (lane >> 1);
 	int s = (lane & 1) * 4;
-	uint8_t l = intel_dp_link_status(link_status, i);
+	uint8_t l = link_status[lane>>1];

 	return (l >> s) & 0xf;
 }
···
 			 DP_LANE_CHANNEL_EQ_DONE|\
 			 DP_LANE_SYMBOL_LOCKED)
 static bool
-intel_channel_eq_ok(struct intel_dp *intel_dp)
+intel_channel_eq_ok(struct intel_dp *intel_dp, uint8_t link_status[DP_LINK_STATUS_SIZE])
 {
 	uint8_t lane_align;
 	uint8_t lane_status;
 	int lane;

-	lane_align = intel_dp_link_status(intel_dp->link_status,
+	lane_align = intel_dp_link_status(link_status,
 					  DP_LANE_ALIGN_STATUS_UPDATED);
 	if ((lane_align & DP_INTERLANE_ALIGN_DONE) == 0)
 		return false;
 	for (lane = 0; lane < intel_dp->lane_count; lane++) {
-		lane_status = intel_get_lane_status(intel_dp->link_status, lane);
+		lane_status = intel_get_lane_status(link_status, lane);
 		if ((lane_status & CHANNEL_EQ_BITS) != CHANNEL_EQ_BITS)
 			return false;
 	}
···

 	ret = intel_dp_aux_native_write(intel_dp,
 				       DP_TRAINING_LANE0_SET,
-				       intel_dp->train_set, 4);
-	if (ret != 4)
+				       intel_dp->train_set,
+				       intel_dp->lane_count);
+	if (ret != intel_dp->lane_count)
 		return false;

 	return true;
···
 	int i;
 	uint8_t voltage;
 	bool clock_recovery = false;
-	int tries;
+	int voltage_tries, loop_tries;
 	u32 reg;
 	uint32_t DP = intel_dp->DP;
···
 	DP &= ~DP_LINK_TRAIN_MASK;
 	memset(intel_dp->train_set, 0, 4);
 	voltage = 0xff;
-	tries = 0;
+	voltage_tries = 0;
+	loop_tries = 0;
 	clock_recovery = false;
 	for (;;) {
 		/* Use intel_dp->train_set[0] to set the voltage and pre emphasis values */
+		uint8_t	    link_status[DP_LINK_STATUS_SIZE];
 		uint32_t    signal_levels;
-		if (IS_GEN6(dev) && is_edp(intel_dp)) {
+
+		if (IS_GEN6(dev) && is_cpu_edp(intel_dp)) {
 			signal_levels = intel_gen6_edp_signal_levels(intel_dp->train_set[0]);
 			DP = (DP & ~EDP_LINK_TRAIN_VOL_EMP_MASK_SNB) | signal_levels;
 		} else {
-			signal_levels = intel_dp_signal_levels(intel_dp->train_set[0], intel_dp->lane_count);
+			signal_levels = intel_dp_signal_levels(intel_dp->train_set[0]);
+			DRM_DEBUG_KMS("training pattern 1 signal levels %08x\n", signal_levels);
 			DP = (DP & ~(DP_VOLTAGE_MASK|DP_PRE_EMPHASIS_MASK)) | signal_levels;
 		}
···
 		/* Set training pattern 1 */

 		udelay(100);
-		if (!intel_dp_get_link_status(intel_dp))
+		if (!intel_dp_get_link_status(intel_dp, link_status)) {
+			DRM_ERROR("failed to get link status\n");
 			break;
+		}

-		if (intel_clock_recovery_ok(intel_dp->link_status, intel_dp->lane_count)) {
+		if (intel_clock_recovery_ok(link_status, intel_dp->lane_count)) {
+			DRM_DEBUG_KMS("clock recovery OK\n");
 			clock_recovery = true;
 			break;
 		}
···
 		for (i = 0; i < intel_dp->lane_count; i++)
 			if ((intel_dp->train_set[i] & DP_TRAIN_MAX_SWING_REACHED) == 0)
 				break;
-		if (i == intel_dp->lane_count)
-			break;
+		if (i == intel_dp->lane_count) {
+			++loop_tries;
+			if (loop_tries == 5) {
+				DRM_DEBUG_KMS("too many full retries, give up\n");
+				break;
+			}
+			memset(intel_dp->train_set, 0, 4);
+			voltage_tries = 0;
+			continue;
+		}

 		/* Check to see if we've tried the same voltage 5 times */
 		if ((intel_dp->train_set[0] & DP_TRAIN_VOLTAGE_SWING_MASK) == voltage) {
-			++tries;
-			if (tries == 5)
+			++voltage_tries;
+			if (voltage_tries == 5) {
+				DRM_DEBUG_KMS("too many voltage retries, give up\n");
 				break;
+			}
 		} else
-			tries = 0;
+			voltage_tries = 0;
 		voltage = intel_dp->train_set[0] & DP_TRAIN_VOLTAGE_SWING_MASK;

 		/* Compute new intel_dp->train_set as requested by target */
-		intel_get_adjust_train(intel_dp);
+		intel_get_adjust_train(intel_dp, link_status);
 	}

 	intel_dp->DP = DP;
···
 	for (;;) {
 		/* Use intel_dp->train_set[0] to set the voltage and pre emphasis values */
 		uint32_t    signal_levels;
+		uint8_t	    link_status[DP_LINK_STATUS_SIZE];

 		if (cr_tries > 5) {
 			DRM_ERROR("failed to train DP, aborting\n");
···
 			break;
 		}

-		if (IS_GEN6(dev) && is_edp(intel_dp)) {
+		if (IS_GEN6(dev) && is_cpu_edp(intel_dp)) {
 			signal_levels = intel_gen6_edp_signal_levels(intel_dp->train_set[0]);
 			DP = (DP & ~EDP_LINK_TRAIN_VOL_EMP_MASK_SNB) | signal_levels;
 		} else {
-			signal_levels = intel_dp_signal_levels(intel_dp->train_set[0], intel_dp->lane_count);
+			signal_levels = intel_dp_signal_levels(intel_dp->train_set[0]);
 			DP = (DP & ~(DP_VOLTAGE_MASK|DP_PRE_EMPHASIS_MASK)) | signal_levels;
 		}
···
 			break;

 		udelay(400);
-		if (!intel_dp_get_link_status(intel_dp))
+		if (!intel_dp_get_link_status(intel_dp, link_status))
 			break;

 		/* Make sure clock is still ok */
-		if (!intel_clock_recovery_ok(intel_dp->link_status, intel_dp->lane_count)) {
+		if (!intel_clock_recovery_ok(link_status, intel_dp->lane_count)) {
 			intel_dp_start_link_train(intel_dp);
 			cr_tries++;
 			continue;
 		}

-		if (intel_channel_eq_ok(intel_dp)) {
+		if (intel_channel_eq_ok(intel_dp, link_status)) {
 			channel_eq = true;
 			break;
 		}
···
 		}

 		/* Compute new intel_dp->train_set as requested by target */
-		intel_get_adjust_train(intel_dp);
+		intel_get_adjust_train(intel_dp, link_status);
 		++tries;
 	}
···

 	msleep(17);

-	if (is_edp(intel_dp))
-		DP |= DP_LINK_TRAIN_OFF;
+	if (is_edp(intel_dp)) {
+		if (HAS_PCH_CPT(dev) && !is_cpu_edp(intel_dp))
+			DP |= DP_LINK_TRAIN_OFF_CPT;
+		else
+			DP |= DP_LINK_TRAIN_OFF;
+	}

 	if (!HAS_PCH_CPT(dev) &&
 	    I915_READ(intel_dp->output_reg) & DP_PIPEB_SELECT) {
···
 intel_dp_check_link_status(struct intel_dp *intel_dp)
 {
 	u8 sink_irq_vector;
+	u8 link_status[DP_LINK_STATUS_SIZE];

 	if (intel_dp->dpms_mode != DRM_MODE_DPMS_ON)
 		return;
···
 		return;

 	/* Try to read receiver status if the link appears to be up */
-	if (!intel_dp_get_link_status(intel_dp)) {
+	if (!intel_dp_get_link_status(intel_dp, link_status)) {
 		intel_dp_link_down(intel_dp);
 		return;
 	}
···
 			DRM_DEBUG_DRIVER("CP or sink specific irq unhandled\n");
 	}

-	if (!intel_channel_eq_ok(intel_dp)) {
+	if (!intel_channel_eq_ok(intel_dp, link_status)) {
 		DRM_DEBUG_KMS("%s: channel EQ not ok, retraining\n",
 			      drm_get_encoder_name(&intel_dp->base.base));
 		intel_dp_start_link_train(intel_dp);
···
 			continue;

 		intel_dp = enc_to_intel_dp(encoder);
-		if (intel_dp->base.type == INTEL_OUTPUT_DISPLAYPORT)
+		if (intel_dp->base.type == INTEL_OUTPUT_DISPLAYPORT ||
+		    intel_dp->base.type == INTEL_OUTPUT_EDP)
 			return intel_dp->output_reg;
 	}
···

 	cur.t8 = (pp_on & PANEL_LIGHT_ON_DELAY_MASK) >>
 		PANEL_LIGHT_ON_DELAY_SHIFT;
-
+
 	cur.t9 = (pp_off & PANEL_LIGHT_OFF_DELAY_MASK) >>
 		PANEL_LIGHT_OFF_DELAY_SHIFT;
···
 	DRM_DEBUG_KMS("backlight on delay %d, off delay %d\n",
 		      intel_dp->backlight_on_delay, intel_dp->backlight_off_delay);

-	intel_dp->panel_off_jiffies = jiffies - intel_dp->panel_power_down_delay;
-
 	ironlake_edp_panel_vdd_on(intel_dp);
 	ret = intel_dp_get_dpcd(intel_dp);
 	ironlake_edp_panel_vdd_off(intel_dp, false);
+
 	if (ret) {
 		if (intel_dp->dpcd[DP_DPCD_REV] >= 0x11)
 			dev_priv->no_aux_handshake =
···
 		track->db_depth_control = radeon_get_ib_value(p, idx);
 		break;
 	case R_028010_DB_DEPTH_INFO:
-		if (r600_cs_packet_next_is_pkt3_nop(p)) {
+		if (!p->keep_tiling_flags &&
+		    r600_cs_packet_next_is_pkt3_nop(p)) {
 			r = r600_cs_packet_next_reloc(p, &reloc);
 			if (r) {
 				dev_warn(p->dev, "bad SET_CONTEXT_REG "
···
 	case R_0280B4_CB_COLOR5_INFO:
 	case R_0280B8_CB_COLOR6_INFO:
 	case R_0280BC_CB_COLOR7_INFO:
-		if (r600_cs_packet_next_is_pkt3_nop(p)) {
+		if (!p->keep_tiling_flags &&
+		    r600_cs_packet_next_is_pkt3_nop(p)) {
 			r = r600_cs_packet_next_reloc(p, &reloc);
 			if (r) {
 				dev_err(p->dev, "bad SET_CONTEXT_REG 0x%04X\n", reg);
···
 	mip_offset <<= 8;

 	word0 = radeon_get_ib_value(p, idx + 0);
-	if (tiling_flags & RADEON_TILING_MACRO)
-		word0 |= S_038000_TILE_MODE(V_038000_ARRAY_2D_TILED_THIN1);
-	else if (tiling_flags & RADEON_TILING_MICRO)
-		word0 |= S_038000_TILE_MODE(V_038000_ARRAY_1D_TILED_THIN1);
+	if (!p->keep_tiling_flags) {
+		if (tiling_flags & RADEON_TILING_MACRO)
+			word0 |= S_038000_TILE_MODE(V_038000_ARRAY_2D_TILED_THIN1);
+		else if (tiling_flags & RADEON_TILING_MICRO)
+			word0 |= S_038000_TILE_MODE(V_038000_ARRAY_1D_TILED_THIN1);
+	}
 	word1 = radeon_get_ib_value(p, idx + 1);
 	w0 = G_038000_TEX_WIDTH(word0) + 1;
 	h0 = G_038004_TEX_HEIGHT(word1) + 1;
···
 			return -EINVAL;
 		}
 		base_offset = (u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff);
-		if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO)
-			ib[idx+1+(i*7)+0] |= S_038000_TILE_MODE(V_038000_ARRAY_2D_TILED_THIN1);
-		else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO)
-			ib[idx+1+(i*7)+0] |= S_038000_TILE_MODE(V_038000_ARRAY_1D_TILED_THIN1);
+		if (!p->keep_tiling_flags) {
+			if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO)
+				ib[idx+1+(i*7)+0] |= S_038000_TILE_MODE(V_038000_ARRAY_2D_TILED_THIN1);
+			else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO)
+				ib[idx+1+(i*7)+0] |= S_038000_TILE_MODE(V_038000_ARRAY_1D_TILED_THIN1);
+		}
 		texture = reloc->robj;
 		/* tex mip base */
 		r = r600_cs_packet_next_reloc(p, &reloc);
+2-1
drivers/gpu/drm/radeon/radeon.h
···
 	struct radeon_ib	*ib;
 	void			*track;
 	unsigned		family;
-	int parser_error;
+	int			parser_error;
+	bool			keep_tiling_flags;
 };

 extern int radeon_cs_update_pages(struct radeon_cs_parser *p, int pg_idx);
+86-116
drivers/gpu/drm/radeon/radeon_atombios.c
···
 	struct _ATOM_SUPPORTED_DEVICES_INFO_2d1 info_2d1;
 };

+static void radeon_lookup_i2c_gpio_quirks(struct radeon_device *rdev,
+					  ATOM_GPIO_I2C_ASSIGMENT *gpio,
+					  u8 index)
+{
+	/* r4xx mask is technically not used by the hw, so patch in the legacy mask bits */
+	if ((rdev->family == CHIP_R420) ||
+	    (rdev->family == CHIP_R423) ||
+	    (rdev->family == CHIP_RV410)) {
+		if ((le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x0018) ||
+		    (le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x0019) ||
+		    (le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x001a)) {
+			gpio->ucClkMaskShift = 0x19;
+			gpio->ucDataMaskShift = 0x18;
+		}
+	}
+
+	/* some evergreen boards have bad data for this entry */
+	if (ASIC_IS_DCE4(rdev)) {
+		if ((index == 7) &&
+		    (le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x1936) &&
+		    (gpio->sucI2cId.ucAccess == 0)) {
+			gpio->sucI2cId.ucAccess = 0x97;
+			gpio->ucDataMaskShift = 8;
+			gpio->ucDataEnShift = 8;
+			gpio->ucDataY_Shift = 8;
+			gpio->ucDataA_Shift = 8;
+		}
+	}
+
+	/* some DCE3 boards have bad data for this entry */
+	if (ASIC_IS_DCE3(rdev)) {
+		if ((index == 4) &&
+		    (le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x1fda) &&
+		    (gpio->sucI2cId.ucAccess == 0x94))
+			gpio->sucI2cId.ucAccess = 0x14;
+	}
+}
+
+static struct radeon_i2c_bus_rec radeon_get_bus_rec_for_i2c_gpio(ATOM_GPIO_I2C_ASSIGMENT *gpio)
+{
+	struct radeon_i2c_bus_rec i2c;
+
+	memset(&i2c, 0, sizeof(struct radeon_i2c_bus_rec));
+
+	i2c.mask_clk_reg = le16_to_cpu(gpio->usClkMaskRegisterIndex) * 4;
+	i2c.mask_data_reg = le16_to_cpu(gpio->usDataMaskRegisterIndex) * 4;
+	i2c.en_clk_reg = le16_to_cpu(gpio->usClkEnRegisterIndex) * 4;
+	i2c.en_data_reg = le16_to_cpu(gpio->usDataEnRegisterIndex) * 4;
+	i2c.y_clk_reg = le16_to_cpu(gpio->usClkY_RegisterIndex) * 4;
+	i2c.y_data_reg = le16_to_cpu(gpio->usDataY_RegisterIndex) * 4;
+	i2c.a_clk_reg = le16_to_cpu(gpio->usClkA_RegisterIndex) * 4;
+	i2c.a_data_reg = le16_to_cpu(gpio->usDataA_RegisterIndex) * 4;
+	i2c.mask_clk_mask = (1 << gpio->ucClkMaskShift);
+	i2c.mask_data_mask = (1 << gpio->ucDataMaskShift);
+	i2c.en_clk_mask = (1 << gpio->ucClkEnShift);
+	i2c.en_data_mask = (1 << gpio->ucDataEnShift);
+	i2c.y_clk_mask = (1 << gpio->ucClkY_Shift);
+	i2c.y_data_mask = (1 << gpio->ucDataY_Shift);
+	i2c.a_clk_mask = (1 << gpio->ucClkA_Shift);
+	i2c.a_data_mask = (1 << gpio->ucDataA_Shift);
+
+	if (gpio->sucI2cId.sbfAccess.bfHW_Capable)
+		i2c.hw_capable = true;
+	else
+		i2c.hw_capable = false;
+
+	if (gpio->sucI2cId.ucAccess == 0xa0)
+		i2c.mm_i2c = true;
+	else
+		i2c.mm_i2c = false;
+
+	i2c.i2c_id = gpio->sucI2cId.ucAccess;
+
+	if (i2c.mask_clk_reg)
+		i2c.valid = true;
+	else
+		i2c.valid = false;
+
+	return i2c;
+}
+
 static struct radeon_i2c_bus_rec radeon_lookup_i2c_gpio(struct radeon_device *rdev,
 							uint8_t id)
 {
···
 		for (i = 0; i < num_indices; i++) {
 			gpio = &i2c_info->asGPIO_Info[i];

-			/* r4xx mask is technically not used by the hw, so patch in the legacy mask bits */
-			if ((rdev->family == CHIP_R420) ||
-			    (rdev->family == CHIP_R423) ||
-			    (rdev->family == CHIP_RV410)) {
-				if ((le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x0018) ||
-				    (le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x0019) ||
-				    (le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x001a)) {
-					gpio->ucClkMaskShift = 0x19;
-					gpio->ucDataMaskShift = 0x18;
-				}
-			}
-
-			/* some evergreen boards have bad data for this entry */
-			if (ASIC_IS_DCE4(rdev)) {
-				if ((i == 7) &&
-				    (le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x1936) &&
-				    (gpio->sucI2cId.ucAccess == 0)) {
-					gpio->sucI2cId.ucAccess = 0x97;
-					gpio->ucDataMaskShift = 8;
-					gpio->ucDataEnShift = 8;
-					gpio->ucDataY_Shift = 8;
-					gpio->ucDataA_Shift = 8;
-				}
-			}
-
-			/* some DCE3 boards have bad data for this entry */
-			if (ASIC_IS_DCE3(rdev)) {
-				if ((i == 4) &&
-				    (le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x1fda) &&
-				    (gpio->sucI2cId.ucAccess == 0x94))
-					gpio->sucI2cId.ucAccess = 0x14;
-			}
+			radeon_lookup_i2c_gpio_quirks(rdev, gpio, i);

 			if (gpio->sucI2cId.ucAccess == id) {
-				i2c.mask_clk_reg = le16_to_cpu(gpio->usClkMaskRegisterIndex) * 4;
-				i2c.mask_data_reg = le16_to_cpu(gpio->usDataMaskRegisterIndex) * 4;
-				i2c.en_clk_reg = le16_to_cpu(gpio->usClkEnRegisterIndex) * 4;
-				i2c.en_data_reg = le16_to_cpu(gpio->usDataEnRegisterIndex) * 4;
-				i2c.y_clk_reg = le16_to_cpu(gpio->usClkY_RegisterIndex) * 4;
-				i2c.y_data_reg = le16_to_cpu(gpio->usDataY_RegisterIndex) * 4;
-				i2c.a_clk_reg = le16_to_cpu(gpio->usClkA_RegisterIndex) * 4;
-				i2c.a_data_reg = le16_to_cpu(gpio->usDataA_RegisterIndex) * 4;
-				i2c.mask_clk_mask = (1 << gpio->ucClkMaskShift);
-				i2c.mask_data_mask = (1 << gpio->ucDataMaskShift);
-				i2c.en_clk_mask = (1 << gpio->ucClkEnShift);
-				i2c.en_data_mask = (1 << gpio->ucDataEnShift);
-				i2c.y_clk_mask = (1 << gpio->ucClkY_Shift);
-				i2c.y_data_mask = (1 << gpio->ucDataY_Shift);
-				i2c.a_clk_mask = (1 << gpio->ucClkA_Shift);
-				i2c.a_data_mask = (1 << gpio->ucDataA_Shift);
-
-				if (gpio->sucI2cId.sbfAccess.bfHW_Capable)
-					i2c.hw_capable = true;
-				else
-					i2c.hw_capable = false;
-
-				if (gpio->sucI2cId.ucAccess == 0xa0)
-					i2c.mm_i2c = true;
-				else
-					i2c.mm_i2c = false;
-
-				i2c.i2c_id = gpio->sucI2cId.ucAccess;
-
-				if (i2c.mask_clk_reg)
-					i2c.valid = true;
+				i2c = radeon_get_bus_rec_for_i2c_gpio(gpio);
 				break;
 			}
 		}
···
 	int i, num_indices;
 	char stmp[32];

-	memset(&i2c, 0, sizeof(struct radeon_i2c_bus_rec));
-
 	if (atom_parse_data_header(ctx, index, &size, NULL, NULL, &data_offset)) {
 		i2c_info = (struct _ATOM_GPIO_I2C_INFO *)(ctx->bios + data_offset);
···

 		for (i = 0; i < num_indices; i++) {
 			gpio = &i2c_info->asGPIO_Info[i];
-			i2c.valid = false;

-			/* some evergreen boards have bad data for this entry */
-			if (ASIC_IS_DCE4(rdev)) {
-				if ((i == 7) &&
-				    (le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x1936) &&
-				    (gpio->sucI2cId.ucAccess == 0)) {
-					gpio->sucI2cId.ucAccess = 0x97;
-					gpio->ucDataMaskShift = 8;
-					gpio->ucDataEnShift = 8;
-					gpio->ucDataY_Shift = 8;
-					gpio->ucDataA_Shift = 8;
-				}
-			}
+			radeon_lookup_i2c_gpio_quirks(rdev, gpio, i);

-			/* some DCE3 boards have bad data for this entry */
-			if (ASIC_IS_DCE3(rdev)) {
-				if ((i == 4) &&
-				    (le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x1fda) &&
-				    (gpio->sucI2cId.ucAccess == 0x94))
-					gpio->sucI2cId.ucAccess = 0x14;
-			}
+			i2c = radeon_get_bus_rec_for_i2c_gpio(gpio);

-			i2c.mask_clk_reg = le16_to_cpu(gpio->usClkMaskRegisterIndex) * 4;
-			i2c.mask_data_reg = le16_to_cpu(gpio->usDataMaskRegisterIndex) * 4;
-			i2c.en_clk_reg = le16_to_cpu(gpio->usClkEnRegisterIndex) * 4;
-			i2c.en_data_reg = le16_to_cpu(gpio->usDataEnRegisterIndex) * 4;
-			i2c.y_clk_reg = le16_to_cpu(gpio->usClkY_RegisterIndex) * 4;
-			i2c.y_data_reg = le16_to_cpu(gpio->usDataY_RegisterIndex) * 4;
-			i2c.a_clk_reg = le16_to_cpu(gpio->usClkA_RegisterIndex) * 4;
-			i2c.a_data_reg = le16_to_cpu(gpio->usDataA_RegisterIndex) * 4;
-			i2c.mask_clk_mask = (1 << gpio->ucClkMaskShift);
-			i2c.mask_data_mask = (1 << gpio->ucDataMaskShift);
-			i2c.en_clk_mask = (1 << gpio->ucClkEnShift);
-			i2c.en_data_mask = (1 << gpio->ucDataEnShift);
-			i2c.y_clk_mask = (1 << gpio->ucClkY_Shift);
-			i2c.y_data_mask = (1 << gpio->ucDataY_Shift);
-			i2c.a_clk_mask = (1 << gpio->ucClkA_Shift);
-			i2c.a_data_mask = (1 << gpio->ucDataA_Shift);
-
-			if (gpio->sucI2cId.sbfAccess.bfHW_Capable)
-				i2c.hw_capable = true;
-			else
-				i2c.hw_capable = false;
-
-			if (gpio->sucI2cId.ucAccess == 0xa0)
-				i2c.mm_i2c = true;
-			else
-				i2c.mm_i2c = false;
-
-			i2c.i2c_id = gpio->sucI2cId.ucAccess;
-
-			if (i2c.mask_clk_reg) {
-				i2c.valid = true;
+			if (i2c.valid) {
 				sprintf(stmp, "0x%x", i2c.i2c_id);
 				rdev->i2c_bus[i] = radeon_i2c_create(rdev->ddev, &i2c, stmp);
 			}
···
 		SYM_LSB(IBCCtrlA_0, MaxPktLen);
 	ppd->cpspec->ibcctrl_a = ibc; /* without linkcmd or linkinitcmd! */

-	/* initially come up waiting for TS1, without sending anything. */
-	val = ppd->cpspec->ibcctrl_a | (QLOGIC_IB_IBCC_LINKINITCMD_DISABLE <<
-		QLOGIC_IB_IBCC_LINKINITCMD_SHIFT);
-
-	ppd->cpspec->ibcctrl_a = val;
 	/*
 	 * Reset the PCS interface to the serdes (and also ibc, which is still
 	 * in reset from above).  Writes new value of ibcctrl_a as last step.
 	 */
 	qib_7322_mini_pcs_reset(ppd);
-	qib_write_kreg(dd, kr_scratch, 0ULL);
-	/* clear the linkinit cmds */
-	ppd->cpspec->ibcctrl_a &= ~SYM_MASK(IBCCtrlA_0, LinkInitCmd);

 	if (!ppd->cpspec->ibcctrl_b) {
 		unsigned lse = ppd->link_speed_enabled;
···
 	/* Enable port */
 	ppd->cpspec->ibcctrl_a |= SYM_MASK(IBCCtrlA_0, IBLinkEn);
 	set_vls(ppd);
+
+	/* initially come up DISABLED, without sending anything. */
+	val = ppd->cpspec->ibcctrl_a | (QLOGIC_IB_IBCC_LINKINITCMD_DISABLE <<
+		QLOGIC_IB_IBCC_LINKINITCMD_SHIFT);
+	qib_write_kreg_port(ppd, krp_ibcctrl_a, val);
+	qib_write_kreg(dd, kr_scratch, 0ULL);
+	/* clear the linkinit cmds */
+	ppd->cpspec->ibcctrl_a = val & ~SYM_MASK(IBCCtrlA_0, LinkInitCmd);

 	/* be paranoid against later code motion, etc. */
 	spin_lock_irqsave(&dd->cspec->rcvmod_lock, flags);
···
 		   off */
 	if (ppd->dd->flags & QIB_HAS_QSFP) {
 		qd->t_insert = get_jiffies_64();
-		schedule_work(&qd->work);
+		queue_work(ib_wq, &qd->work);
 	}
 	spin_lock_irqsave(&ppd->sdma_lock, flags);
 	if (__qib_sdma_running(ppd))
-12
drivers/infiniband/hw/qib/qib_qsfp.c
···
 	udelay(20); /* Generous RST dwell */

 	dd->f_gpio_mod(dd, mask, mask, mask);
-	/* Spec says module can take up to two seconds! */
-	mask = QSFP_GPIO_MOD_PRS_N;
-	if (qd->ppd->hw_pidx)
-		mask <<= QSFP_GPIO_PORT2_SHIFT;
-
-	/* Do not try to wait here. Better to let event handle it */
-	if (!qib_qsfp_mod_present(qd->ppd))
-		goto bail;
-	/* We see a module, but it may be unwise to look yet. Just schedule */
-	qd->t_insert = get_jiffies_64();
-	queue_work(ib_wq, &qd->work);
-bail:
 	return;
 }
···

 	spin_lock_irqsave(&priv->lock, flags);

-	if (ah) {
+	if (!IS_ERR_OR_NULL(ah)) {
 		path->pathrec = *pathrec;

 		old_ah   = path->ah;
···
 	return 0;
 }

+/* called with rcu_read_lock */
 static void neigh_add_path(struct sk_buff *skb, struct net_device *dev)
 {
 	struct ipoib_dev_priv *priv = netdev_priv(dev);
···
 	spin_unlock_irqrestore(&priv->lock, flags);
 }

+/* called with rcu_read_lock */
 static void ipoib_path_lookup(struct sk_buff *skb, struct net_device *dev)
 {
 	struct ipoib_dev_priv *priv = netdev_priv(skb->dev);
···
 	struct neighbour *n = NULL;
 	unsigned long flags;

+	rcu_read_lock();
 	if (likely(skb_dst(skb)))
 		n = dst_get_neighbour(skb_dst(skb));

 	if (likely(n)) {
 		if (unlikely(!*to_ipoib_neigh(n))) {
 			ipoib_path_lookup(skb, dev);
-			return NETDEV_TX_OK;
+			goto unlock;
 		}

 		neigh = *to_ipoib_neigh(n);
···
 			ipoib_neigh_free(dev, neigh);
 			spin_unlock_irqrestore(&priv->lock, flags);
 			ipoib_path_lookup(skb, dev);
-			return NETDEV_TX_OK;
+			goto unlock;
 		}

 		if (ipoib_cm_get(neigh)) {
 			if (ipoib_cm_up(neigh)) {
 				ipoib_cm_send(dev, skb, ipoib_cm_get(neigh));
-				return NETDEV_TX_OK;
+				goto unlock;
 			}
 		} else if (neigh->ah) {
 			ipoib_send(dev, skb, neigh->ah, IPOIB_QPN(n->ha));
-			return NETDEV_TX_OK;
+			goto unlock;
 		}

 		if (skb_queue_len(&neigh->queue) < IPOIB_MAX_PATH_REC_QUEUE) {
···
 					   phdr->hwaddr + 4);
 			dev_kfree_skb_any(skb);
 			++dev->stats.tx_dropped;
-			return NETDEV_TX_OK;
+			goto unlock;
 		}

 			unicast_arp_send(skb, dev, phdr);
 		}
 	}
-
+unlock:
+	rcu_read_unlock();
 	return NETDEV_TX_OK;
 }
···
 	dst = skb_dst(skb);
 	n = NULL;
 	if (dst)
-		n = dst_get_neighbour(dst);
+		n = dst_get_neighbour_raw(dst);
 	if ((!dst || !n) && daddr) {
 		struct ipoib_pseudoheader *phdr =
 			(struct ipoib_pseudoheader *) skb_push(skb, sizeof *phdr);
+9-4
drivers/infiniband/ulp/ipoib/ipoib_multicast.c
···
 		av.grh.dgid = mcast->mcmember.mgid;

 		ah = ipoib_create_ah(dev, priv->pd, &av);
-		if (!ah) {
-			ipoib_warn(priv, "ib_address_create failed\n");
+		if (IS_ERR(ah)) {
+			ipoib_warn(priv, "ib_address_create failed %ld\n",
+				   -PTR_ERR(ah));
+			/* use original error */
+			return PTR_ERR(ah);
 		} else {
 			spin_lock_irq(&priv->lock);
 			mcast->ah = ah;
···

 			skb->dev = dev;
 			if (dst)
-				n = dst_get_neighbour(dst);
+				n = dst_get_neighbour_raw(dst);
 			if (!dst || !n) {
 				/* put pseudoheader back on for next time */
 				skb_push(skb, sizeof (struct ipoib_pseudoheader));
···
 	if (mcast && mcast->ah) {
 		struct dst_entry *dst = skb_dst(skb);
 		struct neighbour *n = NULL;
+
+		rcu_read_lock();
 		if (dst)
 			n = dst_get_neighbour(dst);
 		if (n && !*to_ipoib_neigh(n)) {
···
 				list_add_tail(&neigh->list, &mcast->neigh_list);
 			}
 		}
-
+		rcu_read_unlock();
 		spin_unlock_irqrestore(&priv->lock, flags);
 		ipoib_send(dev, skb, mcast->ah, IB_MULTICAST_QPN);
 		return;
+18-8
drivers/input/mouse/elantech.c
···
 */
 static int elantech_set_properties(struct elantech_data *etd)
 {
+	/* This represents the version of IC body. */
 	int ver = (etd->fw_version & 0x0f0000) >> 16;

+	/* Early version of Elan touchpads doesn't obey the rule. */
 	if (etd->fw_version < 0x020030 || etd->fw_version == 0x020600)
 		etd->hw_version = 1;
-	else if (etd->fw_version < 0x150600)
-		etd->hw_version = 2;
-	else if (ver == 5)
-		etd->hw_version = 3;
-	else if (ver == 6)
-		etd->hw_version = 4;
-	else
-		return -1;
+	else {
+		switch (ver) {
+		case 2:
+		case 4:
+			etd->hw_version = 2;
+			break;
+		case 5:
+			etd->hw_version = 3;
+			break;
+		case 6:
+			etd->hw_version = 4;
+			break;
+		default:
+			return -1;
+		}
+	}

 	/*
 	 * Turn on packet checking by default.
···
 	  module will be called bmp085.

 config PCH_PHUB
-	tristate "Intel EG20T PCH / OKI SEMICONDUCTOR IOH(ML7213/ML7223) PHUB"
+	tristate "Intel EG20T PCH/LAPIS Semicon IOH(ML7213/ML7223/ML7831) PHUB"
 	depends on PCI
 	help
 	  This driver is for PCH(Platform controller Hub) PHUB(Packet Hub) of
···
 	  processor. The Topcliff has MAC address and Option ROM data in SROM.
 	  This driver can access MAC address and Option ROM data in SROM.

-	  This driver also can be used for OKI SEMICONDUCTOR IOH(Input/
-	  Output Hub), ML7213 and ML7223.
-	  ML7213 IOH is for IVI(In-Vehicle Infotainment) use and ML7223 IOH is
-	  for MP(Media Phone) use.
-	  ML7213/ML7223 is companion chip for Intel Atom E6xx series.
-	  ML7213/ML7223 is completely compatible for Intel EG20T PCH.
+	  This driver also can be used for LAPIS Semiconductor's IOH,
+	  ML7213/ML7223/ML7831.
+	  ML7213 which is for IVI(In-Vehicle Infotainment) use.
+	  ML7223 IOH is for MP(Media Phone) use.
+	  ML7831 IOH is for general purpose use.
+	  ML7213/ML7223/ML7831 is companion chip for Intel Atom E6xx series.
+	  ML7213/ML7223/ML7831 is completely compatible for Intel EG20T PCH.

 	  To compile this driver as a module, choose M here: the module will
 	  be called pch_phub.
···25542554 }25552555}2556255625572557-static __be32 bond_glean_dev_ip(struct net_device *dev)25582558-{25592559- struct in_device *idev;25602560- struct in_ifaddr *ifa;25612561- __be32 addr = 0;25622562-25632563- if (!dev)25642564- return 0;25652565-25662566- rcu_read_lock();25672567- idev = __in_dev_get_rcu(dev);25682568- if (!idev)25692569- goto out;25702570-25712571- ifa = idev->ifa_list;25722572- if (!ifa)25732573- goto out;25742574-25752575- addr = ifa->ifa_local;25762576-out:25772577- rcu_read_unlock();25782578- return addr;25792579-}25802580-25812557static int bond_has_this_ip(struct bonding *bond, __be32 ip)25822558{25832559 struct vlan_entry *vlan;···32993323 struct bonding *bond;33003324 struct vlan_entry *vlan;3301332533263326+ /* we only care about primary address */33273327+ if(ifa->ifa_flags & IFA_F_SECONDARY)33283328+ return NOTIFY_DONE;33293329+33023330 list_for_each_entry(bond, &bn->dev_list, bond_list) {33033331 if (bond->dev == event_dev) {33043332 switch (event) {···33103330 bond->master_ip = ifa->ifa_local;33113331 return NOTIFY_OK;33123332 case NETDEV_DOWN:33133313- bond->master_ip = bond_glean_dev_ip(bond->dev);33333333+ bond->master_ip = 0;33143334 return NOTIFY_OK;33153335 default:33163336 return NOTIFY_DONE;···33263346 vlan->vlan_ip = ifa->ifa_local;33273347 return NOTIFY_OK;33283348 case NETDEV_DOWN:33293329- vlan->vlan_ip =33303330- bond_glean_dev_ip(vlan_dev);33493349+ vlan->vlan_ip = 0;33313350 return NOTIFY_OK;33323351 default:33333352 return NOTIFY_DONE;
+1-1
drivers/net/ethernet/davicom/dm9000.c
···614614615615 if (!dm->wake_state)616616 irq_set_irq_wake(dm->irq_wake, 1);617617- else if (dm->wake_state & !opts)617617+ else if (dm->wake_state && !opts)618618 irq_set_irq_wake(dm->irq_wake, 0);619619 }620620
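The one-character dm9000 fix above replaces a bitwise AND against `!opts` (which is always 0 or 1, so it only ever tested bit 0 of `wake_state`) with a logical test. A minimal sketch with hypothetical helper names, not the driver code:

```c
#include <assert.h>

/* Buggy form: !opts collapses to 0 or 1, so the AND only sees bit 0
 * of wake_state; wake flags on higher bits are silently ignored. */
static int should_disable_wake_buggy(unsigned wake_state, unsigned opts)
{
    return wake_state & !opts;
}

/* Fixed form: true whenever any wake flag was set and opts is now 0. */
static int should_disable_wake_fixed(unsigned wake_state, unsigned opts)
{
    return wake_state && !opts;
}
```

With a wake flag on bit 2 (e.g. a magic-packet bit) and `opts == 0`, the buggy form returns 0 and the wake source is never disarmed.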
+1
drivers/net/ethernet/freescale/Kconfig
···2424 bool "FEC ethernet controller (of ColdFire and some i.MX CPUs)"2525 depends on (M523x || M527x || M5272 || M528x || M520x || M532x || \2626 ARCH_MXC || ARCH_MXS)2727+ default ARCH_MXC || ARCH_MXS if ARM2728 select PHYLIB2829 ---help---2930 Say Y here if you want to use the built-in 10/100 Fast ethernet
···18271827 }1828182818291829 /* Clear Bit 14 of AR_WA after putting chip into Full Sleep mode. */18301830- REG_WRITE(ah, AR_WA, ah->WARegVal & ~AR_WA_D3_L1_DISABLE);18301830+ if (AR_SREV_9300_20_OR_LATER(ah))18311831+ REG_WRITE(ah, AR_WA, ah->WARegVal & ~AR_WA_D3_L1_DISABLE);18311832}1832183318331834/*
+9-8
drivers/net/wireless/rtlwifi/ps.c
···395395 if (mac->link_state != MAC80211_LINKED)396396 return;397397398398- spin_lock(&rtlpriv->locks.lps_lock);398398+ spin_lock_irq(&rtlpriv->locks.lps_lock);399399400400 /* Idle for a while if we connect to AP a while ago. */401401 if (mac->cnt_after_linked >= 2) {···407407 }408408 }409409410410- spin_unlock(&rtlpriv->locks.lps_lock);410410+ spin_unlock_irq(&rtlpriv->locks.lps_lock);411411}412412413413/*Leave the leisure power save mode.*/···416416 struct rtl_priv *rtlpriv = rtl_priv(hw);417417 struct rtl_ps_ctl *ppsc = rtl_psc(rtl_priv(hw));418418 struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw));419419+ unsigned long flags;419420420420- spin_lock(&rtlpriv->locks.lps_lock);421421+ spin_lock_irqsave(&rtlpriv->locks.lps_lock, flags);421422422423 if (ppsc->fwctrl_lps) {423424 if (ppsc->dot11_psmode != EACTIVE) {···439438 rtl_lps_set_psmode(hw, EACTIVE);440439 }441440 }442442- spin_unlock(&rtlpriv->locks.lps_lock);441441+ spin_unlock_irqrestore(&rtlpriv->locks.lps_lock, flags);443442}444443445444/* For sw LPS*/···540539 RT_CLEAR_PS_LEVEL(ppsc, RT_PS_LEVEL_ASPM);541540 }542541543543- spin_lock(&rtlpriv->locks.lps_lock);542542+ spin_lock_irq(&rtlpriv->locks.lps_lock);544543 rtl_ps_set_rf_state(hw, ERFON, RF_CHANGE_BY_PS);545545- spin_unlock(&rtlpriv->locks.lps_lock);544544+ spin_unlock_irq(&rtlpriv->locks.lps_lock);546545}547546548547void rtl_swlps_rfon_wq_callback(void *data)···575574 if (rtlpriv->link_info.busytraffic)576575 return;577576578578- spin_lock(&rtlpriv->locks.lps_lock);577577+ spin_lock_irq(&rtlpriv->locks.lps_lock);579578 rtl_ps_set_rf_state(hw, ERFSLEEP, RF_CHANGE_BY_PS);580580- spin_unlock(&rtlpriv->locks.lps_lock);579579+ spin_unlock_irq(&rtlpriv->locks.lps_lock);581580582581 if (ppsc->reg_rfps_level & RT_RF_OFF_LEVL_ASPM &&583582 !RT_IN_PS_LEVEL(ppsc, RT_PS_LEVEL_ASPM)) {
+9-7
drivers/of/irq.c
···6060 */6161struct device_node *of_irq_find_parent(struct device_node *child)6262{6363- struct device_node *p, *c = child;6363+ struct device_node *p;6464 const __be32 *parp;65656666- if (!of_node_get(c))6666+ if (!of_node_get(child))6767 return NULL;68686969 do {7070- parp = of_get_property(c, "interrupt-parent", NULL);7070+ parp = of_get_property(child, "interrupt-parent", NULL);7171 if (parp == NULL)7272- p = of_get_parent(c);7272+ p = of_get_parent(child);7373 else {7474 if (of_irq_workarounds & OF_IMAP_NO_PHANDLE)7575 p = of_node_get(of_irq_dflt_pic);7676 else7777 p = of_find_node_by_phandle(be32_to_cpup(parp));7878 }7979- of_node_put(c);8080- c = p;7979+ of_node_put(child);8080+ child = p;8181 } while (p && of_get_property(p, "#interrupt-cells", NULL) == NULL);82828383- return (p == child) ? NULL : p;8383+ return p;8484}85858686/**···424424425425 desc->dev = np;426426 desc->interrupt_parent = of_irq_find_parent(np);427427+ if (desc->interrupt_parent == np)428428+ desc->interrupt_parent = NULL;427429 list_add_tail(&desc->list, &intc_desc_list);428430 }429431
+1
drivers/pci/Kconfig
···76767777config PCI_PRI7878 bool "PCI PRI support"7979+ depends on PCI7980 select PCI_ATS8081 help8182 PRI is the PCI Page Request Interface. It allows PCI devices that are
+24-5
drivers/pci/hotplug/acpiphp_glue.c
···459459{460460 acpi_status status;461461 unsigned long long tmp;462462+ struct acpi_pci_root *root;462463 acpi_handle dummy_handle;464464+465465+ /*466466+ * We shouldn't use this bridge if PCIe native hotplug control has been467467+ * granted by the BIOS for it.468468+ */469469+ root = acpi_pci_find_root(handle);470470+ if (root && (root->osc_control_set & OSC_PCI_EXPRESS_NATIVE_HP_CONTROL))471471+ return -ENODEV;463472464473 /* if the bridge doesn't have _STA, we assume it is always there */465474 status = acpi_get_handle(handle, "_STA", &dummy_handle);···13851376static acpi_status13861377find_root_bridges(acpi_handle handle, u32 lvl, void *context, void **rv)13871378{13791379+ struct acpi_pci_root *root;13881380 int *count = (int *)context;1389138113901390- if (acpi_is_root_bridge(handle)) {13911391- acpi_install_notify_handler(handle, ACPI_SYSTEM_NOTIFY,13921392- handle_hotplug_event_bridge, NULL);13931393- (*count)++;13941394- }13821382+ if (!acpi_is_root_bridge(handle))13831383+ return AE_OK;13841384+13851385+ root = acpi_pci_find_root(handle);13861386+ if (!root)13871387+ return AE_OK;13881388+13891389+ if (root->osc_control_set & OSC_PCI_EXPRESS_NATIVE_HP_CONTROL)13901390+ return AE_OK;13911391+13921392+ (*count)++;13931393+ acpi_install_notify_handler(handle, ACPI_SYSTEM_NOTIFY,13941394+ handle_hotplug_event_bridge, NULL);13951395+13951396 return AE_OK ;13961397}13971398
-3
drivers/pci/hotplug/pciehp_ctrl.c
···213213 goto err_exit;214214 }215215216216- /* Wait for 1 second after checking link training status */217217- msleep(1000);218218-219216 /* Check for a power fault */220217 if (ctrl->power_fault_detected || pciehp_query_power_fault(p_slot)) {221218 ctrl_err(ctrl, "Power fault on slot %s\n", slot_name(p_slot));
+18-9
drivers/pci/hotplug/pciehp_hpc.c
···280280 else281281 msleep(1000);282282283283+ /*284284+ * Need to wait for 1000 ms after Data Link Layer Link Active285285+ * (DLLLA) bit reads 1b before sending configuration request.286286+ * We need it before checking Link Training (LT) bit because287287+ * LT is still set even after DLLLA bit is set on some platforms.288288+ */289289+ msleep(1000);290290+283291 retval = pciehp_readw(ctrl, PCI_EXP_LNKSTA, &lnk_status);284292 if (retval) {285293 ctrl_err(ctrl, "Cannot read LNKSTATUS register\n");···301293 retval = -1;302294 return retval;303295 }296296+297297+ /*298298+ * If the port supports Link speeds greater than 5.0 GT/s, we299299+ * must wait for 100 ms after Link training completes before300300+ * sending configuration request.301301+ */302302+ if (ctrl->pcie->port->subordinate->max_bus_speed > PCIE_SPEED_5_0GT)303303+ msleep(100);304304+305305+ pcie_update_link_speed(ctrl->pcie->port->subordinate, lnk_status);304306305307 return retval;306308}···502484 u16 slot_cmd;503485 u16 cmd_mask;504486 u16 slot_status;505505- u16 lnk_status;506487 int retval = 0;507488508489 /* Clear sticky power-fault bit from previous power failures */···532515 }533516 ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,534517 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_cmd);535535-536536- retval = pciehp_readw(ctrl, PCI_EXP_LNKSTA, &lnk_status);537537- if (retval) {538538- ctrl_err(ctrl, "%s: Cannot read LNKSTA register\n",539539- __func__);540540- return retval;541541- }542542- pcie_update_link_speed(ctrl->pcie->port->subordinate, lnk_status);543518544519 return retval;545520}
+2-2
drivers/pci/hotplug/shpchp_core.c
···278278279279static int is_shpc_capable(struct pci_dev *dev)280280{281281- if ((dev->vendor == PCI_VENDOR_ID_AMD) || (dev->device ==282282- PCI_DEVICE_ID_AMD_GOLAM_7450))281281+ if (dev->vendor == PCI_VENDOR_ID_AMD &&282282+ dev->device == PCI_DEVICE_ID_AMD_GOLAM_7450)283283 return 1;284284 if (!pci_find_capability(dev, PCI_CAP_ID_SHPC))285285 return 0;
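The shpchp fix above (repeated in shpchp_hpc.c below) corrects an `||` that matched *any* AMD device or *any* vendor's device with that ID, rather than the one AMD GOLAM 7450 part. A minimal sketch; the ID values mirror `pci_ids.h` but the helper names are hypothetical:

```c
#include <assert.h>

#define VID_AMD   0x1022  /* PCI_VENDOR_ID_AMD */
#define DID_GOLAM 0x7450  /* PCI_DEVICE_ID_AMD_GOLAM_7450 */

/* Buggy: true for every AMD device, and for every vendor's 0x7450. */
static int is_golam_buggy(unsigned vendor, unsigned device)
{
    return (vendor == VID_AMD) || (device == DID_GOLAM);
}

/* Fixed: only the AMD GOLAM 7450 matches. */
static int is_golam_fixed(unsigned vendor, unsigned device)
{
    return vendor == VID_AMD && device == DID_GOLAM;
}
```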
+2-2
drivers/pci/hotplug/shpchp_hpc.c
···944944 ctrl->pci_dev = pdev; /* pci_dev of the P2P bridge */945945 ctrl_dbg(ctrl, "Hotplug Controller:\n");946946947947- if ((pdev->vendor == PCI_VENDOR_ID_AMD) || (pdev->device ==948948- PCI_DEVICE_ID_AMD_GOLAM_7450)) {947947+ if (pdev->vendor == PCI_VENDOR_ID_AMD &&948948+ pdev->device == PCI_DEVICE_ID_AMD_GOLAM_7450) {949949 /* amd shpc driver doesn't use Base Offset; assume 0 */950950 ctrl->mmio_base = pci_resource_start(pdev, 0);951951 ctrl->mmio_size = pci_resource_len(pdev, 0);
···242242243243static int iio_event_getfd(struct iio_dev *indio_dev)244244{245245- if (indio_dev->event_interface == NULL)245245+ struct iio_event_interface *ev_int = indio_dev->event_interface;246246+ int fd;247247+248248+ if (ev_int == NULL)246249 return -ENODEV;247250248248- mutex_lock(&indio_dev->event_interface->event_list_lock);249249- if (test_and_set_bit(IIO_BUSY_BIT_POS,250250- &indio_dev->event_interface->flags)) {251251- mutex_unlock(&indio_dev->event_interface->event_list_lock);251251+ mutex_lock(&ev_int->event_list_lock);252252+ if (test_and_set_bit(IIO_BUSY_BIT_POS, &ev_int->flags)) {253253+ mutex_unlock(&ev_int->event_list_lock);252254 return -EBUSY;253255 }254254- mutex_unlock(&indio_dev->event_interface->event_list_lock);255255- return anon_inode_getfd("iio:event",256256- &iio_event_chrdev_fileops,257257- indio_dev->event_interface, O_RDONLY);256256+ mutex_unlock(&ev_int->event_list_lock);257257+ fd = anon_inode_getfd("iio:event",258258+ &iio_event_chrdev_fileops, ev_int, O_RDONLY);259259+ if (fd < 0) {260260+ mutex_lock(&ev_int->event_list_lock);261261+ clear_bit(IIO_BUSY_BIT_POS, &ev_int->flags);262262+ mutex_unlock(&ev_int->event_list_lock);263263+ }264264+ return fd;258265}259266260267static int __init iio_init(void)
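The hunk above adds the missing error path: if `anon_inode_getfd()` fails after the busy bit was set, the bit must be cleared again or the event interface stays "busy" forever. A userspace sketch of that rollback pattern (hypothetical names; `fd_result` stands in for `anon_inode_getfd()`'s return value, and locking is elided):

```c
#include <assert.h>

#define BUSY_BIT 0x1u

/* Sketch of the fixed getfd logic: claim the busy bit, then roll it
 * back if handing out the fd fails. Returns the fd or a negative errno. */
static int getfd_sketch(unsigned *flags, int fd_result)
{
    if (*flags & BUSY_BIT)
        return -16;              /* -EBUSY: already open */
    *flags |= BUSY_BIT;

    if (fd_result < 0)
        *flags &= ~BUSY_BIT;     /* failure: release the claim */
    return fd_result;
}

/* Helper: observe the flag state after one call. */
static unsigned flags_after(int fd_result)
{
    unsigned flags = 0;
    getfd_sketch(&flags, fd_result);
    return flags;
}
```

Without the rollback, `flags_after(-12)` would leave the busy bit set and every later open would fail with -EBUSY.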
+1-1
drivers/staging/slicoss/Kconfig
···11config SLICOSS22 tristate "Alacritech Gigabit IS-NIC support"33- depends on PCI && X8633+ depends on PCI && X86 && NET44 default n55 help66 This driver supports Alacritech's IS-NIC gigabit ethernet cards.
+2
drivers/tty/hvc/hvc_dcc.c
···46464747 asm volatile("mrc p14, 0, %0, c0, c5, 0 @ read comms data reg"4848 : "=r" (__c));4949+ isb();49505051 return __c;5152}···5655 asm volatile("mcr p14, 0, %0, c0, c5, 0 @ write a char"5756 : /* no output register */5857 : "r" (c));5858+ isb();5959}60606161static int hvc_dcc_put_chars(uint32_t vt, const char *buf, int count)
+7-7
drivers/tty/serial/Kconfig
···15601560 Support for the IFX6x60 modem devices on Intel MID platforms.1561156115621562config SERIAL_PCH_UART15631563- tristate "Intel EG20T PCH / OKI SEMICONDUCTOR IOH(ML7213/ML7223) UART"15631563+ tristate "Intel EG20T PCH/LAPIS Semicon IOH(ML7213/ML7223/ML7831) UART"15641564 depends on PCI15651565 select SERIAL_CORE15661566 help···15681568 which is an IOH(Input/Output Hub) for x86 embedded processor.15691569 Enabling PCH_DMA, this PCH UART works as DMA mode.1570157015711571- This driver also can be used for OKI SEMICONDUCTOR IOH(Input/15721572- Output Hub), ML7213 and ML7223.15731573- ML7213 IOH is for IVI(In-Vehicle Infotainment) use and ML7223 IOH is15741574- for MP(Media Phone) use.15751575- ML7213/ML7223 is companion chip for Intel Atom E6xx series.15761576- ML7213/ML7223 is completely compatible for Intel EG20T PCH.15711571+ This driver also can be used for LAPIS Semiconductor IOH(Input/15721572+ Output Hub), ML7213, ML7223 and ML7831.15731573+ ML7213 IOH is for IVI(In-Vehicle Infotainment) use, ML7223 IOH is15741574+ for MP(Media Phone) use and ML7831 IOH is for general purpose use.15751575+ ML7213/ML7223/ML7831 is companion chip for Intel Atom E6xx series.15761576+ ML7213/ML7223/ML7831 is completely compatible for Intel EG20T PCH.1577157715781578config SERIAL_MSM_SMD15791579 bool "Enable tty device interface for some SMD ports"
+3-13
drivers/tty/serial/atmel_serial.c
···228228 if (rs485conf->flags & SER_RS485_ENABLED) {229229 dev_dbg(port->dev, "Setting UART to RS485\n");230230 atmel_port->tx_done_mask = ATMEL_US_TXEMPTY;231231- if (rs485conf->flags & SER_RS485_RTS_AFTER_SEND)231231+ if ((rs485conf->delay_rts_after_send) > 0)232232 UART_PUT_TTGR(port, rs485conf->delay_rts_after_send);233233 mode |= ATMEL_US_USMODE_RS485;234234 } else {···304304305305 if (atmel_port->rs485.flags & SER_RS485_ENABLED) {306306 dev_dbg(port->dev, "Setting UART to RS485\n");307307- if (atmel_port->rs485.flags & SER_RS485_RTS_AFTER_SEND)307307+ if ((atmel_port->rs485.delay_rts_after_send) > 0)308308 UART_PUT_TTGR(port,309309 atmel_port->rs485.delay_rts_after_send);310310 mode |= ATMEL_US_USMODE_RS485;···1228122812291229 if (atmel_port->rs485.flags & SER_RS485_ENABLED) {12301230 dev_dbg(port->dev, "Setting UART to RS485\n");12311231- if (atmel_port->rs485.flags & SER_RS485_RTS_AFTER_SEND)12311231+ if ((atmel_port->rs485.delay_rts_after_send) > 0)12321232 UART_PUT_TTGR(port,12331233 atmel_port->rs485.delay_rts_after_send);12341234 mode |= ATMEL_US_USMODE_RS485;···14461446 rs485conf->delay_rts_before_send = rs485_delay[0];14471447 rs485conf->delay_rts_after_send = rs485_delay[1];14481448 rs485conf->flags = 0;14491449-14501450- if (rs485conf->delay_rts_before_send == 0 &&14511451- rs485conf->delay_rts_after_send == 0) {14521452- rs485conf->flags |= SER_RS485_RTS_ON_SEND;14531453- } else {14541454- if (rs485conf->delay_rts_before_send)14551455- rs485conf->flags |= SER_RS485_RTS_BEFORE_SEND;14561456- if (rs485conf->delay_rts_after_send)14571457- rs485conf->flags |= SER_RS485_RTS_AFTER_SEND;14581458- }1459144914601450 if (of_get_property(np, "rs485-rx-during-tx", NULL))14611451 rs485conf->flags |= SER_RS485_RX_DURING_TX;
···884884{885885 struct uart_hsu_port *up =886886 container_of(port, struct uart_hsu_port, port);887887- struct tty_struct *tty = port->state->port.tty;888887 unsigned char cval, fcr = 0;889888 unsigned long flags;890889 unsigned int baud, quot;···906907 }907908908909 /* CMSPAR isn't supported by this driver */909909- if (tty)910910- tty->termios->c_cflag &= ~CMSPAR;910910+ termios->c_cflag &= ~CMSPAR;911911912912 if (termios->c_cflag & CSTOPB)913913 cval |= UART_LCR_STOP;
+14-5
drivers/tty/serial/pch_uart.c
···11/*22- *Copyright (C) 2010 OKI SEMICONDUCTOR CO., LTD.22+ *Copyright (C) 2011 LAPIS Semiconductor Co., Ltd.33 *44 *This program is free software; you can redistribute it and/or modify55 *it under the terms of the GNU General Public License as published by···46464747/* Set the max number of UART port4848 * Intel EG20T PCH: 4 port4949- * OKI SEMICONDUCTOR ML7213 IOH: 3 port5050- * OKI SEMICONDUCTOR ML7223 IOH: 2 port4949+ * LAPIS Semiconductor ML7213 IOH: 3 port5050+ * LAPIS Semiconductor ML7223 IOH: 2 port5151*/5252#define PCH_UART_NR 45353···258258 pch_ml7213_uart2,259259 pch_ml7223_uart0,260260 pch_ml7223_uart1,261261+ pch_ml7831_uart0,262262+ pch_ml7831_uart1,261263};262264263265static struct pch_uart_driver_data drv_dat[] = {···272270 [pch_ml7213_uart2] = {PCH_UART_2LINE, 2},273271 [pch_ml7223_uart0] = {PCH_UART_8LINE, 0},274272 [pch_ml7223_uart1] = {PCH_UART_2LINE, 1},273273+ [pch_ml7831_uart0] = {PCH_UART_8LINE, 0},274274+ [pch_ml7831_uart1] = {PCH_UART_2LINE, 1},275275};276276277277static unsigned int default_baud = 9600;···632628 dev_err(priv->port.dev, "%s:dma_request_channel FAILS(Rx)\n",633629 __func__);634630 dma_release_channel(priv->chan_tx);631631+ priv->chan_tx = NULL;635632 return;636633 }637634···12201215 dev_err(priv->port.dev,12211216 "pch_uart_hal_set_fifo Failed(ret=%d)\n", ret);1222121712231223- if (priv->use_dma_flag)12241224- pch_free_dma(port);12181218+ pch_free_dma(port);1225121912261220 free_irq(priv->port.irq, priv);12271221}···12841280 if (rtn)12851281 goto out;1286128212831283+ pch_uart_set_mctrl(&priv->port, priv->port.mctrl);12871284 /* Don't rewrite B0 */12881285 if (tty_termios_baud_rate(termios))12891286 tty_termios_encode_baud_rate(termios, baud, baud);···15571552 .driver_data = pch_ml7223_uart0},15581553 {PCI_DEVICE(PCI_VENDOR_ID_ROHM, 0x800D),15591554 .driver_data = pch_ml7223_uart1},15551555+ {PCI_DEVICE(PCI_VENDOR_ID_ROHM, 0x8811),15561556+ .driver_data = pch_ml7831_uart0},15571557+ {PCI_DEVICE(PCI_VENDOR_ID_ROHM, 0x8812),15581558+ .driver_data = pch_ml7831_uart1},15601559 {0,},15611560};15621561
+23-7
drivers/tty/tty_ldisc.c
···36363737#include <linux/kmod.h>3838#include <linux/nsproxy.h>3939+#include <linux/ratelimit.h>39404041/*4142 * This guards the refcounted line discipline lists. The lock···548547/**549548 * tty_ldisc_wait_idle - wait for the ldisc to become idle550549 * @tty: tty to wait for550550+ * @timeout: for how long to wait at most551551 *552552 * Wait for the line discipline to become idle. The discipline must553553 * have been halted for this to guarantee it remains idle.554554 */555555-static int tty_ldisc_wait_idle(struct tty_struct *tty)555555+static int tty_ldisc_wait_idle(struct tty_struct *tty, long timeout)556556{557557- int ret;557557+ long ret;558558 ret = wait_event_timeout(tty_ldisc_idle,559559- atomic_read(&tty->ldisc->users) == 1, 5 * HZ);559559+ atomic_read(&tty->ldisc->users) == 1, timeout);560560 if (ret < 0)561561 return ret;562562 return ret > 0 ? 0 : -EBUSY;···667665668666 tty_ldisc_flush_works(tty);669667670670- retval = tty_ldisc_wait_idle(tty);668668+ retval = tty_ldisc_wait_idle(tty, 5 * HZ);671669672670 tty_lock();673671 mutex_lock(&tty->ldisc_mutex);···764762 if (IS_ERR(ld))765763 return -1;766764767767- WARN_ON_ONCE(tty_ldisc_wait_idle(tty));768768-769765 tty_ldisc_close(tty, tty->ldisc);770766 tty_ldisc_put(tty->ldisc);771767 tty->ldisc = NULL;···838838 tty_unlock();839839 cancel_work_sync(&tty->buf.work);840840 mutex_unlock(&tty->ldisc_mutex);841841-841841+retry:842842 tty_lock();843843 mutex_lock(&tty->ldisc_mutex);844844···847847 it means auditing a lot of other paths so this is848848 a FIXME */849849 if (tty->ldisc) { /* Not yet closed */850850+ if (atomic_read(&tty->ldisc->users) != 1) {851851+ char cur_n[TASK_COMM_LEN], tty_n[64];852852+ long timeout = 3 * HZ;853853+ tty_unlock();854854+855855+ while (tty_ldisc_wait_idle(tty, timeout) == -EBUSY) {856856+ timeout = MAX_SCHEDULE_TIMEOUT;857857+ printk_ratelimited(KERN_WARNING858858+ "%s: waiting (%s) for %s took too long, but we keep waiting...\n",859859+ __func__, get_task_comm(cur_n, current),860860+ tty_name(tty, tty_n));861861+ }862862+ mutex_unlock(&tty->ldisc_mutex);863863+ goto retry;864864+ }865865+850866 if (reset == 0) {851867852868 if (!tty_ldisc_reinit(tty, tty->termios->c_line))
···813813 USB_PORT_FEAT_C_PORT_LINK_STATE);814814 }815815816816+ if ((portchange & USB_PORT_STAT_C_BH_RESET) &&817817+ hub_is_superspeed(hub->hdev)) {818818+ need_debounce_delay = true;819819+ clear_port_feature(hub->hdev, port1,820820+ USB_PORT_FEAT_C_BH_PORT_RESET);821821+ }816822 /* We can forget about a "removed" device when there's a817823 * physical disconnect or the connect status changes.818824 */
···469469 gadget drivers to also be dynamically linked.470470471471config USB_EG20T472472- tristate "Intel EG20T PCH/OKI SEMICONDUCTOR ML7213 IOH UDC"472472+ tristate "Intel EG20T PCH/LAPIS Semiconductor IOH(ML7213/ML7831) UDC"473473 depends on PCI474474 select USB_GADGET_DUALSPEED475475 help···485485 This driver does not support interrupt transfer or isochronous486486 transfer modes.487487488488- This driver also can be used for OKI SEMICONDUCTOR's ML7213 which is488488+ This driver also can be used for LAPIS Semiconductor's ML7213 which is489489 for IVI(In-Vehicle Infotainment) use.490490- ML7213 is companion chip for Intel Atom E6xx series.491491- ML7213 is completely compatible for Intel EG20T PCH.490490+ ML7831 is for general purpose use.491491+ ML7213/ML7831 is companion chip for Intel Atom E6xx series.492492+ ML7213/ML7831 is completely compatible for Intel EG20T PCH.492493493494config USB_CI13XXX_MSM494495 tristate "MIPS USB CI13xxx for MSM"
···1479147914801480 /* NOTE: assumes URB_ISO_ASAP, to limit complexity/bugs */1481148114821482- /* find a uframe slot with enough bandwidth */14831483- next = start + period;14841484- for (; start < next; start++) {14851485-14821482+ /* find a uframe slot with enough bandwidth.14831483+ * Early uframes are more precious because full-speed14841484+ * iso IN transfers can't use late uframes,14851485+ * and therefore they should be allocated last.14861486+ */14871487+ next = start;14881488+ start += period;14891489+ do {14901490+ start--;14861491 /* check schedule: enough space? */14871492 if (stream->highspeed) {14881493 if (itd_slot_ok(ehci, mod, start,···15001495 start, sched, period))15011496 break;15021497 }15031503- }14981498+ } while (start > next);1504149915051500 /* no room in the schedule */15061501 if (start == next) {
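The rewritten ehci-sched loop above walks candidate uframes from latest to earliest, so the more precious early uframes (the only ones full-speed iso IN transfers can use) are allocated last. A minimal standalone sketch of that search order, where `busy_mask` is a hypothetical stand-in for the `itd_slot_ok()`/`sitd_slot_ok()` bandwidth checks:

```c
#include <assert.h>

/* Try slots [start, start + period) from the highest index down;
 * return the first slot whose bit is clear in busy_mask, or -1. */
static int find_slot(unsigned start, unsigned period, unsigned busy_mask)
{
    unsigned next = start;

    start += period;
    do {
        start--;
        if (!(busy_mask & (1u << start)))
            return (int)start;   /* enough bandwidth here */
    } while (start > next);

    return -1;                   /* no room in the schedule */
}
```

Note how the all-free case picks the *latest* slot, whereas the old ascending loop would have consumed the earliest one.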
···223223 if (port < 0 || port >= 2)224224 return;225225226226+ if (pdata->vbus_pin[port] <= 0)227227+ return;228228+226229 gpio_set_value(pdata->vbus_pin[port], !pdata->vbus_pin_inverted ^ enable);227230}228231229232static int ohci_at91_usb_get_power(struct at91_usbh_data *pdata, int port)230233{231234 if (port < 0 || port >= 2)235235+ return -EINVAL;236236+237237+ if (pdata->vbus_pin[port] <= 0)232238 return -EINVAL;233239234240 return gpio_get_value(pdata->vbus_pin[port]) ^ !pdata->vbus_pin_inverted;
+6-9
drivers/usb/host/ohci-hcd.c
···389389 struct ohci_hcd *ohci;390390391391 ohci = hcd_to_ohci (hcd);392392- ohci_writel (ohci, OHCI_INTR_MIE, &ohci->regs->intrdisable);393393- ohci->hc_control = ohci_readl(ohci, &ohci->regs->control);392392+ ohci_writel(ohci, (u32) ~0, &ohci->regs->intrdisable);394393395395- /* If the SHUTDOWN quirk is set, don't put the controller in RESET */396396- ohci->hc_control &= (ohci->flags & OHCI_QUIRK_SHUTDOWN ?397397- OHCI_CTRL_RWC | OHCI_CTRL_HCFS :398398- OHCI_CTRL_RWC);399399- ohci_writel(ohci, ohci->hc_control, &ohci->regs->control);394394+ /* Software reset, after which the controller goes into SUSPEND */395395+ ohci_writel(ohci, OHCI_HCR, &ohci->regs->cmdstatus);396396+ ohci_readl(ohci, &ohci->regs->cmdstatus); /* flush the writes */397397+ udelay(10);400398401401- /* flush the writes */402402- (void) ohci_readl (ohci, &ohci->regs->control);399399+ ohci_writel(ohci, ohci->fminterval, &ohci->regs->fminterval);403400}404401405402static int check_ed(struct ohci_hcd *ohci, struct ed *ed)
-26
drivers/usb/host/ohci-pci.c
···175175 return 0;176176}177177178178-/* nVidia controllers continue to drive Reset signalling on the bus179179- * even after system shutdown, wasting power. This flag tells the180180- * shutdown routine to leave the controller OPERATIONAL instead of RESET.181181- */182182-static int ohci_quirk_nvidia_shutdown(struct usb_hcd *hcd)183183-{184184- struct pci_dev *pdev = to_pci_dev(hcd->self.controller);185185- struct ohci_hcd *ohci = hcd_to_ohci(hcd);186186-187187- /* Evidently nVidia fixed their later hardware; this is a guess at188188- * the changeover point.189189- */190190-#define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_USB 0x026d191191-192192- if (pdev->device < PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_USB) {193193- ohci->flags |= OHCI_QUIRK_SHUTDOWN;194194- ohci_dbg(ohci, "enabled nVidia shutdown quirk\n");195195- }196196-197197- return 0;198198-}199199-200178static void sb800_prefetch(struct ohci_hcd *ohci, int on)201179{202180 struct pci_dev *pdev;···237259 {238260 PCI_DEVICE(PCI_VENDOR_ID_ATI, 0x4399),239261 .driver_data = (unsigned long)ohci_quirk_amd700,240240- },241241- {242242- PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID),243243- .driver_data = (unsigned long) ohci_quirk_nvidia_shutdown,244262 },245263246264 /* FIXME for some of the early AMD 760 southbridges, OHCI
-1
drivers/usb/host/ohci.h
···403403#define OHCI_QUIRK_HUB_POWER 0x100 /* distrust firmware power/oc setup */404404#define OHCI_QUIRK_AMD_PLL 0x200 /* AMD PLL quirk*/405405#define OHCI_QUIRK_AMD_PREFETCH 0x400 /* pre-fetch for ISO transfer */406406-#define OHCI_QUIRK_SHUTDOWN 0x800 /* nVidia power bug */407406 // there are also chip quirks/bugs in init logic408407409408 struct work_struct nec_work; /* Worker for NEC quirk */
+26-33
drivers/usb/host/pci-quirks.c
···3737#define OHCI_INTRENABLE 0x103838#define OHCI_INTRDISABLE 0x143939#define OHCI_FMINTERVAL 0x344040+#define OHCI_HCFS (3 << 6) /* hc functional state */4041#define OHCI_HCR (1 << 0) /* host controller reset */4142#define OHCI_OCR (1 << 3) /* ownership change request */4243#define OHCI_CTRL_RWC (1 << 9) /* remote wakeup connected */···467466{468467 void __iomem *base;469468 u32 control;469469+ u32 fminterval;470470+ int cnt;470471471472 if (!mmio_resource_enabled(pdev, 0))472473 return;···501498 }502499#endif503500504504- /* reset controller, preserving RWC (and possibly IR) */505505- writel(control & OHCI_CTRL_MASK, base + OHCI_CONTROL);506506- readl(base + OHCI_CONTROL);501501+ /* disable interrupts */502502+ writel((u32) ~0, base + OHCI_INTRDISABLE);507503508508- /* Some NVIDIA controllers stop working if kept in RESET for too long */509509- if (pdev->vendor == PCI_VENDOR_ID_NVIDIA) {510510- u32 fminterval;511511- int cnt;504504+ /* Reset the USB bus, if the controller isn't already in RESET */505505+ if (control & OHCI_HCFS) {506506+ /* Go into RESET, preserving RWC (and possibly IR) */507507+ writel(control & OHCI_CTRL_MASK, base + OHCI_CONTROL);508508+ readl(base + OHCI_CONTROL);512509513513- /* drive reset for at least 50 ms (7.1.7.5) */510510+ /* drive bus reset for at least 50 ms (7.1.7.5) */514511 msleep(50);515515-516516- /* software reset of the controller, preserving HcFmInterval */517517- fminterval = readl(base + OHCI_FMINTERVAL);518518- writel(OHCI_HCR, base + OHCI_CMDSTATUS);519519-520520- /* reset requires max 10 us delay */521521- for (cnt = 30; cnt > 0; --cnt) { /* ... allow extra time */522522- if ((readl(base + OHCI_CMDSTATUS) & OHCI_HCR) == 0)523523- break;524524- udelay(1);525525- }526526- writel(fminterval, base + OHCI_FMINTERVAL);527527-528528- /* Now we're in the SUSPEND state with all devices reset529529- * and wakeups and interrupts disabled530530- */531512 }532513533533- /*534534- * disable interrupts535535- */536536- writel(~(u32)0, base + OHCI_INTRDISABLE);537537- writel(~(u32)0, base + OHCI_INTRSTATUS);514514+ /* software reset of the controller, preserving HcFmInterval */515515+ fminterval = readl(base + OHCI_FMINTERVAL);516516+ writel(OHCI_HCR, base + OHCI_CMDSTATUS);538517518518+ /* reset requires max 10 us delay */519519+ for (cnt = 30; cnt > 0; --cnt) { /* ... allow extra time */520520+ if ((readl(base + OHCI_CMDSTATUS) & OHCI_HCR) == 0)521521+ break;522522+ udelay(1);523523+ }524524+ writel(fminterval, base + OHCI_FMINTERVAL);525525+526526+ /* Now the controller is safely in SUSPEND and nothing can wake it up */539527 iounmap(base);540528}541529···621627 void __iomem *base, *op_reg_base;622628 u32 hcc_params, cap, val;623629 u8 offset, cap_length;624624- int wait_time, delta, count = 256/4;630630+ int wait_time, count = 256/4;625631626632 if (!mmio_resource_enabled(pdev, 0))627633 return;···667673 writel(val, op_reg_base + EHCI_USBCMD);668674669675 wait_time = 2000;670670- delta = 100;671676 do {672677 writel(0x3f, op_reg_base + EHCI_USBSTS);673673- udelay(delta);674674- wait_time -= delta;678678+ udelay(100);679679+ wait_time -= 100;675680 val = readl(op_reg_base + EHCI_USBSTS);676681 if ((val == ~(u32)0) || (val & EHCI_USBSTS_HALTED)) {677682 break;
-5
drivers/usb/host/xhci-mem.c
···982982 struct xhci_virt_device *dev;983983 struct xhci_ep_ctx *ep0_ctx;984984 struct xhci_slot_ctx *slot_ctx;985985- struct xhci_input_control_ctx *ctrl_ctx;986985 u32 port_num;987986 struct usb_device *top_dev;988987···993994 return -EINVAL;994995 }995996 ep0_ctx = xhci_get_ep_ctx(xhci, dev->in_ctx, 0);996996- ctrl_ctx = xhci_get_input_control_ctx(xhci, dev->in_ctx);997997 slot_ctx = xhci_get_slot_ctx(xhci, dev->in_ctx);998998-999999- /* 2) New slot context and endpoint 0 context are valid*/10001000- ctrl_ctx->add_flags = cpu_to_le32(SLOT_FLAG | EP0_FLAG);10019981002999 /* 3) Only the control endpoint is valid - one endpoint context */10031000 slot_ctx->dev_info |= cpu_to_le32(LAST_CTX(1) | udev->route);
+7-6
drivers/usb/host/xhci-ring.c
···816816 struct xhci_ring *ring;817817 struct xhci_td *cur_td;818818 int ret, i, j;819819+ unsigned long flags;819820820821 ep = (struct xhci_virt_ep *) arg;821822 xhci = ep->xhci;822823823823- spin_lock(&xhci->lock);824824+ spin_lock_irqsave(&xhci->lock, flags);824825825826 ep->stop_cmds_pending--;826827 if (xhci->xhc_state & XHCI_STATE_DYING) {827828 xhci_dbg(xhci, "Stop EP timer ran, but another timer marked "828829 "xHCI as DYING, exiting.\n");829829- spin_unlock(&xhci->lock);830830+ spin_unlock_irqrestore(&xhci->lock, flags);830831 return;831832 }832833 if (!(ep->stop_cmds_pending == 0 && (ep->ep_state & EP_HALT_PENDING))) {833834 xhci_dbg(xhci, "Stop EP timer ran, but no command pending, "834835 "exiting.\n");835835- spin_unlock(&xhci->lock);836836+ spin_unlock_irqrestore(&xhci->lock, flags);836837 return;837838 }838839···845844 xhci->xhc_state |= XHCI_STATE_DYING;846845 /* Disable interrupts from the host controller and start halting it */847846 xhci_quiesce(xhci);848848- spin_unlock(&xhci->lock);847847+ spin_unlock_irqrestore(&xhci->lock, flags);849848850849 ret = xhci_halt(xhci);851850852852- spin_lock(&xhci->lock);851851+ spin_lock_irqsave(&xhci->lock, flags);853852 if (ret < 0) {854853 /* This is bad; the host is not responding to commands and it's855854 * not allowing itself to be halted. At least interrupts are···897896 }898897 }899898 }900900- spin_unlock(&xhci->lock);899899+ spin_unlock_irqrestore(&xhci->lock, flags);901900 xhci_dbg(xhci, "Calling usb_hc_died()\n");902901 usb_hc_died(xhci_to_hcd(xhci)->primary_hcd);903902 xhci_dbg(xhci, "xHCI host controller is dead.\n");
+18-16
drivers/usb/host/xhci.c
···799799 u32 command, temp = 0;800800 struct usb_hcd *hcd = xhci_to_hcd(xhci);801801 struct usb_hcd *secondary_hcd;802802- int retval;802802+ int retval = 0;803803804804 /* Wait a bit if either of the roothubs need to settle from the805805 * transition into bus suspend.···808808 time_before(jiffies,809809 xhci->bus_state[1].next_statechange))810810 msleep(100);811811+812812+ set_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags);813813+ set_bit(HCD_FLAG_HW_ACCESSIBLE, &xhci->shared_hcd->flags);811814812815 spin_lock_irq(&xhci->lock);813816 if (xhci->quirks & XHCI_RESET_ON_RESUME)···881878 return retval;882879 xhci_dbg(xhci, "Start the primary HCD\n");883880 retval = xhci_run(hcd->primary_hcd);884884- if (retval)885885- goto failed_restart;886886-887887- xhci_dbg(xhci, "Start the secondary HCD\n");888888- retval = xhci_run(secondary_hcd);889881 if (!retval) {890890- set_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags);891891- set_bit(HCD_FLAG_HW_ACCESSIBLE,892892- &xhci->shared_hcd->flags);882882+ xhci_dbg(xhci, "Start the secondary HCD\n");883883+ retval = xhci_run(secondary_hcd);893884 }894894-failed_restart:895885 hcd->state = HC_STATE_SUSPENDED;896886 xhci->shared_hcd->state = HC_STATE_SUSPENDED;897897- return retval;887887+ goto done;898888 }899889900890 /* step 4: set Run/Stop bit */···906910 * Running endpoints by ringing their doorbells907911 */908912909909- set_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags);910910- set_bit(HCD_FLAG_HW_ACCESSIBLE, &xhci->shared_hcd->flags);911911-912913 spin_unlock_irq(&xhci->lock);913913- return 0;914914+915915+ done:916916+ if (retval == 0) {917917+ usb_hcd_resume_root_hub(hcd);918918+ usb_hcd_resume_root_hub(xhci->shared_hcd);919919+ }920920+ return retval;914921}915922#endif /* CONFIG_PM */916923···35033504 /* Otherwise, update the control endpoint ring enqueue pointer. */35043505 else35053506 xhci_copy_ep0_dequeue_into_input_ctx(xhci, udev);35073507+ ctrl_ctx = xhci_get_input_control_ctx(xhci, virt_dev->in_ctx);35083508+ ctrl_ctx->add_flags = cpu_to_le32(SLOT_FLAG | EP0_FLAG);35093509+ ctrl_ctx->drop_flags = 0;35103510+35063511 xhci_dbg(xhci, "Slot ID %d Input Context:\n", udev->slot_id);35073512 xhci_dbg_ctx(xhci, virt_dev->in_ctx, 2);35083513···35883585 virt_dev->address = (le32_to_cpu(slot_ctx->dev_state) & DEV_ADDR_MASK)35893586 + 1;35903587 /* Zero the input context control for later use */35913591- ctrl_ctx = xhci_get_input_control_ctx(xhci, virt_dev->in_ctx);35923588 ctrl_ctx->add_flags = 0;35933589 ctrl_ctx->drop_flags = 0;35943590
+2-1
drivers/usb/musb/Kconfig
···1111 select TWL4030_USB if MACH_OMAP_3430SDP1212 select TWL6030_USB if MACH_OMAP_4430SDP || MACH_OMAP4_PANDA1313 select USB_OTG_UTILS1414+ select USB_GADGET_DUALSPEED1415 tristate 'Inventra Highspeed Dual Role Controller (TI, ADI, ...)'1516 help1617 Say Y here if your system has a dual role high speed USB···61606261config USB_MUSB_UX5006362 tristate "U8500 and U5500"6464- depends on (ARCH_U8500 && AB8500_USB) || (ARCH_U5500)6363+ depends on (ARCH_U8500 && AB8500_USB)65646665endchoice6766
···4242 * Version information4343 */44444545-#define DRIVER_VERSION "v0.6"4545+#define DRIVER_VERSION "v0.7"4646#define DRIVER_AUTHOR "Bart Hartgers <bart.hartgers+ark3116@gmail.com>"4747#define DRIVER_DESC "USB ARK3116 serial/IrDA driver"4848#define DRIVER_DEV_DESC "ARK3116 RS232/IrDA"···380380 goto err_out;381381 }382382383383- /* setup termios */384384- if (tty)385385- ark3116_set_termios(tty, port, NULL);386386-387383 /* remove any data still left: also clears error state */388384 ark3116_read_reg(serial, UART_RX, buf);389385···401405402406 /* enable DMA */403407 ark3116_write_reg(port->serial, UART_FCR, UART_FCR_DMA_SELECT);408408+409409+ /* setup termios */410410+ if (tty)411411+ ark3116_set_termios(tty, port, NULL);404412405413err_out:406414 kfree(buf);
+11-3
drivers/usb/serial/ftdi_sio.c
···2104210421052105 cflag = termios->c_cflag;2106210621072107- /* FIXME -For this cut I don't care if the line is really changing or21082108- not - so just do the change regardless - should be able to21092109- compare old_termios and tty->termios */21072107+ if (old_termios->c_cflag == termios->c_cflag21082108+ && old_termios->c_ispeed == termios->c_ispeed21092109+ && old_termios->c_ospeed == termios->c_ospeed)21102110+ goto no_c_cflag_changes;21112111+21102112 /* NOTE These routines can get interrupted by21112113 ftdi_sio_read_bulk_callback - need to examine what this means -21122114 don't see any problems yet */21152115+21162116+ if ((old_termios->c_cflag & (CSIZE|PARODD|PARENB|CMSPAR|CSTOPB)) ==21172117+ (termios->c_cflag & (CSIZE|PARODD|PARENB|CMSPAR|CSTOPB)))21182118+ goto no_data_parity_stop_changes;2113211921142120 /* Set number of data bits, parity, stop bits */21152121···21572151 }2158215221592153 /* Now do the baudrate */21542154+no_data_parity_stop_changes:21602155 if ((cflag & CBAUD) == B0) {21612156 /* Disable flow control */21622157 if (usb_control_msg(dev, usb_sndctrlpipe(dev, 0),···2185217821862179 /* Set flow control */21872180 /* Note device also supports DTR/CD (ugh) and Xon/Xoff in hardware */21812181+no_c_cflag_changes:21882182 if (cflag & CRTSCTS) {21892183 dbg("%s Setting to CRTSCTS flow control", __func__);21902184 if (usb_control_msg(dev,
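The ftdi_sio change above skips reprogramming the chip when neither the frame-format bits nor the speed changed. A sketch of the mask comparison, using stand-in bit values rather than the platform-specific CSIZE/PARODD/PARENB/CMSPAR/CSTOPB definitions:

```c
#include <assert.h>

/* Stand-in bit values; the real termios constants differ per platform. */
enum {
    T_CSIZE  = 0x0030,
    T_CSTOPB = 0x0040,
    T_PARENB = 0x0100,
    T_PARODD = 0x0200,
    T_CMSPAR = 0x4000,
};
#define T_FRAMING (T_CSIZE | T_CSTOPB | T_PARENB | T_PARODD | T_CMSPAR)

/* Nonzero when the data/parity/stop configuration must be re-sent;
 * bits outside the framing mask (e.g. flow control) don't trigger it. */
static int framing_changed(unsigned int old_cflag, unsigned int new_cflag)
{
    return (old_cflag & T_FRAMING) != (new_cflag & T_FRAMING);
}
```

Comparing the masked flags up front is what lets the patched code jump straight to the baud-rate or flow-control steps instead of re-sending everything on every termios call.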
···59596060void usb_stor_pad12_command(struct scsi_cmnd *srb, struct us_data *us)6161{6262- /* Pad the SCSI command with zeros out to 12 bytes6262+ /*6363+ * Pad the SCSI command with zeros out to 12 bytes. If the6464+ * command already is 12 bytes or longer, leave it alone.6365 *6466 * NOTE: This only works because a scsi_cmnd struct field contains6567 * a unsigned char cmnd[16], so we know we have storage available6668 */6769 for (; srb->cmd_len<12; srb->cmd_len++)6870 srb->cmnd[srb->cmd_len] = 0;6969-7070- /* set command length to 12 bytes */7171- srb->cmd_len = 12;72717372 /* send the command to the transport layer */7473 usb_stor_invoke_transport(srb, us);
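The storage fix above makes the pad-to-12 helper leave commands that are already 12 bytes or longer untouched; the deleted lines forcibly truncated every command length to 12. The fixed logic, modeled in userspace (not the kernel code itself):

```c
#include <assert.h>
#include <string.h>

/* Pad a SCSI CDB with zeros out to 12 bytes. Commands that are already
 * 12 bytes or longer are left alone; storage for 16 bytes is assumed,
 * as with the kernel's scsi_cmnd cmnd[16] field. */
static void pad12(unsigned char cmnd[16], int *cmd_len)
{
    for (; *cmd_len < 12; (*cmd_len)++)
        cmnd[*cmd_len] = 0;
}
```

The old unconditional `cmd_len = 12` silently chopped 16-byte commands down to 12, which the loop-only version avoids.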
+14-1
drivers/video/da8xx-fb.c
···116116/* Clock registers available only on Version 2 */117117#define LCD_CLK_ENABLE_REG 0x6c118118#define LCD_CLK_RESET_REG 0x70119119+#define LCD_CLK_MAIN_RESET BIT(3)119120120121#define LCD_NUM_BUFFERS 2121122···245244{246245 u32 reg;247246247247+ /* Bring LCDC out of reset */248248+ if (lcd_revision == LCD_VERSION_2)249249+ lcdc_write(0, LCD_CLK_RESET_REG);250250+248251 reg = lcdc_read(LCD_RASTER_CTRL_REG);249252 if (!(reg & LCD_RASTER_ENABLE))250253 lcdc_write(reg | LCD_RASTER_ENABLE, LCD_RASTER_CTRL_REG);···262257 reg = lcdc_read(LCD_RASTER_CTRL_REG);263258 if (reg & LCD_RASTER_ENABLE)264259 lcdc_write(reg & ~LCD_RASTER_ENABLE, LCD_RASTER_CTRL_REG);260260+261261+ if (lcd_revision == LCD_VERSION_2)262262+ /* Write 1 to reset LCDC */263263+ lcdc_write(LCD_CLK_MAIN_RESET, LCD_CLK_RESET_REG);265264}266265267266static void lcd_blit(int load_mode, struct da8xx_fb_par *par)···593584 lcdc_write(0, LCD_DMA_CTRL_REG);594585 lcdc_write(0, LCD_RASTER_CTRL_REG);595586596596- if (lcd_revision == LCD_VERSION_2)587587+ if (lcd_revision == LCD_VERSION_2) {597588 lcdc_write(0, LCD_INT_ENABLE_SET_REG);589589+ /* Write 1 to reset */590590+ lcdc_write(LCD_CLK_MAIN_RESET, LCD_CLK_RESET_REG);591591+ lcdc_write(0, LCD_CLK_RESET_REG);592592+ }598593}599594600595static void lcd_calc_clk_divider(struct da8xx_fb_par *par)
+1
drivers/video/omap/dispc.c
···1919 * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.2020 */2121#include <linux/kernel.h>2222+#include <linux/module.h>2223#include <linux/dma-mapping.h>2324#include <linux/mm.h>2425#include <linux/vmalloc.h>
+5-6
drivers/video/omap2/dss/dispc.c
···17201720 const int maxdownscale = dss_feat_get_param_max(FEAT_PARAM_DOWNSCALE);17211721 unsigned long fclk = 0;1722172217231723- if ((ovl->caps & OMAP_DSS_OVL_CAP_SCALE) == 0) {17241724- if (width != out_width || height != out_height)17251725- return -EINVAL;17261726- else17271727- return 0;17281728- }17231723+ if (width == out_width && height == out_height)17241724+ return 0;17251725+17261726+ if ((ovl->caps & OMAP_DSS_OVL_CAP_SCALE) == 0)17271727+ return -EINVAL;1729172817301729 if (out_width < width / maxdownscale ||17311730 out_width > width * 8)
+1-1
drivers/video/omap2/dss/hdmi.c
···269269unsigned long hdmi_get_pixel_clock(void)270270{271271 /* HDMI Pixel Clock in Mhz */272272- return hdmi.ip_data.cfg.timings.timings.pixel_clock * 10000;272272+ return hdmi.ip_data.cfg.timings.timings.pixel_clock * 1000;273273}274274275275static void hdmi_compute_pll(struct omap_dss_device *dssdev, int phy,
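The hdmi.c one-liner is a units fix: `pixel_clock` is kept in kHz, so converting to Hz needs a factor of 1000; the old factor of 10000 reported a rate ten times too high. In isolation:

```c
#include <assert.h>

/* pixel_clock is stored in kHz (e.g. 148500 for a 148.5 MHz 1080p
 * clock); Hz = kHz * 1000. The previous * 10000 overstated it 10x. */
static unsigned long pixel_clock_hz(unsigned long pixel_clock_khz)
{
    return pixel_clock_khz * 1000UL;
}
```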
···37373838 config VIRTIO_MMIO3939 tristate "Platform bus driver for memory mapped virtio devices (EXPERIMENTAL)"4040- depends on EXPERIMENTAL4040+ depends on HAS_IOMEM && EXPERIMENTAL4141 select VIRTIO4242 select VIRTIO_RING4343 ---help---
+1-1
drivers/virtio/virtio_mmio.c
···118118 vring_transport_features(vdev);119119120120 for (i = 0; i < ARRAY_SIZE(vdev->features); i++) {121121- writel(i, vm_dev->base + VIRTIO_MMIO_GUEST_FEATURES_SET);121121+ writel(i, vm_dev->base + VIRTIO_MMIO_GUEST_FEATURES_SEL);122122 writel(vdev->features[i],123123 vm_dev->base + VIRTIO_MMIO_GUEST_FEATURES);124124 }
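The one-identifier fix above matters because virtio-mmio uses a selector/data register pair: the guest first writes a word index to GUEST_FEATURES_SEL, then the feature bits to GUEST_FEATURES. A toy model of the pattern (hypothetical struct, not the real device interface):

```c
#include <assert.h>

/* Selector/data register pair: the "sel" write chooses which 32-bit
 * word the following data write lands in. Writing the index to the
 * wrong offset (the original bug) clobbers data instead of selecting. */
struct toy_regs {
    unsigned int sel;
    unsigned int features[2];
};

static void write_features_word(struct toy_regs *r, unsigned int idx,
                                unsigned int bits)
{
    r->sel = idx;               /* like VIRTIO_MMIO_GUEST_FEATURES_SEL */
    r->features[r->sel] = bits; /* like VIRTIO_MMIO_GUEST_FEATURES */
}
```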
+18
drivers/virtio/virtio_pci.c
···169169 iowrite8(status, vp_dev->ioaddr + VIRTIO_PCI_STATUS);170170}171171172172+/* wait for pending irq handlers */173173+static void vp_synchronize_vectors(struct virtio_device *vdev)174174+{175175+ struct virtio_pci_device *vp_dev = to_vp_device(vdev);176176+ int i;177177+178178+ if (vp_dev->intx_enabled)179179+ synchronize_irq(vp_dev->pci_dev->irq);180180+181181+ for (i = 0; i < vp_dev->msix_vectors; ++i)182182+ synchronize_irq(vp_dev->msix_entries[i].vector);183183+}184184+172185static void vp_reset(struct virtio_device *vdev)173186{174187 struct virtio_pci_device *vp_dev = to_vp_device(vdev);175188 /* 0 status means a reset. */176189 iowrite8(0, vp_dev->ioaddr + VIRTIO_PCI_STATUS);190190+ /* Flush out the status write, and flush in device writes,191191+ * including MSi-X interrupts, if any. */192192+ ioread8(vp_dev->ioaddr + VIRTIO_PCI_STATUS);193193+ /* Flush pending VQ/configuration callbacks. */194194+ vp_synchronize_vectors(vdev);177195}178196179197/* the notify function used when creating a virt queue */
-7
drivers/watchdog/Kconfig
···314314 To compile this driver as a module, choose M here: the315315 module will be called nuc900_wdt.316316317317-config ADX_WATCHDOG318318- tristate "Avionic Design Xanthos watchdog"319319- depends on ARCH_PXA_ADX320320- help321321- Say Y here if you want support for the watchdog timer on Avionic322322- Design Xanthos boards.323323-324317config TS72XX_WATCHDOG325318 tristate "TS-72XX SBC Watchdog"326319 depends on MACH_TS72XX
···150150 if (wm831x_wdt_cfgs[i].time == timeout)151151 break;152152 if (i == ARRAY_SIZE(wm831x_wdt_cfgs))153153- ret = -EINVAL;153153+ return -EINVAL;154154155155 ret = wm831x_reg_unlock(wm831x);156156 if (ret == 0) {
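The wm831x watchdog fix replaces `ret = -EINVAL` with an immediate return: merely setting a local and falling through lets the later register writes run with an out-of-range table index and overwrite the error. The pattern in isolation (hypothetical table and names):

```c
#include <assert.h>

static const int valid_timeouts[] = { 1, 2, 4, 8 };
#define N_CFGS ((int)(sizeof(valid_timeouts) / sizeof(valid_timeouts[0])))

/* Return -22 (-EINVAL) immediately when the timeout isn't in the
 * table, instead of continuing with i == N_CFGS and indexing past
 * the end as the pre-fix code effectively did. */
static int set_timeout(int timeout)
{
    int i;

    for (i = 0; i < N_CFGS; i++)
        if (valid_timeouts[i] == timeout)
            break;
    if (i == N_CFGS)
        return -22; /* -EINVAL */
    return valid_timeouts[i];
}
```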
···514514 struct btrfs_root *root,515515 struct extent_buffer *buf)516516{517517+ /* ensure we can see the force_cow */518518+ smp_rmb();519519+520520+ /*521521+ * We do not need to cow a block if522522+ * 1) this block is not created or changed in this transaction;523523+ * 2) this block does not belong to TREE_RELOC tree;524524+ * 3) the root is not forced COW.525525+ *526526+ * What is forced COW:527527+ * when we create snapshot during commiting the transaction,528528+ * after we've finished coping src root, we must COW the shared529529+ * block to ensure the metadata consistency.530530+ */517531 if (btrfs_header_generation(buf) == trans->transid &&518532 !btrfs_header_flag(buf, BTRFS_HEADER_FLAG_WRITTEN) &&519533 !(root->root_key.objectid != BTRFS_TREE_RELOC_OBJECTID &&520520- btrfs_header_flag(buf, BTRFS_HEADER_FLAG_RELOC)))534534+ btrfs_header_flag(buf, BTRFS_HEADER_FLAG_RELOC)) &&535535+ !root->force_cow)521536 return 0;522537 return 1;523538}
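The smp_rmb() added here pairs with the smp_wmb() that the transaction.c hunk later in this series issues around `root->force_cow`, so a reader that observes the flag also observes the snapshot work published with it. As a loose userspace analogy (C11 release/acquire, hypothetical names, not the kernel barrier primitives):

```c
#include <assert.h>
#include <stdatomic.h>

static atomic_int force_cow_flag;
static int published_state; /* data the writer publishes with the flag */

/* writer side, analogous to transaction.c setting force_cow + smp_wmb() */
static void snapshot_publish(int state)
{
    published_state = state;
    atomic_store_explicit(&force_cow_flag, 1, memory_order_release);
}

/* reader side, analogous to should_cow_block()'s smp_rmb() + flag test */
static int cow_decision(int *state_out)
{
    if (atomic_load_explicit(&force_cow_flag, memory_order_acquire)) {
        *state_out = published_state;
        return 1; /* must COW */
    }
    return 0;
}
```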
+7-1
fs/btrfs/ctree.h
···848848enum btrfs_caching_type {849849 BTRFS_CACHE_NO = 0,850850 BTRFS_CACHE_STARTED = 1,851851- BTRFS_CACHE_FINISHED = 2,851851+ BTRFS_CACHE_FAST = 2,852852+ BTRFS_CACHE_FINISHED = 3,852853};853854854855enum btrfs_disk_cache_state {···12721271 * for stat. It may be used for more later12731272 */12741273 dev_t anon_dev;12741274+12751275+ int force_cow;12751276};1276127712771278struct btrfs_ioctl_defrag_range_args {···23692366int btrfs_block_rsv_refill(struct btrfs_root *root,23702367 struct btrfs_block_rsv *block_rsv,23712368 u64 min_reserved);23692369+int btrfs_block_rsv_refill_noflush(struct btrfs_root *root,23702370+ struct btrfs_block_rsv *block_rsv,23712371+ u64 min_reserved);23722372int btrfs_block_rsv_migrate(struct btrfs_block_rsv *src_rsv,23732373 struct btrfs_block_rsv *dst_rsv,23742374 u64 num_bytes);
+129-18
fs/btrfs/disk-io.c
···620620621621static int btree_io_failed_hook(struct bio *failed_bio,622622 struct page *page, u64 start, u64 end,623623- u64 mirror_num, struct extent_state *state)623623+ int mirror_num, struct extent_state *state)624624{625625 struct extent_io_tree *tree;626626 unsigned long len;···25732573 int errors = 0;25742574 u32 crc;25752575 u64 bytenr;25762576- int last_barrier = 0;2577257625782577 if (max_mirrors == 0)25792578 max_mirrors = BTRFS_SUPER_MIRROR_MAX;25802580-25812581- /* make sure only the last submit_bh does a barrier */25822582- if (do_barriers) {25832583- for (i = 0; i < max_mirrors; i++) {25842584- bytenr = btrfs_sb_offset(i);25852585- if (bytenr + BTRFS_SUPER_INFO_SIZE >=25862586- device->total_bytes)25872587- break;25882588- last_barrier = i;25892589- }25902590- }2591257925922580 for (i = 0; i < max_mirrors; i++) {25932581 bytenr = btrfs_sb_offset(i);···26222634 bh->b_end_io = btrfs_end_buffer_write_sync;26232635 }2624263626252625- if (i == last_barrier && do_barriers)26262626- ret = submit_bh(WRITE_FLUSH_FUA, bh);26272627- else26282628- ret = submit_bh(WRITE_SYNC, bh);26292629-26372637+ /*26382638+ * we fua the first super. The others we allow26392639+ * to go down lazy.26402640+ */26412641+ ret = submit_bh(WRITE_FUA, bh);26302642 if (ret)26312643 errors++;26322644 }26332645 return errors < i ? 0 : -1;26462646+}26472647+26482648+/*26492649+ * endio for the write_dev_flush, this will wake anyone waiting26502650+ * for the barrier when it is done26512651+ */26522652+static void btrfs_end_empty_barrier(struct bio *bio, int err)26532653+{26542654+ if (err) {26552655+ if (err == -EOPNOTSUPP)26562656+ set_bit(BIO_EOPNOTSUPP, &bio->bi_flags);26572657+ clear_bit(BIO_UPTODATE, &bio->bi_flags);26582658+ }26592659+ if (bio->bi_private)26602660+ complete(bio->bi_private);26612661+ bio_put(bio);26622662+}26632663+26642664+/*26652665+ * trigger flushes for one the devices. If you pass wait == 0, the flushes are26662666+ * sent down. With wait == 1, it waits for the previous flush.26672667+ *26682668+ * any device where the flush fails with eopnotsupp are flagged as not-barrier26692669+ * capable26702670+ */26712671+static int write_dev_flush(struct btrfs_device *device, int wait)26722672+{26732673+ struct bio *bio;26742674+ int ret = 0;26752675+26762676+ if (device->nobarriers)26772677+ return 0;26782678+26792679+ if (wait) {26802680+ bio = device->flush_bio;26812681+ if (!bio)26822682+ return 0;26832683+26842684+ wait_for_completion(&device->flush_wait);26852685+26862686+ if (bio_flagged(bio, BIO_EOPNOTSUPP)) {26872687+ printk("btrfs: disabling barriers on dev %s\n",26882688+ device->name);26892689+ device->nobarriers = 1;26902690+ }26912691+ if (!bio_flagged(bio, BIO_UPTODATE)) {26922692+ ret = -EIO;26932693+ }26942694+26952695+ /* drop the reference from the wait == 0 run */26962696+ bio_put(bio);26972697+ device->flush_bio = NULL;26982698+26992699+ return ret;27002700+ }27012701+27022702+ /*27032703+ * one reference for us, and we leave it for the27042704+ * caller27052705+ */27062706+ device->flush_bio = NULL;;27072707+ bio = bio_alloc(GFP_NOFS, 0);27082708+ if (!bio)27092709+ return -ENOMEM;27102710+27112711+ bio->bi_end_io = btrfs_end_empty_barrier;27122712+ bio->bi_bdev = device->bdev;27132713+ init_completion(&device->flush_wait);27142714+ bio->bi_private = &device->flush_wait;27152715+ device->flush_bio = bio;27162716+27172717+ bio_get(bio);27182718+ submit_bio(WRITE_FLUSH, bio);27192719+27202720+ return 0;27212721+}27222722+27232723+/*27242724+ * send an empty flush down to each device in parallel,27252725+ * then wait for them27262726+ */27272727+static int barrier_all_devices(struct btrfs_fs_info *info)27282728+{27292729+ struct list_head *head;27302730+ struct btrfs_device *dev;27312731+ int errors = 0;27322732+ int ret;27332733+27342734+ /* send down all the barriers */27352735+ head = &info->fs_devices->devices;27362736+ list_for_each_entry_rcu(dev, head, dev_list) {27372737+ if (!dev->bdev) {27382738+ errors++;27392739+ continue;27402740+ }27412741+ if (!dev->in_fs_metadata || !dev->writeable)27422742+ continue;27432743+27442744+ ret = write_dev_flush(dev, 0);27452745+ if (ret)27462746+ errors++;27472747+ }27482748+27492749+ /* wait for all the barriers */27502750+ list_for_each_entry_rcu(dev, head, dev_list) {27512751+ if (!dev->bdev) {27522752+ errors++;27532753+ continue;27542754+ }27552755+ if (!dev->in_fs_metadata || !dev->writeable)27562756+ continue;27572757+27582758+ ret = write_dev_flush(dev, 1);27592759+ if (ret)27602760+ errors++;27612761+ }27622762+ if (errors)27632763+ return -EIO;27642764+ return 0;26342765}2635276626362767int write_all_supers(struct btrfs_root *root, int max_mirrors)···2773266627742667 mutex_lock(&root->fs_info->fs_devices->device_list_mutex);27752668 head = &root->fs_info->fs_devices->devices;26692669+26702670+ if (do_barriers)26712671+ barrier_all_devices(root->fs_info);26722672+27762673 list_for_each_entry_rcu(dev, head, dev_list) {27772674 if (!dev->bdev) {27782675 total_errors++;
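The barrier_all_devices() helper added above issues one empty flush bio per device and only then waits, so the device flushes proceed in parallel rather than serially. The two-phase shape, modeled with plain function pointers (hypothetical stand-ins for write_dev_flush(dev, 0) and write_dev_flush(dev, 1)):

```c
#include <assert.h>

#define NDEV 3

/* Phase one submits a flush to every device so they run concurrently;
 * phase two waits for each result. Failures in either phase are
 * counted and collapse to a single -EIO, mirroring the errors
 * handling in barrier_all_devices(). */
static int flush_all(int (*submit)(int dev), int (*wait)(int dev))
{
    int errors = 0;
    int dev;

    for (dev = 0; dev < NDEV; dev++)  /* send down all the barriers */
        if (submit(dev))
            errors++;
    for (dev = 0; dev < NDEV; dev++)  /* then wait for all of them */
        if (wait(dev))
            errors++;
    return errors ? -5 /* -EIO */ : 0;
}

static int flush_ok(int dev) { (void)dev; return 0; }
static int flush_fail_dev1(int dev) { return dev == 1 ? -5 : 0; }
```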
+103-52
fs/btrfs/extent-tree.c
···467467 struct btrfs_root *root,468468 int load_cache_only)469469{470470+ DEFINE_WAIT(wait);470471 struct btrfs_fs_info *fs_info = cache->fs_info;471472 struct btrfs_caching_control *caching_ctl;472473 int ret = 0;473474474474- smp_mb();475475- if (cache->cached != BTRFS_CACHE_NO)475475+ caching_ctl = kzalloc(sizeof(*caching_ctl), GFP_NOFS);476476+ BUG_ON(!caching_ctl);477477+478478+ INIT_LIST_HEAD(&caching_ctl->list);479479+ mutex_init(&caching_ctl->mutex);480480+ init_waitqueue_head(&caching_ctl->wait);481481+ caching_ctl->block_group = cache;482482+ caching_ctl->progress = cache->key.objectid;483483+ atomic_set(&caching_ctl->count, 1);484484+ caching_ctl->work.func = caching_thread;485485+486486+ spin_lock(&cache->lock);487487+ /*488488+ * This should be a rare occasion, but this could happen I think in the489489+ * case where one thread starts to load the space cache info, and then490490+ * some other thread starts a transaction commit which tries to do an491491+ * allocation while the other thread is still loading the space cache492492+ * info. The previous loop should have kept us from choosing this block493493+ * group, but if we've moved to the state where we will wait on caching494494+ * block groups we need to first check if we're doing a fast load here,495495+ * so we can wait for it to finish, otherwise we could end up allocating496496+ * from a block group who's cache gets evicted for one reason or497497+ * another.498498+ */499499+ while (cache->cached == BTRFS_CACHE_FAST) {500500+ struct btrfs_caching_control *ctl;501501+502502+ ctl = cache->caching_ctl;503503+ atomic_inc(&ctl->count);504504+ prepare_to_wait(&ctl->wait, &wait, TASK_UNINTERRUPTIBLE);505505+ spin_unlock(&cache->lock);506506+507507+ schedule();508508+509509+ finish_wait(&ctl->wait, &wait);510510+ put_caching_control(ctl);511511+ spin_lock(&cache->lock);512512+ }513513+514514+ if (cache->cached != BTRFS_CACHE_NO) {515515+ spin_unlock(&cache->lock);516516+ kfree(caching_ctl);476517 return 0;518518+ }519519+ WARN_ON(cache->caching_ctl);520520+ cache->caching_ctl = caching_ctl;521521+ cache->cached = BTRFS_CACHE_FAST;522522+ spin_unlock(&cache->lock);477523478524 /*479525 * We can't do the read from on-disk cache during a commit since we need···530484 if (trans && (!trans->transaction->in_commit) &&531485 (root && root != root->fs_info->tree_root) &&532486 btrfs_test_opt(root, SPACE_CACHE)) {533533- spin_lock(&cache->lock);534534- if (cache->cached != BTRFS_CACHE_NO) {535535- spin_unlock(&cache->lock);536536- return 0;537537- }538538- cache->cached = BTRFS_CACHE_STARTED;539539- spin_unlock(&cache->lock);540540-541487 ret = load_free_space_cache(fs_info, cache);542488543489 spin_lock(&cache->lock);544490 if (ret == 1) {491491+ cache->caching_ctl = NULL;545492 cache->cached = BTRFS_CACHE_FINISHED;546493 cache->last_byte_to_unpin = (u64)-1;547494 } else {548548- cache->cached = BTRFS_CACHE_NO;495495+ if (load_cache_only) {496496+ cache->caching_ctl = NULL;497497+ cache->cached = BTRFS_CACHE_NO;498498+ } else {499499+ cache->cached = BTRFS_CACHE_STARTED;500500+ }549501 }550502 spin_unlock(&cache->lock);503503+ wake_up(&caching_ctl->wait);551504 if (ret == 1) {505505+ put_caching_control(caching_ctl);552506 free_excluded_extents(fs_info->extent_root, cache);553507 return 0;554508 }555555- }556556-557557- if (load_cache_only)558558- return 0;559559-560560- caching_ctl = kzalloc(sizeof(*caching_ctl), GFP_NOFS);561561- BUG_ON(!caching_ctl);562562-563563- INIT_LIST_HEAD(&caching_ctl->list);564564- mutex_init(&caching_ctl->mutex);565565- init_waitqueue_head(&caching_ctl->wait);566566- caching_ctl->block_group = cache;567567- caching_ctl->progress = cache->key.objectid;568568- /* one for caching kthread, one for caching block group list */569569- atomic_set(&caching_ctl->count, 2);570570- caching_ctl->work.func = caching_thread;571571-572572- spin_lock(&cache->lock);573573- if (cache->cached != BTRFS_CACHE_NO) {509509+ } else {510510+ /*511511+ * We are not going to do the fast caching, set cached to the512512+ * appropriate value and wakeup any waiters.513513+ */514514+ spin_lock(&cache->lock);515515+ if (load_cache_only) {516516+ cache->caching_ctl = NULL;517517+ cache->cached = BTRFS_CACHE_NO;518518+ } else {519519+ cache->cached = BTRFS_CACHE_STARTED;520520+ }574521 spin_unlock(&cache->lock);575575- kfree(caching_ctl);522522+ wake_up(&caching_ctl->wait);523523+ }524524+525525+ if (load_cache_only) {526526+ put_caching_control(caching_ctl);576527 return 0;577528 }578578- cache->caching_ctl = caching_ctl;579579- cache->cached = BTRFS_CACHE_STARTED;580580- spin_unlock(&cache->lock);581529582530 down_write(&fs_info->extent_commit_sem);531531+ atomic_inc(&caching_ctl->count);583532 list_add_tail(&caching_ctl->list, &fs_info->caching_block_groups);584533 up_write(&fs_info->extent_commit_sem);585534···38883847 return ret;38893848}3890384938913891-int btrfs_block_rsv_refill(struct btrfs_root *root,38923892- struct btrfs_block_rsv *block_rsv,38933893- u64 min_reserved)38503850+static inline int __btrfs_block_rsv_refill(struct btrfs_root *root,38513851+ struct btrfs_block_rsv *block_rsv,38523852+ u64 min_reserved, int flush)38943853{38953854 u64 num_bytes = 0;38963855 int ret = -ENOSPC;···39093868 if (!ret)39103869 return 0;3911387039123912- ret = reserve_metadata_bytes(root, block_rsv, num_bytes, 1);38713871+ ret = reserve_metadata_bytes(root, block_rsv, num_bytes, flush);39133872 if (!ret) {39143873 block_rsv_add_bytes(block_rsv, num_bytes, 0);39153874 return 0;39163875 }3917387639183877 return ret;38783878+}38793879+38803880+int btrfs_block_rsv_refill(struct btrfs_root *root,38813881+ struct btrfs_block_rsv *block_rsv,38823882+ u64 min_reserved)38833883+{38843884+ return __btrfs_block_rsv_refill(root, block_rsv, min_reserved, 1);38853885+}38863886+38873887+int btrfs_block_rsv_refill_noflush(struct btrfs_root *root,38883888+ struct btrfs_block_rsv *block_rsv,38893889+ u64 min_reserved)38903890+{38913891+ return __btrfs_block_rsv_refill(root, block_rsv, min_reserved, 0);39193892}3920389339213894int btrfs_block_rsv_migrate(struct btrfs_block_rsv *src_rsv,···52335178 }5234517952355180have_block_group:52365236- if (unlikely(block_group->cached == BTRFS_CACHE_NO)) {51815181+ cached = block_group_cache_done(block_group);51825182+ if (unlikely(!cached)) {52375183 u64 free_percent;5238518451855185+ found_uncached_bg = true;52395186 ret = cache_block_group(block_group, trans,52405187 orig_root, 1);52415188 if (block_group->cached == BTRFS_CACHE_FINISHED)52425242- goto have_block_group;51895189+ goto alloc;5243519052445191 free_percent = btrfs_block_group_used(&block_group->item);52455192 free_percent *= 100;···52635206 orig_root, 0);52645207 BUG_ON(ret);52655208 }52665266- found_uncached_bg = true;5267520952685210 /*52695211 * If loop is set for cached only, try the next block···52725216 goto loop;52735217 }5274521852755275- cached = block_group_cache_done(block_group);52765276- if (unlikely(!cached))52775277- found_uncached_bg = true;52785278-52195219+alloc:52795220 if (unlikely(block_group->ro))52805221 goto loop;5281522252825223 spin_lock(&block_group->free_space_ctl->tree_lock);52835224 if (cached &&52845225 block_group->free_space_ctl->free_space <52855285- num_bytes + empty_size) {52265226+ num_bytes + empty_cluster + empty_size) {52865227 spin_unlock(&block_group->free_space_ctl->tree_lock);52875228 goto loop;52885229 }···53005247 * people trying to start a new cluster53015248 */53025249 spin_lock(&last_ptr->refill_lock);53035303- if (last_ptr->block_group &&53045304- (last_ptr->block_group->ro ||53055305- !block_group_bits(last_ptr->block_group, data))) {53065306- offset = 0;52505250+ if (!last_ptr->block_group ||52515251+ last_ptr->block_group->ro ||52525252+ !block_group_bits(last_ptr->block_group, data))53075253 goto refill_cluster;53085308- }5309525453105255 offset = btrfs_alloc_from_cluster(block_group, last_ptr,53115256 num_bytes, search_start);···53545303 /* allocate a cluster in this block group */53555304 ret = btrfs_find_space_cluster(trans, root,53565305 block_group, last_ptr,53575357- offset, num_bytes,53065306+ search_start, num_bytes,53585307 empty_cluster + empty_size);53595308 if (ret == 0) {53605309 /*
+26-10
fs/btrfs/extent_io.c
···22852285 clean_io_failure(start, page);22862286 }22872287 if (!uptodate) {22882288- u64 failed_mirror;22892289- failed_mirror = (u64)bio->bi_bdev;22902290- if (tree->ops && tree->ops->readpage_io_failed_hook)22912291- ret = tree->ops->readpage_io_failed_hook(22922292- bio, page, start, end,22932293- failed_mirror, state);22942294- else22952295- ret = bio_readpage_error(bio, page, start, end,22962296- failed_mirror, NULL);22882288+ int failed_mirror;22892289+ failed_mirror = (int)(unsigned long)bio->bi_bdev;22902290+ /*22912291+ * The generic bio_readpage_error handles errors the22922292+ * following way: If possible, new read requests are22932293+ * created and submitted and will end up in22942294+ * end_bio_extent_readpage as well (if we're lucky, not22952295+ * in the !uptodate case). In that case it returns 0 and22962296+ * we just go on with the next page in our bio. If it22972297+ * can't handle the error it will return -EIO and we22982298+ * remain responsible for that page.22992299+ */23002300+ ret = bio_readpage_error(bio, page, start, end,23012301+ failed_mirror, NULL);22972302 if (ret == 0) {23032303+error_handled:22982304 uptodate =22992305 test_bit(BIO_UPTODATE, &bio->bi_flags);23002306 if (err)23012307 uptodate = 0;23022308 uncache_state(&cached);23032309 continue;23102310+ }23112311+ if (tree->ops && tree->ops->readpage_io_failed_hook) {23122312+ ret = tree->ops->readpage_io_failed_hook(23132313+ bio, page, start, end,23142314+ failed_mirror, state);23152315+ if (ret == 0)23162316+ goto error_handled;23042317 }23052318 }23062319···33793366 return -ENOMEM;33803367 path->leave_spinning = 1;3381336833693369+ start = ALIGN(start, BTRFS_I(inode)->root->sectorsize);33703370+ len = ALIGN(len, BTRFS_I(inode)->root->sectorsize);33713371+33823372 /*33833373 * lookup the last file extent. We're not using i_size here33843374 * because there might be preallocation past i_size···34293413 lock_extent_bits(&BTRFS_I(inode)->io_tree, start, start + len, 0,34303414 &cached_state, GFP_NOFS);3431341534323432- em = get_extent_skip_holes(inode, off, last_for_get_extent,34163416+ em = get_extent_skip_holes(inode, start, last_for_get_extent,34333417 get_extent);34343418 if (!em)34353419 goto out;
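The reworked read-completion path above tries the generic bio_readpage_error() retry first and falls back to the tree's readpage_io_failed_hook only when the generic path reports an error. The dispatch order in miniature (stub callbacks, hypothetical names):

```c
#include <stddef.h>
#include <assert.h>

/* Try the generic repair path first; if it returns nonzero, give an
 * optional per-tree hook a chance. 0 means "handled, go on with the
 * next page"; -5 (-EIO) means nobody could recover the page. */
static int handle_read_error(int (*generic_retry)(void),
                             int (*failed_hook)(void))
{
    if (generic_retry() == 0)
        return 0;
    if (failed_hook && failed_hook() == 0)
        return 0;
    return -5; /* -EIO */
}

static int recovers(void) { return 0; }
static int gives_up(void) { return -5; }
```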
+1-1
fs/btrfs/extent_io.h
···7070 unsigned long bio_flags);7171 int (*readpage_io_hook)(struct page *page, u64 start, u64 end);7272 int (*readpage_io_failed_hook)(struct bio *bio, struct page *page,7373- u64 start, u64 end, u64 failed_mirror,7373+ u64 start, u64 end, int failed_mirror,7474 struct extent_state *state);7575 int (*writepage_io_failed_hook)(struct bio *bio, struct page *page,7676 u64 start, u64 end,
+28-37
fs/btrfs/free-space-cache.c
···351351 }352352 }353353354354+ for (i = 0; i < io_ctl->num_pages; i++) {355355+ clear_page_dirty_for_io(io_ctl->pages[i]);356356+ set_page_extent_mapped(io_ctl->pages[i]);357357+ }358358+354359 return 0;355360}356361···14701465{14711466 info->offset = offset_to_bitmap(ctl, offset);14721467 info->bytes = 0;14681468+ INIT_LIST_HEAD(&info->list);14731469 link_free_space(ctl, info);14741470 ctl->total_bitmaps++;14751471···18501844 info = tree_search_offset(ctl, offset_to_bitmap(ctl, offset),18511845 1, 0);18521846 if (!info) {18531853- WARN_ON(1);18471847+ /* the tree logging code might be calling us before we18481848+ * have fully loaded the free space rbtree for this18491849+ * block group. So it is possible the entry won't18501850+ * be in the rbtree yet at all. The caching code18511851+ * will make sure not to put it in the rbtree if18521852+ * the logging code has pinned it.18531853+ */18541854 goto out_lock;18551855 }18561856 }···2320230823212309 if (!found) {23222310 start = i;23112311+ cluster->max_size = 0;23232312 found = true;23242313 }23252314···24642451{24652452 struct btrfs_free_space_ctl *ctl = block_group->free_space_ctl;24662453 struct btrfs_free_space *entry;24672467- struct rb_node *node;24682454 int ret = -ENOSPC;24552455+ u64 bitmap_offset = offset_to_bitmap(ctl, offset);2469245624702457 if (ctl->total_bitmaps == 0)24712458 return -ENOSPC;2472245924732460 /*24742474- * First check our cached list of bitmaps and see if there is an entry24752475- * here that will work.24612461+ * The bitmap that covers offset won't be in the list unless offset24622462+ * is just its start offset.24762463 */24642464+ entry = list_first_entry(bitmaps, struct btrfs_free_space, list);24652465+ if (entry->offset != bitmap_offset) {24662466+ entry = tree_search_offset(ctl, bitmap_offset, 1, 0);24672467+ if (entry && list_empty(&entry->list))24682468+ list_add(&entry->list, bitmaps);24692469+ }24702470+24772471 list_for_each_entry(entry, bitmaps, list) {24782472 if (entry->bytes < min_bytes)24792473 continue;···24912471 }2492247224932473 /*24942494- * If we do have entries on our list and we are here then we didn't find24952495- * anything, so go ahead and get the next entry after the last entry in24962496- * this list and start the search from there.24742474+ * The bitmaps list has all the bitmaps that record free space24752475+ * starting after offset, so no more search is required.24972476 */24982498- if (!list_empty(bitmaps)) {24992499- entry = list_entry(bitmaps->prev, struct btrfs_free_space,25002500- list);25012501- node = rb_next(&entry->offset_index);25022502- if (!node)25032503- return -ENOSPC;25042504- entry = rb_entry(node, struct btrfs_free_space, offset_index);25052505- goto search;25062506- }25072507-25082508- entry = tree_search_offset(ctl, offset_to_bitmap(ctl, offset), 0, 1);25092509- if (!entry)25102510- return -ENOSPC;25112511-25122512-search:25132513- node = &entry->offset_index;25142514- do {25152515- entry = rb_entry(node, struct btrfs_free_space, offset_index);25162516- node = rb_next(&entry->offset_index);25172517- if (!entry->bitmap)25182518- continue;25192519- if (entry->bytes < min_bytes)25202520- continue;25212521- ret = btrfs_bitmap_cluster(block_group, entry, cluster, offset,25222522- bytes, min_bytes);25232523- } while (ret && node);25242524-25252525- return ret;24772477+ return -ENOSPC;25262478}2527247925282480/*···25122520 u64 offset, u64 bytes, u64 empty_size)25132521{25142522 struct btrfs_free_space_ctl *ctl = block_group->free_space_ctl;25152515- struct list_head bitmaps;25162523 struct btrfs_free_space *entry, *tmp;25242524+ LIST_HEAD(bitmaps);25172525 u64 min_bytes;25182526 int ret;25192527···25522560 goto out;25532561 }2554256225552555- INIT_LIST_HEAD(&bitmaps);25562563 ret = setup_cluster_no_bitmap(block_group, cluster, &bitmaps, offset,25572564 bytes, min_bytes);25582565 if (ret)
+5-3
fs/btrfs/inode.c
···34903490 * doing the truncate.34913491 */34923492 while (1) {34933493- ret = btrfs_block_rsv_refill(root, rsv, min_size);34933493+ ret = btrfs_block_rsv_refill_noflush(root, rsv, min_size);3494349434953495 /*34963496 * Try and steal from the global reserve since we will···67946794 struct dentry *dentry, struct kstat *stat)67956795{67966796 struct inode *inode = dentry->d_inode;67976797+ u32 blocksize = inode->i_sb->s_blocksize;67986798+67976799 generic_fillattr(inode, stat);67986800 stat->dev = BTRFS_I(inode)->root->anon_dev;67996801 stat->blksize = PAGE_CACHE_SIZE;68006800- stat->blocks = (inode_get_bytes(inode) +68016801- BTRFS_I(inode)->delalloc_bytes) >> 9;68026802+ stat->blocks = (ALIGN(inode_get_bytes(inode), blocksize) +68036803+ ALIGN(BTRFS_I(inode)->delalloc_bytes, blocksize)) >> 9;68026804 return 0;68036805}68046806
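The getattr change above rounds both the on-disk byte count and the delalloc byte count up to the filesystem block size before converting to 512-byte sectors, so `st_blocks` reflects whole allocated blocks. A sketch with a local ALIGN macro standing in for the kernel's ALIGN():

```c
#include <assert.h>

/* Round x up to a power-of-two boundary a, like the kernel's ALIGN() */
#define ALIGN_POW2(x, a) (((x) + (a) - 1) & ~((unsigned long long)(a) - 1))

/* st_blocks is in 512-byte units; align each component to the fs block
 * size first, as the patched btrfs_getattr() does, so a 1-byte file on
 * a 4K-block fs reports 8 sectors rather than 0 or 2. */
static unsigned long long stat_blocks(unsigned long long inode_bytes,
                                      unsigned long long delalloc_bytes,
                                      unsigned int blocksize)
{
    return (ALIGN_POW2(inode_bytes, blocksize) +
            ALIGN_POW2(delalloc_bytes, blocksize)) >> 9;
}
```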
+10-7
fs/btrfs/ioctl.c
···12161216 *devstr = '\0';12171217 devstr = vol_args->name;12181218 devid = simple_strtoull(devstr, &end, 10);12191219- printk(KERN_INFO "resizing devid %llu\n",12191219+ printk(KERN_INFO "btrfs: resizing devid %llu\n",12201220 (unsigned long long)devid);12211221 }12221222 device = btrfs_find_device(root, devid, NULL, NULL);12231223 if (!device) {12241224- printk(KERN_INFO "resizer unable to find device %llu\n",12241224+ printk(KERN_INFO "btrfs: resizer unable to find device %llu\n",12251225 (unsigned long long)devid);12261226 ret = -EINVAL;12271227 goto out_unlock;···12671267 do_div(new_size, root->sectorsize);12681268 new_size *= root->sectorsize;1269126912701270- printk(KERN_INFO "new size for %s is %llu\n",12701270+ printk(KERN_INFO "btrfs: new size for %s is %llu\n",12711271 device->name, (unsigned long long)new_size);1272127212731273 if (new_size > old_size) {···12781278 }12791279 ret = btrfs_grow_device(trans, device, new_size);12801280 btrfs_commit_transaction(trans, root);12811281- } else {12811281+ } else if (new_size < old_size) {12821282 ret = btrfs_shrink_device(device, new_size);12831283 }12841284···29302930 goto out;2931293129322932 for (i = 0; i < ipath->fspath->elem_cnt; ++i) {29332933- rel_ptr = ipath->fspath->val[i] - (u64)ipath->fspath->val;29332933+ rel_ptr = ipath->fspath->val[i] -29342934+ (u64)(unsigned long)ipath->fspath->val;29342935 ipath->fspath->val[i] = rel_ptr;29352936 }2936293729372937- ret = copy_to_user((void *)ipa->fspath, (void *)ipath->fspath, size);29382938+ ret = copy_to_user((void *)(unsigned long)ipa->fspath,29392939+ (void *)(unsigned long)ipath->fspath, size);29382940 if (ret) {29392941 ret = -EFAULT;29402942 goto out;···30193017 if (ret < 0)30203018 goto out;3021301930223022- ret = copy_to_user((void *)loi->inodes, (void *)inodes, size);30203020+ ret = copy_to_user((void *)(unsigned long)loi->inodes,30213021+ (void *)(unsigned long)inodes, size);30233022 if (ret)30243023 ret = -EFAULT;30253024
+6-1
fs/btrfs/scrub.c
···
 	btrfs_release_path(swarn->path);

 	ipath = init_ipath(4096, local_root, swarn->path);
+	if (IS_ERR(ipath)) {
+		ret = PTR_ERR(ipath);
+		ipath = NULL;
+		goto err;
+	}
 	ret = paths_from_inode(inum, ipath);

 	if (ret < 0)
···
 			swarn->logical, swarn->dev->name,
 			(unsigned long long)swarn->sector, root, inum, offset,
 			min(isize - offset, (u64)PAGE_SIZE), nlink,
-			(char *)ipath->fspath->val[i]);
+			(char *)(unsigned long)ipath->fspath->val[i]);

 	free_ipath(ipath);
 	return 0;
+3-3
fs/btrfs/super.c
···
 	int i = 0, nr_devices;
 	int ret;

-	nr_devices = fs_info->fs_devices->rw_devices;
+	nr_devices = fs_info->fs_devices->open_devices;
 	BUG_ON(!nr_devices);

 	devices_info = kmalloc(sizeof(*devices_info) * nr_devices,
···
 	else
 		min_stripe_size = BTRFS_STRIPE_LEN;

-	list_for_each_entry(device, &fs_devices->alloc_list, dev_alloc_list) {
-		if (!device->in_fs_metadata)
+	list_for_each_entry(device, &fs_devices->devices, dev_list) {
+		if (!device->in_fs_metadata || !device->bdev)
 			continue;

 		avail_space = device->total_bytes - device->bytes_used;
+8
fs/btrfs/transaction.c
···

 	btrfs_save_ino_cache(root, trans);

+	/* see comments in should_cow_block() */
+	root->force_cow = 0;
+	smp_wmb();
+
 	if (root->commit_root != root->node) {
 		mutex_lock(&root->fs_commit_mutex);
 		switch_commit_root(root);
···
 	btrfs_copy_root(trans, root, old, &tmp, objectid);
 	btrfs_tree_unlock(old);
 	free_extent_buffer(old);
+
+	/* see comments in should_cow_block() */
+	root->force_cow = 1;
+	smp_wmb();

 	btrfs_set_root_node(new_root_item, tmp);
 	/* record when the snapshot was created in key.offset */
+6
fs/btrfs/volumes.h
···
 	struct reada_zone *reada_curr_zone;
 	struct radix_tree_root reada_zones;
 	struct radix_tree_root reada_extents;
+
+	/* for sending down flush barriers */
+	struct bio *flush_bio;
+	struct completion flush_wait;
+	int nobarriers;
+
 };

 struct btrfs_fs_devices {
+1-1
fs/ceph/dir.c
···
 {
 	struct ceph_dentry_info *di;

-	dout("d_release %p\n", dentry);
+	dout("ceph_d_prune %p\n", dentry);

 	/* do we have a valid parent? */
 	if (!dentry->d_parent || IS_ROOT(dentry))
fs/ext4/inode.c
···
 			ext4_msg(inode->i_sb, KERN_CRIT, "%s: jbd2_start: "
 				 "%ld pages, ino %lu; err %d", __func__,
 				 wbc->nr_to_write, inode->i_ino, ret);
+			blk_finish_plug(&plug);
 			goto out_writepages;
 		}
···
 	spin_unlock_irqrestore(&ei->i_completed_io_lock, flags);

 	/* queue the work to convert unwritten extents to written */
-	queue_work(wq, &io_end->work);
 	iocb->private = NULL;
+	queue_work(wq, &io_end->work);

 	/* XXX: probably should move into the real I/O completion handler */
 	inode_dio_done(inode);
+3-3
fs/ext4/super.c
···
 			data_opt = EXT4_MOUNT_WRITEBACK_DATA;
 	datacheck:
 			if (is_remount) {
-				if (test_opt(sb, DATA_FLAGS) != data_opt) {
+				if (!sbi->s_journal)
+					ext4_msg(sb, KERN_WARNING, "Remounting file system with no journal so ignoring journalled data option");
+				else if (test_opt(sb, DATA_FLAGS) != data_opt) {
 					ext4_msg(sb, KERN_ERR,
 						"Cannot change data mode on remount");
 					return 0;
···
 }

 static int ext4_fill_super(struct super_block *sb, void *data, int silent)
-	__releases(kernel_lock)
-	__acquires(kernel_lock)
 {
 	char *orig_data = kstrdup(data, GFP_KERNEL);
 	struct buffer_head *bh;
+25-32
fs/minix/bitmap.c
···
 #include <linux/bitops.h>
 #include <linux/sched.h>

-static const int nibblemap[] = { 4,3,3,2,3,2,2,1,3,2,2,1,2,1,1,0 };
-
 static DEFINE_SPINLOCK(bitmap_lock);

-static unsigned long count_free(struct buffer_head *map[], unsigned numblocks, __u32 numbits)
+/*
+ * bitmap consists of blocks filled with 16bit words
+ * bit set == busy, bit clear == free
+ * endianness is a mess, but for counting zero bits it really doesn't matter...
+ */
+static __u32 count_free(struct buffer_head *map[], unsigned blocksize, __u32 numbits)
 {
-	unsigned i, j, sum = 0;
-	struct buffer_head *bh;
-
-	for (i=0; i<numblocks-1; i++) {
-		if (!(bh=map[i]))
-			return(0);
-		for (j=0; j<bh->b_size; j++)
-			sum += nibblemap[bh->b_data[j] & 0xf]
-				+ nibblemap[(bh->b_data[j]>>4) & 0xf];
+	__u32 sum = 0;
+	unsigned blocks = DIV_ROUND_UP(numbits, blocksize * 8);
+
+	while (blocks--) {
+		unsigned words = blocksize / 2;
+		__u16 *p = (__u16 *)(*map++)->b_data;
+		while (words--)
+			sum += 16 - hweight16(*p++);
 	}

-	if (numblocks==0 || !(bh=map[numblocks-1]))
-		return(0);
-	i = ((numbits - (numblocks-1) * bh->b_size * 8) / 16) * 2;
-	for (j=0; j<i; j++) {
-		sum += nibblemap[bh->b_data[j] & 0xf]
-			+ nibblemap[(bh->b_data[j]>>4) & 0xf];
-	}
-
-	i = numbits%16;
-	if (i!=0) {
-		i = *(__u16 *)(&bh->b_data[j]) | ~((1<<i) - 1);
-		sum += nibblemap[i & 0xf] + nibblemap[(i>>4) & 0xf];
-		sum += nibblemap[(i>>8) & 0xf] + nibblemap[(i>>12) & 0xf];
-	}
-	return(sum);
+	return sum;
 }

 void minix_free_block(struct inode *inode, unsigned long block)
···
 	return 0;
 }

-unsigned long minix_count_free_blocks(struct minix_sb_info *sbi)
+unsigned long minix_count_free_blocks(struct super_block *sb)
 {
-	return (count_free(sbi->s_zmap, sbi->s_zmap_blocks,
-		sbi->s_nzones - sbi->s_firstdatazone + 1)
+	struct minix_sb_info *sbi = minix_sb(sb);
+	u32 bits = sbi->s_nzones - (sbi->s_firstdatazone + 1);
+
+	return (count_free(sbi->s_zmap, sb->s_blocksize, bits)
 		<< sbi->s_log_zone_size);
 }
···
 	return inode;
 }

-unsigned long minix_count_free_inodes(struct minix_sb_info *sbi)
+unsigned long minix_count_free_inodes(struct super_block *sb)
 {
-	return count_free(sbi->s_imap, sbi->s_imap_blocks, sbi->s_ninodes + 1);
+	struct minix_sb_info *sbi = minix_sb(sb);
+	u32 bits = sbi->s_ninodes + 1;
+
+	return count_free(sbi->s_imap, sb->s_blocksize, bits);
 }
+23-2
fs/minix/inode.c
···
 	else if (sbi->s_mount_state & MINIX_ERROR_FS)
 		printk("MINIX-fs: mounting file system with errors, "
 			"running fsck is recommended\n");
+
+	/* Apparently minix can create filesystems that allocate more blocks for
+	 * the bitmaps than needed. We simply ignore that, but verify it didn't
+	 * create one with not enough blocks and bail out if so.
+	 */
+	block = minix_blocks_needed(sbi->s_ninodes, s->s_blocksize);
+	if (sbi->s_imap_blocks < block) {
+		printk("MINIX-fs: file system does not have enough "
+				"imap blocks allocated. Refusing to mount\n");
+		goto out_iput;
+	}
+
+	block = minix_blocks_needed(
+			(sbi->s_nzones - (sbi->s_firstdatazone + 1)),
+			s->s_blocksize);
+	if (sbi->s_zmap_blocks < block) {
+		printk("MINIX-fs: file system does not have enough "
+				"zmap blocks allocated. Refusing to mount.\n");
+		goto out_iput;
+	}
+
 	return 0;

 out_iput:
···
 	buf->f_type = sb->s_magic;
 	buf->f_bsize = sb->s_blocksize;
 	buf->f_blocks = (sbi->s_nzones - sbi->s_firstdatazone) << sbi->s_log_zone_size;
-	buf->f_bfree = minix_count_free_blocks(sbi);
+	buf->f_bfree = minix_count_free_blocks(sb);
 	buf->f_bavail = buf->f_bfree;
 	buf->f_files = sbi->s_ninodes;
-	buf->f_ffree = minix_count_free_inodes(sbi);
+	buf->f_ffree = minix_count_free_inodes(sb);
 	buf->f_namelen = sbi->s_namelen;
 	buf->f_fsid.val[0] = (u32)id;
 	buf->f_fsid.val[1] = (u32)(id >> 32);
fs/namespace.c
···
 struct dentry *mount_subtree(struct vfsmount *mnt, const char *name)
 {
 	struct mnt_namespace *ns;
+	struct super_block *s;
 	struct path path;
 	int err;
···
 		return ERR_PTR(err);

 	/* trade a vfsmount reference for active sb one */
-	atomic_inc(&path.mnt->mnt_sb->s_active);
+	s = path.mnt->mnt_sb;
+	atomic_inc(&s->s_active);
 	mntput(path.mnt);
 	/* lock the sucker */
-	down_write(&path.mnt->mnt_sb->s_umount);
+	down_write(&s->s_umount);
 	/* ... and return the root of (sub)tree on it */
 	return path.dentry;
 }
fs/ocfs2/aops.c
···
 	}

 	if (down_read_trylock(&oi->ip_alloc_sem) == 0) {
+		/*
+		 * Unlock the page and cycle ip_alloc_sem so that we don't
+		 * busyloop waiting for ip_alloc_sem to unlock
+		 */
 		ret = AOP_TRUNCATED_PAGE;
+		unlock_page(page);
+		unlock = 0;
+		down_read(&oi->ip_alloc_sem);
+		up_read(&oi->ip_alloc_sem);
 		goto out_inode_unlock;
 	}
···
 {
 	struct inode *inode = iocb->ki_filp->f_path.dentry->d_inode;
 	int level;
+	wait_queue_head_t *wq = ocfs2_ioend_wq(inode);

 	/* this io's submitter should not have unlocked this before we could */
 	BUG_ON(!ocfs2_iocb_is_rw_locked(iocb));

 	if (ocfs2_iocb_is_sem_locked(iocb))
 		ocfs2_iocb_clear_sem_locked(iocb);
+
+	if (ocfs2_iocb_is_unaligned_aio(iocb)) {
+		ocfs2_iocb_clear_unaligned_aio(iocb);
+
+		if (atomic_dec_and_test(&OCFS2_I(inode)->ip_unaligned_aio) &&
+		    waitqueue_active(wq)) {
+			wake_up_all(wq);
+		}
+	}

 	ocfs2_iocb_clear_rw_locked(iocb);
···
 	struct page	*w_target_page;

 	/*
+	 * w_target_locked is used for page_mkwrite path indicating no unlocking
+	 * against w_target_page in ocfs2_write_end_nolock.
+	 */
+	unsigned int	w_target_locked:1;
+
+	/*
 	 * ocfs2_write_end() uses this to know what the real range to
 	 * write in the target should be.
 	 */
···

 static void ocfs2_free_write_ctxt(struct ocfs2_write_ctxt *wc)
 {
+	int i;
+
+	/*
+	 * w_target_locked is only set to true in the page_mkwrite() case.
+	 * The intent is to allow us to lock the target page from write_begin()
+	 * to write_end(). The caller must hold a ref on w_target_page.
+	 */
+	if (wc->w_target_locked) {
+		BUG_ON(!wc->w_target_page);
+		for (i = 0; i < wc->w_num_pages; i++) {
+			if (wc->w_target_page == wc->w_pages[i]) {
+				wc->w_pages[i] = NULL;
+				break;
+			}
+		}
+		mark_page_accessed(wc->w_target_page);
+		page_cache_release(wc->w_target_page);
+	}
 	ocfs2_unlock_and_free_pages(wc->w_pages, wc->w_num_pages);

 	brelse(wc->w_di_bh);
···
 			 */
 			lock_page(mmap_page);

+			/* Exit and let the caller retry */
 			if (mmap_page->mapping != mapping) {
+				WARN_ON(mmap_page->mapping);
 				unlock_page(mmap_page);
-				/*
-				 * Sanity check - the locking in
-				 * ocfs2_pagemkwrite() should ensure
-				 * that this code doesn't trigger.
-				 */
-				ret = -EINVAL;
-				mlog_errno(ret);
+				ret = -EAGAIN;
 				goto out;
 			}

 			page_cache_get(mmap_page);
 			wc->w_pages[i] = mmap_page;
+			wc->w_target_locked = true;
 		} else {
 			wc->w_pages[i] = find_or_create_page(mapping, index,
 							     GFP_NOFS);
···
 		wc->w_target_page = wc->w_pages[i];
 	}
 out:
+	if (ret)
+		wc->w_target_locked = false;
 	return ret;
 }
···
 	 */
 	ret = ocfs2_grab_pages_for_write(mapping, wc, wc->w_cpos, pos, len,
 					 cluster_of_pages, mmap_page);
-	if (ret) {
+	if (ret && ret != -EAGAIN) {
 		mlog_errno(ret);
 		goto out_quota;
 	}
+
+	/*
+	 * ocfs2_grab_pages_for_write() returns -EAGAIN if it could not lock
+	 * the target page. In this case, we exit with no error and no target
+	 * page. This will trigger the caller, page_mkwrite(), to re-try
+	 * the operation.
+	 */
+	if (ret == -EAGAIN) {
+		BUG_ON(wc->w_target_page);
+		ret = 0;
+		goto out_quota;
+	}
+14
fs/ocfs2/aops.h
···
 	OCFS2_IOCB_RW_LOCK = 0,
 	OCFS2_IOCB_RW_LOCK_LEVEL,
 	OCFS2_IOCB_SEM,
+	OCFS2_IOCB_UNALIGNED_IO,
 	OCFS2_IOCB_NUM_LOCKS
 };
···
 	clear_bit(OCFS2_IOCB_SEM, (unsigned long *)&iocb->private)
 #define ocfs2_iocb_is_sem_locked(iocb) \
 	test_bit(OCFS2_IOCB_SEM, (unsigned long *)&iocb->private)
+
+#define ocfs2_iocb_set_unaligned_aio(iocb) \
+	set_bit(OCFS2_IOCB_UNALIGNED_IO, (unsigned long *)&iocb->private)
+#define ocfs2_iocb_clear_unaligned_aio(iocb) \
+	clear_bit(OCFS2_IOCB_UNALIGNED_IO, (unsigned long *)&iocb->private)
+#define ocfs2_iocb_is_unaligned_aio(iocb) \
+	test_bit(OCFS2_IOCB_UNALIGNED_IO, (unsigned long *)&iocb->private)
+
+#define OCFS2_IOEND_WQ_HASH_SZ	37
+#define ocfs2_ioend_wq(v)   (&ocfs2__ioend_wq[((unsigned long)(v)) %\
+					OCFS2_IOEND_WQ_HASH_SZ])
+extern wait_queue_head_t ocfs2__ioend_wq[OCFS2_IOEND_WQ_HASH_SZ];
+
 #endif /* OCFS2_FILE_H */
+123-73
fs/ocfs2/cluster/heartbeat.c
···

 	struct list_head	hr_all_item;
 	unsigned		hr_unclean_stop:1,
+				hr_aborted_start:1,
 				hr_item_pinned:1,
 				hr_item_dropped:1;
···
 	 * has reached a 'steady' state. This will be fixed when we have
 	 * a more complete api that doesn't lead to this sort of fragility. */
 	atomic_t		hr_steady_iterations;
+
+	/* terminate o2hb thread if it does not reach steady state
+	 * (hr_steady_iterations == 0) within hr_unsteady_iterations */
+	atomic_t		hr_unsteady_iterations;

 	char			hr_dev_name[BDEVNAME_SIZE];
···

 static void o2hb_arm_write_timeout(struct o2hb_region *reg)
 {
+	/* Arm writeout only after thread reaches steady state */
+	if (atomic_read(&reg->hr_steady_iterations) != 0)
+		return;
+
 	mlog(ML_HEARTBEAT, "Queue write timeout for %u ms\n",
 	     O2HB_MAX_WRITE_TIMEOUT_MS);
···
 	return read == computed;
 }

-/* We want to make sure that nobody is heartbeating on top of us --
- * this will help detect an invalid configuration. */
-static void o2hb_check_last_timestamp(struct o2hb_region *reg)
+/*
+ * Compare the slot data with what we wrote in the last iteration.
+ * If the match fails, print an appropriate error message. This is to
+ * detect errors like... another node hearting on the same slot,
+ * flaky device that is losing writes, etc.
+ * Returns 1 if check succeeds, 0 otherwise.
+ */
+static int o2hb_check_own_slot(struct o2hb_region *reg)
 {
 	struct o2hb_disk_slot *slot;
 	struct o2hb_disk_heartbeat_block *hb_block;
···
 	slot = &reg->hr_slots[o2nm_this_node()];
 	/* Don't check on our 1st timestamp */
 	if (!slot->ds_last_time)
-		return;
+		return 0;

 	hb_block = slot->ds_raw_block;
 	if (le64_to_cpu(hb_block->hb_seq) == slot->ds_last_time &&
 	    le64_to_cpu(hb_block->hb_generation) == slot->ds_last_generation &&
 	    hb_block->hb_node == slot->ds_node_num)
-		return;
+		return 1;

 #define ERRSTR1		"Another node is heartbeating on device"
 #define ERRSTR2		"Heartbeat generation mismatch on device"
···
 	     (unsigned long long)slot->ds_last_time, hb_block->hb_node,
 	     (unsigned long long)le64_to_cpu(hb_block->hb_generation),
 	     (unsigned long long)le64_to_cpu(hb_block->hb_seq));
+
+	return 0;
 }

 static inline void o2hb_prepare_block(struct o2hb_region *reg,
···
 	o2nm_node_put(node);
 }

-static void o2hb_set_quorum_device(struct o2hb_region *reg,
-				   struct o2hb_disk_slot *slot)
+static void o2hb_set_quorum_device(struct o2hb_region *reg)
 {
-	assert_spin_locked(&o2hb_live_lock);
-
 	if (!o2hb_global_heartbeat_active())
 		return;

-	if (test_bit(reg->hr_region_num, o2hb_quorum_region_bitmap))
+	/* Prevent race with o2hb_heartbeat_group_drop_item() */
+	if (kthread_should_stop())
 		return;
+
+	/* Tag region as quorum only after thread reaches steady state */
+	if (atomic_read(&reg->hr_steady_iterations) != 0)
+		return;
+
+	spin_lock(&o2hb_live_lock);
+
+	if (test_bit(reg->hr_region_num, o2hb_quorum_region_bitmap))
+		goto unlock;

 	/*
 	 * A region can be added to the quorum only when it sees all
···
 	 */
 	if (memcmp(reg->hr_live_node_bitmap, o2hb_live_node_bitmap,
 		   sizeof(o2hb_live_node_bitmap)))
-		return;
+		goto unlock;

-	if (slot->ds_changed_samples < O2HB_LIVE_THRESHOLD)
-		return;
-
-	printk(KERN_NOTICE "o2hb: Region %s is now a quorum device\n",
-	       config_item_name(&reg->hr_item));
+	printk(KERN_NOTICE "o2hb: Region %s (%s) is now a quorum device\n",
+	       config_item_name(&reg->hr_item), reg->hr_dev_name);

 	set_bit(reg->hr_region_num, o2hb_quorum_region_bitmap);
···
 	if (o2hb_pop_count(&o2hb_quorum_region_bitmap,
 			   O2NM_MAX_REGIONS) > O2HB_PIN_CUT_OFF)
 		o2hb_region_unpin(NULL);
+unlock:
+	spin_unlock(&o2hb_live_lock);
 }

 static int o2hb_check_slot(struct o2hb_region *reg,
···
 		slot->ds_equal_samples = 0;
 	}
 out:
-	o2hb_set_quorum_device(reg, slot);
-
 	spin_unlock(&o2hb_live_lock);

 	o2hb_run_event_list(&event);
···

 static int o2hb_do_disk_heartbeat(struct o2hb_region *reg)
 {
-	int i, ret, highest_node, change = 0;
+	int i, ret, highest_node;
+	int membership_change = 0, own_slot_ok = 0;
 	unsigned long configured_nodes[BITS_TO_LONGS(O2NM_MAX_NODES)];
 	unsigned long live_node_bitmap[BITS_TO_LONGS(O2NM_MAX_NODES)];
 	struct o2hb_bio_wait_ctxt write_wc;
···
 				       sizeof(configured_nodes));
 	if (ret) {
 		mlog_errno(ret);
-		return ret;
+		goto bail;
 	}

 	/*
···

 	highest_node = o2hb_highest_node(configured_nodes, O2NM_MAX_NODES);
 	if (highest_node >= O2NM_MAX_NODES) {
-		mlog(ML_NOTICE, "ocfs2_heartbeat: no configured nodes found!\n");
-		return -EINVAL;
+		mlog(ML_NOTICE, "o2hb: No configured nodes found!\n");
+		ret = -EINVAL;
+		goto bail;
 	}

 	/* No sense in reading the slots of nodes that don't exist
···
 	ret = o2hb_read_slots(reg, highest_node + 1);
 	if (ret < 0) {
 		mlog_errno(ret);
-		return ret;
+		goto bail;
 	}

 	/* With an up to date view of the slots, we can check that no
 	 * other node has been improperly configured to heartbeat in
 	 * our slot. */
-	o2hb_check_last_timestamp(reg);
+	own_slot_ok = o2hb_check_own_slot(reg);

 	/* fill in the proper info for our next heartbeat */
 	o2hb_prepare_block(reg, reg->hr_generation);

-	/* And fire off the write. Note that we don't wait on this I/O
-	 * until later. */
 	ret = o2hb_issue_node_write(reg, &write_wc);
 	if (ret < 0) {
 		mlog_errno(ret);
-		return ret;
+		goto bail;
 	}

 	i = -1;
 	while((i = find_next_bit(configured_nodes,
 				 O2NM_MAX_NODES, i + 1)) < O2NM_MAX_NODES) {
-		change |= o2hb_check_slot(reg, &reg->hr_slots[i]);
+		membership_change |= o2hb_check_slot(reg, &reg->hr_slots[i]);
 	}

 	/*
···
 	 * disk */
 		mlog(ML_ERROR, "Write error %d on device \"%s\"\n",
 		     write_wc.wc_error, reg->hr_dev_name);
-		return write_wc.wc_error;
+		ret = write_wc.wc_error;
+		goto bail;
 	}

-	o2hb_arm_write_timeout(reg);
+	/* Skip disarming the timeout if own slot has stale/bad data */
+	if (own_slot_ok) {
+		o2hb_set_quorum_device(reg);
+		o2hb_arm_write_timeout(reg);
+	}

+bail:
 	/* let the person who launched us know when things are steady */
-	if (!change && (atomic_read(&reg->hr_steady_iterations) != 0)) {
-		if (atomic_dec_and_test(&reg->hr_steady_iterations))
-			wake_up(&o2hb_steady_queue);
+	if (atomic_read(&reg->hr_steady_iterations) != 0) {
+		if (!ret && own_slot_ok && !membership_change) {
+			if (atomic_dec_and_test(&reg->hr_steady_iterations))
+				wake_up(&o2hb_steady_queue);
+		}
 	}

-	return 0;
+	if (atomic_read(&reg->hr_steady_iterations) != 0) {
+		if (atomic_dec_and_test(&reg->hr_unsteady_iterations)) {
+			printk(KERN_NOTICE "o2hb: Unable to stabilize "
+			       "heartbeart on region %s (%s)\n",
+			       config_item_name(&reg->hr_item),
+			       reg->hr_dev_name);
+			atomic_set(&reg->hr_steady_iterations, 0);
+			reg->hr_aborted_start = 1;
+			wake_up(&o2hb_steady_queue);
+			ret = -EIO;
+		}
+	}
+
+	return ret;
 }

 /* Subtract b from a, storing the result in a. a *must* have a larger
···
 	/* Pin node */
 	o2nm_depend_this_node();

-	while (!kthread_should_stop() && !reg->hr_unclean_stop) {
+	while (!kthread_should_stop() &&
+	       !reg->hr_unclean_stop && !reg->hr_aborted_start) {
 		/* We track the time spent inside
 		 * o2hb_do_disk_heartbeat so that we avoid more than
 		 * hr_timeout_ms between disk writes. On busy systems
···
 		 * likely to time itself out. */
 		do_gettimeofday(&before_hb);

-		i = 0;
-		do {
-			ret = o2hb_do_disk_heartbeat(reg);
-		} while (ret && ++i < 2);
+		ret = o2hb_do_disk_heartbeat(reg);

 		do_gettimeofday(&after_hb);
 		elapsed_msec = o2hb_elapsed_msecs(&before_hb, &after_hb);
···
 		     after_hb.tv_sec, (unsigned long) after_hb.tv_usec,
 		     elapsed_msec);

-		if (elapsed_msec < reg->hr_timeout_ms) {
+		if (!kthread_should_stop() &&
+		    elapsed_msec < reg->hr_timeout_ms) {
 			/* the kthread api has blocked signals for us so no
 			 * need to record the return value. */
 			msleep_interruptible(reg->hr_timeout_ms - elapsed_msec);
···
 	 * to timeout on this region when we could just as easily
 	 * write a clear generation - thus indicating to them that
 	 * this node has left this region.
-	 *
-	 * XXX: Should we skip this on unclean_stop? */
-	o2hb_prepare_block(reg, 0);
-	ret = o2hb_issue_node_write(reg, &write_wc);
-	if (ret == 0) {
-		o2hb_wait_on_io(reg, &write_wc);
-	} else {
-		mlog_errno(ret);
+	 */
+	if (!reg->hr_unclean_stop && !reg->hr_aborted_start) {
+		o2hb_prepare_block(reg, 0);
+		ret = o2hb_issue_node_write(reg, &write_wc);
+		if (ret == 0)
+			o2hb_wait_on_io(reg, &write_wc);
+		else
+			mlog_errno(ret);
 	}

 	/* Unpin node */
 	o2nm_undepend_this_node();

-	mlog(ML_HEARTBEAT|ML_KTHREAD, "hb thread exiting\n");
+	mlog(ML_HEARTBEAT|ML_KTHREAD, "o2hb thread exiting\n");

 	return 0;
 }
···
 	struct o2hb_debug_buf *db = inode->i_private;
 	struct o2hb_region *reg;
 	unsigned long map[BITS_TO_LONGS(O2NM_MAX_NODES)];
+	unsigned long lts;
 	char *buf = NULL;
 	int i = -1;
 	int out = 0;
···

 	case O2HB_DB_TYPE_REGION_ELAPSED_TIME:
 		reg = (struct o2hb_region *)db->db_data;
-		out += snprintf(buf + out, PAGE_SIZE - out, "%u\n",
-				jiffies_to_msecs(jiffies -
-						 reg->hr_last_timeout_start));
+		lts = reg->hr_last_timeout_start;
+		/* If 0, it has never been set before */
+		if (lts)
+			lts = jiffies_to_msecs(jiffies - lts);
+		out += snprintf(buf + out, PAGE_SIZE - out, "%lu\n", lts);
 		goto done;

 	case O2HB_DB_TYPE_REGION_PINNED:
···
 	int i;
 	struct page *page;
 	struct o2hb_region *reg = to_o2hb_region(item);
+
+	mlog(ML_HEARTBEAT, "hb region release (%s)\n", reg->hr_dev_name);

 	if (reg->hr_tmp_block)
 		kfree(reg->hr_tmp_block);
···
 		live_threshold <<= 1;
 		spin_unlock(&o2hb_live_lock);
 	}
-	atomic_set(&reg->hr_steady_iterations, live_threshold + 1);
+	++live_threshold;
+	atomic_set(&reg->hr_steady_iterations, live_threshold);
+	/* unsteady_iterations is double the steady_iterations */
+	atomic_set(&reg->hr_unsteady_iterations, (live_threshold << 1));

 	hb_task = kthread_run(o2hb_thread, reg, "o2hb-%s",
 			      reg->hr_item.ci_name);
···
 	ret = wait_event_interruptible(o2hb_steady_queue,
 				atomic_read(&reg->hr_steady_iterations) == 0);
 	if (ret) {
-		/* We got interrupted (hello ptrace!). Clean up */
-		spin_lock(&o2hb_live_lock);
-		hb_task = reg->hr_task;
-		reg->hr_task = NULL;
-		spin_unlock(&o2hb_live_lock);
+		atomic_set(&reg->hr_steady_iterations, 0);
+		reg->hr_aborted_start = 1;
+	}

-		if (hb_task)
-			kthread_stop(hb_task);
+	if (reg->hr_aborted_start) {
+		ret = -EIO;
 		goto out;
 	}
···
 	ret = -EIO;

 	if (hb_task && o2hb_global_heartbeat_active())
-		printk(KERN_NOTICE "o2hb: Heartbeat started on region %s\n",
-		       config_item_name(&reg->hr_item));
+		printk(KERN_NOTICE "o2hb: Heartbeat started on region %s (%s)\n",
+		       config_item_name(&reg->hr_item), reg->hr_dev_name);

 out:
 	if (filp)
···

 	/* stop the thread when the user removes the region dir */
 	spin_lock(&o2hb_live_lock);
-	if (o2hb_global_heartbeat_active()) {
-		clear_bit(reg->hr_region_num, o2hb_region_bitmap);
-		clear_bit(reg->hr_region_num, o2hb_live_region_bitmap);
-		if (test_bit(reg->hr_region_num, o2hb_quorum_region_bitmap))
-			quorum_region = 1;
-		clear_bit(reg->hr_region_num, o2hb_quorum_region_bitmap);
-	}
 	hb_task = reg->hr_task;
 	reg->hr_task = NULL;
 	reg->hr_item_dropped = 1;
···
 	if (hb_task)
 		kthread_stop(hb_task);

+	if (o2hb_global_heartbeat_active()) {
+		spin_lock(&o2hb_live_lock);
+		clear_bit(reg->hr_region_num, o2hb_region_bitmap);
+		clear_bit(reg->hr_region_num, o2hb_live_region_bitmap);
+		if (test_bit(reg->hr_region_num, o2hb_quorum_region_bitmap))
+			quorum_region = 1;
+		clear_bit(reg->hr_region_num, o2hb_quorum_region_bitmap);
+		spin_unlock(&o2hb_live_lock);
+		printk(KERN_NOTICE "o2hb: Heartbeat %s on region %s (%s)\n",
+		       ((atomic_read(&reg->hr_steady_iterations) == 0) ?
+			"stopped" : "start aborted"), config_item_name(item),
+		       reg->hr_dev_name);
+	}
+
 	/*
 	 * If we're racing a dev_write(), we need to wake them. They will
 	 * check reg->hr_task
 	 */
 	if (atomic_read(&reg->hr_steady_iterations) != 0) {
+		reg->hr_aborted_start = 1;
 		atomic_set(&reg->hr_steady_iterations, 0);
 		wake_up(&o2hb_steady_queue);
 	}
-
-	if (o2hb_global_heartbeat_active())
-		printk(KERN_NOTICE "o2hb: Heartbeat stopped on region %s\n",
-		       config_item_name(&reg->hr_item));

 	config_item_put(item);
fs/ocfs2/cluster/tcp.c
···
 	}

 	if (was_valid && !valid) {
-		printk(KERN_NOTICE "o2net: no longer connected to "
+		printk(KERN_NOTICE "o2net: No longer connected to "
 		       SC_NODEF_FMT "\n", SC_NODEF_ARGS(old_sc));
 		o2net_complete_nodes_nsw(nn);
 	}
···
 		cancel_delayed_work(&nn->nn_connect_expired);
 		printk(KERN_NOTICE "o2net: %s " SC_NODEF_FMT "\n",
 		       o2nm_this_node() > sc->sc_node->nd_num ?
-		       "connected to" : "accepted connection from",
+		       "Connected to" : "Accepted connection from",
 		       SC_NODEF_ARGS(sc));
 	}
···
 		o2net_sc_queue_work(sc, &sc->sc_connect_work);
 		break;
 	default:
-		printk(KERN_INFO "o2net: connection to " SC_NODEF_FMT
+		printk(KERN_INFO "o2net: Connection to " SC_NODEF_FMT
 		       " shutdown, state %d\n",
 		       SC_NODEF_ARGS(sc), sk->sk_state);
 		o2net_sc_queue_work(sc, &sc->sc_shutdown_work);
···
 	return ret;
 }

+/* Get a map of all nodes to which this node is currently connected to */
+void o2net_fill_node_map(unsigned long *map, unsigned bytes)
+{
+	struct o2net_sock_container *sc;
+	int node, ret;
+
+	BUG_ON(bytes < (BITS_TO_LONGS(O2NM_MAX_NODES) * sizeof(unsigned long)));
+
+	memset(map, 0, bytes);
+	for (node = 0; node < O2NM_MAX_NODES; ++node) {
+		o2net_tx_can_proceed(o2net_nn_from_num(node), &sc, &ret);
+		if (!ret) {
+			set_bit(node, map);
+			sc_put(sc);
+		}
+	}
+}
+EXPORT_SYMBOL_GPL(o2net_fill_node_map);
+
 int o2net_send_message_vec(u32 msg_type, u32 key, struct kvec *caller_vec,
 			   size_t caller_veclen, u8 target_node, int *status)
 {
···
 	struct o2net_node *nn = o2net_nn_from_num(sc->sc_node->nd_num);

 	if (hand->protocol_version != cpu_to_be64(O2NET_PROTOCOL_VERSION)) {
-		mlog(ML_NOTICE, SC_NODEF_FMT " advertised net protocol "
-		     "version %llu but %llu is required, disconnecting\n",
-		     SC_NODEF_ARGS(sc),
-		     (unsigned long long)be64_to_cpu(hand->protocol_version),
-		     O2NET_PROTOCOL_VERSION);
+		printk(KERN_NOTICE "o2net: " SC_NODEF_FMT " Advertised net "
+		       "protocol version %llu but %llu is required. "
+		       "Disconnecting.\n", SC_NODEF_ARGS(sc),
+		       (unsigned long long)be64_to_cpu(hand->protocol_version),
+		       O2NET_PROTOCOL_VERSION);

 		/* don't bother reconnecting if its the wrong version. */
 		o2net_ensure_shutdown(nn, sc, -ENOTCONN);
···
 	 */
 	if (be32_to_cpu(hand->o2net_idle_timeout_ms) !=
				o2net_idle_timeout()) {
-		mlog(ML_NOTICE, SC_NODEF_FMT " uses a network idle timeout of "
-		     "%u ms, but we use %u ms locally. disconnecting\n",
-		     SC_NODEF_ARGS(sc),
-		     be32_to_cpu(hand->o2net_idle_timeout_ms),
-		     o2net_idle_timeout());
+		printk(KERN_NOTICE "o2net: " SC_NODEF_FMT " uses a network "
+		       "idle timeout of %u ms, but we use %u ms locally. "
+		       "Disconnecting.\n", SC_NODEF_ARGS(sc),
+		       be32_to_cpu(hand->o2net_idle_timeout_ms),
+		       o2net_idle_timeout());
 		o2net_ensure_shutdown(nn, sc, -ENOTCONN);
 		return -1;
 	}

 	if (be32_to_cpu(hand->o2net_keepalive_delay_ms) !=
			o2net_keepalive_delay()) {
-		mlog(ML_NOTICE, SC_NODEF_FMT " uses a keepalive delay of "
-		     "%u ms, but we use %u ms locally. disconnecting\n",
-		     SC_NODEF_ARGS(sc),
-		     be32_to_cpu(hand->o2net_keepalive_delay_ms),
-		     o2net_keepalive_delay());
+		printk(KERN_NOTICE "o2net: " SC_NODEF_FMT " uses a keepalive "
+		       "delay of %u ms, but we use %u ms locally. "
+		       "Disconnecting.\n", SC_NODEF_ARGS(sc),
+		       be32_to_cpu(hand->o2net_keepalive_delay_ms),
+		       o2net_keepalive_delay());
 		o2net_ensure_shutdown(nn, sc, -ENOTCONN);
 		return -1;
 	}

 	if (be32_to_cpu(hand->o2hb_heartbeat_timeout_ms) !=
			O2HB_MAX_WRITE_TIMEOUT_MS) {
-		mlog(ML_NOTICE, SC_NODEF_FMT " uses a heartbeat timeout of "
-		     "%u ms, but we use %u ms locally. disconnecting\n",
-		     SC_NODEF_ARGS(sc),
-		     be32_to_cpu(hand->o2hb_heartbeat_timeout_ms),
-		     O2HB_MAX_WRITE_TIMEOUT_MS);
+		printk(KERN_NOTICE "o2net: " SC_NODEF_FMT " uses a heartbeat "
+		       "timeout of %u ms, but we use %u ms locally. "
+		       "Disconnecting.\n", SC_NODEF_ARGS(sc),
+		       be32_to_cpu(hand->o2hb_heartbeat_timeout_ms),
+		       O2HB_MAX_WRITE_TIMEOUT_MS);
 		o2net_ensure_shutdown(nn, sc, -ENOTCONN);
 		return -1;
 	}
···
 {
 	struct o2net_sock_container *sc = (struct o2net_sock_container *)data;
 	struct o2net_node *nn = o2net_nn_from_num(sc->sc_node->nd_num);
-
 #ifdef CONFIG_DEBUG_FS
-	ktime_t now = ktime_get();
+	unsigned long msecs = ktime_to_ms(ktime_get()) -
+		ktime_to_ms(sc->sc_tv_timer);
+#else
+	unsigned long msecs = o2net_idle_timeout();
 #endif

-	printk(KERN_NOTICE "o2net: connection to " SC_NODEF_FMT " has been idle for %u.%u "
-	       "seconds, shutting it down.\n", SC_NODEF_ARGS(sc),
-	       o2net_idle_timeout() / 1000,
-	       o2net_idle_timeout() % 1000);
-
-#ifdef CONFIG_DEBUG_FS
-	mlog(ML_NOTICE, "Here are some times that might help debug the "
-	     "situation: (Timer: %lld, Now %lld, DataReady %lld, Advance %lld-%lld, "
-	     "Key 0x%08x, Func %u, FuncTime %lld-%lld)\n",
-	     (long long)ktime_to_us(sc->sc_tv_timer), (long long)ktime_to_us(now),
-	     (long long)ktime_to_us(sc->sc_tv_data_ready),
-	     (long long)ktime_to_us(sc->sc_tv_advance_start),
-	     (long long)ktime_to_us(sc->sc_tv_advance_stop),
-	     sc->sc_msg_key, sc->sc_msg_type,
-	     (long long)ktime_to_us(sc->sc_tv_func_start),
-	     (long long)ktime_to_us(sc->sc_tv_func_stop));
-#endif
+	printk(KERN_NOTICE "o2net: Connection to " SC_NODEF_FMT " has been "
+	       "idle for %lu.%lu secs, shutting it down.\n", SC_NODEF_ARGS(sc),
+	       msecs / 1000, msecs % 1000);

 	/*
 	 * Initialize the nn_timeout so that the next connection attempt
···

 out:
 	if (ret) {
-		mlog(ML_NOTICE, "connect attempt to " SC_NODEF_FMT " failed "
-		     "with errno %d\n", SC_NODEF_ARGS(sc), ret);
+		printk(KERN_NOTICE "o2net: Connect attempt to " SC_NODEF_FMT
+		       " failed with errno %d\n", SC_NODEF_ARGS(sc), ret);
 		/* 0 err so that another will be queued and attempted
 		 * from set_nn_state */
 		if (sc)
···

 	spin_lock(&nn->nn_lock);
 	if (!nn->nn_sc_valid) {
-		mlog(ML_ERROR, "no connection established with node %u after "
-		     "%u.%u seconds, giving up and returning errors.\n",
+		printk(KERN_NOTICE "o2net: No connection established with "
+		       "node %u after %u.%u seconds, giving up.\n",
 		     o2net_num_from_nn(nn),
 		     o2net_idle_timeout() / 1000,
 		     o2net_idle_timeout() % 1000);
···

 	node = o2nm_get_node_by_ip(sin.sin_addr.s_addr);
 	if (node == NULL) {
-		mlog(ML_NOTICE, "attempt to connect from unknown node at %pI4:%d\n",
-		     &sin.sin_addr.s_addr, ntohs(sin.sin_port));
+		printk(KERN_NOTICE "o2net: Attempt to connect from unknown "
+		       "node at %pI4:%d\n", &sin.sin_addr.s_addr,
+		       ntohs(sin.sin_port));
 		ret = -EINVAL;
 		goto out;
 	}

 	if (o2nm_this_node() >= node->nd_num) {
 		local_node = o2nm_get_node_by_num(o2nm_this_node());
-		mlog(ML_NOTICE, "unexpected connect attempt seen at node '%s' ("
-		     "%u, %pI4:%d) from node '%s' (%u, %pI4:%d)\n",
-		     local_node->nd_name, local_node->nd_num,
-		     &(local_node->nd_ipv4_address),
-		     ntohs(local_node->nd_ipv4_port),
-		     node->nd_name, node->nd_num, &sin.sin_addr.s_addr,
-		     ntohs(sin.sin_port));
+		printk(KERN_NOTICE "o2net: Unexpected connect attempt seen "
+		       "at node '%s' (%u, %pI4:%d) from node '%s' (%u, "
+		       "%pI4:%d)\n", local_node->nd_name, local_node->nd_num,
+		       &(local_node->nd_ipv4_address),
+		       ntohs(local_node->nd_ipv4_port), node->nd_name,
+		       node->nd_num, &sin.sin_addr.s_addr, ntohs(sin.sin_port));
 		ret = -EINVAL;
 		goto out;
 	}
···
 	ret = 0;
 	spin_unlock(&nn->nn_lock);
 	if (ret) {
-		mlog(ML_NOTICE, "attempt to connect from node '%s' at "
-		     "%pI4:%d but it already has an open connection\n",
-		     node->nd_name, &sin.sin_addr.s_addr,
-		     ntohs(sin.sin_port));
+		printk(KERN_NOTICE "o2net: Attempt to connect from node '%s' "
+		       "at %pI4:%d but it already has an open connection\n",
+		       node->nd_name, &sin.sin_addr.s_addr,
+		       ntohs(sin.sin_port));
 		goto out;
 	}
···

 	ret = sock_create(PF_INET, SOCK_STREAM, IPPROTO_TCP, &sock);
 	if (ret < 0) {
-		mlog(ML_ERROR, "unable to create socket, ret=%d\n", ret);
+		printk(KERN_ERR "o2net: Error %d while creating socket\n", ret);
 		goto out;
 	}
···
 	sock->sk->sk_reuse = 1;
 	ret = sock->ops->bind(sock, (struct sockaddr *)&sin, sizeof(sin));
 	if (ret < 0) {
-		mlog(ML_ERROR, "unable to bind socket at %pI4:%u, "
-		     "ret=%d\n", &addr, ntohs(port), ret);
+		printk(KERN_ERR "o2net: Error %d while binding socket at "
+		       "%pI4:%u\n", ret, &addr, ntohs(port));
 		goto out;
 	}

 	ret = sock->ops->listen(sock, 64);
-	if (ret < 0) {
-		mlog(ML_ERROR,
"unable to listen on %pI4:%u, ret=%d\n",20192019- &addr, ntohs(port), ret);20202020- }20102010+ if (ret < 0)20112011+ printk(KERN_ERR "o2net: Error %d while listening on %pI4:%u\n",20122012+ ret, &addr, ntohs(port));2021201320222014out:20232015 if (ret) {
fs/ocfs2/dlm/dlmdomain.c
···
 
static void dlm_unregister_domain_handlers(struct dlm_ctxt *dlm);
 
-void __dlm_unhash_lockres(struct dlm_lock_resource *lockres)
+void __dlm_unhash_lockres(struct dlm_ctxt *dlm, struct dlm_lock_resource *res)
{
-	if (!hlist_unhashed(&lockres->hash_node)) {
-		hlist_del_init(&lockres->hash_node);
-		dlm_lockres_put(lockres);
-	}
+	if (hlist_unhashed(&res->hash_node))
+		return;
+
+	mlog(0, "%s: Unhash res %.*s\n", dlm->name, res->lockname.len,
+	     res->lockname.name);
+	hlist_del_init(&res->hash_node);
+	dlm_lockres_put(res);
}
 
-void __dlm_insert_lockres(struct dlm_ctxt *dlm,
-			  struct dlm_lock_resource *res)
+void __dlm_insert_lockres(struct dlm_ctxt *dlm, struct dlm_lock_resource *res)
{
 	struct hlist_head *bucket;
 	struct qstr *q;
···
 	dlm_lockres_get(res);
 
 	hlist_add_head(&res->hash_node, bucket);
+
+	mlog(0, "%s: Hash res %.*s\n", dlm->name, res->lockname.len,
+	     res->lockname.name);
}
 
struct dlm_lock_resource * __dlm_lookup_lockres_full(struct dlm_ctxt *dlm,
···
 
static void __dlm_print_nodes(struct dlm_ctxt *dlm)
{
-	int node = -1;
+	int node = -1, num = 0;
 
 	assert_spin_locked(&dlm->spinlock);
 
-	printk(KERN_NOTICE "o2dlm: Nodes in domain %s: ", dlm->name);
-
+	printk("( ");
 	while ((node = find_next_bit(dlm->domain_map, O2NM_MAX_NODES,
				     node + 1)) < O2NM_MAX_NODES) {
 		printk("%d ", node);
+		++num;
 	}
-	printk("\n");
+	printk(") %u nodes\n", num);
}
 
static int dlm_exit_domain_handler(struct o2net_msg *msg, u32 len, void *data,
···
 
 	node = exit_msg->node_idx;
 
-	printk(KERN_NOTICE "o2dlm: Node %u leaves domain %s\n", node, dlm->name);
-
 	spin_lock(&dlm->spinlock);
 	clear_bit(node, dlm->domain_map);
 	clear_bit(node, dlm->exit_domain_map);
+	printk(KERN_NOTICE "o2dlm: Node %u leaves domain %s ", node, dlm->name);
 	__dlm_print_nodes(dlm);
 
 	/* notify anything attached to the heartbeat events */
···
 
 		dlm_mark_domain_leaving(dlm);
 		dlm_leave_domain(dlm);
+		printk(KERN_NOTICE "o2dlm: Leaving domain %s\n", dlm->name);
 		dlm_force_free_mles(dlm);
 		dlm_complete_dlm_shutdown(dlm);
 	}
···
 	clear_bit(assert->node_idx, dlm->exit_domain_map);
 	__dlm_set_joining_node(dlm, DLM_LOCK_RES_OWNER_UNKNOWN);
 
-	printk(KERN_NOTICE "o2dlm: Node %u joins domain %s\n",
+	printk(KERN_NOTICE "o2dlm: Node %u joins domain %s ",
	       assert->node_idx, dlm->name);
 	__dlm_print_nodes(dlm);
 
···
bail:
 	spin_lock(&dlm->spinlock);
 	__dlm_set_joining_node(dlm, DLM_LOCK_RES_OWNER_UNKNOWN);
-	if (!status)
+	if (!status) {
+		printk(KERN_NOTICE "o2dlm: Joining domain %s ", dlm->name);
 		__dlm_print_nodes(dlm);
+	}
 	spin_unlock(&dlm->spinlock);
 
 	if (ctxt) {
···
 	if (strlen(domain) >= O2NM_MAX_NAME_LEN) {
 		ret = -ENAMETOOLONG;
 		mlog(ML_ERROR, "domain name length too long\n");
-		goto leave;
-	}
-
-	if (!o2hb_check_local_node_heartbeating()) {
-		mlog(ML_ERROR, "the local node has not been configured, or is "
-		     "not heartbeating\n");
-		ret = -EPROTO;
 		goto leave;
 	}
 
+26-28
fs/ocfs2/dlm/dlmlock.c
···
 			kick_thread = 1;
 		}
 	}
-	/* reduce the inflight count, this may result in the lockres
-	 * being purged below during calc_usage */
-	if (lock->ml.node == dlm->node_num)
-		dlm_lockres_drop_inflight_ref(dlm, res);
 
 	spin_unlock(&res->spinlock);
 	wake_up(&res->wq);
···
	     lock->ml.type, res->lockname.len,
	     res->lockname.name, flags);
 
+	/*
+	 * Wait if resource is getting recovered, remastered, etc.
+	 * If the resource was remastered and new owner is self, then exit.
+	 */
 	spin_lock(&res->spinlock);
-
-	/* will exit this call with spinlock held */
 	__dlm_wait_on_lockres(res);
+	if (res->owner == dlm->node_num) {
+		spin_unlock(&res->spinlock);
+		return DLM_RECOVERING;
+	}
 	res->state |= DLM_LOCK_RES_IN_PROGRESS;
 
 	/* add lock to local (secondary) queue */
···
 	tmpret = o2net_send_message(DLM_CREATE_LOCK_MSG, dlm->key, &create,
				    sizeof(create), res->owner, &status);
 	if (tmpret >= 0) {
-		// successfully sent and received
-		ret = status;  // this is already a dlm_status
+		ret = status;
 		if (ret == DLM_REJECTED) {
-			mlog(ML_ERROR, "%s:%.*s: BUG. this is a stale lockres "
-			     "no longer owned by %u. that node is coming back "
-			     "up currently.\n", dlm->name, create.namelen,
+			mlog(ML_ERROR, "%s: res %.*s, Stale lockres no longer "
+			     "owned by node %u. That node is coming back up "
+			     "currently.\n", dlm->name, create.namelen,
			     create.name, res->owner);
 			dlm_print_one_lock_resource(res);
 			BUG();
 		}
 	} else {
-		mlog(ML_ERROR, "Error %d when sending message %u (key 0x%x) to "
-		     "node %u\n", tmpret, DLM_CREATE_LOCK_MSG, dlm->key,
-		     res->owner);
-		if (dlm_is_host_down(tmpret)) {
+		mlog(ML_ERROR, "%s: res %.*s, Error %d send CREATE LOCK to "
+		     "node %u\n", dlm->name, create.namelen, create.name,
+		     tmpret, res->owner);
+		if (dlm_is_host_down(tmpret))
 			ret = DLM_RECOVERING;
-			mlog(0, "node %u died so returning DLM_RECOVERING "
-			     "from lock message!\n", res->owner);
-		} else {
+		else
 			ret = dlm_err_to_dlm_status(tmpret);
-		}
 	}
 
 	return ret;
···
 		/* zero memory only if kernel-allocated */
 		lksb = kzalloc(sizeof(*lksb), GFP_NOFS);
 		if (!lksb) {
-			kfree(lock);
+			kmem_cache_free(dlm_lock_cache, lock);
 			return NULL;
 		}
 		kernel_allocated = 1;
···
 
 		if (status == DLM_RECOVERING || status == DLM_MIGRATING ||
		    status == DLM_FORWARD) {
-			mlog(0, "retrying lock with migration/"
-			     "recovery/in progress\n");
 			msleep(100);
-			/* no waiting for dlm_reco_thread */
 			if (recovery) {
 				if (status != DLM_RECOVERING)
 					goto retry_lock;
-
-				mlog(0, "%s: got RECOVERING "
-				     "for $RECOVERY lock, master "
-				     "was %u\n", dlm->name,
-				     res->owner);
 				/* wait to see the node go down, then
 				 * drop down and allow the lockres to
 				 * get cleaned up. need to remaster. */
···
 				goto retry_lock;
 			}
 		}
+
+		/* Inflight taken in dlm_get_lock_resource() is dropped here */
+		spin_lock(&res->spinlock);
+		dlm_lockres_drop_inflight_ref(dlm, res);
+		spin_unlock(&res->spinlock);
+
+		dlm_lockres_calc_usage(dlm, res);
+		dlm_kick_thread(dlm, res);
 
 		if (status != DLM_NORMAL) {
 			lock->lksb->flags &= ~DLM_LKSB_GET_LVB;
+91-90
fs/ocfs2/dlm/dlmmaster.c
···
 	return NULL;
}
 
-void __dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm,
-				     struct dlm_lock_resource *res,
-				     int new_lockres,
-				     const char *file,
-				     int line)
+void dlm_lockres_set_refmap_bit(struct dlm_ctxt *dlm,
+				struct dlm_lock_resource *res, int bit)
{
-	if (!new_lockres)
-		assert_spin_locked(&res->spinlock);
+	assert_spin_locked(&res->spinlock);
 
-	if (!test_bit(dlm->node_num, res->refmap)) {
-		BUG_ON(res->inflight_locks != 0);
-		dlm_lockres_set_refmap_bit(dlm->node_num, res);
-	}
-	res->inflight_locks++;
-	mlog(0, "%s:%.*s: inflight++: now %u\n",
-	     dlm->name, res->lockname.len, res->lockname.name,
-	     res->inflight_locks);
+	mlog(0, "res %.*s, set node %u, %ps()\n", res->lockname.len,
+	     res->lockname.name, bit, __builtin_return_address(0));
+
+	set_bit(bit, res->refmap);
}
 
-void __dlm_lockres_drop_inflight_ref(struct dlm_ctxt *dlm,
-				     struct dlm_lock_resource *res,
-				     const char *file,
-				     int line)
+void dlm_lockres_clear_refmap_bit(struct dlm_ctxt *dlm,
+				  struct dlm_lock_resource *res, int bit)
+{
+	assert_spin_locked(&res->spinlock);
+
+	mlog(0, "res %.*s, clr node %u, %ps()\n", res->lockname.len,
+	     res->lockname.name, bit, __builtin_return_address(0));
+
+	clear_bit(bit, res->refmap);
+}
+
+
+void dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm,
+				   struct dlm_lock_resource *res)
+{
+	assert_spin_locked(&res->spinlock);
+
+	res->inflight_locks++;
+
+	mlog(0, "%s: res %.*s, inflight++: now %u, %ps()\n", dlm->name,
+	     res->lockname.len, res->lockname.name, res->inflight_locks,
+	     __builtin_return_address(0));
+}
+
+void dlm_lockres_drop_inflight_ref(struct dlm_ctxt *dlm,
+				   struct dlm_lock_resource *res)
{
 	assert_spin_locked(&res->spinlock);
 
 	BUG_ON(res->inflight_locks == 0);
+
 	res->inflight_locks--;
-	mlog(0, "%s:%.*s: inflight--: now %u\n",
-	     dlm->name, res->lockname.len, res->lockname.name,
-	     res->inflight_locks);
-	if (res->inflight_locks == 0)
-		dlm_lockres_clear_refmap_bit(dlm->node_num, res);
+
+	mlog(0, "%s: res %.*s, inflight--: now %u, %ps()\n", dlm->name,
+	     res->lockname.len, res->lockname.name, res->inflight_locks,
+	     __builtin_return_address(0));
+
 	wake_up(&res->wq);
}
 
···
 	unsigned int hash;
 	int tries = 0;
 	int bit, wait_on_recovery = 0;
-	int drop_inflight_if_nonlocal = 0;
 
 	BUG_ON(!lockid);
 
···
 	spin_lock(&dlm->spinlock);
 	tmpres = __dlm_lookup_lockres_full(dlm, lockid, namelen, hash);
 	if (tmpres) {
-		int dropping_ref = 0;
-
 		spin_unlock(&dlm->spinlock);
-
 		spin_lock(&tmpres->spinlock);
-		/* We wait for the other thread that is mastering the resource */
+		/* Wait on the thread that is mastering the resource */
 		if (tmpres->owner == DLM_LOCK_RES_OWNER_UNKNOWN) {
 			__dlm_wait_on_lockres(tmpres);
 			BUG_ON(tmpres->owner == DLM_LOCK_RES_OWNER_UNKNOWN);
-		}
-
-		if (tmpres->owner == dlm->node_num) {
-			BUG_ON(tmpres->state & DLM_LOCK_RES_DROPPING_REF);
-			dlm_lockres_grab_inflight_ref(dlm, tmpres);
-		} else if (tmpres->state & DLM_LOCK_RES_DROPPING_REF)
-			dropping_ref = 1;
-		spin_unlock(&tmpres->spinlock);
-
-		/* wait until done messaging the master, drop our ref to allow
-		 * the lockres to be purged, start over. */
-		if (dropping_ref) {
-			spin_lock(&tmpres->spinlock);
-			__dlm_wait_on_lockres_flags(tmpres, DLM_LOCK_RES_DROPPING_REF);
 			spin_unlock(&tmpres->spinlock);
 			dlm_lockres_put(tmpres);
 			tmpres = NULL;
 			goto lookup;
 		}
 
-		mlog(0, "found in hash!\n");
+		/* Wait on the resource purge to complete before continuing */
+		if (tmpres->state & DLM_LOCK_RES_DROPPING_REF) {
+			BUG_ON(tmpres->owner == dlm->node_num);
+			__dlm_wait_on_lockres_flags(tmpres,
+						    DLM_LOCK_RES_DROPPING_REF);
+			spin_unlock(&tmpres->spinlock);
+			dlm_lockres_put(tmpres);
+			tmpres = NULL;
+			goto lookup;
+		}
+
+		/* Grab inflight ref to pin the resource */
+		dlm_lockres_grab_inflight_ref(dlm, tmpres);
+
+		spin_unlock(&tmpres->spinlock);
 		if (res)
 			dlm_lockres_put(res);
 		res = tmpres;
···
 	 * but they might own this lockres.  wait on them. */
 	bit = find_next_bit(dlm->recovery_map, O2NM_MAX_NODES, 0);
 	if (bit < O2NM_MAX_NODES) {
-		mlog(ML_NOTICE, "%s:%.*s: at least one node (%d) to "
-		     "recover before lock mastery can begin\n",
+		mlog(0, "%s: res %.*s, At least one node (%d) "
+		     "to recover before lock mastery can begin\n",
		     dlm->name, namelen, (char *)lockid, bit);
 		wait_on_recovery = 1;
 	}
···
 
 	/* finally add the lockres to its hash bucket */
 	__dlm_insert_lockres(dlm, res);
-	/* since this lockres is new it doesn't not require the spinlock */
-	dlm_lockres_grab_inflight_ref_new(dlm, res);
 
-	/* if this node does not become the master make sure to drop
-	 * this inflight reference below */
-	drop_inflight_if_nonlocal = 1;
+	/* Grab inflight ref to pin the resource */
+	spin_lock(&res->spinlock);
+	dlm_lockres_grab_inflight_ref(dlm, res);
+	spin_unlock(&res->spinlock);
 
 	/* get an extra ref on the mle in case this is a BLOCK
 	 * if so, the creator of the BLOCK may try to put the last
···
 	 * dlm spinlock would be detectable be a change on the mle,
 	 * so we only need to clear out the recovery map once. */
 	if (dlm_is_recovery_lock(lockid, namelen)) {
-		mlog(ML_NOTICE, "%s: recovery map is not empty, but "
-		     "must master $RECOVERY lock now\n", dlm->name);
+		mlog(0, "%s: Recovery map is not empty, but must "
+		     "master $RECOVERY lock now\n", dlm->name);
 		if (!dlm_pre_master_reco_lockres(dlm, res))
 			wait_on_recovery = 0;
 		else {
···
 		spin_lock(&dlm->spinlock);
 		bit = find_next_bit(dlm->recovery_map, O2NM_MAX_NODES, 0);
 		if (bit < O2NM_MAX_NODES) {
-			mlog(ML_NOTICE, "%s:%.*s: at least one node (%d) to "
-			     "recover before lock mastery can begin\n",
+			mlog(0, "%s: res %.*s, At least one node (%d) "
+			     "to recover before lock mastery can begin\n",
			     dlm->name, namelen, (char *)lockid, bit);
 			wait_on_recovery = 1;
 		} else
···
 			 * yet, keep going until it does.  this is how the
 			 * master will know that asserts are needed back to
 			 * the lower nodes. */
-			mlog(0, "%s:%.*s: requests only up to %u but master "
-			     "is %u, keep going\n", dlm->name, namelen,
+			mlog(0, "%s: res %.*s, Requests only up to %u but "
+			     "master is %u, keep going\n", dlm->name, namelen,
			     lockid, nodenum, mle->master);
 		}
 	}
···
 	ret = dlm_wait_for_lock_mastery(dlm, res, mle, &blocked);
 	if (ret < 0) {
 		wait_on_recovery = 1;
-		mlog(0, "%s:%.*s: node map changed, redo the "
-		     "master request now, blocked=%d\n",
-		     dlm->name, res->lockname.len,
+		mlog(0, "%s: res %.*s, Node map changed, redo the master "
+		     "request now, blocked=%d\n", dlm->name, res->lockname.len,
		     res->lockname.name, blocked);
 		if (++tries > 20) {
-			mlog(ML_ERROR, "%s:%.*s: spinning on "
-			     "dlm_wait_for_lock_mastery, blocked=%d\n",
+			mlog(ML_ERROR, "%s: res %.*s, Spinning on "
+			     "dlm_wait_for_lock_mastery, blocked = %d\n",
			     dlm->name, res->lockname.len,
			     res->lockname.name, blocked);
 			dlm_print_one_lock_resource(res);
···
 		goto redo_request;
 	}
 
-	mlog(0, "lockres mastered by %u\n", res->owner);
+	mlog(0, "%s: res %.*s, Mastered by %u\n", dlm->name, res->lockname.len,
+	     res->lockname.name, res->owner);
 	/* make sure we never continue without this */
 	BUG_ON(res->owner == O2NM_MAX_NODES);
 
···
 
wake_waiters:
 	spin_lock(&res->spinlock);
-	if (res->owner != dlm->node_num && drop_inflight_if_nonlocal)
-		dlm_lockres_drop_inflight_ref(dlm, res);
 	res->state &= ~DLM_LOCK_RES_IN_PROGRESS;
 	spin_unlock(&res->spinlock);
 	wake_up(&res->wq);
···
 	}
 
 	if (res->owner == dlm->node_num) {
-		mlog(0, "%s:%.*s: setting bit %u in refmap\n",
-		     dlm->name, namelen, name, request->node_idx);
-		dlm_lockres_set_refmap_bit(request->node_idx, res);
+		dlm_lockres_set_refmap_bit(dlm, res, request->node_idx);
 		spin_unlock(&res->spinlock);
 		response = DLM_MASTER_RESP_YES;
 		if (mle)
···
 				 * go back and clean the mles on any
 				 * other nodes */
 				dispatch_assert = 1;
-				dlm_lockres_set_refmap_bit(request->node_idx, res);
-				mlog(0, "%s:%.*s: setting bit %u in refmap\n",
-				     dlm->name, namelen, name,
-				     request->node_idx);
+				dlm_lockres_set_refmap_bit(dlm, res,
+							   request->node_idx);
 			} else
 				response = DLM_MASTER_RESP_NO;
 		} else {
···
			     "lockres, set the bit in the refmap\n",
			     namelen, lockname, to);
 			spin_lock(&res->spinlock);
-			dlm_lockres_set_refmap_bit(to, res);
+			dlm_lockres_set_refmap_bit(dlm, res, to);
 			spin_unlock(&res->spinlock);
 		}
 	}
···
 	namelen = res->lockname.len;
 	BUG_ON(namelen > O2NM_MAX_NAME_LEN);
 
-	mlog(0, "%s:%.*s: sending deref to %d\n",
-	     dlm->name, namelen, lockname, res->owner);
 	memset(&deref, 0, sizeof(deref));
 	deref.node_idx = dlm->node_num;
 	deref.namelen = namelen;
···
 	ret = o2net_send_message(DLM_DEREF_LOCKRES_MSG, dlm->key,
				 &deref, sizeof(deref), res->owner, &r);
 	if (ret < 0)
-		mlog(ML_ERROR, "Error %d when sending message %u (key 0x%x) to "
-		     "node %u\n", ret, DLM_DEREF_LOCKRES_MSG, dlm->key,
-		     res->owner);
+		mlog(ML_ERROR, "%s: res %.*s, error %d send DEREF to node %u\n",
+		     dlm->name, namelen, lockname, ret, res->owner);
 	else if (r < 0) {
 		/* BAD.  other node says I did not have a ref. */
-		mlog(ML_ERROR,"while dropping ref on %s:%.*s "
-		    "(master=%u) got %d.\n", dlm->name, namelen,
-		    lockname, res->owner, r);
+		mlog(ML_ERROR, "%s: res %.*s, DEREF to node %u got %d\n",
+		     dlm->name, namelen, lockname, res->owner, r);
 		dlm_print_one_lock_resource(res);
 		BUG();
 	}
···
 	else {
 		BUG_ON(res->state & DLM_LOCK_RES_DROPPING_REF);
 		if (test_bit(node, res->refmap)) {
-			dlm_lockres_clear_refmap_bit(node, res);
+			dlm_lockres_clear_refmap_bit(dlm, res, node);
 			cleared = 1;
 		}
 	}
···
 	BUG_ON(res->state & DLM_LOCK_RES_DROPPING_REF);
 	if (test_bit(node, res->refmap)) {
 		__dlm_wait_on_lockres_flags(res, DLM_LOCK_RES_SETREF_INPROG);
-		dlm_lockres_clear_refmap_bit(node, res);
+		dlm_lockres_clear_refmap_bit(dlm, res, node);
 		cleared = 1;
 	}
 	spin_unlock(&res->spinlock);
···
 				BUG_ON(!list_empty(&lock->bast_list));
 				BUG_ON(lock->ast_pending);
 				BUG_ON(lock->bast_pending);
-				dlm_lockres_clear_refmap_bit(lock->ml.node, res);
+				dlm_lockres_clear_refmap_bit(dlm, res,
+							     lock->ml.node);
 				list_del_init(&lock->list);
 				dlm_lock_put(lock);
 				/* In a normal unlock, we would have added a
···
 			mlog(0, "%s:%.*s: node %u had a ref to this "
			     "migrating lockres, clearing\n", dlm->name,
			     res->lockname.len, res->lockname.name, bit);
-			dlm_lockres_clear_refmap_bit(bit, res);
+			dlm_lockres_clear_refmap_bit(dlm, res, bit);
 		}
 		bit++;
 	}
···
					 &migrate, sizeof(migrate), nodenum,
					 &status);
 		if (ret < 0) {
-			mlog(ML_ERROR, "Error %d when sending message %u (key "
-			     "0x%x) to node %u\n", ret, DLM_MIGRATE_REQUEST_MSG,
-			     dlm->key, nodenum);
+			mlog(ML_ERROR, "%s: res %.*s, Error %d send "
+			     "MIGRATE_REQUEST to node %u\n", dlm->name,
+			     migrate.namelen, migrate.name, ret, nodenum);
 			if (!dlm_is_host_down(ret)) {
 				mlog(ML_ERROR, "unhandled error=%d!\n", ret);
 				BUG();
···
			     dlm->name, res->lockname.len, res->lockname.name,
			     nodenum);
 			spin_lock(&res->spinlock);
-			dlm_lockres_set_refmap_bit(nodenum, res);
+			dlm_lockres_set_refmap_bit(dlm, res, nodenum);
 			spin_unlock(&res->spinlock);
 		}
 	}
···
 	 * mastery reference here since old_master will briefly have
 	 * a reference after the migration completes */
 	spin_lock(&res->spinlock);
-	dlm_lockres_set_refmap_bit(old_master, res);
+	dlm_lockres_set_refmap_bit(dlm, res, old_master);
 	spin_unlock(&res->spinlock);
 
 	mlog(0, "now time to do a migrate request to other nodes\n");
+82-82
fs/ocfs2/dlm/dlmrecovery.c
···
}
 
 
-int dlm_wait_for_node_death(struct dlm_ctxt *dlm, u8 node, int timeout)
+void dlm_wait_for_node_death(struct dlm_ctxt *dlm, u8 node, int timeout)
{
-	if (timeout) {
-		mlog(ML_NOTICE, "%s: waiting %dms for notification of "
-		     "death of node %u\n", dlm->name, timeout, node);
+	if (dlm_is_node_dead(dlm, node))
+		return;
+
+	printk(KERN_NOTICE "o2dlm: Waiting on the death of node %u in "
+	       "domain %s\n", node, dlm->name);
+
+	if (timeout)
 		wait_event_timeout(dlm->dlm_reco_thread_wq,
-			   dlm_is_node_dead(dlm, node),
-			   msecs_to_jiffies(timeout));
-	} else {
-		mlog(ML_NOTICE, "%s: waiting indefinitely for notification "
-		     "of death of node %u\n", dlm->name, node);
+				   dlm_is_node_dead(dlm, node),
+				   msecs_to_jiffies(timeout));
+	else
 		wait_event(dlm->dlm_reco_thread_wq,
			   dlm_is_node_dead(dlm, node));
-	}
-	/* for now, return 0 */
-	return 0;
}
 
-int dlm_wait_for_node_recovery(struct dlm_ctxt *dlm, u8 node, int timeout)
+void dlm_wait_for_node_recovery(struct dlm_ctxt *dlm, u8 node, int timeout)
{
-	if (timeout) {
-		mlog(0, "%s: waiting %dms for notification of "
-		     "recovery of node %u\n", dlm->name, timeout, node);
+	if (dlm_is_node_recovered(dlm, node))
+		return;
+
+	printk(KERN_NOTICE "o2dlm: Waiting on the recovery of node %u in "
+	       "domain %s\n", node, dlm->name);
+
+	if (timeout)
 		wait_event_timeout(dlm->dlm_reco_thread_wq,
-			   dlm_is_node_recovered(dlm, node),
-			   msecs_to_jiffies(timeout));
-	} else {
-		mlog(0, "%s: waiting indefinitely for notification "
-		     "of recovery of node %u\n", dlm->name, node);
+				   dlm_is_node_recovered(dlm, node),
+				   msecs_to_jiffies(timeout));
+	else
 		wait_event(dlm->dlm_reco_thread_wq,
			   dlm_is_node_recovered(dlm, node));
-	}
-	/* for now, return 0 */
-	return 0;
}
 
/* callers of the top-level api calls (dlmlock/dlmunlock) should
···
{
 	spin_lock(&dlm->spinlock);
 	BUG_ON(dlm->reco.state & DLM_RECO_STATE_ACTIVE);
+	printk(KERN_NOTICE "o2dlm: Begin recovery on domain %s for node %u\n",
+	       dlm->name, dlm->reco.dead_node);
 	dlm->reco.state |= DLM_RECO_STATE_ACTIVE;
 	spin_unlock(&dlm->spinlock);
}
···
 	BUG_ON(!(dlm->reco.state & DLM_RECO_STATE_ACTIVE));
 	dlm->reco.state &= ~DLM_RECO_STATE_ACTIVE;
 	spin_unlock(&dlm->spinlock);
+	printk(KERN_NOTICE "o2dlm: End recovery on domain %s\n", dlm->name);
 	wake_up(&dlm->reco.event);
+}
+
+static void dlm_print_recovery_master(struct dlm_ctxt *dlm)
+{
+	printk(KERN_NOTICE "o2dlm: Node %u (%s) is the Recovery Master for the "
+	       "dead node %u in domain %s\n", dlm->reco.new_master,
+	       (dlm->node_num == dlm->reco.new_master ? "me" : "he"),
+	       dlm->reco.dead_node, dlm->name);
}
 
static int dlm_do_recovery(struct dlm_ctxt *dlm)
···
 		}
 		mlog(0, "another node will master this recovery session.\n");
 	}
-	mlog(0, "dlm=%s (%d), new_master=%u, this node=%u, dead_node=%u\n",
-	     dlm->name, task_pid_nr(dlm->dlm_reco_thread_task), dlm->reco.new_master,
-	     dlm->node_num, dlm->reco.dead_node);
+
+	dlm_print_recovery_master(dlm);
 
 	/* it is safe to start everything back up here
 	 * because all of the dead node's lock resources
···
 	return 0;
 
master_here:
-	mlog(ML_NOTICE, "(%d) Node %u is the Recovery Master for the Dead Node "
-	     "%u for Domain %s\n", task_pid_nr(dlm->dlm_reco_thread_task),
-	     dlm->node_num, dlm->reco.dead_node, dlm->name);
+	dlm_print_recovery_master(dlm);
 
 	status = dlm_remaster_locks(dlm, dlm->reco.dead_node);
 	if (status < 0) {
 		/* we should never hit this anymore */
-		mlog(ML_ERROR, "error %d remastering locks for node %u, "
-		     "retrying.\n", status, dlm->reco.dead_node);
+		mlog(ML_ERROR, "%s: Error %d remastering locks for node %u, "
+		     "retrying.\n", dlm->name, status, dlm->reco.dead_node);
 		/* yield a bit to allow any final network messages
 		 * to get handled on remaining nodes */
 		msleep(100);
···
 		BUG_ON(ndata->state != DLM_RECO_NODE_DATA_INIT);
 		ndata->state = DLM_RECO_NODE_DATA_REQUESTING;
 
-		mlog(0, "requesting lock info from node %u\n",
+		mlog(0, "%s: Requesting lock info from node %u\n", dlm->name,
		     ndata->node_num);
 
 		if (ndata->node_num == dlm->node_num) {
···
 		spin_unlock(&dlm_reco_state_lock);
 	}
 
-	mlog(0, "done requesting all lock info\n");
+	mlog(0, "%s: Done requesting all lock info\n", dlm->name);
 
 	/* nodes should be sending reco data now
 	 * just need to wait */
···
 
 	/* negative status is handled by caller */
 	if (ret < 0)
-		mlog(ML_ERROR, "Error %d when sending message %u (key "
-		     "0x%x) to node %u\n", ret, DLM_LOCK_REQUEST_MSG,
-		     dlm->key, request_from);
-
+		mlog(ML_ERROR, "%s: Error %d send LOCK_REQUEST to node %u "
+		     "to recover dead node %u\n", dlm->name, ret,
+		     request_from, dead_node);
 	// return from here, then
 	// sleep until all received or error
 	return ret;
···
 	ret = o2net_send_message(DLM_RECO_DATA_DONE_MSG, dlm->key, &done_msg,
				 sizeof(done_msg), send_to, &tmpret);
 	if (ret < 0) {
-		mlog(ML_ERROR, "Error %d when sending message %u (key "
-		     "0x%x) to node %u\n", ret, DLM_RECO_DATA_DONE_MSG,
-		     dlm->key, send_to);
+		mlog(ML_ERROR, "%s: Error %d send RECO_DATA_DONE to node %u "
+		     "to recover dead node %u\n", dlm->name, ret, send_to,
+		     dead_node);
 		if (!dlm_is_host_down(ret)) {
 			BUG();
 		}
···
 	if (ret < 0) {
 		/* XXX: negative status is not handled.
 		 * this will end up killing this node. */
-		mlog(ML_ERROR, "Error %d when sending message %u (key "
-		     "0x%x) to node %u\n", ret, DLM_MIG_LOCKRES_MSG,
-		     dlm->key, send_to);
+		mlog(ML_ERROR, "%s: res %.*s, Error %d send MIG_LOCKRES to "
+		     "node %u (%s)\n", dlm->name, mres->lockname_len,
+		     mres->lockname, ret, send_to,
+		     (orig_flags & DLM_MRES_MIGRATION ?
+		      "migration" : "recovery"));
 	} else {
 		/* might get an -ENOMEM back here */
 		ret = status;
···
			     dlm->name, mres->lockname_len, mres->lockname,
			     from);
 			spin_lock(&res->spinlock);
-			dlm_lockres_set_refmap_bit(from, res);
+			dlm_lockres_set_refmap_bit(dlm, res, from);
 			spin_unlock(&res->spinlock);
 			added++;
 			break;
···
 			mlog(0, "%s:%.*s: added lock for node %u, "
			     "setting refmap bit\n", dlm->name,
			     res->lockname.len, res->lockname.name, ml->node);
-			dlm_lockres_set_refmap_bit(ml->node, res);
+			dlm_lockres_set_refmap_bit(dlm, res, ml->node);
 			added++;
 		}
 		spin_unlock(&res->spinlock);
···
 
 	list_for_each_entry_safe(res, next, &dlm->reco.resources, recovering) {
 		if (res->owner == dead_node) {
+			mlog(0, "%s: res %.*s, Changing owner from %u to %u\n",
+			     dlm->name, res->lockname.len, res->lockname.name,
+			     res->owner, new_master);
 			list_del_init(&res->recovering);
 			spin_lock(&res->spinlock);
 			/* new_master has our reference from
···
 	for (i = 0; i < DLM_HASH_BUCKETS; i++) {
 		bucket = dlm_lockres_hash(dlm, i);
 		hlist_for_each_entry(res, hash_iter, bucket, hash_node) {
-			if (res->state & DLM_LOCK_RES_RECOVERING) {
-				if (res->owner == dead_node) {
-					mlog(0, "(this=%u) res %.*s owner=%u "
-					     "was not on recovering list, but "
-					     "clearing state anyway\n",
-					     dlm->node_num, res->lockname.len,
-					     res->lockname.name, new_master);
-				} else if (res->owner == dlm->node_num) {
-					mlog(0, "(this=%u) res %.*s owner=%u "
-					     "was not on recovering list, "
-					     "owner is THIS node, clearing\n",
-					     dlm->node_num, res->lockname.len,
-					     res->lockname.name, new_master);
-				} else
-					continue;
+			if (!(res->state & DLM_LOCK_RES_RECOVERING))
+				continue;
 
-				if (!list_empty(&res->recovering)) {
-					mlog(0, "%s:%.*s: lockres was "
-					     "marked RECOVERING, owner=%u\n",
-					     dlm->name, res->lockname.len,
-					     res->lockname.name, res->owner);
-					list_del_init(&res->recovering);
-					dlm_lockres_put(res);
-				}
-				spin_lock(&res->spinlock);
-				/* new_master has our reference from
-				 * the lock state sent during recovery */
-				dlm_change_lockres_owner(dlm, res, new_master);
-				res->state &= ~DLM_LOCK_RES_RECOVERING;
-				if (__dlm_lockres_has_locks(res))
-					__dlm_dirty_lockres(dlm, res);
-				spin_unlock(&res->spinlock);
-				wake_up(&res->wq);
+			if (res->owner != dead_node &&
+			    res->owner != dlm->node_num)
+				continue;
+
+			if (!list_empty(&res->recovering)) {
+				list_del_init(&res->recovering);
+				dlm_lockres_put(res);
 			}
+
+			/* new_master has our reference from
+			 * the lock state sent during recovery */
+			mlog(0, "%s: res %.*s, Changing owner from %u to %u\n",
+			     dlm->name, res->lockname.len, res->lockname.name,
+			     res->owner, new_master);
+			spin_lock(&res->spinlock);
+			dlm_change_lockres_owner(dlm, res, new_master);
+			res->state &= ~DLM_LOCK_RES_RECOVERING;
+			if (__dlm_lockres_has_locks(res))
+				__dlm_dirty_lockres(dlm, res);
+			spin_unlock(&res->spinlock);
+			wake_up(&res->wq);
 		}
 	}
}
···
			     res->lockname.len, res->lockname.name, freed, dead_node);
 			__dlm_print_one_lock_resource(res);
 		}
-		dlm_lockres_clear_refmap_bit(dead_node, res);
+		dlm_lockres_clear_refmap_bit(dlm, res, dead_node);
 	} else if (test_bit(dead_node, res->refmap)) {
 		mlog(0, "%s:%.*s: dead node %u had a ref, but had "
		     "no locks and had not purged before dying\n", dlm->name,
		     res->lockname.len, res->lockname.name, dead_node);
-		dlm_lockres_clear_refmap_bit(dead_node, res);
+		dlm_lockres_clear_refmap_bit(dlm, res, dead_node);
 	}
 
 	/* do not kick thread yet */
···
 			dlm_revalidate_lvb(dlm, res, dead_node);
 			if (res->owner == dead_node) {
 				if (res->state & DLM_LOCK_RES_DROPPING_REF) {
-					mlog(ML_NOTICE, "Ignore %.*s for "
+					mlog(ML_NOTICE, "%s: res %.*s, Skip "
					     "recovery as it is being freed\n",
-					     res->lockname.len,
+					     dlm->name, res->lockname.len,
					     res->lockname.name);
 				} else
 					dlm_move_lockres_to_recovery_list(dlm,
+8-8
fs/ocfs2/dlm/dlmthread.c
···9494{9595 int bit;96969797+ assert_spin_locked(&res->spinlock);9898+9799 if (__dlm_lockres_has_locks(res))100100+ return 0;101101+102102+ /* Locks are in the process of being created */103103+ if (res->inflight_locks)98104 return 0;99105100106 if (!list_empty(&res->dirty) || res->state & DLM_LOCK_RES_DIRTY)···109103 if (res->state & DLM_LOCK_RES_RECOVERING)110104 return 0;111105106106+ /* Another node has this resource with this node as the master */112107 bit = find_next_bit(res->refmap, O2NM_MAX_NODES, 0);113108 if (bit < O2NM_MAX_NODES)114109 return 0;115110116116- /*117117- * since the bit for dlm->node_num is not set, inflight_locks better118118- * be zero119119- */120120- BUG_ON(res->inflight_locks != 0);121111 return 1;122112}123113···187185 /* clear our bit from the master's refmap, ignore errors */188186 ret = dlm_drop_lockres_ref(dlm, res);189187 if (ret < 0) {190190- mlog(ML_ERROR, "%s: deref %.*s failed %d\n", dlm->name,191191- res->lockname.len, res->lockname.name, ret);192188 if (!dlm_is_host_down(ret))193189 BUG();194190 }···209209 BUG();210210 }211211212212- __dlm_unhash_lockres(res);212212+ __dlm_unhash_lockres(dlm, res);213213214214 /* lockres is not in the hash now. drop the flag and wake up215215 * any processes waiting in dlm_get_lock_resource. */
+15-6
fs/ocfs2/dlmglue.c
···16921692 mlog(0, "inode %llu take PRMODE open lock\n",16931693 (unsigned long long)OCFS2_I(inode)->ip_blkno);1694169416951695- if (ocfs2_mount_local(osb))16951695+ if (ocfs2_is_hard_readonly(osb) || ocfs2_mount_local(osb))16961696 goto out;1697169716981698 lockres = &OCFS2_I(inode)->ip_open_lockres;···17171717 mlog(0, "inode %llu try to take %s open lock\n",17181718 (unsigned long long)OCFS2_I(inode)->ip_blkno,17191719 write ? "EXMODE" : "PRMODE");17201720+17211721+ if (ocfs2_is_hard_readonly(osb)) {17221722+ if (write)17231723+ status = -EROFS;17241724+ goto out;17251725+ }1720172617211727 if (ocfs2_mount_local(osb))17221728 goto out;···23042298 if (ocfs2_is_hard_readonly(osb)) {23052299 if (ex)23062300 status = -EROFS;23072307- goto bail;23012301+ goto getbh;23082302 }2309230323102304 if (ocfs2_mount_local(osb))···23622356 mlog_errno(status);23632357 goto bail;23642358 }23652365-23592359+getbh:23662360 if (ret_bh) {23672361 status = ocfs2_assign_bh(inode, ret_bh, local_bh);23682362 if (status < 0) {···2634262826352629 BUG_ON(!dl);2636263026372637- if (ocfs2_is_hard_readonly(osb))26382638- return -EROFS;26312631+ if (ocfs2_is_hard_readonly(osb)) {26322632+ if (ex)26332633+ return -EROFS;26342634+ return 0;26352635+ }2639263626402637 if (ocfs2_mount_local(osb))26412638 return 0;···26562647 struct ocfs2_dentry_lock *dl = dentry->d_fsdata;26572648 struct ocfs2_super *osb = OCFS2_SB(dentry->d_sb);2658264926592659- if (!ocfs2_mount_local(osb))26502650+ if (!ocfs2_is_hard_readonly(osb) && !ocfs2_mount_local(osb))26602651 ocfs2_cluster_unlock(osb, &dl->dl_lockres, level);26612652}26622653
···19501950 if (ret < 0)19511951 mlog_errno(ret);1952195219531953+ if (file->f_flags & O_SYNC)19541954+ handle->h_sync = 1;19551955+19531956 ocfs2_commit_trans(osb, handle);1954195719551958out_inode_unlock:···20532050 }20542051out:20552052 return ret;20532053+}20542054+20552055+static void ocfs2_aiodio_wait(struct inode *inode)20562056+{20572057+ wait_queue_head_t *wq = ocfs2_ioend_wq(inode);20582058+20592059+ wait_event(*wq, (atomic_read(&OCFS2_I(inode)->ip_unaligned_aio) == 0));20602060+}20612061+20622062+static int ocfs2_is_io_unaligned(struct inode *inode, size_t count, loff_t pos)20632063+{20642064+ int blockmask = inode->i_sb->s_blocksize - 1;20652065+ loff_t final_size = pos + count;20662066+20672067+ if ((pos & blockmask) || (final_size & blockmask))20682068+ return 1;20692069+ return 0;20562070}2057207120582072static int ocfs2_prepare_inode_for_refcount(struct inode *inode,···22502230 struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);22512231 int full_coherency = !(osb->s_mount_opt &22522232 OCFS2_MOUNT_COHERENCY_BUFFERED);22332233+ int unaligned_dio = 0;2253223422542235 trace_ocfs2_file_aio_write(inode, file, file->f_path.dentry,22552236 (unsigned long long)OCFS2_I(inode)->ip_blkno,···23182297 goto out;23192298 }2320229923002300+ if (direct_io && !is_sync_kiocb(iocb))23012301+ unaligned_dio = ocfs2_is_io_unaligned(inode, iocb->ki_left,23022302+ *ppos);23032303+23212304 /*23222305 * We can't complete the direct I/O as requested, fall back to23232306 * buffered I/O.···2334230923352310 direct_io = 0;23362311 goto relock;23122312+ }23132313+23142314+ if (unaligned_dio) {23152315+ /*23162316+ * Wait on previous unaligned aio to complete before23172317+ * proceeding.23182318+ */23192319+ ocfs2_aiodio_wait(inode);23202320+23212321+ /* Mark the iocb as needing a decrement in ocfs2_dio_end_io */23222322+ atomic_inc(&OCFS2_I(inode)->ip_unaligned_aio);23232323+ ocfs2_iocb_set_unaligned_aio(iocb);23372324 }2338232523392326 /*···24192382 if ((ret == -EIOCBQUEUED) || 
(!ocfs2_iocb_is_rw_locked(iocb))) {24202383 rw_level = -1;24212384 have_alloc_sem = 0;23852385+ unaligned_dio = 0;24222386 }23872387+23882388+ if (unaligned_dio)23892389+ atomic_dec(&OCFS2_I(inode)->ip_unaligned_aio);2423239024242391out:24252392 if (rw_level != -1)···26322591 return ret;26332592}2634259325942594+/* Refer generic_file_llseek_unlocked() */25952595+static loff_t ocfs2_file_llseek(struct file *file, loff_t offset, int origin)25962596+{25972597+ struct inode *inode = file->f_mapping->host;25982598+ int ret = 0;25992599+26002600+ mutex_lock(&inode->i_mutex);26012601+26022602+ switch (origin) {26032603+ case SEEK_SET:26042604+ break;26052605+ case SEEK_END:26062606+ offset += inode->i_size;26072607+ break;26082608+ case SEEK_CUR:26092609+ if (offset == 0) {26102610+ offset = file->f_pos;26112611+ goto out;26122612+ }26132613+ offset += file->f_pos;26142614+ break;26152615+ case SEEK_DATA:26162616+ case SEEK_HOLE:26172617+ ret = ocfs2_seek_data_hole_offset(file, &offset, origin);26182618+ if (ret)26192619+ goto out;26202620+ break;26212621+ default:26222622+ ret = -EINVAL;26232623+ goto out;26242624+ }26252625+26262626+ if (offset < 0 && !(file->f_mode & FMODE_UNSIGNED_OFFSET))26272627+ ret = -EINVAL;26282628+ if (!ret && offset > inode->i_sb->s_maxbytes)26292629+ ret = -EINVAL;26302630+ if (ret)26312631+ goto out;26322632+26332633+ if (offset != file->f_pos) {26342634+ file->f_pos = offset;26352635+ file->f_version = 0;26362636+ }26372637+26382638+out:26392639+ mutex_unlock(&inode->i_mutex);26402640+ if (ret)26412641+ return ret;26422642+ return offset;26432643+}26442644+26352645const struct inode_operations ocfs2_file_iops = {26362646 .setattr = ocfs2_setattr,26372647 .getattr = ocfs2_getattr,···27072615 * ocfs2_fops_no_plocks and ocfs2_dops_no_plocks!27082616 */27092617const struct file_operations ocfs2_fops = {27102710- .llseek = generic_file_llseek,26182618+ .llseek = ocfs2_file_llseek,27112619 .read = do_sync_read,27122620 .write = 
do_sync_write,27132621 .mmap = ocfs2_mmap,···27552663 * the cluster.27562664 */27572665const struct file_operations ocfs2_fops_no_plocks = {27582758- .llseek = generic_file_llseek,26662666+ .llseek = ocfs2_file_llseek,27592667 .read = do_sync_read,27602668 .write = do_sync_write,27612669 .mmap = ocfs2_mmap,
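The new ocfs2_file_llseek() added above resolves the target offset per origin and then bounds-checks it. A minimal userspace sketch of just that offset arithmetic (the i_mutex locking, the SEEK_DATA/SEEK_HOLE path and the FMODE_UNSIGNED_OFFSET check are omitted; the function and parameter names are illustrative, not kernel API):

```c
#include <errno.h>
#include <stdio.h>   /* SEEK_SET / SEEK_CUR / SEEK_END */

/* Model of the offset computation in ocfs2_file_llseek(): resolve the
 * origin against the current position or the file size, then reject
 * results outside [0, maxbytes]. */
static long long llseek_offset(int origin, long long offset,
                               long long i_size, long long f_pos,
                               long long maxbytes)
{
    switch (origin) {
    case SEEK_SET:
        break;
    case SEEK_END:
        offset += i_size;
        break;
    case SEEK_CUR:
        offset += f_pos;
        break;
    default:
        return -EINVAL;
    }

    /* the hunk checks offset < 0 and offset > sb->s_maxbytes */
    if (offset < 0 || offset > maxbytes)
        return -EINVAL;
    return offset;
}
```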
+1-1
fs/ocfs2/inode.c
···951951 trace_ocfs2_cleanup_delete_inode(952952 (unsigned long long)OCFS2_I(inode)->ip_blkno, sync_data);953953 if (sync_data)954954- write_inode_now(inode, 1);954954+ filemap_write_and_wait(inode->i_mapping);955955 truncate_inode_pages(&inode->i_data, 0);956956}957957
+3
fs/ocfs2/inode.h
···4343 /* protects extended attribute changes on this inode */4444 struct rw_semaphore ip_xattr_sem;45454646+ /* Number of outstanding AIO's which are not page aligned */4747+ atomic_t ip_unaligned_aio;4848+4649 /* These fields are protected by ip_lock */4750 spinlock_t ip_lock;4851 u32 ip_open_count;
+6-5
fs/ocfs2/ioctl.c
···122122 if ((oldflags & OCFS2_IMMUTABLE_FL) || ((flags ^ oldflags) &123123 (OCFS2_APPEND_FL | OCFS2_IMMUTABLE_FL))) {124124 if (!capable(CAP_LINUX_IMMUTABLE))125125- goto bail_unlock;125125+ goto bail_commit;126126 }127127128128 ocfs2_inode->ip_attr = flags;···132132 if (status < 0)133133 mlog_errno(status);134134135135+bail_commit:135136 ocfs2_commit_trans(osb, handle);136137bail_unlock:137138 ocfs2_inode_unlock(inode, 1);···382381 if (!oifi) {383382 status = -ENOMEM;384383 mlog_errno(status);385385- goto bail;384384+ goto out_err;386385 }387386388387 if (o2info_from_user(*oifi, req))···432431 o2info_set_request_error(&oifi->ifi_req, req);433432434433 kfree(oifi);435435-434434+out_err:436435 return status;437436}438437···667666 if (!oiff) {668667 status = -ENOMEM;669668 mlog_errno(status);670670- goto bail;669669+ goto out_err;671670 }672671673672 if (o2info_from_user(*oiff, req))···717716 o2info_set_request_error(&oiff->iff_req, req);718717719718 kfree(oiff);720720-719719+out_err:721720 return status;722721}723722
+20-3
fs/ocfs2/journal.c
···15441544 /* we need to run complete recovery for offline orphan slots */15451545 ocfs2_replay_map_set_state(osb, REPLAY_NEEDED);1546154615471547- mlog(ML_NOTICE, "Recovering node %d from slot %d on device (%u,%u)\n",15481548- node_num, slot_num,15491549- MAJOR(osb->sb->s_dev), MINOR(osb->sb->s_dev));15471547+ printk(KERN_NOTICE "ocfs2: Begin replay journal (node %d, slot %d) on "\15481548+ "device (%u,%u)\n", node_num, slot_num, MAJOR(osb->sb->s_dev),15491549+ MINOR(osb->sb->s_dev));1550155015511551 OCFS2_I(inode)->ip_clusters = le32_to_cpu(fe->i_clusters);15521552···1601160116021602 jbd2_journal_destroy(journal);1603160316041604+ printk(KERN_NOTICE "ocfs2: End replay journal (node %d, slot %d) on "\16051605+ "device (%u,%u)\n", node_num, slot_num, MAJOR(osb->sb->s_dev),16061606+ MINOR(osb->sb->s_dev));16041607done:16051608 /* drop the lock on this nodes journal */16061609 if (got_lock)···18101807 * ocfs2_queue_orphan_scan calls ocfs2_queue_recovery_completion for18111808 * every slot, queuing a recovery of the slot on the ocfs2_wq thread. This18121809 * is done to catch any orphans that are left over in orphan directories.18101810+ *18111811+ * It scans all slots, even ones that are in use. It does so to handle the18121812+ * case described below:18131813+ *18141814+ * Node 1 has an inode it was using. The dentry went away due to memory18151815+ * pressure. Node 1 closes the inode, but it's on the free list. The node18161816+ * has the open lock.18171817+ * Node 2 unlinks the inode. It grabs the dentry lock to notify others,18181818+ * but node 1 has no dentry and doesn't get the message. It trylocks the18191819+ * open lock, sees that another node has a PR, and does nothing.18201820+ * Later node 2 runs its orphan dir. It igets the inode, trylocks the18211821+ * open lock, sees the PR still, and does nothing.18221822+ * Basically, we have to trigger an orphan iput on node 1. 
The only way18231823+ * for this to happen is if node 1 runs node 2's orphan dir.18131824 *18141825 * ocfs2_queue_orphan_scan gets called every ORPHAN_SCAN_SCHEDULE_TIMEOUT18151826 * seconds. It gets an EX lock on os_lockres and checks sequence number
+3-2
fs/ocfs2/journal.h
···441441#define OCFS2_SIMPLE_DIR_EXTEND_CREDITS (2)442442443443/* file update (nlink, etc) + directory mtime/ctime + dir entry block + quota444444- * update on dir + index leaf + dx root update for free list */444444+ * update on dir + index leaf + dx root update for free list +445445+ * previous dirblock update in the free list */445446static inline int ocfs2_link_credits(struct super_block *sb)446447{447447- return 2*OCFS2_INODE_UPDATE_CREDITS + 3 +448448+ return 2*OCFS2_INODE_UPDATE_CREDITS + 4 +448449 ocfs2_quota_trans_credits(sb);449450}450451
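The extra dirblock shows up as the constant going from 3 to 4 in ocfs2_link_credits(). A toy version of the same credit arithmetic, with a made-up OCFS2_INODE_UPDATE_CREDITS value and quotas treated as disabled (both are assumptions for illustration; the real constants and the quota helper depend on the superblock):

```c
/* Illustrative stand-in; the real constant lives in fs/ocfs2/journal.h. */
#define OCFS2_INODE_UPDATE_CREDITS 1

static int quota_trans_credits(void)
{
    return 0;                   /* assume quotas disabled */
}

/* Shape of the patched ocfs2_link_credits(): two inode updates plus four
 * blocks (dir entry, index leaf, dx root, and the previous dirblock in
 * the free list that this hunk adds), plus quota credits. */
static int link_credits(void)
{
    return 2 * OCFS2_INODE_UPDATE_CREDITS + 4 + quota_trans_credits();
}
```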
+24-29
fs/ocfs2/mmap.c
···6161static int __ocfs2_page_mkwrite(struct file *file, struct buffer_head *di_bh,6262 struct page *page)6363{6464- int ret;6464+ int ret = VM_FAULT_NOPAGE;6565 struct inode *inode = file->f_path.dentry->d_inode;6666 struct address_space *mapping = inode->i_mapping;6767 loff_t pos = page_offset(page);···7171 void *fsdata;7272 loff_t size = i_size_read(inode);73737474- /*7575- * Another node might have truncated while we were waiting on7676- * cluster locks.7777- * We don't check size == 0 before the shift. This is borrowed7878- * from do_generic_file_read.7979- */8074 last_index = (size - 1) >> PAGE_CACHE_SHIFT;8181- if (unlikely(!size || page->index > last_index)) {8282- ret = -EINVAL;8383- goto out;8484- }85758676 /*8787- * The i_size check above doesn't catch the case where nodes8888- * truncated and then re-extended the file. We'll re-check the8989- * page mapping after taking the page lock inside of9090- * ocfs2_write_begin_nolock().7777+ * There are cases that lead to the page no longer belonging to the7878+ * mapping.7979+ * 1) pagecache truncates locally due to memory pressure.8080+ * 2) pagecache truncates when another is taking EX lock against 8181+ * inode lock. see ocfs2_data_convert_worker.8282+ * 8383+ * The i_size check doesn't catch the case where nodes truncated and8484+ * then re-extended the file. 
We'll re-check the page mapping after8585+ * taking the page lock inside of ocfs2_write_begin_nolock().8686+ *8787+ * Let VM retry with these cases.9188 */9292- if (!PageUptodate(page) || page->mapping != inode->i_mapping) {9393- /*9494- * the page has been umapped in ocfs2_data_downconvert_worker.9595- * So return 0 here and let VFS retry.9696- */9797- ret = 0;8989+ if ((page->mapping != inode->i_mapping) ||9090+ (!PageUptodate(page)) ||9191+ (page_offset(page) >= size))9892 goto out;9999- }1009310194 /*10295 * Call ocfs2_write_begin() and ocfs2_write_end() to take···109116 if (ret) {110117 if (ret != -ENOSPC)111118 mlog_errno(ret);119119+ if (ret == -ENOMEM)120120+ ret = VM_FAULT_OOM;121121+ else122122+ ret = VM_FAULT_SIGBUS;112123 goto out;113124 }114125115115- ret = ocfs2_write_end_nolock(mapping, pos, len, len, locked_page,116116- fsdata);117117- if (ret < 0) {118118- mlog_errno(ret);126126+ if (!locked_page) {127127+ ret = VM_FAULT_NOPAGE;119128 goto out;120129 }130130+ ret = ocfs2_write_end_nolock(mapping, pos, len, len, locked_page,131131+ fsdata);121132 BUG_ON(ret != len);122122- ret = 0;133133+ ret = VM_FAULT_LOCKED;123134out:124135 return ret;125136}···165168166169out:167170 ocfs2_unblock_signals(&oldset);168168- if (ret)169169- ret = VM_FAULT_SIGBUS;170171 return ret;171172}172173
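The rewritten __ocfs2_page_mkwrite() above now returns VM_FAULT_* codes itself instead of errno values that its caller blanket-converted to SIGBUS. A sketch of just that error mapping, with stand-in fault-code values (the real VM_FAULT_* constants live in linux/mm.h):

```c
#include <errno.h>

/* Stand-ins for the VM_FAULT_* codes from linux/mm.h. */
enum { FAULT_OOM = 1, FAULT_SIGBUS = 2 };

/* The hunk maps a failed write_begin: -ENOMEM becomes an OOM fault,
 * any other error becomes SIGBUS. */
static int fault_code_for_error(int err)
{
    return (err == -ENOMEM) ? FAULT_OOM : FAULT_SIGBUS;
}
```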
+1-1
fs/ocfs2/move_extents.c
···745745 */746746 ocfs2_probe_alloc_group(inode, gd_bh, &goal_bit, len, move_max_hop,747747 new_phys_cpos);748748- if (!new_phys_cpos) {748748+ if (!*new_phys_cpos) {749749 ret = -ENOSPC;750750 goto out_commit;751751 }
···404404 int status = 0;405405 struct ocfs2_quota_recovery *rec;406406407407- mlog(ML_NOTICE, "Beginning quota recovery in slot %u\n", slot_num);407407+ printk(KERN_NOTICE "ocfs2: Beginning quota recovery on device (%s) for "408408+ "slot %u\n", osb->dev_str, slot_num);409409+408410 rec = ocfs2_alloc_quota_recovery();409411 if (!rec)410412 return ERR_PTR(-ENOMEM);···551549 goto out_commit;552550 }553551 lock_buffer(qbh);554554- WARN_ON(!ocfs2_test_bit(bit, dchunk->dqc_bitmap));555555- ocfs2_clear_bit(bit, dchunk->dqc_bitmap);552552+ WARN_ON(!ocfs2_test_bit_unaligned(bit, dchunk->dqc_bitmap));553553+ ocfs2_clear_bit_unaligned(bit, dchunk->dqc_bitmap);556554 le32_add_cpu(&dchunk->dqc_free, 1);557555 unlock_buffer(qbh);558556 ocfs2_journal_dirty(handle, qbh);···598596 struct inode *lqinode;599597 unsigned int flags;600598601601- mlog(ML_NOTICE, "Finishing quota recovery in slot %u\n", slot_num);599599+ printk(KERN_NOTICE "ocfs2: Finishing quota recovery on device (%s) for "600600+ "slot %u\n", osb->dev_str, slot_num);601601+602602 mutex_lock(&sb_dqopt(sb)->dqonoff_mutex);603603 for (type = 0; type < MAXQUOTAS; type++) {604604 if (list_empty(&(rec->r_list[type])))···616612 /* Someone else is holding the lock? Then he must be617613 * doing the recovery. Just skip the file... */618614 if (status == -EAGAIN) {619619- mlog(ML_NOTICE, "skipping quota recovery for slot %d "620620- "because quota file is locked.\n", slot_num);615615+ printk(KERN_NOTICE "ocfs2: Skipping quota recovery on "616616+ "device (%s) for slot %d because quota file is "617617+ "locked.\n", osb->dev_str, slot_num);621618 status = 0;622619 goto out_put;623620 } else if (status < 0) {···949944 * ol_quota_entries_per_block(sb);950945 }951946952952- found = ocfs2_find_next_zero_bit(dchunk->dqc_bitmap, len, 0);947947+ found = ocfs2_find_next_zero_bit_unaligned(dchunk->dqc_bitmap, len, 0);953948 /* We failed? 
*/954949 if (found == len) {955950 mlog(ML_ERROR, "Did not find empty entry in chunk %d with %u"···12131208 struct ocfs2_local_disk_chunk *dchunk;1214120912151210 dchunk = (struct ocfs2_local_disk_chunk *)bh->b_data;12161216- ocfs2_set_bit(*offset, dchunk->dqc_bitmap);12111211+ ocfs2_set_bit_unaligned(*offset, dchunk->dqc_bitmap);12171212 le32_add_cpu(&dchunk->dqc_free, -1);12181213}12191214···12941289 (od->dq_chunk->qc_headerbh->b_data);12951290 /* Mark structure as freed */12961291 lock_buffer(od->dq_chunk->qc_headerbh);12971297- ocfs2_clear_bit(offset, dchunk->dqc_bitmap);12921292+ ocfs2_clear_bit_unaligned(offset, dchunk->dqc_bitmap);12981293 le32_add_cpu(&dchunk->dqc_free, 1);12991294 unlock_buffer(od->dq_chunk->qc_headerbh);13001295 ocfs2_journal_dirty(handle, od->dq_chunk->qc_headerbh);
+2-2
fs/ocfs2/slot_map.c
···493493 goto bail;494494 }495495 } else496496- mlog(ML_NOTICE, "slot %d is already allocated to this node!\n",497497- slot);496496+ printk(KERN_INFO "ocfs2: Slot %d on device (%s) was already "497497+ "allocated to this node!\n", slot, osb->dev_str);498498499499 ocfs2_set_slot(si, slot, osb->node_num);500500 osb->slot_num = slot;
+63-8
fs/ocfs2/stack_o2cb.c
···2828#include "cluster/masklog.h"2929#include "cluster/nodemanager.h"3030#include "cluster/heartbeat.h"3131+#include "cluster/tcp.h"31323233#include "stackglue.h"3334···257256}258257259258/*259259+ * Check if this node is heartbeating and is connected to all other260260+ * heartbeating nodes.261261+ */262262+static int o2cb_cluster_check(void)263263+{264264+ u8 node_num;265265+ int i;266266+ unsigned long hbmap[BITS_TO_LONGS(O2NM_MAX_NODES)];267267+ unsigned long netmap[BITS_TO_LONGS(O2NM_MAX_NODES)];268268+269269+ node_num = o2nm_this_node();270270+ if (node_num == O2NM_MAX_NODES) {271271+ printk(KERN_ERR "o2cb: This node has not been configured.\n");272272+ return -EINVAL;273273+ }274274+275275+ /*276276+ * o2dlm expects o2net sockets to be created. If not, then277277+ * dlm_join_domain() fails with a stack of errors which are both cryptic278278+ * and incomplete. The idea here is to detect upfront whether we have279279+ * managed to connect to all nodes or not. If not, then list the nodes280280+ * to allow the user to check the configuration (incorrect IP, firewall,281281+ * etc.) Yes, this is racy. 
But it's not the end of the world.282282+ */283283+#define O2CB_MAP_STABILIZE_COUNT 60284284+ for (i = 0; i < O2CB_MAP_STABILIZE_COUNT; ++i) {285285+ o2hb_fill_node_map(hbmap, sizeof(hbmap));286286+ if (!test_bit(node_num, hbmap)) {287287+ printk(KERN_ERR "o2cb: %s heartbeat has not been "288288+ "started.\n", (o2hb_global_heartbeat_active() ?289289+ "Global" : "Local"));290290+ return -EINVAL;291291+ }292292+ o2net_fill_node_map(netmap, sizeof(netmap));293293+ /* Force set the current node to allow easy compare */294294+ set_bit(node_num, netmap);295295+ if (!memcmp(hbmap, netmap, sizeof(hbmap)))296296+ return 0;297297+ if (i < O2CB_MAP_STABILIZE_COUNT)298298+ msleep(1000);299299+ }300300+301301+ printk(KERN_ERR "o2cb: This node could not connect to nodes:");302302+ i = -1;303303+ while ((i = find_next_bit(hbmap, O2NM_MAX_NODES,304304+ i + 1)) < O2NM_MAX_NODES) {305305+ if (!test_bit(i, netmap))306306+ printk(" %u", i);307307+ }308308+ printk(".\n");309309+310310+ return -ENOTCONN;311311+}312312+313313+/*260314 * Called from the dlm when it's about to evict a node. 
This is how the261315 * classic stack signals node death.262316 */···319263{320264 struct ocfs2_cluster_connection *conn = data;321265322322- mlog(ML_NOTICE, "o2dlm has evicted node %d from group %.*s\n",323323- node_num, conn->cc_namelen, conn->cc_name);266266+ printk(KERN_NOTICE "o2cb: o2dlm has evicted node %d from domain %.*s\n",267267+ node_num, conn->cc_namelen, conn->cc_name);324268325269 conn->cc_recovery_handler(node_num, conn->cc_recovery_data);326270}···336280 BUG_ON(conn == NULL);337281 BUG_ON(conn->cc_proto == NULL);338282339339- /* for now we only have one cluster/node, make sure we see it340340- * in the heartbeat universe */341341- if (!o2hb_check_local_node_heartbeating()) {342342- if (o2hb_global_heartbeat_active())343343- mlog(ML_ERROR, "Global heartbeat not started\n");344344- rc = -EINVAL;283283+ /* Ensure cluster stack is up and all nodes are connected */284284+ rc = o2cb_cluster_check();285285+ if (rc) {286286+ printk(KERN_ERR "o2cb: Cluster check failed. Fix errors "287287+ "before retrying.\n");345288 goto out;346289 }347290
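o2cb_cluster_check() above compares a heartbeat node map against a network node map after force-setting the local node's bit. The same idea in self-contained userspace form, using plain byte arrays instead of the kernel bitmap helpers (the names and node count are illustrative):

```c
#include <string.h>

#define MAX_NODES 32

/* Returns the number of heartbeating nodes we are NOT connected to.
 * Mirrors the check in o2cb_cluster_check(): force our own bit in the
 * net map (there is no socket to ourselves), then the two maps match
 * exactly iff every heartbeating node is reachable. */
static int count_unreachable(unsigned char hbmap[MAX_NODES],
                             unsigned char netmap[MAX_NODES], int self)
{
    int i, missing = 0;

    netmap[self] = 1;           /* mirror of set_bit(node_num, netmap) */
    if (!memcmp(hbmap, netmap, MAX_NODES))
        return 0;               /* connected to all heartbeating nodes */
    for (i = 0; i < MAX_NODES; i++)
        if (hbmap[i] && !netmap[i])
            missing++;
    return missing;
}
```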
+16-9
fs/ocfs2/super.c
···5454#include "ocfs1_fs_compat.h"55555656#include "alloc.h"5757+#include "aops.h"5758#include "blockcheck.h"5859#include "dlmglue.h"5960#include "export.h"···1108110711091108 ocfs2_set_ro_flag(osb, 1);1110110911111111- printk(KERN_NOTICE "Readonly device detected. No cluster "11121112- "services will be utilized for this mount. Recovery "11131113- "will be skipped.\n");11101110+ printk(KERN_NOTICE "ocfs2: Readonly device (%s) detected. "11111111+ "Cluster services will not be used for this mount. "11121112+ "Recovery will be skipped.\n", osb->dev_str);11141113 }1115111411161115 if (!ocfs2_is_hard_readonly(osb)) {···16171616 return 0;16181617}1619161816191619+wait_queue_head_t ocfs2__ioend_wq[OCFS2_IOEND_WQ_HASH_SZ];16201620+16201621static int __init ocfs2_init(void)16211622{16221622- int status;16231623+ int status, i;1623162416241625 ocfs2_print_version();16261626+16271627+ for (i = 0; i < OCFS2_IOEND_WQ_HASH_SZ; i++)16281628+ init_waitqueue_head(&ocfs2__ioend_wq[i]);1625162916261630 status = init_ocfs2_uptodate_cache();16271631 if (status < 0) {···17661760 ocfs2_extent_map_init(&oi->vfs_inode);17671761 INIT_LIST_HEAD(&oi->ip_io_markers);17681762 oi->ip_dir_start_lookup = 0;17691769-17631763+ atomic_set(&oi->ip_unaligned_aio, 0);17701764 init_rwsem(&oi->ip_alloc_sem);17711765 init_rwsem(&oi->ip_xattr_sem);17721766 mutex_init(&oi->ip_io_mutex);···19801974 * If we failed before we got a uuid_str yet, we can't stop19811975 * heartbeat. 
Otherwise, do it.19821976 */19831983- if (!mnt_err && !ocfs2_mount_local(osb) && osb->uuid_str)19771977+ if (!mnt_err && !ocfs2_mount_local(osb) && osb->uuid_str &&19781978+ !ocfs2_is_hard_readonly(osb))19841979 hangup_needed = 1;1985198019861981 if (osb->cconn)···23602353 mlog_errno(status);23612354 goto bail;23622355 }23632363- cleancache_init_shared_fs((char *)&uuid_net_key, sb);23562356+ cleancache_init_shared_fs((char *)&di->id2.i_super.s_uuid, sb);2364235723652358bail:23662359 return status;···24692462 goto finally;24702463 }24712464 } else {24722472- mlog(ML_NOTICE, "File system was not unmounted cleanly, "24732473- "recovering volume.\n");24652465+ printk(KERN_NOTICE "ocfs2: File system on device (%s) was not "24662466+ "unmounted cleanly, recovering it.\n", osb->dev_str);24742467 }2475246824762469 local = ocfs2_mount_local(osb);
···167167 }168168169169 psinfo = psi;170170+ mutex_init(&psinfo->read_mutex);170171 spin_unlock(&pstore_lock);171172172173 if (owner && !try_module_get(owner)) {···196195void pstore_get_records(int quiet)197196{198197 struct pstore_info *psi = psinfo;198198+ char *buf = NULL;199199 ssize_t size;200200 u64 id;201201 enum pstore_type_id type;202202 struct timespec time;203203 int failed = 0, rc;204204- unsigned long flags;205204206205 if (!psi)207206 return;208207209209- spin_lock_irqsave(&psinfo->buf_lock, flags);208208+ mutex_lock(&psi->read_mutex);210209 rc = psi->open(psi);211210 if (rc)212211 goto out;213212214214- while ((size = psi->read(&id, &type, &time, psi)) > 0) {215215- rc = pstore_mkfile(type, psi->name, id, psi->buf, (size_t)size,213213+ while ((size = psi->read(&id, &type, &time, &buf, psi)) > 0) {214214+ rc = pstore_mkfile(type, psi->name, id, buf, (size_t)size,216215 time, psi);216216+ kfree(buf);217217+ buf = NULL;217218 if (rc && (rc != -EEXIST || !quiet))218219 failed++;219220 }220221 psi->close(psi);221222out:222222- spin_unlock_irqrestore(&psinfo->buf_lock, flags);223223+ mutex_unlock(&psi->read_mutex);223224224225 if (failed)225226 printk(KERN_WARNING "pstore: failed to load %d record(s) from '%s'\n",
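The pstore change above moves buffer ownership into the read loop: the backend's read() now hands back a freshly allocated record buffer which pstore_get_records() frees after pstore_mkfile(). A userspace mock of that ownership contract (the backend and its record table are invented for illustration):

```c
#include <stdlib.h>
#include <string.h>

static const char *records[] = { "oops-1", "panic-2", NULL };
static int cursor;

/* Backend read(): allocates the record buffer; ownership passes to the
 * caller, as in the new psi->read(..., &buf, psi) contract. */
static long mock_read(char **buf)
{
    const char *rec = records[cursor];

    if (!rec)
        return 0;               /* no more records */
    cursor++;
    *buf = malloc(strlen(rec) + 1);
    strcpy(*buf, rec);
    return (long)strlen(rec);
}

/* Mirrors the patched loop in pstore_get_records(): use, free, reset. */
static int drain_records(void)
{
    char *buf = NULL;
    long size;
    int n = 0;

    while ((size = mock_read(&buf)) > 0) {
        n++;                    /* stand-in for pstore_mkfile(..., buf, size) */
        free(buf);
        buf = NULL;
    }
    return n;
}
```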
+2
include/drm/drm_mode.h
···235235#define DRM_MODE_FB_DIRTY_ANNOTATE_FILL 0x02236236#define DRM_MODE_FB_DIRTY_FLAGS 0x03237237238238+#define DRM_MODE_FB_DIRTY_MAX_CLIPS 256239239+238240/*239241 * Mark a region of a framebuffer as dirty.240242 *
+4-5
include/drm/exynos_drm.h
···3232/**3333 * User-desired buffer creation information structure.3434 *3535- * @size: requested size for the object.3535+ * @size: user-desired memory allocation size.3636 * - this size value would be page-aligned internally.3737 * @flags: user request for setting memory type or cache attributes.3838- * @handle: returned handle for the object.3939- * @pad: just padding to be 64-bit aligned.3838+ * @handle: returned a handle to created gem object.3939+ * - this handle will be set by gem module of kernel side.4040 */4141struct drm_exynos_gem_create {4242- unsigned int size;4242+ uint64_t size;4343 unsigned int flags;4444 unsigned int handle;4545- unsigned int pad;4645};47464847/**
+4
include/drm/radeon_drm.h
···874874875875#define RADEON_CHUNK_ID_RELOCS 0x01876876#define RADEON_CHUNK_ID_IB 0x02877877+#define RADEON_CHUNK_ID_FLAGS 0x03878878+879879+/* The first dword of RADEON_CHUNK_ID_FLAGS is a uint32 of these flags: */880880+#define RADEON_CS_KEEP_TILING_FLAGS 0x01877881878882struct drm_radeon_cs_chunk {879883 uint32_t chunk_id;
+7-1
include/linux/ceph/osd_client.h
···1010#include "osdmap.h"1111#include "messenger.h"12121313+/* 1414+ * Maximum object name size 1515+ * (must be at least as big as RBD_MAX_MD_NAME_LEN -- currently 100) 1616+ */1717+#define MAX_OBJ_NAME_SIZE 1001818+1319struct ceph_msg;1420struct ceph_snap_context;1521struct ceph_osd_request;···8175 struct inode *r_inode; /* for use by callbacks */8276 void *r_priv; /* ditto */83778484- char r_oid[40]; /* object name */7878+ char r_oid[MAX_OBJ_NAME_SIZE]; /* object name */8579 int r_oid_len;8680 unsigned long r_stamp; /* send OR check time */8781
+2-1
include/linux/clocksource.h
···156156 * @mult: cycle to nanosecond multiplier157157 * @shift: cycle to nanosecond divisor (power of two)158158 * @max_idle_ns: max idle time permitted by the clocksource (nsecs)159159+ * @maxadj: maximum adjustment value to mult (~11%)159160 * @flags: flags describing special properties160161 * @archdata: arch-specific data161162 * @suspend: suspend function for the clocksource, if necessary···173172 u32 mult;174173 u32 shift;175174 u64 max_idle_ns;176176-175175+ u32 maxadj;177176#ifdef CONFIG_ARCH_CLOCKSOURCE_DATA178177 struct arch_clocksource_data archdata;179178#endif
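The new maxadj field caps mult adjustment at roughly 11%, per the kerneldoc above. A sketch of that arithmetic (the helper name is illustrative; 64-bit math is used so large mult values don't overflow):

```c
#include <stdint.h>

/* ~11% of mult, computed in 64 bits before truncating back to u32. */
static uint32_t max_adjustment(uint32_t mult)
{
    return (uint32_t)(((uint64_t)mult * 11) / 100);
}
```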
+1-1
include/linux/device.h
···6969 * @resume: Called to bring a device on this bus out of sleep mode.7070 * @pm: Power management operations of this bus, callback the specific7171 * device driver's pm-ops.7272- * @iommu_ops IOMMU specific operations for this bus, used to attach IOMMU7272+ * @iommu_ops: IOMMU specific operations for this bus, used to attach IOMMU7373 * driver implementations to a bus and allow the driver to do7474 * bus-specific setup7575 * @p: The private data of the driver core, only the driver core can
-3
include/linux/i2c.h
···432432/* Internal numbers to terminate lists */433433#define I2C_CLIENT_END 0xfffeU434434435435-/* The numbers to use to set I2C bus address */436436-#define ANY_I2C_BUS 0xffff437437-438435/* Construct an I2C_CLIENT_END-terminated array of i2c addresses */439436#define I2C_ADDRS(addr, addrs...) \440437 ((const unsigned short []){ addr, ## addrs, I2C_CLIENT_END })
···1212 unsigned int is_enabled:1; /* Enable bit is set */1313};14141515-#ifdef CONFIG_PCI_IOV1515+#ifdef CONFIG_PCI_ATS16161717extern int pci_enable_ats(struct pci_dev *dev, int ps);1818extern void pci_disable_ats(struct pci_dev *dev);···2929 return dev->ats && dev->ats->is_enabled;3030}31313232-#else /* CONFIG_PCI_IOV */3232+#else /* CONFIG_PCI_ATS */33333434static inline int pci_enable_ats(struct pci_dev *dev, int ps)3535{···5050 return 0;5151}52525353-#endif /* CONFIG_PCI_IOV */5353+#endif /* CONFIG_PCI_ATS */54545555#ifdef CONFIG_PCI_PRI5656
+1-1
include/linux/pci.h
···338338 struct list_head msi_list;339339#endif340340 struct pci_vpd *vpd;341341-#ifdef CONFIG_PCI_IOV341341+#ifdef CONFIG_PCI_ATS342342 union {343343 struct pci_sriov *sriov; /* SR-IOV capability related */344344 struct pci_dev *physfn; /* the PF this VF is associated with */
+127-88
include/linux/pm.h
···5454/**5555 * struct dev_pm_ops - device PM callbacks5656 *5757- * Several driver power state transitions are externally visible, affecting5757+ * Several device power state transitions are externally visible, affecting5858 * the state of pending I/O queues and (for drivers that touch hardware)5959 * interrupts, wakeups, DMA, and other hardware state. There may also be6060- * internal transitions to various low power modes, which are transparent6060+ * internal transitions to various low-power modes which are transparent6161 * to the rest of the driver stack (such as a driver that's ON gating off6262 * clocks which are not in active use).6363 *6464- * The externally visible transitions are handled with the help of the following6565- * callbacks included in this structure:6464+ * The externally visible transitions are handled with the help of callbacks6565+ * included in this structure in such a way that two levels of callbacks are6666+ * involved. First, the PM core executes callbacks provided by PM domains,6767+ * device types, classes and bus types. They are the subsystem-level callbacks6868+ * supposed to execute callbacks provided by device drivers, although they may6969+ * choose not to do that. If the driver callbacks are executed, they have to7070+ * collaborate with the subsystem-level callbacks to achieve the goals7171+ * appropriate for the given system transition, given transition phase and the7272+ * subsystem the device belongs to.6673 *6767- * @prepare: Prepare the device for the upcoming transition, but do NOT change6868- * its hardware state. Prevent new children of the device from being6969- * registered after @prepare() returns (the driver's subsystem and7070- * generally the rest of the kernel is supposed to prevent new calls to the7171- * probe method from being made too once @prepare() has succeeded). If7272- * @prepare() detects a situation it cannot handle (e.g. 
registration of a7373- * child already in progress), it may return -EAGAIN, so that the PM core7474- * can execute it once again (e.g. after the new child has been registered)7575- * to recover from the race condition. This method is executed for all7676- * kinds of suspend transitions and is followed by one of the suspend7777- * callbacks: @suspend(), @freeze(), or @poweroff().7878- * The PM core executes @prepare() for all devices before starting to7979- * execute suspend callbacks for any of them, so drivers may assume all of8080- * the other devices to be present and functional while @prepare() is being8181- * executed. In particular, it is safe to make GFP_KERNEL memory8282- * allocations from within @prepare(). However, drivers may NOT assume8383- * anything about the availability of the user space at that time and it8484- * is not correct to request firmware from within @prepare() (it's too8585- * late to do that). [To work around this limitation, drivers may8686- * register suspend and hibernation notifiers that are executed before the8787- * freezing of tasks.]7474+ * @prepare: The principal role of this callback is to prevent new children of7575+ * the device from being registered after it has returned (the driver's7676+ * subsystem and generally the rest of the kernel is supposed to prevent7777+ * new calls to the probe method from being made too once @prepare() has7878+ * succeeded). If @prepare() detects a situation it cannot handle (e.g.7979+ * registration of a child already in progress), it may return -EAGAIN, so8080+ * that the PM core can execute it once again (e.g. after a new child has8181+ * been registered) to recover from the race condition.8282+ * This method is executed for all kinds of suspend transitions and is8383+ * followed by one of the suspend callbacks: @suspend(), @freeze(), or8484+ * @poweroff(). 
The PM core executes subsystem-level @prepare() for all8585+ * devices before starting to invoke suspend callbacks for any of them, so8686+ * generally devices may be assumed to be functional or to respond to8787+ * runtime resume requests while @prepare() is being executed. However,8888+ * device drivers may NOT assume anything about the availability of user8989+ * space at that time and it is NOT valid to request firmware from within9090+ * @prepare() (it's too late to do that). It also is NOT valid to allocate9191+ * substantial amounts of memory from @prepare() in the GFP_KERNEL mode.9292+ * [To work around these limitations, drivers may register suspend and9393+ * hibernation notifiers to be executed before the freezing of tasks.]8894 *8995 * @complete: Undo the changes made by @prepare(). This method is executed for9096 * all kinds of resume transitions, following one of the resume callbacks:9197 * @resume(), @thaw(), @restore(). Also called if the state transition9292- * fails before the driver's suspend callback (@suspend(), @freeze(),9393- * @poweroff()) can be executed (e.g. if the suspend callback fails for one9898+ * fails before the driver's suspend callback: @suspend(), @freeze() or9999+ * @poweroff(), can be executed (e.g. if the suspend callback fails for one94100 * of the other devices that the PM core has unsuccessfully attempted to95101 * suspend earlier).9696- * The PM core executes @complete() after it has executed the appropriate9797- * resume callback for all devices.102102+ * The PM core executes subsystem-level @complete() after it has executed103103+ * the appropriate resume callbacks for all devices.98104 *99105 * @suspend: Executed before putting the system into a sleep state in which the100100- * contents of main memory are preserved. 
Quiesce the device, put it into101101- * a low power state appropriate for the upcoming system state (such as102102- * PCI_D3hot), and enable wakeup events as appropriate.106106+ * contents of main memory are preserved. The exact action to perform107107+ * depends on the device's subsystem (PM domain, device type, class or bus108108+ * type), but generally the device must be quiescent after subsystem-level109109+ * @suspend() has returned, so that it doesn't do any I/O or DMA.110110+ * Subsystem-level @suspend() is executed for all devices after invoking111111+ * subsystem-level @prepare() for all of them.103112 *104113 * @resume: Executed after waking the system up from a sleep state in which the105105- * contents of main memory were preserved. Put the device into the106106- * appropriate state, according to the information saved in memory by the107107- * preceding @suspend(). The driver starts working again, responding to108108- * hardware events and software requests. The hardware may have gone109109- * through a power-off reset, or it may have maintained state from the110110- * previous suspend() which the driver may rely on while resuming. On most111111- * platforms, there are no restrictions on availability of resources like112112- * clocks during @resume().114114+ * contents of main memory were preserved. The exact action to perform115115+ * depends on the device's subsystem, but generally the driver is expected116116+ * to start working again, responding to hardware events and software117117+ * requests (the device itself may be left in a low-power state, waiting118118+ * for a runtime resume to occur). The state of the device at the time its119119+ * driver's @resume() callback is run depends on the platform and subsystem120120+ * the device belongs to. 
On most platforms, there are no restrictions on121121+ * availability of resources like clocks during @resume().122122+ * Subsystem-level @resume() is executed for all devices after invoking123123+ * subsystem-level @resume_noirq() for all of them.113124 *114125 * @freeze: Hibernation-specific, executed before creating a hibernation image.115115- * Quiesce operations so that a consistent image can be created, but do NOT116116- * otherwise put the device into a low power device state and do NOT emit117117- * system wakeup events. Save in main memory the device settings to be118118- * used by @restore() during the subsequent resume from hibernation or by119119- * the subsequent @thaw(), if the creation of the image or the restoration120120- * of main memory contents from it fails.126126+ * Analogous to @suspend(), but it should not enable the device to signal127127+ * wakeup events or change its power state. The majority of subsystems128128+ * (with the notable exception of the PCI bus type) expect the driver-level129129+ * @freeze() to save the device settings in memory to be used by @restore()130130+ * during the subsequent resume from hibernation.131131+ * Subsystem-level @freeze() is executed for all devices after invoking132132+ * subsystem-level @prepare() for all of them.121133 *122134 * @thaw: Hibernation-specific, executed after creating a hibernation image OR123123- * if the creation of the image fails. Also executed after a failing135135+ * if the creation of an image has failed. Also executed after a failing124136 * attempt to restore the contents of main memory from such an image.125137 * Undo the changes made by the preceding @freeze(), so the device can be126138 * operated in the same way as immediately before the call to @freeze().139139+ * Subsystem-level @thaw() is executed for all devices after invoking140140+ * subsystem-level @thaw_noirq() for all of them. 
It also may be executed141141+ * directly after @freeze() in case of a transition error.127142 *128143 * @poweroff: Hibernation-specific, executed after saving a hibernation image.129129- * Quiesce the device, put it into a low power state appropriate for the130130- * upcoming system state (such as PCI_D3hot), and enable wakeup events as131131- * appropriate.144144+ * Analogous to @suspend(), but it need not save the device's settings in145145+ * memory.146146+ * Subsystem-level @poweroff() is executed for all devices after invoking147147+ * subsystem-level @prepare() for all of them.132148 *133149 * @restore: Hibernation-specific, executed after restoring the contents of main134134- * memory from a hibernation image. Driver starts working again,135135- * responding to hardware events and software requests. Drivers may NOT136136- * make ANY assumptions about the hardware state right prior to @restore().137137- * On most platforms, there are no restrictions on availability of138138- * resources like clocks during @restore().150150+ * memory from a hibernation image, analogous to @resume().139151 *140140- * @suspend_noirq: Complete the operations of ->suspend() by carrying out any141141- * actions required for suspending the device that need interrupts to be142142- * disabled152152+ * @suspend_noirq: Complete the actions started by @suspend(). Carry out any153153+ * additional operations required for suspending the device that might be154154+ * racing with its driver's interrupt handler, which is guaranteed not to155155+ * run while @suspend_noirq() is being executed.156156+ * It generally is expected that the device will be in a low-power state157157+ * (appropriate for the target system sleep state) after subsystem-level158158+ * @suspend_noirq() has returned successfully. If the device can generate159159+ * system wakeup signals and is enabled to wake up the system, it should be160160+ * configured to do so at that time. 
However, depending on the platform161161+ * and device's subsystem, @suspend() may be allowed to put the device into162162+ * the low-power state and configure it to generate wakeup signals, in163163+ * which case it generally is not necessary to define @suspend_noirq().143164 *144144- * @resume_noirq: Prepare for the execution of ->resume() by carrying out any145145- * actions required for resuming the device that need interrupts to be146146- * disabled165165+ * @resume_noirq: Prepare for the execution of @resume() by carrying out any166166+ * operations required for resuming the device that might be racing with167167+ * its driver's interrupt handler, which is guaranteed not to run while168168+ * @resume_noirq() is being executed.147169 *148148- * @freeze_noirq: Complete the operations of ->freeze() by carrying out any149149- * actions required for freezing the device that need interrupts to be150150- * disabled170170+ * @freeze_noirq: Complete the actions started by @freeze(). Carry out any171171+ * additional operations required for freezing the device that might be172172+ * racing with its driver's interrupt handler, which is guaranteed not to173173+ * run while @freeze_noirq() is being executed.174174+ * The power state of the device should not be changed by either @freeze()175175+ * or @freeze_noirq() and it should not be configured to signal system176176+ * wakeup by any of these callbacks.151177 *152152- * @thaw_noirq: Prepare for the execution of ->thaw() by carrying out any153153- * actions required for thawing the device that need interrupts to be154154- * disabled178178+ * @thaw_noirq: Prepare for the execution of @thaw() by carrying out any179179+ * operations required for thawing the device that might be racing with its180180+ * driver's interrupt handler, which is guaranteed not to run while181181+ * @thaw_noirq() is being executed.155182 *156156- * @poweroff_noirq: Complete the operations of ->poweroff() by carrying out any157157- * actions 
required for handling the device that need interrupts to be158158- * disabled183183+ * @poweroff_noirq: Complete the actions started by @poweroff(). Analogous to184184+ * @suspend_noirq(), but it need not save the device's settings in memory.159185 *160160- * @restore_noirq: Prepare for the execution of ->restore() by carrying out any161161- * actions required for restoring the operations of the device that need162162- * interrupts to be disabled186186+ * @restore_noirq: Prepare for the execution of @restore() by carrying out any187187+ * operations required for restoring the device that might be racing with its188188+ * driver's interrupt handler, which is guaranteed not to run while189189+ * @restore_noirq() is being executed. Analogous to @resume_noirq().163190 *164191 * All of the above callbacks, except for @complete(), return error codes.165192 * However, the error codes returned by the resume operations, @resume(),166166- * @thaw(), @restore(), @resume_noirq(), @thaw_noirq(), and @restore_noirq() do193193+ * @thaw(), @restore(), @resume_noirq(), @thaw_noirq(), and @restore_noirq(), do167194 * not cause the PM core to abort the resume transition during which they are168168- * returned. The error codes returned in that cases are only printed by the PM195195+ * returned. The error codes returned in those cases are only printed by the PM169196 * core to the system logs for debugging purposes. Still, it is recommended170197 * that drivers only return error codes from their resume methods in case of an171198 * unrecoverable failure (i.e. when the device being handled refuses to resume
However, a callback routine must NOT try to unregister the device178178+ * it was called for, although it may unregister children of that device (for179179+ * example, if it detects that a child was unplugged while the system was180180+ * asleep).206181 *207207- * There also are the following callbacks related to run-time power management208208- * of devices:182182+ * Refer to Documentation/power/devices.txt for more information about the role183183+ * of the above callbacks in the system suspend process.184184+ *185185+ * There also are callbacks related to runtime power management of devices.186186+ * Again, these callbacks are executed by the PM core only for subsystems187187+ * (PM domains, device types, classes and bus types) and the subsystem-level188188+ * callbacks are supposed to invoke the driver callbacks. Moreover, the exact189189+ * actions to be performed by a device driver's callbacks generally depend on190190+ * the platform and subsystem the device belongs to.209191 *210192 * @runtime_suspend: Prepare the device for a condition in which it won't be211193 * able to communicate with the CPU(s) and RAM due to power management.212212- * This need not mean that the device should be put into a low power state.194194+ * This need not mean that the device should be put into a low-power state.213195 * For example, if the device is behind a link which is about to be turned214196 * off, the device may remain at full power. 
If the device does go to low215215- * power and is capable of generating run-time wake-up events, remote216216- * wake-up (i.e., a hardware mechanism allowing the device to request a217217- * change of its power state via a wake-up event, such as PCI PME) should218218- * be enabled for it.197197+ * power and is capable of generating runtime wakeup events, remote wakeup198198+ * (i.e., a hardware mechanism allowing the device to request a change of199199+ * its power state via an interrupt) should be enabled for it.219200 *220201 * @runtime_resume: Put the device into the fully active state in response to a221221- * wake-up event generated by hardware or at the request of software. If222222- * necessary, put the device into the full power state and restore its202202+ * wakeup event generated by hardware or at the request of software. If203203+ * necessary, put the device into the full-power state and restore its223204 * registers, so that it is fully operational.224205 *225225- * @runtime_idle: Device appears to be inactive and it might be put into a low226226- * power state if all of the necessary conditions are satisfied. Check206206+ * @runtime_idle: Device appears to be inactive and it might be put into a207207+ * low-power state if all of the necessary conditions are satisfied. Check227208 * these conditions and handle the device as appropriate, possibly queueing228209 * a suspend request for it. The return value is ignored by the PM core.210210+ *211211+ * Refer to Documentation/power/runtime_pm.txt for more information about the212212+ * role of the above callbacks in device runtime power management.213213+ *229214 */230215231216struct dev_pm_ops {
···15211521#ifdef CONFIG_FAULT_INJECTION15221522 int make_it_fail;15231523#endif15241524- struct prop_local_single dirties;15251524 /*15261525 * when (nr_dirtied >= nr_dirtied_pause), it's time to call15271526 * balance_dirty_pages() for some dirty throttling pause
···8585 * @reset: reset the device8686 * vdev: the virtio device8787 * After this, status and feature negotiation must be done again8888+ * Device must not be reset from its vq/config callbacks, or in8989+ * parallel with being added/removed.8890 * @find_vqs: find virtqueues and instantiate them.8991 * vdev: the virtio_device9092 * nvqs: the number of virtqueues to find
+1-1
include/linux/virtio_mmio.h
···6363#define VIRTIO_MMIO_GUEST_FEATURES 0x02064646565/* Activated features set selector - Write Only */6666-#define VIRTIO_MMIO_GUEST_FEATURES_SET 0x0246666+#define VIRTIO_MMIO_GUEST_FEATURES_SEL 0x02467676868/* Guest's memory page size in bytes - Write Only */6969#define VIRTIO_MMIO_GUEST_PAGE_SIZE 0x028
+1
include/net/inetpeer.h
···35353636 u32 metrics[RTAX_MAX];3737 u32 rate_tokens; /* rate limiting for ICMP */3838+ int redirect_genid;3839 unsigned long rate_last;3940 unsigned long pmtu_expires;4041 u32 pmtu_orig;
+10-9
include/net/netfilter/nf_conntrack_ecache.h
···6767 int (*fcn)(unsigned int events, struct nf_ct_event *item);6868};69697070-extern struct nf_ct_event_notifier __rcu *nf_conntrack_event_cb;7171-extern int nf_conntrack_register_notifier(struct nf_ct_event_notifier *nb);7272-extern void nf_conntrack_unregister_notifier(struct nf_ct_event_notifier *nb);7070+extern int nf_conntrack_register_notifier(struct net *net, struct nf_ct_event_notifier *nb);7171+extern void nf_conntrack_unregister_notifier(struct net *net, struct nf_ct_event_notifier *nb);73727473extern void nf_ct_deliver_cached_events(struct nf_conn *ct);75747675static inline void7776nf_conntrack_event_cache(enum ip_conntrack_events event, struct nf_conn *ct)7877{7878+ struct net *net = nf_ct_net(ct);7979 struct nf_conntrack_ecache *e;80808181- if (nf_conntrack_event_cb == NULL)8181+ if (net->ct.nf_conntrack_event_cb == NULL)8282 return;83838484 e = nf_ct_ecache_find(ct);···9595 int report)9696{9797 int ret = 0;9898+ struct net *net = nf_ct_net(ct);9899 struct nf_ct_event_notifier *notify;99100 struct nf_conntrack_ecache *e;100101101102 rcu_read_lock();102102- notify = rcu_dereference(nf_conntrack_event_cb);103103+ notify = rcu_dereference(net->ct.nf_conntrack_event_cb);103104 if (notify == NULL)104105 goto out_unlock;105106···165164 int (*fcn)(unsigned int events, struct nf_exp_event *item);166165};167166168168-extern struct nf_exp_event_notifier __rcu *nf_expect_event_cb;169169-extern int nf_ct_expect_register_notifier(struct nf_exp_event_notifier *nb);170170-extern void nf_ct_expect_unregister_notifier(struct nf_exp_event_notifier *nb);167167+extern int nf_ct_expect_register_notifier(struct net *net, struct nf_exp_event_notifier *nb);168168+extern void nf_ct_expect_unregister_notifier(struct net *net, struct nf_exp_event_notifier *nb);171169172170static inline void173171nf_ct_expect_event_report(enum ip_conntrack_expect_events event,···174174 u32 pid,175175 int report)176176{177177+ struct net *net = nf_ct_exp_net(exp);177178 struct 
nf_exp_event_notifier *notify;178179 struct nf_conntrack_ecache *e;179180180181 rcu_read_lock();181181- notify = rcu_dereference(nf_expect_event_cb);182182+ notify = rcu_dereference(net->ct.nf_expect_event_cb);182183 if (notify == NULL)183184 goto out_unlock;184185
+2
include/net/netns/conntrack.h
···1818 struct hlist_nulls_head unconfirmed;1919 struct hlist_nulls_head dying;2020 struct ip_conntrack_stat __percpu *stat;2121+ struct nf_ct_event_notifier __rcu *nf_conntrack_event_cb;2222+ struct nf_exp_event_notifier __rcu *nf_expect_event_cb;2123 int sysctl_events;2224 unsigned int sysctl_events_retry_timeout;2325 int sysctl_acct;
+6-9
include/net/red.h
···116116 u32 qR; /* Cached random number */117117118118 unsigned long qavg; /* Average queue length: A scaled */119119- psched_time_t qidlestart; /* Start of current idle period */119119+ ktime_t qidlestart; /* Start of current idle period */120120};121121122122static inline u32 red_rmask(u8 Plog)···148148149149static inline int red_is_idling(struct red_parms *p)150150{151151- return p->qidlestart != PSCHED_PASTPERFECT;151151+ return p->qidlestart.tv64 != 0;152152}153153154154static inline void red_start_of_idle_period(struct red_parms *p)155155{156156- p->qidlestart = psched_get_time();156156+ p->qidlestart = ktime_get();157157}158158159159static inline void red_end_of_idle_period(struct red_parms *p)160160{161161- p->qidlestart = PSCHED_PASTPERFECT;161161+ p->qidlestart.tv64 = 0;162162}163163164164static inline void red_restart(struct red_parms *p)···170170171171static inline unsigned long red_calc_qavg_from_idle_time(struct red_parms *p)172172{173173- psched_time_t now;174174- long us_idle;173173+ s64 delta = ktime_us_delta(ktime_get(), p->qidlestart);174174+ long us_idle = min_t(s64, delta, p->Scell_max);175175 int shift;176176-177177- now = psched_get_time();178178- us_idle = psched_tdiff_bounded(now, p->qidlestart, p->Scell_max);179176180177 /*181178 * The problem: ideally, average length queue recalcultion should
-7
include/video/omapdss.h
···307307 void (*dsi_disable_pads)(int dsi_id, unsigned lane_mask);308308};309309310310-#if defined(CONFIG_OMAP2_DSS_MODULE) || defined(CONFIG_OMAP2_DSS)311310/* Init with the board info */312311extern int omap_display_init(struct omap_dss_board_info *board_data);313313-#else314314-static inline int omap_display_init(struct omap_dss_board_info *board_data)315315-{316316- return 0;317317-}318318-#endif319312320313struct omap_display_platform_data {321314 struct omap_dss_board_info *board_data;
+9-2
kernel/cgroup_freezer.c
···153153 kfree(cgroup_freezer(cgroup));154154}155155156156+/* task is frozen or will freeze immediately when next it gets woken */157157+static bool is_task_frozen_enough(struct task_struct *task)158158+{159159+ return frozen(task) ||160160+ (task_is_stopped_or_traced(task) && freezing(task));161161+}162162+156163/*157164 * The call to cgroup_lock() in the freezer.state write method prevents158165 * a write to that file racing against an attach, and hence the···238231 cgroup_iter_start(cgroup, &it);239232 while ((task = cgroup_iter_next(cgroup, &it))) {240233 ntotal++;241241- if (frozen(task))234234+ if (is_task_frozen_enough(task))242235 nfrozen++;243236 }244237···291284 while ((task = cgroup_iter_next(cgroup, &it))) {292285 if (!freeze_task(task, true))293286 continue;294294- if (frozen(task))287287+ if (is_task_frozen_enough(task))295288 continue;296289 if (!freezing(task) && !freezer_should_skip(task))297290 num_cant_freeze_now++;
···492492}493493494494/**495495+ * clocksource_max_adjustment - Returns max adjustment amount496496+ * @cs: Pointer to clocksource497497+ *498498+ */499499+static u32 clocksource_max_adjustment(struct clocksource *cs)500500+{501501+ u64 ret;502502+ /*503503+ * We won't try to correct for more than 11% adjustments (110,000 ppm).504504+ */505505+ ret = (u64)cs->mult * 11;506506+ do_div(ret,100);507507+ return (u32)ret;508508+}509509+510510+/**495511 * clocksource_max_deferment - Returns max time the clocksource can be deferred496512 * @cs: Pointer to clocksource497513 *···519503 /*520504 * Calculate the maximum number of cycles that we can pass to the521505 * cyc2ns function without overflowing a 64-bit signed result. The522522- * maximum number of cycles is equal to ULLONG_MAX/cs->mult which523523- * is equivalent to the below.524524- * max_cycles < (2^63)/cs->mult525525- * max_cycles < 2^(log2((2^63)/cs->mult))526526- * max_cycles < 2^(log2(2^63) - log2(cs->mult))527527- * max_cycles < 2^(63 - log2(cs->mult))528528- * max_cycles < 1 << (63 - log2(cs->mult))506506+ * maximum number of cycles is equal to ULLONG_MAX/(cs->mult+cs->maxadj)507507+ * which is equivalent to the below.508508+ * max_cycles < (2^63)/(cs->mult + cs->maxadj)509509+ * max_cycles < 2^(log2((2^63)/(cs->mult + cs->maxadj)))510510+ * max_cycles < 2^(log2(2^63) - log2(cs->mult + cs->maxadj))511511+ * max_cycles < 2^(63 - log2(cs->mult + cs->maxadj))512512+ * max_cycles < 1 << (63 - log2(cs->mult + cs->maxadj))529513 * Please note that we add 1 to the result of the log2 to account for530514 * any rounding errors, ensure the above inequality is satisfied and531515 * no overflow will occur.532516 */533533- max_cycles = 1ULL << (63 - (ilog2(cs->mult) + 1));517517+ max_cycles = 1ULL << (63 - (ilog2(cs->mult + cs->maxadj) + 1));534518535519 /*536520 * The actual maximum number of cycles we can defer the clocksource is537521 * determined by the minimum of max_cycles and cs->mask.522522+ * Note: Here we
subtract the maxadj to make sure we don't sleep for523523+ * too long if there's a large negative adjustment.538524 */539525 max_cycles = min_t(u64, max_cycles, (u64) cs->mask);540540- max_nsecs = clocksource_cyc2ns(max_cycles, cs->mult, cs->shift);526526+ max_nsecs = clocksource_cyc2ns(max_cycles, cs->mult - cs->maxadj,527527+ cs->shift);541528542529 /*543530 * To ensure that the clocksource does not wrap whilst we are idle,···659640void __clocksource_updatefreq_scale(struct clocksource *cs, u32 scale, u32 freq)660641{661642 u64 sec;662662-663643 /*664644 * Calc the maximum number of seconds which we can run before665645 * wrapping around. For clocksources which have a mask > 32bit···679661680662 clocks_calc_mult_shift(&cs->mult, &cs->shift, freq,681663 NSEC_PER_SEC / scale, sec * scale);664664+665665+ /*666666+ * for clocksources that have large mults, to avoid overflow.667667+ * Since mult may be adjusted by ntp, add a safety extra margin668668+ *669669+ */670670+ cs->maxadj = clocksource_max_adjustment(cs);671671+ while ((cs->mult + cs->maxadj < cs->mult)672672+ || (cs->mult - cs->maxadj > cs->mult)) {673673+ cs->mult >>= 1;674674+ cs->shift--;675675+ cs->maxadj = clocksource_max_adjustment(cs);676676+ }677677+682678 cs->max_idle_ns = clocksource_max_deferment(cs);683679}684680EXPORT_SYMBOL_GPL(__clocksource_updatefreq_scale);···733701 */734702int clocksource_register(struct clocksource *cs)735703{704704+ /* calculate max adjustment for given mult/shift */705705+ cs->maxadj = clocksource_max_adjustment(cs);706706+ WARN_ONCE(cs->mult + cs->maxadj < cs->mult,707707+ "Clocksource %s might overflow on 11%% adjustment\n",708708+ cs->name);709709+736710 /* calculate max idle time permitted for this clocksource */737711 cs->max_idle_ns = clocksource_max_deferment(cs);738712
+91-1
kernel/time/timekeeping.c
···249249 secs = xtime.tv_sec + wall_to_monotonic.tv_sec;250250 nsecs = xtime.tv_nsec + wall_to_monotonic.tv_nsec;251251 nsecs += timekeeping_get_ns();252252+ /* If arch requires, add in gettimeoffset() */253253+ nsecs += arch_gettimeoffset();252254253255 } while (read_seqretry(&xtime_lock, seq));254256 /*···282280 *ts = xtime;283281 tomono = wall_to_monotonic;284282 nsecs = timekeeping_get_ns();283283+ /* If arch requires, add in gettimeoffset() */284284+ nsecs += arch_gettimeoffset();285285286286 } while (read_seqretry(&xtime_lock, seq));287287···806802 s64 error, interval = timekeeper.cycle_interval;807803 int adj;808804805805+ /*806806+ * The point of this is to check if the error is greater than half807807+ * an interval.808808+ *809809+ * First we shift it down from NTP_SHIFT to clocksource->shifted nsecs.810810+ *811811+ * Note we subtract one in the shift, so that error is really error*2.812812+ * This "saves" dividing(shifting) interval twice, but keeps the813813+ * (error > interval) comparison as still measuring if error is814814+ * larger than half an interval.815815+ *816816+ * Note: It does not "save" on aggravation when reading the code.817817+ */809818 error = timekeeper.ntp_error >> (timekeeper.ntp_error_shift - 1);810819 if (error > interval) {820820+ /*821821+ * We now divide error by 4(via shift), which checks if822822+ * the error is greater than twice the interval.823823+ * If it is greater, we need a bigadjust, if it's smaller,824824+ * we can adjust by 1.825825+ */811826 error >>= 2;827827+ /*828828+ * XXX - In update_wall_time, we round up to the next829829+ * nanosecond, and store the amount rounded up into830830+ * the error. This causes the likely below to be unlikely.831831+ *832832+ * The proper fix is to avoid rounding up by using833833+ * the high precision timekeeper.xtime_nsec instead of834834+ * xtime.tv_nsec everywhere.
Fixing this will take some835835+ * time.836836+ */812837 if (likely(error <= interval))813838 adj = 1;814839 else815840 adj = timekeeping_bigadjust(error, &interval, &offset);816841 } else if (error < -interval) {842842+ /* See comment above, this is just switched for the negative */817843 error >>= 2;818844 if (likely(error >= -interval)) {819845 adj = -1;···851817 offset = -offset;852818 } else853819 adj = timekeeping_bigadjust(error, &interval, &offset);854854- } else820820+ } else /* No adjustment needed */855821 return;856822823823+ WARN_ONCE(timekeeper.clock->maxadj &&824824+ (timekeeper.mult + adj > timekeeper.clock->mult +825825+ timekeeper.clock->maxadj),826826+ "Adjusting %s more than 11%% (%ld vs %ld)\n",827827+ timekeeper.clock->name, (long)timekeeper.mult + adj,828828+ (long)timekeeper.clock->mult +829829+ timekeeper.clock->maxadj);830830+ /*831831+ * So the following can be confusing.832832+ *833833+ * To keep things simple, let's assume adj == 1 for now.834834+ *835835+ * When adj != 1, remember that the interval and offset values836836+ * have been appropriately scaled so the math is the same.837837+ *838838+ * The basic idea here is that we're increasing the multiplier839839+ * by one, this causes the xtime_interval to be incremented by840840+ * one cycle_interval. This is because:841841+ * xtime_interval = cycle_interval * mult842842+ * So if mult is being incremented by one:843843+ * xtime_interval = cycle_interval * (mult + 1)844844+ * It's the same as:845845+ * xtime_interval = (cycle_interval * mult) + cycle_interval846846+ * Which can be shortened to:847847+ * xtime_interval += cycle_interval848848+ *849849+ * So offset stores the non-accumulated cycles. Thus the current850850+ * time (in shifted nanoseconds) is:851851+ * now = (offset * adj) + xtime_nsec852852+ * Now, even though we're adjusting the clock frequency, we have853853+ * to keep time consistent.
In other words, we can't jump back854854+ * in time, and we also want to avoid jumping forward in time.855855+ *856856+ * So given the same offset value, we need the time to be the same857857+ * both before and after the freq adjustment.858858+ * now = (offset * adj_1) + xtime_nsec_1859859+ * now = (offset * adj_2) + xtime_nsec_2860860+ * So:861861+ * (offset * adj_1) + xtime_nsec_1 =862862+ * (offset * adj_2) + xtime_nsec_2863863+ * And we know:864864+ * adj_2 = adj_1 + 1865865+ * So:866866+ * (offset * adj_1) + xtime_nsec_1 =867867+ * (offset * (adj_1+1)) + xtime_nsec_2868868+ * (offset * adj_1) + xtime_nsec_1 =869869+ * (offset * adj_1) + offset + xtime_nsec_2870870+ * Canceling the sides:871871+ * xtime_nsec_1 = offset + xtime_nsec_2872872+ * Which gives us:873873+ * xtime_nsec_2 = xtime_nsec_1 - offset874874+ * Which simplifies to:875875+ * xtime_nsec -= offset876876+ *877877+ * XXX - TODO: Doc ntp_error calculation.878878+ */857879 timekeeper.mult += adj;858880 timekeeper.xtime_interval += interval;859881 timekeeper.xtime_nsec -= offset;
+7-16
mm/page-writeback.c
···128128 *129129 */130130static struct prop_descriptor vm_completions;131131-static struct prop_descriptor vm_dirties;132131133132/*134133 * couple the period to the dirty_ratio:···153154{154155 int shift = calc_period_shift();155156 prop_change_shift(&vm_completions, shift);156156- prop_change_shift(&vm_dirties, shift);157157158158 writeback_set_ratelimit();159159}···232234 local_irq_restore(flags);233235}234236EXPORT_SYMBOL_GPL(bdi_writeout_inc);235235-236236-void task_dirty_inc(struct task_struct *tsk)237237-{238238- prop_inc_single(&vm_dirties, &tsk->dirties);239239-}240237241238/*242239 * Obtain an accurate fraction of the BDI's portion.···11261133 pages_dirtied,11271134 pause,11281135 start_time);11291129- __set_current_state(TASK_UNINTERRUPTIBLE);11361136+ __set_current_state(TASK_KILLABLE);11301137 io_schedule_timeout(pause);1131113811321132- dirty_thresh = hard_dirty_limit(dirty_thresh);11331139 /*11341134- * max-pause area. If dirty exceeded but still within this11351135- * area, no need to sleep for more than 200ms: (a) 8 pages per11361136- * 200ms is typically more than enough to curb heavy dirtiers;11371137- * (b) the pause time limit makes the dirtiers more responsive.11401140+ * This is typically equal to (nr_dirty < dirty_thresh) and can11411141+ * also keep "1000+ dd on a slow USB stick" under control.11381142 */11391139- if (nr_dirty < dirty_thresh)11431143+ if (task_ratelimit)11441144+ break;11451145+11461146+ if (fatal_signal_pending(current))11401147 break;11411148 }11421149···1388139513891396 shift = calc_period_shift();13901397 prop_descriptor_init(&vm_completions, shift);13911391- prop_descriptor_init(&vm_dirties, shift);13921398}1393139913941400/**···17161724 __inc_zone_page_state(page, NR_DIRTIED);17171725 __inc_bdi_stat(mapping->backing_dev_info, BDI_RECLAIMABLE);17181726 __inc_bdi_stat(mapping->backing_dev_info, BDI_DIRTIED);17191719- task_dirty_inc(current);17201727 task_io_account_write(PAGE_CACHE_SIZE);17211728 }17221729}
+8-9
mm/percpu-vm.c
···5050
5151 if (!pages || !bitmap) {
5252 if (may_alloc && !pages)
5353- pages = pcpu_mem_alloc(pages_size);
5353+ pages = pcpu_mem_zalloc(pages_size);
5454 if (may_alloc && !bitmap)
5555- bitmap = pcpu_mem_alloc(bitmap_size);
5555+ bitmap = pcpu_mem_zalloc(bitmap_size);
5656 if (!pages || !bitmap)
5757 return NULL;
5858 }
5959
6060- memset(pages, 0, pages_size);
6160 bitmap_copy(bitmap, chunk->populated, pcpu_unit_pages);
6261
6362 *bitmapp = bitmap;
···142143 int page_start, int page_end)
143144{
144145 flush_cache_vunmap(
145145- pcpu_chunk_addr(chunk, pcpu_first_unit_cpu, page_start),
146146- pcpu_chunk_addr(chunk, pcpu_last_unit_cpu, page_end));
146146+ pcpu_chunk_addr(chunk, pcpu_low_unit_cpu, page_start),
147147+ pcpu_chunk_addr(chunk, pcpu_high_unit_cpu, page_end));
147148}
148149
149150static void __pcpu_unmap_pages(unsigned long addr, int nr_pages)
···205206 int page_start, int page_end)
206207{
207208 flush_tlb_kernel_range(
208208- pcpu_chunk_addr(chunk, pcpu_first_unit_cpu, page_start),
209209- pcpu_chunk_addr(chunk, pcpu_last_unit_cpu, page_end));
209209+ pcpu_chunk_addr(chunk, pcpu_low_unit_cpu, page_start),
210210+ pcpu_chunk_addr(chunk, pcpu_high_unit_cpu, page_end));
210211}
211212
212213static int __pcpu_map_pages(unsigned long addr, struct page **pages,
···283284 int page_start, int page_end)
284285{
285286 flush_cache_vmap(
286286- pcpu_chunk_addr(chunk, pcpu_first_unit_cpu, page_start),
287287- pcpu_chunk_addr(chunk, pcpu_last_unit_cpu, page_end));
287287+ pcpu_chunk_addr(chunk, pcpu_low_unit_cpu, page_start),
288288+ pcpu_chunk_addr(chunk, pcpu_high_unit_cpu, page_end));
288289}
289290
290291/**
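The first hunk above drops an explicit memset() because the renamed pcpu_mem_zalloc() guarantees zeroed memory. A userspace sketch of the same convention, using calloc() as the zeroing backend (the function name mirrors the kernel one but this is not the kernel implementation):

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Zeroing allocator in the spirit of pcpu_mem_zalloc(): callers may
 * rely on the returned memory being zero-filled, so the follow-up
 * memset() the patch removes is never needed.
 */
static void *mem_zalloc(size_t size)
{
    if (size == 0)
        return NULL;
    /* calloc() zeroes, like kzalloc()/vzalloc() do in the kernel. */
    return calloc(1, size);
}
```

Encoding the zeroing guarantee in the allocator's name is the whole point of the rename: it makes redundant memset() calls at the call sites visibly removable.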
+40-22
mm/percpu.c
···116116static int pcpu_nr_slots __read_mostly;
117117static size_t pcpu_chunk_struct_size __read_mostly;
118118
119119-/* cpus with the lowest and highest unit numbers */
120120-static unsigned int pcpu_first_unit_cpu __read_mostly;
121121-static unsigned int pcpu_last_unit_cpu __read_mostly;
119119+/* cpus with the lowest and highest unit addresses */
120120+static unsigned int pcpu_low_unit_cpu __read_mostly;
121121+static unsigned int pcpu_high_unit_cpu __read_mostly;
122122
123123/* the address of the first chunk which starts with the kernel static area */
124124void *pcpu_base_addr __read_mostly;
···273273 (rs) = (re) + 1, pcpu_next_pop((chunk), &(rs), &(re), (end)))
274274
275275/**
276276- * pcpu_mem_alloc - allocate memory
276276+ * pcpu_mem_zalloc - allocate memory
277277 * @size: bytes to allocate
278278 *
279279 * Allocate @size bytes. If @size is smaller than PAGE_SIZE,
280280- * kzalloc() is used; otherwise, vmalloc() is used. The returned
280280+ * kzalloc() is used; otherwise, vzalloc() is used. The returned
281281 * memory is always zeroed.
282282 *
283283 * CONTEXT:
···286286 * RETURNS:
287287 * Pointer to the allocated area on success, NULL on failure.
288288 */
289289-static void *pcpu_mem_alloc(size_t size)
289289+static void *pcpu_mem_zalloc(size_t size)
290290{
291291 if (WARN_ON_ONCE(!slab_is_available()))
292292 return NULL;
···302302 * @ptr: memory to free
303303 * @size: size of the area
304304 *
305305- * Free @ptr. @ptr should have been allocated using pcpu_mem_alloc().
305305+ * Free @ptr. @ptr should have been allocated using pcpu_mem_zalloc().
306306 */
307307static void pcpu_mem_free(void *ptr, size_t size)
308308{
···384384 size_t old_size = 0, new_size = new_alloc * sizeof(new[0]);
385385 unsigned long flags;
386386
387387- new = pcpu_mem_alloc(new_size);
387387+ new = pcpu_mem_zalloc(new_size);
388388 if (!new)
389389 return -ENOMEM;
390390
···604604{
605605 struct pcpu_chunk *chunk;
606606
607607- chunk = pcpu_mem_alloc(pcpu_chunk_struct_size);
607607+ chunk = pcpu_mem_zalloc(pcpu_chunk_struct_size);
608608 if (!chunk)
609609 return NULL;
610610
611611- chunk->map = pcpu_mem_alloc(PCPU_DFL_MAP_ALLOC * sizeof(chunk->map[0]));
611611+ chunk->map = pcpu_mem_zalloc(PCPU_DFL_MAP_ALLOC *
612612+ sizeof(chunk->map[0]));
612613 if (!chunk->map) {
613614 kfree(chunk);
614615 return NULL;
···978977 * address. The caller is responsible for ensuring @addr stays valid
979978 * until this function finishes.
980979 *
980980+ * percpu allocator has special setup for the first chunk, which currently
981981+ * supports either embedding in linear address space or vmalloc mapping,
982982+ * and, from the second one, the backing allocator (currently either vm or
983983+ * km) provides translation.
984984+ *
985985+ * The addr can be translated simply without checking if it falls into the
986986+ * first chunk. But the current code reflects better how percpu allocator
987987+ * actually works, and the verification can discover both bugs in percpu
988988+ * allocator itself and per_cpu_ptr_to_phys() callers. So we keep current
989989+ * code.
990990+ *
981991 * RETURNS:
982992 * The physical address for @addr.
983993 */
···996984{
997985 void __percpu *base = __addr_to_pcpu_ptr(pcpu_base_addr);
998986 bool in_first_chunk = false;
999999- unsigned long first_start, first_end;
987987+ unsigned long first_low, first_high;
1000988 unsigned int cpu;
1001989
1002990 /*
10031003- * The following test on first_start/end isn't strictly
991991+ * The following test on unit_low/high isn't strictly
1004992 * necessary but will speed up lookups of addresses which
1005993 * aren't in the first chunk.
1006994 */
10071007- first_start = pcpu_chunk_addr(pcpu_first_chunk, pcpu_first_unit_cpu, 0);
10081008- first_end = pcpu_chunk_addr(pcpu_first_chunk, pcpu_last_unit_cpu,
10091009- pcpu_unit_pages);
10101010- if ((unsigned long)addr >= first_start &&
10111011- (unsigned long)addr < first_end) {
995995+ first_low = pcpu_chunk_addr(pcpu_first_chunk, pcpu_low_unit_cpu, 0);
996996+ first_high = pcpu_chunk_addr(pcpu_first_chunk, pcpu_high_unit_cpu,
997997+ pcpu_unit_pages);
998998+ if ((unsigned long)addr >= first_low &&
999999+ (unsigned long)addr < first_high) {
10121000 for_each_possible_cpu(cpu) {
10131001 void *start = per_cpu_ptr(base, cpu);
10141002
···12451233
12461234 for (cpu = 0; cpu < nr_cpu_ids; cpu++)
12471235 unit_map[cpu] = UINT_MAX;
12481248- pcpu_first_unit_cpu = NR_CPUS;
12361236+
12371237+ pcpu_low_unit_cpu = NR_CPUS;
12381238+ pcpu_high_unit_cpu = NR_CPUS;
12491239
12501240 for (group = 0, unit = 0; group < ai->nr_groups; group++, unit += i) {
12511241 const struct pcpu_group_info *gi = &ai->groups[group];
···12671253 unit_map[cpu] = unit + i;
12681254 unit_off[cpu] = gi->base_offset + i * ai->unit_size;
12691255
12701270- if (pcpu_first_unit_cpu == NR_CPUS)
12711271- pcpu_first_unit_cpu = cpu;
12721272- pcpu_last_unit_cpu = cpu;
12561256+ /* determine low/high unit_cpu */
12571257+ if (pcpu_low_unit_cpu == NR_CPUS ||
12581258+ unit_off[cpu] < unit_off[pcpu_low_unit_cpu])
12591259+ pcpu_low_unit_cpu = cpu;
12601260+ if (pcpu_high_unit_cpu == NR_CPUS ||
12611261+ unit_off[cpu] > unit_off[pcpu_high_unit_cpu])
12621262+ pcpu_high_unit_cpu = cpu;
12731263 }
12741264 }
12751265 pcpu_nr_units = unit;
···19071889
19081890 BUILD_BUG_ON(size > PAGE_SIZE);
19091891
19101910- map = pcpu_mem_alloc(size);
18921892+ map = pcpu_mem_zalloc(size);
19111893 BUG_ON(!map);
19121894
19131895 spin_lock_irqsave(&pcpu_lock, flags);
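The setup loop in the mm/percpu.c diff above now tracks the CPUs with the lowest and highest unit *offsets*, not merely the first and last CPUs encountered, because unit addresses need not be ordered by CPU number. A sketch of that min/max-by-key scan (names and the sentinel are illustrative):

```c
#include <assert.h>

#define CPU_NONE (-1)

/*
 * Pick the indices with the smallest and largest offset, mirroring how
 * pcpu_low_unit_cpu/pcpu_high_unit_cpu are chosen: start both at a
 * sentinel and update on comparison against the current best, so the
 * result depends on the offsets rather than on iteration order.
 */
static void find_low_high(const unsigned long *unit_off, int n,
                          int *low, int *high)
{
    *low = CPU_NONE;
    *high = CPU_NONE;
    for (int cpu = 0; cpu < n; cpu++) {
        if (*low == CPU_NONE || unit_off[cpu] < unit_off[*low])
            *low = cpu;
        if (*high == CPU_NONE || unit_off[cpu] > unit_off[*high])
            *high = cpu;
    }
}
```

With unordered offsets, the old first/last-seen scheme could pick a range that excludes part of the first chunk; min/max by offset cannot.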
+26-16
mm/slub.c
···18621862{
18631863 struct kmem_cache_node *n = NULL;
18641864 struct kmem_cache_cpu *c = this_cpu_ptr(s->cpu_slab);
18651865- struct page *page;
18651865+ struct page *page, *discard_page = NULL;
18661866
18671867 while ((page = c->partial)) {
18681868 enum slab_modes { M_PARTIAL, M_FREE };
···19041904 if (l == M_PARTIAL)
19051905 remove_partial(n, page);
19061906 else
19071907- add_partial(n, page, 1);
19071907+ add_partial(n, page,
19081908+ DEACTIVATE_TO_TAIL);
19081909
19091910 l = m;
19101911 }
···19161915 "unfreezing slab"));
19171916
19181917 if (m == M_FREE) {
19191919- stat(s, DEACTIVATE_EMPTY);
19201920- discard_slab(s, page);
19211921- stat(s, FREE_SLAB);
19181918+ page->next = discard_page;
19191919+ discard_page = page;
19221920 }
19231921 }
19241922
19251923 if (n)
19261924 spin_unlock(&n->list_lock);
19251925+
19261926+ while (discard_page) {
19271927+ page = discard_page;
19281928+ discard_page = discard_page->next;
19291929+
19301930+ stat(s, DEACTIVATE_EMPTY);
19311931+ discard_slab(s, page);
19321932+ stat(s, FREE_SLAB);
19331933+ }
19271934}
19281935
19291936/*
···19781969 page->pobjects = pobjects;
19791970 page->next = oldpage;
19801971
19811981- } while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page) != oldpage);
19721972+ } while (irqsafe_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page) != oldpage);
19821973 stat(s, CPU_PARTIAL_FREE);
19831974 return pobjects;
19841975}
···44444435
44454436 for_each_possible_cpu(cpu) {
44464437 struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);
44384438+ int node = ACCESS_ONCE(c->node);
44474439 struct page *page;
44484440
44494449- if (!c || c->node < 0)
44414441+ if (node < 0)
44504442 continue;
44514451-
44524452- if (c->page) {
44534453- if (flags & SO_TOTAL)
44544454- x = c->page->objects;
44434443+ page = ACCESS_ONCE(c->page);
44444444+ if (page) {
44454445+ if (flags & SO_TOTAL)
44464446+ x = page->objects;
44554447 else if (flags & SO_OBJECTS)
44564456- x = c->page->inuse;
44484448+ x = page->inuse;
44574449 else
44584450 x = 1;
44594451
44604452 total += x;
44614461- nodes[c->node] += x;
44534453+ nodes[node] += x;
44624454 }
44634455 page = c->partial;
44644456
44654457 if (page) {
44664458 x = page->pobjects;
44674467- total += x;
44684468- nodes[c->node] += x;
44594459+ total += x;
44604460+ nodes[node] += x;
44694461 }
44704470- per_cpu[c->node]++;
44624462+ per_cpu[node]++;
44714463 }
44724464 }
44734465
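The unfreeze_partials() change above chains empty slabs onto a local discard_page list and frees them only after n->list_lock is dropped, instead of calling discard_slab() with the lock held. A userspace sketch of that defer-free-until-unlock pattern (the lock is simulated with a flag purely to make the ordering observable; names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

struct page {
    struct page *next;
    bool empty;
};

static bool lock_held;        /* simulated spinlock state */
static int freed_under_lock;  /* counts frees done while "locked" */

static void discard(struct page *p)
{
    if (lock_held)
        freed_under_lock++;   /* the situation the patch avoids */
    free(p);
}

/*
 * Walk the list under the "lock", but only unlink: freeing is deferred
 * to a local chain drained after unlock, like discard_page in the patch.
 * (Non-empty pages would go back to a partial list; omitted here.)
 */
static int drain(struct page *list)
{
    struct page *discard_list = NULL, *p, *next;
    int discarded = 0;

    lock_held = true;                 /* spin_lock(&n->list_lock) */
    for (p = list; p; p = next) {
        next = p->next;
        if (p->empty) {               /* m == M_FREE in the patch */
            p->next = discard_list;   /* chain instead of freeing now */
            discard_list = p;
        }
    }
    lock_held = false;                /* spin_unlock(&n->list_lock) */

    while (discard_list) {            /* free with the lock dropped */
        p = discard_list;
        discard_list = discard_list->next;
        discard(p);
        discarded++;
    }
    return discarded;
}
```

Moving the frees outside the critical section shortens lock hold time and keeps potentially expensive teardown work off the locked path.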
···340340 struct ipv6_pinfo *np = inet6_sk(sk);
341341 struct inet_sock *inet = inet_sk(sk);
342342 struct sk_buff *skb;
343343- unsigned int ulen;
343343+ unsigned int ulen, copied;
344344 int peeked;
345345 int err;
346346 int is_udplite = IS_UDPLITE(sk);
···363363 goto out;
364364
365365 ulen = skb->len - sizeof(struct udphdr);
366366- if (len > ulen)
367367- len = ulen;
368368- else if (len < ulen)
366366+ copied = len;
367367+ if (copied > ulen)
368368+ copied = ulen;
369369+ else if (copied < ulen)
369370 msg->msg_flags |= MSG_TRUNC;
370371
371372 is_udp4 = (skb->protocol == htons(ETH_P_IP));
···377376 * coverage checksum (UDP-Lite), do it before the copy.
378377 */
379378
380380- if (len < ulen || UDP_SKB_CB(skb)->partial_cov) {
379379+ if (copied < ulen || UDP_SKB_CB(skb)->partial_cov) {
381380 if (udp_lib_checksum_complete(skb))
382381 goto csum_copy_err;
383382 }
384383
385384 if (skb_csum_unnecessary(skb))
386385 err = skb_copy_datagram_iovec(skb, sizeof(struct udphdr),
387387- msg->msg_iov,len);
386386+ msg->msg_iov, copied);
388387 else {
389388 err = skb_copy_and_csum_datagram_iovec(skb, sizeof(struct udphdr), msg->msg_iov);
390389 if (err == -EINVAL)
···432431 datagram_recv_ctl(sk, msg, skb);
433432 }
434433
435435- err = len;
434434+ err = copied;
436435 if (flags & MSG_TRUNC)
437436 err = ulen;
438437
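The UDP receive hunk above stops overwriting the caller-supplied len and instead clamps a separate copied variable, flagging truncation when the datagram is larger than the buffer. A sketch of that clamp-and-flag step in isolation (userspace, illustrative names; the bool stands in for setting MSG_TRUNC in msg_flags):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Compute how much of a datagram payload to copy, as the patch does:
 * clamp to the available payload (ulen) and report truncation, without
 * modifying the caller's buffer length so it stays valid for later use.
 */
static size_t clamp_copied(size_t len, size_t ulen, bool *truncated)
{
    size_t copied = len;

    *truncated = false;
    if (copied > ulen)
        copied = ulen;
    else if (copied < ulen)
        *truncated = true;   /* msg->msg_flags |= MSG_TRUNC */
    return copied;
}
```

Keeping len intact matters because the original value is still needed after the copy (and would silently shrink on each reuse if clobbered).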
+1-1
net/l2tp/l2tp_core.c
···10721072
10731073 /* Get routing info from the tunnel socket */
10741074 skb_dst_drop(skb);
10751075- skb_dst_set(skb, dst_clone(__sk_dst_get(sk)));
10751075+ skb_dst_set(skb, dst_clone(__sk_dst_check(sk, 0)));
10761076
10771077 inet = inet_sk(sk);
10781078 fl = &inet->cork.fl;
+39-3
net/mac80211/agg-tx.c
···162162 return -ENOENT;
163163 }
164164
165165+ /* if we're already stopping ignore any new requests to stop */
166166+ if (test_bit(HT_AGG_STATE_STOPPING, &tid_tx->state)) {
167167+ spin_unlock_bh(&sta->lock);
168168+ return -EALREADY;
169169+ }
170170+
165171 if (test_bit(HT_AGG_STATE_WANT_START, &tid_tx->state)) {
166172 /* not even started yet! */
167173 ieee80211_assign_tid_tx(sta, tid, NULL);
···176170 return 0;
177171 }
178172
173173+ set_bit(HT_AGG_STATE_STOPPING, &tid_tx->state);
174174+
179175 spin_unlock_bh(&sta->lock);
180176
181177#ifdef CONFIG_MAC80211_HT_DEBUG
182178 printk(KERN_DEBUG "Tx BA session stop requested for %pM tid %u\n",
183179 sta->sta.addr, tid);
184180#endif /* CONFIG_MAC80211_HT_DEBUG */
185185-
186186- set_bit(HT_AGG_STATE_STOPPING, &tid_tx->state);
187181
188182 del_timer_sync(&tid_tx->addba_resp_timer);
189183
···193187 * with locking to ensure proper access.
194188 */
195189 clear_bit(HT_AGG_STATE_OPERATIONAL, &tid_tx->state);
190190+
191191+ /*
192192+ * There might be a few packets being processed right now (on
193193+ * another CPU) that have already gotten past the aggregation
194194+ * check when it was still OPERATIONAL and consequently have
195195+ * IEEE80211_TX_CTL_AMPDU set. In that case, this code might
196196+ * call into the driver at the same time or even before the
197197+ * TX path calls into it, which could confuse the driver.
198198+ *
199199+ * Wait for all currently running TX paths to finish before
200200+ * telling the driver. New packets will not go through since
201201+ * the aggregation session is no longer OPERATIONAL.
202202+ */
203203+ synchronize_net();
196204
197205 tid_tx->stop_initiator = initiator;
198206 tid_tx->tx_stop = tx;
···773753 goto out;
774754 }
775755
776776- del_timer(&tid_tx->addba_resp_timer);
756756+ del_timer_sync(&tid_tx->addba_resp_timer);
777757
778758#ifdef CONFIG_MAC80211_HT_DEBUG
779759 printk(KERN_DEBUG "switched off addBA timer for tid %d\n", tid);
780760#endif
761761+
762762+ /*
763763+ * addba_resp_timer may have fired before we got here, and
764764+ * caused WANT_STOP to be set. If the stop then was already
765765+ * processed further, STOPPING might be set.
766766+ */
767767+ if (test_bit(HT_AGG_STATE_WANT_STOP, &tid_tx->state) ||
768768+ test_bit(HT_AGG_STATE_STOPPING, &tid_tx->state)) {
769769+#ifdef CONFIG_MAC80211_HT_DEBUG
770770+ printk(KERN_DEBUG
771771+ "got addBA resp for tid %d but we already gave up\n",
772772+ tid);
773773+#endif
774774+ goto out;
775775+ }
776776+
781777 /*
782778 * IEEE 802.11-2007 7.3.1.14:
783779 * In an ADDBA Response frame, when the Status Code field
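The agg-tx change above makes stopping an aggregation session idempotent: the STOPPING bit is set under the lock, and a second stop request bails out with -EALREADY instead of tearing the session down twice. A much-simplified sketch of that guard (state bits in a plain bitmask, locking omitted; this is an analogy, not the mac80211 state machine):

```c
#include <assert.h>
#include <errno.h>

enum {
    STATE_OPERATIONAL = 1 << 0,
    STATE_STOPPING    = 1 << 1,
};

/*
 * Idempotent stop in the shape of the patch: the first caller marks the
 * session STOPPING and proceeds with teardown; repeated (or concurrent,
 * with the lock the kernel holds here) callers see the bit and return
 * -EALREADY rather than running the teardown path a second time.
 */
static int session_stop(unsigned int *state)
{
    if (*state & STATE_STOPPING)
        return -EALREADY;
    *state |= STATE_STOPPING;       /* set before any teardown work */
    *state &= ~STATE_OPERATIONAL;   /* new packets stop aggregating */
    return 0;
}
```

Setting STOPPING before the teardown work begins is what closes the window in which a second request could re-enter the teardown.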
-1
net/netfilter/Kconfig
···201201
202202config NF_CONNTRACK_NETBIOS_NS
203203 tristate "NetBIOS name service protocol support"
204204- depends on NETFILTER_ADVANCED
205204 select NF_CONNTRACK_BROADCAST
206205 help
207206 NetBIOS name service requests are sent as broadcast messages from an
···8282 struct sctp_auth_bytes *key;
8383
8484 /* Verify that we are not going to overflow INT_MAX */
8585- if ((INT_MAX - key_len) < sizeof(struct sctp_auth_bytes))
8585+ if (key_len > (INT_MAX - sizeof(struct sctp_auth_bytes)))
8686 return NULL;
8787
8888 /* Allocate the shared key */
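The sctp_auth_bytes hunk above rewrites the overflow guard so the subtraction happens between two values that cannot wrap: INT_MAX minus a small constant header size. A sketch demonstrating the safe form (the header struct here is an illustrative stand-in, not the real sctp_auth_bytes layout):

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>
#include <stddef.h>

struct auth_bytes_hdr { size_t len; };   /* illustrative header struct */

/*
 * Overflow-safe length check in the form the patch uses: subtract the
 * small, constant header size from INT_MAX and compare. That difference
 * cannot wrap, so the check is valid for every possible key_len,
 * including values larger than INT_MAX, where the old form
 * (INT_MAX - key_len) < sizeof(hdr) could be fooled by wraparound.
 */
static bool key_len_ok(size_t key_len)
{
    return key_len <= INT_MAX - sizeof(struct auth_bytes_hdr);
}
```

The general rule the fix illustrates: when validating `a + b <= LIMIT`, rearrange so the arithmetic is done on the side that is known small (`a <= LIMIT - b` with constant `b`), never on the attacker-controlled side.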
+4-3
net/sunrpc/xprtsock.c
···496496 struct rpc_rqst *req = task->tk_rqstp;
497497 struct rpc_xprt *xprt = req->rq_xprt;
498498 struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
499499- int ret = 0;
499499+ int ret = -EAGAIN;
500500
501501 dprintk("RPC: %5u xmit incomplete (%u left of %u)\n",
502502 task->tk_pid, req->rq_slen - req->rq_bytes_sent,
···508508 /* Don't race with disconnect */
509509 if (xprt_connected(xprt)) {
510510 if (test_bit(SOCK_ASYNC_NOSPACE, &transport->sock->flags)) {
511511- ret = -EAGAIN;
512511 /*
513512 * Notify TCP that we're limited by the application
514513 * window size
···25292530 int err;
25302531 err = xs_init_anyaddr(args->dstaddr->sa_family,
25312532 (struct sockaddr *)&new->srcaddr);
25322532- if (err != 0)
25332533+ if (err != 0) {
25342534+ xprt_free(xprt);
25332535 return ERR_PTR(err);
25362536+ }
25342537 }
25352538
25362539 return xprt;
+4
net/unix/af_unix.c
···19571957 if ((UNIXCB(skb).pid != siocb->scm->pid) ||
19581958 (UNIXCB(skb).cred != siocb->scm->cred)) {
19591959 skb_queue_head(&sk->sk_receive_queue, skb);
19601960+ sk->sk_data_ready(sk, skb->len);
19601961 break;
19611962 }
19621963 } else {
···19751974 chunk = min_t(unsigned int, skb->len, size);
19761975 if (memcpy_toiovec(msg->msg_iov, skb->data, chunk)) {
19771976 skb_queue_head(&sk->sk_receive_queue, skb);
19771977+ sk->sk_data_ready(sk, skb->len);
19781978 if (copied == 0)
19791979 copied = -EFAULT;
19801980 break;
···19931991 /* put the skb back if we didn't use it up.. */
19941992 if (skb->len) {
19951993 skb_queue_head(&sk->sk_receive_queue, skb);
19941994+ sk->sk_data_ready(sk, skb->len);
19961995 break;
19971996 }
19981997
···20092006
20102007 /* put message back and return */
20112008 skb_queue_head(&sk->sk_receive_queue, skb);
20092009+ sk->sk_data_ready(sk, skb->len);
20122010 break;
20132011 }
20142012 } while (size);
···40464046
40474047 /* Search for codec ID */
40484048 for (q = tbl; q->subvendor; q++) {
40494049- unsigned long vendorid = (q->subdevice) | (q->subvendor << 16);
40504050-
40514051- if (vendorid == codec->subsystem_id)
40494049+ unsigned int mask = 0xffff0000 | q->subdevice_mask;
40504050+ unsigned int id = (q->subdevice | (q->subvendor << 16)) & mask;
40514051+ if ((codec->subsystem_id & mask) == id)
40524052 break;
40534053 }
40544054
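The codec quirk-lookup hunk above switches from exact subsystem-ID equality to masked matching, so a table entry can match a whole vendor (or any subset of subdevice bits) via subdevice_mask. A sketch of that masked table match (struct layout here is illustrative, not the real snd_pci_quirk):

```c
#include <assert.h>
#include <stdbool.h>

struct quirk {
    unsigned int subvendor;
    unsigned int subdevice;
    unsigned int subdevice_mask; /* 0xffff = exact; 0 = any subdevice */
};

/*
 * Match a 32-bit subsystem ID (subvendor in the high 16 bits) against
 * a quirk entry the way the patched lookup does: the vendor half must
 * always match, while subdevice_mask selects which subdevice bits are
 * significant for the comparison.
 */
static bool quirk_matches(const struct quirk *q, unsigned int subsystem_id)
{
    unsigned int mask = 0xffff0000u | q->subdevice_mask;
    unsigned int id = (q->subdevice | (q->subvendor << 16)) & mask;

    return (subsystem_id & mask) == id;
}
```

An all-ones mask degenerates to the old exact comparison, so existing exact-match entries keep working unchanged.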
+19-9
sound/pci/hda/hda_eld.c
···347347
348348 for (i = 0; i < size; i++) {
349349 unsigned int val = hdmi_get_eld_data(codec, nid, i);
350350+ /*
351351+ * Graphics driver might be writing to ELD buffer right now.
352352+ * Just abort. The caller will repoll after a while.
353353+ */
350354 if (!(val & AC_ELDD_ELD_VALID)) {
351351- if (!i) {
352352- snd_printd(KERN_INFO
353353- "HDMI: invalid ELD data\n");
354354- ret = -EINVAL;
355355- goto error;
356356- }
357355 snd_printd(KERN_INFO
358356 "HDMI: invalid ELD data byte %d\n", i);
359359- val = 0;
360360- } else
361361- val &= AC_ELDD_ELD_DATA;
357357+ ret = -EINVAL;
358358+ goto error;
359359+ }
360360+ val &= AC_ELDD_ELD_DATA;
361361+ /*
362362+ * The first byte cannot be zero. This can happen on some DVI
363363+ * connections. Some Intel chips may also need some 250ms delay
364364+ * to return non-zero ELD data, even when the graphics driver
365365+ * correctly writes ELD content before setting ELD_valid bit.
366366+ */
367367+ if (!val && !i) {
368368+ snd_printdd(KERN_INFO "HDMI: 0 ELD data\n");
369369+ ret = -EINVAL;
370370+ goto error;
371371+ }
362372 buf[i] = val;
363373 }
+23-9
sound/pci/hda/patch_cirrus.c
···5858 unsigned int gpio_mask;
5959 unsigned int gpio_dir;
6060 unsigned int gpio_data;
6161+ unsigned int gpio_eapd_hp; /* EAPD GPIO bit for headphones */
6262+ unsigned int gpio_eapd_speaker; /* EAPD GPIO bit for speakers */
6163
6264 struct hda_pcm pcm_rec[2]; /* PCM information */
6365
···7876 CS420X_MBP53,
7977 CS420X_MBP55,
8078 CS420X_IMAC27,
7979+ CS420X_APPLE,
8180 CS420X_AUTO,
8281 CS420X_MODELS
8382};
···931928 spdif_present ? 0 : PIN_OUT);
932929 }
933930 }
934934- if (spec->board_config == CS420X_MBP53 ||
935935- spec->board_config == CS420X_MBP55 ||
936936- spec->board_config == CS420X_IMAC27) {
937937- unsigned int gpio = hp_present ? 0x02 : 0x08;
931931+ if (spec->gpio_eapd_hp) {
932932+ unsigned int gpio = hp_present ?
933933+ spec->gpio_eapd_hp : spec->gpio_eapd_speaker;
938934 snd_hda_codec_write(codec, 0x01, 0,
939935 AC_VERB_SET_GPIO_DATA, gpio);
940936 }
···12781276 [CS420X_MBP53] = "mbp53",
12791277 [CS420X_MBP55] = "mbp55",
12801278 [CS420X_IMAC27] = "imac27",
12791279+ [CS420X_APPLE] = "apple",
12811280 [CS420X_AUTO] = "auto",
12821281};
···12881285 SND_PCI_QUIRK(0x10de, 0x0d94, "MacBookAir 3,1(2)", CS420X_MBP55),
12891286 SND_PCI_QUIRK(0x10de, 0xcb79, "MacBookPro 5,5", CS420X_MBP55),
12901287 SND_PCI_QUIRK(0x10de, 0xcb89, "MacBookPro 7,1", CS420X_MBP55),
12911291- SND_PCI_QUIRK(0x8086, 0x7270, "IMac 27 Inch", CS420X_IMAC27),
12881288+ /* this conflicts with too many other models */
12891289+ /*SND_PCI_QUIRK(0x8086, 0x7270, "IMac 27 Inch", CS420X_IMAC27),*/
12901290+ {} /* terminator */
12911291+};
12921292+
12931293+static const struct snd_pci_quirk cs420x_codec_cfg_tbl[] = {
12941294+ SND_PCI_QUIRK_VENDOR(0x106b, "Apple", CS420X_APPLE),
12921295 {} /* terminator */
12931296};
···13761367 spec->board_config =
13771368 snd_hda_check_board_config(codec, CS420X_MODELS,
13781369 cs420x_models, cs420x_cfg_tbl);
13701370+ if (spec->board_config < 0)
13711371+ spec->board_config =
13721372+ snd_hda_check_board_codec_sid_config(codec,
13731373+ CS420X_MODELS, NULL, cs420x_codec_cfg_tbl);
13791374 if (spec->board_config >= 0)
13801375 fix_pincfg(codec, spec->board_config, cs_pincfgs);
···13871374 case CS420X_IMAC27:
13881375 case CS420X_MBP53:
13891376 case CS420X_MBP55:
13901390- /* GPIO1 = headphones */
13911391- /* GPIO3 = speakers */
13921392- spec->gpio_mask = 0x0a;
13931393- spec->gpio_dir = 0x0a;
13771377+ case CS420X_APPLE:
13781378+ spec->gpio_eapd_hp = 2; /* GPIO1 = headphones */
13791379+ spec->gpio_eapd_speaker = 8; /* GPIO3 = speakers */
13801380+ spec->gpio_mask = spec->gpio_dir =
13811381+ spec->gpio_eapd_hp | spec->gpio_eapd_speaker;
13941382 break;
13951383 }