···
 Userspace tools for creating and manipulating Btrfs file systems are
 available from the git repository at the following location:

-http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-progs-unstable.git
-git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs-unstable.git
+http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-progs.git
+git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git

 These include the following tools:
+66-39
Documentation/power/devices.txt
···
 Subsystem-Level Methods
 -----------------------
 The core methods to suspend and resume devices reside in struct dev_pm_ops
-pointed to by the pm member of struct bus_type, struct device_type and
-struct class. They are mostly of interest to the people writing infrastructure
-for buses, like PCI or USB, or device type and device class drivers.
+pointed to by the ops member of struct dev_pm_domain, or by the pm member of
+struct bus_type, struct device_type and struct class. They are mostly of
+interest to the people writing infrastructure for platforms and buses, like PCI
+or USB, or device type and device class drivers.

 Bus drivers implement these methods as appropriate for the hardware and the
 drivers using it; PCI works differently from USB, and so on. Not many people
···
 /sys/devices/.../power/wakeup files
 -----------------------------------
-All devices in the driver model have two flags to control handling of wakeup
-events (hardware signals that can force the device and/or system out of a low
-power state). These flags are initialized by bus or device driver code using
+All device objects in the driver model contain fields that control the handling
+of system wakeup events (hardware signals that can force the system out of a
+sleep state). These fields are initialized by bus or device driver code using
 device_set_wakeup_capable() and device_set_wakeup_enable(), defined in
 include/linux/pm_wakeup.h.

-The "can_wakeup" flag just records whether the device (and its driver) can
+The "power.can_wakeup" flag just records whether the device (and its driver) can
 physically support wakeup events. The device_set_wakeup_capable() routine
-affects this flag. The "should_wakeup" flag controls whether the device should
-try to use its wakeup mechanism. device_set_wakeup_enable() affects this flag;
-for the most part drivers should not change its value. The initial value of
-should_wakeup is supposed to be false for the majority of devices; the major
-exceptions are power buttons, keyboards, and Ethernet adapters whose WoL
-(wake-on-LAN) feature has been set up with ethtool. It should also default
-to true for devices that don't generate wakeup requests on their own but merely
-forward wakeup requests from one bus to another (like PCI bridges).
+affects this flag. The "power.wakeup" field is a pointer to an object of type
+struct wakeup_source used for controlling whether or not the device should use
+its system wakeup mechanism and for notifying the PM core of system wakeup
+events signaled by the device. This object is only present for wakeup-capable
+devices (i.e. devices whose "can_wakeup" flags are set) and is created (or
+removed) by device_set_wakeup_capable().

 Whether or not a device is capable of issuing wakeup events is a hardware
 matter, and the kernel is responsible for keeping track of it. By contrast,
 whether or not a wakeup-capable device should issue wakeup events is a policy
 decision, and it is managed by user space through a sysfs attribute: the
-power/wakeup file. User space can write the strings "enabled" or "disabled" to
-set or clear the "should_wakeup" flag, respectively. This file is only present
-for wakeup-capable devices (i.e. devices whose "can_wakeup" flags are set)
-and is created (or removed) by device_set_wakeup_capable(). Reads from the
-file will return the corresponding string.
+"power/wakeup" file. User space can write the strings "enabled" or "disabled"
+to it to indicate whether or not, respectively, the device is supposed to signal
+system wakeup. This file is only present if the "power.wakeup" object exists
+for the given device and is created (or removed) along with that object, by
+device_set_wakeup_capable(). Reads from the file will return the corresponding
+string.

-The device_may_wakeup() routine returns true only if both flags are set.
+The "power/wakeup" file is supposed to contain the "disabled" string initially
+for the majority of devices; the major exceptions are power buttons, keyboards,
+and Ethernet adapters whose WoL (wake-on-LAN) feature has been set up with
+ethtool. It should also default to "enabled" for devices that don't generate
+wakeup requests on their own but merely forward wakeup requests from one bus to
+another (like PCI Express ports).
+
+The device_may_wakeup() routine returns true only if the "power.wakeup" object
+exists and the corresponding "power/wakeup" file contains the string "enabled".
 This information is used by subsystems, like the PCI bus type code, to see
 whether or not to enable the devices' wakeup mechanisms. If device wakeup
 mechanisms are enabled or disabled directly by drivers, they also should use
 device_may_wakeup() to decide what to do during a system sleep transition.
-However for runtime power management, wakeup events should be enabled whenever
-the device and driver both support them, regardless of the should_wakeup flag.
+Device drivers, however, are not supposed to call device_set_wakeup_enable()
+directly in any case.

+It ought to be noted that system wakeup is conceptually different from "remote
+wakeup" used by runtime power management, although it may be supported by the
+same physical mechanism. Remote wakeup is a feature allowing devices in
+low-power states to trigger specific interrupts to signal conditions in which
+they should be put into the full-power state. Those interrupts may or may not
+be used to signal system wakeup events, depending on the hardware design. On
+some systems it is impossible to trigger them from system sleep states. In any
+case, remote wakeup should always be enabled for runtime power management for
+all devices and drivers that support it.

 /sys/devices/.../power/control files
 ------------------------------------
···
 support all these callbacks and not all drivers use all the callbacks. The
 various phases always run after tasks have been frozen and before they are
 unfrozen. Furthermore, the *_noirq phases run at a time when IRQ handlers have
-been disabled (except for those marked with the IRQ_WAKEUP flag).
+been disabled (except for those marked with the IRQF_NO_SUSPEND flag).

-All phases use bus, type, or class callbacks (that is, methods defined in
-dev->bus->pm, dev->type->pm, or dev->class->pm). These callbacks are mutually
-exclusive, so if the device type provides a struct dev_pm_ops object pointed to
-by its pm field (i.e. both dev->type and dev->type->pm are defined), the
-callbacks included in that object (i.e. dev->type->pm) will be used. Otherwise,
-if the class provides a struct dev_pm_ops object pointed to by its pm field
-(i.e. both dev->class and dev->class->pm are defined), the PM core will use the
-callbacks from that object (i.e. dev->class->pm). Finally, if the pm fields of
-both the device type and class objects are NULL (or those objects do not exist),
-the callbacks provided by the bus (that is, the callbacks from dev->bus->pm)
-will be used (this allows device types to override callbacks provided by bus
-types or classes if necessary).
+All phases use PM domain, bus, type, or class callbacks (that is, methods
+defined in dev->pm_domain->ops, dev->bus->pm, dev->type->pm, or dev->class->pm).
+These callbacks are regarded by the PM core as mutually exclusive. Moreover,
+PM domain callbacks always take precedence over bus, type and class callbacks,
+while type callbacks take precedence over bus and class callbacks, and class
+callbacks take precedence over bus callbacks. To be precise, the following
+rules are used to determine which callback to execute in the given phase:
+
+    1. If dev->pm_domain is present, the PM core will attempt to execute the
+       callback included in dev->pm_domain->ops. If that callback is not
+       present, no action will be carried out for the given device.
+
+    2. Otherwise, if both dev->type and dev->type->pm are present, the callback
+       included in dev->type->pm will be executed.
+
+    3. Otherwise, if both dev->class and dev->class->pm are present, the
+       callback included in dev->class->pm will be executed.
+
+    4. Otherwise, if both dev->bus and dev->bus->pm are present, the callback
+       included in dev->bus->pm will be executed.
+
+This allows PM domains and device types to override callbacks provided by bus
+types or device classes if necessary.

 These callbacks may in turn invoke device- or driver-specific methods stored in
 dev->driver->pm, but they don't have to.
···
    After the prepare callback method returns, no new children may be
    registered below the device. The method may also prepare the device or
-   driver in some way for the upcoming system power transition (for
-   example, by allocating additional memory required for this purpose), but
-   it should not put the device into a low-power state.
+   driver in some way for the upcoming system power transition, but it
+   should not put the device into a low-power state.

 2. The suspend methods should quiesce the device to stop it from performing
    I/O. They also may save the device registers and put it into the
+24-16
Documentation/power/runtime_pm.txt
···
 };

 The ->runtime_suspend(), ->runtime_resume() and ->runtime_idle() callbacks
-are executed by the PM core for either the power domain, or the device type
-(if the device power domain's struct dev_pm_ops does not exist), or the class
-(if the device power domain's and type's struct dev_pm_ops object does not
-exist), or the bus type (if the device power domain's, type's and class'
-struct dev_pm_ops objects do not exist) of the given device, so the priority
-order of callbacks from high to low is that power domain callbacks, device
-type callbacks, class callbacks and bus type callbacks, and the high priority
-one will take precedence over low priority one. The bus type, device type and
-class callbacks are referred to as subsystem-level callbacks in what follows,
-and generally speaking, the power domain callbacks are used for representing
-power domains within a SoC.
+are executed by the PM core for the device's subsystem that may be either of
+the following:
+
+  1. PM domain of the device, if the device's PM domain object, dev->pm_domain,
+     is present.
+
+  2. Device type of the device, if both dev->type and dev->type->pm are present.
+
+  3. Device class of the device, if both dev->class and dev->class->pm are
+     present.
+
+  4. Bus type of the device, if both dev->bus and dev->bus->pm are present.
+
+The PM core always checks which callback to use in the order given above, so the
+priority order of callbacks from high to low is: PM domain, device type, class
+and bus type. Moreover, the high-priority one will always take precedence over
+a low-priority one. The PM domain, bus type, device type and class callbacks
+are referred to as subsystem-level callbacks in what follows.

 By default, the callbacks are always invoked in process context with interrupts
 enabled. However, subsystems can use the pm_runtime_irq_safe() helper function
-to tell the PM core that a device's ->runtime_suspend() and ->runtime_resume()
-callbacks should be invoked in atomic context with interrupts disabled.
-This implies that these callback routines must not block or sleep, but it also
-means that the synchronous helper functions listed at the end of Section 4 can
-be used within an interrupt handler or in an atomic context.
+to tell the PM core that their ->runtime_suspend(), ->runtime_resume() and
+->runtime_idle() callbacks may be invoked in atomic context with interrupts
+disabled for a given device. This implies that the callback routines in
+question must not block or sleep, but it also means that the synchronous helper
+functions listed at the end of Section 4 may be used for that device within an
+interrupt handler or generally in an atomic context.

 The subsystem-level suspend callback is _entirely_ _responsible_ for handling
 the suspend of the device as appropriate, which may, but need not include
···
 	  capabilities of the processor.

 config PL310_ERRATA_588369
-	bool "Clean & Invalidate maintenance operations do not invalidate clean lines"
+	bool "PL310 errata: Clean & Invalidate maintenance operations do not invalidate clean lines"
 	depends on CACHE_L2X0
 	help
 	  The PL310 L2 cache controller implements three types of Clean &
···
 	  entries regardless of the ASID.

 config PL310_ERRATA_727915
-	bool "Background Clean & Invalidate by Way operation can cause data corruption"
+	bool "PL310 errata: Background Clean & Invalidate by Way operation can cause data corruption"
 	depends on CACHE_L2X0
 	help
 	  PL310 implements the Clean & Invalidate by Way L2 cache maintenance
···
 	  operation is received by a CPU before the ICIALLUIS has completed,
 	  potentially leading to corrupted entries in the cache or TLB.

-config ARM_ERRATA_753970
-	bool "ARM errata: cache sync operation may be faulty"
+config PL310_ERRATA_753970
+	bool "PL310 errata: cache sync operation may be faulty"
 	depends on CACHE_PL310
 	help
 	  This option enables the workaround for the 753970 PL310 (r3p0) erratum.
···
 	  system. This workaround adds a DSB instruction before the
 	  relevant cache maintenance functions and sets a specific bit
 	  in the diagnostic control register of the SCU.
+
+config PL310_ERRATA_769419
+	bool "PL310 errata: no automatic Store Buffer drain"
+	depends on CACHE_L2X0
+	help
+	  On revisions of the PL310 prior to r3p2, the Store Buffer does
+	  not automatically drain. This can cause normal, non-cacheable
+	  writes to be retained when the memory system is idle, leading
+	  to suboptimal I/O performance for drivers using coherent DMA.
+	  This option adds a write barrier to the cpu_idle loop so that,
+	  on systems with an outer cache, the store buffer is drained
+	  explicitly.

 endmenu
+10-6
arch/arm/common/gic.c
···
 			  sizeof(u32));
 	BUG_ON(!gic->saved_ppi_conf);

-	cpu_pm_register_notifier(&gic_notifier_block);
+	if (gic == &gic_data[0])
+		cpu_pm_register_notifier(&gic_notifier_block);
 }
 #else
 static void __init gic_pm_init(struct gic_chip_data *gic)
···
 	 * For primary GICs, skip over SGIs.
 	 * For secondary GICs, skip over PPIs, too.
 	 */
+	domain->hwirq_base = 32;
 	if (gic_nr == 0) {
 		gic_cpu_base_addr = cpu_base;
-		domain->hwirq_base = 16;
-		if (irq_start > 0)
-			irq_start = (irq_start & ~31) + 16;
-	} else
-		domain->hwirq_base = 32;
+
+		if ((irq_start & 31) > 0) {
+			domain->hwirq_base = 16;
+			if (irq_start != -1)
+				irq_start = (irq_start & ~31) + 16;
+		}
+	}

 	/*
 	 * Find out how many interrupts are supported.
···
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
 CONFIG_ARCH_AT91=y
-CONFIG_ARCH_AT91CAP9=y
-CONFIG_MACH_AT91CAP9ADK=y
-CONFIG_MTD_AT91_DATAFLASH_CARD=y
+CONFIG_ARCH_AT91SAM9RL=y
+CONFIG_MACH_AT91SAM9RLEK=y
 CONFIG_AT91_PROGRAMMABLE_CLOCKS=y
 # CONFIG_ARM_THUMB is not set
-CONFIG_AEABI=y
-CONFIG_LEDS=y
-CONFIG_LEDS_CPU=y
 CONFIG_ZBOOT_ROM_TEXT=0x0
 CONFIG_ZBOOT_ROM_BSS=0x0
-CONFIG_CMDLINE="console=ttyS0,115200 root=/dev/ram0 rw"
+CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,17105363 root=/dev/ram0 rw"
 CONFIG_FPE_NWFPE=y
 CONFIG_NET=y
-CONFIG_PACKET=y
 CONFIG_UNIX=y
-CONFIG_INET=y
-CONFIG_IP_PNP=y
-CONFIG_IP_PNP_BOOTP=y
-CONFIG_IP_PNP_RARP=y
-# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
-# CONFIG_INET_XFRM_MODE_TUNNEL is not set
-# CONFIG_INET_XFRM_MODE_BEET is not set
-# CONFIG_INET_LRO is not set
-# CONFIG_INET_DIAG is not set
-# CONFIG_IPV6 is not set
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
 CONFIG_MTD_CMDLINE_PARTS=y
 CONFIG_MTD_CHAR=y
 CONFIG_MTD_BLOCK=y
-CONFIG_MTD_CFI=y
-CONFIG_MTD_JEDECPROBE=y
-CONFIG_MTD_CFI_AMDSTD=y
-CONFIG_MTD_PHYSMAP=y
 CONFIG_MTD_DATAFLASH=y
 CONFIG_MTD_NAND=y
 CONFIG_MTD_NAND_ATMEL=y
 CONFIG_BLK_DEV_LOOP=y
 CONFIG_BLK_DEV_RAM=y
-CONFIG_BLK_DEV_RAM_SIZE=8192
-CONFIG_ATMEL_SSC=y
+CONFIG_BLK_DEV_RAM_COUNT=4
+CONFIG_BLK_DEV_RAM_SIZE=24576
 CONFIG_SCSI=y
 CONFIG_BLK_DEV_SD=y
 CONFIG_SCSI_MULTI_LUN=y
-CONFIG_NETDEVICES=y
-CONFIG_NET_ETHERNET=y
-CONFIG_MII=y
-CONFIG_MACB=y
-# CONFIG_NETDEV_1000 is not set
-# CONFIG_NETDEV_10000 is not set
 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=320
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=240
 CONFIG_INPUT_EVDEV=y
 # CONFIG_INPUT_KEYBOARD is not set
 # CONFIG_INPUT_MOUSE is not set
 CONFIG_INPUT_TOUCHSCREEN=y
-CONFIG_TOUCHSCREEN_ADS7846=y
+CONFIG_TOUCHSCREEN_ATMEL_TSADCC=y
 # CONFIG_SERIO is not set
 CONFIG_SERIAL_ATMEL=y
 CONFIG_SERIAL_ATMEL_CONSOLE=y
-CONFIG_HW_RANDOM=y
+# CONFIG_HW_RANDOM is not set
 CONFIG_I2C=y
 CONFIG_I2C_CHARDEV=y
+CONFIG_I2C_GPIO=y
 CONFIG_SPI=y
 CONFIG_SPI_ATMEL=y
 # CONFIG_HWMON is not set
 CONFIG_WATCHDOG=y
 CONFIG_WATCHDOG_NOWAYOUT=y
+CONFIG_AT91SAM9X_WATCHDOG=y
 CONFIG_FB=y
 CONFIG_FB_ATMEL=y
-# CONFIG_VGA_CONSOLE is not set
-CONFIG_LOGO=y
-# CONFIG_LOGO_LINUX_MONO is not set
-# CONFIG_LOGO_LINUX_CLUT224 is not set
-# CONFIG_USB_HID is not set
-CONFIG_USB=y
-CONFIG_USB_DEVICEFS=y
-CONFIG_USB_MON=y
-CONFIG_USB_OHCI_HCD=y
-CONFIG_USB_STORAGE=y
-CONFIG_USB_GADGET=y
-CONFIG_USB_ETH=m
-CONFIG_USB_FILE_STORAGE=m
 CONFIG_MMC=y
 CONFIG_MMC_AT91=m
 CONFIG_RTC_CLASS=y
 CONFIG_RTC_DRV_AT91SAM9=y
 CONFIG_EXT2_FS=y
-CONFIG_INOTIFY=y
+CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_TMPFS=y
-CONFIG_JFFS2_FS=y
 CONFIG_CRAMFS=y
-CONFIG_NFS_FS=y
-CONFIG_ROOT_NFS=y
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_CODEPAGE_850=y
 CONFIG_NLS_ISO8859_1=y
-CONFIG_DEBUG_FS=y
+CONFIG_NLS_ISO8859_15=y
+CONFIG_NLS_UTF8=y
 CONFIG_DEBUG_KERNEL=y
 CONFIG_DEBUG_INFO=y
 CONFIG_DEBUG_USER=y
+CONFIG_DEBUG_LL=y
+14-33
arch/arm/configs/at91rm9200_defconfig
···
 CONFIG_IKCONFIG=y
 CONFIG_IKCONFIG_PROC=y
 CONFIG_LOG_BUF_SHIFT=14
-CONFIG_SYSFS_DEPRECATED_V2=y
 CONFIG_BLK_DEV_INITRD=y
 CONFIG_MODULES=y
 CONFIG_MODULE_FORCE_LOAD=y
···
 CONFIG_IP_PNP_DHCP=y
 CONFIG_IP_PNP_BOOTP=y
 CONFIG_NET_IPIP=m
-CONFIG_NET_IPGRE=m
 CONFIG_INET_AH=m
 CONFIG_INET_ESP=m
 CONFIG_INET_IPCOMP=m
···
 CONFIG_BRIDGE=m
 CONFIG_VLAN_8021Q=m
 CONFIG_BT=m
-CONFIG_BT_L2CAP=m
-CONFIG_BT_SCO=m
-CONFIG_BT_RFCOMM=m
-CONFIG_BT_RFCOMM_TTY=y
-CONFIG_BT_BNEP=m
-CONFIG_BT_BNEP_MC_FILTER=y
-CONFIG_BT_BNEP_PROTO_FILTER=y
-CONFIG_BT_HIDP=m
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_MTD=y
-CONFIG_MTD_CONCAT=y
-CONFIG_MTD_PARTITIONS=y
 CONFIG_MTD_CMDLINE_PARTS=y
 CONFIG_MTD_AFS_PARTS=y
 CONFIG_MTD_CHAR=y
···
 CONFIG_BLK_DEV_NBD=y
 CONFIG_BLK_DEV_RAM=y
 CONFIG_BLK_DEV_RAM_SIZE=8192
-CONFIG_ATMEL_TCLIB=y
-CONFIG_EEPROM_LEGACY=m
 CONFIG_SCSI=y
 CONFIG_BLK_DEV_SD=y
 CONFIG_BLK_DEV_SR=m
···
 # CONFIG_SCSI_LOWLEVEL is not set
 CONFIG_NETDEVICES=y
 CONFIG_TUN=m
+CONFIG_ARM_AT91_ETHER=y
 CONFIG_PHYLIB=y
 CONFIG_DAVICOM_PHY=y
 CONFIG_SMSC_PHY=y
 CONFIG_MICREL_PHY=y
-CONFIG_NET_ETHERNET=y
-CONFIG_ARM_AT91_ETHER=y
-# CONFIG_NETDEV_1000 is not set
-# CONFIG_NETDEV_10000 is not set
+CONFIG_PPP=y
+CONFIG_PPP_BSDCOMP=y
+CONFIG_PPP_DEFLATE=y
+CONFIG_PPP_FILTER=y
+CONFIG_PPP_MPPE=m
+CONFIG_PPP_MULTILINK=y
+CONFIG_PPPOE=m
+CONFIG_PPP_ASYNC=y
+CONFIG_SLIP=m
+CONFIG_SLIP_COMPRESSED=y
+CONFIG_SLIP_SMART=y
+CONFIG_SLIP_MODE_SLIP6=y
 CONFIG_USB_CATC=m
 CONFIG_USB_KAWETH=m
 CONFIG_USB_PEGASUS=m
···
 CONFIG_USB_ALI_M5632=y
 CONFIG_USB_AN2720=y
 CONFIG_USB_EPSON2888=y
-CONFIG_PPP=y
-CONFIG_PPP_MULTILINK=y
-CONFIG_PPP_FILTER=y
-CONFIG_PPP_ASYNC=y
-CONFIG_PPP_DEFLATE=y
-CONFIG_PPP_BSDCOMP=y
-CONFIG_PPP_MPPE=m
-CONFIG_PPPOE=m
-CONFIG_SLIP=m
-CONFIG_SLIP_COMPRESSED=y
-CONFIG_SLIP_SMART=y
-CONFIG_SLIP_MODE_SLIP6=y
 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set
 CONFIG_INPUT_MOUSEDEV_SCREEN_X=640
 CONFIG_INPUT_MOUSEDEV_SCREEN_Y=480
···
 CONFIG_KEYBOARD_GPIO=y
 # CONFIG_INPUT_MOUSE is not set
 CONFIG_INPUT_TOUCHSCREEN=y
+CONFIG_LEGACY_PTY_COUNT=32
 CONFIG_SERIAL_ATMEL=y
 CONFIG_SERIAL_ATMEL_CONSOLE=y
-CONFIG_LEGACY_PTY_COUNT=32
 CONFIG_HW_RANDOM=y
 CONFIG_I2C=y
 CONFIG_I2C_CHARDEV=y
···
 CONFIG_NFS_V4=y
 CONFIG_ROOT_NFS=y
 CONFIG_NFSD=y
-CONFIG_SMB_FS=m
 CONFIG_CIFS=m
 CONFIG_PARTITION_ADVANCED=y
 CONFIG_MAC_PARTITION=y
···
 CONFIG_MAGIC_SYSRQ=y
 CONFIG_DEBUG_FS=y
 CONFIG_DEBUG_KERNEL=y
-# CONFIG_RCU_CPU_STALL_DETECTOR is not set
 # CONFIG_FTRACE is not set
 CONFIG_CRYPTO_PCBC=y
 CONFIG_CRYPTO_SHA1=y
···
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
 CONFIG_ARCH_AT91=y
-CONFIG_ARCH_AT91SAM9260=y
-CONFIG_MACH_AT91SAM9260EK=y
+CONFIG_ARCH_AT91SAM9G20=y
+CONFIG_MACH_AT91SAM9G20EK=y
+CONFIG_MACH_AT91SAM9G20EK_2MMC=y
+CONFIG_MACH_CPU9G20=y
+CONFIG_MACH_ACMENETUSFOXG20=y
+CONFIG_MACH_PORTUXG20=y
+CONFIG_MACH_STAMP9G20=y
+CONFIG_MACH_PCONTROL_G20=y
+CONFIG_MACH_GSIA18S=y
+CONFIG_MACH_USB_A9G20=y
+CONFIG_MACH_SNAPPER_9260=y
+CONFIG_MACH_AT91SAM_DT=y
 CONFIG_AT91_PROGRAMMABLE_CLOCKS=y
 # CONFIG_ARM_THUMB is not set
+CONFIG_AEABI=y
+CONFIG_LEDS=y
+CONFIG_LEDS_CPU=y
 CONFIG_ZBOOT_ROM_TEXT=0x0
 CONFIG_ZBOOT_ROM_BSS=0x0
+CONFIG_ARM_APPENDED_DTB=y
+CONFIG_ARM_ATAG_DTB_COMPAT=y
 CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,3145728 root=/dev/ram0 rw"
 CONFIG_FPE_NWFPE=y
 CONFIG_NET=y
···
 # CONFIG_INET_LRO is not set
 # CONFIG_IPV6 is not set
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_MTD=y
+CONFIG_MTD_CMDLINE_PARTS=y
+CONFIG_MTD_CHAR=y
+CONFIG_MTD_BLOCK=y
+CONFIG_MTD_DATAFLASH=y
+CONFIG_MTD_NAND=y
+CONFIG_MTD_NAND_ATMEL=y
+CONFIG_BLK_DEV_LOOP=y
 CONFIG_BLK_DEV_RAM=y
 CONFIG_BLK_DEV_RAM_SIZE=8192
-CONFIG_ATMEL_SSC=y
 CONFIG_SCSI=y
 CONFIG_BLK_DEV_SD=y
 CONFIG_SCSI_MULTI_LUN=y
+# CONFIG_SCSI_LOWLEVEL is not set
 CONFIG_NETDEVICES=y
-CONFIG_NET_ETHERNET=y
 CONFIG_MII=y
 CONFIG_MACB=y
 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set
-# CONFIG_INPUT_KEYBOARD is not set
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=320
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=240
+CONFIG_INPUT_EVDEV=y
+# CONFIG_KEYBOARD_ATKBD is not set
+CONFIG_KEYBOARD_GPIO=y
 # CONFIG_INPUT_MOUSE is not set
-# CONFIG_SERIO is not set
+CONFIG_LEGACY_PTY_COUNT=16
 CONFIG_SERIAL_ATMEL=y
 CONFIG_SERIAL_ATMEL_CONSOLE=y
-# CONFIG_HW_RANDOM is not set
-CONFIG_I2C=y
-CONFIG_I2C_CHARDEV=y
-CONFIG_I2C_GPIO=y
+CONFIG_HW_RANDOM=y
+CONFIG_SPI=y
+CONFIG_SPI_ATMEL=y
+CONFIG_SPI_SPIDEV=y
 # CONFIG_HWMON is not set
-CONFIG_WATCHDOG=y
-CONFIG_WATCHDOG_NOWAYOUT=y
-CONFIG_AT91SAM9X_WATCHDOG=y
-# CONFIG_VGA_CONSOLE is not set
-# CONFIG_USB_HID is not set
+CONFIG_SOUND=y
+CONFIG_SND=y
+CONFIG_SND_SEQUENCER=y
+CONFIG_SND_MIXER_OSS=y
+CONFIG_SND_PCM_OSS=y
+CONFIG_SND_SEQUENCER_OSS=y
+# CONFIG_SND_VERBOSE_PROCFS is not set
 CONFIG_USB=y
 CONFIG_USB_DEVICEFS=y
+# CONFIG_USB_DEVICE_CLASS is not set
 CONFIG_USB_MON=y
 CONFIG_USB_OHCI_HCD=y
 CONFIG_USB_STORAGE=y
-CONFIG_USB_STORAGE_DEBUG=y
 CONFIG_USB_GADGET=y
 CONFIG_USB_ZERO=m
 CONFIG_USB_GADGETFS=m
 CONFIG_USB_FILE_STORAGE=m
 CONFIG_USB_G_SERIAL=m
+CONFIG_MMC=y
+CONFIG_MMC_AT91=m
+CONFIG_NEW_LEDS=y
+CONFIG_LEDS_CLASS=y
+CONFIG_LEDS_GPIO=y
+CONFIG_LEDS_TRIGGERS=y
+CONFIG_LEDS_TRIGGER_TIMER=y
+CONFIG_LEDS_TRIGGER_HEARTBEAT=y
 CONFIG_RTC_CLASS=y
 CONFIG_RTC_DRV_AT91SAM9=y
 CONFIG_EXT2_FS=y
-CONFIG_INOTIFY=y
+CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_TMPFS=y
+CONFIG_JFFS2_FS=y
+CONFIG_JFFS2_SUMMARY=y
 CONFIG_CRAMFS=y
+CONFIG_NFS_FS=y
+CONFIG_NFS_V3=y
+CONFIG_ROOT_NFS=y
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_CODEPAGE_850=y
 CONFIG_NLS_ISO8859_1=y
-CONFIG_DEBUG_KERNEL=y
-CONFIG_DEBUG_USER=y
-CONFIG_DEBUG_LL=y
+CONFIG_NLS_ISO8859_15=y
+CONFIG_NLS_UTF8=y
+# CONFIG_ENABLE_WARN_DEPRECATED is not set
···
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
 CONFIG_ARCH_AT91=y
-CONFIG_ARCH_AT91SAM9G20=y
-CONFIG_MACH_AT91SAM9G20EK=y
-CONFIG_MACH_AT91SAM9G20EK_2MMC=y
+CONFIG_ARCH_AT91CAP9=y
+CONFIG_MACH_AT91CAP9ADK=y
+CONFIG_MTD_AT91_DATAFLASH_CARD=y
 CONFIG_AT91_PROGRAMMABLE_CLOCKS=y
 # CONFIG_ARM_THUMB is not set
 CONFIG_AEABI=y
···
 CONFIG_LEDS_CPU=y
 CONFIG_ZBOOT_ROM_TEXT=0x0
 CONFIG_ZBOOT_ROM_BSS=0x0
-CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,3145728 root=/dev/ram0 rw"
+CONFIG_CMDLINE="console=ttyS0,115200 root=/dev/ram0 rw"
 CONFIG_FPE_NWFPE=y
-CONFIG_PM=y
 CONFIG_NET=y
 CONFIG_PACKET=y
 CONFIG_UNIX=y
 CONFIG_INET=y
 CONFIG_IP_PNP=y
 CONFIG_IP_PNP_BOOTP=y
+CONFIG_IP_PNP_RARP=y
 # CONFIG_INET_XFRM_MODE_TRANSPORT is not set
 # CONFIG_INET_XFRM_MODE_TUNNEL is not set
 # CONFIG_INET_XFRM_MODE_BEET is not set
 # CONFIG_INET_LRO is not set
+# CONFIG_INET_DIAG is not set
 # CONFIG_IPV6 is not set
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_MTD=y
-CONFIG_MTD_CONCAT=y
-CONFIG_MTD_PARTITIONS=y
 CONFIG_MTD_CMDLINE_PARTS=y
 CONFIG_MTD_CHAR=y
 CONFIG_MTD_BLOCK=y
+CONFIG_MTD_CFI=y
+CONFIG_MTD_JEDECPROBE=y
+CONFIG_MTD_CFI_AMDSTD=y
+CONFIG_MTD_PHYSMAP=y
 CONFIG_MTD_DATAFLASH=y
 CONFIG_MTD_NAND=y
 CONFIG_MTD_NAND_ATMEL=y
 CONFIG_BLK_DEV_LOOP=y
 CONFIG_BLK_DEV_RAM=y
 CONFIG_BLK_DEV_RAM_SIZE=8192
-CONFIG_ATMEL_SSC=y
 CONFIG_SCSI=y
 CONFIG_BLK_DEV_SD=y
 CONFIG_SCSI_MULTI_LUN=y
-# CONFIG_SCSI_LOWLEVEL is not set
 CONFIG_NETDEVICES=y
-CONFIG_NET_ETHERNET=y
 CONFIG_MII=y
 CONFIG_MACB=y
-# CONFIG_NETDEV_1000 is not set
-# CONFIG_NETDEV_10000 is not set
 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set
-CONFIG_INPUT_MOUSEDEV_SCREEN_X=320
-CONFIG_INPUT_MOUSEDEV_SCREEN_Y=240
 CONFIG_INPUT_EVDEV=y
-# CONFIG_KEYBOARD_ATKBD is not set
-CONFIG_KEYBOARD_GPIO=y
+# CONFIG_INPUT_KEYBOARD is not set
 # CONFIG_INPUT_MOUSE is not set
+CONFIG_INPUT_TOUCHSCREEN=y
+CONFIG_TOUCHSCREEN_ADS7846=y
+# CONFIG_SERIO is not set
 CONFIG_SERIAL_ATMEL=y
 CONFIG_SERIAL_ATMEL_CONSOLE=y
-CONFIG_LEGACY_PTY_COUNT=16
 CONFIG_HW_RANDOM=y
+CONFIG_I2C=y
+CONFIG_I2C_CHARDEV=y
 CONFIG_SPI=y
 CONFIG_SPI_ATMEL=y
-CONFIG_SPI_SPIDEV=y
 # CONFIG_HWMON is not set
-# CONFIG_VGA_CONSOLE is not set
-CONFIG_SOUND=y
-CONFIG_SND=y
-CONFIG_SND_SEQUENCER=y
-CONFIG_SND_MIXER_OSS=y
-CONFIG_SND_PCM_OSS=y
-CONFIG_SND_SEQUENCER_OSS=y
-# CONFIG_SND_VERBOSE_PROCFS is not set
-CONFIG_SND_AT73C213=y
+CONFIG_WATCHDOG=y
+CONFIG_WATCHDOG_NOWAYOUT=y
+CONFIG_FB=y
+CONFIG_FB_ATMEL=y
+CONFIG_LOGO=y
+# CONFIG_LOGO_LINUX_MONO is not set
+# CONFIG_LOGO_LINUX_CLUT224 is not set
+# CONFIG_USB_HID is not set
 CONFIG_USB=y
 CONFIG_USB_DEVICEFS=y
-# CONFIG_USB_DEVICE_CLASS is not set
 CONFIG_USB_MON=y
 CONFIG_USB_OHCI_HCD=y
 CONFIG_USB_STORAGE=y
 CONFIG_USB_GADGET=y
-CONFIG_USB_ZERO=m
-CONFIG_USB_GADGETFS=m
+CONFIG_USB_ETH=m
 CONFIG_USB_FILE_STORAGE=m
-CONFIG_USB_G_SERIAL=m
 CONFIG_MMC=y
 CONFIG_MMC_AT91=m
-CONFIG_NEW_LEDS=y
-CONFIG_LEDS_CLASS=y
-CONFIG_LEDS_GPIO=y
-CONFIG_LEDS_TRIGGERS=y
-CONFIG_LEDS_TRIGGER_TIMER=y
-CONFIG_LEDS_TRIGGER_HEARTBEAT=y
 CONFIG_RTC_CLASS=y
 CONFIG_RTC_DRV_AT91SAM9=y
 CONFIG_EXT2_FS=y
-CONFIG_INOTIFY=y
-CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_TMPFS=y
 CONFIG_JFFS2_FS=y
-CONFIG_JFFS2_SUMMARY=y
 CONFIG_CRAMFS=y
 CONFIG_NFS_FS=y
-CONFIG_NFS_V3=y
 CONFIG_ROOT_NFS=y
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_CODEPAGE_850=y
 CONFIG_NLS_ISO8859_1=y
-CONFIG_NLS_ISO8859_15=y
-CONFIG_NLS_UTF8=y
-# CONFIG_ENABLE_WARN_DEPRECATED is not set
+CONFIG_DEBUG_FS=y
+CONFIG_DEBUG_KERNEL=y
+CONFIG_DEBUG_INFO=y
+CONFIG_DEBUG_USER=y
+2-5
arch/arm/configs/at91sam9g45_defconfig
···
 CONFIG_ARCH_AT91=y
 CONFIG_ARCH_AT91SAM9G45=y
 CONFIG_MACH_AT91SAM9M10G45EK=y
+CONFIG_MACH_AT91SAM_DT=y
 CONFIG_AT91_PROGRAMMABLE_CLOCKS=y
 CONFIG_AT91_SLOW_CLOCK=y
 CONFIG_AEABI=y
···
 # CONFIG_SCSI_LOWLEVEL is not set
 CONFIG_NETDEVICES=y
 CONFIG_MII=y
-CONFIG_DAVICOM_PHY=y
-CONFIG_NET_ETHERNET=y
 CONFIG_MACB=y
-# CONFIG_NETDEV_1000 is not set
-# CONFIG_NETDEV_10000 is not set
+CONFIG_DAVICOM_PHY=y
 CONFIG_LIBERTAS_THINFIRM=m
 CONFIG_LIBERTAS_THINFIRM_USB=m
 CONFIG_AT76C50X_USB=m
···
 CONFIG_SPI=y
 CONFIG_SPI_ATMEL=y
 # CONFIG_HWMON is not set
-# CONFIG_MFD_SUPPORT is not set
 CONFIG_FB=y
 CONFIG_FB_ATMEL=y
 CONFIG_FB_UDL=m
···
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
 CONFIG_ARCH_AT91=y
-CONFIG_ARCH_AT91SAM9RL=y
-CONFIG_MACH_AT91SAM9RLEK=y
+CONFIG_ARCH_AT91SAM9260=y
+CONFIG_ARCH_AT91SAM9260_SAM9XE=y
+CONFIG_MACH_AT91SAM9260EK=y
+CONFIG_MACH_CAM60=y
+CONFIG_MACH_SAM9_L9260=y
+CONFIG_MACH_AFEB9260=y
+CONFIG_MACH_USB_A9260=y
+CONFIG_MACH_QIL_A9260=y
+CONFIG_MACH_CPU9260=y
+CONFIG_MACH_FLEXIBITY=y
+CONFIG_MACH_SNAPPER_9260=y
+CONFIG_MACH_AT91SAM_DT=y
 CONFIG_AT91_PROGRAMMABLE_CLOCKS=y
 # CONFIG_ARM_THUMB is not set
 CONFIG_ZBOOT_ROM_TEXT=0x0
 CONFIG_ZBOOT_ROM_BSS=0x0
-CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,17105363 root=/dev/ram0 rw"
+CONFIG_ARM_APPENDED_DTB=y
+CONFIG_ARM_ATAG_DTB_COMPAT=y
+CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,3145728 root=/dev/ram0 rw"
 CONFIG_FPE_NWFPE=y
 CONFIG_NET=y
+CONFIG_PACKET=y
 CONFIG_UNIX=y
+CONFIG_INET=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_BOOTP=y
+# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
+# CONFIG_INET_XFRM_MODE_TUNNEL is not set
+# CONFIG_INET_XFRM_MODE_BEET is not set
+# CONFIG_INET_LRO is not set
+# CONFIG_IPV6 is not set
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
-CONFIG_MTD=y
-CONFIG_MTD_CONCAT=y
-CONFIG_MTD_PARTITIONS=y
-CONFIG_MTD_CMDLINE_PARTS=y
-CONFIG_MTD_CHAR=y
-CONFIG_MTD_BLOCK=y
-CONFIG_MTD_DATAFLASH=y
-CONFIG_MTD_NAND=y
-CONFIG_MTD_NAND_ATMEL=y
-CONFIG_BLK_DEV_LOOP=y
 CONFIG_BLK_DEV_RAM=y
-CONFIG_BLK_DEV_RAM_COUNT=4
-CONFIG_BLK_DEV_RAM_SIZE=24576
-CONFIG_ATMEL_SSC=y
+CONFIG_BLK_DEV_RAM_SIZE=8192
 CONFIG_SCSI=y
 CONFIG_BLK_DEV_SD=y
 CONFIG_SCSI_MULTI_LUN=y
+CONFIG_NETDEVICES=y
+CONFIG_MII=y
+CONFIG_MACB=y
 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set
-CONFIG_INPUT_MOUSEDEV_SCREEN_X=320
-CONFIG_INPUT_MOUSEDEV_SCREEN_Y=240
-CONFIG_INPUT_EVDEV=y
 # CONFIG_INPUT_KEYBOARD is not set
 # CONFIG_INPUT_MOUSE is not set
-CONFIG_INPUT_TOUCHSCREEN=y
-CONFIG_TOUCHSCREEN_ATMEL_TSADCC=y
 # CONFIG_SERIO is not set
 CONFIG_SERIAL_ATMEL=y
 CONFIG_SERIAL_ATMEL_CONSOLE=y
···
 CONFIG_I2C=y
 CONFIG_I2C_CHARDEV=y
 CONFIG_I2C_GPIO=y
-CONFIG_SPI=y
-CONFIG_SPI_ATMEL=y
 # CONFIG_HWMON is not set
 CONFIG_WATCHDOG=y
 CONFIG_WATCHDOG_NOWAYOUT=y
 CONFIG_AT91SAM9X_WATCHDOG=y
-CONFIG_FB=y
-CONFIG_FB_ATMEL=y
-# CONFIG_VGA_CONSOLE is not set
-CONFIG_MMC=y
-CONFIG_MMC_AT91=m
+# CONFIG_USB_HID is not set
+CONFIG_USB=y
+CONFIG_USB_DEVICEFS=y
+CONFIG_USB_MON=y
+CONFIG_USB_OHCI_HCD=y
+CONFIG_USB_STORAGE=y
+CONFIG_USB_STORAGE_DEBUG=y
+CONFIG_USB_GADGET=y
+CONFIG_USB_ZERO=m
+CONFIG_USB_GADGETFS=m
+CONFIG_USB_FILE_STORAGE=m
+CONFIG_USB_G_SERIAL=m
 CONFIG_RTC_CLASS=y
 CONFIG_RTC_DRV_AT91SAM9=y
 CONFIG_EXT2_FS=y
-CONFIG_INOTIFY=y
-CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_TMPFS=y
 CONFIG_CRAMFS=y
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_CODEPAGE_850=y
 CONFIG_NLS_ISO8859_1=y
-CONFIG_NLS_ISO8859_15=y
-CONFIG_NLS_UTF8=y
 CONFIG_DEBUG_KERNEL=y
-CONFIG_DEBUG_INFO=y
 CONFIG_DEBUG_USER=y
 CONFIG_DEBUG_LL=y
+1 -1
arch/arm/configs/ezx_defconfig
···
 # CONFIG_USB_DEVICE_CLASS is not set
 CONFIG_USB_OHCI_HCD=y
 CONFIG_USB_GADGET=y
-CONFIG_USB_GADGET_PXA27X=y
+CONFIG_USB_PXA27X=y
 CONFIG_USB_ETH=m
 # CONFIG_USB_ETH_RNDIS is not set
 CONFIG_MMC=y
+1 -1
arch/arm/configs/imote2_defconfig
···
 # CONFIG_USB_DEVICE_CLASS is not set
 CONFIG_USB_OHCI_HCD=y
 CONFIG_USB_GADGET=y
-CONFIG_USB_GADGET_PXA27X=y
+CONFIG_USB_PXA27X=y
 CONFIG_USB_ETH=m
 # CONFIG_USB_ETH_RNDIS is not set
 CONFIG_MMC=y
+1 -1
arch/arm/configs/magician_defconfig
···
 CONFIG_USB_OHCI_HCD=y
 CONFIG_USB_GADGET=y
 CONFIG_USB_GADGET_VBUS_DRAW=500
-CONFIG_USB_GADGET_PXA27X=y
+CONFIG_USB_PXA27X=y
 CONFIG_USB_ETH=m
 # CONFIG_USB_ETH_RNDIS is not set
 CONFIG_USB_GADGETFS=m
···
 CONFIG_ARCH_U300=y
 CONFIG_MACH_U300=y
 CONFIG_MACH_U300_BS335=y
-CONFIG_MACH_U300_DUAL_RAM=y
-CONFIG_U300_DEBUG=y
 CONFIG_MACH_U300_SPIDUMMY=y
 CONFIG_NO_HZ=y
 CONFIG_HIGH_RES_TIMERS=y
···
 CONFIG_CMDLINE="root=/dev/ram0 rw rootfstype=rootfs console=ttyAMA0,115200n8 lpj=515072"
 CONFIG_CPU_IDLE=y
 CONFIG_FPE_NWFPE=y
-CONFIG_PM=y
 # CONFIG_SUSPEND is not set
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 # CONFIG_PREVENT_FIRMWARE_BUILD is not set
-# CONFIG_MISC_DEVICES is not set
+CONFIG_MTD=y
+CONFIG_MTD_CMDLINE_PARTS=y
+CONFIG_MTD_NAND=y
+CONFIG_MTD_NAND_FSMC=y
 # CONFIG_INPUT_MOUSEDEV is not set
 CONFIG_INPUT_EVDEV=y
 # CONFIG_KEYBOARD_ATKBD is not set
 # CONFIG_INPUT_MOUSE is not set
 # CONFIG_SERIO is not set
+CONFIG_LEGACY_PTY_COUNT=16
 CONFIG_SERIAL_AMBA_PL011=y
 CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
-CONFIG_LEGACY_PTY_COUNT=16
 # CONFIG_HW_RANDOM is not set
 CONFIG_I2C=y
 # CONFIG_HWMON is not set
···
 # CONFIG_HID_SUPPORT is not set
 # CONFIG_USB_SUPPORT is not set
 CONFIG_MMC=y
+CONFIG_MMC_CLKGATE=y
 CONFIG_MMC_ARMMMCI=y
 CONFIG_RTC_CLASS=y
 # CONFIG_RTC_HCTOSYS is not set
···
 CONFIG_NLS_ISO8859_1=y
 CONFIG_PRINTK_TIME=y
 CONFIG_DEBUG_FS=y
-CONFIG_DEBUG_KERNEL=y
 # CONFIG_SCHED_DEBUG is not set
 CONFIG_TIMER_STATS=y
 # CONFIG_DEBUG_PREEMPT is not set
 CONFIG_DEBUG_INFO=y
-# CONFIG_RCU_CPU_STALL_DETECTOR is not set
 # CONFIG_CRC32 is not set
+5 -9
arch/arm/configs/u8500_defconfig
···
 CONFIG_ARCH_U8500=y
 CONFIG_UX500_SOC_DB5500=y
 CONFIG_UX500_SOC_DB8500=y
-CONFIG_MACH_U8500=y
+CONFIG_MACH_HREFV60=y
 CONFIG_MACH_SNOWBALL=y
 CONFIG_MACH_U5500=y
 CONFIG_NO_HZ=y
···
 CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
 CONFIG_VFP=y
 CONFIG_NEON=y
+CONFIG_PM_RUNTIME=y
 CONFIG_NET=y
 CONFIG_PACKET=y
 CONFIG_UNIX=y
···
 CONFIG_AB8500_PWM=y
 CONFIG_SENSORS_BH1780=y
 CONFIG_NETDEVICES=y
-CONFIG_SMSC_PHY=y
-CONFIG_NET_ETHERNET=y
 CONFIG_SMSC911X=y
-# CONFIG_NETDEV_1000 is not set
-# CONFIG_NETDEV_10000 is not set
+CONFIG_SMSC_PHY=y
 # CONFIG_WLAN is not set
 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set
 CONFIG_INPUT_EVDEV=y
···
 CONFIG_SPI_PL022=y
 CONFIG_GPIO_STMPE=y
 CONFIG_GPIO_TC3589X=y
-# CONFIG_HWMON is not set
 CONFIG_MFD_STMPE=y
 CONFIG_MFD_TC3589X=y
+CONFIG_AB5500_CORE=y
 CONFIG_AB8500_CORE=y
 CONFIG_REGULATOR_AB8500=y
 # CONFIG_HID_SUPPORT is not set
-CONFIG_USB_MUSB_HDRC=y
-CONFIG_USB_GADGET_MUSB_HDRC=y
-CONFIG_MUSB_PIO_ONLY=y
 CONFIG_USB_GADGET=y
 CONFIG_AB8500_USB=y
 CONFIG_MMC=y
···
 CONFIG_STE_DMA40=y
 CONFIG_STAGING=y
 CONFIG_TOUCHSCREEN_SYNAPTICS_I2C_RMI4=y
+CONFIG_HSEM_U8500=y
 CONFIG_EXT2_FS=y
 CONFIG_EXT2_FS_XATTR=y
 CONFIG_EXT2_FS_POSIX_ACL=y
···
 extern void
 release_pmu(enum arm_pmu_type type);
 
-/**
- * init_pmu() - Initialise the PMU.
- *
- * Initialise the system ready for PMU enabling. This should typically set the
- * IRQ affinity and nothing else. The users (oprofile/perf events etc) will do
- * the actual hardware initialisation.
- */
-extern int
-init_pmu(enum arm_pmu_type type);
-
 #else /* CONFIG_CPU_HAS_PMU */
 
 #include <linux/err.h>
+1 -1
arch/arm/include/asm/topology.h
···
 
 void init_cpu_topology(void);
 void store_cpu_topology(unsigned int cpuid);
-const struct cpumask *cpu_coregroup_mask(unsigned int cpu);
+const struct cpumask *cpu_coregroup_mask(int cpu);
 
 #else
 
···
 config HAVE_IMX_SRC
         bool
 
-#
-# ARCH_MX31 and ARCH_MX35 are left for compatibility
-# Some usages assume that having one of them implies not having (e.g.) ARCH_MX2.
-# To easily distinguish good and reviewed from unreviewed usages new (and IMHO
-# more sensible) names are used: SOC_IMX31 and SOC_IMX35
 config ARCH_MX1
         bool
 
···
         bool
 
 config MACH_MX27
-        bool
-
-config ARCH_MX31
-        bool
-
-config ARCH_MX35
         bool
 
 config SOC_IMX1
···
         select CPU_V6
         select IMX_HAVE_PLATFORM_MXC_RNGA
         select ARCH_MXC_AUDMUX_V2
-        select ARCH_MX31
         select MXC_AVIC
         select SMP_ON_UP if SMP
···
         select ARCH_MXC_IOMUX_V3
         select ARCH_MXC_AUDMUX_V2
         select HAVE_EPIT
-        select ARCH_MX35
         select MXC_AVIC
         select SMP_ON_UP if SMP
+5 -2
arch/arm/mach-imx/clock-imx6q.c
···
         imx_map_entry(MX6Q, ANATOP, MT_DEVICE),
 };
 
+void __init imx6q_clock_map_io(void)
+{
+        iotable_init(imx6q_clock_desc, ARRAY_SIZE(imx6q_clock_desc));
+}
+
 int __init mx6q_clocks_init(void)
 {
         struct device_node *np;
         void __iomem *base;
         int i, irq;
-
-        iotable_init(imx6q_clock_desc, ARRAY_SIZE(imx6q_clock_desc));
 
         /* retrieve the freqency of fixed clocks from device tree */
         for_each_compatible_node(np, NULL, "fixed-clock") {
···
 comment "OMAP CPU Speed"
         depends on ARCH_OMAP1
 
-config OMAP_CLOCKS_SET_BY_BOOTLOADER
-        bool "OMAP clocks set by bootloader"
-        depends on ARCH_OMAP1
-        help
-          Enable this option to prevent the kernel from overriding the clock
-          frequencies programmed by bootloader for MPU, DSP, MMUs, TC,
-          internal LCD controller and MPU peripherals.
-
 config OMAP_ARM_216MHZ
         bool "OMAP ARM 216 MHz CPU (1710 only)"
         depends on ARCH_OMAP1 && ARCH_OMAP16XX
···
 
 #include <plat/clock.h>
 
-extern int __init omap1_clk_init(void);
+int omap1_clk_init(void);
+void omap1_clk_late_init(void);
 extern int omap1_clk_enable(struct clk *clk);
 extern void omap1_clk_disable(struct clk *clk);
 extern long omap1_clk_round_rate(struct clk *clk, unsigned long rate);
+34 -19
arch/arm/mach-omap1/clock_data.c
···
         .clk_disable_unused = omap1_clk_disable_unused,
 };
 
+static void __init omap1_show_rates(void)
+{
+        pr_notice("Clocking rate (xtal/DPLL1/MPU): "
+                        "%ld.%01ld/%ld.%01ld/%ld.%01ld MHz\n",
+                ck_ref.rate / 1000000, (ck_ref.rate / 100000) % 10,
+                ck_dpll1.rate / 1000000, (ck_dpll1.rate / 100000) % 10,
+                arm_ck.rate / 1000000, (arm_ck.rate / 100000) % 10);
+}
+
 int __init omap1_clk_init(void)
 {
         struct omap_clk *c;
···
         /* We want to be in syncronous scalable mode */
         omap_writew(0x1000, ARM_SYSST);
 
-#ifdef CONFIG_OMAP_CLOCKS_SET_BY_BOOTLOADER
-        /* Use values set by bootloader. Determine PLL rate and recalculate
-         * dependent clocks as if kernel had changed PLL or divisors.
+
+        /*
+         * Initially use the values set by bootloader. Determine PLL rate and
+         * recalculate dependent clocks as if kernel had changed PLL or
+         * divisors. See also omap1_clk_late_init() that can reprogram dpll1
+         * after the SRAM is initialized.
          */
         {
                 unsigned pll_ctl_val = omap_readw(DPLL_CTL);
···
                 }
         }
-#else
-        /* Find the highest supported frequency and enable it */
-        if (omap1_select_table_rate(&virtual_ck_mpu, ~0)) {
-                printk(KERN_ERR "System frequencies not set. Check your config.\n");
-                /* Guess sane values (60MHz) */
-                omap_writew(0x2290, DPLL_CTL);
-                omap_writew(cpu_is_omap7xx() ? 0x3005 : 0x1005, ARM_CKCTL);
-                ck_dpll1.rate = 60000000;
-        }
-#endif
         propagate_rate(&ck_dpll1);
         /* Cache rates for clocks connected to ck_ref (not dpll1) */
         propagate_rate(&ck_ref);
-        printk(KERN_INFO "Clocking rate (xtal/DPLL1/MPU): "
-                "%ld.%01ld/%ld.%01ld/%ld.%01ld MHz\n",
-               ck_ref.rate / 1000000, (ck_ref.rate / 100000) % 10,
-               ck_dpll1.rate / 1000000, (ck_dpll1.rate / 100000) % 10,
-               arm_ck.rate / 1000000, (arm_ck.rate / 100000) % 10);
-
+        omap1_show_rates();
         if (machine_is_omap_perseus2() || machine_is_omap_fsample()) {
                 /* Select slicer output as OMAP input clock */
                 omap_writew(omap_readw(OMAP7XX_PCC_UPLD_CTRL) & ~0x1,
···
         clk_enable(&arm_gpio_ck);
 
         return 0;
+}
+
+#define OMAP1_DPLL1_SANE_VALUE  60000000
+
+void __init omap1_clk_late_init(void)
+{
+        if (ck_dpll1.rate >= OMAP1_DPLL1_SANE_VALUE)
+                return;
+
+        /* Find the highest supported frequency and enable it */
+        if (omap1_select_table_rate(&virtual_ck_mpu, ~0)) {
+                pr_err("System frequencies not set, using default. Check your config.\n");
+                omap_writew(0x2290, DPLL_CTL);
+                omap_writew(cpu_is_omap7xx() ? 0x3005 : 0x1005, ARM_CKCTL);
+                ck_dpll1.rate = OMAP1_DPLL1_SANE_VALUE;
+        }
+        propagate_rate(&ck_dpll1);
+        omap1_show_rates();
 }
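The omap1_show_rates() helper factored out above prints a Hz rate as "MHz" with one decimal place using integer arithmetic only. A standalone sketch of that formatting trick (plain C, outside the kernel; the function name is ours):

```c
#include <stdio.h>

/* Format a rate in Hz as "X.Y MHz" with one decimal place, using the
 * same integer divide/modulo arithmetic as omap1_show_rates() above:
 * rate/1000000 gives the whole MHz, (rate/100000)%10 the tenths. */
static void format_mhz(unsigned long rate, char *buf, unsigned long len)
{
        snprintf(buf, len, "%lu.%01lu MHz",
                 rate / 1000000, (rate / 100000) % 10);
}
```

Avoiding floating point here matters in kernel context, where FP use in ordinary kernel code is not allowed.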
+3
arch/arm/mach-omap1/devices.c
···
 #include <plat/omap7xx.h>
 #include <plat/mcbsp.h>
 
+#include "clock.h"
+
 /*-------------------------------------------------------------------------*/
 
 #if defined(CONFIG_RTC_DRV_OMAP) || defined(CONFIG_RTC_DRV_OMAP_MODULE)
···
                 return -ENODEV;
 
         omap_sram_init();
+        omap1_clk_late_init();
 
         /* please keep these calls, and their implementations above,
          * in alphabetical order so they're easier to sort through.
+1
arch/arm/mach-omap2/Kconfig
···
 config OMAP3_EMU
         bool "OMAP3 debugging peripherals"
         depends on ARCH_OMAP3
+        select ARM_AMBA
         select OC_ETM
         help
           Say Y here to enable debugging hardware of omap3
···
 #include <plat/omap_hwmod.h>
 #include <plat/omap_device.h>
 #include <plat/omap-pm.h>
+#include <plat/common.h>
 
 #include "control.h"
+#include "display.h"
+
+#define DISPC_CONTROL		0x0040
+#define DISPC_CONTROL2		0x0238
+#define DISPC_IRQSTATUS		0x0018
+
+#define DSS_SYSCONFIG		0x10
+#define DSS_SYSSTATUS		0x14
+#define DSS_CONTROL		0x40
+#define DSS_SDI_CONTROL		0x44
+#define DSS_PLL_CONTROL		0x48
+
+#define LCD_EN_MASK		(0x1 << 0)
+#define DIGIT_EN_MASK		(0x1 << 1)
+
+#define FRAMEDONE_IRQ_SHIFT	0
+#define EVSYNC_EVEN_IRQ_SHIFT	2
+#define EVSYNC_ODD_IRQ_SHIFT	3
+#define FRAMEDONE2_IRQ_SHIFT	22
+#define FRAMEDONETV_IRQ_SHIFT	24
+
+/*
+ * FRAMEDONE_IRQ_TIMEOUT: how long (in milliseconds) to wait during DISPC
+ * reset before deciding that something has gone wrong
+ */
+#define FRAMEDONE_IRQ_TIMEOUT		100
 
 static struct platform_device omap_display_device = {
         .name          = "omapdss",
···
         r = platform_device_register(&omap_display_device);
         if (r < 0)
                 printk(KERN_ERR "Unable to register OMAP-Display device\n");
+
+        return r;
+}
+
+static void dispc_disable_outputs(void)
+{
+        u32 v, irq_mask = 0;
+        bool lcd_en, digit_en, lcd2_en = false;
+        int i;
+        struct omap_dss_dispc_dev_attr *da;
+        struct omap_hwmod *oh;
+
+        oh = omap_hwmod_lookup("dss_dispc");
+        if (!oh) {
+                WARN(1, "display: could not disable outputs during reset - could not find dss_dispc hwmod\n");
+                return;
+        }
+
+        if (!oh->dev_attr) {
+                pr_err("display: could not disable outputs during reset due to missing dev_attr\n");
+                return;
+        }
+
+        da = (struct omap_dss_dispc_dev_attr *)oh->dev_attr;
+
+        /* store value of LCDENABLE and DIGITENABLE bits */
+        v = omap_hwmod_read(oh, DISPC_CONTROL);
+        lcd_en = v & LCD_EN_MASK;
+        digit_en = v & DIGIT_EN_MASK;
+
+        /* store value of LCDENABLE for LCD2 */
+        if (da->manager_count > 2) {
+                v = omap_hwmod_read(oh, DISPC_CONTROL2);
+                lcd2_en = v & LCD_EN_MASK;
+        }
+
+        if (!(lcd_en | digit_en | lcd2_en))
+                return; /* no managers currently enabled */
+
+        /*
+         * If any manager was enabled, we need to disable it before
+         * DSS clocks are disabled or DISPC module is reset
+         */
+        if (lcd_en)
+                irq_mask |= 1 << FRAMEDONE_IRQ_SHIFT;
+
+        if (digit_en) {
+                if (da->has_framedonetv_irq) {
+                        irq_mask |= 1 << FRAMEDONETV_IRQ_SHIFT;
+                } else {
+                        irq_mask |= 1 << EVSYNC_EVEN_IRQ_SHIFT |
+                                1 << EVSYNC_ODD_IRQ_SHIFT;
+                }
+        }
+
+        if (lcd2_en)
+                irq_mask |= 1 << FRAMEDONE2_IRQ_SHIFT;
+
+        /*
+         * clear any previous FRAMEDONE, FRAMEDONETV,
+         * EVSYNC_EVEN/ODD or FRAMEDONE2 interrupts
+         */
+        omap_hwmod_write(irq_mask, oh, DISPC_IRQSTATUS);
+
+        /* disable LCD and TV managers */
+        v = omap_hwmod_read(oh, DISPC_CONTROL);
+        v &= ~(LCD_EN_MASK | DIGIT_EN_MASK);
+        omap_hwmod_write(v, oh, DISPC_CONTROL);
+
+        /* disable LCD2 manager */
+        if (da->manager_count > 2) {
+                v = omap_hwmod_read(oh, DISPC_CONTROL2);
+                v &= ~LCD_EN_MASK;
+                omap_hwmod_write(v, oh, DISPC_CONTROL2);
+        }
+
+        i = 0;
+        while ((omap_hwmod_read(oh, DISPC_IRQSTATUS) & irq_mask) !=
+               irq_mask) {
+                i++;
+                if (i > FRAMEDONE_IRQ_TIMEOUT) {
+                        pr_err("didn't get FRAMEDONE1/2 or TV interrupt\n");
+                        break;
+                }
+                mdelay(1);
+        }
+}
+
+#define MAX_MODULE_SOFTRESET_WAIT	10000
+int omap_dss_reset(struct omap_hwmod *oh)
+{
+        struct omap_hwmod_opt_clk *oc;
+        int c = 0;
+        int i, r;
+
+        if (!(oh->class->sysc->sysc_flags & SYSS_HAS_RESET_STATUS)) {
+                pr_err("dss_core: hwmod data doesn't contain reset data\n");
+                return -EINVAL;
+        }
+
+        for (i = oh->opt_clks_cnt, oc = oh->opt_clks; i > 0; i--, oc++)
+                if (oc->_clk)
+                        clk_enable(oc->_clk);
+
+        dispc_disable_outputs();
+
+        /* clear SDI registers */
+        if (cpu_is_omap3430()) {
+                omap_hwmod_write(0x0, oh, DSS_SDI_CONTROL);
+                omap_hwmod_write(0x0, oh, DSS_PLL_CONTROL);
+        }
+
+        /*
+         * clear DSS_CONTROL register to switch DSS clock sources to
+         * PRCM clock, if any
+         */
+        omap_hwmod_write(0x0, oh, DSS_CONTROL);
+
+        omap_test_timeout((omap_hwmod_read(oh, oh->class->sysc->syss_offs)
+                                & SYSS_RESETDONE_MASK),
+                        MAX_MODULE_SOFTRESET_WAIT, c);
+
+        if (c == MAX_MODULE_SOFTRESET_WAIT)
+                pr_warning("dss_core: waiting for reset to finish failed\n");
+        else
+                pr_debug("dss_core: softreset done\n");
+
+        for (i = oh->opt_clks_cnt, oc = oh->opt_clks; i > 0; i--, oc++)
+                if (oc->_clk)
+                        clk_disable(oc->_clk);
+
+        r = (c == MAX_MODULE_SOFTRESET_WAIT) ? -ETIMEDOUT : 0;
 
         return r;
 }
+29
arch/arm/mach-omap2/display.h
···
+/*
+ * display.h - OMAP2+ integration-specific DSS header
+ *
+ * Copyright (C) 2011 Texas Instruments, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ARCH_ARM_MACH_OMAP2_DISPLAY_H
+#define __ARCH_ARM_MACH_OMAP2_DISPLAY_H
+
+#include <linux/kernel.h>
+
+struct omap_dss_dispc_dev_attr {
+        u8	manager_count;
+        bool	has_framedonetv_irq;
+};
+
+#endif
···
 #include "powerdomain.h"
 #include "clockdomain.h"
 #include "pm.h"
+#include "twl-common.h"
 
 static struct omap_device_pm_latency *pm_lats;
 
···
 
 static int __init omap2_common_pm_late_init(void)
 {
-        /* Init the OMAP TWL parameters */
-        omap3_twl_init();
-        omap4_twl_init();
-
         /* Init the voltage layer */
+        omap_pmic_late_init();
         omap_voltage_late_init();
 
         /* Initialize the voltages */
+1 -1
arch/arm/mach-omap2/smartreflex.c
···
                 sr_write_reg(sr_info, ERRCONFIG_V1, status);
         } else if (sr_info->ip_type == SR_TYPE_V2) {
                 /* Read the status bits */
-                sr_read_reg(sr_info, IRQSTATUS);
+                status = sr_read_reg(sr_info, IRQSTATUS);
 
                 /* Clear them by writing back */
                 sr_write_reg(sr_info, IRQSTATUS, status);
+11
arch/arm/mach-omap2/twl-common.c
···
 #include <plat/usb.h>
 
 #include "twl-common.h"
+#include "pm.h"
 
 static struct i2c_board_info __initdata pmic_i2c_board_info = {
         .addr          = 0x48,
···
         pmic_i2c_board_info.platform_data = pmic_data;
 
         omap_register_i2c_bus(bus, clkrate, &pmic_i2c_board_info, 1);
+}
+
+void __init omap_pmic_late_init(void)
+{
+        /* Init the OMAP TWL parameters (if PMIC has been registerd) */
+        if (!pmic_i2c_board_info.irq)
+                return;
+
+        omap3_twl_init();
+        omap4_twl_init();
 }
 
 #if defined(CONFIG_ARCH_OMAP3)
···
  * published by the Free Software Foundation.
  */
 
-#include <linux/module.h>
+#include <linux/export.h>
 #include <linux/interrupt.h>
 #include <linux/i2c.h>
 
+1 -1
arch/arm/mm/cache-l2x0.c
···
 {
         void __iomem *base = l2x0_base;
 
-#ifdef CONFIG_ARM_ERRATA_753970
+#ifdef CONFIG_PL310_ERRATA_753970
         /* write to an unmmapped register */
         writel_relaxed(0, base + L2X0_DUMMY_REG);
 #else
+10 -1
arch/arm/mm/dma-mapping.c
···
         pte_t *pte;
         int i = 0;
         unsigned long base = consistent_base;
-        unsigned long num_ptes = (CONSISTENT_END - base) >> PGDIR_SHIFT;
+        unsigned long num_ptes = (CONSISTENT_END - base) >> PMD_SHIFT;
 
         consistent_pte = kmalloc(num_ptes * sizeof(pte_t), GFP_KERNEL);
         if (!consistent_pte) {
···
 {
         struct page *page;
         void *addr;
+
+        /*
+         * Following is a work-around (a.k.a. hack) to prevent pages
+         * with __GFP_COMP being passed to split_page() which cannot
+         * handle them. The real problem is that this flag probably
+         * should be 0 on ARM as it is not supported on this
+         * platform; see CONFIG_HUGETLBFS.
+         */
+        gfp &= ~(__GFP_COMP);
 
         *handle = ~0;
         size = PAGE_ALIGN(size);
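The work-around in the dma-mapping hunk above is plain flag masking: clear one bit from the allocation mask before it reaches a path that cannot handle it. A minimal userspace illustration (the flag values below are illustrative placeholders, not the kernel's real gfp bit values):

```c
/* Strip a "compound page" flag from an allocation mask before it can
 * reach a path (split_page() in the hunk above) that cannot handle it.
 * COMP_FLAG and BASE_FLAGS are illustrative values, not the kernel's. */
#define COMP_FLAG  0x4000u
#define BASE_FLAGS 0x00d0u

static unsigned int sanitize_dma_gfp(unsigned int gfp)
{
        return gfp & ~(COMP_FLAG);  /* same pattern as gfp &= ~(__GFP_COMP) */
}
```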
+6 -17
arch/arm/mm/mmap.c
···
 #include <linux/io.h>
 #include <linux/personality.h>
 #include <linux/random.h>
-#include <asm/cputype.h>
-#include <asm/system.h>
+#include <asm/cachetype.h>
 
 #define COLOUR_ALIGN(addr,pgoff)		\
         ((((addr)+SHMLBA-1)&~(SHMLBA-1)) +	\
···
         struct mm_struct *mm = current->mm;
         struct vm_area_struct *vma;
         unsigned long start_addr;
-#if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_V6K)
-        unsigned int cache_type;
-        int do_align = 0, aliasing = 0;
+        int do_align = 0;
+        int aliasing = cache_is_vipt_aliasing();
 
         /*
          * We only need to do colour alignment if either the I or D
-         * caches alias.  This is indicated by bits 9 and 21 of the
-         * cache type register.
+         * caches alias.
          */
-        cache_type = read_cpuid_cachetype();
-        if (cache_type != read_cpuid_id()) {
-                aliasing = (cache_type | cache_type >> 12) & (1 << 11);
-                if (aliasing)
-                        do_align = filp || flags & MAP_SHARED;
-        }
-#else
-#define do_align 0
-#define aliasing 0
-#endif
+        if (aliasing)
+                do_align = filp || (flags & MAP_SHARED);
 
         /*
          * We enforce the MAP_FIXED case.
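The COLOUR_ALIGN macro quoted at the top of the mmap.c hunk is truncated in this listing (its second line ends in a continuation). A standalone sketch of what SHMLBA colour alignment computes, under our assumptions about the elided part (the pgoff term below is the conventional form; SHMLBA is taken as 16 KiB, i.e. four 4 KiB pages, as on ARM VIPT-aliasing caches):

```c
/* Round addr up to an SHMLBA boundary, then add the cache colour implied
 * by the file offset, so that shared mappings of the same pages land on
 * the same cache colour. PAGE_SHIFT/SHMLBA values and the pgoff term are
 * assumptions for illustration, not quoted from the diff. */
#define MY_PAGE_SHIFT 12
#define MY_SHMLBA     (4UL << MY_PAGE_SHIFT)

static unsigned long colour_align(unsigned long addr, unsigned long pgoff)
{
        return (((addr + MY_SHMLBA - 1) & ~(MY_SHMLBA - 1)) +
                ((pgoff << MY_PAGE_SHIFT) & (MY_SHMLBA - 1)));
}
```

The point of the patch itself is that this alignment is now gated on `cache_is_vipt_aliasing()` at run time instead of a compile-time CPU check.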
···
  * the Free Software Foundation; either version 2 of the License.
 */
 
-#include <linux/module.h>
+#include <linux/export.h>
 #include <linux/kernel.h>
 #include <linux/platform_device.h>
 #include <linux/slab.h>
···
         /* Errata QE_General4, which affects some MPC832x and MPC836x SOCs, says
            that the BRG divisor must be even if you're not using divide-by-16
            mode. */
-        if (!div16 && (divisor & 1))
+        if (!div16 && (divisor & 1) && (divisor > 3))
                 divisor++;
 
         tempval = ((divisor - 1) << QE_BRGC_DIVISOR_SHIFT) |
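The one-line fix above is easiest to see extracted as a pure function (the function name is ours, not from the driver): outside divide-by-16 mode an odd BRG divisor is bumped to the next even value, but, per the fix, only when it is greater than 3.

```c
/* Errata QE_General4 divisor rounding, as changed by the hunk above:
 * odd divisors are rounded up to even only when !div16 and divisor > 3,
 * so very small divisors (1, 3) are now left untouched. */
static unsigned int qe_round_divisor(unsigned int divisor, int div16)
{
        if (!div16 && (divisor & 1) && (divisor > 3))
                divisor++;
        return divisor;
}
```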
+22 -9
drivers/acpi/apei/erst.c
···
 static int erst_open_pstore(struct pstore_info *psi);
 static int erst_close_pstore(struct pstore_info *psi);
 static ssize_t erst_reader(u64 *id, enum pstore_type_id *type,
-                           struct timespec *time, struct pstore_info *psi);
+                           struct timespec *time, char **buf,
+                           struct pstore_info *psi);
 static int erst_writer(enum pstore_type_id type, u64 *id, unsigned int part,
                        size_t size, struct pstore_info *psi);
 static int erst_clearer(enum pstore_type_id type, u64 id,
···
 }
 
 static ssize_t erst_reader(u64 *id, enum pstore_type_id *type,
-                           struct timespec *time, struct pstore_info *psi)
+                           struct timespec *time, char **buf,
+                           struct pstore_info *psi)
 {
         int rc;
         ssize_t len = 0;
         u64 record_id;
-        struct cper_pstore_record *rcd = (struct cper_pstore_record *)
-                                        (erst_info.buf - sizeof(*rcd));
+        struct cper_pstore_record *rcd;
+        size_t rcd_len = sizeof(*rcd) + erst_info.bufsize;
 
         if (erst_disable)
                 return -ENODEV;
 
+        rcd = kmalloc(rcd_len, GFP_KERNEL);
+        if (!rcd) {
+                rc = -ENOMEM;
+                goto out;
+        }
 skip:
         rc = erst_get_record_id_next(&reader_pos, &record_id);
         if (rc)
···
 
         /* no more record */
         if (record_id == APEI_ERST_INVALID_RECORD_ID) {
-                rc = -1;
+                rc = -EINVAL;
                 goto out;
         }
 
-        len = erst_read(record_id, &rcd->hdr, sizeof(*rcd) +
-                        erst_info.bufsize);
+        len = erst_read(record_id, &rcd->hdr, rcd_len);
         /* The record may be cleared by others, try read next record */
         if (len == -ENOENT)
                 goto skip;
-        else if (len < 0) {
-                rc = -1;
+        else if (len < sizeof(*rcd)) {
+                rc = -EIO;
                 goto out;
         }
         if (uuid_le_cmp(rcd->hdr.creator_id, CPER_CREATOR_PSTORE) != 0)
                 goto skip;
 
+        *buf = kmalloc(len, GFP_KERNEL);
+        if (*buf == NULL) {
+                rc = -ENOMEM;
+                goto out;
+        }
+        memcpy(*buf, rcd->data, len - sizeof(*rcd));
         *id = record_id;
         if (uuid_le_cmp(rcd->sec_hdr.section_type,
                         CPER_SECTION_TYPE_DMESG) == 0)
···
                 time->tv_nsec = 0;
 
 out:
+        kfree(rcd);
         return (rc < 0) ? rc : (len - sizeof(*rcd));
 }
 
···
  * Translate OpenFirmware node properties into platform_data
  * WARNING: This is DEPRECATED and will be removed eventually!
  */
-void
+static void
 pca953x_get_alt_pdata(struct i2c_client *client, int *gpio_base, int *invert)
 {
         struct device_node *node;
···
         *invert = *val;
 }
 #else
-void
+static void
 pca953x_get_alt_pdata(struct i2c_client *client, int *gpio_base, int *invert)
 {
         *gpio_base = -1;
+32 -30
drivers/gpu/drm/exynos/exynos_drm_buf.c
···
 #include "drm.h"
 
 #include "exynos_drm_drv.h"
+#include "exynos_drm_gem.h"
 #include "exynos_drm_buf.h"
 
-static DEFINE_MUTEX(exynos_drm_buf_lock);
-
 static int lowlevel_buffer_allocate(struct drm_device *dev,
-                struct exynos_drm_buf_entry *entry)
+                struct exynos_drm_gem_buf *buffer)
 {
         DRM_DEBUG_KMS("%s\n", __FILE__);
 
-        entry->vaddr = dma_alloc_writecombine(dev->dev, entry->size,
-                        (dma_addr_t *)&entry->paddr, GFP_KERNEL);
-        if (!entry->paddr) {
+        buffer->kvaddr = dma_alloc_writecombine(dev->dev, buffer->size,
+                        &buffer->dma_addr, GFP_KERNEL);
+        if (!buffer->kvaddr) {
                 DRM_ERROR("failed to allocate buffer.\n");
                 return -ENOMEM;
         }
 
-        DRM_DEBUG_KMS("allocated : vaddr(0x%x), paddr(0x%x), size(0x%x)\n",
-                        (unsigned int)entry->vaddr, entry->paddr, entry->size);
+        DRM_DEBUG_KMS("vaddr(0x%lx), dma_addr(0x%lx), size(0x%lx)\n",
+                        (unsigned long)buffer->kvaddr,
+                        (unsigned long)buffer->dma_addr,
+                        buffer->size);
 
         return 0;
 }
 
 static void lowlevel_buffer_deallocate(struct drm_device *dev,
-                struct exynos_drm_buf_entry *entry)
+                struct exynos_drm_gem_buf *buffer)
 {
         DRM_DEBUG_KMS("%s.\n", __FILE__);
 
-        if (entry->paddr && entry->vaddr && entry->size)
-                dma_free_writecombine(dev->dev, entry->size, entry->vaddr,
-                                entry->paddr);
+        if (buffer->dma_addr && buffer->size)
+                dma_free_writecombine(dev->dev, buffer->size, buffer->kvaddr,
+                                (dma_addr_t)buffer->dma_addr);
         else
-                DRM_DEBUG_KMS("entry data is null.\n");
+                DRM_DEBUG_KMS("buffer data are invalid.\n");
 }
 
-struct exynos_drm_buf_entry *exynos_drm_buf_create(struct drm_device *dev,
+struct exynos_drm_gem_buf *exynos_drm_buf_create(struct drm_device *dev,
                 unsigned int size)
 {
-        struct exynos_drm_buf_entry *entry;
+        struct exynos_drm_gem_buf *buffer;
 
         DRM_DEBUG_KMS("%s.\n", __FILE__);
+        DRM_DEBUG_KMS("desired size = 0x%x\n", size);
 
-        entry = kzalloc(sizeof(*entry), GFP_KERNEL);
-        if (!entry) {
-                DRM_ERROR("failed to allocate exynos_drm_buf_entry.\n");
+        buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
+        if (!buffer) {
+                DRM_ERROR("failed to allocate exynos_drm_gem_buf.\n");
                 return ERR_PTR(-ENOMEM);
         }
 
-        entry->size = size;
+        buffer->size = size;
 
         /*
          * allocate memory region with size and set the memory information
-         * to vaddr and paddr of a entry object.
+         * to vaddr and dma_addr of a buffer object.
          */
-        if (lowlevel_buffer_allocate(dev, entry) < 0) {
-                kfree(entry);
-                entry = NULL;
+        if (lowlevel_buffer_allocate(dev, buffer) < 0) {
+                kfree(buffer);
+                buffer = NULL;
                 return ERR_PTR(-ENOMEM);
         }
 
-        return entry;
+        return buffer;
 }
 
 void exynos_drm_buf_destroy(struct drm_device *dev,
-                struct exynos_drm_buf_entry *entry)
+                struct exynos_drm_gem_buf *buffer)
 {
         DRM_DEBUG_KMS("%s.\n", __FILE__);
 
-        if (!entry) {
-                DRM_DEBUG_KMS("entry is null.\n");
+        if (!buffer) {
+                DRM_DEBUG_KMS("buffer is null.\n");
                 return;
         }
 
-        lowlevel_buffer_deallocate(dev, entry);
+        lowlevel_buffer_deallocate(dev, buffer);
 
-        kfree(entry);
-        entry = NULL;
+        kfree(buffer);
+        buffer = NULL;
 }
 
 MODULE_AUTHOR("Inki Dae <inki.dae@samsung.com>");
+4 -17
drivers/gpu/drm/exynos/exynos_drm_buf.h
···
 #ifndef _EXYNOS_DRM_BUF_H_
 #define _EXYNOS_DRM_BUF_H_
 
-/*
- * exynos drm buffer entry structure.
- *
- * @paddr: physical address of allocated memory.
- * @vaddr: kernel virtual address of allocated memory.
- * @size: size of allocated memory.
- */
-struct exynos_drm_buf_entry {
-        dma_addr_t paddr;
-        void __iomem *vaddr;
-        unsigned int size;
-};
-
 /* allocate physical memory. */
-struct exynos_drm_buf_entry *exynos_drm_buf_create(struct drm_device *dev,
+struct exynos_drm_gem_buf *exynos_drm_buf_create(struct drm_device *dev,
                 unsigned int size);
 
-/* get physical memory information of a drm framebuffer. */
-struct exynos_drm_buf_entry *exynos_drm_fb_get_buf(struct drm_framebuffer *fb);
+/* get memory information of a drm framebuffer. */
+struct exynos_drm_gem_buf *exynos_drm_fb_get_buf(struct drm_framebuffer *fb);
 
 /* remove allocated physical memory. */
 void exynos_drm_buf_destroy(struct drm_device *dev,
-                struct exynos_drm_buf_entry *entry);
+                struct exynos_drm_gem_buf *buffer);
 
 #endif
+56 -22
drivers/gpu/drm/exynos/exynos_drm_connector.c
···
 
 struct exynos_drm_connector {
         struct drm_connector	drm_connector;
+        uint32_t		encoder_id;
+        struct exynos_drm_manager *manager;
 };
 
 /* convert exynos_video_timings to drm_display_mode */
···
         DRM_DEBUG_KMS("%s\n", __FILE__);
 
         mode->clock = timing->pixclock / 1000;
+        mode->vrefresh = timing->refresh;
 
         mode->hdisplay = timing->xres;
         mode->hsync_start = mode->hdisplay + timing->left_margin;
···
         mode->vsync_start = mode->vdisplay + timing->upper_margin;
         mode->vsync_end = mode->vsync_start + timing->vsync_len;
         mode->vtotal = mode->vsync_end + timing->lower_margin;
+
+        if (timing->vmode & FB_VMODE_INTERLACED)
+                mode->flags |= DRM_MODE_FLAG_INTERLACE;
+
+        if (timing->vmode & FB_VMODE_DOUBLE)
+                mode->flags |= DRM_MODE_FLAG_DBLSCAN;
 }
 
 /* convert drm_display_mode to exynos_video_timings */
···
         memset(timing, 0, sizeof(*timing));
 
         timing->pixclock = mode->clock * 1000;
-        timing->refresh = mode->vrefresh;
+        timing->refresh = drm_mode_vrefresh(mode);
 
         timing->xres = mode->hdisplay;
         timing->left_margin = mode->hsync_start - mode->hdisplay;
···
 
 static int exynos_drm_connector_get_modes(struct drm_connector *connector)
 {
-        struct exynos_drm_manager *manager =
-                        exynos_drm_get_manager(connector->encoder);
-        struct exynos_drm_display *display = manager->display;
+        struct exynos_drm_connector *exynos_connector =
+                                        to_exynos_connector(connector);
+        struct exynos_drm_manager *manager = exynos_connector->manager;
+        struct exynos_drm_display_ops *display_ops = manager->display_ops;
         unsigned int count;
 
         DRM_DEBUG_KMS("%s\n", __FILE__);
 
-        if (!display) {
-                DRM_DEBUG_KMS("display is null.\n");
+        if (!display_ops) {
+                DRM_DEBUG_KMS("display_ops is null.\n");
                 return 0;
         }
 
···
          * P.S. in case of lcd panel, count is always 1 if success
          * because lcd panel has only one mode.
          */
-        if (display->get_edid) {
+        if (display_ops->get_edid) {
                 int ret;
                 void *edid;
 
···
                         return 0;
                 }
 
-                ret = display->get_edid(manager->dev, connector,
+                ret = display_ops->get_edid(manager->dev, connector,
                                 edid, MAX_EDID);
                 if (ret < 0) {
                         DRM_ERROR("failed to get edid data.\n");
···
                 struct drm_display_mode *mode = drm_mode_create(connector->dev);
                 struct fb_videomode *timing;
 
-                if (display->get_timing)
-                        timing = display->get_timing(manager->dev);
+                if (display_ops->get_timing)
+                        timing = display_ops->get_timing(manager->dev);
                 else {
                         drm_mode_destroy(connector->dev, mode);
                         return 0;
···
 static int exynos_drm_connector_mode_valid(struct drm_connector *connector,
                                             struct drm_display_mode *mode)
 {
-        struct exynos_drm_manager *manager =
-                        exynos_drm_get_manager(connector->encoder);
-        struct exynos_drm_display *display = manager->display;
+        struct exynos_drm_connector *exynos_connector =
+                                        to_exynos_connector(connector);
+        struct exynos_drm_manager *manager = exynos_connector->manager;
+        struct exynos_drm_display_ops *display_ops = manager->display_ops;
         struct fb_videomode timing;
         int ret = MODE_BAD;
 
···
 
         convert_to_video_timing(&timing, mode);
 
-        if (display && display->check_timing)
-                if (!display->check_timing(manager->dev, (void *)&timing))
+        if (display_ops && display_ops->check_timing)
+                if (!display_ops->check_timing(manager->dev, (void *)&timing))
                         ret = MODE_OK;
 
         return ret;
···
 
 struct drm_encoder *exynos_drm_best_encoder(struct drm_connector *connector)
 {
+        struct drm_device *dev = connector->dev;
+        struct exynos_drm_connector *exynos_connector =
+                                        to_exynos_connector(connector);
+        struct drm_mode_object *obj;
+        struct drm_encoder *encoder;
+
         DRM_DEBUG_KMS("%s\n", __FILE__);
 
-        return connector->encoder;
+        obj = drm_mode_object_find(dev, exynos_connector->encoder_id,
+                                   DRM_MODE_OBJECT_ENCODER);
+        if (!obj) {
+                DRM_DEBUG_KMS("Unknown ENCODER ID %d\n",
+                                exynos_connector->encoder_id);
+                return NULL;
+        }
+
+        encoder = obj_to_encoder(obj);
+
+        return encoder;
 }
 
 static struct drm_connector_helper_funcs exynos_connector_helper_funcs = {
···
 static enum drm_connector_status
 exynos_drm_connector_detect(struct drm_connector *connector, bool force)
 {
-        struct exynos_drm_manager *manager =
-                        exynos_drm_get_manager(connector->encoder);
-        struct exynos_drm_display *display = manager->display;
+        struct exynos_drm_connector *exynos_connector =
+                                        to_exynos_connector(connector);
+        struct exynos_drm_manager *manager = exynos_connector->manager;
+        struct exynos_drm_display_ops *display_ops =
+                                        manager->display_ops;
         enum drm_connector_status status = connector_status_disconnected;
 
         DRM_DEBUG_KMS("%s\n", __FILE__);
 
-        if (display && display->is_connected) {
-                if (display->is_connected(manager->dev))
+        if (display_ops && display_ops->is_connected) {
+                if (display_ops->is_connected(manager->dev))
                         status = connector_status_connected;
                 else
                         status = connector_status_disconnected;
···
 
         connector = &exynos_connector->drm_connector;
 
-        switch (manager->display->type) {
+        switch (manager->display_ops->type) {
         case EXYNOS_DISPLAY_TYPE_HDMI:
                 type = DRM_MODE_CONNECTOR_HDMIA;
+                connector->interlace_allowed = true;
+                connector->polled = DRM_CONNECTOR_POLL_HPD;
                 break;
         default:
                 type = DRM_MODE_CONNECTOR_Unknown;
···
         if (err)
                 goto err_connector;
 
+        exynos_connector->encoder_id = encoder->base.id;
+        exynos_connector->manager = manager;
         connector->encoder = encoder;
+
         err = drm_mode_connector_attach_encoder(connector, encoder);
         if (err) {
                 DRM_ERROR("failed to attach a connector to a encoder\n");
+39 -37
drivers/gpu/drm/exynos/exynos_drm_crtc.c
···
 #include "drmP.h"
 #include "drm_crtc_helper.h"
 
+#include "exynos_drm_crtc.h"
 #include "exynos_drm_drv.h"
 #include "exynos_drm_fb.h"
 #include "exynos_drm_encoder.h"
+#include "exynos_drm_gem.h"
 #include "exynos_drm_buf.h"
 
 #define to_exynos_crtc(x)        container_of(x, struct exynos_drm_crtc,\
                                 drm_crtc)
-
-/*
- * Exynos specific crtc postion structure.
- *
- * @fb_x: offset x on a framebuffer to be displyed
- *        - the unit is screen coordinates.
- * @fb_y: offset y on a framebuffer to be displayed
- *        - the unit is screen coordinates.
- * @crtc_x: offset x on hardware screen.
- * @crtc_y: offset y on hardware screen.
- * @crtc_w: width of hardware screen.
- * @crtc_h: height of hardware screen.
- */
-struct exynos_drm_crtc_pos {
-        unsigned int fb_x;
-        unsigned int fb_y;
-        unsigned int crtc_x;
-        unsigned int crtc_y;
-        unsigned int crtc_w;
-        unsigned int crtc_h;
-};
 
 /*
  * Exynos specific crtc structure.
···
         exynos_drm_fn_encoder(crtc, overlay,
                         exynos_drm_encoder_crtc_mode_set);
-        exynos_drm_fn_encoder(crtc, NULL, exynos_drm_encoder_crtc_commit);
+        exynos_drm_fn_encoder(crtc, &exynos_crtc->pipe,
+                        exynos_drm_encoder_crtc_commit);
 }
 
-static int exynos_drm_overlay_update(struct exynos_drm_overlay *overlay,
-                                      struct drm_framebuffer *fb,
-                                      struct drm_display_mode *mode,
-                                      struct exynos_drm_crtc_pos *pos)
+int exynos_drm_overlay_update(struct exynos_drm_overlay *overlay,
+                              struct drm_framebuffer *fb,
+                              struct drm_display_mode *mode,
+                              struct exynos_drm_crtc_pos *pos)
 {
-        struct exynos_drm_buf_entry *entry;
+        struct exynos_drm_gem_buf *buffer;
         unsigned int actual_w;
         unsigned int actual_h;
 
-        entry = exynos_drm_fb_get_buf(fb);
-        if (!entry) {
-                DRM_LOG_KMS("entry is null.\n");
+        buffer = exynos_drm_fb_get_buf(fb);
+        if (!buffer) {
+                DRM_LOG_KMS("buffer is null.\n");
                 return -EFAULT;
         }
 
-        overlay->paddr = entry->paddr;
-        overlay->vaddr = entry->vaddr;
+        overlay->dma_addr = buffer->dma_addr;
+        overlay->vaddr = buffer->kvaddr;
 
-        DRM_DEBUG_KMS("vaddr = 0x%lx, paddr = 0x%lx\n",
+        DRM_DEBUG_KMS("vaddr = 0x%lx, dma_addr = 0x%lx\n",
                         (unsigned long)overlay->vaddr,
-                        (unsigned long)overlay->paddr);
+                        (unsigned long)overlay->dma_addr);
 
         actual_w = min((mode->hdisplay - pos->crtc_x), pos->crtc_w);
         actual_h = min((mode->vdisplay - pos->crtc_y), pos->crtc_h);
···
 
 static void exynos_drm_crtc_dpms(struct drm_crtc *crtc, int mode)
 {
-        DRM_DEBUG_KMS("%s\n", __FILE__);
+        struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc);
 
-        /* TODO */
+        DRM_DEBUG_KMS("crtc[%d] mode[%d]\n", crtc->base.id, mode);
+
+        switch (mode) {
+        case DRM_MODE_DPMS_ON:
+                exynos_drm_fn_encoder(crtc, &exynos_crtc->pipe,
+                                exynos_drm_encoder_crtc_commit);
+                break;
+        case DRM_MODE_DPMS_STANDBY:
+        case DRM_MODE_DPMS_SUSPEND:
+        case DRM_MODE_DPMS_OFF:
+                /* TODO */
+                exynos_drm_fn_encoder(crtc, NULL,
+                                exynos_drm_encoder_crtc_disable);
+                break;
+        default:
+                DRM_DEBUG_KMS("unspecified mode %d\n", mode);
+                break;
+        }
 }
 
 static void exynos_drm_crtc_prepare(struct drm_crtc *crtc)
···
 
 static void exynos_drm_crtc_commit(struct drm_crtc *crtc)
 {
+        struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc);
+
         DRM_DEBUG_KMS("%s\n", __FILE__);
 
-        /* drm framework doesn't check NULL. */
+        exynos_drm_fn_encoder(crtc, &exynos_crtc->pipe,
+                        exynos_drm_encoder_crtc_commit);
 }
 
 static bool
+25
drivers/gpu/drm/exynos/exynos_drm_crtc.h
···
 int exynos_drm_crtc_enable_vblank(struct drm_device *dev, int crtc);
 void exynos_drm_crtc_disable_vblank(struct drm_device *dev, int crtc);
 
+/*
+ * Exynos specific crtc postion structure.
+ *
+ * @fb_x: offset x on a framebuffer to be displyed
+ *        - the unit is screen coordinates.
+ * @fb_y: offset y on a framebuffer to be displayed
+ *        - the unit is screen coordinates.
+ * @crtc_x: offset x on hardware screen.
+ * @crtc_y: offset y on hardware screen.
+ * @crtc_w: width of hardware screen.
+ * @crtc_h: height of hardware screen.
+ */
+struct exynos_drm_crtc_pos {
+        unsigned int fb_x;
+        unsigned int fb_y;
+        unsigned int crtc_x;
+        unsigned int crtc_y;
+        unsigned int crtc_w;
+        unsigned int crtc_h;
+};
+
+int exynos_drm_overlay_update(struct exynos_drm_overlay *overlay,
+                              struct drm_framebuffer *fb,
+                              struct drm_display_mode *mode,
+                              struct exynos_drm_crtc_pos *pos);
 #endif
drivers/gpu/drm/exynos/exynos_drm_drv.h
···
 #ifndef _EXYNOS_DRM_DRV_H_
 #define _EXYNOS_DRM_DRV_H_
 
+#include <linux/module.h>
 #include "drm.h"
 
 #define MAX_CRTC        2
···
 * @scan_flag: interlace or progressive way.
 *        (it could be DRM_MODE_FLAG_*)
 * @bpp: pixel size.(in bit)
- * @paddr: bus(accessed by dma) physical memory address to this overlay
- *        and this is physically continuous.
+ * @dma_addr: bus(accessed by dma) address to the memory region allocated
+ *        for a overlay.
 * @vaddr: virtual memory addresss to this overlay.
 * @default_win: a window to be enabled.
 * @color_key: color key on or off.
···
         unsigned int scan_flag;
         unsigned int bpp;
         unsigned int pitch;
-        dma_addr_t paddr;
+        dma_addr_t dma_addr;
         void __iomem *vaddr;
 
         bool default_win;
···
 * @check_timing: check if timing is valid or not.
 * @power_on: display device on or off.
 */
-struct exynos_drm_display {
+struct exynos_drm_display_ops {
         enum exynos_drm_output_type type;
         bool (*is_connected)(struct device *dev);
         int (*get_edid)(struct device *dev, struct drm_connector *connector,
···
 * @mode_set: convert drm_display_mode to hw specific display mode and
 *        would be called by encoder->mode_set().
 * @commit: set current hw specific display mode to hw.
+ * @disable: disable hardware specific display mode.
 * @enable_vblank: specific driver callback for enabling vblank interrupt.
 * @disable_vblank: specific driver callback for disabling vblank interrupt.
 */
 struct exynos_drm_manager_ops {
         void (*mode_set)(struct device *subdrv_dev, void *mode);
         void (*commit)(struct device *subdrv_dev);
+        void (*disable)(struct device *subdrv_dev);
         int (*enable_vblank)(struct device *subdrv_dev);
         void (*disable_vblank)(struct device *subdrv_dev);
 };
···
         int pipe;
         struct exynos_drm_manager_ops *ops;
         struct exynos_drm_overlay_ops *overlay_ops;
-        struct exynos_drm_display *display;
+        struct exynos_drm_display_ops *display_ops;
 };
 
 /*
+72 -11
drivers/gpu/drm/exynos/exynos_drm_encoder.c
···
         struct drm_device *dev = encoder->dev;
         struct drm_connector *connector;
         struct exynos_drm_manager *manager = exynos_drm_get_manager(encoder);
+        struct exynos_drm_manager_ops *manager_ops = manager->ops;
 
         DRM_DEBUG_KMS("%s, encoder dpms: %d\n", __FILE__, mode);
 
+        switch (mode) {
+        case DRM_MODE_DPMS_ON:
+                if (manager_ops && manager_ops->commit)
+                        manager_ops->commit(manager->dev);
+                break;
+        case DRM_MODE_DPMS_STANDBY:
+        case DRM_MODE_DPMS_SUSPEND:
+        case DRM_MODE_DPMS_OFF:
+                /* TODO */
+                if (manager_ops && manager_ops->disable)
+                        manager_ops->disable(manager->dev);
+                break;
+        default:
+                DRM_ERROR("unspecified mode %d\n", mode);
+                break;
+        }
+
         list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
                 if (connector->encoder == encoder) {
-                        struct exynos_drm_display *display = manager->display;
+                        struct exynos_drm_display_ops *display_ops =
+                                                        manager->display_ops;
 
-                        if (display && display->power_on)
-                                display->power_on(manager->dev, mode);
+                        DRM_DEBUG_KMS("connector[%d] dpms[%d]\n",
+                                        connector->base.id, mode);
+                        if (display_ops && display_ops->power_on)
+                                display_ops->power_on(manager->dev, mode);
                 }
         }
 }
···
 {
         struct exynos_drm_manager *manager = exynos_drm_get_manager(encoder);
         struct exynos_drm_manager_ops *manager_ops = manager->ops;
-        struct exynos_drm_overlay_ops *overlay_ops = manager->overlay_ops;
 
         DRM_DEBUG_KMS("%s\n", __FILE__);
 
         if (manager_ops && manager_ops->commit)
                 manager_ops->commit(manager->dev);
-
-        if (overlay_ops && overlay_ops->commit)
-                overlay_ops->commit(manager->dev);
 }
 
 static struct drm_crtc *
···
 {
         struct drm_device *dev = crtc->dev;
         struct drm_encoder *encoder;
+        struct exynos_drm_private *private = dev->dev_private;
+        struct exynos_drm_manager *manager;
 
         list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
-                if (encoder->crtc != crtc)
-                        continue;
+                /*
+                 * if crtc is detached from encoder, check pipe,
+                 * otherwise check crtc attached to encoder
+                 */
+                if (!encoder->crtc) {
+                        manager = to_exynos_encoder(encoder)->manager;
+                        if (manager->pipe < 0 ||
+                                        private->crtc[manager->pipe] != crtc)
+                                continue;
+                } else {
+                        if (encoder->crtc != crtc)
+                                continue;
+                }
 
                 fn(encoder, data);
         }
···
         struct exynos_drm_manager *manager =
                 to_exynos_encoder(encoder)->manager;
         struct exynos_drm_overlay_ops *overlay_ops = manager->overlay_ops;
+        int crtc = *(int *)data;
 
-        overlay_ops->commit(manager->dev);
+        DRM_DEBUG_KMS("%s\n", __FILE__);
+
+        /*
+         * when crtc is detached from encoder, this pipe is used
+         * to select manager operation
+         */
+        manager->pipe = crtc;
+
+        if (overlay_ops && overlay_ops->commit)
+                overlay_ops->commit(manager->dev);
 }
 
 void exynos_drm_encoder_crtc_mode_set(struct drm_encoder *encoder, void *data)
···
         struct exynos_drm_overlay_ops *overlay_ops = manager->overlay_ops;
         struct exynos_drm_overlay *overlay = data;
 
-        overlay_ops->mode_set(manager->dev, overlay);
+        if (overlay_ops && overlay_ops->mode_set)
+                overlay_ops->mode_set(manager->dev, overlay);
+}
+
+void exynos_drm_encoder_crtc_disable(struct drm_encoder *encoder, void *data)
+{
+        struct exynos_drm_manager *manager =
+                to_exynos_encoder(encoder)->manager;
+        struct exynos_drm_overlay_ops *overlay_ops = manager->overlay_ops;
+
+        DRM_DEBUG_KMS("\n");
+
+        if (overlay_ops && overlay_ops->disable)
+                overlay_ops->disable(manager->dev);
+
+        /*
+         * crtc is already detached from encoder and last
+         * function for detaching is properly done, so
+         * clear pipe from manager to prevent repeated call
+         */
+        if (!encoder->crtc)
+                manager->pipe = -1;
 }
 
 MODULE_AUTHOR("Inki Dae <inki.dae@samsung.com>");
drivers/gpu/drm/exynos/exynos_drm_fb.c
···
 #include "drmP.h"
 #include "drm_crtc.h"
 #include "drm_crtc_helper.h"
+#include "drm_fb_helper.h"
 
+#include "exynos_drm_drv.h"
 #include "exynos_drm_fb.h"
 #include "exynos_drm_buf.h"
 #include "exynos_drm_gem.h"
···
 *
 * @fb: drm framebuffer obejct.
 * @exynos_gem_obj: exynos specific gem object containing a gem object.
- * @entry: pointer to exynos drm buffer entry object.
- *        - containing only the information to physically continuous memory
- *        region allocated at default framebuffer creation.
+ * @buffer: pointer to exynos_drm_gem_buffer object.
+ *        - contain the memory information to memory region allocated
+ *        at default framebuffer creation.
 */
 struct exynos_drm_fb {
         struct drm_framebuffer                fb;
         struct exynos_drm_gem_obj        *exynos_gem_obj;
-        struct exynos_drm_buf_entry        *entry;
+        struct exynos_drm_gem_buf        *buffer;
 };
 
 static void exynos_drm_fb_destroy(struct drm_framebuffer *fb)
···
         * default framebuffer has no gem object so
         * a buffer of the default framebuffer should be released at here.
         */
-        if (!exynos_fb->exynos_gem_obj && exynos_fb->entry)
-                exynos_drm_buf_destroy(fb->dev, exynos_fb->entry);
+        if (!exynos_fb->exynos_gem_obj && exynos_fb->buffer)
+                exynos_drm_buf_destroy(fb->dev, exynos_fb->buffer);
 
         kfree(exynos_fb);
         exynos_fb = NULL;
···
         */
         if (!mode_cmd->handle) {
                 if (!file_priv) {
-                        struct exynos_drm_buf_entry *entry;
+                        struct exynos_drm_gem_buf *buffer;
 
                         /*
                         * in case that file_priv is NULL, it allocates
                         * only buffer and this buffer would be used
                         * for default framebuffer.
                         */
-                        entry = exynos_drm_buf_create(dev, size);
-                        if (IS_ERR(entry)) {
-                                ret = PTR_ERR(entry);
+                        buffer = exynos_drm_buf_create(dev, size);
+                        if (IS_ERR(buffer)) {
+                                ret = PTR_ERR(buffer);
                                 goto err_buffer;
                         }
 
-                        exynos_fb->entry = entry;
+                        exynos_fb->buffer = buffer;
 
-                        DRM_LOG_KMS("default fb: paddr = 0x%lx, size = 0x%x\n",
-                                        (unsigned long)entry->paddr, size);
+                        DRM_LOG_KMS("default: dma_addr = 0x%lx, size = 0x%x\n",
+                                        (unsigned long)buffer->dma_addr, size);
 
                         goto out;
                 } else {
-                        exynos_gem_obj = exynos_drm_gem_create(file_priv, dev,
-                                                        size,
-                                                        &mode_cmd->handle);
+                        exynos_gem_obj = exynos_drm_gem_create(dev, file_priv,
+                                                        &mode_cmd->handle,
+                                                        size);
                         if (IS_ERR(exynos_gem_obj)) {
                                 ret = PTR_ERR(exynos_gem_obj);
                                 goto err_buffer;
···
         * so that default framebuffer has no its own gem object,
         * only its own buffer object.
         */
-        exynos_fb->entry = exynos_gem_obj->entry;
+        exynos_fb->buffer = exynos_gem_obj->buffer;
 
-        DRM_LOG_KMS("paddr = 0x%lx, size = 0x%x, gem object = 0x%x\n",
-                        (unsigned long)exynos_fb->entry->paddr, size,
+        DRM_LOG_KMS("dma_addr = 0x%lx, size = 0x%x, gem object = 0x%x\n",
+                        (unsigned long)exynos_fb->buffer->dma_addr, size,
                         (unsigned int)&exynos_gem_obj->base);
 
 out:
···
         return exynos_drm_fb_init(file_priv, dev, mode_cmd);
 }
 
-struct exynos_drm_buf_entry *exynos_drm_fb_get_buf(struct drm_framebuffer *fb)
+struct exynos_drm_gem_buf *exynos_drm_fb_get_buf(struct drm_framebuffer *fb)
 {
         struct exynos_drm_fb *exynos_fb = to_exynos_fb(fb);
-        struct exynos_drm_buf_entry *entry;
+        struct exynos_drm_gem_buf *buffer;
 
         DRM_DEBUG_KMS("%s\n", __FILE__);
 
-        entry = exynos_fb->entry;
-        if (!entry)
+        buffer = exynos_fb->buffer;
+        if (!buffer)
                 return NULL;
 
-        DRM_DEBUG_KMS("vaddr = 0x%lx, paddr = 0x%lx\n",
-                        (unsigned long)entry->vaddr,
-                        (unsigned long)entry->paddr);
+        DRM_DEBUG_KMS("vaddr = 0x%lx, dma_addr = 0x%lx\n",
+                        (unsigned long)buffer->kvaddr,
+                        (unsigned long)buffer->dma_addr);
 
-        return entry;
+        return buffer;
+}
+
+static void exynos_drm_output_poll_changed(struct drm_device *dev)
+{
+        struct exynos_drm_private *private = dev->dev_private;
+        struct drm_fb_helper *fb_helper = private->fb_helper;
+
+        if (fb_helper)
+                drm_fb_helper_hotplug_event(fb_helper);
 }
 
 static struct drm_mode_config_funcs exynos_drm_mode_config_funcs = {
         .fb_create = exynos_drm_fb_create,
+        .output_poll_changed = exynos_drm_output_poll_changed,
 };
 
 void exynos_drm_mode_config_init(struct drm_device *dev)
+28 -16
drivers/gpu/drm/exynos/exynos_drm_fbdev.c
···
 
 #include "exynos_drm_drv.h"
 #include "exynos_drm_fb.h"
+#include "exynos_drm_gem.h"
 #include "exynos_drm_buf.h"
 
 #define MAX_CONNECTOR                4
···
 };
 
 static int exynos_drm_fbdev_update(struct drm_fb_helper *helper,
-                                     struct drm_framebuffer *fb,
-                                     unsigned int fb_width,
-                                     unsigned int fb_height)
+                                     struct drm_framebuffer *fb)
 {
         struct fb_info *fbi = helper->fbdev;
         struct drm_device *dev = helper->dev;
         struct exynos_drm_fbdev *exynos_fb = to_exynos_fbdev(helper);
-        struct exynos_drm_buf_entry *entry;
-        unsigned int size = fb_width * fb_height * (fb->bits_per_pixel >> 3);
+        struct exynos_drm_gem_buf *buffer;
+        unsigned int size = fb->width * fb->height * (fb->bits_per_pixel >> 3);
         unsigned long offset;
 
         DRM_DEBUG_KMS("%s\n", __FILE__);
···
         exynos_fb->fb = fb;
 
         drm_fb_helper_fill_fix(fbi, fb->pitch, fb->depth);
-        drm_fb_helper_fill_var(fbi, helper, fb_width, fb_height);
+        drm_fb_helper_fill_var(fbi, helper, fb->width, fb->height);
 
-        entry = exynos_drm_fb_get_buf(fb);
-        if (!entry) {
-                DRM_LOG_KMS("entry is null.\n");
+        buffer = exynos_drm_fb_get_buf(fb);
+        if (!buffer) {
+                DRM_LOG_KMS("buffer is null.\n");
                 return -EFAULT;
         }
 
         offset = fbi->var.xoffset * (fb->bits_per_pixel >> 3);
         offset += fbi->var.yoffset * fb->pitch;
 
-        dev->mode_config.fb_base = entry->paddr;
-        fbi->screen_base = entry->vaddr + offset;
-        fbi->fix.smem_start = entry->paddr + offset;
+        dev->mode_config.fb_base = (resource_size_t)buffer->dma_addr;
+        fbi->screen_base = buffer->kvaddr + offset;
+        fbi->fix.smem_start = (unsigned long)(buffer->dma_addr + offset);
         fbi->screen_size = size;
         fbi->fix.smem_len = size;
···
                 goto out;
         }
 
-        ret = exynos_drm_fbdev_update(helper, helper->fb, sizes->fb_width,
-                        sizes->fb_height);
+        ret = exynos_drm_fbdev_update(helper, helper->fb);
         if (ret < 0)
                 fb_dealloc_cmap(&fbi->cmap);
···
         }
 
         helper->fb = exynos_fbdev->fb;
-        return exynos_drm_fbdev_update(helper, helper->fb, sizes->fb_width,
-                        sizes->fb_height);
+        return exynos_drm_fbdev_update(helper, helper->fb);
 }
 
 static int exynos_drm_fbdev_probe(struct drm_fb_helper *helper,
···
         fb_helper = private->fb_helper;
 
         if (fb_helper) {
+                struct list_head temp_list;
+
+                INIT_LIST_HEAD(&temp_list);
+
+                /*
+                 * fb_helper is reintialized but kernel fb is reused
+                 * so kernel_fb_list need to be backuped and restored
+                 */
+                if (!list_empty(&fb_helper->kernel_fb_list))
+                        list_replace_init(&fb_helper->kernel_fb_list,
+                                        &temp_list);
+
                 drm_fb_helper_fini(fb_helper);
 
                 ret = drm_fb_helper_init(dev, fb_helper,
···
                         DRM_ERROR("failed to initialize drm fb helper\n");
                         return ret;
                 }
+
+                if (!list_empty(&temp_list))
+                        list_replace(&temp_list, &fb_helper->kernel_fb_list);
 
                 ret = drm_fb_helper_single_add_all_connectors(fb_helper);
                 if (ret < 0) {
+53 -18
drivers/gpu/drm/exynos/exynos_drm_fimd.c
···
         unsigned int                fb_width;
         unsigned int                fb_height;
         unsigned int                bpp;
-        dma_addr_t                paddr;
+        dma_addr_t                dma_addr;
         void __iomem                *vaddr;
         unsigned int                buf_offsize;
         unsigned int                line_size;        /* bytes */
···
         return 0;
 }
 
-static struct exynos_drm_display fimd_display = {
+static struct exynos_drm_display_ops fimd_display_ops = {
         .type = EXYNOS_DISPLAY_TYPE_LCD,
         .is_connected = fimd_display_is_connected,
         .get_timing = fimd_get_timing,
···
         writel(val, ctx->regs + VIDCON0);
 }
 
+static void fimd_disable(struct device *dev)
+{
+        struct fimd_context *ctx = get_fimd_context(dev);
+        struct exynos_drm_subdrv *subdrv = &ctx->subdrv;
+        struct drm_device *drm_dev = subdrv->drm_dev;
+        struct exynos_drm_manager *manager = &subdrv->manager;
+        u32 val;
+
+        DRM_DEBUG_KMS("%s\n", __FILE__);
+
+        /* fimd dma off */
+        val = readl(ctx->regs + VIDCON0);
+        val &= ~(VIDCON0_ENVID | VIDCON0_ENVID_F);
+        writel(val, ctx->regs + VIDCON0);
+
+        /*
+         * if vblank is enabled status with dma off then
+         * it disables vsync interrupt.
+         */
+        if (drm_dev->vblank_enabled[manager->pipe] &&
+                atomic_read(&drm_dev->vblank_refcount[manager->pipe])) {
+                drm_vblank_put(drm_dev, manager->pipe);
+
+                /*
+                 * if vblank_disable_allowed is 0 then disable
+                 * vsync interrupt right now else the vsync interrupt
+                 * would be disabled by drm timer once a current process
+                 * gives up ownershop of vblank event.
+                 */
+                if (!drm_dev->vblank_disable_allowed)
+                        drm_vblank_off(drm_dev, manager->pipe);
+        }
+}
+
 static int fimd_enable_vblank(struct device *dev)
 {
         struct fimd_context *ctx = get_fimd_context(dev);
···
 
 static struct exynos_drm_manager_ops fimd_manager_ops = {
         .commit = fimd_commit,
+        .disable = fimd_disable,
         .enable_vblank = fimd_enable_vblank,
         .disable_vblank = fimd_disable_vblank,
 };
···
         win_data->ovl_height = overlay->crtc_height;
         win_data->fb_width = overlay->fb_width;
         win_data->fb_height = overlay->fb_height;
-        win_data->paddr = overlay->paddr + offset;
+        win_data->dma_addr = overlay->dma_addr + offset;
         win_data->vaddr = overlay->vaddr + offset;
         win_data->bpp = overlay->bpp;
         win_data->buf_offsize = (overlay->fb_width - overlay->crtc_width) *
···
         DRM_DEBUG_KMS("ovl_width = %d, ovl_height = %d\n",
                         win_data->ovl_width, win_data->ovl_height);
         DRM_DEBUG_KMS("paddr = 0x%lx, vaddr = 0x%lx\n",
-                        (unsigned long)win_data->paddr,
+                        (unsigned long)win_data->dma_addr,
                         (unsigned long)win_data->vaddr);
         DRM_DEBUG_KMS("fb_width = %d, crtc_width = %d\n",
                         overlay->fb_width, overlay->crtc_width);
···
         writel(val, ctx->regs + SHADOWCON);
 
         /* buffer start address */
-        val = win_data->paddr;
+        val = (unsigned long)win_data->dma_addr;
         writel(val, ctx->regs + VIDWx_BUF_START(win, 0));
 
         /* buffer end address */
         size = win_data->fb_width * win_data->ovl_height * (win_data->bpp >> 3);
-        val = win_data->paddr + size;
+        val = (unsigned long)(win_data->dma_addr + size);
         writel(val, ctx->regs + VIDWx_BUF_END(win, 0));
 
         DRM_DEBUG_KMS("start addr = 0x%lx, end addr = 0x%lx, size = 0x%lx\n",
-                        (unsigned long)win_data->paddr, val, size);
+                        (unsigned long)win_data->dma_addr, val, size);
         DRM_DEBUG_KMS("ovl_width = %d, ovl_height = %d\n",
                         win_data->ovl_width, win_data->ovl_height);
···
 static void fimd_win_disable(struct device *dev)
 {
         struct fimd_context *ctx = get_fimd_context(dev);
-        struct fimd_win_data *win_data;
         int win = ctx->default_win;
         u32 val;
···
 
         if (win < 0 || win > WINDOWS_NR)
                 return;
-
-        win_data = &ctx->win_data[win];
 
         /* protect windows */
         val = readl(ctx->regs + SHADOWCON);
···
         /* VSYNC interrupt */
         writel(VIDINTCON1_INT_FRAME, ctx->regs + VIDINTCON1);
 
+        /*
+         * in case that vblank_disable_allowed is 1, it could induce
+         * the problem that manager->pipe could be -1 because with
+         * disable callback, vsync interrupt isn't disabled and at this moment,
+         * vsync interrupt could occur. the vsync interrupt would be disabled
+         * by timer handler later.
+         */
+        if (manager->pipe == -1)
+                return IRQ_HANDLED;
+
         drm_handle_vblank(drm_dev, manager->pipe);
         fimd_finish_pageflip(drm_dev, manager->pipe);
···
         * drm framework supports only one irq handler.
         */
         drm_dev->irq_enabled = 1;
-
-        /*
-         * with vblank_disable_allowed = 1, vblank interrupt will be disabled
-         * by drm timer once a current process gives up ownership of
-         * vblank event.(drm_vblank_put function was called)
-         */
-        drm_dev->vblank_disable_allowed = 1;
 
         return 0;
 }
···
         subdrv->manager.pipe = -1;
         subdrv->manager.ops = &fimd_manager_ops;
         subdrv->manager.overlay_ops = &fimd_overlay_ops;
-        subdrv->manager.display = &fimd_display;
+        subdrv->manager.display_ops = &fimd_display_ops;
         subdrv->manager.dev = dev;
 
         platform_set_drvdata(pdev, ctx);
+52 -37
drivers/gpu/drm/exynos/exynos_drm_gem.c
···
         return (unsigned int)obj->map_list.hash.key << PAGE_SHIFT;
 }
 
-struct exynos_drm_gem_obj *exynos_drm_gem_create(struct drm_file *file_priv,
-                struct drm_device *dev, unsigned int size,
-                unsigned int *handle)
+static struct exynos_drm_gem_obj
+                *exynos_drm_gem_init(struct drm_device *drm_dev,
+                        struct drm_file *file_priv, unsigned int *handle,
+                        unsigned int size)
 {
         struct exynos_drm_gem_obj *exynos_gem_obj;
-        struct exynos_drm_buf_entry *entry;
         struct drm_gem_object *obj;
         int ret;
-
-        DRM_DEBUG_KMS("%s\n", __FILE__);
-
-        size = roundup(size, PAGE_SIZE);
 
         exynos_gem_obj = kzalloc(sizeof(*exynos_gem_obj), GFP_KERNEL);
         if (!exynos_gem_obj) {
···
                 return ERR_PTR(-ENOMEM);
         }
 
-        /* allocate the new buffer object and memory region. */
-        entry = exynos_drm_buf_create(dev, size);
-        if (!entry) {
-                kfree(exynos_gem_obj);
-                return ERR_PTR(-ENOMEM);
-        }
-
-        exynos_gem_obj->entry = entry;
-
         obj = &exynos_gem_obj->base;
 
-        ret = drm_gem_object_init(dev, obj, size);
+        ret = drm_gem_object_init(drm_dev, obj, size);
         if (ret < 0) {
-                DRM_ERROR("failed to initailize gem object.\n");
-                goto err_obj_init;
+                DRM_ERROR("failed to initialize gem object.\n");
+                ret = -EINVAL;
+                goto err_object_init;
         }
 
         DRM_DEBUG_KMS("created file object = 0x%x\n", (unsigned int)obj->filp);
···
 err_create_mmap_offset:
         drm_gem_object_release(obj);
 
-err_obj_init:
-        exynos_drm_buf_destroy(dev, exynos_gem_obj->entry);
-
+err_object_init:
         kfree(exynos_gem_obj);
 
         return ERR_PTR(ret);
 }
 
+struct exynos_drm_gem_obj *exynos_drm_gem_create(struct drm_device *dev,
+                                struct drm_file *file_priv,
+                                unsigned int *handle, unsigned long size)
+{
+
+        struct exynos_drm_gem_obj *exynos_gem_obj = NULL;
+        struct exynos_drm_gem_buf *buffer;
+
+        size = roundup(size, PAGE_SIZE);
+
+        DRM_DEBUG_KMS("%s: size = 0x%lx\n", __FILE__, size);
+
+        buffer = exynos_drm_buf_create(dev, size);
+        if (IS_ERR(buffer)) {
+                return ERR_CAST(buffer);
+        }
+
+        exynos_gem_obj = exynos_drm_gem_init(dev, file_priv, handle, size);
+        if (IS_ERR(exynos_gem_obj)) {
+                exynos_drm_buf_destroy(dev, buffer);
+                return exynos_gem_obj;
+        }
+
+        exynos_gem_obj->buffer = buffer;
+
+        return exynos_gem_obj;
+}
+
 int exynos_drm_gem_create_ioctl(struct drm_device *dev, void *data,
-                struct drm_file *file_priv)
+                                struct drm_file *file_priv)
 {
         struct drm_exynos_gem_create *args = data;
-        struct exynos_drm_gem_obj *exynos_gem_obj;
+        struct exynos_drm_gem_obj *exynos_gem_obj = NULL;
 
-        DRM_DEBUG_KMS("%s : size = 0x%x\n", __FILE__, args->size);
+        DRM_DEBUG_KMS("%s\n", __FILE__);
 
-        exynos_gem_obj = exynos_drm_gem_create(file_priv, dev, args->size,
-                        &args->handle);
+        exynos_gem_obj = exynos_drm_gem_create(dev, file_priv,
+                                        &args->handle, args->size);
         if (IS_ERR(exynos_gem_obj))
                 return PTR_ERR(exynos_gem_obj);
···
 {
         struct drm_gem_object *obj = filp->private_data;
         struct exynos_drm_gem_obj *exynos_gem_obj = to_exynos_gem_obj(obj);
-        struct exynos_drm_buf_entry *entry;
+        struct exynos_drm_gem_buf *buffer;
         unsigned long pfn, vm_size;
 
         DRM_DEBUG_KMS("%s\n", __FILE__);
···
 
         vm_size = vma->vm_end - vma->vm_start;
         /*
-         * a entry contains information to physically continuous memory
+         * a buffer contains information to physically continuous memory
         * allocated by user request or at framebuffer creation.
         */
-        entry = exynos_gem_obj->entry;
+        buffer = exynos_gem_obj->buffer;
 
         /* check if user-requested size is valid. */
-        if (vm_size > entry->size)
+        if (vm_size > buffer->size)
                 return -EINVAL;
 
         /*
         * get page frame number to physical memory to be mapped
         * to user space.
         */
-        pfn = exynos_gem_obj->entry->paddr >> PAGE_SHIFT;
+        pfn = ((unsigned long)exynos_gem_obj->buffer->dma_addr) >> PAGE_SHIFT;
 
         DRM_DEBUG_KMS("pfn = 0x%lx\n", pfn);
···
 
         exynos_gem_obj = to_exynos_gem_obj(gem_obj);
 
-        exynos_drm_buf_destroy(gem_obj->dev, exynos_gem_obj->entry);
+        exynos_drm_buf_destroy(gem_obj->dev, exynos_gem_obj->buffer);
 
         kfree(exynos_gem_obj);
 }
···
         args->pitch = args->width * args->bpp >> 3;
         args->size = args->pitch * args->height;
 
-        exynos_gem_obj = exynos_drm_gem_create(file_priv, dev, args->size,
-                        &args->handle);
+        exynos_gem_obj = exynos_drm_gem_create(dev, file_priv, &args->handle,
+                                                        args->size);
         if (IS_ERR(exynos_gem_obj))
                 return PTR_ERR(exynos_gem_obj);
···
 
         mutex_lock(&dev->struct_mutex);
 
-        pfn = (exynos_gem_obj->entry->paddr >> PAGE_SHIFT) + page_offset;
+        pfn = (((unsigned long)exynos_gem_obj->buffer->dma_addr) >>
+                        PAGE_SHIFT) + page_offset;
 
         ret = vm_insert_mixed(vma, (unsigned long)vmf->virtual_address, pfn);
+22 -6
drivers/gpu/drm/exynos/exynos_drm_gem.h
···
                 struct exynos_drm_gem_obj, base)
 
 /*
+ * exynos drm gem buffer structure.
+ *
+ * @kvaddr: kernel virtual address to allocated memory region.
+ * @dma_addr: bus address(accessed by dma) to allocated memory region.
+ *        - this address could be physical address without IOMMU and
+ *        device address with IOMMU.
+ * @size: size of allocated memory region.
+ */
+struct exynos_drm_gem_buf {
+        void __iomem                *kvaddr;
+        dma_addr_t                dma_addr;
+        unsigned long                size;
+};
+
+/*
 * exynos drm buffer structure.
 *
 * @base: a gem object.
 *        - a new handle to this gem object would be created
 *        by drm_gem_handle_create().
- * @entry: pointer to exynos drm buffer entry object.
- *        - containing the information to physically
+ * @buffer: a pointer to exynos_drm_gem_buffer object.
+ *        - contain the information to memory region allocated
+ *        by user request or at framebuffer creation.
 *        continuous memory region allocated by user request
 *        or at framebuffer creation.
 *
···
 */
 struct exynos_drm_gem_obj {
         struct drm_gem_object        base;
-        struct exynos_drm_buf_entry        *entry;
+        struct exynos_drm_gem_buf        *buffer;
 };
 
 /* create a new buffer and get a new gem handle. */
-struct exynos_drm_gem_obj *exynos_drm_gem_create(struct drm_file *file_priv,
-                struct drm_device *dev, unsigned int size,
-                unsigned int *handle);
+struct exynos_drm_gem_obj *exynos_drm_gem_create(struct drm_device *dev,
+                                        struct drm_file *file_priv,
+                                        unsigned int *handle, unsigned long size);
 
 /*
 * request gem object creation and buffer allocation as the size
···
 		SYM_LSB(IBCCtrlA_0, MaxPktLen);
 	ppd->cpspec->ibcctrl_a = ibc; /* without linkcmd or linkinitcmd! */
 
-	/* initially come up waiting for TS1, without sending anything. */
-	val = ppd->cpspec->ibcctrl_a | (QLOGIC_IB_IBCC_LINKINITCMD_DISABLE <<
-		QLOGIC_IB_IBCC_LINKINITCMD_SHIFT);
-
-	ppd->cpspec->ibcctrl_a = val;
 	/*
 	 * Reset the PCS interface to the serdes (and also ibc, which is still
 	 * in reset from above).  Writes new value of ibcctrl_a as last step.
 	 */
 	qib_7322_mini_pcs_reset(ppd);
-	qib_write_kreg(dd, kr_scratch, 0ULL);
-	/* clear the linkinit cmds */
-	ppd->cpspec->ibcctrl_a &= ~SYM_MASK(IBCCtrlA_0, LinkInitCmd);
 
 	if (!ppd->cpspec->ibcctrl_b) {
 		unsigned lse = ppd->link_speed_enabled;
···
 	/* Enable port */
 	ppd->cpspec->ibcctrl_a |= SYM_MASK(IBCCtrlA_0, IBLinkEn);
 	set_vls(ppd);
+
+	/* initially come up DISABLED, without sending anything. */
+	val = ppd->cpspec->ibcctrl_a | (QLOGIC_IB_IBCC_LINKINITCMD_DISABLE <<
+		QLOGIC_IB_IBCC_LINKINITCMD_SHIFT);
+	qib_write_kreg_port(ppd, krp_ibcctrl_a, val);
+	qib_write_kreg(dd, kr_scratch, 0ULL);
+	/* clear the linkinit cmds */
+	ppd->cpspec->ibcctrl_a = val & ~SYM_MASK(IBCCtrlA_0, LinkInitCmd);
 
 	/* be paranoid against later code motion, etc. */
 	spin_lock_irqsave(&dd->cspec->rcvmod_lock, flags);
···
 		   off */
 		if (ppd->dd->flags & QIB_HAS_QSFP) {
 			qd->t_insert = get_jiffies_64();
-			schedule_work(&qd->work);
+			queue_work(ib_wq, &qd->work);
 		}
 		spin_lock_irqsave(&ppd->sdma_lock, flags);
 		if (__qib_sdma_running(ppd))
-12
drivers/infiniband/hw/qib/qib_qsfp.c
···
 	udelay(20); /* Generous RST dwell */
 
 	dd->f_gpio_mod(dd, mask, mask, mask);
-	/* Spec says module can take up to two seconds! */
-	mask = QSFP_GPIO_MOD_PRS_N;
-	if (qd->ppd->hw_pidx)
-		mask <<= QSFP_GPIO_PORT2_SHIFT;
-
-	/* Do not try to wait here. Better to let event handle it */
-	if (!qib_qsfp_mod_present(qd->ppd))
-		goto bail;
-	/* We see a module, but it may be unwise to look yet. Just schedule */
-	qd->t_insert = get_jiffies_64();
-	queue_work(ib_wq, &qd->work);
-bail:
 	return;
 }
···
 
 	spin_lock_irqsave(&priv->lock, flags);
 
-	if (ah) {
+	if (!IS_ERR_OR_NULL(ah)) {
 		path->pathrec = *pathrec;
 
 		old_ah = path->ah;
···
 	return 0;
 }
 
+/* called with rcu_read_lock */
 static void neigh_add_path(struct sk_buff *skb, struct net_device *dev)
 {
 	struct ipoib_dev_priv *priv = netdev_priv(dev);
···
 	spin_unlock_irqrestore(&priv->lock, flags);
 }
 
+/* called with rcu_read_lock */
 static void ipoib_path_lookup(struct sk_buff *skb, struct net_device *dev)
 {
 	struct ipoib_dev_priv *priv = netdev_priv(skb->dev);
···
 	struct neighbour *n = NULL;
 	unsigned long flags;
 
+	rcu_read_lock();
 	if (likely(skb_dst(skb)))
 		n = dst_get_neighbour(skb_dst(skb));
 
 	if (likely(n)) {
 		if (unlikely(!*to_ipoib_neigh(n))) {
 			ipoib_path_lookup(skb, dev);
-			return NETDEV_TX_OK;
+			goto unlock;
 		}
 
 		neigh = *to_ipoib_neigh(n);
···
 			ipoib_neigh_free(dev, neigh);
 			spin_unlock_irqrestore(&priv->lock, flags);
 			ipoib_path_lookup(skb, dev);
-			return NETDEV_TX_OK;
+			goto unlock;
 		}
 
 		if (ipoib_cm_get(neigh)) {
 			if (ipoib_cm_up(neigh)) {
 				ipoib_cm_send(dev, skb, ipoib_cm_get(neigh));
-				return NETDEV_TX_OK;
+				goto unlock;
 			}
 		} else if (neigh->ah) {
 			ipoib_send(dev, skb, neigh->ah, IPOIB_QPN(n->ha));
-			return NETDEV_TX_OK;
+			goto unlock;
 		}
 
 		if (skb_queue_len(&neigh->queue) < IPOIB_MAX_PATH_REC_QUEUE) {
···
 					   phdr->hwaddr + 4);
 				dev_kfree_skb_any(skb);
 				++dev->stats.tx_dropped;
-				return NETDEV_TX_OK;
+				goto unlock;
 			}
 
 			unicast_arp_send(skb, dev, phdr);
 		}
 	}
-
+unlock:
+	rcu_read_unlock();
 	return NETDEV_TX_OK;
 }
···
 	dst = skb_dst(skb);
 	n = NULL;
 	if (dst)
-		n = dst_get_neighbour(dst);
+		n = dst_get_neighbour_raw(dst);
 	if ((!dst || !n) && daddr) {
 		struct ipoib_pseudoheader *phdr =
 			(struct ipoib_pseudoheader *) skb_push(skb, sizeof *phdr);
+9-4
drivers/infiniband/ulp/ipoib/ipoib_multicast.c
···
 		av.grh.dgid = mcast->mcmember.mgid;
 
 		ah = ipoib_create_ah(dev, priv->pd, &av);
-		if (!ah) {
-			ipoib_warn(priv, "ib_address_create failed\n");
+		if (IS_ERR(ah)) {
+			ipoib_warn(priv, "ib_address_create failed %ld\n",
+				   -PTR_ERR(ah));
+			/* use original error */
+			return PTR_ERR(ah);
 		} else {
 			spin_lock_irq(&priv->lock);
 			mcast->ah = ah;
···
 
 		skb->dev = dev;
 		if (dst)
-			n = dst_get_neighbour(dst);
+			n = dst_get_neighbour_raw(dst);
 		if (!dst || !n) {
 			/* put pseudoheader back on for next time */
 			skb_push(skb, sizeof (struct ipoib_pseudoheader));
···
 	if (mcast && mcast->ah) {
 		struct dst_entry *dst = skb_dst(skb);
 		struct neighbour *n = NULL;
+
+		rcu_read_lock();
 		if (dst)
 			n = dst_get_neighbour(dst);
 		if (n && !*to_ipoib_neigh(n)) {
···
 				list_add_tail(&neigh->list, &mcast->neigh_list);
 			}
 		}
-
+		rcu_read_unlock();
 		spin_unlock_irqrestore(&priv->lock, flags);
 		ipoib_send(dev, skb, mcast->ah, IB_MULTICAST_QPN);
 		return;
+2-1
drivers/net/wireless/ath/ath9k/hw.c
···
 	}
 
 	/* Clear Bit 14 of AR_WA after putting chip into Full Sleep mode. */
-	REG_WRITE(ah, AR_WA, ah->WARegVal & ~AR_WA_D3_L1_DISABLE);
+	if (AR_SREV_9300_20_OR_LATER(ah))
+		REG_WRITE(ah, AR_WA, ah->WARegVal & ~AR_WA_D3_L1_DISABLE);
 }
 
 /*
···
 	return ret;
 }
 
-int btrfs_block_rsv_refill(struct btrfs_root *root,
-			   struct btrfs_block_rsv *block_rsv,
-			   u64 min_reserved)
+static inline int __btrfs_block_rsv_refill(struct btrfs_root *root,
+					   struct btrfs_block_rsv *block_rsv,
+					   u64 min_reserved, int flush)
 {
 	u64 num_bytes = 0;
 	int ret = -ENOSPC;
···
 	if (!ret)
 		return 0;
 
-	ret = reserve_metadata_bytes(root, block_rsv, num_bytes, 1);
+	ret = reserve_metadata_bytes(root, block_rsv, num_bytes, flush);
 	if (!ret) {
 		block_rsv_add_bytes(block_rsv, num_bytes, 0);
 		return 0;
 	}
 
 	return ret;
+}
+
+int btrfs_block_rsv_refill(struct btrfs_root *root,
+			   struct btrfs_block_rsv *block_rsv,
+			   u64 min_reserved)
+{
+	return __btrfs_block_rsv_refill(root, block_rsv, min_reserved, 1);
+}
+
+int btrfs_block_rsv_refill_noflush(struct btrfs_root *root,
+				   struct btrfs_block_rsv *block_rsv,
+				   u64 min_reserved)
+{
+	return __btrfs_block_rsv_refill(root, block_rsv, min_reserved, 0);
 }
 
 int btrfs_block_rsv_migrate(struct btrfs_block_rsv *src_rsv,
···
 	spin_lock(&block_group->free_space_ctl->tree_lock);
 	if (cached &&
 	    block_group->free_space_ctl->free_space <
-	    num_bytes + empty_size) {
+	    num_bytes + empty_cluster + empty_size) {
 		spin_unlock(&block_group->free_space_ctl->tree_lock);
 		goto loop;
 	}
···
 	 * people trying to start a new cluster
 	 */
 	spin_lock(&last_ptr->refill_lock);
-	if (last_ptr->block_group &&
-	    (last_ptr->block_group->ro ||
-	    !block_group_bits(last_ptr->block_group, data))) {
-		offset = 0;
+	if (!last_ptr->block_group ||
+	    last_ptr->block_group->ro ||
+	    !block_group_bits(last_ptr->block_group, data))
 		goto refill_cluster;
-	}
 
 	offset = btrfs_alloc_from_cluster(block_group, last_ptr,
 					  num_bytes, search_start);
···
 	/* allocate a cluster in this block group */
 	ret = btrfs_find_space_cluster(trans, root,
 				       block_group, last_ptr,
-				       offset, num_bytes,
+				       search_start, num_bytes,
 				       empty_cluster + empty_size);
 	if (ret == 0) {
 		/*
+20-7
fs/btrfs/extent_io.c
···
 		if (!uptodate) {
 			int failed_mirror;
 			failed_mirror = (int)(unsigned long)bio->bi_bdev;
-			if (tree->ops && tree->ops->readpage_io_failed_hook)
-				ret = tree->ops->readpage_io_failed_hook(
-						bio, page, start, end,
-						failed_mirror, state);
-			else
-				ret = bio_readpage_error(bio, page, start, end,
-						failed_mirror, NULL);
+			/*
+			 * The generic bio_readpage_error handles errors the
+			 * following way: If possible, new read requests are
+			 * created and submitted and will end up in
+			 * end_bio_extent_readpage as well (if we're lucky, not
+			 * in the !uptodate case). In that case it returns 0 and
+			 * we just go on with the next page in our bio. If it
+			 * can't handle the error it will return -EIO and we
+			 * remain responsible for that page.
+			 */
+			ret = bio_readpage_error(bio, page, start, end,
+						failed_mirror, NULL);
 			if (ret == 0) {
+error_handled:
 				uptodate =
 					test_bit(BIO_UPTODATE, &bio->bi_flags);
 				if (err)
 					uptodate = 0;
 				uncache_state(&cached);
 				continue;
+			}
+			if (tree->ops && tree->ops->readpage_io_failed_hook) {
+				ret = tree->ops->readpage_io_failed_hook(
+						bio, page, start, end,
+						failed_mirror, state);
+				if (ret == 0)
+					goto error_handled;
 			}
 		}
···
 	 * doing the truncate.
 	 */
 	while (1) {
-		ret = btrfs_block_rsv_refill(root, rsv, min_size);
+		ret = btrfs_block_rsv_refill_noflush(root, rsv, min_size);
 
 		/*
 		 * Try and steal from the global reserve since we will
+1-1
fs/btrfs/ioctl.c
···
 		}
 		ret = btrfs_grow_device(trans, device, new_size);
 		btrfs_commit_transaction(trans, root);
-	} else {
+	} else if (new_size < old_size) {
 		ret = btrfs_shrink_device(device, new_size);
 	}
+5
fs/btrfs/scrub.c
···
 	btrfs_release_path(swarn->path);
 
 	ipath = init_ipath(4096, local_root, swarn->path);
+	if (IS_ERR(ipath)) {
+		ret = PTR_ERR(ipath);
+		ipath = NULL;
+		goto err;
+	}
 	ret = paths_from_inode(inum, ipath);
 
 	if (ret < 0)
+3-3
fs/btrfs/super.c
···
 	int i = 0, nr_devices;
 	int ret;
 
-	nr_devices = fs_info->fs_devices->rw_devices;
+	nr_devices = fs_info->fs_devices->open_devices;
 	BUG_ON(!nr_devices);
 
 	devices_info = kmalloc(sizeof(*devices_info) * nr_devices,
···
 	else
 		min_stripe_size = BTRFS_STRIPE_LEN;
 
-	list_for_each_entry(device, &fs_devices->alloc_list, dev_alloc_list) {
-		if (!device->in_fs_metadata)
+	list_for_each_entry(device, &fs_devices->devices, dev_list) {
+		if (!device->in_fs_metadata || !device->bdev)
 			continue;
 
 		avail_space = device->total_bytes - device->bytes_used;
+1-1
fs/ext4/inode.c
···
 	spin_unlock_irqrestore(&ei->i_completed_io_lock, flags);
 
 	/* queue the work to convert unwritten extents to written */
-	queue_work(wq, &io_end->work);
 	iocb->private = NULL;
+	queue_work(wq, &io_end->work);
 
 	/* XXX: probably should move into the real I/O completion handler */
 	inode_dio_done(inode);
···290290 }291291292292 if (down_read_trylock(&oi->ip_alloc_sem) == 0) {293293+ /*294294+ * Unlock the page and cycle ip_alloc_sem so that we don't295295+ * busyloop waiting for ip_alloc_sem to unlock296296+ */293297 ret = AOP_TRUNCATED_PAGE;298298+ unlock_page(page);299299+ unlock = 0;300300+ down_read(&oi->ip_alloc_sem);301301+ up_read(&oi->ip_alloc_sem);294302 goto out_inode_unlock;295303 }296304···571563{572564 struct inode *inode = iocb->ki_filp->f_path.dentry->d_inode;573565 int level;566566+ wait_queue_head_t *wq = ocfs2_ioend_wq(inode);574567575568 /* this io's submitter should not have unlocked this before we could */576569 BUG_ON(!ocfs2_iocb_is_rw_locked(iocb));577570578571 if (ocfs2_iocb_is_sem_locked(iocb))579572 ocfs2_iocb_clear_sem_locked(iocb);573573+574574+ if (ocfs2_iocb_is_unaligned_aio(iocb)) {575575+ ocfs2_iocb_clear_unaligned_aio(iocb);576576+577577+ if (atomic_dec_and_test(&OCFS2_I(inode)->ip_unaligned_aio) &&578578+ waitqueue_active(wq)) {579579+ wake_up_all(wq);580580+ }581581+ }580582581583 ocfs2_iocb_clear_rw_locked(iocb);582584···881863 struct page *w_target_page;882864883865 /*866866+ * w_target_locked is used for page_mkwrite path indicating no unlocking867867+ * against w_target_page in ocfs2_write_end_nolock.868868+ */869869+ unsigned int w_target_locked:1;870870+871871+ /*884872 * ocfs2_write_end() uses this to know what the real range to885873 * write in the target should be.886874 */···919895920896static void ocfs2_free_write_ctxt(struct ocfs2_write_ctxt *wc)921897{898898+ int i;899899+900900+ /*901901+ * w_target_locked is only set to true in the page_mkwrite() case.902902+ * The intent is to allow us to lock the target page from write_begin()903903+ * to write_end(). 
The caller must hold a ref on w_target_page.904904+ */905905+ if (wc->w_target_locked) {906906+ BUG_ON(!wc->w_target_page);907907+ for (i = 0; i < wc->w_num_pages; i++) {908908+ if (wc->w_target_page == wc->w_pages[i]) {909909+ wc->w_pages[i] = NULL;910910+ break;911911+ }912912+ }913913+ mark_page_accessed(wc->w_target_page);914914+ page_cache_release(wc->w_target_page);915915+ }922916 ocfs2_unlock_and_free_pages(wc->w_pages, wc->w_num_pages);923917924918 brelse(wc->w_di_bh);···11741132 */11751133 lock_page(mmap_page);1176113411351135+ /* Exit and let the caller retry */11771136 if (mmap_page->mapping != mapping) {11371137+ WARN_ON(mmap_page->mapping);11781138 unlock_page(mmap_page);11791179- /*11801180- * Sanity check - the locking in11811181- * ocfs2_pagemkwrite() should ensure11821182- * that this code doesn't trigger.11831183- */11841184- ret = -EINVAL;11851185- mlog_errno(ret);11391139+ ret = -EAGAIN;11861140 goto out;11871141 }1188114211891143 page_cache_get(mmap_page);11901144 wc->w_pages[i] = mmap_page;11451145+ wc->w_target_locked = true;11911146 } else {11921147 wc->w_pages[i] = find_or_create_page(mapping, index,11931148 GFP_NOFS);···11991160 wc->w_target_page = wc->w_pages[i];12001161 }12011162out:11631163+ if (ret)11641164+ wc->w_target_locked = false;12021165 return ret;12031166}12041167···18581817 */18591818 ret = ocfs2_grab_pages_for_write(mapping, wc, wc->w_cpos, pos, len,18601819 cluster_of_pages, mmap_page);18611861- if (ret) {18201820+ if (ret && ret != -EAGAIN) {18621821 mlog_errno(ret);18221822+ goto out_quota;18231823+ }18241824+18251825+ /*18261826+ * ocfs2_grab_pages_for_write() returns -EAGAIN if it could not lock18271827+ * the target page. In this case, we exit with no error and no target18281828+ * page. This will trigger the caller, page_mkwrite(), to re-try18291829+ * the operation.18301830+ */18311831+ if (ret == -EAGAIN) {18321832+ BUG_ON(wc->w_target_page);18331833+ ret = 0;18631834 goto out_quota;18641835 }18651836
+14
fs/ocfs2/aops.h
···
 	OCFS2_IOCB_RW_LOCK = 0,
 	OCFS2_IOCB_RW_LOCK_LEVEL,
 	OCFS2_IOCB_SEM,
+	OCFS2_IOCB_UNALIGNED_IO,
 	OCFS2_IOCB_NUM_LOCKS
 };
 
···
 	clear_bit(OCFS2_IOCB_SEM, (unsigned long *)&iocb->private)
 #define ocfs2_iocb_is_sem_locked(iocb) \
 	test_bit(OCFS2_IOCB_SEM, (unsigned long *)&iocb->private)
+
+#define ocfs2_iocb_set_unaligned_aio(iocb) \
+	set_bit(OCFS2_IOCB_UNALIGNED_IO, (unsigned long *)&iocb->private)
+#define ocfs2_iocb_clear_unaligned_aio(iocb) \
+	clear_bit(OCFS2_IOCB_UNALIGNED_IO, (unsigned long *)&iocb->private)
+#define ocfs2_iocb_is_unaligned_aio(iocb) \
+	test_bit(OCFS2_IOCB_UNALIGNED_IO, (unsigned long *)&iocb->private)
+
+#define OCFS2_IOEND_WQ_HASH_SZ 37
+#define ocfs2_ioend_wq(v)   (&ocfs2__ioend_wq[((unsigned long)(v)) %\
+					      OCFS2_IOEND_WQ_HASH_SZ])
+extern wait_queue_head_t ocfs2__ioend_wq[OCFS2_IOEND_WQ_HASH_SZ];
+
 #endif /* OCFS2_FILE_H */
+123-73
fs/ocfs2/cluster/heartbeat.c
···216216217217 struct list_head hr_all_item;218218 unsigned hr_unclean_stop:1,219219+ hr_aborted_start:1,219220 hr_item_pinned:1,220221 hr_item_dropped:1;221222···254253 * has reached a 'steady' state. This will be fixed when we have255254 * a more complete api that doesn't lead to this sort of fragility. */256255 atomic_t hr_steady_iterations;256256+257257+ /* terminate o2hb thread if it does not reach steady state258258+ * (hr_steady_iterations == 0) within hr_unsteady_iterations */259259+ atomic_t hr_unsteady_iterations;257260258261 char hr_dev_name[BDEVNAME_SIZE];259262···329324330325static void o2hb_arm_write_timeout(struct o2hb_region *reg)331326{327327+ /* Arm writeout only after thread reaches steady state */328328+ if (atomic_read(®->hr_steady_iterations) != 0)329329+ return;330330+332331 mlog(ML_HEARTBEAT, "Queue write timeout for %u ms\n",333332 O2HB_MAX_WRITE_TIMEOUT_MS);334333···546537 return read == computed;547538}548539549549-/* We want to make sure that nobody is heartbeating on top of us --550550- * this will help detect an invalid configuration. */551551-static void o2hb_check_last_timestamp(struct o2hb_region *reg)540540+/*541541+ * Compare the slot data with what we wrote in the last iteration.542542+ * If the match fails, print an appropriate error message. This is to543543+ * detect errors like... 
another node hearting on the same slot,544544+ * flaky device that is losing writes, etc.545545+ * Returns 1 if check succeeds, 0 otherwise.546546+ */547547+static int o2hb_check_own_slot(struct o2hb_region *reg)552548{553549 struct o2hb_disk_slot *slot;554550 struct o2hb_disk_heartbeat_block *hb_block;···562548 slot = ®->hr_slots[o2nm_this_node()];563549 /* Don't check on our 1st timestamp */564550 if (!slot->ds_last_time)565565- return;551551+ return 0;566552567553 hb_block = slot->ds_raw_block;568554 if (le64_to_cpu(hb_block->hb_seq) == slot->ds_last_time &&569555 le64_to_cpu(hb_block->hb_generation) == slot->ds_last_generation &&570556 hb_block->hb_node == slot->ds_node_num)571571- return;557557+ return 1;572558573559#define ERRSTR1 "Another node is heartbeating on device"574560#define ERRSTR2 "Heartbeat generation mismatch on device"···588574 (unsigned long long)slot->ds_last_time, hb_block->hb_node,589575 (unsigned long long)le64_to_cpu(hb_block->hb_generation),590576 (unsigned long long)le64_to_cpu(hb_block->hb_seq));577577+578578+ return 0;591579}592580593581static inline void o2hb_prepare_block(struct o2hb_region *reg,···735719 o2nm_node_put(node);736720}737721738738-static void o2hb_set_quorum_device(struct o2hb_region *reg,739739- struct o2hb_disk_slot *slot)722722+static void o2hb_set_quorum_device(struct o2hb_region *reg)740723{741741- assert_spin_locked(&o2hb_live_lock);742742-743724 if (!o2hb_global_heartbeat_active())744725 return;745726746746- if (test_bit(reg->hr_region_num, o2hb_quorum_region_bitmap))727727+ /* Prevent race with o2hb_heartbeat_group_drop_item() */728728+ if (kthread_should_stop())747729 return;730730+731731+ /* Tag region as quorum only after thread reaches steady state */732732+ if (atomic_read(®->hr_steady_iterations) != 0)733733+ return;734734+735735+ spin_lock(&o2hb_live_lock);736736+737737+ if (test_bit(reg->hr_region_num, o2hb_quorum_region_bitmap))738738+ goto unlock;748739749740 /*750741 * A region can be added to the 
quorum only when it sees all···760737 */761738 if (memcmp(reg->hr_live_node_bitmap, o2hb_live_node_bitmap,762739 sizeof(o2hb_live_node_bitmap)))763763- return;740740+ goto unlock;764741765765- if (slot->ds_changed_samples < O2HB_LIVE_THRESHOLD)766766- return;767767-768768- printk(KERN_NOTICE "o2hb: Region %s is now a quorum device\n",769769- config_item_name(®->hr_item));742742+ printk(KERN_NOTICE "o2hb: Region %s (%s) is now a quorum device\n",743743+ config_item_name(®->hr_item), reg->hr_dev_name);770744771745 set_bit(reg->hr_region_num, o2hb_quorum_region_bitmap);772746···774754 if (o2hb_pop_count(&o2hb_quorum_region_bitmap,775755 O2NM_MAX_REGIONS) > O2HB_PIN_CUT_OFF)776756 o2hb_region_unpin(NULL);757757+unlock:758758+ spin_unlock(&o2hb_live_lock);777759}778760779761static int o2hb_check_slot(struct o2hb_region *reg,···947925 slot->ds_equal_samples = 0;948926 }949927out:950950- o2hb_set_quorum_device(reg, slot);951951-952928 spin_unlock(&o2hb_live_lock);953929954930 o2hb_run_event_list(&event);···977957978958static int o2hb_do_disk_heartbeat(struct o2hb_region *reg)979959{980980- int i, ret, highest_node, change = 0;960960+ int i, ret, highest_node;961961+ int membership_change = 0, own_slot_ok = 0;981962 unsigned long configured_nodes[BITS_TO_LONGS(O2NM_MAX_NODES)];982963 unsigned long live_node_bitmap[BITS_TO_LONGS(O2NM_MAX_NODES)];983964 struct o2hb_bio_wait_ctxt write_wc;···987966 sizeof(configured_nodes));988967 if (ret) {989968 mlog_errno(ret);990990- return ret;969969+ goto bail;991970 }992971993972 /*···10039821004983 highest_node = o2hb_highest_node(configured_nodes, O2NM_MAX_NODES);1005984 if (highest_node >= O2NM_MAX_NODES) {10061006- mlog(ML_NOTICE, "ocfs2_heartbeat: no configured nodes found!\n");10071007- return -EINVAL;985985+ mlog(ML_NOTICE, "o2hb: No configured nodes found!\n");986986+ ret = -EINVAL;987987+ goto bail;1008988 }10099891010990 /* No sense in reading the slots of nodes that don't exist···1015993 ret = o2hb_read_slots(reg, 
highest_node + 1);1016994 if (ret < 0) {1017995 mlog_errno(ret);10181018- return ret;996996+ goto bail;1019997 }10209981021999 /* With an up to date view of the slots, we can check that no10221000 * other node has been improperly configured to heartbeat in10231001 * our slot. */10241024- o2hb_check_last_timestamp(reg);10021002+ own_slot_ok = o2hb_check_own_slot(reg);1025100310261004 /* fill in the proper info for our next heartbeat */10271005 o2hb_prepare_block(reg, reg->hr_generation);1028100610291029- /* And fire off the write. Note that we don't wait on this I/O10301030- * until later. */10311007 ret = o2hb_issue_node_write(reg, &write_wc);10321008 if (ret < 0) {10331009 mlog_errno(ret);10341034- return ret;10101010+ goto bail;10351011 }1036101210371013 i = -1;10381014 while((i = find_next_bit(configured_nodes,10391015 O2NM_MAX_NODES, i + 1)) < O2NM_MAX_NODES) {10401040- change |= o2hb_check_slot(reg, ®->hr_slots[i]);10161016+ membership_change |= o2hb_check_slot(reg, ®->hr_slots[i]);10411017 }1042101810431019 /*···10501030 * disk */10511031 mlog(ML_ERROR, "Write error %d on device \"%s\"\n",10521032 write_wc.wc_error, reg->hr_dev_name);10531053- return write_wc.wc_error;10331033+ ret = write_wc.wc_error;10341034+ goto bail;10541035 }1055103610561056- o2hb_arm_write_timeout(reg);10371037+ /* Skip disarming the timeout if own slot has stale/bad data */10381038+ if (own_slot_ok) {10391039+ o2hb_set_quorum_device(reg);10401040+ o2hb_arm_write_timeout(reg);10411041+ }1057104210431043+bail:10581044 /* let the person who launched us know when things are steady */10591059- if (!change && (atomic_read(®->hr_steady_iterations) != 0)) {10601060- if (atomic_dec_and_test(®->hr_steady_iterations))10611061- wake_up(&o2hb_steady_queue);10451045+ if (atomic_read(®->hr_steady_iterations) != 0) {10461046+ if (!ret && own_slot_ok && !membership_change) {10471047+ if (atomic_dec_and_test(®->hr_steady_iterations))10481048+ wake_up(&o2hb_steady_queue);10491049+ }10621050 
}1063105110641064- return 0;10521052+ if (atomic_read(®->hr_steady_iterations) != 0) {10531053+ if (atomic_dec_and_test(®->hr_unsteady_iterations)) {10541054+ printk(KERN_NOTICE "o2hb: Unable to stabilize "10551055+ "heartbeart on region %s (%s)\n",10561056+ config_item_name(®->hr_item),10571057+ reg->hr_dev_name);10581058+ atomic_set(®->hr_steady_iterations, 0);10591059+ reg->hr_aborted_start = 1;10601060+ wake_up(&o2hb_steady_queue);10611061+ ret = -EIO;10621062+ }10631063+ }10641064+10651065+ return ret;10651066}1066106710671068/* Subtract b from a, storing the result in a. a *must* have a larger···11361095 /* Pin node */11371096 o2nm_depend_this_node();1138109711391139- while (!kthread_should_stop() && !reg->hr_unclean_stop) {10981098+ while (!kthread_should_stop() &&10991099+ !reg->hr_unclean_stop && !reg->hr_aborted_start) {11401100 /* We track the time spent inside11411101 * o2hb_do_disk_heartbeat so that we avoid more than11421102 * hr_timeout_ms between disk writes. On busy systems···11451103 * likely to time itself out. */11461104 do_gettimeofday(&before_hb);1147110511481148- i = 0;11491149- do {11501150- ret = o2hb_do_disk_heartbeat(reg);11511151- } while (ret && ++i < 2);11061106+ ret = o2hb_do_disk_heartbeat(reg);1152110711531108 do_gettimeofday(&after_hb);11541109 elapsed_msec = o2hb_elapsed_msecs(&before_hb, &after_hb);···11561117 after_hb.tv_sec, (unsigned long) after_hb.tv_usec,11571118 elapsed_msec);1158111911591159- if (elapsed_msec < reg->hr_timeout_ms) {11201120+ if (!kthread_should_stop() &&11211121+ elapsed_msec < reg->hr_timeout_ms) {11601122 /* the kthread api has blocked signals for us so no11611123 * need to record the return value. */11621124 msleep_interruptible(reg->hr_timeout_ms - elapsed_msec);···11741134 * to timeout on this region when we could just as easily11751135 * write a clear generation - thus indicating to them that11761136 * this node has left this region.11771177- *11781178- * XXX: Should we skip this on unclean_stop? 
*/11791179- o2hb_prepare_block(reg, 0);11801180- ret = o2hb_issue_node_write(reg, &write_wc);11811181- if (ret == 0) {11821182- o2hb_wait_on_io(reg, &write_wc);11831183- } else {11841184- mlog_errno(ret);11371137+ */11381138+ if (!reg->hr_unclean_stop && !reg->hr_aborted_start) {11391139+ o2hb_prepare_block(reg, 0);11401140+ ret = o2hb_issue_node_write(reg, &write_wc);11411141+ if (ret == 0)11421142+ o2hb_wait_on_io(reg, &write_wc);11431143+ else11441144+ mlog_errno(ret);11851145 }1186114611871147 /* Unpin node */11881148 o2nm_undepend_this_node();1189114911901190- mlog(ML_HEARTBEAT|ML_KTHREAD, "hb thread exiting\n");11501150+ mlog(ML_HEARTBEAT|ML_KTHREAD, "o2hb thread exiting\n");1191115111921152 return 0;11931153}···11981158 struct o2hb_debug_buf *db = inode->i_private;11991159 struct o2hb_region *reg;12001160 unsigned long map[BITS_TO_LONGS(O2NM_MAX_NODES)];11611161+ unsigned long lts;12011162 char *buf = NULL;12021163 int i = -1;12031164 int out = 0;···1235119412361195 case O2HB_DB_TYPE_REGION_ELAPSED_TIME:12371196 reg = (struct o2hb_region *)db->db_data;12381238- out += snprintf(buf + out, PAGE_SIZE - out, "%u\n",12391239- jiffies_to_msecs(jiffies -12401240- reg->hr_last_timeout_start));11971197+ lts = reg->hr_last_timeout_start;11981198+ /* If 0, it has never been set before */11991199+ if (lts)12001200+ lts = jiffies_to_msecs(jiffies - lts);12011201+ out += snprintf(buf + out, PAGE_SIZE - out, "%lu\n", lts);12411202 goto done;1242120312431204 case O2HB_DB_TYPE_REGION_PINNED:···14681425 int i;14691426 struct page *page;14701427 struct o2hb_region *reg = to_o2hb_region(item);14281428+14291429+ mlog(ML_HEARTBEAT, "hb region release (%s)\n", reg->hr_dev_name);1471143014721431 if (reg->hr_tmp_block)14731432 kfree(reg->hr_tmp_block);···18371792 live_threshold <<= 1;18381793 spin_unlock(&o2hb_live_lock);18391794 }18401840- atomic_set(®->hr_steady_iterations, live_threshold + 1);17951795+ ++live_threshold;17961796+ atomic_set(®->hr_steady_iterations, 
live_threshold);17971797+ /* unsteady_iterations is double the steady_iterations */17981798+ atomic_set(®->hr_unsteady_iterations, (live_threshold << 1));1841179918421800 hb_task = kthread_run(o2hb_thread, reg, "o2hb-%s",18431801 reg->hr_item.ci_name);···18571809 ret = wait_event_interruptible(o2hb_steady_queue,18581810 atomic_read(®->hr_steady_iterations) == 0);18591811 if (ret) {18601860- /* We got interrupted (hello ptrace!). Clean up */18611861- spin_lock(&o2hb_live_lock);18621862- hb_task = reg->hr_task;18631863- reg->hr_task = NULL;18641864- spin_unlock(&o2hb_live_lock);18121812+ atomic_set(®->hr_steady_iterations, 0);18131813+ reg->hr_aborted_start = 1;18141814+ }1865181518661866- if (hb_task)18671867- kthread_stop(hb_task);18161816+ if (reg->hr_aborted_start) {18171817+ ret = -EIO;18681818 goto out;18691819 }18701820···18791833 ret = -EIO;1880183418811835 if (hb_task && o2hb_global_heartbeat_active())18821882- printk(KERN_NOTICE "o2hb: Heartbeat started on region %s\n",18831883- config_item_name(®->hr_item));18361836+ printk(KERN_NOTICE "o2hb: Heartbeat started on region %s (%s)\n",18371837+ config_item_name(®->hr_item), reg->hr_dev_name);1884183818851839out:18861840 if (filp)···2138209221392093 /* stop the thread when the user removes the region dir */21402094 spin_lock(&o2hb_live_lock);21412141- if (o2hb_global_heartbeat_active()) {21422142- clear_bit(reg->hr_region_num, o2hb_region_bitmap);21432143- clear_bit(reg->hr_region_num, o2hb_live_region_bitmap);21442144- if (test_bit(reg->hr_region_num, o2hb_quorum_region_bitmap))21452145- quorum_region = 1;21462146- clear_bit(reg->hr_region_num, o2hb_quorum_region_bitmap);21472147- }21482095 hb_task = reg->hr_task;21492096 reg->hr_task = NULL;21502097 reg->hr_item_dropped = 1;···21462107 if (hb_task)21472108 kthread_stop(hb_task);2148210921102110+ if (o2hb_global_heartbeat_active()) {21112111+ spin_lock(&o2hb_live_lock);21122112+ clear_bit(reg->hr_region_num, o2hb_region_bitmap);21132113+ 
clear_bit(reg->hr_region_num, o2hb_live_region_bitmap);21142114+ if (test_bit(reg->hr_region_num, o2hb_quorum_region_bitmap))21152115+ quorum_region = 1;21162116+ clear_bit(reg->hr_region_num, o2hb_quorum_region_bitmap);21172117+ spin_unlock(&o2hb_live_lock);21182118+ printk(KERN_NOTICE "o2hb: Heartbeat %s on region %s (%s)\n",21192119+ ((atomic_read(®->hr_steady_iterations) == 0) ?21202120+ "stopped" : "start aborted"), config_item_name(item),21212121+ reg->hr_dev_name);21222122+ }21232123+21492124 /*21502125 * If we're racing a dev_write(), we need to wake them. They will21512126 * check reg->hr_task21522127 */21532128 if (atomic_read(®->hr_steady_iterations) != 0) {21292129+ reg->hr_aborted_start = 1;21542130 atomic_set(®->hr_steady_iterations, 0);21552131 wake_up(&o2hb_steady_queue);21562132 }21572157-21582158- if (o2hb_global_heartbeat_active())21592159- printk(KERN_NOTICE "o2hb: Heartbeat stopped on region %s\n",21602160- config_item_name(®->hr_item));2161213321622134 config_item_put(item);21632135
···
 	}
 
 	if (was_valid && !valid) {
-		printk(KERN_NOTICE "o2net: no longer connected to "
+		printk(KERN_NOTICE "o2net: No longer connected to "
 		       SC_NODEF_FMT "\n", SC_NODEF_ARGS(old_sc));
 		o2net_complete_nodes_nsw(nn);
 	}
···
 		cancel_delayed_work(&nn->nn_connect_expired);
 		printk(KERN_NOTICE "o2net: %s " SC_NODEF_FMT "\n",
 		       o2nm_this_node() > sc->sc_node->nd_num ?
-		       "connected to" : "accepted connection from",
+		       "Connected to" : "Accepted connection from",
 		       SC_NODEF_ARGS(sc));
 	}
···
 		o2net_sc_queue_work(sc, &sc->sc_connect_work);
 		break;
 	default:
-		printk(KERN_INFO "o2net: connection to " SC_NODEF_FMT
+		printk(KERN_INFO "o2net: Connection to " SC_NODEF_FMT
 		       " shutdown, state %d\n",
 		       SC_NODEF_ARGS(sc), sk->sk_state);
 		o2net_sc_queue_work(sc, &sc->sc_shutdown_work);
···
 	return ret;
 }
 
+/* Get a map of all nodes to which this node is currently connected to */
+void o2net_fill_node_map(unsigned long *map, unsigned bytes)
+{
+	struct o2net_sock_container *sc;
+	int node, ret;
+
+	BUG_ON(bytes < (BITS_TO_LONGS(O2NM_MAX_NODES) * sizeof(unsigned long)));
+
+	memset(map, 0, bytes);
+	for (node = 0; node < O2NM_MAX_NODES; ++node) {
+		o2net_tx_can_proceed(o2net_nn_from_num(node), &sc, &ret);
+		if (!ret) {
+			set_bit(node, map);
+			sc_put(sc);
+		}
+	}
+}
+EXPORT_SYMBOL_GPL(o2net_fill_node_map);
+
 int o2net_send_message_vec(u32 msg_type, u32 key, struct kvec *caller_vec,
 			   size_t caller_veclen, u8 target_node, int *status)
 {
···
 	struct o2net_node *nn = o2net_nn_from_num(sc->sc_node->nd_num);
 
 	if (hand->protocol_version != cpu_to_be64(O2NET_PROTOCOL_VERSION)) {
-		mlog(ML_NOTICE, SC_NODEF_FMT " advertised net protocol "
-		     "version %llu but %llu is required, disconnecting\n",
-		     SC_NODEF_ARGS(sc),
-		     (unsigned long long)be64_to_cpu(hand->protocol_version),
-		     O2NET_PROTOCOL_VERSION);
+		printk(KERN_NOTICE "o2net: " SC_NODEF_FMT " Advertised net "
+		       "protocol version %llu but %llu is required. "
+		       "Disconnecting.\n", SC_NODEF_ARGS(sc),
+		       (unsigned long long)be64_to_cpu(hand->protocol_version),
+		       O2NET_PROTOCOL_VERSION);
 
 		/* don't bother reconnecting if its the wrong version. */
 		o2net_ensure_shutdown(nn, sc, -ENOTCONN);
···
 	 */
 	if (be32_to_cpu(hand->o2net_idle_timeout_ms) !=
 	    o2net_idle_timeout()) {
-		mlog(ML_NOTICE, SC_NODEF_FMT " uses a network idle timeout of "
-		     "%u ms, but we use %u ms locally. disconnecting\n",
-		     SC_NODEF_ARGS(sc),
-		     be32_to_cpu(hand->o2net_idle_timeout_ms),
-		     o2net_idle_timeout());
+		printk(KERN_NOTICE "o2net: " SC_NODEF_FMT " uses a network "
+		       "idle timeout of %u ms, but we use %u ms locally. "
+		       "Disconnecting.\n", SC_NODEF_ARGS(sc),
+		       be32_to_cpu(hand->o2net_idle_timeout_ms),
+		       o2net_idle_timeout());
 		o2net_ensure_shutdown(nn, sc, -ENOTCONN);
 		return -1;
 	}
 
 	if (be32_to_cpu(hand->o2net_keepalive_delay_ms) !=
 	    o2net_keepalive_delay()) {
-		mlog(ML_NOTICE, SC_NODEF_FMT " uses a keepalive delay of "
-		     "%u ms, but we use %u ms locally. disconnecting\n",
-		     SC_NODEF_ARGS(sc),
-		     be32_to_cpu(hand->o2net_keepalive_delay_ms),
-		     o2net_keepalive_delay());
+		printk(KERN_NOTICE "o2net: " SC_NODEF_FMT " uses a keepalive "
+		       "delay of %u ms, but we use %u ms locally. "
+		       "Disconnecting.\n", SC_NODEF_ARGS(sc),
+		       be32_to_cpu(hand->o2net_keepalive_delay_ms),
+		       o2net_keepalive_delay());
 		o2net_ensure_shutdown(nn, sc, -ENOTCONN);
 		return -1;
 	}
 
 	if (be32_to_cpu(hand->o2hb_heartbeat_timeout_ms) !=
 	    O2HB_MAX_WRITE_TIMEOUT_MS) {
-		mlog(ML_NOTICE, SC_NODEF_FMT " uses a heartbeat timeout of "
-		     "%u ms, but we use %u ms locally. disconnecting\n",
-		     SC_NODEF_ARGS(sc),
-		     be32_to_cpu(hand->o2hb_heartbeat_timeout_ms),
-		     O2HB_MAX_WRITE_TIMEOUT_MS);
+		printk(KERN_NOTICE "o2net: " SC_NODEF_FMT " uses a heartbeat "
+		       "timeout of %u ms, but we use %u ms locally. "
+		       "Disconnecting.\n", SC_NODEF_ARGS(sc),
+		       be32_to_cpu(hand->o2hb_heartbeat_timeout_ms),
+		       O2HB_MAX_WRITE_TIMEOUT_MS);
 		o2net_ensure_shutdown(nn, sc, -ENOTCONN);
 		return -1;
 	}
···
 {
 	struct o2net_sock_container *sc = (struct o2net_sock_container *)data;
 	struct o2net_node *nn = o2net_nn_from_num(sc->sc_node->nd_num);
-
 #ifdef CONFIG_DEBUG_FS
-	ktime_t now = ktime_get();
+	unsigned long msecs = ktime_to_ms(ktime_get()) -
+		ktime_to_ms(sc->sc_tv_timer);
+#else
+	unsigned long msecs = o2net_idle_timeout();
 #endif
 
-	printk(KERN_NOTICE "o2net: connection to " SC_NODEF_FMT " has been idle for %u.%u "
-	       "seconds, shutting it down.\n", SC_NODEF_ARGS(sc),
-	       o2net_idle_timeout() / 1000,
-	       o2net_idle_timeout() % 1000);
-
-#ifdef CONFIG_DEBUG_FS
-	mlog(ML_NOTICE, "Here are some times that might help debug the "
-	     "situation: (Timer: %lld, Now %lld, DataReady %lld, Advance %lld-%lld, "
-	     "Key 0x%08x, Func %u, FuncTime %lld-%lld)\n",
-	     (long long)ktime_to_us(sc->sc_tv_timer), (long long)ktime_to_us(now),
-	     (long long)ktime_to_us(sc->sc_tv_data_ready),
-	     (long long)ktime_to_us(sc->sc_tv_advance_start),
-	     (long long)ktime_to_us(sc->sc_tv_advance_stop),
-	     sc->sc_msg_key, sc->sc_msg_type,
-	     (long long)ktime_to_us(sc->sc_tv_func_start),
-	     (long long)ktime_to_us(sc->sc_tv_func_stop));
-#endif
+	printk(KERN_NOTICE "o2net: Connection to " SC_NODEF_FMT " has been "
+	       "idle for %lu.%lu secs, shutting it down.\n", SC_NODEF_ARGS(sc),
+	       msecs / 1000, msecs % 1000);
 
 	/*
 	 * Initialize the nn_timeout so that the next connection attempt
···
 
 out:
 	if (ret) {
-		mlog(ML_NOTICE, "connect attempt to " SC_NODEF_FMT " failed "
-		     "with errno %d\n", SC_NODEF_ARGS(sc), ret);
+		printk(KERN_NOTICE "o2net: Connect attempt to " SC_NODEF_FMT
+		       " failed with errno %d\n", SC_NODEF_ARGS(sc), ret);
 		/* 0 err so that another will be queued and attempted
 		 * from set_nn_state */
 		if (sc)
···
 
 	spin_lock(&nn->nn_lock);
 	if (!nn->nn_sc_valid) {
-		mlog(ML_ERROR, "no connection established with node %u after "
-		     "%u.%u seconds, giving up and returning errors.\n",
+		printk(KERN_NOTICE "o2net: No connection established with "
+		       "node %u after %u.%u seconds, giving up.\n",
 		     o2net_num_from_nn(nn),
 		     o2net_idle_timeout() / 1000,
 		     o2net_idle_timeout() % 1000);
···
 
 	node = o2nm_get_node_by_ip(sin.sin_addr.s_addr);
 	if (node == NULL) {
-		mlog(ML_NOTICE, "attempt to connect from unknown node at %pI4:%d\n",
-		     &sin.sin_addr.s_addr, ntohs(sin.sin_port));
+		printk(KERN_NOTICE "o2net: Attempt to connect from unknown "
+		       "node at %pI4:%d\n", &sin.sin_addr.s_addr,
+		       ntohs(sin.sin_port));
 		ret = -EINVAL;
 		goto out;
 	}
 
 	if (o2nm_this_node() >= node->nd_num) {
 		local_node = o2nm_get_node_by_num(o2nm_this_node());
-		mlog(ML_NOTICE, "unexpected connect attempt seen at node '%s' ("
-		     "%u, %pI4:%d) from node '%s' (%u, %pI4:%d)\n",
-		     local_node->nd_name, local_node->nd_num,
-		     &(local_node->nd_ipv4_address),
-		     ntohs(local_node->nd_ipv4_port),
-		     node->nd_name, node->nd_num, &sin.sin_addr.s_addr,
-		     ntohs(sin.sin_port));
+		printk(KERN_NOTICE "o2net: Unexpected connect attempt seen "
+		       "at node '%s' (%u, %pI4:%d) from node '%s' (%u, "
+		       "%pI4:%d)\n", local_node->nd_name, local_node->nd_num,
+		       &(local_node->nd_ipv4_address),
+		       ntohs(local_node->nd_ipv4_port), node->nd_name,
+		       node->nd_num, &sin.sin_addr.s_addr, ntohs(sin.sin_port));
 		ret = -EINVAL;
 		goto out;
 	}
···
 	ret = 0;
 	spin_unlock(&nn->nn_lock);
 	if (ret) {
-		mlog(ML_NOTICE, "attempt to connect from node '%s' at "
-		     "%pI4:%d but it already has an open connection\n",
-		     node->nd_name, &sin.sin_addr.s_addr,
-		     ntohs(sin.sin_port));
+		printk(KERN_NOTICE "o2net: Attempt to connect from node '%s' "
+		       "at %pI4:%d but it already has an open connection\n",
+		       node->nd_name, &sin.sin_addr.s_addr,
+		       ntohs(sin.sin_port));
 		goto out;
 	}
···
 
 	ret = sock_create(PF_INET, SOCK_STREAM, IPPROTO_TCP, &sock);
 	if (ret < 0) {
-		mlog(ML_ERROR, "unable to create socket, ret=%d\n", ret);
+		printk(KERN_ERR "o2net: Error %d while creating socket\n", ret);
 		goto out;
 	}
···
 	sock->sk->sk_reuse = 1;
 	ret = sock->ops->bind(sock, (struct sockaddr *)&sin, sizeof(sin));
 	if (ret < 0) {
-		mlog(ML_ERROR, "unable to bind socket at %pI4:%u, "
-		     "ret=%d\n", &addr, ntohs(port), ret);
+		printk(KERN_ERR "o2net: Error %d while binding socket at "
+		       "%pI4:%u\n", ret, &addr, ntohs(port));
 		goto out;
 	}
 
 	ret = sock->ops->listen(sock, 64);
-	if (ret < 0) {
-		mlog(ML_ERROR, "unable to listen on %pI4:%u, ret=%d\n",
-		     &addr, ntohs(port), ret);
-	}
+	if (ret < 0)
+		printk(KERN_ERR "o2net: Error %d while listening on %pI4:%u\n",
+		       ret, &addr, ntohs(port));
 
 out:
 	if (ret) {
···
 
 static void dlm_unregister_domain_handlers(struct dlm_ctxt *dlm);
 
-void __dlm_unhash_lockres(struct dlm_lock_resource *lockres)
+void __dlm_unhash_lockres(struct dlm_ctxt *dlm, struct dlm_lock_resource *res)
 {
-	if (!hlist_unhashed(&lockres->hash_node)) {
-		hlist_del_init(&lockres->hash_node);
-		dlm_lockres_put(lockres);
-	}
+	if (hlist_unhashed(&res->hash_node))
+		return;
+
+	mlog(0, "%s: Unhash res %.*s\n", dlm->name, res->lockname.len,
+	     res->lockname.name);
+	hlist_del_init(&res->hash_node);
+	dlm_lockres_put(res);
 }
 
-void __dlm_insert_lockres(struct dlm_ctxt *dlm,
-			  struct dlm_lock_resource *res)
+void __dlm_insert_lockres(struct dlm_ctxt *dlm, struct dlm_lock_resource *res)
 {
 	struct hlist_head *bucket;
 	struct qstr *q;
···
 	dlm_lockres_get(res);
 
 	hlist_add_head(&res->hash_node, bucket);
+
+	mlog(0, "%s: Hash res %.*s\n", dlm->name, res->lockname.len,
+	     res->lockname.name);
 }
 
 struct dlm_lock_resource * __dlm_lookup_lockres_full(struct dlm_ctxt *dlm,
···
 
 static void __dlm_print_nodes(struct dlm_ctxt *dlm)
 {
-	int node = -1;
+	int node = -1, num = 0;
 
 	assert_spin_locked(&dlm->spinlock);
 
-	printk(KERN_NOTICE "o2dlm: Nodes in domain %s: ", dlm->name);
-
+	printk("( ");
 	while ((node = find_next_bit(dlm->domain_map, O2NM_MAX_NODES,
 				     node + 1)) < O2NM_MAX_NODES) {
 		printk("%d ", node);
+		++num;
 	}
-	printk("\n");
+	printk(") %u nodes\n", num);
 }
 
 static int dlm_exit_domain_handler(struct o2net_msg *msg, u32 len, void *data,
···
 
 	node = exit_msg->node_idx;
 
-	printk(KERN_NOTICE "o2dlm: Node %u leaves domain %s\n", node, dlm->name);
-
 	spin_lock(&dlm->spinlock);
 	clear_bit(node, dlm->domain_map);
 	clear_bit(node, dlm->exit_domain_map);
+	printk(KERN_NOTICE "o2dlm: Node %u leaves domain %s ", node, dlm->name);
 	__dlm_print_nodes(dlm);
 
 	/* notify anything attached to the heartbeat events */
···
 
 	dlm_mark_domain_leaving(dlm);
 	dlm_leave_domain(dlm);
+	printk(KERN_NOTICE "o2dlm: Leaving domain %s\n", dlm->name);
 	dlm_force_free_mles(dlm);
 	dlm_complete_dlm_shutdown(dlm);
 	}
···
 	clear_bit(assert->node_idx, dlm->exit_domain_map);
 	__dlm_set_joining_node(dlm, DLM_LOCK_RES_OWNER_UNKNOWN);
 
-	printk(KERN_NOTICE "o2dlm: Node %u joins domain %s\n",
+	printk(KERN_NOTICE "o2dlm: Node %u joins domain %s ",
 	       assert->node_idx, dlm->name);
 	__dlm_print_nodes(dlm);
···
 bail:
 	spin_lock(&dlm->spinlock);
 	__dlm_set_joining_node(dlm, DLM_LOCK_RES_OWNER_UNKNOWN);
-	if (!status)
+	if (!status) {
+		printk(KERN_NOTICE "o2dlm: Joining domain %s ", dlm->name);
 		__dlm_print_nodes(dlm);
+	}
 	spin_unlock(&dlm->spinlock);
 
 	if (ctxt) {
···
 	if (strlen(domain) >= O2NM_MAX_NAME_LEN) {
 		ret = -ENAMETOOLONG;
 		mlog(ML_ERROR, "domain name length too long\n");
-		goto leave;
-	}
-
-	if (!o2hb_check_local_node_heartbeating()) {
-		mlog(ML_ERROR, "the local node has not been configured, or is "
-		     "not heartbeating\n");
-		ret = -EPROTO;
 		goto leave;
 	}
 
+26-28
fs/ocfs2/dlm/dlmlock.c
···
 			kick_thread = 1;
 		}
 	}
-	/* reduce the inflight count, this may result in the lockres
-	 * being purged below during calc_usage */
-	if (lock->ml.node == dlm->node_num)
-		dlm_lockres_drop_inflight_ref(dlm, res);
 
 	spin_unlock(&res->spinlock);
 	wake_up(&res->wq);
···
 			lock->ml.type, res->lockname.len,
 			res->lockname.name, flags);
 
+	/*
+	 * Wait if resource is getting recovered, remastered, etc.
+	 * If the resource was remastered and new owner is self, then exit.
+	 */
 	spin_lock(&res->spinlock);
-
-	/* will exit this call with spinlock held */
 	__dlm_wait_on_lockres(res);
+	if (res->owner == dlm->node_num) {
+		spin_unlock(&res->spinlock);
+		return DLM_RECOVERING;
+	}
 	res->state |= DLM_LOCK_RES_IN_PROGRESS;
 
 	/* add lock to local (secondary) queue */
···
 	tmpret = o2net_send_message(DLM_CREATE_LOCK_MSG, dlm->key, &create,
 				    sizeof(create), res->owner, &status);
 	if (tmpret >= 0) {
-		// successfully sent and received
-		ret = status;  // this is already a dlm_status
+		ret = status;
 		if (ret == DLM_REJECTED) {
-			mlog(ML_ERROR, "%s:%.*s: BUG. this is a stale lockres "
-			     "no longer owned by %u. that node is coming back "
-			     "up currently.\n", dlm->name, create.namelen,
+			mlog(ML_ERROR, "%s: res %.*s, Stale lockres no longer "
+			     "owned by node %u. That node is coming back up "
+			     "currently.\n", dlm->name, create.namelen,
 			     create.name, res->owner);
 			dlm_print_one_lock_resource(res);
 			BUG();
 		}
 	} else {
-		mlog(ML_ERROR, "Error %d when sending message %u (key 0x%x) to "
-		     "node %u\n", tmpret, DLM_CREATE_LOCK_MSG, dlm->key,
-		     res->owner);
-		if (dlm_is_host_down(tmpret)) {
+		mlog(ML_ERROR, "%s: res %.*s, Error %d send CREATE LOCK to "
+		     "node %u\n", dlm->name, create.namelen, create.name,
+		     tmpret, res->owner);
+		if (dlm_is_host_down(tmpret))
 			ret = DLM_RECOVERING;
-			mlog(0, "node %u died so returning DLM_RECOVERING "
-			     "from lock message!\n", res->owner);
-		} else {
+		else
 			ret = dlm_err_to_dlm_status(tmpret);
-		}
 	}
 
 	return ret;
···
 		/* zero memory only if kernel-allocated */
 		lksb = kzalloc(sizeof(*lksb), GFP_NOFS);
 		if (!lksb) {
-			kfree(lock);
+			kmem_cache_free(dlm_lock_cache, lock);
 			return NULL;
 		}
 		kernel_allocated = 1;
···
 
 		if (status == DLM_RECOVERING || status == DLM_MIGRATING ||
 		    status == DLM_FORWARD) {
-			mlog(0, "retrying lock with migration/"
-			     "recovery/in progress\n");
 			msleep(100);
-			/* no waiting for dlm_reco_thread */
 			if (recovery) {
 				if (status != DLM_RECOVERING)
 					goto retry_lock;
-
-				mlog(0, "%s: got RECOVERING "
-				     "for $RECOVERY lock, master "
-				     "was %u\n", dlm->name,
-				     res->owner);
 				/* wait to see the node go down, then
 				 * drop down and allow the lockres to
 				 * get cleaned up.  need to remaster. */
···
 				goto retry_lock;
 			}
 		}
+
+		/* Inflight taken in dlm_get_lock_resource() is dropped here */
+		spin_lock(&res->spinlock);
+		dlm_lockres_drop_inflight_ref(dlm, res);
+		spin_unlock(&res->spinlock);
+
+		dlm_lockres_calc_usage(dlm, res);
+		dlm_kick_thread(dlm, res);
 
 		if (status != DLM_NORMAL) {
 			lock->lksb->flags &= ~DLM_LKSB_GET_LVB;
+91-90
fs/ocfs2/dlm/dlmmaster.c
···
 	return NULL;
 }
 
-void __dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm,
-				     struct dlm_lock_resource *res,
-				     int new_lockres,
-				     const char *file,
-				     int line)
+void dlm_lockres_set_refmap_bit(struct dlm_ctxt *dlm,
+				struct dlm_lock_resource *res, int bit)
 {
-	if (!new_lockres)
-		assert_spin_locked(&res->spinlock);
+	assert_spin_locked(&res->spinlock);
 
-	if (!test_bit(dlm->node_num, res->refmap)) {
-		BUG_ON(res->inflight_locks != 0);
-		dlm_lockres_set_refmap_bit(dlm->node_num, res);
-	}
-	res->inflight_locks++;
-	mlog(0, "%s:%.*s: inflight++: now %u\n",
-	     dlm->name, res->lockname.len, res->lockname.name,
-	     res->inflight_locks);
+	mlog(0, "res %.*s, set node %u, %ps()\n", res->lockname.len,
+	     res->lockname.name, bit, __builtin_return_address(0));
+
+	set_bit(bit, res->refmap);
 }
 
-void __dlm_lockres_drop_inflight_ref(struct dlm_ctxt *dlm,
-				     struct dlm_lock_resource *res,
-				     const char *file,
-				     int line)
+void dlm_lockres_clear_refmap_bit(struct dlm_ctxt *dlm,
+				  struct dlm_lock_resource *res, int bit)
+{
+	assert_spin_locked(&res->spinlock);
+
+	mlog(0, "res %.*s, clr node %u, %ps()\n", res->lockname.len,
+	     res->lockname.name, bit, __builtin_return_address(0));
+
+	clear_bit(bit, res->refmap);
+}
+
+
+void dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm,
+				   struct dlm_lock_resource *res)
+{
+	assert_spin_locked(&res->spinlock);
+
+	res->inflight_locks++;
+
+	mlog(0, "%s: res %.*s, inflight++: now %u, %ps()\n", dlm->name,
+	     res->lockname.len, res->lockname.name, res->inflight_locks,
+	     __builtin_return_address(0));
+}
+
+void dlm_lockres_drop_inflight_ref(struct dlm_ctxt *dlm,
+				   struct dlm_lock_resource *res)
 {
 	assert_spin_locked(&res->spinlock);
 
 	BUG_ON(res->inflight_locks == 0);
+
 	res->inflight_locks--;
-	mlog(0, "%s:%.*s: inflight--: now %u\n",
-	     dlm->name, res->lockname.len, res->lockname.name,
-	     res->inflight_locks);
-	if (res->inflight_locks == 0)
-		dlm_lockres_clear_refmap_bit(dlm->node_num, res);
+
+	mlog(0, "%s: res %.*s, inflight--: now %u, %ps()\n", dlm->name,
+	     res->lockname.len, res->lockname.name, res->inflight_locks,
+	     __builtin_return_address(0));
+
 	wake_up(&res->wq);
 }
···
 	unsigned int hash;
 	int tries = 0;
 	int bit, wait_on_recovery = 0;
-	int drop_inflight_if_nonlocal = 0;
 
 	BUG_ON(!lockid);
···
 	spin_lock(&dlm->spinlock);
 	tmpres = __dlm_lookup_lockres_full(dlm, lockid, namelen, hash);
 	if (tmpres) {
-		int dropping_ref = 0;
-
 		spin_unlock(&dlm->spinlock);
-
 		spin_lock(&tmpres->spinlock);
-		/* We wait for the other thread that is mastering the resource */
+		/* Wait on the thread that is mastering the resource */
 		if (tmpres->owner == DLM_LOCK_RES_OWNER_UNKNOWN) {
 			__dlm_wait_on_lockres(tmpres);
 			BUG_ON(tmpres->owner == DLM_LOCK_RES_OWNER_UNKNOWN);
-		}
-
-		if (tmpres->owner == dlm->node_num) {
-			BUG_ON(tmpres->state & DLM_LOCK_RES_DROPPING_REF);
-			dlm_lockres_grab_inflight_ref(dlm, tmpres);
-		} else if (tmpres->state & DLM_LOCK_RES_DROPPING_REF)
-			dropping_ref = 1;
-		spin_unlock(&tmpres->spinlock);
-
-		/* wait until done messaging the master, drop our ref to allow
-		 * the lockres to be purged, start over. */
-		if (dropping_ref) {
-			spin_lock(&tmpres->spinlock);
-			__dlm_wait_on_lockres_flags(tmpres, DLM_LOCK_RES_DROPPING_REF);
 			spin_unlock(&tmpres->spinlock);
 			dlm_lockres_put(tmpres);
 			tmpres = NULL;
 			goto lookup;
 		}
 
-		mlog(0, "found in hash!\n");
+		/* Wait on the resource purge to complete before continuing */
+		if (tmpres->state & DLM_LOCK_RES_DROPPING_REF) {
+			BUG_ON(tmpres->owner == dlm->node_num);
+			__dlm_wait_on_lockres_flags(tmpres,
+						    DLM_LOCK_RES_DROPPING_REF);
+			spin_unlock(&tmpres->spinlock);
+			dlm_lockres_put(tmpres);
+			tmpres = NULL;
+			goto lookup;
+		}
+
+		/* Grab inflight ref to pin the resource */
+		dlm_lockres_grab_inflight_ref(dlm, tmpres);
+
+		spin_unlock(&tmpres->spinlock);
 		if (res)
 			dlm_lockres_put(res);
 		res = tmpres;
···
 		 * but they might own this lockres.  wait on them. */
 		bit = find_next_bit(dlm->recovery_map, O2NM_MAX_NODES, 0);
 		if (bit < O2NM_MAX_NODES) {
-			mlog(ML_NOTICE, "%s:%.*s: at least one node (%d) to "
-			     "recover before lock mastery can begin\n",
+			mlog(0, "%s: res %.*s, At least one node (%d) "
+			     "to recover before lock mastery can begin\n",
 			     dlm->name, namelen, (char *)lockid, bit);
 			wait_on_recovery = 1;
 		}
···
 
 	/* finally add the lockres to its hash bucket */
 	__dlm_insert_lockres(dlm, res);
-	/* since this lockres is new it doesn't not require the spinlock */
-	dlm_lockres_grab_inflight_ref_new(dlm, res);
 
-	/* if this node does not become the master make sure to drop
-	 * this inflight reference below */
-	drop_inflight_if_nonlocal = 1;
+	/* Grab inflight ref to pin the resource */
+	spin_lock(&res->spinlock);
+	dlm_lockres_grab_inflight_ref(dlm, res);
+	spin_unlock(&res->spinlock);
 
 	/* get an extra ref on the mle in case this is a BLOCK
 	 * if so, the creator of the BLOCK may try to put the last
···
 	 * dlm spinlock would be detectable be a change on the mle,
 	 * so we only need to clear out the recovery map once. */
 	if (dlm_is_recovery_lock(lockid, namelen)) {
-		mlog(ML_NOTICE, "%s: recovery map is not empty, but "
-		     "must master $RECOVERY lock now\n", dlm->name);
+		mlog(0, "%s: Recovery map is not empty, but must "
+		     "master $RECOVERY lock now\n", dlm->name);
 		if (!dlm_pre_master_reco_lockres(dlm, res))
 			wait_on_recovery = 0;
 		else {
···
 		spin_lock(&dlm->spinlock);
 		bit = find_next_bit(dlm->recovery_map, O2NM_MAX_NODES, 0);
 		if (bit < O2NM_MAX_NODES) {
-			mlog(ML_NOTICE, "%s:%.*s: at least one node (%d) to "
-			     "recover before lock mastery can begin\n",
+			mlog(0, "%s: res %.*s, At least one node (%d) "
+			     "to recover before lock mastery can begin\n",
 			     dlm->name, namelen, (char *)lockid, bit);
 			wait_on_recovery = 1;
 		} else
···
 		 * yet, keep going until it does.  this is how the
 		 * master will know that asserts are needed back to
 		 * the lower nodes. */
-		mlog(0, "%s:%.*s: requests only up to %u but master "
-		     "is %u, keep going\n", dlm->name, namelen,
+		mlog(0, "%s: res %.*s, Requests only up to %u but "
+		     "master is %u, keep going\n", dlm->name, namelen,
 		     lockid, nodenum, mle->master);
 		}
 	}
···
 	ret = dlm_wait_for_lock_mastery(dlm, res, mle, &blocked);
 	if (ret < 0) {
 		wait_on_recovery = 1;
-		mlog(0, "%s:%.*s: node map changed, redo the "
-		     "master request now, blocked=%d\n",
-		     dlm->name, res->lockname.len,
+		mlog(0, "%s: res %.*s, Node map changed, redo the master "
+		     "request now, blocked=%d\n", dlm->name, res->lockname.len,
 		     res->lockname.name, blocked);
 		if (++tries > 20) {
-			mlog(ML_ERROR, "%s:%.*s: spinning on "
-			     "dlm_wait_for_lock_mastery, blocked=%d\n",
+			mlog(ML_ERROR, "%s: res %.*s, Spinning on "
+			     "dlm_wait_for_lock_mastery, blocked = %d\n",
 			     dlm->name, res->lockname.len,
 			     res->lockname.name, blocked);
 			dlm_print_one_lock_resource(res);
···
 		goto redo_request;
 	}
 
-	mlog(0, "lockres mastered by %u\n", res->owner);
+	mlog(0, "%s: res %.*s, Mastered by %u\n", dlm->name, res->lockname.len,
+	     res->lockname.name, res->owner);
 	/* make sure we never continue without this */
 	BUG_ON(res->owner == O2NM_MAX_NODES);
···
 
 wake_waiters:
 	spin_lock(&res->spinlock);
-	if (res->owner != dlm->node_num && drop_inflight_if_nonlocal)
-		dlm_lockres_drop_inflight_ref(dlm, res);
 	res->state &= ~DLM_LOCK_RES_IN_PROGRESS;
 	spin_unlock(&res->spinlock);
 	wake_up(&res->wq);
···
 	}
 
 	if (res->owner == dlm->node_num) {
-		mlog(0, "%s:%.*s: setting bit %u in refmap\n",
-		     dlm->name, namelen, name, request->node_idx);
-		dlm_lockres_set_refmap_bit(request->node_idx, res);
+		dlm_lockres_set_refmap_bit(dlm, res, request->node_idx);
 		spin_unlock(&res->spinlock);
 		response = DLM_MASTER_RESP_YES;
 		if (mle)
···
 				 * go back and clean the mles on any
 				 * other nodes */
 				dispatch_assert = 1;
-				dlm_lockres_set_refmap_bit(request->node_idx, res);
-				mlog(0, "%s:%.*s: setting bit %u in refmap\n",
-				     dlm->name, namelen, name,
-				     request->node_idx);
+				dlm_lockres_set_refmap_bit(dlm, res,
+							   request->node_idx);
 			} else
 				response = DLM_MASTER_RESP_NO;
 		} else {
···
 			     "lockres, set the bit in the refmap\n",
 			     namelen, lockname, to);
 			spin_lock(&res->spinlock);
-			dlm_lockres_set_refmap_bit(to, res);
+			dlm_lockres_set_refmap_bit(dlm, res, to);
 			spin_unlock(&res->spinlock);
 		}
 	}
···
 	namelen = res->lockname.len;
 	BUG_ON(namelen > O2NM_MAX_NAME_LEN);
 
-	mlog(0, "%s:%.*s: sending deref to %d\n",
-	     dlm->name, namelen, lockname, res->owner);
 	memset(&deref, 0, sizeof(deref));
 	deref.node_idx = dlm->node_num;
 	deref.namelen = namelen;
···
 	ret = o2net_send_message(DLM_DEREF_LOCKRES_MSG, dlm->key,
 				 &deref, sizeof(deref), res->owner, &r);
 	if (ret < 0)
-		mlog(ML_ERROR, "Error %d when sending message %u (key 0x%x) to "
-		     "node %u\n", ret, DLM_DEREF_LOCKRES_MSG, dlm->key,
-		     res->owner);
+		mlog(ML_ERROR, "%s: res %.*s, error %d send DEREF to node %u\n",
+		     dlm->name, namelen, lockname, ret, res->owner);
 	else if (r < 0) {
 		/* BAD.  other node says I did not have a ref. */
-		mlog(ML_ERROR,"while dropping ref on %s:%.*s "
-		     "(master=%u) got %d.\n", dlm->name, namelen,
-		     lockname, res->owner, r);
+		mlog(ML_ERROR, "%s: res %.*s, DEREF to node %u got %d\n",
+		     dlm->name, namelen, lockname, res->owner, r);
 		dlm_print_one_lock_resource(res);
 		BUG();
 	}
···
 	else {
 		BUG_ON(res->state & DLM_LOCK_RES_DROPPING_REF);
 		if (test_bit(node, res->refmap)) {
-			dlm_lockres_clear_refmap_bit(node, res);
+			dlm_lockres_clear_refmap_bit(dlm, res, node);
 			cleared = 1;
 		}
 	}
···
 		BUG_ON(res->state & DLM_LOCK_RES_DROPPING_REF);
 		if (test_bit(node, res->refmap)) {
 			__dlm_wait_on_lockres_flags(res, DLM_LOCK_RES_SETREF_INPROG);
-			dlm_lockres_clear_refmap_bit(node, res);
+			dlm_lockres_clear_refmap_bit(dlm, res, node);
 			cleared = 1;
 		}
 		spin_unlock(&res->spinlock);
···
 				BUG_ON(!list_empty(&lock->bast_list));
 				BUG_ON(lock->ast_pending);
 				BUG_ON(lock->bast_pending);
-				dlm_lockres_clear_refmap_bit(lock->ml.node, res);
+				dlm_lockres_clear_refmap_bit(dlm, res,
+							     lock->ml.node);
 				list_del_init(&lock->list);
 				dlm_lock_put(lock);
 				/* In a normal unlock, we would have added a
···
 			mlog(0, "%s:%.*s: node %u had a ref to this "
 			     "migrating lockres, clearing\n", dlm->name,
 			     res->lockname.len, res->lockname.name, bit);
-			dlm_lockres_clear_refmap_bit(bit, res);
+			dlm_lockres_clear_refmap_bit(dlm, res, bit);
 		}
 		bit++;
 	}
···
 					 &migrate, sizeof(migrate), nodenum,
 					 &status);
 		if (ret < 0) {
-			mlog(ML_ERROR, "Error %d when sending message %u (key "
-			     "0x%x) to node %u\n", ret, DLM_MIGRATE_REQUEST_MSG,
-			     dlm->key, nodenum);
+			mlog(ML_ERROR, "%s: res %.*s, Error %d send "
+			     "MIGRATE_REQUEST to node %u\n", dlm->name,
+			     migrate.namelen, migrate.name, ret, nodenum);
 			if (!dlm_is_host_down(ret)) {
 				mlog(ML_ERROR, "unhandled error=%d!\n", ret);
 				BUG();
···
 			dlm->name, res->lockname.len, res->lockname.name,
 			nodenum);
 		spin_lock(&res->spinlock);
-		dlm_lockres_set_refmap_bit(nodenum, res);
+		dlm_lockres_set_refmap_bit(dlm, res, nodenum);
 		spin_unlock(&res->spinlock);
 	}
 }
···
 	 * mastery reference here since old_master will briefly have
 	 * a reference after the migration completes */
 	spin_lock(&res->spinlock);
-	dlm_lockres_set_refmap_bit(old_master, res);
+	dlm_lockres_set_refmap_bit(dlm, res, old_master);
 	spin_unlock(&res->spinlock);
 
 	mlog(0, "now time to do a migrate request to other nodes\n");
+82-82
fs/ocfs2/dlm/dlmrecovery.c
···
 }
 
 
-int dlm_wait_for_node_death(struct dlm_ctxt *dlm, u8 node, int timeout)
+void dlm_wait_for_node_death(struct dlm_ctxt *dlm, u8 node, int timeout)
 {
-	if (timeout) {
-		mlog(ML_NOTICE, "%s: waiting %dms for notification of "
-		     "death of node %u\n", dlm->name, timeout, node);
+	if (dlm_is_node_dead(dlm, node))
+		return;
+
+	printk(KERN_NOTICE "o2dlm: Waiting on the death of node %u in "
+	       "domain %s\n", node, dlm->name);
+
+	if (timeout)
 		wait_event_timeout(dlm->dlm_reco_thread_wq,
-				   dlm_is_node_dead(dlm, node),
-				   msecs_to_jiffies(timeout));
-	} else {
-		mlog(ML_NOTICE, "%s: waiting indefinitely for notification "
-		     "of death of node %u\n", dlm->name, node);
+				   dlm_is_node_dead(dlm, node),
+				   msecs_to_jiffies(timeout));
+	else
 		wait_event(dlm->dlm_reco_thread_wq,
 			   dlm_is_node_dead(dlm, node));
-	}
-	/* for now, return 0 */
-	return 0;
 }
 
-int dlm_wait_for_node_recovery(struct dlm_ctxt *dlm, u8 node, int timeout)
+void dlm_wait_for_node_recovery(struct dlm_ctxt *dlm, u8 node, int timeout)
 {
-	if (timeout) {
-		mlog(0, "%s: waiting %dms for notification of "
-		     "recovery of node %u\n", dlm->name, timeout, node);
+	if (dlm_is_node_recovered(dlm, node))
+		return;
+
+	printk(KERN_NOTICE "o2dlm: Waiting on the recovery of node %u in "
+	       "domain %s\n", node, dlm->name);
+
+	if (timeout)
 		wait_event_timeout(dlm->dlm_reco_thread_wq,
-				   dlm_is_node_recovered(dlm, node),
-				   msecs_to_jiffies(timeout));
-	} else {
-		mlog(0, "%s: waiting indefinitely for notification "
-		     "of recovery of node %u\n", dlm->name, node);
+				   dlm_is_node_recovered(dlm, node),
+				   msecs_to_jiffies(timeout));
+	else
 		wait_event(dlm->dlm_reco_thread_wq,
 			   dlm_is_node_recovered(dlm, node));
-	}
-	/* for now, return 0 */
-	return 0;
 }
 
 /* callers of the top-level api calls (dlmlock/dlmunlock) should
···
 {
 	spin_lock(&dlm->spinlock);
 	BUG_ON(dlm->reco.state & DLM_RECO_STATE_ACTIVE);
+	printk(KERN_NOTICE "o2dlm: Begin recovery on domain %s for node %u\n",
+	       dlm->name, dlm->reco.dead_node);
 	dlm->reco.state |= DLM_RECO_STATE_ACTIVE;
 	spin_unlock(&dlm->spinlock);
 }
···
 	BUG_ON(!(dlm->reco.state & DLM_RECO_STATE_ACTIVE));
 	dlm->reco.state &= ~DLM_RECO_STATE_ACTIVE;
 	spin_unlock(&dlm->spinlock);
+	printk(KERN_NOTICE "o2dlm: End recovery on domain %s\n", dlm->name);
 	wake_up(&dlm->reco.event);
 }
+
+static void dlm_print_recovery_master(struct dlm_ctxt *dlm)
+{
+	printk(KERN_NOTICE "o2dlm: Node %u (%s) is the Recovery Master for the "
+	       "dead node %u in domain %s\n", dlm->reco.new_master,
+	       (dlm->node_num == dlm->reco.new_master ? "me" : "he"),
+	       dlm->reco.dead_node, dlm->name);
+}
 
 static int dlm_do_recovery(struct dlm_ctxt *dlm)
···
 		}
 		mlog(0, "another node will master this recovery session.\n");
 	}
-	mlog(0, "dlm=%s (%d), new_master=%u, this node=%u, dead_node=%u\n",
-	     dlm->name, task_pid_nr(dlm->dlm_reco_thread_task), dlm->reco.new_master,
-	     dlm->node_num, dlm->reco.dead_node);
+
+	dlm_print_recovery_master(dlm);
 
 	/* it is safe to start everything back up here
 	 * because all of the dead node's lock resources
···
 	return 0;
 
 master_here:
-	mlog(ML_NOTICE, "(%d) Node %u is the Recovery Master for the Dead Node "
-	     "%u for Domain %s\n", task_pid_nr(dlm->dlm_reco_thread_task),
-	     dlm->node_num, dlm->reco.dead_node, dlm->name);
+	dlm_print_recovery_master(dlm);
 
 	status = dlm_remaster_locks(dlm, dlm->reco.dead_node);
 	if (status < 0) {
 		/* we should never hit this anymore */
-		mlog(ML_ERROR, "error %d remastering locks for node %u, "
-		     "retrying.\n", status, dlm->reco.dead_node);
+		mlog(ML_ERROR, "%s: Error %d remastering locks for node %u, "
+		     "retrying.\n", dlm->name, status, dlm->reco.dead_node);
 		/* yield a bit to allow any final network messages
 		 * to get handled on remaining nodes */
 		msleep(100);
···
 		BUG_ON(ndata->state != DLM_RECO_NODE_DATA_INIT);
 		ndata->state = DLM_RECO_NODE_DATA_REQUESTING;
 
-		mlog(0, "requesting lock info from node %u\n",
+		mlog(0, "%s: Requesting lock info from node %u\n", dlm->name,
 		     ndata->node_num);
 
 		if (ndata->node_num == dlm->node_num) {
···
 		spin_unlock(&dlm_reco_state_lock);
 	}
 
-	mlog(0, "done requesting all lock info\n");
+	mlog(0, "%s: Done requesting all lock info\n", dlm->name);
 
 	/* nodes should be sending reco data now
 	 * just need to wait */
···
 
 	/* negative status is handled by caller */
 	if (ret < 0)
-		mlog(ML_ERROR, "Error %d when sending message %u (key "
-		     "0x%x) to node %u\n", ret, DLM_LOCK_REQUEST_MSG,
-		     dlm->key, request_from);
-
+		mlog(ML_ERROR, "%s: Error %d send LOCK_REQUEST to node %u "
+		     "to recover dead node %u\n", dlm->name, ret,
+		     request_from, dead_node);
 	// return from here, then
 	// sleep until all received or error
 	return ret;
···
 	ret = o2net_send_message(DLM_RECO_DATA_DONE_MSG, dlm->key, &done_msg,
 				 sizeof(done_msg), send_to, &tmpret);
 	if (ret < 0) {
-		mlog(ML_ERROR, "Error %d when sending message %u (key "
-		     "0x%x) to node %u\n", ret, DLM_RECO_DATA_DONE_MSG,
-		     dlm->key, send_to);
+		mlog(ML_ERROR, "%s: Error %d send RECO_DATA_DONE to node %u "
+		     "to recover dead node %u\n", dlm->name, ret, send_to,
+		     dead_node);
 		if (!dlm_is_host_down(ret)) {
 			BUG();
 		}
···
 	if (ret < 0) {
 		/* XXX: negative status is not handled.
 		 * this will end up killing this node. */
-		mlog(ML_ERROR, "Error %d when sending message %u (key "
-		     "0x%x) to node %u\n", ret, DLM_MIG_LOCKRES_MSG,
-		     dlm->key, send_to);
+		mlog(ML_ERROR, "%s: res %.*s, Error %d send MIG_LOCKRES to "
+		     "node %u (%s)\n", dlm->name, mres->lockname_len,
+		     mres->lockname, ret, send_to,
+		     (orig_flags & DLM_MRES_MIGRATION ?
+		      "migration" : "recovery"));
 	} else {
 		/* might get an -ENOMEM back here */
 		ret = status;
···
 			     dlm->name, mres->lockname_len, mres->lockname,
 			     from);
 			spin_lock(&res->spinlock);
-			dlm_lockres_set_refmap_bit(from, res);
+			dlm_lockres_set_refmap_bit(dlm, res, from);
 			spin_unlock(&res->spinlock);
 			added++;
 			break;
···
 			mlog(0, "%s:%.*s: added lock for node %u, "
 			     "setting refmap bit\n", dlm->name,
 			     res->lockname.len, res->lockname.name, ml->node);
-			dlm_lockres_set_refmap_bit(ml->node, res);
+			dlm_lockres_set_refmap_bit(dlm, res, ml->node);
 			added++;
 		}
 		spin_unlock(&res->spinlock);
···
 
 	list_for_each_entry_safe(res, next, &dlm->reco.resources, recovering) {
 		if (res->owner == dead_node) {
+			mlog(0, "%s: res %.*s, Changing owner from %u to %u\n",
+			     dlm->name, res->lockname.len, res->lockname.name,
+			     res->owner, new_master);
 			list_del_init(&res->recovering);
 			spin_lock(&res->spinlock);
 			/* new_master has our reference from
···
 	for (i = 0; i < DLM_HASH_BUCKETS; i++) {
 		bucket = dlm_lockres_hash(dlm, i);
 		hlist_for_each_entry(res, hash_iter, bucket, hash_node) {
-			if (res->state & DLM_LOCK_RES_RECOVERING) {
-				if (res->owner == dead_node) {
-					mlog(0, "(this=%u) res %.*s owner=%u "
-					     "was not on recovering list, but "
-					     "clearing state anyway\n",
-					     dlm->node_num, res->lockname.len,
-					     res->lockname.name, new_master);
-				} else if (res->owner
== dlm->node_num) {21262126- mlog(0, "(this=%u) res %.*s owner=%u "21272127- "was not on recovering list, "21282128- "owner is THIS node, clearing\n",21292129- dlm->node_num, res->lockname.len,21302130- res->lockname.name, new_master);21312131- } else21322132- continue;21082108+ if (!(res->state & DLM_LOCK_RES_RECOVERING))21092109+ continue;2133211021342134- if (!list_empty(&res->recovering)) {21352135- mlog(0, "%s:%.*s: lockres was "21362136- "marked RECOVERING, owner=%u\n",21372137- dlm->name, res->lockname.len,21382138- res->lockname.name, res->owner);21392139- list_del_init(&res->recovering);21402140- dlm_lockres_put(res);21412141- }21422142- spin_lock(&res->spinlock);21432143- /* new_master has our reference from21442144- * the lock state sent during recovery */21452145- dlm_change_lockres_owner(dlm, res, new_master);21462146- res->state &= ~DLM_LOCK_RES_RECOVERING;21472147- if (__dlm_lockres_has_locks(res))21482148- __dlm_dirty_lockres(dlm, res);21492149- spin_unlock(&res->spinlock);21502150- wake_up(&res->wq);21112111+ if (res->owner != dead_node &&21122112+ res->owner != dlm->node_num)21132113+ continue;21142114+21152115+ if (!list_empty(&res->recovering)) {21162116+ list_del_init(&res->recovering);21172117+ dlm_lockres_put(res);21512118 }21192119+21202120+ /* new_master has our reference from21212121+ * the lock state sent during recovery */21222122+ mlog(0, "%s: res %.*s, Changing owner from %u to %u\n",21232123+ dlm->name, res->lockname.len, res->lockname.name,21242124+ res->owner, new_master);21252125+ spin_lock(&res->spinlock);21262126+ dlm_change_lockres_owner(dlm, res, new_master);21272127+ res->state &= ~DLM_LOCK_RES_RECOVERING;21282128+ if (__dlm_lockres_has_locks(res))21292129+ __dlm_dirty_lockres(dlm, res);21302130+ spin_unlock(&res->spinlock);21312131+ wake_up(&res->wq);21522132 }21532133 }21542134}···22522252 res->lockname.len, res->lockname.name, freed, dead_node);22532253 __dlm_print_one_lock_resource(res);22542254 }22552255- 
dlm_lockres_clear_refmap_bit(dead_node, res);22552255+ dlm_lockres_clear_refmap_bit(dlm, res, dead_node);22562256 } else if (test_bit(dead_node, res->refmap)) {22572257 mlog(0, "%s:%.*s: dead node %u had a ref, but had "22582258 "no locks and had not purged before dying\n", dlm->name,22592259 res->lockname.len, res->lockname.name, dead_node);22602260- dlm_lockres_clear_refmap_bit(dead_node, res);22602260+ dlm_lockres_clear_refmap_bit(dlm, res, dead_node);22612261 }2262226222632263 /* do not kick thread yet */···23242324 dlm_revalidate_lvb(dlm, res, dead_node);23252325 if (res->owner == dead_node) {23262326 if (res->state & DLM_LOCK_RES_DROPPING_REF) {23272327- mlog(ML_NOTICE, "Ignore %.*s for "23272327+ mlog(ML_NOTICE, "%s: res %.*s, Skip "23282328 "recovery as it is being freed\n",23292329- res->lockname.len,23292329+ dlm->name, res->lockname.len,23302330 res->lockname.name);23312331 } else23322332 dlm_move_lockres_to_recovery_list(dlm,
+8-8
fs/ocfs2/dlm/dlmthread.c
···
 {
 	int bit;
 
+	assert_spin_locked(&res->spinlock);
+
 	if (__dlm_lockres_has_locks(res))
+		return 0;
+
+	/* Locks are in the process of being created */
+	if (res->inflight_locks)
 		return 0;
 
 	if (!list_empty(&res->dirty) || res->state & DLM_LOCK_RES_DIRTY)
···
 	if (res->state & DLM_LOCK_RES_RECOVERING)
 		return 0;
 
+	/* Another node has this resource with this node as the master */
 	bit = find_next_bit(res->refmap, O2NM_MAX_NODES, 0);
 	if (bit < O2NM_MAX_NODES)
 		return 0;
 
-	/*
-	 * since the bit for dlm->node_num is not set, inflight_locks better
-	 * be zero
-	 */
-	BUG_ON(res->inflight_locks != 0);
 	return 1;
 }
 
···
 	/* clear our bit from the master's refmap, ignore errors */
 	ret = dlm_drop_lockres_ref(dlm, res);
 	if (ret < 0) {
-		mlog(ML_ERROR, "%s: deref %.*s failed %d\n", dlm->name,
-		     res->lockname.len, res->lockname.name, ret);
 		if (!dlm_is_host_down(ret))
 			BUG();
 	}
···
 		BUG();
 	}
 
-	__dlm_unhash_lockres(res);
+	__dlm_unhash_lockres(dlm, res);
 
 	/* lockres is not in the hash now.  drop the flag and wake up
 	 * any processes waiting in dlm_get_lock_resource. */
+15-6
fs/ocfs2/dlmglue.c
···
 	mlog(0, "inode %llu take PRMODE open lock\n",
 	     (unsigned long long)OCFS2_I(inode)->ip_blkno);
 
-	if (ocfs2_mount_local(osb))
+	if (ocfs2_is_hard_readonly(osb) || ocfs2_mount_local(osb))
 		goto out;
 
 	lockres = &OCFS2_I(inode)->ip_open_lockres;
···
 	mlog(0, "inode %llu try to take %s open lock\n",
 	     (unsigned long long)OCFS2_I(inode)->ip_blkno,
 	     write ? "EXMODE" : "PRMODE");
+
+	if (ocfs2_is_hard_readonly(osb)) {
+		if (write)
+			status = -EROFS;
+		goto out;
+	}
 
 	if (ocfs2_mount_local(osb))
 		goto out;
···
 	if (ocfs2_is_hard_readonly(osb)) {
 		if (ex)
 			status = -EROFS;
-		goto bail;
+		goto getbh;
 	}
 
 	if (ocfs2_mount_local(osb))
···
 		mlog_errno(status);
 		goto bail;
 	}
-
+getbh:
 	if (ret_bh) {
 		status = ocfs2_assign_bh(inode, ret_bh, local_bh);
 		if (status < 0) {
···
 
 	BUG_ON(!dl);
 
-	if (ocfs2_is_hard_readonly(osb))
-		return -EROFS;
+	if (ocfs2_is_hard_readonly(osb)) {
+		if (ex)
+			return -EROFS;
+		return 0;
+	}
 
 	if (ocfs2_mount_local(osb))
 		return 0;
···
 	struct ocfs2_dentry_lock *dl = dentry->d_fsdata;
 	struct ocfs2_super *osb = OCFS2_SB(dentry->d_sb);
 
-	if (!ocfs2_mount_local(osb))
+	if (!ocfs2_is_hard_readonly(osb) && !ocfs2_mount_local(osb))
 		ocfs2_cluster_unlock(osb, &dl->dl_lockres, level);
 }
 
fs/ocfs2/file.c
···
 	if (ret < 0)
 		mlog_errno(ret);
 
+	if (file->f_flags & O_SYNC)
+		handle->h_sync = 1;
+
 	ocfs2_commit_trans(osb, handle);
 
out_inode_unlock:
···
 	}
out:
 	return ret;
+}
+
+static void ocfs2_aiodio_wait(struct inode *inode)
+{
+	wait_queue_head_t *wq = ocfs2_ioend_wq(inode);
+
+	wait_event(*wq, (atomic_read(&OCFS2_I(inode)->ip_unaligned_aio) == 0));
+}
+
+static int ocfs2_is_io_unaligned(struct inode *inode, size_t count, loff_t pos)
+{
+	int blockmask = inode->i_sb->s_blocksize - 1;
+	loff_t final_size = pos + count;
+
+	if ((pos & blockmask) || (final_size & blockmask))
+		return 1;
+	return 0;
 }
 
static int ocfs2_prepare_inode_for_refcount(struct inode *inode,
···
 	struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
 	int full_coherency = !(osb->s_mount_opt &
 			       OCFS2_MOUNT_COHERENCY_BUFFERED);
+	int unaligned_dio = 0;
 
 	trace_ocfs2_file_aio_write(inode, file, file->f_path.dentry,
 		(unsigned long long)OCFS2_I(inode)->ip_blkno,
···
 		goto out;
 	}
 
+	if (direct_io && !is_sync_kiocb(iocb))
+		unaligned_dio = ocfs2_is_io_unaligned(inode, iocb->ki_left,
+						      *ppos);
+
 	/*
 	 * We can't complete the direct I/O as requested, fall back to
 	 * buffered I/O.
···
 
 		direct_io = 0;
 		goto relock;
+	}
+
+	if (unaligned_dio) {
+		/*
+		 * Wait on previous unaligned aio to complete before
+		 * proceeding.
+		 */
+		ocfs2_aiodio_wait(inode);
+
+		/* Mark the iocb as needing a decrement in ocfs2_dio_end_io */
+		atomic_inc(&OCFS2_I(inode)->ip_unaligned_aio);
+		ocfs2_iocb_set_unaligned_aio(iocb);
 	}
 
 	/*
···
 	if ((ret == -EIOCBQUEUED) || (!ocfs2_iocb_is_rw_locked(iocb))) {
 		rw_level = -1;
 		have_alloc_sem = 0;
+		unaligned_dio = 0;
 	}
+
+	if (unaligned_dio)
+		atomic_dec(&OCFS2_I(inode)->ip_unaligned_aio);
 
out:
 	if (rw_level != -1)
···
 	return ret;
}
 
+/* Refer generic_file_llseek_unlocked() */
+static loff_t ocfs2_file_llseek(struct file *file, loff_t offset, int origin)
+{
+	struct inode *inode = file->f_mapping->host;
+	int ret = 0;
+
+	mutex_lock(&inode->i_mutex);
+
+	switch (origin) {
+	case SEEK_SET:
+		break;
+	case SEEK_END:
+		offset += inode->i_size;
+		break;
+	case SEEK_CUR:
+		if (offset == 0) {
+			offset = file->f_pos;
+			goto out;
+		}
+		offset += file->f_pos;
+		break;
+	case SEEK_DATA:
+	case SEEK_HOLE:
+		ret = ocfs2_seek_data_hole_offset(file, &offset, origin);
+		if (ret)
+			goto out;
+		break;
+	default:
+		ret = -EINVAL;
+		goto out;
+	}
+
+	if (offset < 0 && !(file->f_mode & FMODE_UNSIGNED_OFFSET))
+		ret = -EINVAL;
+	if (!ret && offset > inode->i_sb->s_maxbytes)
+		ret = -EINVAL;
+	if (ret)
+		goto out;
+
+	if (offset != file->f_pos) {
+		file->f_pos = offset;
+		file->f_version = 0;
+	}
+
+out:
+	mutex_unlock(&inode->i_mutex);
+	if (ret)
+		return ret;
+	return offset;
+}
+
const struct inode_operations ocfs2_file_iops = {
 	.setattr	= ocfs2_setattr,
 	.getattr	= ocfs2_getattr,
···
 * ocfs2_fops_no_plocks and ocfs2_dops_no_plocks!
 */
const struct file_operations ocfs2_fops = {
-	.llseek		= generic_file_llseek,
+	.llseek		= ocfs2_file_llseek,
 	.read		= do_sync_read,
 	.write		= do_sync_write,
 	.mmap		= ocfs2_mmap,
···
 * the cluster.
 */
const struct file_operations ocfs2_fops_no_plocks = {
-	.llseek		= generic_file_llseek,
+	.llseek		= ocfs2_file_llseek,
 	.read		= do_sync_read,
 	.write		= do_sync_write,
 	.mmap		= ocfs2_mmap,
+1-1
fs/ocfs2/inode.c
···
 	trace_ocfs2_cleanup_delete_inode(
 		(unsigned long long)OCFS2_I(inode)->ip_blkno, sync_data);
 	if (sync_data)
-		write_inode_now(inode, 1);
+		filemap_write_and_wait(inode->i_mapping);
 	truncate_inode_pages(&inode->i_data, 0);
 }
 
+3
fs/ocfs2/inode.h
···
 	/* protects extended attribute changes on this inode */
 	struct rw_semaphore		ip_xattr_sem;
 
+	/* Number of outstanding AIO's which are not page aligned */
+	atomic_t			ip_unaligned_aio;
+
 	/* These fields are protected by ip_lock */
 	spinlock_t			ip_lock;
 	u32				ip_open_count;
+6-5
fs/ocfs2/ioctl.c
···
 	if ((oldflags & OCFS2_IMMUTABLE_FL) || ((flags ^ oldflags) &
 		(OCFS2_APPEND_FL | OCFS2_IMMUTABLE_FL))) {
 		if (!capable(CAP_LINUX_IMMUTABLE))
-			goto bail_unlock;
+			goto bail_commit;
 	}
 
 	ocfs2_inode->ip_attr = flags;
···
 	if (status < 0)
 		mlog_errno(status);
 
+bail_commit:
 	ocfs2_commit_trans(osb, handle);
bail_unlock:
 	ocfs2_inode_unlock(inode, 1);
···
 	if (!oifi) {
 		status = -ENOMEM;
 		mlog_errno(status);
-		goto bail;
+		goto out_err;
 	}
 
 	if (o2info_from_user(*oifi, req))
···
 		o2info_set_request_error(&oifi->ifi_req, req);
 
 	kfree(oifi);
-
+out_err:
 	return status;
}
 
···
 	if (!oiff) {
 		status = -ENOMEM;
 		mlog_errno(status);
-		goto bail;
+		goto out_err;
 	}
 
 	if (o2info_from_user(*oiff, req))
···
 		o2info_set_request_error(&oiff->iff_req, req);
 
 	kfree(oiff);
-
+out_err:
 	return status;
}
 
+20-3
fs/ocfs2/journal.c
···
 	/* we need to run complete recovery for offline orphan slots */
 	ocfs2_replay_map_set_state(osb, REPLAY_NEEDED);
 
-	mlog(ML_NOTICE, "Recovering node %d from slot %d on device (%u,%u)\n",
-	     node_num, slot_num,
-	     MAJOR(osb->sb->s_dev), MINOR(osb->sb->s_dev));
+	printk(KERN_NOTICE "ocfs2: Begin replay journal (node %d, slot %d) on "\
+	       "device (%u,%u)\n", node_num, slot_num, MAJOR(osb->sb->s_dev),
+	       MINOR(osb->sb->s_dev));
 
 	OCFS2_I(inode)->ip_clusters = le32_to_cpu(fe->i_clusters);
 
···
 
 	jbd2_journal_destroy(journal);
 
+	printk(KERN_NOTICE "ocfs2: End replay journal (node %d, slot %d) on "\
+	       "device (%u,%u)\n", node_num, slot_num, MAJOR(osb->sb->s_dev),
+	       MINOR(osb->sb->s_dev));
done:
 	/* drop the lock on this nodes journal */
 	if (got_lock)
···
 * ocfs2_queue_orphan_scan calls ocfs2_queue_recovery_completion for
 * every slot, queuing a recovery of the slot on the ocfs2_wq thread. This
 * is done to catch any orphans that are left over in orphan directories.
+ *
+ * It scans all slots, even ones that are in use. It does so to handle the
+ * case described below:
+ *
+ * Node 1 has an inode it was using. The dentry went away due to memory
+ * pressure.  Node 1 closes the inode, but it's on the free list. The node
+ * has the open lock.
+ * Node 2 unlinks the inode. It grabs the dentry lock to notify others,
+ * but node 1 has no dentry and doesn't get the message. It trylocks the
+ * open lock, sees that another node has a PR, and does nothing.
+ * Later node 2 runs its orphan dir. It igets the inode, trylocks the
+ * open lock, sees the PR still, and does nothing.
+ * Basically, we have to trigger an orphan iput on node 1. The only way
+ * for this to happen is if node 1 runs node 2's orphan dir.
 *
 * ocfs2_queue_orphan_scan gets called every ORPHAN_SCAN_SCHEDULE_TIMEOUT
 * seconds.  It gets an EX lock on os_lockres and checks sequence number
+3-2
fs/ocfs2/journal.h
···
#define OCFS2_SIMPLE_DIR_EXTEND_CREDITS (2)

/* file update (nlink, etc) + directory mtime/ctime + dir entry block + quota
- * update on dir + index leaf + dx root update for free list */
+ * update on dir + index leaf + dx root update for free list +
+ * previous dirblock update in the free list */
static inline int ocfs2_link_credits(struct super_block *sb)
{
-	return 2*OCFS2_INODE_UPDATE_CREDITS + 3 +
+	return 2*OCFS2_INODE_UPDATE_CREDITS + 4 +
	       ocfs2_quota_trans_credits(sb);
}
+24-29
fs/ocfs2/mmap.c
···
static int __ocfs2_page_mkwrite(struct file *file, struct buffer_head *di_bh,
				struct page *page)
{
-	int ret;
+	int ret = VM_FAULT_NOPAGE;
 	struct inode *inode = file->f_path.dentry->d_inode;
 	struct address_space *mapping = inode->i_mapping;
 	loff_t pos = page_offset(page);
···
 	void *fsdata;
 	loff_t size = i_size_read(inode);
 
-	/*
-	 * Another node might have truncated while we were waiting on
-	 * cluster locks.
-	 * We don't check size == 0 before the shift. This is borrowed
-	 * from do_generic_file_read.
-	 */
 	last_index = (size - 1) >> PAGE_CACHE_SHIFT;
-	if (unlikely(!size || page->index > last_index)) {
-		ret = -EINVAL;
-		goto out;
-	}
 
 	/*
-	 * The i_size check above doesn't catch the case where nodes
-	 * truncated and then re-extended the file. We'll re-check the
-	 * page mapping after taking the page lock inside of
-	 * ocfs2_write_begin_nolock().
+	 * There are cases that lead to the page no longer bebongs to the
+	 * mapping.
+	 * 1) pagecache truncates locally due to memory pressure.
+	 * 2) pagecache truncates when another is taking EX lock against
+	 * inode lock. see ocfs2_data_convert_worker.
+	 *
+	 * The i_size check doesn't catch the case where nodes truncated and
+	 * then re-extended the file. We'll re-check the page mapping after
+	 * taking the page lock inside of ocfs2_write_begin_nolock().
+	 *
+	 * Let VM retry with these cases.
 	 */
-	if (!PageUptodate(page) || page->mapping != inode->i_mapping) {
-		/*
-		 * the page has been umapped in ocfs2_data_downconvert_worker.
-		 * So return 0 here and let VFS retry.
-		 */
-		ret = 0;
+	if ((page->mapping != inode->i_mapping) ||
+	    (!PageUptodate(page)) ||
+	    (page_offset(page) >= size))
 		goto out;
-	}
 
 	/*
 	 * Call ocfs2_write_begin() and ocfs2_write_end() to take
···
 	if (ret) {
 		if (ret != -ENOSPC)
 			mlog_errno(ret);
+		if (ret == -ENOMEM)
+			ret = VM_FAULT_OOM;
+		else
+			ret = VM_FAULT_SIGBUS;
 		goto out;
 	}
 
-	ret = ocfs2_write_end_nolock(mapping, pos, len, len, locked_page,
-				     fsdata);
-	if (ret < 0) {
-		mlog_errno(ret);
+	if (!locked_page) {
+		ret = VM_FAULT_NOPAGE;
 		goto out;
 	}
+	ret = ocfs2_write_end_nolock(mapping, pos, len, len, locked_page,
+				     fsdata);
 	BUG_ON(ret != len);
-	ret = 0;
+	ret = VM_FAULT_LOCKED;
out:
 	return ret;
}
···
 
out:
 	ocfs2_unblock_signals(&oldset);
-	if (ret)
-		ret = VM_FAULT_SIGBUS;
 	return ret;
}
 
+1-1
fs/ocfs2/move_extents.c
···
 	 */
 	ocfs2_probe_alloc_group(inode, gd_bh, &goal_bit, len, move_max_hop,
 				new_phys_cpos);
-	if (!new_phys_cpos) {
+	if (!*new_phys_cpos) {
 		ret = -ENOSPC;
 		goto out_commit;
 	}
fs/ocfs2/quota_local.c
···
 	int status = 0;
 	struct ocfs2_quota_recovery *rec;
 
-	mlog(ML_NOTICE, "Beginning quota recovery in slot %u\n", slot_num);
+	printk(KERN_NOTICE "ocfs2: Beginning quota recovery on device (%s) for "
+	       "slot %u\n", osb->dev_str, slot_num);
+
 	rec = ocfs2_alloc_quota_recovery();
 	if (!rec)
 		return ERR_PTR(-ENOMEM);
···
 		goto out_commit;
 	}
 	lock_buffer(qbh);
-	WARN_ON(!ocfs2_test_bit(bit, dchunk->dqc_bitmap));
-	ocfs2_clear_bit(bit, dchunk->dqc_bitmap);
+	WARN_ON(!ocfs2_test_bit_unaligned(bit, dchunk->dqc_bitmap));
+	ocfs2_clear_bit_unaligned(bit, dchunk->dqc_bitmap);
 	le32_add_cpu(&dchunk->dqc_free, 1);
 	unlock_buffer(qbh);
 	ocfs2_journal_dirty(handle, qbh);
···
 	struct inode *lqinode;
 	unsigned int flags;
 
-	mlog(ML_NOTICE, "Finishing quota recovery in slot %u\n", slot_num);
+	printk(KERN_NOTICE "ocfs2: Finishing quota recovery on device (%s) for "
+	       "slot %u\n", osb->dev_str, slot_num);
+
 	mutex_lock(&sb_dqopt(sb)->dqonoff_mutex);
 	for (type = 0; type < MAXQUOTAS; type++) {
 		if (list_empty(&(rec->r_list[type])))
···
 		/* Someone else is holding the lock? Then he must be
 		 * doing the recovery. Just skip the file... */
 		if (status == -EAGAIN) {
-			mlog(ML_NOTICE, "skipping quota recovery for slot %d "
-			     "because quota file is locked.\n", slot_num);
+			printk(KERN_NOTICE "ocfs2: Skipping quota recovery on "
+			       "device (%s) for slot %d because quota file is "
+			       "locked.\n", osb->dev_str, slot_num);
 			status = 0;
 			goto out_put;
 		} else if (status < 0) {
···
 			* ol_quota_entries_per_block(sb);
 	}
 
-	found = ocfs2_find_next_zero_bit(dchunk->dqc_bitmap, len, 0);
+	found = ocfs2_find_next_zero_bit_unaligned(dchunk->dqc_bitmap, len, 0);
 	/* We failed? */
 	if (found == len) {
 		mlog(ML_ERROR, "Did not find empty entry in chunk %d with %u"
···
 	struct ocfs2_local_disk_chunk *dchunk;
 
 	dchunk = (struct ocfs2_local_disk_chunk *)bh->b_data;
-	ocfs2_set_bit(*offset, dchunk->dqc_bitmap);
+	ocfs2_set_bit_unaligned(*offset, dchunk->dqc_bitmap);
 	le32_add_cpu(&dchunk->dqc_free, -1);
}
 
···
 		(od->dq_chunk->qc_headerbh->b_data);
 	/* Mark structure as freed */
 	lock_buffer(od->dq_chunk->qc_headerbh);
-	ocfs2_clear_bit(offset, dchunk->dqc_bitmap);
+	ocfs2_clear_bit_unaligned(offset, dchunk->dqc_bitmap);
 	le32_add_cpu(&dchunk->dqc_free, 1);
 	unlock_buffer(od->dq_chunk->qc_headerbh);
 	ocfs2_journal_dirty(handle, od->dq_chunk->qc_headerbh);
+2-2
fs/ocfs2/slot_map.c
···
 			goto bail;
 		}
 	} else
-		mlog(ML_NOTICE, "slot %d is already allocated to this node!\n",
-		     slot);
+		printk(KERN_INFO "ocfs2: Slot %d on device (%s) was already "
+		       "allocated to this node!\n", slot, osb->dev_str);

 	ocfs2_set_slot(si, slot, osb->node_num);
 	osb->slot_num = slot;
+63-8
fs/ocfs2/stack_o2cb.c
···
#include "cluster/masklog.h"
#include "cluster/nodemanager.h"
#include "cluster/heartbeat.h"
+#include "cluster/tcp.h"

#include "stackglue.h"

···
}

/*
+ * Check if this node is heartbeating and is connected to all other
+ * heartbeating nodes.
+ */
+static int o2cb_cluster_check(void)
+{
+	u8 node_num;
+	int i;
+	unsigned long hbmap[BITS_TO_LONGS(O2NM_MAX_NODES)];
+	unsigned long netmap[BITS_TO_LONGS(O2NM_MAX_NODES)];
+
+	node_num = o2nm_this_node();
+	if (node_num == O2NM_MAX_NODES) {
+		printk(KERN_ERR "o2cb: This node has not been configured.\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * o2dlm expects o2net sockets to be created. If not, then
+	 * dlm_join_domain() fails with a stack of errors which are both cryptic
+	 * and incomplete. The idea here is to detect upfront whether we have
+	 * managed to connect to all nodes or not. If not, then list the nodes
+	 * to allow the user to check the configuration (incorrect IP, firewall,
+	 * etc.) Yes, this is racy. But its not the end of the world.
+	 */
+#define	O2CB_MAP_STABILIZE_COUNT	60
+	for (i = 0; i < O2CB_MAP_STABILIZE_COUNT; ++i) {
+		o2hb_fill_node_map(hbmap, sizeof(hbmap));
+		if (!test_bit(node_num, hbmap)) {
+			printk(KERN_ERR "o2cb: %s heartbeat has not been "
+			       "started.\n", (o2hb_global_heartbeat_active() ?
+			       "Global" : "Local"));
+			return -EINVAL;
+		}
+		o2net_fill_node_map(netmap, sizeof(netmap));
+		/* Force set the current node to allow easy compare */
+		set_bit(node_num, netmap);
+		if (!memcmp(hbmap, netmap, sizeof(hbmap)))
+			return 0;
+		if (i < O2CB_MAP_STABILIZE_COUNT)
+			msleep(1000);
+	}
+
+	printk(KERN_ERR "o2cb: This node could not connect to nodes:");
+	i = -1;
+	while ((i = find_next_bit(hbmap, O2NM_MAX_NODES,
+				  i + 1)) < O2NM_MAX_NODES) {
+		if (!test_bit(i, netmap))
+			printk(" %u", i);
+	}
+	printk(".\n");
+
+	return -ENOTCONN;
+}
+
+/*
 * Called from the dlm when it's about to evict a node. This is how the
 * classic stack signals node death.
 */
···
{
 	struct ocfs2_cluster_connection *conn = data;
 
-	mlog(ML_NOTICE, "o2dlm has evicted node %d from group %.*s\n",
-	     node_num, conn->cc_namelen, conn->cc_name);
+	printk(KERN_NOTICE "o2cb: o2dlm has evicted node %d from domain %.*s\n",
+	       node_num, conn->cc_namelen, conn->cc_name);
 
 	conn->cc_recovery_handler(node_num, conn->cc_recovery_data);
}
···
 	BUG_ON(conn == NULL);
 	BUG_ON(conn->cc_proto == NULL);
 
-	/* for now we only have one cluster/node, make sure we see it
-	 * in the heartbeat universe */
-	if (!o2hb_check_local_node_heartbeating()) {
-		if (o2hb_global_heartbeat_active())
-			mlog(ML_ERROR, "Global heartbeat not started\n");
-		rc = -EINVAL;
+	/* Ensure cluster stack is up and all nodes are connected */
+	rc = o2cb_cluster_check();
+	if (rc) {
+		printk(KERN_ERR "o2cb: Cluster check failed. Fix errors "
+		       "before retrying.\n");
 		goto out;
 	}
 
+16-9
fs/ocfs2/super.c
···
#include "ocfs1_fs_compat.h"

#include "alloc.h"
+#include "aops.h"
#include "blockcheck.h"
#include "dlmglue.h"
#include "export.h"
···
 
 	ocfs2_set_ro_flag(osb, 1);
 
-	printk(KERN_NOTICE "Readonly device detected. No cluster "
-	       "services will be utilized for this mount. Recovery "
-	       "will be skipped.\n");
+	printk(KERN_NOTICE "ocfs2: Readonly device (%s) detected. "
+	       "Cluster services will not be used for this mount. "
+	       "Recovery will be skipped.\n", osb->dev_str);
 
 	if (!ocfs2_is_hard_readonly(osb)) {
···
 	return 0;
}
 
+wait_queue_head_t ocfs2__ioend_wq[OCFS2_IOEND_WQ_HASH_SZ];
+
static int __init ocfs2_init(void)
{
-	int status;
+	int status, i;
 
 	ocfs2_print_version();
+
+	for (i = 0; i < OCFS2_IOEND_WQ_HASH_SZ; i++)
+		init_waitqueue_head(&ocfs2__ioend_wq[i]);
 
 	status = init_ocfs2_uptodate_cache();
 	if (status < 0) {
···
 	ocfs2_extent_map_init(&oi->vfs_inode);
 	INIT_LIST_HEAD(&oi->ip_io_markers);
 	oi->ip_dir_start_lookup = 0;
-
+	atomic_set(&oi->ip_unaligned_aio, 0);
 	init_rwsem(&oi->ip_alloc_sem);
 	init_rwsem(&oi->ip_xattr_sem);
 	mutex_init(&oi->ip_io_mutex);
···
 	 * If we failed before we got a uuid_str yet, we can't stop
 	 * heartbeat.  Otherwise, do it.
 	 */
-	if (!mnt_err && !ocfs2_mount_local(osb) && osb->uuid_str)
+	if (!mnt_err && !ocfs2_mount_local(osb) && osb->uuid_str &&
+	    !ocfs2_is_hard_readonly(osb))
 		hangup_needed = 1;
 
 	if (osb->cconn)
···
 		mlog_errno(status);
 		goto bail;
 	}
-	cleancache_init_shared_fs((char *)&uuid_net_key, sb);
+	cleancache_init_shared_fs((char *)&di->id2.i_super.s_uuid, sb);
 
bail:
 	return status;
···
 		goto finally;
 	}
 	} else {
-		mlog(ML_NOTICE, "File system was not unmounted cleanly, "
-		     "recovering volume.\n");
+		printk(KERN_NOTICE "ocfs2: File system on device (%s) was not "
+		       "unmounted cleanly, recovering it.\n", osb->dev_str);
 	}
 
 	local = ocfs2_mount_local(osb);
fs/pstore/platform.c
···
 	}
 
 	psinfo = psi;
+	mutex_init(&psinfo->read_mutex);
 	spin_unlock(&pstore_lock);
 
 	if (owner && !try_module_get(owner)) {
···
void pstore_get_records(int quiet)
{
 	struct pstore_info *psi = psinfo;
+	char			*buf = NULL;
 	ssize_t			size;
 	u64			id;
 	enum pstore_type_id	type;
 	struct timespec		time;
 	int			failed = 0, rc;
-	unsigned long		flags;
 
 	if (!psi)
 		return;
 
-	spin_lock_irqsave(&psinfo->buf_lock, flags);
+	mutex_lock(&psi->read_mutex);
 	rc = psi->open(psi);
 	if (rc)
 		goto out;
 
-	while ((size = psi->read(&id, &type, &time, psi)) > 0) {
-		rc = pstore_mkfile(type, psi->name, id, psi->buf, (size_t)size,
+	while ((size = psi->read(&id, &type, &time, &buf, psi)) > 0) {
+		rc = pstore_mkfile(type, psi->name, id, buf, (size_t)size,
 				  time, psi);
+		kfree(buf);
+		buf = NULL;
 		if (rc && (rc != -EEXIST || !quiet))
 			failed++;
 	}
 	psi->close(psi);
out:
-	spin_unlock_irqrestore(&psinfo->buf_lock, flags);
+	mutex_unlock(&psi->read_mutex);
 
 	if (failed)
 		printk(KERN_WARNING "pstore: failed to load %d record(s) from '%s'\n",
+4-5
include/drm/exynos_drm.h
···
/**
 * User-desired buffer creation information structure.
 *
- * @size: requested size for the object.
+ * @size: user-desired memory allocation size.
 *	- this size value would be page-aligned internally.
 * @flags: user request for setting memory type or cache attributes.
- * @handle: returned handle for the object.
- * @pad: just padding to be 64-bit aligned.
+ * @handle: returned a handle to created gem object.
+ *	- this handle will be set by gem module of kernel side.
 */
struct drm_exynos_gem_create {
-	unsigned int size;
+	uint64_t size;
	unsigned int flags;
	unsigned int handle;
-	unsigned int pad;
};

/**
+2-1
include/linux/clocksource.h
@@ -156 +156 @@
  * @mult:		cycle to nanosecond multiplier
  * @shift:		cycle to nanosecond divisor (power of two)
  * @max_idle_ns:	max idle time permitted by the clocksource (nsecs)
+ * @maxadj		maximum adjustment value to mult (~11%)
  * @flags:		flags describing special properties
  * @archdata:		arch-specific data
  * @suspend:		suspend function for the clocksource, if necessary
@@ -173 +172 @@
 	u32 mult;
 	u32 shift;
 	u64 max_idle_ns;
-
+	u32 maxadj;
 #ifdef CONFIG_ARCH_CLOCKSOURCE_DATA
 	struct arch_clocksource_data archdata;
 #endif
+127-88
include/linux/pm.h
@@ -54 +54 @@
 /**
  * struct dev_pm_ops - device PM callbacks
  *
- * Several driver power state transitions are externally visible, affecting
+ * Several device power state transitions are externally visible, affecting
  * the state of pending I/O queues and (for drivers that touch hardware)
  * interrupts, wakeups, DMA, and other hardware state.  There may also be
- * internal transitions to various low power modes, which are transparent
+ * internal transitions to various low-power modes which are transparent
  * to the rest of the driver stack (such as a driver that's ON gating off
  * clocks which are not in active use).
  *
- * The externally visible transitions are handled with the help of the following
- * callbacks included in this structure:
+ * The externally visible transitions are handled with the help of callbacks
+ * included in this structure in such a way that two levels of callbacks are
+ * involved.  First, the PM core executes callbacks provided by PM domains,
+ * device types, classes and bus types.  They are the subsystem-level callbacks
+ * supposed to execute callbacks provided by device drivers, although they may
+ * choose not to do that.  If the driver callbacks are executed, they have to
+ * collaborate with the subsystem-level callbacks to achieve the goals
+ * appropriate for the given system transition, given transition phase and the
+ * subsystem the device belongs to.
  *
- * @prepare: Prepare the device for the upcoming transition, but do NOT change
- *	its hardware state.  Prevent new children of the device from being
- *	registered after @prepare() returns (the driver's subsystem and
- *	generally the rest of the kernel is supposed to prevent new calls to the
- *	probe method from being made too once @prepare() has succeeded).  If
- *	@prepare() detects a situation it cannot handle (e.g. registration of a
- *	child already in progress), it may return -EAGAIN, so that the PM core
- *	can execute it once again (e.g. after the new child has been registered)
- *	to recover from the race condition.  This method is executed for all
- *	kinds of suspend transitions and is followed by one of the suspend
- *	callbacks: @suspend(), @freeze(), or @poweroff().
- *	The PM core executes @prepare() for all devices before starting to
- *	execute suspend callbacks for any of them, so drivers may assume all of
- *	the other devices to be present and functional while @prepare() is being
- *	executed.  In particular, it is safe to make GFP_KERNEL memory
- *	allocations from within @prepare().  However, drivers may NOT assume
- *	anything about the availability of the user space at that time and it
- *	is not correct to request firmware from within @prepare() (it's too
- *	late to do that).  [To work around this limitation, drivers may
- *	register suspend and hibernation notifiers that are executed before the
- *	freezing of tasks.]
+ * @prepare: The principal role of this callback is to prevent new children of
+ *	the device from being registered after it has returned (the driver's
+ *	subsystem and generally the rest of the kernel is supposed to prevent
+ *	new calls to the probe method from being made too once @prepare() has
+ *	succeeded).  If @prepare() detects a situation it cannot handle (e.g.
+ *	registration of a child already in progress), it may return -EAGAIN, so
+ *	that the PM core can execute it once again (e.g. after a new child has
+ *	been registered) to recover from the race condition.
+ *	This method is executed for all kinds of suspend transitions and is
+ *	followed by one of the suspend callbacks: @suspend(), @freeze(), or
+ *	@poweroff().  The PM core executes subsystem-level @prepare() for all
+ *	devices before starting to invoke suspend callbacks for any of them, so
+ *	generally devices may be assumed to be functional or to respond to
+ *	runtime resume requests while @prepare() is being executed.  However,
+ *	device drivers may NOT assume anything about the availability of user
+ *	space at that time and it is NOT valid to request firmware from within
+ *	@prepare() (it's too late to do that).  It also is NOT valid to allocate
+ *	substantial amounts of memory from @prepare() in the GFP_KERNEL mode.
+ *	[To work around these limitations, drivers may register suspend and
+ *	hibernation notifiers to be executed before the freezing of tasks.]
  *
  * @complete: Undo the changes made by @prepare().  This method is executed for
  *	all kinds of resume transitions, following one of the resume callbacks:
  *	@resume(), @thaw(), @restore().  Also called if the state transition
- *	fails before the driver's suspend callback (@suspend(), @freeze(),
- *	@poweroff()) can be executed (e.g. if the suspend callback fails for one
+ *	fails before the driver's suspend callback: @suspend(), @freeze() or
+ *	@poweroff(), can be executed (e.g. if the suspend callback fails for one
  *	of the other devices that the PM core has unsuccessfully attempted to
  *	suspend earlier).
- *	The PM core executes @complete() after it has executed the appropriate
- *	resume callback for all devices.
+ *	The PM core executes subsystem-level @complete() after it has executed
+ *	the appropriate resume callbacks for all devices.
  *
  * @suspend: Executed before putting the system into a sleep state in which the
- *	contents of main memory are preserved.  Quiesce the device, put it into
- *	a low power state appropriate for the upcoming system state (such as
- *	PCI_D3hot), and enable wakeup events as appropriate.
+ *	contents of main memory are preserved.  The exact action to perform
+ *	depends on the device's subsystem (PM domain, device type, class or bus
+ *	type), but generally the device must be quiescent after subsystem-level
+ *	@suspend() has returned, so that it doesn't do any I/O or DMA.
+ *	Subsystem-level @suspend() is executed for all devices after invoking
+ *	subsystem-level @prepare() for all of them.
  *
  * @resume: Executed after waking the system up from a sleep state in which the
- *	contents of main memory were preserved.  Put the device into the
- *	appropriate state, according to the information saved in memory by the
- *	preceding @suspend().  The driver starts working again, responding to
- *	hardware events and software requests.  The hardware may have gone
- *	through a power-off reset, or it may have maintained state from the
- *	previous suspend() which the driver may rely on while resuming.  On most
- *	platforms, there are no restrictions on availability of resources like
- *	clocks during @resume().
+ *	contents of main memory were preserved.  The exact action to perform
+ *	depends on the device's subsystem, but generally the driver is expected
+ *	to start working again, responding to hardware events and software
+ *	requests (the device itself may be left in a low-power state, waiting
+ *	for a runtime resume to occur).  The state of the device at the time its
+ *	driver's @resume() callback is run depends on the platform and subsystem
+ *	the device belongs to.  On most platforms, there are no restrictions on
+ *	availability of resources like clocks during @resume().
+ *	Subsystem-level @resume() is executed for all devices after invoking
+ *	subsystem-level @resume_noirq() for all of them.
  *
  * @freeze: Hibernation-specific, executed before creating a hibernation image.
- *	Quiesce operations so that a consistent image can be created, but do NOT
- *	otherwise put the device into a low power device state and do NOT emit
- *	system wakeup events.  Save in main memory the device settings to be
- *	used by @restore() during the subsequent resume from hibernation or by
- *	the subsequent @thaw(), if the creation of the image or the restoration
- *	of main memory contents from it fails.
+ *	Analogous to @suspend(), but it should not enable the device to signal
+ *	wakeup events or change its power state.  The majority of subsystems
+ *	(with the notable exception of the PCI bus type) expect the driver-level
+ *	@freeze() to save the device settings in memory to be used by @restore()
+ *	during the subsequent resume from hibernation.
+ *	Subsystem-level @freeze() is executed for all devices after invoking
+ *	subsystem-level @prepare() for all of them.
  *
  * @thaw: Hibernation-specific, executed after creating a hibernation image OR
- *	if the creation of the image fails.  Also executed after a failing
+ *	if the creation of an image has failed.  Also executed after a failing
  *	attempt to restore the contents of main memory from such an image.
  *	Undo the changes made by the preceding @freeze(), so the device can be
  *	operated in the same way as immediately before the call to @freeze().
+ *	Subsystem-level @thaw() is executed for all devices after invoking
+ *	subsystem-level @thaw_noirq() for all of them.  It also may be executed
+ *	directly after @freeze() in case of a transition error.
  *
  * @poweroff: Hibernation-specific, executed after saving a hibernation image.
- *	Quiesce the device, put it into a low power state appropriate for the
- *	upcoming system state (such as PCI_D3hot), and enable wakeup events as
- *	appropriate.
+ *	Analogous to @suspend(), but it need not save the device's settings in
+ *	memory.
+ *	Subsystem-level @poweroff() is executed for all devices after invoking
+ *	subsystem-level @prepare() for all of them.
  *
  * @restore: Hibernation-specific, executed after restoring the contents of main
- *	memory from a hibernation image.  Driver starts working again,
- *	responding to hardware events and software requests.  Drivers may NOT
- *	make ANY assumptions about the hardware state right prior to @restore().
- *	On most platforms, there are no restrictions on availability of
- *	resources like clocks during @restore().
+ *	memory from a hibernation image, analogous to @resume().
  *
- * @suspend_noirq: Complete the operations of ->suspend() by carrying out any
- *	actions required for suspending the device that need interrupts to be
- *	disabled
+ * @suspend_noirq: Complete the actions started by @suspend().  Carry out any
+ *	additional operations required for suspending the device that might be
+ *	racing with its driver's interrupt handler, which is guaranteed not to
+ *	run while @suspend_noirq() is being executed.
+ *	It generally is expected that the device will be in a low-power state
+ *	(appropriate for the target system sleep state) after subsystem-level
+ *	@suspend_noirq() has returned successfully.  If the device can generate
+ *	system wakeup signals and is enabled to wake up the system, it should be
+ *	configured to do so at that time.  However, depending on the platform
+ *	and device's subsystem, @suspend() may be allowed to put the device into
+ *	the low-power state and configure it to generate wakeup signals, in
+ *	which case it generally is not necessary to define @suspend_noirq().
  *
- * @resume_noirq: Prepare for the execution of ->resume() by carrying out any
- *	actions required for resuming the device that need interrupts to be
- *	disabled
+ * @resume_noirq: Prepare for the execution of @resume() by carrying out any
+ *	operations required for resuming the device that might be racing with
+ *	its driver's interrupt handler, which is guaranteed not to run while
+ *	@resume_noirq() is being executed.
  *
- * @freeze_noirq: Complete the operations of ->freeze() by carrying out any
- *	actions required for freezing the device that need interrupts to be
- *	disabled
+ * @freeze_noirq: Complete the actions started by @freeze().  Carry out any
+ *	additional operations required for freezing the device that might be
+ *	racing with its driver's interrupt handler, which is guaranteed not to
+ *	run while @freeze_noirq() is being executed.
+ *	The power state of the device should not be changed by either @freeze()
+ *	or @freeze_noirq() and it should not be configured to signal system
+ *	wakeup by any of these callbacks.
  *
- * @thaw_noirq: Prepare for the execution of ->thaw() by carrying out any
- *	actions required for thawing the device that need interrupts to be
- *	disabled
+ * @thaw_noirq: Prepare for the execution of @thaw() by carrying out any
+ *	operations required for thawing the device that might be racing with its
+ *	driver's interrupt handler, which is guaranteed not to run while
+ *	@thaw_noirq() is being executed.
  *
- * @poweroff_noirq: Complete the operations of ->poweroff() by carrying out any
- *	actions required for handling the device that need interrupts to be
- *	disabled
+ * @poweroff_noirq: Complete the actions started by @poweroff().  Analogous to
+ *	@suspend_noirq(), but it need not save the device's settings in memory.
  *
- * @restore_noirq: Prepare for the execution of ->restore() by carrying out any
- *	actions required for restoring the operations of the device that need
- *	interrupts to be disabled
+ * @restore_noirq: Prepare for the execution of @restore() by carrying out any
+ *	operations required for thawing the device that might be racing with its
+ *	driver's interrupt handler, which is guaranteed not to run while
+ *	@restore_noirq() is being executed.  Analogous to @resume_noirq().
  *
  * All of the above callbacks, except for @complete(), return error codes.
  * However, the error codes returned by the resume operations, @resume(),
- * @thaw(), @restore(), @resume_noirq(), @thaw_noirq(), and @restore_noirq() do
+ * @thaw(), @restore(), @resume_noirq(), @thaw_noirq(), and @restore_noirq(), do
  * not cause the PM core to abort the resume transition during which they are
- * returned.  The error codes returned in that cases are only printed by the PM
+ * returned.  The error codes returned in those cases are only printed by the PM
  * core to the system logs for debugging purposes.  Still, it is recommended
  * that drivers only return error codes from their resume methods in case of an
  * unrecoverable failure (i.e. when the device being handled refuses to resume
@@ -201 +174 @@
  * their children.
  *
  * It is allowed to unregister devices while the above callbacks are being
- * executed.  However, it is not allowed to unregister a device from within any
- * of its own callbacks.
+ * executed.  However, a callback routine must NOT try to unregister the device
+ * it was called for, although it may unregister children of that device (for
+ * example, if it detects that a child was unplugged while the system was
+ * asleep).
  *
- * There also are the following callbacks related to run-time power management
- * of devices:
+ * Refer to Documentation/power/devices.txt for more information about the role
+ * of the above callbacks in the system suspend process.
+ *
+ * There also are callbacks related to runtime power management of devices.
+ * Again, these callbacks are executed by the PM core only for subsystems
+ * (PM domains, device types, classes and bus types) and the subsystem-level
+ * callbacks are supposed to invoke the driver callbacks.  Moreover, the exact
+ * actions to be performed by a device driver's callbacks generally depend on
+ * the platform and subsystem the device belongs to.
  *
  * @runtime_suspend: Prepare the device for a condition in which it won't be
  *	able to communicate with the CPU(s) and RAM due to power management.
- *	This need not mean that the device should be put into a low power state.
+ *	This need not mean that the device should be put into a low-power state.
  *	For example, if the device is behind a link which is about to be turned
  *	off, the device may remain at full power.  If the device does go to low
- *	power and is capable of generating run-time wake-up events, remote
- *	wake-up (i.e., a hardware mechanism allowing the device to request a
- *	change of its power state via a wake-up event, such as PCI PME) should
- *	be enabled for it.
+ *	power and is capable of generating runtime wakeup events, remote wakeup
+ *	(i.e., a hardware mechanism allowing the device to request a change of
+ *	its power state via an interrupt) should be enabled for it.
  *
  * @runtime_resume: Put the device into the fully active state in response to a
- *	wake-up event generated by hardware or at the request of software.  If
- *	necessary, put the device into the full power state and restore its
+ *	wakeup event generated by hardware or at the request of software.  If
+ *	necessary, put the device into the full-power state and restore its
  *	registers, so that it is fully operational.
  *
- * @runtime_idle: Device appears to be inactive and it might be put into a low
- *	power state if all of the necessary conditions are satisfied.  Check
+ * @runtime_idle: Device appears to be inactive and it might be put into a
+ *	low-power state if all of the necessary conditions are satisfied.  Check
  *	these conditions and handle the device as appropriate, possibly queueing
  *	a suspend request for it.  The return value is ignored by the PM core.
+ *
+ * Refer to Documentation/power/runtime_pm.txt for more information about the
+ * role of the above callbacks in device runtime power management.
+ *
  */
 
 struct dev_pm_ops {
@@ -307 +307 @@
 	void (*dsi_disable_pads)(int dsi_id, unsigned lane_mask);
 };
 
-#if defined(CONFIG_OMAP2_DSS_MODULE) || defined(CONFIG_OMAP2_DSS)
 /* Init with the board info */
 extern int omap_display_init(struct omap_dss_board_info *board_data);
-#else
-static inline int omap_display_init(struct omap_dss_board_info *board_data)
-{
-	return 0;
-}
-#endif
 
 struct omap_display_platform_data {
 	struct omap_dss_board_info *board_data;
+9-2
kernel/cgroup_freezer.c
@@ -153 +153 @@
 	kfree(cgroup_freezer(cgroup));
 }
 
+/* task is frozen or will freeze immediately when next it gets woken */
+static bool is_task_frozen_enough(struct task_struct *task)
+{
+	return frozen(task) ||
+		(task_is_stopped_or_traced(task) && freezing(task));
+}
+
 /*
  * The call to cgroup_lock() in the freezer.state write method prevents
  * a write to that file racing against an attach, and hence the
@@ -238 +231 @@
 	cgroup_iter_start(cgroup, &it);
 	while ((task = cgroup_iter_next(cgroup, &it))) {
 		ntotal++;
-		if (frozen(task))
+		if (is_task_frozen_enough(task))
 			nfrozen++;
 	}
 
@@ -291 +284 @@
 	while ((task = cgroup_iter_next(cgroup, &it))) {
 		if (!freeze_task(task, true))
 			continue;
-		if (frozen(task))
+		if (is_task_frozen_enough(task))
 			continue;
 		if (!freezing(task) && !freezer_should_skip(task))
 			num_cant_freeze_now++;
+4-2
kernel/hrtimer.c
@@ -885 +885 @@
 			     struct hrtimer_clock_base *base,
 			     unsigned long newstate, int reprogram)
 {
+	struct timerqueue_node *next_timer;
 	if (!(timer->state & HRTIMER_STATE_ENQUEUED))
 		goto out;
 
-	if (&timer->node == timerqueue_getnext(&base->active)) {
+	next_timer = timerqueue_getnext(&base->active);
+	timerqueue_del(&base->active, &timer->node);
+	if (&timer->node == next_timer) {
 #ifdef CONFIG_HIGH_RES_TIMERS
 		/* Reprogram the clock event device. if enabled */
 		if (reprogram && hrtimer_hres_active()) {
@@ -904 +901 @@
 		}
 #endif
 	}
-	timerqueue_del(&base->active, &timer->node);
 	if (!timerqueue_getnext(&base->active))
 		base->cpu_base->active_bases &= ~(1 << base->index);
 out:
kernel/time/clocksource.c
@@ -492 +492 @@
 }
 
 /**
+ * clocksource_max_adjustment- Returns max adjustment amount
+ * @cs:         Pointer to clocksource
+ *
+ */
+static u32 clocksource_max_adjustment(struct clocksource *cs)
+{
+	u64 ret;
+	/*
+	 * We won't try to correct for more then 11% adjustments (110,000 ppm),
+	 */
+	ret = (u64)cs->mult * 11;
+	do_div(ret,100);
+	return (u32)ret;
+}
+
+/**
  * clocksource_max_deferment - Returns max time the clocksource can be deferred
  * @cs:         Pointer to clocksource
  *
@@ -519 +503 @@
 	/*
 	 * Calculate the maximum number of cycles that we can pass to the
 	 * cyc2ns function without overflowing a 64-bit signed result. The
-	 * maximum number of cycles is equal to ULLONG_MAX/cs->mult which
-	 * is equivalent to the below.
-	 * max_cycles < (2^63)/cs->mult
-	 * max_cycles < 2^(log2((2^63)/cs->mult))
-	 * max_cycles < 2^(log2(2^63) - log2(cs->mult))
-	 * max_cycles < 2^(63 - log2(cs->mult))
-	 * max_cycles < 1 << (63 - log2(cs->mult))
+	 * maximum number of cycles is equal to ULLONG_MAX/(cs->mult+cs->maxadj)
+	 * which is equivalent to the below.
+	 * max_cycles < (2^63)/(cs->mult + cs->maxadj)
+	 * max_cycles < 2^(log2((2^63)/(cs->mult + cs->maxadj)))
+	 * max_cycles < 2^(log2(2^63) - log2(cs->mult + cs->maxadj))
+	 * max_cycles < 2^(63 - log2(cs->mult + cs->maxadj))
+	 * max_cycles < 1 << (63 - log2(cs->mult + cs->maxadj))
 	 * Please note that we add 1 to the result of the log2 to account for
 	 * any rounding errors, ensure the above inequality is satisfied and
 	 * no overflow will occur.
 	 */
-	max_cycles = 1ULL << (63 - (ilog2(cs->mult) + 1));
+	max_cycles = 1ULL << (63 - (ilog2(cs->mult + cs->maxadj) + 1));
 
 	/*
 	 * The actual maximum number of cycles we can defer the clocksource is
 	 * determined by the minimum of max_cycles and cs->mask.
+	 * Note: Here we subtract the maxadj to make sure we don't sleep for
+	 * too long if there's a large negative adjustment.
 	 */
 	max_cycles = min_t(u64, max_cycles, (u64) cs->mask);
-	max_nsecs = clocksource_cyc2ns(max_cycles, cs->mult, cs->shift);
+	max_nsecs = clocksource_cyc2ns(max_cycles, cs->mult - cs->maxadj,
+					cs->shift);
 
 	/*
 	 * To ensure that the clocksource does not wrap whilst we are idle,
@@ -659 +640 @@
 void __clocksource_updatefreq_scale(struct clocksource *cs, u32 scale, u32 freq)
 {
 	u64 sec;
-
 	/*
 	 * Calc the maximum number of seconds which we can run before
 	 * wrapping around. For clocksources which have a mask > 32bit
@@ -679 +661 @@
 
 	clocks_calc_mult_shift(&cs->mult, &cs->shift, freq,
 			       NSEC_PER_SEC / scale, sec * scale);
+
+	/*
+	 * for clocksources that have large mults, to avoid overflow.
+	 * Since mult may be adjusted by ntp, add an safety extra margin
+	 *
+	 */
+	cs->maxadj = clocksource_max_adjustment(cs);
+	while ((cs->mult + cs->maxadj < cs->mult)
+		|| (cs->mult - cs->maxadj > cs->mult)) {
+		cs->mult >>= 1;
+		cs->shift--;
+		cs->maxadj = clocksource_max_adjustment(cs);
+	}
+
 	cs->max_idle_ns = clocksource_max_deferment(cs);
 }
 EXPORT_SYMBOL_GPL(__clocksource_updatefreq_scale);
@@ -733 +701 @@
  */
 int clocksource_register(struct clocksource *cs)
 {
+	/* calculate max adjustment for given mult/shift */
+	cs->maxadj = clocksource_max_adjustment(cs);
+	WARN_ONCE(cs->mult + cs->maxadj < cs->mult,
+		"Clocksource %s might overflow on 11%% adjustment\n",
+		cs->name);
+
 	/* calculate max idle time permitted for this clocksource */
 	cs->max_idle_ns = clocksource_max_deferment(cs);
+91-1
kernel/time/timekeeping.c
@@ -249 +249 @@
 		secs = xtime.tv_sec + wall_to_monotonic.tv_sec;
 		nsecs = xtime.tv_nsec + wall_to_monotonic.tv_nsec;
 		nsecs += timekeeping_get_ns();
+		/* If arch requires, add in gettimeoffset() */
+		nsecs += arch_gettimeoffset();
 
 	} while (read_seqretry(&xtime_lock, seq));
 	/*
@@ -282 +280 @@
 		*ts = xtime;
 		tomono = wall_to_monotonic;
 		nsecs = timekeeping_get_ns();
+		/* If arch requires, add in gettimeoffset() */
+		nsecs += arch_gettimeoffset();
 
 	} while (read_seqretry(&xtime_lock, seq));
 
@@ -806 +802 @@
 	s64 error, interval = timekeeper.cycle_interval;
 	int adj;
 
+	/*
+	 * The point of this is to check if the error is greater then half
+	 * an interval.
+	 *
+	 * First we shift it down from NTP_SHIFT to clocksource->shifted nsecs.
+	 *
+	 * Note we subtract one in the shift, so that error is really error*2.
+	 * This "saves" dividing(shifting) intererval twice, but keeps the
+	 * (error > interval) comparision as still measuring if error is
+	 * larger then half an interval.
+	 *
+	 * Note: It does not "save" on aggrivation when reading the code.
+	 */
 	error = timekeeper.ntp_error >> (timekeeper.ntp_error_shift - 1);
 	if (error > interval) {
+		/*
+		 * We now divide error by 4(via shift), which checks if
+		 * the error is greater then twice the interval.
+		 * If it is greater, we need a bigadjust, if its smaller,
+		 * we can adjust by 1.
+		 */
 		error >>= 2;
+		/*
+		 * XXX - In update_wall_time, we round up to the next
+		 * nanosecond, and store the amount rounded up into
+		 * the error. This causes the likely below to be unlikely.
+		 *
+		 * The properfix is to avoid rounding up by using
+		 * the high precision timekeeper.xtime_nsec instead of
+		 * xtime.tv_nsec everywhere. Fixing this will take some
+		 * time.
+		 */
 		if (likely(error <= interval))
 			adj = 1;
 		else
 			adj = timekeeping_bigadjust(error, &interval, &offset);
 	} else if (error < -interval) {
+		/* See comment above, this is just switched for the negative */
 		error >>= 2;
 		if (likely(error >= -interval)) {
 			adj = -1;
@@ -851 +817 @@
 			offset = -offset;
 		} else
 			adj = timekeeping_bigadjust(error, &interval, &offset);
-	} else
+	} else /* No adjustment needed */
 		return;
 
+	WARN_ONCE(timekeeper.clock->maxadj &&
+		(timekeeper.mult + adj > timekeeper.clock->mult +
+					timekeeper.clock->maxadj),
+		"Adjusting %s more then 11%% (%ld vs %ld)\n",
+		timekeeper.clock->name, (long)timekeeper.mult + adj,
+		(long)timekeeper.clock->mult + timekeeper.clock->maxadj);
+	/*
+	 * So the following can be confusing.
+	 *
+	 * To keep things simple, lets assume adj == 1 for now.
+	 *
+	 * When adj != 1, remember that the interval and offset values
+	 * have been appropriately scaled so the math is the same.
+	 *
+	 * The basic idea here is that we're increasing the multiplier
+	 * by one, this causes the xtime_interval to be incremented by
+	 * one cycle_interval.  This is because:
+	 *	xtime_interval = cycle_interval * mult
+	 * So if mult is being incremented by one:
+	 *	xtime_interval = cycle_interval * (mult + 1)
+	 * Its the same as:
+	 *	xtime_interval = (cycle_interval * mult) + cycle_interval
+	 * Which can be shortened to:
+	 *	xtime_interval += cycle_interval
+	 *
+	 * So offset stores the non-accumulated cycles. Thus the current
+	 * time (in shifted nanoseconds) is:
+	 *	now = (offset * adj) + xtime_nsec
+	 * Now, even though we're adjusting the clock frequency, we have
+	 * to keep time consistent. In other words, we can't jump back
+	 * in time, and we also want to avoid jumping forward in time.
+	 *
+	 * So given the same offset value, we need the time to be the same
+	 * both before and after the freq adjustment.
+	 *	now = (offset * adj_1) + xtime_nsec_1
+	 *	now = (offset * adj_2) + xtime_nsec_2
+	 * So:
+	 *	(offset * adj_1) + xtime_nsec_1 =
+	 *		(offset * adj_2) + xtime_nsec_2
+	 * And we know:
+	 *	adj_2 = adj_1 + 1
+	 * So:
+	 *	(offset * adj_1) + xtime_nsec_1 =
+	 *		(offset * (adj_1+1)) + xtime_nsec_2
+	 *	(offset * adj_1) + xtime_nsec_1 =
+	 *		(offset * adj_1) + offset + xtime_nsec_2
+	 * Canceling the sides:
+	 *	xtime_nsec_1 = offset + xtime_nsec_2
+	 * Which gives us:
+	 *	xtime_nsec_2 = xtime_nsec_1 - offset
+	 * Which simplfies to:
+	 *	xtime_nsec -= offset
+	 *
+	 * XXX - TODO: Doc ntp_error calculation.
+	 */
 	timekeeper.mult += adj;
 	timekeeper.xtime_interval += interval;
 	timekeeper.xtime_nsec -= offset;
+8-9
mm/percpu-vm.c
@@ -50 +50 @@
 
 	if (!pages || !bitmap) {
 		if (may_alloc && !pages)
-			pages = pcpu_mem_alloc(pages_size);
+			pages = pcpu_mem_zalloc(pages_size);
 		if (may_alloc && !bitmap)
-			bitmap = pcpu_mem_alloc(bitmap_size);
+			bitmap = pcpu_mem_zalloc(bitmap_size);
 		if (!pages || !bitmap)
 			return NULL;
 	}
 
-	memset(pages, 0, pages_size);
 	bitmap_copy(bitmap, chunk->populated, pcpu_unit_pages);
 
 	*bitmapp = bitmap;
@@ -142 +143 @@
 			      int page_start, int page_end)
 {
 	flush_cache_vunmap(
-		pcpu_chunk_addr(chunk, pcpu_first_unit_cpu, page_start),
-		pcpu_chunk_addr(chunk, pcpu_last_unit_cpu, page_end));
+		pcpu_chunk_addr(chunk, pcpu_low_unit_cpu, page_start),
+		pcpu_chunk_addr(chunk, pcpu_high_unit_cpu, page_end));
 }
 
 static void __pcpu_unmap_pages(unsigned long addr, int nr_pages)
@@ -205 +206 @@
 			 int page_start, int page_end)
 {
 	flush_tlb_kernel_range(
-		pcpu_chunk_addr(chunk, pcpu_first_unit_cpu, page_start),
-		pcpu_chunk_addr(chunk, pcpu_last_unit_cpu, page_end));
+		pcpu_chunk_addr(chunk, pcpu_low_unit_cpu, page_start),
+		pcpu_chunk_addr(chunk, pcpu_high_unit_cpu, page_end));
 }
 
 static int __pcpu_map_pages(unsigned long addr, struct page **pages,
@@ -283 +284 @@
 			    int page_start, int page_end)
 {
 	flush_cache_vmap(
-		pcpu_chunk_addr(chunk, pcpu_first_unit_cpu, page_start),
-		pcpu_chunk_addr(chunk, pcpu_last_unit_cpu, page_end));
+		pcpu_chunk_addr(chunk, pcpu_low_unit_cpu, page_start),
+		pcpu_chunk_addr(chunk, pcpu_high_unit_cpu, page_end));
 }
 
 /**
+40-22
mm/percpu.c
···
 static int pcpu_nr_slots __read_mostly;
 static size_t pcpu_chunk_struct_size __read_mostly;

-/* cpus with the lowest and highest unit numbers */
-static unsigned int pcpu_first_unit_cpu __read_mostly;
-static unsigned int pcpu_last_unit_cpu __read_mostly;
+/* cpus with the lowest and highest unit addresses */
+static unsigned int pcpu_low_unit_cpu __read_mostly;
+static unsigned int pcpu_high_unit_cpu __read_mostly;

 /* the address of the first chunk which starts with the kernel static area */
 void *pcpu_base_addr __read_mostly;
···
 	     (rs) = (re) + 1, pcpu_next_pop((chunk), &(rs), &(re), (end)))

 /**
- * pcpu_mem_alloc - allocate memory
+ * pcpu_mem_zalloc - allocate memory
  * @size: bytes to allocate
  *
  * Allocate @size bytes.  If @size is smaller than PAGE_SIZE,
- * kzalloc() is used; otherwise, vmalloc() is used.  The returned
+ * kzalloc() is used; otherwise, vzalloc() is used.  The returned
  * memory is always zeroed.
  *
  * CONTEXT:
···
  * RETURNS:
  * Pointer to the allocated area on success, NULL on failure.
  */
-static void *pcpu_mem_alloc(size_t size)
+static void *pcpu_mem_zalloc(size_t size)
 {
 	if (WARN_ON_ONCE(!slab_is_available()))
 		return NULL;
···
  * @ptr: memory to free
  * @size: size of the area
  *
- * Free @ptr.  @ptr should have been allocated using pcpu_mem_alloc().
+ * Free @ptr.  @ptr should have been allocated using pcpu_mem_zalloc().
  */
 static void pcpu_mem_free(void *ptr, size_t size)
 {
···
 	size_t old_size = 0, new_size = new_alloc * sizeof(new[0]);
 	unsigned long flags;

-	new = pcpu_mem_alloc(new_size);
+	new = pcpu_mem_zalloc(new_size);
 	if (!new)
 		return -ENOMEM;
···
 {
 	struct pcpu_chunk *chunk;

-	chunk = pcpu_mem_alloc(pcpu_chunk_struct_size);
+	chunk = pcpu_mem_zalloc(pcpu_chunk_struct_size);
 	if (!chunk)
 		return NULL;

-	chunk->map = pcpu_mem_alloc(PCPU_DFL_MAP_ALLOC * sizeof(chunk->map[0]));
+	chunk->map = pcpu_mem_zalloc(PCPU_DFL_MAP_ALLOC *
+						sizeof(chunk->map[0]));
 	if (!chunk->map) {
 		kfree(chunk);
 		return NULL;
···
  * address.  The caller is responsible for ensuring @addr stays valid
  * until this function finishes.
  *
+ * percpu allocator has special setup for the first chunk, which currently
+ * supports either embedding in linear address space or vmalloc mapping,
+ * and, from the second one, the backing allocator (currently either vm or
+ * km) provides translation.
+ *
+ * The addr can be tranlated simply without checking if it falls into the
+ * first chunk. But the current code reflects better how percpu allocator
+ * actually works, and the verification can discover both bugs in percpu
+ * allocator itself and per_cpu_ptr_to_phys() callers.  So we keep current
+ * code.
+ *
  * RETURNS:
  * The physical address for @addr.
  */
···
 {
 	void __percpu *base = __addr_to_pcpu_ptr(pcpu_base_addr);
 	bool in_first_chunk = false;
-	unsigned long first_start, first_end;
+	unsigned long first_low, first_high;
 	unsigned int cpu;

 	/*
-	 * The following test on first_start/end isn't strictly
+	 * The following test on unit_low/high isn't strictly
 	 * necessary but will speed up lookups of addresses which
 	 * aren't in the first chunk.
 	 */
-	first_start = pcpu_chunk_addr(pcpu_first_chunk, pcpu_first_unit_cpu, 0);
-	first_end = pcpu_chunk_addr(pcpu_first_chunk, pcpu_last_unit_cpu,
-				    pcpu_unit_pages);
-	if ((unsigned long)addr >= first_start &&
-	    (unsigned long)addr < first_end) {
+	first_low = pcpu_chunk_addr(pcpu_first_chunk, pcpu_low_unit_cpu, 0);
+	first_high = pcpu_chunk_addr(pcpu_first_chunk, pcpu_high_unit_cpu,
+				     pcpu_unit_pages);
+	if ((unsigned long)addr >= first_low &&
+	    (unsigned long)addr < first_high) {
 		for_each_possible_cpu(cpu) {
 			void *start = per_cpu_ptr(base, cpu);
···
 	for (cpu = 0; cpu < nr_cpu_ids; cpu++)
 		unit_map[cpu] = UINT_MAX;
-	pcpu_first_unit_cpu = NR_CPUS;
+
+	pcpu_low_unit_cpu = NR_CPUS;
+	pcpu_high_unit_cpu = NR_CPUS;

 	for (group = 0, unit = 0; group < ai->nr_groups; group++, unit += i) {
 		const struct pcpu_group_info *gi = &ai->groups[group];
···
 			unit_map[cpu] = unit + i;
 			unit_off[cpu] = gi->base_offset + i * ai->unit_size;

-			if (pcpu_first_unit_cpu == NR_CPUS)
-				pcpu_first_unit_cpu = cpu;
-			pcpu_last_unit_cpu = cpu;
+			/* determine low/high unit_cpu */
+			if (pcpu_low_unit_cpu == NR_CPUS ||
+			    unit_off[cpu] < unit_off[pcpu_low_unit_cpu])
+				pcpu_low_unit_cpu = cpu;
+			if (pcpu_high_unit_cpu == NR_CPUS ||
+			    unit_off[cpu] > unit_off[pcpu_high_unit_cpu])
+				pcpu_high_unit_cpu = cpu;
 		}
 	}
 	pcpu_nr_units = unit;
···

 	BUILD_BUG_ON(size > PAGE_SIZE);

-	map = pcpu_mem_alloc(size);
+	map = pcpu_mem_zalloc(size);
 	BUG_ON(!map);

 	spin_lock_irqsave(&pcpu_lock, flags);
+26-16
mm/slub.c
···
 {
 	struct kmem_cache_node *n = NULL;
 	struct kmem_cache_cpu *c = this_cpu_ptr(s->cpu_slab);
-	struct page *page;
+	struct page *page, *discard_page = NULL;

 	while ((page = c->partial)) {
 		enum slab_modes { M_PARTIAL, M_FREE };
···
 				if (l == M_PARTIAL)
 					remove_partial(n, page);
 				else
-					add_partial(n, page, 1);
+					add_partial(n, page,
+						DEACTIVATE_TO_TAIL);

 				l = m;
 			}
···
 				"unfreezing slab"));

 		if (m == M_FREE) {
-			stat(s, DEACTIVATE_EMPTY);
-			discard_slab(s, page);
-			stat(s, FREE_SLAB);
+			page->next = discard_page;
+			discard_page = page;
 		}
 	}

 	if (n)
 		spin_unlock(&n->list_lock);
+
+	while (discard_page) {
+		page = discard_page;
+		discard_page = discard_page->next;
+
+		stat(s, DEACTIVATE_EMPTY);
+		discard_slab(s, page);
+		stat(s, FREE_SLAB);
+	}
 }

 /*
···
 		page->pobjects = pobjects;
 		page->next = oldpage;

-	} while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page) != oldpage);
+	} while (irqsafe_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page) != oldpage);
 	stat(s, CPU_PARTIAL_FREE);
 	return pobjects;
 }
···
 		for_each_possible_cpu(cpu) {
 			struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);
+			int node = ACCESS_ONCE(c->node);
 			struct page *page;

-			if (!c || c->node < 0)
+			if (node < 0)
 				continue;
-
-			if (c->page) {
-				if (flags & SO_TOTAL)
-					x = c->page->objects;
+			page = ACCESS_ONCE(c->page);
+			if (page) {
+				if (flags & SO_TOTAL)
+					x = page->objects;
 				else if (flags & SO_OBJECTS)
-					x = c->page->inuse;
+					x = page->inuse;
 				else
 					x = 1;

 				total += x;
-				nodes[c->node] += x;
+				nodes[node] += x;
 			}
 			page = c->partial;

 			if (page) {
 				x = page->pobjects;
-				total += x;
-				nodes[c->node] += x;
+				total += x;
+				nodes[node] += x;
 			}
-			per_cpu[c->node]++;
+			per_cpu[node]++;
 		}
 	}
+1-2
net/sunrpc/xprtsock.c
···
 	struct rpc_rqst *req = task->tk_rqstp;
 	struct rpc_xprt *xprt = req->rq_xprt;
 	struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
-	int ret = 0;
+	int ret = -EAGAIN;

 	dprintk("RPC: %5u xmit incomplete (%u left of %u)\n",
 			task->tk_pid, req->rq_slen - req->rq_bytes_sent,
···
 	/* Don't race with disconnect */
 	if (xprt_connected(xprt)) {
 		if (test_bit(SOCK_ASYNC_NOSPACE, &transport->sock->flags)) {
-			ret = -EAGAIN;
 			/*
 			 * Notify TCP that we're limited by the application
 			 * window size
···
 	/* Search for codec ID */
 	for (q = tbl; q->subvendor; q++) {
-		unsigned long vendorid = (q->subdevice) | (q->subvendor << 16);
-
-		if (vendorid == codec->subsystem_id)
+		unsigned int mask = 0xffff0000 | q->subdevice_mask;
+		unsigned int id = (q->subdevice | (q->subvendor << 16)) & mask;
+		if ((codec->subsystem_id & mask) == id)
 			break;
 	}
+19-9
sound/pci/hda/hda_eld.c
···

 	for (i = 0; i < size; i++) {
 		unsigned int val = hdmi_get_eld_data(codec, nid, i);
+		/*
+		 * Graphics driver might be writing to ELD buffer right now.
+		 * Just abort. The caller will repoll after a while.
+		 */
 		if (!(val & AC_ELDD_ELD_VALID)) {
-			if (!i) {
-				snd_printd(KERN_INFO
-					   "HDMI: invalid ELD data\n");
-				ret = -EINVAL;
-				goto error;
-			}
 			snd_printd(KERN_INFO
 				   "HDMI: invalid ELD data byte %d\n", i);
-			val = 0;
-		} else
-			val &= AC_ELDD_ELD_DATA;
+			ret = -EINVAL;
+			goto error;
+		}
+		val &= AC_ELDD_ELD_DATA;
+		/*
+		 * The first byte cannot be zero. This can happen on some DVI
+		 * connections. Some Intel chips may also need some 250ms delay
+		 * to return non-zero ELD data, even when the graphics driver
+		 * correctly writes ELD content before setting ELD_valid bit.
+		 */
+		if (!val && !i) {
+			snd_printdd(KERN_INFO "HDMI: 0 ELD data\n");
+			ret = -EINVAL;
+			goto error;
+		}
 		buf[i] = val;
 	}
+23-9
sound/pci/hda/patch_cirrus.c
···
 	unsigned int gpio_mask;
 	unsigned int gpio_dir;
 	unsigned int gpio_data;
+	unsigned int gpio_eapd_hp; /* EAPD GPIO bit for headphones */
+	unsigned int gpio_eapd_speaker; /* EAPD GPIO bit for speakers */

 	struct hda_pcm pcm_rec[2];	/* PCM information */
···
 	CS420X_MBP53,
 	CS420X_MBP55,
 	CS420X_IMAC27,
+	CS420X_APPLE,
 	CS420X_AUTO,
 	CS420X_MODELS
 };
···
 				    spdif_present ? 0 : PIN_OUT);
 		}
 	}
-	if (spec->board_config == CS420X_MBP53 ||
-	    spec->board_config == CS420X_MBP55 ||
-	    spec->board_config == CS420X_IMAC27) {
-		unsigned int gpio = hp_present ? 0x02 : 0x08;
+	if (spec->gpio_eapd_hp) {
+		unsigned int gpio = hp_present ?
+			spec->gpio_eapd_hp : spec->gpio_eapd_speaker;
 		snd_hda_codec_write(codec, 0x01, 0,
 				    AC_VERB_SET_GPIO_DATA, gpio);
 	}
···
 	[CS420X_MBP53] = "mbp53",
 	[CS420X_MBP55] = "mbp55",
 	[CS420X_IMAC27] = "imac27",
+	[CS420X_APPLE] = "apple",
 	[CS420X_AUTO] = "auto",
 };
···
 	SND_PCI_QUIRK(0x10de, 0x0d94, "MacBookAir 3,1(2)", CS420X_MBP55),
 	SND_PCI_QUIRK(0x10de, 0xcb79, "MacBookPro 5,5", CS420X_MBP55),
 	SND_PCI_QUIRK(0x10de, 0xcb89, "MacBookPro 7,1", CS420X_MBP55),
-	SND_PCI_QUIRK(0x8086, 0x7270, "IMac 27 Inch", CS420X_IMAC27),
+	/* this conflicts with too many other models */
+	/*SND_PCI_QUIRK(0x8086, 0x7270, "IMac 27 Inch", CS420X_IMAC27),*/
+	{} /* terminator */
+};
+
+static const struct snd_pci_quirk cs420x_codec_cfg_tbl[] = {
+	SND_PCI_QUIRK_VENDOR(0x106b, "Apple", CS420X_APPLE),
 	{} /* terminator */
 };
···
 	spec->board_config =
 		snd_hda_check_board_config(codec, CS420X_MODELS,
 					   cs420x_models, cs420x_cfg_tbl);
+	if (spec->board_config < 0)
+		spec->board_config =
+			snd_hda_check_board_codec_sid_config(codec,
+				CS420X_MODELS, NULL, cs420x_codec_cfg_tbl);
 	if (spec->board_config >= 0)
 		fix_pincfg(codec, spec->board_config, cs_pincfgs);
···
 	case CS420X_IMAC27:
 	case CS420X_MBP53:
 	case CS420X_MBP55:
-		/* GPIO1 = headphones */
-		/* GPIO3 = speakers */
-		spec->gpio_mask = 0x0a;
-		spec->gpio_dir = 0x0a;
+	case CS420X_APPLE:
+		spec->gpio_eapd_hp = 2; /* GPIO1 = headphones */
+		spec->gpio_eapd_speaker = 8; /* GPIO3 = speakers */
+		spec->gpio_mask = spec->gpio_dir =
+			spec->gpio_eapd_hp | spec->gpio_eapd_speaker;
 	 	break;
 	}