···
 device. This field is a pointer to an object of type struct dev_power_domain,
 defined in include/linux/pm.h, providing a set of power management callbacks
 analogous to the subsystem-level and device driver callbacks that are executed
-for the given device during all power transitions, in addition to the respective
-subsystem-level callbacks.  Specifically, the power domain "suspend" callbacks
-(i.e. ->runtime_suspend(), ->suspend(), ->freeze(), ->poweroff(), etc.) are
-executed after the analogous subsystem-level callbacks, while the power domain
-"resume" callbacks (i.e. ->runtime_resume(), ->resume(), ->thaw(), ->restore,
-etc.) are executed before the analogous subsystem-level callbacks.  Error codes
-returned by the "suspend" and "resume" power domain callbacks are ignored.
+for the given device during all power transitions, instead of the respective
+subsystem-level callbacks.  Specifically, if a device's pm_domain pointer is
+not NULL, the ->suspend() callback from the object pointed to by it will be
+executed instead of its subsystem's (e.g. bus type's) ->suspend() callback and
+analogously for all of the remaining callbacks.  In other words, power management
+domain callbacks, if defined for the given device, always take precedence over
+the callbacks provided by the device's subsystem (e.g. bus type).
 
-Power domain ->runtime_idle() callback is executed before the subsystem-level
-->runtime_idle() callback and the result returned by it is not ignored.  Namely,
-if it returns error code, the subsystem-level ->runtime_idle() callback will not
-be called and the helper function rpm_idle() executing it will return error
-code. This mechanism is intended to help platforms where saving device state
-is a time consuming operation and should only be carried out if all devices
-in the power domain are idle, before turning off the shared power resource(s).
-Namely, the power domain ->runtime_idle() callback may return error code until
-the pm_runtime_idle() helper (or its asychronous version) has been called for
-all devices in the power domain (it is recommended that the returned error code
-be -EBUSY in those cases), preventing the subsystem-level ->runtime_idle()
-callback from being run prematurely.
-
-The support for device power domains is only relevant to platforms needing to
-use the same subsystem-level (e.g. platform bus type) and device driver power
-management callbacks in many different power domain configurations and wanting
-to avoid incorporating the support for power domains into the subsystem-level
-callbacks.  The other platforms need not implement it or take it into account
-in any way.
-
-
-System Devices
---------------
-System devices (sysdevs) follow a slightly different API, which can be found in
-
-	include/linux/sysdev.h
-	drivers/base/sys.c
-
-System devices will be suspended with interrupts disabled, and after all other
-devices have been suspended.  On resume, they will be resumed before any other
-devices, and also with interrupts disabled.  These things occur in special
-"sysdev_driver" phases, which affect only system devices.
-
-Thus, after the suspend_noirq (or freeze_noirq or poweroff_noirq) phase, when
-the non-boot CPUs are all offline and IRQs are disabled on the remaining online
-CPU, then a sysdev_driver.suspend phase is carried out, and the system enters a
-sleep state (or a system image is created). During resume (or after the image
-has been created or loaded) a sysdev_driver.resume phase is carried out, IRQs
-are enabled on the only online CPU, the non-boot CPUs are enabled, and the
-resume_noirq (or thaw_noirq or restore_noirq) phase begins.
-
-Code to actually enter and exit the system-wide low power state sometimes
-involves hardware details that are only known to the boot firmware, and
-may leave a CPU running software (from SRAM or flash memory) that monitors
-the system and manages its wakeup sequence.
+The support for device power management domains is only relevant to platforms
+needing to use the same device driver power management callbacks in many
+different power domain configurations and wanting to avoid incorporating the
+support for power domains into subsystem-level callbacks, for example by
+modifying the platform bus type.  Other platforms need not implement it or take
+it into account in any way.
···
 
 Device Low Power (suspend) States
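The precedence rule that the new documentation text describes can be sketched as a tiny dispatch helper. This is an illustrative model only, not the kernel's actual resolution code: the struct layouts, `pick_suspend_cb()`, and the example callbacks are all made up for the sketch.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch: a simplified model of how a PM core could pick
 * between power-domain and subsystem callbacks.  Everything here is
 * illustrative, not the kernel's real data structures. */
typedef int (*pm_callback_t)(void *dev);

struct pm_ops {
    pm_callback_t suspend;
};

struct device_model {
    struct pm_ops *pm_domain;   /* like dev->pm_domain: may be NULL */
    struct pm_ops *subsys;      /* like the bus type's PM ops */
};

/* If the device has a power domain, its callback is used *instead of*
 * the subsystem's one; the domain always takes precedence. */
static pm_callback_t pick_suspend_cb(struct device_model *dev)
{
    if (dev->pm_domain && dev->pm_domain->suspend)
        return dev->pm_domain->suspend;
    return dev->subsys ? dev->subsys->suspend : NULL;
}

/* Example callbacks used only to demonstrate the selection. */
static int domain_suspend(void *dev) { (void)dev; return 1; }
static int subsys_suspend(void *dev) { (void)dev; return 2; }
```

With a non-NULL `pm_domain`, the domain callback is chosen; with a NULL one, the subsystem callback is used, matching the "always take precedence" wording above.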
Documentation/power/runtime_pm.txt (-5)
···
 	pm_runtime_set_active(dev);
 	pm_runtime_enable(dev);
 
-The PM core always increments the run-time usage counter before calling the
-->prepare() callback and decrements it after calling the ->complete() callback.
-Hence disabling run-time PM temporarily like this will not cause any run-time
-suspend callbacks to be lost.
-
 7. Generic subsystem callbacks
 
 Subsystems may wish to conserve code space by using the set of generic power
Documentation/usb/error-codes.txt (+8 -1)
···
 reported.  That's because transfers often involve several packets, so that
 one or more packets could finish before an error stops further endpoint I/O.
 
+For isochronous URBs, the urb status value is non-zero only if the URB is
+unlinked, the device is removed, the host controller is disabled, or the total
+transferred length is less than the requested length and the URB_SHORT_NOT_OK
+flag is set.  Completion handlers for isochronous URBs should only see
+urb->status set to zero, -ENOENT, -ECONNRESET, -ESHUTDOWN, or -EREMOTEIO.
+Individual frame descriptor status fields may report more status codes.
+
 0			Transfer completed successfully
···
 	device removal events immediately.
 
 -EXDEV			ISO transfer only partially completed
-			look at individual frame status for details
+			(only set in iso_frame_desc[n].status, not urb->status)
 
 -EINVAL			ISO madness, if this happens: Log off and go home
arch/alpha/include/asm/mmzone.h
···
  * Given a kernel address, find the home node of the underlying memory.
  */
 #define kvaddr_to_nid(kaddr)	pa_to_nid(__pa(kaddr))
-#define node_start_pfn(nid)	(NODE_DATA(nid)->node_start_pfn)
 
 /*
  * Given a kaddr, LOCAL_BASE_ADDR finds the owning node of the memory
arch/arm/boot/compressed/head.S (+13 -1)
···
 		sub	pc, lr, r0, lsr #32	@ properly flush pipeline
 #endif
 
+#define PROC_ENTRY_SIZE (4*5)
+
 /*
  * Here follow the relocatable cache support functions for the
  * various processors.  This is a generic hook for locating an
···
  ARM(		addeq	pc, r12, r3		) @ call cache function
  THUMB(		addeq	r12, r3			)
  THUMB(		moveq	pc, r12			) @ call cache function
-		add	r12, r12, #4*5
+		add	r12, r12, #PROC_ENTRY_SIZE
 		b	1b
 
 /*
···
  THUMB(		nop				)
 
 		.size	proc_types, . - proc_types
+
+		/*
+		 * If you get a "non-constant expression in ".if" statement"
+		 * error from the assembler on this line, check that you have
+		 * not accidentally written a "b" instruction where you should
+		 * have written W(b).
+		 */
+		.if (. - proc_types) % PROC_ENTRY_SIZE != 0
+		.error "The size of one or more proc_types entries is wrong."
+		.endif
 
 /*
  * Turn off the Cache and MMU.  ARMv3 does not support
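The assembler-level `.if`/`.error` check above enforces that every `proc_types` entry occupies exactly `PROC_ENTRY_SIZE` bytes, so the table walk that strides by that constant cannot land mid-entry. The same idea can be sketched in C with a compile-time assertion; the struct below is invented for the example and only mirrors the "five 32-bit words per entry" layout.

```c
#include <assert.h>

/* Illustrative analogue of the build-time size check: a table whose
 * per-entry size is verified at compile time.  The struct is made up;
 * only the five-word layout mirrors the real proc_types entries. */
#define PROC_ENTRY_SIZE (4 * 5)     /* five 32-bit words per entry */

struct proc_entry {
    unsigned int match;             /* CPU ID value to match */
    unsigned int mask;              /* mask applied before comparing */
    unsigned int setup;             /* stand-ins for the three branch */
    unsigned int cache_on;          /* instructions each real entry */
    unsigned int cache_off;         /* carries */
};

/* Equivalent of:  .if (. - proc_types) % PROC_ENTRY_SIZE != 0 / .error */
_Static_assert(sizeof(struct proc_entry) == PROC_ENTRY_SIZE,
               "The size of one or more proc_types entries is wrong.");
```

A table walk that advances by `PROC_ENTRY_SIZE` then stays aligned with entry boundaries by construction.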
arch/arm/include/asm/assembler.h (+4)
···
  * Do not include any C declarations in this file - it is included by
  * assembler source.
  */
+#ifndef __ASM_ASSEMBLER_H__
+#define __ASM_ASSEMBLER_H__
+
 #ifndef __ASSEMBLY__
 #error "Only include this from assembly code"
 #endif
···
 	.macro	ldrusr, reg, ptr, inc, cond=al, rept=1, abort=9001f
 	usracc	ldr, \reg, \ptr, \inc, \cond, \rept, \abort
 	.endm
+#endif /* __ASM_ASSEMBLER_H__ */
arch/arm/include/asm/entry-macro-multi.S (+2)
···
+#include <asm/assembler.h>
+
 /*
  * Interrupt handling.  Preserves r7, r8, r9
  */
arch/arm/kernel/module.c (+11 -2)
···
 			offset -= 0x02000000;
 		offset += sym->st_value - loc;
 
-		/* only Thumb addresses allowed (no interworking) */
-		if (!(offset & 1) ||
+		/*
+		 * For function symbols, only Thumb addresses are
+		 * allowed (no interworking).
+		 *
+		 * For non-function symbols, the destination
+		 * has no specific ARM/Thumb disposition, so
+		 * the branch is resolved under the assumption
+		 * that interworking is not required.
+		 */
+		if ((ELF32_ST_TYPE(sym->st_info) == STT_FUNC &&
+			!(offset & 1)) ||
 		    offset <= (s32)0xff000000 ||
 		    offset >= (s32)0x01000000) {
 			pr_err("%s: section %u reloc %u sym '%s': relocation %u out of range (%#lx -> %#x)\n",
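The hunk above restricts the "Thumb bit must be set" test to function symbols: a Thumb branch target whose low bit is clear is only an error when the symbol is `STT_FUNC`, while the signed-range check applies either way. A standalone sketch of that predicate, with the helper name invented for the example and the constants copied from the hunk:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch of the relocation check the patch introduces.
 * thumb_reloc_out_of_range() is a made-up name; STT_FUNC and the range
 * constants match the values used in the hunk. */
#define STT_FUNC 2

static bool thumb_reloc_out_of_range(int32_t offset, unsigned sym_type)
{
    /* Function symbols must carry the Thumb bit (no interworking). */
    if (sym_type == STT_FUNC && !(offset & 1))
        return true;
    /* Either way, the displacement must fit the branch's reach. */
    return offset <= (int32_t)0xff000000 || offset >= (int32_t)0x01000000;
}
```

An even offset is rejected only for function symbols; for data symbols the same offset passes, which is exactly the relaxation the comment in the patch describes.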
arch/arm/kernel/smp.c (+5 -1)
···
 	smp_store_cpu_info(cpu);
 
 	/*
-	 * OK, now it's safe to let the boot CPU continue
+	 * OK, now it's safe to let the boot CPU continue.  Wait for
+	 * the CPU migration code to notice that the CPU is online
+	 * before we continue.
 	 */
 	set_cpu_online(cpu, true);
+	while (!cpu_active(cpu))
+		cpu_relax();
 
 	/*
 	 * OK, it's off to the idle thread for us
arch/arm/mach-h720x/Kconfig
···
 	bool "gms30c7201"
 	depends on ARCH_H720X
 	select CPU_H7201
+	select ZONE_DMA
 	help
 	  Say Y here if you are using the Hynix GMS30C7201 Reference Board
 
 config ARCH_H7202
 	bool "hms30c7202"
 	select CPU_H7202
+	select ZONE_DMA
 	depends on ARCH_H720X
 	help
 	  Say Y here if you are using the Hynix HMS30C7202 Reference Board
arch/arm/mach-ux500/board-mop500-pins.c
···
 	GPIO168_KP_O0,
 
 	/* UART */
-	GPIO0_U0_CTSn   | PIN_INPUT_PULLUP,
-	GPIO1_U0_RTSn   | PIN_OUTPUT_HIGH,
-	GPIO2_U0_RXD    | PIN_INPUT_PULLUP,
-	GPIO3_U0_TXD    | PIN_OUTPUT_HIGH,
+	/* uart-0 pins gpio configuration should be
+	 * kept intact to prevent glitch in tx line
+	 * when tty dev is opened. Later these pins
+	 * are configured to uart mop500_pins_uart0
+	 *
+	 * It will be replaced with uart configuration
+	 * once the issue is solved.
+	 */
+	GPIO0_GPIO	| PIN_INPUT_PULLUP,
+	GPIO1_GPIO	| PIN_OUTPUT_HIGH,
+	GPIO2_GPIO	| PIN_INPUT_PULLUP,
+	GPIO3_GPIO	| PIN_OUTPUT_HIGH,
 
 	GPIO29_U2_RXD	| PIN_INPUT_PULLUP,
 	GPIO30_U2_TXD	| PIN_OUTPUT_HIGH,
arch/powerpc/include/asm/mmzone.h
···
 #define memory_hotplug_max() memblock_end_of_DRAM()
 #endif
 
-/*
- * Following are macros that each numa implmentation must define.
- */
-
-#define node_start_pfn(nid)	(NODE_DATA(nid)->node_start_pfn)
-#define node_end_pfn(nid)	(NODE_DATA(nid)->node_end_pfn)
-
 #else
 #define memory_hotplug_max() memblock_end_of_DRAM()
 #endif /* CONFIG_NEED_MULTIPLE_NODES */
arch/powerpc/kernel/rtas-rtc.c (+17 -12)
···
 #include <linux/init.h>
 #include <linux/rtc.h>
 #include <linux/delay.h>
+#include <linux/ratelimit.h>
 #include <asm/prom.h>
 #include <asm/rtas.h>
 #include <asm/time.h>
···
 		}
 	} while (wait_time && (get_tb() < max_wait_tb));
 
-	if (error != 0 && printk_ratelimit()) {
-		printk(KERN_WARNING "error: reading the clock failed (%d)\n",
-		       error);
+	if (error != 0) {
+		printk_ratelimited(KERN_WARNING
+				   "error: reading the clock failed (%d)\n",
+				   error);
 		return 0;
 	}
 
···
 
 		wait_time = rtas_busy_delay_time(error);
 		if (wait_time) {
-			if (in_interrupt() && printk_ratelimit()) {
+			if (in_interrupt()) {
 				memset(rtc_tm, 0, sizeof(struct rtc_time));
-				printk(KERN_WARNING "error: reading clock"
-				       " would delay interrupt\n");
+				printk_ratelimited(KERN_WARNING
+						   "error: reading clock "
+						   "would delay interrupt\n");
 				return;	/* delay not allowed */
 			}
 			msleep(wait_time);
 		}
 	} while (wait_time && (get_tb() < max_wait_tb));
 
-	if (error != 0 && printk_ratelimit()) {
-		printk(KERN_WARNING "error: reading the clock failed (%d)\n",
-		       error);
+	if (error != 0) {
+		printk_ratelimited(KERN_WARNING
+				   "error: reading the clock failed (%d)\n",
+				   error);
 		return;
 	}
 
···
 	}
 	} while (wait_time && (get_tb() < max_wait_tb));
 
-	if (error != 0 && printk_ratelimit())
-		printk(KERN_WARNING "error: setting the clock failed (%d)\n",
-		       error);
+	if (error != 0)
+		printk_ratelimited(KERN_WARNING
+				   "error: setting the clock failed (%d)\n",
+				   error);
 
 	return 0;
 }
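This hunk (and the signal_32.c one that follows) replaces the open-coded `printk_ratelimit() && printk(...)` pattern with `printk_ratelimited()`, which bundles the rate check and the print. The underlying idea is a windowed counter: allow up to a burst of messages per interval, then suppress. A hypothetical userspace sketch of that behavior, with made-up names and abstract time ticks rather than jiffies:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch (not the kernel implementation): a counter-based
 * rate limiter in the spirit of printk_ratelimited().  It suppresses
 * messages once more than `burst` of them arrive within one `interval`
 * of abstract time ticks.  All names are illustrative. */
struct ratelimit {
    int interval;   /* window length in ticks */
    int burst;      /* messages allowed per window */
    int begin;      /* start tick of the current window */
    int printed;    /* messages emitted in the current window */
};

static bool ratelimit_allow(struct ratelimit *rs, int now)
{
    if (now - rs->begin >= rs->interval) {  /* window expired: reset */
        rs->begin = now;
        rs->printed = 0;
    }
    if (rs->printed < rs->burst) {
        rs->printed++;
        return true;    /* caller may log this message */
    }
    return false;       /* suppressed */
}
```

Folding the check into one call, as `printk_ratelimited()` does, also removes the subtle bug class the old pattern invites: consuming a rate-limit token in a condition and then doing something other than printing.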
arch/powerpc/kernel/signal_32.c (+31 -26)
···
 #include <linux/errno.h>
 #include <linux/elf.h>
 #include <linux/ptrace.h>
+#include <linux/ratelimit.h>
 #ifdef CONFIG_PPC64
 #include <linux/syscalls.h>
 #include <linux/compat.h>
···
 	printk("badframe in handle_rt_signal, regs=%p frame=%p newsp=%lx\n",
 	       regs, frame, newsp);
 #endif
-	if (show_unhandled_signals && printk_ratelimit())
-		printk(KERN_INFO "%s[%d]: bad frame in handle_rt_signal32: "
-			"%p nip %08lx lr %08lx\n",
-			current->comm, current->pid,
-			addr, regs->nip, regs->link);
+	if (show_unhandled_signals)
+		printk_ratelimited(KERN_INFO
+				   "%s[%d]: bad frame in handle_rt_signal32: "
+				   "%p nip %08lx lr %08lx\n",
+				   current->comm, current->pid,
+				   addr, regs->nip, regs->link);
 
 	force_sigsegv(sig, current);
 	return 0;
···
 	return 0;
 
  bad:
-	if (show_unhandled_signals && printk_ratelimit())
-		printk(KERN_INFO "%s[%d]: bad frame in sys_rt_sigreturn: "
-			"%p nip %08lx lr %08lx\n",
-			current->comm, current->pid,
-			rt_sf, regs->nip, regs->link);
+	if (show_unhandled_signals)
+		printk_ratelimited(KERN_INFO
+				   "%s[%d]: bad frame in sys_rt_sigreturn: "
+				   "%p nip %08lx lr %08lx\n",
+				   current->comm, current->pid,
+				   rt_sf, regs->nip, regs->link);
 
 	force_sig(SIGSEGV, current);
 	return 0;
···
 	 * We kill the task with a SIGSEGV in this situation.
 	 */
 	if (do_setcontext(ctx, regs, 1)) {
-		if (show_unhandled_signals && printk_ratelimit())
-			printk(KERN_INFO "%s[%d]: bad frame in "
-				"sys_debug_setcontext: %p nip %08lx "
-				"lr %08lx\n",
-				current->comm, current->pid,
-				ctx, regs->nip, regs->link);
+		if (show_unhandled_signals)
+			printk_ratelimited(KERN_INFO "%s[%d]: bad frame in "
+					   "sys_debug_setcontext: %p nip %08lx "
+					   "lr %08lx\n",
+					   current->comm, current->pid,
+					   ctx, regs->nip, regs->link);
 
 		force_sig(SIGSEGV, current);
 		goto out;
···
 	printk("badframe in handle_signal, regs=%p frame=%p newsp=%lx\n",
 	       regs, frame, newsp);
 #endif
-	if (show_unhandled_signals && printk_ratelimit())
-		printk(KERN_INFO "%s[%d]: bad frame in handle_signal32: "
-			"%p nip %08lx lr %08lx\n",
-			current->comm, current->pid,
-			frame, regs->nip, regs->link);
+	if (show_unhandled_signals)
+		printk_ratelimited(KERN_INFO
+				   "%s[%d]: bad frame in handle_signal32: "
+				   "%p nip %08lx lr %08lx\n",
+				   current->comm, current->pid,
+				   frame, regs->nip, regs->link);
 
 	force_sigsegv(sig, current);
 	return 0;
···
 	return 0;
 
 badframe:
-	if (show_unhandled_signals && printk_ratelimit())
-		printk(KERN_INFO "%s[%d]: bad frame in sys_sigreturn: "
-			"%p nip %08lx lr %08lx\n",
-			current->comm, current->pid,
-			addr, regs->nip, regs->link);
+	if (show_unhandled_signals)
+		printk_ratelimited(KERN_INFO
+				   "%s[%d]: bad frame in sys_sigreturn: "
+				   "%p nip %08lx lr %08lx\n",
+				   current->comm, current->pid,
+				   addr, regs->nip, regs->link);
 
 	force_sig(SIGSEGV, current);
 	return 0;
arch/s390/oprofile/init.c
···
 
 #include "hwsampler.h"
 
-#define DEFAULT_INTERVAL	4096
+#define DEFAULT_INTERVAL	4127518
 
 #define DEFAULT_SDBT_BLOCKS	1
 #define DEFAULT_SDB_BLOCKS	511
···
 	oprofile_max_interval = hwsampler_query_max_interval();
 	if (oprofile_max_interval == 0)
 		return -ENODEV;
+
+	/* The initial value should be sane */
+	if (oprofile_hw_interval < oprofile_min_interval)
+		oprofile_hw_interval = oprofile_min_interval;
+	if (oprofile_hw_interval > oprofile_max_interval)
+		oprofile_hw_interval = oprofile_max_interval;
 
 	if (oprofile_timer_init(ops))
 		return -ENODEV;
arch/sh/Kconfig (+5)
···
 	select SYS_SUPPORTS_CMT
 	select ARCH_WANT_OPTIONAL_GPIOLIB
 	select USB_ARCH_HAS_OHCI
+	select USB_OHCI_SH if USB_OHCI_HCD
 	help
 	  Select SH7720 if you have a SH3-DSP SH7720 CPU.
···
 	select CPU_HAS_DSP
 	select SYS_SUPPORTS_CMT
 	select USB_ARCH_HAS_OHCI
+	select USB_OHCI_SH if USB_OHCI_HCD
 	help
 	  Select SH7721 if you have a SH3-DSP SH7721 CPU.
···
 	bool "Support SH7763 processor"
 	select CPU_SH4A
 	select USB_ARCH_HAS_OHCI
+	select USB_OHCI_SH if USB_OHCI_HCD
 	help
 	  Select SH7763 if you have a SH4A SH7763(R5S77631) CPU.
···
 	select GENERIC_CLOCKEVENTS_BROADCAST if SMP
 	select ARCH_WANT_OPTIONAL_GPIOLIB
 	select USB_ARCH_HAS_OHCI
+	select USB_OHCI_SH if USB_OHCI_HCD
 	select USB_ARCH_HAS_EHCI
+	select USB_EHCI_SH if USB_EHCI_HCD
 
 config CPU_SUBTYPE_SHX3
 	bool "Support SH-X3 processor"
arch/sh/configs/sh7757lcr_defconfig (+3 -5)
···
 CONFIG_TASK_IO_ACCOUNTING=y
 CONFIG_LOG_BUF_SHIFT=14
 CONFIG_BLK_DEV_INITRD=y
-# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
 # CONFIG_SYSCTL_SYSCALL is not set
 CONFIG_KALLSYMS_ALL=y
 CONFIG_SLAB=y
···
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 # CONFIG_FW_LOADER is not set
 CONFIG_MTD=y
-CONFIG_MTD_CONCAT=y
-CONFIG_MTD_PARTITIONS=y
 CONFIG_MTD_CHAR=y
 CONFIG_MTD_BLOCK=y
 CONFIG_MTD_M25P80=y
···
 # CONFIG_KEYBOARD_ATKBD is not set
 # CONFIG_MOUSE_PS2 is not set
 # CONFIG_SERIO is not set
+# CONFIG_LEGACY_PTYS is not set
 CONFIG_SERIAL_SH_SCI=y
 CONFIG_SERIAL_SH_SCI_NR_UARTS=3
 CONFIG_SERIAL_SH_SCI_CONSOLE=y
-# CONFIG_LEGACY_PTYS is not set
 # CONFIG_HW_RANDOM is not set
 CONFIG_SPI=y
 CONFIG_SPI_SH=y
 # CONFIG_HWMON is not set
-CONFIG_MFD_SH_MOBILE_SDHI=y
 CONFIG_USB=y
 CONFIG_USB_EHCI_HCD=y
+CONFIG_USB_EHCI_SH=y
 CONFIG_USB_OHCI_HCD=y
+CONFIG_USB_OHCI_SH=y
 CONFIG_USB_STORAGE=y
 CONFIG_MMC=y
 CONFIG_MMC_SDHI=y
arch/sh/include/asm/mmzone.h (-4)
···
 extern struct pglist_data *node_data[];
 #define NODE_DATA(nid)		(node_data[nid])
 
-#define node_start_pfn(nid)	(NODE_DATA(nid)->node_start_pfn)
-#define node_end_pfn(nid)	(NODE_DATA(nid)->node_start_pfn + \
-				 NODE_DATA(nid)->node_spanned_pages)
-
 static inline int pfn_to_nid(unsigned long pfn)
 {
 	int nid;
arch/tile/include/asm/mmzone.h
···
 	return highbits_to_node[__pfn_to_highbits(pfn)];
 }
 
-/*
- * Following are macros that each numa implmentation must define.
- */
-
-#define node_start_pfn(nid)	(NODE_DATA(nid)->node_start_pfn)
-#define node_end_pfn(nid)						\
-({									\
-	pg_data_t *__pgdat = NODE_DATA(nid);				\
-	__pgdat->node_start_pfn + __pgdat->node_spanned_pages;		\
-})
-
 #define kern_addr_valid(kaddr)	virt_addr_valid((void *)kaddr)
 
 static inline int pfn_valid(int pfn)
arch/x86/include/asm/mmzone_32.h
···
 #endif
 }
 
-/*
- * Following are macros that each numa implmentation must define.
- */
-
-#define node_start_pfn(nid)	(NODE_DATA(nid)->node_start_pfn)
-#define node_end_pfn(nid)						\
-({									\
-	pg_data_t *__pgdat = NODE_DATA(nid);				\
-	__pgdat->node_start_pfn + __pgdat->node_spanned_pages;		\
-})
-
 static inline int pfn_valid(int pfn)
 {
 	int nid = pfn_to_nid(pfn);
block/genhd.c
···
 	struct gendisk		*disk;		/* the associated disk */
 	spinlock_t		lock;
 
+	struct mutex		block_mutex;	/* protects blocking */
 	int			block;		/* event blocking depth */
 	unsigned int		pending;	/* events already sent out */
 	unsigned int		clearing;	/* events being cleared */
···
 	return msecs_to_jiffies(intv_msecs);
 }
 
-static void __disk_block_events(struct gendisk *disk, bool sync)
+/**
+ * disk_block_events - block and flush disk event checking
+ * @disk: disk to block events for
+ *
+ * On return from this function, it is guaranteed that event checking
+ * isn't in progress and won't happen until unblocked by
+ * disk_unblock_events().  Events blocking is counted and the actual
+ * unblocking happens after the matching number of unblocks are done.
+ *
+ * Note that this intentionally does not block event checking from
+ * disk_clear_events().
+ *
+ * CONTEXT:
+ * Might sleep.
+ */
+void disk_block_events(struct gendisk *disk)
 {
 	struct disk_events *ev = disk->ev;
 	unsigned long flags;
 	bool cancel;
 
+	if (!ev)
+		return;
+
+	/*
+	 * Outer mutex ensures that the first blocker completes canceling
+	 * the event work before further blockers are allowed to finish.
+	 */
+	mutex_lock(&ev->block_mutex);
+
 	spin_lock_irqsave(&ev->lock, flags);
 	cancel = !ev->block++;
 	spin_unlock_irqrestore(&ev->lock, flags);
 
-	if (cancel) {
-		if (sync)
-			cancel_delayed_work_sync(&disk->ev->dwork);
-		else
-			cancel_delayed_work(&disk->ev->dwork);
-	}
+	if (cancel)
+		cancel_delayed_work_sync(&disk->ev->dwork);
+
+	mutex_unlock(&ev->block_mutex);
 }
 
 static void __disk_unblock_events(struct gendisk *disk, bool check_now)
···
 }
 
 /**
- * disk_block_events - block and flush disk event checking
- * @disk: disk to block events for
- *
- * On return from this function, it is guaranteed that event checking
- * isn't in progress and won't happen until unblocked by
- * disk_unblock_events().  Events blocking is counted and the actual
- * unblocking happens after the matching number of unblocks are done.
- *
- * Note that this intentionally does not block event checking from
- * disk_clear_events().
- *
- * CONTEXT:
- * Might sleep.
- */
-void disk_block_events(struct gendisk *disk)
-{
-	if (disk->ev)
-		__disk_block_events(disk, true);
-}
-
-/**
  * disk_unblock_events - unblock disk event checking
  * @disk: disk to unblock events for
  *
···
  */
 void disk_check_events(struct gendisk *disk)
 {
-	if (disk->ev) {
-		__disk_block_events(disk, false);
-		__disk_unblock_events(disk, true);
+	struct disk_events *ev = disk->ev;
+	unsigned long flags;
+
+	if (!ev)
+		return;
+
+	spin_lock_irqsave(&ev->lock, flags);
+	if (!ev->block) {
+		cancel_delayed_work(&ev->dwork);
+		queue_delayed_work(system_nrt_wq, &ev->dwork, 0);
 	}
+	spin_unlock_irqrestore(&ev->lock, flags);
 }
 EXPORT_SYMBOL_GPL(disk_check_events);
···
 	spin_unlock_irq(&ev->lock);
 
 	/* uncondtionally schedule event check and wait for it to finish */
-	__disk_block_events(disk, true);
+	disk_block_events(disk);
 	queue_delayed_work(system_nrt_wq, &ev->dwork, 0);
 	flush_delayed_work(&ev->dwork);
 	__disk_unblock_events(disk, false);
···
 	if (intv < 0 && intv != -1)
 		return -EINVAL;
 
-	__disk_block_events(disk, true);
+	disk_block_events(disk);
 	disk->ev->poll_msecs = intv;
 	__disk_unblock_events(disk, true);
···
 	INIT_LIST_HEAD(&ev->node);
 	ev->disk = disk;
 	spin_lock_init(&ev->lock);
+	mutex_init(&ev->block_mutex);
 	ev->block = 1;
 	ev->poll_msecs = -1;
 	INIT_DELAYED_WORK(&ev->dwork, disk_events_workfn);
···
 	if (!disk->ev)
 		return;
 
-	__disk_block_events(disk, true);
+	disk_block_events(disk);
 
 	mutex_lock(&disk_events_mutex);
 	list_del_init(&disk->ev->node);
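The genhd.c change keeps the counted-blocking scheme: event checking is disabled on the 0-to-1 transition of a depth counter and re-enabled only when the matching number of unblocks brings it back to zero, with an outer mutex serializing the first blocker's cancellation. The depth-counting part can be sketched in userspace; the struct, helper names, and the `checking` flag standing in for the delayed work are all invented for the example.

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical userspace sketch of counted event blocking in the spirit
 * of disk_block_events()/disk_unblock_events(): a depth counter guarded
 * by a mutex.  `checking` stands in for the delayed work being queued;
 * nothing here is kernel API. */
struct event_blocker {
    pthread_mutex_t lock;
    int depth;          /* blocking depth */
    bool checking;      /* whether periodic checking is currently enabled */
};

static void block_events(struct event_blocker *b)
{
    pthread_mutex_lock(&b->lock);
    if (b->depth++ == 0)        /* first blocker disables checking */
        b->checking = false;
    pthread_mutex_unlock(&b->lock);
}

static void unblock_events(struct event_blocker *b)
{
    pthread_mutex_lock(&b->lock);
    assert(b->depth > 0);       /* unbalanced unblock is a bug */
    if (--b->depth == 0)        /* last unblock re-enables checking */
        b->checking = true;
    pthread_mutex_unlock(&b->lock);
}
```

Nesting is what the counter buys: two callers can block independently, and checking resumes only after both have unblocked.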
crypto/deflate.c (+3 -4)
···
 #include <linux/interrupt.h>
 #include <linux/mm.h>
 #include <linux/net.h>
-#include <linux/slab.h>
 
 #define DEFLATE_DEF_LEVEL		Z_DEFAULT_COMPRESSION
 #define DEFLATE_DEF_WINBITS		11
···
 	int ret = 0;
 	struct z_stream_s *stream = &ctx->decomp_stream;
 
-	stream->workspace = kzalloc(zlib_inflate_workspacesize(), GFP_KERNEL);
+	stream->workspace = vzalloc(zlib_inflate_workspacesize());
 	if (!stream->workspace) {
 		ret = -ENOMEM;
 		goto out;
···
 out:
 	return ret;
 out_free:
-	kfree(stream->workspace);
+	vfree(stream->workspace);
 	goto out;
 }
···
 static void deflate_decomp_exit(struct deflate_ctx *ctx)
 {
 	zlib_inflateEnd(&ctx->decomp_stream);
-	kfree(ctx->decomp_stream.workspace);
+	vfree(ctx->decomp_stream.workspace);
 }
 
 static int deflate_init(struct crypto_tfm *tfm)
drivers/ata/libata-core.c
···
 	 * Devices which choke on SETXFER.  Applies only if both the
 	 * device and controller are SATA.
 	 */
-	{ "PIONEER DVD-RW  DVRTD08",	"1.00",	ATA_HORKAGE_NOSETXFER },
-	{ "PIONEER DVD-RW  DVR-212D",	"1.28",	ATA_HORKAGE_NOSETXFER },
-	{ "PIONEER DVD-RW  DVR-216D",	"1.08",	ATA_HORKAGE_NOSETXFER },
+	{ "PIONEER DVD-RW  DVRTD08",	NULL,	ATA_HORKAGE_NOSETXFER },
+	{ "PIONEER DVD-RW  DVR-212D",	NULL,	ATA_HORKAGE_NOSETXFER },
+	{ "PIONEER DVD-RW  DVR-216D",	NULL,	ATA_HORKAGE_NOSETXFER },
 
 	/* End Marker */
 	{ }
drivers/ata/libata-scsi.c (+6)
···
  */
 int ata_sas_port_start(struct ata_port *ap)
 {
+	/*
+	 * the port is marked as frozen at allocation time, but if we don't
+	 * have new eh, we won't thaw it
+	 */
+	if (!ap->ops->error_handler)
+		ap->pflags &= ~ATA_PFLAG_FROZEN;
 	return 0;
 }
 EXPORT_SYMBOL_GPL(ata_sas_port_start);
drivers/ata/sata_dwc_460ex.c
···
 /*
  * Function: get_burst_length_encode
  * arguments: datalength: length in bytes of data
- * returns value to be programmed in register corrresponding to data length
+ * returns value to be programmed in register corresponding to data length
  * This value is effectively the log(base 2) of the length
  */
 static int get_burst_length_encode(int datalength)
drivers/base/platform.c (+1 -1)
···
  *
  * Returns &struct platform_device pointer on success, or ERR_PTR() on error.
  */
-struct platform_device *__init_or_module platform_device_register_resndata(
+struct platform_device *platform_device_register_resndata(
 		struct device *parent,
 		const char *name, int id,
 		const struct resource *res, unsigned int num,
drivers/base/power/clock_ops.c (+2 -2)
···
 	clknb = container_of(nb, struct pm_clk_notifier_block, nb);
 
 	switch (action) {
-	case BUS_NOTIFY_ADD_DEVICE:
+	case BUS_NOTIFY_BIND_DRIVER:
 		if (clknb->con_ids[0]) {
 			for (con_id = clknb->con_ids; *con_id; con_id++)
 				enable_clock(dev, *con_id);
···
 			enable_clock(dev, NULL);
 		}
 		break;
-	case BUS_NOTIFY_DEL_DEVICE:
+	case BUS_NOTIFY_UNBOUND_DRIVER:
 		if (clknb->con_ids[0]) {
 			for (con_id = clknb->con_ids; *con_id; con_id++)
 				disable_clock(dev, *con_id);
drivers/base/power/main.c (+21 -7)
···
  */
 void device_pm_init(struct device *dev)
 {
-	dev->power.in_suspend = false;
+	dev->power.is_prepared = false;
+	dev->power.is_suspended = false;
 	init_completion(&dev->power.completion);
 	complete_all(&dev->power.completion);
 	dev->power.wakeup = NULL;
···
 	pr_debug("PM: Adding info for %s:%s\n",
 		 dev->bus ? dev->bus->name : "No Bus", dev_name(dev));
 	mutex_lock(&dpm_list_mtx);
-	if (dev->parent && dev->parent->power.in_suspend)
+	if (dev->parent && dev->parent->power.is_prepared)
 		dev_warn(dev, "parent %s should not be sleeping\n",
 			 dev_name(dev->parent));
 	list_add_tail(&dev->power.entry, &dpm_list);
···
 	dpm_wait(dev->parent, async);
 	device_lock(dev);
 
-	dev->power.in_suspend = false;
+	/*
+	 * This is a fib.  But we'll allow new children to be added below
+	 * a resumed device, even if the device hasn't been completed yet.
+	 */
+	dev->power.is_prepared = false;
+
+	if (!dev->power.is_suspended)
+		goto Unlock;
 
 	if (dev->pwr_domain) {
 		pm_dev_dbg(dev, state, "power domain ");
···
 	}
 
  End:
+	dev->power.is_suspended = false;
+
+ Unlock:
 	device_unlock(dev);
 	complete_all(&dev->power.completion);
···
 		struct device *dev = to_device(dpm_prepared_list.prev);
 
 		get_device(dev);
-		dev->power.in_suspend = false;
+		dev->power.is_prepared = false;
 		list_move(&dev->power.entry, &list);
 		mutex_unlock(&dpm_list_mtx);
···
 	device_lock(dev);
 
 	if (async_error)
-		goto End;
+		goto Unlock;
 
 	if (pm_wakeup_pending()) {
 		async_error = -EBUSY;
-		goto End;
+		goto Unlock;
 	}
 
 	if (dev->pwr_domain) {
···
 	}
 
  End:
+	dev->power.is_suspended = !error;
+
+ Unlock:
 	device_unlock(dev);
 	complete_all(&dev->power.completion);
···
 			put_device(dev);
 			break;
 		}
-		dev->power.in_suspend = true;
+		dev->power.is_prepared = true;
 		if (!list_empty(&dev->power.entry))
 			list_move_tail(&dev->power.entry, &dpm_prepared_list);
 		put_device(dev);
drivers/firmware/google/Kconfig
···
 config GOOGLE_SMI
 	tristate "SMI interface for Google platforms"
 	depends on ACPI && DMI
+	select EFI
 	select EFI_VARS
 	help
 	  Say Y here if you want to enable SMI callbacks for Google
drivers/gpu/drm/i915/i915_dma.c
···
 		/* Flush any outstanding unpin_work. */
 		flush_workqueue(dev_priv->wq);
 
-		i915_gem_free_all_phys_object(dev);
-
 		mutex_lock(&dev->struct_mutex);
+		i915_gem_free_all_phys_object(dev);
 		i915_gem_cleanup_ringbuffer(dev);
 		mutex_unlock(&dev->struct_mutex);
 		if (I915_HAS_FBC(dev) && i915_powersave)
drivers/gpu/drm/i915/i915_drv.c (+3)
···
 	} else switch (INTEL_INFO(dev)->gen) {
 	case 6:
 		ret = gen6_do_reset(dev, flags);
+		/* If reset with a user forcewake, try to restore */
+		if (atomic_read(&dev_priv->forcewake_count))
+			__gen6_gt_force_wake_get(dev_priv);
 		break;
 	case 5:
 		ret = ironlake_do_reset(dev, flags);
drivers/gpu/drm/radeon/radeon_atombios.c
···
 			le16_to_cpu(clock_info->r600.usVDDC);
 	}
 
+	/* patch up vddc if necessary */
+	if (rdev->pm.power_state[state_index].clock_info[mode_index].voltage.voltage == 0xff01) {
+		u16 vddc;
+
+		if (radeon_atom_get_max_vddc(rdev, &vddc) == 0)
+			rdev->pm.power_state[state_index].clock_info[mode_index].voltage.voltage = vddc;
+	}
+
 	if (rdev->flags & RADEON_IS_IGP) {
 		/* skip invalid modes */
 		if (rdev->pm.power_state[state_index].clock_info[mode_index].sclk == 0)
···
 	atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
 }
 
+int radeon_atom_get_max_vddc(struct radeon_device *rdev,
+			     u16 *voltage)
+{
+	union set_voltage args;
+	int index = GetIndexIntoMasterTable(COMMAND, SetVoltage);
+	u8 frev, crev;
+
+	if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev))
+		return -EINVAL;
+
+	switch (crev) {
+	case 1:
+		return -EINVAL;
+	case 2:
+		args.v2.ucVoltageType = SET_VOLTAGE_GET_MAX_VOLTAGE;
+		args.v2.ucVoltageMode = 0;
+		args.v2.usVoltageLevel = 0;
+
+		atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
+
+		*voltage = le16_to_cpu(args.v2.usVoltageLevel);
+		break;
+	default:
+		DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
+		return -EINVAL;
+	}
+
+	return 0;
+}
 
 void radeon_atom_initialize_bios_scratch_regs(struct drm_device *dev)
 {
+1-1
drivers/gpu/drm/radeon/radeon_bios.c
···
 static bool radeon_atrm_get_bios(struct radeon_device *rdev)
 {
 	int ret;
-	int size = 64 * 1024;
+	int size = 256 * 1024;
 	int i;
 
 	if (!radeon_atrm_supported(rdev->pdev))
+3-2
drivers/gpu/drm/ttm/ttm_tt.c
···
 #include <linux/sched.h>
 #include <linux/highmem.h>
 #include <linux/pagemap.h>
+#include <linux/shmem_fs.h>
 #include <linux/file.h>
 #include <linux/swap.h>
 #include <linux/slab.h>
···
 	swap_space = swap_storage->f_path.dentry->d_inode->i_mapping;
 
 	for (i = 0; i < ttm->num_pages; ++i) {
-		from_page = read_mapping_page(swap_space, i, NULL);
+		from_page = shmem_read_mapping_page(swap_space, i);
 		if (IS_ERR(from_page)) {
 			ret = PTR_ERR(from_page);
 			goto out_err;
···
 		from_page = ttm->pages[i];
 		if (unlikely(from_page == NULL))
 			continue;
-		to_page = read_mapping_page(swap_space, i, NULL);
+		to_page = shmem_read_mapping_page(swap_space, i);
 		if (unlikely(IS_ERR(to_page))) {
 			ret = PTR_ERR(to_page);
 			goto out_err;
···
 
 	i2c_set_clientdata(client, data);
 
-	/* Read the mux register at addr to verify
-	 * that the mux is in fact present.
+	/* Write the mux register at addr to verify
+	 * that the mux is in fact present. This also
+	 * initializes the mux to disconnected state.
 	 */
-	if (i2c_smbus_read_byte(client) < 0) {
+	if (i2c_smbus_write_byte(client, 0) < 0) {
 		dev_warn(&client->dev, "probe failed\n");
 		goto exit_free;
 	}
+37-9
drivers/infiniband/hw/cxgb4/cm.c
···
 	struct c4iw_qp_attributes attrs;
 	int disconnect = 1;
 	int release = 0;
-	int abort = 0;
 	struct tid_info *t = dev->rdev.lldi.tids;
 	unsigned int tid = GET_TID(hdr);
+	int ret;
 
 	ep = lookup_tid(t, tid);
 	PDBG("%s ep %p tid %u\n", __func__, ep, ep->hwtid);
···
 		start_ep_timer(ep);
 		__state_set(&ep->com, CLOSING);
 		attrs.next_state = C4IW_QP_STATE_CLOSING;
-		abort = c4iw_modify_qp(ep->com.qp->rhp, ep->com.qp,
+		ret = c4iw_modify_qp(ep->com.qp->rhp, ep->com.qp,
 				       C4IW_QP_ATTR_NEXT_STATE, &attrs, 1);
-		peer_close_upcall(ep);
-		disconnect = 1;
+		if (ret != -ECONNRESET) {
+			peer_close_upcall(ep);
+			disconnect = 1;
+		}
 		break;
 	case ABORTING:
 		disconnect = 0;
···
 		break;
 	}
 
-	mutex_unlock(&ep->com.mutex);
 	if (close) {
-		if (abrupt)
-			ret = abort_connection(ep, NULL, gfp);
-		else
+		if (abrupt) {
+			close_complete_upcall(ep);
+			ret = send_abort(ep, NULL, gfp);
+		} else
 			ret = send_halfclose(ep, gfp);
 		if (ret)
 			fatal = 1;
 	}
+	mutex_unlock(&ep->com.mutex);
 	if (fatal)
 		release_ep_resources(ep);
 	return ret;
···
 	return 0;
 }
 
+static int peer_abort_intr(struct c4iw_dev *dev, struct sk_buff *skb)
+{
+	struct cpl_abort_req_rss *req = cplhdr(skb);
+	struct c4iw_ep *ep;
+	struct tid_info *t = dev->rdev.lldi.tids;
+	unsigned int tid = GET_TID(req);
+
+	ep = lookup_tid(t, tid);
+	if (is_neg_adv_abort(req->status)) {
+		PDBG("%s neg_adv_abort ep %p tid %u\n", __func__, ep,
+		     ep->hwtid);
+		kfree_skb(skb);
+		return 0;
+	}
+	PDBG("%s ep %p tid %u state %u\n", __func__, ep, ep->hwtid,
+	     ep->com.state);
+
+	/*
+	 * Wake up any threads in rdma_init() or rdma_fini().
+	 */
+	c4iw_wake_up(&ep->com.wr_wait, -ECONNRESET);
+	sched(dev, skb);
+	return 0;
+}
+
 /*
  * Most upcalls from the T4 Core go to sched() to
  * schedule the processing on a work queue.
···
 	[CPL_PASS_ESTABLISH] = sched,
 	[CPL_PEER_CLOSE] = sched,
 	[CPL_CLOSE_CON_RPL] = sched,
-	[CPL_ABORT_REQ_RSS] = sched,
+	[CPL_ABORT_REQ_RSS] = peer_abort_intr,
 	[CPL_RDMA_TERMINATE] = sched,
 	[CPL_FW4_ACK] = sched,
 	[CPL_SET_TCB_RPL] = set_tcb_rpl,
···
 			c4iw_get_ep(&qhp->ep->com);
 		}
 		ret = rdma_fini(rhp, qhp, ep);
-		if (ret) {
-			if (internal)
-				c4iw_get_ep(&qhp->ep->com);
+		if (ret)
 			goto err;
-		}
 		break;
 	case C4IW_QP_STATE_TERMINATE:
 		set_state(qhp, C4IW_QP_STATE_TERMINATE);
+18-7
drivers/infiniband/hw/qib/qib_iba7322.c
···
 #define IB_7322_LT_STATE_RECOVERIDLE    0x0f
 #define IB_7322_LT_STATE_CFGENH         0x10
 #define IB_7322_LT_STATE_CFGTEST        0x11
+#define IB_7322_LT_STATE_CFGWAITRMTTEST 0x12
+#define IB_7322_LT_STATE_CFGWAITENH     0x13
 
 /* link state machine states from IBC */
 #define IB_7322_L_STATE_DOWN            0x0
···
 		IB_PHYSPORTSTATE_LINK_ERR_RECOVER,
 	[IB_7322_LT_STATE_CFGENH] = IB_PHYSPORTSTATE_CFG_ENH,
 	[IB_7322_LT_STATE_CFGTEST] = IB_PHYSPORTSTATE_CFG_TRAIN,
-	[0x12] = IB_PHYSPORTSTATE_CFG_TRAIN,
-	[0x13] = IB_PHYSPORTSTATE_CFG_WAIT_ENH,
+	[IB_7322_LT_STATE_CFGWAITRMTTEST] =
+		IB_PHYSPORTSTATE_CFG_TRAIN,
+	[IB_7322_LT_STATE_CFGWAITENH] =
+		IB_PHYSPORTSTATE_CFG_WAIT_ENH,
 	[0x14] = IB_PHYSPORTSTATE_CFG_TRAIN,
 	[0x15] = IB_PHYSPORTSTATE_CFG_TRAIN,
 	[0x16] = IB_PHYSPORTSTATE_CFG_TRAIN,
···
 		break;
 	}
 
-	if (ibclt == IB_7322_LT_STATE_CFGTEST &&
+	if (((ibclt >= IB_7322_LT_STATE_CFGTEST &&
+	      ibclt <= IB_7322_LT_STATE_CFGWAITENH) ||
+	     ibclt == IB_7322_LT_STATE_LINKUP) &&
 	    (ibcst & SYM_MASK(IBCStatusA_0, LinkSpeedQDR))) {
 		force_h1(ppd);
 		ppd->cpspec->qdr_reforce = 1;
···
 static void serdes_7322_los_enable(struct qib_pportdata *ppd, int enable)
 {
 	u64 data = qib_read_kreg_port(ppd, krp_serdesctrl);
-	printk(KERN_INFO QIB_DRV_NAME " IB%u:%u Turning LOS %s\n",
-		ppd->dd->unit, ppd->port, (enable ? "on" : "off"));
-	if (enable)
+	u8 state = SYM_FIELD(data, IBSerdesCtrl_0, RXLOSEN);
+
+	if (enable && !state) {
+		printk(KERN_INFO QIB_DRV_NAME " IB%u:%u Turning LOS on\n",
+			ppd->dd->unit, ppd->port);
 		data |= SYM_MASK(IBSerdesCtrl_0, RXLOSEN);
-	else
+	} else if (!enable && state) {
+		printk(KERN_INFO QIB_DRV_NAME " IB%u:%u Turning LOS off\n",
+			ppd->dd->unit, ppd->port);
 		data &= ~SYM_MASK(IBSerdesCtrl_0, RXLOSEN);
+	}
 	qib_write_kreg_port(ppd, krp_serdesctrl, data);
 }
 
+5-1
drivers/infiniband/hw/qib/qib_intr.c
···
 	 * states, or if it transitions from any of the up (INIT or better)
 	 * states into any of the down states (except link recovery), then
 	 * call the chip-specific code to take appropriate actions.
+	 *
+	 * ppd->lflags could be 0 if this is the first time the interrupt
+	 * handlers has been called but the link is already up.
 	 */
-	if (lstate >= IB_PORT_INIT && (ppd->lflags & QIBL_LINKDOWN) &&
+	if (lstate >= IB_PORT_INIT &&
+	    (!ppd->lflags || (ppd->lflags & QIBL_LINKDOWN)) &&
 	    ltstate == IB_PHYSPORTSTATE_LINKUP) {
 		/* transitioned to UP */
 		if (dd->f_ib_updown(ppd, 1, ibcs))
+2-2
drivers/leds/leds-lp5521.c
···
 					  &lp5521_led_attribute_group);
 }
 
-static int __init lp5521_init_led(struct lp5521_led *led,
+static int __devinit lp5521_init_led(struct lp5521_led *led,
 				struct i2c_client *client,
 				int chan, struct lp5521_platform_data *pdata)
 {
···
 	return 0;
 }
 
-static int lp5521_probe(struct i2c_client *client,
+static int __devinit lp5521_probe(struct i2c_client *client,
 			const struct i2c_device_id *id)
 {
 	struct lp5521_chip *chip;
+2-2
drivers/leds/leds-lp5523.c
···
 	return 0;
 }
 
-static int __init lp5523_init_led(struct lp5523_led *led, struct device *dev,
+static int __devinit lp5523_init_led(struct lp5523_led *led, struct device *dev,
 			   int chan, struct lp5523_platform_data *pdata)
 {
 	char name[32];
···
 
 static struct i2c_driver lp5523_driver;
 
-static int lp5523_probe(struct i2c_client *client,
+static int __devinit lp5523_probe(struct i2c_client *client,
 			const struct i2c_device_id *id)
 {
 	struct lp5523_chip *chip;
···
 static int mmc_sdio_power_restore(struct mmc_host *host)
 {
 	int ret;
+	u32 ocr;
 
 	BUG_ON(!host);
 	BUG_ON(!host->card);
 
 	mmc_claim_host(host);
+
+	/*
+	 * Reset the card by performing the same steps that are taken by
+	 * mmc_rescan_try_freq() and mmc_attach_sdio() during a "normal" probe.
+	 *
+	 * sdio_reset() is technically not needed. Having just powered up the
+	 * hardware, it should already be in reset state. However, some
+	 * platforms (such as SD8686 on OLPC) do not instantly cut power,
+	 * meaning that a reset is required when restoring power soon after
+	 * powering off. It is harmless in other cases.
+	 *
+	 * The CMD5 reset (mmc_send_io_op_cond()), according to the SDIO spec,
+	 * is not necessary for non-removable cards. However, it is required
+	 * for OLPC SD8686 (which expects a [CMD5,5,3,7] init sequence), and
+	 * harmless in other situations.
+	 *
+	 * With these steps taken, mmc_select_voltage() is also required to
+	 * restore the correct voltage setting of the card.
+	 */
+	sdio_reset(host);
+	mmc_go_idle(host);
+	mmc_send_if_cond(host, host->ocr_avail);
+
+	ret = mmc_send_io_op_cond(host, 0, &ocr);
+	if (ret)
+		goto out;
+
+	if (host->ocr_avail_sdio)
+		host->ocr_avail = host->ocr_avail_sdio;
+
+	host->ocr = mmc_select_voltage(host, ocr & ~0x7F);
+	if (!host->ocr) {
+		ret = -EINVAL;
+		goto out;
+	}
+
 	ret = mmc_sdio_init_card(host, host->ocr, host->card,
 				mmc_card_keep_power(host));
 	if (!ret && host->sdio_irqs)
 		mmc_signal_sdio_irq(host);
+
+out:
 	mmc_release_host(host);
 
 	return ret;
+1-1
drivers/mmc/core/sdio_bus.c
···
 
 	/* Then undo the runtime PM settings in sdio_bus_probe() */
 	if (func->card->host->caps & MMC_CAP_POWER_OFF_CARD)
-		pm_runtime_put_noidle(dev);
+		pm_runtime_put_sync(dev);
 
 out:
 	return ret;
+5
drivers/mmc/host/of_mmc_spi.c
···
 #include <linux/mmc/core.h>
 #include <linux/mmc/host.h>
 
+/* For archs that don't support NO_IRQ (such as mips), provide a dummy value */
+#ifndef NO_IRQ
+#define NO_IRQ 0
+#endif
+
 MODULE_LICENSE("GPL");
 
 enum {
···
 
 config NETCONSOLE_DYNAMIC
 	bool "Dynamic reconfiguration of logging targets"
-	depends on NETCONSOLE && SYSFS && CONFIGFS_FS
+	depends on NETCONSOLE && SYSFS && CONFIGFS_FS && \
+		!(NETCONSOLE=y && CONFIGFS_FS=m)
 	help
 	  This option enables the ability to dynamically reconfigure target
 	  parameters (interface, IP addresses, port numbers, MAC addresses)
···
 config CAN_DEV
 	tristate "Platform CAN drivers with Netlink support"
 	depends on CAN
-	default Y
+	default y
 	---help---
 	  Enables the common framework for platform CAN drivers with Netlink
 	  support. This is the standard library for CAN drivers.
···
 config CAN_CALC_BITTIMING
 	bool "CAN bit-timing calculation"
 	depends on CAN_DEV
-	default Y
+	default y
 	---help---
 	  If enabled, CAN bit-timing parameters will be calculated for the
 	  bit-rate specified via Netlink argument "bitrate" when the device
···
 	 * thread
 	 */
 	clear_bit(QL_ADAPTER_UP, &qdev->flags);
+	/* Set asic recovery bit to indicate reset process that we are
+	 * in fatal error recovery process rather than normal close
+	 */
+	set_bit(QL_ASIC_RECOVERY, &qdev->flags);
 	queue_delayed_work(qdev->workqueue, &qdev->asic_reset_work, 0);
 }
 
···
 		return;
 
 	case CAM_LOOKUP_ERR_EVENT:
-		netif_err(qdev, link, qdev->ndev,
-			  "Multiple CAM hits lookup occurred.\n");
-		netif_err(qdev, drv, qdev->ndev,
-			  "This event shouldn't occur.\n");
+		netdev_err(qdev->ndev, "Multiple CAM hits lookup occurred.\n");
+		netdev_err(qdev->ndev, "This event shouldn't occur.\n");
 		ql_queue_asic_error(qdev);
 		return;
 
 	case SOFT_ECC_ERROR_EVENT:
-		netif_err(qdev, rx_err, qdev->ndev,
-			  "Soft ECC error detected.\n");
+		netdev_err(qdev->ndev, "Soft ECC error detected.\n");
 		ql_queue_asic_error(qdev);
 		break;
 
 	case PCI_ERR_ANON_BUF_RD:
-		netif_err(qdev, rx_err, qdev->ndev,
-			  "PCI error occurred when reading anonymous buffers from rx_ring %d.\n",
-			  ib_ae_rsp->q_id);
+		netdev_err(qdev->ndev, "PCI error occurred when reading "
+					"anonymous buffers from rx_ring %d.\n",
+					ib_ae_rsp->q_id);
 		ql_queue_asic_error(qdev);
 		break;
 
···
 	 */
 	if (var & STS_FE) {
 		ql_queue_asic_error(qdev);
-		netif_err(qdev, intr, qdev->ndev,
-			  "Got fatal error, STS = %x.\n", var);
+		netdev_err(qdev->ndev, "Got fatal error, STS = %x.\n", var);
 		var = ql_read32(qdev, ERR_STS);
-		netif_err(qdev, intr, qdev->ndev,
-			  "Resetting chip. Error Status Register = 0x%x\n", var);
+		netdev_err(qdev->ndev, "Resetting chip. "
+					"Error Status Register = 0x%x\n", var);
 		return IRQ_HANDLED;
 	}
 
···
 	end_jiffies = jiffies +
 		max((unsigned long)1, usecs_to_jiffies(30));
 
-	/* Stop management traffic. */
-	ql_mb_set_mgmnt_traffic_ctl(qdev, MB_SET_MPI_TFK_STOP);
+	/* Check if bit is set then skip the mailbox command and
+	 * clear the bit, else we are in normal reset process.
+	 */
+	if (!test_bit(QL_ASIC_RECOVERY, &qdev->flags)) {
+		/* Stop management traffic. */
+		ql_mb_set_mgmnt_traffic_ctl(qdev, MB_SET_MPI_TFK_STOP);
 
-	/* Wait for the NIC and MGMNT FIFOs to empty. */
-	ql_wait_fifo_empty(qdev);
+		/* Wait for the NIC and MGMNT FIFOs to empty. */
+		ql_wait_fifo_empty(qdev);
+	} else
+		clear_bit(QL_ASIC_RECOVERY, &qdev->flags);
 
 	ql_write32(qdev, RST_FO, (RST_FO_FR << 16) | RST_FO_FR);
 
+1-1
drivers/net/r8169.c
···
 	msleep(2);
 	for (i = 0; i < 5; i++) {
 		udelay(100);
-		if (!(RTL_R32(ERIDR) & ERIAR_FLAG))
+		if (!(RTL_R32(ERIAR) & ERIAR_FLAG))
 			break;
 	}
 
+15-13
drivers/net/rionet.c
···
 
 static void rionet_remove(struct rio_dev *rdev)
 {
-	struct net_device *ndev = NULL;
+	struct net_device *ndev = rio_get_drvdata(rdev);
 	struct rionet_peer *peer, *tmp;
 
 	free_pages((unsigned long)rionet_active, rdev->net->hport->sys_size ?
···
 	.ndo_set_mac_address	= eth_mac_addr,
 };
 
-static int rionet_setup_netdev(struct rio_mport *mport)
+static int rionet_setup_netdev(struct rio_mport *mport, struct net_device *ndev)
 {
 	int rc = 0;
-	struct net_device *ndev = NULL;
 	struct rionet_private *rnet;
 	u16 device_id;
-
-	/* Allocate our net_device structure */
-	ndev = alloc_etherdev(sizeof(struct rionet_private));
-	if (ndev == NULL) {
-		printk(KERN_INFO "%s: could not allocate ethernet device.\n",
-		       DRV_NAME);
-		rc = -ENOMEM;
-		goto out;
-	}
 
 	rionet_active = (struct rio_dev **)__get_free_pages(GFP_KERNEL,
 			mport->sys_size ? __fls(sizeof(void *)) + 4 : 0);
···
 	int rc = -ENODEV;
 	u32 lpef, lsrc_ops, ldst_ops;
 	struct rionet_peer *peer;
+	struct net_device *ndev = NULL;
 
 	/* If local device is not rionet capable, give up quickly */
 	if (!rionet_capable)
 		goto out;
+
+	/* Allocate our net_device structure */
+	ndev = alloc_etherdev(sizeof(struct rionet_private));
+	if (ndev == NULL) {
+		printk(KERN_INFO "%s: could not allocate ethernet device.\n",
+		       DRV_NAME);
+		rc = -ENOMEM;
+		goto out;
+	}
 
 	/*
 	 * First time through, make sure local device is rionet
···
 			goto out;
 		}
 
-		rc = rionet_setup_netdev(rdev->net->hport);
+		rc = rionet_setup_netdev(rdev->net->hport, ndev);
 		rionet_check = 1;
 	}
 
···
 		peer->rdev = rdev;
 		list_add_tail(&peer->node, &rionet_peers);
 	}
+
+	rio_set_drvdata(rdev, ndev);
 
       out:
 	return rc;
···
 	ZAURUS_MASTER_INTERFACE,
 	.driver_info = ZAURUS_PXA_INFO,
 },
-
-
-/* At least some of the newest PXA units have very different lies about
- * their standards support:  they claim to be cell phones offering
- * direct access to their radios!  (No, they don't conform to CDC MDLM.)
- */
 {
-	USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_MDLM,
-			USB_CDC_PROTO_NONE),
-	.driver_info = (unsigned long) &bogus_mdlm_info,
-}, {
 	/* Motorola MOTOMAGX phones */
 	USB_DEVICE_AND_INTERFACE_INFO(0x22b8, 0x6425, USB_CLASS_COMM,
 			USB_CDC_SUBCLASS_MDLM, USB_CDC_PROTO_NONE),
+92-39
drivers/net/vmxnet3/vmxnet3_drv.c
···
 	struct vmxnet3_cmd_ring *ring = &rq->rx_ring[ring_idx];
 	u32 val;
 
-	while (num_allocated < num_to_alloc) {
+	while (num_allocated <= num_to_alloc) {
 		struct vmxnet3_rx_buf_info *rbi;
 		union Vmxnet3_GenericDesc *gd;
 
···
 
 		BUG_ON(rbi->dma_addr == 0);
 		gd->rxd.addr = cpu_to_le64(rbi->dma_addr);
-		gd->dword[2] = cpu_to_le32((ring->gen << VMXNET3_RXD_GEN_SHIFT)
+		gd->dword[2] = cpu_to_le32((!ring->gen << VMXNET3_RXD_GEN_SHIFT)
 				| val | rbi->len);
 
+		/* Fill the last buffer but dont mark it ready, or else the
+		 * device will think that the queue is full */
+		if (num_allocated == num_to_alloc)
+			break;
+
+		gd->dword[2] |= cpu_to_le32(ring->gen << VMXNET3_RXD_GEN_SHIFT);
 		num_allocated++;
 		vmxnet3_cmd_ring_adv_next2fill(ring);
 	}
···
 		VMXNET3_REG_RXPROD, VMXNET3_REG_RXPROD2
 	};
 	u32 num_rxd = 0;
+	bool skip_page_frags = false;
 	struct Vmxnet3_RxCompDesc *rcd;
 	struct vmxnet3_rx_ctx *ctx = &rq->rx_ctx;
 #ifdef __BIG_ENDIAN_BITFIELD
···
 			  &rxComp);
 	while (rcd->gen == rq->comp_ring.gen) {
 		struct vmxnet3_rx_buf_info *rbi;
-		struct sk_buff *skb;
+		struct sk_buff *skb, *new_skb = NULL;
+		struct page *new_page = NULL;
 		int num_to_alloc;
 		struct Vmxnet3_RxDesc *rxd;
 		u32 idx, ring_idx;
-
+		struct vmxnet3_cmd_ring	*ring = NULL;
 		if (num_rxd >= quota) {
 			/* we may stop even before we see the EOP desc of
 			 * the current pkt
···
 		BUG_ON(rcd->rqID != rq->qid && rcd->rqID != rq->qid2);
 		idx = rcd->rxdIdx;
 		ring_idx = rcd->rqID < adapter->num_rx_queues ? 0 : 1;
+		ring = rq->rx_ring + ring_idx;
 		vmxnet3_getRxDesc(rxd, &rq->rx_ring[ring_idx].base[idx].rxd,
 				  &rxCmdDesc);
 		rbi = rq->buf_info[ring_idx] + idx;
···
 				goto rcd_done;
 			}
 
+			skip_page_frags = false;
 			ctx->skb = rbi->skb;
-			rbi->skb = NULL;
+			new_skb = dev_alloc_skb(rbi->len + NET_IP_ALIGN);
+			if (new_skb == NULL) {
+				/* Skb allocation failed, do not handover this
+				 * skb to stack. Reuse it. Drop the existing pkt
+				 */
+				rq->stats.rx_buf_alloc_failure++;
+				ctx->skb = NULL;
+				rq->stats.drop_total++;
+				skip_page_frags = true;
+				goto rcd_done;
+			}
 
 			pci_unmap_single(adapter->pdev, rbi->dma_addr, rbi->len,
 					 PCI_DMA_FROMDEVICE);
 
 			skb_put(ctx->skb, rcd->len);
+
+			/* Immediate refill */
+			new_skb->dev = adapter->netdev;
+			skb_reserve(new_skb, NET_IP_ALIGN);
+			rbi->skb = new_skb;
+			rbi->dma_addr = pci_map_single(adapter->pdev,
+					rbi->skb->data, rbi->len,
+					PCI_DMA_FROMDEVICE);
+			rxd->addr = cpu_to_le64(rbi->dma_addr);
+			rxd->len = rbi->len;
+
 		} else {
-			BUG_ON(ctx->skb == NULL);
+			BUG_ON(ctx->skb == NULL && !skip_page_frags);
+
 			/* non SOP buffer must be type 1 in most cases */
-			if (rbi->buf_type == VMXNET3_RX_BUF_PAGE) {
-				BUG_ON(rxd->btype != VMXNET3_RXD_BTYPE_BODY);
+			BUG_ON(rbi->buf_type != VMXNET3_RX_BUF_PAGE);
+			BUG_ON(rxd->btype != VMXNET3_RXD_BTYPE_BODY);
 
-				if (rcd->len) {
-					pci_unmap_page(adapter->pdev,
-						       rbi->dma_addr, rbi->len,
-						       PCI_DMA_FROMDEVICE);
+			/* If an sop buffer was dropped, skip all
+			 * following non-sop fragments. They will be reused.
+			 */
+			if (skip_page_frags)
+				goto rcd_done;
 
-					vmxnet3_append_frag(ctx->skb, rcd, rbi);
-					rbi->page = NULL;
-				}
-			} else {
-				/*
-				 * The only time a non-SOP buffer is type 0 is
-				 * when it's EOP and error flag is raised, which
-				 * has already been handled.
+			new_page = alloc_page(GFP_ATOMIC);
+			if (unlikely(new_page == NULL)) {
+				/* Replacement page frag could not be allocated.
+				 * Reuse this page. Drop the pkt and free the
+				 * skb which contained this page as a frag. Skip
+				 * processing all the following non-sop frags.
 				 */
-				BUG_ON(true);
+				rq->stats.rx_buf_alloc_failure++;
+				dev_kfree_skb(ctx->skb);
+				ctx->skb = NULL;
+				skip_page_frags = true;
+				goto rcd_done;
 			}
+
+			if (rcd->len) {
+				pci_unmap_page(adapter->pdev,
+					       rbi->dma_addr, rbi->len,
+					       PCI_DMA_FROMDEVICE);
+
+				vmxnet3_append_frag(ctx->skb, rcd, rbi);
+			}
+
+			/* Immediate refill */
+			rbi->page = new_page;
+			rbi->dma_addr = pci_map_page(adapter->pdev, rbi->page,
+						     0, PAGE_SIZE,
+						     PCI_DMA_FROMDEVICE);
+			rxd->addr = cpu_to_le64(rbi->dma_addr);
+			rxd->len = rbi->len;
 		}
+
 
 		skb = ctx->skb;
 		if (rcd->eop) {
···
 		}
 
 rcd_done:
-		/* device may skip some rx descs */
-		rq->rx_ring[ring_idx].next2comp = idx;
-		VMXNET3_INC_RING_IDX_ONLY(rq->rx_ring[ring_idx].next2comp,
-					  rq->rx_ring[ring_idx].size);
+		/* device may have skipped some rx descs */
+		ring->next2comp = idx;
+		num_to_alloc = vmxnet3_cmd_ring_desc_avail(ring);
+		ring = rq->rx_ring + ring_idx;
+		while (num_to_alloc) {
+			vmxnet3_getRxDesc(rxd, &ring->base[ring->next2fill].rxd,
+					  &rxCmdDesc);
+			BUG_ON(!rxd->addr);
 
-		/* refill rx buffers frequently to avoid starving the h/w */
-		num_to_alloc = vmxnet3_cmd_ring_desc_avail(rq->rx_ring +
-							   ring_idx);
-		if (unlikely(num_to_alloc > VMXNET3_RX_ALLOC_THRESHOLD(rq,
-							ring_idx, adapter))) {
-			vmxnet3_rq_alloc_rx_buf(rq, ring_idx, num_to_alloc,
-						adapter);
+			/* Recv desc is ready to be used by the device */
+			rxd->gen = ring->gen;
+			vmxnet3_cmd_ring_adv_next2fill(ring);
+			num_to_alloc--;
+		}
 
-			/* if needed, update the register */
-			if (unlikely(rq->shared->updateRxProd)) {
-				VMXNET3_WRITE_BAR0_REG(adapter,
-					rxprod_reg[ring_idx] + rq->qid * 8,
-					rq->rx_ring[ring_idx].next2fill);
-				rq->uncommitted[ring_idx] = 0;
-			}
+		/* if needed, update the register */
+		if (unlikely(rq->shared->updateRxProd)) {
+			VMXNET3_WRITE_BAR0_REG(adapter,
+				rxprod_reg[ring_idx] + rq->qid * 8,
+				ring->next2fill);
+			rq->uncommitted[ring_idx] = 0;
 		}
 
 		vmxnet3_comp_ring_adv_next2proc(&rq->comp_ring);
+2-2
drivers/net/vmxnet3/vmxnet3_int.h
···
 /*
  * Version numbers
  */
-#define VMXNET3_DRIVER_VERSION_STRING   "1.1.9.0-k"
+#define VMXNET3_DRIVER_VERSION_STRING   "1.1.14.0-k"
 
 /* a 32-bit int, each byte encode a verion number in VMXNET3_DRIVER_VERSION */
-#define VMXNET3_DRIVER_VERSION_NUM      0x01010900
+#define VMXNET3_DRIVER_VERSION_NUM      0x01010E00
 
 #if defined(CONFIG_PCI_MSI)
 	/* RSS only makes sense if MSI-X is supported. */
+3-5
drivers/net/wireless/ath/ath5k/eeprom.c
···
 		if (!chinfo[pier].pd_curves)
 			continue;
 
-		for (pdg = 0; pdg < ee->ee_pd_gains[mode]; pdg++) {
+		for (pdg = 0; pdg < AR5K_EEPROM_N_PD_CURVES; pdg++) {
 			struct ath5k_pdgain_info *pd =
 					&chinfo[pier].pd_curves[pdg];
 
-			if (pd != NULL) {
-				kfree(pd->pd_step);
-				kfree(pd->pd_pwr);
-			}
+			kfree(pd->pd_step);
+			kfree(pd->pd_pwr);
 		}
 
 		kfree(chinfo[pier].pd_curves);
+6
drivers/net/wireless/ath/ath9k/pci.c
···
 
 	ath9k_hw_set_gpio(sc->sc_ah, sc->sc_ah->led_pin, 1);
 
+	/* The device has to be moved to FULLSLEEP forcibly.
+	 * Otherwise the chip never moved to full sleep,
+	 * when no interface is up.
+	 */
+	ath9k_hw_setpower(sc->sc_ah, ATH9K_PM_FULL_SLEEP);
+
 	return 0;
 }
 
+3-1
drivers/pci/pci-driver.c
···
 	 * system from the sleep state, we'll have to prevent it from signaling
 	 * wake-up.
 	 */
-	pm_runtime_resume(dev);
+	pm_runtime_get_sync(dev);
 
 	if (drv && drv->pm && drv->pm->prepare)
 		error = drv->pm->prepare(dev);
···
 
 	if (drv && drv->pm && drv->pm->complete)
 		drv->pm->complete(dev);
+
+	pm_runtime_put_sync(dev);
}
 
 #else /* !CONFIG_PM_SLEEP */
···
 	void __iomem		*regbase;
 	struct resource		*res;
 	int			irq_alarm;
-	int			irq_hz;
 	struct rtc_device	*rtc;
 	spinlock_t		lock;		/* Protects this structure */
 };
···
 
 	if (isr & 1)
 		events |= RTC_AF | RTC_IRQF;
-
-	/* Only second/minute interrupts are supported */
-	if (isr & 2)
-		events |= RTC_UF | RTC_IRQF;
 
 	rtc_update_irq(vt8500_rtc->rtc, 1, events);
 
···
 	return 0;
 }
 
-static int vt8500_update_irq_enable(struct device *dev, unsigned int enabled)
-{
-	struct vt8500_rtc *vt8500_rtc = dev_get_drvdata(dev);
-	unsigned long tmp = readl(vt8500_rtc->regbase + VT8500_RTC_CR);
-
-	if (enabled)
-		tmp |= VT8500_RTC_CR_SM_SEC | VT8500_RTC_CR_SM_ENABLE;
-	else
-		tmp &= ~VT8500_RTC_CR_SM_ENABLE;
-
-	writel(tmp, vt8500_rtc->regbase + VT8500_RTC_CR);
-	return 0;
-}
-
 static const struct rtc_class_ops vt8500_rtc_ops = {
 	.read_time = vt8500_rtc_read_time,
 	.set_time = vt8500_rtc_set_time,
 	.read_alarm = vt8500_rtc_read_alarm,
 	.set_alarm = vt8500_rtc_set_alarm,
 	.alarm_irq_enable = vt8500_alarm_irq_enable,
-	.update_irq_enable = vt8500_update_irq_enable,
 };
 
 static int __devinit vt8500_rtc_probe(struct platform_device *pdev)
···
 		goto err_free;
 	}
 
-	vt8500_rtc->irq_hz = platform_get_irq(pdev, 1);
-	if (vt8500_rtc->irq_hz < 0) {
-		dev_err(&pdev->dev, "No 1Hz IRQ resource defined\n");
-		ret = -ENXIO;
-		goto err_free;
-	}
-
 	vt8500_rtc->res = request_mem_region(vt8500_rtc->res->start,
 					     resource_size(vt8500_rtc->res),
 					     "vt8500-rtc");
···
 		goto err_release;
 	}
 
-	/* Enable the second/minute interrupt generation and enable RTC */
-	writel(VT8500_RTC_CR_ENABLE | VT8500_RTC_CR_24H
-		| VT8500_RTC_CR_SM_ENABLE | VT8500_RTC_CR_SM_SEC,
+	/* Enable RTC and set it to 24-hour mode */
+	writel(VT8500_RTC_CR_ENABLE | VT8500_RTC_CR_24H,
 	       vt8500_rtc->regbase + VT8500_RTC_CR);
 
 	vt8500_rtc->rtc = rtc_device_register("vt8500-rtc", &pdev->dev,
···
 		goto err_unmap;
 	}
 
-	ret = request_irq(vt8500_rtc->irq_hz, vt8500_rtc_irq, 0,
-			  "rtc 1Hz", vt8500_rtc);
-	if (ret < 0) {
-		dev_err(&pdev->dev, "can't get irq %i, err %d\n",
-			vt8500_rtc->irq_hz, ret);
-		goto err_unreg;
-	}
-
 	ret = request_irq(vt8500_rtc->irq_alarm, vt8500_rtc_irq, 0,
 			  "rtc alarm", vt8500_rtc);
 	if (ret < 0) {
 		dev_err(&pdev->dev, "can't get irq %i, err %d\n",
 			vt8500_rtc->irq_alarm, ret);
-		goto err_free_hz;
+		goto err_unreg;
 	}
 
 	return 0;
 
-err_free_hz:
-	free_irq(vt8500_rtc->irq_hz, vt8500_rtc);
 err_unreg:
 	rtc_device_unregister(vt8500_rtc->rtc);
 err_unmap:
···
 	struct vt8500_rtc *vt8500_rtc = platform_get_drvdata(pdev);
 
 	free_irq(vt8500_rtc->irq_alarm, vt8500_rtc);
-	free_irq(vt8500_rtc->irq_hz, vt8500_rtc);
 
 	rtc_device_unregister(vt8500_rtc->rtc);
 
+2
drivers/staging/brcm80211/Kconfig
···
 	default n
 	depends on PCI
 	depends on WLAN && MAC80211
+	depends on X86 || MIPS
 	select BRCMUTIL
 	select FW_LOADER
 	select CRC_CCITT
···
 	default n
 	depends on MMC
 	depends on WLAN && CFG80211
+	depends on X86 || MIPS
 	select BRCMUTIL
 	select FW_LOADER
 	select WIRELESS_EXT
+22
drivers/staging/comedi/Kconfig
···
 	tristate "Data acquisition support (comedi)"
 	default N
 	depends on m
+	depends on BROKEN || FRV || M32R || MN10300 || SUPERH || TILE || X86
 	---help---
 	  Enable support a wide range of data acquisition devices
 	  for Linux.
···
 
 config COMEDI_PCL812
 	tristate "Advantech PCL-812/813 and ADlink ACL-8112/8113/8113/8216"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for Advantech PCL-812/PG, PCL-813/B, ADLink
···
 
 config COMEDI_PCL816
 	tristate "Advantech PCL-814 and PCL-816 ISA card support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for Advantech PCL-814 and PCL-816 ISA cards
···
 
 config COMEDI_PCL818
 	tristate "Advantech PCL-718 and PCL-818 ISA card support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for Advantech PCL-818 ISA cards
···
 
 config COMEDI_DAS1800
 	tristate "DAS1800 and compatible ISA card support"
+	depends on VIRT_TO_BUS
 	select COMEDI_FC
 	default N
 	---help---
···
 config COMEDI_DT282X
 	tristate "Data Translation DT2821 series and DT-EZ ISA card support"
 	select COMEDI_FC
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for Data Translation DT2821 series including DT-EZ
···
 config COMEDI_NI_AT_A2150
 	tristate "NI AT-A2150 ISA card support"
 	depends on COMEDI_NI_COMMON
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for National Instruments AT-A2150 cards
···
 
 config COMEDI_ADDI_APCI_035
 	tristate "ADDI-DATA APCI_035 support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for ADDI-DATA APCI_035 cards
···
 
 config COMEDI_ADDI_APCI_1032
 	tristate "ADDI-DATA APCI_1032 support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for ADDI-DATA APCI_1032 cards
···
 
 config COMEDI_ADDI_APCI_1500
 	tristate "ADDI-DATA APCI_1500 support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for ADDI-DATA APCI_1500 cards
···
 
 config COMEDI_ADDI_APCI_1516
 	tristate "ADDI-DATA APCI_1516 support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for ADDI-DATA APCI_1516 cards
···
 
 config COMEDI_ADDI_APCI_1564
 	tristate "ADDI-DATA APCI_1564 support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for ADDI-DATA APCI_1564 cards
···
 
 config COMEDI_ADDI_APCI_16XX
 	tristate "ADDI-DATA APCI_16xx support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for ADDI-DATA APCI_16xx cards
···
 
 config COMEDI_ADDI_APCI_2016
 	tristate "ADDI-DATA APCI_2016 support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for ADDI-DATA APCI_2016 cards
···
 
 config COMEDI_ADDI_APCI_2032
 	tristate "ADDI-DATA APCI_2032 support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for ADDI-DATA APCI_2032 cards
···
 
 config COMEDI_ADDI_APCI_2200
 	tristate "ADDI-DATA APCI_2200 support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for ADDI-DATA APCI_2200 cards
···
 
 config COMEDI_ADDI_APCI_3001
 	tristate "ADDI-DATA APCI_3001 support"
+	depends on VIRT_TO_BUS
 	select COMEDI_FC
 	default N
 	---help---
···
 
 config COMEDI_ADDI_APCI_3120
 	tristate "ADDI-DATA APCI_3520 support"
+	depends on VIRT_TO_BUS
 	select COMEDI_FC
 	default N
 	---help---
···
 
 config COMEDI_ADDI_APCI_3501
 	tristate "ADDI-DATA APCI_3501 support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for ADDI-DATA APCI_3501 cards
···
 
 config COMEDI_ADDI_APCI_3XXX
 	tristate "ADDI-DATA APCI_3xxx support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for ADDI-DATA APCI_3xxx cards
···
 config COMEDI_ADL_PCI9118
 	tristate "ADLink PCI-9118DG, PCI-9118HG, PCI-9118HR support"
 	select COMEDI_FC
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for ADlink PCI-9118DG, PCI-9118HG, PCI-9118HR cards
···
 	depends on COMEDI_MITE
 	select COMEDI_8255
 	select COMEDI_FC
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for National Instruments Lab-PC and compatibles
+1-1
drivers/staging/iio/Kconfig
···
 
 menuconfig IIO
 	tristate "Industrial I/O support"
-	depends on !S390
+	depends on GENERIC_HARDIRQS
 	help
 	  The industrial I/O subsystem provides a unified framework for
 	  drivers for many different types of embedded sensors using a
···
 
 int adis16260_set_irq(struct iio_dev *indio_dev, bool enable);
 
-#ifdef CONFIG_IIO_RING_BUFFER
 /* At the moment triggers are only used for ring buffer
  * filling. This may change!
  */
···
 #define ADIS16260_SCAN_TEMP	3
 #define ADIS16260_SCAN_ANGL	4
 
+#ifdef CONFIG_IIO_RING_BUFFER
 void adis16260_remove_trigger(struct iio_dev *indio_dev);
 int adis16260_probe_trigger(struct iio_dev *indio_dev);
 
+1-1
drivers/staging/iio/imu/adis16400.h
···
 
 int adis16400_set_irq(struct iio_dev *indio_dev, bool enable);
 
-#ifdef CONFIG_IIO_RING_BUFFER
 /* At the moment triggers are only used for ring buffer
  * filling. This may change!
  */
···
 #define ADIS16300_SCAN_INCLI_X	12
 #define ADIS16300_SCAN_INCLI_Y	13
 
+#ifdef CONFIG_IIO_RING_BUFFER
 void adis16400_remove_trigger(struct iio_dev *indio_dev);
 int adis16400_probe_trigger(struct iio_dev *indio_dev);
 
···
 void transport_deregister_session_configfs(struct se_session *se_sess)
 {
 	struct se_node_acl *se_nacl;
-
+	unsigned long flags;
 	/*
 	 * Used by struct se_node_acl's under ConfigFS to locate active struct se_session
 	 */
 	se_nacl = se_sess->se_node_acl;
 	if ((se_nacl)) {
-		spin_lock_irq(&se_nacl->nacl_sess_lock);
+		spin_lock_irqsave(&se_nacl->nacl_sess_lock, flags);
 		list_del(&se_sess->sess_acl_list);
 		/*
 		 * If the session list is empty, then clear the pointer.
···
 				se_nacl->acl_sess_list.prev,
 				struct se_session, sess_acl_list);
 		}
-		spin_unlock_irq(&se_nacl->nacl_sess_lock);
+		spin_unlock_irqrestore(&se_nacl->nacl_sess_lock, flags);
 	}
 }
 EXPORT_SYMBOL(transport_deregister_session_configfs);
+1-1
drivers/target/tcm_fc/tcm_fc.h
···
  */
 struct ft_cmd {
 	enum ft_cmd_state state;
-	u16 lun;			/* LUN from request */
+	u32 lun;			/* LUN from request */
 	struct ft_sess *sess;		/* session held for cmd */
 	struct fc_seq *seq;		/* sequence in exchange mgr */
 	struct se_cmd se_cmd;		/* Local TCM I/O descriptor */
+33-31
drivers/target/tcm_fc/tfc_cmd.c
···
 		16, 4, cmd->cdb, MAX_COMMAND_SIZE, 0);
 }
 
-/*
- * Get LUN from CDB.
- */
-static int ft_get_lun_for_cmd(struct ft_cmd *cmd, u8 *lunp)
-{
-	u64 lun;
-
-	lun = lunp[1];
-	switch (lunp[0] >> 6) {
-	case 0:
-		break;
-	case 1:
-		lun |= (lunp[0] & 0x3f) << 8;
-		break;
-	default:
-		return -1;
-	}
-	if (lun >= TRANSPORT_MAX_LUNS_PER_TPG)
-		return -1;
-	cmd->lun = lun;
-	return transport_get_lun_for_cmd(&cmd->se_cmd, NULL, lun);
-}
-
 static void ft_queue_cmd(struct ft_sess *sess, struct ft_cmd *cmd)
 {
 	struct se_queue_obj *qobj;
···
 {
 	struct se_tmr_req *tmr;
 	struct fcp_cmnd *fcp;
+	struct ft_sess *sess;
 	u8 tm_func;
 
 	fcp = fc_frame_payload_get(cmd->req_frame, sizeof(*fcp));
···
 	switch (fcp->fc_tm_flags) {
 	case FCP_TMF_LUN_RESET:
 		tm_func = TMR_LUN_RESET;
-		if (ft_get_lun_for_cmd(cmd, fcp->fc_lun) < 0) {
-			ft_dump_cmd(cmd, __func__);
-			transport_send_check_condition_and_sense(&cmd->se_cmd,
-				cmd->se_cmd.scsi_sense_reason, 0);
-			ft_sess_put(cmd->sess);
-			return;
-		}
 		break;
 	case FCP_TMF_TGT_RESET:
 		tm_func = TMR_TARGET_WARM_RESET;
···
 		return;
 	}
 	cmd->se_cmd.se_tmr_req = tmr;
+
+	switch (fcp->fc_tm_flags) {
+	case FCP_TMF_LUN_RESET:
+		cmd->lun = scsilun_to_int((struct scsi_lun *)fcp->fc_lun);
+		if (transport_get_lun_for_tmr(&cmd->se_cmd, cmd->lun) < 0) {
+			/*
+			 * Make sure to clean up newly allocated TMR request
+			 * since "unable to handle TMR request because failed
+			 * to get to LUN"
+			 */
+			FT_TM_DBG("Failed to get LUN for TMR func %d, "
+				  "se_cmd %p, unpacked_lun %d\n",
+				  tm_func, &cmd->se_cmd, cmd->lun);
+			ft_dump_cmd(cmd, __func__);
+			sess = cmd->sess;
+			transport_send_check_condition_and_sense(&cmd->se_cmd,
+				cmd->se_cmd.scsi_sense_reason, 0);
+			transport_generic_free_cmd(&cmd->se_cmd, 0, 1, 0);
+			ft_sess_put(sess);
+			return;
+		}
+		break;
+	case FCP_TMF_TGT_RESET:
+	case FCP_TMF_CLR_TASK_SET:
+	case FCP_TMF_ABT_TASK_SET:
+	case FCP_TMF_CLR_ACA:
+		break;
+	default:
+		return;
+	}
 	transport_generic_handle_tmr(&cmd->se_cmd);
 }
···
 
 	fc_seq_exch(cmd->seq)->lp->tt.seq_set_resp(cmd->seq, ft_recv_seq, cmd);
 
-	ret = ft_get_lun_for_cmd(cmd, fcp->fc_lun);
+	cmd->lun = scsilun_to_int((struct scsi_lun *)fcp->fc_lun);
+	ret = transport_get_lun_for_cmd(&cmd->se_cmd, NULL, cmd->lun);
 	if (ret < 0) {
 		ft_dump_cmd(cmd, __func__);
 		transport_send_check_condition_and_sense(&cmd->se_cmd,
+1-1
drivers/target/tcm_fc/tfc_io.c
···
 			/* XXX For now, initiator will retry */
 			if (printk_ratelimit())
 				printk(KERN_ERR "%s: Failed to send frame %p, "
-						"xid <0x%x>, remaining <0x%x>, "
+						"xid <0x%x>, remaining %zu, "
 						"lso_max <0x%x>\n",
 						__func__, fp, ep->xid,
 						remaining, lport->lso_max);
···
 		*dp++ = last << 7 | first << 6 | 1;	/* EA */
 		len--;
 	}
-	memcpy(dp, skb_pull(dlci->skb, len), len);
+	memcpy(dp, dlci->skb->data, len);
+	skb_pull(dlci->skb, len);
 	__gsm_data_queue(dlci, msg);
 	if (last)
 		dlci->skb = NULL;
···
  */
 
 static void gsm_process_modem(struct tty_struct *tty, struct gsm_dlci *dlci,
-							u32 modem)
+					u32 modem, int clen)
 {
 	int  mlines = 0;
-	u8 brk = modem >> 6;
+	u8 brk = 0;
+
+	/* The modem status command can either contain one octet (v.24 signals)
+	   or two octets (v.24 signals + break signals). The length field will
+	   either be 2 or 3 respectively. This is specified in section
+	   5.4.6.3.7 of the 27.010 mux spec. */
+
+	if (clen == 2)
+		modem = modem & 0x7f;
+	else {
+		brk = modem & 0x7f;
+		modem = (modem >> 7) & 0x7f;
+	};
 
 	/* Flow control/ready to communicate */
 	if (modem & MDM_FC) {
···
 		return;
 	}
 	tty = tty_port_tty_get(&dlci->port);
-	gsm_process_modem(tty, dlci, modem);
+	gsm_process_modem(tty, dlci, modem, clen);
 	if (tty) {
 		tty_wakeup(tty);
 		tty_kref_put(tty);
···
  *	open we shovel the bits down it, if not we drop them.
  */
 
-static void gsm_dlci_data(struct gsm_dlci *dlci, u8 *data, int len)
+static void gsm_dlci_data(struct gsm_dlci *dlci, u8 *data, int clen)
 {
 	/* krefs .. */
 	struct tty_port *port = &dlci->port;
 	struct tty_struct *tty = tty_port_tty_get(port);
 	unsigned int modem = 0;
+	int len = clen;
 
 	if (debug & 16)
 		pr_debug("%d bytes for tty %p\n", len, tty);
···
 		if (len == 0)
 			return;
 	}
-	gsm_process_modem(tty, dlci, modem);
+	gsm_process_modem(tty, dlci, modem, clen);
 	/* Line state will go via DLCI 0 controls only */
 	case 1:
 	default:
+1
drivers/tty/n_tty.c
···
 			/* FIXME: does n_tty_set_room need locking ? */
 			n_tty_set_room(tty);
 			timeout = schedule_timeout(timeout);
+			BUG_ON(!tty->read_buf);
 			continue;
 		}
 		__set_current_state(TASK_RUNNING);
···
 #include <linux/dmaengine.h>
 #include <linux/dma-mapping.h>
 #include <linux/scatterlist.h>
+#include <linux/delay.h>
 
 #include <asm/io.h>
 #include <asm/sizes.h>
···
 #define UART_DR_ERROR		(UART011_DR_OE|UART011_DR_BE|UART011_DR_PE|UART011_DR_FE)
 #define UART_DUMMY_DR_RX	(1 << 16)
 
+
+#define UART_WA_SAVE_NR 14
+
+static void pl011_lockup_wa(unsigned long data);
+static const u32 uart_wa_reg[UART_WA_SAVE_NR] = {
+	ST_UART011_DMAWM,
+	ST_UART011_TIMEOUT,
+	ST_UART011_LCRH_RX,
+	UART011_IBRD,
+	UART011_FBRD,
+	ST_UART011_LCRH_TX,
+	UART011_IFLS,
+	ST_UART011_XFCR,
+	ST_UART011_XON1,
+	ST_UART011_XON2,
+	ST_UART011_XOFF1,
+	ST_UART011_XOFF2,
+	UART011_CR,
+	UART011_IMSC
+};
+
+static u32 uart_wa_regdata[UART_WA_SAVE_NR];
+static DECLARE_TASKLET(pl011_lockup_tlet, pl011_lockup_wa, 0);
+
 /* There is by now at least one vendor with differing details, so handle it */
 struct vendor_data {
 	unsigned int		ifls;
···
 	unsigned int		lcrh_tx;
 	unsigned int		lcrh_rx;
 	bool			oversampling;
+	bool			interrupt_may_hang;   /* vendor-specific */
 	bool			dma_threshold;
 };
 
···
 	.lcrh_tx		= ST_UART011_LCRH_TX,
 	.lcrh_rx		= ST_UART011_LCRH_RX,
 	.oversampling		= true,
+	.interrupt_may_hang	= true,
 	.dma_threshold		= true,
 };
+
+static struct uart_amba_port *amba_ports[UART_NR];
 
 /* Deals with DMA transactions */
···
 	unsigned int		lcrh_rx;	/* vendor-specific */
 	bool			autorts;
 	char			type[12];
+	bool			interrupt_may_hang; /* vendor-specific */
 #ifdef CONFIG_DMA_ENGINE
 	/* DMA stuff */
 	bool			using_tx_dma;
···
 #endif
 
 
+/*
+ * pl011_lockup_wa
+ * This workaround aims to break the deadlock situation
+ * when after long transfer over uart in hardware flow
+ * control, uart interrupt registers cannot be cleared.
+ * Hence uart transfer gets blocked.
+ *
+ * It is seen that during such deadlock condition ICR
+ * don't get cleared even on multiple write. This leads
+ * pass_counter to decrease and finally reach zero. This
+ * can be taken as trigger point to run this UART_BT_WA.
+ *
+ */
+static void pl011_lockup_wa(unsigned long data)
+{
+	struct uart_amba_port *uap = amba_ports[0];
+	void __iomem *base = uap->port.membase;
+	struct circ_buf *xmit = &uap->port.state->xmit;
+	struct tty_struct *tty = uap->port.state->port.tty;
+	int buf_empty_retries = 200;
+	int loop;
+
+	/* Stop HCI layer from submitting data for tx */
+	tty->hw_stopped = 1;
+	while (!uart_circ_empty(xmit)) {
+		if (buf_empty_retries-- == 0)
+			break;
+		udelay(100);
+	}
+
+	/* Backup registers */
+	for (loop = 0; loop < UART_WA_SAVE_NR; loop++)
+		uart_wa_regdata[loop] = readl(base + uart_wa_reg[loop]);
+
+	/* Disable UART so that FIFO data is flushed out */
+	writew(0x00, uap->port.membase + UART011_CR);
+
+	/* Soft reset UART module */
+	if (uap->port.dev->platform_data) {
+		struct amba_pl011_data *plat;
+
+		plat = uap->port.dev->platform_data;
+		if (plat->reset)
+			plat->reset();
+	}
+
+	/* Restore registers */
+	for (loop = 0; loop < UART_WA_SAVE_NR; loop++)
+		writew(uart_wa_regdata[loop] ,
+			uap->port.membase + uart_wa_reg[loop]);
+
+	/* Initialise the old status of the modem signals */
+	uap->old_status = readw(uap->port.membase + UART01x_FR) &
+		UART01x_FR_MODEM_ANY;
+
+	if (readl(base + UART011_MIS) & 0x2)
+		printk(KERN_EMERG "UART_BT_WA: ***FAILED***\n");
+
+	/* Start Tx/Rx */
+	tty->hw_stopped = 0;
+}
+
 static void pl011_stop_tx(struct uart_port *port)
 {
 	struct uart_amba_port *uap = (struct uart_amba_port *)port;
···
 		if (status & UART011_TXIS)
 			pl011_tx_chars(uap);
 
-		if (pass_counter-- == 0)
+		if (pass_counter-- == 0) {
+			if (uap->interrupt_may_hang)
+				tasklet_schedule(&pl011_lockup_tlet);
 			break;
+		}
 
 		status = readw(uap->port.membase + UART011_MIS);
 	} while (status != 0);
···
 	writew(uap->im, uap->port.membase + UART011_IMSC);
 	spin_unlock_irq(&uap->port.lock);
 
+	if (uap->port.dev->platform_data) {
+		struct amba_pl011_data *plat;
+
+		plat = uap->port.dev->platform_data;
+		if (plat->init)
+			plat->init();
+	}
+
 	return 0;
 
  clk_dis:
···
 	 * Shut down the clock producer
 	 */
 	clk_disable(uap->clk);
+
+	if (uap->port.dev->platform_data) {
+		struct amba_pl011_data *plat;
+
+		plat = uap->port.dev->platform_data;
+		if (plat->exit)
+			plat->exit();
+	}
+
 }
 
 static void
···
 	if (!uap)
 		return -ENODEV;
 
+	if (uap->port.dev->platform_data) {
+		struct amba_pl011_data *plat;
+
+		plat = uap->port.dev->platform_data;
+		if (plat->init)
+			plat->init();
+	}
+
 	uap->port.uartclk = clk_get_rate(uap->clk);
 
 	if (options)
···
 	uap->lcrh_rx = vendor->lcrh_rx;
 	uap->lcrh_tx = vendor->lcrh_tx;
 	uap->fifosize = vendor->fifosize;
+	uap->interrupt_may_hang = vendor->interrupt_may_hang;
 	uap->port.dev = &dev->dev;
 	uap->port.mapbase = dev->res.start;
 	uap->port.membase = base;
+14-4
drivers/tty/serial/bcm63xx_uart.c
···
 		/* get overrun/fifo empty information from ier
 		 * register */
 		iestat = bcm_uart_readl(port, UART_IR_REG);
+
+		if (unlikely(iestat & UART_IR_STAT(UART_IR_RXOVER))) {
+			unsigned int val;
+
+			/* fifo reset is required to clear
+			 * interrupt */
+			val = bcm_uart_readl(port, UART_CTL_REG);
+			val |= UART_CTL_RSTRXFIFO_MASK;
+			bcm_uart_writel(port, val, UART_CTL_REG);
+
+			port->icount.overrun++;
+			tty_insert_flip_char(tty, 0, TTY_OVERRUN);
+		}
+
 		if (!(iestat & UART_IR_STAT(UART_IR_RXNOTEMPTY)))
 			break;
···
 		if (uart_handle_sysrq_char(port, c))
 			continue;
 
-		if (unlikely(iestat & UART_IR_STAT(UART_IR_RXOVER))) {
-			port->icount.overrun++;
-			tty_insert_flip_char(tty, 0, TTY_OVERRUN);
-		}
 
 		if ((cstat & port->ignore_status_mask) == 0)
 			tty_insert_flip_char(tty, c, flag);
+1-1
drivers/tty/serial/jsm/jsm_driver.c
···
 	brd->bd_uart_offset = 0x200;
 	brd->bd_dividend = 921600;
 
-	brd->re_map_membase = ioremap(brd->membase, 0x1000);
+	brd->re_map_membase = ioremap(brd->membase, pci_resource_len(pdev, 0));
 	if (!brd->re_map_membase) {
 		dev_err(&pdev->dev,
 			"card has no PCI Memory resources, "
+3-2
drivers/tty/serial/mrst_max3110.c
···
 	int ret = 0;
 	struct circ_buf *xmit = &max->con_xmit;
 
-	init_waitqueue_head(wq);
 	pr_info(PR_FMT "start main thread\n");
 
 	do {
···
 	res = RC_TAG;
 	ret = max3110_write_then_read(max, (u8 *)&res, (u8 *)&res, 2, 0);
 	if (ret < 0 || res == 0 || res == 0xffff) {
-		printk(KERN_ERR "MAX3111 deemed not present (conf reg %04x)",
+		dev_dbg(&spi->dev, "MAX3111 deemed not present (conf reg %04x)",
 			res);
 		ret = -ENODEV;
 		goto err_get_page;
···
 	max->con_xmit.buf = buffer;
 	max->con_xmit.head = 0;
 	max->con_xmit.tail = 0;
+
+	init_waitqueue_head(&max->wq);
 
 	max->main_thread = kthread_run(max3110_main_thread,
 					max, "max3110_main");
+2-2
drivers/tty/serial/s5pv210.c
···
 	struct s3c2410_uartcfg *cfg = port->dev->platform_data;
 	unsigned long ucon = rd_regl(port, S3C2410_UCON);
 
-	if ((cfg->clocks_size) == 1)
+	if (cfg->flags & NO_NEED_CHECK_CLKSRC)
 		return 0;
 
 	if (strcmp(clk->name, "pclk") == 0)
···
 
 	clk->divisor = 1;
 
-	if ((cfg->clocks_size) == 1)
+	if (cfg->flags & NO_NEED_CHECK_CLKSRC)
 		return 0;
 
 	switch (ucon & S5PV210_UCON_CLKMASK) {
+3-1
drivers/tty/tty_ldisc.c
···
 static int tty_ldisc_wait_idle(struct tty_struct *tty)
 {
 	int ret;
-	ret = wait_event_interruptible_timeout(tty_ldisc_idle,
+	ret = wait_event_timeout(tty_ldisc_idle,
 			atomic_read(&tty->ldisc->users) == 1, 5 * HZ);
 	if (ret < 0)
 		return ret;
···
 
 	if (IS_ERR(ld))
 		return -1;
+
+	WARN_ON_ONCE(tty_ldisc_wait_idle(tty));
 
 	tty_ldisc_close(tty, tty->ldisc);
 	tty_ldisc_put(tty->ldisc);
+13-4
drivers/usb/core/driver.c
···
 		 * Just re-enable it without affecting the endpoint toggles.
 		 */
 		usb_enable_interface(udev, intf, false);
-	} else if (!error && !intf->dev.power.in_suspend) {
+	} else if (!error && !intf->dev.power.is_prepared) {
 		r = usb_set_interface(udev, intf->altsetting[0].
 				desc.bInterfaceNumber, 0);
 		if (r < 0)
···
 	}
 
 	/* Try to rebind the interface */
-	if (!intf->dev.power.in_suspend) {
+	if (!intf->dev.power.is_prepared) {
 		intf->needs_binding = 0;
 		rc = device_attach(&intf->dev);
 		if (rc < 0)
···
 	if (intf->condition == USB_INTERFACE_UNBOUND) {
 
 		/* Carry out a deferred switch to altsetting 0 */
-		if (intf->needs_altsetting0 && !intf->dev.power.in_suspend) {
+		if (intf->needs_altsetting0 && !intf->dev.power.is_prepared) {
 			usb_set_interface(udev, intf->altsetting[0].
 					desc.bInterfaceNumber, 0);
 			intf->needs_altsetting0 = 0;
···
 		for (i = n - 1; i >= 0; --i) {
 			intf = udev->actconfig->interface[i];
 			status = usb_suspend_interface(udev, intf, msg);
+
+			/* Ignore errors during system sleep transitions */
+			if (!(msg.event & PM_EVENT_AUTO))
+				status = 0;
 			if (status != 0)
 				break;
 		}
 	}
-	if (status == 0)
+	if (status == 0) {
 		status = usb_suspend_device(udev, msg);
+
+		/* Again, ignore errors during system sleep transitions */
+		if (!(msg.event & PM_EVENT_AUTO))
+			status = 0;
+	}
 
 	/* If the suspend failed, resume interfaces that did get suspended */
 	if (status != 0) {
+11-5
drivers/usb/core/hub.c
···
 {
 	struct usb_device	*udev = *pdev;
 	int			i;
+	struct usb_hcd		*hcd = bus_to_hcd(udev->bus);
 
 	if (!udev) {
 		pr_debug ("%s nodev\n", __func__);
···
 	 * so that the hardware is now fully quiesced.
 	 */
 	dev_dbg (&udev->dev, "unregistering device\n");
+	mutex_lock(hcd->bandwidth_mutex);
 	usb_disable_device(udev, 0);
+	mutex_unlock(hcd->bandwidth_mutex);
 	usb_hcd_synchronize_unlinks(udev);
 
 	usb_remove_ep_devs(&udev->ep0);
···
 				USB_DEVICE_REMOTE_WAKEUP, 0,
 				NULL, 0,
 				USB_CTRL_SET_TIMEOUT);
+
+		/* System sleep transitions should never fail */
+		if (!(msg.event & PM_EVENT_AUTO))
+			status = 0;
 	} else {
 		/* device has up to 10 msec to fully suspend */
 		dev_dbg(&udev->dev, "usb %ssuspend\n",
···
 	struct usb_device	*hdev = hub->hdev;
 	unsigned		port1;
 
-	/* fail if children aren't already suspended */
+	/* Warn if children aren't already suspended */
 	for (port1 = 1; port1 <= hdev->maxchild; port1++) {
 		struct usb_device	*udev;
 
 		udev = hdev->children [port1-1];
 		if (udev && udev->can_submit) {
-			if (!(msg.event & PM_EVENT_AUTO))
-				dev_dbg(&intf->dev, "port %d nyet suspended\n",
-						port1);
-			return -EBUSY;
+			dev_warn(&intf->dev, "port %d nyet suspended\n", port1);
+			if (msg.event & PM_EVENT_AUTO)
+				return -EBUSY;
 		}
 	}
 
+14-1
drivers/usb/core/message.c
···
  * Deallocates hcd/hardware state for the endpoints (nuking all or most
  * pending urbs) and usbcore state for the interfaces, so that usbcore
  * must usb_set_configuration() before any interfaces could be used.
+ *
+ * Must be called with hcd->bandwidth_mutex held.
  */
 void usb_disable_device(struct usb_device *dev, int skip_ep0)
 {
 	int i;
+	struct usb_hcd *hcd = bus_to_hcd(dev->bus);
 
 	/* getting rid of interfaces will disconnect
 	 * any drivers bound to them (a key side effect)
···
 
 	dev_dbg(&dev->dev, "%s nuking %s URBs\n", __func__,
 		skip_ep0 ? "non-ep0" : "all");
+	if (hcd->driver->check_bandwidth) {
+		/* First pass: Cancel URBs, leave endpoint pointers intact. */
+		for (i = skip_ep0; i < 16; ++i) {
+			usb_disable_endpoint(dev, i, false);
+			usb_disable_endpoint(dev, i + USB_DIR_IN, false);
+		}
+		/* Remove endpoints from the host controller internal state */
+		usb_hcd_alloc_bandwidth(dev, NULL, NULL, NULL);
+		/* Second pass: remove endpoint pointers */
+	}
 	for (i = skip_ep0; i < 16; ++i) {
 		usb_disable_endpoint(dev, i, true);
 		usb_disable_endpoint(dev, i + USB_DIR_IN, true);
···
 	/* if it's already configured, clear out old state first.
 	 * getting rid of old interfaces means unbinding their drivers.
 	 */
+	mutex_lock(hcd->bandwidth_mutex);
 	if (dev->state != USB_STATE_ADDRESS)
 		usb_disable_device(dev, 1);	/* Skip ep0 */
 
···
 	 * host controller will not allow submissions to dropped endpoints.  If
 	 * this call fails, the device state is unchanged.
 	 */
-	mutex_lock(hcd->bandwidth_mutex);
 	ret = usb_hcd_alloc_bandwidth(dev, cp, NULL, NULL);
 	if (ret < 0) {
 		mutex_unlock(hcd->bandwidth_mutex);
···
 /*
+ * Enhanced Host Controller Interface (EHCI) driver for USB.
+ *
+ * Maintainer: Alan Stern <stern@rowland.harvard.edu>
+ *
  * Copyright (c) 2000-2004 by David Brownell
  *
  * This program is free software; you can redistribute it and/or modify it
+1-1
drivers/usb/host/isp1760-hcd.c
···
 
 	/* We need to forcefully reclaim the slot since some transfers never
 	   return, e.g. interrupt transfers and NAKed bulk transfers. */
-	if (usb_pipebulk(urb->pipe)) {
+	if (usb_pipecontrol(urb->pipe) || usb_pipebulk(urb->pipe)) {
 		skip_map = reg_read32(hcd->regs, HC_ATL_PTD_SKIPMAP_REG);
 		skip_map |= (1 << qh->slot);
 		reg_write32(hcd->regs, HC_ATL_PTD_SKIPMAP_REG, skip_map);
+3-1
drivers/usb/host/ohci-hcd.c
···
 /*
- * OHCI HCD (Host Controller Driver) for USB.
+ * Open Host Controller Interface (OHCI) driver for USB.
+ *
+ * Maintainer: Alan Stern <stern@rowland.harvard.edu>
  *
  * (C) Copyright 1999 Roman Weissgaerber <weissg@vienna.at>
  * (C) Copyright 2000-2004 David Brownell <dbrownell@users.sourceforge.net>
+1
drivers/usb/host/r8a66597-hcd.c
···
 	INIT_LIST_HEAD(&r8a66597->child_device);
 
 	hcd->rsrc_start = res->start;
+	hcd->has_tt = 1;
 
 	ret = usb_add_hcd(hcd, irq, IRQF_DISABLED | irq_trigger);
 	if (ret != 0) {
-2
drivers/usb/host/xhci-mem.c
···
 		ep_ctx->ep_info2 |= cpu_to_le32(MAX_PACKET(max_packet));
 		/* dig out max burst from ep companion desc */
 		max_packet = ep->ss_ep_comp.bMaxBurst;
-		if (!max_packet)
-			xhci_warn(xhci, "WARN no SS endpoint bMaxBurst\n");
 		ep_ctx->ep_info2 |= cpu_to_le32(MAX_BURST(max_packet));
 		break;
 	case USB_SPEED_HIGH:
+8
drivers/usb/host/xhci-pci.c
···
 #define PCI_VENDOR_ID_FRESCO_LOGIC	0x1b73
 #define PCI_DEVICE_ID_FRESCO_LOGIC_PDK	0x1000
 
+#define PCI_VENDOR_ID_ETRON		0x1b6f
+#define PCI_DEVICE_ID_ASROCK_P67	0x7023
+
 static const char hcd_name[] = "xhci_hcd";
 
 /* called after powerup, by probe or system-pm "wakeup" */
···
 		xhci->quirks |= XHCI_SPURIOUS_SUCCESS;
 		xhci->quirks |= XHCI_EP_LIMIT_QUIRK;
 		xhci->limit_active_eps = 64;
+	}
+	if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
+			pdev->device == PCI_DEVICE_ID_ASROCK_P67) {
+		xhci->quirks |= XHCI_RESET_ON_RESUME;
+		xhci_dbg(xhci, "QUIRK: Resetting on resume\n");
 	}
 
 	/* Make sure the HC is halted. */
+25-5
drivers/usb/host/xhci-ring.c
···
 		frame->status = -EOVERFLOW;
 		skip_td = true;
 		break;
+	case COMP_DEV_ERR:
 	case COMP_STALL:
 		frame->status = -EPROTO;
 		skip_td = true;
···
 		}
 	}
 
-	if ((idx == urb_priv->length - 1) && *status == -EINPROGRESS)
-		*status = 0;
-
 	return finish_td(xhci, td, event_trb, event, ep, status, false);
 }
···
 	idx = urb_priv->td_cnt;
 	frame = &td->urb->iso_frame_desc[idx];
 
-	/* The transfer is partly done */
-	*status = -EXDEV;
+	/* The transfer is partly done. */
 	frame->status = -EXDEV;
 
 	/* calc actual length */
···
 			TRB_TO_SLOT_ID(le32_to_cpu(event->flags)),
 			ep_index);
 		goto cleanup;
+	case COMP_DEV_ERR:
+		xhci_warn(xhci, "WARN: detect an incompatible device");
+		status = -EPROTO;
+		break;
 	case COMP_MISSED_INT:
 		/*
 		 * When encounter missed service error, one or more isoc tds
···
 		/* Is this a TRB in the currently executing TD? */
 		event_seg = trb_in_td(ep_ring->deq_seg, ep_ring->dequeue,
 				td->last_trb, event_dma);
+
+		/*
+		 * Skip the Force Stopped Event. The event_trb(event_dma) of FSE
+		 * is not in the current TD pointed by ep_ring->dequeue because
+		 * that the hardware dequeue pointer still at the previous TRB
+		 * of the current TD. The previous TRB maybe a Link TD or the
+		 * last TRB of the previous TD. The command completion handle
+		 * will take care the rest.
+		 */
+		if (!event_seg && trb_comp_code == COMP_STOP_INVAL) {
+			ret = 0;
+			goto cleanup;
+		}
+
 		if (!event_seg) {
 			if (!ep->skip ||
 					!usb_endpoint_xfer_isoc(&td->urb->ep->desc)) {
···
 					urb->transfer_buffer_length,
 					status);
 			spin_unlock(&xhci->lock);
+			/* EHCI, UHCI, and OHCI always unconditionally set the
+			 * urb->status of an isochronous endpoint to 0.
+			 */
+			if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS)
+				status = 0;
 			usb_hcd_giveback_urb(bus_to_hcd(urb->dev->bus), urb, status);
 			spin_lock(&xhci->lock);
 		}
+35-4
drivers/usb/host/xhci.c
···
 	msleep(100);
 
 	spin_lock_irq(&xhci->lock);
+	if (xhci->quirks & XHCI_RESET_ON_RESUME)
+		hibernated = true;
 
 	if (!hibernated) {
 		/* step 1: restore register */
···
 	u32 added_ctxs;
 	unsigned int last_ctx;
 	u32 new_add_flags, new_drop_flags, new_slot_info;
+	struct xhci_virt_device *virt_dev;
 	int ret = 0;
 
 	ret = xhci_check_args(hcd, udev, ep, 1, true, __func__);
···
 		return 0;
 	}
 
-	in_ctx = xhci->devs[udev->slot_id]->in_ctx;
-	out_ctx = xhci->devs[udev->slot_id]->out_ctx;
+	virt_dev = xhci->devs[udev->slot_id];
+	in_ctx = virt_dev->in_ctx;
+	out_ctx = virt_dev->out_ctx;
 	ctrl_ctx = xhci_get_input_control_ctx(xhci, in_ctx);
 	ep_index = xhci_get_endpoint_index(&ep->desc);
 	ep_ctx = xhci_get_ep_ctx(xhci, out_ctx, ep_index);
+
+	/* If this endpoint is already in use, and the upper layers are trying
+	 * to add it again without dropping it, reject the addition.
+	 */
+	if (virt_dev->eps[ep_index].ring &&
+			!(le32_to_cpu(ctrl_ctx->drop_flags) &
+				xhci_get_endpoint_flag(&ep->desc))) {
+		xhci_warn(xhci, "Trying to add endpoint 0x%x "
+				"without dropping it.\n",
+				(unsigned int) ep->desc.bEndpointAddress);
+		return -EINVAL;
+	}
+
 	/* If the HCD has already noted the endpoint is enabled,
 	 * ignore this request.
 	 */
···
 	 * process context, not interrupt context (or so documenation
 	 * for usb_set_interface() and usb_set_configuration() claim).
 	 */
-	if (xhci_endpoint_init(xhci, xhci->devs[udev->slot_id],
-				udev, ep, GFP_NOIO) < 0) {
+	if (xhci_endpoint_init(xhci, virt_dev, udev, ep, GFP_NOIO) < 0) {
 		dev_dbg(&udev->dev, "%s - could not initialize ep %#x\n",
 				__func__, ep->desc.bEndpointAddress);
 		return -ENOMEM;
···
 				"and endpoint is not disabled.\n");
 		ret = -EINVAL;
 		break;
+	case COMP_DEV_ERR:
+		dev_warn(&udev->dev, "ERROR: Incompatible device for endpoint "
+				"configure command.\n");
+		ret = -ENODEV;
+		break;
 	case COMP_SUCCESS:
 		dev_dbg(&udev->dev, "Successful Endpoint Configure command\n");
 		ret = 0;
···
 				"evaluate context command.\n");
 		xhci_dbg_ctx(xhci, virt_dev->out_ctx, 1);
 		ret = -EINVAL;
+		break;
+	case COMP_DEV_ERR:
+		dev_warn(&udev->dev, "ERROR: Incompatible device for evaluate "
+				"context command.\n");
+		ret = -ENODEV;
 		break;
 	case COMP_MEL_ERR:
 		/* Max Exit Latency too large error */
···
 	case COMP_TX_ERR:
 		dev_warn(&udev->dev, "Device not responding to set address.\n");
 		ret = -EPROTO;
+		break;
+	case COMP_DEV_ERR:
+		dev_warn(&udev->dev, "ERROR: Incompatible device for address "
+				"device command.\n");
+		ret = -ENODEV;
 		break;
 	case COMP_SUCCESS:
 		xhci_dbg(xhci, "Successful Address Device command\n");
+3
drivers/usb/host/xhci.h
···
 #define COMP_PING_ERR	20
 /* Event Ring is full */
 #define COMP_ER_FULL	21
+/* Incompatible Device Error */
+#define COMP_DEV_ERR	22
 /* Missed Service Error - HC couldn't service an isoc ep within interval */
 #define COMP_MISSED_INT	23
 /* Successfully stopped command ring */
···
 	 */
 #define XHCI_EP_LIMIT_QUIRK	(1 << 5)
 #define XHCI_BROKEN_MSI		(1 << 6)
+#define XHCI_RESET_ON_RESUME	(1 << 7)
 	unsigned int		num_active_eps;
 	unsigned int		limit_active_eps;
 	/* There are two roothubs to keep track of bus suspend info for */
+6
drivers/usb/musb/musb_gadget.c
···
 	csr = musb_readw(epio, MUSB_TXCSR);
 	if (csr & MUSB_TXCSR_FIFONOTEMPTY) {
 		csr |= MUSB_TXCSR_FLUSHFIFO | MUSB_TXCSR_P_WZC_BITS;
+		/*
+		 * Setting both TXPKTRDY and FLUSHFIFO makes controller
+		 * to interrupt current FIFO loading, but not flushing
+		 * the already loaded ones.
+		 */
+		csr &= ~MUSB_TXCSR_TXPKTRDY;
 		musb_writew(epio, MUSB_TXCSR, csr);
 		/* REVISIT may be inappropriate w/o FIFONOTEMPTY ... */
 		musb_writew(epio, MUSB_TXCSR, csr);
+1-1
drivers/usb/musb/musb_host.c
···
 		/* even if there was an error, we did the dma
 		 * for iso_frame_desc->length
 		 */
-		if (d->status != EILSEQ && d->status != -EOVERFLOW)
+		if (d->status != -EILSEQ && d->status != -EOVERFLOW)
 			d->status = 0;
 
 		if (++qh->iso_idx >= urb->number_of_packets)
@@ ... @@
 #define FTDI_8U232AM_ALT_PID 0x6006 /* FTDI's alternate PID for above */
 #define FTDI_8U2232C_PID 0x6010 /* Dual channel device */
 #define FTDI_4232H_PID 0x6011 /* Quad channel hi-speed device */
+#define FTDI_232H_PID  0x6014 /* Single channel hi-speed device */
 #define FTDI_SIO_PID	0x8372	/* Product Id SIO application of 8U100AX */
 #define FTDI_232RL_PID  0xFBFA  /* Product ID for FT232RL */
 
+1
drivers/usb/serial/ti_usb_3410_5052.c
@@ ... @@
 	}
 	if (fw_p->size > TI_FIRMWARE_BUF_SIZE) {
 		dev_err(&dev->dev, "%s - firmware too large %zu\n", __func__, fw_p->size);
+		release_firmware(fw_p);
 		return -ENOENT;
 	}
 
+1-2
drivers/watchdog/Kconfig
@@ ... @@
 
 config INTEL_SCU_WATCHDOG
 	bool "Intel SCU Watchdog for Mobile Platforms"
-	depends on WATCHDOG
-	depends on INTEL_SCU_IPC
+	depends on X86_MRST
 	---help---
 	  Hardware driver for the watchdog time built into the Intel SCU
 	  for Intel Mobile Platforms.
+1-1
drivers/watchdog/at32ap700x_wdt.c
@@ ... @@
 }
 module_exit(at32_wdt_exit);
 
-MODULE_AUTHOR("Hans-Christian Egtvedt <hcegtvedt@atmel.com>");
+MODULE_AUTHOR("Hans-Christian Egtvedt <egtvedt@samfundet.no>");
 MODULE_DESCRIPTION("Watchdog driver for Atmel AT32AP700X");
 MODULE_LICENSE("GPL");
 MODULE_ALIAS_MISCDEV(WATCHDOG_MINOR);
@@ ... @@
 	struct wm831x_watchdog_pdata *pdata;
 	int reg, ret;
 
+	if (wm831x) {
+		dev_err(&pdev->dev, "wm831x watchdog already registered\n");
+		return -EBUSY;
+	}
+
 	wm831x = dev_get_drvdata(pdev->dev.parent);
 
 	ret = wm831x_reg_read(wm831x, WM831X_WATCHDOG);
+13-1
fs/block_dev.c
@@ ... @@
 	if (!disk)
 		return ERR_PTR(-ENXIO);
 
-	whole = bdget_disk(disk, 0);
+	/*
+	 * Normally, @bdev should equal what's returned from bdget_disk()
+	 * if partno is 0; however, some drivers (floppy) use multiple
+	 * bdev's for the same physical device and @bdev may be one of the
+	 * aliases.  Keep @bdev if partno is 0.  This means claimer
+	 * tracking is broken for those devices but it has always been that
+	 * way.
+	 */
+	if (partno)
+		whole = bdget_disk(disk, 0);
+	else
+		whole = bdgrab(bdev);
+
 	module_put(disk->fops->owner);
 	put_disk(disk);
 	if (!whole)
@@ ... @@
 	return root->fs_info->delayed_root;
 }
 
+static struct btrfs_delayed_node *btrfs_get_delayed_node(struct inode *inode)
+{
+	struct btrfs_inode *btrfs_inode = BTRFS_I(inode);
+	struct btrfs_root *root = btrfs_inode->root;
+	u64 ino = btrfs_ino(inode);
+	struct btrfs_delayed_node *node;
+
+	node = ACCESS_ONCE(btrfs_inode->delayed_node);
+	if (node) {
+		atomic_inc(&node->refs);
+		return node;
+	}
+
+	spin_lock(&root->inode_lock);
+	node = radix_tree_lookup(&root->delayed_nodes_tree, ino);
+	if (node) {
+		if (btrfs_inode->delayed_node) {
+			atomic_inc(&node->refs);	/* can be accessed */
+			BUG_ON(btrfs_inode->delayed_node != node);
+			spin_unlock(&root->inode_lock);
+			return node;
+		}
+		btrfs_inode->delayed_node = node;
+		atomic_inc(&node->refs);	/* can be accessed */
+		atomic_inc(&node->refs);	/* cached in the inode */
+		spin_unlock(&root->inode_lock);
+		return node;
+	}
+	spin_unlock(&root->inode_lock);
+
+	return NULL;
+}
+
 static struct btrfs_delayed_node *btrfs_get_or_create_delayed_node(
 							struct inode *inode)
 {
@@ ... @@
 	int ret;
 
 again:
-	node = ACCESS_ONCE(btrfs_inode->delayed_node);
-	if (node) {
-		atomic_inc(&node->refs);	/* can be accessed */
+	node = btrfs_get_delayed_node(inode);
+	if (node)
 		return node;
-	}
-
-	spin_lock(&root->inode_lock);
-	node = radix_tree_lookup(&root->delayed_nodes_tree, ino);
-	if (node) {
-		if (btrfs_inode->delayed_node) {
-			spin_unlock(&root->inode_lock);
-			goto again;
-		}
-		btrfs_inode->delayed_node = node;
-		atomic_inc(&node->refs);	/* can be accessed */
-		atomic_inc(&node->refs);	/* cached in the inode */
-		spin_unlock(&root->inode_lock);
-		return node;
-	}
-	spin_unlock(&root->inode_lock);
 
 	node = kmem_cache_alloc(delayed_node_cache, GFP_NOFS);
 	if (!node)
@@ ... @@
 	next = rb_entry(p, struct btrfs_delayed_item, rb_node);
 
 	return next;
-}
-
-static inline struct btrfs_delayed_node *btrfs_get_delayed_node(
-							struct inode *inode)
-{
-	struct btrfs_inode *btrfs_inode = BTRFS_I(inode);
-	struct btrfs_delayed_node *delayed_node;
-
-	delayed_node = btrfs_inode->delayed_node;
-	if (delayed_node)
-		atomic_inc(&delayed_node->refs);
-
-	return delayed_node;
 }
 
 static inline struct btrfs_root *btrfs_get_fs_root(struct btrfs_root *root,
@@ ... @@
 
 int btrfs_inode_delayed_dir_index_count(struct inode *inode)
 {
-	struct btrfs_delayed_node *delayed_node = BTRFS_I(inode)->delayed_node;
-	int ret = 0;
+	struct btrfs_delayed_node *delayed_node = btrfs_get_delayed_node(inode);
 
 	if (!delayed_node)
 		return -ENOENT;
@@ ... @@
 	 * a new directory index is added into the delayed node and index_cnt
 	 * is updated now. So we needn't lock the delayed node.
 	 */
-	if (!delayed_node->index_cnt)
+	if (!delayed_node->index_cnt) {
+		btrfs_release_delayed_node(delayed_node);
 		return -EINVAL;
+	}
 
 	BTRFS_I(inode)->index_cnt = delayed_node->index_cnt;
-	return ret;
+	btrfs_release_delayed_node(delayed_node);
+	return 0;
 }
 
 void btrfs_get_delayed_items(struct inode *inode, struct list_head *ins_list,
@@ ... @@
 				     inode->i_ctime.tv_sec);
 	btrfs_set_stack_timespec_nsec(btrfs_inode_ctime(inode_item),
 				     inode->i_ctime.tv_nsec);
+}
+
+int btrfs_fill_inode(struct inode *inode, u32 *rdev)
+{
+	struct btrfs_delayed_node *delayed_node;
+	struct btrfs_inode_item *inode_item;
+	struct btrfs_timespec *tspec;
+
+	delayed_node = btrfs_get_delayed_node(inode);
+	if (!delayed_node)
+		return -ENOENT;
+
+	mutex_lock(&delayed_node->mutex);
+	if (!delayed_node->inode_dirty) {
+		mutex_unlock(&delayed_node->mutex);
+		btrfs_release_delayed_node(delayed_node);
+		return -ENOENT;
+	}
+
+	inode_item = &delayed_node->inode_item;
+
+	inode->i_uid = btrfs_stack_inode_uid(inode_item);
+	inode->i_gid = btrfs_stack_inode_gid(inode_item);
+	btrfs_i_size_write(inode, btrfs_stack_inode_size(inode_item));
+	inode->i_mode = btrfs_stack_inode_mode(inode_item);
+	inode->i_nlink = btrfs_stack_inode_nlink(inode_item);
+	inode_set_bytes(inode, btrfs_stack_inode_nbytes(inode_item));
+	BTRFS_I(inode)->generation = btrfs_stack_inode_generation(inode_item);
+	BTRFS_I(inode)->sequence = btrfs_stack_inode_sequence(inode_item);
+	inode->i_rdev = 0;
+	*rdev = btrfs_stack_inode_rdev(inode_item);
+	BTRFS_I(inode)->flags = btrfs_stack_inode_flags(inode_item);
+
+	tspec = btrfs_inode_atime(inode_item);
+	inode->i_atime.tv_sec = btrfs_stack_timespec_sec(tspec);
+	inode->i_atime.tv_nsec = btrfs_stack_timespec_nsec(tspec);
+
+	tspec = btrfs_inode_mtime(inode_item);
+	inode->i_mtime.tv_sec = btrfs_stack_timespec_sec(tspec);
+	inode->i_mtime.tv_nsec = btrfs_stack_timespec_nsec(tspec);
+
+	tspec = btrfs_inode_ctime(inode_item);
+	inode->i_ctime.tv_sec = btrfs_stack_timespec_sec(tspec);
+	inode->i_ctime.tv_nsec = btrfs_stack_timespec_nsec(tspec);
+
+	inode->i_generation = BTRFS_I(inode)->generation;
+	BTRFS_I(inode)->index_cnt = (u64)-1;
+
+	mutex_unlock(&delayed_node->mutex);
+	btrfs_release_delayed_node(delayed_node);
+	return 0;
 }
 
 int btrfs_delayed_update_inode(struct btrfs_trans_handle *trans,
+1
fs/btrfs/delayed-inode.h
@@ ... @@
 
 int btrfs_delayed_update_inode(struct btrfs_trans_handle *trans,
 			       struct btrfs_root *root, struct inode *inode);
+int btrfs_fill_inode(struct inode *inode, u32 *rdev);
 
 /* Used for drop dead root */
 void btrfs_kill_all_delayed_nodes(struct btrfs_root *root);
+2-2
fs/btrfs/extent-tree.c
@@ ... @@
 				     u64 num_bytes, u64 empty_size,
 				     u64 search_start, u64 search_end,
 				     u64 hint_byte, struct btrfs_key *ins,
-				     int data)
+				     u64 data)
 {
 	int ret = 0;
 	struct btrfs_root *root = orig_root->fs_info->extent_root;
@@ ... @@
 
 	space_info = __find_space_info(root->fs_info, data);
 	if (!space_info) {
-		printk(KERN_ERR "No space info for %d\n", data);
+		printk(KERN_ERR "No space info for %llu\n", data);
 		return -ENOSPC;
 	}
 
+6-3
fs/btrfs/free-space-cache.c
@@ ... @@
 
 	while ((node = rb_last(&ctl->free_space_offset)) != NULL) {
 		info = rb_entry(node, struct btrfs_free_space, offset_index);
-		unlink_free_space(ctl, info);
-		kfree(info->bitmap);
-		kmem_cache_free(btrfs_free_space_cachep, info);
+		if (!info->bitmap) {
+			unlink_free_space(ctl, info);
+			kmem_cache_free(btrfs_free_space_cachep, info);
+		} else {
+			free_bitmap(ctl, info);
+		}
 		if (need_resched()) {
 			spin_unlock(&ctl->tree_lock);
 			cond_resched();
@@ ... @@
 
 config CIFS_NFSD_EXPORT
 	  bool "Allow nfsd to export CIFS file system (EXPERIMENTAL)"
-	  depends on CIFS && EXPERIMENTAL
+	  depends on CIFS && EXPERIMENTAL && BROKEN
 	  help
 	   Allows NFS server to export a CIFS mounted share (nfsd over cifs)
+1
fs/cifs/cifs_fs_sb.h
@@ ... @@
 #define CIFS_MOUNT_MULTIUSER	0x20000 /* multiuser mount */
 #define CIFS_MOUNT_STRICT_IO	0x40000 /* strict cache mode */
 #define CIFS_MOUNT_RWPIDFORWARD	0x80000 /* use pid forwarding for rw */
+#define CIFS_MOUNT_POSIXACL	0x100000 /* mirror of MS_POSIXACL in mnt_cifs_flags */
 
 struct cifs_sb_info {
 	struct rb_root tlink_tree;
+66-92
fs/cifs/cifsfs.c
@@ ... @@
 }
 
 static int
-cifs_read_super(struct super_block *sb, struct smb_vol *volume_info,
-		const char *devname, int silent)
+cifs_read_super(struct super_block *sb)
 {
 	struct inode *inode;
 	struct cifs_sb_info *cifs_sb;
@@ ... @@
 
 	cifs_sb = CIFS_SB(sb);
 
-	spin_lock_init(&cifs_sb->tlink_tree_lock);
-	cifs_sb->tlink_tree = RB_ROOT;
+	if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIXACL)
+		sb->s_flags |= MS_POSIXACL;
 
-	rc = bdi_setup_and_register(&cifs_sb->bdi, "cifs", BDI_CAP_MAP_COPY);
-	if (rc)
-		return rc;
+	if (cifs_sb_master_tcon(cifs_sb)->ses->capabilities & CAP_LARGE_FILES)
+		sb->s_maxbytes = MAX_LFS_FILESIZE;
+	else
+		sb->s_maxbytes = MAX_NON_LFS;
 
-	cifs_sb->bdi.ra_pages = default_backing_dev_info.ra_pages;
-
-	rc = cifs_mount(sb, cifs_sb, volume_info, devname);
-
-	if (rc) {
-		if (!silent)
-			cERROR(1, "cifs_mount failed w/return code = %d", rc);
-		goto out_mount_failed;
-	}
+	/* BB FIXME fix time_gran to be larger for LANMAN sessions */
+	sb->s_time_gran = 100;
 
 	sb->s_magic = CIFS_MAGIC_NUMBER;
 	sb->s_op = &cifs_super_ops;
@@ ... @@
 	if (inode)
 		iput(inode);
 
-	cifs_umount(sb, cifs_sb);
-
-out_mount_failed:
-	bdi_destroy(&cifs_sb->bdi);
 	return rc;
 }
 
-static void
-cifs_put_super(struct super_block *sb)
+static void cifs_kill_sb(struct super_block *sb)
 {
-	int rc = 0;
-	struct cifs_sb_info *cifs_sb;
-
-	cFYI(1, "In cifs_put_super");
-	cifs_sb = CIFS_SB(sb);
-	if (cifs_sb == NULL) {
-		cFYI(1, "Empty cifs superblock info passed to unmount");
-		return;
-	}
-
-	rc = cifs_umount(sb, cifs_sb);
-	if (rc)
-		cERROR(1, "cifs_umount failed with return code %d", rc);
-	if (cifs_sb->mountdata) {
-		kfree(cifs_sb->mountdata);
-		cifs_sb->mountdata = NULL;
-	}
-
-	unload_nls(cifs_sb->local_nls);
-	bdi_destroy(&cifs_sb->bdi);
-	kfree(cifs_sb);
+	struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
+	kill_anon_super(sb);
+	cifs_umount(cifs_sb);
 }
 
 static int
@@ ... @@
 }
 
 static const struct super_operations cifs_super_ops = {
-	.put_super = cifs_put_super,
 	.statfs = cifs_statfs,
 	.alloc_inode = cifs_alloc_inode,
 	.destroy_inode = cifs_destroy_inode,
@@ ... @@
 	full_path = cifs_build_path_to_root(vol, cifs_sb,
 					    cifs_sb_master_tcon(cifs_sb));
 	if (full_path == NULL)
-		return NULL;
+		return ERR_PTR(-ENOMEM);
 
 	cFYI(1, "Get root dentry for %s", full_path);
@@ ... @@
 		dchild = d_alloc(dparent, &name);
 		if (dchild == NULL) {
 			dput(dparent);
-			dparent = NULL;
+			dparent = ERR_PTR(-ENOMEM);
 			goto out;
 		}
 	}
@@ ... @@
 		if (rc) {
 			dput(dchild);
 			dput(dparent);
-			dparent = NULL;
+			dparent = ERR_PTR(rc);
 			goto out;
 		}
 		alias = d_materialise_unique(dchild, inode);
@@ ... @@
 		dput(dchild);
 		if (IS_ERR(alias)) {
 			dput(dparent);
-			dparent = NULL;
+			dparent = ERR_PTR(-EINVAL); /* XXX */
 			goto out;
 		}
 		dchild = alias;
@@ ... @@
 	_FreeXid(xid);
 	kfree(full_path);
 	return dparent;
+}
+
+static int cifs_set_super(struct super_block *sb, void *data)
+{
+	struct cifs_mnt_data *mnt_data = data;
+	sb->s_fs_info = mnt_data->cifs_sb;
+	return set_anon_super(sb, NULL);
 }
 
 static struct dentry *
@@ ... @@
 	cifs_sb = kzalloc(sizeof(struct cifs_sb_info), GFP_KERNEL);
 	if (cifs_sb == NULL) {
 		root = ERR_PTR(-ENOMEM);
-		goto out;
+		goto out_nls;
+	}
+
+	cifs_sb->mountdata = kstrndup(data, PAGE_SIZE, GFP_KERNEL);
+	if (cifs_sb->mountdata == NULL) {
+		root = ERR_PTR(-ENOMEM);
+		goto out_cifs_sb;
 	}
 
 	cifs_setup_cifs_sb(volume_info, cifs_sb);
+
+	rc = cifs_mount(cifs_sb, volume_info);
+	if (rc) {
+		if (!(flags & MS_SILENT))
+			cERROR(1, "cifs_mount failed w/return code = %d", rc);
+		root = ERR_PTR(rc);
+		goto out_mountdata;
+	}
 
 	mnt_data.vol = volume_info;
 	mnt_data.cifs_sb = cifs_sb;
 	mnt_data.flags = flags;
 
-	sb = sget(fs_type, cifs_match_super, set_anon_super, &mnt_data);
+	sb = sget(fs_type, cifs_match_super, cifs_set_super, &mnt_data);
 	if (IS_ERR(sb)) {
 		root = ERR_CAST(sb);
-		goto out_cifs_sb;
+		cifs_umount(cifs_sb);
+		goto out;
 	}
 
-	if (sb->s_fs_info) {
+	if (sb->s_root) {
 		cFYI(1, "Use existing superblock");
-		goto out_shared;
+		cifs_umount(cifs_sb);
+	} else {
+		sb->s_flags = flags;
+		/* BB should we make this contingent on mount parm? */
+		sb->s_flags |= MS_NODIRATIME | MS_NOATIME;
+
+		rc = cifs_read_super(sb);
+		if (rc) {
+			root = ERR_PTR(rc);
+			goto out_super;
+		}
+
+		sb->s_flags |= MS_ACTIVE;
 	}
-
-	/*
-	 * Copy mount params for use in submounts. Better to do
-	 * the copy here and deal with the error before cleanup gets
-	 * complicated post-mount.
-	 */
-	cifs_sb->mountdata = kstrndup(data, PAGE_SIZE, GFP_KERNEL);
-	if (cifs_sb->mountdata == NULL) {
-		root = ERR_PTR(-ENOMEM);
-		goto out_super;
-	}
-
-	sb->s_flags = flags;
-	/* BB should we make this contingent on mount parm? */
-	sb->s_flags |= MS_NODIRATIME | MS_NOATIME;
-	sb->s_fs_info = cifs_sb;
-
-	rc = cifs_read_super(sb, volume_info, dev_name,
-			     flags & MS_SILENT ? 1 : 0);
-	if (rc) {
-		root = ERR_PTR(rc);
-		goto out_super;
-	}
-
-	sb->s_flags |= MS_ACTIVE;
 
 	root = cifs_get_root(volume_info, sb);
-	if (root == NULL)
+	if (IS_ERR(root))
 		goto out_super;
 
 	cFYI(1, "dentry root is: %p", root);
 	goto out;
 
-out_shared:
-	root = cifs_get_root(volume_info, sb);
-	if (root)
-		cFYI(1, "dentry root is: %p", root);
-	goto out;
-
 out_super:
-	kfree(cifs_sb->mountdata);
 	deactivate_locked_super(sb);
-
-out_cifs_sb:
-	unload_nls(cifs_sb->local_nls);
-	kfree(cifs_sb);
-
 out:
 	cifs_cleanup_volume_info(&volume_info);
 	return root;
+
+out_mountdata:
+	kfree(cifs_sb->mountdata);
+out_cifs_sb:
+	kfree(cifs_sb);
+out_nls:
+	unload_nls(volume_info->local_nls);
+	goto out;
 }
 
 static ssize_t cifs_file_aio_write(struct kiocb *iocb, const struct iovec *iov,
@@ ... @@
 	.owner = THIS_MODULE,
 	.name = "cifs",
 	.mount = cifs_do_mount,
-	.kill_sb = kill_anon_super,
+	.kill_sb = cifs_kill_sb,
 	/*  .fs_flags */
 };
 const struct inode_operations cifs_dir_inode_ops = {
+4-4
fs/cifs/cifsproto.h
@@ ... @@
 extern void cifs_cleanup_volume_info(struct smb_vol **pvolume_info);
 extern int cifs_setup_volume_info(struct smb_vol **pvolume_info,
 				  char *mount_data, const char *devname);
-extern int cifs_mount(struct super_block *, struct cifs_sb_info *,
-		      struct smb_vol *, const char *);
-extern int cifs_umount(struct super_block *, struct cifs_sb_info *);
+extern int cifs_mount(struct cifs_sb_info *, struct smb_vol *);
+extern void cifs_umount(struct cifs_sb_info *);
 extern void cifs_dfs_release_automount_timer(void);
 void cifs_proc_init(void);
 void cifs_proc_clean(void);
@@ ... @@
 			struct dfs_info3_param **preferrals,
 			int remap);
 extern void reset_cifs_unix_caps(int xid, struct cifs_tcon *tcon,
-				 struct super_block *sb, struct smb_vol *vol);
+				 struct cifs_sb_info *cifs_sb,
+				 struct smb_vol *vol);
 extern int CIFSSMBQFSInfo(const int xid, struct cifs_tcon *tcon,
 			  struct kstatfs *FSData);
 extern int SMBOldQFSInfo(const int xid, struct cifs_tcon *tcon,
+53-35
fs/cifs/connect.c
@@ ... @@
 }
 
 void reset_cifs_unix_caps(int xid, struct cifs_tcon *tcon,
-			  struct super_block *sb, struct smb_vol *vol_info)
+			  struct cifs_sb_info *cifs_sb, struct smb_vol *vol_info)
 {
 	/* if we are reconnecting then should we check to see if
 	 * any requested capabilities changed locally e.g. via
@@ ... @@
 			cap &= ~CIFS_UNIX_POSIX_ACL_CAP;
 		else if (CIFS_UNIX_POSIX_ACL_CAP & cap) {
 			cFYI(1, "negotiated posix acl support");
-			if (sb)
-				sb->s_flags |= MS_POSIXACL;
+			if (cifs_sb)
+				cifs_sb->mnt_cifs_flags |=
+					CIFS_MOUNT_POSIXACL;
 		}
 
 		if (vol_info && vol_info->posix_paths == 0)
 			cap &= ~CIFS_UNIX_POSIX_PATHNAMES_CAP;
 		else if (cap & CIFS_UNIX_POSIX_PATHNAMES_CAP) {
 			cFYI(1, "negotiate posix pathnames");
-			if (sb)
-				CIFS_SB(sb)->mnt_cifs_flags |=
+			if (cifs_sb)
+				cifs_sb->mnt_cifs_flags |=
 					CIFS_MOUNT_POSIX_PATHS;
 		}
 
-		if (sb && (CIFS_SB(sb)->rsize > 127 * 1024)) {
+		if (cifs_sb && (cifs_sb->rsize > 127 * 1024)) {
 			if ((cap & CIFS_UNIX_LARGE_READ_CAP) == 0) {
-				CIFS_SB(sb)->rsize = 127 * 1024;
+				cifs_sb->rsize = 127 * 1024;
 				cFYI(DBG2, "larger reads not supported by srv");
 			}
 		}
@@ ... @@
 			struct cifs_sb_info *cifs_sb)
 {
 	INIT_DELAYED_WORK(&cifs_sb->prune_tlinks, cifs_prune_tlinks);
+
+	spin_lock_init(&cifs_sb->tlink_tree_lock);
+	cifs_sb->tlink_tree = RB_ROOT;
 
 	if (pvolume_info->rsize > CIFSMaxBufSize) {
 		cERROR(1, "rsize %d too large, using MaxBufSize",
@@ ... @@
 
 /*
  * When the server supports very large writes via POSIX extensions, we can
- * allow up to 2^24 - PAGE_CACHE_SIZE.
+ * allow up to 2^24-1, minus the size of a WRITE_AND_X header, not including
+ * the RFC1001 length.
  *
  * Note that this might make for "interesting" allocation problems during
- * writeback however (as we have to allocate an array of pointers for the
- * pages). A 16M write means ~32kb page array with PAGE_CACHE_SIZE == 4096.
+ * writeback however as we have to allocate an array of pointers for the
+ * pages. A 16M write means ~32kb page array with PAGE_CACHE_SIZE == 4096.
  */
-#define CIFS_MAX_WSIZE ((1<<24) - PAGE_CACHE_SIZE)
+#define CIFS_MAX_WSIZE ((1<<24) - 1 - sizeof(WRITE_REQ) + 4)
 
 /*
- * When the server doesn't allow large posix writes, default to a wsize of
- * 128k - PAGE_CACHE_SIZE -- one page less than the largest frame size
- * described in RFC1001. This allows space for the header without going over
- * that by default.
+ * When the server doesn't allow large posix writes, only allow a wsize of
+ * 128k minus the size of the WRITE_AND_X header. That allows for a write up
+ * to the maximum size described by RFC1002.
  */
-#define CIFS_MAX_RFC1001_WSIZE (128 * 1024 - PAGE_CACHE_SIZE)
+#define CIFS_MAX_RFC1002_WSIZE (128 * 1024 - sizeof(WRITE_REQ) + 4)
 
 /*
  * The default wsize is 1M. find_get_pages seems to return a maximum of 256
@@ ... @@
 
 	/* can server support 24-bit write sizes? (via UNIX extensions) */
 	if (!tcon->unix_ext || !(unix_cap & CIFS_UNIX_LARGE_WRITE_CAP))
-		wsize = min_t(unsigned int, wsize, CIFS_MAX_RFC1001_WSIZE);
+		wsize = min_t(unsigned int, wsize, CIFS_MAX_RFC1002_WSIZE);
 
-	/* no CAP_LARGE_WRITE_X? Limit it to 16 bits */
-	if (!(server->capabilities & CAP_LARGE_WRITE_X))
-		wsize = min_t(unsigned int, wsize, USHRT_MAX);
+	/*
+	 * no CAP_LARGE_WRITE_X or is signing enabled without CAP_UNIX set?
+	 * Limit it to max buffer offered by the server, minus the size of the
+	 * WRITEX header, not including the 4 byte RFC1001 length.
+	 */
+	if (!(server->capabilities & CAP_LARGE_WRITE_X) ||
+	    (!(server->capabilities & CAP_UNIX) &&
+	     (server->sec_mode & (SECMODE_SIGN_ENABLED|SECMODE_SIGN_REQUIRED))))
+		wsize = min_t(unsigned int, wsize,
+				server->maxBuf - sizeof(WRITE_REQ) + 4);
 
 	/* hard limit of CIFS_MAX_WSIZE */
 	wsize = min_t(unsigned int, wsize, CIFS_MAX_WSIZE);
@@ ... @@
 
 	if (volume_info->nullauth) {
 		cFYI(1, "null user");
-		volume_info->username = "";
+		volume_info->username = kzalloc(1, GFP_KERNEL);
+		if (volume_info->username == NULL) {
+			rc = -ENOMEM;
+			goto out;
+		}
 	} else if (volume_info->username) {
 		/* BB fixme parse for domain name here */
 		cFYI(1, "Username: %s", volume_info->username);
@@ ... @@
 }
 
 int
-cifs_mount(struct super_block *sb, struct cifs_sb_info *cifs_sb,
-	   struct smb_vol *volume_info, const char *devname)
+cifs_mount(struct cifs_sb_info *cifs_sb, struct smb_vol *volume_info)
 {
 	int rc = 0;
 	int xid;
@@ ... @@
 	struct tcon_link *tlink;
 #ifdef CONFIG_CIFS_DFS_UPCALL
 	int referral_walks_count = 0;
+
+	rc = bdi_setup_and_register(&cifs_sb->bdi, "cifs", BDI_CAP_MAP_COPY);
+	if (rc)
+		return rc;
+
+	cifs_sb->bdi.ra_pages = default_backing_dev_info.ra_pages;
+
 try_mount_again:
 	/* cleanup activities if we're chasing a referral */
 	if (referral_walks_count) {
@@ ... @@
 	srvTcp = cifs_get_tcp_session(volume_info);
 	if (IS_ERR(srvTcp)) {
 		rc = PTR_ERR(srvTcp);
+		bdi_destroy(&cifs_sb->bdi);
 		goto out;
 	}
@@ ... @@
 		pSesInfo = NULL;
 		goto mount_fail_check;
 	}
-
-	if (pSesInfo->capabilities & CAP_LARGE_FILES)
-		sb->s_maxbytes = MAX_LFS_FILESIZE;
-	else
-		sb->s_maxbytes = MAX_NON_LFS;
-
-	/* BB FIXME fix time_gran to be larger for LANMAN sessions */
-	sb->s_time_gran = 100;
 
 	/* search for existing tcon to this server share */
 	tcon = cifs_get_tcon(pSesInfo, volume_info);
@@ ... @@
 	if (tcon->ses->capabilities & CAP_UNIX) {
 		/* reset of caps checks mount to see if unix extensions
 		   disabled for just this mount */
-		reset_cifs_unix_caps(xid, tcon, sb, volume_info);
+		reset_cifs_unix_caps(xid, tcon, cifs_sb, volume_info);
 		if ((tcon->ses->server->tcpStatus == CifsNeedReconnect) &&
 		    (le64_to_cpu(tcon->fsUnixInfo.Capability) &
 		     CIFS_UNIX_TRANSPORT_ENCRYPTION_MANDATORY_CAP)) {
@@ ... @@
 		cifs_put_smb_ses(pSesInfo);
 	else
 		cifs_put_tcp_session(srvTcp);
+	bdi_destroy(&cifs_sb->bdi);
 	goto out;
 	}
@@ ... @@
 	return rc;
 }
 
-int
-cifs_umount(struct super_block *sb, struct cifs_sb_info *cifs_sb)
+void
+cifs_umount(struct cifs_sb_info *cifs_sb)
 {
 	struct rb_root *root = &cifs_sb->tlink_tree;
 	struct rb_node *node;
@@ ... @@
 	}
 	spin_unlock(&cifs_sb->tlink_tree_lock);
 
-	return 0;
+	bdi_destroy(&cifs_sb->bdi);
+	kfree(cifs_sb->mountdata);
+	unload_nls(cifs_sb->local_nls);
+	kfree(cifs_sb);
 }
 
 int cifs_negotiate_protocol(unsigned int xid, struct cifs_ses *ses)
@@ ... @@
  *  positive retcode - signal for ext4_ext_walk_space(), see below
  *  callback must return valid extent (passed or newly created)
  */
-typedef int (*ext_prepare_callback)(struct inode *, struct ext4_ext_path *,
+typedef int (*ext_prepare_callback)(struct inode *, ext4_lblk_t,
 					struct ext4_ext_cache *,
 					struct ext4_extent *, void *);
@@ ... @@
 #define EXT_BREAK      1
 #define EXT_REPEAT     2
 
-/* Maximum logical block in a file; ext4_extent's ee_block is __le32 */
-#define EXT_MAX_BLOCK	0xffffffff
+/*
+ * Maximum number of logical blocks in a file; ext4_extent's ee_block is
+ * __le32.
+ */
+#define EXT_MAX_BLOCKS	0xffffffff
 
 /*
  * EXT_INIT_MAX_LEN is the maximum number of blocks we can have in an
+20-22
fs/ext4/extents.c
@@ ... @@
 
 /*
  * ext4_ext_next_allocated_block:
- * returns allocated block in subsequent extent or EXT_MAX_BLOCK.
+ * returns allocated block in subsequent extent or EXT_MAX_BLOCKS.
  * NOTE: it considers block number from index entry as
  * allocated block. Thus, index entries have to be consistent
  * with leaves.
@@ ... @@
 	depth = path->p_depth;
 
 	if (depth == 0 && path->p_ext == NULL)
-		return EXT_MAX_BLOCK;
+		return EXT_MAX_BLOCKS;
 
 	while (depth >= 0) {
 		if (depth == path->p_depth) {
@@ ... @@
 		depth--;
 	}
 
-	return EXT_MAX_BLOCK;
+	return EXT_MAX_BLOCKS;
 }
 
 /*
  * ext4_ext_next_leaf_block:
- * returns first allocated block from next leaf or EXT_MAX_BLOCK
+ * returns first allocated block from next leaf or EXT_MAX_BLOCKS
  */
 static ext4_lblk_t ext4_ext_next_leaf_block(struct inode *inode,
 					struct ext4_ext_path *path)
@@ ... @@
 
 	/* zero-tree has no leaf blocks at all */
 	if (depth == 0)
-		return EXT_MAX_BLOCK;
+		return EXT_MAX_BLOCKS;
 
 	/* go to index block */
 	depth--;
@@ ... @@
 		depth--;
 	}
 
-	return EXT_MAX_BLOCK;
+	return EXT_MAX_BLOCKS;
 }
 
 /*
@@ ... @@
 	 */
 	if (b2 < b1) {
 		b2 = ext4_ext_next_allocated_block(path);
-		if (b2 == EXT_MAX_BLOCK)
+		if (b2 == EXT_MAX_BLOCKS)
 			goto out;
 	}
 
 	/* check for wrap through zero on extent logical start block*/
 	if (b1 + len1 < b1) {
-		len1 = EXT_MAX_BLOCK - b1;
+		len1 = EXT_MAX_BLOCKS - b1;
 		newext->ee_len = cpu_to_le16(len1);
 		ret = 1;
 	}
@@ ... @@
 	fex = EXT_LAST_EXTENT(eh);
 	next = ext4_ext_next_leaf_block(inode, path);
 	if (le32_to_cpu(newext->ee_block) > le32_to_cpu(fex->ee_block)
-	    && next != EXT_MAX_BLOCK) {
+	    && next != EXT_MAX_BLOCKS) {
 		ext_debug("next leaf block - %d\n", next);
 		BUG_ON(npath != NULL);
 		npath = ext4_ext_find_extent(inode, next, NULL);
@@ ... @@
 	BUG_ON(func == NULL);
 	BUG_ON(inode == NULL);
 
-	while (block < last && block != EXT_MAX_BLOCK) {
+	while (block < last && block != EXT_MAX_BLOCKS) {
 		num = last - block;
 		/* find extent for this block */
 		down_read(&EXT4_I(inode)->i_data_sem);
@@ ... @@
 			err = -EIO;
 			break;
 		}
-		err = func(inode, path, &cbex, ex, cbdata);
+		err = func(inode, next, &cbex, ex, cbdata);
 		ext4_ext_drop_refs(path);
 
 		if (err < 0)
@@ ... @@
 	if (ex == NULL) {
 		/* there is no extent yet, so gap is [0;-] */
 		lblock = 0;
-		len = EXT_MAX_BLOCK;
+		len = EXT_MAX_BLOCKS;
 		ext_debug("cache gap(whole file):");
 	} else if (block < le32_to_cpu(ex->ee_block)) {
 		lblock = block;
@@ ... @@
 		 * never happen because at least one of the end points
 		 * needs to be on the edge of the extent.
 		 */
-		if (end == EXT_MAX_BLOCK) {
+		if (end == EXT_MAX_BLOCKS - 1) {
 			ext_debug("  bad truncate %u:%u\n",
 				  start, end);
 			block = 0;
@@ ... @@
 		 * If this is a truncate, this condition
 		 * should never happen
 		 */
-		if (end == EXT_MAX_BLOCK) {
+		if (end == EXT_MAX_BLOCKS - 1) {
 			ext_debug("  bad truncate %u:%u\n",
 				  start, end);
 			err = -EIO;
@@ ... @@
 		 * we need to remove it from the leaf
 		 */
 		if (num == 0) {
-			if (end != EXT_MAX_BLOCK) {
+			if (end != EXT_MAX_BLOCKS - 1) {
 				/*
 				 * For hole punching, we need to scoot all the
 				 * extents up when an extent is removed so that
@@ ... @@
 
 	last_block = (inode->i_size + sb->s_blocksize - 1)
 			>> EXT4_BLOCK_SIZE_BITS(sb);
-	err = ext4_ext_remove_space(inode, last_block, EXT_MAX_BLOCK);
+	err = ext4_ext_remove_space(inode, last_block, EXT_MAX_BLOCKS - 1);
 
 	/* In a multi-transaction truncate, we only make the final
 	 * transaction synchronous.
@@ ... @@
 /*
  * Callback function called for each extent to gather FIEMAP information.
  */
-static int ext4_ext_fiemap_cb(struct inode *inode, struct ext4_ext_path *path,
+static int ext4_ext_fiemap_cb(struct inode *inode, ext4_lblk_t next,
 		       struct ext4_ext_cache *newex, struct ext4_extent *ex,
 		       void *data)
 {
 	__u64	logical;
 	__u64	physical;
 	__u64	length;
-	loff_t	size;
 	__u32	flags = 0;
 	int		ret = 0;
 	struct fiemap_extent_info *fieinfo = data;
@@ ... @@
 	if (ex && ext4_ext_is_uninitialized(ex))
 		flags |= FIEMAP_EXTENT_UNWRITTEN;
 
-	size = i_size_read(inode);
-	if (logical + length >= size)
+	if (next == EXT_MAX_BLOCKS)
 		flags |= FIEMAP_EXTENT_LAST;
 
 	ret = fiemap_fill_next_extent(fieinfo, logical, physical,
@@ ... @@
 
 	start_blk = start >> inode->i_sb->s_blocksize_bits;
 	last_blk = (start + len - 1) >> inode->i_sb->s_blocksize_bits;
-	if (last_blk >= EXT_MAX_BLOCK)
-		last_blk = EXT_MAX_BLOCK-1;
+	if (last_blk >= EXT_MAX_BLOCKS)
+		last_blk = EXT_MAX_BLOCKS-1;
 	len_blks = ((ext4_lblk_t) last_blk) - start_blk + 1;
 
 	/*
fs/ext4/super.c:

···
  * in the vfs.  ext4 inode has 48 bits of i_block in fsblock units,
  * so that won't be a limiting factor.
  *
+ * However there is another limiting factor. We do store extents in the form
+ * of starting block and length, hence the resulting length of the extent
+ * covering maximum file size must fit into on-disk format containers as
+ * well. Given that length is always one unit bigger than the max unit
+ * (because we count 0 as well) we have to lower the s_maxbytes by one fs
+ * block.
+ *
  * Note, this does *not* consider any metadata overhead for vfs i_blocks.
  */
 static loff_t ext4_max_size(int blkbits, int has_huge_files)
···
 		upper_limit <<= blkbits;
 	}

-	/* 32-bit extent-start container, ee_block */
-	res = 1LL << 32;
+	/*
+	 * 32-bit extent-start container, ee_block. We lower the maxbytes
+	 * by one fs block, so ee_len can cover the extent of maximum file
+	 * size
+	 */
+	res = (1LL << 32) - 1;
 	res <<= blkbits;
-	res -= 1;

 	/* Sanity check against vm- & vfs- imposed limits */
 	if (res > upper_limit)
fs/inode.c | +7
···
 void end_writeback(struct inode *inode)
 {
 	might_sleep();
+	/*
+	 * We have to cycle tree_lock here because reclaim can still be in the
+	 * process of removing the last page (in __delete_from_page_cache())
+	 * and we must not free mapping under it.
+	 */
+	spin_lock_irq(&inode->i_data.tree_lock);
 	BUG_ON(inode->i_data.nrpages);
+	spin_unlock_irq(&inode->i_data.tree_lock);
 	BUG_ON(!list_empty(&inode->i_data.private_list));
 	BUG_ON(!(inode->i_state & I_FREEING));
 	BUG_ON(inode->i_state & I_CLEAR);
fs/jbd2/checkpoint.c | +16 -12
···

 	if (jh->b_jlist == BJ_None && !buffer_locked(bh) &&
 	    !buffer_dirty(bh) && !buffer_write_io_error(bh)) {
+		/*
+		 * Get our reference so that bh cannot be freed before
+		 * we unlock it
+		 */
+		get_bh(bh);
 		JBUFFER_TRACE(jh, "remove from checkpoint list");
 		ret = __jbd2_journal_remove_checkpoint(jh) + 1;
 		jbd_unlock_bh_state(bh);
-		jbd2_journal_remove_journal_head(bh);
 		BUFFER_TRACE(bh, "release");
 		__brelse(bh);
 	} else {
···
 		spin_lock(&journal->j_list_lock);
 		goto restart;
 	}
+	get_bh(bh);
 	if (buffer_locked(bh)) {
-		atomic_inc(&bh->b_count);
 		spin_unlock(&journal->j_list_lock);
 		jbd_unlock_bh_state(bh);
 		wait_on_buffer(bh);
···
 	 */
 	released = __jbd2_journal_remove_checkpoint(jh);
 	jbd_unlock_bh_state(bh);
-	jbd2_journal_remove_journal_head(bh);
 	__brelse(bh);

···
 	int ret = 0;

 	if (buffer_locked(bh)) {
-		atomic_inc(&bh->b_count);
+		get_bh(bh);
 		spin_unlock(&journal->j_list_lock);
 		jbd_unlock_bh_state(bh);
 		wait_on_buffer(bh);
···
 	ret = 1;
 	if (unlikely(buffer_write_io_error(bh)))
 		ret = -EIO;
+	get_bh(bh);
 	J_ASSERT_JH(jh, !buffer_jbddirty(bh));
 	BUFFER_TRACE(bh, "remove from checkpoint");
 	__jbd2_journal_remove_checkpoint(jh);
 	spin_unlock(&journal->j_list_lock);
 	jbd_unlock_bh_state(bh);
-	jbd2_journal_remove_journal_head(bh);
 	__brelse(bh);
 } else {
 	/*
···
 /*
  * journal_clean_one_cp_list
  *
- * Find all the written-back checkpoint buffers in the given list and release them.
+ * Find all the written-back checkpoint buffers in the given list and
+ * release them.
  *
  * Called with the journal locked.
  * Called with j_list_lock held.
···
  * checkpoint lists.
  *
  * The function returns 1 if it frees the transaction, 0 otherwise.
+ * The function can free jh and bh.
  *
- * This function is called with the journal locked.
  * This function is called with j_list_lock held.
  * This function is called with jbd_lock_bh_state(jh2bh(jh))
  */
···
 	}
 	journal = transaction->t_journal;

+	JBUFFER_TRACE(jh, "removing from transaction");
 	__buffer_unlink(jh);
 	jh->b_cp_transaction = NULL;
+	jbd2_journal_put_journal_head(jh);

 	if (transaction->t_checkpoint_list != NULL ||
 	    transaction->t_checkpoint_io_list != NULL)
 		goto out;
-	JBUFFER_TRACE(jh, "transaction has no more buffers");

 	/*
 	 * There is one special case to worry about: if we have just pulled the
···
 	 * The locking here around t_state is a bit sleazy.
 	 * See the comment at the end of jbd2_journal_commit_transaction().
 	 */
-	if (transaction->t_state != T_FINISHED) {
-		JBUFFER_TRACE(jh, "belongs to running/committing transaction");
+	if (transaction->t_state != T_FINISHED)
 		goto out;
-	}

 	/* OK, that was the last buffer for the transaction: we can now
 	   safely remove this transaction from the log */
···
 	wake_up(&journal->j_wait_logspace);
 	ret = 1;
 out:
-	JBUFFER_TRACE(jh, "exit");
 	return ret;
 }
···
 	J_ASSERT_JH(jh, buffer_dirty(jh2bh(jh)) || buffer_jbddirty(jh2bh(jh)));
 	J_ASSERT_JH(jh, jh->b_cp_transaction == NULL);

+	/* Get reference for checkpointing transaction */
+	jbd2_journal_grab_journal_head(jh2bh(jh));
 	jh->b_cp_transaction = transaction;

 	if (!transaction->t_checkpoint_list) {
fs/jbd2/commit.c | +19 -14
···
 	while (commit_transaction->t_forget) {
 		transaction_t *cp_transaction;
 		struct buffer_head *bh;
+		int try_to_free = 0;

 		jh = commit_transaction->t_forget;
 		spin_unlock(&journal->j_list_lock);
 		bh = jh2bh(jh);
+		/*
+		 * Get a reference so that bh cannot be freed before we are
+		 * done with it.
+		 */
+		get_bh(bh);
 		jbd_lock_bh_state(bh);
 		J_ASSERT_JH(jh, jh->b_transaction == commit_transaction);
···
 			__jbd2_journal_insert_checkpoint(jh, commit_transaction);
 			if (is_journal_aborted(journal))
 				clear_buffer_jbddirty(bh);
-			JBUFFER_TRACE(jh, "refile for checkpoint writeback");
-			__jbd2_journal_refile_buffer(jh);
-			jbd_unlock_bh_state(bh);
 		} else {
 			J_ASSERT_BH(bh, !buffer_dirty(bh));
-			/* The buffer on BJ_Forget list and not jbddirty means
+			/*
+			 * The buffer on BJ_Forget list and not jbddirty means
 			 * it has been freed by this transaction and hence it
 			 * could not have been reallocated until this
 			 * transaction has committed. *BUT* it could be
 			 * reallocated once we have written all the data to
 			 * disk and before we process the buffer on BJ_Forget
-			 * list. */
-			JBUFFER_TRACE(jh, "refile or unfile freed buffer");
-			__jbd2_journal_refile_buffer(jh);
-			if (!jh->b_transaction) {
-				jbd_unlock_bh_state(bh);
-				/* needs a brelse */
-				jbd2_journal_remove_journal_head(bh);
-				release_buffer_page(bh);
-			} else
-				jbd_unlock_bh_state(bh);
+			 * list.
+			 */
+			if (!jh->b_next_transaction)
+				try_to_free = 1;
 		}
+		JBUFFER_TRACE(jh, "refile or unfile buffer");
+		__jbd2_journal_refile_buffer(jh);
+		jbd_unlock_bh_state(bh);
+		if (try_to_free)
+			release_buffer_page(bh);	/* Drops bh reference */
+		else
+			__brelse(bh);
 		cond_resched_lock(&journal->j_list_lock);
 	}
 	spin_unlock(&journal->j_list_lock);
fs/jbd2/journal.c | +29 -62
···
  * When a buffer has its BH_JBD bit set it is immune from being released by
  * core kernel code, mainly via ->b_count.
  *
- * A journal_head may be detached from its buffer_head when the journal_head's
- * b_transaction, b_cp_transaction and b_next_transaction pointers are NULL.
- * Various places in JBD call jbd2_journal_remove_journal_head() to indicate that the
- * journal_head can be dropped if needed.
+ * A journal_head is detached from its buffer_head when the journal_head's
+ * b_jcount reaches zero. Running transaction (b_transaction) and checkpoint
+ * transaction (b_cp_transaction) hold their references to b_jcount.
  *
  * Various places in the kernel want to attach a journal_head to a buffer_head
  * _before_ attaching the journal_head to a transaction. To protect the
···
  *	(Attach a journal_head if needed.  Increments b_jcount)
  *	struct journal_head *jh = jbd2_journal_add_journal_head(bh);
  *	...
+ *	(Get another reference for transaction)
+ *	jbd2_journal_grab_journal_head(bh);
  *	jh->b_transaction = xxx;
+ *	(Put original reference)
  *	jbd2_journal_put_journal_head(jh);
- *
- * Now, the journal_head's b_jcount is zero, but it is safe from being released
- * because it has a non-zero b_transaction.
  */

 /*
  * Give a buffer_head a journal_head.
  *
- * Doesn't need the journal lock.
  * May sleep.
  */
 struct journal_head *jbd2_journal_add_journal_head(struct buffer_head *bh)
···
 	struct journal_head *jh = bh2jh(bh);

 	J_ASSERT_JH(jh, jh->b_jcount >= 0);
-
-	get_bh(bh);
-	if (jh->b_jcount == 0) {
-		if (jh->b_transaction == NULL &&
-				jh->b_next_transaction == NULL &&
-				jh->b_cp_transaction == NULL) {
-			J_ASSERT_JH(jh, jh->b_jlist == BJ_None);
-			J_ASSERT_BH(bh, buffer_jbd(bh));
-			J_ASSERT_BH(bh, jh2bh(jh) == bh);
-			BUFFER_TRACE(bh, "remove journal_head");
-			if (jh->b_frozen_data) {
-				printk(KERN_WARNING "%s: freeing "
-						"b_frozen_data\n",
-						__func__);
-				jbd2_free(jh->b_frozen_data, bh->b_size);
-			}
-			if (jh->b_committed_data) {
-				printk(KERN_WARNING "%s: freeing "
-						"b_committed_data\n",
-						__func__);
-				jbd2_free(jh->b_committed_data, bh->b_size);
-			}
-			bh->b_private = NULL;
-			jh->b_bh = NULL;	/* debug, really */
-			clear_buffer_jbd(bh);
-			__brelse(bh);
-			journal_free_journal_head(jh);
-		} else {
-			BUFFER_TRACE(bh, "journal_head was locked");
-		}
+	J_ASSERT_JH(jh, jh->b_transaction == NULL);
+	J_ASSERT_JH(jh, jh->b_next_transaction == NULL);
+	J_ASSERT_JH(jh, jh->b_cp_transaction == NULL);
+	J_ASSERT_JH(jh, jh->b_jlist == BJ_None);
+	J_ASSERT_BH(bh, buffer_jbd(bh));
+	J_ASSERT_BH(bh, jh2bh(jh) == bh);
+	BUFFER_TRACE(bh, "remove journal_head");
+	if (jh->b_frozen_data) {
+		printk(KERN_WARNING "%s: freeing b_frozen_data\n", __func__);
+		jbd2_free(jh->b_frozen_data, bh->b_size);
 	}
+	if (jh->b_committed_data) {
+		printk(KERN_WARNING "%s: freeing b_committed_data\n", __func__);
+		jbd2_free(jh->b_committed_data, bh->b_size);
+	}
+	bh->b_private = NULL;
+	jh->b_bh = NULL;	/* debug, really */
+	clear_buffer_jbd(bh);
+	journal_free_journal_head(jh);
 }

 /*
- * jbd2_journal_remove_journal_head(): if the buffer isn't attached to a transaction
- * and has a zero b_jcount then remove and release its journal_head.   If we did
- * see that the buffer is not used by any transaction we also "logically"
- * decrement ->b_count.
- *
- * We in fact take an additional increment on ->b_count as a convenience,
- * because the caller usually wants to do additional things with the bh
- * after calling here.
- * The caller of jbd2_journal_remove_journal_head() *must* run __brelse(bh) at some
- * time.  Once the caller has run __brelse(), the buffer is eligible for
- * reaping by try_to_free_buffers().
- */
-void jbd2_journal_remove_journal_head(struct buffer_head *bh)
-{
-	jbd_lock_bh_journal_head(bh);
-	__journal_remove_journal_head(bh);
-	jbd_unlock_bh_journal_head(bh);
-}
-
-/*
- * Drop a reference on the passed journal_head.  If it fell to zero then try to
+ * Drop a reference on the passed journal_head.  If it fell to zero then
  * release the journal_head from the buffer_head.
  */
 void jbd2_journal_put_journal_head(struct journal_head *jh)
···
 	jbd_lock_bh_journal_head(bh);
 	J_ASSERT_JH(jh, jh->b_jcount > 0);
 	--jh->b_jcount;
-	if (!jh->b_jcount && !jh->b_transaction) {
+	if (!jh->b_jcount) {
 		__journal_remove_journal_head(bh);
+		jbd_unlock_bh_journal_head(bh);
 		__brelse(bh);
-	}
-	jbd_unlock_bh_journal_head(bh);
+	} else
+		jbd_unlock_bh_journal_head(bh);
 }

 /*
fs/jbd2/transaction.c | +34 -33
···
 #include <linux/module.h>

 static void __jbd2_journal_temp_unlink_buffer(struct journal_head *jh);
+static void __jbd2_journal_unfile_buffer(struct journal_head *jh);

 /*
  * jbd2_get_transaction: obtain a new transaction_t object.
···
 	if (!jh->b_transaction) {
 		JBUFFER_TRACE(jh, "no transaction");
 		J_ASSERT_JH(jh, !jh->b_next_transaction);
-		jh->b_transaction = transaction;
 		JBUFFER_TRACE(jh, "file as BJ_Reserved");
 		spin_lock(&journal->j_list_lock);
 		__jbd2_journal_file_buffer(jh, transaction, BJ_Reserved);
···
  * int jbd2_journal_get_write_access() - notify intent to modify a buffer for metadata (not data) update.
  * @handle: transaction to add buffer modifications to
  * @bh: bh to be used for metadata writes
- * @credits: variable that will receive credits for the buffer
  *
  * Returns an error code or 0 on success.
  *
···
 	 * committed and so it's safe to clear the dirty bit.
 	 */
 	clear_buffer_dirty(jh2bh(jh));
-	jh->b_transaction = transaction;
-
 	/* first access by this transaction */
 	jh->b_modified = 0;
···
  *     non-rewindable consequences
  * @handle: transaction
  * @bh: buffer to undo
- * @credits: store the number of taken credits here (if not NULL)
  *
  * Sometimes there is a need to distinguish between metadata which has
  * been committed to disk and that which has not.  The ext3fs code uses
···
 		__jbd2_journal_file_buffer(jh, transaction, BJ_Forget);
 	} else {
 		__jbd2_journal_unfile_buffer(jh);
-		jbd2_journal_remove_journal_head(bh);
-		__brelse(bh);
 		if (!buffer_jbd(bh)) {
 			spin_unlock(&journal->j_list_lock);
 			jbd_unlock_bh_state(bh);
···
 	mark_buffer_dirty(bh);	/* Expose it to the VM */
 }

-void __jbd2_journal_unfile_buffer(struct journal_head *jh)
+/*
+ * Remove buffer from all transactions.
+ *
+ * Called with bh_state lock and j_list_lock
+ *
+ * jh and bh may be already freed when this function returns.
+ */
+static void __jbd2_journal_unfile_buffer(struct journal_head *jh)
 {
 	__jbd2_journal_temp_unlink_buffer(jh);
 	jh->b_transaction = NULL;
+	jbd2_journal_put_journal_head(jh);
 }

 void jbd2_journal_unfile_buffer(journal_t *journal, struct journal_head *jh)
 {
-	jbd_lock_bh_state(jh2bh(jh));
+	struct buffer_head *bh = jh2bh(jh);
+
+	/* Get reference so that buffer cannot be freed before we unlock it */
+	get_bh(bh);
+	jbd_lock_bh_state(bh);
 	spin_lock(&journal->j_list_lock);
 	__jbd2_journal_unfile_buffer(jh);
 	spin_unlock(&journal->j_list_lock);
-	jbd_unlock_bh_state(jh2bh(jh));
+	jbd_unlock_bh_state(bh);
+	__brelse(bh);
 }

 /*
···
 		if (jh->b_jlist == BJ_None) {
 			JBUFFER_TRACE(jh, "remove from checkpoint list");
 			__jbd2_journal_remove_checkpoint(jh);
-			jbd2_journal_remove_journal_head(bh);
-			__brelse(bh);
 		}
 	}
 	spin_unlock(&journal->j_list_lock);
···
 	/*
 	 * We take our own ref against the journal_head here to avoid
 	 * having to add tons of locking around each instance of
-	 * jbd2_journal_remove_journal_head() and
 	 * jbd2_journal_put_journal_head().
 	 */
 	jh = jbd2_journal_grab_journal_head(bh);
···
 	int may_free = 1;
 	struct buffer_head *bh = jh2bh(jh);

-	__jbd2_journal_unfile_buffer(jh);
-
 	if (jh->b_cp_transaction) {
 		JBUFFER_TRACE(jh, "on running+cp transaction");
+		__jbd2_journal_temp_unlink_buffer(jh);
 		/*
 		 * We don't want to write the buffer anymore, clear the
 		 * bit so that we don't confuse checks in
···
 		may_free = 0;
 	} else {
 		JBUFFER_TRACE(jh, "on running transaction");
-		jbd2_journal_remove_journal_head(bh);
-		__brelse(bh);
+		__jbd2_journal_unfile_buffer(jh);
 	}
 	return may_free;
 }
···

 	if (jh->b_transaction)
 		__jbd2_journal_temp_unlink_buffer(jh);
+	else
+		jbd2_journal_grab_journal_head(bh);
 	jh->b_transaction = transaction;

 	switch (jlist) {
···
  * already started to be used by a subsequent transaction, refile the
  * buffer on that transaction's metadata list.
  *
- * Called under journal->j_list_lock
- *
+ * Called under j_list_lock
  * Called under jbd_lock_bh_state(jh2bh(jh))
+ *
+ * jh and bh may be already free when this function returns
  */
 void __jbd2_journal_refile_buffer(struct journal_head *jh)
 {
···

 	was_dirty = test_clear_buffer_jbddirty(bh);
 	__jbd2_journal_temp_unlink_buffer(jh);
+	/*
+	 * We set b_transaction here because b_next_transaction will inherit
+	 * our jh reference and thus __jbd2_journal_file_buffer() must not
+	 * take a new one.
+	 */
 	jh->b_transaction = jh->b_next_transaction;
 	jh->b_next_transaction = NULL;
 	if (buffer_freed(bh))
···
 }

 /*
- * For the unlocked version of this call, also make sure that any
- * hanging journal_head is cleaned up if necessary.
- *
- * __jbd2_journal_refile_buffer is usually called as part of a single locked
- * operation on a buffer_head, in which the caller is probably going to
- * be hooking the journal_head onto other lists.  In that case it is up
- * to the caller to remove the journal_head if necessary.  For the
- * unlocked jbd2_journal_refile_buffer call, the caller isn't going to be
- * doing anything else to the buffer so we need to do the cleanup
- * ourselves to avoid a jh leak.
- *
- * *** The journal_head may be freed by this call! ***
+ * __jbd2_journal_refile_buffer() with necessary locking added. We take our
+ * bh reference so that we can safely unlock bh.
+ *
+ * The jh and bh may be freed by this call.
  */
 void jbd2_journal_refile_buffer(journal_t *journal, struct journal_head *jh)
 {
 	struct buffer_head *bh = jh2bh(jh);

+	/* Get reference so that buffer cannot be freed before we unlock it */
+	get_bh(bh);
 	jbd_lock_bh_state(bh);
 	spin_lock(&journal->j_list_lock);
-
 	__jbd2_journal_refile_buffer(jh);
 	jbd_unlock_bh_state(bh);
-	jbd2_journal_remove_journal_head(bh);
-
 	spin_unlock(&journal->j_list_lock);
 	__brelse(bh);
 }
fs/jfs/jfs_imap.c:

···
 	release_metapage(mp);

 	/* set the ag for the inode */
-	JFS_IP(ip)->agno = BLKTOAG(agstart, sbi);
+	JFS_IP(ip)->agstart = agstart;
 	JFS_IP(ip)->active_ag = -1;

 	return (rc);
···

 	/* get the allocation group for this ino.
 	 */
-	agno = JFS_IP(ip)->agno;
+	agno = BLKTOAG(JFS_IP(ip)->agstart, JFS_SBI(ip->i_sb));

 	/* Lock the AG specific inode map information
 	 */
···
 static inline void
 diInitInode(struct inode *ip, int iagno, int ino, int extno, struct iag * iagp)
 {
-	struct jfs_sb_info *sbi = JFS_SBI(ip->i_sb);
 	struct jfs_inode_info *jfs_ip = JFS_IP(ip);

 	ip->i_ino = (iagno << L2INOSPERIAG) + ino;
 	jfs_ip->ixpxd = iagp->inoext[extno];
-	jfs_ip->agno = BLKTOAG(le64_to_cpu(iagp->agstart), sbi);
+	jfs_ip->agstart = le64_to_cpu(iagp->agstart);
 	jfs_ip->active_ag = -1;
 }
···
 	 */

 	/* get the ag number of this iag */
-	agno = JFS_IP(pip)->agno;
+	agno = BLKTOAG(JFS_IP(pip)->agstart, JFS_SBI(pip->i_sb));

 	if (atomic_read(&JFS_SBI(pip->i_sb)->bmap->db_active[agno])) {
 		/*
···
 			continue;
 		}

-		/* agstart that computes to the same ag is treated as same; */
 		agstart = le64_to_cpu(iagp->agstart);
-		/* iagp->agstart = agstart & ~(mp->db_agsize - 1); */
 		n = agstart >> mp->db_agl2size;
+		iagp->agstart = cpu_to_le64((s64)n << mp->db_agl2size);

 		/* compute backed inodes */
 		numinos = (EXTSPERIAG - le32_to_cpu(iagp->nfreeexts))
fs/jfs/jfs_incore.h | +2 -1
···
 	short	btindex;	/* btpage entry index*/
 	struct inode *ipimap;	/* inode map			*/
 	unsigned long cflag;	/* commit flags		*/
+	u64	agstart;	/* agstart of the containing IAG */
 	u16	bxflag;		/* xflag of pseudo buffer?	*/
-	unchar	agno;		/* ag number			*/
+	unchar	pad;
 	signed char active_ag;	/* ag currently allocating from	*/
 	lid_t	blid;		/* lid of pseudo buffer?	*/
 	lid_t	atlhead;	/* anonymous tlock list head	*/
fs/nfs/nfs4filelayout.c:

···
  */

 #include <linux/nfs_fs.h>
+#include <linux/nfs_page.h>

 #include "internal.h"
 #include "nfs4filelayout.h"
···
 		__func__, nfl_util, fl->num_fh, fl->first_stripe_index,
 		fl->pattern_offset);

-	if (!fl->num_fh)
+	/* Note that a zero value for num_fh is legal for STRIPE_SPARSE.
+	 * Further checking is done in filelayout_check_layout */
+	if (fl->num_fh < 0 || fl->num_fh >
+	    max(NFS4_PNFS_MAX_STRIPE_CNT, NFS4_PNFS_MAX_MULTI_CNT))
 		goto out_err;

-	fl->fh_array = kzalloc(fl->num_fh * sizeof(struct nfs_fh *),
-			       gfp_flags);
-	if (!fl->fh_array)
-		goto out_err;
+	if (fl->num_fh > 0) {
+		fl->fh_array = kzalloc(fl->num_fh * sizeof(struct nfs_fh *),
+				       gfp_flags);
+		if (!fl->fh_array)
+			goto out_err;
+	}

 	for (i = 0; i < fl->num_fh; i++) {
 		/* Do we want to use a mempool here? */
···
 	u64 p_stripe, r_stripe;
 	u32 stripe_unit;

-	if (!pnfs_generic_pg_test(pgio, prev, req))
-		return 0;
+	if (!pnfs_generic_pg_test(pgio, prev, req) ||
+	    !nfs_generic_pg_test(pgio, prev, req))
+		return false;

 	if (!pgio->pg_lseg)
 		return 1;
fs/nfs/nfs4proc.c | +27 -18
···
 	return nfs4_map_errors(status);
 }

+static void nfs_fixup_referral_attributes(struct nfs_fattr *fattr);
 /*
  * Get locations and (maybe) other attributes of a referral.
  * Note that we'll actually follow the referral later when
  * we detect fsid mismatch in inode revalidation
  */
-static int nfs4_get_referral(struct inode *dir, const struct qstr *name, struct nfs_fattr *fattr, struct nfs_fh *fhandle)
+static int nfs4_get_referral(struct inode *dir, const struct qstr *name,
+			     struct nfs_fattr *fattr, struct nfs_fh *fhandle)
 {
 	int status = -ENOMEM;
 	struct page *page = NULL;
···
 		goto out;
 	/* Make sure server returned a different fsid for the referral */
 	if (nfs_fsid_equal(&NFS_SERVER(dir)->fsid, &locations->fattr.fsid)) {
-		dprintk("%s: server did not return a different fsid for a referral at %s\n", __func__, name->name);
+		dprintk("%s: server did not return a different fsid for"
+			" a referral at %s\n", __func__, name->name);
 		status = -EIO;
 		goto out;
 	}
+	/* Fixup attributes for the nfs_lookup() call to nfs_fhget() */
+	nfs_fixup_referral_attributes(&locations->fattr);

+	/* replace the lookup nfs_fattr with the locations nfs_fattr */
 	memcpy(fattr, &locations->fattr, sizeof(struct nfs_fattr));
-	fattr->valid |= NFS_ATTR_FATTR_V4_REFERRAL;
-	if (!fattr->mode)
-		fattr->mode = S_IFDIR;
 	memset(fhandle, 0, sizeof(struct nfs_fh));
 out:
 	if (page)
···
 	return len;
 }

+/*
+ * nfs_fhget will use either the mounted_on_fileid or the fileid
+ */
 static void nfs_fixup_referral_attributes(struct nfs_fattr *fattr)
 {
-	if (!((fattr->valid & NFS_ATTR_FATTR_FILEID) &&
-	      (fattr->valid & NFS_ATTR_FATTR_FSID) &&
-	      (fattr->valid & NFS_ATTR_FATTR_V4_REFERRAL)))
+	if (!(((fattr->valid & NFS_ATTR_FATTR_MOUNTED_ON_FILEID) ||
+	       (fattr->valid & NFS_ATTR_FATTR_FILEID)) &&
+	      (fattr->valid & NFS_ATTR_FATTR_FSID) &&
+	      (fattr->valid & NFS_ATTR_FATTR_V4_REFERRAL)))
 		return;

 	fattr->valid |= NFS_ATTR_FATTR_TYPE | NFS_ATTR_FATTR_MODE |
···
 	struct nfs_server *server = NFS_SERVER(dir);
 	u32 bitmask[2] = {
 		[0] = FATTR4_WORD0_FSID | FATTR4_WORD0_FS_LOCATIONS,
-		[1] = FATTR4_WORD1_MOUNTED_ON_FILEID,
 	};
 	struct nfs4_fs_locations_arg args = {
 		.dir_fh = NFS_FH(dir),
···
 	int status;

 	dprintk("%s: start\n", __func__);
+
+	/* Ask for the fileid of the absent filesystem if mounted_on_fileid
+	 * is not supported */
+	if (NFS_SERVER(dir)->attr_bitmask[1] & FATTR4_WORD1_MOUNTED_ON_FILEID)
+		bitmask[1] |= FATTR4_WORD1_MOUNTED_ON_FILEID;
+	else
+		bitmask[0] |= FATTR4_WORD0_FILEID;
+
 	nfs_fattr_init(&fs_locations->fattr);
 	fs_locations->server = server;
 	fs_locations->nlocations = 0;
 	status = nfs4_call_sync(server->client, server, &msg, &args.seq_args, &res.seq_res, 0);
-	nfs_fixup_referral_attributes(&fs_locations->fattr);
 	dprintk("%s: returned status = %d\n", __func__, status);
 	return status;
 }
···
 	if (mxresp_sz == 0)
 		mxresp_sz = NFS_MAX_FILE_IO_SIZE;
 	/* Fore channel attributes */
-	args->fc_attrs.headerpadsz = 0;
 	args->fc_attrs.max_rqst_sz = mxrqst_sz;
 	args->fc_attrs.max_resp_sz = mxresp_sz;
 	args->fc_attrs.max_ops = NFS4_MAX_OPS;
···
 		args->fc_attrs.max_ops, args->fc_attrs.max_reqs);

 	/* Back channel attributes */
-	args->bc_attrs.headerpadsz = 0;
 	args->bc_attrs.max_rqst_sz = PAGE_SIZE;
 	args->bc_attrs.max_resp_sz = PAGE_SIZE;
 	args->bc_attrs.max_resp_sz_cached = 0;
···
 	struct nfs4_channel_attrs *sent = &args->fc_attrs;
 	struct nfs4_channel_attrs *rcvd = &session->fc_attrs;

-	if (rcvd->headerpadsz > sent->headerpadsz)
-		return -EINVAL;
 	if (rcvd->max_resp_sz > sent->max_resp_sz)
 		return -EINVAL;
 	/*
···
 {
 	struct nfs4_layoutreturn *lrp = calldata;
 	struct nfs_server *server;
+	struct pnfs_layout_hdr *lo = NFS_I(lrp->args.inode)->layout;

 	dprintk("--> %s\n", __func__);

···
 		nfs_restart_rpc(task, lrp->clp);
 		return;
 	}
+	spin_lock(&lo->plh_inode->i_lock);
 	if (task->tk_status == 0) {
-		struct pnfs_layout_hdr *lo = NFS_I(lrp->args.inode)->layout;
-
 		if (lrp->res.lrs_present) {
-			spin_lock(&lo->plh_inode->i_lock);
 			pnfs_set_layout_stateid(lo, &lrp->res.stateid, true);
-			spin_unlock(&lo->plh_inode->i_lock);
 		} else
 			BUG_ON(!list_empty(&lo->plh_segs));
 	}
+	lo->plh_block_lgets--;
+	spin_unlock(&lo->plh_inode->i_lock);
 	dprintk("<-- %s\n", __func__);
 }
fs/nfs/pagelist.c:

···
 			TASK_UNINTERRUPTIBLE);
 }

-static bool nfs_generic_pg_test(struct nfs_pageio_descriptor *desc, struct nfs_page *prev, struct nfs_page *req)
+bool nfs_generic_pg_test(struct nfs_pageio_descriptor *desc, struct nfs_page *prev, struct nfs_page *req)
 {
 	/*
 	 * FIXME: ideally we should be able to coalesce all requests
···

 	return desc->pg_count + req->wb_bytes <= desc->pg_bsize;
 }
+EXPORT_SYMBOL_GPL(nfs_generic_pg_test);

 /**
  * nfs_pageio_init - initialise a page io descriptor
fs/nfs/pnfs.c | +31 -13
···

 	spin_lock(&ino->i_lock);
 	lo = nfsi->layout;
-	if (!lo || !mark_matching_lsegs_invalid(lo, &tmp_list, NULL)) {
+	if (!lo) {
 		spin_unlock(&ino->i_lock);
-		dprintk("%s: no layout segments to return\n", __func__);
-		goto out;
+		dprintk("%s: no layout to return\n", __func__);
+		return status;
 	}
 	stateid = nfsi->layout->plh_stateid;
 	/* Reference matched in nfs4_layoutreturn_release */
 	get_layout_hdr(lo);
+	mark_matching_lsegs_invalid(lo, &tmp_list, NULL);
+	lo->plh_block_lgets++;
 	spin_unlock(&ino->i_lock);
 	pnfs_free_lseg_list(&tmp_list);

···
 	lrp = kzalloc(sizeof(*lrp), GFP_KERNEL);
 	if (unlikely(lrp == NULL)) {
 		status = -ENOMEM;
+		set_bit(NFS_LAYOUT_RW_FAILED, &lo->plh_flags);
+		set_bit(NFS_LAYOUT_RO_FAILED, &lo->plh_flags);
+		put_layout_hdr(lo);
 		goto out;
 	}

···
 			ret = get_lseg(lseg);
 			break;
 		}
-		if (cmp_layout(range, &lseg->pls_range) > 0)
+		if (lseg->pls_range.offset > range->offset)
 			break;
 	}

···
 		gfp_flags = GFP_NOFS;
 	}

-	if (pgio->pg_count == prev->wb_bytes) {
+	if (pgio->pg_lseg == NULL) {
+		if (pgio->pg_count != prev->wb_bytes)
+			return true;
 		/* This is first coalesce call for a series of nfs_pages */
 		pgio->pg_lseg = pnfs_update_layout(pgio->pg_inode,
 						   prev->wb_context,
-						   req_offset(req),
+						   req_offset(prev),
 						   pgio->pg_count,
 						   access_type,
 						   gfp_flags);
-		return true;
+		if (pgio->pg_lseg == NULL)
+			return true;
 	}

-	if (pgio->pg_lseg &&
-	    req_offset(req) > end_offset(pgio->pg_lseg->pls_range.offset,
-					 pgio->pg_lseg->pls_range.length))
-		return false;
-
-	return true;
+	/*
+	 * Test if a nfs_page is fully contained in the pnfs_layout_range.
+	 * Note that this test makes several assumptions:
+	 * - that the previous nfs_page in the struct nfs_pageio_descriptor
+	 *   is known to lie within the range.
+	 * - that the nfs_page being tested is known to be contiguous with the
+	 *   previous nfs_page.
+	 * - Layout ranges are page aligned, so we only have to test the
+	 *   start offset of the request.
+	 *
+	 * Please also note that 'end_offset' is actually the offset of the
+	 * first byte that lies outside the pnfs_layout_range. FIXME?
+	 */
+	return req_offset(req) < end_offset(pgio->pg_lseg->pls_range.offset,
+					    pgio->pg_lseg->pls_range.length);
 }
 EXPORT_SYMBOL_GPL(pnfs_generic_pg_test);
fs/romfs/mmap-nommu.c:

···
 {
 	struct inode *inode = file->f_mapping->host;
 	struct mtd_info *mtd = inode->i_sb->s_mtd;
-	unsigned long isize, offset;
+	unsigned long isize, offset, maxpages, lpages;

 	if (!mtd)
 		goto cant_map_directly;

+	/* the mapping mustn't extend beyond the EOF */
+	lpages = (len + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	isize = i_size_read(inode);
 	offset = pgoff << PAGE_SHIFT;
-	if (offset > isize || len > isize || offset > isize - len)
+
+	maxpages = (isize + PAGE_SIZE - 1) >> PAGE_SHIFT;
+	if ((pgoff >= maxpages) || (maxpages - pgoff < lpages))
 		return (unsigned long) -EINVAL;

 	/* we need to call down to the MTD layer to do the actual mapping */
+7
fs/xfs/xfs_attr.c
···490490 args.whichfork = XFS_ATTR_FORK;491491492492 /*493493+ * we have no control over the attribute names that userspace passes us494494+ * to remove, so we have to allow the name lookup prior to attribute495495+ * removal to fail.496496+ */497497+ args.op_flags = XFS_DA_OP_OKNOENT;498498+499499+ /*493500 * Attach the dquots to the inode.494501 */495502 error = xfs_qm_dqattach(dp, 0);
+9-4
fs/xfs/xfs_iget.c
···253253 rcu_read_lock();254254 spin_lock(&ip->i_flags_lock);255255256256- ip->i_flags &= ~XFS_INEW;257257- ip->i_flags |= XFS_IRECLAIMABLE;258258- __xfs_inode_set_reclaim_tag(pag, ip);256256+ ip->i_flags &= ~(XFS_INEW | XFS_IRECLAIM);257257+ ASSERT(ip->i_flags & XFS_IRECLAIMABLE);259258 trace_xfs_iget_reclaim_fail(ip);260259 goto out_error;261260 }262261263262 spin_lock(&pag->pag_ici_lock);264263 spin_lock(&ip->i_flags_lock);265265- ip->i_flags &= ~(XFS_IRECLAIMABLE | XFS_IRECLAIM);264264+265265+ /*266266+ * Clear the per-lifetime state in the inode as we are now267267+ * effectively a new inode and need to return to the initial268268+ * state before reuse occurs.269269+ */270270+ ip->i_flags &= ~XFS_IRECLAIM_RESET_FLAGS;266271 ip->i_flags |= XFS_INEW;267272 __xfs_inode_clear_reclaim_tag(mp, pag, ip);268273 inode->i_state = I_NEW;
+10
fs/xfs/xfs_inode.h
···384384#define XFS_IDIRTY_RELEASE 0x0040 /* dirty release already seen */385385386386/*387387+ * Per-lifetime flags need to be reset when re-using a reclaimable inode during388388+ * inode lookup. This prevents unintended behaviour on the new inode from389389+ * occurring.390390+ */391391+#define XFS_IRECLAIM_RESET_FLAGS \392392+ (XFS_IRECLAIMABLE | XFS_IRECLAIM | \393393+ XFS_IDIRTY_RELEASE | XFS_ITRUNCATED | \394394+ XFS_IFILESTREAM)395395+396396+/*387397 * Flags for inode locking.388398 * Bit ranges: 1<<1 - 1<<16-1 -- iolock/ilock modes (bitfield)389399 * 1<<16 - 1<<32-1 -- lockdep annotation (integers)
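The per-lifetime mask added above groups several flag bits so they can all be cleared in one operation when an inode is recycled. A stand-alone sketch of the pattern (the flag values and names here are made up for illustration, not the real XFS bits):

```c
#include <assert.h>

/* illustrative flag bits */
#define F_RECLAIMABLE   0x0001
#define F_RECLAIM       0x0002
#define F_DIRTY_RELEASE 0x0004
#define F_TRUNCATED     0x0008
#define F_NEW           0x0010

/* per-lifetime state: everything that must not leak into a reused object */
#define F_RESET_MASK \
	(F_RECLAIMABLE | F_RECLAIM | F_DIRTY_RELEASE | F_TRUNCATED)

static unsigned int recycle_flags(unsigned int flags)
{
	flags &= ~F_RESET_MASK;	/* drop stale per-lifetime bits */
	flags |= F_NEW;		/* mark the object as freshly initialised */
	return flags;
}
```

Note that `F_RESET_MASK` deliberately has no trailing semicolon: an expression macro that ends in `;` would break any use of the mask inside a larger expression, which is exactly why the semicolon was dropped from the hunk above.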
+5-2
fs/xfs/xfs_vnodeops.c
···960960 * be exposed to that problem.961961 */962962 truncated = xfs_iflags_test_and_clear(ip, XFS_ITRUNCATED);963963- if (truncated && VN_DIRTY(VFS_I(ip)) && ip->i_delayed_blks > 0)964964- xfs_flush_pages(ip, 0, -1, XBF_ASYNC, FI_NONE);963963+ if (truncated) {964964+ xfs_iflags_clear(ip, XFS_IDIRTY_RELEASE);965965+ if (VN_DIRTY(VFS_I(ip)) && ip->i_delayed_blks > 0)966966+ xfs_flush_pages(ip, 0, -1, XBF_ASYNC, FI_NONE);967967+ }965968 }966969967970 if (ip->i_d.di_nlink == 0)
···530530 * @dma_mem: Internal for coherent mem override.531531 * @archdata: For arch-specific additions.532532 * @of_node: Associated device tree node.533533- * @of_match: Matching of_device_id from driver.534533 * @devt: For creating the sysfs "dev".535534 * @devres_lock: Spinlock to protect the resource of the device.536535 * @devres_head: The resources list of the device.···653654654655static inline void device_enable_async_suspend(struct device *dev)655656{656656- if (!dev->power.in_suspend)657657+ if (!dev->power.is_prepared)657658 dev->power.async_suspend = true;658659}659660660661static inline void device_disable_async_suspend(struct device *dev)661662{662662- if (!dev->power.in_suspend)663663+ if (!dev->power.is_prepared)663664 dev->power.async_suspend = false;664665}665666
+1
include/linux/fs.h
···639639 struct prio_tree_root i_mmap; /* tree of private and shared mappings */640640 struct list_head i_mmap_nonlinear;/*list VM_NONLINEAR mappings */641641 struct mutex i_mmap_mutex; /* protect tree, count, list */642642+ /* Protected by tree_lock together with the radix tree */642643 unsigned long nrpages; /* number of total pages */643644 pgoff_t writeback_index;/* writeback starts here */644645 const struct address_space_operations *a_ops; /* methods */
+1
include/linux/hrtimer.h
···135135 * @cpu_base: per cpu clock base136136 * @index: clock type index for per_cpu support when moving a137137 * timer to a base on another cpu.138138+ * @clockid: clock id for per_cpu support138139 * @active: red black tree root node for the active timers139140 * @resolution: the resolution of the clock, in nanoseconds140141 * @get_time: function to retrieve the current time of the clock
···113113 if (error)114114 pm_notifier_call_chain(PM_POST_RESTORE);115115 }116116- if (error)116116+ if (error) {117117+ free_basic_memory_bitmaps();117118 atomic_inc(&snapshot_device_available);119119+ }118120 data->frozen = 0;119121 data->ready = 0;120122 data->platform_support = 0;
+12-3
kernel/taskstats.c
···285285static int add_del_listener(pid_t pid, const struct cpumask *mask, int isadd)286286{287287 struct listener_list *listeners;288288- struct listener *s, *tmp;288288+ struct listener *s, *tmp, *s2;289289 unsigned int cpu;290290291291 if (!cpumask_subset(mask, cpu_possible_mask))292292 return -EINVAL;293293294294+ s = NULL;294295 if (isadd == REGISTER) {295296 for_each_cpu(cpu, mask) {296296- s = kmalloc_node(sizeof(struct listener), GFP_KERNEL,297297- cpu_to_node(cpu));297297+ if (!s)298298+ s = kmalloc_node(sizeof(struct listener),299299+ GFP_KERNEL, cpu_to_node(cpu));298300 if (!s)299301 goto cleanup;300302 s->pid = pid;···305303306304 listeners = &per_cpu(listener_array, cpu);307305 down_write(&listeners->sem);306306+ list_for_each_entry_safe(s2, tmp, &listeners->list, list) {307307+ if (s2->pid == pid)308308+ goto next_cpu;309309+ }308310 list_add(&s->list, &listeners->list);311311+ s = NULL;312312+next_cpu:309313 up_write(&listeners->sem);310314 }315315+ kfree(s);311316 return 0;312317 }313318
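The taskstats fix above allocates one listener at a time, reuses the unconsumed allocation on the next CPU whenever the pid turns out to be a duplicate, and frees whatever is left over at the end. The same allocate-once / skip-duplicates shape, sketched against plain singly linked lists instead of the kernel's per-CPU `listener_array` (the helper name and structure are invented for this sketch):

```c
#include <assert.h>
#include <stdlib.h>

struct listener { int pid; struct listener *next; };

/* One list per "cpu". Returns 0 on success, -1 on allocation failure. */
static int add_listener(struct listener **lists, int ncpu, int pid)
{
	struct listener *s = NULL, *it;
	int cpu;

	for (cpu = 0; cpu < ncpu; cpu++) {
		if (!s)				/* reuse an unconsumed allocation */
			s = malloc(sizeof(*s));
		if (!s)
			return -1;		/* ENOMEM */
		s->pid = pid;

		for (it = lists[cpu]; it; it = it->next)
			if (it->pid == pid)
				goto next_cpu;	/* already registered here */

		s->next = lists[cpu];
		lists[cpu] = s;
		s = NULL;			/* consumed; allocate anew next time */
next_cpu:
		;
	}
	free(s);	/* leftover when the last CPUs were all duplicates */
	return 0;
}
```

This mirrors both halves of the patch: the duplicate scan under the per-list lock prevents double registration, and deferring `kfree` to a single call after the loop avoids both the original leak and a per-iteration free/alloc churn.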
+88-70
kernel/time/alarmtimer.c
···4242 clockid_t base_clockid;4343} alarm_bases[ALARM_NUMTYPE];44444545+/* freezer delta & lock used to handle clock_nanosleep triggered wakeups */4646+static ktime_t freezer_delta;4747+static DEFINE_SPINLOCK(freezer_delta_lock);4848+4549#ifdef CONFIG_RTC_CLASS4650/* rtc timer and device for setting alarm wakeups at suspend */4751static struct rtc_timer rtctimer;4852static struct rtc_device *rtcdev;4949-#endif5353+static DEFINE_SPINLOCK(rtcdev_lock);50545151-/* freezer delta & lock used to handle clock_nanosleep triggered wakeups */5252-static ktime_t freezer_delta;5353-static DEFINE_SPINLOCK(freezer_delta_lock);5555+/**5656+ * has_wakealarm - check rtc device has wakealarm ability5757+ * @dev: current device5858+ * @name_ptr: name to be returned5959+ *6060+ * This helper function checks to see if the rtc device can wake6161+ * from suspend.6262+ */6363+static int has_wakealarm(struct device *dev, void *name_ptr)6464+{6565+ struct rtc_device *candidate = to_rtc_device(dev);6666+6767+ if (!candidate->ops->set_alarm)6868+ return 0;6969+ if (!device_may_wakeup(candidate->dev.parent))7070+ return 0;7171+7272+ *(const char **)name_ptr = dev_name(dev);7373+ return 1;7474+}7575+7676+/**7777+ * alarmtimer_get_rtcdev - Return selected rtcdevice7878+ *7979+ * This function returns the rtc device to use for wakealarms.8080+ * If one has not already been chosen, it checks to see if a8181+ * functional rtc device is available.8282+ */8383+static struct rtc_device *alarmtimer_get_rtcdev(void)8484+{8585+ struct device *dev;8686+ char *str;8787+ unsigned long flags;8888+ struct rtc_device *ret;8989+9090+ spin_lock_irqsave(&rtcdev_lock, flags);9191+ if (!rtcdev) {9292+ /* Find an rtc device and init the rtc_timer */9393+ dev = class_find_device(rtc_class, NULL, &str, has_wakealarm);9494+ /* If we have a device then str is valid. 
See has_wakealarm() */9595+ if (dev) {9696+ rtcdev = rtc_class_open(str);9797+ /*9898+ * Drop the reference we got in class_find_device,9999+ * rtc_open takes its own.100100+ */101101+ put_device(dev);102102+ rtc_timer_init(&rtctimer, NULL, NULL);103103+ }104104+ }105105+ ret = rtcdev;106106+ spin_unlock_irqrestore(&rtcdev_lock, flags);107107+108108+ return ret;109109+}110110+#else111111+#define alarmtimer_get_rtcdev() (0)112112+#define rtcdev (0)113113+#endif541145511556116/**···226166 struct rtc_time tm;227167 ktime_t min, now;228168 unsigned long flags;169169+ struct rtc_device *rtc;229170 int i;230171231172 spin_lock_irqsave(&freezer_delta_lock, flags);···234173 freezer_delta = ktime_set(0, 0);235174 spin_unlock_irqrestore(&freezer_delta_lock, flags);236175176176+ rtc = rtcdev;237177 /* If we have no rtcdev, just return */238238- if (!rtcdev)178178+ if (!rtc)239179 return 0;240180241181 /* Find the soonest timer to expire*/···261199 WARN_ON(min.tv64 < NSEC_PER_SEC);262200263201 /* Setup an rtc timer to fire that far in the future */264264- rtc_timer_cancel(rtcdev, &rtctimer);265265- rtc_read_time(rtcdev, &tm);202202+ rtc_timer_cancel(rtc, &rtctimer);203203+ rtc_read_time(rtc, &tm);266204 now = rtc_tm_to_ktime(tm);267205 now = ktime_add(now, min);268206269269- rtc_timer_start(rtcdev, &rtctimer, now, ktime_set(0, 0));207207+ rtc_timer_start(rtc, &rtctimer, now, ktime_set(0, 0));270208271209 return 0;272210}···384322{385323 clockid_t baseid = alarm_bases[clock2alarm(which_clock)].base_clockid;386324325325+ if (!alarmtimer_get_rtcdev())326326+ return -ENOTSUPP;327327+387328 return hrtimer_get_res(baseid, tp);388329}389330···400335static int alarm_clock_get(clockid_t which_clock, struct timespec *tp)401336{402337 struct alarm_base *base = &alarm_bases[clock2alarm(which_clock)];338338+339339+ if (!alarmtimer_get_rtcdev())340340+ return -ENOTSUPP;403341404342 *tp = ktime_to_timespec(base->gettime());405343 return 0;···418350{419351 enum alarmtimer_type type;420352 
struct alarm_base *base;353353+354354+ if (!alarmtimer_get_rtcdev())355355+ return -ENOTSUPP;421356422357 if (!capable(CAP_WAKE_ALARM))423358 return -EPERM;···456385 */457386static int alarm_timer_del(struct k_itimer *timr)458387{388388+ if (!rtcdev)389389+ return -ENOTSUPP;390390+459391 alarm_cancel(&timr->it.alarmtimer);460392 return 0;461393}···476402 struct itimerspec *new_setting,477403 struct itimerspec *old_setting)478404{405405+ if (!rtcdev)406406+ return -ENOTSUPP;407407+479408 /* Save old values */480409 old_setting->it_interval =481410 ktime_to_timespec(timr->it.alarmtimer.period);···618541 int ret = 0;619542 struct restart_block *restart;620543544544+ if (!alarmtimer_get_rtcdev())545545+ return -ENOTSUPP;546546+621547 if (!capable(CAP_WAKE_ALARM))622548 return -EPERM;623549···718638}719639device_initcall(alarmtimer_init);720640721721-#ifdef CONFIG_RTC_CLASS722722-/**723723- * has_wakealarm - check rtc device has wakealarm ability724724- * @dev: current device725725- * @name_ptr: name to be returned726726- *727727- * This helper function checks to see if the rtc device can wake728728- * from suspend.729729- */730730-static int __init has_wakealarm(struct device *dev, void *name_ptr)731731-{732732- struct rtc_device *candidate = to_rtc_device(dev);733733-734734- if (!candidate->ops->set_alarm)735735- return 0;736736- if (!device_may_wakeup(candidate->dev.parent))737737- return 0;738738-739739- *(const char **)name_ptr = dev_name(dev);740740- return 1;741741-}742742-743743-/**744744- * alarmtimer_init_late - Late initializing of alarmtimer code745745- *746746- * This function locates a rtc device to use for wakealarms.747747- * Run as late_initcall to make sure rtc devices have been748748- * registered.749749- */750750-static int __init alarmtimer_init_late(void)751751-{752752- struct device *dev;753753- char *str;754754-755755- /* Find an rtc device and init the rtc_timer */756756- dev = class_find_device(rtc_class, NULL, &str, has_wakealarm);757757- /* 
If we have a device then str is valid. See has_wakealarm() */758758- if (dev) {759759- rtcdev = rtc_class_open(str);760760- /*761761- * Drop the reference we got in class_find_device,762762- * rtc_open takes its own.763763- */764764- put_device(dev);765765- }766766- if (!rtcdev) {767767- printk(KERN_WARNING "No RTC device found, ALARM timers will"768768- " not wake from suspend");769769- }770770- rtc_timer_init(&rtctimer, NULL, NULL);771771-772772- return 0;773773-}774774-#else775775-static int __init alarmtimer_init_late(void)776776-{777777- printk(KERN_WARNING "Kernel not built with RTC support, ALARM timers"778778- " will not wake from suspend");779779- return 0;780780-}781781-#endif782782-late_initcall(alarmtimer_init_late);
···391391 struct task_struct *tsk;392392 struct anon_vma *av;393393394394- read_lock(&tasklist_lock);395394 av = page_lock_anon_vma(page);396395 if (av == NULL) /* Not actually mapped anymore */397397- goto out;396396+ return;397397+398398+ read_lock(&tasklist_lock);398399 for_each_process (tsk) {399400 struct anon_vma_chain *vmac;400401···409408 add_to_kill(tsk, page, vma, to_kill, tkc);410409 }411410 }412412- page_unlock_anon_vma(av);413413-out:414411 read_unlock(&tasklist_lock);412412+ page_unlock_anon_vma(av);415413}416414417415/*···424424 struct prio_tree_iter iter;425425 struct address_space *mapping = page->mapping;426426427427- /*428428- * A note on the locking order between the two locks.429429- * We don't rely on this particular order.430430- * If you have some other code that needs a different order431431- * feel free to switch them around. Or add a reverse link432432- * from mm_struct to task_struct, then this could be all433433- * done without taking tasklist_lock and looping over all tasks.434434- */435435-436436- read_lock(&tasklist_lock);437427 mutex_lock(&mapping->i_mmap_mutex);428428+ read_lock(&tasklist_lock);438429 for_each_process(tsk) {439430 pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);440431···445454 add_to_kill(tsk, page, vma, to_kill, tkc);446455 }447456 }448448- mutex_unlock(&mapping->i_mmap_mutex);449457 read_unlock(&tasklist_lock);458458+ mutex_unlock(&mapping->i_mmap_mutex);450459}451460452461/*
-24
mm/memory.c
···27982798}27992799EXPORT_SYMBOL(unmap_mapping_range);2800280028012801-int vmtruncate_range(struct inode *inode, loff_t offset, loff_t end)28022802-{28032803- struct address_space *mapping = inode->i_mapping;28042804-28052805- /*28062806- * If the underlying filesystem is not going to provide28072807- * a way to truncate a range of blocks (punch a hole) -28082808- * we should return failure right now.28092809- */28102810- if (!inode->i_op->truncate_range)28112811- return -ENOSYS;28122812-28132813- mutex_lock(&inode->i_mutex);28142814- down_write(&inode->i_alloc_sem);28152815- unmap_mapping_range(mapping, offset, (end - offset), 1);28162816- truncate_inode_pages_range(mapping, offset, end);28172817- unmap_mapping_range(mapping, offset, (end - offset), 1);28182818- inode->i_op->truncate_range(inode, offset, end);28192819- up_write(&inode->i_alloc_sem);28202820- mutex_unlock(&inode->i_mutex);28212821-28222822- return 0;28232823-}28242824-28252801/*28262802 * We enter with non-exclusive mmap_sem (to exclude vma changes,28272803 * but allow concurrent faults), and pte mapped but not yet locked.
+3-1
mm/memory_hotplug.c
···498498 * The node we allocated has no zone fallback lists. For avoiding499499 * to access not-initialized zonelist, build here.500500 */501501+ mutex_lock(&zonelists_mutex);501502 build_all_zonelists(NULL);503503+ mutex_unlock(&zonelists_mutex);502504503505 return pgdat;504506}···523521524522 lock_memory_hotplug();525523 pgdat = hotadd_new_pgdat(nid, 0);526526- if (pgdat) {524524+ if (!pgdat) {527525 ret = -ENOMEM;528526 goto out;529527 }
+2-3
mm/rmap.c
···3838 * in arch-dependent flush_dcache_mmap_lock,3939 * within inode_wb_list_lock in __sync_single_inode)4040 *4141- * (code doesn't rely on that order so it could be switched around)4242- * ->tasklist_lock4343- * anon_vma->mutex (memory_failure, collect_procs_anon)4141+ * anon_vma->mutex,mapping->i_mutex (memory_failure, collect_procs_anon)4242+ * ->tasklist_lock4443 * pte map lock4544 */4645
+52-22
mm/shmem.c
···539539 } while (next);540540}541541542542-static void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end)542542+void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end)543543{544544 struct shmem_inode_info *info = SHMEM_I(inode);545545 unsigned long idx;···561561 spinlock_t *needs_lock;562562 spinlock_t *punch_lock;563563 unsigned long upper_limit;564564+565565+ truncate_inode_pages_range(inode->i_mapping, start, end);564566565567 inode->i_ctime = inode->i_mtime = CURRENT_TIME;566568 idx = (start + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;···740738 * lowered next_index. Also, though shmem_getpage checks741739 * i_size before adding to cache, no recheck after: so fix the742740 * narrow window there too.743743- *744744- * Recalling truncate_inode_pages_range and unmap_mapping_range745745- * every time for punch_hole (which never got a chance to clear746746- * SHMEM_PAGEIN at the start of vmtruncate_range) is expensive,747747- * yet hardly ever necessary: try to optimize them out later.748741 */749742 truncate_inode_pages_range(inode->i_mapping, start, end);750750- if (punch_hole)751751- unmap_mapping_range(inode->i_mapping, start,752752- end - start, 1);753743 }754744755745 spin_lock(&info->lock);···760766 shmem_free_pages(pages_to_free.next);761767 }762768}769769+EXPORT_SYMBOL_GPL(shmem_truncate_range);763770764764-static int shmem_notify_change(struct dentry *dentry, struct iattr *attr)771771+static int shmem_setattr(struct dentry *dentry, struct iattr *attr)765772{766773 struct inode *inode = dentry->d_inode;767767- loff_t newsize = attr->ia_size;768774 int error;769775770776 error = inode_change_ok(inode, attr);771777 if (error)772778 return error;773779774774- if (S_ISREG(inode->i_mode) && (attr->ia_valid & ATTR_SIZE)775775- && newsize != inode->i_size) {780780+ if (S_ISREG(inode->i_mode) && (attr->ia_valid & ATTR_SIZE)) {781781+ loff_t oldsize = inode->i_size;782782+ loff_t newsize = attr->ia_size;776783 struct page *page = 
NULL;777784778778- if (newsize < inode->i_size) {785785+ if (newsize < oldsize) {779786 /*780787 * If truncating down to a partial page, then781788 * if that page is already allocated, hold it···805810 spin_unlock(&info->lock);806811 }807812 }808808-809809- /* XXX(truncate): truncate_setsize should be called last */810810- truncate_setsize(inode, newsize);813813+ if (newsize != oldsize) {814814+ i_size_write(inode, newsize);815815+ inode->i_ctime = inode->i_mtime = CURRENT_TIME;816816+ }817817+ if (newsize < oldsize) {818818+ loff_t holebegin = round_up(newsize, PAGE_SIZE);819819+ unmap_mapping_range(inode->i_mapping, holebegin, 0, 1);820820+ shmem_truncate_range(inode, newsize, (loff_t)-1);821821+ /* unmap again to remove racily COWed private pages */822822+ unmap_mapping_range(inode->i_mapping, holebegin, 0, 1);823823+ }811824 if (page)812825 page_cache_release(page);813813- shmem_truncate_range(inode, newsize, (loff_t)-1);814826 }815827816828 setattr_copy(inode, attr);···834832 struct shmem_xattr *xattr, *nxattr;835833836834 if (inode->i_mapping->a_ops == &shmem_aops) {837837- truncate_inode_pages(inode->i_mapping, 0);838835 shmem_unacct_size(info->flags, inode->i_size);839836 inode->i_size = 0;840837 shmem_truncate_range(inode, 0, (loff_t)-1);···27072706};2708270727092708static const struct inode_operations shmem_inode_operations = {27102710- .setattr = shmem_notify_change,27092709+ .setattr = shmem_setattr,27112710 .truncate_range = shmem_truncate_range,27122711#ifdef CONFIG_TMPFS_XATTR27132712 .setxattr = shmem_setxattr,···27402739 .removexattr = shmem_removexattr,27412740#endif27422741#ifdef CONFIG_TMPFS_POSIX_ACL27432743- .setattr = shmem_notify_change,27422742+ .setattr = shmem_setattr,27442743 .check_acl = generic_check_acl,27452744#endif27462745};···27532752 .removexattr = shmem_removexattr,27542753#endif27552754#ifdef CONFIG_TMPFS_POSIX_ACL27562756- .setattr = shmem_notify_change,27552755+ .setattr = shmem_setattr,27572756 .check_acl = 
generic_check_acl,27582757#endif27592758};···29092908 return 0;29102909}2911291029112911+void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end)29122912+{29132913+ truncate_inode_pages_range(inode->i_mapping, start, end);29142914+}29152915+EXPORT_SYMBOL_GPL(shmem_truncate_range);29162916+29122917#ifdef CONFIG_CGROUP_MEM_RES_CTLR29132918/**29142919 * mem_cgroup_get_shmem_target - find a page or entry assigned to the shmem file···30353028 vma->vm_flags |= VM_CAN_NONLINEAR;30363029 return 0;30373030}30313031+30323032+/**30333033+ * shmem_read_mapping_page_gfp - read into page cache, using specified page allocation flags.30343034+ * @mapping: the page's address_space30353035+ * @index: the page index30363036+ * @gfp: the page allocator flags to use if allocating30373037+ *30383038+ * This behaves as a tmpfs "read_cache_page_gfp(mapping, index, gfp)",30393039+ * with any new page allocations done using the specified allocation flags.30403040+ * But read_cache_page_gfp() uses the ->readpage() method: which does not30413041+ * suit tmpfs, since it may have pages in swapcache, and needs to find those30423042+ * for itself; although drivers/gpu/drm i915 and ttm rely upon this support.30433043+ *30443044+ * Provide a stub for those callers to start using now, then later30453045+ * flesh it out to call shmem_getpage() with additional gfp mask, when30463046+ * shmem_file_splice_read() is added and shmem_readpage() is removed.30473047+ */30483048+struct page *shmem_read_mapping_page_gfp(struct address_space *mapping,30493049+ pgoff_t index, gfp_t gfp)30503050+{30513051+ return read_cache_page_gfp(mapping, index, gfp);30523052+}30533053+EXPORT_SYMBOL_GPL(shmem_read_mapping_page_gfp);
···304304 * @lstart: offset from which to truncate305305 *306306 * Called under (and serialised by) inode->i_mutex.307307+ *308308+ * Note: When this function returns, there can be a page in the process of309309+ * deletion (inside __delete_from_page_cache()) in the specified range. Thus310310+ * mapping->nrpages can be non-zero when this function returns even after311311+ * truncation of the whole mapping.307312 */308313void truncate_inode_pages(struct address_space *mapping, loff_t lstart)309314{···608603 return 0;609604}610605EXPORT_SYMBOL(vmtruncate);606606+607607+int vmtruncate_range(struct inode *inode, loff_t offset, loff_t end)608608+{609609+ struct address_space *mapping = inode->i_mapping;610610+611611+ /*612612+ * If the underlying filesystem is not going to provide613613+ * a way to truncate a range of blocks (punch a hole) -614614+ * we should return failure right now.615615+ */616616+ if (!inode->i_op->truncate_range)617617+ return -ENOSYS;618618+619619+ mutex_lock(&inode->i_mutex);620620+ down_write(&inode->i_alloc_sem);621621+ unmap_mapping_range(mapping, offset, (end - offset), 1);622622+ inode->i_op->truncate_range(inode, offset, end);623623+ /* unmap again to remove racily COWed private pages */624624+ unmap_mapping_range(mapping, offset, (end - offset), 1);625625+ up_write(&inode->i_alloc_sem);626626+ mutex_unlock(&inode->i_mutex);627627+628628+ return 0;629629+}
+15-12
mm/vmscan.c
···19951995 * If a zone is deemed to be full of pinned pages then just give it a light19961996 * scan then give up on it.19971997 */19981998-static unsigned long shrink_zones(int priority, struct zonelist *zonelist,19981998+static void shrink_zones(int priority, struct zonelist *zonelist,19991999 struct scan_control *sc)20002000{20012001 struct zoneref *z;20022002 struct zone *zone;20032003 unsigned long nr_soft_reclaimed;20042004 unsigned long nr_soft_scanned;20052005- unsigned long total_scanned = 0;2006200520072006 for_each_zone_zonelist_nodemask(zone, z, zonelist,20082007 gfp_zone(sc->gfp_mask), sc->nodemask) {···20162017 continue;20172018 if (zone->all_unreclaimable && priority != DEF_PRIORITY)20182019 continue; /* Let kswapd poll it */20202020+ /*20212021+ * This steals pages from memory cgroups over softlimit20222022+ * and returns the number of reclaimed pages and20232023+ * scanned pages. This works for global memory pressure20242024+ * and balancing, not for a memcg's limit.20252025+ */20262026+ nr_soft_scanned = 0;20272027+ nr_soft_reclaimed = mem_cgroup_soft_limit_reclaim(zone,20282028+ sc->order, sc->gfp_mask,20292029+ &nr_soft_scanned);20302030+ sc->nr_reclaimed += nr_soft_reclaimed;20312031+ sc->nr_scanned += nr_soft_scanned;20322032+ /* need some check for avoid more shrink_zone() */20192033 }20202020-20212021- nr_soft_scanned = 0;20222022- nr_soft_reclaimed = mem_cgroup_soft_limit_reclaim(zone,20232023- sc->order, sc->gfp_mask,20242024- &nr_soft_scanned);20252025- sc->nr_reclaimed += nr_soft_reclaimed;20262026- total_scanned += nr_soft_scanned;2027203420282035 shrink_zone(priority, zone, sc);20292036 }20302030-20312031- return total_scanned;20322037}2033203820342039static bool zone_reclaimable(struct zone *zone)···20972094 sc->nr_scanned = 0;20982095 if (!priority)20992096 disable_swap_token(sc->mem_cgroup);21002100- total_scanned += shrink_zones(priority, zonelist, sc);20972097+ shrink_zones(priority, zonelist, sc);21012098 /*21022099 * Don't 
shrink slabs when reclaiming memory from21032100 * over limit cgroups
+5
net/8021q/vlan_dev.c
···588588static u32 vlan_dev_fix_features(struct net_device *dev, u32 features)589589{590590 struct net_device *real_dev = vlan_dev_info(dev)->real_dev;591591+ u32 old_features = features;591592592593 features &= real_dev->features;593594 features &= real_dev->vlan_features;595595+596596+ if (old_features & NETIF_F_SOFT_FEATURES)597597+ features |= old_features & NETIF_F_SOFT_FEATURES;598598+594599 if (dev_ethtool_get_rx_csum(real_dev))595600 features |= NETIF_F_RXCSUM;596601 features |= NETIF_F_LLTX;
+3-1
net/bridge/br_device.c
···4949 skb_pull(skb, ETH_HLEN);50505151 rcu_read_lock();5252- if (is_multicast_ether_addr(dest)) {5252+ if (is_broadcast_ether_addr(dest))5353+ br_flood_deliver(br, skb);5454+ else if (is_multicast_ether_addr(dest)) {5355 if (unlikely(netpoll_tx_running(dev))) {5456 br_flood_deliver(br, skb);5557 goto out;
+4-2
net/bridge/br_input.c
···6060 br = p->br;6161 br_fdb_update(br, p, eth_hdr(skb)->h_source);62626363- if (is_multicast_ether_addr(dest) &&6363+ if (!is_broadcast_ether_addr(dest) && is_multicast_ether_addr(dest) &&6464 br_multicast_rcv(br, p, skb))6565 goto drop;6666···77777878 dst = NULL;79798080- if (is_multicast_ether_addr(dest)) {8080+ if (is_broadcast_ether_addr(dest))8181+ skb2 = skb;8282+ else if (is_multicast_ether_addr(dest)) {8183 mdst = br_mdb_get(br, skb);8284 if (mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) {8385 if ((mdst && mdst->mglist) ||
+4-1
net/bridge/br_multicast.c
···13791379 if (unlikely(ip_fast_csum((u8 *)iph, iph->ihl)))13801380 return -EINVAL;1381138113821382- if (iph->protocol != IPPROTO_IGMP)13821382+ if (iph->protocol != IPPROTO_IGMP) {13831383+ if ((iph->daddr & IGMP_LOCAL_GROUP_MASK) != IGMP_LOCAL_GROUP)13841384+ BR_INPUT_SKB_CB(skb)->mrouters_only = 1;13831385 return 0;13861386+ }1384138713851388 len = ntohs(iph->tot_len);13861389 if (skb->len < len || len < ip_hdrlen(skb))
···453453 }454454 unlock_sock_fast(sk, slow);455455456456- if (flags & MSG_DONTWAIT)456456+ if (noblock)457457 return -EAGAIN;458458+459459+ /* starting over for a new packet */460460+ msg->msg_flags &= ~MSG_TRUNC;458461 goto try_again;459462}460463
+2-2
net/sunrpc/auth_gss/auth_gss.c
···577577 }578578 inode = &gss_msg->inode->vfs_inode;579579 for (;;) {580580- prepare_to_wait(&gss_msg->waitqueue, &wait, TASK_INTERRUPTIBLE);580580+ prepare_to_wait(&gss_msg->waitqueue, &wait, TASK_KILLABLE);581581 spin_lock(&inode->i_lock);582582 if (gss_msg->ctx != NULL || gss_msg->msg.errno < 0) {583583 break;584584 }585585 spin_unlock(&inode->i_lock);586586- if (signalled()) {586586+ if (fatal_signal_pending(current)) {587587 err = -ERESTARTSYS;588588 goto out_intr;589589 }
+4-1
net/sunrpc/clnt.c
···1061106110621062 dprintk("RPC: %5u rpc_buffer allocation failed\n", task->tk_pid);1063106310641064- if (RPC_IS_ASYNC(task) || !signalled()) {10641064+ if (RPC_IS_ASYNC(task) || !fatal_signal_pending(current)) {10651065 task->tk_action = call_allocate;10661066 rpc_delay(task, HZ>>4);10671067 return;···11751175 status = -EOPNOTSUPP;11761176 break;11771177 }11781178+ if (task->tk_rebind_retry == 0)11791179+ break;11801180+ task->tk_rebind_retry--;11781181 rpc_delay(task, 3*HZ);11791182 goto retry_timeout;11801183 case -ETIMEDOUT:
···11111212if SND_IMX_SOC13131414-config SND_MXC_SOC_SSI1515- tristate1616-1714config SND_MXC_SOC_FIQ1815 tristate1916···2124 tristate "Audio on the the i.MX31ADS with WM1133-EV1 fitted"2225 depends on MACH_MX31ADS_WM1133_EV1 && EXPERIMENTAL2326 select SND_SOC_WM83502424- select SND_MXC_SOC_SSI2527 select SND_MXC_SOC_FIQ2628 help2729 Enable support for audio on the i.MX31ADS with the WM1133-EV1···3034 tristate "SoC audio support for Visstrim M10 boards"3135 depends on MACH_IMX27_VISSTRIM_M103236 select SND_SOC_TVL320AIC32X43333- select SND_MXC_SOC_SSI3437 select SND_MXC_SOC_MX23538 help3639 Say Y if you want to add support for SoC audio on Visstrim SM10···3944 tristate "SoC Audio support for Phytec phyCORE (and phyCARD) boards"4045 depends on MACH_PCM043 || MACH_PCA1004146 select SND_SOC_WM97124242- select SND_MXC_SOC_SSI4347 select SND_MXC_SOC_FIQ4448 help4549 Say Y if you want to add support for SoC audio on Phytec phyCORE···5157 || MACH_EUKREA_MBIMXSD35_BASEBOARD \5258 || MACH_EUKREA_MBIMXSD51_BASEBOARD5359 select SND_SOC_TLV320AIC235454- select SND_MXC_SOC_SSI5560 select SND_MXC_SOC_FIQ5661 help5762 Enable I2S based access to the TLV320AIC23B codec attached
···9595 if (!card->dev->coherent_dma_mask)9696 card->dev->coherent_dma_mask = DMA_BIT_MASK(32);97979898- if (dai->driver->playback.channels_min) {9898+ if (pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream) {9999 ret = pxa2xx_pcm_preallocate_dma_buffer(pcm,100100 SNDRV_PCM_STREAM_PLAYBACK);101101 if (ret)102102 goto out;103103 }104104105105- if (dai->driver->capture.channels_min) {105105+ if (pcm->streams[SNDRV_PCM_STREAM_CAPTURE].substream) {106106 ret = pxa2xx_pcm_preallocate_dma_buffer(pcm,107107 SNDRV_PCM_STREAM_CAPTURE);108108 if (ret)
-3
sound/soc/soc-cache.c
···409409 codec->bulk_write_raw = snd_soc_hw_bulk_write_raw;410410411411 switch (control) {412412- case SND_SOC_CUSTOM:413413- break;414414-415412 case SND_SOC_I2C:416413#if defined(CONFIG_I2C) || (defined(CONFIG_I2C_MODULE) && defined(MODULE))417414 codec->hw_write = (hw_write_t)i2c_master_send;