@@ -125 +125 @@
 nowayout: Watchdog cannot be stopped once started
 	(default=kernel config parameter)
 -------------------------------------------------
+imx2_wdt:
+timeout: Watchdog timeout in seconds (default 60 s)
+nowayout: Watchdog cannot be stopped once started
+	(default=kernel config parameter)
+-------------------------------------------------
 indydog:
 nowayout: Watchdog cannot be stopped once started
 	(default=kernel config parameter)
MAINTAINERS (+23, -8)

@@ -896 +896 @@
 
 ARM/SAMSUNG ARM ARCHITECTURES
 M:	Ben Dooks <ben-linux@fluff.org>
+M:	Kukjin Kim <kgene.kim@samsung.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 W:	http://www.fluff.org/ben/linux/
 S:	Maintained
-F:	arch/arm/plat-s3c/
+F:	arch/arm/plat-samsung/
 F:	arch/arm/plat-s3c24xx/
+F:	arch/arm/plat-s5p/
 
 ARM/S3C2410 ARM ARCHITECTURE
 M:	Ben Dooks <ben-linux@fluff.org>
@@ -1150 +1148 @@
 F:	drivers/mmc/host/atmel-mci-regs.h
 
 ATMEL AT91 / AT32 SERIAL DRIVER
-M:	Haavard Skinnemoen <hskinnemoen@atmel.com>
+M:	Nicolas Ferre <nicolas.ferre@atmel.com>
 S:	Supported
 F:	drivers/serial/atmel_serial.c
 
@@ -1162 +1160 @@
 F:	include/video/atmel_lcdc.h
 
 ATMEL MACB ETHERNET DRIVER
-M:	Haavard Skinnemoen <hskinnemoen@atmel.com>
+M:	Nicolas Ferre <nicolas.ferre@atmel.com>
 S:	Supported
 F:	drivers/net/macb.*
 
 ATMEL SPI DRIVER
-M:	Haavard Skinnemoen <hskinnemoen@atmel.com>
+M:	Nicolas Ferre <nicolas.ferre@atmel.com>
 S:	Supported
 F:	drivers/spi/atmel_spi.*
 
 ATMEL USBA UDC DRIVER
-M:	Haavard Skinnemoen <hskinnemoen@atmel.com>
-L:	kernel@avr32linux.org
+M:	Nicolas Ferre <nicolas.ferre@atmel.com>
+L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 W:	http://avr32linux.org/twiki/bin/view/Main/AtmelUsbDeviceDriver
 S:	Supported
 F:	drivers/usb/gadget/atmel_usba_udc.*
@@ -2111 +2109 @@
 
 EDAC-I5400
 M:	Mauro Carvalho Chehab <mchehab@redhat.com>
-L:	bluesmoke-devel@lists.sourceforge.net (moderated for non-subscribers)
+L:	linux-edac@vger.kernel.org
 W:	bluesmoke.sourceforge.net
 S:	Maintained
 F:	drivers/edac/i5400_edac.c
+
+EDAC-I7CORE
+M:	Mauro Carvalho Chehab <mchehab@redhat.com>
+L:	linux-edac@vger.kernel.org
+W:	bluesmoke.sourceforge.net
+S:	Maintained
+F:	drivers/edac/i7core_edac.c linux/edac_mce.h drivers/edac/edac_mce.c
 
 EDAC-I82975X
 M:	Ranganathan Desikan <ravi@jetztechnologies.com>
@@ -3382 +3373 @@
 M:	Ananth N Mavinakayanahalli <ananth@in.ibm.com>
 M:	Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
 M:	"David S. Miller" <davem@davemloft.net>
-M:	Masami Hiramatsu <mhiramat@redhat.com>
+M:	Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
 S:	Maintained
 F:	Documentation/kprobes.txt
 F:	include/linux/kprobes.h
@@ -4629 +4620 @@
 M:	Robert Jarzmik <robert.jarzmik@free.fr>
 L:	rtc-linux@googlegroups.com
 S:	Maintained
+
+QLOGIC QLA1280 SCSI DRIVER
+M:	Michael Reed <mdr@sgi.com>
+L:	linux-scsi@vger.kernel.org
+S:	Maintained
+F:	drivers/scsi/qla1280.[ch]
 
 QLOGIC QLA2XXX FC-SCSI DRIVER
 M:	Andrew Vasquez <andrew.vasquez@qlogic.com>
Makefile (+2, -72)

@@ -1 +1 @@
 VERSION = 2
 PATCHLEVEL = 6
 SUBLEVEL = 35
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc4
 NAME = Sheep on Meth
 
 # *DOCUMENTATION*
@@ -883 +883 @@
 $(vmlinux-dirs): prepare scripts
 	$(Q)$(MAKE) $(build)=$@
 
-# Build the kernel release string
-#
-# The KERNELRELEASE value built here is stored in the file
-# include/config/kernel.release, and is used when executing several
-# make targets, such as "make install" or "make modules_install."
-#
-# The eventual kernel release string consists of the following fields,
-# shown in a hierarchical format to show how smaller parts are concatenated
-# to form the larger and final value, with values coming from places like
-# the Makefile, kernel config options, make command line options and/or
-# SCM tag information.
-#
-# $(KERNELVERSION)
-#   $(VERSION)			eg, 2
-#   $(PATCHLEVEL)		eg, 6
-#   $(SUBLEVEL)			eg, 18
-#   $(EXTRAVERSION)		eg, -rc6
-# $(localver-full)
-#   $(localver)
-#     localversion*		(files without backups, containing '~')
-#     $(CONFIG_LOCALVERSION)	(from kernel config setting)
-#   $(LOCALVERSION)		(from make command line, if provided)
-#   $(localver-extra)
-#     $(scm-identifier)		(unique SCM tag, if one exists)
-#       ./scripts/setlocalversion	(only with CONFIG_LOCALVERSION_AUTO)
-#       .scmversion		(only with CONFIG_LOCALVERSION_AUTO)
-#     +				(only without CONFIG_LOCALVERSION_AUTO
-#				 and without LOCALVERSION= and
-#				 repository is at non-tagged commit)
-#
-# For kernels without CONFIG_LOCALVERSION_AUTO compiled from an SCM that has
-# been revised beyond a tagged commit, `+' is appended to the version string
-# when not overridden by using "make LOCALVERSION=". This indicates that the
-# kernel is not a vanilla release version and has been modified.
-
-pattern = ".*/localversion[^~]*"
-string  = $(shell cat /dev/null \
-	   `find $(objtree) $(srctree) -maxdepth 1 -regex $(pattern) | sort -u`)
-
-localver = $(subst $(space),, $(string) \
-			      $(patsubst "%",%,$(CONFIG_LOCALVERSION)))
-
-# scripts/setlocalversion is called to create a unique identifier if the source
-# is managed by a known SCM and the repository has been revised since the last
-# tagged (release) commit. The format of the identifier is determined by the
-# SCM's implementation.
-#
-# .scmversion is used when generating rpm packages so we do not loose
-# the version information from the SCM when we do the build of the kernel
-# from the copied source
-ifeq ($(wildcard .scmversion),)
-	scm-identifier = $(shell $(CONFIG_SHELL) \
-			 $(srctree)/scripts/setlocalversion $(srctree))
-else
-	scm-identifier = $(shell cat .scmversion 2> /dev/null)
-endif
-
-ifdef CONFIG_LOCALVERSION_AUTO
-	localver-extra = $(scm-identifier)
-else
-	ifneq ($(scm-identifier),)
-		ifeq ("$(origin LOCALVERSION)", "undefined")
-			localver-extra = +
-		endif
-	endif
-endif
-
-localver-full = $(localver)$(LOCALVERSION)$(localver-extra)
-
 # Store (new) KERNELRELASE string in include/config/kernel.release
-kernelrelease = $(KERNELVERSION)$(localver-full)
 include/config/kernel.release: include/config/auto.conf FORCE
 	$(Q)rm -f $@
-	$(Q)echo $(kernelrelease) > $@
+	$(Q)echo "$(KERNELVERSION)$$($(CONFIG_SHELL) scripts/setlocalversion $(srctree))" > $@
 
 
 # Things we need to do before we recursively start building the kernel
@@ -21 +21 @@
  * here.  Note that sometimes the signals go through inverters...
  */
 	bool	gpio_vbus_inverted;
-	u16	gpio_vbus;		/* high == vbus present */
+	int	gpio_vbus;		/* high == vbus present */
 	bool	gpio_pullup_inverted;
-	u16	gpio_pullup;		/* high == pullup activated */
+	int	gpio_pullup;		/* high == pullup activated */
 };
 
arch/arm/include/asm/processor.h (+4)

@@ -91 +91 @@
 
 unsigned long get_wchan(struct task_struct *p);
 
+#if __LINUX_ARM_ARCH__ == 6
+#define cpu_relax()			smp_mb()
+#else
 #define cpu_relax()			barrier()
+#endif
 
 /*
  * Create a new kernel thread
@@ -409 +409 @@
 		return 0;
 
 	oh->_clk = omap_clk_get_by_name(oh->main_clk);
-	if (!oh->_clk)
+	if (!oh->_clk) {
 		pr_warning("omap_hwmod: %s: cannot clk_get main_clk %s\n",
 			   oh->name, oh->main_clk);
 		return -EINVAL;
+	}
 
 	if (!oh->_clk->clkdm)
 		pr_warning("omap_hwmod: %s: missing clockdomain for %s.\n",
@@ -445 +444 @@
 			continue;
 
 		c = omap_clk_get_by_name(os->clk);
-		if (!c)
+		if (!c) {
 			pr_warning("omap_hwmod: %s: cannot clk_get interface_clk %s\n",
 				   oh->name, os->clk);
 			ret = -EINVAL;
+		}
 		os->_clk = c;
 	}
 
@@ -472 +470 @@
 
 	for (i = oh->opt_clks_cnt, oc = oh->opt_clks; i > 0; i--, oc++) {
 		c = omap_clk_get_by_name(oc->clk);
-		if (!c)
+		if (!c) {
 			pr_warning("omap_hwmod: %s: cannot clk_get opt_clk %s\n",
 				   oh->name, oc->clk);
 			ret = -EINVAL;
+		}
 		oc->_clk = c;
 	}
 
arch/arm/mach-omap2/pm34xx.c (+2, -2)

@@ -99 +99 @@
 	/* Do a readback to assure write has been done */
 	prm_read_mod_reg(WKUP_MOD, PM_WKEN);
 
-	while (!(prm_read_mod_reg(WKUP_MOD, PM_WKST) &
+	while (!(prm_read_mod_reg(WKUP_MOD, PM_WKEN) &
 		 OMAP3430_ST_IO_CHAIN_MASK)) {
 		timeout++;
 		if (timeout > 1000) {
@@ -108 +108 @@
 			return;
 		}
 		prm_set_mod_reg_bits(OMAP3430_ST_IO_CHAIN_MASK,
-				     WKUP_MOD, PM_WKST);
+				     WKUP_MOD, PM_WKEN);
 		}
 	}
 }
@@ -3 +3 @@
  *
  * Support for the Zipit Z2 Handheld device.
  *
- * Author:	Ken McGuire
- * Created:	Jan 25, 2009
+ * Copyright (C) 2009-2010 Marek Vasut <marek.vasut@gmail.com>
+ *
+ * Based on research and code by: Ken McGuire
  * Based on mainstone.c as modified for the Zipit Z2.
  *
  * This program is free software; you can redistribute it and/or modify
@@ -158 +157 @@
 	{
 		.name	= "U-Boot Bootloader",
 		.offset	= 0x0,
-		.size	= 0x20000,
-	},
-	{
-		.name	= "Linux Kernel",
-		.offset	= 0x20000,
-		.size	= 0x220000,
-	},
-	{
-		.name	= "Filesystem",
-		.offset	= 0x240000,
-		.size	= 0x5b0000,
-	},
-	{
+		.size	= 0x40000,
+	}, {
 		.name	= "U-Boot Environment",
-		.offset	= 0x7f0000,
+		.offset	= 0x40000,
+		.size	= 0x60000,
+	}, {
+		.name	= "Flash",
+		.offset	= 0x60000,
 		.size	= MTDPART_SIZ_FULL,
 	},
 };
arch/arm/mach-realview/Kconfig (+2)

@@ -18 +18 @@
 	bool "Support ARM11MPCore tile"
 	depends on MACH_REALVIEW_EB
 	select CPU_V6
+	select ARCH_HAS_BARRIERS if SMP
 	help
 	  Enable support for the ARM11MPCore tile on the Realview platform.
 
@@ -36 +35 @@
 	select CPU_V6
 	select ARM_GIC
 	select HAVE_PATA_PLATFORM
+	select ARCH_HAS_BARRIERS if SMP
 	help
 	  Include support for the ARM(R) RealView MPCore Platform Baseboard.
 	  PB11MPCore is a platform with an on-board ARM11MPCore and has
arch/arm/mach-realview/include/mach/barriers.h (+8, new file)

@@ -0 +1 @@
+/*
+ * Barriers redefined for RealView ARM11MPCore platforms with L220 cache
+ * controller to work around hardware errata causing the outer_sync()
+ * operation to deadlock the system.
+ */
+#define mb()		dsb()
+#define rmb()		dmb()
+#define wmb()		mb()
@@ -735 +735 @@
 	  Forget about fast user space cmpxchg support.
 	  It is just not possible.
 
+config DMA_CACHE_RWFO
+	bool "Enable read/write for ownership DMA cache maintenance"
+	depends on CPU_V6 && SMP
+	default y
+	help
+	  The Snoop Control Unit on ARM11MPCore does not detect the
+	  cache maintenance operations and the dma_{map,unmap}_area()
+	  functions may leave stale cache entries on other CPUs. By
+	  enabling this option, Read or Write For Ownership in the ARMv6
+	  DMA cache maintenance functions is performed. These LDR/STR
+	  instructions change the cache line state to shared or modified
+	  so that the cache operation has the desired effect.
+
+	  Note that the workaround is only valid on processors that do
+	  not perform speculative loads into the D-cache. For such
+	  processors, if cache maintenance operations are not broadcast
+	  in hardware, other workarounds are needed (e.g. cache
+	  maintenance broadcasting in software via FIQ).
+
 config OUTER_CACHE
 	bool
 
@@ -813 +794 @@
 
 config ARM_DMA_MEM_BUFFERABLE
 	bool "Use non-cacheable memory for DMA" if CPU_V6 && !CPU_V7
+	depends on !(MACH_REALVIEW_PB1176 || REALVIEW_EB_ARM11MP || \
+		     MACH_REALVIEW_PB11MP)
 	default y if CPU_V6 || CPU_V7
 	help
 	  Historically, the kernel has used strongly ordered mappings to
@@ -526 +526 @@
 dotraplinkage void __kprobes do_debug(struct pt_regs *regs, long error_code)
 {
 	struct task_struct *tsk = current;
+	int user_icebp = 0;
 	unsigned long dr6;
 	int si_code;
 
@@ -534 +533 @@
 
 	/* Filter out all the reserved bits which are preset to 1 */
 	dr6 &= ~DR6_RESERVED;
+
+	/*
+	 * If dr6 has no reason to give us about the origin of this trap,
+	 * then it's very likely the result of an icebp/int01 trap.
+	 * User wants a sigtrap for that.
+	 */
+	if (!dr6 && user_mode(regs))
+		user_icebp = 1;
 
 	/* Catch kmemcheck conditions first of all! */
 	if ((dr6 & DR_STEP) && kmemcheck_trap(regs))
@@ -584 +575 @@
 		regs->flags &= ~X86_EFLAGS_TF;
 	}
 	si_code = get_si_code(tsk->thread.debugreg6);
-	if (tsk->thread.debugreg6 & (DR_STEP | DR_TRAP_BITS))
+	if (tsk->thread.debugreg6 & (DR_STEP | DR_TRAP_BITS) || user_icebp)
 		send_sigtrap(tsk, regs, error_code, si_code);
 	preempt_conditional_cli(regs);
 
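The do_debug() hunk above boils down to one extra predicate: a SIGTRAP is sent either when a real debug-register condition is latched in DR6, or when DR6 is empty on a user-mode trap, which indicates an icebp/int $0x01 instruction. A minimal stand-alone model of that decision (the constant values and helper name here are illustrative, not taken from the kernel headers):

```c
#include <assert.h>

/* Illustrative stand-ins for the x86 DR6 condition bits. */
#define DR_TRAP_BITS	0x000f
#define DR_STEP		0x4000

/* Returns 1 if a SIGTRAP should be sent: either a debug-register
 * condition is set in dr6, or dr6 is empty while the trap came from
 * user mode (the icebp case the patch starts honouring). */
int should_send_sigtrap(unsigned long dr6, int from_user_mode)
{
	int user_icebp = 0;

	if (!dr6 && from_user_mode)
		user_icebp = 1;

	return (dr6 & (DR_STEP | DR_TRAP_BITS)) != 0 || user_icebp;
}
```

Before the patch the `user_icebp` term was missing, so the icebp case silently delivered no signal to the debugger.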
block/blk-core.c (+3, -6)

@@ -1149 +1149 @@
 	else
 		req->cmd_flags |= bio->bi_rw & REQ_FAILFAST_MASK;
 
-	if (unlikely(bio_rw_flagged(bio, BIO_RW_DISCARD))) {
+	if (bio_rw_flagged(bio, BIO_RW_DISCARD))
 		req->cmd_flags |= REQ_DISCARD;
-		if (bio_rw_flagged(bio, BIO_RW_BARRIER))
-			req->cmd_flags |= REQ_SOFTBARRIER;
-	} else if (unlikely(bio_rw_flagged(bio, BIO_RW_BARRIER)))
+	if (bio_rw_flagged(bio, BIO_RW_BARRIER))
 		req->cmd_flags |= REQ_HARDBARRIER;
-
 	if (bio_rw_flagged(bio, BIO_RW_SYNCIO))
 		req->cmd_flags |= REQ_RW_SYNC;
 	if (bio_rw_flagged(bio, BIO_RW_META))
@@ -1583 +1586 @@
 	 * If it's a regular read/write or a barrier with data attached,
 	 * go through the normal accounting stuff before submission.
 	 */
-	if (bio_has_data(bio)) {
+	if (bio_has_data(bio) && !(rw & (1 << BIO_RW_DISCARD))) {
 		if (rw & WRITE) {
 			count_vm_events(PGPGOUT, count);
 		} else {
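The first blk-core hunk makes the DISCARD and BARRIER tests independent, so a discard barrier bio now maps to both REQ_DISCARD and REQ_HARDBARRIER instead of the old REQ_SOFTBARRIER path. A stand-alone sketch of that flag translation (the bit values and function name are made up for illustration; the real kernel flags live in blkdev.h):

```c
#include <assert.h>

/* Hypothetical flag values, chosen only for this sketch. */
enum bio_rw_bits { BIO_RW_DISCARD = 1 << 0, BIO_RW_BARRIER = 1 << 1 };
enum req_bits    { REQ_DISCARD = 1 << 0, REQ_HARDBARRIER = 1 << 1 };

/* Mirrors the post-patch logic: each bio flag is tested on its own,
 * so combinations accumulate rather than short-circuit. */
unsigned int init_request_flags(unsigned int bio_rw)
{
	unsigned int cmd_flags = 0;

	if (bio_rw & BIO_RW_DISCARD)
		cmd_flags |= REQ_DISCARD;
	if (bio_rw & BIO_RW_BARRIER)
		cmd_flags |= REQ_HARDBARRIER;
	return cmd_flags;
}
```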
block/cfq-iosched.c (+38, -29)

@@ -14 +14 @@
 #include <linux/rbtree.h>
 #include <linux/ioprio.h>
 #include <linux/blktrace_api.h>
-#include "blk-cgroup.h"
+#include "cfq.h"
 
 /*
  * tunables
@@ -879 +879 @@
 	if (!RB_EMPTY_NODE(&cfqg->rb_node))
 		cfq_rb_erase(&cfqg->rb_node, st);
 	cfqg->saved_workload_slice = 0;
-	blkiocg_update_dequeue_stats(&cfqg->blkg, 1);
+	cfq_blkiocg_update_dequeue_stats(&cfqg->blkg, 1);
 }
 
 static inline unsigned int cfq_cfqq_slice_usage(struct cfq_queue *cfqq)
@@ -939 +939 @@
 
 	cfq_log_cfqg(cfqd, cfqg, "served: vt=%llu min_vt=%llu", cfqg->vdisktime,
 			st->min_vdisktime);
-	blkiocg_update_timeslice_used(&cfqg->blkg, used_sl);
-	blkiocg_set_start_empty_time(&cfqg->blkg);
+	cfq_blkiocg_update_timeslice_used(&cfqg->blkg, used_sl);
+	cfq_blkiocg_set_start_empty_time(&cfqg->blkg);
 }
 
 #ifdef CONFIG_CFQ_GROUP_IOSCHED
@@ -995 +995 @@
 
 	/* Add group onto cgroup list */
 	sscanf(dev_name(bdi->dev), "%u:%u", &major, &minor);
-	blkiocg_add_blkio_group(blkcg, &cfqg->blkg, (void *)cfqd,
+	cfq_blkiocg_add_blkio_group(blkcg, &cfqg->blkg, (void *)cfqd,
 					MKDEV(major, minor));
 	cfqg->weight = blkcg_get_weight(blkcg, cfqg->blkg.dev);
 
@@ -1079 +1079 @@
 	 * it from cgroup list, then it will take care of destroying
 	 * cfqg also.
 	 */
-		if (!blkiocg_del_blkio_group(&cfqg->blkg))
+		if (!cfq_blkiocg_del_blkio_group(&cfqg->blkg))
 			cfq_destroy_cfqg(cfqd, cfqg);
 	}
 }
@@ -1421 +1421 @@
 {
 	elv_rb_del(&cfqq->sort_list, rq);
 	cfqq->queued[rq_is_sync(rq)]--;
-	blkiocg_update_io_remove_stats(&(RQ_CFQG(rq))->blkg, rq_data_dir(rq),
-						rq_is_sync(rq));
+	cfq_blkiocg_update_io_remove_stats(&(RQ_CFQG(rq))->blkg,
+					rq_data_dir(rq), rq_is_sync(rq));
 	cfq_add_rq_rb(rq);
-	blkiocg_update_io_add_stats(&(RQ_CFQG(rq))->blkg,
+	cfq_blkiocg_update_io_add_stats(&(RQ_CFQG(rq))->blkg,
 			&cfqq->cfqd->serving_group->blkg, rq_data_dir(rq),
 			rq_is_sync(rq));
 }
@@ -1482 +1482 @@
 	cfq_del_rq_rb(rq);
 
 	cfqq->cfqd->rq_queued--;
-	blkiocg_update_io_remove_stats(&(RQ_CFQG(rq))->blkg, rq_data_dir(rq),
-						rq_is_sync(rq));
+	cfq_blkiocg_update_io_remove_stats(&(RQ_CFQG(rq))->blkg,
+					rq_data_dir(rq), rq_is_sync(rq));
 	if (rq_is_meta(rq)) {
 		WARN_ON(!cfqq->meta_pending);
 		cfqq->meta_pending--;
@@ -1518 +1518 @@
 static void cfq_bio_merged(struct request_queue *q, struct request *req,
 			   struct bio *bio)
 {
-	blkiocg_update_io_merged_stats(&(RQ_CFQG(req))->blkg, bio_data_dir(bio),
-					cfq_bio_sync(bio));
+	cfq_blkiocg_update_io_merged_stats(&(RQ_CFQG(req))->blkg,
+					bio_data_dir(bio), cfq_bio_sync(bio));
 }
 
 static void
@@ -1539 +1539 @@
 	if (cfqq->next_rq == next)
 		cfqq->next_rq = rq;
 	cfq_remove_request(next);
-	blkiocg_update_io_merged_stats(&(RQ_CFQG(rq))->blkg, rq_data_dir(next),
-					rq_is_sync(next));
+	cfq_blkiocg_update_io_merged_stats(&(RQ_CFQG(rq))->blkg,
+					rq_data_dir(next), rq_is_sync(next));
 }
 
 static int cfq_allow_merge(struct request_queue *q, struct request *rq,
@@ -1571 +1571 @@
 static inline void cfq_del_timer(struct cfq_data *cfqd, struct cfq_queue *cfqq)
 {
 	del_timer(&cfqd->idle_slice_timer);
-	blkiocg_update_idle_time_stats(&cfqq->cfqg->blkg);
+	cfq_blkiocg_update_idle_time_stats(&cfqq->cfqg->blkg);
 }
 
 static void __cfq_set_active_queue(struct cfq_data *cfqd,
@@ -1580 +1580 @@
 	if (cfqq) {
 		cfq_log_cfqq(cfqd, cfqq, "set_active wl_prio:%d wl_type:%d",
 				cfqd->serving_prio, cfqd->serving_type);
-		blkiocg_update_avg_queue_size_stats(&cfqq->cfqg->blkg);
+		cfq_blkiocg_update_avg_queue_size_stats(&cfqq->cfqg->blkg);
 		cfqq->slice_start = 0;
 		cfqq->dispatch_start = jiffies;
 		cfqq->allocated_slice = 0;
@@ -1911 +1911 @@
 		sl = cfqd->cfq_slice_idle;
 
 	mod_timer(&cfqd->idle_slice_timer, jiffies + sl);
-	blkiocg_update_set_idle_time_stats(&cfqq->cfqg->blkg);
+	cfq_blkiocg_update_set_idle_time_stats(&cfqq->cfqg->blkg);
 	cfq_log_cfqq(cfqd, cfqq, "arm_idle: %lu", sl);
 }
 
@@ -1931 +1931 @@
 	elv_dispatch_sort(q, rq);
 
 	cfqd->rq_in_flight[cfq_cfqq_sync(cfqq)]++;
-	blkiocg_update_dispatch_stats(&cfqq->cfqg->blkg, blk_rq_bytes(rq),
+	cfq_blkiocg_update_dispatch_stats(&cfqq->cfqg->blkg, blk_rq_bytes(rq),
 					rq_data_dir(rq), rq_is_sync(rq));
 }
 
@@ -1986 +1986 @@
 	int process_refs, new_process_refs;
 	struct cfq_queue *__cfqq;
 
+	/*
+	 * If there are no process references on the new_cfqq, then it is
+	 * unsafe to follow the ->new_cfqq chain as other cfqq's in the
+	 * chain may have dropped their last reference (not just their
+	 * last process reference).
+	 */
+	if (!cfqq_process_refs(new_cfqq))
+		return;
+
 	/* Avoid a circular list and skip interim queue merges */
 	while ((__cfqq = new_cfqq->new_cfqq)) {
 		if (__cfqq == cfqq)
@@ -2003 +1994 @@
 	}
 
 	process_refs = cfqq_process_refs(cfqq);
+	new_process_refs = cfqq_process_refs(new_cfqq);
 	/*
 	 * If the process for the cfqq has gone away, there is no
 	 * sense in merging the queues.
 	 */
-	if (process_refs == 0)
+	if (process_refs == 0 || new_process_refs == 0)
 		return;
 
 	/*
 	 * Merge in the direction of the lesser amount of work.
 	 */
-	new_process_refs = cfqq_process_refs(new_cfqq);
 	if (new_process_refs >= process_refs) {
 		cfqq->new_cfqq = new_cfqq;
 		atomic_add(process_refs, &new_cfqq->ref);
@@ -3257 +3248 @@
 		cfq_clear_cfqq_wait_request(cfqq);
 		__blk_run_queue(cfqd->queue);
 	} else {
-			blkiocg_update_idle_time_stats(
+			cfq_blkiocg_update_idle_time_stats(
 				&cfqq->cfqg->blkg);
 		cfq_mark_cfqq_must_dispatch(cfqq);
 	}
@@ -3285 +3276 @@
 	rq_set_fifo_time(rq, jiffies + cfqd->cfq_fifo_expire[rq_is_sync(rq)]);
 	list_add_tail(&rq->queuelist, &cfqq->fifo);
 	cfq_add_rq_rb(rq);
-	blkiocg_update_io_add_stats(&(RQ_CFQG(rq))->blkg,
+	cfq_blkiocg_update_io_add_stats(&(RQ_CFQG(rq))->blkg,
 			&cfqd->serving_group->blkg, rq_data_dir(rq),
 			rq_is_sync(rq));
 	cfq_rq_enqueued(cfqd, cfqq, rq);
@@ -3373 +3364 @@
 	WARN_ON(!cfqq->dispatched);
 	cfqd->rq_in_driver--;
 	cfqq->dispatched--;
-	blkiocg_update_completion_stats(&cfqq->cfqg->blkg, rq_start_time_ns(rq),
-			rq_io_start_time_ns(rq), rq_data_dir(rq),
-			rq_is_sync(rq));
+	cfq_blkiocg_update_completion_stats(&cfqq->cfqg->blkg,
+			rq_start_time_ns(rq), rq_io_start_time_ns(rq),
+			rq_data_dir(rq), rq_is_sync(rq));
 
 	cfqd->rq_in_flight[cfq_cfqq_sync(cfqq)]--;
 
@@ -3739 +3730 @@
 
 	cfq_put_async_queues(cfqd);
 	cfq_release_cfq_groups(cfqd);
-	blkiocg_del_blkio_group(&cfqd->root_group.blkg);
+	cfq_blkiocg_del_blkio_group(&cfqd->root_group.blkg);
 
 	spin_unlock_irq(q->queue_lock);
 
@@ -3807 +3798 @@
 	 */
 	atomic_set(&cfqg->ref, 1);
 	rcu_read_lock();
-	blkiocg_add_blkio_group(&blkio_root_cgroup, &cfqg->blkg, (void *)cfqd,
-					0);
+	cfq_blkiocg_add_blkio_group(&blkio_root_cgroup, &cfqg->blkg,
+					(void *)cfqd, 0);
 	rcu_read_unlock();
 #endif
 	/*
@@ -781 +781 @@
 	status = acpi_get_table(ACPI_SIG_ERST, 0,
 				(struct acpi_table_header **)&erst_tab);
 	if (status == AE_NOT_FOUND) {
-		pr_err(ERST_PFX "Table is not found!\n");
+		pr_info(ERST_PFX "Table is not found!\n");
 		goto err;
 	} else if (ACPI_FAILURE(status)) {
 		const char *msg = acpi_format_exception(status);
drivers/ata/ahci.c (+10)

@@ -1053 +1053 @@
 	if (pdev->vendor == PCI_VENDOR_ID_MARVELL && !marvell_enable)
 		return -ENODEV;
 
+	/*
+	 * For some reason, MCP89 on MacBook 7,1 doesn't work with
+	 * ahci, use ata_generic instead.
+	 */
+	if (pdev->vendor == PCI_VENDOR_ID_NVIDIA &&
+	    pdev->device == PCI_DEVICE_ID_NVIDIA_NFORCE_MCP89_SATA &&
+	    pdev->subsystem_vendor == PCI_VENDOR_ID_APPLE &&
+	    pdev->subsystem_device == 0xcb89)
+		return -ENODEV;
+
 	/* Promise's PDC42819 is a SAS/SATA controller that has an AHCI mode.
 	 * At the moment, we can only use the AHCI mode. Let the users know
 	 * that for SAS drives they're out of luck.
drivers/ata/ata_generic.c (+24, -6)

@@ -32 +32 @@
  *	A generic parallel ATA driver using libata
  */
 
+enum {
+	ATA_GEN_CLASS_MATCH		= (1 << 0),
+	ATA_GEN_FORCE_DMA		= (1 << 1),
+};
+
 /**
  *	generic_set_mode	-	mode setting
  *	@link: link to set up
@@ -51 +46 @@
 static int generic_set_mode(struct ata_link *link, struct ata_device **unused)
 {
 	struct ata_port *ap = link->ap;
+	const struct pci_device_id *id = ap->host->private_data;
 	int dma_enabled = 0;
 	struct ata_device *dev;
 	struct pci_dev *pdev = to_pci_dev(ap->host->dev);
 
-	/* Bits 5 and 6 indicate if DMA is active on master/slave */
-	if (ap->ioaddr.bmdma_addr)
+	if (id->driver_data & ATA_GEN_FORCE_DMA) {
+		dma_enabled = 0xff;
+	} else if (ap->ioaddr.bmdma_addr) {
+		/* Bits 5 and 6 indicate if DMA is active on master/slave */
 		dma_enabled = ioread8(ap->ioaddr.bmdma_addr + ATA_DMA_STATUS);
+	}
 
 	if (pdev->vendor == PCI_VENDOR_ID_CENATEK)
 		dma_enabled = 0xFF;
@@ -135 +126 @@
 	const struct ata_port_info *ppi[] = { &info, NULL };
 
 	/* Don't use the generic entry unless instructed to do so */
-	if (id->driver_data == 1 && all_generic_ide == 0)
+	if ((id->driver_data & ATA_GEN_CLASS_MATCH) && all_generic_ide == 0)
 		return -ENODEV;
 
 	/* Devices that need care */
@@ -164 +155 @@
 			return rc;
 		pcim_pin_device(dev);
 	}
-	return ata_pci_bmdma_init_one(dev, ppi, &generic_sht, NULL, 0);
+	return ata_pci_bmdma_init_one(dev, ppi, &generic_sht, (void *)id, 0);
 }
 
 static struct pci_device_id ata_generic[] = {
@@ -176 +167 @@
 	{ PCI_DEVICE(PCI_VENDOR_ID_HINT, PCI_DEVICE_ID_HINT_VXPROII_IDE), },
 	{ PCI_DEVICE(PCI_VENDOR_ID_VIA,  PCI_DEVICE_ID_VIA_82C561), },
 	{ PCI_DEVICE(PCI_VENDOR_ID_OPTI, PCI_DEVICE_ID_OPTI_82C558), },
-	{ PCI_DEVICE(PCI_VENDOR_ID_CENATEK,PCI_DEVICE_ID_CENATEK_IDE), },
+	{ PCI_DEVICE(PCI_VENDOR_ID_CENATEK,PCI_DEVICE_ID_CENATEK_IDE),
+	  .driver_data = ATA_GEN_FORCE_DMA },
+	/*
+	 * For some reason, MCP89 on MacBook 7,1 doesn't work with
+	 * ahci, use ata_generic instead.
+	 */
+	{ PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP89_SATA,
+	  PCI_VENDOR_ID_APPLE, 0xcb89,
+	  .driver_data = ATA_GEN_FORCE_DMA },
 #if !defined(CONFIG_PATA_TOSHIBA) && !defined(CONFIG_PATA_TOSHIBA_MODULE)
 	{ PCI_DEVICE(PCI_VENDOR_ID_TOSHIBA,PCI_DEVICE_ID_TOSHIBA_PICCOLO_1), },
 	{ PCI_DEVICE(PCI_VENDOR_ID_TOSHIBA,PCI_DEVICE_ID_TOSHIBA_PICCOLO_2), },
@@ -192 +175 @@
 	{ PCI_DEVICE(PCI_VENDOR_ID_TOSHIBA,PCI_DEVICE_ID_TOSHIBA_PICCOLO_5), },
 #endif
 	/* Must come last. If you add entries adjust this table appropriately */
-	{ PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_STORAGE_IDE << 8, 0xFFFFFF00UL, 1},
+	{ PCI_DEVICE_CLASS(PCI_CLASS_STORAGE_IDE << 8, 0xFFFFFF00UL),
+	  .driver_data = ATA_GEN_CLASS_MATCH },
 	{ 0, },
 };
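The ata_generic change above routes per-device policy through bit flags in `id->driver_data` instead of the old magic value 1, and a device tagged ATA_GEN_FORCE_DMA skips the BMDMA status probe entirely. A simplified model of the resulting decision in generic_set_mode() (the helper name and argument list are invented for this sketch; only the flag names come from the patch):

```c
#include <assert.h>

/* Flag values as defined by the patch. */
enum { ATA_GEN_CLASS_MATCH = 1 << 0, ATA_GEN_FORCE_DMA = 1 << 1 };

/* Returns the per-device DMA-enabled bitmask: forced devices claim DMA
 * everywhere, otherwise the BMDMA status register (bits 5/6 = master/
 * slave DMA active) decides, and with no BMDMA region nothing gets DMA. */
unsigned int dma_enabled_mask(unsigned long driver_data, int have_bmdma,
			      unsigned int bmdma_status)
{
	if (driver_data & ATA_GEN_FORCE_DMA)
		return 0xff;
	if (have_bmdma)
		return bmdma_status;
	return 0;
}
```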
@@ -861 +861 @@
 	sh->n_io_port = 0;	// I don't think we use these two...
 	sh->this_id = SELF_SCSI_ID;
 	sh->sg_tablesize = hba[ctlr]->maxsgentries;
+	sh->max_cmd_len = MAX_COMMAND_SIZE;
 
 	((struct cciss_scsi_adapter_data_t *)
 		hba[ctlr]->scsi_ctlr)->scsi_host = sh;
drivers/block/cpqarray.c (+3, -3)

@@ -386 +386 @@
 }
 
 /* pdev is NULL for eisa */
-static int __init cpqarray_register_ctlr( int i, struct pci_dev *pdev)
+static int __devinit cpqarray_register_ctlr( int i, struct pci_dev *pdev)
 {
 	struct request_queue *q;
 	int j;
@@ -503 +503 @@
 	return -1;
 }
 
-static int __init cpqarray_init_one( struct pci_dev *pdev,
+static int __devinit cpqarray_init_one( struct pci_dev *pdev,
 	const struct pci_device_id *ent)
 {
 	int i;
@@ -740 +740 @@
 /*
  * Find an EISA controller's signature.  Set up an hba if we find it.
  */
-static int __init cpqarray_eisa_detect(void)
+static int __devinit cpqarray_eisa_detect(void)
 {
 	int i=0, j;
 	__u32 board_id;
drivers/block/drbd/drbd_main.c (-2)

@@ -1236 +1236 @@
 	/* Last part of the attaching process ... */
 	if (ns.conn >= C_CONNECTED &&
 	    os.disk == D_ATTACHING && ns.disk == D_NEGOTIATING) {
-		kfree(mdev->p_uuid); /* We expect to receive up-to-date UUIDs soon. */
-		mdev->p_uuid = NULL; /* ...to not use the old ones in the mean time */
 		drbd_send_sizes(mdev, 0, 0);  /* to start sync... */
 		drbd_send_uuids(mdev);
 		drbd_send_state(mdev);
drivers/block/drbd/drbd_nl.c (+6)

@@ -1114 +1114 @@
 		mdev->new_state_tmp.i = ns.i;
 		ns.i = os.i;
 		ns.disk = D_NEGOTIATING;
+
+		/* We expect to receive up-to-date UUIDs soon.
+		   To avoid a race in receive_state, free p_uuid while
+		   holding req_lock. I.e. atomic with the state change */
+		kfree(mdev->p_uuid);
+		mdev->p_uuid = NULL;
 	}
 
 	rv = _drbd_set_state(mdev, ns, CS_VERBOSE, NULL);
@@ -302 +302 @@
 
 static int           force_kipmid[SI_MAX_PARMS];
 static int num_force_kipmid;
+#ifdef CONFIG_PCI
+static int pci_registered;
+#endif
+#ifdef CONFIG_PPC_OF
+static int of_registered;
+#endif
 
 static unsigned int kipmid_max_busy_us[SI_MAX_PARMS];
 static int num_max_busy_us;
@@ -1024 +1018 @@
 		else if (smi_result == SI_SM_IDLE)
 			schedule_timeout_interruptible(100);
 		else
-			schedule_timeout_interruptible(0);
+			schedule_timeout_interruptible(1);
 	}
 	return 0;
 }
@@ -3320 +3314 @@
 	rv = pci_register_driver(&ipmi_pci_driver);
 	if (rv)
 		printk(KERN_ERR PFX "Unable to register PCI driver: %d\n", rv);
+	else
+		pci_registered = 1;
 #endif
 
 #ifdef CONFIG_ACPI
@@ -3338 +3330 @@
 
 #ifdef CONFIG_PPC_OF
 	of_register_platform_driver(&ipmi_of_platform_driver);
+	of_registered = 1;
 #endif
 
 	/* We prefer devices with interrupts, but in the case of a machine
@@ -3392 +3383 @@
 	if (unload_when_empty && list_empty(&smi_infos)) {
 		mutex_unlock(&smi_infos_lock);
 #ifdef CONFIG_PCI
-		pci_unregister_driver(&ipmi_pci_driver);
+		if (pci_registered)
+			pci_unregister_driver(&ipmi_pci_driver);
 #endif
 
 #ifdef CONFIG_PPC_OF
-		of_unregister_platform_driver(&ipmi_of_platform_driver);
+		if (of_registered)
+			of_unregister_platform_driver(&ipmi_of_platform_driver);
 #endif
 		driver_unregister(&ipmi_driver.driver);
 		printk(KERN_WARNING PFX
@@ -3489 +3478 @@
 		return;
 
 #ifdef CONFIG_PCI
-	pci_unregister_driver(&ipmi_pci_driver);
+	if (pci_registered)
+		pci_unregister_driver(&ipmi_pci_driver);
 #endif
 #ifdef CONFIG_ACPI
 	pnp_unregister_driver(&ipmi_pnp_driver);
 #endif
 
 #ifdef CONFIG_PPC_OF
-	of_unregister_platform_driver(&ipmi_of_platform_driver);
+	if (of_registered)
+		of_unregister_platform_driver(&ipmi_of_platform_driver);
 #endif
 
 	mutex_lock(&smi_infos_lock);
drivers/cpuidle/governors/menu.c (+2, -2)

@@ -143 +143 @@
 	 * This allows us to calculate
 	 * E(duration)|iowait
 	 */
-	if (nr_iowait_cpu())
+	if (nr_iowait_cpu(smp_processor_id()))
 		bucket = BUCKETS/2;
 
 	if (duration < 10)
@@ -175 +175 @@
 		mult += 2 * get_loadavg();
 
 	/* for IO wait tasks (per cpu!) we add 5x each */
-	mult += 10 * nr_iowait_cpu();
+	mult += 10 * nr_iowait_cpu(smp_processor_id());
 
 	return mult;
 }
···
 	for (i = 0; i < MAX_SOCKET_BUSES; i++)
 		pcibios_scan_specific_bus(255-i);
 	}
+	pci_dev_put(pdev);
 	table++;
 	}
+}
+
+static unsigned i7core_pci_lastbus(void)
+{
+	int last_bus = 0, bus;
+	struct pci_bus *b = NULL;
+
+	while ((b = pci_find_next_bus(b)) != NULL) {
+		bus = b->number;
+		debugf0("Found bus %d\n", bus);
+		if (bus > last_bus)
+			last_bus = bus;
+	}
+
+	debugf0("Last bus %d\n", last_bus);
+
+	return last_bus;
 }
 
···
 * Need to 'get' device 16 func 1 and func 2
 */
int i7core_get_onedevice(struct pci_dev **prev, int devno,
-			 struct pci_id_descr *dev_descr, unsigned n_devs)
+			 struct pci_id_descr *dev_descr, unsigned n_devs,
+			 unsigned last_bus)
{
	struct i7core_dev *i7core_dev;
···
 	}
 	bus = pdev->bus->number;
 
-	if (bus == 0x3f)
-		socket = 0;
-	else
-		socket = 255 - bus;
+	socket = last_bus - bus;
 
 	i7core_dev = get_i7core_dev(socket);
 	if (!i7core_dev) {
···
static int i7core_get_devices(struct pci_id_table *table)
{
-	int i, rc;
+	int i, rc, last_bus;
	struct pci_dev *pdev = NULL;
	struct pci_id_descr *dev_descr;
+
+	last_bus = i7core_pci_lastbus();
 
 	while (table && table->descr) {
 		dev_descr = table->descr;
 		for (i = 0; i < table->n_devs; i++) {
 			pdev = NULL;
 			do {
-				rc = i7core_get_onedevice(&pdev, i, &dev_descr[i],
-							  table->n_devs);
+				rc = i7core_get_onedevice(&pdev, i,
+							  &dev_descr[i],
+							  table->n_devs,
+							  last_bus);
 				if (rc < 0) {
 					if (i == 0) {
 						i = table->n_devs;
···
 * 0 for FOUND a device
 * < 0 for error code
 */
+
+static int probed = 0;
+
static int __devinit i7core_probe(struct pci_dev *pdev,
				  const struct pci_device_id *id)
{
-	int dev_idx = id->driver_data;
	int rc;
	struct i7core_dev *i7core_dev;
+
+	/* get the pci devices we want to reserve for our use */
+	mutex_lock(&i7core_edac_lock);
 
 	/*
 	 * All memory controllers are allocated at the first pass.
 	 */
-	if (unlikely(dev_idx >= 1))
+	if (unlikely(probed >= 1)) {
+		mutex_unlock(&i7core_edac_lock);
 		return -EINVAL;
-
-	/* get the pci devices we want to reserve for our use */
-	mutex_lock(&i7core_edac_lock);
+	}
+	probed++;
 
 	rc = i7core_get_devices(pci_dev_table);
 	if (unlikely(rc < 0))
···
 			i7core_dev->socket);
 		}
 	}
+	probed--;
+
 	mutex_unlock(&i7core_edac_lock);
}
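The i7core patch above replaces the `dev_idx` check with a `probed` counter that is tested and bumped while `i7core_edac_lock` is held, so a second probe can no longer slip in between the check and the device reservation. A minimal single-threaded userspace sketch of that guard (locking elided; the driver does this under the mutex):

```c
#include <assert.h>

/* Sketch only: models the run-once probe guard, not the kernel code. */
static int probed;

static int probe_once(void)
{
	/* mutex_lock(&i7core_edac_lock) in the driver */
	if (probed >= 1)
		return -1;	/* driver returns -EINVAL */
	probed++;
	/* ... reserve PCI devices, still under the lock ... */
	/* mutex_unlock(&i7core_edac_lock) */
	return 0;
}
```

The matching `probed--` on the remove path lets the module be unloaded and probed again.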
+1-1
drivers/gpio/Kconfig
···
 #
-# GPIO infrastructure and expanders
+# platform-neutral GPIO infrastructure and expanders
 #
 
 config ARCH_WANT_OPTIONAL_GPIOLIB
+5-1
drivers/gpio/Makefile
···
-# gpio support: dedicated expander chips, etc
+# generic gpio support: dedicated expander chips, etc
+#
+# NOTE: platform-specific GPIO drivers don't belong in the
+# drivers/gpio directory; put them with other platform setup
+# code, IRQ controllers, board init, etc.
 
 ccflags-$(CONFIG_DEBUG_GPIO) += -DDEBUG
···
 	uint8_t ctl2;
 
 	if (tfp410_readb(dvo, TFP410_CTL_2, &ctl2)) {
-		if (ctl2 & TFP410_CTL_2_HTPLG)
+		if (ctl2 & TFP410_CTL_2_RSEN)
 			ret = connector_status_connected;
 		else
 			ret = connector_status_disconnected;
···
 	if (dev->irq_enabled)
 		drm_irq_uninstall(dev);
 
+	mutex_lock(&dev->struct_mutex);
 	intel_cleanup_ring_buffer(dev, &dev_priv->render_ring);
 	if (HAS_BSD(dev))
 		intel_cleanup_ring_buffer(dev, &dev_priv->bsd_ring);
+	mutex_unlock(&dev->struct_mutex);
 
 	/* Clear the HWS virtual address at teardown */
 	if (I915_NEED_GFX_HWS(dev))
···
static void i915_setup_compression(struct drm_device *dev, int size)
{
	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct drm_mm_node *compressed_fb, *compressed_llb;
+	struct drm_mm_node *compressed_fb, *uninitialized_var(compressed_llb);
	unsigned long cfb_base;
	unsigned long ll_base = 0;
···
 					       i915_switcheroo_can_switch);
 	if (ret)
 		goto cleanup_vga_client;
+
+	/* IIR "flip pending" bit means done if this bit is set */
+	if (IS_GEN3(dev) && (I915_READ(ECOSKPD) & ECO_FLIP_DONE))
+		dev_priv->flip_pending_is_done = true;
 
 	intel_modeset_init(dev);
···
 	case RADEON_TXFORMAT_RGB332:
 	case RADEON_TXFORMAT_Y8:
 		track->textures[i].cpp = 1;
+		track->textures[i].compress_format = R100_TRACK_COMP_NONE;
 		break;
 	case RADEON_TXFORMAT_AI88:
 	case RADEON_TXFORMAT_ARGB1555:
···
 	case RADEON_TXFORMAT_LDUDV655:
 	case RADEON_TXFORMAT_DUDV88:
 		track->textures[i].cpp = 2;
+		track->textures[i].compress_format = R100_TRACK_COMP_NONE;
 		break;
 	case RADEON_TXFORMAT_ARGB8888:
 	case RADEON_TXFORMAT_RGBA8888:
 	case RADEON_TXFORMAT_SHADOW32:
 	case RADEON_TXFORMAT_LDUDUV8888:
 		track->textures[i].cpp = 4;
+		track->textures[i].compress_format = R100_TRACK_COMP_NONE;
 		break;
 	case RADEON_TXFORMAT_DXT1:
 		track->textures[i].cpp = 1;
···
 	int surf_index = reg * 16;
 	int flags = 0;
 
-	/* r100/r200 divide by 16 */
-	if (rdev->family < CHIP_R300)
-		flags = pitch / 16;
-	else
-		flags = pitch / 8;
-
 	if (rdev->family <= CHIP_RS200) {
 		if ((tiling_flags & (RADEON_TILING_MACRO|RADEON_TILING_MICRO))
 				 == (RADEON_TILING_MACRO|RADEON_TILING_MICRO))
···
 		flags |= RADEON_SURF_AP0_SWP_16BPP | RADEON_SURF_AP1_SWP_16BPP;
 	if (tiling_flags & RADEON_TILING_SWAP_32BIT)
 		flags |= RADEON_SURF_AP0_SWP_32BPP | RADEON_SURF_AP1_SWP_32BPP;
+
+	/* when we aren't tiling the pitch seems to need to be further
+	 * divided down - tested on power5 + rn50 server */
+	if (tiling_flags & (RADEON_TILING_SWAP_16BIT | RADEON_TILING_SWAP_32BIT)) {
+		if (!(tiling_flags & (RADEON_TILING_MACRO | RADEON_TILING_MICRO)))
+			if (ASIC_IS_RN50(rdev))
+				pitch /= 16;
+	}
+
+	/* r100/r200 divide by 16 */
+	if (rdev->family < CHIP_R300)
+		flags |= pitch / 16;
+	else
+		flags |= pitch / 8;
+
 
 	DRM_DEBUG("writing surface %d %d %x %x\n", reg, flags, offset, offset+obj_size-1);
 	WREG32(RADEON_SURFACE0_INFO + surf_index, flags);
···
 		DRM_ERROR("compress format %d\n", t->compress_format);
}
 
-static int r100_cs_track_cube(struct radeon_device *rdev,
-			      struct r100_cs_track *track, unsigned idx)
-{
-	unsigned face, w, h;
-	struct radeon_bo *cube_robj;
-	unsigned long size;
-
-	for (face = 0; face < 5; face++) {
-		cube_robj = track->textures[idx].cube_info[face].robj;
-		w = track->textures[idx].cube_info[face].width;
-		h = track->textures[idx].cube_info[face].height;
-
-		size = w * h;
-		size *= track->textures[idx].cpp;
-
-		size += track->textures[idx].cube_info[face].offset;
-
-		if (size > radeon_bo_size(cube_robj)) {
-			DRM_ERROR("Cube texture offset greater than object size %lu %lu\n",
-				  size, radeon_bo_size(cube_robj));
-			r100_cs_track_texture_print(&track->textures[idx]);
-			return -1;
-		}
-	}
-	return 0;
-}
-
static int r100_track_compress_size(int compress_format, int w, int h)
{
	int block_width, block_height, block_bytes;
···
 		wblocks = min_wblocks;
 	sz = wblocks * hblocks * block_bytes;
 	return sz;
+}
+
+static int r100_cs_track_cube(struct radeon_device *rdev,
+			      struct r100_cs_track *track, unsigned idx)
+{
+	unsigned face, w, h;
+	struct radeon_bo *cube_robj;
+	unsigned long size;
+	unsigned compress_format = track->textures[idx].compress_format;
+
+	for (face = 0; face < 5; face++) {
+		cube_robj = track->textures[idx].cube_info[face].robj;
+		w = track->textures[idx].cube_info[face].width;
+		h = track->textures[idx].cube_info[face].height;
+
+		if (compress_format) {
+			size = r100_track_compress_size(compress_format, w, h);
+		} else
+			size = w * h;
+		size *= track->textures[idx].cpp;
+
+		size += track->textures[idx].cube_info[face].offset;
+
+		if (size > radeon_bo_size(cube_robj)) {
+			DRM_ERROR("Cube texture offset greater than object size %lu %lu\n",
+				  size, radeon_bo_size(cube_robj));
+			r100_cs_track_texture_print(&track->textures[idx]);
+			return -1;
+		}
+	}
+	return 0;
 }
 
static int r100_cs_track_texture_check(struct radeon_device *rdev,
+5
drivers/gpu/drm/radeon/r200.c
···
 	/* 2D, 3D, CUBE */
 	switch (tmp) {
 	case 0:
+	case 3:
+	case 4:
 	case 5:
 	case 6:
 	case 7:
···
 	case R200_TXFORMAT_RGB332:
 	case R200_TXFORMAT_Y8:
 		track->textures[i].cpp = 1;
+		track->textures[i].compress_format = R100_TRACK_COMP_NONE;
 		break;
 	case R200_TXFORMAT_AI88:
 	case R200_TXFORMAT_ARGB1555:
···
 	case R200_TXFORMAT_DVDU88:
 	case R200_TXFORMAT_AVYU4444:
 		track->textures[i].cpp = 2;
+		track->textures[i].compress_format = R100_TRACK_COMP_NONE;
 		break;
 	case R200_TXFORMAT_ARGB8888:
 	case R200_TXFORMAT_RGBA8888:
···
 	case R200_TXFORMAT_BGR111110:
 	case R200_TXFORMAT_LDVDU8888:
 		track->textures[i].cpp = 4;
+		track->textures[i].compress_format = R100_TRACK_COMP_NONE;
 		break;
 	case R200_TXFORMAT_DXT1:
 		track->textures[i].cpp = 1;
+5
drivers/gpu/drm/radeon/r300.c
···
 	case R300_TX_FORMAT_Y4X4:
 	case R300_TX_FORMAT_Z3Y3X2:
 		track->textures[i].cpp = 1;
+		track->textures[i].compress_format = R100_TRACK_COMP_NONE;
 		break;
 	case R300_TX_FORMAT_X16:
 	case R300_TX_FORMAT_Y8X8:
···
 	case R300_TX_FORMAT_B8G8_B8G8:
 	case R300_TX_FORMAT_G8R8_G8B8:
 		track->textures[i].cpp = 2;
+		track->textures[i].compress_format = R100_TRACK_COMP_NONE;
 		break;
 	case R300_TX_FORMAT_Y16X16:
 	case R300_TX_FORMAT_Z11Y11X10:
···
 	case R300_TX_FORMAT_FL_I32:
 	case 0x1e:
 		track->textures[i].cpp = 4;
+		track->textures[i].compress_format = R100_TRACK_COMP_NONE;
 		break;
 	case R300_TX_FORMAT_W16Z16Y16X16:
 	case R300_TX_FORMAT_FL_R16G16B16A16:
 	case R300_TX_FORMAT_FL_I32A32:
 		track->textures[i].cpp = 8;
+		track->textures[i].compress_format = R100_TRACK_COMP_NONE;
 		break;
 	case R300_TX_FORMAT_FL_R32G32B32A32:
 		track->textures[i].cpp = 16;
+		track->textures[i].compress_format = R100_TRACK_COMP_NONE;
 		break;
 	case R300_TX_FORMAT_DXT1:
 		track->textures[i].cpp = 1;
+12-5
drivers/gpu/drm/radeon/r600.c
···
 			break;
 		}
 	}
-	} else
-		rdev->pm.requested_power_state_index =
-			rdev->pm.current_power_state_index - 1;
+	} else {
+		if (rdev->pm.current_power_state_index == 0)
+			rdev->pm.requested_power_state_index =
+				rdev->pm.num_power_states - 1;
+		else
+			rdev->pm.requested_power_state_index =
+				rdev->pm.current_power_state_index - 1;
+	}
 	}
 	rdev->pm.requested_clock_mode_index = 0;
 	/* don't use the power state if crtcs are active and no display flag is set */
···
 	WREG32(MC_VM_FB_LOCATION, tmp);
 	WREG32(HDP_NONSURFACE_BASE, (rdev->mc.vram_start >> 8));
 	WREG32(HDP_NONSURFACE_INFO, (2 << 7));
-	WREG32(HDP_NONSURFACE_SIZE, rdev->mc.mc_vram_size | 0x3FF);
+	WREG32(HDP_NONSURFACE_SIZE, 0x3FFFFFFF);
 	if (rdev->flags & RADEON_IS_AGP) {
 		WREG32(MC_VM_AGP_TOP, rdev->mc.gtt_end >> 22);
 		WREG32(MC_VM_AGP_BOT, rdev->mc.gtt_start >> 22);
···
 	rdev->mc.visible_vram_size = rdev->mc.aper_size;
 	r600_vram_gtt_location(rdev, &rdev->mc);
 
-	if (rdev->flags & RADEON_IS_IGP)
+	if (rdev->flags & RADEON_IS_IGP) {
+		rs690_pm_info(rdev);
 		rdev->mc.igp_sideport_enabled = radeon_atombios_sideport_present(rdev);
+	}
 	radeon_update_bandwidth_info(rdev);
 	return 0;
}
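The first r600.c hunk makes the "step down one power state" path wrap around instead of computing index -1 when already at state 0. The decrement-with-wrap it introduces is easy to isolate in a sketch (function name and the plain-int interface are ours, not the driver's):

```c
#include <assert.h>

/* Sketch of the wrap-around logic the patch adds to the power-state
 * downclock path: from index 0, step to the last state, not to -1. */
static int prev_power_state_index(int current_index, int num_power_states)
{
	if (current_index == 0)
		return num_power_states - 1;
	return current_index - 1;
}
```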
···
{
 	struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
 	struct page *p = NULL;
-	int gfp_flags = 0;
+	int gfp_flags = GFP_USER;
 	int r;
 
 	/* set zero flag for page allocation if required */
···
 	}
}
 
+#if defined(CONFIG_CONSOLE_POLL) || defined(CONFIG_SERIAL_CPM_CONSOLE)
+/*
+ * Write a string to the serial port
+ * Note that this is called with interrupts already disabled
+ */
+static void cpm_uart_early_write(struct uart_cpm_port *pinfo,
+		const char *string, u_int count)
+{
+	unsigned int i;
+	cbd_t __iomem *bdp, *bdbase;
+	unsigned char *cpm_outp_addr;
+
+	/* Get the address of the host memory buffer.
+	 */
+	bdp = pinfo->tx_cur;
+	bdbase = pinfo->tx_bd_base;
+
+	/*
+	 * Now, do each character.  This is not as bad as it looks
+	 * since this is a holding FIFO and not a transmitting FIFO.
+	 * We could add the complexity of filling the entire transmit
+	 * buffer, but we would just wait longer between accesses......
+	 */
+	for (i = 0; i < count; i++, string++) {
+		/* Wait for transmitter fifo to empty.
+		 * Ready indicates output is ready, and xmt is doing
+		 * that, not that it is ready for us to send.
+		 */
+		while ((in_be16(&bdp->cbd_sc) & BD_SC_READY) != 0)
+			;
+
+		/* Send the character out.
+		 * If the buffer address is in the CPM DPRAM, don't
+		 * convert it.
+		 */
+		cpm_outp_addr = cpm2cpu_addr(in_be32(&bdp->cbd_bufaddr),
+					pinfo);
+		*cpm_outp_addr = *string;
+
+		out_be16(&bdp->cbd_datlen, 1);
+		setbits16(&bdp->cbd_sc, BD_SC_READY);
+
+		if (in_be16(&bdp->cbd_sc) & BD_SC_WRAP)
+			bdp = bdbase;
+		else
+			bdp++;
+
+		/* if a LF, also do CR... */
+		if (*string == 10) {
+			while ((in_be16(&bdp->cbd_sc) & BD_SC_READY) != 0)
+				;
+
+			cpm_outp_addr = cpm2cpu_addr(in_be32(&bdp->cbd_bufaddr),
+						pinfo);
+			*cpm_outp_addr = 13;
+
+			out_be16(&bdp->cbd_datlen, 1);
+			setbits16(&bdp->cbd_sc, BD_SC_READY);
+
+			if (in_be16(&bdp->cbd_sc) & BD_SC_WRAP)
+				bdp = bdbase;
+			else
+				bdp++;
+		}
+	}
+
+	/*
+	 * Finally, Wait for transmitter & holding register to empty
+	 * and restore the IER
+	 */
+	while ((in_be16(&bdp->cbd_sc) & BD_SC_READY) != 0)
+		;
+
+	pinfo->tx_cur = bdp;
+}
+#endif
+
#ifdef CONFIG_CONSOLE_POLL
/* Serial polling routines for writing and reading from the uart while
 * in an interrupt or debug context.
···
 	static char ch[2];
 
 	ch[0] = (char)c;
-	cpm_uart_early_write(pinfo->port.line, ch, 1);
+	cpm_uart_early_write(pinfo, ch, 1);
}
#endif /* CONFIG_CONSOLE_POLL */
···
 			 u_int count)
{
 	struct uart_cpm_port *pinfo = &cpm_uart_ports[co->index];
-	unsigned int i;
-	cbd_t __iomem *bdp, *bdbase;
-	unsigned char *cp;
 	unsigned long flags;
 	int nolock = oops_in_progress;
···
 		spin_lock_irqsave(&pinfo->port.lock, flags);
 	}
 
-	/* Get the address of the host memory buffer.
-	 */
-	bdp = pinfo->tx_cur;
-	bdbase = pinfo->tx_bd_base;
-
-	/*
-	 * Now, do each character.  This is not as bad as it looks
-	 * since this is a holding FIFO and not a transmitting FIFO.
-	 * We could add the complexity of filling the entire transmit
-	 * buffer, but we would just wait longer between accesses......
-	 */
-	for (i = 0; i < count; i++, s++) {
-		/* Wait for transmitter fifo to empty.
-		 * Ready indicates output is ready, and xmt is doing
-		 * that, not that it is ready for us to send.
-		 */
-		while ((in_be16(&bdp->cbd_sc) & BD_SC_READY) != 0)
-			;
-
-		/* Send the character out.
-		 * If the buffer address is in the CPM DPRAM, don't
-		 * convert it.
-		 */
-		cp = cpm2cpu_addr(in_be32(&bdp->cbd_bufaddr), pinfo);
-		*cp = *s;
-
-		out_be16(&bdp->cbd_datlen, 1);
-		setbits16(&bdp->cbd_sc, BD_SC_READY);
-
-		if (in_be16(&bdp->cbd_sc) & BD_SC_WRAP)
-			bdp = bdbase;
-		else
-			bdp++;
-
-		/* if a LF, also do CR... */
-		if (*s == 10) {
-			while ((in_be16(&bdp->cbd_sc) & BD_SC_READY) != 0)
-				;
-
-			cp = cpm2cpu_addr(in_be32(&bdp->cbd_bufaddr), pinfo);
-			*cp = 13;
-
-			out_be16(&bdp->cbd_datlen, 1);
-			setbits16(&bdp->cbd_sc, BD_SC_READY);
-
-			if (in_be16(&bdp->cbd_sc) & BD_SC_WRAP)
-				bdp = bdbase;
-			else
-				bdp++;
-		}
-	}
-
-	/*
-	 * Finally, Wait for transmitter & holding register to empty
-	 * and restore the IER
-	 */
-	while ((in_be16(&bdp->cbd_sc) & BD_SC_READY) != 0)
-		;
-
-	pinfo->tx_cur = bdp;
+	cpm_uart_early_write(pinfo, s, count);
 
 	if (unlikely(nolock)) {
 		local_irq_restore(flags);
···
#include "8255.h"
 
#define PCI_VENDOR_ID_CB	0x1307	/* PCI vendor number of ComputerBoards */
-#define N_BOARDS	10	/* Number of boards in cb_pcidda_boards */
#define EEPROM_SIZE	128	/* number of entries in eeprom */
#define MAX_AO_CHANNELS 8	/* maximum number of ao channels for supported boards */
···
 			continue;
 		}
 	}
-	for (index = 0; index < N_BOARDS; index++) {
+	for (index = 0; index < ARRAY_SIZE(cb_pcidda_boards); index++) {
 		if (cb_pcidda_boards[index].device_id ==
 			pcidev->device) {
 			goto found;
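The cb_pcidda change drops the hand-maintained `N_BOARDS` constant in favour of the kernel's `ARRAY_SIZE()` macro, so the loop bound can never drift out of sync with the board table. A self-contained userspace sketch of the same pattern (the table contents and device IDs here are invented for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Same definition as the kernel's ARRAY_SIZE (minus the type-check magic). */
#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

struct board { int device_id; };

/* Hypothetical board table; adding an entry automatically grows the bound. */
static const struct board boards[] = { {0x20}, {0x21}, {0x22} };

static int find_board(int device_id)
{
	size_t i;

	for (i = 0; i < ARRAY_SIZE(boards); i++)
		if (boards[i].device_id == device_id)
			return (int)i;
	return -1;
}
```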
+30-11
drivers/staging/hv/channel_mgmt.c
···
#include <linux/slab.h>
#include <linux/list.h>
#include <linux/module.h>
+#include <linux/completion.h>
#include "osd.h"
#include "logging.h"
#include "vmbus_private.h"
···
 			Channel);
}
 
+
+DECLARE_COMPLETION(hv_channel_ready);
+
+/*
+ * Count initialized channels, and ensure all channels are ready when hv_vmbus
+ * module loading completes.
+ */
+static void count_hv_channel(void)
+{
+	static int counter;
+	unsigned long flags;
+
+	spin_lock_irqsave(&gVmbusConnection.channel_lock, flags);
+	if (++counter == MAX_MSG_TYPES)
+		complete(&hv_channel_ready);
+	spin_unlock_irqrestore(&gVmbusConnection.channel_lock, flags);
+}
+
+
/*
 * VmbusChannelProcessOffer - Process the offer by creating a channel/device
 * associated with this offer
···
 		 * can cleanup properly
 		 */
 		newChannel->State = CHANNEL_OPEN_STATE;
-		cnt = 0;
 
-		while (cnt != MAX_MSG_TYPES) {
+		/* Open IC channels */
+		for (cnt = 0; cnt < MAX_MSG_TYPES; cnt++) {
 			if (memcmp(&newChannel->OfferMsg.Offer.InterfaceType,
 				   &hv_cb_utils[cnt].data,
-				   sizeof(struct hv_guid)) == 0) {
+				   sizeof(struct hv_guid)) == 0 &&
+				VmbusChannelOpen(newChannel, 2 * PAGE_SIZE,
+						 2 * PAGE_SIZE, NULL, 0,
+						 hv_cb_utils[cnt].callback,
+						 newChannel) == 0) {
+				hv_cb_utils[cnt].channel = newChannel;
 				DPRINT_INFO(VMBUS, "%s",
-					hv_cb_utils[cnt].log_msg);
-
-				if (VmbusChannelOpen(newChannel, 2 * PAGE_SIZE,
-						    2 * PAGE_SIZE, NULL, 0,
-						    hv_cb_utils[cnt].callback,
-						    newChannel) == 0)
-					hv_cb_utils[cnt].channel = newChannel;
+					    hv_cb_utils[cnt].log_msg);
+				count_hv_channel();
 			}
-			cnt++;
 		}
 	}
 	DPRINT_EXIT(VMBUS);
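The hv patch above completes a completion (`hv_channel_ready`) once the Nth channel is counted under the connection lock. The counting logic can be modelled in a small userspace sketch (the `MAX_MSG_TYPES` value and the `all_ready` flag standing in for `complete()` are illustrative, and the lock is elided since the sketch is single-threaded):

```c
#include <assert.h>

#define MAX_MSG_TYPES 3		/* illustrative, not the driver's value */

static int chan_counter;
static int all_ready;		/* stands in for complete(&hv_channel_ready) */

/* Sketch of count_hv_channel(): in the driver the increment and compare
 * happen under gVmbusConnection.channel_lock. */
static void count_channel(void)
{
	if (++chan_counter == MAX_MSG_TYPES)
		all_ready = 1;
}

/* Helper for demonstration: report whether n registrations suffice. */
static int ready_after(int n)
{
	chan_counter = 0;
	all_ready = 0;
	while (n--)
		count_channel();
	return all_ready;
}
```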
···
 
static void choose_wakeup(struct usb_device *udev, pm_message_t msg)
{
-	int	w, i;
-	struct usb_interface *intf;
+	int	w;
 
 	/* Remote wakeup is needed only when we actually go to sleep.
 	 * For things like FREEZE and QUIESCE, if the device is already
···
 		return;
 	}
 
-	/* If remote wakeup is permitted, see whether any interface drivers
+	/* Enable remote wakeup if it is allowed, even if no interface drivers
 	 * actually want it.
 	 */
-	w = 0;
-	if (device_may_wakeup(&udev->dev) && udev->actconfig) {
-		for (i = 0; i < udev->actconfig->desc.bNumInterfaces; i++) {
-			intf = udev->actconfig->interface[i];
-			w |= intf->needs_remote_wakeup;
-		}
-	}
+	w = device_may_wakeup(&udev->dev);
 
 	/* If the device is autosuspended with the wrong wakeup setting,
 	 * autoresume now so the setting can be changed.
+5-2
drivers/usb/core/message.c
···
 		/* A length of zero means transfer the whole sg list */
 		len = length;
 		if (len == 0) {
-			for_each_sg(sg, sg, nents, i)
-				len += sg->length;
+			struct scatterlist	*sg2;
+			int			j;
+
+			for_each_sg(sg, sg2, nents, j)
+				len += sg2->length;
 		}
 	} else {
 		/*
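The message.c bug fixed above is classic cursor clobbering: `for_each_sg(sg, sg, nents, i)` used the list head itself as the iteration variable, destroying it for the code that follows. A userspace sketch with a plain singly linked list (types and names invented for illustration) shows the shape of the correct version, with a separate cursor:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for a scatterlist segment. */
struct seg { int length; struct seg *next; };

/* Correct pattern: iterate with a separate cursor so 'head' survives,
 * just as the fix introduces sg2 instead of reusing sg. */
static int total_len(struct seg *head)
{
	struct seg *cur;
	int len = 0;

	for (cur = head; cur != NULL; cur = cur->next)
		len += cur->length;
	return len;
}

/* Demo list 3 -> 4 -> 5 used by the assertions below. */
static struct seg c_seg = { 5, NULL };
static struct seg b_seg = { 4, &c_seg };
static struct seg a_seg = { 3, &b_seg };
```

Reusing `head` as the cursor would still compute the sum, but any later use of `head` would start from NULL, which is exactly the latent bug the patch removes.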
···
/* Data shared by all the FSG instances. */
struct fsg_common {
	struct usb_gadget	*gadget;
-	struct fsg_dev		*fsg;
-	struct fsg_dev		*prev_fsg;
+	struct fsg_dev		*fsg, *new_fsg;
+	wait_queue_head_t	fsg_wait;
 
	/* filesem protects: backing files in use */
	struct rw_semaphore	filesem;
···
	enum fsg_state		state;		/* For exception handling */
	unsigned int		exception_req_tag;
 
-	u8			config, new_config;
	enum data_direction	data_dir;
	u32			data_size;
	u32			data_size_from_cmnd;
···
	u16			w_value = le16_to_cpu(ctrl->wValue);
	u16			w_length = le16_to_cpu(ctrl->wLength);
 
-	if (!fsg->common->config)
+	if (!fsg_is_set(fsg->common))
		return -EOPNOTSUPP;
 
	switch (ctrl->bRequest) {
···
	return -ENOMEM;
}
 
-/*
- * Reset interface setting and re-init endpoint state (toggle etc).
- * Call with altsetting < 0 to disable the interface.  The only other
- * available altsetting is 0, which enables the interface.
- */
-static int do_set_interface(struct fsg_common *common, int altsetting)
+/* Reset interface setting and re-init endpoint state (toggle etc). */
+static int do_set_interface(struct fsg_common *common, struct fsg_dev *new_fsg)
{
-	int	rc = 0;
-	int	i;
-	const struct usb_endpoint_descriptor	*d;
+	const struct usb_endpoint_descriptor *d;
+	struct fsg_dev *fsg;
+	int i, rc = 0;
 
	if (common->running)
		DBG(common, "reset interface\n");
 
reset:
	/* Deallocate the requests */
-	if (common->prev_fsg) {
-		struct fsg_dev *fsg = common->prev_fsg;
+	if (common->fsg) {
+		fsg = common->fsg;
 
		for (i = 0; i < FSG_NUM_BUFFERS; ++i) {
			struct fsg_buffhd *bh = &common->buffhds[i];
···
 			fsg->bulk_out_enabled = 0;
 		}
 
-		common->prev_fsg = 0;
+		common->fsg = NULL;
+		wake_up(&common->fsg_wait);
 	}
 
 	common->running = 0;
-	if (altsetting < 0 || rc != 0)
+	if (!new_fsg || rc)
 		return rc;
 
-	DBG(common, "set interface %d\n", altsetting);
+	common->fsg = new_fsg;
+	fsg = common->fsg;
 
-	if (fsg_is_set(common)) {
-		struct fsg_dev *fsg = common->fsg;
-		common->prev_fsg = common->fsg;
+	/* Enable the endpoints */
+	d = fsg_ep_desc(common->gadget,
+			&fsg_fs_bulk_in_desc, &fsg_hs_bulk_in_desc);
+	rc = enable_endpoint(common, fsg->bulk_in, d);
+	if (rc)
+		goto reset;
+	fsg->bulk_in_enabled = 1;
 
-		/* Enable the endpoints */
-		d = fsg_ep_desc(common->gadget,
-				&fsg_fs_bulk_in_desc, &fsg_hs_bulk_in_desc);
-		rc = enable_endpoint(common, fsg->bulk_in, d);
+	d = fsg_ep_desc(common->gadget,
+			&fsg_fs_bulk_out_desc, &fsg_hs_bulk_out_desc);
+	rc = enable_endpoint(common, fsg->bulk_out, d);
+	if (rc)
+		goto reset;
+	fsg->bulk_out_enabled = 1;
+	common->bulk_out_maxpacket = le16_to_cpu(d->wMaxPacketSize);
+	clear_bit(IGNORE_BULK_OUT, &fsg->atomic_bitflags);
+
+	/* Allocate the requests */
+	for (i = 0; i < FSG_NUM_BUFFERS; ++i) {
+		struct fsg_buffhd	*bh = &common->buffhds[i];
+
+		rc = alloc_request(common, fsg->bulk_in, &bh->inreq);
 		if (rc)
 			goto reset;
-		fsg->bulk_in_enabled = 1;
-
-		d = fsg_ep_desc(common->gadget,
-				&fsg_fs_bulk_out_desc, &fsg_hs_bulk_out_desc);
-		rc = enable_endpoint(common, fsg->bulk_out, d);
+		rc = alloc_request(common, fsg->bulk_out, &bh->outreq);
 		if (rc)
 			goto reset;
-		fsg->bulk_out_enabled = 1;
-		common->bulk_out_maxpacket = le16_to_cpu(d->wMaxPacketSize);
-		clear_bit(IGNORE_BULK_OUT, &fsg->atomic_bitflags);
-
-		/* Allocate the requests */
-		for (i = 0; i < FSG_NUM_BUFFERS; ++i) {
-			struct fsg_buffhd	*bh = &common->buffhds[i];
-
-			rc = alloc_request(common, fsg->bulk_in, &bh->inreq);
-			if (rc)
-				goto reset;
-			rc = alloc_request(common, fsg->bulk_out, &bh->outreq);
-			if (rc)
-				goto reset;
-			bh->inreq->buf = bh->outreq->buf = bh->buf;
-			bh->inreq->context = bh->outreq->context = bh;
-			bh->inreq->complete = bulk_in_complete;
-			bh->outreq->complete = bulk_out_complete;
-		}
-
-		common->running = 1;
-		for (i = 0; i < common->nluns; ++i)
-			common->luns[i].unit_attention_data = SS_RESET_OCCURRED;
-		return rc;
-	} else {
-		return -EIO;
-	}
-}
-
-
-/*
- * Change our operational configuration.  This code must agree with the code
- * that returns config descriptors, and with interface altsetting code.
- *
- * It's also responsible for power management interactions.  Some
- * configurations might not work with our current power sources.
- * For now we just assume the gadget is always self-powered.
- */
-static int do_set_config(struct fsg_common *common, u8 new_config)
-{
-	int	rc = 0;
-
-	/* Disable the single interface */
-	if (common->config != 0) {
-		DBG(common, "reset config\n");
-		common->config = 0;
-		rc = do_set_interface(common, -1);
+		bh->inreq->buf = bh->outreq->buf = bh->buf;
+		bh->inreq->context = bh->outreq->context = bh;
+		bh->inreq->complete = bulk_in_complete;
+		bh->outreq->complete = bulk_out_complete;
 	}
 
-	/* Enable the interface */
-	if (new_config != 0) {
-		common->config = new_config;
-		rc = do_set_interface(common, 0);
-		if (rc != 0)
-			common->config = 0;	/* Reset on errors */
-	}
+	common->running = 1;
+	for (i = 0; i < common->nluns; ++i)
+		common->luns[i].unit_attention_data = SS_RESET_OCCURRED;
 	return rc;
}
···
static int fsg_set_alt(struct usb_function *f, unsigned intf, unsigned alt)
{
	struct fsg_dev *fsg = fsg_from_func(f);
-	fsg->common->prev_fsg = fsg->common->fsg;
-	fsg->common->fsg = fsg;
-	fsg->common->new_config = 1;
+	fsg->common->new_fsg = fsg;
	raise_exception(fsg->common, FSG_STATE_CONFIG_CHANGE);
	return 0;
}
···
static void fsg_disable(struct usb_function *f)
{
	struct fsg_dev *fsg = fsg_from_func(f);
-	fsg->common->prev_fsg = fsg->common->fsg;
-	fsg->common->fsg = fsg;
-	fsg->common->new_config = 0;
+	fsg->common->new_fsg = NULL;
	raise_exception(fsg->common, FSG_STATE_CONFIG_CHANGE);
}
···
static void handle_exception(struct fsg_common *common)
{
	siginfo_t		info;
-	int			sig;
	int			i;
	struct fsg_buffhd	*bh;
	enum fsg_state		old_state;
-	u8			new_config;
	struct fsg_lun		*curlun;
	unsigned int		exception_req_tag;
-	int			rc;
 
	/* Clear the existing signals.  Anything but SIGUSR1 is converted
	 * into a high-priority EXIT exception. */
	for (;;) {
-		sig = dequeue_signal_lock(current, &current->blocked, &info);
+		int sig =
+			dequeue_signal_lock(current, &current->blocked, &info);
		if (!sig)
			break;
		if (sig != SIGUSR1) {
···
 	}
 
 	/* Cancel all the pending transfers */
-	if (fsg_is_set(common)) {
+	if (likely(common->fsg)) {
 		for (i = 0; i < FSG_NUM_BUFFERS; ++i) {
 			bh = &common->buffhds[i];
 			if (bh->inreq_busy)
···
 	common->next_buffhd_to_fill = &common->buffhds[0];
 	common->next_buffhd_to_drain = &common->buffhds[0];
 	exception_req_tag = common->exception_req_tag;
-	new_config = common->new_config;
 	old_state = common->state;
 
 	if (old_state == FSG_STATE_ABORT_BULK_OUT)
···
 		break;
 
 	case FSG_STATE_CONFIG_CHANGE:
-		rc = do_set_config(common, new_config);
+		do_set_interface(common, common->new_fsg);
 		break;
 
 	case FSG_STATE_EXIT:
 	case FSG_STATE_TERMINATED:
-		do_set_config(common, 0);		/* Free resources */
+		do_set_interface(common, NULL);		/* Free resources */
 		spin_lock_irq(&common->lock);
 		common->state = FSG_STATE_TERMINATED;	/* Stop the thread */
 		spin_unlock_irq(&common->lock);
···
 		goto error_release;
 	}
 	init_completion(&common->thread_notifier);
+	init_waitqueue_head(&common->fsg_wait);
#undef OR
 
···
static void fsg_unbind(struct usb_configuration *c, struct usb_function *f)
{
	struct fsg_dev		*fsg = fsg_from_func(f);
+	struct fsg_common	*common = fsg->common;
 
	DBG(fsg, "unbind\n");
-	fsg_common_put(fsg->common);
+	if (fsg->common->fsg == fsg) {
+		fsg->common->new_fsg = NULL;
+		raise_exception(fsg->common, FSG_STATE_CONFIG_CHANGE);
+		/* FIXME: make interruptible or killable somehow? */
+		wait_event(common->fsg_wait, common->fsg != fsg);
+	}
+
+	fsg_common_put(common);
	usb_free_descriptors(fsg->function.descriptors);
	usb_free_descriptors(fsg->function.hs_descriptors);
	kfree(fsg);
···
{
	struct fsg_dev		*fsg = fsg_from_func(f);
	struct usb_gadget	*gadget = c->cdev->gadget;
-	int			rc;
	int			i;
	struct usb_ep		*ep;
 
···
 	ep->driver_data = fsg->common;	/* claim the endpoint */
 	fsg->bulk_out = ep;
 
+	/* Copy descriptors */
+	f->descriptors = usb_copy_descriptors(fsg_fs_function);
+	if (unlikely(!f->descriptors))
+		return -ENOMEM;
+
 	if (gadget_is_dualspeed(gadget)) {
 		/* Assume endpoint addresses are the same for both speeds */
 		fsg_hs_bulk_in_desc.bEndpointAddress =
···
 		fsg_hs_bulk_out_desc.bEndpointAddress =
 			fsg_fs_bulk_out_desc.bEndpointAddress;
 		f->hs_descriptors = usb_copy_descriptors(fsg_hs_function);
-		if (unlikely(!f->hs_descriptors))
+		if (unlikely(!f->hs_descriptors)) {
+			usb_free_descriptors(f->descriptors);
 			return -ENOMEM;
+		}
 	}
 
 	return 0;
 
autoconf_fail:
	ERROR(fsg, "unable to autoconfigure all endpoints\n");
-	rc = -ENOTSUPP;
-	return rc;
+	return -ENOTSUPP;
}
 
···
	fsg->function.name        = FSG_DRIVER_DESC;
	fsg->function.strings     = fsg_strings_array;
-	fsg->function.descriptors = usb_copy_descriptors(fsg_fs_function);
-	if (unlikely(!fsg->function.descriptors)) {
-		rc = -ENOMEM;
-		goto error_free_fsg;
-	}
	fsg->function.bind        = fsg_bind;
	fsg->function.unbind      = fsg_unbind;
	fsg->function.setup       = fsg_setup;
···
	rc = usb_add_function(c, &fsg->function);
	if (unlikely(rc))
-		goto error_free_all;
-
-	fsg_common_get(fsg->common);
-	return 0;
-
-error_free_all:
-	usb_free_descriptors(fsg->function.descriptors);
-	/* fsg_bind() might have copied those; or maybe not? who cares
-	 * -- free it just in case. */
-	usb_free_descriptors(fsg->function.hs_descriptors);
-error_free_fsg:
-	kfree(fsg);
-
+		kfree(fsg);
+	else
+		fsg_common_get(fsg->common);
	return rc;
}
+11
drivers/usb/gadget/g_ffs.c
···
 	if (unlikely(ret < 0))
 		return ret;
 
+	/* After previous do_configs there may be some invalid
+	 * pointers in c->interface array.  This happens every time
+	 * a user space function with fewer interfaces than a user
+	 * space function that was run before the new one is run.  The
+	 * composite's set_config() assumes that if there are no more
+	 * than MAX_CONFIG_INTERFACES interfaces in a configuration
+	 * then there is a NULL pointer after the last interface in
+	 * c->interface array.  We need to make sure this is true. */
+	if (c->next_interface_id < ARRAY_SIZE(c->interface))
+		c->interface[c->next_interface_id] = NULL;
+
 	return 0;
}
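The g_ffs fix restores an invariant: consumers of `c->interface` walk the array until the first NULL, so when a new configuration binds fewer interfaces than the previous one, the stale tail must be cut off with a NULL sentinel. A self-contained sketch of that invariant (the array size and helper names are ours, not the composite framework's):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_IFACES 8	/* illustrative stand-in for MAX_CONFIG_INTERFACES */

/* NULL-terminate the array after the last interface actually bound. */
static void terminate_ifaces(void *iface[], unsigned next_id)
{
	if (next_id < MAX_IFACES)
		iface[next_id] = NULL;
}

/* How a consumer walks the array: up to the first NULL. */
static unsigned count_ifaces(void *iface[])
{
	unsigned n = 0;

	while (n < MAX_IFACES && iface[n] != NULL)
		n++;
	return n;
}

/* Demo: a previous config left four stale pointers; the new one binds two. */
static unsigned demo_stale_tail(void)
{
	static int x;
	void *iface[MAX_IFACES] = { &x, &x, &x, &x };

	terminate_ifaces(iface, 2);
	return count_ifaces(iface);
}
```

Without the sentinel, `count_ifaces()` in this demo would see four entries and the consumer would dereference the two stale pointers.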
+16-16
drivers/usb/gadget/printer.c
···8282struct printer_dev {8383 spinlock_t lock; /* lock this structure */8484 /* lock buffer lists during read/write calls */8585- spinlock_t lock_printer_io;8585+ struct mutex lock_printer_io;8686 struct usb_gadget *gadget;8787 struct usb_request *req; /* for control responses */8888 u8 config;···567567568568 DBG(dev, "printer_read trying to read %d bytes\n", (int)len);569569570570- spin_lock(&dev->lock_printer_io);570570+ mutex_lock(&dev->lock_printer_io);571571 spin_lock_irqsave(&dev->lock, flags);572572573573 /* We will use this flag later to check if a printer reset happened···601601 * call or not.602602 */603603 if (fd->f_flags & (O_NONBLOCK|O_NDELAY)) {604604- spin_unlock(&dev->lock_printer_io);604604+ mutex_unlock(&dev->lock_printer_io);605605 return -EAGAIN;606606 }607607···648648 if (dev->reset_printer) {649649 list_add(¤t_rx_req->list, &dev->rx_reqs);650650 spin_unlock_irqrestore(&dev->lock, flags);651651- spin_unlock(&dev->lock_printer_io);651651+ mutex_unlock(&dev->lock_printer_io);652652 return -EAGAIN;653653 }654654···673673 dev->current_rx_buf = current_rx_buf;674674675675 spin_unlock_irqrestore(&dev->lock, flags);676676- spin_unlock(&dev->lock_printer_io);676676+ mutex_unlock(&dev->lock_printer_io);677677678678 DBG(dev, "printer_read returned %d bytes\n", (int)bytes_copied);679679···697697 if (len == 0)698698 return -EINVAL;699699700700- spin_lock(&dev->lock_printer_io);700700+ mutex_lock(&dev->lock_printer_io);701701 spin_lock_irqsave(&dev->lock, flags);702702703703 /* Check if a printer reset happens while we have interrupts on */···713713 * a NON-Blocking call or not.714714 */715715 if (fd->f_flags & (O_NONBLOCK|O_NDELAY)) {716716- spin_unlock(&dev->lock_printer_io);716716+ mutex_unlock(&dev->lock_printer_io);717717 return -EAGAIN;718718 }719719···752752753753 if (copy_from_user(req->buf, buf, size)) {754754 list_add(&req->list, &dev->tx_reqs);755755- spin_unlock(&dev->lock_printer_io);755755+ mutex_unlock(&dev->lock_printer_io);756756 return 
bytes_copied;757757 }758758···766766 if (dev->reset_printer) {767767 list_add(&req->list, &dev->tx_reqs);768768 spin_unlock_irqrestore(&dev->lock, flags);769769- spin_unlock(&dev->lock_printer_io);769769+ mutex_unlock(&dev->lock_printer_io);770770 return -EAGAIN;771771 }772772773773 if (usb_ep_queue(dev->in_ep, req, GFP_ATOMIC)) {774774 list_add(&req->list, &dev->tx_reqs);775775 spin_unlock_irqrestore(&dev->lock, flags);776776- spin_unlock(&dev->lock_printer_io);776776+ mutex_unlock(&dev->lock_printer_io);777777 return -EAGAIN;778778 }779779···782782 }783783784784 spin_unlock_irqrestore(&dev->lock, flags);785785- spin_unlock(&dev->lock_printer_io);785785+ mutex_unlock(&dev->lock_printer_io);786786787787 DBG(dev, "printer_write sent %d bytes\n", (int)bytes_copied);788788···820820 unsigned long flags;821821 int status = 0;822822823823- spin_lock(&dev->lock_printer_io);823823+ mutex_lock(&dev->lock_printer_io);824824 spin_lock_irqsave(&dev->lock, flags);825825 setup_rx_reqs(dev);826826 spin_unlock_irqrestore(&dev->lock, flags);827827- spin_unlock(&dev->lock_printer_io);827827+ mutex_unlock(&dev->lock_printer_io);828828829829 poll_wait(fd, &dev->rx_wait, wait);830830 poll_wait(fd, &dev->tx_wait, wait);···14611461 }1462146214631463 spin_lock_init(&dev->lock);14641464- spin_lock_init(&dev->lock_printer_io);14641464+ mutex_init(&dev->lock_printer_io);14651465 INIT_LIST_HEAD(&dev->tx_reqs);14661466 INIT_LIST_HEAD(&dev->tx_reqs_active);14671467 INIT_LIST_HEAD(&dev->rx_reqs);···15941594{15951595 int status;1596159615971597- spin_lock(&usb_printer_gadget.lock_printer_io);15971597+ mutex_lock(&usb_printer_gadget.lock_printer_io);15981598 class_destroy(usb_gadget_class);15991599 unregister_chrdev_region(g_printer_devno, 2);16001600···16021602 if (status)16031603 ERROR(dev, "usb_gadget_unregister_driver %x\n", status);1604160416051605- spin_unlock(&usb_printer_gadget.lock_printer_io);16051605+ 
mutex_unlock(&usb_printer_gadget.lock_printer_io);16061606}16071607module_exit(cleanup);
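The printer.c conversion above swaps `lock_printer_io` from a spinlock to a mutex because the read/write paths hold it across operations that can sleep (copy_from_user(), blocking waits); sleeping while spinning is illegal in the kernel, whereas a mutex holder may block. A userspace sketch of the resulting locking split, using pthreads as a stand-in (hypothetical names, not the driver's API): a mutex guards the long, sleepable I/O path, while the short non-sleeping state updates would keep the spinlock.

```c
#include <assert.h>
#include <pthread.h>

/* Hypothetical device state in the style of printer_dev. */
static pthread_mutex_t io_lock = PTHREAD_MUTEX_INITIALIZER;
static long bytes_copied;

/* A sleepable section, like printer_write() around copy_from_user():
 * a mutex may block here, a spinning lock must not be held. */
static void fake_write(long n)
{
	pthread_mutex_lock(&io_lock);
	bytes_copied += n;	/* stands in for the copy + queue work */
	pthread_mutex_unlock(&io_lock);
}

static void *writer(void *arg)
{
	for (int i = 0; i < 1000; i++)
		fake_write(1);
	return arg;
}

/* Two concurrent writers; the mutex serializes the whole I/O path. */
static long run_writers(void)
{
	pthread_t t1, t2;

	bytes_copied = 0;
	pthread_create(&t1, NULL, writer, NULL);
	pthread_create(&t2, NULL, writer, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return bytes_copied;
}
```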
···536536 list_move(&req->list, &port->read_pool);537537 }538538539539- /* Push from tty to ldisc; this is immediate with low_latency, and540540- * may trigger callbacks to this driver ... so drop the spinlock.539539+ /* Push from tty to ldisc; without low_latency set this is handled by540540+ * a workqueue, so we won't get callbacks and can hold port_lock541541 */542542 if (tty && do_push) {543543- spin_unlock_irq(&port->port_lock);544543 tty_flip_buffer_push(tty);545545- wake_up_interruptible(&tty->read_wait);546546- spin_lock_irq(&port->port_lock);547547-548548- /* tty may have been closed */549549- tty = port->port_tty;550544 }551545552546···777783778784 port->open_count = 1;779785 port->openclose = false;780780-781781- /* low_latency means ldiscs work in tasklet context, without782782- * needing a workqueue schedule ... easier to keep up.783783- */784784- tty->low_latency = 1;785786786787 /* if connected, start the I/O stream */787788 if (port->port_usb) {···11841195 n_ports = 0;1185119611861197 tty_unregister_driver(gs_tty_driver);11981198+ put_tty_driver(gs_tty_driver);11871199 gs_tty_driver = NULL;1188120011891201 pr_debug("%s: cleaned up ttyGS* support\n", __func__);
+10-3
drivers/usb/host/ehci-mxc.c
···207207 /* Initialize the transceiver */208208 if (pdata->otg) {209209 pdata->otg->io_priv = hcd->regs + ULPI_VIEWPORT_OFFSET;210210- if (otg_init(pdata->otg) != 0)211211- dev_err(dev, "unable to init transceiver\n");212212- else if (otg_set_vbus(pdata->otg, 1) != 0)210210+ ret = otg_init(pdata->otg);211211+ if (ret) {212212+ dev_err(dev, "unable to init transceiver, probably missing\n");213213+ ret = -ENODEV;214214+ goto err_add;215215+ }216216+ ret = otg_set_vbus(pdata->otg, 1);217217+ if (ret) {213218 dev_err(dev, "unable to enable vbus on transceiver\n");219219+ goto err_add;220220+ }214221 }215222216223 priv->hcd = hcd;
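The ehci-mxc hunk above turns silent transceiver-init failures into propagated errors with a `goto err_add` cleanup label. The underlying idiom, goto-based error unwinding in a multi-step probe, can be sketched in isolation (step names here are invented for the example):

```c
#include <assert.h>
#include <errno.h>

struct res { int inited; };

/* Hypothetical two-step init in the style of a probe path: failure
 * after the first step must unwind it before returning. */
static int step_init(struct res *r)  { r->inited = 1; return 0; }
static void step_undo(struct res *r) { r->inited = 0; }

static int step_enable(struct res *r, int should_fail)
{
	(void)r;
	return should_fail ? -ENODEV : 0;
}

static int probe(struct res *r, int fail_enable)
{
	int ret;

	ret = step_init(r);
	if (ret)
		goto err;
	ret = step_enable(r, fail_enable);
	if (ret)
		goto err_undo;
	return 0;

err_undo:
	step_undo(r);	/* unwind the successful first step */
err:
	return ret;
}
```

The labels unwind in reverse order of acquisition, which is exactly what the added `goto err_add` paths do in the driver.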
+10-5
drivers/usb/host/isp1362-hcd.c
···2224222422252225/*-------------------------------------------------------------------------*/2226222622272227-static void isp1362_sw_reset(struct isp1362_hcd *isp1362_hcd)22272227+static void __isp1362_sw_reset(struct isp1362_hcd *isp1362_hcd)22282228{22292229 int tmp = 20;22302230- unsigned long flags;22312231-22322232- spin_lock_irqsave(&isp1362_hcd->lock, flags);2233223022342231 isp1362_write_reg16(isp1362_hcd, HCSWRES, HCSWRES_MAGIC);22352232 isp1362_write_reg32(isp1362_hcd, HCCMDSTAT, OHCI_HCR);···22372240 }22382241 if (!tmp)22392242 pr_err("Software reset timeout\n");22432243+}22442244+22452245+static void isp1362_sw_reset(struct isp1362_hcd *isp1362_hcd)22462246+{22472247+ unsigned long flags;22482248+22492249+ spin_lock_irqsave(&isp1362_hcd->lock, flags);22502250+ __isp1362_sw_reset(isp1362_hcd);22402251 spin_unlock_irqrestore(&isp1362_hcd->lock, flags);22412252}22422253···24232418 if (isp1362_hcd->board && isp1362_hcd->board->reset)24242419 isp1362_hcd->board->reset(hcd->self.controller, 1);24252420 else24262426- isp1362_sw_reset(isp1362_hcd);24212421+ __isp1362_sw_reset(isp1362_hcd);2427242224282423 if (isp1362_hcd->board && isp1362_hcd->board->clock)24292424 isp1362_hcd->board->clock(hcd->self.controller, 0);
···182182 * set, but other sections talk about dealing with the chain bit set. This was183183 * fixed in the 0.96 specification errata, but we have to assume that all 0.95184184 * xHCI hardware can't handle the chain bit being cleared on a link TRB.185185+ *186186+ * @more_trbs_coming: Will you enqueue more TRBs before calling187187+ * prepare_transfer()?185188 */186186-static void inc_enq(struct xhci_hcd *xhci, struct xhci_ring *ring, bool consumer)189189+static void inc_enq(struct xhci_hcd *xhci, struct xhci_ring *ring,190190+ bool consumer, bool more_trbs_coming)187191{188192 u32 chain;189193 union xhci_trb *next;···203199 while (last_trb(xhci, ring, ring->enq_seg, next)) {204200 if (!consumer) {205201 if (ring != xhci->event_ring) {206206- if (chain) {207207- next->link.control |= TRB_CHAIN;208208-209209- /* Give this link TRB to the hardware */210210- wmb();211211- next->link.control ^= TRB_CYCLE;212212- } else {202202+ /*203203+ * If the caller doesn't plan on enqueueing more204204+ * TDs before ringing the doorbell, then we205205+ * don't want to give the link TRB to the206206+ * hardware just yet. We'll give the link TRB207207+ * back in prepare_ring() just before we enqueue208208+ * the TD at the top of the ring.209209+ */210210+ if (!chain && !more_trbs_coming)213211 break;212212+213213+ /* If we're not dealing with 0.95 hardware,214214+ * carry over the chain bit of the previous TRB215215+ * (which may mean the chain bit is cleared).216216+ */217217+ if (!xhci_link_trb_quirk(xhci)) {218218+ next->link.control &= ~TRB_CHAIN;219219+ next->link.control |= chain;214220 }221221+ /* Give this link TRB to the hardware */222222+ wmb();223223+ next->link.control ^= TRB_CYCLE;215224 }216225 /* Toggle the cycle bit after the last ring segment. 
*/217226 if (last_trb_on_last_seg(xhci, ring, ring->enq_seg, next)) {···17241707/*17251708 * Generic function for queueing a TRB on a ring.17261709 * The caller must have checked to make sure there's room on the ring.17101710+ *17111711+ * @more_trbs_coming: Will you enqueue more TRBs before calling17121712+ * prepare_transfer()?17271713 */17281714static void queue_trb(struct xhci_hcd *xhci, struct xhci_ring *ring,17291729- bool consumer,17151715+ bool consumer, bool more_trbs_coming,17301716 u32 field1, u32 field2, u32 field3, u32 field4)17311717{17321718 struct xhci_generic_trb *trb;···17391719 trb->field[1] = field2;17401720 trb->field[2] = field3;17411721 trb->field[3] = field4;17421742- inc_enq(xhci, ring, consumer);17221722+ inc_enq(xhci, ring, consumer, more_trbs_coming);17431723}1744172417451725/*···20081988 int trb_buff_len, this_sg_len, running_total;20091989 bool first_trb;20101990 u64 addr;19911991+ bool more_trbs_coming;2011199220121993 struct xhci_generic_trb *start_trb;20131994 int start_cycle;···20942073 length_field = TRB_LEN(trb_buff_len) |20952074 remainder |20962075 TRB_INTR_TARGET(0);20972097- queue_trb(xhci, ep_ring, false,20762076+ if (num_trbs > 1)20772077+ more_trbs_coming = true;20782078+ else20792079+ more_trbs_coming = false;20802080+ queue_trb(xhci, ep_ring, false, more_trbs_coming,20982081 lower_32_bits(addr),20992082 upper_32_bits(addr),21002083 length_field,···21492124 int num_trbs;21502125 struct xhci_generic_trb *start_trb;21512126 bool first_trb;21272127+ bool more_trbs_coming;21522128 int start_cycle;21532129 u32 field, length_field;21542130···22382212 length_field = TRB_LEN(trb_buff_len) |22392213 remainder |22402214 TRB_INTR_TARGET(0);22412241- queue_trb(xhci, ep_ring, false,22152215+ if (num_trbs > 1)22162216+ more_trbs_coming = true;22172217+ else22182218+ more_trbs_coming = false;22192219+ queue_trb(xhci, ep_ring, false, more_trbs_coming,22422220 lower_32_bits(addr),22432221 upper_32_bits(addr),22442222 
length_field,···23212291 /* Queue setup TRB - see section 6.4.1.2.1 */23222292 /* FIXME better way to translate setup_packet into two u32 fields? */23232293 setup = (struct usb_ctrlrequest *) urb->setup_packet;23242324- queue_trb(xhci, ep_ring, false,22942294+ queue_trb(xhci, ep_ring, false, true,23252295 /* FIXME endianness is probably going to bite my ass here. */23262296 setup->bRequestType | setup->bRequest << 8 | setup->wValue << 16,23272297 setup->wIndex | setup->wLength << 16,···23372307 if (urb->transfer_buffer_length > 0) {23382308 if (setup->bRequestType & USB_DIR_IN)23392309 field |= TRB_DIR_IN;23402340- queue_trb(xhci, ep_ring, false,23102310+ queue_trb(xhci, ep_ring, false, true,23412311 lower_32_bits(urb->transfer_dma),23422312 upper_32_bits(urb->transfer_dma),23432313 length_field,···23542324 field = 0;23552325 else23562326 field = TRB_DIR_IN;23572357- queue_trb(xhci, ep_ring, false,23272327+ queue_trb(xhci, ep_ring, false, false,23582328 0,23592329 0,23602330 TRB_INTR_TARGET(0),···23912361 "unfailable commands failed.\n");23922362 return -ENOMEM;23932363 }23942394- queue_trb(xhci, xhci->cmd_ring, false, field1, field2, field3,23642364+ queue_trb(xhci, xhci->cmd_ring, false, false, field1, field2, field3,23952365 field4 | xhci->cmd_ring->cycle_state);23962366 return 0;23972367}
+5-8
drivers/usb/musb/musb_core.c
···219219 return 0;220220}221221#else222222-#define musb_ulpi_read(a, b) NULL223223-#define musb_ulpi_write(a, b, c) NULL222222+#define musb_ulpi_read NULL223223+#define musb_ulpi_write NULL224224#endif225225226226static struct otg_io_access_ops musb_ulpi_access = {···451451 * @param power452452 */453453454454-#define STAGE0_MASK (MUSB_INTR_RESUME | MUSB_INTR_SESSREQ \455455- | MUSB_INTR_VBUSERROR | MUSB_INTR_CONNECT \456456- | MUSB_INTR_RESET)457457-458454static irqreturn_t musb_stage0_irq(struct musb *musb, u8 int_usb,459455 u8 devctl, u8 power)460456{···638642 handled = IRQ_HANDLED;639643 }640644641641-645645+#endif642646 if (int_usb & MUSB_INTR_SUSPEND) {643647 DBG(1, "SUSPEND (%s) devctl %02x power %02x\n",644648 otg_state_string(musb), devctl, power);···701705 }702706 }703707708708+#ifdef CONFIG_USB_MUSB_HDRC_HCD704709 if (int_usb & MUSB_INTR_CONNECT) {705710 struct usb_hcd *hcd = musb_to_hcd(musb);706711 void __iomem *mbase = musb->mregs;···15941597 /* the core can interrupt us for multiple reasons; docs have15951598 * a generic interrupt flowchart to follow15961599 */15971597- if (musb->int_usb & STAGE0_MASK)16001600+ if (musb->int_usb)15981601 retval |= musb_stage0_irq(musb, musb->int_usb,15991602 devctl, power);16001603
···596596 goto release_regs;597597 }598598599599- nuc900_driver_clksrc_div(&pdev->dev, "ext", 0x2);600600-601599 fbi->clk = clk_get(&pdev->dev, NULL);602600 if (!fbi->clk || IS_ERR(fbi->clk)) {603601 printk(KERN_ERR "nuc900-lcd:failed to get lcd clock source\n");
+1-5
fs/binfmt_flat.c
···6868 * Here we can be a bit looser than the data sections since this6969 * needs to only meet arch ABI requirements.7070 */7171-#ifdef ARCH_SLAB_MINALIGN7272-#define FLAT_STACK_ALIGN (ARCH_SLAB_MINALIGN)7373-#else7474-#define FLAT_STACK_ALIGN (sizeof(void *))7575-#endif7171+#define FLAT_STACK_ALIGN max_t(unsigned long, sizeof(void *), ARCH_SLAB_MINALIGN)76727773#define RELOC_FAILED 0xff00ff01 /* Relocation incorrect somewhere */7874#define UNLOADED_LIB 0x7ff000ff /* Placeholder for unused library */
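The binfmt_flat hunk collapses the `#ifdef` pair into a single definition that takes the larger of `sizeof(void *)` and `ARCH_SLAB_MINALIGN` via max_t(). A userspace sketch of the same idea, with a plain ternary standing in for the kernel-only max_t() and an assumed value for the arch constant:

```c
#include <assert.h>
#include <stdint.h>

/* ARCH_SLAB_MINALIGN is arch-provided in the kernel; 16 is just a
 * stand-in for this sketch. */
#define DEMO_MINALIGN 16UL

/* After the patch the alignment is always defined as the max of the
 * two candidates, instead of an #ifdef choosing one of them. */
#define STACK_ALIGN \
	(sizeof(void *) > DEMO_MINALIGN ? sizeof(void *) : DEMO_MINALIGN)

/* Round a stack pointer down to the alignment, as a loader would
 * (alignment is a power of two, so masking works). */
static uintptr_t align_down(uintptr_t sp)
{
	return sp & ~(uintptr_t)(STACK_ALIGN - 1);
}
```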
+2
fs/dcache.c
···590590 up_read(&sb->s_umount);591591 }592592 spin_lock(&sb_lock);593593+ /* lock was dropped, must reset next */594594+ list_safe_reset_next(sb, n, s_list);593595 count -= pruned;594596 __put_super(sb);595597 /* more work left to do? */
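This dcache.c hunk (and the three matching ones in fs/super.c below) adds list_safe_reset_next() because the safe-iteration cursor `n` was saved while sb_lock was held; once the lock is dropped and retaken, another CPU may have unlinked that node, so the cursor must be re-read from the current position. A self-contained sketch of why the stale cursor is dangerous, using a minimal list in the style of `<linux/list.h>`:

```c
#include <assert.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev; n->next = h;
	h->prev->next = n; h->prev = n;
}

static void list_del(struct list_head *n)
{
	n->prev->next = n->next; n->next->prev = n->prev;
	n->next = n->prev = NULL;	/* poison, like the kernel */
}

/* The essence of list_safe_reset_next(): recompute the cursor from
 * the node we still hold a reference to, instead of trusting the
 * pointer saved before the lock was dropped. */
static struct list_head *safe_reset_next(struct list_head *pos)
{
	return pos->next;
}
```

In the kernel the held reference is the elevated `s_count` on `sb`, which is what makes `pos` itself safe to dereference after relocking.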
+4-2
fs/fcntl.c
···733733{734734 while (fa) {735735 struct fown_struct *fown;736736+ unsigned long flags;737737+736738 if (fa->magic != FASYNC_MAGIC) {737739 printk(KERN_ERR "kill_fasync: bad magic number in "738740 "fasync_struct!\n");739741 return;740742 }741741- spin_lock(&fa->fa_lock);743743+ spin_lock_irqsave(&fa->fa_lock, flags);742744 if (fa->fa_file) {743745 fown = &fa->fa_file->f_owner;744746 /* Don't send SIGURG to processes which have not set a···749747 if (!(sig == SIGURG && fown->signum == 0))750748 send_sigio(fown, fa->fa_fd, band);751749 }752752- spin_unlock(&fa->fa_lock);750750+ spin_unlock_irqrestore(&fa->fa_lock, flags);753751 fa = rcu_dereference(fa->fa_next);754752 }755753}
+115-155
fs/fs-writeback.c
···6363};64646565enum {6666- WS_USED_B = 0,6767- WS_ONSTACK_B,6666+ WS_INPROGRESS = 0,6767+ WS_ONSTACK,6868};6969-7070-#define WS_USED (1 << WS_USED_B)7171-#define WS_ONSTACK (1 << WS_ONSTACK_B)7272-7373-static inline bool bdi_work_on_stack(struct bdi_work *work)7474-{7575- return test_bit(WS_ONSTACK_B, &work->state);7676-}77697870static inline void bdi_work_init(struct bdi_work *work,7971 struct wb_writeback_args *args)8072{8173 INIT_RCU_HEAD(&work->rcu_head);8274 work->args = *args;8383- work->state = WS_USED;7575+ __set_bit(WS_INPROGRESS, &work->state);8476}85778678/**···8795 return !list_empty(&bdi->work_list);8896}89979090-static void bdi_work_clear(struct bdi_work *work)9191-{9292- clear_bit(WS_USED_B, &work->state);9393- smp_mb__after_clear_bit();9494- /*9595- * work can have disappeared at this point. bit waitq functions9696- * should be able to tolerate this, provided bdi_sched_wait does9797- * not dereference it's pointer argument.9898- */9999- wake_up_bit(&work->state, WS_USED_B);100100-}101101-10298static void bdi_work_free(struct rcu_head *head)10399{104100 struct bdi_work *work = container_of(head, struct bdi_work, rcu_head);105101106106- if (!bdi_work_on_stack(work))102102+ clear_bit(WS_INPROGRESS, &work->state);103103+ smp_mb__after_clear_bit();104104+ wake_up_bit(&work->state, WS_INPROGRESS);105105+106106+ if (!test_bit(WS_ONSTACK, &work->state))107107 kfree(work);108108- else109109- bdi_work_clear(work);110110-}111111-112112-static void wb_work_complete(struct bdi_work *work)113113-{114114- const enum writeback_sync_modes sync_mode = work->args.sync_mode;115115- int onstack = bdi_work_on_stack(work);116116-117117- /*118118- * For allocated work, we can clear the done/seen bit right here.119119- * For on-stack work, we need to postpone both the clear and free120120- * to after the RCU grace period, since the stack could be invalidated121121- * as soon as bdi_work_clear() has done the wakeup.122122- */123123- if (!onstack)124124- 
bdi_work_clear(work);125125- if (sync_mode == WB_SYNC_NONE || onstack)126126- call_rcu(&work->rcu_head, bdi_work_free);127108}128109129110static void wb_clear_pending(struct bdi_writeback *wb, struct bdi_work *work)···112147 list_del_rcu(&work->list);113148 spin_unlock(&bdi->wb_lock);114149115115- wb_work_complete(work);150150+ call_rcu(&work->rcu_head, bdi_work_free);116151 }117152}118153···150185 * Used for on-stack allocated work items. The caller needs to wait until151186 * the wb threads have acked the work before it's safe to continue.152187 */153153-static void bdi_wait_on_work_clear(struct bdi_work *work)188188+static void bdi_wait_on_work_done(struct bdi_work *work)154189{155155- wait_on_bit(&work->state, WS_USED_B, bdi_sched_wait,190190+ wait_on_bit(&work->state, WS_INPROGRESS, bdi_sched_wait,156191 TASK_UNINTERRUPTIBLE);157192}158193···178213}179214180215/**181181- * bdi_sync_writeback - start and wait for writeback182182- * @bdi: the backing device to write from216216+ * bdi_queue_work_onstack - start and wait for writeback183217 * @sb: write inodes from this super_block184218 *185219 * Description:186186- * This does WB_SYNC_ALL data integrity writeback and waits for the187187- * IO to complete. Callers must hold the sb s_umount semaphore for220220+ * This function initiates writeback and waits for the operation to221221+ * complete. 
Callers must hold the sb s_umount semaphore for188222 * reading, to avoid having the super disappear before we are done.189223 */190190-static void bdi_sync_writeback(struct backing_dev_info *bdi,191191- struct super_block *sb)224224+static void bdi_queue_work_onstack(struct wb_writeback_args *args)192225{193193- struct wb_writeback_args args = {194194- .sb = sb,195195- .sync_mode = WB_SYNC_ALL,196196- .nr_pages = LONG_MAX,197197- .range_cyclic = 0,198198- };199226 struct bdi_work work;200227201201- bdi_work_init(&work, &args);202202- work.state |= WS_ONSTACK;228228+ bdi_work_init(&work, args);229229+ __set_bit(WS_ONSTACK, &work.state);203230204204- bdi_queue_work(bdi, &work);205205- bdi_wait_on_work_clear(&work);231231+ bdi_queue_work(args->sb->s_bdi, &work);232232+ bdi_wait_on_work_done(&work);206233}207234208235/**209236 * bdi_start_writeback - start writeback210237 * @bdi: the backing device to write from211211- * @sb: write inodes from this super_block212238 * @nr_pages: the number of pages to write213239 *214240 * Description:···208252 * completion. Caller need not hold sb s_umount semaphore.209253 *210254 */211211-void bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,212212- long nr_pages)255255+void bdi_start_writeback(struct backing_dev_info *bdi, long nr_pages)213256{214257 struct wb_writeback_args args = {215215- .sb = sb,216258 .sync_mode = WB_SYNC_NONE,217259 .nr_pages = nr_pages,218260 .range_cyclic = 1,219261 };220262221221- /*222222- * We treat @nr_pages=0 as the special case to do background writeback,223223- * ie. 
to sync pages until the background dirty threshold is reached.224224- */225225- if (!nr_pages) {226226- args.nr_pages = LONG_MAX;227227- args.for_background = 1;228228- }263263+ bdi_alloc_queue_work(bdi, &args);264264+}229265266266+/**267267+ * bdi_start_background_writeback - start background writeback268268+ * @bdi: the backing device to write from269269+ *270270+ * Description:271271+ * This does WB_SYNC_NONE background writeback. The IO is only272272+ * started when this function returns, we make no guarentees on273273+ * completion. Caller need not hold sb s_umount semaphore.274274+ */275275+void bdi_start_background_writeback(struct backing_dev_info *bdi)276276+{277277+ struct wb_writeback_args args = {278278+ .sync_mode = WB_SYNC_NONE,279279+ .nr_pages = LONG_MAX,280280+ .for_background = 1,281281+ .range_cyclic = 1,282282+ };230283 bdi_alloc_queue_work(bdi, &args);231284}232285···526561 return ret;527562}528563529529-static void unpin_sb_for_writeback(struct super_block *sb)530530-{531531- up_read(&sb->s_umount);532532- put_super(sb);533533-}534534-535535-enum sb_pin_state {536536- SB_PINNED,537537- SB_NOT_PINNED,538538- SB_PIN_FAILED539539-};540540-541564/*542542- * For WB_SYNC_NONE writeback, the caller does not have the sb pinned565565+ * For background writeback the caller does not have the sb pinned543566 * before calling writeback. 
So make sure that we do pin it, so it doesn't544567 * go away while we are writing inodes from it.545568 */546546-static enum sb_pin_state pin_sb_for_writeback(struct writeback_control *wbc,547547- struct super_block *sb)569569+static bool pin_sb_for_writeback(struct super_block *sb)548570{549549- /*550550- * Caller must already hold the ref for this551551- */552552- if (wbc->sync_mode == WB_SYNC_ALL) {553553- WARN_ON(!rwsem_is_locked(&sb->s_umount));554554- return SB_NOT_PINNED;555555- }556571 spin_lock(&sb_lock);572572+ if (list_empty(&sb->s_instances)) {573573+ spin_unlock(&sb_lock);574574+ return false;575575+ }576576+557577 sb->s_count++;578578+ spin_unlock(&sb_lock);579579+558580 if (down_read_trylock(&sb->s_umount)) {559559- if (sb->s_root) {560560- spin_unlock(&sb_lock);561561- return SB_PINNED;562562- }563563- /*564564- * umounted, drop rwsem again and fall through to failure565565- */581581+ if (sb->s_root)582582+ return true;566583 up_read(&sb->s_umount);567584 }568568- sb->s_count--;569569- spin_unlock(&sb_lock);570570- return SB_PIN_FAILED;585585+586586+ put_super(sb);587587+ return false;571588}572589573590/*···628681 struct inode *inode = list_entry(wb->b_io.prev,629682 struct inode, i_list);630683 struct super_block *sb = inode->i_sb;631631- enum sb_pin_state state;632684633633- if (wbc->sb && sb != wbc->sb) {634634- /* super block given and doesn't635635- match, skip this inode */636636- redirty_tail(inode);637637- continue;685685+ if (wbc->sb) {686686+ /*687687+ * We are requested to write out inodes for a specific688688+ * superblock. 
This means we already have s_umount689689+ * taken by the caller which also waits for us to690690+ * complete the writeout.691691+ */692692+ if (sb != wbc->sb) {693693+ redirty_tail(inode);694694+ continue;695695+ }696696+697697+ WARN_ON(!rwsem_is_locked(&sb->s_umount));698698+699699+ ret = writeback_sb_inodes(sb, wb, wbc);700700+ } else {701701+ if (!pin_sb_for_writeback(sb)) {702702+ requeue_io(inode);703703+ continue;704704+ }705705+ ret = writeback_sb_inodes(sb, wb, wbc);706706+ drop_super(sb);638707 }639639- state = pin_sb_for_writeback(wbc, sb);640708641641- if (state == SB_PIN_FAILED) {642642- requeue_io(inode);643643- continue;644644- }645645- ret = writeback_sb_inodes(sb, wb, wbc);646646-647647- if (state == SB_PINNED)648648- unpin_sb_for_writeback(sb);649709 if (ret)650710 break;651711 }···865911 * If this isn't a data integrity operation, just notify866912 * that we have seen this work and we are now starting it.867913 */868868- if (args.sync_mode == WB_SYNC_NONE)914914+ if (!test_bit(WS_ONSTACK, &work->state))869915 wb_clear_pending(wb, work);870916871917 wrote += wb_writeback(wb, &args);···874920 * This is a data integrity writeback, so only do the875921 * notification when we have completed the work.876922 */877877- if (args.sync_mode == WB_SYNC_ALL)923923+ if (test_bit(WS_ONSTACK, &work->state))878924 wb_clear_pending(wb, work);879925 }880926···932978}933979934980/*935935- * Schedule writeback for all backing devices. 
This does WB_SYNC_NONE936936- * writeback, for integrity writeback see bdi_sync_writeback().937937- */938938-static void bdi_writeback_all(struct super_block *sb, long nr_pages)939939-{940940- struct wb_writeback_args args = {941941- .sb = sb,942942- .nr_pages = nr_pages,943943- .sync_mode = WB_SYNC_NONE,944944- };945945- struct backing_dev_info *bdi;946946-947947- rcu_read_lock();948948-949949- list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) {950950- if (!bdi_has_dirty_io(bdi))951951- continue;952952-953953- bdi_alloc_queue_work(bdi, &args);954954- }955955-956956- rcu_read_unlock();957957-}958958-959959-/*960981 * Start writeback of `nr_pages' pages. If `nr_pages' is zero, write back961982 * the whole world.962983 */963984void wakeup_flusher_threads(long nr_pages)964985{965965- if (nr_pages == 0)966966- nr_pages = global_page_state(NR_FILE_DIRTY) +986986+ struct backing_dev_info *bdi;987987+ struct wb_writeback_args args = {988988+ .sync_mode = WB_SYNC_NONE,989989+ };990990+991991+ if (nr_pages) {992992+ args.nr_pages = nr_pages;993993+ } else {994994+ args.nr_pages = global_page_state(NR_FILE_DIRTY) +967995 global_page_state(NR_UNSTABLE_NFS);968968- bdi_writeback_all(NULL, nr_pages);996996+ }997997+998998+ rcu_read_lock();999999+ list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) {10001000+ if (!bdi_has_dirty_io(bdi))10011001+ continue;10021002+ bdi_alloc_queue_work(bdi, &args);10031003+ }10041004+ rcu_read_unlock();9691005}97010069711007static noinline void block_dump___mark_inode_dirty(struct inode *inode)···11621218{11631219 unsigned long nr_dirty = global_page_state(NR_FILE_DIRTY);11641220 unsigned long nr_unstable = global_page_state(NR_UNSTABLE_NFS);11651165- long nr_to_write;12211221+ struct wb_writeback_args args = {12221222+ .sb = sb,12231223+ .sync_mode = WB_SYNC_NONE,12241224+ };1166122511671167- nr_to_write = nr_dirty + nr_unstable +12261226+ WARN_ON(!rwsem_is_locked(&sb->s_umount));12271227+12281228+ args.nr_pages = nr_dirty + nr_unstable +11681229 
(inodes_stat.nr_inodes - inodes_stat.nr_unused);1169123011701170- bdi_start_writeback(sb->s_bdi, sb, nr_to_write);12311231+ bdi_queue_work_onstack(&args);11711232}11721233EXPORT_SYMBOL(writeback_inodes_sb);11731234···11861237int writeback_inodes_sb_if_idle(struct super_block *sb)11871238{11881239 if (!writeback_in_progress(sb->s_bdi)) {12401240+ down_read(&sb->s_umount);11891241 writeback_inodes_sb(sb);12421242+ up_read(&sb->s_umount);11901243 return 1;11911244 } else11921245 return 0;···12041253 */12051254void sync_inodes_sb(struct super_block *sb)12061255{12071207- bdi_sync_writeback(sb->s_bdi, sb);12561256+ struct wb_writeback_args args = {12571257+ .sb = sb,12581258+ .sync_mode = WB_SYNC_ALL,12591259+ .nr_pages = LONG_MAX,12601260+ .range_cyclic = 0,12611261+ };12621262+12631263+ WARN_ON(!rwsem_is_locked(&sb->s_umount));12641264+12651265+ bdi_queue_work_onstack(&args);12081266 wait_sb_inodes(sb);12091267}12101268EXPORT_SYMBOL(sync_inodes_sb);
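The fs-writeback rework above funnels completion through one flag: bdi_work_init() sets WS_INPROGRESS, bdi_work_free() clears it and wakes any waiter, and bdi_wait_on_work_done() sleeps until it is clear. A userspace stand-in for that handshake, using a pthread mutex/condvar where the kernel uses set_bit/clear_bit plus wait_on_bit/wake_up_bit (names below are invented for the sketch):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

struct work {
	pthread_mutex_t lock;
	pthread_cond_t done;
	bool in_progress;
};

static void work_init(struct work *w)
{
	pthread_mutex_init(&w->lock, NULL);
	pthread_cond_init(&w->done, NULL);
	w->in_progress = true;		/* like __set_bit(WS_INPROGRESS) */
}

static void work_complete(struct work *w)
{
	pthread_mutex_lock(&w->lock);
	w->in_progress = false;		/* like clear_bit + wake_up_bit */
	pthread_cond_broadcast(&w->done);
	pthread_mutex_unlock(&w->lock);
}

static void work_wait(struct work *w)
{
	pthread_mutex_lock(&w->lock);
	while (w->in_progress)		/* like wait_on_bit */
		pthread_cond_wait(&w->done, &w->lock);
	pthread_mutex_unlock(&w->lock);
}
```

The patch's key simplification is visible here too: completion and freeing happen in one place (bdi_work_free() via call_rcu), instead of the old split between bdi_work_clear() and wb_work_complete().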
+16-4
fs/proc/task_nommu.c
···122122 return size;123123}124124125125+static void pad_len_spaces(struct seq_file *m, int len)126126+{127127+ len = 25 + sizeof(void*) * 6 - len;128128+ if (len < 1)129129+ len = 1;130130+ seq_printf(m, "%*c", len, ' ');131131+}132132+125133/*126134 * display a single VMA to a sequenced file127135 */128136static int nommu_vma_show(struct seq_file *m, struct vm_area_struct *vma)129137{138138+ struct mm_struct *mm = vma->vm_mm;130139 unsigned long ino = 0;131140 struct file *file;132141 dev_t dev = 0;···164155 MAJOR(dev), MINOR(dev), ino, &len);165156166157 if (file) {167167- len = 25 + sizeof(void *) * 6 - len;168168- if (len < 1)169169- len = 1;170170- seq_printf(m, "%*c", len, ' ');158158+ pad_len_spaces(m, len);171159 seq_path(m, &file->f_path, "");160160+ } else if (mm) {161161+ if (vma->vm_start <= mm->start_stack &&162162+ vma->vm_end >= mm->start_stack) {163163+ pad_len_spaces(m, len);164164+ seq_puts(m, "[stack]");165165+ }172166 }173167174168 seq_putc(m, '\n');
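The task_nommu.c hunk factors the `seq_printf(m, "%*c", len, ' ')` padding into pad_len_spaces() so the new `[stack]` branch can reuse it. The `%*c` trick prints one character right-aligned in a field of the given width, i.e. (len - 1) spaces then the character. A snprintf-based sketch of the same arithmetic (buffer-based rather than seq_file-based, for the example):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Same arithmetic as pad_len_spaces(): pad out to a fixed column
 * (25 + 6 pointer-widths), emitting at least one space. */
static int pad_len(int used)
{
	int len = 25 + (int)sizeof(void *) * 6 - used;

	return len < 1 ? 1 : len;
}

/* "%*c" with a space yields exactly pad_len(used) spaces. */
static int pad_into(char *buf, size_t sz, int used)
{
	return snprintf(buf, sz, "%*c", pad_len(used), ' ');
}
```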
+6
fs/super.c
···374374 up_read(&sb->s_umount);375375376376 spin_lock(&sb_lock);377377+ /* lock was dropped, must reset next */378378+ list_safe_reset_next(sb, n, s_list);377379 __put_super(sb);378380 }379381 }···407405 up_read(&sb->s_umount);408406409407 spin_lock(&sb_lock);408408+ /* lock was dropped, must reset next */409409+ list_safe_reset_next(sb, n, s_list);410410 __put_super(sb);411411 }412412 spin_unlock(&sb_lock);···589585 }590586 up_write(&sb->s_umount);591587 spin_lock(&sb_lock);588588+ /* lock was dropped, must reset next */589589+ list_safe_reset_next(sb, n, s_list);592590 __put_super(sb);593591 }594592 spin_unlock(&sb_lock);
+5-1
fs/sysv/ialloc.c
···2525#include <linux/stat.h>2626#include <linux/string.h>2727#include <linux/buffer_head.h>2828+#include <linux/writeback.h>2829#include "sysv.h"29303031/* We don't trust the value of···140139 struct inode *inode;141140 sysv_ino_t ino;142141 unsigned count;142142+ struct writeback_control wbc = {143143+ .sync_mode = WB_SYNC_NONE144144+ };143145144146 inode = new_inode(sb);145147 if (!inode)···172168 insert_inode_hash(inode);173169 mark_inode_dirty(inode);174170175175- sysv_write_inode(inode, 0); /* ensure inode not allocated again */171171+ sysv_write_inode(inode, &wbc); /* ensure inode not allocated again */176172 mark_inode_dirty(inode); /* cleared by sysv_write_inode() */177173 /* That's it. */178174 unlock_super(sb);
···128128 return ERR_PTR(-ESTALE);129129130130 /*131131- * The XFS_IGET_BULKSTAT means that an invalid inode number is just132132- * fine and not an indication of a corrupted filesystem. Because133133- * clients can send any kind of invalid file handle, e.g. after134134- * a restore on the server we have to deal with this case gracefully.131131+ * The XFS_IGET_UNTRUSTED means that an invalid inode number is just132132+ * fine and not an indication of a corrupted filesystem as clients can133133+ * send invalid file handles and we have to handle it gracefully.135134 */136136- error = xfs_iget(mp, NULL, ino, XFS_IGET_BULKSTAT,137137- XFS_ILOCK_SHARED, &ip, 0);135135+ error = xfs_iget(mp, NULL, ino, XFS_IGET_UNTRUSTED,136136+ XFS_ILOCK_SHARED, &ip);138137 if (error) {139138 /*140139 * EINVAL means the inode cluster doesn't exist anymore.
···237237 xfs_ino_t ino, /* inode number to get data for */238238 void __user *buffer, /* buffer to place output in */239239 int ubsize, /* size of buffer */240240- void *private_data, /* my private data */241241- xfs_daddr_t bno, /* starting bno of inode cluster */242240 int *ubused, /* bytes used by me */243243- void *dibuff, /* on-disk inode buffer */244241 int *stat) /* BULKSTAT_RV_... */245242{246243 return xfs_bulkstat_one_int(mp, ino, buffer, ubsize,247247- xfs_bulkstat_one_fmt_compat, bno,248248- ubused, dibuff, stat);244244+ xfs_bulkstat_one_fmt_compat,245245+ ubused, stat);249246}250247251248/* copied from xfs_ioctl.c */···295298 int res;296299297300 error = xfs_bulkstat_one_compat(mp, inlast, bulkreq.ubuffer,298298- sizeof(compat_xfs_bstat_t),299299- NULL, 0, NULL, NULL, &res);301301+ sizeof(compat_xfs_bstat_t), 0, &res);300302 } else if (cmd == XFS_IOC_FSBULKSTAT_32) {301303 error = xfs_bulkstat(mp, &inlast, &count,302302- xfs_bulkstat_one_compat, NULL,303303- sizeof(compat_xfs_bstat_t), bulkreq.ubuffer,304304- BULKSTAT_FG_QUICK, &done);304304+ xfs_bulkstat_one_compat, sizeof(compat_xfs_bstat_t),305305+ bulkreq.ubuffer, &done);305306 } else306307 error = XFS_ERROR(EINVAL);307308 if (error)
+8-10
fs/xfs/quota/xfs_qm.c
···
	xfs_ino_t	ino,		/* inode number to get data for */
	void		__user *buffer,	/* not used */
	int		ubsize,		/* not used */
-	void		*private_data,	/* not used */
-	xfs_daddr_t	bno,		/* starting block of inode cluster */
	int		*ubused,	/* not used */
-	void		*dip,		/* on-disk inode pointer (not used) */
	int		*res)		/* result code value */
 {
	xfs_inode_t	*ip;
···
	 * the case in all other instances. It's OK that we do this because
	 * quotacheck is done only at mount time.
	 */
-	if ((error = xfs_iget(mp, NULL, ino, 0, XFS_ILOCK_EXCL, &ip, bno))) {
+	if ((error = xfs_iget(mp, NULL, ino, 0, XFS_ILOCK_EXCL, &ip))) {
		*res = BULKSTAT_RV_NOTHING;
		return error;
	}
···
	 * Iterate thru all the inodes in the file system,
	 * adjusting the corresponding dquot counters in core.
	 */
-	if ((error = xfs_bulkstat(mp, &lastino, &count,
-			xfs_qm_dqusage_adjust, NULL,
-			structsz, NULL, BULKSTAT_FG_IGET, &done)))
+	error = xfs_bulkstat(mp, &lastino, &count,
+			     xfs_qm_dqusage_adjust,
+			     structsz, NULL, &done);
+	if (error)
			break;

-	} while (! done);
+	} while (!done);

	/*
	 * We've made all the changes that we need to make incore.
···
	    mp->m_sb.sb_uquotino != NULLFSINO) {
		ASSERT(mp->m_sb.sb_uquotino > 0);
		if ((error = xfs_iget(mp, NULL, mp->m_sb.sb_uquotino,
-					     0, 0, &uip, 0)))
+					     0, 0, &uip)))
			return XFS_ERROR(error);
	}
	if (XFS_IS_OQUOTA_ON(mp) &&
	    mp->m_sb.sb_gquotino != NULLFSINO) {
		ASSERT(mp->m_sb.sb_gquotino > 0);
		if ((error = xfs_iget(mp, NULL, mp->m_sb.sb_gquotino,
-					     0, 0, &gip, 0))) {
+					     0, 0, &gip))) {
			if (uip)
				IRELE(uip);
			return XFS_ERROR(error);
+12-15
fs/xfs/quota/xfs_qm_syscalls.c
···
	}

	if ((flags & XFS_DQ_USER) && mp->m_sb.sb_uquotino != NULLFSINO) {
-		error = xfs_iget(mp, NULL, mp->m_sb.sb_uquotino, 0, 0, &qip, 0);
+		error = xfs_iget(mp, NULL, mp->m_sb.sb_uquotino, 0, 0, &qip);
		if (!error) {
			error = xfs_truncate_file(mp, qip);
			IRELE(qip);
···
	if ((flags & (XFS_DQ_GROUP|XFS_DQ_PROJ)) &&
	    mp->m_sb.sb_gquotino != NULLFSINO) {
-		error2 = xfs_iget(mp, NULL, mp->m_sb.sb_gquotino, 0, 0, &qip, 0);
+		error2 = xfs_iget(mp, NULL, mp->m_sb.sb_gquotino, 0, 0, &qip);
		if (!error2) {
			error2 = xfs_truncate_file(mp, qip);
			IRELE(qip);
···
	}
	if (!uip && mp->m_sb.sb_uquotino != NULLFSINO) {
		if (xfs_iget(mp, NULL, mp->m_sb.sb_uquotino,
-					0, 0, &uip, 0) == 0)
+					0, 0, &uip) == 0)
			tempuqip = B_TRUE;
	}
	if (!gip && mp->m_sb.sb_gquotino != NULLFSINO) {
		if (xfs_iget(mp, NULL, mp->m_sb.sb_gquotino,
-					0, 0, &gip, 0) == 0)
+					0, 0, &gip) == 0)
			tempgqip = B_TRUE;
	}
	if (uip) {
···
	xfs_ino_t	ino,		/* inode number to get data for */
	void		__user *buffer,	/* not used */
	int		ubsize,		/* not used */
-	void		*private_data,	/* not used */
-	xfs_daddr_t	bno,		/* starting block of inode cluster */
	int		*ubused,	/* not used */
-	void		*dip,		/* not used */
	int		*res)		/* bulkstat result code */
 {
	xfs_inode_t	*ip;
···
	ipreleased = B_FALSE;
 again:
	lock_flags = XFS_ILOCK_SHARED;
-	if ((error = xfs_iget(mp, NULL, ino, 0, lock_flags, &ip, bno))) {
+	if ((error = xfs_iget(mp, NULL, ino, 0, lock_flags, &ip))) {
		*res = BULKSTAT_RV_NOTHING;
		return (error);
	}
···
	 * Iterate thru all the inodes in the file system,
	 * adjusting the corresponding dquot counters
	 */
-	if ((error = xfs_bulkstat(mp, &lastino, &count,
-			xfs_qm_internalqcheck_adjust, NULL,
-			0, NULL, BULKSTAT_FG_IGET, &done))) {
+	error = xfs_bulkstat(mp, &lastino, &count,
+			     xfs_qm_internalqcheck_adjust,
+			     0, NULL, &done);
+	if (error) {
+		cmn_err(CE_DEBUG, "Bulkstat returned error 0x%x", error);
			break;
		}
-	} while (! done);
-	if (error) {
-		cmn_err(CE_DEBUG, "Bulkstat returned error 0x%x", error);
-	}
+	} while (!done);
+
	cmn_err(CE_DEBUG, "Checking results against system dquots");
	for (i = 0; i < qmtest_hashmask; i++) {
		xfs_dqtest_t	*d, *n;
···
	return error;
 }

+STATIC int
+xfs_imap_lookup(
+	struct xfs_mount	*mp,
+	struct xfs_trans	*tp,
+	xfs_agnumber_t		agno,
+	xfs_agino_t		agino,
+	xfs_agblock_t		agbno,
+	xfs_agblock_t		*chunk_agbno,
+	xfs_agblock_t		*offset_agbno,
+	int			flags)
+{
+	struct xfs_inobt_rec_incore rec;
+	struct xfs_btree_cur	*cur;
+	struct xfs_buf		*agbp;
+	xfs_agino_t		startino;
+	int			error;
+	int			i;
+
+	error = xfs_ialloc_read_agi(mp, tp, agno, &agbp);
+	if (error) {
+		xfs_fs_cmn_err(CE_ALERT, mp, "xfs_imap: "
+				"xfs_ialloc_read_agi() returned "
+				"error %d, agno %d",
+				error, agno);
+		return error;
+	}
+
+	/*
+	 * derive and lookup the exact inode record for the given agino. If the
+	 * record cannot be found, then it's an invalid inode number and we
+	 * should abort.
+	 */
+	cur = xfs_inobt_init_cursor(mp, tp, agbp, agno);
+	startino = agino & ~(XFS_IALLOC_INODES(mp) - 1);
+	error = xfs_inobt_lookup(cur, startino, XFS_LOOKUP_EQ, &i);
+	if (!error) {
+		if (i)
+			error = xfs_inobt_get_rec(cur, &rec, &i);
+		if (!error && i == 0)
+			error = EINVAL;
+	}
+
+	xfs_trans_brelse(tp, agbp);
+	xfs_btree_del_cursor(cur, XFS_BTREE_NOERROR);
+	if (error)
+		return error;
+
+	/* for untrusted inodes check it is allocated first */
+	if ((flags & XFS_IGET_UNTRUSTED) &&
+	    (rec.ir_free & XFS_INOBT_MASK(agino - rec.ir_startino)))
+		return EINVAL;
+
+	*chunk_agbno = XFS_AGINO_TO_AGBNO(mp, rec.ir_startino);
+	*offset_agbno = agbno - *chunk_agbno;
+	return 0;
+}
+
 /*
  * Return the location of the inode in imap, for mapping it into a buffer.
  */
···
	if (agno >= mp->m_sb.sb_agcount || agbno >= mp->m_sb.sb_agblocks ||
	    ino != XFS_AGINO_TO_INO(mp, agno, agino)) {
 #ifdef DEBUG
-		/* no diagnostics for bulkstat, ino comes from userspace */
-		if (flags & XFS_IGET_BULKSTAT)
+		/*
+		 * Don't output diagnostic information for untrusted inodes
+		 * as they can be invalid without implying corruption.
+		 */
+		if (flags & XFS_IGET_UNTRUSTED)
			return XFS_ERROR(EINVAL);
		if (agno >= mp->m_sb.sb_agcount) {
			xfs_fs_cmn_err(CE_ALERT, mp,
···
		return XFS_ERROR(EINVAL);
	}

+	blks_per_cluster = XFS_INODE_CLUSTER_SIZE(mp) >> mp->m_sb.sb_blocklog;
+
+	/*
+	 * For bulkstat and handle lookups, we have an untrusted inode number
+	 * that we have to verify is valid. We cannot do this just by reading
+	 * the inode buffer as it may have been unlinked and removed leaving
+	 * inodes in stale state on disk. Hence we have to do a btree lookup
+	 * in all cases where an untrusted inode number is passed.
+	 */
+	if (flags & XFS_IGET_UNTRUSTED) {
+		error = xfs_imap_lookup(mp, tp, agno, agino, agbno,
+					&chunk_agbno, &offset_agbno, flags);
+		if (error)
+			return error;
+		goto out_map;
+	}
+
	/*
	 * If the inode cluster size is the same as the blocksize or
	 * smaller we get to the buffer by simple arithmetics.
···
		return 0;
	}

-	blks_per_cluster = XFS_INODE_CLUSTER_SIZE(mp) >> mp->m_sb.sb_blocklog;
-
-	/*
-	 * If we get a block number passed from bulkstat we can use it to
-	 * find the buffer easily.
-	 */
-	if (imap->im_blkno) {
-		offset = XFS_INO_TO_OFFSET(mp, ino);
-		ASSERT(offset < mp->m_sb.sb_inopblock);
-
-		cluster_agbno = xfs_daddr_to_agbno(mp, imap->im_blkno);
-		offset += (agbno - cluster_agbno) * mp->m_sb.sb_inopblock;
-
-		imap->im_len = XFS_FSB_TO_BB(mp, blks_per_cluster);
-		imap->im_boffset = (ushort)(offset << mp->m_sb.sb_inodelog);
-		return 0;
-	}
-
	/*
	 * If the inode chunks are aligned then use simple maths to
	 * find the location. Otherwise we have to do a btree
···
		offset_agbno = agbno & mp->m_inoalign_mask;
		chunk_agbno = agbno - offset_agbno;
	} else {
-		xfs_btree_cur_t	*cur;	/* inode btree cursor */
-		xfs_inobt_rec_incore_t chunk_rec;
-		xfs_buf_t	*agbp;	/* agi buffer */
-		int		i;	/* temp state */
-
-		error = xfs_ialloc_read_agi(mp, tp, agno, &agbp);
-		if (error) {
-			xfs_fs_cmn_err(CE_ALERT, mp, "xfs_imap: "
-					"xfs_ialloc_read_agi() returned "
-					"error %d, agno %d",
-					error, agno);
-			return error;
-		}
-
-		cur = xfs_inobt_init_cursor(mp, tp, agbp, agno);
-		error = xfs_inobt_lookup(cur, agino, XFS_LOOKUP_LE, &i);
-		if (error) {
-			xfs_fs_cmn_err(CE_ALERT, mp, "xfs_imap: "
-					"xfs_inobt_lookup() failed");
-			goto error0;
-		}
-
-		error = xfs_inobt_get_rec(cur, &chunk_rec, &i);
-		if (error) {
-			xfs_fs_cmn_err(CE_ALERT, mp, "xfs_imap: "
-					"xfs_inobt_get_rec() failed");
-			goto error0;
-		}
-		if (i == 0) {
-#ifdef DEBUG
-			xfs_fs_cmn_err(CE_ALERT, mp, "xfs_imap: "
-					"xfs_inobt_get_rec() failed");
-#endif /* DEBUG */
-			error = XFS_ERROR(EINVAL);
-		}
-	error0:
-		xfs_trans_brelse(tp, agbp);
-		xfs_btree_del_cursor(cur, XFS_BTREE_NOERROR);
+		error = xfs_imap_lookup(mp, tp, agno, agino, agbno,
+					&chunk_agbno, &offset_agbno, flags);
		if (error)
			return error;
-		chunk_agbno = XFS_AGINO_TO_AGBNO(mp, chunk_rec.ir_startino);
-		offset_agbno = agbno - chunk_agbno;
	}

+out_map:
	ASSERT(agbno >= chunk_agbno);
	cluster_agbno = chunk_agbno +
		((offset_agbno / blks_per_cluster) * blks_per_cluster);
+3-7
fs/xfs/xfs_iget.c
···
	xfs_trans_t		*tp,
	xfs_ino_t		ino,
	struct xfs_inode	**ipp,
-	xfs_daddr_t		bno,
	int			flags,
	int			lock_flags)
 {
···
	if (!ip)
		return ENOMEM;

-	error = xfs_iread(mp, tp, ip, bno, flags);
+	error = xfs_iread(mp, tp, ip, flags);
	if (error)
		goto out_destroy;
···
  *        within the file system for the inode being requested.
  * lock_flags -- flags indicating how to lock the inode.  See the comment
  *		 for xfs_ilock() for a list of valid values.
- * bno -- the block number starting the buffer containing the inode,
- *	  if known (as by bulkstat), else 0.
  */
 int
 xfs_iget(
···
	xfs_ino_t	ino,
	uint		flags,
	uint		lock_flags,
-	xfs_inode_t	**ipp,
-	xfs_daddr_t	bno)
+	xfs_inode_t	**ipp)
 {
	xfs_inode_t	*ip;
	int		error;
···
	read_unlock(&pag->pag_ici_lock);
	XFS_STATS_INC(xs_ig_missed);

-	error = xfs_iget_cache_miss(mp, pag, tp, ino, &ip, bno,
+	error = xfs_iget_cache_miss(mp, pag, tp, ino, &ip,
			flags, lock_flags);
	if (error)
		goto out_error_or_again;
+1-4
fs/xfs/xfs_inode.c
···
	if (unlikely(XFS_TEST_ERROR(!di_ok, mp,
				XFS_ERRTAG_ITOBP_INOTOBP,
				XFS_RANDOM_ITOBP_INOTOBP))) {
-		if (iget_flags & XFS_IGET_BULKSTAT) {
+		if (iget_flags & XFS_IGET_UNTRUSTED) {
			xfs_trans_brelse(tp, bp);
			return XFS_ERROR(EINVAL);
		}
···
	xfs_mount_t	*mp,
	xfs_trans_t	*tp,
	xfs_inode_t	*ip,
-	xfs_daddr_t	bno,
	uint		iget_flags)
 {
	xfs_buf_t	*bp;
···
	/*
	 * Fill in the location information in the in-core inode.
	 */
-	ip->i_imap.im_blkno = bno;
	error = xfs_imap(mp, tp, ip->i_ino, &ip->i_imap, iget_flags);
	if (error)
		return error;
-	ASSERT(bno == 0 || bno == ip->i_imap.im_blkno);

	/*
	 * Get pointers to the on-disk inode and the buffer containing it.
···
		(ino == mp->m_sb.sb_uquotino || ino == mp->m_sb.sb_gquotino)));
 }

-STATIC int
-xfs_bulkstat_one_iget(
-	xfs_mount_t	*mp,		/* mount point for filesystem */
-	xfs_ino_t	ino,		/* inode number to get data for */
-	xfs_daddr_t	bno,		/* starting bno of inode cluster */
-	xfs_bstat_t	*buf,		/* return buffer */
-	int		*stat)		/* BULKSTAT_RV_... */
+/*
+ * Return stat information for one inode.
+ * Return 0 if ok, else errno.
+ */
+int
+xfs_bulkstat_one_int(
+	struct xfs_mount	*mp,		/* mount point for filesystem */
+	xfs_ino_t		ino,		/* inode to get data for */
+	void __user		*buffer,	/* buffer to place output in */
+	int			ubsize,		/* size of buffer */
+	bulkstat_one_fmt_pf	formatter,	/* formatter, copy to user */
+	int			*ubused,	/* bytes used by me */
+	int			*stat)		/* BULKSTAT_RV_... */
 {
-	xfs_icdinode_t	*dic;	/* dinode core info pointer */
-	xfs_inode_t	*ip;		/* incore inode pointer */
-	struct inode	*inode;
-	int		error;
+	struct xfs_icdinode	*dic;		/* dinode core info pointer */
+	struct xfs_inode	*ip;		/* incore inode pointer */
+	struct inode		*inode;
+	struct xfs_bstat	*buf;		/* return buffer */
+	int			error = 0;	/* error value */
+
+	*stat = BULKSTAT_RV_NOTHING;
+
+	if (!buffer || xfs_internal_inum(mp, ino))
+		return XFS_ERROR(EINVAL);
+
+	buf = kmem_alloc(sizeof(*buf), KM_SLEEP | KM_MAYFAIL);
+	if (!buf)
+		return XFS_ERROR(ENOMEM);

	error = xfs_iget(mp, NULL, ino,
-			 XFS_IGET_BULKSTAT, XFS_ILOCK_SHARED, &ip, bno);
+			 XFS_IGET_UNTRUSTED, XFS_ILOCK_SHARED, &ip);
	if (error) {
		*stat = BULKSTAT_RV_NOTHING;
-		return error;
+		goto out_free;
	}

	ASSERT(ip != NULL);
···
		buf->bs_blocks = dic->di_nblocks + ip->i_delayed_blks;
		break;
	}
-
	xfs_iput(ip, XFS_ILOCK_SHARED);
+
+	error = formatter(buffer, ubsize, ubused, buf);
+
+	if (!error)
+		*stat = BULKSTAT_RV_DIDONE;
+
+ out_free:
+	kmem_free(buf);
	return error;
-}
-
-STATIC void
-xfs_bulkstat_one_dinode(
-	xfs_mount_t	*mp,		/* mount point for filesystem */
-	xfs_ino_t	ino,		/* inode number to get data for */
-	xfs_dinode_t	*dic,		/* dinode inode pointer */
-	xfs_bstat_t	*buf)		/* return buffer */
-{
-	/*
-	 * The inode format changed when we moved the link count and
-	 * made it 32 bits long.  If this is an old format inode,
-	 * convert it in memory to look like a new one.  If it gets
-	 * flushed to disk we will convert back before flushing or
-	 * logging it.  We zero out the new projid field and the old link
-	 * count field.  We'll handle clearing the pad field (the remains
-	 * of the old uuid field) when we actually convert the inode to
-	 * the new format. We don't change the version number so that we
-	 * can distinguish this from a real new format inode.
-	 */
-	if (dic->di_version == 1) {
-		buf->bs_nlink = be16_to_cpu(dic->di_onlink);
-		buf->bs_projid = 0;
-	} else {
-		buf->bs_nlink = be32_to_cpu(dic->di_nlink);
-		buf->bs_projid = be16_to_cpu(dic->di_projid);
-	}
-
-	buf->bs_ino = ino;
-	buf->bs_mode = be16_to_cpu(dic->di_mode);
-	buf->bs_uid = be32_to_cpu(dic->di_uid);
-	buf->bs_gid = be32_to_cpu(dic->di_gid);
-	buf->bs_size = be64_to_cpu(dic->di_size);
-	buf->bs_atime.tv_sec = be32_to_cpu(dic->di_atime.t_sec);
-	buf->bs_atime.tv_nsec = be32_to_cpu(dic->di_atime.t_nsec);
-	buf->bs_mtime.tv_sec = be32_to_cpu(dic->di_mtime.t_sec);
-	buf->bs_mtime.tv_nsec = be32_to_cpu(dic->di_mtime.t_nsec);
-	buf->bs_ctime.tv_sec = be32_to_cpu(dic->di_ctime.t_sec);
-	buf->bs_ctime.tv_nsec = be32_to_cpu(dic->di_ctime.t_nsec);
-	buf->bs_xflags = xfs_dic2xflags(dic);
-	buf->bs_extsize = be32_to_cpu(dic->di_extsize) << mp->m_sb.sb_blocklog;
-	buf->bs_extents = be32_to_cpu(dic->di_nextents);
-	buf->bs_gen = be32_to_cpu(dic->di_gen);
-	memset(buf->bs_pad, 0, sizeof(buf->bs_pad));
-	buf->bs_dmevmask = be32_to_cpu(dic->di_dmevmask);
-	buf->bs_dmstate = be16_to_cpu(dic->di_dmstate);
-	buf->bs_aextents = be16_to_cpu(dic->di_anextents);
-	buf->bs_forkoff = XFS_DFORK_BOFF(dic);
-
-	switch (dic->di_format) {
-	case XFS_DINODE_FMT_DEV:
-		buf->bs_rdev = xfs_dinode_get_rdev(dic);
-		buf->bs_blksize = BLKDEV_IOSIZE;
-		buf->bs_blocks = 0;
-		break;
-	case XFS_DINODE_FMT_LOCAL:
-	case XFS_DINODE_FMT_UUID:
-		buf->bs_rdev = 0;
-		buf->bs_blksize = mp->m_sb.sb_blocksize;
-		buf->bs_blocks = 0;
-		break;
-	case XFS_DINODE_FMT_EXTENTS:
-	case XFS_DINODE_FMT_BTREE:
-		buf->bs_rdev = 0;
-		buf->bs_blksize = mp->m_sb.sb_blocksize;
-		buf->bs_blocks = be64_to_cpu(dic->di_nblocks);
-		break;
-	}
 }

 /* Return 0 on success or positive error */
···
	return 0;
 }

-/*
- * Return stat information for one inode.
- * Return 0 if ok, else errno.
- */
-int					/* error status */
-xfs_bulkstat_one_int(
-	xfs_mount_t	*mp,		/* mount point for filesystem */
-	xfs_ino_t	ino,		/* inode number to get data for */
-	void		__user *buffer,	/* buffer to place output in */
-	int		ubsize,		/* size of buffer */
-	bulkstat_one_fmt_pf formatter,	/* formatter, copy to user */
-	xfs_daddr_t	bno,		/* starting bno of inode cluster */
-	int		*ubused,	/* bytes used by me */
-	void		*dibuff,	/* on-disk inode buffer */
-	int		*stat)		/* BULKSTAT_RV_... */
-{
-	xfs_bstat_t	*buf;		/* return buffer */
-	int		error = 0;	/* error value */
-	xfs_dinode_t	*dip;		/* dinode inode pointer */
-
-	dip = (xfs_dinode_t *)dibuff;
-	*stat = BULKSTAT_RV_NOTHING;
-
-	if (!buffer || xfs_internal_inum(mp, ino))
-		return XFS_ERROR(EINVAL);
-
-	buf = kmem_alloc(sizeof(*buf), KM_SLEEP);
-
-	if (dip == NULL) {
-		/* We're not being passed a pointer to a dinode.  This happens
-		 * if BULKSTAT_FG_IGET is selected.  Do the iget.
-		 */
-		error = xfs_bulkstat_one_iget(mp, ino, bno, buf, stat);
-		if (error)
-			goto out_free;
-	} else {
-		xfs_bulkstat_one_dinode(mp, ino, dip, buf);
-	}
-
-	error = formatter(buffer, ubsize, ubused, buf);
-	if (error)
-		goto out_free;
-
-	*stat = BULKSTAT_RV_DIDONE;
-
- out_free:
-	kmem_free(buf);
-	return error;
-}
-
 int
 xfs_bulkstat_one(
	xfs_mount_t	*mp,		/* mount point for filesystem */
	xfs_ino_t	ino,		/* inode number to get data for */
	void		__user *buffer,	/* buffer to place output in */
	int		ubsize,		/* size of buffer */
-	void		*private_data,	/* my private data */
-	xfs_daddr_t	bno,		/* starting bno of inode cluster */
	int		*ubused,	/* bytes used by me */
-	void		*dibuff,	/* on-disk inode buffer */
	int		*stat)		/* BULKSTAT_RV_... */
 {
	return xfs_bulkstat_one_int(mp, ino, buffer, ubsize,
-				    xfs_bulkstat_one_fmt, bno,
-				    ubused, dibuff, stat);
-}
-
-/*
- * Test to see whether we can use the ondisk inode directly, based
- * on the given bulkstat flags, filling in dipp accordingly.
- * Returns zero if the inode is dodgey.
- */
-STATIC int
-xfs_bulkstat_use_dinode(
-	xfs_mount_t	*mp,
-	int		flags,
-	xfs_buf_t	*bp,
-	int		clustidx,
-	xfs_dinode_t	**dipp)
-{
-	xfs_dinode_t	*dip;
-	unsigned int	aformat;
-
-	*dipp = NULL;
-	if (!bp || (flags & BULKSTAT_FG_IGET))
-		return 1;
-	dip = (xfs_dinode_t *)
-		xfs_buf_offset(bp, clustidx << mp->m_sb.sb_inodelog);
-	/*
-	 * Check the buffer containing the on-disk inode for di_mode == 0.
-	 * This is to prevent xfs_bulkstat from picking up just reclaimed
-	 * inodes that have their in-core state initialized but not flushed
-	 * to disk yet.  This is a temporary hack that would require a proper
-	 * fix in the future.
-	 */
-	if (be16_to_cpu(dip->di_magic) != XFS_DINODE_MAGIC ||
-	    !XFS_DINODE_GOOD_VERSION(dip->di_version) ||
-	    !dip->di_mode)
-		return 0;
-	if (flags & BULKSTAT_FG_QUICK) {
-		*dipp = dip;
-		return 1;
-	}
-	/* BULKSTAT_FG_INLINE: if attr fork is local, or not there, use it */
-	aformat = dip->di_aformat;
-	if ((XFS_DFORK_Q(dip) == 0) ||
-	    (aformat == XFS_DINODE_FMT_LOCAL) ||
-	    (aformat == XFS_DINODE_FMT_EXTENTS && !dip->di_anextents)) {
-		*dipp = dip;
-		return 1;
-	}
-	return 1;
+				    xfs_bulkstat_one_fmt, ubused, stat);
 }

 #define XFS_BULKSTAT_UBLEFT(ubleft)	((ubleft) >= statstruct_size)
···
	xfs_ino_t		*lastinop, /* last inode returned */
	int			*ubcountp, /* size of buffer/count returned */
	bulkstat_one_pf		formatter, /* func that'd fill a single buf */
-	void			*private_data,/* private data for formatter */
	size_t			statstruct_size, /* sizeof struct filling */
	char			__user *ubuffer, /* buffer with inode stats */
-	int			flags,	/* defined in xfs_itable.h */
	int			*done)	/* 1 if there are more stats to get */
 {
	xfs_agblock_t		agbno=0;/* allocation group block number */
···
	int			ubelem;	/* spaces used in user's buffer */
	int			ubused;	/* bytes used by formatter */
	xfs_buf_t		*bp;	/* ptr to on-disk inode cluster buf */
-	xfs_dinode_t		*dip;	/* ptr into bp for specific inode */

	/*
	 * Get the last inode value, see if there's nothing to do.
	 */
	ino = (xfs_ino_t)*lastinop;
	lastino = ino;
-	dip = NULL;
	agno = XFS_INO_TO_AGNO(mp, ino);
	agino = XFS_INO_TO_AGINO(mp, ino);
	if (agno >= mp->m_sb.sb_agcount ||
···
						irbp->ir_startino) +
						((chunkidx & nimask) >>
						 mp->m_sb.sb_inopblog);
-
-				if (flags & (BULKSTAT_FG_QUICK |
-					     BULKSTAT_FG_INLINE)) {
-					int offset;
-
-					ino = XFS_AGINO_TO_INO(mp, agno,
-							       agino);
-					bno = XFS_AGB_TO_DADDR(mp, agno,
-							       agbno);
-
-					/*
-					 * Get the inode cluster buffer
-					 */
-					if (bp)
-						xfs_buf_relse(bp);
-
-					error = xfs_inotobp(mp, NULL, ino, &dip,
-							    &bp, &offset,
-							    XFS_IGET_BULKSTAT);
-
-					if (!error)
-						clustidx = offset / mp->m_sb.sb_inodesize;
-					if (XFS_TEST_ERROR(error != 0,
-							   mp, XFS_ERRTAG_BULKSTAT_READ_CHUNK,
-							   XFS_RANDOM_BULKSTAT_READ_CHUNK)) {
-						bp = NULL;
-						ubleft = 0;
-						rval = error;
-						break;
-					}
-				}
			}
			ino = XFS_AGINO_TO_INO(mp, agno, agino);
			bno = XFS_AGB_TO_DADDR(mp, agno, agbno);
···
				 * when the chunk is used up.
				 */
				irbp->ir_freecount++;
-				if (!xfs_bulkstat_use_dinode(mp, flags, bp,
-							     clustidx, &dip)) {
-					lastino = ino;
-					continue;
-				}
-				/*
-				 * If we need to do an iget, cannot hold bp.
-				 * Drop it, until starting the next cluster.
-				 */
-				if ((flags & BULKSTAT_FG_INLINE) && !dip) {
-					if (bp)
-						xfs_buf_relse(bp);
-					bp = NULL;
-				}

				/*
				 * Get the inode and fill in a single buffer.
-				 * BULKSTAT_FG_QUICK uses dip to fill it in.
-				 * BULKSTAT_FG_IGET uses igets.
-				 * BULKSTAT_FG_INLINE uses dip if we have an
-				 * inline attr fork, else igets.
-				 * See: xfs_bulkstat_one & xfs_dm_bulkstat_one.
-				 * This is also used to count inodes/blks, etc
-				 * in xfs_qm_quotacheck.
				 */
				ubused = statstruct_size;
-				error = formatter(mp, ino, ubufp,
-						ubleft, private_data,
-						bno, &ubused, dip, &fmterror);
+				error = formatter(mp, ino, ubufp, ubleft,
+						  &ubused, &fmterror);
				if (fmterror == BULKSTAT_RV_NOTHING) {
					if (error && error != ENOENT &&
						error != EINVAL) {
···
	 */

	ino = (xfs_ino_t)*lastinop;
-	error = xfs_bulkstat_one(mp, ino, buffer, sizeof(xfs_bstat_t),
-				 NULL, 0, NULL, NULL, &res);
+	error = xfs_bulkstat_one(mp, ino, buffer, sizeof(xfs_bstat_t), 0, &res);
	if (error) {
		/*
		 * Special case way failed, do it the "long" way
···
		(*lastinop)--;
		count = 1;
		if (xfs_bulkstat(mp, lastinop, &count, xfs_bulkstat_one,
-				NULL, sizeof(xfs_bstat_t), buffer,
-				BULKSTAT_FG_IGET, done))
+				sizeof(xfs_bstat_t), buffer, done))
			return error;
		if (count == 0 || (xfs_ino_t)*lastinop != ino)
			return error == EFSCORRUPTED ?
-17
fs/xfs/xfs_itable.h
···
	xfs_ino_t	ino,
	void		__user *buffer,
	int		ubsize,
-	void		*private_data,
-	xfs_daddr_t	bno,
	int		*ubused,
-	void		*dip,
	int		*stat);

 /*
···
 #define BULKSTAT_RV_GIVEUP	2

 /*
- * Values for bulkstat flag argument.
- */
-#define BULKSTAT_FG_IGET	0x1	/* Go through the buffer cache */
-#define BULKSTAT_FG_QUICK	0x2	/* No iget, walk the dinode cluster */
-#define BULKSTAT_FG_INLINE	0x4	/* No iget if inline attrs */
-
-/*
  * Return stat information in bulk (by-inode) for the filesystem.
  */
 int					/* error status */
···
	xfs_ino_t	*lastino,	/* last inode returned */
	int		*count,		/* size of buffer/count returned */
	bulkstat_one_pf formatter,	/* func that'd fill a single buf */
-	void		*private_data,	/* private data for formatter */
	size_t		statstruct_size,/* sizeof struct that we're filling */
	char		__user *ubuffer,/* buffer with inode stats */
-	int		flags,		/* flag to control access method */
	int		*done);		/* 1 if there are more stats to get */

 int
···
	void			__user *buffer,
	int			ubsize,
	bulkstat_one_fmt_pf	formatter,
-	xfs_daddr_t		bno,
	int			*ubused,
-	void			*dibuff,
	int			*stat);

 int
···
	xfs_ino_t	ino,
	void		__user *buffer,
	int		ubsize,
-	void		*private_data,
-	xfs_daddr_t	bno,
	int		*ubused,
-	void		*dibuff,
	int		*stat);

 typedef int (*inumbers_fmt_pf)(
···
	 * Get and sanity-check the root inode.
	 * Save the pointer to it in the mount structure.
	 */
-	error = xfs_iget(mp, NULL, sbp->sb_rootino, 0, XFS_ILOCK_EXCL, &rip, 0);
+	error = xfs_iget(mp, NULL, sbp->sb_rootino, 0, XFS_ILOCK_EXCL, &rip);
	if (error) {
		cmn_err(CE_WARN, "XFS: failed to read root inode");
		goto out_log_dealloc;
···
  * naked functions because then mcount is called without stack and frame pointer
  * being set up and there is no chance to restore the lr register to the value
  * before mcount was called.
+ *
+ * The asm() bodies of naked functions often depend on standard calling
+ * conventions, therefore they must be noinline and noclone.  GCC 4.[56]
+ * currently fail to enforce this, so we must do so ourselves.  See GCC PR44290.
  */
-#define __naked			__attribute__((naked)) notrace
+#define __naked			__attribute__((naked)) noinline __noclone notrace

 #define __noreturn		__attribute__((noreturn))
···
 #define _gcc_header(x) __gcc_header(linux/compiler-gcc##x.h)
 #define gcc_header(x) _gcc_header(x)
 #include gcc_header(__GNUC__)
+
+#if !defined(__noclone)
+#define __noclone	/* not needed */
+#endif
+4
include/linux/compiler-gcc4.h
···
  * unreleased.  Really, we need to have autoconf for the kernel.
  */
 #define unreachable() __builtin_unreachable()
+
+/* Mark a function definition as prohibited from being cloned. */
+#define __noclone	__attribute__((__noclone__))
+
 #endif

 #endif
···
			       const char *modname);

 #if defined(CONFIG_DYNAMIC_DEBUG)
-extern int ddebug_remove_module(char *mod_name);
+extern int ddebug_remove_module(const char *mod_name);

 #define __dynamic_dbg_enabled(dd)  ({	     \
	int __ret = 0;			     \
···

 #else

-static inline int ddebug_remove_module(char *mod)
+static inline int ddebug_remove_module(const char *mod)
 {
	return 0;
 }
+2-2
include/linux/fb.h
···
 #define FBINFO_MISC_USEREVENT          0x10000 /* event request
						  from userspace */
 #define FBINFO_MISC_TILEBLITTING       0x20000 /* use tile blitting */
-#define FBINFO_MISC_FIRMWARE           0x40000 /* a replaceable firmware
-						  inited framebuffer */

 /* A driver may set this flag to indicate that it does want a set_par to be
  * called every time when fbcon_switch is executed. The advantage is that with
···
  */
 #define FBINFO_MISC_ALWAYS_SETPAR   0x40000

+/* where the fb is a firmware driver, and can be replaced with a proper one */
+#define FBINFO_MISC_FIRMWARE        0x80000
 /*
  * Host and GPU endianness differ.
  */
+15
include/linux/list.h
···
	     &pos->member != (head); 					\
	     pos = n, n = list_entry(n->member.prev, typeof(*n), member))

+/**
+ * list_safe_reset_next - reset a stale list_for_each_entry_safe loop
+ * @pos:	the loop cursor used in the list_for_each_entry_safe loop
+ * @n:		temporary storage used in list_for_each_entry_safe
+ * @member:	the name of the list_struct within the struct.
+ *
+ * list_safe_reset_next is not safe to use in general if the list may be
+ * modified concurrently (eg. the lock is dropped in the loop body). An
+ * exception to this is if the cursor element (pos) is pinned in the list,
+ * and list_safe_reset_next is called after re-taking the lock and before
+ * completing the current iteration of the loop body.
+ */
+#define list_safe_reset_next(pos, n, member)				\
+	n = list_entry(pos->member.next, typeof(*pos), member)
+
 /*
  * Double linked lists with a single pointer list head.
  * Mostly useful for hash tables where the two pointer list head is
···
 extern unsigned long nr_running(void);
 extern unsigned long nr_uninterruptible(void);
 extern unsigned long nr_iowait(void);
-extern unsigned long nr_iowait_cpu(void);
+extern unsigned long nr_iowait_cpu(int cpu);
 extern unsigned long this_cpu_load(void);

+12
init/main.c
···
  * gcc-3.4 accidentally inlines this function, so use noinline.
  */

+static __initdata DECLARE_COMPLETION(kthreadd_done);
+
 static noinline void __init_refok rest_init(void)
	__releases(kernel_lock)
 {
	int pid;

	rcu_scheduler_starting();
+	/*
+	 * We need to spawn init first so that it obtains pid 1, however
+	 * the init task will end up wanting to create kthreads, which, if
+	 * we schedule it before we create kthreadd, will OOPS.
+	 */
	kernel_thread(kernel_init, NULL, CLONE_FS | CLONE_SIGHAND);
	numa_default_policy();
	pid = kernel_thread(kthreadd, NULL, CLONE_FS | CLONE_FILES);
	rcu_read_lock();
	kthreadd_task = find_task_by_pid_ns(pid, &init_pid_ns);
	rcu_read_unlock();
+	complete(&kthreadd_done);
	unlock_kernel();

	/*
···
 static int __init kernel_init(void * unused)
 {
+	/*
+	 * Wait until kthreadd is all set-up.
+	 */
+	wait_for_completion(&kthreadd_done);
	lock_kernel();

	/*
+4-13
kernel/futex.c
···
 static struct task_struct * futex_find_get_task(pid_t pid)
 {
	struct task_struct *p;
-	const struct cred *cred = current_cred(), *pcred;

	rcu_read_lock();
	p = find_task_by_vpid(pid);
-	if (!p) {
-		p = ERR_PTR(-ESRCH);
-	} else {
-		pcred = __task_cred(p);
-		if (cred->euid != pcred->euid &&
-		    cred->euid != pcred->uid)
-			p = ERR_PTR(-ESRCH);
-		else
-			get_task_struct(p);
-	}
+	if (p)
+		get_task_struct(p);

	rcu_read_unlock();
···
	if (!pid)
		return -ESRCH;
	p = futex_find_get_task(pid);
-	if (IS_ERR(p))
-		return PTR_ERR(p);
+	if (!p)
+		return -ESRCH;

	/*
	 * We need to look at the task state flags to figure out,
@@ -154,14 +154,14 @@
  * Updates the per cpu time idle statistics counters
  */
 static void
-update_ts_time_stats(struct tick_sched *ts, ktime_t now, u64 *last_update_time)
+update_ts_time_stats(int cpu, struct tick_sched *ts, ktime_t now, u64 *last_update_time)
 {
 	ktime_t delta;
 
 	if (ts->idle_active) {
 		delta = ktime_sub(now, ts->idle_entrytime);
 		ts->idle_sleeptime = ktime_add(ts->idle_sleeptime, delta);
-		if (nr_iowait_cpu() > 0)
+		if (nr_iowait_cpu(cpu) > 0)
 			ts->iowait_sleeptime = ktime_add(ts->iowait_sleeptime, delta);
 		ts->idle_entrytime = now;
 	}
@@ -175,19 +175,19 @@
 {
 	struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
 
-	update_ts_time_stats(ts, now, NULL);
+	update_ts_time_stats(cpu, ts, now, NULL);
 	ts->idle_active = 0;
 
 	sched_clock_idle_wakeup_event(0);
 }
 
-static ktime_t tick_nohz_start_idle(struct tick_sched *ts)
+static ktime_t tick_nohz_start_idle(int cpu, struct tick_sched *ts)
 {
 	ktime_t now;
 
 	now = ktime_get();
 
-	update_ts_time_stats(ts, now, NULL);
+	update_ts_time_stats(cpu, ts, now, NULL);
 
 	ts->idle_entrytime = now;
 	ts->idle_active = 1;
@@ -216,7 +216,7 @@
 	if (!tick_nohz_enabled)
 		return -1;
 
-	update_ts_time_stats(ts, ktime_get(), last_update_time);
+	update_ts_time_stats(cpu, ts, ktime_get(), last_update_time);
 
 	return ktime_to_us(ts->idle_sleeptime);
 }
@@ -242,7 +242,7 @@
 	if (!tick_nohz_enabled)
 		return -1;
 
-	update_ts_time_stats(ts, ktime_get(), last_update_time);
+	update_ts_time_stats(cpu, ts, ktime_get(), last_update_time);
 
 	return ktime_to_us(ts->iowait_sleeptime);
 }
@@ -284,7 +284,7 @@
 	 */
 	ts->inidle = 1;
 
-	now = tick_nohz_start_idle(ts);
+	now = tick_nohz_start_idle(cpu, ts);
 
 	/*
 	 * If this cpu is offline and it is the one which updates
+1-1
lib/dynamic_debug.c
@@ -692,7 +692,7 @@
  * Called in response to a module being unloaded. Removes
  * any ddebug_table's which point at the module.
  */
-int ddebug_remove_module(char *mod_name)
+int ddebug_remove_module(const char *mod_name)
 {
 	struct ddebug_table *dt, *nextdt;
 	int ret = -ENOENT;
@@ -2094,7 +2094,7 @@
 		NODEMASK_SCRATCH(scratch);
 
 		if (!scratch)
-			return;
+			goto put_mpol;
 		/* contextualize the tmpfs mount point mempolicy */
 		new = mpol_new(mpol->mode, mpol->flags, &mpol->w.user_nodemask);
 		if (IS_ERR(new))
@@ -2103,19 +2103,20 @@
 		task_lock(current);
 		ret = mpol_set_nodemask(new, &mpol->w.user_nodemask, scratch);
 		task_unlock(current);
-		mpol_put(mpol);	/* drop our ref on sb mpol */
 		if (ret)
-			goto put_free;
+			goto put_new;
 
 		/* Create pseudo-vma that contains just the policy */
 		memset(&pvma, 0, sizeof(struct vm_area_struct));
 		pvma.vm_end = TASK_SIZE;	/* policy covers entire file */
 		mpol_set_shared_policy(sp, &pvma, new); /* adds ref */
 
-put_free:
+put_new:
 		mpol_put(new);			/* drop initial ref */
 free_scratch:
 		NODEMASK_SCRATCH_FREE(scratch);
+put_mpol:
+		mpol_put(mpol);	/* drop our incoming ref on sb mpol */
 	}
 }
 
+2-3
mm/page-writeback.c
@@ -597,7 +597,7 @@
 	    (!laptop_mode && ((global_page_state(NR_FILE_DIRTY)
 			       + global_page_state(NR_UNSTABLE_NFS))
 					  > background_thresh)))
-		bdi_start_writeback(bdi, NULL, 0);
+		bdi_start_background_writeback(bdi);
 }
 
 void set_page_dirty_balance(struct page *page, int page_mkwrite)
@@ -705,9 +705,8 @@
 	 * We want to write everything out, not just down to the dirty
 	 * threshold
 	 */
-
 	if (bdi_has_dirty_io(&q->backing_dev_info))
-		bdi_start_writeback(&q->backing_dev_info, NULL, nr_pages);
+		bdi_start_writeback(&q->backing_dev_info, nr_pages);
 }
 
 /*
@@ -10,73 +10,158 @@
 #
 
 usage() {
-	echo "Usage: $0 [srctree]" >&2
+	echo "Usage: $0 [--scm-only] [srctree]" >&2
 	exit 1
 }
 
-cd "${1:-.}" || usage
+scm_only=false
+srctree=.
+if test "$1" = "--scm-only"; then
+	scm_only=true
+	shift
+fi
+if test $# -gt 0; then
+	srctree=$1
+	shift
+fi
+if test $# -gt 0 -o ! -d "$srctree"; then
+	usage
+fi
 
-# Check for git and a git repo.
-if head=`git rev-parse --verify --short HEAD 2>/dev/null`; then
+scm_version()
+{
+	local short=false
 
-	# If we are at a tagged commit (like "v2.6.30-rc6"), we ignore it,
-	# because this version is defined in the top level Makefile.
-	if [ -z "`git describe --exact-match 2>/dev/null`" ]; then
+	cd "$srctree"
+	if test -e .scmversion; then
+		cat "$_"
+		return
+	fi
+	if test "$1" = "--short"; then
+		short=true
+	fi
 
-		# If we are past a tagged commit (like "v2.6.30-rc5-302-g72357d5"),
-		# we pretty print it.
-		if atag="`git describe 2>/dev/null`"; then
-			echo "$atag" | awk -F- '{printf("-%05d-%s", $(NF-1),$(NF))}'
+	# Check for git and a git repo.
+	if head=`git rev-parse --verify --short HEAD 2>/dev/null`; then
 
-		# If we don't have a tag at all we print -g{commitish}.
-		else
-			printf '%s%s' -g $head
+		# If we are at a tagged commit (like "v2.6.30-rc6"), we ignore
+		# it, because this version is defined in the top level Makefile.
+		if [ -z "`git describe --exact-match 2>/dev/null`" ]; then
+
+			# If only the short version is requested, don't bother
+			# running further git commands
+			if $short; then
+				echo "+"
+				return
+			fi
+			# If we are past a tagged commit (like
+			# "v2.6.30-rc5-302-g72357d5"), we pretty print it.
+			if atag="`git describe 2>/dev/null`"; then
+				echo "$atag" | awk -F- '{printf("-%05d-%s", $(NF-1),$(NF))}'
+
+			# If we don't have a tag at all we print -g{commitish}.
+			else
+				printf '%s%s' -g $head
+			fi
 		fi
+
+		# Is this git on svn?
+		if git config --get svn-remote.svn.url >/dev/null; then
+			printf -- '-svn%s' "`git svn find-rev $head`"
+		fi
+
+		# Update index only on r/w media
+		[ -w . ] && git update-index --refresh --unmerged > /dev/null
+
+		# Check for uncommitted changes
+		if git diff-index --name-only HEAD | grep -v "^scripts/package" \
+		    | read dummy; then
+			printf '%s' -dirty
+		fi
+
+		# All done with git
+		return
 	fi
 
-	# Is this git on svn?
-	if git config --get svn-remote.svn.url >/dev/null; then
-		printf -- '-svn%s' "`git svn find-rev $head`"
+	# Check for mercurial and a mercurial repo.
+	if hgid=`hg id 2>/dev/null`; then
+		tag=`printf '%s' "$hgid" | cut -d' ' -f2`
+
+		# Do we have an untagged version?
+		if [ -z "$tag" -o "$tag" = tip ]; then
+			id=`printf '%s' "$hgid" | sed 's/[+ ].*//'`
+			printf '%s%s' -hg "$id"
+		fi
+
+		# Are there uncommitted changes?
+		# These are represented by + after the changeset id.
+		case "$hgid" in
+		*+|*+\ *) printf '%s' -dirty ;;
+		esac
+
+		# All done with mercurial
+		return
 	fi
 
-	# Update index only on r/w media
-	[ -w . ] && git update-index --refresh --unmerged > /dev/null
+	# Check for svn and a svn repo.
+	if rev=`svn info 2>/dev/null | grep '^Last Changed Rev'`; then
+		rev=`echo $rev | awk '{print $NF}'`
+		printf -- '-svn%s' "$rev"
 
-	# Check for uncommitted changes
-	if git diff-index --name-only HEAD | grep -v "^scripts/package" \
-	| read dummy; then
-		printf '%s' -dirty
+		# All done with svn
+		return
 	fi
+}
 
-	# All done with git
+collect_files()
+{
+	local file res
+
+	for file; do
+		case "$file" in
+		*\~*)
+			continue
+			;;
+		esac
+		if test -e "$file"; then
+			res="$res$(cat "$file")"
+		fi
+	done
+	echo "$res"
+}
+
+if $scm_only; then
+	scm_version
 	exit
 fi
 
-# Check for mercurial and a mercurial repo.
-if hgid=`hg id 2>/dev/null`; then
-	tag=`printf '%s' "$hgid" | cut -d' ' -f2`
+if test -e include/config/auto.conf; then
+	source "$_"
+else
+	echo "Error: kernelrelease not valid - run 'make prepare' to update it"
+	exit 1
+fi
 
-	# Do we have an untagged version?
-	if [ -z "$tag" -o "$tag" = tip ]; then
-		id=`printf '%s' "$hgid" | sed 's/[+ ].*//'`
-		printf '%s%s' -hg "$id"
+# localversion* files in the build and source directory
+res="$(collect_files localversion*)"
+if test ! "$srctree" -ef .; then
+	res="$res$(collect_files "$srctree"/localversion*)"
+fi
+
+# CONFIG_LOCALVERSION and LOCALVERSION (if set)
+res="${res}${CONFIG_LOCALVERSION}${LOCALVERSION}"
+
+# scm version string if not at a tagged commit
+if test "$CONFIG_LOCALVERSION_AUTO" = "y"; then
+	# full scm version string
+	res="$res$(scm_version)"
+else
+	# apped a plus sign if the repository is not in a clean tagged
+	# state and LOCALVERSION= is not specified
+	if test "${LOCALVERSION+set}" != "set"; then
+		scm=$(scm_version --short)
+		res="$res${scm:++}"
 	fi
-
-	# Are there uncommitted changes?
-	# These are represented by + after the changeset id.
-	case "$hgid" in
-	*+|*+\ *) printf '%s' -dirty ;;
-	esac
-
-	# All done with mercurial
-	exit
 fi
 
-# Check for svn and a svn repo.
-if rev=`svn info 2>/dev/null | grep '^Last Changed Rev'`; then
-	rev=`echo $rev | awk '{print $NF}'`
-	printf -- '-svn%s' "$rev"
-
-	# All done with svn
-	exit
-fi
+echo "$res"