···
 
 ----------------------------
 
-What:	Support for lcd_switch and display_get in asus-laptop driver
-When:	March 2010
-Why:	These two features use non-standard interfaces. There are the
-	only features that really need multiple path to guess what's
-	the right method name on a specific laptop.
-
-	Removing them will allow to remove a lot of code an significantly
-	clean the drivers.
-
-	This will affect the backlight code which won't be able to know
-	if the backlight is on or off. The platform display file will also be
-	write only (like the one in eeepc-laptop).
-
-	This should'nt affect a lot of user because they usually know
-	when their display is on or off.
-
-Who:	Corentin Chary <corentin.chary@gmail.com>
-
-----------------------------
-
 What:	sysfs-class-rfkill state file
 When:	Feb 2014
 Files:	net/rfkill/core.c
+262
Documentation/input/event-codes.txt
···
+The input protocol uses a map of types and codes to express input device values
+to userspace. This document describes the types and codes and how and when they
+may be used.
+
+A single hardware event generates multiple input events. Each input event
+contains the new value of a single data item. A special event type, EV_SYN, is
+used to separate input events into packets of input data changes occurring at
+the same moment in time. In the following, the term "event" refers to a single
+input event encompassing a type, code, and value.
+
+The input protocol is a stateful protocol. Events are emitted only when values
+of event codes have changed. However, the state is maintained within the Linux
+input subsystem; drivers do not need to maintain the state and may attempt to
+emit unchanged values without harm. Userspace may obtain the current state of
+event code values using the EVIOCG* ioctls defined in linux/input.h. The event
+reports supported by a device are also provided by sysfs in
+class/input/event*/device/capabilities/, and the properties of a device are
+provided in class/input/event*/device/properties.
+
+Types:
+==========
+Types are groupings of codes under a logical input construct. Each type has a
+set of applicable codes to be used in generating events. See the Codes section
+for details on valid codes for each type.
+
+* EV_SYN:
+  - Used as markers to separate events. Events may be separated in time or in
+    space, such as with the multitouch protocol.
+
+* EV_KEY:
+  - Used to describe state changes of keyboards, buttons, or other key-like
+    devices.
+
+* EV_REL:
+  - Used to describe relative axis value changes, e.g. moving the mouse 5 units
+    to the left.
+
+* EV_ABS:
+  - Used to describe absolute axis value changes, e.g. describing the
+    coordinates of a touch on a touchscreen.
+
+* EV_MSC:
+  - Used to describe miscellaneous input data that do not fit into other types.
+
+* EV_SW:
+  - Used to describe binary state input switches.
+
+* EV_LED:
+  - Used to turn LEDs on devices on and off.
+
+* EV_SND:
+  - Used to output sound to devices.
+
+* EV_REP:
+  - Used for autorepeating devices.
+
+* EV_FF:
+  - Used to send force feedback commands to an input device.
+
+* EV_PWR:
+  - A special type for power button and switch input.
+
+* EV_FF_STATUS:
+  - Used to receive force feedback device status.
+
+Codes:
+==========
+Codes define the precise type of event.
+
+EV_SYN:
+----------
+EV_SYN event values are undefined. Their usage is defined only by when they are
+sent in the evdev event stream.
+
+* SYN_REPORT:
+  - Used to synchronize and separate events into packets of input data changes
+    occurring at the same moment in time. For example, motion of a mouse may set
+    the REL_X and REL_Y values for one motion, then emit a SYN_REPORT. The next
+    motion will emit more REL_X and REL_Y values and send another SYN_REPORT.
+
+* SYN_CONFIG:
+  - TBD
+
+* SYN_MT_REPORT:
+  - Used to synchronize and separate touch events. See the
+    multi-touch-protocol.txt document for more information.
+
+* SYN_DROPPED:
+  - Used to indicate buffer overrun in the evdev client's event queue.
+    Client should ignore all events up to and including next SYN_REPORT
+    event and query the device (using EVIOCG* ioctls) to obtain its
+    current state.
+
+EV_KEY:
+----------
+EV_KEY events take the form KEY_<name> or BTN_<name>. For example, KEY_A is used
+to represent the 'A' key on a keyboard. When a key is depressed, an event with
+the key's code is emitted with value 1. When the key is released, an event is
+emitted with value 0. Some hardware sends events when a key is repeated. These
+events have a value of 2. In general, KEY_<name> is used for keyboard keys, and
+BTN_<name> is used for other types of momentary switch events.
+
+A few EV_KEY codes have special meanings:
+
+* BTN_TOOL_<name>:
+  - These codes are used in conjunction with input trackpads, tablets, and
+    touchscreens. These devices may be used with fingers, pens, or other tools.
+    When an event occurs and a tool is used, the corresponding BTN_TOOL_<name>
+    code should be set to a value of 1. When the tool is no longer interacting
+    with the input device, the BTN_TOOL_<name> code should be reset to 0. All
+    trackpads, tablets, and touchscreens should use at least one BTN_TOOL_<name>
+    code when events are generated.
+
+* BTN_TOUCH:
+  - BTN_TOUCH is used for touch contact. While an input tool is determined to
+    be within meaningful physical contact, the value of this property must be
+    set to 1. Meaningful physical contact may mean any contact, or it may mean
+    contact conditioned by an implementation defined property. For example, a
+    touchpad may set the value to 1 only when the touch pressure rises above a
+    certain value. BTN_TOUCH may be combined with BTN_TOOL_<name> codes. For
+    example, a pen tablet may set BTN_TOOL_PEN to 1 and BTN_TOUCH to 0 while
+    the pen is hovering over but not touching the tablet surface.
+
+Note: For appropriate function of the legacy mousedev emulation driver,
+BTN_TOUCH must be the first evdev code emitted in a synchronization frame.
+
+Note: Historically a touch device with BTN_TOOL_FINGER and BTN_TOUCH was
+interpreted as a touchpad by userspace, while a similar device without
+BTN_TOOL_FINGER was interpreted as a touchscreen. For backwards compatibility
+with current userspace it is recommended to follow this distinction. In the
+future, this distinction will be deprecated and the device properties ioctl
+EVIOCGPROP, defined in linux/input.h, will be used to convey the device type.
+
+* BTN_TOOL_FINGER, BTN_TOOL_DOUBLETAP, BTN_TOOL_TRIPLETAP, BTN_TOOL_QUADTAP:
+  - These codes denote one, two, three, and four finger interaction on a
+    trackpad or touchscreen. For example, if the user uses two fingers and moves
+    them on the touchpad in an effort to scroll content on screen,
+    BTN_TOOL_DOUBLETAP should be set to value 1 for the duration of the motion.
+    Note that all BTN_TOOL_<name> codes and the BTN_TOUCH code are orthogonal in
+    purpose. A trackpad event generated by finger touches should generate events
+    for one code from each group. At most only one of these BTN_TOOL_<name>
+    codes should have a value of 1 during any synchronization frame.
+
+Note: Historically some drivers emitted multiple of the finger count codes with
+a value of 1 in the same synchronization frame. This usage is deprecated.
+
+Note: In multitouch drivers, the input_mt_report_finger_count() function should
+be used to emit these codes. Please see multi-touch-protocol.txt for details.
+
+EV_REL:
+----------
+EV_REL events describe relative changes in a property. For example, a mouse may
+move to the left by a certain number of units, but its absolute position in
+space is unknown. If the absolute position is known, EV_ABS codes should be used
+instead of EV_REL codes.
+
+A few EV_REL codes have special meanings:
+
+* REL_WHEEL, REL_HWHEEL:
+  - These codes are used for vertical and horizontal scroll wheels,
+    respectively.
+
+EV_ABS:
+----------
+EV_ABS events describe absolute changes in a property. For example, a touchpad
+may emit coordinates for a touch location.
+
+A few EV_ABS codes have special meanings:
+
+* ABS_DISTANCE:
+  - Used to describe the distance of a tool from an interaction surface. This
+    event should only be emitted while the tool is hovering, meaning in close
+    proximity of the device and while the value of the BTN_TOUCH code is 0. If
+    the input device may be used freely in three dimensions, consider ABS_Z
+    instead.
+
+* ABS_MT_<name>:
+  - Used to describe multitouch input events. Please see
+    multi-touch-protocol.txt for details.
+
+EV_SW:
+----------
+EV_SW events describe stateful binary switches. For example, the SW_LID code is
+used to denote when a laptop lid is closed.
+
+Upon binding to a device or resuming from suspend, a driver must report
+the current switch state. This ensures that the device, kernel, and userspace
+state is in sync.
+
+Upon resume, if the switch state is the same as before suspend, then the input
+subsystem will filter out the duplicate switch state reports. The driver does
+not need to keep the state of the switch at any time.
+
+EV_MSC:
+----------
+EV_MSC events are used for input and output events that do not fall under other
+categories.
+
+EV_LED:
+----------
+EV_LED events are used for input and output to set and query the state of
+various LEDs on devices.
+
+EV_REP:
+----------
+EV_REP events are used for specifying autorepeating events.
+
+EV_SND:
+----------
+EV_SND events are used for sending sound commands to simple sound output
+devices.
+
+EV_FF:
+----------
+EV_FF events are used to initialize a force feedback capable device and to
+cause such a device to play feedback effects.
+
+EV_PWR:
+----------
+EV_PWR events are a special type of event used specifically for power
+management. Their usage is not well defined. To be addressed later.
+
+Guidelines:
+==========
+The guidelines below ensure proper single-touch and multi-finger functionality.
+For multi-touch functionality, see the multi-touch-protocol.txt document for
+more information.
+
+Mice:
+----------
+REL_{X,Y} must be reported when the mouse moves. BTN_LEFT must be used to report
+the primary button press. BTN_{MIDDLE,RIGHT,4,5,etc.} should be used to report
+further buttons of the device. REL_WHEEL and REL_HWHEEL should be used to report
+scroll wheel events where available.
+
+Touchscreens:
+----------
+ABS_{X,Y} must be reported with the location of the touch. BTN_TOUCH must be
+used to report when a touch is active on the screen.
+BTN_{MOUSE,LEFT,MIDDLE,RIGHT} must not be reported as the result of touch
+contact. BTN_TOOL_<name> events should be reported where possible.
+
+Trackpads:
+----------
+Legacy trackpads that only provide relative position information must report
+events like mice described above.
+
+Trackpads that provide absolute touch position must report ABS_{X,Y} for the
+location of the touch. BTN_TOUCH should be used to report when a touch is active
+on the trackpad. Where multi-finger support is available, BTN_TOOL_<name> should
+be used to report the number of touches active on the trackpad.
+
+Tablets:
+----------
+BTN_TOOL_<name> events must be reported when a stylus or other tool is active on
+the tablet. ABS_{X,Y} must be reported with the location of the tool. BTN_TOUCH
+should be used to report when the tool is in contact with the tablet.
+BTN_{STYLUS,STYLUS2} should be used to report buttons on the tool itself. Any
+button may be used for buttons on the tablet except BTN_{MOUSE,LEFT}.
+BTN_{0,1,2,etc} are good generic codes for unlabeled buttons. Do not use
+meaningful buttons, like BTN_FORWARD, unless the button is labeled for that
+purpose on the device.
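The stateful, frame-based protocol this new document describes can be sketched from the consumer side. The following is a minimal user-space illustration, not kernel code: `struct ev` mirrors only the type/code/value triple of `struct input_event` from linux/input.h (timestamps omitted), and `apply_frame()` is a hypothetical helper that folds one synchronization frame into a key-state table.

```c
#include <stdint.h>
#include <stddef.h>

/* Mirror of the type/code/value triple carried by struct input_event in
 * linux/input.h; the timestamp fields are omitted for brevity. */
struct ev {
	uint16_t type;	/* EV_KEY, EV_REL, EV_SYN, ... */
	uint16_t code;	/* KEY_A, REL_X, SYN_REPORT, ... */
	int32_t  value;	/* EV_KEY: 1 press, 0 release, 2 autorepeat */
};

enum { EV_SYN = 0x00, EV_KEY = 0x01, SYN_REPORT = 0 };

/* Apply one synchronization frame to a key-state table: events are folded
 * until the SYN_REPORT terminator, so the new state becomes visible "at the
 * same moment in time". Returns the number of events consumed, including
 * the SYN_REPORT itself. */
size_t apply_frame(const struct ev *q, size_t n, uint8_t keystate[256])
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (q[i].type == EV_SYN && q[i].code == SYN_REPORT)
			return i + 1;	/* frame complete */
		if (q[i].type == EV_KEY && q[i].code < 256)
			keystate[q[i].code] = (q[i].value != 0);
	}
	return i;	/* partial frame: wait for more events */
}
```

After a SYN_DROPPED, per the text above, a client would instead discard queued events through the next SYN_REPORT and rebuild its state with the EVIOCG* ioctls.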
+25 -21
MAINTAINERS
···
 F:	fs/9p/
 
 A2232 SERIAL BOARD DRIVER
-M:	Enver Haase <A2232@gmx.net>
 L:	linux-m68k@lists.linux-m68k.org
-S:	Maintained
-F:	drivers/char/ser_a2232*
+S:	Orphan
+F:	drivers/staging/generic_serial/ser_a2232*
 
 AACRAID SCSI RAID DRIVER
 M:	Adaptec OEM Raid Solutions <aacraid@adaptec.com>
···
 F:	arch/arm/mach-orion5x/
 F:	arch/arm/plat-orion/
 
+ARM/Orion SoC/Technologic Systems TS-78xx platform support
+M:	Alexander Clouter <alex@digriz.org.uk>
+L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+W:	http://www.digriz.org.uk/ts78xx/kernel
+S:	Maintained
+F:	arch/arm/mach-orion5x/ts78xx-*
+
 ARM/MIOA701 MACHINE SUPPORT
 M:	Robert Jarzmik <robert.jarzmik@free.fr>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
···
 F:	drivers/sh/
 
 ARM/TELECHIPS ARM ARCHITECTURE
-M:	"Hans J. Koch" <hjk@linutronix.de>
+M:	"Hans J. Koch" <hjk@hansjkoch.de>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	arch/arm/plat-tcc/
···
 F:	drivers/platform/x86/compal-laptop.c
 
 COMPUTONE INTELLIPORT MULTIPORT CARD
-M:	"Michael H. Warfield" <mhw@wittsend.com>
 W:	http://www.wittsend.com/computone.html
-S:	Maintained
+S:	Orphan
 F:	Documentation/serial/computone.txt
-F:	drivers/char/ip2/
+F:	drivers/staging/tty/ip2/
 
 CONEXANT ACCESSRUNNER USB DRIVER
 M:	Simon Arlott <cxacru@fire.lp0.eu>
···
 CYCLADES ASYNC MUX DRIVER
 W:	http://www.cyclades.com/
 S:	Orphan
-F:	drivers/char/cyclades.c
+F:	drivers/tty/cyclades.c
 F:	include/linux/cyclades.h
 
 CYCLADES PC300 DRIVER
···
 W:	http://www.digi.com
 S:	Orphan
 F:	Documentation/serial/digiepca.txt
-F:	drivers/char/epca*
-F:	drivers/char/digi*
+F:	drivers/staging/tty/epca*
+F:	drivers/staging/tty/digi*
 
 DIOLAN U2C-12 I2C DRIVER
 M:	Guenter Roeck <guenter.roeck@ericsson.com>
···
 F:	include/linux/matroxfb.h
 
 MAX6650 HARDWARE MONITOR AND FAN CONTROLLER DRIVER
-M:	"Hans J. Koch" <hjk@linutronix.de>
+M:	"Hans J. Koch" <hjk@hansjkoch.de>
 L:	lm-sensors@lm-sensors.org
 S:	Maintained
 F:	Documentation/hwmon/max6650
···
 M:	Jiri Slaby <jirislaby@gmail.com>
 S:	Maintained
 F:	Documentation/serial/moxa-smartio
-F:	drivers/char/mxser.*
+F:	drivers/tty/mxser.*
 
 MSI LAPTOP SUPPORT
 M:	"Lee, Chun-Yi" <jlee@novell.com>
···
 
 MULTITECH MULTIPORT CARD (ISICOM)
 S:	Orphan
-F:	drivers/char/isicom.c
+F:	drivers/tty/isicom.c
 F:	include/linux/isicom.h
 
 MUSB MULTIPOINT HIGH SPEED DUAL-ROLE CONTROLLER
···
 RISCOM8 DRIVER
 S:	Orphan
 F:	Documentation/serial/riscom8.txt
-F:	drivers/char/riscom8*
+F:	drivers/staging/tty/riscom8*
 
 ROCKETPORT DRIVER
 P:	Comtrol Corp.
 W:	http://www.comtrol.com
 S:	Maintained
 F:	Documentation/serial/rocket.txt
-F:	drivers/char/rocket*
+F:	drivers/tty/rocket*
 
 ROSE NETWORK LAYER
 M:	Ralf Baechle <ralf@linux-mips.org>
···
 F:	arch/arm/mach-spear6xx/spear600_evb.c
 
 SPECIALIX IO8+ MULTIPORT SERIAL CARD DRIVER
-M:	Roger Wolff <R.E.Wolff@BitWizard.nl>
-S:	Supported
+S:	Orphan
 F:	Documentation/serial/specialix.txt
-F:	drivers/char/specialix*
+F:	drivers/staging/tty/specialix*
 
 SPI SUBSYSTEM
 M:	David Brownell <dbrownell@users.sourceforge.net>
···
 STABLE BRANCH
 M:	Greg Kroah-Hartman <greg@kroah.com>
-M:	Chris Wright <chrisw@sous-sol.org>
 L:	stable@kernel.org
 S:	Maintained
 
···
 W:	http://www.uclinux.org/
 L:	uclinux-dev@uclinux.org (subscribers-only)
 S:	Maintained
-F:	arch/m68knommu/
+F:	arch/m68k/*/*_no.*
+F:	arch/m68k/include/asm/*_no.*
 
 UCLINUX FOR RENESAS H8/300 (H8300)
 M:	Yoshinori Sato <ysato@users.sourceforge.jp>
···
 F:	fs/hppfs/
 
 USERSPACE I/O (UIO)
-M:	"Hans J. Koch" <hjk@linutronix.de>
+M:	"Hans J. Koch" <hjk@hansjkoch.de>
 M:	Greg Kroah-Hartman <gregkh@suse.de>
 S:	Maintained
 F:	Documentation/DocBook/uio-howto.tmpl
···
 	depends on ARCH_DAVINCI_DM644x
 	select MISC_DEVICES
 	select EEPROM_AT24
+	select I2C
 	help
 	  Configure this option to specify the whether the board used
 	  for development is a DM644x EVM
···
 	depends on ARCH_DAVINCI_DM644x
 	select MISC_DEVICES
 	select EEPROM_AT24
+	select I2C
 	help
 	  Say Y here to select the Lyrtech Small Form Factor
 	  Software Defined Radio (SFFSDR) board.
···
 	select MACH_DAVINCI_DM6467TEVM
 	select MISC_DEVICES
 	select EEPROM_AT24
+	select I2C
 	help
 	  Configure this option to specify the whether the board used
 	  for development is a DM6467 EVM
···
 	depends on ARCH_DAVINCI_DM365
 	select MISC_DEVICES
 	select EEPROM_AT24
+	select I2C
 	help
 	  Configure this option to specify whether the board used
 	  for development is a DM365 EVM
···
 	select GPIO_PCF857X
 	select MISC_DEVICES
 	select EEPROM_AT24
+	select I2C
 	help
 	  Say Y here to select the TI DA830/OMAP-L137/AM17x Evaluation Module.
···
 	depends on ARCH_DAVINCI_DA850
 	select MISC_DEVICES
 	select EEPROM_AT24
+	select I2C
 	help
 	  Say Y here to select the Critical Link MityDSP-L138/MityARM-1808
 	  System on Module. Information on this SoM may be found at
···
  *
  * This area sits just below the page tables (see arch/arm/kernel/head.S).
  */
-#define DAVINCI_UART_INFO	(PHYS_OFFSET + 0x3ff8)
+#define DAVINCI_UART_INFO	(PLAT_PHYS_OFFSET + 0x3ff8)
 
 #define DAVINCI_UART0_BASE	(IO_PHYS + 0x20000)
 #define DAVINCI_UART1_BASE	(IO_PHYS + 0x20400)
+1 -4
arch/arm/mach-msm/board-qsd8x50.c
···
 
 static void __init qsd8x50_init_mmc(void)
 {
-	if (machine_is_qsd8x50_ffa() || machine_is_qsd8x50a_ffa())
-		vreg_mmc = vreg_get(NULL, "gp6");
-	else
-		vreg_mmc = vreg_get(NULL, "gp5");
+	vreg_mmc = vreg_get(NULL, "gp5");
 
 	if (IS_ERR(vreg_mmc)) {
 		pr_err("vreg get for vreg_mmc failed (%ld)\n",
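The hunk above keeps the usual kernel error-pointer idiom: `vreg_get()` returns either a valid handle or an encoded errno, and `IS_ERR()` distinguishes the two. A user-space sketch of that convention, mirroring the shape of include/linux/err.h (where the top 4095 pointer values encode negative errnos):

```c
#include <stdint.h>

/* Sketch of the kernel's error-pointer convention (include/linux/err.h):
 * the last 4095 values of the address space are reserved to encode the
 * errno range -MAX_ERRNO..-1 inside a pointer. */
#define MAX_ERRNO	4095

static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```

This lets a single return value carry either a resource or a precise failure code, which is why the caller above checks `IS_ERR(vreg_mmc)` rather than comparing against NULL.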
+1 -1
arch/arm/mach-msm/timer.c
···
 
 	/* Use existing clock_event for cpu 0 */
 	if (!smp_processor_id())
-		return;
+		return 0;
 
 	writel(DGT_CLK_CTL_DIV_4, MSM_TMR_BASE + DGT_CLK_CTL);
 
+4 -2
arch/arm/mach-tegra/gpio.c
···
 void tegra_gpio_resume(void)
 {
 	unsigned long flags;
-	int b, p, i;
+	int b;
+	int p;
 
 	local_irq_save(flags);
···
 void tegra_gpio_suspend(void)
 {
 	unsigned long flags;
-	int b, p, i;
+	int b;
+	int p;
 
 	local_irq_save(flags);
 	for (b = 0; b < ARRAY_SIZE(tegra_gpio_banks); b++) {
+5 -4
arch/arm/mach-tegra/tegra2_clocks.c
···
 {
 	unsigned long flags;
 	int ret;
+	long new_rate = rate;
 
-	rate = clk_round_rate(c->parent, rate);
-	if (rate < 0)
-		return rate;
+	new_rate = clk_round_rate(c->parent, new_rate);
+	if (new_rate < 0)
+		return new_rate;
 
 	spin_lock_irqsave(&c->parent->spinlock, flags);
 
-	c->u.shared_bus_user.rate = rate;
+	c->u.shared_bus_user.rate = new_rate;
 	ret = tegra_clk_shared_bus_update(c->parent);
 
 	spin_unlock_irqrestore(&c->parent->spinlock, flags);
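The switch to `new_rate` above is not cosmetic: `clk_round_rate()` returns a signed `long` that may carry a negative errno, while the function's `rate` parameter is unsigned, so storing the result back into `rate` made the `< 0` check unreachable. A compact illustration of the trap, with a hypothetical `round_rate()` standing in for `clk_round_rate()`:

```c
/* Hypothetical stand-in for clk_round_rate(): negative errno on bad input,
 * otherwise the rate rounded down to a 1 kHz step. */
static long round_rate(long rate)
{
	if (rate <= 0)
		return -22;		/* -EINVAL */
	return rate - (rate % 1000);
}

/* Buggy shape: the signed result is stored back into an unsigned variable,
 * so the "error" comparison is unsigned and can never be true. */
static int error_detected_buggy(unsigned long rate)
{
	rate = (unsigned long)round_rate((long)rate);
	return rate < 0;		/* always 0 */
}

/* Fixed shape, as in the hunk: keep the result in a signed long. */
static int error_detected_fixed(unsigned long rate)
{
	long new_rate = round_rate((long)rate);

	return new_rate < 0;
}
```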
-11
arch/arm/plat-s5p/pm.c
···
 
 #define PFX "s5p pm: "
 
-/* s3c_pm_check_resume_pin
- *
- * check to see if the pin is configured correctly for sleep mode, and
- * make any necessary adjustments if it is not
-*/
-
-static void s3c_pm_check_resume_pin(unsigned int pin, unsigned int irqoffs)
-{
-	/* nothing here yet */
-}
-
 /* s3c_pm_configure_extint
  *
  * configure all external interrupt pins
-6
arch/arm/plat-samsung/pm-check.c
···
  */
 static u32 *s3c_pm_runcheck(struct resource *res, u32 *val)
 {
-	void *save_at = phys_to_virt(s3c_sleep_save_phys);
 	unsigned long addr;
 	unsigned long left;
 	void *stkpage;
···
 
 		if (in_region(ptr, left, crcs, crc_size)) {
 			S3C_PMDBG("skipping %08lx, has crc block in\n", addr);
-			goto skip_check;
-		}
-
-		if (in_region(ptr, left, save_at, 32*4 )) {
-			S3C_PMDBG("skipping %08lx, has save block in\n", addr);
 			goto skip_check;
 		}
 
+3 -2
arch/arm/plat-samsung/pm.c
···
  *
  * print any IRQs asserted at resume time (ie, we woke from)
 */
-static void s3c_pm_show_resume_irqs(int start, unsigned long which,
-				    unsigned long mask)
+static void __maybe_unused s3c_pm_show_resume_irqs(int start,
+						   unsigned long which,
+						   unsigned long mask)
 {
 	int i;
 
···
 __tagtable(ATAG_CLOCK, parse_tag_clock);
 
 /*
+ * The board_number corresponds to the bd->bi_board_number in U-Boot. This
+ * parameter is only available during initialisation and can be used for
+ * some kind of board identification.
+ */
+u32 __initdata board_number;
+
+static int __init parse_tag_boardinfo(struct tag *tag)
+{
+	board_number = tag->u.boardinfo.board_number;
+
+	return 0;
+}
+__tagtable(ATAG_BOARDINFO, parse_tag_boardinfo);
+
+/*
  * Scan the tag table for this tag, and call its parse function. The
  * tag table is built by the linker from all the __tagtable
  * declarations.
-22
arch/avr32/kernel/traps.c
···
 	info.si_code = code;
 	info.si_addr = (void __user *)addr;
 	force_sig_info(signr, &info, current);
-
-	/*
-	 * Init gets no signals that it doesn't have a handler for.
-	 * That's all very well, but if it has caused a synchronous
-	 * exception and we ignore the resulting signal, it will just
-	 * generate the same exception over and over again and we get
-	 * nowhere. Better to kill it and let the kernel panic.
-	 */
-	if (is_global_init(current)) {
-		__sighandler_t handler;
-
-		spin_lock_irq(&current->sighand->siglock);
-		handler = current->sighand->action[signr-1].sa.sa_handler;
-		spin_unlock_irq(&current->sighand->siglock);
-		if (handler == SIG_DFL) {
-			/* init has generated a synchronous exception
-			   and it doesn't have a handler for the signal */
-			printk(KERN_CRIT "init has generated signal %ld "
-					"but has no handler for it\n", signr);
-			do_exit(signr);
-		}
-	}
 }
 
 asmlinkage void do_nmi(unsigned long ecr, struct pt_regs *regs)
···
 	st.w	r8[TI_flags], r9
 	unmask_interrupts
 	sleep	CPU_SLEEP_IDLE
-	.size	cpu_idle_sleep, . - cpu_idle_sleep
+	.size	cpu_enter_idle, . - cpu_enter_idle
 
 	/*
 	 * Common return path for PM functions that don't run from
+18 -18
arch/blackfin/include/asm/system.h
···
  * Force strict CPU ordering.
  */
 #define nop()	__asm__ __volatile__ ("nop;\n\t" : : )
-#define mb()	__asm__ __volatile__ ("" : : : "memory")
-#define rmb()	__asm__ __volatile__ ("" : : : "memory")
-#define wmb()	__asm__ __volatile__ ("" : : : "memory")
-#define set_mb(var, value) do { (void) xchg(&var, value); } while (0)
-#define read_barrier_depends()	do { } while(0)
+#define smp_mb()	mb()
+#define smp_rmb()	rmb()
+#define smp_wmb()	wmb()
+#define set_mb(var, value) do { var = value; mb(); } while (0)
+#define smp_read_barrier_depends()	read_barrier_depends()
 
 #ifdef CONFIG_SMP
 asmlinkage unsigned long __raw_xchg_1_asm(volatile void *ptr, unsigned long value);
···
 			unsigned long new, unsigned long old);
 
 #ifdef __ARCH_SYNC_CORE_DCACHE
-# define smp_mb()	do { barrier(); smp_check_barrier(); smp_mark_barrier(); } while (0)
-# define smp_rmb()	do { barrier(); smp_check_barrier(); } while (0)
-# define smp_wmb()	do { barrier(); smp_mark_barrier(); } while (0)
-#define smp_read_barrier_depends()	do { barrier(); smp_check_barrier(); } while (0)
-
+/* Force Core data cache coherence */
+# define mb()	do { barrier(); smp_check_barrier(); smp_mark_barrier(); } while (0)
+# define rmb()	do { barrier(); smp_check_barrier(); } while (0)
+# define wmb()	do { barrier(); smp_mark_barrier(); } while (0)
+# define read_barrier_depends()	do { barrier(); smp_check_barrier(); } while (0)
 #else
-# define smp_mb()	barrier()
-# define smp_rmb()	barrier()
-# define smp_wmb()	barrier()
-#define smp_read_barrier_depends()	barrier()
+# define mb()	barrier()
+# define rmb()	barrier()
+# define wmb()	barrier()
+# define read_barrier_depends()	do { } while (0)
 #endif
 
 static inline unsigned long __xchg(unsigned long x, volatile void *ptr,
···
 
 #else /* !CONFIG_SMP */
 
-#define smp_mb()	barrier()
-#define smp_rmb()	barrier()
-#define smp_wmb()	barrier()
-#define smp_read_barrier_depends()	do { } while(0)
+#define mb()	barrier()
+#define rmb()	barrier()
+#define wmb()	barrier()
+#define read_barrier_depends()	do { } while (0)
 
 struct __xchg_dummy {
 	unsigned long a[100];
+1 -1
arch/blackfin/kernel/gptimers.c
···
 	_disable_gptimers(mask);
 	for (i = 0; i < MAX_BLACKFIN_GPTIMERS; ++i)
 		if (mask & (1 << i))
-			group_regs[BFIN_TIMER_OCTET(i)]->status |= trun_mask[i];
+			group_regs[BFIN_TIMER_OCTET(i)]->status = trun_mask[i];
 	SSYNC();
 }
 EXPORT_SYMBOL(disable_gptimers);
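The change from `|=` to `=` above matters if, as the hunk suggests, the timer status register is write-one-to-clear (W1C): a read-modify-write then acknowledges every status bit that happens to be set, not just the intended one. A small simulation of the difference (all names hypothetical):

```c
#include <stdint.h>

/* Simulated write-one-to-clear status register: writing a 1 to a bit
 * position clears that pending bit. */
struct w1c_reg {
	uint32_t bits;
};

static void w1c_write(struct w1c_reg *r, uint32_t val)
{
	r->bits &= ~val;
}

/* Buggy |= style: reads the whole status and writes it back, which also
 * acknowledges unrelated pending bits. */
static void ack_rmw(struct w1c_reg *r, uint32_t mask)
{
	w1c_write(r, r->bits | mask);
}

/* Fixed = style: write only the bit we intend to clear. */
static void ack_direct(struct w1c_reg *r, uint32_t mask)
{
	w1c_write(r, mask);
}
```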
+7 -1
arch/blackfin/kernel/time-ts.c
···
 {
 	struct clock_event_device *evt = dev_id;
 	smp_mb();
-	evt->event_handler(evt);
+	/*
+	 * We want to ACK before we handle so that we can handle smaller timer
+	 * intervals. This way if the timer expires again while we're handling
+	 * things, we're more likely to see that 2nd int rather than swallowing
+	 * it by ACKing the int at the end of this handler.
+	 */
 	bfin_gptmr0_ack();
+	evt->event_handler(evt);
 	return IRQ_HANDLED;
 }
 
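The comment added in this hunk describes an ordering race: if the handler acknowledges the interrupt only after running, a second expiry that latches while the handler runs is wiped out by the late ACK. A toy model of the two orderings, with `pending` standing in for the hardware latch (all names hypothetical):

```c
static int pending;                 /* simulated hardware interrupt latch */
static int expires_mid_handler;     /* does the timer fire while handling? */

static void handler_body(void)
{
	if (expires_mid_handler)
		pending = 1;        /* timer expires during the handler */
}

/* Fixed ordering: ACK first, then handle; a mid-handler expiry survives. */
static int run_irq_ack_first(void)
{
	pending = 0;
	handler_body();
	return pending;
}

/* Old ordering: handle first, ACK last; the second interrupt is swallowed. */
static int run_irq_ack_last(void)
{
	handler_body();
	pending = 0;
	return pending;
}
```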
+16 -3
arch/blackfin/mach-common/smp.c
···
 	struct blackfin_flush_data *fdata = info;
 
 	/* Invalidate the memory holding the bounds of the flushed region. */
-	invalidate_dcache_range((unsigned long)fdata,
-				(unsigned long)fdata + sizeof(*fdata));
+	blackfin_dcache_invalidate_range((unsigned long)fdata,
+					 (unsigned long)fdata + sizeof(*fdata));
 
-	flush_icache_range(fdata->start, fdata->end);
+	/* Make sure all write buffers in the data side of the core
+	 * are flushed before trying to invalidate the icache. This
+	 * needs to be after the data flush and before the icache
+	 * flush so that the SSYNC does the right thing in preventing
+	 * the instruction prefetcher from hitting things in cached
+	 * memory at the wrong time -- it runs much further ahead than
+	 * the pipeline.
+	 */
+	SSYNC();
+
+	/* ipi_flush_icache is invoked by generic flush_icache_range,
+	 * so call blackfin arch icache flush directly here.
+	 */
+	blackfin_icache_flush_range(fdata->start, fdata->end);
 }
 
 static void ipi_call_function(unsigned int cpu, struct ipi_message *msg)
···
  * on platforms where such control is possible.
  */
 #if defined(CONFIG_KGDB) || defined(CONFIG_XMON) || defined(CONFIG_BDI_SWITCH) ||\
-	defined(CONFIG_KPROBES)
+	defined(CONFIG_KPROBES) || defined(CONFIG_DYNAMIC_FTRACE)
 #define PAGE_KERNEL_TEXT	PAGE_KERNEL_X
 #else
 #define PAGE_KERNEL_TEXT	PAGE_KERNEL_ROX
···
 }
 
 /* wait for all the CPUs to hit real mode but timeout if they don't come in */
-#if defined(CONFIG_PPC_STD_MMU_64) && defined(CONFIG_SMP)
+#ifdef CONFIG_PPC_STD_MMU_64
 static void crash_kexec_wait_realmode(int cpu)
 {
 	unsigned int msecs;
···
 	}
 	mb();
 }
-#else
-static inline void crash_kexec_wait_realmode(int cpu) {}
-#endif
+#endif	/* CONFIG_PPC_STD_MMU_64 */
 
 /*
  * This function will be called by secondary cpus or by kexec cpu
···
 	crash_ipi_callback(regs);
 }
 
-#else
+#else	/* ! CONFIG_SMP */
+static inline void crash_kexec_wait_realmode(int cpu) {}
+
 static void crash_kexec_prepare_cpus(int cpu)
 {
 	/*
···
 {
 	cpus_in_sr = CPU_MASK_NONE;
 }
-#endif
+#endif	/* CONFIG_SMP */
 
 /*
  * Register a function to be called on shutdown. Only use this if you
···
 		if (!parent)
 			continue;
 		if (of_match_node(legacy_serial_parents, parent) != NULL) {
-			index = add_legacy_soc_port(np, np);
-			if (index >= 0 && np == stdout)
-				legacy_serial_console = index;
+			if (of_device_is_available(np)) {
+				index = add_legacy_soc_port(np, np);
+				if (index >= 0 && np == stdout)
+					legacy_serial_console = index;
+			}
 		}
 		of_node_put(parent);
 	}
+30 -7
arch/powerpc/kernel/perf_event.c
···398398 return 0;399399}400400401401+static u64 check_and_compute_delta(u64 prev, u64 val)402402+{403403+ u64 delta = (val - prev) & 0xfffffffful;404404+405405+ /*406406+ * POWER7 can roll back counter values, if the new value is smaller407407+ * than the previous value it will cause the delta and the counter to408408+ * have bogus values unless we rolled a counter over. If a coutner is409409+ * rolled back, it will be smaller, but within 256, which is the maximum410410+ * number of events to rollback at once. If we dectect a rollback411411+ * return 0. This can lead to a small lack of precision in the412412+ * counters.413413+ */414414+ if (prev > val && (prev - val) < 256)415415+ delta = 0;416416+417417+ return delta;418418+}419419+401420static void power_pmu_read(struct perf_event *event)402421{403422 s64 val, delta, prev;···435416 prev = local64_read(&event->hw.prev_count);436417 barrier();437418 val = read_pmc(event->hw.idx);419419+ delta = check_and_compute_delta(prev, val);420420+ if (!delta)421421+ return;438422 } while (local64_cmpxchg(&event->hw.prev_count, prev, val) != prev);439423440440- /* The counters are only 32 bits wide */441441- delta = (val - prev) & 0xfffffffful;442424 local64_add(delta, &event->count);443425 local64_sub(delta, &event->hw.period_left);444426}···469449 val = (event->hw.idx == 5) ? pmc5 : pmc6;470450 prev = local64_read(&event->hw.prev_count);471451 event->hw.idx = 0;472472- delta = (val - prev) & 0xfffffffful;473473- local64_add(delta, &event->count);452452+ delta = check_and_compute_delta(prev, val);453453+ if (delta)454454+ local64_add(delta, &event->count);474455 }475456}476457···479458 unsigned long pmc5, unsigned long pmc6)480459{481460 struct perf_event *event;482482- u64 val;461461+ u64 val, prev;483462 int i;484463485464 for (i = 0; i < cpuhw->n_limited; ++i) {486465 event = cpuhw->limited_counter[i];487466 event->hw.idx = cpuhw->limited_hwidx[i];488467 val = (event->hw.idx == 5) ? 
pmc5 : pmc6;489489- local64_set(&event->hw.prev_count, val);468468+ prev = local64_read(&event->hw.prev_count);469469+ if (check_and_compute_delta(prev, val))470470+ local64_set(&event->hw.prev_count, val);490471 perf_event_update_userpage(event);491472 }492473}···1220119712211198 /* we don't have to worry about interrupts here */12221199 prev = local64_read(&event->hw.prev_count);12231223- delta = (val - prev) & 0xfffffffful;12001200+ delta = check_and_compute_delta(prev, val);12241201 local64_add(delta, &event->count);1225120212261203 /*
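The rollback guard added above is easy to exercise in isolation. The following is a standalone userspace sketch of the masked-delta computation (illustrative re-implementation, not the kernel source; the kernel version operates on live PMC reads):

```c
#include <stdint.h>

/* Userspace sketch of the patch's rollback-aware delta. The PMCs are
 * 32 bits wide, so the difference is masked to 32 bits; a small
 * backwards step (POWER7 can roll a counter back by up to 256 events)
 * is reported as "no progress" instead of a huge bogus delta. */
uint64_t check_and_compute_delta(uint64_t prev, uint64_t val)
{
	uint64_t delta = (val - prev) & 0xfffffffful;

	if (prev > val && (prev - val) < 256)
		delta = 0;

	return delta;
}
```

Without the guard, a rollback of even one event would be folded into a delta of nearly 2^32, corrupting both event->count and period_left.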
···9696#define MSR_IA32_MC0_ADDR 0x000004029797#define MSR_IA32_MC0_MISC 0x0000040398989999+#define MSR_AMD64_MC0_MASK 0xc0010044100100+99101#define MSR_IA32_MCx_CTL(x) (MSR_IA32_MC0_CTL + 4*(x))100102#define MSR_IA32_MCx_STATUS(x) (MSR_IA32_MC0_STATUS + 4*(x))101103#define MSR_IA32_MCx_ADDR(x) (MSR_IA32_MC0_ADDR + 4*(x))102104#define MSR_IA32_MCx_MISC(x) (MSR_IA32_MC0_MISC + 4*(x))105105+106106+#define MSR_AMD64_MCx_MASK(x) (MSR_AMD64_MC0_MASK + (x))103107104108/* These are consecutive and not in the normal 4er MCE bank block */105109#define MSR_IA32_MC0_CTL2 0x00000280
+19
arch/x86/kernel/cpu/amd.c
···615615 /* As a rule processors have APIC timer running in deep C states */616616 if (c->x86 >= 0xf && !cpu_has_amd_erratum(amd_erratum_400))617617 set_cpu_cap(c, X86_FEATURE_ARAT);618618+619619+ /*620620+ * Disable GART TLB Walk Errors on Fam10h. We do this here621621+ * because this is always needed when GART is enabled, even in a622622+ * kernel which has no MCE support built in.623623+ */624624+ if (c->x86 == 0x10) {625625+ /*626626+ * The BIOS should disable GartTlbWlk Errors itself. If it627627+ * doesn't, do it here, as suggested by the BKDG.628628+ *629629+ * Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=33012630630+ */631631+ u64 mask;632632+633633+ rdmsrl(MSR_AMD64_MCx_MASK(4), mask);634634+ mask |= (1 << 10);635635+ wrmsrl(MSR_AMD64_MCx_MASK(4), mask);636636+ }618637}619638620639#ifdef CONFIG_X86_32
+23
arch/x86/kernel/smpboot.c
···312312 identify_secondary_cpu(c);313313}314314315315+static void __cpuinit check_cpu_siblings_on_same_node(int cpu1, int cpu2)316316+{317317+ int node1 = early_cpu_to_node(cpu1);318318+ int node2 = early_cpu_to_node(cpu2);319319+320320+ /*321321+ * Our CPU scheduler assumes all logical cpus in the same physical cpu322322+ * share the same node. But buggy ACPI or NUMA emulation might assign323323+ * them to different nodes. Fix it.324324+ */325325+ if (node1 != node2) {326326+ pr_warning("CPU %d in node %d and CPU %d in node %d are in the same physical CPU. forcing same node %d\n",327327+ cpu1, node1, cpu2, node2, node2);328328+329329+ numa_remove_cpu(cpu1);330330+ numa_set_node(cpu1, node2);331331+ numa_add_cpu(cpu1);332332+ }333333+}334334+315335static void __cpuinit link_thread_siblings(int cpu1, int cpu2)316336{317337 cpumask_set_cpu(cpu1, cpu_sibling_mask(cpu2));···340320 cpumask_set_cpu(cpu2, cpu_core_mask(cpu1));341321 cpumask_set_cpu(cpu1, cpu_llc_shared_mask(cpu2));342322 cpumask_set_cpu(cpu2, cpu_llc_shared_mask(cpu1));323323+ check_cpu_siblings_on_same_node(cpu1, cpu2);343324}344325345326···382361 per_cpu(cpu_llc_id, cpu) == per_cpu(cpu_llc_id, i)) {383362 cpumask_set_cpu(i, cpu_llc_shared_mask(cpu));384363 cpumask_set_cpu(cpu, cpu_llc_shared_mask(i));364364+ check_cpu_siblings_on_same_node(cpu, i);385365 }386366 if (c->phys_proc_id == cpu_data(i).phys_proc_id) {387367 cpumask_set_cpu(i, cpu_core_mask(cpu));388368 cpumask_set_cpu(cpu, cpu_core_mask(i));369369+ check_cpu_siblings_on_same_node(cpu, i);389370 /*390371 * Does this new cpu bringup a new core?391372 */
···198198}199199EXPORT_SYMBOL(blk_dump_rq_flags);200200201201-/*202202- * Make sure that plugs that were pending when this function was entered,203203- * are now complete and requests pushed to the queue.204204-*/205205-static inline void queue_sync_plugs(struct request_queue *q)206206-{207207- /*208208- * If the current process is plugged and has barriers submitted,209209- * we will livelock if we don't unplug first.210210- */211211- blk_flush_plug(current);212212-}213213-214201static void blk_delay_work(struct work_struct *work)215202{216203 struct request_queue *q;217204218205 q = container_of(work, struct request_queue, delay_work.work);219206 spin_lock_irq(q->queue_lock);220220- __blk_run_queue(q, false);207207+ __blk_run_queue(q);221208 spin_unlock_irq(q->queue_lock);222209}223210···220233 */221234void blk_delay_queue(struct request_queue *q, unsigned long msecs)222235{223223- schedule_delayed_work(&q->delay_work, msecs_to_jiffies(msecs));236236+ queue_delayed_work(kblockd_workqueue, &q->delay_work,237237+ msecs_to_jiffies(msecs));224238}225239EXPORT_SYMBOL(blk_delay_queue);226240···239251 WARN_ON(!irqs_disabled());240252241253 queue_flag_clear(QUEUE_FLAG_STOPPED, q);242242- __blk_run_queue(q, false);254254+ __blk_run_queue(q);243255}244256EXPORT_SYMBOL(blk_start_queue);245257···286298{287299 del_timer_sync(&q->timeout);288300 cancel_delayed_work_sync(&q->delay_work);289289- queue_sync_plugs(q);290301}291302EXPORT_SYMBOL(blk_sync_queue);292303···297310 * Description:298311 * See @blk_run_queue. 
This variant must be called with the queue lock299312 * held and interrupts disabled.300300- *301313 */302302-void __blk_run_queue(struct request_queue *q, bool force_kblockd)314314+void __blk_run_queue(struct request_queue *q)303315{304316 if (unlikely(blk_queue_stopped(q)))305317 return;···307321 * Only recurse once to avoid overrunning the stack, let the unplug308322 * handling reinvoke the handler shortly if we already got there.309323 */310310- if (!force_kblockd && !queue_flag_test_and_set(QUEUE_FLAG_REENTER, q)) {324324+ if (!queue_flag_test_and_set(QUEUE_FLAG_REENTER, q)) {311325 q->request_fn(q);312326 queue_flag_clear(QUEUE_FLAG_REENTER, q);313327 } else314328 queue_delayed_work(kblockd_workqueue, &q->delay_work, 0);315329}316330EXPORT_SYMBOL(__blk_run_queue);331331+332332+/**333333+ * blk_run_queue_async - run a single device queue in workqueue context334334+ * @q: The queue to run335335+ *336336+ * Description:337337+ * Tells kblockd to perform the equivalent of @blk_run_queue on behalf338338+ * of us.339339+ */340340+void blk_run_queue_async(struct request_queue *q)341341+{342342+ if (likely(!blk_queue_stopped(q)))343343+ queue_delayed_work(kblockd_workqueue, &q->delay_work, 0);344344+}317345318346/**319347 * blk_run_queue - run a single device queue···342342 unsigned long flags;343343344344 spin_lock_irqsave(q->queue_lock, flags);345345- __blk_run_queue(q, false);345345+ __blk_run_queue(q);346346 spin_unlock_irqrestore(q->queue_lock, flags);347347}348348EXPORT_SYMBOL(blk_run_queue);···991991 blk_queue_end_tag(q, rq);992992993993 add_acct_request(q, rq, where);994994- __blk_run_queue(q, false);994994+ __blk_run_queue(q);995995 spin_unlock_irqrestore(q->queue_lock, flags);996996}997997EXPORT_SYMBOL(blk_insert_request);···1311131113121312 plug = current->plug;13131313 if (plug) {13141314- if (!plug->should_sort && !list_empty(&plug->list)) {13141314+ /*13151315+ * If this is the first request added after a plug, fire13161316+ * off a plug trace. 
If others have been added before, check13171317+ * if we have multiple devices in this plug. If so, make a13181318+ * note to sort the list before dispatch.13191319+ */13201320+ if (list_empty(&plug->list))13211321+ trace_block_plug(q);13221322+ else if (!plug->should_sort) {13151323 struct request *__rq;1316132413171325 __rq = list_entry_rq(plug->list.prev);···13351327 } else {13361328 spin_lock_irq(q->queue_lock);13371329 add_acct_request(q, req, where);13381338- __blk_run_queue(q, false);13301330+ __blk_run_queue(q);13391331out_unlock:13401332 spin_unlock_irq(q->queue_lock);13411333 }···2652264426532645 plug->magic = PLUG_MAGIC;26542646 INIT_LIST_HEAD(&plug->list);26472647+ INIT_LIST_HEAD(&plug->cb_list);26552648 plug->should_sort = 0;2656264926572650 /*···26772668 return !(rqa->q <= rqb->q);26782669}2679267026802680-static void flush_plug_list(struct blk_plug *plug)26712671+/*26722672+ * If 'from_schedule' is true, then postpone the dispatch of requests26732673+ * until a safe kblockd context. 
We do this to avoid accidental big26742674+ * additional stack usage in driver dispatch, in places where the original26752675+ * plugger did not intend it.26762676+ */26772677+static void queue_unplugged(struct request_queue *q, unsigned int depth,26782678+ bool from_schedule)26792679+ __releases(q->queue_lock)26802680+{26812681+ trace_block_unplug(q, depth, !from_schedule);26822682+26832683+ /*26842684+ * If we are punting this to kblockd, then we can safely drop26852685+ * the queue_lock before waking kblockd (which needs to take26862686+ * this lock).26872687+ */26882688+ if (from_schedule) {26892689+ spin_unlock(q->queue_lock);26902690+ blk_run_queue_async(q);26912691+ } else {26922692+ __blk_run_queue(q);26932693+ spin_unlock(q->queue_lock);26942694+ }26952695+26962696+}26972697+26982698+static void flush_plug_callbacks(struct blk_plug *plug)26992699+{27002700+ LIST_HEAD(callbacks);27012701+27022702+ if (list_empty(&plug->cb_list))27032703+ return;27042704+27052705+ list_splice_init(&plug->cb_list, &callbacks);27062706+27072707+ while (!list_empty(&callbacks)) {27082708+ struct blk_plug_cb *cb = list_first_entry(&callbacks,27092709+ struct blk_plug_cb,27102710+ list);27112711+ list_del(&cb->list);27122712+ cb->callback(cb);27132713+ }27142714+}27152715+27162716+void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)26812717{26822718 struct request_queue *q;26832719 unsigned long flags;26842720 struct request *rq;27212721+ LIST_HEAD(list);27222722+ unsigned int depth;2685272326862724 BUG_ON(plug->magic != PLUG_MAGIC);2687272527262726+ flush_plug_callbacks(plug);26882727 if (list_empty(&plug->list))26892728 return;2690272926912691- if (plug->should_sort)26922692- list_sort(NULL, &plug->list, plug_rq_cmp);27302730+ list_splice_init(&plug->list, &list);27312731+27322732+ if (plug->should_sort) {27332733+ list_sort(NULL, &list, plug_rq_cmp);27342734+ plug->should_sort = 0;27352735+ }2693273626942737 q = NULL;27382738+ depth = 0;27392739+27402740+ 
/*27412741+ * Save and disable interrupts here, to avoid doing it for every27422742+ * queue lock we have to take.27432743+ */26952744 local_irq_save(flags);26962696- while (!list_empty(&plug->list)) {26972697- rq = list_entry_rq(plug->list.next);27452745+ while (!list_empty(&list)) {27462746+ rq = list_entry_rq(list.next);26982747 list_del_init(&rq->queuelist);26992748 BUG_ON(!(rq->cmd_flags & REQ_ON_PLUG));27002749 BUG_ON(!rq->q);27012750 if (rq->q != q) {27022702- if (q) {27032703- __blk_run_queue(q, false);27042704- spin_unlock(q->queue_lock);27052705- }27512751+ /*27522752+ * This drops the queue lock27532753+ */27542754+ if (q)27552755+ queue_unplugged(q, depth, from_schedule);27062756 q = rq->q;27572757+ depth = 0;27072758 spin_lock(q->queue_lock);27082759 }27092760 rq->cmd_flags &= ~REQ_ON_PLUG;···27752706 __elv_add_request(q, rq, ELEVATOR_INSERT_FLUSH);27762707 else27772708 __elv_add_request(q, rq, ELEVATOR_INSERT_SORT_MERGE);27092709+27102710+ depth++;27782711 }2779271227802780- if (q) {27812781- __blk_run_queue(q, false);27822782- spin_unlock(q->queue_lock);27832783- }27132713+ /*27142714+ * This drops the queue lock27152715+ */27162716+ if (q)27172717+ queue_unplugged(q, depth, from_schedule);2784271827852785- BUG_ON(!list_empty(&plug->list));27862719 local_irq_restore(flags);27872720}27882788-27892789-static void __blk_finish_plug(struct task_struct *tsk, struct blk_plug *plug)27902790-{27912791- flush_plug_list(plug);27922792-27932793- if (plug == tsk->plug)27942794- tsk->plug = NULL;27952795-}27212721+EXPORT_SYMBOL(blk_flush_plug_list);2796272227972723void blk_finish_plug(struct blk_plug *plug)27982724{27992799- if (plug)28002800- __blk_finish_plug(current, plug);27252725+ blk_flush_plug_list(plug, false);27262726+27272727+ if (plug == current->plug)27282728+ current->plug = NULL;28012729}28022730EXPORT_SYMBOL(blk_finish_plug);28032803-28042804-void __blk_flush_plug(struct task_struct *tsk, struct blk_plug *plug)28052805-{28062806- 
__blk_finish_plug(tsk, plug);28072807- tsk->plug = plug;28082808-}28092809-EXPORT_SYMBOL(__blk_flush_plug);2810273128112732int __init blk_dev_init(void)28122733{
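The flush_plug_callbacks() helper added above relies on a common pattern: splice the whole callback list onto a private list before walking it, so a callback may safely re-add itself without being run twice in the same pass. Below is a minimal userspace sketch of that pattern using a hand-rolled singly-linked list instead of the kernel's list_head API (all names here are hypothetical):

```c
#include <stddef.h>

/* Hypothetical minimal callback node; the kernel uses struct
 * blk_plug_cb with an embedded list_head instead. */
struct cb {
	struct cb *next;
	void (*fn)(struct cb *);
};

int flushed;			/* demo side effect for the example callback */

void count_cb(struct cb *c)
{
	(void)c;
	flushed++;
}

/* Detach the entire list first ("splice"), then run the callbacks.
 * Anything a callback adds back to *list survives for the next flush
 * instead of being re-run in this pass. */
void flush_callbacks(struct cb **list)
{
	struct cb *pending = *list;
	*list = NULL;

	while (pending) {
		struct cb *c = pending;
		pending = c->next;
		c->next = NULL;
		c->fn(c);
	}
}
```

In the kernel version the splice is list_splice_init(&plug->cb_list, &callbacks), which achieves the same "empty the shared list atomically, then walk a private copy" effect.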
+1-1
block/blk-exec.c
···5555 WARN_ON(irqs_disabled());5656 spin_lock_irq(q->queue_lock);5757 __elv_add_request(q, rq, where);5858- __blk_run_queue(q, false);5858+ __blk_run_queue(q);5959 /* the queue is stopped so it won't be plugged+unplugged */6060 if (rq->cmd_type == REQ_TYPE_PM_RESUME)6161 q->request_fn(q);
+2-2
block/blk-flush.c
···218218 * request_fn may confuse the driver. Always use kblockd.219219 */220220 if (queued)221221- __blk_run_queue(q, true);221221+ blk_run_queue_async(q);222222}223223224224/**···274274 * the comment in flush_end_io().275275 */276276 if (blk_flush_complete_seq(rq, REQ_FSEQ_DATA, error))277277- __blk_run_queue(q, true);277277+ blk_run_queue_async(q);278278}279279280280/**
+1-2
block/blk-sysfs.c
···498498{499499 int ret;500500 struct device *dev = disk_to_dev(disk);501501-502501 struct request_queue *q = disk->queue;503502504503 if (WARN_ON(!q))···520521 if (ret) {521522 kobject_uevent(&q->kobj, KOBJ_REMOVE);522523 kobject_del(&q->kobj);523523- blk_trace_remove_sysfs(disk_to_dev(disk));524524+ blk_trace_remove_sysfs(dev);524525 kobject_put(&dev->kobj);525526 return ret;526527 }
···33683368 cfqd->busy_queues > 1) {33693369 cfq_del_timer(cfqd, cfqq);33703370 cfq_clear_cfqq_wait_request(cfqq);33713371- __blk_run_queue(cfqd->queue, false);33713371+ __blk_run_queue(cfqd->queue);33723372 } else {33733373 cfq_blkiocg_update_idle_time_stats(33743374 &cfqq->cfqg->blkg);···33833383 * this new queue is RT and the current one is BE33843384 */33853385 cfq_preempt_queue(cfqd, cfqq);33863386- __blk_run_queue(cfqd->queue, false);33863386+ __blk_run_queue(cfqd->queue);33873387 }33883388}33893389···37433743 struct request_queue *q = cfqd->queue;3744374437453745 spin_lock_irq(q->queue_lock);37463746- __blk_run_queue(cfqd->queue, false);37463746+ __blk_run_queue(cfqd->queue);37473747 spin_unlock_irq(q->queue_lock);37483748}37493749
+2-2
block/elevator.c
···642642 */643643 elv_drain_elevator(q);644644 while (q->rq.elvpriv) {645645- __blk_run_queue(q, false);645645+ __blk_run_queue(q);646646 spin_unlock_irq(q->queue_lock);647647 msleep(10);648648 spin_lock_irq(q->queue_lock);···695695 * with anything. There's no point in delaying queue696696 * processing.697697 */698698- __blk_run_queue(q, false);698698+ __blk_run_queue(q);699699 break;700700701701 case ELEVATOR_INSERT_SORT_MERGE:
···233233 }234234 break;235235#endif /* CONFIG_SUSPEND */236236-#ifdef CONFIG_HIBERNATION236236+#ifdef CONFIG_HIBERNATE_CALLBACKS237237 case PM_EVENT_FREEZE:238238 case PM_EVENT_QUIESCE:239239 if (ops->freeze) {···260260 suspend_report_result(ops->restore, error);261261 }262262 break;263263-#endif /* CONFIG_HIBERNATION */263263+#endif /* CONFIG_HIBERNATE_CALLBACKS */264264 default:265265 error = -EINVAL;266266 }···308308 }309309 break;310310#endif /* CONFIG_SUSPEND */311311-#ifdef CONFIG_HIBERNATION311311+#ifdef CONFIG_HIBERNATE_CALLBACKS312312 case PM_EVENT_FREEZE:313313 case PM_EVENT_QUIESCE:314314 if (ops->freeze_noirq) {···335335 suspend_report_result(ops->restore_noirq, error);336336 }337337 break;338338-#endif /* CONFIG_HIBERNATION */338338+#endif /* CONFIG_HIBERNATE_CALLBACKS */339339 default:340340 error = -EINVAL;341341 }
+1
drivers/gpu/drm/Kconfig
···9696 # i915 depends on ACPI_VIDEO when ACPI is enabled9797 # but for select to work, need to select ACPI_VIDEO's dependencies, ick9898 select BACKLIGHT_CLASS_DEVICE if ACPI9999+ select VIDEO_OUTPUT_CONTROL if ACPI99100 select INPUT if ACPI100101 select ACPI_VIDEO if ACPI101102 select ACPI_BUTTON if ACPI
···2323#include "drmP.h"2424#include "radeon.h"2525#include "avivod.h"2626+#include "atom.h"2627#ifdef CONFIG_ACPI2728#include <linux/acpi.h>2829#endif···536535 /* set up the default clocks if the MC ucode is loaded */537536 if (ASIC_IS_DCE5(rdev) && rdev->mc_fw) {538537 if (rdev->pm.default_vddc)539539- radeon_atom_set_voltage(rdev, rdev->pm.default_vddc);538538+ radeon_atom_set_voltage(rdev, rdev->pm.default_vddc,539539+ SET_VOLTAGE_TYPE_ASIC_VDDC);540540+ if (rdev->pm.default_vddci)541541+ radeon_atom_set_voltage(rdev, rdev->pm.default_vddci,542542+ SET_VOLTAGE_TYPE_ASIC_VDDCI);540543 if (rdev->pm.default_sclk)541544 radeon_set_engine_clock(rdev, rdev->pm.default_sclk);542545 if (rdev->pm.default_mclk)···553548 rdev->pm.current_sclk = rdev->pm.default_sclk;554549 rdev->pm.current_mclk = rdev->pm.default_mclk;555550 rdev->pm.current_vddc = rdev->pm.power_state[rdev->pm.default_power_state_index].clock_info[0].voltage.voltage;551551+ rdev->pm.current_vddci = rdev->pm.power_state[rdev->pm.default_power_state_index].clock_info[0].voltage.vddci;556552 if (rdev->pm.pm_method == PM_METHOD_DYNPM557553 && rdev->pm.dynpm_state == DYNPM_STATE_SUSPENDED) {558554 rdev->pm.dynpm_state = DYNPM_STATE_ACTIVE;···591585 /* set up the default clocks if the MC ucode is loaded */592586 if (ASIC_IS_DCE5(rdev) && rdev->mc_fw) {593587 if (rdev->pm.default_vddc)594594- radeon_atom_set_voltage(rdev, rdev->pm.default_vddc);588588+ radeon_atom_set_voltage(rdev, rdev->pm.default_vddc,589589+ SET_VOLTAGE_TYPE_ASIC_VDDC);595590 if (rdev->pm.default_sclk)596591 radeon_set_engine_clock(rdev, rdev->pm.default_sclk);597592 if (rdev->pm.default_mclk)
···106106107107 if ((voltage->type == VOLTAGE_SW) && voltage->voltage) {108108 if (voltage->voltage != rdev->pm.current_vddc) {109109- radeon_atom_set_voltage(rdev, voltage->voltage);109109+ radeon_atom_set_voltage(rdev, voltage->voltage, SET_VOLTAGE_TYPE_ASIC_VDDC);110110 rdev->pm.current_vddc = voltage->voltage;111111 DRM_DEBUG("Setting: v: %d\n", voltage->voltage);112112 }···12551255{12561256 int r;1257125712581258- r = radeon_dummy_page_init(rdev);12591259- if (r)12601260- return r;12611258 /* This doesn't do much */12621259 r = radeon_gem_init(rdev);12631260 if (r)···13691372 radeon_atombios_fini(rdev);13701373 kfree(rdev->bios);13711374 rdev->bios = NULL;13721372- radeon_dummy_page_fini(rdev);13731375}1374137613751377static void rv770_pcie_gen2_enable(struct radeon_device *rdev)
+3-23
drivers/gpu/drm/ttm/ttm_page_alloc.c
···683683 gfp_flags |= GFP_HIGHUSER;684684685685 for (r = 0; r < count; ++r) {686686- if ((flags & TTM_PAGE_FLAG_DMA32) && dma_address) {687687- void *addr;688688- addr = dma_alloc_coherent(NULL, PAGE_SIZE,689689- &dma_address[r],690690- gfp_flags);691691- if (addr == NULL)692692- return -ENOMEM;693693- p = virt_to_page(addr);694694- } else695695- p = alloc_page(gfp_flags);686686+ p = alloc_page(gfp_flags);696687 if (!p) {697688698689 printk(KERN_ERR TTM_PFX699690 "Unable to allocate page.");700691 return -ENOMEM;701692 }693693+702694 list_add(&p->lru, pages);703695 }704696 return 0;···738746 unsigned long irq_flags;739747 struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);740748 struct page *p, *tmp;741741- unsigned r;742749743750 if (pool == NULL) {744751 /* No pool for this memory type so free the pages */745752746746- r = page_count-1;747753 list_for_each_entry_safe(p, tmp, pages, lru) {748748- if ((flags & TTM_PAGE_FLAG_DMA32) && dma_address) {749749- void *addr = page_address(p);750750- WARN_ON(!addr || !dma_address[r]);751751- if (addr)752752- dma_free_coherent(NULL, PAGE_SIZE,753753- addr,754754- dma_address[r]);755755- dma_address[r] = 0;756756- } else757757- __free_page(p);758758- r--;754754+ __free_page(p);759755 }760756 /* Make the pages list empty */761757 INIT_LIST_HEAD(pages);
+1
drivers/gpu/stub/Kconfig
···55 # Poulsbo stub depends on ACPI_VIDEO when ACPI is enabled66 # but for select to work, need to select ACPI_VIDEO's dependencies, ick77 select BACKLIGHT_CLASS_DEVICE if ACPI88+ select VIDEO_OUTPUT_CONTROL if ACPI89 select INPUT if ACPI910 select ACPI_VIDEO if ACPI1011 select THERMAL if ACPI
+19-3
drivers/i2c/algos/i2c-algo-bit.c
···232232 * Sanity check for the adapter hardware - check the reaction of233233 * the bus lines only if it seems to be idle.234234 */235235-static int test_bus(struct i2c_algo_bit_data *adap, char *name)235235+static int test_bus(struct i2c_adapter *i2c_adap)236236{237237- int scl, sda;237237+ struct i2c_algo_bit_data *adap = i2c_adap->algo_data;238238+ const char *name = i2c_adap->name;239239+ int scl, sda, ret;240240+241241+ if (adap->pre_xfer) {242242+ ret = adap->pre_xfer(i2c_adap);243243+ if (ret < 0)244244+ return -ENODEV;245245+ }238246239247 if (adap->getscl == NULL)240248 pr_info("%s: Testing SDA only, SCL is not readable\n", name);···305297 "while pulling SCL high!\n", name);306298 goto bailout;307299 }300300+301301+ if (adap->post_xfer)302302+ adap->post_xfer(i2c_adap);303303+308304 pr_info("%s: Test OK\n", name);309305 return 0;310306bailout:311307 sdahi(adap);312308 sclhi(adap);309309+310310+ if (adap->post_xfer)311311+ adap->post_xfer(i2c_adap);312312+313313 return -ENODEV;314314}315315···623607 int ret;624608625609 if (bit_test) {626626- ret = test_bus(bit_adap, adap->name);610610+ ret = test_bus(adap);627611 if (ret < 0)628612 return -ENODEV;629613 }
+4-2
drivers/i2c/i2c-core.c
···797797798798 /* Let legacy drivers scan this bus for matching devices */799799 if (driver->attach_adapter) {800800- dev_warn(&adap->dev, "attach_adapter method is deprecated\n");800800+ dev_warn(&adap->dev, "%s: attach_adapter method is deprecated\n",801801+ driver->driver.name);801802 dev_warn(&adap->dev, "Please use another way to instantiate "802803 "your i2c_client\n");803804 /* We ignore the return code; if it fails, too bad */···985984986985 if (!driver->detach_adapter)987986 return 0;988988- dev_warn(&adapter->dev, "detach_adapter method is deprecated\n");987987+ dev_warn(&adapter->dev, "%s: detach_adapter method is deprecated\n",988988+ driver->driver.name);989989 res = driver->detach_adapter(adapter);990990 if (res)991991 dev_err(&adapter->dev, "detach_adapter failed (%d) "
+21-12
drivers/input/evdev.c
···3939};40404141struct evdev_client {4242- int head;4343- int tail;4242+ unsigned int head;4343+ unsigned int tail;4444 spinlock_t buffer_lock; /* protects access to buffer, head and tail */4545 struct fasync_struct *fasync;4646 struct evdev *evdev;4747 struct list_head node;4848- int bufsize;4848+ unsigned int bufsize;4949 struct input_event buffer[];5050};5151···5555static void evdev_pass_event(struct evdev_client *client,5656 struct input_event *event)5757{5858- /*5959- * Interrupts are disabled, just acquire the lock.6060- * Make sure we don't leave with the client buffer6161- * "empty" by having client->head == client->tail.6262- */5858+ /* Interrupts are disabled, just acquire the lock. */6359 spin_lock(&client->buffer_lock);6464- do {6565- client->buffer[client->head++] = *event;6666- client->head &= client->bufsize - 1;6767- } while (client->head == client->tail);6060+6161+ client->buffer[client->head++] = *event;6262+ client->head &= client->bufsize - 1;6363+6464+ if (unlikely(client->head == client->tail)) {6565+ /*6666+ * This effectively "drops" all unconsumed events, leaving6767+ * EV_SYN/SYN_DROPPED plus the newest event in the queue.6868+ */6969+ client->tail = (client->head - 2) & (client->bufsize - 1);7070+7171+ client->buffer[client->tail].time = event->time;7272+ client->buffer[client->tail].type = EV_SYN;7373+ client->buffer[client->tail].code = SYN_DROPPED;7474+ client->buffer[client->tail].value = 0;7575+ }7676+6877 spin_unlock(&client->buffer_lock);69787079 if (event->type == EV_SYN)
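The overflow handling added above keeps head and tail as free-running indices masked with (bufsize - 1), which only works when the buffer size is a power of two. A compact userspace sketch of the same drop-and-mark scheme (simplified int events instead of struct input_event; EV_SYN_DROPPED here is a hypothetical stand-in for the kernel's EV_SYN/SYN_DROPPED pair):

```c
#define BUFSIZE 8		/* must be a power of two */
#define EV_SYN_DROPPED (-1)	/* stand-in marker value */

struct ring {
	unsigned int head, tail;
	int buf[BUFSIZE];
};

/* On overflow: keep the newest event, overwrite the slot before it
 * with a "dropped" marker, and discard everything older by moving
 * tail, mirroring the evdev_pass_event() logic above. */
void ring_push(struct ring *r, int ev)
{
	r->buf[r->head++] = ev;
	r->head &= BUFSIZE - 1;

	if (r->head == r->tail) {
		r->tail = (r->head - 2) & (BUFSIZE - 1);
		r->buf[r->tail] = EV_SYN_DROPPED;
	}
}
```

After an overflow a reader resuming at tail sees the dropped marker first, so it knows to re-sync its view of device state (in evdev's case, via the EVIOCG* ioctls) rather than trust the event stream.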
+40
drivers/input/input.c
···17461746}17471747EXPORT_SYMBOL(input_set_capability);1748174817491749+static unsigned int input_estimate_events_per_packet(struct input_dev *dev)17501750+{17511751+ int mt_slots;17521752+ int i;17531753+ unsigned int events;17541754+17551755+ if (dev->mtsize) {17561756+ mt_slots = dev->mtsize;17571757+ } else if (test_bit(ABS_MT_TRACKING_ID, dev->absbit)) {17581758+ mt_slots = dev->absinfo[ABS_MT_TRACKING_ID].maximum -17591759+ dev->absinfo[ABS_MT_TRACKING_ID].minimum + 1;17601760+ mt_slots = clamp(mt_slots, 2, 32);17611761+ } else if (test_bit(ABS_MT_POSITION_X, dev->absbit)) {17621762+ mt_slots = 2;17631763+ } else {17641764+ mt_slots = 0;17651765+ }17661766+17671767+ events = mt_slots + 1; /* count SYN_MT_REPORT and SYN_REPORT */17681768+17691769+ for (i = 0; i < ABS_CNT; i++) {17701770+ if (test_bit(i, dev->absbit)) {17711771+ if (input_is_mt_axis(i))17721772+ events += mt_slots;17731773+ else17741774+ events++;17751775+ }17761776+ }17771777+17781778+ for (i = 0; i < REL_CNT; i++)17791779+ if (test_bit(i, dev->relbit))17801780+ events++;17811781+17821782+ return events;17831783+}17841784+17491785#define INPUT_CLEANSE_BITMASK(dev, type, bits) \17501786 do { \17511787 if (!test_bit(EV_##type, dev->evbit)) \···1828179218291793 /* Make sure that bitmasks not mentioned in dev->evbit are clean. */18301794 input_cleanse_bitmasks(dev);17951795+17961796+ if (!dev->hint_events_per_packet)17971797+ dev->hint_events_per_packet =17981798+ input_estimate_events_per_packet(dev);1831179918321800 /*18331801 * If delay and period are pre-set by the driver, then autorepeating
···447447448448/* Support for plugging.449449 * This mirrors the plugging support in request_queue, but does not450450- * require having a whole queue450450+ * require having a whole queue or request structures.451451+ * We allocate an md_plug_cb for each md device and each thread it gets452452+ * plugged on. This links to the private plug_handle structure in the453453+ * personality data where we keep a count of the number of outstanding454454+ * plugs so other code can see if a plug is active.451455 */452452-static void plugger_work(struct work_struct *work)453453-{454454- struct plug_handle *plug =455455- container_of(work, struct plug_handle, unplug_work);456456- plug->unplug_fn(plug);457457-}458458-static void plugger_timeout(unsigned long data)459459-{460460- struct plug_handle *plug = (void *)data;461461- kblockd_schedule_work(NULL, &plug->unplug_work);462462-}463463-void plugger_init(struct plug_handle *plug,464464- void (*unplug_fn)(struct plug_handle *))465465-{466466- plug->unplug_flag = 0;467467- plug->unplug_fn = unplug_fn;468468- init_timer(&plug->unplug_timer);469469- plug->unplug_timer.function = plugger_timeout;470470- plug->unplug_timer.data = (unsigned long)plug;471471- INIT_WORK(&plug->unplug_work, plugger_work);472472-}473473-EXPORT_SYMBOL_GPL(plugger_init);456456+struct md_plug_cb {457457+ struct blk_plug_cb cb;458458+ mddev_t *mddev;459459+};474460475475-void plugger_set_plug(struct plug_handle *plug)461461+static void plugger_unplug(struct blk_plug_cb *cb)476462{477477- if (!test_and_set_bit(PLUGGED_FLAG, &plug->unplug_flag))478478- mod_timer(&plug->unplug_timer, jiffies + msecs_to_jiffies(3)+1);463463+ struct md_plug_cb *mdcb = container_of(cb, struct md_plug_cb, cb);464464+ if (atomic_dec_and_test(&mdcb->mddev->plug_cnt))465465+ md_wakeup_thread(mdcb->mddev->thread);466466+ kfree(mdcb);479467}480480-EXPORT_SYMBOL_GPL(plugger_set_plug);481468482482-int plugger_remove_plug(struct plug_handle *plug)469469+/* Check that an unplug wakeup will 
come shortly.470470+ * If not, wakeup the md thread immediately471471+ */472472+int mddev_check_plugged(mddev_t *mddev)483473{484484- if (test_and_clear_bit(PLUGGED_FLAG, &plug->unplug_flag)) {485485- del_timer(&plug->unplug_timer);486486- return 1;487487- } else474474+ struct blk_plug *plug = current->plug;475475+ struct md_plug_cb *mdcb;476476+477477+ if (!plug)488478 return 0;489489-}490490-EXPORT_SYMBOL_GPL(plugger_remove_plug);491479480480+ list_for_each_entry(mdcb, &plug->cb_list, cb.list) {481481+ if (mdcb->cb.callback == plugger_unplug &&482482+ mdcb->mddev == mddev) {483483+ /* Already on the list, move to top */484484+ if (mdcb != list_first_entry(&plug->cb_list,485485+ struct md_plug_cb,486486+ cb.list))487487+ list_move(&mdcb->cb.list, &plug->cb_list);488488+ return 1;489489+ }490490+ }491491+ /* Not currently on the callback list */492492+ mdcb = kmalloc(sizeof(*mdcb), GFP_ATOMIC);493493+ if (!mdcb)494494+ return 0;495495+496496+ mdcb->mddev = mddev;497497+ mdcb->cb.callback = plugger_unplug;498498+ atomic_inc(&mddev->plug_cnt);499499+ list_add(&mdcb->cb.list, &plug->cb_list);500500+ return 1;501501+}502502+EXPORT_SYMBOL_GPL(mddev_check_plugged);492503493504static inline mddev_t *mddev_get(mddev_t *mddev)494505{···549538 atomic_set(&mddev->active, 1);550539 atomic_set(&mddev->openers, 0);551540 atomic_set(&mddev->active_io, 0);541541+ atomic_set(&mddev->plug_cnt, 0);552542 spin_lock_init(&mddev->write_lock);553543 atomic_set(&mddev->flush_pending, 0);554544 init_waitqueue_head(&mddev->sb_wait);···47354723 mddev->bitmap_info.chunksize = 0;47364724 mddev->bitmap_info.daemon_sleep = 0;47374725 mddev->bitmap_info.max_write_behind = 0;47384738- mddev->plug = NULL;47394726}4740472747414728static void __md_stop_writes(mddev_t *mddev)···66986687 return 0;66996688}67006689EXPORT_SYMBOL_GPL(md_allow_write);67016701-67026702-void md_unplug(mddev_t *mddev)67036703-{67046704- if (mddev->plug)67056705- 
mddev->plug->unplug_fn(mddev->plug);67066706-}6707669067086691#define SYNC_MARKS 1067096692#define SYNC_MARK_STEP (3*HZ)
+4-22
drivers/md/md.h
···2929typedef struct mddev_s mddev_t;3030typedef struct mdk_rdev_s mdk_rdev_t;31313232-/* generic plugging support - like that provided with request_queue,3333- * but does not require a request_queue3434- */3535-struct plug_handle {3636- void (*unplug_fn)(struct plug_handle *);3737- struct timer_list unplug_timer;3838- struct work_struct unplug_work;3939- unsigned long unplug_flag;4040-};4141-#define PLUGGED_FLAG 14242-void plugger_init(struct plug_handle *plug,4343- void (*unplug_fn)(struct plug_handle *));4444-void plugger_set_plug(struct plug_handle *plug);4545-int plugger_remove_plug(struct plug_handle *plug);4646-static inline void plugger_flush(struct plug_handle *plug)4747-{4848- del_timer_sync(&plug->unplug_timer);4949- cancel_work_sync(&plug->unplug_work);5050-}5151-5232/*5333 * MD's 'extended' device5434 */···179199 int delta_disks, new_level, new_layout;180200 int new_chunk_sectors;181201202202+ atomic_t plug_cnt; /* If device is expecting203203+ * more bios soon.204204+ */182205 struct mdk_thread_s *thread; /* management thread */183206 struct mdk_thread_s *sync_thread; /* doing resync or reconstruct */184207 sector_t curr_resync; /* last block scheduled */···319336 struct list_head all_mddevs;320337321338 struct attribute_group *to_remove;322322- struct plug_handle *plug; /* if used by personality */323339324340 struct bio_set *bio_set;325341···498516extern void md_integrity_add_rdev(mdk_rdev_t *rdev, mddev_t *mddev);499517extern int strict_strtoul_scaled(const char *cp, unsigned long *res, int scale);500518extern void restore_bitmap_write_access(struct file *file);501501-extern void md_unplug(mddev_t *mddev);502519503520extern void mddev_init(mddev_t *mddev);504521extern int md_run(mddev_t *mddev);···511530 mddev_t *mddev);512531extern struct bio *bio_alloc_mddev(gfp_t gfp_mask, int nr_iovecs,513532 mddev_t *mddev);533533+extern int mddev_check_plugged(mddev_t *mddev);514534#endif /* _MD_MD_H */
+14-15
drivers/md/raid1.c
···565565 spin_unlock_irq(&conf->device_lock);566566}567567568568-static void md_kick_device(mddev_t *mddev)569569-{570570- blk_flush_plug(current);571571- md_wakeup_thread(mddev->thread);572572-}573573-574568/* Barriers....575569 * Sometimes we need to suspend IO while we do something else,576570 * either some resync/recovery, or reconfigure the array.···594600595601 /* Wait until no block IO is waiting */596602 wait_event_lock_irq(conf->wait_barrier, !conf->nr_waiting,597597- conf->resync_lock, md_kick_device(conf->mddev));603603+ conf->resync_lock, );598604599605 /* block any new IO from starting */600606 conf->barrier++;···602608 /* Now wait for all pending IO to complete */603609 wait_event_lock_irq(conf->wait_barrier,604610 !conf->nr_pending && conf->barrier < RESYNC_DEPTH,605605- conf->resync_lock, md_kick_device(conf->mddev));611611+ conf->resync_lock, );606612607613 spin_unlock_irq(&conf->resync_lock);608614}···624630 conf->nr_waiting++;625631 wait_event_lock_irq(conf->wait_barrier, !conf->barrier,626632 conf->resync_lock,627627- md_kick_device(conf->mddev));633633+ );628634 conf->nr_waiting--;629635 }630636 conf->nr_pending++;···660666 wait_event_lock_irq(conf->wait_barrier,661667 conf->nr_pending == conf->nr_queued+1,662668 conf->resync_lock,663663- ({ flush_pending_writes(conf);664664- md_kick_device(conf->mddev); }));669669+ flush_pending_writes(conf));665670 spin_unlock_irq(&conf->resync_lock);666671}667672static void unfreeze_array(conf_t *conf)···722729 const unsigned long do_sync = (bio->bi_rw & REQ_SYNC);723730 const unsigned long do_flush_fua = (bio->bi_rw & (REQ_FLUSH | REQ_FUA));724731 mdk_rdev_t *blocked_rdev;732732+ int plugged;725733726734 /*727735 * Register the new request and wait if the reconstruction···814820 * inc refcount on their rdev. 
Record them by setting815821 * bios[x] to bio816822 */823823+ plugged = mddev_check_plugged(mddev);824824+817825 disks = conf->raid_disks;818826 retry_write:819827 blocked_rdev = NULL;···921925 /* In case raid1d snuck in to freeze_array */922926 wake_up(&conf->wait_barrier);923927924924- if (do_sync || !bitmap)928928+ if (do_sync || !bitmap || !plugged)925929 md_wakeup_thread(mddev->thread);926930927931 return 0;···15121516 conf_t *conf = mddev->private;15131517 struct list_head *head = &conf->retry_list;15141518 mdk_rdev_t *rdev;15191519+ struct blk_plug plug;1515152015161521 md_check_recovery(mddev);15171517-15221522+15231523+ blk_start_plug(&plug);15181524 for (;;) {15191525 char b[BDEVNAME_SIZE];1520152615211521- flush_pending_writes(conf);15271527+ if (atomic_read(&mddev->plug_cnt) == 0)15281528+ flush_pending_writes(conf);1522152915231530 spin_lock_irqsave(&conf->device_lock, flags);15241531 if (list_empty(head)) {···15921593 }15931594 cond_resched();15941595 }15961596+ blk_finish_plug(&plug);15951597}1596159815971599···2039203920402040 md_unregister_thread(mddev->thread);20412041 mddev->thread = NULL;20422042- blk_sync_queue(mddev->queue); /* the unplug fn references 'conf'*/20432042 if (conf->r1bio_pool)20442043 mempool_destroy(conf->r1bio_pool);20452044 kfree(conf->mirrors);
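The `raise_barrier`/`wait_barrier` call sites patched above coordinate resync with normal IO through counters guarded by `resync_lock`. A rough userspace analogue using a pthread mutex and condition variable (illustrative names only; the `RESYNC_DEPTH` and `nr_waiting` refinements are omitted):

```c
#include <assert.h>
#include <pthread.h>

/* Illustrative model of md's resync barrier: raise_barrier() excludes
 * new IO and drains what is in flight; wait_barrier()/allow_barrier()
 * bracket each normal request. */
struct barrier_model {
	pthread_mutex_t lock;
	pthread_cond_t  wait;
	int barrier;     /* resync in progress when > 0 */
	int nr_pending;  /* in-flight normal IO */
};

static void wait_barrier(struct barrier_model *b)
{
	pthread_mutex_lock(&b->lock);
	while (b->barrier)                 /* block new IO during resync */
		pthread_cond_wait(&b->wait, &b->lock);
	b->nr_pending++;
	pthread_mutex_unlock(&b->lock);
}

static void allow_barrier(struct barrier_model *b)
{
	pthread_mutex_lock(&b->lock);
	b->nr_pending--;
	pthread_cond_broadcast(&b->wait);
	pthread_mutex_unlock(&b->lock);
}

static void raise_barrier(struct barrier_model *b)
{
	pthread_mutex_lock(&b->lock);
	b->barrier++;
	while (b->nr_pending)              /* drain in-flight IO */
		pthread_cond_wait(&b->wait, &b->lock);
	pthread_mutex_unlock(&b->lock);
}

static void lower_barrier(struct barrier_model *b)
{
	pthread_mutex_lock(&b->lock);
	b->barrier--;
	pthread_cond_broadcast(&b->wait);
	pthread_mutex_unlock(&b->lock);
}
```

In the kernel the equivalent waits go through `wait_event_lock_irq()`, whose third argument (left empty by this patch now that the old kick callbacks are gone) is code run while the lock is dropped.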
+13-14
drivers/md/raid10.c
···634634 spin_unlock_irq(&conf->device_lock);635635}636636637637-static void md_kick_device(mddev_t *mddev)638638-{639639- blk_flush_plug(current);640640- md_wakeup_thread(mddev->thread);641641-}642642-643637/* Barriers....644638 * Sometimes we need to suspend IO while we do something else,645639 * either some resync/recovery, or reconfigure the array.···663669664670 /* Wait until no block IO is waiting (unless 'force') */665671 wait_event_lock_irq(conf->wait_barrier, force || !conf->nr_waiting,666666- conf->resync_lock, md_kick_device(conf->mddev));672672+ conf->resync_lock, );667673668674 /* block any new IO from starting */669675 conf->barrier++;670676671671- /* No wait for all pending IO to complete */677677+ /* Now wait for all pending IO to complete */672678 wait_event_lock_irq(conf->wait_barrier,673679 !conf->nr_pending && conf->barrier < RESYNC_DEPTH,674674- conf->resync_lock, md_kick_device(conf->mddev));680680+ conf->resync_lock, );675681676682 spin_unlock_irq(&conf->resync_lock);677683}···692698 conf->nr_waiting++;693699 wait_event_lock_irq(conf->wait_barrier, !conf->barrier,694700 conf->resync_lock,695695- md_kick_device(conf->mddev));701701+ );696702 conf->nr_waiting--;697703 }698704 conf->nr_pending++;···728734 wait_event_lock_irq(conf->wait_barrier,729735 conf->nr_pending == conf->nr_queued+1,730736 conf->resync_lock,731731- ({ flush_pending_writes(conf);732732- md_kick_device(conf->mddev); }));737737+ flush_pending_writes(conf));738738+733739 spin_unlock_irq(&conf->resync_lock);734740}735741···756762 const unsigned long do_fua = (bio->bi_rw & REQ_FUA);757763 unsigned long flags;758764 mdk_rdev_t *blocked_rdev;765765+ int plugged;759766760767 if (unlikely(bio->bi_rw & REQ_FLUSH)) {761768 md_flush_request(mddev, bio);···865870 * inc refcount on their rdev. 
Record them by setting866871 * bios[x] to bio867872 */873873+ plugged = mddev_check_plugged(mddev);874874+868875 raid10_find_phys(conf, r10_bio);869876 retry_write:870877 blocked_rdev = NULL;···943946 /* In case raid10d snuck in to freeze_array */944947 wake_up(&conf->wait_barrier);945948946946- if (do_sync || !mddev->bitmap)949949+ if (do_sync || !mddev->bitmap || !plugged)947950 md_wakeup_thread(mddev->thread);948948-949951 return 0;950952}951953···16361640 conf_t *conf = mddev->private;16371641 struct list_head *head = &conf->retry_list;16381642 mdk_rdev_t *rdev;16431643+ struct blk_plug plug;1639164416401645 md_check_recovery(mddev);1641164616471647+ blk_start_plug(&plug);16421648 for (;;) {16431649 char b[BDEVNAME_SIZE];16441650···17141716 }17151717 cond_resched();17161718 }17191719+ blk_finish_plug(&plug);17171720}1718172117191722
+26-35
drivers/md/raid5.c
···2727 *2828 * We group bitmap updates into batches. Each batch has a number.2929 * We may write out several batches at once, but that isn't very important.3030- * conf->bm_write is the number of the last batch successfully written.3131- * conf->bm_flush is the number of the last batch that was closed to3030+ * conf->seq_write is the number of the last batch successfully written.3131+ * conf->seq_flush is the number of the last batch that was closed to3232 * new additions.3333 * When we discover that we will need to write to any block in a stripe3434 * (in add_stripe_bio) we update the in-memory bitmap and record in sh->bm_seq3535- * the number of the batch it will be in. This is bm_flush+1.3535+ * the number of the batch it will be in. This is seq_flush+1.3636 * When we are ready to do a write, if that batch hasn't been written yet,3737 * we plug the array and queue the stripe for later.3838 * When an unplug happens, we increment bm_flush, thus closing the current···199199 BUG_ON(!list_empty(&sh->lru));200200 BUG_ON(atomic_read(&conf->active_stripes)==0);201201 if (test_bit(STRIPE_HANDLE, &sh->state)) {202202- if (test_bit(STRIPE_DELAYED, &sh->state)) {202202+ if (test_bit(STRIPE_DELAYED, &sh->state))203203 list_add_tail(&sh->lru, &conf->delayed_list);204204- plugger_set_plug(&conf->plug);205205- } else if (test_bit(STRIPE_BIT_DELAY, &sh->state) &&206206- sh->bm_seq - conf->seq_write > 0) {204204+ else if (test_bit(STRIPE_BIT_DELAY, &sh->state) &&205205+ sh->bm_seq - conf->seq_write > 0)207206 list_add_tail(&sh->lru, &conf->bitmap_list);208208- plugger_set_plug(&conf->plug);209209- } else {207207+ else {210208 clear_bit(STRIPE_BIT_DELAY, &sh->state);211209 list_add_tail(&sh->lru, &conf->handle_list);212210 }···459461 < (conf->max_nr_stripes *3/4)460462 || !conf->inactive_blocked),461463 conf->device_lock,462462- md_raid5_kick_device(conf));464464+ );463465 conf->inactive_blocked = 0;464466 } else465467 init_stripe(sh, sector, previous);···14681470 
wait_event_lock_irq(conf->wait_for_stripe,14691471 !list_empty(&conf->inactive_list),14701472 conf->device_lock,14711471- blk_flush_plug(current));14731473+ );14721474 osh = get_free_stripe(conf);14731475 spin_unlock_irq(&conf->device_lock);14741476 atomic_set(&nsh->count, 1);···36213623 atomic_inc(&conf->preread_active_stripes);36223624 list_add_tail(&sh->lru, &conf->hold_list);36233625 }36243624- } else36253625- plugger_set_plug(&conf->plug);36263626+ }36263627}3627362836283629static void activate_bit_delay(raid5_conf_t *conf)···36363639 atomic_inc(&sh->count);36373640 __release_stripe(conf, sh);36383641 }36393639-}36403640-36413641-void md_raid5_kick_device(raid5_conf_t *conf)36423642-{36433643- blk_flush_plug(current);36443644- raid5_activate_delayed(conf);36453645- md_wakeup_thread(conf->mddev->thread);36463646-}36473647-EXPORT_SYMBOL_GPL(md_raid5_kick_device);36483648-36493649-static void raid5_unplug(struct plug_handle *plug)36503650-{36513651- raid5_conf_t *conf = container_of(plug, raid5_conf_t, plug);36523652-36533653- md_raid5_kick_device(conf);36543642}3655364336563644int md_raid5_congested(mddev_t *mddev, int bits)···39273945 struct stripe_head *sh;39283946 const int rw = bio_data_dir(bi);39293947 int remaining;39483948+ int plugged;3930394939313950 if (unlikely(bi->bi_rw & REQ_FLUSH)) {39323951 md_flush_request(mddev, bi);···39463963 bi->bi_next = NULL;39473964 bi->bi_phys_segments = 1; /* over-loaded to count active stripes */3948396539663966+ plugged = mddev_check_plugged(mddev);39493967 for (;logical_sector < last_sector; logical_sector += STRIPE_SECTORS) {39503968 DEFINE_WAIT(w);39513969 int disks, data_disks;···40414057 * add failed due to overlap. 
Flush everything40424058 * and wait a while40434059 */40444044- md_raid5_kick_device(conf);40604060+ md_wakeup_thread(mddev->thread);40454061 release_stripe(sh);40464062 schedule();40474063 goto retry;···40614077 }4062407840634079 }40804080+ if (!plugged)40814081+ md_wakeup_thread(mddev->thread);40824082+40644083 spin_lock_irq(&conf->device_lock);40654084 remaining = raid5_dec_bi_phys_segments(bi);40664085 spin_unlock_irq(&conf->device_lock);···44654478 struct stripe_head *sh;44664479 raid5_conf_t *conf = mddev->private;44674480 int handled;44814481+ struct blk_plug plug;4468448244694483 pr_debug("+++ raid5d active\n");4470448444714485 md_check_recovery(mddev);4472448644874487+ blk_start_plug(&plug);44734488 handled = 0;44744489 spin_lock_irq(&conf->device_lock);44754490 while (1) {44764491 struct bio *bio;4477449244784478- if (conf->seq_flush != conf->seq_write) {44794479- int seq = conf->seq_flush;44934493+ if (atomic_read(&mddev->plug_cnt) == 0 &&44944494+ !list_empty(&conf->bitmap_list)) {44954495+ /* Now is a good time to flush some bitmap updates */44964496+ conf->seq_flush++;44804497 spin_unlock_irq(&conf->device_lock);44814498 bitmap_unplug(mddev->bitmap);44824499 spin_lock_irq(&conf->device_lock);44834483- conf->seq_write = seq;45004500+ conf->seq_write = conf->seq_flush;44844501 activate_bit_delay(conf);44854502 }45034503+ if (atomic_read(&mddev->plug_cnt) == 0)45044504+ raid5_activate_delayed(conf);4486450544874506 while ((bio = remove_bio_from_retry(conf))) {44884507 int ok;···45184525 spin_unlock_irq(&conf->device_lock);4519452645204527 async_tx_issue_pending_all();45284528+ blk_finish_plug(&plug);4521452945224530 pr_debug("--- raid5d inactive\n");45234531}···51355141 mdname(mddev));51365142 md_set_array_sectors(mddev, raid5_size(mddev, 0, 0));5137514351385138- plugger_init(&conf->plug, raid5_unplug);51395139- mddev->plug = &conf->plug;51405144 if (mddev->queue) {51415145 int chunk_size;51425146 /* read-ahead size must cover two whole stripes, 
which···51845192 mddev->thread = NULL;51855193 if (mddev->queue)51865194 mddev->queue->backing_dev_info.congested_fn = NULL;51875187- plugger_flush(&conf->plug); /* the unplug fn references 'conf'*/51885195 free_conf(conf);51895196 mddev->private = NULL;51905197 mddev->to_remove = &raid5_attrs_group;
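The raid5.c hunks rename `bm_write`/`bm_flush` to `seq_write`/`seq_flush` and move batch closing into `raid5d`. The batching rule itself is unchanged and can be sketched in plain C (illustrative names; the real code additionally gates the flush on `plug_cnt`):

```c
#include <assert.h>

/* Illustrative model of raid5's bitmap write batching:
 *   seq_flush = last batch closed to new additions,
 *   seq_write = last batch safely written out.
 * A stripe dirtied now joins batch seq_flush + 1 and may only proceed
 * once that batch number has been written (reached seq_write). */
struct batch_model {
	int seq_flush;
	int seq_write;
};

/* A newly dirtied stripe records the currently open batch number. */
static int tag_stripe(const struct batch_model *b)
{
	return b->seq_flush + 1;
}

/* Daemon: close the open batch, write the bitmap, then publish it. */
static void flush_batch(struct batch_model *b)
{
	b->seq_flush++;
	/* ... bitmap_unplug() would write the bitmap here ... */
	b->seq_write = b->seq_flush;
}

/* A stripe may be written once its batch has been committed. */
static int stripe_ready(const struct batch_model *b, int bm_seq)
{
	return bm_seq - b->seq_write <= 0;
}
```

The subtraction in `stripe_ready()` mirrors the kernel's `sh->bm_seq - conf->seq_write > 0` test, which stays correct across counter wraparound.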
-2
drivers/md/raid5.h
@@ -400,8 +400,6 @@
 	 * Cleared when a sync completes.
 	 */

-	struct plug_handle	plug;
-
 	/* per cpu variables */
 	struct raid5_percpu {
 		struct page	*spare_page; /* Used when checking P/Q in raid6 */
+14-2
drivers/mfd/mfd-core.c
···5555}5656EXPORT_SYMBOL(mfd_cell_disable);57575858+static int mfd_platform_add_cell(struct platform_device *pdev,5959+ const struct mfd_cell *cell)6060+{6161+ if (!cell)6262+ return 0;6363+6464+ pdev->mfd_cell = kmemdup(cell, sizeof(*cell), GFP_KERNEL);6565+ if (!pdev->mfd_cell)6666+ return -ENOMEM;6767+6868+ return 0;6969+}7070+5871static int mfd_add_device(struct device *parent, int id,5972 const struct mfd_cell *cell,6073 struct resource *mem_base,···88758976 pdev->dev.parent = parent;90779191- ret = platform_device_add_data(pdev, cell, sizeof(*cell));7878+ ret = mfd_platform_add_cell(pdev, cell);9279 if (ret)9380 goto fail_res;9481···136123137124 return 0;138125139139-/* platform_device_del(pdev); */140126fail_res:141127 kfree(res);142128fail_device:
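The new `mfd_platform_add_cell()` above gives each platform device its own `kmemdup()`'d copy of the cell rather than pushing it through `platform_device_add_data()`, so each device owns independent cell data. The same duplication idiom in userspace looks like this (hypothetical struct; `kmemdup` is modeled with `malloc` + `memcpy`):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical flat stand-in for struct mfd_cell. */
struct cell {
	const char *name;
	int id;
};

/* Userspace model of kmemdup(): allocate and copy a flat object. */
static void *memdup_model(const void *src, size_t len)
{
	void *p = malloc(len);
	if (p)
		memcpy(p, src, len);
	return p;
}

/* Each "device" gets a private copy it can modify and free without
 * touching the caller's (often static, shared) cell template. */
static struct cell *add_cell(const struct cell *tmpl)
{
	return memdup_model(tmpl, sizeof(*tmpl));
}
```

Duplicating per device is what makes it safe for a driver to free or tweak its cell without affecting siblings created from the same template array.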
@@ -187,7 +187,8 @@
 	depends on ACPI
 	depends on BACKLIGHT_CLASS_DEVICE
 	depends on RFKILL
-	depends on SERIO_I8042
+	depends on INPUT && SERIO_I8042
+	select INPUT_SPARSEKMAP
 	---help---
 	  This is a driver for laptops built by MSI (MICRO-STAR
 	  INTERNATIONAL):
@@ -336,7 +336,6 @@

 	/* do not clear AIE here, it may be needed for wake */

-	s3c_rtc_setpie(dev, 0);
 	free_irq(s3c_rtc_alarmno, rtc_dev);
 	free_irq(s3c_rtc_tickno, rtc_dev);
 }
@@ -407,7 +408,6 @@
 	platform_set_drvdata(dev, NULL);
 	rtc_device_unregister(rtc);

-	s3c_rtc_setpie(&dev->dev, 0);
 	s3c_rtc_setaie(&dev->dev, 0);

 	clk_disable(rtc_clk);
+1-1
drivers/scsi/scsi_lib.c
@@ -443,7 +443,7 @@
 		       &sdev->request_queue->queue_flags);
 	if (flagset)
 		queue_flag_set(QUEUE_FLAG_REENTER, sdev->request_queue);
-	__blk_run_queue(sdev->request_queue, false);
+	__blk_run_queue(sdev->request_queue);
 	if (flagset)
 		queue_flag_clear(QUEUE_FLAG_REENTER, sdev->request_queue);
 	spin_unlock(sdev->request_queue->queue_lock);
+1-1
drivers/scsi/scsi_transport_fc.c
@@ -3829,7 +3829,7 @@
 		!test_bit(QUEUE_FLAG_REENTER, &rport->rqst_q->queue_flags);
 	if (flagset)
 		queue_flag_set(QUEUE_FLAG_REENTER, rport->rqst_q);
-	__blk_run_queue(rport->rqst_q, false);
+	__blk_run_queue(rport->rqst_q);
 	if (flagset)
 		queue_flag_clear(QUEUE_FLAG_REENTER, rport->rqst_q);
 	spin_unlock_irqrestore(rport->rqst_q->queue_lock, flags);
@@ -1,10 +0,0 @@
-config SAMSUNG_LAPTOP
-	tristate "Samsung Laptop driver"
-	default n
-	depends on RFKILL && BACKLIGHT_CLASS_DEVICE && X86
-	help
-	  This module implements a driver for the N128 Samsung Laptop
-	  providing control over the Wireless LED and the LCD backlight
-
-	  To compile this driver as a module, choose
-	  M here: the module will be called samsung-laptop.
@@ -1,5 +0,0 @@
-TODO:
-	- review from other developers
-	- figure out ACPI video issues
-
-Please send patches to Greg Kroah-Hartman <gregkh@suse.de>
-843
drivers/staging/samsung-laptop/samsung-laptop.c
···11-/*22- * Samsung Laptop driver33- *44- * Copyright (C) 2009,2011 Greg Kroah-Hartman (gregkh@suse.de)55- * Copyright (C) 2009,2011 Novell Inc.66- *77- * This program is free software; you can redistribute it and/or modify it88- * under the terms of the GNU General Public License version 2 as published by99- * the Free Software Foundation.1010- *1111- */1212-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt1313-1414-#include <linux/kernel.h>1515-#include <linux/init.h>1616-#include <linux/module.h>1717-#include <linux/delay.h>1818-#include <linux/pci.h>1919-#include <linux/backlight.h>2020-#include <linux/fb.h>2121-#include <linux/dmi.h>2222-#include <linux/platform_device.h>2323-#include <linux/rfkill.h>2424-2525-/*2626- * This driver is needed because a number of Samsung laptops do not hook2727- * their control settings through ACPI. So we have to poke around in the2828- * BIOS to do things like brightness values, and "special" key controls.2929- */3030-3131-/*3232- * We have 0 - 8 as valid brightness levels. 
The specs say that level 0 should3333- * be reserved by the BIOS (which really doesn't make much sense), we tell3434- * userspace that the value is 0 - 7 and then just tell the hardware 1 - 83535- */3636-#define MAX_BRIGHT 0x073737-3838-3939-#define SABI_IFACE_MAIN 0x004040-#define SABI_IFACE_SUB 0x024141-#define SABI_IFACE_COMPLETE 0x044242-#define SABI_IFACE_DATA 0x054343-4444-/* Structure to get data back to the calling function */4545-struct sabi_retval {4646- u8 retval[20];4747-};4848-4949-struct sabi_header_offsets {5050- u8 port;5151- u8 re_mem;5252- u8 iface_func;5353- u8 en_mem;5454- u8 data_offset;5555- u8 data_segment;5656-};5757-5858-struct sabi_commands {5959- /*6060- * Brightness is 0 - 8, as described above.6161- * Value 0 is for the BIOS to use6262- */6363- u8 get_brightness;6464- u8 set_brightness;6565-6666- /*6767- * first byte:6868- * 0x00 - wireless is off6969- * 0x01 - wireless is on7070- * second byte:7171- * 0x02 - 3G is off7272- * 0x03 - 3G is on7373- * TODO, verify 3G is correct, that doesn't seem right...7474- */7575- u8 get_wireless_button;7676- u8 set_wireless_button;7777-7878- /* 0 is off, 1 is on */7979- u8 get_backlight;8080- u8 set_backlight;8181-8282- /*8383- * 0x80 or 0x00 - no action8484- * 0x81 - recovery key pressed8585- */8686- u8 get_recovery_mode;8787- u8 set_recovery_mode;8888-8989- /*9090- * on seclinux: 0 is low, 1 is high,9191- * on swsmi: 0 is normal, 1 is silent, 2 is turbo9292- */9393- u8 get_performance_level;9494- u8 set_performance_level;9595-9696- /*9797- * Tell the BIOS that Linux is running on this machine.9898- * 81 is on, 80 is off9999- */100100- u8 set_linux;101101-};102102-103103-struct sabi_performance_level {104104- const char *name;105105- u8 value;106106-};107107-108108-struct sabi_config {109109- const char *test_string;110110- u16 main_function;111111- const struct sabi_header_offsets header_offsets;112112- const struct sabi_commands commands;113113- const struct sabi_performance_level 
performance_levels[4];114114- u8 min_brightness;115115- u8 max_brightness;116116-};117117-118118-static const struct sabi_config sabi_configs[] = {119119- {120120- .test_string = "SECLINUX",121121-122122- .main_function = 0x4c49,123123-124124- .header_offsets = {125125- .port = 0x00,126126- .re_mem = 0x02,127127- .iface_func = 0x03,128128- .en_mem = 0x04,129129- .data_offset = 0x05,130130- .data_segment = 0x07,131131- },132132-133133- .commands = {134134- .get_brightness = 0x00,135135- .set_brightness = 0x01,136136-137137- .get_wireless_button = 0x02,138138- .set_wireless_button = 0x03,139139-140140- .get_backlight = 0x04,141141- .set_backlight = 0x05,142142-143143- .get_recovery_mode = 0x06,144144- .set_recovery_mode = 0x07,145145-146146- .get_performance_level = 0x08,147147- .set_performance_level = 0x09,148148-149149- .set_linux = 0x0a,150150- },151151-152152- .performance_levels = {153153- {154154- .name = "silent",155155- .value = 0,156156- },157157- {158158- .name = "normal",159159- .value = 1,160160- },161161- { },162162- },163163- .min_brightness = 1,164164- .max_brightness = 8,165165- },166166- {167167- .test_string = "SwSmi@",168168-169169- .main_function = 0x5843,170170-171171- .header_offsets = {172172- .port = 0x00,173173- .re_mem = 0x04,174174- .iface_func = 0x02,175175- .en_mem = 0x03,176176- .data_offset = 0x05,177177- .data_segment = 0x07,178178- },179179-180180- .commands = {181181- .get_brightness = 0x10,182182- .set_brightness = 0x11,183183-184184- .get_wireless_button = 0x12,185185- .set_wireless_button = 0x13,186186-187187- .get_backlight = 0x2d,188188- .set_backlight = 0x2e,189189-190190- .get_recovery_mode = 0xff,191191- .set_recovery_mode = 0xff,192192-193193- .get_performance_level = 0x31,194194- .set_performance_level = 0x32,195195-196196- .set_linux = 0xff,197197- },198198-199199- .performance_levels = {200200- {201201- .name = "normal",202202- .value = 0,203203- },204204- {205205- .name = "silent",206206- .value = 1,207207- },208208- 
{209209- .name = "overclock",210210- .value = 2,211211- },212212- { },213213- },214214- .min_brightness = 0,215215- .max_brightness = 8,216216- },217217- { },218218-};219219-220220-static const struct sabi_config *sabi_config;221221-222222-static void __iomem *sabi;223223-static void __iomem *sabi_iface;224224-static void __iomem *f0000_segment;225225-static struct backlight_device *backlight_device;226226-static struct mutex sabi_mutex;227227-static struct platform_device *sdev;228228-static struct rfkill *rfk;229229-230230-static int force;231231-module_param(force, bool, 0);232232-MODULE_PARM_DESC(force,233233- "Disable the DMI check and forces the driver to be loaded");234234-235235-static int debug;236236-module_param(debug, bool, S_IRUGO | S_IWUSR);237237-MODULE_PARM_DESC(debug, "Debug enabled or not");238238-239239-static int sabi_get_command(u8 command, struct sabi_retval *sretval)240240-{241241- int retval = 0;242242- u16 port = readw(sabi + sabi_config->header_offsets.port);243243- u8 complete, iface_data;244244-245245- mutex_lock(&sabi_mutex);246246-247247- /* enable memory to be able to write to it */248248- outb(readb(sabi + sabi_config->header_offsets.en_mem), port);249249-250250- /* write out the command */251251- writew(sabi_config->main_function, sabi_iface + SABI_IFACE_MAIN);252252- writew(command, sabi_iface + SABI_IFACE_SUB);253253- writeb(0, sabi_iface + SABI_IFACE_COMPLETE);254254- outb(readb(sabi + sabi_config->header_offsets.iface_func), port);255255-256256- /* write protect memory to make it safe */257257- outb(readb(sabi + sabi_config->header_offsets.re_mem), port);258258-259259- /* see if the command actually succeeded */260260- complete = readb(sabi_iface + SABI_IFACE_COMPLETE);261261- iface_data = readb(sabi_iface + SABI_IFACE_DATA);262262- if (complete != 0xaa || iface_data == 0xff) {263263- pr_warn("SABI get command 0x%02x failed with completion flag 0x%02x and data 0x%02x\n",264264- command, complete, iface_data);265265- retval = 
-EINVAL;266266- goto exit;267267- }268268- /*269269- * Save off the data into a structure so the caller use it.270270- * Right now we only want the first 4 bytes,271271- * There are commands that need more, but not for the ones we272272- * currently care about.273273- */274274- sretval->retval[0] = readb(sabi_iface + SABI_IFACE_DATA);275275- sretval->retval[1] = readb(sabi_iface + SABI_IFACE_DATA + 1);276276- sretval->retval[2] = readb(sabi_iface + SABI_IFACE_DATA + 2);277277- sretval->retval[3] = readb(sabi_iface + SABI_IFACE_DATA + 3);278278-279279-exit:280280- mutex_unlock(&sabi_mutex);281281- return retval;282282-283283-}284284-285285-static int sabi_set_command(u8 command, u8 data)286286-{287287- int retval = 0;288288- u16 port = readw(sabi + sabi_config->header_offsets.port);289289- u8 complete, iface_data;290290-291291- mutex_lock(&sabi_mutex);292292-293293- /* enable memory to be able to write to it */294294- outb(readb(sabi + sabi_config->header_offsets.en_mem), port);295295-296296- /* write out the command */297297- writew(sabi_config->main_function, sabi_iface + SABI_IFACE_MAIN);298298- writew(command, sabi_iface + SABI_IFACE_SUB);299299- writeb(0, sabi_iface + SABI_IFACE_COMPLETE);300300- writeb(data, sabi_iface + SABI_IFACE_DATA);301301- outb(readb(sabi + sabi_config->header_offsets.iface_func), port);302302-303303- /* write protect memory to make it safe */304304- outb(readb(sabi + sabi_config->header_offsets.re_mem), port);305305-306306- /* see if the command actually succeeded */307307- complete = readb(sabi_iface + SABI_IFACE_COMPLETE);308308- iface_data = readb(sabi_iface + SABI_IFACE_DATA);309309- if (complete != 0xaa || iface_data == 0xff) {310310- pr_warn("SABI set command 0x%02x failed with completion flag 0x%02x and data 0x%02x\n",311311- command, complete, iface_data);312312- retval = -EINVAL;313313- }314314-315315- mutex_unlock(&sabi_mutex);316316- return retval;317317-}318318-319319-static void test_backlight(void)320320-{321321- struct 
sabi_retval sretval;322322-323323- sabi_get_command(sabi_config->commands.get_backlight, &sretval);324324- printk(KERN_DEBUG "backlight = 0x%02x\n", sretval.retval[0]);325325-326326- sabi_set_command(sabi_config->commands.set_backlight, 0);327327- printk(KERN_DEBUG "backlight should be off\n");328328-329329- sabi_get_command(sabi_config->commands.get_backlight, &sretval);330330- printk(KERN_DEBUG "backlight = 0x%02x\n", sretval.retval[0]);331331-332332- msleep(1000);333333-334334- sabi_set_command(sabi_config->commands.set_backlight, 1);335335- printk(KERN_DEBUG "backlight should be on\n");336336-337337- sabi_get_command(sabi_config->commands.get_backlight, &sretval);338338- printk(KERN_DEBUG "backlight = 0x%02x\n", sretval.retval[0]);339339-}340340-341341-static void test_wireless(void)342342-{343343- struct sabi_retval sretval;344344-345345- sabi_get_command(sabi_config->commands.get_wireless_button, &sretval);346346- printk(KERN_DEBUG "wireless led = 0x%02x\n", sretval.retval[0]);347347-348348- sabi_set_command(sabi_config->commands.set_wireless_button, 0);349349- printk(KERN_DEBUG "wireless led should be off\n");350350-351351- sabi_get_command(sabi_config->commands.get_wireless_button, &sretval);352352- printk(KERN_DEBUG "wireless led = 0x%02x\n", sretval.retval[0]);353353-354354- msleep(1000);355355-356356- sabi_set_command(sabi_config->commands.set_wireless_button, 1);357357- printk(KERN_DEBUG "wireless led should be on\n");358358-359359- sabi_get_command(sabi_config->commands.get_wireless_button, &sretval);360360- printk(KERN_DEBUG "wireless led = 0x%02x\n", sretval.retval[0]);361361-}362362-363363-static u8 read_brightness(void)364364-{365365- struct sabi_retval sretval;366366- int user_brightness = 0;367367- int retval;368368-369369- retval = sabi_get_command(sabi_config->commands.get_brightness,370370- &sretval);371371- if (!retval) {372372- user_brightness = sretval.retval[0];373373- if (user_brightness != 0)374374- user_brightness -= 
sabi_config->min_brightness;375375- }376376- return user_brightness;377377-}378378-379379-static void set_brightness(u8 user_brightness)380380-{381381- u8 user_level = user_brightness - sabi_config->min_brightness;382382-383383- sabi_set_command(sabi_config->commands.set_brightness, user_level);384384-}385385-386386-static int get_brightness(struct backlight_device *bd)387387-{388388- return (int)read_brightness();389389-}390390-391391-static int update_status(struct backlight_device *bd)392392-{393393- set_brightness(bd->props.brightness);394394-395395- if (bd->props.power == FB_BLANK_UNBLANK)396396- sabi_set_command(sabi_config->commands.set_backlight, 1);397397- else398398- sabi_set_command(sabi_config->commands.set_backlight, 0);399399- return 0;400400-}401401-402402-static const struct backlight_ops backlight_ops = {403403- .get_brightness = get_brightness,404404- .update_status = update_status,405405-};406406-407407-static int rfkill_set(void *data, bool blocked)408408-{409409- /* Do something with blocked...*/410410- /*411411- * blocked == false is on412412- * blocked == true is off413413- */414414- if (blocked)415415- sabi_set_command(sabi_config->commands.set_wireless_button, 0);416416- else417417- sabi_set_command(sabi_config->commands.set_wireless_button, 1);418418-419419- return 0;420420-}421421-422422-static struct rfkill_ops rfkill_ops = {423423- .set_block = rfkill_set,424424-};425425-426426-static int init_wireless(struct platform_device *sdev)427427-{428428- int retval;429429-430430- rfk = rfkill_alloc("samsung-wifi", &sdev->dev, RFKILL_TYPE_WLAN,431431- &rfkill_ops, NULL);432432- if (!rfk)433433- return -ENOMEM;434434-435435- retval = rfkill_register(rfk);436436- if (retval) {437437- rfkill_destroy(rfk);438438- return -ENODEV;439439- }440440-441441- return 0;442442-}443443-444444-static void destroy_wireless(void)445445-{446446- rfkill_unregister(rfk);447447- rfkill_destroy(rfk);448448-}449449-450450-static ssize_t get_performance_level(struct 
device *dev,451451- struct device_attribute *attr, char *buf)452452-{453453- struct sabi_retval sretval;454454- int retval;455455- int i;456456-457457- /* Read the state */458458- retval = sabi_get_command(sabi_config->commands.get_performance_level,459459- &sretval);460460- if (retval)461461- return retval;462462-463463- /* The logic is backwards, yeah, lots of fun... */464464- for (i = 0; sabi_config->performance_levels[i].name; ++i) {465465- if (sretval.retval[0] == sabi_config->performance_levels[i].value)466466- return sprintf(buf, "%s\n", sabi_config->performance_levels[i].name);467467- }468468- return sprintf(buf, "%s\n", "unknown");469469-}470470-471471-static ssize_t set_performance_level(struct device *dev,472472- struct device_attribute *attr, const char *buf,473473- size_t count)474474-{475475- if (count >= 1) {476476- int i;477477- for (i = 0; sabi_config->performance_levels[i].name; ++i) {478478- const struct sabi_performance_level *level =479479- &sabi_config->performance_levels[i];480480- if (!strncasecmp(level->name, buf, strlen(level->name))) {481481- sabi_set_command(sabi_config->commands.set_performance_level,482482- level->value);483483- break;484484- }485485- }486486- if (!sabi_config->performance_levels[i].name)487487- return -EINVAL;488488- }489489- return count;490490-}491491-static DEVICE_ATTR(performance_level, S_IWUSR | S_IRUGO,492492- get_performance_level, set_performance_level);493493-494494-495495-static int __init dmi_check_cb(const struct dmi_system_id *id)496496-{497497- pr_info("found laptop model '%s'\n",498498- id->ident);499499- return 0;500500-}501501-502502-static struct dmi_system_id __initdata samsung_dmi_table[] = {503503- {504504- .ident = "N128",505505- .matches = {506506- DMI_MATCH(DMI_SYS_VENDOR,507507- "SAMSUNG ELECTRONICS CO., LTD."),508508- DMI_MATCH(DMI_PRODUCT_NAME, "N128"),509509- DMI_MATCH(DMI_BOARD_NAME, "N128"),510510- },511511- .callback = dmi_check_cb,512512- },513513- {514514- .ident = "N130",515515- 
-        .matches = {
-            DMI_MATCH(DMI_SYS_VENDOR,
-                    "SAMSUNG ELECTRONICS CO., LTD."),
-            DMI_MATCH(DMI_PRODUCT_NAME, "N130"),
-            DMI_MATCH(DMI_BOARD_NAME, "N130"),
-        },
-        .callback = dmi_check_cb,
-    },
-    {
-        .ident = "X125",
-        .matches = {
-            DMI_MATCH(DMI_SYS_VENDOR,
-                    "SAMSUNG ELECTRONICS CO., LTD."),
-            DMI_MATCH(DMI_PRODUCT_NAME, "X125"),
-            DMI_MATCH(DMI_BOARD_NAME, "X125"),
-        },
-        .callback = dmi_check_cb,
-    },
-    {
-        .ident = "X120/X170",
-        .matches = {
-            DMI_MATCH(DMI_SYS_VENDOR,
-                    "SAMSUNG ELECTRONICS CO., LTD."),
-            DMI_MATCH(DMI_PRODUCT_NAME, "X120/X170"),
-            DMI_MATCH(DMI_BOARD_NAME, "X120/X170"),
-        },
-        .callback = dmi_check_cb,
-    },
-    {
-        .ident = "NC10",
-        .matches = {
-            DMI_MATCH(DMI_SYS_VENDOR,
-                    "SAMSUNG ELECTRONICS CO., LTD."),
-            DMI_MATCH(DMI_PRODUCT_NAME, "NC10"),
-            DMI_MATCH(DMI_BOARD_NAME, "NC10"),
-        },
-        .callback = dmi_check_cb,
-    },
-    {
-        .ident = "NP-Q45",
-        .matches = {
-            DMI_MATCH(DMI_SYS_VENDOR,
-                    "SAMSUNG ELECTRONICS CO., LTD."),
-            DMI_MATCH(DMI_PRODUCT_NAME, "SQ45S70S"),
-            DMI_MATCH(DMI_BOARD_NAME, "SQ45S70S"),
-        },
-        .callback = dmi_check_cb,
-    },
-    {
-        .ident = "X360",
-        .matches = {
-            DMI_MATCH(DMI_SYS_VENDOR,
-                    "SAMSUNG ELECTRONICS CO., LTD."),
-            DMI_MATCH(DMI_PRODUCT_NAME, "X360"),
-            DMI_MATCH(DMI_BOARD_NAME, "X360"),
-        },
-        .callback = dmi_check_cb,
-    },
-    {
-        .ident = "R410 Plus",
-        .matches = {
-            DMI_MATCH(DMI_SYS_VENDOR,
-                    "SAMSUNG ELECTRONICS CO., LTD."),
-            DMI_MATCH(DMI_PRODUCT_NAME, "R410P"),
-            DMI_MATCH(DMI_BOARD_NAME, "R460"),
-        },
-        .callback = dmi_check_cb,
-    },
-    {
-        .ident = "R518",
-        .matches = {
-            DMI_MATCH(DMI_SYS_VENDOR,
-                    "SAMSUNG ELECTRONICS CO., LTD."),
-            DMI_MATCH(DMI_PRODUCT_NAME, "R518"),
-            DMI_MATCH(DMI_BOARD_NAME, "R518"),
-        },
-        .callback = dmi_check_cb,
-    },
-    {
-        .ident = "R519/R719",
-        .matches = {
-            DMI_MATCH(DMI_SYS_VENDOR,
-                    "SAMSUNG ELECTRONICS CO., LTD."),
-            DMI_MATCH(DMI_PRODUCT_NAME, "R519/R719"),
-            DMI_MATCH(DMI_BOARD_NAME, "R519/R719"),
-        },
-        .callback = dmi_check_cb,
-    },
-    {
-        .ident = "N150/N210/N220/N230",
-        .matches = {
-            DMI_MATCH(DMI_SYS_VENDOR,
-                    "SAMSUNG ELECTRONICS CO., LTD."),
-            DMI_MATCH(DMI_PRODUCT_NAME, "N150/N210/N220/N230"),
-            DMI_MATCH(DMI_BOARD_NAME, "N150/N210/N220/N230"),
-        },
-        .callback = dmi_check_cb,
-    },
-    {
-        .ident = "N150P/N210P/N220P",
-        .matches = {
-            DMI_MATCH(DMI_SYS_VENDOR,
-                    "SAMSUNG ELECTRONICS CO., LTD."),
-            DMI_MATCH(DMI_PRODUCT_NAME, "N150P/N210P/N220P"),
-            DMI_MATCH(DMI_BOARD_NAME, "N150P/N210P/N220P"),
-        },
-        .callback = dmi_check_cb,
-    },
-    {
-        .ident = "R530/R730",
-        .matches = {
-            DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."),
-            DMI_MATCH(DMI_PRODUCT_NAME, "R530/R730"),
-            DMI_MATCH(DMI_BOARD_NAME, "R530/R730"),
-        },
-        .callback = dmi_check_cb,
-    },
-    {
-        .ident = "NF110/NF210/NF310",
-        .matches = {
-            DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."),
-            DMI_MATCH(DMI_PRODUCT_NAME, "NF110/NF210/NF310"),
-            DMI_MATCH(DMI_BOARD_NAME, "NF110/NF210/NF310"),
-        },
-        .callback = dmi_check_cb,
-    },
-    {
-        .ident = "N145P/N250P/N260P",
-        .matches = {
-            DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."),
-            DMI_MATCH(DMI_PRODUCT_NAME, "N145P/N250P/N260P"),
-            DMI_MATCH(DMI_BOARD_NAME, "N145P/N250P/N260P"),
-        },
-        .callback = dmi_check_cb,
-    },
-    {
-        .ident = "R70/R71",
-        .matches = {
-            DMI_MATCH(DMI_SYS_VENDOR,
-                    "SAMSUNG ELECTRONICS CO., LTD."),
-            DMI_MATCH(DMI_PRODUCT_NAME, "R70/R71"),
-            DMI_MATCH(DMI_BOARD_NAME, "R70/R71"),
-        },
-        .callback = dmi_check_cb,
-    },
-    {
-        .ident = "P460",
-        .matches = {
-            DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."),
-            DMI_MATCH(DMI_PRODUCT_NAME, "P460"),
-            DMI_MATCH(DMI_BOARD_NAME, "P460"),
-        },
-        .callback = dmi_check_cb,
-    },
-    { },
-};
-MODULE_DEVICE_TABLE(dmi, samsung_dmi_table);
-
-static int find_signature(void __iomem *memcheck, const char *testStr)
-{
-    int i = 0;
-    int loca;
-
-    for (loca = 0; loca < 0xffff; loca++) {
-        char temp = readb(memcheck + loca);
-
-        if (temp == testStr[i]) {
-            if (i == strlen(testStr)-1)
-                break;
-            ++i;
-        } else {
-            i = 0;
-        }
-    }
-    return loca;
-}
-
-static int __init samsung_init(void)
-{
-    struct backlight_properties props;
-    struct sabi_retval sretval;
-    unsigned int ifaceP;
-    int i;
-    int loca;
-    int retval;
-
-    mutex_init(&sabi_mutex);
-
-    if (!force && !dmi_check_system(samsung_dmi_table))
-        return -ENODEV;
-
-    f0000_segment = ioremap_nocache(0xf0000, 0xffff);
-    if (!f0000_segment) {
-        pr_err("Can't map the segment at 0xf0000\n");
-        return -EINVAL;
-    }
-
-    /* Try to find one of the signatures in memory to find the header */
-    for (i = 0; sabi_configs[i].test_string != 0; ++i) {
-        sabi_config = &sabi_configs[i];
-        loca = find_signature(f0000_segment, sabi_config->test_string);
-        if (loca != 0xffff)
-            break;
-    }
-
-    if (loca == 0xffff) {
-        pr_err("This computer does not support SABI\n");
-        goto error_no_signature;
-    }
-
-    /* point to the SMI port Number */
-    loca += 1;
-    sabi = (f0000_segment + loca);
-
-    if (debug) {
-        printk(KERN_DEBUG "This computer supports SABI==%x\n",
-            loca + 0xf0000 - 6);
-        printk(KERN_DEBUG "SABI header:\n");
-        printk(KERN_DEBUG " SMI Port Number = 0x%04x\n",
-            readw(sabi + sabi_config->header_offsets.port));
-        printk(KERN_DEBUG " SMI Interface Function = 0x%02x\n",
-            readb(sabi + sabi_config->header_offsets.iface_func));
-        printk(KERN_DEBUG " SMI enable memory buffer = 0x%02x\n",
-            readb(sabi + sabi_config->header_offsets.en_mem));
-        printk(KERN_DEBUG " SMI restore memory buffer = 0x%02x\n",
-            readb(sabi + sabi_config->header_offsets.re_mem));
-        printk(KERN_DEBUG " SABI data offset = 0x%04x\n",
-            readw(sabi + sabi_config->header_offsets.data_offset));
-        printk(KERN_DEBUG " SABI data segment = 0x%04x\n",
-            readw(sabi + sabi_config->header_offsets.data_segment));
-    }
-
-    /* Get a pointer to the SABI Interface */
-    ifaceP = (readw(sabi + sabi_config->header_offsets.data_segment) & 0x0ffff) << 4;
-    ifaceP += readw(sabi + sabi_config->header_offsets.data_offset) & 0x0ffff;
-    sabi_iface = ioremap_nocache(ifaceP, 16);
-    if (!sabi_iface) {
-        pr_err("Can't remap %x\n", ifaceP);
-        goto exit;
-    }
-    if (debug) {
-        printk(KERN_DEBUG "ifaceP = 0x%08x\n", ifaceP);
-        printk(KERN_DEBUG "sabi_iface = %p\n", sabi_iface);
-
-        test_backlight();
-        test_wireless();
-
-        retval = sabi_get_command(sabi_config->commands.get_brightness,
-                      &sretval);
-        printk(KERN_DEBUG "brightness = 0x%02x\n", sretval.retval[0]);
-    }
-
-    /* Turn on "Linux" mode in the BIOS */
-    if (sabi_config->commands.set_linux != 0xff) {
-        retval = sabi_set_command(sabi_config->commands.set_linux,
-                      0x81);
-        if (retval) {
-            pr_warn("Linux mode was not set!\n");
-            goto error_no_platform;
-        }
-    }
-
-    /* knock up a platform device to hang stuff off of */
-    sdev = platform_device_register_simple("samsung", -1, NULL, 0);
-    if (IS_ERR(sdev))
-        goto error_no_platform;
-
-    /* create a backlight device to talk to this one */
-    memset(&props, 0, sizeof(struct backlight_properties));
-    props.type = BACKLIGHT_PLATFORM;
-    props.max_brightness = sabi_config->max_brightness;
-    backlight_device = backlight_device_register("samsung", &sdev->dev,
-                             NULL, &backlight_ops,
-                             &props);
-    if (IS_ERR(backlight_device))
-        goto error_no_backlight;
-
-    backlight_device->props.brightness = read_brightness();
-    backlight_device->props.power = FB_BLANK_UNBLANK;
-    backlight_update_status(backlight_device);
-
-    retval = init_wireless(sdev);
-    if (retval)
-        goto error_no_rfk;
-
-    retval = device_create_file(&sdev->dev, &dev_attr_performance_level);
-    if (retval)
-        goto error_file_create;
-
-exit:
-    return 0;
-
-error_file_create:
-    destroy_wireless();
-
-error_no_rfk:
-    backlight_device_unregister(backlight_device);
-
-error_no_backlight:
-    platform_device_unregister(sdev);
-
-error_no_platform:
-    iounmap(sabi_iface);
-
-error_no_signature:
-    iounmap(f0000_segment);
-    return -EINVAL;
-}
-
-static void __exit samsung_exit(void)
-{
-    /* Turn off "Linux" mode in the BIOS */
-    if (sabi_config->commands.set_linux != 0xff)
-        sabi_set_command(sabi_config->commands.set_linux, 0x80);
-
-    device_remove_file(&sdev->dev, &dev_attr_performance_level);
-    backlight_device_unregister(backlight_device);
-    destroy_wireless();
-    iounmap(sabi_iface);
-    iounmap(f0000_segment);
-    platform_device_unregister(sdev);
-}
-
-module_init(samsung_init);
-module_exit(samsung_exit);
-
-MODULE_AUTHOR("Greg Kroah-Hartman <gregkh@suse.de>");
-MODULE_DESCRIPTION("Samsung Backlight driver");
-MODULE_LICENSE("GPL");
+1
drivers/usb/Kconfig
···
     default y if ARCH_VT8500
     default y if PLAT_SPEAR
     default y if ARCH_MSM
+    default y if MICROBLAZE
     default PCI

# ARM SA1111 chips have a non-PCI based "OHCI-compatible" USB host interface.
+6-4
drivers/usb/core/devices.c
···
         break;
     case USB_ENDPOINT_XFER_INT:
         type = "Int.";
-        if (speed == USB_SPEED_HIGH)
+        if (speed == USB_SPEED_HIGH || speed == USB_SPEED_SUPER)
             interval = 1 << (desc->bInterval - 1);
         else
             interval = desc->bInterval;
···
     default:    /* "can't happen" */
         return start;
     }
-    interval *= (speed == USB_SPEED_HIGH) ? 125 : 1000;
+    interval *= (speed == USB_SPEED_HIGH ||
+            speed == USB_SPEED_SUPER) ? 125 : 1000;
     if (interval % 1000)
         unit = 'u';
     else {
···
     if (level == 0) {
         int max;

-        /* high speed reserves 80%, full/low reserves 90% */
-        if (usbdev->speed == USB_SPEED_HIGH)
+        /* super/high speed reserves 80%, full/low reserves 90% */
+        if (usbdev->speed == USB_SPEED_HIGH ||
+            usbdev->speed == USB_SPEED_SUPER)
             max = 800;
         else
             max = FRAME_TIME_MAX_USECS_ALLOC;
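The devices.c hunk above makes SuperSpeed endpoints share the high-speed interval decoding: bInterval is an exponent (period = 2^(bInterval-1) units of 125 µs), while full/low speed use bInterval directly as a count of 1 ms frames. A minimal user-space sketch of that rule, with a hypothetical helper name and local speed enum (not kernel API):

```c
#include <assert.h>

enum speed { SPEED_LOW, SPEED_FULL, SPEED_HIGH, SPEED_SUPER };

/* Hypothetical helper mirroring the logic in the hunk above: high- and
 * super-speed interrupt endpoints encode bInterval as an exponent of
 * 125us microframes; full/low speed read it as a count of 1ms frames. */
static unsigned int int_ep_interval_us(enum speed speed, unsigned char bInterval)
{
	unsigned int interval;

	if (speed == SPEED_HIGH || speed == SPEED_SUPER)
		interval = 1u << (bInterval - 1);
	else
		interval = bInterval;

	return interval * ((speed == SPEED_HIGH || speed == SPEED_SUPER) ? 125 : 1000);
}
```

For example, bInterval = 4 yields 2^3 = 8 microframes (1000 µs) at high speed, but 4 ms at full speed.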
+1-1
drivers/usb/core/hcd.c
···

    /* Streams only apply to bulk endpoints. */
    for (i = 0; i < num_eps; i++)
-        if (!usb_endpoint_xfer_bulk(&eps[i]->desc))
+        if (!eps[i] || !usb_endpoint_xfer_bulk(&eps[i]->desc))
            return;

    hcd->driver->free_streams(hcd, dev, eps, num_eps, mem_flags);
+11-1
drivers/usb/core/hub.c
···
    }

    /* see 7.1.7.6 */
-    status = set_port_feature(hub->hdev, port1, USB_PORT_FEAT_SUSPEND);
+    /* Clear PORT_POWER if it's a USB3.0 device connected to USB 3.0
+     * external hub.
+     * FIXME: this is a temporary workaround to make the system able
+     * to suspend/resume.
+     */
+    if ((hub->hdev->parent != NULL) && hub_is_superspeed(hub->hdev))
+        status = clear_port_feature(hub->hdev, port1,
+                    USB_PORT_FEAT_POWER);
+    else
+        status = set_port_feature(hub->hdev, port1,
+                    USB_PORT_FEAT_SUSPEND);
    if (status) {
        dev_dbg(hub->intfdev, "can't suspend port %d, status %d\n",
            port1, status);
drivers/usb/gadget/fsl_qe_udc.c
···
static int txcomplete(struct qe_ep *ep, unsigned char restart)
{
    if (ep->tx_req != NULL) {
+        struct qe_req *req = ep->tx_req;
+        unsigned zlp = 0, last_len = 0;
+
+        last_len = min_t(unsigned, req->req.length - ep->sent,
+                 ep->ep.maxpacket);
+
        if (!restart) {
            int asent = ep->last;
            ep->sent += asent;
···
            ep->last = 0;
        }

+        /* zlp needed when req->re.zero is set */
+        if (req->req.zero) {
+            if (last_len == 0 ||
+                (req->req.length % ep->ep.maxpacket) != 0)
+                zlp = 0;
+            else
+                zlp = 1;
+        } else
+            zlp = 0;
+
        /* a request already were transmitted completely */
-        if ((ep->tx_req->req.length - ep->sent) <= 0) {
-            ep->tx_req->req.actual = (unsigned int)ep->sent;
+        if (((ep->tx_req->req.length - ep->sent) <= 0) && !zlp) {
            done(ep, ep->tx_req, 0);
            ep->tx_req = NULL;
            ep->last = 0;
···
        buf = (u8 *)ep->tx_req->req.buf + ep->sent;
        if (buf && size) {
            ep->last = size;
+            ep->tx_req->req.actual += size;
            frame_set_data(frame, buf);
            frame_set_length(frame, size);
            frame_set_status(frame, FRAME_OK);
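The zero-length-packet (ZLP) decision in the hunk above can be summarized as: a trailing ZLP is needed only when the gadget requested one (`req.zero`), the final data packet was full, and the total length divides evenly into max-packet-size chunks. A small sketch of that predicate, with a hypothetical function name (not part of the driver):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of the ZLP rule from txcomplete() above: a trailing
 * zero-length packet is queued only when the request asked for one
 * (zero_flag), the last data packet was full (last_len != 0), and the
 * total length is an exact multiple of the endpoint's maxpacket. */
static bool needs_zlp(bool zero_flag, unsigned last_len,
		      unsigned req_len, unsigned maxpacket)
{
	if (!zero_flag)
		return false;
	if (last_len == 0 || (req_len % maxpacket) != 0)
		return false;
	return true;
}
```

For example, a 512-byte transfer on a 64-byte endpoint ends on a packet boundary, so the host would otherwise keep waiting for more data; the ZLP terminates the transfer.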
+3-1
drivers/usb/gadget/inode.c
···

    /* halt any endpoint by doing a "wrong direction" i/o call */
    if (usb_endpoint_dir_in(&data->desc)) {
-        if (usb_endpoint_xfer_isoc(&data->desc))
+        if (usb_endpoint_xfer_isoc(&data->desc)) {
+            mutex_unlock(&data->lock);
            return -EINVAL;
+        }
        DBG (data->dev, "%s halt\n", data->name);
        spin_lock_irq (&data->dev->lock);
        if (likely (data->ep != NULL))
+5-3
drivers/usb/gadget/pch_udc.c
···
        return -EINVAL;
    if (!dev->driver || (dev->gadget.speed == USB_SPEED_UNKNOWN))
        return -ESHUTDOWN;
-    spin_lock_irqsave(&ep->dev->lock, iflags);
+    spin_lock_irqsave(&dev->lock, iflags);
    /* map the buffer for dma */
    if (usbreq->length &&
        ((usbreq->dma == DMA_ADDR_INVALID) || !usbreq->dma)) {
···
                            DMA_FROM_DEVICE);
        } else {
            req->buf = kzalloc(usbreq->length, GFP_ATOMIC);
-            if (!req->buf)
-                return -ENOMEM;
+            if (!req->buf) {
+                retval = -ENOMEM;
+                goto probe_end;
+            }
            if (ep->in) {
                memcpy(req->buf, usbreq->buf, usbreq->length);
                req->dma = dma_map_single(&dev->pdev->dev,
+2
drivers/usb/gadget/r8a66597-udc.c
···

    if (dvsq == DS_DFLT) {
        /* bus reset */
+        spin_unlock(&r8a66597->lock);
        r8a66597->driver->disconnect(&r8a66597->gadget);
+        spin_lock(&r8a66597->lock);
        r8a66597_update_usb_speed(r8a66597);
    }
    if (r8a66597->old_dvsq == DS_CNFG && dvsq != DS_CNFG)
+9-6
drivers/usb/host/ehci-q.c
···

static void scan_async (struct ehci_hcd *ehci)
{
+    bool            stopped;
    struct ehci_qh        *qh;
    enum ehci_timer_action    action = TIMER_IO_WATCHDOG;

    ehci->stamp = ehci_readl(ehci, &ehci->regs->frame_index);
    timer_action_done (ehci, TIMER_ASYNC_SHRINK);
rescan:
+    stopped = !HC_IS_RUNNING(ehci_to_hcd(ehci)->state);
    qh = ehci->async->qh_next.qh;
    if (likely (qh != NULL)) {
        do {
            /* clean any finished work for this qh */
-            if (!list_empty (&qh->qtd_list)
-                    && qh->stamp != ehci->stamp) {
+            if (!list_empty(&qh->qtd_list) && (stopped ||
+                    qh->stamp != ehci->stamp)) {
                int temp;

                /* unlinks could happen here; completion
                 * reporting drops the lock.  rescan using
                 * the latest schedule, but don't rescan
-                 * qhs we already finished (no looping).
+                 * qhs we already finished (no looping)
+                 * unless the controller is stopped.
                 */
                qh = qh_get (qh);
                qh->stamp = ehci->stamp;
···
             */
            if (list_empty(&qh->qtd_list)
                    && qh->qh_state == QH_STATE_LINKED) {
-                if (!ehci->reclaim
-                    && ((ehci->stamp - qh->stamp) & 0x1fff)
-                        >= (EHCI_SHRINK_FRAMES * 8))
+                if (!ehci->reclaim && (stopped ||
+                    ((ehci->stamp - qh->stamp) & 0x1fff)
+                        >= EHCI_SHRINK_FRAMES * 8))
                    start_unlink_async(ehci, qh);
                else
                    action = TIMER_ASYNC_SHRINK;
+1-1
drivers/usb/host/isp1760-hcd.c
···
    }

    dev_err(hcd->self.controller,
-                "%s: Can not allocate %lu bytes of memory\n"
+                "%s: Cannot allocate %zu bytes of memory\n"
                "Current memory map:\n",
                __func__, qtd->length);
    for (i = 0; i < BLOCKS; i++) {
drivers/usb/host/pci-quirks.c
···
{
    u8 rev = 0;
    unsigned long flags;
+    struct amd_chipset_info info;
+    int ret;

    spin_lock_irqsave(&amd_lock, flags);

-    amd_chipset.probe_count++;
    /* probe only once */
-    if (amd_chipset.probe_count > 1) {
+    if (amd_chipset.probe_count > 0) {
+        amd_chipset.probe_count++;
        spin_unlock_irqrestore(&amd_lock, flags);
        return amd_chipset.probe_result;
    }
+    memset(&info, 0, sizeof(info));
+    spin_unlock_irqrestore(&amd_lock, flags);

-    amd_chipset.smbus_dev = pci_get_device(PCI_VENDOR_ID_ATI, 0x4385, NULL);
-    if (amd_chipset.smbus_dev) {
-        rev = amd_chipset.smbus_dev->revision;
+    info.smbus_dev = pci_get_device(PCI_VENDOR_ID_ATI, 0x4385, NULL);
+    if (info.smbus_dev) {
+        rev = info.smbus_dev->revision;
        if (rev >= 0x40)
-            amd_chipset.sb_type = 1;
+            info.sb_type = 1;
        else if (rev >= 0x30 && rev <= 0x3b)
-            amd_chipset.sb_type = 3;
+            info.sb_type = 3;
    } else {
-        amd_chipset.smbus_dev = pci_get_device(PCI_VENDOR_ID_AMD,
-                            0x780b, NULL);
-        if (!amd_chipset.smbus_dev) {
-            spin_unlock_irqrestore(&amd_lock, flags);
-            return 0;
+        info.smbus_dev = pci_get_device(PCI_VENDOR_ID_AMD,
+                        0x780b, NULL);
+        if (!info.smbus_dev) {
+            ret = 0;
+            goto commit;
        }
-        rev = amd_chipset.smbus_dev->revision;
+
+        rev = info.smbus_dev->revision;
        if (rev >= 0x11 && rev <= 0x18)
-            amd_chipset.sb_type = 2;
+            info.sb_type = 2;
    }

-    if (amd_chipset.sb_type == 0) {
-        if (amd_chipset.smbus_dev) {
-            pci_dev_put(amd_chipset.smbus_dev);
-            amd_chipset.smbus_dev = NULL;
+    if (info.sb_type == 0) {
+        if (info.smbus_dev) {
+            pci_dev_put(info.smbus_dev);
+            info.smbus_dev = NULL;
        }
-        spin_unlock_irqrestore(&amd_lock, flags);
-        return 0;
+        ret = 0;
+        goto commit;
    }

-    amd_chipset.nb_dev = pci_get_device(PCI_VENDOR_ID_AMD, 0x9601, NULL);
-    if (amd_chipset.nb_dev) {
-        amd_chipset.nb_type = 1;
+    info.nb_dev = pci_get_device(PCI_VENDOR_ID_AMD, 0x9601, NULL);
+    if (info.nb_dev) {
+        info.nb_type = 1;
    } else {
-        amd_chipset.nb_dev = pci_get_device(PCI_VENDOR_ID_AMD,
-                            0x1510, NULL);
-        if (amd_chipset.nb_dev) {
-            amd_chipset.nb_type = 2;
-        } else {
-            amd_chipset.nb_dev = pci_get_device(PCI_VENDOR_ID_AMD,
-                                0x9600, NULL);
-            if (amd_chipset.nb_dev)
-                amd_chipset.nb_type = 3;
+        info.nb_dev = pci_get_device(PCI_VENDOR_ID_AMD, 0x1510, NULL);
+        if (info.nb_dev) {
+            info.nb_type = 2;
+        } else {
+            info.nb_dev = pci_get_device(PCI_VENDOR_ID_AMD,
+                             0x9600, NULL);
+            if (info.nb_dev)
+                info.nb_type = 3;
        }
    }

-    amd_chipset.probe_result = 1;
+    ret = info.probe_result = 1;
    printk(KERN_DEBUG "QUIRK: Enable AMD PLL fix\n");

-    spin_unlock_irqrestore(&amd_lock, flags);
-    return amd_chipset.probe_result;
+commit:
+
+    spin_lock_irqsave(&amd_lock, flags);
+    if (amd_chipset.probe_count > 0) {
+        /* race - someone else was faster - drop devices */
+
+        /* Mark that we where here */
+        amd_chipset.probe_count++;
+        ret = amd_chipset.probe_result;
+
+        spin_unlock_irqrestore(&amd_lock, flags);
+
+        if (info.nb_dev)
+            pci_dev_put(info.nb_dev);
+        if (info.smbus_dev)
+            pci_dev_put(info.smbus_dev);
+
+    } else {
+        /* no race - commit the result */
+        info.probe_count++;
+        amd_chipset = info;
+        spin_unlock_irqrestore(&amd_lock, flags);
+    }
+
+    return ret;
}
EXPORT_SYMBOL_GPL(usb_amd_find_chipset_info);
···

void usb_amd_dev_put(void)
{
+    struct pci_dev *nb, *smbus;
    unsigned long flags;

    spin_lock_irqsave(&amd_lock, flags);
···
        return;
    }

-    if (amd_chipset.nb_dev) {
-        pci_dev_put(amd_chipset.nb_dev);
-        amd_chipset.nb_dev = NULL;
-    }
-    if (amd_chipset.smbus_dev) {
-        pci_dev_put(amd_chipset.smbus_dev);
-        amd_chipset.smbus_dev = NULL;
-    }
+    /* save them to pci_dev_put outside of spinlock */
+    nb = amd_chipset.nb_dev;
+    smbus = amd_chipset.smbus_dev;
+
+    amd_chipset.nb_dev = NULL;
+    amd_chipset.smbus_dev = NULL;
    amd_chipset.nb_type = 0;
    amd_chipset.sb_type = 0;
    amd_chipset.isoc_reqs = 0;
    amd_chipset.probe_result = 0;

    spin_unlock_irqrestore(&amd_lock, flags);
+
+    if (nb)
+        pci_dev_put(nb);
+    if (smbus)
+        pci_dev_put(smbus);
}
EXPORT_SYMBOL_GPL(usb_amd_dev_put);
+70-36
drivers/usb/host/xhci-mem.c
···
     * Skip ports that don't have known speeds, or have duplicate
     * Extended Capabilities port speed entries.
     */
-    if (port_speed == 0 || port_speed == -1)
+    if (port_speed == 0 || port_speed == DUPLICATE_ENTRY)
        continue;

    /*
···
    return 0;
}

+/*
+ * Convert interval expressed as 2^(bInterval - 1) == interval into
+ * straight exponent value 2^n == interval.
+ *
+ */
+static unsigned int xhci_parse_exponent_interval(struct usb_device *udev,
+        struct usb_host_endpoint *ep)
+{
+    unsigned int interval;
+
+    interval = clamp_val(ep->desc.bInterval, 1, 16) - 1;
+    if (interval != ep->desc.bInterval - 1)
+        dev_warn(&udev->dev,
+             "ep %#x - rounding interval to %d microframes\n",
+             ep->desc.bEndpointAddress,
+             1 << interval);
+
+    return interval;
+}
+
+/*
+ * Convert bInterval expressed in frames (in 1-255 range) to exponent of
+ * microframes, rounded down to nearest power of 2.
+ */
+static unsigned int xhci_parse_frame_interval(struct usb_device *udev,
+        struct usb_host_endpoint *ep)
+{
+    unsigned int interval;
+
+    interval = fls(8 * ep->desc.bInterval) - 1;
+    interval = clamp_val(interval, 3, 10);
+    if ((1 << interval) != 8 * ep->desc.bInterval)
+        dev_warn(&udev->dev,
+             "ep %#x - rounding interval to %d microframes, ep desc says %d microframes\n",
+             ep->desc.bEndpointAddress,
+             1 << interval,
+             8 * ep->desc.bInterval);
+
+    return interval;
+}
+
/* Return the polling or NAK interval.
 *
 * The polling interval is expressed in "microframes".  If xHCI's Interval field
···
 * The NAK interval is one NAK per 1 to 255 microframes, or no NAKs if interval
 * is set to 0.
 */
-static inline unsigned int xhci_get_endpoint_interval(struct usb_device *udev,
+static unsigned int xhci_get_endpoint_interval(struct usb_device *udev,
        struct usb_host_endpoint *ep)
{
    unsigned int interval = 0;
···
    case USB_SPEED_HIGH:
        /* Max NAK rate */
        if (usb_endpoint_xfer_control(&ep->desc) ||
-            usb_endpoint_xfer_bulk(&ep->desc))
+            usb_endpoint_xfer_bulk(&ep->desc)) {
            interval = ep->desc.bInterval;
+            break;
+        }
        /* Fall through - SS and HS isoc/int have same decoding */
+
    case USB_SPEED_SUPER:
        if (usb_endpoint_xfer_int(&ep->desc) ||
-                usb_endpoint_xfer_isoc(&ep->desc)) {
-            if (ep->desc.bInterval == 0)
-                interval = 0;
-            else
-                interval = ep->desc.bInterval - 1;
-            if (interval > 15)
-                interval = 15;
-            if (interval != ep->desc.bInterval + 1)
-                dev_warn(&udev->dev, "ep %#x - rounding interval to %d microframes\n",
-                        ep->desc.bEndpointAddress, 1 << interval);
+                usb_endpoint_xfer_isoc(&ep->desc)) {
+            interval = xhci_parse_exponent_interval(udev, ep);
        }
        break;
-    /* Convert bInterval (in 1-255 frames) to microframes and round down to
-     * nearest power of 2.
-     */
+
    case USB_SPEED_FULL:
+        if (usb_endpoint_xfer_int(&ep->desc)) {
+            interval = xhci_parse_exponent_interval(udev, ep);
+            break;
+        }
+        /*
+         * Fall through for isochronous endpoint interval decoding
+         * since it uses the same rules as low speed interrupt
+         * endpoints.
+         */
+
    case USB_SPEED_LOW:
        if (usb_endpoint_xfer_int(&ep->desc) ||
-                usb_endpoint_xfer_isoc(&ep->desc)) {
-            interval = fls(8*ep->desc.bInterval) - 1;
-            if (interval > 10)
-                interval = 10;
-            if (interval < 3)
-                interval = 3;
-            if ((1 << interval) != 8*ep->desc.bInterval)
-                dev_warn(&udev->dev,
-                        "ep %#x - rounding interval"
-                        " to %d microframes, "
-                        "ep desc says %d microframes\n",
-                        ep->desc.bEndpointAddress,
-                        1 << interval,
-                        8*ep->desc.bInterval);
+                usb_endpoint_xfer_isoc(&ep->desc)) {
+
+            interval = xhci_parse_frame_interval(udev, ep);
        }
        break;
+
    default:
        BUG();
    }
···
 * transaction opportunities per microframe", but that goes in the Max Burst
 * endpoint context field.
 */
-static inline u32 xhci_get_endpoint_mult(struct usb_device *udev,
+static u32 xhci_get_endpoint_mult(struct usb_device *udev,
        struct usb_host_endpoint *ep)
{
    if (udev->speed != USB_SPEED_SUPER ||
···
    return ep->ss_ep_comp.bmAttributes;
}

-static inline u32 xhci_get_endpoint_type(struct usb_device *udev,
+static u32 xhci_get_endpoint_type(struct usb_device *udev,
        struct usb_host_endpoint *ep)
{
    int in;
···
 * Basically, this is the maxpacket size, multiplied by the burst size
 * and mult size.
 */
-static inline u32 xhci_get_max_esit_payload(struct xhci_hcd *xhci,
+static u32 xhci_get_max_esit_payload(struct xhci_hcd *xhci,
        struct usb_device *udev,
        struct usb_host_endpoint *ep)
{
···
     * found a similar duplicate.
     */
    if (xhci->port_array[i] != major_revision &&
-            xhci->port_array[i] != (u8) -1) {
+            xhci->port_array[i] != DUPLICATE_ENTRY) {
        if (xhci->port_array[i] == 0x03)
            xhci->num_usb3_ports--;
        else
            xhci->num_usb2_ports--;
-        xhci->port_array[i] = (u8) -1;
+        xhci->port_array[i] = DUPLICATE_ENTRY;
    }
    /* FIXME: Should we disable the port? */
    continue;
···
    for (i = 0; i < num_ports; i++) {
        if (xhci->port_array[i] == 0x03 ||
                xhci->port_array[i] == 0 ||
-                xhci->port_array[i] == -1)
+                xhci->port_array[i] == DUPLICATE_ENTRY)
            continue;

        xhci->usb2_ports[port_index] =
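The two interval helpers factored out in the xhci-mem.c hunk above do simple conversions: for exponent-encoded intervals, clamp bInterval to [1, 16] and subtract one; for frame-counted intervals, convert frames to microframes (×8) and round down to a power of two clamped to 2^3..2^10. A user-space sketch of both, where `clampu` and `fls_` are local stand-ins for the kernel's `clamp_val()` and `fls()` (the warning paths are omitted):

```c
#include <assert.h>

/* Local stand-in for the kernel's clamp_val(). */
static unsigned clampu(unsigned v, unsigned lo, unsigned hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

/* Local stand-in for the kernel's fls(): 1-based index of the highest
 * set bit; fls_(8) == 4, fls_(0) == 0. */
static int fls_(unsigned v)
{
	int r = 0;

	while (v) {
		r++;
		v >>= 1;
	}
	return r;
}

/* bInterval already encodes an exponent: period = 2^(bInterval-1)
 * microframes, clamped to the xHCI Interval field range 0..15. */
static unsigned parse_exponent_interval(unsigned char bInterval)
{
	return clampu(bInterval, 1, 16) - 1;
}

/* bInterval counts 1ms frames (8 microframes each): round down to a
 * power of two of microframes, clamped to 2^3..2^10 (1ms..128ms). */
static unsigned parse_frame_interval(unsigned char bInterval)
{
	return clampu(fls_(8u * bInterval) - 1, 3, 10);
}
```

For example, a full-speed isochronous endpoint with bInterval = 3 (3 ms = 24 microframes) rounds down to 2^4 = 16 microframes, which is what the in-tree warning would report.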
+4
drivers/usb/host/xhci-pci.c
···
    if (pdev->vendor == PCI_VENDOR_ID_NEC)
        xhci->quirks |= XHCI_NEC_HOST;

+    /* AMD PLL quirk */
+    if (pdev->vendor == PCI_VENDOR_ID_AMD && usb_amd_find_chipset_info())
+        xhci->quirks |= XHCI_AMD_PLL_FIX;
+
    /* Make sure the HC is halted. */
    retval = xhci_halt(xhci);
    if (retval)
+134-85
drivers/usb/host/xhci-ring.c
···
/* Does this link TRB point to the first segment in a ring,
 * or was the previous TRB the last TRB on the last segment in the ERST?
 */
-static inline bool last_trb_on_last_seg(struct xhci_hcd *xhci, struct xhci_ring *ring,
+static bool last_trb_on_last_seg(struct xhci_hcd *xhci, struct xhci_ring *ring,
        struct xhci_segment *seg, union xhci_trb *trb)
{
    if (ring == xhci->event_ring)
···
 * segment?  I.e. would the updated event TRB pointer step off the end of the
 * event seg?
 */
-static inline int last_trb(struct xhci_hcd *xhci, struct xhci_ring *ring,
+static int last_trb(struct xhci_hcd *xhci, struct xhci_ring *ring,
        struct xhci_segment *seg, union xhci_trb *trb)
{
    if (ring == xhci->event_ring)
···
    return (trb->link.control & TRB_TYPE_BITMASK) == TRB_TYPE(TRB_LINK);
}

-static inline int enqueue_is_link_trb(struct xhci_ring *ring)
+static int enqueue_is_link_trb(struct xhci_ring *ring)
{
    struct xhci_link_trb *link = &ring->enqueue->link;
    return ((link->control & TRB_TYPE_BITMASK) == TRB_TYPE(TRB_LINK));
···
    ep->ep_state |= SET_DEQ_PENDING;
}

-static inline void xhci_stop_watchdog_timer_in_irq(struct xhci_hcd *xhci,
+static void xhci_stop_watchdog_timer_in_irq(struct xhci_hcd *xhci,
        struct xhci_virt_ep *ep)
{
    ep->ep_state &= ~EP_HALT_PENDING;
···

    /* Only giveback urb when this is the last td in urb */
    if (urb_priv->td_cnt == urb_priv->length) {
+        if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) {
+            xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs--;
+            if (xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs == 0) {
+                if (xhci->quirks & XHCI_AMD_PLL_FIX)
+                    usb_amd_quirk_pll_enable();
+            }
+        }
        usb_hcd_unlink_urb_from_ep(hcd, urb);
        xhci_dbg(xhci, "Giveback %s URB %p\n", adjective, urb);
···
     * Skip ports that don't have known speeds, or have duplicate
     * Extended Capabilities port speed entries.
     */
-    if (port_speed == 0 || port_speed == -1)
+    if (port_speed == 0 || port_speed == DUPLICATE_ENTRY)
        continue;

    /*
···
    u8 major_revision;
    struct xhci_bus_state *bus_state;
    u32 __iomem **port_array;
+    bool bogus_port_status = false;

    /* Port status change events always have a successful completion code */
    if (GET_COMP_CODE(event->generic.field[2]) != COMP_SUCCESS) {
···
    max_ports = HCS_MAX_PORTS(xhci->hcs_params1);
    if ((port_id <= 0) || (port_id > max_ports)) {
        xhci_warn(xhci, "Invalid port id %d\n", port_id);
+        bogus_port_status = true;
        goto cleanup;
    }

···
        xhci_warn(xhci, "Event for port %u not in "
                "Extended Capabilities, ignoring.\n",
                port_id);
+        bogus_port_status = true;
        goto cleanup;
    }
-    if (major_revision == (u8) -1) {
+    if (major_revision == DUPLICATE_ENTRY) {
        xhci_warn(xhci, "Event for port %u duplicated in"
                "Extended Capabilities, ignoring.\n",
                port_id);
+        bogus_port_status = true;
        goto cleanup;
    }
···
cleanup:
    /* Update event ring dequeue pointer before dropping the lock */
    inc_deq(xhci, xhci->event_ring, true);
+
+    /* Don't make the USB core poll the roothub if we got a bad port status
+     * change event.  Besides, at that point we can't tell which roothub
+     * (USB 2.0 or USB 3.0) to kick.
+     */
+    if (bogus_port_status)
+        return;

    spin_unlock(&xhci->lock);
    /* Pass this up to the core */
···

    urb_priv->td_cnt++;
    /* Giveback the urb when all the tds are completed */
-    if (urb_priv->td_cnt == urb_priv->length)
+    if (urb_priv->td_cnt == urb_priv->length) {
        ret = 1;
+        if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) {
+            xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs--;
+            if (xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs
+                == 0) {
+                if (xhci->quirks & XHCI_AMD_PLL_FIX)
+                    usb_amd_quirk_pll_enable();
+            }
+        }
+    }
    }

    return ret;
···
    struct urb_priv *urb_priv;
    int idx;
    int len = 0;
-    int skip_td = 0;
    union xhci_trb *cur_trb;
    struct xhci_segment *cur_seg;
+    struct usb_iso_packet_descriptor *frame;
    u32 trb_comp_code;
+    bool skip_td = false;

    ep_ring = xhci_dma_to_transfer_ring(ep, event->buffer);
    trb_comp_code = GET_COMP_CODE(event->transfer_len);
    urb_priv = td->urb->hcpriv;
    idx = urb_priv->td_cnt;
+    frame = &td->urb->iso_frame_desc[idx];

-    if (ep->skip) {
-        /* The transfer is partly done */
-        *status = -EXDEV;
-        td->urb->iso_frame_desc[idx].status = -EXDEV;
-    } else {
-        /* handle completion code */
-        switch (trb_comp_code) {
-        case COMP_SUCCESS:
-            td->urb->iso_frame_desc[idx].status = 0;
-            xhci_dbg(xhci, "Successful isoc transfer!\n");
-            break;
-        case COMP_SHORT_TX:
-            if (td->urb->transfer_flags & URB_SHORT_NOT_OK)
-                td->urb->iso_frame_desc[idx].status =
-                    -EREMOTEIO;
-            else
-                td->urb->iso_frame_desc[idx].status = 0;
-            break;
-        case COMP_BW_OVER:
-            td->urb->iso_frame_desc[idx].status = -ECOMM;
-            skip_td = 1;
-            break;
-        case COMP_BUFF_OVER:
-        case COMP_BABBLE:
-            td->urb->iso_frame_desc[idx].status = -EOVERFLOW;
-            skip_td = 1;
-            break;
-        case COMP_STALL:
-            td->urb->iso_frame_desc[idx].status = -EPROTO;
-            skip_td = 1;
-            break;
-        case COMP_STOP:
-        case COMP_STOP_INVAL:
-            break;
-        default:
-            td->urb->iso_frame_desc[idx].status = -1;
-            break;
-        }
+    /* handle completion code */
+    switch (trb_comp_code) {
+    case COMP_SUCCESS:
+        frame->status = 0;
+        xhci_dbg(xhci, "Successful isoc transfer!\n");
+        break;
+    case COMP_SHORT_TX:
+        frame->status = td->urb->transfer_flags & URB_SHORT_NOT_OK ?
+                -EREMOTEIO : 0;
+        break;
+    case COMP_BW_OVER:
+        frame->status = -ECOMM;
+        skip_td = true;
+        break;
+    case COMP_BUFF_OVER:
+    case COMP_BABBLE:
+        frame->status = -EOVERFLOW;
+        skip_td = true;
+        break;
+    case COMP_STALL:
+        frame->status = -EPROTO;
+        skip_td = true;
+        break;
+    case COMP_STOP:
+    case COMP_STOP_INVAL:
+        break;
+    default:
+        frame->status = -1;
+        break;
    }

-    /* calc actual length */
-    if (ep->skip) {
-        td->urb->iso_frame_desc[idx].actual_length = 0;
-        /* Update ring dequeue pointer */
-        while (ep_ring->dequeue != td->last_trb)
-            inc_deq(xhci, ep_ring, false);
-        inc_deq(xhci, ep_ring, false);
-        return finish_td(xhci, td, event_trb, event, ep, status, true);
-    }
-
-    if (trb_comp_code == COMP_SUCCESS || skip_td == 1) {
-        td->urb->iso_frame_desc[idx].actual_length =
-            td->urb->iso_frame_desc[idx].length;
-        td->urb->actual_length +=
-            td->urb->iso_frame_desc[idx].length;
+    if (trb_comp_code == COMP_SUCCESS || skip_td) {
+        frame->actual_length = frame->length;
+        td->urb->actual_length += frame->length;
    } else {
        for (cur_trb = ep_ring->dequeue,
                cur_seg = ep_ring->deq_seg; cur_trb != event_trb;
···
                TRB_LEN(event->transfer_len);

        if (trb_comp_code != COMP_STOP_INVAL) {
-            td->urb->iso_frame_desc[idx].actual_length = len;
+            frame->actual_length = len;
            td->urb->actual_length += len;
        }
    }
···
        *status = 0;

    return finish_td(xhci, td, event_trb, event, ep, status, false);
+}
+
+static int skip_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td,
+            struct xhci_transfer_event *event,
+            struct xhci_virt_ep *ep, int *status)
+{
+    struct xhci_ring *ep_ring;
+    struct urb_priv *urb_priv;
+    struct usb_iso_packet_descriptor *frame;
+    int idx;
+
+    ep_ring = xhci_dma_to_transfer_ring(ep, event->buffer);
+    urb_priv = td->urb->hcpriv;
+    idx = urb_priv->td_cnt;
+    frame = &td->urb->iso_frame_desc[idx];
+
+    /* The transfer is partly done */
+    *status = -EXDEV;
+    frame->status = -EXDEV;
+
+    /* calc actual length */
+    frame->actual_length = 0;
+
+    /* Update ring dequeue pointer */
+    while (ep_ring->dequeue != td->last_trb)
+        inc_deq(xhci, ep_ring, false);
+    inc_deq(xhci, ep_ring, false);
+
+    return finish_td(xhci, td, NULL, event, ep, status, true);
}

/*
···
    }

    td = list_entry(ep_ring->td_list.next, struct xhci_td, td_list);
+
    /* Is this a TRB in the currently executing TD? */
    event_seg = trb_in_td(ep_ring->deq_seg, ep_ring->dequeue,
            td->last_trb, event_dma);
-    if (event_seg && ep->skip) {
+    if (!event_seg) {
+        if (!ep->skip ||
+            !usb_endpoint_xfer_isoc(&td->urb->ep->desc)) {
+            /* HC is busted, give up! */
+            xhci_err(xhci,
+                    "ERROR Transfer event TRB DMA ptr not "
+                    "part of current TD\n");
+            return -ESHUTDOWN;
+        }
+
+        ret = skip_isoc_td(xhci, td, event, ep, &status);
+        goto cleanup;
+    }
+
+    if (ep->skip) {
        xhci_dbg(xhci, "Found td. Clear skip flag.\n");
        ep->skip = false;
    }
-    if (!event_seg &&
-            (!ep->skip || !usb_endpoint_xfer_isoc(&td->urb->ep->desc))) {
-        /* HC is busted, give up! */
-        xhci_err(xhci, "ERROR Transfer event TRB DMA ptr not "
-            "part of current TD\n");
-        return -ESHUTDOWN;
-    }

-    if (event_seg) {
-        event_trb = &event_seg->trbs[(event_dma -
-                event_seg->dma) / sizeof(*event_trb)];
-        /*
-         * No-op TRB should not trigger interrupts.
-         * If event_trb is a no-op TRB, it means the
-         * corresponding TD has been cancelled. Just ignore
-         * the TD.
-         */
-        if ((event_trb->generic.field[3] & TRB_TYPE_BITMASK)
-                == TRB_TYPE(TRB_TR_NOOP)) {
-            xhci_dbg(xhci, "event_trb is a no-op TRB. "
-                    "Skip it\n");
-            goto cleanup;
-        }
+    event_trb = &event_seg->trbs[(event_dma - event_seg->dma) /
+                        sizeof(*event_trb)];
+    /*
+     * No-op TRB should not trigger interrupts.
+     * If event_trb is a no-op TRB, it means the
+     * corresponding TD has been cancelled. Just ignore
+     * the TD.
+     */
+    if ((event_trb->generic.field[3] & TRB_TYPE_BITMASK)
+            == TRB_TYPE(TRB_TR_NOOP)) {
+        xhci_dbg(xhci,
+             "event_trb is a no-op TRB. Skip it\n");
+        goto cleanup;
    }

    /* Now update the urb's actual_length and give back to
···
            return -EINVAL;
        }
    }
+
+    if (xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs == 0) {
+        if (xhci->quirks & XHCI_AMD_PLL_FIX)
+            usb_amd_quirk_pll_disable();
+    }
+    xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs++;

    giveback_first_trb(xhci, slot_id, ep_index, urb->stream_id,
            start_cycle, start_trb);
+18-5
drivers/usb/host/xhci.c
···
     del_timer_sync(&xhci->event_ring_timer);
 #endif

+    if (xhci->quirks & XHCI_AMD_PLL_FIX)
+        usb_amd_dev_put();
+
     xhci_dbg(xhci, "// Disabling event ring interrupts\n");
     temp = xhci_readl(xhci, &xhci->op_regs->status);
     xhci_writel(xhci, temp & ~STS_EINT, &xhci->op_regs->status);
···

     /* If restore operation fails, re-initialize the HC during resume */
     if ((temp & STS_SRE) || hibernated) {
-        usb_root_hub_lost_power(hcd->self.root_hub);
+        /* Let the USB core know _both_ roothubs lost power. */
+        usb_root_hub_lost_power(xhci->main_hcd->self.root_hub);
+        usb_root_hub_lost_power(xhci->shared_hcd->self.root_hub);

         xhci_dbg(xhci, "Stop HCD\n");
         xhci_halt(xhci);
···
     /* Everything but endpoint 0 is disabled, so free or cache the rings. */
     last_freed_endpoint = 1;
     for (i = 1; i < 31; ++i) {
-        if (!virt_dev->eps[i].ring)
-            continue;
-        xhci_free_or_cache_endpoint_ring(xhci, virt_dev, i);
-        last_freed_endpoint = i;
+        struct xhci_virt_ep *ep = &virt_dev->eps[i];
+
+        if (ep->ep_state & EP_HAS_STREAMS) {
+            xhci_free_stream_info(xhci, ep->stream_info);
+            ep->stream_info = NULL;
+            ep->ep_state &= ~EP_HAS_STREAMS;
+        }
+
+        if (ep->ring) {
+            xhci_free_or_cache_endpoint_ring(xhci, virt_dev, i);
+            last_freed_endpoint = i;
+        }
     }
     xhci_dbg(xhci, "Output context after successful reset device cmd:\n");
     xhci_dbg_ctx(xhci, virt_dev->out_ctx, last_freed_endpoint);
+8-3
drivers/usb/host/xhci.h
···

 /* Code sharing between pci-quirks and xhci hcd */
 #include "xhci-ext-caps.h"
+#include "pci-quirks.h"

 /* xHCI PCI Configuration Registers */
 #define XHCI_SBRN_OFFSET    (0x60)
···
  * notification type that matches a bit set in this bit field.
  */
 #define DEV_NOTE_MASK       (0xffff)
-#define ENABLE_DEV_NOTE(x)  (1 << x)
+#define ENABLE_DEV_NOTE(x)  (1 << (x))
 /* Most of the device notification types should only be used for debug.
  * SW does need to pay attention to function wake notifications.
  */
···
 #define PORT_DEV_REMOVE     (1 << 30)
 /* Initiate a warm port reset - complete when PORT_WRC is '1' */
 #define PORT_WR             (1 << 31)
+
+/* We mark duplicate entries with -1 */
+#define DUPLICATE_ENTRY ((u8)(-1))

 /* Port Power Management Status and Control - port_power_base bitmasks */
 /* Inactivity timer value for transitions into U1, in microseconds.
···
 #define EP_STATE_STOPPED    3
 #define EP_STATE_ERROR      4
 /* Mult - Max number of burtst within an interval, in EP companion desc. */
-#define EP_MULT(p)          ((p & 0x3) << 8)
+#define EP_MULT(p)          (((p) & 0x3) << 8)
 /* bits 10:14 are Max Primary Streams */
 /* bit 15 is Linear Stream Array */
 /* Interval - period between requests to an endpoint - 125u increments. */
-#define EP_INTERVAL(p)      ((p & 0xff) << 16)
+#define EP_INTERVAL(p)      (((p) & 0xff) << 16)
 #define EP_INTERVAL_TO_UFRAMES(p)   (1 << (((p) >> 16) & 0xff))
 #define EP_MAXPSTREAMS_MASK (0x1f << 10)
 #define EP_MAXPSTREAMS(p)   (((p) << 10) & EP_MAXPSTREAMS_MASK)
···
 #define XHCI_LINK_TRB_QUIRK (1 << 0)
 #define XHCI_RESET_EP_QUIRK (1 << 1)
 #define XHCI_NEC_HOST       (1 << 2)
+#define XHCI_AMD_PLL_FIX    (1 << 3)
     /* There are two roothubs to keep track of bus suspend info for */
     struct xhci_bus_state   bus_state[2];
     /* Is each xHCI roothub port a USB 3.0, USB 2.0, or USB 1.1 port? */
+3-3
drivers/usb/musb/Kconfig
···
     select TWL4030_USB if MACH_OMAP_3430SDP
     select TWL6030_USB if MACH_OMAP_4430SDP || MACH_OMAP4_PANDA
     select USB_OTG_UTILS
-    tristate 'Inventra Highspeed Dual Role Controller (TI, ADI, ...)'
+    bool 'Inventra Highspeed Dual Role Controller (TI, ADI, ...)'
     help
       Say Y here if your system has a dual role high speed USB
       controller based on the Mentor Graphics silicon IP.  Then
···

       If you do not know what this is, please say N.

-      To compile this driver as a module, choose M here; the
-      module will be called "musb-hdrc".
+#      To compile this driver as a module, choose M here; the
+#      module will be called "musb-hdrc".

 choice
     prompt "Platform Glue Layer"
+24
drivers/usb/musb/blackfin.c
···
 #include <asm/cacheflush.h>

 #include "musb_core.h"
+#include "musbhsdma.h"
 #include "blackfin.h"

 struct bfin_glue {
···
     return -EIO;
 }

+static int bfin_musb_adjust_channel_params(struct dma_channel *channel,
+                u16 packet_sz, u8 *mode,
+                dma_addr_t *dma_addr, u32 *len)
+{
+    struct musb_dma_channel *musb_channel = channel->private_data;
+
+    /*
+     * Anomaly 05000450 might cause data corruption when using DMA
+     * MODE 1 transmits with short packet.  So to work around this,
+     * we truncate all MODE 1 transfers down to a multiple of the
+     * max packet size, and then do the last short packet transfer
+     * (if there is any) using MODE 0.
+     */
+    if (ANOMALY_05000450) {
+        if (musb_channel->transmit && *mode == 1)
+            *len = *len - (*len % packet_sz);
+    }
+
+    return 0;
+}
+
 static void bfin_musb_reg_init(struct musb *musb)
 {
     if (ANOMALY_05000346) {
···

     .vbus_status    = bfin_musb_vbus_status,
     .set_vbus       = bfin_musb_set_vbus,
+
+    .adjust_channel_params = bfin_musb_adjust_channel_params,
 };

 static u64 bfin_dmamask = DMA_BIT_MASK(32);
drivers/usb/musb/musb_core.c
···
     struct musb *musb = dev_to_musb(&pdev->dev);
     unsigned long flags;

+    pm_runtime_get_sync(musb->controller);
     spin_lock_irqsave(&musb->lock, flags);
     musb_platform_disable(musb);
     musb_generic_disable(musb);
···
     musb_writeb(musb->mregs, MUSB_DEVCTL, 0);
     musb_platform_exit(musb);

+    pm_runtime_put(musb->controller);
     /* FIXME power down */
 }

+5
drivers/usb/musb/musb_core.h
···
  * @try_ilde:   tries to idle the IP
  * @vbus_status: returns vbus status if possible
  * @set_vbus:   forces vbus status
+ * @channel_program: pre check for standard dma channel_program func
  */
 struct musb_platform_ops {
     int     (*init)(struct musb *musb);
···

     int     (*vbus_status)(struct musb *musb);
     void    (*set_vbus)(struct musb *musb, int on);
+
+    int     (*adjust_channel_params)(struct dma_channel *channel,
+                u16 packet_sz, u8 *mode,
+                dma_addr_t *dma_addr, u32 *len);
 };

 /*
+2-2
drivers/usb/musb/musb_gadget.c
···
         is_dma = 1;
         csr |= MUSB_TXCSR_P_WZC_BITS;
         csr &= ~(MUSB_TXCSR_DMAENAB | MUSB_TXCSR_P_UNDERRUN |
-                MUSB_TXCSR_TXPKTRDY);
+                MUSB_TXCSR_TXPKTRDY | MUSB_TXCSR_AUTOSET);
         musb_writew(epio, MUSB_TXCSR, csr);
         /* Ensure writebuffer is empty. */
         csr = musb_readw(epio, MUSB_TXCSR);
···
     }

     /* if the hardware doesn't have the request, easy ... */
-    if (musb_ep->req_list.next != &request->list || musb_ep->busy)
+    if (musb_ep->req_list.next != &req->list || musb_ep->busy)
         musb_g_giveback(musb_ep, request, -ECONNRESET);

     /* ... else abort the dma transfer ... */
+8
drivers/usb/musb/musbhsdma.c
···
     BUG_ON(channel->status == MUSB_DMA_STATUS_UNKNOWN ||
         channel->status == MUSB_DMA_STATUS_BUSY);

+    /* Let targets check/tweak the arguments */
+    if (musb->ops->adjust_channel_params) {
+        int ret = musb->ops->adjust_channel_params(channel,
+            packet_sz, &mode, &dma_addr, &len);
+        if (ret)
+            return ret;
+    }
+
     /*
      * The DMA engine in RTL1.8 and above cannot handle
      * DMA addresses that are not aligned to a 4 byte boundary.
+2-1
drivers/usb/musb/omap2430.c
···
     case USB_EVENT_VBUS:
         DBG(4, "VBUS Connect\n");

+#ifdef CONFIG_USB_GADGET_MUSB_HDRC
         if (musb->gadget_driver)
             pm_runtime_get_sync(musb->controller);
-
+#endif
         otg_init(musb->xceiv);
         break;

drivers/usb/serial/ftdi_sio_ids.h
···
  * Hameg HO820 and HO870 interface (using VID 0x0403)
  */
 #define HAMEG_HO820_PID         0xed74
+#define HAMEG_HO730_PID         0xed73
+#define HAMEG_HO720_PID         0xed72
 #define HAMEG_HO870_PID         0xed71

 /*
···
 /* Note: OCT US101 is also rebadged as Dick Smith Electronics (NZ) XH6381 */
 /* Also rebadged as Dick Smith Electronics (Aus) XH6451 */
 /* Also rebadged as SIIG Inc. model US2308 hardware version 1 */
+#define OCT_DK201_PID       0x0103  /* OCT DK201 USB docking station */
 #define OCT_US101_PID       0x0421  /* OCT US101 USB to RS-232 */

 /*
···
  */
 #define QIHARDWARE_VID          0x20B7
 #define MILKYMISTONE_JTAGSERIAL_PID 0x0713
+
+/*
+ * CTI GmbH RS485 Converter http://www.cti-lean.com/
+ */
+/* USB-485-Mini*/
+#define FTDI_CTI_MINI_PID   0xF608
+/* USB-Nano-485*/
+#define FTDI_CTI_NANO_PID   0xF60B
+

+5
drivers/usb/serial/option.c
···
 /* ONDA MT825UP HSDPA 14.2 modem */
 #define ONDA_MT825UP            0x000b

+/* Samsung products */
+#define SAMSUNG_VENDOR_ID       0x04e8
+#define SAMSUNG_PRODUCT_GT_B3730    0x6889
+
 /* some devices interfaces need special handling due to a number of reasons */
 enum option_blacklist_reason {
     OPTION_BLACKLIST_NONE = 0,
···
     { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD100) },
     { USB_DEVICE(CELOT_VENDOR_ID, CELOT_PRODUCT_CT680M) }, /* CT-650 CDMA 450 1xEVDO modem */
     { USB_DEVICE(ONDA_VENDOR_ID, ONDA_MT825UP) }, /* ONDA MT825UP modem */
+    { USB_DEVICE_AND_INTERFACE_INFO(SAMSUNG_VENDOR_ID, SAMSUNG_PRODUCT_GT_B3730, USB_CLASS_CDC_DATA, 0x00, 0x00) }, /* Samsung GT-B3730/GT-B3710 LTE USB modem.*/
     { } /* Terminating entry */
 };
 MODULE_DEVICE_TABLE(usb, option_ids);
+24-7
drivers/usb/serial/qcserial.c
···
     ifnum = intf->desc.bInterfaceNumber;
     dbg("This Interface = %d", ifnum);

-    data = serial->private = kzalloc(sizeof(struct usb_wwan_intf_private),
+    data = kzalloc(sizeof(struct usb_wwan_intf_private),
                     GFP_KERNEL);
     if (!data)
         return -ENOMEM;
···
             usb_endpoint_is_bulk_out(&intf->endpoint[1].desc)) {
             dbg("QDL port found");

-            if (serial->interface->num_altsetting == 1)
-                return 0;
+            if (serial->interface->num_altsetting == 1) {
+                retval = 0; /* Success */
+                break;
+            }

             retval = usb_set_interface(serial->dev, ifnum, 1);
             if (retval < 0) {
···
                 retval = -ENODEV;
                 kfree(data);
             }
-            return retval;
         }
         break;
···
                     "Could not set interface, error %d\n",
                     retval);
                 retval = -ENODEV;
+                kfree(data);
             }
         } else if (ifnum == 2) {
             dbg("Modem port found");
···
                 retval = -ENODEV;
                 kfree(data);
             }
-            return retval;
         } else if (ifnum==3) {
             /*
              * NMEA (serial line 9600 8N1)
···
                     "Could not set interface, error %d\n",
                     retval);
                 retval = -ENODEV;
+                kfree(data);
             }
         }
         break;
···
         dev_err(&serial->dev->dev,
             "unknown number of interfaces: %d\n", nintf);
         kfree(data);
-        return -ENODEV;
+        retval = -ENODEV;
     }

+    /* Set serial->private if not returning -ENODEV */
+    if (retval != -ENODEV)
+        usb_set_serial_data(serial, data);
     return retval;
+}
+
+static void qc_release(struct usb_serial *serial)
+{
+    struct usb_wwan_intf_private *priv = usb_get_serial_data(serial);
+
+    dbg("%s", __func__);
+
+    /* Call usb_wwan release & free the private data allocated in qcprobe */
+    usb_wwan_release(serial);
+    usb_set_serial_data(serial, NULL);
+    kfree(priv);
 }

 static struct usb_serial_driver qcdevice = {
···
     .chars_in_buffer     = usb_wwan_chars_in_buffer,
     .attach              = usb_wwan_startup,
     .disconnect          = usb_wwan_disconnect,
-    .release             = usb_wwan_release,
+    .release             = qc_release,
 #ifdef CONFIG_PM
     .suspend             = usb_wwan_suspend,
     .resume              = usb_wwan_resume,
+2-4
drivers/xen/events.c
···
                   unsigned long irqflags,
                   const char *devname, void *dev_id)
 {
-    unsigned int irq;
-    int retval;
+    int irq, retval;

     irq = bind_evtchn_to_irq(evtchn);
     if (irq < 0)
···
                 irq_handler_t handler,
                 unsigned long irqflags, const char *devname, void *dev_id)
 {
-    unsigned int irq;
-    int retval;
+    int irq, retval;

     irq = bind_virq_to_irq(virq, cpu);
     if (irq < 0)
fs/9p/fid.c
···

 struct p9_fid *v9fs_writeback_fid(struct dentry *dentry)
 {
-    int err, flags;
+    int err;
     struct p9_fid *fid;
-    struct v9fs_session_info *v9ses;

-    v9ses = v9fs_dentry2v9ses(dentry);
     fid = v9fs_fid_clone_with_uid(dentry, 0);
     if (IS_ERR(fid))
         goto error_out;
···
      * dirty pages. We always request for the open fid in read-write
      * mode so that a partial page write which result in page
      * read can work.
-     *
-     * we don't have a tsyncfs operation for older version
-     * of protocol. So make sure the write back fid is
-     * opened in O_SYNC mode.
      */
-    if (!v9fs_proto_dotl(v9ses))
-        flags = O_RDWR | O_SYNC;
-    else
-        flags = O_RDWR;
-
-    err = p9_client_open(fid, flags);
+    err = p9_client_open(fid, O_RDWR);
     if (err < 0) {
         p9_client_clunk(fid);
         fid = ERR_PTR(err);
-1
fs/9p/v9fs.h
···
     struct list_head slist; /* list of sessions registered with v9fs */
     struct backing_dev_info bdi;
     struct rw_semaphore rename_sem;
-    struct p9_fid *root_fid; /* Used for file system sync */
 };

 /* cache_validity flags */
+3-1
fs/9p/vfs_dentry.c
···
             retval = v9fs_refresh_inode_dotl(fid, inode);
         else
             retval = v9fs_refresh_inode(fid, inode);
-        if (retval <= 0)
+        if (retval == -ENOENT)
+            return 0;
+        if (retval < 0)
             return retval;
     }
 out_valid:
+1-1
fs/9p/vfs_inode_dotl.c
···
     fid = v9fs_fid_lookup(dentry);
     if (IS_ERR(fid)) {
         __putname(link);
-        link = ERR_PTR(PTR_ERR(fid));
+        link = ERR_CAST(fid);
         goto ndset;
     }
     retval = p9_client_readlink(fid, &target);
+56-24
fs/9p/vfs_super.c
···
         retval = PTR_ERR(inode);
         goto release_sb;
     }
+
     root = d_alloc_root(inode);
     if (!root) {
         iput(inode);
···
         p9stat_free(st);
         kfree(st);
     }
-    v9fs_fid_add(root, fid);
     retval = v9fs_get_acl(inode, fid);
     if (retval)
         goto release_sb;
-    /*
-     * Add the root fid to session info. This is used
-     * for file system sync. We want a cloned fid here
-     * so that we can do a sync_filesystem after a
-     * shrink_dcache_for_umount
-     */
-    v9ses->root_fid = v9fs_fid_clone(root);
-    if (IS_ERR(v9ses->root_fid)) {
-        retval = PTR_ERR(v9ses->root_fid);
-        goto release_sb;
-    }
+    v9fs_fid_add(root, fid);

     P9_DPRINTK(P9_DEBUG_VFS, " simple set mount, return 0\n");
     return dget(sb->s_root);
···
     v9fs_session_close(v9ses);
     kfree(v9ses);
     return ERR_PTR(retval);
+
 release_sb:
     /*
-     * we will do the session_close and root dentry
-     * release in the below call.
+     * we will do the session_close and root dentry release
+     * in the below call. But we need to clunk fid, because we haven't
+     * attached the fid to dentry so it won't get clunked
+     * automatically.
      */
+    p9_client_clunk(fid);
     deactivate_locked_super(sb);
     return ERR_PTR(retval);
 }
···
     P9_DPRINTK(P9_DEBUG_VFS, " %p\n", s);

     kill_anon_super(s);
-    p9_client_clunk(v9ses->root_fid);
+
     v9fs_session_cancel(v9ses);
     v9fs_session_close(v9ses);
     kfree(v9ses);
···
     return res;
 }

-static int v9fs_sync_fs(struct super_block *sb, int wait)
-{
-    struct v9fs_session_info *v9ses = sb->s_fs_info;
-
-    P9_DPRINTK(P9_DEBUG_VFS, "v9fs_sync_fs: super_block %p\n", sb);
-    return p9_client_sync_fs(v9ses->root_fid);
-}
-
 static int v9fs_drop_inode(struct inode *inode)
 {
     struct v9fs_session_info *v9ses;
···
     return 1;
 }

+static int v9fs_write_inode(struct inode *inode,
+                struct writeback_control *wbc)
+{
+    int ret;
+    struct p9_wstat wstat;
+    struct v9fs_inode *v9inode;
+    /*
+     * send an fsync request to server irrespective of
+     * wbc->sync_mode.
+     */
+    P9_DPRINTK(P9_DEBUG_VFS, "%s: inode %p\n", __func__, inode);
+    v9inode = V9FS_I(inode);
+    if (!v9inode->writeback_fid)
+        return 0;
+    v9fs_blank_wstat(&wstat);
+
+    ret = p9_client_wstat(v9inode->writeback_fid, &wstat);
+    if (ret < 0) {
+        __mark_inode_dirty(inode, I_DIRTY_DATASYNC);
+        return ret;
+    }
+    return 0;
+}
+
+static int v9fs_write_inode_dotl(struct inode *inode,
+                 struct writeback_control *wbc)
+{
+    int ret;
+    struct v9fs_inode *v9inode;
+    /*
+     * send an fsync request to server irrespective of
+     * wbc->sync_mode.
+     */
+    P9_DPRINTK(P9_DEBUG_VFS, "%s: inode %p\n", __func__, inode);
+    v9inode = V9FS_I(inode);
+    if (!v9inode->writeback_fid)
+        return 0;
+    ret = p9_client_fsync(v9inode->writeback_fid, 0);
+    if (ret < 0) {
+        __mark_inode_dirty(inode, I_DIRTY_DATASYNC);
+        return ret;
+    }
+    return 0;
+}
+
 static const struct super_operations v9fs_super_ops = {
     .alloc_inode = v9fs_alloc_inode,
     .destroy_inode = v9fs_destroy_inode,
···
     .evict_inode = v9fs_evict_inode,
     .show_options = generic_show_options,
     .umount_begin = v9fs_umount_begin,
+    .write_inode = v9fs_write_inode,
 };

 static const struct super_operations v9fs_super_ops_dotl = {
     .alloc_inode = v9fs_alloc_inode,
     .destroy_inode = v9fs_destroy_inode,
-    .sync_fs = v9fs_sync_fs,
     .statfs = v9fs_statfs,
     .drop_inode = v9fs_drop_inode,
     .evict_inode = v9fs_evict_inode,
     .show_options = generic_show_options,
     .umount_begin = v9fs_umount_begin,
+    .write_inode = v9fs_write_inode_dotl,
 };

 struct file_system_type v9fs_fs_type = {
fs/btrfs/acl.c
···

     if (value) {
         acl = posix_acl_from_xattr(value, size);
-        if (acl == NULL) {
-            value = NULL;
-            size = 0;
+        if (acl) {
+            ret = posix_acl_valid(acl);
+            if (ret)
+                goto out;
         } else if (IS_ERR(acl)) {
             return PTR_ERR(acl);
         }
     }

     ret = btrfs_set_acl(NULL, dentry->d_inode, acl, type);
-
+out:
     posix_acl_release(acl);

     return ret;
+8-1
fs/btrfs/ctree.h
···
      */
     unsigned long reservation_progress;

-    int full;           /* indicates that we cannot allocate any more
+    int full:1;         /* indicates that we cannot allocate any more
                    chunks for this space */
+    int chunk_alloc:1;  /* set if we are allocating a chunk */
+
     int force_alloc;    /* set if we need to force a chunk alloc for
                    this space */
···
 int btrfs_mark_extent_written(struct btrfs_trans_handle *trans,
                   struct inode *inode, u64 start, u64 end);
 int btrfs_release_file(struct inode *inode, struct file *file);
+void btrfs_drop_pages(struct page **pages, size_t num_pages);
+int btrfs_dirty_pages(struct btrfs_root *root, struct inode *inode,
+              struct page **pages, size_t num_pages,
+              loff_t pos, size_t write_bytes,
+              struct extent_state **cached);

 /* tree-defrag.c */
 int btrfs_defrag_leaves(struct btrfs_trans_handle *trans,
fs/btrfs/extent-tree.c
···
 #include "locking.h"
 #include "free-space-cache.h"

+/* control flags for do_chunk_alloc's force field
+ * CHUNK_ALLOC_NO_FORCE means to only allocate a chunk
+ * if we really need one.
+ *
+ * CHUNK_ALLOC_FORCE means it must try to allocate one
+ *
+ * CHUNK_ALLOC_LIMITED means to only try and allocate one
+ * if we have very few chunks already allocated.  This is
+ * used as part of the clustering code to help make sure
+ * we have a good pool of storage to cluster in, without
+ * filling the FS with empty chunks
+ *
+ */
+enum {
+    CHUNK_ALLOC_NO_FORCE = 0,
+    CHUNK_ALLOC_FORCE = 1,
+    CHUNK_ALLOC_LIMITED = 2,
+};
+
 static int update_block_group(struct btrfs_trans_handle *trans,
                   struct btrfs_root *root,
                   u64 bytenr, u64 num_bytes, int alloc);
···
     found->bytes_readonly = 0;
     found->bytes_may_use = 0;
     found->full = 0;
-    found->force_alloc = 0;
+    found->force_alloc = CHUNK_ALLOC_NO_FORCE;
+    found->chunk_alloc = 0;
     *space_info = found;
     list_add_rcu(&found->list, &info->space_info);
     atomic_set(&found->caching_threads, 0);
···
     if (!data_sinfo->full && alloc_chunk) {
         u64 alloc_target;

-        data_sinfo->force_alloc = 1;
+        data_sinfo->force_alloc = CHUNK_ALLOC_FORCE;
         spin_unlock(&data_sinfo->lock);
 alloc:
         alloc_target = btrfs_get_alloc_profile(root, 1);
···

         ret = do_chunk_alloc(trans, root->fs_info->extent_root,
                      bytes + 2 * 1024 * 1024,
-                     alloc_target, 0);
+                     alloc_target,
+                     CHUNK_ALLOC_NO_FORCE);
         btrfs_end_transaction(trans, root);
         if (ret < 0) {
             if (ret != -ENOSPC)
···
     rcu_read_lock();
     list_for_each_entry_rcu(found, head, list) {
         if (found->flags & BTRFS_BLOCK_GROUP_METADATA)
-            found->force_alloc = 1;
+            found->force_alloc = CHUNK_ALLOC_FORCE;
     }
     rcu_read_unlock();
 }

 static int should_alloc_chunk(struct btrfs_root *root,
-                  struct btrfs_space_info *sinfo, u64 alloc_bytes)
+                  struct btrfs_space_info *sinfo, u64 alloc_bytes,
+                  int force)
 {
     u64 num_bytes = sinfo->total_bytes - sinfo->bytes_readonly;
+    u64 num_allocated = sinfo->bytes_used + sinfo->bytes_reserved;
     u64 thresh;

-    if (sinfo->bytes_used + sinfo->bytes_reserved +
-        alloc_bytes + 256 * 1024 * 1024 < num_bytes)
+    if (force == CHUNK_ALLOC_FORCE)
+        return 1;
+
+    /*
+     * in limited mode, we want to have some free space up to
+     * about 1% of the FS size.
+     */
+    if (force == CHUNK_ALLOC_LIMITED) {
+        thresh = btrfs_super_total_bytes(&root->fs_info->super_copy);
+        thresh = max_t(u64, 64 * 1024 * 1024,
+                   div_factor_fine(thresh, 1));
+
+        if (num_bytes - num_allocated < thresh)
+            return 1;
+    }
+
+    /*
+     * we have two similar checks here, one based on percentage
+     * and once based on a hard number of 256MB.  The idea
+     * is that if we have a good amount of free
+     * room, don't allocate a chunk.  A good mount is
+     * less than 80% utilized of the chunks we have allocated,
+     * or more than 256MB free
+     */
+    if (num_allocated + alloc_bytes + 256 * 1024 * 1024 < num_bytes)
         return 0;

-    if (sinfo->bytes_used + sinfo->bytes_reserved +
-        alloc_bytes < div_factor(num_bytes, 8))
+    if (num_allocated + alloc_bytes < div_factor(num_bytes, 8))
         return 0;

     thresh = btrfs_super_total_bytes(&root->fs_info->super_copy);
+
+    /* 256MB or 5% of the FS */
     thresh = max_t(u64, 256 * 1024 * 1024, div_factor_fine(thresh, 5));

     if (num_bytes > thresh && sinfo->bytes_used < div_factor(num_bytes, 3))
         return 0;
-
     return 1;
 }

···
 {
     struct btrfs_space_info *space_info;
     struct btrfs_fs_info *fs_info = extent_root->fs_info;
+    int wait_for_alloc = 0;
     int ret = 0;
-
-    mutex_lock(&fs_info->chunk_mutex);

     flags = btrfs_reduce_alloc_profile(extent_root, flags);
···
     }
     BUG_ON(!space_info);

+again:
     spin_lock(&space_info->lock);
     if (space_info->force_alloc)
-        force = 1;
+        force = space_info->force_alloc;
     if (space_info->full) {
         spin_unlock(&space_info->lock);
-        goto out;
+        return 0;
     }

-    if (!force && !should_alloc_chunk(extent_root, space_info,
-                      alloc_bytes)) {
+    if (!should_alloc_chunk(extent_root, space_info, alloc_bytes, force)) {
         spin_unlock(&space_info->lock);
-        goto out;
+        return 0;
+    } else if (space_info->chunk_alloc) {
+        wait_for_alloc = 1;
+    } else {
+        space_info->chunk_alloc = 1;
     }
+
     spin_unlock(&space_info->lock);
+
+    mutex_lock(&fs_info->chunk_mutex);
+
+    /*
+     * The chunk_mutex is held throughout the entirety of a chunk
+     * allocation, so once we've acquired the chunk_mutex we know that the
+     * other guy is done and we need to recheck and see if we should
+     * allocate.
+     */
+    if (wait_for_alloc) {
+        mutex_unlock(&fs_info->chunk_mutex);
+        wait_for_alloc = 0;
+        goto again;
+    }

     /*
      * If we have mixed data/metadata chunks we want to make sure we keep
···
         space_info->full = 1;
     else
         ret = 1;
-    space_info->force_alloc = 0;
+
+    space_info->force_alloc = CHUNK_ALLOC_NO_FORCE;
+    space_info->chunk_alloc = 0;
     spin_unlock(&space_info->lock);
-out:
     mutex_unlock(&extent_root->fs_info->chunk_mutex);
     return ret;
 }
···

     if (allowed_chunk_alloc) {
         ret = do_chunk_alloc(trans, root, num_bytes +
-                     2 * 1024 * 1024, data, 1);
+                     2 * 1024 * 1024, data,
+                     CHUNK_ALLOC_LIMITED);
         allowed_chunk_alloc = 0;
         done_chunk_alloc = 1;
-    } else if (!done_chunk_alloc) {
-        space_info->force_alloc = 1;
+    } else if (!done_chunk_alloc &&
+           space_info->force_alloc == CHUNK_ALLOC_NO_FORCE) {
+        space_info->force_alloc = CHUNK_ALLOC_LIMITED;
     }

     if (loop < LOOP_NO_EMPTY_SIZE) {
···
      */
     if (empty_size || root->ref_cows)
         ret = do_chunk_alloc(trans, root->fs_info->extent_root,
-                     num_bytes + 2 * 1024 * 1024, data, 0);
+                     num_bytes + 2 * 1024 * 1024, data,
+                     CHUNK_ALLOC_NO_FORCE);

     WARN_ON(num_bytes < root->sectorsize);
     ret = find_free_extent(trans, root, num_bytes, empty_size,
···
         num_bytes = num_bytes & ~(root->sectorsize - 1);
         num_bytes = max(num_bytes, min_alloc_size);
         do_chunk_alloc(trans, root->fs_info->extent_root,
-                   num_bytes, data, 1);
+                   num_bytes, data, CHUNK_ALLOC_FORCE);
         goto again;
     }
     if (ret == -ENOSPC && btrfs_test_opt(root, ENOSPC_DEBUG)) {
···

     alloc_flags = update_block_group_flags(root, cache->flags);
     if (alloc_flags != cache->flags)
-        do_chunk_alloc(trans, root, 2 * 1024 * 1024, alloc_flags, 1);
+        do_chunk_alloc(trans, root, 2 * 1024 * 1024, alloc_flags,
+                   CHUNK_ALLOC_FORCE);

     ret = set_block_group_ro(cache);
     if (!ret)
         goto out;
     alloc_flags = get_alloc_profile(root, cache->space_info->flags);
-    ret = do_chunk_alloc(trans, root, 2 * 1024 * 1024, alloc_flags, 1);
+    ret = do_chunk_alloc(trans, root, 2 * 1024 * 1024, alloc_flags,
+                 CHUNK_ALLOC_FORCE);
     if (ret < 0)
         goto out;
     ret = set_block_group_ro(cache);
···
             struct btrfs_root *root, u64 type)
 {
     u64 alloc_flags = get_alloc_profile(root, type);
-    return do_chunk_alloc(trans, root, 2 * 1024 * 1024, alloc_flags, 1);
+    return do_chunk_alloc(trans, root, 2 * 1024 * 1024, alloc_flags,
+                  CHUNK_ALLOC_FORCE);
 }

 /*
+62-20
fs/btrfs/extent_io.c
···
 	}
 }

+static void uncache_state(struct extent_state **cached_ptr)
+{
+	if (cached_ptr && (*cached_ptr)) {
+		struct extent_state *state = *cached_ptr;
+		*cached_ptr = NULL;
+		free_extent_state(state);
+	}
+}
+
 /*
  * set some bits on a range in the tree.  This may require allocations or
  * sleeping, so the gfp mask is used to indicate what is allowed.
···
 }

 int set_extent_uptodate(struct extent_io_tree *tree, u64 start, u64 end,
-			gfp_t mask)
+			struct extent_state **cached_state, gfp_t mask)
 {
-	return set_extent_bit(tree, start, end, EXTENT_UPTODATE, 0, NULL,
-			      NULL, mask);
+	return set_extent_bit(tree, start, end, EXTENT_UPTODATE, 0,
+			      NULL, cached_state, mask);
 }

 static int clear_extent_uptodate(struct extent_io_tree *tree, u64 start,
···
 			  mask);
 }

-int unlock_extent(struct extent_io_tree *tree, u64 start, u64 end,
-		  gfp_t mask)
+int unlock_extent(struct extent_io_tree *tree, u64 start, u64 end, gfp_t mask)
 {
 	return clear_extent_bit(tree, start, end, EXTENT_LOCKED, 1, 0, NULL,
 				mask);
···

 	do {
 		struct page *page = bvec->bv_page;
+		struct extent_state *cached = NULL;
+		struct extent_state *state;
+
 		tree = &BTRFS_I(page->mapping->host)->io_tree;

 		start = ((u64)page->index << PAGE_CACHE_SHIFT) +
···
 		if (++bvec <= bvec_end)
 			prefetchw(&bvec->bv_page->flags);

+		spin_lock(&tree->lock);
+		state = find_first_extent_bit_state(tree, start, EXTENT_LOCKED);
+		if (state && state->start == start) {
+			/*
+			 * take a reference on the state, unlock will drop
+			 * the ref
+			 */
+			cache_state(state, &cached);
+		}
+		spin_unlock(&tree->lock);
+
 		if (uptodate && tree->ops && tree->ops->readpage_end_io_hook) {
 			ret = tree->ops->readpage_end_io_hook(page, start, end,
-							      NULL);
+							      state);
 			if (ret)
 				uptodate = 0;
 		}
···
 				test_bit(BIO_UPTODATE, &bio->bi_flags);
 			if (err)
 				uptodate = 0;
+			uncache_state(&cached);
 			continue;
 		}
 	}

 		if (uptodate) {
-			set_extent_uptodate(tree, start, end,
+			set_extent_uptodate(tree, start, end, &cached,
 					    GFP_ATOMIC);
 		}
-		unlock_extent(tree, start, end, GFP_ATOMIC);
+		unlock_extent_cached(tree, start, end, &cached, GFP_ATOMIC);

 		if (whole_page) {
 			if (uptodate) {
···

 	do {
 		struct page *page = bvec->bv_page;
+		struct extent_state *cached = NULL;
 		tree = &BTRFS_I(page->mapping->host)->io_tree;

 		start = ((u64)page->index << PAGE_CACHE_SHIFT) +
···
 			prefetchw(&bvec->bv_page->flags);

 		if (uptodate) {
-			set_extent_uptodate(tree, start, end, GFP_ATOMIC);
+			set_extent_uptodate(tree, start, end, &cached,
+					    GFP_ATOMIC);
 		} else {
 			ClearPageUptodate(page);
 			SetPageError(page);
 		}

-		unlock_extent(tree, start, end, GFP_ATOMIC);
+		unlock_extent_cached(tree, start, end, &cached, GFP_ATOMIC);

 	} while (bvec >= bio->bi_io_vec);

···
 	while (cur <= end) {
 		if (cur >= last_byte) {
 			char *userpage;
+			struct extent_state *cached = NULL;
+
 			iosize = PAGE_CACHE_SIZE - page_offset;
 			userpage = kmap_atomic(page, KM_USER0);
 			memset(userpage + page_offset, 0, iosize);
 			flush_dcache_page(page);
 			kunmap_atomic(userpage, KM_USER0);
 			set_extent_uptodate(tree, cur, cur + iosize - 1,
-					    GFP_NOFS);
-			unlock_extent(tree, cur, cur + iosize - 1, GFP_NOFS);
+					    &cached, GFP_NOFS);
+			unlock_extent_cached(tree, cur, cur + iosize - 1,
+					     &cached, GFP_NOFS);
 			break;
 		}
 		em = get_extent(inode, page, page_offset, cur,
···
 		/* we've found a hole, just zero and go on */
 		if (block_start == EXTENT_MAP_HOLE) {
 			char *userpage;
+			struct extent_state *cached = NULL;
+
 			userpage = kmap_atomic(page, KM_USER0);
 			memset(userpage + page_offset, 0, iosize);
 			flush_dcache_page(page);
 			kunmap_atomic(userpage, KM_USER0);

 			set_extent_uptodate(tree, cur, cur + iosize - 1,
-					    GFP_NOFS);
-			unlock_extent(tree, cur, cur + iosize - 1, GFP_NOFS);
+					    &cached, GFP_NOFS);
+			unlock_extent_cached(tree, cur, cur + iosize - 1,
+					     &cached, GFP_NOFS);
 			cur = cur + iosize;
 			page_offset += iosize;
 			continue;
···
 			iocount++;
 			block_start = block_start + iosize;
 		} else {
-			set_extent_uptodate(tree, block_start, cur_end,
+			struct extent_state *cached = NULL;
+
+			set_extent_uptodate(tree, block_start, cur_end, &cached,
 					    GFP_NOFS);
-			unlock_extent(tree, block_start, cur_end, GFP_NOFS);
+			unlock_extent_cached(tree, block_start, cur_end,
+					     &cached, GFP_NOFS);
 			block_start = cur_end + 1;
 		}
 		page_offset = block_start & (PAGE_CACHE_SIZE - 1);
···
 	num_pages = num_extent_pages(eb->start, eb->len);

 	set_extent_uptodate(tree, eb->start, eb->start + eb->len - 1,
-			    GFP_NOFS);
+			    NULL, GFP_NOFS);
 	for (i = 0; i < num_pages; i++) {
 		page = extent_buffer_page(eb, i);
 		if ((i == 0 && (eb->start & (PAGE_CACHE_SIZE - 1))) ||
···
 	kunmap_atomic(dst_kaddr, KM_USER0);
 }

+static inline bool areas_overlap(unsigned long src, unsigned long dst, unsigned long len)
+{
+	unsigned long distance = (src > dst) ? src - dst : dst - src;
+	return distance < len;
+}
+
 static void copy_pages(struct page *dst_page, struct page *src_page,
 		       unsigned long dst_off, unsigned long src_off,
 		       unsigned long len)
···
 	char *dst_kaddr = kmap_atomic(dst_page, KM_USER0);
 	char *src_kaddr;

-	if (dst_page != src_page)
+	if (dst_page != src_page) {
 		src_kaddr = kmap_atomic(src_page, KM_USER1);
-	else
+	} else {
 		src_kaddr = dst_kaddr;
+		BUG_ON(areas_overlap(src_off, dst_off, len));
+	}

 	memcpy(dst_kaddr + dst_off, src_kaddr + src_off, len);
 	kunmap_atomic(dst_kaddr, KM_USER0);
···
 		       "len %lu len %lu\n", dst_offset, len, dst->len);
 		BUG_ON(1);
 	}
-	if (dst_offset < src_offset) {
+	if (!areas_overlap(src_offset, dst_offset, len)) {
 		memcpy_extent_buffer(dst, dst_offset, src_offset, len);
 		return;
 	}
fs/btrfs/super.c

···
 	Opt_compress_type, Opt_compress_force, Opt_compress_force_type,
 	Opt_notreelog, Opt_ratio, Opt_flushoncommit, Opt_discard,
 	Opt_space_cache, Opt_clear_cache, Opt_user_subvol_rm_allowed,
-	Opt_enospc_debug, Opt_err,
+	Opt_enospc_debug, Opt_subvolrootid, Opt_err,
 };

 static match_table_t tokens = {
···
 	{Opt_clear_cache, "clear_cache"},
 	{Opt_user_subvol_rm_allowed, "user_subvol_rm_allowed"},
 	{Opt_enospc_debug, "enospc_debug"},
+	{Opt_subvolrootid, "subvolrootid=%d"},
 	{Opt_err, NULL},
 };

···
 			break;
 		case Opt_subvol:
 		case Opt_subvolid:
+		case Opt_subvolrootid:
 		case Opt_device:
 			/*
 			 * These are parsed by btrfs_parse_early_options
···
  */
 static int btrfs_parse_early_options(const char *options, fmode_t flags,
 		void *holder, char **subvol_name, u64 *subvol_objectid,
-		struct btrfs_fs_devices **fs_devices)
+		u64 *subvol_rootid, struct btrfs_fs_devices **fs_devices)
 {
 	substring_t args[MAX_OPT_ARGS];
 	char *opts, *orig, *p;
···
 					BTRFS_FS_TREE_OBJECTID;
 				else
 					*subvol_objectid = intarg;
+			}
+			break;
+		case Opt_subvolrootid:
+			intarg = 0;
+			error = match_int(&args[0], &intarg);
+			if (!error) {
+				/* we want the original fs_tree */
+				if (!intarg)
+					*subvol_rootid =
+						BTRFS_FS_TREE_OBJECTID;
+				else
+					*subvol_rootid = intarg;
 			}
 			break;
 		case Opt_device:
···
 	fmode_t mode = FMODE_READ;
 	char *subvol_name = NULL;
 	u64 subvol_objectid = 0;
+	u64 subvol_rootid = 0;
 	int error = 0;

 	if (!(flags & MS_RDONLY))
···

 	error = btrfs_parse_early_options(data, mode, fs_type,
 					  &subvol_name, &subvol_objectid,
-					  &fs_devices);
+					  &subvol_rootid, &fs_devices);
 	if (error)
 		return ERR_PTR(error);

···
 		s->s_flags |= MS_ACTIVE;
 	}

-	root = get_default_root(s, subvol_objectid);
-	if (IS_ERR(root)) {
-		error = PTR_ERR(root);
-		deactivate_locked_super(s);
-		goto error_free_subvol_name;
-	}
 	/* if they gave us a subvolume name bind mount into that */
 	if (strcmp(subvol_name, ".")) {
 		struct dentry *new_root;
+
+		root = get_default_root(s, subvol_rootid);
+		if (IS_ERR(root)) {
+			error = PTR_ERR(root);
+			deactivate_locked_super(s);
+			goto error_free_subvol_name;
+		}
+
 		mutex_lock(&root->d_inode->i_mutex);
 		new_root = lookup_one_len(subvol_name, root,
 					  strlen(subvol_name));
···
 		}
 		dput(root);
 		root = new_root;
+	} else {
+		root = get_default_root(s, subvol_objectid);
+		if (IS_ERR(root)) {
+			error = PTR_ERR(root);
+			deactivate_locked_super(s);
+			goto error_free_subvol_name;
+		}
 	}

 	kfree(subvol_name);
fs/btrfs/transaction.c (+26 -22)

···

 static noinline void put_transaction(struct btrfs_transaction *transaction)
 {
-	WARN_ON(transaction->use_count == 0);
-	transaction->use_count--;
-	if (transaction->use_count == 0) {
-		list_del_init(&transaction->list);
+	WARN_ON(atomic_read(&transaction->use_count) == 0);
+	if (atomic_dec_and_test(&transaction->use_count)) {
 		memset(transaction, 0, sizeof(*transaction));
 		kmem_cache_free(btrfs_transaction_cachep, transaction);
 	}
···
 	if (!cur_trans)
 		return -ENOMEM;
 	root->fs_info->generation++;
-	cur_trans->num_writers = 1;
+	atomic_set(&cur_trans->num_writers, 1);
 	cur_trans->num_joined = 0;
 	cur_trans->transid = root->fs_info->generation;
 	init_waitqueue_head(&cur_trans->writer_wait);
 	init_waitqueue_head(&cur_trans->commit_wait);
 	cur_trans->in_commit = 0;
 	cur_trans->blocked = 0;
-	cur_trans->use_count = 1;
+	atomic_set(&cur_trans->use_count, 1);
 	cur_trans->commit_done = 0;
 	cur_trans->start_time = get_seconds();

···
 		root->fs_info->running_transaction = cur_trans;
 		spin_unlock(&root->fs_info->new_trans_lock);
 	} else {
-		cur_trans->num_writers++;
+		atomic_inc(&cur_trans->num_writers);
 		cur_trans->num_joined++;
 	}

···
 	cur_trans = root->fs_info->running_transaction;
 	if (cur_trans && cur_trans->blocked) {
 		DEFINE_WAIT(wait);
-		cur_trans->use_count++;
+		atomic_inc(&cur_trans->use_count);
 		while (1) {
 			prepare_to_wait(&root->fs_info->transaction_wait, &wait,
 					TASK_UNINTERRUPTIBLE);
···
 {
 	struct btrfs_trans_handle *h;
 	struct btrfs_transaction *cur_trans;
+	int retries = 0;
 	int ret;

 	if (root->fs_info->fs_state & BTRFS_SUPER_FLAG_ERROR)
···
 	}

 	cur_trans = root->fs_info->running_transaction;
-	cur_trans->use_count++;
+	atomic_inc(&cur_trans->use_count);
 	if (type != TRANS_JOIN_NOLOCK)
 		mutex_unlock(&root->fs_info->trans_mutex);
···

 	if (num_items > 0) {
 		ret = btrfs_trans_reserve_metadata(h, root, num_items);
-		if (ret == -EAGAIN) {
+		if (ret == -EAGAIN && !retries) {
+			retries++;
 			btrfs_commit_transaction(h, root);
 			goto again;
+		} else if (ret == -EAGAIN) {
+			/*
+			 * We have already retried and got EAGAIN, so really we
+			 * don't have space, so set ret to -ENOSPC.
+			 */
+			ret = -ENOSPC;
 		}
+
 		if (ret < 0) {
 			btrfs_end_transaction(h, root);
 			return ERR_PTR(ret);
···
 		goto out_unlock;  /* nothing committing|committed */
 	}

-	cur_trans->use_count++;
+	atomic_inc(&cur_trans->use_count);
 	mutex_unlock(&root->fs_info->trans_mutex);

 	wait_for_commit(root, cur_trans);
···
 		wake_up_process(info->transaction_kthread);
 	}

-	if (lock)
-		mutex_lock(&info->trans_mutex);
 	WARN_ON(cur_trans != info->running_transaction);
-	WARN_ON(cur_trans->num_writers < 1);
-	cur_trans->num_writers--;
+	WARN_ON(atomic_read(&cur_trans->num_writers) < 1);
+	atomic_dec(&cur_trans->num_writers);

 	smp_mb();
 	if (waitqueue_active(&cur_trans->writer_wait))
 		wake_up(&cur_trans->writer_wait);
 	put_transaction(cur_trans);
-	if (lock)
-		mutex_unlock(&info->trans_mutex);

 	if (current->journal_info == trans)
 		current->journal_info = NULL;
···
 	/* take transaction reference */
 	mutex_lock(&root->fs_info->trans_mutex);
 	cur_trans = trans->transaction;
-	cur_trans->use_count++;
+	atomic_inc(&cur_trans->use_count);
 	mutex_unlock(&root->fs_info->trans_mutex);

 	btrfs_end_transaction(trans, root);
···

 	mutex_lock(&root->fs_info->trans_mutex);
 	if (cur_trans->in_commit) {
-		cur_trans->use_count++;
+		atomic_inc(&cur_trans->use_count);
 		mutex_unlock(&root->fs_info->trans_mutex);
 		btrfs_end_transaction(trans, root);

···
 		prev_trans = list_entry(cur_trans->list.prev,
 					struct btrfs_transaction, list);
 		if (!prev_trans->commit_done) {
-			prev_trans->use_count++;
+			atomic_inc(&prev_trans->use_count);
 			mutex_unlock(&root->fs_info->trans_mutex);

 			wait_for_commit(root, prev_trans);
···
 				TASK_UNINTERRUPTIBLE);

 		smp_mb();
-		if (cur_trans->num_writers > 1)
+		if (atomic_read(&cur_trans->num_writers) > 1)
 			schedule_timeout(MAX_SCHEDULE_TIMEOUT);
 		else if (should_grow)
 			schedule_timeout(1);

 		mutex_lock(&root->fs_info->trans_mutex);
 		finish_wait(&cur_trans->writer_wait, &wait);
-	} while (cur_trans->num_writers > 1 ||
+	} while (atomic_read(&cur_trans->num_writers) > 1 ||
 		 (should_grow && cur_trans->num_joined != joined));

 	ret = create_pending_snapshots(trans, root->fs_info);
···

 	wake_up(&cur_trans->commit_wait);

+	list_del_init(&cur_trans->list);
 	put_transaction(cur_trans);
 	put_transaction(cur_trans);

fs/btrfs/transaction.h (+2 -2)

···
 	 * total writers in this transaction, it must be zero before the
 	 * transaction can end
 	 */
-	unsigned long num_writers;
+	atomic_t num_writers;

 	unsigned long num_joined;
 	int in_commit;
-	int use_count;
+	atomic_t use_count;
 	int commit_done;
 	int blocked;
 	struct list_head list;
fs/btrfs/xattr.c (+12 -21)

···
 	struct btrfs_path *path;
 	struct extent_buffer *leaf;
 	struct btrfs_dir_item *di;
-	int ret = 0, slot, advance;
+	int ret = 0, slot;
 	size_t total_size = 0, size_left = size;
 	unsigned long name_ptr;
 	size_t name_len;
-	u32 nritems;

 	/*
 	 * ok we want all objects associated with this id.
···
 	ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
 	if (ret < 0)
 		goto err;
-	advance = 0;
+
 	while (1) {
 		leaf = path->nodes[0];
-		nritems = btrfs_header_nritems(leaf);
 		slot = path->slots[0];

 		/* this is where we start walking through the path */
-		if (advance || slot >= nritems) {
+		if (slot >= btrfs_header_nritems(leaf)) {
 			/*
 			 * if we've reached the last slot in this leaf we need
 			 * to go to the next leaf and reset everything
 			 */
-			if (slot >= nritems-1) {
-				ret = btrfs_next_leaf(root, path);
-				if (ret)
-					break;
-				leaf = path->nodes[0];
-				nritems = btrfs_header_nritems(leaf);
-				slot = path->slots[0];
-			} else {
-				/*
-				 * just walking through the slots on this leaf
-				 */
-				slot++;
-				path->slots[0]++;
-			}
+			ret = btrfs_next_leaf(root, path);
+			if (ret < 0)
+				goto err;
+			else if (ret > 0)
+				break;
+			continue;
 		}
-		advance = 1;

 		btrfs_item_key_to_cpu(leaf, &found_key, slot);

···

 		/* we are just looking for how big our buffer needs to be */
 		if (!size)
-			continue;
+			goto next;

 		if (!buffer || (name_len + 1) > size_left) {
 			ret = -ERANGE;
···

 		size_left -= name_len + 1;
 		buffer += name_len + 1;
+next:
+		path->slots[0]++;
 	}
 	ret = total_size;

fs/cifs/README (-16)

···
 		support and want to map the uid and gid fields
 		to values supplied at mount (rather than the
 		actual values, then set this to zero. (default 1)
-Experimental	When set to 1 used to enable certain experimental
-		features (currently enables multipage writes
-		when signing is enabled, the multipage write
-		performance enhancement was disabled when
-		signing turned on in case buffer was modified
-		just before it was sent, also this flag will
-		be used to use the new experimental directory change
-		notification code). When set to 2 enables
-		an additional experimental feature, "raw ntlmssp"
-		session establishment support (which allows
-		specifying "sec=ntlmssp" on mount). The Linux cifs
-		module will use ntlmv2 authentication encapsulated
-		in "raw ntlmssp" (not using SPNEGO) when
-		"sec=ntlmssp" is specified on mount.
-		This support also requires building cifs with
-		the CONFIG_CIFS_EXPERIMENTAL configuration flag.

 These experimental features and tracing can be enabled by changing flags in
 /proc/fs/cifs (after the cifs module has been installed or built into the
fs/cifs/cache.c (+1 -1)

···
  */
 struct cifs_server_key {
 	uint16_t	family;		/* address family */
-	uint16_t	port;		/* IP port */
+	__be16		port;		/* IP port */
 	union {
 		struct in_addr	ipv4_addr;
 		struct in6_addr	ipv6_addr;
fs/cifs/cifs_unicode.c

···
 	case UNI_COLON:
 		*target = ':';
 		break;
-	case UNI_ASTERIK:
+	case UNI_ASTERISK:
 		*target = '*';
 		break;
 	case UNI_QUESTION:
···
  * names are little endian 16 bit Unicode on the wire
  */
 int
-cifsConvertToUCS(__le16 *target, const char *source, int maxlen,
+cifsConvertToUCS(__le16 *target, const char *source, int srclen,
 		 const struct nls_table *cp, int mapChars)
 {
 	int i, j, charlen;
-	int len_remaining = maxlen;
 	char src_char;
-	__u16 temp;
+	__le16 dst_char;
+	wchar_t tmp;

 	if (!mapChars)
 		return cifs_strtoUCS(target, source, PATH_MAX, cp);

-	for (i = 0, j = 0; i < maxlen; j++) {
+	for (i = 0, j = 0; i < srclen; j++) {
 		src_char = source[i];
 		switch (src_char) {
 		case 0:
-			put_unaligned_le16(0, &target[j]);
+			put_unaligned(0, &target[j]);
 			goto ctoUCS_out;
 		case ':':
-			temp = UNI_COLON;
+			dst_char = cpu_to_le16(UNI_COLON);
 			break;
 		case '*':
-			temp = UNI_ASTERIK;
+			dst_char = cpu_to_le16(UNI_ASTERISK);
 			break;
 		case '?':
-			temp = UNI_QUESTION;
+			dst_char = cpu_to_le16(UNI_QUESTION);
 			break;
 		case '<':
-			temp = UNI_LESSTHAN;
+			dst_char = cpu_to_le16(UNI_LESSTHAN);
 			break;
 		case '>':
-			temp = UNI_GRTRTHAN;
+			dst_char = cpu_to_le16(UNI_GRTRTHAN);
 			break;
 		case '|':
-			temp = UNI_PIPE;
+			dst_char = cpu_to_le16(UNI_PIPE);
 			break;
 		/*
 		 * FIXME: We can not handle remapping backslash (UNI_SLASH)
···
 		 * as they use backslash as separator.
 		 */
 		default:
-			charlen = cp->char2uni(source+i, len_remaining,
-					       &temp);
+			charlen = cp->char2uni(source + i, srclen - i, &tmp);
+			dst_char = cpu_to_le16(tmp);
+
 			/*
 			 * if no match, use question mark, which at least in
 			 * some cases serves as wild card
 			 */
 			if (charlen < 1) {
-				temp = 0x003f;
+				dst_char = cpu_to_le16(0x003f);
 				charlen = 1;
 			}
-			len_remaining -= charlen;
 			/*
 			 * character may take more than one byte in the source
 			 * string, but will take exactly two bytes in the
···
 			i += charlen;
 			continue;
 		}
-		put_unaligned_le16(temp, &target[j]);
+		put_unaligned(dst_char, &target[j]);
 		i++; /* move to next char in source string */
-		len_remaining--;
 	}

 ctoUCS_out:
fs/cifs/cifs_unicode.h (+1 -1)

···
  * reserved symbols (along with \ and /), otherwise illegal to store
  * in filenames in NTFS
  */
-#define UNI_ASTERIK	(__u16) ('*' + 0xF000)
+#define UNI_ASTERISK	(__u16) ('*' + 0xF000)
 #define UNI_QUESTION	(__u16) ('?' + 0xF000)
 #define UNI_COLON	(__u16) (':' + 0xF000)
 #define UNI_GRTRTHAN	(__u16) ('>' + 0xF000)
fs/cifs/cifsencrypt.c (+12 -9)

···
 #include <linux/ctype.h>
 #include <linux/random.h>

-/* Calculate and return the CIFS signature based on the mac key and SMB PDU */
-/* the 16 byte signature must be allocated by the caller */
-/* Note we only use the 1st eight bytes */
-/* Note that the smb header signature field on input contains the
-	sequence number before this function is called */
-
+/*
+ * Calculate and return the CIFS signature based on the mac key and SMB PDU.
+ * The 16 byte signature must be allocated by the caller. Note we only use the
+ * 1st eight bytes and that the smb header signature field on input contains
+ * the sequence number before this function is called. Also, this function
+ * should be called with the server->srv_mutex held.
+ */
 static int cifs_calculate_signature(const struct smb_hdr *cifs_pdu,
 				struct TCP_Server_Info *server, char *signature)
 {
···
 					cpu_to_le32(expected_sequence_number);
 	cifs_pdu->Signature.Sequence.Reserved = 0;

+	mutex_lock(&server->srv_mutex);
 	rc = cifs_calculate_signature(cifs_pdu, server,
 		what_we_think_sig_should_be);
+	mutex_unlock(&server->srv_mutex);

 	if (rc)
 		return rc;
···
 		return rc;
 	}

-	/* convert ses->userName to unicode and uppercase */
-	len = strlen(ses->userName);
+	/* convert ses->user_name to unicode and uppercase */
+	len = strlen(ses->user_name);
 	user = kmalloc(2 + (len * 2), GFP_KERNEL);
 	if (user == NULL) {
 		cERROR(1, "calc_ntlmv2_hash: user mem alloc failure\n");
 		rc = -ENOMEM;
 		goto calc_exit_2;
 	}
-	len = cifs_strtoUCS((__le16 *)user, ses->userName, len, nls_cp);
+	len = cifs_strtoUCS((__le16 *)user, ses->user_name, len, nls_cp);
 	UniStrupr(user);

 	crypto_shash_update(&ses->server->secmech.sdeschmacmd5->shash,
fs/cifs/cifsfs.c (+3 -3)

···
 int cifsERROR = 1;
 int traceSMB = 0;
 unsigned int oplockEnabled = 1;
-unsigned int experimEnabled = 0;
 unsigned int linuxExtEnabled = 1;
 unsigned int lookupCacheEnabled = 1;
 unsigned int multiuser_mount = 0;
···
 		kfree(cifs_sb);
 		return rc;
 	}
+	cifs_sb->bdi.ra_pages = default_backing_dev_info.ra_pages;

 #ifdef CONFIG_CIFS_DFS_UPCALL
 	/* copy mount params to sb for use in submounts */
···

 	if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MULTIUSER)
 		seq_printf(s, ",multiuser");
-	else if (tcon->ses->userName)
-		seq_printf(s, ",username=%s", tcon->ses->userName);
+	else if (tcon->ses->user_name)
+		seq_printf(s, ",username=%s", tcon->ses->user_name);

 	if (tcon->ses->domainName)
 		seq_printf(s, ",domain=%s", tcon->ses->domainName);
fs/cifs/cifsglob.h (+6 -7)

···

 #define MAX_TREE_SIZE (2 + MAX_SERVER_SIZE + 1 + MAX_SHARE_SIZE + 1)
 #define MAX_SERVER_SIZE 15
-#define MAX_SHARE_SIZE	64	/* used to be 20, this should still be enough */
-#define MAX_USERNAME_SIZE 32	/* 32 is to allow for 15 char names + null
-				   termination then *2 for unicode versions */
-#define MAX_PASSWORD_SIZE 512	/* max for windows seems to be 256 wide chars */
+#define MAX_SHARE_SIZE 80
+#define MAX_USERNAME_SIZE 256	/* reasonable maximum for current servers */
+#define MAX_PASSWORD_SIZE 512	/* max for windows seems to be 256 wide chars */

 #define CIFS_MIN_RCV_POOL 4

···
 	CifsNew = 0,
 	CifsGood,
 	CifsExiting,
-	CifsNeedReconnect
+	CifsNeedReconnect,
+	CifsNeedNegotiate
 };

 enum securityEnum {
···
 	int capabilities;
 	char serverName[SERVER_NAME_LEN_WITH_NULL * 2];	/* BB make bigger for
 				TCP names - will ipv6 and sctp addresses fit? */
-	char userName[MAX_USERNAME_SIZE + 1];
+	char *user_name;
 	char *domainName;
 	char *password;
 	struct session_key auth_key;
···
 				have the uid/password or Kerberos credential
 				or equivalent for current user */
 GLOBAL_EXTERN unsigned int oplockEnabled;
-GLOBAL_EXTERN unsigned int experimEnabled;
 GLOBAL_EXTERN unsigned int lookupCacheEnabled;
 GLOBAL_EXTERN unsigned int global_secflags;	/* if on, session setup sent
 				with more secure ntlmssp2 challenge/resp */
fs/cifs/cifssmb.c (+7 -7)

···
 	 */
 	while (server->tcpStatus == CifsNeedReconnect) {
 		wait_event_interruptible_timeout(server->response_q,
-			(server->tcpStatus == CifsGood), 10 * HZ);
+			(server->tcpStatus != CifsNeedReconnect), 10 * HZ);

-		/* is TCP session is reestablished now ?*/
+		/* are we still trying to reconnect? */
 		if (server->tcpStatus != CifsNeedReconnect)
 			break;
···
 		return rc;

 	/* set up echo request */
-	smb->hdr.Tid = cpu_to_le16(0xffff);
+	smb->hdr.Tid = 0xffff;
 	smb->hdr.WordCount = 1;
 	put_unaligned_le16(1, &smb->EchoCount);
 	put_bcc_le(1, &smb->hdr);
···
 			__constant_cpu_to_le16(CIFS_WRLCK))
 				pLockData->fl_type = F_WRLCK;

-			pLockData->fl_start = parm_data->start;
-			pLockData->fl_end = parm_data->start +
-					parm_data->length - 1;
-			pLockData->fl_pid = parm_data->pid;
+			pLockData->fl_start = le64_to_cpu(parm_data->start);
+			pLockData->fl_end = pLockData->fl_start +
+					le64_to_cpu(parm_data->length) - 1;
+			pLockData->fl_pid = le32_to_cpu(parm_data->pid);
 		}
 	}

fs/cifs/connect.c (+42 -26)

···
 	}
 	spin_unlock(&GlobalMid_Lock);

-	while ((server->tcpStatus != CifsExiting) &&
-	       (server->tcpStatus != CifsGood)) {
+	while (server->tcpStatus == CifsNeedReconnect) {
 		try_to_freeze();

 		/* we should try only the port we connected to before */
···
 			atomic_inc(&tcpSesReconnectCount);
 			spin_lock(&GlobalMid_Lock);
 			if (server->tcpStatus != CifsExiting)
-				server->tcpStatus = CifsGood;
+				server->tcpStatus = CifsNeedNegotiate;
 			spin_unlock(&GlobalMid_Lock);
 		}
 	}
···
 	total_data_size = get_unaligned_le16(&pSMBt->t2_rsp.TotalDataCount);
 	data_in_this_rsp = get_unaligned_le16(&pSMBt->t2_rsp.DataCount);

-	remaining = total_data_size - data_in_this_rsp;
-
-	if (remaining == 0)
+	if (total_data_size == data_in_this_rsp)
 		return 0;
-	else if (remaining < 0) {
+	else if (total_data_size < data_in_this_rsp) {
 		cFYI(1, "total data %d smaller than data in frame %d",
 			total_data_size, data_in_this_rsp);
 		return -EINVAL;
-	} else {
-		cFYI(1, "missing %d bytes from transact2, check next response",
-			remaining);
-		if (total_data_size > maxBufSize) {
-			cERROR(1, "TotalDataSize %d is over maximum buffer %d",
-				total_data_size, maxBufSize);
-			return -EINVAL;
-		}
-		return remaining;
 	}
+
+	remaining = total_data_size - data_in_this_rsp;
+
+	cFYI(1, "missing %d bytes from transact2, check next response",
+		remaining);
+	if (total_data_size > maxBufSize) {
+		cERROR(1, "TotalDataSize %d is over maximum buffer %d",
+			total_data_size, maxBufSize);
+		return -EINVAL;
+	}
+	return remaining;
 }

 static int coalesce_t2(struct smb_hdr *psecond, struct smb_hdr *pTargetSMB)
···
 	pdu_length = 4; /* enough to get RFC1001 header */

 incomplete_rcv:
-	if (echo_retries > 0 &&
+	if (echo_retries > 0 && server->tcpStatus == CifsGood &&
 	    time_after(jiffies, server->lstrp +
 			       (echo_retries * SMB_ECHO_INTERVAL))) {
 		cERROR(1, "Server %s has not responded in %d seconds. "
···
 			/* null user, ie anonymous, authentication */
 			vol->nullauth = 1;
 		}
-		if (strnlen(value, 200) < 200) {
+		if (strnlen(value, MAX_USERNAME_SIZE) <
+		    MAX_USERNAME_SIZE) {
 			vol->username = value;
 		} else {
 			printk(KERN_WARNING "CIFS: username too long\n");
···
 static bool
 match_port(struct TCP_Server_Info *server, struct sockaddr *addr)
 {
-	unsigned short int port, *sport;
+	__be16 port, *sport;

 	switch (addr->sa_family) {
 	case AF_INET:
···
 		module_put(THIS_MODULE);
 		goto out_err_crypto_release;
 	}
+	tcp_ses->tcpStatus = CifsNeedNegotiate;

 	/* thread spawned, put it on the list */
 	spin_lock(&cifs_tcp_ses_lock);
···
 			break;
 		default:
 			/* anything else takes username/password */
-			if (strncmp(ses->userName, vol->username,
+			if (ses->user_name == NULL)
+				continue;
+			if (strncmp(ses->user_name, vol->username,
 				    MAX_USERNAME_SIZE))
 				continue;
 			if (strlen(vol->username) != 0 &&
···
 	sesInfoFree(ses);
 	cifs_put_tcp_session(server);
 }
+
+static bool warned_on_ntlm; /* globals init to false automatically */

 static struct cifsSesInfo *
 cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb_vol *volume_info)
···
 	else
 		sprintf(ses->serverName, "%pI4", &addr->sin_addr);

-	if (volume_info->username)
-		strncpy(ses->userName, volume_info->username,
-			MAX_USERNAME_SIZE);
+	if (volume_info->username) {
+		ses->user_name = kstrdup(volume_info->username, GFP_KERNEL);
+		if (!ses->user_name)
+			goto get_ses_fail;
+	}

 	/* volume_info->password freed at unmount */
 	if (volume_info->password) {
···
 	}
 	ses->cred_uid = volume_info->cred_uid;
 	ses->linux_uid = volume_info->linux_uid;
+
+	/* ntlmv2 is much stronger than ntlm security, and has been broadly
+	supported for many years, time to update default security mechanism */
+	if ((volume_info->secFlg == 0) && warned_on_ntlm == false) {
+		warned_on_ntlm = true;
+		cERROR(1, "default security mechanism requested.  The default "
+			"security mechanism will be upgraded from ntlm to "
+			"ntlmv2 in kernel release 2.6.41");
+	}
 	ses->overrideSecFlg = volume_info->secFlg;

 	mutex_lock(&ses->session_mutex);
···
 generic_ip_connect(struct TCP_Server_Info *server)
 {
 	int rc = 0;
-	unsigned short int sport;
+	__be16 sport;
 	int slen, sfamily;
 	struct socket *socket = server->ssocket;
 	struct sockaddr *saddr;
···
 static int
 ip_connect(struct TCP_Server_Info *server)
 {
-	unsigned short int *sport;
+	__be16 *sport;
 	struct sockaddr_in6 *addr6 = (struct sockaddr_in6 *)&server->dstaddr;
 	struct sockaddr_in *addr = (struct sockaddr_in *)&server->dstaddr;
···

 remote_path_check:
 	/* check if a whole path (including prepath) is not remote */
-	if (!rc && cifs_sb->prepathlen && tcon) {
+	if (!rc && tcon) {
 		/* build_path_to_root works only when we have a valid tcon */
 		full_path = cifs_build_path_to_root(cifs_sb, tcon);
 		if (full_path == NULL) {
fs/cifs/file.c (+36 -32)

···

 int cifs_close(struct inode *inode, struct file *file)
 {
-	cifsFileInfo_put(file->private_data);
-	file->private_data = NULL;
+	if (file->private_data != NULL) {
+		cifsFileInfo_put(file->private_data);
+		file->private_data = NULL;
+	}

 	/* return code from the ->release op is always ignored */
 	return 0;
···
 	     total_written += bytes_written) {
 		rc = -EAGAIN;
 		while (rc == -EAGAIN) {
+			struct kvec iov[2];
+			unsigned int len;
+
 			if (open_file->invalidHandle) {
 				/* we could deadlock if we called
 				   filemap_fdatawait from here so tell
···
 				if (rc != 0)
 					break;
 			}
-			if (experimEnabled || (pTcon->ses->server &&
-				((pTcon->ses->server->secMode &
-				(SECMODE_SIGN_REQUIRED | SECMODE_SIGN_ENABLED))
-				== 0))) {
-				struct kvec iov[2];
-				unsigned int len;

-				len = min((size_t)cifs_sb->wsize,
-					  write_size - total_written);
-				/* iov[0] is reserved for smb header */
-				iov[1].iov_base = (char *)write_data +
-						  total_written;
-				iov[1].iov_len = len;
-				rc = CIFSSMBWrite2(xid, pTcon,
-						open_file->netfid, len,
-						*poffset, &bytes_written,
-						iov, 1, 0);
-			} else
-				rc = CIFSSMBWrite(xid, pTcon,
-					 open_file->netfid,
-					 min_t(const int, cifs_sb->wsize,
-					       write_size - total_written),
-					 *poffset, &bytes_written,
-					 write_data + total_written,
-					 NULL, 0);
+			len = min((size_t)cifs_sb->wsize,
+				  write_size - total_written);
+			/* iov[0] is reserved for smb header */
+			iov[1].iov_base = (char *)write_data + total_written;
+			iov[1].iov_len = len;
+			rc = CIFSSMBWrite2(xid, pTcon, open_file->netfid, len,
+					   *poffset, &bytes_written, iov, 1, 0);
 		}
 		if (rc || (bytes_written == 0)) {
 			if (total_written)
···
 	}

 	tcon = tlink_tcon(open_file->tlink);
-	if (!experimEnabled && tcon->ses->server->secMode &
-			(SECMODE_SIGN_REQUIRED | SECMODE_SIGN_ENABLED)) {
-		cifsFileInfo_put(open_file);
-		kfree(iov);
-		return generic_writepages(mapping, wbc);
-	}
 	cifsFileInfo_put(open_file);

 	xid = GetXid();
···
 	return total_read;
 }

+/*
+ * If the page is mmap'ed into a process' page tables, then we need to make
+ * sure that it doesn't change while being written back.
+ */
+static int
+cifs_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+	struct page *page = vmf->page;
+
+	lock_page(page);
+	return VM_FAULT_LOCKED;
+}
+
+static struct vm_operations_struct cifs_file_vm_ops = {
+	.fault = filemap_fault,
+	.page_mkwrite = cifs_page_mkwrite,
+};
+
 int cifs_file_strict_mmap(struct file *file, struct vm_area_struct *vma)
 {
 	int rc, xid;
···
 		cifs_invalidate_mapping(inode);

 	rc = generic_file_mmap(file, vma);
+	if (rc == 0)
+		vma->vm_ops = &cifs_file_vm_ops;
 	FreeXid(xid);
 	return rc;
 }
···
 		return rc;
 	}
 	rc = generic_file_mmap(file, vma);
+	if (rc == 0)
+		vma->vm_ops = &cifs_file_vm_ops;
 	FreeXid(xid);
 	return rc;
 }
+2-2
fs/cifs/link.c
@@ -239 +239 @@
 	if (rc != 0)
 		return rc;
 
-	if (file_info.EndOfFile != CIFS_MF_SYMLINK_FILE_SIZE) {
+	if (file_info.EndOfFile != cpu_to_le64(CIFS_MF_SYMLINK_FILE_SIZE)) {
 		CIFSSMBClose(xid, tcon, netfid);
 		/* it's not a symlink */
 		return -EINVAL;
@@ -316 +316 @@
 	if (rc != 0)
 		goto out;
 
-	if (file_info.EndOfFile != CIFS_MF_SYMLINK_FILE_SIZE) {
+	if (file_info.EndOfFile != cpu_to_le64(CIFS_MF_SYMLINK_FILE_SIZE)) {
 		CIFSSMBClose(xid, pTcon, netfid);
 		/* it's not a symlink */
 		goto out;
fs/cifs/sess.c
@@ -219 +219 @@
 		bcc_ptr++;
 	} */
 	/* copy user */
-	if (ses->userName == NULL) {
+	if (ses->user_name == NULL) {
 		/* null user mount */
 		*bcc_ptr = 0;
 		*(bcc_ptr+1) = 0;
 	} else {
-		bytes_ret = cifs_strtoUCS((__le16 *) bcc_ptr, ses->userName,
+		bytes_ret = cifs_strtoUCS((__le16 *) bcc_ptr, ses->user_name,
 					  MAX_USERNAME_SIZE, nls_cp);
 	}
 	bcc_ptr += 2 * bytes_ret;
@@ -244 +244 @@
 	/* copy user */
 	/* BB what about null user mounts - check that we do this BB */
 	/* copy user */
-	if (ses->userName == NULL) {
-		/* BB what about null user mounts - check that we do this BB */
-	} else {
-		strncpy(bcc_ptr, ses->userName, MAX_USERNAME_SIZE);
-	}
-	bcc_ptr += strnlen(ses->userName, MAX_USERNAME_SIZE);
+	if (ses->user_name != NULL)
+		strncpy(bcc_ptr, ses->user_name, MAX_USERNAME_SIZE);
+	/* else null user mount */
+
+	bcc_ptr += strnlen(ses->user_name, MAX_USERNAME_SIZE);
 	*bcc_ptr = 0;
 	bcc_ptr++; /* account for null termination */
@@ -404 +405 @@
 	/* BB spec says that if AvId field of MsvAvTimestamp is populated then
 	   we must set the MIC field of the AUTHENTICATE_MESSAGE */
 	ses->ntlmssp->server_flags = le32_to_cpu(pblob->NegotiateFlags);
-	tioffset = cpu_to_le16(pblob->TargetInfoArray.BufferOffset);
-	tilen = cpu_to_le16(pblob->TargetInfoArray.Length);
+	tioffset = le32_to_cpu(pblob->TargetInfoArray.BufferOffset);
+	tilen = le16_to_cpu(pblob->TargetInfoArray.Length);
 	if (tilen) {
 		ses->auth_key.response = kmalloc(tilen, GFP_KERNEL);
 		if (!ses->auth_key.response) {
@@ -522 +523 @@
 		tmp += len;
 	}
 
-	if (ses->userName == NULL) {
+	if (ses->user_name == NULL) {
 		sec_blob->UserName.BufferOffset = cpu_to_le32(tmp - pbuffer);
 		sec_blob->UserName.Length = 0;
 		sec_blob->UserName.MaximumLength = 0;
 		tmp += 2;
 	} else {
 		int len;
-		len = cifs_strtoUCS((__le16 *)tmp, ses->userName,
+		len = cifs_strtoUCS((__le16 *)tmp, ses->user_name,
 				    MAX_USERNAME_SIZE, nls_cp);
 		len *= 2; /* unicode is 2 bytes each */
 		sec_blob->UserName.BufferOffset = cpu_to_le32(tmp - pbuffer);
fs/ubifs/file.c
@@ -1312 +1312 @@
 
 	dbg_gen("syncing inode %lu", inode->i_ino);
 
+	if (inode->i_sb->s_flags & MS_RDONLY)
+		return 0;
+
 	/*
 	 * VFS has already synchronized dirty pages for this inode. Synchronize
 	 * the inode unless this is a 'datasync()' call.
include/linux/mfd/core.h
@@ -86 +86 @@
  */
 static inline const struct mfd_cell *mfd_get_cell(struct platform_device *pdev)
 {
-	return pdev->dev.platform_data;
+	return pdev->mfd_cell;
 }
 
 /*
  * Given a platform device that's been created by mfd_add_devices(), fetch
  * the .mfd_data entry from the mfd_cell that created it.
+ * Otherwise just return the platform_data pointer.
+ * This maintains compatibility with platform drivers whose devices aren't
+ * created by the mfd layer, and expect platform_data to contain what would've
+ * otherwise been in mfd_data.
  */
 static inline void *mfd_get_data(struct platform_device *pdev)
 {
-	return mfd_get_cell(pdev)->mfd_data;
+	const struct mfd_cell *cell = mfd_get_cell(pdev);
+
+	if (cell)
+		return cell->mfd_data;
+	else
+		return pdev->dev.platform_data;
 }
 
 extern int mfd_add_devices(struct device *parent, int id,
include/trace/events/block.h
@@ -401 +401 @@
 
 DECLARE_EVENT_CLASS(block_unplug,
 
-	TP_PROTO(struct request_queue *q),
+	TP_PROTO(struct request_queue *q, unsigned int depth, bool explicit),
 
-	TP_ARGS(q),
+	TP_ARGS(q, depth, explicit),
 
 	TP_STRUCT__entry(
 		__field( int,	nr_rq	)
@@ -411 +411 @@
 	),
 
 	TP_fast_assign(
-		__entry->nr_rq = q->rq.count[READ] + q->rq.count[WRITE];
+		__entry->nr_rq = depth;
 		memcpy(__entry->comm, current->comm, TASK_COMM_LEN);
 	),
 
@@ -419 +419 @@
 );
 
 /**
- * block_unplug_timer - timed release of operations requests in queue to device driver
+ * block_unplug - release of operations requests in request queue
  * @q: request queue to unplug
- *
- * Unplug the request queue @q because a timer expired and allow block
- * operation requests to be sent to the device driver.
- */
-DEFINE_EVENT(block_unplug, block_unplug_timer,
-
-	TP_PROTO(struct request_queue *q),
-
-	TP_ARGS(q)
-);
-
-/**
- * block_unplug_io - release of operations requests in request queue
- * @q: request queue to unplug
+ * @depth: number of requests just added to the queue
+ * @explicit: whether this was an explicit unplug, or one from schedule()
  *
  * Unplug request queue @q because device driver is scheduled to work
  * on elements in the request queue.
  */
-DEFINE_EVENT(block_unplug, block_unplug_io,
+DEFINE_EVENT(block_unplug, block_unplug,
 
-	TP_PROTO(struct request_queue *q),
+	TP_PROTO(struct request_queue *q, unsigned int depth, bool explicit),
 
-	TP_ARGS(q)
+	TP_ARGS(q, depth, explicit)
 );
 
 /**
@@ -364 +364 @@
 	}
 
 	if (mode & PERF_CGROUP_SWIN) {
+		WARN_ON_ONCE(cpuctx->cgrp);
 		/* set cgrp before ctxsw in to
 		 * allow event_filter_match() to not
 		 * have to pass task around
@@ -2424 +2423 @@
 	if (!ctx || !ctx->nr_events)
 		goto out;
 
+	/*
+	 * We must ctxsw out cgroup events to avoid conflict
+	 * when invoking perf_task_event_sched_in() later on
+	 * in this function. Otherwise we end up trying to
+	 * ctxswin cgroup events which are already scheduled
+	 * in.
+	 */
+	perf_cgroup_sched_out(current);
 	task_ctx_sched_out(ctx, EVENT_ALL);
 
 	raw_spin_lock(&ctx->lock);
@@ -2456 +2447 @@
 
 	raw_spin_unlock(&ctx->lock);
 
+	/*
+	 * Also calls ctxswin for cgroup events, if any:
+	 */
 	perf_event_context_sched_in(ctx, ctx->task);
 out:
 	local_irq_restore(flags);
+4-1
kernel/pid.c
@@ -217 +217 @@
 	return -1;
 }
 
-int next_pidmap(struct pid_namespace *pid_ns, int last)
+int next_pidmap(struct pid_namespace *pid_ns, unsigned int last)
 {
 	int offset;
 	struct pidmap *map, *end;
+
+	if (last >= PID_MAX_LIMIT)
+		return -1;
 
 	offset = (last + 1) & BITS_PER_PAGE_MASK;
 	map = &pid_ns->pidmap[(last + 1)/BITS_PER_PAGE];
+5-1
kernel/power/Kconfig
@@ -18 +18 @@
 
 	  Turning OFF this setting is NOT recommended! If in doubt, say Y.
 
+config HIBERNATE_CALLBACKS
+	bool
+
 config HIBERNATION
 	bool "Hibernation (aka 'suspend to disk')"
 	depends on SWAP && ARCH_HIBERNATION_POSSIBLE
+	select HIBERNATE_CALLBACKS
 	select LZO_COMPRESS
 	select LZO_DECOMPRESS
 	---help---
@@ -89 +85 @@
 
 config PM_SLEEP
 	def_bool y
-	depends on SUSPEND || HIBERNATION || XEN_SAVE_RESTORE
+	depends on SUSPEND || HIBERNATE_CALLBACKS
 
 config PM_SLEEP_SMP
 	def_bool y
+10-10
kernel/sched.c
@@ -4111 +4111 @@
 			try_to_wake_up_local(to_wakeup);
 		}
 			deactivate_task(rq, prev, DEQUEUE_SLEEP);
+
+			/*
+			 * If we are going to sleep and we have plugged IO queued, make
+			 * sure to submit it to avoid deadlocks.
+			 */
+			if (blk_needs_flush_plug(prev)) {
+				raw_spin_unlock(&rq->lock);
+				blk_schedule_flush_plug(prev);
+				raw_spin_lock(&rq->lock);
+			}
 		}
 		switch_count = &prev->nvcsw;
-	}
-
-	/*
-	 * If we are going to sleep and we have plugged IO queued, make
-	 * sure to submit it to avoid deadlocks.
-	 */
-	if (prev->state != TASK_RUNNING && blk_needs_flush_plug(prev)) {
-		raw_spin_unlock(&rq->lock);
-		blk_flush_plug(prev);
-		raw_spin_lock(&rq->lock);
 	}
 
 	pre_schedule(rq, prev);
+6-8
kernel/sched_fair.c
@@ -2104 +2104 @@
 	      enum cpu_idle_type idle, int *all_pinned,
 	      int *this_best_prio, struct cfs_rq *busiest_cfs_rq)
 {
-	int loops = 0, pulled = 0, pinned = 0;
+	int loops = 0, pulled = 0;
 	long rem_load_move = max_load_move;
 	struct task_struct *p, *n;
 
 	if (max_load_move == 0)
 		goto out;
 
-	pinned = 1;
-
 	list_for_each_entry_safe(p, n, &busiest_cfs_rq->tasks, se.group_node) {
 		if (loops++ > sysctl_sched_nr_migrate)
 			break;
 
 		if ((p->se.load.weight >> 1) > rem_load_move ||
-		    !can_migrate_task(p, busiest, this_cpu, sd, idle, &pinned))
+		    !can_migrate_task(p, busiest, this_cpu, sd, idle,
+				      all_pinned))
 			continue;
 
 		pull_task(busiest, p, this_rq, this_cpu);
@@ -2151 +2152 @@
 	 * inside pull_task().
 	 */
 	schedstat_add(sd, lb_gained[idle], pulled);
-
-	if (all_pinned)
-		*all_pinned = pinned;
 
 	return max_load_move - rem_load_move;
 }
@@ -3123 +3127 @@
 	if (!sds.busiest || sds.busiest_nr_running == 0)
 		goto out_balanced;
 
+	sds.avg_load = (SCHED_LOAD_SCALE * sds.total_load) / sds.total_pwr;
+
 	/*
 	 * If the busiest group is imbalanced the below checks don't
 	 * work because they assumes all things are equal, which typically
@@ -3149 +3151 @@
 	 * Don't pull any tasks if this group is already above the domain
 	 * average load.
	 */
-	sds.avg_load = (SCHED_LOAD_SCALE * sds.total_load) / sds.total_pwr;
 	if (sds.this_load >= sds.avg_load)
 		goto out_balanced;
 
@@ -3337 +3340 @@
 		 * still unbalanced. ld_moved simply stays zero, so it is
 		 * correctly treated as an imbalance.
 		 */
+		all_pinned = 1;
 		local_irq_save(flags);
 		double_rq_lock(this_rq, busiest);
 		ld_moved = move_tasks(this_rq, this_cpu, busiest,
mm/mlock.c
@@ -135 +135 @@
 	}
 }
 
-static inline int stack_guard_page(struct vm_area_struct *vma, unsigned long addr)
-{
-	return (vma->vm_flags & VM_GROWSDOWN) &&
-		(vma->vm_start == addr) &&
-		!vma_stack_continue(vma->vm_prev, addr);
-}
-
 /**
  * __mlock_vma_pages_range() -  mlock a range of pages in the vma.
  * @vma:   target vma
@@ -180 +187 @@
 
 	if (vma->vm_flags & VM_LOCKED)
 		gup_flags |= FOLL_MLOCK;
-
-	/* We don't try to access the guard page of a stack vma */
-	if (stack_guard_page(vma, start)) {
-		addr += PAGE_SIZE;
-		nr_pages--;
-	}
 
 	return __get_user_pages(current, mm, addr, nr_pages, gup_flags,
 				NULL, NULL, nonblocking);
+9-6
mm/mmap.c
@@ -259 +259 @@
 	 * randomize_va_space to 2, which will still cause mm->start_brk
 	 * to be arbitrarily shifted
 	 */
-	if (mm->start_brk > PAGE_ALIGN(mm->end_data))
+	if (current->brk_randomized)
 		min_brk = mm->start_brk;
 	else
 		min_brk = mm->end_data;
@@ -1814 +1814 @@
 		size = vma->vm_end - address;
 		grow = (vma->vm_start - address) >> PAGE_SHIFT;
 
-		error = acct_stack_growth(vma, size, grow);
-		if (!error) {
-			vma->vm_start = address;
-			vma->vm_pgoff -= grow;
-			perf_event_mmap(vma);
+		error = -ENOMEM;
+		if (grow <= vma->vm_pgoff) {
+			error = acct_stack_growth(vma, size, grow);
+			if (!error) {
+				vma->vm_start = address;
+				vma->vm_pgoff -= grow;
+				perf_event_mmap(vma);
+			}
 		}
 	}
 	vma_unlock_anon_vma(vma);
-28
mm/oom_kill.c
@@ -84 +84 @@
 #endif /* CONFIG_NUMA */
 
 /*
- * If this is a system OOM (not a memcg OOM) and the task selected to be
- * killed is not already running at high (RT) priorities, speed up the
- * recovery by boosting the dying task to the lowest FIFO priority.
- * That helps with the recovery and avoids interfering with RT tasks.
- */
-static void boost_dying_task_prio(struct task_struct *p,
-				  struct mem_cgroup *mem)
-{
-	struct sched_param param = { .sched_priority = 1 };
-
-	if (mem)
-		return;
-
-	if (!rt_task(p))
-		sched_setscheduler_nocheck(p, SCHED_FIFO, &param);
-}
-
-/*
  * The process p may have detached its own ->mm while exiting or through
  * use_mm(), but one or more of its subthreads may still have a valid
  * pointer.  Return p, or any of its subthreads with a valid ->mm, with
@@ -434 +452 @@
 	set_tsk_thread_flag(p, TIF_MEMDIE);
 	force_sig(SIGKILL, p);
 
-	/*
-	 * We give our sacrificial lamb high priority and access to
-	 * all the memory it needs. That way it should be able to
-	 * exit() and clear out its resources quickly...
-	 */
-	boost_dying_task_prio(p, mem);
-
 	return 0;
 }
 #undef K
@@ -457 +482 @@
 	 */
 	if (p->flags & PF_EXITING) {
 		set_tsk_thread_flag(p, TIF_MEMDIE);
-		boost_dying_task_prio(p, mem);
 		return 0;
 	}
 
@@ -530 +556 @@
 	 */
 	if (fatal_signal_pending(current)) {
 		set_thread_flag(TIF_MEMDIE);
-		boost_dying_task_prio(current, NULL);
 		return;
 	}
 
@@ -685 +712 @@
 	 */
 	if (fatal_signal_pending(current)) {
 		set_thread_flag(TIF_MEMDIE);
-		boost_dying_task_prio(current, NULL);
 		return;
 	}
 
+1-1
mm/page_alloc.c
@@ -3176 +3176 @@
  * Called with zonelists_mutex held always
  * unless system_state == SYSTEM_BOOTING.
  */
-void build_all_zonelists(void *data)
+void __ref build_all_zonelists(void *data)
 {
 	set_zonelist_order();
 
+4-2
mm/shmem.c
@@ -421 +421 @@
 	 * a waste to allocate index if we cannot allocate data.
 	 */
 	if (sbinfo->max_blocks) {
-		if (percpu_counter_compare(&sbinfo->used_blocks, (sbinfo->max_blocks - 1)) > 0)
+		if (percpu_counter_compare(&sbinfo->used_blocks,
+					   sbinfo->max_blocks - 1) >= 0)
 			return ERR_PTR(-ENOSPC);
 		percpu_counter_inc(&sbinfo->used_blocks);
 		spin_lock(&inode->i_lock);
@@ -1398 +1397 @@
 			shmem_swp_unmap(entry);
 			sbinfo = SHMEM_SB(inode->i_sb);
 			if (sbinfo->max_blocks) {
-				if ((percpu_counter_compare(&sbinfo->used_blocks, sbinfo->max_blocks) > 0) ||
+				if (percpu_counter_compare(&sbinfo->used_blocks,
+							   sbinfo->max_blocks) >= 0 ||
 				    shmem_acct_block(info->flags)) {
 					spin_unlock(&info->lock);
 					error = -ENOSPC;
+13-11
mm/vmscan.c
@@ -41 +41 @@
 #include <linux/memcontrol.h>
 #include <linux/delayacct.h>
 #include <linux/sysctl.h>
+#include <linux/oom.h>
 
 #include <asm/tlbflush.h>
 #include <asm/div64.h>
@@ -1989 +1988 @@
 	return zone->pages_scanned < zone_reclaimable_pages(zone) * 6;
 }
 
-/*
- * As hibernation is going on, kswapd is freezed so that it can't mark
- * the zone into all_unreclaimable. It can't handle OOM during hibernation.
- * So let's check zone's unreclaimable in direct reclaim as well as kswapd.
- */
+/* All zones in zonelist are unreclaimable? */
 static bool all_unreclaimable(struct zonelist *zonelist,
 		struct scan_control *sc)
 {
 	struct zoneref *z;
 	struct zone *zone;
-	bool all_unreclaimable = true;
 
 	for_each_zone_zonelist_nodemask(zone, z, zonelist,
 			gfp_zone(sc->gfp_mask), sc->nodemask) {
@@ -2002 +2006 @@
 			continue;
 		if (!cpuset_zone_allowed_hardwall(zone, GFP_KERNEL))
 			continue;
-		if (zone_reclaimable(zone)) {
-			all_unreclaimable = false;
-			break;
-		}
+		if (!zone->all_unreclaimable)
+			return false;
 	}
 
-	return all_unreclaimable;
+	return true;
 }
 
 /*
@@ -2101 +2107 @@
 
 	if (sc->nr_reclaimed)
 		return sc->nr_reclaimed;
+
+	/*
+	 * As hibernation is going on, kswapd is freezed so that it can't mark
+	 * the zone into all_unreclaimable. Thus bypassing all_unreclaimable
+	 * check.
+	 */
+	if (oom_killer_disabled)
+		return 0;
 
 	/* top priority shrink_zones still had more to do? don't OOM, then */
 	if (scanning_global_lru(sc) && !all_unreclaimable(zonelist, sc))
+15-3
mm/vmstat.c
@@ -321 +321 @@
 	/*
 	 * The fetching of the stat_threshold is racy. We may apply
 	 * a counter threshold to the wrong the cpu if we get
-	 * rescheduled while executing here. However, the following
-	 * will apply the threshold again and therefore bring the
-	 * counter under the threshold.
+	 * rescheduled while executing here. However, the next
+	 * counter update will apply the threshold again and
+	 * therefore bring the counter under the threshold again.
+	 *
+	 * Most of the time the thresholds are the same anyways
+	 * for all cpus in a zone.
 	 */
 	t = this_cpu_read(pcp->stat_threshold);
 
@@ -948 +945 @@
 	"unevictable_pgs_cleared",
 	"unevictable_pgs_stranded",
 	"unevictable_pgs_mlockfreed",
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	"thp_fault_alloc",
+	"thp_fault_fallback",
+	"thp_collapse_alloc",
+	"thp_collapse_alloc_failed",
+	"thp_split",
 #endif
+
+#endif /* CONFIG_VM_EVENTS_COUNTERS */
 };
 
 static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,