Merge branch 'davinci-fixes' of git://gitorious.org/linux-davinci/linux-davinci into fixes

+2862 -2489
-20
Documentation/feature-removal-schedule.txt
··· 387 387 388 388 ---------------------------- 389 389 390 - What: Support for lcd_switch and display_get in asus-laptop driver 391 - When: March 2010 392 - Why: These two features use non-standard interfaces. There are the 393 - only features that really need multiple path to guess what's 394 - the right method name on a specific laptop. 395 - 396 - Removing them will allow to remove a lot of code an significantly 397 - clean the drivers. 398 - 399 - This will affect the backlight code which won't be able to know 400 - if the backlight is on or off. The platform display file will also be 401 - write only (like the one in eeepc-laptop). 402 - 403 - This should'nt affect a lot of user because they usually know 404 - when their display is on or off. 405 - 406 - Who: Corentin Chary <corentin.chary@gmail.com> 407 - 408 - ---------------------------- 409 - 410 390 What: sysfs-class-rfkill state file 411 391 When: Feb 2014 412 392 Files: net/rfkill/core.c
+262
Documentation/input/event-codes.txt
··· 1 + The input protocol uses a map of types and codes to express input device values 2 + to userspace. This document describes the types and codes and how and when they 3 + may be used. 4 + 5 + A single hardware event generates multiple input events. Each input event 6 + contains the new value of a single data item. A special event type, EV_SYN, is 7 + used to separate input events into packets of input data changes occurring at 8 + the same moment in time. In the following, the term "event" refers to a single 9 + input event encompassing a type, code, and value. 10 + 11 + The input protocol is a stateful protocol. Events are emitted only when values 12 + of event codes have changed. However, the state is maintained within the Linux 13 + input subsystem; drivers do not need to maintain the state and may attempt to 14 + emit unchanged values without harm. Userspace may obtain the current state of 15 + event code values using the EVIOCG* ioctls defined in linux/input.h. The event 16 + reports supported by a device are also provided by sysfs in 17 + class/input/event*/device/capabilities/, and the properties of a device are 18 + provided in class/input/event*/device/properties. 19 + 20 + Types: 21 + ========== 22 + Types are groupings of codes under a logical input construct. Each type has a 23 + set of applicable codes to be used in generating events. See the Codes section 24 + for details on valid codes for each type. 25 + 26 + * EV_SYN: 27 + - Used as markers to separate events. Events may be separated in time or in 28 + space, such as with the multitouch protocol. 29 + 30 + * EV_KEY: 31 + - Used to describe state changes of keyboards, buttons, or other key-like 32 + devices. 33 + 34 + * EV_REL: 35 + - Used to describe relative axis value changes, e.g. moving the mouse 5 units 36 + to the left. 37 + 38 + * EV_ABS: 39 + - Used to describe absolute axis value changes, e.g. describing the 40 + coordinates of a touch on a touchscreen. 41 + 42 + * EV_MSC: 43 + - Used to describe miscellaneous input data that do not fit into other types. 44 + 45 + * EV_SW: 46 + - Used to describe binary state input switches. 47 + 48 + * EV_LED: 49 + - Used to turn LEDs on devices on and off. 50 + 51 + * EV_SND: 52 + - Used to output sound to devices. 53 + 54 + * EV_REP: 55 + - Used for autorepeating devices. 56 + 57 + * EV_FF: 58 + - Used to send force feedback commands to an input device. 59 + 60 + * EV_PWR: 61 + - A special type for power button and switch input. 62 + 63 + * EV_FF_STATUS: 64 + - Used to receive force feedback device status. 65 + 66 + Codes: 67 + ========== 68 + Codes define the precise type of event. 69 + 70 + EV_SYN: 71 + ---------- 72 + EV_SYN event values are undefined. Their usage is defined only by when they are 73 + sent in the evdev event stream. 74 + 75 + * SYN_REPORT: 76 + - Used to synchronize and separate events into packets of input data changes 77 + occurring at the same moment in time. For example, motion of a mouse may set 78 + the REL_X and REL_Y values for one motion, then emit a SYN_REPORT. The next 79 + motion will emit more REL_X and REL_Y values and send another SYN_REPORT. 80 + 81 + * SYN_CONFIG: 82 + - TBD 83 + 84 + * SYN_MT_REPORT: 85 + - Used to synchronize and separate touch events. See the 86 + multi-touch-protocol.txt document for more information. 87 + 88 + * SYN_DROPPED: 89 + - Used to indicate buffer overrun in the evdev client's event queue. 
90 + Client should ignore all events up to and including next SYN_REPORT 91 + event and query the device (using EVIOCG* ioctls) to obtain its 92 + current state. 93 + 94 + EV_KEY: 95 + ---------- 96 + EV_KEY events take the form KEY_<name> or BTN_<name>. For example, KEY_A is used 97 + to represent the 'A' key on a keyboard. When a key is depressed, an event with 98 + the key's code is emitted with value 1. When the key is released, an event is 99 + emitted with value 0. Some hardware send events when a key is repeated. These 100 + events have a value of 2. In general, KEY_<name> is used for keyboard keys, and 101 + BTN_<name> is used for other types of momentary switch events. 102 + 103 + A few EV_KEY codes have special meanings: 104 + 105 + * BTN_TOOL_<name>: 106 + - These codes are used in conjunction with input trackpads, tablets, and 107 + touchscreens. These devices may be used with fingers, pens, or other tools. 108 + When an event occurs and a tool is used, the corresponding BTN_TOOL_<name> 109 + code should be set to a value of 1. When the tool is no longer interacting 110 + with the input device, the BTN_TOOL_<name> code should be reset to 0. All 111 + trackpads, tablets, and touchscreens should use at least one BTN_TOOL_<name> 112 + code when events are generated. 113 + 114 + * BTN_TOUCH: 115 + BTN_TOUCH is used for touch contact. While an input tool is determined to be 116 + within meaningful physical contact, the value of this property must be set 117 + to 1. Meaningful physical contact may mean any contact, or it may mean 118 + contact conditioned by an implementation defined property. For example, a 119 + touchpad may set the value to 1 only when the touch pressure rises above a 120 + certain value. BTN_TOUCH may be combined with BTN_TOOL_<name> codes. For 121 + example, a pen tablet may set BTN_TOOL_PEN to 1 and BTN_TOUCH to 0 while the 122 + pen is hovering over but not touching the tablet surface. 123 + 124 + Note: For appropriate function of the legacy mousedev emulation driver, 125 + BTN_TOUCH must be the first evdev code emitted in a synchronization frame. 126 + 127 + Note: Historically a touch device with BTN_TOOL_FINGER and BTN_TOUCH was 128 + interpreted as a touchpad by userspace, while a similar device without 129 + BTN_TOOL_FINGER was interpreted as a touchscreen. For backwards compatibility 130 + with current userspace it is recommended to follow this distinction. In the 131 + future, this distinction will be deprecated and the device properties ioctl 132 + EVIOCGPROP, defined in linux/input.h, will be used to convey the device type. 133 + 134 + * BTN_TOOL_FINGER, BTN_TOOL_DOUBLETAP, BTN_TOOL_TRIPLETAP, BTN_TOOL_QUADTAP: 135 + - These codes denote one, two, three, and four finger interaction on a 136 + trackpad or touchscreen. For example, if the user uses two fingers and moves 137 + them on the touchpad in an effort to scroll content on screen, 138 + BTN_TOOL_DOUBLETAP should be set to value 1 for the duration of the motion. 139 + Note that all BTN_TOOL_<name> codes and the BTN_TOUCH code are orthogonal in 140 + purpose. A trackpad event generated by finger touches should generate events 141 + for one code from each group. At most only one of these BTN_TOOL_<name> 142 + codes should have a value of 1 during any synchronization frame. 143 + 144 + Note: Historically some drivers emitted multiple of the finger count codes with 145 + a value of 1 in the same synchronization frame. This usage is deprecated. 
146 + 147 + Note: In multitouch drivers, the input_mt_report_finger_count() function should 148 + be used to emit these codes. Please see multi-touch-protocol.txt for details. 149 + 150 + EV_REL: 151 + ---------- 152 + EV_REL events describe relative changes in a property. For example, a mouse may 153 + move to the left by a certain number of units, but its absolute position in 154 + space is unknown. If the absolute position is known, EV_ABS codes should be used 155 + instead of EV_REL codes. 156 + 157 + A few EV_REL codes have special meanings: 158 + 159 + * REL_WHEEL, REL_HWHEEL: 160 + - These codes are used for vertical and horizontal scroll wheels, 161 + respectively. 162 + 163 + EV_ABS: 164 + ---------- 165 + EV_ABS events describe absolute changes in a property. For example, a touchpad 166 + may emit coordinates for a touch location. 167 + 168 + A few EV_ABS codes have special meanings: 169 + 170 + * ABS_DISTANCE: 171 + - Used to describe the distance of a tool from an interaction surface. This 172 + event should only be emitted while the tool is hovering, meaning in close 173 + proximity of the device and while the value of the BTN_TOUCH code is 0. If 174 + the input device may be used freely in three dimensions, consider ABS_Z 175 + instead. 176 + 177 + * ABS_MT_<name>: 178 + - Used to describe multitouch input events. Please see 179 + multi-touch-protocol.txt for details. 180 + 181 + EV_SW: 182 + ---------- 183 + EV_SW events describe stateful binary switches. For example, the SW_LID code is 184 + used to denote when a laptop lid is closed. 185 + 186 + Upon binding to a device or resuming from suspend, a driver must report 187 + the current switch state. This ensures that the device, kernel, and userspace 188 + state is in sync. 189 + 190 + Upon resume, if the switch state is the same as before suspend, then the input 191 + subsystem will filter out the duplicate switch state reports. The driver does 192 + not need to keep the state of the switch at any time. 193 + 194 + EV_MSC: 195 + ---------- 196 + EV_MSC events are used for input and output events that do not fall under other 197 + categories. 198 + 199 + EV_LED: 200 + ---------- 201 + EV_LED events are used for input and output to set and query the state of 202 + various LEDs on devices. 203 + 204 + EV_REP: 205 + ---------- 206 + EV_REP events are used for specifying autorepeating events. 207 + 208 + EV_SND: 209 + ---------- 210 + EV_SND events are used for sending sound commands to simple sound output 211 + devices. 212 + 213 + EV_FF: 214 + ---------- 215 + EV_FF events are used to initialize a force feedback capable device and to cause 216 + such device to feedback. 217 + 218 + EV_PWR: 219 + ---------- 220 + EV_PWR events are a special type of event used specifically for power 221 + mangement. Its usage is not well defined. To be addressed later. 222 + 223 + Guidelines: 224 + ========== 225 + The guidelines below ensure proper single-touch and multi-finger functionality. 226 + For multi-touch functionality, see the multi-touch-protocol.txt document for 227 + more information. 228 + 229 + Mice: 230 + ---------- 231 + REL_{X,Y} must be reported when the mouse moves. BTN_LEFT must be used to report 232 + the primary button press. BTN_{MIDDLE,RIGHT,4,5,etc.} should be used to report 233 + further buttons of the device. REL_WHEEL and REL_HWHEEL should be used to report 234 + scroll wheel events where available. 235 + 236 + Touchscreens: 237 + ---------- 238 + ABS_{X,Y} must be reported with the location of the touch. 
BTN_TOUCH must be 239 + used to report when a touch is active on the screen. 240 + BTN_{MOUSE,LEFT,MIDDLE,RIGHT} must not be reported as the result of touch 241 + contact. BTN_TOOL_<name> events should be reported where possible. 242 + 243 + Trackpads: 244 + ---------- 245 + Legacy trackpads that only provide relative position information must report 246 + events like mice described above. 247 + 248 + Trackpads that provide absolute touch position must report ABS_{X,Y} for the 249 + location of the touch. BTN_TOUCH should be used to report when a touch is active 250 + on the trackpad. Where multi-finger support is available, BTN_TOOL_<name> should 251 + be used to report the number of touches active on the trackpad. 252 + 253 + Tablets: 254 + ---------- 255 + BTN_TOOL_<name> events must be reported when a stylus or other tool is active on 256 + the tablet. ABS_{X,Y} must be reported with the location of the tool. BTN_TOUCH 257 + should be used to report when the tool is in contact with the tablet. 258 + BTN_{STYLUS,STYLUS2} should be used to report buttons on the tool itself. Any 259 + button may be used for buttons on the tablet except BTN_{MOUSE,LEFT}. 260 + BTN_{0,1,2,etc} are good generic codes for unlabeled buttons. Do not use 261 + meaningful buttons, like BTN_FORWARD, unless the button is labeled for that 262 + purpose on the device.
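The new event-codes.txt above tells evdev clients that, after a SYN_DROPPED, they should discard events up to and including the next SYN_REPORT and then re-query device state with the EVIOCG* ioctls. Below is a minimal userspace sketch of that recovery path, not part of the merged documentation: the device node path and the key-state-only re-sync are illustrative assumptions, and error handling is kept deliberately short.

```c
/*
 * Minimal evdev reader sketch: on SYN_DROPPED, ignore events up to and
 * including the next SYN_REPORT, then re-read device state via EVIOCG*.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/input.h>

int main(void)
{
	unsigned char keys[KEY_MAX / 8 + 1];
	struct input_event ev;
	int dropped = 0;
	int fd;

	fd = open("/dev/input/event0", O_RDONLY);	/* example node */
	if (fd < 0) {
		perror("open");
		return 1;
	}

	while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
		if (ev.type == EV_SYN && ev.code == SYN_DROPPED) {
			/* Buffer overrun in the client queue: events were lost. */
			dropped = 1;
			continue;
		}

		if (dropped) {
			/* Discard everything up to and including SYN_REPORT. */
			if (ev.type == EV_SYN && ev.code == SYN_REPORT) {
				dropped = 0;
				/* Re-sync: query the current key state. */
				memset(keys, 0, sizeof(keys));
				if (ioctl(fd, EVIOCGKEY(sizeof(keys)), keys) < 0)
					perror("EVIOCGKEY");
				/* A real client would also re-query ABS/SW/LED
				 * state here (EVIOCGABS, EVIOCGSW, EVIOCGLED). */
			}
			continue;
		}

		printf("type %u code %u value %d\n", ev.type, ev.code, ev.value);
	}

	close(fd);
	return 0;
}
```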
+25 -21
MAINTAINERS
··· 184 184 F: fs/9p/ 185 185 186 186 A2232 SERIAL BOARD DRIVER 187 - M: Enver Haase <A2232@gmx.net> 188 187 L: linux-m68k@lists.linux-m68k.org 189 - S: Maintained 190 - F: drivers/char/ser_a2232* 188 + S: Orphan 189 + F: drivers/staging/generic_serial/ser_a2232* 191 190 192 191 AACRAID SCSI RAID DRIVER 193 192 M: Adaptec OEM Raid Solutions <aacraid@adaptec.com> ··· 876 877 F: arch/arm/mach-orion5x/ 877 878 F: arch/arm/plat-orion/ 878 879 880 + ARM/Orion SoC/Technologic Systems TS-78xx platform support 881 + M: Alexander Clouter <alex@digriz.org.uk> 882 + L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 883 + W: http://www.digriz.org.uk/ts78xx/kernel 884 + S: Maintained 885 + F: arch/arm/mach-orion5x/ts78xx-* 886 + 879 887 ARM/MIOA701 MACHINE SUPPORT 880 888 M: Robert Jarzmik <robert.jarzmik@free.fr> 881 889 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) ··· 1069 1063 F: drivers/sh/ 1070 1064 1071 1065 ARM/TELECHIPS ARM ARCHITECTURE 1072 - M: "Hans J. Koch" <hjk@linutronix.de> 1066 + M: "Hans J. Koch" <hjk@hansjkoch.de> 1073 1067 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1074 1068 S: Maintained 1075 1069 F: arch/arm/plat-tcc/ ··· 1829 1823 F: drivers/platform/x86/compal-laptop.c 1830 1824 1831 1825 COMPUTONE INTELLIPORT MULTIPORT CARD 1832 - M: "Michael H. Warfield" <mhw@wittsend.com> 1833 1826 W: http://www.wittsend.com/computone.html 1834 - S: Maintained 1827 + S: Orphan 1835 1828 F: Documentation/serial/computone.txt 1836 - F: drivers/char/ip2/ 1829 + F: drivers/staging/tty/ip2/ 1837 1830 1838 1831 CONEXANT ACCESSRUNNER USB DRIVER 1839 1832 M: Simon Arlott <cxacru@fire.lp0.eu> ··· 2015 2010 CYCLADES ASYNC MUX DRIVER 2016 2011 W: http://www.cyclades.com/ 2017 2012 S: Orphan 2018 - F: drivers/char/cyclades.c 2013 + F: drivers/tty/cyclades.c 2019 2014 F: include/linux/cyclades.h 2020 2015 2021 2016 CYCLADES PC300 DRIVER ··· 2129 2124 W: http://www.digi.com 2130 2125 S: Orphan 2131 2126 F: Documentation/serial/digiepca.txt 2132 - F: drivers/char/epca* 2133 - F: drivers/char/digi* 2127 + F: drivers/staging/tty/epca* 2128 + F: drivers/staging/tty/digi* 2134 2129 2135 2130 DIOLAN U2C-12 I2C DRIVER 2136 2131 M: Guenter Roeck <guenter.roeck@ericsson.com> ··· 4082 4077 F: include/linux/matroxfb.h 4083 4078 4084 4079 MAX6650 HARDWARE MONITOR AND FAN CONTROLLER DRIVER 4085 - M: "Hans J. Koch" <hjk@linutronix.de> 4080 + M: "Hans J. Koch" <hjk@hansjkoch.de> 4086 4081 L: lm-sensors@lm-sensors.org 4087 4082 S: Maintained 4088 4083 F: Documentation/hwmon/max6650 ··· 4197 4192 M: Jiri Slaby <jirislaby@gmail.com> 4198 4193 S: Maintained 4199 4194 F: Documentation/serial/moxa-smartio 4200 - F: drivers/char/mxser.* 4195 + F: drivers/tty/mxser.* 4201 4196 4202 4197 MSI LAPTOP SUPPORT 4203 4198 M: "Lee, Chun-Yi" <jlee@novell.com> ··· 4239 4234 4240 4235 MULTITECH MULTIPORT CARD (ISICOM) 4241 4236 S: Orphan 4242 - F: drivers/char/isicom.c 4237 + F: drivers/tty/isicom.c 4243 4238 F: include/linux/isicom.h 4244 4239 4245 4240 MUSB MULTIPOINT HIGH SPEED DUAL-ROLE CONTROLLER ··· 5278 5273 RISCOM8 DRIVER 5279 5274 S: Orphan 5280 5275 F: Documentation/serial/riscom8.txt 5281 - F: drivers/char/riscom8* 5276 + F: drivers/staging/tty/riscom8* 5282 5277 5283 5278 ROCKETPORT DRIVER 5284 5279 P: Comtrol Corp. 
5285 5280 W: http://www.comtrol.com 5286 5281 S: Maintained 5287 5282 F: Documentation/serial/rocket.txt 5288 - F: drivers/char/rocket* 5283 + F: drivers/tty/rocket* 5289 5284 5290 5285 ROSE NETWORK LAYER 5291 5286 M: Ralf Baechle <ralf@linux-mips.org> ··· 5921 5916 F: arch/arm/mach-spear6xx/spear600_evb.c 5922 5917 5923 5918 SPECIALIX IO8+ MULTIPORT SERIAL CARD DRIVER 5924 - M: Roger Wolff <R.E.Wolff@BitWizard.nl> 5925 - S: Supported 5919 + S: Orphan 5926 5920 F: Documentation/serial/specialix.txt 5927 - F: drivers/char/specialix* 5921 + F: drivers/staging/tty/specialix* 5928 5922 5929 5923 SPI SUBSYSTEM 5930 5924 M: David Brownell <dbrownell@users.sourceforge.net> ··· 5968 5964 5969 5965 STABLE BRANCH 5970 5966 M: Greg Kroah-Hartman <greg@kroah.com> 5971 - M: Chris Wright <chrisw@sous-sol.org> 5972 5967 L: stable@kernel.org 5973 5968 S: Maintained 5974 5969 ··· 6251 6248 W: http://www.uclinux.org/ 6252 6249 L: uclinux-dev@uclinux.org (subscribers-only) 6253 6250 S: Maintained 6254 - F: arch/m68knommu/ 6251 + F: arch/m68k/*/*_no.* 6252 + F: arch/m68k/include/asm/*_no.* 6255 6253 6256 6254 UCLINUX FOR RENESAS H8/300 (H8300) 6257 6255 M: Yoshinori Sato <ysato@users.sourceforge.jp> ··· 6622 6618 F: fs/hppfs/ 6623 6619 6624 6620 USERSPACE I/O (UIO) 6625 - M: "Hans J. Koch" <hjk@linutronix.de> 6621 + M: "Hans J. Koch" <hjk@hansjkoch.de> 6626 6622 M: Greg Kroah-Hartman <gregkh@suse.de> 6627 6623 S: Maintained 6628 6624 F: Documentation/DocBook/uio-howto.tmpl
+1 -1
Makefile
··· 1 1 VERSION = 2 2 2 PATCHLEVEL = 6 3 3 SUBLEVEL = 39 4 - EXTRAVERSION = -rc3 4 + EXTRAVERSION = -rc4 5 5 NAME = Flesh-Eating Bats with Fangs 6 6 7 7 # *DOCUMENTATION*
+1 -1
arch/alpha/kernel/Makefile
··· 4 4 5 5 extra-y := head.o vmlinux.lds 6 6 asflags-y := $(KBUILD_CFLAGS) 7 - ccflags-y := -Werror -Wno-sign-compare 7 + ccflags-y := -Wno-sign-compare 8 8 9 9 obj-y := entry.o traps.o process.o init_task.o osf_sys.o irq.o \ 10 10 irq_alpha.o signal.o setup.o ptrace.o time.o \
+5 -7
arch/alpha/kernel/core_mcpcia.c
··· 88 88 { 89 89 unsigned long flags; 90 90 unsigned long mid = MCPCIA_HOSE2MID(hose->index); 91 - unsigned int stat0, value, temp, cpu; 91 + unsigned int stat0, value, cpu; 92 92 93 93 cpu = smp_processor_id(); 94 94 ··· 101 101 stat0 = *(vuip)MCPCIA_CAP_ERR(mid); 102 102 *(vuip)MCPCIA_CAP_ERR(mid) = stat0; 103 103 mb(); 104 - temp = *(vuip)MCPCIA_CAP_ERR(mid); 104 + *(vuip)MCPCIA_CAP_ERR(mid); 105 105 DBG_CFG(("conf_read: MCPCIA_CAP_ERR(%d) was 0x%x\n", mid, stat0)); 106 106 107 107 mb(); ··· 136 136 { 137 137 unsigned long flags; 138 138 unsigned long mid = MCPCIA_HOSE2MID(hose->index); 139 - unsigned int stat0, temp, cpu; 139 + unsigned int stat0, cpu; 140 140 141 141 cpu = smp_processor_id(); 142 142 ··· 145 145 /* Reset status register to avoid losing errors. */ 146 146 stat0 = *(vuip)MCPCIA_CAP_ERR(mid); 147 147 *(vuip)MCPCIA_CAP_ERR(mid) = stat0; mb(); 148 - temp = *(vuip)MCPCIA_CAP_ERR(mid); 148 + *(vuip)MCPCIA_CAP_ERR(mid); 149 149 DBG_CFG(("conf_write: MCPCIA CAP_ERR(%d) was 0x%x\n", mid, stat0)); 150 150 151 151 draina(); ··· 157 157 *((vuip)addr) = value; 158 158 mb(); 159 159 mb(); /* magic */ 160 - temp = *(vuip)MCPCIA_CAP_ERR(mid); /* read to force the write */ 160 + *(vuip)MCPCIA_CAP_ERR(mid); /* read to force the write */ 161 161 mcheck_expected(cpu) = 0; 162 162 mb(); 163 163 ··· 572 572 void 573 573 mcpcia_machine_check(unsigned long vector, unsigned long la_ptr) 574 574 { 575 - struct el_common *mchk_header; 576 575 struct el_MCPCIA_uncorrected_frame_mcheck *mchk_logout; 577 576 unsigned int cpu = smp_processor_id(); 578 577 int expected; 579 578 580 - mchk_header = (struct el_common *)la_ptr; 581 579 mchk_logout = (struct el_MCPCIA_uncorrected_frame_mcheck *)la_ptr; 582 580 expected = mcheck_expected(cpu); 583 581
+1 -3
arch/alpha/kernel/err_titan.c
··· 533 533 static struct el_subpacket * 534 534 el_process_regatta_subpacket(struct el_subpacket *header) 535 535 { 536 - int status; 537 - 538 536 if (header->class != EL_CLASS__REGATTA_FAMILY) { 539 537 printk("%s ** Unexpected header CLASS %d TYPE %d, aborting\n", 540 538 err_print_prefix, ··· 549 551 printk("%s ** Occurred on CPU %d:\n", 550 552 err_print_prefix, 551 553 (int)header->by_type.regatta_frame.cpuid); 552 - status = privateer_process_logout_frame((struct el_common *) 554 + privateer_process_logout_frame((struct el_common *) 553 555 header->by_type.regatta_frame.data_start, 1); 554 556 break; 555 557 default:
+1 -1
arch/alpha/kernel/irq_alpha.c
··· 228 228 void __init 229 229 init_rtc_irq(void) 230 230 { 231 - irq_set_chip_and_handler_name(RTC_IRQ, &no_irq_chip, 231 + irq_set_chip_and_handler_name(RTC_IRQ, &dummy_irq_chip, 232 232 handle_simple_irq, "RTC"); 233 233 setup_irq(RTC_IRQ, &timer_irqaction); 234 234 }
+3 -3
arch/alpha/kernel/setup.c
··· 1404 1404 case PCA56_CPU: 1405 1405 case PCA57_CPU: 1406 1406 { 1407 - unsigned long cbox_config, size; 1408 - 1409 1407 if (cpu_type == PCA56_CPU) { 1410 1408 L1I = CSHAPE(16*1024, 6, 1); 1411 1409 L1D = CSHAPE(8*1024, 5, 1); ··· 1413 1415 } 1414 1416 L3 = -1; 1415 1417 1418 + #if 0 1419 + unsigned long cbox_config, size; 1420 + 1416 1421 cbox_config = *(vulp) phys_to_virt (0xfffff00008UL); 1417 1422 size = 512*1024 * (1 << ((cbox_config >> 12) & 3)); 1418 1423 1419 - #if 0 1420 1424 L2 = ((cbox_config >> 31) & 1 ? CSHAPE (size, 6, 1) : -1); 1421 1425 #else 1422 1426 L2 = external_cache_probe(512*1024, 6);
+1 -2
arch/alpha/kernel/smc37c93x.c
··· 79 79 static unsigned long __init SMCConfigState(unsigned long baseAddr) 80 80 { 81 81 unsigned char devId; 82 - unsigned char devRev; 83 82 84 83 unsigned long configPort; 85 84 unsigned long indexPort; ··· 99 100 devId = inb(dataPort); 100 101 if (devId == VALID_DEVICE_ID) { 101 102 outb(DEVICE_REV, indexPort); 102 - devRev = inb(dataPort); 103 + /* unsigned char devRev = */ inb(dataPort); 103 104 break; 104 105 } 105 106 else
+3 -2
arch/alpha/kernel/sys_wildfire.c
··· 156 156 wildfire_init_irq_per_pca(int qbbno, int pcano) 157 157 { 158 158 int i, irq_bias; 159 - unsigned long io_bias; 160 159 static struct irqaction isa_enable = { 161 160 .handler = no_action, 162 161 .name = "isa_enable", ··· 164 165 irq_bias = qbbno * (WILDFIRE_PCA_PER_QBB * WILDFIRE_IRQ_PER_PCA) 165 166 + pcano * WILDFIRE_IRQ_PER_PCA; 166 167 168 + #if 0 169 + unsigned long io_bias; 170 + 167 171 /* Only need the following for first PCI bus per PCA. */ 168 172 io_bias = WILDFIRE_IO(qbbno, pcano<<1) - WILDFIRE_IO_BIAS; 169 173 170 - #if 0 171 174 outb(0, DMA1_RESET_REG + io_bias); 172 175 outb(0, DMA2_RESET_REG + io_bias); 173 176 outb(DMA_MODE_CASCADE, DMA2_MODE_REG + io_bias);
+1
arch/alpha/kernel/time.c
··· 153 153 year += 100; 154 154 155 155 ts->tv_sec = mktime(year, mon, day, hour, min, sec); 156 + ts->tv_nsec = 0; 156 157 } 157 158 158 159
+6
arch/arm/mach-davinci/Kconfig
··· 63 63 depends on ARCH_DAVINCI_DM644x 64 64 select MISC_DEVICES 65 65 select EEPROM_AT24 66 + select I2C 66 67 help 67 68 Configure this option to specify the whether the board used 68 69 for development is a DM644x EVM ··· 73 72 depends on ARCH_DAVINCI_DM644x 74 73 select MISC_DEVICES 75 74 select EEPROM_AT24 75 + select I2C 76 76 help 77 77 Say Y here to select the Lyrtech Small Form Factor 78 78 Software Defined Radio (SFFSDR) board. ··· 107 105 select MACH_DAVINCI_DM6467TEVM 108 106 select MISC_DEVICES 109 107 select EEPROM_AT24 108 + select I2C 110 109 help 111 110 Configure this option to specify the whether the board used 112 111 for development is a DM6467 EVM ··· 121 118 depends on ARCH_DAVINCI_DM365 122 119 select MISC_DEVICES 123 120 select EEPROM_AT24 121 + select I2C 124 122 help 125 123 Configure this option to specify whether the board used 126 124 for development is a DM365 EVM ··· 133 129 select GPIO_PCF857X 134 130 select MISC_DEVICES 135 131 select EEPROM_AT24 132 + select I2C 136 133 help 137 134 Say Y here to select the TI DA830/OMAP-L137/AM17x Evaluation Module. 138 135 ··· 210 205 depends on ARCH_DAVINCI_DA850 211 206 select MISC_DEVICES 212 207 select EEPROM_AT24 208 + select I2C 213 209 help 214 210 Say Y here to select the Critical Link MityDSP-L138/MityARM-1808 215 211 System on Module. Information on this SoM may be found at
+2 -2
arch/arm/mach-davinci/board-mityomapl138.c
··· 29 29 #include <mach/mux.h> 30 30 #include <mach/spi.h> 31 31 32 - #define MITYOMAPL138_PHY_ID "0:03" 32 + #define MITYOMAPL138_PHY_ID "" 33 33 34 34 #define FACTORY_CONFIG_MAGIC 0x012C0138 35 35 #define FACTORY_CONFIG_VERSION 0x00010001 ··· 414 414 415 415 static struct platform_device mityomapl138_nandflash_device = { 416 416 .name = "davinci_nand", 417 - .id = 0, 417 + .id = 1, 418 418 .dev = { 419 419 .platform_data = &mityomapl138_nandflash_data, 420 420 },
+9 -3
arch/arm/mach-davinci/devices-da8xx.c
··· 39 39 #define DA8XX_GPIO_BASE 0x01e26000 40 40 #define DA8XX_I2C1_BASE 0x01e28000 41 41 #define DA8XX_SPI0_BASE 0x01c41000 42 - #define DA8XX_SPI1_BASE 0x01f0e000 42 + #define DA830_SPI1_BASE 0x01e12000 43 + #define DA850_SPI1_BASE 0x01f0e000 43 44 44 45 #define DA8XX_EMAC_CTRL_REG_OFFSET 0x3000 45 46 #define DA8XX_EMAC_MOD_REG_OFFSET 0x2000 ··· 763 762 764 763 static struct resource da8xx_spi1_resources[] = { 765 764 [0] = { 766 - .start = DA8XX_SPI1_BASE, 767 - .end = DA8XX_SPI1_BASE + SZ_4K - 1, 765 + .start = DA830_SPI1_BASE, 766 + .end = DA830_SPI1_BASE + SZ_4K - 1, 768 767 .flags = IORESOURCE_MEM, 769 768 }, 770 769 [1] = { ··· 832 831 " %d\n", __func__, instance, ret); 833 832 834 833 da8xx_spi_pdata[instance].num_chipselect = len; 834 + 835 + if (instance == 1 && cpu_is_davinci_da850()) { 836 + da8xx_spi1_resources[0].start = DA850_SPI1_BASE; 837 + da8xx_spi1_resources[0].end = DA850_SPI1_BASE + SZ_4K - 1; 838 + } 835 839 836 840 return platform_device_register(&da8xx_spi_device[instance]); 837 841 }
+8 -5
arch/arm/mach-davinci/include/mach/debug-macro.S
··· 24 24 25 25 #define UART_SHIFT 2 26 26 27 + #define davinci_uart_v2p(x) ((x) - PAGE_OFFSET + PLAT_PHYS_OFFSET) 28 + #define davinci_uart_p2v(x) ((x) - PLAT_PHYS_OFFSET + PAGE_OFFSET) 29 + 27 30 .pushsection .data 28 31 davinci_uart_phys: .word 0 29 32 davinci_uart_virt: .word 0 ··· 37 34 /* Use davinci_uart_phys/virt if already configured */ 38 35 10: mrc p15, 0, \rp, c1, c0 39 36 tst \rp, #1 @ MMU enabled? 40 - ldreq \rp, =__virt_to_phys(davinci_uart_phys) 37 + ldreq \rp, =davinci_uart_v2p(davinci_uart_phys) 41 38 ldrne \rp, =davinci_uart_phys 42 39 add \rv, \rp, #4 @ davinci_uart_virt 43 40 ldr \rp, [\rp, #0] ··· 51 48 tst \rp, #1 @ MMU enabled? 52 49 53 50 /* Copy uart phys address from decompressor uart info */ 54 - ldreq \rv, =__virt_to_phys(davinci_uart_phys) 51 + ldreq \rv, =davinci_uart_v2p(davinci_uart_phys) 55 52 ldrne \rv, =davinci_uart_phys 56 53 ldreq \rp, =DAVINCI_UART_INFO 57 - ldrne \rp, =__phys_to_virt(DAVINCI_UART_INFO) 54 + ldrne \rp, =davinci_uart_p2v(DAVINCI_UART_INFO) 58 55 ldr \rp, [\rp, #0] 59 56 str \rp, [\rv] 60 57 61 58 /* Copy uart virt address from decompressor uart info */ 62 - ldreq \rv, =__virt_to_phys(davinci_uart_virt) 59 + ldreq \rv, =davinci_uart_v2p(davinci_uart_virt) 63 60 ldrne \rv, =davinci_uart_virt 64 61 ldreq \rp, =DAVINCI_UART_INFO 65 - ldrne \rp, =__phys_to_virt(DAVINCI_UART_INFO) 62 + ldrne \rp, =davinci_uart_p2v(DAVINCI_UART_INFO) 66 63 ldr \rp, [\rp, #4] 67 64 str \rp, [\rv] 68 65
+1 -1
arch/arm/mach-davinci/include/mach/serial.h
··· 22 22 * 23 23 * This area sits just below the page tables (see arch/arm/kernel/head.S). 24 24 */ 25 - #define DAVINCI_UART_INFO (PHYS_OFFSET + 0x3ff8) 25 + #define DAVINCI_UART_INFO (PLAT_PHYS_OFFSET + 0x3ff8) 26 26 27 27 #define DAVINCI_UART0_BASE (IO_PHYS + 0x20000) 28 28 #define DAVINCI_UART1_BASE (IO_PHYS + 0x20400)
+1 -4
arch/arm/mach-msm/board-qsd8x50.c
··· 160 160 161 161 static void __init qsd8x50_init_mmc(void) 162 162 { 163 - if (machine_is_qsd8x50_ffa() || machine_is_qsd8x50a_ffa()) 164 - vreg_mmc = vreg_get(NULL, "gp6"); 165 - else 166 - vreg_mmc = vreg_get(NULL, "gp5"); 163 + vreg_mmc = vreg_get(NULL, "gp5"); 167 164 168 165 if (IS_ERR(vreg_mmc)) { 169 166 pr_err("vreg get for vreg_mmc failed (%ld)\n",
+1 -1
arch/arm/mach-msm/timer.c
··· 269 269 270 270 /* Use existing clock_event for cpu 0 */ 271 271 if (!smp_processor_id()) 272 - return; 272 + return 0; 273 273 274 274 writel(DGT_CLK_CTL_DIV_4, MSM_TMR_BASE + DGT_CLK_CTL); 275 275
+4 -2
arch/arm/mach-tegra/gpio.c
··· 257 257 void tegra_gpio_resume(void) 258 258 { 259 259 unsigned long flags; 260 - int b, p, i; 260 + int b; 261 + int p; 261 262 262 263 local_irq_save(flags); 263 264 ··· 281 280 void tegra_gpio_suspend(void) 282 281 { 283 282 unsigned long flags; 284 - int b, p, i; 283 + int b; 284 + int p; 285 285 286 286 local_irq_save(flags); 287 287 for (b = 0; b < ARRAY_SIZE(tegra_gpio_banks); b++) {
+5 -4
arch/arm/mach-tegra/tegra2_clocks.c
··· 1362 1362 { 1363 1363 unsigned long flags; 1364 1364 int ret; 1365 + long new_rate = rate; 1365 1366 1366 - rate = clk_round_rate(c->parent, rate); 1367 - if (rate < 0) 1368 - return rate; 1367 + new_rate = clk_round_rate(c->parent, new_rate); 1368 + if (new_rate < 0) 1369 + return new_rate; 1369 1370 1370 1371 spin_lock_irqsave(&c->parent->spinlock, flags); 1371 1372 1372 - c->u.shared_bus_user.rate = rate; 1373 + c->u.shared_bus_user.rate = new_rate; 1373 1374 ret = tegra_clk_shared_bus_update(c->parent); 1374 1375 1375 1376 spin_unlock_irqrestore(&c->parent->spinlock, flags);
-11
arch/arm/plat-s5p/pm.c
··· 19 19 20 20 #define PFX "s5p pm: " 21 21 22 - /* s3c_pm_check_resume_pin 23 - * 24 - * check to see if the pin is configured correctly for sleep mode, and 25 - * make any necessary adjustments if it is not 26 - */ 27 - 28 - static void s3c_pm_check_resume_pin(unsigned int pin, unsigned int irqoffs) 29 - { 30 - /* nothing here yet */ 31 - } 32 - 33 22 /* s3c_pm_configure_extint 34 23 * 35 24 * configure all external interrupt pins
-6
arch/arm/plat-samsung/pm-check.c
··· 164 164 */ 165 165 static u32 *s3c_pm_runcheck(struct resource *res, u32 *val) 166 166 { 167 - void *save_at = phys_to_virt(s3c_sleep_save_phys); 168 167 unsigned long addr; 169 168 unsigned long left; 170 169 void *stkpage; ··· 188 189 189 190 if (in_region(ptr, left, crcs, crc_size)) { 190 191 S3C_PMDBG("skipping %08lx, has crc block in\n", addr); 191 - goto skip_check; 192 - } 193 - 194 - if (in_region(ptr, left, save_at, 32*4 )) { 195 - S3C_PMDBG("skipping %08lx, has save block in\n", addr); 196 192 goto skip_check; 197 193 } 198 194
+3 -2
arch/arm/plat-samsung/pm.c
··· 214 214 * 215 215 * print any IRQs asserted at resume time (ie, we woke from) 216 216 */ 217 - static void s3c_pm_show_resume_irqs(int start, unsigned long which, 218 - unsigned long mask) 217 + static void __maybe_unused s3c_pm_show_resume_irqs(int start, 218 + unsigned long which, 219 + unsigned long mask) 219 220 { 220 221 int i; 221 222
+9
arch/avr32/include/asm/setup.h
··· 94 94 95 95 #define ETH_INVALID_PHY 0xff 96 96 97 + /* board information */ 98 + #define ATAG_BOARDINFO 0x54410008 99 + 100 + struct tag_boardinfo { 101 + u32 board_number; 102 + }; 103 + 97 104 struct tag { 98 105 struct tag_header hdr; 99 106 union { ··· 109 102 struct tag_cmdline cmdline; 110 103 struct tag_clock clock; 111 104 struct tag_ethernet ethernet; 105 + struct tag_boardinfo boardinfo; 112 106 } u; 113 107 }; 114 108 ··· 136 128 137 129 extern resource_size_t fbmem_start; 138 130 extern resource_size_t fbmem_size; 131 + extern u32 board_number; 139 132 140 133 void setup_processor(void); 141 134
+15
arch/avr32/kernel/setup.c
··· 391 391 __tagtable(ATAG_CLOCK, parse_tag_clock); 392 392 393 393 /* 394 + * The board_number correspond to the bd->bi_board_number in U-Boot. This 395 + * parameter is only available during initialisation and can be used in some 396 + * kind of board identification. 397 + */ 398 + u32 __initdata board_number; 399 + 400 + static int __init parse_tag_boardinfo(struct tag *tag) 401 + { 402 + board_number = tag->u.boardinfo.board_number; 403 + 404 + return 0; 405 + } 406 + __tagtable(ATAG_BOARDINFO, parse_tag_boardinfo); 407 + 408 + /* 394 409 * Scan the tag table for this tag, and call its parse function. The 395 410 * tag table is built by the linker from all the __tagtable 396 411 * declarations.
-22
arch/avr32/kernel/traps.c
··· 95 95 info.si_code = code; 96 96 info.si_addr = (void __user *)addr; 97 97 force_sig_info(signr, &info, current); 98 - 99 - /* 100 - * Init gets no signals that it doesn't have a handler for. 101 - * That's all very well, but if it has caused a synchronous 102 - * exception and we ignore the resulting signal, it will just 103 - * generate the same exception over and over again and we get 104 - * nowhere. Better to kill it and let the kernel panic. 105 - */ 106 - if (is_global_init(current)) { 107 - __sighandler_t handler; 108 - 109 - spin_lock_irq(&current->sighand->siglock); 110 - handler = current->sighand->action[signr-1].sa.sa_handler; 111 - spin_unlock_irq(&current->sighand->siglock); 112 - if (handler == SIG_DFL) { 113 - /* init has generated a synchronous exception 114 - and it doesn't have a handler for the signal */ 115 - printk(KERN_CRIT "init has generated signal %ld " 116 - "but has no handler for it\n", signr); 117 - do_exit(signr); 118 - } 119 - } 120 98 } 121 99 122 100 asmlinkage void do_nmi(unsigned long ecr, struct pt_regs *regs)
+20 -12
arch/avr32/mach-at32ap/clock.c
··· 35 35 spin_unlock(&clk_list_lock); 36 36 } 37 37 38 + static struct clk *__clk_get(struct device *dev, const char *id) 39 + { 40 + struct clk *clk; 41 + 42 + list_for_each_entry(clk, &at32_clock_list, list) { 43 + if (clk->dev == dev && strcmp(id, clk->name) == 0) { 44 + return clk; 45 + } 46 + } 47 + 48 + return ERR_PTR(-ENOENT); 49 + } 50 + 38 51 struct clk *clk_get(struct device *dev, const char *id) 39 52 { 40 53 struct clk *clk; 41 54 42 55 spin_lock(&clk_list_lock); 43 - 44 - list_for_each_entry(clk, &at32_clock_list, list) { 45 - if (clk->dev == dev && strcmp(id, clk->name) == 0) { 46 - spin_unlock(&clk_list_lock); 47 - return clk; 48 - } 49 - } 50 - 56 + clk = __clk_get(dev, id); 51 57 spin_unlock(&clk_list_lock); 52 - return ERR_PTR(-ENOENT); 58 + 59 + return clk; 53 60 } 61 + 54 62 EXPORT_SYMBOL(clk_get); 55 63 56 64 void clk_put(struct clk *clk) ··· 265 257 spin_lock(&clk_list_lock); 266 258 267 259 /* show clock tree as derived from the three oscillators */ 268 - clk = clk_get(NULL, "osc32k"); 260 + clk = __clk_get(NULL, "osc32k"); 269 261 dump_clock(clk, &r); 270 262 clk_put(clk); 271 263 272 - clk = clk_get(NULL, "osc0"); 264 + clk = __clk_get(NULL, "osc0"); 273 265 dump_clock(clk, &r); 274 266 clk_put(clk); 275 267 276 - clk = clk_get(NULL, "osc1"); 268 + clk = __clk_get(NULL, "osc1"); 277 269 dump_clock(clk, &r); 278 270 clk_put(clk); 279 271
+11 -11
arch/avr32/mach-at32ap/extint.c
··· 61 61 static struct eic *nmi_eic; 62 62 static bool nmi_enabled; 63 63 64 - static void eic_ack_irq(struct irq_chip *d) 64 + static void eic_ack_irq(struct irq_data *d) 65 65 { 66 - struct eic *eic = irq_data_get_irq_chip_data(data); 66 + struct eic *eic = irq_data_get_irq_chip_data(d); 67 67 eic_writel(eic, ICR, 1 << (d->irq - eic->first_irq)); 68 68 } 69 69 70 - static void eic_mask_irq(struct irq_chip *d) 70 + static void eic_mask_irq(struct irq_data *d) 71 71 { 72 - struct eic *eic = irq_data_get_irq_chip_data(data); 72 + struct eic *eic = irq_data_get_irq_chip_data(d); 73 73 eic_writel(eic, IDR, 1 << (d->irq - eic->first_irq)); 74 74 } 75 75 76 - static void eic_mask_ack_irq(struct irq_chip *d) 76 + static void eic_mask_ack_irq(struct irq_data *d) 77 77 { 78 - struct eic *eic = irq_data_get_irq_chip_data(data); 78 + struct eic *eic = irq_data_get_irq_chip_data(d); 79 79 eic_writel(eic, ICR, 1 << (d->irq - eic->first_irq)); 80 80 eic_writel(eic, IDR, 1 << (d->irq - eic->first_irq)); 81 81 } 82 82 83 - static void eic_unmask_irq(struct irq_chip *d) 83 + static void eic_unmask_irq(struct irq_data *d) 84 84 { 85 - struct eic *eic = irq_data_get_irq_chip_data(data); 85 + struct eic *eic = irq_data_get_irq_chip_data(d); 86 86 eic_writel(eic, IER, 1 << (d->irq - eic->first_irq)); 87 87 } 88 88 89 - static int eic_set_irq_type(struct irq_chip *d, unsigned int flow_type) 89 + static int eic_set_irq_type(struct irq_data *d, unsigned int flow_type) 90 90 { 91 - struct eic *eic = irq_data_get_irq_chip_data(data); 91 + struct eic *eic = irq_data_get_irq_chip_data(d); 92 92 unsigned int irq = d->irq; 93 93 unsigned int i = irq - eic->first_irq; 94 94 u32 mode, edge, level; ··· 191 191 192 192 regs = platform_get_resource(pdev, IORESOURCE_MEM, 0); 193 193 int_irq = platform_get_irq(pdev, 0); 194 - if (!regs || !int_irq) { 194 + if (!regs || (int)int_irq <= 0) { 195 195 dev_dbg(&pdev->dev, "missing regs and/or irq resource\n"); 196 196 return -ENXIO; 197 197 }
+1 -1
arch/avr32/mach-at32ap/pio.c
··· 257 257 pio_writel(pio, IDR, 1 << (gpio & 0x1f)); 258 258 } 259 259 260 - static void gpio_irq_unmask(struct irq_data *d)) 260 + static void gpio_irq_unmask(struct irq_data *d) 261 261 { 262 262 unsigned gpio = irq_to_gpio(d->irq); 263 263 struct pio_device *pio = &pio_dev[gpio >> 5];
+1 -1
arch/avr32/mach-at32ap/pm-at32ap700x.S
··· 53 53 st.w r8[TI_flags], r9 54 54 unmask_interrupts 55 55 sleep CPU_SLEEP_IDLE 56 - .size cpu_idle_sleep, . - cpu_idle_sleep 56 + .size cpu_enter_idle, . - cpu_enter_idle 57 57 58 58 /* 59 59 * Common return path for PM functions that don't run from
+18 -18
arch/blackfin/include/asm/system.h
··· 19 19 * Force strict CPU ordering. 20 20 */ 21 21 #define nop() __asm__ __volatile__ ("nop;\n\t" : : ) 22 - #define mb() __asm__ __volatile__ ("" : : : "memory") 23 - #define rmb() __asm__ __volatile__ ("" : : : "memory") 24 - #define wmb() __asm__ __volatile__ ("" : : : "memory") 25 - #define set_mb(var, value) do { (void) xchg(&var, value); } while (0) 26 - #define read_barrier_depends() do { } while(0) 22 + #define smp_mb() mb() 23 + #define smp_rmb() rmb() 24 + #define smp_wmb() wmb() 25 + #define set_mb(var, value) do { var = value; mb(); } while (0) 26 + #define smp_read_barrier_depends() read_barrier_depends() 27 27 28 28 #ifdef CONFIG_SMP 29 29 asmlinkage unsigned long __raw_xchg_1_asm(volatile void *ptr, unsigned long value); ··· 37 37 unsigned long new, unsigned long old); 38 38 39 39 #ifdef __ARCH_SYNC_CORE_DCACHE 40 - # define smp_mb() do { barrier(); smp_check_barrier(); smp_mark_barrier(); } while (0) 41 - # define smp_rmb() do { barrier(); smp_check_barrier(); } while (0) 42 - # define smp_wmb() do { barrier(); smp_mark_barrier(); } while (0) 43 - #define smp_read_barrier_depends() do { barrier(); smp_check_barrier(); } while (0) 44 - 40 + /* Force Core data cache coherence */ 41 + # define mb() do { barrier(); smp_check_barrier(); smp_mark_barrier(); } while (0) 42 + # define rmb() do { barrier(); smp_check_barrier(); } while (0) 43 + # define wmb() do { barrier(); smp_mark_barrier(); } while (0) 44 + # define read_barrier_depends() do { barrier(); smp_check_barrier(); } while (0) 45 45 #else 46 - # define smp_mb() barrier() 47 - # define smp_rmb() barrier() 48 - # define smp_wmb() barrier() 49 - #define smp_read_barrier_depends() barrier() 46 + # define mb() barrier() 47 + # define rmb() barrier() 48 + # define wmb() barrier() 49 + # define read_barrier_depends() do { } while (0) 50 50 #endif 51 51 52 52 static inline unsigned long __xchg(unsigned long x, volatile void *ptr, ··· 99 99 100 100 #else /* !CONFIG_SMP */ 101 101 102 - #define smp_mb() barrier() 103 - #define smp_rmb() barrier() 104 - #define smp_wmb() barrier() 105 - #define smp_read_barrier_depends() do { } while(0) 102 + #define mb() barrier() 103 + #define rmb() barrier() 104 + #define wmb() barrier() 105 + #define read_barrier_depends() do { } while (0) 106 106 107 107 struct __xchg_dummy { 108 108 unsigned long a[100];
+1 -1
arch/blackfin/kernel/gptimers.c
··· 268 268 _disable_gptimers(mask); 269 269 for (i = 0; i < MAX_BLACKFIN_GPTIMERS; ++i) 270 270 if (mask & (1 << i)) 271 - group_regs[BFIN_TIMER_OCTET(i)]->status |= trun_mask[i]; 271 + group_regs[BFIN_TIMER_OCTET(i)]->status = trun_mask[i]; 272 272 SSYNC(); 273 273 } 274 274 EXPORT_SYMBOL(disable_gptimers);
+7 -1
arch/blackfin/kernel/time-ts.c
··· 206 206 { 207 207 struct clock_event_device *evt = dev_id; 208 208 smp_mb(); 209 - evt->event_handler(evt); 209 + /* 210 + * We want to ACK before we handle so that we can handle smaller timer 211 + * intervals. This way if the timer expires again while we're handling 212 + * things, we're more likely to see that 2nd int rather than swallowing 213 + * it by ACKing the int at the end of this handler. 214 + */ 210 215 bfin_gptmr0_ack(); 216 + evt->event_handler(evt); 211 217 return IRQ_HANDLED; 212 218 } 213 219
+16 -3
arch/blackfin/mach-common/smp.c
··· 109 109 struct blackfin_flush_data *fdata = info; 110 110 111 111 /* Invalidate the memory holding the bounds of the flushed region. */ 112 - invalidate_dcache_range((unsigned long)fdata, 113 - (unsigned long)fdata + sizeof(*fdata)); 112 + blackfin_dcache_invalidate_range((unsigned long)fdata, 113 + (unsigned long)fdata + sizeof(*fdata)); 114 114 115 - flush_icache_range(fdata->start, fdata->end); 115 + /* Make sure all write buffers in the data side of the core 116 + * are flushed before trying to invalidate the icache. This 117 + * needs to be after the data flush and before the icache 118 + * flush so that the SSYNC does the right thing in preventing 119 + * the instruction prefetcher from hitting things in cached 120 + * memory at the wrong time -- it runs much further ahead than 121 + * the pipeline. 122 + */ 123 + SSYNC(); 124 + 125 + /* ipi_flaush_icache is invoked by generic flush_icache_range, 126 + * so call blackfin arch icache flush directly here. 127 + */ 128 + blackfin_icache_flush_range(fdata->start, fdata->end); 116 129 } 117 130 118 131 static void ipi_call_function(unsigned int cpu, struct ipi_message *msg)
+5 -1
arch/m68k/include/asm/unistd.h
··· 343 343 #define __NR_fanotify_init 337 344 344 #define __NR_fanotify_mark 338 345 345 #define __NR_prlimit64 339 346 + #define __NR_name_to_handle_at 340 347 + #define __NR_open_by_handle_at 341 348 + #define __NR_clock_adjtime 342 349 + #define __NR_syncfs 343 346 350 347 351 #ifdef __KERNEL__ 348 352 349 - #define NR_syscalls 340 353 + #define NR_syscalls 344 350 354 351 355 #define __ARCH_WANT_IPC_PARSE_VERSION 352 356 #define __ARCH_WANT_OLD_READDIR
+4
arch/m68k/kernel/entry_mm.S
··· 750 750 .long sys_fanotify_init 751 751 .long sys_fanotify_mark 752 752 .long sys_prlimit64 753 + .long sys_name_to_handle_at /* 340 */ 754 + .long sys_open_by_handle_at 755 + .long sys_clock_adjtime 756 + .long sys_syncfs 753 757
+4
arch/m68k/kernel/syscalltable.S
··· 358 358 .long sys_fanotify_init 359 359 .long sys_fanotify_mark 360 360 .long sys_prlimit64 361 + .long sys_name_to_handle_at /* 340 */ 362 + .long sys_open_by_handle_at 363 + .long sys_clock_adjtime 364 + .long sys_syncfs 361 365 362 366 .rept NR_syscalls-(.-sys_call_table)/4 363 367 .long sys_ni_syscall
-1
arch/microblaze/Kconfig
··· 6 6 select HAVE_FUNCTION_GRAPH_TRACER 7 7 select HAVE_DYNAMIC_FTRACE 8 8 select HAVE_FTRACE_MCOUNT_RECORD 9 - select USB_ARCH_HAS_EHCI 10 9 select ARCH_WANT_OPTIONAL_GPIOLIB 11 10 select HAVE_OPROFILE 12 11 select HAVE_ARCH_KGDB
+1 -1
arch/powerpc/Kconfig
··· 209 209 config ARCH_SUSPEND_POSSIBLE 210 210 def_bool y 211 211 depends on ADB_PMU || PPC_EFIKA || PPC_LITE5200 || PPC_83xx || \ 212 - PPC_85xx || PPC_86xx || PPC_PSERIES || 44x || 40x 212 + (PPC_85xx && !SMP) || PPC_86xx || PPC_PSERIES || 44x || 40x 213 213 214 214 config PPC_DCR_NATIVE 215 215 bool
+14 -2
arch/powerpc/include/asm/cputable.h
··· 382 382 #define CPU_FTRS_E500_2 (CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_USE_TB | \ 383 383 CPU_FTR_SPE_COMP | CPU_FTR_MAYBE_CAN_NAP | \ 384 384 CPU_FTR_NODSISRALIGN | CPU_FTR_NOEXECUTE) 385 - #define CPU_FTRS_E500MC (CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_USE_TB | \ 386 - CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_NODSISRALIGN | \ 385 + #define CPU_FTRS_E500MC (CPU_FTR_USE_TB | CPU_FTR_NODSISRALIGN | \ 387 386 CPU_FTR_L2CSR | CPU_FTR_LWSYNC | CPU_FTR_NOEXECUTE | \ 388 387 CPU_FTR_DBELL) 388 + #define CPU_FTRS_E5500 (CPU_FTR_USE_TB | CPU_FTR_NODSISRALIGN | \ 389 + CPU_FTR_L2CSR | CPU_FTR_LWSYNC | CPU_FTR_NOEXECUTE | \ 390 + CPU_FTR_DBELL | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD) 389 391 #define CPU_FTRS_GENERIC_32 (CPU_FTR_COMMON | CPU_FTR_NODSISRALIGN) 390 392 391 393 /* 64-bit CPUs */ ··· 437 435 #define CPU_FTRS_COMPATIBLE (CPU_FTR_USE_TB | CPU_FTR_PPCAS_ARCH_V2) 438 436 439 437 #ifdef __powerpc64__ 438 + #ifdef CONFIG_PPC_BOOK3E 439 + #define CPU_FTRS_POSSIBLE (CPU_FTRS_E5500) 440 + #else 440 441 #define CPU_FTRS_POSSIBLE \ 441 442 (CPU_FTRS_POWER3 | CPU_FTRS_RS64 | CPU_FTRS_POWER4 | \ 442 443 CPU_FTRS_PPC970 | CPU_FTRS_POWER5 | CPU_FTRS_POWER6 | \ 443 444 CPU_FTRS_POWER7 | CPU_FTRS_CELL | CPU_FTRS_PA6T | \ 444 445 CPU_FTR_1T_SEGMENT | CPU_FTR_VSX) 446 + #endif 445 447 #else 446 448 enum { 447 449 CPU_FTRS_POSSIBLE = ··· 479 473 #endif 480 474 #ifdef CONFIG_E500 481 475 CPU_FTRS_E500 | CPU_FTRS_E500_2 | CPU_FTRS_E500MC | 476 + CPU_FTRS_E5500 | 482 477 #endif 483 478 0, 484 479 }; 485 480 #endif /* __powerpc64__ */ 486 481 487 482 #ifdef __powerpc64__ 483 + #ifdef CONFIG_PPC_BOOK3E 484 + #define CPU_FTRS_ALWAYS (CPU_FTRS_E5500) 485 + #else 488 486 #define CPU_FTRS_ALWAYS \ 489 487 (CPU_FTRS_POWER3 & CPU_FTRS_RS64 & CPU_FTRS_POWER4 & \ 490 488 CPU_FTRS_PPC970 & CPU_FTRS_POWER5 & CPU_FTRS_POWER6 & \ 491 489 CPU_FTRS_POWER7 & CPU_FTRS_CELL & CPU_FTRS_PA6T & CPU_FTRS_POSSIBLE) 490 + #endif 492 491 #else 493 492 enum { 494 493 CPU_FTRS_ALWAYS = ··· 524 513 #endif 525 514 #ifdef CONFIG_E500 526 515 CPU_FTRS_E500 & CPU_FTRS_E500_2 & CPU_FTRS_E500MC & 516 + CPU_FTRS_E5500 & 527 517 #endif 528 518 CPU_FTRS_POSSIBLE, 529 519 };
+1 -1
arch/powerpc/include/asm/pte-common.h
··· 162 162 * on platforms where such control is possible. 163 163 */ 164 164 #if defined(CONFIG_KGDB) || defined(CONFIG_XMON) || defined(CONFIG_BDI_SWITCH) ||\ 165 - defined(CONFIG_KPROBES) 165 + defined(CONFIG_KPROBES) || defined(CONFIG_DYNAMIC_FTRACE) 166 166 #define PAGE_KERNEL_TEXT PAGE_KERNEL_X 167 167 #else 168 168 #define PAGE_KERNEL_TEXT PAGE_KERNEL_ROX
+1 -1
arch/powerpc/kernel/cputable.c
··· 1973 1973 .pvr_mask = 0xffff0000, 1974 1974 .pvr_value = 0x80240000, 1975 1975 .cpu_name = "e5500", 1976 - .cpu_features = CPU_FTRS_E500MC, 1976 + .cpu_features = CPU_FTRS_E5500, 1977 1977 .cpu_user_features = COMMON_USER_BOOKE, 1978 1978 .mmu_features = MMU_FTR_TYPE_FSL_E | MMU_FTR_BIG_PHYS | 1979 1979 MMU_FTR_USE_TLBILX,
+6 -6
arch/powerpc/kernel/crash.c
··· 163 163 } 164 164 165 165 /* wait for all the CPUs to hit real mode but timeout if they don't come in */ 166 - #if defined(CONFIG_PPC_STD_MMU_64) && defined(CONFIG_SMP) 166 + #ifdef CONFIG_PPC_STD_MMU_64 167 167 static void crash_kexec_wait_realmode(int cpu) 168 168 { 169 169 unsigned int msecs; ··· 188 188 } 189 189 mb(); 190 190 } 191 - #else 192 - static inline void crash_kexec_wait_realmode(int cpu) {} 193 - #endif 191 + #endif /* CONFIG_PPC_STD_MMU_64 */ 194 192 195 193 /* 196 194 * This function will be called by secondary cpus or by kexec cpu ··· 233 235 crash_ipi_callback(regs); 234 236 } 235 237 236 - #else 238 + #else /* ! CONFIG_SMP */ 239 + static inline void crash_kexec_wait_realmode(int cpu) {} 240 + 237 241 static void crash_kexec_prepare_cpus(int cpu) 238 242 { 239 243 /* ··· 255 255 { 256 256 cpus_in_sr = CPU_MASK_NONE; 257 257 } 258 - #endif 258 + #endif /* CONFIG_SMP */ 259 259 260 260 /* 261 261 * Register a function to be called on shutdown. Only use this if you
+3 -3
arch/powerpc/kernel/ibmebus.c
··· 527 527 528 528 #endif /* !CONFIG_SUSPEND */ 529 529 530 - #ifdef CONFIG_HIBERNATION 530 + #ifdef CONFIG_HIBERNATE_CALLBACKS 531 531 532 532 static int ibmebus_bus_pm_freeze(struct device *dev) 533 533 { ··· 665 665 return ret; 666 666 } 667 667 668 - #else /* !CONFIG_HIBERNATION */ 668 + #else /* !CONFIG_HIBERNATE_CALLBACKS */ 669 669 670 670 #define ibmebus_bus_pm_freeze NULL 671 671 #define ibmebus_bus_pm_thaw NULL ··· 676 676 #define ibmebus_bus_pm_poweroff_noirq NULL 677 677 #define ibmebus_bus_pm_restore_noirq NULL 678 678 679 - #endif /* !CONFIG_HIBERNATION */ 679 + #endif /* !CONFIG_HIBERNATE_CALLBACKS */ 680 680 681 681 static struct dev_pm_ops ibmebus_bus_dev_pm_ops = { 682 682 .prepare = ibmebus_bus_pm_prepare,
+5 -3
arch/powerpc/kernel/legacy_serial.c
··· 330 330 if (!parent) 331 331 continue; 332 332 if (of_match_node(legacy_serial_parents, parent) != NULL) { 333 - index = add_legacy_soc_port(np, np); 334 - if (index >= 0 && np == stdout) 335 - legacy_serial_console = index; 333 + if (of_device_is_available(np)) { 334 + index = add_legacy_soc_port(np, np); 335 + if (index >= 0 && np == stdout) 336 + legacy_serial_console = index; 337 + } 336 338 } 337 339 of_node_put(parent); 338 340 }
+30 -7
arch/powerpc/kernel/perf_event.c
··· 398 398 return 0; 399 399 } 400 400 401 + static u64 check_and_compute_delta(u64 prev, u64 val) 402 + { 403 + u64 delta = (val - prev) & 0xfffffffful; 404 + 405 + /* 406 + * POWER7 can roll back counter values, if the new value is smaller 407 + * than the previous value it will cause the delta and the counter to 408 + * have bogus values unless we rolled a counter over. If a coutner is 409 + * rolled back, it will be smaller, but within 256, which is the maximum 410 + * number of events to rollback at once. If we dectect a rollback 411 + * return 0. This can lead to a small lack of precision in the 412 + * counters. 413 + */ 414 + if (prev > val && (prev - val) < 256) 415 + delta = 0; 416 + 417 + return delta; 418 + } 419 + 401 420 static void power_pmu_read(struct perf_event *event) 402 421 { 403 422 s64 val, delta, prev; ··· 435 416 prev = local64_read(&event->hw.prev_count); 436 417 barrier(); 437 418 val = read_pmc(event->hw.idx); 419 + delta = check_and_compute_delta(prev, val); 420 + if (!delta) 421 + return; 438 422 } while (local64_cmpxchg(&event->hw.prev_count, prev, val) != prev); 439 423 440 - /* The counters are only 32 bits wide */ 441 - delta = (val - prev) & 0xfffffffful; 442 424 local64_add(delta, &event->count); 443 425 local64_sub(delta, &event->hw.period_left); 444 426 } ··· 469 449 val = (event->hw.idx == 5) ? pmc5 : pmc6; 470 450 prev = local64_read(&event->hw.prev_count); 471 451 event->hw.idx = 0; 472 - delta = (val - prev) & 0xfffffffful; 473 - local64_add(delta, &event->count); 452 + delta = check_and_compute_delta(prev, val); 453 + if (delta) 454 + local64_add(delta, &event->count); 474 455 } 475 456 } 476 457 ··· 479 458 unsigned long pmc5, unsigned long pmc6) 480 459 { 481 460 struct perf_event *event; 482 - u64 val; 461 + u64 val, prev; 483 462 int i; 484 463 485 464 for (i = 0; i < cpuhw->n_limited; ++i) { 486 465 event = cpuhw->limited_counter[i]; 487 466 event->hw.idx = cpuhw->limited_hwidx[i]; 488 467 val = (event->hw.idx == 5) ? pmc5 : pmc6; 489 - local64_set(&event->hw.prev_count, val); 468 + prev = local64_read(&event->hw.prev_count); 469 + if (check_and_compute_delta(prev, val)) 470 + local64_set(&event->hw.prev_count, val); 490 471 perf_event_update_userpage(event); 491 472 } 492 473 } ··· 1220 1197 1221 1198 /* we don't have to worry about interrupts here */ 1222 1199 prev = local64_read(&event->hw.prev_count); 1223 - delta = (val - prev) & 0xfffffffful; 1200 + delta = check_and_compute_delta(prev, val); 1224 1201 local64_add(delta, &event->count); 1225 1202 1226 1203 /*
+3
arch/powerpc/kernel/time.c
··· 229 229 u64 stolen = 0; 230 230 u64 dtb; 231 231 232 + if (!dtl) 233 + return 0; 234 + 232 235 if (i == vpa->dtl_idx) 233 236 return 0; 234 237 while (i < vpa->dtl_idx) {
+5 -3
arch/powerpc/platforms/powermac/smp.c
··· 842 842 mpic_setup_this_cpu(); 843 843 } 844 844 845 + #ifdef CONFIG_PPC64 845 846 #ifdef CONFIG_HOTPLUG_CPU 846 847 static int smp_core99_cpu_notify(struct notifier_block *self, 847 848 unsigned long action, void *hcpu) ··· 880 879 881 880 static void __init smp_core99_bringup_done(void) 882 881 { 883 - #ifdef CONFIG_PPC64 884 882 extern void g5_phy_disable_cpu1(void); 885 883 886 884 /* Close i2c bus if it was used for tb sync */ ··· 894 894 set_cpu_present(1, false); 895 895 g5_phy_disable_cpu1(); 896 896 } 897 - #endif /* CONFIG_PPC64 */ 898 - 899 897 #ifdef CONFIG_HOTPLUG_CPU 900 898 register_cpu_notifier(&smp_core99_cpu_nb); 901 899 #endif 900 + 902 901 if (ppc_md.progress) 903 902 ppc_md.progress("smp_core99_bringup_done", 0x349); 904 903 } 904 + #endif /* CONFIG_PPC64 */ 905 905 906 906 #ifdef CONFIG_HOTPLUG_CPU 907 907 ··· 975 975 struct smp_ops_t core99_smp_ops = { 976 976 .message_pass = smp_mpic_message_pass, 977 977 .probe = smp_core99_probe, 978 + #ifdef CONFIG_PPC64 978 979 .bringup_done = smp_core99_bringup_done, 980 + #endif 979 981 .kick_cpu = smp_core99_kick_cpu, 980 982 .setup_cpu = smp_core99_setup_cpu, 981 983 .give_timebase = smp_core99_give_timebase,
+10 -2
arch/powerpc/platforms/pseries/setup.c
··· 287 287 int cpu, ret; 288 288 struct paca_struct *pp; 289 289 struct dtl_entry *dtl; 290 + struct kmem_cache *dtl_cache; 290 291 291 292 if (!firmware_has_feature(FW_FEATURE_SPLPAR)) 292 293 return 0; 293 294 295 + dtl_cache = kmem_cache_create("dtl", DISPATCH_LOG_BYTES, 296 + DISPATCH_LOG_BYTES, 0, NULL); 297 + if (!dtl_cache) { 298 + pr_warn("Failed to create dispatch trace log buffer cache\n"); 299 + pr_warn("Stolen time statistics will be unreliable\n"); 300 + return 0; 301 + } 302 + 294 303 for_each_possible_cpu(cpu) { 295 304 pp = &paca[cpu]; 296 - dtl = kmalloc_node(DISPATCH_LOG_BYTES, GFP_KERNEL, 297 - cpu_to_node(cpu)); 305 + dtl = kmem_cache_alloc(dtl_cache, GFP_KERNEL); 298 306 if (!dtl) { 299 307 pr_warn("Failed to allocate dispatch trace log for cpu %d\n", 300 308 cpu);
+5
arch/powerpc/sysdev/fsl_pci.c
··· 324 324 struct resource rsrc; 325 325 const int *bus_range; 326 326 327 + if (!of_device_is_available(dev)) { 328 + pr_warning("%s: disabled\n", dev->full_name); 329 + return -ENODEV; 330 + } 331 + 327 332 pr_debug("Adding PCI host bridge %s\n", dev->full_name); 328 333 329 334 /* Fetch host bridge registers address */
+3 -1
arch/powerpc/sysdev/fsl_rio.c
··· 1457 1457 port->ops = ops; 1458 1458 port->priv = priv; 1459 1459 port->phys_efptr = 0x100; 1460 - rio_register_mport(port); 1461 1460 1462 1461 priv->regs_win = ioremap(regs.start, regs.end - regs.start + 1); 1463 1462 rio_regs_win = priv->regs_win; ··· 1502 1503 & RIO_PEF_CTLS) >> 4; 1503 1504 dev_info(&dev->dev, "RapidIO Common Transport System size: %d\n", 1504 1505 port->sys_size ? 65536 : 256); 1506 + 1507 + if (rio_register_mport(port)) 1508 + goto err; 1505 1509 1506 1510 if (port->host_deviceid >= 0) 1507 1511 out_be32(priv->regs_win + RIO_GCCSR, RIO_PORT_GEN_HOST |
+4
arch/um/Kconfig.x86
··· 4 4 5 5 menu "Host processor type and features" 6 6 7 + config CMPXCHG_LOCAL 8 + bool 9 + default n 10 + 7 11 source "arch/x86/Kconfig.cpu" 8 12 9 13 endmenu
+6
arch/um/include/asm/bug.h
··· 1 + #ifndef __UM_BUG_H 2 + #define __UM_BUG_H 3 + 4 + #include <asm-generic/bug.h> 5 + 6 + #endif
+4
arch/x86/include/asm/msr-index.h
··· 96 96 #define MSR_IA32_MC0_ADDR 0x00000402 97 97 #define MSR_IA32_MC0_MISC 0x00000403 98 98 99 + #define MSR_AMD64_MC0_MASK 0xc0010044 100 + 99 101 #define MSR_IA32_MCx_CTL(x) (MSR_IA32_MC0_CTL + 4*(x)) 100 102 #define MSR_IA32_MCx_STATUS(x) (MSR_IA32_MC0_STATUS + 4*(x)) 101 103 #define MSR_IA32_MCx_ADDR(x) (MSR_IA32_MC0_ADDR + 4*(x)) 102 104 #define MSR_IA32_MCx_MISC(x) (MSR_IA32_MC0_MISC + 4*(x)) 105 + 106 + #define MSR_AMD64_MCx_MASK(x) (MSR_AMD64_MC0_MASK + (x)) 103 107 104 108 /* These are consecutive and not in the normal 4er MCE bank block */ 105 109 #define MSR_IA32_MC0_CTL2 0x00000280
+19
arch/x86/kernel/cpu/amd.c
··· 615 615 /* As a rule processors have APIC timer running in deep C states */ 616 616 if (c->x86 >= 0xf && !cpu_has_amd_erratum(amd_erratum_400)) 617 617 set_cpu_cap(c, X86_FEATURE_ARAT); 618 + 619 + /* 620 + * Disable GART TLB Walk Errors on Fam10h. We do this here 621 + * because this is always needed when GART is enabled, even in a 622 + * kernel which has no MCE support built in. 623 + */ 624 + if (c->x86 == 0x10) { 625 + /* 626 + * BIOS should disable GartTlbWlk Errors themself. If 627 + * it doesn't do it here as suggested by the BKDG. 628 + * 629 + * Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=33012 630 + */ 631 + u64 mask; 632 + 633 + rdmsrl(MSR_AMD64_MCx_MASK(4), mask); 634 + mask |= (1 << 10); 635 + wrmsrl(MSR_AMD64_MCx_MASK(4), mask); 636 + } 618 637 } 619 638 620 639 #ifdef CONFIG_X86_32
+23
arch/x86/kernel/smpboot.c
··· 312 312 identify_secondary_cpu(c); 313 313 } 314 314 315 + static void __cpuinit check_cpu_siblings_on_same_node(int cpu1, int cpu2) 316 + { 317 + int node1 = early_cpu_to_node(cpu1); 318 + int node2 = early_cpu_to_node(cpu2); 319 + 320 + /* 321 + * Our CPU scheduler assumes all logical cpus in the same physical cpu 322 + * share the same node. But, buggy ACPI or NUMA emulation might assign 323 + * them to different node. Fix it. 324 + */ 325 + if (node1 != node2) { 326 + pr_warning("CPU %d in node %d and CPU %d in node %d are in the same physical CPU. forcing same node %d\n", 327 + cpu1, node1, cpu2, node2, node2); 328 + 329 + numa_remove_cpu(cpu1); 330 + numa_set_node(cpu1, node2); 331 + numa_add_cpu(cpu1); 332 + } 333 + } 334 + 315 335 static void __cpuinit link_thread_siblings(int cpu1, int cpu2) 316 336 { 317 337 cpumask_set_cpu(cpu1, cpu_sibling_mask(cpu2)); ··· 340 320 cpumask_set_cpu(cpu2, cpu_core_mask(cpu1)); 341 321 cpumask_set_cpu(cpu1, cpu_llc_shared_mask(cpu2)); 342 322 cpumask_set_cpu(cpu2, cpu_llc_shared_mask(cpu1)); 323 + check_cpu_siblings_on_same_node(cpu1, cpu2); 343 324 } 344 325 345 326 ··· 382 361 per_cpu(cpu_llc_id, cpu) == per_cpu(cpu_llc_id, i)) { 383 362 cpumask_set_cpu(i, cpu_llc_shared_mask(cpu)); 384 363 cpumask_set_cpu(cpu, cpu_llc_shared_mask(i)); 364 + check_cpu_siblings_on_same_node(cpu, i); 385 365 } 386 366 if (c->phys_proc_id == cpu_data(i).phys_proc_id) { 387 367 cpumask_set_cpu(i, cpu_core_mask(cpu)); 388 368 cpumask_set_cpu(cpu, cpu_core_mask(i)); 369 + check_cpu_siblings_on_same_node(cpu, i); 389 370 /* 390 371 * Does this new cpu bringup a new core? 391 372 */
+2
arch/x86/platform/ce4100/falconfalls.dts
··· 74 74 compatible = "intel,ce4100-pci", "pci"; 75 75 device_type = "pci"; 76 76 bus-range = <1 1>; 77 + reg = <0x0800 0x0 0x0 0x0 0x0>; 77 78 ranges = <0x2000000 0 0xdffe0000 0x2000000 0 0xdffe0000 0 0x1000>; 78 79 79 80 interrupt-parent = <&ioapic2>; ··· 413 412 #address-cells = <2>; 414 413 #size-cells = <1>; 415 414 compatible = "isa"; 415 + reg = <0xf800 0x0 0x0 0x0 0x0>; 416 416 ranges = <1 0 0 0 0 0x100>; 417 417 418 418 rtc@70 {
+5 -5
arch/x86/platform/mrst/mrst.c
··· 97 97 pentry->freq_hz, pentry->irq); 98 98 if (!pentry->irq) 99 99 continue; 100 - mp_irq.type = MP_IOAPIC; 100 + mp_irq.type = MP_INTSRC; 101 101 mp_irq.irqtype = mp_INT; 102 102 /* triggering mode edge bit 2-3, active high polarity bit 0-1 */ 103 103 mp_irq.irqflag = 5; 104 - mp_irq.srcbus = 0; 104 + mp_irq.srcbus = MP_BUS_ISA; 105 105 mp_irq.srcbusirq = pentry->irq; /* IRQ */ 106 106 mp_irq.dstapic = MP_APIC_ALL; 107 107 mp_irq.dstirq = pentry->irq; ··· 168 168 for (totallen = 0; totallen < sfi_mrtc_num; totallen++, pentry++) { 169 169 pr_debug("RTC[%d]: paddr = 0x%08x, irq = %d\n", 170 170 totallen, (u32)pentry->phys_addr, pentry->irq); 171 - mp_irq.type = MP_IOAPIC; 171 + mp_irq.type = MP_INTSRC; 172 172 mp_irq.irqtype = mp_INT; 173 173 mp_irq.irqflag = 0xf; /* level trigger and active low */ 174 - mp_irq.srcbus = 0; 174 + mp_irq.srcbus = MP_BUS_ISA; 175 175 mp_irq.srcbusirq = pentry->irq; /* IRQ */ 176 176 mp_irq.dstapic = MP_APIC_ALL; 177 177 mp_irq.dstirq = pentry->irq; ··· 282 282 /* Avoid searching for BIOS MP tables */ 283 283 x86_init.mpparse.find_smp_config = x86_init_noop; 284 284 x86_init.mpparse.get_smp_config = x86_init_uint_noop; 285 - 285 + set_bit(MP_BUS_ISA, mp_bus_not_pci); 286 286 } 287 287 288 288 /*
+1
arch/x86/xen/Kconfig
··· 39 39 config XEN_SAVE_RESTORE 40 40 bool 41 41 depends on XEN 42 + select HIBERNATE_CALLBACKS 42 43 default y 43 44 44 45 config XEN_DEBUG_FS
+7 -14
arch/x86/xen/enlighten.c
··· 238 238 static __init void xen_init_cpuid_mask(void) 239 239 { 240 240 unsigned int ax, bx, cx, dx; 241 + unsigned int xsave_mask; 241 242 242 243 cpuid_leaf1_edx_mask = 243 244 ~((1 << X86_FEATURE_MCE) | /* disable MCE */ ··· 250 249 cpuid_leaf1_edx_mask &= 251 250 ~((1 << X86_FEATURE_APIC) | /* disable local APIC */ 252 251 (1 << X86_FEATURE_ACPI)); /* disable ACPI */ 253 - 254 252 ax = 1; 255 - cx = 0; 256 253 xen_cpuid(&ax, &bx, &cx, &dx); 257 254 258 - /* cpuid claims we support xsave; try enabling it to see what happens */ 259 - if (cx & (1 << (X86_FEATURE_XSAVE % 32))) { 260 - unsigned long cr4; 255 + xsave_mask = 256 + (1 << (X86_FEATURE_XSAVE % 32)) | 257 + (1 << (X86_FEATURE_OSXSAVE % 32)); 261 258 262 - set_in_cr4(X86_CR4_OSXSAVE); 263 - 264 - cr4 = read_cr4(); 265 - 266 - if ((cr4 & X86_CR4_OSXSAVE) == 0) 267 - cpuid_leaf1_ecx_mask &= ~(1 << (X86_FEATURE_XSAVE % 32)); 268 - 269 - clear_in_cr4(X86_CR4_OSXSAVE); 270 - } 259 + /* Xen will set CR4.OSXSAVE if supported and not disabled by force */ 260 + if ((cx & xsave_mask) != xsave_mask) 261 + cpuid_leaf1_ecx_mask &= ~xsave_mask; /* disable XSAVE & OSXSAVE */ 271 262 } 272 263 273 264 static void xen_set_debugreg(int reg, unsigned long val)
+2 -2
arch/x86/xen/mmu.c
··· 565 565 if (io_page && 566 566 (xen_initial_domain() || addr >= ISA_END_ADDRESS)) { 567 567 other_addr = pfn_to_mfn(addr >> PAGE_SHIFT) << PAGE_SHIFT; 568 - WARN(addr != other_addr, 568 + WARN_ONCE(addr != other_addr, 569 569 "0x%lx is using VM_IO, but it is 0x%lx!\n", 570 570 (unsigned long)addr, (unsigned long)other_addr); 571 571 } else { 572 572 pteval_t iomap_set = (_pte.pte & PTE_FLAGS_MASK) & _PAGE_IOMAP; 573 573 other_addr = (_pte.pte & PTE_PFN_MASK); 574 - WARN((addr == other_addr) && (!io_page) && (!iomap_set), 574 + WARN_ONCE((addr == other_addr) && (!io_page) && (!iomap_set), 575 575 "0x%lx is missing VM_IO (and wasn't fixed)!\n", 576 576 (unsigned long)addr); 577 577 }
+114 -55
block/blk-core.c
··· 198 198 } 199 199 EXPORT_SYMBOL(blk_dump_rq_flags); 200 200 201 - /* 202 - * Make sure that plugs that were pending when this function was entered, 203 - * are now complete and requests pushed to the queue. 204 - */ 205 - static inline void queue_sync_plugs(struct request_queue *q) 206 - { 207 - /* 208 - * If the current process is plugged and has barriers submitted, 209 - * we will livelock if we don't unplug first. 210 - */ 211 - blk_flush_plug(current); 212 - } 213 - 214 201 static void blk_delay_work(struct work_struct *work) 215 202 { 216 203 struct request_queue *q; 217 204 218 205 q = container_of(work, struct request_queue, delay_work.work); 219 206 spin_lock_irq(q->queue_lock); 220 - __blk_run_queue(q, false); 207 + __blk_run_queue(q); 221 208 spin_unlock_irq(q->queue_lock); 222 209 } 223 210 ··· 220 233 */ 221 234 void blk_delay_queue(struct request_queue *q, unsigned long msecs) 222 235 { 223 - schedule_delayed_work(&q->delay_work, msecs_to_jiffies(msecs)); 236 + queue_delayed_work(kblockd_workqueue, &q->delay_work, 237 + msecs_to_jiffies(msecs)); 224 238 } 225 239 EXPORT_SYMBOL(blk_delay_queue); 226 240 ··· 239 251 WARN_ON(!irqs_disabled()); 240 252 241 253 queue_flag_clear(QUEUE_FLAG_STOPPED, q); 242 - __blk_run_queue(q, false); 254 + __blk_run_queue(q); 243 255 } 244 256 EXPORT_SYMBOL(blk_start_queue); 245 257 ··· 286 298 { 287 299 del_timer_sync(&q->timeout); 288 300 cancel_delayed_work_sync(&q->delay_work); 289 - queue_sync_plugs(q); 290 301 } 291 302 EXPORT_SYMBOL(blk_sync_queue); 292 303 ··· 297 310 * Description: 298 311 * See @blk_run_queue. This variant must be called with the queue lock 299 312 * held and interrupts disabled. 300 - * 301 313 */ 302 - void __blk_run_queue(struct request_queue *q, bool force_kblockd) 314 + void __blk_run_queue(struct request_queue *q) 303 315 { 304 316 if (unlikely(blk_queue_stopped(q))) 305 317 return; ··· 307 321 * Only recurse once to avoid overrunning the stack, let the unplug 308 322 * handling reinvoke the handler shortly if we already got there. 309 323 */ 310 - if (!force_kblockd && !queue_flag_test_and_set(QUEUE_FLAG_REENTER, q)) { 324 + if (!queue_flag_test_and_set(QUEUE_FLAG_REENTER, q)) { 311 325 q->request_fn(q); 312 326 queue_flag_clear(QUEUE_FLAG_REENTER, q); 313 327 } else 314 328 queue_delayed_work(kblockd_workqueue, &q->delay_work, 0); 315 329 } 316 330 EXPORT_SYMBOL(__blk_run_queue); 331 + 332 + /** 333 + * blk_run_queue_async - run a single device queue in workqueue context 334 + * @q: The queue to run 335 + * 336 + * Description: 337 + * Tells kblockd to perform the equivalent of @blk_run_queue on behalf 338 + * of us. 
339 + */ 340 + void blk_run_queue_async(struct request_queue *q) 341 + { 342 + if (likely(!blk_queue_stopped(q))) 343 + queue_delayed_work(kblockd_workqueue, &q->delay_work, 0); 344 + } 317 345 318 346 /** 319 347 * blk_run_queue - run a single device queue ··· 342 342 unsigned long flags; 343 343 344 344 spin_lock_irqsave(q->queue_lock, flags); 345 - __blk_run_queue(q, false); 345 + __blk_run_queue(q); 346 346 spin_unlock_irqrestore(q->queue_lock, flags); 347 347 } 348 348 EXPORT_SYMBOL(blk_run_queue); ··· 991 991 blk_queue_end_tag(q, rq); 992 992 993 993 add_acct_request(q, rq, where); 994 - __blk_run_queue(q, false); 994 + __blk_run_queue(q); 995 995 spin_unlock_irqrestore(q->queue_lock, flags); 996 996 } 997 997 EXPORT_SYMBOL(blk_insert_request); ··· 1311 1311 1312 1312 plug = current->plug; 1313 1313 if (plug) { 1314 - if (!plug->should_sort && !list_empty(&plug->list)) { 1314 + /* 1315 + * If this is the first request added after a plug, fire 1316 + * of a plug trace. If others have been added before, check 1317 + * if we have multiple devices in this plug. If so, make a 1318 + * note to sort the list before dispatch. 1319 + */ 1320 + if (list_empty(&plug->list)) 1321 + trace_block_plug(q); 1322 + else if (!plug->should_sort) { 1315 1323 struct request *__rq; 1316 1324 1317 1325 __rq = list_entry_rq(plug->list.prev); ··· 1335 1327 } else { 1336 1328 spin_lock_irq(q->queue_lock); 1337 1329 add_acct_request(q, req, where); 1338 - __blk_run_queue(q, false); 1330 + __blk_run_queue(q); 1339 1331 out_unlock: 1340 1332 spin_unlock_irq(q->queue_lock); 1341 1333 } ··· 2652 2644 2653 2645 plug->magic = PLUG_MAGIC; 2654 2646 INIT_LIST_HEAD(&plug->list); 2647 + INIT_LIST_HEAD(&plug->cb_list); 2655 2648 plug->should_sort = 0; 2656 2649 2657 2650 /* ··· 2677 2668 return !(rqa->q <= rqb->q); 2678 2669 } 2679 2670 2680 - static void flush_plug_list(struct blk_plug *plug) 2671 + /* 2672 + * If 'from_schedule' is true, then postpone the dispatch of requests 2673 + * until a safe kblockd context. We due this to avoid accidental big 2674 + * additional stack usage in driver dispatch, in places where the originally 2675 + * plugger did not intend it. 2676 + */ 2677 + static void queue_unplugged(struct request_queue *q, unsigned int depth, 2678 + bool from_schedule) 2679 + __releases(q->queue_lock) 2680 + { 2681 + trace_block_unplug(q, depth, !from_schedule); 2682 + 2683 + /* 2684 + * If we are punting this to kblockd, then we can safely drop 2685 + * the queue_lock before waking kblockd (which needs to take 2686 + * this lock). 
2687 + */ 2688 + if (from_schedule) { 2689 + spin_unlock(q->queue_lock); 2690 + blk_run_queue_async(q); 2691 + } else { 2692 + __blk_run_queue(q); 2693 + spin_unlock(q->queue_lock); 2694 + } 2695 + 2696 + } 2697 + 2698 + static void flush_plug_callbacks(struct blk_plug *plug) 2699 + { 2700 + LIST_HEAD(callbacks); 2701 + 2702 + if (list_empty(&plug->cb_list)) 2703 + return; 2704 + 2705 + list_splice_init(&plug->cb_list, &callbacks); 2706 + 2707 + while (!list_empty(&callbacks)) { 2708 + struct blk_plug_cb *cb = list_first_entry(&callbacks, 2709 + struct blk_plug_cb, 2710 + list); 2711 + list_del(&cb->list); 2712 + cb->callback(cb); 2713 + } 2714 + } 2715 + 2716 + void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule) 2681 2717 { 2682 2718 struct request_queue *q; 2683 2719 unsigned long flags; 2684 2720 struct request *rq; 2721 + LIST_HEAD(list); 2722 + unsigned int depth; 2685 2723 2686 2724 BUG_ON(plug->magic != PLUG_MAGIC); 2687 2725 2726 + flush_plug_callbacks(plug); 2688 2727 if (list_empty(&plug->list)) 2689 2728 return; 2690 2729 2691 - if (plug->should_sort) 2692 - list_sort(NULL, &plug->list, plug_rq_cmp); 2730 + list_splice_init(&plug->list, &list); 2731 + 2732 + if (plug->should_sort) { 2733 + list_sort(NULL, &list, plug_rq_cmp); 2734 + plug->should_sort = 0; 2735 + } 2693 2736 2694 2737 q = NULL; 2738 + depth = 0; 2739 + 2740 + /* 2741 + * Save and disable interrupts here, to avoid doing it for every 2742 + * queue lock we have to take. 2743 + */ 2695 2744 local_irq_save(flags); 2696 - while (!list_empty(&plug->list)) { 2697 - rq = list_entry_rq(plug->list.next); 2745 + while (!list_empty(&list)) { 2746 + rq = list_entry_rq(list.next); 2698 2747 list_del_init(&rq->queuelist); 2699 2748 BUG_ON(!(rq->cmd_flags & REQ_ON_PLUG)); 2700 2749 BUG_ON(!rq->q); 2701 2750 if (rq->q != q) { 2702 - if (q) { 2703 - __blk_run_queue(q, false); 2704 - spin_unlock(q->queue_lock); 2705 - } 2751 + /* 2752 + * This drops the queue lock 2753 + */ 2754 + if (q) 2755 + queue_unplugged(q, depth, from_schedule); 2706 2756 q = rq->q; 2757 + depth = 0; 2707 2758 spin_lock(q->queue_lock); 2708 2759 } 2709 2760 rq->cmd_flags &= ~REQ_ON_PLUG; ··· 2775 2706 __elv_add_request(q, rq, ELEVATOR_INSERT_FLUSH); 2776 2707 else 2777 2708 __elv_add_request(q, rq, ELEVATOR_INSERT_SORT_MERGE); 2709 + 2710 + depth++; 2778 2711 } 2779 2712 2780 - if (q) { 2781 - __blk_run_queue(q, false); 2782 - spin_unlock(q->queue_lock); 2783 - } 2713 + /* 2714 + * This drops the queue lock 2715 + */ 2716 + if (q) 2717 + queue_unplugged(q, depth, from_schedule); 2784 2718 2785 - BUG_ON(!list_empty(&plug->list)); 2786 2719 local_irq_restore(flags); 2787 2720 } 2788 - 2789 - static void __blk_finish_plug(struct task_struct *tsk, struct blk_plug *plug) 2790 - { 2791 - flush_plug_list(plug); 2792 - 2793 - if (plug == tsk->plug) 2794 - tsk->plug = NULL; 2795 - } 2721 + EXPORT_SYMBOL(blk_flush_plug_list); 2796 2722 2797 2723 void blk_finish_plug(struct blk_plug *plug) 2798 2724 { 2799 - if (plug) 2800 - __blk_finish_plug(current, plug); 2725 + blk_flush_plug_list(plug, false); 2726 + 2727 + if (plug == current->plug) 2728 + current->plug = NULL; 2801 2729 } 2802 2730 EXPORT_SYMBOL(blk_finish_plug); 2803 - 2804 - void __blk_flush_plug(struct task_struct *tsk, struct blk_plug *plug) 2805 - { 2806 - __blk_finish_plug(tsk, plug); 2807 - tsk->plug = plug; 2808 - } 2809 - EXPORT_SYMBOL(__blk_flush_plug); 2810 2731 2811 2732 int __init blk_dev_init(void) 2812 2733 {
+1 -1
block/blk-exec.c
··· 55 55 WARN_ON(irqs_disabled()); 56 56 spin_lock_irq(q->queue_lock); 57 57 __elv_add_request(q, rq, where); 58 - __blk_run_queue(q, false); 58 + __blk_run_queue(q); 59 59 /* the queue is stopped so it won't be plugged+unplugged */ 60 60 if (rq->cmd_type == REQ_TYPE_PM_RESUME) 61 61 q->request_fn(q);
+2 -2
block/blk-flush.c
··· 218 218 * request_fn may confuse the driver. Always use kblockd. 219 219 */ 220 220 if (queued) 221 - __blk_run_queue(q, true); 221 + blk_run_queue_async(q); 222 222 } 223 223 224 224 /** ··· 274 274 * the comment in flush_end_io(). 275 275 */ 276 276 if (blk_flush_complete_seq(rq, REQ_FSEQ_DATA, error)) 277 - __blk_run_queue(q, true); 277 + blk_run_queue_async(q); 278 278 } 279 279 280 280 /**
+1 -2
block/blk-sysfs.c
··· 498 498 { 499 499 int ret; 500 500 struct device *dev = disk_to_dev(disk); 501 - 502 501 struct request_queue *q = disk->queue; 503 502 504 503 if (WARN_ON(!q)) ··· 520 521 if (ret) { 521 522 kobject_uevent(&q->kobj, KOBJ_REMOVE); 522 523 kobject_del(&q->kobj); 523 - blk_trace_remove_sysfs(disk_to_dev(disk)); 524 + blk_trace_remove_sysfs(dev); 524 525 kobject_put(&dev->kobj); 525 526 return ret; 526 527 }
+1
block/blk.h
··· 22 22 void blk_delete_timer(struct request *); 23 23 void blk_add_timer(struct request *); 24 24 void __generic_unplug_device(struct request_queue *); 25 + void blk_run_queue_async(struct request_queue *q); 25 26 26 27 /* 27 28 * Internal atomic flags for request handling
+3 -3
block/cfq-iosched.c
··· 3368 3368 cfqd->busy_queues > 1) { 3369 3369 cfq_del_timer(cfqd, cfqq); 3370 3370 cfq_clear_cfqq_wait_request(cfqq); 3371 - __blk_run_queue(cfqd->queue, false); 3371 + __blk_run_queue(cfqd->queue); 3372 3372 } else { 3373 3373 cfq_blkiocg_update_idle_time_stats( 3374 3374 &cfqq->cfqg->blkg); ··· 3383 3383 * this new queue is RT and the current one is BE 3384 3384 */ 3385 3385 cfq_preempt_queue(cfqd, cfqq); 3386 - __blk_run_queue(cfqd->queue, false); 3386 + __blk_run_queue(cfqd->queue); 3387 3387 } 3388 3388 } 3389 3389 ··· 3743 3743 struct request_queue *q = cfqd->queue; 3744 3744 3745 3745 spin_lock_irq(q->queue_lock); 3746 - __blk_run_queue(cfqd->queue, false); 3746 + __blk_run_queue(cfqd->queue); 3747 3747 spin_unlock_irq(q->queue_lock); 3748 3748 } 3749 3749
+2 -2
block/elevator.c
··· 642 642 */ 643 643 elv_drain_elevator(q); 644 644 while (q->rq.elvpriv) { 645 - __blk_run_queue(q, false); 645 + __blk_run_queue(q); 646 646 spin_unlock_irq(q->queue_lock); 647 647 msleep(10); 648 648 spin_lock_irq(q->queue_lock); ··· 695 695 * with anything. There's no point in delaying queue 696 696 * processing. 697 697 */ 698 - __blk_run_queue(q, false); 698 + __blk_run_queue(q); 699 699 break; 700 700 701 701 case ELEVATOR_INSERT_SORT_MERGE:
+3 -3
drivers/amba/bus.c
··· 214 214 215 215 #endif /* !CONFIG_SUSPEND */ 216 216 217 - #ifdef CONFIG_HIBERNATION 217 + #ifdef CONFIG_HIBERNATE_CALLBACKS 218 218 219 219 static int amba_pm_freeze(struct device *dev) 220 220 { ··· 352 352 return ret; 353 353 } 354 354 355 - #else /* !CONFIG_HIBERNATION */ 355 + #else /* !CONFIG_HIBERNATE_CALLBACKS */ 356 356 357 357 #define amba_pm_freeze NULL 358 358 #define amba_pm_thaw NULL ··· 363 363 #define amba_pm_poweroff_noirq NULL 364 364 #define amba_pm_restore_noirq NULL 365 365 366 - #endif /* !CONFIG_HIBERNATION */ 366 + #endif /* !CONFIG_HIBERNATE_CALLBACKS */ 367 367 368 368 #ifdef CONFIG_PM 369 369
+4 -3
drivers/base/platform.c
··· 149 149 150 150 of_device_node_put(&pa->pdev.dev); 151 151 kfree(pa->pdev.dev.platform_data); 152 + kfree(pa->pdev.mfd_cell); 152 153 kfree(pa->pdev.resource); 153 154 kfree(pa); 154 155 } ··· 772 771 773 772 #endif /* !CONFIG_SUSPEND */ 774 773 775 - #ifdef CONFIG_HIBERNATION 774 + #ifdef CONFIG_HIBERNATE_CALLBACKS 776 775 777 776 static int platform_pm_freeze(struct device *dev) 778 777 { ··· 910 909 return ret; 911 910 } 912 911 913 - #else /* !CONFIG_HIBERNATION */ 912 + #else /* !CONFIG_HIBERNATE_CALLBACKS */ 914 913 915 914 #define platform_pm_freeze NULL 916 915 #define platform_pm_thaw NULL ··· 921 920 #define platform_pm_poweroff_noirq NULL 922 921 #define platform_pm_restore_noirq NULL 923 922 924 - #endif /* !CONFIG_HIBERNATION */ 923 + #endif /* !CONFIG_HIBERNATE_CALLBACKS */ 925 924 926 925 #ifdef CONFIG_PM_RUNTIME 927 926
+4 -4
drivers/base/power/main.c
··· 233 233 } 234 234 break; 235 235 #endif /* CONFIG_SUSPEND */ 236 - #ifdef CONFIG_HIBERNATION 236 + #ifdef CONFIG_HIBERNATE_CALLBACKS 237 237 case PM_EVENT_FREEZE: 238 238 case PM_EVENT_QUIESCE: 239 239 if (ops->freeze) { ··· 260 260 suspend_report_result(ops->restore, error); 261 261 } 262 262 break; 263 - #endif /* CONFIG_HIBERNATION */ 263 + #endif /* CONFIG_HIBERNATE_CALLBACKS */ 264 264 default: 265 265 error = -EINVAL; 266 266 } ··· 308 308 } 309 309 break; 310 310 #endif /* CONFIG_SUSPEND */ 311 - #ifdef CONFIG_HIBERNATION 311 + #ifdef CONFIG_HIBERNATE_CALLBACKS 312 312 case PM_EVENT_FREEZE: 313 313 case PM_EVENT_QUIESCE: 314 314 if (ops->freeze_noirq) { ··· 335 335 suspend_report_result(ops->restore_noirq, error); 336 336 } 337 337 break; 338 - #endif /* CONFIG_HIBERNATION */ 338 + #endif /* CONFIG_HIBERNATE_CALLBACKS */ 339 339 default: 340 340 error = -EINVAL; 341 341 }
+1
drivers/gpu/drm/Kconfig
··· 96 96 # i915 depends on ACPI_VIDEO when ACPI is enabled 97 97 # but for select to work, need to select ACPI_VIDEO's dependencies, ick 98 98 select BACKLIGHT_CLASS_DEVICE if ACPI 99 + select VIDEO_OUTPUT_CONTROL if ACPI 99 100 select INPUT if ACPI 100 101 select ACPI_VIDEO if ACPI 101 102 select ACPI_BUTTON if ACPI
+50 -3
drivers/gpu/drm/nouveau/nouveau_bios.c
··· 269 269 int (*handler)(struct nvbios *, uint16_t, struct init_exec *); 270 270 }; 271 271 272 - static int parse_init_table(struct nvbios *, unsigned int, struct init_exec *); 272 + static int parse_init_table(struct nvbios *, uint16_t, struct init_exec *); 273 273 274 274 #define MACRO_INDEX_SIZE 2 275 275 #define MACRO_SIZE 8 ··· 2011 2011 } 2012 2012 2013 2013 static int 2014 + init_jump(struct nvbios *bios, uint16_t offset, struct init_exec *iexec) 2015 + { 2016 + /* 2017 + * INIT_JUMP opcode: 0x5C ('\') 2018 + * 2019 + * offset (8 bit): opcode 2020 + * offset + 1 (16 bit): offset (in bios) 2021 + * 2022 + * Continue execution of init table from 'offset' 2023 + */ 2024 + 2025 + uint16_t jmp_offset = ROM16(bios->data[offset + 1]); 2026 + 2027 + if (!iexec->execute) 2028 + return 3; 2029 + 2030 + BIOSLOG(bios, "0x%04X: Jump to 0x%04X\n", offset, jmp_offset); 2031 + return jmp_offset - offset; 2032 + } 2033 + 2034 + static int 2014 2035 init_i2c_if(struct nvbios *bios, uint16_t offset, struct init_exec *iexec) 2015 2036 { 2016 2037 /* ··· 3680 3659 { "INIT_ZM_REG_SEQUENCE" , 0x58, init_zm_reg_sequence }, 3681 3660 /* INIT_INDIRECT_REG (0x5A, 7, 0, 0) removed due to no example of use */ 3682 3661 { "INIT_SUB_DIRECT" , 0x5B, init_sub_direct }, 3662 + { "INIT_JUMP" , 0x5C, init_jump }, 3683 3663 { "INIT_I2C_IF" , 0x5E, init_i2c_if }, 3684 3664 { "INIT_COPY_NV_REG" , 0x5F, init_copy_nv_reg }, 3685 3665 { "INIT_ZM_INDEX_IO" , 0x62, init_zm_index_io }, ··· 3722 3700 #define MAX_TABLE_OPS 1000 3723 3701 3724 3702 static int 3725 - parse_init_table(struct nvbios *bios, unsigned int offset, 3726 - struct init_exec *iexec) 3703 + parse_init_table(struct nvbios *bios, uint16_t offset, struct init_exec *iexec) 3727 3704 { 3728 3705 /* 3729 3706 * Parses all commands in an init table. ··· 6351 6330 if (*conn == 0xf2005014 && *conf == 0xffffffff) { 6352 6331 fabricate_dcb_output(dcb, OUTPUT_TMDS, 1, 1, 1); 6353 6332 return false; 6333 + } 6334 + } 6335 + 6336 + /* XFX GT-240X-YA 6337 + * 6338 + * So many things wrong here, replace the entire encoder table.. 6339 + */ 6340 + if (nv_match_device(dev, 0x0ca3, 0x1682, 0x3003)) { 6341 + if (idx == 0) { 6342 + *conn = 0x02001300; /* VGA, connector 1 */ 6343 + *conf = 0x00000028; 6344 + } else 6345 + if (idx == 1) { 6346 + *conn = 0x01010312; /* DVI, connector 0 */ 6347 + *conf = 0x00020030; 6348 + } else 6349 + if (idx == 2) { 6350 + *conn = 0x01010310; /* VGA, connector 0 */ 6351 + *conf = 0x00000028; 6352 + } else 6353 + if (idx == 3) { 6354 + *conn = 0x02022362; /* HDMI, connector 2 */ 6355 + *conf = 0x00020010; 6356 + } else { 6357 + *conn = 0x0000000e; /* EOL */ 6358 + *conf = 0x00000000; 6354 6359 } 6355 6360 } 6356 6361
+1 -1
drivers/gpu/drm/nouveau/nouveau_drv.h
··· 1190 1190 extern int nv50_graph_unload_context(struct drm_device *); 1191 1191 extern int nv50_grctx_init(struct nouveau_grctx *); 1192 1192 extern void nv50_graph_tlb_flush(struct drm_device *dev); 1193 - extern void nv86_graph_tlb_flush(struct drm_device *dev); 1193 + extern void nv84_graph_tlb_flush(struct drm_device *dev); 1194 1194 extern struct nouveau_enum nv50_data_error_names[]; 1195 1195 1196 1196 /* nvc0_graph.c */
+45 -23
drivers/gpu/drm/nouveau/nouveau_mem.c
··· 552 552 u8 tRC; /* Byte 9 */ 553 553 u8 tUNK_10, tUNK_11, tUNK_12, tUNK_13, tUNK_14; 554 554 u8 tUNK_18, tUNK_19, tUNK_20, tUNK_21; 555 + u8 magic_number = 0; /* Yeah... sorry*/ 555 556 u8 *mem = NULL, *entry; 556 557 int i, recordlen, entries; 557 558 ··· 597 596 if (!memtimings->timing) 598 597 return; 599 598 599 + /* Get "some number" from the timing reg for NV_40 600 + * Used in calculations later */ 601 + if(dev_priv->card_type == NV_40) { 602 + magic_number = (nv_rd32(dev,0x100228) & 0x0f000000) >> 24; 603 + } 604 + 600 605 entry = mem + mem[1]; 601 606 for (i = 0; i < entries; i++, entry += recordlen) { 602 607 struct nouveau_pm_memtiming *timing = &pm->memtimings.timing[i]; ··· 642 635 643 636 /* XXX: I don't trust the -1's and +1's... they must come 644 637 * from somewhere! */ 645 - timing->reg_100224 = ((tUNK_0 + tUNK_19 + 1) << 24 | 638 + timing->reg_100224 = (tUNK_0 + tUNK_19 + 1 + magic_number) << 24 | 646 639 tUNK_18 << 16 | 647 - (tUNK_1 + tUNK_19 + 1) << 8 | 648 - (tUNK_2 - 1)); 640 + (tUNK_1 + tUNK_19 + 1 + magic_number) << 8; 641 + if(dev_priv->chipset == 0xa8) { 642 + timing->reg_100224 |= (tUNK_2 - 1); 643 + } else { 644 + timing->reg_100224 |= (tUNK_2 + 2 - magic_number); 645 + } 649 646 650 647 timing->reg_100228 = (tUNK_12 << 16 | tUNK_11 << 8 | tUNK_10); 651 - if(recordlen > 19) { 652 - timing->reg_100228 += (tUNK_19 - 1) << 24; 653 - }/* I cannot back-up this else-statement right now 654 - else { 655 - timing->reg_100228 += tUNK_12 << 24; 656 - }*/ 648 + if(dev_priv->chipset >= 0xa3 && dev_priv->chipset < 0xaa) { 649 + timing->reg_100228 |= (tUNK_19 - 1) << 24; 650 + } 657 651 658 - /* XXX: reg_10022c */ 659 - timing->reg_10022c = tUNK_2 - 1; 652 + if(dev_priv->card_type == NV_40) { 653 + /* NV40: don't know what the rest of the regs are.. 654 + * And don't need to know either */ 655 + timing->reg_100228 |= 0x20200000 | magic_number << 24; 656 + } else if(dev_priv->card_type >= NV_50) { 657 + /* XXX: reg_10022c */ 658 + timing->reg_10022c = tUNK_2 - 1; 660 659 661 - timing->reg_100230 = (tUNK_20 << 24 | tUNK_21 << 16 | 662 - tUNK_13 << 8 | tUNK_13); 660 + timing->reg_100230 = (tUNK_20 << 24 | tUNK_21 << 16 | 661 + tUNK_13 << 8 | tUNK_13); 663 662 664 - /* XXX: +6? */ 665 - timing->reg_100234 = (tRAS << 24 | (tUNK_19 + 6) << 8 | tRC); 666 - timing->reg_100234 += max(tUNK_10,tUNK_11) << 16; 663 + timing->reg_100234 = (tRAS << 24 | tRC); 664 + timing->reg_100234 += max(tUNK_10,tUNK_11) << 16; 667 665 668 - /* XXX; reg_100238, reg_10023c 669 - * reg: 0x00?????? 670 - * reg_10023c: 671 - * 0 for pre-NV50 cards 672 - * 0x????0202 for NV50+ cards (empirical evidence) */ 673 - if(dev_priv->card_type >= NV_50) { 666 + if(dev_priv->chipset < 0xa3) { 667 + timing->reg_100234 |= (tUNK_2 + 2) << 8; 668 + } else { 669 + /* XXX: +6? */ 670 + timing->reg_100234 |= (tUNK_19 + 6) << 8; 671 + } 672 + 673 + /* XXX; reg_100238, reg_10023c 674 + * reg_100238: 0x00?????? 675 + * reg_10023c: 0x!!??0202 for NV50+ cards (empirical evidence) */ 674 676 timing->reg_10023c = 0x202; 677 + if(dev_priv->chipset < 0xa3) { 678 + timing->reg_10023c |= 0x4000000 | (tUNK_2 - 1) << 16; 679 + } else { 680 + /* currently unknown 681 + * 10023c seen as 06xxxxxx, 0bxxxxxx or 0fxxxxxx */ 682 + } 675 683 } 676 684 677 685 NV_DEBUG(dev, "Entry %d: 220: %08x %08x %08x %08x\n", i, ··· 697 675 timing->reg_100238, timing->reg_10023c); 698 676 } 699 677 700 - memtimings->nr_timing = entries; 678 + memtimings->nr_timing = entries; 701 679 memtimings->supported = true; 702 680 } 703 681
+1 -1
drivers/gpu/drm/nouveau/nouveau_perf.c
··· 134 134 case 0x13: 135 135 case 0x15: 136 136 perflvl->fanspeed = entry[55]; 137 - perflvl->voltage = entry[56]; 137 + perflvl->voltage = (recordlen > 56) ? entry[56] : 0; 138 138 perflvl->core = ROM32(entry[1]) * 10; 139 139 perflvl->memory = ROM32(entry[5]) * 20; 140 140 break;
+4 -8
drivers/gpu/drm/nouveau/nouveau_state.c
··· 376 376 engine->graph.destroy_context = nv50_graph_destroy_context; 377 377 engine->graph.load_context = nv50_graph_load_context; 378 378 engine->graph.unload_context = nv50_graph_unload_context; 379 - if (dev_priv->chipset != 0x86) 379 + if (dev_priv->chipset == 0x50 || 380 + dev_priv->chipset == 0xac) 380 381 engine->graph.tlb_flush = nv50_graph_tlb_flush; 381 - else { 382 - /* from what i can see nvidia do this on every 383 - * pre-NVA3 board except NVAC, but, we've only 384 - * ever seen problems on NV86 385 - */ 386 - engine->graph.tlb_flush = nv86_graph_tlb_flush; 387 - } 382 + else 383 + engine->graph.tlb_flush = nv84_graph_tlb_flush; 388 384 engine->fifo.channels = 128; 389 385 engine->fifo.init = nv50_fifo_init; 390 386 engine->fifo.takedown = nv50_fifo_takedown;
+7 -6
drivers/gpu/drm/nouveau/nv04_dfp.c
··· 581 581 int head = nv_encoder->restore.head; 582 582 583 583 if (nv_encoder->dcb->type == OUTPUT_LVDS) { 584 - struct drm_display_mode *native_mode = nouveau_encoder_connector_get(nv_encoder)->native_mode; 585 - if (native_mode) 586 - call_lvds_script(dev, nv_encoder->dcb, head, LVDS_PANEL_ON, 587 - native_mode->clock); 588 - else 589 - NV_ERROR(dev, "Not restoring LVDS without native mode\n"); 584 + struct nouveau_connector *connector = 585 + nouveau_encoder_connector_get(nv_encoder); 586 + 587 + if (connector && connector->native_mode) 588 + call_lvds_script(dev, nv_encoder->dcb, head, 589 + LVDS_PANEL_ON, 590 + connector->native_mode->clock); 590 591 591 592 } else if (nv_encoder->dcb->type == OUTPUT_TMDS) { 592 593 int clock = nouveau_hw_pllvals_to_clk
-3
drivers/gpu/drm/nouveau/nv50_crtc.c
··· 469 469 470 470 start = ptimer->read(dev); 471 471 do { 472 - nv_wr32(dev, 0x61002c, 0x370); 473 - nv_wr32(dev, 0x000140, 1); 474 - 475 472 if (nv_ro32(disp->ntfy, 0x000)) 476 473 return 0; 477 474 } while (ptimer->read(dev) - start < 2000000000ULL);
+1
drivers/gpu/drm/nouveau/nv50_evo.c
··· 186 186 nv_mask(dev, 0x610028, 0x00000000, 0x00010001 << id); 187 187 188 188 evo->dma.max = (4096/4) - 2; 189 + evo->dma.max &= ~7; 189 190 evo->dma.put = 0; 190 191 evo->dma.cur = evo->dma.put; 191 192 evo->dma.free = evo->dma.max - evo->dma.cur;
+1 -1
drivers/gpu/drm/nouveau/nv50_graph.c
··· 503 503 } 504 504 505 505 void 506 - nv86_graph_tlb_flush(struct drm_device *dev) 506 + nv84_graph_tlb_flush(struct drm_device *dev) 507 507 { 508 508 struct drm_nouveau_private *dev_priv = dev->dev_private; 509 509 struct nouveau_timer_engine *ptimer = &dev_priv->engine.timer;
+15 -9
drivers/gpu/drm/nouveau/nvc0_vm.c
··· 104 104 struct nouveau_instmem_engine *pinstmem = &dev_priv->engine.instmem; 105 105 struct drm_device *dev = vm->dev; 106 106 struct nouveau_vm_pgd *vpgd; 107 - u32 r100c80, engine; 107 + u32 engine = (dev_priv->chan_vm == vm) ? 1 : 5; 108 108 109 109 pinstmem->flush(vm->dev); 110 110 111 - if (vm == dev_priv->chan_vm) 112 - engine = 1; 113 - else 114 - engine = 5; 115 - 111 + spin_lock(&dev_priv->ramin_lock); 116 112 list_for_each_entry(vpgd, &vm->pgd_list, head) { 117 - r100c80 = nv_rd32(dev, 0x100c80); 113 + /* looks like maybe a "free flush slots" counter, the 114 + * faster you write to 0x100cbc the more it decreases 115 + */ 116 + if (!nv_wait_ne(dev, 0x100c80, 0x00ff0000, 0x00000000)) { 117 + NV_ERROR(dev, "vm timeout 0: 0x%08x %d\n", 118 + nv_rd32(dev, 0x100c80), engine); 119 + } 118 120 nv_wr32(dev, 0x100cb8, vpgd->obj->vinst >> 8); 119 121 nv_wr32(dev, 0x100cbc, 0x80000000 | engine); 120 - if (!nv_wait(dev, 0x100c80, 0xffffffff, r100c80)) 121 - NV_ERROR(dev, "vm flush timeout eng %d\n", engine); 122 + /* wait for flush to be queued? */ 123 + if (!nv_wait(dev, 0x100c80, 0x00008000, 0x00008000)) { 124 + NV_ERROR(dev, "vm timeout 1: 0x%08x %d\n", 125 + nv_rd32(dev, 0x100c80), engine); 126 + } 122 127 } 128 + spin_unlock(&dev_priv->ramin_lock); 123 129 }
+5 -1
drivers/gpu/drm/radeon/atom.c
··· 32 32 #include "atom.h" 33 33 #include "atom-names.h" 34 34 #include "atom-bits.h" 35 + #include "radeon.h" 35 36 36 37 #define ATOM_COND_ABOVE 0 37 38 #define ATOM_COND_ABOVEOREQUAL 1 ··· 102 101 static uint32_t atom_iio_execute(struct atom_context *ctx, int base, 103 102 uint32_t index, uint32_t data) 104 103 { 104 + struct radeon_device *rdev = ctx->card->dev->dev_private; 105 105 uint32_t temp = 0xCDCDCDCD; 106 + 106 107 while (1) 107 108 switch (CU8(base)) { 108 109 case ATOM_IIO_NOP: ··· 115 112 base += 3; 116 113 break; 117 114 case ATOM_IIO_WRITE: 118 - (void)ctx->card->ioreg_read(ctx->card, CU16(base + 1)); 115 + if (rdev->family == CHIP_RV515) 116 + (void)ctx->card->ioreg_read(ctx->card, CU16(base + 1)); 119 117 ctx->card->ioreg_write(ctx->card, CU16(base + 1), temp); 120 118 base += 3; 121 119 break;
+6
drivers/gpu/drm/radeon/atombios_crtc.c
··· 531 531 pll->flags |= RADEON_PLL_PREFER_HIGH_FB_DIV; 532 532 else 533 533 pll->flags |= RADEON_PLL_PREFER_LOW_REF_DIV; 534 + 535 + if ((rdev->family == CHIP_R600) || 536 + (rdev->family == CHIP_RV610) || 537 + (rdev->family == CHIP_RV630) || 538 + (rdev->family == CHIP_RV670)) 539 + pll->flags |= RADEON_PLL_PREFER_MINM_OVER_MAXP; 534 540 } else { 535 541 pll->flags |= RADEON_PLL_LEGACY; 536 542
+9 -8
drivers/gpu/drm/radeon/evergreen.c
··· 120 120 struct radeon_power_state *ps = &rdev->pm.power_state[req_ps_idx]; 121 121 struct radeon_voltage *voltage = &ps->clock_info[req_cm_idx].voltage; 122 122 123 - if ((voltage->type == VOLTAGE_SW) && voltage->voltage) { 124 - if (voltage->voltage != rdev->pm.current_vddc) { 125 - radeon_atom_set_voltage(rdev, voltage->voltage); 123 + if (voltage->type == VOLTAGE_SW) { 124 + if (voltage->voltage && (voltage->voltage != rdev->pm.current_vddc)) { 125 + radeon_atom_set_voltage(rdev, voltage->voltage, SET_VOLTAGE_TYPE_ASIC_VDDC); 126 126 rdev->pm.current_vddc = voltage->voltage; 127 - DRM_DEBUG("Setting: v: %d\n", voltage->voltage); 127 + DRM_DEBUG("Setting: vddc: %d\n", voltage->voltage); 128 + } 129 + if (voltage->vddci && (voltage->vddci != rdev->pm.current_vddci)) { 130 + radeon_atom_set_voltage(rdev, voltage->vddci, SET_VOLTAGE_TYPE_ASIC_VDDCI); 131 + rdev->pm.current_vddci = voltage->vddci; 132 + DRM_DEBUG("Setting: vddci: %d\n", voltage->vddci); 128 133 } 129 134 } 130 135 } ··· 3041 3036 { 3042 3037 int r; 3043 3038 3044 - r = radeon_dummy_page_init(rdev); 3045 - if (r) 3046 - return r; 3047 3039 /* This don't do much */ 3048 3040 r = radeon_gem_init(rdev); 3049 3041 if (r) ··· 3152 3150 radeon_atombios_fini(rdev); 3153 3151 kfree(rdev->bios); 3154 3152 rdev->bios = NULL; 3155 - radeon_dummy_page_fini(rdev); 3156 3153 } 3157 3154 3158 3155 static void evergreen_pcie_gen2_enable(struct radeon_device *rdev)
+1 -5
drivers/gpu/drm/radeon/r600.c
··· 587 587 588 588 if ((voltage->type == VOLTAGE_SW) && voltage->voltage) { 589 589 if (voltage->voltage != rdev->pm.current_vddc) { 590 - radeon_atom_set_voltage(rdev, voltage->voltage); 590 + radeon_atom_set_voltage(rdev, voltage->voltage, SET_VOLTAGE_TYPE_ASIC_VDDC); 591 591 rdev->pm.current_vddc = voltage->voltage; 592 592 DRM_DEBUG_DRIVER("Setting: v: %d\n", voltage->voltage); 593 593 } ··· 2509 2509 { 2510 2510 int r; 2511 2511 2512 - r = radeon_dummy_page_init(rdev); 2513 - if (r) 2514 - return r; 2515 2512 if (r600_debugfs_mc_info_init(rdev)) { 2516 2513 DRM_ERROR("Failed to register debugfs file for mc !\n"); 2517 2514 } ··· 2622 2625 radeon_atombios_fini(rdev); 2623 2626 kfree(rdev->bios); 2624 2627 rdev->bios = NULL; 2625 - radeon_dummy_page_fini(rdev); 2626 2628 } 2627 2629 2628 2630
+8 -4
drivers/gpu/drm/radeon/radeon.h
··· 177 177 void radeon_pm_resume(struct radeon_device *rdev); 178 178 void radeon_combios_get_power_modes(struct radeon_device *rdev); 179 179 void radeon_atombios_get_power_modes(struct radeon_device *rdev); 180 - void radeon_atom_set_voltage(struct radeon_device *rdev, u16 level); 180 + void radeon_atom_set_voltage(struct radeon_device *rdev, u16 voltage_level, u8 voltage_type); 181 181 void rs690_pm_info(struct radeon_device *rdev); 182 182 extern int rv6xx_get_temp(struct radeon_device *rdev); 183 183 extern int rv770_get_temp(struct radeon_device *rdev); ··· 767 767 u8 vddci_id; /* index into vddci voltage table */ 768 768 bool vddci_enabled; 769 769 /* r6xx+ sw */ 770 - u32 voltage; 770 + u16 voltage; 771 + /* evergreen+ vddci */ 772 + u16 vddci; 771 773 }; 772 774 773 775 /* clock mode flags */ ··· 837 835 int default_power_state_index; 838 836 u32 current_sclk; 839 837 u32 current_mclk; 840 - u32 current_vddc; 838 + u16 current_vddc; 839 + u16 current_vddci; 841 840 u32 default_sclk; 842 841 u32 default_mclk; 843 - u32 default_vddc; 842 + u16 default_vddc; 843 + u16 default_vddci; 844 844 struct radeon_i2c_chan *i2c_bus; 845 845 /* selected pm method */ 846 846 enum radeon_pm_method pm_method;
+1 -1
drivers/gpu/drm/radeon/radeon_asic.c
··· 94 94 rdev->mc_rreg = &rs600_mc_rreg; 95 95 rdev->mc_wreg = &rs600_mc_wreg; 96 96 } 97 - if ((rdev->family >= CHIP_R600) && (rdev->family <= CHIP_HEMLOCK)) { 97 + if (rdev->family >= CHIP_R600) { 98 98 rdev->pciep_rreg = &r600_pciep_rreg; 99 99 rdev->pciep_wreg = &r600_pciep_wreg; 100 100 }
+19 -11
drivers/gpu/drm/radeon/radeon_atombios.c
··· 2176 2176 } 2177 2177 } 2178 2178 2179 - static u16 radeon_atombios_get_default_vddc(struct radeon_device *rdev) 2179 + static void radeon_atombios_get_default_voltages(struct radeon_device *rdev, 2180 + u16 *vddc, u16 *vddci) 2180 2181 { 2181 2182 struct radeon_mode_info *mode_info = &rdev->mode_info; 2182 2183 int index = GetIndexIntoMasterTable(DATA, FirmwareInfo); 2183 2184 u8 frev, crev; 2184 2185 u16 data_offset; 2185 2186 union firmware_info *firmware_info; 2186 - u16 vddc = 0; 2187 + 2188 + *vddc = 0; 2189 + *vddci = 0; 2187 2190 2188 2191 if (atom_parse_data_header(mode_info->atom_context, index, NULL, 2189 2192 &frev, &crev, &data_offset)) { 2190 2193 firmware_info = 2191 2194 (union firmware_info *)(mode_info->atom_context->bios + 2192 2195 data_offset); 2193 - vddc = le16_to_cpu(firmware_info->info_14.usBootUpVDDCVoltage); 2196 + *vddc = le16_to_cpu(firmware_info->info_14.usBootUpVDDCVoltage); 2197 + if ((frev == 2) && (crev >= 2)) 2198 + *vddci = le16_to_cpu(firmware_info->info_22.usBootUpVDDCIVoltage); 2194 2199 } 2195 - 2196 - return vddc; 2197 2200 } 2198 2201 2199 2202 static void radeon_atombios_parse_pplib_non_clock_info(struct radeon_device *rdev, ··· 2206 2203 int j; 2207 2204 u32 misc = le32_to_cpu(non_clock_info->ulCapsAndSettings); 2208 2205 u32 misc2 = le16_to_cpu(non_clock_info->usClassification); 2209 - u16 vddc = radeon_atombios_get_default_vddc(rdev); 2206 + u16 vddc, vddci; 2207 + 2208 + radeon_atombios_get_default_voltages(rdev, &vddc, &vddci); 2210 2209 2211 2210 rdev->pm.power_state[state_index].misc = misc; 2212 2211 rdev->pm.power_state[state_index].misc2 = misc2; ··· 2249 2244 rdev->pm.default_sclk = rdev->pm.power_state[state_index].clock_info[0].sclk; 2250 2245 rdev->pm.default_mclk = rdev->pm.power_state[state_index].clock_info[0].mclk; 2251 2246 rdev->pm.default_vddc = rdev->pm.power_state[state_index].clock_info[0].voltage.voltage; 2247 + rdev->pm.default_vddci = rdev->pm.power_state[state_index].clock_info[0].voltage.vddci; 2252 2248 } else { 2253 2249 /* patch the table values with the default slck/mclk from firmware info */ 2254 2250 for (j = 0; j < mode_index; j++) { ··· 2292 2286 VOLTAGE_SW; 2293 2287 rdev->pm.power_state[state_index].clock_info[mode_index].voltage.voltage = 2294 2288 le16_to_cpu(clock_info->evergreen.usVDDC); 2289 + rdev->pm.power_state[state_index].clock_info[mode_index].voltage.vddci = 2290 + le16_to_cpu(clock_info->evergreen.usVDDCI); 2295 2291 } else { 2296 2292 sclk = le16_to_cpu(clock_info->r600.usEngineClockLow); 2297 2293 sclk |= clock_info->r600.ucEngineClockHigh << 16; ··· 2585 2577 struct _SET_VOLTAGE_PARAMETERS_V2 v2; 2586 2578 }; 2587 2579 2588 - void radeon_atom_set_voltage(struct radeon_device *rdev, u16 level) 2580 + void radeon_atom_set_voltage(struct radeon_device *rdev, u16 voltage_level, u8 voltage_type) 2589 2581 { 2590 2582 union set_voltage args; 2591 2583 int index = GetIndexIntoMasterTable(COMMAND, SetVoltage); 2592 - u8 frev, crev, volt_index = level; 2584 + u8 frev, crev, volt_index = voltage_level; 2593 2585 2594 2586 if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev)) 2595 2587 return; 2596 2588 2597 2589 switch (crev) { 2598 2590 case 1: 2599 - args.v1.ucVoltageType = SET_VOLTAGE_TYPE_ASIC_VDDC; 2591 + args.v1.ucVoltageType = voltage_type; 2600 2592 args.v1.ucVoltageMode = SET_ASIC_VOLTAGE_MODE_ALL_SOURCE; 2601 2593 args.v1.ucVoltageIndex = volt_index; 2602 2594 break; 2603 2595 case 2: 2604 - args.v2.ucVoltageType = SET_VOLTAGE_TYPE_ASIC_VDDC; 2596 + 
args.v2.ucVoltageType = voltage_type; 2605 2597 args.v2.ucVoltageMode = SET_ASIC_VOLTAGE_MODE_SET_VOLTAGE; 2606 - args.v2.usVoltageLevel = cpu_to_le16(level); 2598 + args.v2.usVoltageLevel = cpu_to_le16(voltage_level); 2607 2599 break; 2608 2600 default: 2609 2601 DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
+1 -1
drivers/gpu/drm/radeon/radeon_fence.c
··· 79 79 scratch_index = R600_WB_EVENT_OFFSET + rdev->fence_drv.scratch_reg - rdev->scratch.reg_base; 80 80 else 81 81 scratch_index = RADEON_WB_SCRATCH_OFFSET + rdev->fence_drv.scratch_reg - rdev->scratch.reg_base; 82 - seq = rdev->wb.wb[scratch_index/4]; 82 + seq = le32_to_cpu(rdev->wb.wb[scratch_index/4]); 83 83 } else 84 84 seq = RREG32(rdev->fence_drv.scratch_reg); 85 85 if (seq != rdev->fence_drv.last_seq) {
+2
drivers/gpu/drm/radeon/radeon_gart.c
··· 285 285 rdev->gart.pages = NULL; 286 286 rdev->gart.pages_addr = NULL; 287 287 rdev->gart.ttm_alloced = NULL; 288 + 289 + radeon_dummy_page_fini(rdev); 288 290 }
+2 -2
drivers/gpu/drm/radeon/radeon_i2c.c
··· 1062 1062 *val = in_buf[0]; 1063 1063 DRM_DEBUG("val = 0x%02x\n", *val); 1064 1064 } else { 1065 - DRM_ERROR("i2c 0x%02x 0x%02x read failed\n", 1065 + DRM_DEBUG("i2c 0x%02x 0x%02x read failed\n", 1066 1066 addr, *val); 1067 1067 } 1068 1068 } ··· 1084 1084 out_buf[1] = val; 1085 1085 1086 1086 if (i2c_transfer(&i2c_bus->adapter, &msg, 1) != 1) 1087 - DRM_ERROR("i2c 0x%02x 0x%02x write failed\n", 1087 + DRM_DEBUG("i2c 0x%02x 0x%02x write failed\n", 1088 1088 addr, val); 1089 1089 } 1090 1090
+1 -1
drivers/gpu/drm/radeon/radeon_legacy_encoders.c
··· 269 269 .disable = radeon_legacy_encoder_disable, 270 270 }; 271 271 272 - #ifdef CONFIG_BACKLIGHT_CLASS_DEVICE 272 + #if defined(CONFIG_BACKLIGHT_CLASS_DEVICE) || defined(CONFIG_BACKLIGHT_CLASS_DEVICE_MODULE) 273 273 274 274 #define MAX_RADEON_LEVEL 0xFF 275 275
+9 -2
drivers/gpu/drm/radeon/radeon_pm.c
··· 23 23 #include "drmP.h" 24 24 #include "radeon.h" 25 25 #include "avivod.h" 26 + #include "atom.h" 26 27 #ifdef CONFIG_ACPI 27 28 #include <linux/acpi.h> 28 29 #endif ··· 536 535 /* set up the default clocks if the MC ucode is loaded */ 537 536 if (ASIC_IS_DCE5(rdev) && rdev->mc_fw) { 538 537 if (rdev->pm.default_vddc) 539 - radeon_atom_set_voltage(rdev, rdev->pm.default_vddc); 538 + radeon_atom_set_voltage(rdev, rdev->pm.default_vddc, 539 + SET_VOLTAGE_TYPE_ASIC_VDDC); 540 + if (rdev->pm.default_vddci) 541 + radeon_atom_set_voltage(rdev, rdev->pm.default_vddci, 542 + SET_VOLTAGE_TYPE_ASIC_VDDCI); 540 543 if (rdev->pm.default_sclk) 541 544 radeon_set_engine_clock(rdev, rdev->pm.default_sclk); 542 545 if (rdev->pm.default_mclk) ··· 553 548 rdev->pm.current_sclk = rdev->pm.default_sclk; 554 549 rdev->pm.current_mclk = rdev->pm.default_mclk; 555 550 rdev->pm.current_vddc = rdev->pm.power_state[rdev->pm.default_power_state_index].clock_info[0].voltage.voltage; 551 + rdev->pm.current_vddci = rdev->pm.power_state[rdev->pm.default_power_state_index].clock_info[0].voltage.vddci; 556 552 if (rdev->pm.pm_method == PM_METHOD_DYNPM 557 553 && rdev->pm.dynpm_state == DYNPM_STATE_SUSPENDED) { 558 554 rdev->pm.dynpm_state = DYNPM_STATE_ACTIVE; ··· 591 585 /* set up the default clocks if the MC ucode is loaded */ 592 586 if (ASIC_IS_DCE5(rdev) && rdev->mc_fw) { 593 587 if (rdev->pm.default_vddc) 594 - radeon_atom_set_voltage(rdev, rdev->pm.default_vddc); 588 + radeon_atom_set_voltage(rdev, rdev->pm.default_vddc, 589 + SET_VOLTAGE_TYPE_ASIC_VDDC); 595 590 if (rdev->pm.default_sclk) 596 591 radeon_set_engine_clock(rdev, rdev->pm.default_sclk); 597 592 if (rdev->pm.default_mclk)
+1 -1
drivers/gpu/drm/radeon/radeon_ring.c
··· 248 248 void radeon_ring_free_size(struct radeon_device *rdev) 249 249 { 250 250 if (rdev->wb.enabled) 251 - rdev->cp.rptr = rdev->wb.wb[RADEON_WB_CP_RPTR_OFFSET/4]; 251 + rdev->cp.rptr = le32_to_cpu(rdev->wb.wb[RADEON_WB_CP_RPTR_OFFSET/4]); 252 252 else { 253 253 if (rdev->family >= CHIP_R600) 254 254 rdev->cp.rptr = RREG32(R600_CP_RB_RPTR);
+1 -1
drivers/gpu/drm/radeon/rs600.c
··· 114 114 udelay(voltage->delay); 115 115 } 116 116 } else if (voltage->type == VOLTAGE_VDDC) 117 - radeon_atom_set_voltage(rdev, voltage->vddc_id); 117 + radeon_atom_set_voltage(rdev, voltage->vddc_id, SET_VOLTAGE_TYPE_ASIC_VDDC); 118 118 119 119 dyn_pwrmgt_sclk_length = RREG32_PLL(DYN_PWRMGT_SCLK_LENGTH); 120 120 dyn_pwrmgt_sclk_length &= ~REDUCED_POWER_SCLK_HILEN(0xf);
+1 -5
drivers/gpu/drm/radeon/rv770.c
··· 106 106 107 107 if ((voltage->type == VOLTAGE_SW) && voltage->voltage) { 108 108 if (voltage->voltage != rdev->pm.current_vddc) { 109 - radeon_atom_set_voltage(rdev, voltage->voltage); 109 + radeon_atom_set_voltage(rdev, voltage->voltage, SET_VOLTAGE_TYPE_ASIC_VDDC); 110 110 rdev->pm.current_vddc = voltage->voltage; 111 111 DRM_DEBUG("Setting: v: %d\n", voltage->voltage); 112 112 } ··· 1255 1255 { 1256 1256 int r; 1257 1257 1258 - r = radeon_dummy_page_init(rdev); 1259 - if (r) 1260 - return r; 1261 1258 /* This don't do much */ 1262 1259 r = radeon_gem_init(rdev); 1263 1260 if (r) ··· 1369 1372 radeon_atombios_fini(rdev); 1370 1373 kfree(rdev->bios); 1371 1374 rdev->bios = NULL; 1372 - radeon_dummy_page_fini(rdev); 1373 1375 } 1374 1376 1375 1377 static void rv770_pcie_gen2_enable(struct radeon_device *rdev)
+3 -23
drivers/gpu/drm/ttm/ttm_page_alloc.c
··· 683 683 gfp_flags |= GFP_HIGHUSER; 684 684 685 685 for (r = 0; r < count; ++r) { 686 - if ((flags & TTM_PAGE_FLAG_DMA32) && dma_address) { 687 - void *addr; 688 - addr = dma_alloc_coherent(NULL, PAGE_SIZE, 689 - &dma_address[r], 690 - gfp_flags); 691 - if (addr == NULL) 692 - return -ENOMEM; 693 - p = virt_to_page(addr); 694 - } else 695 - p = alloc_page(gfp_flags); 686 + p = alloc_page(gfp_flags); 696 687 if (!p) { 697 688 698 689 printk(KERN_ERR TTM_PFX 699 690 "Unable to allocate page."); 700 691 return -ENOMEM; 701 692 } 693 + 702 694 list_add(&p->lru, pages); 703 695 } 704 696 return 0; ··· 738 746 unsigned long irq_flags; 739 747 struct ttm_page_pool *pool = ttm_get_pool(flags, cstate); 740 748 struct page *p, *tmp; 741 - unsigned r; 742 749 743 750 if (pool == NULL) { 744 751 /* No pool for this memory type so free the pages */ 745 752 746 - r = page_count-1; 747 753 list_for_each_entry_safe(p, tmp, pages, lru) { 748 - if ((flags & TTM_PAGE_FLAG_DMA32) && dma_address) { 749 - void *addr = page_address(p); 750 - WARN_ON(!addr || !dma_address[r]); 751 - if (addr) 752 - dma_free_coherent(NULL, PAGE_SIZE, 753 - addr, 754 - dma_address[r]); 755 - dma_address[r] = 0; 756 - } else 757 - __free_page(p); 758 - r--; 754 + __free_page(p); 759 755 } 760 756 /* Make the pages list empty */ 761 757 INIT_LIST_HEAD(pages);
+1
drivers/gpu/stub/Kconfig
··· 5 5 # Poulsbo stub depends on ACPI_VIDEO when ACPI is enabled 6 6 # but for select to work, need to select ACPI_VIDEO's dependencies, ick 7 7 select BACKLIGHT_CLASS_DEVICE if ACPI 8 + select VIDEO_OUTPUT_CONTROL if ACPI 8 9 select INPUT if ACPI 9 10 select ACPI_VIDEO if ACPI 10 11 select THERMAL if ACPI
+19 -3
drivers/i2c/algos/i2c-algo-bit.c
··· 232 232 * Sanity check for the adapter hardware - check the reaction of 233 233 * the bus lines only if it seems to be idle. 234 234 */ 235 - static int test_bus(struct i2c_algo_bit_data *adap, char *name) 235 + static int test_bus(struct i2c_adapter *i2c_adap) 236 236 { 237 - int scl, sda; 237 + struct i2c_algo_bit_data *adap = i2c_adap->algo_data; 238 + const char *name = i2c_adap->name; 239 + int scl, sda, ret; 240 + 241 + if (adap->pre_xfer) { 242 + ret = adap->pre_xfer(i2c_adap); 243 + if (ret < 0) 244 + return -ENODEV; 245 + } 238 246 239 247 if (adap->getscl == NULL) 240 248 pr_info("%s: Testing SDA only, SCL is not readable\n", name); ··· 305 297 "while pulling SCL high!\n", name); 306 298 goto bailout; 307 299 } 300 + 301 + if (adap->post_xfer) 302 + adap->post_xfer(i2c_adap); 303 + 308 304 pr_info("%s: Test OK\n", name); 309 305 return 0; 310 306 bailout: 311 307 sdahi(adap); 312 308 sclhi(adap); 309 + 310 + if (adap->post_xfer) 311 + adap->post_xfer(i2c_adap); 312 + 313 313 return -ENODEV; 314 314 } 315 315 ··· 623 607 int ret; 624 608 625 609 if (bit_test) { 626 - ret = test_bus(bit_adap, adap->name); 610 + ret = test_bus(adap); 627 611 if (ret < 0) 628 612 return -ENODEV; 629 613 }
+4 -2
drivers/i2c/i2c-core.c
··· 797 797 798 798 /* Let legacy drivers scan this bus for matching devices */ 799 799 if (driver->attach_adapter) { 800 - dev_warn(&adap->dev, "attach_adapter method is deprecated\n"); 800 + dev_warn(&adap->dev, "%s: attach_adapter method is deprecated\n", 801 + driver->driver.name); 801 802 dev_warn(&adap->dev, "Please use another way to instantiate " 802 803 "your i2c_client\n"); 803 804 /* We ignore the return code; if it fails, too bad */ ··· 985 984 986 985 if (!driver->detach_adapter) 987 986 return 0; 988 - dev_warn(&adapter->dev, "detach_adapter method is deprecated\n"); 987 + dev_warn(&adapter->dev, "%s: detach_adapter method is deprecated\n", 988 + driver->driver.name); 989 989 res = driver->detach_adapter(adapter); 990 990 if (res) 991 991 dev_err(&adapter->dev, "detach_adapter failed (%d) "
+21 -12
drivers/input/evdev.c
··· 39 39 }; 40 40 41 41 struct evdev_client { 42 - int head; 43 - int tail; 42 + unsigned int head; 43 + unsigned int tail; 44 44 spinlock_t buffer_lock; /* protects access to buffer, head and tail */ 45 45 struct fasync_struct *fasync; 46 46 struct evdev *evdev; 47 47 struct list_head node; 48 - int bufsize; 48 + unsigned int bufsize; 49 49 struct input_event buffer[]; 50 50 }; 51 51 ··· 55 55 static void evdev_pass_event(struct evdev_client *client, 56 56 struct input_event *event) 57 57 { 58 - /* 59 - * Interrupts are disabled, just acquire the lock. 60 - * Make sure we don't leave with the client buffer 61 - * "empty" by having client->head == client->tail. 62 - */ 58 + /* Interrupts are disabled, just acquire the lock. */ 63 59 spin_lock(&client->buffer_lock); 64 - do { 65 - client->buffer[client->head++] = *event; 66 - client->head &= client->bufsize - 1; 67 - } while (client->head == client->tail); 60 + 61 + client->buffer[client->head++] = *event; 62 + client->head &= client->bufsize - 1; 63 + 64 + if (unlikely(client->head == client->tail)) { 65 + /* 66 + * This effectively "drops" all unconsumed events, leaving 67 + * EV_SYN/SYN_DROPPED plus the newest event in the queue. 68 + */ 69 + client->tail = (client->head - 2) & (client->bufsize - 1); 70 + 71 + client->buffer[client->tail].time = event->time; 72 + client->buffer[client->tail].type = EV_SYN; 73 + client->buffer[client->tail].code = SYN_DROPPED; 74 + client->buffer[client->tail].value = 0; 75 + } 76 + 68 77 spin_unlock(&client->buffer_lock); 69 78 70 79 if (event->type == EV_SYN)
+40
drivers/input/input.c
··· 1746 1746 } 1747 1747 EXPORT_SYMBOL(input_set_capability); 1748 1748 1749 + static unsigned int input_estimate_events_per_packet(struct input_dev *dev) 1750 + { 1751 + int mt_slots; 1752 + int i; 1753 + unsigned int events; 1754 + 1755 + if (dev->mtsize) { 1756 + mt_slots = dev->mtsize; 1757 + } else if (test_bit(ABS_MT_TRACKING_ID, dev->absbit)) { 1758 + mt_slots = dev->absinfo[ABS_MT_TRACKING_ID].maximum - 1759 + dev->absinfo[ABS_MT_TRACKING_ID].minimum + 1; 1760 + mt_slots = clamp(mt_slots, 2, 32); 1761 + } else if (test_bit(ABS_MT_POSITION_X, dev->absbit)) { 1762 + mt_slots = 2; 1763 + } else { 1764 + mt_slots = 0; 1765 + } 1766 + 1767 + events = mt_slots + 1; /* count SYN_MT_REPORT and SYN_REPORT */ 1768 + 1769 + for (i = 0; i < ABS_CNT; i++) { 1770 + if (test_bit(i, dev->absbit)) { 1771 + if (input_is_mt_axis(i)) 1772 + events += mt_slots; 1773 + else 1774 + events++; 1775 + } 1776 + } 1777 + 1778 + for (i = 0; i < REL_CNT; i++) 1779 + if (test_bit(i, dev->relbit)) 1780 + events++; 1781 + 1782 + return events; 1783 + } 1784 + 1749 1785 #define INPUT_CLEANSE_BITMASK(dev, type, bits) \ 1750 1786 do { \ 1751 1787 if (!test_bit(EV_##type, dev->evbit)) \ ··· 1828 1792 1829 1793 /* Make sure that bitmasks not mentioned in dev->evbit are clean. */ 1830 1794 input_cleanse_bitmasks(dev); 1795 + 1796 + if (!dev->hint_events_per_packet) 1797 + dev->hint_events_per_packet = 1798 + input_estimate_events_per_packet(dev); 1831 1799 1832 1800 /* 1833 1801 * If delay and period are pre-set by the driver, then autorepeating
+4 -2
drivers/input/keyboard/twl4030_keypad.c
··· 332 332 static int __devinit twl4030_kp_probe(struct platform_device *pdev) 333 333 { 334 334 struct twl4030_keypad_data *pdata = pdev->dev.platform_data; 335 - const struct matrix_keymap_data *keymap_data = pdata->keymap_data; 335 + const struct matrix_keymap_data *keymap_data; 336 336 struct twl4030_keypad *kp; 337 337 struct input_dev *input; 338 338 u8 reg; 339 339 int error; 340 340 341 - if (!pdata || !pdata->rows || !pdata->cols || 341 + if (!pdata || !pdata->rows || !pdata->cols || !pdata->keymap_data || 342 342 pdata->rows > TWL4030_MAX_ROWS || pdata->cols > TWL4030_MAX_COLS) { 343 343 dev_err(&pdev->dev, "Invalid platform_data\n"); 344 344 return -EINVAL; 345 345 } 346 + 347 + keymap_data = pdata->keymap_data; 346 348 347 349 kp = kzalloc(sizeof(*kp), GFP_KERNEL); 348 350 input = input_allocate_device();
+12 -1
drivers/input/misc/xen-kbdfront.c
··· 303 303 enum xenbus_state backend_state) 304 304 { 305 305 struct xenkbd_info *info = dev_get_drvdata(&dev->dev); 306 - int val; 306 + int ret, val; 307 307 308 308 switch (backend_state) { 309 309 case XenbusStateInitialising: ··· 316 316 317 317 case XenbusStateInitWait: 318 318 InitWait: 319 + ret = xenbus_scanf(XBT_NIL, info->xbdev->otherend, 320 + "feature-abs-pointer", "%d", &val); 321 + if (ret < 0) 322 + val = 0; 323 + if (val) { 324 + ret = xenbus_printf(XBT_NIL, info->xbdev->nodename, 325 + "request-abs-pointer", "1"); 326 + if (ret) 327 + pr_warning("xenkbd: can't request abs-pointer"); 328 + } 329 + 319 330 xenbus_switch_state(dev, XenbusStateConnected); 320 331 break; 321 332
+10 -7
drivers/input/touchscreen/h3600_ts_input.c
··· 399 399 IRQF_SHARED | IRQF_DISABLED, "h3600_action", &ts->dev)) { 400 400 printk(KERN_ERR "h3600ts.c: Could not allocate Action Button IRQ!\n"); 401 401 err = -EBUSY; 402 - goto fail2; 402 + goto fail1; 403 403 } 404 404 405 405 if (request_irq(IRQ_GPIO_BITSY_NPOWER_BUTTON, npower_button_handler, 406 406 IRQF_SHARED | IRQF_DISABLED, "h3600_suspend", &ts->dev)) { 407 407 printk(KERN_ERR "h3600ts.c: Could not allocate Power Button IRQ!\n"); 408 408 err = -EBUSY; 409 - goto fail3; 409 + goto fail2; 410 410 } 411 411 412 412 serio_set_drvdata(serio, ts); 413 413 414 414 err = serio_open(serio, drv); 415 415 if (err) 416 - return err; 416 + goto fail3; 417 417 418 418 //h3600_flite_control(1, 25); /* default brightness */ 419 - input_register_device(ts->dev); 419 + err = input_register_device(ts->dev); 420 + if (err) 421 + goto fail4; 420 422 421 423 return 0; 422 424 423 - fail3: free_irq(IRQ_GPIO_BITSY_NPOWER_BUTTON, ts->dev); 425 + fail4: serio_close(serio); 426 + fail3: serio_set_drvdata(serio, NULL); 427 + free_irq(IRQ_GPIO_BITSY_NPOWER_BUTTON, ts->dev); 424 428 fail2: free_irq(IRQ_GPIO_BITSY_ACTION_BUTTON, ts->dev); 425 - fail1: serio_set_drvdata(serio, NULL); 426 - input_free_device(input_dev); 429 + fail1: input_free_device(input_dev); 427 430 kfree(ts); 428 431 return err; 429 432 }
+4
drivers/leds/leds-regulator.c
··· 178 178 led->cdev.flags |= LED_CORE_SUSPENDRESUME; 179 179 led->vcc = vcc; 180 180 181 + /* to correctly handle an already enabled regulator */ 182 + if (regulator_is_enabled(led->vcc)) 183 + led->enabled = 1; 184 + 181 185 mutex_init(&led->mutex); 182 186 INIT_WORK(&led->work, led_work); 183 187
-8
drivers/md/dm-raid.c
··· 390 390 return md_raid5_congested(&rs->md, bits); 391 391 } 392 392 393 - static void raid_unplug(struct dm_target_callbacks *cb) 394 - { 395 - struct raid_set *rs = container_of(cb, struct raid_set, callbacks); 396 - 397 - md_raid5_kick_device(rs->md.private); 398 - } 399 - 400 393 /* 401 394 * Construct a RAID4/5/6 mapping: 402 395 * Args: ··· 480 487 } 481 488 482 489 rs->callbacks.congested_fn = raid_is_congested; 483 - rs->callbacks.unplug_fn = raid_unplug; 484 490 dm_table_add_target_callbacks(ti->table, &rs->callbacks); 485 491 486 492 return 0;
+46 -41
drivers/md/md.c
··· 447 447 448 448 /* Support for plugging. 449 449 * This mirrors the plugging support in request_queue, but does not 450 - * require having a whole queue 450 + * require having a whole queue or request structures. 451 + * We allocate an md_plug_cb for each md device and each thread it gets 452 + * plugged on. This links tot the private plug_handle structure in the 453 + * personality data where we keep a count of the number of outstanding 454 + * plugs so other code can see if a plug is active. 451 455 */ 452 - static void plugger_work(struct work_struct *work) 453 - { 454 - struct plug_handle *plug = 455 - container_of(work, struct plug_handle, unplug_work); 456 - plug->unplug_fn(plug); 457 - } 458 - static void plugger_timeout(unsigned long data) 459 - { 460 - struct plug_handle *plug = (void *)data; 461 - kblockd_schedule_work(NULL, &plug->unplug_work); 462 - } 463 - void plugger_init(struct plug_handle *plug, 464 - void (*unplug_fn)(struct plug_handle *)) 465 - { 466 - plug->unplug_flag = 0; 467 - plug->unplug_fn = unplug_fn; 468 - init_timer(&plug->unplug_timer); 469 - plug->unplug_timer.function = plugger_timeout; 470 - plug->unplug_timer.data = (unsigned long)plug; 471 - INIT_WORK(&plug->unplug_work, plugger_work); 472 - } 473 - EXPORT_SYMBOL_GPL(plugger_init); 456 + struct md_plug_cb { 457 + struct blk_plug_cb cb; 458 + mddev_t *mddev; 459 + }; 474 460 475 - void plugger_set_plug(struct plug_handle *plug) 461 + static void plugger_unplug(struct blk_plug_cb *cb) 476 462 { 477 - if (!test_and_set_bit(PLUGGED_FLAG, &plug->unplug_flag)) 478 - mod_timer(&plug->unplug_timer, jiffies + msecs_to_jiffies(3)+1); 463 + struct md_plug_cb *mdcb = container_of(cb, struct md_plug_cb, cb); 464 + if (atomic_dec_and_test(&mdcb->mddev->plug_cnt)) 465 + md_wakeup_thread(mdcb->mddev->thread); 466 + kfree(mdcb); 479 467 } 480 - EXPORT_SYMBOL_GPL(plugger_set_plug); 481 468 482 - int plugger_remove_plug(struct plug_handle *plug) 469 + /* Check that an unplug wakeup will come shortly. 
470 + * If not, wakeup the md thread immediately 471 + */ 472 + int mddev_check_plugged(mddev_t *mddev) 483 473 { 484 - if (test_and_clear_bit(PLUGGED_FLAG, &plug->unplug_flag)) { 485 - del_timer(&plug->unplug_timer); 486 - return 1; 487 - } else 474 + struct blk_plug *plug = current->plug; 475 + struct md_plug_cb *mdcb; 476 + 477 + if (!plug) 488 478 return 0; 489 - } 490 - EXPORT_SYMBOL_GPL(plugger_remove_plug); 491 479 480 + list_for_each_entry(mdcb, &plug->cb_list, cb.list) { 481 + if (mdcb->cb.callback == plugger_unplug && 482 + mdcb->mddev == mddev) { 483 + /* Already on the list, move to top */ 484 + if (mdcb != list_first_entry(&plug->cb_list, 485 + struct md_plug_cb, 486 + cb.list)) 487 + list_move(&mdcb->cb.list, &plug->cb_list); 488 + return 1; 489 + } 490 + } 491 + /* Not currently on the callback list */ 492 + mdcb = kmalloc(sizeof(*mdcb), GFP_ATOMIC); 493 + if (!mdcb) 494 + return 0; 495 + 496 + mdcb->mddev = mddev; 497 + mdcb->cb.callback = plugger_unplug; 498 + atomic_inc(&mddev->plug_cnt); 499 + list_add(&mdcb->cb.list, &plug->cb_list); 500 + return 1; 501 + } 502 + EXPORT_SYMBOL_GPL(mddev_check_plugged); 492 503 493 504 static inline mddev_t *mddev_get(mddev_t *mddev) 494 505 { ··· 549 538 atomic_set(&mddev->active, 1); 550 539 atomic_set(&mddev->openers, 0); 551 540 atomic_set(&mddev->active_io, 0); 541 + atomic_set(&mddev->plug_cnt, 0); 552 542 spin_lock_init(&mddev->write_lock); 553 543 atomic_set(&mddev->flush_pending, 0); 554 544 init_waitqueue_head(&mddev->sb_wait); ··· 4735 4723 mddev->bitmap_info.chunksize = 0; 4736 4724 mddev->bitmap_info.daemon_sleep = 0; 4737 4725 mddev->bitmap_info.max_write_behind = 0; 4738 - mddev->plug = NULL; 4739 4726 } 4740 4727 4741 4728 static void __md_stop_writes(mddev_t *mddev) ··· 6698 6687 return 0; 6699 6688 } 6700 6689 EXPORT_SYMBOL_GPL(md_allow_write); 6701 - 6702 - void md_unplug(mddev_t *mddev) 6703 - { 6704 - if (mddev->plug) 6705 - mddev->plug->unplug_fn(mddev->plug); 6706 - } 6707 6690 6708 6691 #define SYNC_MARKS 10 6709 6692 #define SYNC_MARK_STEP (3*HZ)
+4 -22
drivers/md/md.h
··· 29 29 typedef struct mddev_s mddev_t; 30 30 typedef struct mdk_rdev_s mdk_rdev_t; 31 31 32 - /* generic plugging support - like that provided with request_queue, 33 - * but does not require a request_queue 34 - */ 35 - struct plug_handle { 36 - void (*unplug_fn)(struct plug_handle *); 37 - struct timer_list unplug_timer; 38 - struct work_struct unplug_work; 39 - unsigned long unplug_flag; 40 - }; 41 - #define PLUGGED_FLAG 1 42 - void plugger_init(struct plug_handle *plug, 43 - void (*unplug_fn)(struct plug_handle *)); 44 - void plugger_set_plug(struct plug_handle *plug); 45 - int plugger_remove_plug(struct plug_handle *plug); 46 - static inline void plugger_flush(struct plug_handle *plug) 47 - { 48 - del_timer_sync(&plug->unplug_timer); 49 - cancel_work_sync(&plug->unplug_work); 50 - } 51 - 52 32 /* 53 33 * MD's 'extended' device 54 34 */ ··· 179 199 int delta_disks, new_level, new_layout; 180 200 int new_chunk_sectors; 181 201 202 + atomic_t plug_cnt; /* If device is expecting 203 + * more bios soon. 204 + */ 182 205 struct mdk_thread_s *thread; /* management thread */ 183 206 struct mdk_thread_s *sync_thread; /* doing resync or reconstruct */ 184 207 sector_t curr_resync; /* last block scheduled */ ··· 319 336 struct list_head all_mddevs; 320 337 321 338 struct attribute_group *to_remove; 322 - struct plug_handle *plug; /* if used by personality */ 323 339 324 340 struct bio_set *bio_set; 325 341 ··· 498 516 extern void md_integrity_add_rdev(mdk_rdev_t *rdev, mddev_t *mddev); 499 517 extern int strict_strtoul_scaled(const char *cp, unsigned long *res, int scale); 500 518 extern void restore_bitmap_write_access(struct file *file); 501 - extern void md_unplug(mddev_t *mddev); 502 519 503 520 extern void mddev_init(mddev_t *mddev); 504 521 extern int md_run(mddev_t *mddev); ··· 511 530 mddev_t *mddev); 512 531 extern struct bio *bio_alloc_mddev(gfp_t gfp_mask, int nr_iovecs, 513 532 mddev_t *mddev); 533 + extern int mddev_check_plugged(mddev_t *mddev); 514 534 #endif /* _MD_MD_H */
+14 -15
drivers/md/raid1.c
··· 565 565 spin_unlock_irq(&conf->device_lock); 566 566 } 567 567 568 - static void md_kick_device(mddev_t *mddev) 569 - { 570 - blk_flush_plug(current); 571 - md_wakeup_thread(mddev->thread); 572 - } 573 - 574 568 /* Barriers.... 575 569 * Sometimes we need to suspend IO while we do something else, 576 570 * either some resync/recovery, or reconfigure the array. ··· 594 600 595 601 /* Wait until no block IO is waiting */ 596 602 wait_event_lock_irq(conf->wait_barrier, !conf->nr_waiting, 597 - conf->resync_lock, md_kick_device(conf->mddev)); 603 + conf->resync_lock, ); 598 604 599 605 /* block any new IO from starting */ 600 606 conf->barrier++; ··· 602 608 /* Now wait for all pending IO to complete */ 603 609 wait_event_lock_irq(conf->wait_barrier, 604 610 !conf->nr_pending && conf->barrier < RESYNC_DEPTH, 605 - conf->resync_lock, md_kick_device(conf->mddev)); 611 + conf->resync_lock, ); 606 612 607 613 spin_unlock_irq(&conf->resync_lock); 608 614 } ··· 624 630 conf->nr_waiting++; 625 631 wait_event_lock_irq(conf->wait_barrier, !conf->barrier, 626 632 conf->resync_lock, 627 - md_kick_device(conf->mddev)); 633 + ); 628 634 conf->nr_waiting--; 629 635 } 630 636 conf->nr_pending++; ··· 660 666 wait_event_lock_irq(conf->wait_barrier, 661 667 conf->nr_pending == conf->nr_queued+1, 662 668 conf->resync_lock, 663 - ({ flush_pending_writes(conf); 664 - md_kick_device(conf->mddev); })); 669 + flush_pending_writes(conf)); 665 670 spin_unlock_irq(&conf->resync_lock); 666 671 } 667 672 static void unfreeze_array(conf_t *conf) ··· 722 729 const unsigned long do_sync = (bio->bi_rw & REQ_SYNC); 723 730 const unsigned long do_flush_fua = (bio->bi_rw & (REQ_FLUSH | REQ_FUA)); 724 731 mdk_rdev_t *blocked_rdev; 732 + int plugged; 725 733 726 734 /* 727 735 * Register the new request and wait if the reconstruction ··· 814 820 * inc refcount on their rdev. Record them by setting 815 821 * bios[x] to bio 816 822 */ 823 + plugged = mddev_check_plugged(mddev); 824 + 817 825 disks = conf->raid_disks; 818 826 retry_write: 819 827 blocked_rdev = NULL; ··· 921 925 /* In case raid1d snuck in to freeze_array */ 922 926 wake_up(&conf->wait_barrier); 923 927 924 - if (do_sync || !bitmap) 928 + if (do_sync || !bitmap || !plugged) 925 929 md_wakeup_thread(mddev->thread); 926 930 927 931 return 0; ··· 1512 1516 conf_t *conf = mddev->private; 1513 1517 struct list_head *head = &conf->retry_list; 1514 1518 mdk_rdev_t *rdev; 1519 + struct blk_plug plug; 1515 1520 1516 1521 md_check_recovery(mddev); 1517 - 1522 + 1523 + blk_start_plug(&plug); 1518 1524 for (;;) { 1519 1525 char b[BDEVNAME_SIZE]; 1520 1526 1521 - flush_pending_writes(conf); 1527 + if (atomic_read(&mddev->plug_cnt) == 0) 1528 + flush_pending_writes(conf); 1522 1529 1523 1530 spin_lock_irqsave(&conf->device_lock, flags); 1524 1531 if (list_empty(head)) { ··· 1592 1593 } 1593 1594 cond_resched(); 1594 1595 } 1596 + blk_finish_plug(&plug); 1595 1597 } 1596 1598 1597 1599 ··· 2039 2039 2040 2040 md_unregister_thread(mddev->thread); 2041 2041 mddev->thread = NULL; 2042 - blk_sync_queue(mddev->queue); /* the unplug fn references 'conf'*/ 2043 2042 if (conf->r1bio_pool) 2044 2043 mempool_destroy(conf->r1bio_pool); 2045 2044 kfree(conf->mirrors);
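The raid1d change above wraps the retry loop in a blk_plug of its own and defers flush_pending_writes() while any submitter still holds a plug (mddev->plug_cnt != 0); raid10d and raid5d below follow the same pattern. In outline (illustrative names, not part of the patch):

/* Sketch of the daemon-loop shape used by raid1d/raid10d/raid5d. */
static void example_raid_daemon(mddev_t *mddev, conf_t *conf)
{
	struct blk_plug plug;

	blk_start_plug(&plug);
	for (;;) {
		/* only push out queued writes once submitters have unplugged */
		if (atomic_read(&mddev->plug_cnt) == 0)
			flush_pending_writes(conf);

		/* ... retry/recovery work; break when the list is empty ... */
		break;
	}
	blk_finish_plug(&plug);
}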
+13 -14
drivers/md/raid10.c
··· 634 634 spin_unlock_irq(&conf->device_lock); 635 635 } 636 636 637 - static void md_kick_device(mddev_t *mddev) 638 - { 639 - blk_flush_plug(current); 640 - md_wakeup_thread(mddev->thread); 641 - } 642 - 643 637 /* Barriers.... 644 638 * Sometimes we need to suspend IO while we do something else, 645 639 * either some resync/recovery, or reconfigure the array. ··· 663 669 664 670 /* Wait until no block IO is waiting (unless 'force') */ 665 671 wait_event_lock_irq(conf->wait_barrier, force || !conf->nr_waiting, 666 - conf->resync_lock, md_kick_device(conf->mddev)); 672 + conf->resync_lock, ); 667 673 668 674 /* block any new IO from starting */ 669 675 conf->barrier++; 670 676 671 - /* No wait for all pending IO to complete */ 677 + /* Now wait for all pending IO to complete */ 672 678 wait_event_lock_irq(conf->wait_barrier, 673 679 !conf->nr_pending && conf->barrier < RESYNC_DEPTH, 674 - conf->resync_lock, md_kick_device(conf->mddev)); 680 + conf->resync_lock, ); 675 681 676 682 spin_unlock_irq(&conf->resync_lock); 677 683 } ··· 692 698 conf->nr_waiting++; 693 699 wait_event_lock_irq(conf->wait_barrier, !conf->barrier, 694 700 conf->resync_lock, 695 - md_kick_device(conf->mddev)); 701 + ); 696 702 conf->nr_waiting--; 697 703 } 698 704 conf->nr_pending++; ··· 728 734 wait_event_lock_irq(conf->wait_barrier, 729 735 conf->nr_pending == conf->nr_queued+1, 730 736 conf->resync_lock, 731 - ({ flush_pending_writes(conf); 732 - md_kick_device(conf->mddev); })); 737 + flush_pending_writes(conf)); 738 + 733 739 spin_unlock_irq(&conf->resync_lock); 734 740 } 735 741 ··· 756 762 const unsigned long do_fua = (bio->bi_rw & REQ_FUA); 757 763 unsigned long flags; 758 764 mdk_rdev_t *blocked_rdev; 765 + int plugged; 759 766 760 767 if (unlikely(bio->bi_rw & REQ_FLUSH)) { 761 768 md_flush_request(mddev, bio); ··· 865 870 * inc refcount on their rdev. Record them by setting 866 871 * bios[x] to bio 867 872 */ 873 + plugged = mddev_check_plugged(mddev); 874 + 868 875 raid10_find_phys(conf, r10_bio); 869 876 retry_write: 870 877 blocked_rdev = NULL; ··· 943 946 /* In case raid10d snuck in to freeze_array */ 944 947 wake_up(&conf->wait_barrier); 945 948 946 - if (do_sync || !mddev->bitmap) 949 + if (do_sync || !mddev->bitmap || !plugged) 947 950 md_wakeup_thread(mddev->thread); 948 - 949 951 return 0; 950 952 } 951 953 ··· 1636 1640 conf_t *conf = mddev->private; 1637 1641 struct list_head *head = &conf->retry_list; 1638 1642 mdk_rdev_t *rdev; 1643 + struct blk_plug plug; 1639 1644 1640 1645 md_check_recovery(mddev); 1641 1646 1647 + blk_start_plug(&plug); 1642 1648 for (;;) { 1643 1649 char b[BDEVNAME_SIZE]; 1644 1650 ··· 1714 1716 } 1715 1717 cond_resched(); 1716 1718 } 1719 + blk_finish_plug(&plug); 1717 1720 } 1718 1721 1719 1722
+26 -35
drivers/md/raid5.c
··· 27 27 * 28 28 * We group bitmap updates into batches. Each batch has a number. 29 29 * We may write out several batches at once, but that isn't very important. 30 - * conf->bm_write is the number of the last batch successfully written. 31 - * conf->bm_flush is the number of the last batch that was closed to 30 + * conf->seq_write is the number of the last batch successfully written. 31 + * conf->seq_flush is the number of the last batch that was closed to 32 32 * new additions. 33 33 * When we discover that we will need to write to any block in a stripe 34 34 * (in add_stripe_bio) we update the in-memory bitmap and record in sh->bm_seq 35 - * the number of the batch it will be in. This is bm_flush+1. 35 + * the number of the batch it will be in. This is seq_flush+1. 36 36 * When we are ready to do a write, if that batch hasn't been written yet, 37 37 * we plug the array and queue the stripe for later. 38 38 * When an unplug happens, we increment bm_flush, thus closing the current ··· 199 199 BUG_ON(!list_empty(&sh->lru)); 200 200 BUG_ON(atomic_read(&conf->active_stripes)==0); 201 201 if (test_bit(STRIPE_HANDLE, &sh->state)) { 202 - if (test_bit(STRIPE_DELAYED, &sh->state)) { 202 + if (test_bit(STRIPE_DELAYED, &sh->state)) 203 203 list_add_tail(&sh->lru, &conf->delayed_list); 204 - plugger_set_plug(&conf->plug); 205 - } else if (test_bit(STRIPE_BIT_DELAY, &sh->state) && 206 - sh->bm_seq - conf->seq_write > 0) { 204 + else if (test_bit(STRIPE_BIT_DELAY, &sh->state) && 205 + sh->bm_seq - conf->seq_write > 0) 207 206 list_add_tail(&sh->lru, &conf->bitmap_list); 208 - plugger_set_plug(&conf->plug); 209 - } else { 207 + else { 210 208 clear_bit(STRIPE_BIT_DELAY, &sh->state); 211 209 list_add_tail(&sh->lru, &conf->handle_list); 212 210 } ··· 459 461 < (conf->max_nr_stripes *3/4) 460 462 || !conf->inactive_blocked), 461 463 conf->device_lock, 462 - md_raid5_kick_device(conf)); 464 + ); 463 465 conf->inactive_blocked = 0; 464 466 } else 465 467 init_stripe(sh, sector, previous); ··· 1468 1470 wait_event_lock_irq(conf->wait_for_stripe, 1469 1471 !list_empty(&conf->inactive_list), 1470 1472 conf->device_lock, 1471 - blk_flush_plug(current)); 1473 + ); 1472 1474 osh = get_free_stripe(conf); 1473 1475 spin_unlock_irq(&conf->device_lock); 1474 1476 atomic_set(&nsh->count, 1); ··· 3621 3623 atomic_inc(&conf->preread_active_stripes); 3622 3624 list_add_tail(&sh->lru, &conf->hold_list); 3623 3625 } 3624 - } else 3625 - plugger_set_plug(&conf->plug); 3626 + } 3626 3627 } 3627 3628 3628 3629 static void activate_bit_delay(raid5_conf_t *conf) ··· 3636 3639 atomic_inc(&sh->count); 3637 3640 __release_stripe(conf, sh); 3638 3641 } 3639 - } 3640 - 3641 - void md_raid5_kick_device(raid5_conf_t *conf) 3642 - { 3643 - blk_flush_plug(current); 3644 - raid5_activate_delayed(conf); 3645 - md_wakeup_thread(conf->mddev->thread); 3646 - } 3647 - EXPORT_SYMBOL_GPL(md_raid5_kick_device); 3648 - 3649 - static void raid5_unplug(struct plug_handle *plug) 3650 - { 3651 - raid5_conf_t *conf = container_of(plug, raid5_conf_t, plug); 3652 - 3653 - md_raid5_kick_device(conf); 3654 3642 } 3655 3643 3656 3644 int md_raid5_congested(mddev_t *mddev, int bits) ··· 3927 3945 struct stripe_head *sh; 3928 3946 const int rw = bio_data_dir(bi); 3929 3947 int remaining; 3948 + int plugged; 3930 3949 3931 3950 if (unlikely(bi->bi_rw & REQ_FLUSH)) { 3932 3951 md_flush_request(mddev, bi); ··· 3946 3963 bi->bi_next = NULL; 3947 3964 bi->bi_phys_segments = 1; /* over-loaded to count active stripes */ 3948 3965 3966 + plugged = 
mddev_check_plugged(mddev); 3949 3967 for (;logical_sector < last_sector; logical_sector += STRIPE_SECTORS) { 3950 3968 DEFINE_WAIT(w); 3951 3969 int disks, data_disks; ··· 4041 4057 * add failed due to overlap. Flush everything 4042 4058 * and wait a while 4043 4059 */ 4044 - md_raid5_kick_device(conf); 4060 + md_wakeup_thread(mddev->thread); 4045 4061 release_stripe(sh); 4046 4062 schedule(); 4047 4063 goto retry; ··· 4061 4077 } 4062 4078 4063 4079 } 4080 + if (!plugged) 4081 + md_wakeup_thread(mddev->thread); 4082 + 4064 4083 spin_lock_irq(&conf->device_lock); 4065 4084 remaining = raid5_dec_bi_phys_segments(bi); 4066 4085 spin_unlock_irq(&conf->device_lock); ··· 4465 4478 struct stripe_head *sh; 4466 4479 raid5_conf_t *conf = mddev->private; 4467 4480 int handled; 4481 + struct blk_plug plug; 4468 4482 4469 4483 pr_debug("+++ raid5d active\n"); 4470 4484 4471 4485 md_check_recovery(mddev); 4472 4486 4487 + blk_start_plug(&plug); 4473 4488 handled = 0; 4474 4489 spin_lock_irq(&conf->device_lock); 4475 4490 while (1) { 4476 4491 struct bio *bio; 4477 4492 4478 - if (conf->seq_flush != conf->seq_write) { 4479 - int seq = conf->seq_flush; 4493 + if (atomic_read(&mddev->plug_cnt) == 0 && 4494 + !list_empty(&conf->bitmap_list)) { 4495 + /* Now is a good time to flush some bitmap updates */ 4496 + conf->seq_flush++; 4480 4497 spin_unlock_irq(&conf->device_lock); 4481 4498 bitmap_unplug(mddev->bitmap); 4482 4499 spin_lock_irq(&conf->device_lock); 4483 - conf->seq_write = seq; 4500 + conf->seq_write = conf->seq_flush; 4484 4501 activate_bit_delay(conf); 4485 4502 } 4503 + if (atomic_read(&mddev->plug_cnt) == 0) 4504 + raid5_activate_delayed(conf); 4486 4505 4487 4506 while ((bio = remove_bio_from_retry(conf))) { 4488 4507 int ok; ··· 4518 4525 spin_unlock_irq(&conf->device_lock); 4519 4526 4520 4527 async_tx_issue_pending_all(); 4528 + blk_finish_plug(&plug); 4521 4529 4522 4530 pr_debug("--- raid5d inactive\n"); 4523 4531 } ··· 5135 5141 mdname(mddev)); 5136 5142 md_set_array_sectors(mddev, raid5_size(mddev, 0, 0)); 5137 5143 5138 - plugger_init(&conf->plug, raid5_unplug); 5139 - mddev->plug = &conf->plug; 5140 5144 if (mddev->queue) { 5141 5145 int chunk_size; 5142 5146 /* read-ahead size must cover two whole stripes, which ··· 5184 5192 mddev->thread = NULL; 5185 5193 if (mddev->queue) 5186 5194 mddev->queue->backing_dev_info.congested_fn = NULL; 5187 - plugger_flush(&conf->plug); /* the unplug fn references 'conf'*/ 5188 5195 free_conf(conf); 5189 5196 mddev->private = NULL; 5190 5197 mddev->to_remove = &raid5_attrs_group;
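As a worked reading of the batching comment at the top of raid5.c: if seq_write == seq_flush == 5 when a write is recorded in the bitmap, the stripe gets sh->bm_seq = 6 and is parked on bitmap_list; the next time raid5d runs with mddev->plug_cnt == 0 it advances seq_flush to 6, calls bitmap_unplug(), sets seq_write = 6 and only then re-activates those stripes, so the bitmap update reaches disk before the data it covers.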
-2
drivers/md/raid5.h
··· 400 400 * Cleared when a sync completes. 401 401 */ 402 402 403 - struct plug_handle plug; 404 - 405 403 /* per cpu variables */ 406 404 struct raid5_percpu { 407 405 struct page *spare_page; /* Used when checking P/Q in raid6 */
+14 -2
drivers/mfd/mfd-core.c
··· 55 55 } 56 56 EXPORT_SYMBOL(mfd_cell_disable); 57 57 58 + static int mfd_platform_add_cell(struct platform_device *pdev, 59 + const struct mfd_cell *cell) 60 + { 61 + if (!cell) 62 + return 0; 63 + 64 + pdev->mfd_cell = kmemdup(cell, sizeof(*cell), GFP_KERNEL); 65 + if (!pdev->mfd_cell) 66 + return -ENOMEM; 67 + 68 + return 0; 69 + } 70 + 58 71 static int mfd_add_device(struct device *parent, int id, 59 72 const struct mfd_cell *cell, 60 73 struct resource *mem_base, ··· 88 75 89 76 pdev->dev.parent = parent; 90 77 91 - ret = platform_device_add_data(pdev, cell, sizeof(*cell)); 78 + ret = mfd_platform_add_cell(pdev, cell); 92 79 if (ret) 93 80 goto fail_res; 94 81 ··· 136 123 137 124 return 0; 138 125 139 - /* platform_device_del(pdev); */ 140 126 fail_res: 141 127 kfree(res); 142 128 fail_device:
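After this change the cell lives in pdev->mfd_cell (a kmemdup()ed copy) instead of being stuffed into the device's platform_data via platform_device_add_data(). A cell driver that needs its descriptor would fetch it along these lines (illustrative sketch, not part of the patch):

static int example_cell_probe(struct platform_device *pdev)
{
	/* the MFD core set this up in mfd_platform_add_cell() */
	const struct mfd_cell *cell = pdev->mfd_cell;

	if (!cell)
		return -ENODEV;
	dev_info(&pdev->dev, "bound to MFD cell %s\n", cell->name);
	return 0;
}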
+4 -4
drivers/misc/sgi-gru/grufile.c
··· 348 348 349 349 static int gru_irq_count[GRU_CHIPLETS_PER_BLADE]; 350 350 351 - static void gru_noop(unsigned int irq) 351 + static void gru_noop(struct irq_data *d) 352 352 { 353 353 } 354 354 355 355 static struct irq_chip gru_chip[GRU_CHIPLETS_PER_BLADE] = { 356 356 [0 ... GRU_CHIPLETS_PER_BLADE - 1] { 357 - .mask = gru_noop, 358 - .unmask = gru_noop, 359 - .ack = gru_noop 357 + .irq_mask = gru_noop, 358 + .irq_unmask = gru_noop, 359 + .irq_ack = gru_noop 360 360 } 361 361 }; 362 362
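The gru_noop change reflects the genirq conversion: chip callbacks are now named irq_mask/irq_unmask/irq_ack and receive a struct irq_data pointer rather than a bare IRQ number. A non-trivial handler under the new API looks roughly like this (illustrative, not from this driver):

static void example_irq_mask(struct irq_data *d)
{
	/* driver-private state registered with irq_set_chip_data() */
	void *chip = irq_data_get_irq_chip_data(d);

	/* the Linux irq number is still available as d->irq */
	pr_debug("masking irq %u\n", d->irq);
	(void)chip;
}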
+3 -3
drivers/pci/pci-driver.c
··· 781 781 782 782 #endif /* !CONFIG_SUSPEND */ 783 783 784 - #ifdef CONFIG_HIBERNATION 784 + #ifdef CONFIG_HIBERNATE_CALLBACKS 785 785 786 786 static int pci_pm_freeze(struct device *dev) 787 787 { ··· 970 970 return error; 971 971 } 972 972 973 - #else /* !CONFIG_HIBERNATION */ 973 + #else /* !CONFIG_HIBERNATE_CALLBACKS */ 974 974 975 975 #define pci_pm_freeze NULL 976 976 #define pci_pm_freeze_noirq NULL ··· 981 981 #define pci_pm_restore NULL 982 982 #define pci_pm_restore_noirq NULL 983 983 984 - #endif /* !CONFIG_HIBERNATION */ 984 + #endif /* !CONFIG_HIBERNATE_CALLBACKS */ 985 985 986 986 #ifdef CONFIG_PM_RUNTIME 987 987
+2 -1
drivers/platform/x86/Kconfig
··· 187 187 depends on ACPI 188 188 depends on BACKLIGHT_CLASS_DEVICE 189 189 depends on RFKILL 190 - depends on SERIO_I8042 190 + depends on INPUT && SERIO_I8042 191 + select INPUT_SPARSEKMAP 191 192 ---help--- 192 193 This is a driver for laptops built by MSI (MICRO-STAR 193 194 INTERNATIONAL):
+1 -1
drivers/platform/x86/acer-wmi.c
··· 89 89 #define ACERWMID_EVENT_GUID "676AA15E-6A47-4D9F-A2CC-1E6D18D14026" 90 90 91 91 MODULE_ALIAS("wmi:67C3371D-95A3-4C37-BB61-DD47B491DAAB"); 92 - MODULE_ALIAS("wmi:6AF4F258-B401-42Fd-BE91-3D4AC2D7C0D3"); 92 + MODULE_ALIAS("wmi:6AF4F258-B401-42FD-BE91-3D4AC2D7C0D3"); 93 93 MODULE_ALIAS("wmi:676AA15E-6A47-4D9F-A2CC-1E6D18D14026"); 94 94 95 95 enum acer_wmi_event_ids {
+2 -2
drivers/platform/x86/asus-wmi.c
··· 201 201 if (!asus->inputdev) 202 202 return -ENOMEM; 203 203 204 - asus->inputdev->name = asus->driver->input_phys; 205 - asus->inputdev->phys = asus->driver->input_name; 204 + asus->inputdev->name = asus->driver->input_name; 205 + asus->inputdev->phys = asus->driver->input_phys; 206 206 asus->inputdev->id.bustype = BUS_HOST; 207 207 asus->inputdev->dev.parent = &asus->platform_device->dev; 208 208
+2
drivers/platform/x86/eeepc-wmi.c
··· 67 67 { KE_KEY, 0x82, { KEY_CAMERA } }, 68 68 { KE_KEY, 0x83, { KEY_CAMERA_ZOOMIN } }, 69 69 { KE_KEY, 0x88, { KEY_WLAN } }, 70 + { KE_KEY, 0xbd, { KEY_CAMERA } }, 70 71 { KE_KEY, 0xcc, { KEY_SWITCHVIDEOMODE } }, 71 72 { KE_KEY, 0xe0, { KEY_PROG1 } }, /* Task Manager */ 72 73 { KE_KEY, 0xe1, { KEY_F14 } }, /* Change Resolution */ 74 + { KE_KEY, 0xe8, { KEY_SCREENLOCK } }, 73 75 { KE_KEY, 0xe9, { KEY_BRIGHTNESS_ZERO } }, 74 76 { KE_KEY, 0xeb, { KEY_CAMERA_ZOOMOUT } }, 75 77 { KE_KEY, 0xec, { KEY_CAMERA_UP } },
+39 -4
drivers/platform/x86/intel_pmic_gpio.c
··· 74 74 u32 trigger_type; 75 75 }; 76 76 77 + static void pmic_program_irqtype(int gpio, int type) 78 + { 79 + if (type & IRQ_TYPE_EDGE_RISING) 80 + intel_scu_ipc_update_register(GPIO0 + gpio, 0x20, 0x20); 81 + else 82 + intel_scu_ipc_update_register(GPIO0 + gpio, 0x00, 0x20); 83 + 84 + if (type & IRQ_TYPE_EDGE_FALLING) 85 + intel_scu_ipc_update_register(GPIO0 + gpio, 0x10, 0x10); 86 + else 87 + intel_scu_ipc_update_register(GPIO0 + gpio, 0x00, 0x10); 88 + }; 89 + 77 90 static int pmic_gpio_direction_input(struct gpio_chip *chip, unsigned offset) 78 91 { 79 92 if (offset > 8) { ··· 179 166 return pg->irq_base + offset; 180 167 } 181 168 169 + static void pmic_bus_lock(struct irq_data *data) 170 + { 171 + struct pmic_gpio *pg = irq_data_get_irq_chip_data(data); 172 + 173 + mutex_lock(&pg->buslock); 174 + } 175 + 176 + static void pmic_bus_sync_unlock(struct irq_data *data) 177 + { 178 + struct pmic_gpio *pg = irq_data_get_irq_chip_data(data); 179 + 180 + if (pg->update_type) { 181 + unsigned int gpio = pg->update_type & ~GPIO_UPDATE_TYPE; 182 + 183 + pmic_program_irqtype(gpio, pg->trigger_type); 184 + pg->update_type = 0; 185 + } 186 + mutex_unlock(&pg->buslock); 187 + } 188 + 182 189 /* the gpiointr register is read-clear, so just do nothing. */ 183 190 static void pmic_irq_unmask(struct irq_data *data) { } 184 191 185 192 static void pmic_irq_mask(struct irq_data *data) { } 186 193 187 194 static struct irq_chip pmic_irqchip = { 188 - .name = "PMIC-GPIO", 189 - .irq_mask = pmic_irq_mask, 190 - .irq_unmask = pmic_irq_unmask, 191 - .irq_set_type = pmic_irq_type, 195 + .name = "PMIC-GPIO", 196 + .irq_mask = pmic_irq_mask, 197 + .irq_unmask = pmic_irq_unmask, 198 + .irq_set_type = pmic_irq_type, 199 + .irq_bus_lock = pmic_bus_lock, 200 + .irq_bus_sync_unlock = pmic_bus_sync_unlock, 192 201 }; 193 202 194 203 static irqreturn_t pmic_irq_handler(int irq, void *data)
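The new irq_bus_lock/irq_bus_sync_unlock pair is the usual pattern for irq chips behind a sleeping bus: callbacks such as the trigger-type setter run with the irq descriptor lock held and may only record what has to change, and the deferred SCU IPC writes then happen in pmic_bus_sync_unlock() under pg->buslock. The recording side is not shown in this hunk; it is expected to look roughly like this (illustrative sketch):

static int example_irq_set_type(struct irq_data *data, unsigned int type)
{
	struct pmic_gpio *pg = irq_data_get_irq_chip_data(data);
	u32 gpio = data->irq - pg->irq_base;

	/* remember the change; pmic_bus_sync_unlock() applies it */
	pg->trigger_type = type;
	pg->update_type = gpio | GPIO_UPDATE_TYPE;
	return 0;
}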
+14 -3
drivers/platform/x86/samsung-laptop.c
··· 571 571 .callback = dmi_check_cb, 572 572 }, 573 573 { 574 + .ident = "R410 Plus", 575 + .matches = { 576 + DMI_MATCH(DMI_SYS_VENDOR, 577 + "SAMSUNG ELECTRONICS CO., LTD."), 578 + DMI_MATCH(DMI_PRODUCT_NAME, "R410P"), 579 + DMI_MATCH(DMI_BOARD_NAME, "R460"), 580 + }, 581 + .callback = dmi_check_cb, 582 + }, 583 + { 574 584 .ident = "R518", 575 585 .matches = { 576 586 DMI_MATCH(DMI_SYS_VENDOR, ··· 601 591 .callback = dmi_check_cb, 602 592 }, 603 593 { 604 - .ident = "N150/N210/N220", 594 + .ident = "N150/N210/N220/N230", 605 595 .matches = { 606 596 DMI_MATCH(DMI_SYS_VENDOR, 607 597 "SAMSUNG ELECTRONICS CO., LTD."), 608 - DMI_MATCH(DMI_PRODUCT_NAME, "N150/N210/N220"), 609 - DMI_MATCH(DMI_BOARD_NAME, "N150/N210/N220"), 598 + DMI_MATCH(DMI_PRODUCT_NAME, "N150/N210/N220/N230"), 599 + DMI_MATCH(DMI_BOARD_NAME, "N150/N210/N220/N230"), 610 600 }, 611 601 .callback = dmi_check_cb, 612 602 }, ··· 781 771 782 772 /* create a backlight device to talk to this one */ 783 773 memset(&props, 0, sizeof(struct backlight_properties)); 774 + props.type = BACKLIGHT_PLATFORM; 784 775 props.max_brightness = sabi_config->max_brightness; 785 776 backlight_device = backlight_device_register("samsung", &sdev->dev, 786 777 NULL, &backlight_ops,
+53 -12
drivers/platform/x86/sony-laptop.c
··· 138 138 "1 for 30 seconds, 2 for 60 seconds and 3 to disable timeout " 139 139 "(default: 0)"); 140 140 141 + static void sony_nc_kbd_backlight_resume(void); 142 + 141 143 enum sony_nc_rfkill { 142 144 SONY_WIFI, 143 145 SONY_BLUETOOTH, ··· 773 771 if (!handles) 774 772 return -ENOMEM; 775 773 776 - sysfs_attr_init(&handles->devattr.attr); 777 - handles->devattr.attr.name = "handles"; 778 - handles->devattr.attr.mode = S_IRUGO; 779 - handles->devattr.show = sony_nc_handles_show; 780 - 781 774 for (i = 0; i < ARRAY_SIZE(handles->cap); i++) { 782 775 if (!acpi_callsetfunc(sony_nc_acpi_handle, 783 776 "SN00", i + 0x20, &result)) { ··· 782 785 } 783 786 } 784 787 785 - /* allow reading capabilities via sysfs */ 786 - if (device_create_file(&pd->dev, &handles->devattr)) { 787 - kfree(handles); 788 - handles = NULL; 789 - return -1; 788 + if (debug) { 789 + sysfs_attr_init(&handles->devattr.attr); 790 + handles->devattr.attr.name = "handles"; 791 + handles->devattr.attr.mode = S_IRUGO; 792 + handles->devattr.show = sony_nc_handles_show; 793 + 794 + /* allow reading capabilities via sysfs */ 795 + if (device_create_file(&pd->dev, &handles->devattr)) { 796 + kfree(handles); 797 + handles = NULL; 798 + return -1; 799 + } 790 800 } 791 801 792 802 return 0; ··· 802 798 static int sony_nc_handles_cleanup(struct platform_device *pd) 803 799 { 804 800 if (handles) { 805 - device_remove_file(&pd->dev, &handles->devattr); 801 + if (debug) 802 + device_remove_file(&pd->dev, &handles->devattr); 806 803 kfree(handles); 807 804 handles = NULL; 808 805 } ··· 813 808 static int sony_find_snc_handle(int handle) 814 809 { 815 810 int i; 811 + 812 + /* not initialized yet, return early */ 813 + if (!handles) 814 + return -1; 815 + 816 816 for (i = 0; i < 0x10; i++) { 817 817 if (handles->cap[i] == handle) { 818 818 dprintk("found handle 0x%.4x (offset: 0x%.2x)\n", ··· 1178 1168 /* re-read rfkill state */ 1179 1169 sony_nc_rfkill_update(); 1180 1170 1171 + /* restore kbd backlight states */ 1172 + sony_nc_kbd_backlight_resume(); 1173 + 1181 1174 return 0; 1182 1175 } 1183 1176 ··· 1368 1355 #define KBDBL_HANDLER 0x137 1369 1356 #define KBDBL_PRESENT 0xB00 1370 1357 #define SET_MODE 0xC00 1358 + #define SET_STATE 0xD00 1371 1359 #define SET_TIMEOUT 0xE00 1372 1360 1373 1361 struct kbd_backlight { ··· 1390 1376 if (sony_call_snc_handle(KBDBL_HANDLER, 1391 1377 (value << 0x10) | SET_MODE, &result)) 1392 1378 return -EIO; 1379 + 1380 + /* Try to turn the light on/off immediately */ 1381 + sony_call_snc_handle(KBDBL_HANDLER, (value << 0x10) | SET_STATE, 1382 + &result); 1393 1383 1394 1384 kbdbl_handle->mode = value; 1395 1385 ··· 1476 1458 { 1477 1459 int result; 1478 1460 1479 - if (sony_call_snc_handle(0x137, KBDBL_PRESENT, &result)) 1461 + if (sony_call_snc_handle(KBDBL_HANDLER, KBDBL_PRESENT, &result)) 1480 1462 return 0; 1481 1463 if (!(result & 0x02)) 1482 1464 return 0; ··· 1519 1501 static int sony_nc_kbd_backlight_cleanup(struct platform_device *pd) 1520 1502 { 1521 1503 if (kbdbl_handle) { 1504 + int result; 1505 + 1522 1506 device_remove_file(&pd->dev, &kbdbl_handle->mode_attr); 1523 1507 device_remove_file(&pd->dev, &kbdbl_handle->timeout_attr); 1508 + 1509 + /* restore the default hw behaviour */ 1510 + sony_call_snc_handle(KBDBL_HANDLER, 0x1000 | SET_MODE, &result); 1511 + sony_call_snc_handle(KBDBL_HANDLER, SET_TIMEOUT, &result); 1512 + 1524 1513 kfree(kbdbl_handle); 1525 1514 } 1526 1515 return 0; 1516 + } 1517 + 1518 + static void sony_nc_kbd_backlight_resume(void) 1519 + { 1520 + int ignore = 0; 
1521 + 1522 + if (!kbdbl_handle) 1523 + return; 1524 + 1525 + if (kbdbl_handle->mode == 0) 1526 + sony_call_snc_handle(KBDBL_HANDLER, SET_MODE, &ignore); 1527 + 1528 + if (kbdbl_handle->timeout != 0) 1529 + sony_call_snc_handle(KBDBL_HANDLER, 1530 + (kbdbl_handle->timeout << 0x10) | SET_TIMEOUT, 1531 + &ignore); 1527 1532 } 1528 1533 1529 1534 static void sony_nc_backlight_setup(void)
+1 -2
drivers/platform/x86/thinkpad_acpi.c
··· 8618 8618 tpacpi_is_fw_digit(s[1]) && 8619 8619 s[2] == t && s[3] == 'T' && 8620 8620 tpacpi_is_fw_digit(s[4]) && 8621 - tpacpi_is_fw_digit(s[5]) && 8622 - s[6] == 'W' && s[7] == 'W'; 8621 + tpacpi_is_fw_digit(s[5]); 8623 8622 } 8624 8623 8625 8624 /* returns 0 - probe ok, or < 0 - probe error.
+3 -2
drivers/rapidio/rio.c
··· 1171 1171 1172 1172 __setup("riohdid=", rio_hdid_setup); 1173 1173 1174 - void rio_register_mport(struct rio_mport *port) 1174 + int rio_register_mport(struct rio_mport *port) 1175 1175 { 1176 1176 if (next_portid >= RIO_MAX_MPORTS) { 1177 1177 pr_err("RIO: reached specified max number of mports\n"); 1178 - return; 1178 + return 1; 1179 1179 } 1180 1180 1181 1181 port->id = next_portid++; 1182 1182 port->host_deviceid = rio_get_hdid(port->id); 1183 1183 list_add_tail(&port->node, &rio_mports); 1184 + return 0; 1184 1185 } 1185 1186 1186 1187 EXPORT_SYMBOL_GPL(rio_local_get_device_id);
+1
drivers/rapidio/switches/idt_gen2.c
··· 418 418 DECLARE_RIO_SWITCH_INIT(RIO_VID_IDT, RIO_DID_IDTCPS1616, idtg2_switch_init); 419 419 DECLARE_RIO_SWITCH_INIT(RIO_VID_IDT, RIO_DID_IDTVPS1616, idtg2_switch_init); 420 420 DECLARE_RIO_SWITCH_INIT(RIO_VID_IDT, RIO_DID_IDTSPS1616, idtg2_switch_init); 421 + DECLARE_RIO_SWITCH_INIT(RIO_VID_IDT, RIO_DID_IDTCPS1432, idtg2_switch_init);
+1 -1
drivers/rtc/class.c
··· 171 171 err = __rtc_read_alarm(rtc, &alrm); 172 172 173 173 if (!err && !rtc_valid_tm(&alrm.time)) 174 - rtc_set_alarm(rtc, &alrm); 174 + rtc_initialize_alarm(rtc, &alrm); 175 175 176 176 strlcpy(rtc->name, name, RTC_DEVICE_NAME_SIZE); 177 177 dev_set_name(&rtc->dev, "rtc%d", id);
+26
drivers/rtc/interface.c
··· 375 375 } 376 376 EXPORT_SYMBOL_GPL(rtc_set_alarm); 377 377 378 + /* Called once per device from rtc_device_register */ 379 + int rtc_initialize_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm) 380 + { 381 + int err; 382 + 383 + err = rtc_valid_tm(&alarm->time); 384 + if (err != 0) 385 + return err; 386 + 387 + err = mutex_lock_interruptible(&rtc->ops_lock); 388 + if (err) 389 + return err; 390 + 391 + rtc->aie_timer.node.expires = rtc_tm_to_ktime(alarm->time); 392 + rtc->aie_timer.period = ktime_set(0, 0); 393 + if (alarm->enabled) { 394 + rtc->aie_timer.enabled = 1; 395 + timerqueue_add(&rtc->timerqueue, &rtc->aie_timer.node); 396 + } 397 + mutex_unlock(&rtc->ops_lock); 398 + return err; 399 + } 400 + EXPORT_SYMBOL_GPL(rtc_initialize_alarm); 401 + 402 + 403 + 378 404 int rtc_alarm_irq_enable(struct rtc_device *rtc, unsigned int enabled) 379 405 { 380 406 int err = mutex_lock_interruptible(&rtc->ops_lock);
+2
drivers/rtc/rtc-bfin.c
··· 250 250 bfin_rtc_int_set_alarm(rtc); 251 251 else 252 252 bfin_rtc_int_clear(~(RTC_ISTAT_ALARM | RTC_ISTAT_ALARM_DAY)); 253 + 254 + return 0; 253 255 } 254 256 255 257 static int bfin_rtc_read_time(struct device *dev, struct rtc_time *tm)
+1
drivers/rtc/rtc-mc13xxx.c
··· 401 401 }, { 402 402 .name = "mc13892-rtc", 403 403 }, 404 + { } 404 405 }; 405 406 406 407 static struct platform_driver mc13xxx_rtc_driver = {
-2
drivers/rtc/rtc-s3c.c
··· 336 336 337 337 /* do not clear AIE here, it may be needed for wake */ 338 338 339 - s3c_rtc_setpie(dev, 0); 340 339 free_irq(s3c_rtc_alarmno, rtc_dev); 341 340 free_irq(s3c_rtc_tickno, rtc_dev); 342 341 } ··· 407 408 platform_set_drvdata(dev, NULL); 408 409 rtc_device_unregister(rtc); 409 410 410 - s3c_rtc_setpie(&dev->dev, 0); 411 411 s3c_rtc_setaie(&dev->dev, 0); 412 412 413 413 clk_disable(rtc_clk);
+1 -1
drivers/scsi/scsi_lib.c
··· 443 443 &sdev->request_queue->queue_flags); 444 444 if (flagset) 445 445 queue_flag_set(QUEUE_FLAG_REENTER, sdev->request_queue); 446 - __blk_run_queue(sdev->request_queue, false); 446 + __blk_run_queue(sdev->request_queue); 447 447 if (flagset) 448 448 queue_flag_clear(QUEUE_FLAG_REENTER, sdev->request_queue); 449 449 spin_unlock(sdev->request_queue->queue_lock);
+1 -1
drivers/scsi/scsi_transport_fc.c
··· 3829 3829 !test_bit(QUEUE_FLAG_REENTER, &rport->rqst_q->queue_flags); 3830 3830 if (flagset) 3831 3831 queue_flag_set(QUEUE_FLAG_REENTER, rport->rqst_q); 3832 - __blk_run_queue(rport->rqst_q, false); 3832 + __blk_run_queue(rport->rqst_q); 3833 3833 if (flagset) 3834 3834 queue_flag_clear(QUEUE_FLAG_REENTER, rport->rqst_q); 3835 3835 spin_unlock_irqrestore(rport->rqst_q->queue_lock, flags);
-2
drivers/staging/Kconfig
··· 131 131 132 132 source "drivers/staging/wlags49_h25/Kconfig" 133 133 134 - source "drivers/staging/samsung-laptop/Kconfig" 135 - 136 134 source "drivers/staging/sm7xx/Kconfig" 137 135 138 136 source "drivers/staging/dt3155v4l/Kconfig"
-1
drivers/staging/Makefile
··· 48 48 obj-$(CONFIG_ZCACHE) += zcache/ 49 49 obj-$(CONFIG_WLAGS49_H2) += wlags49_h2/ 50 50 obj-$(CONFIG_WLAGS49_H25) += wlags49_h25/ 51 - obj-$(CONFIG_SAMSUNG_LAPTOP) += samsung-laptop/ 52 51 obj-$(CONFIG_FB_SM7XX) += sm7xx/ 53 52 obj-$(CONFIG_VIDEO_DT3155) += dt3155v4l/ 54 53 obj-$(CONFIG_CRYSTALHD) += crystalhd/
-10
drivers/staging/samsung-laptop/Kconfig
··· 1 - config SAMSUNG_LAPTOP 2 - tristate "Samsung Laptop driver" 3 - default n 4 - depends on RFKILL && BACKLIGHT_CLASS_DEVICE && X86 5 - help 6 - This module implements a driver for the N128 Samsung Laptop 7 - providing control over the Wireless LED and the LCD backlight 8 - 9 - To compile this driver as a module, choose 10 - M here: the module will be called samsung-laptop.
-1
drivers/staging/samsung-laptop/Makefile
··· 1 - obj-$(CONFIG_SAMSUNG_LAPTOP) += samsung-laptop.o
-5
drivers/staging/samsung-laptop/TODO
··· 1 - TODO: 2 - - review from other developers 3 - - figure out ACPI video issues 4 - 5 - Please send patches to Greg Kroah-Hartman <gregkh@suse.de>
-843
drivers/staging/samsung-laptop/samsung-laptop.c
··· 1 - /* 2 - * Samsung Laptop driver 3 - * 4 - * Copyright (C) 2009,2011 Greg Kroah-Hartman (gregkh@suse.de) 5 - * Copyright (C) 2009,2011 Novell Inc. 6 - * 7 - * This program is free software; you can redistribute it and/or modify it 8 - * under the terms of the GNU General Public License version 2 as published by 9 - * the Free Software Foundation. 10 - * 11 - */ 12 - #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 13 - 14 - #include <linux/kernel.h> 15 - #include <linux/init.h> 16 - #include <linux/module.h> 17 - #include <linux/delay.h> 18 - #include <linux/pci.h> 19 - #include <linux/backlight.h> 20 - #include <linux/fb.h> 21 - #include <linux/dmi.h> 22 - #include <linux/platform_device.h> 23 - #include <linux/rfkill.h> 24 - 25 - /* 26 - * This driver is needed because a number of Samsung laptops do not hook 27 - * their control settings through ACPI. So we have to poke around in the 28 - * BIOS to do things like brightness values, and "special" key controls. 29 - */ 30 - 31 - /* 32 - * We have 0 - 8 as valid brightness levels. The specs say that level 0 should 33 - * be reserved by the BIOS (which really doesn't make much sense), we tell 34 - * userspace that the value is 0 - 7 and then just tell the hardware 1 - 8 35 - */ 36 - #define MAX_BRIGHT 0x07 37 - 38 - 39 - #define SABI_IFACE_MAIN 0x00 40 - #define SABI_IFACE_SUB 0x02 41 - #define SABI_IFACE_COMPLETE 0x04 42 - #define SABI_IFACE_DATA 0x05 43 - 44 - /* Structure to get data back to the calling function */ 45 - struct sabi_retval { 46 - u8 retval[20]; 47 - }; 48 - 49 - struct sabi_header_offsets { 50 - u8 port; 51 - u8 re_mem; 52 - u8 iface_func; 53 - u8 en_mem; 54 - u8 data_offset; 55 - u8 data_segment; 56 - }; 57 - 58 - struct sabi_commands { 59 - /* 60 - * Brightness is 0 - 8, as described above. 61 - * Value 0 is for the BIOS to use 62 - */ 63 - u8 get_brightness; 64 - u8 set_brightness; 65 - 66 - /* 67 - * first byte: 68 - * 0x00 - wireless is off 69 - * 0x01 - wireless is on 70 - * second byte: 71 - * 0x02 - 3G is off 72 - * 0x03 - 3G is on 73 - * TODO, verify 3G is correct, that doesn't seem right... 74 - */ 75 - u8 get_wireless_button; 76 - u8 set_wireless_button; 77 - 78 - /* 0 is off, 1 is on */ 79 - u8 get_backlight; 80 - u8 set_backlight; 81 - 82 - /* 83 - * 0x80 or 0x00 - no action 84 - * 0x81 - recovery key pressed 85 - */ 86 - u8 get_recovery_mode; 87 - u8 set_recovery_mode; 88 - 89 - /* 90 - * on seclinux: 0 is low, 1 is high, 91 - * on swsmi: 0 is normal, 1 is silent, 2 is turbo 92 - */ 93 - u8 get_performance_level; 94 - u8 set_performance_level; 95 - 96 - /* 97 - * Tell the BIOS that Linux is running on this machine. 
98 - * 81 is on, 80 is off 99 - */ 100 - u8 set_linux; 101 - }; 102 - 103 - struct sabi_performance_level { 104 - const char *name; 105 - u8 value; 106 - }; 107 - 108 - struct sabi_config { 109 - const char *test_string; 110 - u16 main_function; 111 - const struct sabi_header_offsets header_offsets; 112 - const struct sabi_commands commands; 113 - const struct sabi_performance_level performance_levels[4]; 114 - u8 min_brightness; 115 - u8 max_brightness; 116 - }; 117 - 118 - static const struct sabi_config sabi_configs[] = { 119 - { 120 - .test_string = "SECLINUX", 121 - 122 - .main_function = 0x4c49, 123 - 124 - .header_offsets = { 125 - .port = 0x00, 126 - .re_mem = 0x02, 127 - .iface_func = 0x03, 128 - .en_mem = 0x04, 129 - .data_offset = 0x05, 130 - .data_segment = 0x07, 131 - }, 132 - 133 - .commands = { 134 - .get_brightness = 0x00, 135 - .set_brightness = 0x01, 136 - 137 - .get_wireless_button = 0x02, 138 - .set_wireless_button = 0x03, 139 - 140 - .get_backlight = 0x04, 141 - .set_backlight = 0x05, 142 - 143 - .get_recovery_mode = 0x06, 144 - .set_recovery_mode = 0x07, 145 - 146 - .get_performance_level = 0x08, 147 - .set_performance_level = 0x09, 148 - 149 - .set_linux = 0x0a, 150 - }, 151 - 152 - .performance_levels = { 153 - { 154 - .name = "silent", 155 - .value = 0, 156 - }, 157 - { 158 - .name = "normal", 159 - .value = 1, 160 - }, 161 - { }, 162 - }, 163 - .min_brightness = 1, 164 - .max_brightness = 8, 165 - }, 166 - { 167 - .test_string = "SwSmi@", 168 - 169 - .main_function = 0x5843, 170 - 171 - .header_offsets = { 172 - .port = 0x00, 173 - .re_mem = 0x04, 174 - .iface_func = 0x02, 175 - .en_mem = 0x03, 176 - .data_offset = 0x05, 177 - .data_segment = 0x07, 178 - }, 179 - 180 - .commands = { 181 - .get_brightness = 0x10, 182 - .set_brightness = 0x11, 183 - 184 - .get_wireless_button = 0x12, 185 - .set_wireless_button = 0x13, 186 - 187 - .get_backlight = 0x2d, 188 - .set_backlight = 0x2e, 189 - 190 - .get_recovery_mode = 0xff, 191 - .set_recovery_mode = 0xff, 192 - 193 - .get_performance_level = 0x31, 194 - .set_performance_level = 0x32, 195 - 196 - .set_linux = 0xff, 197 - }, 198 - 199 - .performance_levels = { 200 - { 201 - .name = "normal", 202 - .value = 0, 203 - }, 204 - { 205 - .name = "silent", 206 - .value = 1, 207 - }, 208 - { 209 - .name = "overclock", 210 - .value = 2, 211 - }, 212 - { }, 213 - }, 214 - .min_brightness = 0, 215 - .max_brightness = 8, 216 - }, 217 - { }, 218 - }; 219 - 220 - static const struct sabi_config *sabi_config; 221 - 222 - static void __iomem *sabi; 223 - static void __iomem *sabi_iface; 224 - static void __iomem *f0000_segment; 225 - static struct backlight_device *backlight_device; 226 - static struct mutex sabi_mutex; 227 - static struct platform_device *sdev; 228 - static struct rfkill *rfk; 229 - 230 - static int force; 231 - module_param(force, bool, 0); 232 - MODULE_PARM_DESC(force, 233 - "Disable the DMI check and forces the driver to be loaded"); 234 - 235 - static int debug; 236 - module_param(debug, bool, S_IRUGO | S_IWUSR); 237 - MODULE_PARM_DESC(debug, "Debug enabled or not"); 238 - 239 - static int sabi_get_command(u8 command, struct sabi_retval *sretval) 240 - { 241 - int retval = 0; 242 - u16 port = readw(sabi + sabi_config->header_offsets.port); 243 - u8 complete, iface_data; 244 - 245 - mutex_lock(&sabi_mutex); 246 - 247 - /* enable memory to be able to write to it */ 248 - outb(readb(sabi + sabi_config->header_offsets.en_mem), port); 249 - 250 - /* write out the command */ 251 - writew(sabi_config->main_function, 
sabi_iface + SABI_IFACE_MAIN); 252 - writew(command, sabi_iface + SABI_IFACE_SUB); 253 - writeb(0, sabi_iface + SABI_IFACE_COMPLETE); 254 - outb(readb(sabi + sabi_config->header_offsets.iface_func), port); 255 - 256 - /* write protect memory to make it safe */ 257 - outb(readb(sabi + sabi_config->header_offsets.re_mem), port); 258 - 259 - /* see if the command actually succeeded */ 260 - complete = readb(sabi_iface + SABI_IFACE_COMPLETE); 261 - iface_data = readb(sabi_iface + SABI_IFACE_DATA); 262 - if (complete != 0xaa || iface_data == 0xff) { 263 - pr_warn("SABI get command 0x%02x failed with completion flag 0x%02x and data 0x%02x\n", 264 - command, complete, iface_data); 265 - retval = -EINVAL; 266 - goto exit; 267 - } 268 - /* 269 - * Save off the data into a structure so the caller use it. 270 - * Right now we only want the first 4 bytes, 271 - * There are commands that need more, but not for the ones we 272 - * currently care about. 273 - */ 274 - sretval->retval[0] = readb(sabi_iface + SABI_IFACE_DATA); 275 - sretval->retval[1] = readb(sabi_iface + SABI_IFACE_DATA + 1); 276 - sretval->retval[2] = readb(sabi_iface + SABI_IFACE_DATA + 2); 277 - sretval->retval[3] = readb(sabi_iface + SABI_IFACE_DATA + 3); 278 - 279 - exit: 280 - mutex_unlock(&sabi_mutex); 281 - return retval; 282 - 283 - } 284 - 285 - static int sabi_set_command(u8 command, u8 data) 286 - { 287 - int retval = 0; 288 - u16 port = readw(sabi + sabi_config->header_offsets.port); 289 - u8 complete, iface_data; 290 - 291 - mutex_lock(&sabi_mutex); 292 - 293 - /* enable memory to be able to write to it */ 294 - outb(readb(sabi + sabi_config->header_offsets.en_mem), port); 295 - 296 - /* write out the command */ 297 - writew(sabi_config->main_function, sabi_iface + SABI_IFACE_MAIN); 298 - writew(command, sabi_iface + SABI_IFACE_SUB); 299 - writeb(0, sabi_iface + SABI_IFACE_COMPLETE); 300 - writeb(data, sabi_iface + SABI_IFACE_DATA); 301 - outb(readb(sabi + sabi_config->header_offsets.iface_func), port); 302 - 303 - /* write protect memory to make it safe */ 304 - outb(readb(sabi + sabi_config->header_offsets.re_mem), port); 305 - 306 - /* see if the command actually succeeded */ 307 - complete = readb(sabi_iface + SABI_IFACE_COMPLETE); 308 - iface_data = readb(sabi_iface + SABI_IFACE_DATA); 309 - if (complete != 0xaa || iface_data == 0xff) { 310 - pr_warn("SABI set command 0x%02x failed with completion flag 0x%02x and data 0x%02x\n", 311 - command, complete, iface_data); 312 - retval = -EINVAL; 313 - } 314 - 315 - mutex_unlock(&sabi_mutex); 316 - return retval; 317 - } 318 - 319 - static void test_backlight(void) 320 - { 321 - struct sabi_retval sretval; 322 - 323 - sabi_get_command(sabi_config->commands.get_backlight, &sretval); 324 - printk(KERN_DEBUG "backlight = 0x%02x\n", sretval.retval[0]); 325 - 326 - sabi_set_command(sabi_config->commands.set_backlight, 0); 327 - printk(KERN_DEBUG "backlight should be off\n"); 328 - 329 - sabi_get_command(sabi_config->commands.get_backlight, &sretval); 330 - printk(KERN_DEBUG "backlight = 0x%02x\n", sretval.retval[0]); 331 - 332 - msleep(1000); 333 - 334 - sabi_set_command(sabi_config->commands.set_backlight, 1); 335 - printk(KERN_DEBUG "backlight should be on\n"); 336 - 337 - sabi_get_command(sabi_config->commands.get_backlight, &sretval); 338 - printk(KERN_DEBUG "backlight = 0x%02x\n", sretval.retval[0]); 339 - } 340 - 341 - static void test_wireless(void) 342 - { 343 - struct sabi_retval sretval; 344 - 345 - sabi_get_command(sabi_config->commands.get_wireless_button, &sretval); 
346 - printk(KERN_DEBUG "wireless led = 0x%02x\n", sretval.retval[0]); 347 - 348 - sabi_set_command(sabi_config->commands.set_wireless_button, 0); 349 - printk(KERN_DEBUG "wireless led should be off\n"); 350 - 351 - sabi_get_command(sabi_config->commands.get_wireless_button, &sretval); 352 - printk(KERN_DEBUG "wireless led = 0x%02x\n", sretval.retval[0]); 353 - 354 - msleep(1000); 355 - 356 - sabi_set_command(sabi_config->commands.set_wireless_button, 1); 357 - printk(KERN_DEBUG "wireless led should be on\n"); 358 - 359 - sabi_get_command(sabi_config->commands.get_wireless_button, &sretval); 360 - printk(KERN_DEBUG "wireless led = 0x%02x\n", sretval.retval[0]); 361 - } 362 - 363 - static u8 read_brightness(void) 364 - { 365 - struct sabi_retval sretval; 366 - int user_brightness = 0; 367 - int retval; 368 - 369 - retval = sabi_get_command(sabi_config->commands.get_brightness, 370 - &sretval); 371 - if (!retval) { 372 - user_brightness = sretval.retval[0]; 373 - if (user_brightness != 0) 374 - user_brightness -= sabi_config->min_brightness; 375 - } 376 - return user_brightness; 377 - } 378 - 379 - static void set_brightness(u8 user_brightness) 380 - { 381 - u8 user_level = user_brightness - sabi_config->min_brightness; 382 - 383 - sabi_set_command(sabi_config->commands.set_brightness, user_level); 384 - } 385 - 386 - static int get_brightness(struct backlight_device *bd) 387 - { 388 - return (int)read_brightness(); 389 - } 390 - 391 - static int update_status(struct backlight_device *bd) 392 - { 393 - set_brightness(bd->props.brightness); 394 - 395 - if (bd->props.power == FB_BLANK_UNBLANK) 396 - sabi_set_command(sabi_config->commands.set_backlight, 1); 397 - else 398 - sabi_set_command(sabi_config->commands.set_backlight, 0); 399 - return 0; 400 - } 401 - 402 - static const struct backlight_ops backlight_ops = { 403 - .get_brightness = get_brightness, 404 - .update_status = update_status, 405 - }; 406 - 407 - static int rfkill_set(void *data, bool blocked) 408 - { 409 - /* Do something with blocked...*/ 410 - /* 411 - * blocked == false is on 412 - * blocked == true is off 413 - */ 414 - if (blocked) 415 - sabi_set_command(sabi_config->commands.set_wireless_button, 0); 416 - else 417 - sabi_set_command(sabi_config->commands.set_wireless_button, 1); 418 - 419 - return 0; 420 - } 421 - 422 - static struct rfkill_ops rfkill_ops = { 423 - .set_block = rfkill_set, 424 - }; 425 - 426 - static int init_wireless(struct platform_device *sdev) 427 - { 428 - int retval; 429 - 430 - rfk = rfkill_alloc("samsung-wifi", &sdev->dev, RFKILL_TYPE_WLAN, 431 - &rfkill_ops, NULL); 432 - if (!rfk) 433 - return -ENOMEM; 434 - 435 - retval = rfkill_register(rfk); 436 - if (retval) { 437 - rfkill_destroy(rfk); 438 - return -ENODEV; 439 - } 440 - 441 - return 0; 442 - } 443 - 444 - static void destroy_wireless(void) 445 - { 446 - rfkill_unregister(rfk); 447 - rfkill_destroy(rfk); 448 - } 449 - 450 - static ssize_t get_performance_level(struct device *dev, 451 - struct device_attribute *attr, char *buf) 452 - { 453 - struct sabi_retval sretval; 454 - int retval; 455 - int i; 456 - 457 - /* Read the state */ 458 - retval = sabi_get_command(sabi_config->commands.get_performance_level, 459 - &sretval); 460 - if (retval) 461 - return retval; 462 - 463 - /* The logic is backwards, yeah, lots of fun... 
*/ 464 - for (i = 0; sabi_config->performance_levels[i].name; ++i) { 465 - if (sretval.retval[0] == sabi_config->performance_levels[i].value) 466 - return sprintf(buf, "%s\n", sabi_config->performance_levels[i].name); 467 - } 468 - return sprintf(buf, "%s\n", "unknown"); 469 - } 470 - 471 - static ssize_t set_performance_level(struct device *dev, 472 - struct device_attribute *attr, const char *buf, 473 - size_t count) 474 - { 475 - if (count >= 1) { 476 - int i; 477 - for (i = 0; sabi_config->performance_levels[i].name; ++i) { 478 - const struct sabi_performance_level *level = 479 - &sabi_config->performance_levels[i]; 480 - if (!strncasecmp(level->name, buf, strlen(level->name))) { 481 - sabi_set_command(sabi_config->commands.set_performance_level, 482 - level->value); 483 - break; 484 - } 485 - } 486 - if (!sabi_config->performance_levels[i].name) 487 - return -EINVAL; 488 - } 489 - return count; 490 - } 491 - static DEVICE_ATTR(performance_level, S_IWUSR | S_IRUGO, 492 - get_performance_level, set_performance_level); 493 - 494 - 495 - static int __init dmi_check_cb(const struct dmi_system_id *id) 496 - { 497 - pr_info("found laptop model '%s'\n", 498 - id->ident); 499 - return 0; 500 - } 501 - 502 - static struct dmi_system_id __initdata samsung_dmi_table[] = { 503 - { 504 - .ident = "N128", 505 - .matches = { 506 - DMI_MATCH(DMI_SYS_VENDOR, 507 - "SAMSUNG ELECTRONICS CO., LTD."), 508 - DMI_MATCH(DMI_PRODUCT_NAME, "N128"), 509 - DMI_MATCH(DMI_BOARD_NAME, "N128"), 510 - }, 511 - .callback = dmi_check_cb, 512 - }, 513 - { 514 - .ident = "N130", 515 - .matches = { 516 - DMI_MATCH(DMI_SYS_VENDOR, 517 - "SAMSUNG ELECTRONICS CO., LTD."), 518 - DMI_MATCH(DMI_PRODUCT_NAME, "N130"), 519 - DMI_MATCH(DMI_BOARD_NAME, "N130"), 520 - }, 521 - .callback = dmi_check_cb, 522 - }, 523 - { 524 - .ident = "X125", 525 - .matches = { 526 - DMI_MATCH(DMI_SYS_VENDOR, 527 - "SAMSUNG ELECTRONICS CO., LTD."), 528 - DMI_MATCH(DMI_PRODUCT_NAME, "X125"), 529 - DMI_MATCH(DMI_BOARD_NAME, "X125"), 530 - }, 531 - .callback = dmi_check_cb, 532 - }, 533 - { 534 - .ident = "X120/X170", 535 - .matches = { 536 - DMI_MATCH(DMI_SYS_VENDOR, 537 - "SAMSUNG ELECTRONICS CO., LTD."), 538 - DMI_MATCH(DMI_PRODUCT_NAME, "X120/X170"), 539 - DMI_MATCH(DMI_BOARD_NAME, "X120/X170"), 540 - }, 541 - .callback = dmi_check_cb, 542 - }, 543 - { 544 - .ident = "NC10", 545 - .matches = { 546 - DMI_MATCH(DMI_SYS_VENDOR, 547 - "SAMSUNG ELECTRONICS CO., LTD."), 548 - DMI_MATCH(DMI_PRODUCT_NAME, "NC10"), 549 - DMI_MATCH(DMI_BOARD_NAME, "NC10"), 550 - }, 551 - .callback = dmi_check_cb, 552 - }, 553 - { 554 - .ident = "NP-Q45", 555 - .matches = { 556 - DMI_MATCH(DMI_SYS_VENDOR, 557 - "SAMSUNG ELECTRONICS CO., LTD."), 558 - DMI_MATCH(DMI_PRODUCT_NAME, "SQ45S70S"), 559 - DMI_MATCH(DMI_BOARD_NAME, "SQ45S70S"), 560 - }, 561 - .callback = dmi_check_cb, 562 - }, 563 - { 564 - .ident = "X360", 565 - .matches = { 566 - DMI_MATCH(DMI_SYS_VENDOR, 567 - "SAMSUNG ELECTRONICS CO., LTD."), 568 - DMI_MATCH(DMI_PRODUCT_NAME, "X360"), 569 - DMI_MATCH(DMI_BOARD_NAME, "X360"), 570 - }, 571 - .callback = dmi_check_cb, 572 - }, 573 - { 574 - .ident = "R410 Plus", 575 - .matches = { 576 - DMI_MATCH(DMI_SYS_VENDOR, 577 - "SAMSUNG ELECTRONICS CO., LTD."), 578 - DMI_MATCH(DMI_PRODUCT_NAME, "R410P"), 579 - DMI_MATCH(DMI_BOARD_NAME, "R460"), 580 - }, 581 - .callback = dmi_check_cb, 582 - }, 583 - { 584 - .ident = "R518", 585 - .matches = { 586 - DMI_MATCH(DMI_SYS_VENDOR, 587 - "SAMSUNG ELECTRONICS CO., LTD."), 588 - DMI_MATCH(DMI_PRODUCT_NAME, "R518"), 589 - 
DMI_MATCH(DMI_BOARD_NAME, "R518"), 590 - }, 591 - .callback = dmi_check_cb, 592 - }, 593 - { 594 - .ident = "R519/R719", 595 - .matches = { 596 - DMI_MATCH(DMI_SYS_VENDOR, 597 - "SAMSUNG ELECTRONICS CO., LTD."), 598 - DMI_MATCH(DMI_PRODUCT_NAME, "R519/R719"), 599 - DMI_MATCH(DMI_BOARD_NAME, "R519/R719"), 600 - }, 601 - .callback = dmi_check_cb, 602 - }, 603 - { 604 - .ident = "N150/N210/N220/N230", 605 - .matches = { 606 - DMI_MATCH(DMI_SYS_VENDOR, 607 - "SAMSUNG ELECTRONICS CO., LTD."), 608 - DMI_MATCH(DMI_PRODUCT_NAME, "N150/N210/N220/N230"), 609 - DMI_MATCH(DMI_BOARD_NAME, "N150/N210/N220/N230"), 610 - }, 611 - .callback = dmi_check_cb, 612 - }, 613 - { 614 - .ident = "N150P/N210P/N220P", 615 - .matches = { 616 - DMI_MATCH(DMI_SYS_VENDOR, 617 - "SAMSUNG ELECTRONICS CO., LTD."), 618 - DMI_MATCH(DMI_PRODUCT_NAME, "N150P/N210P/N220P"), 619 - DMI_MATCH(DMI_BOARD_NAME, "N150P/N210P/N220P"), 620 - }, 621 - .callback = dmi_check_cb, 622 - }, 623 - { 624 - .ident = "R530/R730", 625 - .matches = { 626 - DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."), 627 - DMI_MATCH(DMI_PRODUCT_NAME, "R530/R730"), 628 - DMI_MATCH(DMI_BOARD_NAME, "R530/R730"), 629 - }, 630 - .callback = dmi_check_cb, 631 - }, 632 - { 633 - .ident = "NF110/NF210/NF310", 634 - .matches = { 635 - DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."), 636 - DMI_MATCH(DMI_PRODUCT_NAME, "NF110/NF210/NF310"), 637 - DMI_MATCH(DMI_BOARD_NAME, "NF110/NF210/NF310"), 638 - }, 639 - .callback = dmi_check_cb, 640 - }, 641 - { 642 - .ident = "N145P/N250P/N260P", 643 - .matches = { 644 - DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."), 645 - DMI_MATCH(DMI_PRODUCT_NAME, "N145P/N250P/N260P"), 646 - DMI_MATCH(DMI_BOARD_NAME, "N145P/N250P/N260P"), 647 - }, 648 - .callback = dmi_check_cb, 649 - }, 650 - { 651 - .ident = "R70/R71", 652 - .matches = { 653 - DMI_MATCH(DMI_SYS_VENDOR, 654 - "SAMSUNG ELECTRONICS CO., LTD."), 655 - DMI_MATCH(DMI_PRODUCT_NAME, "R70/R71"), 656 - DMI_MATCH(DMI_BOARD_NAME, "R70/R71"), 657 - }, 658 - .callback = dmi_check_cb, 659 - }, 660 - { 661 - .ident = "P460", 662 - .matches = { 663 - DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."), 664 - DMI_MATCH(DMI_PRODUCT_NAME, "P460"), 665 - DMI_MATCH(DMI_BOARD_NAME, "P460"), 666 - }, 667 - .callback = dmi_check_cb, 668 - }, 669 - { }, 670 - }; 671 - MODULE_DEVICE_TABLE(dmi, samsung_dmi_table); 672 - 673 - static int find_signature(void __iomem *memcheck, const char *testStr) 674 - { 675 - int i = 0; 676 - int loca; 677 - 678 - for (loca = 0; loca < 0xffff; loca++) { 679 - char temp = readb(memcheck + loca); 680 - 681 - if (temp == testStr[i]) { 682 - if (i == strlen(testStr)-1) 683 - break; 684 - ++i; 685 - } else { 686 - i = 0; 687 - } 688 - } 689 - return loca; 690 - } 691 - 692 - static int __init samsung_init(void) 693 - { 694 - struct backlight_properties props; 695 - struct sabi_retval sretval; 696 - unsigned int ifaceP; 697 - int i; 698 - int loca; 699 - int retval; 700 - 701 - mutex_init(&sabi_mutex); 702 - 703 - if (!force && !dmi_check_system(samsung_dmi_table)) 704 - return -ENODEV; 705 - 706 - f0000_segment = ioremap_nocache(0xf0000, 0xffff); 707 - if (!f0000_segment) { 708 - pr_err("Can't map the segment at 0xf0000\n"); 709 - return -EINVAL; 710 - } 711 - 712 - /* Try to find one of the signatures in memory to find the header */ 713 - for (i = 0; sabi_configs[i].test_string != 0; ++i) { 714 - sabi_config = &sabi_configs[i]; 715 - loca = find_signature(f0000_segment, sabi_config->test_string); 716 - if (loca != 0xffff) 717 - break; 
718 - } 719 - 720 - if (loca == 0xffff) { 721 - pr_err("This computer does not support SABI\n"); 722 - goto error_no_signature; 723 - } 724 - 725 - /* point to the SMI port Number */ 726 - loca += 1; 727 - sabi = (f0000_segment + loca); 728 - 729 - if (debug) { 730 - printk(KERN_DEBUG "This computer supports SABI==%x\n", 731 - loca + 0xf0000 - 6); 732 - printk(KERN_DEBUG "SABI header:\n"); 733 - printk(KERN_DEBUG " SMI Port Number = 0x%04x\n", 734 - readw(sabi + sabi_config->header_offsets.port)); 735 - printk(KERN_DEBUG " SMI Interface Function = 0x%02x\n", 736 - readb(sabi + sabi_config->header_offsets.iface_func)); 737 - printk(KERN_DEBUG " SMI enable memory buffer = 0x%02x\n", 738 - readb(sabi + sabi_config->header_offsets.en_mem)); 739 - printk(KERN_DEBUG " SMI restore memory buffer = 0x%02x\n", 740 - readb(sabi + sabi_config->header_offsets.re_mem)); 741 - printk(KERN_DEBUG " SABI data offset = 0x%04x\n", 742 - readw(sabi + sabi_config->header_offsets.data_offset)); 743 - printk(KERN_DEBUG " SABI data segment = 0x%04x\n", 744 - readw(sabi + sabi_config->header_offsets.data_segment)); 745 - } 746 - 747 - /* Get a pointer to the SABI Interface */ 748 - ifaceP = (readw(sabi + sabi_config->header_offsets.data_segment) & 0x0ffff) << 4; 749 - ifaceP += readw(sabi + sabi_config->header_offsets.data_offset) & 0x0ffff; 750 - sabi_iface = ioremap_nocache(ifaceP, 16); 751 - if (!sabi_iface) { 752 - pr_err("Can't remap %x\n", ifaceP); 753 - goto exit; 754 - } 755 - if (debug) { 756 - printk(KERN_DEBUG "ifaceP = 0x%08x\n", ifaceP); 757 - printk(KERN_DEBUG "sabi_iface = %p\n", sabi_iface); 758 - 759 - test_backlight(); 760 - test_wireless(); 761 - 762 - retval = sabi_get_command(sabi_config->commands.get_brightness, 763 - &sretval); 764 - printk(KERN_DEBUG "brightness = 0x%02x\n", sretval.retval[0]); 765 - } 766 - 767 - /* Turn on "Linux" mode in the BIOS */ 768 - if (sabi_config->commands.set_linux != 0xff) { 769 - retval = sabi_set_command(sabi_config->commands.set_linux, 770 - 0x81); 771 - if (retval) { 772 - pr_warn("Linux mode was not set!\n"); 773 - goto error_no_platform; 774 - } 775 - } 776 - 777 - /* knock up a platform device to hang stuff off of */ 778 - sdev = platform_device_register_simple("samsung", -1, NULL, 0); 779 - if (IS_ERR(sdev)) 780 - goto error_no_platform; 781 - 782 - /* create a backlight device to talk to this one */ 783 - memset(&props, 0, sizeof(struct backlight_properties)); 784 - props.type = BACKLIGHT_PLATFORM; 785 - props.max_brightness = sabi_config->max_brightness; 786 - backlight_device = backlight_device_register("samsung", &sdev->dev, 787 - NULL, &backlight_ops, 788 - &props); 789 - if (IS_ERR(backlight_device)) 790 - goto error_no_backlight; 791 - 792 - backlight_device->props.brightness = read_brightness(); 793 - backlight_device->props.power = FB_BLANK_UNBLANK; 794 - backlight_update_status(backlight_device); 795 - 796 - retval = init_wireless(sdev); 797 - if (retval) 798 - goto error_no_rfk; 799 - 800 - retval = device_create_file(&sdev->dev, &dev_attr_performance_level); 801 - if (retval) 802 - goto error_file_create; 803 - 804 - exit: 805 - return 0; 806 - 807 - error_file_create: 808 - destroy_wireless(); 809 - 810 - error_no_rfk: 811 - backlight_device_unregister(backlight_device); 812 - 813 - error_no_backlight: 814 - platform_device_unregister(sdev); 815 - 816 - error_no_platform: 817 - iounmap(sabi_iface); 818 - 819 - error_no_signature: 820 - iounmap(f0000_segment); 821 - return -EINVAL; 822 - } 823 - 824 - static void __exit samsung_exit(void) 825 
- { 826 - /* Turn off "Linux" mode in the BIOS */ 827 - if (sabi_config->commands.set_linux != 0xff) 828 - sabi_set_command(sabi_config->commands.set_linux, 0x80); 829 - 830 - device_remove_file(&sdev->dev, &dev_attr_performance_level); 831 - backlight_device_unregister(backlight_device); 832 - destroy_wireless(); 833 - iounmap(sabi_iface); 834 - iounmap(f0000_segment); 835 - platform_device_unregister(sdev); 836 - } 837 - 838 - module_init(samsung_init); 839 - module_exit(samsung_exit); 840 - 841 - MODULE_AUTHOR("Greg Kroah-Hartman <gregkh@suse.de>"); 842 - MODULE_DESCRIPTION("Samsung Backlight driver"); 843 - MODULE_LICENSE("GPL");
+1
drivers/usb/Kconfig
··· 66 66 default y if ARCH_VT8500 67 67 default y if PLAT_SPEAR 68 68 default y if ARCH_MSM 69 + default y if MICROBLAZE 69 70 default PCI 70 71 71 72 # ARM SA1111 chips have a non-PCI based "OHCI-compatible" USB host interface.
+6 -4
drivers/usb/core/devices.c
··· 221 221 break; 222 222 case USB_ENDPOINT_XFER_INT: 223 223 type = "Int."; 224 - if (speed == USB_SPEED_HIGH) 224 + if (speed == USB_SPEED_HIGH || speed == USB_SPEED_SUPER) 225 225 interval = 1 << (desc->bInterval - 1); 226 226 else 227 227 interval = desc->bInterval; ··· 229 229 default: /* "can't happen" */ 230 230 return start; 231 231 } 232 - interval *= (speed == USB_SPEED_HIGH) ? 125 : 1000; 232 + interval *= (speed == USB_SPEED_HIGH || 233 + speed == USB_SPEED_SUPER) ? 125 : 1000; 233 234 if (interval % 1000) 234 235 unit = 'u'; 235 236 else { ··· 543 542 if (level == 0) { 544 543 int max; 545 544 546 - /* high speed reserves 80%, full/low reserves 90% */ 547 - if (usbdev->speed == USB_SPEED_HIGH) 545 + /* super/high speed reserves 80%, full/low reserves 90% */ 546 + if (usbdev->speed == USB_SPEED_HIGH || 547 + usbdev->speed == USB_SPEED_SUPER) 548 548 max = 800; 549 549 else 550 550 max = FRAME_TIME_MAX_USECS_ALLOC;
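For example, with this change a SuperSpeed interrupt endpoint with bInterval = 4 is reported as 1 << (4 - 1) = 8 service intervals of 125 us, i.e. 1 ms, matching the high-speed encoding, where the old code would have multiplied the raw bInterval by 1000 and shown 4 ms.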
+1 -1
drivers/usb/core/hcd.c
··· 1908 1908 1909 1909 /* Streams only apply to bulk endpoints. */ 1910 1910 for (i = 0; i < num_eps; i++) 1911 - if (!usb_endpoint_xfer_bulk(&eps[i]->desc)) 1911 + if (!eps[i] || !usb_endpoint_xfer_bulk(&eps[i]->desc)) 1912 1912 return; 1913 1913 1914 1914 hcd->driver->free_streams(hcd, dev, eps, num_eps, mem_flags);
+11 -1
drivers/usb/core/hub.c
··· 2285 2285 } 2286 2286 2287 2287 /* see 7.1.7.6 */ 2288 - status = set_port_feature(hub->hdev, port1, USB_PORT_FEAT_SUSPEND); 2288 + /* Clear PORT_POWER if it's a USB3.0 device connected to USB 3.0 2289 + * external hub. 2290 + * FIXME: this is a temporary workaround to make the system able 2291 + * to suspend/resume. 2292 + */ 2293 + if ((hub->hdev->parent != NULL) && hub_is_superspeed(hub->hdev)) 2294 + status = clear_port_feature(hub->hdev, port1, 2295 + USB_PORT_FEAT_POWER); 2296 + else 2297 + status = set_port_feature(hub->hdev, port1, 2298 + USB_PORT_FEAT_SUSPEND); 2289 2299 if (status) { 2290 2300 dev_dbg(hub->intfdev, "can't suspend port %d, status %d\n", 2291 2301 port1, status);
+1
drivers/usb/gadget/f_audio.c
··· 706 706 struct f_audio *audio = func_to_audio(f); 707 707 708 708 usb_free_descriptors(f->descriptors); 709 + usb_free_descriptors(f->hs_descriptors); 709 710 kfree(audio); 710 711 } 711 712
+6 -2
drivers/usb/gadget/f_eem.c
··· 314 314 315 315 static void eem_cmd_complete(struct usb_ep *ep, struct usb_request *req) 316 316 { 317 + struct sk_buff *skb = (struct sk_buff *)req->context; 318 + 319 + dev_kfree_skb_any(skb); 317 320 } 318 321 319 322 /* ··· 431 428 skb_trim(skb2, len); 432 429 put_unaligned_le16(BIT(15) | BIT(11) | len, 433 430 skb_push(skb2, 2)); 434 - skb_copy_bits(skb, 0, req->buf, skb->len); 435 - req->length = skb->len; 431 + skb_copy_bits(skb2, 0, req->buf, skb2->len); 432 + req->length = skb2->len; 436 433 req->complete = eem_cmd_complete; 437 434 req->zero = 1; 435 + req->context = skb2; 438 436 if (usb_ep_queue(port->in_ep, req, GFP_ATOMIC)) 439 437 DBG(cdev, "echo response queue fail\n"); 440 438 break;
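The f_eem.c change fixes a leak: the cloned skb backing the echo response was never freed because the command-completion callback was empty, and the copy now uses the trimmed clone (skb2) rather than the original skb. The idiom is to stash the buffer in the request's context pointer and release it on completion. A minimal userspace sketch of that ownership hand-off (types and names here are stand-ins, not the gadget API):

#include <stdlib.h>
#include <stdio.h>

struct request {
    void *context;                        /* remembers what to free later */
    void (*complete)(struct request *req);
};

static void cmd_complete(struct request *req)
{
    /* before the fix this callback was empty and the buffer leaked */
    free(req->context);
}

static void queue_echo_response(struct request *req, size_t len)
{
    req->context = malloc(len);           /* stands in for the cloned skb */
    req->complete = cmd_complete;
}

int main(void)
{
    struct request req;

    queue_echo_response(&req, 64);
    req.complete(&req);                   /* normally invoked on completion */
    puts("done");
    return 0;
}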
+18 -2
drivers/usb/gadget/fsl_qe_udc.c
··· 1148 1148 static int txcomplete(struct qe_ep *ep, unsigned char restart) 1149 1149 { 1150 1150 if (ep->tx_req != NULL) { 1151 + struct qe_req *req = ep->tx_req; 1152 + unsigned zlp = 0, last_len = 0; 1153 + 1154 + last_len = min_t(unsigned, req->req.length - ep->sent, 1155 + ep->ep.maxpacket); 1156 + 1151 1157 if (!restart) { 1152 1158 int asent = ep->last; 1153 1159 ep->sent += asent; ··· 1162 1156 ep->last = 0; 1163 1157 } 1164 1158 1159 + /* zlp needed when req->re.zero is set */ 1160 + if (req->req.zero) { 1161 + if (last_len == 0 || 1162 + (req->req.length % ep->ep.maxpacket) != 0) 1163 + zlp = 0; 1164 + else 1165 + zlp = 1; 1166 + } else 1167 + zlp = 0; 1168 + 1165 1169 /* a request already were transmitted completely */ 1166 - if ((ep->tx_req->req.length - ep->sent) <= 0) { 1167 - ep->tx_req->req.actual = (unsigned int)ep->sent; 1170 + if (((ep->tx_req->req.length - ep->sent) <= 0) && !zlp) { 1168 1171 done(ep, ep->tx_req, 0); 1169 1172 ep->tx_req = NULL; 1170 1173 ep->last = 0; ··· 1206 1191 buf = (u8 *)ep->tx_req->req.buf + ep->sent; 1207 1192 if (buf && size) { 1208 1193 ep->last = size; 1194 + ep->tx_req->req.actual += size; 1209 1195 frame_set_data(frame, buf); 1210 1196 frame_set_length(frame, size); 1211 1197 frame_set_status(frame, FRAME_OK);
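The fsl_qe_udc change adds explicit zero-length-packet (ZLP) handling: a trailing ZLP is queued only when the request asks for one and the data ends exactly on a packet boundary, since otherwise the final short packet already terminates the transfer. A minimal sketch of just that decision (field names are simplified stand-ins):

#include <stdio.h>
#include <stdbool.h>

static bool need_zlp(bool zero_flag, unsigned int length,
                     unsigned int sent, unsigned int maxpacket)
{
    unsigned int last_len = length - sent;

    if (last_len > maxpacket)
        last_len = maxpacket;

    /* ZLP only if requested, the last packet is not already empty, and
     * the total length is an exact multiple of the packet size */
    return zero_flag && last_len != 0 && (length % maxpacket) == 0;
}

int main(void)
{
    printf("%d\n", need_zlp(true, 1024, 512, 512)); /* 1: ZLP needed */
    printf("%d\n", need_zlp(true, 1000, 512, 512)); /* 0: short packet ends it */
    return 0;
}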
+3 -1
drivers/usb/gadget/inode.c
··· 386 386 387 387 /* halt any endpoint by doing a "wrong direction" i/o call */ 388 388 if (usb_endpoint_dir_in(&data->desc)) { 389 - if (usb_endpoint_xfer_isoc(&data->desc)) 389 + if (usb_endpoint_xfer_isoc(&data->desc)) { 390 + mutex_unlock(&data->lock); 390 391 return -EINVAL; 392 + } 391 393 DBG (data->dev, "%s halt\n", data->name); 392 394 spin_lock_irq (&data->dev->lock); 393 395 if (likely (data->ep != NULL))
+5 -3
drivers/usb/gadget/pch_udc.c
··· 1608 1608 return -EINVAL; 1609 1609 if (!dev->driver || (dev->gadget.speed == USB_SPEED_UNKNOWN)) 1610 1610 return -ESHUTDOWN; 1611 - spin_lock_irqsave(&ep->dev->lock, iflags); 1611 + spin_lock_irqsave(&dev->lock, iflags); 1612 1612 /* map the buffer for dma */ 1613 1613 if (usbreq->length && 1614 1614 ((usbreq->dma == DMA_ADDR_INVALID) || !usbreq->dma)) { ··· 1625 1625 DMA_FROM_DEVICE); 1626 1626 } else { 1627 1627 req->buf = kzalloc(usbreq->length, GFP_ATOMIC); 1628 - if (!req->buf) 1629 - return -ENOMEM; 1628 + if (!req->buf) { 1629 + retval = -ENOMEM; 1630 + goto probe_end; 1631 + } 1630 1632 if (ep->in) { 1631 1633 memcpy(req->buf, usbreq->buf, usbreq->length); 1632 1634 req->dma = dma_map_single(&dev->pdev->dev,
+2
drivers/usb/gadget/r8a66597-udc.c
··· 1083 1083 1084 1084 if (dvsq == DS_DFLT) { 1085 1085 /* bus reset */ 1086 + spin_unlock(&r8a66597->lock); 1086 1087 r8a66597->driver->disconnect(&r8a66597->gadget); 1088 + spin_lock(&r8a66597->lock); 1087 1089 r8a66597_update_usb_speed(r8a66597); 1088 1090 } 1089 1091 if (r8a66597->old_dvsq == DS_CNFG && dvsq != DS_CNFG)
+9 -6
drivers/usb/host/ehci-q.c
··· 1247 1247 1248 1248 static void scan_async (struct ehci_hcd *ehci) 1249 1249 { 1250 + bool stopped; 1250 1251 struct ehci_qh *qh; 1251 1252 enum ehci_timer_action action = TIMER_IO_WATCHDOG; 1252 1253 1253 1254 ehci->stamp = ehci_readl(ehci, &ehci->regs->frame_index); 1254 1255 timer_action_done (ehci, TIMER_ASYNC_SHRINK); 1255 1256 rescan: 1257 + stopped = !HC_IS_RUNNING(ehci_to_hcd(ehci)->state); 1256 1258 qh = ehci->async->qh_next.qh; 1257 1259 if (likely (qh != NULL)) { 1258 1260 do { 1259 1261 /* clean any finished work for this qh */ 1260 - if (!list_empty (&qh->qtd_list) 1261 - && qh->stamp != ehci->stamp) { 1262 + if (!list_empty(&qh->qtd_list) && (stopped || 1263 + qh->stamp != ehci->stamp)) { 1262 1264 int temp; 1263 1265 1264 1266 /* unlinks could happen here; completion 1265 1267 * reporting drops the lock. rescan using 1266 1268 * the latest schedule, but don't rescan 1267 - * qhs we already finished (no looping). 1269 + * qhs we already finished (no looping) 1270 + * unless the controller is stopped. 1268 1271 */ 1269 1272 qh = qh_get (qh); 1270 1273 qh->stamp = ehci->stamp; ··· 1288 1285 */ 1289 1286 if (list_empty(&qh->qtd_list) 1290 1287 && qh->qh_state == QH_STATE_LINKED) { 1291 - if (!ehci->reclaim 1292 - && ((ehci->stamp - qh->stamp) & 0x1fff) 1293 - >= (EHCI_SHRINK_FRAMES * 8)) 1288 + if (!ehci->reclaim && (stopped || 1289 + ((ehci->stamp - qh->stamp) & 0x1fff) 1290 + >= EHCI_SHRINK_FRAMES * 8)) 1294 1291 start_unlink_async(ehci, qh); 1295 1292 else 1296 1293 action = TIMER_ASYNC_SHRINK;
+1 -1
drivers/usb/host/isp1760-hcd.c
··· 295 295 } 296 296 297 297 dev_err(hcd->self.controller, 298 - "%s: Can not allocate %lu bytes of memory\n" 298 + "%s: Cannot allocate %zu bytes of memory\n" 299 299 "Current memory map:\n", 300 300 __func__, qtd->length); 301 301 for (i = 0; i < BLOCKS; i++) {
+1 -1
drivers/usb/host/ohci-au1xxx.c
··· 33 33 34 34 #ifdef __LITTLE_ENDIAN 35 35 #define USBH_ENABLE_INIT (USBH_ENABLE_CE | USBH_ENABLE_E | USBH_ENABLE_C) 36 - #elif __BIG_ENDIAN 36 + #elif defined(__BIG_ENDIAN) 37 37 #define USBH_ENABLE_INIT (USBH_ENABLE_CE | USBH_ENABLE_E | USBH_ENABLE_C | \ 38 38 USBH_ENABLE_BE) 39 39 #else
+74 -43
drivers/usb/host/pci-quirks.c
··· 84 84 { 85 85 u8 rev = 0; 86 86 unsigned long flags; 87 + struct amd_chipset_info info; 88 + int ret; 87 89 88 90 spin_lock_irqsave(&amd_lock, flags); 89 91 90 - amd_chipset.probe_count++; 91 92 /* probe only once */ 92 - if (amd_chipset.probe_count > 1) { 93 + if (amd_chipset.probe_count > 0) { 94 + amd_chipset.probe_count++; 93 95 spin_unlock_irqrestore(&amd_lock, flags); 94 96 return amd_chipset.probe_result; 95 97 } 98 + memset(&info, 0, sizeof(info)); 99 + spin_unlock_irqrestore(&amd_lock, flags); 96 100 97 - amd_chipset.smbus_dev = pci_get_device(PCI_VENDOR_ID_ATI, 0x4385, NULL); 98 - if (amd_chipset.smbus_dev) { 99 - rev = amd_chipset.smbus_dev->revision; 101 + info.smbus_dev = pci_get_device(PCI_VENDOR_ID_ATI, 0x4385, NULL); 102 + if (info.smbus_dev) { 103 + rev = info.smbus_dev->revision; 100 104 if (rev >= 0x40) 101 - amd_chipset.sb_type = 1; 105 + info.sb_type = 1; 102 106 else if (rev >= 0x30 && rev <= 0x3b) 103 - amd_chipset.sb_type = 3; 107 + info.sb_type = 3; 104 108 } else { 105 - amd_chipset.smbus_dev = pci_get_device(PCI_VENDOR_ID_AMD, 106 - 0x780b, NULL); 107 - if (!amd_chipset.smbus_dev) { 108 - spin_unlock_irqrestore(&amd_lock, flags); 109 - return 0; 109 + info.smbus_dev = pci_get_device(PCI_VENDOR_ID_AMD, 110 + 0x780b, NULL); 111 + if (!info.smbus_dev) { 112 + ret = 0; 113 + goto commit; 110 114 } 111 - rev = amd_chipset.smbus_dev->revision; 115 + 116 + rev = info.smbus_dev->revision; 112 117 if (rev >= 0x11 && rev <= 0x18) 113 - amd_chipset.sb_type = 2; 118 + info.sb_type = 2; 114 119 } 115 120 116 - if (amd_chipset.sb_type == 0) { 117 - if (amd_chipset.smbus_dev) { 118 - pci_dev_put(amd_chipset.smbus_dev); 119 - amd_chipset.smbus_dev = NULL; 121 + if (info.sb_type == 0) { 122 + if (info.smbus_dev) { 123 + pci_dev_put(info.smbus_dev); 124 + info.smbus_dev = NULL; 120 125 } 121 - spin_unlock_irqrestore(&amd_lock, flags); 122 - return 0; 126 + ret = 0; 127 + goto commit; 123 128 } 124 129 125 - amd_chipset.nb_dev = pci_get_device(PCI_VENDOR_ID_AMD, 0x9601, NULL); 126 - if (amd_chipset.nb_dev) { 127 - amd_chipset.nb_type = 1; 130 + info.nb_dev = pci_get_device(PCI_VENDOR_ID_AMD, 0x9601, NULL); 131 + if (info.nb_dev) { 132 + info.nb_type = 1; 128 133 } else { 129 - amd_chipset.nb_dev = pci_get_device(PCI_VENDOR_ID_AMD, 130 - 0x1510, NULL); 131 - if (amd_chipset.nb_dev) { 132 - amd_chipset.nb_type = 2; 133 - } else { 134 - amd_chipset.nb_dev = pci_get_device(PCI_VENDOR_ID_AMD, 135 - 0x9600, NULL); 136 - if (amd_chipset.nb_dev) 137 - amd_chipset.nb_type = 3; 134 + info.nb_dev = pci_get_device(PCI_VENDOR_ID_AMD, 0x1510, NULL); 135 + if (info.nb_dev) { 136 + info.nb_type = 2; 137 + } else { 138 + info.nb_dev = pci_get_device(PCI_VENDOR_ID_AMD, 139 + 0x9600, NULL); 140 + if (info.nb_dev) 141 + info.nb_type = 3; 138 142 } 139 143 } 140 144 141 - amd_chipset.probe_result = 1; 145 + ret = info.probe_result = 1; 142 146 printk(KERN_DEBUG "QUIRK: Enable AMD PLL fix\n"); 143 147 144 - spin_unlock_irqrestore(&amd_lock, flags); 145 - return amd_chipset.probe_result; 148 + commit: 149 + 150 + spin_lock_irqsave(&amd_lock, flags); 151 + if (amd_chipset.probe_count > 0) { 152 + /* race - someone else was faster - drop devices */ 153 + 154 + /* Mark that we where here */ 155 + amd_chipset.probe_count++; 156 + ret = amd_chipset.probe_result; 157 + 158 + spin_unlock_irqrestore(&amd_lock, flags); 159 + 160 + if (info.nb_dev) 161 + pci_dev_put(info.nb_dev); 162 + if (info.smbus_dev) 163 + pci_dev_put(info.smbus_dev); 164 + 165 + } else { 166 + /* no race - commit the result */ 167 + 
info.probe_count++; 168 + amd_chipset = info; 169 + spin_unlock_irqrestore(&amd_lock, flags); 170 + } 171 + 172 + return ret; 146 173 } 147 174 EXPORT_SYMBOL_GPL(usb_amd_find_chipset_info); 148 175 ··· 311 284 312 285 void usb_amd_dev_put(void) 313 286 { 287 + struct pci_dev *nb, *smbus; 314 288 unsigned long flags; 315 289 316 290 spin_lock_irqsave(&amd_lock, flags); ··· 322 294 return; 323 295 } 324 296 325 - if (amd_chipset.nb_dev) { 326 - pci_dev_put(amd_chipset.nb_dev); 327 - amd_chipset.nb_dev = NULL; 328 - } 329 - if (amd_chipset.smbus_dev) { 330 - pci_dev_put(amd_chipset.smbus_dev); 331 - amd_chipset.smbus_dev = NULL; 332 - } 297 + /* save them to pci_dev_put outside of spinlock */ 298 + nb = amd_chipset.nb_dev; 299 + smbus = amd_chipset.smbus_dev; 300 + 301 + amd_chipset.nb_dev = NULL; 302 + amd_chipset.smbus_dev = NULL; 333 303 amd_chipset.nb_type = 0; 334 304 amd_chipset.sb_type = 0; 335 305 amd_chipset.isoc_reqs = 0; 336 306 amd_chipset.probe_result = 0; 337 307 338 308 spin_unlock_irqrestore(&amd_lock, flags); 309 + 310 + if (nb) 311 + pci_dev_put(nb); 312 + if (smbus) 313 + pci_dev_put(smbus); 339 314 } 340 315 EXPORT_SYMBOL_GPL(usb_amd_dev_put); 341 316
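The pci-quirks.c rework is about lock scope: the pci_get_device() probing can no longer run with the spinlock held, so the probe fills a local struct and is only committed under the lock, with a re-check in case another CPU finished first. A rough userspace sketch of that probe-then-commit pattern, using a pthread mutex in place of the spinlock and a single cached integer in place of the chipset info:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int probe_count;
static int cached_result;

static int slow_probe(void)
{
    return 1;                           /* stands in for pci_get_device() lookups */
}

static int find_chipset_info(void)
{
    int result;

    pthread_mutex_lock(&lock);
    if (probe_count > 0) {              /* probe only once */
        probe_count++;
        result = cached_result;
        pthread_mutex_unlock(&lock);
        return result;
    }
    pthread_mutex_unlock(&lock);

    result = slow_probe();              /* lock dropped while probing */

    pthread_mutex_lock(&lock);
    if (probe_count > 0) {
        /* race: someone else committed first, keep their result */
        probe_count++;
        result = cached_result;
    } else {
        /* no race: commit our result */
        probe_count++;
        cached_result = result;
    }
    pthread_mutex_unlock(&lock);
    return result;
}

int main(void)
{
    printf("%d %d\n", find_chipset_info(), find_chipset_info());
    return 0;
}

In the real driver the losing side also has to drop the PCI device references it picked up while probing, which is why the early exits now funnel through the commit label.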
+70 -36
drivers/usb/host/xhci-mem.c
··· 846 846 * Skip ports that don't have known speeds, or have duplicate 847 847 * Extended Capabilities port speed entries. 848 848 */ 849 - if (port_speed == 0 || port_speed == -1) 849 + if (port_speed == 0 || port_speed == DUPLICATE_ENTRY) 850 850 continue; 851 851 852 852 /* ··· 974 974 return 0; 975 975 } 976 976 977 + /* 978 + * Convert interval expressed as 2^(bInterval - 1) == interval into 979 + * straight exponent value 2^n == interval. 980 + * 981 + */ 982 + static unsigned int xhci_parse_exponent_interval(struct usb_device *udev, 983 + struct usb_host_endpoint *ep) 984 + { 985 + unsigned int interval; 986 + 987 + interval = clamp_val(ep->desc.bInterval, 1, 16) - 1; 988 + if (interval != ep->desc.bInterval - 1) 989 + dev_warn(&udev->dev, 990 + "ep %#x - rounding interval to %d microframes\n", 991 + ep->desc.bEndpointAddress, 992 + 1 << interval); 993 + 994 + return interval; 995 + } 996 + 997 + /* 998 + * Convert bInterval expressed in frames (in 1-255 range) to exponent of 999 + * microframes, rounded down to nearest power of 2. 1000 + */ 1001 + static unsigned int xhci_parse_frame_interval(struct usb_device *udev, 1002 + struct usb_host_endpoint *ep) 1003 + { 1004 + unsigned int interval; 1005 + 1006 + interval = fls(8 * ep->desc.bInterval) - 1; 1007 + interval = clamp_val(interval, 3, 10); 1008 + if ((1 << interval) != 8 * ep->desc.bInterval) 1009 + dev_warn(&udev->dev, 1010 + "ep %#x - rounding interval to %d microframes, ep desc says %d microframes\n", 1011 + ep->desc.bEndpointAddress, 1012 + 1 << interval, 1013 + 8 * ep->desc.bInterval); 1014 + 1015 + return interval; 1016 + } 1017 + 977 1018 /* Return the polling or NAK interval. 978 1019 * 979 1020 * The polling interval is expressed in "microframes". If xHCI's Interval field ··· 1023 982 * The NAK interval is one NAK per 1 to 255 microframes, or no NAKs if interval 1024 983 * is set to 0. 1025 984 */ 1026 - static inline unsigned int xhci_get_endpoint_interval(struct usb_device *udev, 985 + static unsigned int xhci_get_endpoint_interval(struct usb_device *udev, 1027 986 struct usb_host_endpoint *ep) 1028 987 { 1029 988 unsigned int interval = 0; ··· 1032 991 case USB_SPEED_HIGH: 1033 992 /* Max NAK rate */ 1034 993 if (usb_endpoint_xfer_control(&ep->desc) || 1035 - usb_endpoint_xfer_bulk(&ep->desc)) 994 + usb_endpoint_xfer_bulk(&ep->desc)) { 1036 995 interval = ep->desc.bInterval; 996 + break; 997 + } 1037 998 /* Fall through - SS and HS isoc/int have same decoding */ 999 + 1038 1000 case USB_SPEED_SUPER: 1039 1001 if (usb_endpoint_xfer_int(&ep->desc) || 1040 - usb_endpoint_xfer_isoc(&ep->desc)) { 1041 - if (ep->desc.bInterval == 0) 1042 - interval = 0; 1043 - else 1044 - interval = ep->desc.bInterval - 1; 1045 - if (interval > 15) 1046 - interval = 15; 1047 - if (interval != ep->desc.bInterval + 1) 1048 - dev_warn(&udev->dev, "ep %#x - rounding interval to %d microframes\n", 1049 - ep->desc.bEndpointAddress, 1 << interval); 1002 + usb_endpoint_xfer_isoc(&ep->desc)) { 1003 + interval = xhci_parse_exponent_interval(udev, ep); 1050 1004 } 1051 1005 break; 1052 - /* Convert bInterval (in 1-255 frames) to microframes and round down to 1053 - * nearest power of 2. 1054 - */ 1006 + 1055 1007 case USB_SPEED_FULL: 1008 + if (usb_endpoint_xfer_int(&ep->desc)) { 1009 + interval = xhci_parse_exponent_interval(udev, ep); 1010 + break; 1011 + } 1012 + /* 1013 + * Fall through for isochronous endpoint interval decoding 1014 + * since it uses the same rules as low speed interrupt 1015 + * endpoints. 
1016 + */ 1017 + 1056 1018 case USB_SPEED_LOW: 1057 1019 if (usb_endpoint_xfer_int(&ep->desc) || 1058 - usb_endpoint_xfer_isoc(&ep->desc)) { 1059 - interval = fls(8*ep->desc.bInterval) - 1; 1060 - if (interval > 10) 1061 - interval = 10; 1062 - if (interval < 3) 1063 - interval = 3; 1064 - if ((1 << interval) != 8*ep->desc.bInterval) 1065 - dev_warn(&udev->dev, 1066 - "ep %#x - rounding interval" 1067 - " to %d microframes, " 1068 - "ep desc says %d microframes\n", 1069 - ep->desc.bEndpointAddress, 1070 - 1 << interval, 1071 - 8*ep->desc.bInterval); 1020 + usb_endpoint_xfer_isoc(&ep->desc)) { 1021 + 1022 + interval = xhci_parse_frame_interval(udev, ep); 1072 1023 } 1073 1024 break; 1025 + 1074 1026 default: 1075 1027 BUG(); 1076 1028 } ··· 1075 1041 * transaction opportunities per microframe", but that goes in the Max Burst 1076 1042 * endpoint context field. 1077 1043 */ 1078 - static inline u32 xhci_get_endpoint_mult(struct usb_device *udev, 1044 + static u32 xhci_get_endpoint_mult(struct usb_device *udev, 1079 1045 struct usb_host_endpoint *ep) 1080 1046 { 1081 1047 if (udev->speed != USB_SPEED_SUPER || ··· 1084 1050 return ep->ss_ep_comp.bmAttributes; 1085 1051 } 1086 1052 1087 - static inline u32 xhci_get_endpoint_type(struct usb_device *udev, 1053 + static u32 xhci_get_endpoint_type(struct usb_device *udev, 1088 1054 struct usb_host_endpoint *ep) 1089 1055 { 1090 1056 int in; ··· 1118 1084 * Basically, this is the maxpacket size, multiplied by the burst size 1119 1085 * and mult size. 1120 1086 */ 1121 - static inline u32 xhci_get_max_esit_payload(struct xhci_hcd *xhci, 1087 + static u32 xhci_get_max_esit_payload(struct xhci_hcd *xhci, 1122 1088 struct usb_device *udev, 1123 1089 struct usb_host_endpoint *ep) 1124 1090 { ··· 1761 1727 * found a similar duplicate. 1762 1728 */ 1763 1729 if (xhci->port_array[i] != major_revision && 1764 - xhci->port_array[i] != (u8) -1) { 1730 + xhci->port_array[i] != DUPLICATE_ENTRY) { 1765 1731 if (xhci->port_array[i] == 0x03) 1766 1732 xhci->num_usb3_ports--; 1767 1733 else 1768 1734 xhci->num_usb2_ports--; 1769 - xhci->port_array[i] = (u8) -1; 1735 + xhci->port_array[i] = DUPLICATE_ENTRY; 1770 1736 } 1771 1737 /* FIXME: Should we disable the port? */ 1772 1738 continue; ··· 1865 1831 for (i = 0; i < num_ports; i++) { 1866 1832 if (xhci->port_array[i] == 0x03 || 1867 1833 xhci->port_array[i] == 0 || 1868 - xhci->port_array[i] == -1) 1834 + xhci->port_array[i] == DUPLICATE_ENTRY) 1869 1835 continue; 1870 1836 1871 1837 xhci->usb2_ports[port_index] =
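The xhci-mem.c refactor pulls the two interval conversions into named helpers: exponent-style bInterval values are clamped to the 1..16 range the controller accepts, while frame-based values are converted to microframes and rounded down to a power of two. A small standalone sketch of the same math (fls() here is a portable stand-in for the kernel helper):

#include <stdio.h>

static int fls(unsigned int x)
{
    int r = 0;

    while (x) {
        x >>= 1;
        r++;
    }
    return r;                   /* fls(1) == 1, fls(8) == 4 */
}

static unsigned int clamp(unsigned int v, unsigned int lo, unsigned int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* bInterval is already an exponent: 2^(bInterval - 1) microframes */
static unsigned int parse_exponent_interval(unsigned int bInterval)
{
    return clamp(bInterval, 1, 16) - 1;
}

/* bInterval counts 1..255 frames: convert to microframes (x8) and round
 * down to the nearest power of two within the controller's 2^3..2^10 range */
static unsigned int parse_frame_interval(unsigned int bInterval)
{
    return clamp(fls(8 * bInterval) - 1, 3, 10);
}

int main(void)
{
    printf("%u\n", parse_exponent_interval(5)); /* 4 -> 16 microframes */
    printf("%u\n", parse_frame_interval(10));   /* 6 -> 64 microframes */
    return 0;
}

Note that in the hunk above, full-speed interrupt endpoints now go through the exponent helper rather than the frame-based one.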
+4
drivers/usb/host/xhci-pci.c
··· 114 114 if (pdev->vendor == PCI_VENDOR_ID_NEC) 115 115 xhci->quirks |= XHCI_NEC_HOST; 116 116 117 + /* AMD PLL quirk */ 118 + if (pdev->vendor == PCI_VENDOR_ID_AMD && usb_amd_find_chipset_info()) 119 + xhci->quirks |= XHCI_AMD_PLL_FIX; 120 + 117 121 /* Make sure the HC is halted. */ 118 122 retval = xhci_halt(xhci); 119 123 if (retval)
+134 -85
drivers/usb/host/xhci-ring.c
··· 93 93 /* Does this link TRB point to the first segment in a ring, 94 94 * or was the previous TRB the last TRB on the last segment in the ERST? 95 95 */ 96 - static inline bool last_trb_on_last_seg(struct xhci_hcd *xhci, struct xhci_ring *ring, 96 + static bool last_trb_on_last_seg(struct xhci_hcd *xhci, struct xhci_ring *ring, 97 97 struct xhci_segment *seg, union xhci_trb *trb) 98 98 { 99 99 if (ring == xhci->event_ring) ··· 107 107 * segment? I.e. would the updated event TRB pointer step off the end of the 108 108 * event seg? 109 109 */ 110 - static inline int last_trb(struct xhci_hcd *xhci, struct xhci_ring *ring, 110 + static int last_trb(struct xhci_hcd *xhci, struct xhci_ring *ring, 111 111 struct xhci_segment *seg, union xhci_trb *trb) 112 112 { 113 113 if (ring == xhci->event_ring) ··· 116 116 return (trb->link.control & TRB_TYPE_BITMASK) == TRB_TYPE(TRB_LINK); 117 117 } 118 118 119 - static inline int enqueue_is_link_trb(struct xhci_ring *ring) 119 + static int enqueue_is_link_trb(struct xhci_ring *ring) 120 120 { 121 121 struct xhci_link_trb *link = &ring->enqueue->link; 122 122 return ((link->control & TRB_TYPE_BITMASK) == TRB_TYPE(TRB_LINK)); ··· 592 592 ep->ep_state |= SET_DEQ_PENDING; 593 593 } 594 594 595 - static inline void xhci_stop_watchdog_timer_in_irq(struct xhci_hcd *xhci, 595 + static void xhci_stop_watchdog_timer_in_irq(struct xhci_hcd *xhci, 596 596 struct xhci_virt_ep *ep) 597 597 { 598 598 ep->ep_state &= ~EP_HALT_PENDING; ··· 619 619 620 620 /* Only giveback urb when this is the last td in urb */ 621 621 if (urb_priv->td_cnt == urb_priv->length) { 622 + if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) { 623 + xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs--; 624 + if (xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs == 0) { 625 + if (xhci->quirks & XHCI_AMD_PLL_FIX) 626 + usb_amd_quirk_pll_enable(); 627 + } 628 + } 622 629 usb_hcd_unlink_urb_from_ep(hcd, urb); 623 630 xhci_dbg(xhci, "Giveback %s URB %p\n", adjective, urb); 624 631 ··· 1216 1209 * Skip ports that don't have known speeds, or have duplicate 1217 1210 * Extended Capabilities port speed entries. 
1218 1211 */ 1219 - if (port_speed == 0 || port_speed == -1) 1212 + if (port_speed == 0 || port_speed == DUPLICATE_ENTRY) 1220 1213 continue; 1221 1214 1222 1215 /* ··· 1242 1235 u8 major_revision; 1243 1236 struct xhci_bus_state *bus_state; 1244 1237 u32 __iomem **port_array; 1238 + bool bogus_port_status = false; 1245 1239 1246 1240 /* Port status change events always have a successful completion code */ 1247 1241 if (GET_COMP_CODE(event->generic.field[2]) != COMP_SUCCESS) { ··· 1255 1247 max_ports = HCS_MAX_PORTS(xhci->hcs_params1); 1256 1248 if ((port_id <= 0) || (port_id > max_ports)) { 1257 1249 xhci_warn(xhci, "Invalid port id %d\n", port_id); 1250 + bogus_port_status = true; 1258 1251 goto cleanup; 1259 1252 } 1260 1253 ··· 1267 1258 xhci_warn(xhci, "Event for port %u not in " 1268 1259 "Extended Capabilities, ignoring.\n", 1269 1260 port_id); 1261 + bogus_port_status = true; 1270 1262 goto cleanup; 1271 1263 } 1272 - if (major_revision == (u8) -1) { 1264 + if (major_revision == DUPLICATE_ENTRY) { 1273 1265 xhci_warn(xhci, "Event for port %u duplicated in" 1274 1266 "Extended Capabilities, ignoring.\n", 1275 1267 port_id); 1268 + bogus_port_status = true; 1276 1269 goto cleanup; 1277 1270 } 1278 1271 ··· 1345 1334 cleanup: 1346 1335 /* Update event ring dequeue pointer before dropping the lock */ 1347 1336 inc_deq(xhci, xhci->event_ring, true); 1337 + 1338 + /* Don't make the USB core poll the roothub if we got a bad port status 1339 + * change event. Besides, at that point we can't tell which roothub 1340 + * (USB 2.0 or USB 3.0) to kick. 1341 + */ 1342 + if (bogus_port_status) 1343 + return; 1348 1344 1349 1345 spin_unlock(&xhci->lock); 1350 1346 /* Pass this up to the core */ ··· 1572 1554 1573 1555 urb_priv->td_cnt++; 1574 1556 /* Giveback the urb when all the tds are completed */ 1575 - if (urb_priv->td_cnt == urb_priv->length) 1557 + if (urb_priv->td_cnt == urb_priv->length) { 1576 1558 ret = 1; 1559 + if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) { 1560 + xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs--; 1561 + if (xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs 1562 + == 0) { 1563 + if (xhci->quirks & XHCI_AMD_PLL_FIX) 1564 + usb_amd_quirk_pll_enable(); 1565 + } 1566 + } 1567 + } 1577 1568 } 1578 1569 1579 1570 return ret; ··· 1702 1675 struct urb_priv *urb_priv; 1703 1676 int idx; 1704 1677 int len = 0; 1705 - int skip_td = 0; 1706 1678 union xhci_trb *cur_trb; 1707 1679 struct xhci_segment *cur_seg; 1680 + struct usb_iso_packet_descriptor *frame; 1708 1681 u32 trb_comp_code; 1682 + bool skip_td = false; 1709 1683 1710 1684 ep_ring = xhci_dma_to_transfer_ring(ep, event->buffer); 1711 1685 trb_comp_code = GET_COMP_CODE(event->transfer_len); 1712 1686 urb_priv = td->urb->hcpriv; 1713 1687 idx = urb_priv->td_cnt; 1688 + frame = &td->urb->iso_frame_desc[idx]; 1714 1689 1715 - if (ep->skip) { 1716 - /* The transfer is partly done */ 1717 - *status = -EXDEV; 1718 - td->urb->iso_frame_desc[idx].status = -EXDEV; 1719 - } else { 1720 - /* handle completion code */ 1721 - switch (trb_comp_code) { 1722 - case COMP_SUCCESS: 1723 - td->urb->iso_frame_desc[idx].status = 0; 1724 - xhci_dbg(xhci, "Successful isoc transfer!\n"); 1725 - break; 1726 - case COMP_SHORT_TX: 1727 - if (td->urb->transfer_flags & URB_SHORT_NOT_OK) 1728 - td->urb->iso_frame_desc[idx].status = 1729 - -EREMOTEIO; 1730 - else 1731 - td->urb->iso_frame_desc[idx].status = 0; 1732 - break; 1733 - case COMP_BW_OVER: 1734 - td->urb->iso_frame_desc[idx].status = -ECOMM; 1735 - skip_td = 1; 1736 - break; 1737 - case 
COMP_BUFF_OVER: 1738 - case COMP_BABBLE: 1739 - td->urb->iso_frame_desc[idx].status = -EOVERFLOW; 1740 - skip_td = 1; 1741 - break; 1742 - case COMP_STALL: 1743 - td->urb->iso_frame_desc[idx].status = -EPROTO; 1744 - skip_td = 1; 1745 - break; 1746 - case COMP_STOP: 1747 - case COMP_STOP_INVAL: 1748 - break; 1749 - default: 1750 - td->urb->iso_frame_desc[idx].status = -1; 1751 - break; 1752 - } 1690 + /* handle completion code */ 1691 + switch (trb_comp_code) { 1692 + case COMP_SUCCESS: 1693 + frame->status = 0; 1694 + xhci_dbg(xhci, "Successful isoc transfer!\n"); 1695 + break; 1696 + case COMP_SHORT_TX: 1697 + frame->status = td->urb->transfer_flags & URB_SHORT_NOT_OK ? 1698 + -EREMOTEIO : 0; 1699 + break; 1700 + case COMP_BW_OVER: 1701 + frame->status = -ECOMM; 1702 + skip_td = true; 1703 + break; 1704 + case COMP_BUFF_OVER: 1705 + case COMP_BABBLE: 1706 + frame->status = -EOVERFLOW; 1707 + skip_td = true; 1708 + break; 1709 + case COMP_STALL: 1710 + frame->status = -EPROTO; 1711 + skip_td = true; 1712 + break; 1713 + case COMP_STOP: 1714 + case COMP_STOP_INVAL: 1715 + break; 1716 + default: 1717 + frame->status = -1; 1718 + break; 1753 1719 } 1754 1720 1755 - /* calc actual length */ 1756 - if (ep->skip) { 1757 - td->urb->iso_frame_desc[idx].actual_length = 0; 1758 - /* Update ring dequeue pointer */ 1759 - while (ep_ring->dequeue != td->last_trb) 1760 - inc_deq(xhci, ep_ring, false); 1761 - inc_deq(xhci, ep_ring, false); 1762 - return finish_td(xhci, td, event_trb, event, ep, status, true); 1763 - } 1764 - 1765 - if (trb_comp_code == COMP_SUCCESS || skip_td == 1) { 1766 - td->urb->iso_frame_desc[idx].actual_length = 1767 - td->urb->iso_frame_desc[idx].length; 1768 - td->urb->actual_length += 1769 - td->urb->iso_frame_desc[idx].length; 1721 + if (trb_comp_code == COMP_SUCCESS || skip_td) { 1722 + frame->actual_length = frame->length; 1723 + td->urb->actual_length += frame->length; 1770 1724 } else { 1771 1725 for (cur_trb = ep_ring->dequeue, 1772 1726 cur_seg = ep_ring->deq_seg; cur_trb != event_trb; ··· 1763 1755 TRB_LEN(event->transfer_len); 1764 1756 1765 1757 if (trb_comp_code != COMP_STOP_INVAL) { 1766 - td->urb->iso_frame_desc[idx].actual_length = len; 1758 + frame->actual_length = len; 1767 1759 td->urb->actual_length += len; 1768 1760 } 1769 1761 } ··· 1772 1764 *status = 0; 1773 1765 1774 1766 return finish_td(xhci, td, event_trb, event, ep, status, false); 1767 + } 1768 + 1769 + static int skip_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td, 1770 + struct xhci_transfer_event *event, 1771 + struct xhci_virt_ep *ep, int *status) 1772 + { 1773 + struct xhci_ring *ep_ring; 1774 + struct urb_priv *urb_priv; 1775 + struct usb_iso_packet_descriptor *frame; 1776 + int idx; 1777 + 1778 + ep_ring = xhci_dma_to_transfer_ring(ep, event->buffer); 1779 + urb_priv = td->urb->hcpriv; 1780 + idx = urb_priv->td_cnt; 1781 + frame = &td->urb->iso_frame_desc[idx]; 1782 + 1783 + /* The transfer is partly done */ 1784 + *status = -EXDEV; 1785 + frame->status = -EXDEV; 1786 + 1787 + /* calc actual length */ 1788 + frame->actual_length = 0; 1789 + 1790 + /* Update ring dequeue pointer */ 1791 + while (ep_ring->dequeue != td->last_trb) 1792 + inc_deq(xhci, ep_ring, false); 1793 + inc_deq(xhci, ep_ring, false); 1794 + 1795 + return finish_td(xhci, td, NULL, event, ep, status, true); 1775 1796 } 1776 1797 1777 1798 /* ··· 2061 2024 } 2062 2025 2063 2026 td = list_entry(ep_ring->td_list.next, struct xhci_td, td_list); 2027 + 2064 2028 /* Is this a TRB in the currently executing TD? 
*/ 2065 2029 event_seg = trb_in_td(ep_ring->deq_seg, ep_ring->dequeue, 2066 2030 td->last_trb, event_dma); 2067 - if (event_seg && ep->skip) { 2031 + if (!event_seg) { 2032 + if (!ep->skip || 2033 + !usb_endpoint_xfer_isoc(&td->urb->ep->desc)) { 2034 + /* HC is busted, give up! */ 2035 + xhci_err(xhci, 2036 + "ERROR Transfer event TRB DMA ptr not " 2037 + "part of current TD\n"); 2038 + return -ESHUTDOWN; 2039 + } 2040 + 2041 + ret = skip_isoc_td(xhci, td, event, ep, &status); 2042 + goto cleanup; 2043 + } 2044 + 2045 + if (ep->skip) { 2068 2046 xhci_dbg(xhci, "Found td. Clear skip flag.\n"); 2069 2047 ep->skip = false; 2070 2048 } 2071 - if (!event_seg && 2072 - (!ep->skip || !usb_endpoint_xfer_isoc(&td->urb->ep->desc))) { 2073 - /* HC is busted, give up! */ 2074 - xhci_err(xhci, "ERROR Transfer event TRB DMA ptr not " 2075 - "part of current TD\n"); 2076 - return -ESHUTDOWN; 2077 - } 2078 2049 2079 - if (event_seg) { 2080 - event_trb = &event_seg->trbs[(event_dma - 2081 - event_seg->dma) / sizeof(*event_trb)]; 2082 - /* 2083 - * No-op TRB should not trigger interrupts. 2084 - * If event_trb is a no-op TRB, it means the 2085 - * corresponding TD has been cancelled. Just ignore 2086 - * the TD. 2087 - */ 2088 - if ((event_trb->generic.field[3] & TRB_TYPE_BITMASK) 2089 - == TRB_TYPE(TRB_TR_NOOP)) { 2090 - xhci_dbg(xhci, "event_trb is a no-op TRB. " 2091 - "Skip it\n"); 2092 - goto cleanup; 2093 - } 2050 + event_trb = &event_seg->trbs[(event_dma - event_seg->dma) / 2051 + sizeof(*event_trb)]; 2052 + /* 2053 + * No-op TRB should not trigger interrupts. 2054 + * If event_trb is a no-op TRB, it means the 2055 + * corresponding TD has been cancelled. Just ignore 2056 + * the TD. 2057 + */ 2058 + if ((event_trb->generic.field[3] & TRB_TYPE_BITMASK) 2059 + == TRB_TYPE(TRB_TR_NOOP)) { 2060 + xhci_dbg(xhci, 2061 + "event_trb is a no-op TRB. Skip it\n"); 2062 + goto cleanup; 2094 2063 } 2095 2064 2096 2065 /* Now update the urb's actual_length and give back to ··· 3168 3125 return -EINVAL; 3169 3126 } 3170 3127 } 3128 + 3129 + if (xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs == 0) { 3130 + if (xhci->quirks & XHCI_AMD_PLL_FIX) 3131 + usb_amd_quirk_pll_disable(); 3132 + } 3133 + xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs++; 3171 3134 3172 3135 giveback_first_trb(xhci, slot_id, ep_index, urb->stream_id, 3173 3136 start_cycle, start_trb);
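Several of the xhci-ring.c hunks maintain a count of in-flight isochronous requests so the AMD PLL quirk is only toggled at the edges: disabled when the first isoc transfer is queued, re-enabled when the last one completes. Reduced to its core, the counting looks roughly like this (the quirk calls are stubbed out):

#include <stdio.h>

static int isoc_reqs;

static void quirk_pll_disable(void) { puts("PLL quirk: disable"); }
static void quirk_pll_enable(void)  { puts("PLL quirk: enable"); }

static void isoc_urb_queued(void)
{
    if (isoc_reqs == 0)
        quirk_pll_disable();    /* first isoc transfer going out */
    isoc_reqs++;
}

static void isoc_urb_completed(void)
{
    isoc_reqs--;
    if (isoc_reqs == 0)
        quirk_pll_enable();     /* no isoc traffic left */
}

int main(void)
{
    isoc_urb_queued();
    isoc_urb_queued();
    isoc_urb_completed();
    isoc_urb_completed();       /* only the outer pair toggles the quirk */
    return 0;
}

In the driver the counter is the HCD's bandwidth_isoc_reqs field, decremented both on normal giveback and when a cancelled TD is handed back.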
+18 -5
drivers/usb/host/xhci.c
··· 550 550 del_timer_sync(&xhci->event_ring_timer); 551 551 #endif 552 552 553 + if (xhci->quirks & XHCI_AMD_PLL_FIX) 554 + usb_amd_dev_put(); 555 + 553 556 xhci_dbg(xhci, "// Disabling event ring interrupts\n"); 554 557 temp = xhci_readl(xhci, &xhci->op_regs->status); 555 558 xhci_writel(xhci, temp & ~STS_EINT, &xhci->op_regs->status); ··· 774 771 775 772 /* If restore operation fails, re-initialize the HC during resume */ 776 773 if ((temp & STS_SRE) || hibernated) { 777 - usb_root_hub_lost_power(hcd->self.root_hub); 774 + /* Let the USB core know _both_ roothubs lost power. */ 775 + usb_root_hub_lost_power(xhci->main_hcd->self.root_hub); 776 + usb_root_hub_lost_power(xhci->shared_hcd->self.root_hub); 778 777 779 778 xhci_dbg(xhci, "Stop HCD\n"); 780 779 xhci_halt(xhci); ··· 2391 2386 /* Everything but endpoint 0 is disabled, so free or cache the rings. */ 2392 2387 last_freed_endpoint = 1; 2393 2388 for (i = 1; i < 31; ++i) { 2394 - if (!virt_dev->eps[i].ring) 2395 - continue; 2396 - xhci_free_or_cache_endpoint_ring(xhci, virt_dev, i); 2397 - last_freed_endpoint = i; 2389 + struct xhci_virt_ep *ep = &virt_dev->eps[i]; 2390 + 2391 + if (ep->ep_state & EP_HAS_STREAMS) { 2392 + xhci_free_stream_info(xhci, ep->stream_info); 2393 + ep->stream_info = NULL; 2394 + ep->ep_state &= ~EP_HAS_STREAMS; 2395 + } 2396 + 2397 + if (ep->ring) { 2398 + xhci_free_or_cache_endpoint_ring(xhci, virt_dev, i); 2399 + last_freed_endpoint = i; 2400 + } 2398 2401 } 2399 2402 xhci_dbg(xhci, "Output context after successful reset device cmd:\n"); 2400 2403 xhci_dbg_ctx(xhci, virt_dev->out_ctx, last_freed_endpoint);
+8 -3
drivers/usb/host/xhci.h
··· 30 30 31 31 /* Code sharing between pci-quirks and xhci hcd */ 32 32 #include "xhci-ext-caps.h" 33 + #include "pci-quirks.h" 33 34 34 35 /* xHCI PCI Configuration Registers */ 35 36 #define XHCI_SBRN_OFFSET (0x60) ··· 233 232 * notification type that matches a bit set in this bit field. 234 233 */ 235 234 #define DEV_NOTE_MASK (0xffff) 236 - #define ENABLE_DEV_NOTE(x) (1 << x) 235 + #define ENABLE_DEV_NOTE(x) (1 << (x)) 237 236 /* Most of the device notification types should only be used for debug. 238 237 * SW does need to pay attention to function wake notifications. 239 238 */ ··· 348 347 #define PORT_DEV_REMOVE (1 << 30) 349 348 /* Initiate a warm port reset - complete when PORT_WRC is '1' */ 350 349 #define PORT_WR (1 << 31) 350 + 351 + /* We mark duplicate entries with -1 */ 352 + #define DUPLICATE_ENTRY ((u8)(-1)) 351 353 352 354 /* Port Power Management Status and Control - port_power_base bitmasks */ 353 355 /* Inactivity timer value for transitions into U1, in microseconds. ··· 605 601 #define EP_STATE_STOPPED 3 606 602 #define EP_STATE_ERROR 4 607 603 /* Mult - Max number of burtst within an interval, in EP companion desc. */ 608 - #define EP_MULT(p) ((p & 0x3) << 8) 604 + #define EP_MULT(p) (((p) & 0x3) << 8) 609 605 /* bits 10:14 are Max Primary Streams */ 610 606 /* bit 15 is Linear Stream Array */ 611 607 /* Interval - period between requests to an endpoint - 125u increments. */ 612 - #define EP_INTERVAL(p) ((p & 0xff) << 16) 608 + #define EP_INTERVAL(p) (((p) & 0xff) << 16) 613 609 #define EP_INTERVAL_TO_UFRAMES(p) (1 << (((p) >> 16) & 0xff)) 614 610 #define EP_MAXPSTREAMS_MASK (0x1f << 10) 615 611 #define EP_MAXPSTREAMS(p) (((p) << 10) & EP_MAXPSTREAMS_MASK) ··· 1280 1276 #define XHCI_LINK_TRB_QUIRK (1 << 0) 1281 1277 #define XHCI_RESET_EP_QUIRK (1 << 1) 1282 1278 #define XHCI_NEC_HOST (1 << 2) 1279 + #define XHCI_AMD_PLL_FIX (1 << 3) 1283 1280 /* There are two roothubs to keep track of bus suspend info for */ 1284 1281 struct xhci_bus_state bus_state[2]; 1285 1282 /* Is each xHCI roothub port a USB 3.0, USB 2.0, or USB 1.1 port? */
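Besides adding the DUPLICATE_ENTRY and XHCI_AMD_PLL_FIX definitions, the xhci.h hunks also add the missing parentheses around macro parameters. The difference only shows up when the argument is itself an expression, e.g.:

#include <stdio.h>

#define EP_MULT_BAD(p)  ((p & 0x3) << 8)
#define EP_MULT_GOOD(p) (((p) & 0x3) << 8)

int main(void)
{
    int x = 4;

    /* With the expression "x | 2" as the argument, '&' binds tighter
     * than '|', so the unparenthesized form masks the wrong value:
     *   bad : ((x | 2 & 0x3) << 8) == ((x | (2 & 0x3)) << 8) == 0x600
     *   good: (((x | 2) & 0x3) << 8)                         == 0x200
     */
    printf("0x%x 0x%x\n", EP_MULT_BAD(x | 2), EP_MULT_GOOD(x | 2));
    return 0;
}

(EP_MULT_BAD/EP_MULT_GOOD are made-up names for this illustration; the header keeps only the corrected form.)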
+3 -3
drivers/usb/musb/Kconfig
··· 14 14 select TWL4030_USB if MACH_OMAP_3430SDP 15 15 select TWL6030_USB if MACH_OMAP_4430SDP || MACH_OMAP4_PANDA 16 16 select USB_OTG_UTILS 17 - tristate 'Inventra Highspeed Dual Role Controller (TI, ADI, ...)' 17 + bool 'Inventra Highspeed Dual Role Controller (TI, ADI, ...)' 18 18 help 19 19 Say Y here if your system has a dual role high speed USB 20 20 controller based on the Mentor Graphics silicon IP. Then ··· 30 30 31 31 If you do not know what this is, please say N. 32 32 33 - To compile this driver as a module, choose M here; the 34 - module will be called "musb-hdrc". 33 + # To compile this driver as a module, choose M here; the 34 + # module will be called "musb-hdrc". 35 35 36 36 choice 37 37 prompt "Platform Glue Layer"
+24
drivers/usb/musb/blackfin.c
··· 21 21 #include <asm/cacheflush.h> 22 22 23 23 #include "musb_core.h" 24 + #include "musbhsdma.h" 24 25 #include "blackfin.h" 25 26 26 27 struct bfin_glue { ··· 333 332 return -EIO; 334 333 } 335 334 335 + static int bfin_musb_adjust_channel_params(struct dma_channel *channel, 336 + u16 packet_sz, u8 *mode, 337 + dma_addr_t *dma_addr, u32 *len) 338 + { 339 + struct musb_dma_channel *musb_channel = channel->private_data; 340 + 341 + /* 342 + * Anomaly 05000450 might cause data corruption when using DMA 343 + * MODE 1 transmits with short packet. So to work around this, 344 + * we truncate all MODE 1 transfers down to a multiple of the 345 + * max packet size, and then do the last short packet transfer 346 + * (if there is any) using MODE 0. 347 + */ 348 + if (ANOMALY_05000450) { 349 + if (musb_channel->transmit && *mode == 1) 350 + *len = *len - (*len % packet_sz); 351 + } 352 + 353 + return 0; 354 + } 355 + 336 356 static void bfin_musb_reg_init(struct musb *musb) 337 357 { 338 358 if (ANOMALY_05000346) { ··· 452 430 453 431 .vbus_status = bfin_musb_vbus_status, 454 432 .set_vbus = bfin_musb_set_vbus, 433 + 434 + .adjust_channel_params = bfin_musb_adjust_channel_params, 455 435 }; 456 436 457 437 static u64 bfin_dmamask = DMA_BIT_MASK(32);
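The new bfin_musb_adjust_channel_params() callback works around anomaly 05000450 by keeping DMA MODE 1 transmits a whole number of max-size packets and leaving any trailing short packet to MODE 0. The length adjustment itself is just a truncation, e.g.:

#include <stdio.h>

/* Illustration only: truncate a MODE 1 transmit length down to a
 * multiple of the endpoint's max packet size. */
static unsigned int mode1_tx_len(unsigned int len, unsigned int packet_sz)
{
    return len - (len % packet_sz);
}

int main(void)
{
    /* a 1500-byte transfer with 512-byte packets: MODE 1 moves 1024
     * bytes, the remaining 476-byte short packet is sent via MODE 0 */
    printf("%u\n", mode1_tx_len(1500, 512));    /* 1024 */
    return 0;
}

The hook itself is wired up through the adjust_channel_params operation added to musb_platform_ops further down.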
+16 -11
drivers/usb/musb/cppi_dma.c
··· 597 597 length = min(n_bds * maxpacket, length); 598 598 } 599 599 600 - DBG(4, "TX DMA%d, pktSz %d %s bds %d dma 0x%x len %u\n", 600 + DBG(4, "TX DMA%d, pktSz %d %s bds %d dma 0x%llx len %u\n", 601 601 tx->index, 602 602 maxpacket, 603 603 rndis ? "rndis" : "transparent", 604 604 n_bds, 605 - addr, length); 605 + (unsigned long long)addr, length); 606 606 607 607 cppi_rndis_update(tx, 0, musb->ctrl_base, rndis); 608 608 ··· 820 820 length = min(n_bds * maxpacket, length); 821 821 822 822 DBG(4, "RX DMA%d seg, maxp %d %s bds %d (cnt %d) " 823 - "dma 0x%x len %u %u/%u\n", 823 + "dma 0x%llx len %u %u/%u\n", 824 824 rx->index, maxpacket, 825 825 onepacket 826 826 ? (is_rndis ? "rndis" : "onepacket") ··· 829 829 musb_readl(tibase, 830 830 DAVINCI_RXCPPI_BUFCNT0_REG + (rx->index * 4)) 831 831 & 0xffff, 832 - addr, length, rx->channel.actual_len, rx->buf_len); 832 + (unsigned long long)addr, length, 833 + rx->channel.actual_len, rx->buf_len); 833 834 834 835 /* only queue one segment at a time, since the hardware prevents 835 836 * correct queue shutdown after unexpected short packets ··· 1040 1039 if (!completed && (bd->hw_options & CPPI_OWN_SET)) 1041 1040 break; 1042 1041 1043 - DBG(5, "C/RXBD %08x: nxt %08x buf %08x " 1042 + DBG(5, "C/RXBD %llx: nxt %08x buf %08x " 1044 1043 "off.len %08x opt.len %08x (%d)\n", 1045 - bd->dma, bd->hw_next, bd->hw_bufp, 1044 + (unsigned long long)bd->dma, bd->hw_next, bd->hw_bufp, 1046 1045 bd->hw_off_len, bd->hw_options, 1047 1046 rx->channel.actual_len); 1048 1047 ··· 1112 1111 musb_ep_select(cppi->mregs, rx->index + 1); 1113 1112 csr = musb_readw(regs, MUSB_RXCSR); 1114 1113 if (csr & MUSB_RXCSR_DMAENAB) { 1115 - DBG(4, "list%d %p/%p, last %08x%s, csr %04x\n", 1114 + DBG(4, "list%d %p/%p, last %llx%s, csr %04x\n", 1116 1115 rx->index, 1117 1116 rx->head, rx->tail, 1118 1117 rx->last_processed 1119 - ? rx->last_processed->dma 1118 + ? (unsigned long long) 1119 + rx->last_processed->dma 1120 1120 : 0, 1121 1121 completed ? ", completed" : "", 1122 1122 csr); ··· 1169 1167 tx = musb_readl(tibase, DAVINCI_TXCPPI_MASKED_REG); 1170 1168 rx = musb_readl(tibase, DAVINCI_RXCPPI_MASKED_REG); 1171 1169 1172 - if (!tx && !rx) 1170 + if (!tx && !rx) { 1171 + if (cppi->irq) 1172 + spin_unlock_irqrestore(&musb->lock, flags); 1173 1173 return IRQ_NONE; 1174 + } 1174 1175 1175 1176 DBG(4, "CPPI IRQ Tx%x Rx%x\n", tx, rx); 1176 1177 ··· 1204 1199 */ 1205 1200 if (NULL == bd) { 1206 1201 DBG(1, "null BD\n"); 1207 - tx_ram->tx_complete = 0; 1202 + musb_writel(&tx_ram->tx_complete, 0, 0); 1208 1203 continue; 1209 1204 } 1210 1205 ··· 1457 1452 * compare mode by writing 1 to the tx_complete register. 1458 1453 */ 1459 1454 cppi_reset_tx(tx_ram, 1); 1460 - cppi_ch->head = 0; 1455 + cppi_ch->head = NULL; 1461 1456 musb_writel(&tx_ram->tx_complete, 0, 1); 1462 1457 cppi_dump_tx(5, cppi_ch, " (done teardown)"); 1463 1458
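Most of the cppi_dma.c changes are printf-format fixes: dma_addr_t can be 32 or 64 bits wide depending on the configuration, so the portable idiom is to cast to unsigned long long and print with %llx, as in this trivial sketch:

#include <stdio.h>
#include <stdint.h>

typedef uint64_t dma_addr_t;    /* stand-in; 32-bit on many configs */

int main(void)
{
    dma_addr_t addr = 0x1ffff0000ULL;

    printf("dma 0x%llx\n", (unsigned long long)addr);
    return 0;
}

The remaining hunks are behavioral: dropping the lock before returning IRQ_NONE and clearing tx_complete through musb_writel() instead of a direct store.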
+2
drivers/usb/musb/musb_core.c
··· 1030 1030 struct musb *musb = dev_to_musb(&pdev->dev); 1031 1031 unsigned long flags; 1032 1032 1033 + pm_runtime_get_sync(musb->controller); 1033 1034 spin_lock_irqsave(&musb->lock, flags); 1034 1035 musb_platform_disable(musb); 1035 1036 musb_generic_disable(musb); ··· 1041 1040 musb_writeb(musb->mregs, MUSB_DEVCTL, 0); 1042 1041 musb_platform_exit(musb); 1043 1042 1043 + pm_runtime_put(musb->controller); 1044 1044 /* FIXME power down */ 1045 1045 } 1046 1046
+5
drivers/usb/musb/musb_core.h
··· 261 261 * @try_ilde: tries to idle the IP 262 262 * @vbus_status: returns vbus status if possible 263 263 * @set_vbus: forces vbus status 264 + * @channel_program: pre check for standard dma channel_program func 264 265 */ 265 266 struct musb_platform_ops { 266 267 int (*init)(struct musb *musb); ··· 275 274 276 275 int (*vbus_status)(struct musb *musb); 277 276 void (*set_vbus)(struct musb *musb, int on); 277 + 278 + int (*adjust_channel_params)(struct dma_channel *channel, 279 + u16 packet_sz, u8 *mode, 280 + dma_addr_t *dma_addr, u32 *len); 278 281 }; 279 282 280 283 /*
+2 -2
drivers/usb/musb/musb_gadget.c
··· 535 535 is_dma = 1; 536 536 csr |= MUSB_TXCSR_P_WZC_BITS; 537 537 csr &= ~(MUSB_TXCSR_DMAENAB | MUSB_TXCSR_P_UNDERRUN | 538 - MUSB_TXCSR_TXPKTRDY); 538 + MUSB_TXCSR_TXPKTRDY | MUSB_TXCSR_AUTOSET); 539 539 musb_writew(epio, MUSB_TXCSR, csr); 540 540 /* Ensure writebuffer is empty. */ 541 541 csr = musb_readw(epio, MUSB_TXCSR); ··· 1296 1296 } 1297 1297 1298 1298 /* if the hardware doesn't have the request, easy ... */ 1299 - if (musb_ep->req_list.next != &request->list || musb_ep->busy) 1299 + if (musb_ep->req_list.next != &req->list || musb_ep->busy) 1300 1300 musb_g_giveback(musb_ep, request, -ECONNRESET); 1301 1301 1302 1302 /* ... else abort the dma transfer ... */
+8
drivers/usb/musb/musbhsdma.c
··· 169 169 BUG_ON(channel->status == MUSB_DMA_STATUS_UNKNOWN || 170 170 channel->status == MUSB_DMA_STATUS_BUSY); 171 171 172 + /* Let targets check/tweak the arguments */ 173 + if (musb->ops->adjust_channel_params) { 174 + int ret = musb->ops->adjust_channel_params(channel, 175 + packet_sz, &mode, &dma_addr, &len); 176 + if (ret) 177 + return ret; 178 + } 179 + 172 180 /* 173 181 * The DMA engine in RTL1.8 and above cannot handle 174 182 * DMA addresses that are not aligned to a 4 byte boundary.
+2 -1
drivers/usb/musb/omap2430.c
··· 259 259 case USB_EVENT_VBUS: 260 260 DBG(4, "VBUS Connect\n"); 261 261 262 + #ifdef CONFIG_USB_GADGET_MUSB_HDRC 262 263 if (musb->gadget_driver) 263 264 pm_runtime_get_sync(musb->controller); 264 - 265 + #endif 265 266 otg_init(musb->xceiv); 266 267 break; 267 268
+2
drivers/usb/musb/ux500.c
··· 93 93 } 94 94 95 95 musb->dev.parent = &pdev->dev; 96 + musb->dev.dma_mask = pdev->dev.dma_mask; 97 + musb->dev.coherent_dma_mask = pdev->dev.coherent_dma_mask; 96 98 97 99 glue->dev = &pdev->dev; 98 100 glue->musb = musb;
+5
drivers/usb/serial/ftdi_sio.c
··· 151 151 * /sys/bus/usb/ftdi_sio/new_id, then send patch/report! 152 152 */ 153 153 static struct usb_device_id id_table_combined [] = { 154 + { USB_DEVICE(FTDI_VID, FTDI_CTI_MINI_PID) }, 155 + { USB_DEVICE(FTDI_VID, FTDI_CTI_NANO_PID) }, 154 156 { USB_DEVICE(FTDI_VID, FTDI_AMC232_PID) }, 155 157 { USB_DEVICE(FTDI_VID, FTDI_CANUSB_PID) }, 156 158 { USB_DEVICE(FTDI_VID, FTDI_CANDAPTER_PID) }, ··· 527 525 { USB_DEVICE(SEALEVEL_VID, SEALEVEL_2803_8_PID) }, 528 526 { USB_DEVICE(IDTECH_VID, IDTECH_IDT1221U_PID) }, 529 527 { USB_DEVICE(OCT_VID, OCT_US101_PID) }, 528 + { USB_DEVICE(OCT_VID, OCT_DK201_PID) }, 530 529 { USB_DEVICE(FTDI_VID, FTDI_HE_TIRA1_PID), 531 530 .driver_info = (kernel_ulong_t)&ftdi_HE_TIRA1_quirk }, 532 531 { USB_DEVICE(FTDI_VID, FTDI_USB_UIRT_PID), ··· 790 787 { USB_DEVICE(FTDI_VID, MARVELL_OPENRD_PID), 791 788 .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, 792 789 { USB_DEVICE(FTDI_VID, HAMEG_HO820_PID) }, 790 + { USB_DEVICE(FTDI_VID, HAMEG_HO720_PID) }, 791 + { USB_DEVICE(FTDI_VID, HAMEG_HO730_PID) }, 793 792 { USB_DEVICE(FTDI_VID, HAMEG_HO870_PID) }, 794 793 { USB_DEVICE(FTDI_VID, MJSG_GENERIC_PID) }, 795 794 { USB_DEVICE(FTDI_VID, MJSG_SR_RADIO_PID) },
+12
drivers/usb/serial/ftdi_sio_ids.h
··· 300 300 * Hameg HO820 and HO870 interface (using VID 0x0403) 301 301 */ 302 302 #define HAMEG_HO820_PID 0xed74 303 + #define HAMEG_HO730_PID 0xed73 304 + #define HAMEG_HO720_PID 0xed72 303 305 #define HAMEG_HO870_PID 0xed71 304 306 305 307 /* ··· 574 572 /* Note: OCT US101 is also rebadged as Dick Smith Electronics (NZ) XH6381 */ 575 573 /* Also rebadged as Dick Smith Electronics (Aus) XH6451 */ 576 574 /* Also rebadged as SIIG Inc. model US2308 hardware version 1 */ 575 + #define OCT_DK201_PID 0x0103 /* OCT DK201 USB docking station */ 577 576 #define OCT_US101_PID 0x0421 /* OCT US101 USB to RS-232 */ 578 577 579 578 /* ··· 1143 1140 */ 1144 1141 #define QIHARDWARE_VID 0x20B7 1145 1142 #define MILKYMISTONE_JTAGSERIAL_PID 0x0713 1143 + 1144 + /* 1145 + * CTI GmbH RS485 Converter http://www.cti-lean.com/ 1146 + */ 1147 + /* USB-485-Mini*/ 1148 + #define FTDI_CTI_MINI_PID 0xF608 1149 + /* USB-Nano-485*/ 1150 + #define FTDI_CTI_NANO_PID 0xF60B 1151 + 1146 1152
+5
drivers/usb/serial/option.c
··· 407 407 /* ONDA MT825UP HSDPA 14.2 modem */ 408 408 #define ONDA_MT825UP 0x000b 409 409 410 + /* Samsung products */ 411 + #define SAMSUNG_VENDOR_ID 0x04e8 412 + #define SAMSUNG_PRODUCT_GT_B3730 0x6889 413 + 410 414 /* some devices interfaces need special handling due to a number of reasons */ 411 415 enum option_blacklist_reason { 412 416 OPTION_BLACKLIST_NONE = 0, ··· 972 968 { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD100) }, 973 969 { USB_DEVICE(CELOT_VENDOR_ID, CELOT_PRODUCT_CT680M) }, /* CT-650 CDMA 450 1xEVDO modem */ 974 970 { USB_DEVICE(ONDA_VENDOR_ID, ONDA_MT825UP) }, /* ONDA MT825UP modem */ 971 + { USB_DEVICE_AND_INTERFACE_INFO(SAMSUNG_VENDOR_ID, SAMSUNG_PRODUCT_GT_B3730, USB_CLASS_CDC_DATA, 0x00, 0x00) }, /* Samsung GT-B3730/GT-B3710 LTE USB modem.*/ 975 972 { } /* Terminating entry */ 976 973 }; 977 974 MODULE_DEVICE_TABLE(usb, option_ids);
+24 -7
drivers/usb/serial/qcserial.c
··· 111 111 ifnum = intf->desc.bInterfaceNumber; 112 112 dbg("This Interface = %d", ifnum); 113 113 114 - data = serial->private = kzalloc(sizeof(struct usb_wwan_intf_private), 114 + data = kzalloc(sizeof(struct usb_wwan_intf_private), 115 115 GFP_KERNEL); 116 116 if (!data) 117 117 return -ENOMEM; ··· 134 134 usb_endpoint_is_bulk_out(&intf->endpoint[1].desc)) { 135 135 dbg("QDL port found"); 136 136 137 - if (serial->interface->num_altsetting == 1) 138 - return 0; 137 + if (serial->interface->num_altsetting == 1) { 138 + retval = 0; /* Success */ 139 + break; 140 + } 139 141 140 142 retval = usb_set_interface(serial->dev, ifnum, 1); 141 143 if (retval < 0) { ··· 147 145 retval = -ENODEV; 148 146 kfree(data); 149 147 } 150 - return retval; 151 148 } 152 149 break; 153 150 ··· 167 166 "Could not set interface, error %d\n", 168 167 retval); 169 168 retval = -ENODEV; 169 + kfree(data); 170 170 } 171 171 } else if (ifnum == 2) { 172 172 dbg("Modem port found"); ··· 179 177 retval = -ENODEV; 180 178 kfree(data); 181 179 } 182 - return retval; 183 180 } else if (ifnum==3) { 184 181 /* 185 182 * NMEA (serial line 9600 8N1) ··· 192 191 "Could not set interface, error %d\n", 193 192 retval); 194 193 retval = -ENODEV; 194 + kfree(data); 195 195 } 196 196 } 197 197 break; ··· 201 199 dev_err(&serial->dev->dev, 202 200 "unknown number of interfaces: %d\n", nintf); 203 201 kfree(data); 204 - return -ENODEV; 202 + retval = -ENODEV; 205 203 } 206 204 205 + /* Set serial->private if not returning -ENODEV */ 206 + if (retval != -ENODEV) 207 + usb_set_serial_data(serial, data); 207 208 return retval; 209 + } 210 + 211 + static void qc_release(struct usb_serial *serial) 212 + { 213 + struct usb_wwan_intf_private *priv = usb_get_serial_data(serial); 214 + 215 + dbg("%s", __func__); 216 + 217 + /* Call usb_wwan release & free the private data allocated in qcprobe */ 218 + usb_wwan_release(serial); 219 + usb_set_serial_data(serial, NULL); 220 + kfree(priv); 208 221 } 209 222 210 223 static struct usb_serial_driver qcdevice = { ··· 239 222 .chars_in_buffer = usb_wwan_chars_in_buffer, 240 223 .attach = usb_wwan_startup, 241 224 .disconnect = usb_wwan_disconnect, 242 - .release = usb_wwan_release, 225 + .release = qc_release, 243 226 #ifdef CONFIG_PM 244 227 .suspend = usb_wwan_suspend, 245 228 .resume = usb_wwan_resume,
+2 -4
drivers/xen/events.c
··· 912 912 unsigned long irqflags, 913 913 const char *devname, void *dev_id) 914 914 { 915 - unsigned int irq; 916 - int retval; 915 + int irq, retval; 917 916 918 917 irq = bind_evtchn_to_irq(evtchn); 919 918 if (irq < 0) ··· 954 955 irq_handler_t handler, 955 956 unsigned long irqflags, const char *devname, void *dev_id) 956 957 { 957 - unsigned int irq; 958 - int retval; 958 + int irq, retval; 959 959 960 960 irq = bind_virq_to_irq(virq, cpu); 961 961 if (irq < 0)
+3 -3
drivers/xen/manage.c
··· 61 61 xen_mm_unpin_all(); 62 62 } 63 63 64 - #ifdef CONFIG_HIBERNATION 64 + #ifdef CONFIG_HIBERNATE_CALLBACKS 65 65 static int xen_suspend(void *data) 66 66 { 67 67 struct suspend_info *si = data; ··· 173 173 #endif 174 174 shutting_down = SHUTDOWN_INVALID; 175 175 } 176 - #endif /* CONFIG_HIBERNATION */ 176 + #endif /* CONFIG_HIBERNATE_CALLBACKS */ 177 177 178 178 struct shutdown_handler { 179 179 const char *command; ··· 202 202 { "poweroff", do_poweroff }, 203 203 { "halt", do_poweroff }, 204 204 { "reboot", do_reboot }, 205 - #ifdef CONFIG_HIBERNATION 205 + #ifdef CONFIG_HIBERNATE_CALLBACKS 206 206 { "suspend", do_suspend }, 207 207 #endif 208 208 {NULL, NULL},
+2 -13
fs/9p/fid.c
··· 286 286 287 287 struct p9_fid *v9fs_writeback_fid(struct dentry *dentry) 288 288 { 289 - int err, flags; 289 + int err; 290 290 struct p9_fid *fid; 291 - struct v9fs_session_info *v9ses; 292 291 293 - v9ses = v9fs_dentry2v9ses(dentry); 294 292 fid = v9fs_fid_clone_with_uid(dentry, 0); 295 293 if (IS_ERR(fid)) 296 294 goto error_out; ··· 297 299 * dirty pages. We always request for the open fid in read-write 298 300 * mode so that a partial page write which result in page 299 301 * read can work. 300 - * 301 - * we don't have a tsyncfs operation for older version 302 - * of protocol. So make sure the write back fid is 303 - * opened in O_SYNC mode. 304 302 */ 305 - if (!v9fs_proto_dotl(v9ses)) 306 - flags = O_RDWR | O_SYNC; 307 - else 308 - flags = O_RDWR; 309 - 310 - err = p9_client_open(fid, flags); 303 + err = p9_client_open(fid, O_RDWR); 311 304 if (err < 0) { 312 305 p9_client_clunk(fid); 313 306 fid = ERR_PTR(err);
-1
fs/9p/v9fs.h
··· 116 116 struct list_head slist; /* list of sessions registered with v9fs */ 117 117 struct backing_dev_info bdi; 118 118 struct rw_semaphore rename_sem; 119 - struct p9_fid *root_fid; /* Used for file system sync */ 120 119 }; 121 120 122 121 /* cache_validity flags */
+3 -1
fs/9p/vfs_dentry.c
··· 126 126 retval = v9fs_refresh_inode_dotl(fid, inode); 127 127 else 128 128 retval = v9fs_refresh_inode(fid, inode); 129 - if (retval <= 0) 129 + if (retval == -ENOENT) 130 + return 0; 131 + if (retval < 0) 130 132 return retval; 131 133 } 132 134 out_valid:
+1 -1
fs/9p/vfs_inode_dotl.c
··· 811 811 fid = v9fs_fid_lookup(dentry); 812 812 if (IS_ERR(fid)) { 813 813 __putname(link); 814 - link = ERR_PTR(PTR_ERR(fid)); 814 + link = ERR_CAST(fid); 815 815 goto ndset; 816 816 } 817 817 retval = p9_client_readlink(fid, &target);
+56 -24
fs/9p/vfs_super.c
··· 154 154 retval = PTR_ERR(inode); 155 155 goto release_sb; 156 156 } 157 + 157 158 root = d_alloc_root(inode); 158 159 if (!root) { 159 160 iput(inode); ··· 186 185 p9stat_free(st); 187 186 kfree(st); 188 187 } 189 - v9fs_fid_add(root, fid); 190 188 retval = v9fs_get_acl(inode, fid); 191 189 if (retval) 192 190 goto release_sb; 193 - /* 194 - * Add the root fid to session info. This is used 195 - * for file system sync. We want a cloned fid here 196 - * so that we can do a sync_filesystem after a 197 - * shrink_dcache_for_umount 198 - */ 199 - v9ses->root_fid = v9fs_fid_clone(root); 200 - if (IS_ERR(v9ses->root_fid)) { 201 - retval = PTR_ERR(v9ses->root_fid); 202 - goto release_sb; 203 - } 191 + v9fs_fid_add(root, fid); 204 192 205 193 P9_DPRINTK(P9_DEBUG_VFS, " simple set mount, return 0\n"); 206 194 return dget(sb->s_root); ··· 200 210 v9fs_session_close(v9ses); 201 211 kfree(v9ses); 202 212 return ERR_PTR(retval); 213 + 203 214 release_sb: 204 215 /* 205 - * we will do the session_close and root dentry 206 - * release in the below call. 216 + * we will do the session_close and root dentry release 217 + * in the below call. But we need to clunk fid, because we haven't 218 + * attached the fid to dentry so it won't get clunked 219 + * automatically. 207 220 */ 221 + p9_client_clunk(fid); 208 222 deactivate_locked_super(sb); 209 223 return ERR_PTR(retval); 210 224 } ··· 226 232 P9_DPRINTK(P9_DEBUG_VFS, " %p\n", s); 227 233 228 234 kill_anon_super(s); 229 - p9_client_clunk(v9ses->root_fid); 235 + 230 236 v9fs_session_cancel(v9ses); 231 237 v9fs_session_close(v9ses); 232 238 kfree(v9ses); ··· 279 285 return res; 280 286 } 281 287 282 - static int v9fs_sync_fs(struct super_block *sb, int wait) 283 - { 284 - struct v9fs_session_info *v9ses = sb->s_fs_info; 285 - 286 - P9_DPRINTK(P9_DEBUG_VFS, "v9fs_sync_fs: super_block %p\n", sb); 287 - return p9_client_sync_fs(v9ses->root_fid); 288 - } 289 - 290 288 static int v9fs_drop_inode(struct inode *inode) 291 289 { 292 290 struct v9fs_session_info *v9ses; ··· 293 307 return 1; 294 308 } 295 309 310 + static int v9fs_write_inode(struct inode *inode, 311 + struct writeback_control *wbc) 312 + { 313 + int ret; 314 + struct p9_wstat wstat; 315 + struct v9fs_inode *v9inode; 316 + /* 317 + * send an fsync request to server irrespective of 318 + * wbc->sync_mode. 319 + */ 320 + P9_DPRINTK(P9_DEBUG_VFS, "%s: inode %p\n", __func__, inode); 321 + v9inode = V9FS_I(inode); 322 + if (!v9inode->writeback_fid) 323 + return 0; 324 + v9fs_blank_wstat(&wstat); 325 + 326 + ret = p9_client_wstat(v9inode->writeback_fid, &wstat); 327 + if (ret < 0) { 328 + __mark_inode_dirty(inode, I_DIRTY_DATASYNC); 329 + return ret; 330 + } 331 + return 0; 332 + } 333 + 334 + static int v9fs_write_inode_dotl(struct inode *inode, 335 + struct writeback_control *wbc) 336 + { 337 + int ret; 338 + struct v9fs_inode *v9inode; 339 + /* 340 + * send an fsync request to server irrespective of 341 + * wbc->sync_mode. 
342 + */ 343 + P9_DPRINTK(P9_DEBUG_VFS, "%s: inode %p\n", __func__, inode); 344 + v9inode = V9FS_I(inode); 345 + if (!v9inode->writeback_fid) 346 + return 0; 347 + ret = p9_client_fsync(v9inode->writeback_fid, 0); 348 + if (ret < 0) { 349 + __mark_inode_dirty(inode, I_DIRTY_DATASYNC); 350 + return ret; 351 + } 352 + return 0; 353 + } 354 + 296 355 static const struct super_operations v9fs_super_ops = { 297 356 .alloc_inode = v9fs_alloc_inode, 298 357 .destroy_inode = v9fs_destroy_inode, ··· 345 314 .evict_inode = v9fs_evict_inode, 346 315 .show_options = generic_show_options, 347 316 .umount_begin = v9fs_umount_begin, 317 + .write_inode = v9fs_write_inode, 348 318 }; 349 319 350 320 static const struct super_operations v9fs_super_ops_dotl = { 351 321 .alloc_inode = v9fs_alloc_inode, 352 322 .destroy_inode = v9fs_destroy_inode, 353 - .sync_fs = v9fs_sync_fs, 354 323 .statfs = v9fs_statfs, 355 324 .drop_inode = v9fs_drop_inode, 356 325 .evict_inode = v9fs_evict_inode, 357 326 .show_options = generic_show_options, 358 327 .umount_begin = v9fs_umount_begin, 328 + .write_inode = v9fs_write_inode_dotl, 359 329 }; 360 330 361 331 struct file_system_type v9fs_fs_type = {
+5 -1
fs/binfmt_elf.c
··· 941 941 current->mm->start_stack = bprm->p; 942 942 943 943 #ifdef arch_randomize_brk 944 - if ((current->flags & PF_RANDOMIZE) && (randomize_va_space > 1)) 944 + if ((current->flags & PF_RANDOMIZE) && (randomize_va_space > 1)) { 945 945 current->mm->brk = current->mm->start_brk = 946 946 arch_randomize_brk(current->mm); 947 + #ifdef CONFIG_COMPAT_BRK 948 + current->brk_randomized = 1; 949 + #endif 950 + } 947 951 #endif 948 952 949 953 if (current->personality & MMAP_PAGE_ZERO) {
+5 -4
fs/btrfs/acl.c
··· 178 178 179 179 if (value) { 180 180 acl = posix_acl_from_xattr(value, size); 181 - if (acl == NULL) { 182 - value = NULL; 183 - size = 0; 181 + if (acl) { 182 + ret = posix_acl_valid(acl); 183 + if (ret) 184 + goto out; 184 185 } else if (IS_ERR(acl)) { 185 186 return PTR_ERR(acl); 186 187 } 187 188 } 188 189 189 190 ret = btrfs_set_acl(NULL, dentry->d_inode, acl, type); 190 - 191 + out: 191 192 posix_acl_release(acl); 192 193 193 194 return ret;
+8 -1
fs/btrfs/ctree.h
··· 740 740 */ 741 741 unsigned long reservation_progress; 742 742 743 - int full; /* indicates that we cannot allocate any more 743 + int full:1; /* indicates that we cannot allocate any more 744 744 chunks for this space */ 745 + int chunk_alloc:1; /* set if we are allocating a chunk */ 746 + 745 747 int force_alloc; /* set if we need to force a chunk alloc for 746 748 this space */ 747 749 ··· 2578 2576 int btrfs_mark_extent_written(struct btrfs_trans_handle *trans, 2579 2577 struct inode *inode, u64 start, u64 end); 2580 2578 int btrfs_release_file(struct inode *inode, struct file *file); 2579 + void btrfs_drop_pages(struct page **pages, size_t num_pages); 2580 + int btrfs_dirty_pages(struct btrfs_root *root, struct inode *inode, 2581 + struct page **pages, size_t num_pages, 2582 + loff_t pos, size_t write_bytes, 2583 + struct extent_state **cached); 2581 2584 2582 2585 /* tree-defrag.c */ 2583 2586 int btrfs_defrag_leaves(struct btrfs_trans_handle *trans,
+1 -1
fs/btrfs/disk-io.c
··· 3057 3057 btrfs_destroy_pinned_extent(root, 3058 3058 root->fs_info->pinned_extents); 3059 3059 3060 - t->use_count = 0; 3060 + atomic_set(&t->use_count, 0); 3061 3061 list_del_init(&t->list); 3062 3062 memset(t, 0, sizeof(*t)); 3063 3063 kmem_cache_free(btrfs_transaction_cachep, t);
+98 -27
fs/btrfs/extent-tree.c
··· 33 33 #include "locking.h" 34 34 #include "free-space-cache.h" 35 35 36 + /* control flags for do_chunk_alloc's force field 37 + * CHUNK_ALLOC_NO_FORCE means to only allocate a chunk 38 + * if we really need one. 39 + * 40 + * CHUNK_ALLOC_FORCE means it must try to allocate one 41 + * 42 + * CHUNK_ALLOC_LIMITED means to only try and allocate one 43 + * if we have very few chunks already allocated. This is 44 + * used as part of the clustering code to help make sure 45 + * we have a good pool of storage to cluster in, without 46 + * filling the FS with empty chunks 47 + * 48 + */ 49 + enum { 50 + CHUNK_ALLOC_NO_FORCE = 0, 51 + CHUNK_ALLOC_FORCE = 1, 52 + CHUNK_ALLOC_LIMITED = 2, 53 + }; 54 + 36 55 static int update_block_group(struct btrfs_trans_handle *trans, 37 56 struct btrfs_root *root, 38 57 u64 bytenr, u64 num_bytes, int alloc); ··· 3038 3019 found->bytes_readonly = 0; 3039 3020 found->bytes_may_use = 0; 3040 3021 found->full = 0; 3041 - found->force_alloc = 0; 3022 + found->force_alloc = CHUNK_ALLOC_NO_FORCE; 3023 + found->chunk_alloc = 0; 3042 3024 *space_info = found; 3043 3025 list_add_rcu(&found->list, &info->space_info); 3044 3026 atomic_set(&found->caching_threads, 0); ··· 3170 3150 if (!data_sinfo->full && alloc_chunk) { 3171 3151 u64 alloc_target; 3172 3152 3173 - data_sinfo->force_alloc = 1; 3153 + data_sinfo->force_alloc = CHUNK_ALLOC_FORCE; 3174 3154 spin_unlock(&data_sinfo->lock); 3175 3155 alloc: 3176 3156 alloc_target = btrfs_get_alloc_profile(root, 1); ··· 3180 3160 3181 3161 ret = do_chunk_alloc(trans, root->fs_info->extent_root, 3182 3162 bytes + 2 * 1024 * 1024, 3183 - alloc_target, 0); 3163 + alloc_target, 3164 + CHUNK_ALLOC_NO_FORCE); 3184 3165 btrfs_end_transaction(trans, root); 3185 3166 if (ret < 0) { 3186 3167 if (ret != -ENOSPC) ··· 3260 3239 rcu_read_lock(); 3261 3240 list_for_each_entry_rcu(found, head, list) { 3262 3241 if (found->flags & BTRFS_BLOCK_GROUP_METADATA) 3263 - found->force_alloc = 1; 3242 + found->force_alloc = CHUNK_ALLOC_FORCE; 3264 3243 } 3265 3244 rcu_read_unlock(); 3266 3245 } 3267 3246 3268 3247 static int should_alloc_chunk(struct btrfs_root *root, 3269 - struct btrfs_space_info *sinfo, u64 alloc_bytes) 3248 + struct btrfs_space_info *sinfo, u64 alloc_bytes, 3249 + int force) 3270 3250 { 3271 3251 u64 num_bytes = sinfo->total_bytes - sinfo->bytes_readonly; 3252 + u64 num_allocated = sinfo->bytes_used + sinfo->bytes_reserved; 3272 3253 u64 thresh; 3273 3254 3274 - if (sinfo->bytes_used + sinfo->bytes_reserved + 3275 - alloc_bytes + 256 * 1024 * 1024 < num_bytes) 3255 + if (force == CHUNK_ALLOC_FORCE) 3256 + return 1; 3257 + 3258 + /* 3259 + * in limited mode, we want to have some free space up to 3260 + * about 1% of the FS size. 3261 + */ 3262 + if (force == CHUNK_ALLOC_LIMITED) { 3263 + thresh = btrfs_super_total_bytes(&root->fs_info->super_copy); 3264 + thresh = max_t(u64, 64 * 1024 * 1024, 3265 + div_factor_fine(thresh, 1)); 3266 + 3267 + if (num_bytes - num_allocated < thresh) 3268 + return 1; 3269 + } 3270 + 3271 + /* 3272 + * we have two similar checks here, one based on percentage 3273 + * and once based on a hard number of 256MB. The idea 3274 + * is that if we have a good amount of free 3275 + * room, don't allocate a chunk. 
A good mount is 3276 + * less than 80% utilized of the chunks we have allocated, 3277 + * or more than 256MB free 3278 + */ 3279 + if (num_allocated + alloc_bytes + 256 * 1024 * 1024 < num_bytes) 3276 3280 return 0; 3277 3281 3278 - if (sinfo->bytes_used + sinfo->bytes_reserved + 3279 - alloc_bytes < div_factor(num_bytes, 8)) 3282 + if (num_allocated + alloc_bytes < div_factor(num_bytes, 8)) 3280 3283 return 0; 3281 3284 3282 3285 thresh = btrfs_super_total_bytes(&root->fs_info->super_copy); 3286 + 3287 + /* 256MB or 5% of the FS */ 3283 3288 thresh = max_t(u64, 256 * 1024 * 1024, div_factor_fine(thresh, 5)); 3284 3289 3285 3290 if (num_bytes > thresh && sinfo->bytes_used < div_factor(num_bytes, 3)) 3286 3291 return 0; 3287 - 3288 3292 return 1; 3289 3293 } 3290 3294 ··· 3319 3273 { 3320 3274 struct btrfs_space_info *space_info; 3321 3275 struct btrfs_fs_info *fs_info = extent_root->fs_info; 3276 + int wait_for_alloc = 0; 3322 3277 int ret = 0; 3323 - 3324 - mutex_lock(&fs_info->chunk_mutex); 3325 3278 3326 3279 flags = btrfs_reduce_alloc_profile(extent_root, flags); 3327 3280 ··· 3332 3287 } 3333 3288 BUG_ON(!space_info); 3334 3289 3290 + again: 3335 3291 spin_lock(&space_info->lock); 3336 3292 if (space_info->force_alloc) 3337 - force = 1; 3293 + force = space_info->force_alloc; 3338 3294 if (space_info->full) { 3339 3295 spin_unlock(&space_info->lock); 3340 - goto out; 3296 + return 0; 3341 3297 } 3342 3298 3343 - if (!force && !should_alloc_chunk(extent_root, space_info, 3344 - alloc_bytes)) { 3299 + if (!should_alloc_chunk(extent_root, space_info, alloc_bytes, force)) { 3345 3300 spin_unlock(&space_info->lock); 3346 - goto out; 3301 + return 0; 3302 + } else if (space_info->chunk_alloc) { 3303 + wait_for_alloc = 1; 3304 + } else { 3305 + space_info->chunk_alloc = 1; 3347 3306 } 3307 + 3348 3308 spin_unlock(&space_info->lock); 3309 + 3310 + mutex_lock(&fs_info->chunk_mutex); 3311 + 3312 + /* 3313 + * The chunk_mutex is held throughout the entirety of a chunk 3314 + * allocation, so once we've acquired the chunk_mutex we know that the 3315 + * other guy is done and we need to recheck and see if we should 3316 + * allocate. 
3317 + */ 3318 + if (wait_for_alloc) { 3319 + mutex_unlock(&fs_info->chunk_mutex); 3320 + wait_for_alloc = 0; 3321 + goto again; 3322 + } 3349 3323 3350 3324 /* 3351 3325 * If we have mixed data/metadata chunks we want to make sure we keep ··· 3391 3327 space_info->full = 1; 3392 3328 else 3393 3329 ret = 1; 3394 - space_info->force_alloc = 0; 3330 + 3331 + space_info->force_alloc = CHUNK_ALLOC_NO_FORCE; 3332 + space_info->chunk_alloc = 0; 3395 3333 spin_unlock(&space_info->lock); 3396 - out: 3397 3334 mutex_unlock(&extent_root->fs_info->chunk_mutex); 3398 3335 return ret; 3399 3336 } ··· 5368 5303 5369 5304 if (allowed_chunk_alloc) { 5370 5305 ret = do_chunk_alloc(trans, root, num_bytes + 5371 - 2 * 1024 * 1024, data, 1); 5306 + 2 * 1024 * 1024, data, 5307 + CHUNK_ALLOC_LIMITED); 5372 5308 allowed_chunk_alloc = 0; 5373 5309 done_chunk_alloc = 1; 5374 - } else if (!done_chunk_alloc) { 5375 - space_info->force_alloc = 1; 5310 + } else if (!done_chunk_alloc && 5311 + space_info->force_alloc == CHUNK_ALLOC_NO_FORCE) { 5312 + space_info->force_alloc = CHUNK_ALLOC_LIMITED; 5376 5313 } 5377 5314 5378 5315 if (loop < LOOP_NO_EMPTY_SIZE) { ··· 5460 5393 */ 5461 5394 if (empty_size || root->ref_cows) 5462 5395 ret = do_chunk_alloc(trans, root->fs_info->extent_root, 5463 - num_bytes + 2 * 1024 * 1024, data, 0); 5396 + num_bytes + 2 * 1024 * 1024, data, 5397 + CHUNK_ALLOC_NO_FORCE); 5464 5398 5465 5399 WARN_ON(num_bytes < root->sectorsize); 5466 5400 ret = find_free_extent(trans, root, num_bytes, empty_size, ··· 5473 5405 num_bytes = num_bytes & ~(root->sectorsize - 1); 5474 5406 num_bytes = max(num_bytes, min_alloc_size); 5475 5407 do_chunk_alloc(trans, root->fs_info->extent_root, 5476 - num_bytes, data, 1); 5408 + num_bytes, data, CHUNK_ALLOC_FORCE); 5477 5409 goto again; 5478 5410 } 5479 5411 if (ret == -ENOSPC && btrfs_test_opt(root, ENOSPC_DEBUG)) { ··· 8177 8109 8178 8110 alloc_flags = update_block_group_flags(root, cache->flags); 8179 8111 if (alloc_flags != cache->flags) 8180 - do_chunk_alloc(trans, root, 2 * 1024 * 1024, alloc_flags, 1); 8112 + do_chunk_alloc(trans, root, 2 * 1024 * 1024, alloc_flags, 8113 + CHUNK_ALLOC_FORCE); 8181 8114 8182 8115 ret = set_block_group_ro(cache); 8183 8116 if (!ret) 8184 8117 goto out; 8185 8118 alloc_flags = get_alloc_profile(root, cache->space_info->flags); 8186 - ret = do_chunk_alloc(trans, root, 2 * 1024 * 1024, alloc_flags, 1); 8119 + ret = do_chunk_alloc(trans, root, 2 * 1024 * 1024, alloc_flags, 8120 + CHUNK_ALLOC_FORCE); 8187 8121 if (ret < 0) 8188 8122 goto out; 8189 8123 ret = set_block_group_ro(cache); ··· 8198 8128 struct btrfs_root *root, u64 type) 8199 8129 { 8200 8130 u64 alloc_flags = get_alloc_profile(root, type); 8201 - return do_chunk_alloc(trans, root, 2 * 1024 * 1024, alloc_flags, 1); 8131 + return do_chunk_alloc(trans, root, 2 * 1024 * 1024, alloc_flags, 8132 + CHUNK_ALLOC_FORCE); 8202 8133 } 8203 8134 8204 8135 /*
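The extent-tree.c hunks above replace the single force flag with three allocation levels and move the whole decision into should_alloc_chunk(). The following stand-alone C sketch mirrors that decision for illustration only: the level names and the 1%/64MB/256MB/80% thresholds follow the patch, but the total/allocated/fs_size inputs are hypothetical and the final metadata-ratio check is left out.

#include <stdio.h>
#include <stdint.h>

enum {
	CHUNK_ALLOC_NO_FORCE = 0,	/* only allocate if we really need to */
	CHUNK_ALLOC_FORCE    = 1,	/* always allocate */
	CHUNK_ALLOC_LIMITED  = 2,	/* allocate if the chunk pool is small */
};

/* integer helpers following the kernel's div_factor rounding */
static uint64_t div_factor(uint64_t num, int factor)      { return num * factor / 10; }
static uint64_t div_factor_fine(uint64_t num, int factor) { return num * factor / 100; }

/*
 * Sketch of the core of should_alloc_chunk(); total/allocated stand in for
 * the space_info counters and fs_size for the super block total bytes.
 */
static int should_alloc_chunk(uint64_t total, uint64_t allocated,
			      uint64_t alloc_bytes, uint64_t fs_size, int force)
{
	uint64_t thresh;

	if (force == CHUNK_ALLOC_FORCE)
		return 1;

	/* limited mode: keep roughly 1% (at least 64MB) of the FS unallocated */
	if (force == CHUNK_ALLOC_LIMITED) {
		thresh = div_factor_fine(fs_size, 1);
		if (thresh < 64ULL * 1024 * 1024)
			thresh = 64ULL * 1024 * 1024;
		if (total - allocated < thresh)
			return 1;
	}

	/* more than 256MB of slack left: no new chunk */
	if (allocated + alloc_bytes + 256ULL * 1024 * 1024 < total)
		return 0;

	/* existing chunks less than 80% utilized: no new chunk */
	if (allocated + alloc_bytes < div_factor(total, 8))
		return 0;

	return 1;
}

int main(void)
{
	const uint64_t MB = 1024ULL * 1024, GB = 1024 * MB;

	/* plenty of slack: default mode declines */
	printf("%d\n", should_alloc_chunk(10 * GB, 5 * GB, 256 * MB,
					  100 * GB, CHUNK_ALLOC_NO_FORCE));
	/* limited mode with almost nothing unallocated: allocate */
	printf("%d\n", should_alloc_chunk(10 * GB, 9 * GB + 800 * MB, 256 * MB,
					  100 * GB, CHUNK_ALLOC_LIMITED));
	return 0;
}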
+62 -20
fs/btrfs/extent_io.c
··· 690 690 } 691 691 } 692 692 693 + static void uncache_state(struct extent_state **cached_ptr) 694 + { 695 + if (cached_ptr && (*cached_ptr)) { 696 + struct extent_state *state = *cached_ptr; 697 + *cached_ptr = NULL; 698 + free_extent_state(state); 699 + } 700 + } 701 + 693 702 /* 694 703 * set some bits on a range in the tree. This may require allocations or 695 704 * sleeping, so the gfp mask is used to indicate what is allowed. ··· 949 940 } 950 941 951 942 int set_extent_uptodate(struct extent_io_tree *tree, u64 start, u64 end, 952 - gfp_t mask) 943 + struct extent_state **cached_state, gfp_t mask) 953 944 { 954 - return set_extent_bit(tree, start, end, EXTENT_UPTODATE, 0, NULL, 955 - NULL, mask); 945 + return set_extent_bit(tree, start, end, EXTENT_UPTODATE, 0, 946 + NULL, cached_state, mask); 956 947 } 957 948 958 949 static int clear_extent_uptodate(struct extent_io_tree *tree, u64 start, ··· 1021 1012 mask); 1022 1013 } 1023 1014 1024 - int unlock_extent(struct extent_io_tree *tree, u64 start, u64 end, 1025 - gfp_t mask) 1015 + int unlock_extent(struct extent_io_tree *tree, u64 start, u64 end, gfp_t mask) 1026 1016 { 1027 1017 return clear_extent_bit(tree, start, end, EXTENT_LOCKED, 1, 0, NULL, 1028 1018 mask); ··· 1743 1735 1744 1736 do { 1745 1737 struct page *page = bvec->bv_page; 1738 + struct extent_state *cached = NULL; 1739 + struct extent_state *state; 1740 + 1746 1741 tree = &BTRFS_I(page->mapping->host)->io_tree; 1747 1742 1748 1743 start = ((u64)page->index << PAGE_CACHE_SHIFT) + ··· 1760 1749 if (++bvec <= bvec_end) 1761 1750 prefetchw(&bvec->bv_page->flags); 1762 1751 1752 + spin_lock(&tree->lock); 1753 + state = find_first_extent_bit_state(tree, start, EXTENT_LOCKED); 1754 + if (state && state->start == start) { 1755 + /* 1756 + * take a reference on the state, unlock will drop 1757 + * the ref 1758 + */ 1759 + cache_state(state, &cached); 1760 + } 1761 + spin_unlock(&tree->lock); 1762 + 1763 1763 if (uptodate && tree->ops && tree->ops->readpage_end_io_hook) { 1764 1764 ret = tree->ops->readpage_end_io_hook(page, start, end, 1765 - NULL); 1765 + state); 1766 1766 if (ret) 1767 1767 uptodate = 0; 1768 1768 } ··· 1786 1764 test_bit(BIO_UPTODATE, &bio->bi_flags); 1787 1765 if (err) 1788 1766 uptodate = 0; 1767 + uncache_state(&cached); 1789 1768 continue; 1790 1769 } 1791 1770 } 1792 1771 1793 1772 if (uptodate) { 1794 - set_extent_uptodate(tree, start, end, 1773 + set_extent_uptodate(tree, start, end, &cached, 1795 1774 GFP_ATOMIC); 1796 1775 } 1797 - unlock_extent(tree, start, end, GFP_ATOMIC); 1776 + unlock_extent_cached(tree, start, end, &cached, GFP_ATOMIC); 1798 1777 1799 1778 if (whole_page) { 1800 1779 if (uptodate) { ··· 1834 1811 1835 1812 do { 1836 1813 struct page *page = bvec->bv_page; 1814 + struct extent_state *cached = NULL; 1837 1815 tree = &BTRFS_I(page->mapping->host)->io_tree; 1838 1816 1839 1817 start = ((u64)page->index << PAGE_CACHE_SHIFT) + ··· 1845 1821 prefetchw(&bvec->bv_page->flags); 1846 1822 1847 1823 if (uptodate) { 1848 - set_extent_uptodate(tree, start, end, GFP_ATOMIC); 1824 + set_extent_uptodate(tree, start, end, &cached, 1825 + GFP_ATOMIC); 1849 1826 } else { 1850 1827 ClearPageUptodate(page); 1851 1828 SetPageError(page); 1852 1829 } 1853 1830 1854 - unlock_extent(tree, start, end, GFP_ATOMIC); 1831 + unlock_extent_cached(tree, start, end, &cached, GFP_ATOMIC); 1855 1832 1856 1833 } while (bvec >= bio->bi_io_vec); 1857 1834 ··· 2041 2016 while (cur <= end) { 2042 2017 if (cur >= last_byte) { 2043 2018 char *userpage; 2019 + struct 
extent_state *cached = NULL; 2020 + 2044 2021 iosize = PAGE_CACHE_SIZE - page_offset; 2045 2022 userpage = kmap_atomic(page, KM_USER0); 2046 2023 memset(userpage + page_offset, 0, iosize); 2047 2024 flush_dcache_page(page); 2048 2025 kunmap_atomic(userpage, KM_USER0); 2049 2026 set_extent_uptodate(tree, cur, cur + iosize - 1, 2050 - GFP_NOFS); 2051 - unlock_extent(tree, cur, cur + iosize - 1, GFP_NOFS); 2027 + &cached, GFP_NOFS); 2028 + unlock_extent_cached(tree, cur, cur + iosize - 1, 2029 + &cached, GFP_NOFS); 2052 2030 break; 2053 2031 } 2054 2032 em = get_extent(inode, page, page_offset, cur, ··· 2091 2063 /* we've found a hole, just zero and go on */ 2092 2064 if (block_start == EXTENT_MAP_HOLE) { 2093 2065 char *userpage; 2066 + struct extent_state *cached = NULL; 2067 + 2094 2068 userpage = kmap_atomic(page, KM_USER0); 2095 2069 memset(userpage + page_offset, 0, iosize); 2096 2070 flush_dcache_page(page); 2097 2071 kunmap_atomic(userpage, KM_USER0); 2098 2072 2099 2073 set_extent_uptodate(tree, cur, cur + iosize - 1, 2100 - GFP_NOFS); 2101 - unlock_extent(tree, cur, cur + iosize - 1, GFP_NOFS); 2074 + &cached, GFP_NOFS); 2075 + unlock_extent_cached(tree, cur, cur + iosize - 1, 2076 + &cached, GFP_NOFS); 2102 2077 cur = cur + iosize; 2103 2078 page_offset += iosize; 2104 2079 continue; ··· 2820 2789 iocount++; 2821 2790 block_start = block_start + iosize; 2822 2791 } else { 2823 - set_extent_uptodate(tree, block_start, cur_end, 2792 + struct extent_state *cached = NULL; 2793 + 2794 + set_extent_uptodate(tree, block_start, cur_end, &cached, 2824 2795 GFP_NOFS); 2825 - unlock_extent(tree, block_start, cur_end, GFP_NOFS); 2796 + unlock_extent_cached(tree, block_start, cur_end, 2797 + &cached, GFP_NOFS); 2826 2798 block_start = cur_end + 1; 2827 2799 } 2828 2800 page_offset = block_start & (PAGE_CACHE_SIZE - 1); ··· 3491 3457 num_pages = num_extent_pages(eb->start, eb->len); 3492 3458 3493 3459 set_extent_uptodate(tree, eb->start, eb->start + eb->len - 1, 3494 - GFP_NOFS); 3460 + NULL, GFP_NOFS); 3495 3461 for (i = 0; i < num_pages; i++) { 3496 3462 page = extent_buffer_page(eb, i); 3497 3463 if ((i == 0 && (eb->start & (PAGE_CACHE_SIZE - 1))) || ··· 3919 3885 kunmap_atomic(dst_kaddr, KM_USER0); 3920 3886 } 3921 3887 3888 + static inline bool areas_overlap(unsigned long src, unsigned long dst, unsigned long len) 3889 + { 3890 + unsigned long distance = (src > dst) ? src - dst : dst - src; 3891 + return distance < len; 3892 + } 3893 + 3922 3894 static void copy_pages(struct page *dst_page, struct page *src_page, 3923 3895 unsigned long dst_off, unsigned long src_off, 3924 3896 unsigned long len) ··· 3932 3892 char *dst_kaddr = kmap_atomic(dst_page, KM_USER0); 3933 3893 char *src_kaddr; 3934 3894 3935 - if (dst_page != src_page) 3895 + if (dst_page != src_page) { 3936 3896 src_kaddr = kmap_atomic(src_page, KM_USER1); 3937 - else 3897 + } else { 3938 3898 src_kaddr = dst_kaddr; 3899 + BUG_ON(areas_overlap(src_off, dst_off, len)); 3900 + } 3939 3901 3940 3902 memcpy(dst_kaddr + dst_off, src_kaddr + src_off, len); 3941 3903 kunmap_atomic(dst_kaddr, KM_USER0); ··· 4012 3970 "len %lu len %lu\n", dst_offset, len, dst->len); 4013 3971 BUG_ON(1); 4014 3972 } 4015 - if (dst_offset < src_offset) { 3973 + if (!areas_overlap(src_offset, dst_offset, len)) { 4016 3974 memcpy_extent_buffer(dst, dst_offset, src_offset, len); 4017 3975 return; 4018 3976 }
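Among the extent_io.c changes, areas_overlap() is used both to BUG on overlapping same-page copies and to decide when the extent buffer move can fall back to a plain forward memcpy. A minimal userspace check of the same predicate; the test values are made up.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* true when [src, src+len) and [dst, dst+len) intersect */
static bool areas_overlap(unsigned long src, unsigned long dst,
			  unsigned long len)
{
	unsigned long distance = (src > dst) ? src - dst : dst - src;
	return distance < len;
}

int main(void)
{
	assert(areas_overlap(0, 10, 20));   /* length-20 ranges share bytes 10..19 */
	assert(!areas_overlap(0, 10, 10));  /* touching ranges do not overlap */
	assert(areas_overlap(100, 90, 11)); /* order of src/dst does not matter */
	printf("overlap checks passed\n");
	return 0;
}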
+1 -1
fs/btrfs/extent_io.h
··· 208 208 int bits, int exclusive_bits, u64 *failed_start, 209 209 struct extent_state **cached_state, gfp_t mask); 210 210 int set_extent_uptodate(struct extent_io_tree *tree, u64 start, u64 end, 211 - gfp_t mask); 211 + struct extent_state **cached_state, gfp_t mask); 212 212 int set_extent_new(struct extent_io_tree *tree, u64 start, u64 end, 213 213 gfp_t mask); 214 214 int set_extent_dirty(struct extent_io_tree *tree, u64 start, u64 end,
+9 -12
fs/btrfs/file.c
··· 104 104 /* 105 105 * unlocks pages after btrfs_file_write is done with them 106 106 */ 107 - static noinline void btrfs_drop_pages(struct page **pages, size_t num_pages) 107 + void btrfs_drop_pages(struct page **pages, size_t num_pages) 108 108 { 109 109 size_t i; 110 110 for (i = 0; i < num_pages; i++) { ··· 127 127 * this also makes the decision about creating an inline extent vs 128 128 * doing real data extents, marking pages dirty and delalloc as required. 129 129 */ 130 - static noinline int dirty_and_release_pages(struct btrfs_root *root, 131 - struct file *file, 132 - struct page **pages, 133 - size_t num_pages, 134 - loff_t pos, 135 - size_t write_bytes) 130 + int btrfs_dirty_pages(struct btrfs_root *root, struct inode *inode, 131 + struct page **pages, size_t num_pages, 132 + loff_t pos, size_t write_bytes, 133 + struct extent_state **cached) 136 134 { 137 135 int err = 0; 138 136 int i; 139 - struct inode *inode = fdentry(file)->d_inode; 140 137 u64 num_bytes; 141 138 u64 start_pos; 142 139 u64 end_of_last_block; ··· 146 149 147 150 end_of_last_block = start_pos + num_bytes - 1; 148 151 err = btrfs_set_extent_delalloc(inode, start_pos, end_of_last_block, 149 - NULL); 152 + cached); 150 153 if (err) 151 154 return err; 152 155 ··· 989 992 } 990 993 991 994 if (copied > 0) { 992 - ret = dirty_and_release_pages(root, file, pages, 993 - dirty_pages, pos, 994 - copied); 995 + ret = btrfs_dirty_pages(root, inode, pages, 996 + dirty_pages, pos, copied, 997 + NULL); 995 998 if (ret) { 996 999 btrfs_delalloc_release_space(inode, 997 1000 dirty_pages << PAGE_CACHE_SHIFT);
+56 -63
fs/btrfs/free-space-cache.c
··· 508 508 struct inode *inode; 509 509 struct rb_node *node; 510 510 struct list_head *pos, *n; 511 + struct page **pages; 511 512 struct page *page; 512 513 struct extent_state *cached_state = NULL; 513 514 struct btrfs_free_cluster *cluster = NULL; ··· 518 517 u64 start, end, len; 519 518 u64 bytes = 0; 520 519 u32 *crc, *checksums; 521 - pgoff_t index = 0, last_index = 0; 522 520 unsigned long first_page_offset; 523 - int num_checksums; 521 + int index = 0, num_pages = 0; 524 522 int entries = 0; 525 523 int bitmaps = 0; 526 524 int ret = 0; 527 525 bool next_page = false; 526 + bool out_of_space = false; 528 527 529 528 root = root->fs_info->tree_root; 530 529 ··· 552 551 return 0; 553 552 } 554 553 555 - last_index = (i_size_read(inode) - 1) >> PAGE_CACHE_SHIFT; 554 + num_pages = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> 555 + PAGE_CACHE_SHIFT; 556 556 filemap_write_and_wait(inode->i_mapping); 557 557 btrfs_wait_ordered_range(inode, inode->i_size & 558 558 ~(root->sectorsize - 1), (u64)-1); 559 559 560 560 /* We need a checksum per page. */ 561 - num_checksums = i_size_read(inode) / PAGE_CACHE_SIZE; 562 - crc = checksums = kzalloc(sizeof(u32) * num_checksums, GFP_NOFS); 561 + crc = checksums = kzalloc(sizeof(u32) * num_pages, GFP_NOFS); 563 562 if (!crc) { 563 + iput(inode); 564 + return 0; 565 + } 566 + 567 + pages = kzalloc(sizeof(struct page *) * num_pages, GFP_NOFS); 568 + if (!pages) { 569 + kfree(crc); 564 570 iput(inode); 565 571 return 0; 566 572 } ··· 576 568 * need to calculate the offset into the page that we can start writing 577 569 * our entries. 578 570 */ 579 - first_page_offset = (sizeof(u32) * num_checksums) + sizeof(u64); 571 + first_page_offset = (sizeof(u32) * num_pages) + sizeof(u64); 580 572 581 573 /* Get the cluster for this block_group if it exists */ 582 574 if (!list_empty(&block_group->cluster_list)) ··· 598 590 * after find_get_page at this point. Just putting this here so people 599 591 * know and don't freak out. 600 592 */ 601 - while (index <= last_index) { 593 + while (index < num_pages) { 602 594 page = grab_cache_page(inode->i_mapping, index); 603 595 if (!page) { 604 - pgoff_t i = 0; 596 + int i; 605 597 606 - while (i < index) { 607 - page = find_get_page(inode->i_mapping, i); 608 - unlock_page(page); 609 - page_cache_release(page); 610 - page_cache_release(page); 611 - i++; 598 + for (i = 0; i < num_pages; i++) { 599 + unlock_page(pages[i]); 600 + page_cache_release(pages[i]); 612 601 } 613 602 goto out_free; 614 603 } 604 + pages[index] = page; 615 605 index++; 616 606 } 617 607 ··· 637 631 offset = start_offset; 638 632 } 639 633 640 - page = find_get_page(inode->i_mapping, index); 634 + if (index >= num_pages) { 635 + out_of_space = true; 636 + break; 637 + } 638 + 639 + page = pages[index]; 641 640 642 641 addr = kmap(page); 643 642 entry = addr + start_offset; ··· 719 708 720 709 bytes += PAGE_CACHE_SIZE; 721 710 722 - ClearPageChecked(page); 723 - set_page_extent_mapped(page); 724 - SetPageUptodate(page); 725 - set_page_dirty(page); 726 - 727 - /* 728 - * We need to release our reference we got for grab_cache_page, 729 - * except for the first page which will hold our checksums, we 730 - * do that below. 
731 - */ 732 - if (index != 0) { 733 - unlock_page(page); 734 - page_cache_release(page); 735 - } 736 - 737 - page_cache_release(page); 738 - 739 711 index++; 740 712 } while (node || next_page); 741 713 ··· 728 734 struct btrfs_free_space *entry = 729 735 list_entry(pos, struct btrfs_free_space, list); 730 736 731 - page = find_get_page(inode->i_mapping, index); 737 + if (index >= num_pages) { 738 + out_of_space = true; 739 + break; 740 + } 741 + page = pages[index]; 732 742 733 743 addr = kmap(page); 734 744 memcpy(addr, entry->bitmap, PAGE_CACHE_SIZE); ··· 743 745 crc++; 744 746 bytes += PAGE_CACHE_SIZE; 745 747 746 - ClearPageChecked(page); 747 - set_page_extent_mapped(page); 748 - SetPageUptodate(page); 749 - set_page_dirty(page); 750 - unlock_page(page); 751 - page_cache_release(page); 752 - page_cache_release(page); 753 748 list_del_init(&entry->list); 754 749 index++; 755 750 } 756 751 752 + if (out_of_space) { 753 + btrfs_drop_pages(pages, num_pages); 754 + unlock_extent_cached(&BTRFS_I(inode)->io_tree, 0, 755 + i_size_read(inode) - 1, &cached_state, 756 + GFP_NOFS); 757 + ret = 0; 758 + goto out_free; 759 + } 760 + 757 761 /* Zero out the rest of the pages just to make sure */ 758 - while (index <= last_index) { 762 + while (index < num_pages) { 759 763 void *addr; 760 764 761 - page = find_get_page(inode->i_mapping, index); 762 - 765 + page = pages[index]; 763 766 addr = kmap(page); 764 767 memset(addr, 0, PAGE_CACHE_SIZE); 765 768 kunmap(page); 766 - ClearPageChecked(page); 767 - set_page_extent_mapped(page); 768 - SetPageUptodate(page); 769 - set_page_dirty(page); 770 - unlock_page(page); 771 - page_cache_release(page); 772 - page_cache_release(page); 773 769 bytes += PAGE_CACHE_SIZE; 774 770 index++; 775 771 } 776 - 777 - btrfs_set_extent_delalloc(inode, 0, bytes - 1, &cached_state); 778 772 779 773 /* Write the checksums and trans id to the first page */ 780 774 { 781 775 void *addr; 782 776 u64 *gen; 783 777 784 - page = find_get_page(inode->i_mapping, 0); 778 + page = pages[0]; 785 779 786 780 addr = kmap(page); 787 - memcpy(addr, checksums, sizeof(u32) * num_checksums); 788 - gen = addr + (sizeof(u32) * num_checksums); 781 + memcpy(addr, checksums, sizeof(u32) * num_pages); 782 + gen = addr + (sizeof(u32) * num_pages); 789 783 *gen = trans->transid; 790 784 kunmap(page); 791 - ClearPageChecked(page); 792 - set_page_extent_mapped(page); 793 - SetPageUptodate(page); 794 - set_page_dirty(page); 795 - unlock_page(page); 796 - page_cache_release(page); 797 - page_cache_release(page); 798 785 } 799 - BTRFS_I(inode)->generation = trans->transid; 800 786 787 + ret = btrfs_dirty_pages(root, inode, pages, num_pages, 0, 788 + bytes, &cached_state); 789 + btrfs_drop_pages(pages, num_pages); 801 790 unlock_extent_cached(&BTRFS_I(inode)->io_tree, 0, 802 791 i_size_read(inode) - 1, &cached_state, GFP_NOFS); 792 + 793 + if (ret) { 794 + ret = 0; 795 + goto out_free; 796 + } 797 + 798 + BTRFS_I(inode)->generation = trans->transid; 803 799 804 800 filemap_write_and_wait(inode->i_mapping); 805 801 ··· 845 853 BTRFS_I(inode)->generation = 0; 846 854 } 847 855 kfree(checksums); 856 + kfree(pages); 848 857 btrfs_update_inode(trans, root, inode); 849 858 iput(inode); 850 859 return ret;
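The free-space-cache.c rewrite grabs every cache page into one pages[] array up front and releases them with a single btrfs_drop_pages() call, instead of repeatedly re-looking pages up with find_get_page(). A userspace sketch of the same acquire-all-or-unwind pattern, with malloc/free standing in for grab_cache_page()/page_cache_release():

#include <stdio.h>
#include <stdlib.h>

/* Acquire num buffers up front; on any failure drop everything acquired so far. */
static void **grab_all(size_t num, size_t size)
{
	void **bufs = calloc(num, sizeof(*bufs));
	size_t i;

	if (!bufs)
		return NULL;

	for (i = 0; i < num; i++) {
		bufs[i] = malloc(size);
		if (!bufs[i]) {
			while (i--)
				free(bufs[i]);
			free(bufs);
			return NULL;
		}
	}
	return bufs;
}

static void drop_all(void **bufs, size_t num)
{
	size_t i;

	for (i = 0; i < num; i++)
		free(bufs[i]);
	free(bufs);
}

int main(void)
{
	void **pages = grab_all(16, 4096);

	if (!pages)
		return 1;
	/* ... fill the buffers, write them out ... */
	drop_all(pages, 16);
	return 0;
}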
+116 -49
fs/btrfs/inode.c
··· 1770 1770 add_pending_csums(trans, inode, ordered_extent->file_offset, 1771 1771 &ordered_extent->list); 1772 1772 1773 - btrfs_ordered_update_i_size(inode, 0, ordered_extent); 1774 - ret = btrfs_update_inode(trans, root, inode); 1775 - BUG_ON(ret); 1773 + ret = btrfs_ordered_update_i_size(inode, 0, ordered_extent); 1774 + if (!ret) { 1775 + ret = btrfs_update_inode(trans, root, inode); 1776 + BUG_ON(ret); 1777 + } 1778 + ret = 0; 1776 1779 out: 1777 1780 if (nolock) { 1778 1781 if (trans) ··· 2593 2590 struct btrfs_inode_item *item, 2594 2591 struct inode *inode) 2595 2592 { 2593 + if (!leaf->map_token) 2594 + map_private_extent_buffer(leaf, (unsigned long)item, 2595 + sizeof(struct btrfs_inode_item), 2596 + &leaf->map_token, &leaf->kaddr, 2597 + &leaf->map_start, &leaf->map_len, 2598 + KM_USER1); 2599 + 2596 2600 btrfs_set_inode_uid(leaf, item, inode->i_uid); 2597 2601 btrfs_set_inode_gid(leaf, item, inode->i_gid); 2598 2602 btrfs_set_inode_size(leaf, item, BTRFS_I(inode)->disk_i_size); ··· 2628 2618 btrfs_set_inode_rdev(leaf, item, inode->i_rdev); 2629 2619 btrfs_set_inode_flags(leaf, item, BTRFS_I(inode)->flags); 2630 2620 btrfs_set_inode_block_group(leaf, item, BTRFS_I(inode)->block_group); 2621 + 2622 + if (leaf->map_token) { 2623 + unmap_extent_buffer(leaf, leaf->map_token, KM_USER1); 2624 + leaf->map_token = NULL; 2625 + } 2631 2626 } 2632 2627 2633 2628 /* ··· 4222 4207 struct btrfs_key found_key; 4223 4208 struct btrfs_path *path; 4224 4209 int ret; 4225 - u32 nritems; 4226 4210 struct extent_buffer *leaf; 4227 4211 int slot; 4228 - int advance; 4229 4212 unsigned char d_type; 4230 4213 int over = 0; 4231 4214 u32 di_cur; ··· 4266 4253 ret = btrfs_search_slot(NULL, root, &key, path, 0, 0); 4267 4254 if (ret < 0) 4268 4255 goto err; 4269 - advance = 0; 4270 4256 4271 4257 while (1) { 4272 4258 leaf = path->nodes[0]; 4273 - nritems = btrfs_header_nritems(leaf); 4274 4259 slot = path->slots[0]; 4275 - if (advance || slot >= nritems) { 4276 - if (slot >= nritems - 1) { 4277 - ret = btrfs_next_leaf(root, path); 4278 - if (ret) 4279 - break; 4280 - leaf = path->nodes[0]; 4281 - nritems = btrfs_header_nritems(leaf); 4282 - slot = path->slots[0]; 4283 - } else { 4284 - slot++; 4285 - path->slots[0]++; 4286 - } 4260 + if (slot >= btrfs_header_nritems(leaf)) { 4261 + ret = btrfs_next_leaf(root, path); 4262 + if (ret < 0) 4263 + goto err; 4264 + else if (ret > 0) 4265 + break; 4266 + continue; 4287 4267 } 4288 4268 4289 - advance = 1; 4290 4269 item = btrfs_item_nr(leaf, slot); 4291 4270 btrfs_item_key_to_cpu(leaf, &found_key, slot); 4292 4271 ··· 4287 4282 if (btrfs_key_type(&found_key) != key_type) 4288 4283 break; 4289 4284 if (found_key.offset < filp->f_pos) 4290 - continue; 4285 + goto next; 4291 4286 4292 4287 filp->f_pos = found_key.offset; 4293 4288 ··· 4340 4335 di_cur += di_len; 4341 4336 di = (struct btrfs_dir_item *)((char *)di + di_len); 4342 4337 } 4338 + next: 4339 + path->slots[0]++; 4343 4340 } 4344 4341 4345 4342 /* Reached end of directory/root. Bump pos past the last item. 
*/ ··· 4534 4527 BUG_ON(!path); 4535 4528 4536 4529 inode = new_inode(root->fs_info->sb); 4537 - if (!inode) 4530 + if (!inode) { 4531 + btrfs_free_path(path); 4538 4532 return ERR_PTR(-ENOMEM); 4533 + } 4539 4534 4540 4535 if (dir) { 4541 4536 trace_btrfs_inode_request(dir); 4542 4537 4543 4538 ret = btrfs_set_inode_index(dir, index); 4544 4539 if (ret) { 4540 + btrfs_free_path(path); 4545 4541 iput(inode); 4546 4542 return ERR_PTR(ret); 4547 4543 } ··· 4844 4834 if (inode->i_nlink == ~0U) 4845 4835 return -EMLINK; 4846 4836 4847 - btrfs_inc_nlink(inode); 4848 - inode->i_ctime = CURRENT_TIME; 4849 - 4850 4837 err = btrfs_set_inode_index(dir, &index); 4851 4838 if (err) 4852 4839 goto fail; ··· 4858 4851 err = PTR_ERR(trans); 4859 4852 goto fail; 4860 4853 } 4854 + 4855 + btrfs_inc_nlink(inode); 4856 + inode->i_ctime = CURRENT_TIME; 4861 4857 4862 4858 btrfs_set_trans_block_group(trans, dir); 4863 4859 ihold(inode); ··· 5231 5221 btrfs_mark_buffer_dirty(leaf); 5232 5222 } 5233 5223 set_extent_uptodate(io_tree, em->start, 5234 - extent_map_end(em) - 1, GFP_NOFS); 5224 + extent_map_end(em) - 1, NULL, GFP_NOFS); 5235 5225 goto insert; 5236 5226 } else { 5237 5227 printk(KERN_ERR "btrfs unknown found_type %d\n", found_type); ··· 5438 5428 } 5439 5429 5440 5430 static struct extent_map *btrfs_new_extent_direct(struct inode *inode, 5431 + struct extent_map *em, 5441 5432 u64 start, u64 len) 5442 5433 { 5443 5434 struct btrfs_root *root = BTRFS_I(inode)->root; 5444 5435 struct btrfs_trans_handle *trans; 5445 - struct extent_map *em; 5446 5436 struct extent_map_tree *em_tree = &BTRFS_I(inode)->extent_tree; 5447 5437 struct btrfs_key ins; 5448 5438 u64 alloc_hint; 5449 5439 int ret; 5440 + bool insert = false; 5450 5441 5451 - btrfs_drop_extent_cache(inode, start, start + len - 1, 0); 5442 + /* 5443 + * Ok if the extent map we looked up is a hole and is for the exact 5444 + * range we want, there is no reason to allocate a new one, however if 5445 + * it is not right then we need to free this one and drop the cache for 5446 + * our range. 5447 + */ 5448 + if (em->block_start != EXTENT_MAP_HOLE || em->start != start || 5449 + em->len != len) { 5450 + free_extent_map(em); 5451 + em = NULL; 5452 + insert = true; 5453 + btrfs_drop_extent_cache(inode, start, start + len - 1, 0); 5454 + } 5452 5455 5453 5456 trans = btrfs_join_transaction(root, 0); 5454 5457 if (IS_ERR(trans)) ··· 5477 5454 goto out; 5478 5455 } 5479 5456 5480 - em = alloc_extent_map(GFP_NOFS); 5481 5457 if (!em) { 5482 - em = ERR_PTR(-ENOMEM); 5483 - goto out; 5458 + em = alloc_extent_map(GFP_NOFS); 5459 + if (!em) { 5460 + em = ERR_PTR(-ENOMEM); 5461 + goto out; 5462 + } 5484 5463 } 5485 5464 5486 5465 em->start = start; ··· 5492 5467 em->block_start = ins.objectid; 5493 5468 em->block_len = ins.offset; 5494 5469 em->bdev = root->fs_info->fs_devices->latest_bdev; 5470 + 5471 + /* 5472 + * We need to do this because if we're using the original em we searched 5473 + * for, we could have EXTENT_FLAG_VACANCY set, and we don't want that. 
5474 + */ 5475 + em->flags = 0; 5495 5476 set_bit(EXTENT_FLAG_PINNED, &em->flags); 5496 5477 5497 - while (1) { 5478 + while (insert) { 5498 5479 write_lock(&em_tree->lock); 5499 5480 ret = add_extent_mapping(em_tree, em); 5500 5481 write_unlock(&em_tree->lock); ··· 5718 5687 * it above 5719 5688 */ 5720 5689 len = bh_result->b_size; 5721 - free_extent_map(em); 5722 - em = btrfs_new_extent_direct(inode, start, len); 5690 + em = btrfs_new_extent_direct(inode, em, start, len); 5723 5691 if (IS_ERR(em)) 5724 5692 return PTR_ERR(em); 5725 5693 len = min(len, em->len - (start - em->start)); ··· 5881 5851 } 5882 5852 5883 5853 add_pending_csums(trans, inode, ordered->file_offset, &ordered->list); 5884 - btrfs_ordered_update_i_size(inode, 0, ordered); 5885 - btrfs_update_inode(trans, root, inode); 5854 + ret = btrfs_ordered_update_i_size(inode, 0, ordered); 5855 + if (!ret) 5856 + btrfs_update_inode(trans, root, inode); 5857 + ret = 0; 5886 5858 out_unlock: 5887 5859 unlock_extent_cached(&BTRFS_I(inode)->io_tree, ordered->file_offset, 5888 5860 ordered->file_offset + ordered->len - 1, ··· 5970 5938 5971 5939 static inline int __btrfs_submit_dio_bio(struct bio *bio, struct inode *inode, 5972 5940 int rw, u64 file_offset, int skip_sum, 5973 - u32 *csums) 5941 + u32 *csums, int async_submit) 5974 5942 { 5975 5943 int write = rw & REQ_WRITE; 5976 5944 struct btrfs_root *root = BTRFS_I(inode)->root; ··· 5981 5949 if (ret) 5982 5950 goto err; 5983 5951 5984 - if (write && !skip_sum) { 5952 + if (skip_sum) 5953 + goto map; 5954 + 5955 + if (write && async_submit) { 5985 5956 ret = btrfs_wq_submit_bio(root->fs_info, 5986 5957 inode, rw, bio, 0, 0, 5987 5958 file_offset, 5988 5959 __btrfs_submit_bio_start_direct_io, 5989 5960 __btrfs_submit_bio_done); 5990 5961 goto err; 5962 + } else if (write) { 5963 + /* 5964 + * If we aren't doing async submit, calculate the csum of the 5965 + * bio now. 
5966 + */ 5967 + ret = btrfs_csum_one_bio(root, inode, bio, file_offset, 1); 5968 + if (ret) 5969 + goto err; 5991 5970 } else if (!skip_sum) { 5992 5971 ret = btrfs_lookup_bio_sums_dio(root, inode, bio, 5993 5972 file_offset, csums); ··· 6006 5963 goto err; 6007 5964 } 6008 5965 6009 - ret = btrfs_map_bio(root, rw, bio, 0, 1); 5966 + map: 5967 + ret = btrfs_map_bio(root, rw, bio, 0, async_submit); 6010 5968 err: 6011 5969 bio_put(bio); 6012 5970 return ret; ··· 6029 5985 int nr_pages = 0; 6030 5986 u32 *csums = dip->csums; 6031 5987 int ret = 0; 5988 + int async_submit = 0; 6032 5989 int write = rw & REQ_WRITE; 6033 - 6034 - bio = btrfs_dio_bio_alloc(orig_bio->bi_bdev, start_sector, GFP_NOFS); 6035 - if (!bio) 6036 - return -ENOMEM; 6037 - bio->bi_private = dip; 6038 - bio->bi_end_io = btrfs_end_dio_bio; 6039 - atomic_inc(&dip->pending_bios); 6040 5990 6041 5991 map_length = orig_bio->bi_size; 6042 5992 ret = btrfs_map_block(map_tree, READ, start_sector << 9, ··· 6039 6001 bio_put(bio); 6040 6002 return -EIO; 6041 6003 } 6004 + 6005 + if (map_length >= orig_bio->bi_size) { 6006 + bio = orig_bio; 6007 + goto submit; 6008 + } 6009 + 6010 + async_submit = 1; 6011 + bio = btrfs_dio_bio_alloc(orig_bio->bi_bdev, start_sector, GFP_NOFS); 6012 + if (!bio) 6013 + return -ENOMEM; 6014 + bio->bi_private = dip; 6015 + bio->bi_end_io = btrfs_end_dio_bio; 6016 + atomic_inc(&dip->pending_bios); 6042 6017 6043 6018 while (bvec <= (orig_bio->bi_io_vec + orig_bio->bi_vcnt - 1)) { 6044 6019 if (unlikely(map_length < submit_len + bvec->bv_len || ··· 6066 6015 atomic_inc(&dip->pending_bios); 6067 6016 ret = __btrfs_submit_dio_bio(bio, inode, rw, 6068 6017 file_offset, skip_sum, 6069 - csums); 6018 + csums, async_submit); 6070 6019 if (ret) { 6071 6020 bio_put(bio); 6072 6021 atomic_dec(&dip->pending_bios); ··· 6103 6052 } 6104 6053 } 6105 6054 6055 + submit: 6106 6056 ret = __btrfs_submit_dio_bio(bio, inode, rw, file_offset, skip_sum, 6107 - csums); 6057 + csums, async_submit); 6108 6058 if (!ret) 6109 6059 return 0; 6110 6060 ··· 6200 6148 unsigned long nr_segs) 6201 6149 { 6202 6150 int seg; 6151 + int i; 6203 6152 size_t size; 6204 6153 unsigned long addr; 6205 6154 unsigned blocksize_mask = root->sectorsize - 1; ··· 6215 6162 addr = (unsigned long)iov[seg].iov_base; 6216 6163 size = iov[seg].iov_len; 6217 6164 end += size; 6218 - if ((addr & blocksize_mask) || (size & blocksize_mask)) 6165 + if ((addr & blocksize_mask) || (size & blocksize_mask)) 6219 6166 goto out; 6167 + 6168 + /* If this is a write we don't need to check anymore */ 6169 + if (rw & WRITE) 6170 + continue; 6171 + 6172 + /* 6173 + * Check to make sure we don't have duplicate iov_base's in this 6174 + * iovec, if so return EINVAL, otherwise we'll get csum errors 6175 + * when reading back. 6176 + */ 6177 + for (i = seg + 1; i < nr_segs; i++) { 6178 + if (iov[seg].iov_base == iov[i].iov_base) 6179 + goto out; 6180 + } 6220 6181 } 6221 6182 retval = 0; 6222 6183 out:
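In the inode.c hunk above, check_direct_IO() now rejects read iovecs that reuse an iov_base, since reading into the same user page twice leads to checksum errors. The scan is a simple quadratic loop; here is a stand-alone version with hypothetical buffers.

#include <stdio.h>
#include <sys/uio.h>

/* return nonzero if any two entries share the same base address */
static int iov_has_duplicate_base(const struct iovec *iov,
				  unsigned long nr_segs)
{
	unsigned long seg, i;

	for (seg = 0; seg < nr_segs; seg++) {
		for (i = seg + 1; i < nr_segs; i++) {
			if (iov[seg].iov_base == iov[i].iov_base)
				return 1;
		}
	}
	return 0;
}

int main(void)
{
	char a[512], b[512];
	struct iovec ok[2]  = { { a, sizeof(a) }, { b, sizeof(b) } };
	struct iovec bad[2] = { { a, sizeof(a) }, { a, sizeof(a) } };

	printf("ok:  %d\n", iov_has_duplicate_base(ok, 2));   /* 0 */
	printf("bad: %d\n", iov_has_duplicate_base(bad, 2));  /* 1 */
	return 0;
}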
+1 -1
fs/btrfs/ioctl.c
··· 2287 2287 struct btrfs_ioctl_space_info space; 2288 2288 struct btrfs_ioctl_space_info *dest; 2289 2289 struct btrfs_ioctl_space_info *dest_orig; 2290 - struct btrfs_ioctl_space_info *user_dest; 2290 + struct btrfs_ioctl_space_info __user *user_dest; 2291 2291 struct btrfs_space_info *info; 2292 2292 u64 types[] = {BTRFS_BLOCK_GROUP_DATA, 2293 2293 BTRFS_BLOCK_GROUP_SYSTEM,
+33 -9
fs/btrfs/super.c
··· 159 159 Opt_compress_type, Opt_compress_force, Opt_compress_force_type, 160 160 Opt_notreelog, Opt_ratio, Opt_flushoncommit, Opt_discard, 161 161 Opt_space_cache, Opt_clear_cache, Opt_user_subvol_rm_allowed, 162 - Opt_enospc_debug, Opt_err, 162 + Opt_enospc_debug, Opt_subvolrootid, Opt_err, 163 163 }; 164 164 165 165 static match_table_t tokens = { ··· 189 189 {Opt_clear_cache, "clear_cache"}, 190 190 {Opt_user_subvol_rm_allowed, "user_subvol_rm_allowed"}, 191 191 {Opt_enospc_debug, "enospc_debug"}, 192 + {Opt_subvolrootid, "subvolrootid=%d"}, 192 193 {Opt_err, NULL}, 193 194 }; 194 195 ··· 233 232 break; 234 233 case Opt_subvol: 235 234 case Opt_subvolid: 235 + case Opt_subvolrootid: 236 236 case Opt_device: 237 237 /* 238 238 * These are parsed by btrfs_parse_early_options ··· 390 388 */ 391 389 static int btrfs_parse_early_options(const char *options, fmode_t flags, 392 390 void *holder, char **subvol_name, u64 *subvol_objectid, 393 - struct btrfs_fs_devices **fs_devices) 391 + u64 *subvol_rootid, struct btrfs_fs_devices **fs_devices) 394 392 { 395 393 substring_t args[MAX_OPT_ARGS]; 396 394 char *opts, *orig, *p; ··· 429 427 BTRFS_FS_TREE_OBJECTID; 430 428 else 431 429 *subvol_objectid = intarg; 430 + } 431 + break; 432 + case Opt_subvolrootid: 433 + intarg = 0; 434 + error = match_int(&args[0], &intarg); 435 + if (!error) { 436 + /* we want the original fs_tree */ 437 + if (!intarg) 438 + *subvol_rootid = 439 + BTRFS_FS_TREE_OBJECTID; 440 + else 441 + *subvol_rootid = intarg; 432 442 } 433 443 break; 434 444 case Opt_device: ··· 750 736 fmode_t mode = FMODE_READ; 751 737 char *subvol_name = NULL; 752 738 u64 subvol_objectid = 0; 739 + u64 subvol_rootid = 0; 753 740 int error = 0; 754 741 755 742 if (!(flags & MS_RDONLY)) ··· 758 743 759 744 error = btrfs_parse_early_options(data, mode, fs_type, 760 745 &subvol_name, &subvol_objectid, 761 - &fs_devices); 746 + &subvol_rootid, &fs_devices); 762 747 if (error) 763 748 return ERR_PTR(error); 764 749 ··· 822 807 s->s_flags |= MS_ACTIVE; 823 808 } 824 809 825 - root = get_default_root(s, subvol_objectid); 826 - if (IS_ERR(root)) { 827 - error = PTR_ERR(root); 828 - deactivate_locked_super(s); 829 - goto error_free_subvol_name; 830 - } 831 810 /* if they gave us a subvolume name bind mount into that */ 832 811 if (strcmp(subvol_name, ".")) { 833 812 struct dentry *new_root; 813 + 814 + root = get_default_root(s, subvol_rootid); 815 + if (IS_ERR(root)) { 816 + error = PTR_ERR(root); 817 + deactivate_locked_super(s); 818 + goto error_free_subvol_name; 819 + } 820 + 834 821 mutex_lock(&root->d_inode->i_mutex); 835 822 new_root = lookup_one_len(subvol_name, root, 836 823 strlen(subvol_name)); ··· 853 836 } 854 837 dput(root); 855 838 root = new_root; 839 + } else { 840 + root = get_default_root(s, subvol_objectid); 841 + if (IS_ERR(root)) { 842 + error = PTR_ERR(root); 843 + deactivate_locked_super(s); 844 + goto error_free_subvol_name; 845 + } 856 846 } 857 847 858 848 kfree(subvol_name);
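super.c adds the subvolrootid=%d early mount option so the default root can be resolved from a specific tree id, while named subvolume lookups keep using subvolid. A rough sketch of parsing such comma-separated key=value mount strings in plain C; the option names follow the patch, but the parser itself is illustrative and not the kernel's match_token machinery.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct early_opts {
	unsigned long long subvol_objectid;
	unsigned long long subvol_rootid;
	char subvol_name[64];
};

/* Parse a comma separated option string like "subvolrootid=5,subvol=foo". */
static void parse_early_options(const char *options, struct early_opts *o)
{
	char *dup = strdup(options);
	char *save = NULL;
	char *p;

	for (p = strtok_r(dup, ",", &save); p; p = strtok_r(NULL, ",", &save)) {
		if (!strncmp(p, "subvolrootid=", 13))
			o->subvol_rootid = strtoull(p + 13, NULL, 10);
		else if (!strncmp(p, "subvolid=", 9))
			o->subvol_objectid = strtoull(p + 9, NULL, 10);
		else if (!strncmp(p, "subvol=", 7))
			snprintf(o->subvol_name, sizeof(o->subvol_name), "%s", p + 7);
		/* unknown options are left for the full parser, as in the patch */
	}
	free(dup);
}

int main(void)
{
	struct early_opts o = { 0, 0, "." };

	parse_early_options("subvolrootid=257,subvol=snapshots/daily", &o);
	printf("rootid=%llu subvol=%s\n", o.subvol_rootid, o.subvol_name);
	return 0;
}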
+26 -22
fs/btrfs/transaction.c
··· 32 32 33 33 static noinline void put_transaction(struct btrfs_transaction *transaction) 34 34 { 35 - WARN_ON(transaction->use_count == 0); 36 - transaction->use_count--; 37 - if (transaction->use_count == 0) { 38 - list_del_init(&transaction->list); 35 + WARN_ON(atomic_read(&transaction->use_count) == 0); 36 + if (atomic_dec_and_test(&transaction->use_count)) { 39 37 memset(transaction, 0, sizeof(*transaction)); 40 38 kmem_cache_free(btrfs_transaction_cachep, transaction); 41 39 } ··· 58 60 if (!cur_trans) 59 61 return -ENOMEM; 60 62 root->fs_info->generation++; 61 - cur_trans->num_writers = 1; 63 + atomic_set(&cur_trans->num_writers, 1); 62 64 cur_trans->num_joined = 0; 63 65 cur_trans->transid = root->fs_info->generation; 64 66 init_waitqueue_head(&cur_trans->writer_wait); 65 67 init_waitqueue_head(&cur_trans->commit_wait); 66 68 cur_trans->in_commit = 0; 67 69 cur_trans->blocked = 0; 68 - cur_trans->use_count = 1; 70 + atomic_set(&cur_trans->use_count, 1); 69 71 cur_trans->commit_done = 0; 70 72 cur_trans->start_time = get_seconds(); 71 73 ··· 86 88 root->fs_info->running_transaction = cur_trans; 87 89 spin_unlock(&root->fs_info->new_trans_lock); 88 90 } else { 89 - cur_trans->num_writers++; 91 + atomic_inc(&cur_trans->num_writers); 90 92 cur_trans->num_joined++; 91 93 } 92 94 ··· 143 145 cur_trans = root->fs_info->running_transaction; 144 146 if (cur_trans && cur_trans->blocked) { 145 147 DEFINE_WAIT(wait); 146 - cur_trans->use_count++; 148 + atomic_inc(&cur_trans->use_count); 147 149 while (1) { 148 150 prepare_to_wait(&root->fs_info->transaction_wait, &wait, 149 151 TASK_UNINTERRUPTIBLE); ··· 179 181 { 180 182 struct btrfs_trans_handle *h; 181 183 struct btrfs_transaction *cur_trans; 184 + int retries = 0; 182 185 int ret; 183 186 184 187 if (root->fs_info->fs_state & BTRFS_SUPER_FLAG_ERROR) ··· 203 204 } 204 205 205 206 cur_trans = root->fs_info->running_transaction; 206 - cur_trans->use_count++; 207 + atomic_inc(&cur_trans->use_count); 207 208 if (type != TRANS_JOIN_NOLOCK) 208 209 mutex_unlock(&root->fs_info->trans_mutex); 209 210 ··· 223 224 224 225 if (num_items > 0) { 225 226 ret = btrfs_trans_reserve_metadata(h, root, num_items); 226 - if (ret == -EAGAIN) { 227 + if (ret == -EAGAIN && !retries) { 228 + retries++; 227 229 btrfs_commit_transaction(h, root); 228 230 goto again; 231 + } else if (ret == -EAGAIN) { 232 + /* 233 + * We have already retried and got EAGAIN, so really we 234 + * don't have space, so set ret to -ENOSPC. 
235 + */ 236 + ret = -ENOSPC; 229 237 } 238 + 230 239 if (ret < 0) { 231 240 btrfs_end_transaction(h, root); 232 241 return ERR_PTR(ret); ··· 334 327 goto out_unlock; /* nothing committing|committed */ 335 328 } 336 329 337 - cur_trans->use_count++; 330 + atomic_inc(&cur_trans->use_count); 338 331 mutex_unlock(&root->fs_info->trans_mutex); 339 332 340 333 wait_for_commit(root, cur_trans); ··· 464 457 wake_up_process(info->transaction_kthread); 465 458 } 466 459 467 - if (lock) 468 - mutex_lock(&info->trans_mutex); 469 460 WARN_ON(cur_trans != info->running_transaction); 470 - WARN_ON(cur_trans->num_writers < 1); 471 - cur_trans->num_writers--; 461 + WARN_ON(atomic_read(&cur_trans->num_writers) < 1); 462 + atomic_dec(&cur_trans->num_writers); 472 463 473 464 smp_mb(); 474 465 if (waitqueue_active(&cur_trans->writer_wait)) 475 466 wake_up(&cur_trans->writer_wait); 476 467 put_transaction(cur_trans); 477 - if (lock) 478 - mutex_unlock(&info->trans_mutex); 479 468 480 469 if (current->journal_info == trans) 481 470 current->journal_info = NULL; ··· 1181 1178 /* take transaction reference */ 1182 1179 mutex_lock(&root->fs_info->trans_mutex); 1183 1180 cur_trans = trans->transaction; 1184 - cur_trans->use_count++; 1181 + atomic_inc(&cur_trans->use_count); 1185 1182 mutex_unlock(&root->fs_info->trans_mutex); 1186 1183 1187 1184 btrfs_end_transaction(trans, root); ··· 1240 1237 1241 1238 mutex_lock(&root->fs_info->trans_mutex); 1242 1239 if (cur_trans->in_commit) { 1243 - cur_trans->use_count++; 1240 + atomic_inc(&cur_trans->use_count); 1244 1241 mutex_unlock(&root->fs_info->trans_mutex); 1245 1242 btrfs_end_transaction(trans, root); 1246 1243 ··· 1262 1259 prev_trans = list_entry(cur_trans->list.prev, 1263 1260 struct btrfs_transaction, list); 1264 1261 if (!prev_trans->commit_done) { 1265 - prev_trans->use_count++; 1262 + atomic_inc(&prev_trans->use_count); 1266 1263 mutex_unlock(&root->fs_info->trans_mutex); 1267 1264 1268 1265 wait_for_commit(root, prev_trans); ··· 1303 1300 TASK_UNINTERRUPTIBLE); 1304 1301 1305 1302 smp_mb(); 1306 - if (cur_trans->num_writers > 1) 1303 + if (atomic_read(&cur_trans->num_writers) > 1) 1307 1304 schedule_timeout(MAX_SCHEDULE_TIMEOUT); 1308 1305 else if (should_grow) 1309 1306 schedule_timeout(1); 1310 1307 1311 1308 mutex_lock(&root->fs_info->trans_mutex); 1312 1309 finish_wait(&cur_trans->writer_wait, &wait); 1313 - } while (cur_trans->num_writers > 1 || 1310 + } while (atomic_read(&cur_trans->num_writers) > 1 || 1314 1311 (should_grow && cur_trans->num_joined != joined)); 1315 1312 1316 1313 ret = create_pending_snapshots(trans, root->fs_info); ··· 1397 1394 1398 1395 wake_up(&cur_trans->commit_wait); 1399 1396 1397 + list_del_init(&cur_trans->list); 1400 1398 put_transaction(cur_trans); 1401 1399 put_transaction(cur_trans); 1402 1400
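transaction.c turns use_count and num_writers into atomic_t so put_transaction() no longer needs the trans_mutex: only the caller whose decrement reaches zero frees the transaction, and unlinking from the list moves to the commit path. A userspace analogue of that reference-drop pattern using C11 atomics; the struct and function names are illustrative only.

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct transaction {
	atomic_int use_count;
	/* ... payload ... */
};

static struct transaction *trans_alloc(void)
{
	struct transaction *t = calloc(1, sizeof(*t));

	if (t)
		atomic_store(&t->use_count, 1);	/* creator holds one reference */
	return t;
}

static void trans_get(struct transaction *t)
{
	atomic_fetch_add(&t->use_count, 1);
}

/* Only the caller that drops the last reference frees the object. */
static void trans_put(struct transaction *t)
{
	if (atomic_fetch_sub(&t->use_count, 1) == 1)
		free(t);
}

int main(void)
{
	struct transaction *t = trans_alloc();

	trans_get(t);	/* e.g. a joining writer takes a reference */
	trans_put(t);	/* writer done */
	trans_put(t);	/* last reference: freed here */
	return 0;
}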
+2 -2
fs/btrfs/transaction.h
··· 27 27 * total writers in this transaction, it must be zero before the 28 28 * transaction can end 29 29 */ 30 - unsigned long num_writers; 30 + atomic_t num_writers; 31 31 32 32 unsigned long num_joined; 33 33 int in_commit; 34 - int use_count; 34 + atomic_t use_count; 35 35 int commit_done; 36 36 int blocked; 37 37 struct list_head list;
+12 -21
fs/btrfs/xattr.c
··· 180 180 struct btrfs_path *path; 181 181 struct extent_buffer *leaf; 182 182 struct btrfs_dir_item *di; 183 - int ret = 0, slot, advance; 183 + int ret = 0, slot; 184 184 size_t total_size = 0, size_left = size; 185 185 unsigned long name_ptr; 186 186 size_t name_len; 187 - u32 nritems; 188 187 189 188 /* 190 189 * ok we want all objects associated with this id. ··· 203 204 ret = btrfs_search_slot(NULL, root, &key, path, 0, 0); 204 205 if (ret < 0) 205 206 goto err; 206 - advance = 0; 207 + 207 208 while (1) { 208 209 leaf = path->nodes[0]; 209 - nritems = btrfs_header_nritems(leaf); 210 210 slot = path->slots[0]; 211 211 212 212 /* this is where we start walking through the path */ 213 - if (advance || slot >= nritems) { 213 + if (slot >= btrfs_header_nritems(leaf)) { 214 214 /* 215 215 * if we've reached the last slot in this leaf we need 216 216 * to go to the next leaf and reset everything 217 217 */ 218 - if (slot >= nritems-1) { 219 - ret = btrfs_next_leaf(root, path); 220 - if (ret) 221 - break; 222 - leaf = path->nodes[0]; 223 - nritems = btrfs_header_nritems(leaf); 224 - slot = path->slots[0]; 225 - } else { 226 - /* 227 - * just walking through the slots on this leaf 228 - */ 229 - slot++; 230 - path->slots[0]++; 231 - } 218 + ret = btrfs_next_leaf(root, path); 219 + if (ret < 0) 220 + goto err; 221 + else if (ret > 0) 222 + break; 223 + continue; 232 224 } 233 - advance = 1; 234 225 235 226 btrfs_item_key_to_cpu(leaf, &found_key, slot); 236 227 ··· 239 250 240 251 /* we are just looking for how big our buffer needs to be */ 241 252 if (!size) 242 - continue; 253 + goto next; 243 254 244 255 if (!buffer || (name_len + 1) > size_left) { 245 256 ret = -ERANGE; ··· 252 263 253 264 size_left -= name_len + 1; 254 265 buffer += name_len + 1; 266 + next: 267 + path->slots[0]++; 255 268 } 256 269 ret = total_size; 257 270
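Both the readdir loop in inode.c and the xattr listing above drop the advance flag in favour of one loop shape: when the slot runs past the leaf, call btrfs_next_leaf() and continue; otherwise handle the item and bump path->slots[0] at a next: label, so an error from btrfs_next_leaf() is no longer swallowed. A compact sketch of that paging pattern over a made-up two-level structure:

#include <stdio.h>

#define ITEMS_PER_LEAF 4
#define NUM_LEAVES 3

static int leaves[NUM_LEAVES][ITEMS_PER_LEAF] = {
	{ 1, 2, 3, 4 }, { 5, 6, 7, 8 }, { 9, 10, 11, 12 },
};

struct path {
	int leaf;
	int slot;
};

/* Step to the next leaf; returns nonzero when there are no more leaves. */
static int next_leaf(struct path *p)
{
	if (p->leaf + 1 >= NUM_LEAVES)
		return 1;
	p->leaf++;
	p->slot = 0;
	return 0;
}

int main(void)
{
	struct path p = { 0, 0 };
	int pos = 6;	/* skip entries below this, like f_pos in readdir */

	while (1) {
		int item;

		if (p.slot >= ITEMS_PER_LEAF) {
			if (next_leaf(&p))
				break;	/* walked off the last leaf */
			continue;	/* re-check the fresh slot */
		}

		item = leaves[p.leaf][p.slot];
		if (item < pos)
			goto next;	/* too early, but still advance */
		printf("item %d\n", item);
next:
		p.slot++;		/* advance exactly once per item */
	}
	return 0;
}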
-16
fs/cifs/README
··· 685 685 support and want to map the uid and gid fields 686 686 to values supplied at mount (rather than the 687 687 actual values, then set this to zero. (default 1) 688 - Experimental When set to 1 used to enable certain experimental 689 - features (currently enables multipage writes 690 - when signing is enabled, the multipage write 691 - performance enhancement was disabled when 692 - signing turned on in case buffer was modified 693 - just before it was sent, also this flag will 694 - be used to use the new experimental directory change 695 - notification code). When set to 2 enables 696 - an additional experimental feature, "raw ntlmssp" 697 - session establishment support (which allows 698 - specifying "sec=ntlmssp" on mount). The Linux cifs 699 - module will use ntlmv2 authentication encapsulated 700 - in "raw ntlmssp" (not using SPNEGO) when 701 - "sec=ntlmssp" is specified on mount. 702 - This support also requires building cifs with 703 - the CONFIG_CIFS_EXPERIMENTAL configuration flag. 704 688 705 689 These experimental features and tracing can be enabled by changing flags in 706 690 /proc/fs/cifs (after the cifs module has been installed or built into the
+1 -1
fs/cifs/cache.c
··· 50 50 */ 51 51 struct cifs_server_key { 52 52 uint16_t family; /* address family */ 53 - uint16_t port; /* IP port */ 53 + __be16 port; /* IP port */ 54 54 union { 55 55 struct in_addr ipv4_addr; 56 56 struct in6_addr ipv6_addr;
-43
fs/cifs/cifs_debug.c
··· 423 423 static const struct file_operations traceSMB_proc_fops; 424 424 static const struct file_operations cifs_multiuser_mount_proc_fops; 425 425 static const struct file_operations cifs_security_flags_proc_fops; 426 - static const struct file_operations cifs_experimental_proc_fops; 427 426 static const struct file_operations cifs_linux_ext_proc_fops; 428 427 429 428 void ··· 440 441 proc_create("cifsFYI", 0, proc_fs_cifs, &cifsFYI_proc_fops); 441 442 proc_create("traceSMB", 0, proc_fs_cifs, &traceSMB_proc_fops); 442 443 proc_create("OplockEnabled", 0, proc_fs_cifs, &cifs_oplock_proc_fops); 443 - proc_create("Experimental", 0, proc_fs_cifs, 444 - &cifs_experimental_proc_fops); 445 444 proc_create("LinuxExtensionsEnabled", 0, proc_fs_cifs, 446 445 &cifs_linux_ext_proc_fops); 447 446 proc_create("MultiuserMount", 0, proc_fs_cifs, ··· 466 469 remove_proc_entry("OplockEnabled", proc_fs_cifs); 467 470 remove_proc_entry("SecurityFlags", proc_fs_cifs); 468 471 remove_proc_entry("LinuxExtensionsEnabled", proc_fs_cifs); 469 - remove_proc_entry("Experimental", proc_fs_cifs); 470 472 remove_proc_entry("LookupCacheEnabled", proc_fs_cifs); 471 473 remove_proc_entry("fs/cifs", NULL); 472 474 } ··· 544 548 .llseek = seq_lseek, 545 549 .release = single_release, 546 550 .write = cifs_oplock_proc_write, 547 - }; 548 - 549 - static int cifs_experimental_proc_show(struct seq_file *m, void *v) 550 - { 551 - seq_printf(m, "%d\n", experimEnabled); 552 - return 0; 553 - } 554 - 555 - static int cifs_experimental_proc_open(struct inode *inode, struct file *file) 556 - { 557 - return single_open(file, cifs_experimental_proc_show, NULL); 558 - } 559 - 560 - static ssize_t cifs_experimental_proc_write(struct file *file, 561 - const char __user *buffer, size_t count, loff_t *ppos) 562 - { 563 - char c; 564 - int rc; 565 - 566 - rc = get_user(c, buffer); 567 - if (rc) 568 - return rc; 569 - if (c == '0' || c == 'n' || c == 'N') 570 - experimEnabled = 0; 571 - else if (c == '1' || c == 'y' || c == 'Y') 572 - experimEnabled = 1; 573 - else if (c == '2') 574 - experimEnabled = 2; 575 - 576 - return count; 577 - } 578 - 579 - static const struct file_operations cifs_experimental_proc_fops = { 580 - .owner = THIS_MODULE, 581 - .open = cifs_experimental_proc_open, 582 - .read = seq_read, 583 - .llseek = seq_lseek, 584 - .release = single_release, 585 - .write = cifs_experimental_proc_write, 586 551 }; 587 552 588 553 static int cifs_linux_ext_proc_show(struct seq_file *m, void *v)
+2 -2
fs/cifs/cifs_spnego.c
··· 113 113 MAX_MECH_STR_LEN + 114 114 UID_KEY_LEN + (sizeof(uid_t) * 2) + 115 115 CREDUID_KEY_LEN + (sizeof(uid_t) * 2) + 116 - USER_KEY_LEN + strlen(sesInfo->userName) + 116 + USER_KEY_LEN + strlen(sesInfo->user_name) + 117 117 PID_KEY_LEN + (sizeof(pid_t) * 2) + 1; 118 118 119 119 spnego_key = ERR_PTR(-ENOMEM); ··· 153 153 sprintf(dp, ";creduid=0x%x", sesInfo->cred_uid); 154 154 155 155 dp = description + strlen(description); 156 - sprintf(dp, ";user=%s", sesInfo->userName); 156 + sprintf(dp, ";user=%s", sesInfo->user_name); 157 157 158 158 dp = description + strlen(description); 159 159 sprintf(dp, ";pid=0x%x", current->pid);
+17 -18
fs/cifs/cifs_unicode.c
··· 90 90 case UNI_COLON: 91 91 *target = ':'; 92 92 break; 93 - case UNI_ASTERIK: 93 + case UNI_ASTERISK: 94 94 *target = '*'; 95 95 break; 96 96 case UNI_QUESTION: ··· 264 264 * names are little endian 16 bit Unicode on the wire 265 265 */ 266 266 int 267 - cifsConvertToUCS(__le16 *target, const char *source, int maxlen, 267 + cifsConvertToUCS(__le16 *target, const char *source, int srclen, 268 268 const struct nls_table *cp, int mapChars) 269 269 { 270 270 int i, j, charlen; 271 - int len_remaining = maxlen; 272 271 char src_char; 273 - __u16 temp; 272 + __le16 dst_char; 273 + wchar_t tmp; 274 274 275 275 if (!mapChars) 276 276 return cifs_strtoUCS(target, source, PATH_MAX, cp); 277 277 278 - for (i = 0, j = 0; i < maxlen; j++) { 278 + for (i = 0, j = 0; i < srclen; j++) { 279 279 src_char = source[i]; 280 280 switch (src_char) { 281 281 case 0: 282 - put_unaligned_le16(0, &target[j]); 282 + put_unaligned(0, &target[j]); 283 283 goto ctoUCS_out; 284 284 case ':': 285 - temp = UNI_COLON; 285 + dst_char = cpu_to_le16(UNI_COLON); 286 286 break; 287 287 case '*': 288 - temp = UNI_ASTERIK; 288 + dst_char = cpu_to_le16(UNI_ASTERISK); 289 289 break; 290 290 case '?': 291 - temp = UNI_QUESTION; 291 + dst_char = cpu_to_le16(UNI_QUESTION); 292 292 break; 293 293 case '<': 294 - temp = UNI_LESSTHAN; 294 + dst_char = cpu_to_le16(UNI_LESSTHAN); 295 295 break; 296 296 case '>': 297 - temp = UNI_GRTRTHAN; 297 + dst_char = cpu_to_le16(UNI_GRTRTHAN); 298 298 break; 299 299 case '|': 300 - temp = UNI_PIPE; 300 + dst_char = cpu_to_le16(UNI_PIPE); 301 301 break; 302 302 /* 303 303 * FIXME: We can not handle remapping backslash (UNI_SLASH) ··· 305 305 * as they use backslash as separator. 306 306 */ 307 307 default: 308 - charlen = cp->char2uni(source+i, len_remaining, 309 - &temp); 308 + charlen = cp->char2uni(source + i, srclen - i, &tmp); 309 + dst_char = cpu_to_le16(tmp); 310 + 310 311 /* 311 312 * if no match, use question mark, which at least in 312 313 * some cases serves as wild card 313 314 */ 314 315 if (charlen < 1) { 315 - temp = 0x003f; 316 + dst_char = cpu_to_le16(0x003f); 316 317 charlen = 1; 317 318 } 318 - len_remaining -= charlen; 319 319 /* 320 320 * character may take more than one byte in the source 321 321 * string, but will take exactly two bytes in the ··· 324 324 i += charlen; 325 325 continue; 326 326 } 327 - put_unaligned_le16(temp, &target[j]); 327 + put_unaligned(dst_char, &target[j]); 328 328 i++; /* move to next char in source string */ 329 - len_remaining--; 330 329 } 331 330 332 331 ctoUCS_out:
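cifs_unicode.c now builds each destination character as an explicit little-endian value and maps the characters NTFS refuses to store (: * ? < > |) into the 0xF000 private range, with UNI_ASTERIK renamed to UNI_ASTERISK. A stand-alone sketch of that remap table; the 0xF000 offset follows cifs_unicode.h, everything else here is illustrative.

#include <stdio.h>
#include <stdint.h>

/* Reserved characters are stored as 0xF000 + the ASCII code, as in cifs_unicode.h. */
static uint16_t remap_reserved(char c)
{
	switch (c) {
	case ':': case '*': case '?':
	case '<': case '>': case '|':
		return 0xF000 + (unsigned char)c;
	default:
		return (unsigned char)c;	/* plain ASCII passes through */
	}
}

int main(void)
{
	const char *name = "a:b*c";
	const char *p;

	for (p = name; *p; p++)
		printf("'%c' -> 0x%04x\n", *p, remap_reserved(*p));
	return 0;
}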
+1 -1
fs/cifs/cifs_unicode.h
··· 44 44 * reserved symbols (along with \ and /), otherwise illegal to store 45 45 * in filenames in NTFS 46 46 */ 47 - #define UNI_ASTERIK (__u16) ('*' + 0xF000) 47 + #define UNI_ASTERISK (__u16) ('*' + 0xF000) 48 48 #define UNI_QUESTION (__u16) ('?' + 0xF000) 49 49 #define UNI_COLON (__u16) (':' + 0xF000) 50 50 #define UNI_GRTRTHAN (__u16) ('>' + 0xF000)
+12 -9
fs/cifs/cifsencrypt.c
··· 30 30 #include <linux/ctype.h> 31 31 #include <linux/random.h> 32 32 33 - /* Calculate and return the CIFS signature based on the mac key and SMB PDU */ 34 - /* the 16 byte signature must be allocated by the caller */ 35 - /* Note we only use the 1st eight bytes */ 36 - /* Note that the smb header signature field on input contains the 37 - sequence number before this function is called */ 38 - 33 + /* 34 + * Calculate and return the CIFS signature based on the mac key and SMB PDU. 35 + * The 16 byte signature must be allocated by the caller. Note we only use the 36 + * 1st eight bytes and that the smb header signature field on input contains 37 + * the sequence number before this function is called. Also, this function 38 + * should be called with the server->srv_mutex held. 39 + */ 39 40 static int cifs_calculate_signature(const struct smb_hdr *cifs_pdu, 40 41 struct TCP_Server_Info *server, char *signature) 41 42 { ··· 210 209 cpu_to_le32(expected_sequence_number); 211 210 cifs_pdu->Signature.Sequence.Reserved = 0; 212 211 212 + mutex_lock(&server->srv_mutex); 213 213 rc = cifs_calculate_signature(cifs_pdu, server, 214 214 what_we_think_sig_should_be); 215 + mutex_unlock(&server->srv_mutex); 215 216 216 217 if (rc) 217 218 return rc; ··· 472 469 return rc; 473 470 } 474 471 475 - /* convert ses->userName to unicode and uppercase */ 476 - len = strlen(ses->userName); 472 + /* convert ses->user_name to unicode and uppercase */ 473 + len = strlen(ses->user_name); 477 474 user = kmalloc(2 + (len * 2), GFP_KERNEL); 478 475 if (user == NULL) { 479 476 cERROR(1, "calc_ntlmv2_hash: user mem alloc failure\n"); 480 477 rc = -ENOMEM; 481 478 goto calc_exit_2; 482 479 } 483 - len = cifs_strtoUCS((__le16 *)user, ses->userName, len, nls_cp); 480 + len = cifs_strtoUCS((__le16 *)user, ses->user_name, len, nls_cp); 484 481 UniStrupr(user); 485 482 486 483 crypto_shash_update(&ses->server->secmech.sdeschmacmd5->shash,
+3 -3
fs/cifs/cifsfs.c
··· 53 53 int cifsERROR = 1; 54 54 int traceSMB = 0; 55 55 unsigned int oplockEnabled = 1; 56 - unsigned int experimEnabled = 0; 57 56 unsigned int linuxExtEnabled = 1; 58 57 unsigned int lookupCacheEnabled = 1; 59 58 unsigned int multiuser_mount = 0; ··· 126 127 kfree(cifs_sb); 127 128 return rc; 128 129 } 130 + cifs_sb->bdi.ra_pages = default_backing_dev_info.ra_pages; 129 131 130 132 #ifdef CONFIG_CIFS_DFS_UPCALL 131 133 /* copy mount params to sb for use in submounts */ ··· 409 409 410 410 if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MULTIUSER) 411 411 seq_printf(s, ",multiuser"); 412 - else if (tcon->ses->userName) 413 - seq_printf(s, ",username=%s", tcon->ses->userName); 412 + else if (tcon->ses->user_name) 413 + seq_printf(s, ",username=%s", tcon->ses->user_name); 414 414 415 415 if (tcon->ses->domainName) 416 416 seq_printf(s, ",domain=%s", tcon->ses->domainName);
+6 -7
fs/cifs/cifsglob.h
··· 37 37 38 38 #define MAX_TREE_SIZE (2 + MAX_SERVER_SIZE + 1 + MAX_SHARE_SIZE + 1) 39 39 #define MAX_SERVER_SIZE 15 40 - #define MAX_SHARE_SIZE 64 /* used to be 20, this should still be enough */ 41 - #define MAX_USERNAME_SIZE 32 /* 32 is to allow for 15 char names + null 42 - termination then *2 for unicode versions */ 43 - #define MAX_PASSWORD_SIZE 512 /* max for windows seems to be 256 wide chars */ 40 + #define MAX_SHARE_SIZE 80 41 + #define MAX_USERNAME_SIZE 256 /* reasonable maximum for current servers */ 42 + #define MAX_PASSWORD_SIZE 512 /* max for windows seems to be 256 wide chars */ 44 43 45 44 #define CIFS_MIN_RCV_POOL 4 46 45 ··· 91 92 CifsNew = 0, 92 93 CifsGood, 93 94 CifsExiting, 94 - CifsNeedReconnect 95 + CifsNeedReconnect, 96 + CifsNeedNegotiate 95 97 }; 96 98 97 99 enum securityEnum { ··· 274 274 int capabilities; 275 275 char serverName[SERVER_NAME_LEN_WITH_NULL * 2]; /* BB make bigger for 276 276 TCP names - will ipv6 and sctp addresses fit? */ 277 - char userName[MAX_USERNAME_SIZE + 1]; 277 + char *user_name; 278 278 char *domainName; 279 279 char *password; 280 280 struct session_key auth_key; ··· 817 817 have the uid/password or Kerberos credential 818 818 or equivalent for current user */ 819 819 GLOBAL_EXTERN unsigned int oplockEnabled; 820 - GLOBAL_EXTERN unsigned int experimEnabled; 821 820 GLOBAL_EXTERN unsigned int lookupCacheEnabled; 822 821 GLOBAL_EXTERN unsigned int global_secflags; /* if on, session setup sent 823 822 with more secure ntlmssp2 challenge/resp */
+7 -7
fs/cifs/cifssmb.c
··· 142 142 */ 143 143 while (server->tcpStatus == CifsNeedReconnect) { 144 144 wait_event_interruptible_timeout(server->response_q, 145 - (server->tcpStatus == CifsGood), 10 * HZ); 145 + (server->tcpStatus != CifsNeedReconnect), 10 * HZ); 146 146 147 - /* is TCP session is reestablished now ?*/ 147 + /* are we still trying to reconnect? */ 148 148 if (server->tcpStatus != CifsNeedReconnect) 149 149 break; 150 150 ··· 729 729 return rc; 730 730 731 731 /* set up echo request */ 732 - smb->hdr.Tid = cpu_to_le16(0xffff); 732 + smb->hdr.Tid = 0xffff; 733 733 smb->hdr.WordCount = 1; 734 734 put_unaligned_le16(1, &smb->EchoCount); 735 735 put_bcc_le(1, &smb->hdr); ··· 1884 1884 __constant_cpu_to_le16(CIFS_WRLCK)) 1885 1885 pLockData->fl_type = F_WRLCK; 1886 1886 1887 - pLockData->fl_start = parm_data->start; 1888 - pLockData->fl_end = parm_data->start + 1889 - parm_data->length - 1; 1890 - pLockData->fl_pid = parm_data->pid; 1887 + pLockData->fl_start = le64_to_cpu(parm_data->start); 1888 + pLockData->fl_end = pLockData->fl_start + 1889 + le64_to_cpu(parm_data->length) - 1; 1890 + pLockData->fl_pid = le32_to_cpu(parm_data->pid); 1891 1891 } 1892 1892 } 1893 1893
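The cifssmb.c hunk converts the lock reply fields with le64_to_cpu()/le32_to_cpu() before filling struct file_lock, and cache.c keeps the fscache key's port as __be16, both fixing sparse endianness warnings. A userspace illustration of why little-endian wire fields need an explicit conversion regardless of host byte order; the byte layout here is hypothetical.

#include <stdio.h>
#include <stdint.h>

/* Interpret an 8-byte little-endian field regardless of host endianness. */
static uint64_t le64_to_host(const unsigned char *p)
{
	uint64_t v = 0;
	int i;

	for (i = 7; i >= 0; i--)
		v = (v << 8) | p[i];
	return v;
}

int main(void)
{
	/* wire bytes for start=4096 and length=512, little endian */
	unsigned char wire[16] = { 0 };

	wire[1] = 0x10;		/* 4096 = 0x1000 */
	wire[9] = 0x02;		/* 512  = 0x0200 */

	uint64_t start = le64_to_host(wire);
	uint64_t length = le64_to_host(wire + 8);

	printf("lock start=%llu end=%llu\n",
	       (unsigned long long)start,
	       (unsigned long long)(start + length - 1));
	return 0;
}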
+42 -26
fs/cifs/connect.c
··· 199 199 } 200 200 spin_unlock(&GlobalMid_Lock); 201 201 202 - while ((server->tcpStatus != CifsExiting) && 203 - (server->tcpStatus != CifsGood)) { 202 + while (server->tcpStatus == CifsNeedReconnect) { 204 203 try_to_freeze(); 205 204 206 205 /* we should try only the port we connected to before */ ··· 211 212 atomic_inc(&tcpSesReconnectCount); 212 213 spin_lock(&GlobalMid_Lock); 213 214 if (server->tcpStatus != CifsExiting) 214 - server->tcpStatus = CifsGood; 215 + server->tcpStatus = CifsNeedNegotiate; 215 216 spin_unlock(&GlobalMid_Lock); 216 217 } 217 218 } ··· 247 248 total_data_size = get_unaligned_le16(&pSMBt->t2_rsp.TotalDataCount); 248 249 data_in_this_rsp = get_unaligned_le16(&pSMBt->t2_rsp.DataCount); 249 250 250 - remaining = total_data_size - data_in_this_rsp; 251 - 252 - if (remaining == 0) 251 + if (total_data_size == data_in_this_rsp) 253 252 return 0; 254 - else if (remaining < 0) { 253 + else if (total_data_size < data_in_this_rsp) { 255 254 cFYI(1, "total data %d smaller than data in frame %d", 256 255 total_data_size, data_in_this_rsp); 257 256 return -EINVAL; 258 - } else { 259 - cFYI(1, "missing %d bytes from transact2, check next response", 260 - remaining); 261 - if (total_data_size > maxBufSize) { 262 - cERROR(1, "TotalDataSize %d is over maximum buffer %d", 263 - total_data_size, maxBufSize); 264 - return -EINVAL; 265 - } 266 - return remaining; 267 257 } 258 + 259 + remaining = total_data_size - data_in_this_rsp; 260 + 261 + cFYI(1, "missing %d bytes from transact2, check next response", 262 + remaining); 263 + if (total_data_size > maxBufSize) { 264 + cERROR(1, "TotalDataSize %d is over maximum buffer %d", 265 + total_data_size, maxBufSize); 266 + return -EINVAL; 267 + } 268 + return remaining; 268 269 } 269 270 270 271 static int coalesce_t2(struct smb_hdr *psecond, struct smb_hdr *pTargetSMB) ··· 420 421 pdu_length = 4; /* enough to get RFC1001 header */ 421 422 422 423 incomplete_rcv: 423 - if (echo_retries > 0 && 424 + if (echo_retries > 0 && server->tcpStatus == CifsGood && 424 425 time_after(jiffies, server->lstrp + 425 426 (echo_retries * SMB_ECHO_INTERVAL))) { 426 427 cERROR(1, "Server %s has not responded in %d seconds. 
" ··· 880 881 /* null user, ie anonymous, authentication */ 881 882 vol->nullauth = 1; 882 883 } 883 - if (strnlen(value, 200) < 200) { 884 + if (strnlen(value, MAX_USERNAME_SIZE) < 885 + MAX_USERNAME_SIZE) { 884 886 vol->username = value; 885 887 } else { 886 888 printk(KERN_WARNING "CIFS: username too long\n"); ··· 1472 1472 static bool 1473 1473 match_port(struct TCP_Server_Info *server, struct sockaddr *addr) 1474 1474 { 1475 - unsigned short int port, *sport; 1475 + __be16 port, *sport; 1476 1476 1477 1477 switch (addr->sa_family) { 1478 1478 case AF_INET: ··· 1765 1765 module_put(THIS_MODULE); 1766 1766 goto out_err_crypto_release; 1767 1767 } 1768 + tcp_ses->tcpStatus = CifsNeedNegotiate; 1768 1769 1769 1770 /* thread spawned, put it on the list */ 1770 1771 spin_lock(&cifs_tcp_ses_lock); ··· 1809 1808 break; 1810 1809 default: 1811 1810 /* anything else takes username/password */ 1812 - if (strncmp(ses->userName, vol->username, 1811 + if (ses->user_name == NULL) 1812 + continue; 1813 + if (strncmp(ses->user_name, vol->username, 1813 1814 MAX_USERNAME_SIZE)) 1814 1815 continue; 1815 1816 if (strlen(vol->username) != 0 && ··· 1853 1850 sesInfoFree(ses); 1854 1851 cifs_put_tcp_session(server); 1855 1852 } 1853 + 1854 + static bool warned_on_ntlm; /* globals init to false automatically */ 1856 1855 1857 1856 static struct cifsSesInfo * 1858 1857 cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb_vol *volume_info) ··· 1911 1906 else 1912 1907 sprintf(ses->serverName, "%pI4", &addr->sin_addr); 1913 1908 1914 - if (volume_info->username) 1915 - strncpy(ses->userName, volume_info->username, 1916 - MAX_USERNAME_SIZE); 1909 + if (volume_info->username) { 1910 + ses->user_name = kstrdup(volume_info->username, GFP_KERNEL); 1911 + if (!ses->user_name) 1912 + goto get_ses_fail; 1913 + } 1917 1914 1918 1915 /* volume_info->password freed at unmount */ 1919 1916 if (volume_info->password) { ··· 1930 1923 } 1931 1924 ses->cred_uid = volume_info->cred_uid; 1932 1925 ses->linux_uid = volume_info->linux_uid; 1926 + 1927 + /* ntlmv2 is much stronger than ntlm security, and has been broadly 1928 + supported for many years, time to update default security mechanism */ 1929 + if ((volume_info->secFlg == 0) && warned_on_ntlm == false) { 1930 + warned_on_ntlm = true; 1931 + cERROR(1, "default security mechanism requested. The default " 1932 + "security mechanism will be upgraded from ntlm to " 1933 + "ntlmv2 in kernel release 2.6.41"); 1934 + } 1933 1935 ses->overrideSecFlg = volume_info->secFlg; 1934 1936 1935 1937 mutex_lock(&ses->session_mutex); ··· 2292 2276 generic_ip_connect(struct TCP_Server_Info *server) 2293 2277 { 2294 2278 int rc = 0; 2295 - unsigned short int sport; 2279 + __be16 sport; 2296 2280 int slen, sfamily; 2297 2281 struct socket *socket = server->ssocket; 2298 2282 struct sockaddr *saddr; ··· 2377 2361 static int 2378 2362 ip_connect(struct TCP_Server_Info *server) 2379 2363 { 2380 - unsigned short int *sport; 2364 + __be16 *sport; 2381 2365 struct sockaddr_in6 *addr6 = (struct sockaddr_in6 *)&server->dstaddr; 2382 2366 struct sockaddr_in *addr = (struct sockaddr_in *)&server->dstaddr; 2383 2367 ··· 2842 2826 2843 2827 remote_path_check: 2844 2828 /* check if a whole path (including prepath) is not remote */ 2845 - if (!rc && cifs_sb->prepathlen && tcon) { 2829 + if (!rc && tcon) { 2846 2830 /* build_path_to_root works only when we have a valid tcon */ 2847 2831 full_path = cifs_build_path_to_root(cifs_sb, tcon); 2848 2832 if (full_path == NULL) {
+36 -32
fs/cifs/file.c
··· 575 575 576 576 int cifs_close(struct inode *inode, struct file *file) 577 577 { 578 - cifsFileInfo_put(file->private_data); 579 - file->private_data = NULL; 578 + if (file->private_data != NULL) { 579 + cifsFileInfo_put(file->private_data); 580 + file->private_data = NULL; 581 + } 580 582 581 583 /* return code from the ->release op is always ignored */ 582 584 return 0; ··· 972 970 total_written += bytes_written) { 973 971 rc = -EAGAIN; 974 972 while (rc == -EAGAIN) { 973 + struct kvec iov[2]; 974 + unsigned int len; 975 + 975 976 if (open_file->invalidHandle) { 976 977 /* we could deadlock if we called 977 978 filemap_fdatawait from here so tell ··· 984 979 if (rc != 0) 985 980 break; 986 981 } 987 - if (experimEnabled || (pTcon->ses->server && 988 - ((pTcon->ses->server->secMode & 989 - (SECMODE_SIGN_REQUIRED | SECMODE_SIGN_ENABLED)) 990 - == 0))) { 991 - struct kvec iov[2]; 992 - unsigned int len; 993 982 994 - len = min((size_t)cifs_sb->wsize, 995 - write_size - total_written); 996 - /* iov[0] is reserved for smb header */ 997 - iov[1].iov_base = (char *)write_data + 998 - total_written; 999 - iov[1].iov_len = len; 1000 - rc = CIFSSMBWrite2(xid, pTcon, 1001 - open_file->netfid, len, 1002 - *poffset, &bytes_written, 1003 - iov, 1, 0); 1004 - } else 1005 - rc = CIFSSMBWrite(xid, pTcon, 1006 - open_file->netfid, 1007 - min_t(const int, cifs_sb->wsize, 1008 - write_size - total_written), 1009 - *poffset, &bytes_written, 1010 - write_data + total_written, 1011 - NULL, 0); 983 + len = min((size_t)cifs_sb->wsize, 984 + write_size - total_written); 985 + /* iov[0] is reserved for smb header */ 986 + iov[1].iov_base = (char *)write_data + total_written; 987 + iov[1].iov_len = len; 988 + rc = CIFSSMBWrite2(xid, pTcon, open_file->netfid, len, 989 + *poffset, &bytes_written, iov, 1, 0); 1012 990 } 1013 991 if (rc || (bytes_written == 0)) { 1014 992 if (total_written) ··· 1228 1240 } 1229 1241 1230 1242 tcon = tlink_tcon(open_file->tlink); 1231 - if (!experimEnabled && tcon->ses->server->secMode & 1232 - (SECMODE_SIGN_REQUIRED | SECMODE_SIGN_ENABLED)) { 1233 - cifsFileInfo_put(open_file); 1234 - kfree(iov); 1235 - return generic_writepages(mapping, wbc); 1236 - } 1237 1243 cifsFileInfo_put(open_file); 1238 1244 1239 1245 xid = GetXid(); ··· 1962 1980 return total_read; 1963 1981 } 1964 1982 1983 + /* 1984 + * If the page is mmap'ed into a process' page tables, then we need to make 1985 + * sure that it doesn't change while being written back. 1986 + */ 1987 + static int 1988 + cifs_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf) 1989 + { 1990 + struct page *page = vmf->page; 1991 + 1992 + lock_page(page); 1993 + return VM_FAULT_LOCKED; 1994 + } 1995 + 1996 + static struct vm_operations_struct cifs_file_vm_ops = { 1997 + .fault = filemap_fault, 1998 + .page_mkwrite = cifs_page_mkwrite, 1999 + }; 2000 + 1965 2001 int cifs_file_strict_mmap(struct file *file, struct vm_area_struct *vma) 1966 2002 { 1967 2003 int rc, xid; ··· 1991 1991 cifs_invalidate_mapping(inode); 1992 1992 1993 1993 rc = generic_file_mmap(file, vma); 1994 + if (rc == 0) 1995 + vma->vm_ops = &cifs_file_vm_ops; 1994 1996 FreeXid(xid); 1995 1997 return rc; 1996 1998 } ··· 2009 2007 return rc; 2010 2008 } 2011 2009 rc = generic_file_mmap(file, vma); 2010 + if (rc == 0) 2011 + vma->vm_ops = &cifs_file_vm_ops; 2012 2012 FreeXid(xid); 2013 2013 return rc; 2014 2014 }
+2 -2
fs/cifs/link.c
··· 239 239 if (rc != 0) 240 240 return rc; 241 241 242 - if (file_info.EndOfFile != CIFS_MF_SYMLINK_FILE_SIZE) { 242 + if (file_info.EndOfFile != cpu_to_le64(CIFS_MF_SYMLINK_FILE_SIZE)) { 243 243 CIFSSMBClose(xid, tcon, netfid); 244 244 /* it's not a symlink */ 245 245 return -EINVAL; ··· 316 316 if (rc != 0) 317 317 goto out; 318 318 319 - if (file_info.EndOfFile != CIFS_MF_SYMLINK_FILE_SIZE) { 319 + if (file_info.EndOfFile != cpu_to_le64(CIFS_MF_SYMLINK_FILE_SIZE)) { 320 320 CIFSSMBClose(xid, pTcon, netfid); 321 321 /* it's not a symlink */ 322 322 goto out;
+2 -1
fs/cifs/misc.c
··· 100 100 memset(buf_to_free->password, 0, strlen(buf_to_free->password)); 101 101 kfree(buf_to_free->password); 102 102 } 103 + kfree(buf_to_free->user_name); 103 104 kfree(buf_to_free->domainName); 104 105 kfree(buf_to_free); 105 106 } ··· 521 520 (struct smb_com_transaction_change_notify_rsp *)buf; 522 521 struct file_notify_information *pnotify; 523 522 __u32 data_offset = 0; 524 - if (pSMBr->ByteCount > sizeof(struct file_notify_information)) { 523 + if (get_bcc_le(buf) > sizeof(struct file_notify_information)) { 525 524 data_offset = le32_to_cpu(pSMBr->DataOffset); 526 525 527 526 pnotify = (struct file_notify_information *)
+11 -12
fs/cifs/sess.c
··· 219 219 bcc_ptr++; 220 220 } */ 221 221 /* copy user */ 222 - if (ses->userName == NULL) { 222 + if (ses->user_name == NULL) { 223 223 /* null user mount */ 224 224 *bcc_ptr = 0; 225 225 *(bcc_ptr+1) = 0; 226 226 } else { 227 - bytes_ret = cifs_strtoUCS((__le16 *) bcc_ptr, ses->userName, 227 + bytes_ret = cifs_strtoUCS((__le16 *) bcc_ptr, ses->user_name, 228 228 MAX_USERNAME_SIZE, nls_cp); 229 229 } 230 230 bcc_ptr += 2 * bytes_ret; ··· 244 244 /* copy user */ 245 245 /* BB what about null user mounts - check that we do this BB */ 246 246 /* copy user */ 247 - if (ses->userName == NULL) { 248 - /* BB what about null user mounts - check that we do this BB */ 249 - } else { 250 - strncpy(bcc_ptr, ses->userName, MAX_USERNAME_SIZE); 251 - } 252 - bcc_ptr += strnlen(ses->userName, MAX_USERNAME_SIZE); 247 + if (ses->user_name != NULL) 248 + strncpy(bcc_ptr, ses->user_name, MAX_USERNAME_SIZE); 249 + /* else null user mount */ 250 + 251 + bcc_ptr += strnlen(ses->user_name, MAX_USERNAME_SIZE); 253 252 *bcc_ptr = 0; 254 253 bcc_ptr++; /* account for null termination */ 255 254 ··· 404 405 /* BB spec says that if AvId field of MsvAvTimestamp is populated then 405 406 we must set the MIC field of the AUTHENTICATE_MESSAGE */ 406 407 ses->ntlmssp->server_flags = le32_to_cpu(pblob->NegotiateFlags); 407 - tioffset = cpu_to_le16(pblob->TargetInfoArray.BufferOffset); 408 - tilen = cpu_to_le16(pblob->TargetInfoArray.Length); 408 + tioffset = le32_to_cpu(pblob->TargetInfoArray.BufferOffset); 409 + tilen = le16_to_cpu(pblob->TargetInfoArray.Length); 409 410 if (tilen) { 410 411 ses->auth_key.response = kmalloc(tilen, GFP_KERNEL); 411 412 if (!ses->auth_key.response) { ··· 522 523 tmp += len; 523 524 } 524 525 525 - if (ses->userName == NULL) { 526 + if (ses->user_name == NULL) { 526 527 sec_blob->UserName.BufferOffset = cpu_to_le32(tmp - pbuffer); 527 528 sec_blob->UserName.Length = 0; 528 529 sec_blob->UserName.MaximumLength = 0; 529 530 tmp += 2; 530 531 } else { 531 532 int len; 532 - len = cifs_strtoUCS((__le16 *)tmp, ses->userName, 533 + len = cifs_strtoUCS((__le16 *)tmp, ses->user_name, 533 534 MAX_USERNAME_SIZE, nls_cp); 534 535 len *= 2; /* unicode is 2 bytes each */ 535 536 sec_blob->UserName.BufferOffset = cpu_to_le32(tmp - pbuffer);
+1 -1
fs/dcache.c
··· 2131 2131 */ 2132 2132 void dentry_update_name_case(struct dentry *dentry, struct qstr *name) 2133 2133 { 2134 - BUG_ON(!mutex_is_locked(&dentry->d_inode->i_mutex)); 2134 + BUG_ON(!mutex_is_locked(&dentry->d_parent->d_inode->i_mutex)); 2135 2135 BUG_ON(dentry->d_name.len != name->len); /* d_lookup gives this */ 2136 2136 2137 2137 spin_lock(&dentry->d_lock);
+1
fs/fhandle.c
··· 7 7 #include <linux/exportfs.h> 8 8 #include <linux/fs_struct.h> 9 9 #include <linux/fsnotify.h> 10 + #include <linux/personality.h> 10 11 #include <asm/uaccess.h> 11 12 #include "internal.h" 12 13
+1 -2
fs/filesystems.c
··· 110 110 *tmp = fs->next; 111 111 fs->next = NULL; 112 112 write_unlock(&file_systems_lock); 113 + synchronize_rcu(); 113 114 return 0; 114 115 } 115 116 tmp = &(*tmp)->next; 116 117 } 117 118 write_unlock(&file_systems_lock); 118 - 119 - synchronize_rcu(); 120 119 121 120 return -EINVAL; 122 121 }
+1
fs/namei.c
··· 697 697 do { 698 698 seq = read_seqcount_begin(&fs->seq); 699 699 nd->root = fs->root; 700 + nd->seq = __read_seqcount_begin(&nd->root.dentry->d_seq); 700 701 } while (read_seqcount_retry(&fs->seq, seq)); 701 702 } 702 703 }
-16
fs/namespace.c
··· 1030 1030 .show = show_vfsmnt 1031 1031 }; 1032 1032 1033 - static int uuid_is_nil(u8 *uuid) 1034 - { 1035 - int i; 1036 - u8 *cp = (u8 *)uuid; 1037 - 1038 - for (i = 0; i < 16; i++) { 1039 - if (*cp++) 1040 - return 0; 1041 - } 1042 - return 1; 1043 - } 1044 - 1045 1033 static int show_mountinfo(struct seq_file *m, void *v) 1046 1034 { 1047 1035 struct proc_mounts *p = m->private; ··· 1072 1084 } 1073 1085 if (IS_MNT_UNBINDABLE(mnt)) 1074 1086 seq_puts(m, " unbindable"); 1075 - 1076 - if (!uuid_is_nil(mnt->mnt_sb->s_uuid)) 1077 - /* print the uuid */ 1078 - seq_printf(m, " uuid:%pU", mnt->mnt_sb->s_uuid); 1079 1087 1080 1088 /* Filesystem specific data */ 1081 1089 seq_puts(m, " - ");
+4 -2
fs/nfs/write.c
··· 542 542 if (!nfs_need_commit(nfsi)) 543 543 return 0; 544 544 545 + spin_lock(&inode->i_lock); 545 546 ret = nfs_scan_list(nfsi, dst, idx_start, npages, NFS_PAGE_TAG_COMMIT); 546 547 if (ret > 0) 547 548 nfsi->ncommit -= ret; 549 + spin_unlock(&inode->i_lock); 550 + 548 551 if (nfs_need_commit(NFS_I(inode))) 549 552 __mark_inode_dirty(inode, I_DIRTY_DATASYNC); 553 + 550 554 return ret; 551 555 } 552 556 #else ··· 1487 1483 res = nfs_commit_set_lock(NFS_I(inode), may_wait); 1488 1484 if (res <= 0) 1489 1485 goto out_mark_dirty; 1490 - spin_lock(&inode->i_lock); 1491 1486 res = nfs_scan_commit(inode, &head, 0, 0); 1492 - spin_unlock(&inode->i_lock); 1493 1487 if (res) { 1494 1488 int error; 1495 1489
+12 -4
fs/partitions/ldm.c
··· 1299 1299 1300 1300 BUG_ON (!data || !frags); 1301 1301 1302 + if (size < 2 * VBLK_SIZE_HEAD) { 1303 + ldm_error("Value of size is to small."); 1304 + return false; 1305 + } 1306 + 1302 1307 group = get_unaligned_be32(data + 0x08); 1303 1308 rec = get_unaligned_be16(data + 0x0C); 1304 1309 num = get_unaligned_be16(data + 0x0E); 1305 1310 if ((num < 1) || (num > 4)) { 1306 1311 ldm_error ("A VBLK claims to have %d parts.", num); 1312 + return false; 1313 + } 1314 + if (rec >= num) { 1315 + ldm_error("REC value (%d) exceeds NUM value (%d)", rec, num); 1307 1316 return false; 1308 1317 } 1309 1318 ··· 1343 1334 1344 1335 f->map |= (1 << rec); 1345 1336 1346 - if (num > 0) { 1347 - data += VBLK_SIZE_HEAD; 1348 - size -= VBLK_SIZE_HEAD; 1349 - } 1337 + data += VBLK_SIZE_HEAD; 1338 + size -= VBLK_SIZE_HEAD; 1339 + 1350 1340 memcpy (f->data+rec*(size-VBLK_SIZE_HEAD)+VBLK_SIZE_HEAD, data, size); 1351 1341 1352 1342 return true;
+7 -2
fs/proc/base.c
··· 3124 3124 /* for the /proc/ directory itself, after non-process stuff has been done */ 3125 3125 int proc_pid_readdir(struct file * filp, void * dirent, filldir_t filldir) 3126 3126 { 3127 - unsigned int nr = filp->f_pos - FIRST_PROCESS_ENTRY; 3128 - struct task_struct *reaper = get_proc_task(filp->f_path.dentry->d_inode); 3127 + unsigned int nr; 3128 + struct task_struct *reaper; 3129 3129 struct tgid_iter iter; 3130 3130 struct pid_namespace *ns; 3131 3131 3132 + if (filp->f_pos >= PID_MAX_LIMIT + TGID_OFFSET) 3133 + goto out_no_task; 3134 + nr = filp->f_pos - FIRST_PROCESS_ENTRY; 3135 + 3136 + reaper = get_proc_task(filp->f_path.dentry->d_inode); 3132 3137 if (!reaper) 3133 3138 goto out_no_task; 3134 3139
+1
fs/ramfs/file-nommu.c
··· 112 112 SetPageDirty(page); 113 113 114 114 unlock_page(page); 115 + put_page(page); 115 116 } 116 117 117 118 return 0;
+91 -55
fs/ubifs/debug.h
··· 23 23 #ifndef __UBIFS_DEBUG_H__ 24 24 #define __UBIFS_DEBUG_H__ 25 25 26 + /* Checking helper functions */ 27 + typedef int (*dbg_leaf_callback)(struct ubifs_info *c, 28 + struct ubifs_zbranch *zbr, void *priv); 29 + typedef int (*dbg_znode_callback)(struct ubifs_info *c, 30 + struct ubifs_znode *znode, void *priv); 31 + 26 32 #ifdef CONFIG_UBIFS_FS_DEBUG 27 33 28 34 /** ··· 276 270 void dbg_dump_index(struct ubifs_info *c); 277 271 void dbg_dump_lpt_lebs(const struct ubifs_info *c); 278 272 279 - /* Checking helper functions */ 280 - typedef int (*dbg_leaf_callback)(struct ubifs_info *c, 281 - struct ubifs_zbranch *zbr, void *priv); 282 - typedef int (*dbg_znode_callback)(struct ubifs_info *c, 283 - struct ubifs_znode *znode, void *priv); 284 273 int dbg_walk_index(struct ubifs_info *c, dbg_leaf_callback leaf_cb, 285 274 dbg_znode_callback znode_cb, void *priv); 286 275 ··· 296 295 int dbg_check_filesystem(struct ubifs_info *c); 297 296 void dbg_check_heap(struct ubifs_info *c, struct ubifs_lpt_heap *heap, int cat, 298 297 int add_pos); 299 - int dbg_check_lprops(struct ubifs_info *c); 300 298 int dbg_check_lpt_nodes(struct ubifs_info *c, struct ubifs_cnode *cnode, 301 299 int row, int col); 302 300 int dbg_check_inode_size(struct ubifs_info *c, const struct inode *inode, ··· 401 401 #define DBGKEY(key) ((char *)(key)) 402 402 #define DBGKEY1(key) ((char *)(key)) 403 403 404 - #define ubifs_debugging_init(c) 0 405 - #define ubifs_debugging_exit(c) ({}) 404 + static inline int ubifs_debugging_init(struct ubifs_info *c) { return 0; } 405 + static inline void ubifs_debugging_exit(struct ubifs_info *c) { return; } 406 + static inline const char *dbg_ntype(int type) { return ""; } 407 + static inline const char *dbg_cstate(int cmt_state) { return ""; } 408 + static inline const char *dbg_jhead(int jhead) { return ""; } 409 + static inline const char * 410 + dbg_get_key_dump(const struct ubifs_info *c, 411 + const union ubifs_key *key) { return ""; } 412 + static inline void dbg_dump_inode(const struct ubifs_info *c, 413 + const struct inode *inode) { return; } 414 + static inline void dbg_dump_node(const struct ubifs_info *c, 415 + const void *node) { return; } 416 + static inline void dbg_dump_lpt_node(const struct ubifs_info *c, 417 + void *node, int lnum, 418 + int offs) { return; } 419 + static inline void 420 + dbg_dump_budget_req(const struct ubifs_budget_req *req) { return; } 421 + static inline void 422 + dbg_dump_lstats(const struct ubifs_lp_stats *lst) { return; } 423 + static inline void dbg_dump_budg(struct ubifs_info *c) { return; } 424 + static inline void dbg_dump_lprop(const struct ubifs_info *c, 425 + const struct ubifs_lprops *lp) { return; } 426 + static inline void dbg_dump_lprops(struct ubifs_info *c) { return; } 427 + static inline void dbg_dump_lpt_info(struct ubifs_info *c) { return; } 428 + static inline void dbg_dump_leb(const struct ubifs_info *c, 429 + int lnum) { return; } 430 + static inline void 431 + dbg_dump_znode(const struct ubifs_info *c, 432 + const struct ubifs_znode *znode) { return; } 433 + static inline void dbg_dump_heap(struct ubifs_info *c, 434 + struct ubifs_lpt_heap *heap, 435 + int cat) { return; } 436 + static inline void dbg_dump_pnode(struct ubifs_info *c, 437 + struct ubifs_pnode *pnode, 438 + struct ubifs_nnode *parent, 439 + int iip) { return; } 440 + static inline void dbg_dump_tnc(struct ubifs_info *c) { return; } 441 + static inline void dbg_dump_index(struct ubifs_info *c) { return; } 442 + static inline void dbg_dump_lpt_lebs(const 
struct ubifs_info *c) { return; } 406 443 407 - #define dbg_ntype(type) "" 408 - #define dbg_cstate(cmt_state) "" 409 - #define dbg_jhead(jhead) "" 410 - #define dbg_get_key_dump(c, key) ({}) 411 - #define dbg_dump_inode(c, inode) ({}) 412 - #define dbg_dump_node(c, node) ({}) 413 - #define dbg_dump_lpt_node(c, node, lnum, offs) ({}) 414 - #define dbg_dump_budget_req(req) ({}) 415 - #define dbg_dump_lstats(lst) ({}) 416 - #define dbg_dump_budg(c) ({}) 417 - #define dbg_dump_lprop(c, lp) ({}) 418 - #define dbg_dump_lprops(c) ({}) 419 - #define dbg_dump_lpt_info(c) ({}) 420 - #define dbg_dump_leb(c, lnum) ({}) 421 - #define dbg_dump_znode(c, znode) ({}) 422 - #define dbg_dump_heap(c, heap, cat) ({}) 423 - #define dbg_dump_pnode(c, pnode, parent, iip) ({}) 424 - #define dbg_dump_tnc(c) ({}) 425 - #define dbg_dump_index(c) ({}) 426 - #define dbg_dump_lpt_lebs(c) ({}) 444 + static inline int dbg_walk_index(struct ubifs_info *c, 445 + dbg_leaf_callback leaf_cb, 446 + dbg_znode_callback znode_cb, 447 + void *priv) { return 0; } 448 + static inline void dbg_save_space_info(struct ubifs_info *c) { return; } 449 + static inline int dbg_check_space_info(struct ubifs_info *c) { return 0; } 450 + static inline int dbg_check_lprops(struct ubifs_info *c) { return 0; } 451 + static inline int 452 + dbg_old_index_check_init(struct ubifs_info *c, 453 + struct ubifs_zbranch *zroot) { return 0; } 454 + static inline int 455 + dbg_check_old_index(struct ubifs_info *c, 456 + struct ubifs_zbranch *zroot) { return 0; } 457 + static inline int dbg_check_cats(struct ubifs_info *c) { return 0; } 458 + static inline int dbg_check_ltab(struct ubifs_info *c) { return 0; } 459 + static inline int dbg_chk_lpt_free_spc(struct ubifs_info *c) { return 0; } 460 + static inline int dbg_chk_lpt_sz(struct ubifs_info *c, 461 + int action, int len) { return 0; } 462 + static inline int dbg_check_synced_i_size(struct inode *inode) { return 0; } 463 + static inline int dbg_check_dir_size(struct ubifs_info *c, 464 + const struct inode *dir) { return 0; } 465 + static inline int dbg_check_tnc(struct ubifs_info *c, int extra) { return 0; } 466 + static inline int dbg_check_idx_size(struct ubifs_info *c, 467 + long long idx_size) { return 0; } 468 + static inline int dbg_check_filesystem(struct ubifs_info *c) { return 0; } 469 + static inline void dbg_check_heap(struct ubifs_info *c, 470 + struct ubifs_lpt_heap *heap, 471 + int cat, int add_pos) { return; } 472 + static inline int dbg_check_lpt_nodes(struct ubifs_info *c, 473 + struct ubifs_cnode *cnode, int row, int col) { return 0; } 474 + static inline int dbg_check_inode_size(struct ubifs_info *c, 475 + const struct inode *inode, 476 + loff_t size) { return 0; } 477 + static inline int 478 + dbg_check_data_nodes_order(struct ubifs_info *c, 479 + struct list_head *head) { return 0; } 480 + static inline int 481 + dbg_check_nondata_nodes_order(struct ubifs_info *c, 482 + struct list_head *head) { return 0; } 427 483 428 - #define dbg_walk_index(c, leaf_cb, znode_cb, priv) 0 429 - #define dbg_old_index_check_init(c, zroot) 0 430 - #define dbg_save_space_info(c) ({}) 431 - #define dbg_check_space_info(c) 0 432 - #define dbg_check_old_index(c, zroot) 0 433 - #define dbg_check_cats(c) 0 434 - #define dbg_check_ltab(c) 0 435 - #define dbg_chk_lpt_free_spc(c) 0 436 - #define dbg_chk_lpt_sz(c, action, len) 0 437 - #define dbg_check_synced_i_size(inode) 0 438 - #define dbg_check_dir_size(c, dir) 0 439 - #define dbg_check_tnc(c, x) 0 440 - #define dbg_check_idx_size(c, idx_size) 0 441 - 
#define dbg_check_filesystem(c) 0 442 - #define dbg_check_heap(c, heap, cat, add_pos) ({}) 443 - #define dbg_check_lprops(c) 0 444 - #define dbg_check_lpt_nodes(c, cnode, row, col) 0 445 - #define dbg_check_inode_size(c, inode, size) 0 446 - #define dbg_check_data_nodes_order(c, head) 0 447 - #define dbg_check_nondata_nodes_order(c, head) 0 448 - #define dbg_force_in_the_gaps_enabled 0 449 - #define dbg_force_in_the_gaps() 0 450 - #define dbg_failure_mode 0 484 + static inline int dbg_force_in_the_gaps(void) { return 0; } 485 + #define dbg_force_in_the_gaps_enabled 0 486 + #define dbg_failure_mode 0 451 487 452 - #define dbg_debugfs_init() 0 453 - #define dbg_debugfs_exit() 454 - #define dbg_debugfs_init_fs(c) 0 455 - #define dbg_debugfs_exit_fs(c) 0 488 + static inline int dbg_debugfs_init(void) { return 0; } 489 + static inline void dbg_debugfs_exit(void) { return; } 490 + static inline int dbg_debugfs_init_fs(struct ubifs_info *c) { return 0; } 491 + static inline int dbg_debugfs_exit_fs(struct ubifs_info *c) { return 0; } 456 492 457 493 #endif /* !CONFIG_UBIFS_FS_DEBUG */ 458 494 #endif /* !__UBIFS_DEBUG_H__ */
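Converting the no-debug stubs from empty macros to static inline functions means the compiler still type-checks every argument, and side effects are evaluated consistently, when CONFIG_UBIFS_FS_DEBUG is off. A made-up illustration of the difference (these names are not from the header):

    struct ctx;

    #define dbg_check_macro(c, extra)    ({})    /* old style: arguments simply vanish */

    static inline int dbg_check_inline(struct ctx *c, int extra) { return 0; }

    static void caller(struct ctx *c)
    {
        dbg_check_macro(c, "three");     /* wrong argument type, compiles silently */
        dbg_check_inline(c, "three");    /* same mistake now draws a compiler warning */
    }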
+3
fs/ubifs/file.c
··· 1312 1312 1313 1313 dbg_gen("syncing inode %lu", inode->i_ino); 1314 1314 1315 + if (inode->i_sb->s_flags & MS_RDONLY) 1316 + return 0; 1317 + 1315 1318 /* 1316 1319 * VFS has already synchronized dirty pages for this inode. Synchronize 1317 1320 * the inode unless this is a 'datasync()' call.
+23 -5
include/linux/blkdev.h
··· 697 697 extern void blk_stop_queue(struct request_queue *q); 698 698 extern void blk_sync_queue(struct request_queue *q); 699 699 extern void __blk_stop_queue(struct request_queue *q); 700 - extern void __blk_run_queue(struct request_queue *q, bool force_kblockd); 700 + extern void __blk_run_queue(struct request_queue *q); 701 701 extern void blk_run_queue(struct request_queue *); 702 702 extern int blk_rq_map_user(struct request_queue *, struct request *, 703 703 struct rq_map_data *, void __user *, unsigned long, ··· 857 857 struct blk_plug { 858 858 unsigned long magic; 859 859 struct list_head list; 860 + struct list_head cb_list; 860 861 unsigned int should_sort; 862 + }; 863 + struct blk_plug_cb { 864 + struct list_head list; 865 + void (*callback)(struct blk_plug_cb *); 861 866 }; 862 867 863 868 extern void blk_start_plug(struct blk_plug *); 864 869 extern void blk_finish_plug(struct blk_plug *); 865 - extern void __blk_flush_plug(struct task_struct *, struct blk_plug *); 870 + extern void blk_flush_plug_list(struct blk_plug *, bool); 866 871 867 872 static inline void blk_flush_plug(struct task_struct *tsk) 868 873 { 869 874 struct blk_plug *plug = tsk->plug; 870 875 871 - if (unlikely(plug)) 872 - __blk_flush_plug(tsk, plug); 876 + if (plug) 877 + blk_flush_plug_list(plug, false); 878 + } 879 + 880 + static inline void blk_schedule_flush_plug(struct task_struct *tsk) 881 + { 882 + struct blk_plug *plug = tsk->plug; 883 + 884 + if (plug) 885 + blk_flush_plug_list(plug, true); 873 886 } 874 887 875 888 static inline bool blk_needs_flush_plug(struct task_struct *tsk) 876 889 { 877 890 struct blk_plug *plug = tsk->plug; 878 891 879 - return plug && !list_empty(&plug->list); 892 + return plug && (!list_empty(&plug->list) || !list_empty(&plug->cb_list)); 880 893 } 881 894 882 895 /* ··· 1326 1313 static inline void blk_flush_plug(struct task_struct *task) 1327 1314 { 1328 1315 } 1316 + 1317 + static inline void blk_schedule_flush_plug(struct task_struct *task) 1318 + { 1319 + } 1320 + 1329 1321 1330 1322 static inline bool blk_needs_flush_plug(struct task_struct *tsk) 1331 1323 {
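The plug now carries a cb_list of struct blk_plug_cb entries, and blk_flush_plug_list() runs each registered callback whenever the plug is flushed (explicitly, or from the scheduler via blk_schedule_flush_plug()). The intended consumers are stacking drivers that want to defer work until the submitting task unplugs. A rough, hedged sketch of that pattern — the my_* names are invented, error handling is minimal, and a real user would likely also scan cb_list first to avoid queueing duplicates:

    /* Assumes <linux/blkdev.h>, <linux/slab.h>, <linux/sched.h>. */
    struct my_plug_cb {
        struct blk_plug_cb cb;           /* list linkage + callback, as declared above */
        struct my_dev *dev;
    };

    static void my_unplug(struct blk_plug_cb *cb)
    {
        struct my_plug_cb *mcb = container_of(cb, struct my_plug_cb, cb);

        /* kick whatever work was deferred while the plug was held */
        kfree(mcb);
    }

    static bool my_defer_until_unplug(struct my_dev *dev)
    {
        struct blk_plug *plug = current->plug;
        struct my_plug_cb *mcb;

        if (!plug)
            return false;                /* no plug active: caller does the work now */

        mcb = kmalloc(sizeof(*mcb), GFP_ATOMIC);
        if (!mcb)
            return false;
        mcb->cb.callback = my_unplug;
        mcb->dev = dev;
        list_add(&mcb->cb.list, &plug->cb_list);
        return true;                     /* my_unplug() runs when the plug is flushed */
    }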
-1
include/linux/device-mapper.h
··· 197 197 struct dm_target_callbacks { 198 198 struct list_head list; 199 199 int (*congested_fn) (struct dm_target_callbacks *, int); 200 - void (*unplug_fn)(struct dm_target_callbacks *); 201 200 }; 202 201 203 202 int dm_register_target(struct target_type *t);
+6 -4
include/linux/input.h
··· 167 167 #define SYN_REPORT 0 168 168 #define SYN_CONFIG 1 169 169 #define SYN_MT_REPORT 2 170 + #define SYN_DROPPED 3 170 171 171 172 /* 172 173 * Keys and buttons ··· 554 553 #define KEY_DVD 0x185 /* Media Select DVD */ 555 554 #define KEY_AUX 0x186 556 555 #define KEY_MP3 0x187 557 - #define KEY_AUDIO 0x188 558 - #define KEY_VIDEO 0x189 556 + #define KEY_AUDIO 0x188 /* AL Audio Browser */ 557 + #define KEY_VIDEO 0x189 /* AL Movie Browser */ 559 558 #define KEY_DIRECTORY 0x18a 560 559 #define KEY_LIST 0x18b 561 560 #define KEY_MEMO 0x18c /* Media Select Messages */ ··· 604 603 #define KEY_FRAMEFORWARD 0x1b5 605 604 #define KEY_CONTEXT_MENU 0x1b6 /* GenDesc - system context menu */ 606 605 #define KEY_MEDIA_REPEAT 0x1b7 /* Consumer - transport control */ 607 - #define KEY_10CHANNELSUP 0x1b8 /* 10 channels up (10+) */ 608 - #define KEY_10CHANNELSDOWN 0x1b9 /* 10 channels down (10-) */ 606 + #define KEY_10CHANNELSUP 0x1b8 /* 10 channels up (10+) */ 607 + #define KEY_10CHANNELSDOWN 0x1b9 /* 10 channels down (10-) */ 608 + #define KEY_IMAGES 0x1ba /* AL Image Browser */ 609 609 610 610 #define KEY_DEL_EOL 0x1c0 611 611 #define KEY_DEL_EOS 0x1c1
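SYN_DROPPED is the code picked up from the event-codes documentation added elsewhere in this merge. From a client's point of view, receiving it means queued events were lost, so any partially assembled packet should be discarded and device state re-read through the EVIOCG* ioctls. A minimal userspace sketch, assuming fd is an already-open /dev/input/eventN descriptor and ignoring most error handling:

    #include <linux/input.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    static void drain_events(int fd)
    {
        struct input_event ev;
        unsigned long keys[KEY_MAX / (8 * sizeof(unsigned long)) + 1];

        while (read(fd, &ev, sizeof(ev)) == (ssize_t)sizeof(ev)) {
            if (ev.type == EV_SYN && ev.code == SYN_DROPPED) {
                /* events were lost: forget the current packet and
                 * resynchronize, e.g. key state via EVIOCGKEY */
                memset(keys, 0, sizeof(keys));
                ioctl(fd, EVIOCGKEY(sizeof(keys)), keys);
                continue;
            }
            /* normal per-event handling here */
        }
    }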
+6
include/linux/input/mt.h
··· 48 48 input_event(dev, EV_ABS, ABS_MT_SLOT, slot); 49 49 } 50 50 51 + static inline bool input_is_mt_axis(int axis) 52 + { 53 + return axis == ABS_MT_SLOT || 54 + (axis >= ABS_MT_FIRST && axis <= ABS_MT_LAST); 55 + } 56 + 51 57 void input_mt_report_slot_state(struct input_dev *dev, 52 58 unsigned int tool_type, bool active); 53 59
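input_is_mt_axis() answers whether an ABS code belongs to the multitouch protocol (the slot selector or anything in the ABS_MT range). A sketch of how a consumer might use it to keep per-slot values apart from ordinary absolute axes; the arrays, slot limit, and function are hypothetical, and bounds checking is omitted:

    #include <linux/input.h>
    #include <linux/input/mt.h>

    #define EXAMPLE_MAX_SLOTS 10

    static int cur_slot;
    static int mt_state[EXAMPLE_MAX_SLOTS][ABS_MT_LAST - ABS_MT_FIRST + 1];
    static int st_state[ABS_CNT];

    static void example_abs_event(unsigned int code, int value)
    {
        if (code == ABS_MT_SLOT)
            cur_slot = value;                                  /* selects the contact */
        else if (input_is_mt_axis(code))
            mt_state[cur_slot][code - ABS_MT_FIRST] = value;   /* per-contact value */
        else
            st_state[code] = value;                            /* plain axis, e.g. ABS_X */
    }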
+1 -1
include/linux/memcontrol.h
··· 216 216 return ; 217 217 } 218 218 219 - static inline inline void mem_cgroup_rotate_reclaimable_page(struct page *page) 219 + static inline void mem_cgroup_rotate_reclaimable_page(struct page *page) 220 220 { 221 221 return ; 222 222 }
+11 -2
include/linux/mfd/core.h
··· 86 86 */ 87 87 static inline const struct mfd_cell *mfd_get_cell(struct platform_device *pdev) 88 88 { 89 - return pdev->dev.platform_data; 89 + return pdev->mfd_cell; 90 90 } 91 91 92 92 /* 93 93 * Given a platform device that's been created by mfd_add_devices(), fetch 94 94 * the .mfd_data entry from the mfd_cell that created it. 95 + * Otherwise just return the platform_data pointer. 96 + * This maintains compatibility with platform drivers whose devices aren't 97 + * created by the mfd layer, and expect platform_data to contain what would've 98 + * otherwise been in mfd_data. 95 99 */ 96 100 static inline void *mfd_get_data(struct platform_device *pdev) 97 101 { 98 - return mfd_get_cell(pdev)->mfd_data; 102 + const struct mfd_cell *cell = mfd_get_cell(pdev); 103 + 104 + if (cell) 105 + return cell->mfd_data; 106 + else 107 + return pdev->dev.platform_data; 99 108 } 100 109 101 110 extern int mfd_add_devices(struct device *parent, int id,
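With the fallback to dev.platform_data, the same probe path now works whether the device was instantiated by mfd_add_devices() (mfd_cell attached) or by board code (plain platform_data). A minimal, hypothetical probe sketch — example_pdata and example_probe are invented names:

    #include <linux/mfd/core.h>
    #include <linux/platform_device.h>

    struct example_pdata {
        int irq_base;
    };

    static int example_probe(struct platform_device *pdev)
    {
        /* MFD-created device: returns cell->mfd_data;
         * board-created device: returns pdev->dev.platform_data */
        struct example_pdata *pdata = mfd_get_data(pdev);

        if (!pdata)
            return -EINVAL;
        /* use pdata->irq_base ... */
        return 0;
    }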
+1 -1
include/linux/pid.h
··· 117 117 */ 118 118 extern struct pid *find_get_pid(int nr); 119 119 extern struct pid *find_ge_pid(int nr, struct pid_namespace *); 120 - int next_pidmap(struct pid_namespace *pid_ns, int last); 120 + int next_pidmap(struct pid_namespace *pid_ns, unsigned int last); 121 121 122 122 extern struct pid *alloc_pid(struct pid_namespace *ns); 123 123 extern void free_pid(struct pid *pid);
+5
include/linux/platform_device.h
··· 14 14 #include <linux/device.h> 15 15 #include <linux/mod_devicetable.h> 16 16 17 + struct mfd_cell; 18 + 17 19 struct platform_device { 18 20 const char * name; 19 21 int id; ··· 24 22 struct resource * resource; 25 23 26 24 const struct platform_device_id *id_entry; 25 + 26 + /* MFD cell pointer */ 27 + struct mfd_cell *mfd_cell; 27 28 28 29 /* arch specific additions */ 29 30 struct pdev_archdata archdata;
+1 -1
include/linux/rio.h
··· 396 396 }; 397 397 398 398 /* Architecture and hardware-specific functions */ 399 - extern void rio_register_mport(struct rio_mport *); 399 + extern int rio_register_mport(struct rio_mport *); 400 400 extern int rio_open_inb_mbox(struct rio_mport *, void *, int, int); 401 401 extern void rio_close_inb_mbox(struct rio_mport *, int); 402 402 extern int rio_open_outb_mbox(struct rio_mport *, void *, int, int);
+1
include/linux/rio_ids.h
··· 35 35 #define RIO_DID_IDTCPS6Q 0x035f 36 36 #define RIO_DID_IDTCPS10Q 0x035e 37 37 #define RIO_DID_IDTCPS1848 0x0374 38 + #define RIO_DID_IDTCPS1432 0x0375 38 39 #define RIO_DID_IDTCPS1616 0x0379 39 40 #define RIO_DID_IDTVPS1616 0x0377 40 41 #define RIO_DID_IDTSPS1616 0x0378
+2
include/linux/rtc.h
··· 228 228 struct rtc_wkalrm *alrm); 229 229 extern int rtc_set_alarm(struct rtc_device *rtc, 230 230 struct rtc_wkalrm *alrm); 231 + extern int rtc_initialize_alarm(struct rtc_device *rtc, 232 + struct rtc_wkalrm *alrm); 231 233 extern void rtc_update_irq(struct rtc_device *rtc, 232 234 unsigned long num, unsigned long events); 233 235
+3
include/linux/sched.h
··· 1254 1254 #endif 1255 1255 1256 1256 struct mm_struct *mm, *active_mm; 1257 + #ifdef CONFIG_COMPAT_BRK 1258 + unsigned brk_randomized:1; 1259 + #endif 1257 1260 #if defined(SPLIT_RSS_COUNTING) 1258 1261 struct task_rss_stat rss_stat; 1259 1262 #endif
+3 -8
include/linux/suspend.h
··· 249 249 extern int hibernate(void); 250 250 extern bool system_entering_hibernation(void); 251 251 #else /* CONFIG_HIBERNATION */ 252 + static inline void register_nosave_region(unsigned long b, unsigned long e) {} 253 + static inline void register_nosave_region_late(unsigned long b, unsigned long e) {} 252 254 static inline int swsusp_page_is_forbidden(struct page *p) { return 0; } 253 255 static inline void swsusp_set_page_free(struct page *p) {} 254 256 static inline void swsusp_unset_page_free(struct page *p) {} ··· 299 297 300 298 extern struct mutex pm_mutex; 301 299 302 - #ifndef CONFIG_HIBERNATION 303 - static inline void register_nosave_region(unsigned long b, unsigned long e) 304 - { 305 - } 306 - static inline void register_nosave_region_late(unsigned long b, unsigned long e) 307 - { 308 - } 309 - 300 + #ifndef CONFIG_HIBERNATE_CALLBACKS 310 301 static inline void lock_system_sleep(void) {} 311 302 static inline void unlock_system_sleep(void) {} 312 303
+7
include/linux/vmstat.h
··· 58 58 UNEVICTABLE_PGCLEARED, /* on COW, page truncate */ 59 59 UNEVICTABLE_PGSTRANDED, /* unable to isolate on unlock */ 60 60 UNEVICTABLE_MLOCKFREED, 61 + #ifdef CONFIG_TRANSPARENT_HUGEPAGE 62 + THP_FAULT_ALLOC, 63 + THP_FAULT_FALLBACK, 64 + THP_COLLAPSE_ALLOC, 65 + THP_COLLAPSE_ALLOC_FAILED, 66 + THP_SPLIT, 67 + #endif 61 68 NR_VM_EVENT_ITEMS 62 69 }; 63 70
-2
include/net/9p/9p.h
··· 139 139 */ 140 140 141 141 enum p9_msg_t { 142 - P9_TSYNCFS = 0, 143 - P9_RSYNCFS, 144 142 P9_TLERROR = 6, 145 143 P9_RLERROR, 146 144 P9_TSTATFS = 8,
+2 -3
include/net/9p/client.h
··· 218 218 void p9_client_begin_disconnect(struct p9_client *clnt); 219 219 struct p9_fid *p9_client_attach(struct p9_client *clnt, struct p9_fid *afid, 220 220 char *uname, u32 n_uname, char *aname); 221 - struct p9_fid *p9_client_walk(struct p9_fid *oldfid, int nwname, char **wnames, 222 - int clone); 221 + struct p9_fid *p9_client_walk(struct p9_fid *oldfid, uint16_t nwname, 222 + char **wnames, int clone); 223 223 int p9_client_open(struct p9_fid *fid, int mode); 224 224 int p9_client_fcreate(struct p9_fid *fid, char *name, u32 perm, int mode, 225 225 char *extension); ··· 230 230 gid_t gid, struct p9_qid *qid); 231 231 int p9_client_clunk(struct p9_fid *fid); 232 232 int p9_client_fsync(struct p9_fid *fid, int datasync); 233 - int p9_client_sync_fs(struct p9_fid *fid); 234 233 int p9_client_remove(struct p9_fid *fid); 235 234 int p9_client_read(struct p9_fid *fid, char *data, char __user *udata, 236 235 u64 offset, u32 count);
+9 -21
include/trace/events/block.h
··· 401 401 402 402 DECLARE_EVENT_CLASS(block_unplug, 403 403 404 - TP_PROTO(struct request_queue *q), 404 + TP_PROTO(struct request_queue *q, unsigned int depth, bool explicit), 405 405 406 - TP_ARGS(q), 406 + TP_ARGS(q, depth, explicit), 407 407 408 408 TP_STRUCT__entry( 409 409 __field( int, nr_rq ) ··· 411 411 ), 412 412 413 413 TP_fast_assign( 414 - __entry->nr_rq = q->rq.count[READ] + q->rq.count[WRITE]; 414 + __entry->nr_rq = depth; 415 415 memcpy(__entry->comm, current->comm, TASK_COMM_LEN); 416 416 ), 417 417 ··· 419 419 ); 420 420 421 421 /** 422 - * block_unplug_timer - timed release of operations requests in queue to device driver 422 + * block_unplug - release of operations requests in request queue 423 423 * @q: request queue to unplug 424 - * 425 - * Unplug the request queue @q because a timer expired and allow block 426 - * operation requests to be sent to the device driver. 427 - */ 428 - DEFINE_EVENT(block_unplug, block_unplug_timer, 429 - 430 - TP_PROTO(struct request_queue *q), 431 - 432 - TP_ARGS(q) 433 - ); 434 - 435 - /** 436 - * block_unplug_io - release of operations requests in request queue 437 - * @q: request queue to unplug 424 + * @depth: number of requests just added to the queue 425 + * @explicit: whether this was an explicit unplug, or one from schedule() 438 426 * 439 427 * Unplug request queue @q because device driver is scheduled to work 440 428 * on elements in the request queue. 441 429 */ 442 - DEFINE_EVENT(block_unplug, block_unplug_io, 430 + DEFINE_EVENT(block_unplug, block_unplug, 443 431 444 - TP_PROTO(struct request_queue *q), 432 + TP_PROTO(struct request_queue *q, unsigned int depth, bool explicit), 445 433 446 - TP_ARGS(q) 434 + TP_ARGS(q, depth, explicit) 447 435 ); 448 436 449 437 /**
+1 -1
kernel/futex.c
··· 1886 1886 restart->futex.val = val; 1887 1887 restart->futex.time = abs_time->tv64; 1888 1888 restart->futex.bitset = bitset; 1889 - restart->futex.flags = flags; 1889 + restart->futex.flags = flags | FLAGS_HAS_TIMEOUT; 1890 1890 1891 1891 ret = -ERESTART_RESTARTBLOCK; 1892 1892
+12
kernel/perf_event.c
··· 364 364 } 365 365 366 366 if (mode & PERF_CGROUP_SWIN) { 367 + WARN_ON_ONCE(cpuctx->cgrp); 367 368 /* set cgrp before ctxsw in to 368 369 * allow event_filter_match() to not 369 370 * have to pass task around ··· 2424 2423 if (!ctx || !ctx->nr_events) 2425 2424 goto out; 2426 2425 2426 + /* 2427 + * We must ctxsw out cgroup events to avoid conflict 2428 + * when invoking perf_task_event_sched_in() later on 2429 + * in this function. Otherwise we end up trying to 2430 + * ctxswin cgroup events which are already scheduled 2431 + * in. 2432 + */ 2433 + perf_cgroup_sched_out(current); 2427 2434 task_ctx_sched_out(ctx, EVENT_ALL); 2428 2435 2429 2436 raw_spin_lock(&ctx->lock); ··· 2456 2447 2457 2448 raw_spin_unlock(&ctx->lock); 2458 2449 2450 + /* 2451 + * Also calls ctxswin for cgroup events, if any: 2452 + */ 2459 2453 perf_event_context_sched_in(ctx, ctx->task); 2460 2454 out: 2461 2455 local_irq_restore(flags);
+4 -1
kernel/pid.c
··· 217 217 return -1; 218 218 } 219 219 220 - int next_pidmap(struct pid_namespace *pid_ns, int last) 220 + int next_pidmap(struct pid_namespace *pid_ns, unsigned int last) 221 221 { 222 222 int offset; 223 223 struct pidmap *map, *end; 224 + 225 + if (last >= PID_MAX_LIMIT) 226 + return -1; 224 227 225 228 offset = (last + 1) & BITS_PER_PAGE_MASK; 226 229 map = &pid_ns->pidmap[(last + 1)/BITS_PER_PAGE];
+5 -1
kernel/power/Kconfig
··· 18 18 19 19 Turning OFF this setting is NOT recommended! If in doubt, say Y. 20 20 21 + config HIBERNATE_CALLBACKS 22 + bool 23 + 21 24 config HIBERNATION 22 25 bool "Hibernation (aka 'suspend to disk')" 23 26 depends on SWAP && ARCH_HIBERNATION_POSSIBLE 27 + select HIBERNATE_CALLBACKS 24 28 select LZO_COMPRESS 25 29 select LZO_DECOMPRESS 26 30 ---help--- ··· 89 85 90 86 config PM_SLEEP 91 87 def_bool y 92 - depends on SUSPEND || HIBERNATION || XEN_SAVE_RESTORE 88 + depends on SUSPEND || HIBERNATE_CALLBACKS 93 89 94 90 config PM_SLEEP_SMP 95 91 def_bool y
+10 -10
kernel/sched.c
··· 4111 4111 try_to_wake_up_local(to_wakeup); 4112 4112 } 4113 4113 deactivate_task(rq, prev, DEQUEUE_SLEEP); 4114 + 4115 + /* 4116 + * If we are going to sleep and we have plugged IO queued, make 4117 + * sure to submit it to avoid deadlocks. 4118 + */ 4119 + if (blk_needs_flush_plug(prev)) { 4120 + raw_spin_unlock(&rq->lock); 4121 + blk_schedule_flush_plug(prev); 4122 + raw_spin_lock(&rq->lock); 4123 + } 4114 4124 } 4115 4125 switch_count = &prev->nvcsw; 4116 - } 4117 - 4118 - /* 4119 - * If we are going to sleep and we have plugged IO queued, make 4120 - * sure to submit it to avoid deadlocks. 4121 - */ 4122 - if (prev->state != TASK_RUNNING && blk_needs_flush_plug(prev)) { 4123 - raw_spin_unlock(&rq->lock); 4124 - blk_flush_plug(prev); 4125 - raw_spin_lock(&rq->lock); 4126 4126 } 4127 4127 4128 4128 pre_schedule(rq, prev);
+6 -8
kernel/sched_fair.c
··· 2104 2104 enum cpu_idle_type idle, int *all_pinned, 2105 2105 int *this_best_prio, struct cfs_rq *busiest_cfs_rq) 2106 2106 { 2107 - int loops = 0, pulled = 0, pinned = 0; 2107 + int loops = 0, pulled = 0; 2108 2108 long rem_load_move = max_load_move; 2109 2109 struct task_struct *p, *n; 2110 2110 2111 2111 if (max_load_move == 0) 2112 2112 goto out; 2113 2113 2114 - pinned = 1; 2115 - 2116 2114 list_for_each_entry_safe(p, n, &busiest_cfs_rq->tasks, se.group_node) { 2117 2115 if (loops++ > sysctl_sched_nr_migrate) 2118 2116 break; 2119 2117 2120 2118 if ((p->se.load.weight >> 1) > rem_load_move || 2121 - !can_migrate_task(p, busiest, this_cpu, sd, idle, &pinned)) 2119 + !can_migrate_task(p, busiest, this_cpu, sd, idle, 2120 + all_pinned)) 2122 2121 continue; 2123 2122 2124 2123 pull_task(busiest, p, this_rq, this_cpu); ··· 2151 2152 * inside pull_task(). 2152 2153 */ 2153 2154 schedstat_add(sd, lb_gained[idle], pulled); 2154 - 2155 - if (all_pinned) 2156 - *all_pinned = pinned; 2157 2155 2158 2156 return max_load_move - rem_load_move; 2159 2157 } ··· 3123 3127 if (!sds.busiest || sds.busiest_nr_running == 0) 3124 3128 goto out_balanced; 3125 3129 3130 + sds.avg_load = (SCHED_LOAD_SCALE * sds.total_load) / sds.total_pwr; 3131 + 3126 3132 /* 3127 3133 * If the busiest group is imbalanced the below checks don't 3128 3134 * work because they assumes all things are equal, which typically ··· 3149 3151 * Don't pull any tasks if this group is already above the domain 3150 3152 * average load. 3151 3153 */ 3152 - sds.avg_load = (SCHED_LOAD_SCALE * sds.total_load) / sds.total_pwr; 3153 3154 if (sds.this_load >= sds.avg_load) 3154 3155 goto out_balanced; 3155 3156 ··· 3337 3340 * still unbalanced. ld_moved simply stays zero, so it is 3338 3341 * correctly treated as an imbalance. 3339 3342 */ 3343 + all_pinned = 1; 3340 3344 local_irq_save(flags); 3341 3345 double_rq_lock(this_rq, busiest); 3342 3346 ld_moved = move_tasks(this_rq, this_cpu, busiest,
+11 -22
kernel/trace/blktrace.c
··· 850 850 __blk_add_trace(bt, 0, 0, 0, BLK_TA_PLUG, 0, 0, NULL); 851 851 } 852 852 853 - static void blk_add_trace_unplug_io(void *ignore, struct request_queue *q) 853 + static void blk_add_trace_unplug(void *ignore, struct request_queue *q, 854 + unsigned int depth, bool explicit) 854 855 { 855 856 struct blk_trace *bt = q->blk_trace; 856 857 857 858 if (bt) { 858 - unsigned int pdu = q->rq.count[READ] + q->rq.count[WRITE]; 859 - __be64 rpdu = cpu_to_be64(pdu); 859 + __be64 rpdu = cpu_to_be64(depth); 860 + u32 what; 860 861 861 - __blk_add_trace(bt, 0, 0, 0, BLK_TA_UNPLUG_IO, 0, 862 - sizeof(rpdu), &rpdu); 863 - } 864 - } 862 + if (explicit) 863 + what = BLK_TA_UNPLUG_IO; 864 + else 865 + what = BLK_TA_UNPLUG_TIMER; 865 866 866 - static void blk_add_trace_unplug_timer(void *ignore, struct request_queue *q) 867 - { 868 - struct blk_trace *bt = q->blk_trace; 869 - 870 - if (bt) { 871 - unsigned int pdu = q->rq.count[READ] + q->rq.count[WRITE]; 872 - __be64 rpdu = cpu_to_be64(pdu); 873 - 874 - __blk_add_trace(bt, 0, 0, 0, BLK_TA_UNPLUG_TIMER, 0, 875 - sizeof(rpdu), &rpdu); 867 + __blk_add_trace(bt, 0, 0, 0, what, 0, sizeof(rpdu), &rpdu); 876 868 } 877 869 } 878 870 ··· 1007 1015 WARN_ON(ret); 1008 1016 ret = register_trace_block_plug(blk_add_trace_plug, NULL); 1009 1017 WARN_ON(ret); 1010 - ret = register_trace_block_unplug_timer(blk_add_trace_unplug_timer, NULL); 1011 - WARN_ON(ret); 1012 - ret = register_trace_block_unplug_io(blk_add_trace_unplug_io, NULL); 1018 + ret = register_trace_block_unplug(blk_add_trace_unplug, NULL); 1013 1019 WARN_ON(ret); 1014 1020 ret = register_trace_block_split(blk_add_trace_split, NULL); 1015 1021 WARN_ON(ret); ··· 1022 1032 unregister_trace_block_rq_remap(blk_add_trace_rq_remap, NULL); 1023 1033 unregister_trace_block_bio_remap(blk_add_trace_bio_remap, NULL); 1024 1034 unregister_trace_block_split(blk_add_trace_split, NULL); 1025 - unregister_trace_block_unplug_io(blk_add_trace_unplug_io, NULL); 1026 - unregister_trace_block_unplug_timer(blk_add_trace_unplug_timer, NULL); 1035 + unregister_trace_block_unplug(blk_add_trace_unplug, NULL); 1027 1036 unregister_trace_block_plug(blk_add_trace_plug, NULL); 1028 1037 unregister_trace_block_sleeprq(blk_add_trace_sleeprq, NULL); 1029 1038 unregister_trace_block_getrq(blk_add_trace_getrq, NULL);
+3 -6
lib/kstrtox.c
··· 49 49 val = *s - '0'; 50 50 else if ('a' <= _tolower(*s) && _tolower(*s) <= 'f') 51 51 val = _tolower(*s) - 'a' + 10; 52 - else if (*s == '\n') { 53 - if (*(s + 1) == '\0') 54 - break; 55 - else 56 - return -EINVAL; 57 - } else 52 + else if (*s == '\n' && *(s + 1) == '\0') 53 + break; 54 + else 58 55 return -EINVAL; 59 56 60 57 if (val >= base)
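The rewrite flattens the nested newline check without changing behaviour: a single '\n' is still accepted, but only as the final character, which is exactly what `echo value > /sys/...` hands a store method. A simplified sketch of the consumer pattern (compare the single_flag_store() change in mm/huge_memory.c later in this series); the signature here is abbreviated, not the real sysfs prototype:

    static ssize_t example_store(const char *buf, size_t count)
    {
        unsigned long value;
        int ret;

        ret = kstrtoul(buf, 10, &value);   /* "1\n" parses; "1x" or "1\n2" is -EINVAL */
        if (ret < 0)
            return ret;
        if (value > 1)
            return -EINVAL;
        /* apply value ... */
        return count;
    }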
+16 -16
lib/test-kstrtox.c
··· 315 315 {"65537", 10, 65537}, 316 316 {"2147483646", 10, 2147483646}, 317 317 {"2147483647", 10, 2147483647}, 318 - {"2147483648", 10, 2147483648}, 319 - {"2147483649", 10, 2147483649}, 320 - {"4294967294", 10, 4294967294}, 321 - {"4294967295", 10, 4294967295}, 322 - {"4294967296", 10, 4294967296}, 323 - {"4294967297", 10, 4294967297}, 318 + {"2147483648", 10, 2147483648ULL}, 319 + {"2147483649", 10, 2147483649ULL}, 320 + {"4294967294", 10, 4294967294ULL}, 321 + {"4294967295", 10, 4294967295ULL}, 322 + {"4294967296", 10, 4294967296ULL}, 323 + {"4294967297", 10, 4294967297ULL}, 324 324 {"9223372036854775806", 10, 9223372036854775806ULL}, 325 325 {"9223372036854775807", 10, 9223372036854775807ULL}, 326 326 {"9223372036854775808", 10, 9223372036854775808ULL}, ··· 369 369 {"65537", 10, 65537}, 370 370 {"2147483646", 10, 2147483646}, 371 371 {"2147483647", 10, 2147483647}, 372 - {"2147483648", 10, 2147483648}, 373 - {"2147483649", 10, 2147483649}, 374 - {"4294967294", 10, 4294967294}, 375 - {"4294967295", 10, 4294967295}, 376 - {"4294967296", 10, 4294967296}, 377 - {"4294967297", 10, 4294967297}, 372 + {"2147483648", 10, 2147483648LL}, 373 + {"2147483649", 10, 2147483649LL}, 374 + {"4294967294", 10, 4294967294LL}, 375 + {"4294967295", 10, 4294967295LL}, 376 + {"4294967296", 10, 4294967296LL}, 377 + {"4294967297", 10, 4294967297LL}, 378 378 {"9223372036854775806", 10, 9223372036854775806LL}, 379 379 {"9223372036854775807", 10, 9223372036854775807LL}, 380 380 }; ··· 418 418 {"65537", 10, 65537}, 419 419 {"2147483646", 10, 2147483646}, 420 420 {"2147483647", 10, 2147483647}, 421 - {"2147483648", 10, 2147483648}, 422 - {"2147483649", 10, 2147483649}, 423 - {"4294967294", 10, 4294967294}, 424 - {"4294967295", 10, 4294967295}, 421 + {"2147483648", 10, 2147483648U}, 422 + {"2147483649", 10, 2147483649U}, 423 + {"4294967294", 10, 4294967294U}, 424 + {"4294967295", 10, 4294967295U}, 425 425 }; 426 426 TEST_OK(kstrtou32, u32, "%u", test_u32_ok); 427 427 }
+36 -15
mm/huge_memory.c
··· 244 244 struct kobj_attribute *attr, char *buf, 245 245 enum transparent_hugepage_flag flag) 246 246 { 247 - if (test_bit(flag, &transparent_hugepage_flags)) 248 - return sprintf(buf, "[yes] no\n"); 249 - else 250 - return sprintf(buf, "yes [no]\n"); 247 + return sprintf(buf, "%d\n", 248 + !!test_bit(flag, &transparent_hugepage_flags)); 251 249 } 250 + 252 251 static ssize_t single_flag_store(struct kobject *kobj, 253 252 struct kobj_attribute *attr, 254 253 const char *buf, size_t count, 255 254 enum transparent_hugepage_flag flag) 256 255 { 257 - if (!memcmp("yes", buf, 258 - min(sizeof("yes")-1, count))) { 259 - set_bit(flag, &transparent_hugepage_flags); 260 - } else if (!memcmp("no", buf, 261 - min(sizeof("no")-1, count))) { 262 - clear_bit(flag, &transparent_hugepage_flags); 263 - } else 256 + unsigned long value; 257 + int ret; 258 + 259 + ret = kstrtoul(buf, 10, &value); 260 + if (ret < 0) 261 + return ret; 262 + if (value > 1) 264 263 return -EINVAL; 264 + 265 + if (value) 266 + set_bit(flag, &transparent_hugepage_flags); 267 + else 268 + clear_bit(flag, &transparent_hugepage_flags); 265 269 266 270 return count; 267 271 } ··· 684 680 return VM_FAULT_OOM; 685 681 page = alloc_hugepage_vma(transparent_hugepage_defrag(vma), 686 682 vma, haddr, numa_node_id(), 0); 687 - if (unlikely(!page)) 683 + if (unlikely(!page)) { 684 + count_vm_event(THP_FAULT_FALLBACK); 688 685 goto out; 686 + } 687 + count_vm_event(THP_FAULT_ALLOC); 689 688 if (unlikely(mem_cgroup_newpage_charge(page, mm, GFP_KERNEL))) { 690 689 put_page(page); 691 690 goto out; ··· 916 909 new_page = NULL; 917 910 918 911 if (unlikely(!new_page)) { 912 + count_vm_event(THP_FAULT_FALLBACK); 919 913 ret = do_huge_pmd_wp_page_fallback(mm, vma, address, 920 914 pmd, orig_pmd, page, haddr); 921 915 put_page(page); 922 916 goto out; 923 917 } 918 + count_vm_event(THP_FAULT_ALLOC); 924 919 925 920 if (unlikely(mem_cgroup_newpage_charge(new_page, mm, GFP_KERNEL))) { 926 921 put_page(new_page); ··· 1399 1390 1400 1391 BUG_ON(!PageSwapBacked(page)); 1401 1392 __split_huge_page(page, anon_vma); 1393 + count_vm_event(THP_SPLIT); 1402 1394 1403 1395 BUG_ON(PageCompound(page)); 1404 1396 out_unlock: ··· 1794 1784 node, __GFP_OTHER_NODE); 1795 1785 if (unlikely(!new_page)) { 1796 1786 up_read(&mm->mmap_sem); 1787 + count_vm_event(THP_COLLAPSE_ALLOC_FAILED); 1797 1788 *hpage = ERR_PTR(-ENOMEM); 1798 1789 return; 1799 1790 } 1791 + count_vm_event(THP_COLLAPSE_ALLOC); 1800 1792 if (unlikely(mem_cgroup_newpage_charge(new_page, mm, GFP_KERNEL))) { 1801 1793 up_read(&mm->mmap_sem); 1802 1794 put_page(new_page); ··· 2163 2151 #ifndef CONFIG_NUMA 2164 2152 if (!*hpage) { 2165 2153 *hpage = alloc_hugepage(khugepaged_defrag()); 2166 - if (unlikely(!*hpage)) 2154 + if (unlikely(!*hpage)) { 2155 + count_vm_event(THP_COLLAPSE_ALLOC_FAILED); 2167 2156 break; 2157 + } 2158 + count_vm_event(THP_COLLAPSE_ALLOC); 2168 2159 } 2169 2160 #else 2170 2161 if (IS_ERR(*hpage)) ··· 2207 2192 2208 2193 do { 2209 2194 hpage = alloc_hugepage(khugepaged_defrag()); 2210 - if (!hpage) 2195 + if (!hpage) { 2196 + count_vm_event(THP_COLLAPSE_ALLOC_FAILED); 2211 2197 khugepaged_alloc_sleep(); 2198 + } else 2199 + count_vm_event(THP_COLLAPSE_ALLOC); 2212 2200 } while (unlikely(!hpage) && 2213 2201 likely(khugepaged_enabled())); 2214 2202 return hpage; ··· 2228 2210 while (likely(khugepaged_enabled())) { 2229 2211 #ifndef CONFIG_NUMA 2230 2212 hpage = khugepaged_alloc_hugepage(); 2231 - if (unlikely(!hpage)) 2213 + if (unlikely(!hpage)) { 2214 + 
count_vm_event(THP_COLLAPSE_ALLOC_FAILED); 2232 2215 break; 2216 + } 2217 + count_vm_event(THP_COLLAPSE_ALLOC); 2233 2218 #else 2234 2219 if (IS_ERR(hpage)) { 2235 2220 khugepaged_alloc_sleep();
+19 -9
mm/memory.c
··· 1410 1410 return page; 1411 1411 } 1412 1412 1413 + static inline int stack_guard_page(struct vm_area_struct *vma, unsigned long addr) 1414 + { 1415 + return (vma->vm_flags & VM_GROWSDOWN) && 1416 + (vma->vm_start == addr) && 1417 + !vma_stack_continue(vma->vm_prev, addr); 1418 + } 1419 + 1413 1420 /** 1414 1421 * __get_user_pages() - pin user pages in memory 1415 1422 * @tsk: task_struct of target task ··· 1495 1488 vma = find_extend_vma(mm, start); 1496 1489 if (!vma && in_gate_area(mm, start)) { 1497 1490 unsigned long pg = start & PAGE_MASK; 1498 - struct vm_area_struct *gate_vma = get_gate_vma(mm); 1499 1491 pgd_t *pgd; 1500 1492 pud_t *pud; 1501 1493 pmd_t *pmd; ··· 1519 1513 pte_unmap(pte); 1520 1514 return i ? : -EFAULT; 1521 1515 } 1516 + vma = get_gate_vma(mm); 1522 1517 if (pages) { 1523 1518 struct page *page; 1524 1519 1525 - page = vm_normal_page(gate_vma, start, *pte); 1520 + page = vm_normal_page(vma, start, *pte); 1526 1521 if (!page) { 1527 1522 if (!(gup_flags & FOLL_DUMP) && 1528 1523 is_zero_pfn(pte_pfn(*pte))) ··· 1537 1530 get_page(page); 1538 1531 } 1539 1532 pte_unmap(pte); 1540 - if (vmas) 1541 - vmas[i] = gate_vma; 1542 - i++; 1543 - start += PAGE_SIZE; 1544 - nr_pages--; 1545 - continue; 1533 + goto next_page; 1546 1534 } 1547 1535 1548 1536 if (!vma || ··· 1550 1548 &start, &nr_pages, i, gup_flags); 1551 1549 continue; 1552 1550 } 1551 + 1552 + /* 1553 + * If we don't actually want the page itself, 1554 + * and it's the stack guard page, just skip it. 1555 + */ 1556 + if (!pages && stack_guard_page(vma, start)) 1557 + goto next_page; 1553 1558 1554 1559 do { 1555 1560 struct page *page; ··· 1640 1631 flush_anon_page(vma, page, start); 1641 1632 flush_dcache_page(page); 1642 1633 } 1634 + next_page: 1643 1635 if (vmas) 1644 1636 vmas[i] = vma; 1645 1637 i++; ··· 3688 3678 */ 3689 3679 #ifdef CONFIG_HAVE_IOREMAP_PROT 3690 3680 vma = find_vma(mm, addr); 3691 - if (!vma) 3681 + if (!vma || vma->vm_start > addr) 3692 3682 break; 3693 3683 if (vma->vm_ops && vma->vm_ops->access) 3694 3684 ret = vma->vm_ops->access(vma, addr, buf,
+1 -1
mm/memory_hotplug.c
··· 375 375 #endif 376 376 377 377 #ifdef CONFIG_FLATMEM 378 - max_mapnr = max(page_to_pfn(page), max_mapnr); 378 + max_mapnr = max(pfn, max_mapnr); 379 379 #endif 380 380 381 381 ClearPageReserved(page);
-13
mm/mlock.c
··· 135 135 } 136 136 } 137 137 138 - static inline int stack_guard_page(struct vm_area_struct *vma, unsigned long addr) 139 - { 140 - return (vma->vm_flags & VM_GROWSDOWN) && 141 - (vma->vm_start == addr) && 142 - !vma_stack_continue(vma->vm_prev, addr); 143 - } 144 - 145 138 /** 146 139 * __mlock_vma_pages_range() - mlock a range of pages in the vma. 147 140 * @vma: target vma ··· 180 187 181 188 if (vma->vm_flags & VM_LOCKED) 182 189 gup_flags |= FOLL_MLOCK; 183 - 184 - /* We don't try to access the guard page of a stack vma */ 185 - if (stack_guard_page(vma, start)) { 186 - addr += PAGE_SIZE; 187 - nr_pages--; 188 - } 189 190 190 191 return __get_user_pages(current, mm, addr, nr_pages, gup_flags, 191 192 NULL, NULL, nonblocking);
+9 -6
mm/mmap.c
··· 259 259 * randomize_va_space to 2, which will still cause mm->start_brk 260 260 * to be arbitrarily shifted 261 261 */ 262 - if (mm->start_brk > PAGE_ALIGN(mm->end_data)) 262 + if (current->brk_randomized) 263 263 min_brk = mm->start_brk; 264 264 else 265 265 min_brk = mm->end_data; ··· 1814 1814 size = vma->vm_end - address; 1815 1815 grow = (vma->vm_start - address) >> PAGE_SHIFT; 1816 1816 1817 - error = acct_stack_growth(vma, size, grow); 1818 - if (!error) { 1819 - vma->vm_start = address; 1820 - vma->vm_pgoff -= grow; 1821 - perf_event_mmap(vma); 1817 + error = -ENOMEM; 1818 + if (grow <= vma->vm_pgoff) { 1819 + error = acct_stack_growth(vma, size, grow); 1820 + if (!error) { 1821 + vma->vm_start = address; 1822 + vma->vm_pgoff -= grow; 1823 + perf_event_mmap(vma); 1824 + } 1822 1825 } 1823 1826 } 1824 1827 vma_unlock_anon_vma(vma);
-28
mm/oom_kill.c
··· 84 84 #endif /* CONFIG_NUMA */ 85 85 86 86 /* 87 - * If this is a system OOM (not a memcg OOM) and the task selected to be 88 - * killed is not already running at high (RT) priorities, speed up the 89 - * recovery by boosting the dying task to the lowest FIFO priority. 90 - * That helps with the recovery and avoids interfering with RT tasks. 91 - */ 92 - static void boost_dying_task_prio(struct task_struct *p, 93 - struct mem_cgroup *mem) 94 - { 95 - struct sched_param param = { .sched_priority = 1 }; 96 - 97 - if (mem) 98 - return; 99 - 100 - if (!rt_task(p)) 101 - sched_setscheduler_nocheck(p, SCHED_FIFO, &param); 102 - } 103 - 104 - /* 105 87 * The process p may have detached its own ->mm while exiting or through 106 88 * use_mm(), but one or more of its subthreads may still have a valid 107 89 * pointer. Return p, or any of its subthreads with a valid ->mm, with ··· 434 452 set_tsk_thread_flag(p, TIF_MEMDIE); 435 453 force_sig(SIGKILL, p); 436 454 437 - /* 438 - * We give our sacrificial lamb high priority and access to 439 - * all the memory it needs. That way it should be able to 440 - * exit() and clear out its resources quickly... 441 - */ 442 - boost_dying_task_prio(p, mem); 443 - 444 455 return 0; 445 456 } 446 457 #undef K ··· 457 482 */ 458 483 if (p->flags & PF_EXITING) { 459 484 set_tsk_thread_flag(p, TIF_MEMDIE); 460 - boost_dying_task_prio(p, mem); 461 485 return 0; 462 486 } 463 487 ··· 530 556 */ 531 557 if (fatal_signal_pending(current)) { 532 558 set_thread_flag(TIF_MEMDIE); 533 - boost_dying_task_prio(current, NULL); 534 559 return; 535 560 } 536 561 ··· 685 712 */ 686 713 if (fatal_signal_pending(current)) { 687 714 set_thread_flag(TIF_MEMDIE); 688 - boost_dying_task_prio(current, NULL); 689 715 return; 690 716 } 691 717
+1 -1
mm/page_alloc.c
··· 3176 3176 * Called with zonelists_mutex held always 3177 3177 * unless system_state == SYSTEM_BOOTING. 3178 3178 */ 3179 - void build_all_zonelists(void *data) 3179 + void __ref build_all_zonelists(void *data) 3180 3180 { 3181 3181 set_zonelist_order(); 3182 3182
+4 -2
mm/shmem.c
··· 421 421 * a waste to allocate index if we cannot allocate data. 422 422 */ 423 423 if (sbinfo->max_blocks) { 424 - if (percpu_counter_compare(&sbinfo->used_blocks, (sbinfo->max_blocks - 1)) > 0) 424 + if (percpu_counter_compare(&sbinfo->used_blocks, 425 + sbinfo->max_blocks - 1) >= 0) 425 426 return ERR_PTR(-ENOSPC); 426 427 percpu_counter_inc(&sbinfo->used_blocks); 427 428 spin_lock(&inode->i_lock); ··· 1398 1397 shmem_swp_unmap(entry); 1399 1398 sbinfo = SHMEM_SB(inode->i_sb); 1400 1399 if (sbinfo->max_blocks) { 1401 - if ((percpu_counter_compare(&sbinfo->used_blocks, sbinfo->max_blocks) > 0) || 1400 + if (percpu_counter_compare(&sbinfo->used_blocks, 1401 + sbinfo->max_blocks) >= 0 || 1402 1402 shmem_acct_block(info->flags)) { 1403 1403 spin_unlock(&info->lock); 1404 1404 error = -ENOSPC;
+13 -11
mm/vmscan.c
··· 41 41 #include <linux/memcontrol.h> 42 42 #include <linux/delayacct.h> 43 43 #include <linux/sysctl.h> 44 + #include <linux/oom.h> 44 45 45 46 #include <asm/tlbflush.h> 46 47 #include <asm/div64.h> ··· 1989 1988 return zone->pages_scanned < zone_reclaimable_pages(zone) * 6; 1990 1989 } 1991 1990 1992 - /* 1993 - * As hibernation is going on, kswapd is freezed so that it can't mark 1994 - * the zone into all_unreclaimable. It can't handle OOM during hibernation. 1995 - * So let's check zone's unreclaimable in direct reclaim as well as kswapd. 1996 - */ 1991 + /* All zones in zonelist are unreclaimable? */ 1997 1992 static bool all_unreclaimable(struct zonelist *zonelist, 1998 1993 struct scan_control *sc) 1999 1994 { 2000 1995 struct zoneref *z; 2001 1996 struct zone *zone; 2002 - bool all_unreclaimable = true; 2003 1997 2004 1998 for_each_zone_zonelist_nodemask(zone, z, zonelist, 2005 1999 gfp_zone(sc->gfp_mask), sc->nodemask) { ··· 2002 2006 continue; 2003 2007 if (!cpuset_zone_allowed_hardwall(zone, GFP_KERNEL)) 2004 2008 continue; 2005 - if (zone_reclaimable(zone)) { 2006 - all_unreclaimable = false; 2007 - break; 2008 - } 2009 + if (!zone->all_unreclaimable) 2010 + return false; 2009 2011 } 2010 2012 2011 - return all_unreclaimable; 2013 + return true; 2012 2014 } 2013 2015 2014 2016 /* ··· 2101 2107 2102 2108 if (sc->nr_reclaimed) 2103 2109 return sc->nr_reclaimed; 2110 + 2111 + /* 2112 + * As hibernation is going on, kswapd is freezed so that it can't mark 2113 + * the zone into all_unreclaimable. Thus bypassing all_unreclaimable 2114 + * check. 2115 + */ 2116 + if (oom_killer_disabled) 2117 + return 0; 2104 2118 2105 2119 /* top priority shrink_zones still had more to do? don't OOM, then */ 2106 2120 if (scanning_global_lru(sc) && !all_unreclaimable(zonelist, sc))
+15 -3
mm/vmstat.c
··· 321 321 /*
322 322 * The fetching of the stat_threshold is racy. We may apply
323 323 * a counter threshold to the wrong the cpu if we get
324 - * rescheduled while executing here. However, the following
325 - * will apply the threshold again and therefore bring the
326 - * counter under the threshold.
324 + * rescheduled while executing here. However, the next
325 + * counter update will apply the threshold again and
326 + * therefore bring the counter under the threshold again.
327 + *
328 + * Most of the time the thresholds are the same anyways
329 + * for all cpus in a zone.
327 330 */
328 331 t = this_cpu_read(pcp->stat_threshold);
329 332
··· 948 945 "unevictable_pgs_cleared",
949 946 "unevictable_pgs_stranded",
950 947 "unevictable_pgs_mlockfreed",
948 +
949 + #ifdef CONFIG_TRANSPARENT_HUGEPAGE
950 + "thp_fault_alloc",
951 + "thp_fault_fallback",
952 + "thp_collapse_alloc",
953 + "thp_collapse_alloc_failed",
954 + "thp_split",
951 955 #endif
956 +
957 + #endif /* CONFIG_VM_EVENTS_COUNTERS */
952 958 };
953 959
954 960 static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
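The reworded comment describes how per-cpu vmstat deltas are folded into the zone-wide counter only once they cross the per-cpu stat_threshold, so reading the threshold on the "wrong" CPU after a reschedule is harmless: the next update applies the (usually identical) threshold again. A simplified single-counter model of that fold logic is sketched below; the names and the lack of preemption/atomicity handling are inventions for illustration, not the kernel's __mod_zone_page_state().

#include <stdio.h>
#include <stdlib.h>

/* Simplified model: updates accumulate in a per-cpu "diff" and are folded
 * into the global count only once |diff| exceeds the threshold. */
struct zone_counter {
	long global;    /* zone-wide value, what /proc/zoneinfo would report */
	long diff;      /* this cpu's not-yet-folded delta */
	long threshold;
};

static void mod_state(struct zone_counter *zc, long delta)
{
	zc->diff += delta;

	if (labs(zc->diff) > zc->threshold) {
		zc->global += zc->diff;
		zc->diff = 0;
	}
}

int main(void)
{
	struct zone_counter zc = { .global = 0, .diff = 0, .threshold = 32 };

	for (int i = 0; i < 100; i++)
		mod_state(&zc, 1);

	/* global lags the true value by at most the threshold */
	printf("global=%ld diff=%ld\n", zc.global, zc.diff);
	return 0;
}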
+4 -25
net/9p/client.c
··· 929 929 }
930 930 EXPORT_SYMBOL(p9_client_attach);
931 931
932 - struct p9_fid *p9_client_walk(struct p9_fid *oldfid, int nwname, char **wnames,
933 - int clone)
932 + struct p9_fid *p9_client_walk(struct p9_fid *oldfid, uint16_t nwname,
933 + char **wnames, int clone)
934 934 {
935 935 int err;
936 936 struct p9_client *clnt;
937 937 struct p9_fid *fid;
938 938 struct p9_qid *wqids;
939 939 struct p9_req_t *req;
940 - int16_t nwqids, count;
940 + uint16_t nwqids, count;
941 941
942 942 err = 0;
943 943 wqids = NULL;
··· 955 955 fid = oldfid;
956 956
957 957
958 - P9_DPRINTK(P9_DEBUG_9P, ">>> TWALK fids %d,%d nwname %d wname[0] %s\n",
958 + P9_DPRINTK(P9_DEBUG_9P, ">>> TWALK fids %d,%d nwname %ud wname[0] %s\n",
959 959 oldfid->fid, fid->fid, nwname, wnames ? wnames[0] : NULL);
960 960
961 961 req = p9_client_rpc(clnt, P9_TWALK, "ddT", oldfid->fid, fid->fid,
··· 1219 1219 return err;
1220 1220 }
1221 1221 EXPORT_SYMBOL(p9_client_fsync);
1222 -
1223 - int p9_client_sync_fs(struct p9_fid *fid)
1224 - {
1225 - int err = 0;
1226 - struct p9_req_t *req;
1227 - struct p9_client *clnt;
1228 -
1229 - P9_DPRINTK(P9_DEBUG_9P, ">>> TSYNC_FS fid %d\n", fid->fid);
1230 -
1231 - clnt = fid->clnt;
1232 - req = p9_client_rpc(clnt, P9_TSYNCFS, "d", fid->fid);
1233 - if (IS_ERR(req)) {
1234 - err = PTR_ERR(req);
1235 - goto error;
1236 - }
1237 - P9_DPRINTK(P9_DEBUG_9P, "<<< RSYNCFS fid %d\n", fid->fid);
1238 - p9_free_req(clnt, req);
1239 - error:
1240 - return err;
1241 - }
1242 - EXPORT_SYMBOL(p9_client_sync_fs);
1243 1222
1244 1223 int p9_client_clunk(struct p9_fid *fid)
1245 1224 {
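On the type change: the walk element count travels on the wire as an unsigned 16-bit field in the 9P Twalk message, so uint16_t matches the protocol where a plain or signed 16-bit count would not. A tiny sketch of packing such a count little-endian, in the spirit of the protocol's 2-byte format, is below; write_u16() is an invented helper, not the kernel's p9pdu_writef().

#include <stdint.h>
#include <stdio.h>

/* Sketch: encode an unsigned 16-bit count little-endian, the way a
 * Twalk element count is laid out on the wire. */
static void write_u16(uint8_t *buf, uint16_t val)
{
	buf[0] = val & 0xff;
	buf[1] = (val >> 8) & 0xff;
}

int main(void)
{
	uint8_t buf[2];
	uint16_t nwname = 3; /* e.g. walking three path elements */

	write_u16(buf, nwname);
	printf("%02x %02x\n", buf[0], buf[1]); /* prints: 03 00 */
	return 0;
}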
+4 -3
net/9p/protocol.c
··· 265 265 }
266 266 break;
267 267 case 'T':{
268 - int16_t *nwname = va_arg(ap, int16_t *);
268 + uint16_t *nwname = va_arg(ap, uint16_t *);
269 269 char ***wnames = va_arg(ap, char ***);
270 270
271 271 errcode = p9pdu_readf(pdu, proto_version,
··· 468 468 case 'E':{
469 469 int32_t cnt = va_arg(ap, int32_t);
470 470 const char *k = va_arg(ap, const void *);
471 - const char *u = va_arg(ap, const void *);
471 + const char __user *u = va_arg(ap,
472 + const void __user *);
472 473 errcode = p9pdu_writef(pdu, proto_version, "d",
473 474 cnt);
474 475 if (!errcode && pdu_write_urw(pdu, k, u, cnt))
··· 496 495 }
497 496 break;
498 497 case 'T':{
499 - int16_t nwname = va_arg(ap, int);
498 + uint16_t nwname = va_arg(ap, int);
500 499 const char **wnames = va_arg(ap, const char **);
501 500
502 501 errcode = p9pdu_writef(pdu, proto_version, "w",
+1 -1
net/9p/trans_common.c
··· 66 66 uint32_t pdata_mapped_pages;
67 67 struct trans_rpage_info *rpinfo;
68 68
69 - *pdata_off = (size_t)req->tc->pubuf & (PAGE_SIZE-1);
69 + *pdata_off = (__force size_t)req->tc->pubuf & (PAGE_SIZE-1);
70 70
71 71 if (*pdata_off)
72 72 first_page_bytes = min(((size_t)PAGE_SIZE - *pdata_off),
+11 -4
net/9p/trans_virtio.c
··· 326 326 outp = pack_sg_list_p(chan->sg, out, VIRTQUEUE_NUM,
327 327 pdata_off, rpinfo->rp_data, pdata_len);
328 328 } else {
329 - char *pbuf = req->tc->pubuf ? req->tc->pubuf :
330 - req->tc->pkbuf;
329 + char *pbuf;
330 + if (req->tc->pubuf)
331 + pbuf = (__force char *) req->tc->pubuf;
332 + else
333 + pbuf = req->tc->pkbuf;
331 334 outp = pack_sg_list(chan->sg, out, VIRTQUEUE_NUM, pbuf,
332 335 req->tc->pbuf_size);
333 336 }
··· 355 352 in = pack_sg_list_p(chan->sg, out+inp, VIRTQUEUE_NUM,
356 353 pdata_off, rpinfo->rp_data, pdata_len);
357 354 } else {
358 - char *pbuf = req->tc->pubuf ? req->tc->pubuf :
359 - req->tc->pkbuf;
355 + char *pbuf;
356 + if (req->tc->pubuf)
357 + pbuf = (__force char *) req->tc->pubuf;
358 + else
359 + pbuf = req->tc->pkbuf;
360 +
360 361 in = pack_sg_list(chan->sg, out+inp, VIRTQUEUE_NUM,
361 362 pbuf, req->tc->pbuf_size);
362 363 }
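The __force casts in the two 9p transport files exist for sparse: pubuf is declared as a user-space pointer, so converting it to a plain kernel pointer or to size_t would otherwise trip sparse's address-space checking. The annotations are compile-time only and expand to nothing in a normal build. Below is a condensed, standalone model of how such annotations are typically defined (loosely modelled on include/linux/compiler.h; simplified, not the full kernel definitions).

/* Condensed model of the sparse annotations.  Under sparse (__CHECKER__)
 * they mark pointer address spaces; in a normal build they vanish, so a
 * __force cast has no runtime cost. */
#ifdef __CHECKER__
# define __user		__attribute__((noderef, address_space(1)))
# define __force	__attribute__((force))
#else
# define __user
# define __force
#endif

#include <stddef.h>
#include <stdio.h>

int main(void)
{
	char buf[8] = "9p";
	const char __user *pubuf = buf;	/* pretend this came from userspace */

	/* Without __force, sparse would warn about dropping the __user
	 * address space in these two conversions. */
	size_t off = (__force size_t)pubuf & 7;
	const char *pbuf = (__force const char *)pubuf;

	printf("%s offset-in-8=%zu\n", pbuf, off);
	return 0;
}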
+9 -3
net/ceph/osd_client.c
··· 579 579
580 580 list_for_each_entry_safe(req, nreq, &osd->o_linger_requests,
581 581 r_linger_osd) {
582 - __unregister_linger_request(osdc, req);
582 + /*
583 + * reregister request prior to unregistering linger so
584 + * that r_osd is preserved.
585 + */
586 + BUG_ON(!list_empty(&req->r_req_lru_item));
583 587 __register_request(osdc, req);
584 - list_move(&req->r_req_lru_item, &osdc->req_unsent);
588 + list_add(&req->r_req_lru_item, &osdc->req_unsent);
589 + list_add(&req->r_osd_item, &req->r_osd->o_requests);
590 + __unregister_linger_request(osdc, req);
585 591 dout("requeued lingering %p tid %llu osd%d\n", req, req->r_tid,
586 592 osd->o_osd);
587 593 }
··· 804 798 req->r_request->hdr.tid = cpu_to_le64(req->r_tid);
805 799 INIT_LIST_HEAD(&req->r_req_lru_item);
806 800
807 - dout("register_request %p tid %lld\n", req, req->r_tid);
801 + dout("__register_request %p tid %lld\n", req, req->r_tid);
808 802 __insert_request(osdc, req);
809 803 ceph_osdc_get_request(req);
810 804 osdc->num_requests++;
+1 -1
tools/perf/util/cgroup.c
··· 13 13 {
14 14 FILE *fp;
15 15 char mountpoint[MAX_PATH+1], tokens[MAX_PATH+1], type[MAX_PATH+1];
16 - char *token, *saved_ptr;
16 + char *token, *saved_ptr = NULL;
17 17 int found = 0;
18 18
19 19 fp = fopen("/proc/mounts", "r");
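saved_ptr here is the saveptr argument later handed to strtok_r(); strtok_r ignores its initial value on the first call (when the string argument is non-NULL), so the NULL initialization mainly gives the variable a defined value on every path and silences "may be used uninitialized" warnings. A minimal standalone strtok_r example in the same spirit (invented input string, not the perf code):

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Comma-separated options, similar in shape to a /proc/mounts field. */
	char tokens[] = "rw,nosuid,nodev,cpuacct,cpu";
	char *token, *saved_ptr = NULL;	/* NULL init: defined on all paths */

	for (token = strtok_r(tokens, ",", &saved_ptr);
	     token != NULL;
	     token = strtok_r(NULL, ",", &saved_ptr))
		printf("option: %s\n", token);

	return 0;
}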