Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 3.13-rc2 into driver-core-next

We want those fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+5075 -1807
-11
Documentation/Changes
··· 196 196 as root before you can use this. You'll probably also want to 197 197 get the user-space microcode_ctl utility to use with this. 198 198 199 - Powertweak 200 - ---------- 201 - 202 - If you are running v0.1.17 or earlier, you should upgrade to 203 - version v0.99.0 or higher. Running old versions may cause problems 204 - with programs using shared memory. 205 - 206 199 udev 207 200 ---- 208 201 udev is a userspace application for populating /dev dynamically with ··· 358 365 Intel P6 microcode 359 366 ------------------ 360 367 o <http://www.urbanmyth.org/microcode/> 361 - 362 - Powertweak 363 - ---------- 364 - o <http://powertweak.sourceforge.net/> 365 368 366 369 udev 367 370 ----
+1 -1
Documentation/DocBook/device-drivers.tmpl
··· 58 58 </sect1> 59 59 <sect1><title>Wait queues and Wake events</title> 60 60 !Iinclude/linux/wait.h 61 - !Ekernel/wait.c 61 + !Ekernel/sched/wait.c 62 62 </sect1> 63 63 <sect1><title>High-resolution timers</title> 64 64 !Iinclude/linux/ktime.h
+2 -1
Documentation/devicetree/bindings/i2c/i2c-omap.txt
··· 1 1 I2C for OMAP platforms 2 2 3 3 Required properties : 4 - - compatible : Must be "ti,omap3-i2c" or "ti,omap4-i2c" 4 + - compatible : Must be "ti,omap2420-i2c", "ti,omap2430-i2c", "ti,omap3-i2c" 5 + or "ti,omap4-i2c" 5 6 - ti,hwmods : Must be "i2c<n>", n being the instance number (1-based) 6 7 - #address-cells = <1>; 7 8 - #size-cells = <0>;
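For reference, a hypothetical node using this binding might look like the following sketch. The node label, unit address and reg value are illustrative only and are not taken from this change; consult the OMAP TRM for the real register ranges:

```
i2c1: i2c@48070000 {
	compatible = "ti,omap4-i2c";
	ti,hwmods = "i2c1";
	#address-cells = <1>;
	#size-cells = <0>;
	reg = <0x48070000 0x100>;
};
```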
+17
Documentation/devicetree/bindings/rng/qcom,prng.txt
··· 1 + Qualcomm MSM pseudo random number generator. 2 + 3 + Required properties: 4 + 5 + - compatible : should be "qcom,prng" 6 + - reg : specifies base physical address and size of the registers map 7 + - clocks : phandle to clock-controller plus clock-specifier pair 8 + - clock-names : "core" clocks all registers, FIFO and circuits in PRNG IP block 9 + 10 + Example: 11 + 12 + rng@f9bff000 { 13 + compatible = "qcom,prng"; 14 + reg = <0xf9bff000 0x200>; 15 + clocks = <&clock GCC_PRNG_AHB_CLK>; 16 + clock-names = "core"; 17 + };
Documentation/gpio.txt Documentation/gpio/gpio-legacy.txt
+115
Documentation/gpio/board.txt
··· 1 + GPIO Mappings 2 + ============= 3 + 4 + This document explains how GPIOs can be assigned to given devices and functions. 5 + Note that it only applies to the new descriptor-based interface. For a 6 + description of the deprecated integer-based GPIO interface please refer to 7 + gpio-legacy.txt (actually, there is no real mapping possible with the old 8 + interface; you just fetch an integer from somewhere and request the 9 + corresponding GPIO). 10 + 11 + Platforms that make use of GPIOs must select ARCH_REQUIRE_GPIOLIB (if GPIO usage 12 + is mandatory) or ARCH_WANT_OPTIONAL_GPIOLIB (if GPIO support can be omitted) in 13 + their Kconfig. Then, how GPIOs are mapped depends on what the platform uses to 14 + describe its hardware layout. Currently, mappings can be defined through device 15 + tree, ACPI, and platform data. 16 + 17 + Device Tree 18 + ----------- 19 + GPIOs can easily be mapped to devices and functions in the device tree. The 20 + exact way to do it depends on the GPIO controller providing the GPIOs, see the 21 + device tree bindings for your controller. 22 + 23 + GPIO mappings are defined in the consumer device's node, in a property named 24 + <function>-gpios, where <function> is the function the driver will request 25 + through gpiod_get(). For example: 26 + 27 + foo_device { 28 + compatible = "acme,foo"; 29 + ...
30 + led-gpios = <&gpio 15 GPIO_ACTIVE_HIGH>, /* red */ 31 + <&gpio 16 GPIO_ACTIVE_HIGH>, /* green */ 32 + <&gpio 17 GPIO_ACTIVE_HIGH>; /* blue */ 33 + 34 + power-gpio = <&gpio 1 GPIO_ACTIVE_LOW>; 35 + }; 36 + 37 + This property will make GPIOs 15, 16 and 17 available to the driver under the 38 + "led" function, and GPIO 1 as the "power" GPIO: 39 + 40 + struct gpio_desc *red, *green, *blue, *power; 41 + 42 + red = gpiod_get_index(dev, "led", 0); 43 + green = gpiod_get_index(dev, "led", 1); 44 + blue = gpiod_get_index(dev, "led", 2); 45 + 46 + power = gpiod_get(dev, "power"); 47 + 48 + The led GPIOs will be active-high, while the power GPIO will be active-low (i.e. 49 + gpiod_is_active_low(power) will be true). 50 + 51 + ACPI 52 + ---- 53 + ACPI does not support function names for GPIOs. Therefore, only the "idx" 54 + argument of gpiod_get_index() is useful to discriminate between GPIOs assigned 55 + to a device. The "con_id" argument can still be set for debugging purposes (it 56 + will appear under error messages as well as debug and sysfs nodes). 57 + 58 + Platform Data 59 + ------------- 60 + Finally, GPIOs can be bound to devices and functions using platform data. Board 61 + files that desire to do so need to include the following header: 62 + 63 + #include <linux/gpio/driver.h> 64 + 65 + GPIOs are mapped by the means of tables of lookups, containing instances of the 66 + gpiod_lookup structure. Two macros are defined to help declaring such mappings: 67 + 68 + GPIO_LOOKUP(chip_label, chip_hwnum, dev_id, con_id, flags) 69 + GPIO_LOOKUP_IDX(chip_label, chip_hwnum, dev_id, con_id, idx, flags) 70 + 71 + where 72 + 73 + - chip_label is the label of the gpiod_chip instance providing the GPIO 74 + - chip_hwnum is the hardware number of the GPIO within the chip 75 + - dev_id is the identifier of the device that will make use of this GPIO. If 76 + NULL, the GPIO will be available to all devices. 
77 + - con_id is the name of the GPIO function from the device point of view. It 78 + can be NULL. 79 + - idx is the index of the GPIO within the function. 80 + - flags is defined to specify the following properties: 81 + * GPIOF_ACTIVE_LOW - to configure the GPIO as active-low 82 + * GPIOF_OPEN_DRAIN - GPIO pin is open drain type. 83 + * GPIOF_OPEN_SOURCE - GPIO pin is open source type. 84 + 85 + In the future, these flags might be extended to support more properties. 86 + 87 + Note that GPIO_LOOKUP() is just a shortcut to GPIO_LOOKUP_IDX() where idx = 0. 88 + 89 + A lookup table can then be defined as follows: 90 + 91 + struct gpiod_lookup gpios_table[] = { 92 + GPIO_LOOKUP_IDX("gpio.0", 15, "foo.0", "led", 0, GPIO_ACTIVE_HIGH), 93 + GPIO_LOOKUP_IDX("gpio.0", 16, "foo.0", "led", 1, GPIO_ACTIVE_HIGH), 94 + GPIO_LOOKUP_IDX("gpio.0", 17, "foo.0", "led", 2, GPIO_ACTIVE_HIGH), 95 + GPIO_LOOKUP("gpio.0", 1, "foo.0", "power", GPIO_ACTIVE_LOW), 96 + }; 97 + 98 + And the table can be added by the board code as follows: 99 + 100 + gpiod_add_table(gpios_table, ARRAY_SIZE(gpios_table)); 101 + 102 + The driver controlling "foo.0" will then be able to obtain its GPIOs as follows: 103 + 104 + struct gpio_desc *red, *green, *blue, *power; 105 + 106 + red = gpiod_get_index(dev, "led", 0); 107 + green = gpiod_get_index(dev, "led", 1); 108 + blue = gpiod_get_index(dev, "led", 2); 109 + 110 + power = gpiod_get(dev, "power"); 111 + gpiod_direction_output(power, 1); 112 + 113 + Since the "power" GPIO is mapped as active-low, its actual signal will be 0 114 + after this code. Contrary to the legacy integer GPIO interface, the active-low 115 + property is handled during mapping and is thus transparent to GPIO consumers.
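The lookup flow described above can be sketched in plain C. The struct fields and macro shapes below are stand-ins modeled on the argument list given in the text, not the kernel's real definitions (those live in the header mentioned above), and lookup_hwnum() is a hypothetical helper showing how a (dev_id, con_id, idx) triple resolves to a chip-relative hardware number:

```c
#include <stddef.h>
#include <string.h>

/* Illustrative stand-in; field names follow the description above. */
struct gpiod_lookup {
	const char *chip_label;
	unsigned int chip_hwnum;
	const char *dev_id;
	const char *con_id;
	unsigned int idx;
	unsigned int flags;
};

/* Hypothetical flag values, for illustration only. */
#define GPIO_ACTIVE_HIGH 0
#define GPIO_ACTIVE_LOW  1

#define GPIO_LOOKUP_IDX(_label, _hwnum, _dev, _con, _idx, _flags) \
	{ .chip_label = _label, .chip_hwnum = _hwnum, .dev_id = _dev, \
	  .con_id = _con, .idx = _idx, .flags = _flags }
/* GPIO_LOOKUP() is GPIO_LOOKUP_IDX() with idx = 0, as noted above. */
#define GPIO_LOOKUP(_label, _hwnum, _dev, _con, _flags) \
	GPIO_LOOKUP_IDX(_label, _hwnum, _dev, _con, 0, _flags)

static struct gpiod_lookup gpios_table[] = {
	GPIO_LOOKUP_IDX("gpio.0", 15, "foo.0", "led", 0, GPIO_ACTIVE_HIGH),
	GPIO_LOOKUP_IDX("gpio.0", 16, "foo.0", "led", 1, GPIO_ACTIVE_HIGH),
	GPIO_LOOKUP_IDX("gpio.0", 17, "foo.0", "led", 2, GPIO_ACTIVE_HIGH),
	GPIO_LOOKUP("gpio.0", 1, "foo.0", "power", GPIO_ACTIVE_LOW),
};

/* Resolve (dev_id, con_id, idx) to a hardware GPIO number, or -1. */
static int lookup_hwnum(const char *dev_id, const char *con_id,
			unsigned int idx)
{
	size_t i;

	for (i = 0; i < sizeof(gpios_table) / sizeof(gpios_table[0]); i++) {
		const struct gpiod_lookup *p = &gpios_table[i];

		if (!strcmp(p->dev_id, dev_id) &&
		    !strcmp(p->con_id, con_id) && p->idx == idx)
			return (int)p->chip_hwnum;
	}
	return -1;
}
```

With the table above, lookup_hwnum("foo.0", "led", 2) yields hardware number 17, which is what gpiod_get_index(dev, "led", 2) would resolve to for device "foo.0".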
+197
Documentation/gpio/consumer.txt
··· 1 + GPIO Descriptor Consumer Interface 2 + ================================== 3 + 4 + This document describes the consumer interface of the GPIO framework. Note that 5 + it describes the new descriptor-based interface. For a description of the 6 + deprecated integer-based GPIO interface please refer to gpio-legacy.txt. 7 + 8 + 9 + Guidelines for GPIOs consumers 10 + ============================== 11 + 12 + Drivers that can't work without standard GPIO calls should have Kconfig entries 13 + that depend on GPIOLIB. The functions that allow a driver to obtain and use 14 + GPIOs are available by including the following file: 15 + 16 + #include <linux/gpio/consumer.h> 17 + 18 + All the functions that work with the descriptor-based GPIO interface are 19 + prefixed with gpiod_. The gpio_ prefix is used for the legacy interface. No 20 + other function in the kernel should use these prefixes. 21 + 22 + 23 + Obtaining and Disposing GPIOs 24 + ============================= 25 + 26 + With the descriptor-based interface, GPIOs are identified with an opaque, 27 + non-forgeable handler that must be obtained through a call to one of the 28 + gpiod_get() functions. Like many other kernel subsystems, gpiod_get() takes the 29 + device that will use the GPIO and the function the requested GPIO is supposed to 30 + fulfill: 31 + 32 + struct gpio_desc *gpiod_get(struct device *dev, const char *con_id) 33 + 34 + If a function is implemented by using several GPIOs together (e.g. a simple LED 35 + device that displays digits), an additional index argument can be specified: 36 + 37 + struct gpio_desc *gpiod_get_index(struct device *dev, 38 + const char *con_id, unsigned int idx) 39 + 40 + Both functions return either a valid GPIO descriptor, or an error code checkable 41 + with IS_ERR(). They will never return a NULL pointer. 
42 + 43 + Device-managed variants of these functions are also defined: 44 + 45 + struct gpio_desc *devm_gpiod_get(struct device *dev, const char *con_id) 46 + 47 + struct gpio_desc *devm_gpiod_get_index(struct device *dev, 48 + const char *con_id, 49 + unsigned int idx) 50 + 51 + A GPIO descriptor can be disposed of using the gpiod_put() function: 52 + 53 + void gpiod_put(struct gpio_desc *desc) 54 + 55 + It is strictly forbidden to use a descriptor after calling this function. The 56 + device-managed variant is, unsurprisingly: 57 + 58 + void devm_gpiod_put(struct device *dev, struct gpio_desc *desc) 59 + 60 + 61 + Using GPIOs 62 + =========== 63 + 64 + Setting Direction 65 + ----------------- 66 + The first thing a driver must do with a GPIO is setting its direction. This is 67 + done by invoking one of the gpiod_direction_*() functions: 68 + 69 + int gpiod_direction_input(struct gpio_desc *desc) 70 + int gpiod_direction_output(struct gpio_desc *desc, int value) 71 + 72 + The return value is zero for success, else a negative errno. It should be 73 + checked, since the get/set calls don't return errors and since misconfiguration 74 + is possible. You should normally issue these calls from a task context. However, 75 + for spinlock-safe GPIOs it is OK to use them before tasking is enabled, as part 76 + of early board setup. 77 + 78 + For output GPIOs, the value provided becomes the initial output value. This 79 + helps avoid signal glitching during system startup. 80 + 81 + A driver can also query the current direction of a GPIO: 82 + 83 + int gpiod_get_direction(const struct gpio_desc *desc) 84 + 85 + This function will return either GPIOF_DIR_IN or GPIOF_DIR_OUT. 86 + 87 + Be aware that there is no default direction for GPIOs. 
Therefore, **using a GPIO 88 + without setting its direction first is illegal and will result in undefined 89 + behavior!** 90 + 91 + 92 + Spinlock-Safe GPIO Access 93 + ------------------------- 94 + Most GPIO controllers can be accessed with memory read/write instructions. Those 95 + don't need to sleep, and can safely be done from inside hard (non-threaded) IRQ 96 + handlers and similar contexts. 97 + 98 + Use the following calls to access GPIOs from an atomic context: 99 + 100 + int gpiod_get_value(const struct gpio_desc *desc); 101 + void gpiod_set_value(struct gpio_desc *desc, int value); 102 + 103 + The values are boolean, zero for low, nonzero for high. When reading the value 104 + of an output pin, the value returned should be what's seen on the pin. That 105 + won't always match the specified output value, because of issues including 106 + open-drain signaling and output latencies. 107 + 108 + The get/set calls do not return errors because "invalid GPIO" should have been 109 + reported earlier from gpiod_direction_*(). However, note that not all platforms 110 + can read the value of output pins; those that can't should always return zero. 111 + Also, using these calls for GPIOs that can't safely be accessed without sleeping 112 + (see below) is an error. 113 + 114 + 115 + GPIO Access That May Sleep 116 + -------------------------- 117 + Some GPIO controllers must be accessed using message based buses like I2C or 118 + SPI. Commands to read or write those GPIO values require waiting to get to the 119 + head of a queue to transmit a command and get its response. This requires 120 + sleeping, which can't be done from inside IRQ handlers. 
121 + 122 + Platforms that support this type of GPIO distinguish them from other GPIOs by 123 + returning nonzero from this call: 124 + 125 + int gpiod_cansleep(const struct gpio_desc *desc) 126 + 127 + To access such GPIOs, a different set of accessors is defined: 128 + 129 + int gpiod_get_value_cansleep(const struct gpio_desc *desc) 130 + void gpiod_set_value_cansleep(struct gpio_desc *desc, int value) 131 + 132 + Accessing such GPIOs requires a context which may sleep, for example a threaded 133 + IRQ handler, and those accessors must be used instead of spinlock-safe 134 + accessors without the cansleep() name suffix. 135 + 136 + Other than the fact that these accessors might sleep, and will work on GPIOs 137 + that can't be accessed from hardIRQ handlers, these calls act the same as the 138 + spinlock-safe calls. 139 + 140 + 141 + Active-low State and Raw GPIO Values 142 + ------------------------------------ 143 + Device drivers like to manage the logical state of a GPIO, i.e. the value their 144 + device will actually receive, no matter what lies between it and the GPIO line. 145 + In some cases, it might make sense to control the actual GPIO line value. The 146 + following set of calls ignore the active-low property of a GPIO and work on the 147 + raw line value: 148 + 149 + int gpiod_get_raw_value(const struct gpio_desc *desc) 150 + void gpiod_set_raw_value(struct gpio_desc *desc, int value) 151 + int gpiod_get_raw_value_cansleep(const struct gpio_desc *desc) 152 + void gpiod_set_raw_value_cansleep(struct gpio_desc *desc, int value) 153 + 154 + The active-low state of a GPIO can also be queried using the following call: 155 + 156 + int gpiod_is_active_low(const struct gpio_desc *desc) 157 + 158 + Note that these functions should only be used with great moderation; a driver 159 + should not have to care about the physical line level. 160 + 161 + GPIOs mapped to IRQs 162 + -------------------- 163 + GPIO lines can quite often be used as IRQs. 
You can get the IRQ number 164 + corresponding to a given GPIO using the following call: 165 + 166 + int gpiod_to_irq(const struct gpio_desc *desc) 167 + 168 + It will return an IRQ number, or a negative errno code if the mapping can't be 169 + done (most likely because that particular GPIO cannot be used as IRQ). It is an 170 + unchecked error to use a GPIO that wasn't set up as an input using 171 + gpiod_direction_input(), or to use an IRQ number that didn't originally come 172 + from gpiod_to_irq(). gpiod_to_irq() is not allowed to sleep. 173 + 174 + Non-error values returned from gpiod_to_irq() can be passed to request_irq() or 175 + free_irq(). They will often be stored into IRQ resources for platform devices, 176 + by the board-specific initialization code. Note that IRQ trigger options are 177 + part of the IRQ interface, e.g. IRQF_TRIGGER_FALLING, as are system wakeup 178 + capabilities. 179 + 180 + 181 + Interacting With the Legacy GPIO Subsystem 182 + ========================================== 183 + Many kernel subsystems still handle GPIOs using the legacy integer-based 184 + interface. Although it is strongly encouraged to upgrade them to the safer 185 + descriptor-based API, the following two functions allow you to convert a GPIO 186 + descriptor into the GPIO integer namespace and vice-versa: 187 + 188 + int desc_to_gpio(const struct gpio_desc *desc) 189 + struct gpio_desc *gpio_to_desc(unsigned gpio) 190 + 191 + The GPIO number returned by desc_to_gpio() can be safely used as long as the 192 + GPIO descriptor has not been freed. All the same, a GPIO number passed to 193 + gpio_to_desc() must have been properly acquired, and usage of the returned GPIO 194 + descriptor is only possible after the GPIO number has been released. 195 + 196 + Freeing a GPIO obtained by one API with the other API is forbidden and an 197 + unchecked error.
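Because gpiod_get() never returns NULL, callers must check the result with IS_ERR() rather than a NULL test. The user-space sketch below illustrates that pattern with the usual pointer-encoded-errno helpers reimplemented locally; stub_gpiod_get() and get_gpio_or_errno() are hypothetical names, not gpiolib functions:

```c
#include <errno.h>
#include <string.h>

#define MAX_ERRNO 4095

/* Local reimplementations of the familiar pointer/errno helpers. */
static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

struct gpio_desc { int unused; };
static struct gpio_desc power_desc;

/* Hypothetical stub standing in for gpiod_get(dev, con_id): it returns
 * either a valid descriptor or an errno encoded in the pointer - never
 * NULL, matching the contract described above. */
static struct gpio_desc *stub_gpiod_get(const char *con_id)
{
	if (strcmp(con_id, "power") == 0)
		return &power_desc;
	return ERR_PTR(-ENOENT);
}

/* Returns 0 when the GPIO was obtained, a negative errno otherwise. */
static long get_gpio_or_errno(const char *con_id)
{
	struct gpio_desc *desc = stub_gpiod_get(con_id);

	if (IS_ERR(desc))
		return PTR_ERR(desc);
	return 0;
}
```

The same IS_ERR() check applies unchanged to the devm_ variants listed above.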
+75
Documentation/gpio/driver.txt
··· 1 + GPIO Descriptor Driver Interface 2 + ================================ 3 + 4 + This document serves as a guide for GPIO chip driver writers. Note that it 5 + describes the new descriptor-based interface. For a description of the 6 + deprecated integer-based GPIO interface please refer to gpio-legacy.txt. 7 + 8 + Each GPIO controller driver needs to include the following header, which defines 9 + the structures used to define a GPIO driver: 10 + 11 + #include <linux/gpio/driver.h> 12 + 13 + 14 + Internal Representation of GPIOs 15 + ================================ 16 + 17 + Inside a GPIO driver, individual GPIOs are identified by their hardware number, 18 + which is a unique number between 0 and n, n being the number of GPIOs managed by 19 + the chip. This number is purely internal: the hardware number of a particular 20 + GPIO descriptor is never made visible outside of the driver. 21 + 22 + On top of this internal number, each GPIO also needs to have a global number in 23 + the integer GPIO namespace so that it can be used with the legacy GPIO 24 + interface. Each chip must thus have a "base" number (which can be automatically 25 + assigned), and for each GPIO the global number will be (base + hardware number). 26 + Although the integer representation is considered deprecated, it still has many 27 + users and thus needs to be maintained. 28 + 29 + So for example one platform could use numbers 32-159 for GPIOs, with a 30 + controller defining 128 GPIOs at a "base" of 32; while another platform uses 31 + numbers 0..63 with one set of GPIO controllers, 64-79 with another type of GPIO 32 + controller, and on one particular board 80-95 with an FPGA. The numbers need not 33 + be contiguous; either of those platforms could also use numbers 2000-2063 to 34 + identify GPIOs in a bank of I2C GPIO expanders. 
35 + 36 + 37 + Controller Drivers: gpio_chip 38 + ============================= 39 + 40 + In the gpiolib framework each GPIO controller is packaged as a "struct 41 + gpio_chip" (see linux/gpio/driver.h for its complete definition) with members 42 + common to each controller of that type: 43 + 44 + - methods to establish GPIO direction 45 + - methods used to access GPIO values 46 + - method to return the IRQ number associated to a given GPIO 47 + - flag saying whether calls to its methods may sleep 48 + - optional debugfs dump method (showing extra state like pullup config) 49 + - optional base number (will be automatically assigned if omitted) 50 + - label for diagnostics and GPIOs mapping using platform data 51 + 52 + The code implementing a gpio_chip should support multiple instances of the 53 + controller, possibly using the driver model. That code will configure each 54 + gpio_chip and issue gpiochip_add(). Removing a GPIO controller should be rare; 55 + use gpiochip_remove() when it is unavoidable. 56 + 57 + Most often a gpio_chip is part of an instance-specific structure with state not 58 + exposed by the GPIO interfaces, such as addressing, power management, and more. 59 + Chips such as codecs will have complex non-GPIO state. 60 + 61 + Any debugfs dump method should normally ignore signals which haven't been 62 + requested as GPIOs. They can use gpiochip_is_requested(), which returns either 63 + NULL or the label associated with that GPIO when it was requested. 64 + 65 + Locking IRQ usage 66 + ----------------- 67 + Input GPIOs can be used as IRQ signals. When this happens, a driver is requested 68 + to mark the GPIO as being used as an IRQ: 69 + 70 + int gpiod_lock_as_irq(struct gpio_desc *desc) 71 + 72 + This will prevent the use of non-irq related GPIO APIs until the GPIO IRQ lock 73 + is released: 74 + 75 + void gpiod_unlock_as_irq(struct gpio_desc *desc)
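The (base + hardware number) rule above is simple enough to show directly. hw_to_global() below is a hypothetical helper, not a gpiolib function:

```c
/* Global GPIO number = chip "base" + chip-relative hardware number.
 * A chip with base 32 and ngpio 128 therefore owns global numbers
 * 32..159, matching the example in the text. */
static int hw_to_global(int base, unsigned int ngpio, unsigned int hwnum)
{
	if (hwnum >= ngpio)
		return -1;	/* not a GPIO of this chip */
	return base + (int)hwnum;
}
```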
+119
Documentation/gpio/gpio.txt
··· 1 + GPIO Interfaces 2 + =============== 3 + 4 + The documents in this directory give detailed instructions on how to access 5 + GPIOs in drivers, and how to write a driver for a device that provides GPIOs 6 + itself. 7 + 8 + Due to the history of GPIO interfaces in the kernel, there are two different 9 + ways to obtain and use GPIOs: 10 + 11 + - The descriptor-based interface is the preferred way to manipulate GPIOs, 12 + and is described by all the files in this directory except gpio-legacy.txt. 13 + - The legacy integer-based interface which is considered deprecated (but still 14 + usable for compatibility reasons) is documented in gpio-legacy.txt. 15 + 16 + The remainder of this document applies to the new descriptor-based interface. 17 + gpio-legacy.txt contains the same information applied to the legacy 18 + integer-based interface. 19 + 20 + 21 + What is a GPIO? 22 + =============== 23 + 24 + A "General Purpose Input/Output" (GPIO) is a flexible software-controlled 25 + digital signal. They are provided from many kinds of chip, and are familiar 26 + to Linux developers working with embedded and custom hardware. Each GPIO 27 + represents a bit connected to a particular pin, or "ball" on Ball Grid Array 28 + (BGA) packages. Board schematics show which external hardware connects to 29 + which GPIOs. Drivers can be written generically, so that board setup code 30 + passes such pin configuration data to drivers. 31 + 32 + System-on-Chip (SOC) processors heavily rely on GPIOs. In some cases, every 33 + non-dedicated pin can be configured as a GPIO; and most chips have at least 34 + several dozen of them. Programmable logic devices (like FPGAs) can easily 35 + provide GPIOs; multifunction chips like power managers, and audio codecs 36 + often have a few such pins to help with pin scarcity on SOCs; and there are 37 + also "GPIO Expander" chips that connect using the I2C or SPI serial buses. 
38 + Most PC southbridges have a few dozen GPIO-capable pins (with only the BIOS 39 + firmware knowing how they're used). 40 + 41 + The exact capabilities of GPIOs vary between systems. Common options: 42 + 43 + - Output values are writable (high=1, low=0). Some chips also have 44 + options about how that value is driven, so that for example only one 45 + value might be driven, supporting "wire-OR" and similar schemes for the 46 + other value (notably, "open drain" signaling). 47 + 48 + - Input values are likewise readable (1, 0). Some chips support readback 49 + of pins configured as "output", which is very useful in such "wire-OR" 50 + cases (to support bidirectional signaling). GPIO controllers may have 51 + input de-glitch/debounce logic, sometimes with software controls. 52 + 53 + - Inputs can often be used as IRQ signals, often edge triggered but 54 + sometimes level triggered. Such IRQs may be configurable as system 55 + wakeup events, to wake the system from a low power state. 56 + 57 + - Usually a GPIO will be configurable as either input or output, as needed 58 + by different product boards; single direction ones exist too. 59 + 60 + - Most GPIOs can be accessed while holding spinlocks, but those accessed 61 + through a serial bus normally can't. Some systems support both types. 62 + 63 + On a given board each GPIO is used for one specific purpose like monitoring 64 + MMC/SD card insertion/removal, detecting card write-protect status, driving 65 + a LED, configuring a transceiver, bit-banging a serial bus, poking a hardware 66 + watchdog, sensing a switch, and so on. 67 + 68 + 69 + Common GPIO Properties 70 + ====================== 71 + 72 + These properties are met through all the other documents of the GPIO interface 73 + and it is useful to understand them, especially if you need to define GPIO 74 + mappings. 
75 + 76 + Active-High and Active-Low 77 + -------------------------- 78 + It is natural to assume that a GPIO is "active" when its output signal is 1 79 + ("high"), and inactive when it is 0 ("low"). However in practice the signal of a 80 + GPIO may be inverted before it reaches its destination, or a device could decide 81 + to have different conventions about what "active" means. Such decisions should 82 + be transparent to device drivers, therefore it is possible to define a GPIO as 83 + being either active-high ("1" means "active", the default) or active-low ("0" 84 + means "active") so that drivers only need to worry about the logical signal and 85 + not about what happens at the line level. 86 + 87 + Open Drain and Open Source 88 + -------------------------- 89 + Sometimes shared signals need to use "open drain" (where only the low signal 90 + level is actually driven), or "open source" (where only the high signal level is 91 + driven) signaling. That term applies to CMOS transistors; "open collector" is 92 + used for TTL. A pullup or pulldown resistor causes the high or low signal level. 93 + This is sometimes called a "wire-AND"; or more practically, from the negative 94 + logic (low=true) perspective this is a "wire-OR". 95 + 96 + One common example of an open drain signal is a shared active-low IRQ line. 97 + Also, bidirectional data bus signals sometimes use open drain signals. 98 + 99 + Some GPIO controllers directly support open drain and open source outputs; many 100 + don't. When you need open drain signaling but your hardware doesn't directly 101 + support it, there's a common idiom you can use to emulate it with any GPIO pin 102 + that can be used as either an input or an output: 103 + 104 + LOW: gpiod_direction_output(gpio, 0) ... this drives the signal and overrides 105 + the pullup. 106 + 107 + HIGH: gpiod_direction_input(gpio) ... this turns off the output, so the pullup 108 + (or some other device) controls the signal. 
109 + 110 + The same logic can be applied to emulate open source signaling, by driving the 111 + high signal and configuring the GPIO as input for low. This open drain/open 112 + source emulation can be handled transparently by the GPIO framework. 113 + 114 + If you are "driving" the signal high but gpiod_get_value(gpio) reports a low 115 + value (after the appropriate rise time passes), you know some other component is 116 + driving the shared signal low. That's not necessarily an error. As one common 117 + example, that's how I2C clocks are stretched: a slave that needs a slower clock 118 + delays the rising edge of SCK, and the I2C master adjusts its signaling rate 119 + accordingly.
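The LOW/HIGH idiom above can be modeled with a toy pin that has an external pull-up. Everything below (struct pin, od_set(), od_demo()) is a hypothetical illustration of the emulation, not gpiolib code:

```c
/* Toy model of one GPIO pin with an external pull-up resistor. */
enum { DIR_IN, DIR_OUT };

struct pin {
	int dir;	/* DIR_IN or DIR_OUT */
	int out;	/* driven value when configured as output */
	int pullup;	/* level the pull-up gives when nobody drives */
};

static void dir_output(struct pin *p, int value)
{
	p->dir = DIR_OUT;
	p->out = value;
}

static void dir_input(struct pin *p)
{
	p->dir = DIR_IN;
}

/* Observed line level: driven value if output, else the pull-up wins. */
static int line_level(const struct pin *p)
{
	return p->dir == DIR_OUT ? p->out : p->pullup;
}

/* Open-drain emulation: drive 0 for LOW, float the pin for HIGH. */
static void od_set(struct pin *p, int value)
{
	if (value)
		dir_input(p);		/* HIGH: stop driving, pull-up wins */
	else
		dir_output(p, 0);	/* LOW: actively drive 0 */
}

/* Apply one open-drain write to a fresh pulled-up pin, report the level. */
static int od_demo(int value)
{
	struct pin p = { DIR_IN, 0, 1 };

	od_set(&p, value);
	return line_level(&p);
}
```

Another agent driving the shared line low would override the pull-up in this model, which is exactly the situation the clock-stretching example describes.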
+155
Documentation/gpio/sysfs.txt
··· 1 + GPIO Sysfs Interface for Userspace 2 + ================================== 3 + 4 + Platforms which use the "gpiolib" implementors framework may choose to 5 + configure a sysfs user interface to GPIOs. This is different from the 6 + debugfs interface, since it provides control over GPIO direction and 7 + value instead of just showing a gpio state summary. Plus, it could be 8 + present on production systems without debugging support. 9 + 10 + Given appropriate hardware documentation for the system, userspace could 11 + know for example that GPIO #23 controls the write protect line used to 12 + protect boot loader segments in flash memory. System upgrade procedures 13 + may need to temporarily remove that protection, first importing a GPIO, 14 + then changing its output state, then updating the code before re-enabling 15 + the write protection. In normal use, GPIO #23 would never be touched, 16 + and the kernel would have no need to know about it. 17 + 18 + Again depending on appropriate hardware documentation, on some systems 19 + userspace GPIO can be used to determine system configuration data that 20 + standard kernels won't know about. And for some tasks, simple userspace 21 + GPIO drivers could be all that the system really needs. 22 + 23 + Note that standard kernel drivers exist for common "LEDs and Buttons" 24 + GPIO tasks: "leds-gpio" and "gpio_keys", respectively. Use those 25 + instead of talking directly to the GPIOs; they integrate with kernel 26 + frameworks better than your userspace code could. 27 + 28 + 29 + Paths in Sysfs 30 + -------------- 31 + There are three kinds of entry in /sys/class/gpio: 32 + 33 + - Control interfaces used to get userspace control over GPIOs; 34 + 35 + - GPIOs themselves; and 36 + 37 + - GPIO controllers ("gpio_chip" instances). 38 + 39 + That's in addition to standard files including the "device" symlink. 40 + 41 + The control interfaces are write-only: 42 + 43 + /sys/class/gpio/ 44 + 45 + "export" ... 
Userspace may ask the kernel to export control of 46 + a GPIO to userspace by writing its number to this file. 47 + 48 + Example: "echo 19 > export" will create a "gpio19" node 49 + for GPIO #19, if that's not requested by kernel code. 50 + 51 + "unexport" ... Reverses the effect of exporting to userspace. 52 + 53 + Example: "echo 19 > unexport" will remove a "gpio19" 54 + node exported using the "export" file. 55 + 56 + GPIO signals have paths like /sys/class/gpio/gpio42/ (for GPIO #42) 57 + and have the following read/write attributes: 58 + 59 + /sys/class/gpio/gpioN/ 60 + 61 + "direction" ... reads as either "in" or "out". This value may 62 + normally be written. Writing as "out" defaults to 63 + initializing the value as low. To ensure glitch free 64 + operation, values "low" and "high" may be written to 65 + configure the GPIO as an output with that initial value. 66 + 67 + Note that this attribute *will not exist* if the kernel 68 + doesn't support changing the direction of a GPIO, or 69 + it was exported by kernel code that didn't explicitly 70 + allow userspace to reconfigure this GPIO's direction. 71 + 72 + "value" ... reads as either 0 (low) or 1 (high). If the GPIO 73 + is configured as an output, this value may be written; 74 + any nonzero value is treated as high. 75 + 76 + If the pin can be configured as an interrupt-generating input 77 + and if it has been configured to generate interrupts (see the 78 + description of "edge"), you can poll(2) on that file and 79 + poll(2) will return whenever the interrupt was triggered. If 80 + you use poll(2), set the events POLLPRI and POLLERR. If you 81 + use select(2), set the file descriptor in exceptfds. After 82 + poll(2) returns, either lseek(2) to the beginning of the sysfs 83 + file and read the new value or close the file and re-open it 84 + to read the value. 85 + 86 + "edge" ... reads as either "none", "rising", "falling", or 87 + "both". 
Write these strings to select the signal edge(s) 88 + that will make poll(2) on the "value" file return. 89 + 90 + This file exists only if the pin can be configured as an 91 + interrupt generating input pin. 92 + 93 + "active_low" ... reads as either 0 (false) or 1 (true). Write 94 + any nonzero value to invert the value attribute both 95 + for reading and writing. Existing and subsequent 96 + poll(2) support configuration via the edge attribute 97 + for "rising" and "falling" edges will follow this 98 + setting. 99 + 100 + GPIO controllers have paths like /sys/class/gpio/gpiochip42/ (for the 101 + controller implementing GPIOs starting at #42) and have the following 102 + read-only attributes: 103 + 104 + /sys/class/gpio/gpiochipN/ 105 + 106 + "base" ... same as N, the first GPIO managed by this chip 107 + 108 + "label" ... provided for diagnostics (not always unique) 109 + 110 + "ngpio" ... how many GPIOs this manages (N to N + ngpio - 1) 111 + 112 + Board documentation should in most cases cover what GPIOs are used for 113 + what purposes. However, those numbers are not always stable; GPIOs on 114 + a daughtercard might be different depending on the base board being used, 115 + or other cards in the stack. In such cases, you may need to use the 116 + gpiochip nodes (possibly in conjunction with schematics) to determine 117 + the correct GPIO number to use for a given signal. 
118 + 119 + 120 + Exporting from Kernel code 121 + -------------------------- 122 + Kernel code can explicitly manage exports of GPIOs which have already been 123 + requested using gpio_request(): 124 + 125 + /* export the GPIO to userspace */ 126 + int gpiod_export(struct gpio_desc *desc, bool direction_may_change); 127 + 128 + /* reverse gpiod_export() */ 129 + void gpiod_unexport(struct gpio_desc *desc); 130 + 131 + /* create a sysfs link to an exported GPIO node */ 132 + int gpiod_export_link(struct device *dev, const char *name, 133 + struct gpio_desc *desc); 134 + 135 + /* change the polarity of a GPIO node in sysfs */ 136 + int gpiod_sysfs_set_active_low(struct gpio_desc *desc, int value); 137 + 138 + After a kernel driver requests a GPIO, it may only be made available in 139 + the sysfs interface by gpiod_export(). The driver can control whether the 140 + signal direction may change. This helps drivers prevent userspace code 141 + from accidentally clobbering important system state. 142 + 143 + This explicit exporting can help with debugging (by making some kinds 144 + of experiments easier), or can provide an always-there interface that's 145 + suitable for documenting as part of a board support package. 146 + 147 + After the GPIO has been exported, gpiod_export_link() allows creating 148 + symlinks from elsewhere in sysfs to the GPIO sysfs node. Drivers can 149 + use this to provide the interface under their own device in sysfs with 150 + a descriptive name. 151 + 152 + Drivers can use gpiod_sysfs_set_active_low() to hide GPIO line polarity 153 + differences between boards from user space. Polarity change can be done both 154 + before and after gpiod_export(), and previously enabled poll(2) support for 155 + either rising or falling edge will be reconfigured to follow this setting.
+5
MAINTAINERS
··· 2142 2142 S: Maintained 2143 2143 F: drivers/usb/chipidea/ 2144 2144 2145 + CHROME HARDWARE PLATFORM SUPPORT 2146 + M: Olof Johansson <olof@lixom.net> 2147 + S: Maintained 2148 + F: drivers/platform/chrome/ 2149 + 2145 2150 CISCO VIC ETHERNET NIC DRIVER 2146 2151 M: Christian Benvenuti <benve@cisco.com> 2147 2152 M: Sujith Sankar <ssujith@cisco.com>
+1 -1
Makefile
··· 1 1 VERSION = 3 2 2 PATCHLEVEL = 13 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc1 4 + EXTRAVERSION = -rc2 5 5 NAME = One Giant Leap for Frogkind 6 6 7 7 # *DOCUMENTATION*
+4
arch/arm/boot/dts/bcm2835.dtsi
··· 85 85 reg = <0x7e205000 0x1000>; 86 86 interrupts = <2 21>; 87 87 clocks = <&clk_i2c>; 88 + #address-cells = <1>; 89 + #size-cells = <0>; 88 90 status = "disabled"; 89 91 }; 90 92 ··· 95 93 reg = <0x7e804000 0x1000>; 96 94 interrupts = <2 21>; 97 95 clocks = <&clk_i2c>; 96 + #address-cells = <1>; 97 + #size-cells = <0>; 98 98 status = "disabled"; 99 99 }; 100 100
+12
arch/arm/boot/dts/cros5250-common.dtsi
··· 27 27 i2c2_bus: i2c2-bus { 28 28 samsung,pin-pud = <0>; 29 29 }; 30 + 31 + max77686_irq: max77686-irq { 32 + samsung,pins = "gpx3-2"; 33 + samsung,pin-function = <0>; 34 + samsung,pin-pud = <0>; 35 + samsung,pin-drv = <0>; 36 + }; 30 37 }; 31 38 32 39 i2c@12C60000 { ··· 42 35 43 36 max77686@09 { 44 37 compatible = "maxim,max77686"; 38 + interrupt-parent = <&gpx3>; 39 + interrupts = <2 0>; 40 + pinctrl-names = "default"; 41 + pinctrl-0 = <&max77686_irq>; 42 + wakeup-source; 45 43 reg = <0x09>; 46 44 47 45 voltage-regulators {
+1 -1
arch/arm/boot/dts/imx6qdl.dtsi
··· 161 161 clocks = <&clks 197>, <&clks 3>, 162 162 <&clks 197>, <&clks 107>, 163 163 <&clks 0>, <&clks 118>, 164 - <&clks 62>, <&clks 139>, 164 + <&clks 0>, <&clks 139>, 165 165 <&clks 0>; 166 166 clock-names = "core", "rxtx0", 167 167 "rxtx1", "rxtx2",
+1 -1
arch/arm/boot/dts/omap-zoom-common.dtsi
··· 13 13 * they probably share the same GPIO IRQ 14 14 * REVISIT: Add timing support from slls644g.pdf 15 15 */ 16 - 8250@3,0 { 16 + uart@3,0 { 17 17 compatible = "ns16550a"; 18 18 reg = <3 0 0x100>; 19 19 bank-width = <2>;
+96
arch/arm/boot/dts/omap2.dtsi
··· 9 9 */ 10 10 11 11 #include <dt-bindings/gpio/gpio.h> 12 + #include <dt-bindings/interrupt-controller/irq.h> 12 13 #include <dt-bindings/pinctrl/omap.h> 13 14 14 15 #include "skeleton.dtsi" ··· 22 21 serial0 = &uart1; 23 22 serial1 = &uart2; 24 23 serial2 = &uart3; 24 + i2c0 = &i2c1; 25 + i2c1 = &i2c2; 25 26 }; 26 27 27 28 cpus { ··· 56 53 ranges; 57 54 ti,hwmods = "l3_main"; 58 55 56 + aes: aes@480a6000 { 57 + compatible = "ti,omap2-aes"; 58 + ti,hwmods = "aes"; 59 + reg = <0x480a6000 0x50>; 60 + dmas = <&sdma 9 &sdma 10>; 61 + dma-names = "tx", "rx"; 62 + }; 63 + 64 + hdq1w: 1w@480b2000 { 65 + compatible = "ti,omap2420-1w"; 66 + ti,hwmods = "hdq1w"; 67 + reg = <0x480b2000 0x1000>; 68 + interrupts = <58>; 69 + }; 70 + 71 + mailbox: mailbox@48094000 { 72 + compatible = "ti,omap2-mailbox"; 73 + ti,hwmods = "mailbox"; 74 + reg = <0x48094000 0x200>; 75 + interrupts = <26>; 76 + }; 77 + 59 78 intc: interrupt-controller@1 { 60 79 compatible = "ti,omap2-intc"; 61 80 interrupt-controller; ··· 88 63 89 64 sdma: dma-controller@48056000 { 90 65 compatible = "ti,omap2430-sdma", "ti,omap2420-sdma"; 66 + ti,hwmods = "dma"; 91 67 reg = <0x48056000 0x1000>; 92 68 interrupts = <12>, 93 69 <13>, ··· 99 73 #dma-requests = <64>; 100 74 }; 101 75 76 + i2c1: i2c@48070000 { 77 + compatible = "ti,omap2-i2c"; 78 + ti,hwmods = "i2c1"; 79 + reg = <0x48070000 0x80>; 80 + #address-cells = <1>; 81 + #size-cells = <0>; 82 + interrupts = <56>; 83 + dmas = <&sdma 27 &sdma 28>; 84 + dma-names = "tx", "rx"; 85 + }; 86 + 87 + i2c2: i2c@48072000 { 88 + compatible = "ti,omap2-i2c"; 89 + ti,hwmods = "i2c2"; 90 + reg = <0x48072000 0x80>; 91 + #address-cells = <1>; 92 + #size-cells = <0>; 93 + interrupts = <57>; 94 + dmas = <&sdma 29 &sdma 30>; 95 + dma-names = "tx", "rx"; 96 + }; 97 + 98 + mcspi1: mcspi@48098000 { 99 + compatible = "ti,omap2-mcspi"; 100 + ti,hwmods = "mcspi1"; 101 + reg = <0x48098000 0x100>; 102 + interrupts = <65>; 103 + dmas = <&sdma 35 &sdma 36 &sdma 37 &sdma 38 104 + &sdma 39 
&sdma 40 &sdma 41 &sdma 42>; 105 + dma-names = "tx0", "rx0", "tx1", "rx1", 106 + "tx2", "rx2", "tx3", "rx3"; 107 + }; 108 + 109 + mcspi2: mcspi@4809a000 { 110 + compatible = "ti,omap2-mcspi"; 111 + ti,hwmods = "mcspi2"; 112 + reg = <0x4809a000 0x100>; 113 + interrupts = <66>; 114 + dmas = <&sdma 43 &sdma 44 &sdma 45 &sdma 46>; 115 + dma-names = "tx0", "rx0", "tx1", "rx1"; 116 + }; 117 + 118 + rng: rng@480a0000 { 119 + compatible = "ti,omap2-rng"; 120 + ti,hwmods = "rng"; 121 + reg = <0x480a0000 0x50>; 122 + interrupts = <36>; 123 + }; 124 + 125 + sham: sham@480a4000 { 126 + compatible = "ti,omap2-sham"; 127 + ti,hwmods = "sham"; 128 + reg = <0x480a4000 0x64>; 129 + interrupts = <51>; 130 + dmas = <&sdma 13>; 131 + dma-names = "rx"; 132 + }; 133 + 102 134 uart1: serial@4806a000 { 103 135 compatible = "ti,omap2-uart"; 104 136 ti,hwmods = "uart1"; 137 + reg = <0x4806a000 0x2000>; 138 + interrupts = <72>; 139 + dmas = <&sdma 49 &sdma 50>; 140 + dma-names = "tx", "rx"; 105 141 clock-frequency = <48000000>; 106 142 }; 107 143 108 144 uart2: serial@4806c000 { 109 145 compatible = "ti,omap2-uart"; 110 146 ti,hwmods = "uart2"; 147 + reg = <0x4806c000 0x400>; 148 + interrupts = <73>; 149 + dmas = <&sdma 51 &sdma 52>; 150 + dma-names = "tx", "rx"; 111 151 clock-frequency = <48000000>; 112 152 }; 113 153 114 154 uart3: serial@4806e000 { 115 155 compatible = "ti,omap2-uart"; 116 156 ti,hwmods = "uart3"; 157 + reg = <0x4806e000 0x400>; 158 + interrupts = <74>; 159 + dmas = <&sdma 53 &sdma 54>; 160 + dma-names = "tx", "rx"; 117 161 clock-frequency = <48000000>; 118 162 }; 119 163
+23
arch/arm/boot/dts/omap2420.dtsi
··· 114 114 dma-names = "tx", "rx"; 115 115 }; 116 116 117 + msdi1: mmc@4809c000 { 118 + compatible = "ti,omap2420-mmc"; 119 + ti,hwmods = "msdi1"; 120 + reg = <0x4809c000 0x80>; 121 + interrupts = <83>; 122 + dmas = <&sdma 61 &sdma 62>; 123 + dma-names = "tx", "rx"; 124 + }; 125 + 117 126 timer1: timer@48028000 { 118 127 compatible = "ti,omap2420-timer"; 119 128 reg = <0x48028000 0x400>; ··· 130 121 ti,hwmods = "timer1"; 131 122 ti,timer-alwon; 132 123 }; 124 + 125 + wd_timer2: wdt@48022000 { 126 + compatible = "ti,omap2-wdt"; 127 + ti,hwmods = "wd_timer2"; 128 + reg = <0x48022000 0x80>; 129 + }; 133 130 }; 131 + }; 132 + 133 + &i2c1 { 134 + compatible = "ti,omap2420-i2c"; 135 + }; 136 + 137 + &i2c2 { 138 + compatible = "ti,omap2420-i2c"; 134 139 };
+49
arch/arm/boot/dts/omap2430.dtsi
··· 175 175 dma-names = "tx", "rx"; 176 176 }; 177 177 178 + mmc1: mmc@4809c000 { 179 + compatible = "ti,omap2-hsmmc"; 180 + reg = <0x4809c000 0x200>; 181 + interrupts = <83>; 182 + ti,hwmods = "mmc1"; 183 + ti,dual-volt; 184 + dmas = <&sdma 61>, <&sdma 62>; 185 + dma-names = "tx", "rx"; 186 + }; 187 + 188 + mmc2: mmc@480b4000 { 189 + compatible = "ti,omap2-hsmmc"; 190 + reg = <0x480b4000 0x200>; 191 + interrupts = <86>; 192 + ti,hwmods = "mmc2"; 193 + dmas = <&sdma 47>, <&sdma 48>; 194 + dma-names = "tx", "rx"; 195 + }; 196 + 178 197 timer1: timer@49018000 { 179 198 compatible = "ti,omap2420-timer"; 180 199 reg = <0x49018000 0x400>; ··· 201 182 ti,hwmods = "timer1"; 202 183 ti,timer-alwon; 203 184 }; 185 + 186 + mcspi3: mcspi@480b8000 { 187 + compatible = "ti,omap2-mcspi"; 188 + ti,hwmods = "mcspi3"; 189 + reg = <0x480b8000 0x100>; 190 + interrupts = <91>; 191 + dmas = <&sdma 15 &sdma 16 &sdma 23 &sdma 24>; 192 + dma-names = "tx0", "rx0", "tx1", "rx1"; 193 + }; 194 + 195 + usb_otg_hs: usb_otg_hs@480ac000 { 196 + compatible = "ti,omap2-musb"; 197 + ti,hwmods = "usb_otg_hs"; 198 + reg = <0x480ac000 0x1000>; 199 + interrupts = <93>; 200 + }; 201 + 202 + wd_timer2: wdt@49016000 { 203 + compatible = "ti,omap2-wdt"; 204 + ti,hwmods = "wd_timer2"; 205 + reg = <0x49016000 0x80>; 206 + }; 204 207 }; 208 + }; 209 + 210 + &i2c1 { 211 + compatible = "ti,omap2430-i2c"; 212 + }; 213 + 214 + &i2c2 { 215 + compatible = "ti,omap2430-i2c"; 205 216 };
+3 -3
arch/arm/mach-omap2/Makefile
··· 19 19 20 20 obj-$(CONFIG_ARCH_OMAP2) += $(omap-2-3-common) $(hwmod-common) 21 21 obj-$(CONFIG_ARCH_OMAP3) += $(omap-2-3-common) $(hwmod-common) $(secure-common) 22 - obj-$(CONFIG_ARCH_OMAP4) += prm44xx.o $(hwmod-common) $(secure-common) 22 + obj-$(CONFIG_ARCH_OMAP4) += $(hwmod-common) $(secure-common) 23 23 obj-$(CONFIG_SOC_AM33XX) += irq.o $(hwmod-common) 24 - obj-$(CONFIG_SOC_OMAP5) += prm44xx.o $(hwmod-common) $(secure-common) 24 + obj-$(CONFIG_SOC_OMAP5) += $(hwmod-common) $(secure-common) 25 25 obj-$(CONFIG_SOC_AM43XX) += $(hwmod-common) $(secure-common) 26 - obj-$(CONFIG_SOC_DRA7XX) += prm44xx.o $(hwmod-common) $(secure-common) 26 + obj-$(CONFIG_SOC_DRA7XX) += $(hwmod-common) $(secure-common) 27 27 28 28 ifneq ($(CONFIG_SND_OMAP_SOC_MCBSP),) 29 29 obj-y += mcbsp.o
-1
arch/arm/mach-omap2/common.h
··· 299 299 extern void omap_sdrc_init(struct omap_sdrc_params *sdrc_cs0, 300 300 struct omap_sdrc_params *sdrc_cs1); 301 301 struct omap2_hsmmc_info; 302 - extern int omap4_twl6030_hsmmc_init(struct omap2_hsmmc_info *controllers); 303 302 extern void omap_reserve(void); 304 303 305 304 struct omap_hwmod;
-78
arch/arm/mach-omap2/display.c
··· 32 32 33 33 #include "soc.h" 34 34 #include "iomap.h" 35 - #include "mux.h" 36 35 #include "control.h" 37 36 #include "display.h" 38 37 #include "prm.h" ··· 101 102 { "dss_hdmi", "omapdss_hdmi", -1 }, 102 103 }; 103 104 104 - static void __init omap4_tpd12s015_mux_pads(void) 105 - { 106 - omap_mux_init_signal("hdmi_cec", 107 - OMAP_PIN_INPUT_PULLUP); 108 - omap_mux_init_signal("hdmi_ddc_scl", 109 - OMAP_PIN_INPUT_PULLUP); 110 - omap_mux_init_signal("hdmi_ddc_sda", 111 - OMAP_PIN_INPUT_PULLUP); 112 - } 113 - 114 - static void __init omap4_hdmi_mux_pads(enum omap_hdmi_flags flags) 115 - { 116 - u32 reg; 117 - u16 control_i2c_1; 118 - 119 - /* 120 - * CONTROL_I2C_1: HDMI_DDC_SDA_PULLUPRESX (bit 28) and 121 - * HDMI_DDC_SCL_PULLUPRESX (bit 24) are set to disable 122 - * internal pull up resistor. 123 - */ 124 - if (flags & OMAP_HDMI_SDA_SCL_EXTERNAL_PULLUP) { 125 - control_i2c_1 = OMAP4_CTRL_MODULE_PAD_CORE_CONTROL_I2C_1; 126 - reg = omap4_ctrl_pad_readl(control_i2c_1); 127 - reg |= (OMAP4_HDMI_DDC_SDA_PULLUPRESX_MASK | 128 - OMAP4_HDMI_DDC_SCL_PULLUPRESX_MASK); 129 - omap4_ctrl_pad_writel(reg, control_i2c_1); 130 - } 131 - } 132 - 133 - static int omap4_dsi_mux_pads(int dsi_id, unsigned lanes) 134 - { 135 - u32 enable_mask, enable_shift; 136 - u32 pipd_mask, pipd_shift; 137 - u32 reg; 138 - 139 - if (dsi_id == 0) { 140 - enable_mask = OMAP4_DSI1_LANEENABLE_MASK; 141 - enable_shift = OMAP4_DSI1_LANEENABLE_SHIFT; 142 - pipd_mask = OMAP4_DSI1_PIPD_MASK; 143 - pipd_shift = OMAP4_DSI1_PIPD_SHIFT; 144 - } else if (dsi_id == 1) { 145 - enable_mask = OMAP4_DSI2_LANEENABLE_MASK; 146 - enable_shift = OMAP4_DSI2_LANEENABLE_SHIFT; 147 - pipd_mask = OMAP4_DSI2_PIPD_MASK; 148 - pipd_shift = OMAP4_DSI2_PIPD_SHIFT; 149 - } else { 150 - return -ENODEV; 151 - } 152 - 153 - reg = omap4_ctrl_pad_readl(OMAP4_CTRL_MODULE_PAD_CORE_CONTROL_DSIPHY); 154 - 155 - reg &= ~enable_mask; 156 - reg &= ~pipd_mask; 157 - 158 - reg |= (lanes << enable_shift) & enable_mask; 159 - reg |= (lanes << 
pipd_shift) & pipd_mask; 160 - 161 - omap4_ctrl_pad_writel(reg, OMAP4_CTRL_MODULE_PAD_CORE_CONTROL_DSIPHY); 162 - 163 - return 0; 164 - } 165 - 166 - int __init omap_hdmi_init(enum omap_hdmi_flags flags) 167 - { 168 - if (cpu_is_omap44xx()) { 169 - omap4_hdmi_mux_pads(flags); 170 - omap4_tpd12s015_mux_pads(); 171 - } 172 - 173 - return 0; 174 - } 175 - 176 105 static int omap_dsi_enable_pads(int dsi_id, unsigned lane_mask) 177 106 { 178 - if (cpu_is_omap44xx()) 179 - return omap4_dsi_mux_pads(dsi_id, lane_mask); 180 - 181 107 return 0; 182 108 } 183 109 184 110 static void omap_dsi_disable_pads(int dsi_id, unsigned lane_mask) 185 111 { 186 - if (cpu_is_omap44xx()) 187 - omap4_dsi_mux_pads(dsi_id, 0); 188 112 } 189 113 190 114 static int omap_dss_set_min_bus_tput(struct device *dev, unsigned long tput)
+19 -39
arch/arm/mach-omap2/gpmc.c
··· 1502 1502 } 1503 1503 1504 1504 /* 1505 + * For some GPMC devices we still need to rely on the bootloader 1506 + * timings because the devices can be connected via FPGA. So far 1507 + * the list is smc91x on the omap2 SDP boards, and 8250 on zooms. 1508 + * REVISIT: Add timing support from slls644g.pdf and from the 1509 + * lan91c96 manual. 1510 + */ 1511 + if (of_device_is_compatible(child, "ns16550a") || 1512 + of_device_is_compatible(child, "smsc,lan91c94") || 1513 + of_device_is_compatible(child, "smsc,lan91c111")) { 1514 + dev_warn(&pdev->dev, 1515 + "%s using bootloader timings on CS%d\n", 1516 + child->name, cs); 1517 + goto no_timings; 1518 + } 1519 + 1520 + /* 1505 1521 * FIXME: gpmc_cs_request() will map the CS to an arbitrary 1506 1522 * location in the gpmc address space. When booting with 1507 1523 * device-tree we want the NOR flash to be mapped to the ··· 1545 1529 gpmc_read_timings_dt(child, &gpmc_t); 1546 1530 gpmc_cs_set_timings(cs, &gpmc_t); 1547 1531 1532 + no_timings: 1548 1533 if (of_platform_device_create(child, NULL, &pdev->dev)) 1549 1534 return 0; 1550 1535 ··· 1556 1539 gpmc_cs_free(cs); 1557 1540 1558 1541 return ret; 1559 - } 1560 - 1561 - /* 1562 - * REVISIT: Add timing support from slls644g.pdf 1563 - */ 1564 - static int gpmc_probe_8250(struct platform_device *pdev, 1565 - struct device_node *child) 1566 - { 1567 - struct resource res; 1568 - unsigned long base; 1569 - int ret, cs; 1570 - 1571 - if (of_property_read_u32(child, "reg", &cs) < 0) { 1572 - dev_err(&pdev->dev, "%s has no 'reg' property\n", 1573 - child->full_name); 1574 - return -ENODEV; 1575 - } 1576 - 1577 - if (of_address_to_resource(child, 0, &res) < 0) { 1578 - dev_err(&pdev->dev, "%s has malformed 'reg' property\n", 1579 - child->full_name); 1580 - return -ENODEV; 1581 - } 1582 - 1583 - ret = gpmc_cs_request(cs, resource_size(&res), &base); 1584 - if (ret < 0) { 1585 - dev_err(&pdev->dev, "cannot request GPMC CS %d\n", cs); 1586 - return ret; 1587 - } 1588 - 1589 
- if (of_platform_device_create(child, NULL, &pdev->dev)) 1590 - return 0; 1591 - 1592 - dev_err(&pdev->dev, "failed to create gpmc child %s\n", child->name); 1593 - 1594 - return -ENODEV; 1595 1542 } 1596 1543 1597 1544 static int gpmc_probe_dt(struct platform_device *pdev) ··· 1599 1618 else if (of_node_cmp(child->name, "onenand") == 0) 1600 1619 ret = gpmc_probe_onenand_child(pdev, child); 1601 1620 else if (of_node_cmp(child->name, "ethernet") == 0 || 1602 - of_node_cmp(child->name, "nor") == 0) 1621 + of_node_cmp(child->name, "nor") == 0 || 1622 + of_node_cmp(child->name, "uart") == 0) 1603 1623 ret = gpmc_probe_generic_child(pdev, child); 1604 - else if (of_node_cmp(child->name, "8250") == 0) 1605 - ret = gpmc_probe_8250(pdev, child); 1606 1624 1607 1625 if (WARN(ret < 0, "%s: probing gpmc child %s failed\n", 1608 1626 __func__, child->full_name))
+7
arch/arm/mach-omap2/omap-secure.h
··· 76 76 { } 77 77 #endif 78 78 79 + #ifdef CONFIG_SOC_HAS_REALTIME_COUNTER 79 80 void set_cntfreq(void); 81 + #else 82 + static inline void set_cntfreq(void) 83 + { 84 + } 85 + #endif 86 + 80 87 #endif /* __ASSEMBLER__ */ 81 88 #endif /* OMAP_ARCH_OMAP_SECURE_H */
-57
arch/arm/mach-omap2/omap4-common.c
··· 35 35 #include "iomap.h" 36 36 #include "common.h" 37 37 #include "mmc.h" 38 - #include "hsmmc.h" 39 38 #include "prminst44xx.h" 40 39 #include "prcm_mpu44xx.h" 41 40 #include "omap4-sar-layout.h" ··· 283 284 omap_wakeupgen_init(); 284 285 irqchip_init(); 285 286 } 286 - 287 - #if defined(CONFIG_MMC_OMAP_HS) || defined(CONFIG_MMC_OMAP_HS_MODULE) 288 - static int omap4_twl6030_hsmmc_late_init(struct device *dev) 289 - { 290 - int irq = 0; 291 - struct platform_device *pdev = container_of(dev, 292 - struct platform_device, dev); 293 - struct omap_mmc_platform_data *pdata = dev->platform_data; 294 - 295 - /* Setting MMC1 Card detect Irq */ 296 - if (pdev->id == 0) { 297 - irq = twl6030_mmc_card_detect_config(); 298 - if (irq < 0) { 299 - dev_err(dev, "%s: Error card detect config(%d)\n", 300 - __func__, irq); 301 - return irq; 302 - } 303 - pdata->slots[0].card_detect_irq = irq; 304 - pdata->slots[0].card_detect = twl6030_mmc_card_detect; 305 - } 306 - return 0; 307 - } 308 - 309 - static __init void omap4_twl6030_hsmmc_set_late_init(struct device *dev) 310 - { 311 - struct omap_mmc_platform_data *pdata; 312 - 313 - /* dev can be null if CONFIG_MMC_OMAP_HS is not set */ 314 - if (!dev) { 315 - pr_err("Failed %s\n", __func__); 316 - return; 317 - } 318 - pdata = dev->platform_data; 319 - pdata->init = omap4_twl6030_hsmmc_late_init; 320 - } 321 - 322 - int __init omap4_twl6030_hsmmc_init(struct omap2_hsmmc_info *controllers) 323 - { 324 - struct omap2_hsmmc_info *c; 325 - 326 - omap_hsmmc_init(controllers); 327 - for (c = controllers; c->mmc; c++) { 328 - /* pdev can be null if CONFIG_MMC_OMAP_HS is not set */ 329 - if (!c->pdev) 330 - continue; 331 - omap4_twl6030_hsmmc_set_late_init(&c->pdev->dev); 332 - } 333 - 334 - return 0; 335 - } 336 - #else 337 - int __init omap4_twl6030_hsmmc_init(struct omap2_hsmmc_info *controllers) 338 - { 339 - return 0; 340 - } 341 - #endif
+1 -1
arch/arm/mach-omap2/pm34xx.c
··· 120 120 * will hang the system. 121 121 */ 122 122 pwrdm_set_next_pwrst(mpu_pwrdm, PWRDM_POWER_ON); 123 - ret = _omap_save_secure_sram((u32 *) 123 + ret = _omap_save_secure_sram((u32 *)(unsigned long) 124 124 __pa(omap3_secure_ram_storage)); 125 125 pwrdm_set_next_pwrst(mpu_pwrdm, mpu_next_state); 126 126 /* Following is for error tracking, it should not happen */
+1 -1
arch/arm/mach-omap2/prm44xx_54xx.h
··· 43 43 extern u32 omap4_prm_vcvp_rmw(u32 mask, u32 bits, u8 offset); 44 44 45 45 #if defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_SOC_OMAP5) || \ 46 - defined(CONFIG_SOC_DRA7XX) 46 + defined(CONFIG_SOC_DRA7XX) || defined(CONFIG_SOC_AM43XX) 47 47 void omap44xx_prm_reconfigure_io_chain(void); 48 48 #else 49 49 static inline void omap44xx_prm_reconfigure_io_chain(void)
-10
arch/arm/mach-tegra/fuse.c
··· 209 209 tegra_sku_id, tegra_cpu_process_id, 210 210 tegra_core_process_id); 211 211 } 212 - 213 - unsigned long long tegra_chip_uid(void) 214 - { 215 - unsigned long long lo, hi; 216 - 217 - lo = tegra_fuse_readl(FUSE_UID_LOW); 218 - hi = tegra_fuse_readl(FUSE_UID_HIGH); 219 - return (hi << 32ull) | lo; 220 - } 221 - EXPORT_SYMBOL(tegra_chip_uid);
+40
arch/arm/mach-vexpress/spc.c
··· 53 53 #define A15_BX_ADDR0 0x68 54 54 #define A7_BX_ADDR0 0x78 55 55 56 + /* SPC CPU/cluster reset status */ 57 + #define STANDBYWFI_STAT 0x3c 58 + #define STANDBYWFI_STAT_A15_CPU_MASK(cpu) (1 << (cpu)) 59 + #define STANDBYWFI_STAT_A7_CPU_MASK(cpu) (1 << (3 + (cpu))) 60 + 56 61 /* SPC system config interface registers */ 57 62 #define SYSCFG_WDATA 0x70 58 63 #define SYSCFG_RDATA 0x74 ··· 216 211 217 212 pwdrn_reg = cluster_is_a15(cluster) ? A15_PWRDN_EN : A7_PWRDN_EN; 218 213 writel_relaxed(enable, info->baseaddr + pwdrn_reg); 214 + } 215 + 216 + static u32 standbywfi_cpu_mask(u32 cpu, u32 cluster) 217 + { 218 + return cluster_is_a15(cluster) ? 219 + STANDBYWFI_STAT_A15_CPU_MASK(cpu) 220 + : STANDBYWFI_STAT_A7_CPU_MASK(cpu); 221 + } 222 + 223 + /** 224 + * ve_spc_cpu_in_wfi(u32 cpu, u32 cluster) 225 + * 226 + * @cpu: mpidr[7:0] bitfield describing CPU affinity level within cluster 227 + * @cluster: mpidr[15:8] bitfield describing cluster affinity level 228 + * 229 + * @return: non-zero if and only if the specified CPU is in WFI 230 + * 231 + * Take care when interpreting the result of this function: a CPU might 232 + * be in WFI temporarily due to idle, and is not necessarily safely 233 + * parked. 234 + */ 235 + int ve_spc_cpu_in_wfi(u32 cpu, u32 cluster) 236 + { 237 + int ret; 238 + u32 mask = standbywfi_cpu_mask(cpu, cluster); 239 + 240 + if (cluster >= MAX_CLUSTERS) 241 + return 1; 242 + 243 + ret = readl_relaxed(info->baseaddr + STANDBYWFI_STAT); 244 + 245 + pr_debug("%s: PCFGREG[0x%X] = 0x%08X, mask = 0x%X\n", 246 + __func__, STANDBYWFI_STAT, ret, mask); 247 + 248 + return ret & mask; 219 249 } 220 250 221 251 static int ve_spc_get_performance(int cluster, u32 *freq)
+1
arch/arm/mach-vexpress/spc.h
··· 20 20 void ve_spc_cpu_wakeup_irq(u32 cluster, u32 cpu, bool set); 21 21 void ve_spc_set_resume_addr(u32 cluster, u32 cpu, u32 addr); 22 22 void ve_spc_powerdown(u32 cluster, bool enable); 23 + int ve_spc_cpu_in_wfi(u32 cpu, u32 cluster); 23 24 24 25 #endif
+61 -5
arch/arm/mach-vexpress/tc2_pm.c
··· 12 12 * published by the Free Software Foundation. 13 13 */ 14 14 15 + #include <linux/delay.h> 15 16 #include <linux/init.h> 16 17 #include <linux/io.h> 17 18 #include <linux/kernel.h> ··· 33 32 #include "spc.h" 34 33 35 34 /* SCC conf registers */ 35 + #define RESET_CTRL 0x018 36 + #define RESET_A15_NCORERESET(cpu) (1 << (2 + (cpu))) 37 + #define RESET_A7_NCORERESET(cpu) (1 << (16 + (cpu))) 38 + 36 39 #define A15_CONF 0x400 37 40 #define A7_CONF 0x500 38 41 #define SYS_INFO 0x700 39 42 #define SPC_BASE 0xb00 43 + 44 + static void __iomem *scc; 40 45 41 46 /* 42 47 * We can't use regular spinlocks. In the switcher case, it is possible ··· 197 190 tc2_pm_down(0); 198 191 } 199 192 193 + static int tc2_core_in_reset(unsigned int cpu, unsigned int cluster) 194 + { 195 + u32 mask = cluster ? 196 + RESET_A7_NCORERESET(cpu) 197 + : RESET_A15_NCORERESET(cpu); 198 + 199 + return !(readl_relaxed(scc + RESET_CTRL) & mask); 200 + } 201 + 202 + #define POLL_MSEC 10 203 + #define TIMEOUT_MSEC 1000 204 + 205 + static int tc2_pm_power_down_finish(unsigned int cpu, unsigned int cluster) 206 + { 207 + unsigned tries; 208 + 209 + pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster); 210 + BUG_ON(cluster >= TC2_CLUSTERS || cpu >= TC2_MAX_CPUS_PER_CLUSTER); 211 + 212 + for (tries = 0; tries < TIMEOUT_MSEC / POLL_MSEC; ++tries) { 213 + /* 214 + * Only examine the hardware state if the target CPU has 215 + * caught up at least as far as tc2_pm_down(): 216 + */ 217 + if (ACCESS_ONCE(tc2_pm_use_count[cpu][cluster]) == 0) { 218 + pr_debug("%s(cpu=%u, cluster=%u): RESET_CTRL = 0x%08X\n", 219 + __func__, cpu, cluster, 220 + readl_relaxed(scc + RESET_CTRL)); 221 + 222 + /* 223 + * We need the CPU to reach WFI, but the power 224 + * controller may put the cluster in reset and 225 + * power it off as soon as that happens, before 226 + * we have a chance to see STANDBYWFI. 
227 + * 228 + * So we need to check for both conditions: 229 + */ 230 + if (tc2_core_in_reset(cpu, cluster) || 231 + ve_spc_cpu_in_wfi(cpu, cluster)) 232 + return 0; /* success: the CPU is halted */ 233 + } 234 + 235 + /* Otherwise, wait and retry: */ 236 + msleep(POLL_MSEC); 237 + } 238 + 239 + return -ETIMEDOUT; /* timeout */ 240 + } 241 + 200 242 static void tc2_pm_suspend(u64 residency) 201 243 { 202 244 unsigned int mpidr, cpu, cluster; ··· 288 232 } 289 233 290 234 static const struct mcpm_platform_ops tc2_pm_power_ops = { 291 - .power_up = tc2_pm_power_up, 292 - .power_down = tc2_pm_power_down, 293 - .suspend = tc2_pm_suspend, 294 - .powered_up = tc2_pm_powered_up, 235 + .power_up = tc2_pm_power_up, 236 + .power_down = tc2_pm_power_down, 237 + .power_down_finish = tc2_pm_power_down_finish, 238 + .suspend = tc2_pm_suspend, 239 + .powered_up = tc2_pm_powered_up, 295 240 }; 296 241 297 242 static bool __init tc2_pm_usage_count_init(void) ··· 326 269 static int __init tc2_pm_init(void) 327 270 { 328 271 int ret, irq; 329 - void __iomem *scc; 330 272 u32 a15_cluster_id, a7_cluster_id, sys_info; 331 273 struct device_node *np; 332 274
+2
arch/arm64/boot/dts/foundation-v8.dts
··· 6 6 7 7 /dts-v1/; 8 8 9 + /memreserve/ 0x80000000 0x00010000; 10 + 9 11 / { 10 12 model = "Foundation-v8A"; 11 13 compatible = "arm,foundation-aarch64", "arm,vexpress";
+3
arch/arm64/include/asm/irqflags.h
··· 56 56 #define local_fiq_enable() asm("msr daifclr, #1" : : : "memory") 57 57 #define local_fiq_disable() asm("msr daifset, #1" : : : "memory") 58 58 59 + #define local_async_enable() asm("msr daifclr, #4" : : : "memory") 60 + #define local_async_disable() asm("msr daifset, #4" : : : "memory") 61 + 59 62 /* 60 63 * Save the current interrupt enable state. 61 64 */
+18 -15
arch/arm64/include/asm/pgtable.h
··· 25 25 * Software defined PTE bits definition. 26 26 */ 27 27 #define PTE_VALID (_AT(pteval_t, 1) << 0) 28 - #define PTE_PROT_NONE (_AT(pteval_t, 1) << 2) /* only when !PTE_VALID */ 29 - #define PTE_FILE (_AT(pteval_t, 1) << 3) /* only when !pte_present() */ 28 + #define PTE_FILE (_AT(pteval_t, 1) << 2) /* only when !pte_present() */ 30 29 #define PTE_DIRTY (_AT(pteval_t, 1) << 55) 31 30 #define PTE_SPECIAL (_AT(pteval_t, 1) << 56) 31 + /* bit 57 for PMD_SECT_SPLITTING */ 32 + #define PTE_PROT_NONE (_AT(pteval_t, 1) << 58) /* only when !PTE_VALID */ 32 33 33 34 /* 34 35 * VMALLOC and SPARSEMEM_VMEMMAP ranges. ··· 255 254 #define pgprot_noncached(prot) \ 256 255 __pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_DEVICE_nGnRnE)) 257 256 #define pgprot_writecombine(prot) \ 258 - __pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_DEVICE_GRE)) 257 + __pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_NORMAL_NC)) 259 258 #define pgprot_dmacoherent(prot) \ 260 259 __pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_NORMAL_NC)) 261 260 #define __HAVE_PHYS_MEM_ACCESS_PROT ··· 358 357 359 358 /* 360 359 * Encode and decode a swap entry: 361 - * bits 0, 2: present (must both be zero) 362 - * bit 3: PTE_FILE 363 - * bits 4-8: swap type 364 - * bits 9-63: swap offset 360 + * bits 0-1: present (must be zero) 361 + * bit 2: PTE_FILE 362 + * bits 3-8: swap type 363 + * bits 9-57: swap offset 365 364 */ 366 - #define __SWP_TYPE_SHIFT 4 365 + #define __SWP_TYPE_SHIFT 3 367 366 #define __SWP_TYPE_BITS 6 367 + #define __SWP_OFFSET_BITS 49 368 368 #define __SWP_TYPE_MASK ((1 << __SWP_TYPE_BITS) - 1) 369 369 #define __SWP_OFFSET_SHIFT (__SWP_TYPE_BITS + __SWP_TYPE_SHIFT) 370 + #define __SWP_OFFSET_MASK ((1UL << __SWP_OFFSET_BITS) - 1) 370 371 371 372 #define __swp_type(x) (((x).val >> __SWP_TYPE_SHIFT) & __SWP_TYPE_MASK) 372 - #define __swp_offset(x) ((x).val >> __SWP_OFFSET_SHIFT) 373 + #define __swp_offset(x) (((x).val >> __SWP_OFFSET_SHIFT) & 
__SWP_OFFSET_MASK) 373 374 #define __swp_entry(type,offset) ((swp_entry_t) { ((type) << __SWP_TYPE_SHIFT) | ((offset) << __SWP_OFFSET_SHIFT) }) 374 375 375 376 #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) ··· 385 382 386 383 /* 387 384 * Encode and decode a file entry: 388 - * bits 0, 2: present (must both be zero) 389 - * bit 3: PTE_FILE 390 - * bits 4-63: file offset / PAGE_SIZE 385 + * bits 0-1: present (must be zero) 386 + * bit 2: PTE_FILE 387 + * bits 3-57: file offset / PAGE_SIZE 391 388 */ 392 389 #define pte_file(pte) (pte_val(pte) & PTE_FILE) 393 - #define pte_to_pgoff(x) (pte_val(x) >> 4) 394 - #define pgoff_to_pte(x) __pte(((x) << 4) | PTE_FILE) 390 + #define pte_to_pgoff(x) (pte_val(x) >> 3) 391 + #define pgoff_to_pte(x) __pte(((x) << 3) | PTE_FILE) 395 392 396 - #define PTE_FILE_MAX_BITS 60 393 + #define PTE_FILE_MAX_BITS 55 397 394 398 395 extern int kern_addr_valid(unsigned long addr); 399 396
+12 -8
arch/arm64/kernel/debug-monitors.c
··· 248 248 int aarch32_break_handler(struct pt_regs *regs) 249 249 { 250 250 siginfo_t info; 251 - unsigned int instr; 251 + u32 arm_instr; 252 + u16 thumb_instr; 252 253 bool bp = false; 253 254 void __user *pc = (void __user *)instruction_pointer(regs); 254 255 ··· 258 257 259 258 if (compat_thumb_mode(regs)) { 260 259 /* get 16-bit Thumb instruction */ 261 - get_user(instr, (u16 __user *)pc); 262 - if (instr == AARCH32_BREAK_THUMB2_LO) { 260 + get_user(thumb_instr, (u16 __user *)pc); 261 + thumb_instr = le16_to_cpu(thumb_instr); 262 + if (thumb_instr == AARCH32_BREAK_THUMB2_LO) { 263 263 /* get second half of 32-bit Thumb-2 instruction */ 264 - get_user(instr, (u16 __user *)(pc + 2)); 265 - bp = instr == AARCH32_BREAK_THUMB2_HI; 264 + get_user(thumb_instr, (u16 __user *)(pc + 2)); 265 + thumb_instr = le16_to_cpu(thumb_instr); 266 + bp = thumb_instr == AARCH32_BREAK_THUMB2_HI; 266 267 } else { 267 - bp = instr == AARCH32_BREAK_THUMB; 268 + bp = thumb_instr == AARCH32_BREAK_THUMB; 268 269 } 269 270 } else { 270 271 /* 32-bit ARM instruction */ 271 - get_user(instr, (u32 __user *)pc); 272 - bp = (instr & ~0xf0000000) == AARCH32_BREAK_ARM; 272 + get_user(arm_instr, (u32 __user *)pc); 273 + arm_instr = le32_to_cpu(arm_instr); 274 + bp = (arm_instr & ~0xf0000000) == AARCH32_BREAK_ARM; 273 275 } 274 276 275 277 if (!bp)
+7 -22
arch/arm64/kernel/entry.S
··· 309 309 #ifdef CONFIG_TRACE_IRQFLAGS 310 310 bl trace_hardirqs_off 311 311 #endif 312 + 313 + irq_handler 314 + 312 315 #ifdef CONFIG_PREEMPT 313 316 get_thread_info tsk 314 - ldr w24, [tsk, #TI_PREEMPT] // get preempt count 315 - add w0, w24, #1 // increment it 316 - str w0, [tsk, #TI_PREEMPT] 317 - #endif 318 - irq_handler 319 - #ifdef CONFIG_PREEMPT 320 - str w24, [tsk, #TI_PREEMPT] // restore preempt count 317 + ldr w24, [tsk, #TI_PREEMPT] // restore preempt count 321 318 cbnz w24, 1f // preempt count != 0 322 319 ldr x0, [tsk, #TI_FLAGS] // get flags 323 320 tbz x0, #TIF_NEED_RESCHED, 1f // needs rescheduling? ··· 504 507 #ifdef CONFIG_TRACE_IRQFLAGS 505 508 bl trace_hardirqs_off 506 509 #endif 507 - get_thread_info tsk 508 - #ifdef CONFIG_PREEMPT 509 - ldr w24, [tsk, #TI_PREEMPT] // get preempt count 510 - add w23, w24, #1 // increment it 511 - str w23, [tsk, #TI_PREEMPT] 512 - #endif 510 + 513 511 irq_handler 514 - #ifdef CONFIG_PREEMPT 515 - ldr w0, [tsk, #TI_PREEMPT] 516 - str w24, [tsk, #TI_PREEMPT] 517 - cmp w0, w23 518 - b.eq 1f 519 - mov x1, #0 520 - str x1, [x1] // BUG 521 - 1: 522 - #endif 512 + get_thread_info tsk 513 + 523 514 #ifdef CONFIG_TRACE_IRQFLAGS 524 515 bl trace_hardirqs_on 525 516 #endif
+19 -21
arch/arm64/kernel/ptrace.c
··· 636 636 637 637 for (i = 0; i < num_regs; ++i) { 638 638 unsigned int idx = start + i; 639 - void *reg; 639 + compat_ulong_t reg; 640 640 641 641 switch (idx) { 642 642 case 15: 643 - reg = (void *)&task_pt_regs(target)->pc; 643 + reg = task_pt_regs(target)->pc; 644 644 break; 645 645 case 16: 646 - reg = (void *)&task_pt_regs(target)->pstate; 646 + reg = task_pt_regs(target)->pstate; 647 647 break; 648 648 case 17: 649 - reg = (void *)&task_pt_regs(target)->orig_x0; 649 + reg = task_pt_regs(target)->orig_x0; 650 650 break; 651 651 default: 652 - reg = (void *)&task_pt_regs(target)->regs[idx]; 652 + reg = task_pt_regs(target)->regs[idx]; 653 653 } 654 654 655 - ret = copy_to_user(ubuf, reg, sizeof(compat_ulong_t)); 656 - 655 + ret = copy_to_user(ubuf, &reg, sizeof(reg)); 657 656 if (ret) 658 657 break; 659 - else 660 - ubuf += sizeof(compat_ulong_t); 658 + 659 + ubuf += sizeof(reg); 661 660 } 662 661 663 662 return ret; ··· 684 685 685 686 for (i = 0; i < num_regs; ++i) { 686 687 unsigned int idx = start + i; 687 - void *reg; 688 + compat_ulong_t reg; 689 + 690 + ret = copy_from_user(&reg, ubuf, sizeof(reg)); 691 + if (ret) 692 + return ret; 693 + 694 + ubuf += sizeof(reg); 688 695 689 696 switch (idx) { 690 697 case 15: 691 - reg = (void *)&newregs.pc; 698 + newregs.pc = reg; 692 699 break; 693 700 case 16: 694 - reg = (void *)&newregs.pstate; 701 + newregs.pstate = reg; 695 702 break; 696 703 case 17: 697 - reg = (void *)&newregs.orig_x0; 704 + newregs.orig_x0 = reg; 698 705 break; 699 706 default: 700 - reg = (void *)&newregs.regs[idx]; 707 + newregs.regs[idx] = reg; 701 708 } 702 709 703 - ret = copy_from_user(reg, ubuf, sizeof(compat_ulong_t)); 704 - 705 - if (ret) 706 - goto out; 707 - else 708 - ubuf += sizeof(compat_ulong_t); 709 710 } 710 711 711 712 if (valid_user_regs(&newregs.user_regs)) ··· 713 714 else 714 715 ret = -EINVAL; 715 716 716 - out: 717 717 return ret; 718 718 } 719 719
+5
arch/arm64/kernel/setup.c
··· 205 205 206 206 void __init setup_arch(char **cmdline_p) 207 207 { 208 + /* 209 + * Unmask asynchronous aborts early to catch possible system errors. 210 + */ 211 + local_async_enable(); 212 + 208 213 setup_processor(); 209 214 210 215 setup_machine_fdt(__fdt_pointer);
+1
arch/arm64/kernel/smp.c
··· 160 160 161 161 local_irq_enable(); 162 162 local_fiq_enable(); 163 + local_async_enable(); 163 164 164 165 /* 165 166 * OK, it's off to the idle thread for us
+7
arch/powerpc/Makefile
··· 75 75 GNUTARGET := powerpcle 76 76 MULTIPLEWORD := -mno-multiple 77 77 else 78 + ifeq ($(call cc-option-yn,-mbig-endian),y) 78 79 override CC += -mbig-endian 79 80 override AS += -mbig-endian 81 + endif 80 82 override LD += -EB 81 83 LDEMULATION := ppc 82 84 GNUTARGET := powerpc ··· 130 128 CFLAGS-$(CONFIG_POWER6_CPU) += $(call cc-option,-mcpu=power6) 131 129 CFLAGS-$(CONFIG_POWER7_CPU) += $(call cc-option,-mcpu=power7) 132 130 131 + # Altivec option not allowed with e500mc64 in GCC. 132 + ifeq ($(CONFIG_ALTIVEC),y) 133 + E5500_CPU := -mcpu=powerpc64 134 + else 133 135 E5500_CPU := $(call cc-option,-mcpu=e500mc64,-mcpu=powerpc64) 136 + endif 134 137 CFLAGS-$(CONFIG_E5500_CPU) += $(E5500_CPU) 135 138 CFLAGS-$(CONFIG_E6500_CPU) += $(call cc-option,-mcpu=e6500,$(E5500_CPU)) 136 139
+2 -2
arch/powerpc/boot/dts/xcalibur1501.dts
··· 637 637 tlu@2f000 { 638 638 compatible = "fsl,mpc8572-tlu", "fsl_tlu"; 639 639 reg = <0x2f000 0x1000>; 640 - interupts = <61 2 >; 640 + interrupts = <61 2>; 641 641 interrupt-parent = <&mpic>; 642 642 }; 643 643 644 644 tlu@15000 { 645 645 compatible = "fsl,mpc8572-tlu", "fsl_tlu"; 646 646 reg = <0x15000 0x1000>; 647 - interupts = <75 2>; 647 + interrupts = <75 2>; 648 648 interrupt-parent = <&mpic>; 649 649 }; 650 650 };
+2 -2
arch/powerpc/boot/dts/xpedite5301.dts
··· 547 547 tlu@2f000 { 548 548 compatible = "fsl,mpc8572-tlu", "fsl_tlu"; 549 549 reg = <0x2f000 0x1000>; 550 - interupts = <61 2 >; 550 + interrupts = <61 2>; 551 551 interrupt-parent = <&mpic>; 552 552 }; 553 553 554 554 tlu@15000 { 555 555 compatible = "fsl,mpc8572-tlu", "fsl_tlu"; 556 556 reg = <0x15000 0x1000>; 557 - interupts = <75 2>; 557 + interrupts = <75 2>; 558 558 interrupt-parent = <&mpic>; 559 559 }; 560 560 };
+2 -2
arch/powerpc/boot/dts/xpedite5330.dts
··· 583 583 tlu@2f000 { 584 584 compatible = "fsl,mpc8572-tlu", "fsl_tlu"; 585 585 reg = <0x2f000 0x1000>; 586 - interupts = <61 2 >; 586 + interrupts = <61 2>; 587 587 interrupt-parent = <&mpic>; 588 588 }; 589 589 590 590 tlu@15000 { 591 591 compatible = "fsl,mpc8572-tlu", "fsl_tlu"; 592 592 reg = <0x15000 0x1000>; 593 - interupts = <75 2>; 593 + interrupts = <75 2>; 594 594 interrupt-parent = <&mpic>; 595 595 }; 596 596 };
+2 -2
arch/powerpc/boot/dts/xpedite5370.dts
··· 545 545 tlu@2f000 { 546 546 compatible = "fsl,mpc8572-tlu", "fsl_tlu"; 547 547 reg = <0x2f000 0x1000>; 548 - interupts = <61 2 >; 548 + interrupts = <61 2>; 549 549 interrupt-parent = <&mpic>; 550 550 }; 551 551 552 552 tlu@15000 { 553 553 compatible = "fsl,mpc8572-tlu", "fsl_tlu"; 554 554 reg = <0x15000 0x1000>; 555 - interupts = <75 2>; 555 + interrupts = <75 2>; 556 556 interrupt-parent = <&mpic>; 557 557 }; 558 558 };
+14
arch/powerpc/boot/util.S
··· 71 71 add r4,r4,r5 72 72 addi r4,r4,-1 73 73 divw r4,r4,r5 /* BUS ticks */ 74 + #ifdef CONFIG_8xx 75 + 1: mftbu r5 76 + mftb r6 77 + mftbu r7 78 + #else 74 79 1: mfspr r5, SPRN_TBRU 75 80 mfspr r6, SPRN_TBRL 76 81 mfspr r7, SPRN_TBRU 82 + #endif 77 83 cmpw 0,r5,r7 78 84 bne 1b /* Get [synced] base time */ 79 85 addc r9,r6,r4 /* Compute end time */ 80 86 addze r8,r5 87 + #ifdef CONFIG_8xx 88 + 2: mftbu r5 89 + #else 81 90 2: mfspr r5, SPRN_TBRU 91 + #endif 82 92 cmpw 0,r5,r8 83 93 blt 2b 84 94 bgt 3f 95 + #ifdef CONFIG_8xx 96 + mftb r6 97 + #else 85 98 mfspr r6, SPRN_TBRL 99 + #endif 86 100 cmpw 0,r6,r9 87 101 blt 2b 88 102 3: blr
+1
arch/powerpc/include/asm/pgalloc-64.h
··· 16 16 unsigned long phys; 17 17 unsigned long virt_addr; 18 18 }; 19 + extern struct vmemmap_backing *vmemmap_list; 19 20 20 21 /* 21 22 * Functions that deal with pagetables that could be at any level of
+2
arch/powerpc/include/asm/ppc_asm.h
··· 366 366 cmpwi dest,0; \ 367 367 beq- 90b; \ 368 368 END_FTR_SECTION_NESTED(CPU_FTR_CELL_TB_BUG, CPU_FTR_CELL_TB_BUG, 96) 369 + #elif defined(CONFIG_8xx) 370 + #define MFTB(dest) mftb dest 369 371 #else 370 372 #define MFTB(dest) mfspr dest, SPRN_TBRL 371 373 #endif
+7
arch/powerpc/include/asm/reg.h
··· 1174 1174 1175 1175 #else /* __powerpc64__ */ 1176 1176 1177 + #if defined(CONFIG_8xx) 1178 + #define mftbl() ({unsigned long rval; \ 1179 + asm volatile("mftbl %0" : "=r" (rval)); rval;}) 1180 + #define mftbu() ({unsigned long rval; \ 1181 + asm volatile("mftbu %0" : "=r" (rval)); rval;}) 1182 + #else 1177 1183 #define mftbl() ({unsigned long rval; \ 1178 1184 asm volatile("mfspr %0, %1" : "=r" (rval) : \ 1179 1185 "i" (SPRN_TBRL)); rval;}) 1180 1186 #define mftbu() ({unsigned long rval; \ 1181 1187 asm volatile("mfspr %0, %1" : "=r" (rval) : \ 1182 1188 "i" (SPRN_TBRU)); rval;}) 1189 + #endif 1183 1190 #endif /* !__powerpc64__ */ 1184 1191 1185 1192 #define mttbl(v) asm volatile("mttbl %0":: "r"(v))
+8
arch/powerpc/include/asm/timex.h
··· 29 29 ret = 0; 30 30 31 31 __asm__ __volatile__( 32 + #ifdef CONFIG_8xx 33 + "97: mftb %0\n" 34 + #else 32 35 "97: mfspr %0, %2\n" 36 + #endif 33 37 "99:\n" 34 38 ".section __ftr_fixup,\"a\"\n" 35 39 ".align 2\n" ··· 45 41 " .long 0\n" 46 42 " .long 0\n" 47 43 ".previous" 44 + #ifdef CONFIG_8xx 45 + : "=r" (ret) : "i" (CPU_FTR_601)); 46 + #else 48 47 : "=r" (ret) : "i" (CPU_FTR_601), "i" (SPRN_TBRL)); 48 + #endif 49 49 return ret; 50 50 #endif 51 51 }
+12
arch/powerpc/kernel/machine_kexec.c
··· 18 18 #include <linux/ftrace.h> 19 19 20 20 #include <asm/machdep.h> 21 + #include <asm/pgalloc.h> 21 22 #include <asm/prom.h> 22 23 #include <asm/sections.h> 23 24 ··· 75 74 #endif 76 75 #ifndef CONFIG_NEED_MULTIPLE_NODES 77 76 VMCOREINFO_SYMBOL(contig_page_data); 77 + #endif 78 + #if defined(CONFIG_PPC64) && defined(CONFIG_SPARSEMEM_VMEMMAP) 79 + VMCOREINFO_SYMBOL(vmemmap_list); 80 + VMCOREINFO_SYMBOL(mmu_vmemmap_psize); 81 + VMCOREINFO_SYMBOL(mmu_psize_defs); 82 + VMCOREINFO_STRUCT_SIZE(vmemmap_backing); 83 + VMCOREINFO_OFFSET(vmemmap_backing, list); 84 + VMCOREINFO_OFFSET(vmemmap_backing, phys); 85 + VMCOREINFO_OFFSET(vmemmap_backing, virt_addr); 86 + VMCOREINFO_STRUCT_SIZE(mmu_psize_def); 87 + VMCOREINFO_OFFSET(mmu_psize_def, shift); 78 88 #endif 79 89 } 80 90
+1 -1
arch/powerpc/kernel/nvram_64.c
··· 210 210 printk(KERN_WARNING "--------%s---------\n", label); 211 211 printk(KERN_WARNING "indx\t\tsig\tchks\tlen\tname\n"); 212 212 list_for_each_entry(tmp_part, &nvram_partitions, partition) { 213 - printk(KERN_WARNING "%4d \t%02x\t%02x\t%d\t%12s\n", 213 + printk(KERN_WARNING "%4d \t%02x\t%02x\t%d\t%12.12s\n", 214 214 tmp_part->index, tmp_part->header.signature, 215 215 tmp_part->header.checksum, tmp_part->header.length, 216 216 tmp_part->header.name);
+7 -9
arch/powerpc/kernel/signal_32.c
··· 445 445 #endif /* CONFIG_ALTIVEC */ 446 446 if (copy_fpr_to_user(&frame->mc_fregs, current)) 447 447 return 1; 448 + 449 + /* 450 + * Clear the MSR VSX bit to indicate there is no valid state attached 451 + * to this context, except in the specific case below where we set it. 452 + */ 453 + msr &= ~MSR_VSX; 448 454 #ifdef CONFIG_VSX 449 455 /* 450 456 * Copy VSR 0-31 upper half from thread_struct to local ··· 463 457 if (copy_vsx_to_user(&frame->mc_vsregs, current)) 464 458 return 1; 465 459 msr |= MSR_VSX; 466 - } else if (!ctx_has_vsx_region) 467 - /* 468 - * With a small context structure we can't hold the VSX 469 - * registers, hence clear the MSR value to indicate the state 470 - * was not saved. 471 - */ 472 - msr &= ~MSR_VSX; 473 - 474 - 460 + } 475 461 #endif /* CONFIG_VSX */ 476 462 #ifdef CONFIG_SPE 477 463 /* save spe registers */
+6
arch/powerpc/kernel/signal_64.c
··· 122 122 flush_fp_to_thread(current); 123 123 /* copy fpr regs and fpscr */ 124 124 err |= copy_fpr_to_user(&sc->fp_regs, current); 125 + 126 + /* 127 + * Clear the MSR VSX bit to indicate there is no valid state attached 128 + * to this context, except in the specific case below where we set it. 129 + */ 130 + msr &= ~MSR_VSX; 125 131 #ifdef CONFIG_VSX 126 132 /* 127 133 * Copy VSX low doubleword to local buffer for formatting,
+6
arch/powerpc/kernel/vdso32/gettimeofday.S
··· 232 232 lwz r6,(CFG_TB_ORIG_STAMP+4)(r9) 233 233 234 234 /* Get a stable TB value */ 235 + #ifdef CONFIG_8xx 236 + 2: mftbu r3 237 + mftbl r4 238 + mftbu r0 239 + #else 235 240 2: mfspr r3, SPRN_TBRU 236 241 mfspr r4, SPRN_TBRL 237 242 mfspr r0, SPRN_TBRU 243 + #endif 238 244 cmplw cr0,r3,r0 239 245 bne- 2b 240 246
+1 -2
arch/powerpc/mm/hugetlbpage-book3e.c
··· 117 117 struct hstate *hstate = hstate_file(vma->vm_file); 118 118 unsigned long tsize = huge_page_shift(hstate) - 10; 119 119 120 - __flush_tlb_page(vma ? vma->vm_mm : NULL, vmaddr, tsize, 0); 121 - 120 + __flush_tlb_page(vma->vm_mm, vmaddr, tsize, 0); 122 121 }
+1 -1
arch/powerpc/mm/tlb_nohash.c
··· 305 305 void flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr) 306 306 { 307 307 #ifdef CONFIG_HUGETLB_PAGE 308 - if (is_vm_hugetlb_page(vma)) 308 + if (vma && is_vm_hugetlb_page(vma)) 309 309 flush_hugetlb_page(vma, vmaddr); 310 310 #endif 311 311
+17 -3
arch/powerpc/platforms/Kconfig.cputype
··· 404 404 405 405 endmenu 406 406 407 - config CPU_LITTLE_ENDIAN 408 - bool "Build little endian kernel" 409 - default n 407 + choice 408 + prompt "Endianness selection" 409 + default CPU_BIG_ENDIAN 410 410 help 411 411 This option selects whether a big endian or little endian kernel will 412 412 be built. 413 413 414 + config CPU_BIG_ENDIAN 415 + bool "Build big endian kernel" 416 + help 417 + Build a big endian kernel. 418 + 419 + If unsure, select this option. 420 + 421 + config CPU_LITTLE_ENDIAN 422 + bool "Build little endian kernel" 423 + help 424 + Build a little endian kernel. 425 + 414 426 Note that if cross compiling a little endian kernel, 415 427 CROSS_COMPILE must point to a toolchain capable of targeting 416 428 little endian powerpc. 429 + 430 + endchoice
+1 -1
arch/s390/Kconfig
··· 101 101 select GENERIC_CPU_DEVICES if !SMP 102 102 select GENERIC_FIND_FIRST_BIT 103 103 select GENERIC_SMP_IDLE_THREAD 104 - select GENERIC_TIME_VSYSCALL_OLD 104 + select GENERIC_TIME_VSYSCALL 105 105 select HAVE_ALIGNED_STRUCT_PAGE if SLUB 106 106 select HAVE_ARCH_JUMP_LABEL if !MARCH_G5 107 107 select HAVE_ARCH_SECCOMP_FILTER
+12 -7
arch/s390/crypto/aes_s390.c
··· 35 35 static char keylen_flag; 36 36 37 37 struct s390_aes_ctx { 38 - u8 iv[AES_BLOCK_SIZE]; 39 38 u8 key[AES_MAX_KEY_SIZE]; 40 39 long enc; 41 40 long dec; ··· 440 441 return aes_set_key(tfm, in_key, key_len); 441 442 } 442 443 443 - static int cbc_aes_crypt(struct blkcipher_desc *desc, long func, void *param, 444 + static int cbc_aes_crypt(struct blkcipher_desc *desc, long func, 444 445 struct blkcipher_walk *walk) 445 446 { 447 + struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(desc->tfm); 446 448 int ret = blkcipher_walk_virt(desc, walk); 447 449 unsigned int nbytes = walk->nbytes; 450 + struct { 451 + u8 iv[AES_BLOCK_SIZE]; 452 + u8 key[AES_MAX_KEY_SIZE]; 453 + } param; 448 454 449 455 if (!nbytes) 450 456 goto out; 451 457 452 - memcpy(param, walk->iv, AES_BLOCK_SIZE); 458 + memcpy(param.iv, walk->iv, AES_BLOCK_SIZE); 459 + memcpy(param.key, sctx->key, sctx->key_len); 453 460 do { 454 461 /* only use complete blocks */ 455 462 unsigned int n = nbytes & ~(AES_BLOCK_SIZE - 1); 456 463 u8 *out = walk->dst.virt.addr; 457 464 u8 *in = walk->src.virt.addr; 458 465 459 - ret = crypt_s390_kmc(func, param, out, in, n); 466 + ret = crypt_s390_kmc(func, &param, out, in, n); 460 467 if (ret < 0 || ret != n) 461 468 return -EIO; 462 469 463 470 nbytes &= AES_BLOCK_SIZE - 1; 464 471 ret = blkcipher_walk_done(desc, walk, nbytes); 465 472 } while ((nbytes = walk->nbytes)); 466 - memcpy(walk->iv, param, AES_BLOCK_SIZE); 473 + memcpy(walk->iv, param.iv, AES_BLOCK_SIZE); 467 474 468 475 out: 469 476 return ret; ··· 486 481 return fallback_blk_enc(desc, dst, src, nbytes); 487 482 488 483 blkcipher_walk_init(&walk, dst, src, nbytes); 489 - return cbc_aes_crypt(desc, sctx->enc, sctx->iv, &walk); 484 + return cbc_aes_crypt(desc, sctx->enc, &walk); 490 485 } 491 486 492 487 static int cbc_aes_decrypt(struct blkcipher_desc *desc, ··· 500 495 return fallback_blk_dec(desc, dst, src, nbytes); 501 496 502 497 blkcipher_walk_init(&walk, dst, src, nbytes); 503 - return cbc_aes_crypt(desc, sctx->dec, sctx->iv, &walk); 498 + return cbc_aes_crypt(desc, sctx->dec, &walk); 504 499 }
+13 -25
arch/s390/include/asm/page.h
··· 48 48 : "memory", "cc"); 49 49 } 50 50 51 + /* 52 + * copy_page uses the mvcl instruction with 0xb0 padding byte in order to 53 + * bypass caches when copying a page. Especially when copying huge pages 54 + * this keeps L1 and L2 data caches alive. 55 + */ 51 56 static inline void copy_page(void *to, void *from) 52 57 { 53 - if (MACHINE_HAS_MVPG) { 54 - register unsigned long reg0 asm ("0") = 0; 55 - asm volatile( 56 - " mvpg %0,%1" 57 - : : "a" (to), "a" (from), "d" (reg0) 58 - : "memory", "cc"); 59 - } else 60 - asm volatile( 61 - " mvc 0(256,%0),0(%1)\n" 62 - " mvc 256(256,%0),256(%1)\n" 63 - " mvc 512(256,%0),512(%1)\n" 64 - " mvc 768(256,%0),768(%1)\n" 65 - " mvc 1024(256,%0),1024(%1)\n" 66 - " mvc 1280(256,%0),1280(%1)\n" 67 - " mvc 1536(256,%0),1536(%1)\n" 68 - " mvc 1792(256,%0),1792(%1)\n" 69 - " mvc 2048(256,%0),2048(%1)\n" 70 - " mvc 2304(256,%0),2304(%1)\n" 71 - " mvc 2560(256,%0),2560(%1)\n" 72 - " mvc 2816(256,%0),2816(%1)\n" 73 - " mvc 3072(256,%0),3072(%1)\n" 74 - " mvc 3328(256,%0),3328(%1)\n" 75 - " mvc 3584(256,%0),3584(%1)\n" 76 - " mvc 3840(256,%0),3840(%1)\n" 77 - : : "a" (to), "a" (from) : "memory"); 58 + register void *reg2 asm ("2") = to; 59 + register unsigned long reg3 asm ("3") = 0x1000; 60 + register void *reg4 asm ("4") = from; 61 + register unsigned long reg5 asm ("5") = 0xb0001000; 62 + asm volatile( 63 + " mvcl 2,4" 64 + : "+d" (reg2), "+d" (reg3), "+d" (reg4), "+d" (reg5) 65 + : : "memory", "cc"); 78 66 } 79 67 80 68 #define clear_user_page(page, vaddr, pg) clear_page(page)
+3 -2
arch/s390/include/asm/vdso.h
··· 26 26 __u64 wtom_clock_nsec; /* 0x28 */ 27 27 __u32 tz_minuteswest; /* Minutes west of Greenwich 0x30 */ 28 28 __u32 tz_dsttime; /* Type of dst correction 0x34 */ 29 - __u32 ectg_available; 30 - __u32 ntp_mult; /* NTP adjusted multiplier 0x3C */ 29 + __u32 ectg_available; /* ECTG instruction present 0x38 */ 30 + __u32 tk_mult; /* Mult. used for xtime_nsec 0x3c */ 31 + __u32 tk_shift; /* Shift used for xtime_nsec 0x40 */ 31 32 }; 32 33 33 34 struct vdso_per_cpu_data {
+2 -1
arch/s390/kernel/asm-offsets.c
··· 65 65 DEFINE(__VDSO_WTOM_NSEC, offsetof(struct vdso_data, wtom_clock_nsec)); 66 66 DEFINE(__VDSO_TIMEZONE, offsetof(struct vdso_data, tz_minuteswest)); 67 67 DEFINE(__VDSO_ECTG_OK, offsetof(struct vdso_data, ectg_available)); 68 - DEFINE(__VDSO_NTP_MULT, offsetof(struct vdso_data, ntp_mult)); 68 + DEFINE(__VDSO_TK_MULT, offsetof(struct vdso_data, tk_mult)); 69 + DEFINE(__VDSO_TK_SHIFT, offsetof(struct vdso_data, tk_shift)); 69 70 DEFINE(__VDSO_ECTG_BASE, offsetof(struct vdso_per_cpu_data, ectg_timer_base)); 70 71 DEFINE(__VDSO_ECTG_USER, offsetof(struct vdso_per_cpu_data, ectg_user_time)); 71 72 /* constants used by the vdso */
+1 -1
arch/s390/kernel/compat_signal.c
··· 194 194 return -EINVAL; 195 195 196 196 /* Use regs->psw.mask instead of PSW_USER_BITS to preserve PER bit. */ 197 - regs->psw.mask = (regs->psw.mask & ~PSW_MASK_USER) | 197 + regs->psw.mask = (regs->psw.mask & ~(PSW_MASK_USER | PSW_MASK_RI)) | 198 198 (__u64)(user_sregs.regs.psw.mask & PSW32_MASK_USER) << 32 | 199 199 (__u64)(user_sregs.regs.psw.mask & PSW32_MASK_RI) << 32 | 200 200 (__u64)(user_sregs.regs.psw.addr & PSW32_ADDR_AMODE);
+1 -1
arch/s390/kernel/pgm_check.S
··· 78 78 PGM_CHECK_DEFAULT /* 35 */ 79 79 PGM_CHECK_DEFAULT /* 36 */ 80 80 PGM_CHECK_DEFAULT /* 37 */ 81 - PGM_CHECK_DEFAULT /* 38 */ 81 + PGM_CHECK_64BIT(do_dat_exception) /* 38 */ 82 82 PGM_CHECK_64BIT(do_dat_exception) /* 39 */ 83 83 PGM_CHECK_64BIT(do_dat_exception) /* 3a */ 84 84 PGM_CHECK_64BIT(do_dat_exception) /* 3b */
+1 -1
arch/s390/kernel/signal.c
··· 94 94 return -EINVAL; 95 95 96 96 /* Use regs->psw.mask instead of PSW_USER_BITS to preserve PER bit. */ 97 - regs->psw.mask = (regs->psw.mask & ~PSW_MASK_USER) | 97 + regs->psw.mask = (regs->psw.mask & ~(PSW_MASK_USER | PSW_MASK_RI)) | 98 98 (user_sregs.regs.psw.mask & (PSW_MASK_USER | PSW_MASK_RI)); 99 99 /* Check for invalid user address space control. */ 100 100 if ((regs->psw.mask & PSW_MASK_ASC) == PSW_ASC_HOME)
+22 -24
arch/s390/kernel/time.c
··· 108 108 set_clock_comparator(S390_lowcore.clock_comparator); 109 109 } 110 110 111 - static int s390_next_ktime(ktime_t expires, 111 + static int s390_next_event(unsigned long delta, 112 112 struct clock_event_device *evt) 113 113 { 114 - struct timespec ts; 115 - u64 nsecs; 116 - 117 - ts.tv_sec = ts.tv_nsec = 0; 118 - monotonic_to_bootbased(&ts); 119 - nsecs = ktime_to_ns(ktime_add(timespec_to_ktime(ts), expires)); 120 - do_div(nsecs, 125); 121 - S390_lowcore.clock_comparator = sched_clock_base_cc + (nsecs << 9); 122 - /* Program the maximum value if we have an overflow (== year 2042) */ 123 - if (unlikely(S390_lowcore.clock_comparator < sched_clock_base_cc)) 124 - S390_lowcore.clock_comparator = -1ULL; 114 + S390_lowcore.clock_comparator = get_tod_clock() + delta; 125 115 set_clock_comparator(S390_lowcore.clock_comparator); 126 116 return 0; 127 117 } ··· 136 146 cpu = smp_processor_id(); 137 147 cd = &per_cpu(comparators, cpu); 138 148 cd->name = "comparator"; 139 - cd->features = CLOCK_EVT_FEAT_ONESHOT | 140 - CLOCK_EVT_FEAT_KTIME; 149 + cd->features = CLOCK_EVT_FEAT_ONESHOT; 141 150 cd->mult = 16777; 142 151 cd->shift = 12; 143 152 cd->min_delta_ns = 1; 144 153 cd->max_delta_ns = LONG_MAX; 145 154 cd->rating = 400; 146 155 cd->cpumask = cpumask_of(cpu); 147 - cd->set_next_ktime = s390_next_ktime; 156 + cd->set_next_event = s390_next_event; 148 157 cd->set_mode = s390_set_mode; 149 158 150 159 clockevents_register_device(cd); ··· 210 221 return &clocksource_tod; 211 222 } 212 223 213 - void update_vsyscall_old(struct timespec *wall_time, struct timespec *wtm, 214 - struct clocksource *clock, u32 mult) 224 + void update_vsyscall(struct timekeeper *tk) 215 225 { 216 - if (clock != &clocksource_tod) 226 + u64 nsecps; 227 + 228 + if (tk->clock != &clocksource_tod) 217 229 return; 218 230 219 231 /* Make userspace gettimeofday spin until we're done. */ 220 232 ++vdso_data->tb_update_count; 221 233 smp_wmb(); 222 - vdso_data->xtime_tod_stamp = clock->cycle_last; 223 - vdso_data->xtime_clock_sec = wall_time->tv_sec; 224 - vdso_data->xtime_clock_nsec = wall_time->tv_nsec; 225 - vdso_data->wtom_clock_sec = wtm->tv_sec; 226 - vdso_data->wtom_clock_nsec = wtm->tv_nsec; 227 - vdso_data->ntp_mult = mult; 234 + vdso_data->xtime_tod_stamp = tk->clock->cycle_last; 235 + vdso_data->xtime_clock_sec = tk->xtime_sec; 236 + vdso_data->xtime_clock_nsec = tk->xtime_nsec; 237 + vdso_data->wtom_clock_sec = 238 + tk->xtime_sec + tk->wall_to_monotonic.tv_sec; 239 + vdso_data->wtom_clock_nsec = tk->xtime_nsec + 240 + + (tk->wall_to_monotonic.tv_nsec << tk->shift); 241 + nsecps = (u64) NSEC_PER_SEC << tk->shift; 242 + while (vdso_data->wtom_clock_nsec >= nsecps) { 243 + vdso_data->wtom_clock_nsec -= nsecps; 244 + vdso_data->wtom_clock_sec++; 245 + } 246 + vdso_data->tk_mult = tk->mult; 247 + vdso_data->tk_shift = tk->shift; 228 248 smp_wmb(); 229 249 ++vdso_data->tb_update_count; 230 250 }
+16 -14
arch/s390/kernel/vdso32/clock_gettime.S
··· 38 38 sl %r1,__VDSO_XTIME_STAMP+4(%r5) 39 39 brc 3,2f 40 40 ahi %r0,-1 41 - 2: ms %r0,__VDSO_NTP_MULT(%r5) /* cyc2ns(clock,cycle_delta) */ 41 + 2: ms %r0,__VDSO_TK_MULT(%r5) /* * tk->mult */ 42 42 lr %r2,%r0 43 - l %r0,__VDSO_NTP_MULT(%r5) 43 + l %r0,__VDSO_TK_MULT(%r5) 44 44 ltr %r1,%r1 45 45 mr %r0,%r0 46 46 jnm 3f 47 - a %r0,__VDSO_NTP_MULT(%r5) 47 + a %r0,__VDSO_TK_MULT(%r5) 48 48 3: alr %r0,%r2 49 - srdl %r0,12 50 - al %r0,__VDSO_XTIME_NSEC(%r5) /* + xtime */ 49 + al %r0,__VDSO_XTIME_NSEC(%r5) /* + tk->xtime_nsec */ 51 50 al %r1,__VDSO_XTIME_NSEC+4(%r5) 52 51 brc 12,4f 53 52 ahi %r0,1 54 - 4: l %r2,__VDSO_XTIME_SEC+4(%r5) 55 - al %r0,__VDSO_WTOM_NSEC(%r5) /* + wall_to_monotonic */ 53 + 4: al %r0,__VDSO_WTOM_NSEC(%r5) /* + wall_to_monotonic.nsec */ 56 54 al %r1,__VDSO_WTOM_NSEC+4(%r5) 57 55 brc 12,5f 58 56 ahi %r0,1 59 - 5: al %r2,__VDSO_WTOM_SEC+4(%r5) 57 + 5: l %r2,__VDSO_TK_SHIFT(%r5) /* Timekeeper shift */ 58 + srdl %r0,0(%r2) /* >> tk->shift */ 59 + l %r2,__VDSO_XTIME_SEC+4(%r5) 60 + al %r2,__VDSO_WTOM_SEC+4(%r5) 60 61 cl %r4,__VDSO_UPD_COUNT+4(%r5) /* check update counter */ 61 62 jne 1b 62 63 basr %r5,0 ··· 87 86 sl %r1,__VDSO_XTIME_STAMP+4(%r5) 88 87 brc 3,12f 89 88 ahi %r0,-1 90 - 12: ms %r0,__VDSO_NTP_MULT(%r5) /* cyc2ns(clock,cycle_delta) */ 89 + 12: ms %r0,__VDSO_TK_MULT(%r5) /* * tk->mult */ 91 90 lr %r2,%r0 92 - l %r0,__VDSO_NTP_MULT(%r5) 91 + l %r0,__VDSO_TK_MULT(%r5) 93 92 ltr %r1,%r1 94 93 mr %r0,%r0 95 94 jnm 13f 96 - a %r0,__VDSO_NTP_MULT(%r5) 95 + a %r0,__VDSO_TK_MULT(%r5) 97 96 13: alr %r0,%r2 98 - srdl %r0,12 99 - al %r0,__VDSO_XTIME_NSEC(%r5) /* + xtime */ 97 + al %r0,__VDSO_XTIME_NSEC(%r5) /* + tk->xtime_nsec */ 100 98 al %r1,__VDSO_XTIME_NSEC+4(%r5) 101 99 brc 12,14f 102 100 ahi %r0,1 103 - 14: l %r2,__VDSO_XTIME_SEC+4(%r5) 101 + 14: l %r2,__VDSO_TK_SHIFT(%r5) /* Timekeeper shift */ 102 + srdl %r0,0(%r2) /* >> tk->shift */ 103 + l %r2,__VDSO_XTIME_SEC+4(%r5) 104 104 cl %r4,__VDSO_UPD_COUNT+4(%r5) /* check update counter */ 105 105 jne 11b 106 106 basr %r5,0
+5 -4
arch/s390/kernel/vdso32/gettimeofday.S
··· 35 35 sl %r1,__VDSO_XTIME_STAMP+4(%r5) 36 36 brc 3,3f 37 37 ahi %r0,-1 38 - 3: ms %r0,__VDSO_NTP_MULT(%r5) /* cyc2ns(clock,cycle_delta) */ 38 + 3: ms %r0,__VDSO_TK_MULT(%r5) /* * tk->mult */ 39 39 st %r0,24(%r15) 40 - l %r0,__VDSO_NTP_MULT(%r5) 40 + l %r0,__VDSO_TK_MULT(%r5) 41 41 ltr %r1,%r1 42 42 mr %r0,%r0 43 43 jnm 4f 44 - a %r0,__VDSO_NTP_MULT(%r5) 44 + a %r0,__VDSO_TK_MULT(%r5) 45 45 4: al %r0,24(%r15) 46 - srdl %r0,12 47 46 al %r0,__VDSO_XTIME_NSEC(%r5) /* + xtime */ 48 47 al %r1,__VDSO_XTIME_NSEC+4(%r5) 49 48 brc 12,5f ··· 50 51 5: mvc 24(4,%r15),__VDSO_XTIME_SEC+4(%r5) 51 52 cl %r4,__VDSO_UPD_COUNT+4(%r5) /* check update counter */ 52 53 jne 1b 54 + l %r4,__VDSO_TK_SHIFT(%r5) /* Timekeeper shift */ 55 + srdl %r0,0(%r4) /* >> tk->shift */ 53 56 l %r4,24(%r15) /* get tv_sec from stack */ 54 57 basr %r5,0 55 58 6: ltr %r0,%r0
+12 -10
arch/s390/kernel/vdso64/clock_gettime.S
··· 34 34 tmll %r4,0x0001 /* pending update ? loop */ 35 35 jnz 0b 36 36 stck 48(%r15) /* Store TOD clock */ 37 + lgf %r2,__VDSO_TK_SHIFT(%r5) /* Timekeeper shift */ 38 + lg %r0,__VDSO_XTIME_SEC(%r5) /* tk->xtime_sec */ 39 + alg %r0,__VDSO_WTOM_SEC(%r5) /* + wall_to_monotonic.sec */ 37 40 lg %r1,48(%r15) 38 41 sg %r1,__VDSO_XTIME_STAMP(%r5) /* TOD - cycle_last */ 39 - msgf %r1,__VDSO_NTP_MULT(%r5) /* * NTP adjustment */ 40 - srlg %r1,%r1,12 /* cyc2ns(clock,cycle_delta) */ 41 - alg %r1,__VDSO_XTIME_NSEC(%r5) /* + xtime */ 42 - lg %r0,__VDSO_XTIME_SEC(%r5) 43 - alg %r1,__VDSO_WTOM_NSEC(%r5) /* + wall_to_monotonic */ 44 - alg %r0,__VDSO_WTOM_SEC(%r5) 42 + msgf %r1,__VDSO_TK_MULT(%r5) /* * tk->mult */ 43 + alg %r1,__VDSO_XTIME_NSEC(%r5) /* + tk->xtime_nsec */ 44 + alg %r1,__VDSO_WTOM_NSEC(%r5) /* + wall_to_monotonic.nsec */ 45 + srlg %r1,%r1,0(%r2) /* >> tk->shift */ 45 46 clg %r4,__VDSO_UPD_COUNT(%r5) /* check update counter */ 46 47 jne 0b 47 48 larl %r5,13f ··· 63 62 tmll %r4,0x0001 /* pending update ? loop */ 64 63 jnz 5b 65 64 stck 48(%r15) /* Store TOD clock */ 65 + lgf %r2,__VDSO_TK_SHIFT(%r5) /* Timekeeper shift */ 66 66 lg %r1,48(%r15) 67 67 sg %r1,__VDSO_XTIME_STAMP(%r5) /* TOD - cycle_last */ 68 - msgf %r1,__VDSO_NTP_MULT(%r5) /* * NTP adjustment */ 69 - srlg %r1,%r1,12 /* cyc2ns(clock,cycle_delta) */ 70 - alg %r1,__VDSO_XTIME_NSEC(%r5) /* + xtime */ 71 - lg %r0,__VDSO_XTIME_SEC(%r5) 68 + msgf %r1,__VDSO_TK_MULT(%r5) /* * tk->mult */ 69 + alg %r1,__VDSO_XTIME_NSEC(%r5) /* + tk->xtime_nsec */ 70 + srlg %r1,%r1,0(%r2) /* >> tk->shift */ 71 + lg %r0,__VDSO_XTIME_SEC(%r5) /* tk->xtime_sec */ 72 72 clg %r4,__VDSO_UPD_COUNT(%r5) /* check update counter */ 73 73 jne 5b 74 74 larl %r5,13f
+5 -4
arch/s390/kernel/vdso64/gettimeofday.S
··· 31 31 stck 48(%r15) /* Store TOD clock */ 32 32 lg %r1,48(%r15) 33 33 sg %r1,__VDSO_XTIME_STAMP(%r5) /* TOD - cycle_last */ 34 - msgf %r1,__VDSO_NTP_MULT(%r5) /* * NTP adjustment */ 35 - srlg %r1,%r1,12 /* cyc2ns(clock,cycle_delta) */ 36 - alg %r1,__VDSO_XTIME_NSEC(%r5) /* + xtime.tv_nsec */ 37 - lg %r0,__VDSO_XTIME_SEC(%r5) /* xtime.tv_sec */ 34 + msgf %r1,__VDSO_TK_MULT(%r5) /* * tk->mult */ 35 + alg %r1,__VDSO_XTIME_NSEC(%r5) /* + tk->xtime_nsec */ 36 + lg %r0,__VDSO_XTIME_SEC(%r5) /* tk->xtime_sec */ 38 37 clg %r4,__VDSO_UPD_COUNT(%r5) /* check update counter */ 39 38 jne 0b 39 + lgf %r5,__VDSO_TK_SHIFT(%r5) /* Timekeeper shift */ 40 + srlg %r1,%r1,0(%r5) /* >> tk->shift */ 40 41 larl %r5,5f 41 42 2: clg %r1,0(%r5) 42 43 jl 3f
+3
arch/s390/lib/uaccess_pt.c
··· 78 78 * contains the (negative) exception code. 79 79 */ 80 80 #ifdef CONFIG_64BIT 81 + 81 82 static unsigned long follow_table(struct mm_struct *mm, 82 83 unsigned long address, int write) 83 84 { 84 85 unsigned long *table = (unsigned long *)__pa(mm->pgd); 85 86 87 + if (unlikely(address > mm->context.asce_limit - 1)) 88 + return -0x38UL; 86 89 switch (mm->context.asce_bits & _ASCE_TYPE_MASK) { 87 90 case _ASCE_TYPE_REGION1: 88 91 table = table + ((address >> 53) & 0x7ff);
+2 -1
arch/x86/crypto/Makefile
··· 3 3 # 4 4 5 5 avx_supported := $(call as-instr,vpxor %xmm0$(comma)%xmm0$(comma)%xmm0,yes,no) 6 + avx2_supported := $(call as-instr,vpgatherdd %ymm0$(comma)(%eax$(comma)%ymm1\ 7 + $(comma)4)$(comma)%ymm2,yes,no) 6 8 7 - obj-$(CONFIG_CRYPTO_ABLK_HELPER_X86) += ablk_helper.o 8 9 obj-$(CONFIG_CRYPTO_GLUE_HELPER_X86) += glue_helper.o 9 10 10 11 obj-$(CONFIG_CRYPTO_AES_586) += aes-i586.o
+7 -6
arch/x86/crypto/ablk_helper.c crypto/ablk_helper.c
··· 28 28 #include <linux/crypto.h> 29 29 #include <linux/init.h> 30 30 #include <linux/module.h> 31 + #include <linux/hardirq.h> 31 32 #include <crypto/algapi.h> 32 33 #include <crypto/cryptd.h> 33 - #include <asm/i387.h> 34 - #include <asm/crypto/ablk_helper.h> 34 + #include <crypto/ablk_helper.h> 35 + #include <asm/simd.h> 35 36 36 37 int ablk_set_key(struct crypto_ablkcipher *tfm, const u8 *key, 37 38 unsigned int key_len) ··· 71 70 struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); 72 71 struct async_helper_ctx *ctx = crypto_ablkcipher_ctx(tfm); 73 72 74 - if (!irq_fpu_usable()) { 73 + if (!may_use_simd()) { 75 74 struct ablkcipher_request *cryptd_req = 76 75 ablkcipher_request_ctx(req); 77 76 78 - memcpy(cryptd_req, req, sizeof(*req)); 77 + *cryptd_req = *req; 79 78 ablkcipher_request_set_tfm(cryptd_req, &ctx->cryptd_tfm->base); 80 79 81 80 return crypto_ablkcipher_encrypt(cryptd_req); ··· 90 89 struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); 91 90 struct async_helper_ctx *ctx = crypto_ablkcipher_ctx(tfm); 92 91 93 - if (!irq_fpu_usable()) { 92 + if (!may_use_simd()) { 94 93 struct ablkcipher_request *cryptd_req = 95 94 ablkcipher_request_ctx(req); 96 95 97 - memcpy(cryptd_req, req, sizeof(*req)); 96 + *cryptd_req = *req; 98 97 ablkcipher_request_set_tfm(cryptd_req, &ctx->cryptd_tfm->base); 99 98 100 99 return crypto_ablkcipher_decrypt(cryptd_req);
+1 -1
arch/x86/crypto/aesni-intel_glue.c
··· 34 34 #include <asm/cpu_device_id.h> 35 35 #include <asm/i387.h> 36 36 #include <asm/crypto/aes.h> 37 - #include <asm/crypto/ablk_helper.h> 37 + #include <crypto/ablk_helper.h> 38 38 #include <crypto/scatterwalk.h> 39 39 #include <crypto/internal/aead.h> 40 40 #include <linux/workqueue.h>
+1 -1
arch/x86/crypto/camellia_aesni_avx2_glue.c
··· 14 14 #include <linux/types.h> 15 15 #include <linux/crypto.h> 16 16 #include <linux/err.h> 17 + #include <crypto/ablk_helper.h> 17 18 #include <crypto/algapi.h> 18 19 #include <crypto/ctr.h> 19 20 #include <crypto/lrw.h> ··· 22 21 #include <asm/xcr.h> 23 22 #include <asm/xsave.h> 24 23 #include <asm/crypto/camellia.h> 25 - #include <asm/crypto/ablk_helper.h> 26 24 #include <asm/crypto/glue_helper.h> 27 25 28 26 #define CAMELLIA_AESNI_PARALLEL_BLOCKS 16
+1 -1
arch/x86/crypto/camellia_aesni_avx_glue.c
··· 14 14 #include <linux/types.h> 15 15 #include <linux/crypto.h> 16 16 #include <linux/err.h> 17 + #include <crypto/ablk_helper.h> 17 18 #include <crypto/algapi.h> 18 19 #include <crypto/ctr.h> 19 20 #include <crypto/lrw.h> ··· 22 21 #include <asm/xcr.h> 23 22 #include <asm/xsave.h> 24 23 #include <asm/crypto/camellia.h> 25 - #include <asm/crypto/ablk_helper.h> 26 24 #include <asm/crypto/glue_helper.h> 27 25 28 26 #define CAMELLIA_AESNI_PARALLEL_BLOCKS 16
+1 -1
arch/x86/crypto/cast5_avx_glue.c
··· 26 26 #include <linux/types.h> 27 27 #include <linux/crypto.h> 28 28 #include <linux/err.h> 29 + #include <crypto/ablk_helper.h> 29 30 #include <crypto/algapi.h> 30 31 #include <crypto/cast5.h> 31 32 #include <crypto/cryptd.h> 32 33 #include <crypto/ctr.h> 33 34 #include <asm/xcr.h> 34 35 #include <asm/xsave.h> 35 - #include <asm/crypto/ablk_helper.h> 36 36 #include <asm/crypto/glue_helper.h> 37 37 38 38 #define CAST5_PARALLEL_BLOCKS 16
+1 -1
arch/x86/crypto/cast6_avx_glue.c
··· 28 28 #include <linux/types.h> 29 29 #include <linux/crypto.h> 30 30 #include <linux/err.h> 31 + #include <crypto/ablk_helper.h> 31 32 #include <crypto/algapi.h> 32 33 #include <crypto/cast6.h> 33 34 #include <crypto/cryptd.h> ··· 38 37 #include <crypto/xts.h> 39 38 #include <asm/xcr.h> 40 39 #include <asm/xsave.h> 41 - #include <asm/crypto/ablk_helper.h> 42 40 #include <asm/crypto/glue_helper.h> 43 41 44 42 #define CAST6_PARALLEL_BLOCKS 8
+1 -1
arch/x86/crypto/serpent_avx2_glue.c
··· 14 14 #include <linux/types.h> 15 15 #include <linux/crypto.h> 16 16 #include <linux/err.h> 17 + #include <crypto/ablk_helper.h> 17 18 #include <crypto/algapi.h> 18 19 #include <crypto/ctr.h> 19 20 #include <crypto/lrw.h> ··· 23 22 #include <asm/xcr.h> 24 23 #include <asm/xsave.h> 25 24 #include <asm/crypto/serpent-avx.h> 26 - #include <asm/crypto/ablk_helper.h> 27 25 #include <asm/crypto/glue_helper.h> 28 26 29 27 #define SERPENT_AVX2_PARALLEL_BLOCKS 16
+1 -1
arch/x86/crypto/serpent_avx_glue.c
··· 28 28 #include <linux/types.h> 29 29 #include <linux/crypto.h> 30 30 #include <linux/err.h> 31 + #include <crypto/ablk_helper.h> 31 32 #include <crypto/algapi.h> 32 33 #include <crypto/serpent.h> 33 34 #include <crypto/cryptd.h> ··· 39 38 #include <asm/xcr.h> 40 39 #include <asm/xsave.h> 41 40 #include <asm/crypto/serpent-avx.h> 42 - #include <asm/crypto/ablk_helper.h> 43 41 #include <asm/crypto/glue_helper.h> 44 42 45 43 /* 8-way parallel cipher functions */
+1 -1
arch/x86/crypto/serpent_sse2_glue.c
··· 34 34 #include <linux/types.h> 35 35 #include <linux/crypto.h> 36 36 #include <linux/err.h> 37 + #include <crypto/ablk_helper.h> 37 38 #include <crypto/algapi.h> 38 39 #include <crypto/serpent.h> 39 40 #include <crypto/cryptd.h> ··· 43 42 #include <crypto/lrw.h> 44 43 #include <crypto/xts.h> 45 44 #include <asm/crypto/serpent-sse2.h> 46 - #include <asm/crypto/ablk_helper.h> 47 45 #include <asm/crypto/glue_helper.h> 48 46 49 47 static void serpent_decrypt_cbc_xway(void *ctx, u128 *dst, const u128 *src)
+2 -2
arch/x86/crypto/sha256_ssse3_glue.c
··· 281 281 /* allow AVX to override SSSE3, it's a little faster */ 282 282 if (avx_usable()) { 283 283 #ifdef CONFIG_AS_AVX2 284 - if (boot_cpu_has(X86_FEATURE_AVX2)) 284 + if (boot_cpu_has(X86_FEATURE_AVX2) && boot_cpu_has(X86_FEATURE_BMI2)) 285 285 sha256_transform_asm = sha256_transform_rorx; 286 286 else 287 287 #endif ··· 319 319 MODULE_DESCRIPTION("SHA256 Secure Hash Algorithm, Supplemental SSE3 accelerated"); 320 320 321 321 MODULE_ALIAS("sha256"); 322 - MODULE_ALIAS("sha384"); 322 + MODULE_ALIAS("sha224");
+1 -1
arch/x86/crypto/twofish_avx_glue.c
··· 28 28 #include <linux/types.h> 29 29 #include <linux/crypto.h> 30 30 #include <linux/err.h> 31 + #include <crypto/ablk_helper.h> 31 32 #include <crypto/algapi.h> 32 33 #include <crypto/twofish.h> 33 34 #include <crypto/cryptd.h> ··· 40 39 #include <asm/xcr.h> 41 40 #include <asm/xsave.h> 42 41 #include <asm/crypto/twofish.h> 43 - #include <asm/crypto/ablk_helper.h> 44 42 #include <asm/crypto/glue_helper.h> 45 43 #include <crypto/scatterwalk.h> 46 44 #include <linux/workqueue.h>
arch/x86/include/asm/crypto/ablk_helper.h include/crypto/ablk_helper.h
+11
arch/x86/include/asm/simd.h
··· 1 + 2 + #include <asm/i387.h> 3 + 4 + /* 5 + * may_use_simd - whether it is allowable at this time to issue SIMD 6 + * instructions or access the SIMD register file 7 + */ 8 + static __must_check inline bool may_use_simd(void) 9 + { 10 + return irq_fpu_usable(); 11 + }
+11 -12
crypto/Kconfig
··· 174 174 help 175 175 Quick & dirty crypto test module. 176 176 177 - config CRYPTO_ABLK_HELPER_X86 177 + config CRYPTO_ABLK_HELPER 178 178 tristate 179 - depends on X86 180 179 select CRYPTO_CRYPTD 181 180 182 181 config CRYPTO_GLUE_HELPER_X86 ··· 694 695 select CRYPTO_AES_X86_64 if 64BIT 695 696 select CRYPTO_AES_586 if !64BIT 696 697 select CRYPTO_CRYPTD 697 - select CRYPTO_ABLK_HELPER_X86 698 + select CRYPTO_ABLK_HELPER 698 699 select CRYPTO_ALGAPI 699 700 select CRYPTO_GLUE_HELPER_X86 if 64BIT 700 701 select CRYPTO_LRW ··· 894 895 depends on CRYPTO 895 896 select CRYPTO_ALGAPI 896 897 select CRYPTO_CRYPTD 897 - select CRYPTO_ABLK_HELPER_X86 898 + select CRYPTO_ABLK_HELPER 898 899 select CRYPTO_GLUE_HELPER_X86 899 900 select CRYPTO_CAMELLIA_X86_64 900 901 select CRYPTO_LRW ··· 916 917 depends on CRYPTO 917 918 select CRYPTO_ALGAPI 918 919 select CRYPTO_CRYPTD 919 - select CRYPTO_ABLK_HELPER_X86 920 + select CRYPTO_ABLK_HELPER 920 921 select CRYPTO_GLUE_HELPER_X86 921 922 select CRYPTO_CAMELLIA_X86_64 922 923 select CRYPTO_CAMELLIA_AESNI_AVX_X86_64 ··· 968 969 depends on X86 && 64BIT 969 970 select CRYPTO_ALGAPI 970 971 select CRYPTO_CRYPTD 971 - select CRYPTO_ABLK_HELPER_X86 972 + select CRYPTO_ABLK_HELPER 972 973 select CRYPTO_CAST_COMMON 973 974 select CRYPTO_CAST5 974 975 help ··· 991 992 depends on X86 && 64BIT 992 993 select CRYPTO_ALGAPI 993 994 select CRYPTO_CRYPTD 994 - select CRYPTO_ABLK_HELPER_X86 995 + select CRYPTO_ABLK_HELPER 995 996 select CRYPTO_GLUE_HELPER_X86 996 997 select CRYPTO_CAST_COMMON 997 998 select CRYPTO_CAST6 ··· 1109 1110 depends on X86 && 64BIT 1110 1111 select CRYPTO_ALGAPI 1111 1112 select CRYPTO_CRYPTD 1112 - select CRYPTO_ABLK_HELPER_X86 1113 + select CRYPTO_ABLK_HELPER 1113 1114 select CRYPTO_GLUE_HELPER_X86 1114 1115 select CRYPTO_SERPENT 1115 1116 select CRYPTO_LRW ··· 1131 1132 depends on X86 && !64BIT 1132 1133 select CRYPTO_ALGAPI 1133 1134 select CRYPTO_CRYPTD 1134 - select CRYPTO_ABLK_HELPER_X86
1135 + select CRYPTO_ABLK_HELPER 1135 1136 select CRYPTO_GLUE_HELPER_X86 1136 1137 select CRYPTO_SERPENT 1137 1138 select CRYPTO_LRW ··· 1153 1154 depends on X86 && 64BIT 1154 1155 select CRYPTO_ALGAPI 1155 1156 select CRYPTO_CRYPTD 1156 - select CRYPTO_ABLK_HELPER_X86 1157 + select CRYPTO_ABLK_HELPER 1157 1158 select CRYPTO_GLUE_HELPER_X86 1158 1159 select CRYPTO_SERPENT 1159 1160 select CRYPTO_LRW ··· 1175 1176 depends on X86 && 64BIT 1176 1177 select CRYPTO_ALGAPI 1177 1178 select CRYPTO_CRYPTD 1178 - select CRYPTO_ABLK_HELPER_X86 1179 + select CRYPTO_ABLK_HELPER 1179 1180 select CRYPTO_GLUE_HELPER_X86 1180 1181 select CRYPTO_SERPENT 1181 1182 select CRYPTO_SERPENT_AVX_X86_64 ··· 1291 1292 depends on X86 && 64BIT 1292 1293 select CRYPTO_ALGAPI 1293 1294 select CRYPTO_CRYPTD 1294 - select CRYPTO_ABLK_HELPER_X86 1295 + select CRYPTO_ABLK_HELPER 1295 1296 select CRYPTO_GLUE_HELPER_X86 1296 1297 select CRYPTO_TWOFISH_COMMON 1297 1298 select CRYPTO_TWOFISH_X86_64
+7 -1
crypto/Makefile
··· 2 2 # Cryptographic API 3 3 # 4 4 5 + # memneq MUST be built with -Os or -O0 to prevent early-return optimizations 6 + # that will defeat memneq's actual purpose to prevent timing attacks. 7 + CFLAGS_REMOVE_memneq.o := -O1 -O2 -O3 8 + CFLAGS_memneq.o := -Os 9 + 5 10 obj-$(CONFIG_CRYPTO) += crypto.o 6 - crypto-y := api.o cipher.o compress.o 11 + crypto-y := api.o cipher.o compress.o memneq.o 7 12 8 13 obj-$(CONFIG_CRYPTO_WORKQUEUE) += crypto_wq.o 9 14 ··· 110 105 obj-$(CONFIG_ASYNC_CORE) += async_tx/ 111 106 obj-$(CONFIG_ASYMMETRIC_KEY_TYPE) += asymmetric_keys/ 112 107 obj-$(CONFIG_CRYPTO_HASH_INFO) += hash_info.o 108 + obj-$(CONFIG_CRYPTO_ABLK_HELPER) += ablk_helper.o
+1 -20
crypto/ablkcipher.c
··· 16 16 #include <crypto/internal/skcipher.h> 17 17 #include <linux/cpumask.h> 18 18 #include <linux/err.h> 19 - #include <linux/init.h> 20 19 #include <linux/kernel.h> 21 - #include <linux/module.h> 22 20 #include <linux/rtnetlink.h> 23 21 #include <linux/sched.h> 24 22 #include <linux/slab.h> ··· 27 29 #include <crypto/scatterwalk.h> 28 30 29 31 #include "internal.h" 30 - 31 - static const char *skcipher_default_geniv __read_mostly; 32 32 33 33 struct ablkcipher_buffer { 34 34 struct list_head entry; ··· 523 527 alg->cra_blocksize) 524 528 return "chainiv"; 525 529 526 - return alg->cra_flags & CRYPTO_ALG_ASYNC ? 527 - "eseqiv" : skcipher_default_geniv; 530 + return "eseqiv"; 528 531 } 529 532 530 533 static int crypto_givcipher_default(struct crypto_alg *alg, u32 type, u32 mask) ··· 704 709 return ERR_PTR(err); 705 710 } 706 711 EXPORT_SYMBOL_GPL(crypto_alloc_ablkcipher); 707 - 708 - static int __init skcipher_module_init(void) 709 - { 710 - skcipher_default_geniv = num_possible_cpus() > 1 ? 711 - "eseqiv" : "chainiv"; 712 - return 0; 713 - } 714 - 715 - static void skcipher_module_exit(void) 716 - { 717 - } 718 - 719 - module_init(skcipher_module_init); 720 - module_exit(skcipher_module_exit);
+2 -2
crypto/ansi_cprng.c
··· 230 230 */ 231 231 if (byte_count < DEFAULT_BLK_SZ) { 232 232 empty_rbuf: 233 - for (; ctx->rand_data_valid < DEFAULT_BLK_SZ; 234 - ctx->rand_data_valid++) { 233 + while (ctx->rand_data_valid < DEFAULT_BLK_SZ) { 235 234 *ptr = ctx->rand_data[ctx->rand_data_valid]; 236 235 ptr++; 237 236 byte_count--; 237 + ctx->rand_data_valid++; 238 238 if (byte_count == 0) 239 239 goto done; 240 240 }
+3 -2
crypto/asymmetric_keys/rsa.c
··· 13 13 #include <linux/module.h> 14 14 #include <linux/kernel.h> 15 15 #include <linux/slab.h> 16 + #include <crypto/algapi.h> 16 17 #include "public_key.h" 17 18 18 19 MODULE_LICENSE("GPL"); ··· 190 189 } 191 190 } 192 191 193 - if (memcmp(asn1_template, EM + T_offset, asn1_size) != 0) { 192 + if (crypto_memneq(asn1_template, EM + T_offset, asn1_size) != 0) { 194 193 kleave(" = -EBADMSG [EM[T] ASN.1 mismatch]"); 195 194 return -EBADMSG; 196 195 } 197 196 198 - if (memcmp(H, EM + T_offset + asn1_size, hash_size) != 0) { 197 + if (crypto_memneq(H, EM + T_offset + asn1_size, hash_size) != 0) { 199 198 kleave(" = -EKEYREJECTED [EM[T] hash mismatch]"); 200 199 return -EKEYREJECTED; 201 200 }
+1 -80
crypto/asymmetric_keys/x509_public_key.c
··· 18 18 #include <linux/asn1_decoder.h> 19 19 #include <keys/asymmetric-subtype.h> 20 20 #include <keys/asymmetric-parser.h> 21 - #include <keys/system_keyring.h> 22 21 #include <crypto/hash.h> 23 22 #include "asymmetric_keys.h" 24 23 #include "public_key.h" 25 24 #include "x509_parser.h" 26 - 27 - /* 28 - * Find a key in the given keyring by issuer and authority. 29 - */ 30 - static struct key *x509_request_asymmetric_key( 31 - struct key *keyring, 32 - const char *signer, size_t signer_len, 33 - const char *authority, size_t auth_len) 34 - { 35 - key_ref_t key; 36 - char *id; 37 - 38 - /* Construct an identifier. */ 39 - id = kmalloc(signer_len + 2 + auth_len + 1, GFP_KERNEL); 40 - if (!id) 41 - return ERR_PTR(-ENOMEM); 42 - 43 - memcpy(id, signer, signer_len); 44 - id[signer_len + 0] = ':'; 45 - id[signer_len + 1] = ' '; 46 - memcpy(id + signer_len + 2, authority, auth_len); 47 - id[signer_len + 2 + auth_len] = 0; 48 - 49 - pr_debug("Look up: \"%s\"\n", id); 50 - 51 - key = keyring_search(make_key_ref(keyring, 1), 52 - &key_type_asymmetric, id); 53 - if (IS_ERR(key)) 54 - pr_debug("Request for module key '%s' err %ld\n", 55 - id, PTR_ERR(key)); 56 - kfree(id); 57 - 58 - if (IS_ERR(key)) { 59 - switch (PTR_ERR(key)) { 60 - /* Hide some search errors */ 61 - case -EACCES: 62 - case -ENOTDIR: 63 - case -EAGAIN: 64 - return ERR_PTR(-ENOKEY); 65 - default: 66 - return ERR_CAST(key); 67 - } 68 - } 69 - 70 - pr_devel("<==%s() = 0 [%x]\n", __func__, key_serial(key_ref_to_ptr(key))); 71 - return key_ref_to_ptr(key); 72 - } 73 25 74 26 /* 75 27 * Set up the signature parameters in an X.509 certificate. This involves ··· 103 151 EXPORT_SYMBOL_GPL(x509_check_signature); 104 152 105 153 /* 106 - * Check the new certificate against the ones in the trust keyring. If one of 107 - * those is the signing key and validates the new certificate, then mark the 108 - * new certificate as being trusted. 
109 - * 110 - * Return 0 if the new certificate was successfully validated, 1 if we couldn't 111 - * find a matching parent certificate in the trusted list and an error if there 112 - * is a matching certificate but the signature check fails. 113 - */ 114 - static int x509_validate_trust(struct x509_certificate *cert, 115 - struct key *trust_keyring) 116 - { 117 - const struct public_key *pk; 118 - struct key *key; 119 - int ret = 1; 120 - 121 - key = x509_request_asymmetric_key(trust_keyring, 122 - cert->issuer, strlen(cert->issuer), 123 - cert->authority, 124 - strlen(cert->authority)); 125 - if (!IS_ERR(key)) { 126 - pk = key->payload.data; 127 - ret = x509_check_signature(pk, cert); 128 - } 129 - return ret; 130 - } 131 - 132 - /* 133 154 * Attempt to parse a data blob for a key as an X509 certificate. 134 155 */ 135 156 static int x509_key_preparse(struct key_preparsed_payload *prep) ··· 155 230 /* Check the signature on the key if it appears to be self-signed */ 156 231 if (!cert->authority || 157 232 strcmp(cert->fingerprint, cert->authority) == 0) { 158 - ret = x509_check_signature(cert->pub, cert); /* self-signed */ 233 + ret = x509_check_signature(cert->pub, cert); 159 234 if (ret < 0) 160 235 goto error_free_cert; 161 - } else { 162 - ret = x509_validate_trust(cert, system_trusted_keyring); 163 - if (!ret) 164 - prep->trusted = 1; 165 236 } 166 237 167 238 /* Propose a description */
+33 -21
crypto/authenc.c
··· 52 52 aead_request_complete(req, err); 53 53 } 54 54 55 - static int crypto_authenc_setkey(struct crypto_aead *authenc, const u8 *key, 56 - unsigned int keylen) 55 + int crypto_authenc_extractkeys(struct crypto_authenc_keys *keys, const u8 *key, 56 + unsigned int keylen) 57 57 { 58 - unsigned int authkeylen; 59 - unsigned int enckeylen; 60 - struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc); 61 - struct crypto_ahash *auth = ctx->auth; 62 - struct crypto_ablkcipher *enc = ctx->enc; 63 - struct rtattr *rta = (void *)key; 58 + struct rtattr *rta = (struct rtattr *)key; 64 59 struct crypto_authenc_key_param *param; 65 - int err = -EINVAL; 66 60 67 61 if (!RTA_OK(rta, keylen)) 68 - goto badkey; 62 + return -EINVAL; 69 63 if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM) 70 - goto badkey; 64 + return -EINVAL; 71 65 if (RTA_PAYLOAD(rta) < sizeof(*param)) 72 - goto badkey; 66 + return -EINVAL; 73 67 74 68 param = RTA_DATA(rta); 75 - enckeylen = be32_to_cpu(param->enckeylen); 69 + keys->enckeylen = be32_to_cpu(param->enckeylen); 76 70 77 71 key += RTA_ALIGN(rta->rta_len); 78 72 keylen -= RTA_ALIGN(rta->rta_len); 79 73 80 - if (keylen < enckeylen) 81 - goto badkey; 74 + if (keylen < keys->enckeylen) 75 + return -EINVAL; 82 76 83 - authkeylen = keylen - enckeylen; 77 + keys->authkeylen = keylen - keys->enckeylen; 78 + keys->authkey = key; 79 + keys->enckey = key + keys->authkeylen; 80 + 81 + return 0; 82 + } 83 + EXPORT_SYMBOL_GPL(crypto_authenc_extractkeys); 84 + 85 + static int crypto_authenc_setkey(struct crypto_aead *authenc, const u8 *key, 86 + unsigned int keylen) 87 + { 88 + struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc); 89 + struct crypto_ahash *auth = ctx->auth; 90 + struct crypto_ablkcipher *enc = ctx->enc; 91 + struct crypto_authenc_keys keys; 92 + int err = -EINVAL; 93 + 94 + if (crypto_authenc_extractkeys(&keys, key, keylen) != 0) 95 + goto badkey; 84 96 85 97
86 98 crypto_ahash_set_flags(auth, crypto_aead_get_flags(authenc) & 87 99 CRYPTO_TFM_REQ_MASK); 88 - err = crypto_ahash_setkey(auth, key, authkeylen); 100 + err = crypto_ahash_setkey(auth, keys.authkey, keys.authkeylen); 89 101 crypto_aead_set_flags(authenc, crypto_ahash_get_flags(auth) & 90 102 CRYPTO_TFM_RES_MASK); 91 103 ··· 107 95 crypto_ablkcipher_clear_flags(enc, CRYPTO_TFM_REQ_MASK); 108 96 crypto_ablkcipher_set_flags(enc, crypto_aead_get_flags(authenc) & 109 97 CRYPTO_TFM_REQ_MASK); 110 - err = crypto_ablkcipher_setkey(enc, key + authkeylen, enckeylen); 98 + err = crypto_ablkcipher_setkey(enc, keys.enckey, keys.enckeylen); 111 99 crypto_aead_set_flags(authenc, crypto_ablkcipher_get_flags(enc) & 112 100 CRYPTO_TFM_RES_MASK); 113 101 ··· 200 188 scatterwalk_map_and_copy(ihash, areq_ctx->sg, areq_ctx->cryptlen, 201 189 authsize, 0); 202 190 203 - err = memcmp(ihash, ahreq->result, authsize) ? -EBADMSG : 0; 191 + err = crypto_memneq(ihash, ahreq->result, authsize) ? -EBADMSG : 0; 204 192 if (err) 205 193 goto out; 206 194 ··· 239 227 scatterwalk_map_and_copy(ihash, areq_ctx->sg, areq_ctx->cryptlen, 240 228 authsize, 0); 241 229 242 - err = memcmp(ihash, ahreq->result, authsize) ? -EBADMSG : 0; 230 + err = crypto_memneq(ihash, ahreq->result, authsize) ? -EBADMSG : 0; 243 231 if (err) 244 232 goto out; 245 233 ··· 474 462 ihash = ohash + authsize; 475 463 scatterwalk_map_and_copy(ihash, areq_ctx->sg, areq_ctx->cryptlen, 476 464 authsize, 0); 477 - return memcmp(ihash, ohash, authsize) ? -EBADMSG : 0; 465 + return crypto_memneq(ihash, ohash, authsize) ? -EBADMSG : 0; 478 466 } 479 467 480 468 static int crypto_authenc_iverify(struct aead_request *req, u8 *iv,
+8 -26
crypto/authencesn.c
··· 59 59 static int crypto_authenc_esn_setkey(struct crypto_aead *authenc_esn, const u8 *key, 60 60 unsigned int keylen) 61 61 { 62 - unsigned int authkeylen; 63 - unsigned int enckeylen; 64 62 struct crypto_authenc_esn_ctx *ctx = crypto_aead_ctx(authenc_esn); 65 63 struct crypto_ahash *auth = ctx->auth; 66 64 struct crypto_ablkcipher *enc = ctx->enc; 67 - struct rtattr *rta = (void *)key; 68 - struct crypto_authenc_key_param *param; 65 + struct crypto_authenc_keys keys; 69 66 int err = -EINVAL; 70 67 71 - if (!RTA_OK(rta, keylen)) 68 + if (crypto_authenc_extractkeys(&keys, key, keylen) != 0) 72 69 goto badkey; 73 - if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM) 74 - goto badkey; 75 - if (RTA_PAYLOAD(rta) < sizeof(*param)) 76 - goto badkey; 77 - 78 - param = RTA_DATA(rta); 79 - enckeylen = be32_to_cpu(param->enckeylen); 80 - 81 - key += RTA_ALIGN(rta->rta_len); 82 - keylen -= RTA_ALIGN(rta->rta_len); 83 - 84 - if (keylen < enckeylen) 85 - goto badkey; 86 - 87 - authkeylen = keylen - enckeylen; 88 70 89 71 crypto_ahash_clear_flags(auth, CRYPTO_TFM_REQ_MASK); 90 72 crypto_ahash_set_flags(auth, crypto_aead_get_flags(authenc_esn) & 91 73 CRYPTO_TFM_REQ_MASK); 92 74 err = crypto_ahash_setkey(auth, keys.authkey, keys.authkeylen); 93 75 crypto_aead_set_flags(authenc_esn, crypto_ahash_get_flags(auth) & 94 76 CRYPTO_TFM_RES_MASK); 95 77 ··· 81 99 crypto_ablkcipher_clear_flags(enc, CRYPTO_TFM_REQ_MASK); 82 100 crypto_ablkcipher_set_flags(enc, crypto_aead_get_flags(authenc_esn) & 83 101 CRYPTO_TFM_REQ_MASK); 84 - err = crypto_ablkcipher_setkey(enc, key + authkeylen, enckeylen); 102 + err = crypto_ablkcipher_setkey(enc, keys.enckey, keys.enckeylen); 85 103 crypto_aead_set_flags(authenc_esn, crypto_ablkcipher_get_flags(enc) & 86 104 CRYPTO_TFM_RES_MASK); 87 105 ··· 229 247 scatterwalk_map_and_copy(ihash, areq_ctx->sg, areq_ctx->cryptlen, 230 248 authsize, 0); 231 249
232 - err = memcmp(ihash, ahreq->result, authsize) ? -EBADMSG : 0; 250 + err = crypto_memneq(ihash, ahreq->result, authsize) ? -EBADMSG : 0; 233 251 if (err) 234 252 goto out; 235 253 ··· 278 296 scatterwalk_map_and_copy(ihash, areq_ctx->sg, areq_ctx->cryptlen, 279 297 authsize, 0); 280 298 281 - err = memcmp(ihash, ahreq->result, authsize) ? -EBADMSG : 0; 299 + err = crypto_memneq(ihash, ahreq->result, authsize) ? -EBADMSG : 0; 282 300 if (err) 283 301 goto out; 284 302 ··· 318 336 scatterwalk_map_and_copy(ihash, areq_ctx->sg, areq_ctx->cryptlen, 319 337 authsize, 0); 320 338 321 - err = memcmp(ihash, ahreq->result, authsize) ? -EBADMSG : 0; 339 + err = crypto_memneq(ihash, ahreq->result, authsize) ? -EBADMSG : 0; 322 340 if (err) 323 341 goto out; 324 342 ··· 550 568 ihash = ohash + authsize; 551 569 scatterwalk_map_and_copy(ihash, areq_ctx->sg, areq_ctx->cryptlen, 552 570 authsize, 0); 553 - return memcmp(ihash, ohash, authsize) ? -EBADMSG : 0; 571 + return crypto_memneq(ihash, ohash, authsize) ? -EBADMSG : 0; 554 572 } 555 573 556 574 static int crypto_authenc_esn_iverify(struct aead_request *req, u8 *iv,
+2 -2
crypto/ccm.c
··· 363 363 364 364 if (!err) { 365 365 err = crypto_ccm_auth(req, req->dst, cryptlen); 366 - if (!err && memcmp(pctx->auth_tag, pctx->odata, authsize)) 366 + if (!err && crypto_memneq(pctx->auth_tag, pctx->odata, authsize)) 367 367 err = -EBADMSG; 368 368 } 369 369 aead_request_complete(req, err); ··· 422 422 return err; 423 423 424 424 /* verify */ 425 - if (memcmp(authtag, odata, authsize)) 425 + if (crypto_memneq(authtag, odata, authsize)) 426 426 return -EBADMSG; 427 427 428 428 return err;
+1 -1
crypto/gcm.c
··· 582 582 583 583 crypto_xor(auth_tag, iauth_tag, 16); 584 584 scatterwalk_map_and_copy(iauth_tag, req->src, cryptlen, authsize, 0); 585 - return memcmp(iauth_tag, auth_tag, authsize) ? -EBADMSG : 0; 585 + return crypto_memneq(iauth_tag, auth_tag, authsize) ? -EBADMSG : 0; 586 586 } 587 587 588 588 static void gcm_decrypt_done(struct crypto_async_request *areq, int err)
+138
crypto/memneq.c
··· 1 + /* 2 + * Constant-time equality testing of memory regions. 3 + * 4 + * Authors: 5 + * 6 + * James Yonan <james@openvpn.net> 7 + * Daniel Borkmann <dborkman@redhat.com> 8 + * 9 + * This file is provided under a dual BSD/GPLv2 license. When using or 10 + * redistributing this file, you may do so under either license. 11 + * 12 + * GPL LICENSE SUMMARY 13 + * 14 + * Copyright(c) 2013 OpenVPN Technologies, Inc. All rights reserved. 15 + * 16 + * This program is free software; you can redistribute it and/or modify 17 + * it under the terms of version 2 of the GNU General Public License as 18 + * published by the Free Software Foundation. 19 + * 20 + * This program is distributed in the hope that it will be useful, but 21 + * WITHOUT ANY WARRANTY; without even the implied warranty of 22 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 23 + * General Public License for more details. 24 + * 25 + * You should have received a copy of the GNU General Public License 26 + * along with this program; if not, write to the Free Software 27 + * Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 28 + * The full GNU General Public License is included in this distribution 29 + * in the file called LICENSE.GPL. 30 + * 31 + * BSD LICENSE 32 + * 33 + * Copyright(c) 2013 OpenVPN Technologies, Inc. All rights reserved. 34 + * 35 + * Redistribution and use in source and binary forms, with or without 36 + * modification, are permitted provided that the following conditions 37 + * are met: 38 + * 39 + * * Redistributions of source code must retain the above copyright 40 + * notice, this list of conditions and the following disclaimer. 41 + * * Redistributions in binary form must reproduce the above copyright 42 + * notice, this list of conditions and the following disclaimer in 43 + * the documentation and/or other materials provided with the 44 + * distribution. 
45 + * * Neither the name of OpenVPN Technologies nor the names of its 46 + * contributors may be used to endorse or promote products derived 47 + * from this software without specific prior written permission. 48 + * 49 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 50 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 51 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 52 + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 53 + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 54 + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 55 + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 56 + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 57 + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 58 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 59 + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
60 + */ 61 + 62 + #include <crypto/algapi.h> 63 + 64 + #ifndef __HAVE_ARCH_CRYPTO_MEMNEQ 65 + 66 + /* Generic path for arbitrary size */ 67 + static inline unsigned long 68 + __crypto_memneq_generic(const void *a, const void *b, size_t size) 69 + { 70 + unsigned long neq = 0; 71 + 72 + #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) 73 + while (size >= sizeof(unsigned long)) { 74 + neq |= *(unsigned long *)a ^ *(unsigned long *)b; 75 + a += sizeof(unsigned long); 76 + b += sizeof(unsigned long); 77 + size -= sizeof(unsigned long); 78 + } 79 + #endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */ 80 + while (size > 0) { 81 + neq |= *(unsigned char *)a ^ *(unsigned char *)b; 82 + a += 1; 83 + b += 1; 84 + size -= 1; 85 + } 86 + return neq; 87 + } 88 + 89 + /* Loop-free fast-path for frequently used 16-byte size */ 90 + static inline unsigned long __crypto_memneq_16(const void *a, const void *b) 91 + { 92 + #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 93 + if (sizeof(unsigned long) == 8) 94 + return ((*(unsigned long *)(a) ^ *(unsigned long *)(b)) 95 + | (*(unsigned long *)(a+8) ^ *(unsigned long *)(b+8))); 96 + else if (sizeof(unsigned int) == 4) 97 + return ((*(unsigned int *)(a) ^ *(unsigned int *)(b)) 98 + | (*(unsigned int *)(a+4) ^ *(unsigned int *)(b+4)) 99 + | (*(unsigned int *)(a+8) ^ *(unsigned int *)(b+8)) 100 + | (*(unsigned int *)(a+12) ^ *(unsigned int *)(b+12))); 101 + else 102 + #endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */ 103 + return ((*(unsigned char *)(a) ^ *(unsigned char *)(b)) 104 + | (*(unsigned char *)(a+1) ^ *(unsigned char *)(b+1)) 105 + | (*(unsigned char *)(a+2) ^ *(unsigned char *)(b+2)) 106 + | (*(unsigned char *)(a+3) ^ *(unsigned char *)(b+3)) 107 + | (*(unsigned char *)(a+4) ^ *(unsigned char *)(b+4)) 108 + | (*(unsigned char *)(a+5) ^ *(unsigned char *)(b+5)) 109 + | (*(unsigned char *)(a+6) ^ *(unsigned char *)(b+6)) 110 + | (*(unsigned char *)(a+7) ^ *(unsigned char *)(b+7))
111 + | (*(unsigned char *)(a+8) ^ *(unsigned char *)(b+8)) 112 + | (*(unsigned char *)(a+9) ^ *(unsigned char *)(b+9)) 113 + | (*(unsigned char *)(a+10) ^ *(unsigned char *)(b+10)) 114 + | (*(unsigned char *)(a+11) ^ *(unsigned char *)(b+11)) 115 + | (*(unsigned char *)(a+12) ^ *(unsigned char *)(b+12)) 116 + | (*(unsigned char *)(a+13) ^ *(unsigned char *)(b+13)) 117 + | (*(unsigned char *)(a+14) ^ *(unsigned char *)(b+14)) 118 + | (*(unsigned char *)(a+15) ^ *(unsigned char *)(b+15))); 119 + } 120 + 121 + /* Compare two areas of memory without leaking timing information, 122 + * and with special optimizations for common sizes. Users should 123 + * not call this function directly, but should instead use 124 + * crypto_memneq defined in crypto/algapi.h. 125 + */ 126 + noinline unsigned long __crypto_memneq(const void *a, const void *b, 127 + size_t size) 128 + { 129 + switch (size) { 130 + case 16: 131 + return __crypto_memneq_16(a, b); 132 + default: 133 + return __crypto_memneq_generic(a, b, size); 134 + } 135 + } 136 + EXPORT_SYMBOL(__crypto_memneq); 137 + 138 + #endif /* __HAVE_ARCH_CRYPTO_MEMNEQ */
+3 -3
drivers/acpi/acpica/acresrc.h
··· 184 184 struct acpi_buffer *output_buffer); 185 185 186 186 acpi_status 187 - acpi_rs_create_aml_resources(struct acpi_resource *linked_list_buffer, 187 + acpi_rs_create_aml_resources(struct acpi_buffer *resource_list, 188 188 struct acpi_buffer *output_buffer); 189 189 190 190 acpi_status ··· 227 227 u32 aml_buffer_length, acpi_size * size_needed); 228 228 229 229 acpi_status 230 - acpi_rs_get_aml_length(struct acpi_resource *linked_list_buffer, 231 - acpi_size * size_needed); 230 + acpi_rs_get_aml_length(struct acpi_resource *resource_list, 231 + acpi_size resource_list_size, acpi_size * size_needed); 232 232 233 233 acpi_status 234 234 acpi_rs_get_pci_routing_table_length(union acpi_operand_object *package_object,
+14 -4
drivers/acpi/acpica/nsalloc.c
··· 106 106 void acpi_ns_delete_node(struct acpi_namespace_node *node) 107 107 { 108 108 union acpi_operand_object *obj_desc; 109 + union acpi_operand_object *next_desc; 109 110 110 111 ACPI_FUNCTION_NAME(ns_delete_node); 111 112 ··· 115 114 acpi_ns_detach_object(node); 116 115 117 116 /* 118 - * Delete an attached data object if present (an object that was created 119 - * and attached via acpi_attach_data). Note: After any normal object is 120 - * detached above, the only possible remaining object is a data object. 117 + * Delete an attached data object list if present (objects that were 118 + * attached via acpi_attach_data). Note: After any normal object is 119 + * detached above, the only possible remaining object(s) are data 120 + * objects, in a linked list. 121 121 */ 122 122 obj_desc = node->object; 123 - if (obj_desc && (obj_desc->common.type == ACPI_TYPE_LOCAL_DATA)) { 123 + while (obj_desc && (obj_desc->common.type == ACPI_TYPE_LOCAL_DATA)) { 124 124 125 125 /* Invoke the attached data deletion handler if present */ 126 126 ··· 129 127 obj_desc->data.handler(node, obj_desc->data.pointer); 130 128 } 131 129 130 + next_desc = obj_desc->common.next_object; 132 131 acpi_ut_remove_reference(obj_desc); 132 + obj_desc = next_desc; 133 + } 134 + 135 + /* Special case for the statically allocated root node */ 136 + 137 + if (node == acpi_gbl_root_node) { 138 + return; 133 139 } 134 140 135 141 /* Now we can delete the node */
+10 -8
drivers/acpi/acpica/nsutils.c
··· 593 593 594 594 void acpi_ns_terminate(void) 595 595 { 596 - union acpi_operand_object *obj_desc; 596 + acpi_status status; 597 597 598 598 ACPI_FUNCTION_TRACE(ns_terminate); 599 599 600 600 /* 601 - * 1) Free the entire namespace -- all nodes and objects 602 - * 603 - * Delete all object descriptors attached to namepsace nodes 601 + * Free the entire namespace -- all nodes and all objects 602 + * attached to the nodes 604 603 */ 605 604 acpi_ns_delete_namespace_subtree(acpi_gbl_root_node); 606 605 607 - /* Detach any objects attached to the root */ 606 + /* Delete any objects attached to the root node */ 608 607 609 - obj_desc = acpi_ns_get_attached_object(acpi_gbl_root_node); 610 - if (obj_desc) { 611 - acpi_ns_detach_object(acpi_gbl_root_node); 608 + status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE); 609 + if (ACPI_FAILURE(status)) { 610 + return_VOID; 612 611 } 612 + 613 + acpi_ns_delete_node(acpi_gbl_root_node); 614 + (void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE); 613 615 614 616 ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Namespace freed\n")); 615 617 return_VOID;
+7 -2
drivers/acpi/acpica/rscalc.c
··· 174 174 * FUNCTION: acpi_rs_get_aml_length 175 175 * 176 176 * PARAMETERS: resource - Pointer to the resource linked list 177 + * resource_list_size - Size of the resource linked list 177 178 * size_needed - Where the required size is returned 178 179 * 179 180 * RETURN: Status ··· 186 185 ******************************************************************************/ 187 186 188 187 acpi_status 189 - acpi_rs_get_aml_length(struct acpi_resource * resource, acpi_size * size_needed) 188 + acpi_rs_get_aml_length(struct acpi_resource *resource, 189 + acpi_size resource_list_size, acpi_size * size_needed) 190 190 { 191 191 acpi_size aml_size_needed = 0; 192 + struct acpi_resource *resource_end; 192 193 acpi_rs_length total_size; 193 194 194 195 ACPI_FUNCTION_TRACE(rs_get_aml_length); 195 196 196 197 /* Traverse entire list of internal resource descriptors */ 197 198 198 - while (resource) { 199 + resource_end = 200 + ACPI_ADD_PTR(struct acpi_resource, resource, resource_list_size); 201 + while (resource < resource_end) { 199 202 200 203 /* Validate the descriptor type */ 201 204
+17 -19
drivers/acpi/acpica/rscreate.c
··· 418 418 * 419 419 * FUNCTION: acpi_rs_create_aml_resources 420 420 * 421 - * PARAMETERS: linked_list_buffer - Pointer to the resource linked list 422 - * output_buffer - Pointer to the user's buffer 421 + * PARAMETERS: resource_list - Pointer to the resource list buffer 422 + * output_buffer - Where the AML buffer is returned 423 423 * 424 424 * RETURN: Status AE_OK if okay, else a valid acpi_status code. 425 425 * If the output_buffer is too small, the error will be 426 426 * AE_BUFFER_OVERFLOW and output_buffer->Length will point 427 427 * to the size buffer needed. 428 428 * 429 - * DESCRIPTION: Takes the linked list of device resources and 430 - * creates a bytestream to be used as input for the 431 - * _SRS control method. 429 + * DESCRIPTION: Converts a list of device resources to an AML bytestream 430 + * to be used as input for the _SRS control method. 432 431 * 433 432 ******************************************************************************/ 434 433 435 434 acpi_status 436 - acpi_rs_create_aml_resources(struct acpi_resource *linked_list_buffer, 435 + acpi_rs_create_aml_resources(struct acpi_buffer *resource_list, 437 436 struct acpi_buffer *output_buffer) 438 437 { 439 438 acpi_status status; ··· 440 441 441 442 ACPI_FUNCTION_TRACE(rs_create_aml_resources); 442 443 443 - ACPI_DEBUG_PRINT((ACPI_DB_INFO, "LinkedListBuffer = %p\n", 444 - linked_list_buffer)); 444 + /* Params already validated, no need to re-validate here */ 445 445 446 - /* 447 - * Params already validated, so we don't re-validate here 448 - * 449 - * Pass the linked_list_buffer into a module that calculates 450 - * the buffer size needed for the byte stream. 
451 - */ 452 - status = acpi_rs_get_aml_length(linked_list_buffer, &aml_size_needed); 446 + ACPI_DEBUG_PRINT((ACPI_DB_INFO, "ResourceList Buffer = %p\n", 447 + resource_list->pointer)); 448 + 449 + /* Get the buffer size needed for the AML byte stream */ 450 + 451 + status = acpi_rs_get_aml_length(resource_list->pointer, 452 + resource_list->length, 453 + &aml_size_needed); 453 454 454 455 ACPI_DEBUG_PRINT((ACPI_DB_INFO, "AmlSizeNeeded=%X, %s\n", 455 456 (u32)aml_size_needed, acpi_format_exception(status))); ··· 466 467 467 468 /* Do the conversion */ 468 469 469 - status = 470 - acpi_rs_convert_resources_to_aml(linked_list_buffer, 471 - aml_size_needed, 472 - output_buffer->pointer); 470 + status = acpi_rs_convert_resources_to_aml(resource_list->pointer, 471 + aml_size_needed, 472 + output_buffer->pointer); 473 473 if (ACPI_FAILURE(status)) { 474 474 return_ACPI_STATUS(status); 475 475 }
+1 -1
drivers/acpi/acpica/rsutils.c
··· 753 753 * Convert the linked list into a byte stream 754 754 */ 755 755 buffer.length = ACPI_ALLOCATE_LOCAL_BUFFER; 756 - status = acpi_rs_create_aml_resources(in_buffer->pointer, &buffer); 756 + status = acpi_rs_create_aml_resources(in_buffer, &buffer); 757 757 if (ACPI_FAILURE(status)) { 758 758 goto cleanup; 759 759 }
+24 -7
drivers/acpi/acpica/utdebug.c
··· 185 185 } 186 186 187 187 acpi_gbl_prev_thread_id = thread_id; 188 + acpi_gbl_nesting_level = 0; 188 189 } 189 190 190 191 /* ··· 194 193 */ 195 194 acpi_os_printf("%9s-%04ld ", module_name, line_number); 196 195 196 + #ifdef ACPI_EXEC_APP 197 + /* 198 + * For acpi_exec only, emit the thread ID and nesting level. 199 + * Note: nesting level is really only useful during a single-thread 200 + * execution. Otherwise, multiple threads will keep resetting the 201 + * level. 202 + */ 197 203 if (ACPI_LV_THREADS & acpi_dbg_level) { 198 204 acpi_os_printf("[%u] ", (u32)thread_id); 199 205 } 200 206 201 - acpi_os_printf("[%02ld] %-22.22s: ", 202 - acpi_gbl_nesting_level, 203 - acpi_ut_trim_function_name(function_name)); 207 + acpi_os_printf("[%02ld] ", acpi_gbl_nesting_level); 208 + #endif 209 + 210 + acpi_os_printf("%-22.22s: ", acpi_ut_trim_function_name(function_name)); 204 211 205 212 va_start(args, format); 206 213 acpi_os_vprintf(format, args); ··· 429 420 component_id, "%s\n", acpi_gbl_fn_exit_str); 430 421 } 431 422 432 - acpi_gbl_nesting_level--; 423 + if (acpi_gbl_nesting_level) { 424 + acpi_gbl_nesting_level--; 425 + } 433 426 } 434 427 435 428 ACPI_EXPORT_SYMBOL(acpi_ut_exit) ··· 478 467 } 479 468 } 480 469 481 - acpi_gbl_nesting_level--; 470 + if (acpi_gbl_nesting_level) { 471 + acpi_gbl_nesting_level--; 472 + } 482 473 } 483 474 484 475 ACPI_EXPORT_SYMBOL(acpi_ut_status_exit) ··· 517 504 ACPI_FORMAT_UINT64(value)); 518 505 } 519 506 520 - acpi_gbl_nesting_level--; 507 + if (acpi_gbl_nesting_level) { 508 + acpi_gbl_nesting_level--; 509 + } 521 510 } 522 511 523 512 ACPI_EXPORT_SYMBOL(acpi_ut_value_exit) ··· 555 540 ptr); 556 541 } 557 542 558 - acpi_gbl_nesting_level--; 543 + if (acpi_gbl_nesting_level) { 544 + acpi_gbl_nesting_level--; 545 + } 559 546 } 560 547 561 548 #endif
-1
drivers/acpi/nvs.c
··· 13 13 #include <linux/slab.h> 14 14 #include <linux/acpi.h> 15 15 #include <linux/acpi_io.h> 16 - #include <acpi/acpiosxf.h> 17 16 18 17 /* ACPI NVS regions, APEI may use it */ 19 18
+3
drivers/acpi/pci_root.c
··· 65 65 .ids = root_device_ids, 66 66 .attach = acpi_pci_root_add, 67 67 .detach = acpi_pci_root_remove, 68 + .hotplug = { 69 + .ignore = true, 70 + }, 68 71 }; 69 72 70 73 static DEFINE_MUTEX(osc_lock);
+1 -1
drivers/acpi/scan.c
··· 1772 1772 */ 1773 1773 list_for_each_entry(hwid, &pnp.ids, list) { 1774 1774 handler = acpi_scan_match_handler(hwid->id, NULL); 1775 - if (handler) { 1775 + if (handler && !handler->hotplug.ignore) { 1776 1776 acpi_install_notify_handler(handle, ACPI_SYSTEM_NOTIFY, 1777 1777 acpi_hotplug_notify_cb, handler); 1778 1778 break;
+1 -1
drivers/acpi/sleep.c
··· 525 525 * generate wakeup events. 526 526 */ 527 527 if (ACPI_SUCCESS(status) && (acpi_state == ACPI_STATE_S3)) { 528 - acpi_event_status pwr_btn_status; 528 + acpi_event_status pwr_btn_status = ACPI_EVENT_FLAG_DISABLED; 529 529 530 530 acpi_get_event_status(ACPI_EVENT_POWER_BUTTON, &pwr_btn_status); 531 531
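The one-line sleep.c fix above guards against `acpi_get_event_status()` failing and leaving `pwr_btn_status` holding stack garbage that the caller then tests. A minimal standalone model of the hazard and the fix (the `get_status()` producer here is hypothetical, not an ACPI API):

```c
#include <assert.h>

#define FLAG_SET 0x01

/* Hypothetical status producer that fails without writing *out. */
static int get_status(int fail, unsigned *out)
{
	if (fail)
		return -1;	/* error path: *out left untouched */
	*out = FLAG_SET;
	return 0;
}

/* Pattern from the patch: initialize the out-parameter to a harmless
 * default so a failed call cannot leak an uninitialized value. */
static unsigned read_status_safe(int fail)
{
	unsigned status = 0;	/* default: flag disabled */

	get_status(fail, &status);	/* return value intentionally ignored */
	return status;
}
```

With the initializer, the failure path degenerates to "event disabled" instead of undefined behavior.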
+27 -25
drivers/acpi/sysfs.c
··· 309 309 sprintf(table_attr->name + ACPI_NAME_SIZE, "%d", 310 310 table_attr->instance); 311 311 312 - table_attr->attr.size = 0; 312 + table_attr->attr.size = table_header->length; 313 313 table_attr->attr.read = acpi_table_show; 314 314 table_attr->attr.attr.name = table_attr->name; 315 315 table_attr->attr.attr.mode = 0400; ··· 354 354 { 355 355 struct acpi_table_attr *table_attr; 356 356 struct acpi_table_header *table_header = NULL; 357 - int table_index = 0; 358 - int result; 357 + int table_index; 358 + acpi_status status; 359 + int ret; 359 360 360 361 tables_kobj = kobject_create_and_add("tables", acpi_kobj); 361 362 if (!tables_kobj) ··· 366 365 if (!dynamic_tables_kobj) 367 366 goto err_dynamic_tables; 368 367 369 - do { 370 - result = acpi_get_table_by_index(table_index, &table_header); 371 - if (!result) { 372 - table_index++; 373 - table_attr = NULL; 374 - table_attr = 375 - kzalloc(sizeof(struct acpi_table_attr), GFP_KERNEL); 376 - if (!table_attr) 377 - return -ENOMEM; 368 + for (table_index = 0;; table_index++) { 369 + status = acpi_get_table_by_index(table_index, &table_header); 378 370 379 - acpi_table_attr_init(table_attr, table_header); 380 - result = 381 - sysfs_create_bin_file(tables_kobj, 382 - &table_attr->attr); 383 - if (result) { 384 - kfree(table_attr); 385 - return result; 386 - } else 387 - list_add_tail(&table_attr->node, 388 - &acpi_table_attr_list); 371 + if (status == AE_BAD_PARAMETER) 372 + break; 373 + 374 + if (ACPI_FAILURE(status)) 375 + continue; 376 + 377 + table_attr = NULL; 378 + table_attr = kzalloc(sizeof(*table_attr), GFP_KERNEL); 379 + if (!table_attr) 380 + return -ENOMEM; 381 + 382 + acpi_table_attr_init(table_attr, table_header); 383 + ret = sysfs_create_bin_file(tables_kobj, &table_attr->attr); 384 + if (ret) { 385 + kfree(table_attr); 386 + return ret; 389 387 } 390 - } while (!result); 388 + list_add_tail(&table_attr->node, &acpi_table_attr_list); 389 + } 390 + 391 391 kobject_uevent(tables_kobj, KOBJ_ADD); 
392 392 kobject_uevent(dynamic_tables_kobj, KOBJ_ADD); 393 - result = acpi_install_table_handler(acpi_sysfs_table_handler, NULL); 393 + status = acpi_install_table_handler(acpi_sysfs_table_handler, NULL); 394 394 395 - return result == AE_OK ? 0 : -EINVAL; 395 + return ACPI_FAILURE(status) ? -EINVAL : 0; 396 396 err_dynamic_tables: 397 397 kobject_put(tables_kobj); 398 398 err:
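The sysfs.c rewrite above replaces a do-while that stopped on any failure with a `for` loop that distinguishes three outcomes of `acpi_get_table_by_index()`: end of list (`AE_BAD_PARAMETER`) terminates enumeration, any other failure skips just that entry, and success registers the table. A standalone sketch of that control flow, with a stand-in `get_table()` in place of the ACPICA call:

```c
#include <assert.h>

enum status { OK, BAD_PARAM, FAILURE };

/* Stand-in table source: indexes 0..4 exist, index 2 reports a
 * per-entry failure, index 5 is past the end of the list. */
static enum status get_table(int index)
{
	if (index >= 5)
		return BAD_PARAM;
	if (index == 2)
		return FAILURE;
	return OK;
}

/* Mirror of the init loop's flow: count entries that would get a
 * sysfs bin file, stopping only at end-of-list. */
static int count_registered(void)
{
	int n = 0;

	for (int i = 0;; i++) {
		enum status s = get_table(i);

		if (s == BAD_PARAM)	/* no more tables: normal exit */
			break;
		if (s != OK)		/* per-entry failure: skip it */
			continue;
		n++;
	}
	return n;
}
```

The old code conflated "end of list" with "this entry failed", so one bad table ended enumeration early; the new shape registers every readable table.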
+2
drivers/ata/ahci.c
··· 435 435 .driver_data = board_ahci_yes_fbs }, /* 88se9172 on some Gigabyte */ 436 436 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x91a3), 437 437 .driver_data = board_ahci_yes_fbs }, 438 + { PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x9230), 439 + .driver_data = board_ahci_yes_fbs }, 438 440 439 441 /* Promise */ 440 442 { PCI_VDEVICE(PROMISE, 0x3f20), board_ahci }, /* PDC42819 */
+1
drivers/ata/ahci_platform.c
··· 329 329 static const struct of_device_id ahci_of_match[] = { 330 330 { .compatible = "snps,spear-ahci", }, 331 331 { .compatible = "snps,exynos5440-ahci", }, 332 + { .compatible = "ibm,476gtr-ahci", }, 332 333 {}, 333 334 }; 334 335 MODULE_DEVICE_TABLE(of, ahci_of_match);
+1 -2
drivers/ata/libata-core.c
··· 6304 6304 for (i = 0; i < SATA_PMP_MAX_PORTS; i++) 6305 6305 ata_tlink_delete(&ap->pmp_link[i]); 6306 6306 } 6307 - ata_tport_delete(ap); 6308 - 6309 6307 /* remove the associated SCSI host */ 6310 6308 scsi_remove_host(ap->scsi_host); 6309 + ata_tport_delete(ap); 6311 6310 } 6312 6311 6313 6312 /**
+1 -3
drivers/ata/libata-zpodd.c
··· 88 88 static bool odd_can_poweroff(struct ata_device *ata_dev) 89 89 { 90 90 acpi_handle handle; 91 - acpi_status status; 92 91 struct acpi_device *acpi_dev; 93 92 94 93 handle = ata_dev_acpi_handle(ata_dev); 95 94 if (!handle) 96 95 return false; 97 96 98 - status = acpi_bus_get_device(handle, &acpi_dev); 99 - if (ACPI_FAILURE(status)) 97 + if (acpi_bus_get_device(handle, &acpi_dev)) 100 98 return false; 101 99 102 100 return acpi_device_can_poweroff(acpi_dev);
+1
drivers/ata/pata_arasan_cf.c
··· 319 319 ret = clk_set_rate(acdev->clk, 166000000); 320 320 if (ret) { 321 321 dev_warn(acdev->host->dev, "clock set rate failed"); 322 + clk_disable_unprepare(acdev->clk); 322 323 return ret; 323 324 } 324 325
+25
drivers/char/hw_random/Kconfig
··· 165 165 166 166 If unsure, say Y. 167 167 168 + config HW_RANDOM_OMAP3_ROM 169 + tristate "OMAP3 ROM Random Number Generator support" 170 + depends on HW_RANDOM && ARCH_OMAP3 171 + default HW_RANDOM 172 + ---help--- 173 + This driver provides kernel-side support for the Random Number 174 + Generator hardware found on OMAP34xx processors. 175 + 176 + To compile this driver as a module, choose M here: the 177 + module will be called omap3-rom-rng. 178 + 179 + If unsure, say Y. 180 + 168 181 config HW_RANDOM_OCTEON 169 182 tristate "Octeon Random Number Generator support" 170 183 depends on HW_RANDOM && CAVIUM_OCTEON_SOC ··· 338 325 339 326 To compile this driver as a module, choose M here: the 340 327 module will be called tpm-rng. 328 + 329 + If unsure, say Y. 330 + 331 + config HW_RANDOM_MSM 332 + tristate "Qualcomm MSM Random Number Generator support" 333 + depends on HW_RANDOM && ARCH_MSM 334 + ---help--- 335 + This driver provides kernel-side support for the Random Number 336 + Generator hardware found on Qualcomm MSM SoCs. 337 + 338 + To compile this driver as a module, choose M here: the 339 + module will be called msm-rng. 341 340 342 341 If unsure, say Y.
+2
drivers/char/hw_random/Makefile
··· 15 15 obj-$(CONFIG_HW_RANDOM_VIA) += via-rng.o 16 16 obj-$(CONFIG_HW_RANDOM_IXP4XX) += ixp4xx-rng.o 17 17 obj-$(CONFIG_HW_RANDOM_OMAP) += omap-rng.o 18 + obj-$(CONFIG_HW_RANDOM_OMAP3_ROM) += omap3-rom-rng.o 18 19 obj-$(CONFIG_HW_RANDOM_PASEMI) += pasemi-rng.o 19 20 obj-$(CONFIG_HW_RANDOM_VIRTIO) += virtio-rng.o 20 21 obj-$(CONFIG_HW_RANDOM_TX4939) += tx4939-rng.o ··· 29 28 obj-$(CONFIG_HW_RANDOM_EXYNOS) += exynos-rng.o 30 29 obj-$(CONFIG_HW_RANDOM_TPM) += tpm-rng.o 31 30 obj-$(CONFIG_HW_RANDOM_BCM2835) += bcm2835-rng.o 31 + obj-$(CONFIG_HW_RANDOM_MSM) += msm-rng.o
+197
drivers/char/hw_random/msm-rng.c
··· 1 + /* 2 + * Copyright (c) 2011-2013, The Linux Foundation. All rights reserved. 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2 and 6 + * only version 2 as published by the Free Software Foundation. 7 + * 8 + * This program is distributed in the hope that it will be useful, 9 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 10 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 + * GNU General Public License for more details. 12 + * 13 + */ 14 + #include <linux/clk.h> 15 + #include <linux/err.h> 16 + #include <linux/hw_random.h> 17 + #include <linux/io.h> 18 + #include <linux/module.h> 19 + #include <linux/of.h> 20 + #include <linux/platform_device.h> 21 + 22 + /* Device specific register offsets */ 23 + #define PRNG_DATA_OUT 0x0000 24 + #define PRNG_STATUS 0x0004 25 + #define PRNG_LFSR_CFG 0x0100 26 + #define PRNG_CONFIG 0x0104 27 + 28 + /* Device specific register masks and config values */ 29 + #define PRNG_LFSR_CFG_MASK 0x0000ffff 30 + #define PRNG_LFSR_CFG_CLOCKS 0x0000dddd 31 + #define PRNG_CONFIG_HW_ENABLE BIT(1) 32 + #define PRNG_STATUS_DATA_AVAIL BIT(0) 33 + 34 + #define MAX_HW_FIFO_DEPTH 16 35 + #define MAX_HW_FIFO_SIZE (MAX_HW_FIFO_DEPTH * 4) 36 + #define WORD_SZ 4 37 + 38 + struct msm_rng { 39 + void __iomem *base; 40 + struct clk *clk; 41 + struct hwrng hwrng; 42 + }; 43 + 44 + #define to_msm_rng(p) container_of(p, struct msm_rng, hwrng) 45 + 46 + static int msm_rng_enable(struct hwrng *hwrng, int enable) 47 + { 48 + struct msm_rng *rng = to_msm_rng(hwrng); 49 + u32 val; 50 + int ret; 51 + 52 + ret = clk_prepare_enable(rng->clk); 53 + if (ret) 54 + return ret; 55 + 56 + if (enable) { 57 + /* Enable PRNG only if it is not already enabled */ 58 + val = readl_relaxed(rng->base + PRNG_CONFIG); 59 + if (val & PRNG_CONFIG_HW_ENABLE) 60 + goto already_enabled; 61 + 62 + val = readl_relaxed(rng->base + PRNG_LFSR_CFG); 63 
+ val &= ~PRNG_LFSR_CFG_MASK; 64 + val |= PRNG_LFSR_CFG_CLOCKS; 65 + writel(val, rng->base + PRNG_LFSR_CFG); 66 + 67 + val = readl_relaxed(rng->base + PRNG_CONFIG); 68 + val |= PRNG_CONFIG_HW_ENABLE; 69 + writel(val, rng->base + PRNG_CONFIG); 70 + } else { 71 + val = readl_relaxed(rng->base + PRNG_CONFIG); 72 + val &= ~PRNG_CONFIG_HW_ENABLE; 73 + writel(val, rng->base + PRNG_CONFIG); 74 + } 75 + 76 + already_enabled: 77 + clk_disable_unprepare(rng->clk); 78 + return 0; 79 + } 80 + 81 + static int msm_rng_read(struct hwrng *hwrng, void *data, size_t max, bool wait) 82 + { 83 + struct msm_rng *rng = to_msm_rng(hwrng); 84 + size_t currsize = 0; 85 + u32 *retdata = data; 86 + size_t maxsize; 87 + int ret; 88 + u32 val; 89 + 90 + /* calculate max size bytes to transfer back to caller */ 91 + maxsize = min_t(size_t, MAX_HW_FIFO_SIZE, max); 92 + 93 + /* no room for word data */ 94 + if (maxsize < WORD_SZ) 95 + return 0; 96 + 97 + ret = clk_prepare_enable(rng->clk); 98 + if (ret) 99 + return ret; 100 + 101 + /* read random data from hardware */ 102 + do { 103 + val = readl_relaxed(rng->base + PRNG_STATUS); 104 + if (!(val & PRNG_STATUS_DATA_AVAIL)) 105 + break; 106 + 107 + val = readl_relaxed(rng->base + PRNG_DATA_OUT); 108 + if (!val) 109 + break; 110 + 111 + *retdata++ = val; 112 + currsize += WORD_SZ; 113 + 114 + /* make sure we stay on 32bit boundary */ 115 + if ((maxsize - currsize) < WORD_SZ) 116 + break; 117 + } while (currsize < maxsize); 118 + 119 + clk_disable_unprepare(rng->clk); 120 + 121 + return currsize; 122 + } 123 + 124 + static int msm_rng_init(struct hwrng *hwrng) 125 + { 126 + return msm_rng_enable(hwrng, 1); 127 + } 128 + 129 + static void msm_rng_cleanup(struct hwrng *hwrng) 130 + { 131 + msm_rng_enable(hwrng, 0); 132 + } 133 + 134 + static int msm_rng_probe(struct platform_device *pdev) 135 + { 136 + struct resource *res; 137 + struct msm_rng *rng; 138 + int ret; 139 + 140 + rng = devm_kzalloc(&pdev->dev, sizeof(*rng), GFP_KERNEL); 141 + if (!rng) 
142 + return -ENOMEM; 143 + 144 + platform_set_drvdata(pdev, rng); 145 + 146 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 147 + rng->base = devm_ioremap_resource(&pdev->dev, res); 148 + if (IS_ERR(rng->base)) 149 + return PTR_ERR(rng->base); 150 + 151 + rng->clk = devm_clk_get(&pdev->dev, "core"); 152 + if (IS_ERR(rng->clk)) 153 + return PTR_ERR(rng->clk); 154 + 155 + rng->hwrng.name = KBUILD_MODNAME, 156 + rng->hwrng.init = msm_rng_init, 157 + rng->hwrng.cleanup = msm_rng_cleanup, 158 + rng->hwrng.read = msm_rng_read, 159 + 160 + ret = hwrng_register(&rng->hwrng); 161 + if (ret) { 162 + dev_err(&pdev->dev, "failed to register hwrng\n"); 163 + return ret; 164 + } 165 + 166 + return 0; 167 + } 168 + 169 + static int msm_rng_remove(struct platform_device *pdev) 170 + { 171 + struct msm_rng *rng = platform_get_drvdata(pdev); 172 + 173 + hwrng_unregister(&rng->hwrng); 174 + return 0; 175 + } 176 + 177 + static const struct of_device_id msm_rng_of_match[] = { 178 + { .compatible = "qcom,prng", }, 179 + {} 180 + }; 181 + MODULE_DEVICE_TABLE(of, msm_rng_of_match); 182 + 183 + static struct platform_driver msm_rng_driver = { 184 + .probe = msm_rng_probe, 185 + .remove = msm_rng_remove, 186 + .driver = { 187 + .name = KBUILD_MODNAME, 188 + .owner = THIS_MODULE, 189 + .of_match_table = of_match_ptr(msm_rng_of_match), 190 + } 191 + }; 192 + module_platform_driver(msm_rng_driver); 193 + 194 + MODULE_ALIAS("platform:" KBUILD_MODNAME); 195 + MODULE_AUTHOR("The Linux Foundation"); 196 + MODULE_DESCRIPTION("Qualcomm MSM random number generator driver"); 197 + MODULE_LICENSE("GPL v2");
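As context for `msm_rng_read()` above (a standalone bounds-logic sketch, not the driver): the read loop copies whole 32-bit words out of a 16-word hardware FIFO, stopping early when data runs out or when the next word would overflow the caller's buffer. The helper below models that arithmetic with a hypothetical `fifo_read_words()`; names are illustrative only.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define WORD_SZ 4
#define MAX_HW_FIFO_SIZE (16 * WORD_SZ)

/* Model of the driver's copy loop: fill 'out' with whole words from
 * 'fifo' (navail words available), never exceeding 'max' bytes or the
 * hardware FIFO size. Returns the number of bytes copied. */
static size_t fifo_read_words(const uint32_t *fifo, size_t navail,
			      void *out, size_t max)
{
	size_t maxsize = max < MAX_HW_FIFO_SIZE ? max : MAX_HW_FIFO_SIZE;
	uint32_t *ret = out;
	size_t currsize = 0, i = 0;

	if (maxsize < WORD_SZ)		/* no room for even one word */
		return 0;

	while (currsize < maxsize && i < navail) {
		ret[currsize / WORD_SZ] = fifo[i++];
		currsize += WORD_SZ;
		if (maxsize - currsize < WORD_SZ)	/* stay on a 32-bit boundary */
			break;
	}
	return currsize;
}
```

Note the same property as the driver: a 10-byte request yields only 8 bytes, because partial words are never written past the last aligned boundary.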
+141
drivers/char/hw_random/omap3-rom-rng.c
··· 1 + /* 2 + * omap3-rom-rng.c - RNG driver for TI OMAP3 CPU family 3 + * 4 + * Copyright (C) 2009 Nokia Corporation 5 + * Author: Juha Yrjola <juha.yrjola@solidboot.com> 6 + * 7 + * Copyright (C) 2013 Pali Rohár <pali.rohar@gmail.com> 8 + * 9 + * This file is licensed under the terms of the GNU General Public 10 + * License version 2. This program is licensed "as is" without any 11 + * warranty of any kind, whether express or implied. 12 + */ 13 + 14 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 15 + 16 + #include <linux/module.h> 17 + #include <linux/init.h> 18 + #include <linux/random.h> 19 + #include <linux/hw_random.h> 20 + #include <linux/timer.h> 21 + #include <linux/clk.h> 22 + #include <linux/err.h> 23 + #include <linux/platform_device.h> 24 + 25 + #define RNG_RESET 0x01 26 + #define RNG_GEN_PRNG_HW_INIT 0x02 27 + #define RNG_GEN_HW 0x08 28 + 29 + /* param1: ptr, param2: count, param3: flag */ 30 + static u32 (*omap3_rom_rng_call)(u32, u32, u32); 31 + 32 + static struct timer_list idle_timer; 33 + static int rng_idle; 34 + static struct clk *rng_clk; 35 + 36 + static void omap3_rom_rng_idle(unsigned long data) 37 + { 38 + int r; 39 + 40 + r = omap3_rom_rng_call(0, 0, RNG_RESET); 41 + if (r != 0) { 42 + pr_err("reset failed: %d\n", r); 43 + return; 44 + } 45 + clk_disable_unprepare(rng_clk); 46 + rng_idle = 1; 47 + } 48 + 49 + static int omap3_rom_rng_get_random(void *buf, unsigned int count) 50 + { 51 + u32 r; 52 + u32 ptr; 53 + 54 + del_timer_sync(&idle_timer); 55 + if (rng_idle) { 56 + clk_prepare_enable(rng_clk); 57 + r = omap3_rom_rng_call(0, 0, RNG_GEN_PRNG_HW_INIT); 58 + if (r != 0) { 59 + clk_disable_unprepare(rng_clk); 60 + pr_err("HW init failed: %d\n", r); 61 + return -EIO; 62 + } 63 + rng_idle = 0; 64 + } 65 + 66 + ptr = virt_to_phys(buf); 67 + r = omap3_rom_rng_call(ptr, count, RNG_GEN_HW); 68 + mod_timer(&idle_timer, jiffies + msecs_to_jiffies(500)); 69 + if (r != 0) 70 + return -EINVAL; 71 + return 0; 72 + } 73 + 74 + static int 
omap3_rom_rng_data_present(struct hwrng *rng, int wait) 75 + { 76 + return 1; 77 + } 78 + 79 + static int omap3_rom_rng_data_read(struct hwrng *rng, u32 *data) 80 + { 81 + int r; 82 + 83 + r = omap3_rom_rng_get_random(data, 4); 84 + if (r < 0) 85 + return r; 86 + return 4; 87 + } 88 + 89 + static struct hwrng omap3_rom_rng_ops = { 90 + .name = "omap3-rom", 91 + .data_present = omap3_rom_rng_data_present, 92 + .data_read = omap3_rom_rng_data_read, 93 + }; 94 + 95 + static int omap3_rom_rng_probe(struct platform_device *pdev) 96 + { 97 + pr_info("initializing\n"); 98 + 99 + omap3_rom_rng_call = pdev->dev.platform_data; 100 + if (!omap3_rom_rng_call) { 101 + pr_err("omap3_rom_rng_call is NULL\n"); 102 + return -EINVAL; 103 + } 104 + 105 + setup_timer(&idle_timer, omap3_rom_rng_idle, 0); 106 + rng_clk = clk_get(&pdev->dev, "ick"); 107 + if (IS_ERR(rng_clk)) { 108 + pr_err("unable to get RNG clock\n"); 109 + return PTR_ERR(rng_clk); 110 + } 111 + 112 + /* Leave the RNG in reset state. */ 113 + clk_prepare_enable(rng_clk); 114 + omap3_rom_rng_idle(0); 115 + 116 + return hwrng_register(&omap3_rom_rng_ops); 117 + } 118 + 119 + static int omap3_rom_rng_remove(struct platform_device *pdev) 120 + { 121 + hwrng_unregister(&omap3_rom_rng_ops); 122 + clk_disable_unprepare(rng_clk); 123 + clk_put(rng_clk); 124 + return 0; 125 + } 126 + 127 + static struct platform_driver omap3_rom_rng_driver = { 128 + .driver = { 129 + .name = "omap3-rom-rng", 130 + .owner = THIS_MODULE, 131 + }, 132 + .probe = omap3_rom_rng_probe, 133 + .remove = omap3_rom_rng_remove, 134 + }; 135 + 136 + module_platform_driver(omap3_rom_rng_driver); 137 + 138 + MODULE_ALIAS("platform:omap3-rom-rng"); 139 + MODULE_AUTHOR("Juha Yrjola"); 140 + MODULE_AUTHOR("Pali Rohár <pali.rohar@gmail.com>"); 141 + MODULE_LICENSE("GPL");
+2 -3
drivers/char/hw_random/pseries-rng.c
··· 24 24 #include <linux/hw_random.h> 25 25 #include <asm/vio.h> 26 26 27 - #define MODULE_NAME "pseries-rng" 28 27 29 28 static int pseries_rng_data_read(struct hwrng *rng, u32 *data) 30 29 { ··· 54 55 }; 55 56 56 57 static struct hwrng pseries_rng = { 57 - .name = MODULE_NAME, 58 + .name = KBUILD_MODNAME, 58 59 .data_read = pseries_rng_data_read, 59 60 }; 60 61 ··· 77 78 MODULE_DEVICE_TABLE(vio, pseries_rng_driver_ids); 78 79 79 80 static struct vio_driver pseries_rng_driver = { 80 - .name = MODULE_NAME, 81 + .name = KBUILD_MODNAME, 81 82 .probe = pseries_rng_probe, 82 83 .remove = pseries_rng_remove, 83 84 .get_desired_dma = pseries_rng_get_desired_dma,
+1 -1
drivers/char/hw_random/via-rng.c
··· 221 221 module_init(mod_init); 222 222 module_exit(mod_exit); 223 223 224 - static struct x86_cpu_id via_rng_cpu_id[] = { 224 + static struct x86_cpu_id __maybe_unused via_rng_cpu_id[] = { 225 225 X86_FEATURE_MATCH(X86_FEATURE_XSTORE), 226 226 {} 227 227 };
-1
drivers/cpufreq/exynos4210-cpufreq.c
··· 157 157 pr_debug("%s: failed initialization\n", __func__); 158 158 return -EINVAL; 159 159 } 160 - EXPORT_SYMBOL(exynos4210_cpufreq_init);
-1
drivers/cpufreq/exynos4x12-cpufreq.c
··· 211 211 pr_debug("%s: failed initialization\n", __func__); 212 212 return -EINVAL; 213 213 } 214 - EXPORT_SYMBOL(exynos4x12_cpufreq_init);
-1
drivers/cpufreq/exynos5250-cpufreq.c
··· 236 236 pr_err("%s: failed initialization\n", __func__); 237 237 return -EINVAL; 238 238 } 239 - EXPORT_SYMBOL(exynos5250_cpufreq_init);
+1 -3
drivers/cpufreq/tegra-cpufreq.c
··· 142 142 143 143 mutex_lock(&tegra_cpu_lock); 144 144 145 - if (is_suspended) { 146 - ret = -EBUSY; 145 + if (is_suspended) 147 146 goto out; 148 - } 149 147 150 148 freq = freq_table[index].frequency; 151 149
+19 -6
drivers/crypto/caam/Kconfig
··· 4 4 help 5 5 Enables the driver module for Freescale's Cryptographic Accelerator 6 6 and Assurance Module (CAAM), also known as the SEC version 4 (SEC4). 7 - This module adds a job ring operation interface, and configures h/w 7 + This module creates job ring devices, and configures h/w 8 8 to operate as a DPAA component automatically, depending 9 9 on h/w feature availability. 10 10 11 11 To compile this driver as a module, choose M here: the module 12 12 will be called caam. 13 13 14 + config CRYPTO_DEV_FSL_CAAM_JR 15 + tristate "Freescale CAAM Job Ring driver backend" 16 + depends on CRYPTO_DEV_FSL_CAAM 17 + default y 18 + help 19 + Enables the driver module for Job Rings which are part of 20 + Freescale's Cryptographic Accelerator 21 + and Assurance Module (CAAM). This module adds a job ring operation 22 + interface. 23 + 24 + To compile this driver as a module, choose M here: the module 25 + will be called caam_jr. 26 + 14 27 config CRYPTO_DEV_FSL_CAAM_RINGSIZE 15 28 int "Job Ring size" 16 - depends on CRYPTO_DEV_FSL_CAAM 29 + depends on CRYPTO_DEV_FSL_CAAM_JR 17 30 range 2 9 18 31 default "9" 19 32 help ··· 44 31 45 32 config CRYPTO_DEV_FSL_CAAM_INTC 46 33 bool "Job Ring interrupt coalescing" 47 - depends on CRYPTO_DEV_FSL_CAAM 34 + depends on CRYPTO_DEV_FSL_CAAM_JR 48 35 default n 49 36 help 50 37 Enable the Job Ring's interrupt coalescing feature. 
··· 75 62 76 63 config CRYPTO_DEV_FSL_CAAM_CRYPTO_API 77 64 tristate "Register algorithm implementations with the Crypto API" 78 - depends on CRYPTO_DEV_FSL_CAAM 65 + depends on CRYPTO_DEV_FSL_CAAM && CRYPTO_DEV_FSL_CAAM_JR 79 66 default y 80 67 select CRYPTO_ALGAPI 81 68 select CRYPTO_AUTHENC ··· 89 76 90 77 config CRYPTO_DEV_FSL_CAAM_AHASH_API 91 78 tristate "Register hash algorithm implementations with Crypto API" 92 - depends on CRYPTO_DEV_FSL_CAAM 79 + depends on CRYPTO_DEV_FSL_CAAM && CRYPTO_DEV_FSL_CAAM_JR 93 80 default y 94 81 select CRYPTO_HASH 95 82 help ··· 101 88 102 89 config CRYPTO_DEV_FSL_CAAM_RNG_API 103 90 tristate "Register caam device for hwrng API" 104 - depends on CRYPTO_DEV_FSL_CAAM 91 + depends on CRYPTO_DEV_FSL_CAAM && CRYPTO_DEV_FSL_CAAM_JR 105 92 default y 106 93 select CRYPTO_RNG 107 94 select HW_RANDOM
+3 -1
drivers/crypto/caam/Makefile
··· 6 6 endif 7 7 8 8 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM) += caam.o 9 + obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_JR) += caam_jr.o 9 10 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API) += caamalg.o 10 11 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_AHASH_API) += caamhash.o 11 12 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_RNG_API) += caamrng.o 12 13 13 - caam-objs := ctrl.o jr.o error.o key_gen.o 14 + caam-objs := ctrl.o 15 + caam_jr-objs := jr.o key_gen.o error.o
+20 -63
drivers/crypto/caam/caamalg.c
··· 86 86 #else 87 87 #define debug(format, arg...) 88 88 #endif 89 + static struct list_head alg_list; 89 90 90 91 /* Set DK bit in class 1 operation if shared */ 91 92 static inline void append_dec_op1(u32 *desc, u32 type) ··· 2058 2057 2059 2058 struct caam_crypto_alg { 2060 2059 struct list_head entry; 2061 - struct device *ctrldev; 2062 2060 int class1_alg_type; 2063 2061 int class2_alg_type; 2064 2062 int alg_op; ··· 2070 2070 struct caam_crypto_alg *caam_alg = 2071 2071 container_of(alg, struct caam_crypto_alg, crypto_alg); 2072 2072 struct caam_ctx *ctx = crypto_tfm_ctx(tfm); 2073 - struct caam_drv_private *priv = dev_get_drvdata(caam_alg->ctrldev); 2074 - int tgt_jr = atomic_inc_return(&priv->tfm_count); 2075 2073 2076 - /* 2077 - * distribute tfms across job rings to ensure in-order 2078 - * crypto request processing per tfm 2079 - */ 2080 - ctx->jrdev = priv->jrdev[(tgt_jr / 2) % priv->total_jobrs]; 2074 + ctx->jrdev = caam_jr_alloc(); 2075 + if (IS_ERR(ctx->jrdev)) { 2076 + pr_err("Job Ring Device allocation for transform failed\n"); 2077 + return PTR_ERR(ctx->jrdev); 2078 + } 2081 2079 2082 2080 /* copy descriptor header template value */ 2083 2081 ctx->class1_alg_type = OP_TYPE_CLASS1_ALG | caam_alg->class1_alg_type; ··· 2102 2104 dma_unmap_single(ctx->jrdev, ctx->sh_desc_givenc_dma, 2103 2105 desc_bytes(ctx->sh_desc_givenc), 2104 2106 DMA_TO_DEVICE); 2107 + 2108 + caam_jr_free(ctx->jrdev); 2105 2109 } 2106 2110 2107 2111 static void __exit caam_algapi_exit(void) 2108 2112 { 2109 2113 2110 - struct device_node *dev_node; 2111 - struct platform_device *pdev; 2112 - struct device *ctrldev; 2113 - struct caam_drv_private *priv; 2114 2114 struct caam_crypto_alg *t_alg, *n; 2115 2115 2116 - dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec-v4.0"); 2117 - if (!dev_node) { 2118 - dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec4.0"); 2119 - if (!dev_node) 2120 - return; 2121 - } 2122 - 2123 - pdev = of_find_device_by_node(dev_node); 2124 - if 
(!pdev) 2116 + if (!alg_list.next) 2125 2117 return; 2126 2118 2127 - ctrldev = &pdev->dev; 2128 - of_node_put(dev_node); 2129 - priv = dev_get_drvdata(ctrldev); 2130 - 2131 - if (!priv->alg_list.next) 2132 - return; 2133 - 2134 - list_for_each_entry_safe(t_alg, n, &priv->alg_list, entry) { 2119 + list_for_each_entry_safe(t_alg, n, &alg_list, entry) { 2135 2120 crypto_unregister_alg(&t_alg->crypto_alg); 2136 2121 list_del(&t_alg->entry); 2137 2122 kfree(t_alg); 2138 2123 } 2139 2124 } 2140 2125 2141 - static struct caam_crypto_alg *caam_alg_alloc(struct device *ctrldev, 2142 - struct caam_alg_template 2126 + static struct caam_crypto_alg *caam_alg_alloc(struct caam_alg_template 2143 2127 *template) 2144 2128 { 2145 2129 struct caam_crypto_alg *t_alg; ··· 2129 2149 2130 2150 t_alg = kzalloc(sizeof(struct caam_crypto_alg), GFP_KERNEL); 2131 2151 if (!t_alg) { 2132 - dev_err(ctrldev, "failed to allocate t_alg\n"); 2152 + pr_err("failed to allocate t_alg\n"); 2133 2153 return ERR_PTR(-ENOMEM); 2134 2154 } 2135 2155 ··· 2161 2181 t_alg->class1_alg_type = template->class1_alg_type; 2162 2182 t_alg->class2_alg_type = template->class2_alg_type; 2163 2183 t_alg->alg_op = template->alg_op; 2164 - t_alg->ctrldev = ctrldev; 2165 2184 2166 2185 return t_alg; 2167 2186 } 2168 2187 2169 2188 static int __init caam_algapi_init(void) 2170 2189 { 2171 - struct device_node *dev_node; 2172 - struct platform_device *pdev; 2173 - struct device *ctrldev; 2174 - struct caam_drv_private *priv; 2175 2190 int i = 0, err = 0; 2176 2191 2177 - dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec-v4.0"); 2178 - if (!dev_node) { 2179 - dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec4.0"); 2180 - if (!dev_node) 2181 - return -ENODEV; 2182 - } 2183 - 2184 - pdev = of_find_device_by_node(dev_node); 2185 - if (!pdev) 2186 - return -ENODEV; 2187 - 2188 - ctrldev = &pdev->dev; 2189 - priv = dev_get_drvdata(ctrldev); 2190 - of_node_put(dev_node); 2191 - 2192 - 
INIT_LIST_HEAD(&priv->alg_list); 2193 - 2194 - atomic_set(&priv->tfm_count, -1); 2192 + INIT_LIST_HEAD(&alg_list); 2195 2193 2196 2194 /* register crypto algorithms the device supports */ 2197 2195 for (i = 0; i < ARRAY_SIZE(driver_algs); i++) { 2198 2196 /* TODO: check if h/w supports alg */ 2199 2197 struct caam_crypto_alg *t_alg; 2200 2198 2201 - t_alg = caam_alg_alloc(ctrldev, &driver_algs[i]); 2199 + t_alg = caam_alg_alloc(&driver_algs[i]); 2202 2200 if (IS_ERR(t_alg)) { 2203 2201 err = PTR_ERR(t_alg); 2204 - dev_warn(ctrldev, "%s alg allocation failed\n", 2205 - driver_algs[i].driver_name); 2202 + pr_warn("%s alg allocation failed\n", 2203 + driver_algs[i].driver_name); 2206 2204 continue; 2207 2205 } 2208 2206 2209 2207 err = crypto_register_alg(&t_alg->crypto_alg); 2210 2208 if (err) { 2211 - dev_warn(ctrldev, "%s alg registration failed\n", 2209 + pr_warn("%s alg registration failed\n", 2212 2210 t_alg->crypto_alg.cra_driver_name); 2213 2211 kfree(t_alg); 2214 2212 } else 2215 - list_add_tail(&t_alg->entry, &priv->alg_list); 2213 + list_add_tail(&t_alg->entry, &alg_list); 2216 2214 } 2217 - if (!list_empty(&priv->alg_list)) 2218 - dev_info(ctrldev, "%s algorithms registered in /proc/crypto\n", 2219 - (char *)of_get_property(dev_node, "compatible", NULL)); 2215 + if (!list_empty(&alg_list)) 2216 + pr_info("caam algorithms registered in /proc/crypto\n"); 2220 2217 2221 2218 return err; 2222 2219 }
+26 -62
drivers/crypto/caam/caamhash.c
··· 94 94 #define debug(format, arg...) 95 95 #endif 96 96 97 + 98 + static struct list_head hash_list; 99 + 97 100 /* ahash per-session context */ 98 101 struct caam_hash_ctx { 99 102 struct device *jrdev; ··· 1656 1653 1657 1654 struct caam_hash_alg { 1658 1655 struct list_head entry; 1659 - struct device *ctrldev; 1660 1656 int alg_type; 1661 1657 int alg_op; 1662 1658 struct ahash_alg ahash_alg; ··· 1672 1670 struct caam_hash_alg *caam_hash = 1673 1671 container_of(alg, struct caam_hash_alg, ahash_alg); 1674 1672 struct caam_hash_ctx *ctx = crypto_tfm_ctx(tfm); 1675 - struct caam_drv_private *priv = dev_get_drvdata(caam_hash->ctrldev); 1676 1673 /* Sizes for MDHA running digests: MD5, SHA1, 224, 256, 384, 512 */ 1677 1674 static const u8 runninglen[] = { HASH_MSG_LEN + MD5_DIGEST_SIZE, 1678 1675 HASH_MSG_LEN + SHA1_DIGEST_SIZE, ··· 1679 1678 HASH_MSG_LEN + SHA256_DIGEST_SIZE, 1680 1679 HASH_MSG_LEN + 64, 1681 1680 HASH_MSG_LEN + SHA512_DIGEST_SIZE }; 1682 - int tgt_jr = atomic_inc_return(&priv->tfm_count); 1683 1681 int ret = 0; 1684 1682 1685 1683 /* 1686 - * distribute tfms across job rings to ensure in-order 1684 + * Get a Job ring from Job Ring driver to ensure in-order 1687 1685 * crypto request processing per tfm 1688 1686 */ 1689 - ctx->jrdev = priv->jrdev[tgt_jr % priv->total_jobrs]; 1690 - 1687 + ctx->jrdev = caam_jr_alloc(); 1688 + if (IS_ERR(ctx->jrdev)) { 1689 + pr_err("Job Ring Device allocation for transform failed\n"); 1690 + return PTR_ERR(ctx->jrdev); 1691 + } 1691 1692 /* copy descriptor header template value */ 1692 1693 ctx->alg_type = OP_TYPE_CLASS2_ALG | caam_hash->alg_type; 1693 1694 ctx->alg_op = OP_TYPE_CLASS2_ALG | caam_hash->alg_op; ··· 1732 1729 !dma_mapping_error(ctx->jrdev, ctx->sh_desc_finup_dma)) 1733 1730 dma_unmap_single(ctx->jrdev, ctx->sh_desc_finup_dma, 1734 1731 desc_bytes(ctx->sh_desc_finup), DMA_TO_DEVICE); 1732 + 1733 + caam_jr_free(ctx->jrdev); 1735 1734 } 1736 1735 1737 1736 static void __exit 
caam_algapi_hash_exit(void) 1738 1737 { 1739 - struct device_node *dev_node; 1740 - struct platform_device *pdev; 1741 - struct device *ctrldev; 1742 - struct caam_drv_private *priv; 1743 1738 struct caam_hash_alg *t_alg, *n; 1744 1739 1745 - dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec-v4.0"); 1746 - if (!dev_node) { 1747 - dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec4.0"); 1748 - if (!dev_node) 1749 - return; 1750 - } 1751 - 1752 - pdev = of_find_device_by_node(dev_node); 1753 - if (!pdev) 1740 + if (!hash_list.next) 1754 1741 return; 1755 1742 1756 - ctrldev = &pdev->dev; 1757 - of_node_put(dev_node); 1758 - priv = dev_get_drvdata(ctrldev); 1759 - 1760 - if (!priv->hash_list.next) 1761 - return; 1762 - 1763 - list_for_each_entry_safe(t_alg, n, &priv->hash_list, entry) { 1743 + list_for_each_entry_safe(t_alg, n, &hash_list, entry) { 1764 1744 crypto_unregister_ahash(&t_alg->ahash_alg); 1765 1745 list_del(&t_alg->entry); 1766 1746 kfree(t_alg); ··· 1751 1765 } 1752 1766 1753 1767 static struct caam_hash_alg * 1754 - caam_hash_alloc(struct device *ctrldev, struct caam_hash_template *template, 1768 + caam_hash_alloc(struct caam_hash_template *template, 1755 1769 bool keyed) 1756 1770 { 1757 1771 struct caam_hash_alg *t_alg; ··· 1760 1774 1761 1775 t_alg = kzalloc(sizeof(struct caam_hash_alg), GFP_KERNEL); 1762 1776 if (!t_alg) { 1763 - dev_err(ctrldev, "failed to allocate t_alg\n"); 1777 + pr_err("failed to allocate t_alg\n"); 1764 1778 return ERR_PTR(-ENOMEM); 1765 1779 } 1766 1780 ··· 1791 1805 1792 1806 t_alg->alg_type = template->alg_type; 1793 1807 t_alg->alg_op = template->alg_op; 1794 - t_alg->ctrldev = ctrldev; 1795 1808 1796 1809 return t_alg; 1797 1810 } 1798 1811 1799 1812 static int __init caam_algapi_hash_init(void) 1800 1813 { 1801 - struct device_node *dev_node; 1802 - struct platform_device *pdev; 1803 - struct device *ctrldev; 1804 - struct caam_drv_private *priv; 1805 1814 int i = 0, err = 0; 1806 1815 1807 - dev_node = 
of_find_compatible_node(NULL, NULL, "fsl,sec-v4.0"); 1808 - if (!dev_node) { 1809 - dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec4.0"); 1810 - if (!dev_node) 1811 - return -ENODEV; 1812 - } 1813 - 1814 - pdev = of_find_device_by_node(dev_node); 1815 - if (!pdev) 1816 - return -ENODEV; 1817 - 1818 - ctrldev = &pdev->dev; 1819 - priv = dev_get_drvdata(ctrldev); 1820 - of_node_put(dev_node); 1821 - 1822 - INIT_LIST_HEAD(&priv->hash_list); 1823 - 1824 - atomic_set(&priv->tfm_count, -1); 1816 + INIT_LIST_HEAD(&hash_list); 1825 1817 1826 1818 /* register crypto algorithms the device supports */ 1827 1819 for (i = 0; i < ARRAY_SIZE(driver_hash); i++) { ··· 1807 1843 struct caam_hash_alg *t_alg; 1808 1844 1809 1845 /* register hmac version */ 1810 - t_alg = caam_hash_alloc(ctrldev, &driver_hash[i], true); 1846 + t_alg = caam_hash_alloc(&driver_hash[i], true); 1811 1847 if (IS_ERR(t_alg)) { 1812 1848 err = PTR_ERR(t_alg); 1813 - dev_warn(ctrldev, "%s alg allocation failed\n", 1814 - driver_hash[i].driver_name); 1849 + pr_warn("%s alg allocation failed\n", 1850 + driver_hash[i].driver_name); 1815 1851 continue; 1816 1852 } 1817 1853 1818 1854 err = crypto_register_ahash(&t_alg->ahash_alg); 1819 1855 if (err) { 1820 - dev_warn(ctrldev, "%s alg registration failed\n", 1856 + pr_warn("%s alg registration failed\n", 1821 1857 t_alg->ahash_alg.halg.base.cra_driver_name); 1822 1858 kfree(t_alg); 1823 1859 } else 1824 - list_add_tail(&t_alg->entry, &priv->hash_list); 1860 + list_add_tail(&t_alg->entry, &hash_list); 1825 1861 1826 1862 /* register unkeyed version */ 1827 - t_alg = caam_hash_alloc(ctrldev, &driver_hash[i], false); 1863 + t_alg = caam_hash_alloc(&driver_hash[i], false); 1828 1864 if (IS_ERR(t_alg)) { 1829 1865 err = PTR_ERR(t_alg); 1830 - dev_warn(ctrldev, "%s alg allocation failed\n", 1831 - driver_hash[i].driver_name); 1866 + pr_warn("%s alg allocation failed\n", 1867 + driver_hash[i].driver_name); 1832 1868 continue; 1833 1869 } 1834 1870 1835 1871 err = 
crypto_register_ahash(&t_alg->ahash_alg); 1836 1872 if (err) { 1837 - dev_warn(ctrldev, "%s alg registration failed\n", 1873 + pr_warn("%s alg registration failed\n", 1838 1874 t_alg->ahash_alg.halg.base.cra_driver_name); 1839 1875 kfree(t_alg); 1840 1876 } else 1841 - list_add_tail(&t_alg->entry, &priv->hash_list); 1877 + list_add_tail(&t_alg->entry, &hash_list); 1842 1878 } 1843 1879 1844 1880 return err;
+8 -19
drivers/crypto/caam/caamrng.c
··· 273 273 274 274 static void __exit caam_rng_exit(void) 275 275 { 276 + caam_jr_free(rng_ctx.jrdev); 276 277 hwrng_unregister(&caam_rng); 277 278 } 278 279 279 280 static int __init caam_rng_init(void) 280 281 { 281 - struct device_node *dev_node; 282 - struct platform_device *pdev; 283 - struct device *ctrldev; 284 - struct caam_drv_private *priv; 282 + struct device *dev; 285 283 286 - dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec-v4.0"); 287 - if (!dev_node) { 288 - dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec4.0"); 289 - if (!dev_node) 290 - return -ENODEV; 284 + dev = caam_jr_alloc(); 285 + if (IS_ERR(dev)) { 286 + pr_err("Job Ring Device allocation for transform failed\n"); 287 + return PTR_ERR(dev); 291 288 } 292 289 293 - pdev = of_find_device_by_node(dev_node); 294 - if (!pdev) 295 - return -ENODEV; 290 + caam_init_rng(&rng_ctx, dev); 296 291 297 - ctrldev = &pdev->dev; 298 - priv = dev_get_drvdata(ctrldev); 299 - of_node_put(dev_node); 300 - 301 - caam_init_rng(&rng_ctx, priv->jrdev[0]); 302 - 303 - dev_info(priv->jrdev[0], "registering rng-caam\n"); 292 + dev_info(dev, "registering rng-caam\n"); 304 293 return hwrng_register(&caam_rng); 305 294 } 306 295
+332 -88
drivers/crypto/caam/ctrl.c
··· 16 16 #include "error.h" 17 17 #include "ctrl.h" 18 18 19 - static int caam_remove(struct platform_device *pdev) 20 - { 21 - struct device *ctrldev; 22 - struct caam_drv_private *ctrlpriv; 23 - struct caam_drv_private_jr *jrpriv; 24 - struct caam_full __iomem *topregs; 25 - int ring, ret = 0; 26 - 27 - ctrldev = &pdev->dev; 28 - ctrlpriv = dev_get_drvdata(ctrldev); 29 - topregs = (struct caam_full __iomem *)ctrlpriv->ctrl; 30 - 31 - /* shut down JobRs */ 32 - for (ring = 0; ring < ctrlpriv->total_jobrs; ring++) { 33 - ret |= caam_jr_shutdown(ctrlpriv->jrdev[ring]); 34 - jrpriv = dev_get_drvdata(ctrlpriv->jrdev[ring]); 35 - irq_dispose_mapping(jrpriv->irq); 36 - } 37 - 38 - /* Shut down debug views */ 39 - #ifdef CONFIG_DEBUG_FS 40 - debugfs_remove_recursive(ctrlpriv->dfs_root); 41 - #endif 42 - 43 - /* Unmap controller region */ 44 - iounmap(&topregs->ctrl); 45 - 46 - kfree(ctrlpriv->jrdev); 47 - kfree(ctrlpriv); 48 - 49 - return ret; 50 - } 51 - 52 19 /* 53 20 * Descriptor to instantiate RNG State Handle 0 in normal mode and 54 21 * load the JDKEK, TDKEK and TDSK registers 55 22 */ 56 - static void build_instantiation_desc(u32 *desc) 23 + static void build_instantiation_desc(u32 *desc, int handle, int do_sk) 57 24 { 58 - u32 *jump_cmd; 25 + u32 *jump_cmd, op_flags; 59 26 60 27 init_job_desc(desc, 0); 61 28 29 + op_flags = OP_TYPE_CLASS1_ALG | OP_ALG_ALGSEL_RNG | 30 + (handle << OP_ALG_AAI_SHIFT) | OP_ALG_AS_INIT; 31 + 62 32 /* INIT RNG in non-test mode */ 63 - append_operation(desc, OP_TYPE_CLASS1_ALG | OP_ALG_ALGSEL_RNG | 64 - OP_ALG_AS_INIT); 33 + append_operation(desc, op_flags); 65 34 66 - /* wait for done */ 67 - jump_cmd = append_jump(desc, JUMP_CLASS_CLASS1); 68 - set_jump_tgt_here(desc, jump_cmd); 35 + if (!handle && do_sk) { 36 + /* 37 + * For SH0, Secure Keys must be generated as well 38 + */ 69 39 70 - /* 71 - * load 1 to clear written reg: 72 - * resets the done interrupt and returns the RNG to idle. 
73 - */ 74 - append_load_imm_u32(desc, 1, LDST_SRCDST_WORD_CLRW); 40 + /* wait for done */ 41 + jump_cmd = append_jump(desc, JUMP_CLASS_CLASS1); 42 + set_jump_tgt_here(desc, jump_cmd); 75 43 76 - /* generate secure keys (non-test) */ 77 - append_operation(desc, OP_TYPE_CLASS1_ALG | OP_ALG_ALGSEL_RNG | 44 + /* 45 + * load 1 to clear written reg: 46 + * resets the done interrupt and returns the RNG to idle. 47 + */ 48 + append_load_imm_u32(desc, 1, LDST_SRCDST_WORD_CLRW); 78 - OP_ALG_RNG4_SK); 49 + 50 + /* Initialize State Handle */ 51 + append_operation(desc, OP_TYPE_CLASS1_ALG | OP_ALG_ALGSEL_RNG | 52 + OP_ALG_AAI_RNG4_SK); 53 + } 54 + 55 + append_jump(desc, JUMP_CLASS_CLASS1 | JUMP_TYPE_HALT); 79 56 } 80 57 81 - static int instantiate_rng(struct device *ctrldev) 58 + /* Descriptor for deinstantiation of State Handle 0 of the RNG block. */ 59 + static void build_deinstantiation_desc(u32 *desc, int handle) 60 + { 61 + init_job_desc(desc, 0); 62 + 63 + /* Uninstantiate State Handle 0 */ 64 + append_operation(desc, OP_TYPE_CLASS1_ALG | OP_ALG_ALGSEL_RNG | 65 + (handle << OP_ALG_AAI_SHIFT) | OP_ALG_AS_INITFINAL); 66 + 67 + append_jump(desc, JUMP_CLASS_CLASS1 | JUMP_TYPE_HALT); 68 + } 69 + 70 + /* 71 + * run_descriptor_deco0 - runs a descriptor on DECO0, under direct control of 72 + * the software (no JR/QI used). 
73 + * @ctrldev - pointer to device 74 + * @status - descriptor status, after being run 75 + * 76 + * Return: - 0 if no error occurred 77 + * - -ENODEV if the DECO couldn't be acquired 78 + * - -EAGAIN if an error occurred while executing the descriptor 79 + */ 80 + static inline int run_descriptor_deco0(struct device *ctrldev, u32 *desc, 81 + u32 *status) 82 82 { 83 83 struct caam_drv_private *ctrlpriv = dev_get_drvdata(ctrldev); 84 84 struct caam_full __iomem *topregs; 85 85 unsigned int timeout = 100000; 86 - u32 *desc; 87 - int i, ret = 0; 88 - 89 - desc = kmalloc(CAAM_CMD_SZ * 6, GFP_KERNEL | GFP_DMA); 90 - if (!desc) { 91 - dev_err(ctrldev, "can't allocate RNG init descriptor memory\n"); 92 - return -ENOMEM; 93 - } 94 - build_instantiation_desc(desc); 86 + u32 deco_dbg_reg, flags; 87 + int i; 95 88 96 89 /* Set the bit to request direct access to DECO0 */ 97 90 topregs = (struct caam_full __iomem *)ctrlpriv->ctrl; ··· 96 103 97 104 if (!timeout) { 98 105 dev_err(ctrldev, "failed to acquire DECO 0\n"); 99 - ret = -EIO; 100 - goto out; 106 + clrbits32(&topregs->ctrl.deco_rq, DECORR_RQD0ENABLE); 107 + return -ENODEV; 101 108 } 102 109 103 110 for (i = 0; i < desc_len(desc); i++) 104 - topregs->deco.descbuf[i] = *(desc + i); 111 + wr_reg32(&topregs->deco.descbuf[i], *(desc + i)); 105 112 106 - wr_reg32(&topregs->deco.jr_ctl_hi, DECO_JQCR_WHL | DECO_JQCR_FOUR); 113 + flags = DECO_JQCR_WHL; 114 + /* 115 + * If the descriptor length is longer than 4 words, then the 116 + * FOUR bit in JRCTRL register must be set. 
117 + */ 118 + if (desc_len(desc) >= 4) 119 + flags |= DECO_JQCR_FOUR; 120 + 121 + /* Instruct the DECO to execute it */ 122 + wr_reg32(&topregs->deco.jr_ctl_hi, flags); 107 123 108 124 timeout = 10000000; 109 - while ((rd_reg32(&topregs->deco.desc_dbg) & DECO_DBG_VALID) && 110 - --timeout) 125 + do { 126 + deco_dbg_reg = rd_reg32(&topregs->deco.desc_dbg); 127 + /* 128 + * If an error occurred in the descriptor, then 129 + * the DECO status field will be set to 0x0D 130 + */ 131 + if ((deco_dbg_reg & DESC_DBG_DECO_STAT_MASK) == 132 + DESC_DBG_DECO_STAT_HOST_ERR) 133 + break; 111 134 cpu_relax(); 135 + } while ((deco_dbg_reg & DESC_DBG_DECO_STAT_VALID) && --timeout); 112 136 113 - if (!timeout) { 114 - dev_err(ctrldev, "failed to instantiate RNG\n"); 115 - ret = -EIO; 137 + *status = rd_reg32(&topregs->deco.op_status_hi) & 138 + DECO_OP_STATUS_HI_ERR_MASK; 139 + 140 + /* Mark the DECO as free */ 141 + clrbits32(&topregs->ctrl.deco_rq, DECORR_RQD0ENABLE); 142 + 143 + if (!timeout) 144 + return -EAGAIN; 145 + 146 + return 0; 147 + } 148 + 149 + /* 150 + * instantiate_rng - builds and executes a descriptor on DECO0, 151 + * which initializes the RNG block. 152 + * @ctrldev - pointer to device 153 + * @state_handle_mask - bitmask containing the instantiation status 154 + * for the RNG4 state handles which exist in 155 + * the RNG4 block: 1 if it's been instantiated 156 + * by an external entry, 0 otherwise. 157 + * @gen_sk - generate data to be loaded into the JDKEK, TDKEK and TDSK; 158 + * Caution: this can be done only once; if the keys need to be 159 + * regenerated, a POR is required 160 + * 161 + * Return: - 0 if no error occurred 162 + * - -ENOMEM if there isn't enough memory to allocate the descriptor 163 + * - -ENODEV if DECO0 couldn't be acquired 164 + * - -EAGAIN if an error occurred when executing the descriptor 165 + * f.i. there was a RNG hardware error due to not "good enough" 166 + * entropy being acquired. 
167 + */ 168 + static int instantiate_rng(struct device *ctrldev, int state_handle_mask, 169 + int gen_sk) 170 + { 171 + struct caam_drv_private *ctrlpriv = dev_get_drvdata(ctrldev); 172 + struct caam_full __iomem *topregs; 173 + struct rng4tst __iomem *r4tst; 174 + u32 *desc, status, rdsta_val; 175 + int ret = 0, sh_idx; 176 + 177 + topregs = (struct caam_full __iomem *)ctrlpriv->ctrl; 178 + r4tst = &topregs->ctrl.r4tst[0]; 179 + 180 + desc = kmalloc(CAAM_CMD_SZ * 7, GFP_KERNEL); 181 + if (!desc) 182 + return -ENOMEM; 183 + 184 + for (sh_idx = 0; sh_idx < RNG4_MAX_HANDLES; sh_idx++) { 185 + /* 186 + * If the corresponding bit is set, this state handle 187 + * was initialized by somebody else, so it's left alone. 188 + */ 189 + if ((1 << sh_idx) & state_handle_mask) 190 + continue; 191 + 192 + /* Create the descriptor for instantiating RNG State Handle */ 193 + build_instantiation_desc(desc, sh_idx, gen_sk); 194 + 195 + /* Try to run it through DECO0 */ 196 + ret = run_descriptor_deco0(ctrldev, desc, &status); 197 + 198 + /* 199 + * If ret is not 0, or descriptor status is not 0, then 200 + * something went wrong. No need to try the next state 201 + * handle (if available), bail out here. 202 + * Also, if for some reason, the State Handle didn't get 203 + * instantiated although the descriptor has finished 204 + * without any error (HW optimizations for later 205 + * CAAM eras), then try again. 
206 + */ 207 + rdsta_val = 208 + rd_reg32(&topregs->ctrl.r4tst[0].rdsta) & RDSTA_IFMASK; 209 + if (status || !(rdsta_val & (1 << sh_idx))) 210 + ret = -EAGAIN; 211 + if (ret) 212 + break; 213 + 214 + dev_info(ctrldev, "Instantiated RNG4 SH%d\n", sh_idx); 215 + /* Clear the contents before recreating the descriptor */ 216 + memset(desc, 0x00, CAAM_CMD_SZ * 7); 116 217 } 117 218 118 - clrbits32(&topregs->ctrl.deco_rq, DECORR_RQD0ENABLE); 119 - out: 120 219 kfree(desc); 220 + 121 221 return ret; 122 222 } 123 223 124 224 /* 125 - * By default, the TRNG runs for 200 clocks per sample; 126 - * 1600 clocks per sample generates better entropy. 225 + * deinstantiate_rng - builds and executes a descriptor on DECO0, 226 + * which deinitializes the RNG block. 227 + * @ctrldev - pointer to device 228 + * @state_handle_mask - bitmask containing the instantiation status 229 + * for the RNG4 state handles which exist in 230 + * the RNG4 block: 1 if it's been instantiated 231 + * 232 + * Return: - 0 if no error occurred 233 + * - -ENOMEM if there isn't enough memory to allocate the descriptor 234 + * - -ENODEV if DECO0 couldn't be acquired 235 + * - -EAGAIN if an error occurred when executing the descriptor 127 236 */ 128 - static void kick_trng(struct platform_device *pdev) 237 + static int deinstantiate_rng(struct device *ctrldev, int state_handle_mask) 238 + { 239 + u32 *desc, status; 240 + int sh_idx, ret = 0; 241 + 242 + desc = kmalloc(CAAM_CMD_SZ * 3, GFP_KERNEL); 243 + if (!desc) 244 + return -ENOMEM; 245 + 246 + for (sh_idx = 0; sh_idx < RNG4_MAX_HANDLES; sh_idx++) { 247 + /* 248 + * If the corresponding bit is set, then it means the state 249 + * handle was initialized by us, and thus it needs to be 250 + * deinitialized as well 251 + */ 252 + if ((1 << sh_idx) & state_handle_mask) { 253 + /* 254 + * Create the descriptor for deinstantiating this state 255 + * handle 256 + */ 257 + build_deinstantiation_desc(desc, sh_idx); 258 + 259 + /* Try to run it through DECO0 */ 260 + 
ret = run_descriptor_deco0(ctrldev, desc, &status); 261 + 262 + if (ret || status) { 263 + dev_err(ctrldev, 264 + "Failed to deinstantiate RNG4 SH%d\n", 265 + sh_idx); 266 + break; 267 + } 268 + dev_info(ctrldev, "Deinstantiated RNG4 SH%d\n", sh_idx); 269 + } 270 + } 271 + 272 + kfree(desc); 273 + 274 + return ret; 275 + } 276 + 277 + static int caam_remove(struct platform_device *pdev) 278 + { 279 + struct device *ctrldev; 280 + struct caam_drv_private *ctrlpriv; 281 + struct caam_full __iomem *topregs; 282 + int ring, ret = 0; 283 + 284 + ctrldev = &pdev->dev; 285 + ctrlpriv = dev_get_drvdata(ctrldev); 286 + topregs = (struct caam_full __iomem *)ctrlpriv->ctrl; 287 + 288 + /* Remove platform devices for JobRs */ 289 + for (ring = 0; ring < ctrlpriv->total_jobrs; ring++) { 290 + if (ctrlpriv->jrpdev[ring]) 291 + of_device_unregister(ctrlpriv->jrpdev[ring]); 292 + } 293 + 294 + /* De-initialize RNG state handles initialized by this driver. */ 295 + if (ctrlpriv->rng4_sh_init) 296 + deinstantiate_rng(ctrldev, ctrlpriv->rng4_sh_init); 297 + 298 + /* Shut down debug views */ 299 + #ifdef CONFIG_DEBUG_FS 300 + debugfs_remove_recursive(ctrlpriv->dfs_root); 301 + #endif 302 + 303 + /* Unmap controller region */ 304 + iounmap(&topregs->ctrl); 305 + 306 + kfree(ctrlpriv->jrpdev); 307 + kfree(ctrlpriv); 308 + 309 + return ret; 310 + } 311 + 312 + /* 313 + * kick_trng - sets the various parameters for enabling the initialization 314 + * of the RNG4 block in CAAM 315 + * @pdev - pointer to the platform device 316 + * @ent_delay - Defines the length (in system clocks) of each entropy sample. 
317 + */ 318 + static void kick_trng(struct platform_device *pdev, int ent_delay) 129 319 { 130 320 struct device *ctrldev = &pdev->dev; 131 321 struct caam_drv_private *ctrlpriv = dev_get_drvdata(ctrldev); ··· 321 145 322 146 /* put RNG4 into program mode */ 323 147 setbits32(&r4tst->rtmctl, RTMCTL_PRGM); 324 - /* 1600 clocks per sample */ 148 + 149 + /* 150 + * Performance-wise, it does not make sense to 151 + * set the delay to a value that is lower 152 + * than the last one that worked (i.e. the state handles 153 + * were instantiated properly). Thus, instead of wasting 154 + * time trying to set the values controlling the sample 155 + * frequency, the function simply returns. 156 + */ 157 + val = (rd_reg32(&r4tst->rtsdctl) & RTSDCTL_ENT_DLY_MASK) 158 + >> RTSDCTL_ENT_DLY_SHIFT; 159 + if (ent_delay <= val) { 160 + /* put RNG4 into run mode */ 161 + clrbits32(&r4tst->rtmctl, RTMCTL_PRGM); 162 + return; 163 + } 164 + 325 165 val = rd_reg32(&r4tst->rtsdctl); 326 - val = (val & ~RTSDCTL_ENT_DLY_MASK) | (1600 << RTSDCTL_ENT_DLY_SHIFT); 166 + val = (val & ~RTSDCTL_ENT_DLY_MASK) | 167 + (ent_delay << RTSDCTL_ENT_DLY_SHIFT); 327 168 wr_reg32(&r4tst->rtsdctl, val); 328 - /* min. freq. count */ 329 - wr_reg32(&r4tst->rtfrqmin, 400); 330 - /* max. freq. count */ 331 - wr_reg32(&r4tst->rtfrqmax, 6400); 169 + /* min. freq. count, equal to 1/4 of the entropy sample length */ 170 + wr_reg32(&r4tst->rtfrqmin, ent_delay >> 2); 171 + /* max. freq. 
count, equal to 8 times the entropy sample length */ 172 + wr_reg32(&r4tst->rtfrqmax, ent_delay << 3); 332 173 /* put RNG4 into run mode */ 333 174 clrbits32(&r4tst->rtmctl, RTMCTL_PRGM); 334 175 } ··· 386 193 /* Probe routine for CAAM top (controller) level */ 387 194 static int caam_probe(struct platform_device *pdev) 388 195 { 389 - int ret, ring, rspec; 196 + int ret, ring, rspec, gen_sk, ent_delay = RTSDCTL_ENT_DLY_MIN; 390 197 u64 caam_id; 391 198 struct device *dev; 392 199 struct device_node *nprop, *np; ··· 451 258 rspec++; 452 259 } 453 260 454 - ctrlpriv->jrdev = kzalloc(sizeof(struct device *) * rspec, GFP_KERNEL); 455 - if (ctrlpriv->jrdev == NULL) { 261 + ctrlpriv->jrpdev = kzalloc(sizeof(struct platform_device *) * rspec, 262 + GFP_KERNEL); 263 + if (ctrlpriv->jrpdev == NULL) { 456 264 iounmap(&topregs->ctrl); 457 265 return -ENOMEM; 458 266 } ··· 461 267 ring = 0; 462 268 ctrlpriv->total_jobrs = 0; 463 269 for_each_compatible_node(np, NULL, "fsl,sec-v4.0-job-ring") { 464 - caam_jr_probe(pdev, np, ring); 270 + ctrlpriv->jrpdev[ring] = 271 + of_platform_device_create(np, NULL, dev); 272 + if (!ctrlpriv->jrpdev[ring]) { 273 + pr_warn("JR%d Platform device creation error\n", ring); 274 + continue; 275 + } 465 276 ctrlpriv->total_jobrs++; 466 277 ring++; 467 278 } 468 279 if (!ring) { 469 280 for_each_compatible_node(np, NULL, "fsl,sec4.0-job-ring") { 470 - caam_jr_probe(pdev, np, ring); 281 + ctrlpriv->jrpdev[ring] = 282 + of_platform_device_create(np, NULL, dev); 283 + if (!ctrlpriv->jrpdev[ring]) { 284 + pr_warn("JR%d Platform device creation error\n", 285 + ring); 286 + continue; 287 + } 471 288 ctrlpriv->total_jobrs++; 472 289 ring++; 473 290 } ··· 504 299 505 300 /* 506 301 * If SEC has RNG version >= 4 and RNG state handle has not been 507 - * already instantiated ,do RNG instantiation 302 + * already instantiated, do RNG instantiation 508 303 */ 509 - if ((cha_vid & CHA_ID_RNG_MASK) >> CHA_ID_RNG_SHIFT >= 4 && 510 - 
!(rd_reg32(&topregs->ctrl.r4tst[0].rdsta) & RDSTA_IF0)) { 511 - kick_trng(pdev); 512 - ret = instantiate_rng(dev); 304 + if ((cha_vid & CHA_ID_RNG_MASK) >> CHA_ID_RNG_SHIFT >= 4) { 305 + ctrlpriv->rng4_sh_init = 306 + rd_reg32(&topregs->ctrl.r4tst[0].rdsta); 307 + /* 308 + * If the secure keys (TDKEK, JDKEK, TDSK) were already 309 + * generated, signal this to the function that is instantiating 310 + * the state handles. An error would occur if RNG4 attempts 311 + * to regenerate these keys before the next POR. 312 + */ 313 + gen_sk = ctrlpriv->rng4_sh_init & RDSTA_SKVN ? 0 : 1; 314 + ctrlpriv->rng4_sh_init &= RDSTA_IFMASK; 315 + do { 316 + int inst_handles = 317 + rd_reg32(&topregs->ctrl.r4tst[0].rdsta) & 318 + RDSTA_IFMASK; 319 + /* 320 + * If either SH were instantiated by somebody else 321 + * (e.g. u-boot) then it is assumed that the entropy 322 + * parameters are properly set and thus the function 323 + * setting these (kick_trng(...)) is skipped. 324 + * Also, if a handle was instantiated, do not change 325 + * the TRNG parameters. 326 + */ 327 + if (!(ctrlpriv->rng4_sh_init || inst_handles)) { 328 + kick_trng(pdev, ent_delay); 329 + ent_delay += 400; 330 + } 331 + /* 332 + * if instantiate_rng(...) fails, the loop will rerun 333 + * and the kick_trng(...) function will modify the 334 + * upper and lower limits of the entropy sampling 335 + * interval, leading to a successful initialization of 336 + * the RNG. 
337 + */ 338 + ret = instantiate_rng(dev, inst_handles, 339 + gen_sk); 340 + } while ((ret == -EAGAIN) && (ent_delay < RTSDCTL_ENT_DLY_MAX)); 513 341 if (ret) { 342 + dev_err(dev, "failed to instantiate RNG"); 514 343 caam_remove(pdev); 515 344 return ret; 516 345 } 346 + /* 347 + * Set handles init'ed by this module as the complement of the 348 + * already initialized ones 349 + */ 350 + ctrlpriv->rng4_sh_init = ~ctrlpriv->rng4_sh_init & RDSTA_IFMASK; 517 351 518 352 /* Enable RDB bit so that RNG works faster */ 519 353 setbits32(&topregs->ctrl.scfgr, SCFGR_RDBENABLE);
+9 -8
drivers/crypto/caam/desc.h
··· 1155 1155 1156 1156 /* randomizer AAI set */ 1157 1157 #define OP_ALG_AAI_RNG (0x00 << OP_ALG_AAI_SHIFT) 1158 - #define OP_ALG_AAI_RNG_NOZERO (0x10 << OP_ALG_AAI_SHIFT) 1159 - #define OP_ALG_AAI_RNG_ODD (0x20 << OP_ALG_AAI_SHIFT) 1158 + #define OP_ALG_AAI_RNG_NZB (0x10 << OP_ALG_AAI_SHIFT) 1159 + #define OP_ALG_AAI_RNG_OBP (0x20 << OP_ALG_AAI_SHIFT) 1160 + 1161 + /* RNG4 AAI set */ 1162 + #define OP_ALG_AAI_RNG4_SH_0 (0x00 << OP_ALG_AAI_SHIFT) 1163 + #define OP_ALG_AAI_RNG4_SH_1 (0x01 << OP_ALG_AAI_SHIFT) 1164 + #define OP_ALG_AAI_RNG4_PS (0x40 << OP_ALG_AAI_SHIFT) 1165 + #define OP_ALG_AAI_RNG4_AI (0x80 << OP_ALG_AAI_SHIFT) 1166 + #define OP_ALG_AAI_RNG4_SK (0x100 << OP_ALG_AAI_SHIFT) 1160 1167 1161 1168 /* hmac/smac AAI set */ 1162 1169 #define OP_ALG_AAI_HASH (0x00 << OP_ALG_AAI_SHIFT) ··· 1184 1177 #define OP_ALG_AAI_F9 (0xc8 << OP_ALG_AAI_SHIFT) 1185 1178 #define OP_ALG_AAI_GSM (0x10 << OP_ALG_AAI_SHIFT) 1186 1179 #define OP_ALG_AAI_EDGE (0x20 << OP_ALG_AAI_SHIFT) 1187 - 1188 - /* RNG4 set */ 1189 - #define OP_ALG_RNG4_SHIFT 4 1190 - #define OP_ALG_RNG4_MASK (0x1f3 << OP_ALG_RNG4_SHIFT) 1191 - 1192 - #define OP_ALG_RNG4_SK (0x100 << OP_ALG_RNG4_SHIFT) 1193 1180 1194 1181 #define OP_ALG_AS_SHIFT 2 1195 1182 #define OP_ALG_AS_MASK (0x3 << OP_ALG_AS_SHIFT)
+11 -9
drivers/crypto/caam/intern.h
··· 37 37 38 38 /* Private sub-storage for a single JobR */ 39 39 struct caam_drv_private_jr { 40 - struct device *parentdev; /* points back to controller dev */ 41 - struct platform_device *jr_pdev;/* points to platform device for JR */ 40 + struct list_head list_node; /* Job Ring device list */ 41 + struct device *dev; 42 42 int ridx; 43 43 struct caam_job_ring __iomem *rregs; /* JobR's register space */ 44 44 struct tasklet_struct irqtask; 45 45 int irq; /* One per queue */ 46 + 47 + /* Number of scatterlist crypt transforms active on the JobR */ 48 + atomic_t tfm_count ____cacheline_aligned; 46 49 47 50 /* Job ring info */ 48 51 int ringsize; /* Size of rings (assume input = output) */ ··· 66 63 struct caam_drv_private { 67 64 68 65 struct device *dev; 69 - struct device **jrdev; /* Alloc'ed array per sub-device */ 66 + struct platform_device **jrpdev; /* Alloc'ed array per sub-device */ 70 67 struct platform_device *pdev; 71 68 72 69 /* Physical-presence section */ ··· 83 80 u8 qi_present; /* Nonzero if QI present in device */ 84 81 int secvio_irq; /* Security violation interrupt number */ 85 82 86 - /* which jr allocated to scatterlist crypto */ 87 - atomic_t tfm_count ____cacheline_aligned; 88 - /* list of registered crypto algorithms (mk generic context handle?) */ 89 - struct list_head alg_list; 90 - /* list of registered hash algorithms (mk generic context handle?) */ 91 - struct list_head hash_list; 83 + #define RNG4_MAX_HANDLES 2 84 + /* RNG4 block */ 85 + u32 rng4_sh_init; /* This bitmap shows which of the State 86 + Handles of the RNG4 block are initialized 87 + by this driver */ 92 88 93 89 /* 94 90 * debugfs entries for developer view into driver/device
+233 -110
drivers/crypto/caam/jr.c
··· 13 13 #include "desc.h" 14 14 #include "intern.h" 15 15 16 + struct jr_driver_data { 17 + /* List of Physical JobR's with the Driver */ 18 + struct list_head jr_list; 19 + spinlock_t jr_alloc_lock; /* jr_list lock */ 20 + } ____cacheline_aligned; 21 + 22 + static struct jr_driver_data driver_data; 23 + 24 + static int caam_reset_hw_jr(struct device *dev) 25 + { 26 + struct caam_drv_private_jr *jrp = dev_get_drvdata(dev); 27 + unsigned int timeout = 100000; 28 + 29 + /* 30 + * mask interrupts since we are going to poll 31 + * for reset completion status 32 + */ 33 + setbits32(&jrp->rregs->rconfig_lo, JRCFG_IMSK); 34 + 35 + /* initiate flush (required prior to reset) */ 36 + wr_reg32(&jrp->rregs->jrcommand, JRCR_RESET); 37 + while (((rd_reg32(&jrp->rregs->jrintstatus) & JRINT_ERR_HALT_MASK) == 38 + JRINT_ERR_HALT_INPROGRESS) && --timeout) 39 + cpu_relax(); 40 + 41 + if ((rd_reg32(&jrp->rregs->jrintstatus) & JRINT_ERR_HALT_MASK) != 42 + JRINT_ERR_HALT_COMPLETE || timeout == 0) { 43 + dev_err(dev, "failed to flush job ring %d\n", jrp->ridx); 44 + return -EIO; 45 + } 46 + 47 + /* initiate reset */ 48 + timeout = 100000; 49 + wr_reg32(&jrp->rregs->jrcommand, JRCR_RESET); 50 + while ((rd_reg32(&jrp->rregs->jrcommand) & JRCR_RESET) && --timeout) 51 + cpu_relax(); 52 + 53 + if (timeout == 0) { 54 + dev_err(dev, "failed to reset job ring %d\n", jrp->ridx); 55 + return -EIO; 56 + } 57 + 58 + /* unmask interrupts */ 59 + clrbits32(&jrp->rregs->rconfig_lo, JRCFG_IMSK); 60 + 61 + return 0; 62 + } 63 + 64 + /* 65 + * Shutdown JobR independent of platform property code 66 + */ 67 + int caam_jr_shutdown(struct device *dev) 68 + { 69 + struct caam_drv_private_jr *jrp = dev_get_drvdata(dev); 70 + dma_addr_t inpbusaddr, outbusaddr; 71 + int ret; 72 + 73 + ret = caam_reset_hw_jr(dev); 74 + 75 + tasklet_kill(&jrp->irqtask); 76 + 77 + /* Release interrupt */ 78 + free_irq(jrp->irq, dev); 79 + 80 + /* Free rings */ 81 + inpbusaddr = rd_reg64(&jrp->rregs->inpring_base); 82 + outbusaddr 
= rd_reg64(&jrp->rregs->outring_base); 83 + dma_free_coherent(dev, sizeof(dma_addr_t) * JOBR_DEPTH, 84 + jrp->inpring, inpbusaddr); 85 + dma_free_coherent(dev, sizeof(struct jr_outentry) * JOBR_DEPTH, 86 + jrp->outring, outbusaddr); 87 + kfree(jrp->entinfo); 88 + 89 + return ret; 90 + } 91 + 92 + static int caam_jr_remove(struct platform_device *pdev) 93 + { 94 + int ret; 95 + struct device *jrdev; 96 + struct caam_drv_private_jr *jrpriv; 97 + 98 + jrdev = &pdev->dev; 99 + jrpriv = dev_get_drvdata(jrdev); 100 + 101 + /* 102 + * Return EBUSY if job ring already allocated. 103 + */ 104 + if (atomic_read(&jrpriv->tfm_count)) { 105 + dev_err(jrdev, "Device is busy\n"); 106 + return -EBUSY; 107 + } 108 + 109 + /* Remove the node from Physical JobR list maintained by driver */ 110 + spin_lock(&driver_data.jr_alloc_lock); 111 + list_del(&jrpriv->list_node); 112 + spin_unlock(&driver_data.jr_alloc_lock); 113 + 114 + /* Release ring */ 115 + ret = caam_jr_shutdown(jrdev); 116 + if (ret) 117 + dev_err(jrdev, "Failed to shut down job ring\n"); 118 + irq_dispose_mapping(jrpriv->irq); 119 + 120 + return ret; 121 + } 122 + 16 123 /* Main per-ring interrupt handler */ 17 124 static irqreturn_t caam_jr_interrupt(int irq, void *st_dev) 18 125 { ··· 235 128 } 236 129 237 130 /** 131 + * caam_jr_alloc() - Alloc a job ring for someone to use as needed. 132 + * 133 + * returns : pointer to the newly allocated physical 134 + * JobR dev can be written to if successful. 
135 + **/ 136 + struct device *caam_jr_alloc(void) 137 + { 138 + struct caam_drv_private_jr *jrpriv, *min_jrpriv = NULL; 139 + struct device *dev = NULL; 140 + int min_tfm_cnt = INT_MAX; 141 + int tfm_cnt; 142 + 143 + spin_lock(&driver_data.jr_alloc_lock); 144 + 145 + if (list_empty(&driver_data.jr_list)) { 146 + spin_unlock(&driver_data.jr_alloc_lock); 147 + return ERR_PTR(-ENODEV); 148 + } 149 + 150 + list_for_each_entry(jrpriv, &driver_data.jr_list, list_node) { 151 + tfm_cnt = atomic_read(&jrpriv->tfm_count); 152 + if (tfm_cnt < min_tfm_cnt) { 153 + min_tfm_cnt = tfm_cnt; 154 + min_jrpriv = jrpriv; 155 + } 156 + if (!min_tfm_cnt) 157 + break; 158 + } 159 + 160 + if (min_jrpriv) { 161 + atomic_inc(&min_jrpriv->tfm_count); 162 + dev = min_jrpriv->dev; 163 + } 164 + spin_unlock(&driver_data.jr_alloc_lock); 165 + 166 + return dev; 167 + } 168 + EXPORT_SYMBOL(caam_jr_alloc); 169 + 170 + /** 171 + * caam_jr_free() - Free the Job Ring 172 + * @rdev - points to the dev that identifies the Job ring to 173 + * be released. 174 + **/ 175 + void caam_jr_free(struct device *rdev) 176 + { 177 + struct caam_drv_private_jr *jrpriv = dev_get_drvdata(rdev); 178 + 179 + atomic_dec(&jrpriv->tfm_count); 180 + } 181 + EXPORT_SYMBOL(caam_jr_free); 182 + 183 + /** 238 184 * caam_jr_enqueue() - Enqueue a job descriptor head. Returns 0 if OK, 239 185 * -EBUSY if the queue is full, -EIO if it cannot map the caller's 240 186 * descriptor. 
··· 367 207 } 368 208 EXPORT_SYMBOL(caam_jr_enqueue); 369 209 370 - static int caam_reset_hw_jr(struct device *dev) 371 - { 372 - struct caam_drv_private_jr *jrp = dev_get_drvdata(dev); 373 - unsigned int timeout = 100000; 374 - 375 - /* 376 - * mask interrupts since we are going to poll 377 - * for reset completion status 378 - */ 379 - setbits32(&jrp->rregs->rconfig_lo, JRCFG_IMSK); 380 - 381 - /* initiate flush (required prior to reset) */ 382 - wr_reg32(&jrp->rregs->jrcommand, JRCR_RESET); 383 - while (((rd_reg32(&jrp->rregs->jrintstatus) & JRINT_ERR_HALT_MASK) == 384 - JRINT_ERR_HALT_INPROGRESS) && --timeout) 385 - cpu_relax(); 386 - 387 - if ((rd_reg32(&jrp->rregs->jrintstatus) & JRINT_ERR_HALT_MASK) != 388 - JRINT_ERR_HALT_COMPLETE || timeout == 0) { 389 - dev_err(dev, "failed to flush job ring %d\n", jrp->ridx); 390 - return -EIO; 391 - } 392 - 393 - /* initiate reset */ 394 - timeout = 100000; 395 - wr_reg32(&jrp->rregs->jrcommand, JRCR_RESET); 396 - while ((rd_reg32(&jrp->rregs->jrcommand) & JRCR_RESET) && --timeout) 397 - cpu_relax(); 398 - 399 - if (timeout == 0) { 400 - dev_err(dev, "failed to reset job ring %d\n", jrp->ridx); 401 - return -EIO; 402 - } 403 - 404 - /* unmask interrupts */ 405 - clrbits32(&jrp->rregs->rconfig_lo, JRCFG_IMSK); 406 - 407 - return 0; 408 - } 409 - 410 210 /* 411 211 * Init JobR independent of platform property detection 412 212 */ ··· 382 262 383 263 /* Connect job ring interrupt handler. 
*/ 384 264 error = request_irq(jrp->irq, caam_jr_interrupt, IRQF_SHARED, 385 - "caam-jobr", dev); 265 + dev_name(dev), dev); 386 266 if (error) { 387 267 dev_err(dev, "can't connect JobR %d interrupt (%d)\n", 388 268 jrp->ridx, jrp->irq); ··· 438 318 return 0; 439 319 } 440 320 441 - /* 442 - * Shutdown JobR independent of platform property code 443 - */ 444 - int caam_jr_shutdown(struct device *dev) 445 - { 446 - struct caam_drv_private_jr *jrp = dev_get_drvdata(dev); 447 - dma_addr_t inpbusaddr, outbusaddr; 448 - int ret; 449 - 450 - ret = caam_reset_hw_jr(dev); 451 - 452 - tasklet_kill(&jrp->irqtask); 453 - 454 - /* Release interrupt */ 455 - free_irq(jrp->irq, dev); 456 - 457 - /* Free rings */ 458 - inpbusaddr = rd_reg64(&jrp->rregs->inpring_base); 459 - outbusaddr = rd_reg64(&jrp->rregs->outring_base); 460 - dma_free_coherent(dev, sizeof(dma_addr_t) * JOBR_DEPTH, 461 - jrp->inpring, inpbusaddr); 462 - dma_free_coherent(dev, sizeof(struct jr_outentry) * JOBR_DEPTH, 463 - jrp->outring, outbusaddr); 464 - kfree(jrp->entinfo); 465 - of_device_unregister(jrp->jr_pdev); 466 - 467 - return ret; 468 - } 469 321 470 322 /* 471 - * Probe routine for each detected JobR subsystem. It assumes that 472 - * property detection was picked up externally. 323 + * Probe routine for each detected JobR subsystem. 
473 324 */ 474 - int caam_jr_probe(struct platform_device *pdev, struct device_node *np, 475 - int ring) 325 + static int caam_jr_probe(struct platform_device *pdev) 476 326 { 477 - struct device *ctrldev, *jrdev; 478 - struct platform_device *jr_pdev; 479 - struct caam_drv_private *ctrlpriv; 327 + struct device *jrdev; 328 + struct device_node *nprop; 329 + struct caam_job_ring __iomem *ctrl; 480 330 struct caam_drv_private_jr *jrpriv; 481 - u32 *jroffset; 331 + static int total_jobrs; 482 332 int error; 483 333 484 - ctrldev = &pdev->dev; 485 - ctrlpriv = dev_get_drvdata(ctrldev); 486 - 334 + jrdev = &pdev->dev; 487 335 jrpriv = kmalloc(sizeof(struct caam_drv_private_jr), 488 336 GFP_KERNEL); 489 - if (jrpriv == NULL) { 490 - dev_err(ctrldev, "can't alloc private mem for job ring %d\n", 491 - ring); 337 + if (!jrpriv) 338 + return -ENOMEM; 339 + 340 + dev_set_drvdata(jrdev, jrpriv); 341 + 342 + /* save ring identity relative to detection */ 343 + jrpriv->ridx = total_jobrs++; 344 + 345 + nprop = pdev->dev.of_node; 346 + /* Get configuration properties from device tree */ 347 + /* First, get register page */ 348 + ctrl = of_iomap(nprop, 0); 349 + if (!ctrl) { 350 + dev_err(jrdev, "of_iomap() failed\n"); 492 351 return -ENOMEM; 493 352 } 494 - jrpriv->parentdev = ctrldev; /* point back to parent */ 495 - jrpriv->ridx = ring; /* save ring identity relative to detection */ 496 353 497 - /* 498 - * Derive a pointer to the detected JobRs regs 499 - * Driver has already iomapped the entire space, we just 500 - * need to add in the offset to this JobR. 
Don't know if I 501 - * like this long-term, but it'll run 502 - */ 503 - jroffset = (u32 *)of_get_property(np, "reg", NULL); 504 - jrpriv->rregs = (struct caam_job_ring __iomem *)((void *)ctrlpriv->ctrl 505 - + *jroffset); 506 - 507 - /* Build a local dev for each detected queue */ 508 - jr_pdev = of_platform_device_create(np, NULL, ctrldev); 509 - if (jr_pdev == NULL) { 510 - kfree(jrpriv); 511 - return -EINVAL; 512 - } 513 - 514 - jrpriv->jr_pdev = jr_pdev; 515 - jrdev = &jr_pdev->dev; 516 - dev_set_drvdata(jrdev, jrpriv); 517 - ctrlpriv->jrdev[ring] = jrdev; 354 + jrpriv->rregs = (struct caam_job_ring __force *)ctrl; 518 355 519 356 if (sizeof(dma_addr_t) == sizeof(u64)) 520 - if (of_device_is_compatible(np, "fsl,sec-v5.0-job-ring")) 357 + if (of_device_is_compatible(nprop, "fsl,sec-v5.0-job-ring")) 521 358 dma_set_mask(jrdev, DMA_BIT_MASK(40)); 522 359 else 523 360 dma_set_mask(jrdev, DMA_BIT_MASK(36)); ··· 482 405 dma_set_mask(jrdev, DMA_BIT_MASK(32)); 483 406 484 407 /* Identify the interrupt */ 485 - jrpriv->irq = irq_of_parse_and_map(np, 0); 408 + jrpriv->irq = irq_of_parse_and_map(nprop, 0); 486 409 487 410 /* Now do the platform independent part */ 488 411 error = caam_jr_init(jrdev); /* now turn on hardware */ 489 412 if (error) { 490 - of_device_unregister(jr_pdev); 491 413 kfree(jrpriv); 492 414 return error; 493 415 } 494 416 495 - return error; 417 + jrpriv->dev = jrdev; 418 + spin_lock(&driver_data.jr_alloc_lock); 419 + list_add_tail(&jrpriv->list_node, &driver_data.jr_list); 420 + spin_unlock(&driver_data.jr_alloc_lock); 421 + 422 + atomic_set(&jrpriv->tfm_count, 0); 423 + 424 + return 0; 496 425 } 426 + 427 + static struct of_device_id caam_jr_match[] = { 428 + { 429 + .compatible = "fsl,sec-v4.0-job-ring", 430 + }, 431 + { 432 + .compatible = "fsl,sec4.0-job-ring", 433 + }, 434 + {}, 435 + }; 436 + MODULE_DEVICE_TABLE(of, caam_jr_match); 437 + 438 + static struct platform_driver caam_jr_driver = { 439 + .driver = { 440 + .name = "caam_jr", 441 + 
.owner = THIS_MODULE, 442 + .of_match_table = caam_jr_match, 443 + }, 444 + .probe = caam_jr_probe, 445 + .remove = caam_jr_remove, 446 + }; 447 + 448 + static int __init jr_driver_init(void) 449 + { 450 + spin_lock_init(&driver_data.jr_alloc_lock); 451 + INIT_LIST_HEAD(&driver_data.jr_list); 452 + return platform_driver_register(&caam_jr_driver); 453 + } 454 + 455 + static void __exit jr_driver_exit(void) 456 + { 457 + platform_driver_unregister(&caam_jr_driver); 458 + } 459 + 460 + module_init(jr_driver_init); 461 + module_exit(jr_driver_exit); 462 + 463 + MODULE_LICENSE("GPL"); 464 + MODULE_DESCRIPTION("FSL CAAM JR request backend"); 465 + MODULE_AUTHOR("Freescale Semiconductor - NMG/STC");
+2 -3
drivers/crypto/caam/jr.h
··· 8 8 #define JR_H 9 9 10 10 /* Prototypes for backend-level services exposed to APIs */ 11 + struct device *caam_jr_alloc(void); 12 + void caam_jr_free(struct device *rdev); 11 13 int caam_jr_enqueue(struct device *dev, u32 *desc, 12 14 void (*cbk)(struct device *dev, u32 *desc, u32 status, 13 15 void *areq), 14 16 void *areq); 15 17 16 - extern int caam_jr_probe(struct platform_device *pdev, struct device_node *np, 17 - int ring); 18 - extern int caam_jr_shutdown(struct device *dev); 19 18 #endif /* JR_H */
+11 -3
drivers/crypto/caam/regs.h
··· 245 245 246 246 /* RNG4 TRNG test registers */ 247 247 struct rng4tst { 248 - #define RTMCTL_PRGM 0x00010000 /* 1 -> program mode, 0 -> run mode */ 248 + #define RTMCTL_PRGM 0x00010000 /* 1 -> program mode, 0 -> run mode */ 249 249 u32 rtmctl; /* misc. control register */ 250 250 u32 rtscmisc; /* statistical check misc. register */ 251 251 u32 rtpkrrng; /* poker range register */ ··· 255 255 }; 256 256 #define RTSDCTL_ENT_DLY_SHIFT 16 257 257 #define RTSDCTL_ENT_DLY_MASK (0xffff << RTSDCTL_ENT_DLY_SHIFT) 258 + #define RTSDCTL_ENT_DLY_MIN 1200 259 + #define RTSDCTL_ENT_DLY_MAX 12800 258 260 u32 rtsdctl; /* seed control register */ 259 261 union { 260 262 u32 rtsblim; /* PRGM=1: sparse bit limit register */ ··· 268 266 u32 rtfrqcnt; /* PRGM=0: freq. count register */ 269 267 }; 270 268 u32 rsvd1[40]; 269 + #define RDSTA_SKVT 0x80000000 270 + #define RDSTA_SKVN 0x40000000 271 271 #define RDSTA_IF0 0x00000001 272 + #define RDSTA_IF1 0x00000002 273 + #define RDSTA_IFMASK (RDSTA_IF1 | RDSTA_IF0) 272 274 u32 rdsta; 273 275 u32 rsvd2[15]; 274 276 }; ··· 698 692 u32 jr_ctl_hi; /* CxJRR - JobR Control Register @800 */ 699 693 u32 jr_ctl_lo; 700 694 u64 jr_descaddr; /* CxDADR - JobR Descriptor Address */ 695 + #define DECO_OP_STATUS_HI_ERR_MASK 0xF00000FF 701 696 u32 op_status_hi; /* DxOPSTA - DECO Operation Status */ 702 697 u32 op_status_lo; 703 698 u32 rsvd24[2]; ··· 713 706 u32 rsvd29[48]; 714 707 u32 descbuf[64]; /* DxDESB - Descriptor buffer */ 715 708 u32 rscvd30[193]; 709 + #define DESC_DBG_DECO_STAT_HOST_ERR 0x00D00000 710 + #define DESC_DBG_DECO_STAT_VALID 0x80000000 711 + #define DESC_DBG_DECO_STAT_MASK 0x00F00000 716 712 u32 desc_dbg; /* DxDDR - DECO Debug Register */ 717 713 u32 rsvd31[126]; 718 714 }; 719 715 720 - /* DECO DBG Register Valid Bit*/ 721 - #define DECO_DBG_VALID 0x80000000 722 716 #define DECO_JQCR_WHL 0x20000000 723 717 #define DECO_JQCR_FOUR 0x10000000 724 718
+25 -9
drivers/crypto/caam/sg_sw_sec4.h
··· 117 117 return nents; 118 118 } 119 119 120 + /* Map SG page in kernel virtual address space and copy */ 121 + static inline void sg_map_copy(u8 *dest, struct scatterlist *sg, 122 + int len, int offset) 123 + { 124 + u8 *mapped_addr; 125 + 126 + /* 127 + * Page here can be user-space pinned using get_user_pages 128 + * Same must be kmapped before use and kunmapped subsequently 129 + */ 130 + mapped_addr = kmap_atomic(sg_page(sg)); 131 + memcpy(dest, mapped_addr + offset, len); 132 + kunmap_atomic(mapped_addr); 133 + } 134 + 120 135 /* Copy from len bytes of sg to dest, starting from beginning */ 121 136 static inline void sg_copy(u8 *dest, struct scatterlist *sg, unsigned int len) 122 137 { ··· 139 124 int cpy_index = 0, next_cpy_index = current_sg->length; 140 125 141 126 while (next_cpy_index < len) { 142 - memcpy(dest + cpy_index, (u8 *) sg_virt(current_sg), 143 - current_sg->length); 127 + sg_map_copy(dest + cpy_index, current_sg, current_sg->length, 128 + current_sg->offset); 144 129 current_sg = scatterwalk_sg_next(current_sg); 145 130 cpy_index = next_cpy_index; 146 131 next_cpy_index += current_sg->length; 147 132 } 148 133 if (cpy_index < len) 149 - memcpy(dest + cpy_index, (u8 *) sg_virt(current_sg), 150 - len - cpy_index); 134 + sg_map_copy(dest + cpy_index, current_sg, len-cpy_index, 135 + current_sg->offset); 151 136 } 152 137 153 138 /* Copy sg data, from to_skip to end, to dest */ ··· 155 140 int to_skip, unsigned int end) 156 141 { 157 142 struct scatterlist *current_sg = sg; 158 - int sg_index, cpy_index; 143 + int sg_index, cpy_index, offset; 159 144 160 145 sg_index = current_sg->length; 161 146 while (sg_index <= to_skip) { ··· 163 148 sg_index += current_sg->length; 164 149 } 165 150 cpy_index = sg_index - to_skip; 166 - memcpy(dest, (u8 *) sg_virt(current_sg) + 167 - current_sg->length - cpy_index, cpy_index); 168 - current_sg = scatterwalk_sg_next(current_sg); 169 - if (end - sg_index) 151 + offset = current_sg->offset + 
current_sg->length - cpy_index; 152 + sg_map_copy(dest, current_sg, cpy_index, offset); 153 + if (end - sg_index) { 154 + current_sg = scatterwalk_sg_next(current_sg); 170 155 sg_copy(dest + cpy_index, current_sg, end - sg_index); 156 + } 171 157 }
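The sg_copy() hunks above switch the per-segment copy to the new sg_map_copy() helper but keep the walk logic. That cpy_index/next_cpy_index accumulation can be sketched in plain userspace C — segments stand in for scatterlist entries, and the kmap_atomic mapping step has no userspace equivalent, so it is elided:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Userspace analog of the sg_copy() loop: copy whole segments until the
 * next full copy would pass "len", then copy the remaining tail from the
 * current segment. Caller guarantees len fits in the chain. */
struct seg {
	const char *buf;
	size_t len;
};

static void seg_copy(char *dest, const struct seg *sg, size_t len)
{
	size_t i = 0, cpy_index = 0, next_cpy_index = sg[0].len;

	while (next_cpy_index < len) {
		memcpy(dest + cpy_index, sg[i].buf, sg[i].len);
		cpy_index = next_cpy_index;
		i++;
		next_cpy_index += sg[i].len;
	}
	if (cpy_index < len)
		memcpy(dest + cpy_index, sg[i].buf, len - cpy_index);
}
```

Note the same structure as the kernel loop: the advance to the next segment happens only after a full segment has been consumed, and the final partial copy comes from whichever segment the walk stopped in.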
+22 -31
drivers/crypto/dcp.c
··· 733 733 platform_set_drvdata(pdev, dev); 734 734 735 735 r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 736 - if (!r) { 737 - dev_err(&pdev->dev, "failed to get IORESOURCE_MEM\n"); 738 - return -ENXIO; 739 - } 740 - dev->dcp_regs_base = devm_ioremap(&pdev->dev, r->start, 741 - resource_size(r)); 736 + dev->dcp_regs_base = devm_ioremap_resource(&pdev->dev, r); 737 + if (IS_ERR(dev->dcp_regs_base)) 738 + return PTR_ERR(dev->dcp_regs_base); 742 739 743 740 dcp_set(dev, DCP_CTRL_SFRST, DCP_REG_CTRL); 744 741 udelay(10); ··· 759 762 return -EIO; 760 763 } 761 764 dev->dcp_vmi_irq = r->start; 762 - ret = request_irq(dev->dcp_vmi_irq, dcp_vmi_irq, 0, "dcp", dev); 765 + ret = devm_request_irq(&pdev->dev, dev->dcp_vmi_irq, dcp_vmi_irq, 0, 766 + "dcp", dev); 763 767 if (ret != 0) { 764 768 dev_err(&pdev->dev, "can't request_irq (0)\n"); 765 769 return -EIO; ··· 769 771 r = platform_get_resource(pdev, IORESOURCE_IRQ, 1); 770 772 if (!r) { 771 773 dev_err(&pdev->dev, "can't get IRQ resource (1)\n"); 772 - ret = -EIO; 773 - goto err_free_irq0; 774 + return -EIO; 774 775 } 775 776 dev->dcp_irq = r->start; 776 - ret = request_irq(dev->dcp_irq, dcp_irq, 0, "dcp", dev); 777 + ret = devm_request_irq(&pdev->dev, dev->dcp_irq, dcp_irq, 0, "dcp", 778 + dev); 777 779 if (ret != 0) { 778 780 dev_err(&pdev->dev, "can't request_irq (1)\n"); 779 - ret = -EIO; 780 - goto err_free_irq0; 781 + return -EIO; 781 782 } 782 783 783 784 dev->hw_pkg[0] = dma_alloc_coherent(&pdev->dev, ··· 785 788 GFP_KERNEL); 786 789 if (!dev->hw_pkg[0]) { 787 790 dev_err(&pdev->dev, "Could not allocate hw descriptors\n"); 788 - ret = -ENOMEM; 789 - goto err_free_irq1; 791 + return -ENOMEM; 790 792 } 791 793 792 794 for (i = 1; i < DCP_MAX_PKG; i++) { ··· 844 848 for (j = 0; j < i; j++) 845 849 crypto_unregister_alg(&algs[j]); 846 850 err_free_key_iv: 851 + tasklet_kill(&dev->done_task); 852 + tasklet_kill(&dev->queue_task); 847 853 dma_free_coherent(&pdev->dev, 2 * AES_KEYSIZE_128, dev->payload_base, 848 
854 dev->payload_base_dma); 849 855 err_free_hw_packet: 850 856 dma_free_coherent(&pdev->dev, DCP_MAX_PKG * 851 857 sizeof(struct dcp_hw_packet), dev->hw_pkg[0], 852 858 dev->hw_phys_pkg); 853 - err_free_irq1: 854 - free_irq(dev->dcp_irq, dev); 855 - err_free_irq0: 856 - free_irq(dev->dcp_vmi_irq, dev); 857 859 858 860 return ret; 859 861 } ··· 862 868 int j; 863 869 dev = platform_get_drvdata(pdev); 864 870 865 - dma_free_coherent(&pdev->dev, 866 - DCP_MAX_PKG * sizeof(struct dcp_hw_packet), 867 - dev->hw_pkg[0], dev->hw_phys_pkg); 868 - 869 - dma_free_coherent(&pdev->dev, 2 * AES_KEYSIZE_128, dev->payload_base, 870 - dev->payload_base_dma); 871 - 872 - free_irq(dev->dcp_irq, dev); 873 - free_irq(dev->dcp_vmi_irq, dev); 874 - 875 - tasklet_kill(&dev->done_task); 876 - tasklet_kill(&dev->queue_task); 871 + misc_deregister(&dev->dcp_bootstream_misc); 877 872 878 873 for (j = 0; j < ARRAY_SIZE(algs); j++) 879 874 crypto_unregister_alg(&algs[j]); 880 875 881 - misc_deregister(&dev->dcp_bootstream_misc); 876 + tasklet_kill(&dev->done_task); 877 + tasklet_kill(&dev->queue_task); 878 + 879 + dma_free_coherent(&pdev->dev, 2 * AES_KEYSIZE_128, dev->payload_base, 880 + dev->payload_base_dma); 881 + 882 + dma_free_coherent(&pdev->dev, 883 + DCP_MAX_PKG * sizeof(struct dcp_hw_packet), 884 + dev->hw_pkg[0], dev->hw_phys_pkg); 882 885 883 886 return 0; 884 887 }
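The dcp.c conversion to devm_ioremap_resource()/devm_request_irq() is what lets the error paths drop their free_irq() labels: managed resources are released automatically, in reverse registration order, when probe fails or the device is removed. A minimal userspace sketch of that idea, with illustrative names (not the real devres API):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_RES 8

/* One managed resource: a release callback plus its opaque data. */
struct devres {
	void (*release)(void *data);
	void *data;
};

static struct devres devres_list[MAX_RES];
static size_t devres_count;

/* Register a resource with its release callback (cf. devm_request_irq). */
static void devm_add(void (*release)(void *data), void *data)
{
	devres_list[devres_count].release = release;
	devres_list[devres_count].data = data;
	devres_count++;
}

/* On remove or failed probe, release everything in reverse order. */
static void devres_release_all(void)
{
	while (devres_count) {
		struct devres *dr = &devres_list[--devres_count];

		dr->release(dr->data);
	}
}

/* Demo release callback: records which resource was freed. */
static int freed[MAX_RES];
static size_t nfreed;

static void record_free(void *data)
{
	freed[nfreed++] = *(int *)data;
}
```

The reverse ordering matters: a later-acquired resource (the IRQ handler) is torn down before the earlier one it depends on (the register mapping), which is exactly why the hand-rolled unwind labels become unnecessary.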
+10 -18
drivers/crypto/ixp4xx_crypto.c
··· 1149 1149 unsigned int keylen) 1150 1150 { 1151 1151 struct ixp_ctx *ctx = crypto_aead_ctx(tfm); 1152 - struct rtattr *rta = (struct rtattr *)key; 1153 - struct crypto_authenc_key_param *param; 1152 + struct crypto_authenc_keys keys; 1154 1153 1155 - if (!RTA_OK(rta, keylen)) 1156 - goto badkey; 1157 - if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM) 1158 - goto badkey; 1159 - if (RTA_PAYLOAD(rta) < sizeof(*param)) 1154 + if (crypto_authenc_extractkeys(&keys, key, keylen) != 0) 1160 1155 goto badkey; 1161 1156 1162 - param = RTA_DATA(rta); 1163 - ctx->enckey_len = be32_to_cpu(param->enckeylen); 1164 - 1165 - key += RTA_ALIGN(rta->rta_len); 1166 - keylen -= RTA_ALIGN(rta->rta_len); 1167 - 1168 - if (keylen < ctx->enckey_len) 1157 + if (keys.authkeylen > sizeof(ctx->authkey)) 1169 1158 goto badkey; 1170 1159 1171 - ctx->authkey_len = keylen - ctx->enckey_len; 1172 - memcpy(ctx->enckey, key + ctx->authkey_len, ctx->enckey_len); 1173 - memcpy(ctx->authkey, key, ctx->authkey_len); 1160 + if (keys.enckeylen > sizeof(ctx->enckey)) 1161 + goto badkey; 1162 + 1163 + memcpy(ctx->authkey, keys.authkey, keys.authkeylen); 1164 + memcpy(ctx->enckey, keys.enckey, keys.enckeylen); 1165 + ctx->authkey_len = keys.authkeylen; 1166 + ctx->enckey_len = keys.enckeylen; 1174 1167 1175 1168 return aead_setup(tfm, crypto_aead_authsize(tfm)); 1176 1169 badkey: 1177 - ctx->enckey_len = 0; 1178 1170 crypto_aead_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN); 1179 1171 return -EINVAL; 1180 1172 }
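This hunk — like the matching picoxcell and talitos ones below — replaces open-coded rtattr parsing with crypto_authenc_extractkeys(), which splits the authenc key blob into its authentication and encryption halves. A userspace sketch of that split, under the simplifying assumption of a bare 32-bit big-endian enckeylen prefix (the real blob wraps it in an rtattr header, omitted here):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Simplified authenc key layout assumed here:
 *   [ be32 enckeylen ][ authkey (keylen - 4 - enckeylen) ][ enckey ] */
struct authenc_keys {
	const uint8_t *authkey;
	size_t authkeylen;
	const uint8_t *enckey;
	size_t enckeylen;
};

static int extract_authenc_keys(struct authenc_keys *keys,
				const uint8_t *key, size_t keylen)
{
	uint32_t enckeylen;

	if (keylen < 4)
		return -1;
	/* big-endian 32-bit length, as be32_to_cpu() would decode it */
	enckeylen = ((uint32_t)key[0] << 24) | ((uint32_t)key[1] << 16) |
		    ((uint32_t)key[2] << 8) | key[3];
	key += 4;
	keylen -= 4;
	if (keylen < enckeylen)
		return -1;

	keys->enckeylen = enckeylen;
	keys->authkeylen = keylen - enckeylen;
	keys->authkey = key;
	keys->enckey = key + keys->authkeylen;
	return 0;
}
```

After the helper returns, the driver only has to bounds-check keys.authkeylen and keys.enckeylen against its own buffers — which is precisely the shape of the three converted setkey functions.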
+7 -7
drivers/crypto/mv_cesa.c
··· 907 907 return mv_cra_hash_init(tfm, "sha1", COP_HMAC_SHA1, SHA1_BLOCK_SIZE); 908 908 } 909 909 910 - irqreturn_t crypto_int(int irq, void *priv) 910 + static irqreturn_t crypto_int(int irq, void *priv) 911 911 { 912 912 u32 val; 913 913 ··· 928 928 return IRQ_HANDLED; 929 929 } 930 930 931 - struct crypto_alg mv_aes_alg_ecb = { 931 + static struct crypto_alg mv_aes_alg_ecb = { 932 932 .cra_name = "ecb(aes)", 933 933 .cra_driver_name = "mv-ecb-aes", 934 934 .cra_priority = 300, ··· 951 951 }, 952 952 }; 953 953 954 - struct crypto_alg mv_aes_alg_cbc = { 954 + static struct crypto_alg mv_aes_alg_cbc = { 955 955 .cra_name = "cbc(aes)", 956 956 .cra_driver_name = "mv-cbc-aes", 957 957 .cra_priority = 300, ··· 975 975 }, 976 976 }; 977 977 978 - struct ahash_alg mv_sha1_alg = { 978 + static struct ahash_alg mv_sha1_alg = { 979 979 .init = mv_hash_init, 980 980 .update = mv_hash_update, 981 981 .final = mv_hash_final, ··· 999 999 } 1000 1000 }; 1001 1001 1002 - struct ahash_alg mv_hmac_sha1_alg = { 1002 + static struct ahash_alg mv_hmac_sha1_alg = { 1003 1003 .init = mv_hash_init, 1004 1004 .update = mv_hash_update, 1005 1005 .final = mv_hash_final, ··· 1084 1084 goto err_unmap_sram; 1085 1085 } 1086 1086 1087 - ret = request_irq(irq, crypto_int, IRQF_DISABLED, dev_name(&pdev->dev), 1087 + ret = request_irq(irq, crypto_int, 0, dev_name(&pdev->dev), 1088 1088 cp); 1089 1089 if (ret) 1090 1090 goto err_thread; ··· 1187 1187 .driver = { 1188 1188 .owner = THIS_MODULE, 1189 1189 .name = "mv_crypto", 1190 - .of_match_table = of_match_ptr(mv_cesa_of_match_table), 1190 + .of_match_table = mv_cesa_of_match_table, 1191 1191 }, 1192 1192 }; 1193 1193 MODULE_ALIAS("platform:mv_crypto");
+3 -3
drivers/crypto/omap-aes.c
··· 275 275 if (dd->flags & FLAGS_CBC) 276 276 val |= AES_REG_CTRL_CBC; 277 277 if (dd->flags & FLAGS_CTR) { 278 - val |= AES_REG_CTRL_CTR | AES_REG_CTRL_CTR_WIDTH_32; 278 + val |= AES_REG_CTRL_CTR | AES_REG_CTRL_CTR_WIDTH_128; 279 279 mask = AES_REG_CTRL_CTR | AES_REG_CTRL_CTR_WIDTH_MASK; 280 280 } 281 281 if (dd->flags & FLAGS_ENCRYPT) ··· 554 554 return err; 555 555 } 556 556 557 - int omap_aes_check_aligned(struct scatterlist *sg) 557 + static int omap_aes_check_aligned(struct scatterlist *sg) 558 558 { 559 559 while (sg) { 560 560 if (!IS_ALIGNED(sg->offset, 4)) ··· 566 566 return 0; 567 567 } 568 568 569 - int omap_aes_copy_sgs(struct omap_aes_dev *dd) 569 + static int omap_aes_copy_sgs(struct omap_aes_dev *dd) 570 570 { 571 571 void *buf_in, *buf_out; 572 572 int pages;
+1
drivers/crypto/omap-sham.c
··· 2033 2033 MODULE_DESCRIPTION("OMAP SHA1/MD5 hw acceleration support."); 2034 2034 MODULE_LICENSE("GPL v2"); 2035 2035 MODULE_AUTHOR("Dmitry Kasatkin"); 2036 + MODULE_ALIAS("platform:omap-sham");
+8 -24
drivers/crypto/picoxcell_crypto.c
··· 495 495 { 496 496 struct spacc_aead_ctx *ctx = crypto_aead_ctx(tfm); 497 497 struct spacc_alg *alg = to_spacc_alg(tfm->base.__crt_alg); 498 - struct rtattr *rta = (void *)key; 499 - struct crypto_authenc_key_param *param; 500 - unsigned int authkeylen, enckeylen; 498 + struct crypto_authenc_keys keys; 501 499 int err = -EINVAL; 502 500 503 - if (!RTA_OK(rta, keylen)) 501 + if (crypto_authenc_extractkeys(&keys, key, keylen) != 0) 504 502 goto badkey; 505 503 506 - if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM) 504 + if (keys.enckeylen > AES_MAX_KEY_SIZE) 507 505 goto badkey; 508 506 509 - if (RTA_PAYLOAD(rta) < sizeof(*param)) 510 - goto badkey; 511 - 512 - param = RTA_DATA(rta); 513 - enckeylen = be32_to_cpu(param->enckeylen); 514 - 515 - key += RTA_ALIGN(rta->rta_len); 516 - keylen -= RTA_ALIGN(rta->rta_len); 517 - 518 - if (keylen < enckeylen) 519 - goto badkey; 520 - 521 - authkeylen = keylen - enckeylen; 522 - 523 - if (enckeylen > AES_MAX_KEY_SIZE) 507 + if (keys.authkeylen > sizeof(ctx->hash_ctx)) 524 508 goto badkey; 525 509 526 510 if ((alg->ctrl_default & SPACC_CRYPTO_ALG_MASK) == 527 511 SPA_CTRL_CIPH_ALG_AES) 528 - err = spacc_aead_aes_setkey(tfm, key + authkeylen, enckeylen); 512 + err = spacc_aead_aes_setkey(tfm, keys.enckey, keys.enckeylen); 529 513 else 530 - err = spacc_aead_des_setkey(tfm, key + authkeylen, enckeylen); 514 + err = spacc_aead_des_setkey(tfm, keys.enckey, keys.enckeylen); 531 515 532 516 if (err) 533 517 goto badkey; 534 518 535 - memcpy(ctx->hash_ctx, key, authkeylen); 536 - ctx->hash_key_len = authkeylen; 519 + memcpy(ctx->hash_ctx, keys.authkey, keys.authkeylen); 520 + ctx->hash_key_len = keys.authkeylen; 537 521 538 522 return 0; 539 523
+1 -1
drivers/crypto/sahara.c
··· 1058 1058 .driver = { 1059 1059 .name = SAHARA_NAME, 1060 1060 .owner = THIS_MODULE, 1061 - .of_match_table = of_match_ptr(sahara_dt_ids), 1061 + .of_match_table = sahara_dt_ids, 1062 1062 }, 1063 1063 .id_table = sahara_platform_ids, 1064 1064 };
+8 -27
drivers/crypto/talitos.c
··· 673 673 const u8 *key, unsigned int keylen) 674 674 { 675 675 struct talitos_ctx *ctx = crypto_aead_ctx(authenc); 676 - struct rtattr *rta = (void *)key; 677 - struct crypto_authenc_key_param *param; 678 - unsigned int authkeylen; 679 - unsigned int enckeylen; 676 + struct crypto_authenc_keys keys; 680 677 681 - if (!RTA_OK(rta, keylen)) 678 + if (crypto_authenc_extractkeys(&keys, key, keylen) != 0) 682 679 goto badkey; 683 680 684 - if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM) 681 + if (keys.authkeylen + keys.enckeylen > TALITOS_MAX_KEY_SIZE) 685 682 goto badkey; 686 683 687 - if (RTA_PAYLOAD(rta) < sizeof(*param)) 688 - goto badkey; 684 + memcpy(ctx->key, keys.authkey, keys.authkeylen); 685 + memcpy(&ctx->key[keys.authkeylen], keys.enckey, keys.enckeylen); 689 686 690 - param = RTA_DATA(rta); 691 - enckeylen = be32_to_cpu(param->enckeylen); 692 - 693 - key += RTA_ALIGN(rta->rta_len); 694 - keylen -= RTA_ALIGN(rta->rta_len); 695 - 696 - if (keylen < enckeylen) 697 - goto badkey; 698 - 699 - authkeylen = keylen - enckeylen; 700 - 701 - if (keylen > TALITOS_MAX_KEY_SIZE) 702 - goto badkey; 703 - 704 - memcpy(&ctx->key, key, keylen); 705 - 706 - ctx->keylen = keylen; 707 - ctx->enckeylen = enckeylen; 708 - ctx->authkeylen = authkeylen; 687 + ctx->keylen = keys.authkeylen + keys.enckeylen; 688 + ctx->enckeylen = keys.enckeylen; 689 + ctx->authkeylen = keys.authkeylen; 709 690 710 691 return 0; 711 692
+8 -18
drivers/crypto/tegra-aes.c
··· 27 27 * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. 28 28 */ 29 29 30 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 31 + 30 32 #include <linux/module.h> 31 33 #include <linux/init.h> 32 34 #include <linux/errno.h> ··· 200 198 static void aes_workqueue_handler(struct work_struct *work); 201 199 static DECLARE_WORK(aes_work, aes_workqueue_handler); 202 200 static struct workqueue_struct *aes_wq; 203 - 204 - extern unsigned long long tegra_chip_uid(void); 205 201 206 202 static inline u32 aes_readl(struct tegra_aes_dev *dd, u32 offset) 207 203 { ··· 713 713 struct tegra_aes_dev *dd = aes_dev; 714 714 struct tegra_aes_ctx *ctx = &rng_ctx; 715 715 struct tegra_aes_slot *key_slot; 716 - struct timespec ts; 717 716 int ret = 0; 718 - u64 nsec, tmp[2]; 717 + u8 tmp[16]; /* 16 bytes = 128 bits of entropy */ 719 718 u8 *dt; 720 719 721 720 if (!ctx || !dd) { 722 - dev_err(dd->dev, "ctx=0x%x, dd=0x%x\n", 721 + pr_err("ctx=0x%x, dd=0x%x\n", 723 722 (unsigned int)ctx, (unsigned int)dd); 724 723 return -EINVAL; 725 724 } ··· 777 778 if (dd->ivlen >= (2 * DEFAULT_RNG_BLK_SZ + AES_KEYSIZE_128)) { 778 779 dt = dd->iv + DEFAULT_RNG_BLK_SZ + AES_KEYSIZE_128; 779 780 } else { 780 - getnstimeofday(&ts); 781 - nsec = timespec_to_ns(&ts); 782 - do_div(nsec, 1000); 783 - nsec ^= dd->ctr << 56; 784 - dd->ctr++; 785 - tmp[0] = nsec; 786 - tmp[1] = tegra_chip_uid(); 787 - dt = (u8 *)tmp; 781 + get_random_bytes(tmp, sizeof(tmp)); 782 + dt = tmp; 788 783 } 789 784 memcpy(dd->dt, dt, DEFAULT_RNG_BLK_SZ); 790 785 ··· 797 804 return 0; 798 805 } 799 806 800 - void tegra_aes_cra_exit(struct crypto_tfm *tfm) 807 + static void tegra_aes_cra_exit(struct crypto_tfm *tfm) 801 808 { 802 809 struct tegra_aes_ctx *ctx = 803 810 crypto_ablkcipher_ctx((struct crypto_ablkcipher *)tfm); ··· 917 924 } 918 925 919 926 /* Initialize the vde clock */ 920 - dd->aes_clk = clk_get(dev, "vde"); 927 + dd->aes_clk = devm_clk_get(dev, "vde"); 921 928 if (IS_ERR(dd->aes_clk)) { 922 929 dev_err(dev, 
"iclock intialization failed.\n"); 923 930 err = -ENODEV; ··· 1026 1033 if (dd->buf_out) 1027 1034 dma_free_coherent(dev, AES_HW_DMA_BUFFER_SIZE_BYTES, 1028 1035 dd->buf_out, dd->dma_buf_out); 1029 - if (!IS_ERR(dd->aes_clk)) 1030 - clk_put(dd->aes_clk); 1031 1036 if (aes_wq) 1032 1037 destroy_workqueue(aes_wq); 1033 1038 spin_lock(&list_lock); ··· 1059 1068 dd->buf_in, dd->dma_buf_in); 1060 1069 dma_free_coherent(dev, AES_HW_DMA_BUFFER_SIZE_BYTES, 1061 1070 dd->buf_out, dd->dma_buf_out); 1062 - clk_put(dd->aes_clk); 1063 1071 aes_dev = NULL; 1064 1072 1065 1073 return 0;
+1 -1
drivers/gpio/gpio-bcm-kona.c
··· 158 158 spin_unlock_irqrestore(&kona_gpio->lock, flags); 159 159 160 160 /* return the specified bit status */ 161 - return !!(val & bit); 161 + return !!(val & BIT(bit)); 162 162 } 163 163 164 164 static int bcm_kona_gpio_direction_input(struct gpio_chip *chip, unsigned gpio)
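The one-character gpio-bcm-kona fix above is easy to miss: "bit" is the pin index, so the register value must be masked with BIT(bit), not with the index itself. A small self-contained demonstration of the difference:

```c
#include <assert.h>

#define BIT(nr) (1U << (nr))

/* Buggy form from before the patch: masks with the pin index itself,
 * so e.g. pin 4 ends up testing bit 2 of the register value. */
static int pin_level_buggy(unsigned int val, unsigned int bit)
{
	return !!(val & bit);
}

/* Fixed form, as in the patch: test the pin's own bit position. */
static int pin_level_fixed(unsigned int val, unsigned int bit)
{
	return !!(val & BIT(bit));
}
```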
+1 -1
drivers/gpio/gpio-msm-v2.c
··· 102 102 DECLARE_BITMAP(wake_irqs, MAX_NR_GPIO); 103 103 DECLARE_BITMAP(dual_edge_irqs, MAX_NR_GPIO); 104 104 struct irq_domain *domain; 105 - unsigned int summary_irq; 105 + int summary_irq; 106 106 void __iomem *msm_tlmm_base; 107 107 }; 108 108
+1 -1
drivers/gpio/gpio-mvebu.c
··· 79 79 spinlock_t lock; 80 80 void __iomem *membase; 81 81 void __iomem *percpu_membase; 82 - unsigned int irqbase; 82 + int irqbase; 83 83 struct irq_domain *domain; 84 84 int soc_variant; 85 85 };
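This hunk and the gpio-msm-v2 one above make the same change: an IRQ number field goes from unsigned int to int. The point is that these fields can hold negative error codes, and a `< 0` check against an unsigned field is dead code. A minimal illustration:

```c
#include <assert.h>

/* With signed storage, a stored error code (e.g. -22 for -EINVAL) is
 * detectable with the usual "irq < 0" test. */
static int irq_is_error_signed(int irq)
{
	return irq < 0;
}

/* With unsigned storage, -22 wraps to a huge positive value, so the
 * same comparison is always false and errors slip through. */
static int irq_is_error_unsigned(unsigned int irq)
{
	return irq < 0;	/* always 0: an unsigned value is never negative */
}
```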
+5 -5
drivers/gpio/gpio-pl061.c
··· 286 286 if (!chip->base) 287 287 return -ENOMEM; 288 288 289 - chip->domain = irq_domain_add_simple(adev->dev.of_node, PL061_GPIO_NR, 290 - irq_base, &pl061_domain_ops, chip); 291 - if (!chip->domain) 292 - return -ENODEV; 293 - 294 289 spin_lock_init(&chip->lock); 295 290 296 291 chip->gc.request = pl061_gpio_request; ··· 314 319 315 320 irq_set_chained_handler(irq, pl061_irq_handler); 316 321 irq_set_handler_data(irq, chip); 322 + 323 + chip->domain = irq_domain_add_simple(adev->dev.of_node, PL061_GPIO_NR, 324 + irq_base, &pl061_domain_ops, chip); 325 + if (!chip->domain) 326 + return -ENODEV; 317 327 318 328 for (i = 0; i < PL061_GPIO_NR; i++) { 319 329 if (pdata) {
+1 -1
drivers/gpio/gpio-rcar.c
··· 381 381 if (!p->irq_domain) { 382 382 ret = -ENXIO; 383 383 dev_err(&pdev->dev, "cannot initialize irq domain\n"); 384 - goto err1; 384 + goto err0; 385 385 } 386 386 387 387 if (devm_request_irq(&pdev->dev, irq->start,
+1
drivers/gpio/gpio-tb10x.c
··· 132 132 int mask = BIT(offset); 133 133 int val = TB10X_GPIO_DIR_OUT << offset; 134 134 135 + tb10x_gpio_set(chip, offset, value); 135 136 tb10x_set_bits(tb10x_gpio, OFFSET_TO_REG_DDR, mask, val); 136 137 137 138 return 0;
+9 -4
drivers/gpio/gpio-twl4030.c
··· 354 354 static int twl_direction_out(struct gpio_chip *chip, unsigned offset, int value) 355 355 { 356 356 struct gpio_twl4030_priv *priv = to_gpio_twl4030(chip); 357 + int ret = -EINVAL; 357 358 358 359 mutex_lock(&priv->mutex); 359 360 if (offset < TWL4030_GPIO_MAX) 360 - twl4030_set_gpio_dataout(offset, value); 361 + ret = twl4030_set_gpio_direction(offset, 0); 361 362 362 363 priv->direction |= BIT(offset); 363 364 mutex_unlock(&priv->mutex); 364 365 365 366 twl_set(chip, offset, value); 366 367 367 - return 0; 368 + return ret; 368 369 } 369 370 370 371 static int twl_to_irq(struct gpio_chip *chip, unsigned offset) ··· 436 435 437 436 static int gpio_twl4030_remove(struct platform_device *pdev); 438 437 439 - static struct twl4030_gpio_platform_data *of_gpio_twl4030(struct device *dev) 438 + static struct twl4030_gpio_platform_data *of_gpio_twl4030(struct device *dev, 439 + struct twl4030_gpio_platform_data *pdata) 440 440 { 441 441 struct twl4030_gpio_platform_data *omap_twl_info; 442 442 443 443 omap_twl_info = devm_kzalloc(dev, sizeof(*omap_twl_info), GFP_KERNEL); 444 444 if (!omap_twl_info) 445 445 return NULL; 446 + 447 + if (pdata) 448 + *omap_twl_info = *pdata; 446 449 447 450 omap_twl_info->use_leds = of_property_read_bool(dev->of_node, 448 451 "ti,use-leds"); ··· 505 500 mutex_init(&priv->mutex); 506 501 507 502 if (node) 508 - pdata = of_gpio_twl4030(&pdev->dev); 503 + pdata = of_gpio_twl4030(&pdev->dev, pdata); 509 504 510 505 if (pdata == NULL) { 511 506 dev_err(&pdev->dev, "Platform data is missing\n");
+1
drivers/gpio/gpio-ucb1400.c
··· 105 105 106 106 MODULE_DESCRIPTION("Philips UCB1400 GPIO driver"); 107 107 MODULE_LICENSE("GPL"); 108 + MODULE_ALIAS("platform:ucb1400_gpio");
+32 -26
drivers/gpio/gpiolib.c
··· 14 14 #include <linux/idr.h> 15 15 #include <linux/slab.h> 16 16 #include <linux/acpi.h> 17 + #include <linux/gpio/driver.h> 17 18 18 19 #define CREATE_TRACE_POINTS 19 20 #include <trace/events/gpio.h> ··· 1309 1308 } 1310 1309 EXPORT_SYMBOL_GPL(gpiochip_find); 1311 1310 1311 + static int gpiochip_match_name(struct gpio_chip *chip, void *data) 1312 + { 1313 + const char *name = data; 1314 + 1315 + return !strcmp(chip->label, name); 1316 + } 1317 + 1318 + static struct gpio_chip *find_chip_by_name(const char *name) 1319 + { 1320 + return gpiochip_find((void *)name, gpiochip_match_name); 1321 + } 1322 + 1312 1323 #ifdef CONFIG_PINCTRL 1313 1324 1314 1325 /** ··· 1354 1341 ret = pinctrl_get_group_pins(pctldev, pin_group, 1355 1342 &pin_range->range.pins, 1356 1343 &pin_range->range.npins); 1357 - if (ret < 0) 1344 + if (ret < 0) { 1345 + kfree(pin_range); 1358 1346 return ret; 1347 + } 1359 1348 1360 1349 pinctrl_add_gpio_range(pctldev, &pin_range->range); 1361 1350 ··· 2275 2260 mutex_unlock(&gpio_lookup_lock); 2276 2261 } 2277 2262 2278 - /* 2279 - * Caller must have a acquired gpio_lookup_lock 2280 - */ 2281 - static struct gpio_chip *find_chip_by_name(const char *name) 2282 - { 2283 - struct gpio_chip *chip = NULL; 2284 - 2285 - list_for_each_entry(chip, &gpio_lookup_list, list) { 2286 - if (chip->label == NULL) 2287 - continue; 2288 - if (!strcmp(chip->label, name)) 2289 - break; 2290 - } 2291 - 2292 - return chip; 2293 - } 2294 - 2295 2263 #ifdef CONFIG_OF 2296 2264 static struct gpio_desc *of_find_gpio(struct device *dev, const char *con_id, 2297 - unsigned int idx, unsigned long *flags) 2265 + unsigned int idx, 2266 + enum gpio_lookup_flags *flags) 2298 2267 { 2299 2268 char prop_name[32]; /* 32 is max size of property name */ 2300 2269 enum of_gpio_flags of_flags; ··· 2296 2297 return desc; 2297 2298 2298 2299 if (of_flags & OF_GPIO_ACTIVE_LOW) 2299 - *flags |= GPIOF_ACTIVE_LOW; 2300 + *flags |= GPIO_ACTIVE_LOW; 2300 2301 2301 2302 return desc; 2302 2303 
} 2303 2304 #else 2304 2305 static struct gpio_desc *of_find_gpio(struct device *dev, const char *con_id, 2305 - unsigned int idx, unsigned long *flags) 2306 + unsigned int idx, 2307 + enum gpio_lookup_flags *flags) 2306 2308 { 2307 2309 return ERR_PTR(-ENODEV); 2308 2310 } 2309 2311 #endif 2310 2312 2311 2313 static struct gpio_desc *acpi_find_gpio(struct device *dev, const char *con_id, 2312 - unsigned int idx, unsigned long *flags) 2314 + unsigned int idx, 2315 + enum gpio_lookup_flags *flags) 2313 2316 { 2314 2317 struct acpi_gpio_info info; 2315 2318 struct gpio_desc *desc; ··· 2321 2320 return desc; 2322 2321 2323 2322 if (info.gpioint && info.active_low) 2324 - *flags |= GPIOF_ACTIVE_LOW; 2323 + *flags |= GPIO_ACTIVE_LOW; 2325 2324 2326 2325 return desc; 2327 2326 } 2328 2327 2329 2328 static struct gpio_desc *gpiod_find(struct device *dev, const char *con_id, 2330 - unsigned int idx, unsigned long *flags) 2329 + unsigned int idx, 2330 + enum gpio_lookup_flags *flags) 2331 2331 { 2332 2332 const char *dev_id = dev ? dev_name(dev) : NULL; 2333 2333 struct gpio_desc *desc = ERR_PTR(-ENODEV); ··· 2420 2418 { 2421 2419 struct gpio_desc *desc; 2422 2420 int status; 2423 - unsigned long flags = 0; 2421 + enum gpio_lookup_flags flags = 0; 2424 2422 2425 2423 dev_dbg(dev, "GPIO lookup for consumer %s\n", con_id); 2426 2424 ··· 2446 2444 if (status < 0) 2447 2445 return ERR_PTR(status); 2448 2446 2449 - if (flags & GPIOF_ACTIVE_LOW) 2447 + if (flags & GPIO_ACTIVE_LOW) 2450 2448 set_bit(FLAG_ACTIVE_LOW, &desc->flags); 2449 + if (flags & GPIO_OPEN_DRAIN) 2450 + set_bit(FLAG_OPEN_DRAIN, &desc->flags); 2451 + if (flags & GPIO_OPEN_SOURCE) 2452 + set_bit(FLAG_OPEN_SOURCE, &desc->flags); 2451 2453 2452 2454 return desc; 2453 2455 }
+1 -1
drivers/gpu/drm/drm_sysfs.c
··· 516 516 minor_str = "card%d"; 517 517 518 518 minor->kdev = kzalloc(sizeof(*minor->kdev), GFP_KERNEL); 519 - if (!minor->dev) { 519 + if (!minor->kdev) { 520 520 r = -ENOMEM; 521 521 goto error; 522 522 }
-1
drivers/gpu/drm/nouveau/nouveau_hwmon.c
··· 630 630 hwmon->hwmon = NULL; 631 631 return ret; 632 632 #else 633 - hwmon->hwmon = NULL; 634 633 return 0; 635 634 #endif 636 635 }
+1
drivers/gpu/drm/qxl/qxl_release.c
··· 92 92 - DRM_FILE_OFFSET); 93 93 qxl_fence_remove_release(&bo->fence, release->id); 94 94 qxl_bo_unref(&bo); 95 + kfree(entry); 95 96 } 96 97 spin_lock(&qdev->release_idr_lock); 97 98 idr_remove(&qdev->release_idr, release->id);
+1
drivers/hid/Kconfig
··· 460 460 - Stantum multitouch panels 461 461 - Touch International Panels 462 462 - Unitec Panels 463 + - Wistron optical touch panels 463 464 - XAT optical touch panels 464 465 - Xiroku optical touch panels 465 466 - Zytronic touch panels
+3
drivers/hid/hid-appleir.c
··· 297 297 298 298 appleir->hid = hid; 299 299 300 + /* force input as some remotes bypass the input registration */ 301 + hid->quirks |= HID_QUIRK_HIDINPUT_FORCE; 302 + 300 303 spin_lock_init(&appleir->lock); 301 304 setup_timer(&appleir->key_up_timer, 302 305 key_up_tick, (unsigned long) appleir);
+1 -1
drivers/hid/hid-core.c
··· 1723 1723 { HID_USB_DEVICE(USB_VENDOR_ID_KENSINGTON, USB_DEVICE_ID_KS_SLIMBLADE) }, 1724 1724 { HID_USB_DEVICE(USB_VENDOR_ID_KEYTOUCH, USB_DEVICE_ID_KEYTOUCH_IEC) }, 1725 1725 { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_GENIUS_GILA_GAMING_MOUSE) }, 1726 + { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_GENIUS_MANTICORE) }, 1726 1727 { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_GENIUS_GX_IMPERATOR) }, 1727 1728 { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_ERGO_525V) }, 1728 1729 { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_I405X) }, ··· 1880 1879 1881 1880 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_PRESENTER_8K_BT) }, 1882 1881 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO, USB_DEVICE_ID_NINTENDO_WIIMOTE) }, 1883 - { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO2, USB_DEVICE_ID_NINTENDO_WIIMOTE) }, 1884 1882 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO, USB_DEVICE_ID_NINTENDO_WIIMOTE2) }, 1885 1883 { } 1886 1884 };
+4 -1
drivers/hid/hid-ids.h
··· 489 489 #define USB_VENDOR_ID_KYE 0x0458 490 490 #define USB_DEVICE_ID_KYE_ERGO_525V 0x0087 491 491 #define USB_DEVICE_ID_GENIUS_GILA_GAMING_MOUSE 0x0138 492 + #define USB_DEVICE_ID_GENIUS_MANTICORE 0x0153 492 493 #define USB_DEVICE_ID_GENIUS_GX_IMPERATOR 0x4018 493 494 #define USB_DEVICE_ID_KYE_GPEN_560 0x5003 494 495 #define USB_DEVICE_ID_KYE_EASYPEN_I405X 0x5010 ··· 641 640 #define USB_DEVICE_ID_NEXTWINDOW_TOUCHSCREEN 0x0003 642 641 643 642 #define USB_VENDOR_ID_NINTENDO 0x057e 644 - #define USB_VENDOR_ID_NINTENDO2 0x054c 645 643 #define USB_DEVICE_ID_NINTENDO_WIIMOTE 0x0306 646 644 #define USB_DEVICE_ID_NINTENDO_WIIMOTE2 0x0330 647 645 ··· 901 901 #define USB_DEVICE_ID_SUPER_JOY_BOX_3_PRO 0x8801 902 902 #define USB_DEVICE_ID_SUPER_DUAL_BOX_PRO 0x8802 903 903 #define USB_DEVICE_ID_SUPER_JOY_BOX_5_PRO 0x8804 904 + 905 + #define USB_VENDOR_ID_WISTRON 0x0fb8 906 + #define USB_DEVICE_ID_WISTRON_OPTICAL_TOUCH 0x1109 904 907 905 908 #define USB_VENDOR_ID_X_TENSIONS 0x1ae7 906 909 #define USB_DEVICE_ID_SPEEDLINK_VAD_CEZANNE 0x9001
+13
drivers/hid/hid-kye.c
··· 341 341 case USB_DEVICE_ID_GENIUS_GX_IMPERATOR: 342 342 rdesc = kye_consumer_control_fixup(hdev, rdesc, rsize, 83, 343 343 "Genius Gx Imperator Keyboard"); 344 + case USB_DEVICE_ID_GENIUS_MANTICORE: 345 + rdesc = kye_consumer_control_fixup(hdev, rdesc, rsize, 104, 346 + "Genius Manticore Keyboard"); 344 347 break; 345 348 } 346 349 return rdesc; ··· 421 418 goto enabling_err; 422 419 } 423 420 break; 421 + case USB_DEVICE_ID_GENIUS_MANTICORE: 422 + /* 423 + * The manticore keyboard needs to have all the interfaces 424 + * opened at least once to be fully functional. 425 + */ 426 + if (hid_hw_open(hdev)) 427 + hid_hw_close(hdev); 428 + break; 424 429 } 425 430 426 431 return 0; ··· 450 439 USB_DEVICE_ID_GENIUS_GILA_GAMING_MOUSE) }, 451 440 { HID_USB_DEVICE(USB_VENDOR_ID_KYE, 452 441 USB_DEVICE_ID_GENIUS_GX_IMPERATOR) }, 442 + { HID_USB_DEVICE(USB_VENDOR_ID_KYE, 443 + USB_DEVICE_ID_GENIUS_MANTICORE) }, 453 444 { } 454 445 }; 455 446 MODULE_DEVICE_TABLE(hid, kye_devices);
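Worth flagging in the hid-kye.c hunk above: as applied, the GX_IMPERATOR case no longer ends before the new MANTICORE case (the existing `break` now sits after the added lines), so an Imperator device would run its own fixup and then fall through into the Manticore one — a follow-up `break` may be intended. A small sketch of that fallthrough behavior, with illustrative values standing in for the device IDs:

```c
/* Fallthrough demo: case 1 (standing in for GX_IMPERATOR) has no break,
 * so it also executes case 2 (standing in for MANTICORE) and the second
 * assignment wins. */
static int fixup_len(int device)
{
	int len = 0;

	switch (device) {
	case 1:			/* GX_IMPERATOR: no break -> falls through */
		len = 83;
	case 2:			/* MANTICORE */
		len = 104;
		break;
	}
	return len;
}
```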
+6
drivers/hid/hid-multitouch.c
··· 1335 1335 { .driver_data = MT_CLS_NSMU, 1336 1336 MT_USB_DEVICE(USB_VENDOR_ID_UNITEC, 1337 1337 USB_DEVICE_ID_UNITEC_USB_TOUCH_0A19) }, 1338 + 1339 + /* Wistron panels */ 1340 + { .driver_data = MT_CLS_NSMU, 1341 + MT_USB_DEVICE(USB_VENDOR_ID_WISTRON, 1342 + USB_DEVICE_ID_WISTRON_OPTICAL_TOUCH) }, 1343 + 1338 1344 /* XAT */ 1339 1345 { .driver_data = MT_CLS_NSMU, 1340 1346 MT_USB_DEVICE(USB_VENDOR_ID_XAT,
+42 -11
drivers/hid/hid-sony.c
··· 225 225 struct sony_sc { 226 226 unsigned long quirks; 227 227 228 + #ifdef CONFIG_SONY_FF 229 + struct work_struct rumble_worker; 230 + struct hid_device *hdev; 231 + __u8 left; 232 + __u8 right; 233 + #endif 234 + 228 235 void *extra; 229 236 }; 230 237 ··· 622 615 } 623 616 624 617 #ifdef CONFIG_SONY_FF 625 - static int sony_play_effect(struct input_dev *dev, void *data, 626 - struct ff_effect *effect) 618 + static void sony_rumble_worker(struct work_struct *work) 627 619 { 620 + struct sony_sc *sc = container_of(work, struct sony_sc, rumble_worker); 628 621 unsigned char buf[] = { 629 622 0x01, 630 623 0x00, 0xff, 0x00, 0xff, 0x00, ··· 635 628 0xff, 0x27, 0x10, 0x00, 0x32, 636 629 0x00, 0x00, 0x00, 0x00, 0x00 637 630 }; 638 - __u8 left; 639 - __u8 right; 631 + 632 + buf[3] = sc->right; 633 + buf[5] = sc->left; 634 + 635 + sc->hdev->hid_output_raw_report(sc->hdev, buf, sizeof(buf), 636 + HID_OUTPUT_REPORT); 637 + } 638 + 639 + static int sony_play_effect(struct input_dev *dev, void *data, 640 + struct ff_effect *effect) 641 + { 640 642 struct hid_device *hid = input_get_drvdata(dev); 643 + struct sony_sc *sc = hid_get_drvdata(hid); 641 644 642 645 if (effect->type != FF_RUMBLE) 643 646 return 0; 644 647 645 - left = effect->u.rumble.strong_magnitude / 256; 646 - right = effect->u.rumble.weak_magnitude ? 1 : 0; 648 + sc->left = effect->u.rumble.strong_magnitude / 256; 649 + sc->right = effect->u.rumble.weak_magnitude ? 
1 : 0; 647 650 648 - buf[3] = right; 649 - buf[5] = left; 650 - 651 - return hid->hid_output_raw_report(hid, buf, sizeof(buf), 652 - HID_OUTPUT_REPORT); 651 + schedule_work(&sc->rumble_worker); 652 + return 0; 653 653 } 654 654 655 655 static int sony_init_ff(struct hid_device *hdev) ··· 664 650 struct hid_input *hidinput = list_entry(hdev->inputs.next, 665 651 struct hid_input, list); 666 652 struct input_dev *input_dev = hidinput->input; 653 + struct sony_sc *sc = hid_get_drvdata(hdev); 654 + 655 + sc->hdev = hdev; 656 + INIT_WORK(&sc->rumble_worker, sony_rumble_worker); 667 657 668 658 input_set_capability(input_dev, EV_FF, FF_RUMBLE); 669 659 return input_ff_create_memless(input_dev, NULL, sony_play_effect); 660 + } 661 + 662 + static void sony_destroy_ff(struct hid_device *hdev) 663 + { 664 + struct sony_sc *sc = hid_get_drvdata(hdev); 665 + 666 + cancel_work_sync(&sc->rumble_worker); 670 667 } 671 668 672 669 #else 673 670 static int sony_init_ff(struct hid_device *hdev) 674 671 { 675 672 return 0; 673 + } 674 + 675 + static void sony_destroy_ff(struct hid_device *hdev) 676 + { 676 677 } 677 678 #endif 678 679 ··· 756 727 757 728 if (sc->quirks & BUZZ_CONTROLLER) 758 729 buzz_remove(hdev); 730 + 731 + sony_destroy_ff(hdev); 759 732 760 733 hid_hw_stop(hdev); 761 734 }
+1 -4
drivers/hid/hid-wiimote-core.c
··· 834 834 goto done; 835 835 } 836 836 837 - if (vendor == USB_VENDOR_ID_NINTENDO || 838 - vendor == USB_VENDOR_ID_NINTENDO2) { 837 + if (vendor == USB_VENDOR_ID_NINTENDO) { 839 838 if (product == USB_DEVICE_ID_NINTENDO_WIIMOTE) { 840 839 devtype = WIIMOTE_DEV_GEN10; 841 840 goto done; ··· 1854 1855 1855 1856 static const struct hid_device_id wiimote_hid_devices[] = { 1856 1857 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO, 1857 - USB_DEVICE_ID_NINTENDO_WIIMOTE) }, 1858 - { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO2, 1859 1858 USB_DEVICE_ID_NINTENDO_WIIMOTE) }, 1860 1859 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO, 1861 1860 USB_DEVICE_ID_NINTENDO_WIIMOTE2) },
+1 -1
drivers/hid/uhid.c
··· 287 287 */ 288 288 struct uhid_create_req_compat *compat; 289 289 290 - compat = kmalloc(sizeof(*compat), GFP_KERNEL); 290 + compat = kzalloc(sizeof(*compat), GFP_KERNEL); 291 291 if (!compat) 292 292 return -ENOMEM; 293 293
-1
drivers/hwmon/asus_atk0110.c
··· 18 18 #include <linux/err.h> 19 19 20 20 #include <acpi/acpi.h> 21 - #include <acpi/acpixf.h> 22 21 #include <acpi/acpi_drivers.h> 23 22 #include <acpi/acpi_bus.h> 24 23
+1 -2
drivers/i2c/busses/i2c-bcm-kona.c
··· 20 20 #include <linux/platform_device.h> 21 21 #include <linux/clk.h> 22 22 #include <linux/io.h> 23 - #include <linux/clk.h> 24 23 #include <linux/slab.h> 25 24 26 25 /* Hardware register offsets and field defintions */ ··· 890 891 {.compatible = "brcm,kona-i2c",}, 891 892 {}, 892 893 }; 893 - MODULE_DEVICE_TABLE(of, kona_i2c_of_match); 894 + MODULE_DEVICE_TABLE(of, bcm_kona_i2c_of_match); 894 895 895 896 static struct platform_driver bcm_kona_i2c_driver = { 896 897 .driver = {
+1
drivers/i2c/busses/i2c-bcm2835.c
··· 299 299 strlcpy(adap->name, "bcm2835 I2C adapter", sizeof(adap->name)); 300 300 adap->algo = &bcm2835_i2c_algo; 301 301 adap->dev.parent = &pdev->dev; 302 + adap->dev.of_node = pdev->dev.of_node; 302 303 303 304 bcm2835_i2c_writel(i2c_dev, BCM2835_I2C_C, 0); 304 305
+2 -2
drivers/i2c/busses/i2c-davinci.c
··· 125 125 static inline void davinci_i2c_write_reg(struct davinci_i2c_dev *i2c_dev, 126 126 int reg, u16 val) 127 127 { 128 - __raw_writew(val, i2c_dev->base + reg); 128 + writew_relaxed(val, i2c_dev->base + reg); 129 129 } 130 130 131 131 static inline u16 davinci_i2c_read_reg(struct davinci_i2c_dev *i2c_dev, int reg) 132 132 { 133 - return __raw_readw(i2c_dev->base + reg); 133 + return readw_relaxed(i2c_dev->base + reg); 134 134 } 135 135 136 136 /* Generate a pulse on the i2c clock pin. */
+11 -5
drivers/i2c/busses/i2c-diolan-u2c.c
··· 25 25 #define USB_VENDOR_ID_DIOLAN 0x0abf 26 26 #define USB_DEVICE_ID_DIOLAN_U2C 0x3370 27 27 28 - #define DIOLAN_OUT_EP 0x02 29 - #define DIOLAN_IN_EP 0x84 30 28 31 29 /* commands via USB, must match command ids in the firmware */ 32 30 #define CMD_I2C_READ 0x01 ··· 82 84 struct i2c_diolan_u2c { 83 85 u8 obuffer[DIOLAN_OUTBUF_LEN]; /* output buffer */ 84 86 u8 ibuffer[DIOLAN_INBUF_LEN]; /* input buffer */ 87 + int ep_in, ep_out; /* Endpoints */ 85 88 struct usb_device *usb_dev; /* the usb device for this device */ 86 89 struct usb_interface *interface;/* the interface for this device */ 87 90 struct i2c_adapter adapter; /* i2c related things */ ··· 108 109 return -EINVAL; 109 110 110 111 ret = usb_bulk_msg(dev->usb_dev, 111 - usb_sndbulkpipe(dev->usb_dev, DIOLAN_OUT_EP), 112 + usb_sndbulkpipe(dev->usb_dev, dev->ep_out), 112 113 dev->obuffer, dev->olen, &actual, 113 114 DIOLAN_USB_TIMEOUT); 114 115 if (!ret) { ··· 117 118 118 119 tmpret = usb_bulk_msg(dev->usb_dev, 119 120 usb_rcvbulkpipe(dev->usb_dev, 120 - DIOLAN_IN_EP), 121 + dev->ep_in), 121 122 dev->ibuffer, 122 123 sizeof(dev->ibuffer), &actual, 123 124 DIOLAN_USB_TIMEOUT); ··· 209 210 int ret; 210 211 211 212 ret = usb_bulk_msg(dev->usb_dev, 212 - usb_rcvbulkpipe(dev->usb_dev, DIOLAN_IN_EP), 213 + usb_rcvbulkpipe(dev->usb_dev, dev->ep_in), 213 214 dev->ibuffer, sizeof(dev->ibuffer), &actual, 214 215 DIOLAN_USB_TIMEOUT); 215 216 if (ret < 0 || actual == 0) ··· 444 445 static int diolan_u2c_probe(struct usb_interface *interface, 445 446 const struct usb_device_id *id) 446 447 { 448 + struct usb_host_interface *hostif = interface->cur_altsetting; 447 449 struct i2c_diolan_u2c *dev; 448 450 int ret; 451 + 452 + if (hostif->desc.bInterfaceNumber != 0 453 + || hostif->desc.bNumEndpoints < 2) 454 + return -ENODEV; 449 455 450 456 /* allocate memory for our device state and initialize it */ 451 457 dev = kzalloc(sizeof(*dev), GFP_KERNEL); ··· 459 455 ret = -ENOMEM; 460 456 goto error; 461 457 } 458 + dev->ep_out 
= hostif->endpoint[0].desc.bEndpointAddress; 459 + dev->ep_in = hostif->endpoint[1].desc.bEndpointAddress; 462 460 463 461 dev->usb_dev = usb_get_dev(interface_to_usbdev(interface)); 464 462 dev->interface = interface;
+26 -4
drivers/i2c/busses/i2c-omap.c
··· 266 266 static inline void omap_i2c_write_reg(struct omap_i2c_dev *i2c_dev, 267 267 int reg, u16 val) 268 268 { 269 - __raw_writew(val, i2c_dev->base + 269 + writew_relaxed(val, i2c_dev->base + 270 270 (i2c_dev->regs[reg] << i2c_dev->reg_shift)); 271 271 } 272 272 273 273 static inline u16 omap_i2c_read_reg(struct omap_i2c_dev *i2c_dev, int reg) 274 274 { 275 - return __raw_readw(i2c_dev->base + 275 + return readw_relaxed(i2c_dev->base + 276 276 (i2c_dev->regs[reg] << i2c_dev->reg_shift)); 277 277 } 278 278 ··· 1037 1037 }; 1038 1038 1039 1039 #ifdef CONFIG_OF 1040 + static struct omap_i2c_bus_platform_data omap2420_pdata = { 1041 + .rev = OMAP_I2C_IP_VERSION_1, 1042 + .flags = OMAP_I2C_FLAG_NO_FIFO | 1043 + OMAP_I2C_FLAG_SIMPLE_CLOCK | 1044 + OMAP_I2C_FLAG_16BIT_DATA_REG | 1045 + OMAP_I2C_FLAG_BUS_SHIFT_2, 1046 + }; 1047 + 1048 + static struct omap_i2c_bus_platform_data omap2430_pdata = { 1049 + .rev = OMAP_I2C_IP_VERSION_1, 1050 + .flags = OMAP_I2C_FLAG_BUS_SHIFT_2 | 1051 + OMAP_I2C_FLAG_FORCE_19200_INT_CLK, 1052 + }; 1053 + 1040 1054 static struct omap_i2c_bus_platform_data omap3_pdata = { 1041 1055 .rev = OMAP_I2C_IP_VERSION_1, 1042 1056 .flags = OMAP_I2C_FLAG_BUS_SHIFT_2, ··· 1068 1054 { 1069 1055 .compatible = "ti,omap3-i2c", 1070 1056 .data = &omap3_pdata, 1057 + }, 1058 + { 1059 + .compatible = "ti,omap2430-i2c", 1060 + .data = &omap2430_pdata, 1061 + }, 1062 + { 1063 + .compatible = "ti,omap2420-i2c", 1064 + .data = &omap2420_pdata, 1071 1065 }, 1072 1066 { }, 1073 1067 }; ··· 1162 1140 * Read the Rev hi bit-[15:14] ie scheme this is 1 indicates ver2. 1163 1141 * On omap1/3/2 Offset 4 is IE Reg the bit [15:14] is 0 at reset. 1164 1142 * Also since the omap_i2c_read_reg uses reg_map_ip_* a 1165 - * raw_readw is done. 1143 + * readw_relaxed is done. 1166 1144 */ 1167 - rev = __raw_readw(dev->base + 0x04); 1145 + rev = readw_relaxed(dev->base + 0x04); 1168 1146 1169 1147 dev->scheme = OMAP_I2C_SCHEME(rev); 1170 1148 switch (dev->scheme) {
+3 -2
drivers/iio/accel/hid-sensor-accel-3d.c
··· 350 350 error_iio_unreg: 351 351 iio_device_unregister(indio_dev); 352 352 error_remove_trigger: 353 - hid_sensor_remove_trigger(indio_dev); 353 + hid_sensor_remove_trigger(&accel_state->common_attributes); 354 354 error_unreg_buffer_funcs: 355 355 iio_triggered_buffer_cleanup(indio_dev); 356 356 error_free_dev_mem: ··· 363 363 { 364 364 struct hid_sensor_hub_device *hsdev = pdev->dev.platform_data; 365 365 struct iio_dev *indio_dev = platform_get_drvdata(pdev); 366 + struct accel_3d_state *accel_state = iio_priv(indio_dev); 366 367 367 368 sensor_hub_remove_callback(hsdev, HID_USAGE_SENSOR_ACCEL_3D); 368 369 iio_device_unregister(indio_dev); 369 - hid_sensor_remove_trigger(indio_dev); 370 + hid_sensor_remove_trigger(&accel_state->common_attributes); 370 371 iio_triggered_buffer_cleanup(indio_dev); 371 372 kfree(indio_dev->channels); 372 373
+4 -3
drivers/iio/accel/kxsd9.c
··· 112 112 mutex_lock(&st->buf_lock); 113 113 st->tx[0] = KXSD9_READ(address); 114 114 ret = spi_sync_transfer(st->us, xfers, ARRAY_SIZE(xfers)); 115 - if (ret) 116 - return ret; 117 - return (((u16)(st->rx[0])) << 8) | (st->rx[1] & 0xF0); 115 + if (!ret) 116 + ret = (((u16)(st->rx[0])) << 8) | (st->rx[1] & 0xF0); 117 + mutex_unlock(&st->buf_lock); 118 + return ret; 118 119 } 119 120 120 121 static IIO_CONST_ATTR(accel_scale_available,
+1
drivers/iio/adc/at91_adc.c
··· 1047 1047 } else { 1048 1048 if (!st->caps->has_tsmr) { 1049 1049 dev_err(&pdev->dev, "We don't support non-TSMR adc\n"); 1050 + ret = -ENODEV; 1050 1051 goto error_disable_adc_clk; 1051 1052 } 1052 1053
+4 -4
drivers/iio/adc/mcp3422.c
··· 88 88 89 89 /* sample rates to sign extension table */ 90 90 static const int mcp3422_sign_extend[4] = { 91 - [MCP3422_SRATE_240] = 12, 92 - [MCP3422_SRATE_60] = 14, 93 - [MCP3422_SRATE_15] = 16, 94 - [MCP3422_SRATE_3] = 18 }; 91 + [MCP3422_SRATE_240] = 11, 92 + [MCP3422_SRATE_60] = 13, 93 + [MCP3422_SRATE_15] = 15, 94 + [MCP3422_SRATE_3] = 17 }; 95 95 96 96 /* Client data (each client gets its own) */ 97 97 struct mcp3422 {
+5 -2
drivers/iio/adc/ti_am335x_adc.c
··· 229 229 unsigned long flags, 230 230 const struct iio_buffer_setup_ops *setup_ops) 231 231 { 232 + struct iio_buffer *buffer; 232 233 int ret; 233 234 234 - indio_dev->buffer = iio_kfifo_allocate(indio_dev); 235 - if (!indio_dev->buffer) 235 + buffer = iio_kfifo_allocate(indio_dev); 236 + if (!buffer) 236 237 return -ENOMEM; 238 + 239 + iio_device_attach_buffer(indio_dev, buffer); 237 240 238 241 ret = request_threaded_irq(irq, pollfunc_th, pollfunc_bh, 239 242 flags, indio_dev->name, indio_dev);
+4 -5
drivers/iio/common/hid-sensors/hid-sensor-trigger.c
··· 55 55 return 0; 56 56 } 57 57 58 - void hid_sensor_remove_trigger(struct iio_dev *indio_dev) 58 + void hid_sensor_remove_trigger(struct hid_sensor_common *attrb) 59 59 { 60 - iio_trigger_unregister(indio_dev->trig); 61 - iio_trigger_free(indio_dev->trig); 62 - indio_dev->trig = NULL; 60 + iio_trigger_unregister(attrb->trigger); 61 + iio_trigger_free(attrb->trigger); 63 62 } 64 63 EXPORT_SYMBOL(hid_sensor_remove_trigger); 65 64 ··· 89 90 dev_err(&indio_dev->dev, "Trigger Register Failed\n"); 90 91 goto error_free_trig; 91 92 } 92 - indio_dev->trig = trig; 93 + indio_dev->trig = attrb->trigger = trig; 93 94 94 95 return ret; 95 96
+1 -1
drivers/iio/common/hid-sensors/hid-sensor-trigger.h
··· 21 21 22 22 int hid_sensor_setup_trigger(struct iio_dev *indio_dev, const char *name, 23 23 struct hid_sensor_common *attrb); 24 - void hid_sensor_remove_trigger(struct iio_dev *indio_dev); 24 + void hid_sensor_remove_trigger(struct hid_sensor_common *attrb); 25 25 26 26 #endif
+3 -2
drivers/iio/gyro/hid-sensor-gyro-3d.c
··· 348 348 error_iio_unreg: 349 349 iio_device_unregister(indio_dev); 350 350 error_remove_trigger: 351 - hid_sensor_remove_trigger(indio_dev); 351 + hid_sensor_remove_trigger(&gyro_state->common_attributes); 352 352 error_unreg_buffer_funcs: 353 353 iio_triggered_buffer_cleanup(indio_dev); 354 354 error_free_dev_mem: ··· 361 361 { 362 362 struct hid_sensor_hub_device *hsdev = pdev->dev.platform_data; 363 363 struct iio_dev *indio_dev = platform_get_drvdata(pdev); 364 + struct gyro_3d_state *gyro_state = iio_priv(indio_dev); 364 365 365 366 sensor_hub_remove_callback(hsdev, HID_USAGE_SENSOR_GYRO_3D); 366 367 iio_device_unregister(indio_dev); 367 - hid_sensor_remove_trigger(indio_dev); 368 + hid_sensor_remove_trigger(&gyro_state->common_attributes); 368 369 iio_triggered_buffer_cleanup(indio_dev); 369 370 kfree(indio_dev->channels); 370 371
+2
drivers/iio/light/Kconfig
··· 81 81 config TCS3472 82 82 tristate "TAOS TCS3472 color light-to-digital converter" 83 83 depends on I2C 84 + select IIO_BUFFER 85 + select IIO_TRIGGERED_BUFFER 84 86 help 85 87 If you say yes here you get support for the TAOS TCS3472 86 88 family of color light-to-digital converters with IR filter.
+3 -2
drivers/iio/light/hid-sensor-als.c
··· 314 314 error_iio_unreg: 315 315 iio_device_unregister(indio_dev); 316 316 error_remove_trigger: 317 - hid_sensor_remove_trigger(indio_dev); 317 + hid_sensor_remove_trigger(&als_state->common_attributes); 318 318 error_unreg_buffer_funcs: 319 319 iio_triggered_buffer_cleanup(indio_dev); 320 320 error_free_dev_mem: ··· 327 327 { 328 328 struct hid_sensor_hub_device *hsdev = pdev->dev.platform_data; 329 329 struct iio_dev *indio_dev = platform_get_drvdata(pdev); 330 + struct als_state *als_state = iio_priv(indio_dev); 330 331 331 332 sensor_hub_remove_callback(hsdev, HID_USAGE_SENSOR_ALS); 332 333 iio_device_unregister(indio_dev); 333 - hid_sensor_remove_trigger(indio_dev); 334 + hid_sensor_remove_trigger(&als_state->common_attributes); 334 335 iio_triggered_buffer_cleanup(indio_dev); 335 336 kfree(indio_dev->channels); 336 337
+2
drivers/iio/magnetometer/Kconfig
··· 19 19 config MAG3110 20 20 tristate "Freescale MAG3110 3-Axis Magnetometer" 21 21 depends on I2C 22 + select IIO_BUFFER 23 + select IIO_TRIGGERED_BUFFER 22 24 help 23 25 Say yes here to build support for the Freescale MAG3110 3-Axis 24 26 magnetometer.
+3 -2
drivers/iio/magnetometer/hid-sensor-magn-3d.c
··· 351 351 error_iio_unreg: 352 352 iio_device_unregister(indio_dev); 353 353 error_remove_trigger: 354 - hid_sensor_remove_trigger(indio_dev); 354 + hid_sensor_remove_trigger(&magn_state->common_attributes); 355 355 error_unreg_buffer_funcs: 356 356 iio_triggered_buffer_cleanup(indio_dev); 357 357 error_free_dev_mem: ··· 364 364 { 365 365 struct hid_sensor_hub_device *hsdev = pdev->dev.platform_data; 366 366 struct iio_dev *indio_dev = platform_get_drvdata(pdev); 367 + struct magn_3d_state *magn_state = iio_priv(indio_dev); 367 368 368 369 sensor_hub_remove_callback(hsdev, HID_USAGE_SENSOR_COMPASS_3D); 369 370 iio_device_unregister(indio_dev); 370 - hid_sensor_remove_trigger(indio_dev); 371 + hid_sensor_remove_trigger(&magn_state->common_attributes); 371 372 iio_triggered_buffer_cleanup(indio_dev); 372 373 kfree(indio_dev->channels); 373 374
+6 -1
drivers/iio/magnetometer/mag3110.c
··· 250 250 .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SAMP_FREQ) | \ 251 251 BIT(IIO_CHAN_INFO_SCALE), \ 252 252 .scan_index = idx, \ 253 - .scan_type = IIO_ST('s', 16, 16, IIO_BE), \ 253 + .scan_type = { \ 254 + .sign = 's', \ 255 + .realbits = 16, \ 256 + .storagebits = 16, \ 257 + .endianness = IIO_BE, \ 258 + }, \ 254 259 } 255 260 256 261 static const struct iio_chan_spec mag3110_channels[] = {
+4 -1
drivers/input/misc/hp_sdc_rtc.c
··· 180 180 if (WARN_ON(down_interruptible(&i8042tregs))) 181 181 return -1; 182 182 183 - if (hp_sdc_enqueue_transaction(&t)) return -1; 183 + if (hp_sdc_enqueue_transaction(&t)) { 184 + up(&i8042tregs); 185 + return -1; 186 + } 184 187 185 188 /* Sleep until results come back. */ 186 189 if (WARN_ON(down_interruptible(&i8042tregs)))
+11
drivers/input/touchscreen/Kconfig
··· 906 906 To compile this driver as a module, choose M here: the 907 907 module will be called stmpe-ts. 908 908 909 + config TOUCHSCREEN_SUR40 910 + tristate "Samsung SUR40 (Surface 2.0/PixelSense) touchscreen" 911 + depends on USB 912 + select INPUT_POLLDEV 913 + help 914 + Say Y here if you want support for the Samsung SUR40 touchscreen 915 + (also known as Microsoft Surface 2.0 or Microsoft PixelSense). 916 + 917 + To compile this driver as a module, choose M here: the 918 + module will be called sur40. 919 + 909 920 config TOUCHSCREEN_TPS6507X 910 921 tristate "TPS6507x based touchscreens" 911 922 depends on I2C
+1
drivers/input/touchscreen/Makefile
··· 54 54 obj-$(CONFIG_TOUCHSCREEN_S3C2410) += s3c2410_ts.o 55 55 obj-$(CONFIG_TOUCHSCREEN_ST1232) += st1232.o 56 56 obj-$(CONFIG_TOUCHSCREEN_STMPE) += stmpe-ts.o 57 + obj-$(CONFIG_TOUCHSCREEN_SUR40) += sur40.o 57 58 obj-$(CONFIG_TOUCHSCREEN_TI_AM335X_TSC) += ti_am335x_tsc.o 58 59 obj-$(CONFIG_TOUCHSCREEN_TNETV107X) += tnetv107x-ts.o 59 60 obj-$(CONFIG_TOUCHSCREEN_TOUCHIT213) += touchit213.o
+1 -1
drivers/input/touchscreen/atmel-wm97xx.c
··· 391 391 } 392 392 393 393 #ifdef CONFIG_PM_SLEEP 394 - static int atmel_wm97xx_suspend(struct *dev) 394 + static int atmel_wm97xx_suspend(struct device *dev) 395 395 { 396 396 struct platform_device *pdev = to_platform_device(dev); 397 397 struct atmel_wm97xx *atmel_wm97xx = platform_get_drvdata(pdev);
+1 -2
drivers/input/touchscreen/cyttsp4_core.c
··· 1246 1246 1247 1247 dev_vdbg(cd->dev, "%s: Watchdog timer triggered\n", __func__); 1248 1248 1249 - if (!work_pending(&cd->watchdog_work)) 1250 - schedule_work(&cd->watchdog_work); 1249 + schedule_work(&cd->watchdog_work); 1251 1250 1252 1251 return; 1253 1252 }
+466
drivers/input/touchscreen/sur40.c
··· 1 + /* 2 + * Surface2.0/SUR40/PixelSense input driver 3 + * 4 + * Copyright (c) 2013 by Florian 'floe' Echtler <floe@butterbrot.org> 5 + * 6 + * Derived from the USB Skeleton driver 1.1, 7 + * Copyright (c) 2003 Greg Kroah-Hartman (greg@kroah.com) 8 + * 9 + * and from the Apple USB BCM5974 multitouch driver, 10 + * Copyright (c) 2008 Henrik Rydberg (rydberg@euromail.se) 11 + * 12 + * and from the generic hid-multitouch driver, 13 + * Copyright (c) 2010-2012 Stephane Chatty <chatty@enac.fr> 14 + * 15 + * This program is free software; you can redistribute it and/or 16 + * modify it under the terms of the GNU General Public License as 17 + * published by the Free Software Foundation; either version 2 of 18 + * the License, or (at your option) any later version. 19 + */ 20 + 21 + #include <linux/kernel.h> 22 + #include <linux/errno.h> 23 + #include <linux/delay.h> 24 + #include <linux/init.h> 25 + #include <linux/slab.h> 26 + #include <linux/module.h> 27 + #include <linux/completion.h> 28 + #include <linux/uaccess.h> 29 + #include <linux/usb.h> 30 + #include <linux/printk.h> 31 + #include <linux/input-polldev.h> 32 + #include <linux/input/mt.h> 33 + #include <linux/usb/input.h> 34 + 35 + /* read 512 bytes from endpoint 0x86 -> get header + blobs */ 36 + struct sur40_header { 37 + 38 + __le16 type; /* always 0x0001 */ 39 + __le16 count; /* count of blobs (if 0: continue prev. packet) */ 40 + 41 + __le32 packet_id; /* unique ID for all packets in one frame */ 42 + 43 + __le32 timestamp; /* milliseconds (inc. by 16 or 17 each frame) */ 44 + __le32 unknown; /* "epoch?" always 02/03 00 00 00 */ 45 + 46 + } __packed; 47 + 48 + struct sur40_blob { 49 + 50 + __le16 blob_id; 51 + 52 + u8 action; /* 0x02 = enter/exit, 0x03 = update (?) */ 53 + u8 unknown; /* always 0x01 or 0x02 (no idea what this is?) 
*/ 54 + 55 + __le16 bb_pos_x; /* upper left corner of bounding box */ 56 + __le16 bb_pos_y; 57 + 58 + __le16 bb_size_x; /* size of bounding box */ 59 + __le16 bb_size_y; 60 + 61 + __le16 pos_x; /* finger tip position */ 62 + __le16 pos_y; 63 + 64 + __le16 ctr_x; /* centroid position */ 65 + __le16 ctr_y; 66 + 67 + __le16 axis_x; /* somehow related to major/minor axis, mostly: */ 68 + __le16 axis_y; /* axis_x == bb_size_y && axis_y == bb_size_x */ 69 + 70 + __le32 angle; /* orientation in radians relative to x axis - 71 + actually an IEEE754 float, don't use in kernel */ 72 + 73 + __le32 area; /* size in pixels/pressure (?) */ 74 + 75 + u8 padding[32]; 76 + 77 + } __packed; 78 + 79 + /* combined header/blob data */ 80 + struct sur40_data { 81 + struct sur40_header header; 82 + struct sur40_blob blobs[]; 83 + } __packed; 84 + 85 + 86 + /* version information */ 87 + #define DRIVER_SHORT "sur40" 88 + #define DRIVER_AUTHOR "Florian 'floe' Echtler <floe@butterbrot.org>" 89 + #define DRIVER_DESC "Surface2.0/SUR40/PixelSense input driver" 90 + 91 + /* vendor and device IDs */ 92 + #define ID_MICROSOFT 0x045e 93 + #define ID_SUR40 0x0775 94 + 95 + /* sensor resolution */ 96 + #define SENSOR_RES_X 1920 97 + #define SENSOR_RES_Y 1080 98 + 99 + /* touch data endpoint */ 100 + #define TOUCH_ENDPOINT 0x86 101 + 102 + /* polling interval (ms) */ 103 + #define POLL_INTERVAL 10 104 + 105 + /* maximum number of contacts FIXME: this is a guess? */ 106 + #define MAX_CONTACTS 64 107 + 108 + /* control commands */ 109 + #define SUR40_GET_VERSION 0xb0 /* 12 bytes string */ 110 + #define SUR40_UNKNOWN1 0xb3 /* 5 bytes */ 111 + #define SUR40_UNKNOWN2 0xc1 /* 24 bytes */ 112 + 113 + #define SUR40_GET_STATE 0xc5 /* 4 bytes state (?) 
*/ 114 + #define SUR40_GET_SENSORS 0xb1 /* 8 bytes sensors */ 115 + 116 + /* 117 + * Note: an earlier, non-public version of this driver used USB_RECIP_ENDPOINT 118 + * here by mistake which is very likely to have corrupted the firmware EEPROM 119 + * on two separate SUR40 devices. Thanks to Alan Stern who spotted this bug. 120 + * Should you ever run into a similar problem, the background story to this 121 + * incident and instructions on how to fix the corrupted EEPROM are available 122 + * at https://floe.butterbrot.org/matrix/hacking/surface/brick.html 123 + */ 124 + 125 + struct sur40_state { 126 + 127 + struct usb_device *usbdev; 128 + struct device *dev; 129 + struct input_polled_dev *input; 130 + 131 + struct sur40_data *bulk_in_buffer; 132 + size_t bulk_in_size; 133 + u8 bulk_in_epaddr; 134 + 135 + char phys[64]; 136 + }; 137 + 138 + static int sur40_command(struct sur40_state *dev, 139 + u8 command, u16 index, void *buffer, u16 size) 140 + { 141 + return usb_control_msg(dev->usbdev, usb_rcvctrlpipe(dev->usbdev, 0), 142 + command, 143 + USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN, 144 + 0x00, index, buffer, size, 1000); 145 + } 146 + 147 + /* Initialization routine, called from sur40_open */ 148 + static int sur40_init(struct sur40_state *dev) 149 + { 150 + int result; 151 + u8 buffer[24]; 152 + 153 + /* stupidly replay the original MS driver init sequence */ 154 + result = sur40_command(dev, SUR40_GET_VERSION, 0x00, buffer, 12); 155 + if (result < 0) 156 + return result; 157 + 158 + result = sur40_command(dev, SUR40_GET_VERSION, 0x01, buffer, 12); 159 + if (result < 0) 160 + return result; 161 + 162 + result = sur40_command(dev, SUR40_GET_VERSION, 0x02, buffer, 12); 163 + if (result < 0) 164 + return result; 165 + 166 + result = sur40_command(dev, SUR40_UNKNOWN2, 0x00, buffer, 24); 167 + if (result < 0) 168 + return result; 169 + 170 + result = sur40_command(dev, SUR40_UNKNOWN1, 0x00, buffer, 5); 171 + if (result < 0) 172 + return result; 173 + 174 + 
result = sur40_command(dev, SUR40_GET_VERSION, 0x03, buffer, 12); 175 + 176 + /* 177 + * Discard the result buffer - no known data inside except 178 + * some version strings, maybe extract these sometime... 179 + */ 180 + 181 + return result; 182 + } 183 + 184 + /* 185 + * Callback routines from input_polled_dev 186 + */ 187 + 188 + /* Enable the device, polling will now start. */ 189 + static void sur40_open(struct input_polled_dev *polldev) 190 + { 191 + struct sur40_state *sur40 = polldev->private; 192 + 193 + dev_dbg(sur40->dev, "open\n"); 194 + sur40_init(sur40); 195 + } 196 + 197 + /* Disable device, polling has stopped. */ 198 + static void sur40_close(struct input_polled_dev *polldev) 199 + { 200 + struct sur40_state *sur40 = polldev->private; 201 + 202 + dev_dbg(sur40->dev, "close\n"); 203 + /* 204 + * There is no known way to stop the device, so we simply 205 + * stop polling. 206 + */ 207 + } 208 + 209 + /* 210 + * This function is called when a whole contact has been processed, 211 + * so that it can assign it to a slot and store the data there. 
212 + */ 213 + static void sur40_report_blob(struct sur40_blob *blob, struct input_dev *input) 214 + { 215 + int wide, major, minor; 216 + 217 + int bb_size_x = le16_to_cpu(blob->bb_size_x); 218 + int bb_size_y = le16_to_cpu(blob->bb_size_y); 219 + 220 + int pos_x = le16_to_cpu(blob->pos_x); 221 + int pos_y = le16_to_cpu(blob->pos_y); 222 + 223 + int ctr_x = le16_to_cpu(blob->ctr_x); 224 + int ctr_y = le16_to_cpu(blob->ctr_y); 225 + 226 + int slotnum = input_mt_get_slot_by_key(input, blob->blob_id); 227 + if (slotnum < 0 || slotnum >= MAX_CONTACTS) 228 + return; 229 + 230 + input_mt_slot(input, slotnum); 231 + input_mt_report_slot_state(input, MT_TOOL_FINGER, 1); 232 + wide = (bb_size_x > bb_size_y); 233 + major = max(bb_size_x, bb_size_y); 234 + minor = min(bb_size_x, bb_size_y); 235 + 236 + input_report_abs(input, ABS_MT_POSITION_X, pos_x); 237 + input_report_abs(input, ABS_MT_POSITION_Y, pos_y); 238 + input_report_abs(input, ABS_MT_TOOL_X, ctr_x); 239 + input_report_abs(input, ABS_MT_TOOL_Y, ctr_y); 240 + 241 + /* TODO: use a better orientation measure */ 242 + input_report_abs(input, ABS_MT_ORIENTATION, wide); 243 + input_report_abs(input, ABS_MT_TOUCH_MAJOR, major); 244 + input_report_abs(input, ABS_MT_TOUCH_MINOR, minor); 245 + } 246 + 247 + /* core function: poll for new input data */ 248 + static void sur40_poll(struct input_polled_dev *polldev) 249 + { 250 + 251 + struct sur40_state *sur40 = polldev->private; 252 + struct input_dev *input = polldev->input; 253 + int result, bulk_read, need_blobs, packet_blobs, i; 254 + u32 packet_id; 255 + 256 + struct sur40_header *header = &sur40->bulk_in_buffer->header; 257 + struct sur40_blob *inblob = &sur40->bulk_in_buffer->blobs[0]; 258 + 259 + dev_dbg(sur40->dev, "poll\n"); 260 + 261 + need_blobs = -1; 262 + 263 + do { 264 + 265 + /* perform a blocking bulk read to get data from the device */ 266 + result = usb_bulk_msg(sur40->usbdev, 267 + usb_rcvbulkpipe(sur40->usbdev, sur40->bulk_in_epaddr), 268 + 
sur40->bulk_in_buffer, sur40->bulk_in_size, 269 + &bulk_read, 1000); 270 + 271 + dev_dbg(sur40->dev, "received %d bytes\n", bulk_read); 272 + 273 + if (result < 0) { 274 + dev_err(sur40->dev, "error in usb_bulk_read\n"); 275 + return; 276 + } 277 + 278 + result = bulk_read - sizeof(struct sur40_header); 279 + 280 + if (result % sizeof(struct sur40_blob) != 0) { 281 + dev_err(sur40->dev, "transfer size mismatch\n"); 282 + return; 283 + } 284 + 285 + /* first packet? */ 286 + if (need_blobs == -1) { 287 + need_blobs = le16_to_cpu(header->count); 288 + dev_dbg(sur40->dev, "need %d blobs\n", need_blobs); 289 + packet_id = header->packet_id; 290 + } 291 + 292 + /* 293 + * Sanity check. when video data is also being retrieved, the 294 + * packet ID will usually increase in the middle of a series 295 + * instead of at the end. 296 + */ 297 + if (packet_id != header->packet_id) 298 + dev_warn(sur40->dev, "packet ID mismatch\n"); 299 + 300 + packet_blobs = result / sizeof(struct sur40_blob); 301 + dev_dbg(sur40->dev, "received %d blobs\n", packet_blobs); 302 + 303 + /* packets always contain at least 4 blobs, even if empty */ 304 + if (packet_blobs > need_blobs) 305 + packet_blobs = need_blobs; 306 + 307 + for (i = 0; i < packet_blobs; i++) { 308 + need_blobs--; 309 + dev_dbg(sur40->dev, "processing blob\n"); 310 + sur40_report_blob(&(inblob[i]), input); 311 + } 312 + 313 + } while (need_blobs > 0); 314 + 315 + input_mt_sync_frame(input); 316 + input_sync(input); 317 + } 318 + 319 + /* Initialize input device parameters. 
*/ 320 + static void sur40_input_setup(struct input_dev *input_dev) 321 + { 322 + __set_bit(EV_KEY, input_dev->evbit); 323 + __set_bit(EV_ABS, input_dev->evbit); 324 + 325 + input_set_abs_params(input_dev, ABS_MT_POSITION_X, 326 + 0, SENSOR_RES_X, 0, 0); 327 + input_set_abs_params(input_dev, ABS_MT_POSITION_Y, 328 + 0, SENSOR_RES_Y, 0, 0); 329 + 330 + input_set_abs_params(input_dev, ABS_MT_TOOL_X, 331 + 0, SENSOR_RES_X, 0, 0); 332 + input_set_abs_params(input_dev, ABS_MT_TOOL_Y, 333 + 0, SENSOR_RES_Y, 0, 0); 334 + 335 + /* max value unknown, but major/minor axis 336 + * can never be larger than screen */ 337 + input_set_abs_params(input_dev, ABS_MT_TOUCH_MAJOR, 338 + 0, SENSOR_RES_X, 0, 0); 339 + input_set_abs_params(input_dev, ABS_MT_TOUCH_MINOR, 340 + 0, SENSOR_RES_Y, 0, 0); 341 + 342 + input_set_abs_params(input_dev, ABS_MT_ORIENTATION, 0, 1, 0, 0); 343 + 344 + input_mt_init_slots(input_dev, MAX_CONTACTS, 345 + INPUT_MT_DIRECT | INPUT_MT_DROP_UNUSED); 346 + } 347 + 348 + /* Check candidate USB interface. */ 349 + static int sur40_probe(struct usb_interface *interface, 350 + const struct usb_device_id *id) 351 + { 352 + struct usb_device *usbdev = interface_to_usbdev(interface); 353 + struct sur40_state *sur40; 354 + struct usb_host_interface *iface_desc; 355 + struct usb_endpoint_descriptor *endpoint; 356 + struct input_polled_dev *poll_dev; 357 + int error; 358 + 359 + /* Check if we really have the right interface. */ 360 + iface_desc = &interface->altsetting[0]; 361 + if (iface_desc->desc.bInterfaceClass != 0xFF) 362 + return -ENODEV; 363 + 364 + /* Use endpoint #4 (0x86). */ 365 + endpoint = &iface_desc->endpoint[4].desc; 366 + if (endpoint->bEndpointAddress != TOUCH_ENDPOINT) 367 + return -ENODEV; 368 + 369 + /* Allocate memory for our device state and initialize it. 
*/ 370 + sur40 = kzalloc(sizeof(struct sur40_state), GFP_KERNEL); 371 + if (!sur40) 372 + return -ENOMEM; 373 + 374 + poll_dev = input_allocate_polled_device(); 375 + if (!poll_dev) { 376 + error = -ENOMEM; 377 + goto err_free_dev; 378 + } 379 + 380 + /* Set up polled input device control structure */ 381 + poll_dev->private = sur40; 382 + poll_dev->poll_interval = POLL_INTERVAL; 383 + poll_dev->open = sur40_open; 384 + poll_dev->poll = sur40_poll; 385 + poll_dev->close = sur40_close; 386 + 387 + /* Set up regular input device structure */ 388 + sur40_input_setup(poll_dev->input); 389 + 390 + poll_dev->input->name = "Samsung SUR40"; 391 + usb_to_input_id(usbdev, &poll_dev->input->id); 392 + usb_make_path(usbdev, sur40->phys, sizeof(sur40->phys)); 393 + strlcat(sur40->phys, "/input0", sizeof(sur40->phys)); 394 + poll_dev->input->phys = sur40->phys; 395 + poll_dev->input->dev.parent = &interface->dev; 396 + 397 + sur40->usbdev = usbdev; 398 + sur40->dev = &interface->dev; 399 + sur40->input = poll_dev; 400 + 401 + /* use the bulk-in endpoint tested above */ 402 + sur40->bulk_in_size = usb_endpoint_maxp(endpoint); 403 + sur40->bulk_in_epaddr = endpoint->bEndpointAddress; 404 + sur40->bulk_in_buffer = kmalloc(sur40->bulk_in_size, GFP_KERNEL); 405 + if (!sur40->bulk_in_buffer) { 406 + dev_err(&interface->dev, "Unable to allocate input buffer."); 407 + error = -ENOMEM; 408 + goto err_free_polldev; 409 + } 410 + 411 + error = input_register_polled_device(poll_dev); 412 + if (error) { 413 + dev_err(&interface->dev, 414 + "Unable to register polled input device."); 415 + goto err_free_buffer; 416 + } 417 + 418 + /* we can register the device now, as it is ready */ 419 + usb_set_intfdata(interface, sur40); 420 + dev_dbg(&interface->dev, "%s is now attached\n", DRIVER_DESC); 421 + 422 + return 0; 423 + 424 + err_free_buffer: 425 + kfree(sur40->bulk_in_buffer); 426 + err_free_polldev: 427 + input_free_polled_device(sur40->input); 428 + err_free_dev: 429 + kfree(sur40); 430 + 
431 + return error; 432 + } 433 + 434 + /* Unregister device & clean up. */ 435 + static void sur40_disconnect(struct usb_interface *interface) 436 + { 437 + struct sur40_state *sur40 = usb_get_intfdata(interface); 438 + 439 + input_unregister_polled_device(sur40->input); 440 + input_free_polled_device(sur40->input); 441 + kfree(sur40->bulk_in_buffer); 442 + kfree(sur40); 443 + 444 + usb_set_intfdata(interface, NULL); 445 + dev_dbg(&interface->dev, "%s is now disconnected\n", DRIVER_DESC); 446 + } 447 + 448 + static const struct usb_device_id sur40_table[] = { 449 + { USB_DEVICE(ID_MICROSOFT, ID_SUR40) }, /* Samsung SUR40 */ 450 + { } /* terminating null entry */ 451 + }; 452 + MODULE_DEVICE_TABLE(usb, sur40_table); 453 + 454 + /* USB-specific object needed to register this driver with the USB subsystem. */ 455 + static struct usb_driver sur40_driver = { 456 + .name = DRIVER_SHORT, 457 + .probe = sur40_probe, 458 + .disconnect = sur40_disconnect, 459 + .id_table = sur40_table, 460 + }; 461 + 462 + module_usb_driver(sur40_driver); 463 + 464 + MODULE_AUTHOR(DRIVER_AUTHOR); 465 + MODULE_DESCRIPTION(DRIVER_DESC); 466 + MODULE_LICENSE("GPL");
+1
drivers/macintosh/Makefile
··· 40 40 windfarm_ad7417_sensor.o \ 41 41 windfarm_lm75_sensor.o \ 42 42 windfarm_lm87_sensor.o \ 43 + windfarm_max6690_sensor.o \ 43 44 windfarm_pid.o \ 44 45 windfarm_cpufreq_clamp.o \ 45 46 windfarm_rm31.o
+1 -1
drivers/md/md.c
··· 7777 7777 if (mddev->ro && !test_bit(MD_RECOVERY_NEEDED, &mddev->recovery)) 7778 7778 return; 7779 7779 if ( ! ( 7780 - (mddev->flags & ~ (1<<MD_CHANGE_PENDING)) || 7780 + (mddev->flags & MD_UPDATE_SB_FLAGS & ~ (1<<MD_CHANGE_PENDING)) || 7781 7781 test_bit(MD_RECOVERY_NEEDED, &mddev->recovery) || 7782 7782 test_bit(MD_RECOVERY_DONE, &mddev->recovery) || 7783 7783 (mddev->external == 0 && mddev->safemode == 1) ||
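The one-line md.c change above narrows the flag test: `mddev->flags` now carries bits that do not indicate a pending superblock write, so the word must be masked with `MD_UPDATE_SB_FLAGS` before deciding whether there is work to do. A sketch of the before/after predicate; the bit positions here are assumptions for illustration, not copied from md.h:

```c
/* Illustrative model of the md.c fix: mddev->flags gained bits that are
 * not superblock-update flags, so they must be masked out first.
 * Bit positions below are assumed for this sketch. */
#define MD_CHANGE_DEVS    0
#define MD_CHANGE_CLEAN   1
#define MD_CHANGE_PENDING 2
#define MD_STILL_CLOSED   3	/* not a superblock-update flag */

#define MD_UPDATE_SB_FLAGS \
	((1UL << MD_CHANGE_DEVS) | (1UL << MD_CHANGE_CLEAN) | \
	 (1UL << MD_CHANGE_PENDING))

/* Old test: any bit except MD_CHANGE_PENDING counts as work. */
int needs_work_old(unsigned long flags)
{
	return (flags & ~(1UL << MD_CHANGE_PENDING)) != 0;
}

/* New test: only superblock-update bits count. */
int needs_work_new(unsigned long flags)
{
	return (flags & MD_UPDATE_SB_FLAGS & ~(1UL << MD_CHANGE_PENDING)) != 0;
}
```

With a non-SB bit such as `MD_STILL_CLOSED` set, the old predicate fires spuriously while the new one stays quiet.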
+5 -8
drivers/md/raid5.c
··· 678 678 } else 679 679 init_stripe(sh, sector, previous); 680 680 } else { 681 + spin_lock(&conf->device_lock); 681 682 if (atomic_read(&sh->count)) { 682 683 BUG_ON(!list_empty(&sh->lru) 683 684 && !test_bit(STRIPE_EXPANDING, &sh->state) 684 685 && !test_bit(STRIPE_ON_UNPLUG_LIST, &sh->state) 685 - && !test_bit(STRIPE_ON_RELEASE_LIST, &sh->state)); 686 + ); 686 687 } else { 687 - spin_lock(&conf->device_lock); 688 688 if (!test_bit(STRIPE_HANDLE, &sh->state)) 689 689 atomic_inc(&conf->active_stripes); 690 - if (list_empty(&sh->lru) && 691 - !test_bit(STRIPE_ON_RELEASE_LIST, &sh->state) && 692 - !test_bit(STRIPE_EXPANDING, &sh->state)) 693 - BUG(); 690 + BUG_ON(list_empty(&sh->lru)); 694 691 list_del_init(&sh->lru); 695 692 if (sh->group) { 696 693 sh->group->stripes_cnt--; 697 694 sh->group = NULL; 698 695 } 699 - spin_unlock(&conf->device_lock); 700 696 } 697 + spin_unlock(&conf->device_lock); 701 698 } 702 699 } while (sh == NULL); 703 700 ··· 5468 5471 for (i = 0; i < *group_cnt; i++) { 5469 5472 struct r5worker_group *group; 5470 5473 5471 - group = worker_groups[i]; 5474 + group = &(*worker_groups)[i]; 5472 5475 INIT_LIST_HEAD(&group->handle_list); 5473 5476 group->conf = conf; 5474 5477 group->workers = workers + i * cnt;
+103 -18
drivers/ntb/ntb_hw.c
··· 141 141 ndev->event_cb = NULL; 142 142 } 143 143 144 + static void ntb_irq_work(unsigned long data) 145 + { 146 + struct ntb_db_cb *db_cb = (struct ntb_db_cb *)data; 147 + int rc; 148 + 149 + rc = db_cb->callback(db_cb->data, db_cb->db_num); 150 + if (rc) 151 + tasklet_schedule(&db_cb->irq_work); 152 + else { 153 + struct ntb_device *ndev = db_cb->ndev; 154 + unsigned long mask; 155 + 156 + mask = readw(ndev->reg_ofs.ldb_mask); 157 + clear_bit(db_cb->db_num * ndev->bits_per_vector, &mask); 158 + writew(mask, ndev->reg_ofs.ldb_mask); 159 + } 160 + } 161 + 144 162 /** 145 163 * ntb_register_db_callback() - register a callback for doorbell interrupt 146 164 * @ndev: pointer to ntb_device instance ··· 173 155 * RETURNS: An appropriate -ERRNO error value on error, or zero for success. 174 156 */ 175 157 int ntb_register_db_callback(struct ntb_device *ndev, unsigned int idx, 176 - void *data, void (*func)(void *data, int db_num)) 158 + void *data, int (*func)(void *data, int db_num)) 177 159 { 178 160 unsigned long mask; 179 161 ··· 184 166 185 167 ndev->db_cb[idx].callback = func; 186 168 ndev->db_cb[idx].data = data; 169 + ndev->db_cb[idx].ndev = ndev; 170 + 171 + tasklet_init(&ndev->db_cb[idx].irq_work, ntb_irq_work, 172 + (unsigned long) &ndev->db_cb[idx]); 187 173 188 174 /* unmask interrupt */ 189 175 mask = readw(ndev->reg_ofs.ldb_mask); ··· 215 193 mask = readw(ndev->reg_ofs.ldb_mask); 216 194 set_bit(idx * ndev->bits_per_vector, &mask); 217 195 writew(mask, ndev->reg_ofs.ldb_mask); 196 + 197 + tasklet_disable(&ndev->db_cb[idx].irq_work); 218 198 219 199 ndev->db_cb[idx].callback = NULL; 220 200 } ··· 702 678 return -EINVAL; 703 679 704 680 ndev->limits.max_mw = SNB_ERRATA_MAX_MW; 681 + ndev->limits.max_db_bits = SNB_MAX_DB_BITS; 705 682 ndev->reg_ofs.spad_write = ndev->mw[1].vbase + 706 683 SNB_SPAD_OFFSET; 707 684 ndev->reg_ofs.rdb = ndev->mw[1].vbase + ··· 713 688 */ 714 689 writeq(ndev->mw[1].bar_sz + 0x1000, ndev->reg_base + 715 690 SNB_PBAR4LMT_OFFSET); 
691 + /* HW errata on the Limit registers. They can only be 692 + * written when the base register is 4GB aligned and 693 + * < 32bit. This should already be the case based on the 694 + * driver defaults, but write the Limit registers first 695 + * just in case. 696 + */ 716 697 } else { 717 698 ndev->limits.max_mw = SNB_MAX_MW; 699 + 700 + /* HW Errata on bit 14 of b2bdoorbell register. Writes 701 + * will not be mirrored to the remote system. Shrink 702 + * the number of bits by one, since bit 14 is the last 703 + * bit. 704 + */ 705 + ndev->limits.max_db_bits = SNB_MAX_DB_BITS - 1; 718 706 ndev->reg_ofs.spad_write = ndev->reg_base + 719 707 SNB_B2B_SPAD_OFFSET; 720 708 ndev->reg_ofs.rdb = ndev->reg_base + ··· 737 699 * something silly 738 700 */ 739 701 writeq(0, ndev->reg_base + SNB_PBAR4LMT_OFFSET); 702 + /* HW errata on the Limit registers. They can only be 703 + * written when the base register is 4GB aligned and 704 + * < 32bit. This should already be the case based on the 705 + * driver defaults, but write the Limit registers first 706 + * just in case. 707 + */ 740 708 } 741 709 742 710 /* The Xeon errata workaround requires setting SBAR Base ··· 813 769 * have an equal amount. 814 770 */ 815 771 ndev->limits.max_spads = SNB_MAX_COMPAT_SPADS / 2; 772 + ndev->limits.max_db_bits = SNB_MAX_DB_BITS; 816 773 /* Note: The SDOORBELL is the cause of the errata. You REALLY 817 774 * don't want to touch it. 818 775 */ ··· 838 793 * have an equal amount. 
839 794 */ 840 795 ndev->limits.max_spads = SNB_MAX_COMPAT_SPADS / 2; 796 + ndev->limits.max_db_bits = SNB_MAX_DB_BITS; 841 797 ndev->reg_ofs.rdb = ndev->reg_base + SNB_PDOORBELL_OFFSET; 842 798 ndev->reg_ofs.ldb = ndev->reg_base + SNB_SDOORBELL_OFFSET; 843 799 ndev->reg_ofs.ldb_mask = ndev->reg_base + SNB_SDBMSK_OFFSET; ··· 865 819 ndev->reg_ofs.lnk_stat = ndev->reg_base + SNB_SLINK_STATUS_OFFSET; 866 820 ndev->reg_ofs.spci_cmd = ndev->reg_base + SNB_PCICMD_OFFSET; 867 821 868 - ndev->limits.max_db_bits = SNB_MAX_DB_BITS; 869 822 ndev->limits.msix_cnt = SNB_MSIX_CNT; 870 823 ndev->bits_per_vector = SNB_DB_BITS_PER_VEC; 871 824 ··· 979 934 { 980 935 struct ntb_db_cb *db_cb = data; 981 936 struct ntb_device *ndev = db_cb->ndev; 937 + unsigned long mask; 982 938 983 939 dev_dbg(&ndev->pdev->dev, "MSI-X irq %d received for DB %d\n", irq, 984 940 db_cb->db_num); 985 941 986 - if (db_cb->callback) 987 - db_cb->callback(db_cb->data, db_cb->db_num); 942 + mask = readw(ndev->reg_ofs.ldb_mask); 943 + set_bit(db_cb->db_num * ndev->bits_per_vector, &mask); 944 + writew(mask, ndev->reg_ofs.ldb_mask); 945 + 946 + tasklet_schedule(&db_cb->irq_work); 988 947 989 948 /* No need to check for the specific HB irq, any interrupt means 990 949 * we're connected. ··· 1004 955 { 1005 956 struct ntb_db_cb *db_cb = data; 1006 957 struct ntb_device *ndev = db_cb->ndev; 958 + unsigned long mask; 1007 959 1008 960 dev_dbg(&ndev->pdev->dev, "MSI-X irq %d received for DB %d\n", irq, 1009 961 db_cb->db_num); 1010 962 1011 - if (db_cb->callback) 1012 - db_cb->callback(db_cb->data, db_cb->db_num); 963 + mask = readw(ndev->reg_ofs.ldb_mask); 964 + set_bit(db_cb->db_num * ndev->bits_per_vector, &mask); 965 + writew(mask, ndev->reg_ofs.ldb_mask); 966 + 967 + tasklet_schedule(&db_cb->irq_work); 1013 968 1014 969 /* On Sandybridge, there are 16 bits in the interrupt register 1015 970 * but only 4 vectors. 
So, 5 bits are assigned to the first 3 ··· 1039 986 dev_err(&ndev->pdev->dev, "Error determining link status\n"); 1040 987 1041 988 /* bit 15 is always the link bit */ 1042 - writew(1 << ndev->limits.max_db_bits, ndev->reg_ofs.ldb); 989 + writew(1 << SNB_LINK_DB, ndev->reg_ofs.ldb); 1043 990 1044 991 return IRQ_HANDLED; 1045 992 } ··· 1128 1075 "Only %d MSI-X vectors. Limiting the number of queues to that number.\n", 1129 1076 rc); 1130 1077 msix_entries = rc; 1078 + 1079 + rc = pci_enable_msix(pdev, ndev->msix_entries, msix_entries); 1080 + if (rc) 1081 + goto err1; 1131 1082 } 1132 1083 1133 1084 for (i = 0; i < msix_entries; i++) { ··· 1233 1176 */ 1234 1177 if (ndev->hw_type == BWD_HW) 1235 1178 writeq(~0, ndev->reg_ofs.ldb_mask); 1236 - else 1237 - writew(~(1 << ndev->limits.max_db_bits), 1238 - ndev->reg_ofs.ldb_mask); 1179 + else { 1180 + u16 var = 1 << SNB_LINK_DB; 1181 + writew(~var, ndev->reg_ofs.ldb_mask); 1182 + } 1239 1183 1240 1184 rc = ntb_setup_msix(ndev); 1241 1185 if (!rc) ··· 1344 1286 } 1345 1287 } 1346 1288 1289 + static void ntb_hw_link_up(struct ntb_device *ndev) 1290 + { 1291 + if (ndev->conn_type == NTB_CONN_TRANSPARENT) 1292 + ntb_link_event(ndev, NTB_LINK_UP); 1293 + else { 1294 + u32 ntb_cntl; 1295 + 1296 + /* Let's bring the NTB link up */ 1297 + ntb_cntl = readl(ndev->reg_ofs.lnk_cntl); 1298 + ntb_cntl &= ~(NTB_CNTL_LINK_DISABLE | NTB_CNTL_CFG_LOCK); 1299 + ntb_cntl |= NTB_CNTL_P2S_BAR23_SNOOP | NTB_CNTL_S2P_BAR23_SNOOP; 1300 + ntb_cntl |= NTB_CNTL_P2S_BAR45_SNOOP | NTB_CNTL_S2P_BAR45_SNOOP; 1301 + writel(ntb_cntl, ndev->reg_ofs.lnk_cntl); 1302 + } 1303 + } 1304 + 1305 + static void ntb_hw_link_down(struct ntb_device *ndev) 1306 + { 1307 + u32 ntb_cntl; 1308 + 1309 + if (ndev->conn_type == NTB_CONN_TRANSPARENT) { 1310 + ntb_link_event(ndev, NTB_LINK_DOWN); 1311 + return; 1312 + } 1313 + 1314 + /* Bring NTB link down */ 1315 + ntb_cntl = readl(ndev->reg_ofs.lnk_cntl); 1316 + ntb_cntl &= ~(NTB_CNTL_P2S_BAR23_SNOOP | 
NTB_CNTL_S2P_BAR23_SNOOP); 1317 + ntb_cntl &= ~(NTB_CNTL_P2S_BAR45_SNOOP | NTB_CNTL_S2P_BAR45_SNOOP); 1318 + ntb_cntl |= NTB_CNTL_LINK_DISABLE | NTB_CNTL_CFG_LOCK; 1319 + writel(ntb_cntl, ndev->reg_ofs.lnk_cntl); 1320 + } 1321 + 1347 1322 static int ntb_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) 1348 1323 { 1349 1324 struct ntb_device *ndev; ··· 1465 1374 if (rc) 1466 1375 goto err6; 1467 1376 1468 - /* Let's bring the NTB link up */ 1469 - writel(NTB_CNTL_BAR23_SNOOP | NTB_CNTL_BAR45_SNOOP, 1470 - ndev->reg_ofs.lnk_cntl); 1377 + ntb_hw_link_up(ndev); 1471 1378 1472 1379 return 0; 1473 1380 ··· 1495 1406 { 1496 1407 struct ntb_device *ndev = pci_get_drvdata(pdev); 1497 1408 int i; 1498 - u32 ntb_cntl; 1499 1409 1500 - /* Bring NTB link down */ 1501 - ntb_cntl = readl(ndev->reg_ofs.lnk_cntl); 1502 - ntb_cntl |= NTB_CNTL_LINK_DISABLE; 1503 - writel(ntb_cntl, ndev->reg_ofs.lnk_cntl); 1410 + ntb_hw_link_down(ndev); 1504 1411 1505 1412 ntb_transport_free(ndev->ntb_transport); 1506 1413
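The ntb_hw.c rework above moves doorbell handling out of hard-IRQ context: the MSI-X handler masks the doorbell vector and schedules a tasklet, and the tasklet unmasks only once the client callback reports no remaining work, rescheduling itself otherwise. A userspace model of that mask/defer/unmask cycle, with the mask register replaced by a plain variable:

```c
#include <stdint.h>

#define DB_BITS_PER_VEC 5		/* SNB_DB_BITS_PER_VEC */

uint16_t ldb_mask;			/* stands in for reg_ofs.ldb_mask */
int remaining;				/* pending work items for the demo */

/* Hard-IRQ side: mask the doorbell bit, then defer to the tasklet. */
void irq_mask_db(int db_num)
{
	ldb_mask |= 1u << (db_num * DB_BITS_PER_VEC);
}

/* Demo callback: returns non-zero while work is still pending. */
int drain_one(int db_num)
{
	(void)db_num;
	return --remaining > 0;
}

/* Tasklet side: returns 1 if it must reschedule itself (work left),
 * 0 once the callback is drained and the doorbell is unmasked again. */
int tasklet_body(int db_num, int (*callback)(int))
{
	if (callback(db_num))
		return 1;	/* more work: stay masked, run again */

	ldb_mask &= ~(1u << (db_num * DB_BITS_PER_VEC));	/* unmask */
	return 0;
}
```

The interrupt stays masked across reschedules, so a busy doorbell cannot storm the CPU; it is re-armed only when the queue is empty.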
+4 -3
drivers/ntb/ntb_hw.h
··· 106 106 }; 107 107 108 108 struct ntb_db_cb { 109 - void (*callback) (void *data, int db_num); 109 + int (*callback)(void *data, int db_num); 110 110 unsigned int db_num; 111 111 void *data; 112 112 struct ntb_device *ndev; 113 + struct tasklet_struct irq_work; 113 114 }; 114 115 115 116 struct ntb_device { ··· 229 228 void ntb_unregister_transport(struct ntb_device *ndev); 230 229 void ntb_set_mw_addr(struct ntb_device *ndev, unsigned int mw, u64 addr); 231 230 int ntb_register_db_callback(struct ntb_device *ndev, unsigned int idx, 232 - void *data, void (*db_cb_func) (void *data, 233 - int db_num)); 231 + void *data, int (*db_cb_func)(void *data, 232 + int db_num)); 234 233 void ntb_unregister_db_callback(struct ntb_device *ndev, unsigned int idx); 235 234 int ntb_register_event_callback(struct ntb_device *ndev, 236 235 void (*event_cb_func) (void *handle,
+8 -8
drivers/ntb/ntb_regs.h
··· 55 55 #define SNB_MAX_COMPAT_SPADS 16 56 56 /* Reserve the uppermost bit for link interrupt */ 57 57 #define SNB_MAX_DB_BITS 15 58 + #define SNB_LINK_DB 15 58 59 #define SNB_DB_BITS_PER_VEC 5 59 60 #define SNB_MAX_MW 2 60 61 #define SNB_ERRATA_MAX_MW 1 ··· 76 75 #define SNB_SBAR2XLAT_OFFSET 0x0030 77 76 #define SNB_SBAR4XLAT_OFFSET 0x0038 78 77 #define SNB_SBAR0BASE_OFFSET 0x0040 79 - #define SNB_SBAR0BASE_OFFSET 0x0040 80 - #define SNB_SBAR2BASE_OFFSET 0x0048 81 - #define SNB_SBAR4BASE_OFFSET 0x0050 82 78 #define SNB_SBAR2BASE_OFFSET 0x0048 83 79 #define SNB_SBAR4BASE_OFFSET 0x0050 84 80 #define SNB_NTBCNTL_OFFSET 0x0058 ··· 143 145 #define BWD_LTSSMSTATEJMP_FORCEDETECT (1 << 2) 144 146 #define BWD_IBIST_ERR_OFLOW 0x7FFF7FFF 145 147 146 - #define NTB_CNTL_CFG_LOCK (1 << 0) 147 - #define NTB_CNTL_LINK_DISABLE (1 << 1) 148 - #define NTB_CNTL_BAR23_SNOOP (1 << 2) 149 - #define NTB_CNTL_BAR45_SNOOP (1 << 6) 150 - #define BWD_CNTL_LINK_DOWN (1 << 16) 148 + #define NTB_CNTL_CFG_LOCK (1 << 0) 149 + #define NTB_CNTL_LINK_DISABLE (1 << 1) 150 + #define NTB_CNTL_S2P_BAR23_SNOOP (1 << 2) 151 + #define NTB_CNTL_P2S_BAR23_SNOOP (1 << 4) 152 + #define NTB_CNTL_S2P_BAR45_SNOOP (1 << 6) 153 + #define NTB_CNTL_P2S_BAR45_SNOOP (1 << 8) 154 + #define BWD_CNTL_LINK_DOWN (1 << 16) 151 155 152 156 #define NTB_PPD_OFFSET 0x00D4 153 157 #define SNB_PPD_CONN_TYPE 0x0003
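The split of the old `NTB_CNTL_BAR23_SNOOP`/`NTB_CNTL_BAR45_SNOOP` bits into separate S2P/P2S snoop bits is what the new `ntb_hw_link_up()`/`ntb_hw_link_down()` helpers in ntb_hw.c manipulate. A sketch of that control-word handling, using the bit values defined above and modeling `readl()`/`writel()` as plain values:

```c
#include <stdint.h>

/* Bit values copied from the ntb_regs.h hunk above. */
#define NTB_CNTL_CFG_LOCK        (1 << 0)
#define NTB_CNTL_LINK_DISABLE    (1 << 1)
#define NTB_CNTL_S2P_BAR23_SNOOP (1 << 2)
#define NTB_CNTL_P2S_BAR23_SNOOP (1 << 4)
#define NTB_CNTL_S2P_BAR45_SNOOP (1 << 6)
#define NTB_CNTL_P2S_BAR45_SNOOP (1 << 8)

/* Bring the link up: unlock, enable, snoop both directions on both BARs. */
uint32_t link_up(uint32_t cntl)
{
	cntl &= ~(NTB_CNTL_LINK_DISABLE | NTB_CNTL_CFG_LOCK);
	cntl |= NTB_CNTL_P2S_BAR23_SNOOP | NTB_CNTL_S2P_BAR23_SNOOP;
	cntl |= NTB_CNTL_P2S_BAR45_SNOOP | NTB_CNTL_S2P_BAR45_SNOOP;
	return cntl;
}

/* Bring the link down: drop the snoop bits, disable and lock. */
uint32_t link_down(uint32_t cntl)
{
	cntl &= ~(NTB_CNTL_P2S_BAR23_SNOOP | NTB_CNTL_S2P_BAR23_SNOOP);
	cntl &= ~(NTB_CNTL_P2S_BAR45_SNOOP | NTB_CNTL_S2P_BAR45_SNOOP);
	cntl |= NTB_CNTL_LINK_DISABLE | NTB_CNTL_CFG_LOCK;
	return cntl;
}
```

Note the two helpers are exact inverses over these six bits, which is what lets probe and remove restore the register to a sane state.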
+41 -36
drivers/ntb/ntb_transport.c
··· 119 119 120 120 void (*rx_handler) (struct ntb_transport_qp *qp, void *qp_data, 121 121 void *data, int len); 122 - struct tasklet_struct rx_work; 123 122 struct list_head rx_pend_q; 124 123 struct list_head rx_free_q; 125 124 spinlock_t ntb_rx_pend_q_lock; ··· 583 584 return 0; 584 585 } 585 586 586 - static void ntb_qp_link_cleanup(struct work_struct *work) 587 + static void ntb_qp_link_cleanup(struct ntb_transport_qp *qp) 587 588 { 588 - struct ntb_transport_qp *qp = container_of(work, 589 - struct ntb_transport_qp, 590 - link_cleanup); 591 589 struct ntb_transport *nt = qp->transport; 592 590 struct pci_dev *pdev = ntb_query_pdev(nt->ndev); 593 591 ··· 598 602 599 603 dev_info(&pdev->dev, "qp %d: Link Down\n", qp->qp_num); 600 604 qp->qp_link = NTB_LINK_DOWN; 605 + } 606 + 607 + static void ntb_qp_link_cleanup_work(struct work_struct *work) 608 + { 609 + struct ntb_transport_qp *qp = container_of(work, 610 + struct ntb_transport_qp, 611 + link_cleanup); 612 + struct ntb_transport *nt = qp->transport; 613 + 614 + ntb_qp_link_cleanup(qp); 601 615 602 616 if (nt->transport_link == NTB_LINK_UP) 603 617 schedule_delayed_work(&qp->link_work, ··· 619 613 schedule_work(&qp->link_cleanup); 620 614 } 621 615 622 - static void ntb_transport_link_cleanup(struct work_struct *work) 616 + static void ntb_transport_link_cleanup(struct ntb_transport *nt) 623 617 { 624 - struct ntb_transport *nt = container_of(work, struct ntb_transport, 625 - link_cleanup); 626 618 int i; 619 + 620 + /* Pass along the info to any clients */ 621 + for (i = 0; i < nt->max_qps; i++) 622 + if (!test_bit(i, &nt->qp_bitmap)) 623 + ntb_qp_link_cleanup(&nt->qps[i]); 627 624 628 625 if (nt->transport_link == NTB_LINK_DOWN) 629 626 cancel_delayed_work_sync(&nt->link_work); 630 627 else 631 628 nt->transport_link = NTB_LINK_DOWN; 632 - 633 - /* Pass along the info to any clients */ 634 - for (i = 0; i < nt->max_qps; i++) 635 - if (!test_bit(i, &nt->qp_bitmap)) 636 - ntb_qp_link_down(&nt->qps[i]); 637 
629 638 630 /* The scratchpad registers keep the values if the remote side 639 631 * goes down, blast them now to give them a sane value the next ··· 639 635 */ 640 636 for (i = 0; i < MAX_SPAD; i++) 641 637 ntb_write_local_spad(nt->ndev, i, 0); 638 + } 639 + 640 + static void ntb_transport_link_cleanup_work(struct work_struct *work) 641 + { 642 + struct ntb_transport *nt = container_of(work, struct ntb_transport, 643 + link_cleanup); 644 + 645 + ntb_transport_link_cleanup(nt); 642 646 } 643 647 644 648 static void ntb_transport_event_callback(void *data, enum ntb_hw_event event) ··· 892 880 } 893 881 894 882 INIT_DELAYED_WORK(&qp->link_work, ntb_qp_link_work); 895 - INIT_WORK(&qp->link_cleanup, ntb_qp_link_cleanup); 883 + INIT_WORK(&qp->link_cleanup, ntb_qp_link_cleanup_work); 896 884 897 885 spin_lock_init(&qp->ntb_rx_pend_q_lock); 898 886 spin_lock_init(&qp->ntb_rx_free_q_lock); ··· 948 936 } 949 937 950 938 INIT_DELAYED_WORK(&nt->link_work, ntb_transport_link_work); 951 - INIT_WORK(&nt->link_cleanup, ntb_transport_link_cleanup); 939 + INIT_WORK(&nt->link_cleanup, ntb_transport_link_cleanup_work); 952 940 953 941 rc = ntb_register_event_callback(nt->ndev, 954 942 ntb_transport_event_callback); ··· 984 972 struct ntb_device *ndev = nt->ndev; 985 973 int i; 986 974 987 - nt->transport_link = NTB_LINK_DOWN; 975 + ntb_transport_link_cleanup(nt); 988 976 989 977 /* verify that all the qp's are freed */ 990 978 for (i = 0; i < nt->max_qps; i++) { ··· 1200 1188 goto out; 1201 1189 } 1202 1190 1203 - static void ntb_transport_rx(unsigned long data) 1191 + static int ntb_transport_rxc_db(void *data, int db_num) 1204 1192 { 1205 - struct ntb_transport_qp *qp = (struct ntb_transport_qp *)data; 1193 + struct ntb_transport_qp *qp = data; 1206 1194 int rc, i; 1195 + 1196 + dev_dbg(&ntb_query_pdev(qp->ndev)->dev, "%s: doorbell %d received\n", 1197 + __func__, db_num); 1207 1198 1208 1199 /* Limit the number of packets processed in a single interrupt to 1209 1200 * provide 
fairness to others ··· 1219 1204 1220 1205 if (qp->dma_chan) 1221 1206 dma_async_issue_pending(qp->dma_chan); 1222 - } 1223 1207 1224 - static void ntb_transport_rxc_db(void *data, int db_num) 1225 - { 1226 - struct ntb_transport_qp *qp = data; 1227 - 1228 - dev_dbg(&ntb_query_pdev(qp->ndev)->dev, "%s: doorbell %d received\n", 1229 - __func__, db_num); 1230 - 1231 - tasklet_schedule(&qp->rx_work); 1208 + return i; 1232 1209 } 1233 1210 1234 1211 static void ntb_tx_copy_callback(void *data) ··· 1439 1432 qp->tx_handler = handlers->tx_handler; 1440 1433 qp->event_handler = handlers->event_handler; 1441 1434 1435 + dmaengine_get(); 1442 1436 qp->dma_chan = dma_find_channel(DMA_MEMCPY); 1443 - if (!qp->dma_chan) 1437 + if (!qp->dma_chan) { 1438 + dmaengine_put(); 1444 1439 dev_info(&pdev->dev, "Unable to allocate DMA channel, using CPU instead\n"); 1445 - else 1446 - dmaengine_get(); 1440 + } 1447 1441 1448 1442 for (i = 0; i < NTB_QP_DEF_NUM_ENTRIES; i++) { 1449 1443 entry = kzalloc(sizeof(struct ntb_queue_entry), GFP_ATOMIC); ··· 1466 1458 &qp->tx_free_q); 1467 1459 } 1468 1460 1469 - tasklet_init(&qp->rx_work, ntb_transport_rx, (unsigned long) qp); 1470 - 1471 1461 rc = ntb_register_db_callback(qp->ndev, free_queue, qp, 1472 1462 ntb_transport_rxc_db); 1473 1463 if (rc) 1474 - goto err3; 1464 + goto err2; 1475 1465 1476 1466 dev_info(&pdev->dev, "NTB Transport QP %d created\n", qp->qp_num); 1477 1467 1478 1468 return qp; 1479 1469 1480 - err3: 1481 - tasklet_disable(&qp->rx_work); 1482 1470 err2: 1483 1471 while ((entry = ntb_list_rm(&qp->ntb_tx_free_q_lock, &qp->tx_free_q))) 1484 1472 kfree(entry); 1485 1473 err1: 1486 1474 while ((entry = ntb_list_rm(&qp->ntb_rx_free_q_lock, &qp->rx_free_q))) 1487 1475 kfree(entry); 1476 + if (qp->dma_chan) 1477 + dmaengine_put(); 1488 1478 set_bit(free_queue, &nt->qp_bitmap); 1489 1479 err: 1490 1480 return NULL; ··· 1521 1515 } 1522 1516 1523 1517 ntb_unregister_db_callback(qp->ndev, qp->qp_num); 1524 - 
tasklet_disable(&qp->rx_work); 1525 1518 1526 1519 cancel_delayed_work_sync(&qp->link_work); 1527 1520
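With the rx tasklet gone from ntb_transport.c, `ntb_transport_rxc_db()` becomes the doorbell callback itself: it drains at most a fixed budget of packets per invocation and returns the count handled, so the hardware layer's tasklet keeps rescheduling while the return value is non-zero. A sketch of that budgeted-drain contract (the queue is just a counter here, and `RX_BUDGET` is an illustrative limit, not the driver's):

```c
#define RX_BUDGET 4	/* illustrative per-invocation packet limit */

int pending_pkts;	/* stands in for the rx ring */

/* Process up to RX_BUDGET packets; the return value tells the caller
 * whether to reschedule (non-zero) or unmask the doorbell (zero). */
int rxc_db_model(void)
{
	int i;

	for (i = 0; i < RX_BUDGET; i++) {
		if (!pending_pkts)
			break;		/* ring empty: stop early */
		pending_pkts--;		/* "process" one packet */
	}
	return i;			/* packets handled this round */
}
```

Capping the per-call work provides fairness to other tasklets, and the extra zero-returning pass at the end is what finally re-arms the interrupt.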
-4
drivers/pci/quirks.c
··· 9 9 * 10 10 * Init/reset quirks for USB host controllers should be in the 11 11 * USB quirks file, where their drivers can access reuse it. 12 - * 13 - * The bridge optimization stuff has been removed. If you really 14 - * have a silly BIOS which is unable to set your host bridge right, 15 - * use the PowerTweak utility (see http://powertweak.sourceforge.net). 16 12 */ 17 13 18 14 #include <linux/types.h>
+1
drivers/platform/Kconfig
··· 5 5 source "drivers/platform/goldfish/Kconfig" 6 6 endif 7 7 8 + source "drivers/platform/chrome/Kconfig"
+1
drivers/platform/Makefile
··· 5 5 obj-$(CONFIG_X86) += x86/ 6 6 obj-$(CONFIG_OLPC) += olpc/ 7 7 obj-$(CONFIG_GOLDFISH) += goldfish/ 8 + obj-$(CONFIG_CHROME_PLATFORMS) += chrome/
+28
drivers/platform/chrome/Kconfig
··· 1 + # 2 + # Platform support for Chrome OS hardware (Chromebooks and Chromeboxes) 3 + # 4 + 5 + menuconfig CHROME_PLATFORMS 6 + bool "Platform support for Chrome hardware" 7 + depends on X86 8 + ---help--- 9 + Say Y here to get to see options for platform support for 10 + various Chromebooks and Chromeboxes. This option alone does 11 + not add any kernel code. 12 + 13 + If you say N, all options in this submenu will be skipped and disabled. 14 + 15 + if CHROME_PLATFORMS 16 + 17 + config CHROMEOS_LAPTOP 18 + tristate "Chrome OS Laptop" 19 + depends on I2C 20 + depends on DMI 21 + ---help--- 22 + This driver instantiates i2c and smbus devices such as 23 + light sensors and touchpads. 24 + 25 + If you have a supported Chromebook, choose Y or M here. 26 + The module will be called chromeos_laptop. 27 + 28 + endif # CHROME_PLATFORMS
+2
drivers/platform/chrome/Makefile
··· 1 + 2 + obj-$(CONFIG_CHROMEOS_LAPTOP) += chromeos_laptop.o
-11
drivers/platform/x86/Kconfig
··· 79 79 80 80 If you have an ACPI-compatible ASUS laptop, say Y or M here. 81 81 82 - config CHROMEOS_LAPTOP 83 - tristate "Chrome OS Laptop" 84 - depends on I2C 85 - depends on DMI 86 - ---help--- 87 - This driver instantiates i2c and smbus devices such as 88 - light sensors and touchpads. 89 - 90 - If you have a supported Chromebook, choose Y or M here. 91 - The module will be called chromeos_laptop. 92 - 93 82 config DELL_LAPTOP 94 83 tristate "Dell Laptop Extras" 95 84 depends on X86
-1
drivers/platform/x86/Makefile
··· 50 50 obj-$(CONFIG_INTEL_OAKTRAIL) += intel_oaktrail.o 51 51 obj-$(CONFIG_SAMSUNG_Q10) += samsung-q10.o 52 52 obj-$(CONFIG_APPLE_GMUX) += apple-gmux.o 53 - obj-$(CONFIG_CHROMEOS_LAPTOP) += chromeos_laptop.o 54 53 obj-$(CONFIG_INTEL_RST) += intel-rst.o 55 54 obj-$(CONFIG_INTEL_SMARTCONNECT) += intel-smartconnect.o 56 55
+2 -3
drivers/platform/x86/asus-laptop.c
··· 1494 1494 int error; 1495 1495 1496 1496 input = input_allocate_device(); 1497 - if (!input) { 1498 - pr_warn("Unable to allocate input device\n"); 1497 + if (!input) 1499 1498 return -ENOMEM; 1500 - } 1499 + 1501 1500 input->name = "Asus Laptop extra buttons"; 1502 1501 input->phys = ASUS_LAPTOP_FILE "/input0"; 1503 1502 input->id.bustype = BUS_HOST;
drivers/platform/x86/chromeos_laptop.c drivers/platform/chrome/chromeos_laptop.c
+288
drivers/platform/x86/dell-laptop.c
··· 21 21 #include <linux/err.h> 22 22 #include <linux/dmi.h> 23 23 #include <linux/io.h> 24 + #include <linux/rfkill.h> 24 25 #include <linux/power_supply.h> 25 26 #include <linux/acpi.h> 26 27 #include <linux/mm.h> ··· 90 89 91 90 static struct platform_device *platform_device; 92 91 static struct backlight_device *dell_backlight_device; 92 + static struct rfkill *wifi_rfkill; 93 + static struct rfkill *bluetooth_rfkill; 94 + static struct rfkill *wwan_rfkill; 95 + static bool force_rfkill; 96 + 97 + module_param(force_rfkill, bool, 0444); 98 + MODULE_PARM_DESC(force_rfkill, "enable rfkill on non whitelisted models"); 93 99 94 100 static const struct dmi_system_id dell_device_table[] __initconst = { 95 101 { ··· 363 355 return buffer; 364 356 } 365 357 358 + /* Derived from information in DellWirelessCtl.cpp: 359 + Class 17, select 11 is radio control. It returns an array of 32-bit values. 360 + 361 + Input byte 0 = 0: Wireless information 362 + 363 + result[0]: return code 364 + result[1]: 365 + Bit 0: Hardware switch supported 366 + Bit 1: Wifi locator supported 367 + Bit 2: Wifi is supported 368 + Bit 3: Bluetooth is supported 369 + Bit 4: WWAN is supported 370 + Bit 5: Wireless keyboard supported 371 + Bits 6-7: Reserved 372 + Bit 8: Wifi is installed 373 + Bit 9: Bluetooth is installed 374 + Bit 10: WWAN is installed 375 + Bits 11-15: Reserved 376 + Bit 16: Hardware switch is on 377 + Bit 17: Wifi is blocked 378 + Bit 18: Bluetooth is blocked 379 + Bit 19: WWAN is blocked 380 + Bits 20-31: Reserved 381 + result[2]: NVRAM size in bytes 382 + result[3]: NVRAM format version number 383 + 384 + Input byte 0 = 2: Wireless switch configuration 385 + result[0]: return code 386 + result[1]: 387 + Bit 0: Wifi controlled by switch 388 + Bit 1: Bluetooth controlled by switch 389 + Bit 2: WWAN controlled by switch 390 + Bits 3-6: Reserved 391 + Bit 7: Wireless switch config locked 392 + Bit 8: Wifi locator enabled 393 + Bits 9-14: Reserved 394 + Bit 15: Wifi locator 
setting locked 395 + Bits 16-31: Reserved 396 + */ 397 + 398 + static int dell_rfkill_set(void *data, bool blocked) 399 + { 400 + int disable = blocked ? 1 : 0; 401 + unsigned long radio = (unsigned long)data; 402 + int hwswitch_bit = (unsigned long)data - 1; 403 + 404 + get_buffer(); 405 + dell_send_request(buffer, 17, 11); 406 + 407 + /* If the hardware switch controls this radio, and the hardware 408 + switch is disabled, always disable the radio */ 409 + if ((hwswitch_state & BIT(hwswitch_bit)) && 410 + !(buffer->output[1] & BIT(16))) 411 + disable = 1; 412 + 413 + buffer->input[0] = (1 | (radio<<8) | (disable << 16)); 414 + dell_send_request(buffer, 17, 11); 415 + 416 + release_buffer(); 417 + return 0; 418 + } 419 + 420 + /* Must be called with the buffer held */ 421 + static void dell_rfkill_update_sw_state(struct rfkill *rfkill, int radio, 422 + int status) 423 + { 424 + if (status & BIT(0)) { 425 + /* Has hw-switch, sync sw_state to BIOS */ 426 + int block = rfkill_blocked(rfkill); 427 + buffer->input[0] = (1 | (radio << 8) | (block << 16)); 428 + dell_send_request(buffer, 17, 11); 429 + } else { 430 + /* No hw-switch, sync BIOS state to sw_state */ 431 + rfkill_set_sw_state(rfkill, !!(status & BIT(radio + 16))); 432 + } 433 + } 434 + 435 + static void dell_rfkill_update_hw_state(struct rfkill *rfkill, int radio, 436 + int status) 437 + { 438 + if (hwswitch_state & (BIT(radio - 1))) 439 + rfkill_set_hw_state(rfkill, !(status & BIT(16))); 440 + } 441 + 442 + static void dell_rfkill_query(struct rfkill *rfkill, void *data) 443 + { 444 + int status; 445 + 446 + get_buffer(); 447 + dell_send_request(buffer, 17, 11); 448 + status = buffer->output[1]; 449 + 450 + dell_rfkill_update_hw_state(rfkill, (unsigned long)data, status); 451 + 452 + release_buffer(); 453 + } 454 + 455 + static const struct rfkill_ops dell_rfkill_ops = { 456 + .set_block = dell_rfkill_set, 457 + .query = dell_rfkill_query, 458 + }; 459 + 366 460 static struct dentry *dell_laptop_dir; 367 
461 368 462 static int dell_debugfs_show(struct seq_file *s, void *data) ··· 533 423 .llseek = seq_lseek, 534 424 .release = single_release, 535 425 }; 426 + 427 + static void dell_update_rfkill(struct work_struct *ignored) 428 + { 429 + int status; 430 + 431 + get_buffer(); 432 + dell_send_request(buffer, 17, 11); 433 + status = buffer->output[1]; 434 + 435 + if (wifi_rfkill) { 436 + dell_rfkill_update_hw_state(wifi_rfkill, 1, status); 437 + dell_rfkill_update_sw_state(wifi_rfkill, 1, status); 438 + } 439 + if (bluetooth_rfkill) { 440 + dell_rfkill_update_hw_state(bluetooth_rfkill, 2, status); 441 + dell_rfkill_update_sw_state(bluetooth_rfkill, 2, status); 442 + } 443 + if (wwan_rfkill) { 444 + dell_rfkill_update_hw_state(wwan_rfkill, 3, status); 445 + dell_rfkill_update_sw_state(wwan_rfkill, 3, status); 446 + } 447 + 448 + release_buffer(); 449 + } 450 + static DECLARE_DELAYED_WORK(dell_rfkill_work, dell_update_rfkill); 451 + 452 + 453 + static int __init dell_setup_rfkill(void) 454 + { 455 + int status; 456 + int ret; 457 + const char *product; 458 + 459 + /* 460 + * rfkill causes trouble on various non Latitudes, according to Dell 461 + * actually testing the rfkill functionality is only done on Latitudes. 
462 + */ 463 + product = dmi_get_system_info(DMI_PRODUCT_NAME); 464 + if (!force_rfkill && (!product || strncmp(product, "Latitude", 8))) 465 + return 0; 466 + 467 + get_buffer(); 468 + dell_send_request(buffer, 17, 11); 469 + status = buffer->output[1]; 470 + buffer->input[0] = 0x2; 471 + dell_send_request(buffer, 17, 11); 472 + hwswitch_state = buffer->output[1]; 473 + release_buffer(); 474 + 475 + if (!(status & BIT(0))) { 476 + if (force_rfkill) { 477 + /* No hwsitch, clear all hw-controlled bits */ 478 + hwswitch_state &= ~7; 479 + } else { 480 + /* rfkill is only tested on laptops with a hwswitch */ 481 + return 0; 482 + } 483 + } 484 + 485 + if ((status & (1<<2|1<<8)) == (1<<2|1<<8)) { 486 + wifi_rfkill = rfkill_alloc("dell-wifi", &platform_device->dev, 487 + RFKILL_TYPE_WLAN, 488 + &dell_rfkill_ops, (void *) 1); 489 + if (!wifi_rfkill) { 490 + ret = -ENOMEM; 491 + goto err_wifi; 492 + } 493 + ret = rfkill_register(wifi_rfkill); 494 + if (ret) 495 + goto err_wifi; 496 + } 497 + 498 + if ((status & (1<<3|1<<9)) == (1<<3|1<<9)) { 499 + bluetooth_rfkill = rfkill_alloc("dell-bluetooth", 500 + &platform_device->dev, 501 + RFKILL_TYPE_BLUETOOTH, 502 + &dell_rfkill_ops, (void *) 2); 503 + if (!bluetooth_rfkill) { 504 + ret = -ENOMEM; 505 + goto err_bluetooth; 506 + } 507 + ret = rfkill_register(bluetooth_rfkill); 508 + if (ret) 509 + goto err_bluetooth; 510 + } 511 + 512 + if ((status & (1<<4|1<<10)) == (1<<4|1<<10)) { 513 + wwan_rfkill = rfkill_alloc("dell-wwan", 514 + &platform_device->dev, 515 + RFKILL_TYPE_WWAN, 516 + &dell_rfkill_ops, (void *) 3); 517 + if (!wwan_rfkill) { 518 + ret = -ENOMEM; 519 + goto err_wwan; 520 + } 521 + ret = rfkill_register(wwan_rfkill); 522 + if (ret) 523 + goto err_wwan; 524 + } 525 + 526 + return 0; 527 + err_wwan: 528 + rfkill_destroy(wwan_rfkill); 529 + if (bluetooth_rfkill) 530 + rfkill_unregister(bluetooth_rfkill); 531 + err_bluetooth: 532 + rfkill_destroy(bluetooth_rfkill); 533 + if (wifi_rfkill) 534 + 
rfkill_unregister(wifi_rfkill); 535 + err_wifi: 536 + rfkill_destroy(wifi_rfkill); 537 + 538 + return ret; 539 + } 540 + 541 + static void dell_cleanup_rfkill(void) 542 + { 543 + if (wifi_rfkill) { 544 + rfkill_unregister(wifi_rfkill); 545 + rfkill_destroy(wifi_rfkill); 546 + } 547 + if (bluetooth_rfkill) { 548 + rfkill_unregister(bluetooth_rfkill); 549 + rfkill_destroy(bluetooth_rfkill); 550 + } 551 + if (wwan_rfkill) { 552 + rfkill_unregister(wwan_rfkill); 553 + rfkill_destroy(wwan_rfkill); 554 + } 555 + } 536 556 537 557 static int dell_send_intensity(struct backlight_device *bd) 538 558 { ··· 755 515 led_classdev_unregister(&touchpad_led); 756 516 } 757 517 518 + static bool dell_laptop_i8042_filter(unsigned char data, unsigned char str, 519 + struct serio *port) 520 + { 521 + static bool extended; 522 + 523 + if (str & 0x20) 524 + return false; 525 + 526 + if (unlikely(data == 0xe0)) { 527 + extended = true; 528 + return false; 529 + } else if (unlikely(extended)) { 530 + switch (data) { 531 + case 0x8: 532 + schedule_delayed_work(&dell_rfkill_work, 533 + round_jiffies_relative(HZ / 4)); 534 + break; 535 + } 536 + extended = false; 537 + } 538 + 539 + return false; 540 + } 541 + 758 542 static int __init dell_init(void) 759 543 { 760 544 int max_intensity = 0; ··· 821 557 } 822 558 buffer = page_address(bufferpage); 823 559 560 + ret = dell_setup_rfkill(); 561 + 562 + if (ret) { 563 + pr_warn("Unable to setup rfkill\n"); 564 + goto fail_rfkill; 565 + } 566 + 567 + ret = i8042_install_filter(dell_laptop_i8042_filter); 568 + if (ret) { 569 + pr_warn("Unable to install key filter\n"); 570 + goto fail_filter; 571 + } 572 + 824 573 if (quirks && quirks->touchpad_led) 825 574 touchpad_led_init(&platform_device->dev); 826 575 827 576 dell_laptop_dir = debugfs_create_dir("dell_laptop", NULL); 577 + if (dell_laptop_dir != NULL) 578 + debugfs_create_file("rfkill", 0444, dell_laptop_dir, NULL, 579 + &dell_debugfs_fops); 828 580 829 581 #ifdef CONFIG_ACPI 830 582 /* In 
the event of an ACPI backlight being available, don't ··· 883 603 return 0; 884 604 885 605 fail_backlight: 606 + i8042_remove_filter(dell_laptop_i8042_filter); 607 + cancel_delayed_work_sync(&dell_rfkill_work); 608 + fail_filter: 609 + dell_cleanup_rfkill(); 610 + fail_rfkill: 886 611 free_page((unsigned long)bufferpage); 887 612 fail_buffer: 888 613 platform_device_del(platform_device); ··· 905 620 debugfs_remove_recursive(dell_laptop_dir); 906 621 if (quirks && quirks->touchpad_led) 907 622 touchpad_led_exit(); 623 + i8042_remove_filter(dell_laptop_i8042_filter); 624 + cancel_delayed_work_sync(&dell_rfkill_work); 908 625 backlight_device_unregister(dell_backlight_device); 626 + dell_cleanup_rfkill(); 909 627 if (platform_device) { 910 628 platform_device_unregister(platform_device); 911 629 platform_driver_unregister(&platform_driver);
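The `dell_laptop_i8042_filter()` added above watches the keyboard byte stream for the wireless hotkey: an `0xe0` byte flags the next byte as an extended scancode, and extended code `0x8` schedules the rfkill update work. A userspace model of that two-state filter, with `schedule_delayed_work()` replaced by a counter:

```c
#include <stdbool.h>

int rfkill_updates;	/* stands in for schedule_delayed_work() */

/* Model of dell_laptop_i8042_filter(): remember an 0xe0 prefix, and on
 * extended scancode 0x8 trigger the rfkill refresh. Always returns
 * false because the filter never consumes the event. */
bool filter_model(unsigned char data)
{
	static bool extended;

	if (data == 0xe0) {
		extended = true;	/* next byte is an extended code */
		return false;
	}
	if (extended) {
		if (data == 0x8)
			rfkill_updates++;	/* wireless hotkey seen */
		extended = false;
	}
	return false;
}
```

A bare `0x8` without the prefix, or an `0xe0` followed by any other code, leaves the counter untouched, matching the switch in the driver.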
+4 -3
drivers/platform/x86/dell-wmi.c
··· 130 130 KEY_BRIGHTNESSUP, KEY_UNKNOWN, KEY_KBDILLUMTOGGLE, 131 131 KEY_UNKNOWN, KEY_SWITCHVIDEOMODE, KEY_UNKNOWN, KEY_UNKNOWN, 132 132 KEY_SWITCHVIDEOMODE, KEY_UNKNOWN, KEY_UNKNOWN, KEY_PROG2, 133 - KEY_UNKNOWN, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 133 + KEY_UNKNOWN, KEY_UNKNOWN, KEY_UNKNOWN, KEY_UNKNOWN, 134 + KEY_UNKNOWN, KEY_UNKNOWN, KEY_UNKNOWN, KEY_MICMUTE, 134 135 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 135 136 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 136 137 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ··· 140 139 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 141 140 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 142 141 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 143 - 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 144 - KEY_PROG3 142 + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 143 + 0, 0, 0, 0, 0, 0, 0, 0, 0, KEY_PROG3 145 144 }; 146 145 147 146 static struct input_dev *dell_wmi_input_dev;
+1 -3
drivers/platform/x86/eeepc-laptop.c
··· 1203 1203 int error; 1204 1204 1205 1205 input = input_allocate_device(); 1206 - if (!input) { 1207 - pr_info("Unable to allocate input device\n"); 1206 + if (!input) 1208 1207 return -ENOMEM; 1209 - } 1210 1208 1211 1209 input->name = "Asus EeePC extra buttons"; 1212 1210 input->phys = EEEPC_LAPTOP_FILE "/input0";
+13 -1
drivers/platform/x86/hp-wmi.c
··· 54 54 #define HPWMI_HARDWARE_QUERY 0x4 55 55 #define HPWMI_WIRELESS_QUERY 0x5 56 56 #define HPWMI_HOTKEY_QUERY 0xc 57 + #define HPWMI_FEATURE_QUERY 0xd 57 58 #define HPWMI_WIRELESS2_QUERY 0x1b 58 59 #define HPWMI_POSTCODEERROR_QUERY 0x2a 59 60 ··· 291 290 return ret; 292 291 293 292 return (state & 0x4) ? 1 : 0; 293 + } 294 + 295 + static int hp_wmi_bios_2009_later(void) 296 + { 297 + int state = 0; 298 + int ret = hp_wmi_perform_query(HPWMI_FEATURE_QUERY, 0, &state, 299 + sizeof(state), sizeof(state)); 300 + if (ret) 301 + return ret; 302 + 303 + return (state & 0x10) ? 1 : 0; 294 304 } 295 305 296 306 static int hp_wmi_set_block(void *data, bool blocked) ··· 883 871 gps_rfkill = NULL; 884 872 rfkill2_count = 0; 885 873 886 - if (hp_wmi_rfkill_setup(device)) 874 + if (hp_wmi_bios_2009_later() || hp_wmi_rfkill_setup(device)) 887 875 hp_wmi_rfkill2_setup(device); 888 876 889 877 err = device_create_file(&device->dev, &dev_attr_display);
+1 -3
drivers/platform/x86/ideapad-laptop.c
··· 570 570 int error; 571 571 572 572 inputdev = input_allocate_device(); 573 - if (!inputdev) { 574 - pr_info("Unable to allocate input device\n"); 573 + if (!inputdev) 575 574 return -ENOMEM; 576 - } 577 575 578 576 inputdev->name = "Ideapad extra buttons"; 579 577 inputdev->phys = "ideapad/input0";
+1 -3
drivers/platform/x86/intel_mid_powerbtn.c
··· 66 66 return -EINVAL; 67 67 68 68 input = input_allocate_device(); 69 - if (!input) { 70 - dev_err(&pdev->dev, "Input device allocation error\n"); 69 + if (!input) 71 70 return -ENOMEM; 72 - } 73 71 74 72 input->name = pdev->name; 75 73 input->phys = "power-button/input0";
+103 -14
drivers/platform/x86/intel_scu_ipc.c
··· 58 58 * message handler is called within firmware. 59 59 */ 60 60 61 - #define IPC_BASE_ADDR 0xFF11C000 /* IPC1 base register address */ 62 - #define IPC_MAX_ADDR 0x100 /* Maximum IPC regisers */ 63 61 #define IPC_WWBUF_SIZE 20 /* IPC Write buffer Size */ 64 62 #define IPC_RWBUF_SIZE 20 /* IPC Read buffer Size */ 65 - #define IPC_I2C_BASE 0xFF12B000 /* I2C control register base address */ 66 - #define IPC_I2C_MAX_ADDR 0x10 /* Maximum I2C regisers */ 63 + #define IPC_IOC 0x100 /* IPC command register IOC bit */ 64 + 65 + enum { 66 + SCU_IPC_LINCROFT, 67 + SCU_IPC_PENWELL, 68 + SCU_IPC_CLOVERVIEW, 69 + SCU_IPC_TANGIER, 70 + }; 71 + 72 + /* intel scu ipc driver data*/ 73 + struct intel_scu_ipc_pdata_t { 74 + u32 ipc_base; 75 + u32 i2c_base; 76 + u32 ipc_len; 77 + u32 i2c_len; 78 + u8 irq_mode; 79 + }; 80 + 81 + static struct intel_scu_ipc_pdata_t intel_scu_ipc_pdata[] = { 82 + [SCU_IPC_LINCROFT] = { 83 + .ipc_base = 0xff11c000, 84 + .i2c_base = 0xff12b000, 85 + .ipc_len = 0x100, 86 + .i2c_len = 0x10, 87 + .irq_mode = 0, 88 + }, 89 + [SCU_IPC_PENWELL] = { 90 + .ipc_base = 0xff11c000, 91 + .i2c_base = 0xff12b000, 92 + .ipc_len = 0x100, 93 + .i2c_len = 0x10, 94 + .irq_mode = 1, 95 + }, 96 + [SCU_IPC_CLOVERVIEW] = { 97 + .ipc_base = 0xff11c000, 98 + .i2c_base = 0xff12b000, 99 + .ipc_len = 0x100, 100 + .i2c_len = 0x10, 101 + .irq_mode = 1, 102 + }, 103 + [SCU_IPC_TANGIER] = { 104 + .ipc_base = 0xff009000, 105 + .i2c_base = 0xff00d000, 106 + .ipc_len = 0x100, 107 + .i2c_len = 0x10, 108 + .irq_mode = 0, 109 + }, 110 + }; 67 111 68 112 static int ipc_probe(struct pci_dev *dev, const struct pci_device_id *id); 69 113 static void ipc_remove(struct pci_dev *pdev); ··· 116 72 struct pci_dev *pdev; 117 73 void __iomem *ipc_base; 118 74 void __iomem *i2c_base; 75 + struct completion cmd_complete; 76 + u8 irq_mode; 119 77 }; 120 78 121 79 static struct intel_scu_ipc_dev ipcdev; /* Only one for now */ ··· 144 98 */ 145 99 static inline void ipc_command(u32 cmd) /* Send ipc 
command */ 146 100 { 101 + if (ipcdev.irq_mode) { 102 + reinit_completion(&ipcdev.cmd_complete); 103 + writel(cmd | IPC_IOC, ipcdev.ipc_base); 104 + } 147 105 writel(cmd, ipcdev.ipc_base); 148 106 } 149 107 ··· 206 156 return 0; 207 157 } 208 158 159 + /* Wait till ipc ioc interrupt is received or timeout in 3 HZ */ 160 + static inline int ipc_wait_for_interrupt(void) 161 + { 162 + int status; 163 + 164 + if (!wait_for_completion_timeout(&ipcdev.cmd_complete, 3 * HZ)) { 165 + struct device *dev = &ipcdev.pdev->dev; 166 + dev_err(dev, "IPC timed out\n"); 167 + return -ETIMEDOUT; 168 + } 169 + 170 + status = ipc_read_status(); 171 + 172 + if ((status >> 1) & 1) 173 + return -EIO; 174 + 175 + return 0; 176 + } 177 + 178 + int intel_scu_ipc_check_status(void) 179 + { 180 + return ipcdev.irq_mode ? ipc_wait_for_interrupt() : busy_loop(); 181 + } 182 + 209 183 /* Read/Write power control(PMIC in Langwell, MSIC in PenWell) registers */ 210 184 static int pwr_reg_rdwr(u16 *addr, u8 *data, u32 count, u32 op, u32 id) 211 185 { ··· 270 196 ipc_command(4 << 16 | id << 12 | 0 << 8 | op); 271 197 } 272 198 273 - err = busy_loop(); 274 - if (id == IPC_CMD_PCNTRL_R) { /* Read rbuf */ 199 + err = intel_scu_ipc_check_status(); 200 + if (!err && id == IPC_CMD_PCNTRL_R) { /* Read rbuf */ 275 201 /* Workaround: values are read as 0 without memcpy_fromio */ 276 202 memcpy_fromio(cbuf, ipcdev.ipc_base + 0x90, 16); 277 203 for (nc = 0; nc < count; nc++) ··· 465 391 return -ENODEV; 466 392 } 467 393 ipc_command(sub << 12 | cmd); 468 - err = busy_loop(); 394 + err = intel_scu_ipc_check_status(); 469 395 mutex_unlock(&ipclock); 470 396 return err; 471 397 } ··· 499 425 ipc_data_writel(*in++, 4 * i); 500 426 501 427 ipc_command((inlen << 16) | (sub << 12) | cmd); 502 - err = busy_loop(); 428 + err = intel_scu_ipc_check_status(); 503 429 504 - for (i = 0; i < outlen; i++) 505 - *out++ = ipc_data_readl(4 * i); 430 + if (!err) { 431 + for (i = 0; i < outlen; i++) 432 + *out++ = ipc_data_readl(4 
* i); 433 + } 506 434 507 435 mutex_unlock(&ipclock); 508 436 return err; ··· 567 491 */ 568 492 static irqreturn_t ioc(int irq, void *dev_id) 569 493 { 494 + if (ipcdev.irq_mode) 495 + complete(&ipcdev.cmd_complete); 496 + 570 497 return IRQ_HANDLED; 571 498 } 572 499 ··· 583 504 */ 584 505 static int ipc_probe(struct pci_dev *dev, const struct pci_device_id *id) 585 506 { 586 - int err; 507 + int err, pid; 508 + struct intel_scu_ipc_pdata_t *pdata; 587 509 resource_size_t pci_resource; 588 510 589 511 if (ipcdev.pdev) /* We support only one SCU */ 590 512 return -EBUSY; 591 513 514 + pid = id->driver_data; 515 + pdata = &intel_scu_ipc_pdata[pid]; 516 + 592 517 ipcdev.pdev = pci_dev_get(dev); 518 + ipcdev.irq_mode = pdata->irq_mode; 593 519 594 520 err = pci_enable_device(dev); 595 521 if (err) ··· 608 524 if (!pci_resource) 609 525 return -ENOMEM; 610 526 527 + init_completion(&ipcdev.cmd_complete); 528 + 611 529 if (request_irq(dev->irq, ioc, 0, "intel_scu_ipc", &ipcdev)) 612 530 return -EBUSY; 613 531 614 - ipcdev.ipc_base = ioremap_nocache(IPC_BASE_ADDR, IPC_MAX_ADDR); 532 + ipcdev.ipc_base = ioremap_nocache(pdata->ipc_base, pdata->ipc_len); 615 533 if (!ipcdev.ipc_base) 616 534 return -ENOMEM; 617 535 618 - ipcdev.i2c_base = ioremap_nocache(IPC_I2C_BASE, IPC_I2C_MAX_ADDR); 536 + ipcdev.i2c_base = ioremap_nocache(pdata->i2c_base, pdata->i2c_len); 619 537 if (!ipcdev.i2c_base) { 620 538 iounmap(ipcdev.ipc_base); 621 539 return -ENOMEM; ··· 650 564 } 651 565 652 566 static DEFINE_PCI_DEVICE_TABLE(pci_ids) = { 653 - {PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x082a)}, 567 + {PCI_VDEVICE(INTEL, 0x082a), SCU_IPC_LINCROFT}, 568 + {PCI_VDEVICE(INTEL, 0x080e), SCU_IPC_PENWELL}, 569 + {PCI_VDEVICE(INTEL, 0x08ea), SCU_IPC_CLOVERVIEW}, 570 + {PCI_VDEVICE(INTEL, 0x11a0), SCU_IPC_TANGIER}, 654 571 { 0,} 655 572 }; 656 573 MODULE_DEVICE_TABLE(pci, pci_ids);
+1 -4
drivers/platform/x86/panasonic-laptop.c
··· 490 490 int error; 491 491 492 492 input_dev = input_allocate_device(); 493 - if (!input_dev) { 494 - ACPI_DEBUG_PRINT((ACPI_DB_ERROR, 495 - "Couldn't allocate input device for hotkey")); 493 + if (!input_dev) 496 494 return -ENOMEM; 497 - } 498 495 499 496 input_dev->name = ACPI_PCC_DRIVER_NAME; 500 497 input_dev->phys = ACPI_PCC_INPUT_PHYS;
+14 -33
drivers/platform/x86/sony-laptop.c
··· 140 140 "on the model (default: no change from current value)"); 141 141 142 142 #ifdef CONFIG_PM_SLEEP 143 - static void sony_nc_kbd_backlight_resume(void); 144 143 static void sony_nc_thermal_resume(void); 145 144 #endif 146 145 static int sony_nc_kbd_backlight_setup(struct platform_device *pd, 147 146 unsigned int handle); 148 - static void sony_nc_kbd_backlight_cleanup(struct platform_device *pd); 147 + static void sony_nc_kbd_backlight_cleanup(struct platform_device *pd, 148 + unsigned int handle); 149 149 150 150 static int sony_nc_battery_care_setup(struct platform_device *pd, 151 151 unsigned int handle); ··· 304 304 KEY_FN_F10, /* 14 SONYPI_EVENT_FNKEY_F10 */ 305 305 KEY_FN_F11, /* 15 SONYPI_EVENT_FNKEY_F11 */ 306 306 KEY_FN_F12, /* 16 SONYPI_EVENT_FNKEY_F12 */ 307 - KEY_FN_F1, /* 17 SONYPI_EVENT_FNKEY_1 */ 308 - KEY_FN_F2, /* 18 SONYPI_EVENT_FNKEY_2 */ 307 + KEY_FN_1, /* 17 SONYPI_EVENT_FNKEY_1 */ 308 + KEY_FN_2, /* 18 SONYPI_EVENT_FNKEY_2 */ 309 309 KEY_FN_D, /* 19 SONYPI_EVENT_FNKEY_D */ 310 310 KEY_FN_E, /* 20 SONYPI_EVENT_FNKEY_E */ 311 311 KEY_FN_F, /* 21 SONYPI_EVENT_FNKEY_F */ ··· 1444 1444 case 0x014b: 1445 1445 case 0x014c: 1446 1446 case 0x0163: 1447 - sony_nc_kbd_backlight_cleanup(pd); 1447 + sony_nc_kbd_backlight_cleanup(pd, handle); 1448 1448 break; 1449 1449 default: 1450 1450 continue; ··· 1485 1485 case 0x0124: 1486 1486 case 0x0135: 1487 1487 sony_nc_rfkill_update(); 1488 - break; 1489 - case 0x0137: 1490 - case 0x0143: 1491 - case 0x014b: 1492 - case 0x014c: 1493 - case 0x0163: 1494 - sony_nc_kbd_backlight_resume(); 1495 1488 break; 1496 1489 default: 1497 1490 continue; ··· 1815 1822 int result; 1816 1823 int ret = 0; 1817 1824 1825 + if (kbdbl_ctl) { 1826 + pr_warn("handle 0x%.4x: keyboard backlight setup already done for 0x%.4x\n", 1827 + handle, kbdbl_ctl->handle); 1828 + return -EBUSY; 1829 + } 1830 + 1818 1831 /* verify the kbd backlight presence, these handles are not used for 1819 1832 * keyboard backlight only 1820 1833 */ 
··· 1880 1881 return ret; 1881 1882 } 1882 1883 1883 - static void sony_nc_kbd_backlight_cleanup(struct platform_device *pd) 1884 + static void sony_nc_kbd_backlight_cleanup(struct platform_device *pd, 1885 + unsigned int handle) 1884 1886 { 1885 - if (kbdbl_ctl) { 1887 + if (kbdbl_ctl && handle == kbdbl_ctl->handle) { 1886 1888 device_remove_file(&pd->dev, &kbdbl_ctl->mode_attr); 1887 1889 device_remove_file(&pd->dev, &kbdbl_ctl->timeout_attr); 1888 1890 kfree(kbdbl_ctl); 1889 1891 kbdbl_ctl = NULL; 1890 1892 } 1891 1893 } 1892 - 1893 - #ifdef CONFIG_PM_SLEEP 1894 - static void sony_nc_kbd_backlight_resume(void) 1895 - { 1896 - int ignore = 0; 1897 - 1898 - if (!kbdbl_ctl) 1899 - return; 1900 - 1901 - if (kbdbl_ctl->mode == 0) 1902 - sony_call_snc_handle(kbdbl_ctl->handle, kbdbl_ctl->base, 1903 - &ignore); 1904 - 1905 - if (kbdbl_ctl->timeout != 0) 1906 - sony_call_snc_handle(kbdbl_ctl->handle, 1907 - (kbdbl_ctl->base + 0x200) | 1908 - (kbdbl_ctl->timeout << 0x10), &ignore); 1909 - } 1910 - #endif 1911 1894 1912 1895 struct battery_care_control { 1913 1896 struct device_attribute attrs[2];
+6 -2
drivers/platform/x86/thinkpad_acpi.c
··· 6438 6438 #define TPACPI_ALSA_SHRTNAME "ThinkPad Console Audio Control" 6439 6439 #define TPACPI_ALSA_MIXERNAME TPACPI_ALSA_SHRTNAME 6440 6440 6441 - static int alsa_index = ~((1 << (SNDRV_CARDS - 3)) - 1); /* last three slots */ 6441 + #if SNDRV_CARDS <= 32 6442 + #define DEFAULT_ALSA_IDX ~((1 << (SNDRV_CARDS - 3)) - 1) 6443 + #else 6444 + #define DEFAULT_ALSA_IDX ~((1 << (32 - 3)) - 1) 6445 + #endif 6446 + static int alsa_index = DEFAULT_ALSA_IDX; /* last three slots */ 6442 6447 static char *alsa_id = "ThinkPadEC"; 6443 6448 static bool alsa_enable = SNDRV_DEFAULT_ENABLE1; 6444 6449 ··· 9168 9163 mutex_init(&tpacpi_inputdev_send_mutex); 9169 9164 tpacpi_inputdev = input_allocate_device(); 9170 9165 if (!tpacpi_inputdev) { 9171 - pr_err("unable to allocate input device\n"); 9172 9166 thinkpad_acpi_module_exit(); 9173 9167 return -ENOMEM; 9174 9168 } else {
+1 -3
drivers/platform/x86/topstar-laptop.c
··· 97 97 int error; 98 98 99 99 input = input_allocate_device(); 100 - if (!input) { 101 - pr_err("Unable to allocate input device\n"); 100 + if (!input) 102 101 return -ENOMEM; 103 - } 104 102 105 103 input->name = "Topstar Laptop extra buttons"; 106 104 input->phys = "topstar/input0";
+1 -3
drivers/platform/x86/toshiba_acpi.c
··· 975 975 u32 hci_result; 976 976 977 977 dev->hotkey_dev = input_allocate_device(); 978 - if (!dev->hotkey_dev) { 979 - pr_info("Unable to register input device\n"); 978 + if (!dev->hotkey_dev) 980 979 return -ENOMEM; 981 - } 982 980 983 981 dev->hotkey_dev->name = "Toshiba input device"; 984 982 dev->hotkey_dev->phys = "toshiba_acpi/input0";
+4 -2
drivers/platform/x86/wmi.c
··· 672 672 struct wmi_block *wblock; 673 673 674 674 wblock = dev_get_drvdata(dev); 675 - if (!wblock) 676 - return -ENOMEM; 675 + if (!wblock) { 676 + strcat(buf, "\n"); 677 + return strlen(buf); 678 + } 677 679 678 680 wmi_gtoa(wblock->gblock.guid, guid_string); 679 681
+52 -2
drivers/regulator/arizona-micsupp.c
··· 174 174 .owner = THIS_MODULE, 175 175 }; 176 176 177 + static const struct regulator_linear_range arizona_micsupp_ext_ranges[] = { 178 + REGULATOR_LINEAR_RANGE(900000, 0, 0x14, 25000), 179 + REGULATOR_LINEAR_RANGE(1500000, 0x15, 0x27, 100000), 180 + }; 181 + 182 + static const struct regulator_desc arizona_micsupp_ext = { 183 + .name = "MICVDD", 184 + .supply_name = "CPVDD", 185 + .type = REGULATOR_VOLTAGE, 186 + .n_voltages = 40, 187 + .ops = &arizona_micsupp_ops, 188 + 189 + .vsel_reg = ARIZONA_LDO2_CONTROL_1, 190 + .vsel_mask = ARIZONA_LDO2_VSEL_MASK, 191 + .enable_reg = ARIZONA_MIC_CHARGE_PUMP_1, 192 + .enable_mask = ARIZONA_CPMIC_ENA, 193 + .bypass_reg = ARIZONA_MIC_CHARGE_PUMP_1, 194 + .bypass_mask = ARIZONA_CPMIC_BYPASS, 195 + 196 + .linear_ranges = arizona_micsupp_ext_ranges, 197 + .n_linear_ranges = ARRAY_SIZE(arizona_micsupp_ext_ranges), 198 + 199 + .enable_time = 3000, 200 + 201 + .owner = THIS_MODULE, 202 + }; 203 + 177 204 static const struct regulator_init_data arizona_micsupp_default = { 178 205 .constraints = { 179 206 .valid_ops_mask = REGULATOR_CHANGE_STATUS | ··· 213 186 .num_consumer_supplies = 1, 214 187 }; 215 188 189 + static const struct regulator_init_data arizona_micsupp_ext_default = { 190 + .constraints = { 191 + .valid_ops_mask = REGULATOR_CHANGE_STATUS | 192 + REGULATOR_CHANGE_VOLTAGE | 193 + REGULATOR_CHANGE_BYPASS, 194 + .min_uV = 900000, 195 + .max_uV = 3300000, 196 + }, 197 + 198 + .num_consumer_supplies = 1, 199 + }; 200 + 216 201 static int arizona_micsupp_probe(struct platform_device *pdev) 217 202 { 218 203 struct arizona *arizona = dev_get_drvdata(pdev->dev.parent); 204 + const struct regulator_desc *desc; 219 205 struct regulator_config config = { }; 220 206 struct arizona_micsupp *micsupp; 221 207 int ret; ··· 247 207 * default init_data for it. This will be overridden with 248 208 * platform data if provided. 
249 209 */ 250 - micsupp->init_data = arizona_micsupp_default; 210 + switch (arizona->type) { 211 + case WM5110: 212 + desc = &arizona_micsupp_ext; 213 + micsupp->init_data = arizona_micsupp_ext_default; 214 + break; 215 + default: 216 + desc = &arizona_micsupp; 217 + micsupp->init_data = arizona_micsupp_default; 218 + break; 219 + } 220 + 251 221 micsupp->init_data.consumer_supplies = &micsupp->supply; 252 222 micsupp->supply.supply = "MICVDD"; 253 223 micsupp->supply.dev_name = dev_name(arizona->dev); ··· 276 226 ARIZONA_CPMIC_BYPASS, 0); 277 227 278 228 micsupp->regulator = devm_regulator_register(&pdev->dev, 279 - &arizona_micsupp, 229 + desc, 280 230 &config); 281 231 if (IS_ERR(micsupp->regulator)) { 282 232 ret = PTR_ERR(micsupp->regulator);
+3
drivers/regulator/core.c
··· 2184 2184 struct regulator_ops *ops = rdev->desc->ops; 2185 2185 int ret; 2186 2186 2187 + if (rdev->desc->fixed_uV && rdev->desc->n_voltages == 1 && !selector) 2188 + return rdev->desc->fixed_uV; 2189 + 2187 2190 if (!ops->list_voltage || selector >= rdev->desc->n_voltages) 2188 2191 return -EINVAL; 2189 2192
+6 -1
drivers/regulator/gpio-regulator.c
··· 139 139 struct property *prop; 140 140 const char *regtype; 141 141 int proplen, gpio, i; 142 + int ret; 142 143 143 144 config = devm_kzalloc(dev, 144 145 sizeof(struct gpio_regulator_config), ··· 203 202 } 204 203 config->nr_states = i; 205 204 206 - of_property_read_string(np, "regulator-type", &regtype); 205 + ret = of_property_read_string(np, "regulator-type", &regtype); 206 + if (ret < 0) { 207 + dev_err(dev, "Missing 'regulator-type' property\n"); 208 + return ERR_PTR(-EINVAL); 209 + } 207 210 208 211 if (!strncmp("voltage", regtype, 7)) 209 212 config->type = REGULATOR_VOLTAGE;
+9 -3
drivers/regulator/pfuze100-regulator.c
··· 308 308 if (ret) 309 309 return ret; 310 310 311 - if (value & 0x0f) { 312 - dev_warn(pfuze_chip->dev, "Illegal ID: %x\n", value); 313 - return -ENODEV; 311 + switch (value & 0x0f) { 312 + /* Freescale misprogrammed 1-3% of parts prior to week 8 of 2013 as ID=8 */ 313 + case 0x8: 314 + dev_info(pfuze_chip->dev, "Assuming misprogrammed ID=0x8"); 315 + case 0x0: 316 + break; 317 + default: 318 + dev_warn(pfuze_chip->dev, "Illegal ID: %x\n", value); 319 + return -ENODEV; 314 320 } 315 321 316 322 ret = regmap_read(pfuze_chip->regmap, PFUZE100_REVID, &value);
+2
drivers/s390/block/dasd_eckd.c
··· 3224 3224 3225 3225 fcx_multitrack = private->features.feature[40] & 0x20; 3226 3226 data_size = blk_rq_bytes(req); 3227 + if (data_size % blksize) 3228 + return ERR_PTR(-EINVAL); 3227 3229 /* tpm write request add CBC data on each track boundary */ 3228 3230 if (rq_data_dir(req) == WRITE) 3229 3231 data_size += (last_trk - first_trk) * 4;
+1 -2
drivers/staging/btmtk_usb/btmtk_usb.c
··· 1284 1284 kfree_skb(skb); 1285 1285 } 1286 1286 1287 - static int btmtk_usb_send_frame(struct sk_buff *skb) 1287 + static int btmtk_usb_send_frame(struct hci_dev *hdev, struct sk_buff *skb) 1288 1288 { 1289 - struct hci_dev *hdev = (struct hci_dev *)skb->dev; 1290 1289 struct btmtk_usb_data *data = hci_get_drvdata(hdev); 1291 1290 struct usb_ctrlrequest *dr; 1292 1291 struct urb *urb;
+3 -3
drivers/staging/comedi/drivers/pcl730.c
··· 173 173 if (mask) { 174 174 if (mask & 0x00ff) 175 175 outb(s->state & 0xff, dev->iobase + reg); 176 - if ((mask & 0xff00) & (s->n_chan > 8)) 176 + if ((mask & 0xff00) && (s->n_chan > 8)) 177 177 outb((s->state >> 8) & 0xff, dev->iobase + reg + 1); 178 - if ((mask & 0xff0000) & (s->n_chan > 16)) 178 + if ((mask & 0xff0000) && (s->n_chan > 16)) 179 179 outb((s->state >> 16) & 0xff, dev->iobase + reg + 2); 180 - if ((mask & 0xff000000) & (s->n_chan > 24)) 180 + if ((mask & 0xff000000) && (s->n_chan > 24)) 181 181 outb((s->state >> 24) & 0xff, dev->iobase + reg + 3); 182 182 } 183 183
+1 -1
drivers/staging/comedi/drivers/s626.c
··· 494 494 * Private helper function: Write setpoint to an application DAC channel. 495 495 */ 496 496 static void s626_set_dac(struct comedi_device *dev, uint16_t chan, 497 - unsigned short dacdata) 497 + int16_t dacdata) 498 498 { 499 499 struct s626_private *devpriv = dev->private; 500 500 uint16_t signmask;
+1 -1
drivers/staging/comedi/drivers/vmk80xx.c
··· 465 465 unsigned char *rx_buf = devpriv->usb_rx_buf; 466 466 unsigned char *tx_buf = devpriv->usb_tx_buf; 467 467 int reg, cmd; 468 - int ret; 468 + int ret = 0; 469 469 470 470 if (devpriv->model == VMK8061_MODEL) { 471 471 reg = VMK8061_DO_REG;
+1 -2
drivers/staging/ft1000/ft1000-usb/ft1000_download.c
··· 578 578 u8 **c_file, const u8 *endpoint, bool boot_case) 579 579 { 580 580 long word_length; 581 - int status; 581 + int status = 0; 582 582 583 583 /*DEBUG("FT1000:REQUEST_CODE_SEGMENT\n");i*/ 584 584 word_length = get_request_value(ft1000dev); ··· 1074 1074 1075 1075 return status; 1076 1076 } 1077 -
+2
drivers/staging/iio/magnetometer/Kconfig
··· 6 6 config SENSORS_HMC5843 7 7 tristate "Honeywell HMC5843/5883/5883L 3-Axis Magnetometer" 8 8 depends on I2C 9 + select IIO_BUFFER 10 + select IIO_TRIGGERED_BUFFER 9 11 help 10 12 Say Y here to add support for the Honeywell HMC5843, HMC5883 and 11 13 HMC5883L 3-Axis Magnetometer (digital compass).
+3 -1
drivers/staging/imx-drm/Makefile
··· 8 8 obj-$(CONFIG_DRM_IMX_LDB) += imx-ldb.o 9 9 obj-$(CONFIG_DRM_IMX_FB_HELPER) += imx-fbdev.o 10 10 obj-$(CONFIG_DRM_IMX_IPUV3_CORE) += ipu-v3/ 11 - obj-$(CONFIG_DRM_IMX_IPUV3) += ipuv3-crtc.o ipuv3-plane.o 11 + 12 + imx-ipuv3-crtc-objs := ipuv3-crtc.o ipuv3-plane.o 13 + obj-$(CONFIG_DRM_IMX_IPUV3) += imx-ipuv3-crtc.o
+1
drivers/staging/imx-drm/imx-drm-core.c
··· 72 72 { 73 73 return crtc->pipe; 74 74 } 75 + EXPORT_SYMBOL_GPL(imx_drm_crtc_id); 75 76 76 77 static void imx_drm_driver_lastclose(struct drm_device *drm) 77 78 {
+2 -2
drivers/staging/lustre/lustre/ptlrpc/pinger.c
··· 409 409 struct l_wait_info lwi = { 0 }; 410 410 int rc = 0; 411 411 412 - if (!thread_is_init(&pinger_thread) && 413 - !thread_is_stopped(&pinger_thread)) 412 + if (thread_is_init(&pinger_thread) || 413 + thread_is_stopped(&pinger_thread)) 414 414 return -EALREADY; 415 415 416 416 ptlrpc_pinger_remove_timeouts();
+15 -13
drivers/staging/media/go7007/go7007-usb.c
··· 15 15 * Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA. 16 16 */ 17 17 18 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 19 + 18 20 #include <linux/module.h> 19 21 #include <linux/kernel.h> 20 22 #include <linux/init.h> ··· 663 661 664 662 if (usb->board->flags & GO7007_USB_EZUSB) { 665 663 /* Reset buffer in EZ-USB */ 666 - dev_dbg(go->dev, "resetting EZ-USB buffers\n"); 664 + pr_debug("resetting EZ-USB buffers\n"); 667 665 if (go7007_usb_vendor_request(go, 0x10, 0, 0, NULL, 0, 0) < 0 || 668 666 go7007_usb_vendor_request(go, 0x10, 0, 0, NULL, 0, 0) < 0) 669 667 return -1; ··· 691 689 u16 status_reg = 0; 692 690 int timeout = 500; 693 691 694 - dev_dbg(go->dev, "WriteInterrupt: %04x %04x\n", addr, data); 692 + pr_debug("WriteInterrupt: %04x %04x\n", addr, data); 695 693 696 694 for (i = 0; i < 100; ++i) { 697 695 r = usb_control_msg(usb->usbdev, ··· 736 734 int r; 737 735 int timeout = 500; 738 736 739 - dev_dbg(go->dev, "WriteInterrupt: %04x %04x\n", addr, data); 737 + pr_debug("WriteInterrupt: %04x %04x\n", addr, data); 740 738 741 739 go->usb_buf[0] = data & 0xff; 742 740 go->usb_buf[1] = data >> 8; ··· 773 771 go->interrupt_available = 1; 774 772 go->interrupt_data = __le16_to_cpu(regs[0]); 775 773 go->interrupt_value = __le16_to_cpu(regs[1]); 776 - dev_dbg(go->dev, "ReadInterrupt: %04x %04x\n", 774 + pr_debug("ReadInterrupt: %04x %04x\n", 777 775 go->interrupt_value, go->interrupt_data); 778 776 } 779 777 ··· 893 891 int transferred, pipe; 894 892 int timeout = 500; 895 893 896 - dev_dbg(go->dev, "DownloadBuffer sending %d bytes\n", len); 894 + pr_debug("DownloadBuffer sending %d bytes\n", len); 897 895 898 896 if (usb->board->flags & GO7007_USB_EZUSB) 899 897 pipe = usb_sndbulkpipe(usb->usbdev, 2); ··· 979 977 !(msgs[i].flags & I2C_M_RD) && 980 978 (msgs[i + 1].flags & I2C_M_RD)) { 981 979 #ifdef GO7007_I2C_DEBUG 982 - dev_dbg(go->dev, "i2c write/read %d/%d bytes on %02x\n", 980 + pr_debug("i2c write/read %d/%d bytes on %02x\n", 983 981 
msgs[i].len, msgs[i + 1].len, msgs[i].addr); 984 982 #endif 985 983 buf[0] = 0x01; ··· 990 988 buf[buf_len++] = msgs[++i].len; 991 989 } else if (msgs[i].flags & I2C_M_RD) { 992 990 #ifdef GO7007_I2C_DEBUG 993 - dev_dbg(go->dev, "i2c read %d bytes on %02x\n", 991 + pr_debug("i2c read %d bytes on %02x\n", 994 992 msgs[i].len, msgs[i].addr); 995 993 #endif 996 994 buf[0] = 0x01; ··· 1000 998 buf_len = 4; 1001 999 } else { 1002 1000 #ifdef GO7007_I2C_DEBUG 1003 - dev_dbg(go->dev, "i2c write %d bytes on %02x\n", 1001 + pr_debug("i2c write %d bytes on %02x\n", 1004 1002 msgs[i].len, msgs[i].addr); 1005 1003 #endif 1006 1004 buf[0] = 0x00; ··· 1059 1057 char *name; 1060 1058 int video_pipe, i, v_urb_len; 1061 1059 1062 - dev_dbg(go->dev, "probing new GO7007 USB board\n"); 1060 + pr_debug("probing new GO7007 USB board\n"); 1063 1061 1064 1062 switch (id->driver_info) { 1065 1063 case GO7007_BOARDID_MATRIX_II: ··· 1099 1097 board = &board_px_tv402u; 1100 1098 break; 1101 1099 case GO7007_BOARDID_LIFEVIEW_LR192: 1102 - dev_err(go->dev, "The Lifeview TV Walker Ultra is not supported. Sorry!\n"); 1100 + dev_err(&intf->dev, "The Lifeview TV Walker Ultra is not supported. 
Sorry!\n"); 1103 1101 return -ENODEV; 1104 1102 name = "Lifeview TV Walker Ultra"; 1105 1103 board = &board_lifeview_lr192; 1106 1104 break; 1107 1105 case GO7007_BOARDID_SENSORAY_2250: 1108 - dev_info(go->dev, "Sensoray 2250 found\n"); 1106 + dev_info(&intf->dev, "Sensoray 2250 found\n"); 1109 1107 name = "Sensoray 2250/2251"; 1110 1108 board = &board_sensoray_2250; 1111 1109 break; ··· 1114 1112 board = &board_ads_usbav_709; 1115 1113 break; 1116 1114 default: 1117 - dev_err(go->dev, "unknown board ID %d!\n", 1115 + dev_err(&intf->dev, "unknown board ID %d!\n", 1118 1116 (unsigned int)id->driver_info); 1119 1117 return -ENODEV; 1120 1118 } ··· 1249 1247 sizeof(go->name)); 1250 1248 break; 1251 1249 default: 1252 - dev_dbg(go->dev, "unable to detect tuner type!\n"); 1250 + pr_debug("unable to detect tuner type!\n"); 1253 1251 break; 1254 1252 } 1255 1253 /* Configure tuner mode selection inputs connected
+2 -1
drivers/staging/nvec/nvec.c
··· 681 681 dev_err(nvec->dev, 682 682 "RX buffer overflow on %p: " 683 683 "Trying to write byte %u of %u\n", 684 - nvec->rx, nvec->rx->pos, NVEC_MSG_SIZE); 684 + nvec->rx, nvec->rx ? nvec->rx->pos : 0, 685 + NVEC_MSG_SIZE); 685 686 break; 686 687 default: 687 688 nvec->state = 0;
+3
drivers/staging/rtl8188eu/core/rtw_ap.c
··· 1115 1115 return _FAIL; 1116 1116 } 1117 1117 1118 + /* fix bug of flush_cam_entry at STOP AP mode */ 1119 + psta->state |= WIFI_AP_STATE; 1120 + rtw_indicate_connect(padapter); 1118 1121 pmlmepriv->cur_network.join_res = true;/* for check if already set beacon */ 1119 1122 return ret; 1120 1123 }
+1 -1
drivers/staging/tidspbridge/Kconfig
··· 4 4 5 5 menuconfig TIDSPBRIDGE 6 6 tristate "DSP Bridge driver" 7 - depends on ARCH_OMAP3 && !ARCH_MULTIPLATFORM 7 + depends on ARCH_OMAP3 && !ARCH_MULTIPLATFORM && BROKEN 8 8 select MAILBOX 9 9 select OMAP2PLUS_MBOX 10 10 help
+2 -1
drivers/staging/vt6655/hostap.c
··· 143 143 DBG_PRT(MSG_LEVEL_DEBUG, KERN_INFO "%s: Netdevice %s unregistered\n", 144 144 pDevice->dev->name, pDevice->apdev->name); 145 145 } 146 - free_netdev(pDevice->apdev); 146 + if (pDevice->apdev) 147 + free_netdev(pDevice->apdev); 147 148 pDevice->apdev = NULL; 148 149 pDevice->bEnable8021x = false; 149 150 pDevice->bEnableHostWEP = false;
+11
drivers/staging/vt6656/baseband.c
··· 939 939 u8 * pbyAgc; 940 940 u16 wLengthAgc; 941 941 u8 abyArray[256]; 942 + u8 data; 942 943 943 944 ntStatus = CONTROLnsRequestIn(pDevice, 944 945 MESSAGE_TYPE_READ, ··· 1105 1104 ControlvWriteByte(pDevice,MESSAGE_REQUEST_BBREG,0x0D,0x01); 1106 1105 1107 1106 RFbRFTableDownload(pDevice); 1107 + 1108 + /* Fix for TX USB resets from vendors driver */ 1109 + CONTROLnsRequestIn(pDevice, MESSAGE_TYPE_READ, USB_REG4, 1110 + MESSAGE_REQUEST_MEM, sizeof(data), &data); 1111 + 1112 + data |= 0x2; 1113 + 1114 + CONTROLnsRequestOut(pDevice, MESSAGE_TYPE_WRITE, USB_REG4, 1115 + MESSAGE_REQUEST_MEM, sizeof(data), &data); 1116 + 1108 1117 return true;//ntStatus; 1109 1118 } 1110 1119
+2 -1
drivers/staging/vt6656/hostap.c
··· 133 133 DBG_PRT(MSG_LEVEL_DEBUG, KERN_INFO "%s: Netdevice %s unregistered\n", 134 134 pDevice->dev->name, pDevice->apdev->name); 135 135 } 136 - free_netdev(pDevice->apdev); 136 + if (pDevice->apdev) 137 + free_netdev(pDevice->apdev); 137 138 pDevice->apdev = NULL; 138 139 pDevice->bEnable8021x = false; 139 140 pDevice->bEnableHostWEP = false;
+2
drivers/staging/vt6656/rndis.h
··· 66 66 67 67 #define VIAUSB20_PACKET_HEADER 0x04 68 68 69 + #define USB_REG4 0x604 70 + 69 71 typedef struct _CMD_MESSAGE 70 72 { 71 73 u8 byData[256];
+14 -5
drivers/staging/zram/zram_drv.c
··· 652 652 return -ENOMEM; 653 653 654 654 /* Do not reset an active device! */ 655 - if (bdev->bd_holders) 656 - return -EBUSY; 655 + if (bdev->bd_holders) { 656 + ret = -EBUSY; 657 + goto out; 658 + } 657 659 658 660 ret = kstrtou16(buf, 10, &do_reset); 659 661 if (ret) 660 - return ret; 662 + goto out; 661 663 662 - if (!do_reset) 663 - return -EINVAL; 664 + if (!do_reset) { 665 + ret = -EINVAL; 666 + goto out; 667 + } 664 668 665 669 /* Make sure all pending I/O is finished */ 666 670 fsync_bdev(bdev); 671 + bdput(bdev); 667 672 668 673 zram_reset_device(zram, true); 669 674 return len; 675 + 676 + out: 677 + bdput(bdev); 678 + return ret; 670 679 } 671 680 672 681 static void __zram_make_request(struct zram *zram, struct bio *bio, int rw)
+13 -4
drivers/staging/zsmalloc/zsmalloc-main.c
··· 430 430 return next; 431 431 } 432 432 433 - /* Encode <page, obj_idx> as a single handle value */ 433 + /* 434 + * Encode <page, obj_idx> as a single handle value. 435 + * On hardware platforms with physical memory starting at 0x0 the pfn 436 + * could be 0 so we ensure that the handle will never be 0 by adjusting the 437 + * encoded obj_idx value before encoding. 438 + */ 434 439 static void *obj_location_to_handle(struct page *page, unsigned long obj_idx) 435 440 { 436 441 unsigned long handle; ··· 446 441 } 447 442 448 443 handle = page_to_pfn(page) << OBJ_INDEX_BITS; 449 - handle |= (obj_idx & OBJ_INDEX_MASK); 444 + handle |= ((obj_idx + 1) & OBJ_INDEX_MASK); 450 445 451 446 return (void *)handle; 452 447 } 453 448 454 - /* Decode <page, obj_idx> pair from the given object handle */ 449 + /* 450 + * Decode <page, obj_idx> pair from the given object handle. We adjust the 451 + * decoded obj_idx back to its original value since it was adjusted in 452 + * obj_location_to_handle(). 453 + */ 455 454 static void obj_handle_to_location(unsigned long handle, struct page **page, 456 455 unsigned long *obj_idx) 457 456 { 458 457 *page = pfn_to_page(handle >> OBJ_INDEX_BITS); 459 - *obj_idx = handle & OBJ_INDEX_MASK; 458 + *obj_idx = (handle & OBJ_INDEX_MASK) - 1; 460 459 } 461 460 462 461 static unsigned long obj_idx_to_offset(struct page *page,
+3
drivers/tty/amiserial.c
··· 1855 1855 */ 1856 1856 static int __init amiserial_console_init(void) 1857 1857 { 1858 + if (!MACH_IS_AMIGA) 1859 + return -ENODEV; 1860 + 1858 1861 register_console(&sercons); 1859 1862 return 0; 1860 1863 }
+10 -6
drivers/tty/n_tty.c
··· 768 768 * data at the tail to prevent a subsequent overrun */ 769 769 while (ldata->echo_commit - tail >= ECHO_DISCARD_WATERMARK) { 770 770 if (echo_buf(ldata, tail) == ECHO_OP_START) { 771 - if (echo_buf(ldata, tail) == ECHO_OP_ERASE_TAB) 771 + if (echo_buf(ldata, tail + 1) == ECHO_OP_ERASE_TAB) 772 772 tail += 3; 773 773 else 774 774 tail += 2; ··· 1998 1998 found = 1; 1999 1999 2000 2000 size = N_TTY_BUF_SIZE - tail; 2001 - n = (found + eol + size) & (N_TTY_BUF_SIZE - 1); 2001 + n = eol - tail; 2002 + if (n > 4096) 2003 + n += 4096; 2004 + n += found; 2002 2005 c = n; 2003 2006 2004 2007 if (found && read_buf(ldata, eol) == __DISABLED_CHAR) { ··· 2246 2243 if (time) 2247 2244 timeout = time; 2248 2245 } 2249 - mutex_unlock(&ldata->atomic_read_lock); 2250 - remove_wait_queue(&tty->read_wait, &wait); 2246 + n_tty_set_room(tty); 2247 + up_read(&tty->termios_rwsem); 2251 2248 2249 + remove_wait_queue(&tty->read_wait, &wait); 2252 2250 if (!waitqueue_active(&tty->read_wait)) 2253 2251 ldata->minimum_to_wake = minimum; 2252 + 2253 + mutex_unlock(&ldata->atomic_read_lock); 2254 2254 2255 2255 __set_current_state(TASK_RUNNING); 2256 2256 if (b - buf) 2257 2257 retval = b - buf; 2258 2258 2259 - n_tty_set_room(tty); 2260 - up_read(&tty->termios_rwsem); 2261 2259 return retval; 2262 2260 } 2263 2261
+1 -1
drivers/tty/serial/8250/Kconfig
··· 41 41 accept kernel parameters in both forms like 8250_core.nr_uarts=4 and 42 42 8250.nr_uarts=4. We now renamed the module back to 8250, but if 43 43 anybody noticed in 3.7 and changed their userspace we still have to 44 - keep the 8350_core.* options around until they revert the changes 44 + keep the 8250_core.* options around until they revert the changes 45 45 they already did. 46 46 47 47 If 8250 is built as a module, this adds 8250_core alias instead.
+3
drivers/tty/serial/pmac_zilog.c
··· 2052 2052 /* Probe ports */ 2053 2053 pmz_probe(); 2054 2054 2055 + if (pmz_ports_count == 0) 2056 + return -ENODEV; 2057 + 2055 2058 /* TODO: Autoprobe console based on OF */ 2056 2059 /* pmz_console.index = i; */ 2057 2060 register_console(&pmz_console);
+1
drivers/tty/tty_io.c
··· 2086 2086 filp->f_op = &tty_fops; 2087 2087 goto retry_open; 2088 2088 } 2089 + clear_bit(TTY_HUPPED, &tty->flags); 2089 2090 tty_unlock(tty); 2090 2091 2091 2092
+1 -1
fs/affs/Changes
··· 91 91 Version 3.11 92 92 ------------ 93 93 94 - - Converted to use 2.3.x page cache [Dave Jones <dave@powertweak.com>] 94 + - Converted to use 2.3.x page cache [Dave Jones] 95 95 - Corruption in truncate() bugfix [Ken Tyler <kent@werple.net.au>] 96 96 97 97 Version 3.10
+1 -1
fs/ceph/addr.c
··· 216 216 } 217 217 SetPageUptodate(page); 218 218 219 - if (err == 0) 219 + if (err >= 0) 220 220 ceph_readpage_to_fscache(inode, page); 221 221 222 222 out:
+3
fs/ceph/cache.c
··· 324 324 { 325 325 struct ceph_inode_info *ci = ceph_inode(inode); 326 326 327 + if (!PageFsCache(page)) 328 + return; 329 + 327 330 fscache_wait_on_page_write(ci->fscache, page); 328 331 fscache_uncache_page(ci->fscache, page); 329 332 }
+17 -10
fs/ceph/caps.c
··· 897 897 * caller should hold i_ceph_lock. 898 898 * caller will not hold session s_mutex if called from destroy_inode. 899 899 */ 900 - void __ceph_remove_cap(struct ceph_cap *cap) 900 + void __ceph_remove_cap(struct ceph_cap *cap, bool queue_release) 901 901 { 902 902 struct ceph_mds_session *session = cap->session; 903 903 struct ceph_inode_info *ci = cap->ci; ··· 909 909 910 910 /* remove from session list */ 911 911 spin_lock(&session->s_cap_lock); 912 + /* 913 + * s_cap_reconnect is protected by s_cap_lock. no one changes 914 + * s_cap_gen while session is in the reconnect state. 915 + */ 916 + if (queue_release && 917 + (!session->s_cap_reconnect || 918 + cap->cap_gen == session->s_cap_gen)) 919 + __queue_cap_release(session, ci->i_vino.ino, cap->cap_id, 920 + cap->mseq, cap->issue_seq); 921 + 912 922 if (session->s_cap_iterator == cap) { 913 923 /* not yet, we are iterating over this very cap */ 914 924 dout("__ceph_remove_cap delaying %p removal from session %p\n", ··· 1033 1023 struct ceph_mds_cap_release *head; 1034 1024 struct ceph_mds_cap_item *item; 1035 1025 1036 - spin_lock(&session->s_cap_lock); 1037 1026 BUG_ON(!session->s_num_cap_releases); 1038 1027 msg = list_first_entry(&session->s_cap_releases, 1039 1028 struct ceph_msg, list_head); ··· 1061 1052 (int)CEPH_CAPS_PER_RELEASE, 1062 1053 (int)msg->front.iov_len); 1063 1054 } 1064 - spin_unlock(&session->s_cap_lock); 1065 1055 } 1066 1056 1067 1057 /* ··· 1075 1067 p = rb_first(&ci->i_caps); 1076 1068 while (p) { 1077 1069 struct ceph_cap *cap = rb_entry(p, struct ceph_cap, ci_node); 1078 - struct ceph_mds_session *session = cap->session; 1079 - 1080 - __queue_cap_release(session, ceph_ino(inode), cap->cap_id, 1081 - cap->mseq, cap->issue_seq); 1082 1070 p = rb_next(p); 1083 - __ceph_remove_cap(cap); 1071 + __ceph_remove_cap(cap, true); 1084 1072 } 1085 1073 } 1086 1074 ··· 2795 2791 } 2796 2792 spin_unlock(&mdsc->cap_dirty_lock); 2797 2793 } 2798 - __ceph_remove_cap(cap); 2794 + 
__ceph_remove_cap(cap); 2794 + __ceph_remove_cap(cap, false); 2799 2795 } 2800 2796 /* else, we already released it */ 2801 2797 ··· 2935 2931 if (!inode) { 2936 2932 dout(" i don't have ino %llx\n", vino.ino); 2937 2933 2938 - if (op == CEPH_CAP_OP_IMPORT) 2934 + if (op == CEPH_CAP_OP_IMPORT) { 2935 + spin_lock(&session->s_cap_lock); 2939 2936 __queue_cap_release(session, vino.ino, cap_id, 2940 2937 mseq, seq); 2938 + spin_unlock(&session->s_cap_lock); 2939 + } 2941 2940 goto flush_cap_releases; 2942 2941 } 2943 2942
+10 -1
fs/ceph/dir.c
··· 352 352 } 353 353 354 354 /* note next offset and last dentry name */ 355 + rinfo = &req->r_reply_info; 356 + if (le32_to_cpu(rinfo->dir_dir->frag) != frag) { 357 + frag = le32_to_cpu(rinfo->dir_dir->frag); 358 + if (ceph_frag_is_leftmost(frag)) 359 + fi->next_offset = 2; 360 + else 361 + fi->next_offset = 0; 362 + off = fi->next_offset; 363 + } 355 364 fi->offset = fi->next_offset; 356 365 fi->last_readdir = req; 366 + fi->frag = frag; 357 367 358 368 if (req->r_reply_info.dir_end) { 359 369 kfree(fi->last_name); ··· 373 363 else 374 364 fi->next_offset = 0; 375 365 } else { 376 - rinfo = &req->r_reply_info; 377 366 err = note_last_dentry(fi, 378 367 rinfo->dir_dname[rinfo->dir_nr-1], 379 368 rinfo->dir_dname_len[rinfo->dir_nr-1]);
+43 -6
fs/ceph/inode.c
··· 577 577 int issued = 0, implemented; 578 578 struct timespec mtime, atime, ctime; 579 579 u32 nsplits; 580 + struct ceph_inode_frag *frag; 581 + struct rb_node *rb_node; 580 582 struct ceph_buffer *xattr_blob = NULL; 581 583 int err = 0; 582 584 int queue_trunc = 0; ··· 753 751 /* FIXME: move me up, if/when version reflects fragtree changes */ 754 752 nsplits = le32_to_cpu(info->fragtree.nsplits); 755 753 mutex_lock(&ci->i_fragtree_mutex); 754 + rb_node = rb_first(&ci->i_fragtree); 756 755 for (i = 0; i < nsplits; i++) { 757 756 u32 id = le32_to_cpu(info->fragtree.splits[i].frag); 758 - struct ceph_inode_frag *frag = __get_or_create_frag(ci, id); 759 - 760 - if (IS_ERR(frag)) 761 - continue; 757 + frag = NULL; 758 + while (rb_node) { 759 + frag = rb_entry(rb_node, struct ceph_inode_frag, node); 760 + if (ceph_frag_compare(frag->frag, id) >= 0) { 761 + if (frag->frag != id) 762 + frag = NULL; 763 + else 764 + rb_node = rb_next(rb_node); 765 + break; 766 + } 767 + rb_node = rb_next(rb_node); 768 + rb_erase(&frag->node, &ci->i_fragtree); 769 + kfree(frag); 770 + frag = NULL; 771 + } 772 + if (!frag) { 773 + frag = __get_or_create_frag(ci, id); 774 + if (IS_ERR(frag)) 775 + continue; 776 + } 762 777 frag->split_by = le32_to_cpu(info->fragtree.splits[i].by); 763 778 dout(" frag %x split by %d\n", frag->frag, frag->split_by); 779 + } 780 + while (rb_node) { 781 + frag = rb_entry(rb_node, struct ceph_inode_frag, node); 782 + rb_node = rb_next(rb_node); 783 + rb_erase(&frag->node, &ci->i_fragtree); 784 + kfree(frag); 764 785 } 765 786 mutex_unlock(&ci->i_fragtree_mutex); 766 787 ··· 1275 1250 int err = 0, i; 1276 1251 struct inode *snapdir = NULL; 1277 1252 struct ceph_mds_request_head *rhead = req->r_request->front.iov_base; 1278 - u64 frag = le32_to_cpu(rhead->args.readdir.frag); 1279 1253 struct ceph_dentry_info *di; 1254 + u64 r_readdir_offset = req->r_readdir_offset; 1255 + u32 frag = le32_to_cpu(rhead->args.readdir.frag); 1256 + 1257 + if (rinfo->dir_dir && 1258 
+ le32_to_cpu(rinfo->dir_dir->frag) != frag) { 1259 + dout("readdir_prepopulate got new frag %x -> %x\n", 1260 + frag, le32_to_cpu(rinfo->dir_dir->frag)); 1261 + frag = le32_to_cpu(rinfo->dir_dir->frag); 1262 + if (ceph_frag_is_leftmost(frag)) 1263 + r_readdir_offset = 2; 1264 + else 1265 + r_readdir_offset = 0; 1266 + } 1280 1267 1281 1268 if (req->r_aborted) 1282 1269 return readdir_prepopulate_inodes_only(req, session); ··· 1352 1315 } 1353 1316 1354 1317 di = dn->d_fsdata; 1355 - di->offset = ceph_make_fpos(frag, i + req->r_readdir_offset); 1318 + di->offset = ceph_make_fpos(frag, i + r_readdir_offset); 1356 1319 1357 1320 /* inode */ 1358 1321 if (dn->d_inode) {
+45 -16
fs/ceph/mds_client.c
··· 43 43 */ 44 44 45 45 struct ceph_reconnect_state { 46 + int nr_caps; 46 47 struct ceph_pagelist *pagelist; 47 48 bool flock; 48 49 }; ··· 444 443 INIT_LIST_HEAD(&s->s_waiting); 445 444 INIT_LIST_HEAD(&s->s_unsafe); 446 445 s->s_num_cap_releases = 0; 446 + s->s_cap_reconnect = 0; 447 447 s->s_cap_iterator = NULL; 448 448 INIT_LIST_HEAD(&s->s_cap_releases); 449 449 INIT_LIST_HEAD(&s->s_cap_releases_done); ··· 643 641 iput(req->r_unsafe_dir); 644 642 req->r_unsafe_dir = NULL; 645 643 } 644 + 645 + complete_all(&req->r_safe_completion); 646 646 647 647 ceph_mdsc_put_request(req); 648 648 } ··· 990 986 dout("removing cap %p, ci is %p, inode is %p\n", 991 987 cap, ci, &ci->vfs_inode); 992 988 spin_lock(&ci->i_ceph_lock); 993 - __ceph_remove_cap(cap); 989 + __ceph_remove_cap(cap, false); 994 990 if (!__ceph_is_any_real_caps(ci)) { 995 991 struct ceph_mds_client *mdsc = 996 992 ceph_sb_to_client(inode->i_sb)->mdsc; ··· 1235 1231 session->s_trim_caps--; 1236 1232 if (oissued) { 1237 1233 /* we aren't the only cap.. 
just remove us */ 1238 - __queue_cap_release(session, ceph_ino(inode), cap->cap_id, 1239 - cap->mseq, cap->issue_seq); 1240 - __ceph_remove_cap(cap); 1234 + __ceph_remove_cap(cap, true); 1241 1235 } else { 1242 1236 /* try to drop referring dentries */ 1243 1237 spin_unlock(&ci->i_ceph_lock); ··· 1418 1416 unsigned num; 1419 1417 1420 1418 dout("discard_cap_releases mds%d\n", session->s_mds); 1421 - spin_lock(&session->s_cap_lock); 1422 1419 1423 1420 /* zero out the in-progress message */ 1424 1421 msg = list_first_entry(&session->s_cap_releases, ··· 1444 1443 msg->front.iov_len = sizeof(*head); 1445 1444 list_add(&msg->list_head, &session->s_cap_releases); 1446 1445 } 1447 - 1448 - spin_unlock(&session->s_cap_lock); 1449 1446 } 1450 1447 1451 1448 /* ··· 1874 1875 int mds = -1; 1875 1876 int err = -EAGAIN; 1876 1877 1877 - if (req->r_err || req->r_got_result) 1878 + if (req->r_err || req->r_got_result) { 1879 + if (req->r_aborted) 1880 + __unregister_request(mdsc, req); 1878 1881 goto out; 1882 + } 1879 1883 1880 1884 if (req->r_timeout && 1881 1885 time_after_eq(jiffies, req->r_started + req->r_timeout)) { ··· 2188 2186 if (head->safe) { 2189 2187 req->r_got_safe = true; 2190 2188 __unregister_request(mdsc, req); 2191 - complete_all(&req->r_safe_completion); 2192 2189 2193 2190 if (req->r_got_unsafe) { 2194 2191 /* ··· 2239 2238 err = ceph_fill_trace(mdsc->fsc->sb, req, req->r_session); 2240 2239 if (err == 0) { 2241 2240 if (result == 0 && (req->r_op == CEPH_MDS_OP_READDIR || 2242 - req->r_op == CEPH_MDS_OP_LSSNAP) && 2243 - rinfo->dir_nr) 2241 + req->r_op == CEPH_MDS_OP_LSSNAP)) 2244 2242 ceph_readdir_prepopulate(req, req->r_session); 2245 2243 ceph_unreserve_caps(mdsc, &req->r_caps_reservation); 2246 2244 } ··· 2490 2490 cap->seq = 0; /* reset cap seq */ 2491 2491 cap->issue_seq = 0; /* and issue_seq */ 2492 2492 cap->mseq = 0; /* and migrate_seq */ 2493 + cap->cap_gen = cap->session->s_cap_gen; 2493 2494 2494 2495 if (recon_state->flock) { 2495 2496 
rec.v2.cap_id = cpu_to_le64(cap->cap_id); ··· 2553 2552 } else { 2554 2553 err = ceph_pagelist_append(pagelist, &rec, reclen); 2555 2554 } 2555 + 2556 + recon_state->nr_caps++; 2556 2557 out_free: 2557 2558 kfree(path); 2558 2559 out_dput: ··· 2582 2579 struct rb_node *p; 2583 2580 int mds = session->s_mds; 2584 2581 int err = -ENOMEM; 2582 + int s_nr_caps; 2585 2583 struct ceph_pagelist *pagelist; 2586 2584 struct ceph_reconnect_state recon_state; 2587 2585 ··· 2614 2610 dout("session %p state %s\n", session, 2615 2611 session_state_name(session->s_state)); 2616 2612 2613 + spin_lock(&session->s_gen_ttl_lock); 2614 + session->s_cap_gen++; 2615 + spin_unlock(&session->s_gen_ttl_lock); 2616 + 2617 + spin_lock(&session->s_cap_lock); 2618 + /* 2619 + * notify __ceph_remove_cap() that we are composing cap reconnect. 2620 + * If a cap get released before being added to the cap reconnect, 2621 + * __ceph_remove_cap() should skip queuing cap release. 2622 + */ 2623 + session->s_cap_reconnect = 1; 2617 2624 /* drop old cap expires; we're about to reestablish that state */ 2618 2625 discard_cap_releases(mdsc, session); 2626 + spin_unlock(&session->s_cap_lock); 2619 2627 2620 2628 /* traverse this session's caps */ 2621 - err = ceph_pagelist_encode_32(pagelist, session->s_nr_caps); 2629 + s_nr_caps = session->s_nr_caps; 2630 + err = ceph_pagelist_encode_32(pagelist, s_nr_caps); 2622 2631 if (err) 2623 2632 goto fail; 2624 2633 2634 + recon_state.nr_caps = 0; 2625 2635 recon_state.pagelist = pagelist; 2626 2636 recon_state.flock = session->s_con.peer_features & CEPH_FEATURE_FLOCK; 2627 2637 err = iterate_session_caps(session, encode_caps_cb, &recon_state); 2628 2638 if (err < 0) 2629 2639 goto fail; 2640 + 2641 + spin_lock(&session->s_cap_lock); 2642 + session->s_cap_reconnect = 0; 2643 + spin_unlock(&session->s_cap_lock); 2630 2644 2631 2645 /* 2632 2646 * snaprealms. 
we provide mds with the ino, seq (version), and ··· 2668 2646 2669 2647 if (recon_state.flock) 2670 2648 reply->hdr.version = cpu_to_le16(2); 2671 - if (pagelist->length) { 2672 - /* set up outbound data if we have any */ 2673 - reply->hdr.data_len = cpu_to_le32(pagelist->length); 2674 - ceph_msg_data_add_pagelist(reply, pagelist); 2649 + 2650 + /* raced with cap release? */ 2651 + if (s_nr_caps != recon_state.nr_caps) { 2652 + struct page *page = list_first_entry(&pagelist->head, 2653 + struct page, lru); 2654 + __le32 *addr = kmap_atomic(page); 2655 + *addr = cpu_to_le32(recon_state.nr_caps); 2656 + kunmap_atomic(addr); 2675 2657 } 2658 + 2659 + reply->hdr.data_len = cpu_to_le32(pagelist->length); 2660 + ceph_msg_data_add_pagelist(reply, pagelist); 2676 2661 ceph_con_send(&session->s_con, reply); 2677 2662 2678 2663 mutex_unlock(&session->s_mutex);
+1
fs/ceph/mds_client.h
··· 132 132 struct list_head s_caps; /* all caps issued by this session */ 133 133 int s_nr_caps, s_trim_caps; 134 134 int s_num_cap_releases; 135 + int s_cap_reconnect; 135 136 struct list_head s_cap_releases; /* waiting cap_release messages */ 136 137 struct list_head s_cap_releases_done; /* ready to send */ 137 138 struct ceph_cap *s_cap_iterator;
+1 -7
fs/ceph/super.h
··· 741 741 int fmode, unsigned issued, unsigned wanted, 742 742 unsigned cap, unsigned seq, u64 realmino, int flags, 743 743 struct ceph_cap_reservation *caps_reservation); 744 - extern void __ceph_remove_cap(struct ceph_cap *cap); 745 - static inline void ceph_remove_cap(struct ceph_cap *cap) 746 - { 747 - spin_lock(&cap->ci->i_ceph_lock); 748 - __ceph_remove_cap(cap); 749 - spin_unlock(&cap->ci->i_ceph_lock); 750 - } 744 + extern void __ceph_remove_cap(struct ceph_cap *cap, bool queue_release); 751 745 extern void ceph_put_cap(struct ceph_mds_client *mdsc, 752 746 struct ceph_cap *cap); 753 747
+1
fs/cifs/cifsglob.h
··· 384 384 int (*clone_range)(const unsigned int, struct cifsFileInfo *src_file, 385 385 struct cifsFileInfo *target_file, u64 src_off, u64 len, 386 386 u64 dest_off); 387 + int (*validate_negotiate)(const unsigned int, struct cifs_tcon *); 387 388 }; 388 389 389 390 struct smb_version_values {
+4 -2
fs/cifs/ioctl.c
··· 26 26 #include <linux/mount.h> 27 27 #include <linux/mm.h> 28 28 #include <linux/pagemap.h> 29 - #include <linux/btrfs.h> 30 29 #include "cifspdu.h" 31 30 #include "cifsglob.h" 32 31 #include "cifsproto.h" 33 32 #include "cifs_debug.h" 34 33 #include "cifsfs.h" 34 + 35 + #define CIFS_IOCTL_MAGIC 0xCF 36 + #define CIFS_IOC_COPYCHUNK_FILE _IOW(CIFS_IOCTL_MAGIC, 3, int) 35 37 36 38 static long cifs_ioctl_clone(unsigned int xid, struct file *dst_file, 37 39 unsigned long srcfd, u64 off, u64 len, u64 destoff) ··· 215 213 cifs_dbg(FYI, "set compress flag rc %d\n", rc); 216 214 } 217 215 break; 218 - case BTRFS_IOC_CLONE: 216 + case CIFS_IOC_COPYCHUNK_FILE: 219 217 rc = cifs_ioctl_clone(xid, filep, arg, 0, 0, 0); 220 218 break; 221 219 default:
+84 -11
fs/cifs/smb2ops.c
··· 532 532 int rc; 533 533 unsigned int ret_data_len; 534 534 struct copychunk_ioctl *pcchunk; 535 - char *retbuf = NULL; 535 + struct copychunk_ioctl_rsp *retbuf = NULL; 536 + struct cifs_tcon *tcon; 537 + int chunks_copied = 0; 538 + bool chunk_sizes_updated = false; 536 539 537 540 pcchunk = kmalloc(sizeof(struct copychunk_ioctl), GFP_KERNEL); 538 541 ··· 550 547 551 548 /* Note: request_res_key sets res_key null only if rc !=0 */ 552 549 if (rc) 553 - return rc; 550 + goto cchunk_out; 554 551 555 552 /* For now array only one chunk long, will make more flexible later */ 556 553 pcchunk->ChunkCount = __constant_cpu_to_le32(1); 557 554 pcchunk->Reserved = 0; 558 - pcchunk->SourceOffset = cpu_to_le64(src_off); 559 - pcchunk->TargetOffset = cpu_to_le64(dest_off); 560 - pcchunk->Length = cpu_to_le32(len); 561 555 pcchunk->Reserved2 = 0; 562 556 563 - /* Request that server copy to target from src file identified by key */ 564 - rc = SMB2_ioctl(xid, tlink_tcon(trgtfile->tlink), 565 - trgtfile->fid.persistent_fid, 557 + tcon = tlink_tcon(trgtfile->tlink); 558 + 559 + while (len > 0) { 560 + pcchunk->SourceOffset = cpu_to_le64(src_off); 561 + pcchunk->TargetOffset = cpu_to_le64(dest_off); 562 + pcchunk->Length = 563 + cpu_to_le32(min_t(u32, len, tcon->max_bytes_chunk)); 564 + 565 + /* Request server copy to target from src identified by key */ 566 + rc = SMB2_ioctl(xid, tcon, trgtfile->fid.persistent_fid, 566 567 trgtfile->fid.volatile_fid, FSCTL_SRV_COPYCHUNK_WRITE, 567 568 true /* is_fsctl */, (char *)pcchunk, 568 - sizeof(struct copychunk_ioctl), &retbuf, &ret_data_len); 569 + sizeof(struct copychunk_ioctl), (char **)&retbuf, 570 + &ret_data_len); 571 + if (rc == 0) { 572 + if (ret_data_len != 573 + sizeof(struct copychunk_ioctl_rsp)) { 574 + cifs_dbg(VFS, "invalid cchunk response size\n"); 575 + rc = -EIO; 576 + goto cchunk_out; 577 + } 578 + if (retbuf->TotalBytesWritten == 0) { 579 + cifs_dbg(FYI, "no bytes copied\n"); 580 + rc = -EIO; 581 + goto cchunk_out; 582 
+ } 583 + /* 584 + * Check if server claimed to write more than we asked 585 + */ 586 + if (le32_to_cpu(retbuf->TotalBytesWritten) > 587 + le32_to_cpu(pcchunk->Length)) { 588 + cifs_dbg(VFS, "invalid copy chunk response\n"); 589 + rc = -EIO; 590 + goto cchunk_out; 591 + } 592 + if (le32_to_cpu(retbuf->ChunksWritten) != 1) { 593 + cifs_dbg(VFS, "invalid num chunks written\n"); 594 + rc = -EIO; 595 + goto cchunk_out; 596 + } 597 + chunks_copied++; 569 598 570 - /* BB need to special case rc = EINVAL to alter chunk size */ 599 + src_off += le32_to_cpu(retbuf->TotalBytesWritten); 600 + dest_off += le32_to_cpu(retbuf->TotalBytesWritten); 601 + len -= le32_to_cpu(retbuf->TotalBytesWritten); 571 602 572 - cifs_dbg(FYI, "rc %d data length out %d\n", rc, ret_data_len); 603 + cifs_dbg(FYI, "Chunks %d PartialChunk %d Total %d\n", 604 + le32_to_cpu(retbuf->ChunksWritten), 605 + le32_to_cpu(retbuf->ChunkBytesWritten), 606 + le32_to_cpu(retbuf->TotalBytesWritten)); 607 + } else if (rc == -EINVAL) { 608 + if (ret_data_len != sizeof(struct copychunk_ioctl_rsp)) 609 + goto cchunk_out; 573 610 611 + cifs_dbg(FYI, "MaxChunks %d BytesChunk %d MaxCopy %d\n", 612 + le32_to_cpu(retbuf->ChunksWritten), 613 + le32_to_cpu(retbuf->ChunkBytesWritten), 614 + le32_to_cpu(retbuf->TotalBytesWritten)); 615 + 616 + /* 617 + * Check if this is the first request using these sizes, 618 + * (ie check if copy succeed once with original sizes 619 + * and check if the server gave us different sizes after 620 + * we already updated max sizes on previous request). 
621 + * if not then why is the server returning an error now 622 + */ 623 + if ((chunks_copied != 0) || chunk_sizes_updated) 624 + goto cchunk_out; 625 + 626 + /* Check that server is not asking us to grow size */ 627 + if (le32_to_cpu(retbuf->ChunkBytesWritten) < 628 + tcon->max_bytes_chunk) 629 + tcon->max_bytes_chunk = 630 + le32_to_cpu(retbuf->ChunkBytesWritten); 631 + else 632 + goto cchunk_out; /* server gave us bogus size */ 633 + 634 + /* No need to change MaxChunks since already set to 1 */ 635 + chunk_sizes_updated = true; 636 + } 637 + } 638 + 639 + cchunk_out: 574 640 kfree(pcchunk); 575 641 return rc; 576 642 } ··· 1319 1247 .create_lease_buf = smb3_create_lease_buf, 1320 1248 .parse_lease_buf = smb3_parse_lease_buf, 1321 1249 .clone_range = smb2_clone_range, 1250 + .validate_negotiate = smb3_validate_negotiate, 1322 1251 }; 1323 1252 1324 1253 struct smb_version_values smb20_values = {
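The smb2_clone_range() rework above turns a single copychunk ioctl into a loop that copies up to tcon->max_bytes_chunk per request and, when the server rejects a chunk while advertising new limits, shrinks the chunk size exactly once before retrying. A userspace sketch of that loop shape, where do_chunk() is a hypothetical stand-in for the SMB2_ioctl(FSCTL_SRV_COPYCHUNK_WRITE) round trip and server_limit models the size the server actually accepts:

```c
#include <assert.h>
#include <string.h>

/* illustrative state: client's chunk size vs. the server's real limit */
struct copy_state {
	unsigned int max_bytes_chunk;	/* what the client currently tries */
	unsigned int server_limit;	/* what the "server" really accepts */
};

/* stand-in for one FSCTL_SRV_COPYCHUNK_WRITE round trip */
static long do_chunk(struct copy_state *st, char *dst, const char *src,
		     unsigned int want)
{
	if (want > st->server_limit)
		return -1;	/* -EINVAL analog: caller must shrink */
	memcpy(dst, src, want);
	return want;
}

static int copy_chunked(struct copy_state *st, char *dst, const char *src,
			unsigned int len)
{
	unsigned int off = 0;
	int sizes_updated = 0;

	while (len > 0) {
		unsigned int want = len < st->max_bytes_chunk ?
				    len : st->max_bytes_chunk;
		long written = do_chunk(st, dst + off, src + off, want);

		if (written < 0) {
			/* shrink at most once, and never grow */
			if (sizes_updated ||
			    st->server_limit >= st->max_bytes_chunk)
				return -1;
			st->max_bytes_chunk = st->server_limit;
			sizes_updated = 1;
			continue;
		}
		off += (unsigned int)written;
		len -= (unsigned int)written;
	}
	return 0;
}
```

The one-shot `sizes_updated` flag mirrors the kernel's `chunk_sizes_updated`: a second rejection after adopting the server's advertised size is treated as a hard error rather than looping forever.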
+87 -5
fs/cifs/smb2pdu.c
··· 454 454 return rc; 455 455 } 456 456 457 + int smb3_validate_negotiate(const unsigned int xid, struct cifs_tcon *tcon) 458 + { 459 + int rc = 0; 460 + struct validate_negotiate_info_req vneg_inbuf; 461 + struct validate_negotiate_info_rsp *pneg_rsp; 462 + u32 rsplen; 463 + 464 + cifs_dbg(FYI, "validate negotiate\n"); 465 + 466 + /* 467 + * validation ioctl must be signed, so no point sending this if we 468 + * can not sign it. We could eventually change this to selectively 469 + * sign just this, the first and only signed request on a connection. 470 + * This is good enough for now since a user who wants better security 471 + * would also enable signing on the mount. Having validation of 472 + * negotiate info for signed connections helps reduce attack vectors 473 + */ 474 + if (tcon->ses->server->sign == false) 475 + return 0; /* validation requires signing */ 476 + 477 + vneg_inbuf.Capabilities = 478 + cpu_to_le32(tcon->ses->server->vals->req_capabilities); 479 + memcpy(vneg_inbuf.Guid, cifs_client_guid, SMB2_CLIENT_GUID_SIZE); 480 + 481 + if (tcon->ses->sign) 482 + vneg_inbuf.SecurityMode = 483 + cpu_to_le16(SMB2_NEGOTIATE_SIGNING_REQUIRED); 484 + else if (global_secflags & CIFSSEC_MAY_SIGN) 485 + vneg_inbuf.SecurityMode = 486 + cpu_to_le16(SMB2_NEGOTIATE_SIGNING_ENABLED); 487 + else 488 + vneg_inbuf.SecurityMode = 0; 489 + 490 + vneg_inbuf.DialectCount = cpu_to_le16(1); 491 + vneg_inbuf.Dialects[0] = 492 + cpu_to_le16(tcon->ses->server->vals->protocol_id); 493 + 494 + rc = SMB2_ioctl(xid, tcon, NO_FILE_ID, NO_FILE_ID, 495 + FSCTL_VALIDATE_NEGOTIATE_INFO, true /* is_fsctl */, 496 + (char *)&vneg_inbuf, sizeof(struct validate_negotiate_info_req), 497 + (char **)&pneg_rsp, &rsplen); 498 + 499 + if (rc != 0) { 500 + cifs_dbg(VFS, "validate protocol negotiate failed: %d\n", rc); 501 + return -EIO; 502 + } 503 + 504 + if (rsplen != sizeof(struct validate_negotiate_info_rsp)) { 505 + cifs_dbg(VFS, "invalid size of protocol negotiate response\n"); 506 + return 
-EIO; 507 + } 508 + 509 + /* check validate negotiate info response matches what we got earlier */ 510 + if (pneg_rsp->Dialect != 511 + cpu_to_le16(tcon->ses->server->vals->protocol_id)) 512 + goto vneg_out; 513 + 514 + if (pneg_rsp->SecurityMode != cpu_to_le16(tcon->ses->server->sec_mode)) 515 + goto vneg_out; 516 + 517 + /* do not validate server guid because not saved at negprot time yet */ 518 + 519 + if ((le32_to_cpu(pneg_rsp->Capabilities) | SMB2_NT_FIND | 520 + SMB2_LARGE_FILES) != tcon->ses->server->capabilities) 521 + goto vneg_out; 522 + 523 + /* validate negotiate successful */ 524 + cifs_dbg(FYI, "validate negotiate info successful\n"); 525 + return 0; 526 + 527 + vneg_out: 528 + cifs_dbg(VFS, "protocol revalidation - security settings mismatch\n"); 529 + return -EIO; 530 + } 531 + 457 532 int 458 533 SMB2_sess_setup(const unsigned int xid, struct cifs_ses *ses, 459 534 const struct nls_table *nls_cp) ··· 904 829 ((tcon->share_flags & SHI1005_FLAGS_DFS) == 0)) 905 830 cifs_dbg(VFS, "DFS capability contradicts DFS flag\n"); 906 831 init_copy_chunk_defaults(tcon); 832 + if (tcon->ses->server->ops->validate_negotiate) 833 + rc = tcon->ses->server->ops->validate_negotiate(xid, tcon); 907 834 tcon_exit: 908 835 free_rsp_buf(resp_buftype, rsp); 909 836 kfree(unc_path); ··· 1291 1214 rc = SendReceive2(xid, ses, iov, num_iovecs, &resp_buftype, 0); 1292 1215 rsp = (struct smb2_ioctl_rsp *)iov[0].iov_base; 1293 1216 1294 - if (rc != 0) { 1217 + if ((rc != 0) && (rc != -EINVAL)) { 1295 1218 if (tcon) 1296 1219 cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE); 1297 1220 goto ioctl_exit; 1221 + } else if (rc == -EINVAL) { 1222 + if ((opcode != FSCTL_SRV_COPYCHUNK_WRITE) && 1223 + (opcode != FSCTL_SRV_COPYCHUNK)) { 1224 + if (tcon) 1225 + cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE); 1226 + goto ioctl_exit; 1227 + } 1298 1228 } 1299 1229 1300 1230 /* check if caller wants to look at return data or just return rc */ ··· 2238 2154 rc = SendReceive2(xid, ses, iov, num, 
&resp_buftype, 0); 2239 2155 rsp = (struct smb2_set_info_rsp *)iov[0].iov_base; 2240 2156 2241 - if (rc != 0) { 2157 + if (rc != 0) 2242 2158 cifs_stats_fail_inc(tcon, SMB2_SET_INFO_HE); 2243 - goto out; 2244 - } 2245 - out: 2159 + 2246 2160 free_rsp_buf(resp_buftype, rsp); 2247 2161 kfree(iov); 2248 2162 return rc;
+9 -3
fs/cifs/smb2pdu.h
··· 577 577 __le32 TotalBytesWritten; 578 578 } __packed; 579 579 580 - /* Response and Request are the same format */ 581 - struct validate_negotiate_info { 580 + struct validate_negotiate_info_req { 582 581 __le32 Capabilities; 583 582 __u8 Guid[SMB2_CLIENT_GUID_SIZE]; 584 583 __le16 SecurityMode; 585 584 __le16 DialectCount; 586 - __le16 Dialect[1]; 585 + __le16 Dialects[1]; /* dialect (someday maybe list) client asked for */ 586 + } __packed; 587 + 588 + struct validate_negotiate_info_rsp { 589 + __le32 Capabilities; 590 + __u8 Guid[SMB2_CLIENT_GUID_SIZE]; 591 + __le16 SecurityMode; 592 + __le16 Dialect; /* Dialect in use for the connection */ 587 593 } __packed; 588 594 589 595 #define RSS_CAPABLE 0x00000001
+1
fs/cifs/smb2proto.h
··· 162 162 struct smb2_lock_element *buf); 163 163 extern int SMB2_lease_break(const unsigned int xid, struct cifs_tcon *tcon, 164 164 __u8 *lease_key, const __le32 lease_state); 165 + extern int smb3_validate_negotiate(const unsigned int, struct cifs_tcon *); 165 166 166 167 #endif /* _SMB2PROTO_H */
+1 -1
fs/cifs/smbfsctl.h
··· 90 90 #define FSCTL_LMR_REQUEST_RESILIENCY 0x001401D4 /* BB add struct */ 91 91 #define FSCTL_LMR_GET_LINK_TRACK_INF 0x001400E8 /* BB add struct */ 92 92 #define FSCTL_LMR_SET_LINK_TRACK_INF 0x001400EC /* BB add struct */ 93 - #define FSCTL_VALIDATE_NEGOTIATE_INFO 0x00140204 /* BB add struct */ 93 + #define FSCTL_VALIDATE_NEGOTIATE_INFO 0x00140204 94 94 /* Perform server-side data movement */ 95 95 #define FSCTL_SRV_COPYCHUNK 0x001440F2 96 96 #define FSCTL_SRV_COPYCHUNK_WRITE 0x001480F2
+1 -2
fs/namei.c
··· 513 513 514 514 if (!lockref_get_not_dead(&parent->d_lockref)) { 515 515 nd->path.dentry = NULL; 516 - rcu_read_unlock(); 517 - return -ECHILD; 516 + goto out; 518 517 } 519 518 520 519 /*
+2
include/acpi/acconfig.h
··· 83 83 * Should the subsystem abort the loading of an ACPI table if the 84 84 * table checksum is incorrect? 85 85 */ 86 + #ifndef ACPI_CHECKSUM_ABORT 86 87 #define ACPI_CHECKSUM_ABORT FALSE 88 + #endif 87 89 88 90 /* 89 91 * Generate a version of ACPICA that only supports "reduced hardware"
+1
include/acpi/acpi_bus.h
··· 100 100 struct acpi_hotplug_profile { 101 101 struct kobject kobj; 102 102 bool enabled:1; 103 + bool ignore:1; 103 104 enum acpi_hotplug_mode mode; 104 105 }; 105 106
+1 -1
include/acpi/acpixf.h
··· 46 46 47 47 /* Current ACPICA subsystem version in YYYYMMDD format */ 48 48 49 - #define ACPI_CA_VERSION 0x20130927 49 + #define ACPI_CA_VERSION 0x20131115 50 50 51 51 #include <acpi/acconfig.h> 52 52 #include <acpi/actypes.h>
+14
include/asm-generic/simd.h
··· 1 + 2 + #include <linux/hardirq.h> 3 + 4 + /* 5 + * may_use_simd - whether it is allowable at this time to issue SIMD 6 + * instructions or access the SIMD register file 7 + * 8 + * As architectures typically don't preserve the SIMD register file when 9 + * taking an interrupt, !in_interrupt() should be a reasonable default. 10 + */ 11 + static __must_check inline bool may_use_simd(void) 12 + { 13 + return !in_interrupt(); 14 + }
+17 -1
include/crypto/algapi.h
··· 386 386 return (type ^ CRYPTO_ALG_ASYNC) & mask & CRYPTO_ALG_ASYNC; 387 387 } 388 388 389 - #endif /* _CRYPTO_ALGAPI_H */ 389 + noinline unsigned long __crypto_memneq(const void *a, const void *b, size_t size); 390 390 391 + /** 392 + * crypto_memneq - Compare two areas of memory without leaking 393 + * timing information. 394 + * 395 + * @a: One area of memory 396 + * @b: Another area of memory 397 + * @size: The size of the area. 398 + * 399 + * Returns 0 when data is equal, 1 otherwise. 400 + */ 401 + static inline int crypto_memneq(const void *a, const void *b, size_t size) 402 + { 403 + return __crypto_memneq(a, b, size) != 0UL ? 1 : 0; 404 + } 405 + 406 + #endif /* _CRYPTO_ALGAPI_H */
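crypto_memneq(), added above, compares two buffers in time that depends only on the length, never on where the first mismatching byte sits, closing a timing side channel when checking MACs. The core idea in a userspace sketch (the kernel's __crypto_memneq additionally uses word-sized accesses and compiler barriers to defeat short-circuit optimizations):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Fixed-time compare sketch: every byte is examined, no early exit,
 * so run time leaks nothing about the contents. Returns 0 iff equal.
 */
static unsigned long memneq_sketch(const void *a, const void *b, size_t size)
{
	const unsigned char *pa = a, *pb = b;
	unsigned long neq = 0;
	size_t i;

	for (i = 0; i < size; i++)
		neq |= (unsigned long)(pa[i] ^ pb[i]);	/* accumulate, never branch */
	return neq;
}
```

Contrast with memcmp(), which may return as soon as the first byte differs and thereby reveals the length of the matching prefix through timing.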
+11 -1
include/crypto/authenc.h
··· 23 23 __be32 enckeylen; 24 24 }; 25 25 26 - #endif /* _CRYPTO_AUTHENC_H */ 26 + struct crypto_authenc_keys { 27 + const u8 *authkey; 28 + const u8 *enckey; 27 29 30 + unsigned int authkeylen; 31 + unsigned int enckeylen; 32 + }; 33 + 34 + int crypto_authenc_extractkeys(struct crypto_authenc_keys *keys, const u8 *key, 35 + unsigned int keylen); 36 + 37 + #endif /* _CRYPTO_AUTHENC_H */
+9 -2
include/linux/gpio/driver.h
··· 125 125 int gpiod_lock_as_irq(struct gpio_desc *desc); 126 126 void gpiod_unlock_as_irq(struct gpio_desc *desc); 127 127 128 + enum gpio_lookup_flags { 129 + GPIO_ACTIVE_HIGH = (0 << 0), 130 + GPIO_ACTIVE_LOW = (1 << 0), 131 + GPIO_OPEN_DRAIN = (1 << 1), 132 + GPIO_OPEN_SOURCE = (1 << 2), 133 + }; 134 + 128 135 /** 129 136 * Lookup table for associating GPIOs to specific devices and functions using 130 137 * platform data. ··· 159 152 */ 160 153 unsigned int idx; 161 154 /* 162 - * mask of GPIOF_* values 155 + * mask of GPIO_* values 163 156 */ 164 - unsigned long flags; 157 + enum gpio_lookup_flags flags; 165 158 }; 166 159 167 160 /*
+3
include/linux/hid-sensor-hub.h
··· 21 21 22 22 #include <linux/hid.h> 23 23 #include <linux/hid-sensor-ids.h> 24 + #include <linux/iio/iio.h> 25 + #include <linux/iio/trigger.h> 24 26 25 27 /** 26 28 * struct hid_sensor_hub_attribute_info - Attribute info ··· 186 184 struct platform_device *pdev; 187 185 unsigned usage_id; 188 186 bool data_ready; 187 + struct iio_trigger *trigger; 189 188 struct hid_sensor_hub_attribute_info poll; 190 189 struct hid_sensor_hub_attribute_info report_state; 191 190 struct hid_sensor_hub_attribute_info power_state;
+1 -2
include/linux/padata.h
··· 129 129 struct padata_serial_queue __percpu *squeue; 130 130 atomic_t reorder_objects; 131 131 atomic_t refcnt; 132 + atomic_t seq_nr; 132 133 struct padata_cpumask cpumask; 133 134 spinlock_t lock ____cacheline_aligned; 134 - spinlock_t seq_lock; 135 - unsigned int seq_nr; 136 135 unsigned int processed; 137 136 struct timer_list timer; 138 137 };
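The padata change above replaces a spinlock-protected seq_nr counter with an atomic_t, so sequence numbers can be handed out without serializing submitters on a lock. A userspace sketch using C11 atomics in place of the kernel's atomic_t primitives; the struct and function names are illustrative:

```c
#include <assert.h>
#include <stdatomic.h>

/* illustrative stand-in for the parallel_data sequence counter */
struct pd_sketch {
	atomic_uint seq_nr;
};

/* hand out 0, 1, 2, ... lock-free; fetch_add returns the prior value */
static unsigned int next_seq_nr(struct pd_sketch *pd)
{
	return atomic_fetch_add(&pd->seq_nr, 1);
}
```

With the atomic read-modify-write, concurrent submitters each still get a unique, monotonically assigned number, which is all the reorder logic needs from this field.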
+46 -56
include/linux/slab.h
··· 388 388 /** 389 389 * kmalloc - allocate memory 390 390 * @size: how many bytes of memory are required. 391 - * @flags: the type of memory to allocate (see kcalloc). 391 + * @flags: the type of memory to allocate. 392 392 * 393 393 * kmalloc is the normal method of allocating memory 394 394 * for objects smaller than page size in the kernel. 395 + * 396 + * The @flags argument may be one of: 397 + * 398 + * %GFP_USER - Allocate memory on behalf of user. May sleep. 399 + * 400 + * %GFP_KERNEL - Allocate normal kernel ram. May sleep. 401 + * 402 + * %GFP_ATOMIC - Allocation will not sleep. May use emergency pools. 403 + * For example, use this inside interrupt handlers. 404 + * 405 + * %GFP_HIGHUSER - Allocate pages from high memory. 406 + * 407 + * %GFP_NOIO - Do not do any I/O at all while trying to get memory. 408 + * 409 + * %GFP_NOFS - Do not make any fs calls while trying to get memory. 410 + * 411 + * %GFP_NOWAIT - Allocation will not sleep. 412 + * 413 + * %GFP_THISNODE - Allocate node-local memory only. 414 + * 415 + * %GFP_DMA - Allocation suitable for DMA. 416 + * Should only be used for kmalloc() caches. Otherwise, use a 417 + * slab created with SLAB_DMA. 418 + * 419 + * Also it is possible to set different flags by OR'ing 420 + * in one or more of the following additional @flags: 421 + * 422 + * %__GFP_COLD - Request cache-cold pages instead of 423 + * trying to return cache-warm pages. 424 + * 425 + * %__GFP_HIGH - This allocation has high priority and may use emergency pools. 426 + * 427 + * %__GFP_NOFAIL - Indicate that this allocation is in no way allowed to fail 428 + * (think twice before using). 429 + * 430 + * %__GFP_NORETRY - If memory is not immediately available, 431 + * then give up at once. 432 + * 433 + * %__GFP_NOWARN - If allocation fails, don't issue any warnings. 434 + * 435 + * %__GFP_REPEAT - If allocation fails initially, try once more before failing. 
436 + * 437 + * There are other flags available as well, but these are not intended 438 + * for general use, and so are not documented here. For a full list of 439 + * potential flags, always refer to linux/gfp.h. 395 440 */ 396 441 static __always_inline void *kmalloc(size_t size, gfp_t flags) 397 442 { ··· 545 500 struct seq_file; 546 501 int cache_show(struct kmem_cache *s, struct seq_file *m); 547 502 void print_slabinfo_header(struct seq_file *m); 548 - 549 - /** 550 - * kmalloc - allocate memory 551 - * @size: how many bytes of memory are required. 552 - * @flags: the type of memory to allocate. 553 - * 554 - * The @flags argument may be one of: 555 - * 556 - * %GFP_USER - Allocate memory on behalf of user. May sleep. 557 - * 558 - * %GFP_KERNEL - Allocate normal kernel ram. May sleep. 559 - * 560 - * %GFP_ATOMIC - Allocation will not sleep. May use emergency pools. 561 - * For example, use this inside interrupt handlers. 562 - * 563 - * %GFP_HIGHUSER - Allocate pages from high memory. 564 - * 565 - * %GFP_NOIO - Do not do any I/O at all while trying to get memory. 566 - * 567 - * %GFP_NOFS - Do not make any fs calls while trying to get memory. 568 - * 569 - * %GFP_NOWAIT - Allocation will not sleep. 570 - * 571 - * %GFP_THISNODE - Allocate node-local memory only. 572 - * 573 - * %GFP_DMA - Allocation suitable for DMA. 574 - * Should only be used for kmalloc() caches. Otherwise, use a 575 - * slab created with SLAB_DMA. 576 - * 577 - * Also it is possible to set different flags by OR'ing 578 - * in one or more of the following additional @flags: 579 - * 580 - * %__GFP_COLD - Request cache-cold pages instead of 581 - * trying to return cache-warm pages. 582 - * 583 - * %__GFP_HIGH - This allocation has high priority and may use emergency pools. 584 - * 585 - * %__GFP_NOFAIL - Indicate that this allocation is in no way allowed to fail 586 - * (think twice before using). 
587 - * 588 - * %__GFP_NORETRY - If memory is not immediately available, 589 - * then give up at once. 590 - * 591 - * %__GFP_NOWARN - If allocation fails, don't issue any warnings. 592 - * 593 - * %__GFP_REPEAT - If allocation fails initially, try once more before failing. 594 - * 595 - * There are other flags available as well, but these are not intended 596 - * for general use, and so are not documented here. For a full list of 597 - * potential flags, always refer to linux/gfp.h. 598 - * 599 - * kmalloc is the normal method of allocating memory 600 - * in the kernel. 601 - */ 602 - static __always_inline void *kmalloc(size_t size, gfp_t flags); 603 503 604 504 /** 605 505 * kmalloc_array - allocate memory for an array.
+27
include/linux/tegra-powergate.h
··· 45 45 46 46 #define TEGRA_POWERGATE_3D0 TEGRA_POWERGATE_3D 47 47 48 + #ifdef CONFIG_ARCH_TEGRA 48 49 int tegra_powergate_is_powered(int id); 49 50 int tegra_powergate_power_on(int id); 50 51 int tegra_powergate_power_off(int id); ··· 53 52 54 53 /* Must be called with clk disabled, and returns with clk enabled */ 55 54 int tegra_powergate_sequence_power_up(int id, struct clk *clk); 55 + #else 56 + static inline int tegra_powergate_is_powered(int id) 57 + { 58 + return -ENOSYS; 59 + } 60 + 61 + static inline int tegra_powergate_power_on(int id) 62 + { 63 + return -ENOSYS; 64 + } 65 + 66 + static inline int tegra_powergate_power_off(int id) 67 + { 68 + return -ENOSYS; 69 + } 70 + 71 + static inline int tegra_powergate_remove_clamping(int id) 72 + { 73 + return -ENOSYS; 74 + } 75 + 76 + static inline int tegra_powergate_sequence_power_up(int id, struct clk *clk) 77 + { 78 + return -ENOSYS; 79 + } 80 + #endif 56 81 57 82 #endif /* _MACH_TEGRA_POWERGATE_H_ */
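The tegra-powergate hunk above follows the usual kernel pattern for compiled-out subsystems: when CONFIG_ARCH_TEGRA is not set, static inline stubs return -ENOSYS so callers build and link unchanged. A minimal userspace sketch of that pattern (CONFIG_ARCH_FOO and foo_power_on() are hypothetical names, not part of this patch):

```c
#include <errno.h>

/* When the subsystem is configured out, provide static inline stubs
 * that fail with -ENOSYS instead of leaving callers with undefined
 * references. CONFIG_ARCH_FOO is deliberately undefined here. */
#ifdef CONFIG_ARCH_FOO
int foo_power_on(int id);
#else
static inline int foo_power_on(int id)
{
	(void)id;		/* unused in the stub */
	return -ENOSYS;
}
#endif
```

Callers can then test the return value uniformly, whichever variant is compiled in.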
+3 -2
include/trace/ftrace.h
··· 372 372 __data_size += (len) * sizeof(type); 373 373 374 374 #undef __string 375 - #define __string(item, src) __dynamic_array(char, item, strlen(src) + 1) 375 + #define __string(item, src) __dynamic_array(char, item, \ 376 + strlen((src) ? (const char *)(src) : "(null)") + 1) 376 377 377 378 #undef DECLARE_EVENT_CLASS 378 379 #define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print) \ ··· 502 501 503 502 #undef __assign_str 504 503 #define __assign_str(dst, src) \ 505 - strcpy(__get_str(dst), src); 504 + strcpy(__get_str(dst), (src) ? (const char *)(src) : "(null)"); 506 505 507 506 #undef TP_fast_assign 508 507 #define TP_fast_assign(args...) args
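The ftrace.h change guards __string() and __assign_str() against a NULL source, since strlen(NULL) or strcpy(..., NULL) would oops; a "(null)" placeholder is recorded instead. A userspace analog of that guard (safe_str() and str_field_len() are illustrative helpers, not kernel APIs):

```c
#include <string.h>

/* Substitute a printable placeholder for NULL, mirroring the
 * "(null)" fallback added to __string()/__assign_str(). */
static const char *safe_str(const char *s)
{
	return s ? s : "(null)";
}

/* Bytes reserved for the dynamic string field, including the NUL. */
static size_t str_field_len(const char *src)
{
	return strlen(safe_str(src)) + 1;
}
```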
+31 -4
kernel/cgroup.c
··· 90 90 static DEFINE_MUTEX(cgroup_root_mutex); 91 91 92 92 /* 93 + * cgroup destruction makes heavy use of work items and there can be a lot 94 + * of concurrent destructions. Use a separate workqueue so that cgroup 95 + * destruction work items don't end up filling up max_active of system_wq 96 + * which may lead to deadlock. 97 + */ 98 + static struct workqueue_struct *cgroup_destroy_wq; 99 + 100 + /* 93 101 * Generate an array of cgroup subsystem pointers. At boot time, this is 94 102 * populated with the built in subsystems, and modular subsystems are 95 103 * registered after that. The mutable section of this array is protected by ··· 199 191 static int cgroup_destroy_locked(struct cgroup *cgrp); 200 192 static int cgroup_addrm_files(struct cgroup *cgrp, struct cftype cfts[], 201 193 bool is_add); 194 + static int cgroup_file_release(struct inode *inode, struct file *file); 202 195 203 196 /** 204 197 * cgroup_css - obtain a cgroup's css for the specified subsystem ··· 880 871 struct cgroup *cgrp = container_of(head, struct cgroup, rcu_head); 881 872 882 873 INIT_WORK(&cgrp->destroy_work, cgroup_free_fn); 883 - schedule_work(&cgrp->destroy_work); 874 + queue_work(cgroup_destroy_wq, &cgrp->destroy_work); 884 875 } 885 876 886 877 static void cgroup_diput(struct dentry *dentry, struct inode *inode) ··· 2430 2421 .read = seq_read, 2431 2422 .write = cgroup_file_write, 2432 2423 .llseek = seq_lseek, 2433 - .release = single_release, 2424 + .release = cgroup_file_release, 2434 2425 }; 2435 2426 2436 2427 static int cgroup_file_open(struct inode *inode, struct file *file) ··· 2491 2482 ret = cft->release(inode, file); 2492 2483 if (css->ss) 2493 2484 css_put(css); 2485 + if (file->f_op == &cgroup_seqfile_operations) 2486 + single_release(inode, file); 2494 2487 return ret; 2495 2488 } 2496 2489 ··· 4260 4249 * css_put(). dput() requires process context which we don't have. 
4261 4250 */ 4262 4251 INIT_WORK(&css->destroy_work, css_free_work_fn); 4263 - schedule_work(&css->destroy_work); 4252 + queue_work(cgroup_destroy_wq, &css->destroy_work); 4264 4253 } 4265 4254 4266 4255 static void css_release(struct percpu_ref *ref) ··· 4550 4539 container_of(ref, struct cgroup_subsys_state, refcnt); 4551 4540 4552 4541 INIT_WORK(&css->destroy_work, css_killed_work_fn); 4553 - schedule_work(&css->destroy_work); 4542 + queue_work(cgroup_destroy_wq, &css->destroy_work); 4554 4543 } 4555 4544 4556 4545 /** ··· 5073 5062 5074 5063 return err; 5075 5064 } 5065 + 5066 + static int __init cgroup_wq_init(void) 5067 + { 5068 + /* 5069 + * There isn't much point in executing destruction path in 5070 + * parallel. Good chunk is serialized with cgroup_mutex anyway. 5071 + * Use 1 for @max_active. 5072 + * 5073 + * We would prefer to do this in cgroup_init() above, but that 5074 + * is called before init_workqueues(): so leave this until after. 5075 + */ 5076 + cgroup_destroy_wq = alloc_workqueue("cgroup_destroy", 0, 1); 5077 + BUG_ON(!cgroup_destroy_wq); 5078 + return 0; 5079 + } 5080 + core_initcall(cgroup_wq_init); 5076 5081 5077 5082 /* 5078 5083 * proc_cgroup_show()
+6 -2
kernel/cpuset.c
··· 1033 1033 need_loop = task_has_mempolicy(tsk) || 1034 1034 !nodes_intersects(*newmems, tsk->mems_allowed); 1035 1035 1036 - if (need_loop) 1036 + if (need_loop) { 1037 + local_irq_disable(); 1037 1038 write_seqcount_begin(&tsk->mems_allowed_seq); 1039 + } 1038 1040 1039 1041 nodes_or(tsk->mems_allowed, tsk->mems_allowed, *newmems); 1040 1042 mpol_rebind_task(tsk, newmems, MPOL_REBIND_STEP1); ··· 1044 1042 mpol_rebind_task(tsk, newmems, MPOL_REBIND_STEP2); 1045 1043 tsk->mems_allowed = *newmems; 1046 1044 1047 - if (need_loop) 1045 + if (need_loop) { 1048 1046 write_seqcount_end(&tsk->mems_allowed_seq); 1047 + local_irq_enable(); 1048 + } 1049 1049 1050 1050 task_unlock(tsk); 1051 1051 }
+2 -2
kernel/extable.c
··· 61 61 static inline int init_kernel_text(unsigned long addr) 62 62 { 63 63 if (addr >= (unsigned long)_sinittext && 64 - addr <= (unsigned long)_einittext) 64 + addr < (unsigned long)_einittext) 65 65 return 1; 66 66 return 0; 67 67 } ··· 69 69 int core_kernel_text(unsigned long addr) 70 70 { 71 71 if (addr >= (unsigned long)_stext && 72 - addr <= (unsigned long)_etext) 72 + addr < (unsigned long)_etext) 73 73 return 1; 74 74 75 75 if (system_state == SYSTEM_BOOTING &&
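The extable.c fix turns both text-range checks into half-open intervals: _einittext and _etext point one past the last byte of their sections, so the old `addr <=` comparison wrongly accepted the first address after the section. The corrected test, as a standalone sketch:

```c
/* Half-open range check [start, end): 'end' itself lies outside the
 * section, matching how _etext/_einittext are defined. */
static int addr_in_section(unsigned long addr,
			   unsigned long start, unsigned long end)
{
	return addr >= start && addr < end;
}
```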
+4 -5
kernel/padata.c
··· 46 46 47 47 static int padata_cpu_hash(struct parallel_data *pd) 48 48 { 49 + unsigned int seq_nr; 49 50 int cpu_index; 50 51 51 52 /* ··· 54 53 * seq_nr mod. number of cpus in use. 55 54 */ 56 55 57 - spin_lock(&pd->seq_lock); 58 - cpu_index = pd->seq_nr % cpumask_weight(pd->cpumask.pcpu); 59 - pd->seq_nr++; 60 - spin_unlock(&pd->seq_lock); 56 + seq_nr = atomic_inc_return(&pd->seq_nr); 57 + cpu_index = seq_nr % cpumask_weight(pd->cpumask.pcpu); 61 58 62 59 return padata_index_to_cpu(pd, cpu_index); 63 60 } ··· 428 429 padata_init_pqueues(pd); 429 430 padata_init_squeues(pd); 430 431 setup_timer(&pd->timer, padata_reorder_timer, (unsigned long)pd); 431 - pd->seq_nr = 0; 432 + atomic_set(&pd->seq_nr, -1); 432 433 atomic_set(&pd->reorder_objects, 0); 433 434 atomic_set(&pd->refcnt, 0); 434 435 pd->pinst = pinst;
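The padata hunk drops seq_lock in favour of an atomic counter: each submitter takes the next sequence number lock-free and maps it onto a CPU index by modulo; initializing the counter to -1 makes the first increment-and-return yield 0, preserving the old numbering. A userspace sketch with C11 atomics (next_cpu_index() is an illustrative stand-in for padata_cpu_hash()):

```c
#include <stdatomic.h>

/* Start at -1 so the first increment-and-return yields 0, like
 * atomic_set(&pd->seq_nr, -1) in the patch. */
static atomic_uint seq_nr = ATOMIC_VAR_INIT((unsigned int)-1);

static unsigned int next_cpu_index(unsigned int ncpus)
{
	/* atomic_fetch_add() returns the old value; +1 gives the
	 * kernel's atomic_inc_return() semantics. Unsigned wraparound
	 * is well defined, so -1 -> 0 on the first call. */
	unsigned int seq = atomic_fetch_add(&seq_nr, 1) + 1;

	return seq % ncpus;
}
```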
+35 -29
kernel/trace/ftrace.c
··· 367 367 368 368 static int __register_ftrace_function(struct ftrace_ops *ops) 369 369 { 370 - if (unlikely(ftrace_disabled)) 371 - return -ENODEV; 372 - 373 370 if (FTRACE_WARN_ON(ops == &global_ops)) 374 371 return -EINVAL; 375 372 ··· 424 427 static int __unregister_ftrace_function(struct ftrace_ops *ops) 425 428 { 426 429 int ret; 427 - 428 - if (ftrace_disabled) 429 - return -ENODEV; 430 430 431 431 if (WARN_ON(!(ops->flags & FTRACE_OPS_FL_ENABLED))) 432 432 return -EBUSY; ··· 2082 2088 static int ftrace_startup(struct ftrace_ops *ops, int command) 2083 2089 { 2084 2090 bool hash_enable = true; 2091 + int ret; 2085 2092 2086 2093 if (unlikely(ftrace_disabled)) 2087 2094 return -ENODEV; 2095 + 2096 + ret = __register_ftrace_function(ops); 2097 + if (ret) 2098 + return ret; 2088 2099 2089 2100 ftrace_start_up++; 2090 2101 command |= FTRACE_UPDATE_CALLS; ··· 2112 2113 return 0; 2113 2114 } 2114 2115 2115 - static void ftrace_shutdown(struct ftrace_ops *ops, int command) 2116 + static int ftrace_shutdown(struct ftrace_ops *ops, int command) 2116 2117 { 2117 2118 bool hash_disable = true; 2119 + int ret; 2118 2120 2119 2121 if (unlikely(ftrace_disabled)) 2120 - return; 2122 + return -ENODEV; 2123 + 2124 + ret = __unregister_ftrace_function(ops); 2125 + if (ret) 2126 + return ret; 2121 2127 2122 2128 ftrace_start_up--; 2123 2129 /* ··· 2157 2153 } 2158 2154 2159 2155 if (!command || !ftrace_enabled) 2160 - return; 2156 + return 0; 2161 2157 2162 2158 ftrace_run_update_code(command); 2159 + return 0; 2163 2160 } 2164 2161 2165 2162 static void ftrace_startup_sysctl(void) ··· 3065 3060 if (i == FTRACE_FUNC_HASHSIZE) 3066 3061 return; 3067 3062 3068 - ret = __register_ftrace_function(&trace_probe_ops); 3069 - if (!ret) 3070 - ret = ftrace_startup(&trace_probe_ops, 0); 3063 + ret = ftrace_startup(&trace_probe_ops, 0); 3071 3064 3072 3065 ftrace_probe_registered = 1; 3073 3066 } 3074 3067 3075 3068 static void __disable_ftrace_function_probe(void) 3076 3069 { 3077 - 
int ret; 3078 3070 int i; 3079 3071 3080 3072 if (!ftrace_probe_registered) ··· 3084 3082 } 3085 3083 3086 3084 /* no more funcs left */ 3087 - ret = __unregister_ftrace_function(&trace_probe_ops); 3088 - if (!ret) 3089 - ftrace_shutdown(&trace_probe_ops, 0); 3085 + ftrace_shutdown(&trace_probe_ops, 0); 3090 3086 3091 3087 ftrace_probe_registered = 0; 3092 3088 } ··· 4366 4366 static inline int ftrace_init_dyn_debugfs(struct dentry *d_tracer) { return 0; } 4367 4367 static inline void ftrace_startup_enable(int command) { } 4368 4368 /* Keep as macros so we do not need to define the commands */ 4369 - # define ftrace_startup(ops, command) \ 4370 - ({ \ 4371 - (ops)->flags |= FTRACE_OPS_FL_ENABLED; \ 4372 - 0; \ 4369 + # define ftrace_startup(ops, command) \ 4370 + ({ \ 4371 + int ___ret = __register_ftrace_function(ops); \ 4372 + if (!___ret) \ 4373 + (ops)->flags |= FTRACE_OPS_FL_ENABLED; \ 4374 + ___ret; \ 4373 4375 }) 4374 - # define ftrace_shutdown(ops, command) do { } while (0) 4376 + # define ftrace_shutdown(ops, command) __unregister_ftrace_function(ops) 4377 + 4375 4378 # define ftrace_startup_sysctl() do { } while (0) 4376 4379 # define ftrace_shutdown_sysctl() do { } while (0) 4377 4380 ··· 4783 4780 4784 4781 mutex_lock(&ftrace_lock); 4785 4782 4786 - ret = __register_ftrace_function(ops); 4787 - if (!ret) 4788 - ret = ftrace_startup(ops, 0); 4783 + ret = ftrace_startup(ops, 0); 4789 4784 4790 4785 mutex_unlock(&ftrace_lock); 4791 4786 ··· 4802 4801 int ret; 4803 4802 4804 4803 mutex_lock(&ftrace_lock); 4805 - ret = __unregister_ftrace_function(ops); 4806 - if (!ret) 4807 - ftrace_shutdown(ops, 0); 4804 + ret = ftrace_shutdown(ops, 0); 4808 4805 mutex_unlock(&ftrace_lock); 4809 4806 4810 4807 return ret; ··· 4996 4997 return NOTIFY_DONE; 4997 4998 } 4998 4999 5000 + /* Just a place holder for function graph */ 5001 + static struct ftrace_ops fgraph_ops __read_mostly = { 5002 + .func = ftrace_stub, 5003 + .flags = FTRACE_OPS_FL_STUB | FTRACE_OPS_FL_GLOBAL 
| 5004 + FTRACE_OPS_FL_RECURSION_SAFE, 5005 + }; 5006 + 4999 5007 int register_ftrace_graph(trace_func_graph_ret_t retfunc, 5000 5008 trace_func_graph_ent_t entryfunc) 5001 5009 { ··· 5029 5023 ftrace_graph_return = retfunc; 5030 5024 ftrace_graph_entry = entryfunc; 5031 5025 5032 - ret = ftrace_startup(&global_ops, FTRACE_START_FUNC_RET); 5026 + ret = ftrace_startup(&fgraph_ops, FTRACE_START_FUNC_RET); 5033 5027 5034 5028 out: 5035 5029 mutex_unlock(&ftrace_lock); ··· 5046 5040 ftrace_graph_active--; 5047 5041 ftrace_graph_return = (trace_func_graph_ret_t)ftrace_stub; 5048 5042 ftrace_graph_entry = ftrace_graph_entry_stub; 5049 - ftrace_shutdown(&global_ops, FTRACE_STOP_FUNC_RET); 5043 + ftrace_shutdown(&fgraph_ops, FTRACE_STOP_FUNC_RET); 5050 5044 unregister_pm_notifier(&ftrace_suspend_notifier); 5051 5045 unregister_trace_sched_switch(ftrace_graph_probe_sched_switch, NULL); 5052 5046
+37 -13
kernel/workqueue.c
··· 305 305 /* I: attributes used when instantiating standard unbound pools on demand */ 306 306 static struct workqueue_attrs *unbound_std_wq_attrs[NR_STD_WORKER_POOLS]; 307 307 308 + /* I: attributes used when instantiating ordered pools on demand */ 309 + static struct workqueue_attrs *ordered_wq_attrs[NR_STD_WORKER_POOLS]; 310 + 308 311 struct workqueue_struct *system_wq __read_mostly; 309 312 EXPORT_SYMBOL(system_wq); 310 313 struct workqueue_struct *system_highpri_wq __read_mostly; ··· 521 518 static inline void debug_work_deactivate(struct work_struct *work) { } 522 519 #endif 523 520 524 - /* allocate ID and assign it to @pool */ 521 + /** 522 + * worker_pool_assign_id - allocate ID and assing it to @pool 523 + * @pool: the pool pointer of interest 524 + * 525 + * Returns 0 if ID in [0, WORK_OFFQ_POOL_NONE) is allocated and assigned 526 + * successfully, -errno on failure. 527 + */ 525 528 static int worker_pool_assign_id(struct worker_pool *pool) 526 529 { 527 530 int ret; 528 531 529 532 lockdep_assert_held(&wq_pool_mutex); 530 533 531 - ret = idr_alloc(&worker_pool_idr, pool, 0, 0, GFP_KERNEL); 534 + ret = idr_alloc(&worker_pool_idr, pool, 0, WORK_OFFQ_POOL_NONE, 535 + GFP_KERNEL); 532 536 if (ret >= 0) { 533 537 pool->id = ret; 534 538 return 0; ··· 1330 1320 1331 1321 debug_work_activate(work); 1332 1322 1333 - /* if dying, only works from the same workqueue are allowed */ 1323 + /* if draining, only works from the same workqueue are allowed */ 1334 1324 if (unlikely(wq->flags & __WQ_DRAINING) && 1335 1325 WARN_ON_ONCE(!is_chained_work(wq))) 1336 1326 return; ··· 1746 1736 if (IS_ERR(worker->task)) 1747 1737 goto fail; 1748 1738 1739 + set_user_nice(worker->task, pool->attrs->nice); 1740 + 1741 + /* prevent userland from meddling with cpumask of workqueue workers */ 1742 + worker->task->flags |= PF_NO_SETAFFINITY; 1743 + 1749 1744 /* 1750 1745 * set_cpus_allowed_ptr() will fail if the cpumask doesn't have any 1751 1746 * online CPUs. 
It'll be re-applied when any of the CPUs come up. 1752 1747 */ 1753 - set_user_nice(worker->task, pool->attrs->nice); 1754 1748 set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask); 1755 - 1756 - /* prevent userland from meddling with cpumask of workqueue workers */ 1757 - worker->task->flags |= PF_NO_SETAFFINITY; 1758 1749 1759 1750 /* 1760 1751 * The caller is responsible for ensuring %POOL_DISASSOCIATED ··· 4117 4106 static int alloc_and_link_pwqs(struct workqueue_struct *wq) 4118 4107 { 4119 4108 bool highpri = wq->flags & WQ_HIGHPRI; 4120 - int cpu; 4109 + int cpu, ret; 4121 4110 4122 4111 if (!(wq->flags & WQ_UNBOUND)) { 4123 4112 wq->cpu_pwqs = alloc_percpu(struct pool_workqueue); ··· 4137 4126 mutex_unlock(&wq->mutex); 4138 4127 } 4139 4128 return 0; 4129 + } else if (wq->flags & __WQ_ORDERED) { 4130 + ret = apply_workqueue_attrs(wq, ordered_wq_attrs[highpri]); 4131 + /* there should only be single pwq for ordering guarantee */ 4132 + WARN(!ret && (wq->pwqs.next != &wq->dfl_pwq->pwqs_node || 4133 + wq->pwqs.prev != &wq->dfl_pwq->pwqs_node), 4134 + "ordering guarantee broken for workqueue %s\n", wq->name); 4135 + return ret; 4140 4136 } else { 4141 4137 return apply_workqueue_attrs(wq, unbound_std_wq_attrs[highpri]); 4142 4138 } ··· 5027 5009 int std_nice[NR_STD_WORKER_POOLS] = { 0, HIGHPRI_NICE_LEVEL }; 5028 5010 int i, cpu; 5029 5011 5030 - /* make sure we have enough bits for OFFQ pool ID */ 5031 - BUILD_BUG_ON((1LU << (BITS_PER_LONG - WORK_OFFQ_POOL_SHIFT)) < 5032 - WORK_CPU_END * NR_STD_WORKER_POOLS); 5033 - 5034 5012 WARN_ON(__alignof__(struct pool_workqueue) < __alignof__(long long)); 5035 5013 5036 5014 pwq_cache = KMEM_CACHE(pool_workqueue, SLAB_PANIC); ··· 5065 5051 } 5066 5052 } 5067 5053 5068 - /* create default unbound wq attrs */ 5054 + /* create default unbound and ordered wq attrs */ 5069 5055 for (i = 0; i < NR_STD_WORKER_POOLS; i++) { 5070 5056 struct workqueue_attrs *attrs; 5071 5057 5072 5058 BUG_ON(!(attrs = 
alloc_workqueue_attrs(GFP_KERNEL))); 5073 5059 attrs->nice = std_nice[i]; 5074 5060 unbound_std_wq_attrs[i] = attrs; 5061 + 5062 + /* 5063 + * An ordered wq should have only one pwq as ordering is 5064 + * guaranteed by max_active which is enforced by pwqs. 5065 + * Turn off NUMA so that dfl_pwq is used for all nodes. 5066 + */ 5067 + BUG_ON(!(attrs = alloc_workqueue_attrs(GFP_KERNEL))); 5068 + attrs->nice = std_nice[i]; 5069 + attrs->no_numa = true; 5070 + ordered_wq_attrs[i] = attrs; 5075 5071 } 5076 5072 5077 5073 system_wq = alloc_workqueue("events", 0, 0);
+1 -8
lib/lockref.c
··· 1 1 #include <linux/export.h> 2 2 #include <linux/lockref.h> 3 + #include <linux/mutex.h> 3 4 4 5 #if USE_CMPXCHG_LOCKREF 5 6 ··· 10 9 */ 11 10 #ifndef cmpxchg64_relaxed 12 11 # define cmpxchg64_relaxed cmpxchg64 13 - #endif 14 - 15 - /* 16 - * Allow architectures to override the default cpu_relax() within CMPXCHG_LOOP. 17 - * This is useful for architectures with an expensive cpu_relax(). 18 - */ 19 - #ifndef arch_mutex_cpu_relax 20 - # define arch_mutex_cpu_relax() cpu_relax() 21 12 #endif 22 13 23 14 /*
+1 -29
security/integrity/digsig.c
··· 13 13 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 14 14 15 15 #include <linux/err.h> 16 - #include <linux/sched.h> 17 16 #include <linux/rbtree.h> 18 - #include <linux/cred.h> 19 17 #include <linux/key-type.h> 20 18 #include <linux/digsig.h> 21 19 ··· 21 23 22 24 static struct key *keyring[INTEGRITY_KEYRING_MAX]; 23 25 24 - #ifdef CONFIG_IMA_TRUSTED_KEYRING 25 - static const char *keyring_name[INTEGRITY_KEYRING_MAX] = { 26 - ".evm", 27 - ".module", 28 - ".ima", 29 - }; 30 - #else 31 26 static const char *keyring_name[INTEGRITY_KEYRING_MAX] = { 32 27 "_evm", 33 28 "_module", 34 29 "_ima", 35 30 }; 36 - #endif 37 31 38 32 int integrity_digsig_verify(const unsigned int id, const char *sig, int siglen, 39 33 const char *digest, int digestlen) ··· 35 45 36 46 if (!keyring[id]) { 37 47 keyring[id] = 38 - request_key(&key_type_keyring, keyring_name[id], NULL); 48 + request_key(&key_type_keyring, keyring_name[id], NULL); 39 49 if (IS_ERR(keyring[id])) { 40 50 int err = PTR_ERR(keyring[id]); 41 51 pr_err("no %s keyring: %d\n", keyring_name[id], err); ··· 55 65 } 56 66 57 67 return -EOPNOTSUPP; 58 - } 59 - 60 - int integrity_init_keyring(const unsigned int id) 61 - { 62 - const struct cred *cred = current_cred(); 63 - const struct user_struct *user = cred->user; 64 - 65 - keyring[id] = keyring_alloc(keyring_name[id], KUIDT_INIT(0), 66 - KGIDT_INIT(0), cred, 67 - ((KEY_POS_ALL & ~KEY_POS_SETATTR) | 68 - KEY_USR_VIEW | KEY_USR_READ), 69 - KEY_ALLOC_NOT_IN_QUOTA, user->uid_keyring); 70 - if (!IS_ERR(keyring[id])) 71 - set_bit(KEY_FLAG_TRUSTED_ONLY, &keyring[id]->flags); 72 - else 73 - pr_info("Can't allocate %s keyring (%ld)\n", 74 - keyring_name[id], PTR_ERR(keyring[id])); 75 - return 0; 76 68 }
-8
security/integrity/ima/Kconfig
··· 123 123 For more information on integrity appraisal refer to: 124 124 <http://linux-ima.sourceforge.net> 125 125 If unsure, say N. 126 - 127 - config IMA_TRUSTED_KEYRING 128 - bool "Require all keys on the _ima keyring be signed" 129 - depends on IMA_APPRAISE && SYSTEM_TRUSTED_KEYRING 130 - default y 131 - help 132 - This option requires that all keys added to the _ima 133 - keyring be signed by a key on the system trusted keyring.
+4 -2
security/integrity/ima/ima.h
··· 26 26 27 27 #include "../integrity.h" 28 28 29 - enum ima_show_type { IMA_SHOW_BINARY, IMA_SHOW_ASCII }; 29 + enum ima_show_type { IMA_SHOW_BINARY, IMA_SHOW_BINARY_NO_FIELD_LEN, 30 + IMA_SHOW_ASCII }; 30 31 enum tpm_pcrs { TPM_PCR0 = 0, TPM_PCR8 = 8 }; 31 32 32 33 /* digest size for IMA, fits SHA1 or MD5 */ ··· 98 97 const char *op, struct inode *inode, 99 98 const unsigned char *filename); 100 99 int ima_calc_file_hash(struct file *file, struct ima_digest_data *hash); 101 - int ima_calc_field_array_hash(struct ima_field_data *field_data, int num_fields, 100 + int ima_calc_field_array_hash(struct ima_field_data *field_data, 101 + struct ima_template_desc *desc, int num_fields, 102 102 struct ima_digest_data *hash); 103 103 int __init ima_calc_boot_aggregate(struct ima_digest_data *hash); 104 104 void ima_add_violation(struct file *file, const unsigned char *filename,
+1
security/integrity/ima/ima_api.c
··· 94 94 /* this function uses default algo */ 95 95 hash.hdr.algo = HASH_ALGO_SHA1; 96 96 result = ima_calc_field_array_hash(&entry->template_data[0], 97 + entry->template_desc, 97 98 num_fields, &hash.hdr); 98 99 if (result < 0) { 99 100 integrity_audit_msg(AUDIT_INTEGRITY_PCR, inode,
-11
security/integrity/ima/ima_appraise.c
··· 381 381 } 382 382 return result; 383 383 } 384 - 385 - #ifdef CONFIG_IMA_TRUSTED_KEYRING 386 - static int __init init_ima_keyring(void) 387 - { 388 - int ret; 389 - 390 - ret = integrity_init_keyring(INTEGRITY_KEYRING_IMA); 391 - return 0; 392 - } 393 - late_initcall(init_ima_keyring); 394 - #endif
+12 -5
security/integrity/ima/ima_crypto.c
··· 140 140 * Calculate the hash of template data 141 141 */ 142 142 static int ima_calc_field_array_hash_tfm(struct ima_field_data *field_data, 143 + struct ima_template_desc *td, 143 144 int num_fields, 144 145 struct ima_digest_data *hash, 145 146 struct crypto_shash *tfm) ··· 161 160 return rc; 162 161 163 162 for (i = 0; i < num_fields; i++) { 164 - rc = crypto_shash_update(&desc.shash, 165 - (const u8 *) &field_data[i].len, 166 - sizeof(field_data[i].len)); 163 + if (strcmp(td->name, IMA_TEMPLATE_IMA_NAME) != 0) { 164 + rc = crypto_shash_update(&desc.shash, 165 + (const u8 *) &field_data[i].len, 166 + sizeof(field_data[i].len)); 167 + if (rc) 168 + break; 169 + } 167 170 rc = crypto_shash_update(&desc.shash, field_data[i].data, 168 171 field_data[i].len); 169 172 if (rc) ··· 180 175 return rc; 181 176 } 182 177 183 - int ima_calc_field_array_hash(struct ima_field_data *field_data, int num_fields, 178 + int ima_calc_field_array_hash(struct ima_field_data *field_data, 179 + struct ima_template_desc *desc, int num_fields, 184 180 struct ima_digest_data *hash) 185 181 { 186 182 struct crypto_shash *tfm; ··· 191 185 if (IS_ERR(tfm)) 192 186 return PTR_ERR(tfm); 193 187 194 - rc = ima_calc_field_array_hash_tfm(field_data, num_fields, hash, tfm); 188 + rc = ima_calc_field_array_hash_tfm(field_data, desc, num_fields, 189 + hash, tfm); 195 190 196 191 ima_free_tfm(tfm); 197 192
+11 -3
security/integrity/ima/ima_fs.c
··· 120 120 struct ima_template_entry *e; 121 121 int namelen; 122 122 u32 pcr = CONFIG_IMA_MEASURE_PCR_IDX; 123 + bool is_ima_template = false; 123 124 int i; 124 125 125 126 /* get entry */ ··· 146 145 ima_putc(m, e->template_desc->name, namelen); 147 146 148 147 /* 5th: template length (except for 'ima' template) */ 149 - if (strcmp(e->template_desc->name, IMA_TEMPLATE_IMA_NAME) != 0) 148 + if (strcmp(e->template_desc->name, IMA_TEMPLATE_IMA_NAME) == 0) 149 + is_ima_template = true; 150 + 151 + if (!is_ima_template) 150 152 ima_putc(m, &e->template_data_len, 151 153 sizeof(e->template_data_len)); 152 154 153 155 /* 6th: template specific data */ 154 156 for (i = 0; i < e->template_desc->num_fields; i++) { 155 - e->template_desc->fields[i]->field_show(m, IMA_SHOW_BINARY, 156 - &e->template_data[i]); 157 + enum ima_show_type show = IMA_SHOW_BINARY; 158 + struct ima_template_field *field = e->template_desc->fields[i]; 159 + 160 + if (is_ima_template && strcmp(field->field_id, "d") == 0) 161 + show = IMA_SHOW_BINARY_NO_FIELD_LEN; 162 + field->field_show(m, show, &e->template_data[i]); 157 163 } 158 164 return 0; 159 165 }
+14 -7
security/integrity/ima/ima_template.c
··· 90 90 return NULL; 91 91 } 92 92 93 - static int template_fmt_size(char *template_fmt) 93 + static int template_fmt_size(const char *template_fmt) 94 94 { 95 95 char c; 96 96 int template_fmt_len = strlen(template_fmt); ··· 106 106 return j + 1; 107 107 } 108 108 109 - static int template_desc_init_fields(char *template_fmt, 109 + static int template_desc_init_fields(const char *template_fmt, 110 110 struct ima_template_field ***fields, 111 111 int *num_fields) 112 112 { 113 - char *c, *template_fmt_ptr = template_fmt; 113 + char *c, *template_fmt_copy; 114 114 int template_num_fields = template_fmt_size(template_fmt); 115 115 int i, result = 0; 116 116 117 117 if (template_num_fields > IMA_TEMPLATE_NUM_FIELDS_MAX) 118 118 return -EINVAL; 119 119 120 + /* copying is needed as strsep() modifies the original buffer */ 121 + template_fmt_copy = kstrdup(template_fmt, GFP_KERNEL); 122 + if (template_fmt_copy == NULL) 123 + return -ENOMEM; 124 + 120 125 *fields = kzalloc(template_num_fields * sizeof(*fields), GFP_KERNEL); 121 126 if (*fields == NULL) { 122 127 result = -ENOMEM; 123 128 goto out; 124 129 } 125 - for (i = 0; (c = strsep(&template_fmt_ptr, "|")) != NULL && 130 + for (i = 0; (c = strsep(&template_fmt_copy, "|")) != NULL && 126 131 i < template_num_fields; i++) { 127 132 struct ima_template_field *f = lookup_template_field(c); 128 133 ··· 138 133 (*fields)[i] = f; 139 134 } 140 135 *num_fields = i; 141 - return 0; 142 136 out: 143 - kfree(*fields); 144 - *fields = NULL; 137 + if (result < 0) { 138 + kfree(*fields); 139 + *fields = NULL; 140 + } 141 + kfree(template_fmt_copy); 145 142 return result; 146 143 } 147 144
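The ima_template.c fix exists because strsep() writes NUL bytes into the buffer it parses, so a const format string must be duplicated first (kstrdup() in the patch). The same pattern in userspace, with a hypothetical count_fields() helper:

```c
#define _DEFAULT_SOURCE		/* for strsep() on glibc */
#include <stdlib.h>
#include <string.h>

/* Count '|'-separated fields without clobbering the caller's string:
 * duplicate first, exactly as the patch kstrdup()s template_fmt. */
static int count_fields(const char *fmt)
{
	char *copy = strdup(fmt);
	char *p = copy;
	int n = 0;

	if (!copy)
		return -1;
	while (strsep(&p, "|") != NULL)
		n++;
	free(copy);		/* free the copy, not the caller's buffer */
	return n;
}
```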
+5 -1
security/integrity/ima/ima_template_lib.c
··· 109 109 enum data_formats datafmt, 110 110 struct ima_field_data *field_data) 111 111 { 112 - ima_putc(m, &field_data->len, sizeof(u32)); 112 + if (show != IMA_SHOW_BINARY_NO_FIELD_LEN) 113 + ima_putc(m, &field_data->len, sizeof(u32)); 114 + 113 115 if (!field_data->len) 114 116 return; 117 + 115 118 ima_putc(m, field_data->data, field_data->len); 116 119 } 117 120 ··· 128 125 ima_show_template_data_ascii(m, show, datafmt, field_data); 129 126 break; 130 127 case IMA_SHOW_BINARY: 128 + case IMA_SHOW_BINARY_NO_FIELD_LEN: 131 129 ima_show_template_data_binary(m, show, datafmt, field_data); 132 130 break; 133 131 default:
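The template_lib change makes the u32 field-length prefix optional when emitting binary data, so the legacy 'ima' template keeps its historical layout while newer templates stay self-describing. A sketch of the two layouts (emit_field() is an illustrative helper, not the kernel function):

```c
#include <string.h>

/* Serialize one field into 'out': either a u32 length prefix plus the
 * data (IMA_SHOW_BINARY) or the raw data only
 * (IMA_SHOW_BINARY_NO_FIELD_LEN). Returns the bytes written. */
static size_t emit_field(unsigned char *out, const void *data,
			 unsigned int len, int with_len)
{
	size_t off = 0;

	if (with_len) {
		memcpy(out, &len, sizeof(len));	/* u32 prefix */
		off += sizeof(len);
	}
	memcpy(out + off, data, len);
	return off + len;
}
```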
-7
security/integrity/integrity.h
··· 137 137 #ifdef CONFIG_INTEGRITY_ASYMMETRIC_KEYS 138 138 int asymmetric_verify(struct key *keyring, const char *sig, 139 139 int siglen, const char *data, int datalen); 140 - 141 - int integrity_init_keyring(const unsigned int id); 142 140 #else 143 141 static inline int asymmetric_verify(struct key *keyring, const char *sig, 144 142 int siglen, const char *data, int datalen) 145 143 { 146 144 return -EOPNOTSUPP; 147 - } 148 - 149 - static int integrity_init_keyring(const unsigned int id) 150 - { 151 - return 0; 152 145 } 153 146 #endif 154 147
+6 -9
sound/firewire/amdtp.c
··· 434 434 return; 435 435 index = s->packet_index; 436 436 437 + /* this module generate empty packet for 'no data' */ 437 438 syt = calculate_syt(s, cycle); 438 - if (!(s->flags & CIP_BLOCKING)) { 439 + if (!(s->flags & CIP_BLOCKING)) 439 440 data_blocks = calculate_data_blocks(s); 440 - } else { 441 - if (syt != 0xffff) { 442 - data_blocks = s->syt_interval; 443 - } else { 444 - data_blocks = 0; 445 - syt = 0xffffff; 446 - } 447 - } 441 + else if (syt != 0xffff) 442 + data_blocks = s->syt_interval; 443 + else 444 + data_blocks = 0; 448 445 449 446 buffer = s->buffer.packets[index].buffer; 450 447 buffer[0] = cpu_to_be32(ACCESS_ONCE(s->source_node_id_field) |
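The amdtp.c rewrite flattens the nested blocking-mode branches: in blocking mode a SYT of 0xffff means "no data this cycle", so zero data blocks are sent, otherwise a full syt_interval. The simplified decision, as a standalone sketch (calc_blocks() and its parameters are illustrative, not the driver's API):

```c
/* Data blocks per packet: non-blocking streams use their own computed
 * count; blocking streams send syt_interval blocks, or none when the
 * SYT field carries the 0xffff "no info" value. */
static unsigned int calc_blocks(int blocking, unsigned int syt,
				unsigned int syt_interval,
				unsigned int computed_blocks)
{
	if (!blocking)
		return computed_blocks;
	return (syt != 0xffff) ? syt_interval : 0;
}
```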
-1
sound/pci/hda/hda_codec.h
··· 698 698 unsigned int in_reset:1; /* during reset operation */ 699 699 unsigned int power_keep_link_on:1; /* don't power off HDA link */ 700 700 unsigned int no_response_fallback:1; /* don't fallback at RIRB error */ 701 - unsigned int avoid_link_reset:1; /* don't reset link at runtime PM */ 702 701 703 702 int primary_dig_out_type; /* primary digital out PCM type */ 704 703 };
+58 -21
sound/pci/hda/hda_generic.c
··· 2506 2506 2507 2507 for (i = 0; i < num_pins; i++) { 2508 2508 hda_nid_t pin = pins[i]; 2509 - if (pin == spec->hp_mic_pin) { 2510 - int ret = create_hp_mic_jack_mode(codec, pin); 2511 - if (ret < 0) 2512 - return ret; 2509 + if (pin == spec->hp_mic_pin) 2513 2510 continue; 2514 - } 2515 2511 if (get_out_jack_num_items(codec, pin) > 1) { 2516 2512 struct snd_kcontrol_new *knew; 2517 2513 char name[SNDRV_CTL_ELEM_ID_NAME_MAXLEN]; ··· 2760 2764 val &= ~(AC_PINCTL_VREFEN | PIN_HP); 2761 2765 val |= get_vref_idx(vref_caps, idx) | PIN_IN; 2762 2766 } else 2763 - val = snd_hda_get_default_vref(codec, nid); 2767 + val = snd_hda_get_default_vref(codec, nid) | PIN_IN; 2764 2768 } 2765 2769 snd_hda_set_pin_ctl_cache(codec, nid, val); 2766 2770 call_hp_automute(codec, NULL); ··· 2780 2784 struct hda_gen_spec *spec = codec->spec; 2781 2785 struct snd_kcontrol_new *knew; 2782 2786 2783 - if (get_out_jack_num_items(codec, pin) <= 1 && 2784 - get_in_jack_num_items(codec, pin) <= 1) 2785 - return 0; /* no need */ 2786 2787 knew = snd_hda_gen_add_kctl(spec, "Headphone Mic Jack Mode", 2787 2788 &hp_mic_jack_mode_enum); 2788 2789 if (!knew) ··· 2808 2815 return 0; 2809 2816 } 2810 2817 2818 + /* return true if either a volume or a mute amp is found for the given 2819 + * aamix path; the amp has to be either in the mixer node or its direct leaf 2820 + */ 2821 + static bool look_for_mix_leaf_ctls(struct hda_codec *codec, hda_nid_t mix_nid, 2822 + hda_nid_t pin, unsigned int *mix_val, 2823 + unsigned int *mute_val) 2824 + { 2825 + int idx, num_conns; 2826 + const hda_nid_t *list; 2827 + hda_nid_t nid; 2828 + 2829 + idx = snd_hda_get_conn_index(codec, mix_nid, pin, true); 2830 + if (idx < 0) 2831 + return false; 2832 + 2833 + *mix_val = *mute_val = 0; 2834 + if (nid_has_volume(codec, mix_nid, HDA_INPUT)) 2835 + *mix_val = HDA_COMPOSE_AMP_VAL(mix_nid, 3, idx, HDA_INPUT); 2836 + if (nid_has_mute(codec, mix_nid, HDA_INPUT)) 2837 + *mute_val = HDA_COMPOSE_AMP_VAL(mix_nid, 3, idx, 
HDA_INPUT); 2838 + if (*mix_val && *mute_val) 2839 + return true; 2840 + 2841 + /* check leaf node */ 2842 + num_conns = snd_hda_get_conn_list(codec, mix_nid, &list); 2843 + if (num_conns < idx) 2844 + return false; 2845 + nid = list[idx]; 2846 + if (!*mix_val && nid_has_volume(codec, nid, HDA_OUTPUT)) 2847 + *mix_val = HDA_COMPOSE_AMP_VAL(nid, 3, 0, HDA_OUTPUT); 2848 + if (!*mute_val && nid_has_mute(codec, nid, HDA_OUTPUT)) 2849 + *mute_val = HDA_COMPOSE_AMP_VAL(nid, 3, 0, HDA_OUTPUT); 2850 + 2851 + return *mix_val || *mute_val; 2852 + } 2853 + 2811 2854 /* create input playback/capture controls for the given pin */ 2812 2855 static int new_analog_input(struct hda_codec *codec, int input_idx, 2813 2856 hda_nid_t pin, const char *ctlname, int ctlidx, ··· 2851 2822 { 2852 2823 struct hda_gen_spec *spec = codec->spec; 2853 2824 struct nid_path *path; 2854 - unsigned int val; 2825 + unsigned int mix_val, mute_val; 2855 2826 int err, idx; 2856 2827 2857 - if (!nid_has_volume(codec, mix_nid, HDA_INPUT) && 2858 - !nid_has_mute(codec, mix_nid, HDA_INPUT)) 2859 - return 0; /* no need for analog loopback */ 2828 + if (!look_for_mix_leaf_ctls(codec, mix_nid, pin, &mix_val, &mute_val)) 2829 + return 0; 2860 2830 2861 2831 path = snd_hda_add_new_path(codec, pin, mix_nid, 0); 2862 2832 if (!path) ··· 2864 2836 spec->loopback_paths[input_idx] = snd_hda_get_path_idx(codec, path); 2865 2837 2866 2838 idx = path->idx[path->depth - 1]; 2867 - if (nid_has_volume(codec, mix_nid, HDA_INPUT)) { 2868 - val = HDA_COMPOSE_AMP_VAL(mix_nid, 3, idx, HDA_INPUT); 2869 - err = __add_pb_vol_ctrl(spec, HDA_CTL_WIDGET_VOL, ctlname, ctlidx, val); 2839 + if (mix_val) { 2840 + err = __add_pb_vol_ctrl(spec, HDA_CTL_WIDGET_VOL, ctlname, ctlidx, mix_val); 2870 2841 if (err < 0) 2871 2842 return err; 2872 - path->ctls[NID_PATH_VOL_CTL] = val; 2843 + path->ctls[NID_PATH_VOL_CTL] = mix_val; 2873 2844 } 2874 2845 2875 - if (nid_has_mute(codec, mix_nid, HDA_INPUT)) { 2876 - val = HDA_COMPOSE_AMP_VAL(mix_nid, 
3, idx, HDA_INPUT); 2877 - err = __add_pb_sw_ctrl(spec, HDA_CTL_WIDGET_MUTE, ctlname, ctlidx, val); 2846 + if (mute_val) { 2847 + err = __add_pb_sw_ctrl(spec, HDA_CTL_WIDGET_MUTE, ctlname, ctlidx, mute_val); 2878 2848 if (err < 0) 2879 2849 return err; 2880 - path->ctls[NID_PATH_MUTE_CTL] = val; 2850 + path->ctls[NID_PATH_MUTE_CTL] = mute_val; 2881 2851 } 2882 2852 2883 2853 path->active = true; ··· 4408 4382 err = parse_mic_boost(codec); 4409 4383 if (err < 0) 4410 4384 return err; 4385 + 4386 + /* create "Headphone Mic Jack Mode" if no input selection is 4387 + * available (or user specifies add_jack_modes hint) 4388 + */ 4389 + if (spec->hp_mic_pin && 4390 + (spec->auto_mic || spec->input_mux.num_items == 1 || 4391 + spec->add_jack_modes)) { 4392 + err = create_hp_mic_jack_mode(codec, spec->hp_mic_pin); 4393 + if (err < 0) 4394 + return err; 4395 + } 4411 4396 4412 4397 if (spec->add_jack_modes) { 4413 4398 if (cfg->line_out_type != AUTO_PIN_SPEAKER_OUT) {
+1 -2
sound/pci/hda/hda_intel.c
··· 2994 2994 STATESTS_INT_MASK); 2995 2995 2996 2996 azx_stop_chip(chip); 2997 - if (!chip->bus->avoid_link_reset) 2998 - azx_enter_link_reset(chip); 2997 + azx_enter_link_reset(chip); 2999 2998 azx_clear_irq_pending(chip); 3000 2999 if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL) 3001 3000 hda_display_power(false);
+23
sound/pci/hda/patch_conexant.c
··· 3244 3244 #if IS_ENABLED(CONFIG_THINKPAD_ACPI) 3245 3245 3246 3246 #include <linux/thinkpad_acpi.h> 3247 + #include <acpi/acpi.h> 3247 3248 3248 3249 static int (*led_set_func)(int, bool); 3250 + 3251 + static acpi_status acpi_check_cb(acpi_handle handle, u32 lvl, void *context, 3252 + void **rv) 3253 + { 3254 + bool *found = context; 3255 + *found = true; 3256 + return AE_OK; 3257 + } 3258 + 3259 + static bool is_thinkpad(struct hda_codec *codec) 3260 + { 3261 + bool found = false; 3262 + if (codec->subsystem_id >> 16 != 0x17aa) 3263 + return false; 3264 + if (ACPI_SUCCESS(acpi_get_devices("LEN0068", acpi_check_cb, &found, NULL)) && found) 3265 + return true; 3266 + found = false; 3267 + return ACPI_SUCCESS(acpi_get_devices("IBM0068", acpi_check_cb, &found, NULL)) && found; 3268 + } 3249 3269 3250 3270 static void update_tpacpi_mute_led(void *private_data, int enabled) 3251 3271 { ··· 3299 3279 bool removefunc = false; 3300 3280 3301 3281 if (action == HDA_FIXUP_ACT_PROBE) { 3282 + if (!is_thinkpad(codec)) 3283 + return; 3302 3284 if (!led_set_func) 3303 3285 led_set_func = symbol_request(tpacpi_led_set); 3304 3286 if (!led_set_func) { ··· 3516 3494 SND_PCI_QUIRK(0x17aa, 0x3975, "Lenovo U300s", CXT_FIXUP_STEREO_DMIC), 3517 3495 SND_PCI_QUIRK(0x17aa, 0x3977, "Lenovo IdeaPad U310", CXT_FIXUP_STEREO_DMIC), 3518 3496 SND_PCI_QUIRK(0x17aa, 0x397b, "Lenovo S205", CXT_FIXUP_STEREO_DMIC), 3497 + SND_PCI_QUIRK_VENDOR(0x17aa, "Thinkpad", CXT_FIXUP_THINKPAD_ACPI), 3519 3498 SND_PCI_QUIRK(0x1c06, 0x2011, "Lemote A1004", CXT_PINCFG_LEMOTE_A1004), 3520 3499 SND_PCI_QUIRK(0x1c06, 0x2012, "Lemote A1205", CXT_PINCFG_LEMOTE_A1205), 3521 3500 {}
+33 -5
sound/pci/hda/patch_realtek.c
··· 1782 1782 ALC889_FIXUP_IMAC91_VREF, 1783 1783 ALC882_FIXUP_INV_DMIC, 1784 1784 ALC882_FIXUP_NO_PRIMARY_HP, 1785 + ALC887_FIXUP_ASUS_BASS, 1786 + ALC887_FIXUP_BASS_CHMAP, 1785 1787 }; 1786 1788 1787 1789 static void alc889_fixup_coef(struct hda_codec *codec, ··· 1916 1914 spec->gen.no_multi_io = 1; 1917 1915 } 1918 1916 } 1917 + 1918 + static void alc_fixup_bass_chmap(struct hda_codec *codec, 1919 + const struct hda_fixup *fix, int action); 1919 1920 1920 1921 static const struct hda_fixup alc882_fixups[] = { 1921 1922 [ALC882_FIXUP_ABIT_AW9D_MAX] = { ··· 2110 2105 .type = HDA_FIXUP_FUNC, 2111 2106 .v.func = alc882_fixup_no_primary_hp, 2112 2107 }, 2108 + [ALC887_FIXUP_ASUS_BASS] = { 2109 + .type = HDA_FIXUP_PINS, 2110 + .v.pins = (const struct hda_pintbl[]) { 2111 + {0x16, 0x99130130}, /* bass speaker */ 2112 + {} 2113 + }, 2114 + .chained = true, 2115 + .chain_id = ALC887_FIXUP_BASS_CHMAP, 2116 + }, 2117 + [ALC887_FIXUP_BASS_CHMAP] = { 2118 + .type = HDA_FIXUP_FUNC, 2119 + .v.func = alc_fixup_bass_chmap, 2120 + }, 2113 2121 }; 2114 2122 2115 2123 static const struct snd_pci_quirk alc882_fixup_tbl[] = { ··· 2156 2138 SND_PCI_QUIRK(0x1043, 0x1873, "ASUS W90V", ALC882_FIXUP_ASUS_W90V), 2157 2139 SND_PCI_QUIRK(0x1043, 0x1971, "Asus W2JC", ALC882_FIXUP_ASUS_W2JC), 2158 2140 SND_PCI_QUIRK(0x1043, 0x835f, "Asus Eee 1601", ALC888_FIXUP_EEE1601), 2141 + SND_PCI_QUIRK(0x1043, 0x84bc, "ASUS ET2700", ALC887_FIXUP_ASUS_BASS), 2159 2142 SND_PCI_QUIRK(0x104d, 0x9047, "Sony Vaio TT", ALC889_FIXUP_VAIO_TT), 2160 2143 SND_PCI_QUIRK(0x104d, 0x905a, "Sony Vaio Z", ALC882_FIXUP_NO_PRIMARY_HP), 2161 2144 SND_PCI_QUIRK(0x104d, 0x9043, "Sony Vaio VGC-LN51JGB", ALC882_FIXUP_NO_PRIMARY_HP), ··· 3817 3798 ALC271_FIXUP_HP_GATE_MIC_JACK, 3818 3799 ALC269_FIXUP_ACER_AC700, 3819 3800 ALC269_FIXUP_LIMIT_INT_MIC_BOOST, 3801 + ALC269VB_FIXUP_ASUS_ZENBOOK, 3820 3802 ALC269_FIXUP_LIMIT_INT_MIC_BOOST_MUTE_LED, 3821 3803 ALC269VB_FIXUP_ORDISSIMO_EVE2, 3822 3804 ALC283_FIXUP_CHROME_BOOK, ··· 4095 
4075 .chained = true, 4096 4076 .chain_id = ALC269_FIXUP_THINKPAD_ACPI, 4097 4077 }, 4078 + [ALC269VB_FIXUP_ASUS_ZENBOOK] = { 4079 + .type = HDA_FIXUP_FUNC, 4080 + .v.func = alc269_fixup_limit_int_mic_boost, 4081 + .chained = true, 4082 + .chain_id = ALC269VB_FIXUP_DMIC, 4083 + }, 4098 4084 [ALC269_FIXUP_LIMIT_INT_MIC_BOOST_MUTE_LED] = { 4099 4085 .type = HDA_FIXUP_FUNC, 4100 4086 .v.func = alc269_fixup_limit_int_mic_boost, ··· 4215 4189 SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300), 4216 4190 SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 4217 4191 SND_PCI_QUIRK(0x1043, 0x115d, "Asus 1015E", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 4218 - SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_DMIC), 4219 - SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_DMIC), 4192 + SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK), 4193 + SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK), 4220 4194 SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC), 4221 4195 SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW), 4222 4196 SND_PCI_QUIRK(0x1043, 0x1b13, "Asus U41SV", ALC269_FIXUP_INV_DMIC), ··· 4741 4715 }; 4742 4716 4743 4717 /* override the 2.1 chmap */ 4744 - static void alc662_fixup_bass_chmap(struct hda_codec *codec, 4718 + static void alc_fixup_bass_chmap(struct hda_codec *codec, 4745 4719 const struct hda_fixup *fix, int action) 4746 4720 { 4747 4721 if (action == HDA_FIXUP_ACT_BUILD) { ··· 4949 4923 }, 4950 4924 [ALC662_FIXUP_BASS_CHMAP] = { 4951 4925 .type = HDA_FIXUP_FUNC, 4952 - .v.func = alc662_fixup_bass_chmap, 4926 + .v.func = alc_fixup_bass_chmap, 4953 4927 .chained = true, 4954 4928 .chain_id = ALC662_FIXUP_ASUS_MODE4 4955 4929 }, ··· 4962 4936 }, 4963 4937 [ALC662_FIXUP_BASS_1A_CHMAP] = { 4964 4938 .type = HDA_FIXUP_FUNC, 4965 - .v.func = alc662_fixup_bass_chmap, 4939 + .v.func = 
alc_fixup_bass_chmap, 4966 4940 .chained = true, 4967 4941 .chain_id = ALC662_FIXUP_BASS_1A, 4968 4942 }, ··· 5144 5118 case 0x10ec0272: 5145 5119 case 0x10ec0663: 5146 5120 case 0x10ec0665: 5121 + case 0x10ec0668: 5147 5122 set_beep_amp(spec, 0x0b, 0x04, HDA_INPUT); 5148 5123 break; 5149 5124 case 0x10ec0273: ··· 5202 5175 */ 5203 5176 static const struct hda_codec_preset snd_hda_preset_realtek[] = { 5204 5177 { .id = 0x10ec0221, .name = "ALC221", .patch = patch_alc269 }, 5178 + { .id = 0x10ec0231, .name = "ALC231", .patch = patch_alc269 }, 5205 5179 { .id = 0x10ec0233, .name = "ALC233", .patch = patch_alc269 }, 5206 5180 { .id = 0x10ec0255, .name = "ALC255", .patch = patch_alc269 }, 5207 5181 { .id = 0x10ec0260, .name = "ALC260", .patch = patch_alc260 },
+2 -1
sound/pci/hda/patch_sigmatel.c
··· 2094 2094 2095 2095 if (action == HDA_FIXUP_ACT_PRE_PROBE) { 2096 2096 spec->mic_mute_led_gpio = 0x08; /* GPIO3 */ 2097 - codec->bus->avoid_link_reset = 1; 2097 + /* resetting controller clears GPIO, so we need to keep on */ 2098 + codec->bus->power_keep_link_on = 1; 2098 2099 } 2099 2100 } 2100 2101
+15 -1
sound/usb/endpoint.c
··· 636 636 if (usb_pipein(ep->pipe) || 637 637 snd_usb_endpoint_implicit_feedback_sink(ep)) { 638 638 639 + urb_packs = packs_per_ms; 640 + /* 641 + * Wireless devices can poll at a max rate of once per 4ms. 642 + * For dataintervals less than 5, increase the packet count to 643 + * allow the host controller to use bursting to fill in the 644 + * gaps. 645 + */ 646 + if (snd_usb_get_speed(ep->chip->dev) == USB_SPEED_WIRELESS) { 647 + int interval = ep->datainterval; 648 + while (interval < 5) { 649 + urb_packs <<= 1; 650 + ++interval; 651 + } 652 + } 639 653 /* make capture URBs <= 1 ms and smaller than a period */ 640 - urb_packs = min(max_packs_per_urb, packs_per_ms); 654 + urb_packs = min(max_packs_per_urb, urb_packs); 641 655 while (urb_packs > 1 && urb_packs * maxsize >= period_bytes) 642 656 urb_packs >>= 1; 643 657 ep->nurbs = MAX_URBS;
+2 -1
tools/power/cpupower/man/cpupower-idle-info.1
··· 87 87 .fi 88 88 .SH "SEE ALSO" 89 89 .LP 90 - cpupower(1), cpupower\-monitor(1), cpupower\-info(1), cpupower\-set(1) 90 + cpupower(1), cpupower\-monitor(1), cpupower\-info(1), cpupower\-set(1), 91 + cpupower\-idle\-set(1)
+71
tools/power/cpupower/man/cpupower-idle-set.1
··· 1 + .TH "CPUPOWER-IDLE-SET" "1" "0.1" "" "cpupower Manual" 2 + .SH "NAME" 3 + .LP 4 + cpupower idle\-set \- Utility to set cpu idle state specific kernel options 5 + .SH "SYNTAX" 6 + .LP 7 + cpupower [ \-c cpulist ] idle\-set [\fIoptions\fP] 8 + .SH "DESCRIPTION" 9 + .LP 10 + The cpupower idle\-set subcommand allows setting cpu idle, also called cpu 11 + sleep state, specific options offered by the kernel. One example is disabling 12 + sleep states. This can be handy for power vs performance tuning. 13 + .SH "OPTIONS" 14 + .LP 15 + .TP 16 + \fB\-d\fR \fB\-\-disable\fR 17 + Disable a specific processor sleep state. 18 + .TP 19 + \fB\-e\fR \fB\-\-enable\fR 20 + Enable a specific processor sleep state. 21 + 22 + .SH "REMARKS" 23 + .LP 24 + Cpuidle Governors Policy on Disabling Sleep States 25 + 26 + .RS 4 27 + Depending on the cpuidle governor in use, which implements the kernel 28 + policy for choosing sleep states, subsequent sleep states on this core might 29 + get disabled as well. 30 + 31 + There are two cpuidle governors, ladder and menu. While the ladder 32 + governor is always available if CONFIG_CPU_IDLE is selected, the 33 + menu governor additionally requires CONFIG_NO_HZ. 34 + 35 + The behavior and the effect of the disable variable depend on the 36 + implementation of a particular governor. In the ladder governor, for 37 + example, it is not coherent, i.e. if one disables a light state, 38 + then all deeper states are disabled as well. Likewise, if one enables a 39 + deep state while a lighter state is still disabled, this has no effect. 40 + .RE 41 + .LP 42 + Disabling the Lightest Sleep State may not have any Effect 43 + 44 + .RS 4 45 + If the criteria to enter deeper sleep states are not met and the lightest sleep 46 + state is chosen when idle, the kernel may still enter this sleep state, 47 + irrespective of whether it is disabled or not. This is also reflected in 48 + the usage count of the disabled sleep state when using the cpupower idle-info 49 + command. 50 + .RE 51 + .LP 52 + Selecting specific CPU Cores 53 + 54 + .RS 4 55 + By default processor sleep states of all CPU cores are set. Please refer 56 + to the cpupower(1) manpage, \-\-cpu option section, for how to disable 57 + C-states of specific cores. 58 + .RE 59 + .SH "FILES" 60 + .nf 61 + \fI/sys/devices/system/cpu/cpu*/cpuidle/state*\fP 62 + \fI/sys/devices/system/cpu/cpuidle/*\fP 63 + .fi 64 + .SH "AUTHORS" 65 + .nf 66 + Thomas Renninger <trenn@suse.de> 67 + .fi 68 + .SH "SEE ALSO" 69 + .LP 70 + cpupower(1), cpupower\-monitor(1), cpupower\-info(1), cpupower\-set(1), 71 + cpupower\-idle\-info(1)
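A hypothetical session for the new subcommand documented above; the cpu and state numbers are examples only, and the sysfs path follows the FILES section of the man page:

```shell
# Disable idle state 2 on CPUs 0-3 (requires root):
#   cpupower -c 0-3 idle-set -d 2
# Re-enable it later:
#   cpupower -c 0-3 idle-set -e 2

# The same toggle is visible through sysfs directly; the per-state
# "disable" attribute holds 0 (enabled) or 1 (disabled):
state_dir=/sys/devices/system/cpu/cpu0/cpuidle/state2
echo "$state_dir/disable"
```

Per the REMARKS section, with the ladder governor disabling state 2 would implicitly disable the deeper states as well.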
+2 -2
tools/power/cpupower/utils/helpers/sysfs.c
··· 278 278 int sysfs_is_idlestate_disabled(unsigned int cpu, 279 279 unsigned int idlestate) 280 280 { 281 - if (sysfs_get_idlestate_count(cpu) < idlestate) 281 + if (sysfs_get_idlestate_count(cpu) <= idlestate) 282 282 return -1; 283 283 284 284 if (!sysfs_idlestate_file_exists(cpu, idlestate, ··· 303 303 char value[SYSFS_PATH_MAX]; 304 304 int bytes_written; 305 305 306 - if (sysfs_get_idlestate_count(cpu) < idlestate) 306 + if (sysfs_get_idlestate_count(cpu) <= idlestate) 307 307 return -1; 308 308 309 309 if (!sysfs_idlestate_file_exists(cpu, idlestate,