···
 as root before you can use this.  You'll probably also want to
 get the user-space microcode_ctl utility to use with this.
 
-Powertweak
-----------
-
-If you are running v0.1.17 or earlier, you should upgrade to
-version v0.99.0 or higher.  Running old versions may cause problems
-with programs using shared memory.
-
 udev
 ----
 udev is a userspace application for populating /dev dynamically with
···
 Intel P6 microcode
 ------------------
 o  <http://www.urbanmyth.org/microcode/>
-
-Powertweak
-----------
-o  <http://powertweak.sourceforge.net/>
 
 udev
 ----
+1-1
Documentation/DocBook/device-drivers.tmpl
···
 	</sect1>
 	<sect1><title>Wait queues and Wake events</title>
 !Iinclude/linux/wait.h
-!Ekernel/wait.c
+!Ekernel/sched/wait.c
 	</sect1>
 	<sect1><title>High-resolution timers</title>
 !Iinclude/linux/ktime.h
···
 I2C for OMAP platforms
 
 Required properties :
-- compatible : Must be "ti,omap3-i2c" or "ti,omap4-i2c"
+- compatible : Must be "ti,omap2420-i2c", "ti,omap2430-i2c", "ti,omap3-i2c"
+  or "ti,omap4-i2c"
 - ti,hwmods : Must be "i2c<n>", n being the instance number (1-based)
 - #address-cells = <1>;
 - #size-cells = <0>;
···
+Qualcomm MSM pseudo random number generator.
+
+Required properties:
+
+- compatible  : should be "qcom,prng"
+- reg         : specifies base physical address and size of the registers map
+- clocks      : phandle to clock-controller plus clock-specifier pair
+- clock-names : "core" clocks all registers, FIFO and circuits in PRNG IP block
+
+Example:
+
+	rng@f9bff000 {
+		compatible = "qcom,prng";
+		reg = <0xf9bff000 0x200>;
+		clocks = <&clock GCC_PRNG_AHB_CLK>;
+		clock-names = "core";
+	};
···
+GPIO Mappings
+=============
+
+This document explains how GPIOs can be assigned to given devices and functions.
+Note that it only applies to the new descriptor-based interface. For a
+description of the deprecated integer-based GPIO interface please refer to
+gpio-legacy.txt (actually, there is no real mapping possible with the old
+interface; you just fetch an integer from somewhere and request the
+corresponding GPIO).
+
+Platforms that make use of GPIOs must select ARCH_REQUIRE_GPIOLIB (if GPIO usage
+is mandatory) or ARCH_WANT_OPTIONAL_GPIOLIB (if GPIO support can be omitted) in
+their Kconfig. Then, how GPIOs are mapped depends on what the platform uses to
+describe its hardware layout. Currently, mappings can be defined through device
+tree, ACPI, and platform data.
+
+Device Tree
+-----------
+GPIOs can easily be mapped to devices and functions in the device tree. The
+exact way to do it depends on the GPIO controller providing the GPIOs; see the
+device tree bindings for your controller.
+
+GPIO mappings are defined in the consumer device's node, in a property named
+<function>-gpios, where <function> is the function the driver will request
+through gpiod_get(). For example:
+
+	foo_device {
+		compatible = "acme,foo";
+		...
+		led-gpios = <&gpio 15 GPIO_ACTIVE_HIGH>, /* red */
+			    <&gpio 16 GPIO_ACTIVE_HIGH>, /* green */
+			    <&gpio 17 GPIO_ACTIVE_HIGH>; /* blue */
+
+		power-gpio = <&gpio 1 GPIO_ACTIVE_LOW>;
+	};
+
+This property will make GPIOs 15, 16 and 17 available to the driver under the
+"led" function, and GPIO 1 as the "power" GPIO:
+
+	struct gpio_desc *red, *green, *blue, *power;
+
+	red = gpiod_get_index(dev, "led", 0);
+	green = gpiod_get_index(dev, "led", 1);
+	blue = gpiod_get_index(dev, "led", 2);
+
+	power = gpiod_get(dev, "power");
+
+The led GPIOs will be active-high, while the power GPIO will be active-low (i.e.
+gpiod_is_active_low(power) will be true).
+
+ACPI
+----
+ACPI does not support function names for GPIOs. Therefore, only the "idx"
+argument of gpiod_get_index() is useful to discriminate between GPIOs assigned
+to a device. The "con_id" argument can still be set for debugging purposes (it
+will appear under error messages as well as debug and sysfs nodes).
+
+Platform Data
+-------------
+Finally, GPIOs can be bound to devices and functions using platform data. Board
+files that desire to do so need to include the following header:
+
+	#include <linux/gpio/driver.h>
+
+GPIOs are mapped by means of tables of lookups, containing instances of the
+gpiod_lookup structure. Two macros are defined to help declare such mappings:
+
+	GPIO_LOOKUP(chip_label, chip_hwnum, dev_id, con_id, flags)
+	GPIO_LOOKUP_IDX(chip_label, chip_hwnum, dev_id, con_id, idx, flags)
+
+where
+
+  - chip_label is the label of the gpiod_chip instance providing the GPIO
+  - chip_hwnum is the hardware number of the GPIO within the chip
+  - dev_id is the identifier of the device that will make use of this GPIO. If
+    NULL, the GPIO will be available to all devices.
+  - con_id is the name of the GPIO function from the device point of view. It
+    can be NULL.
+  - idx is the index of the GPIO within the function.
+  - flags is defined to specify the following properties:
+    * GPIOF_ACTIVE_LOW	- to configure the GPIO as active-low
+    * GPIOF_OPEN_DRAIN	- GPIO pin is open drain type.
+    * GPIOF_OPEN_SOURCE	- GPIO pin is open source type.
+
+In the future, these flags might be extended to support more properties.
+
+Note that GPIO_LOOKUP() is just a shortcut to GPIO_LOOKUP_IDX() where idx = 0.
+
+A lookup table can then be defined as follows:
+
+	struct gpiod_lookup gpios_table[] = {
+		GPIO_LOOKUP_IDX("gpio.0", 15, "foo.0", "led", 0, GPIO_ACTIVE_HIGH),
+		GPIO_LOOKUP_IDX("gpio.0", 16, "foo.0", "led", 1, GPIO_ACTIVE_HIGH),
+		GPIO_LOOKUP_IDX("gpio.0", 17, "foo.0", "led", 2, GPIO_ACTIVE_HIGH),
+		GPIO_LOOKUP("gpio.0", 1, "foo.0", "power", GPIO_ACTIVE_LOW),
+	};
+
+And the table can be added by the board code as follows:
+
+	gpiod_add_table(gpios_table, ARRAY_SIZE(gpios_table));
+
+The driver controlling "foo.0" will then be able to obtain its GPIOs as follows:
+
+	struct gpio_desc *red, *green, *blue, *power;
+
+	red = gpiod_get_index(dev, "led", 0);
+	green = gpiod_get_index(dev, "led", 1);
+	blue = gpiod_get_index(dev, "led", 2);
+
+	power = gpiod_get(dev, "power");
+	gpiod_direction_output(power, 1);
+
+Since the "power" GPIO is mapped as active-low, its actual signal will be 0
+after this code. Contrary to the legacy integer GPIO interface, the active-low
+property is handled during mapping and is thus transparent to GPIO consumers.
+197
Documentation/gpio/consumer.txt
···
+GPIO Descriptor Consumer Interface
+==================================
+
+This document describes the consumer interface of the GPIO framework. Note that
+it describes the new descriptor-based interface. For a description of the
+deprecated integer-based GPIO interface please refer to gpio-legacy.txt.
+
+
+Guidelines for GPIO consumers
+=============================
+
+Drivers that can't work without standard GPIO calls should have Kconfig entries
+that depend on GPIOLIB. The functions that allow a driver to obtain and use
+GPIOs are available by including the following file:
+
+	#include <linux/gpio/consumer.h>
+
+All the functions that work with the descriptor-based GPIO interface are
+prefixed with gpiod_. The gpio_ prefix is used for the legacy interface. No
+other function in the kernel should use these prefixes.
+
+
+Obtaining and Disposing GPIOs
+=============================
+
+With the descriptor-based interface, GPIOs are identified by an opaque,
+non-forgeable handle that must be obtained through a call to one of the
+gpiod_get() functions. Like many other kernel subsystems, gpiod_get() takes the
+device that will use the GPIO and the function the requested GPIO is supposed to
+fulfill:
+
+	struct gpio_desc *gpiod_get(struct device *dev, const char *con_id)
+
+If a function is implemented by using several GPIOs together (e.g. a simple LED
+device that displays digits), an additional index argument can be specified:
+
+	struct gpio_desc *gpiod_get_index(struct device *dev,
+					  const char *con_id, unsigned int idx)
+
+Both functions return either a valid GPIO descriptor, or an error code checkable
+with IS_ERR(). They will never return a NULL pointer.
+
+Device-managed variants of these functions are also defined:
+
+	struct gpio_desc *devm_gpiod_get(struct device *dev, const char *con_id)
+
+	struct gpio_desc *devm_gpiod_get_index(struct device *dev,
+					       const char *con_id,
+					       unsigned int idx)
+
+A GPIO descriptor can be disposed of using the gpiod_put() function:
+
+	void gpiod_put(struct gpio_desc *desc)
+
+It is strictly forbidden to use a descriptor after calling this function. The
+device-managed variant is, unsurprisingly:
+
+	void devm_gpiod_put(struct device *dev, struct gpio_desc *desc)
+
+
+Using GPIOs
+===========
+
+Setting Direction
+-----------------
+The first thing a driver must do with a GPIO is set its direction. This is
+done by invoking one of the gpiod_direction_*() functions:
+
+	int gpiod_direction_input(struct gpio_desc *desc)
+	int gpiod_direction_output(struct gpio_desc *desc, int value)
+
+The return value is zero for success, else a negative errno. It should be
+checked, since the get/set calls don't return errors and since misconfiguration
+is possible. You should normally issue these calls from a task context. However,
+for spinlock-safe GPIOs it is OK to use them before tasking is enabled, as part
+of early board setup.
+
+For output GPIOs, the value provided becomes the initial output value. This
+helps avoid signal glitching during system startup.
+
+A driver can also query the current direction of a GPIO:
+
+	int gpiod_get_direction(const struct gpio_desc *desc)
+
+This function returns either GPIOF_DIR_IN or GPIOF_DIR_OUT.
+
+Be aware that there is no default direction for GPIOs. Therefore, **using a GPIO
+without setting its direction first is illegal and will result in undefined
+behavior!**
+
+
+Spinlock-Safe GPIO Access
+-------------------------
+Most GPIO controllers can be accessed with memory read/write instructions. Those
+accesses don't need to sleep, and can safely be done from inside hard
+(non-threaded) IRQ handlers and similar contexts.
+
+Use the following calls to access GPIOs from an atomic context:
+
+	int gpiod_get_value(const struct gpio_desc *desc);
+	void gpiod_set_value(struct gpio_desc *desc, int value);
+
+The values are boolean: zero for low, nonzero for high. When reading the value
+of an output pin, the value returned should be what's seen on the pin. That
+won't always match the specified output value, because of issues including
+open-drain signaling and output latencies.
+
+The get/set calls do not return errors because "invalid GPIO" should have been
+reported earlier from gpiod_direction_*(). However, note that not all platforms
+can read the value of output pins; those that can't should always return zero.
+Also, using these calls for GPIOs that can't safely be accessed without sleeping
+(see below) is an error.
+
+
+GPIO Access That May Sleep
+--------------------------
+Some GPIO controllers must be accessed using message-based buses like I2C or
+SPI. Commands to read or write those GPIO values require waiting to get to the
+head of a queue to transmit a command and get its response. This requires
+sleeping, which can't be done from inside IRQ handlers.
+
+Platforms that support this type of GPIO distinguish them from other GPIOs by
+returning nonzero from this call:
+
+	int gpiod_cansleep(const struct gpio_desc *desc)
+
+To access such GPIOs, a different set of accessors is defined:
+
+	int gpiod_get_value_cansleep(const struct gpio_desc *desc)
+	void gpiod_set_value_cansleep(struct gpio_desc *desc, int value)
+
+Accessing such GPIOs requires a context which may sleep, for example a threaded
+IRQ handler, and those accessors must be used instead of the spinlock-safe
+accessors without the cansleep() suffix.
+
+Other than the fact that these accessors might sleep, and will work on GPIOs
+that can't be accessed from hardIRQ handlers, these calls act the same as the
+spinlock-safe calls.
+
+
+Active-low State and Raw GPIO Values
+------------------------------------
+Device drivers like to manage the logical state of a GPIO, i.e. the value their
+device will actually receive, no matter what lies between it and the GPIO line.
+In some cases, it might make sense to control the actual GPIO line value. The
+following set of calls ignore the active-low property of a GPIO and work on the
+raw line value:
+
+	int gpiod_get_raw_value(const struct gpio_desc *desc)
+	void gpiod_set_raw_value(struct gpio_desc *desc, int value)
+	int gpiod_get_raw_value_cansleep(const struct gpio_desc *desc)
+	void gpiod_set_raw_value_cansleep(struct gpio_desc *desc, int value)
+
+The active-low state of a GPIO can also be queried using the following call:
+
+	int gpiod_is_active_low(const struct gpio_desc *desc)
+
+Note that these functions should only be used with great moderation; a driver
+should not have to care about the physical line level.
+
+GPIOs mapped to IRQs
+--------------------
+GPIO lines can quite often be used as IRQs. You can get the IRQ number
+corresponding to a given GPIO using the following call:
+
+	int gpiod_to_irq(const struct gpio_desc *desc)
+
+It will return an IRQ number, or a negative errno code if the mapping can't be
+done (most likely because that particular GPIO cannot be used as an IRQ). It is
+an unchecked error to use a GPIO that wasn't set up as an input using
+gpiod_direction_input(), or to use an IRQ number that didn't originally come
+from gpiod_to_irq(). gpiod_to_irq() is not allowed to sleep.
+
+Non-error values returned from gpiod_to_irq() can be passed to request_irq() or
+free_irq(). They will often be stored into IRQ resources for platform devices,
+by the board-specific initialization code. Note that IRQ trigger options are
+part of the IRQ interface, e.g. IRQF_TRIGGER_FALLING, as are system wakeup
+capabilities.
+
+
+Interacting With the Legacy GPIO Subsystem
+==========================================
+Many kernel subsystems still handle GPIOs using the legacy integer-based
+interface. Although it is strongly encouraged to upgrade them to the safer
+descriptor-based API, the following two functions allow you to convert a GPIO
+descriptor into the GPIO integer namespace and vice-versa:
+
+	int desc_to_gpio(const struct gpio_desc *desc)
+	struct gpio_desc *gpio_to_desc(unsigned gpio)
+
+The GPIO number returned by desc_to_gpio() can be safely used as long as the
+GPIO descriptor has not been freed. All the same, a GPIO number passed to
+gpio_to_desc() must have been properly acquired, and usage of the returned GPIO
+descriptor is only possible after the GPIO number has been released.
+
+Freeing a GPIO obtained by one API with the other API is forbidden and an
+unchecked error.
+75
Documentation/gpio/driver.txt
···
+GPIO Descriptor Driver Interface
+================================
+
+This document serves as a guide for GPIO chip driver writers. Note that it
+describes the new descriptor-based interface. For a description of the
+deprecated integer-based GPIO interface please refer to gpio-legacy.txt.
+
+Each GPIO controller driver needs to include the following header, which defines
+the structures used to define a GPIO driver:
+
+	#include <linux/gpio/driver.h>
+
+
+Internal Representation of GPIOs
+================================
+
+Inside a GPIO driver, individual GPIOs are identified by their hardware number,
+which is a unique number between 0 and n-1, n being the number of GPIOs managed
+by the chip. This number is purely internal: the hardware number of a particular
+GPIO descriptor is never made visible outside of the driver.
+
+On top of this internal number, each GPIO also needs to have a global number in
+the integer GPIO namespace so that it can be used with the legacy GPIO
+interface. Each chip must thus have a "base" number (which can be automatically
+assigned), and for each GPIO the global number will be (base + hardware number).
+Although the integer representation is considered deprecated, it still has many
+users and thus needs to be maintained.
+
+So for example one platform could use numbers 32-159 for GPIOs, with a
+controller defining 128 GPIOs at a "base" of 32; while another platform uses
+numbers 0..63 with one set of GPIO controllers, 64-79 with another type of GPIO
+controller, and on one particular board 80-95 with an FPGA. The numbers need not
+be contiguous; either of those platforms could also use numbers 2000-2063 to
+identify GPIOs in a bank of I2C GPIO expanders.
+
+
+Controller Drivers: gpio_chip
+=============================
+
+In the gpiolib framework each GPIO controller is packaged as a "struct
+gpio_chip" (see linux/gpio/driver.h for its complete definition) with members
+common to each controller of that type:
+
+ - methods to establish GPIO direction
+ - methods used to access GPIO values
+ - method to return the IRQ number associated with a given GPIO
+ - flag saying whether calls to its methods may sleep
+ - optional debugfs dump method (showing extra state like pullup config)
+ - optional base number (will be automatically assigned if omitted)
+ - label for diagnostics and GPIO mapping using platform data
+
+The code implementing a gpio_chip should support multiple instances of the
+controller, possibly using the driver model. That code will configure each
+gpio_chip and issue gpiochip_add(). Removing a GPIO controller should be rare;
+use gpiochip_remove() when it is unavoidable.
+
+Most often a gpio_chip is part of an instance-specific structure with state not
+exposed by the GPIO interfaces, such as addressing, power management, and more.
+Chips such as codecs will have complex non-GPIO state.
+
+Any debugfs dump method should normally ignore signals which haven't been
+requested as GPIOs. They can use gpiochip_is_requested(), which returns either
+NULL or the label associated with that GPIO when it was requested.
+
+
+Locking IRQ usage
+-----------------
+Input GPIOs can be used as IRQ signals. When this happens, a driver is expected
+to mark the GPIO as being used as an IRQ:
+
+	int gpiod_lock_as_irq(struct gpio_desc *desc)
+
+This will prevent the use of non-IRQ related GPIO APIs until the GPIO IRQ lock
+is released:
+
+	void gpiod_unlock_as_irq(struct gpio_desc *desc)
+119
Documentation/gpio/gpio.txt
···
+GPIO Interfaces
+===============
+
+The documents in this directory give detailed instructions on how to access
+GPIOs in drivers, and how to write a driver for a device that provides GPIOs
+itself.
+
+Due to the history of GPIO interfaces in the kernel, there are two different
+ways to obtain and use GPIOs:
+
+  - The descriptor-based interface is the preferred way to manipulate GPIOs,
+    and is described by all the files in this directory except gpio-legacy.txt.
+  - The legacy integer-based interface, which is considered deprecated (but
+    still usable for compatibility reasons), is documented in gpio-legacy.txt.
+
+The remainder of this document applies to the new descriptor-based interface.
+gpio-legacy.txt contains the same information applied to the legacy
+integer-based interface.
+
+
+What is a GPIO?
+===============
+
+A "General Purpose Input/Output" (GPIO) is a flexible software-controlled
+digital signal. They are provided from many kinds of chips, and are familiar
+to Linux developers working with embedded and custom hardware. Each GPIO
+represents a bit connected to a particular pin, or "ball" on Ball Grid Array
+(BGA) packages. Board schematics show which external hardware connects to
+which GPIOs. Drivers can be written generically, so that board setup code
+passes such pin configuration data to drivers.
+
+System-on-Chip (SOC) processors heavily rely on GPIOs. In some cases, every
+non-dedicated pin can be configured as a GPIO, and most chips have at least
+several dozen of them. Programmable logic devices (like FPGAs) can easily
+provide GPIOs; multifunction chips like power managers and audio codecs
+often have a few such pins to help with pin scarcity on SOCs; and there are
+also "GPIO Expander" chips that connect using the I2C or SPI serial buses.
+Most PC southbridges have a few dozen GPIO-capable pins (with only the BIOS
+firmware knowing how they're used).
+
+The exact capabilities of GPIOs vary between systems. Common options:
+
+  - Output values are writable (high=1, low=0). Some chips also have
+    options about how that value is driven, so that for example only one
+    value might be driven, supporting "wire-OR" and similar schemes for the
+    other value (notably, "open drain" signaling).
+
+  - Input values are likewise readable (1, 0). Some chips support readback
+    of pins configured as "output", which is very useful in such "wire-OR"
+    cases (to support bidirectional signaling). GPIO controllers may have
+    input de-glitch/debounce logic, sometimes with software controls.
+
+  - Inputs can often be used as IRQ signals, often edge triggered but
+    sometimes level triggered. Such IRQs may be configurable as system
+    wakeup events, to wake the system from a low power state.
+
+  - Usually a GPIO will be configurable as either input or output, as needed
+    by different product boards; single direction ones exist too.
+
+  - Most GPIOs can be accessed while holding spinlocks, but those accessed
+    through a serial bus normally can't. Some systems support both types.
+
+On a given board each GPIO is used for one specific purpose like monitoring
+MMC/SD card insertion/removal, detecting card write-protect status, driving
+an LED, configuring a transceiver, bit-banging a serial bus, poking a hardware
+watchdog, sensing a switch, and so on.
+
+
+Common GPIO Properties
+======================
+
+These properties come up throughout the other documents of the GPIO interface,
+and it is useful to understand them, especially if you need to define GPIO
+mappings.
+
+Active-High and Active-Low
+--------------------------
+It is natural to assume that a GPIO is "active" when its output signal is 1
+("high"), and inactive when it is 0 ("low"). However in practice the signal of a
+GPIO may be inverted before it reaches its destination, or a device could decide
+to have different conventions about what "active" means. Such decisions should
+be transparent to device drivers, therefore it is possible to define a GPIO as
+being either active-high ("1" means "active", the default) or active-low ("0"
+means "active") so that drivers only need to worry about the logical signal and
+not about what happens at the line level.
+
+Open Drain and Open Source
+--------------------------
+Sometimes shared signals need to use "open drain" (where only the low signal
+level is actually driven) or "open source" (where only the high signal level is
+driven) signaling. That term applies to CMOS transistors; "open collector" is
+used for TTL. A pullup or pulldown resistor causes the high or low signal level.
+This is sometimes called a "wire-AND"; or more practically, from the negative
+logic (low=true) perspective this is a "wire-OR".
+
+One common example of an open drain signal is a shared active-low IRQ line.
+Also, bidirectional data bus signals sometimes use open drain signals.
+
+Some GPIO controllers directly support open drain and open source outputs; many
+don't. When you need open drain signaling but your hardware doesn't directly
+support it, there's a common idiom you can use to emulate it with any GPIO pin
+that can be used as either an input or an output:
+
+ LOW:	gpiod_direction_output(gpio, 0) ... this drives the signal and overrides
+	the pullup.
+
+ HIGH:	gpiod_direction_input(gpio) ... this turns off the output, so the pullup
+	(or some other device) controls the signal.
+
+The same logic can be applied to emulate open source signaling, by driving the
+high signal and configuring the GPIO as input for low. This open drain/open
+source emulation can be handled transparently by the GPIO framework.
+
+If you are "driving" the signal high but gpiod_get_value(gpio) reports a low
+value (after the appropriate rise time passes), you know some other component is
+driving the shared signal low. That's not necessarily an error. As one common
+example, that's how I2C clocks are stretched: a slave that needs a slower clock
+delays the rising edge of SCK, and the I2C master adjusts its signaling rate
+accordingly.
+155
Documentation/gpio/sysfs.txt
···11+GPIO Sysfs Interface for Userspace22+==================================33+44+Platforms which use the "gpiolib" implementors framework may choose to55+configure a sysfs user interface to GPIOs. This is different from the66+debugfs interface, since it provides control over GPIO direction and77+value instead of just showing a gpio state summary. Plus, it could be88+present on production systems without debugging support.99+1010+Given appropriate hardware documentation for the system, userspace could1111+know for example that GPIO #23 controls the write protect line used to1212+protect boot loader segments in flash memory. System upgrade procedures1313+may need to temporarily remove that protection, first importing a GPIO,1414+then changing its output state, then updating the code before re-enabling1515+the write protection. In normal use, GPIO #23 would never be touched,1616+and the kernel would have no need to know about it.1717+1818+Again depending on appropriate hardware documentation, on some systems1919+userspace GPIO can be used to determine system configuration data that2020+standard kernels won't know about. And for some tasks, simple userspace2121+GPIO drivers could be all that the system really needs.2222+2323+Note that standard kernel drivers exist for common "LEDs and Buttons"2424+GPIO tasks: "leds-gpio" and "gpio_keys", respectively. Use those2525+instead of talking directly to the GPIOs; they integrate with kernel2626+frameworks better than your userspace code could.2727+2828+2929+Paths in Sysfs3030+--------------3131+There are three kinds of entry in /sys/class/gpio:3232+3333+ - Control interfaces used to get userspace control over GPIOs;3434+3535+ - GPIOs themselves; and3636+3737+ - GPIO controllers ("gpio_chip" instances).3838+3939+That's in addition to standard files including the "device" symlink.4040+4141+The control interfaces are write-only:4242+4343+ /sys/class/gpio/4444+4545+ "export" ... 
Userspace may ask the kernel to export control of4646+ a GPIO to userspace by writing its number to this file.4747+4848+ Example: "echo 19 > export" will create a "gpio19" node4949+ for GPIO #19, if that's not requested by kernel code.5050+5151+ "unexport" ... Reverses the effect of exporting to userspace.5252+5353+ Example: "echo 19 > unexport" will remove a "gpio19"5454+ node exported using the "export" file.5555+5656+GPIO signals have paths like /sys/class/gpio/gpio42/ (for GPIO #42)5757+and have the following read/write attributes:5858+5959+ /sys/class/gpio/gpioN/6060+6161+ "direction" ... reads as either "in" or "out". This value may6262+ normally be written. Writing as "out" defaults to6363+ initializing the value as low. To ensure glitch free6464+ operation, values "low" and "high" may be written to6565+ configure the GPIO as an output with that initial value.6666+6767+ Note that this attribute *will not exist* if the kernel6868+ doesn't support changing the direction of a GPIO, or6969+ it was exported by kernel code that didn't explicitly7070+ allow userspace to reconfigure this GPIO's direction.7171+7272+ "value" ... reads as either 0 (low) or 1 (high). If the GPIO7373+ is configured as an output, this value may be written;7474+ any nonzero value is treated as high.7575+7676+ If the pin can be configured as interrupt-generating interrupt7777+ and if it has been configured to generate interrupts (see the7878+ description of "edge"), you can poll(2) on that file and7979+ poll(2) will return whenever the interrupt was triggered. If8080+ you use poll(2), set the events POLLPRI and POLLERR. If you8181+ use select(2), set the file descriptor in exceptfds. After8282+ poll(2) returns, either lseek(2) to the beginning of the sysfs8383+ file and read the new value or close the file and re-open it8484+ to read the value.8585+8686+ "edge" ... reads as either "none", "rising", "falling", or8787+ "both". 
Write these strings to select the signal edge(s)
	that will make poll(2) on the "value" file return.

	This file exists only if the pin can be configured as an
	interrupt generating input pin.

	"active_low" ... reads as either 0 (false) or 1 (true). Write
	any nonzero value to invert the value attribute both
	for reading and writing. Existing and subsequent
	poll(2) support configuration via the edge attribute
	for "rising" and "falling" edges will follow this
	setting.

GPIO controllers have paths like /sys/class/gpio/gpiochip42/ (for the
controller implementing GPIOs starting at #42) and have the following
read-only attributes:

	/sys/class/gpio/gpiochipN/

	"base" ... same as N, the first GPIO managed by this chip

	"label" ... provided for diagnostics (not always unique)

	"ngpio" ... how many GPIOs this manages (N to N + ngpio - 1)

Board documentation should in most cases cover what GPIOs are used for
what purposes. However, those numbers are not always stable; GPIOs on
a daughtercard might be different depending on the base board being used,
or other cards in the stack. In such cases, you may need to use the
gpiochip nodes (possibly in conjunction with schematics) to determine
the correct GPIO number to use for a given signal.


Exporting from Kernel code
--------------------------
Kernel code can explicitly manage exports of GPIOs which have already been
requested using gpio_request():

	/* export the GPIO to userspace */
	int gpiod_export(struct gpio_desc *desc, bool direction_may_change);

	/* reverse gpiod_export() */
	void gpiod_unexport(struct gpio_desc *desc);

	/* create a sysfs link to an exported GPIO node */
	int gpiod_export_link(struct device *dev, const char *name,
			      struct gpio_desc *desc);

	/* change the polarity of a GPIO node in sysfs */
	int gpiod_sysfs_set_active_low(struct gpio_desc *desc, int value);

After a kernel driver requests a GPIO, it may only be made available in
the sysfs interface by gpiod_export(). The driver can control whether the
signal direction may change. This helps drivers prevent userspace code
from accidentally clobbering important system state.

This explicit exporting can help with debugging (by making some kinds
of experiments easier), or can provide an always-there interface that's
suitable for documenting as part of a board support package.

After the GPIO has been exported, gpiod_export_link() allows creating
symlinks from elsewhere in sysfs to the GPIO sysfs node. Drivers can
use this to provide the interface under their own device in sysfs with
a descriptive name.

Drivers can use gpiod_sysfs_set_active_low() to hide GPIO line polarity
differences between boards from user space. Polarity change can be done both
before and after gpiod_export(), and previously enabled poll(2) support for
either rising or falling edge will be reconfigured to follow this setting.
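The edge and value attributes described above are what make userspace interrupt handling possible: configure the edge, then poll(2) the "value" file, which reports POLLPRI|POLLERR when the edge fires. A minimal userspace sketch follows; the GPIO number is illustrative, and the function simply returns -1 when the sysfs node is not available (GPIO not exported, or no GPIO sysfs on this machine).

```c
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

/* Wait for one edge event on an already-exported GPIO, using the sysfs
 * attributes described above: write "rising" to "edge", then poll(2)
 * the "value" file, which returns POLLPRI|POLLERR on an edge.
 * Returns the sampled value (0 or 1), or -1 if the GPIO is unavailable. */
static int wait_for_edge(int gpio)
{
	char path[64], value;
	struct pollfd pfd;
	int fd;

	snprintf(path, sizeof(path), "/sys/class/gpio/gpio%d/edge", gpio);
	fd = open(path, O_WRONLY);
	if (fd < 0)
		return -1;	/* not exported, or no GPIO sysfs */
	(void)write(fd, "rising", 6);
	close(fd);

	snprintf(path, sizeof(path), "/sys/class/gpio/gpio%d/value", gpio);
	pfd.fd = open(path, O_RDONLY);
	if (pfd.fd < 0)
		return -1;
	pfd.events = POLLPRI | POLLERR;

	(void)read(pfd.fd, &value, 1);	/* drain the initial state */
	poll(&pfd, 1, -1);		/* sleeps until an edge occurs */
	lseek(pfd.fd, 0, SEEK_SET);	/* rewind before re-reading */
	(void)read(pfd.fd, &value, 1);
	close(pfd.fd);
	return value - '0';
}
```

Note the lseek() before the second read: sysfs attribute files must be rewound (or reopened) to observe the new value after poll() returns.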
+5
MAINTAINERS
···21422142S: Maintained21432143F: drivers/usb/chipidea/2144214421452145+CHROME HARDWARE PLATFORM SUPPORT21462146+M: Olof Johansson <olof@lixom.net>21472147+S: Maintained21482148+F: drivers/platform/chrome/21492149+21452150CISCO VIC ETHERNET NIC DRIVER21462151M: Christian Benvenuti <benve@cisco.com>21472152M: Sujith Sankar <ssujith@cisco.com>
+1-1
Makefile
···11VERSION = 322PATCHLEVEL = 1333SUBLEVEL = 044-EXTRAVERSION = -rc144+EXTRAVERSION = -rc255NAME = One Giant Leap for Frogkind6677# *DOCUMENTATION*
···15021502 }1503150315041504 /*15051505+ * For some GPMC devices we still need to rely on the bootloader15061506+ * timings because the devices can be connected via FPGA. So far15071507+ * the list is smc91x on the omap2 SDP boards, and 8250 on zooms.15081508+ * REVISIT: Add timing support from slls644g.pdf and from the15091509+ * lan91c96 manual.15101510+ */15111511+ if (of_device_is_compatible(child, "ns16550a") ||15121512+ of_device_is_compatible(child, "smsc,lan91c94") ||15131513+ of_device_is_compatible(child, "smsc,lan91c111")) {15141514+ dev_warn(&pdev->dev,15151515+ "%s using bootloader timings on CS%d\n",15161516+ child->name, cs);15171517+ goto no_timings;15181518+ }15191519+15201520+ /*15051521 * FIXME: gpmc_cs_request() will map the CS to an arbitary15061522 * location in the gpmc address space. When booting with15071523 * device-tree we want the NOR flash to be mapped to the···15451529 gpmc_read_timings_dt(child, &gpmc_t);15461530 gpmc_cs_set_timings(cs, &gpmc_t);1547153115321532+no_timings:15481533 if (of_platform_device_create(child, NULL, &pdev->dev))15491534 return 0;15501535···15561539 gpmc_cs_free(cs);1557154015581541 return ret;15591559-}15601560-15611561-/*15621562- * REVISIT: Add timing support from slls644g.pdf15631563- */15641564-static int gpmc_probe_8250(struct platform_device *pdev,15651565- struct device_node *child)15661566-{15671567- struct resource res;15681568- unsigned long base;15691569- int ret, cs;15701570-15711571- if (of_property_read_u32(child, "reg", &cs) < 0) {15721572- dev_err(&pdev->dev, "%s has no 'reg' property\n",15731573- child->full_name);15741574- return -ENODEV;15751575- }15761576-15771577- if (of_address_to_resource(child, 0, &res) < 0) {15781578- dev_err(&pdev->dev, "%s has malformed 'reg' property\n",15791579- child->full_name);15801580- return -ENODEV;15811581- }15821582-15831583- ret = gpmc_cs_request(cs, resource_size(&res), &base);15841584- if (ret < 0) {15851585- dev_err(&pdev->dev, "cannot request GPMC CS 
%d\n", cs);15861586- return ret;15871587- }15881588-15891589- if (of_platform_device_create(child, NULL, &pdev->dev))15901590- return 0;15911591-15921592- dev_err(&pdev->dev, "failed to create gpmc child %s\n", child->name);15931593-15941594- return -ENODEV;15951542}1596154315971544static int gpmc_probe_dt(struct platform_device *pdev)···15991618 else if (of_node_cmp(child->name, "onenand") == 0)16001619 ret = gpmc_probe_onenand_child(pdev, child);16011620 else if (of_node_cmp(child->name, "ethernet") == 0 ||16021602- of_node_cmp(child->name, "nor") == 0)16211621+ of_node_cmp(child->name, "nor") == 0 ||16221622+ of_node_cmp(child->name, "uart") == 0)16031623 ret = gpmc_probe_generic_child(pdev, child);16041604- else if (of_node_cmp(child->name, "8250") == 0)16051605- ret = gpmc_probe_8250(pdev, child);1606162416071625 if (WARN(ret < 0, "%s: probing gpmc child %s failed\n",16081626 __func__, child->full_name))
···3535#include "iomap.h"3636#include "common.h"3737#include "mmc.h"3838-#include "hsmmc.h"3938#include "prminst44xx.h"4039#include "prcm_mpu44xx.h"4140#include "omap4-sar-layout.h"···283284 omap_wakeupgen_init();284285 irqchip_init();285286}286286-287287-#if defined(CONFIG_MMC_OMAP_HS) || defined(CONFIG_MMC_OMAP_HS_MODULE)288288-static int omap4_twl6030_hsmmc_late_init(struct device *dev)289289-{290290- int irq = 0;291291- struct platform_device *pdev = container_of(dev,292292- struct platform_device, dev);293293- struct omap_mmc_platform_data *pdata = dev->platform_data;294294-295295- /* Setting MMC1 Card detect Irq */296296- if (pdev->id == 0) {297297- irq = twl6030_mmc_card_detect_config();298298- if (irq < 0) {299299- dev_err(dev, "%s: Error card detect config(%d)\n",300300- __func__, irq);301301- return irq;302302- }303303- pdata->slots[0].card_detect_irq = irq;304304- pdata->slots[0].card_detect = twl6030_mmc_card_detect;305305- }306306- return 0;307307-}308308-309309-static __init void omap4_twl6030_hsmmc_set_late_init(struct device *dev)310310-{311311- struct omap_mmc_platform_data *pdata;312312-313313- /* dev can be null if CONFIG_MMC_OMAP_HS is not set */314314- if (!dev) {315315- pr_err("Failed %s\n", __func__);316316- return;317317- }318318- pdata = dev->platform_data;319319- pdata->init = omap4_twl6030_hsmmc_late_init;320320-}321321-322322-int __init omap4_twl6030_hsmmc_init(struct omap2_hsmmc_info *controllers)323323-{324324- struct omap2_hsmmc_info *c;325325-326326- omap_hsmmc_init(controllers);327327- for (c = controllers; c->mmc; c++) {328328- /* pdev can be null if CONFIG_MMC_OMAP_HS is not set */329329- if (!c->pdev)330330- continue;331331- omap4_twl6030_hsmmc_set_late_init(&c->pdev->dev);332332- }333333-334334- return 0;335335-}336336-#else337337-int __init omap4_twl6030_hsmmc_init(struct omap2_hsmmc_info *controllers)338338-{339339- return 0;340340-}341341-#endif
+1-1
arch/arm/mach-omap2/pm34xx.c
···120120 * will hang the system.121121 */122122 pwrdm_set_next_pwrst(mpu_pwrdm, PWRDM_POWER_ON);123123- ret = _omap_save_secure_sram((u32 *)123123+ ret = _omap_save_secure_sram((u32 *)(unsigned long)124124 __pa(omap3_secure_ram_storage));125125 pwrdm_set_next_pwrst(mpu_pwrdm, mpu_next_state);126126 /* Following is for error tracking, it should not happen */
···209209 tegra_sku_id, tegra_cpu_process_id,210210 tegra_core_process_id);211211}212212-213213-unsigned long long tegra_chip_uid(void)214214-{215215- unsigned long long lo, hi;216216-217217- lo = tegra_fuse_readl(FUSE_UID_LOW);218218- hi = tegra_fuse_readl(FUSE_UID_HIGH);219219- return (hi << 32ull) | lo;220220-}221221-EXPORT_SYMBOL(tegra_chip_uid);
+40
arch/arm/mach-vexpress/spc.c
···
 #define A15_BX_ADDR0		0x68
 #define A7_BX_ADDR0		0x78

+/* SPC CPU/cluster reset status */
+#define STANDBYWFI_STAT		0x3c
+#define STANDBYWFI_STAT_A15_CPU_MASK(cpu)	(1 << (cpu))
+#define STANDBYWFI_STAT_A7_CPU_MASK(cpu)	(1 << (3 + (cpu)))
+
 /* SPC system config interface registers */
 #define SYSCFG_WDATA		0x70
 #define SYSCFG_RDATA		0x74
···
 	pwdrn_reg = cluster_is_a15(cluster) ? A15_PWRDN_EN : A7_PWRDN_EN;
 	writel_relaxed(enable, info->baseaddr + pwdrn_reg);
+}
+
+static u32 standbywfi_cpu_mask(u32 cpu, u32 cluster)
+{
+	return cluster_is_a15(cluster) ?
+		  STANDBYWFI_STAT_A15_CPU_MASK(cpu)
+		: STANDBYWFI_STAT_A7_CPU_MASK(cpu);
+}
+
+/**
+ * ve_spc_cpu_in_wfi(u32 cpu, u32 cluster)
+ *
+ * @cpu: mpidr[7:0] bitfield describing CPU affinity level within cluster
+ * @cluster: mpidr[15:8] bitfield describing cluster affinity level
+ *
+ * @return: non-zero if and only if the specified CPU is in WFI
+ *
+ * Take care when interpreting the result of this function: a CPU might
+ * be in WFI temporarily due to idle, and is not necessarily safely
+ * parked.
+ */
+int ve_spc_cpu_in_wfi(u32 cpu, u32 cluster)
+{
+	int ret;
+	u32 mask = standbywfi_cpu_mask(cpu, cluster);
+
+	if (cluster >= MAX_CLUSTERS)
+		return 1;
+
+	ret = readl_relaxed(info->baseaddr + STANDBYWFI_STAT);
+
+	pr_debug("%s: PCFGREG[0x%X] = 0x%08X, mask = 0x%X\n",
+		 __func__, STANDBYWFI_STAT, ret, mask);
+
+	return ret & mask;
 }

 static int ve_spc_get_performance(int cluster, u32 *freq)
···1212 * published by the Free Software Foundation.1313 */14141515+#include <linux/delay.h>1516#include <linux/init.h>1617#include <linux/io.h>1718#include <linux/kernel.h>···3332#include "spc.h"34333534/* SCC conf registers */3535+#define RESET_CTRL 0x0183636+#define RESET_A15_NCORERESET(cpu) (1 << (2 + (cpu)))3737+#define RESET_A7_NCORERESET(cpu) (1 << (16 + (cpu)))3838+3639#define A15_CONF 0x4003740#define A7_CONF 0x5003841#define SYS_INFO 0x7003942#define SPC_BASE 0xb004343+4444+static void __iomem *scc;40454146/*4247 * We can't use regular spinlocks. In the switcher case, it is possible···197190 tc2_pm_down(0);198191}199192193193+static int tc2_core_in_reset(unsigned int cpu, unsigned int cluster)194194+{195195+ u32 mask = cluster ?196196+ RESET_A7_NCORERESET(cpu)197197+ : RESET_A15_NCORERESET(cpu);198198+199199+ return !(readl_relaxed(scc + RESET_CTRL) & mask);200200+}201201+202202+#define POLL_MSEC 10203203+#define TIMEOUT_MSEC 1000204204+205205+static int tc2_pm_power_down_finish(unsigned int cpu, unsigned int cluster)206206+{207207+ unsigned tries;208208+209209+ pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);210210+ BUG_ON(cluster >= TC2_CLUSTERS || cpu >= TC2_MAX_CPUS_PER_CLUSTER);211211+212212+ for (tries = 0; tries < TIMEOUT_MSEC / POLL_MSEC; ++tries) {213213+ /*214214+ * Only examine the hardware state if the target CPU has215215+ * caught up at least as far as tc2_pm_down():216216+ */217217+ if (ACCESS_ONCE(tc2_pm_use_count[cpu][cluster]) == 0) {218218+ pr_debug("%s(cpu=%u, cluster=%u): RESET_CTRL = 0x%08X\n",219219+ __func__, cpu, cluster,220220+ readl_relaxed(scc + RESET_CTRL));221221+222222+ /*223223+ * We need the CPU to reach WFI, but the power224224+ * controller may put the cluster in reset and225225+ * power it off as soon as that happens, before226226+ * we have a chance to see STANDBYWFI.227227+ *228228+ * So we need to check for both conditions:229229+ */230230+ if (tc2_core_in_reset(cpu, cluster) ||231231+ 
ve_spc_cpu_in_wfi(cpu, cluster))232232+ return 0; /* success: the CPU is halted */233233+ }234234+235235+ /* Otherwise, wait and retry: */236236+ msleep(POLL_MSEC);237237+ }238238+239239+ return -ETIMEDOUT; /* timeout */240240+}241241+200242static void tc2_pm_suspend(u64 residency)201243{202244 unsigned int mpidr, cpu, cluster;···288232}289233290234static const struct mcpm_platform_ops tc2_pm_power_ops = {291291- .power_up = tc2_pm_power_up,292292- .power_down = tc2_pm_power_down,293293- .suspend = tc2_pm_suspend,294294- .powered_up = tc2_pm_powered_up,235235+ .power_up = tc2_pm_power_up,236236+ .power_down = tc2_pm_power_down,237237+ .power_down_finish = tc2_pm_power_down_finish,238238+ .suspend = tc2_pm_suspend,239239+ .powered_up = tc2_pm_powered_up,295240};296241297242static bool __init tc2_pm_usage_count_init(void)···326269static int __init tc2_pm_init(void)327270{328271 int ret, irq;329329- void __iomem *scc;330272 u32 a15_cluster_id, a7_cluster_id, sys_info;331273 struct device_node *np;332274
···636636637637 for (i = 0; i < num_regs; ++i) {638638 unsigned int idx = start + i;639639- void *reg;639639+ compat_ulong_t reg;640640641641 switch (idx) {642642 case 15:643643- reg = (void *)&task_pt_regs(target)->pc;643643+ reg = task_pt_regs(target)->pc;644644 break;645645 case 16:646646- reg = (void *)&task_pt_regs(target)->pstate;646646+ reg = task_pt_regs(target)->pstate;647647 break;648648 case 17:649649- reg = (void *)&task_pt_regs(target)->orig_x0;649649+ reg = task_pt_regs(target)->orig_x0;650650 break;651651 default:652652- reg = (void *)&task_pt_regs(target)->regs[idx];652652+ reg = task_pt_regs(target)->regs[idx];653653 }654654655655- ret = copy_to_user(ubuf, reg, sizeof(compat_ulong_t));656656-655655+ ret = copy_to_user(ubuf, ®, sizeof(reg));657656 if (ret)658657 break;659659- else660660- ubuf += sizeof(compat_ulong_t);658658+659659+ ubuf += sizeof(reg);661660 }662661663662 return ret;···684685685686 for (i = 0; i < num_regs; ++i) {686687 unsigned int idx = start + i;687687- void *reg;688688+ compat_ulong_t reg;689689+690690+ ret = copy_from_user(®, ubuf, sizeof(reg));691691+ if (ret)692692+ return ret;693693+694694+ ubuf += sizeof(reg);688695689696 switch (idx) {690697 case 15:691691- reg = (void *)&newregs.pc;698698+ newregs.pc = reg;692699 break;693700 case 16:694694- reg = (void *)&newregs.pstate;701701+ newregs.pstate = reg;695702 break;696703 case 17:697697- reg = (void *)&newregs.orig_x0;704704+ newregs.orig_x0 = reg;698705 break;699706 default:700700- reg = (void *)&newregs.regs[idx];707707+ newregs.regs[idx] = reg;701708 }702709703703- ret = copy_from_user(reg, ubuf, sizeof(compat_ulong_t));704704-705705- if (ret)706706- goto out;707707- else708708- ubuf += sizeof(compat_ulong_t);709710 }710711711712 if (valid_user_regs(&newregs.user_regs))···713714 else714715 ret = -EINVAL;715716716716-out:717717 return ret;718718}719719
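The ptrace change above replaces casting the address of a 64-bit register to a 32-bit pointer with copying through a correctly typed local: the pointer cast reads whichever half of the value sits first in memory, which is the wrong half on big-endian hosts. A userspace illustration of the difference (function names are hypothetical; first4bytes uses memcpy to stay within strict-aliasing rules):

```c
#include <stdint.h>
#include <string.h>

/* Reading the first four bytes of a 64-bit value returns a different
 * half on little- and big-endian hosts -- this is the bug pattern. */
static uint32_t first4bytes(const uint64_t *reg)
{
	uint32_t v;
	memcpy(&v, reg, sizeof(v));	/* endian-dependent result */
	return v;
}

/* Copying through a properly typed local, as the fixed ptrace code
 * does with compat_ulong_t, always yields the low 32 bits. */
static uint32_t low32_via_local(uint64_t reg)
{
	uint32_t lo = (uint32_t)reg;	/* well-defined truncation */
	return lo;
}
```

The fixed code also moves the user-space copy to operate on the local (`copy_to_user(ubuf, &reg, sizeof(reg))`), so only the intended 32-bit quantity ever crosses the boundary.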
+5
arch/arm64/kernel/setup.c
···205205206206void __init setup_arch(char **cmdline_p)207207{208208+ /*209209+ * Unmask asynchronous aborts early to catch possible system errors.210210+ */211211+ local_async_enable();212212+208213 setup_processor();209214210215 setup_machine_fdt(__fdt_pointer);
···1616 unsigned long phys;1717 unsigned long virt_addr;1818};1919+extern struct vmemmap_backing *vmemmap_list;19202021/*2122 * Functions that deal with pagetables that could be at any level of
···445445#endif /* CONFIG_ALTIVEC */446446 if (copy_fpr_to_user(&frame->mc_fregs, current))447447 return 1;448448+449449+ /*450450+ * Clear the MSR VSX bit to indicate there is no valid state attached451451+ * to this context, except in the specific case below where we set it.452452+ */453453+ msr &= ~MSR_VSX;448454#ifdef CONFIG_VSX449455 /*450456 * Copy VSR 0-31 upper half from thread_struct to local···463457 if (copy_vsx_to_user(&frame->mc_vsregs, current))464458 return 1;465459 msr |= MSR_VSX;466466- } else if (!ctx_has_vsx_region)467467- /*468468- * With a small context structure we can't hold the VSX469469- * registers, hence clear the MSR value to indicate the state470470- * was not saved.471471- */472472- msr &= ~MSR_VSX;473473-474474-460460+ }475461#endif /* CONFIG_VSX */476462#ifdef CONFIG_SPE477463 /* save spe registers */
+6
arch/powerpc/kernel/signal_64.c
···122122 flush_fp_to_thread(current);123123 /* copy fpr regs and fpscr */124124 err |= copy_fpr_to_user(&sc->fp_regs, current);125125+126126+ /*127127+ * Clear the MSR VSX bit to indicate there is no valid state attached128128+ * to this context, except in the specific case below where we set it.129129+ */130130+ msr &= ~MSR_VSX;125131#ifdef CONFIG_VSX126132 /*127133 * Copy VSX low doubleword to local buffer for formatting,
+6
arch/powerpc/kernel/vdso32/gettimeofday.S
···232232 lwz r6,(CFG_TB_ORIG_STAMP+4)(r9)233233234234 /* Get a stable TB value */235235+#ifdef CONFIG_8xx236236+2: mftbu r3237237+ mftbl r4238238+ mftbu r0239239+#else2352402: mfspr r3, SPRN_TBRU236241 mfspr r4, SPRN_TBRL237242 mfspr r0, SPRN_TBRU243243+#endif238244 cmplw cr0,r3,r0239245 bne- 2b240246
···305305void flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr)306306{307307#ifdef CONFIG_HUGETLB_PAGE308308- if (is_vm_hugetlb_page(vma))308308+ if (vma && is_vm_hugetlb_page(vma))309309 flush_hugetlb_page(vma, vmaddr);310310#endif311311
+17-3
arch/powerpc/platforms/Kconfig.cputype
···404404405405endmenu406406407407-config CPU_LITTLE_ENDIAN408408- bool "Build little endian kernel"409409- default n407407+choice408408+ prompt "Endianness selection"409409+ default CPU_BIG_ENDIAN410410 help411411 This option selects whether a big endian or little endian kernel will412412 be built.413413414414+config CPU_BIG_ENDIAN415415+ bool "Build big endian kernel"416416+ help417417+ Build a big endian kernel.418418+419419+ If unsure, select this option.420420+421421+config CPU_LITTLE_ENDIAN422422+ bool "Build little endian kernel"423423+ help424424+ Build a little endian kernel.425425+414426 Note that if cross compiling a little endian kernel,415427 CROSS_COMPILE must point to a toolchain capable of targeting416428 little endian powerpc.429429+430430+endchoice
+1-1
arch/s390/Kconfig
···101101 select GENERIC_CPU_DEVICES if !SMP102102 select GENERIC_FIND_FIRST_BIT103103 select GENERIC_SMP_IDLE_THREAD104104- select GENERIC_TIME_VSYSCALL_OLD104104+ select GENERIC_TIME_VSYSCALL105105 select HAVE_ALIGNED_STRUCT_PAGE if SLUB106106 select HAVE_ARCH_JUMP_LABEL if !MARCH_G5107107 select HAVE_ARCH_SECCOMP_FILTER
+12-7
arch/s390/crypto/aes_s390.c
···3535static char keylen_flag;36363737struct s390_aes_ctx {3838- u8 iv[AES_BLOCK_SIZE];3938 u8 key[AES_MAX_KEY_SIZE];4039 long enc;4140 long dec;···440441 return aes_set_key(tfm, in_key, key_len);441442}442443443443-static int cbc_aes_crypt(struct blkcipher_desc *desc, long func, void *param,444444+static int cbc_aes_crypt(struct blkcipher_desc *desc, long func,444445 struct blkcipher_walk *walk)445446{447447+ struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(desc->tfm);446448 int ret = blkcipher_walk_virt(desc, walk);447449 unsigned int nbytes = walk->nbytes;450450+ struct {451451+ u8 iv[AES_BLOCK_SIZE];452452+ u8 key[AES_MAX_KEY_SIZE];453453+ } param;448454449455 if (!nbytes)450456 goto out;451457452452- memcpy(param, walk->iv, AES_BLOCK_SIZE);458458+ memcpy(param.iv, walk->iv, AES_BLOCK_SIZE);459459+ memcpy(param.key, sctx->key, sctx->key_len);453460 do {454461 /* only use complete blocks */455462 unsigned int n = nbytes & ~(AES_BLOCK_SIZE - 1);456463 u8 *out = walk->dst.virt.addr;457464 u8 *in = walk->src.virt.addr;458465459459- ret = crypt_s390_kmc(func, param, out, in, n);466466+ ret = crypt_s390_kmc(func, ¶m, out, in, n);460467 if (ret < 0 || ret != n)461468 return -EIO;462469463470 nbytes &= AES_BLOCK_SIZE - 1;464471 ret = blkcipher_walk_done(desc, walk, nbytes);465472 } while ((nbytes = walk->nbytes));466466- memcpy(walk->iv, param, AES_BLOCK_SIZE);473473+ memcpy(walk->iv, param.iv, AES_BLOCK_SIZE);467474468475out:469476 return ret;···486481 return fallback_blk_enc(desc, dst, src, nbytes);487482488483 blkcipher_walk_init(&walk, dst, src, nbytes);489489- return cbc_aes_crypt(desc, sctx->enc, sctx->iv, &walk);484484+ return cbc_aes_crypt(desc, sctx->enc, &walk);490485}491486492487static int cbc_aes_decrypt(struct blkcipher_desc *desc,···500495 return fallback_blk_dec(desc, dst, src, nbytes);501496502497 blkcipher_walk_init(&walk, dst, src, nbytes);503503- return cbc_aes_crypt(desc, sctx->dec, sctx->iv, &walk);498498+ return cbc_aes_crypt(desc, sctx->dec, 
&walk);504499}505500506501static struct crypto_alg cbc_aes_alg = {
+13-25
arch/s390/include/asm/page.h
···4848 : "memory", "cc");4949}50505151+/*5252+ * copy_page uses the mvcl instruction with 0xb0 padding byte in order to5353+ * bypass caches when copying a page. Especially when copying huge pages5454+ * this keeps L1 and L2 data caches alive.5555+ */5156static inline void copy_page(void *to, void *from)5257{5353- if (MACHINE_HAS_MVPG) {5454- register unsigned long reg0 asm ("0") = 0;5555- asm volatile(5656- " mvpg %0,%1"5757- : : "a" (to), "a" (from), "d" (reg0)5858- : "memory", "cc");5959- } else6060- asm volatile(6161- " mvc 0(256,%0),0(%1)\n"6262- " mvc 256(256,%0),256(%1)\n"6363- " mvc 512(256,%0),512(%1)\n"6464- " mvc 768(256,%0),768(%1)\n"6565- " mvc 1024(256,%0),1024(%1)\n"6666- " mvc 1280(256,%0),1280(%1)\n"6767- " mvc 1536(256,%0),1536(%1)\n"6868- " mvc 1792(256,%0),1792(%1)\n"6969- " mvc 2048(256,%0),2048(%1)\n"7070- " mvc 2304(256,%0),2304(%1)\n"7171- " mvc 2560(256,%0),2560(%1)\n"7272- " mvc 2816(256,%0),2816(%1)\n"7373- " mvc 3072(256,%0),3072(%1)\n"7474- " mvc 3328(256,%0),3328(%1)\n"7575- " mvc 3584(256,%0),3584(%1)\n"7676- " mvc 3840(256,%0),3840(%1)\n"7777- : : "a" (to), "a" (from) : "memory");5858+ register void *reg2 asm ("2") = to;5959+ register unsigned long reg3 asm ("3") = 0x1000;6060+ register void *reg4 asm ("4") = from;6161+ register unsigned long reg5 asm ("5") = 0xb0001000;6262+ asm volatile(6363+ " mvcl 2,4"6464+ : "+d" (reg2), "+d" (reg3), "+d" (reg4), "+d" (reg5)6565+ : : "memory", "cc");7866}79678068#define clear_user_page(page, vaddr, pg) clear_page(page)
+3-2
arch/s390/include/asm/vdso.h
···2626 __u64 wtom_clock_nsec; /* 0x28 */2727 __u32 tz_minuteswest; /* Minutes west of Greenwich 0x30 */2828 __u32 tz_dsttime; /* Type of dst correction 0x34 */2929- __u32 ectg_available;3030- __u32 ntp_mult; /* NTP adjusted multiplier 0x3C */2929+ __u32 ectg_available; /* ECTG instruction present 0x38 */3030+ __u32 tk_mult; /* Mult. used for xtime_nsec 0x3c */3131+ __u32 tk_shift; /* Shift used for xtime_nsec 0x40 */3132};32333334struct vdso_per_cpu_data {
···11+22+#include <asm/i387.h>33+44+/*55+ * may_use_simd - whether it is allowable at this time to issue SIMD66+ * instructions or access the SIMD register file77+ */88+static __must_check inline bool may_use_simd(void)99+{1010+ return irq_fpu_usable();1111+}
+11-12
crypto/Kconfig
···174174 help175175 Quick & dirty crypto test module.176176177177-config CRYPTO_ABLK_HELPER_X86177177+config CRYPTO_ABLK_HELPER178178 tristate179179- depends on X86180179 select CRYPTO_CRYPTD181180182181config CRYPTO_GLUE_HELPER_X86···694695 select CRYPTO_AES_X86_64 if 64BIT695696 select CRYPTO_AES_586 if !64BIT696697 select CRYPTO_CRYPTD697697- select CRYPTO_ABLK_HELPER_X86698698+ select CRYPTO_ABLK_HELPER698699 select CRYPTO_ALGAPI699700 select CRYPTO_GLUE_HELPER_X86 if 64BIT700701 select CRYPTO_LRW···894895 depends on CRYPTO895896 select CRYPTO_ALGAPI896897 select CRYPTO_CRYPTD897897- select CRYPTO_ABLK_HELPER_X86898898+ select CRYPTO_ABLK_HELPER898899 select CRYPTO_GLUE_HELPER_X86899900 select CRYPTO_CAMELLIA_X86_64900901 select CRYPTO_LRW···916917 depends on CRYPTO917918 select CRYPTO_ALGAPI918919 select CRYPTO_CRYPTD919919- select CRYPTO_ABLK_HELPER_X86920920+ select CRYPTO_ABLK_HELPER920921 select CRYPTO_GLUE_HELPER_X86921922 select CRYPTO_CAMELLIA_X86_64922923 select CRYPTO_CAMELLIA_AESNI_AVX_X86_64···968969 depends on X86 && 64BIT969970 select CRYPTO_ALGAPI970971 select CRYPTO_CRYPTD971971- select CRYPTO_ABLK_HELPER_X86972972+ select CRYPTO_ABLK_HELPER972973 select CRYPTO_CAST_COMMON973974 select CRYPTO_CAST5974975 help···991992 depends on X86 && 64BIT992993 select CRYPTO_ALGAPI993994 select CRYPTO_CRYPTD994994- select CRYPTO_ABLK_HELPER_X86995995+ select CRYPTO_ABLK_HELPER995996 select CRYPTO_GLUE_HELPER_X86996997 select CRYPTO_CAST_COMMON997998 select CRYPTO_CAST6···11091110 depends on X86 && 64BIT11101111 select CRYPTO_ALGAPI11111112 select CRYPTO_CRYPTD11121112- select CRYPTO_ABLK_HELPER_X8611131113+ select CRYPTO_ABLK_HELPER11131114 select CRYPTO_GLUE_HELPER_X8611141115 select CRYPTO_SERPENT11151116 select CRYPTO_LRW···11311132 depends on X86 && !64BIT11321133 select CRYPTO_ALGAPI11331134 select CRYPTO_CRYPTD11341134- select CRYPTO_ABLK_HELPER_X8611351135+ select CRYPTO_ABLK_HELPER11351136 select CRYPTO_GLUE_HELPER_X8611361137 select 
CRYPTO_SERPENT11371138 select CRYPTO_LRW···11531154 depends on X86 && 64BIT11541155 select CRYPTO_ALGAPI11551156 select CRYPTO_CRYPTD11561156- select CRYPTO_ABLK_HELPER_X8611571157+ select CRYPTO_ABLK_HELPER11571158 select CRYPTO_GLUE_HELPER_X8611581159 select CRYPTO_SERPENT11591160 select CRYPTO_LRW···11751176 depends on X86 && 64BIT11761177 select CRYPTO_ALGAPI11771178 select CRYPTO_CRYPTD11781178- select CRYPTO_ABLK_HELPER_X8611791179+ select CRYPTO_ABLK_HELPER11791180 select CRYPTO_GLUE_HELPER_X8611801181 select CRYPTO_SERPENT11811182 select CRYPTO_SERPENT_AVX_X86_64···12911292 depends on X86 && 64BIT12921293 select CRYPTO_ALGAPI12931294 select CRYPTO_CRYPTD12941294- select CRYPTO_ABLK_HELPER_X8612951295+ select CRYPTO_ABLK_HELPER12951296 select CRYPTO_GLUE_HELPER_X8612961297 select CRYPTO_TWOFISH_COMMON12971298 select CRYPTO_TWOFISH_X86_64
+7-1
crypto/Makefile
···22# Cryptographic API33#4455+# memneq MUST be built with -Os or -O0 to prevent early-return optimizations66+# that will defeat memneq's actual purpose to prevent timing attacks.77+CFLAGS_REMOVE_memneq.o := -O1 -O2 -O388+CFLAGS_memneq.o := -Os99+510obj-$(CONFIG_CRYPTO) += crypto.o66-crypto-y := api.o cipher.o compress.o1111+crypto-y := api.o cipher.o compress.o memneq.o712813obj-$(CONFIG_CRYPTO_WORKQUEUE) += crypto_wq.o914···110105obj-$(CONFIG_ASYNC_CORE) += async_tx/111106obj-$(CONFIG_ASYMMETRIC_KEY_TYPE) += asymmetric_keys/112107obj-$(CONFIG_CRYPTO_HASH_INFO) += hash_info.o108108+obj-$(CONFIG_CRYPTO_ABLK_HELPER) += ablk_helper.o
···230230 */231231 if (byte_count < DEFAULT_BLK_SZ) {232232empty_rbuf:233233- for (; ctx->rand_data_valid < DEFAULT_BLK_SZ;234234- ctx->rand_data_valid++) {233233+ while (ctx->rand_data_valid < DEFAULT_BLK_SZ) {235234 *ptr = ctx->rand_data[ctx->rand_data_valid];236235 ptr++;237236 byte_count--;237237+ ctx->rand_data_valid++;238238 if (byte_count == 0)239239 goto done;240240 }
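The cprng fix above converts the for-loop into a while-loop that advances ctx->rand_data_valid as each byte is consumed, so that a read which exits early (byte_count reaching zero) does not leave the index behind and hand out the same bytes again later. A self-contained sketch of the corrected draining logic, with hypothetical names (rbuf, rbuf_read) standing in for the cprng context:

```c
#include <stddef.h>

#define BLK_SZ 16

/* Stand-in for the cprng context: one block of generated bytes plus an
 * index of how many have already been handed out (BLK_SZ == empty). */
struct rbuf {
	unsigned char data[BLK_SZ];
	size_t valid;	/* index of the next unconsumed byte */
};

/* Copy up to n bytes out of the buffer. 'valid' is advanced once per
 * byte consumed -- the bug fixed above was advancing it only as the
 * loop counter, which went wrong when the loop exited early and a
 * later call re-served bytes that had already been given out. */
static size_t rbuf_read(struct rbuf *b, unsigned char *out, size_t n)
{
	size_t copied = 0;

	while (b->valid < BLK_SZ && copied < n) {
		out[copied++] = b->data[b->valid];
		b->valid++;	/* consume each byte exactly once */
	}
	return copied;
}
```

For a PRNG, re-serving bytes is not just a correctness bug but a security one: repeated "random" output is predictable output.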
+3-2
crypto/asymmetric_keys/rsa.c
···1313#include <linux/module.h>1414#include <linux/kernel.h>1515#include <linux/slab.h>1616+#include <crypto/algapi.h>1617#include "public_key.h"17181819MODULE_LICENSE("GPL");···190189 }191190 }192191193193- if (memcmp(asn1_template, EM + T_offset, asn1_size) != 0) {192192+ if (crypto_memneq(asn1_template, EM + T_offset, asn1_size) != 0) {194193 kleave(" = -EBADMSG [EM[T] ASN.1 mismatch]");195194 return -EBADMSG;196195 }197196198198- if (memcmp(H, EM + T_offset + asn1_size, hash_size) != 0) {197197+ if (crypto_memneq(H, EM + T_offset + asn1_size, hash_size) != 0) {199198 kleave(" = -EKEYREJECTED [EM[T] hash mismatch]");200199 return -EKEYREJECTED;201200 }
+1-80
crypto/asymmetric_keys/x509_public_key.c
···1818#include <linux/asn1_decoder.h>1919#include <keys/asymmetric-subtype.h>2020#include <keys/asymmetric-parser.h>2121-#include <keys/system_keyring.h>2221#include <crypto/hash.h>2322#include "asymmetric_keys.h"2423#include "public_key.h"2524#include "x509_parser.h"2626-2727-/*2828- * Find a key in the given keyring by issuer and authority.2929- */3030-static struct key *x509_request_asymmetric_key(3131- struct key *keyring,3232- const char *signer, size_t signer_len,3333- const char *authority, size_t auth_len)3434-{3535- key_ref_t key;3636- char *id;3737-3838- /* Construct an identifier. */3939- id = kmalloc(signer_len + 2 + auth_len + 1, GFP_KERNEL);4040- if (!id)4141- return ERR_PTR(-ENOMEM);4242-4343- memcpy(id, signer, signer_len);4444- id[signer_len + 0] = ':';4545- id[signer_len + 1] = ' ';4646- memcpy(id + signer_len + 2, authority, auth_len);4747- id[signer_len + 2 + auth_len] = 0;4848-4949- pr_debug("Look up: \"%s\"\n", id);5050-5151- key = keyring_search(make_key_ref(keyring, 1),5252- &key_type_asymmetric, id);5353- if (IS_ERR(key))5454- pr_debug("Request for module key '%s' err %ld\n",5555- id, PTR_ERR(key));5656- kfree(id);5757-5858- if (IS_ERR(key)) {5959- switch (PTR_ERR(key)) {6060- /* Hide some search errors */6161- case -EACCES:6262- case -ENOTDIR:6363- case -EAGAIN:6464- return ERR_PTR(-ENOKEY);6565- default:6666- return ERR_CAST(key);6767- }6868- }6969-7070- pr_devel("<==%s() = 0 [%x]\n", __func__, key_serial(key_ref_to_ptr(key)));7171- return key_ref_to_ptr(key);7272-}73257426/*7527 * Set up the signature parameters in an X.509 certificate. This involves···103151EXPORT_SYMBOL_GPL(x509_check_signature);104152105153/*106106- * Check the new certificate against the ones in the trust keyring. 
If one of107107- * those is the signing key and validates the new certificate, then mark the108108- * new certificate as being trusted.109109- *110110- * Return 0 if the new certificate was successfully validated, 1 if we couldn't111111- * find a matching parent certificate in the trusted list and an error if there112112- * is a matching certificate but the signature check fails.113113- */114114-static int x509_validate_trust(struct x509_certificate *cert,115115- struct key *trust_keyring)116116-{117117- const struct public_key *pk;118118- struct key *key;119119- int ret = 1;120120-121121- key = x509_request_asymmetric_key(trust_keyring,122122- cert->issuer, strlen(cert->issuer),123123- cert->authority,124124- strlen(cert->authority));125125- if (!IS_ERR(key)) {126126- pk = key->payload.data;127127- ret = x509_check_signature(pk, cert);128128- }129129- return ret;130130-}131131-132132-/*133154 * Attempt to parse a data blob for a key as an X509 certificate.134155 */135156static int x509_key_preparse(struct key_preparsed_payload *prep)···155230 /* Check the signature on the key if it appears to be self-signed */156231 if (!cert->authority ||157232 strcmp(cert->fingerprint, cert->authority) == 0) {158158- ret = x509_check_signature(cert->pub, cert); /* self-signed */233233+ ret = x509_check_signature(cert->pub, cert);159234 if (ret < 0)160235 goto error_free_cert;161161- } else {162162- ret = x509_validate_trust(cert, system_trusted_keyring);163163- if (!ret)164164- prep->trusted = 1;165236 }166237167238 /* Propose a description */
···11+/*22+ * Constant-time equality testing of memory regions.33+ *44+ * Authors:55+ *66+ * James Yonan <james@openvpn.net>77+ * Daniel Borkmann <dborkman@redhat.com>88+ *99+ * This file is provided under a dual BSD/GPLv2 license. When using or1010+ * redistributing this file, you may do so under either license.1111+ *1212+ * GPL LICENSE SUMMARY1313+ *1414+ * Copyright(c) 2013 OpenVPN Technologies, Inc. All rights reserved.1515+ *1616+ * This program is free software; you can redistribute it and/or modify1717+ * it under the terms of version 2 of the GNU General Public License as1818+ * published by the Free Software Foundation.1919+ *2020+ * This program is distributed in the hope that it will be useful, but2121+ * WITHOUT ANY WARRANTY; without even the implied warranty of2222+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU2323+ * General Public License for more details.2424+ *2525+ * You should have received a copy of the GNU General Public License2626+ * along with this program; if not, write to the Free Software2727+ * Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.2828+ * The full GNU General Public License is included in this distribution2929+ * in the file called LICENSE.GPL.3030+ *3131+ * BSD LICENSE3232+ *3333+ * Copyright(c) 2013 OpenVPN Technologies, Inc. 
All rights reserved.3434+ *3535+ * Redistribution and use in source and binary forms, with or without3636+ * modification, are permitted provided that the following conditions3737+ * are met:3838+ *3939+ * * Redistributions of source code must retain the above copyright4040+ * notice, this list of conditions and the following disclaimer.4141+ * * Redistributions in binary form must reproduce the above copyright4242+ * notice, this list of conditions and the following disclaimer in4343+ * the documentation and/or other materials provided with the4444+ * distribution.4545+ * * Neither the name of OpenVPN Technologies nor the names of its4646+ * contributors may be used to endorse or promote products derived4747+ * from this software without specific prior written permission.4848+ *4949+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS5050+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT5151+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR5252+ * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT5353+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,5454+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT5555+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,5656+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY5757+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT5858+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE5959+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.6060+ */6161+6262+#include <crypto/algapi.h>6363+6464+#ifndef __HAVE_ARCH_CRYPTO_MEMNEQ6565+6666+/* Generic path for arbitrary size */6767+static inline unsigned long6868+__crypto_memneq_generic(const void *a, const void *b, size_t size)6969+{7070+ unsigned long neq = 0;7171+7272+#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)7373+ while (size >= sizeof(unsigned long)) {7474+ neq |= *(unsigned long *)a ^ *(unsigned long *)b;7575+ a += sizeof(unsigned long);7676+ b += sizeof(unsigned long);7777+ size -= sizeof(unsigned long);7878+ }7979+#endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */8080+ while (size > 0) {8181+ neq |= *(unsigned char *)a ^ *(unsigned char *)b;8282+ a += 1;8383+ b += 1;8484+ size -= 1;8585+ }8686+ return neq;8787+}8888+8989+/* Loop-free fast-path for frequently used 16-byte size */9090+static inline unsigned long __crypto_memneq_16(const void *a, const void *b)9191+{9292+#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS9393+ if (sizeof(unsigned long) == 8)9494+ return ((*(unsigned long *)(a) ^ *(unsigned long *)(b))9595+ | (*(unsigned long *)(a+8) ^ *(unsigned long *)(b+8)));9696+ else if (sizeof(unsigned int) == 4)9797+ return ((*(unsigned int *)(a) ^ *(unsigned int *)(b))9898+ | (*(unsigned int *)(a+4) ^ *(unsigned int *)(b+4))9999+ | (*(unsigned int *)(a+8) ^ *(unsigned int *)(b+8))100100+ | (*(unsigned int *)(a+12) ^ *(unsigned int *)(b+12)));101101+ else102102+#endif /* 
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */103103+ return ((*(unsigned char *)(a) ^ *(unsigned char *)(b))104104+ | (*(unsigned char *)(a+1) ^ *(unsigned char *)(b+1))105105+ | (*(unsigned char *)(a+2) ^ *(unsigned char *)(b+2))106106+ | (*(unsigned char *)(a+3) ^ *(unsigned char *)(b+3))107107+ | (*(unsigned char *)(a+4) ^ *(unsigned char *)(b+4))108108+ | (*(unsigned char *)(a+5) ^ *(unsigned char *)(b+5))109109+ | (*(unsigned char *)(a+6) ^ *(unsigned char *)(b+6))110110+ | (*(unsigned char *)(a+7) ^ *(unsigned char *)(b+7))111111+ | (*(unsigned char *)(a+8) ^ *(unsigned char *)(b+8))112112+ | (*(unsigned char *)(a+9) ^ *(unsigned char *)(b+9))113113+ | (*(unsigned char *)(a+10) ^ *(unsigned char *)(b+10))114114+ | (*(unsigned char *)(a+11) ^ *(unsigned char *)(b+11))115115+ | (*(unsigned char *)(a+12) ^ *(unsigned char *)(b+12))116116+ | (*(unsigned char *)(a+13) ^ *(unsigned char *)(b+13))117117+ | (*(unsigned char *)(a+14) ^ *(unsigned char *)(b+14))118118+ | (*(unsigned char *)(a+15) ^ *(unsigned char *)(b+15)));119119+}120120+121121+/* Compare two areas of memory without leaking timing information,122122+ * and with special optimizations for common sizes. Users should123123+ * not call this function directly, but should instead use124124+ * crypto_memneq defined in crypto/algapi.h.125125+ */126126+noinline unsigned long __crypto_memneq(const void *a, const void *b,127127+ size_t size)128128+{129129+ switch (size) {130130+ case 16:131131+ return __crypto_memneq_16(a, b);132132+ default:133133+ return __crypto_memneq_generic(a, b, size);134134+ }135135+}136136+EXPORT_SYMBOL(__crypto_memneq);137137+138138+#endif /* __HAVE_ARCH_CRYPTO_MEMNEQ */
···106106void acpi_ns_delete_node(struct acpi_namespace_node *node)107107{108108 union acpi_operand_object *obj_desc;109109+ union acpi_operand_object *next_desc;109110110111 ACPI_FUNCTION_NAME(ns_delete_node);111112···115114 acpi_ns_detach_object(node);116115117116 /*118118- * Delete an attached data object if present (an object that was created119119- * and attached via acpi_attach_data). Note: After any normal object is120120- * detached above, the only possible remaining object is a data object.117117+ * Delete an attached data object list if present (objects that were118118+ * attached via acpi_attach_data). Note: After any normal object is119119+ * detached above, the only possible remaining object(s) are data120120+ * objects, in a linked list.121121 */122122 obj_desc = node->object;123123- if (obj_desc && (obj_desc->common.type == ACPI_TYPE_LOCAL_DATA)) {123123+ while (obj_desc && (obj_desc->common.type == ACPI_TYPE_LOCAL_DATA)) {124124125125 /* Invoke the attached data deletion handler if present */126126···129127 obj_desc->data.handler(node, obj_desc->data.pointer);130128 }131129130130+ next_desc = obj_desc->common.next_object;132131 acpi_ut_remove_reference(obj_desc);132132+ obj_desc = next_desc;133133+ }134134+135135+ /* Special case for the statically allocated root node */136136+137137+ if (node == acpi_gbl_root_node) {138138+ return;133139 }134140135141 /* Now we can delete the node */
+10-8
drivers/acpi/acpica/nsutils.c
···593593594594void acpi_ns_terminate(void)595595{596596- union acpi_operand_object *obj_desc;596596+ acpi_status status;597597598598 ACPI_FUNCTION_TRACE(ns_terminate);599599600600 /*601601- * 1) Free the entire namespace -- all nodes and objects602602- *603603- * Delete all object descriptors attached to namepsace nodes601601+ * Free the entire namespace -- all nodes and all objects602602+ * attached to the nodes604603 */605604 acpi_ns_delete_namespace_subtree(acpi_gbl_root_node);606605607607- /* Detach any objects attached to the root */606606+ /* Delete any objects attached to the root node */608607609609- obj_desc = acpi_ns_get_attached_object(acpi_gbl_root_node);610610- if (obj_desc) {611611- acpi_ns_detach_object(acpi_gbl_root_node);608608+ status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);609609+ if (ACPI_FAILURE(status)) {610610+ return_VOID;612611 }612612+613613+ acpi_ns_delete_node(acpi_gbl_root_node);614614+ (void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);613615614616 ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Namespace freed\n"));615617 return_VOID;
+7-2
drivers/acpi/acpica/rscalc.c
···174174 * FUNCTION: acpi_rs_get_aml_length175175 *176176 * PARAMETERS: resource - Pointer to the resource linked list177177+ * resource_list_size - Size of the resource linked list177178 * size_needed - Where the required size is returned178179 *179180 * RETURN: Status···186185 ******************************************************************************/187186188187acpi_status189189-acpi_rs_get_aml_length(struct acpi_resource * resource, acpi_size * size_needed)188188+acpi_rs_get_aml_length(struct acpi_resource *resource,189189+ acpi_size resource_list_size, acpi_size * size_needed)190190{191191 acpi_size aml_size_needed = 0;192192+ struct acpi_resource *resource_end;192193 acpi_rs_length total_size;193194194195 ACPI_FUNCTION_TRACE(rs_get_aml_length);195196196197 /* Traverse entire list of internal resource descriptors */197198198198- while (resource) {199199+ resource_end =200200+ ACPI_ADD_PTR(struct acpi_resource, resource, resource_list_size);201201+ while (resource < resource_end) {199202200203 /* Validate the descriptor type */201204
+17-19
drivers/acpi/acpica/rscreate.c
···418418 *419419 * FUNCTION: acpi_rs_create_aml_resources420420 *421421- * PARAMETERS: linked_list_buffer - Pointer to the resource linked list422422- * output_buffer - Pointer to the user's buffer421421+ * PARAMETERS: resource_list - Pointer to the resource list buffer422422+ * output_buffer - Where the AML buffer is returned423423 *424424 * RETURN: Status AE_OK if okay, else a valid acpi_status code.425425 * If the output_buffer is too small, the error will be426426 * AE_BUFFER_OVERFLOW and output_buffer->Length will point427427 * to the size buffer needed.428428 *429429- * DESCRIPTION: Takes the linked list of device resources and430430- * creates a bytestream to be used as input for the431431- * _SRS control method.429429+ * DESCRIPTION: Converts a list of device resources to an AML bytestream430430+ * to be used as input for the _SRS control method.432431 *433432 ******************************************************************************/434433435434acpi_status436436-acpi_rs_create_aml_resources(struct acpi_resource *linked_list_buffer,435435+acpi_rs_create_aml_resources(struct acpi_buffer *resource_list,437436 struct acpi_buffer *output_buffer)438437{439438 acpi_status status;···440441441442 ACPI_FUNCTION_TRACE(rs_create_aml_resources);442443443443- ACPI_DEBUG_PRINT((ACPI_DB_INFO, "LinkedListBuffer = %p\n",444444- linked_list_buffer));444444+ /* Params already validated, no need to re-validate here */445445446446- /*447447- * Params already validated, so we don't re-validate here448448- *449449- * Pass the linked_list_buffer into a module that calculates450450- * the buffer size needed for the byte stream.451451- */452452- status = acpi_rs_get_aml_length(linked_list_buffer, &aml_size_needed);446446+ ACPI_DEBUG_PRINT((ACPI_DB_INFO, "ResourceList Buffer = %p\n",447447+ resource_list->pointer));448448+449449+ /* Get the buffer size needed for the AML byte stream */450450+451451+ status = acpi_rs_get_aml_length(resource_list->pointer,452452+ 
resource_list->length,453453+ &aml_size_needed);453454454455 ACPI_DEBUG_PRINT((ACPI_DB_INFO, "AmlSizeNeeded=%X, %s\n",455456 (u32)aml_size_needed, acpi_format_exception(status)));···466467467468 /* Do the conversion */468469469469- status =470470- acpi_rs_convert_resources_to_aml(linked_list_buffer,471471- aml_size_needed,472472- output_buffer->pointer);470470+ status = acpi_rs_convert_resources_to_aml(resource_list->pointer,471471+ aml_size_needed,472472+ output_buffer->pointer);473473 if (ACPI_FAILURE(status)) {474474 return_ACPI_STATUS(status);475475 }
+1-1
drivers/acpi/acpica/rsutils.c
···753753 * Convert the linked list into a byte stream754754 */755755 buffer.length = ACPI_ALLOCATE_LOCAL_BUFFER;756756- status = acpi_rs_create_aml_resources(in_buffer->pointer, &buffer);756756+ status = acpi_rs_create_aml_resources(in_buffer, &buffer);757757 if (ACPI_FAILURE(status)) {758758 goto cleanup;759759 }
+24-7
drivers/acpi/acpica/utdebug.c
···185185 }186186187187 acpi_gbl_prev_thread_id = thread_id;188188+ acpi_gbl_nesting_level = 0;188189 }189190190191 /*···194193 */195194 acpi_os_printf("%9s-%04ld ", module_name, line_number);196195196196+#ifdef ACPI_EXEC_APP197197+ /*198198+ * For acpi_exec only, emit the thread ID and nesting level.199199+ * Note: nesting level is really only useful during a single-thread200200+ * execution. Otherwise, multiple threads will keep resetting the201201+ * level.202202+ */197203 if (ACPI_LV_THREADS & acpi_dbg_level) {198204 acpi_os_printf("[%u] ", (u32)thread_id);199205 }200206201201- acpi_os_printf("[%02ld] %-22.22s: ",202202- acpi_gbl_nesting_level,203203- acpi_ut_trim_function_name(function_name));207207+ acpi_os_printf("[%02ld] ", acpi_gbl_nesting_level);208208+#endif209209+210210+ acpi_os_printf("%-22.22s: ", acpi_ut_trim_function_name(function_name));204211205212 va_start(args, format);206213 acpi_os_vprintf(format, args);···429420 component_id, "%s\n", acpi_gbl_fn_exit_str);430421 }431422432432- acpi_gbl_nesting_level--;423423+ if (acpi_gbl_nesting_level) {424424+ acpi_gbl_nesting_level--;425425+ }433426}434427435428ACPI_EXPORT_SYMBOL(acpi_ut_exit)···478467 }479468 }480469481481- acpi_gbl_nesting_level--;470470+ if (acpi_gbl_nesting_level) {471471+ acpi_gbl_nesting_level--;472472+ }482473}483474484475ACPI_EXPORT_SYMBOL(acpi_ut_status_exit)···517504 ACPI_FORMAT_UINT64(value));518505 }519506520520- acpi_gbl_nesting_level--;507507+ if (acpi_gbl_nesting_level) {508508+ acpi_gbl_nesting_level--;509509+ }521510}522511523512ACPI_EXPORT_SYMBOL(acpi_ut_value_exit)···555540 ptr);556541 }557542558558- acpi_gbl_nesting_level--;543543+ if (acpi_gbl_nesting_level) {544544+ acpi_gbl_nesting_level--;545545+ }559546}560547561548#endif
-1
drivers/acpi/nvs.c
···1313#include <linux/slab.h>1414#include <linux/acpi.h>1515#include <linux/acpi_io.h>1616-#include <acpi/acpiosxf.h>17161817/* ACPI NVS regions, APEI may use it */1918
···8888static bool odd_can_poweroff(struct ata_device *ata_dev)8989{9090 acpi_handle handle;9191- acpi_status status;9291 struct acpi_device *acpi_dev;93929493 handle = ata_dev_acpi_handle(ata_dev);9594 if (!handle)9695 return false;97969898- status = acpi_bus_get_device(handle, &acpi_dev);9999- if (ACPI_FAILURE(status))9797+ if (acpi_bus_get_device(handle, &acpi_dev))10098 return false;10199102100 return acpi_device_can_poweroff(acpi_dev);
+1
drivers/ata/pata_arasan_cf.c
···319319 ret = clk_set_rate(acdev->clk, 166000000);320320 if (ret) {321321 dev_warn(acdev->host->dev, "clock set rate failed");322322+ clk_disable_unprepare(acdev->clk);322323 return ret;323324 }324325
+25
drivers/char/hw_random/Kconfig
···165165166166 If unsure, say Y.167167168168+config HW_RANDOM_OMAP3_ROM169169+ tristate "OMAP3 ROM Random Number Generator support"170170+ depends on HW_RANDOM && ARCH_OMAP3171171+ default HW_RANDOM172172+ ---help---173173+ This driver provides kernel-side support for the Random Number174174+ Generator hardware found on OMAP34xx processors.175175+176176+ To compile this driver as a module, choose M here: the177177+ module will be called omap3-rom-rng.178178+179179+ If unsure, say Y.180180+168181config HW_RANDOM_OCTEON169182 tristate "Octeon Random Number Generator support"170183 depends on HW_RANDOM && CAVIUM_OCTEON_SOC···338325339326 To compile this driver as a module, choose M here: the340327 module will be called tpm-rng.328328+329329+ If unsure, say Y.330330+331331+config HW_RANDOM_MSM332332+ tristate "Qualcomm MSM Random Number Generator support"333333+ depends on HW_RANDOM && ARCH_MSM334334+ ---help---335335+ This driver provides kernel-side support for the Random Number336336+ Generator hardware found on Qualcomm MSM SoCs.337337+338338+ To compile this driver as a module, choose M here: the339339+ module will be called msm-rng.341340342341 If unsure, say Y.
···11+/*22+ * Copyright (c) 2011-2013, The Linux Foundation. All rights reserved.33+ *44+ * This program is free software; you can redistribute it and/or modify55+ * it under the terms of the GNU General Public License version 2 and66+ * only version 2 as published by the Free Software Foundation.77+ *88+ * This program is distributed in the hope that it will be useful,99+ * but WITHOUT ANY WARRANTY; without even the implied warranty of1010+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the1111+ * GNU General Public License for more details.1212+ *1313+ */1414+#include <linux/clk.h>1515+#include <linux/err.h>1616+#include <linux/hw_random.h>1717+#include <linux/io.h>1818+#include <linux/module.h>1919+#include <linux/of.h>2020+#include <linux/platform_device.h>2121+2222+/* Device specific register offsets */2323+#define PRNG_DATA_OUT 0x00002424+#define PRNG_STATUS 0x00042525+#define PRNG_LFSR_CFG 0x01002626+#define PRNG_CONFIG 0x01042727+2828+/* Device specific register masks and config values */2929+#define PRNG_LFSR_CFG_MASK 0x0000ffff3030+#define PRNG_LFSR_CFG_CLOCKS 0x0000dddd3131+#define PRNG_CONFIG_HW_ENABLE BIT(1)3232+#define PRNG_STATUS_DATA_AVAIL BIT(0)3333+3434+#define MAX_HW_FIFO_DEPTH 163535+#define MAX_HW_FIFO_SIZE (MAX_HW_FIFO_DEPTH * 4)3636+#define WORD_SZ 43737+3838+struct msm_rng {3939+ void __iomem *base;4040+ struct clk *clk;4141+ struct hwrng hwrng;4242+};4343+4444+#define to_msm_rng(p) container_of(p, struct msm_rng, hwrng)4545+4646+static int msm_rng_enable(struct hwrng *hwrng, int enable)4747+{4848+ struct msm_rng *rng = to_msm_rng(hwrng);4949+ u32 val;5050+ int ret;5151+5252+ ret = clk_prepare_enable(rng->clk);5353+ if (ret)5454+ return ret;5555+5656+ if (enable) {5757+ /* Enable PRNG only if it is not already enabled */5858+ val = readl_relaxed(rng->base + PRNG_CONFIG);5959+ if (val & PRNG_CONFIG_HW_ENABLE)6060+ goto already_enabled;6161+6262+ val = readl_relaxed(rng->base + PRNG_LFSR_CFG);6363+ val &= ~PRNG_LFSR_CFG_MASK;6464+ 
val |= PRNG_LFSR_CFG_CLOCKS;6565+ writel(val, rng->base + PRNG_LFSR_CFG);6666+6767+ val = readl_relaxed(rng->base + PRNG_CONFIG);6868+ val |= PRNG_CONFIG_HW_ENABLE;6969+ writel(val, rng->base + PRNG_CONFIG);7070+ } else {7171+ val = readl_relaxed(rng->base + PRNG_CONFIG);7272+ val &= ~PRNG_CONFIG_HW_ENABLE;7373+ writel(val, rng->base + PRNG_CONFIG);7474+ }7575+7676+already_enabled:7777+ clk_disable_unprepare(rng->clk);7878+ return 0;7979+}8080+8181+static int msm_rng_read(struct hwrng *hwrng, void *data, size_t max, bool wait)8282+{8383+ struct msm_rng *rng = to_msm_rng(hwrng);8484+ size_t currsize = 0;8585+ u32 *retdata = data;8686+ size_t maxsize;8787+ int ret;8888+ u32 val;8989+9090+ /* calculate max size bytes to transfer back to caller */9191+ maxsize = min_t(size_t, MAX_HW_FIFO_SIZE, max);9292+9393+ /* no room for word data */9494+ if (maxsize < WORD_SZ)9595+ return 0;9696+9797+ ret = clk_prepare_enable(rng->clk);9898+ if (ret)9999+ return ret;100100+101101+ /* read random data from hardware */102102+ do {103103+ val = readl_relaxed(rng->base + PRNG_STATUS);104104+ if (!(val & PRNG_STATUS_DATA_AVAIL))105105+ break;106106+107107+ val = readl_relaxed(rng->base + PRNG_DATA_OUT);108108+ if (!val)109109+ break;110110+111111+ *retdata++ = val;112112+ currsize += WORD_SZ;113113+114114+ /* make sure we stay on 32bit boundary */115115+ if ((maxsize - currsize) < WORD_SZ)116116+ break;117117+ } while (currsize < maxsize);118118+119119+ clk_disable_unprepare(rng->clk);120120+121121+ return currsize;122122+}123123+124124+static int msm_rng_init(struct hwrng *hwrng)125125+{126126+ return msm_rng_enable(hwrng, 1);127127+}128128+129129+static void msm_rng_cleanup(struct hwrng *hwrng)130130+{131131+ msm_rng_enable(hwrng, 0);132132+}133133+134134+static int msm_rng_probe(struct platform_device *pdev)135135+{136136+ struct resource *res;137137+ struct msm_rng *rng;138138+ int ret;139139+140140+ rng = devm_kzalloc(&pdev->dev, sizeof(*rng), GFP_KERNEL);141141+ if (!rng)142142+ 
return -ENOMEM;143143+144144+ platform_set_drvdata(pdev, rng);145145+146146+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);147147+ rng->base = devm_ioremap_resource(&pdev->dev, res);148148+ if (IS_ERR(rng->base))149149+ return PTR_ERR(rng->base);150150+151151+ rng->clk = devm_clk_get(&pdev->dev, "core");152152+ if (IS_ERR(rng->clk))153153+ return PTR_ERR(rng->clk);154154+155155+ rng->hwrng.name = KBUILD_MODNAME,156156+ rng->hwrng.init = msm_rng_init,157157+ rng->hwrng.cleanup = msm_rng_cleanup,158158+ rng->hwrng.read = msm_rng_read,159159+160160+ ret = hwrng_register(&rng->hwrng);161161+ if (ret) {162162+ dev_err(&pdev->dev, "failed to register hwrng\n");163163+ return ret;164164+ }165165+166166+ return 0;167167+}168168+169169+static int msm_rng_remove(struct platform_device *pdev)170170+{171171+ struct msm_rng *rng = platform_get_drvdata(pdev);172172+173173+ hwrng_unregister(&rng->hwrng);174174+ return 0;175175+}176176+177177+static const struct of_device_id msm_rng_of_match[] = {178178+ { .compatible = "qcom,prng", },179179+ {}180180+};181181+MODULE_DEVICE_TABLE(of, msm_rng_of_match);182182+183183+static struct platform_driver msm_rng_driver = {184184+ .probe = msm_rng_probe,185185+ .remove = msm_rng_remove,186186+ .driver = {187187+ .name = KBUILD_MODNAME,188188+ .owner = THIS_MODULE,189189+ .of_match_table = of_match_ptr(msm_rng_of_match),190190+ }191191+};192192+module_platform_driver(msm_rng_driver);193193+194194+MODULE_ALIAS("platform:" KBUILD_MODNAME);195195+MODULE_AUTHOR("The Linux Foundation");196196+MODULE_DESCRIPTION("Qualcomm MSM random number generator driver");197197+MODULE_LICENSE("GPL v2");
+141
drivers/char/hw_random/omap3-rom-rng.c
···11+/*22+ * omap3-rom-rng.c - RNG driver for TI OMAP3 CPU family33+ *44+ * Copyright (C) 2009 Nokia Corporation55+ * Author: Juha Yrjola <juha.yrjola@solidboot.com>66+ *77+ * Copyright (C) 2013 Pali Rohár <pali.rohar@gmail.com>88+ *99+ * This file is licensed under the terms of the GNU General Public1010+ * License version 2. This program is licensed "as is" without any1111+ * warranty of any kind, whether express or implied.1212+ */1313+1414+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt1515+1616+#include <linux/module.h>1717+#include <linux/init.h>1818+#include <linux/random.h>1919+#include <linux/hw_random.h>2020+#include <linux/timer.h>2121+#include <linux/clk.h>2222+#include <linux/err.h>2323+#include <linux/platform_device.h>2424+2525+#define RNG_RESET 0x012626+#define RNG_GEN_PRNG_HW_INIT 0x022727+#define RNG_GEN_HW 0x082828+2929+/* param1: ptr, param2: count, param3: flag */3030+static u32 (*omap3_rom_rng_call)(u32, u32, u32);3131+3232+static struct timer_list idle_timer;3333+static int rng_idle;3434+static struct clk *rng_clk;3535+3636+static void omap3_rom_rng_idle(unsigned long data)3737+{3838+ int r;3939+4040+ r = omap3_rom_rng_call(0, 0, RNG_RESET);4141+ if (r != 0) {4242+ pr_err("reset failed: %d\n", r);4343+ return;4444+ }4545+ clk_disable_unprepare(rng_clk);4646+ rng_idle = 1;4747+}4848+4949+static int omap3_rom_rng_get_random(void *buf, unsigned int count)5050+{5151+ u32 r;5252+ u32 ptr;5353+5454+ del_timer_sync(&idle_timer);5555+ if (rng_idle) {5656+ clk_prepare_enable(rng_clk);5757+ r = omap3_rom_rng_call(0, 0, RNG_GEN_PRNG_HW_INIT);5858+ if (r != 0) {5959+ clk_disable_unprepare(rng_clk);6060+ pr_err("HW init failed: %d\n", r);6161+ return -EIO;6262+ }6363+ rng_idle = 0;6464+ }6565+6666+ ptr = virt_to_phys(buf);6767+ r = omap3_rom_rng_call(ptr, count, RNG_GEN_HW);6868+ mod_timer(&idle_timer, jiffies + msecs_to_jiffies(500));6969+ if (r != 0)7070+ return -EINVAL;7171+ return 0;7272+}7373+7474+static int omap3_rom_rng_data_present(struct hwrng *rng, 
int wait)7575+{7676+ return 1;7777+}7878+7979+static int omap3_rom_rng_data_read(struct hwrng *rng, u32 *data)8080+{8181+ int r;8282+8383+ r = omap3_rom_rng_get_random(data, 4);8484+ if (r < 0)8585+ return r;8686+ return 4;8787+}8888+8989+static struct hwrng omap3_rom_rng_ops = {9090+ .name = "omap3-rom",9191+ .data_present = omap3_rom_rng_data_present,9292+ .data_read = omap3_rom_rng_data_read,9393+};9494+9595+static int omap3_rom_rng_probe(struct platform_device *pdev)9696+{9797+ pr_info("initializing\n");9898+9999+ omap3_rom_rng_call = pdev->dev.platform_data;100100+ if (!omap3_rom_rng_call) {101101+ pr_err("omap3_rom_rng_call is NULL\n");102102+ return -EINVAL;103103+ }104104+105105+ setup_timer(&idle_timer, omap3_rom_rng_idle, 0);106106+ rng_clk = clk_get(&pdev->dev, "ick");107107+ if (IS_ERR(rng_clk)) {108108+ pr_err("unable to get RNG clock\n");109109+ return PTR_ERR(rng_clk);110110+ }111111+112112+ /* Leave the RNG in reset state. */113113+ clk_prepare_enable(rng_clk);114114+ omap3_rom_rng_idle(0);115115+116116+ return hwrng_register(&omap3_rom_rng_ops);117117+}118118+119119+static int omap3_rom_rng_remove(struct platform_device *pdev)120120+{121121+ hwrng_unregister(&omap3_rom_rng_ops);122122+ clk_disable_unprepare(rng_clk);123123+ clk_put(rng_clk);124124+ return 0;125125+}126126+127127+static struct platform_driver omap3_rom_rng_driver = {128128+ .driver = {129129+ .name = "omap3-rom-rng",130130+ .owner = THIS_MODULE,131131+ },132132+ .probe = omap3_rom_rng_probe,133133+ .remove = omap3_rom_rng_remove,134134+};135135+136136+module_platform_driver(omap3_rom_rng_driver);137137+138138+MODULE_ALIAS("platform:omap3-rom-rng");139139+MODULE_AUTHOR("Juha Yrjola");140140+MODULE_AUTHOR("Pali Rohár <pali.rohar@gmail.com>");141141+MODULE_LICENSE("GPL");
···44 help55 Enables the driver module for Freescale's Cryptographic Accelerator66 and Assurance Module (CAAM), also known as the SEC version 4 (SEC4).77- This module adds a job ring operation interface, and configures h/w77+ This module creates job ring devices, and configures h/w88 to operate as a DPAA component automatically, depending99 on h/w feature availability.10101111 To compile this driver as a module, choose M here: the module1212 will be called caam.13131414+config CRYPTO_DEV_FSL_CAAM_JR1515+ tristate "Freescale CAAM Job Ring driver backend"1616+ depends on CRYPTO_DEV_FSL_CAAM1717+ default y1818+ help1919+ Enables the driver module for Job Rings which are part of2020+ Freescale's Cryptographic Accelerator2121+ and Assurance Module (CAAM). This module adds a job ring operation2222+ interface.2323+2424+ To compile this driver as a module, choose M here: the module2525+ will be called caam_jr.2626+1427config CRYPTO_DEV_FSL_CAAM_RINGSIZE1528 int "Job Ring size"1616- depends on CRYPTO_DEV_FSL_CAAM2929+ depends on CRYPTO_DEV_FSL_CAAM_JR1730 range 2 91831 default "9"1932 help···44314532config CRYPTO_DEV_FSL_CAAM_INTC4633 bool "Job Ring interrupt coalescing"4747- depends on CRYPTO_DEV_FSL_CAAM3434+ depends on CRYPTO_DEV_FSL_CAAM_JR4835 default n4936 help5037 Enable the Job Ring's interrupt coalescing feature.···75627663config CRYPTO_DEV_FSL_CAAM_CRYPTO_API7764 tristate "Register algorithm implementations with the Crypto API"7878- depends on CRYPTO_DEV_FSL_CAAM6565+ depends on CRYPTO_DEV_FSL_CAAM && CRYPTO_DEV_FSL_CAAM_JR7966 default y8067 select CRYPTO_ALGAPI8168 select CRYPTO_AUTHENC···89769077config CRYPTO_DEV_FSL_CAAM_AHASH_API9178 tristate "Register hash algorithm implementations with Crypto API"9292- depends on CRYPTO_DEV_FSL_CAAM7979+ depends on CRYPTO_DEV_FSL_CAAM && CRYPTO_DEV_FSL_CAAM_JR9380 default y9481 select CRYPTO_HASH9582 help···1018810289config CRYPTO_DEV_FSL_CAAM_RNG_API10390 tristate "Register caam device for hwrng API"104104- depends on 
CRYPTO_DEV_FSL_CAAM9191+ depends on CRYPTO_DEV_FSL_CAAM && CRYPTO_DEV_FSL_CAAM_JR10592 default y10693 select CRYPTO_RNG10794 select HW_RANDOM
···1616#include "error.h"1717#include "ctrl.h"18181919-static int caam_remove(struct platform_device *pdev)2020-{2121- struct device *ctrldev;2222- struct caam_drv_private *ctrlpriv;2323- struct caam_drv_private_jr *jrpriv;2424- struct caam_full __iomem *topregs;2525- int ring, ret = 0;2626-2727- ctrldev = &pdev->dev;2828- ctrlpriv = dev_get_drvdata(ctrldev);2929- topregs = (struct caam_full __iomem *)ctrlpriv->ctrl;3030-3131- /* shut down JobRs */3232- for (ring = 0; ring < ctrlpriv->total_jobrs; ring++) {3333- ret |= caam_jr_shutdown(ctrlpriv->jrdev[ring]);3434- jrpriv = dev_get_drvdata(ctrlpriv->jrdev[ring]);3535- irq_dispose_mapping(jrpriv->irq);3636- }3737-3838- /* Shut down debug views */3939-#ifdef CONFIG_DEBUG_FS4040- debugfs_remove_recursive(ctrlpriv->dfs_root);4141-#endif4242-4343- /* Unmap controller region */4444- iounmap(&topregs->ctrl);4545-4646- kfree(ctrlpriv->jrdev);4747- kfree(ctrlpriv);4848-4949- return ret;5050-}5151-5219/*5320 * Descriptor to instantiate RNG State Handle 0 in normal mode and5421 * load the JDKEK, TDKEK and TDSK registers5522 */5656-static void build_instantiation_desc(u32 *desc)2323+static void build_instantiation_desc(u32 *desc, int handle, int do_sk)5724{5858- u32 *jump_cmd;2525+ u32 *jump_cmd, op_flags;59266027 init_job_desc(desc, 0);61282929+ op_flags = OP_TYPE_CLASS1_ALG | OP_ALG_ALGSEL_RNG |3030+ (handle << OP_ALG_AAI_SHIFT) | OP_ALG_AS_INIT;3131+6232 /* INIT RNG in non-test mode */6363- append_operation(desc, OP_TYPE_CLASS1_ALG | OP_ALG_ALGSEL_RNG |6464- OP_ALG_AS_INIT);3333+ append_operation(desc, op_flags);65346666- /* wait for done */6767- jump_cmd = append_jump(desc, JUMP_CLASS_CLASS1);6868- set_jump_tgt_here(desc, jump_cmd);3535+ if (!handle && do_sk) {3636+ /*3737+ * For SH0, Secure Keys must be generated as well3838+ */69397070- /*7171- * load 1 to clear written reg:7272- * resets the done interrupt and returns the RNG to idle.7373- */7474- append_load_imm_u32(desc, 1, LDST_SRCDST_WORD_CLRW);4040+ /* wait for done 
*/4141+ jump_cmd = append_jump(desc, JUMP_CLASS_CLASS1);4242+ set_jump_tgt_here(desc, jump_cmd);75437676- /* generate secure keys (non-test) */7777- append_operation(desc, OP_TYPE_CLASS1_ALG | OP_ALG_ALGSEL_RNG |7878- OP_ALG_RNG4_SK);4444+ /*4545+ * load 1 to clear written reg:4646+ * resets the done interrupt and returns the RNG to idle.4747+ */4848+ append_load_imm_u32(desc, 1, LDST_SRCDST_WORD_CLRW);4949+5050+ /* Initialize State Handle */5151+ append_operation(desc, OP_TYPE_CLASS1_ALG | OP_ALG_ALGSEL_RNG |5252+ OP_ALG_AAI_RNG4_SK);5353+ }5454+5555+ append_jump(desc, JUMP_CLASS_CLASS1 | JUMP_TYPE_HALT);7956}80578181-static int instantiate_rng(struct device *ctrldev)5858+/* Descriptor for deinstantiation of State Handle 0 of the RNG block. */5959+static void build_deinstantiation_desc(u32 *desc, int handle)6060+{6161+ init_job_desc(desc, 0);6262+6363+ /* Uninstantiate State Handle 0 */6464+ append_operation(desc, OP_TYPE_CLASS1_ALG | OP_ALG_ALGSEL_RNG |6565+ (handle << OP_ALG_AAI_SHIFT) | OP_ALG_AS_INITFINAL);6666+6767+ append_jump(desc, JUMP_CLASS_CLASS1 | JUMP_TYPE_HALT);6868+}6969+7070+/*7171+ * run_descriptor_deco0 - runs a descriptor on DECO0, under direct control of7272+ * the software (no JR/QI used).7373+ * @ctrldev - pointer to device7474+ * @status - descriptor status, after being run7575+ *7676+ * Return: - 0 if no error occurred7777+ * - -ENODEV if the DECO couldn't be acquired7878+ * - -EAGAIN if an error occurred while executing the descriptor7979+ */8080+static inline int run_descriptor_deco0(struct device *ctrldev, u32 *desc,8181+ u32 *status)8282{8383 struct caam_drv_private *ctrlpriv = dev_get_drvdata(ctrldev);8484 struct caam_full __iomem *topregs;8585 unsigned int timeout = 100000;8686- u32 *desc;8787- int i, ret = 0;8888-8989- desc = kmalloc(CAAM_CMD_SZ * 6, GFP_KERNEL | GFP_DMA);9090- if (!desc) {9191- dev_err(ctrldev, "can't allocate RNG init descriptor memory\n");9292- return -ENOMEM;9393- }9494- build_instantiation_desc(desc);8686+ u32
deco_dbg_reg, flags;8787+ int i;95889689 /* Set the bit to request direct access to DECO0 */9790 topregs = (struct caam_full __iomem *)ctrlpriv->ctrl;···9610397104 if (!timeout) {98105 dev_err(ctrldev, "failed to acquire DECO 0\n");9999- ret = -EIO;100100- goto out;106106+ clrbits32(&topregs->ctrl.deco_rq, DECORR_RQD0ENABLE);107107+ return -ENODEV;101108 }102109103110 for (i = 0; i < desc_len(desc); i++)104104- topregs->deco.descbuf[i] = *(desc + i);111111+ wr_reg32(&topregs->deco.descbuf[i], *(desc + i));105112106106- wr_reg32(&topregs->deco.jr_ctl_hi, DECO_JQCR_WHL | DECO_JQCR_FOUR);113113+ flags = DECO_JQCR_WHL;114114+ /*115115+ * If the descriptor length is longer than 4 words, then the116116+ * FOUR bit in JRCTRL register must be set.117117+ */118118+ if (desc_len(desc) >= 4)119119+ flags |= DECO_JQCR_FOUR;120120+121121+ /* Instruct the DECO to execute it */122122+ wr_reg32(&topregs->deco.jr_ctl_hi, flags);107123108124 timeout = 10000000;109109- while ((rd_reg32(&topregs->deco.desc_dbg) & DECO_DBG_VALID) &&110110- --timeout)125125+ do {126126+ deco_dbg_reg = rd_reg32(&topregs->deco.desc_dbg);127127+ /*128128+ * If an error occurred in the descriptor, then129129+ * the DECO status field will be set to 0x0D130130+ */131131+ if ((deco_dbg_reg & DESC_DBG_DECO_STAT_MASK) ==132132+ DESC_DBG_DECO_STAT_HOST_ERR)133133+ break;111134 cpu_relax();135135+ } while ((deco_dbg_reg & DESC_DBG_DECO_STAT_VALID) && --timeout);112136113113- if (!timeout) {114114- dev_err(ctrldev, "failed to instantiate RNG\n");115115- ret = -EIO;137137+ *status = rd_reg32(&topregs->deco.op_status_hi) &138138+ DECO_OP_STATUS_HI_ERR_MASK;139139+140140+ /* Mark the DECO as free */141141+ clrbits32(&topregs->ctrl.deco_rq, DECORR_RQD0ENABLE);142142+143143+ if (!timeout)144144+ return -EAGAIN;145145+146146+ return 0;147147+}148148+149149+/*150150+ * instantiate_rng - builds and executes a descriptor on DECO0,151151+ * which initializes the RNG block.152152+ * @ctrldev - pointer to device153153+ * 
@state_handle_mask - bitmask containing the instantiation status154154+ * for the RNG4 state handles which exist in155155+ * the RNG4 block: 1 if it's been instantiated156156+ * by an external entry, 0 otherwise.157157+ * @gen_sk - generate data to be loaded into the JDKEK, TDKEK and TDSK;158158+ * Caution: this can be done only once; if the keys need to be159159+ * regenerated, a POR is required160160+ *161161+ * Return: - 0 if no error occurred162162+ * - -ENOMEM if there isn't enough memory to allocate the descriptor163163+ * - -ENODEV if DECO0 couldn't be acquired164164+ * - -EAGAIN if an error occurred when executing the descriptor165165+ * e.g. there was a RNG hardware error due to not "good enough"166166+ * entropy being acquired.167167+ */168168+static int instantiate_rng(struct device *ctrldev, int state_handle_mask,169169+ int gen_sk)170170+{171171+ struct caam_drv_private *ctrlpriv = dev_get_drvdata(ctrldev);172172+ struct caam_full __iomem *topregs;173173+ struct rng4tst __iomem *r4tst;174174+ u32 *desc, status, rdsta_val;175175+ int ret = 0, sh_idx;176176+177177+ topregs = (struct caam_full __iomem *)ctrlpriv->ctrl;178178+ r4tst = &topregs->ctrl.r4tst[0];179179+180180+ desc = kmalloc(CAAM_CMD_SZ * 7, GFP_KERNEL);181181+ if (!desc)182182+ return -ENOMEM;183183+184184+ for (sh_idx = 0; sh_idx < RNG4_MAX_HANDLES; sh_idx++) {185185+ /*186186+ * If the corresponding bit is set, this state handle187187+ * was initialized by somebody else, so it's left alone.188188+ */189189+ if ((1 << sh_idx) & state_handle_mask)190190+ continue;191191+192192+ /* Create the descriptor for instantiating RNG State Handle */193193+ build_instantiation_desc(desc, sh_idx, gen_sk);194194+195195+ /* Try to run it through DECO0 */196196+ ret = run_descriptor_deco0(ctrldev, desc, &status);197197+198198+ /*199199+ * If ret is not 0, or descriptor status is not 0, then200200+ * something went wrong. 
No need to try the next state201201+ * handle (if available), bail out here.202202+ * Also, if for some reason, the State Handle didn't get203203+ * instantiated although the descriptor has finished204204+ * without any error (HW optimizations for later205205+ * CAAM eras), then try again.206206+ */207207+ rdsta_val =208208+ rd_reg32(&topregs->ctrl.r4tst[0].rdsta) & RDSTA_IFMASK;209209+ if (status || !(rdsta_val & (1 << sh_idx)))210210+ ret = -EAGAIN;211211+ if (ret)212212+ break;213213+214214+ dev_info(ctrldev, "Instantiated RNG4 SH%d\n", sh_idx);215215+ /* Clear the contents before recreating the descriptor */216216+ memset(desc, 0x00, CAAM_CMD_SZ * 7);116217 }117218118118- clrbits32(&topregs->ctrl.deco_rq, DECORR_RQD0ENABLE);119119-out:120219 kfree(desc);220220+121221 return ret;122222}123223124224/*125125- * By default, the TRNG runs for 200 clocks per sample;126126- * 1600 clocks per sample generates better entropy.225225+ * deinstantiate_rng - builds and executes a descriptor on DECO0,226226+ * which deinitializes the RNG block.227227+ * @ctrldev - pointer to device228228+ * @state_handle_mask - bitmask containing the instantiation status229229+ * for the RNG4 state handles which exist in230230+ * the RNG4 block: 1 if it's been instantiated231231+ *232232+ * Return: - 0 if no error occurred233233+ * - -ENOMEM if there isn't enough memory to allocate the descriptor234234+ * - -ENODEV if DECO0 couldn't be acquired235235+ * - -EAGAIN if an error occurred when executing the descriptor127236 */128128-static void kick_trng(struct platform_device *pdev)237237+static int deinstantiate_rng(struct device *ctrldev, int state_handle_mask)238238+{239239+ u32 *desc, status;240240+ int sh_idx, ret = 0;241241+242242+ desc = kmalloc(CAAM_CMD_SZ * 3, GFP_KERNEL);243243+ if (!desc)244244+ return -ENOMEM;245245+246246+ for (sh_idx = 0; sh_idx < RNG4_MAX_HANDLES; sh_idx++) {247247+ /*248248+ * If the corresponding bit is set, then it means the state249249+ * handle was 
initialized by us, and thus it needs to be250250+ * deinitialized as well251251+ */252252+ if ((1 << sh_idx) & state_handle_mask) {253253+ /*254254+ * Create the descriptor for deinstantiating this state255255+ * handle256256+ */257257+ build_deinstantiation_desc(desc, sh_idx);258258+259259+ /* Try to run it through DECO0 */260260+ ret = run_descriptor_deco0(ctrldev, desc, &status);261261+262262+ if (ret || status) {263263+ dev_err(ctrldev,264264+ "Failed to deinstantiate RNG4 SH%d\n",265265+ sh_idx);266266+ break;267267+ }268268+ dev_info(ctrldev, "Deinstantiated RNG4 SH%d\n", sh_idx);269269+ }270270+ }271271+272272+ kfree(desc);273273+274274+ return ret;275275+}276276+277277+static int caam_remove(struct platform_device *pdev)278278+{279279+ struct device *ctrldev;280280+ struct caam_drv_private *ctrlpriv;281281+ struct caam_full __iomem *topregs;282282+ int ring, ret = 0;283283+284284+ ctrldev = &pdev->dev;285285+ ctrlpriv = dev_get_drvdata(ctrldev);286286+ topregs = (struct caam_full __iomem *)ctrlpriv->ctrl;287287+288288+ /* Remove platform devices for JobRs */289289+ for (ring = 0; ring < ctrlpriv->total_jobrs; ring++) {290290+ if (ctrlpriv->jrpdev[ring])291291+ of_device_unregister(ctrlpriv->jrpdev[ring]);292292+ }293293+294294+ /* De-initialize RNG state handles initialized by this driver. 
*/295295+ if (ctrlpriv->rng4_sh_init)296296+ deinstantiate_rng(ctrldev, ctrlpriv->rng4_sh_init);297297+298298+ /* Shut down debug views */299299+#ifdef CONFIG_DEBUG_FS300300+ debugfs_remove_recursive(ctrlpriv->dfs_root);301301+#endif302302+303303+ /* Unmap controller region */304304+ iounmap(&topregs->ctrl);305305+306306+ kfree(ctrlpriv->jrpdev);307307+ kfree(ctrlpriv);308308+309309+ return ret;310310+}311311+312312+/*313313+ * kick_trng - sets the various parameters for enabling the initialization314314+ * of the RNG4 block in CAAM315315+ * @pdev - pointer to the platform device316316+ * @ent_delay - Defines the length (in system clocks) of each entropy sample.317317+ */318318+static void kick_trng(struct platform_device *pdev, int ent_delay)129319{130320 struct device *ctrldev = &pdev->dev;131321 struct caam_drv_private *ctrlpriv = dev_get_drvdata(ctrldev);···321145322146 /* put RNG4 into program mode */323147 setbits32(&r4tst->rtmctl, RTMCTL_PRGM);324324- /* 1600 clocks per sample */148148+149149+ /*150150+ * Performance-wise, it does not make sense to151151+ * set the delay to a value that is lower152152+ * than the last one that worked (i.e. the state handles153153+ * were instantiated properly). Thus, instead of wasting154154+ * time trying to set the values controlling the sample155155+ * frequency, the function simply returns.156156+ */157157+ val = (rd_reg32(&r4tst->rtsdctl) & RTSDCTL_ENT_DLY_MASK)158158+ >> RTSDCTL_ENT_DLY_SHIFT;159159+ if (ent_delay <= val) {160160+ /* put RNG4 into run mode */161161+ clrbits32(&r4tst->rtmctl, RTMCTL_PRGM);162162+ return;163163+ }164164+325165 val = rd_reg32(&r4tst->rtsdctl);326326- val = (val & ~RTSDCTL_ENT_DLY_MASK) | (1600 << RTSDCTL_ENT_DLY_SHIFT);166166+ val = (val & ~RTSDCTL_ENT_DLY_MASK) |167167+ (ent_delay << RTSDCTL_ENT_DLY_SHIFT);327168 wr_reg32(&r4tst->rtsdctl, val);328328- /* min. freq. count */329329- wr_reg32(&r4tst->rtfrqmin, 400);330330- /* max. freq. 
count */331331- wr_reg32(&r4tst->rtfrqmax, 6400);169169+ /* min. freq. count, equal to 1/4 of the entropy sample length */170170+ wr_reg32(&r4tst->rtfrqmin, ent_delay >> 2);171171+ /* max. freq. count, equal to 8 times the entropy sample length */172172+ wr_reg32(&r4tst->rtfrqmax, ent_delay << 3);332173 /* put RNG4 into run mode */333174 clrbits32(&r4tst->rtmctl, RTMCTL_PRGM);334175}···386193/* Probe routine for CAAM top (controller) level */387194static int caam_probe(struct platform_device *pdev)388195{389389- int ret, ring, rspec;196196+ int ret, ring, rspec, gen_sk, ent_delay = RTSDCTL_ENT_DLY_MIN;390197 u64 caam_id;391198 struct device *dev;392199 struct device_node *nprop, *np;···451258 rspec++;452259 }453260454454- ctrlpriv->jrdev = kzalloc(sizeof(struct device *) * rspec, GFP_KERNEL);455455- if (ctrlpriv->jrdev == NULL) {261261+ ctrlpriv->jrpdev = kzalloc(sizeof(struct platform_device *) * rspec,262262+ GFP_KERNEL);263263+ if (ctrlpriv->jrpdev == NULL) {456264 iounmap(&topregs->ctrl);457265 return -ENOMEM;458266 }···461267 ring = 0;462268 ctrlpriv->total_jobrs = 0;463269 for_each_compatible_node(np, NULL, "fsl,sec-v4.0-job-ring") {464464- caam_jr_probe(pdev, np, ring);270270+ ctrlpriv->jrpdev[ring] =271271+ of_platform_device_create(np, NULL, dev);272272+ if (!ctrlpriv->jrpdev[ring]) {273273+ pr_warn("JR%d Platform device creation error\n", ring);274274+ continue;275275+ }465276 ctrlpriv->total_jobrs++;466277 ring++;467278 }468279 if (!ring) {469280 for_each_compatible_node(np, NULL, "fsl,sec4.0-job-ring") {470470- caam_jr_probe(pdev, np, ring);281281+ ctrlpriv->jrpdev[ring] =282282+ of_platform_device_create(np, NULL, dev);283283+ if (!ctrlpriv->jrpdev[ring]) {284284+ pr_warn("JR%d Platform device creation error\n",285285+ ring);286286+ continue;287287+ }471288 ctrlpriv->total_jobrs++;472289 ring++;473290 }···504299505300 /*506301 * If SEC has RNG version >= 4 and RNG state handle has not been507507- * already instantiated ,do RNG instantiation302302+ * 
already instantiated, do RNG instantiation508303 */509509- if ((cha_vid & CHA_ID_RNG_MASK) >> CHA_ID_RNG_SHIFT >= 4 &&510510- !(rd_reg32(&topregs->ctrl.r4tst[0].rdsta) & RDSTA_IF0)) {511511- kick_trng(pdev);512512- ret = instantiate_rng(dev);304304+ if ((cha_vid & CHA_ID_RNG_MASK) >> CHA_ID_RNG_SHIFT >= 4) {305305+ ctrlpriv->rng4_sh_init =306306+ rd_reg32(&topregs->ctrl.r4tst[0].rdsta);307307+ /*308308+ * If the secure keys (TDKEK, JDKEK, TDSK), were already309309+ * generated, signal this to the function that is instantiating310310+ * the state handles. An error would occur if RNG4 attempts311311+ * to regenerate these keys before the next POR.312312+ */313313+ gen_sk = ctrlpriv->rng4_sh_init & RDSTA_SKVN ? 0 : 1;314314+ ctrlpriv->rng4_sh_init &= RDSTA_IFMASK;315315+ do {316316+ int inst_handles =317317+ rd_reg32(&topregs->ctrl.r4tst[0].rdsta) &318318+ RDSTA_IFMASK;319319+ /*320320+ * If either SH were instantiated by somebody else321321+ * (e.g. u-boot) then it is assumed that the entropy322322+ * parameters are properly set and thus the function323323+ * setting these (kick_trng(...)) is skipped.324324+ * Also, if a handle was instantiated, do not change325325+ * the TRNG parameters.326326+ */327327+ if (!(ctrlpriv->rng4_sh_init || inst_handles)) {328328+ kick_trng(pdev, ent_delay);329329+ ent_delay += 400;330330+ }331331+ /*332332+ * if instantiate_rng(...) fails, the loop will rerun333333+ * and the kick_trng(...) 
function will modify the334334+ * upper and lower limits of the entropy sampling335335+ * interval, leading to a successful initialization of336336+ * the RNG.337337+ */338338+ ret = instantiate_rng(dev, inst_handles,339339+ gen_sk);340340+ } while ((ret == -EAGAIN) && (ent_delay < RTSDCTL_ENT_DLY_MAX));513341 if (ret) {342342+ dev_err(dev, "failed to instantiate RNG");514343 caam_remove(pdev);515344 return ret;516345 }346346+ /*347347+ * Set handles init'ed by this module as the complement of the348348+ * already initialized ones349349+ */350350+ ctrlpriv->rng4_sh_init = ~ctrlpriv->rng4_sh_init & RDSTA_IFMASK;517351518352 /* Enable RDB bit so that RNG works faster */519353 setbits32(&topregs->ctrl.scfgr, SCFGR_RDBENABLE);
···37373838/* Private sub-storage for a single JobR */3939struct caam_drv_private_jr {4040- struct device *parentdev; /* points back to controller dev */4141- struct platform_device *jr_pdev;/* points to platform device for JR */4040+ struct list_head list_node; /* Job Ring device list */4141+ struct device *dev;4242 int ridx;4343 struct caam_job_ring __iomem *rregs; /* JobR's register space */4444 struct tasklet_struct irqtask;4545 int irq; /* One per queue */4646+4747+ /* Number of scatterlist crypt transforms active on the JobR */4848+ atomic_t tfm_count ____cacheline_aligned;46494750 /* Job ring info */4851 int ringsize; /* Size of rings (assume input = output) */···6663struct caam_drv_private {67646865 struct device *dev;6969- struct device **jrdev; /* Alloc'ed array per sub-device */6666+ struct platform_device **jrpdev; /* Alloc'ed array per sub-device */7067 struct platform_device *pdev;71687269 /* Physical-presence section */···8380 u8 qi_present; /* Nonzero if QI present in device */8481 int secvio_irq; /* Security violation interrupt number */85828686- /* which jr allocated to scatterlist crypto */8787- atomic_t tfm_count ____cacheline_aligned;8888- /* list of registered crypto algorithms (mk generic context handle?) */8989- struct list_head alg_list;9090- /* list of registered hash algorithms (mk generic context handle?) */9191- struct list_head hash_list;8383+#define RNG4_MAX_HANDLES 28484+ /* RNG4 block */8585+ u32 rng4_sh_init; /* This bitmap shows which of the State8686+ Handles of the RNG4 block are initialized8787+ by this driver */92889389 /*9490 * debugfs entries for developer view into driver/device
+233-110
drivers/crypto/caam/jr.c
···1313#include "desc.h"1414#include "intern.h"15151616+struct jr_driver_data {1717+ /* List of Physical JobR's with the Driver */1818+ struct list_head jr_list;1919+ spinlock_t jr_alloc_lock; /* jr_list lock */2020+} ____cacheline_aligned;2121+2222+static struct jr_driver_data driver_data;2323+2424+static int caam_reset_hw_jr(struct device *dev)2525+{2626+ struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);2727+ unsigned int timeout = 100000;2828+2929+ /*3030+ * mask interrupts since we are going to poll3131+ * for reset completion status3232+ */3333+ setbits32(&jrp->rregs->rconfig_lo, JRCFG_IMSK);3434+3535+ /* initiate flush (required prior to reset) */3636+ wr_reg32(&jrp->rregs->jrcommand, JRCR_RESET);3737+ while (((rd_reg32(&jrp->rregs->jrintstatus) & JRINT_ERR_HALT_MASK) ==3838+ JRINT_ERR_HALT_INPROGRESS) && --timeout)3939+ cpu_relax();4040+4141+ if ((rd_reg32(&jrp->rregs->jrintstatus) & JRINT_ERR_HALT_MASK) !=4242+ JRINT_ERR_HALT_COMPLETE || timeout == 0) {4343+ dev_err(dev, "failed to flush job ring %d\n", jrp->ridx);4444+ return -EIO;4545+ }4646+4747+ /* initiate reset */4848+ timeout = 100000;4949+ wr_reg32(&jrp->rregs->jrcommand, JRCR_RESET);5050+ while ((rd_reg32(&jrp->rregs->jrcommand) & JRCR_RESET) && --timeout)5151+ cpu_relax();5252+5353+ if (timeout == 0) {5454+ dev_err(dev, "failed to reset job ring %d\n", jrp->ridx);5555+ return -EIO;5656+ }5757+5858+ /* unmask interrupts */5959+ clrbits32(&jrp->rregs->rconfig_lo, JRCFG_IMSK);6060+6161+ return 0;6262+}6363+6464+/*6565+ * Shutdown JobR independent of platform property code6666+ */6767+int caam_jr_shutdown(struct device *dev)6868+{6969+ struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);7070+ dma_addr_t inpbusaddr, outbusaddr;7171+ int ret;7272+7373+ ret = caam_reset_hw_jr(dev);7474+7575+ tasklet_kill(&jrp->irqtask);7676+7777+ /* Release interrupt */7878+ free_irq(jrp->irq, dev);7979+8080+ /* Free rings */8181+ inpbusaddr = rd_reg64(&jrp->rregs->inpring_base);8282+ outbusaddr = 
rd_reg64(&jrp->rregs->outring_base);8383+ dma_free_coherent(dev, sizeof(dma_addr_t) * JOBR_DEPTH,8484+ jrp->inpring, inpbusaddr);8585+ dma_free_coherent(dev, sizeof(struct jr_outentry) * JOBR_DEPTH,8686+ jrp->outring, outbusaddr);8787+ kfree(jrp->entinfo);8888+8989+ return ret;9090+}9191+9292+static int caam_jr_remove(struct platform_device *pdev)9393+{9494+ int ret;9595+ struct device *jrdev;9696+ struct caam_drv_private_jr *jrpriv;9797+9898+ jrdev = &pdev->dev;9999+ jrpriv = dev_get_drvdata(jrdev);100100+101101+ /*102102+ * Return EBUSY if job ring already allocated.103103+ */104104+ if (atomic_read(&jrpriv->tfm_count)) {105105+ dev_err(jrdev, "Device is busy\n");106106+ return -EBUSY;107107+ }108108+109109+ /* Remove the node from Physical JobR list maintained by driver */110110+ spin_lock(&driver_data.jr_alloc_lock);111111+ list_del(&jrpriv->list_node);112112+ spin_unlock(&driver_data.jr_alloc_lock);113113+114114+ /* Release ring */115115+ ret = caam_jr_shutdown(jrdev);116116+ if (ret)117117+ dev_err(jrdev, "Failed to shut down job ring\n");118118+ irq_dispose_mapping(jrpriv->irq);119119+120120+ return ret;121121+}122122+16123/* Main per-ring interrupt handler */17124static irqreturn_t caam_jr_interrupt(int irq, void *st_dev)18125{···235128}236129237130/**131131+ * caam_jr_alloc() - Alloc a job ring for someone to use as needed.132132+ *133133+ * returns : pointer to the newly allocated physical134134+ * JobR dev can be written to if successful.135135+ **/136136+struct device *caam_jr_alloc(void)137137+{138138+ struct caam_drv_private_jr *jrpriv, *min_jrpriv = NULL;139139+ struct device *dev = NULL;140140+ int min_tfm_cnt = INT_MAX;141141+ int tfm_cnt;142142+143143+ spin_lock(&driver_data.jr_alloc_lock);144144+145145+ if (list_empty(&driver_data.jr_list)) {146146+ spin_unlock(&driver_data.jr_alloc_lock);147147+ return ERR_PTR(-ENODEV);148148+ }149149+150150+ list_for_each_entry(jrpriv, &driver_data.jr_list, list_node) {151151+ tfm_cnt = 
atomic_read(&jrpriv->tfm_count);152152+ if (tfm_cnt < min_tfm_cnt) {153153+ min_tfm_cnt = tfm_cnt;154154+ min_jrpriv = jrpriv;155155+ }156156+ if (!min_tfm_cnt)157157+ break;158158+ }159159+160160+ if (min_jrpriv) {161161+ atomic_inc(&min_jrpriv->tfm_count);162162+ dev = min_jrpriv->dev;163163+ }164164+ spin_unlock(&driver_data.jr_alloc_lock);165165+166166+ return dev;167167+}168168+EXPORT_SYMBOL(caam_jr_alloc);169169+170170+/**171171+ * caam_jr_free() - Free the Job Ring172172+ * @rdev - points to the dev that identifies the Job ring to173173+ * be released.174174+ **/175175+void caam_jr_free(struct device *rdev)176176+{177177+ struct caam_drv_private_jr *jrpriv = dev_get_drvdata(rdev);178178+179179+ atomic_dec(&jrpriv->tfm_count);180180+}181181+EXPORT_SYMBOL(caam_jr_free);182182+183183+/**238184 * caam_jr_enqueue() - Enqueue a job descriptor head. Returns 0 if OK,239185 * -EBUSY if the queue is full, -EIO if it cannot map the caller's240186 * descriptor.···367207}368208EXPORT_SYMBOL(caam_jr_enqueue);369209370370-static int caam_reset_hw_jr(struct device *dev)371371-{372372- struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);373373- unsigned int timeout = 100000;374374-375375- /*376376- * mask interrupts since we are going to poll377377- * for reset completion status378378- */379379- setbits32(&jrp->rregs->rconfig_lo, JRCFG_IMSK);380380-381381- /* initiate flush (required prior to reset) */382382- wr_reg32(&jrp->rregs->jrcommand, JRCR_RESET);383383- while (((rd_reg32(&jrp->rregs->jrintstatus) & JRINT_ERR_HALT_MASK) ==384384- JRINT_ERR_HALT_INPROGRESS) && --timeout)385385- cpu_relax();386386-387387- if ((rd_reg32(&jrp->rregs->jrintstatus) & JRINT_ERR_HALT_MASK) !=388388- JRINT_ERR_HALT_COMPLETE || timeout == 0) {389389- dev_err(dev, "failed to flush job ring %d\n", jrp->ridx);390390- return -EIO;391391- }392392-393393- /* initiate reset */394394- timeout = 100000;395395- wr_reg32(&jrp->rregs->jrcommand, JRCR_RESET);396396- while 
((rd_reg32(&jrp->rregs->jrcommand) & JRCR_RESET) && --timeout)397397- cpu_relax();398398-399399- if (timeout == 0) {400400- dev_err(dev, "failed to reset job ring %d\n", jrp->ridx);401401- return -EIO;402402- }403403-404404- /* unmask interrupts */405405- clrbits32(&jrp->rregs->rconfig_lo, JRCFG_IMSK);406406-407407- return 0;408408-}409409-410210/*411211 * Init JobR independent of platform property detection412212 */···382262383263 /* Connect job ring interrupt handler. */384264 error = request_irq(jrp->irq, caam_jr_interrupt, IRQF_SHARED,385385- "caam-jobr", dev);265265+ dev_name(dev), dev);386266 if (error) {387267 dev_err(dev, "can't connect JobR %d interrupt (%d)\n",388268 jrp->ridx, jrp->irq);···438318 return 0;439319}440320441441-/*442442- * Shutdown JobR independent of platform property code443443- */444444-int caam_jr_shutdown(struct device *dev)445445-{446446- struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);447447- dma_addr_t inpbusaddr, outbusaddr;448448- int ret;449449-450450- ret = caam_reset_hw_jr(dev);451451-452452- tasklet_kill(&jrp->irqtask);453453-454454- /* Release interrupt */455455- free_irq(jrp->irq, dev);456456-457457- /* Free rings */458458- inpbusaddr = rd_reg64(&jrp->rregs->inpring_base);459459- outbusaddr = rd_reg64(&jrp->rregs->outring_base);460460- dma_free_coherent(dev, sizeof(dma_addr_t) * JOBR_DEPTH,461461- jrp->inpring, inpbusaddr);462462- dma_free_coherent(dev, sizeof(struct jr_outentry) * JOBR_DEPTH,463463- jrp->outring, outbusaddr);464464- kfree(jrp->entinfo);465465- of_device_unregister(jrp->jr_pdev);466466-467467- return ret;468468-}469321470322/*471471- * Probe routine for each detected JobR subsystem. 
It assumes that472472- * property detection was picked up externally.323323+ * Probe routine for each detected JobR subsystem.473324 */474474-int caam_jr_probe(struct platform_device *pdev, struct device_node *np,475475- int ring)325325+static int caam_jr_probe(struct platform_device *pdev)476326{477477- struct device *ctrldev, *jrdev;478478- struct platform_device *jr_pdev;479479- struct caam_drv_private *ctrlpriv;327327+ struct device *jrdev;328328+ struct device_node *nprop;329329+ struct caam_job_ring __iomem *ctrl;480330 struct caam_drv_private_jr *jrpriv;481481- u32 *jroffset;331331+ static int total_jobrs;482332 int error;483333484484- ctrldev = &pdev->dev;485485- ctrlpriv = dev_get_drvdata(ctrldev);486486-334334+ jrdev = &pdev->dev;487335 jrpriv = kmalloc(sizeof(struct caam_drv_private_jr),488336 GFP_KERNEL);489489- if (jrpriv == NULL) {490490- dev_err(ctrldev, "can't alloc private mem for job ring %d\n",491491- ring);337337+ if (!jrpriv)338338+ return -ENOMEM;339339+340340+ dev_set_drvdata(jrdev, jrpriv);341341+342342+ /* save ring identity relative to detection */343343+ jrpriv->ridx = total_jobrs++;344344+345345+ nprop = pdev->dev.of_node;346346+ /* Get configuration properties from device tree */347347+ /* First, get register page */348348+ ctrl = of_iomap(nprop, 0);349349+ if (!ctrl) {350350+ dev_err(jrdev, "of_iomap() failed\n");492351 return -ENOMEM;493352 }494494- jrpriv->parentdev = ctrldev; /* point back to parent */495495- jrpriv->ridx = ring; /* save ring identity relative to detection */496353497497- /*498498- * Derive a pointer to the detected JobRs regs499499- * Driver has already iomapped the entire space, we just500500- * need to add in the offset to this JobR. 
Don't know if I501501- * like this long-term, but it'll run502502- */503503- jroffset = (u32 *)of_get_property(np, "reg", NULL);504504- jrpriv->rregs = (struct caam_job_ring __iomem *)((void *)ctrlpriv->ctrl505505- + *jroffset);506506-507507- /* Build a local dev for each detected queue */508508- jr_pdev = of_platform_device_create(np, NULL, ctrldev);509509- if (jr_pdev == NULL) {510510- kfree(jrpriv);511511- return -EINVAL;512512- }513513-514514- jrpriv->jr_pdev = jr_pdev;515515- jrdev = &jr_pdev->dev;516516- dev_set_drvdata(jrdev, jrpriv);517517- ctrlpriv->jrdev[ring] = jrdev;354354+ jrpriv->rregs = (struct caam_job_ring __force *)ctrl;518355519356 if (sizeof(dma_addr_t) == sizeof(u64))520520- if (of_device_is_compatible(np, "fsl,sec-v5.0-job-ring"))357357+ if (of_device_is_compatible(nprop, "fsl,sec-v5.0-job-ring"))521358 dma_set_mask(jrdev, DMA_BIT_MASK(40));522359 else523360 dma_set_mask(jrdev, DMA_BIT_MASK(36));···482405 dma_set_mask(jrdev, DMA_BIT_MASK(32));483406484407 /* Identify the interrupt */485485- jrpriv->irq = irq_of_parse_and_map(np, 0);408408+ jrpriv->irq = irq_of_parse_and_map(nprop, 0);486409487410 /* Now do the platform independent part */488411 error = caam_jr_init(jrdev); /* now turn on hardware */489412 if (error) {490490- of_device_unregister(jr_pdev);491413 kfree(jrpriv);492414 return error;493415 }494416495495- return error;417417+ jrpriv->dev = jrdev;418418+ spin_lock(&driver_data.jr_alloc_lock);419419+ list_add_tail(&jrpriv->list_node, &driver_data.jr_list);420420+ spin_unlock(&driver_data.jr_alloc_lock);421421+422422+ atomic_set(&jrpriv->tfm_count, 0);423423+424424+ return 0;496425}426426+427427+static struct of_device_id caam_jr_match[] = {428428+ {429429+ .compatible = "fsl,sec-v4.0-job-ring",430430+ },431431+ {432432+ .compatible = "fsl,sec4.0-job-ring",433433+ },434434+ {},435435+};436436+MODULE_DEVICE_TABLE(of, caam_jr_match);437437+438438+static struct platform_driver caam_jr_driver = {439439+ .driver = {440440+ .name = 
"caam_jr",441441+ .owner = THIS_MODULE,442442+ .of_match_table = caam_jr_match,443443+ },444444+ .probe = caam_jr_probe,445445+ .remove = caam_jr_remove,446446+};447447+448448+static int __init jr_driver_init(void)449449+{450450+ spin_lock_init(&driver_data.jr_alloc_lock);451451+ INIT_LIST_HEAD(&driver_data.jr_list);452452+ return platform_driver_register(&caam_jr_driver);453453+}454454+455455+static void __exit jr_driver_exit(void)456456+{457457+ platform_driver_unregister(&caam_jr_driver);458458+}459459+460460+module_init(jr_driver_init);461461+module_exit(jr_driver_exit);462462+463463+MODULE_LICENSE("GPL");464464+MODULE_DESCRIPTION("FSL CAAM JR request backend");465465+MODULE_AUTHOR("Freescale Semiconductor - NMG/STC");
+2-3
drivers/crypto/caam/jr.h
···88#define JR_H991010/* Prototypes for backend-level services exposed to APIs */1111+struct device *caam_jr_alloc(void);1212+void caam_jr_free(struct device *rdev);1113int caam_jr_enqueue(struct device *dev, u32 *desc,1214 void (*cbk)(struct device *dev, u32 *desc, u32 status,1315 void *areq),1416 void *areq);15171616-extern int caam_jr_probe(struct platform_device *pdev, struct device_node *np,1717- int ring);1818-extern int caam_jr_shutdown(struct device *dev);1918#endif /* JR_H */
···341341 case USB_DEVICE_ID_GENIUS_GX_IMPERATOR:342342 rdesc = kye_consumer_control_fixup(hdev, rdesc, rsize, 83,343343 "Genius Gx Imperator Keyboard");344344+ case USB_DEVICE_ID_GENIUS_MANTICORE:345345+ rdesc = kye_consumer_control_fixup(hdev, rdesc, rsize, 104,346346+ "Genius Manticore Keyboard");344347 break;345348 }346349 return rdesc;···421418 goto enabling_err;422419 }423420 break;421421+ case USB_DEVICE_ID_GENIUS_MANTICORE:422422+ /*423423+ * The manticore keyboard needs to have all the interfaces424424+ * opened at least once to be fully functional.425425+ */426426+ if (hid_hw_open(hdev))427427+ hid_hw_close(hdev);428428+ break;424429 }425430426431 return 0;···450439 USB_DEVICE_ID_GENIUS_GILA_GAMING_MOUSE) },451440 { HID_USB_DEVICE(USB_VENDOR_ID_KYE,452441 USB_DEVICE_ID_GENIUS_GX_IMPERATOR) },442442+ { HID_USB_DEVICE(USB_VENDOR_ID_KYE,443443+ USB_DEVICE_ID_GENIUS_MANTICORE) },453444 { }454445};455446MODULE_DEVICE_TABLE(hid, kye_devices);
···8181config TCS34728282 tristate "TAOS TCS3472 color light-to-digital converter"8383 depends on I2C8484+ select IIO_BUFFER8585+ select IIO_TRIGGERED_BUFFER8486 help8587 If you say yes here you get support for the TAOS TCS34728688 family of color light-to-digital converters with IR filter.
···1919config MAG31102020 tristate "Freescale MAG3110 3-Axis Magnetometer"2121 depends on I2C2222+ select IIO_BUFFER2323+ select IIO_TRIGGERED_BUFFER2224 help2325 Say yes here to build support for the Freescale MAG3110 3-Axis2426 magnetometer.
···180180 if (WARN_ON(down_interruptible(&i8042tregs)))181181 return -1;182182183183- if (hp_sdc_enqueue_transaction(&t)) return -1;183183+ if (hp_sdc_enqueue_transaction(&t)) {184184+ up(&i8042tregs);185185+ return -1;186186+ }184187185188 /* Sleep until results come back. */186189 if (WARN_ON(down_interruptible(&i8042tregs)))
+11
drivers/input/touchscreen/Kconfig
···906906 To compile this driver as a module, choose M here: the907907 module will be called stmpe-ts.908908909909+config TOUCHSCREEN_SUR40910910+ tristate "Samsung SUR40 (Surface 2.0/PixelSense) touchscreen"911911+ depends on USB912912+ select INPUT_POLLDEV913913+ help914914+ Say Y here if you want support for the Samsung SUR40 touchscreen915915+ (also known as Microsoft Surface 2.0 or Microsoft PixelSense).916916+917917+ To compile this driver as a module, choose M here: the918918+ module will be called sur40.919919+909920config TOUCHSCREEN_TPS6507X910921 tristate "TPS6507x based touchscreens"911922 depends on I2C
···678678 } else679679 init_stripe(sh, sector, previous);680680 } else {681681+ spin_lock(&conf->device_lock);681682 if (atomic_read(&sh->count)) {682683 BUG_ON(!list_empty(&sh->lru)683684 && !test_bit(STRIPE_EXPANDING, &sh->state)684685 && !test_bit(STRIPE_ON_UNPLUG_LIST, &sh->state)685685- && !test_bit(STRIPE_ON_RELEASE_LIST, &sh->state));686686+ );686687 } else {687687- spin_lock(&conf->device_lock);688688 if (!test_bit(STRIPE_HANDLE, &sh->state))689689 atomic_inc(&conf->active_stripes);690690- if (list_empty(&sh->lru) &&691691- !test_bit(STRIPE_ON_RELEASE_LIST, &sh->state) &&692692- !test_bit(STRIPE_EXPANDING, &sh->state))693693- BUG();690690+ BUG_ON(list_empty(&sh->lru));694691 list_del_init(&sh->lru);695692 if (sh->group) {696693 sh->group->stripes_cnt--;697694 sh->group = NULL;698695 }699699- spin_unlock(&conf->device_lock);700696 }697697+ spin_unlock(&conf->device_lock);701698 }702699 } while (sh == NULL);703700···54685471 for (i = 0; i < *group_cnt; i++) {54695472 struct r5worker_group *group;5470547354715471- group = worker_groups[i];54745474+ group = &(*worker_groups)[i];54725475 INIT_LIST_HEAD(&group->handle_list);54735476 group->conf = conf;54745477 group->workers = workers + i * cnt;
+103-18
drivers/ntb/ntb_hw.c
···141141 ndev->event_cb = NULL;142142}143143144144+static void ntb_irq_work(unsigned long data)145145+{146146+ struct ntb_db_cb *db_cb = (struct ntb_db_cb *)data;147147+ int rc;148148+149149+ rc = db_cb->callback(db_cb->data, db_cb->db_num);150150+ if (rc)151151+ tasklet_schedule(&db_cb->irq_work);152152+ else {153153+ struct ntb_device *ndev = db_cb->ndev;154154+ unsigned long mask;155155+156156+ mask = readw(ndev->reg_ofs.ldb_mask);157157+ clear_bit(db_cb->db_num * ndev->bits_per_vector, &mask);158158+ writew(mask, ndev->reg_ofs.ldb_mask);159159+ }160160+}161161+144162/**145163 * ntb_register_db_callback() - register a callback for doorbell interrupt146164 * @ndev: pointer to ntb_device instance···173155 * RETURNS: An appropriate -ERRNO error value on error, or zero for success.174156 */175157int ntb_register_db_callback(struct ntb_device *ndev, unsigned int idx,176176- void *data, void (*func)(void *data, int db_num))158158+ void *data, int (*func)(void *data, int db_num))177159{178160 unsigned long mask;179161···184166185167 ndev->db_cb[idx].callback = func;186168 ndev->db_cb[idx].data = data;169169+ ndev->db_cb[idx].ndev = ndev;170170+171171+ tasklet_init(&ndev->db_cb[idx].irq_work, ntb_irq_work,172172+ (unsigned long) &ndev->db_cb[idx]);187173188174 /* unmask interrupt */189175 mask = readw(ndev->reg_ofs.ldb_mask);···215193 mask = readw(ndev->reg_ofs.ldb_mask);216194 set_bit(idx * ndev->bits_per_vector, &mask);217195 writew(mask, ndev->reg_ofs.ldb_mask);196196+197197+ tasklet_disable(&ndev->db_cb[idx].irq_work);218198219199 ndev->db_cb[idx].callback = NULL;220200}···702678 return -EINVAL;703679704680 ndev->limits.max_mw = SNB_ERRATA_MAX_MW;681681+ ndev->limits.max_db_bits = SNB_MAX_DB_BITS;705682 ndev->reg_ofs.spad_write = ndev->mw[1].vbase +706683 SNB_SPAD_OFFSET;707684 ndev->reg_ofs.rdb = ndev->mw[1].vbase +···713688 */714689 writeq(ndev->mw[1].bar_sz + 0x1000, ndev->reg_base +715690 SNB_PBAR4LMT_OFFSET);691691+ /* HW errata on the Limit registers. 
They can only be692692+ * written when the base register is 4GB aligned and693693+ * < 32bit. This should already be the case based on the694694+ * driver defaults, but write the Limit registers first695695+ * just in case.696696+ */716697 } else {717698 ndev->limits.max_mw = SNB_MAX_MW;699699+700700+ /* HW Errata on bit 14 of b2bdoorbell register. Writes701701+ * will not be mirrored to the remote system. Shrink702702+ * the number of bits by one, since bit 14 is the last703703+ * bit.704704+ */705705+ ndev->limits.max_db_bits = SNB_MAX_DB_BITS - 1;718706 ndev->reg_ofs.spad_write = ndev->reg_base +719707 SNB_B2B_SPAD_OFFSET;720708 ndev->reg_ofs.rdb = ndev->reg_base +···737699 * something silly738700 */739701 writeq(0, ndev->reg_base + SNB_PBAR4LMT_OFFSET);702702+ /* HW errata on the Limit registers. They can only be703703+ * written when the base register is 4GB aligned and704704+ * < 32bit. This should already be the case based on the705705+ * driver defaults, but write the Limit registers first706706+ * just in case.707707+ */740708 }741709742710 /* The Xeon errata workaround requires setting SBAR Base···813769 * have an equal amount.814770 */815771 ndev->limits.max_spads = SNB_MAX_COMPAT_SPADS / 2;772772+ ndev->limits.max_db_bits = SNB_MAX_DB_BITS;816773 /* Note: The SDOORBELL is the cause of the errata. 
You REALLY817774 * don't want to touch it.818775 */···838793 * have an equal amount.839794 */840795 ndev->limits.max_spads = SNB_MAX_COMPAT_SPADS / 2;796796+ ndev->limits.max_db_bits = SNB_MAX_DB_BITS;841797 ndev->reg_ofs.rdb = ndev->reg_base + SNB_PDOORBELL_OFFSET;842798 ndev->reg_ofs.ldb = ndev->reg_base + SNB_SDOORBELL_OFFSET;843799 ndev->reg_ofs.ldb_mask = ndev->reg_base + SNB_SDBMSK_OFFSET;···865819 ndev->reg_ofs.lnk_stat = ndev->reg_base + SNB_SLINK_STATUS_OFFSET;866820 ndev->reg_ofs.spci_cmd = ndev->reg_base + SNB_PCICMD_OFFSET;867821868868- ndev->limits.max_db_bits = SNB_MAX_DB_BITS;869822 ndev->limits.msix_cnt = SNB_MSIX_CNT;870823 ndev->bits_per_vector = SNB_DB_BITS_PER_VEC;871824···979934{980935 struct ntb_db_cb *db_cb = data;981936 struct ntb_device *ndev = db_cb->ndev;937937+ unsigned long mask;982938983939 dev_dbg(&ndev->pdev->dev, "MSI-X irq %d received for DB %d\n", irq,984940 db_cb->db_num);985941986986- if (db_cb->callback)987987- db_cb->callback(db_cb->data, db_cb->db_num);942942+ mask = readw(ndev->reg_ofs.ldb_mask);943943+ set_bit(db_cb->db_num * ndev->bits_per_vector, &mask);944944+ writew(mask, ndev->reg_ofs.ldb_mask);945945+946946+ tasklet_schedule(&db_cb->irq_work);988947989948 /* No need to check for the specific HB irq, any interrupt means990949 * we're connected.···1004955{1005956 struct ntb_db_cb *db_cb = data;1006957 struct ntb_device *ndev = db_cb->ndev;958958+ unsigned long mask;10079591008960 dev_dbg(&ndev->pdev->dev, "MSI-X irq %d received for DB %d\n", irq,1009961 db_cb->db_num);101096210111011- if (db_cb->callback)10121012- db_cb->callback(db_cb->data, db_cb->db_num);963963+ mask = readw(ndev->reg_ofs.ldb_mask);964964+ set_bit(db_cb->db_num * ndev->bits_per_vector, &mask);965965+ writew(mask, ndev->reg_ofs.ldb_mask);966966+967967+ tasklet_schedule(&db_cb->irq_work);10139681014969 /* On Sandybridge, there are 16 bits in the interrupt register1015970 * but only 4 vectors. 
So, 5 bits are assigned to the first 3···1039986 dev_err(&ndev->pdev->dev, "Error determining link status\n");10409871041988 /* bit 15 is always the link bit */10421042- writew(1 << ndev->limits.max_db_bits, ndev->reg_ofs.ldb);989989+ writew(1 << SNB_LINK_DB, ndev->reg_ofs.ldb);10439901044991 return IRQ_HANDLED;1045992}···11281075 "Only %d MSI-X vectors. Limiting the number of queues to that number.\n",11291076 rc);11301077 msix_entries = rc;10781078+10791079+ rc = pci_enable_msix(pdev, ndev->msix_entries, msix_entries);10801080+ if (rc)10811081+ goto err1;11311082 }1132108311331084 for (i = 0; i < msix_entries; i++) {···12331176 */12341177 if (ndev->hw_type == BWD_HW)12351178 writeq(~0, ndev->reg_ofs.ldb_mask);12361236- else12371237- writew(~(1 << ndev->limits.max_db_bits),12381238- ndev->reg_ofs.ldb_mask);11791179+ else {11801180+ u16 var = 1 << SNB_LINK_DB;11811181+ writew(~var, ndev->reg_ofs.ldb_mask);11821182+ }1239118312401184 rc = ntb_setup_msix(ndev);12411185 if (!rc)···13441286 }13451287}1346128812891289+static void ntb_hw_link_up(struct ntb_device *ndev)12901290+{12911291+ if (ndev->conn_type == NTB_CONN_TRANSPARENT)12921292+ ntb_link_event(ndev, NTB_LINK_UP);12931293+ else {12941294+ u32 ntb_cntl;12951295+12961296+ /* Let's bring the NTB link up */12971297+ ntb_cntl = readl(ndev->reg_ofs.lnk_cntl);12981298+ ntb_cntl &= ~(NTB_CNTL_LINK_DISABLE | NTB_CNTL_CFG_LOCK);12991299+ ntb_cntl |= NTB_CNTL_P2S_BAR23_SNOOP | NTB_CNTL_S2P_BAR23_SNOOP;13001300+ ntb_cntl |= NTB_CNTL_P2S_BAR45_SNOOP | NTB_CNTL_S2P_BAR45_SNOOP;13011301+ writel(ntb_cntl, ndev->reg_ofs.lnk_cntl);13021302+ }13031303+}13041304+13051305+static void ntb_hw_link_down(struct ntb_device *ndev)13061306+{13071307+ u32 ntb_cntl;13081308+13091309+ if (ndev->conn_type == NTB_CONN_TRANSPARENT) {13101310+ ntb_link_event(ndev, NTB_LINK_DOWN);13111311+ return;13121312+ }13131313+13141314+ /* Bring NTB link down */13151315+ ntb_cntl = readl(ndev->reg_ofs.lnk_cntl);13161316+ ntb_cntl &= 
~(NTB_CNTL_P2S_BAR23_SNOOP | NTB_CNTL_S2P_BAR23_SNOOP);13171317+ ntb_cntl &= ~(NTB_CNTL_P2S_BAR45_SNOOP | NTB_CNTL_S2P_BAR45_SNOOP);13181318+ ntb_cntl |= NTB_CNTL_LINK_DISABLE | NTB_CNTL_CFG_LOCK;13191319+ writel(ntb_cntl, ndev->reg_ofs.lnk_cntl);13201320+}13211321+13471322static int ntb_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)13481323{13491324 struct ntb_device *ndev;···14651374 if (rc)14661375 goto err6;1467137614681468- /* Let's bring the NTB link up */14691469- writel(NTB_CNTL_BAR23_SNOOP | NTB_CNTL_BAR45_SNOOP,14701470- ndev->reg_ofs.lnk_cntl);13771377+ ntb_hw_link_up(ndev);1471137814721379 return 0;14731380···14951406{14961407 struct ntb_device *ndev = pci_get_drvdata(pdev);14971408 int i;14981498- u32 ntb_cntl;1499140915001500- /* Bring NTB link down */15011501- ntb_cntl = readl(ndev->reg_ofs.lnk_cntl);15021502- ntb_cntl |= NTB_CNTL_LINK_DISABLE;15031503- writel(ntb_cntl, ndev->reg_ofs.lnk_cntl);14101410+ ntb_hw_link_down(ndev);1504141115051412 ntb_transport_free(ndev->ntb_transport);15061413
+4-3
drivers/ntb/ntb_hw.h
···106106};107107108108struct ntb_db_cb {109109- void (*callback) (void *data, int db_num);109109+ int (*callback)(void *data, int db_num);110110 unsigned int db_num;111111 void *data;112112 struct ntb_device *ndev;113113+ struct tasklet_struct irq_work;113114};114115115116struct ntb_device {···229228void ntb_unregister_transport(struct ntb_device *ndev);230229void ntb_set_mw_addr(struct ntb_device *ndev, unsigned int mw, u64 addr);231230int ntb_register_db_callback(struct ntb_device *ndev, unsigned int idx,232232- void *data, void (*db_cb_func) (void *data,233233- int db_num));231231+ void *data, int (*db_cb_func)(void *data,232232+ int db_num));234233void ntb_unregister_db_callback(struct ntb_device *ndev, unsigned int idx);235234int ntb_register_event_callback(struct ntb_device *ndev,236235 void (*event_cb_func) (void *handle,
···119119120120 void (*rx_handler) (struct ntb_transport_qp *qp, void *qp_data,121121 void *data, int len);122122- struct tasklet_struct rx_work;123122 struct list_head rx_pend_q;124123 struct list_head rx_free_q;125124 spinlock_t ntb_rx_pend_q_lock;···583584 return 0;584585}585586586586-static void ntb_qp_link_cleanup(struct work_struct *work)587587+static void ntb_qp_link_cleanup(struct ntb_transport_qp *qp)587588{588588- struct ntb_transport_qp *qp = container_of(work,589589- struct ntb_transport_qp,590590- link_cleanup);591589 struct ntb_transport *nt = qp->transport;592590 struct pci_dev *pdev = ntb_query_pdev(nt->ndev);593591···598602599603 dev_info(&pdev->dev, "qp %d: Link Down\n", qp->qp_num);600604 qp->qp_link = NTB_LINK_DOWN;605605+}606606+607607+static void ntb_qp_link_cleanup_work(struct work_struct *work)608608+{609609+ struct ntb_transport_qp *qp = container_of(work,610610+ struct ntb_transport_qp,611611+ link_cleanup);612612+ struct ntb_transport *nt = qp->transport;613613+614614+ ntb_qp_link_cleanup(qp);601615602616 if (nt->transport_link == NTB_LINK_UP)603617 schedule_delayed_work(&qp->link_work,···619613 schedule_work(&qp->link_cleanup);620614}621615622622-static void ntb_transport_link_cleanup(struct work_struct *work)616616+static void ntb_transport_link_cleanup(struct ntb_transport *nt)623617{624624- struct ntb_transport *nt = container_of(work, struct ntb_transport,625625- link_cleanup);626618 int i;619619+620620+ /* Pass along the info to any clients */621621+ for (i = 0; i < nt->max_qps; i++)622622+ if (!test_bit(i, &nt->qp_bitmap))623623+ ntb_qp_link_cleanup(&nt->qps[i]);627624628625 if (nt->transport_link == NTB_LINK_DOWN)629626 cancel_delayed_work_sync(&nt->link_work);630627 else631628 nt->transport_link = NTB_LINK_DOWN;632632-633633- /* Pass along the info to any clients */634634- for (i = 0; i < nt->max_qps; i++)635635- if (!test_bit(i, &nt->qp_bitmap))636636- ntb_qp_link_down(&nt->qps[i]);637629638630 /* The scratchpad registers keep 
the values if the remote side639631 * goes down, blast them now to give them a sane value the next···639635 */640636 for (i = 0; i < MAX_SPAD; i++)641637 ntb_write_local_spad(nt->ndev, i, 0);638638+}639639+640640+static void ntb_transport_link_cleanup_work(struct work_struct *work)641641+{642642+ struct ntb_transport *nt = container_of(work, struct ntb_transport,643643+ link_cleanup);644644+645645+ ntb_transport_link_cleanup(nt);642646}643647644648static void ntb_transport_event_callback(void *data, enum ntb_hw_event event)···892880 }893881894882 INIT_DELAYED_WORK(&qp->link_work, ntb_qp_link_work);895895- INIT_WORK(&qp->link_cleanup, ntb_qp_link_cleanup);883883+ INIT_WORK(&qp->link_cleanup, ntb_qp_link_cleanup_work);896884897885 spin_lock_init(&qp->ntb_rx_pend_q_lock);898886 spin_lock_init(&qp->ntb_rx_free_q_lock);···948936 }949937950938 INIT_DELAYED_WORK(&nt->link_work, ntb_transport_link_work);951951- INIT_WORK(&nt->link_cleanup, ntb_transport_link_cleanup);939939+ INIT_WORK(&nt->link_cleanup, ntb_transport_link_cleanup_work);952940953941 rc = ntb_register_event_callback(nt->ndev,954942 ntb_transport_event_callback);···984972 struct ntb_device *ndev = nt->ndev;985973 int i;986974987987- nt->transport_link = NTB_LINK_DOWN;975975+ ntb_transport_link_cleanup(nt);988976989977 /* verify that all the qp's are freed */990978 for (i = 0; i < nt->max_qps; i++) {···12001188 goto out;12011189}1202119012031203-static void ntb_transport_rx(unsigned long data)11911191+static int ntb_transport_rxc_db(void *data, int db_num)12041192{12051205- struct ntb_transport_qp *qp = (struct ntb_transport_qp *)data;11931193+ struct ntb_transport_qp *qp = data;12061194 int rc, i;11951195+11961196+ dev_dbg(&ntb_query_pdev(qp->ndev)->dev, "%s: doorbell %d received\n",11971197+ __func__, db_num);1207119812081199 /* Limit the number of packets processed in a single interrupt to12091200 * provide fairness to others···1219120412201205 if (qp->dma_chan)12211206 
dma_async_issue_pending(qp->dma_chan);12221222-}1223120712241224-static void ntb_transport_rxc_db(void *data, int db_num)12251225-{12261226- struct ntb_transport_qp *qp = data;12271227-12281228- dev_dbg(&ntb_query_pdev(qp->ndev)->dev, "%s: doorbell %d received\n",12291229- __func__, db_num);12301230-12311231- tasklet_schedule(&qp->rx_work);12081208+ return i;12321209}1233121012341211static void ntb_tx_copy_callback(void *data)···14391432 qp->tx_handler = handlers->tx_handler;14401433 qp->event_handler = handlers->event_handler;1441143414351435+ dmaengine_get();14421436 qp->dma_chan = dma_find_channel(DMA_MEMCPY);14431443- if (!qp->dma_chan)14371437+ if (!qp->dma_chan) {14381438+ dmaengine_put();14441439 dev_info(&pdev->dev, "Unable to allocate DMA channel, using CPU instead\n");14451445- else14461446- dmaengine_get();14401440+ }1447144114481442 for (i = 0; i < NTB_QP_DEF_NUM_ENTRIES; i++) {14491443 entry = kzalloc(sizeof(struct ntb_queue_entry), GFP_ATOMIC);···14661458 &qp->tx_free_q);14671459 }1468146014691469- tasklet_init(&qp->rx_work, ntb_transport_rx, (unsigned long) qp);14701470-14711461 rc = ntb_register_db_callback(qp->ndev, free_queue, qp,14721462 ntb_transport_rxc_db);14731463 if (rc)14741474- goto err3;14641464+ goto err2;1475146514761466 dev_info(&pdev->dev, "NTB Transport QP %d created\n", qp->qp_num);1477146714781468 return qp;1479146914801480-err3:14811481- tasklet_disable(&qp->rx_work);14821470err2:14831471 while ((entry = ntb_list_rm(&qp->ntb_tx_free_q_lock, &qp->tx_free_q)))14841472 kfree(entry);14851473err1:14861474 while ((entry = ntb_list_rm(&qp->ntb_rx_free_q_lock, &qp->rx_free_q)))14871475 kfree(entry);14761476+ if (qp->dma_chan)14771477+ dmaengine_put();14881478 set_bit(free_queue, &nt->qp_bitmap);14891479err:14901480 return NULL;···15211515 }1522151615231517 ntb_unregister_db_callback(qp->ndev, qp->qp_num);15241524- tasklet_disable(&qp->rx_work);1525151815261519 cancel_delayed_work_sync(&qp->link_work);15271520
-4
drivers/pci/quirks.c
···99 *1010 * Init/reset quirks for USB host controllers should be in the1111 * USB quirks file, where their drivers can access reuse it.1212- *1313- * The bridge optimization stuff has been removed. If you really1414- * have a silly BIOS which is unable to set your host bridge right,1515- * use the PowerTweak utility (see http://powertweak.sourceforge.net).1612 */17131814#include <linux/types.h>
···11+#22+# Platform support for Chrome OS hardware (Chromebooks and Chromeboxes)33+#44+55+menuconfig CHROME_PLATFORMS66+ bool "Platform support for Chrome hardware"77+ depends on X8688+ ---help---99+ Say Y here to get to see options for platform support for1010+ various Chromebooks and Chromeboxes. This option alone does1111+ not add any kernel code.1212+1313+ If you say N, all options in this submenu will be skipped and disabled.1414+1515+if CHROME_PLATFORMS1616+1717+config CHROMEOS_LAPTOP1818+ tristate "Chrome OS Laptop"1919+ depends on I2C2020+ depends on DMI2121+ ---help---2222+ This driver instantiates i2c and smbus devices such as2323+ light sensors and touchpads.2424+2525+ If you have a supported Chromebook, choose Y or M here.2626+ The module will be called chromeos_laptop.2727+2828+endif # CHROMEOS_PLATFORMS
···11+#22+# Platform support for Chrome OS hardware (Chromebooks and Chromeboxes)33+#44+55+menuconfig CHROME_PLATFORMS66+ bool "Platform support for Chrome hardware"77+ depends on X8688+ ---help---99+ Say Y here to get to see options for platform support for1010+ various Chromebooks and Chromeboxes. This option alone does1111+ not add any kernel code.1212+1313+ If you say N, all options in this submenu will be skipped and disabled.1414+1515+if CHROME_PLATFORMS1616+1717+config CHROMEOS_LAPTOP1818+ tristate "Chrome OS Laptop"1919+ depends on I2C2020+ depends on DMI2121+ ---help---2222+ This driver instantiates i2c and smbus devices such as2323+ light sensors and touchpads.2424+2525+ If you have a supported Chromebook, choose Y or M here.2626+ The module will be called chromeos_laptop.2727+2828+endif # CHROME_PLATFORMS
···79798080 If you have an ACPI-compatible ASUS laptop, say Y or M here.81818282-config CHROMEOS_LAPTOP8383- tristate "Chrome OS Laptop"8484- depends on I2C8585- depends on DMI8686- ---help---8787- This driver instantiates i2c and smbus devices such as8888- light sensors and touchpads.8989-9090- If you have a supported Chromebook, choose Y or M here.9191- The module will be called chromeos_laptop.9292-9382config DELL_LAPTOP9483 tristate "Dell Laptop Extras"9584 depends on X86
···578578 u8 **c_file, const u8 *endpoint, bool boot_case)579579{580580 long word_length;581581- int status;581581+ int status = 0;582582583583 /*DEBUG("FT1000:REQUEST_CODE_SEGMENT\n");i*/584584 word_length = get_request_value(ft1000dev);···1074107410751075 return status;10761076}10771077-
+2
drivers/staging/iio/magnetometer/Kconfig
···66config SENSORS_HMC584377 tristate "Honeywell HMC5843/5883/5883L 3-Axis Magnetometer"88 depends on I2C99+ select IIO_BUFFER1010+ select IIO_TRIGGERED_BUFFER911 help1012 Say Y here to add support for the Honeywell HMC5843, HMC5883 and1113 HMC5883L 3-Axis Magnetometer (digital compass).
···409409 struct l_wait_info lwi = { 0 };410410 int rc = 0;411411412412- if (!thread_is_init(&pinger_thread) &&413413- !thread_is_stopped(&pinger_thread))412412+ if (thread_is_init(&pinger_thread) ||413413+ thread_is_stopped(&pinger_thread))414414 return -EALREADY;415415416416 ptlrpc_pinger_remove_timeouts();
+15-13
drivers/staging/media/go7007/go7007-usb.c
···1515 * Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.1616 */17171818+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt1919+1820#include <linux/module.h>1921#include <linux/kernel.h>2022#include <linux/init.h>···663661664662 if (usb->board->flags & GO7007_USB_EZUSB) {665663 /* Reset buffer in EZ-USB */666666- dev_dbg(go->dev, "resetting EZ-USB buffers\n");664664+ pr_debug("resetting EZ-USB buffers\n");667665 if (go7007_usb_vendor_request(go, 0x10, 0, 0, NULL, 0, 0) < 0 ||668666 go7007_usb_vendor_request(go, 0x10, 0, 0, NULL, 0, 0) < 0)669667 return -1;···691689 u16 status_reg = 0;692690 int timeout = 500;693691694694- dev_dbg(go->dev, "WriteInterrupt: %04x %04x\n", addr, data);692692+ pr_debug("WriteInterrupt: %04x %04x\n", addr, data);695693696694 for (i = 0; i < 100; ++i) {697695 r = usb_control_msg(usb->usbdev,···736734 int r;737735 int timeout = 500;738736739739- dev_dbg(go->dev, "WriteInterrupt: %04x %04x\n", addr, data);737737+ pr_debug("WriteInterrupt: %04x %04x\n", addr, data);740738741739 go->usb_buf[0] = data & 0xff;742740 go->usb_buf[1] = data >> 8;···773771 go->interrupt_available = 1;774772 go->interrupt_data = __le16_to_cpu(regs[0]);775773 go->interrupt_value = __le16_to_cpu(regs[1]);776776- dev_dbg(go->dev, "ReadInterrupt: %04x %04x\n",774774+ pr_debug("ReadInterrupt: %04x %04x\n",777775 go->interrupt_value, go->interrupt_data);778776 }779777···893891 int transferred, pipe;894892 int timeout = 500;895893896896- dev_dbg(go->dev, "DownloadBuffer sending %d bytes\n", len);894894+ pr_debug("DownloadBuffer sending %d bytes\n", len);897895898896 if (usb->board->flags & GO7007_USB_EZUSB)899897 pipe = usb_sndbulkpipe(usb->usbdev, 2);···979977 !(msgs[i].flags & I2C_M_RD) &&980978 (msgs[i + 1].flags & I2C_M_RD)) {981979#ifdef GO7007_I2C_DEBUG982982- dev_dbg(go->dev, "i2c write/read %d/%d bytes on %02x\n",980980+ pr_debug("i2c write/read %d/%d bytes on %02x\n",983981 msgs[i].len, msgs[i + 1].len, msgs[i].addr);984982#endif985983 buf[0] = 
0x01;···990988 buf[buf_len++] = msgs[++i].len;991989 } else if (msgs[i].flags & I2C_M_RD) {992990#ifdef GO7007_I2C_DEBUG993993- dev_dbg(go->dev, "i2c read %d bytes on %02x\n",991991+ pr_debug("i2c read %d bytes on %02x\n",994992 msgs[i].len, msgs[i].addr);995993#endif996994 buf[0] = 0x01;···1000998 buf_len = 4;1001999 } else {10021000#ifdef GO7007_I2C_DEBUG10031003- dev_dbg(go->dev, "i2c write %d bytes on %02x\n",10011001+ pr_debug("i2c write %d bytes on %02x\n",10041002 msgs[i].len, msgs[i].addr);10051003#endif10061004 buf[0] = 0x00;···10591057 char *name;10601058 int video_pipe, i, v_urb_len;1061105910621062- dev_dbg(go->dev, "probing new GO7007 USB board\n");10601060+ pr_debug("probing new GO7007 USB board\n");1063106110641062 switch (id->driver_info) {10651063 case GO7007_BOARDID_MATRIX_II:···10991097 board = &board_px_tv402u;11001098 break;11011099 case GO7007_BOARDID_LIFEVIEW_LR192:11021102- dev_err(go->dev, "The Lifeview TV Walker Ultra is not supported. Sorry!\n");11001100+ dev_err(&intf->dev, "The Lifeview TV Walker Ultra is not supported. Sorry!\n");11031101 return -ENODEV;11041102 name = "Lifeview TV Walker Ultra";11051103 board = &board_lifeview_lr192;11061104 break;11071105 case GO7007_BOARDID_SENSORAY_2250:11081108- dev_info(go->dev, "Sensoray 2250 found\n");11061106+ dev_info(&intf->dev, "Sensoray 2250 found\n");11091107 name = "Sensoray 2250/2251";11101108 board = &board_sensoray_2250;11111109 break;···11141112 board = &board_ads_usbav_709;11151113 break;11161114 default:11171117- dev_err(go->dev, "unknown board ID %d!\n",11151115+ dev_err(&intf->dev, "unknown board ID %d!\n",11181116 (unsigned int)id->driver_info);11191117 return -ENODEV;11201118 }···12491247 sizeof(go->name));12501248 break;12511249 default:12521252- dev_dbg(go->dev, "unable to detect tuner type!\n");12501250+ pr_debug("unable to detect tuner type!\n");12531251 break;12541252 }12551253 /* Configure tuner mode selection inputs connected
+2-1
drivers/staging/nvec/nvec.c
···681681 dev_err(nvec->dev,682682 "RX buffer overflow on %p: "683683 "Trying to write byte %u of %u\n",684684- nvec->rx, nvec->rx->pos, NVEC_MSG_SIZE);684684+ nvec->rx, nvec->rx ? nvec->rx->pos : 0,685685+ NVEC_MSG_SIZE);685686 break;686687 default:687688 nvec->state = 0;
+3
drivers/staging/rtl8188eu/core/rtw_ap.c
···11151115 return _FAIL;11161116 }1117111711181118+ /* fix bug of flush_cam_entry at STOP AP mode */11191119+ psta->state |= WIFI_AP_STATE;11201120+ rtw_indicate_connect(padapter);11181121 pmlmepriv->cur_network.join_res = true;/* for check if already set beacon */11191122 return ret;11201123}
+1-1
drivers/staging/tidspbridge/Kconfig
···4455menuconfig TIDSPBRIDGE66 tristate "DSP Bridge driver"77- depends on ARCH_OMAP3 && !ARCH_MULTIPLATFORM77+ depends on ARCH_OMAP3 && !ARCH_MULTIPLATFORM && BROKEN88 select MAILBOX99 select OMAP2PLUS_MBOX1010 help
···652652 return -ENOMEM;653653654654 /* Do not reset an active device! */655655- if (bdev->bd_holders)656656- return -EBUSY;655655+ if (bdev->bd_holders) {656656+ ret = -EBUSY;657657+ goto out;658658+ }657659658660 ret = kstrtou16(buf, 10, &do_reset);659661 if (ret)660660- return ret;662662+ goto out;661663662662- if (!do_reset)663663- return -EINVAL;664664+ if (!do_reset) {665665+ ret = -EINVAL;666666+ goto out;667667+ }664668665669 /* Make sure all pending I/O is finished */666670 fsync_bdev(bdev);671671+ bdput(bdev);667672668673 zram_reset_device(zram, true);669674 return len;675675+676676+out:677677+ bdput(bdev);678678+ return ret;670679}671680672681static void __zram_make_request(struct zram *zram, struct bio *bio, int rw)
+13-4
drivers/staging/zsmalloc/zsmalloc-main.c
···430430 return next;431431}432432433433-/* Encode <page, obj_idx> as a single handle value */433433+/*434434+ * Encode <page, obj_idx> as a single handle value.435435+ * On hardware platforms with physical memory starting at 0x0 the pfn436436+ * could be 0 so we ensure that the handle will never be 0 by adjusting the437437+ * encoded obj_idx value before encoding.438438+ */434439static void *obj_location_to_handle(struct page *page, unsigned long obj_idx)435440{436441 unsigned long handle;···446441 }447442448443 handle = page_to_pfn(page) << OBJ_INDEX_BITS;449449- handle |= (obj_idx & OBJ_INDEX_MASK);444444+ handle |= ((obj_idx + 1) & OBJ_INDEX_MASK);450445451446 return (void *)handle;452447}453448454454-/* Decode <page, obj_idx> pair from the given object handle */449449+/*450450+ * Decode <page, obj_idx> pair from the given object handle. We adjust the451451+ * decoded obj_idx back to its original value since it was adjusted in452452+ * obj_location_to_handle().453453+ */455454static void obj_handle_to_location(unsigned long handle, struct page **page,456455 unsigned long *obj_idx)457456{458457 *page = pfn_to_page(handle >> OBJ_INDEX_BITS);459459- *obj_idx = handle & OBJ_INDEX_MASK;458458+ *obj_idx = (handle & OBJ_INDEX_MASK) - 1;460459}461460462461static unsigned long obj_idx_to_offset(struct page *page,
···768768 * data at the tail to prevent a subsequent overrun */769769 while (ldata->echo_commit - tail >= ECHO_DISCARD_WATERMARK) {770770 if (echo_buf(ldata, tail) == ECHO_OP_START) {771771- if (echo_buf(ldata, tail) == ECHO_OP_ERASE_TAB)771771+ if (echo_buf(ldata, tail + 1) == ECHO_OP_ERASE_TAB)772772 tail += 3;773773 else774774 tail += 2;···19981998 found = 1;1999199920002000 size = N_TTY_BUF_SIZE - tail;20012001- n = (found + eol + size) & (N_TTY_BUF_SIZE - 1);20012001+ n = eol - tail;20022002+ if (n > 4096)20032003+ n += 4096;20042004+ n += found;20022005 c = n;2003200620042007 if (found && read_buf(ldata, eol) == __DISABLED_CHAR) {···22462243 if (time)22472244 timeout = time;22482245 }22492249- mutex_unlock(&ldata->atomic_read_lock);22502250- remove_wait_queue(&tty->read_wait, &wait);22462246+ n_tty_set_room(tty);22472247+ up_read(&tty->termios_rwsem);2251224822492249+ remove_wait_queue(&tty->read_wait, &wait);22522250 if (!waitqueue_active(&tty->read_wait))22532251 ldata->minimum_to_wake = minimum;22522252+22532253+ mutex_unlock(&ldata->atomic_read_lock);2254225422552255 __set_current_state(TASK_RUNNING);22562256 if (b - buf)22572257 retval = b - buf;2258225822592259- n_tty_set_room(tty);22602260- up_read(&tty->termios_rwsem);22612259 return retval;22622260}22632261
+1-1
drivers/tty/serial/8250/Kconfig
···4141 accept kernel parameters in both forms like 8250_core.nr_uarts=4 and4242 8250.nr_uarts=4. We now renamed the module back to 8250, but if4343 anybody noticed in 3.7 and changed their userspace we still have to4444- keep the 8350_core.* options around until they revert the changes4444+ keep the 8250_core.* options around until they revert the changes4545 they already did.46464747 If 8250 is built as a module, this adds 8250_core alias instead.
+3
drivers/tty/serial/pmac_zilog.c
···20522052 /* Probe ports */20532053 pmz_probe();2054205420552055+ if (pmz_ports_count == 0)20562056+ return -ENODEV;20572057+20552058 /* TODO: Autoprobe console based on OF */20562059 /* pmz_console.index = i; */20572060 register_console(&pmz_console);
···9191Version 3.119292------------93939494-- Converted to use 2.3.x page cache [Dave Jones <dave@powertweak.com>]9494+- Converted to use 2.3.x page cache [Dave Jones]9595- Corruption in truncate() bugfix [Ken Tyler <kent@werple.net.au>]96969797Version 3.10
···897897 * caller should hold i_ceph_lock.898898 * caller will not hold session s_mutex if called from destroy_inode.899899 */900900-void __ceph_remove_cap(struct ceph_cap *cap)900900+void __ceph_remove_cap(struct ceph_cap *cap, bool queue_release)901901{902902 struct ceph_mds_session *session = cap->session;903903 struct ceph_inode_info *ci = cap->ci;···909909910910 /* remove from session list */911911 spin_lock(&session->s_cap_lock);912912+ /*913913+ * s_cap_reconnect is protected by s_cap_lock. no one changes914914+ * s_cap_gen while session is in the reconnect state.915915+ */916916+ if (queue_release &&917917+ (!session->s_cap_reconnect ||918918+ cap->cap_gen == session->s_cap_gen))919919+ __queue_cap_release(session, ci->i_vino.ino, cap->cap_id,920920+ cap->mseq, cap->issue_seq);921921+912922 if (session->s_cap_iterator == cap) {913923 /* not yet, we are iterating over this very cap */914924 dout("__ceph_remove_cap delaying %p removal from session %p\n",···10331023 struct ceph_mds_cap_release *head;10341024 struct ceph_mds_cap_item *item;1035102510361036- spin_lock(&session->s_cap_lock);10371026 BUG_ON(!session->s_num_cap_releases);10381027 msg = list_first_entry(&session->s_cap_releases,10391028 struct ceph_msg, list_head);···10611052 (int)CEPH_CAPS_PER_RELEASE,10621053 (int)msg->front.iov_len);10631054 }10641064- spin_unlock(&session->s_cap_lock);10651055}1066105610671057/*···10751067 p = rb_first(&ci->i_caps);10761068 while (p) {10771069 struct ceph_cap *cap = rb_entry(p, struct ceph_cap, ci_node);10781078- struct ceph_mds_session *session = cap->session;10791079-10801080- __queue_cap_release(session, ceph_ino(inode), cap->cap_id,10811081- cap->mseq, cap->issue_seq);10821070 p = rb_next(p);10831083- __ceph_remove_cap(cap);10711071+ __ceph_remove_cap(cap, true);10841072 }10851073}10861074···27952791 }27962792 spin_unlock(&mdsc->cap_dirty_lock);27972793 }27982798- __ceph_remove_cap(cap);27942794+ __ceph_remove_cap(cap, false);27992795 }28002796 /* else, we 
already released it */28012797···29352931 if (!inode) {29362932 dout(" i don't have ino %llx\n", vino.ino);2937293329382938- if (op == CEPH_CAP_OP_IMPORT)29342934+ if (op == CEPH_CAP_OP_IMPORT) {29352935+ spin_lock(&session->s_cap_lock);29392936 __queue_cap_release(session, vino.ino, cap_id,29402937 mseq, seq);29382938+ spin_unlock(&session->s_cap_lock);29392939+ }29412940 goto flush_cap_releases;29422941 }29432942
+10-1
fs/ceph/dir.c
···352352 }353353354354 /* note next offset and last dentry name */355355+ rinfo = &req->r_reply_info;356356+ if (le32_to_cpu(rinfo->dir_dir->frag) != frag) {357357+ frag = le32_to_cpu(rinfo->dir_dir->frag);358358+ if (ceph_frag_is_leftmost(frag))359359+ fi->next_offset = 2;360360+ else361361+ fi->next_offset = 0;362362+ off = fi->next_offset;363363+ }355364 fi->offset = fi->next_offset;356365 fi->last_readdir = req;366366+ fi->frag = frag;357367358368 if (req->r_reply_info.dir_end) {359369 kfree(fi->last_name);···373363 else374364 fi->next_offset = 0;375365 } else {376376- rinfo = &req->r_reply_info;377366 err = note_last_dentry(fi,378367 rinfo->dir_dname[rinfo->dir_nr-1],379368 rinfo->dir_dname_len[rinfo->dir_nr-1]);
+43-6
fs/ceph/inode.c
···577577 int issued = 0, implemented;578578 struct timespec mtime, atime, ctime;579579 u32 nsplits;580580+ struct ceph_inode_frag *frag;581581+ struct rb_node *rb_node;580582 struct ceph_buffer *xattr_blob = NULL;581583 int err = 0;582584 int queue_trunc = 0;···753751 /* FIXME: move me up, if/when version reflects fragtree changes */754752 nsplits = le32_to_cpu(info->fragtree.nsplits);755753 mutex_lock(&ci->i_fragtree_mutex);754754+ rb_node = rb_first(&ci->i_fragtree);756755 for (i = 0; i < nsplits; i++) {757756 u32 id = le32_to_cpu(info->fragtree.splits[i].frag);758758- struct ceph_inode_frag *frag = __get_or_create_frag(ci, id);759759-760760- if (IS_ERR(frag))761761- continue;757757+ frag = NULL;758758+ while (rb_node) {759759+ frag = rb_entry(rb_node, struct ceph_inode_frag, node);760760+ if (ceph_frag_compare(frag->frag, id) >= 0) {761761+ if (frag->frag != id)762762+ frag = NULL;763763+ else764764+ rb_node = rb_next(rb_node);765765+ break;766766+ }767767+ rb_node = rb_next(rb_node);768768+ rb_erase(&frag->node, &ci->i_fragtree);769769+ kfree(frag);770770+ frag = NULL;771771+ }772772+ if (!frag) {773773+ frag = __get_or_create_frag(ci, id);774774+ if (IS_ERR(frag))775775+ continue;776776+ }762777 frag->split_by = le32_to_cpu(info->fragtree.splits[i].by);763778 dout(" frag %x split by %d\n", frag->frag, frag->split_by);779779+ }780780+ while (rb_node) {781781+ frag = rb_entry(rb_node, struct ceph_inode_frag, node);782782+ rb_node = rb_next(rb_node);783783+ rb_erase(&frag->node, &ci->i_fragtree);784784+ kfree(frag);764785 }765786 mutex_unlock(&ci->i_fragtree_mutex);766787···12751250 int err = 0, i;12761251 struct inode *snapdir = NULL;12771252 struct ceph_mds_request_head *rhead = req->r_request->front.iov_base;12781278- u64 frag = le32_to_cpu(rhead->args.readdir.frag);12791253 struct ceph_dentry_info *di;12541254+ u64 r_readdir_offset = req->r_readdir_offset;12551255+ u32 frag = le32_to_cpu(rhead->args.readdir.frag);12561256+12571257+ if (rinfo->dir_dir 
&&12581258+ le32_to_cpu(rinfo->dir_dir->frag) != frag) {12591259+ dout("readdir_prepopulate got new frag %x -> %x\n",12601260+ frag, le32_to_cpu(rinfo->dir_dir->frag));12611261+ frag = le32_to_cpu(rinfo->dir_dir->frag);12621262+ if (ceph_frag_is_leftmost(frag))12631263+ r_readdir_offset = 2;12641264+ else12651265+ r_readdir_offset = 0;12661266+ }1280126712811268 if (req->r_aborted)12821269 return readdir_prepopulate_inodes_only(req, session);···13521315 }1353131613541317 di = dn->d_fsdata;13551355- di->offset = ceph_make_fpos(frag, i + req->r_readdir_offset);13181318+ di->offset = ceph_make_fpos(frag, i + r_readdir_offset);1356131913571320 /* inode */13581321 if (dn->d_inode) {
+45-16
fs/ceph/mds_client.c
···4343 */44444545struct ceph_reconnect_state {4646+ int nr_caps;4647 struct ceph_pagelist *pagelist;4748 bool flock;4849};···444443 INIT_LIST_HEAD(&s->s_waiting);445444 INIT_LIST_HEAD(&s->s_unsafe);446445 s->s_num_cap_releases = 0;446446+ s->s_cap_reconnect = 0;447447 s->s_cap_iterator = NULL;448448 INIT_LIST_HEAD(&s->s_cap_releases);449449 INIT_LIST_HEAD(&s->s_cap_releases_done);···643641 iput(req->r_unsafe_dir);644642 req->r_unsafe_dir = NULL;645643 }644644+645645+ complete_all(&req->r_safe_completion);646646647647 ceph_mdsc_put_request(req);648648}···990986 dout("removing cap %p, ci is %p, inode is %p\n",991987 cap, ci, &ci->vfs_inode);992988 spin_lock(&ci->i_ceph_lock);993993- __ceph_remove_cap(cap);989989+ __ceph_remove_cap(cap, false);994990 if (!__ceph_is_any_real_caps(ci)) {995991 struct ceph_mds_client *mdsc =996992 ceph_sb_to_client(inode->i_sb)->mdsc;···12351231 session->s_trim_caps--;12361232 if (oissued) {12371233 /* we aren't the only cap.. just remove us */12381238- __queue_cap_release(session, ceph_ino(inode), cap->cap_id,12391239- cap->mseq, cap->issue_seq);12401240- __ceph_remove_cap(cap);12341234+ __ceph_remove_cap(cap, true);12411235 } else {12421236 /* try to drop referring dentries */12431237 spin_unlock(&ci->i_ceph_lock);···14181416 unsigned num;1419141714201418 dout("discard_cap_releases mds%d\n", session->s_mds);14211421- spin_lock(&session->s_cap_lock);1422141914231420 /* zero out the in-progress message */14241421 msg = list_first_entry(&session->s_cap_releases,···14441443 msg->front.iov_len = sizeof(*head);14451444 list_add(&msg->list_head, &session->s_cap_releases);14461445 }14471447-14481448- spin_unlock(&session->s_cap_lock);14491446}1450144714511448/*···18741875 int mds = -1;18751876 int err = -EAGAIN;1876187718771877- if (req->r_err || req->r_got_result)18781878+ if (req->r_err || req->r_got_result) {18791879+ if (req->r_aborted)18801880+ __unregister_request(mdsc, req);18781881 goto out;18821882+ }1879188318801884 if 
(req->r_timeout &&18811885 time_after_eq(jiffies, req->r_started + req->r_timeout)) {···21882186 if (head->safe) {21892187 req->r_got_safe = true;21902188 __unregister_request(mdsc, req);21912191- complete_all(&req->r_safe_completion);2192218921932190 if (req->r_got_unsafe) {21942191 /*···22392238 err = ceph_fill_trace(mdsc->fsc->sb, req, req->r_session);22402239 if (err == 0) {22412240 if (result == 0 && (req->r_op == CEPH_MDS_OP_READDIR ||22422242- req->r_op == CEPH_MDS_OP_LSSNAP) &&22432243- rinfo->dir_nr)22412241+ req->r_op == CEPH_MDS_OP_LSSNAP))22442242 ceph_readdir_prepopulate(req, req->r_session);22452243 ceph_unreserve_caps(mdsc, &req->r_caps_reservation);22462244 }···24902490 cap->seq = 0; /* reset cap seq */24912491 cap->issue_seq = 0; /* and issue_seq */24922492 cap->mseq = 0; /* and migrate_seq */24932493+ cap->cap_gen = cap->session->s_cap_gen;2493249424942495 if (recon_state->flock) {24952496 rec.v2.cap_id = cpu_to_le64(cap->cap_id);···25532552 } else {25542553 err = ceph_pagelist_append(pagelist, &rec, reclen);25552554 }25552555+25562556+ recon_state->nr_caps++;25562557out_free:25572558 kfree(path);25582559out_dput:···25822579 struct rb_node *p;25832580 int mds = session->s_mds;25842581 int err = -ENOMEM;25822582+ int s_nr_caps;25852583 struct ceph_pagelist *pagelist;25862584 struct ceph_reconnect_state recon_state;25872585···26142610 dout("session %p state %s\n", session,26152611 session_state_name(session->s_state));2616261226132613+ spin_lock(&session->s_gen_ttl_lock);26142614+ session->s_cap_gen++;26152615+ spin_unlock(&session->s_gen_ttl_lock);26162616+26172617+ spin_lock(&session->s_cap_lock);26182618+ /*26192619+ * notify __ceph_remove_cap() that we are composing cap reconnect.26202620+ * If a cap get released before being added to the cap reconnect,26212621+ * __ceph_remove_cap() should skip queuing cap release.26222622+ */26232623+ session->s_cap_reconnect = 1;26172624 /* drop old cap expires; we're about to reestablish that state 
*/26182625 discard_cap_releases(mdsc, session);26262626+ spin_unlock(&session->s_cap_lock);2619262726202628 /* traverse this session's caps */26212621- err = ceph_pagelist_encode_32(pagelist, session->s_nr_caps);26292629+ s_nr_caps = session->s_nr_caps;26302630+ err = ceph_pagelist_encode_32(pagelist, s_nr_caps);26222631 if (err)26232632 goto fail;2624263326342634+ recon_state.nr_caps = 0;26252635 recon_state.pagelist = pagelist;26262636 recon_state.flock = session->s_con.peer_features & CEPH_FEATURE_FLOCK;26272637 err = iterate_session_caps(session, encode_caps_cb, &recon_state);26282638 if (err < 0)26292639 goto fail;26402640+26412641+ spin_lock(&session->s_cap_lock);26422642+ session->s_cap_reconnect = 0;26432643+ spin_unlock(&session->s_cap_lock);2630264426312645 /*26322646 * snaprealms. we provide mds with the ino, seq (version), and···2668264626692647 if (recon_state.flock)26702648 reply->hdr.version = cpu_to_le16(2);26712671- if (pagelist->length) {26722672- /* set up outbound data if we have any */26732673- reply->hdr.data_len = cpu_to_le32(pagelist->length);26742674- ceph_msg_data_add_pagelist(reply, pagelist);26492649+26502650+ /* raced with cap release? */26512651+ if (s_nr_caps != recon_state.nr_caps) {26522652+ struct page *page = list_first_entry(&pagelist->head,26532653+ struct page, lru);26542654+ __le32 *addr = kmap_atomic(page);26552655+ *addr = cpu_to_le32(recon_state.nr_caps);26562656+ kunmap_atomic(addr);26752657 }26582658+26592659+ reply->hdr.data_len = cpu_to_le32(pagelist->length);26602660+ ceph_msg_data_add_pagelist(reply, pagelist);26762661 ceph_con_send(&session->s_con, reply);2677266226782663 mutex_unlock(&session->s_mutex);
+1
fs/ceph/mds_client.h
···132132 struct list_head s_caps; /* all caps issued by this session */133133 int s_nr_caps, s_trim_caps;134134 int s_num_cap_releases;135135+ int s_cap_reconnect;135136 struct list_head s_cap_releases; /* waiting cap_release messages */136137 struct list_head s_cap_releases_done; /* ready to send */137138 struct ceph_cap *s_cap_iterator;
···2626#include <linux/mount.h>2727#include <linux/mm.h>2828#include <linux/pagemap.h>2929-#include <linux/btrfs.h>3029#include "cifspdu.h"3130#include "cifsglob.h"3231#include "cifsproto.h"3332#include "cifs_debug.h"3433#include "cifsfs.h"3434+3535+#define CIFS_IOCTL_MAGIC 0xCF3636+#define CIFS_IOC_COPYCHUNK_FILE _IOW(CIFS_IOCTL_MAGIC, 3, int)35373638static long cifs_ioctl_clone(unsigned int xid, struct file *dst_file,3739 unsigned long srcfd, u64 off, u64 len, u64 destoff)···215213 cifs_dbg(FYI, "set compress flag rc %d\n", rc);216214 }217215 break;218218- case BTRFS_IOC_CLONE:216216+ case CIFS_IOC_COPYCHUNK_FILE:219217 rc = cifs_ioctl_clone(xid, filep, arg, 0, 0, 0);220218 break;221219 default:
+84-11
fs/cifs/smb2ops.c
···532532 int rc;533533 unsigned int ret_data_len;534534 struct copychunk_ioctl *pcchunk;535535- char *retbuf = NULL;535535+ struct copychunk_ioctl_rsp *retbuf = NULL;536536+ struct cifs_tcon *tcon;537537+ int chunks_copied = 0;538538+ bool chunk_sizes_updated = false;536539537540 pcchunk = kmalloc(sizeof(struct copychunk_ioctl), GFP_KERNEL);538541···550547551548 /* Note: request_res_key sets res_key null only if rc !=0 */552549 if (rc)553553- return rc;550550+ goto cchunk_out;554551555552 /* For now array only one chunk long, will make more flexible later */556553 pcchunk->ChunkCount = __constant_cpu_to_le32(1);557554 pcchunk->Reserved = 0;558558- pcchunk->SourceOffset = cpu_to_le64(src_off);559559- pcchunk->TargetOffset = cpu_to_le64(dest_off);560560- pcchunk->Length = cpu_to_le32(len);561555 pcchunk->Reserved2 = 0;562556563563- /* Request that server copy to target from src file identified by key */564564- rc = SMB2_ioctl(xid, tlink_tcon(trgtfile->tlink),565565- trgtfile->fid.persistent_fid,557557+ tcon = tlink_tcon(trgtfile->tlink);558558+559559+ while (len > 0) {560560+ pcchunk->SourceOffset = cpu_to_le64(src_off);561561+ pcchunk->TargetOffset = cpu_to_le64(dest_off);562562+ pcchunk->Length =563563+ cpu_to_le32(min_t(u32, len, tcon->max_bytes_chunk));564564+565565+ /* Request server copy to target from src identified by key */566566+ rc = SMB2_ioctl(xid, tcon, trgtfile->fid.persistent_fid,566567 trgtfile->fid.volatile_fid, FSCTL_SRV_COPYCHUNK_WRITE,567568 true /* is_fsctl */, (char *)pcchunk,568568- sizeof(struct copychunk_ioctl), &retbuf, &ret_data_len);569569+ sizeof(struct copychunk_ioctl), (char **)&retbuf,570570+ &ret_data_len);571571+ if (rc == 0) {572572+ if (ret_data_len !=573573+ sizeof(struct copychunk_ioctl_rsp)) {574574+ cifs_dbg(VFS, "invalid cchunk response size\n");575575+ rc = -EIO;576576+ goto cchunk_out;577577+ }578578+ if (retbuf->TotalBytesWritten == 0) {579579+ cifs_dbg(FYI, "no bytes copied\n");580580+ rc = -EIO;581581+ goto 
cchunk_out;582582+ }583583+ /*584584+ * Check if server claimed to write more than we asked585585+ */586586+ if (le32_to_cpu(retbuf->TotalBytesWritten) >587587+ le32_to_cpu(pcchunk->Length)) {588588+ cifs_dbg(VFS, "invalid copy chunk response\n");589589+ rc = -EIO;590590+ goto cchunk_out;591591+ }592592+ if (le32_to_cpu(retbuf->ChunksWritten) != 1) {593593+ cifs_dbg(VFS, "invalid num chunks written\n");594594+ rc = -EIO;595595+ goto cchunk_out;596596+ }597597+ chunks_copied++;569598570570- /* BB need to special case rc = EINVAL to alter chunk size */599599+ src_off += le32_to_cpu(retbuf->TotalBytesWritten);600600+ dest_off += le32_to_cpu(retbuf->TotalBytesWritten);601601+ len -= le32_to_cpu(retbuf->TotalBytesWritten);571602572572- cifs_dbg(FYI, "rc %d data length out %d\n", rc, ret_data_len);603603+ cifs_dbg(FYI, "Chunks %d PartialChunk %d Total %d\n",604604+ le32_to_cpu(retbuf->ChunksWritten),605605+ le32_to_cpu(retbuf->ChunkBytesWritten),606606+ le32_to_cpu(retbuf->TotalBytesWritten));607607+ } else if (rc == -EINVAL) {608608+ if (ret_data_len != sizeof(struct copychunk_ioctl_rsp))609609+ goto cchunk_out;573610611611+ cifs_dbg(FYI, "MaxChunks %d BytesChunk %d MaxCopy %d\n",612612+ le32_to_cpu(retbuf->ChunksWritten),613613+ le32_to_cpu(retbuf->ChunkBytesWritten),614614+ le32_to_cpu(retbuf->TotalBytesWritten));615615+616616+ /*617617+ * Check if this is the first request using these sizes,618618+ * (ie check if copy succeed once with original sizes619619+ * and check if the server gave us different sizes after620620+ * we already updated max sizes on previous request).621621+ * if not then why is the server returning an error now622622+ */623623+ if ((chunks_copied != 0) || chunk_sizes_updated)624624+ goto cchunk_out;625625+626626+ /* Check that server is not asking us to grow size */627627+ if (le32_to_cpu(retbuf->ChunkBytesWritten) <628628+ tcon->max_bytes_chunk)629629+ tcon->max_bytes_chunk =630630+ le32_to_cpu(retbuf->ChunkBytesWritten);631631+ else632632+ goto 
cchunk_out; /* server gave us bogus size */633633+634634+ /* No need to change MaxChunks since already set to 1 */635635+ chunk_sizes_updated = true;636636+ }637637+ }638638+639639+cchunk_out:574640 kfree(pcchunk);575641 return rc;576642}···13191247 .create_lease_buf = smb3_create_lease_buf,13201248 .parse_lease_buf = smb3_parse_lease_buf,13211249 .clone_range = smb2_clone_range,12501250+ .validate_negotiate = smb3_validate_negotiate,13221251};1323125213241253struct smb_version_values smb20_values = {
+87-5
fs/cifs/smb2pdu.c
···454454 return rc;455455}456456457457+int smb3_validate_negotiate(const unsigned int xid, struct cifs_tcon *tcon)458458+{459459+ int rc = 0;460460+ struct validate_negotiate_info_req vneg_inbuf;461461+ struct validate_negotiate_info_rsp *pneg_rsp;462462+ u32 rsplen;463463+464464+ cifs_dbg(FYI, "validate negotiate\n");465465+466466+ /*467467+ * validation ioctl must be signed, so no point sending this if we468468+ * can not sign it. We could eventually change this to selectively469469+ * sign just this, the first and only signed request on a connection.470470+ * This is good enough for now since a user who wants better security471471+ * would also enable signing on the mount. Having validation of472472+ * negotiate info for signed connections helps reduce attack vectors473473+ */474474+ if (tcon->ses->server->sign == false)475475+ return 0; /* validation requires signing */476476+477477+ vneg_inbuf.Capabilities =478478+ cpu_to_le32(tcon->ses->server->vals->req_capabilities);479479+ memcpy(vneg_inbuf.Guid, cifs_client_guid, SMB2_CLIENT_GUID_SIZE);480480+481481+ if (tcon->ses->sign)482482+ vneg_inbuf.SecurityMode =483483+ cpu_to_le16(SMB2_NEGOTIATE_SIGNING_REQUIRED);484484+ else if (global_secflags & CIFSSEC_MAY_SIGN)485485+ vneg_inbuf.SecurityMode =486486+ cpu_to_le16(SMB2_NEGOTIATE_SIGNING_ENABLED);487487+ else488488+ vneg_inbuf.SecurityMode = 0;489489+490490+ vneg_inbuf.DialectCount = cpu_to_le16(1);491491+ vneg_inbuf.Dialects[0] =492492+ cpu_to_le16(tcon->ses->server->vals->protocol_id);493493+494494+ rc = SMB2_ioctl(xid, tcon, NO_FILE_ID, NO_FILE_ID,495495+ FSCTL_VALIDATE_NEGOTIATE_INFO, true /* is_fsctl */,496496+ (char *)&vneg_inbuf, sizeof(struct validate_negotiate_info_req),497497+ (char **)&pneg_rsp, &rsplen);498498+499499+ if (rc != 0) {500500+ cifs_dbg(VFS, "validate protocol negotiate failed: %d\n", rc);501501+ return -EIO;502502+ }503503+504504+ if (rsplen != sizeof(struct validate_negotiate_info_rsp)) {505505+ cifs_dbg(VFS, "invalid size of protocol 
negotiate response\n");506506+ return -EIO;507507+ }508508+509509+ /* check validate negotiate info response matches what we got earlier */510510+ if (pneg_rsp->Dialect !=511511+ cpu_to_le16(tcon->ses->server->vals->protocol_id))512512+ goto vneg_out;513513+514514+ if (pneg_rsp->SecurityMode != cpu_to_le16(tcon->ses->server->sec_mode))515515+ goto vneg_out;516516+517517+ /* do not validate server guid because not saved at negprot time yet */518518+519519+ if ((le32_to_cpu(pneg_rsp->Capabilities) | SMB2_NT_FIND |520520+ SMB2_LARGE_FILES) != tcon->ses->server->capabilities)521521+ goto vneg_out;522522+523523+ /* validate negotiate successful */524524+ cifs_dbg(FYI, "validate negotiate info successful\n");525525+ return 0;526526+527527+vneg_out:528528+ cifs_dbg(VFS, "protocol revalidation - security settings mismatch\n");529529+ return -EIO;530530+}531531+457532int458533SMB2_sess_setup(const unsigned int xid, struct cifs_ses *ses,459534 const struct nls_table *nls_cp)···904829 ((tcon->share_flags & SHI1005_FLAGS_DFS) == 0))905830 cifs_dbg(VFS, "DFS capability contradicts DFS flag\n");906831 init_copy_chunk_defaults(tcon);832832+ if (tcon->ses->server->ops->validate_negotiate)833833+ rc = tcon->ses->server->ops->validate_negotiate(xid, tcon);907834tcon_exit:908835 free_rsp_buf(resp_buftype, rsp);909836 kfree(unc_path);···12911214 rc = SendReceive2(xid, ses, iov, num_iovecs, &resp_buftype, 0);12921215 rsp = (struct smb2_ioctl_rsp *)iov[0].iov_base;1293121612941294- if (rc != 0) {12171217+ if ((rc != 0) && (rc != -EINVAL)) {12951218 if (tcon)12961219 cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE);12971220 goto ioctl_exit;12211221+ } else if (rc == -EINVAL) {12221222+ if ((opcode != FSCTL_SRV_COPYCHUNK_WRITE) &&12231223+ (opcode != FSCTL_SRV_COPYCHUNK)) {12241224+ if (tcon)12251225+ cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE);12261226+ goto ioctl_exit;12271227+ }12981228 }1299122913001230 /* check if caller wants to look at return data or just return rc */···22382154 rc = 
SendReceive2(xid, ses, iov, num, &resp_buftype, 0);22392155 rsp = (struct smb2_set_info_rsp *)iov[0].iov_base;2240215622412241- if (rc != 0) {21572157+ if (rc != 0)22422158 cifs_stats_fail_inc(tcon, SMB2_SET_INFO_HE);22432243- goto out;22442244- }22452245-out:21592159+22462160 free_rsp_buf(resp_buftype, rsp);22472161 kfree(iov);22482162 return rc;
+9-3
fs/cifs/smb2pdu.h
···
 	__le32 TotalBytesWritten;
 } __packed;
 
-/* Response and Request are the same format */
-struct validate_negotiate_info {
+struct validate_negotiate_info_req {
 	__le32 Capabilities;
 	__u8   Guid[SMB2_CLIENT_GUID_SIZE];
 	__le16 SecurityMode;
 	__le16 DialectCount;
-	__le16 Dialect[1];
+	__le16 Dialects[1]; /* dialect (someday maybe list) client asked for */
+} __packed;
+
+struct validate_negotiate_info_rsp {
+	__le32 Capabilities;
+	__u8   Guid[SMB2_CLIENT_GUID_SIZE];
+	__le16 SecurityMode;
+	__le16 Dialect; /* Dialect in use for the connection */
 } __packed;
 
 #define RSS_CAPABLE	0x00000001
+1
fs/cifs/smb2proto.h
···
 			  struct smb2_lock_element *buf);
 extern int SMB2_lease_break(const unsigned int xid, struct cifs_tcon *tcon,
 			    __u8 *lease_key, const __le32 lease_state);
+extern int smb3_validate_negotiate(const unsigned int, struct cifs_tcon *);
 
 #endif /* _SMB2PROTO_H */
···
 /*
  * Should the subsystem abort the loading of an ACPI table if the
  * table checksum is incorrect?
  */
+#ifndef ACPI_CHECKSUM_ABORT
 #define ACPI_CHECKSUM_ABORT             FALSE
+#endif
 
 /*
  * Generate a version of ACPICA that only supports "reduced hardware"
···
 
 /* Current ACPICA subsystem version in YYYYMMDD format */
 
-#define ACPI_CA_VERSION                 0x20130927
+#define ACPI_CA_VERSION                 0x20131115
 
 #include <acpi/acconfig.h>
 #include <acpi/actypes.h>
+14
include/asm-generic/simd.h
···
+
+#include <linux/hardirq.h>
+
+/*
+ * may_use_simd - whether it is allowable at this time to issue SIMD
+ *                instructions or access the SIMD register file
+ *
+ * As architectures typically don't preserve the SIMD register file when
+ * taking an interrupt, !in_interrupt() should be a reasonable default.
+ */
+static __must_check inline bool may_use_simd(void)
+{
+	return !in_interrupt();
+}
+17-1
include/crypto/algapi.h
···
 	return (type ^ CRYPTO_ALG_ASYNC) & mask & CRYPTO_ALG_ASYNC;
 }
 
-#endif	/* _CRYPTO_ALGAPI_H */
+noinline unsigned long __crypto_memneq(const void *a, const void *b, size_t size);
 
+/**
+ * crypto_memneq - Compare two areas of memory without leaking
+ *		   timing information.
+ *
+ * @a: One area of memory
+ * @b: Another area of memory
+ * @size: The size of the area.
+ *
+ * Returns 0 when data is equal, 1 otherwise.
+ */
+static inline int crypto_memneq(const void *a, const void *b, size_t size)
+{
+	return __crypto_memneq(a, b, size) != 0UL ? 1 : 0;
+}
+
+#endif	/* _CRYPTO_ALGAPI_H */
···388388/**389389 * kmalloc - allocate memory390390 * @size: how many bytes of memory are required.391391- * @flags: the type of memory to allocate (see kcalloc).391391+ * @flags: the type of memory to allocate.392392 *393393 * kmalloc is the normal method of allocating memory394394 * for objects smaller than page size in the kernel.395395+ *396396+ * The @flags argument may be one of:397397+ *398398+ * %GFP_USER - Allocate memory on behalf of user. May sleep.399399+ *400400+ * %GFP_KERNEL - Allocate normal kernel ram. May sleep.401401+ *402402+ * %GFP_ATOMIC - Allocation will not sleep. May use emergency pools.403403+ * For example, use this inside interrupt handlers.404404+ *405405+ * %GFP_HIGHUSER - Allocate pages from high memory.406406+ *407407+ * %GFP_NOIO - Do not do any I/O at all while trying to get memory.408408+ *409409+ * %GFP_NOFS - Do not make any fs calls while trying to get memory.410410+ *411411+ * %GFP_NOWAIT - Allocation will not sleep.412412+ *413413+ * %GFP_THISNODE - Allocate node-local memory only.414414+ *415415+ * %GFP_DMA - Allocation suitable for DMA.416416+ * Should only be used for kmalloc() caches. 
Otherwise, use a417417+ * slab created with SLAB_DMA.418418+ *419419+ * Also it is possible to set different flags by OR'ing420420+ * in one or more of the following additional @flags:421421+ *422422+ * %__GFP_COLD - Request cache-cold pages instead of423423+ * trying to return cache-warm pages.424424+ *425425+ * %__GFP_HIGH - This allocation has high priority and may use emergency pools.426426+ *427427+ * %__GFP_NOFAIL - Indicate that this allocation is in no way allowed to fail428428+ * (think twice before using).429429+ *430430+ * %__GFP_NORETRY - If memory is not immediately available,431431+ * then give up at once.432432+ *433433+ * %__GFP_NOWARN - If allocation fails, don't issue any warnings.434434+ *435435+ * %__GFP_REPEAT - If allocation fails initially, try once more before failing.436436+ *437437+ * There are other flags available as well, but these are not intended438438+ * for general use, and so are not documented here. For a full list of439439+ * potential flags, always refer to linux/gfp.h.395440 */396441static __always_inline void *kmalloc(size_t size, gfp_t flags)397442{···545500struct seq_file;546501int cache_show(struct kmem_cache *s, struct seq_file *m);547502void print_slabinfo_header(struct seq_file *m);548548-549549-/**550550- * kmalloc - allocate memory551551- * @size: how many bytes of memory are required.552552- * @flags: the type of memory to allocate.553553- *554554- * The @flags argument may be one of:555555- *556556- * %GFP_USER - Allocate memory on behalf of user. May sleep.557557- *558558- * %GFP_KERNEL - Allocate normal kernel ram. May sleep.559559- *560560- * %GFP_ATOMIC - Allocation will not sleep. 
May use emergency pools.561561- * For example, use this inside interrupt handlers.562562- *563563- * %GFP_HIGHUSER - Allocate pages from high memory.564564- *565565- * %GFP_NOIO - Do not do any I/O at all while trying to get memory.566566- *567567- * %GFP_NOFS - Do not make any fs calls while trying to get memory.568568- *569569- * %GFP_NOWAIT - Allocation will not sleep.570570- *571571- * %GFP_THISNODE - Allocate node-local memory only.572572- *573573- * %GFP_DMA - Allocation suitable for DMA.574574- * Should only be used for kmalloc() caches. Otherwise, use a575575- * slab created with SLAB_DMA.576576- *577577- * Also it is possible to set different flags by OR'ing578578- * in one or more of the following additional @flags:579579- *580580- * %__GFP_COLD - Request cache-cold pages instead of581581- * trying to return cache-warm pages.582582- *583583- * %__GFP_HIGH - This allocation has high priority and may use emergency pools.584584- *585585- * %__GFP_NOFAIL - Indicate that this allocation is in no way allowed to fail586586- * (think twice before using).587587- *588588- * %__GFP_NORETRY - If memory is not immediately available,589589- * then give up at once.590590- *591591- * %__GFP_NOWARN - If allocation fails, don't issue any warnings.592592- *593593- * %__GFP_REPEAT - If allocation fails initially, try once more before failing.594594- *595595- * There are other flags available as well, but these are not intended596596- * for general use, and so are not documented here. For a full list of597597- * potential flags, always refer to linux/gfp.h.598598- *599599- * kmalloc is the normal method of allocating memory600600- * in the kernel.601601- */602602-static __always_inline void *kmalloc(size_t size, gfp_t flags);603503604504/**605505 * kmalloc_array - allocate memory for an array.
+27
include/linux/tegra-powergate.h
···
 
 #define TEGRA_POWERGATE_3D0	TEGRA_POWERGATE_3D
 
+#ifdef CONFIG_ARCH_TEGRA
 int tegra_powergate_is_powered(int id);
 int tegra_powergate_power_on(int id);
 int tegra_powergate_power_off(int id);
···
 
 /* Must be called with clk disabled, and returns with clk enabled */
 int tegra_powergate_sequence_power_up(int id, struct clk *clk);
+#else
+static inline int tegra_powergate_is_powered(int id)
+{
+	return -ENOSYS;
+}
+
+static inline int tegra_powergate_power_on(int id)
+{
+	return -ENOSYS;
+}
+
+static inline int tegra_powergate_power_off(int id)
+{
+	return -ENOSYS;
+}
+
+static inline int tegra_powergate_remove_clamping(int id)
+{
+	return -ENOSYS;
+}
+
+static inline int tegra_powergate_sequence_power_up(int id, struct clk *clk)
+{
+	return -ENOSYS;
+}
+#endif
 
 #endif /* _MACH_TEGRA_POWERGATE_H_ */
···9090static DEFINE_MUTEX(cgroup_root_mutex);91919292/*9393+ * cgroup destruction makes heavy use of work items and there can be a lot9494+ * of concurrent destructions. Use a separate workqueue so that cgroup9595+ * destruction work items don't end up filling up max_active of system_wq9696+ * which may lead to deadlock.9797+ */9898+static struct workqueue_struct *cgroup_destroy_wq;9999+100100+/*93101 * Generate an array of cgroup subsystem pointers. At boot time, this is94102 * populated with the built in subsystems, and modular subsystems are95103 * registered after that. The mutable section of this array is protected by···199191static int cgroup_destroy_locked(struct cgroup *cgrp);200192static int cgroup_addrm_files(struct cgroup *cgrp, struct cftype cfts[],201193 bool is_add);194194+static int cgroup_file_release(struct inode *inode, struct file *file);202195203196/**204197 * cgroup_css - obtain a cgroup's css for the specified subsystem···880871 struct cgroup *cgrp = container_of(head, struct cgroup, rcu_head);881872882873 INIT_WORK(&cgrp->destroy_work, cgroup_free_fn);883883- schedule_work(&cgrp->destroy_work);874874+ queue_work(cgroup_destroy_wq, &cgrp->destroy_work);884875}885876886877static void cgroup_diput(struct dentry *dentry, struct inode *inode)···24302421 .read = seq_read,24312422 .write = cgroup_file_write,24322423 .llseek = seq_lseek,24332433- .release = single_release,24242424+ .release = cgroup_file_release,24342425};2435242624362427static int cgroup_file_open(struct inode *inode, struct file *file)···24912482 ret = cft->release(inode, file);24922483 if (css->ss)24932484 css_put(css);24852485+ if (file->f_op == &cgroup_seqfile_operations)24862486+ single_release(inode, file);24942487 return ret;24952488}24962489···42604249 * css_put(). 
dput() requires process context which we don't have.42614250 */42624251 INIT_WORK(&css->destroy_work, css_free_work_fn);42634263- schedule_work(&css->destroy_work);42524252+ queue_work(cgroup_destroy_wq, &css->destroy_work);42644253}4265425442664255static void css_release(struct percpu_ref *ref)···45504539 container_of(ref, struct cgroup_subsys_state, refcnt);4551454045524541 INIT_WORK(&css->destroy_work, css_killed_work_fn);45534553- schedule_work(&css->destroy_work);45424542+ queue_work(cgroup_destroy_wq, &css->destroy_work);45544543}4555454445564545/**···5073506250745063 return err;50755064}50655065+50665066+static int __init cgroup_wq_init(void)50675067+{50685068+ /*50695069+ * There isn't much point in executing destruction path in50705070+ * parallel. Good chunk is serialized with cgroup_mutex anyway.50715071+ * Use 1 for @max_active.50725072+ *50735073+ * We would prefer to do this in cgroup_init() above, but that50745074+ * is called before init_workqueues(): so leave this until after.50755075+ */50765076+ cgroup_destroy_wq = alloc_workqueue("cgroup_destroy", 0, 1);50775077+ BUG_ON(!cgroup_destroy_wq);50785078+ return 0;50795079+}50805080+core_initcall(cgroup_wq_init);5076508150775082/*50785083 * proc_cgroup_show()
+6-2
kernel/cpuset.c
···
 	need_loop = task_has_mempolicy(tsk) ||
 			!nodes_intersects(*newmems, tsk->mems_allowed);
 
-	if (need_loop)
+	if (need_loop) {
+		local_irq_disable();
 		write_seqcount_begin(&tsk->mems_allowed_seq);
+	}
 
 	nodes_or(tsk->mems_allowed, tsk->mems_allowed, *newmems);
 	mpol_rebind_task(tsk, newmems, MPOL_REBIND_STEP1);
···
 	mpol_rebind_task(tsk, newmems, MPOL_REBIND_STEP2);
 	tsk->mems_allowed = *newmems;
 
-	if (need_loop)
+	if (need_loop) {
 		write_seqcount_end(&tsk->mems_allowed_seq);
+		local_irq_enable();
+	}
 
 	task_unlock(tsk);
 }
+2-2
kernel/extable.c
···
 static inline int init_kernel_text(unsigned long addr)
 {
 	if (addr >= (unsigned long)_sinittext &&
-	    addr <= (unsigned long)_einittext)
+	    addr < (unsigned long)_einittext)
 		return 1;
 	return 0;
 }
···
 int core_kernel_text(unsigned long addr)
 {
 	if (addr >= (unsigned long)_stext &&
-	    addr <= (unsigned long)_etext)
+	    addr < (unsigned long)_etext)
 		return 1;
 
 	if (system_state == SYSTEM_BOOTING &&
+4-5
kernel/padata.c
···
 
 static int padata_cpu_hash(struct parallel_data *pd)
 {
+	unsigned int seq_nr;
 	int cpu_index;
 
 	/*
···
 	 * seq_nr mod. number of cpus in use.
 	 */
 
-	spin_lock(&pd->seq_lock);
-	cpu_index = pd->seq_nr % cpumask_weight(pd->cpumask.pcpu);
-	pd->seq_nr++;
-	spin_unlock(&pd->seq_lock);
+	seq_nr = atomic_inc_return(&pd->seq_nr);
+	cpu_index = seq_nr % cpumask_weight(pd->cpumask.pcpu);
 
 	return padata_index_to_cpu(pd, cpu_index);
 }
···
 	padata_init_pqueues(pd);
 	padata_init_squeues(pd);
 	setup_timer(&pd->timer, padata_reorder_timer, (unsigned long)pd);
-	pd->seq_nr = 0;
+	atomic_set(&pd->seq_nr, -1);
 	atomic_set(&pd->reorder_objects, 0);
 	atomic_set(&pd->refcnt, 0);
 	pd->pinst = pinst;
+35-29
kernel/trace/ftrace.c
···367367368368static int __register_ftrace_function(struct ftrace_ops *ops)369369{370370- if (unlikely(ftrace_disabled))371371- return -ENODEV;372372-373370 if (FTRACE_WARN_ON(ops == &global_ops))374371 return -EINVAL;375372···424427static int __unregister_ftrace_function(struct ftrace_ops *ops)425428{426429 int ret;427427-428428- if (ftrace_disabled)429429- return -ENODEV;430430431431 if (WARN_ON(!(ops->flags & FTRACE_OPS_FL_ENABLED)))432432 return -EBUSY;···20822088static int ftrace_startup(struct ftrace_ops *ops, int command)20832089{20842090 bool hash_enable = true;20912091+ int ret;2085209220862093 if (unlikely(ftrace_disabled))20872094 return -ENODEV;20952095+20962096+ ret = __register_ftrace_function(ops);20972097+ if (ret)20982098+ return ret;2088209920892100 ftrace_start_up++;20902101 command |= FTRACE_UPDATE_CALLS;···21122113 return 0;21132114}2114211521152115-static void ftrace_shutdown(struct ftrace_ops *ops, int command)21162116+static int ftrace_shutdown(struct ftrace_ops *ops, int command)21162117{21172118 bool hash_disable = true;21192119+ int ret;2118212021192121 if (unlikely(ftrace_disabled))21202120- return;21222122+ return -ENODEV;21232123+21242124+ ret = __unregister_ftrace_function(ops);21252125+ if (ret)21262126+ return ret;2121212721222128 ftrace_start_up--;21232129 /*···21572153 }2158215421592155 if (!command || !ftrace_enabled)21602160- return;21562156+ return 0;2161215721622158 ftrace_run_update_code(command);21592159+ return 0;21632160}2164216121652162static void ftrace_startup_sysctl(void)···30653060 if (i == FTRACE_FUNC_HASHSIZE)30663061 return;3067306230683068- ret = __register_ftrace_function(&trace_probe_ops);30693069- if (!ret)30703070- ret = ftrace_startup(&trace_probe_ops, 0);30633063+ ret = ftrace_startup(&trace_probe_ops, 0);3071306430723065 ftrace_probe_registered = 1;30733066}3074306730753068static void __disable_ftrace_function_probe(void)30763069{30773077- int ret;30783070 int i;3079307130803072 if 
(!ftrace_probe_registered)···30843082 }3085308330863084 /* no more funcs left */30873087- ret = __unregister_ftrace_function(&trace_probe_ops);30883088- if (!ret)30893089- ftrace_shutdown(&trace_probe_ops, 0);30853085+ ftrace_shutdown(&trace_probe_ops, 0);3090308630913087 ftrace_probe_registered = 0;30923088}···43664366static inline int ftrace_init_dyn_debugfs(struct dentry *d_tracer) { return 0; }43674367static inline void ftrace_startup_enable(int command) { }43684368/* Keep as macros so we do not need to define the commands */43694369-# define ftrace_startup(ops, command) \43704370- ({ \43714371- (ops)->flags |= FTRACE_OPS_FL_ENABLED; \43724372- 0; \43694369+# define ftrace_startup(ops, command) \43704370+ ({ \43714371+ int ___ret = __register_ftrace_function(ops); \43724372+ if (!___ret) \43734373+ (ops)->flags |= FTRACE_OPS_FL_ENABLED; \43744374+ ___ret; \43734375 })43744374-# define ftrace_shutdown(ops, command) do { } while (0)43764376+# define ftrace_shutdown(ops, command) __unregister_ftrace_function(ops)43774377+43754378# define ftrace_startup_sysctl() do { } while (0)43764379# define ftrace_shutdown_sysctl() do { } while (0)43774380···4783478047844781 mutex_lock(&ftrace_lock);4785478247864786- ret = __register_ftrace_function(ops);47874787- if (!ret)47884788- ret = ftrace_startup(ops, 0);47834783+ ret = ftrace_startup(ops, 0);4789478447904785 mutex_unlock(&ftrace_lock);47914786···48024801 int ret;4803480248044803 mutex_lock(&ftrace_lock);48054805- ret = __unregister_ftrace_function(ops);48064806- if (!ret)48074807- ftrace_shutdown(ops, 0);48044804+ ret = ftrace_shutdown(ops, 0);48084805 mutex_unlock(&ftrace_lock);4809480648104807 return ret;···49964997 return NOTIFY_DONE;49974998}4998499950005000+/* Just a place holder for function graph */50015001+static struct ftrace_ops fgraph_ops __read_mostly = {50025002+ .func = ftrace_stub,50035003+ .flags = FTRACE_OPS_FL_STUB | FTRACE_OPS_FL_GLOBAL |50045004+ 
FTRACE_OPS_FL_RECURSION_SAFE,50055005+};50065006+49995007int register_ftrace_graph(trace_func_graph_ret_t retfunc,50005008 trace_func_graph_ent_t entryfunc)50015009{···50295023 ftrace_graph_return = retfunc;50305024 ftrace_graph_entry = entryfunc;5031502550325032- ret = ftrace_startup(&global_ops, FTRACE_START_FUNC_RET);50265026+ ret = ftrace_startup(&fgraph_ops, FTRACE_START_FUNC_RET);5033502750345028out:50355029 mutex_unlock(&ftrace_lock);···50465040 ftrace_graph_active--;50475041 ftrace_graph_return = (trace_func_graph_ret_t)ftrace_stub;50485042 ftrace_graph_entry = ftrace_graph_entry_stub;50495049- ftrace_shutdown(&global_ops, FTRACE_STOP_FUNC_RET);50435043+ ftrace_shutdown(&fgraph_ops, FTRACE_STOP_FUNC_RET);50505044 unregister_pm_notifier(&ftrace_suspend_notifier);50515045 unregister_trace_sched_switch(ftrace_graph_probe_sched_switch, NULL);50525046
+37-13
kernel/workqueue.c
···305305/* I: attributes used when instantiating standard unbound pools on demand */306306static struct workqueue_attrs *unbound_std_wq_attrs[NR_STD_WORKER_POOLS];307307308308+/* I: attributes used when instantiating ordered pools on demand */309309+static struct workqueue_attrs *ordered_wq_attrs[NR_STD_WORKER_POOLS];310310+308311struct workqueue_struct *system_wq __read_mostly;309312EXPORT_SYMBOL(system_wq);310313struct workqueue_struct *system_highpri_wq __read_mostly;···521518static inline void debug_work_deactivate(struct work_struct *work) { }522519#endif523520524524-/* allocate ID and assign it to @pool */521521+/**522522+ * worker_pool_assign_id - allocate ID and assing it to @pool523523+ * @pool: the pool pointer of interest524524+ *525525+ * Returns 0 if ID in [0, WORK_OFFQ_POOL_NONE) is allocated and assigned526526+ * successfully, -errno on failure.527527+ */525528static int worker_pool_assign_id(struct worker_pool *pool)526529{527530 int ret;528531529532 lockdep_assert_held(&wq_pool_mutex);530533531531- ret = idr_alloc(&worker_pool_idr, pool, 0, 0, GFP_KERNEL);534534+ ret = idr_alloc(&worker_pool_idr, pool, 0, WORK_OFFQ_POOL_NONE,535535+ GFP_KERNEL);532536 if (ret >= 0) {533537 pool->id = ret;534538 return 0;···1330132013311321 debug_work_activate(work);1332132213331333- /* if dying, only works from the same workqueue are allowed */13231323+ /* if draining, only works from the same workqueue are allowed */13341324 if (unlikely(wq->flags & __WQ_DRAINING) &&13351325 WARN_ON_ONCE(!is_chained_work(wq)))13361326 return;···17461736 if (IS_ERR(worker->task))17471737 goto fail;1748173817391739+ set_user_nice(worker->task, pool->attrs->nice);17401740+17411741+ /* prevent userland from meddling with cpumask of workqueue workers */17421742+ worker->task->flags |= PF_NO_SETAFFINITY;17431743+17491744 /*17501745 * set_cpus_allowed_ptr() will fail if the cpumask doesn't have any17511746 * online CPUs. 
It'll be re-applied when any of the CPUs come up.17521747 */17531753- set_user_nice(worker->task, pool->attrs->nice);17541748 set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);17551755-17561756- /* prevent userland from meddling with cpumask of workqueue workers */17571757- worker->task->flags |= PF_NO_SETAFFINITY;1758174917591750 /*17601751 * The caller is responsible for ensuring %POOL_DISASSOCIATED···41174106static int alloc_and_link_pwqs(struct workqueue_struct *wq)41184107{41194108 bool highpri = wq->flags & WQ_HIGHPRI;41204120- int cpu;41094109+ int cpu, ret;4121411041224111 if (!(wq->flags & WQ_UNBOUND)) {41234112 wq->cpu_pwqs = alloc_percpu(struct pool_workqueue);···41374126 mutex_unlock(&wq->mutex);41384127 }41394128 return 0;41294129+ } else if (wq->flags & __WQ_ORDERED) {41304130+ ret = apply_workqueue_attrs(wq, ordered_wq_attrs[highpri]);41314131+ /* there should only be single pwq for ordering guarantee */41324132+ WARN(!ret && (wq->pwqs.next != &wq->dfl_pwq->pwqs_node ||41334133+ wq->pwqs.prev != &wq->dfl_pwq->pwqs_node),41344134+ "ordering guarantee broken for workqueue %s\n", wq->name);41354135+ return ret;41404136 } else {41414137 return apply_workqueue_attrs(wq, unbound_std_wq_attrs[highpri]);41424138 }···50275009 int std_nice[NR_STD_WORKER_POOLS] = { 0, HIGHPRI_NICE_LEVEL };50285010 int i, cpu;5029501150305030- /* make sure we have enough bits for OFFQ pool ID */50315031- BUILD_BUG_ON((1LU << (BITS_PER_LONG - WORK_OFFQ_POOL_SHIFT)) <50325032- WORK_CPU_END * NR_STD_WORKER_POOLS);50335033-50345012 WARN_ON(__alignof__(struct pool_workqueue) < __alignof__(long long));5035501350365014 pwq_cache = KMEM_CACHE(pool_workqueue, SLAB_PANIC);···50655051 }50665052 }5067505350685068- /* create default unbound wq attrs */50545054+ /* create default unbound and ordered wq attrs */50695055 for (i = 0; i < NR_STD_WORKER_POOLS; i++) {50705056 struct workqueue_attrs *attrs;5071505750725058 BUG_ON(!(attrs = alloc_workqueue_attrs(GFP_KERNEL)));50735059 
attrs->nice = std_nice[i];50745060 unbound_std_wq_attrs[i] = attrs;50615061+50625062+ /*50635063+ * An ordered wq should have only one pwq as ordering is50645064+ * guaranteed by max_active which is enforced by pwqs.50655065+ * Turn off NUMA so that dfl_pwq is used for all nodes.50665066+ */50675067+ BUG_ON(!(attrs = alloc_workqueue_attrs(GFP_KERNEL)));50685068+ attrs->nice = std_nice[i];50695069+ attrs->no_numa = true;50705070+ ordered_wq_attrs[i] = attrs;50755071 }5076507250775073 system_wq = alloc_workqueue("events", 0, 0);
+1-8
lib/lockref.c
···
 #include <linux/export.h>
 #include <linux/lockref.h>
+#include <linux/mutex.h>
 
 #if USE_CMPXCHG_LOCKREF
 
···
  */
 #ifndef cmpxchg64_relaxed
 # define cmpxchg64_relaxed cmpxchg64
-#endif
-
-/*
- * Allow architectures to override the default cpu_relax() within CMPXCHG_LOOP.
- * This is useful for architectures with an expensive cpu_relax().
- */
-#ifndef arch_mutex_cpu_relax
-# define arch_mutex_cpu_relax() cpu_relax()
 #endif
 
 /*
···123123 For more information on integrity appraisal refer to:124124 <http://linux-ima.sourceforge.net>125125 If unsure, say N.126126-127127-config IMA_TRUSTED_KEYRING128128- bool "Require all keys on the _ima keyring be signed"129129- depends on IMA_APPRAISE && SYSTEM_TRUSTED_KEYRING130130- default y131131- help132132- This option requires that all keys added to the _ima133133- keyring be signed by a key on the system trusted keyring.
···9494 /* this function uses default algo */9595 hash.hdr.algo = HASH_ALGO_SHA1;9696 result = ima_calc_field_array_hash(&entry->template_data[0],9797+ entry->template_desc,9798 num_fields, &hash.hdr);9899 if (result < 0) {99100 integrity_audit_msg(AUDIT_INTEGRITY_PCR, inode,
-11
security/integrity/ima/ima_appraise.c
···381381 }382382 return result;383383}384384-385385-#ifdef CONFIG_IMA_TRUSTED_KEYRING386386-static int __init init_ima_keyring(void)387387-{388388- int ret;389389-390390- ret = integrity_init_keyring(INTEGRITY_KEYRING_IMA);391391- return 0;392392-}393393-late_initcall(init_ima_keyring);394394-#endif
+12-5
security/integrity/ima/ima_crypto.c
···140140 * Calculate the hash of template data141141 */142142static int ima_calc_field_array_hash_tfm(struct ima_field_data *field_data,143143+ struct ima_template_desc *td,143144 int num_fields,144145 struct ima_digest_data *hash,145146 struct crypto_shash *tfm)···161160 return rc;162161163162 for (i = 0; i < num_fields; i++) {164164- rc = crypto_shash_update(&desc.shash,165165- (const u8 *) &field_data[i].len,166166- sizeof(field_data[i].len));163163+ if (strcmp(td->name, IMA_TEMPLATE_IMA_NAME) != 0) {164164+ rc = crypto_shash_update(&desc.shash,165165+ (const u8 *) &field_data[i].len,166166+ sizeof(field_data[i].len));167167+ if (rc)168168+ break;169169+ }167170 rc = crypto_shash_update(&desc.shash, field_data[i].data,168171 field_data[i].len);169172 if (rc)···180175 return rc;181176}182177183183-int ima_calc_field_array_hash(struct ima_field_data *field_data, int num_fields,178178+int ima_calc_field_array_hash(struct ima_field_data *field_data,179179+ struct ima_template_desc *desc, int num_fields,184180 struct ima_digest_data *hash)185181{186182 struct crypto_shash *tfm;···191185 if (IS_ERR(tfm))192186 return PTR_ERR(tfm);193187194194- rc = ima_calc_field_array_hash_tfm(field_data, num_fields, hash, tfm);188188+ rc = ima_calc_field_array_hash_tfm(field_data, desc, num_fields,189189+ hash, tfm);195190196191 ima_free_tfm(tfm);197192
+11-3
security/integrity/ima/ima_fs.c
···120120 struct ima_template_entry *e;121121 int namelen;122122 u32 pcr = CONFIG_IMA_MEASURE_PCR_IDX;123123+ bool is_ima_template = false;123124 int i;124125125126 /* get entry */···146145 ima_putc(m, e->template_desc->name, namelen);147146148147 /* 5th: template length (except for 'ima' template) */149149- if (strcmp(e->template_desc->name, IMA_TEMPLATE_IMA_NAME) != 0)148148+ if (strcmp(e->template_desc->name, IMA_TEMPLATE_IMA_NAME) == 0)149149+ is_ima_template = true;150150+151151+ if (!is_ima_template)150152 ima_putc(m, &e->template_data_len,151153 sizeof(e->template_data_len));152154153155 /* 6th: template specific data */154156 for (i = 0; i < e->template_desc->num_fields; i++) {155155- e->template_desc->fields[i]->field_show(m, IMA_SHOW_BINARY,156156- &e->template_data[i]);157157+ enum ima_show_type show = IMA_SHOW_BINARY;158158+ struct ima_template_field *field = e->template_desc->fields[i];159159+160160+ if (is_ima_template && strcmp(field->field_id, "d") == 0)161161+ show = IMA_SHOW_BINARY_NO_FIELD_LEN;162162+ field->field_show(m, show, &e->template_data[i]);157163 }158164 return 0;159165}
+14-7
security/integrity/ima/ima_template.c
···9090 return NULL;9191}92929393-static int template_fmt_size(char *template_fmt)9393+static int template_fmt_size(const char *template_fmt)9494{9595 char c;9696 int template_fmt_len = strlen(template_fmt);···106106 return j + 1;107107}108108109109-static int template_desc_init_fields(char *template_fmt,109109+static int template_desc_init_fields(const char *template_fmt,110110 struct ima_template_field ***fields,111111 int *num_fields)112112{113113- char *c, *template_fmt_ptr = template_fmt;113113+ char *c, *template_fmt_copy;114114 int template_num_fields = template_fmt_size(template_fmt);115115 int i, result = 0;116116117117 if (template_num_fields > IMA_TEMPLATE_NUM_FIELDS_MAX)118118 return -EINVAL;119119120120+ /* copying is needed as strsep() modifies the original buffer */121121+ template_fmt_copy = kstrdup(template_fmt, GFP_KERNEL);122122+ if (template_fmt_copy == NULL)123123+ return -ENOMEM;124124+120125 *fields = kzalloc(template_num_fields * sizeof(*fields), GFP_KERNEL);121126 if (*fields == NULL) {122127 result = -ENOMEM;123128 goto out;124129 }125125- for (i = 0; (c = strsep(&template_fmt_ptr, "|")) != NULL &&130130+ for (i = 0; (c = strsep(&template_fmt_copy, "|")) != NULL &&126131 i < template_num_fields; i++) {127132 struct ima_template_field *f = lookup_template_field(c);128133···138133 (*fields)[i] = f;139134 }140135 *num_fields = i;141141- return 0;142136out:143143- kfree(*fields);144144- *fields = NULL;137137+ if (result < 0) {138138+ kfree(*fields);139139+ *fields = NULL;140140+ }141141+ kfree(template_fmt_copy);145142 return result;146143}147144
+5-1
security/integrity/ima/ima_template_lib.c
···109109 enum data_formats datafmt,110110 struct ima_field_data *field_data)111111{112112- ima_putc(m, &field_data->len, sizeof(u32));112112+ if (show != IMA_SHOW_BINARY_NO_FIELD_LEN)113113+ ima_putc(m, &field_data->len, sizeof(u32));114114+113115 if (!field_data->len)114116 return;117117+115118 ima_putc(m, field_data->data, field_data->len);116119}117120···128125 ima_show_template_data_ascii(m, show, datafmt, field_data);129126 break;130127 case IMA_SHOW_BINARY:128128+ case IMA_SHOW_BINARY_NO_FIELD_LEN:131129 ima_show_template_data_binary(m, show, datafmt, field_data);132130 break;133131 default:
-7
security/integrity/integrity.h
···137137#ifdef CONFIG_INTEGRITY_ASYMMETRIC_KEYS138138int asymmetric_verify(struct key *keyring, const char *sig,139139 int siglen, const char *data, int datalen);140140-141141-int integrity_init_keyring(const unsigned int id);142140#else143141static inline int asymmetric_verify(struct key *keyring, const char *sig,144142 int siglen, const char *data, int datalen)145143{146144 return -EOPNOTSUPP;147147-}148148-149149-static int integrity_init_keyring(const unsigned int id)150150-{151151- return 0;152145}153146#endif154147
···698698 unsigned int in_reset:1; /* during reset operation */699699 unsigned int power_keep_link_on:1; /* don't power off HDA link */700700 unsigned int no_response_fallback:1; /* don't fallback at RIRB error */701701- unsigned int avoid_link_reset:1; /* don't reset link at runtime PM */702701703702 int primary_dig_out_type; /* primary digital out PCM type */704703};
+58-21
sound/pci/hda/hda_generic.c
···2506250625072507 for (i = 0; i < num_pins; i++) {25082508 hda_nid_t pin = pins[i];25092509- if (pin == spec->hp_mic_pin) {25102510- int ret = create_hp_mic_jack_mode(codec, pin);25112511- if (ret < 0)25122512- return ret;25092509+ if (pin == spec->hp_mic_pin)25132510 continue;25142514- }25152511 if (get_out_jack_num_items(codec, pin) > 1) {25162512 struct snd_kcontrol_new *knew;25172513 char name[SNDRV_CTL_ELEM_ID_NAME_MAXLEN];···27602764 val &= ~(AC_PINCTL_VREFEN | PIN_HP);27612765 val |= get_vref_idx(vref_caps, idx) | PIN_IN;27622766 } else27632763- val = snd_hda_get_default_vref(codec, nid);27672767+ val = snd_hda_get_default_vref(codec, nid) | PIN_IN;27642768 }27652769 snd_hda_set_pin_ctl_cache(codec, nid, val);27662770 call_hp_automute(codec, NULL);···27802784 struct hda_gen_spec *spec = codec->spec;27812785 struct snd_kcontrol_new *knew;2782278627832783- if (get_out_jack_num_items(codec, pin) <= 1 &&27842784- get_in_jack_num_items(codec, pin) <= 1)27852785- return 0; /* no need */27862787 knew = snd_hda_gen_add_kctl(spec, "Headphone Mic Jack Mode",27872788 &hp_mic_jack_mode_enum);27882789 if (!knew)···28082815 return 0;28092816}2810281728182818+/* return true if either a volume or a mute amp is found for the given28192819+ * aamix path; the amp has to be either in the mixer node or its direct leaf28202820+ */28212821+static bool look_for_mix_leaf_ctls(struct hda_codec *codec, hda_nid_t mix_nid,28222822+ hda_nid_t pin, unsigned int *mix_val,28232823+ unsigned int *mute_val)28242824+{28252825+ int idx, num_conns;28262826+ const hda_nid_t *list;28272827+ hda_nid_t nid;28282828+28292829+ idx = snd_hda_get_conn_index(codec, mix_nid, pin, true);28302830+ if (idx < 0)28312831+ return false;28322832+28332833+ *mix_val = *mute_val = 0;28342834+ if (nid_has_volume(codec, mix_nid, HDA_INPUT))28352835+ *mix_val = HDA_COMPOSE_AMP_VAL(mix_nid, 3, idx, HDA_INPUT);28362836+ if (nid_has_mute(codec, mix_nid, HDA_INPUT))28372837+ *mute_val = HDA_COMPOSE_AMP_VAL(mix_nid, 3, 
idx, HDA_INPUT);28382838+ if (*mix_val && *mute_val)28392839+ return true;28402840+28412841+ /* check leaf node */28422842+ num_conns = snd_hda_get_conn_list(codec, mix_nid, &list);28432843+ if (num_conns < idx)28442844+ return false;28452845+ nid = list[idx];28462846+ if (!*mix_val && nid_has_volume(codec, nid, HDA_OUTPUT))28472847+ *mix_val = HDA_COMPOSE_AMP_VAL(nid, 3, 0, HDA_OUTPUT);28482848+ if (!*mute_val && nid_has_mute(codec, nid, HDA_OUTPUT))28492849+ *mute_val = HDA_COMPOSE_AMP_VAL(nid, 3, 0, HDA_OUTPUT);28502850+28512851+ return *mix_val || *mute_val;28522852+}28532853+28112854/* create input playback/capture controls for the given pin */28122855static int new_analog_input(struct hda_codec *codec, int input_idx,28132856 hda_nid_t pin, const char *ctlname, int ctlidx,···28512822{28522823 struct hda_gen_spec *spec = codec->spec;28532824 struct nid_path *path;28542854- unsigned int val;28252825+ unsigned int mix_val, mute_val;28552826 int err, idx;2856282728572857- if (!nid_has_volume(codec, mix_nid, HDA_INPUT) &&28582858- !nid_has_mute(codec, mix_nid, HDA_INPUT))28592859- return 0; /* no need for analog loopback */28282828+ if (!look_for_mix_leaf_ctls(codec, mix_nid, pin, &mix_val, &mute_val))28292829+ return 0;2860283028612831 path = snd_hda_add_new_path(codec, pin, mix_nid, 0);28622832 if (!path)···28642836 spec->loopback_paths[input_idx] = snd_hda_get_path_idx(codec, path);2865283728662838 idx = path->idx[path->depth - 1];28672867- if (nid_has_volume(codec, mix_nid, HDA_INPUT)) {28682868- val = HDA_COMPOSE_AMP_VAL(mix_nid, 3, idx, HDA_INPUT);28692869- err = __add_pb_vol_ctrl(spec, HDA_CTL_WIDGET_VOL, ctlname, ctlidx, val);28392839+ if (mix_val) {28402840+ err = __add_pb_vol_ctrl(spec, HDA_CTL_WIDGET_VOL, ctlname, ctlidx, mix_val);28702841 if (err < 0)28712842 return err;28722872- path->ctls[NID_PATH_VOL_CTL] = val;28432843+ path->ctls[NID_PATH_VOL_CTL] = mix_val;28732844 }2874284528752875- if (nid_has_mute(codec, mix_nid, HDA_INPUT)) {28762876- val = 
HDA_COMPOSE_AMP_VAL(mix_nid, 3, idx, HDA_INPUT);28772877- err = __add_pb_sw_ctrl(spec, HDA_CTL_WIDGET_MUTE, ctlname, ctlidx, val);28462846+ if (mute_val) {28472847+ err = __add_pb_sw_ctrl(spec, HDA_CTL_WIDGET_MUTE, ctlname, ctlidx, mute_val);28782848 if (err < 0)28792849 return err;28802880- path->ctls[NID_PATH_MUTE_CTL] = val;28502850+ path->ctls[NID_PATH_MUTE_CTL] = mute_val;28812851 }2882285228832853 path->active = true;···44084382 err = parse_mic_boost(codec);44094383 if (err < 0)44104384 return err;43854385+43864386+ /* create "Headphone Mic Jack Mode" if no input selection is43874387+ * available (or user specifies add_jack_modes hint)43884388+ */43894389+ if (spec->hp_mic_pin &&43904390+ (spec->auto_mic || spec->input_mux.num_items == 1 ||43914391+ spec->add_jack_modes)) {43924392+ err = create_hp_mic_jack_mode(codec, spec->hp_mic_pin);43934393+ if (err < 0)43944394+ return err;43954395+ }4411439644124397 if (spec->add_jack_modes) {44134398 if (cfg->line_out_type != AUTO_PIN_SPEAKER_OUT) {
+1-2
sound/pci/hda/hda_intel.c
···29942994 STATESTS_INT_MASK);2995299529962996 azx_stop_chip(chip);29972997- if (!chip->bus->avoid_link_reset)29982998- azx_enter_link_reset(chip);29972997+ azx_enter_link_reset(chip);29992998 azx_clear_irq_pending(chip);30002999 if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL)30013000 hda_display_power(false);
···2094209420952095 if (action == HDA_FIXUP_ACT_PRE_PROBE) {20962096 spec->mic_mute_led_gpio = 0x08; /* GPIO3 */20972097- codec->bus->avoid_link_reset = 1;20972097+ /* resetting controller clears GPIO, so we need to keep on */20982098+ codec->bus->power_keep_link_on = 1;20982099 }20992100}21002101
+15-1
sound/usb/endpoint.c
···636636 if (usb_pipein(ep->pipe) ||637637 snd_usb_endpoint_implicit_feedback_sink(ep)) {638638639639+ urb_packs = packs_per_ms;640640+ /*641641+ * Wireless devices can poll at a max rate of once per 4ms.642642+ * For dataintervals less than 5, increase the packet count to643643+ * allow the host controller to use bursting to fill in the644644+ * gaps.645645+ */646646+ if (snd_usb_get_speed(ep->chip->dev) == USB_SPEED_WIRELESS) {647647+ int interval = ep->datainterval;648648+ while (interval < 5) {649649+ urb_packs <<= 1;650650+ ++interval;651651+ }652652+ }639653 /* make capture URBs <= 1 ms and smaller than a period */640640- urb_packs = min(max_packs_per_urb, packs_per_ms);654654+ urb_packs = min(max_packs_per_urb, urb_packs);641655 while (urb_packs > 1 && urb_packs * maxsize >= period_bytes)642656 urb_packs >>= 1;643657 ep->nurbs = MAX_URBS;
+2-1
tools/power/cpupower/man/cpupower-idle-info.1
···8787.fi8888.SH "SEE ALSO"8989.LP9090-cpupower(1), cpupower\-monitor(1), cpupower\-info(1), cpupower\-set(1)9090+cpupower(1), cpupower\-monitor(1), cpupower\-info(1), cpupower\-set(1),9191+cpupower\-idle\-set(1)
+71
tools/power/cpupower/man/cpupower-idle-set.1
···11+.TH "CPUPOWER-IDLE-SET" "1" "0.1" "" "cpupower Manual"22+.SH "NAME"33+.LP44+cpupower idle\-set \- Utility to set cpu idle state specific kernel options55+.SH "SYNTAX"66+.LP77+cpupower [ \-c cpulist ] idle\-set [\fIoptions\fP]88+.SH "DESCRIPTION"99+.LP1010+The cpupower idle\-set subcommand allows setting cpu idle, also called cpu1111+sleep state, specific options offered by the kernel. One example is disabling1212+sleep states. This can be handy for power vs. performance tuning.1313+.SH "OPTIONS"1414+.LP1515+.TP1616+\fB\-d\fR \fB\-\-disable\fR1717+Disable a specific processor sleep state.1818+.TP1919+\fB\-e\fR \fB\-\-enable\fR2020+Enable a specific processor sleep state.2121+2222+.SH "REMARKS"2323+.LP2424+Cpuidle Governors Policy on Disabling Sleep States2525+2626+.RS 42727+Depending on the cpuidle governor in use, which implements the kernel policy2828+for choosing sleep states, subsequent sleep states on this core might get2929+disabled as well.3030+3131+There are two cpuidle governors, ladder and menu. While the ladder3232+governor is always available if CONFIG_CPU_IDLE is selected, the3333+menu governor additionally requires CONFIG_NO_HZ.3434+3535+The behavior and the effect of the disable variable depend on the3636+implementation of a particular governor. In the ladder governor, for3737+example, it is not coherent: if one disables a light state,3838+then all deeper states are disabled as well. Likewise, if one enables a3939+deep state while a lighter state is still disabled, then this has no effect.4040+.RE4141+.LP4242+Disabling the Lightest Sleep State may not have any Effect4343+4444+.RS 44545+If criteria are not met to enter deeper sleep states and the lightest sleep4646+state is chosen when idle, the kernel may still enter this sleep state,4747+irrespective of whether it is disabled or not. 
This is also reflected in4848+the usage count of the disabled sleep state when using the cpupower idle\-info4949+command.5050+.RE5151+.LP5252+Selecting specific CPU Cores5353+5454+.RS 45555+By default, processor sleep states of all CPU cores are set. Please refer5656+to the cpupower(1) manpage, \-\-cpu option section, for how to disable5757+C-states of specific cores.5858+.RE5959+.SH "FILES"6060+.nf6161+\fI/sys/devices/system/cpu/cpu*/cpuidle/state*\fP6262+\fI/sys/devices/system/cpu/cpuidle/*\fP6363+.fi6464+.SH "AUTHORS"6565+.nf6666+Thomas Renninger <trenn@suse.de>6767+.fi6868+.SH "SEE ALSO"6969+.LP7070+cpupower(1), cpupower\-monitor(1), cpupower\-info(1), cpupower\-set(1),7171+cpupower\-idle\-info(1)
+2-2
tools/power/cpupower/utils/helpers/sysfs.c
···278278int sysfs_is_idlestate_disabled(unsigned int cpu,279279 unsigned int idlestate)280280{281281- if (sysfs_get_idlestate_count(cpu) < idlestate)281281+ if (sysfs_get_idlestate_count(cpu) <= idlestate)282282 return -1;283283284284 if (!sysfs_idlestate_file_exists(cpu, idlestate,···303303 char value[SYSFS_PATH_MAX];304304 int bytes_written;305305306306- if (sysfs_get_idlestate_count(cpu) < idlestate)306306+ if (sysfs_get_idlestate_count(cpu) <= idlestate)307307 return -1;308308309309 if (!sysfs_idlestate_file_exists(cpu, idlestate,