···
 	4 = active high level-sensitive
 	8 = active low level-sensitive

-Optional parent device properties:
-- reg : contains the PRCMU mailbox address for the AB8500 i2c port
-
 The AB8500 consists of a large and varied group of sub-devices:

 Device                   IRQ Names              Supply Names   Description
···
 - stericsson,amic2-bias-vamic1 : Analoge Mic wishes to use a non-standard Vamic
 - stericsson,earpeice-cmv : Earpeice voltage (only: 950 | 1100 | 1270 | 1580)

-ab8500@5 {
+ab8500 {
 	compatible = "stericsson,ab8500";
-	reg = <5>; /* mailbox 5 is i2c */
 	interrupts = <0 40 0x4>;
 	interrupt-controller;
 	#interrupt-cells = <2>;
···
   - "nvidia,tegra20-uart"
   - "nxp,lpc3220-uart"
   - "ibm,qpace-nwp-serial"
+  - "altr,16550-FIFO32"
+  - "altr,16550-FIFO64"
+  - "altr,16550-FIFO128"
   - "serial" if the port type is unknown.
 - reg : offset and length of the register set for the device.
 - interrupts : should contain uart interrupt.
+59 -6
Documentation/input/alps.txt
···
 Introduction
 ------------
-Currently the ALPS touchpad driver supports four protocol versions in use by
-ALPS touchpads, called versions 1, 2, 3, and 4. Information about the various
-protocol versions is contained in the following sections.
+Currently the ALPS touchpad driver supports five protocol versions in use by
+ALPS touchpads, called versions 1, 2, 3, 4 and 5.
+
+Since roughly mid-2010 several new ALPS touchpads have been released and
+integrated into a variety of laptops and netbooks. These new touchpads
+have enough behavior differences that the alps_model_data definition
+table, describing the properties of the different versions, is no longer
+adequate. The design choices were to re-define the alps_model_data
+table, with the risk of regression testing existing devices, or isolate
+the new devices outside of the alps_model_data table. The latter design
+choice was made. The new touchpad signatures are named: "Rushmore",
+"Pinnacle", and "Dolphin", which you will see in the alps.c code.
+For the purposes of this document, this group of ALPS touchpads will
+generically be called "new ALPS touchpads".
+
+We experimented with probing the ACPI interface _HID (Hardware ID)/_CID
+(Compatibility ID) definition as a way to uniquely identify the
+different ALPS variants but there did not appear to be a 1:1 mapping.
+In fact, it appeared to be an m:n mapping between the _HID and actual
+hardware type.

 Detection
 ---------
···
 report" sequence: E8-E7-E7-E7-E9. The response is the model signature and is
 matched against known models in the alps_model_data_array.

-With protocol versions 3 and 4, the E7 report model signature is always
-73-02-64. To differentiate between these versions, the response from the
-"Enter Command Mode" sequence must be inspected as described below.
+For older touchpads supporting protocol versions 3 and 4, the E7 report
+model signature is always 73-02-64. To differentiate between these
+versions, the response from the "Enter Command Mode" sequence must be
+inspected as described below.
+
+The new ALPS touchpads have an E7 signature of 73-03-50 or 73-03-0A but
+seem to be better differentiated by the EC Command Mode response.

 Command Mode
 ------------
···
 address of the register being read, and the third contains the value of the
 register. Registers are written by writing the value one nibble at a time
 using the same encoding used for addresses.
+
+For the new ALPS touchpads, the EC command is used to enter command
+mode. The response in the new ALPS touchpads is significantly different,
+and more important in determining the behavior. This code has been
+separated from the original alps_model_data table and put in the
+alps_identify function. For example, there seem to be two hardware init
+sequences for the "Dolphin" touchpads as determined by the second byte
+of the EC response.

 Packet Format
 -------------
···
 well.

 So far no v4 devices with tracksticks have been encountered.
+
+ALPS Absolute Mode - Protocol Version 5
+---------------------------------------
+This is basically Protocol Version 3 but with different logic for packet
+decode. It uses the same alps_process_touchpad_packet_v3 call with a
+specialized decode_fields function pointer to correctly interpret the
+packets. This appears to only be used by the Dolphin devices.
+
+For single-touch, the 6-byte packet format is:
+
+ byte 0:    1    1    0    0    1    0    0    0
+ byte 1:    0   x6   x5   x4   x3   x2   x1   x0
+ byte 2:    0   y6   y5   y4   y3   y2   y1   y0
+ byte 3:    0    M    R    L    1    m    r    l
+ byte 4:  y10   y9   y8   y7  x10   x9   x8   x7
+ byte 5:    0   z6   z5   z4   z3   z2   z1   z0
+
+For mt, the format is:
+
+ byte 0:    1    1    1   n3    1   n2   n1  x24
+ byte 1:    1   y7   y6   y5   y4   y3   y2   y1
+ byte 2:    ?   x2   x1  y12  y11  y10   y9   y8
+ byte 3:    0  x23  x22  x21  x20  x19  x18  x17
+ byte 4:    0   x9   x8   x7   x6   x5   x4   x3
+ byte 5:    0  x16  x15  x14  x13  x12  x11  x10
+77
Documentation/networking/tuntap.txt
···
   Proto [2 bytes]
   Raw protocol(IP, IPv6, etc) frame.

+  3.3 Multiqueue tuntap interface:
+
+  From version 3.8, Linux supports multiqueue tuntap which can use multiple
+  file descriptors (queues) to parallelize packet sending or receiving. The
+  device allocation is the same as before, and if the user wants to create
+  multiple queues, TUNSETIFF with the same device name must be called many
+  times with the IFF_MULTI_QUEUE flag.
+
+  char *dev should be the name of the device, queues is the number of queues
+  to be created, fds is used to store and return the file descriptors
+  (queues) created to the caller. Each file descriptor serves as the
+  interface of a queue which can be accessed by userspace.
+
+  #include <linux/if.h>
+  #include <linux/if_tun.h>
+
+  int tun_alloc_mq(char *dev, int queues, int *fds)
+  {
+      struct ifreq ifr;
+      int fd, err, i;
+
+      if (!dev)
+          return -1;
+
+      memset(&ifr, 0, sizeof(ifr));
+      /* Flags: IFF_TUN   - TUN device (no Ethernet headers)
+       *        IFF_TAP   - TAP device
+       *
+       *        IFF_NO_PI - Do not provide packet information
+       *        IFF_MULTI_QUEUE - Create a queue of multiqueue device
+       */
+      ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_MULTI_QUEUE;
+      strcpy(ifr.ifr_name, dev);
+
+      for (i = 0; i < queues; i++) {
+          if ((fd = open("/dev/net/tun", O_RDWR)) < 0) {
+              err = fd;
+              goto err;
+          }
+          err = ioctl(fd, TUNSETIFF, (void *)&ifr);
+          if (err) {
+              close(fd);
+              goto err;
+          }
+          fds[i] = fd;
+      }
+
+      return 0;
+  err:
+      for (--i; i >= 0; i--)
+          close(fds[i]);
+      return err;
+  }
+
+  A new ioctl (TUNSETQUEUE) was introduced to enable or disable a queue.
+  When calling it with the IFF_DETACH_QUEUE flag, the queue is disabled,
+  and when calling it with the IFF_ATTACH_QUEUE flag, the queue is enabled.
+  A queue is enabled by default after it is created through TUNSETIFF.
+
+  fd is the file descriptor (queue) that we want to enable or disable; when
+  enable is true we enable it, otherwise we disable it.
+
+  #include <linux/if.h>
+  #include <linux/if_tun.h>
+
+  int tun_set_queue(int fd, int enable)
+  {
+      struct ifreq ifr;
+
+      memset(&ifr, 0, sizeof(ifr));
+
+      if (enable)
+          ifr.ifr_flags = IFF_ATTACH_QUEUE;
+      else
+          ifr.ifr_flags = IFF_DETACH_QUEUE;
+
+      return ioctl(fd, TUNSETQUEUE, (void *)&ifr);
+  }
+
 Universal TUN/TAP device driver Frequently Asked Question.

 1. What platforms are supported by TUN/TAP driver ?
···
 	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
 	bool

-config HAVE_VIRT_TO_BUS
-	bool
-	help
-	  An architecture should select this if it implements the
-	  deprecated interface virt_to_bus(). All new architectures
-	  should probably not select this.
-
 config HAVE_ARCH_SECCOMP_FILTER
 	bool
 	help
···
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_UID16
-	select HAVE_VIRT_TO_BUS
+	select VIRT_TO_BUS
 	select KTIME_SCALAR
 	select PERF_USE_VMALLOC
 	select RTC_LIB
···
 config ARCH_DOVE
 	bool "Marvell Dove"
 	select ARCH_REQUIRE_GPIOLIB
-	select COMMON_CLK_DOVE
 	select CPU_V7
 	select GENERIC_CLOCKEVENTS
 	select MIGHT_HAVE_PCI
···
 	  accounting to be spread across the timer interval, preventing a
 	  "thundering herd" at every timer tick.

+# The GPIO number here must be sorted by descending number. In case of
+# a multiplatform kernel, we just want the highest value required by the
+# selected platforms.
config ARCH_NR_GPIO
 	int
 	default 1024 if ARCH_SHMOBILE || ARCH_TEGRA
-	default 355 if ARCH_U8500
-	default 264 if MACH_H4700
 	default 512 if SOC_OMAP5
+	default 355 if ARCH_U8500
 	default 288 if ARCH_VT8500 || ARCH_SUNXI
+	default 264 if MACH_H4700
 	default 0
 	help
 	  Maximum number of GPIOs in the system.
···
config XEN
 	bool "Xen guest support on ARM (EXPERIMENTAL)"
-	depends on ARM && OF
+	depends on ARM && AEABI && OF
 	depends on CPU_V7 && !CPU_V6
+	depends on !GENERIC_ATOMIC64
 	help
 	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
···
 			status = "okay";
 			/* No CD or WP GPIOs */
 		};
+
+		usb@d0050000 {
+			status = "okay";
+		};
+
+		usb@d0051000 {
+			status = "okay";
+		};
 	};
 };
···
 CONFIG_INPUT_TWL4030_PWRBUTTON=y
 CONFIG_VT_HW_CONSOLE_BINDING=y
 # CONFIG_LEGACY_PTYS is not set
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_CONSOLE=y
 CONFIG_SERIAL_8250_NR_UARTS=32
 CONFIG_SERIAL_8250_EXTENDED=y
 CONFIG_SERIAL_8250_MANY_PORTS=y
+4 -21
arch/arm/include/asm/xen/events.h
···
#define _ASM_ARM_XEN_EVENTS_H

#include <asm/ptrace.h>
+#include <asm/atomic.h>

enum ipi_vector {
	XEN_PLACEHOLDER_VECTOR,
···
 	return raw_irqs_disabled_flags(regs->ARM_cpsr);
 }

-/*
- * We cannot use xchg because it does not support 8-byte
- * values. However it is safe to use {ldr,str}exd directly because all
- * platforms which Xen can run on support those instructions.
- */
-static inline xen_ulong_t xchg_xen_ulong(xen_ulong_t *ptr, xen_ulong_t val)
-{
-	xen_ulong_t oldval;
-	unsigned int tmp;
-
-	wmb();
-	asm volatile("@ xchg_xen_ulong\n"
-		"1:	ldrexd	%0, %H0, [%3]\n"
-		"	strexd	%1, %2, %H2, [%3]\n"
-		"	teq	%1, #0\n"
-		"	bne	1b"
-		: "=&r" (oldval), "=&r" (tmp)
-		: "r" (val), "r" (ptr)
-		: "memory", "cc");
-	return oldval;
-}
+#define xchg_xen_ulong(ptr, val) atomic64_xchg(container_of((ptr),	\
+							    atomic64_t,	\
+							    counter), (val))

#endif /* _ASM_ARM_XEN_EVENTS_H */
+1
arch/arm/mach-at91/board-foxg20.c
···
 	/* If you choose to use a pin other than PB16 it needs to be 3.3V */
 	.pin		= AT91_PIN_PB16,
 	.is_open_drain	= 1,
+	.ext_pullup_enable_pin	= -EINVAL,
};

static struct platform_device w1_device = {
···

#ifdef CONFIG_PM
/*
- * The following code is located into the .data section. This is to
- * allow phys_l2x0_saved_regs to be accessed with a relative load
- * as we are running on physical address here.
+ * The following code must assume it is running from physical address
+ * where absolute virtual addresses to the data section have to be
+ * turned into relative ones.
 */
-	.data
-	.align

#ifdef CONFIG_CACHE_L2X0
	.macro	pl310_resume
-	ldr	r2, phys_l2x0_saved_regs
+	adr	r0, l2x0_saved_regs_offset
+	ldr	r2, [r0]
+	add	r2, r2, r0
	ldr	r0, [r2, #L2X0_R_PHY_BASE]	@ get physical base of l2x0
	ldr	r1, [r2, #L2X0_R_AUX_CTRL]	@ get aux_ctrl value
	str	r1, [r0, #L2X0_AUX_CTRL]	@ restore aux_ctrl
···
	str	r1, [r0, #L2X0_CTRL]	@ re-enable L2
	.endm

-	.globl	phys_l2x0_saved_regs
-phys_l2x0_saved_regs:
-	.long	0
+l2x0_saved_regs_offset:
+	.word	l2x0_saved_regs - .
+
#else
	.macro	pl310_resume
	.endm
-15
arch/arm/mach-imx/pm-imx6q.c
···
#include "common.h"
#include "hardware.h"

-extern unsigned long phys_l2x0_saved_regs;
-
static int imx6q_suspend_finish(unsigned long val)
{
	cpu_do_idle();
···

void __init imx6q_pm_init(void)
{
-	/*
-	 * The l2x0 core code provides an infrastucture to save and restore
-	 * l2x0 registers across suspend/resume cycle. But because imx6q
-	 * retains L2 content during suspend and needs to resume L2 before
-	 * MMU is enabled, it can only utilize register saving support and
-	 * have to take care of restoring on its own. So we save physical
-	 * address of the data structure used by l2x0 core to save registers,
-	 * and later restore the necessary ones in imx6q resume entry.
-	 */
-#ifdef CONFIG_CACHE_L2X0
-	phys_l2x0_saved_regs = __pa(&l2x0_saved_regs);
-#endif
-
	suspend_set_ops(&imx6q_pm_ops);
}
···
	/* TODO: remove, see function definition */
	gpmc_convert_ps_to_ns(gpmc_t);

-	/* Now the GPMC is initialised, unreserve the chip-selects */
-	gpmc_cs_map = 0;
-
	return 0;
}
···
	if (IS_ERR_VALUE(gpmc_setup_irq()))
		dev_warn(gpmc_dev, "gpmc_setup_irq failed\n");
+
+	/* Now the GPMC is initialised, unreserve the chip-selects */
+	gpmc_cs_map = 0;

	rc = gpmc_probe_dt(pdev);
	if (rc < 0) {
+5 -4
arch/arm/mach-omap2/mux.c
···
		return -EINVAL;
	}

-	pr_err("%s: Could not find signal %s\n", __func__, muxname);
-
	return -ENODEV;
}
···

		return mux_mode;
	}
+
+	pr_err("%s: Could not find signal %s\n", __func__, muxname);

	return -ENODEV;
}
···
	list_for_each_entry(e, &partition->muxmodes, node) {
		struct omap_mux *m = &e->mux;

-		(void)debugfs_create_file(m->muxnames[0], S_IWUSR, mux_dbg_dir,
-					  m, &omap_mux_dbg_signal_fops);
+		(void)debugfs_create_file(m->muxnames[0], S_IWUSR | S_IRUGO,
+					  mux_dbg_dir, m,
+					  &omap_mux_dbg_signal_fops);
	}
}
···
		u32 size = readl(ddr_window_cpu_base + DDR_SIZE_CS_OFF(i));

		/*
-		 * Chip select enabled?
+		 * We only take care of entries for which the chip
+		 * select is enabled, and that don't have high base
+		 * address bits set (devices can only access the first
+		 * 32 bits of the memory).
		 */
-		if (size & 1) {
+		if ((size & 1) && !(base & 0xF)) {
			struct mbus_dram_window *w;

			w = &orion_mbus_dram_info.cs[cs++];
···
	long long	st_size;
	unsigned long	st_blksize;

-#if defined(__BIG_ENDIAN)
+#if defined(__BYTE_ORDER) ? __BYTE_ORDER == __BIG_ENDIAN : defined(__BIG_ENDIAN)
	unsigned long	__pad4;		/* future possible st_blocks high bits */
	unsigned long	st_blocks;	/* Number 512-byte blocks allocated. */
-#elif defined(__LITTLE_ENDIAN)
+#elif defined(__BYTE_ORDER) ? __BYTE_ORDER == __LITTLE_ENDIAN : defined(__LITTLE_ENDIAN)
	unsigned long	st_blocks;	/* Number 512-byte blocks allocated. */
	unsigned long	__pad4;		/* future possible st_blocks high bits */
#else
···
config SOM5282EM
	bool "EMAC.Inc SOM5282EM board support"
	depends on M528x
-	select EMAC_INC
	help
	  Support for the EMAC.Inc SOM5282EM module.
···
		}
	}

-#if !defined(CONFIG_SUN3) && !defined(CONFIG_COLDFIRE)
+#if defined(CONFIG_MMU) && !defined(CONFIG_SUN3) && !defined(CONFIG_COLDFIRE)
	/* insert pointer tables allocated so far into the tablelist */
	init_pointer_table((unsigned long)kernel_pg_dir);
	for (i = 0; i < PTRS_PER_PGD; i++) {
+1 -1
arch/m68k/platform/coldfire/m528x.c
···
	u8 port;

	/* make sure PUAPAR is set for UART0 and UART1 */
-	port = readb(MCF5282_GPIO_PUAPAR);
+	port = readb(MCFGPIO_PUAPAR);
	port |= 0x03 | (0x03 << 2);
	writeb(port, MCFGPIO_PUAPAR);
}
···
	return result;
}

-static int acpi_processor_get_performance_info(struct acpi_processor *pr)
+int acpi_processor_get_performance_info(struct acpi_processor *pr)
{
	int result = 0;
	acpi_status status = AE_OK;
···
#endif
	return result;
}
-
+EXPORT_SYMBOL_GPL(acpi_processor_get_performance_info);
int acpi_processor_notify_smm(struct module *calling_module)
{
	acpi_status status;
+11 -2
drivers/char/hw_random/virtio-rng.c
···
{
	int err;

+	if (vq) {
+		/* We only support one device for now */
+		return -EBUSY;
+	}
	/* We expect a single virtqueue. */
	vq = virtio_find_single_vq(vdev, random_recv_done, "input");
-	if (IS_ERR(vq))
-		return PTR_ERR(vq);
+	if (IS_ERR(vq)) {
+		err = PTR_ERR(vq);
+		vq = NULL;
+		return err;
+	}

	err = hwrng_register(&virtio_hwrng);
	if (err) {
		vdev->config->del_vqs(vdev);
+		vq = NULL;
		return err;
	}
···
	busy = false;
	hwrng_unregister(&virtio_hwrng);
	vdev->config->del_vqs(vdev);
+	vq = NULL;
}

static int virtrng_probe(struct virtio_device *vdev)
-1
drivers/clk/tegra/clk-tegra20.c
···
	TEGRA_CLK_DUPLICATE(usbd, "tegra-ehci.0", NULL),
	TEGRA_CLK_DUPLICATE(usbd, "tegra-otg", NULL),
	TEGRA_CLK_DUPLICATE(cclk, NULL, "cpu"),
-	TEGRA_CLK_DUPLICATE(twd, "smp_twd", NULL),
	TEGRA_CLK_DUPLICATE(clk_max, NULL, NULL), /* Must be the last entry */
};
-1
drivers/clk/tegra/clk-tegra30.c
···
	TEGRA_CLK_DUPLICATE(cml1, "tegra_sata_cml", NULL),
	TEGRA_CLK_DUPLICATE(cml0, "tegra_pcie", "cml"),
	TEGRA_CLK_DUPLICATE(pciex, "tegra_pcie", "pciex"),
-	TEGRA_CLK_DUPLICATE(twd, "smp_twd", NULL),
	TEGRA_CLK_DUPLICATE(vcp, "nvavp", "vcp"),
	TEGRA_CLK_DUPLICATE(clk_max, NULL, NULL), /* MUST be the last entry */
};
+7
drivers/gpio/gpio-mvebu.c
···
#include <linux/io.h>
#include <linux/of_irq.h>
#include <linux/of_device.h>
+#include <linux/clk.h>
#include <linux/pinctrl/consumer.h>

/*
···
	struct resource *res;
	struct irq_chip_generic *gc;
	struct irq_chip_type *ct;
+	struct clk *clk;
	unsigned int ngpios;
	int soc_variant;
	int i, cpu, id;
···
		dev_err(&pdev->dev, "Couldn't get OF id\n");
		return id;
	}
+
+	clk = devm_clk_get(&pdev->dev, NULL);
+	/* Not all SoCs require a clock. */
+	if (!IS_ERR(clk))
+		clk_prepare_enable(clk);

	mvchip->soc_variant = soc_variant;
	mvchip->chip.label = dev_name(&pdev->dev);
···
{
	struct nouveau_abi16_ntfy *ntfy, *temp;

+	/* wait for all activity to stop before releasing notify object, which
+	 * may be still in use */
+	if (chan->chan && chan->ntfy)
+		nouveau_channel_idle(chan->chan);
+
	/* cleanup notifier state */
	list_for_each_entry_safe(ntfy, temp, &chan->notifiers, head) {
		nouveau_abi16_ntfy_fini(chan, ntfy);
+2 -2
drivers/gpu/drm/nouveau/nouveau_bo.c
···
		stride  = 16 * 4;
		height  = amount / stride;

-		if (new_mem->mem_type == TTM_PL_VRAM &&
+		if (old_mem->mem_type == TTM_PL_VRAM &&
		    nouveau_bo_tile_layout(nvbo)) {
			ret = RING_SPACE(chan, 8);
			if (ret)
···
			BEGIN_NV04(chan, NvSubCopy, 0x0200, 1);
			OUT_RING  (chan, 1);
		}
-		if (old_mem->mem_type == TTM_PL_VRAM &&
+		if (new_mem->mem_type == TTM_PL_VRAM &&
		    nouveau_bo_tile_layout(nvbo)) {
			ret = RING_SPACE(chan, 8);
			if (ret)
···
/* Must be called with ts->lock held */
static void __ads7846_enable(struct ads7846 *ts)
{
-	regulator_enable(ts->reg);
+	int error;
+
+	error = regulator_enable(ts->reg);
+	if (error != 0)
+		dev_err(&ts->spi->dev, "Failed to enable supply: %d\n", error);
+
	ads7846_restart(ts);
}
+25 -9
drivers/input/touchscreen/mms114.c
···
	struct i2c_client *client = data->client;
	int error;

-	if (data->core_reg)
-		regulator_enable(data->core_reg);
-	if (data->io_reg)
-		regulator_enable(data->io_reg);
+	error = regulator_enable(data->core_reg);
+	if (error) {
+		dev_err(&client->dev, "Failed to enable avdd: %d\n", error);
+		return error;
+	}
+
+	error = regulator_enable(data->io_reg);
+	if (error) {
+		dev_err(&client->dev, "Failed to enable vdd: %d\n", error);
+		regulator_disable(data->core_reg);
+		return error;
+	}
+
	mdelay(MMS114_POWERON_DELAY);

	error = mms114_setup_regs(data);
-	if (error < 0)
+	if (error < 0) {
+		regulator_disable(data->io_reg);
+		regulator_disable(data->core_reg);
		return error;
+	}

	if (data->pdata->cfg_pin)
		data->pdata->cfg_pin(true);
···
static void mms114_stop(struct mms114_data *data)
{
	struct i2c_client *client = data->client;
+	int error;

	disable_irq(client->irq);

	if (data->pdata->cfg_pin)
		data->pdata->cfg_pin(false);

-	if (data->io_reg)
-		regulator_disable(data->io_reg);
-	if (data->core_reg)
-		regulator_disable(data->core_reg);
+	error = regulator_disable(data->io_reg);
+	if (error)
+		dev_warn(&client->dev, "Failed to disable vdd: %d\n", error);
+
+	error = regulator_disable(data->core_reg);
+	if (error)
+		dev_warn(&client->dev, "Failed to disable avdd: %d\n", error);
}

static int mms114_input_open(struct input_dev *dev)
+1 -1
drivers/irqchip/irq-gic.c
···

	/* Convert our logical CPU mask into a physical one. */
	for_each_cpu(cpu, mask)
-		map |= 1 << cpu_logical_map(cpu);
+		map |= gic_cpu_map[cpu];

	/*
	 * Ensure that stores to Normal memory are visible to the
+3 -1
drivers/isdn/i4l/isdn_tty.c
···
	int j;
	int l;

-	l = strlen(msg);
+	l = min(strlen(msg), sizeof(cmd.parm) - sizeof(cmd.parm.cmsg)
+		+ sizeof(cmd.parm.cmsg.para) - 2);
+
	if (!l) {
		isdn_tty_modem_result(RESULT_ERROR, info);
		return;
+1
drivers/mfd/Kconfig
···
config AB8500_CORE
	bool "ST-Ericsson AB8500 Mixed Signal Power Management chip"
	depends on GENERIC_HARDIRQS && ABX500_CORE && MFD_DB8500_PRCMU
+	select POWER_SUPPLY
	select MFD_CORE
	select IRQ_DOMAIN
	help
+13 -4
drivers/mfd/ab8500-gpadc.c
···
static int ab8500_gpadc_runtime_resume(struct device *dev)
{
	struct ab8500_gpadc *gpadc = dev_get_drvdata(dev);
+	int ret;

-	regulator_enable(gpadc->regu);
-	return 0;
+	ret = regulator_enable(gpadc->regu);
+	if (ret)
+		dev_err(dev, "Failed to enable vtvout LDO: %d\n", ret);
+	return ret;
}

static int ab8500_gpadc_runtime_idle(struct device *dev)
···
	}

	/* VTVout LDO used to power up ab8500-GPADC */
-	gpadc->regu = regulator_get(&pdev->dev, "vddadc");
+	gpadc->regu = devm_regulator_get(&pdev->dev, "vddadc");
	if (IS_ERR(gpadc->regu)) {
		ret = PTR_ERR(gpadc->regu);
		dev_err(gpadc->dev, "failed to get vtvout LDO\n");
···

	platform_set_drvdata(pdev, gpadc);

-	regulator_enable(gpadc->regu);
+	ret = regulator_enable(gpadc->regu);
+	if (ret) {
+		dev_err(gpadc->dev, "Failed to enable vtvout LDO: %d\n", ret);
+		goto fail_enable;
+	}

	pm_runtime_set_autosuspend_delay(gpadc->dev, GPADC_AUDOSUSPEND_DELAY);
	pm_runtime_use_autosuspend(gpadc->dev);
···
	list_add_tail(&gpadc->node, &ab8500_gpadc_list);
	dev_dbg(gpadc->dev, "probe success\n");
	return 0;
+
+fail_enable:
fail_irq:
	free_irq(gpadc->irq, gpadc);
fail:
···
 * Disable the resource.
 * The function returns with error or the content of the register
 */
-int twl4030_audio_disable_resource(unsigned id)
+int twl4030_audio_disable_resource(enum twl4030_audio_res id)
{
	struct twl4030_audio *audio = platform_get_drvdata(twl4030_audio_dev);
	int val;
···
	struct pci_dev *pdev;
	struct net_device *netdev;

+	u8 __iomem *csr;	/* CSR BAR used only for BE2/3 */
	u8 __iomem *db;		/* Door Bell */

	struct mutex mbox_lock; /* For serializing mbox cmds to BE card */
+16 -20
drivers/net/ethernet/emulex/benet/be_cmds.c
···
	return 0;
}

-static int be_POST_stage_get(struct be_adapter *adapter, u16 *stage)
+static u16 be_POST_stage_get(struct be_adapter *adapter)
{
	u32 sem;
-	u32 reg = skyhawk_chip(adapter) ? SLIPORT_SEMAPHORE_OFFSET_SH :
-					  SLIPORT_SEMAPHORE_OFFSET_BE;

-	pci_read_config_dword(adapter->pdev, reg, &sem);
-	*stage = sem & POST_STAGE_MASK;
-
-	if ((sem >> POST_ERR_SHIFT) & POST_ERR_MASK)
-		return -1;
+	if (BEx_chip(adapter))
+		sem = ioread32(adapter->csr + SLIPORT_SEMAPHORE_OFFSET_BEx);
	else
-		return 0;
+		pci_read_config_dword(adapter->pdev,
+				      SLIPORT_SEMAPHORE_OFFSET_SH, &sem);
+
+	return sem & POST_STAGE_MASK;
}

int lancer_wait_ready(struct be_adapter *adapter)
···
	}

	do {
-		status = be_POST_stage_get(adapter, &stage);
-		if (status) {
-			dev_err(dev, "POST error; stage=0x%x\n", stage);
-			return -1;
-		} else if (stage != POST_STAGE_ARMFW_RDY) {
-			if (msleep_interruptible(2000)) {
-				dev_err(dev, "Waiting for POST aborted\n");
-				return -EINTR;
-			}
-			timeout += 2;
-		} else {
+		stage = be_POST_stage_get(adapter);
+		if (stage == POST_STAGE_ARMFW_RDY)
			return 0;
+
+		dev_info(dev, "Waiting for POST, %ds elapsed\n",
+			 timeout);
+		if (msleep_interruptible(2000)) {
+			dev_err(dev, "Waiting for POST aborted\n");
+			return -EINTR;
		}
+		timeout += 2;
	} while (timeout < 60);

	dev_err(dev, "POST timeout; stage=0x%x\n", stage);
+2 -2
drivers/net/ethernet/emulex/benet/be_hw.h
···
#define MPU_EP_CONTROL 		0

/********** MPU semphore: used for SH & BE  *************/
-#define SLIPORT_SEMAPHORE_OFFSET_BE		0x7c
-#define SLIPORT_SEMAPHORE_OFFSET_SH		0x94
+#define SLIPORT_SEMAPHORE_OFFSET_BEx		0xac  /* CSR BAR offset */
+#define SLIPORT_SEMAPHORE_OFFSET_SH		0x94  /* PCI-CFG offset */
#define POST_STAGE_MASK				0x0000FFFF
#define POST_ERR_MASK				0x1
#define POST_ERR_SHIFT				31
+10
drivers/net/ethernet/emulex/benet/be_main.c
···

static void be_unmap_pci_bars(struct be_adapter *adapter)
{
+	if (adapter->csr)
+		pci_iounmap(adapter->pdev, adapter->csr);
	if (adapter->db)
		pci_iounmap(adapter->pdev, adapter->db);
}
···
	pci_read_config_dword(adapter->pdev, SLI_INTF_REG_OFFSET, &sli_intf);
	adapter->if_type = (sli_intf & SLI_INTF_IF_TYPE_MASK) >>
				SLI_INTF_IF_TYPE_SHIFT;
+
+	if (BEx_chip(adapter) && be_physfn(adapter)) {
+		adapter->csr = pci_iomap(adapter->pdev, 2, 0);
+		if (adapter->csr == NULL)
+			return -ENOMEM;
+	}

	addr = pci_iomap(adapter->pdev, db_bar(adapter), 0);
	if (addr == NULL)
···
	pci_restore_state(pdev);

	/* Check if card is ok and fw is ready */
+	dev_info(&adapter->pdev->dev,
+		 "Waiting for FW to be ready after EEH reset\n");
	status = be_fw_wait_ready(adapter);
	if (status)
		return PCI_ERS_RESULT_DISCONNECT;
···
	    VMXNET3_RX_RING_MAX_SIZE)
		return -EINVAL;

+	/* if adapter not yet initialized, do nothing */
+	if (adapter->rx_buf_per_pkt == 0) {
+		netdev_err(netdev, "adapter not completely initialized, "
+			   "ring size cannot be changed yet\n");
+		return -EOPNOTSUPP;
+	}

	/* round it up to a multiple of VMXNET3_RING_SIZE_ALIGN */
	new_tx_ring_size = (param->tx_pending + VMXNET3_RING_SIZE_MASK) &
+2 -2
drivers/net/vmxnet3/vmxnet3_int.h
···
/*
 * Version numbers
 */
-#define VMXNET3_DRIVER_VERSION_STRING   "1.1.29.0-k"
+#define VMXNET3_DRIVER_VERSION_STRING   "1.1.30.0-k"

/* a 32-bit int, each byte encode a verion number in VMXNET3_DRIVER_VERSION */
-#define VMXNET3_DRIVER_VERSION_NUM      0x01011D00
+#define VMXNET3_DRIVER_VERSION_NUM      0x01011E00

#if defined(CONFIG_PCI_MSI)
	/* RSS only makes sense if MSI-X is supported. */
···
		__entry->flags = cmd->flags;
		memcpy(__get_dynamic_array(hcmd), hdr, sizeof(*hdr));

-		for (i = 0; i < IWL_MAX_CMD_TFDS; i++) {
+		for (i = 0; i < IWL_MAX_CMD_TBS_PER_TFD; i++) {
			if (!cmd->len[i])
				continue;
			memcpy((u8 *)__get_dynamic_array(hcmd) + offset,
···
 * @CMD_ASYNC: Return right away and don't want for the response
 * @CMD_WANT_SKB: valid only with CMD_SYNC. The caller needs the buffer of the
 *	response. The caller needs to call iwl_free_resp when done.
- * @CMD_WANT_HCMD: The caller needs to get the HCMD that was sent in the
- *	response handler. Chunks flagged by %IWL_HCMD_DFL_NOCOPY won't be
- *	copied. The pointer passed to the response handler is in the transport
- *	ownership and don't need to be freed by the op_mode. This also means
- *	that the pointer is invalidated after the op_mode's handler returns.
 * @CMD_ON_DEMAND: This command is sent by the test mode pipe.
 */
enum CMD_MODE {
	CMD_SYNC = 0,
	CMD_ASYNC = BIT(0),
	CMD_WANT_SKB = BIT(1),
-	CMD_WANT_HCMD = BIT(2),
-	CMD_ON_DEMAND = BIT(3),
+	CMD_ON_DEMAND = BIT(2),
};

#define DEF_CMD_PAYLOAD_SIZE 320
···

#define TFD_MAX_PAYLOAD_SIZE (sizeof(struct iwl_device_cmd))

-#define IWL_MAX_CMD_TFDS	2
+/*
+ * number of transfer buffers (fragments) per transmit frame descriptor;
+ * this is just the driver's idea, the hardware supports 20
+ */
+#define IWL_MAX_CMD_TBS_PER_TFD	2

/**
 * struct iwl_hcmd_dataflag - flag for each one of the chunks of the command
···
 * @id: id of the host command
 */
struct iwl_host_cmd {
-	const void *data[IWL_MAX_CMD_TFDS];
+	const void *data[IWL_MAX_CMD_TBS_PER_TFD];
	struct iwl_rx_packet *resp_pkt;
	unsigned long _rx_page_addr;
	u32 _rx_page_order;
	int handler_status;

	u32 flags;
-	u16 len[IWL_MAX_CMD_TFDS];
-	u8 dataflags[IWL_MAX_CMD_TFDS];
+	u16 len[IWL_MAX_CMD_TBS_PER_TFD];
+	u8 dataflags[IWL_MAX_CMD_TBS_PER_TFD];
	u8 id;
};
···
 static int iwl_mvm_calc_rssi(struct iwl_mvm *mvm,
			      struct iwl_rx_phy_info *phy_info)
 {
-	u32 rssi_a, rssi_b, rssi_c, max_rssi, agc_db;
+	int rssi_a, rssi_b, rssi_a_dbm, rssi_b_dbm, max_rssi_dbm;
+	int rssi_all_band_a, rssi_all_band_b;
+	u32 agc_a, agc_b, max_agc;
 	u32 val;
 
-	/* Find max rssi among 3 possible receivers.
+	/* Find max rssi among 2 possible receivers.
 	 * These values are measured by the Digital Signal Processor (DSP).
 	 * They should stay fairly constant even as the signal strength varies,
 	 * if the radio's Automatic Gain Control (AGC) is working right.
 	 * AGC value (see below) will provide the "interesting" info.
 	 */
+	val = le32_to_cpu(phy_info->non_cfg_phy[IWL_RX_INFO_AGC_IDX]);
+	agc_a = (val & IWL_OFDM_AGC_A_MSK) >> IWL_OFDM_AGC_A_POS;
+	agc_b = (val & IWL_OFDM_AGC_B_MSK) >> IWL_OFDM_AGC_B_POS;
+	max_agc = max_t(u32, agc_a, agc_b);
+
 	val = le32_to_cpu(phy_info->non_cfg_phy[IWL_RX_INFO_RSSI_AB_IDX]);
 	rssi_a = (val & IWL_OFDM_RSSI_INBAND_A_MSK) >> IWL_OFDM_RSSI_A_POS;
 	rssi_b = (val & IWL_OFDM_RSSI_INBAND_B_MSK) >> IWL_OFDM_RSSI_B_POS;
-	val = le32_to_cpu(phy_info->non_cfg_phy[IWL_RX_INFO_RSSI_C_IDX]);
-	rssi_c = (val & IWL_OFDM_RSSI_INBAND_C_MSK) >> IWL_OFDM_RSSI_C_POS;
+	rssi_all_band_a = (val & IWL_OFDM_RSSI_ALLBAND_A_MSK) >>
+				IWL_OFDM_RSSI_ALLBAND_A_POS;
+	rssi_all_band_b = (val & IWL_OFDM_RSSI_ALLBAND_B_MSK) >>
+				IWL_OFDM_RSSI_ALLBAND_B_POS;
 
-	val = le32_to_cpu(phy_info->non_cfg_phy[IWL_RX_INFO_AGC_IDX]);
-	agc_db = (val & IWL_OFDM_AGC_DB_MSK) >> IWL_OFDM_AGC_DB_POS;
+	/*
+	 * dBm = rssi dB - agc dB - constant.
+	 * Higher AGC (higher radio gain) means lower signal.
+	 */
+	rssi_a_dbm = rssi_a - IWL_RSSI_OFFSET - agc_a;
+	rssi_b_dbm = rssi_b - IWL_RSSI_OFFSET - agc_b;
+	max_rssi_dbm = max_t(int, rssi_a_dbm, rssi_b_dbm);
 
-	max_rssi = max_t(u32, rssi_a, rssi_b);
-	max_rssi = max_t(u32, max_rssi, rssi_c);
+	IWL_DEBUG_STATS(mvm, "Rssi In A %d B %d Max %d AGCA %d AGCB %d\n",
+			rssi_a_dbm, rssi_b_dbm, max_rssi_dbm, agc_a, agc_b);
 
-	IWL_DEBUG_STATS(mvm, "Rssi In A %d B %d C %d Max %d AGC dB %d\n",
-			rssi_a, rssi_b, rssi_c, max_rssi, agc_db);
-
-	/* dBm = max_rssi dB - agc dB - constant.
-	 * Higher AGC (higher radio gain) means lower signal. */
-	return max_rssi - agc_db - IWL_RSSI_OFFSET;
+	return max_rssi_dbm;
 }
 
 /*
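The essential change above is that the AGC gain is now subtracted per receive chain *before* taking the maximum, instead of subtracting a single AGC value after the fact. The arithmetic can be checked in isolation; a minimal userspace sketch, where `RSSI_OFFSET` of 44 stands in for the driver's `IWL_RSSI_OFFSET` (an assumption for illustration, not taken from this patch):

```c
#include <assert.h>

/* Stand-in for the driver's IWL_RSSI_OFFSET (assumed value). */
#define RSSI_OFFSET 44

/* Per-chain conversion as done in the patched iwl_mvm_calc_rssi():
 * dBm = rssi dB - agc dB - constant. Higher AGC (more radio gain)
 * means the incoming signal was weaker. */
static int chain_rssi_dbm(int rssi, int agc)
{
	return rssi - RSSI_OFFSET - agc;
}

/* The reported signal is the stronger of the two receive chains,
 * compared only after each chain's own AGC has been subtracted. */
static int max_rssi_dbm(int rssi_a, int agc_a, int rssi_b, int agc_b)
{
	int a = chain_rssi_dbm(rssi_a, agc_a);
	int b = chain_rssi_dbm(rssi_b, agc_b);

	return a > b ? a : b;
}
```

Note that with the old code, a chain with a high raw RSSI but also a high AGC setting could win the comparison even though its real signal was weaker; the per-chain subtraction fixes that.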
+10
drivers/net/wireless/iwlwifi/mvm/sta.c
···
 	u16 txq_id;
 	int err;
 
+
+	/*
+	 * If mac80211 is cleaning its state, then say that we finished since
+	 * our state has been cleared anyway.
+	 */
+	if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status)) {
+		ieee80211_stop_tx_ba_cb_irqsafe(vif, sta->addr, tid);
+		return 0;
+	}
+
 	spin_lock_bh(&mvmsta->lock);
 
 	txq_id = tid_data->txq_id;
+1-5
drivers/net/wireless/iwlwifi/mvm/tx.c
···
 
 	/* Single frame failure in an AMPDU queue => send BAR */
 	if (txq_id >= IWL_FIRST_AMPDU_QUEUE &&
-	    !(info->flags & IEEE80211_TX_STAT_ACK)) {
-		/* there must be only one skb in the skb_list */
-		WARN_ON_ONCE(skb_freed > 1 ||
-			     !skb_queue_empty(&skbs));
+	    !(info->flags & IEEE80211_TX_STAT_ACK))
 		info->flags |= IEEE80211_TX_STAT_AMPDU_NO_BACK;
-	}
 
 	/* W/A FW bug: seq_ctl is wrong when the queue is flushed */
 	if (status == TX_STATUS_FAIL_FIFO_FLUSHED) {
+25-9
drivers/net/wireless/iwlwifi/pcie/internal.h
···
 struct iwl_cmd_meta {
 	/* only for SYNC commands, iff the reply skb is wanted */
 	struct iwl_host_cmd *source;
-
-	DEFINE_DMA_UNMAP_ADDR(mapping);
-	DEFINE_DMA_UNMAP_LEN(len);
-
 	u32 flags;
 };
···
 /*
  * The FH will write back to the first TB only, so we need
  * to copy some data into the buffer regardless of whether
- * it should be mapped or not. This indicates how much to
- * copy, even for HCMDs it must be big enough to fit the
- * DRAM scratch from the TX cmd, at least 16 bytes.
+ * it should be mapped or not. This indicates how big the
+ * first TB must be to include the scratch buffer. Since
+ * the scratch is 4 bytes at offset 12, it's 16 now. If we
+ * make it bigger then allocations will be bigger and copy
+ * slower, so that's probably not useful.
  */
-#define IWL_HCMD_MIN_COPY_SIZE	16
+#define IWL_HCMD_SCRATCHBUF_SIZE	16
 
 struct iwl_pcie_txq_entry {
 	struct iwl_device_cmd *cmd;
-	struct iwl_device_cmd *copy_cmd;
 	struct sk_buff *skb;
 	/* buffer to free after command completes */
 	const void *free_buf;
 	struct iwl_cmd_meta meta;
 };
 
+struct iwl_pcie_txq_scratch_buf {
+	struct iwl_cmd_header hdr;
+	u8 buf[8];
+	__le32 scratch;
+};
+
 /**
  * struct iwl_txq - Tx Queue for DMA
  * @q: generic Rx/Tx queue descriptor
  * @tfds: transmit frame descriptors (DMA memory)
+ * @scratchbufs: start of command headers, including scratch buffers, for
+ *	the writeback -- this is DMA memory and an array holding one buffer
+ *	for each command on the queue
+ * @scratchbufs_dma: DMA address for the scratchbufs start
  * @entries: transmit entries (driver state)
  * @lock: queue lock
  * @stuck_timer: timer that fires if queue gets stuck
···
 struct iwl_txq {
 	struct iwl_queue q;
 	struct iwl_tfd *tfds;
+	struct iwl_pcie_txq_scratch_buf *scratchbufs;
+	dma_addr_t scratchbufs_dma;
 	struct iwl_pcie_txq_entry *entries;
 	spinlock_t lock;
 	struct timer_list stuck_timer;
···
 	u8 need_update;
 	u8 active;
 };
+
+static inline dma_addr_t
+iwl_pcie_get_scratchbuf_dma(struct iwl_txq *txq, int idx)
+{
+	return txq->scratchbufs_dma +
+	       sizeof(struct iwl_pcie_txq_scratch_buf) * idx;
+}
 
 /**
  * struct iwl_trans_pcie - PCIe transport specific data
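The new `iwl_pcie_get_scratchbuf_dma()` helper is plain pointer arithmetic over one coherent DMA allocation: slot `idx` lives at the base address plus `idx` fixed-size elements. A standalone sketch of the same idea; the struct only loosely mirrors `iwl_pcie_txq_scratch_buf` (the 4-byte header stand-in for `struct iwl_cmd_header` is an assumption), but it reproduces the 16-byte total with the scratch word at offset 12:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t dma_addr_t;	/* userspace stand-in */

/* Loose mirror of iwl_pcie_txq_scratch_buf: a 4-byte command-header
 * stand-in, 8 bytes of payload, then the 4-byte scratch word the
 * hardware writes back -- 16 bytes total, one first TB per slot. */
struct scratch_buf {
	uint8_t  hdr[4];
	uint8_t  buf[8];
	uint32_t scratch;
};

/* Mirrors iwl_pcie_get_scratchbuf_dma(): slot idx sits at the base of
 * the coherent allocation plus idx fixed-size elements, so the device
 * address can be derived without any per-slot bookkeeping. */
static dma_addr_t get_scratchbuf_dma(dma_addr_t base, int idx)
{
	return base + sizeof(struct scratch_buf) * idx;
}
```

This is why the patch can drop the per-command `DEFINE_DMA_UNMAP_ADDR`/`DEFINE_DMA_UNMAP_LEN` fields: the first TB's address is always derivable from the queue's write pointer.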
+3-11
drivers/net/wireless/iwlwifi/pcie/rx.c
···
 	index = SEQ_TO_INDEX(sequence);
 	cmd_index = get_cmd_index(&txq->q, index);
 
-	if (reclaim) {
-		struct iwl_pcie_txq_entry *ent;
-		ent = &txq->entries[cmd_index];
-		cmd = ent->copy_cmd;
-		WARN_ON_ONCE(!cmd && ent->meta.flags & CMD_WANT_HCMD);
-	} else {
+	if (reclaim)
+		cmd = txq->entries[cmd_index].cmd;
+	else
 		cmd = NULL;
-	}
 
 	err = iwl_op_mode_rx(trans->op_mode, &rxcb, cmd);
 
 	if (reclaim) {
-		/* The original command isn't needed any more */
-		kfree(txq->entries[cmd_index].copy_cmd);
-		txq->entries[cmd_index].copy_cmd = NULL;
-		/* nor is the duplicated part of the command */
 		kfree(txq->entries[cmd_index].free_buf);
 		txq->entries[cmd_index].free_buf = NULL;
 	}
+129-147
drivers/net/wireless/iwlwifi/pcie/tx.c
···
 	}
 
 	for (i = q->read_ptr; i != q->write_ptr;
-	     i = iwl_queue_inc_wrap(i, q->n_bd)) {
-		struct iwl_tx_cmd *tx_cmd =
-			(struct iwl_tx_cmd *)txq->entries[i].cmd->payload;
+	     i = iwl_queue_inc_wrap(i, q->n_bd))
 		IWL_ERR(trans, "scratch %d = 0x%08x\n", i,
-			get_unaligned_le32(&tx_cmd->scratch));
-	}
+			le32_to_cpu(txq->scratchbufs[i].scratch));
 
 	iwl_op_mode_nic_error(trans->op_mode);
 }
···
 }
 
 static void iwl_pcie_tfd_unmap(struct iwl_trans *trans,
-			       struct iwl_cmd_meta *meta, struct iwl_tfd *tfd,
-			       enum dma_data_direction dma_dir)
+			       struct iwl_cmd_meta *meta,
+			       struct iwl_tfd *tfd)
 {
 	int i;
 	int num_tbs;
···
 		return;
 	}
 
-	/* Unmap tx_cmd */
-	if (num_tbs)
-		dma_unmap_single(trans->dev,
-				 dma_unmap_addr(meta, mapping),
-				 dma_unmap_len(meta, len),
-				 DMA_BIDIRECTIONAL);
+	/* first TB is never freed - it's the scratchbuf data */
 
-	/* Unmap chunks, if any. */
 	for (i = 1; i < num_tbs; i++)
 		dma_unmap_single(trans->dev, iwl_pcie_tfd_tb_get_addr(tfd, i),
-				 iwl_pcie_tfd_tb_get_len(tfd, i), dma_dir);
+				 iwl_pcie_tfd_tb_get_len(tfd, i),
+				 DMA_TO_DEVICE);
 
 	tfd->num_tbs = 0;
 }
···
  * Does NOT advance any TFD circular buffer read/write indexes
  * Does NOT free the TFD itself (which is within circular buffer)
  */
-static void iwl_pcie_txq_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq,
-				  enum dma_data_direction dma_dir)
+static void iwl_pcie_txq_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq)
 {
 	struct iwl_tfd *tfd_tmp = txq->tfds;
···
 	lockdep_assert_held(&txq->lock);
 
 	/* We have only q->n_window txq->entries, but we use q->n_bd tfds */
-	iwl_pcie_tfd_unmap(trans, &txq->entries[idx].meta, &tfd_tmp[rd_ptr],
-			   dma_dir);
+	iwl_pcie_tfd_unmap(trans, &txq->entries[idx].meta, &tfd_tmp[rd_ptr]);
 
 	/* free SKB */
 	if (txq->entries) {
···
 {
 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
 	size_t tfd_sz = sizeof(struct iwl_tfd) * TFD_QUEUE_SIZE_MAX;
+	size_t scratchbuf_sz;
 	int i;
 
 	if (WARN_ON(txq->entries || txq->tfds))
···
 		IWL_ERR(trans, "dma_alloc_coherent(%zd) failed\n", tfd_sz);
 		goto error;
 	}
+
+	BUILD_BUG_ON(IWL_HCMD_SCRATCHBUF_SIZE != sizeof(*txq->scratchbufs));
+	BUILD_BUG_ON(offsetof(struct iwl_pcie_txq_scratch_buf, scratch) !=
+			sizeof(struct iwl_cmd_header) +
+			offsetof(struct iwl_tx_cmd, scratch));
+
+	scratchbuf_sz = sizeof(*txq->scratchbufs) * slots_num;
+
+	txq->scratchbufs = dma_alloc_coherent(trans->dev, scratchbuf_sz,
+					      &txq->scratchbufs_dma,
+					      GFP_KERNEL);
+	if (!txq->scratchbufs)
+		goto err_free_tfds;
+
 	txq->q.id = txq_id;
 
 	return 0;
+err_free_tfds:
+	dma_free_coherent(trans->dev, tfd_sz, txq->tfds, txq->q.dma_addr);
 error:
 	if (txq->entries && txq_id == trans_pcie->cmd_queue)
 		for (i = 0; i < slots_num; i++)
···
 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
 	struct iwl_txq *txq = &trans_pcie->txq[txq_id];
 	struct iwl_queue *q = &txq->q;
-	enum dma_data_direction dma_dir;
 
 	if (!q->n_bd)
 		return;
 
-	/* In the command queue, all the TBs are mapped as BIDI
-	 * so unmap them as such.
-	 */
-	if (txq_id == trans_pcie->cmd_queue)
-		dma_dir = DMA_BIDIRECTIONAL;
-	else
-		dma_dir = DMA_TO_DEVICE;
-
 	spin_lock_bh(&txq->lock);
 	while (q->write_ptr != q->read_ptr) {
-		iwl_pcie_txq_free_tfd(trans, txq, dma_dir);
+		iwl_pcie_txq_free_tfd(trans, txq);
 		q->read_ptr = iwl_queue_inc_wrap(q->read_ptr, q->n_bd);
 	}
 	spin_unlock_bh(&txq->lock);
···
 	if (txq_id == trans_pcie->cmd_queue)
 		for (i = 0; i < txq->q.n_window; i++) {
 			kfree(txq->entries[i].cmd);
-			kfree(txq->entries[i].copy_cmd);
 			kfree(txq->entries[i].free_buf);
 		}
···
 		dma_free_coherent(dev, sizeof(struct iwl_tfd) *
 				  txq->q.n_bd, txq->tfds, txq->q.dma_addr);
 		txq->q.dma_addr = 0;
+
+		dma_free_coherent(dev,
+				  sizeof(*txq->scratchbufs) * txq->q.n_window,
+				  txq->scratchbufs, txq->scratchbufs_dma);
 	}
 
 	kfree(txq->entries);
···
 
 		iwl_pcie_txq_inval_byte_cnt_tbl(trans, txq);
 
-		iwl_pcie_txq_free_tfd(trans, txq, DMA_TO_DEVICE);
+		iwl_pcie_txq_free_tfd(trans, txq);
 	}
 
 	iwl_pcie_txq_progress(trans_pcie, txq);
···
 	void *dup_buf = NULL;
 	dma_addr_t phys_addr;
 	int idx;
-	u16 copy_size, cmd_size, dma_size;
+	u16 copy_size, cmd_size, scratch_size;
 	bool had_nocopy = false;
 	int i;
 	u32 cmd_pos;
-	const u8 *cmddata[IWL_MAX_CMD_TFDS];
-	u16 cmdlen[IWL_MAX_CMD_TFDS];
+	const u8 *cmddata[IWL_MAX_CMD_TBS_PER_TFD];
+	u16 cmdlen[IWL_MAX_CMD_TBS_PER_TFD];
 
 	copy_size = sizeof(out_cmd->hdr);
 	cmd_size = sizeof(out_cmd->hdr);
 
 	/* need one for the header if the first is NOCOPY */
-	BUILD_BUG_ON(IWL_MAX_CMD_TFDS > IWL_NUM_OF_TBS - 1);
+	BUILD_BUG_ON(IWL_MAX_CMD_TBS_PER_TFD > IWL_NUM_OF_TBS - 1);
 
-	for (i = 0; i < IWL_MAX_CMD_TFDS; i++) {
+	for (i = 0; i < IWL_MAX_CMD_TBS_PER_TFD; i++) {
 		cmddata[i] = cmd->data[i];
 		cmdlen[i] = cmd->len[i];
 
 		if (!cmd->len[i])
 			continue;
 
-		/* need at least IWL_HCMD_MIN_COPY_SIZE copied */
-		if (copy_size < IWL_HCMD_MIN_COPY_SIZE) {
-			int copy = IWL_HCMD_MIN_COPY_SIZE - copy_size;
+		/* need at least IWL_HCMD_SCRATCHBUF_SIZE copied */
+		if (copy_size < IWL_HCMD_SCRATCHBUF_SIZE) {
+			int copy = IWL_HCMD_SCRATCHBUF_SIZE - copy_size;
 
 			if (copy > cmdlen[i])
 				copy = cmdlen[i];
···
 	/* and copy the data that needs to be copied */
 	cmd_pos = offsetof(struct iwl_device_cmd, payload);
 	copy_size = sizeof(out_cmd->hdr);
-	for (i = 0; i < IWL_MAX_CMD_TFDS; i++) {
+	for (i = 0; i < IWL_MAX_CMD_TBS_PER_TFD; i++) {
 		int copy = 0;
 
 		if (!cmd->len)
 			continue;
 
-		/* need at least IWL_HCMD_MIN_COPY_SIZE copied */
-		if (copy_size < IWL_HCMD_MIN_COPY_SIZE) {
-			copy = IWL_HCMD_MIN_COPY_SIZE - copy_size;
+		/* need at least IWL_HCMD_SCRATCHBUF_SIZE copied */
+		if (copy_size < IWL_HCMD_SCRATCHBUF_SIZE) {
+			copy = IWL_HCMD_SCRATCHBUF_SIZE - copy_size;
 
 			if (copy > cmd->len[i])
 				copy = cmd->len[i];
···
 		}
 	}
 
-	WARN_ON_ONCE(txq->entries[idx].copy_cmd);
-
-	/*
-	 * since out_cmd will be the source address of the FH, it will write
-	 * the retry count there. So when the user needs to receivce the HCMD
-	 * that corresponds to the response in the response handler, it needs
-	 * to set CMD_WANT_HCMD.
-	 */
-	if (cmd->flags & CMD_WANT_HCMD) {
-		txq->entries[idx].copy_cmd =
-			kmemdup(out_cmd, cmd_pos, GFP_ATOMIC);
-		if (unlikely(!txq->entries[idx].copy_cmd)) {
-			idx = -ENOMEM;
-			goto out;
-		}
-	}
-
 	IWL_DEBUG_HC(trans,
 		     "Sending command %s (#%x), seq: 0x%04X, %d bytes at %d[%d]:%d\n",
 		     get_cmd_string(trans_pcie, out_cmd->hdr.cmd),
 		     out_cmd->hdr.cmd, le16_to_cpu(out_cmd->hdr.sequence),
 		     cmd_size, q->write_ptr, idx, trans_pcie->cmd_queue);
 
-	/*
-	 * If the entire command is smaller than IWL_HCMD_MIN_COPY_SIZE, we must
-	 * still map at least that many bytes for the hardware to write back to.
-	 * We have enough space, so that's not a problem.
-	 */
-	dma_size = max_t(u16, copy_size, IWL_HCMD_MIN_COPY_SIZE);
+	/* start the TFD with the scratchbuf */
+	scratch_size = min_t(int, copy_size, IWL_HCMD_SCRATCHBUF_SIZE);
+	memcpy(&txq->scratchbufs[q->write_ptr], &out_cmd->hdr, scratch_size);
+	iwl_pcie_txq_build_tfd(trans, txq,
+			       iwl_pcie_get_scratchbuf_dma(txq, q->write_ptr),
+			       scratch_size, 1);
 
-	phys_addr = dma_map_single(trans->dev, &out_cmd->hdr, dma_size,
-				   DMA_BIDIRECTIONAL);
-	if (unlikely(dma_mapping_error(trans->dev, phys_addr))) {
-		idx = -ENOMEM;
-		goto out;
+	/* map first command fragment, if any remains */
+	if (copy_size > scratch_size) {
+		phys_addr = dma_map_single(trans->dev,
+					   ((u8 *)&out_cmd->hdr) + scratch_size,
+					   copy_size - scratch_size,
+					   DMA_TO_DEVICE);
+		if (dma_mapping_error(trans->dev, phys_addr)) {
+			iwl_pcie_tfd_unmap(trans, out_meta,
+					   &txq->tfds[q->write_ptr]);
+			idx = -ENOMEM;
+			goto out;
+		}
+
+		iwl_pcie_txq_build_tfd(trans, txq, phys_addr,
+				       copy_size - scratch_size, 0);
 	}
 
-	dma_unmap_addr_set(out_meta, mapping, phys_addr);
-	dma_unmap_len_set(out_meta, len, dma_size);
-
-	iwl_pcie_txq_build_tfd(trans, txq, phys_addr, copy_size, 1);
-
 	/* map the remaining (adjusted) nocopy/dup fragments */
-	for (i = 0; i < IWL_MAX_CMD_TFDS; i++) {
+	for (i = 0; i < IWL_MAX_CMD_TBS_PER_TFD; i++) {
 		const void *data = cmddata[i];
 
 		if (!cmdlen[i])
···
 		if (cmd->dataflags[i] & IWL_HCMD_DFL_DUP)
 			data = dup_buf;
 		phys_addr = dma_map_single(trans->dev, (void *)data,
-					   cmdlen[i], DMA_BIDIRECTIONAL);
+					   cmdlen[i], DMA_TO_DEVICE);
 		if (dma_mapping_error(trans->dev, phys_addr)) {
 			iwl_pcie_tfd_unmap(trans, out_meta,
-					   &txq->tfds[q->write_ptr],
-					   DMA_BIDIRECTIONAL);
+					   &txq->tfds[q->write_ptr]);
 			idx = -ENOMEM;
 			goto out;
 		}
···
 	cmd = txq->entries[cmd_index].cmd;
 	meta = &txq->entries[cmd_index].meta;
 
-	iwl_pcie_tfd_unmap(trans, meta, &txq->tfds[index], DMA_BIDIRECTIONAL);
+	iwl_pcie_tfd_unmap(trans, meta, &txq->tfds[index]);
 
 	/* Input error checking is done when commands are added to queue. */
 	if (meta->flags & CMD_WANT_SKB) {
···
 	struct iwl_cmd_meta *out_meta;
 	struct iwl_txq *txq;
 	struct iwl_queue *q;
-	dma_addr_t phys_addr = 0;
-	dma_addr_t txcmd_phys;
-	dma_addr_t scratch_phys;
-	u16 len, firstlen, secondlen;
+	dma_addr_t tb0_phys, tb1_phys, scratch_phys;
+	void *tb1_addr;
+	u16 len, tb1_len, tb2_len;
 	u8 wait_write_ptr = 0;
 	__le16 fc = hdr->frame_control;
 	u8 hdr_len = ieee80211_hdrlen(fc);
···
 		cpu_to_le16((u16)(QUEUE_TO_SEQ(txq_id) |
 			    INDEX_TO_SEQ(q->write_ptr)));
 
+	tb0_phys = iwl_pcie_get_scratchbuf_dma(txq, q->write_ptr);
+	scratch_phys = tb0_phys + sizeof(struct iwl_cmd_header) +
+		       offsetof(struct iwl_tx_cmd, scratch);
+
+	tx_cmd->dram_lsb_ptr = cpu_to_le32(scratch_phys);
+	tx_cmd->dram_msb_ptr = iwl_get_dma_hi_addr(scratch_phys);
+
 	/* Set up first empty entry in queue's array of Tx/cmd buffers */
 	out_meta = &txq->entries[q->write_ptr].meta;
 
 	/*
-	 * Use the first empty entry in this queue's command buffer array
-	 * to contain the Tx command and MAC header concatenated together
-	 * (payload data will be in another buffer).
-	 * Size of this varies, due to varying MAC header length.
-	 * If end is not dword aligned, we'll have 2 extra bytes at the end
-	 * of the MAC header (device reads on dword boundaries).
-	 * We'll tell device about this padding later.
+	 * The second TB (tb1) points to the remainder of the TX command
+	 * and the 802.11 header - dword aligned size
+	 * (This calculation modifies the TX command, so do it before the
+	 * setup of the first TB)
 	 */
-	len = sizeof(struct iwl_tx_cmd) +
-		sizeof(struct iwl_cmd_header) + hdr_len;
-	firstlen = (len + 3) & ~3;
+	len = sizeof(struct iwl_tx_cmd) + sizeof(struct iwl_cmd_header) +
+	      hdr_len - IWL_HCMD_SCRATCHBUF_SIZE;
+	tb1_len = (len + 3) & ~3;
 
 	/* Tell NIC about any 2-byte padding after MAC header */
-	if (firstlen != len)
+	if (tb1_len != len)
 		tx_cmd->tx_flags |= TX_CMD_FLG_MH_PAD_MSK;
 
-	/* Physical address of this Tx command's header (not MAC header!),
-	 * within command buffer array. */
-	txcmd_phys = dma_map_single(trans->dev,
-				    &dev_cmd->hdr, firstlen,
-				    DMA_BIDIRECTIONAL);
-	if (unlikely(dma_mapping_error(trans->dev, txcmd_phys)))
+	/* The first TB points to the scratchbuf data - min_copy bytes */
+	memcpy(&txq->scratchbufs[q->write_ptr], &dev_cmd->hdr,
+	       IWL_HCMD_SCRATCHBUF_SIZE);
+	iwl_pcie_txq_build_tfd(trans, txq, tb0_phys,
+			       IWL_HCMD_SCRATCHBUF_SIZE, 1);
+
+	/* there must be data left over for TB1 or this code must be changed */
+	BUILD_BUG_ON(sizeof(struct iwl_tx_cmd) < IWL_HCMD_SCRATCHBUF_SIZE);
+
+	/* map the data for TB1 */
+	tb1_addr = ((u8 *)&dev_cmd->hdr) + IWL_HCMD_SCRATCHBUF_SIZE;
+	tb1_phys = dma_map_single(trans->dev, tb1_addr, tb1_len, DMA_TO_DEVICE);
+	if (unlikely(dma_mapping_error(trans->dev, tb1_phys)))
 		goto out_err;
-	dma_unmap_addr_set(out_meta, mapping, txcmd_phys);
-	dma_unmap_len_set(out_meta, len, firstlen);
+	iwl_pcie_txq_build_tfd(trans, txq, tb1_phys, tb1_len, 0);
+
+	/*
+	 * Set up TFD's third entry to point directly to remainder
+	 * of skb, if any (802.11 null frames have no payload).
+	 */
+	tb2_len = skb->len - hdr_len;
+	if (tb2_len > 0) {
+		dma_addr_t tb2_phys = dma_map_single(trans->dev,
+						     skb->data + hdr_len,
+						     tb2_len, DMA_TO_DEVICE);
+		if (unlikely(dma_mapping_error(trans->dev, tb2_phys))) {
+			iwl_pcie_tfd_unmap(trans, out_meta,
+					   &txq->tfds[q->write_ptr]);
+			goto out_err;
+		}
+		iwl_pcie_txq_build_tfd(trans, txq, tb2_phys, tb2_len, 0);
+	}
+
+	/* Set up entry for this TFD in Tx byte-count array */
+	iwl_pcie_txq_update_byte_cnt_tbl(trans, txq, le16_to_cpu(tx_cmd->len));
+
+	trace_iwlwifi_dev_tx(trans->dev, skb,
+			     &txq->tfds[txq->q.write_ptr],
+			     sizeof(struct iwl_tfd),
+			     &dev_cmd->hdr, IWL_HCMD_SCRATCHBUF_SIZE + tb1_len,
+			     skb->data + hdr_len, tb2_len);
+	trace_iwlwifi_dev_tx_data(trans->dev, skb,
+				  skb->data + hdr_len, tb2_len);
 
 	if (!ieee80211_has_morefrags(fc)) {
 		txq->need_update = 1;
···
 		wait_write_ptr = 1;
 		txq->need_update = 0;
 	}
-
-	/* Set up TFD's 2nd entry to point directly to remainder of skb,
-	 * if any (802.11 null frames have no payload). */
-	secondlen = skb->len - hdr_len;
-	if (secondlen > 0) {
-		phys_addr = dma_map_single(trans->dev, skb->data + hdr_len,
-					   secondlen, DMA_TO_DEVICE);
-		if (unlikely(dma_mapping_error(trans->dev, phys_addr))) {
-			dma_unmap_single(trans->dev,
-					 dma_unmap_addr(out_meta, mapping),
-					 dma_unmap_len(out_meta, len),
-					 DMA_BIDIRECTIONAL);
-			goto out_err;
-		}
-	}
-
-	/* Attach buffers to TFD */
-	iwl_pcie_txq_build_tfd(trans, txq, txcmd_phys, firstlen, 1);
-	if (secondlen > 0)
-		iwl_pcie_txq_build_tfd(trans, txq, phys_addr, secondlen, 0);
-
-	scratch_phys = txcmd_phys + sizeof(struct iwl_cmd_header) +
-		       offsetof(struct iwl_tx_cmd, scratch);
-
-	/* take back ownership of DMA buffer to enable update */
-	dma_sync_single_for_cpu(trans->dev, txcmd_phys, firstlen,
-				DMA_BIDIRECTIONAL);
-	tx_cmd->dram_lsb_ptr = cpu_to_le32(scratch_phys);
-	tx_cmd->dram_msb_ptr = iwl_get_dma_hi_addr(scratch_phys);
-
-	/* Set up entry for this TFD in Tx byte-count array */
-	iwl_pcie_txq_update_byte_cnt_tbl(trans, txq, le16_to_cpu(tx_cmd->len));
-
-	dma_sync_single_for_device(trans->dev, txcmd_phys, firstlen,
-				   DMA_BIDIRECTIONAL);
-
-	trace_iwlwifi_dev_tx(trans->dev, skb,
-			     &txq->tfds[txq->q.write_ptr],
-			     sizeof(struct iwl_tfd),
-			     &dev_cmd->hdr, firstlen,
-			     skb->data + hdr_len, secondlen);
-	trace_iwlwifi_dev_tx_data(trans->dev, skb,
-				  skb->data + hdr_len, secondlen);
 
 	/* start timer if queue currently empty */
 	if (txq->need_update && q->read_ptr == q->write_ptr &&
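The `tb1_len = (len + 3) & ~3` computation above is the standard round-up-to-dword trick: the device reads TBs on 4-byte boundaries, and when rounding adds padding after the MAC header the driver must set `TX_CMD_FLG_MH_PAD_MSK`. A self-contained sketch of just that arithmetic (helper names are mine, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

/* Round a TB length up to the next multiple of 4, exactly as tx.c
 * does with "tb1_len = (len + 3) & ~3": adding 3 carries into the
 * next dword unless len was already aligned, then the mask clears
 * the low two bits. */
static uint16_t dword_align(uint16_t len)
{
	return (len + 3) & ~3;
}

/* When the rounding actually added bytes, the NIC must be told about
 * the padding after the MAC header (TX_CMD_FLG_MH_PAD_MSK). */
static int needs_pad_flag(uint16_t len)
{
	return dword_align(len) != len;
}
```

The same pair of lines existed before the patch as `firstlen`; only the length fed into it changed, since the first 16 bytes now live in the scratch buffer and are excluded from TB1.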
+24-4
drivers/rtc/rtc-mv.c
···
 #include <linux/platform_device.h>
 #include <linux/of.h>
 #include <linux/delay.h>
+#include <linux/clk.h>
 #include <linux/gfp.h>
 #include <linux/module.h>
···
 	struct rtc_device *rtc;
 	void __iomem *ioaddr;
 	int		irq;
+	struct clk	*clk;
 };
 
 static int mv_rtc_set_time(struct device *dev, struct rtc_time *tm)
···
 	struct rtc_plat_data *pdata;
 	resource_size_t size;
 	u32 rtc_time;
+	int ret = 0;
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	if (!res)
···
 	if (!pdata->ioaddr)
 		return -ENOMEM;
 
+	pdata->clk = devm_clk_get(&pdev->dev, NULL);
+	/* Not all SoCs require a clock.*/
+	if (!IS_ERR(pdata->clk))
+		clk_prepare_enable(pdata->clk);
+
 	/* make sure the 24 hours mode is enabled */
 	rtc_time = readl(pdata->ioaddr + RTC_TIME_REG_OFFS);
 	if (rtc_time & RTC_HOURS_12H_MODE) {
 		dev_err(&pdev->dev, "24 Hours mode not supported.\n");
-		return -EINVAL;
+		ret = -EINVAL;
+		goto out;
 	}
 
 	/* make sure it is actually functional */
···
 		rtc_time = readl(pdata->ioaddr + RTC_TIME_REG_OFFS);
 		if (rtc_time == 0x01000000) {
 			dev_err(&pdev->dev, "internal RTC not ticking\n");
-			return -ENODEV;
+			ret = -ENODEV;
+			goto out;
 		}
 	}
···
 	} else
 		pdata->rtc = rtc_device_register(pdev->name, &pdev->dev,
 						 &mv_rtc_ops, THIS_MODULE);
-	if (IS_ERR(pdata->rtc))
-		return PTR_ERR(pdata->rtc);
+	if (IS_ERR(pdata->rtc)) {
+		ret = PTR_ERR(pdata->rtc);
+		goto out;
+	}
 
 	if (pdata->irq >= 0) {
 		writel(0, pdata->ioaddr + RTC_ALARM_INTERRUPT_MASK_REG_OFFS);
···
 	}
 
 	return 0;
+out:
+	if (!IS_ERR(pdata->clk))
+		clk_disable_unprepare(pdata->clk);
+
+	return ret;
 }
 
 static int __exit mv_rtc_remove(struct platform_device *pdev)
···
 	device_init_wakeup(&pdev->dev, 0);
 
 	rtc_device_unregister(pdata->rtc);
+	if (!IS_ERR(pdata->clk))
+		clk_disable_unprepare(pdata->clk);
+
 	return 0;
 }
 
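The optional-clock handling above hinges on the kernel's `ERR_PTR` convention: `devm_clk_get()` returns either a usable pointer or a small negative errno encoded in the pointer's top range, so the driver can cache the result once and re-test it with `IS_ERR()` on every error path and in `remove()`. A userspace re-creation of that convention (the `MAX_ERRNO` value of 4095 matches the kernel's `include/linux/err.h`; the helper names are mine):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_ERRNO 4095

/* Encode a negative errno in a pointer, as the kernel's ERR_PTR()
 * does: the last 4095 values of the address space are reserved. */
static void *err_ptr(long err)
{
	return (void *)(uintptr_t)err;
}

/* Kernel IS_ERR(): anything in that reserved top range is an errno,
 * everything else (including NULL) is a real pointer. */
static int is_err(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

/* The rtc-mv policy: a missing clock is non-fatal, so only touch the
 * clock when the lookup actually succeeded. */
static int clock_should_be_enabled(const void *clk)
{
	return !is_err(clk);
}
```

Because the encoded errno survives in `pdata->clk`, no separate "have clock" flag is needed; the same `!IS_ERR()` test gates enable, disable on error, and disable in `remove()`.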
+10-6
drivers/staging/comedi/drivers/dt9812.c
···
 			   unsigned int *data)
 {
 	struct comedi_dt9812 *devpriv = dev->private;
+	unsigned int channel = CR_CHAN(insn->chanspec);
 	int n;
 	u8 bits = 0;
 
 	dt9812_digital_in(devpriv->slot, &bits);
 	for (n = 0; n < insn->n; n++)
-		data[n] = ((1 << insn->chanspec) & bits) != 0;
+		data[n] = ((1 << channel) & bits) != 0;
 	return n;
 }
···
 			    unsigned int *data)
 {
 	struct comedi_dt9812 *devpriv = dev->private;
+	unsigned int channel = CR_CHAN(insn->chanspec);
 	int n;
 	u8 bits = 0;
 
 	dt9812_digital_out_shadow(devpriv->slot, &bits);
 	for (n = 0; n < insn->n; n++) {
-		u8 mask = 1 << insn->chanspec;
+		u8 mask = 1 << channel;
 
 		bits &= ~mask;
 		if (data[n])
···
 			   unsigned int *data)
 {
 	struct comedi_dt9812 *devpriv = dev->private;
+	unsigned int channel = CR_CHAN(insn->chanspec);
 	int n;
 
 	for (n = 0; n < insn->n; n++) {
 		u16 value = 0;
 
-		dt9812_analog_in(devpriv->slot, insn->chanspec, &value,
-				 DT9812_GAIN_1);
+		dt9812_analog_in(devpriv->slot, channel, &value, DT9812_GAIN_1);
 		data[n] = value;
 	}
 	return n;
···
 			    unsigned int *data)
 {
 	struct comedi_dt9812 *devpriv = dev->private;
+	unsigned int channel = CR_CHAN(insn->chanspec);
 	int n;
 	u16 value;
 
 	for (n = 0; n < insn->n; n++) {
 		value = 0;
-		dt9812_analog_out_shadow(devpriv->slot, insn->chanspec, &value);
+		dt9812_analog_out_shadow(devpriv->slot, channel, &value);
 		data[n] = value;
 	}
 	return n;
···
 			    unsigned int *data)
 {
 	struct comedi_dt9812 *devpriv = dev->private;
+	unsigned int channel = CR_CHAN(insn->chanspec);
 	int n;
 
 	for (n = 0; n < insn->n; n++)
-		dt9812_analog_out(devpriv->slot, insn->chanspec, data[n]);
+		dt9812_analog_out(devpriv->slot, channel, data[n]);
 	return n;
 }
 
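The dt9812 fix matters because a comedi `chanspec` is not a bare channel number: it packs channel, range, and analog reference into one 32-bit word, so using the whole word as a shift count or channel index is wrong whenever range/aref are non-zero. A sketch of the packing (this mirrors comedi's `CR_PACK()`/`CR_CHAN()` macros; treat the exact bit layout here as illustrative rather than authoritative):

```c
#include <assert.h>

/* Illustrative mirror of comedi.h: channel in the low 16 bits, range
 * above it, aref above that. */
#define CR_PACK(chan, rng, aref) \
	((((aref) & 0x3) << 24) | (((rng) & 0xff) << 16) | (chan))
#define CR_CHAN(a) ((a) & 0xffff)

/* The patched code computes the DIO bit from the *unpacked* channel.
 * With the old "1 << insn->chanspec", any non-zero range or aref made
 * the shift count huge and selected a bogus (or undefined) bit. */
static unsigned int channel_mask(unsigned int chanspec)
{
	return 1u << CR_CHAN(chanspec);
}
```

The same unpacking bug applied to the analog calls, where the packed word was passed as a channel index to `dt9812_analog_in()` and friends.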
+19-12
drivers/staging/comedi/drivers/usbdux.c
···
 static int usbduxsub_start(struct usbduxsub *usbduxsub)
 {
 	int errcode = 0;
-	uint8_t local_transfer_buffer[16];
+	uint8_t *local_transfer_buffer;
+
+	local_transfer_buffer = kmalloc(1, GFP_KERNEL);
+	if (!local_transfer_buffer)
+		return -ENOMEM;
 
 	/* 7f92 to zero */
-	local_transfer_buffer[0] = 0;
+	*local_transfer_buffer = 0;
 	errcode = usb_control_msg(usbduxsub->usbdev,
 				  /* create a pipe for a control transfer */
 				  usb_sndctrlpipe(usbduxsub->usbdev, 0),
···
 				  1,
 				  /* Timeout */
 				  BULK_TIMEOUT);
-	if (errcode < 0) {
+	if (errcode < 0)
 		dev_err(&usbduxsub->interface->dev,
 			"comedi_: control msg failed (start)\n");
-		return errcode;
-	}
-	return 0;
+
+	kfree(local_transfer_buffer);
+	return errcode;
 }
 
 static int usbduxsub_stop(struct usbduxsub *usbduxsub)
 {
 	int errcode = 0;
+	uint8_t *local_transfer_buffer;
 
-	uint8_t local_transfer_buffer[16];
+	local_transfer_buffer = kmalloc(1, GFP_KERNEL);
+	if (!local_transfer_buffer)
+		return -ENOMEM;
 
 	/* 7f92 to one */
-	local_transfer_buffer[0] = 1;
+	*local_transfer_buffer = 1;
 	errcode = usb_control_msg(usbduxsub->usbdev,
 				  usb_sndctrlpipe(usbduxsub->usbdev, 0),
 				  /* bRequest, "Firmware" */
···
 				  1,
 				  /* Timeout */
 				  BULK_TIMEOUT);
-	if (errcode < 0) {
+	if (errcode < 0)
 		dev_err(&usbduxsub->interface->dev,
 			"comedi_: control msg failed (stop)\n");
-		return errcode;
-	}
-	return 0;
+
+	kfree(local_transfer_buffer);
+	return errcode;
 }
 
 static int usbduxsub_upload(struct usbduxsub *usbduxsub,
+18-12
drivers/staging/comedi/drivers/usbduxfast.c
···
 static int usbduxfastsub_start(struct usbduxfastsub_s *udfs)
 {
 	int ret;
-	unsigned char local_transfer_buffer[16];
+	unsigned char *local_transfer_buffer;
+
+	local_transfer_buffer = kmalloc(1, GFP_KERNEL);
+	if (!local_transfer_buffer)
+		return -ENOMEM;
 
 	/* 7f92 to zero */
-	local_transfer_buffer[0] = 0;
+	*local_transfer_buffer = 0;
 	/* bRequest, "Firmware" */
 	ret = usb_control_msg(udfs->usbdev, usb_sndctrlpipe(udfs->usbdev, 0),
 			      USBDUXFASTSUB_FIRMWARE,
···
 			      local_transfer_buffer,
 			      1,	/* Length */
 			      EZTIMEOUT);	/* Timeout */
-	if (ret < 0) {
+	if (ret < 0)
 		dev_err(&udfs->interface->dev,
 			"control msg failed (start)\n");
-		return ret;
-	}
 
-	return 0;
+	kfree(local_transfer_buffer);
+	return ret;
 }
 
 static int usbduxfastsub_stop(struct usbduxfastsub_s *udfs)
 {
 	int ret;
-	unsigned char local_transfer_buffer[16];
+	unsigned char *local_transfer_buffer;
+
+	local_transfer_buffer = kmalloc(1, GFP_KERNEL);
+	if (!local_transfer_buffer)
+		return -ENOMEM;
 
 	/* 7f92 to one */
-	local_transfer_buffer[0] = 1;
+	*local_transfer_buffer = 1;
 	/* bRequest, "Firmware" */
 	ret = usb_control_msg(udfs->usbdev, usb_sndctrlpipe(udfs->usbdev, 0),
 			      USBDUXFASTSUB_FIRMWARE,
···
 			      0x0000,	/* Index */
 			      local_transfer_buffer, 1,	/* Length */
 			      EZTIMEOUT);	/* Timeout */
-	if (ret < 0) {
+	if (ret < 0)
 		dev_err(&udfs->interface->dev,
 			"control msg failed (stop)\n");
-		return ret;
-	}
 
-	return 0;
+	kfree(local_transfer_buffer);
+	return ret;
 }
 
 static int usbduxfastsub_upload(struct usbduxfastsub_s *udfs,
+17-10
drivers/staging/comedi/drivers/usbduxsigma.c
···
 static int usbduxsub_start(struct usbduxsub *usbduxsub)
 {
 	int errcode = 0;
-	uint8_t local_transfer_buffer[16];
+	uint8_t *local_transfer_buffer;
+
+	local_transfer_buffer = kmalloc(16, GFP_KERNEL);
+	if (!local_transfer_buffer)
+		return -ENOMEM;
 
 	/* 7f92 to zero */
 	local_transfer_buffer[0] = 0;
···
 				  1,
 				  /* Timeout */
 				  BULK_TIMEOUT);
-	if (errcode < 0) {
+	if (errcode < 0)
 		dev_err(&usbduxsub->interface->dev,
 			"comedi_: control msg failed (start)\n");
-		return errcode;
-	}
-	return 0;
+
+	kfree(local_transfer_buffer);
+	return errcode;
 }
 
 static int usbduxsub_stop(struct usbduxsub *usbduxsub)
 {
 	int errcode = 0;
+	uint8_t *local_transfer_buffer;
 
-	uint8_t local_transfer_buffer[16];
+	local_transfer_buffer = kmalloc(16, GFP_KERNEL);
+	if (!local_transfer_buffer)
+		return -ENOMEM;
 
 	/* 7f92 to one */
 	local_transfer_buffer[0] = 1;
···
 				  1,
 				  /* Timeout */
 				  BULK_TIMEOUT);
-	if (errcode < 0) {
+	if (errcode < 0)
 		dev_err(&usbduxsub->interface->dev,
 			"comedi_: control msg failed (stop)\n");
-		return errcode;
-	}
-	return 0;
+
+	kfree(local_transfer_buffer);
+	return errcode;
 }
 
 static int usbduxsub_upload(struct usbduxsub *usbduxsub,
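All three usbdux patches apply the same rule: the buffer passed to `usb_control_msg()` must be heap-allocated (`kmalloc`), because the USB core may DMA-map it and on-stack memory is not guaranteed to be DMA-able. The resulting control flow also changes subtly: the buffer is freed on every path, and the errno from the transfer is returned directly instead of being rewritten to 0. A userspace sketch of that shape, with a hypothetical `fake_control_msg()` standing in for the real USB call:

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for usb_control_msg(): "sends" one byte and reports how it
 * went -- here it simply echoes the byte back as the result. */
static int fake_control_msg(const unsigned char *buf)
{
	return *buf;
}

/* The shape of the fix in usbduxsub_start()/_stop(): allocate the
 * transfer buffer from the heap (kmalloc in the driver -- stack memory
 * is not DMA-safe), use it, and free it on success and failure alike. */
static int send_firmware_byte(unsigned char value)
{
	unsigned char *buf;
	int ret;

	buf = malloc(1);	/* kmalloc(1, GFP_KERNEL) in the driver */
	if (!buf)
		return -12;	/* -ENOMEM */

	*buf = value;
	ret = fake_control_msg(buf);

	free(buf);		/* single exit: freed on every path */
	return ret;
}
```

Note the size difference between the drivers: usbdux and usbduxfast shrink the allocation to the 1 byte actually transferred, while usbduxsigma keeps the original 16 bytes.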
+12-11
drivers/staging/imx-drm/ipuv3-crtc.c
···
 		goto err_out;
 	}
 
-	ipu_crtc->irq = ipu_idmac_channel_irq(ipu, ipu_crtc->ipu_ch,
-					      IPU_IRQ_EOF);
-	ret = devm_request_irq(ipu_crtc->dev, ipu_crtc->irq, ipu_irq_handler, 0,
-			       "imx_drm", ipu_crtc);
-	if (ret < 0) {
-		dev_err(ipu_crtc->dev, "irq request failed with %d.\n", ret);
-		goto err_out;
-	}
-
-	disable_irq(ipu_crtc->irq);
-
 	return 0;
 err_out:
 	ipu_put_resources(ipu_crtc);
···
 static int ipu_crtc_init(struct ipu_crtc *ipu_crtc,
 			 struct ipu_client_platformdata *pdata)
 {
+	struct ipu_soc *ipu = dev_get_drvdata(ipu_crtc->dev->parent);
 	int ret;
 
 	ret = ipu_get_resources(ipu_crtc, pdata);
···
 		dev_err(ipu_crtc->dev, "adding crtc failed with %d.\n", ret);
 		goto err_put_resources;
 	}
+
+	ipu_crtc->irq = ipu_idmac_channel_irq(ipu, ipu_crtc->ipu_ch,
+					      IPU_IRQ_EOF);
+	ret = devm_request_irq(ipu_crtc->dev, ipu_crtc->irq, ipu_irq_handler, 0,
+			       "imx_drm", ipu_crtc);
+	if (ret < 0) {
+		dev_err(ipu_crtc->dev, "irq request failed with %d.\n", ret);
+		goto err_put_resources;
+	}
+
+	disable_irq(ipu_crtc->irq);
 
 	return 0;
+26-44
drivers/staging/tidspbridge/rmgr/drv.c
···
         struct node_res_object **node_res_obj =
                         (struct node_res_object **)node_resource;
         struct process_context *ctxt = (struct process_context *)process_ctxt;
-        int status = 0;
         int retval;
 
         *node_res_obj = kzalloc(sizeof(struct node_res_object), GFP_KERNEL);
-        if (!*node_res_obj) {
-                status = -ENOMEM;
-                goto func_end;
-        }
+        if (!*node_res_obj)
+                return -ENOMEM;
 
         (*node_res_obj)->node = hnode;
-        retval = idr_get_new(ctxt->node_id, *node_res_obj,
-                        &(*node_res_obj)->id);
-        if (retval == -EAGAIN) {
-                if (!idr_pre_get(ctxt->node_id, GFP_KERNEL)) {
-                        pr_err("%s: OUT OF MEMORY\n", __func__);
-                        status = -ENOMEM;
-                        goto func_end;
-                }
-
-                retval = idr_get_new(ctxt->node_id, *node_res_obj,
-                                &(*node_res_obj)->id);
+        retval = idr_alloc(ctxt->node_id, *node_res_obj, 0, 0, GFP_KERNEL);
+        if (retval >= 0) {
+                (*node_res_obj)->id = retval;
+                return 0;
         }
-        if (retval) {
+
+        kfree(*node_res_obj);
+
+        if (retval == -ENOSPC) {
                 pr_err("%s: FAILED, IDR is FULL\n", __func__);
-                status = -EFAULT;
+                return -EFAULT;
+        } else {
+                pr_err("%s: OUT OF MEMORY\n", __func__);
+                return -ENOMEM;
         }
-func_end:
-        if (status)
-                kfree(*node_res_obj);
-
-        return status;
 }
 
 /* Release all Node resources and its context
···
         struct strm_res_object **pstrm_res =
                         (struct strm_res_object **)strm_res;
         struct process_context *ctxt = (struct process_context *)process_ctxt;
-        int status = 0;
         int retval;
 
         *pstrm_res = kzalloc(sizeof(struct strm_res_object), GFP_KERNEL);
-        if (*pstrm_res == NULL) {
-                status = -EFAULT;
-                goto func_end;
-        }
+        if (*pstrm_res == NULL)
+                return -EFAULT;
 
         (*pstrm_res)->stream = stream_obj;
-        retval = idr_get_new(ctxt->stream_id, *pstrm_res,
-                        &(*pstrm_res)->id);
-        if (retval == -EAGAIN) {
-                if (!idr_pre_get(ctxt->stream_id, GFP_KERNEL)) {
-                        pr_err("%s: OUT OF MEMORY\n", __func__);
-                        status = -ENOMEM;
-                        goto func_end;
-                }
-
-                retval = idr_get_new(ctxt->stream_id, *pstrm_res,
-                                &(*pstrm_res)->id);
+        retval = idr_alloc(ctxt->stream_id, *pstrm_res, 0, 0, GFP_KERNEL);
+        if (retval >= 0) {
+                (*pstrm_res)->id = retval;
+                return 0;
         }
-        if (retval) {
+
+        if (retval == -ENOSPC) {
                 pr_err("%s: FAILED, IDR is FULL\n", __func__);
-                status = -EPERM;
+                return -EPERM;
+        } else {
+                pr_err("%s: OUT OF MEMORY\n", __func__);
+                return -ENOMEM;
         }
-
-func_end:
-        return status;
 }
 
 static int drv_proc_free_strm_res(int id, void *p, void *process_ctxt)
···
         if (device->flags & DEVICE_FLAGS_OPENED)
                 device_close(device->dev);
 
-        usb_put_dev(interface_to_usbdev(intf));
-
         return 0;
 }
···
 
         if (!device || !device->dev)
                 return -ENODEV;
-
-        usb_get_dev(interface_to_usbdev(intf));
 
         if (!(device->flags & DEVICE_FLAGS_OPENED))
                 device_open(device->dev);
+10-15
drivers/staging/zcache/ramster/tcp.c
···
 
 static int r2net_prep_nsw(struct r2net_node *nn, struct r2net_status_wait *nsw)
 {
-        int ret = 0;
+        int ret;
 
-        do {
-                if (!idr_pre_get(&nn->nn_status_idr, GFP_ATOMIC)) {
-                        ret = -EAGAIN;
-                        break;
-                }
-                spin_lock(&nn->nn_lock);
-                ret = idr_get_new(&nn->nn_status_idr, nsw, &nsw->ns_id);
-                if (ret == 0)
-                        list_add_tail(&nsw->ns_node_item,
-                                &nn->nn_status_list);
-                spin_unlock(&nn->nn_lock);
-        } while (ret == -EAGAIN);
+        spin_lock(&nn->nn_lock);
+        ret = idr_alloc(&nn->nn_status_idr, nsw, 0, 0, GFP_ATOMIC);
+        if (ret >= 0) {
+                nsw->ns_id = ret;
+                list_add_tail(&nsw->ns_node_item, &nn->nn_status_list);
+        }
+        spin_unlock(&nn->nn_lock);
 
-        if (ret == 0) {
+        if (ret >= 0) {
                 init_waitqueue_head(&nsw->ns_wq);
                 nsw->ns_sys_status = R2NET_ERR_NONE;
                 nsw->ns_status = 0;
+                return 0;
         }
-
         return ret;
 }
+51-1
drivers/tty/serial/8250/8250.c
···
         },
         [PORT_8250_CIR] = {
                 .name           = "CIR port"
-        }
+        },
+        [PORT_ALTR_16550_F32] = {
+                .name           = "Altera 16550 FIFO32",
+                .fifo_size      = 32,
+                .tx_loadsz      = 32,
+                .fcr            = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+                .flags          = UART_CAP_FIFO | UART_CAP_AFE,
+        },
+        [PORT_ALTR_16550_F64] = {
+                .name           = "Altera 16550 FIFO64",
+                .fifo_size      = 64,
+                .tx_loadsz      = 64,
+                .fcr            = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+                .flags          = UART_CAP_FIFO | UART_CAP_AFE,
+        },
+        [PORT_ALTR_16550_F128] = {
+                .name           = "Altera 16550 FIFO128",
+                .fifo_size      = 128,
+                .tx_loadsz      = 128,
+                .fcr            = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+                .flags          = UART_CAP_FIFO | UART_CAP_AFE,
+        },
 };
 
 /* Uart divisor latch read */
···
 MODULE_PARM_DESC(probe_rsa, "Probe I/O ports for RSA");
 #endif
 MODULE_ALIAS_CHARDEV_MAJOR(TTY_MAJOR);
+
+#ifndef MODULE
+/* This module was renamed to 8250_core in 3.7.  Keep the old "8250" name
+ * working as well for the module options so we don't break people.  We
+ * need to keep the names identical and the convenient macros will happily
+ * refuse to let us do that by failing the build with redefinition errors
+ * of global variables.  So we stick them inside a dummy function to avoid
+ * those conflicts.  The options still get parsed, and the redefined
+ * MODULE_PARAM_PREFIX lets us keep the "8250." syntax alive.
+ *
+ * This is hacky.  I'm sorry.
+ */
+static void __used s8250_options(void)
+{
+#undef MODULE_PARAM_PREFIX
+#define MODULE_PARAM_PREFIX "8250."
+
+        module_param_cb(share_irqs, &param_ops_uint, &share_irqs, 0644);
+        module_param_cb(nr_uarts, &param_ops_uint, &nr_uarts, 0644);
+        module_param_cb(skip_txen_test, &param_ops_uint, &skip_txen_test, 0644);
+#ifdef CONFIG_SERIAL_8250_RSA
+        __module_param_call(MODULE_PARAM_PREFIX, probe_rsa,
+                &param_array_ops, .arr = &__param_arr_probe_rsa,
+                0444, -1);
+#endif
+}
+#else
+MODULE_ALIAS("8250");
+#endif
···
 {
         struct uart_8250_port uart;
         int ret, line, flags = dev_id->driver_data;
+        struct resource *res = NULL;
 
         if (flags & UNKNOWN_DEV) {
                 ret = serial_pnp_guess_board(dev);
···
         memset(&uart, 0, sizeof(uart));
         if (pnp_irq_valid(dev, 0))
                 uart.port.irq = pnp_irq(dev, 0);
-        if ((flags & CIR_PORT) && pnp_port_valid(dev, 2)) {
-                uart.port.iobase = pnp_port_start(dev, 2);
-                uart.port.iotype = UPIO_PORT;
-        } else if (pnp_port_valid(dev, 0)) {
-                uart.port.iobase = pnp_port_start(dev, 0);
+        if ((flags & CIR_PORT) && pnp_port_valid(dev, 2))
+                res = pnp_get_resource(dev, IORESOURCE_IO, 2);
+        else if (pnp_port_valid(dev, 0))
+                res = pnp_get_resource(dev, IORESOURCE_IO, 0);
+        if (pnp_resource_enabled(res)) {
+                uart.port.iobase = res->start;
                 uart.port.iotype = UPIO_PORT;
         } else if (pnp_mem_valid(dev, 0)) {
                 uart.port.mapbase = pnp_mem_start(dev, 0);
+2-2
drivers/tty/serial/Kconfig
···
 config SERIAL_SAMSUNG_UARTS_4
         bool
         depends on PLAT_SAMSUNG
-        default y if !(CPU_S3C2410 || SERIAL_S3C2412 || CPU_S3C2440 || CPU_S3C2442)
+        default y if !(CPU_S3C2410 || CPU_S3C2412 || CPU_S3C2440 || CPU_S3C2442)
         help
           Internal node for the common case of 4 Samsung compatible UARTs
 
 config SERIAL_SAMSUNG_UARTS
         int
         depends on PLAT_SAMSUNG
-        default 6 if ARCH_S5P6450
+        default 6 if CPU_S5P6450
         default 4 if SERIAL_SAMSUNG_UARTS_4 || CPU_S3C2416
         default 3
         help
+4-4
drivers/tty/serial/bcm63xx_uart.c
···
  */
 static void bcm_uart_do_rx(struct uart_port *port)
 {
-        struct tty_port *port = &port->state->port;
+        struct tty_port *tty_port = &port->state->port;
         unsigned int max_count;
 
         /* limit number of char read in interrupt, should not be
···
                 bcm_uart_writel(port, val, UART_CTL_REG);
 
                 port->icount.overrun++;
-                tty_insert_flip_char(port, 0, TTY_OVERRUN);
+                tty_insert_flip_char(tty_port, 0, TTY_OVERRUN);
         }
 
         if (!(iestat & UART_IR_STAT(UART_IR_RXNOTEMPTY)))
···
 
 
                 if ((cstat & port->ignore_status_mask) == 0)
-                        tty_insert_flip_char(port, c, flag);
+                        tty_insert_flip_char(tty_port, c, flag);
 
         } while (--max_count);
 
-        tty_flip_buffer_push(port);
+        tty_flip_buffer_push(tty_port);
 }
 
 /*
···
 /**
  * usb_composite_probe() - register a composite driver
  * @driver: the driver to register
- * @bind: the callback used to allocate resources that are shared across the
- *        whole device, such as string IDs, and add its configurations using
- *        @usb_add_config(). This may fail by returning a negative errno
- *        value; it should return zero on successful initialization.
+ *
  * Context: single threaded during gadget setup
  *
  * This function is used to register drivers using the composite driver
···
         snd = &card->playback;
         snd->filp = filp_open(fn_play, O_WRONLY, 0);
         if (IS_ERR(snd->filp)) {
+                int ret = PTR_ERR(snd->filp);
+
                 ERROR(card, "No such PCM playback device: %s\n", fn_play);
                 snd->filp = NULL;
+                return ret;
         }
         pcm_file = snd->filp->private_data;
         snd->substream = pcm_file->substream;
+2-4
drivers/usb/host/ehci-hcd.c
···
                 /* guard against (alleged) silicon errata */
                 if (cmd & CMD_IAAD)
                         ehci_dbg(ehci, "IAA with IAAD still set?\n");
-                if (ehci->async_iaa) {
+                if (ehci->async_iaa)
                         COUNT(ehci->stats.iaa);
-                        end_unlink_async(ehci);
-                } else
-                        ehci_dbg(ehci, "IAA with nothing unlinked?\n");
+                end_unlink_async(ehci);
         }
 
         /* remote wakeup [4.3.1] */
+27-9
drivers/usb/host/ehci-q.c
···
          * qtd is updated in qh_completions(). Update the QH
          * overlay here.
          */
-        if (cpu_to_hc32(ehci, qtd->qtd_dma) == qh->hw->hw_current) {
+        if (qh->hw->hw_token & ACTIVE_BIT(ehci)) {
                 qh->hw->hw_qtd_next = qtd->hw_next;
                 qtd = NULL;
         }
···
                 else if (last_status == -EINPROGRESS && !urb->unlinked)
                         continue;
 
-                /* qh unlinked; token in overlay may be most current */
-                if (state == QH_STATE_IDLE
-                                && cpu_to_hc32(ehci, qtd->qtd_dma)
-                                        == hw->hw_current) {
+                /*
+                 * If this was the active qtd when the qh was unlinked
+                 * and the overlay's token is active, then the overlay
+                 * hasn't been written back to the qtd yet so use its
+                 * token instead of the qtd's.  After the qtd is
+                 * processed and removed, the overlay won't be valid
+                 * any more.
+                 */
+                if (state == QH_STATE_IDLE &&
+                                qh->qtd_list.next == &qtd->qtd_list &&
+                                (hw->hw_token & ACTIVE_BIT(ehci))) {
                         token = hc32_to_cpu(ehci, hw->hw_token);
+                        hw->hw_token &= ~ACTIVE_BIT(ehci);
 
                         /* An unlink may leave an incomplete
                          * async transaction in the TT buffer.
···
         struct ehci_qh *prev;
 
         /* Add to the end of the list of QHs waiting for the next IAAD */
-        qh->qh_state = QH_STATE_UNLINK;
+        qh->qh_state = QH_STATE_UNLINK_WAIT;
         if (ehci->async_unlink)
                 ehci->async_unlink_last->unlink_next = qh;
         else
···
 
         /* Do only the first waiting QH (nVidia bug?) */
         qh = ehci->async_unlink;
-        ehci->async_iaa = qh;
-        ehci->async_unlink = qh->unlink_next;
-        qh->unlink_next = NULL;
+
+        /*
+         * Intel (?) bug: The HC can write back the overlay region
+         * even after the IAA interrupt occurs.  In self-defense,
+         * always go through two IAA cycles for each QH.
+         */
+        if (qh->qh_state == QH_STATE_UNLINK_WAIT) {
+                qh->qh_state = QH_STATE_UNLINK;
+        } else {
+                ehci->async_iaa = qh;
+                ehci->async_unlink = qh->unlink_next;
+                qh->unlink_next = NULL;
+        }
 
         /* Make sure the unlinks are all visible to the hardware */
         wmb();
-5
drivers/usb/musb/Kconfig
···
 config USB_MUSB_HDRC
         tristate 'Inventra Highspeed Dual Role Controller (TI, ADI, ...)'
         depends on USB && USB_GADGET
-        select NOP_USB_XCEIV if (ARCH_DAVINCI || MACH_OMAP3EVM || BLACKFIN)
-        select NOP_USB_XCEIV if (SOC_TI81XX || SOC_AM33XX)
-        select TWL4030_USB if MACH_OMAP_3430SDP
-        select TWL6030_USB if MACH_OMAP_4430SDP || MACH_OMAP4_PANDA
-        select OMAP_CONTROL_USB if MACH_OMAP_4430SDP || MACH_OMAP4_PANDA
         select USB_OTG_UTILS
         help
           Say Y here if your system has a dual role high speed USB
-6
drivers/usb/musb/musb_core.c
···
 
 /*-------------------------------------------------------------------------*/
 
-#ifdef CONFIG_SYSFS
-
 static ssize_t
 musb_mode_show(struct device *dev, struct device_attribute *attr, char *buf)
 {
···
 static const struct attribute_group musb_attr_group = {
         .attrs = musb_attributes,
 };
-
-#endif  /* sysfs */
 
 /* Only used to provide driver mode change events */
 static void musb_irq_work(struct work_struct *data)
···
         if (status < 0)
                 goto fail4;
 
-#ifdef CONFIG_SYSFS
         status = sysfs_create_group(&musb->controller->kobj, &musb_attr_group);
         if (status)
                 goto fail5;
-#endif
 
         pm_runtime_put(musb->controller);
+8-4
drivers/usb/musb/omap2430.c
···
 };
 #define glue_to_musb(g)         platform_get_drvdata(g->musb)
 
-struct omap2430_glue            *_glue;
+static struct omap2430_glue     *_glue;
 
 static struct timer_list musb_idle_timer;
 
···
 {
         struct omap2430_glue *glue = _glue;
 
-        if (glue && glue_to_musb(glue)) {
-                glue->status = status;
-        } else {
+        if (!glue) {
+                pr_err("%s: musb core is not yet initialized\n", __func__);
+                return;
+        }
+        glue->status = status;
+
+        if (!glue_to_musb(glue)) {
                 pr_err("%s: musb core is not yet ready\n", __func__);
                 return;
         }
+7-3
drivers/usb/otg/otg.c
···
         spin_lock_irqsave(&phy_lock, flags);
 
         phy = __usb_find_phy(&phy_list, type);
-        if (IS_ERR(phy)) {
+        if (IS_ERR(phy) || !try_module_get(phy->dev->driver->owner)) {
                 pr_err("unable to find transceiver of type %s\n",
                         usb_phy_type_string(type));
                 goto err0;
···
         spin_lock_irqsave(&phy_lock, flags);
 
         phy = __usb_find_phy_dev(dev, &phy_bind_list, index);
-        if (IS_ERR(phy)) {
+        if (IS_ERR(phy) || !try_module_get(phy->dev->driver->owner)) {
                 pr_err("unable to find transceiver\n");
                 goto err0;
         }
···
  */
 void usb_put_phy(struct usb_phy *x)
 {
-        if (x)
+        if (x) {
+                struct module *owner = x->dev->driver->owner;
+
                 put_device(x->dev);
+                module_put(owner);
+        }
 }
 EXPORT_SYMBOL(usb_put_phy);
+9-15
drivers/usb/phy/omap-control-usb.c
···
 
         res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
                 "control_dev_conf");
-        control_usb->dev_conf = devm_request_and_ioremap(&pdev->dev, res);
-        if (!control_usb->dev_conf) {
-                dev_err(&pdev->dev, "Failed to obtain io memory\n");
-                return -EADDRNOTAVAIL;
-        }
+        control_usb->dev_conf = devm_ioremap_resource(&pdev->dev, res);
+        if (IS_ERR(control_usb->dev_conf))
+                return PTR_ERR(control_usb->dev_conf);
 
         if (control_usb->type == OMAP_CTRL_DEV_TYPE1) {
                 res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
                         "otghs_control");
-                control_usb->otghs_control = devm_request_and_ioremap(
+                control_usb->otghs_control = devm_ioremap_resource(
                         &pdev->dev, res);
-                if (!control_usb->otghs_control) {
-                        dev_err(&pdev->dev, "Failed to obtain io memory\n");
-                        return -EADDRNOTAVAIL;
-                }
+                if (IS_ERR(control_usb->otghs_control))
+                        return PTR_ERR(control_usb->otghs_control);
         }
 
         if (control_usb->type == OMAP_CTRL_DEV_TYPE2) {
                 res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
                         "phy_power_usb");
-                control_usb->phy_power = devm_request_and_ioremap(
+                control_usb->phy_power = devm_ioremap_resource(
                         &pdev->dev, res);
-                if (!control_usb->phy_power) {
-                        dev_dbg(&pdev->dev, "Failed to obtain io memory\n");
-                        return -EADDRNOTAVAIL;
-                }
+                if (IS_ERR(control_usb->phy_power))
+                        return PTR_ERR(control_usb->phy_power);
 
                 control_usb->sys_clk = devm_clk_get(control_usb->dev,
                         "sys_clkin");
+3-5
drivers/usb/phy/omap-usb3.c
···
         }
 
         res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "pll_ctrl");
-        phy->pll_ctrl_base = devm_request_and_ioremap(&pdev->dev, res);
-        if (!phy->pll_ctrl_base) {
-                dev_err(&pdev->dev, "ioremap of pll_ctrl failed\n");
-                return -ENOMEM;
-        }
+        phy->pll_ctrl_base = devm_ioremap_resource(&pdev->dev, res);
+        if (IS_ERR(phy->pll_ctrl_base))
+                return PTR_ERR(phy->pll_ctrl_base);
 
         phy->dev = &pdev->dev;
···
         return 0;
 }
 
-/* This places the HUAWEI usb dongles in multi-port mode */
-static int usb_stor_huawei_feature_init(struct us_data *us)
+/* This places the HUAWEI E220 devices in multi-port mode */
+int usb_stor_huawei_e220_init(struct us_data *us)
 {
         int result;
···
                                       0x01, 0x0, NULL, 0x0, 1000);
         US_DEBUGP("Huawei mode set result is %d\n", result);
         return 0;
-}
-
-/*
- * It will send a scsi switch command called rewind' to huawei dongle.
- * When the dongle receives this command at the first time,
- * it will reboot immediately. After rebooted, it will ignore this command.
- * So it is unnecessary to read its response.
- */
-static int usb_stor_huawei_scsi_init(struct us_data *us)
-{
-        int result = 0;
-        int act_len = 0;
-        struct bulk_cb_wrap *bcbw = (struct bulk_cb_wrap *) us->iobuf;
-        char rewind_cmd[] = {0x11, 0x06, 0x20, 0x00, 0x00, 0x01, 0x01, 0x00,
-                        0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
-
-        bcbw->Signature = cpu_to_le32(US_BULK_CB_SIGN);
-        bcbw->Tag = 0;
-        bcbw->DataTransferLength = 0;
-        bcbw->Flags = bcbw->Lun = 0;
-        bcbw->Length = sizeof(rewind_cmd);
-        memset(bcbw->CDB, 0, sizeof(bcbw->CDB));
-        memcpy(bcbw->CDB, rewind_cmd, sizeof(rewind_cmd));
-
-        result = usb_stor_bulk_transfer_buf(us, us->send_bulk_pipe, bcbw,
-                                        US_BULK_CB_WRAP_LEN, &act_len);
-        US_DEBUGP("transfer actual length=%d, result=%d\n", act_len, result);
-        return result;
-}
-
-/*
- * It tries to find the supported Huawei USB dongles.
- * In Huawei, they assign the following product IDs
- * for all of their mobile broadband dongles,
- * including the new dongles in the future.
- * So if the product ID is not included in this list,
- * it means it is not Huawei's mobile broadband dongles.
- */
-static int usb_stor_huawei_dongles_pid(struct us_data *us)
-{
-        struct usb_interface_descriptor *idesc;
-        int idProduct;
-
-        idesc = &us->pusb_intf->cur_altsetting->desc;
-        idProduct = le16_to_cpu(us->pusb_dev->descriptor.idProduct);
-        /* The first port is CDROM,
-         * means the dongle in the single port mode,
-         * and a switch command is required to be sent. */
-        if (idesc && idesc->bInterfaceNumber == 0) {
-                if ((idProduct == 0x1001)
-                        || (idProduct == 0x1003)
-                        || (idProduct == 0x1004)
-                        || (idProduct >= 0x1401 && idProduct <= 0x1500)
-                        || (idProduct >= 0x1505 && idProduct <= 0x1600)
-                        || (idProduct >= 0x1c02 && idProduct <= 0x2202)) {
-                        return 1;
-                }
-        }
-        return 0;
-}
-
-int usb_stor_huawei_init(struct us_data *us)
-{
-        int result = 0;
-
-        if (usb_stor_huawei_dongles_pid(us)) {
-                if (le16_to_cpu(us->pusb_dev->descriptor.idProduct) >= 0x1446)
-                        result = usb_stor_huawei_scsi_init(us);
-                else
-                        result = usb_stor_huawei_feature_init(us);
-        }
-        return result;
 }
+2-2
drivers/usb/storage/initializers.h
···
  * flash reader */
 int usb_stor_ucr61s2b_init(struct us_data *us);
 
-/* This places the HUAWEI usb dongles in multi-port mode */
-int usb_stor_huawei_init(struct us_data *us);
+/* This places the HUAWEI E220 devices in multi-port mode */
+int usb_stor_huawei_e220_init(struct us_data *us);
···
 
                 if ((qg->lim_flags & BTRFS_QGROUP_LIMIT_MAX_RFER) &&
                     qg->reserved + qg->rfer + num_bytes >
-                    qg->max_rfer)
+                    qg->max_rfer) {
                         ret = -EDQUOT;
+                        goto out;
+                }
 
                 if ((qg->lim_flags & BTRFS_QGROUP_LIMIT_MAX_EXCL) &&
                     qg->reserved + qg->excl + num_bytes >
-                    qg->max_excl)
+                    qg->max_excl) {
                         ret = -EDQUOT;
+                        goto out;
+                }
 
                 list_for_each_entry(glist, &qg->groups, next_group) {
                         ulist_add(ulist, glist->group->qgroupid,
                                   (uintptr_t)glist->group, GFP_ATOMIC);
                 }
         }
-        if (ret)
-                goto out;
 
         /*
          * no limits exceeded, now record the reservation into all qgroups
+5-6
fs/btrfs/transaction.c
···
 
         btrfs_trans_release_metadata(trans, root);
         trans->block_rsv = NULL;
-        /*
-         * the same root has to be passed to start_transaction and
-         * end_transaction. Subvolume quota depends on this.
-         */
-        WARN_ON(trans->root != root);
 
         if (trans->qgroup_reserved) {
-                btrfs_qgroup_free(root, trans->qgroup_reserved);
+                /*
+                 * the same root has to be passed here between start_transaction
+                 * and end_transaction. Subvolume quota depends on this.
+                 */
+                btrfs_qgroup_free(trans->root, trans->qgroup_reserved);
                 trans->qgroup_reserved = 0;
         }
+6
fs/btrfs/volumes.c
···
                 __btrfs_close_devices(fs_devices);
                 free_fs_devices(fs_devices);
         }
+        /*
+         * Wait for rcu kworkers under __btrfs_close_devices
+         * to finish all blkdev_puts so device is really
+         * free when umount is done.
+         */
+        rcu_barrier();
         return ret;
 }
···
         }
         *ret_pointer = iov;
 
+        ret = -EFAULT;
+        if (!access_ok(VERIFY_READ, uvector, nr_segs*sizeof(*uvector)))
+                goto out;
+
         /*
          * Single unix specification:
          * We should -EINVAL if an element length is not >= 0 and fitting an
···
         if (!file->f_op)
                 goto out;
 
-        ret = -EFAULT;
-        if (!access_ok(VERIFY_READ, uvector, nr_segs*sizeof(*uvector)))
-                goto out;
-
-        tot_len = compat_rw_copy_check_uvector(type, uvector, nr_segs,
+        ret = compat_rw_copy_check_uvector(type, uvector, nr_segs,
                                                UIO_FASTIOV, iovstack, &iov);
-        if (tot_len == 0) {
-                ret = 0;
+        if (ret <= 0)
                 goto out;
-        }
 
+        tot_len = ret;
         ret = rw_verify_area(type, file, pos, tot_len);
         if (ret < 0)
                 goto out;
-1
fs/ext2/ialloc.c
···
          * as writing the quota to disk may need the lock as well.
          */
         /* Quota is already initialized in iput() */
-        ext2_xattr_delete_inode(inode);
         dquot_free_inode(inode);
         dquot_drop(inode);
+2
fs/ext2/inode.c
···
 #include "ext2.h"
 #include "acl.h"
 #include "xip.h"
+#include "xattr.h"
 
 static int __ext2_write_inode(struct inode *inode, int do_sync);
···
                 inode->i_size = 0;
                 if (inode->i_blocks)
                         ext2_truncate_blocks(inode, 0);
+                ext2_xattr_delete_inode(inode);
         }
 
         invalidate_inode_buffers(inode);
+2-2
fs/ext3/super.c
···
         return bdev;
 
 fail:
-        ext3_msg(sb, "error: failed to open journal device %s: %ld",
+        ext3_msg(sb, KERN_ERR, "error: failed to open journal device %s: %ld",
                 __bdevname(dev, b), PTR_ERR(bdev));
 
         return NULL;
···
                 /*todo: use simple_strtoll with >32bit ext3 */
                 sb_block = simple_strtoul(options, &options, 0);
                 if (*options && *options != ',') {
-                        ext3_msg(sb, "error: invalid sb specification: %s",
+                        ext3_msg(sb, KERN_ERR, "error: invalid sb specification: %s",
                                (char *) *data);
                         return 1;
                 }
···
         .fs_flags       = FS_REQUIRES_DEV,
 };
 MODULE_ALIAS_FS("vxfs"); /* makes mount -t vxfs autoload the module */
+MODULE_ALIAS("vxfs");
 
 static int __init
 vxfs_init(void)
+2-8
fs/hostfs/hostfs_kern.c
···
                 return err;
 
         if ((attr->ia_valid & ATTR_SIZE) &&
-            attr->ia_size != i_size_read(inode)) {
-                int error;
-
-                error = inode_newsize_ok(inode, attr->ia_size);
-                if (error)
-                        return error;
-
+            attr->ia_size != i_size_read(inode))
                 truncate_setsize(inode, attr->ia_size);
-        }
 
         setattr_copy(inode, attr);
         mark_inode_dirty(inode);
···
         .kill_sb        = hostfs_kill_sb,
         .fs_flags       = 0,
 };
+MODULE_ALIAS_FS("hostfs");
 
 static int __init init_hostfs(void)
 {
+1
fs/hpfs/super.c
···
         .kill_sb        = kill_block_super,
         .fs_flags       = FS_REQUIRES_DEV,
 };
+MODULE_ALIAS_FS("hpfs");
 
 static int __init init_hpfs_fs(void)
 {
···
         .fs_flags       = FS_RENAME_DOES_D_MOVE|FS_BINARY_MOUNTDATA,
 };
 MODULE_ALIAS_FS("nfs4");
+MODULE_ALIAS("nfs4");
 EXPORT_SYMBOL_GPL(nfs4_fs_type);
 
 static int __init register_nfs4_fs(void)
+2-34
fs/nfsd/nfs4state.c
···
         __nfs4_file_put_access(fp, oflag);
 }
 
-static inline int get_new_stid(struct nfs4_stid *stid)
-{
-        static int min_stateid = 0;
-        struct idr *stateids = &stid->sc_client->cl_stateids;
-        int new_stid;
-        int error;
-
-        error = idr_get_new_above(stateids, stid, min_stateid, &new_stid);
-        /*
-         * Note: the necessary preallocation was done in
-         * nfs4_alloc_stateid().  The idr code caps the number of
-         * preallocations that can exist at a time, but the state lock
-         * prevents anyone from using ours before we get here:
-         */
-        WARN_ON_ONCE(error);
-        /*
-         * It shouldn't be a problem to reuse an opaque stateid value.
-         * I don't think it is for 4.1.  But with 4.0 I worry that, for
-         * example, a stray write retransmission could be accepted by
-         * the server when it should have been rejected.  Therefore,
-         * adopt a trick from the sctp code to attempt to maximize the
-         * amount of time until an id is reused, by ensuring they always
-         * "increase" (mod INT_MAX):
-         */
-
-        min_stateid = new_stid+1;
-        if (min_stateid == INT_MAX)
-                min_stateid = 0;
-        return new_stid;
-}
-
 static struct nfs4_stid *nfs4_alloc_stid(struct nfs4_client *cl, struct
 kmem_cache *slab)
 {
···
         if (!stid)
                 return NULL;
 
-        if (!idr_pre_get(stateids, GFP_KERNEL))
-                goto out_free;
-        if (idr_get_new_above(stateids, stid, min_stateid, &new_id))
+        new_id = idr_alloc(stateids, stid, min_stateid, 0, GFP_KERNEL);
+        if (new_id < 0)
                 goto out_free;
         stid->sc_client = cl;
         stid->sc_type = 0;
+3
fs/pipe.c
···
 {
         int ret = -ENOENT;
 
+        if (!(filp->f_mode & (FMODE_READ|FMODE_WRITE)))
+                return -EINVAL;
+
         mutex_lock(&inode->i_mutex);
 
         if (inode->i_pipe) {
+4-1
fs/quota/dquot.c
···
                          * did a write before quota was turned on
                          */
                         rsv = inode_get_rsv_space(inode);
-                        if (unlikely(rsv))
+                        if (unlikely(rsv)) {
+                                spin_lock(&dq_data_lock);
                                 dquot_resv_space(inode->i_dquot[cnt], rsv);
+                                spin_unlock(&dq_data_lock);
+                        }
                 }
         }
 out_err:
···
    if a _PPC object exists, rmmod is disallowed then */
 int acpi_processor_notify_smm(struct module *calling_module);
 
+/* parsing the _P* objects. */
+extern int acpi_processor_get_performance_info(struct acpi_processor *pr);
+
 /* for communication between multiple parts of the processor kernel module */
 DECLARE_PER_CPU(struct acpi_processor *, processors);
 extern struct acpi_processor_errata errata;
-6
include/asm-generic/atomic.h
···
 #define atomic_xchg(ptr, v)             (xchg(&(ptr)->counter, (v)))
 #define atomic_cmpxchg(v, old, new)     (cmpxchg(&((v)->counter), (old), (new)))
 
-#define cmpxchg_local(ptr, o, n)                                               \
-        ((__typeof__(*(ptr)))__cmpxchg_local_generic((ptr), (unsigned long)(o),\
-                        (unsigned long)(n), sizeof(*(ptr))))
-
-#define cmpxchg64_local(ptr, o, n) __cmpxchg64_local_generic((ptr), (o), (n))
-
 static inline int __atomic_add_unless(atomic_t *v, int a, int u)
 {
         int c, old;
+10
include/asm-generic/cmpxchg.h
···
  */
 #include <asm-generic/cmpxchg-local.h>
 
+#ifndef cmpxchg_local
+#define cmpxchg_local(ptr, o, n)                                               \
+        ((__typeof__(*(ptr)))__cmpxchg_local_generic((ptr), (unsigned long)(o),\
+                        (unsigned long)(n), sizeof(*(ptr))))
+#endif
+
+#ifndef cmpxchg64_local
+#define cmpxchg64_local(ptr, o, n) __cmpxchg64_local_generic((ptr), (o), (n))
+#endif
+
 #define cmpxchg(ptr, o, n)      cmpxchg_local((ptr), (o), (n))
 #define cmpxchg64(ptr, o, n)    cmpxchg64_local((ptr), (o), (n))
+51-17
include/linux/idr.h
···
  */
 
 void *idr_find_slowpath(struct idr *idp, int id);
-int idr_pre_get(struct idr *idp, gfp_t gfp_mask);
-int idr_get_new_above(struct idr *idp, void *ptr, int starting_id, int *id);
 void idr_preload(gfp_t gfp_mask);
 int idr_alloc(struct idr *idp, void *ptr, int start, int end, gfp_t gfp_mask);
 int idr_for_each(struct idr *idp,
···
 
 /**
  * idr_find - return pointer for given id
- * @idp: idr handle
+ * @idr: idr handle
  * @id: lookup key
  *
  * Return the pointer given the id it has been registered with.  A %NULL
···
 }
 
 /**
- * idr_get_new - allocate new idr entry
- * @idp: idr handle
- * @ptr: pointer you want associated with the id
- * @id: pointer to the allocated handle
- *
- * Simple wrapper around idr_get_new_above() w/ @starting_id of zero.
- */
-static inline int idr_get_new(struct idr *idp, void *ptr, int *id)
-{
-        return idr_get_new_above(idp, ptr, 0, id);
-}
-
-/**
  * idr_for_each_entry - iterate over an idr's elements of a given type
  * @idp: idr handle
  * @entry: the type * to use as cursor
···
              entry != NULL;                             \
              ++id, entry = (typeof(entry))idr_get_next((idp), &(id)))
 
-void __idr_remove_all(struct idr *idp); /* don't use */
+/*
+ * Don't use the following functions.  These exist only to suppress
+ * deprecated warnings on EXPORT_SYMBOL()s.
+ */
+int __idr_pre_get(struct idr *idp, gfp_t gfp_mask);
+int __idr_get_new_above(struct idr *idp, void *ptr, int starting_id, int *id);
+void __idr_remove_all(struct idr *idp);
+
+/**
+ * idr_pre_get - reserve resources for idr allocation
+ * @idp: idr handle
+ * @gfp_mask: memory allocation flags
+ *
+ * Part of old alloc interface.  This is going away.  Use
+ * idr_preload[_end]() and idr_alloc() instead.
+ */
+static inline int __deprecated idr_pre_get(struct idr *idp, gfp_t gfp_mask)
+{
+        return __idr_pre_get(idp, gfp_mask);
+}
+
+/**
+ * idr_get_new_above - allocate new idr entry above or equal to a start id
+ * @idp: idr handle
+ * @ptr: pointer you want associated with the id
+ * @starting_id: id to start search at
+ * @id: pointer to the allocated handle
+ *
+ * Part of old alloc interface.  This is going away.  Use
+ * idr_preload[_end]() and idr_alloc() instead.
+ */
+static inline int __deprecated idr_get_new_above(struct idr *idp, void *ptr,
+                                                 int starting_id, int *id)
+{
+        return __idr_get_new_above(idp, ptr, starting_id, id);
+}
+
+/**
+ * idr_get_new - allocate new idr entry
+ * @idp: idr handle
+ * @ptr: pointer you want associated with the id
+ * @id: pointer to the allocated handle
+ *
+ * Part of old alloc interface.  This is going away.  Use
+ * idr_preload[_end]() and idr_alloc() instead.
+ */
+static inline int __deprecated idr_get_new(struct idr *idp, void *ptr, int *id)
+{
+        return __idr_get_new_above(idp, ptr, 0, id);
+}
 
 /**
  * idr_remove_all - remove all ids from the given idr tree
···
 		pos = n)

 #define hlist_entry_safe(ptr, type, member) \
-	(ptr) ? hlist_entry(ptr, type, member) : NULL
+	({ typeof(ptr) ____ptr = (ptr); \
+	   ____ptr ? hlist_entry(____ptr, type, member) : NULL; \
+	})

 /**
  * hlist_for_each_entry - iterate over list of given type
+1
include/linux/mfd/palmas.h
···
 };

 struct palmas_platform_data {
+	int irq_flags;
 	int gpio_base;

 	/* bit value to be loaded to the POWER_CTRL register */
···
  */

 #include <linux/cgroup.h>
+#include <linux/errno.h>

 /*
  * The core object. the cgroup that wishes to account for some
+2-1
include/linux/usb/composite.h
···
  * @name: For diagnostics, identifies the function.
  * @strings: tables of strings, keyed by identifiers assigned during bind()
  *	and by language IDs provided in control requests
- * @descriptors: Table of full (or low) speed descriptors, using interface and
+ * @fs_descriptors: Table of full (or low) speed descriptors, using interface and
  *	string identifiers assigned during @bind(). If this pointer is null,
  *	the function will not be available at full speed (or at low speed).
  * @hs_descriptors: Table of high speed descriptors, using interface and
···
  *	after function notifications
  * @resume: Notifies configuration when the host restarts USB traffic,
  *	before function notifications
+ * @gadget_driver: Gadget driver controlling this driver
  *
  * Devices default to reporting self powered operation.  Devices which rely
  * on bus powered operation should report this in their @bind method.
+4-2
include/uapi/linux/acct.h
···
 #define ACORE		0x08	/* ... dumped core */
 #define AXSIG		0x10	/* ... was killed by a signal */

-#ifdef __BIG_ENDIAN
+#if defined(__BYTE_ORDER) ? __BYTE_ORDER == __BIG_ENDIAN : defined(__BIG_ENDIAN)
 #define ACCT_BYTEORDER	0x80	/* accounting file is big endian */
-#else
+#elif defined(__BYTE_ORDER) ? __BYTE_ORDER == __LITTLE_ENDIAN : defined(__LITTLE_ENDIAN)
 #define ACCT_BYTEORDER	0x00	/* accounting file is little endian */
+#else
+#error unspecified endianness
 #endif

 #ifndef __KERNEL__
+2-2
include/uapi/linux/aio_abi.h
···
 	__s64		res2;		/* secondary result */
 };

-#if defined(__LITTLE_ENDIAN)
+#if defined(__BYTE_ORDER) ? __BYTE_ORDER == __LITTLE_ENDIAN : defined(__LITTLE_ENDIAN)
 #define PADDED(x,y)	x, y
-#elif defined(__BIG_ENDIAN)
+#elif defined(__BYTE_ORDER) ? __BYTE_ORDER == __BIG_ENDIAN : defined(__BIG_ENDIAN)
 #define PADDED(x,y)	y, x
 #else
 #error edit for your odd byteorder.
+4-2
include/uapi/linux/raid/md_p.h
···
 	__u32 failed_disks;	/*  4 Number of failed disks */
 	__u32 spare_disks;	/*  5 Number of spare disks */
 	__u32 sb_csum;		/*  6 checksum of the whole superblock */
-#ifdef __BIG_ENDIAN
+#if defined(__BYTE_ORDER) ? __BYTE_ORDER == __BIG_ENDIAN : defined(__BIG_ENDIAN)
 	__u32 events_hi;	/*  7 high-order of superblock update count */
 	__u32 events_lo;	/*  8 low-order of superblock update count */
 	__u32 cp_events_hi;	/*  9 high-order of checkpoint update count */
 	__u32 cp_events_lo;	/* 10 low-order of checkpoint update count */
-#else
+#elif defined(__BYTE_ORDER) ? __BYTE_ORDER == __LITTLE_ENDIAN : defined(__LITTLE_ENDIAN)
 	__u32 events_lo;	/*  7 low-order of superblock update count */
 	__u32 events_hi;	/*  8 high-order of superblock update count */
 	__u32 cp_events_lo;	/*  9 low-order of checkpoint update count */
 	__u32 cp_events_hi;	/* 10 high-order of checkpoint update count */
+#else
+#error unspecified endianness
 #endif
 	__u32 recovery_cp;	/* 11 recovery checkpoint sector count */
 	/* There are only valid for minor_version > 90 */
+4-1
include/uapi/linux/serial_core.h
···
 #define PORT_8250_CIR		23	/* CIR infrared port, has its own driver */
 #define PORT_XR17V35X		24	/* Exar XR17V35x UARTs */
 #define PORT_BRCM_TRUMANAGE	25
-#define PORT_MAX_8250		25	/* max port ID */
+#define PORT_ALTR_16550_F32	26	/* Altera 16550 UART with 32 FIFOs */
+#define PORT_ALTR_16550_F64	27	/* Altera 16550 UART with 64 FIFOs */
+#define PORT_ALTR_16550_F128	28	/* Altera 16550 UART with 128 FIFOs */
+#define PORT_MAX_8250		28	/* max port ID */

 /*
  * ARM specific type numbers.  These are not currently guaranteed
···
 	if ((clone_flags & (CLONE_NEWNS|CLONE_FS)) == (CLONE_NEWNS|CLONE_FS))
 		return ERR_PTR(-EINVAL);

+	if ((clone_flags & (CLONE_NEWUSER|CLONE_FS)) == (CLONE_NEWUSER|CLONE_FS))
+		return ERR_PTR(-EINVAL);
+
 	/*
 	 * Thread groups must share signals as well, and detached threads
 	 * can only be started up within the thread group.
···
 	 * If unsharing a user namespace must also unshare the thread.
 	 */
 	if (unshare_flags & CLONE_NEWUSER)
-		unshare_flags |= CLONE_THREAD;
+		unshare_flags |= CLONE_THREAD | CLONE_FS;
 	/*
 	 * If unsharing a pid namespace must also unshare the thread.
 	 */
+23-23
kernel/futex.c
···
  * @rw:		mapping needs to be read/write (values: VERIFY_READ,
  *              VERIFY_WRITE)
  *
- * Returns a negative error code or 0
+ * Return: a negative error code or 0
+ *
  * The key words are stored in *key on success.
  *
  * For shared mappings, it's (page->index, file_inode(vma->vm_file),
···
  *			be "current" except in the case of requeue pi.
  * @set_waiters:	force setting the FUTEX_WAITERS bit (1) or not (0)
  *
- * Returns:
- *  0 - ready to wait
- *  1 - acquired the lock
+ * Return:
+ *  0 - ready to wait;
+ *  1 - acquired the lock;
  * <0 - error
  *
  * The hb->lock and futex_key refs shall be held by the caller.
···
  * then direct futex_lock_pi_atomic() to force setting the FUTEX_WAITERS bit.
  * hb1 and hb2 must be held by the caller.
  *
- * Returns:
- *  0 - failed to acquire the lock atomicly
- *  1 - acquired the lock
+ * Return:
+ *  0 - failed to acquire the lock atomically;
+ *  1 - acquired the lock;
  * <0 - error
  */
 static int futex_proxy_trylock_atomic(u32 __user *pifutex,
···
  * Requeue waiters on uaddr1 to uaddr2. In the requeue_pi case, try to acquire
  * uaddr2 atomically on behalf of the top waiter.
  *
- * Returns:
- * >=0 - on success, the number of tasks requeued or woken
+ * Return:
+ * >=0 - on success, the number of tasks requeued or woken;
  *  <0 - on error
  */
 static int futex_requeue(u32 __user *uaddr1, unsigned int flags,
···
  * The q->lock_ptr must not be held by the caller. A call to unqueue_me() must
  * be paired with exactly one earlier call to queue_me().
  *
- * Returns:
- *   1 - if the futex_q was still queued (and we removed unqueued it)
+ * Return:
+ *   1 - if the futex_q was still queued (and we removed unqueued it);
  *   0 - if the futex_q was already removed by the waking thread
  */
 static int unqueue_me(struct futex_q *q)
···
  * the pi_state owner as well as handle race conditions that may allow us to
  * acquire the lock. Must be called with the hb lock held.
  *
- * Returns:
- *  1 - success, lock taken
- *  0 - success, lock not taken
+ * Return:
+ *  1 - success, lock taken;
+ *  0 - success, lock not taken;
  * <0 - on error (-EFAULT)
  */
 static int fixup_owner(u32 __user *uaddr, struct futex_q *q, int locked)
···
  * Return with the hb lock held and a q.key reference on success, and unlocked
  * with no q.key reference on failure.
  *
- * Returns:
- *  0 - uaddr contains val and hb has been locked
+ * Return:
+ *  0 - uaddr contains val and hb has been locked;
  * <1 - -EFAULT or -EWOULDBLOCK (uaddr does not contain val) and hb is unlocked
  */
 static int futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
···
  * the wakeup and return the appropriate error code to the caller.  Must be
  * called with the hb lock held.
  *
- * Returns
- *  0 - no early wakeup detected
- * <0 - -ETIMEDOUT or -ERESTARTNOINTR
+ * Return:
+ *  0 = no early wakeup detected;
+ * <0 = -ETIMEDOUT or -ERESTARTNOINTR
  */
 static inline
 int handle_early_requeue_pi_wakeup(struct futex_hash_bucket *hb,
···
  * @val:	the expected value of uaddr
  * @abs_time:	absolute timeout
  * @bitset:	32 bit wakeup bitset set by userspace, defaults to all
- * @clockrt:	whether to use CLOCK_REALTIME (1) or CLOCK_MONOTONIC (0)
  * @uaddr2:	the pi futex we will take prior to returning to user-space
  *
  * The caller will wait on uaddr and will be requeued by futex_requeue() to
···
  * there was a need to.
  *
  * We call schedule in futex_wait_queue_me() when we enqueue and return there
- * via the following:
+ * via the following--
  * 1) wakeup on uaddr2 after an atomic lock acquisition by futex_requeue()
  * 2) wakeup on uaddr2 after a requeue
  * 3) signal
···
  *
  * If 4 or 7, we cleanup and return with -ETIMEDOUT.
  *
- * Returns:
- *  0 - On success
+ * Return:
+ *  0 - On success;
  * <0 - On error
  */
 static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
+4-1
kernel/signal.c
···
 		if (force_default || ka->sa.sa_handler != SIG_IGN)
 			ka->sa.sa_handler = SIG_DFL;
 		ka->sa.sa_flags = 0;
+#ifdef __ARCH_HAS_SA_RESTORER
+		ka->sa.sa_restorer = NULL;
+#endif
 		sigemptyset(&ka->sa.sa_mask);
 		ka++;
 	}
···
 /**
  *  sys_rt_sigpending - examine a pending signal that has been raised
  *			while blocked
- *  @set: stores pending signals
+ *  @uset: stores pending signals
  *  @sigsetsize: size of sigset_t type or larger
  */
 SYSCALL_DEFINE2(rt_sigpending, sigset_t __user *, uset, size_t, sigsetsize)
+14-10
kernel/trace/Kconfig
···
 	def_bool n

 config DYNAMIC_FTRACE
-	bool "enable/disable ftrace tracepoints dynamically"
+	bool "enable/disable function tracing dynamically"
 	depends on FUNCTION_TRACER
 	depends on HAVE_DYNAMIC_FTRACE
 	default y
 	help
-	  This option will modify all the calls to ftrace dynamically
-	  (will patch them out of the binary image and replace them
-	  with a No-Op instruction) as they are called. A table is
-	  created to dynamically enable them again.
+	  This option will modify all the calls to function tracing
+	  dynamically (will patch them out of the binary image and
+	  replace them with a No-Op instruction) on boot up. During
+	  compile time, a table is made of all the locations that ftrace
+	  can function trace, and this table is linked into the kernel
+	  image. When this is enabled, functions can be individually
+	  enabled, and the functions not enabled will not affect
+	  performance of the system.
+
+	  See the files in /sys/kernel/debug/tracing:
+	    available_filter_functions
+	    set_ftrace_filter
+	    set_ftrace_notrace

 	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
 	  otherwise has native performance as long as no tracing is active.
-
-	  The changes to the code are done by a kernel thread that
-	  wakes up once a second and checks to see if any ftrace calls
-	  were made. If so, it runs stop_machine (stops all CPUS)
-	  and modifies the code to jump over the call to ftrace.

 config DYNAMIC_FTRACE_WITH_REGS
 	def_bool y
+24-3
kernel/trace/trace.c
···
 	seq_printf(m, "#          MAY BE MISSING FUNCTION EVENTS\n");
 }

+#ifdef CONFIG_TRACER_MAX_TRACE
+static void print_snapshot_help(struct seq_file *m, struct trace_iterator *iter)
+{
+	if (iter->trace->allocated_snapshot)
+		seq_printf(m, "#\n# * Snapshot is allocated *\n#\n");
+	else
+		seq_printf(m, "#\n# * Snapshot is freed *\n#\n");
+
+	seq_printf(m, "# Snapshot commands:\n");
+	seq_printf(m, "# echo 0 > snapshot : Clears and frees snapshot buffer\n");
+	seq_printf(m, "# echo 1 > snapshot : Allocates snapshot buffer, if not already allocated.\n");
+	seq_printf(m, "#                     Takes a snapshot of the main buffer.\n");
+	seq_printf(m, "# echo 2 > snapshot : Clears snapshot buffer (but does not allocate)\n");
+	seq_printf(m, "#                     (Doesn't have to be '2' works with any number that\n");
+	seq_printf(m, "#                      is not a '0' or '1')\n");
+}
+#else
+/* Should never be called */
+static inline void print_snapshot_help(struct seq_file *m, struct trace_iterator *iter) { }
+#endif
+
 static int s_show(struct seq_file *m, void *v)
 {
 	struct trace_iterator *iter = v;
···
 			seq_puts(m, "#\n");
 			test_ftrace_alive(m);
 		}
-		if (iter->trace && iter->trace->print_header)
+		if (iter->snapshot && trace_empty(iter))
+			print_snapshot_help(m, iter);
+		else if (iter->trace && iter->trace->print_header)
 			iter->trace->print_header(m);
 		else
 			trace_default_header(m);
···
 	default:
 		if (current_trace->allocated_snapshot)
 			tracing_reset_online_cpus(&max_tr);
-		else
-			ret = -EINVAL;
 		break;
 	}
+4
kernel/user_namespace.c
···
 #include <linux/uaccess.h>
 #include <linux/ctype.h>
 #include <linux/projid.h>
+#include <linux/fs_struct.h>

 static struct kmem_cache *user_ns_cachep __read_mostly;
···

 	/* Threaded processes may not enter a different user namespace */
 	if (atomic_read(&current->mm->mm_users) > 1)
+		return -EINVAL;
+
+	if (current->fs->users != 1)
 		return -EINVAL;

 	if (!ns_capable(user_ns, CAP_SYS_ADMIN))
+4-3
kernel/workqueue.c
···
 	int ret;

 	mutex_lock(&worker_pool_idr_mutex);
-	idr_pre_get(&worker_pool_idr, GFP_KERNEL);
-	ret = idr_get_new(&worker_pool_idr, pool, &pool->id);
+	ret = idr_alloc(&worker_pool_idr, pool, 0, 0, GFP_KERNEL);
+	if (ret >= 0)
+		pool->id = ret;
 	mutex_unlock(&worker_pool_idr_mutex);

-	return ret;
+	return ret < 0 ? ret : 0;
}

 /*
+30-50
lib/idr.c
···
 	if (layer_idr)
 		return get_from_free_list(layer_idr);

-	/* try to allocate directly from kmem_cache */
-	new = kmem_cache_zalloc(idr_layer_cache, gfp_mask);
+	/*
+	 * Try to allocate directly from kmem_cache.  We want to try this
+	 * before preload buffer; otherwise, non-preloading idr_alloc()
+	 * users will end up taking advantage of preloading ones.  As the
+	 * following is allowed to fail for preloaded cases, suppress
+	 * warning this time.
+	 */
+	new = kmem_cache_zalloc(idr_layer_cache, gfp_mask | __GFP_NOWARN);
 	if (new)
 		return new;
···
 	 * Try to fetch one from the per-cpu preload buffer if in process
 	 * context.  See idr_preload() for details.
 	 */
-	if (in_interrupt())
-		return NULL;
-
-	preempt_disable();
-	new = __this_cpu_read(idr_preload_head);
-	if (new) {
-		__this_cpu_write(idr_preload_head, new->ary[0]);
-		__this_cpu_dec(idr_preload_cnt);
-		new->ary[0] = NULL;
+	if (!in_interrupt()) {
+		preempt_disable();
+		new = __this_cpu_read(idr_preload_head);
+		if (new) {
+			__this_cpu_write(idr_preload_head, new->ary[0]);
+			__this_cpu_dec(idr_preload_cnt);
+			new->ary[0] = NULL;
+		}
+		preempt_enable();
+		if (new)
+			return new;
 	}
-	preempt_enable();
-	return new;
+
+	/*
+	 * Both failed.  Try kmem_cache again w/o adding __GFP_NOWARN so
+	 * that memory allocation failure warning is printed as intended.
+	 */
+	return kmem_cache_zalloc(idr_layer_cache, gfp_mask);
 }

 static void idr_layer_rcu_free(struct rcu_head *head)
···
 	}
 }

-/**
- * idr_pre_get - reserve resources for idr allocation
- * @idp:	idr handle
- * @gfp_mask:	memory allocation flags
- *
- * This function should be called prior to calling the idr_get_new* functions.
- * It preallocates enough memory to satisfy the worst possible allocation. The
- * caller should pass in GFP_KERNEL if possible.  This of course requires that
- * no spinning locks be held.
- *
- * If the system is REALLY out of memory this function returns %0,
- * otherwise %1.
- */
-int idr_pre_get(struct idr *idp, gfp_t gfp_mask)
+int __idr_pre_get(struct idr *idp, gfp_t gfp_mask)
 {
 	while (idp->id_free_cnt < MAX_IDR_FREE) {
 		struct idr_layer *new;
···
 	}
 	return 1;
 }
-EXPORT_SYMBOL(idr_pre_get);
+EXPORT_SYMBOL(__idr_pre_get);

 /**
  * sub_alloc - try to allocate an id without growing the tree depth
  * @idp: idr handle
  * @starting_id: id to start search at
- * @id: pointer to the allocated handle
  * @pa: idr_layer[MAX_IDR_LEVEL] used as backtrack buffer
  * @gfp_mask: allocation mask for idr_layer_alloc()
  * @layer_idr: optional idr passed to idr_layer_alloc()
···
 	idr_mark_full(pa, id);
 }

-/**
- * idr_get_new_above - allocate new idr entry above or equal to a start id
- * @idp: idr handle
- * @ptr: pointer you want associated with the id
- * @starting_id: id to start search at
- * @id: pointer to the allocated handle
- *
- * This is the allocate id function.  It should be called with any
- * required locks.
- *
- * If allocation from IDR's private freelist fails, idr_get_new_above() will
- * return %-EAGAIN.  The caller should retry the idr_pre_get() call to refill
- * IDR's preallocation and then retry the idr_get_new_above() call.
- *
- * If the idr is full idr_get_new_above() will return %-ENOSPC.
- *
- * @id returns a value in the range @starting_id ... %0x7fffffff
- */
-int idr_get_new_above(struct idr *idp, void *ptr, int starting_id, int *id)
+int __idr_get_new_above(struct idr *idp, void *ptr, int starting_id, int *id)
 {
 	struct idr_layer *pa[MAX_IDR_LEVEL + 1];
 	int rv;
···
 	*id = rv;
 	return 0;
 }
-EXPORT_SYMBOL(idr_get_new_above);
+EXPORT_SYMBOL(__idr_get_new_above);

 /**
  * idr_preload - preload for idr_alloc()
···
 int ida_pre_get(struct ida *ida, gfp_t gfp_mask)
 {
 	/* allocate idr_layers */
-	if (!idr_pre_get(&ida->idr, gfp_mask))
+	if (!__idr_pre_get(&ida->idr, gfp_mask))
 		return 0;

 	/* allocate free_bitmap */
+1-1
lib/xz/Kconfig
···

 config XZ_DEC_POWERPC
 	bool "PowerPC BCJ filter decoder"
-	default y if POWERPC
+	default y if PPC
 	select XZ_DEC_BCJ

 config XZ_DEC_IA64
+6-2
mm/Kconfig
···
 	default "1"

 config VIRT_TO_BUS
-	def_bool y
-	depends on HAVE_VIRT_TO_BUS
+	bool
+	help
+	  An architecture should select this if it implements the
+	  deprecated interface virt_to_bus().  All new architectures
+	  should probably not select this.
+

 config MMU_NOTIFIER
 	bool
+3-2
mm/fremap.c
···
 	struct vm_area_struct *vma;
 	int err = -EINVAL;
 	int has_write_lock = 0;
-	vm_flags_t vm_flags;
+	vm_flags_t vm_flags = 0;

 	if (prot)
 		return err;
···
 	 */

 out:
-	vm_flags = vma->vm_flags;
+	if (vma)
+		vm_flags = vma->vm_flags;
 	if (likely(!has_write_lock))
 		up_read(&mm->mmap_sem);
 	else
+1-1
mm/memory_hotplug.c
···
 	int retry = 1;

 	start_pfn = PFN_DOWN(start);
-	end_pfn = start_pfn + PFN_DOWN(size);
+	end_pfn = PFN_UP(start + size - 1);

 	/*
 	 * When CONFIG_MEMCG is on, one memory block may be used by other
-8
mm/process_vm_access.c
···
 	if (flags != 0)
 		return -EINVAL;

-	if (!access_ok(VERIFY_READ, lvec, liovcnt * sizeof(*lvec)))
-		goto out;
-
-	if (!access_ok(VERIFY_READ, rvec, riovcnt * sizeof(*rvec)))
-		goto out;
-
 	if (vm_write)
 		rc = compat_rw_copy_check_uvector(WRITE, lvec, liovcnt,
 						  UIO_FASTIOV, iovstack_l,
···
 		kfree(iov_r);
 	if (iov_l != iovstack_l)
 		kfree(iov_l);
-
-out:
 	return rc;
 }
···
 	return 0;
 }

+static int __decode_pgid(void **p, void *end, struct ceph_pg *pg)
+{
+	u8 v;
+
+	ceph_decode_need(p, end, 1+8+4+4, bad);
+	v = ceph_decode_8(p);
+	if (v != 1)
+		goto bad;
+	pg->pool = ceph_decode_64(p);
+	pg->seed = ceph_decode_32(p);
+	*p += 4;	/* skip preferred */
+	return 0;
+
+bad:
+	dout("error decoding pgid\n");
+	return -EINVAL;
+}
+
 /*
  * decode a full map.
  */
···
 	for (i = 0; i < len; i++) {
 		int n, j;
 		struct ceph_pg pgid;
-		struct ceph_pg_v1 pgid_v1;
 		struct ceph_pg_mapping *pg;

-		ceph_decode_need(p, end, sizeof(u32) + sizeof(u64), bad);
-		ceph_decode_copy(p, &pgid_v1, sizeof(pgid_v1));
-		pgid.pool = le32_to_cpu(pgid_v1.pool);
-		pgid.seed = le16_to_cpu(pgid_v1.ps);
+		err = __decode_pgid(p, end, &pgid);
+		if (err)
+			goto bad;
+		ceph_decode_need(p, end, sizeof(u32), bad);
 		n = ceph_decode_32(p);
 		err = -EINVAL;
 		if (n > (UINT_MAX - sizeof(*pg)) / sizeof(u32))
···
 	u16 version;

 	ceph_decode_16_safe(p, end, version, bad);
-	if (version > 6) {
-		pr_warning("got unknown v %d > %d of inc osdmap\n", version, 6);
+	if (version != 6) {
+		pr_warning("got unknown v %d != 6 of inc osdmap\n", version);
 		goto bad;
 	}
···
 	while (len--) {
 		struct ceph_pg_mapping *pg;
 		int j;
-		struct ceph_pg_v1 pgid_v1;
 		struct ceph_pg pgid;
 		u32 pglen;
-		ceph_decode_need(p, end, sizeof(u64) + sizeof(u32), bad);
-		ceph_decode_copy(p, &pgid_v1, sizeof(pgid_v1));
-		pgid.pool = le32_to_cpu(pgid_v1.pool);
-		pgid.seed = le16_to_cpu(pgid_v1.ps);
-		pglen = ceph_decode_32(p);

+		err = __decode_pgid(p, end, &pgid);
+		if (err)
+			goto bad;
+		ceph_decode_need(p, end, sizeof(u32), bad);
+		pglen = ceph_decode_32(p);
 		if (pglen) {
 			ceph_decode_need(p, end, pglen*sizeof(u32), bad);
+3-2
net/core/dev.c
···
 		}
 		switch (rx_handler(&skb)) {
 		case RX_HANDLER_CONSUMED:
+			ret = NET_RX_SUCCESS;
 			goto unlock;
 		case RX_HANDLER_ANOTHER:
 			goto another_round;
···
 		 * Allow this to run for 2 jiffies since which will allow
 		 * an average latency of 1.5/HZ.
 		 */
-		if (unlikely(budget <= 0 || time_after(jiffies, time_limit)))
+		if (unlikely(budget <= 0 || time_after_eq(jiffies, time_limit)))
 			goto softnet_break;

 		local_irq_enable();
···
 /**
  *	dev_change_carrier - Change device carrier
  *	@dev: device
- *	@new_carries: new value
+ *	@new_carrier: new value
  *
  *	Change device carrier
  */
+1
net/core/rtnetlink.c
···
 		 * report anything.
 		 */
 		ivi.spoofchk = -1;
+		memset(ivi.mac, 0, sizeof(ivi.mac));
 		if (dev->netdev_ops->ndo_get_vf_config(dev, i, &ivi))
 			break;
 		vf_mac.vf =
+8
net/dcb/dcbnl.c
···
 	if (!netdev->dcbnl_ops->getpermhwaddr)
 		return -EOPNOTSUPP;

+	memset(perm_addr, 0, sizeof(perm_addr));
 	netdev->dcbnl_ops->getpermhwaddr(netdev, perm_addr);

 	return nla_put(skb, DCB_ATTR_PERM_HWADDR, sizeof(perm_addr), perm_addr);
···

 	if (ops->ieee_getets) {
 		struct ieee_ets ets;
+		memset(&ets, 0, sizeof(ets));
 		err = ops->ieee_getets(netdev, &ets);
 		if (!err &&
 		    nla_put(skb, DCB_ATTR_IEEE_ETS, sizeof(ets), &ets))
···

 	if (ops->ieee_getmaxrate) {
 		struct ieee_maxrate maxrate;
+		memset(&maxrate, 0, sizeof(maxrate));
 		err = ops->ieee_getmaxrate(netdev, &maxrate);
 		if (!err) {
 			err = nla_put(skb, DCB_ATTR_IEEE_MAXRATE,
···

 	if (ops->ieee_getpfc) {
 		struct ieee_pfc pfc;
+		memset(&pfc, 0, sizeof(pfc));
 		err = ops->ieee_getpfc(netdev, &pfc);
 		if (!err &&
 		    nla_put(skb, DCB_ATTR_IEEE_PFC, sizeof(pfc), &pfc))
···
 	/* get peer info if available */
 	if (ops->ieee_peer_getets) {
 		struct ieee_ets ets;
+		memset(&ets, 0, sizeof(ets));
 		err = ops->ieee_peer_getets(netdev, &ets);
 		if (!err &&
 		    nla_put(skb, DCB_ATTR_IEEE_PEER_ETS, sizeof(ets), &ets))
···

 	if (ops->ieee_peer_getpfc) {
 		struct ieee_pfc pfc;
+		memset(&pfc, 0, sizeof(pfc));
 		err = ops->ieee_peer_getpfc(netdev, &pfc);
 		if (!err &&
 		    nla_put(skb, DCB_ATTR_IEEE_PEER_PFC, sizeof(pfc), &pfc))
···
 	/* peer info if available */
 	if (ops->cee_peer_getpg) {
 		struct cee_pg pg;
+		memset(&pg, 0, sizeof(pg));
 		err = ops->cee_peer_getpg(netdev, &pg);
 		if (!err &&
 		    nla_put(skb, DCB_ATTR_CEE_PEER_PG, sizeof(pg), &pg))
···

 	if (ops->cee_peer_getpfc) {
 		struct cee_pfc pfc;
+		memset(&pfc, 0, sizeof(pfc));
 		err = ops->cee_peer_getpfc(netdev, &pfc);
 		if (!err &&
 		    nla_put(skb, DCB_ATTR_CEE_PEER_PFC, sizeof(pfc), &pfc))
+1-1
net/ieee802154/6lowpan.h
···
 	(memcmp(addr1, addr2, length >> 3) == 0)

 /* local link, i.e. FE80::/10 */
-#define is_addr_link_local(a) (((a)->s6_addr16[0]) == 0x80FE)
+#define is_addr_link_local(a) (((a)->s6_addr16[0]) == htons(0xFE80))

 /*
  * check whether we can compress the IID to 16 bits,
+1
net/ipv4/inet_connection_sock.c
···
  * tcp/dccp_create_openreq_child().
  */
 void inet_csk_prepare_forced_close(struct sock *sk)
+	__releases(&sk->sk_lock.slock)
 {
 	/* sk_clone_lock locked the socket and set refcnt to 2 */
 	bh_unlock_sock(sk);
···
 	 *	IPv6 multicast router mode is now supported ;)
 	 */
 	if (dev_net(skb->dev)->ipv6.devconf_all->mc_forwarding &&
-	    !(ipv6_addr_type(&hdr->daddr) & IPV6_ADDR_LINKLOCAL) &&
+	    !(ipv6_addr_type(&hdr->daddr) &
+	      (IPV6_ADDR_LOOPBACK|IPV6_ADDR_LINKLOCAL)) &&
 	    likely(!(IP6CB(skb)->flags & IP6SKB_FORWARDED))) {
 		/*
 		 *	Okay, we try to forward - split and duplicate
+16-13
net/irda/ircomm/ircomm_tty.c
···
 	struct tty_port *port = &self->port;
 	DECLARE_WAITQUEUE(wait, current);
 	int		retval;
-	int		do_clocal = 0, extra_count = 0;
+	int		do_clocal = 0;
 	unsigned long	flags;

 	IRDA_DEBUG(2, "%s()\n", __func__ );
···
 	 * If non-blocking mode is set, or the port is not enabled,
 	 * then make the check up front and then exit.
 	 */
-	if (filp->f_flags & O_NONBLOCK || tty->flags & (1 << TTY_IO_ERROR)){
-		/* nonblock mode is set or port is not enabled */
+	if (test_bit(TTY_IO_ERROR, &tty->flags)) {
+		port->flags |= ASYNC_NORMAL_ACTIVE;
+		return 0;
+	}
+
+	if (filp->f_flags & O_NONBLOCK) {
+		/* nonblock mode is set */
+		if (tty->termios.c_cflag & CBAUD)
+			tty_port_raise_dtr_rts(port);
 		port->flags |= ASYNC_NORMAL_ACTIVE;
 		IRDA_DEBUG(1, "%s(), O_NONBLOCK requested!\n", __func__ );
 		return 0;
···
 	      __FILE__, __LINE__, tty->driver->name, port->count);

 	spin_lock_irqsave(&port->lock, flags);
-	if (!tty_hung_up_p(filp)) {
-		extra_count = 1;
+	if (!tty_hung_up_p(filp))
 		port->count--;
-	}
-	spin_unlock_irqrestore(&port->lock, flags);
 	port->blocked_open++;
+	spin_unlock_irqrestore(&port->lock, flags);

 	while (1) {
 		if (tty->termios.c_cflag & CBAUD)
 			tty_port_raise_dtr_rts(port);

-		current->state = TASK_INTERRUPTIBLE;
+		set_current_state(TASK_INTERRUPTIBLE);

 		if (tty_hung_up_p(filp) ||
 		    !test_bit(ASYNCB_INITIALIZED, &port->flags)) {
···
 	__set_current_state(TASK_RUNNING);
 	remove_wait_queue(&port->open_wait, &wait);

-	if (extra_count) {
-		/* ++ is not atomic, so this should be protected - Jean II */
-		spin_lock_irqsave(&port->lock, flags);
+	spin_lock_irqsave(&port->lock, flags);
+	if (!tty_hung_up_p(filp))
 		port->count++;
-		spin_unlock_irqrestore(&port->lock, flags);
-	}
 	port->blocked_open--;
+	spin_unlock_irqrestore(&port->lock, flags);

 	IRDA_DEBUG(1, "%s(%d):block_til_ready after blocking on %s open_count=%d\n",
 	      __FILE__, __LINE__, tty->driver->name, port->count);
+4-4
net/key/af_key.c
···
 			XFRM_POLICY_BLOCK : XFRM_POLICY_ALLOW);
 	xp->priority = pol->sadb_x_policy_priority;

-	sa = ext_hdrs[SADB_EXT_ADDRESS_SRC-1],
+	sa = ext_hdrs[SADB_EXT_ADDRESS_SRC-1];
 	xp->family = pfkey_sadb_addr2xfrm_addr(sa, &xp->selector.saddr);
 	if (!xp->family) {
 		err = -EINVAL;
···
 	if (xp->selector.sport)
 		xp->selector.sport_mask = htons(0xffff);

-	sa = ext_hdrs[SADB_EXT_ADDRESS_DST-1],
+	sa = ext_hdrs[SADB_EXT_ADDRESS_DST-1];
 	pfkey_sadb_addr2xfrm_addr(sa, &xp->selector.daddr);
 	xp->selector.prefixlen_d = sa->sadb_address_prefixlen;
···

 	memset(&sel, 0, sizeof(sel));

-	sa = ext_hdrs[SADB_EXT_ADDRESS_SRC-1],
+	sa = ext_hdrs[SADB_EXT_ADDRESS_SRC-1];
 	sel.family = pfkey_sadb_addr2xfrm_addr(sa, &sel.saddr);
 	sel.prefixlen_s = sa->sadb_address_prefixlen;
 	sel.proto = pfkey_proto_to_xfrm(sa->sadb_address_proto);
···
 	if (sel.sport)
 		sel.sport_mask = htons(0xffff);

-	sa = ext_hdrs[SADB_EXT_ADDRESS_DST-1],
+	sa = ext_hdrs[SADB_EXT_ADDRESS_DST-1];
 	pfkey_sadb_addr2xfrm_addr(sa, &sel.daddr);
 	sel.prefixlen_d = sa->sadb_address_prefixlen;
 	sel.proto = pfkey_proto_to_xfrm(sa->sadb_address_proto);
+13-8
net/mac80211/cfg.c
···
 	int ret = -ENODATA;

 	rcu_read_lock();
-	if (local->use_chanctx) {
-		chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
-		if (chanctx_conf) {
-			*chandef = chanctx_conf->def;
-			ret = 0;
-		}
-	} else if (local->open_count == local->monitors) {
-		*chandef = local->monitor_chandef;
+	chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
+	if (chanctx_conf) {
+		*chandef = chanctx_conf->def;
+		ret = 0;
+	} else if (local->open_count > 0 &&
+		   local->open_count == local->monitors &&
+		   sdata->vif.type == NL80211_IFTYPE_MONITOR) {
+		if (local->use_chanctx)
+			*chandef = local->monitor_chandef;
+		else
+			cfg80211_chandef_create(chandef,
+						local->_oper_channel,
+						local->_oper_channel_type);
 		ret = 0;
 	}
 	rcu_read_unlock();
···
 		our_mcs = (le16_to_cpu(vht_cap.vht_mcs.rx_mcs_map) &
 								mask) >> shift;

+		if (our_mcs == IEEE80211_VHT_MCS_NOT_SUPPORTED)
+			continue;
+
 		switch (ap_mcs) {
 		default:
 			if (our_mcs <= ap_mcs)
···
 	struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;

 	/*
+	 * Stop timers before deleting work items, as timers
+	 * could race and re-add the work-items. They will be
+	 * re-established on connection.
+	 */
+	del_timer_sync(&ifmgd->conn_mon_timer);
+	del_timer_sync(&ifmgd->bcn_mon_timer);
+
+	/*
 	 * we need to use atomic bitops for the running bits
 	 * only because both timers might fire at the same
 	 * time -- the code here is properly synchronised.
···
 	if (del_timer_sync(&ifmgd->timer))
 		set_bit(TMR_RUNNING_TIMER, &ifmgd->timers_running);

-	cancel_work_sync(&ifmgd->chswitch_work);
 	if (del_timer_sync(&ifmgd->chswitch_timer))
 		set_bit(TMR_RUNNING_CHANSW, &ifmgd->timers_running);
-
-	/* these will just be re-established on connection */
-	del_timer_sync(&ifmgd->conn_mon_timer);
-	del_timer_sync(&ifmgd->bcn_mon_timer);
+	cancel_work_sync(&ifmgd->chswitch_work);
 }

 void ieee80211_sta_restart(struct ieee80211_sub_if_data *sdata)
···
 void ieee80211_mgd_stop(struct ieee80211_sub_if_data *sdata)
 {
 	struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
+
+	/*
+	 * Make sure some work items will not run after this,
+	 * they will not do anything but might not have been
+	 * cancelled when disconnecting.
+	 */
+	cancel_work_sync(&ifmgd->monitor_work);
+	cancel_work_sync(&ifmgd->beacon_connection_loss_work);
+	cancel_work_sync(&ifmgd->request_smps_work);
+	cancel_work_sync(&ifmgd->csa_connection_drop_work);
+	cancel_work_sync(&ifmgd->chswitch_work);

 	mutex_lock(&ifmgd->mtx);
 	if (ifmgd->assoc_data)
+2-1
net/mac80211/tx.c
···
 				cpu_to_le16(IEEE80211_FCTL_MOREDATA);
 		}
 
-		sdata = IEEE80211_DEV_TO_SUB_IF(skb->dev);
+		if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN)
+			sdata = IEEE80211_DEV_TO_SUB_IF(skb->dev);
 		if (!ieee80211_tx_prepare(sdata, &tx, skb))
 			break;
 		dev_kfree_skb_any(skb);
+10-1
net/netfilter/nf_conntrack_helper.c
···
 {
 	const struct nf_conn_help *help;
 	const struct nf_conntrack_helper *helper;
+	struct va_format vaf;
+	va_list args;
+
+	va_start(args, fmt);
+
+	vaf.fmt = fmt;
+	vaf.va = &args;
 
 	/* Called from the helper function, this call never fails */
 	help = nfct_help(ct);
···
 	helper = rcu_dereference(help->helper);
 
 	nf_log_packet(nf_ct_l3num(ct), 0, skb, NULL, NULL, NULL,
-		      "nf_ct_%s: dropping packet: %s ", helper->name, fmt);
+		      "nf_ct_%s: dropping packet: %pV ", helper->name, &vaf);
+
+	va_end(args);
 }
 EXPORT_SYMBOL_GPL(nf_ct_helper_log);
 
+1-6
net/netfilter/nfnetlink.c
···
 }
 EXPORT_SYMBOL_GPL(nfnl_unlock);
 
-static struct mutex *nfnl_get_lock(__u8 subsys_id)
-{
-	return &table[subsys_id].mutex;
-}
-
 int nfnetlink_subsys_register(const struct nfnetlink_subsystem *n)
 {
 	nfnl_lock(n->subsys_id);
···
 	rcu_read_unlock();
 	nfnl_lock(subsys_id);
 	if (rcu_dereference_protected(table[subsys_id].subsys,
-				      lockdep_is_held(nfnl_get_lock(subsys_id))) != ss ||
+				      lockdep_is_held(&table[subsys_id].mutex)) != ss ||
 	    nfnetlink_find_client(type, ss) != nc)
 		err = -EAGAIN;
 	else if (nc->call)
+3
net/netfilter/xt_AUDIT.c
···
 	const struct xt_audit_info *info = par->targinfo;
 	struct audit_buffer *ab;
 
+	if (audit_enabled == 0)
+		goto errout;
+
 	ab = audit_log_start(NULL, GFP_ATOMIC, AUDIT_NETFILTER_PKT);
 	if (ab == NULL)
 		goto errout;
···
 	for (i = 0; i < nr; i++) {
 		BUG_ON(strlen(names[i]) >= sizeof(ctr.name));
 		strncpy(ctr.name, names[i], sizeof(ctr.name) - 1);
+		ctr.name[sizeof(ctr.name) - 1] = '\0';
 		ctr.value = values[i];
 
 		rds_info_copy(iter, &ctr, sizeof(ctr));
+45-21
net/sched/sch_qfq.c
···
 	    new_num_classes == q->max_agg_classes - 1) /* agg no more full */
 		hlist_add_head(&agg->nonfull_next, &q->nonfull_aggs);
 
+	/* The next assignment may let
+	 * agg->initial_budget > agg->budgetmax
+	 * hold, we will take it into account in charge_actual_service().
+	 */
 	agg->budgetmax = new_num_classes * agg->lmax;
 	new_agg_weight = agg->class_weight * new_num_classes;
 	agg->inv_w = ONE_FP/new_agg_weight;
···
 	unsigned long old_vslot = q->oldV >> q->min_slot_shift;
 
 	if (vslot != old_vslot) {
-		unsigned long mask = (1UL << fls(vslot ^ old_vslot)) - 1;
+		unsigned long mask = (1ULL << fls(vslot ^ old_vslot)) - 1;
 		qfq_move_groups(q, mask, IR, ER);
 		qfq_move_groups(q, mask, IB, EB);
 	}
···
 /* Update F according to the actual service received by the aggregate. */
 static inline void charge_actual_service(struct qfq_aggregate *agg)
 {
-	/* compute the service received by the aggregate */
-	u32 service_received = agg->initial_budget - agg->budget;
+	/* Compute the service received by the aggregate, taking into
+	 * account that, after decreasing the number of classes in
+	 * agg, it may happen that
+	 * agg->initial_budget - agg->budget > agg->bugdetmax
+	 */
+	u32 service_received = min(agg->budgetmax,
+				   agg->initial_budget - agg->budget);
 
 	agg->F = agg->S + (u64)service_received * agg->inv_w;
 }
+
+static inline void qfq_update_agg_ts(struct qfq_sched *q,
+				     struct qfq_aggregate *agg,
+				     enum update_reason reason);
+
+static void qfq_schedule_agg(struct qfq_sched *q, struct qfq_aggregate *agg);
 
 static struct sk_buff *qfq_dequeue(struct Qdisc *sch)
 {
···
 		in_serv_agg->initial_budget = in_serv_agg->budget =
 			in_serv_agg->budgetmax;
 
-		if (!list_empty(&in_serv_agg->active))
+		if (!list_empty(&in_serv_agg->active)) {
 			/*
 			 * Still active: reschedule for
 			 * service. Possible optimization: if no other
···
 			 * handle it, we would need to maintain an
 			 * extra num_active_aggs field.
 			 */
-			qfq_activate_agg(q, in_serv_agg, requeue);
-		else if (sch->q.qlen == 0) { /* no aggregate to serve */
+			qfq_update_agg_ts(q, in_serv_agg, requeue);
+			qfq_schedule_agg(q, in_serv_agg);
+		} else if (sch->q.qlen == 0) { /* no aggregate to serve */
 			q->in_serv_agg = NULL;
 			return NULL;
 		}
···
 	qdisc_bstats_update(sch, skb);
 
 	agg_dequeue(in_serv_agg, cl, len);
-	in_serv_agg->budget -= len;
+	/* If lmax is lowered, through qfq_change_class, for a class
+	 * owning pending packets with larger size than the new value
+	 * of lmax, then the following condition may hold.
+	 */
+	if (unlikely(in_serv_agg->budget < len))
+		in_serv_agg->budget = 0;
+	else
+		in_serv_agg->budget -= len;
+
 	q->V += (u64)len * IWSUM;
 	pr_debug("qfq dequeue: len %u F %lld now %lld\n",
 		 len, (unsigned long long) in_serv_agg->F,
···
 	cl->deficit = agg->lmax;
 	list_add_tail(&cl->alist, &agg->active);
 
-	if (list_first_entry(&agg->active, struct qfq_class, alist) != cl)
-		return err; /* aggregate was not empty, nothing else to do */
+	if (list_first_entry(&agg->active, struct qfq_class, alist) != cl ||
+	    q->in_serv_agg == agg)
+		return err; /* non-empty or in service, nothing else to do */
 
-	/* recharge budget */
-	agg->initial_budget = agg->budget = agg->budgetmax;
-
-	qfq_update_agg_ts(q, agg, enqueue);
-	if (q->in_serv_agg == NULL)
-		q->in_serv_agg = agg;
-	else if (agg != q->in_serv_agg)
-		qfq_schedule_agg(q, agg);
+	qfq_activate_agg(q, agg, enqueue);
 
 	return err;
 }
···
 		/* group was surely ineligible, remove */
 		__clear_bit(grp->index, &q->bitmaps[IR]);
 		__clear_bit(grp->index, &q->bitmaps[IB]);
-	} else if (!q->bitmaps[ER] && qfq_gt(roundedS, q->V))
+	} else if (!q->bitmaps[ER] && qfq_gt(roundedS, q->V) &&
+		   q->in_serv_agg == NULL)
 		q->V = roundedS;
 
 	grp->S = roundedS;
···
 static void qfq_activate_agg(struct qfq_sched *q, struct qfq_aggregate *agg,
 			     enum update_reason reason)
 {
+	agg->initial_budget = agg->budget = agg->budgetmax; /* recharge budg. */
+
 	qfq_update_agg_ts(q, agg, reason);
-	qfq_schedule_agg(q, agg);
+	if (q->in_serv_agg == NULL) { /* no aggr. in service or scheduled */
+		q->in_serv_agg = agg; /* start serving this aggregate */
+		/* update V: to be in service, agg must be eligible */
+		q->oldV = q->V = agg->S;
+	} else if (agg != q->in_serv_agg)
+		qfq_schedule_agg(q, agg);
 }
 
 static void qfq_slot_remove(struct qfq_sched *q, struct qfq_group *grp,
···
 			__set_bit(grp->index, &q->bitmaps[s]);
 		}
 	}
-
-	qfq_update_eligible(q);
 }
 
 static void qfq_qlen_notify(struct Qdisc *sch, unsigned long arg)
+8-4
net/sunrpc/auth_gss/svcauth_gss.c
···
 	else {
 		int N, i;
 
+		/*
+		 * NOTE: we skip uid_valid()/gid_valid() checks here:
+		 * instead, * -1 id's are later mapped to the
+		 * (export-specific) anonymous id by nfsd_setuser.
+		 *
+		 * (But supplementary gid's get no such special
+		 * treatment so are checked for validity here.)
+		 */
 		/* uid */
 		rsci.cred.cr_uid = make_kuid(&init_user_ns, id);
-		if (!uid_valid(rsci.cred.cr_uid))
-			goto out;
 
 		/* gid */
 		if (get_int(&mesg, &id))
 			goto out;
 		rsci.cred.cr_gid = make_kgid(&init_user_ns, id);
-		if (!gid_valid(rsci.cred.cr_gid))
-			goto out;
 
 		/* number of additional gid's */
 		if (get_int(&mesg, &N))
···
 	if ((chan->flags & IEEE80211_CHAN_RADAR) &&
 	    nla_put_flag(msg, NL80211_FREQUENCY_ATTR_RADAR))
 		goto nla_put_failure;
-	if ((chan->flags & IEEE80211_CHAN_NO_HT40MINUS) &&
-	    nla_put_flag(msg, NL80211_FREQUENCY_ATTR_NO_HT40_MINUS))
-		goto nla_put_failure;
-	if ((chan->flags & IEEE80211_CHAN_NO_HT40PLUS) &&
-	    nla_put_flag(msg, NL80211_FREQUENCY_ATTR_NO_HT40_PLUS))
-		goto nla_put_failure;
-	if ((chan->flags & IEEE80211_CHAN_NO_80MHZ) &&
-	    nla_put_flag(msg, NL80211_FREQUENCY_ATTR_NO_80MHZ))
-		goto nla_put_failure;
-	if ((chan->flags & IEEE80211_CHAN_NO_160MHZ) &&
-	    nla_put_flag(msg, NL80211_FREQUENCY_ATTR_NO_160MHZ))
-		goto nla_put_failure;
 
 	if (nla_put_u32(msg, NL80211_FREQUENCY_ATTR_MAX_TX_POWER,
 			DBM_TO_MBM(chan->max_power)))
···
 		    dev->wiphy.max_acl_mac_addrs))
 		goto nla_put_failure;
 
-	if (dev->wiphy.extended_capabilities &&
-	    (nla_put(msg, NL80211_ATTR_EXT_CAPA,
-		     dev->wiphy.extended_capabilities_len,
-		     dev->wiphy.extended_capabilities) ||
-	     nla_put(msg, NL80211_ATTR_EXT_CAPA_MASK,
-		     dev->wiphy.extended_capabilities_len,
-		     dev->wiphy.extended_capabilities_mask)))
-		goto nla_put_failure;
-
 	return genlmsg_end(msg, hdr);
 
  nla_put_failure:
···
 
 static int nl80211_dump_wiphy(struct sk_buff *skb, struct netlink_callback *cb)
 {
-	int idx = 0;
+	int idx = 0, ret;
 	int start = cb->args[0];
 	struct cfg80211_registered_device *dev;
 
···
 			continue;
 		if (++idx <= start)
 			continue;
-		if (nl80211_send_wiphy(skb, NETLINK_CB(cb->skb).portid,
-				       cb->nlh->nlmsg_seq, NLM_F_MULTI,
-				       dev) < 0) {
+		ret = nl80211_send_wiphy(skb, NETLINK_CB(cb->skb).portid,
+					 cb->nlh->nlmsg_seq, NLM_F_MULTI,
+					 dev);
+		if (ret < 0) {
+			/*
+			 * If sending the wiphy data didn't fit (ENOBUFS or
+			 * EMSGSIZE returned), this SKB is still empty (so
+			 * it's not too big because another wiphy dataset is
+			 * already in the skb) and we've not tried to adjust
+			 * the dump allocation yet ... then adjust the alloc
+			 * size to be bigger, and return 1 but with the empty
+			 * skb. This results in an empty message being RX'ed
+			 * in userspace, but that is ignored.
+			 *
+			 * We can then retry with the larger buffer.
+			 */
+			if ((ret == -ENOBUFS || ret == -EMSGSIZE) &&
+			    !skb->len &&
+			    cb->min_dump_alloc < 4096) {
+				cb->min_dump_alloc = 4096;
+				mutex_unlock(&cfg80211_mutex);
+				return 1;
+			}
 			idx--;
 			break;
 		}
···
 	struct sk_buff *msg;
 	struct cfg80211_registered_device *dev = info->user_ptr[0];
 
-	msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
+	msg = nlmsg_new(4096, GFP_KERNEL);
 	if (!msg)
 		return -ENOMEM;
 
···
 	case MIDI_PGM_CHANGE:
 		if (seq_mode == SEQ_2)
 		{
+			if (chn > 15)
+				break;
+
 			synth_devs[dev]->chn_info[chn].pgm_num = p1;
 			if ((int) dev >= num_synths)
 				synth_devs[dev]->set_instr(dev, chn, p1);
···
 	case MIDI_PITCH_BEND:
 		if (seq_mode == SEQ_2)
 		{
+			if (chn > 15)
+				break;
+
 			synth_devs[dev]->chn_info[chn].bender_value = w14;
 
 			if ((int) dev < num_synths)
···
 	return 0;
 }
 
+/* check whether a built-in speaker is included in parsed pins */
+static bool has_builtin_speaker(struct hda_codec *codec)
+{
+	struct sigmatel_spec *spec = codec->spec;
+	hda_nid_t *nid_pin;
+	int nids, i;
+
+	if (spec->gen.autocfg.line_out_type == AUTO_PIN_SPEAKER_OUT) {
+		nid_pin = spec->gen.autocfg.line_out_pins;
+		nids = spec->gen.autocfg.line_outs;
+	} else {
+		nid_pin = spec->gen.autocfg.speaker_pins;
+		nids = spec->gen.autocfg.speaker_outs;
+	}
+
+	for (i = 0; i < nids; i++) {
+		unsigned int def_conf = snd_hda_codec_get_pincfg(codec, nid_pin[i]);
+		if (snd_hda_get_input_pin_attr(def_conf) == INPUT_PIN_ATTR_INT)
+			return true;
+	}
+	return false;
+}
+
 /*
  * PC beep controls
  */
···
 		stac_free(codec);
 		return err;
 	}
+
+	/* Don't GPIO-mute speakers if there are no internal speakers, because
+	 * the GPIO might be necessary for Headphone
+	 */
+	if (spec->eapd_switch && !has_builtin_speaker(codec))
+		spec->eapd_switch = 0;
 
 	codec->proc_widget_hook = stac92hd7x_proc_hook;
 
+15
sound/usb/card.c
···
 		usb_ifnum_to_if(dev, ctrlif)->intf_assoc;
 
 	if (!assoc) {
+		/*
+		 * Firmware writers cannot count to three. So to find
+		 * the IAD on the NuForce UDH-100, also check the next
+		 * interface.
+		 */
+		struct usb_interface *iface =
+			usb_ifnum_to_if(dev, ctrlif + 1);
+		if (iface &&
+		    iface->intf_assoc &&
+		    iface->intf_assoc->bFunctionClass == USB_CLASS_AUDIO &&
+		    iface->intf_assoc->bFunctionProtocol == UAC_VERSION_2)
+			assoc = iface->intf_assoc;
+	}
+
+	if (!assoc) {
 		snd_printk(KERN_ERR "Audio class v2 interfaces need an interface association\n");
 		return -EINVAL;
 	}
+1-1
tools/usb/ffs-test.c
···
 #include <unistd.h>
 #include <tools/le_byteshift.h>
 
-#include "../../include/linux/usb/functionfs.h"
+#include "../../include/uapi/linux/usb/functionfs.h"
 
 
 /******************** Little Endian Handling ********************************/