Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Conflicts:
drivers/net/ethernet/intel/e1000e/netdev.c

Minor conflict in e1000e, a line that got fixed in 'net'
has been removed in 'net-next'.

Signed-off-by: David S. Miller <davem@davemloft.net>

+2693 -1615
+3 -3
CREDITS
··· 953 953 S: USA 954 954 955 955 N: Randy Dunlap 956 - E: rdunlap@xenotime.net 957 - W: http://www.xenotime.net/linux/linux.html 958 - W: http://www.linux-usb.org 956 + E: rdunlap@infradead.org 957 + W: http://www.infradead.org/~rdunlap/ 959 958 D: Linux-USB subsystem, USB core/UHCI/printer/storage drivers 960 959 D: x86 SMP, ACPI, bootflag hacking 960 + D: documentation, builds 961 961 S: (ask for current address) 962 962 S: USA 963 963
+1 -2
Documentation/SubmittingPatches
··· 60 60 "dontdiff" is a list of files which are generated by the kernel during 61 61 the build process, and should be ignored in any diff(1)-generated 62 62 patch. The "dontdiff" file is included in the kernel tree in 63 - 2.6.12 and later. For earlier kernel versions, you can get it 64 - from <http://www.xenotime.net/linux/doc/dontdiff>. 63 + 2.6.12 and later. 65 64 66 65 Make sure your patch does not include any extra files which do not 67 66 belong in a patch submission. Make sure to review your patch -after-
+1 -1
Documentation/hwmon/adm1275
··· 15 15 Addresses scanned: - 16 16 Datasheet: www.analog.com/static/imported-files/data_sheets/ADM1276.pdf 17 17 18 - Author: Guenter Roeck <guenter.roeck@ericsson.com> 18 + Author: Guenter Roeck <linux@roeck-us.net> 19 19 20 20 21 21 Description
+10 -1
Documentation/hwmon/adt7410
··· 4 4 Supported chips: 5 5 * Analog Devices ADT7410 6 6 Prefix: 'adt7410' 7 - Addresses scanned: I2C 0x48 - 0x4B 7 + Addresses scanned: None 8 8 Datasheet: Publicly available at the Analog Devices website 9 9 http://www.analog.com/static/imported-files/data_sheets/ADT7410.pdf 10 + * Analog Devices ADT7420 11 + Prefix: 'adt7420' 12 + Addresses scanned: None 13 + Datasheet: Publicly available at the Analog Devices website 14 + http://www.analog.com/static/imported-files/data_sheets/ADT7420.pdf 10 15 11 16 Author: Hartmut Knaack <knaack.h@gmx.de> 12 17 ··· 31 26 value per second or even justget one sample on demand for power saving. 32 27 Besides, it can completely power down its ADC, if power management is 33 28 required. 29 + 30 + The ADT7420 is register compatible, the only differences being the package, 31 + a slightly narrower operating temperature range (-40°C to +150°C), and a 32 + better accuracy (0.25°C instead of 0.50°C.) 34 33 35 34 Configuration Notes 36 35 -------------------
+1 -1
Documentation/hwmon/jc42
··· 49 49 Addresses scanned: I2C 0x18 - 0x1f 50 50 51 51 Author: 52 - Guenter Roeck <guenter.roeck@ericsson.com> 52 + Guenter Roeck <linux@roeck-us.net> 53 53 54 54 55 55 Description
+1 -1
Documentation/hwmon/lineage-pem
··· 8 8 Documentation: 9 9 http://www.lineagepower.com/oem/pdf/CPLI2C.pdf 10 10 11 - Author: Guenter Roeck <guenter.roeck@ericsson.com> 11 + Author: Guenter Roeck <linux@roeck-us.net> 12 12 13 13 14 14 Description
+1 -1
Documentation/hwmon/lm25066
··· 19 19 Datasheet: 20 20 http://www.national.com/pf/LM/LM5066.html 21 21 22 - Author: Guenter Roeck <guenter.roeck@ericsson.com> 22 + Author: Guenter Roeck <linux@roeck-us.net> 23 23 24 24 25 25 Description
+3 -3
Documentation/hwmon/ltc2978
··· 5 5 * Linear Technology LTC2978 6 6 Prefix: 'ltc2978' 7 7 Addresses scanned: - 8 - Datasheet: http://cds.linear.com/docs/Datasheet/2978fa.pdf 8 + Datasheet: http://www.linear.com/product/ltc2978 9 9 * Linear Technology LTC3880 10 10 Prefix: 'ltc3880' 11 11 Addresses scanned: - 12 - Datasheet: http://cds.linear.com/docs/Datasheet/3880f.pdf 12 + Datasheet: http://www.linear.com/product/ltc3880 13 13 14 - Author: Guenter Roeck <guenter.roeck@ericsson.com> 14 + Author: Guenter Roeck <linux@roeck-us.net> 15 15 16 16 17 17 Description
+1 -1
Documentation/hwmon/ltc4261
··· 8 8 Datasheet: 9 9 http://cds.linear.com/docs/Datasheet/42612fb.pdf 10 10 11 - Author: Guenter Roeck <guenter.roeck@ericsson.com> 11 + Author: Guenter Roeck <linux@roeck-us.net> 12 12 13 13 14 14 Description
+1 -1
Documentation/hwmon/max16064
··· 7 7 Addresses scanned: - 8 8 Datasheet: http://datasheets.maxim-ic.com/en/ds/MAX16064.pdf 9 9 10 - Author: Guenter Roeck <guenter.roeck@ericsson.com> 10 + Author: Guenter Roeck <linux@roeck-us.net> 11 11 12 12 13 13 Description
+1 -1
Documentation/hwmon/max16065
··· 24 24 http://datasheets.maxim-ic.com/en/ds/MAX16070-MAX16071.pdf 25 25 26 26 27 - Author: Guenter Roeck <guenter.roeck@ericsson.com> 27 + Author: Guenter Roeck <linux@roeck-us.net> 28 28 29 29 30 30 Description
+1 -1
Documentation/hwmon/max34440
··· 27 27 Addresses scanned: - 28 28 Datasheet: http://datasheets.maximintegrated.com/en/ds/MAX34461.pdf 29 29 30 - Author: Guenter Roeck <guenter.roeck@ericsson.com> 30 + Author: Guenter Roeck <linux@roeck-us.net> 31 31 32 32 33 33 Description
+1 -1
Documentation/hwmon/max8688
··· 7 7 Addresses scanned: - 8 8 Datasheet: http://datasheets.maxim-ic.com/en/ds/MAX8688.pdf 9 9 10 - Author: Guenter Roeck <guenter.roeck@ericsson.com> 10 + Author: Guenter Roeck <linux@roeck-us.net> 11 11 12 12 13 13 Description
+1 -1
Documentation/hwmon/pmbus
··· 34 34 Addresses scanned: - 35 35 Datasheet: n.a. 36 36 37 - Author: Guenter Roeck <guenter.roeck@ericsson.com> 37 + Author: Guenter Roeck <linux@roeck-us.net> 38 38 39 39 40 40 Description
+1 -1
Documentation/hwmon/smm665
··· 29 29 http://www.summitmicro.com/prod_select/summary/SMM766/SMM766_2086.pdf 30 30 http://www.summitmicro.com/prod_select/summary/SMM766B/SMM766B_2122.pdf 31 31 32 - Author: Guenter Roeck <guenter.roeck@ericsson.com> 32 + Author: Guenter Roeck <linux@roeck-us.net> 33 33 34 34 35 35 Module Parameters
+1 -1
Documentation/hwmon/ucd9000
··· 11 11 http://focus.ti.com/lit/ds/symlink/ucd9090.pdf 12 12 http://focus.ti.com/lit/ds/symlink/ucd90910.pdf 13 13 14 - Author: Guenter Roeck <guenter.roeck@ericsson.com> 14 + Author: Guenter Roeck <linux@roeck-us.net> 15 15 16 16 17 17 Description
+1 -1
Documentation/hwmon/ucd9200
··· 15 15 http://focus.ti.com/lit/ds/symlink/ucd9246.pdf 16 16 http://focus.ti.com/lit/ds/symlink/ucd9248.pdf 17 17 18 - Author: Guenter Roeck <guenter.roeck@ericsson.com> 18 + Author: Guenter Roeck <linux@roeck-us.net> 19 19 20 20 21 21 Description
+1 -1
Documentation/hwmon/zl6100
··· 54 54 http://archive.ericsson.net/service/internet/picov/get?DocNo=28701-EN/LZT146256 55 55 56 56 57 - Author: Guenter Roeck <guenter.roeck@ericsson.com> 57 + Author: Guenter Roeck <linux@roeck-us.net> 58 58 59 59 60 60 Description
+59 -6
Documentation/input/alps.txt
··· 3 3 4 4 Introduction 5 5 ------------ 6 + Currently the ALPS touchpad driver supports five protocol versions in use by 7 + ALPS touchpads, called versions 1, 2, 3, 4 and 5. 6 8 7 - Currently the ALPS touchpad driver supports four protocol versions in use by 8 - ALPS touchpads, called versions 1, 2, 3, and 4. Information about the various 9 - protocol versions is contained in the following sections. 9 + Since roughly mid-2010 several new ALPS touchpads have been released and 10 + integrated into a variety of laptops and netbooks. These new touchpads 11 + have enough behavior differences that the alps_model_data definition 12 + table, describing the properties of the different versions, is no longer 13 + adequate. The design choices were to re-define the alps_model_data 14 + table, with the risk of regression testing existing devices, or isolate 15 + the new devices outside of the alps_model_data table. The latter design 16 + choice was made. The new touchpad signatures are named: "Rushmore", 17 + "Pinnacle", and "Dolphin", which you will see in the alps.c code. 18 + For the purposes of this document, this group of ALPS touchpads will 19 + generically be called "new ALPS touchpads". 20 + 21 + We experimented with probing the ACPI interface _HID (Hardware ID)/_CID 22 + (Compatibility ID) definition as a way to uniquely identify the 23 + different ALPS variants but there did not appear to be a 1:1 mapping. 24 + In fact, it appeared to be an m:n mapping between the _HID and actual 25 + hardware type. 10 26 11 27 Detection 12 28 --------- ··· 36 20 report" sequence: E8-E7-E7-E7-E9. The response is the model signature and is 37 21 matched against known models in the alps_model_data_array. 38 22 39 - With protocol versions 3 and 4, the E7 report model signature is always 40 - 73-02-64. To differentiate between these versions, the response from the 41 - "Enter Command Mode" sequence must be inspected as described below. 
23 + For older touchpads supporting protocol versions 3 and 4, the E7 report 24 + model signature is always 73-02-64. To differentiate between these 25 + versions, the response from the "Enter Command Mode" sequence must be 26 + inspected as described below. 27 + 28 + The new ALPS touchpads have an E7 signature of 73-03-50 or 73-03-0A but 29 + seem to be better differentiated by the EC Command Mode response. 42 30 43 31 Command Mode 44 32 ------------ ··· 66 46 address of the register being read, and the third contains the value of the 67 47 register. Registers are written by writing the value one nibble at a time 68 48 using the same encoding used for addresses. 49 + 50 + For the new ALPS touchpads, the EC command is used to enter command 51 + mode. The response in the new ALPS touchpads is significantly different, 52 + and more important in determining the behavior. This code has been 53 + separated from the original alps_model_data table and put in the 54 + alps_identify function. For example, there seem to be two hardware init 55 + sequences for the "Dolphin" touchpads as determined by the second byte 56 + of the EC response. 69 57 70 58 Packet Format 71 59 ------------- ··· 215 187 well. 216 188 217 189 So far no v4 devices with tracksticks have been encountered. 190 + 191 + ALPS Absolute Mode - Protocol Version 5 192 + --------------------------------------- 193 + This is basically Protocol Version 3 but with different logic for packet 194 + decode. It uses the same alps_process_touchpad_packet_v3 call with a 195 + specialized decode_fields function pointer to correctly interpret the 196 + packets. This appears to only be used by the Dolphin devices. 
197 + 198 + For single-touch, the 6-byte packet format is: 199 + 200 + byte 0: 1 1 0 0 1 0 0 0 201 + byte 1: 0 x6 x5 x4 x3 x2 x1 x0 202 + byte 2: 0 y6 y5 y4 y3 y2 y1 y0 203 + byte 3: 0 M R L 1 m r l 204 + byte 4: y10 y9 y8 y7 x10 x9 x8 x7 205 + byte 5: 0 z6 z5 z4 z3 z2 z1 z0 206 + 207 + For mt, the format is: 208 + 209 + byte 0: 1 1 1 n3 1 n2 n1 x24 210 + byte 1: 1 y7 y6 y5 y4 y3 y2 y1 211 + byte 2: ? x2 x1 y12 y11 y10 y9 y8 212 + byte 3: 0 x23 x22 x21 x20 x19 x18 x17 213 + byte 4: 0 x9 x8 x7 x6 x5 x4 x3 214 + byte 5: 0 x16 x15 x14 x13 x12 x11 x10
+77
Documentation/networking/tuntap.txt
··· 105 105 Proto [2 bytes] 106 106 Raw protocol(IP, IPv6, etc) frame. 107 107 108 + 3.3 Multiqueue tuntap interface: 109 + 110 + From version 3.8, Linux supports multiqueue tuntap which can uses multiple 111 + file descriptors (queues) to parallelize packets sending or receiving. The 112 + device allocation is the same as before, and if user wants to create multiple 113 + queues, TUNSETIFF with the same device name must be called many times with 114 + IFF_MULTI_QUEUE flag. 115 + 116 + char *dev should be the name of the device, queues is the number of queues to 117 + be created, fds is used to store and return the file descriptors (queues) 118 + created to the caller. Each file descriptor were served as the interface of a 119 + queue which could be accessed by userspace. 120 + 121 + #include <linux/if.h> 122 + #include <linux/if_tun.h> 123 + 124 + int tun_alloc_mq(char *dev, int queues, int *fds) 125 + { 126 + struct ifreq ifr; 127 + int fd, err, i; 128 + 129 + if (!dev) 130 + return -1; 131 + 132 + memset(&ifr, 0, sizeof(ifr)); 133 + /* Flags: IFF_TUN - TUN device (no Ethernet headers) 134 + * IFF_TAP - TAP device 135 + * 136 + * IFF_NO_PI - Do not provide packet information 137 + * IFF_MULTI_QUEUE - Create a queue of multiqueue device 138 + */ 139 + ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_MULTI_QUEUE; 140 + strcpy(ifr.ifr_name, dev); 141 + 142 + for (i = 0; i < queues; i++) { 143 + if ((fd = open("/dev/net/tun", O_RDWR)) < 0) 144 + goto err; 145 + err = ioctl(fd, TUNSETIFF, (void *)&ifr); 146 + if (err) { 147 + close(fd); 148 + goto err; 149 + } 150 + fds[i] = fd; 151 + } 152 + 153 + return 0; 154 + err: 155 + for (--i; i >= 0; i--) 156 + close(fds[i]); 157 + return err; 158 + } 159 + 160 + A new ioctl(TUNSETQUEUE) were introduced to enable or disable a queue. When 161 + calling it with IFF_DETACH_QUEUE flag, the queue were disabled. And when 162 + calling it with IFF_ATTACH_QUEUE flag, the queue were enabled. 
The queue were 163 + enabled by default after it was created through TUNSETIFF. 164 + 165 + fd is the file descriptor (queue) that we want to enable or disable, when 166 + enable is true we enable it, otherwise we disable it 167 + 168 + #include <linux/if.h> 169 + #include <linux/if_tun.h> 170 + 171 + int tun_set_queue(int fd, int enable) 172 + { 173 + struct ifreq ifr; 174 + 175 + memset(&ifr, 0, sizeof(ifr)); 176 + 177 + if (enable) 178 + ifr.ifr_flags = IFF_ATTACH_QUEUE; 179 + else 180 + ifr.ifr_flags = IFF_DETACH_QUEUE; 181 + 182 + return ioctl(fd, TUNSETQUEUE, (void *)&ifr); 183 + } 184 + 108 185 Universal TUN/TAP device driver Frequently Asked Question. 109 186 110 187 1. What platforms are supported by TUN/TAP driver ?
+20 -5
Documentation/power/opp.txt
··· 1 - *=============* 2 - * OPP Library * 3 - *=============* 1 + Operating Performance Points (OPP) Library 2 + ========================================== 4 3 5 4 (C) 2009-2010 Nishanth Menon <nm@ti.com>, Texas Instruments Incorporated 6 5 ··· 15 16 16 17 1. Introduction 17 18 =============== 19 + 1.1 What is an Operating Performance Point (OPP)? 20 + 18 21 Complex SoCs of today consists of a multiple sub-modules working in conjunction. 19 22 In an operational system executing varied use cases, not all modules in the SoC 20 23 need to function at their highest performing frequency all the time. To 21 24 facilitate this, sub-modules in a SoC are grouped into domains, allowing some 22 - domains to run at lower voltage and frequency while other domains are loaded 23 - more. The set of discrete tuples consisting of frequency and voltage pairs that 25 + domains to run at lower voltage and frequency while other domains run at 26 + voltage/frequency pairs that are higher. 27 + 28 + The set of discrete tuples consisting of frequency and voltage pairs that 24 29 the device will support per domain are called Operating Performance Points or 25 30 OPPs. 31 + 32 + As an example: 33 + Let us consider an MPU device which supports the following: 34 + {300MHz at minimum voltage of 1V}, {800MHz at minimum voltage of 1.2V}, 35 + {1GHz at minimum voltage of 1.3V} 36 + 37 + We can represent these as three OPPs as the following {Hz, uV} tuples: 38 + {300000000, 1000000} 39 + {800000000, 1200000} 40 + {1000000000, 1300000} 41 + 42 + 1.2 Operating Performance Points Library 26 43 27 44 OPP library provides a set of helper functions to organize and query the OPP 28 45 information. The library is located in drivers/base/power/opp.c and the header
+1 -1
Documentation/printk-formats.txt
··· 170 170 Thank you for your cooperation and attention. 171 171 172 172 173 - By Randy Dunlap <rdunlap@xenotime.net> and 173 + By Randy Dunlap <rdunlap@infradead.org> and 174 174 Andrew Murray <amurray@mpc-data.co.uk>
+1 -1
Documentation/trace/ftrace.txt
··· 1873 1873 1874 1874 status\input | 0 | 1 | else | 1875 1875 --------------+------------+------------+------------+ 1876 - not allocated |(do nothing)| alloc+swap | EINVAL | 1876 + not allocated |(do nothing)| alloc+swap |(do nothing)| 1877 1877 --------------+------------+------------+------------+ 1878 1878 allocated | free | swap | clear | 1879 1879 --------------+------------+------------+------------+
+2
MAINTAINERS
··· 6412 6412 F: drivers/net/ethernet/qlogic/qla3xxx.* 6413 6413 6414 6414 QLOGIC QLCNIC (1/10)Gb ETHERNET DRIVER 6415 + M: Rajesh Borundia <rajesh.borundia@qlogic.com> 6416 + M: Shahed Shaikh <shahed.shaikh@qlogic.com> 6415 6417 M: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com> 6416 6418 M: Sony Chacko <sony.chacko@qlogic.com> 6417 6419 M: linux-driver@qlogic.com
+1 -1
Makefile
··· 1 1 VERSION = 3 2 2 PATCHLEVEL = 9 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc1 4 + EXTRAVERSION = -rc2 5 5 NAME = Unicycling Gorilla 6 6 7 7 # *DOCUMENTATION*
+1
arch/alpha/boot/head.S
··· 4 4 * initial bootloader stuff.. 5 5 */ 6 6 7 + #include <asm/pal.h> 7 8 8 9 .set noreorder 9 10 .globl __start
+1 -1
arch/arm/boot/compressed/Makefile
··· 120 120 KBUILD_CFLAGS = $(subst -pg, , $(ORIG_CFLAGS)) 121 121 endif 122 122 123 - ccflags-y := -fpic -fno-builtin -I$(obj) 123 + ccflags-y := -fpic -mno-single-pic-base -fno-builtin -I$(obj) 124 124 asflags-y := -Wa,-march=all -DZIMAGE 125 125 126 126 # Supply kernel BSS size to the decompressor via a linker symbol.
+4 -4
arch/arm/include/asm/mmu.h
··· 5 5 6 6 typedef struct { 7 7 #ifdef CONFIG_CPU_HAS_ASID 8 - u64 id; 8 + atomic64_t id; 9 9 #endif 10 - unsigned int vmalloc_seq; 10 + unsigned int vmalloc_seq; 11 11 } mm_context_t; 12 12 13 13 #ifdef CONFIG_CPU_HAS_ASID 14 14 #define ASID_BITS 8 15 15 #define ASID_MASK ((~0ULL) << ASID_BITS) 16 - #define ASID(mm) ((mm)->context.id & ~ASID_MASK) 16 + #define ASID(mm) ((mm)->context.id.counter & ~ASID_MASK) 17 17 #else 18 18 #define ASID(mm) (0) 19 19 #endif ··· 26 26 * modified for 2.6 by Hyok S. Choi <hyok.choi@samsung.com> 27 27 */ 28 28 typedef struct { 29 - unsigned long end_brk; 29 + unsigned long end_brk; 30 30 } mm_context_t; 31 31 32 32 #endif
+1 -1
arch/arm/include/asm/mmu_context.h
··· 25 25 #ifdef CONFIG_CPU_HAS_ASID 26 26 27 27 void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk); 28 - #define init_new_context(tsk,mm) ({ mm->context.id = 0; }) 28 + #define init_new_context(tsk,mm) ({ atomic64_set(&mm->context.id, 0); 0; }) 29 29 30 30 #else /* !CONFIG_CPU_HAS_ASID */ 31 31
+28 -6
arch/arm/include/asm/tlbflush.h
··· 34 34 #define TLB_V6_D_ASID (1 << 17) 35 35 #define TLB_V6_I_ASID (1 << 18) 36 36 37 + #define TLB_V6_BP (1 << 19) 38 + 37 39 /* Unified Inner Shareable TLB operations (ARMv7 MP extensions) */ 38 - #define TLB_V7_UIS_PAGE (1 << 19) 39 - #define TLB_V7_UIS_FULL (1 << 20) 40 - #define TLB_V7_UIS_ASID (1 << 21) 40 + #define TLB_V7_UIS_PAGE (1 << 20) 41 + #define TLB_V7_UIS_FULL (1 << 21) 42 + #define TLB_V7_UIS_ASID (1 << 22) 43 + #define TLB_V7_UIS_BP (1 << 23) 41 44 42 45 #define TLB_BARRIER (1 << 28) 43 46 #define TLB_L2CLEAN_FR (1 << 29) /* Feroceon */ ··· 153 150 #define v6wbi_tlb_flags (TLB_WB | TLB_DCLEAN | TLB_BARRIER | \ 154 151 TLB_V6_I_FULL | TLB_V6_D_FULL | \ 155 152 TLB_V6_I_PAGE | TLB_V6_D_PAGE | \ 156 - TLB_V6_I_ASID | TLB_V6_D_ASID) 153 + TLB_V6_I_ASID | TLB_V6_D_ASID | \ 154 + TLB_V6_BP) 157 155 158 156 #ifdef CONFIG_CPU_TLB_V6 159 157 # define v6wbi_possible_flags v6wbi_tlb_flags ··· 170 166 #endif 171 167 172 168 #define v7wbi_tlb_flags_smp (TLB_WB | TLB_DCLEAN | TLB_BARRIER | \ 173 - TLB_V7_UIS_FULL | TLB_V7_UIS_PAGE | TLB_V7_UIS_ASID) 169 + TLB_V7_UIS_FULL | TLB_V7_UIS_PAGE | \ 170 + TLB_V7_UIS_ASID | TLB_V7_UIS_BP) 174 171 #define v7wbi_tlb_flags_up (TLB_WB | TLB_DCLEAN | TLB_BARRIER | \ 175 - TLB_V6_U_FULL | TLB_V6_U_PAGE | TLB_V6_U_ASID) 172 + TLB_V6_U_FULL | TLB_V6_U_PAGE | \ 173 + TLB_V6_U_ASID | TLB_V6_BP) 176 174 177 175 #ifdef CONFIG_CPU_TLB_V7 178 176 ··· 436 430 } 437 431 } 438 432 433 + static inline void local_flush_bp_all(void) 434 + { 435 + const int zero = 0; 436 + const unsigned int __tlb_flag = __cpu_tlb_flags; 437 + 438 + if (tlb_flag(TLB_V7_UIS_BP)) 439 + asm("mcr p15, 0, %0, c7, c1, 6" : : "r" (zero)); 440 + else if (tlb_flag(TLB_V6_BP)) 441 + asm("mcr p15, 0, %0, c7, c5, 6" : : "r" (zero)); 442 + 443 + if (tlb_flag(TLB_BARRIER)) 444 + isb(); 445 + } 446 + 439 447 /* 440 448 * flush_pmd_entry 441 449 * ··· 500 480 #define flush_tlb_kernel_page local_flush_tlb_kernel_page 501 481 #define flush_tlb_range local_flush_tlb_range 
502 482 #define flush_tlb_kernel_range local_flush_tlb_kernel_range 483 + #define flush_bp_all local_flush_bp_all 503 484 #else 504 485 extern void flush_tlb_all(void); 505 486 extern void flush_tlb_mm(struct mm_struct *mm); ··· 508 487 extern void flush_tlb_kernel_page(unsigned long kaddr); 509 488 extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end); 510 489 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end); 490 + extern void flush_bp_all(void); 511 491 #endif 512 492 513 493 /*
+1 -1
arch/arm/include/uapi/asm/unistd.h
··· 404 404 #define __NR_setns (__NR_SYSCALL_BASE+375) 405 405 #define __NR_process_vm_readv (__NR_SYSCALL_BASE+376) 406 406 #define __NR_process_vm_writev (__NR_SYSCALL_BASE+377) 407 - /* 378 for kcmp */ 407 + #define __NR_kcmp (__NR_SYSCALL_BASE+378) 408 408 #define __NR_finit_module (__NR_SYSCALL_BASE+379) 409 409 410 410 /*
+1 -1
arch/arm/kernel/asm-offsets.c
··· 110 110 BLANK(); 111 111 #endif 112 112 #ifdef CONFIG_CPU_HAS_ASID 113 - DEFINE(MM_CONTEXT_ID, offsetof(struct mm_struct, context.id)); 113 + DEFINE(MM_CONTEXT_ID, offsetof(struct mm_struct, context.id.counter)); 114 114 BLANK(); 115 115 #endif 116 116 DEFINE(VMA_VM_MM, offsetof(struct vm_area_struct, vm_mm));
+1 -1
arch/arm/kernel/calls.S
··· 387 387 /* 375 */ CALL(sys_setns) 388 388 CALL(sys_process_vm_readv) 389 389 CALL(sys_process_vm_writev) 390 - CALL(sys_ni_syscall) /* reserved for sys_kcmp */ 390 + CALL(sys_kcmp) 391 391 CALL(sys_finit_module) 392 392 #ifndef syscalls_counted 393 393 .equ syscalls_padding, ((NR_syscalls + 3) & ~3) - NR_syscalls
+22 -4
arch/arm/kernel/head.S
··· 184 184 orr r3, r3, #3 @ PGD block type 185 185 mov r6, #4 @ PTRS_PER_PGD 186 186 mov r7, #1 << (55 - 32) @ L_PGD_SWAPPER 187 - 1: str r3, [r0], #4 @ set bottom PGD entry bits 187 + 1: 188 + #ifdef CONFIG_CPU_ENDIAN_BE8 188 189 str r7, [r0], #4 @ set top PGD entry bits 190 + str r3, [r0], #4 @ set bottom PGD entry bits 191 + #else 192 + str r3, [r0], #4 @ set bottom PGD entry bits 193 + str r7, [r0], #4 @ set top PGD entry bits 194 + #endif 189 195 add r3, r3, #0x1000 @ next PMD table 190 196 subs r6, r6, #1 191 197 bne 1b 192 198 193 199 add r4, r4, #0x1000 @ point to the PMD tables 200 + #ifdef CONFIG_CPU_ENDIAN_BE8 201 + add r4, r4, #4 @ we only write the bottom word 202 + #endif 194 203 #endif 195 204 196 205 ldr r7, [r10, #PROCINFO_MM_MMUFLAGS] @ mm_mmuflags ··· 267 258 addne r6, r6, #1 << SECTION_SHIFT 268 259 strne r6, [r3] 269 260 261 + #if defined(CONFIG_LPAE) && defined(CONFIG_CPU_ENDIAN_BE8) 262 + sub r4, r4, #4 @ Fixup page table pointer 263 + @ for 64-bit descriptors 264 + #endif 265 + 270 266 #ifdef CONFIG_DEBUG_LL 271 267 #if !defined(CONFIG_DEBUG_ICEDCC) && !defined(CONFIG_DEBUG_SEMIHOSTING) 272 268 /* ··· 290 276 orr r3, r7, r3, lsl #SECTION_SHIFT 291 277 #ifdef CONFIG_ARM_LPAE 292 278 mov r7, #1 << (54 - 32) @ XN 279 + #ifdef CONFIG_CPU_ENDIAN_BE8 280 + str r7, [r0], #4 281 + str r3, [r0], #4 282 + #else 283 + str r3, [r0], #4 284 + str r7, [r0], #4 285 + #endif 293 286 #else 294 287 orr r3, r3, #PMD_SECT_XN 295 - #endif 296 288 str r3, [r0], #4 297 - #ifdef CONFIG_ARM_LPAE 298 - str r7, [r0], #4 299 289 #endif 300 290 301 291 #else /* CONFIG_DEBUG_ICEDCC || CONFIG_DEBUG_SEMIHOSTING */
+1 -1
arch/arm/kernel/hw_breakpoint.c
··· 1023 1023 static int __cpuinit dbg_reset_notify(struct notifier_block *self, 1024 1024 unsigned long action, void *cpu) 1025 1025 { 1026 - if (action == CPU_ONLINE) 1026 + if ((action & ~CPU_TASKS_FROZEN) == CPU_ONLINE) 1027 1027 smp_call_function_single((int)cpu, reset_ctrl_regs, NULL, 1); 1028 1028 1029 1029 return NOTIFY_OK;
+2 -2
arch/arm/kernel/perf_event.c
··· 400 400 } 401 401 402 402 if (event->group_leader != event) { 403 - if (validate_group(event) != 0); 403 + if (validate_group(event) != 0) 404 404 return -EINVAL; 405 405 } 406 406 ··· 484 484 SET_RUNTIME_PM_OPS(armpmu_runtime_suspend, armpmu_runtime_resume, NULL) 485 485 }; 486 486 487 - static void __init armpmu_init(struct arm_pmu *armpmu) 487 + static void armpmu_init(struct arm_pmu *armpmu) 488 488 { 489 489 atomic_set(&armpmu->active_events, 0); 490 490 mutex_init(&armpmu->reserve_mutex);
+1 -1
arch/arm/kernel/perf_event_v7.c
··· 774 774 /* 775 775 * PMXEVTYPER: Event selection reg 776 776 */ 777 - #define ARMV7_EVTYPE_MASK 0xc00000ff /* Mask for writable bits */ 777 + #define ARMV7_EVTYPE_MASK 0xc80000ff /* Mask for writable bits */ 778 778 #define ARMV7_EVTYPE_EVENT 0xff /* Mask for EVENT bits */ 779 779 780 780 /*
+1
arch/arm/kernel/smp.c
··· 285 285 * switch away from it before attempting any exclusive accesses. 286 286 */ 287 287 cpu_switch_mm(mm->pgd, mm); 288 + local_flush_bp_all(); 288 289 enter_lazy_tlb(mm, current); 289 290 local_flush_tlb_all(); 290 291
+12
arch/arm/kernel/smp_tlb.c
··· 64 64 local_flush_tlb_kernel_range(ta->ta_start, ta->ta_end); 65 65 } 66 66 67 + static inline void ipi_flush_bp_all(void *ignored) 68 + { 69 + local_flush_bp_all(); 70 + } 71 + 67 72 void flush_tlb_all(void) 68 73 { 69 74 if (tlb_ops_need_broadcast()) ··· 132 127 local_flush_tlb_kernel_range(start, end); 133 128 } 134 129 130 + void flush_bp_all(void) 131 + { 132 + if (tlb_ops_need_broadcast()) 133 + on_each_cpu(ipi_flush_bp_all, NULL, 1); 134 + else 135 + local_flush_bp_all(); 136 + }
+4
arch/arm/kernel/smp_twd.c
··· 22 22 #include <linux/of_irq.h> 23 23 #include <linux/of_address.h> 24 24 25 + #include <asm/smp_plat.h> 25 26 #include <asm/smp_twd.h> 26 27 #include <asm/localtimer.h> 27 28 ··· 373 372 { 374 373 struct device_node *np; 375 374 int err; 375 + 376 + if (!is_smp() || !setup_max_cpus) 377 + return; 376 378 377 379 np = of_find_matching_node(NULL, twd_of_match); 378 380 if (!np)
+1
arch/arm/kernel/suspend.c
··· 68 68 ret = __cpu_suspend(arg, fn); 69 69 if (ret == 0) { 70 70 cpu_switch_mm(mm->pgd, mm); 71 + local_flush_bp_all(); 71 72 local_flush_tlb_all(); 72 73 } 73 74
+44 -41
arch/arm/lib/memset.S
··· 19 19 1: subs r2, r2, #4 @ 1 do we have enough 20 20 blt 5f @ 1 bytes to align with? 21 21 cmp r3, #2 @ 1 22 - strltb r1, [r0], #1 @ 1 23 - strleb r1, [r0], #1 @ 1 24 - strb r1, [r0], #1 @ 1 22 + strltb r1, [ip], #1 @ 1 23 + strleb r1, [ip], #1 @ 1 24 + strb r1, [ip], #1 @ 1 25 25 add r2, r2, r3 @ 1 (r2 = r2 - (4 - r3)) 26 26 /* 27 27 * The pointer is now aligned and the length is adjusted. Try doing the ··· 29 29 */ 30 30 31 31 ENTRY(memset) 32 - ands r3, r0, #3 @ 1 unaligned? 32 + /* 33 + * Preserve the contents of r0 for the return value. 34 + */ 35 + mov ip, r0 36 + ands r3, ip, #3 @ 1 unaligned? 33 37 bne 1b @ 1 34 38 /* 35 - * we know that the pointer in r0 is aligned to a word boundary. 39 + * we know that the pointer in ip is aligned to a word boundary. 36 40 */ 37 41 orr r1, r1, r1, lsl #8 38 42 orr r1, r1, r1, lsl #16 ··· 47 43 #if ! CALGN(1)+0 48 44 49 45 /* 50 - * We need an extra register for this loop - save the return address and 51 - * use the LR 46 + * We need 2 extra registers for this loop - use r8 and the LR 52 47 */ 53 - str lr, [sp, #-4]! 54 - mov ip, r1 48 + stmfd sp!, {r8, lr} 49 + mov r8, r1 55 50 mov lr, r1 56 51 57 52 2: subs r2, r2, #64 58 - stmgeia r0!, {r1, r3, ip, lr} @ 64 bytes at a time. 59 - stmgeia r0!, {r1, r3, ip, lr} 60 - stmgeia r0!, {r1, r3, ip, lr} 61 - stmgeia r0!, {r1, r3, ip, lr} 53 + stmgeia ip!, {r1, r3, r8, lr} @ 64 bytes at a time. 54 + stmgeia ip!, {r1, r3, r8, lr} 55 + stmgeia ip!, {r1, r3, r8, lr} 56 + stmgeia ip!, {r1, r3, r8, lr} 62 57 bgt 2b 63 - ldmeqfd sp!, {pc} @ Now <64 bytes to go. 58 + ldmeqfd sp!, {r8, pc} @ Now <64 bytes to go. 
64 59 /* 65 60 * No need to correct the count; we're only testing bits from now on 66 61 */ 67 62 tst r2, #32 68 - stmneia r0!, {r1, r3, ip, lr} 69 - stmneia r0!, {r1, r3, ip, lr} 63 + stmneia ip!, {r1, r3, r8, lr} 64 + stmneia ip!, {r1, r3, r8, lr} 70 65 tst r2, #16 71 - stmneia r0!, {r1, r3, ip, lr} 72 - ldr lr, [sp], #4 66 + stmneia ip!, {r1, r3, r8, lr} 67 + ldmfd sp!, {r8, lr} 73 68 74 69 #else 75 70 ··· 77 74 * whole cache lines at once. 78 75 */ 79 76 80 - stmfd sp!, {r4-r7, lr} 77 + stmfd sp!, {r4-r8, lr} 81 78 mov r4, r1 82 79 mov r5, r1 83 80 mov r6, r1 84 81 mov r7, r1 85 - mov ip, r1 82 + mov r8, r1 86 83 mov lr, r1 87 84 88 85 cmp r2, #96 89 - tstgt r0, #31 86 + tstgt ip, #31 90 87 ble 3f 91 88 92 - and ip, r0, #31 93 - rsb ip, ip, #32 94 - sub r2, r2, ip 95 - movs ip, ip, lsl #(32 - 4) 96 - stmcsia r0!, {r4, r5, r6, r7} 97 - stmmiia r0!, {r4, r5} 98 - tst ip, #(1 << 30) 99 - mov ip, r1 100 - strne r1, [r0], #4 89 + and r8, ip, #31 90 + rsb r8, r8, #32 91 + sub r2, r2, r8 92 + movs r8, r8, lsl #(32 - 4) 93 + stmcsia ip!, {r4, r5, r6, r7} 94 + stmmiia ip!, {r4, r5} 95 + tst r8, #(1 << 30) 96 + mov r8, r1 97 + strne r1, [ip], #4 101 98 102 99 3: subs r2, r2, #64 103 - stmgeia r0!, {r1, r3-r7, ip, lr} 104 - stmgeia r0!, {r1, r3-r7, ip, lr} 100 + stmgeia ip!, {r1, r3-r8, lr} 101 + stmgeia ip!, {r1, r3-r8, lr} 105 102 bgt 3b 106 - ldmeqfd sp!, {r4-r7, pc} 103 + ldmeqfd sp!, {r4-r8, pc} 107 104 108 105 tst r2, #32 109 - stmneia r0!, {r1, r3-r7, ip, lr} 106 + stmneia ip!, {r1, r3-r8, lr} 110 107 tst r2, #16 111 - stmneia r0!, {r4-r7} 112 - ldmfd sp!, {r4-r7, lr} 108 + stmneia ip!, {r4-r7} 109 + ldmfd sp!, {r4-r8, lr} 113 110 114 111 #endif 115 112 116 113 4: tst r2, #8 117 - stmneia r0!, {r1, r3} 114 + stmneia ip!, {r1, r3} 118 115 tst r2, #4 119 - strne r1, [r0], #4 116 + strne r1, [ip], #4 120 117 /* 121 118 * When we get here, we've got less than 4 bytes to zero. We 122 119 * may have an unaligned pointer as well. 
123 120 */ 124 121 5: tst r2, #2 125 - strneb r1, [r0], #1 126 - strneb r1, [r0], #1 122 + strneb r1, [ip], #1 123 + strneb r1, [ip], #1 127 124 tst r2, #1 128 - strneb r1, [r0], #1 125 + strneb r1, [ip], #1 129 126 mov pc, lr 130 127 ENDPROC(memset)
+1 -1
arch/arm/mach-netx/generic.c
··· 168 168 { 169 169 int irq; 170 170 171 - vic_init(io_p2v(NETX_PA_VIC), 0, ~0, 0); 171 + vic_init(io_p2v(NETX_PA_VIC), NETX_IRQ_VIC_START, ~0, 0); 172 172 173 173 for (irq = NETX_IRQ_HIF_CHAINED(0); irq <= NETX_IRQ_HIF_LAST; irq++) { 174 174 irq_set_chip_and_handler(irq, &netx_hif_chip,
+32 -32
arch/arm/mach-netx/include/mach/irqs.h
··· 17 17 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 18 18 */ 19 19 20 - #define NETX_IRQ_VIC_START 0 21 - #define NETX_IRQ_SOFTINT 0 22 - #define NETX_IRQ_TIMER0 1 23 - #define NETX_IRQ_TIMER1 2 24 - #define NETX_IRQ_TIMER2 3 25 - #define NETX_IRQ_SYSTIME_NS 4 26 - #define NETX_IRQ_SYSTIME_S 5 27 - #define NETX_IRQ_GPIO_15 6 28 - #define NETX_IRQ_WATCHDOG 7 29 - #define NETX_IRQ_UART0 8 30 - #define NETX_IRQ_UART1 9 31 - #define NETX_IRQ_UART2 10 32 - #define NETX_IRQ_USB 11 33 - #define NETX_IRQ_SPI 12 34 - #define NETX_IRQ_I2C 13 35 - #define NETX_IRQ_LCD 14 36 - #define NETX_IRQ_HIF 15 37 - #define NETX_IRQ_GPIO_0_14 16 38 - #define NETX_IRQ_XPEC0 17 39 - #define NETX_IRQ_XPEC1 18 40 - #define NETX_IRQ_XPEC2 19 41 - #define NETX_IRQ_XPEC3 20 42 - #define NETX_IRQ_XPEC(no) (17 + (no)) 43 - #define NETX_IRQ_MSYNC0 21 44 - #define NETX_IRQ_MSYNC1 22 45 - #define NETX_IRQ_MSYNC2 23 46 - #define NETX_IRQ_MSYNC3 24 47 - #define NETX_IRQ_IRQ_PHY 25 48 - #define NETX_IRQ_ISO_AREA 26 20 + #define NETX_IRQ_VIC_START 64 21 + #define NETX_IRQ_SOFTINT (NETX_IRQ_VIC_START + 0) 22 + #define NETX_IRQ_TIMER0 (NETX_IRQ_VIC_START + 1) 23 + #define NETX_IRQ_TIMER1 (NETX_IRQ_VIC_START + 2) 24 + #define NETX_IRQ_TIMER2 (NETX_IRQ_VIC_START + 3) 25 + #define NETX_IRQ_SYSTIME_NS (NETX_IRQ_VIC_START + 4) 26 + #define NETX_IRQ_SYSTIME_S (NETX_IRQ_VIC_START + 5) 27 + #define NETX_IRQ_GPIO_15 (NETX_IRQ_VIC_START + 6) 28 + #define NETX_IRQ_WATCHDOG (NETX_IRQ_VIC_START + 7) 29 + #define NETX_IRQ_UART0 (NETX_IRQ_VIC_START + 8) 30 + #define NETX_IRQ_UART1 (NETX_IRQ_VIC_START + 9) 31 + #define NETX_IRQ_UART2 (NETX_IRQ_VIC_START + 10) 32 + #define NETX_IRQ_USB (NETX_IRQ_VIC_START + 11) 33 + #define NETX_IRQ_SPI (NETX_IRQ_VIC_START + 12) 34 + #define NETX_IRQ_I2C (NETX_IRQ_VIC_START + 13) 35 + #define NETX_IRQ_LCD (NETX_IRQ_VIC_START + 14) 36 + #define NETX_IRQ_HIF (NETX_IRQ_VIC_START + 15) 37 + #define NETX_IRQ_GPIO_0_14 (NETX_IRQ_VIC_START + 16) 38 + #define NETX_IRQ_XPEC0 (NETX_IRQ_VIC_START + 17) 39 + #define NETX_IRQ_XPEC1 (NETX_IRQ_VIC_START + 18) 40 + #define NETX_IRQ_XPEC2 (NETX_IRQ_VIC_START + 19) 41 + #define NETX_IRQ_XPEC3 (NETX_IRQ_VIC_START + 20) 42 + #define NETX_IRQ_XPEC(no) (NETX_IRQ_VIC_START + 17 + (no)) 43 + #define NETX_IRQ_MSYNC0 (NETX_IRQ_VIC_START + 21) 44 + #define NETX_IRQ_MSYNC1 (NETX_IRQ_VIC_START + 22) 45 + #define NETX_IRQ_MSYNC2 (NETX_IRQ_VIC_START + 23) 46 + #define NETX_IRQ_MSYNC3 (NETX_IRQ_VIC_START + 24) 47 + #define NETX_IRQ_IRQ_PHY (NETX_IRQ_VIC_START + 25) 48 + #define NETX_IRQ_ISO_AREA (NETX_IRQ_VIC_START + 26) 49 49 /* int 27 is reserved */ 50 50 /* int 28 is reserved */ 51 - #define NETX_IRQ_TIMER3 29 52 - #define NETX_IRQ_TIMER4 30 51 + #define NETX_IRQ_TIMER3 (NETX_IRQ_VIC_START + 29) 52 + #define NETX_IRQ_TIMER4 (NETX_IRQ_VIC_START + 30) 53 53 /* int 31 is reserved */ 54 54 55 - #define NETX_IRQS 32 55 + #define NETX_IRQS (NETX_IRQ_VIC_START + 32) 56 56 57 57 /* for multiplexed irqs on gpio 0..14 */ 58 58 #define NETX_IRQ_GPIO(x) (NETX_IRQS + (x))
+18 -11
arch/arm/mm/context.c
··· 152 152 return 0;
153 153 }
154 154
155 - static void new_context(struct mm_struct *mm, unsigned int cpu)
155 + static u64 new_context(struct mm_struct *mm, unsigned int cpu)
156 156 {
157 - u64 asid = mm->context.id;
157 + u64 asid = atomic64_read(&mm->context.id);
158 158 u64 generation = atomic64_read(&asid_generation);
159 159
160 160 if (asid != 0 && is_reserved_asid(asid)) {
··· 181 181 cpumask_clear(mm_cpumask(mm));
182 182 }
183 183
184 - mm->context.id = asid;
184 + return asid;
185 185 }
186 186
187 187 void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk)
188 188 {
189 189 unsigned long flags;
190 190 unsigned int cpu = smp_processor_id();
191 + u64 asid;
191 192
192 193 if (unlikely(mm->context.vmalloc_seq != init_mm.context.vmalloc_seq))
193 194 __check_vmalloc_seq(mm);
··· 199 198 */
200 199 cpu_set_reserved_ttbr0();
201 200
202 - if (!((mm->context.id ^ atomic64_read(&asid_generation)) >> ASID_BITS)
203 - && atomic64_xchg(&per_cpu(active_asids, cpu), mm->context.id))
201 + asid = atomic64_read(&mm->context.id);
202 + if (!((asid ^ atomic64_read(&asid_generation)) >> ASID_BITS)
203 + && atomic64_xchg(&per_cpu(active_asids, cpu), asid))
204 204 goto switch_mm_fastpath;
205 205
206 206 raw_spin_lock_irqsave(&cpu_asid_lock, flags);
207 207 /* Check that our ASID belongs to the current generation. */
208 - if ((mm->context.id ^ atomic64_read(&asid_generation)) >> ASID_BITS)
209 - new_context(mm, cpu);
208 + asid = atomic64_read(&mm->context.id);
209 + if ((asid ^ atomic64_read(&asid_generation)) >> ASID_BITS) {
210 + asid = new_context(mm, cpu);
211 + atomic64_set(&mm->context.id, asid);
212 + }
210 213
211 - atomic64_set(&per_cpu(active_asids, cpu), mm->context.id);
212 - cpumask_set_cpu(cpu, mm_cpumask(mm));
213 -
214 - if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
214 + if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending)) {
215 + local_flush_bp_all();
215 216 local_flush_tlb_all();
217 + }
218 +
219 + atomic64_set(&per_cpu(active_asids, cpu), asid);
220 + cpumask_set_cpu(cpu, mm_cpumask(mm));
216 221 raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
217 222
218 223 switch_mm_fastpath:
+1
arch/arm/mm/idmap.c
··· 141 141 { 142 142 /* Switch to the identity mapping. */ 143 143 cpu_switch_mm(idmap_pgd, &init_mm); 144 + local_flush_bp_all(); 144 145 145 146 #ifdef CONFIG_CPU_HAS_ASID 146 147 /*
+1 -1
arch/arm/mm/proc-v7-3level.S
··· 48 48 ENTRY(cpu_v7_switch_mm) 49 49 #ifdef CONFIG_MMU 50 50 mmid r1, r1 @ get mm->context.id 51 - and r3, r1, #0xff 51 + asid r3, r1 52 52 mov r3, r3, lsl #(48 - 32) @ ASID 53 53 mcrr p15, 0, r0, r3, c2 @ set TTB 0 54 54 isb
+1
arch/ia64/kernel/perfmon.c
··· 619 619 .mount = pfmfs_mount, 620 620 .kill_sb = kill_anon_super, 621 621 }; 622 + MODULE_ALIAS_FS("pfmfs"); 622 623 623 624 DEFINE_PER_CPU(unsigned long, pfm_syst_info); 624 625 DEFINE_PER_CPU(struct task_struct *, pmu_owner);
-3
arch/metag/include/asm/elf.h
··· 100 100 101 101 #define ELF_PLATFORM (NULL) 102 102 103 - #define SET_PERSONALITY(ex) \ 104 - set_personality(PER_LINUX | (current->personality & (~PER_MASK))) 105 - 106 103 #define STACK_RND_MASK (0) 107 104 108 105 #ifdef CONFIG_METAG_USER_TCM
+1
arch/metag/mm/Kconfig
··· 40 40 41 41 config NUMA 42 42 bool "Non Uniform Memory Access (NUMA) Support" 43 + select ARCH_WANT_NUMA_VARIABLE_LOCALITY 43 44 help 44 45 Some Meta systems have MMU-mappable on-chip memories with 45 46 lower latencies than main memory. This enables support for
+1
arch/powerpc/platforms/cell/spufs/inode.c
··· 749 749 .mount = spufs_mount, 750 750 .kill_sb = kill_litter_super, 751 751 }; 752 + MODULE_ALIAS_FS("spufs"); 752 753 753 754 static int __init spufs_init(void) 754 755 {
+1
arch/s390/hypfs/inode.c
··· 456 456 .mount = hypfs_mount, 457 457 .kill_sb = hypfs_kill_super 458 458 }; 459 + MODULE_ALIAS_FS("s390_hypfs"); 459 460 460 461 static const struct super_operations hypfs_s_ops = { 461 462 .statfs = simple_statfs,
+1
arch/s390/include/asm/cpu_mf.h
··· 12 12 #ifndef _ASM_S390_CPU_MF_H 13 13 #define _ASM_S390_CPU_MF_H 14 14 15 + #include <linux/errno.h> 15 16 #include <asm/facility.h> 16 17 17 18 #define CPU_MF_INT_SF_IAE (1 << 31) /* invalid entry address */
+3
arch/tile/include/asm/compat.h
··· 288 288 long compat_sys_fallocate(int fd, int mode, 289 289 u32 offset_lo, u32 offset_hi, 290 290 u32 len_lo, u32 len_hi); 291 + long compat_sys_llseek(unsigned int fd, unsigned int offset_high, 292 + unsigned int offset_low, loff_t __user * result, 293 + unsigned int origin); 291 294 292 295 /* Assembly trampoline to avoid clobbering r0. */ 293 296 long _compat_sys_rt_sigreturn(void);
+29 -13
arch/tile/kernel/compat.c
··· 32 32 * adapt the usual convention.
33 33 */
34 34
35 - long compat_sys_truncate64(char __user *filename, u32 dummy, u32 low, u32 high)
35 + COMPAT_SYSCALL_DEFINE4(truncate64, char __user *, filename, u32, dummy,
36 + u32, low, u32, high)
36 37 {
37 38 return sys_truncate(filename, ((loff_t)high << 32) | low);
38 39 }
39 40
40 - long compat_sys_ftruncate64(unsigned int fd, u32 dummy, u32 low, u32 high)
41 + COMPAT_SYSCALL_DEFINE4(ftruncate64, unsigned int, fd, u32, dummy,
42 + u32, low, u32, high)
41 43 {
42 44 return sys_ftruncate(fd, ((loff_t)high << 32) | low);
43 45 }
44 46
45 - long compat_sys_pread64(unsigned int fd, char __user *ubuf, size_t count,
46 - u32 dummy, u32 low, u32 high)
47 + COMPAT_SYSCALL_DEFINE6(pread64, unsigned int, fd, char __user *, ubuf,
48 + size_t, count, u32, dummy, u32, low, u32, high)
47 49 {
48 50 return sys_pread64(fd, ubuf, count, ((loff_t)high << 32) | low);
49 51 }
50 52
51 - long compat_sys_pwrite64(unsigned int fd, char __user *ubuf, size_t count,
52 - u32 dummy, u32 low, u32 high)
53 + COMPAT_SYSCALL_DEFINE6(pwrite64, unsigned int, fd, char __user *, ubuf,
54 + size_t, count, u32, dummy, u32, low, u32, high)
53 55 {
54 56 return sys_pwrite64(fd, ubuf, count, ((loff_t)high << 32) | low);
55 57 }
56 58
57 - long compat_sys_lookup_dcookie(u32 low, u32 high, char __user *buf, size_t len)
59 + COMPAT_SYSCALL_DEFINE4(lookup_dcookie, u32, low, u32, high,
60 + char __user *, buf, size_t, len)
58 61 {
59 62 return sys_lookup_dcookie(((loff_t)high << 32) | low, buf, len);
60 63 }
61 64
62 - long compat_sys_sync_file_range2(int fd, unsigned int flags,
63 - u32 offset_lo, u32 offset_hi,
64 - u32 nbytes_lo, u32 nbytes_hi)
65 + COMPAT_SYSCALL_DEFINE6(sync_file_range2, int, fd, unsigned int, flags,
66 + u32, offset_lo, u32, offset_hi,
67 + u32, nbytes_lo, u32, nbytes_hi)
65 68 {
66 69 return sys_sync_file_range(fd, ((loff_t)offset_hi << 32) | offset_lo,
67 70 ((loff_t)nbytes_hi << 32) | nbytes_lo,
68 71 flags);
69 72 }
70 73
71 - long compat_sys_fallocate(int fd, int mode,
72 - u32 offset_lo, u32 offset_hi,
73 - u32 len_lo, u32 len_hi)
74 + COMPAT_SYSCALL_DEFINE6(fallocate, int, fd, int, mode,
75 + u32, offset_lo, u32, offset_hi,
76 + u32, len_lo, u32, len_hi)
74 77 {
75 78 return sys_fallocate(fd, mode, ((loff_t)offset_hi << 32) | offset_lo,
76 79 ((loff_t)len_hi << 32) | len_lo);
77 80 }
78 81
82 + /*
83 + * Avoid bug in generic sys_llseek() that specifies offset_high and
84 + * offset_low as "unsigned long", thus making it possible to pass
85 + * a sign-extended high 32 bits in offset_low.
86 + */
87 + COMPAT_SYSCALL_DEFINE5(llseek, unsigned int, fd, unsigned int, offset_high,
88 + unsigned int, offset_low, loff_t __user *, result,
89 + unsigned int, origin)
90 + {
91 + return sys_llseek(fd, offset_high, offset_low, result, origin);
92 + }
93 +
79 94 /* Provide the compat syscall number to call mapping. */
80 95 #undef __SYSCALL
81 96 #define __SYSCALL(nr, call) [nr] = (call),
··· 98 83 /* See comments in sys.c */
99 84 #define compat_sys_fadvise64_64 sys32_fadvise64_64
100 85 #define compat_sys_readahead sys32_readahead
86 + #define sys_llseek compat_sys_llseek
101 87
102 88 /* Call the assembly trampolines where necessary. */
103 89 #define compat_sys_rt_sigreturn _compat_sys_rt_sigreturn
+1 -1
arch/um/drivers/chan.h
··· 37 37 extern int console_open_chan(struct line *line, struct console *co); 38 38 extern void deactivate_chan(struct chan *chan, int irq); 39 39 extern void reactivate_chan(struct chan *chan, int irq); 40 - extern void chan_enable_winch(struct chan *chan, struct tty_struct *tty); 40 + extern void chan_enable_winch(struct chan *chan, struct tty_port *port); 41 41 extern int enable_chan(struct line *line); 42 42 extern void close_chan(struct line *line); 43 43 extern int chan_window_size(struct line *line,
+2 -2
arch/um/drivers/chan_kern.c
··· 122 122 return err; 123 123 } 124 124 125 - void chan_enable_winch(struct chan *chan, struct tty_struct *tty) 125 + void chan_enable_winch(struct chan *chan, struct tty_port *port) 126 126 { 127 127 if (chan && chan->primary && chan->ops->winch) 128 - register_winch(chan->fd, tty); 128 + register_winch(chan->fd, port); 129 129 } 130 130 131 131 static void line_timer_cb(struct work_struct *work)
+6 -6
arch/um/drivers/chan_user.c
··· 216 216 } 217 217 } 218 218 219 - static int winch_tramp(int fd, struct tty_struct *tty, int *fd_out, 219 + static int winch_tramp(int fd, struct tty_port *port, int *fd_out, 220 220 unsigned long *stack_out) 221 221 { 222 222 struct winch_data data; ··· 271 271 return err; 272 272 } 273 273 274 - void register_winch(int fd, struct tty_struct *tty) 274 + void register_winch(int fd, struct tty_port *port) 275 275 { 276 276 unsigned long stack; 277 277 int pid, thread, count, thread_fd = -1; ··· 281 281 return; 282 282 283 283 pid = tcgetpgrp(fd); 284 - if (is_skas_winch(pid, fd, tty)) { 285 - register_winch_irq(-1, fd, -1, tty, 0); 284 + if (is_skas_winch(pid, fd, port)) { 285 + register_winch_irq(-1, fd, -1, port, 0); 286 286 return; 287 287 } 288 288 289 289 if (pid == -1) { 290 - thread = winch_tramp(fd, tty, &thread_fd, &stack); 290 + thread = winch_tramp(fd, port, &thread_fd, &stack); 291 291 if (thread < 0) 292 292 return; 293 293 294 - register_winch_irq(thread_fd, fd, thread, tty, stack); 294 + register_winch_irq(thread_fd, fd, thread, port, stack); 295 295 296 296 count = write(thread_fd, &c, sizeof(c)); 297 297 if (count != sizeof(c))
+3 -3
arch/um/drivers/chan_user.h
··· 38 38 unsigned short *cols_out); 39 39 extern void generic_free(void *data); 40 40 41 - struct tty_struct; 42 - extern void register_winch(int fd, struct tty_struct *tty); 41 + struct tty_port; 42 + extern void register_winch(int fd, struct tty_port *port); 43 43 extern void register_winch_irq(int fd, int tty_fd, int pid, 44 - struct tty_struct *tty, unsigned long stack); 44 + struct tty_port *port, unsigned long stack); 45 45 46 46 #define __channel_help(fn, prefix) \ 47 47 __uml_help(fn, prefix "[0-9]*=<channel description>\n" \
+24 -18
arch/um/drivers/line.c
··· 305 305 return ret;
306 306
307 307 if (!line->sigio) {
308 - chan_enable_winch(line->chan_out, tty);
308 + chan_enable_winch(line->chan_out, port);
309 309 line->sigio = 1;
310 310 }
311 311
··· 315 315 return 0;
316 316 }
317 317
318 + static void unregister_winch(struct tty_struct *tty);
319 +
320 + static void line_destruct(struct tty_port *port)
321 + {
322 + struct tty_struct *tty = tty_port_tty_get(port);
323 + struct line *line = tty->driver_data;
324 +
325 + if (line->sigio) {
326 + unregister_winch(tty);
327 + line->sigio = 0;
328 + }
329 + }
330 +
318 331 static const struct tty_port_operations line_port_ops = {
319 332 .activate = line_activate,
333 + .destruct = line_destruct,
320 334 };
321 335
322 336 int line_open(struct tty_struct *tty, struct file *filp)
··· 352 338 tty->driver_data = line;
353 339
354 340 return 0;
355 - }
356 -
357 - static void unregister_winch(struct tty_struct *tty);
358 -
359 - void line_cleanup(struct tty_struct *tty)
360 - {
361 - struct line *line = tty->driver_data;
362 -
363 - if (line->sigio) {
364 - unregister_winch(tty);
365 - line->sigio = 0;
366 - }
367 341 }
368 342
369 343 void line_close(struct tty_struct *tty, struct file * filp)
··· 603 601 int fd;
604 602 int tty_fd;
605 603 int pid;
606 - struct tty_struct *tty;
604 + struct tty_port *port;
607 605 unsigned long stack;
608 606 struct work_struct work;
609 607 };
··· 657 655 goto out;
658 656 }
659 657 }
660 - tty = winch->tty;
658 + tty = tty_port_tty_get(winch->port);
661 659 if (tty != NULL) {
662 660 line = tty->driver_data;
663 661 if (line != NULL) {
··· 665 663 &tty->winsize.ws_col);
666 664 kill_pgrp(tty->pgrp, SIGWINCH, 1);
667 665 }
666 + tty_kref_put(tty);
668 667 }
669 668 out:
670 669 if (winch->fd != -1)
··· 673 670 return IRQ_HANDLED;
674 671 }
675 672
676 - void register_winch_irq(int fd, int tty_fd, int pid, struct tty_struct *tty,
673 + void register_winch_irq(int fd, int tty_fd, int pid, struct tty_port *port,
677 674 unsigned long stack)
678 675 {
679 676 struct winch *winch;
··· 688 685 .fd = fd,
689 686 .tty_fd = tty_fd,
690 687 .pid = pid,
691 - .tty = tty,
688 + .port = port,
692 689 .stack = stack });
693 690
694 691 if (um_request_irq(WINCH_IRQ, fd, IRQ_READ, winch_interrupt,
··· 717 714 {
718 715 struct list_head *ele, *next;
719 716 struct winch *winch;
717 + struct tty_struct *wtty;
720 718
721 719 spin_lock(&winch_handler_lock);
722 720
723 721 list_for_each_safe(ele, next, &winch_handlers) {
724 722 winch = list_entry(ele, struct winch, list);
725 - if (winch->tty == tty) {
723 + wtty = tty_port_tty_get(winch->port);
724 + if (wtty == tty) {
726 725 free_winch(winch);
727 726 break;
728 727 }
728 + tty_kref_put(wtty);
729 729 }
730 730 spin_unlock(&winch_handler_lock);
731 731 }
+2
arch/um/drivers/net_kern.c
··· 218 218 spin_lock_irqsave(&lp->lock, flags); 219 219 220 220 len = (*lp->write)(lp->fd, skb, lp); 221 + skb_tx_timestamp(skb); 221 222 222 223 if (len == skb->len) { 223 224 dev->stats.tx_packets++; ··· 282 281 static const struct ethtool_ops uml_net_ethtool_ops = { 283 282 .get_drvinfo = uml_net_get_drvinfo, 284 283 .get_link = ethtool_op_get_link, 284 + .get_ts_info = ethtool_op_get_ts_info, 285 285 }; 286 286 287 287 static void uml_net_user_timer_expire(unsigned long _conn)
-1
arch/um/drivers/ssl.c
··· 105 105 .throttle = line_throttle, 106 106 .unthrottle = line_unthrottle, 107 107 .install = ssl_install, 108 - .cleanup = line_cleanup, 109 108 .hangup = line_hangup, 110 109 }; 111 110
-1
arch/um/drivers/stdio_console.c
··· 110 110 .set_termios = line_set_termios, 111 111 .throttle = line_throttle, 112 112 .unthrottle = line_unthrottle, 113 - .cleanup = line_cleanup, 114 113 .hangup = line_hangup, 115 114 }; 116 115
+1 -1
arch/um/os-Linux/signal.c
··· 15 15 #include <sysdep/mcontext.h> 16 16 #include "internal.h" 17 17 18 - void (*sig_info[NSIG])(int, siginfo_t *, struct uml_pt_regs *) = { 18 + void (*sig_info[NSIG])(int, struct siginfo *, struct uml_pt_regs *) = { 19 19 [SIGTRAP] = relay_signal, 20 20 [SIGFPE] = relay_signal, 21 21 [SIGILL] = relay_signal,
+2
arch/um/os-Linux/start_up.c
··· 15 15 #include <sys/mman.h> 16 16 #include <sys/stat.h> 17 17 #include <sys/wait.h> 18 + #include <sys/time.h> 19 + #include <sys/resource.h> 18 20 #include <asm/unistd.h> 19 21 #include <init.h> 20 22 #include <os.h>
+18 -2
arch/x86/include/asm/bootparam_utils.h
··· 14 14 * analysis of kexec-tools; if other broken bootloaders initialize a 15 15 * different set of fields we will need to figure out how to disambiguate. 16 16 * 17 + * Note: efi_info is commonly left uninitialized, but that field has a 18 + * private magic, so it is better to leave it unchanged. 17 19 */ 18 20 static void sanitize_boot_params(struct boot_params *boot_params) 19 21 { 22 + /* 23 + * IMPORTANT NOTE TO BOOTLOADER AUTHORS: do not simply clear 24 + * this field. The purpose of this field is to guarantee 25 + * compliance with the x86 boot spec located in 26 + * Documentation/x86/boot.txt . That spec says that the 27 + * *whole* structure should be cleared, after which only the 28 + * portion defined by struct setup_header (boot_params->hdr) 29 + * should be copied in. 30 + * 31 + * If you're having an issue because the sentinel is set, you 32 + * need to change the whole structure to be cleared, not this 33 + * (or any other) individual field, or you will soon have 34 + * problems again. 35 + */ 20 36 if (boot_params->sentinel) { 21 - /*fields in boot_params are not valid, clear them */ 37 + /* fields in boot_params are left uninitialized, clear them */ 22 38 memset(&boot_params->olpc_ofw_header, 0, 23 - (char *)&boot_params->alt_mem_k - 39 + (char *)&boot_params->efi_info - 24 40 (char *)&boot_params->olpc_ofw_header); 25 41 memset(&boot_params->kbd_status, 0, 26 42 (char *)&boot_params->hdr -
+8 -2
arch/x86/kernel/setup.c
··· 171 171 172 172 #ifdef CONFIG_X86_32 173 173 /* cpu data as detected by the assembly code in head.S */ 174 - struct cpuinfo_x86 new_cpu_data __cpuinitdata = {0, 0, 0, 0, -1, 1, 0, 0, -1}; 174 + struct cpuinfo_x86 new_cpu_data __cpuinitdata = { 175 + .wp_works_ok = -1, 176 + .fdiv_bug = -1, 177 + }; 175 178 /* common cpu data for all cpus */ 176 - struct cpuinfo_x86 boot_cpu_data __read_mostly = {0, 0, 0, 0, -1, 1, 0, 0, -1}; 179 + struct cpuinfo_x86 boot_cpu_data __read_mostly = { 180 + .wp_works_ok = -1, 181 + .fdiv_bug = -1, 182 + }; 177 183 EXPORT_SYMBOL(boot_cpu_data); 178 184 179 185 unsigned int def_to_bigsmp;
+1 -2
arch/x86/kernel/smpboot.c
··· 1365 1365 unsigned int eax, ebx, ecx, edx; 1366 1366 unsigned int highest_cstate = 0; 1367 1367 unsigned int highest_subcstate = 0; 1368 - int i; 1369 1368 void *mwait_ptr; 1370 - struct cpuinfo_x86 *c = __this_cpu_ptr(&cpu_info); 1369 + int i; 1371 1370 1372 1371 if (!this_cpu_has(X86_FEATURE_MWAIT)) 1373 1372 return;
+2 -3
arch/x86/mm/init.c
··· 410 410 /* the ISA range is always mapped regardless of memory holes */ 411 411 init_memory_mapping(0, ISA_END_ADDRESS); 412 412 413 - /* xen has big range in reserved near end of ram, skip it at first */ 414 - addr = memblock_find_in_range(ISA_END_ADDRESS, end, PMD_SIZE, 415 - PAGE_SIZE); 413 + /* xen has big range in reserved near end of ram, skip it at first.*/ 414 + addr = memblock_find_in_range(ISA_END_ADDRESS, end, PMD_SIZE, PMD_SIZE); 416 415 real_end = addr + PMD_SIZE; 417 416 418 417 /* step_size need to be small so pgt_buf from BRK could cover it */
+7
arch/x86/mm/pat.c
··· 563 563 if (base > __pa(high_memory-1)) 564 564 return 0; 565 565 566 + /* 567 + * some areas in the middle of the kernel identity range 568 + * are not mapped, like the PCI space. 569 + */ 570 + if (!page_is_ram(base >> PAGE_SHIFT)) 571 + return 0; 572 + 566 573 id_sz = (__pa(high_memory-1) <= base + size) ? 567 574 __pa(high_memory) - base : 568 575 size;
+9 -46
drivers/acpi/glue.c
··· 36 36 {
37 37 if (acpi_disabled)
38 38 return -ENODEV;
39 - if (type && type->bus && type->find_device) {
39 + if (type && type->match && type->find_device) {
40 40 down_write(&bus_type_sem);
41 41 list_add_tail(&type->list, &bus_type_list);
42 42 up_write(&bus_type_sem);
43 - printk(KERN_INFO PREFIX "bus type %s registered\n",
44 - type->bus->name);
43 + printk(KERN_INFO PREFIX "bus type %s registered\n", type->name);
45 44 return 0;
46 45 }
47 46 return -ENODEV;
··· 55 56 down_write(&bus_type_sem);
56 57 list_del_init(&type->list);
57 58 up_write(&bus_type_sem);
58 - printk(KERN_INFO PREFIX "ACPI bus type %s unregistered\n",
59 - type->bus->name);
59 + printk(KERN_INFO PREFIX "bus type %s unregistered\n",
60 + type->name);
60 61 return 0;
61 62 }
62 63 return -ENODEV;
63 64 }
64 65 EXPORT_SYMBOL_GPL(unregister_acpi_bus_type);
65 66
66 - static struct acpi_bus_type *acpi_get_bus_type(struct bus_type *type)
67 + static struct acpi_bus_type *acpi_get_bus_type(struct device *dev)
67 68 {
68 69 struct acpi_bus_type *tmp, *ret = NULL;
69 70
70 - if (!type)
71 - return NULL;
72 -
73 71 down_read(&bus_type_sem);
74 72 list_for_each_entry(tmp, &bus_type_list, list) {
75 - if (tmp->bus == type) {
73 + if (tmp->match(dev)) {
76 74 ret = tmp;
77 - break;
78 - }
79 - }
80 - up_read(&bus_type_sem);
81 - return ret;
82 - }
83 -
84 - static int acpi_find_bridge_device(struct device *dev, acpi_handle * handle)
85 - {
86 - struct acpi_bus_type *tmp;
87 - int ret = -ENODEV;
88 -
89 - down_read(&bus_type_sem);
90 - list_for_each_entry(tmp, &bus_type_list, list) {
91 - if (tmp->find_bridge && !tmp->find_bridge(dev, handle)) {
92 - ret = 0;
93 75 break;
94 76 }
95 77 }
··· 241 261
242 262 static int acpi_platform_notify(struct device *dev)
243 263 {
244 - struct acpi_bus_type *type;
264 + struct acpi_bus_type *type = acpi_get_bus_type(dev);
245 265 acpi_handle handle;
246 266 int ret;
247 267
248 268 ret = acpi_bind_one(dev, NULL);
249 - if (ret && (!dev->bus || !dev->parent)) {
250 - /* bridge devices genernally haven't bus or parent */
251 - ret = acpi_find_bridge_device(dev, &handle);
252 - if (!ret) {
253 - ret = acpi_bind_one(dev, handle);
254 - if (ret)
255 - goto out;
256 - }
257 - }
258 -
259 - type = acpi_get_bus_type(dev->bus);
260 - if (ret) {
261 - if (!type || !type->find_device) {
262 - DBG("No ACPI bus support for %s\n", dev_name(dev));
263 - ret = -EINVAL;
264 - goto out;
265 - }
266 -
269 + if (ret && type) {
267 270 ret = type->find_device(dev, &handle);
268 271 if (ret) {
269 272 DBG("Unable to get handle for %s\n", dev_name(dev));
··· 279 316 {
280 317 struct acpi_bus_type *type;
281 318
282 - type = acpi_get_bus_type(dev->bus);
319 + type = acpi_get_bus_type(dev);
283 320 if (type && type->cleanup)
284 321 type->cleanup(dev);
285 322
+1 -2
drivers/acpi/processor_core.c
··· 158 158 } 159 159 160 160 exit: 161 - if (buffer.pointer) 162 - kfree(buffer.pointer); 161 + kfree(buffer.pointer); 163 162 return apic_id; 164 163 } 165 164
+1 -1
drivers/acpi/processor_driver.c
··· 559 559 return 0; 560 560 #endif 561 561 562 - BUG_ON((pr->id >= nr_cpu_ids) || (pr->id < 0)); 562 + BUG_ON(pr->id >= nr_cpu_ids); 563 563 564 564 /* 565 565 * Buggy BIOS check
+11 -5
drivers/acpi/sleep.c
··· 599 599 status = acpi_get_sleep_type_data(i, &type_a, &type_b); 600 600 if (ACPI_SUCCESS(status)) { 601 601 sleep_states[i] = 1; 602 - pr_cont(" S%d", i); 603 602 } 604 603 } 605 604 ··· 741 742 hibernation_set_ops(old_suspend_ordering ? 742 743 &acpi_hibernation_ops_old : &acpi_hibernation_ops); 743 744 sleep_states[ACPI_STATE_S4] = 1; 744 - pr_cont(KERN_CONT " S4"); 745 745 if (nosigcheck) 746 746 return; 747 747 ··· 786 788 { 787 789 acpi_status status; 788 790 u8 type_a, type_b; 791 + char supported[ACPI_S_STATE_COUNT * 3 + 1]; 792 + char *pos = supported; 793 + int i; 789 794 790 795 if (acpi_disabled) 791 796 return 0; ··· 796 795 acpi_sleep_dmi_check(); 797 796 798 797 sleep_states[ACPI_STATE_S0] = 1; 799 - pr_info(PREFIX "(supports S0"); 800 798 801 799 acpi_sleep_suspend_setup(); 802 800 acpi_sleep_hibernate_setup(); ··· 803 803 status = acpi_get_sleep_type_data(ACPI_STATE_S5, &type_a, &type_b); 804 804 if (ACPI_SUCCESS(status)) { 805 805 sleep_states[ACPI_STATE_S5] = 1; 806 - pr_cont(" S5"); 807 806 pm_power_off_prepare = acpi_power_off_prepare; 808 807 pm_power_off = acpi_power_off; 809 808 } 810 - pr_cont(")\n"); 809 + 810 + supported[0] = 0; 811 + for (i = 0; i < ACPI_S_STATE_COUNT; i++) { 812 + if (sleep_states[i]) 813 + pos += sprintf(pos, " S%d", i); 814 + } 815 + pr_info(PREFIX "(supports%s)\n", supported); 816 + 811 817 /* 812 818 * Register the tts_notifier to reboot notifier list so that the _TTS 813 819 * object can also be evaluated when the system enters S5.
+1 -6
drivers/ata/libata-acpi.c
··· 1144 1144 return -ENODEV; 1145 1145 } 1146 1146 1147 - static int ata_acpi_find_dummy(struct device *dev, acpi_handle *handle) 1148 - { 1149 - return -ENODEV; 1150 - } 1151 - 1152 1147 static struct acpi_bus_type ata_acpi_bus = { 1153 - .find_bridge = ata_acpi_find_dummy, 1148 + .name = "ATA", 1154 1149 .find_device = ata_acpi_find_device, 1155 1150 }; 1156 1151
-2
drivers/base/power/main.c
··· 99 99 dev_warn(dev, "parent %s should not be sleeping\n", 100 100 dev_name(dev->parent)); 101 101 list_add_tail(&dev->power.entry, &dpm_list); 102 - dev_pm_qos_constraints_init(dev); 103 102 mutex_unlock(&dpm_list_mtx); 104 103 } 105 104 ··· 112 113 dev->bus ? dev->bus->name : "No Bus", dev_name(dev)); 113 114 complete_all(&dev->power.completion); 114 115 mutex_lock(&dpm_list_mtx); 115 - dev_pm_qos_constraints_destroy(dev); 116 116 list_del_init(&dev->power.entry); 117 117 mutex_unlock(&dpm_list_mtx); 118 118 device_wakeup_disable(dev);
+2 -6
drivers/base/power/power.h
··· 4 4 { 5 5 if (!dev->power.early_init) { 6 6 spin_lock_init(&dev->power.lock); 7 - dev->power.power_state = PMSG_INVALID; 7 + dev->power.qos = NULL; 8 8 dev->power.early_init = true; 9 9 } 10 10 } ··· 56 56 57 57 static inline void device_pm_sleep_init(struct device *dev) {} 58 58 59 - static inline void device_pm_add(struct device *dev) 60 - { 61 - dev_pm_qos_constraints_init(dev); 62 - } 59 + static inline void device_pm_add(struct device *dev) {} 63 60 64 61 static inline void device_pm_remove(struct device *dev) 65 62 { 66 - dev_pm_qos_constraints_destroy(dev); 67 63 pm_runtime_remove(dev); 68 64 } 69 65
+123 -94
drivers/base/power/qos.c
··· 41 41 #include <linux/mutex.h> 42 42 #include <linux/export.h> 43 43 #include <linux/pm_runtime.h> 44 + #include <linux/err.h> 44 45 45 46 #include "power.h" 46 47 ··· 62 61 struct pm_qos_flags *pqf; 63 62 s32 val; 64 63 65 - if (!qos) 64 + if (IS_ERR_OR_NULL(qos)) 66 65 return PM_QOS_FLAGS_UNDEFINED; 67 66 68 67 pqf = &qos->flags; ··· 102 101 */ 103 102 s32 __dev_pm_qos_read_value(struct device *dev) 104 103 { 105 - return dev->power.qos ? pm_qos_read_value(&dev->power.qos->latency) : 0; 104 + return IS_ERR_OR_NULL(dev->power.qos) ? 105 + 0 : pm_qos_read_value(&dev->power.qos->latency); 106 106 } 107 107 108 108 /** ··· 200 198 return 0; 201 199 } 202 200 203 - /** 204 - * dev_pm_qos_constraints_init - Initalize device's PM QoS constraints pointer. 205 - * @dev: target device 206 - * 207 - * Called from the device PM subsystem during device insertion under 208 - * device_pm_lock(). 209 - */ 210 - void dev_pm_qos_constraints_init(struct device *dev) 211 - { 212 - mutex_lock(&dev_pm_qos_mtx); 213 - dev->power.qos = NULL; 214 - dev->power.power_state = PMSG_ON; 215 - mutex_unlock(&dev_pm_qos_mtx); 216 - } 201 + static void __dev_pm_qos_hide_latency_limit(struct device *dev); 202 + static void __dev_pm_qos_hide_flags(struct device *dev); 217 203 218 204 /** 219 205 * dev_pm_qos_constraints_destroy ··· 216 226 struct pm_qos_constraints *c; 217 227 struct pm_qos_flags *f; 218 228 229 + mutex_lock(&dev_pm_qos_mtx); 230 + 219 231 /* 220 232 * If the device's PM QoS resume latency limit or PM QoS flags have been 221 233 * exposed to user space, they have to be hidden at this point. 
222 234 */ 223 - dev_pm_qos_hide_latency_limit(dev); 224 - dev_pm_qos_hide_flags(dev); 235 + __dev_pm_qos_hide_latency_limit(dev); 236 + __dev_pm_qos_hide_flags(dev); 225 237 226 - mutex_lock(&dev_pm_qos_mtx); 227 - 228 - dev->power.power_state = PMSG_INVALID; 229 238 qos = dev->power.qos; 230 239 if (!qos) 231 240 goto out; ··· 246 257 } 247 258 248 259 spin_lock_irq(&dev->power.lock); 249 - dev->power.qos = NULL; 260 + dev->power.qos = ERR_PTR(-ENODEV); 250 261 spin_unlock_irq(&dev->power.lock); 251 262 252 263 kfree(c->notifiers); ··· 290 301 "%s() called for already added request\n", __func__)) 291 302 return -EINVAL; 292 303 293 - req->dev = dev; 294 - 295 304 mutex_lock(&dev_pm_qos_mtx); 296 305 297 - if (!dev->power.qos) { 298 - if (dev->power.power_state.event == PM_EVENT_INVALID) { 299 - /* The device has been removed from the system. */ 300 - req->dev = NULL; 301 - ret = -ENODEV; 302 - goto out; 303 - } else { 304 - /* 305 - * Allocate the constraints data on the first call to 306 - * add_request, i.e. only if the data is not already 307 - * allocated and if the device has not been removed. 
308 - */ 309 - ret = dev_pm_qos_constraints_allocate(dev); 310 - } 311 - } 306 + if (IS_ERR(dev->power.qos)) 307 + ret = -ENODEV; 308 + else if (!dev->power.qos) 309 + ret = dev_pm_qos_constraints_allocate(dev); 312 310 313 311 if (!ret) { 312 + req->dev = dev; 314 313 req->type = type; 315 314 ret = apply_constraint(req, PM_QOS_ADD_REQ, value); 316 315 } 317 316 318 - out: 319 317 mutex_unlock(&dev_pm_qos_mtx); 320 318 321 319 return ret; ··· 320 344 s32 curr_value; 321 345 int ret = 0; 322 346 323 - if (!req->dev->power.qos) 347 + if (!req) /*guard against callers passing in null */ 348 + return -EINVAL; 349 + 350 + if (WARN(!dev_pm_qos_request_active(req), 351 + "%s() called for unknown object\n", __func__)) 352 + return -EINVAL; 353 + 354 + if (IS_ERR_OR_NULL(req->dev->power.qos)) 324 355 return -ENODEV; 325 356 326 357 switch(req->type) { ··· 369 386 { 370 387 int ret; 371 388 389 + mutex_lock(&dev_pm_qos_mtx); 390 + ret = __dev_pm_qos_update_request(req, new_value); 391 + mutex_unlock(&dev_pm_qos_mtx); 392 + return ret; 393 + } 394 + EXPORT_SYMBOL_GPL(dev_pm_qos_update_request); 395 + 396 + static int __dev_pm_qos_remove_request(struct dev_pm_qos_request *req) 397 + { 398 + int ret; 399 + 372 400 if (!req) /*guard against callers passing in null */ 373 401 return -EINVAL; 374 402 ··· 387 393 "%s() called for unknown object\n", __func__)) 388 394 return -EINVAL; 389 395 390 - mutex_lock(&dev_pm_qos_mtx); 391 - ret = __dev_pm_qos_update_request(req, new_value); 392 - mutex_unlock(&dev_pm_qos_mtx); 396 + if (IS_ERR_OR_NULL(req->dev->power.qos)) 397 + return -ENODEV; 393 398 399 + ret = apply_constraint(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE); 400 + memset(req, 0, sizeof(*req)); 394 401 return ret; 395 402 } 396 - EXPORT_SYMBOL_GPL(dev_pm_qos_update_request); 397 403 398 404 /** 399 405 * dev_pm_qos_remove_request - modifies an existing qos request ··· 412 418 */ 413 419 int dev_pm_qos_remove_request(struct dev_pm_qos_request *req) 414 420 { 415 - int ret = 
0; 416 - 417 - if (!req) /*guard against callers passing in null */ 418 - return -EINVAL; 419 - 420 - if (WARN(!dev_pm_qos_request_active(req), 421 - "%s() called for unknown object\n", __func__)) 422 - return -EINVAL; 421 + int ret; 423 422 424 423 mutex_lock(&dev_pm_qos_mtx); 425 - 426 - if (req->dev->power.qos) { 427 - ret = apply_constraint(req, PM_QOS_REMOVE_REQ, 428 - PM_QOS_DEFAULT_VALUE); 429 - memset(req, 0, sizeof(*req)); 430 - } else { 431 - /* Return if the device has been removed */ 432 - ret = -ENODEV; 433 - } 434 - 424 + ret = __dev_pm_qos_remove_request(req); 435 425 mutex_unlock(&dev_pm_qos_mtx); 436 426 return ret; 437 427 } ··· 440 462 441 463 mutex_lock(&dev_pm_qos_mtx); 442 464 443 - if (!dev->power.qos) 444 - ret = dev->power.power_state.event != PM_EVENT_INVALID ? 445 - dev_pm_qos_constraints_allocate(dev) : -ENODEV; 465 + if (IS_ERR(dev->power.qos)) 466 + ret = -ENODEV; 467 + else if (!dev->power.qos) 468 + ret = dev_pm_qos_constraints_allocate(dev); 446 469 447 470 if (!ret) 448 471 ret = blocking_notifier_chain_register( ··· 472 493 mutex_lock(&dev_pm_qos_mtx); 473 494 474 495 /* Silently return if the constraints object is not present. 
*/ 475 - if (dev->power.qos) 496 + if (!IS_ERR_OR_NULL(dev->power.qos)) 476 497 retval = blocking_notifier_chain_unregister( 477 498 dev->power.qos->latency.notifiers, 478 499 notifier); ··· 542 563 static void __dev_pm_qos_drop_user_request(struct device *dev, 543 564 enum dev_pm_qos_req_type type) 544 565 { 566 + struct dev_pm_qos_request *req = NULL; 567 + 545 568 switch(type) { 546 569 case DEV_PM_QOS_LATENCY: 547 - dev_pm_qos_remove_request(dev->power.qos->latency_req); 570 + req = dev->power.qos->latency_req; 548 571 dev->power.qos->latency_req = NULL; 549 572 break; 550 573 case DEV_PM_QOS_FLAGS: 551 - dev_pm_qos_remove_request(dev->power.qos->flags_req); 574 + req = dev->power.qos->flags_req; 552 575 dev->power.qos->flags_req = NULL; 553 576 break; 554 577 } 578 + __dev_pm_qos_remove_request(req); 579 + kfree(req); 555 580 } 556 581 557 582 /** ··· 571 588 if (!device_is_registered(dev) || value < 0) 572 589 return -EINVAL; 573 590 574 - if (dev->power.qos && dev->power.qos->latency_req) 575 - return -EEXIST; 576 - 577 591 req = kzalloc(sizeof(*req), GFP_KERNEL); 578 592 if (!req) 579 593 return -ENOMEM; 580 594 581 595 ret = dev_pm_qos_add_request(dev, req, DEV_PM_QOS_LATENCY, value); 582 - if (ret < 0) 596 + if (ret < 0) { 597 + kfree(req); 583 598 return ret; 599 + } 600 + 601 + mutex_lock(&dev_pm_qos_mtx); 602 + 603 + if (IS_ERR_OR_NULL(dev->power.qos)) 604 + ret = -ENODEV; 605 + else if (dev->power.qos->latency_req) 606 + ret = -EEXIST; 607 + 608 + if (ret < 0) { 609 + __dev_pm_qos_remove_request(req); 610 + kfree(req); 611 + goto out; 612 + } 584 613 585 614 dev->power.qos->latency_req = req; 586 615 ret = pm_qos_sysfs_add_latency(dev); 587 616 if (ret) 588 617 __dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_LATENCY); 589 618 619 + out: 620 + mutex_unlock(&dev_pm_qos_mtx); 590 621 return ret; 591 622 } 592 623 EXPORT_SYMBOL_GPL(dev_pm_qos_expose_latency_limit); 624 + 625 + static void __dev_pm_qos_hide_latency_limit(struct device *dev) 626 + { 627 + if 
(!IS_ERR_OR_NULL(dev->power.qos) && dev->power.qos->latency_req) { 628 + pm_qos_sysfs_remove_latency(dev); 629 + __dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_LATENCY); 630 + } 631 + } 593 632 594 633 /** 595 634 * dev_pm_qos_hide_latency_limit - Hide PM QoS latency limit from user space. ··· 619 614 */ 620 615 void dev_pm_qos_hide_latency_limit(struct device *dev) 621 616 { 622 - if (dev->power.qos && dev->power.qos->latency_req) { 623 - pm_qos_sysfs_remove_latency(dev); 624 - __dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_LATENCY); 625 - } 617 + mutex_lock(&dev_pm_qos_mtx); 618 + __dev_pm_qos_hide_latency_limit(dev); 619 + mutex_unlock(&dev_pm_qos_mtx); 626 620 } 627 621 EXPORT_SYMBOL_GPL(dev_pm_qos_hide_latency_limit); 628 622 ··· 638 634 if (!device_is_registered(dev)) 639 635 return -EINVAL; 640 636 641 - if (dev->power.qos && dev->power.qos->flags_req) 642 - return -EEXIST; 643 - 644 637 req = kzalloc(sizeof(*req), GFP_KERNEL); 645 638 if (!req) 646 639 return -ENOMEM; 647 640 648 - pm_runtime_get_sync(dev); 649 641 ret = dev_pm_qos_add_request(dev, req, DEV_PM_QOS_FLAGS, val); 650 - if (ret < 0) 651 - goto fail; 642 + if (ret < 0) { 643 + kfree(req); 644 + return ret; 645 + } 646 + 647 + pm_runtime_get_sync(dev); 648 + mutex_lock(&dev_pm_qos_mtx); 649 + 650 + if (IS_ERR_OR_NULL(dev->power.qos)) 651 + ret = -ENODEV; 652 + else if (dev->power.qos->flags_req) 653 + ret = -EEXIST; 654 + 655 + if (ret < 0) { 656 + __dev_pm_qos_remove_request(req); 657 + kfree(req); 658 + goto out; 659 + } 652 660 653 661 dev->power.qos->flags_req = req; 654 662 ret = pm_qos_sysfs_add_flags(dev); 655 663 if (ret) 656 664 __dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_FLAGS); 657 665 658 - fail: 666 + out: 667 + mutex_unlock(&dev_pm_qos_mtx); 659 668 pm_runtime_put(dev); 660 669 return ret; 661 670 } 662 671 EXPORT_SYMBOL_GPL(dev_pm_qos_expose_flags); 672 + 673 + static void __dev_pm_qos_hide_flags(struct device *dev) 674 + { 675 + if (!IS_ERR_OR_NULL(dev->power.qos) && 
dev->power.qos->flags_req) { 676 + pm_qos_sysfs_remove_flags(dev); 677 + __dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_FLAGS); 678 + } 679 + } 663 680 664 681 /** 665 682 * dev_pm_qos_hide_flags - Hide PM QoS flags of a device from user space. ··· 688 663 */ 689 664 void dev_pm_qos_hide_flags(struct device *dev) 690 665 { 691 - if (dev->power.qos && dev->power.qos->flags_req) { 692 - pm_qos_sysfs_remove_flags(dev); 693 - pm_runtime_get_sync(dev); 694 - __dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_FLAGS); 695 - pm_runtime_put(dev); 696 - } 666 + pm_runtime_get_sync(dev); 667 + mutex_lock(&dev_pm_qos_mtx); 668 + __dev_pm_qos_hide_flags(dev); 669 + mutex_unlock(&dev_pm_qos_mtx); 670 + pm_runtime_put(dev); 697 671 } 698 672 EXPORT_SYMBOL_GPL(dev_pm_qos_hide_flags); 699 673 ··· 707 683 s32 value; 708 684 int ret; 709 685 710 - if (!dev->power.qos || !dev->power.qos->flags_req) 711 - return -EINVAL; 712 - 713 686 pm_runtime_get_sync(dev); 714 687 mutex_lock(&dev_pm_qos_mtx); 688 + 689 + if (IS_ERR_OR_NULL(dev->power.qos) || !dev->power.qos->flags_req) { 690 + ret = -EINVAL; 691 + goto out; 692 + } 715 693 716 694 value = dev_pm_qos_requested_flags(dev); 717 695 if (set) ··· 723 697 724 698 ret = __dev_pm_qos_update_request(dev->power.qos->flags_req, value); 725 699 700 + out: 726 701 mutex_unlock(&dev_pm_qos_mtx); 727 702 pm_runtime_put(dev); 728 - 729 703 return ret; 730 704 } 705 + #else /* !CONFIG_PM_RUNTIME */ 706 + static void __dev_pm_qos_hide_latency_limit(struct device *dev) {} 707 + static void __dev_pm_qos_hide_flags(struct device *dev) {} 731 708 #endif /* CONFIG_PM_RUNTIME */
+1
drivers/base/power/sysfs.c
··· 708 708 709 709 void dpm_sysfs_remove(struct device *dev) 710 710 { 711 + dev_pm_qos_constraints_destroy(dev); 711 712 rpm_sysfs_remove(dev); 712 713 sysfs_unmerge_group(&dev->kobj, &pm_wakeup_attr_group); 713 714 sysfs_remove_group(&dev->kobj, &pm_attr_group);
+1
drivers/base/regmap/regmap-irq.c
··· 184 184 if (ret < 0) { 185 185 dev_err(map->dev, "IRQ thread failed to resume: %d\n", 186 186 ret); 187 + pm_runtime_put(map->dev); 187 188 return IRQ_NONE; 188 189 } 189 190 }
+8 -4
drivers/char/random.c
··· 852 852 int reserved) 853 853 { 854 854 unsigned long flags; 855 + int wakeup_write = 0; 855 856 856 857 /* Hold lock while accounting */ 857 858 spin_lock_irqsave(&r->lock, flags); ··· 874 873 else 875 874 r->entropy_count = reserved; 876 875 877 - if (r->entropy_count < random_write_wakeup_thresh) { 878 - wake_up_interruptible(&random_write_wait); 879 - kill_fasync(&fasync, SIGIO, POLL_OUT); 880 - } 876 + if (r->entropy_count < random_write_wakeup_thresh) 877 + wakeup_write = 1; 881 878 } 882 879 883 880 DEBUG_ENT("debiting %zu entropy credits from %s%s\n", 884 881 nbytes * 8, r->name, r->limit ? "" : " (unlimited)"); 885 882 886 883 spin_unlock_irqrestore(&r->lock, flags); 884 + 885 + if (wakeup_write) { 886 + wake_up_interruptible(&random_write_wait); 887 + kill_fasync(&fasync, SIGIO, POLL_OUT); 888 + } 887 889 888 890 return nbytes; 889 891 }
+1 -1
drivers/cpufreq/cpufreq_governor.h
··· 64 64 * dbs: used as a shortform for demand based switching It helps to keep variable 65 65 * names smaller, simpler 66 66 * cdbs: common dbs 67 - * on_*: On-demand governor 67 + * od_*: On-demand governor 68 68 * cs_*: Conservative governor 69 69 */ 70 70
+1 -7
drivers/cpufreq/highbank-cpufreq.c
··· 28 28 29 29 static int hb_voltage_change(unsigned int freq) 30 30 { 31 - int i; 32 - u32 msg[HB_CPUFREQ_IPC_LEN]; 33 - 34 - msg[0] = HB_CPUFREQ_CHANGE_NOTE; 35 - msg[1] = freq / 1000000; 36 - for (i = 2; i < HB_CPUFREQ_IPC_LEN; i++) 37 - msg[i] = 0; 31 + u32 msg[HB_CPUFREQ_IPC_LEN] = {HB_CPUFREQ_CHANGE_NOTE, freq / 1000000}; 38 32 39 33 return pl320_ipc_transmit(msg); 40 34 }
+14 -28
drivers/cpufreq/intel_pstate.c
··· 662 662 663 663 cpu = all_cpu_data[policy->cpu]; 664 664 665 + if (!policy->cpuinfo.max_freq) 666 + return -ENODEV; 667 + 665 668 intel_pstate_get_min_max(cpu, &min, &max); 666 669 667 670 limits.min_perf_pct = (policy->min * 100) / policy->cpuinfo.max_freq; ··· 750 747 .owner = THIS_MODULE, 751 748 }; 752 749 753 - static void intel_pstate_exit(void) 754 - { 755 - int cpu; 756 - 757 - sysfs_remove_group(intel_pstate_kobject, 758 - &intel_pstate_attr_group); 759 - debugfs_remove_recursive(debugfs_parent); 760 - 761 - cpufreq_unregister_driver(&intel_pstate_driver); 762 - 763 - if (!all_cpu_data) 764 - return; 765 - 766 - get_online_cpus(); 767 - for_each_online_cpu(cpu) { 768 - if (all_cpu_data[cpu]) { 769 - del_timer_sync(&all_cpu_data[cpu]->timer); 770 - kfree(all_cpu_data[cpu]); 771 - } 772 - } 773 - 774 - put_online_cpus(); 775 - vfree(all_cpu_data); 776 - } 777 - module_exit(intel_pstate_exit); 778 - 779 750 static int __initdata no_load; 780 751 781 752 static int __init intel_pstate_init(void) 782 753 { 783 - int rc = 0; 754 + int cpu, rc = 0; 784 755 const struct x86_cpu_id *id; 785 756 786 757 if (no_load) ··· 779 802 intel_pstate_sysfs_expose_params(); 780 803 return rc; 781 804 out: 782 - intel_pstate_exit(); 805 + get_online_cpus(); 806 + for_each_online_cpu(cpu) { 807 + if (all_cpu_data[cpu]) { 808 + del_timer_sync(&all_cpu_data[cpu]->timer); 809 + kfree(all_cpu_data[cpu]); 810 + } 811 + } 812 + 813 + put_online_cpus(); 814 + vfree(all_cpu_data); 783 815 return -ENODEV; 784 816 } 785 817 device_initcall(intel_pstate_init);
+2 -3
drivers/firmware/dmi_scan.c
··· 442 442 static int __init smbios_present(const char __iomem *p) 443 443 { 444 444 u8 buf[32]; 445 - int offset = 0; 446 445 447 446 memcpy_fromio(buf, p, 32); 448 447 if ((buf[5] < 32) && dmi_checksum(buf, buf[5])) { ··· 460 461 dmi_ver = 0x0206; 461 462 break; 462 463 } 463 - offset = 16; 464 + return memcmp(p + 16, "_DMI_", 5) || dmi_present(p + 16); 464 465 } 465 - return dmi_present(buf + offset); 466 + return 1; 466 467 } 467 468 468 469 void __init dmi_scan_machine(void)
+97 -34
drivers/firmware/efivars.c
··· 426 426 return status; 427 427 } 428 428 429 + static efi_status_t 430 + check_var_size_locked(struct efivars *efivars, u32 attributes, 431 + unsigned long size) 432 + { 433 + u64 storage_size, remaining_size, max_size; 434 + efi_status_t status; 435 + const struct efivar_operations *fops = efivars->ops; 436 + 437 + if (!efivars->ops->query_variable_info) 438 + return EFI_UNSUPPORTED; 439 + 440 + status = fops->query_variable_info(attributes, &storage_size, 441 + &remaining_size, &max_size); 442 + 443 + if (status != EFI_SUCCESS) 444 + return status; 445 + 446 + if (!storage_size || size > remaining_size || size > max_size || 447 + (remaining_size - size) < (storage_size / 2)) 448 + return EFI_OUT_OF_RESOURCES; 449 + 450 + return status; 451 + } 452 + 453 + 454 + static efi_status_t 455 + check_var_size(struct efivars *efivars, u32 attributes, unsigned long size) 456 + { 457 + efi_status_t status; 458 + unsigned long flags; 459 + 460 + spin_lock_irqsave(&efivars->lock, flags); 461 + status = check_var_size_locked(efivars, attributes, size); 462 + spin_unlock_irqrestore(&efivars->lock, flags); 463 + 464 + return status; 465 + } 466 + 429 467 static ssize_t 430 468 efivar_guid_read(struct efivar_entry *entry, char *buf) 431 469 { ··· 585 547 } 586 548 587 549 spin_lock_irq(&efivars->lock); 588 - status = efivars->ops->set_variable(new_var->VariableName, 589 - &new_var->VendorGuid, 590 - new_var->Attributes, 591 - new_var->DataSize, 592 - new_var->Data); 550 + 551 + status = check_var_size_locked(efivars, new_var->Attributes, 552 + new_var->DataSize + utf16_strsize(new_var->VariableName, 1024)); 553 + 554 + if (status == EFI_SUCCESS || status == EFI_UNSUPPORTED) 555 + status = efivars->ops->set_variable(new_var->VariableName, 556 + &new_var->VendorGuid, 557 + new_var->Attributes, 558 + new_var->DataSize, 559 + new_var->Data); 593 560 594 561 spin_unlock_irq(&efivars->lock); 595 562 ··· 745 702 u32 attributes; 746 703 struct inode *inode = file->f_mapping->host; 
747 704 unsigned long datasize = count - sizeof(attributes); 748 - unsigned long newdatasize; 749 - u64 storage_size, remaining_size, max_size; 705 + unsigned long newdatasize, varsize; 750 706 ssize_t bytes = 0; 751 707 752 708 if (count < sizeof(attributes)) ··· 764 722 * amounts of memory. Pick a default size of 64K if 765 723 * QueryVariableInfo() isn't supported by the firmware. 766 724 */ 767 - spin_lock_irq(&efivars->lock); 768 725 769 - if (!efivars->ops->query_variable_info) 770 - status = EFI_UNSUPPORTED; 771 - else { 772 - const struct efivar_operations *fops = efivars->ops; 773 - status = fops->query_variable_info(attributes, &storage_size, 774 - &remaining_size, &max_size); 775 - } 776 - 777 - spin_unlock_irq(&efivars->lock); 726 + varsize = datasize + utf16_strsize(var->var.VariableName, 1024); 727 + status = check_var_size(efivars, attributes, varsize); 778 728 779 729 if (status != EFI_SUCCESS) { 780 730 if (status != EFI_UNSUPPORTED) 781 731 return efi_status_to_err(status); 782 732 783 - remaining_size = 65536; 733 + if (datasize > 65536) 734 + return -ENOSPC; 784 735 } 785 - 786 - if (datasize > remaining_size) 787 - return -ENOSPC; 788 736 789 737 data = kmalloc(datasize, GFP_KERNEL); 790 738 if (!data) ··· 796 764 * list (in the case of an authenticated delete). 
797 765 */ 798 766 spin_lock_irq(&efivars->lock); 767 + 768 + /* 769 + * Ensure that the available space hasn't shrunk below the safe level 770 + */ 771 + 772 + status = check_var_size_locked(efivars, attributes, varsize); 773 + 774 + if (status != EFI_SUCCESS && status != EFI_UNSUPPORTED) { 775 + spin_unlock_irq(&efivars->lock); 776 + kfree(data); 777 + 778 + return efi_status_to_err(status); 779 + } 799 780 800 781 status = efivars->ops->set_variable(var->var.VariableName, 801 782 &var->var.VendorGuid, ··· 974 929 if (len < GUID_LEN + 2) 975 930 return false; 976 931 977 - /* GUID should be right after the first '-' */ 978 - if (s - 1 != strchr(str, '-')) 932 + /* GUID must be preceded by a '-' */ 933 + if (*(s - 1) != '-') 979 934 return false; 980 935 981 936 /* ··· 1163 1118 1164 1119 static struct dentry *efivarfs_alloc_dentry(struct dentry *parent, char *name) 1165 1120 { 1121 + struct dentry *d; 1166 1122 struct qstr q; 1123 + int err; 1167 1124 1168 1125 q.name = name; 1169 1126 q.len = strlen(name); 1170 1127 1171 - if (efivarfs_d_hash(NULL, NULL, &q)) 1172 - return NULL; 1128 + err = efivarfs_d_hash(NULL, NULL, &q); 1129 + if (err) 1130 + return ERR_PTR(err); 1173 1131 1174 - return d_alloc(parent, &q); 1132 + d = d_alloc(parent, &q); 1133 + if (d) 1134 + return d; 1135 + 1136 + return ERR_PTR(-ENOMEM); 1175 1137 } 1176 1138 1177 1139 static int efivarfs_fill_super(struct super_block *sb, void *data, int silent) ··· 1188 1136 struct efivar_entry *entry, *n; 1189 1137 struct efivars *efivars = &__efivars; 1190 1138 char *name; 1139 + int err = -ENOMEM; 1191 1140 1192 1141 efivarfs_sb = sb; 1193 1142 ··· 1239 1186 goto fail_name; 1240 1187 1241 1188 dentry = efivarfs_alloc_dentry(root, name); 1242 - if (!dentry) 1189 + if (IS_ERR(dentry)) { 1190 + err = PTR_ERR(dentry); 1243 1191 goto fail_inode; 1192 + } 1244 1193 1245 1194 /* copied by the above to local storage in the dentry. 
*/ 1246 1195 kfree(name); ··· 1269 1214 fail_name: 1270 1215 kfree(name); 1271 1216 fail: 1272 - return -ENOMEM; 1217 + return err; 1273 1218 } 1274 1219 1275 1220 static struct dentry *efivarfs_mount(struct file_system_type *fs_type, ··· 1289 1234 .mount = efivarfs_mount, 1290 1235 .kill_sb = efivarfs_kill_sb, 1291 1236 }; 1237 + MODULE_ALIAS_FS("efivarfs"); 1292 1238 1293 1239 /* 1294 1240 * Handle negative dentry. ··· 1401 1345 efi_guid_t vendor = LINUX_EFI_CRASH_GUID; 1402 1346 struct efivars *efivars = psi->data; 1403 1347 int i, ret = 0; 1404 - u64 storage_space, remaining_space, max_variable_size; 1405 1348 efi_status_t status = EFI_NOT_FOUND; 1406 1349 unsigned long flags; 1407 1350 ··· 1420 1365 * size: a size of logging data 1421 1366 * DUMP_NAME_LEN * 2: a maximum size of variable name 1422 1367 */ 1423 - status = efivars->ops->query_variable_info(PSTORE_EFI_ATTRIBUTES, 1424 - &storage_space, 1425 - &remaining_space, 1426 - &max_variable_size); 1427 - if (status || remaining_space < size + DUMP_NAME_LEN * 2) { 1368 + 1369 + status = check_var_size_locked(efivars, PSTORE_EFI_ATTRIBUTES, 1370 + size + DUMP_NAME_LEN * 2); 1371 + 1372 + if (status) { 1428 1373 spin_unlock_irqrestore(&efivars->lock, flags); 1429 1374 *id = part; 1430 1375 return -ENOSPC; ··· 1597 1542 if (found) { 1598 1543 spin_unlock_irq(&efivars->lock); 1599 1544 return -EINVAL; 1545 + } 1546 + 1547 + status = check_var_size_locked(efivars, new_var->Attributes, 1548 + new_var->DataSize + utf16_strsize(new_var->VariableName, 1024)); 1549 + 1550 + if (status && status != EFI_UNSUPPORTED) { 1551 + spin_unlock_irq(&efivars->lock); 1552 + return efi_status_to_err(status); 1600 1553 } 1601 1554 1602 1555 /* now *really* create the variable via EFI */
+18 -7
drivers/gpu/drm/i915/i915_drv.c
··· 379 379 INTEL_VGA_DEVICE(0x0A06, &intel_haswell_m_info), /* ULT GT1 mobile */ 380 380 INTEL_VGA_DEVICE(0x0A16, &intel_haswell_m_info), /* ULT GT2 mobile */ 381 381 INTEL_VGA_DEVICE(0x0A26, &intel_haswell_m_info), /* ULT GT2 mobile */ 382 - INTEL_VGA_DEVICE(0x0D12, &intel_haswell_d_info), /* CRW GT1 desktop */ 382 + INTEL_VGA_DEVICE(0x0D02, &intel_haswell_d_info), /* CRW GT1 desktop */ 383 + INTEL_VGA_DEVICE(0x0D12, &intel_haswell_d_info), /* CRW GT2 desktop */ 383 384 INTEL_VGA_DEVICE(0x0D22, &intel_haswell_d_info), /* CRW GT2 desktop */ 384 - INTEL_VGA_DEVICE(0x0D32, &intel_haswell_d_info), /* CRW GT2 desktop */ 385 - INTEL_VGA_DEVICE(0x0D1A, &intel_haswell_d_info), /* CRW GT1 server */ 385 + INTEL_VGA_DEVICE(0x0D0A, &intel_haswell_d_info), /* CRW GT1 server */ 386 + INTEL_VGA_DEVICE(0x0D1A, &intel_haswell_d_info), /* CRW GT2 server */ 386 387 INTEL_VGA_DEVICE(0x0D2A, &intel_haswell_d_info), /* CRW GT2 server */ 387 - INTEL_VGA_DEVICE(0x0D3A, &intel_haswell_d_info), /* CRW GT2 server */ 388 - INTEL_VGA_DEVICE(0x0D16, &intel_haswell_m_info), /* CRW GT1 mobile */ 388 + INTEL_VGA_DEVICE(0x0D06, &intel_haswell_m_info), /* CRW GT1 mobile */ 389 + INTEL_VGA_DEVICE(0x0D16, &intel_haswell_m_info), /* CRW GT2 mobile */ 389 390 INTEL_VGA_DEVICE(0x0D26, &intel_haswell_m_info), /* CRW GT2 mobile */ 390 - INTEL_VGA_DEVICE(0x0D36, &intel_haswell_m_info), /* CRW GT2 mobile */ 391 391 INTEL_VGA_DEVICE(0x0f30, &intel_valleyview_m_info), 392 392 INTEL_VGA_DEVICE(0x0157, &intel_valleyview_m_info), 393 393 INTEL_VGA_DEVICE(0x0155, &intel_valleyview_d_info), ··· 495 495 intel_modeset_disable(dev); 496 496 497 497 drm_irq_uninstall(dev); 498 + dev_priv->enable_hotplug_processing = false; 498 499 } 499 500 500 501 i915_save_state(dev); ··· 569 568 error = i915_gem_init_hw(dev); 570 569 mutex_unlock(&dev->struct_mutex); 571 570 571 + /* We need working interrupts for modeset enabling ... 
*/ 572 + drm_irq_install(dev); 573 + 572 574 intel_modeset_init_hw(dev); 573 575 intel_modeset_setup_hw_state(dev, false); 574 - drm_irq_install(dev); 576 + 577 + /* 578 + * ... but also need to make sure that hotplug processing 579 + * doesn't cause havoc. Like in the driver load code we don't 580 + * bother with the tiny race here where we might lose hotplug 581 + * notifications. 582 + */ 575 583 intel_hpd_init(dev); 584 + dev_priv->enable_hotplug_processing = true; 576 585 } 577 586 578 587 intel_opregion_init(dev);
+24 -2
drivers/gpu/drm/i915/i915_irq.c
··· 701 701 { 702 702 struct drm_device *dev = (struct drm_device *) arg; 703 703 drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; 704 - u32 de_iir, gt_iir, de_ier, pm_iir; 704 + u32 de_iir, gt_iir, de_ier, pm_iir, sde_ier; 705 705 irqreturn_t ret = IRQ_NONE; 706 706 int i; 707 707 ··· 710 710 /* disable master interrupt before clearing iir */ 711 711 de_ier = I915_READ(DEIER); 712 712 I915_WRITE(DEIER, de_ier & ~DE_MASTER_IRQ_CONTROL); 713 + 714 + /* Disable south interrupts. We'll only write to SDEIIR once, so further 715 + * interrupts will be stored on its back queue, and then we'll be 716 + * able to process them after we restore SDEIER (as soon as we restore 717 + * it, we'll get an interrupt if SDEIIR still has something to process 718 + * due to its back queue). */ 719 + sde_ier = I915_READ(SDEIER); 720 + I915_WRITE(SDEIER, 0); 721 + POSTING_READ(SDEIER); 713 722 714 723 gt_iir = I915_READ(GTIIR); 715 724 if (gt_iir) { ··· 768 759 769 760 I915_WRITE(DEIER, de_ier); 770 761 POSTING_READ(DEIER); 762 + I915_WRITE(SDEIER, sde_ier); 763 + POSTING_READ(SDEIER); 771 764 772 765 return ret; 773 766 } ··· 789 778 struct drm_device *dev = (struct drm_device *) arg; 790 779 drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; 791 780 int ret = IRQ_NONE; 792 - u32 de_iir, gt_iir, de_ier, pm_iir; 781 + u32 de_iir, gt_iir, de_ier, pm_iir, sde_ier; 793 782 794 783 atomic_inc(&dev_priv->irq_received); 795 784 ··· 797 786 de_ier = I915_READ(DEIER); 798 787 I915_WRITE(DEIER, de_ier & ~DE_MASTER_IRQ_CONTROL); 799 788 POSTING_READ(DEIER); 789 + 790 + /* Disable south interrupts. We'll only write to SDEIIR once, so further 791 + * interrupts will be stored on its back queue, and then we'll be 792 + * able to process them after we restore SDEIER (as soon as we restore 793 + * it, we'll get an interrupt if SDEIIR still has something to process 794 + * due to its back queue). 
*/ 795 + sde_ier = I915_READ(SDEIER); 796 + I915_WRITE(SDEIER, 0); 797 + POSTING_READ(SDEIER); 800 798 801 799 de_iir = I915_READ(DEIIR); 802 800 gt_iir = I915_READ(GTIIR); ··· 869 849 done: 870 850 I915_WRITE(DEIER, de_ier); 871 851 POSTING_READ(DEIER); 852 + I915_WRITE(SDEIER, sde_ier); 853 + POSTING_READ(SDEIER); 872 854 873 855 return ret; 874 856 }
+2 -2
drivers/gpu/drm/i915/i915_reg.h
··· 1613 1613 #define ADPA_CRT_HOTPLUG_FORCE_TRIGGER (1<<16) 1614 1614 #define ADPA_USE_VGA_HVPOLARITY (1<<15) 1615 1615 #define ADPA_SETS_HVPOLARITY 0 1616 - #define ADPA_VSYNC_CNTL_DISABLE (1<<11) 1616 + #define ADPA_VSYNC_CNTL_DISABLE (1<<10) 1617 1617 #define ADPA_VSYNC_CNTL_ENABLE 0 1618 - #define ADPA_HSYNC_CNTL_DISABLE (1<<10) 1618 + #define ADPA_HSYNC_CNTL_DISABLE (1<<11) 1619 1619 #define ADPA_HSYNC_CNTL_ENABLE 0 1620 1620 #define ADPA_VSYNC_ACTIVE_HIGH (1<<4) 1621 1621 #define ADPA_VSYNC_ACTIVE_LOW 0
+1 -1
drivers/gpu/drm/i915/intel_crt.c
··· 88 88 u32 temp; 89 89 90 90 temp = I915_READ(crt->adpa_reg); 91 - temp &= ~(ADPA_HSYNC_CNTL_DISABLE | ADPA_VSYNC_CNTL_DISABLE); 91 + temp |= ADPA_HSYNC_CNTL_DISABLE | ADPA_VSYNC_CNTL_DISABLE; 92 92 temp &= ~ADPA_DAC_ENABLE; 93 93 I915_WRITE(crt->adpa_reg, temp); 94 94 }
+1 -1
drivers/gpu/drm/i915/intel_ddi.c
··· 1391 1391 struct intel_dp *intel_dp = &intel_dig_port->dp; 1392 1392 struct drm_i915_private *dev_priv = encoder->dev->dev_private; 1393 1393 enum port port = intel_dig_port->port; 1394 - bool wait; 1395 1394 uint32_t val; 1395 + bool wait = false; 1396 1396 1397 1397 if (I915_READ(DP_TP_CTL(port)) & DP_TP_CTL_ENABLE) { 1398 1398 val = I915_READ(DDI_BUF_CTL(port));
+30 -7
drivers/gpu/drm/i915/intel_display.c
··· 3604 3604 */ 3605 3605 } 3606 3606 3607 + /** 3608 + * i9xx_fixup_plane - ugly workaround for G45 to fire up the hardware 3609 + * cursor plane briefly if not already running after enabling the display 3610 + * plane. 3611 + * This workaround avoids occasional blank screens when self refresh is 3612 + * enabled. 3613 + */ 3614 + static void 3615 + g4x_fixup_plane(struct drm_i915_private *dev_priv, enum pipe pipe) 3616 + { 3617 + u32 cntl = I915_READ(CURCNTR(pipe)); 3618 + 3619 + if ((cntl & CURSOR_MODE) == 0) { 3620 + u32 fw_bcl_self = I915_READ(FW_BLC_SELF); 3621 + 3622 + I915_WRITE(FW_BLC_SELF, fw_bcl_self & ~FW_BLC_SELF_EN); 3623 + I915_WRITE(CURCNTR(pipe), CURSOR_MODE_64_ARGB_AX); 3624 + intel_wait_for_vblank(dev_priv->dev, pipe); 3625 + I915_WRITE(CURCNTR(pipe), cntl); 3626 + I915_WRITE(CURBASE(pipe), I915_READ(CURBASE(pipe))); 3627 + I915_WRITE(FW_BLC_SELF, fw_bcl_self); 3628 + } 3629 + } 3630 + 3607 3631 static void i9xx_crtc_enable(struct drm_crtc *crtc) 3608 3632 { 3609 3633 struct drm_device *dev = crtc->dev; ··· 3653 3629 3654 3630 intel_enable_pipe(dev_priv, pipe, false); 3655 3631 intel_enable_plane(dev_priv, plane, pipe); 3632 + if (IS_G4X(dev)) 3633 + g4x_fixup_plane(dev_priv, pipe); 3656 3634 3657 3635 intel_crtc_load_lut(crtc); 3658 3636 intel_update_fbc(dev); ··· 7282 7256 { 7283 7257 struct drm_device *dev = crtc->dev; 7284 7258 struct drm_i915_private *dev_priv = dev->dev_private; 7285 - struct intel_framebuffer *intel_fb; 7286 - struct drm_i915_gem_object *obj; 7259 + struct drm_framebuffer *old_fb = crtc->fb; 7260 + struct drm_i915_gem_object *obj = to_intel_framebuffer(fb)->obj; 7287 7261 struct intel_crtc *intel_crtc = to_intel_crtc(crtc); 7288 7262 struct intel_unpin_work *work; 7289 7263 unsigned long flags; ··· 7308 7282 7309 7283 work->event = event; 7310 7284 work->crtc = crtc; 7311 - intel_fb = to_intel_framebuffer(crtc->fb); 7312 - work->old_fb_obj = intel_fb->obj; 7285 + work->old_fb_obj = to_intel_framebuffer(old_fb)->obj; 7313 
7286 INIT_WORK(&work->work, intel_unpin_work_fn); 7314 7287 7315 7288 ret = drm_vblank_get(dev, intel_crtc->pipe); ··· 7327 7302 } 7328 7303 intel_crtc->unpin_work = work; 7329 7304 spin_unlock_irqrestore(&dev->event_lock, flags); 7330 - 7331 - intel_fb = to_intel_framebuffer(fb); 7332 - obj = intel_fb->obj; 7333 7305 7334 7306 if (atomic_read(&intel_crtc->unpin_work_count) >= 2) 7335 7307 flush_workqueue(dev_priv->wq); ··· 7362 7340 7363 7341 cleanup_pending: 7364 7342 atomic_dec(&intel_crtc->unpin_work_count); 7343 + crtc->fb = old_fb; 7365 7344 drm_gem_object_unreference(&work->old_fb_obj->base); 7366 7345 drm_gem_object_unreference(&obj->base); 7367 7346 mutex_unlock(&dev->struct_mutex);
+2 -1
drivers/gpu/drm/i915/intel_dp.c
··· 353 353 354 354 #define C (((status = I915_READ_NOTRACE(ch_ctl)) & DP_AUX_CH_CTL_SEND_BUSY) == 0) 355 355 if (has_aux_irq) 356 - done = wait_event_timeout(dev_priv->gmbus_wait_queue, C, 10); 356 + done = wait_event_timeout(dev_priv->gmbus_wait_queue, C, 357 + msecs_to_jiffies(10)); 357 358 else 358 359 done = wait_for_atomic(C, 10) == 0; 359 360 if (!done)
+1 -1
drivers/gpu/drm/i915/intel_pm.c
··· 2574 2574 I915_WRITE(GEN6_RC_SLEEP, 0); 2575 2575 I915_WRITE(GEN6_RC1e_THRESHOLD, 1000); 2576 2576 I915_WRITE(GEN6_RC6_THRESHOLD, 50000); 2577 - I915_WRITE(GEN6_RC6p_THRESHOLD, 100000); 2577 + I915_WRITE(GEN6_RC6p_THRESHOLD, 150000); 2578 2578 I915_WRITE(GEN6_RC6pp_THRESHOLD, 64000); /* unused */ 2579 2579 2580 2580 /* Check if we are enabling RC6 */
-1
drivers/gpu/drm/mgag200/mgag200_drv.h
··· 112 112 struct mga_fbdev { 113 113 struct drm_fb_helper helper; 114 114 struct mga_framebuffer mfb; 115 - struct list_head fbdev_list; 116 115 void *sysram; 117 116 int size; 118 117 struct ttm_bo_kmap_obj mapping;
+1
drivers/gpu/drm/mgag200/mgag200_i2c.c
··· 92 92 int ret; 93 93 int data, clock; 94 94 95 + WREG_DAC(MGA1064_GEN_IO_CTL2, 1); 95 96 WREG_DAC(MGA1064_GEN_IO_DATA, 0xff); 96 97 WREG_DAC(MGA1064_GEN_IO_CTL, 0); 97 98
+27
drivers/gpu/drm/mgag200/mgag200_mode.c
··· 1406 1406 static int mga_vga_mode_valid(struct drm_connector *connector, 1407 1407 struct drm_display_mode *mode) 1408 1408 { 1409 + struct drm_device *dev = connector->dev; 1410 + struct mga_device *mdev = (struct mga_device*)dev->dev_private; 1411 + struct mga_fbdev *mfbdev = mdev->mfbdev; 1412 + struct drm_fb_helper *fb_helper = &mfbdev->helper; 1413 + struct drm_fb_helper_connector *fb_helper_conn = NULL; 1414 + int bpp = 32; 1415 + int i = 0; 1416 + 1409 1417 /* FIXME: Add bandwidth and g200se limitations */ 1410 1418 1411 1419 if (mode->crtc_hdisplay > 2048 || mode->crtc_hsync_start > 4096 || 1412 1420 mode->crtc_hsync_end > 4096 || mode->crtc_htotal > 4096 || 1413 1421 mode->crtc_vdisplay > 2048 || mode->crtc_vsync_start > 4096 || 1414 1422 mode->crtc_vsync_end > 4096 || mode->crtc_vtotal > 4096) { 1423 + return MODE_BAD; 1424 + } 1425 + 1426 + /* Validate the mode input by the user */ 1427 + for (i = 0; i < fb_helper->connector_count; i++) { 1428 + if (fb_helper->connector_info[i]->connector == connector) { 1429 + /* Found the helper for this connector */ 1430 + fb_helper_conn = fb_helper->connector_info[i]; 1431 + if (fb_helper_conn->cmdline_mode.specified) { 1432 + if (fb_helper_conn->cmdline_mode.bpp_specified) { 1433 + bpp = fb_helper_conn->cmdline_mode.bpp; 1434 + } 1435 + } 1436 + } 1437 + } 1438 + 1439 + if ((mode->hdisplay * mode->vdisplay * (bpp/8)) > mdev->mc.vram_size) { 1440 + if (fb_helper_conn) 1441 + fb_helper_conn->cmdline_mode.specified = false; 1415 1442 return MODE_BAD; 1416 1443 } 1417 1444
+1 -1
drivers/gpu/drm/nouveau/core/engine/graph/nve0.c
··· 350 350 nv_wr32(priv, GPC_UNIT(gpc, 0x0918), magicgpc918); 351 351 } 352 352 353 - nv_wr32(priv, GPC_BCAST(0x1bd4), magicgpc918); 353 + nv_wr32(priv, GPC_BCAST(0x3fd4), magicgpc918); 354 354 nv_wr32(priv, GPC_BCAST(0x08ac), nv_rd32(priv, 0x100800)); 355 355 } 356 356
+1 -1
drivers/gpu/drm/nouveau/core/subdev/bios/init.c
··· 869 869 init->offset += 2; 870 870 871 871 init_wr32(init, dreg, idata); 872 - init_mask(init, creg, ~mask, data | idata); 872 + init_mask(init, creg, ~mask, data | iaddr); 873 873 } 874 874 } 875 875
+1
drivers/gpu/drm/nouveau/core/subdev/i2c/base.c
··· 142 142 /* drop port's i2c subdev refcount, i2c handles this itself */ 143 143 if (ret == 0) { 144 144 list_add_tail(&port->head, &i2c->ports); 145 + atomic_dec(&parent->refcount); 145 146 atomic_dec(&engine->refcount); 146 147 } 147 148
+12
drivers/gpu/drm/nouveau/nouveau_agp.c
··· 47 47 if (drm->agp.stat == UNKNOWN) { 48 48 if (!nouveau_agpmode) 49 49 return false; 50 + #ifdef __powerpc__ 51 + /* Disable AGP by default on all PowerPC machines for 52 + * now -- At least some UniNorth-2 AGP bridges are 53 + * known to be broken: DMA from the host to the card 54 + * works just fine, but writeback from the card to the 55 + * host goes straight to memory untranslated bypassing 56 + * the GATT somehow, making them quite painful to deal 57 + * with... 58 + */ 59 + if (nouveau_agpmode == -1) 60 + return false; 61 + #endif 50 62 return true; 51 63 } 52 64
+102 -71
drivers/gpu/drm/nouveau/nv50_display.c
··· 55 55 56 56 /* offsets in shared sync bo of various structures */ 57 57 #define EVO_SYNC(c, o) ((c) * 0x0100 + (o)) 58 - #define EVO_MAST_NTFY EVO_SYNC( 0, 0x00) 59 - #define EVO_FLIP_SEM0(c) EVO_SYNC((c), 0x00) 60 - #define EVO_FLIP_SEM1(c) EVO_SYNC((c), 0x10) 58 + #define EVO_MAST_NTFY EVO_SYNC( 0, 0x00) 59 + #define EVO_FLIP_SEM0(c) EVO_SYNC((c) + 1, 0x00) 60 + #define EVO_FLIP_SEM1(c) EVO_SYNC((c) + 1, 0x10) 61 61 62 62 #define EVO_CORE_HANDLE (0xd1500000) 63 63 #define EVO_CHAN_HANDLE(t,i) (0xd15c0000 | (((t) & 0x00ff) << 8) | (i)) ··· 341 341 342 342 struct nv50_sync { 343 343 struct nv50_dmac base; 344 - struct { 345 - u32 offset; 346 - u16 value; 347 - } sem; 344 + u32 addr; 345 + u32 data; 348 346 }; 349 347 350 348 struct nv50_ovly { ··· 469 471 return nv50_disp(dev)->sync; 470 472 } 471 473 474 + struct nv50_display_flip { 475 + struct nv50_disp *disp; 476 + struct nv50_sync *chan; 477 + }; 478 + 479 + static bool 480 + nv50_display_flip_wait(void *data) 481 + { 482 + struct nv50_display_flip *flip = data; 483 + if (nouveau_bo_rd32(flip->disp->sync, flip->chan->addr / 4) == 484 + flip->chan->data); 485 + return true; 486 + usleep_range(1, 2); 487 + return false; 488 + } 489 + 472 490 void 473 491 nv50_display_flip_stop(struct drm_crtc *crtc) 474 492 { 475 - struct nv50_sync *sync = nv50_sync(crtc); 493 + struct nouveau_device *device = nouveau_dev(crtc->dev); 494 + struct nv50_display_flip flip = { 495 + .disp = nv50_disp(crtc->dev), 496 + .chan = nv50_sync(crtc), 497 + }; 476 498 u32 *push; 477 499 478 - push = evo_wait(sync, 8); 500 + push = evo_wait(flip.chan, 8); 479 501 if (push) { 480 502 evo_mthd(push, 0x0084, 1); 481 503 evo_data(push, 0x00000000); ··· 505 487 evo_data(push, 0x00000000); 506 488 evo_mthd(push, 0x0080, 1); 507 489 evo_data(push, 0x00000000); 508 - evo_kick(push, sync); 490 + evo_kick(push, flip.chan); 509 491 } 492 + 493 + nv_wait_cb(device, nv50_display_flip_wait, &flip); 510 494 } 511 495 512 496 int ··· 516 496 struct 
nouveau_channel *chan, u32 swap_interval) 517 497 { 518 498 struct nouveau_framebuffer *nv_fb = nouveau_framebuffer(fb); 519 - struct nv50_disp *disp = nv50_disp(crtc->dev); 520 499 struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc); 521 500 struct nv50_sync *sync = nv50_sync(crtc); 501 + int head = nv_crtc->index, ret; 522 502 u32 *push; 523 - int ret; 524 503 525 504 swap_interval <<= 4; 526 505 if (swap_interval == 0) ··· 529 510 if (unlikely(push == NULL)) 530 511 return -EBUSY; 531 512 532 - /* synchronise with the rendering channel, if necessary */ 533 - if (likely(chan)) { 513 + if (chan && nv_mclass(chan->object) < NV84_CHANNEL_IND_CLASS) { 514 + ret = RING_SPACE(chan, 8); 515 + if (ret) 516 + return ret; 517 + 518 + BEGIN_NV04(chan, 0, NV11_SUBCHAN_DMA_SEMAPHORE, 2); 519 + OUT_RING (chan, NvEvoSema0 + head); 520 + OUT_RING (chan, sync->addr ^ 0x10); 521 + BEGIN_NV04(chan, 0, NV11_SUBCHAN_SEMAPHORE_RELEASE, 1); 522 + OUT_RING (chan, sync->data + 1); 523 + BEGIN_NV04(chan, 0, NV11_SUBCHAN_SEMAPHORE_OFFSET, 2); 524 + OUT_RING (chan, sync->addr); 525 + OUT_RING (chan, sync->data); 526 + } else 527 + if (chan && nv_mclass(chan->object) < NVC0_CHANNEL_IND_CLASS) { 528 + u64 addr = nv84_fence_crtc(chan, head) + sync->addr; 529 + ret = RING_SPACE(chan, 12); 530 + if (ret) 531 + return ret; 532 + 533 + BEGIN_NV04(chan, 0, NV11_SUBCHAN_DMA_SEMAPHORE, 1); 534 + OUT_RING (chan, chan->vram); 535 + BEGIN_NV04(chan, 0, NV84_SUBCHAN_SEMAPHORE_ADDRESS_HIGH, 4); 536 + OUT_RING (chan, upper_32_bits(addr ^ 0x10)); 537 + OUT_RING (chan, lower_32_bits(addr ^ 0x10)); 538 + OUT_RING (chan, sync->data + 1); 539 + OUT_RING (chan, NV84_SUBCHAN_SEMAPHORE_TRIGGER_WRITE_LONG); 540 + BEGIN_NV04(chan, 0, NV84_SUBCHAN_SEMAPHORE_ADDRESS_HIGH, 4); 541 + OUT_RING (chan, upper_32_bits(addr)); 542 + OUT_RING (chan, lower_32_bits(addr)); 543 + OUT_RING (chan, sync->data); 544 + OUT_RING (chan, NV84_SUBCHAN_SEMAPHORE_TRIGGER_ACQUIRE_EQUAL); 545 + } else 546 + if (chan) { 547 + u64 addr = 
nv84_fence_crtc(chan, head) + sync->addr; 534 548 ret = RING_SPACE(chan, 10); 535 549 if (ret) 536 550 return ret; 537 551 538 - if (nv_mclass(chan->object) < NV84_CHANNEL_IND_CLASS) { 539 - BEGIN_NV04(chan, 0, NV11_SUBCHAN_DMA_SEMAPHORE, 2); 540 - OUT_RING (chan, NvEvoSema0 + nv_crtc->index); 541 - OUT_RING (chan, sync->sem.offset); 542 - BEGIN_NV04(chan, 0, NV11_SUBCHAN_SEMAPHORE_RELEASE, 1); 543 - OUT_RING (chan, 0xf00d0000 | sync->sem.value); 544 - BEGIN_NV04(chan, 0, NV11_SUBCHAN_SEMAPHORE_OFFSET, 2); 545 - OUT_RING (chan, sync->sem.offset ^ 0x10); 546 - OUT_RING (chan, 0x74b1e000); 547 - BEGIN_NV04(chan, 0, NV11_SUBCHAN_DMA_SEMAPHORE, 1); 548 - OUT_RING (chan, NvSema); 549 - } else 550 - if (nv_mclass(chan->object) < NVC0_CHANNEL_IND_CLASS) { 551 - u64 offset = nv84_fence_crtc(chan, nv_crtc->index); 552 - offset += sync->sem.offset; 552 + BEGIN_NVC0(chan, 0, NV84_SUBCHAN_SEMAPHORE_ADDRESS_HIGH, 4); 553 + OUT_RING (chan, upper_32_bits(addr ^ 0x10)); 554 + OUT_RING (chan, lower_32_bits(addr ^ 0x10)); 555 + OUT_RING (chan, sync->data + 1); 556 + OUT_RING (chan, NV84_SUBCHAN_SEMAPHORE_TRIGGER_WRITE_LONG | 557 + NVC0_SUBCHAN_SEMAPHORE_TRIGGER_YIELD); 558 + BEGIN_NVC0(chan, 0, NV84_SUBCHAN_SEMAPHORE_ADDRESS_HIGH, 4); 559 + OUT_RING (chan, upper_32_bits(addr)); 560 + OUT_RING (chan, lower_32_bits(addr)); 561 + OUT_RING (chan, sync->data); 562 + OUT_RING (chan, NV84_SUBCHAN_SEMAPHORE_TRIGGER_ACQUIRE_EQUAL | 563 + NVC0_SUBCHAN_SEMAPHORE_TRIGGER_YIELD); 564 + } 553 565 554 - BEGIN_NV04(chan, 0, NV84_SUBCHAN_SEMAPHORE_ADDRESS_HIGH, 4); 555 - OUT_RING (chan, upper_32_bits(offset)); 556 - OUT_RING (chan, lower_32_bits(offset)); 557 - OUT_RING (chan, 0xf00d0000 | sync->sem.value); 558 - OUT_RING (chan, 0x00000002); 559 - BEGIN_NV04(chan, 0, NV84_SUBCHAN_SEMAPHORE_ADDRESS_HIGH, 4); 560 - OUT_RING (chan, upper_32_bits(offset)); 561 - OUT_RING (chan, lower_32_bits(offset ^ 0x10)); 562 - OUT_RING (chan, 0x74b1e000); 563 - OUT_RING (chan, 0x00000001); 564 - } else { 565 - u64 
offset = nv84_fence_crtc(chan, nv_crtc->index); 566 - offset += sync->sem.offset; 567 - 568 - BEGIN_NVC0(chan, 0, NV84_SUBCHAN_SEMAPHORE_ADDRESS_HIGH, 4); 569 - OUT_RING (chan, upper_32_bits(offset)); 570 - OUT_RING (chan, lower_32_bits(offset)); 571 - OUT_RING (chan, 0xf00d0000 | sync->sem.value); 572 - OUT_RING (chan, 0x00001002); 573 - BEGIN_NVC0(chan, 0, NV84_SUBCHAN_SEMAPHORE_ADDRESS_HIGH, 4); 574 - OUT_RING (chan, upper_32_bits(offset)); 575 - OUT_RING (chan, lower_32_bits(offset ^ 0x10)); 576 - OUT_RING (chan, 0x74b1e000); 577 - OUT_RING (chan, 0x00001001); 578 - } 579 - 566 + if (chan) { 567 + sync->addr ^= 0x10; 568 + sync->data++; 580 569 FIRE_RING (chan); 581 570 } else { 582 - nouveau_bo_wr32(disp->sync, sync->sem.offset / 4, 583 - 0xf00d0000 | sync->sem.value); 584 571 evo_sync(crtc->dev); 585 572 } 586 573 ··· 600 575 evo_data(push, 0x40000000); 601 576 } 602 577 evo_mthd(push, 0x0088, 4); 603 - evo_data(push, sync->sem.offset); 604 - evo_data(push, 0xf00d0000 | sync->sem.value); 605 - evo_data(push, 0x74b1e000); 578 + evo_data(push, sync->addr); 579 + evo_data(push, sync->data++); 580 + evo_data(push, sync->data); 606 581 evo_data(push, NvEvoSync); 607 582 evo_mthd(push, 0x00a0, 2); 608 583 evo_data(push, 0x00000000); ··· 630 605 evo_mthd(push, 0x0080, 1); 631 606 evo_data(push, 0x00000000); 632 607 evo_kick(push, sync); 633 - 634 - sync->sem.offset ^= 0x10; 635 - sync->sem.value++; 636 608 return 0; 637 609 } 638 610 ··· 1401 1379 if (ret) 1402 1380 goto out; 1403 1381 1404 - head->sync.sem.offset = EVO_SYNC(1 + index, 0x00); 1382 + head->sync.addr = EVO_FLIP_SEM0(index); 1383 + head->sync.data = 0x00000000; 1405 1384 1406 1385 /* allocate overlay resources */ 1407 1386 ret = nv50_pioc_create(disp->core, NV50_DISP_OIMM_CLASS, index, ··· 2135 2112 int 2136 2113 nv50_display_init(struct drm_device *dev) 2137 2114 { 2138 - u32 *push = evo_wait(nv50_mast(dev), 32); 2139 - if (push) { 2140 - evo_mthd(push, 0x0088, 1); 2141 - evo_data(push, NvEvoSync); 
2142 - evo_kick(push, nv50_mast(dev)); 2143 - return 0; 2115 + struct nv50_disp *disp = nv50_disp(dev); 2116 + struct drm_crtc *crtc; 2117 + u32 *push; 2118 + 2119 + push = evo_wait(nv50_mast(dev), 32); 2120 + if (!push) 2121 + return -EBUSY; 2122 + 2123 + list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) { 2124 + struct nv50_sync *sync = nv50_sync(crtc); 2125 + nouveau_bo_wr32(disp->sync, sync->addr / 4, sync->data); 2144 2126 } 2145 2127 2146 - return -EBUSY; 2128 + evo_mthd(push, 0x0088, 1); 2129 + evo_data(push, NvEvoSync); 2130 + evo_kick(push, nv50_mast(dev)); 2131 + return 0; 2147 2132 } 2148 2133 2149 2134 void
+6
drivers/gpu/drm/radeon/evergreen.c
··· 2438 2438 if (tmp & L2_BUSY) 2439 2439 reset_mask |= RADEON_RESET_VMC; 2440 2440 2441 + /* Skip MC reset as it's mostly likely not hung, just busy */ 2442 + if (reset_mask & RADEON_RESET_MC) { 2443 + DRM_DEBUG("MC busy: 0x%08X, clearing.\n", reset_mask); 2444 + reset_mask &= ~RADEON_RESET_MC; 2445 + } 2446 + 2441 2447 return reset_mask; 2442 2448 } 2443 2449
+1 -1
drivers/gpu/drm/radeon/evergreen_cs.c
··· 834 834 __func__, __LINE__, toffset, surf.base_align); 835 835 return -EINVAL; 836 836 } 837 - if (moffset & (surf.base_align - 1)) { 837 + if (surf.nsamples <= 1 && moffset & (surf.base_align - 1)) { 838 838 dev_warn(p->dev, "%s:%d mipmap bo base %ld not aligned with %ld\n", 839 839 __func__, __LINE__, moffset, surf.base_align); 840 840 return -EINVAL;
+6
drivers/gpu/drm/radeon/ni.c
··· 1381 1381 if (tmp & L2_BUSY) 1382 1382 reset_mask |= RADEON_RESET_VMC; 1383 1383 1384 + /* Skip MC reset as it's mostly likely not hung, just busy */ 1385 + if (reset_mask & RADEON_RESET_MC) { 1386 + DRM_DEBUG("MC busy: 0x%08X, clearing.\n", reset_mask); 1387 + reset_mask &= ~RADEON_RESET_MC; 1388 + } 1389 + 1384 1390 return reset_mask; 1385 1391 } 1386 1392
+6
drivers/gpu/drm/radeon/r600.c
··· 1394 1394 if (r600_is_display_hung(rdev)) 1395 1395 reset_mask |= RADEON_RESET_DISPLAY; 1396 1396 1397 + /* Skip MC reset as it's mostly likely not hung, just busy */ 1398 + if (reset_mask & RADEON_RESET_MC) { 1399 + DRM_DEBUG("MC busy: 0x%08X, clearing.\n", reset_mask); 1400 + reset_mask &= ~RADEON_RESET_MC; 1401 + } 1402 + 1397 1403 return reset_mask; 1398 1404 } 1399 1405
+9
drivers/gpu/drm/radeon/radeon_combios.c
··· 970 970 found = 1; 971 971 } 972 972 973 + /* quirks */ 974 + /* Radeon 9100 (R200) */ 975 + if ((dev->pdev->device == 0x514D) && 976 + (dev->pdev->subsystem_vendor == 0x174B) && 977 + (dev->pdev->subsystem_device == 0x7149)) { 978 + /* vbios value is bad, use the default */ 979 + found = 0; 980 + } 981 + 973 982 if (!found) /* fallback to defaults */ 974 983 radeon_legacy_get_primary_dac_info_from_table(rdev, p_dac); 975 984
+2 -1
drivers/gpu/drm/radeon/radeon_drv.c
··· 70 70 * 2.27.0 - r600-SI: Add CS ioctl support for async DMA 71 71 * 2.28.0 - r600-eg: Add MEM_WRITE packet support 72 72 * 2.29.0 - R500 FP16 color clear registers 73 + * 2.30.0 - fix for FMASK texturing 73 74 */ 74 75 #define KMS_DRIVER_MAJOR 2 75 - #define KMS_DRIVER_MINOR 29 76 + #define KMS_DRIVER_MINOR 30 76 77 #define KMS_DRIVER_PATCHLEVEL 0 77 78 int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags); 78 79 int radeon_driver_unload_kms(struct drm_device *dev);
+12
drivers/gpu/drm/radeon/radeon_irq_kms.c
··· 400 400 { 401 401 unsigned long irqflags; 402 402 403 + if (!rdev->ddev->irq_enabled) 404 + return; 405 + 403 406 spin_lock_irqsave(&rdev->irq.lock, irqflags); 404 407 rdev->irq.afmt[block] = true; 405 408 radeon_irq_set(rdev); ··· 421 418 void radeon_irq_kms_disable_afmt(struct radeon_device *rdev, int block) 422 419 { 423 420 unsigned long irqflags; 421 + 422 + if (!rdev->ddev->irq_enabled) 423 + return; 424 424 425 425 spin_lock_irqsave(&rdev->irq.lock, irqflags); 426 426 rdev->irq.afmt[block] = false; ··· 444 438 unsigned long irqflags; 445 439 int i; 446 440 441 + if (!rdev->ddev->irq_enabled) 442 + return; 443 + 447 444 spin_lock_irqsave(&rdev->irq.lock, irqflags); 448 445 for (i = 0; i < RADEON_MAX_HPD_PINS; ++i) 449 446 rdev->irq.hpd[i] |= !!(hpd_mask & (1 << i)); ··· 466 457 { 467 458 unsigned long irqflags; 468 459 int i; 460 + 461 + if (!rdev->ddev->irq_enabled) 462 + return; 469 463 470 464 spin_lock_irqsave(&rdev->irq.lock, irqflags); 471 465 for (i = 0; i < RADEON_MAX_HPD_PINS; ++i)
+6
drivers/gpu/drm/radeon/si.c
··· 2284 2284 if (tmp & L2_BUSY) 2285 2285 reset_mask |= RADEON_RESET_VMC; 2286 2286 2287 + /* Skip MC reset as it's mostly likely not hung, just busy */ 2288 + if (reset_mask & RADEON_RESET_MC) { 2289 + DRM_DEBUG("MC busy: 0x%08X, clearing.\n", reset_mask); 2290 + reset_mask &= ~RADEON_RESET_MC; 2291 + } 2292 + 2287 2293 return reset_mask; 2288 2294 } 2289 2295
-1
drivers/gpu/drm/tegra/Kconfig
··· 4 4 select DRM_KMS_HELPER 5 5 select DRM_GEM_CMA_HELPER 6 6 select DRM_KMS_CMA_HELPER 7 - select DRM_HDMI 8 7 select FB_CFB_FILLRECT 9 8 select FB_CFB_COPYAREA 10 9 select FB_CFB_IMAGEBLIT
+14 -8
drivers/hid/hid-logitech-dj.c
··· 459 459 struct dj_report *dj_report) 460 460 { 461 461 struct hid_device *hdev = djrcv_dev->hdev; 462 - int sent_bytes; 462 + struct hid_report *report; 463 + struct hid_report_enum *output_report_enum; 464 + u8 *data = (u8 *)(&dj_report->device_index); 465 + int i; 463 466 464 - if (!hdev->hid_output_raw_report) { 465 - dev_err(&hdev->dev, "%s:" 466 - "hid_output_raw_report is null\n", __func__); 467 + output_report_enum = &hdev->report_enum[HID_OUTPUT_REPORT]; 468 + report = output_report_enum->report_id_hash[REPORT_ID_DJ_SHORT]; 469 + 470 + if (!report) { 471 + dev_err(&hdev->dev, "%s: unable to find dj report\n", __func__); 467 472 return -ENODEV; 468 473 } 469 474 470 - sent_bytes = hdev->hid_output_raw_report(hdev, (u8 *) dj_report, 471 - sizeof(struct dj_report), 472 - HID_OUTPUT_REPORT); 475 + for (i = 0; i < report->field[0]->report_count; i++) 476 + report->field[0]->value[i] = data[i]; 473 477 474 - return (sent_bytes < 0) ? sent_bytes : 0; 478 + usbhid_submit_report(hdev, report, USB_DIR_OUT); 479 + 480 + return 0; 475 481 } 476 482 477 483 static int logi_dj_recv_query_paired_devices(struct dj_receiver_dev *djrcv_dev)
+16 -14
drivers/hwmon/pmbus/ltc2978.c
··· 62 62 int temp_min, temp_max; 63 63 int vout_min[8], vout_max[8]; 64 64 int iout_max[2]; 65 - int temp2_max[2]; 65 + int temp2_max; 66 66 struct pmbus_driver_info info; 67 67 }; 68 68 ··· 204 204 ret = pmbus_read_word_data(client, page, 205 205 LTC3880_MFR_TEMPERATURE2_PEAK); 206 206 if (ret >= 0) { 207 - if (lin11_to_val(ret) 208 - > lin11_to_val(data->temp2_max[page])) 209 - data->temp2_max[page] = ret; 210 - ret = data->temp2_max[page]; 207 + if (lin11_to_val(ret) > lin11_to_val(data->temp2_max)) 208 + data->temp2_max = ret; 209 + ret = data->temp2_max; 211 210 } 212 211 break; 213 212 case PMBUS_VIRT_READ_VIN_MIN: ··· 247 248 248 249 switch (reg) { 249 250 case PMBUS_VIRT_RESET_IOUT_HISTORY: 250 - data->iout_max[page] = 0x7fff; 251 + data->iout_max[page] = 0x7c00; 251 252 ret = ltc2978_clear_peaks(client, page, data->id); 252 253 break; 253 254 case PMBUS_VIRT_RESET_TEMP2_HISTORY: 254 - data->temp2_max[page] = 0x7fff; 255 + data->temp2_max = 0x7c00; 255 256 ret = ltc2978_clear_peaks(client, page, data->id); 256 257 break; 257 258 case PMBUS_VIRT_RESET_VOUT_HISTORY: ··· 261 262 break; 262 263 case PMBUS_VIRT_RESET_VIN_HISTORY: 263 264 data->vin_min = 0x7bff; 264 - data->vin_max = 0; 265 + data->vin_max = 0x7c00; 265 266 ret = ltc2978_clear_peaks(client, page, data->id); 266 267 break; 267 268 case PMBUS_VIRT_RESET_TEMP_HISTORY: 268 269 data->temp_min = 0x7bff; 269 - data->temp_max = 0x7fff; 270 + data->temp_max = 0x7c00; 270 271 ret = ltc2978_clear_peaks(client, page, data->id); 271 272 break; 272 273 default: ··· 320 321 info = &data->info; 321 322 info->write_word_data = ltc2978_write_word_data; 322 323 323 - data->vout_min[0] = 0xffff; 324 324 data->vin_min = 0x7bff; 325 + data->vin_max = 0x7c00; 325 326 data->temp_min = 0x7bff; 326 - data->temp_max = 0x7fff; 327 + data->temp_max = 0x7c00; 328 + data->temp2_max = 0x7c00; 327 329 328 - switch (id->driver_data) { 330 + switch (data->id) { 329 331 case ltc2978: 330 332 info->read_word_data = 
ltc2978_read_word_data; 331 333 info->pages = 8; ··· 336 336 for (i = 1; i < 8; i++) { 337 337 info->func[i] = PMBUS_HAVE_VOUT 338 338 | PMBUS_HAVE_STATUS_VOUT; 339 - data->vout_min[i] = 0xffff; 340 339 } 341 340 break; 342 341 case ltc3880: ··· 351 352 | PMBUS_HAVE_IOUT | PMBUS_HAVE_STATUS_IOUT 352 353 | PMBUS_HAVE_POUT 353 354 | PMBUS_HAVE_TEMP | PMBUS_HAVE_STATUS_TEMP; 354 - data->vout_min[1] = 0xffff; 355 + data->iout_max[0] = 0x7c00; 356 + data->iout_max[1] = 0x7c00; 355 357 break; 356 358 default: 357 359 return -ENODEV; 358 360 } 361 + for (i = 0; i < info->pages; i++) 362 + data->vout_min[i] = 0xffff; 359 363 360 364 return pmbus_do_probe(client, id, info); 361 365 }
+7 -1
drivers/hwmon/sht15.c
··· 965 965 if (voltage) 966 966 data->supply_uv = voltage; 967 967 968 - regulator_enable(data->reg); 968 + ret = regulator_enable(data->reg); 969 + if (ret != 0) { 970 + dev_err(&pdev->dev, 971 + "failed to enable regulator: %d\n", ret); 972 + return ret; 973 + } 974 + 969 975 /* 970 976 * Setup a notifier block to update this if another device 971 977 * causes the voltage to change
+1
drivers/infiniband/hw/ipath/ipath_fs.c
··· 410 410 .mount = ipathfs_mount, 411 411 .kill_sb = ipathfs_kill_super, 412 412 }; 413 + MODULE_ALIAS_FS("ipathfs"); 413 414 414 415 int __init ipath_init_ipathfs(void) 415 416 {
+1
drivers/infiniband/hw/qib/qib_fs.c
··· 604 604 .mount = qibfs_mount, 605 605 .kill_sb = qibfs_kill_super, 606 606 }; 607 + MODULE_ALIAS_FS("ipathfs"); 607 608 608 609 int __init qib_init_qibfs(void) 609 610 {
+4 -4
drivers/input/keyboard/tc3589x-keypad.c
··· 70 70 #define TC3589x_EVT_INT_CLR 0x2 71 71 #define TC3589x_KBD_INT_CLR 0x1 72 72 73 - #define TC3589x_KBD_KEYMAP_SIZE 64 74 - 75 73 /** 76 74 * struct tc_keypad - data structure used by keypad driver 77 75 * @tc3589x: pointer to tc35893 ··· 86 88 const struct tc3589x_keypad_platform_data *board; 87 89 unsigned int krow; 88 90 unsigned int kcol; 89 - unsigned short keymap[TC3589x_KBD_KEYMAP_SIZE]; 91 + unsigned short *keymap; 90 92 bool keypad_stopped; 91 93 }; 92 94 ··· 336 338 337 339 error = matrix_keypad_build_keymap(plat->keymap_data, NULL, 338 340 TC3589x_MAX_KPROW, TC3589x_MAX_KPCOL, 339 - keypad->keymap, input); 341 + NULL, input); 340 342 if (error) { 341 343 dev_err(&pdev->dev, "Failed to build keymap\n"); 342 344 goto err_free_mem; 343 345 } 346 + 347 + keypad->keymap = input->keycode; 344 348 345 349 input_set_capability(input, EV_MSC, MSC_SCAN); 346 350 if (!plat->no_autorepeat)
+72 -13
drivers/input/mouse/alps.c
··· 490 490 f->y_map |= (p[5] & 0x20) << 6; 491 491 } 492 492 493 + static void alps_decode_dolphin(struct alps_fields *f, unsigned char *p) 494 + { 495 + f->first_mp = !!(p[0] & 0x02); 496 + f->is_mp = !!(p[0] & 0x20); 497 + 498 + f->fingers = ((p[0] & 0x6) >> 1 | 499 + (p[0] & 0x10) >> 2); 500 + f->x_map = ((p[2] & 0x60) >> 5) | 501 + ((p[4] & 0x7f) << 2) | 502 + ((p[5] & 0x7f) << 9) | 503 + ((p[3] & 0x07) << 16) | 504 + ((p[3] & 0x70) << 15) | 505 + ((p[0] & 0x01) << 22); 506 + f->y_map = (p[1] & 0x7f) | 507 + ((p[2] & 0x1f) << 7); 508 + 509 + f->x = ((p[1] & 0x7f) | ((p[4] & 0x0f) << 7)); 510 + f->y = ((p[2] & 0x7f) | ((p[4] & 0xf0) << 3)); 511 + f->z = (p[0] & 4) ? 0 : p[5] & 0x7f; 512 + 513 + alps_decode_buttons_v3(f, p); 514 + } 515 + 493 516 static void alps_process_touchpad_packet_v3(struct psmouse *psmouse) 494 517 { 495 518 struct alps_data *priv = psmouse->private; ··· 897 874 } 898 875 899 876 /* Bytes 2 - pktsize should have 0 in the highest bit */ 900 - if (psmouse->pktcnt >= 2 && psmouse->pktcnt <= psmouse->pktsize && 877 + if (priv->proto_version != ALPS_PROTO_V5 && 878 + psmouse->pktcnt >= 2 && psmouse->pktcnt <= psmouse->pktsize && 901 879 (psmouse->packet[psmouse->pktcnt - 1] & 0x80)) { 902 880 psmouse_dbg(psmouse, "refusing packet[%i] = %x\n", 903 881 psmouse->pktcnt - 1, ··· 1018 994 return 0; 1019 995 } 1020 996 1021 - static int alps_enter_command_mode(struct psmouse *psmouse, 1022 - unsigned char *resp) 997 + static int alps_enter_command_mode(struct psmouse *psmouse) 1023 998 { 1024 999 unsigned char param[4]; 1025 1000 ··· 1027 1004 return -1; 1028 1005 } 1029 1006 1030 - if (param[0] != 0x88 || (param[1] != 0x07 && param[1] != 0x08)) { 1007 + if ((param[0] != 0x88 || (param[1] != 0x07 && param[1] != 0x08)) && 1008 + param[0] != 0x73) { 1031 1009 psmouse_dbg(psmouse, 1032 1010 "unknown response while entering command mode\n"); 1033 1011 return -1; 1034 1012 } 1035 - 1036 - if (resp) 1037 - *resp = param[2]; 1038 1013 return 0; 1039 1014 } 
1040 1015 ··· 1197 1176 { 1198 1177 int reg_val, ret = -1; 1199 1178 1200 - if (alps_enter_command_mode(psmouse, NULL)) 1179 + if (alps_enter_command_mode(psmouse)) 1201 1180 return -1; 1202 1181 1203 1182 reg_val = alps_command_mode_read_reg(psmouse, reg_base + 0x0008); ··· 1237 1216 { 1238 1217 int ret = -EIO, reg_val; 1239 1218 1240 - if (alps_enter_command_mode(psmouse, NULL)) 1219 + if (alps_enter_command_mode(psmouse)) 1241 1220 goto error; 1242 1221 1243 1222 reg_val = alps_command_mode_read_reg(psmouse, reg_base + 0x08); ··· 1300 1279 * supported by this driver. If bit 1 isn't set the packet 1301 1280 * format is different. 1302 1281 */ 1303 - if (alps_enter_command_mode(psmouse, NULL) || 1282 + if (alps_enter_command_mode(psmouse) || 1304 1283 alps_command_mode_write_reg(psmouse, 1305 1284 reg_base + 0x08, 0x82) || 1306 1285 alps_exit_command_mode(psmouse)) ··· 1327 1306 alps_setup_trackstick_v3(psmouse, ALPS_REG_BASE_PINNACLE) == -EIO) 1328 1307 goto error; 1329 1308 1330 - if (alps_enter_command_mode(psmouse, NULL) || 1309 + if (alps_enter_command_mode(psmouse) || 1331 1310 alps_absolute_mode_v3(psmouse)) { 1332 1311 psmouse_err(psmouse, "Failed to enter absolute mode\n"); 1333 1312 goto error; ··· 1402 1381 priv->flags &= ~ALPS_DUALPOINT; 1403 1382 } 1404 1383 1405 - if (alps_enter_command_mode(psmouse, NULL) || 1384 + if (alps_enter_command_mode(psmouse) || 1406 1385 alps_command_mode_read_reg(psmouse, 0xc2d9) == -1 || 1407 1386 alps_command_mode_write_reg(psmouse, 0xc2cb, 0x00)) 1408 1387 goto error; ··· 1452 1431 struct ps2dev *ps2dev = &psmouse->ps2dev; 1453 1432 unsigned char param[4]; 1454 1433 1455 - if (alps_enter_command_mode(psmouse, NULL)) 1434 + if (alps_enter_command_mode(psmouse)) 1456 1435 goto error; 1457 1436 1458 1437 if (alps_absolute_mode_v4(psmouse)) { ··· 1520 1499 return -1; 1521 1500 } 1522 1501 1502 + static int alps_hw_init_dolphin_v1(struct psmouse *psmouse) 1503 + { 1504 + struct ps2dev *ps2dev = &psmouse->ps2dev; 1505 + 
unsigned char param[2]; 1506 + 1507 + /* This is dolphin "v1" as empirically defined by florin9doi */ 1508 + param[0] = 0x64; 1509 + param[1] = 0x28; 1510 + 1511 + if (ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSTREAM) || 1512 + ps2_command(ps2dev, &param[0], PSMOUSE_CMD_SETRATE) || 1513 + ps2_command(ps2dev, &param[1], PSMOUSE_CMD_SETRATE)) 1514 + return -1; 1515 + 1516 + return 0; 1517 + } 1518 + 1523 1519 static void alps_set_defaults(struct alps_data *priv) 1524 1520 { 1525 1521 priv->byte0 = 0x8f; ··· 1569 1531 priv->set_abs_params = alps_set_abs_params_mt; 1570 1532 priv->nibble_commands = alps_v4_nibble_commands; 1571 1533 priv->addr_command = PSMOUSE_CMD_DISABLE; 1534 + break; 1535 + case ALPS_PROTO_V5: 1536 + priv->hw_init = alps_hw_init_dolphin_v1; 1537 + priv->process_packet = alps_process_packet_v3; 1538 + priv->decode_fields = alps_decode_dolphin; 1539 + priv->set_abs_params = alps_set_abs_params_mt; 1540 + priv->nibble_commands = alps_v3_nibble_commands; 1541 + priv->addr_command = PSMOUSE_CMD_RESET_WRAP; 1542 + priv->byte0 = 0xc8; 1543 + priv->mask0 = 0xc8; 1544 + priv->flags = 0; 1545 + priv->x_max = 1360; 1546 + priv->y_max = 660; 1547 + priv->x_bits = 23; 1548 + priv->y_bits = 12; 1572 1549 break; 1573 1550 } 1574 1551 } ··· 1644 1591 return -EIO; 1645 1592 1646 1593 if (alps_match_table(psmouse, priv, e7, ec) == 0) { 1594 + return 0; 1595 + } else if (e7[0] == 0x73 && e7[1] == 0x03 && e7[2] == 0x50 && 1596 + ec[0] == 0x73 && ec[1] == 0x01) { 1597 + priv->proto_version = ALPS_PROTO_V5; 1598 + alps_set_defaults(priv); 1599 + 1647 1600 return 0; 1648 1601 } else if (ec[0] == 0x88 && ec[1] == 0x08) { 1649 1602 priv->proto_version = ALPS_PROTO_V3;
+1
drivers/input/mouse/alps.h
··· 16 16 #define ALPS_PROTO_V2 2 17 17 #define ALPS_PROTO_V3 3 18 18 #define ALPS_PROTO_V4 4 19 + #define ALPS_PROTO_V5 5 19 20 20 21 /** 21 22 * struct alps_model_info - touchpad ID table
+13 -6
drivers/input/mouse/cypress_ps2.c
··· 236 236 cytp->fw_version = param[2] & FW_VERSION_MASX; 237 237 cytp->tp_metrics_supported = (param[2] & TP_METRICS_MASK) ? 1 : 0; 238 238 239 + /* 240 + * Trackpad fw_version 11 (in Dell XPS12) yields a bogus response to 241 + * CYTP_CMD_READ_TP_METRICS so do not try to use it. LP: #1103594. 242 + */ 243 + if (cytp->fw_version >= 11) 244 + cytp->tp_metrics_supported = 0; 245 + 239 246 psmouse_dbg(psmouse, "cytp->fw_version = %d\n", cytp->fw_version); 240 247 psmouse_dbg(psmouse, "cytp->tp_metrics_supported = %d\n", 241 248 cytp->tp_metrics_supported); ··· 264 257 cytp->tp_max_pressure = CYTP_MAX_PRESSURE; 265 258 cytp->tp_res_x = cytp->tp_max_abs_x / cytp->tp_width; 266 259 cytp->tp_res_y = cytp->tp_max_abs_y / cytp->tp_high; 260 + 261 + if (!cytp->tp_metrics_supported) 262 + return 0; 267 263 268 264 memset(param, 0, sizeof(param)); 269 265 if (cypress_send_ext_cmd(psmouse, CYTP_CMD_READ_TP_METRICS, param) == 0) { ··· 325 315 326 316 static int cypress_query_hardware(struct psmouse *psmouse) 327 317 { 328 - struct cytp_data *cytp = psmouse->private; 329 318 int ret; 330 319 331 320 ret = cypress_read_fw_version(psmouse); 332 321 if (ret) 333 322 return ret; 334 323 335 - if (cytp->tp_metrics_supported) { 336 - ret = cypress_read_tp_metrics(psmouse); 337 - if (ret) 338 - return ret; 339 - } 324 + ret = cypress_read_tp_metrics(psmouse); 325 + if (ret) 326 + return ret; 340 327 341 328 return 0; 342 329 }
+4
drivers/input/tablet/wacom_wac.c
··· 2017 2017 static const struct wacom_features wacom_features_0x101 = 2018 2018 { "Wacom ISDv4 101", WACOM_PKGLEN_MTTPC, 26202, 16325, 255, 2019 2019 0, MTTPC, WACOM_INTUOS_RES, WACOM_INTUOS_RES }; 2020 + static const struct wacom_features wacom_features_0x10D = 2021 + { "Wacom ISDv4 10D", WACOM_PKGLEN_MTTPC, 26202, 16325, 255, 2022 + 0, MTTPC, WACOM_INTUOS_RES, WACOM_INTUOS_RES }; 2020 2023 static const struct wacom_features wacom_features_0x4001 = 2021 2024 { "Wacom ISDv4 4001", WACOM_PKGLEN_MTTPC, 26202, 16325, 255, 2022 2025 0, MTTPC, WACOM_INTUOS_RES, WACOM_INTUOS_RES }; ··· 2204 2201 { USB_DEVICE_WACOM(0xEF) }, 2205 2202 { USB_DEVICE_WACOM(0x100) }, 2206 2203 { USB_DEVICE_WACOM(0x101) }, 2204 + { USB_DEVICE_WACOM(0x10D) }, 2207 2205 { USB_DEVICE_WACOM(0x4001) }, 2208 2206 { USB_DEVICE_WACOM(0x47) }, 2209 2207 { USB_DEVICE_WACOM(0xF4) },
+6 -1
drivers/input/touchscreen/ads7846.c
··· 236 236 /* Must be called with ts->lock held */ 237 237 static void __ads7846_enable(struct ads7846 *ts) 238 238 { 239 - regulator_enable(ts->reg); 239 + int error; 240 + 241 + error = regulator_enable(ts->reg); 242 + if (error != 0) 243 + dev_err(&ts->spi->dev, "Failed to enable supply: %d\n", error); 244 + 240 245 ads7846_restart(ts); 241 246 } 242 247
+64 -4
drivers/input/touchscreen/atmel_mxt_ts.c
··· 176 176 /* Define for MXT_GEN_COMMAND_T6 */ 177 177 #define MXT_BOOT_VALUE 0xa5 178 178 #define MXT_BACKUP_VALUE 0x55 179 - #define MXT_BACKUP_TIME 25 /* msec */ 180 - #define MXT_RESET_TIME 65 /* msec */ 179 + #define MXT_BACKUP_TIME 50 /* msec */ 180 + #define MXT_RESET_TIME 200 /* msec */ 181 181 182 182 #define MXT_FWRESET_TIME 175 /* msec */ 183 + 184 + /* MXT_SPT_GPIOPWM_T19 field */ 185 + #define MXT_GPIO0_MASK 0x04 186 + #define MXT_GPIO1_MASK 0x08 187 + #define MXT_GPIO2_MASK 0x10 188 + #define MXT_GPIO3_MASK 0x20 183 189 184 190 /* Command to unlock bootloader */ 185 191 #define MXT_UNLOCK_CMD_MSB 0xaa ··· 218 212 /* Touchscreen absolute values */ 219 213 #define MXT_MAX_AREA 0xff 220 214 215 + #define MXT_PIXELS_PER_MM 20 216 + 221 217 struct mxt_info { 222 218 u8 family_id; 223 219 u8 variant_id; ··· 251 243 const struct mxt_platform_data *pdata; 252 244 struct mxt_object *object_table; 253 245 struct mxt_info info; 246 + bool is_tp; 247 + 254 248 unsigned int irq; 255 249 unsigned int max_x; 256 250 unsigned int max_y; ··· 261 251 u8 T6_reportid; 262 252 u8 T9_reportid_min; 263 253 u8 T9_reportid_max; 254 + u8 T19_reportid; 264 255 }; 265 256 266 257 static bool mxt_object_readable(unsigned int type) ··· 513 502 return mxt_write_reg(data->client, reg + offset, val); 514 503 } 515 504 505 + static void mxt_input_button(struct mxt_data *data, struct mxt_message *message) 506 + { 507 + struct input_dev *input = data->input_dev; 508 + bool button; 509 + int i; 510 + 511 + /* Active-low switch */ 512 + for (i = 0; i < MXT_NUM_GPIO; i++) { 513 + if (data->pdata->key_map[i] == KEY_RESERVED) 514 + continue; 515 + button = !(message->message[0] & MXT_GPIO0_MASK << i); 516 + input_report_key(input, data->pdata->key_map[i], button); 517 + } 518 + } 519 + 516 520 static void mxt_input_touchevent(struct mxt_data *data, 517 521 struct mxt_message *message, int id) 518 522 { ··· 610 584 } else if (mxt_is_T9_message(data, &message)) { 611 585 int id = reportid - 
data->T9_reportid_min; 612 586 mxt_input_touchevent(data, &message, id); 587 + update_input = true; 588 + } else if (message.reportid == data->T19_reportid) { 589 + mxt_input_button(data, &message); 613 590 update_input = true; 614 591 } else { 615 592 mxt_dump_message(dev, &message); ··· 793 764 data->T9_reportid_min = min_id; 794 765 data->T9_reportid_max = max_id; 795 766 break; 767 + case MXT_SPT_GPIOPWM_T19: 768 + data->T19_reportid = min_id; 769 + break; 796 770 } 797 771 } 798 772 ··· 809 777 data->T6_reportid = 0; 810 778 data->T9_reportid_min = 0; 811 779 data->T9_reportid_max = 0; 812 - 780 + data->T19_reportid = 0; 813 781 } 814 782 815 783 static int mxt_initialize(struct mxt_data *data) ··· 1147 1115 goto err_free_mem; 1148 1116 } 1149 1117 1150 - input_dev->name = "Atmel maXTouch Touchscreen"; 1118 + data->is_tp = pdata && pdata->is_tp; 1119 + 1120 + input_dev->name = (data->is_tp) ? "Atmel maXTouch Touchpad" : 1121 + "Atmel maXTouch Touchscreen"; 1151 1122 snprintf(data->phys, sizeof(data->phys), "i2c-%u-%04x/input0", 1152 1123 client->adapter->nr, client->addr); 1124 + 1153 1125 input_dev->phys = data->phys; 1154 1126 1155 1127 input_dev->id.bustype = BUS_I2C; ··· 1175 1139 __set_bit(EV_ABS, input_dev->evbit); 1176 1140 __set_bit(EV_KEY, input_dev->evbit); 1177 1141 __set_bit(BTN_TOUCH, input_dev->keybit); 1142 + 1143 + if (data->is_tp) { 1144 + int i; 1145 + __set_bit(INPUT_PROP_POINTER, input_dev->propbit); 1146 + __set_bit(INPUT_PROP_BUTTONPAD, input_dev->propbit); 1147 + 1148 + for (i = 0; i < MXT_NUM_GPIO; i++) 1149 + if (pdata->key_map[i] != KEY_RESERVED) 1150 + __set_bit(pdata->key_map[i], input_dev->keybit); 1151 + 1152 + __set_bit(BTN_TOOL_FINGER, input_dev->keybit); 1153 + __set_bit(BTN_TOOL_DOUBLETAP, input_dev->keybit); 1154 + __set_bit(BTN_TOOL_TRIPLETAP, input_dev->keybit); 1155 + __set_bit(BTN_TOOL_QUADTAP, input_dev->keybit); 1156 + __set_bit(BTN_TOOL_QUINTTAP, input_dev->keybit); 1157 + 1158 + input_abs_set_res(input_dev, ABS_X, 
MXT_PIXELS_PER_MM); 1159 + input_abs_set_res(input_dev, ABS_Y, MXT_PIXELS_PER_MM); 1160 + input_abs_set_res(input_dev, ABS_MT_POSITION_X, 1161 + MXT_PIXELS_PER_MM); 1162 + input_abs_set_res(input_dev, ABS_MT_POSITION_Y, 1163 + MXT_PIXELS_PER_MM); 1164 + } 1178 1165 1179 1166 /* For single touch */ 1180 1167 input_set_abs_params(input_dev, ABS_X, ··· 1317 1258 static const struct i2c_device_id mxt_id[] = { 1318 1259 { "qt602240_ts", 0 }, 1319 1260 { "atmel_mxt_ts", 0 }, 1261 + { "atmel_mxt_tp", 0 }, 1320 1262 { "mXT224", 0 }, 1321 1263 { } 1322 1264 };
+25 -9
drivers/input/touchscreen/mms114.c
··· 314 314 struct i2c_client *client = data->client; 315 315 int error; 316 316 317 - if (data->core_reg) 318 - regulator_enable(data->core_reg); 319 - if (data->io_reg) 320 - regulator_enable(data->io_reg); 317 + error = regulator_enable(data->core_reg); 318 + if (error) { 319 + dev_err(&client->dev, "Failed to enable avdd: %d\n", error); 320 + return error; 321 + } 322 + 323 + error = regulator_enable(data->io_reg); 324 + if (error) { 325 + dev_err(&client->dev, "Failed to enable vdd: %d\n", error); 326 + regulator_disable(data->core_reg); 327 + return error; 328 + } 329 + 321 330 mdelay(MMS114_POWERON_DELAY); 322 331 323 332 error = mms114_setup_regs(data); 324 - if (error < 0) 333 + if (error < 0) { 334 + regulator_disable(data->io_reg); 335 + regulator_disable(data->core_reg); 325 336 return error; 337 + } 326 338 327 339 if (data->pdata->cfg_pin) 328 340 data->pdata->cfg_pin(true); ··· 347 335 static void mms114_stop(struct mms114_data *data) 348 336 { 349 337 struct i2c_client *client = data->client; 338 + int error; 350 339 351 340 disable_irq(client->irq); 352 341 353 342 if (data->pdata->cfg_pin) 354 343 data->pdata->cfg_pin(false); 355 344 356 - if (data->io_reg) 357 - regulator_disable(data->io_reg); 358 - if (data->core_reg) 359 - regulator_disable(data->core_reg); 345 + error = regulator_disable(data->io_reg); 346 + if (error) 347 + dev_warn(&client->dev, "Failed to disable vdd: %d\n", error); 348 + 349 + error = regulator_disable(data->core_reg); 350 + if (error) 351 + dev_warn(&client->dev, "Failed to disable avdd: %d\n", error); 360 352 } 361 353 362 354 static int mms114_input_open(struct input_dev *dev)
+1
drivers/iommu/dmar.c
··· 1083 1083 "non-zero reserved fields in RTP", 1084 1084 "non-zero reserved fields in CTP", 1085 1085 "non-zero reserved fields in PTE", 1086 + "PCE for translation request specifies blocking", 1086 1087 }; 1087 1088 1088 1089 static const char *irq_remap_fault_reasons[] =
+3 -1
drivers/isdn/i4l/isdn_tty.c
··· 902 902 int j; 903 903 int l; 904 904 905 - l = strlen(msg); 905 + l = min(strlen(msg), sizeof(cmd.parm) - sizeof(cmd.parm.cmsg) 906 + + sizeof(cmd.parm.cmsg.para) - 2); 907 + 906 908 if (!l) { 907 909 isdn_tty_modem_result(RESULT_ERROR, info); 908 910 return;
+1 -2
drivers/mailbox/pl320-ipc.c
··· 138 138 } 139 139 EXPORT_SYMBOL_GPL(pl320_ipc_unregister_notifier); 140 140 141 - static int __init pl320_probe(struct amba_device *adev, 142 - const struct amba_id *id) 141 + static int pl320_probe(struct amba_device *adev, const struct amba_id *id) 143 142 { 144 143 int ret; 145 144
+1
drivers/misc/ibmasm/ibmasmfs.c
··· 110 110 .mount = ibmasmfs_mount, 111 111 .kill_sb = kill_litter_super, 112 112 }; 113 + MODULE_ALIAS_FS("ibmasmfs"); 113 114 114 115 static int ibmasmfs_fill_super (struct super_block *sb, void *data, int silent) 115 116 {
+1
drivers/mtd/mtdchar.c
··· 1238 1238 .mount = mtd_inodefs_mount, 1239 1239 .kill_sb = kill_anon_super, 1240 1240 }; 1241 + MODULE_ALIAS_FS("mtd_inodefs"); 1241 1242 1242 1243 static int __init init_mtdchar(void) 1243 1244 {
+3 -2
drivers/net/bonding/bond_main.c
··· 1964 1964 } 1965 1965 1966 1966 block_netpoll_tx(); 1967 - call_netdevice_notifiers(NETDEV_RELEASE, bond_dev); 1968 1967 write_lock_bh(&bond->lock); 1969 1968 1970 1969 slave = bond_get_slave_by_dev(bond, slave_dev); ··· 2065 2066 write_unlock_bh(&bond->lock); 2066 2067 unblock_netpoll_tx(); 2067 2068 2068 - if (bond->slave_cnt == 0) 2069 + if (bond->slave_cnt == 0) { 2069 2070 call_netdevice_notifiers(NETDEV_CHANGEADDR, bond->dev); 2071 + call_netdevice_notifiers(NETDEV_RELEASE, bond->dev); 2072 + } 2070 2073 2071 2074 bond_compute_features(bond); 2072 2075 if (!(bond_dev->features & NETIF_F_VLAN_CHALLENGED) &&
+16 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c
··· 8647 8647 MDIO_WC_DEVAD, 8648 8648 MDIO_WC_REG_DIGITAL5_MISC6, 8649 8649 &rx_tx_in_reset); 8650 - if (!rx_tx_in_reset) { 8650 + if ((!rx_tx_in_reset) && 8651 + (params->link_flags & 8652 + PHY_INITIALIZED)) { 8651 8653 bnx2x_warpcore_reset_lane(bp, phy, 1); 8652 8654 bnx2x_warpcore_config_sfi(phy, params); 8653 8655 bnx2x_warpcore_reset_lane(bp, phy, 0); ··· 12529 12527 vars->flow_ctrl = BNX2X_FLOW_CTRL_NONE; 12530 12528 vars->mac_type = MAC_TYPE_NONE; 12531 12529 vars->phy_flags = 0; 12530 + vars->check_kr2_recovery_cnt = 0; 12531 + params->link_flags = PHY_INITIALIZED; 12532 12532 /* Driver opens NIG-BRB filters */ 12533 12533 bnx2x_set_rx_filter(params, 1); 12534 12534 /* Check if link flap can be avoided */ ··· 12695 12691 struct bnx2x *bp = params->bp; 12696 12692 vars->link_up = 0; 12697 12693 vars->phy_flags = 0; 12694 + params->link_flags &= ~PHY_INITIALIZED; 12698 12695 if (!params->lfa_base) 12699 12696 return bnx2x_link_reset(params, vars, 1); 12700 12697 /* ··· 13416 13411 vars->link_attr_sync &= ~LINK_ATTR_SYNC_KR2_ENABLE; 13417 13412 bnx2x_update_link_attr(params, vars->link_attr_sync); 13418 13413 13414 + vars->check_kr2_recovery_cnt = CHECK_KR2_RECOVERY_CNT; 13419 13415 /* Restart AN on leading lane */ 13420 13416 bnx2x_warpcore_restart_AN_KR(phy, params); 13421 13417 } ··· 13445 13439 return; 13446 13440 } 13447 13441 13442 + /* Once KR2 was disabled, wait 5 seconds before checking KR2 recovery 13443 + * since some switches tend to reinit the AN process and clear the 13444 + * advertised BP/NP after ~2 seconds causing the KR2 to be disabled 13445 + * and recovered many times 13446 + */ 13447 + if (vars->check_kr2_recovery_cnt > 0) { 13448 + vars->check_kr2_recovery_cnt--; 13449 + return; 13450 + } 13448 13451 lane = bnx2x_get_warpcore_lane(phy, params); 13449 13452 CL22_WR_OVER_CL45(bp, phy, MDIO_REG_BANK_AER_BLOCK, 13450 13453 MDIO_AER_BLOCK_AER_REG, lane);
+3 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h
··· 309 309 req_flow_ctrl is set to AUTO */ 310 310 u16 link_flags; 311 311 #define LINK_FLAGS_INT_DISABLED (1<<0) 312 + #define PHY_INITIALIZED (1<<1) 312 313 u32 lfa_base; 313 314 }; 314 315 ··· 343 342 u32 link_status; 344 343 u32 eee_status; 345 344 u8 fault_detected; 346 - u8 rsrv1; 345 + u8 check_kr2_recovery_cnt; 346 + #define CHECK_KR2_RECOVERY_CNT 5 347 347 u16 periodic_flags; 348 348 #define PERIODIC_FLAGS_LINK_EVENT 0x0001 349 349
+5 -9
drivers/net/ethernet/broadcom/tg3.c
··· 1870 1870 1871 1871 tg3_ump_link_report(tp); 1872 1872 } 1873 + 1874 + tp->link_up = netif_carrier_ok(tp->dev); 1873 1875 } 1874 1876 1875 1877 static u16 tg3_advert_flowctrl_1000X(u8 flow_ctrl) ··· 2525 2523 return err; 2526 2524 } 2527 2525 2528 - static void tg3_carrier_on(struct tg3 *tp) 2529 - { 2530 - netif_carrier_on(tp->dev); 2531 - tp->link_up = true; 2532 - } 2533 - 2534 2526 static void tg3_carrier_off(struct tg3 *tp) 2535 2527 { 2536 2528 netif_carrier_off(tp->dev); ··· 2550 2554 return -EBUSY; 2551 2555 2552 2556 if (netif_running(tp->dev) && tp->link_up) { 2553 - tg3_carrier_off(tp); 2557 + netif_carrier_off(tp->dev); 2554 2558 tg3_link_report(tp); 2555 2559 } 2556 2560 ··· 4399 4403 { 4400 4404 if (curr_link_up != tp->link_up) { 4401 4405 if (curr_link_up) { 4402 - tg3_carrier_on(tp); 4406 + netif_carrier_on(tp->dev); 4403 4407 } else { 4404 - tg3_carrier_off(tp); 4408 + netif_carrier_off(tp->dev); 4405 4409 if (tp->phy_flags & TG3_PHYFLG_MII_SERDES) 4406 4410 tp->phy_flags &= ~TG3_PHYFLG_PARALLEL_DETECT; 4407 4411 }
+9 -3
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
··· 497 497 } 498 498 499 499 #define EEPROM_STAT_ADDR 0x7bfc 500 - #define VPD_BASE 0 501 500 #define VPD_LEN 512 501 + #define VPD_BASE 0x400 502 + #define VPD_BASE_OLD 0 502 503 503 504 /** 504 505 * t4_seeprom_wp - enable/disable EEPROM write protection ··· 525 524 int get_vpd_params(struct adapter *adapter, struct vpd_params *p) 526 525 { 527 526 u32 cclk_param, cclk_val; 528 - int i, ret; 527 + int i, ret, addr; 529 528 int ec, sn; 530 529 u8 *vpd, csum; 531 530 unsigned int vpdr_len, kw_offset, id_len; ··· 534 533 if (!vpd) 535 534 return -ENOMEM; 536 535 537 - ret = pci_read_vpd(adapter->pdev, VPD_BASE, VPD_LEN, vpd); 536 + ret = pci_read_vpd(adapter->pdev, VPD_BASE, sizeof(u32), vpd); 537 + if (ret < 0) 538 + goto out; 539 + addr = *vpd == 0x82 ? VPD_BASE : VPD_BASE_OLD; 540 + 541 + ret = pci_read_vpd(adapter->pdev, addr, VPD_LEN, vpd); 538 542 if (ret < 0) 539 543 goto out; 540 544
+1
drivers/net/ethernet/emulex/benet/be.h
··· 349 349 struct pci_dev *pdev; 350 350 struct net_device *netdev; 351 351 352 + u8 __iomem *csr; /* CSR BAR used only for BE2/3 */ 352 353 u8 __iomem *db; /* Door Bell */ 353 354 354 355 struct mutex mbox_lock; /* For serializing mbox cmds to BE card */
+16 -20
drivers/net/ethernet/emulex/benet/be_cmds.c
··· 473 473 return 0; 474 474 } 475 475 476 - static int be_POST_stage_get(struct be_adapter *adapter, u16 *stage) 476 + static u16 be_POST_stage_get(struct be_adapter *adapter) 477 477 { 478 478 u32 sem; 479 - u32 reg = skyhawk_chip(adapter) ? SLIPORT_SEMAPHORE_OFFSET_SH : 480 - SLIPORT_SEMAPHORE_OFFSET_BE; 481 479 482 - pci_read_config_dword(adapter->pdev, reg, &sem); 483 - *stage = sem & POST_STAGE_MASK; 484 - 485 - if ((sem >> POST_ERR_SHIFT) & POST_ERR_MASK) 486 - return -1; 480 + if (BEx_chip(adapter)) 481 + sem = ioread32(adapter->csr + SLIPORT_SEMAPHORE_OFFSET_BEx); 487 482 else 488 - return 0; 483 + pci_read_config_dword(adapter->pdev, 484 + SLIPORT_SEMAPHORE_OFFSET_SH, &sem); 485 + 486 + return sem & POST_STAGE_MASK; 489 487 } 490 488 491 489 int lancer_wait_ready(struct be_adapter *adapter) ··· 577 579 } 578 580 579 581 do { 580 - status = be_POST_stage_get(adapter, &stage); 581 - if (status) { 582 - dev_err(dev, "POST error; stage=0x%x\n", stage); 583 - return -1; 584 - } else if (stage != POST_STAGE_ARMFW_RDY) { 585 - if (msleep_interruptible(2000)) { 586 - dev_err(dev, "Waiting for POST aborted\n"); 587 - return -EINTR; 588 - } 589 - timeout += 2; 590 - } else { 582 + stage = be_POST_stage_get(adapter); 583 + if (stage == POST_STAGE_ARMFW_RDY) 591 584 return 0; 585 + 586 + dev_info(dev, "Waiting for POST, %ds elapsed\n", 587 + timeout); 588 + if (msleep_interruptible(2000)) { 589 + dev_err(dev, "Waiting for POST aborted\n"); 590 + return -EINTR; 592 591 } 592 + timeout += 2; 593 593 } while (timeout < 60); 594 594 595 595 dev_err(dev, "POST timeout; stage=0x%x\n", stage);
+2 -2
drivers/net/ethernet/emulex/benet/be_hw.h
··· 32 32 #define MPU_EP_CONTROL 0 33 33 34 34 /********** MPU semphore: used for SH & BE *************/ 35 - #define SLIPORT_SEMAPHORE_OFFSET_BE 0x7c 36 - #define SLIPORT_SEMAPHORE_OFFSET_SH 0x94 35 + #define SLIPORT_SEMAPHORE_OFFSET_BEx 0xac /* CSR BAR offset */ 36 + #define SLIPORT_SEMAPHORE_OFFSET_SH 0x94 /* PCI-CFG offset */ 37 37 #define POST_STAGE_MASK 0x0000FFFF 38 38 #define POST_ERR_MASK 0x1 39 39 #define POST_ERR_SHIFT 31
+10
drivers/net/ethernet/emulex/benet/be_main.c
··· 3688 3688 3689 3689 static void be_unmap_pci_bars(struct be_adapter *adapter) 3690 3690 { 3691 + if (adapter->csr) 3692 + pci_iounmap(adapter->pdev, adapter->csr); 3691 3693 if (adapter->db) 3692 3694 pci_iounmap(adapter->pdev, adapter->db); 3693 3695 } ··· 3722 3720 pci_read_config_dword(adapter->pdev, SLI_INTF_REG_OFFSET, &sli_intf); 3723 3721 adapter->if_type = (sli_intf & SLI_INTF_IF_TYPE_MASK) >> 3724 3722 SLI_INTF_IF_TYPE_SHIFT; 3723 + 3724 + if (BEx_chip(adapter) && be_physfn(adapter)) { 3725 + adapter->csr = pci_iomap(adapter->pdev, 2, 0); 3726 + if (adapter->csr == NULL) 3727 + return -ENOMEM; 3728 + } 3725 3729 3726 3730 addr = pci_iomap(adapter->pdev, db_bar(adapter), 0); 3727 3731 if (addr == NULL) ··· 4337 4329 pci_restore_state(pdev); 4338 4330 4339 4331 /* Check if card is ok and fw is ready */ 4332 + dev_info(&adapter->pdev->dev, 4333 + "Waiting for FW to be ready after EEH reset\n"); 4340 4334 status = be_fw_wait_ready(adapter); 4341 4335 if (status) 4342 4336 return PCI_ERS_RESULT_DISCONNECT;
+13
drivers/net/ethernet/intel/e1000e/ethtool.c
··· 36 36 #include <linux/delay.h> 37 37 #include <linux/vmalloc.h> 38 38 #include <linux/mdio.h> 39 + #include <linux/pm_runtime.h> 39 40 40 41 #include "e1000.h" 41 42 ··· 2237 2236 return 0; 2238 2237 } 2239 2238 2239 + static int e1000e_ethtool_begin(struct net_device *netdev) 2240 + { 2241 + return pm_runtime_get_sync(netdev->dev.parent); 2242 + } 2243 + 2244 + static void e1000e_ethtool_complete(struct net_device *netdev) 2245 + { 2246 + pm_runtime_put_sync(netdev->dev.parent); 2247 + } 2248 + 2240 2249 static const struct ethtool_ops e1000_ethtool_ops = { 2250 + .begin = e1000e_ethtool_begin, 2251 + .complete = e1000e_ethtool_complete, 2241 2252 .get_settings = e1000_get_settings, 2242 2253 .set_settings = e1000_set_settings, 2243 2254 .get_drvinfo = e1000_get_drvinfo,
+70 -1
drivers/net/ethernet/intel/e1000e/ich8lan.c
··· 782 782 } 783 783 784 784 /** 785 + * e1000_k1_workaround_lpt_lp - K1 workaround on Lynxpoint-LP 786 + * @hw: pointer to the HW structure 787 + * @link: link up bool flag 788 + * 789 + * When K1 is enabled for 1Gbps, the MAC can miss 2 DMA completion indications 790 + * preventing further DMA write requests. Workaround the issue by disabling 791 + * the de-assertion of the clock request when in 1Gpbs mode. 792 + **/ 793 + static s32 e1000_k1_workaround_lpt_lp(struct e1000_hw *hw, bool link) 794 + { 795 + u32 fextnvm6 = er32(FEXTNVM6); 796 + s32 ret_val = 0; 797 + 798 + if (link && (er32(STATUS) & E1000_STATUS_SPEED_1000)) { 799 + u16 kmrn_reg; 800 + 801 + ret_val = hw->phy.ops.acquire(hw); 802 + if (ret_val) 803 + return ret_val; 804 + 805 + ret_val = 806 + e1000e_read_kmrn_reg_locked(hw, E1000_KMRNCTRLSTA_K1_CONFIG, 807 + &kmrn_reg); 808 + if (ret_val) 809 + goto release; 810 + 811 + ret_val = 812 + e1000e_write_kmrn_reg_locked(hw, 813 + E1000_KMRNCTRLSTA_K1_CONFIG, 814 + kmrn_reg & 815 + ~E1000_KMRNCTRLSTA_K1_ENABLE); 816 + if (ret_val) 817 + goto release; 818 + 819 + usleep_range(10, 20); 820 + 821 + ew32(FEXTNVM6, fextnvm6 | E1000_FEXTNVM6_REQ_PLL_CLK); 822 + 823 + ret_val = 824 + e1000e_write_kmrn_reg_locked(hw, 825 + E1000_KMRNCTRLSTA_K1_CONFIG, 826 + kmrn_reg); 827 + release: 828 + hw->phy.ops.release(hw); 829 + } else { 830 + /* clear FEXTNVM6 bit 8 on link down or 10/100 */ 831 + ew32(FEXTNVM6, fextnvm6 & ~E1000_FEXTNVM6_REQ_PLL_CLK); 832 + } 833 + 834 + return ret_val; 835 + } 836 + 837 + /** 785 838 * e1000_check_for_copper_link_ich8lan - Check for link (Copper) 786 839 * @hw: pointer to the HW structure 787 840 * ··· 867 814 868 815 if (hw->mac.type == e1000_pchlan) { 869 816 ret_val = e1000_k1_gig_workaround_hv(hw, link); 817 + if (ret_val) 818 + return ret_val; 819 + } 820 + 821 + /* Work-around I218 hang issue */ 822 + if ((hw->adapter->pdev->device == E1000_DEV_ID_PCH_LPTLP_I218_LM) || 823 + (hw->adapter->pdev->device == E1000_DEV_ID_PCH_LPTLP_I218_V)) {
824 + ret_val = e1000_k1_workaround_lpt_lp(hw, link); 870 825 if (ret_val) 871 826 return ret_val; 872 827 } ··· 4014 3953 4015 3954 phy_ctrl = er32(PHY_CTRL); 4016 3955 phy_ctrl |= E1000_PHY_CTRL_GBE_DISABLE; 3956 + 4017 3957 if (hw->phy.type == e1000_phy_i217) { 4018 - u16 phy_reg; 3958 + u16 phy_reg, device_id = hw->adapter->pdev->device; 3959 + 3960 + if ((device_id == E1000_DEV_ID_PCH_LPTLP_I218_LM) || 3961 + (device_id == E1000_DEV_ID_PCH_LPTLP_I218_V)) { 3962 + u32 fextnvm6 = er32(FEXTNVM6); 3963 + 3964 + ew32(FEXTNVM6, fextnvm6 & ~E1000_FEXTNVM6_REQ_PLL_CLK); 3965 + } 4019 3966 4020 3967 ret_val = hw->phy.ops.acquire(hw); 4021 3968 if (ret_val)
+2
drivers/net/ethernet/intel/e1000e/ich8lan.h
··· 92 92 #define E1000_FEXTNVM4_BEACON_DURATION_8USEC 0x7 93 93 #define E1000_FEXTNVM4_BEACON_DURATION_16USEC 0x3 94 94 95 + #define E1000_FEXTNVM6_REQ_PLL_CLK 0x00000100 96 + 95 97 #define PCIE_ICH8_SNOOP_ALL PCIE_NO_SNOOP_ALL 96 98 97 99 #define E1000_ICH_RAR_ENTRIES 7
+20 -61
drivers/net/ethernet/intel/e1000e/netdev.c
··· 4284 4284 netif_start_queue(netdev); 4285 4285 4286 4286 adapter->idle_check = true; 4287 + hw->mac.get_link_status = true; 4287 4288 pm_runtime_put(&pdev->dev); 4288 4289 4289 4290 /* fire a link status change interrupt to start the watchdog */ ··· 4643 4642 (adapter->hw.phy.media_type == e1000_media_type_copper)) { 4644 4643 int ret_val; 4645 4644 4645 + pm_runtime_get_sync(&adapter->pdev->dev); 4646 4646 ret_val = e1e_rphy(hw, MII_BMCR, &phy->bmcr); 4647 4647 ret_val |= e1e_rphy(hw, MII_BMSR, &phy->bmsr); 4648 4648 ret_val |= e1e_rphy(hw, MII_ADVERTISE, &phy->advertise); ··· 4654 4652 ret_val |= e1e_rphy(hw, MII_ESTATUS, &phy->estatus); 4655 4653 if (ret_val) 4656 4654 e_warn("Error reading PHY register\n"); 4655 + pm_runtime_put_sync(&adapter->pdev->dev); 4657 4656 } else { 4658 4657 /* Do not read PHY registers if link is not up 4659 4658 * Set values to typical power-on defaults ··· 5868 5865 return retval; 5869 5866 } 5870 5867 5871 - static int __e1000_shutdown(struct pci_dev *pdev, bool *enable_wake, 5872 - bool runtime) 5868 + static int __e1000_shutdown(struct pci_dev *pdev, bool runtime) 5873 5869 { 5874 5870 struct net_device *netdev = pci_get_drvdata(pdev); 5875 5871 struct e1000_adapter *adapter = netdev_priv(netdev); ··· 5891 5889 e1000_free_irq(adapter); 5892 5890 } 5893 5891 e1000e_reset_interrupt_capability(adapter); 5894 - 5895 - retval = pci_save_state(pdev); 5896 - if (retval) 5897 - return retval; 5898 5892 5899 5893 status = er32(STATUS); 5900 5894 if (status & E1000_STATUS_LU) ··· 5943 5945 ew32(WUFC, 0); 5944 5946 } 5945 5947 5946 - *enable_wake = !!wufc; 5947 - 5948 - /* make sure adapter isn't asleep if manageability is enabled */ 5949 - if ((adapter->flags & FLAG_MNG_PT_ENABLED) || 5950 - (hw->mac.ops.check_mng_mode(hw))) 5951 - *enable_wake = true; 5952 - 5953 5948 if (adapter->hw.phy.type == e1000_phy_igp_3) 5954 5949 e1000e_igp3_phy_powerdown_workaround_ich8lan(&adapter->hw); 5955 5950 ··· 5950 5959 * would have already happened in close and is redundant.
5951 5960 */ 5952 5961 e1000e_release_hw_control(adapter); 5953 - 5954 - pci_disable_device(pdev); 5955 - 5956 - return 0; 5957 - } 5958 - 5959 - static void e1000_power_off(struct pci_dev *pdev, bool sleep, bool wake) 5960 - { 5961 - if (sleep && wake) { 5962 - pci_prepare_to_sleep(pdev); 5963 - return; 5964 - } 5965 - 5966 - pci_wake_from_d3(pdev, wake); 5967 - pci_set_power_state(pdev, PCI_D3hot); 5968 - } 5969 - 5970 - static void e1000_complete_shutdown(struct pci_dev *pdev, bool sleep, bool wake) 5971 - { 5972 - struct net_device *netdev = pci_get_drvdata(pdev); 5973 - struct e1000_adapter *adapter = netdev_priv(netdev); 5974 5962 5975 5963 /* The pci-e switch on some quad port adapters will report a 5976 5964 * correctable error when the MAC transitions from D0 to D3. To ··· 5964 5994 pcie_capability_write_word(us_dev, PCI_EXP_DEVCTL, 5965 5995 (devctl & ~PCI_EXP_DEVCTL_CERE)); 5966 5996 5967 - e1000_power_off(pdev, sleep, wake); 5997 + pci_save_state(pdev); 5998 + pci_prepare_to_sleep(pdev); 5968 5999 5969 6000 pcie_capability_write_word(us_dev, PCI_EXP_DEVCTL, devctl); 5970 - } else { 5971 - e1000_power_off(pdev, sleep, wake); 5972 6001 } 6002 + 6003 + return 0; 5973 6004 } 5974 6005 5975 6006 #ifdef CONFIG_PCIEASPM ··· 6028 6057 if (aspm_disable_flag) 6029 6058 e1000e_disable_aspm(pdev, aspm_disable_flag); 6030 6059 6031 - pci_set_power_state(pdev, PCI_D0); 6032 - pci_restore_state(pdev); 6033 - pci_save_state(pdev); 6060 + pci_set_master(pdev); 6034 6061 6035 6062 e1000e_set_interrupt_capability(adapter); 6036 6063 if (netif_running(netdev)) { ··· 6094 6125 static int e1000_suspend(struct device *dev) 6095 6126 { 6096 6127 struct pci_dev *pdev = to_pci_dev(dev); 6097 - int retval; 6098 - bool wake; 6099 6128 6100 - retval = __e1000_shutdown(pdev, &wake, false); 6101 - if (!retval) 6102 - e1000_complete_shutdown(pdev, true, wake); 6103 - 6104 - return retval; 6129 + return __e1000_shutdown(pdev, false); 6105 6130 } 6106 6131 
6107 6132 static int e1000_resume(struct device *dev) ··· 6118 6155 struct net_device *netdev = pci_get_drvdata(pdev); 6119 6156 struct e1000_adapter *adapter = netdev_priv(netdev); 6120 6157 6121 - if (e1000e_pm_ready(adapter)) { 6122 - bool wake; 6158 + if (!e1000e_pm_ready(adapter)) 6159 + return 0; 6123 6160 6124 - __e1000_shutdown(pdev, &wake, true); 6125 - } 6126 - 6127 - return 0; 6161 + return __e1000_shutdown(pdev, true); 6128 6162 } 6129 6163 6130 6164 static int e1000_idle(struct device *dev) ··· 6159 6199 6160 6200 static void e1000_shutdown(struct pci_dev *pdev) 6161 6201 { 6162 - bool wake = false; 6163 - 6164 - __e1000_shutdown(pdev, &wake, false); 6165 - 6166 - if (system_state == SYSTEM_POWER_OFF) 6167 - e1000_complete_shutdown(pdev, false, wake); 6202 + __e1000_shutdown(pdev, false); 6168 6203 } 6169 6204 6170 6205 #ifdef CONFIG_NET_POLL_CONTROLLER ··· 6280 6325 "Cannot re-enable PCI device after reset.\n"); 6281 6326 result = PCI_ERS_RESULT_DISCONNECT; 6282 6327 } else { 6283 - pci_set_master(pdev); 6284 6328 pdev->state_saved = true; 6285 6329 pci_restore_state(pdev); 6330 + pci_set_master(pdev); 6286 6331 6287 6332 pci_enable_wake(pdev, PCI_D3hot, 0); 6288 6333 pci_enable_wake(pdev, PCI_D3cold, 0); ··· 6712 6757 6713 6758 /* initialize the wol settings based on the eeprom settings */ 6714 6759 adapter->wol = adapter->eeprom_wol; 6715 - device_set_wakeup_enable(&adapter->pdev->dev, adapter->wol); 6760 + 6761 + /* make sure adapter isn't asleep if manageability is enabled */ 6762 + if (adapter->wol || (adapter->flags & FLAG_MNG_PT_ENABLED) || 6763 + (hw->mac.ops.check_mng_mode(hw))) 6764 + device_wakeup_enable(&pdev->dev); 6716 6765 6717 6766 /* save off EEPROM version number */ 6718 6767 e1000_read_nvm(&adapter->hw, 5, 1, &adapter->eeprom_vers);
+1
drivers/net/ethernet/intel/e1000e/regs.h
··· 42 42 #define E1000_FEXTNVM 0x00028 /* Future Extended NVM - RW */ 43 43 #define E1000_FEXTNVM3 0x0003C /* Future Extended NVM 3 - RW */ 44 44 #define E1000_FEXTNVM4 0x00024 /* Future Extended NVM 4 - RW */ 45 + #define E1000_FEXTNVM6 0x00010 /* Future Extended NVM 6 - RW */ 45 46 #define E1000_FEXTNVM7 0x000E4 /* Future Extended NVM 7 - RW */ 46 47 #define E1000_FCT 0x00030 /* Flow Control Type - RW */ 47 48 #define E1000_VET 0x00038 /* VLAN Ether Type - RW */
+8 -3
drivers/net/ethernet/intel/igb/e1000_82575.c
··· 1361 1361 switch (hw->phy.type) { 1362 1362 case e1000_phy_i210: 1363 1363 case e1000_phy_m88: 1364 - if (hw->phy.id == I347AT4_E_PHY_ID || 1365 - hw->phy.id == M88E1112_E_PHY_ID) 1364 + switch (hw->phy.id) { 1365 + case I347AT4_E_PHY_ID: 1366 + case M88E1112_E_PHY_ID: 1367 + case I210_I_PHY_ID: 1366 1368 ret_val = igb_copper_link_setup_m88_gen2(hw); 1367 - else 1369 + break; 1370 + default: 1368 1371 ret_val = igb_copper_link_setup_m88(hw); 1372 + break; 1373 + } 1369 1374 break; 1370 1375 case e1000_phy_igp_3: 1371 1376 ret_val = igb_copper_link_setup_igp(hw);
+1 -1
drivers/net/ethernet/intel/igb/igb.h
··· 447 447 #endif 448 448 struct i2c_algo_bit_data i2c_algo; 449 449 struct i2c_adapter i2c_adap; 450 - struct igb_i2c_client_list *i2c_clients; 450 + struct i2c_client *i2c_client; 451 451 }; 452 452 453 453 #define IGB_FLAG_HAS_MSI (1 << 0)
+14
drivers/net/ethernet/intel/igb/igb_hwmon.c
··· 39 39 #include <linux/pci.h> 40 40 41 41 #ifdef CONFIG_IGB_HWMON 42 + struct i2c_board_info i350_sensor_info = { 43 + I2C_BOARD_INFO("i350bb", (0Xf8 >> 1)), 44 + }; 45 + 42 46 /* hwmon callback functions */ 43 47 static ssize_t igb_hwmon_show_location(struct device *dev, 44 48 struct device_attribute *attr, ··· 192 188 unsigned int i; 193 189 int n_attrs; 194 190 int rc = 0; 191 + struct i2c_client *client = NULL; 195 192 196 193 /* If this method isn't defined we don't support thermals */ 197 194 if (adapter->hw.mac.ops.init_thermal_sensor_thresh == NULL) ··· 202 197 rc = (adapter->hw.mac.ops.init_thermal_sensor_thresh(&adapter->hw)); 203 198 if (rc) 204 199 goto exit; 200 + 201 + /* init i2c_client */ 202 + client = i2c_new_device(&adapter->i2c_adap, &i350_sensor_info); 203 + if (client == NULL) { 204 + dev_info(&adapter->pdev->dev, 205 + "Failed to create new i2c device..\n"); 206 + goto exit; 207 + } 208 + adapter->i2c_client = client; 205 209 206 210 /* Allocation space for max attributes 207 211 * max num sensors * values (loc, temp, max, caution)
+2 -74
drivers/net/ethernet/intel/igb/igb_main.c
··· 1923 1923 return; 1924 1924 } 1925 1925 1926 - static const struct i2c_board_info i350_sensor_info = { 1927 - I2C_BOARD_INFO("i350bb", 0Xf8), 1928 - }; 1929 - 1930 1926 /* igb_init_i2c - Init I2C interface 1931 1927 * @adapter: pointer to adapter structure 1932 1928 * ··· 6223 6227 /* If we spanned a buffer we have a huge mess so test for it */ 6224 6228 BUG_ON(unlikely(!igb_test_staterr(rx_desc, E1000_RXD_STAT_EOP))); 6225 6229 6226 - /* Guarantee this function can be used by verifying buffer sizes */ 6227 - BUILD_BUG_ON(SKB_WITH_OVERHEAD(IGB_RX_BUFSZ) < (NET_SKB_PAD + 6228 - NET_IP_ALIGN + 6229 - IGB_TS_HDR_LEN + 6230 - ETH_FRAME_LEN + 6231 - ETH_FCS_LEN)); 6232 - 6233 6230 rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean]; 6234 6231 page = rx_buffer->page; 6235 6232 prefetchw(page); ··· 7713 7724 } 7714 7725 } 7715 7726 7716 - static DEFINE_SPINLOCK(i2c_clients_lock); 7717 - 7718 - /* igb_get_i2c_client - returns matching client 7719 - * in adapters's client list. 7720 - * @adapter: adapter struct 7721 - * @dev_addr: device address of i2c needed. 
7722 - */ 7723 - static struct i2c_client * 7724 - igb_get_i2c_client(struct igb_adapter *adapter, u8 dev_addr) 7725 - { 7726 - ulong flags; 7727 - struct igb_i2c_client_list *client_list; 7728 - struct i2c_client *client = NULL; 7729 - struct i2c_board_info client_info = { 7730 - I2C_BOARD_INFO("igb", 0x00), 7731 - }; 7732 - 7733 - spin_lock_irqsave(&i2c_clients_lock, flags); 7734 - client_list = adapter->i2c_clients; 7735 - 7736 - /* See if we already have an i2c_client */ 7737 - while (client_list) { 7738 - if (client_list->client->addr == (dev_addr >> 1)) { 7739 - client = client_list->client; 7740 - goto exit; 7741 - } else { 7742 - client_list = client_list->next; 7743 - } 7744 - } 7745 - 7746 - /* no client_list found, create a new one */ 7747 - client_list = kzalloc(sizeof(*client_list), GFP_ATOMIC); 7748 - if (client_list == NULL) 7749 - goto exit; 7750 - 7751 - /* dev_addr passed to us is left-shifted by 1 bit 7752 - * i2c_new_device call expects it to be flush to the right. 7753 - */ 7754 - client_info.addr = dev_addr >> 1; 7755 - client_info.platform_data = adapter; 7756 - client_list->client = i2c_new_device(&adapter->i2c_adap, &client_info); 7757 - if (client_list->client == NULL) { 7758 - dev_info(&adapter->pdev->dev, 7759 - "Failed to create new i2c device..\n"); 7760 - goto err_no_client; 7761 - } 7762 - 7763 - /* insert new client at head of list */ 7764 - client_list->next = adapter->i2c_clients; 7765 - adapter->i2c_clients = client_list; 7766 - 7767 - client = client_list->client; 7768 - goto exit; 7769 - 7770 - err_no_client: 7771 - kfree(client_list); 7772 - exit: 7773 - spin_unlock_irqrestore(&i2c_clients_lock, flags); 7774 - return client; 7775 - } 7776 - 7777 7727 /* igb_read_i2c_byte - Reads 8 bit word over I2C 7778 7728 * @hw: pointer to hardware structure 7779 7729 * @byte_offset: byte offset to read ··· 7726 7798 u8 dev_addr, u8 *data) 7727 7799 { 7728 7800 struct igb_adapter *adapter = container_of(hw, struct igb_adapter, hw); 7729 - struct i2c_client *this_client = igb_get_i2c_client(adapter, dev_addr);
7801 + struct i2c_client *this_client = adapter->i2c_client; 7730 7802 s32 status; 7731 7803 u16 swfw_mask = 0; 7732 7804 ··· 7763 7835 u8 dev_addr, u8 data) 7764 7836 { 7765 7837 struct igb_adapter *adapter = container_of(hw, struct igb_adapter, hw); 7766 - struct i2c_client *this_client = igb_get_i2c_client(adapter, dev_addr); 7838 + struct i2c_client *this_client = adapter->i2c_client; 7767 7839 s32 status; 7768 7840 u16 swfw_mask = E1000_SWFW_PHY0_SM; 7769 7841
+51 -4
drivers/net/ethernet/marvell/mv643xx_eth.c
··· 1081 1081 1082 1082 1083 1083 /* mii management interface *************************************************/ 1084 + static void mv643xx_adjust_pscr(struct mv643xx_eth_private *mp) 1085 + { 1086 + u32 pscr = rdlp(mp, PORT_SERIAL_CONTROL); 1087 + u32 autoneg_disable = FORCE_LINK_PASS | 1088 + DISABLE_AUTO_NEG_SPEED_GMII | 1089 + DISABLE_AUTO_NEG_FOR_FLOW_CTRL | 1090 + DISABLE_AUTO_NEG_FOR_DUPLEX; 1091 + 1092 + if (mp->phy->autoneg == AUTONEG_ENABLE) { 1093 + /* enable auto negotiation */ 1094 + pscr &= ~autoneg_disable; 1095 + goto out_write; 1096 + } 1097 + 1098 + pscr |= autoneg_disable; 1099 + 1100 + if (mp->phy->speed == SPEED_1000) { 1101 + /* force gigabit, half duplex not supported */ 1102 + pscr |= SET_GMII_SPEED_TO_1000; 1103 + pscr |= SET_FULL_DUPLEX_MODE; 1104 + goto out_write; 1105 + } 1106 + 1107 + pscr &= ~SET_GMII_SPEED_TO_1000; 1108 + 1109 + if (mp->phy->speed == SPEED_100) 1110 + pscr |= SET_MII_SPEED_TO_100; 1111 + else 1112 + pscr &= ~SET_MII_SPEED_TO_100; 1113 + 1114 + if (mp->phy->duplex == DUPLEX_FULL) 1115 + pscr |= SET_FULL_DUPLEX_MODE; 1116 + else 1117 + pscr &= ~SET_FULL_DUPLEX_MODE; 1118 + 1119 + out_write: 1120 + wrlp(mp, PORT_SERIAL_CONTROL, pscr); 1121 + } 1122 + 1084 1123 static irqreturn_t mv643xx_eth_err_irq(int irq, void *dev_id) 1085 1124 { 1086 1125 struct mv643xx_eth_shared_private *msp = dev_id; ··· 1538 1499 mv643xx_eth_set_settings(struct net_device *dev, struct ethtool_cmd *cmd) 1539 1500 { 1540 1501 struct mv643xx_eth_private *mp = netdev_priv(dev); 1502 + int ret; 1541 1503 1542 1504 if (mp->phy == NULL) 1543 1505 return -EINVAL; ··· 1548 1508 */ 1549 1509 cmd->advertising &= ~ADVERTISED_1000baseT_Half; 1550 1510 1551 - return phy_ethtool_sset(mp->phy, cmd); 1511 + ret = phy_ethtool_sset(mp->phy, cmd); 1512 + if (!ret) 1513 + mv643xx_adjust_pscr(mp); 1514 + return ret; 1552 1515 } 1553 1516 1554 1517 static void mv643xx_eth_get_drvinfo(struct net_device *dev, ··· 2485 2442 static int mv643xx_eth_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
2486 2443 { 2487 2444 struct mv643xx_eth_private *mp = netdev_priv(dev); 2445 + int ret; 2488 2446 2489 - if (mp->phy != NULL) 2490 - return phy_mii_ioctl(mp->phy, ifr, cmd); 2447 + if (mp->phy == NULL) 2448 + return -ENOTSUPP; 2491 2449 2492 - return -EOPNOTSUPP; 2450 + ret = phy_mii_ioctl(mp->phy, ifr, cmd); 2451 + if (!ret) 2452 + mv643xx_adjust_pscr(mp); 2453 + return ret; 2493 2454 } 2494 2455 2495 2456 static int mv643xx_eth_change_mtu(struct net_device *dev, int new_mtu)
+1 -1
drivers/net/ethernet/mellanox/mlx4/cq.c
··· 226 226 227 227 static void mlx4_cq_free_icm(struct mlx4_dev *dev, int cqn) 228 228 { 229 - u64 in_param; 229 + u64 in_param = 0; 230 230 int err; 231 231 232 232 if (mlx4_is_mfunc(dev)) {
+46 -40
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 565 565 struct mlx4_en_dev *mdev = priv->mdev; 566 566 struct mlx4_dev *dev = mdev->dev; 567 567 int qpn = priv->base_qpn; 568 - u64 mac = mlx4_en_mac_to_u64(priv->dev->dev_addr); 568 + u64 mac; 569 569 570 - en_dbg(DRV, priv, "Registering MAC: %pM for deleting\n", 571 - priv->dev->dev_addr); 572 - mlx4_unregister_mac(dev, priv->port, mac); 573 - 574 - if (dev->caps.steering_mode != MLX4_STEERING_MODE_A0) { 570 + if (dev->caps.steering_mode == MLX4_STEERING_MODE_A0) { 571 + mac = mlx4_en_mac_to_u64(priv->dev->dev_addr); 572 + en_dbg(DRV, priv, "Registering MAC: %pM for deleting\n", 573 + priv->dev->dev_addr); 574 + mlx4_unregister_mac(dev, priv->port, mac); 575 + } else { 575 576 struct mlx4_mac_entry *entry; 576 577 struct hlist_node *tmp; 577 578 struct hlist_head *bucket; 578 - unsigned int mac_hash; 579 + unsigned int i; 579 580 580 - mac_hash = priv->dev->dev_addr[MLX4_EN_MAC_HASH_IDX]; 581 - bucket = &priv->mac_hash[mac_hash]; 582 - hlist_for_each_entry_safe(entry, tmp, bucket, hlist) { 583 - if (ether_addr_equal_64bits(entry->mac, 584 - priv->dev->dev_addr)) { 585 - en_dbg(DRV, priv, "Releasing qp: port %d, MAC %pM, qpn %d\n", 586 - priv->port, priv->dev->dev_addr, qpn); 581 + for (i = 0; i < MLX4_EN_MAC_HASH_SIZE; ++i) { 582 + bucket = &priv->mac_hash[i]; 583 + hlist_for_each_entry_safe(entry, tmp, bucket, hlist) { 584 + mac = mlx4_en_mac_to_u64(entry->mac); 585 + en_dbg(DRV, priv, "Registering MAC: %pM for deleting\n", 586 + entry->mac); 587 587 mlx4_en_uc_steer_release(priv, entry->mac, 588 588 qpn, entry->reg_id); 589 - mlx4_qp_release_range(dev, qpn, 1); 590 589 590 + mlx4_unregister_mac(dev, priv->port, mac); 591 591 hlist_del_rcu(&entry->hlist); 592 592 kfree_rcu(entry, rcu); 593 - break; 594 593 } 595 594 } 595 + 596 + en_dbg(DRV, priv, "Releasing qp: port %d, qpn %d\n", 597 + priv->port, qpn); 598 + mlx4_qp_release_range(dev, qpn, 1); 599 + priv->flags &= ~MLX4_EN_FLAG_FORCE_PROMISC; 596 600 } 597 601 } 598 602 ··· 654 650 return mac; 655 651 } 
656 652 657 - static int mlx4_en_set_mac(struct net_device *dev, void *addr) 653 + static int mlx4_en_do_set_mac(struct mlx4_en_priv *priv) 658 654 { 659 - struct mlx4_en_priv *priv = netdev_priv(dev); 660 - struct mlx4_en_dev *mdev = priv->mdev; 661 - struct sockaddr *saddr = addr; 662 - 663 - if (!is_valid_ether_addr(saddr->sa_data)) 664 - return -EADDRNOTAVAIL; 665 - 666 - memcpy(dev->dev_addr, saddr->sa_data, ETH_ALEN); 667 - queue_work(mdev->workqueue, &priv->mac_task); 668 - return 0; 669 - } 670 - 671 - static void mlx4_en_do_set_mac(struct work_struct *work) 672 - { 673 - struct mlx4_en_priv *priv = container_of(work, struct mlx4_en_priv, 674 - mac_task); 675 - struct mlx4_en_dev *mdev = priv->mdev; 676 655 int err = 0; 677 656 678 - mutex_lock(&mdev->state_lock); 679 657 if (priv->port_up) { 680 658 /* Remove old MAC and insert the new one */ 681 659 err = mlx4_en_replace_mac(priv, priv->base_qpn, ··· 669 683 } else 670 684 en_dbg(HW, priv, "Port is down while registering mac, exiting...\n"); 671 685 686 + return err; 687 + } 688 + 689 + static int mlx4_en_set_mac(struct net_device *dev, void *addr) 690 + { 691 + struct mlx4_en_priv *priv = netdev_priv(dev); 692 + struct mlx4_en_dev *mdev = priv->mdev; 693 + struct sockaddr *saddr = addr; 694 + int err; 695 + 696 + if (!is_valid_ether_addr(saddr->sa_data)) 697 + return -EADDRNOTAVAIL; 698 + 699 + memcpy(dev->dev_addr, saddr->sa_data, ETH_ALEN); 700 + 701 + mutex_lock(&mdev->state_lock); 702 + err = mlx4_en_do_set_mac(priv); 672 703 mutex_unlock(&mdev->state_lock); 704 + 705 + return err; 673 706 } 674 707 675 708 static void mlx4_en_clear_list(struct net_device *dev) ··· 1353 1348 queue_delayed_work(mdev->workqueue, &priv->stats_task, STATS_DELAY); 1354 1349 } 1355 1350 if (mdev->mac_removed[MLX4_MAX_PORTS + 1 - priv->port]) { 1356 - queue_work(mdev->workqueue, &priv->mac_task); 1351 + mlx4_en_do_set_mac(priv); 1357 1352 mdev->mac_removed[MLX4_MAX_PORTS + 1 - priv->port] = 0; 1358 1353 } 1359 1354 mutex_unlock(&mdev->state_lock);
··· 1833 1828 } 1834 1829 1835 1830 #ifdef CONFIG_RFS_ACCEL 1836 - priv->dev->rx_cpu_rmap = alloc_irq_cpu_rmap(priv->mdev->dev->caps.comp_pool); 1837 - if (!priv->dev->rx_cpu_rmap) 1838 - goto err; 1831 + if (priv->mdev->dev->caps.comp_pool) { 1832 + priv->dev->rx_cpu_rmap = alloc_irq_cpu_rmap(priv->mdev->dev->caps.comp_pool); 1833 + if (!priv->dev->rx_cpu_rmap) 1834 + goto err; 1835 + } 1836 + #endif 1839 1837 1840 1838 return 0; ··· 2009 2002 priv->msg_enable = MLX4_EN_MSG_LEVEL; 2010 2003 spin_lock_init(&priv->stats_lock); 2011 2004 INIT_WORK(&priv->rx_mode_task, mlx4_en_do_set_rx_mode); 2012 - INIT_WORK(&priv->mac_task, mlx4_en_do_set_mac); 2013 2006 INIT_WORK(&priv->watchdog_task, mlx4_en_restart); 2014 2007 INIT_WORK(&priv->linkstate_task, mlx4_en_linkstate); 2015 2008 INIT_DELAYED_WORK(&priv->stats_task, mlx4_en_do_get_stats);
+8
drivers/net/ethernet/mellanox/mlx4/fw.c
··· 787 787 bmme_flags &= ~MLX4_BMME_FLAG_TYPE_2_WIN; 788 788 MLX4_PUT(outbox->buf, bmme_flags, QUERY_DEV_CAP_BMME_FLAGS_OFFSET); 789 789 790 + /* turn off device-managed steering capability if not enabled */ 791 + if (dev->caps.steering_mode != MLX4_STEERING_MODE_DEVICE_MANAGED) { 792 + MLX4_GET(field, outbox->buf, 793 + QUERY_DEV_CAP_FLOW_STEERING_RANGE_EN_OFFSET); 794 + field &= 0x7f; 795 + MLX4_PUT(outbox->buf, field, 796 + QUERY_DEV_CAP_FLOW_STEERING_RANGE_EN_OFFSET); 797 + } 790 798 return 0; 791 799 } 792 800
+1 -1
drivers/net/ethernet/mellanox/mlx4/main.c
··· 1555 1555 1556 1556 void mlx4_counter_free(struct mlx4_dev *dev, u32 idx) 1557 1557 { 1558 - u64 in_param; 1558 + u64 in_param = 0; 1559 1559 1560 1560 if (mlx4_is_mfunc(dev)) { 1561 1561 set_param_l(&in_param, idx);
+1 -1
drivers/net/ethernet/mellanox/mlx4/mlx4.h
··· 1235 1235 1236 1236 static inline void set_param_l(u64 *arg, u32 val) 1237 1237 { 1238 - *((u32 *)arg) = val; 1238 + *arg = (*arg & 0xffffffff00000000ULL) | (u64) val; 1239 1239 } 1240 1240 1241 1241 static inline void set_param_h(u64 *arg, u32 val)
-1
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
··· 509 509 struct mlx4_en_cq rx_cq[MAX_RX_RINGS]; 510 510 struct mlx4_qp drop_qp; 511 511 struct work_struct rx_mode_task; 512 - struct work_struct mac_task; 513 512 struct work_struct watchdog_task; 514 513 struct work_struct linkstate_task; 515 514 struct delayed_work stats_task;
+5 -5
drivers/net/ethernet/mellanox/mlx4/mr.c
··· 183 183 184 184 static u32 mlx4_alloc_mtt_range(struct mlx4_dev *dev, int order) 185 185 { 186 - u64 in_param; 186 + u64 in_param = 0; 187 187 u64 out_param; 188 188 int err; 189 189 ··· 240 240 241 241 static void mlx4_free_mtt_range(struct mlx4_dev *dev, u32 offset, int order) 242 242 { 243 - u64 in_param; 243 + u64 in_param = 0; 244 244 int err; 245 245 246 246 if (mlx4_is_mfunc(dev)) { ··· 351 351 352 352 static void mlx4_mpt_release(struct mlx4_dev *dev, u32 index) 353 353 { 354 - u64 in_param; 354 + u64 in_param = 0; 355 355 356 356 if (mlx4_is_mfunc(dev)) { 357 357 set_param_l(&in_param, index); ··· 374 374 375 375 static int mlx4_mpt_alloc_icm(struct mlx4_dev *dev, u32 index) 376 376 { 377 - u64 param; 377 + u64 param = 0; 378 378 379 379 if (mlx4_is_mfunc(dev)) { 380 380 set_param_l(&param, index); ··· 395 395 396 396 static void mlx4_mpt_free_icm(struct mlx4_dev *dev, u32 index) 397 397 { 398 - u64 in_param; 398 + u64 in_param = 0; 399 399 400 400 if (mlx4_is_mfunc(dev)) { 401 401 set_param_l(&in_param, index);
+1 -1
drivers/net/ethernet/mellanox/mlx4/pd.c
··· 101 101 102 102 void mlx4_xrcd_free(struct mlx4_dev *dev, u32 xrcdn) 103 103 { 104 - u64 in_param; 104 + u64 in_param = 0; 105 105 int err; 106 106 107 107 if (mlx4_is_mfunc(dev)) {
+4 -4
drivers/net/ethernet/mellanox/mlx4/port.c
··· 175 175 176 176 int mlx4_register_mac(struct mlx4_dev *dev, u8 port, u64 mac) 177 177 { 178 - u64 out_param; 178 + u64 out_param = 0; 179 179 int err; 180 180 181 181 if (mlx4_is_mfunc(dev)) { ··· 222 222 223 223 void mlx4_unregister_mac(struct mlx4_dev *dev, u8 port, u64 mac) 224 224 { 225 - u64 out_param; 225 + u64 out_param = 0; 226 226 227 227 if (mlx4_is_mfunc(dev)) { 228 228 set_param_l(&out_param, port); ··· 361 361 362 362 int mlx4_register_vlan(struct mlx4_dev *dev, u8 port, u16 vlan, int *index) 363 363 { 364 - u64 out_param; 364 + u64 out_param = 0; 365 365 int err; 366 366 367 367 if (mlx4_is_mfunc(dev)) { ··· 406 406 407 407 void mlx4_unregister_vlan(struct mlx4_dev *dev, u8 port, int index) 408 408 { 409 - u64 in_param; 409 + u64 in_param = 0; 410 410 int err; 411 411 412 412 if (mlx4_is_mfunc(dev)) {
+4 -4
drivers/net/ethernet/mellanox/mlx4/qp.c
··· 222 222 223 223 int mlx4_qp_reserve_range(struct mlx4_dev *dev, int cnt, int align, int *base) 224 224 { 225 - u64 in_param; 225 + u64 in_param = 0; 226 226 u64 out_param; 227 227 int err; 228 228 ··· 255 255 256 256 void mlx4_qp_release_range(struct mlx4_dev *dev, int base_qpn, int cnt) 257 257 { 258 - u64 in_param; 258 + u64 in_param = 0; 259 259 int err; 260 260 261 261 if (mlx4_is_mfunc(dev)) { ··· 319 319 320 320 static int mlx4_qp_alloc_icm(struct mlx4_dev *dev, int qpn) 321 321 { 322 - u64 param; 322 + u64 param = 0; 323 323 324 324 if (mlx4_is_mfunc(dev)) { 325 325 set_param_l(&param, qpn); ··· 344 344 345 345 static void mlx4_qp_free_icm(struct mlx4_dev *dev, int qpn) 346 346 { 347 - u64 in_param; 347 + u64 in_param = 0; 348 348 349 349 if (mlx4_is_mfunc(dev)) { 350 350 set_param_l(&in_param, qpn);
+3
drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
··· 2990 2990 u8 steer_type_mask = 2; 2991 2991 enum mlx4_steer_type type = (gid[7] & steer_type_mask) >> 1; 2992 2992 2993 + if (dev->caps.steering_mode != MLX4_STEERING_MODE_B0) 2994 + return -EINVAL; 2995 + 2993 2996 qpn = vhcr->in_modifier & 0xffffff; 2994 2997 err = get_res(dev, slave, qpn, RES_QP, &rqp); 2995 2998 if (err)
+1 -1
drivers/net/ethernet/mellanox/mlx4/srq.c
··· 149 149 150 150 static void mlx4_srq_free_icm(struct mlx4_dev *dev, int srqn) 151 151 { 152 - u64 in_param; 152 + u64 in_param = 0; 153 153 154 154 if (mlx4_is_mfunc(dev)) { 155 155 set_param_l(&in_param, srqn);
+3
drivers/net/hippi/rrunner.c
··· 202 202 return 0; 203 203 204 204 out: 205 + if (rrpriv->evt_ring) 206 + pci_free_consistent(pdev, EVT_RING_SIZE, rrpriv->evt_ring, 207 + rrpriv->evt_ring_dma); 205 208 if (rrpriv->rx_ring) 206 209 pci_free_consistent(pdev, RX_TOTAL_SIZE, rrpriv->rx_ring, 207 210 rrpriv->rx_ring_dma);
+1
drivers/net/macvlan.c
··· 660 660 ether_setup(dev); 661 661 662 662 dev->priv_flags &= ~(IFF_XMIT_DST_RELEASE | IFF_TX_SKB_SHARING); 663 + dev->priv_flags |= IFF_UNICAST_FLT; 663 664 dev->netdev_ops = &macvlan_netdev_ops; 664 665 dev->destructor = free_netdev; 665 666 dev->header_ops = &macvlan_hard_header_ops,
+2
drivers/net/team/team.c
··· 1151 1151 netdev_upper_dev_unlink(port_dev, dev); 1152 1152 team_port_disable_netpoll(port); 1153 1153 vlan_vids_del_by_dev(port_dev, dev); 1154 + dev_uc_unsync(port_dev, dev); 1155 + dev_mc_unsync(port_dev, dev); 1154 1156 dev_close(port_dev); 1155 1157 team_port_leave(team, port); 1156 1158
+2
drivers/net/tun.c
··· 747 747 goto drop; 748 748 skb_orphan(skb); 749 749 750 + nf_reset(skb); 751 + 750 752 /* Enqueue packet */ 751 753 skb_queue_tail(&tfile->socket.sk->sk_receive_queue, skb); 752 754
+1
drivers/net/vmxnet3/vmxnet3_drv.c
··· 2958 2958 2959 2959 adapter->num_rx_queues = num_rx_queues; 2960 2960 adapter->num_tx_queues = num_tx_queues; 2961 + adapter->rx_buf_per_pkt = 1; 2961 2962 2962 2963 size = sizeof(struct Vmxnet3_TxQueueDesc) * adapter->num_tx_queues; 2963 2964 size += sizeof(struct Vmxnet3_RxQueueDesc) * adapter->num_rx_queues;
+6
drivers/net/vmxnet3/vmxnet3_ethtool.c
··· 472 472 VMXNET3_RX_RING_MAX_SIZE) 473 473 return -EINVAL; 474 474 475 + /* if adapter not yet initialized, do nothing */ 476 + if (adapter->rx_buf_per_pkt == 0) { 477 + netdev_err(netdev, "adapter not completely initialized, " 478 + "ring size cannot be changed yet\n"); 479 + return -EOPNOTSUPP; 480 + } 475 481 476 482 /* round it up to a multiple of VMXNET3_RING_SIZE_ALIGN */ 477 483 new_tx_ring_size = (param->tx_pending + VMXNET3_RING_SIZE_MASK) &
+2 -2
drivers/net/vmxnet3/vmxnet3_int.h
··· 70 70 /* 71 71 * Version numbers 72 72 */ 73 - #define VMXNET3_DRIVER_VERSION_STRING "1.1.29.0-k" 73 + #define VMXNET3_DRIVER_VERSION_STRING "1.1.30.0-k" 74 74 75 75 /* a 32-bit int, each byte encode a verion number in VMXNET3_DRIVER_VERSION */ 76 - #define VMXNET3_DRIVER_VERSION_NUM 0x01011D00 76 + #define VMXNET3_DRIVER_VERSION_NUM 0x01011E00 77 77 78 78 #if defined(CONFIG_PCI_MSI) 79 79 /* RSS only makes sense if MSI-X is supported. */
+10
drivers/net/vxlan.c
··· 974 974 iph->ttl = ttl ? : ip4_dst_hoplimit(&rt->dst); 975 975 tunnel_ip_select_ident(skb, old_iph, &rt->dst); 976 976 977 + nf_reset(skb); 978 + 977 979 vxlan_set_owner(dev, skb); 978 980 979 981 if (handle_offloads(skb)) ··· 1509 1507 static __net_exit void vxlan_exit_net(struct net *net) 1510 1508 { 1511 1509 struct vxlan_net *vn = net_generic(net, vxlan_net_id); 1510 + struct vxlan_dev *vxlan; 1511 + unsigned h; 1512 + 1513 + rtnl_lock(); 1514 + for (h = 0; h < VNI_HASH_SIZE; ++h) 1515 + hlist_for_each_entry(vxlan, &vn->vni_list[h], hlist) 1516 + dev_close(vxlan->dev); 1517 + rtnl_unlock(); 1512 1518 1513 1519 if (vn->sock) { 1514 1520 sk_release_kernel(vn->sock->sk);
+1 -1
drivers/net/wireless/iwlwifi/dvm/sta.c
··· 151 151 sta_id, sta->sta.addr, flags & CMD_ASYNC ? "a" : ""); 152 152 153 153 if (!(flags & CMD_ASYNC)) { 154 - cmd.flags |= CMD_WANT_SKB | CMD_WANT_HCMD; 154 + cmd.flags |= CMD_WANT_SKB; 155 155 might_sleep(); 156 156 } 157 157
+1 -1
drivers/net/wireless/iwlwifi/iwl-devtrace.h
··· 363 363 __entry->flags = cmd->flags; 364 364 memcpy(__get_dynamic_array(hcmd), hdr, sizeof(*hdr)); 365 365 366 - for (i = 0; i < IWL_MAX_CMD_TFDS; i++) { 366 + for (i = 0; i < IWL_MAX_CMD_TBS_PER_TFD; i++) { 367 367 if (!cmd->len[i]) 368 368 continue; 369 369 memcpy((u8 *)__get_dynamic_array(hcmd) + offset,
+1 -2
drivers/net/wireless/iwlwifi/iwl-drv.c
··· 1102 1102 1103 1103 /* shared module parameters */ 1104 1104 struct iwl_mod_params iwlwifi_mod_params = { 1105 - .amsdu_size_8K = 1, 1106 1105 .restart_fw = 1, 1107 1106 .plcp_check = true, 1108 1107 .bt_coex_active = true, ··· 1206 1207 "disable 11n functionality, bitmap: 1: full, 2: agg TX, 4: agg RX"); 1207 1208 module_param_named(amsdu_size_8K, iwlwifi_mod_params.amsdu_size_8K, 1208 1209 int, S_IRUGO); 1209 - MODULE_PARM_DESC(amsdu_size_8K, "enable 8K amsdu size"); 1210 + MODULE_PARM_DESC(amsdu_size_8K, "enable 8K amsdu size (default 0)"); 1210 1211 module_param_named(fw_restart, iwlwifi_mod_params.restart_fw, int, S_IRUGO); 1211 1212 MODULE_PARM_DESC(fw_restart, "restart firmware in case of error"); 1212 1213
+1 -1
drivers/net/wireless/iwlwifi/iwl-modparams.h
··· 91 91 * @sw_crypto: using hardware encryption, default = 0 92 92 * @disable_11n: disable 11n capabilities, default = 0, 93 93 * use IWL_DISABLE_HT_* constants 94 - * @amsdu_size_8K: enable 8K amsdu size, default = 1 94 + * @amsdu_size_8K: enable 8K amsdu size, default = 0 95 95 * @restart_fw: restart firmware, default = 1 96 96 * @plcp_check: enable plcp health check, default = true 97 97 * @wd_disable: enable stuck queue check, default = 0
+9 -11
drivers/net/wireless/iwlwifi/iwl-trans.h
··· 186 186 * @CMD_ASYNC: Return right away and don't want for the response 187 187 * @CMD_WANT_SKB: valid only with CMD_SYNC. The caller needs the buffer of the 188 188 * response. The caller needs to call iwl_free_resp when done. 189 - * @CMD_WANT_HCMD: The caller needs to get the HCMD that was sent in the 190 - * response handler. Chunks flagged by %IWL_HCMD_DFL_NOCOPY won't be 191 - * copied. The pointer passed to the response handler is in the transport 192 - * ownership and don't need to be freed by the op_mode. This also means 193 - * that the pointer is invalidated after the op_mode's handler returns. 194 189 * @CMD_ON_DEMAND: This command is sent by the test mode pipe. 195 190 */ 196 191 enum CMD_MODE { 197 192 CMD_SYNC = 0, 198 193 CMD_ASYNC = BIT(0), 199 194 CMD_WANT_SKB = BIT(1), 200 - CMD_WANT_HCMD = BIT(2), 201 - CMD_ON_DEMAND = BIT(3), 195 + CMD_ON_DEMAND = BIT(2), 202 196 }; 203 197 204 198 #define DEF_CMD_PAYLOAD_SIZE 320 ··· 211 217 212 218 #define TFD_MAX_PAYLOAD_SIZE (sizeof(struct iwl_device_cmd)) 213 219 214 - #define IWL_MAX_CMD_TFDS 2 220 + /* 221 + * number of transfer buffers (fragments) per transmit frame descriptor; 222 + * this is just the driver's idea, the hardware supports 20 223 + */ 224 + #define IWL_MAX_CMD_TBS_PER_TFD 2 215 225 216 226 /** 217 227 * struct iwl_hcmd_dataflag - flag for each one of the chunks of the command ··· 252 254 * @id: id of the host command 253 255 */ 254 256 struct iwl_host_cmd { 255 - const void *data[IWL_MAX_CMD_TFDS]; 257 + const void *data[IWL_MAX_CMD_TBS_PER_TFD]; 256 258 struct iwl_rx_packet *resp_pkt; 257 259 unsigned long _rx_page_addr; 258 260 u32 _rx_page_order; 259 261 int handler_status; 260 262 261 263 u32 flags; 262 - u16 len[IWL_MAX_CMD_TFDS]; 263 - u8 dataflags[IWL_MAX_CMD_TFDS]; 264 + u16 len[IWL_MAX_CMD_TBS_PER_TFD]; 265 + u8 dataflags[IWL_MAX_CMD_TBS_PER_TFD]; 264 266 u8 id; 265 267 }; 266 268
+10 -8
drivers/net/wireless/iwlwifi/mvm/fw-api.h
··· 762 762 #define IWL_RX_INFO_PHY_CNT 8 763 763 #define IWL_RX_INFO_AGC_IDX 1 764 764 #define IWL_RX_INFO_RSSI_AB_IDX 2 765 - #define IWL_RX_INFO_RSSI_C_IDX 3 766 - #define IWL_OFDM_AGC_DB_MSK 0xfe00 767 - #define IWL_OFDM_AGC_DB_POS 9 765 + #define IWL_OFDM_AGC_A_MSK 0x0000007f 766 + #define IWL_OFDM_AGC_A_POS 0 767 + #define IWL_OFDM_AGC_B_MSK 0x00003f80 768 + #define IWL_OFDM_AGC_B_POS 7 769 + #define IWL_OFDM_AGC_CODE_MSK 0x3fe00000 770 + #define IWL_OFDM_AGC_CODE_POS 20 768 771 #define IWL_OFDM_RSSI_INBAND_A_MSK 0x00ff 769 - #define IWL_OFDM_RSSI_ALLBAND_A_MSK 0xff00 770 772 #define IWL_OFDM_RSSI_A_POS 0 773 + #define IWL_OFDM_RSSI_ALLBAND_A_MSK 0xff00 774 + #define IWL_OFDM_RSSI_ALLBAND_A_POS 8 771 775 #define IWL_OFDM_RSSI_INBAND_B_MSK 0xff0000 772 - #define IWL_OFDM_RSSI_ALLBAND_B_MSK 0xff000000 773 776 #define IWL_OFDM_RSSI_B_POS 16 774 - #define IWL_OFDM_RSSI_INBAND_C_MSK 0x00ff 775 - #define IWL_OFDM_RSSI_ALLBAND_C_MSK 0xff00 776 - #define IWL_OFDM_RSSI_C_POS 0 777 + #define IWL_OFDM_RSSI_ALLBAND_B_MSK 0xff000000 778 + #define IWL_OFDM_RSSI_ALLBAND_B_POS 24 777 779 778 780 /** 779 781 * struct iwl_rx_phy_info - phy info
+5 -128
drivers/net/wireless/iwlwifi/mvm/fw.c
··· 79 79 #define UCODE_VALID_OK cpu_to_le32(0x1) 80 80 81 81 /* Default calibration values for WkP - set to INIT image w/o running */ 82 - static const u8 wkp_calib_values_bb_filter[] = { 0xbf, 0x00, 0x5f, 0x00, 0x2f, 83 - 0x00, 0x18, 0x00 }; 84 - static const u8 wkp_calib_values_rx_dc[] = { 0x7f, 0x7f, 0x7f, 0x7f, 0x7f, 85 - 0x7f, 0x7f, 0x7f }; 86 - static const u8 wkp_calib_values_tx_lo[] = { 0x00, 0x00, 0x00, 0x00 }; 87 - static const u8 wkp_calib_values_tx_iq[] = { 0xff, 0x00, 0xff, 0x00, 0x00, 88 - 0x00 }; 89 - static const u8 wkp_calib_values_rx_iq[] = { 0xff, 0x00, 0x00, 0x00 }; 90 82 static const u8 wkp_calib_values_rx_iq_skew[] = { 0x00, 0x00, 0x01, 0x00 }; 91 83 static const u8 wkp_calib_values_tx_iq_skew[] = { 0x01, 0x00, 0x00, 0x00 }; 92 - static const u8 wkp_calib_values_xtal[] = { 0xd2, 0xd2 }; 93 84 94 85 struct iwl_calib_default_data { 95 86 u16 size; ··· 90 99 #define CALIB_SIZE_N_DATA(_buf) {.size = sizeof(_buf), .data = &_buf} 91 100 92 101 static const struct iwl_calib_default_data wkp_calib_default_data[12] = { 93 - [5] = CALIB_SIZE_N_DATA(wkp_calib_values_rx_dc), 94 - [6] = CALIB_SIZE_N_DATA(wkp_calib_values_bb_filter), 95 - [7] = CALIB_SIZE_N_DATA(wkp_calib_values_tx_lo), 96 - [8] = CALIB_SIZE_N_DATA(wkp_calib_values_tx_iq), 97 102 [9] = CALIB_SIZE_N_DATA(wkp_calib_values_tx_iq_skew), 98 - [10] = CALIB_SIZE_N_DATA(wkp_calib_values_rx_iq), 99 103 [11] = CALIB_SIZE_N_DATA(wkp_calib_values_rx_iq_skew), 100 104 }; 101 105 ··· 227 241 228 242 return 0; 229 243 } 230 - #define IWL_HW_REV_ID_RAINBOW 0x2 231 - #define IWL_PROJ_TYPE_LHP 0x5 232 - 233 - static u32 iwl_mvm_build_phy_cfg(struct iwl_mvm *mvm) 234 - { 235 - struct iwl_nvm_data *data = mvm->nvm_data; 236 - /* Temp calls to static definitions, will be changed to CSR calls */ 237 - u8 hw_rev_id = IWL_HW_REV_ID_RAINBOW; 238 - u8 project_type = IWL_PROJ_TYPE_LHP; 239 - 240 - return data->radio_cfg_dash | (data->radio_cfg_step << 2) | 241 - (hw_rev_id << 4) | ((project_type & 0x7f) << 6) | 242 
- (data->valid_tx_ant << 16) | (data->valid_rx_ant << 20); 243 - } 244 244 245 245 static int iwl_send_phy_cfg_cmd(struct iwl_mvm *mvm) 246 246 { ··· 234 262 enum iwl_ucode_type ucode_type = mvm->cur_ucode; 235 263 236 264 /* Set parameters */ 237 - phy_cfg_cmd.phy_cfg = cpu_to_le32(iwl_mvm_build_phy_cfg(mvm)); 265 + phy_cfg_cmd.phy_cfg = cpu_to_le32(mvm->fw->phy_config); 238 266 phy_cfg_cmd.calib_control.event_trigger = 239 267 mvm->fw->default_calib[ucode_type].event_trigger; 240 268 phy_cfg_cmd.calib_control.flow_trigger = ··· 245 273 246 274 return iwl_mvm_send_cmd_pdu(mvm, PHY_CONFIGURATION_CMD, CMD_SYNC, 247 275 sizeof(phy_cfg_cmd), &phy_cfg_cmd); 248 - } 249 - 250 - /* Starting with the new PHY DB implementation - New calibs are enabled */ 251 - /* Value - 0x405e7 */ 252 - #define IWL_CALIB_DEFAULT_FLOW_INIT (IWL_CALIB_CFG_XTAL_IDX |\ 253 - IWL_CALIB_CFG_TEMPERATURE_IDX |\ 254 - IWL_CALIB_CFG_VOLTAGE_READ_IDX |\ 255 - IWL_CALIB_CFG_DC_IDX |\ 256 - IWL_CALIB_CFG_BB_FILTER_IDX |\ 257 - IWL_CALIB_CFG_LO_LEAKAGE_IDX |\ 258 - IWL_CALIB_CFG_TX_IQ_IDX |\ 259 - IWL_CALIB_CFG_RX_IQ_IDX |\ 260 - IWL_CALIB_CFG_AGC_IDX) 261 - 262 - #define IWL_CALIB_DEFAULT_EVENT_INIT 0x0 263 - 264 - /* Value 0x41567 */ 265 - #define IWL_CALIB_DEFAULT_FLOW_RUN (IWL_CALIB_CFG_XTAL_IDX |\ 266 - IWL_CALIB_CFG_TEMPERATURE_IDX |\ 267 - IWL_CALIB_CFG_VOLTAGE_READ_IDX |\ 268 - IWL_CALIB_CFG_BB_FILTER_IDX |\ 269 - IWL_CALIB_CFG_DC_IDX |\ 270 - IWL_CALIB_CFG_TX_IQ_IDX |\ 271 - IWL_CALIB_CFG_RX_IQ_IDX |\ 272 - IWL_CALIB_CFG_SENSITIVITY_IDX |\ 273 - IWL_CALIB_CFG_AGC_IDX) 274 - 275 - #define IWL_CALIB_DEFAULT_EVENT_RUN (IWL_CALIB_CFG_XTAL_IDX |\ 276 - IWL_CALIB_CFG_TEMPERATURE_IDX |\ 277 - IWL_CALIB_CFG_VOLTAGE_READ_IDX |\ 278 - IWL_CALIB_CFG_TX_PWR_IDX |\ 279 - IWL_CALIB_CFG_DC_IDX |\ 280 - IWL_CALIB_CFG_TX_IQ_IDX |\ 281 - IWL_CALIB_CFG_SENSITIVITY_IDX) 282 - 283 - /* 284 - * Sets the calibrations trigger values that will be sent to the FW for runtime 285 - * and init calibrations. 
286 - * The ones given in the FW TLV are not correct. 287 - */ 288 - static void iwl_set_default_calib_trigger(struct iwl_mvm *mvm) 289 - { 290 - struct iwl_tlv_calib_ctrl default_calib; 291 - 292 - /* 293 - * WkP FW TLV calib bits are wrong, overwrite them. 294 - * This defines the dynamic calibrations which are implemented in the 295 - * uCode both for init(flow) calculation and event driven calibs. 296 - */ 297 - 298 - /* Init Image */ 299 - default_calib.event_trigger = cpu_to_le32(IWL_CALIB_DEFAULT_EVENT_INIT); 300 - default_calib.flow_trigger = cpu_to_le32(IWL_CALIB_DEFAULT_FLOW_INIT); 301 - 302 - if (default_calib.event_trigger != 303 - mvm->fw->default_calib[IWL_UCODE_INIT].event_trigger) 304 - IWL_ERR(mvm, 305 - "Updating the event calib for INIT image: 0x%x -> 0x%x\n", 306 - mvm->fw->default_calib[IWL_UCODE_INIT].event_trigger, 307 - default_calib.event_trigger); 308 - if (default_calib.flow_trigger != 309 - mvm->fw->default_calib[IWL_UCODE_INIT].flow_trigger) 310 - IWL_ERR(mvm, 311 - "Updating the flow calib for INIT image: 0x%x -> 0x%x\n", 312 - mvm->fw->default_calib[IWL_UCODE_INIT].flow_trigger, 313 - default_calib.flow_trigger); 314 - 315 - memcpy((void *)&mvm->fw->default_calib[IWL_UCODE_INIT], 316 - &default_calib, sizeof(struct iwl_tlv_calib_ctrl)); 317 - IWL_ERR(mvm, 318 - "Setting uCode init calibrations event 0x%x, trigger 0x%x\n", 319 - default_calib.event_trigger, 320 - default_calib.flow_trigger); 321 - 322 - /* Run time image */ 323 - default_calib.event_trigger = cpu_to_le32(IWL_CALIB_DEFAULT_EVENT_RUN); 324 - default_calib.flow_trigger = cpu_to_le32(IWL_CALIB_DEFAULT_FLOW_RUN); 325 - 326 - if (default_calib.event_trigger != 327 - mvm->fw->default_calib[IWL_UCODE_REGULAR].event_trigger) 328 - IWL_ERR(mvm, 329 - "Updating the event calib for RT image: 0x%x -> 0x%x\n", 330 - mvm->fw->default_calib[IWL_UCODE_REGULAR].event_trigger, 331 - default_calib.event_trigger); 332 - if (default_calib.flow_trigger != 333 - 
mvm->fw->default_calib[IWL_UCODE_REGULAR].flow_trigger) 334 - IWL_ERR(mvm, 335 - "Updating the flow calib for RT image: 0x%x -> 0x%x\n", 336 - mvm->fw->default_calib[IWL_UCODE_REGULAR].flow_trigger, 337 - default_calib.flow_trigger); 338 - 339 - memcpy((void *)&mvm->fw->default_calib[IWL_UCODE_REGULAR], 340 - &default_calib, sizeof(struct iwl_tlv_calib_ctrl)); 341 - IWL_ERR(mvm, 342 - "Setting uCode runtime calibs event 0x%x, trigger 0x%x\n", 343 - default_calib.event_trigger, 344 - default_calib.flow_trigger); 345 276 } 346 277 347 278 static int iwl_set_default_calibrations(struct iwl_mvm *mvm) ··· 321 446 ret = iwl_nvm_check_version(mvm->nvm_data, mvm->trans); 322 447 WARN_ON(ret); 323 448 324 - /* Override the calibrations from TLV and the const of fw */ 325 - iwl_set_default_calib_trigger(mvm); 449 + /* Send TX valid antennas before triggering calibrations */ 450 + ret = iwl_send_tx_ant_cfg(mvm, mvm->nvm_data->valid_tx_ant); 451 + if (ret) 452 + goto error; 326 453 327 454 /* WkP doesn't have all calibrations, need to set default values */ 328 455 if (mvm->cfg->device_family == IWL_DEVICE_FAMILY_7000) {
+2 -1
drivers/net/wireless/iwlwifi/mvm/mvm.h
··· 80 80 81 81 #define IWL_INVALID_MAC80211_QUEUE 0xff 82 82 #define IWL_MVM_MAX_ADDRESSES 2 83 - #define IWL_RSSI_OFFSET 44 83 + /* RSSI offset for WkP */ 84 + #define IWL_RSSI_OFFSET 50 84 85 85 86 enum iwl_mvm_tx_fifo { 86 87 IWL_MVM_TX_FIFO_BK = 0,
+13 -5
drivers/net/wireless/iwlwifi/mvm/ops.c
··· 624 624 ieee80211_free_txskb(mvm->hw, skb); 625 625 } 626 626 627 - static void iwl_mvm_nic_error(struct iwl_op_mode *op_mode) 627 + static void iwl_mvm_nic_restart(struct iwl_mvm *mvm) 628 628 { 629 - struct iwl_mvm *mvm = IWL_OP_MODE_GET_MVM(op_mode); 630 - 631 - iwl_mvm_dump_nic_error_log(mvm); 632 - 633 629 iwl_abort_notification_waits(&mvm->notif_wait); 634 630 635 631 /* ··· 659 663 } 660 664 } 661 665 666 + static void iwl_mvm_nic_error(struct iwl_op_mode *op_mode) 667 + { 668 + struct iwl_mvm *mvm = IWL_OP_MODE_GET_MVM(op_mode); 669 + 670 + iwl_mvm_dump_nic_error_log(mvm); 671 + 672 + iwl_mvm_nic_restart(mvm); 673 + } 674 + 662 675 static void iwl_mvm_cmd_queue_full(struct iwl_op_mode *op_mode) 663 676 { 677 + struct iwl_mvm *mvm = IWL_OP_MODE_GET_MVM(op_mode); 678 + 664 679 WARN_ON(1); 680 + iwl_mvm_nic_restart(mvm); 665 681 } 666 682 667 683 static const struct iwl_op_mode_ops iwl_mvm_ops = {
+23 -14
drivers/net/wireless/iwlwifi/mvm/rx.c
··· 131 131 static int iwl_mvm_calc_rssi(struct iwl_mvm *mvm, 132 132 struct iwl_rx_phy_info *phy_info) 133 133 { 134 - u32 rssi_a, rssi_b, rssi_c, max_rssi, agc_db; 134 + int rssi_a, rssi_b, rssi_a_dbm, rssi_b_dbm, max_rssi_dbm; 135 + int rssi_all_band_a, rssi_all_band_b; 136 + u32 agc_a, agc_b, max_agc; 135 137 u32 val; 136 138 137 - /* Find max rssi among 3 possible receivers. 139 + /* Find max rssi among 2 possible receivers. 138 140 * These values are measured by the Digital Signal Processor (DSP). 139 141 * They should stay fairly constant even as the signal strength varies, 140 142 * if the radio's Automatic Gain Control (AGC) is working right. 141 143 * AGC value (see below) will provide the "interesting" info. 142 144 */ 145 + val = le32_to_cpu(phy_info->non_cfg_phy[IWL_RX_INFO_AGC_IDX]); 146 + agc_a = (val & IWL_OFDM_AGC_A_MSK) >> IWL_OFDM_AGC_A_POS; 147 + agc_b = (val & IWL_OFDM_AGC_B_MSK) >> IWL_OFDM_AGC_B_POS; 148 + max_agc = max_t(u32, agc_a, agc_b); 149 + 143 150 val = le32_to_cpu(phy_info->non_cfg_phy[IWL_RX_INFO_RSSI_AB_IDX]); 144 151 rssi_a = (val & IWL_OFDM_RSSI_INBAND_A_MSK) >> IWL_OFDM_RSSI_A_POS; 145 152 rssi_b = (val & IWL_OFDM_RSSI_INBAND_B_MSK) >> IWL_OFDM_RSSI_B_POS; 146 - val = le32_to_cpu(phy_info->non_cfg_phy[IWL_RX_INFO_RSSI_C_IDX]); 147 - rssi_c = (val & IWL_OFDM_RSSI_INBAND_C_MSK) >> IWL_OFDM_RSSI_C_POS; 153 + rssi_all_band_a = (val & IWL_OFDM_RSSI_ALLBAND_A_MSK) >> 154 + IWL_OFDM_RSSI_ALLBAND_A_POS; 155 + rssi_all_band_b = (val & IWL_OFDM_RSSI_ALLBAND_B_MSK) >> 156 + IWL_OFDM_RSSI_ALLBAND_B_POS; 148 157 149 - val = le32_to_cpu(phy_info->non_cfg_phy[IWL_RX_INFO_AGC_IDX]); 150 - agc_db = (val & IWL_OFDM_AGC_DB_MSK) >> IWL_OFDM_AGC_DB_POS; 158 + /* 159 + * dBm = rssi dB - agc dB - constant. 160 + * Higher AGC (higher radio gain) means lower signal. 161 + */ 162 + rssi_a_dbm = rssi_a - IWL_RSSI_OFFSET - agc_a; 163 + rssi_b_dbm = rssi_b - IWL_RSSI_OFFSET - agc_b; 164 + max_rssi_dbm = max_t(int, rssi_a_dbm, rssi_b_dbm); 151 165 152 - max_rssi = max_t(u32, rssi_a, rssi_b); 153 - max_rssi = max_t(u32, max_rssi, rssi_c); 166 + IWL_DEBUG_STATS(mvm, "Rssi In A %d B %d Max %d AGCA %d AGCB %d\n", 167 + rssi_a_dbm, rssi_b_dbm, max_rssi_dbm, agc_a, agc_b); 154 168 155 - IWL_DEBUG_STATS(mvm, "Rssi In A %d B %d C %d Max %d AGC dB %d\n", 156 - rssi_a, rssi_b, rssi_c, max_rssi, agc_db); 157 - 158 - /* dBm = max_rssi dB - agc dB - constant. 159 - * Higher AGC (higher radio gain) means lower signal. */ 160 - return max_rssi - agc_db - IWL_RSSI_OFFSET; 169 + return max_rssi_dbm; 161 170 162 171 } 163 172 /*
+10
drivers/net/wireless/iwlwifi/mvm/sta.c
··· 770 770 u16 txq_id; 771 771 int err; 772 772 773 + 774 + /* 775 + * If mac80211 is cleaning its state, then say that we finished since 776 + * our state has been cleared anyway. 777 + */ 778 + if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status)) { 779 + ieee80211_stop_tx_ba_cb_irqsafe(vif, sta->addr, tid); 780 + return 0; 781 + } 782 + 773 783 spin_lock_bh(&mvmsta->lock); 774 784 775 785 txq_id = tid_data->txq_id;
+1 -5
drivers/net/wireless/iwlwifi/mvm/tx.c
··· 607 607 608 608 /* Single frame failure in an AMPDU queue => send BAR */ 609 609 if (txq_id >= IWL_FIRST_AMPDU_QUEUE && 610 - !(info->flags & IEEE80211_TX_STAT_ACK)) { 611 - /* there must be only one skb in the skb_list */ 612 - WARN_ON_ONCE(skb_freed > 1 || 613 - !skb_queue_empty(&skbs)); 610 + !(info->flags & IEEE80211_TX_STAT_ACK)) 614 611 info->flags |= IEEE80211_TX_STAT_AMPDU_NO_BACK; 615 - } 616 612 617 613 /* W/A FW bug: seq_ctl is wrong when the queue is flushed */ 618 614 if (status == TX_STATUS_FAIL_FIFO_FLUSHED) {
+25 -9
drivers/net/wireless/iwlwifi/pcie/internal.h
··· 137 137 struct iwl_cmd_meta { 138 138 /* only for SYNC commands, iff the reply skb is wanted */ 139 139 struct iwl_host_cmd *source; 140 - 141 - DEFINE_DMA_UNMAP_ADDR(mapping); 142 - DEFINE_DMA_UNMAP_LEN(len); 143 - 144 140 u32 flags; 145 141 }; 146 142 ··· 181 185 /* 182 186 * The FH will write back to the first TB only, so we need 183 187 * to copy some data into the buffer regardless of whether 184 - * it should be mapped or not. This indicates how much to 185 - * copy, even for HCMDs it must be big enough to fit the 186 - * DRAM scratch from the TX cmd, at least 16 bytes. 188 + * it should be mapped or not. This indicates how big the 189 + * first TB must be to include the scratch buffer. Since 190 + * the scratch is 4 bytes at offset 12, it's 16 now. If we 191 + * make it bigger then allocations will be bigger and copy 192 + * slower, so that's probably not useful. 187 193 */ 188 - #define IWL_HCMD_MIN_COPY_SIZE 16 194 + #define IWL_HCMD_SCRATCHBUF_SIZE 16 189 195 190 196 struct iwl_pcie_txq_entry { 191 197 struct iwl_device_cmd *cmd; 192 - struct iwl_device_cmd *copy_cmd; 193 198 struct sk_buff *skb; 194 199 /* buffer to free after command completes */ 195 200 const void *free_buf; 196 201 struct iwl_cmd_meta meta; 197 202 }; 198 203 204 + struct iwl_pcie_txq_scratch_buf { 205 + struct iwl_cmd_header hdr; 206 + u8 buf[8]; 207 + __le32 scratch; 208 + }; 209 + 199 210 /** 200 211 * struct iwl_txq - Tx Queue for DMA 201 212 * @q: generic Rx/Tx queue descriptor 202 213 * @tfds: transmit frame descriptors (DMA memory) 214 + * @scratchbufs: start of command headers, including scratch buffers, for 215 + * the writeback -- this is DMA memory and an array holding one buffer 216 + * for each command on the queue 217 + * @scratchbufs_dma: DMA address for the scratchbufs start 203 218 * @entries: transmit entries (driver state) 204 219 * @lock: queue lock 205 220 * @stuck_timer: timer that fires if queue gets stuck ··· 224 217 struct iwl_queue q; 226 219 struct iwl_tfd *tfds; 220 + struct iwl_pcie_txq_scratch_buf *scratchbufs; 221 + dma_addr_t scratchbufs_dma; 227 222 struct iwl_pcie_txq_entry *entries; 228 223 spinlock_t lock; 229 224 struct timer_list stuck_timer; ··· 233 224 u8 need_update; 234 225 u8 active; 235 226 }; 227 + 228 + static inline dma_addr_t 229 + iwl_pcie_get_scratchbuf_dma(struct iwl_txq *txq, int idx) 230 + { 231 + return txq->scratchbufs_dma + 232 + sizeof(struct iwl_pcie_txq_scratch_buf) * idx; 233 + } 236 234 237 235 /** 238 236 * struct iwl_trans_pcie - PCIe transport specific data
+3 -11
drivers/net/wireless/iwlwifi/pcie/rx.c
··· 637 637 index = SEQ_TO_INDEX(sequence); 638 638 cmd_index = get_cmd_index(&txq->q, index); 639 639 640 - if (reclaim) { 641 - struct iwl_pcie_txq_entry *ent; 642 - ent = &txq->entries[cmd_index]; 643 - cmd = ent->copy_cmd; 644 - WARN_ON_ONCE(!cmd && ent->meta.flags & CMD_WANT_HCMD); 645 - } else { 640 + if (reclaim) 641 + cmd = txq->entries[cmd_index].cmd; 642 + else 646 643 cmd = NULL; 647 - } 648 644 649 645 err = iwl_op_mode_rx(trans->op_mode, &rxcb, cmd); 650 646 651 647 if (reclaim) { 652 - /* The original command isn't needed any more */ 653 - kfree(txq->entries[cmd_index].copy_cmd); 654 - txq->entries[cmd_index].copy_cmd = NULL; 655 - /* nor is the duplicated part of the command */ 656 648 kfree(txq->entries[cmd_index].free_buf); 657 649 txq->entries[cmd_index].free_buf = NULL; 658 650 }
+129 -147
drivers/net/wireless/iwlwifi/pcie/tx.c
··· 191 191 } 192 192 193 193 for (i = q->read_ptr; i != q->write_ptr; 194 - i = iwl_queue_inc_wrap(i, q->n_bd)) { 195 - struct iwl_tx_cmd *tx_cmd = 196 - (struct iwl_tx_cmd *)txq->entries[i].cmd->payload; 194 + i = iwl_queue_inc_wrap(i, q->n_bd)) 197 195 IWL_ERR(trans, "scratch %d = 0x%08x\n", i, 198 - get_unaligned_le32(&tx_cmd->scratch)); 199 - } 196 + le32_to_cpu(txq->scratchbufs[i].scratch)); 200 197 201 198 iwl_op_mode_nic_error(trans->op_mode); 202 199 } ··· 364 367 } 365 368 366 369 static void iwl_pcie_tfd_unmap(struct iwl_trans *trans, 367 - struct iwl_cmd_meta *meta, struct iwl_tfd *tfd, 368 - enum dma_data_direction dma_dir) 370 + struct iwl_cmd_meta *meta, 371 + struct iwl_tfd *tfd) 369 372 { 370 373 int i; 371 374 int num_tbs; ··· 379 382 return; 380 383 } 381 384 382 - /* Unmap tx_cmd */ 383 - if (num_tbs) 384 - dma_unmap_single(trans->dev, 385 - dma_unmap_addr(meta, mapping), 386 - dma_unmap_len(meta, len), 387 - DMA_BIDIRECTIONAL); 385 + /* first TB is never freed - it's the scratchbuf data */ 388 386 389 - /* Unmap chunks, if any. 
*/ 390 387 for (i = 1; i < num_tbs; i++) 391 388 dma_unmap_single(trans->dev, iwl_pcie_tfd_tb_get_addr(tfd, i), 392 - iwl_pcie_tfd_tb_get_len(tfd, i), dma_dir); 389 + iwl_pcie_tfd_tb_get_len(tfd, i), 390 + DMA_TO_DEVICE); 393 391 394 392 tfd->num_tbs = 0; 395 393 } ··· 398 406 * Does NOT advance any TFD circular buffer read/write indexes 399 407 * Does NOT free the TFD itself (which is within circular buffer) 400 408 */ 401 - static void iwl_pcie_txq_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq, 402 - enum dma_data_direction dma_dir) 409 + static void iwl_pcie_txq_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq) 403 410 { 404 411 struct iwl_tfd *tfd_tmp = txq->tfds; 405 412 ··· 409 418 lockdep_assert_held(&txq->lock); 410 419 411 420 /* We have only q->n_window txq->entries, but we use q->n_bd tfds */ 412 - iwl_pcie_tfd_unmap(trans, &txq->entries[idx].meta, &tfd_tmp[rd_ptr], 413 - dma_dir); 421 + iwl_pcie_tfd_unmap(trans, &txq->entries[idx].meta, &tfd_tmp[rd_ptr]); 414 422 415 423 /* free SKB */ 416 424 if (txq->entries) { ··· 469 479 { 470 480 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 471 481 size_t tfd_sz = sizeof(struct iwl_tfd) * TFD_QUEUE_SIZE_MAX; 482 + size_t scratchbuf_sz; 472 483 int i; 473 484 474 485 if (WARN_ON(txq->entries || txq->tfds)) ··· 505 514 IWL_ERR(trans, "dma_alloc_coherent(%zd) failed\n", tfd_sz); 506 515 goto error; 507 516 } 517 + 518 + BUILD_BUG_ON(IWL_HCMD_SCRATCHBUF_SIZE != sizeof(*txq->scratchbufs)); 519 + BUILD_BUG_ON(offsetof(struct iwl_pcie_txq_scratch_buf, scratch) != 520 + sizeof(struct iwl_cmd_header) + 521 + offsetof(struct iwl_tx_cmd, scratch)); 522 + 523 + scratchbuf_sz = sizeof(*txq->scratchbufs) * slots_num; 524 + 525 + txq->scratchbufs = dma_alloc_coherent(trans->dev, scratchbuf_sz, 526 + &txq->scratchbufs_dma, 527 + GFP_KERNEL); 528 + if (!txq->scratchbufs) 529 + goto err_free_tfds; 530 + 508 531 txq->q.id = txq_id; 509 532 510 533 return 0; 534 + err_free_tfds: 535 + 
dma_free_coherent(trans->dev, tfd_sz, txq->tfds, txq->q.dma_addr); 511 536 error: 512 537 if (txq->entries && txq_id == trans_pcie->cmd_queue) 513 538 for (i = 0; i < slots_num; i++) ··· 572 565 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 573 566 struct iwl_txq *txq = &trans_pcie->txq[txq_id]; 574 567 struct iwl_queue *q = &txq->q; 575 - enum dma_data_direction dma_dir; 576 568 577 569 if (!q->n_bd) 578 570 return; 579 571 580 - /* In the command queue, all the TBs are mapped as BIDI 581 - * so unmap them as such. 582 - */ 583 - if (txq_id == trans_pcie->cmd_queue) 584 - dma_dir = DMA_BIDIRECTIONAL; 585 - else 586 - dma_dir = DMA_TO_DEVICE; 587 - 588 572 spin_lock_bh(&txq->lock); 589 573 while (q->write_ptr != q->read_ptr) { 590 - iwl_pcie_txq_free_tfd(trans, txq, dma_dir); 574 + iwl_pcie_txq_free_tfd(trans, txq); 591 575 q->read_ptr = iwl_queue_inc_wrap(q->read_ptr, q->n_bd); 592 576 } 593 577 spin_unlock_bh(&txq->lock); ··· 608 610 if (txq_id == trans_pcie->cmd_queue) 609 611 for (i = 0; i < txq->q.n_window; i++) { 610 612 kfree(txq->entries[i].cmd); 611 - kfree(txq->entries[i].copy_cmd); 612 613 kfree(txq->entries[i].free_buf); 613 614 } 614 615 ··· 616 619 dma_free_coherent(dev, sizeof(struct iwl_tfd) * 617 620 txq->q.n_bd, txq->tfds, txq->q.dma_addr); 618 621 txq->q.dma_addr = 0; 622 + 623 + dma_free_coherent(dev, 624 + sizeof(*txq->scratchbufs) * txq->q.n_window, 625 + txq->scratchbufs, txq->scratchbufs_dma); 619 626 } 620 627 621 628 kfree(txq->entries); ··· 963 962 964 963 iwl_pcie_txq_inval_byte_cnt_tbl(trans, txq); 965 964 966 - iwl_pcie_txq_free_tfd(trans, txq, DMA_TO_DEVICE); 965 + iwl_pcie_txq_free_tfd(trans, txq); 967 966 } 968 967 969 968 iwl_pcie_txq_progress(trans_pcie, txq); ··· 1153 1152 void *dup_buf = NULL; 1154 1153 dma_addr_t phys_addr; 1155 1154 int idx; 1156 - u16 copy_size, cmd_size, dma_size; 1155 + u16 copy_size, cmd_size, scratch_size; 1157 1156 bool had_nocopy = false; 1158 1157 int i; 1159 1158 u32 cmd_pos; 
1160 - const u8 *cmddata[IWL_MAX_CMD_TFDS]; 1161 - u16 cmdlen[IWL_MAX_CMD_TFDS]; 1159 + const u8 *cmddata[IWL_MAX_CMD_TBS_PER_TFD]; 1160 + u16 cmdlen[IWL_MAX_CMD_TBS_PER_TFD]; 1162 1161 1163 1162 copy_size = sizeof(out_cmd->hdr); 1164 1163 cmd_size = sizeof(out_cmd->hdr); 1165 1164 1166 1165 /* need one for the header if the first is NOCOPY */ 1167 - BUILD_BUG_ON(IWL_MAX_CMD_TFDS > IWL_NUM_OF_TBS - 1); 1166 + BUILD_BUG_ON(IWL_MAX_CMD_TBS_PER_TFD > IWL_NUM_OF_TBS - 1); 1168 1167 1169 - for (i = 0; i < IWL_MAX_CMD_TFDS; i++) { 1168 + for (i = 0; i < IWL_MAX_CMD_TBS_PER_TFD; i++) { 1170 1169 cmddata[i] = cmd->data[i]; 1171 1170 cmdlen[i] = cmd->len[i]; 1172 1171 1173 1172 if (!cmd->len[i]) 1174 1173 continue; 1175 1174 1176 - /* need at least IWL_HCMD_MIN_COPY_SIZE copied */ 1177 - if (copy_size < IWL_HCMD_MIN_COPY_SIZE) { 1178 - int copy = IWL_HCMD_MIN_COPY_SIZE - copy_size; 1175 + /* need at least IWL_HCMD_SCRATCHBUF_SIZE copied */ 1176 + if (copy_size < IWL_HCMD_SCRATCHBUF_SIZE) { 1177 + int copy = IWL_HCMD_SCRATCHBUF_SIZE - copy_size; 1179 1178 1180 1179 if (copy > cmdlen[i]) 1181 1180 copy = cmdlen[i]; ··· 1261 1260 /* and copy the data that needs to be copied */ 1262 1261 cmd_pos = offsetof(struct iwl_device_cmd, payload); 1263 1262 copy_size = sizeof(out_cmd->hdr); 1264 - for (i = 0; i < IWL_MAX_CMD_TFDS; i++) { 1263 + for (i = 0; i < IWL_MAX_CMD_TBS_PER_TFD; i++) { 1265 1264 int copy = 0; 1266 1265 1267 1266 if (!cmd->len) 1268 1267 continue; 1269 1268 1270 - /* need at least IWL_HCMD_MIN_COPY_SIZE copied */ 1271 - if (copy_size < IWL_HCMD_MIN_COPY_SIZE) { 1272 - copy = IWL_HCMD_MIN_COPY_SIZE - copy_size; 1269 + /* need at least IWL_HCMD_SCRATCHBUF_SIZE copied */ 1270 + if (copy_size < IWL_HCMD_SCRATCHBUF_SIZE) { 1271 + copy = IWL_HCMD_SCRATCHBUF_SIZE - copy_size; 1273 1272 1274 1273 if (copy > cmd->len[i]) 1275 1274 copy = cmd->len[i]; ··· 1287 1286 } 1288 1287 } 1289 1288 1290 - WARN_ON_ONCE(txq->entries[idx].copy_cmd); 1291 - 1292 - /* 1293 - * since 
out_cmd will be the source address of the FH, it will write 1294 - * the retry count there. So when the user needs to receivce the HCMD 1295 - * that corresponds to the response in the response handler, it needs 1296 - * to set CMD_WANT_HCMD. 1297 - */ 1298 - if (cmd->flags & CMD_WANT_HCMD) { 1299 - txq->entries[idx].copy_cmd = 1300 - kmemdup(out_cmd, cmd_pos, GFP_ATOMIC); 1301 - if (unlikely(!txq->entries[idx].copy_cmd)) { 1302 - idx = -ENOMEM; 1303 - goto out; 1304 - } 1305 - } 1306 - 1307 1289 IWL_DEBUG_HC(trans, 1308 1290 "Sending command %s (#%x), seq: 0x%04X, %d bytes at %d[%d]:%d\n", 1309 1291 get_cmd_string(trans_pcie, out_cmd->hdr.cmd), 1310 1292 out_cmd->hdr.cmd, le16_to_cpu(out_cmd->hdr.sequence), 1311 1293 cmd_size, q->write_ptr, idx, trans_pcie->cmd_queue); 1312 1294 1313 - /* 1314 - * If the entire command is smaller than IWL_HCMD_MIN_COPY_SIZE, we must 1315 - * still map at least that many bytes for the hardware to write back to. 1316 - * We have enough space, so that's not a problem. 
1317 - */ 1318 - dma_size = max_t(u16, copy_size, IWL_HCMD_MIN_COPY_SIZE); 1295 + /* start the TFD with the scratchbuf */ 1296 + scratch_size = min_t(int, copy_size, IWL_HCMD_SCRATCHBUF_SIZE); 1297 + memcpy(&txq->scratchbufs[q->write_ptr], &out_cmd->hdr, scratch_size); 1298 + iwl_pcie_txq_build_tfd(trans, txq, 1299 + iwl_pcie_get_scratchbuf_dma(txq, q->write_ptr), 1300 + scratch_size, 1); 1319 1301 1320 - phys_addr = dma_map_single(trans->dev, &out_cmd->hdr, dma_size, 1321 - DMA_BIDIRECTIONAL); 1322 - if (unlikely(dma_mapping_error(trans->dev, phys_addr))) { 1323 - idx = -ENOMEM; 1324 - goto out; 1302 + /* map first command fragment, if any remains */ 1303 + if (copy_size > scratch_size) { 1304 + phys_addr = dma_map_single(trans->dev, 1305 + ((u8 *)&out_cmd->hdr) + scratch_size, 1306 + copy_size - scratch_size, 1307 + DMA_TO_DEVICE); 1308 + if (dma_mapping_error(trans->dev, phys_addr)) { 1309 + iwl_pcie_tfd_unmap(trans, out_meta, 1310 + &txq->tfds[q->write_ptr]); 1311 + idx = -ENOMEM; 1312 + goto out; 1313 + } 1314 + 1315 + iwl_pcie_txq_build_tfd(trans, txq, phys_addr, 1316 + copy_size - scratch_size, 0); 1325 1317 } 1326 1318 1327 - dma_unmap_addr_set(out_meta, mapping, phys_addr); 1328 - dma_unmap_len_set(out_meta, len, dma_size); 1329 - 1330 - iwl_pcie_txq_build_tfd(trans, txq, phys_addr, copy_size, 1); 1331 - 1332 1319 /* map the remaining (adjusted) nocopy/dup fragments */ 1333 - for (i = 0; i < IWL_MAX_CMD_TFDS; i++) { 1320 + for (i = 0; i < IWL_MAX_CMD_TBS_PER_TFD; i++) { 1334 1321 const void *data = cmddata[i]; 1335 1322 1336 1323 if (!cmdlen[i]) ··· 1329 1340 if (cmd->dataflags[i] & IWL_HCMD_DFL_DUP) 1330 1341 data = dup_buf; 1331 1342 phys_addr = dma_map_single(trans->dev, (void *)data, 1332 - cmdlen[i], DMA_BIDIRECTIONAL); 1343 + cmdlen[i], DMA_TO_DEVICE); 1333 1344 if (dma_mapping_error(trans->dev, phys_addr)) { 1334 1345 iwl_pcie_tfd_unmap(trans, out_meta, 1335 - &txq->tfds[q->write_ptr], 1336 - DMA_BIDIRECTIONAL); 1346 + &txq->tfds[q->write_ptr]); 
1337 1347 idx = -ENOMEM; 1338 1348 goto out; 1339 1349 } ··· 1406 1418 cmd = txq->entries[cmd_index].cmd; 1407 1419 meta = &txq->entries[cmd_index].meta; 1408 1420 1409 - iwl_pcie_tfd_unmap(trans, meta, &txq->tfds[index], DMA_BIDIRECTIONAL); 1421 + iwl_pcie_tfd_unmap(trans, meta, &txq->tfds[index]); 1410 1422 1411 1423 /* Input error checking is done when commands are added to queue. */ 1412 1424 if (meta->flags & CMD_WANT_SKB) { ··· 1585 1597 struct iwl_cmd_meta *out_meta; 1586 1598 struct iwl_txq *txq; 1587 1599 struct iwl_queue *q; 1588 - dma_addr_t phys_addr = 0; 1589 - dma_addr_t txcmd_phys; 1590 - dma_addr_t scratch_phys; 1591 - u16 len, firstlen, secondlen; 1600 + dma_addr_t tb0_phys, tb1_phys, scratch_phys; 1601 + void *tb1_addr; 1602 + u16 len, tb1_len, tb2_len; 1592 1603 u8 wait_write_ptr = 0; 1593 1604 __le16 fc = hdr->frame_control; 1594 1605 u8 hdr_len = ieee80211_hdrlen(fc); ··· 1625 1638 cpu_to_le16((u16)(QUEUE_TO_SEQ(txq_id) | 1626 1639 INDEX_TO_SEQ(q->write_ptr))); 1627 1640 1641 + tb0_phys = iwl_pcie_get_scratchbuf_dma(txq, q->write_ptr); 1642 + scratch_phys = tb0_phys + sizeof(struct iwl_cmd_header) + 1643 + offsetof(struct iwl_tx_cmd, scratch); 1644 + 1645 + tx_cmd->dram_lsb_ptr = cpu_to_le32(scratch_phys); 1646 + tx_cmd->dram_msb_ptr = iwl_get_dma_hi_addr(scratch_phys); 1647 + 1628 1648 /* Set up first empty entry in queue's array of Tx/cmd buffers */ 1629 1649 out_meta = &txq->entries[q->write_ptr].meta; 1630 1650 1631 1651 /* 1632 - * Use the first empty entry in this queue's command buffer array 1633 - * to contain the Tx command and MAC header concatenated together 1634 - * (payload data will be in another buffer). 1635 - * Size of this varies, due to varying MAC header length. 1636 - * If end is not dword aligned, we'll have 2 extra bytes at the end 1637 - * of the MAC header (device reads on dword boundaries). 1638 - * We'll tell device about this padding later. 
1652 + * The second TB (tb1) points to the remainder of the TX command 1653 + * and the 802.11 header - dword aligned size 1654 + * (This calculation modifies the TX command, so do it before the 1655 + * setup of the first TB) 1639 1656 */ 1640 - len = sizeof(struct iwl_tx_cmd) + 1641 - sizeof(struct iwl_cmd_header) + hdr_len; 1642 - firstlen = (len + 3) & ~3; 1657 + len = sizeof(struct iwl_tx_cmd) + sizeof(struct iwl_cmd_header) + 1658 + hdr_len - IWL_HCMD_SCRATCHBUF_SIZE; 1659 + tb1_len = (len + 3) & ~3; 1643 1660 1644 1661 /* Tell NIC about any 2-byte padding after MAC header */ 1645 - if (firstlen != len) 1662 + if (tb1_len != len) 1646 1663 tx_cmd->tx_flags |= TX_CMD_FLG_MH_PAD_MSK; 1647 1664 1648 - /* Physical address of this Tx command's header (not MAC header!), 1649 - * within command buffer array. */ 1650 - txcmd_phys = dma_map_single(trans->dev, 1651 - &dev_cmd->hdr, firstlen, 1652 - DMA_BIDIRECTIONAL); 1653 - if (unlikely(dma_mapping_error(trans->dev, txcmd_phys))) 1665 + /* The first TB points to the scratchbuf data - min_copy bytes */ 1666 + memcpy(&txq->scratchbufs[q->write_ptr], &dev_cmd->hdr, 1667 + IWL_HCMD_SCRATCHBUF_SIZE); 1668 + iwl_pcie_txq_build_tfd(trans, txq, tb0_phys, 1669 + IWL_HCMD_SCRATCHBUF_SIZE, 1); 1670 + 1671 + /* there must be data left over for TB1 or this code must be changed */ 1672 + BUILD_BUG_ON(sizeof(struct iwl_tx_cmd) < IWL_HCMD_SCRATCHBUF_SIZE); 1673 + 1674 + /* map the data for TB1 */ 1675 + tb1_addr = ((u8 *)&dev_cmd->hdr) + IWL_HCMD_SCRATCHBUF_SIZE; 1676 + tb1_phys = dma_map_single(trans->dev, tb1_addr, tb1_len, DMA_TO_DEVICE); 1677 + if (unlikely(dma_mapping_error(trans->dev, tb1_phys))) 1654 1678 goto out_err; 1655 - dma_unmap_addr_set(out_meta, mapping, txcmd_phys); 1656 - dma_unmap_len_set(out_meta, len, firstlen); 1679 + iwl_pcie_txq_build_tfd(trans, txq, tb1_phys, tb1_len, 0); 1680 + 1681 + /* 1682 + * Set up TFD's third entry to point directly to remainder 1683 + * of skb, if any (802.11 null frames have no 
payload). 1684 + */ 1685 + tb2_len = skb->len - hdr_len; 1686 + if (tb2_len > 0) { 1687 + dma_addr_t tb2_phys = dma_map_single(trans->dev, 1688 + skb->data + hdr_len, 1689 + tb2_len, DMA_TO_DEVICE); 1690 + if (unlikely(dma_mapping_error(trans->dev, tb2_phys))) { 1691 + iwl_pcie_tfd_unmap(trans, out_meta, 1692 + &txq->tfds[q->write_ptr]); 1693 + goto out_err; 1694 + } 1695 + iwl_pcie_txq_build_tfd(trans, txq, tb2_phys, tb2_len, 0); 1696 + } 1697 + 1698 + /* Set up entry for this TFD in Tx byte-count array */ 1699 + iwl_pcie_txq_update_byte_cnt_tbl(trans, txq, le16_to_cpu(tx_cmd->len)); 1700 + 1701 + trace_iwlwifi_dev_tx(trans->dev, skb, 1702 + &txq->tfds[txq->q.write_ptr], 1703 + sizeof(struct iwl_tfd), 1704 + &dev_cmd->hdr, IWL_HCMD_SCRATCHBUF_SIZE + tb1_len, 1705 + skb->data + hdr_len, tb2_len); 1706 + trace_iwlwifi_dev_tx_data(trans->dev, skb, 1707 + skb->data + hdr_len, tb2_len); 1657 1708 1658 1709 if (!ieee80211_has_morefrags(fc)) { 1659 1710 txq->need_update = 1; ··· 1699 1674 wait_write_ptr = 1; 1700 1675 txq->need_update = 0; 1701 1676 } 1702 - 1703 - /* Set up TFD's 2nd entry to point directly to remainder of skb, 1704 - * if any (802.11 null frames have no payload). 
*/ 1705 - secondlen = skb->len - hdr_len; 1706 - if (secondlen > 0) { 1707 - phys_addr = dma_map_single(trans->dev, skb->data + hdr_len, 1708 - secondlen, DMA_TO_DEVICE); 1709 - if (unlikely(dma_mapping_error(trans->dev, phys_addr))) { 1710 - dma_unmap_single(trans->dev, 1711 - dma_unmap_addr(out_meta, mapping), 1712 - dma_unmap_len(out_meta, len), 1713 - DMA_BIDIRECTIONAL); 1714 - goto out_err; 1715 - } 1716 - } 1717 - 1718 - /* Attach buffers to TFD */ 1719 - iwl_pcie_txq_build_tfd(trans, txq, txcmd_phys, firstlen, 1); 1720 - if (secondlen > 0) 1721 - iwl_pcie_txq_build_tfd(trans, txq, phys_addr, secondlen, 0); 1722 - 1723 - scratch_phys = txcmd_phys + sizeof(struct iwl_cmd_header) + 1724 - offsetof(struct iwl_tx_cmd, scratch); 1725 - 1726 - /* take back ownership of DMA buffer to enable update */ 1727 - dma_sync_single_for_cpu(trans->dev, txcmd_phys, firstlen, 1728 - DMA_BIDIRECTIONAL); 1729 - tx_cmd->dram_lsb_ptr = cpu_to_le32(scratch_phys); 1730 - tx_cmd->dram_msb_ptr = iwl_get_dma_hi_addr(scratch_phys); 1731 - 1732 - /* Set up entry for this TFD in Tx byte-count array */ 1733 - iwl_pcie_txq_update_byte_cnt_tbl(trans, txq, le16_to_cpu(tx_cmd->len)); 1734 - 1735 - dma_sync_single_for_device(trans->dev, txcmd_phys, firstlen, 1736 - DMA_BIDIRECTIONAL); 1737 - 1738 - trace_iwlwifi_dev_tx(trans->dev, skb, 1739 - &txq->tfds[txq->q.write_ptr], 1740 - sizeof(struct iwl_tfd), 1741 - &dev_cmd->hdr, firstlen, 1742 - skb->data + hdr_len, secondlen); 1743 - trace_iwlwifi_dev_tx_data(trans->dev, skb, 1744 - skb->data + hdr_len, secondlen); 1745 1677 1746 1678 /* start timer if queue currently empty */ 1747 1679 if (txq->need_update && q->read_ptr == q->write_ptr &&
+1
drivers/oprofile/oprofilefs.c
··· 276 276 .mount = oprofilefs_mount, 277 277 .kill_sb = kill_litter_super, 278 278 }; 279 + MODULE_ALIAS_FS("oprofilefs"); 279 280 280 281 281 282 int __init oprofilefs_register(void)
+7 -1
drivers/pci/pci-acpi.c
··· 331 331 } 332 332 } 333 333 334 + static bool pci_acpi_bus_match(struct device *dev) 335 + { 336 + return dev->bus == &pci_bus_type; 337 + } 338 + 334 339 static struct acpi_bus_type acpi_pci_bus = { 335 - .bus = &pci_bus_type, 340 + .name = "PCI", 341 + .match = pci_acpi_bus_match, 336 342 .find_device = acpi_pci_find_device, 337 343 .setup = pci_acpi_setup, 338 344 .cleanup = pci_acpi_cleanup,
+39 -2
drivers/platform/x86/chromeos_laptop.c
··· 23 23 24 24 #include <linux/dmi.h> 25 25 #include <linux/i2c.h> 26 + #include <linux/i2c/atmel_mxt_ts.h> 27 + #include <linux/input.h> 28 + #include <linux/interrupt.h> 26 29 #include <linux/module.h> 27 30 28 31 #define ATMEL_TP_I2C_ADDR 0x4b ··· 70 67 I2C_BOARD_INFO("tsl2563", TAOS_ALS_I2C_ADDR), 71 68 }; 72 69 70 + static struct mxt_platform_data atmel_224s_tp_platform_data = { 71 + .x_line = 18, 72 + .y_line = 12, 73 + .x_size = 102*20, 74 + .y_size = 68*20, 75 + .blen = 0x80, /* Gain setting is in upper 4 bits */ 76 + .threshold = 0x32, 77 + .voltage = 0, /* 3.3V */ 78 + .orient = MXT_VERTICAL_FLIP, 79 + .irqflags = IRQF_TRIGGER_FALLING, 80 + .is_tp = true, 81 + .key_map = { KEY_RESERVED, 82 + KEY_RESERVED, 83 + KEY_RESERVED, 84 + BTN_LEFT }, 85 + .config = NULL, 86 + .config_length = 0, 87 + }; 88 + 73 89 static struct i2c_board_info __initdata atmel_224s_tp_device = { 74 90 I2C_BOARD_INFO("atmel_mxt_tp", ATMEL_TP_I2C_ADDR), 75 - .platform_data = NULL, 91 + .platform_data = &atmel_224s_tp_platform_data, 76 92 .flags = I2C_CLIENT_WAKE, 93 + }; 94 + 95 + static struct mxt_platform_data atmel_1664s_platform_data = { 96 + .x_line = 32, 97 + .y_line = 50, 98 + .x_size = 1700, 99 + .y_size = 2560, 100 + .blen = 0x89, /* Gain setting is in upper 4 bits */ 101 + .threshold = 0x28, 102 + .voltage = 0, /* 3.3V */ 103 + .orient = MXT_ROTATED_90_COUNTER, 104 + .irqflags = IRQF_TRIGGER_FALLING, 105 + .is_tp = false, 106 + .config = NULL, 107 + .config_length = 0, 77 108 }; 78 109 79 110 static struct i2c_board_info __initdata atmel_1664s_device = { 80 111 I2C_BOARD_INFO("atmel_mxt_ts", ATMEL_TS_I2C_ADDR), 81 - .platform_data = NULL, 112 + .platform_data = &atmel_1664s_platform_data, 82 113 .flags = I2C_CLIENT_WAKE, 83 114 }; 84 115
+7 -1
drivers/pnp/pnpacpi/core.c
··· 353 353 /* complete initialization of a PNPACPI device includes having 354 354 * pnpdev->dev.archdata.acpi_handle point to its ACPI sibling. 355 355 */ 356 + static bool acpi_pnp_bus_match(struct device *dev) 357 + { 358 + return dev->bus == &pnp_bus_type; 359 + } 360 + 356 361 static struct acpi_bus_type __initdata acpi_pnp_bus = { 357 - .bus = &pnp_bus_type, 362 + .name = "PNP", 363 + .match = acpi_pnp_bus_match, 358 364 .find_device = acpi_pnp_find_device, 359 365 }; 360 366
+8 -4
drivers/regulator/core.c
··· 2830 2830 * regulator_allow_bypass - allow the regulator to go into bypass mode 2831 2831 * 2832 2832 * @regulator: Regulator to configure 2833 - * @allow: enable or disable bypass mode 2833 + * @enable: enable or disable bypass mode 2834 2834 * 2835 2835 * Allow the regulator to go into bypass mode if all other consumers 2836 2836 * for the regulator also enable bypass mode and the machine ··· 3057 3057 return 0; 3058 3058 3059 3059 err: 3060 - pr_err("Failed to enable %s: %d\n", consumers[i].supply, ret); 3061 - while (--i >= 0) 3062 - regulator_disable(consumers[i].consumer); 3060 + for (i = 0; i < num_consumers; i++) { 3061 + if (consumers[i].ret < 0) 3062 + pr_err("Failed to enable %s: %d\n", consumers[i].supply, 3063 + consumers[i].ret); 3064 + else 3065 + regulator_disable(consumers[i].consumer); 3066 + } 3063 3067 3064 3068 return ret; 3065 3069 }
+2 -2
drivers/regulator/db8500-prcmu.c
··· 528 528 return 0; 529 529 } 530 530 531 - static int __exit db8500_regulator_remove(struct platform_device *pdev) 531 + static int db8500_regulator_remove(struct platform_device *pdev) 532 532 { 533 533 int i; 534 534 ··· 553 553 .owner = THIS_MODULE, 554 554 }, 555 555 .probe = db8500_regulator_probe, 556 - .remove = __exit_p(db8500_regulator_remove), 556 + .remove = db8500_regulator_remove, 557 557 }; 558 558 559 559 static int __init db8500_regulator_init(void)
+2 -1
drivers/regulator/palmas-regulator.c
··· 4 4 * Copyright 2011-2012 Texas Instruments Inc. 5 5 * 6 6 * Author: Graeme Gregory <gg@slimlogic.co.uk> 7 + * Author: Ian Lartey <ian@slimlogic.co.uk> 7 8 * 8 9 * This program is free software; you can redistribute it and/or modify it 9 10 * under the terms of the GNU General Public License as published by the ··· 157 156 * 158 157 * So they are basically (maxV-minV)/stepV 159 158 */ 160 - #define PALMAS_SMPS_NUM_VOLTAGES 116 159 + #define PALMAS_SMPS_NUM_VOLTAGES 117 161 160 #define PALMAS_SMPS10_NUM_VOLTAGES 2 162 161 #define PALMAS_LDO_NUM_VOLTAGES 50 163 162
+4 -5
drivers/regulator/twl-regulator.c
··· 471 471 selector); 472 472 } 473 473 474 - static int twl4030ldo_get_voltage(struct regulator_dev *rdev) 474 + static int twl4030ldo_get_voltage_sel(struct regulator_dev *rdev) 475 475 { 476 476 struct twlreg_info *info = rdev_get_drvdata(rdev); 477 - int vsel = twlreg_read(info, TWL_MODULE_PM_RECEIVER, 478 - VREG_VOLTAGE); 477 + int vsel = twlreg_read(info, TWL_MODULE_PM_RECEIVER, VREG_VOLTAGE); 479 478 480 479 if (vsel < 0) 481 480 return vsel; 482 481 483 482 vsel &= info->table_len - 1; 484 - return LDO_MV(info->table[vsel]) * 1000; 483 + return vsel; 485 484 } 486 485 487 486 static struct regulator_ops twl4030ldo_ops = { 488 487 .list_voltage = twl4030ldo_list_voltage, 489 488 490 489 .set_voltage_sel = twl4030ldo_set_voltage_sel, 491 - .get_voltage = twl4030ldo_get_voltage, 490 + .get_voltage_sel = twl4030ldo_get_voltage_sel, 492 491 493 492 .enable = twl4030reg_enable, 494 493 .disable = twl4030reg_disable,
+6 -1
drivers/scsi/scsi_lib.c
··· 71 71 #ifdef CONFIG_ACPI 72 72 #include <acpi/acpi_bus.h> 73 73 74 + static bool acpi_scsi_bus_match(struct device *dev) 75 + { 76 + return dev->bus == &scsi_bus_type; 77 + } 78 + 74 79 int scsi_register_acpi_bus_type(struct acpi_bus_type *bus) 75 80 { 76 - bus->bus = &scsi_bus_type; 81 + bus->match = acpi_scsi_bus_match; 77 82 return register_acpi_bus_type(bus); 78 83 } 79 84 EXPORT_SYMBOL_GPL(scsi_register_acpi_bus_type);
+1
drivers/staging/ccg/f_fs.c
··· 1223 1223 .mount = ffs_fs_mount, 1224 1224 .kill_sb = ffs_fs_kill_sb, 1225 1225 }; 1226 + MODULE_ALIAS_FS("functionfs"); 1226 1227 1227 1228 1228 1229 /* Driver's main init/cleanup functions *************************************/
+7 -2
drivers/usb/core/usb-acpi.c
··· 210 210 return 0; 211 211 } 212 212 213 + static bool usb_acpi_bus_match(struct device *dev) 214 + { 215 + return is_usb_device(dev) || is_usb_port(dev); 216 + } 217 + 213 218 static struct acpi_bus_type usb_acpi_bus = { 214 - .bus = &usb_bus_type, 215 - .find_bridge = usb_acpi_find_device, 219 + .name = "USB", 220 + .match = usb_acpi_bus_match, 216 221 .find_device = usb_acpi_find_device, 217 222 }; 218 223
+1
drivers/usb/gadget/f_fs.c
··· 1235 1235 .mount = ffs_fs_mount, 1236 1236 .kill_sb = ffs_fs_kill_sb, 1237 1237 }; 1238 + MODULE_ALIAS_FS("functionfs"); 1238 1239 1239 1240 1240 1241 /* Driver's main init/cleanup functions *************************************/
+1
drivers/usb/gadget/inode.c
··· 2105 2105 .mount = gadgetfs_mount, 2106 2106 .kill_sb = gadgetfs_kill_sb, 2107 2107 }; 2108 + MODULE_ALIAS_FS("gadgetfs"); 2108 2109 2109 2110 /*----------------------------------------------------------------------*/ 2110 2111
+1
drivers/xen/xenfs/super.c
··· 75 75 .mount = xenfs_mount, 76 76 .kill_sb = kill_litter_super, 77 77 }; 78 + MODULE_ALIAS_FS("xenfs"); 78 79 79 80 static int __init xenfs_init(void) 80 81 {
+1
fs/9p/vfs_super.c
··· 365 365 .owner = THIS_MODULE, 366 366 .fs_flags = FS_RENAME_DOES_D_MOVE, 367 367 }; 368 + MODULE_ALIAS_FS("9p");
+1
fs/adfs/super.c
··· 524 524 .kill_sb = kill_block_super, 525 525 .fs_flags = FS_REQUIRES_DEV, 526 526 }; 527 + MODULE_ALIAS_FS("adfs"); 527 528 528 529 static int __init init_adfs_fs(void) 529 530 {
+1
fs/affs/super.c
··· 622 622 .kill_sb = kill_block_super, 623 623 .fs_flags = FS_REQUIRES_DEV, 624 624 }; 625 + MODULE_ALIAS_FS("affs"); 625 626 626 627 static int __init init_affs_fs(void) 627 628 {
+1
fs/afs/super.c
··· 45 45 .kill_sb = afs_kill_super, 46 46 .fs_flags = 0, 47 47 }; 48 + MODULE_ALIAS_FS("afs"); 48 49 49 50 static const struct super_operations afs_super_ops = { 50 51 .statfs = afs_statfs,
+1
fs/autofs4/init.c
··· 26 26 .mount = autofs_mount, 27 27 .kill_sb = autofs4_kill_sb, 28 28 }; 29 + MODULE_ALIAS_FS("autofs"); 29 30 30 31 static int __init init_autofs4_fs(void) 31 32 {
+1
fs/befs/linuxvfs.c
··· 951 951 .kill_sb = kill_block_super, 952 952 .fs_flags = FS_REQUIRES_DEV, 953 953 }; 954 + MODULE_ALIAS_FS("befs"); 954 955 955 956 static int __init 956 957 init_befs_fs(void)
+1
fs/bfs/inode.c
··· 473 473 .kill_sb = kill_block_super, 474 474 .fs_flags = FS_REQUIRES_DEV, 475 475 }; 476 + MODULE_ALIAS_FS("bfs"); 476 477 477 478 static int __init init_bfs_fs(void) 478 479 {
+1
fs/binfmt_misc.c
··· 720 720 .mount = bm_mount, 721 721 .kill_sb = kill_litter_super, 722 722 }; 723 + MODULE_ALIAS_FS("binfmt_misc"); 723 724 724 725 static int __init init_misc_binfmt(void) 725 726 {
+92 -63
fs/btrfs/delayed-inode.c
··· 22 22 #include "disk-io.h" 23 23 #include "transaction.h" 24 24 25 - #define BTRFS_DELAYED_WRITEBACK 400 26 - #define BTRFS_DELAYED_BACKGROUND 100 25 + #define BTRFS_DELAYED_WRITEBACK 512 26 + #define BTRFS_DELAYED_BACKGROUND 128 27 + #define BTRFS_DELAYED_BATCH 16 27 28 28 29 static struct kmem_cache *delayed_node_cache; 29 30 ··· 495 494 BTRFS_DELAYED_DELETION_ITEM); 496 495 } 497 496 497 + static void finish_one_item(struct btrfs_delayed_root *delayed_root) 498 + { 499 + int seq = atomic_inc_return(&delayed_root->items_seq); 500 + if ((atomic_dec_return(&delayed_root->items) < 501 + BTRFS_DELAYED_BACKGROUND || seq % BTRFS_DELAYED_BATCH == 0) && 502 + waitqueue_active(&delayed_root->wait)) 503 + wake_up(&delayed_root->wait); 504 + } 505 + 498 506 static void __btrfs_remove_delayed_item(struct btrfs_delayed_item *delayed_item) 499 507 { 500 508 struct rb_root *root; ··· 522 512 523 513 rb_erase(&delayed_item->rb_node, root); 524 514 delayed_item->delayed_node->count--; 525 - if (atomic_dec_return(&delayed_root->items) < 526 - BTRFS_DELAYED_BACKGROUND && 527 - waitqueue_active(&delayed_root->wait)) 528 - wake_up(&delayed_root->wait); 515 + 516 + finish_one_item(delayed_root); 529 517 } 530 518 531 519 static void btrfs_release_delayed_item(struct btrfs_delayed_item *item) ··· 1064 1056 delayed_node->count--; 1065 1057 1066 1058 delayed_root = delayed_node->root->fs_info->delayed_root; 1067 - if (atomic_dec_return(&delayed_root->items) < 1068 - BTRFS_DELAYED_BACKGROUND && 1069 - waitqueue_active(&delayed_root->wait)) 1070 - wake_up(&delayed_root->wait); 1059 + finish_one_item(delayed_root); 1071 1060 } 1072 1061 } 1073 1062 ··· 1309 1304 btrfs_release_delayed_node(delayed_node); 1310 1305 } 1311 1306 1312 - struct btrfs_async_delayed_node { 1313 - struct btrfs_root *root; 1314 - struct btrfs_delayed_node *delayed_node; 1307 + struct btrfs_async_delayed_work { 1308 + struct btrfs_delayed_root *delayed_root; 1309 + int nr; 1315 1310 struct btrfs_work work; 1316 
1311 }; 1317 1312 1318 - static void btrfs_async_run_delayed_node_done(struct btrfs_work *work) 1313 + static void btrfs_async_run_delayed_root(struct btrfs_work *work) 1319 1314 { 1320 - struct btrfs_async_delayed_node *async_node; 1315 + struct btrfs_async_delayed_work *async_work; 1316 + struct btrfs_delayed_root *delayed_root; 1321 1317 struct btrfs_trans_handle *trans; 1322 1318 struct btrfs_path *path; 1323 1319 struct btrfs_delayed_node *delayed_node = NULL; 1324 1320 struct btrfs_root *root; 1325 1321 struct btrfs_block_rsv *block_rsv; 1326 - int need_requeue = 0; 1322 + int total_done = 0; 1327 1323 1328 - async_node = container_of(work, struct btrfs_async_delayed_node, work); 1324 + async_work = container_of(work, struct btrfs_async_delayed_work, work); 1325 + delayed_root = async_work->delayed_root; 1329 1326 1330 1327 path = btrfs_alloc_path(); 1331 1328 if (!path) 1332 1329 goto out; 1333 - path->leave_spinning = 1; 1334 1330 1335 - delayed_node = async_node->delayed_node; 1331 + again: 1332 + if (atomic_read(&delayed_root->items) < BTRFS_DELAYED_BACKGROUND / 2) 1333 + goto free_path; 1334 + 1335 + delayed_node = btrfs_first_prepared_delayed_node(delayed_root); 1336 + if (!delayed_node) 1337 + goto free_path; 1338 + 1339 + path->leave_spinning = 1; 1336 1340 root = delayed_node->root; 1337 1341 1338 1342 trans = btrfs_join_transaction(root); 1339 1343 if (IS_ERR(trans)) 1340 - goto free_path; 1344 + goto release_path; 1341 1345 1342 1346 block_rsv = trans->block_rsv; 1343 1347 trans->block_rsv = &root->fs_info->delayed_block_rsv; ··· 1377 1363 * Task1 will sleep until the transaction is commited. 
1378 1364 */ 1379 1365 mutex_lock(&delayed_node->mutex); 1380 - if (delayed_node->count) 1381 - need_requeue = 1; 1382 - else 1383 - btrfs_dequeue_delayed_node(root->fs_info->delayed_root, 1384 - delayed_node); 1366 + btrfs_dequeue_delayed_node(root->fs_info->delayed_root, delayed_node); 1385 1367 mutex_unlock(&delayed_node->mutex); 1386 1368 1387 1369 trans->block_rsv = block_rsv; 1388 1370 btrfs_end_transaction_dmeta(trans, root); 1389 1371 btrfs_btree_balance_dirty_nodelay(root); 1372 + 1373 + release_path: 1374 + btrfs_release_path(path); 1375 + total_done++; 1376 + 1377 + btrfs_release_prepared_delayed_node(delayed_node); 1378 + if (async_work->nr == 0 || total_done < async_work->nr) 1379 + goto again; 1380 + 1390 1381 free_path: 1391 1382 btrfs_free_path(path); 1392 1383 out: 1393 - if (need_requeue) 1394 - btrfs_requeue_work(&async_node->work); 1395 - else { 1396 - btrfs_release_prepared_delayed_node(delayed_node); 1397 - kfree(async_node); 1398 - } 1384 + wake_up(&delayed_root->wait); 1385 + kfree(async_work); 1399 1386 } 1400 1387 1401 - static int btrfs_wq_run_delayed_node(struct btrfs_delayed_root *delayed_root, 1402 - struct btrfs_root *root, int all) 1403 - { 1404 - struct btrfs_async_delayed_node *async_node; 1405 - struct btrfs_delayed_node *curr; 1406 - int count = 0; 1407 1388 1408 - again: 1409 - curr = btrfs_first_prepared_delayed_node(delayed_root); 1410 - if (!curr) 1389 + static int btrfs_wq_run_delayed_node(struct btrfs_delayed_root *delayed_root, 1390 + struct btrfs_root *root, int nr) 1391 + { 1392 + struct btrfs_async_delayed_work *async_work; 1393 + 1394 + if (atomic_read(&delayed_root->items) < BTRFS_DELAYED_BACKGROUND) 1411 1395 return 0; 1412 1396 1413 - async_node = kmalloc(sizeof(*async_node), GFP_NOFS); 1414 - if (!async_node) { 1415 - btrfs_release_prepared_delayed_node(curr); 1397 + async_work = kmalloc(sizeof(*async_work), GFP_NOFS); 1398 + if (!async_work) 1416 1399 return -ENOMEM; 1417 - } 1418 1400 1419 - async_node->root = 
root; 1420 - async_node->delayed_node = curr; 1401 + async_work->delayed_root = delayed_root; 1402 + async_work->work.func = btrfs_async_run_delayed_root; 1403 + async_work->work.flags = 0; 1404 + async_work->nr = nr; 1421 1405 1422 - async_node->work.func = btrfs_async_run_delayed_node_done; 1423 - async_node->work.flags = 0; 1424 - 1425 - btrfs_queue_worker(&root->fs_info->delayed_workers, &async_node->work); 1426 - count++; 1427 - 1428 - if (all || count < 4) 1429 - goto again; 1430 - 1406 + btrfs_queue_worker(&root->fs_info->delayed_workers, &async_work->work); 1431 1407 return 0; 1432 1408 } 1433 1409 ··· 1428 1424 WARN_ON(btrfs_first_delayed_node(delayed_root)); 1429 1425 } 1430 1426 1427 + static int refs_newer(struct btrfs_delayed_root *delayed_root, 1428 + int seq, int count) 1429 + { 1430 + int val = atomic_read(&delayed_root->items_seq); 1431 + 1432 + if (val < seq || val >= seq + count) 1433 + return 1; 1434 + return 0; 1435 + } 1436 + 1431 1437 void btrfs_balance_delayed_items(struct btrfs_root *root) 1432 1438 { 1433 1439 struct btrfs_delayed_root *delayed_root; 1440 + int seq; 1434 1441 1435 1442 delayed_root = btrfs_get_delayed_root(root); 1436 1443 1437 1444 if (atomic_read(&delayed_root->items) < BTRFS_DELAYED_BACKGROUND) 1438 1445 return; 1439 1446 1447 + seq = atomic_read(&delayed_root->items_seq); 1448 + 1440 1449 if (atomic_read(&delayed_root->items) >= BTRFS_DELAYED_WRITEBACK) { 1441 1450 int ret; 1442 - ret = btrfs_wq_run_delayed_node(delayed_root, root, 1); 1451 + DEFINE_WAIT(__wait); 1452 + 1453 + ret = btrfs_wq_run_delayed_node(delayed_root, root, 0); 1443 1454 if (ret) 1444 1455 return; 1445 1456 1446 - wait_event_interruptible_timeout( 1447 - delayed_root->wait, 1448 - (atomic_read(&delayed_root->items) < 1449 - BTRFS_DELAYED_BACKGROUND), 1450 - HZ); 1451 - return; 1457 + while (1) { 1458 + prepare_to_wait(&delayed_root->wait, &__wait, 1459 + TASK_INTERRUPTIBLE); 1460 + 1461 + if (refs_newer(delayed_root, seq, 1462 + 
BTRFS_DELAYED_BATCH) || 1463 + atomic_read(&delayed_root->items) < 1464 + BTRFS_DELAYED_BACKGROUND) { 1465 + break; 1466 + } 1467 + if (!signal_pending(current)) 1468 + schedule(); 1469 + else 1470 + break; 1471 + } 1472 + finish_wait(&delayed_root->wait, &__wait); 1452 1473 } 1453 1474 1454 - btrfs_wq_run_delayed_node(delayed_root, root, 0); 1475 + btrfs_wq_run_delayed_node(delayed_root, root, BTRFS_DELAYED_BATCH); 1455 1476 } 1456 1477 1457 1478 /* Will return 0 or -ENOMEM */
+2
fs/btrfs/delayed-inode.h
··· 43 43 */ 44 44 struct list_head prepare_list; 45 45 atomic_t items; /* for delayed items */ 46 + atomic_t items_seq; /* for delayed items */ 46 47 int nodes; /* for delayed nodes */ 47 48 wait_queue_head_t wait; 48 49 }; ··· 87 86 struct btrfs_delayed_root *delayed_root) 88 87 { 89 88 atomic_set(&delayed_root->items, 0); 89 + atomic_set(&delayed_root->items_seq, 0); 90 90 delayed_root->nodes = 0; 91 91 spin_lock_init(&delayed_root->lock); 92 92 init_waitqueue_head(&delayed_root->wait);
+7 -9
fs/btrfs/disk-io.c
··· 62 62 static void btrfs_destroy_ordered_extents(struct btrfs_root *root); 63 63 static int btrfs_destroy_delayed_refs(struct btrfs_transaction *trans, 64 64 struct btrfs_root *root); 65 - static void btrfs_destroy_pending_snapshots(struct btrfs_transaction *t); 65 + static void btrfs_evict_pending_snapshots(struct btrfs_transaction *t); 66 66 static void btrfs_destroy_delalloc_inodes(struct btrfs_root *root); 67 67 static int btrfs_destroy_marked_extents(struct btrfs_root *root, 68 68 struct extent_io_tree *dirty_pages, ··· 3687 3687 return ret; 3688 3688 } 3689 3689 3690 - static void btrfs_destroy_pending_snapshots(struct btrfs_transaction *t) 3690 + static void btrfs_evict_pending_snapshots(struct btrfs_transaction *t) 3691 3691 { 3692 3692 struct btrfs_pending_snapshot *snapshot; 3693 3693 struct list_head splice; ··· 3700 3700 snapshot = list_entry(splice.next, 3701 3701 struct btrfs_pending_snapshot, 3702 3702 list); 3703 - 3703 + snapshot->error = -ECANCELED; 3704 3704 list_del_init(&snapshot->list); 3705 - 3706 - kfree(snapshot); 3707 3705 } 3708 3706 } 3709 3707 ··· 3838 3840 cur_trans->blocked = 1; 3839 3841 wake_up(&root->fs_info->transaction_blocked_wait); 3840 3842 3843 + btrfs_evict_pending_snapshots(cur_trans); 3844 + 3841 3845 cur_trans->blocked = 0; 3842 3846 wake_up(&root->fs_info->transaction_wait); 3843 3847 ··· 3848 3848 3849 3849 btrfs_destroy_delayed_inodes(root); 3850 3850 btrfs_assert_delayed_root_empty(root); 3851 - 3852 - btrfs_destroy_pending_snapshots(cur_trans); 3853 3851 3854 3852 btrfs_destroy_marked_extents(root, &cur_trans->dirty_pages, 3855 3853 EXTENT_DIRTY); ··· 3892 3894 if (waitqueue_active(&root->fs_info->transaction_blocked_wait)) 3893 3895 wake_up(&root->fs_info->transaction_blocked_wait); 3894 3896 3897 + btrfs_evict_pending_snapshots(t); 3898 + 3895 3899 t->blocked = 0; 3896 3900 smp_mb(); 3897 3901 if (waitqueue_active(&root->fs_info->transaction_wait)) ··· 3906 3906 3907 3907 btrfs_destroy_delayed_inodes(root); 3908 
3908 btrfs_assert_delayed_root_empty(root); 3909 - 3910 - btrfs_destroy_pending_snapshots(t); 3911 3909 3912 3910 btrfs_destroy_delalloc_inodes(root); 3913 3911
+4 -2
fs/btrfs/inode.c
··· 8502 8502 struct btrfs_key ins; 8503 8503 u64 cur_offset = start; 8504 8504 u64 i_size; 8505 + u64 cur_bytes; 8505 8506 int ret = 0; 8506 8507 bool own_trans = true; 8507 8508 ··· 8517 8516 } 8518 8517 } 8519 8518 8520 - ret = btrfs_reserve_extent(trans, root, 8521 - min(num_bytes, 256ULL * 1024 * 1024), 8519 + cur_bytes = min(num_bytes, 256ULL * 1024 * 1024); 8520 + cur_bytes = max(cur_bytes, min_size); 8521 + ret = btrfs_reserve_extent(trans, root, cur_bytes, 8522 8522 min_size, 0, *alloc_hint, &ins, 1); 8523 8523 if (ret) { 8524 8524 if (own_trans)
+5 -13
fs/btrfs/ioctl.c
··· 527 527 if (async_transid) { 528 528 *async_transid = trans->transid; 529 529 err = btrfs_commit_transaction_async(trans, root, 1); 530 + if (err) 531 + err = btrfs_commit_transaction(trans, root); 530 532 } else { 531 533 err = btrfs_commit_transaction(trans, root); 532 534 } ··· 594 592 *async_transid = trans->transid; 595 593 ret = btrfs_commit_transaction_async(trans, 596 594 root->fs_info->extent_root, 1); 595 + if (ret) 596 + ret = btrfs_commit_transaction(trans, root); 597 597 } else { 598 598 ret = btrfs_commit_transaction(trans, 599 599 root->fs_info->extent_root); 600 600 } 601 - if (ret) { 602 - /* cleanup_transaction has freed this for us */ 603 - if (trans->aborted) 604 - pending_snapshot = NULL; 601 + if (ret) 605 602 goto fail; 606 - } 607 603 608 604 ret = pending_snapshot->error; 609 605 if (ret) ··· 2245 2245 if (ret) 2246 2246 return ret; 2247 2247 2248 - if (atomic_xchg(&root->fs_info->mutually_exclusive_operation_running, 2249 - 1)) { 2250 - pr_info("btrfs: dev add/delete/balance/replace/resize operation in progress\n"); 2251 - mnt_drop_write_file(file); 2252 - return -EINVAL; 2253 - } 2254 - 2255 2248 if (btrfs_root_readonly(root)) { 2256 2249 ret = -EROFS; 2257 2250 goto out; ··· 2299 2306 ret = -EINVAL; 2300 2307 } 2301 2308 out: 2302 - atomic_set(&root->fs_info->mutually_exclusive_operation_running, 0); 2303 2309 mnt_drop_write_file(file); 2304 2310 return ret; 2305 2311 }
+57 -17
fs/btrfs/relocation.c
··· 1269 1269 } 1270 1270 spin_unlock(&rc->reloc_root_tree.lock); 1271 1271 1272 + if (!node) 1273 + return 0; 1272 1274 BUG_ON((struct btrfs_root *)node->data != root); 1273 1275 1274 1276 if (!del) { ··· 2240 2238 } 2241 2239 2242 2240 static noinline_for_stack 2241 + void free_reloc_roots(struct list_head *list) 2242 + { 2243 + struct btrfs_root *reloc_root; 2244 + 2245 + while (!list_empty(list)) { 2246 + reloc_root = list_entry(list->next, struct btrfs_root, 2247 + root_list); 2248 + __update_reloc_root(reloc_root, 1); 2249 + free_extent_buffer(reloc_root->node); 2250 + free_extent_buffer(reloc_root->commit_root); 2251 + kfree(reloc_root); 2252 + } 2253 + } 2254 + 2255 + static noinline_for_stack 2243 2256 int merge_reloc_roots(struct reloc_control *rc) 2244 2257 { 2245 2258 struct btrfs_root *root; 2246 2259 struct btrfs_root *reloc_root; 2247 2260 LIST_HEAD(reloc_roots); 2248 2261 int found = 0; 2249 - int ret; 2262 + int ret = 0; 2250 2263 again: 2251 2264 root = rc->extent_root; 2252 2265 ··· 2287 2270 BUG_ON(root->reloc_root != reloc_root); 2288 2271 2289 2272 ret = merge_reloc_root(rc, root); 2290 - BUG_ON(ret); 2273 + if (ret) 2274 + goto out; 2291 2275 } else { 2292 2276 list_del_init(&reloc_root->root_list); 2293 2277 } 2294 2278 ret = btrfs_drop_snapshot(reloc_root, rc->block_rsv, 0, 1); 2295 - BUG_ON(ret < 0); 2279 + if (ret < 0) { 2280 + if (list_empty(&reloc_root->root_list)) 2281 + list_add_tail(&reloc_root->root_list, 2282 + &reloc_roots); 2283 + goto out; 2284 + } 2296 2285 } 2297 2286 2298 2287 if (found) { 2299 2288 found = 0; 2300 2289 goto again; 2301 2290 } 2291 + out: 2292 + if (ret) { 2293 + btrfs_std_error(root->fs_info, ret); 2294 + if (!list_empty(&reloc_roots)) 2295 + free_reloc_roots(&reloc_roots); 2296 + } 2297 + 2302 2298 BUG_ON(!RB_EMPTY_ROOT(&rc->reloc_root_tree.rb_root)); 2303 - return 0; 2299 + return ret; 2304 2300 } 2305 2301 2306 2302 static void free_block_list(struct rb_root *blocks) ··· 2848 2818 int err = 0; 2849 2819 
2850 2820 path = btrfs_alloc_path(); 2851 - if (!path) 2852 - return -ENOMEM; 2821 + if (!path) { 2822 + err = -ENOMEM; 2823 + goto out_path; 2824 + } 2853 2825 2854 2826 rb_node = rb_first(blocks); 2855 2827 while (rb_node) { ··· 2890 2858 rb_node = rb_next(rb_node); 2891 2859 } 2892 2860 out: 2893 - free_block_list(blocks); 2894 2861 err = finish_pending_nodes(trans, rc, path, err); 2895 2862 2896 2863 btrfs_free_path(path); 2864 + out_path: 2865 + free_block_list(blocks); 2897 2866 return err; 2898 2867 } 2899 2868 ··· 3731 3698 set_reloc_control(rc); 3732 3699 3733 3700 trans = btrfs_join_transaction(rc->extent_root); 3734 - BUG_ON(IS_ERR(trans)); 3701 + if (IS_ERR(trans)) { 3702 + unset_reloc_control(rc); 3703 + /* 3704 + * extent tree is not a ref_cow tree and has no reloc_root to 3705 + * cleanup. And callers are responsible to free the above 3706 + * block rsv. 3707 + */ 3708 + return PTR_ERR(trans); 3709 + } 3735 3710 btrfs_commit_transaction(trans, rc->extent_root); 3736 3711 return 0; 3737 3712 } ··· 3771 3730 while (1) { 3772 3731 progress++; 3773 3732 trans = btrfs_start_transaction(rc->extent_root, 0); 3774 - BUG_ON(IS_ERR(trans)); 3733 + if (IS_ERR(trans)) { 3734 + err = PTR_ERR(trans); 3735 + trans = NULL; 3736 + break; 3737 + } 3775 3738 restart: 3776 3739 if (update_backref_cache(trans, &rc->backref_cache)) { 3777 3740 btrfs_end_transaction(trans, rc->extent_root); ··· 4309 4264 out_free: 4310 4265 kfree(rc); 4311 4266 out: 4312 - while (!list_empty(&reloc_roots)) { 4313 - reloc_root = list_entry(reloc_roots.next, 4314 - struct btrfs_root, root_list); 4315 - list_del(&reloc_root->root_list); 4316 - free_extent_buffer(reloc_root->node); 4317 - free_extent_buffer(reloc_root->commit_root); 4318 - kfree(reloc_root); 4319 - } 4267 + if (!list_empty(&reloc_roots)) 4268 + free_reloc_roots(&reloc_roots); 4269 + 4320 4270 btrfs_free_path(path); 4321 4271 4322 4272 if (err == 0) {
+1
fs/btrfs/super.c
··· 1558 1558 .kill_sb = btrfs_kill_super, 1559 1559 .fs_flags = FS_REQUIRES_DEV, 1560 1560 }; 1561 + MODULE_ALIAS_FS("btrfs"); 1561 1562 1562 1563 /* 1563 1564 * used by btrfsctl to scan devices when no FS is mounted
+40 -25
fs/btrfs/transaction.c
··· 1052 1052 1053 1053 /* 1054 1054 * new snapshots need to be created at a very specific time in the 1055 - * transaction commit. This does the actual creation 1055 + * transaction commit. This does the actual creation. 1056 + * 1057 + * Note: 1058 + * If the error which may affect the commitment of the current transaction 1059 + * happens, we should return the error number. If the error which just affect 1060 + * the creation of the pending snapshots, just return 0. 1056 1061 */ 1057 1062 static noinline int create_pending_snapshot(struct btrfs_trans_handle *trans, 1058 1063 struct btrfs_fs_info *fs_info, ··· 1076 1071 struct extent_buffer *tmp; 1077 1072 struct extent_buffer *old; 1078 1073 struct timespec cur_time = CURRENT_TIME; 1079 - int ret; 1074 + int ret = 0; 1080 1075 u64 to_reserve = 0; 1081 1076 u64 index = 0; 1082 1077 u64 objectid; ··· 1085 1080 1086 1081 path = btrfs_alloc_path(); 1087 1082 if (!path) { 1088 - ret = pending->error = -ENOMEM; 1089 - return ret; 1083 + pending->error = -ENOMEM; 1084 + return 0; 1090 1085 } 1091 1086 1092 1087 new_root_item = kmalloc(sizeof(*new_root_item), GFP_NOFS); 1093 1088 if (!new_root_item) { 1094 - ret = pending->error = -ENOMEM; 1089 + pending->error = -ENOMEM; 1095 1090 goto root_item_alloc_fail; 1096 1091 } 1097 1092 1098 - ret = btrfs_find_free_objectid(tree_root, &objectid); 1099 - if (ret) { 1100 - pending->error = ret; 1093 + pending->error = btrfs_find_free_objectid(tree_root, &objectid); 1094 + if (pending->error) 1101 1095 goto no_free_objectid; 1102 - } 1103 1096 1104 1097 btrfs_reloc_pre_snapshot(trans, pending, &to_reserve); 1105 1098 1106 1099 if (to_reserve > 0) { 1107 - ret = btrfs_block_rsv_add(root, &pending->block_rsv, 1108 - to_reserve, 1109 - BTRFS_RESERVE_NO_FLUSH); 1110 - if (ret) { 1111 - pending->error = ret; 1100 + pending->error = btrfs_block_rsv_add(root, 1101 + &pending->block_rsv, 1102 + to_reserve, 1103 + BTRFS_RESERVE_NO_FLUSH); 1104 + if (pending->error) 1112 1105 goto no_free_objectid;
1113 - } 1114 1106 } 1115 1107 1116 - ret = btrfs_qgroup_inherit(trans, fs_info, root->root_key.objectid, 1117 - objectid, pending->inherit); 1118 - if (ret) { 1119 - pending->error = ret; 1108 + pending->error = btrfs_qgroup_inherit(trans, fs_info, 1109 + root->root_key.objectid, 1110 + objectid, pending->inherit); 1111 + if (pending->error) 1120 1112 goto no_free_objectid; 1121 1113 - } 1122 1114 1123 1115 key.objectid = objectid; 1124 1116 key.offset = (u64)-1; ··· 1142 1141 dentry->d_name.len, 0); 1143 1142 if (dir_item != NULL && !IS_ERR(dir_item)) { 1144 1143 pending->error = -EEXIST; 1145 - goto fail; 1144 + goto dir_item_existed; 1146 1145 } else if (IS_ERR(dir_item)) { 1147 1146 ret = PTR_ERR(dir_item); 1148 1147 btrfs_abort_transaction(trans, root, ret); ··· 1273 1272 if (ret) 1274 1273 btrfs_abort_transaction(trans, root, ret); 1275 1274 fail: 1275 + pending->error = ret; 1276 + dir_item_existed: 1276 1277 trans->block_rsv = rsv; 1277 1278 trans->bytes_reserved = 0; 1278 1279 no_free_objectid: ··· 1290 1287 static noinline int create_pending_snapshots(struct btrfs_trans_handle *trans, 1291 1288 struct btrfs_fs_info *fs_info) 1292 1289 { 1293 - struct btrfs_pending_snapshot *pending; 1290 + struct btrfs_pending_snapshot *pending, *next; 1294 1291 struct list_head *head = &trans->transaction->pending_snapshots; 1292 + int ret = 0; 1295 1293 1296 - list_for_each_entry(pending, head, list) 1297 - create_pending_snapshot(trans, fs_info, pending); 1298 - return 0; 1294 + list_for_each_entry_safe(pending, next, head, list) { 1295 + list_del(&pending->list); 1296 + ret = create_pending_snapshot(trans, fs_info, pending); 1297 + if (ret) 1298 + break; 1299 + } 1300 + return ret; 1299 1301 } 1300 1302 1301 1303 static void update_super_roots(struct btrfs_root *root) ··· 1456 1448 btrfs_abort_transaction(trans, root, err); 1457 1449 1458 1450 spin_lock(&root->fs_info->trans_lock); 1451 + 1452 + if (list_empty(&cur_trans->list)) {
1453 + spin_unlock(&root->fs_info->trans_lock); 1454 + btrfs_end_transaction(trans, root); 1455 + return; 1456 + } 1457 + 1459 1458 list_del_init(&cur_trans->list); 1460 1459 if (cur_trans == root->fs_info->running_transaction) { 1461 1460 root->fs_info->trans_no_join = 1;
+4 -1
fs/btrfs/tree-log.c
··· 1382 1382 1383 1383 btrfs_release_path(path); 1384 1384 if (ret == 0) { 1385 - btrfs_inc_nlink(inode); 1385 + if (!inode->i_nlink) 1386 + set_nlink(inode, 1); 1387 + else 1388 + btrfs_inc_nlink(inode); 1386 1389 ret = btrfs_update_inode(trans, root, inode); 1387 1390 } else if (ret == -EEXIST) { 1388 1391 ret = 0;
+12 -2
fs/btrfs/volumes.c
··· 2379 2379 return ret; 2380 2380 2381 2381 trans = btrfs_start_transaction(root, 0); 2382 - BUG_ON(IS_ERR(trans)); 2382 + if (IS_ERR(trans)) { 2383 + ret = PTR_ERR(trans); 2384 + btrfs_std_error(root->fs_info, ret); 2385 + return ret; 2386 + } 2383 2387 2384 2388 lock_chunks(root); 2385 2389 ··· 3054 3050 3055 3051 unset_balance_control(fs_info); 3056 3052 ret = del_balance_item(fs_info->tree_root); 3057 - BUG_ON(ret); 3053 + if (ret) 3054 + btrfs_std_error(fs_info, ret); 3058 3055 3059 3056 atomic_set(&fs_info->mutually_exclusive_operation_running, 0); 3060 3057 } ··· 3233 3228 if (bargs) { 3234 3229 memset(bargs, 0, sizeof(*bargs)); 3235 3230 update_ioctl_balance_args(fs_info, 0, bargs); 3231 + } 3232 + 3233 + if ((ret && ret != -ECANCELED && ret != -ENOSPC) || 3234 + balance_need_close(fs_info)) { 3235 + __cancel_balance(fs_info); 3236 3236 } 3237 3237 3238 3238 wake_up(&fs_info->balance_wait_q);
+1
fs/ceph/super.c
··· 952 952 .kill_sb = ceph_kill_sb, 953 953 .fs_flags = FS_RENAME_DOES_D_MOVE, 954 954 }; 955 + MODULE_ALIAS_FS("ceph"); 955 956 956 957 #define _STRINGIFY(x) #x 957 958 #define STRINGIFY(x) _STRINGIFY(x)
+1 -1
fs/cifs/cifssmb.c
··· 1909 1909 } while (rc == -EAGAIN); 1910 1910 1911 1911 for (i = 0; i < wdata->nr_pages; i++) { 1912 + unlock_page(wdata->pages[i]); 1912 1913 if (rc != 0) { 1913 1914 SetPageError(wdata->pages[i]); 1914 1915 end_page_writeback(wdata->pages[i]); 1915 1916 page_cache_release(wdata->pages[i]); 1916 1917 } 1917 - unlock_page(wdata->pages[i]); 1918 1918 } 1919 1919 1920 1920 mapping_set_error(inode->i_mapping, rc);
+1 -15
fs/cifs/connect.c
··· 97 97 Opt_user, Opt_pass, Opt_ip, 98 98 Opt_unc, Opt_domain, 99 99 Opt_srcaddr, Opt_prefixpath, 100 - Opt_iocharset, Opt_sockopt, 100 + Opt_iocharset, 101 101 Opt_netbiosname, Opt_servern, 102 102 Opt_ver, Opt_vers, Opt_sec, Opt_cache, 103 103 ··· 202 202 { Opt_srcaddr, "srcaddr=%s" }, 203 203 { Opt_prefixpath, "prefixpath=%s" }, 204 204 { Opt_iocharset, "iocharset=%s" }, 205 - { Opt_sockopt, "sockopt=%s" }, 206 205 { Opt_netbiosname, "netbiosname=%s" }, 207 206 { Opt_servern, "servern=%s" }, 208 207 { Opt_ver, "ver=%s" }, ··· 1750 1751 * is used by caller 1751 1752 */ 1752 1753 cFYI(1, "iocharset set to %s", string); 1753 - break; 1754 - case Opt_sockopt: 1755 - string = match_strdup(args); 1756 - if (string == NULL) 1757 - goto out_nomem; 1758 - 1759 - if (strnicmp(string, "TCP_NODELAY", 11) == 0) { 1760 - printk(KERN_WARNING "CIFS: the " 1761 - "sockopt=TCP_NODELAY option has been " 1762 - "deprecated and will be removed " 1763 - "in 3.9\n"); 1764 - vol->sockopt_tcp_nodelay = 1; 1765 - } 1766 1754 break; 1767 1755 case Opt_netbiosname: 1768 1756 string = match_strdup(args);
+10 -1
fs/cifs/inode.c
··· 995 995 return PTR_ERR(tlink); 996 996 tcon = tlink_tcon(tlink); 997 997 998 + /* 999 + * We cannot rename the file if the server doesn't support 1000 + * CAP_INFOLEVEL_PASSTHRU 1001 + */ 1002 + if (!(tcon->ses->capabilities & CAP_INFOLEVEL_PASSTHRU)) { 1003 + rc = -EBUSY; 1004 + goto out; 1005 + } 1006 + 998 1007 rc = CIFSSMBOpen(xid, tcon, full_path, FILE_OPEN, 999 1008 DELETE|FILE_WRITE_ATTRIBUTES, CREATE_NOT_DIR, 1000 1009 &netfid, &oplock, NULL, cifs_sb->local_nls, ··· 1032 1023 current->tgid); 1033 1024 /* although we would like to mark the file hidden 1034 1025 if that fails we will still try to rename it */ 1035 - if (rc != 0) 1026 + if (!rc) 1036 1027 cifsInode->cifsAttrs = dosattr; 1037 1028 else 1038 1029 dosattr = origattr; /* since not able to change them */
+1
fs/cifs/smb2ops.c
··· 744 744 .cap_unix = 0, 745 745 .cap_nt_find = SMB2_NT_FIND, 746 746 .cap_large_files = SMB2_LARGE_FILES, 747 + .oplock_read = SMB2_OPLOCK_LEVEL_II, 747 748 };
+1
fs/coda/inode.c
··· 329 329 .kill_sb = kill_anon_super, 330 330 .fs_flags = FS_BINARY_MOUNTDATA, 331 331 }; 332 + MODULE_ALIAS_FS("coda"); 332 333
+1
fs/configfs/mount.c
··· 114 114 .mount = configfs_do_mount, 115 115 .kill_sb = kill_litter_super, 116 116 }; 117 + MODULE_ALIAS_FS("configfs"); 117 118 118 119 struct dentry *configfs_pin_fs(void) 119 120 {
+1
fs/cramfs/inode.c
··· 573 573 .kill_sb = kill_block_super, 574 574 .fs_flags = FS_REQUIRES_DEV, 575 575 }; 576 + MODULE_ALIAS_FS("cramfs"); 576 577 577 578 static int __init init_cramfs_fs(void) 578 579 {
+1
fs/debugfs/inode.c
··· 299 299 .mount = debug_mount, 300 300 .kill_sb = kill_litter_super, 301 301 }; 302 + MODULE_ALIAS_FS("debugfs"); 302 303 303 304 static struct dentry *__create_file(const char *name, umode_t mode, 304 305 struct dentry *parent, void *data,
+8
fs/ecryptfs/Kconfig
··· 12 12 13 13 To compile this file system support as a module, choose M here: the 14 14 module will be called ecryptfs. 15 + 16 + config ECRYPT_FS_MESSAGING 17 + bool "Enable notifications for userspace key wrap/unwrap" 18 + depends on ECRYPT_FS 19 + help 20 + Enables the /dev/ecryptfs entry for use by ecryptfsd. This allows 21 + for userspace to wrap/unwrap file encryption keys by other 22 + backends, like OpenSSL.
+5 -2
fs/ecryptfs/Makefile
··· 1 1 # 2 - # Makefile for the Linux 2.6 eCryptfs 2 + # Makefile for the Linux eCryptfs 3 3 # 4 4 5 5 obj-$(CONFIG_ECRYPT_FS) += ecryptfs.o 6 6 7 - ecryptfs-objs := dentry.o file.o inode.o main.o super.o mmap.o read_write.o crypto.o keystore.o messaging.o miscdev.o kthread.o debug.o 7 + ecryptfs-y := dentry.o file.o inode.o main.o super.o mmap.o read_write.o \ 8 + crypto.o keystore.o kthread.o debug.o 9 + 10 + ecryptfs-$(CONFIG_ECRYPT_FS_MESSAGING) += messaging.o miscdev.o
+3 -6
fs/ecryptfs/crypto.c
··· 301 301 while (size > 0 && i < sg_size) { 302 302 pg = virt_to_page(addr); 303 303 offset = offset_in_page(addr); 304 - if (sg) 305 - sg_set_page(&sg[i], pg, 0, offset); 304 + sg_set_page(&sg[i], pg, 0, offset); 306 305 remainder_of_page = PAGE_CACHE_SIZE - offset; 307 306 if (size >= remainder_of_page) { 308 - if (sg) 309 - sg[i].length = remainder_of_page; 307 + sg[i].length = remainder_of_page; 310 308 addr += remainder_of_page; 311 309 size -= remainder_of_page; 312 310 } else { 313 - if (sg) 314 - sg[i].length = size; 311 + sg[i].length = size; 315 312 addr += size; 316 313 size = 0; 317 314 }
-2
fs/ecryptfs/dentry.c
··· 45 45 static int ecryptfs_d_revalidate(struct dentry *dentry, unsigned int flags) 46 46 { 47 47 struct dentry *lower_dentry; 48 - struct vfsmount *lower_mnt; 49 48 int rc = 1; 50 49 51 50 if (flags & LOOKUP_RCU) 52 51 return -ECHILD; 53 52 54 53 lower_dentry = ecryptfs_dentry_to_lower(dentry); 55 - lower_mnt = ecryptfs_dentry_to_lower_mnt(dentry); 56 54 if (!lower_dentry->d_op || !lower_dentry->d_op->d_revalidate) 57 55 goto out; 58 56 rc = lower_dentry->d_op->d_revalidate(lower_dentry, flags);
+38 -2
fs/ecryptfs/ecryptfs_kernel.h
··· 172 172 #define ECRYPTFS_FNEK_ENCRYPTED_FILENAME_PREFIX_SIZE 24 173 173 #define ECRYPTFS_ENCRYPTED_DENTRY_NAME_LEN (18 + 1 + 4 + 1 + 32) 174 174 175 + #ifdef CONFIG_ECRYPT_FS_MESSAGING 176 + # define ECRYPTFS_VERSIONING_MASK_MESSAGING (ECRYPTFS_VERSIONING_DEVMISC \ 177 + | ECRYPTFS_VERSIONING_PUBKEY) 178 + #else 179 + # define ECRYPTFS_VERSIONING_MASK_MESSAGING 0 180 + #endif 181 + 182 + #define ECRYPTFS_VERSIONING_MASK (ECRYPTFS_VERSIONING_PASSPHRASE \ 183 + | ECRYPTFS_VERSIONING_PLAINTEXT_PASSTHROUGH \ 184 + | ECRYPTFS_VERSIONING_XATTR \ 185 + | ECRYPTFS_VERSIONING_MULTKEY \ 186 + | ECRYPTFS_VERSIONING_MASK_MESSAGING \ 187 + | ECRYPTFS_VERSIONING_FILENAME_ENCRYPTION) 175 188 struct ecryptfs_key_sig { 176 189 struct list_head crypt_stat_list; 177 190 char keysig[ECRYPTFS_SIG_SIZE_HEX + 1]; ··· 412 399 struct hlist_node euid_chain; 413 400 }; 414 401 402 + #ifdef CONFIG_ECRYPT_FS_MESSAGING 415 403 extern struct mutex ecryptfs_daemon_hash_mux; 404 + #endif 416 405 417 406 static inline size_t 418 407 ecryptfs_lower_header_size(struct ecryptfs_crypt_stat *crypt_stat) ··· 625 610 ecryptfs_setxattr(struct dentry *dentry, const char *name, const void *value, 626 611 size_t size, int flags); 627 612 int ecryptfs_read_xattr_region(char *page_virt, struct inode *ecryptfs_inode); 613 + #ifdef CONFIG_ECRYPT_FS_MESSAGING 628 614 int ecryptfs_process_response(struct ecryptfs_daemon *daemon, 629 615 struct ecryptfs_message *msg, u32 seq); 630 616 int ecryptfs_send_message(char *data, int data_len, ··· 634 618 struct ecryptfs_message **emsg); 635 619 int ecryptfs_init_messaging(void); 636 620 void ecryptfs_release_messaging(void); 621 + #else 622 + static inline int ecryptfs_init_messaging(void) 623 + { 624 + return 0; 625 + } 626 + static inline void ecryptfs_release_messaging(void) 627 + { } 628 + static inline int ecryptfs_send_message(char *data, int data_len, 629 + struct ecryptfs_msg_ctx **msg_ctx) 630 + { 631 + return -ENOTCONN; 632 + }
633 + static inline int ecryptfs_wait_for_response(struct ecryptfs_msg_ctx *msg_ctx, 634 + struct ecryptfs_message **emsg) 635 + { 636 + return -ENOMSG; 637 + } 638 + #endif 637 639 638 640 void 639 641 ecryptfs_write_header_metadata(char *virt, ··· 689 655 size_t offset_in_page, size_t size, 690 656 struct inode *ecryptfs_inode); 691 657 struct page *ecryptfs_get_locked_page(struct inode *inode, loff_t index); 692 - int ecryptfs_exorcise_daemon(struct ecryptfs_daemon *daemon); 693 - int ecryptfs_find_daemon_by_euid(struct ecryptfs_daemon **daemon); 694 658 int ecryptfs_parse_packet_length(unsigned char *data, size_t *size, 695 659 size_t *length_size); 696 660 int ecryptfs_write_packet_length(char *dest, size_t size, 697 661 size_t *packet_size_length); 662 + #ifdef CONFIG_ECRYPT_FS_MESSAGING 698 663 int ecryptfs_init_ecryptfs_miscdev(void); 699 664 void ecryptfs_destroy_ecryptfs_miscdev(void); 700 665 int ecryptfs_send_miscdev(char *data, size_t data_size, ··· 702 669 void ecryptfs_msg_ctx_alloc_to_free(struct ecryptfs_msg_ctx *msg_ctx); 703 670 int 704 671 ecryptfs_spawn_daemon(struct ecryptfs_daemon **daemon, struct file *file); 672 + int ecryptfs_exorcise_daemon(struct ecryptfs_daemon *daemon); 673 + int ecryptfs_find_daemon_by_euid(struct ecryptfs_daemon **daemon); 674 + #endif 705 675 int ecryptfs_init_kthread(void); 706 676 void ecryptfs_destroy_kthread(void); 707 677 int ecryptfs_privileged_open(struct file **lower_file,
-2
fs/ecryptfs/file.c
··· 199 199 struct dentry *ecryptfs_dentry = file->f_path.dentry; 200 200 /* Private value of ecryptfs_dentry allocated in 201 201 * ecryptfs_lookup() */ 202 - struct dentry *lower_dentry; 203 202 struct ecryptfs_file_info *file_info; 204 203 205 204 mount_crypt_stat = &ecryptfs_superblock_to_private( ··· 221 222 rc = -ENOMEM; 222 223 goto out; 223 224 } 224 - lower_dentry = ecryptfs_dentry_to_lower(ecryptfs_dentry); 225 225 crypt_stat = &ecryptfs_inode_to_private(inode)->crypt_stat; 226 226 mutex_lock(&crypt_stat->cs_mutex); 227 227 if (!(crypt_stat->flags & ECRYPTFS_POLICY_APPLIED)) {
+4 -4
fs/ecryptfs/inode.c
··· 999 999 return rc; 1000 1000 } 1001 1001 1002 - int ecryptfs_getattr_link(struct vfsmount *mnt, struct dentry *dentry, 1003 - struct kstat *stat) 1002 + static int ecryptfs_getattr_link(struct vfsmount *mnt, struct dentry *dentry, 1003 + struct kstat *stat) 1004 1004 { 1005 1005 struct ecryptfs_mount_crypt_stat *mount_crypt_stat; 1006 1006 int rc = 0; ··· 1021 1021 return rc; 1022 1022 } 1023 1023 1024 - int ecryptfs_getattr(struct vfsmount *mnt, struct dentry *dentry, 1025 - struct kstat *stat) 1024 + static int ecryptfs_getattr(struct vfsmount *mnt, struct dentry *dentry, 1025 + struct kstat *stat) 1026 1026 { 1027 1027 struct kstat lower_stat; 1028 1028 int rc;
+4 -5
fs/ecryptfs/keystore.c
··· 1150 1150 struct ecryptfs_message *msg = NULL; 1151 1151 char *auth_tok_sig; 1152 1152 char *payload; 1153 - size_t payload_len; 1153 + size_t payload_len = 0; 1154 1154 int rc; 1155 1155 1156 1156 rc = ecryptfs_get_auth_tok_sig(&auth_tok_sig, auth_tok); ··· 1168 1168 rc = ecryptfs_send_message(payload, payload_len, &msg_ctx); 1169 1169 if (rc) { 1170 1170 ecryptfs_printk(KERN_ERR, "Error sending message to " 1171 - "ecryptfsd\n"); 1171 + "ecryptfsd: %d\n", rc); 1172 1172 goto out; 1173 1173 } 1174 1174 rc = ecryptfs_wait_for_response(msg_ctx, &msg); ··· 1202 1202 crypt_stat->key_size); 1203 1203 } 1204 1204 out: 1205 - if (msg) 1206 - kfree(msg); 1205 + kfree(msg); 1207 1206 return rc; 1208 1207 } 1209 1208 ··· 1988 1989 rc = ecryptfs_send_message(payload, payload_len, &msg_ctx); 1989 1990 if (rc) { 1990 1991 ecryptfs_printk(KERN_ERR, "Error sending message to " 1991 - "ecryptfsd\n"); 1992 + "ecryptfsd: %d\n", rc); 1992 1993 goto out; 1993 1994 } 1994 1995 rc = ecryptfs_wait_for_response(msg_ctx, &msg);
+1
fs/ecryptfs/main.c
··· 629 629 .kill_sb = ecryptfs_kill_block_super, 630 630 .fs_flags = 0 631 631 }; 632 + MODULE_ALIAS_FS("ecryptfs"); 632 633 633 634 /** 634 635 * inode_info_init_once
+2 -3
fs/ecryptfs/messaging.c
··· 97 97 void ecryptfs_msg_ctx_alloc_to_free(struct ecryptfs_msg_ctx *msg_ctx) 98 98 { 99 99 list_move(&(msg_ctx->node), &ecryptfs_msg_ctx_free_list); 100 - if (msg_ctx->msg) 101 - kfree(msg_ctx->msg); 100 + kfree(msg_ctx->msg); 102 101 msg_ctx->msg = NULL; 103 102 msg_ctx->state = ECRYPTFS_MSG_CTX_STATE_FREE; 104 103 } ··· 282 283 int rc; 283 284 284 285 rc = ecryptfs_find_daemon_by_euid(&daemon); 285 - if (rc || !daemon) { 286 + if (rc) { 286 287 rc = -ENOTCONN; 287 288 goto out; 288 289 }
+1
fs/efs/super.c
··· 33 33 .kill_sb = kill_block_super, 34 34 .fs_flags = FS_REQUIRES_DEV, 35 35 }; 36 + MODULE_ALIAS_FS("efs"); 36 37 37 38 static struct pt_types sgi_pt_types[] = { 38 39 {0x00, "SGI vh"},
+1
fs/exofs/super.c
··· 1010 1010 .mount = exofs_mount, 1011 1011 .kill_sb = generic_shutdown_super, 1012 1012 }; 1013 + MODULE_ALIAS_FS("exofs"); 1013 1014 1014 1015 static int __init init_exofs(void) 1015 1016 {
+1
fs/ext2/super.c
··· 1536 1536 .kill_sb = kill_block_super, 1537 1537 .fs_flags = FS_REQUIRES_DEV, 1538 1538 }; 1539 + MODULE_ALIAS_FS("ext2"); 1539 1540 1540 1541 static int __init init_ext2_fs(void) 1541 1542 {
+1
fs/ext3/super.c
··· 3068 3068 .kill_sb = kill_block_super, 3069 3069 .fs_flags = FS_REQUIRES_DEV, 3070 3070 }; 3071 + MODULE_ALIAS_FS("ext3"); 3071 3072 3072 3073 static int __init init_ext3_fs(void) 3073 3074 {
+3 -2
fs/ext4/super.c
··· 90 90 .kill_sb = kill_block_super, 91 91 .fs_flags = FS_REQUIRES_DEV, 92 92 }; 93 + MODULE_ALIAS_FS("ext2"); 93 94 #define IS_EXT2_SB(sb) ((sb)->s_bdev->bd_holder == &ext2_fs_type) 94 95 #else 95 96 #define IS_EXT2_SB(sb) (0) ··· 105 104 .kill_sb = kill_block_super, 106 105 .fs_flags = FS_REQUIRES_DEV, 107 106 }; 107 + MODULE_ALIAS_FS("ext3"); 108 108 #define IS_EXT3_SB(sb) ((sb)->s_bdev->bd_holder == &ext3_fs_type) 109 109 #else 110 110 #define IS_EXT3_SB(sb) (0) ··· 5154 5152 return 0; 5155 5153 return 1; 5156 5154 } 5157 - MODULE_ALIAS("ext2"); 5158 5155 #else 5159 5156 static inline void register_as_ext2(void) { } 5160 5157 static inline void unregister_as_ext2(void) { } ··· 5186 5185 return 0; 5187 5186 return 1; 5188 5187 } 5189 - MODULE_ALIAS("ext3"); 5190 5188 #else 5191 5189 static inline void register_as_ext3(void) { } 5192 5190 static inline void unregister_as_ext3(void) { } ··· 5199 5199 .kill_sb = kill_block_super, 5200 5200 .fs_flags = FS_REQUIRES_DEV, 5201 5201 }; 5202 + MODULE_ALIAS_FS("ext4"); 5202 5203 5203 5204 static int __init ext4_init_feat_adverts(void) 5204 5205 {
+1
fs/f2fs/super.c
··· 687 687 .kill_sb = kill_block_super, 688 688 .fs_flags = FS_REQUIRES_DEV, 689 689 }; 690 + MODULE_ALIAS_FS("f2fs"); 690 691 691 692 static int __init init_inodecache(void) 692 693 {
+1
fs/fat/namei_msdos.c
··· 668 668 .kill_sb = kill_block_super, 669 669 .fs_flags = FS_REQUIRES_DEV, 670 670 }; 671 + MODULE_ALIAS_FS("msdos"); 671 672 672 673 static int __init init_msdos_fs(void) 673 674 {
+1
fs/fat/namei_vfat.c
··· 1073 1073 .kill_sb = kill_block_super, 1074 1074 .fs_flags = FS_REQUIRES_DEV, 1075 1075 }; 1076 + MODULE_ALIAS_FS("vfat"); 1076 1077 1077 1078 static int __init init_vfat_fs(void) 1078 1079 {
+1 -1
fs/filesystems.c
··· 273 273 int len = dot ? dot - name : strlen(name); 274 274 275 275 fs = __get_fs_type(name, len); 276 - if (!fs && (request_module("%.*s", len, name) == 0)) 276 + if (!fs && (request_module("fs-%.*s", len, name) == 0)) 277 277 fs = __get_fs_type(name, len); 278 278 279 279 if (dot && fs && !(fs->fs_flags & FS_HAS_SUBTYPE)) {
+1 -1
fs/freevxfs/vxfs_super.c
··· 52 52 MODULE_DESCRIPTION("Veritas Filesystem (VxFS) driver"); 53 53 MODULE_LICENSE("Dual BSD/GPL"); 54 54 55 - MODULE_ALIAS("vxfs"); /* makes mount -t vxfs autoload the module */ 56 55 57 56 58 57 static void vxfs_put_super(struct super_block *); ··· 257 258 .kill_sb = kill_block_super, 258 259 .fs_flags = FS_REQUIRES_DEV, 259 260 }; 261 + MODULE_ALIAS_FS("vxfs"); /* makes mount -t vxfs autoload the module */ 260 262 261 263 static int __init 262 264 vxfs_init(void)
+1
fs/fuse/control.c
··· 341 341 .mount = fuse_ctl_mount, 342 342 .kill_sb = fuse_ctl_kill_sb, 343 343 }; 344 + MODULE_ALIAS_FS("fusectl"); 344 345 345 346 int __init fuse_ctl_init(void) 346 347 {
+2
fs/fuse/inode.c
··· 1117 1117 .mount = fuse_mount, 1118 1118 .kill_sb = fuse_kill_sb_anon, 1119 1119 }; 1120 + MODULE_ALIAS_FS("fuse"); 1120 1121 1121 1122 #ifdef CONFIG_BLOCK 1122 1123 static struct dentry *fuse_mount_blk(struct file_system_type *fs_type, ··· 1147 1146 .kill_sb = fuse_kill_sb_blk, 1148 1147 .fs_flags = FS_REQUIRES_DEV | FS_HAS_SUBTYPE, 1149 1148 }; 1149 + MODULE_ALIAS_FS("fuseblk"); 1150 1150 1151 1151 static inline int register_fuseblk(void) 1152 1152 {
+3 -1
fs/gfs2/ops_fstype.c
··· 20 20 #include <linux/gfs2_ondisk.h> 21 21 #include <linux/quotaops.h> 22 22 #include <linux/lockdep.h> 23 + #include <linux/module.h> 23 24 24 25 #include "gfs2.h" 25 26 #include "incore.h" ··· 1426 1425 .kill_sb = gfs2_kill_sb, 1427 1426 .owner = THIS_MODULE, 1428 1427 }; 1428 + MODULE_ALIAS_FS("gfs2"); 1429 1429 1430 1430 struct file_system_type gfs2meta_fs_type = { 1431 1431 .name = "gfs2meta", ··· 1434 1432 .mount = gfs2_mount_meta, 1435 1433 .owner = THIS_MODULE, 1436 1434 }; 1437 - 1435 + MODULE_ALIAS_FS("gfs2meta");
+1
fs/hfs/super.c
··· 466 466 .kill_sb = kill_block_super, 467 467 .fs_flags = FS_REQUIRES_DEV, 468 468 }; 469 + MODULE_ALIAS_FS("hfs"); 469 470 470 471 static void hfs_init_once(void *p) 471 472 {
+1
fs/hfsplus/super.c
··· 654 654 .kill_sb = kill_block_super, 655 655 .fs_flags = FS_REQUIRES_DEV, 656 656 }; 657 + MODULE_ALIAS_FS("hfsplus"); 657 658 658 659 static void hfsplus_init_once(void *p) 659 660 {
+1 -8
fs/hostfs/hostfs_kern.c
··· 845 845 return err; 846 846 847 847 if ((attr->ia_valid & ATTR_SIZE) && 848 - attr->ia_size != i_size_read(inode)) { 849 - int error; 850 - 851 - error = inode_newsize_ok(inode, attr->ia_size); 852 - if (error) 853 - return error; 854 - 848 + attr->ia_size != i_size_read(inode)) 855 849 truncate_setsize(inode, attr->ia_size); 856 - } 857 850 858 851 setattr_copy(inode, attr); 859 852 mark_inode_dirty(inode);
+1
fs/hppfs/hppfs.c
··· 748 748 .kill_sb = kill_anon_super, 749 749 .fs_flags = 0, 750 750 }; 751 + MODULE_ALIAS_FS("hppfs"); 751 752 752 753 static int __init init_hppfs(void) 753 754 {
+1
fs/hugetlbfs/inode.c
··· 896 896 .mount = hugetlbfs_mount, 897 897 .kill_sb = kill_litter_super, 898 898 }; 899 + MODULE_ALIAS_FS("hugetlbfs"); 899 900 900 901 static struct vfsmount *hugetlbfs_vfsmount[HUGE_MAX_HSTATE]; 901 902
+1 -2
fs/isofs/inode.c
··· 1556 1556 .kill_sb = kill_block_super, 1557 1557 .fs_flags = FS_REQUIRES_DEV, 1558 1558 }; 1559 + MODULE_ALIAS_FS("iso9660"); 1559 1560 1560 1561 static int __init init_iso9660_fs(void) 1561 1562 { ··· 1594 1593 module_init(init_iso9660_fs) 1595 1594 module_exit(exit_iso9660_fs) 1596 1595 MODULE_LICENSE("GPL"); 1597 - /* Actual filesystem name is iso9660, as requested in filesystems.c */ 1598 - MODULE_ALIAS("iso9660");
+1
fs/jffs2/super.c
··· 356 356 .mount = jffs2_mount, 357 357 .kill_sb = jffs2_kill_sb, 358 358 }; 359 + MODULE_ALIAS_FS("jffs2"); 359 360 360 361 static int __init init_jffs2_fs(void) 361 362 {
+1
fs/jfs/super.c
··· 833 833 .kill_sb = kill_block_super, 834 834 .fs_flags = FS_REQUIRES_DEV, 835 835 }; 836 + MODULE_ALIAS_FS("jfs"); 836 837 837 838 static void init_once(void *foo) 838 839 {
+1
fs/logfs/super.c
··· 608 608 .fs_flags = FS_REQUIRES_DEV, 609 609 610 610 }; 611 + MODULE_ALIAS_FS("logfs"); 611 612 612 613 static int __init logfs_init(void) 613 614 {
+1
fs/minix/inode.c
··· 660 660 .kill_sb = kill_block_super, 661 661 .fs_flags = FS_REQUIRES_DEV, 662 662 }; 663 + MODULE_ALIAS_FS("minix"); 663 664 664 665 static int __init init_minix_fs(void) 665 666 {
-2
fs/namei.c
··· 689 689 nd->path = *path; 690 690 nd->inode = nd->path.dentry->d_inode; 691 691 nd->flags |= LOOKUP_JUMPED; 692 - 693 - BUG_ON(nd->inode->i_op->follow_link); 694 692 } 695 693 696 694 static inline void put_link(struct nameidata *nd, struct path *link, void *cookie)
+1
fs/ncpfs/inode.c
··· 1051 1051 .kill_sb = kill_anon_super, 1052 1052 .fs_flags = FS_BINARY_MOUNTDATA, 1053 1053 }; 1054 + MODULE_ALIAS_FS("ncpfs"); 1054 1055 1055 1056 static int __init init_ncp_fs(void) 1056 1057 {
+2 -1
fs/nfs/super.c
··· 294 294 .kill_sb = nfs_kill_super, 295 295 .fs_flags = FS_RENAME_DOES_D_MOVE|FS_BINARY_MOUNTDATA, 296 296 }; 297 + MODULE_ALIAS_FS("nfs"); 297 298 EXPORT_SYMBOL_GPL(nfs_fs_type); 298 299 299 300 struct file_system_type nfs_xdev_fs_type = { ··· 334 333 .kill_sb = nfs_kill_super, 335 334 .fs_flags = FS_RENAME_DOES_D_MOVE|FS_BINARY_MOUNTDATA, 336 335 }; 336 + MODULE_ALIAS_FS("nfs4"); 337 337 EXPORT_SYMBOL_GPL(nfs4_fs_type); 338 338 339 339 static int __init register_nfs4_fs(void) ··· 2719 2717 MODULE_PARM_DESC(send_implementation_id, 2720 2718 "Send implementation ID with NFSv4.1 exchange_id"); 2721 2719 MODULE_PARM_DESC(nfs4_unique_id, "nfs_client_id4 uniquifier string"); 2722 - MODULE_ALIAS("nfs4"); 2723 2720 2724 2721 #endif /* CONFIG_NFS_V4 */
+1
fs/nfsd/nfsctl.c
··· 1090 1090 .mount = nfsd_mount, 1091 1091 .kill_sb = nfsd_umount, 1092 1092 }; 1093 + MODULE_ALIAS_FS("nfsd"); 1093 1094 1094 1095 #ifdef CONFIG_PROC_FS 1095 1096 static int create_proc_exports_entry(void)
+1
fs/nilfs2/super.c
··· 1361 1361 .kill_sb = kill_block_super, 1362 1362 .fs_flags = FS_REQUIRES_DEV, 1363 1363 }; 1364 + MODULE_ALIAS_FS("nilfs2"); 1364 1365 1365 1366 static void nilfs_inode_init_once(void *obj) 1366 1367 {
+1
fs/ntfs/super.c
··· 3079 3079 .kill_sb = kill_block_super, 3080 3080 .fs_flags = FS_REQUIRES_DEV, 3081 3081 }; 3082 + MODULE_ALIAS_FS("ntfs"); 3082 3083 3083 3084 /* Stable names for the slab caches. */ 3084 3085 static const char ntfs_index_ctx_cache_name[] = "ntfs_index_ctx_cache";
+1
fs/ocfs2/dlmfs/dlmfs.c
··· 640 640 .mount = dlmfs_mount, 641 641 .kill_sb = kill_litter_super, 642 642 }; 643 + MODULE_ALIAS_FS("ocfs2_dlmfs"); 643 644 644 645 static int __init init_dlmfs_fs(void) 645 646 {
+1
fs/ocfs2/super.c
··· 1266 1266 .fs_flags = FS_REQUIRES_DEV|FS_RENAME_DOES_D_MOVE, 1267 1267 .next = NULL 1268 1268 }; 1269 + MODULE_ALIAS_FS("ocfs2"); 1269 1270 1270 1271 static int ocfs2_check_set_options(struct super_block *sb, 1271 1272 struct mount_options *options)
+1
fs/omfs/inode.c
··· 572 572 .kill_sb = kill_block_super, 573 573 .fs_flags = FS_REQUIRES_DEV, 574 574 }; 575 + MODULE_ALIAS_FS("omfs"); 575 576 576 577 static int __init init_omfs_fs(void) 577 578 {
+1
fs/openpromfs/inode.c
··· 432 432 .mount = openprom_mount, 433 433 .kill_sb = kill_anon_super, 434 434 }; 435 + MODULE_ALIAS_FS("openpromfs"); 435 436 436 437 static void op_inode_init_once(void *data) 437 438 {
+6 -6
fs/proc/namespaces.c
··· 118 118 struct super_block *sb = inode->i_sb; 119 119 struct proc_inode *ei = PROC_I(inode); 120 120 struct task_struct *task; 121 - struct dentry *ns_dentry; 121 + struct path ns_path; 122 122 void *error = ERR_PTR(-EACCES); 123 123 124 124 task = get_proc_task(inode); ··· 128 128 if (!ptrace_may_access(task, PTRACE_MODE_READ)) 129 129 goto out_put_task; 130 130 131 - ns_dentry = proc_ns_get_dentry(sb, task, ei->ns_ops); 132 - if (IS_ERR(ns_dentry)) { 133 - error = ERR_CAST(ns_dentry); 131 + ns_path.dentry = proc_ns_get_dentry(sb, task, ei->ns_ops); 132 + if (IS_ERR(ns_path.dentry)) { 133 + error = ERR_CAST(ns_path.dentry); 134 134 goto out_put_task; 135 135 } 136 136 137 - dput(nd->path.dentry); 138 - nd->path.dentry = ns_dentry; 137 + ns_path.mnt = mntget(nd->path.mnt); 138 + nd_jump_link(nd, &ns_path); 139 139 error = NULL; 140 140 141 141 out_put_task:
+1
fs/qnx4/inode.c
··· 412 412 .kill_sb = kill_block_super, 413 413 .fs_flags = FS_REQUIRES_DEV, 414 414 }; 415 + MODULE_ALIAS_FS("qnx4"); 415 416 416 417 static int __init init_qnx4_fs(void) 417 418 {
+1
fs/qnx6/inode.c
··· 672 672 .kill_sb = kill_block_super, 673 673 .fs_flags = FS_REQUIRES_DEV, 674 674 }; 675 + MODULE_ALIAS_FS("qnx6"); 675 676 676 677 static int __init init_qnx6_fs(void) 677 678 {
+1
fs/reiserfs/super.c
··· 2434 2434 .kill_sb = reiserfs_kill_sb, 2435 2435 .fs_flags = FS_REQUIRES_DEV, 2436 2436 }; 2437 + MODULE_ALIAS_FS("reiserfs"); 2437 2438 2438 2439 MODULE_DESCRIPTION("ReiserFS journaled filesystem"); 2439 2440 MODULE_AUTHOR("Hans Reiser <reiser@namesys.com>");
+1
fs/romfs/super.c
··· 599 599 .kill_sb = romfs_kill_sb, 600 600 .fs_flags = FS_REQUIRES_DEV, 601 601 }; 602 + MODULE_ALIAS_FS("romfs"); 602 603 603 604 /* 604 605 * inode storage initialiser
+2 -1
fs/sysv/super.c
··· 545 545 .kill_sb = kill_block_super, 546 546 .fs_flags = FS_REQUIRES_DEV, 547 547 }; 548 + MODULE_ALIAS_FS("sysv"); 548 549 549 550 static struct file_system_type v7_fs_type = { 550 551 .owner = THIS_MODULE, ··· 554 553 .kill_sb = kill_block_super, 555 554 .fs_flags = FS_REQUIRES_DEV, 556 555 }; 556 + MODULE_ALIAS_FS("v7"); 557 557 558 558 static int __init init_sysv_fs(void) 559 559 { ··· 588 586 589 587 module_init(init_sysv_fs) 590 588 module_exit(exit_sysv_fs) 591 - MODULE_ALIAS("v7"); 592 589 MODULE_LICENSE("GPL");
+1
fs/ubifs/super.c
··· 2174 2174 .mount = ubifs_mount, 2175 2175 .kill_sb = kill_ubifs_super, 2176 2176 }; 2177 + MODULE_ALIAS_FS("ubifs"); 2177 2178 2178 2179 /* 2179 2180 * Inode slab cache constructor.
+1
fs/ufs/super.c
··· 1500 1500 .kill_sb = kill_block_super, 1501 1501 .fs_flags = FS_REQUIRES_DEV, 1502 1502 }; 1503 + MODULE_ALIAS_FS("ufs"); 1503 1504 1504 1505 static int __init init_ufs_fs(void) 1505 1506 {
+1
fs/xfs/xfs_super.c
··· 1561 1561 .kill_sb = kill_block_super, 1562 1562 .fs_flags = FS_REQUIRES_DEV, 1563 1563 }; 1564 + MODULE_ALIAS_FS("xfs"); 1564 1565 1565 1566 STATIC int __init 1566 1567 xfs_init_zones(void)
+2 -4
include/acpi/acpi_bus.h
··· 437 437 */ 438 438 struct acpi_bus_type { 439 439 struct list_head list; 440 - struct bus_type *bus; 441 - /* For general devices under the bus */ 440 + const char *name; 441 + bool (*match)(struct device *dev); 442 442 int (*find_device) (struct device *, acpi_handle *); 443 - /* For bridges, such as PCI root bridge, IDE controller */ 444 - int (*find_bridge) (struct device *, acpi_handle *); 445 443 void (*setup)(struct device *); 446 444 void (*cleanup)(struct device *); 447 445 };
+3 -3
include/drm/drm_crtc.h
··· 443 443 * @dpms: set power state (see drm_crtc_funcs above) 444 444 * @save: save connector state 445 445 * @restore: restore connector state 446 - * @reset: reset connector after state has been invalidate (e.g. resume) 446 + * @reset: reset connector after state has been invalidated (e.g. resume) 447 447 * @detect: is this connector active? 448 448 * @fill_modes: fill mode list for this connector 449 - * @set_property: property for this connector may need update 449 + * @set_property: property for this connector may need an update 450 450 * @destroy: make object go away 451 - * @force: notify the driver the connector is forced on 451 + * @force: notify the driver that the connector is forced on 452 452 * 453 453 * Each CRTC may have one or more connectors attached to it. The functions 454 454 * below allow the core DRM code to control connectors, enumerate available modes,
+2 -10
include/linux/ecryptfs.h
··· 6 6 #define ECRYPTFS_VERSION_MINOR 0x04 7 7 #define ECRYPTFS_SUPPORTED_FILE_VERSION 0x03 8 8 /* These flags indicate which features are supported by the kernel 9 - * module; userspace tools such as the mount helper read 10 - * ECRYPTFS_VERSIONING_MASK from a sysfs handle in order to determine 11 - * how to behave. */ 9 + * module; userspace tools such as the mount helper read the feature 10 + * bits from a sysfs handle in order to determine how to behave. */ 12 11 #define ECRYPTFS_VERSIONING_PASSPHRASE 0x00000001 13 12 #define ECRYPTFS_VERSIONING_PUBKEY 0x00000002 14 13 #define ECRYPTFS_VERSIONING_PLAINTEXT_PASSTHROUGH 0x00000004 ··· 18 19 #define ECRYPTFS_VERSIONING_HMAC 0x00000080 19 20 #define ECRYPTFS_VERSIONING_FILENAME_ENCRYPTION 0x00000100 20 21 #define ECRYPTFS_VERSIONING_GCM 0x00000200 21 - #define ECRYPTFS_VERSIONING_MASK (ECRYPTFS_VERSIONING_PASSPHRASE \ 22 - | ECRYPTFS_VERSIONING_PLAINTEXT_PASSTHROUGH \ 23 - | ECRYPTFS_VERSIONING_PUBKEY \ 24 - | ECRYPTFS_VERSIONING_XATTR \ 25 - | ECRYPTFS_VERSIONING_MULTKEY \ 26 - | ECRYPTFS_VERSIONING_DEVMISC \ 27 - | ECRYPTFS_VERSIONING_FILENAME_ENCRYPTION) 28 22 #define ECRYPTFS_MAX_PASSWORD_LENGTH 64 29 23 #define ECRYPTFS_MAX_PASSPHRASE_BYTES ECRYPTFS_MAX_PASSWORD_LENGTH 30 24 #define ECRYPTFS_SALT_SIZE 8
+2
include/linux/fs.h
··· 1825 1825 struct lock_class_key i_mutex_dir_key; 1826 1826 }; 1827 1827 1828 + #define MODULE_ALIAS_FS(NAME) MODULE_ALIAS("fs-" NAME) 1829 + 1828 1830 extern struct dentry *mount_ns(struct file_system_type *fs_type, int flags, 1829 1831 void *data, int (*fill_super)(struct super_block *, void *, int)); 1830 1832 extern struct dentry *mount_bdev(struct file_system_type *fs_type,
+5
include/linux/i2c/atmel_mxt_ts.h
··· 15 15 16 16 #include <linux/types.h> 17 17 18 + /* For key_map array */ 19 + #define MXT_NUM_GPIO 4 20 + 18 21 /* Orient */ 19 22 #define MXT_NORMAL 0x0 20 23 #define MXT_DIAGONAL 0x1 ··· 42 39 unsigned int voltage; 43 40 unsigned char orient; 44 41 unsigned long irqflags; 42 + bool is_tp; 43 + const unsigned int key_map[MXT_NUM_GPIO]; 45 44 }; 46 45 47 46 #endif /* __LINUX_ATMEL_MXT_TS_H */
+2
include/linux/regulator/driver.h
··· 199 199 * output when using regulator_set_voltage_sel_regmap 200 200 * @enable_reg: Register for control when using regmap enable/disable ops 201 201 * @enable_mask: Mask for control when using regmap enable/disable ops 202 + * @bypass_reg: Register for control when using regmap set_bypass 203 + * @bypass_mask: Mask for control when using regmap set_bypass 202 204 * 203 205 * @enable_time: Time taken for initial enable of regulator (in uS). 204 206 */
+4 -2
ipc/msg.c
··· 820 820 struct msg_msg *copy = NULL; 821 821 unsigned long copy_number = 0; 822 822 823 + ns = current->nsproxy->ipc_ns; 824 + 823 825 if (msqid < 0 || (long) bufsz < 0) 824 826 return -EINVAL; 825 827 if (msgflg & MSG_COPY) { 826 - copy = prepare_copy(buf, bufsz, msgflg, &msgtyp, &copy_number); 828 + copy = prepare_copy(buf, min_t(size_t, bufsz, ns->msg_ctlmax), 829 + msgflg, &msgtyp, &copy_number); 827 830 if (IS_ERR(copy)) 828 831 return PTR_ERR(copy); 829 832 } 830 833 mode = convert_mode(&msgtyp, msgflg); 831 - ns = current->nsproxy->ipc_ns; 832 834 833 835 msq = msg_lock_check(ns, msqid); 834 836 if (IS_ERR(msq)) {
-3
ipc/msgutil.c
··· 117 117 if (alen > DATALEN_MSG) 118 118 alen = DATALEN_MSG; 119 119 120 - dst->next = NULL; 121 - dst->security = NULL; 122 - 123 120 memcpy(dst + 1, src + 1, alen); 124 121 125 122 len -= alen;
+1 -1
kernel/smpboot.c
··· 131 131 continue; 132 132 } 133 133 134 - //BUG_ON(td->cpu != smp_processor_id()); 134 + BUG_ON(td->cpu != smp_processor_id()); 135 135 136 136 /* Check for state change setup */ 137 137 switch (td->status) {
+14 -10
kernel/trace/Kconfig
··· 414 414 def_bool n 415 415 416 416 config DYNAMIC_FTRACE 417 - bool "enable/disable ftrace tracepoints dynamically" 417 + bool "enable/disable function tracing dynamically" 418 418 depends on FUNCTION_TRACER 419 419 depends on HAVE_DYNAMIC_FTRACE 420 420 default y 421 421 help 422 - This option will modify all the calls to ftrace dynamically 423 - (will patch them out of the binary image and replace them 424 - with a No-Op instruction) as they are called. A table is 425 - created to dynamically enable them again. 422 + This option will modify all the calls to function tracing 423 + dynamically (will patch them out of the binary image and 424 + replace them with a No-Op instruction) on boot up. During 425 + compile time, a table is made of all the locations that ftrace 426 + can function trace, and this table is linked into the kernel 427 + image. When this is enabled, functions can be individually 428 + enabled, and the functions not enabled will not affect 429 + performance of the system. 430 + 431 + See the files in /sys/kernel/debug/tracing: 432 + available_filter_functions 433 + set_ftrace_filter 434 + set_ftrace_notrace 426 435 427 436 This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but 428 437 otherwise has native performance as long as no tracing is active. 429 - 430 - The changes to the code are done by a kernel thread that 431 - wakes up once a second and checks to see if any ftrace calls 432 - were made. If so, it runs stop_machine (stops all CPUS) 433 - and modifies the code to jump over the call to ftrace. 434 438 435 439 config DYNAMIC_FTRACE_WITH_REGS 436 440 def_bool y
+24 -3
kernel/trace/trace.c
··· 2400 2400 seq_printf(m, "# MAY BE MISSING FUNCTION EVENTS\n"); 2401 2401 } 2402 2402 2403 + #ifdef CONFIG_TRACER_MAX_TRACE 2404 + static void print_snapshot_help(struct seq_file *m, struct trace_iterator *iter) 2405 + { 2406 + if (iter->trace->allocated_snapshot) 2407 + seq_printf(m, "#\n# * Snapshot is allocated *\n#\n"); 2408 + else 2409 + seq_printf(m, "#\n# * Snapshot is freed *\n#\n"); 2410 + 2411 + seq_printf(m, "# Snapshot commands:\n"); 2412 + seq_printf(m, "# echo 0 > snapshot : Clears and frees snapshot buffer\n"); 2413 + seq_printf(m, "# echo 1 > snapshot : Allocates snapshot buffer, if not already allocated.\n"); 2414 + seq_printf(m, "# Takes a snapshot of the main buffer.\n"); 2415 + seq_printf(m, "# echo 2 > snapshot : Clears snapshot buffer (but does not allocate)\n"); 2416 + seq_printf(m, "# (Doesn't have to be '2' works with any number that\n"); 2417 + seq_printf(m, "# is not a '0' or '1')\n"); 2418 + } 2419 + #else 2420 + /* Should never be called */ 2421 + static inline void print_snapshot_help(struct seq_file *m, struct trace_iterator *iter) { } 2422 + #endif 2423 + 2403 2424 static int s_show(struct seq_file *m, void *v) 2404 2425 { 2405 2426 struct trace_iterator *iter = v; ··· 2432 2411 seq_puts(m, "#\n"); 2433 2412 test_ftrace_alive(m); 2434 2413 } 2435 - if (iter->trace && iter->trace->print_header) 2414 + if (iter->snapshot && trace_empty(iter)) 2415 + print_snapshot_help(m, iter); 2416 + else if (iter->trace && iter->trace->print_header) 2436 2417 iter->trace->print_header(m); 2437 2418 else 2438 2419 trace_default_header(m); ··· 4167 4144 default: 4168 4145 if (current_trace->allocated_snapshot) 4169 4146 tracing_reset_online_cpus(&max_tr); 4170 - else 4171 - ret = -EINVAL; 4172 4147 break; 4173 4148 } 4174 4149
+3 -13
lib/idr.c
··· 569 569 struct idr_layer *p; 570 570 struct idr_layer *to_free; 571 571 572 - /* see comment in idr_find_slowpath() */ 573 - if (WARN_ON_ONCE(id < 0)) 572 + if (id < 0) 574 573 return; 575 574 576 575 sub_remove(idp, (idp->layers - 1) * IDR_BITS, id); ··· 666 667 int n; 667 668 struct idr_layer *p; 668 669 669 - /* 670 - * If @id is negative, idr_find() used to ignore the sign bit and 671 - * performed lookup with the rest of bits, which is weird and can 672 - * lead to very obscure bugs. We're now returning NULL for all 673 - * negative IDs but just in case somebody was depending on the sign 674 - * bit being ignored, let's trigger WARN_ON_ONCE() so that they can 675 - * be detected and fixed. WARN_ON_ONCE() can later be removed. 676 - */ 677 - if (WARN_ON_ONCE(id < 0)) 670 + if (id < 0) 678 671 return NULL; 679 672 680 673 p = rcu_dereference_raw(idp->top); ··· 815 824 int n; 816 825 struct idr_layer *p, *old_p; 817 826 818 - /* see comment in idr_find_slowpath() */ 819 - if (WARN_ON_ONCE(id < 0)) 827 + if (id < 0) 820 828 return ERR_PTR(-EINVAL); 821 829 822 830 p = idp->top;
+1 -1
mm/ksm.c
··· 489 489 */ 490 490 static inline int get_kpfn_nid(unsigned long kpfn) 491 491 { 492 - return ksm_merge_across_nodes ? 0 : pfn_to_nid(kpfn); 492 + return ksm_merge_across_nodes ? 0 : NUMA(pfn_to_nid(kpfn)); 493 493 } 494 494 495 495 static void remove_node_from_stable_tree(struct stable_node *stable_node)
+6 -2
mm/memcontrol.c
··· 3012 3012 memcg_limited_groups_array_size = memcg_caches_array_size(num); 3013 3013 } 3014 3014 3015 + static void kmem_cache_destroy_work_func(struct work_struct *w); 3016 + 3015 3017 int memcg_update_cache_size(struct kmem_cache *s, int num_groups) 3016 3018 { 3017 3019 struct memcg_cache_params *cur_params = s->memcg_params; ··· 3033 3031 return -ENOMEM; 3034 3032 } 3035 3033 3034 + INIT_WORK(&s->memcg_params->destroy, 3035 + kmem_cache_destroy_work_func); 3036 3036 s->memcg_params->is_root_cache = true; 3037 3037 3038 3038 /* ··· 3082 3078 if (!s->memcg_params) 3083 3079 return -ENOMEM; 3084 3080 3081 + INIT_WORK(&s->memcg_params->destroy, 3082 + kmem_cache_destroy_work_func); 3085 3083 if (memcg) { 3086 3084 s->memcg_params->memcg = memcg; 3087 3085 s->memcg_params->root_cache = root_cache; ··· 3364 3358 list_for_each_entry(params, &memcg->memcg_slab_caches, list) { 3365 3359 cachep = memcg_params_to_cache(params); 3366 3360 cachep->memcg_params->dead = true; 3367 - INIT_WORK(&cachep->memcg_params->destroy, 3368 - kmem_cache_destroy_work_func); 3369 3361 schedule_work(&cachep->memcg_params->destroy); 3370 3362 } 3371 3363 mutex_unlock(&memcg->slab_caches_mutex);
+2 -2
mm/mempolicy.c
··· 2390 2390 2391 2391 *mpol_new = *n->policy; 2392 2392 atomic_set(&mpol_new->refcnt, 1); 2393 - sp_node_init(n_new, n->end, end, mpol_new); 2394 - sp_insert(sp, n_new); 2393 + sp_node_init(n_new, end, n->end, mpol_new); 2395 2394 n->end = start; 2395 + sp_insert(sp, n_new); 2396 2396 n_new = NULL; 2397 2397 mpol_new = NULL; 2398 2398 break;
+1 -1
net/9p/trans_virtio.c
··· 655 655 .create = p9_virtio_create, 656 656 .close = p9_virtio_close, 657 657 .request = p9_virtio_request, 658 - //.zc_request = p9_virtio_zc_request, 658 + .zc_request = p9_virtio_zc_request, 659 659 .cancel = p9_virtio_cancel, 660 660 /* 661 661 * We leave one entry for input and one entry for response
+3 -3
net/batman-adv/bat_iv_ogm.c
··· 1288 1288 batadv_ogm_packet = (struct batadv_ogm_packet *)packet_buff; 1289 1289 1290 1290 /* unpack the aggregated packets and process them one by one */ 1291 - do { 1291 + while (batadv_iv_ogm_aggr_packet(buff_pos, packet_len, 1292 + batadv_ogm_packet->tt_num_changes)) { 1292 1293 tt_buff = packet_buff + buff_pos + BATADV_OGM_HLEN; 1293 1294 1294 1295 batadv_iv_ogm_process(ethhdr, batadv_ogm_packet, tt_buff, ··· 1300 1299 1301 1300 packet_pos = packet_buff + buff_pos; 1302 1301 batadv_ogm_packet = (struct batadv_ogm_packet *)packet_pos; 1303 - } while (batadv_iv_ogm_aggr_packet(buff_pos, packet_len, 1304 - batadv_ogm_packet->tt_num_changes)); 1302 + } 1305 1303 1306 1304 kfree_skb(skb); 1307 1305 return NET_RX_SUCCESS;
+1 -1
net/bridge/br_device.c
··· 66 66 goto out; 67 67 } 68 68 69 - mdst = br_mdb_get(br, skb); 69 + mdst = br_mdb_get(br, skb, vid); 70 70 if (mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) 71 71 br_multicast_deliver(mdst, skb); 72 72 else
+1 -1
net/bridge/br_input.c
··· 97 97 if (is_broadcast_ether_addr(dest)) 98 98 skb2 = skb; 99 99 else if (is_multicast_ether_addr(dest)) { 100 - mdst = br_mdb_get(br, skb); 100 + mdst = br_mdb_get(br, skb, vid); 101 101 if (mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) { 102 102 if ((mdst && mdst->mglist) || 103 103 br_multicast_is_router(br))
+4
net/bridge/br_mdb.c
··· 80 80 port = p->port; 81 81 if (port) { 82 82 struct br_mdb_entry e; 83 + memset(&e, 0, sizeof(e)); 83 84 e.ifindex = port->dev->ifindex; 84 85 e.state = p->state; 85 86 if (p->addr.proto == htons(ETH_P_IP)) ··· 137 136 break; 138 137 139 138 bpm = nlmsg_data(nlh); 139 + memset(bpm, 0, sizeof(*bpm)); 140 140 bpm->ifindex = dev->ifindex; 141 141 if (br_mdb_fill_info(skb, cb, dev) < 0) 142 142 goto out; ··· 173 171 return -EMSGSIZE; 174 172 175 173 bpm = nlmsg_data(nlh); 174 + memset(bpm, 0, sizeof(*bpm)); 176 175 bpm->family = AF_BRIDGE; 177 176 bpm->ifindex = dev->ifindex; 178 177 nest = nla_nest_start(skb, MDBA_MDB); ··· 231 228 { 232 229 struct br_mdb_entry entry; 233 230 231 + memset(&entry, 0, sizeof(entry)); 234 232 entry.ifindex = port->dev->ifindex; 235 233 entry.addr.proto = group->proto; 236 234 entry.addr.u.ip4 = group->u.ip4;
+2 -1
net/bridge/br_multicast.c
··· 132 132 #endif 133 133 134 134 struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge *br, 135 - struct sk_buff *skb) 135 + struct sk_buff *skb, u16 vid) 136 136 { 137 137 struct net_bridge_mdb_htable *mdb = rcu_dereference(br->mdb); 138 138 struct br_ip ip; ··· 144 144 return NULL; 145 145 146 146 ip.proto = skb->protocol; 147 + ip.vid = vid; 147 148 148 149 switch (skb->protocol) { 149 150 case htons(ETH_P_IP):
+1
net/bridge/br_netlink.c
··· 29 29 + nla_total_size(1) /* IFLA_BRPORT_MODE */ 30 30 + nla_total_size(1) /* IFLA_BRPORT_GUARD */ 31 31 + nla_total_size(1) /* IFLA_BRPORT_PROTECT */ 32 + + nla_total_size(1) /* IFLA_BRPORT_FAST_LEAVE */ 32 33 + 0; 33 34 } 34 35
+2 -2
net/bridge/br_private.h
··· 442 442 struct net_bridge_port *port, 443 443 struct sk_buff *skb); 444 444 extern struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge *br, 445 - struct sk_buff *skb); 445 + struct sk_buff *skb, u16 vid); 446 446 extern void br_multicast_add_port(struct net_bridge_port *port); 447 447 extern void br_multicast_del_port(struct net_bridge_port *port); 448 448 extern void br_multicast_enable_port(struct net_bridge_port *port); ··· 504 504 } 505 505 506 506 static inline struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge *br, 507 - struct sk_buff *skb) 507 + struct sk_buff *skb, u16 vid) 508 508 { 509 509 return NULL; 510 510 }
+3 -2
net/core/dev.c
··· 3442 3442 } 3443 3443 switch (rx_handler(&skb)) { 3444 3444 case RX_HANDLER_CONSUMED: 3445 + ret = NET_RX_SUCCESS; 3445 3446 goto unlock; 3446 3447 case RX_HANDLER_ANOTHER: 3447 3448 goto another_round; ··· 4105 4104 * Allow this to run for 2 jiffies since which will allow 4106 4105 * an average latency of 1.5/HZ. 4107 4106 */ 4108 - if (unlikely(budget <= 0 || time_after(jiffies, time_limit))) 4107 + if (unlikely(budget <= 0 || time_after_eq(jiffies, time_limit))) 4109 4108 goto softnet_break; 4110 4109 4111 4110 local_irq_enable(); ··· 4782 4781 /** 4783 4782 * dev_change_carrier - Change device carrier 4784 4783 * @dev: device 4785 - * @new_carries: new value 4784 + * @new_carrier: new value 4786 4785 * 4787 4786 * Change device carrier 4788 4787 */
+1
net/core/rtnetlink.c
··· 979 979 * report anything. 980 980 */ 981 981 ivi.spoofchk = -1; 982 + memset(ivi.mac, 0, sizeof(ivi.mac)); 982 983 if (dev->netdev_ops->ndo_get_vf_config(dev, i, &ivi)) 983 984 break; 984 985 vf_mac.vf =
+8
net/dcb/dcbnl.c
··· 284 284 if (!netdev->dcbnl_ops->getpermhwaddr) 285 285 return -EOPNOTSUPP; 286 286 287 + memset(perm_addr, 0, sizeof(perm_addr)); 287 288 netdev->dcbnl_ops->getpermhwaddr(netdev, perm_addr); 288 289 289 290 return nla_put(skb, DCB_ATTR_PERM_HWADDR, sizeof(perm_addr), perm_addr); ··· 1043 1042 1044 1043 if (ops->ieee_getets) { 1045 1044 struct ieee_ets ets; 1045 + memset(&ets, 0, sizeof(ets)); 1046 1046 err = ops->ieee_getets(netdev, &ets); 1047 1047 if (!err && 1048 1048 nla_put(skb, DCB_ATTR_IEEE_ETS, sizeof(ets), &ets)) ··· 1052 1050 1053 1051 if (ops->ieee_getmaxrate) { 1054 1052 struct ieee_maxrate maxrate; 1053 + memset(&maxrate, 0, sizeof(maxrate)); 1055 1054 err = ops->ieee_getmaxrate(netdev, &maxrate); 1056 1055 if (!err) { 1057 1056 err = nla_put(skb, DCB_ATTR_IEEE_MAXRATE, ··· 1064 1061 1065 1062 if (ops->ieee_getpfc) { 1066 1063 struct ieee_pfc pfc; 1064 + memset(&pfc, 0, sizeof(pfc)); 1067 1065 err = ops->ieee_getpfc(netdev, &pfc); 1068 1066 if (!err && 1069 1067 nla_put(skb, DCB_ATTR_IEEE_PFC, sizeof(pfc), &pfc)) ··· 1098 1094 /* get peer info if available */ 1099 1095 if (ops->ieee_peer_getets) { 1100 1096 struct ieee_ets ets; 1097 + memset(&ets, 0, sizeof(ets)); 1101 1098 err = ops->ieee_peer_getets(netdev, &ets); 1102 1099 if (!err && 1103 1100 nla_put(skb, DCB_ATTR_IEEE_PEER_ETS, sizeof(ets), &ets)) ··· 1107 1102 1108 1103 if (ops->ieee_peer_getpfc) { 1109 1104 struct ieee_pfc pfc; 1105 + memset(&pfc, 0, sizeof(pfc)); 1110 1106 err = ops->ieee_peer_getpfc(netdev, &pfc); 1111 1107 if (!err && 1112 1108 nla_put(skb, DCB_ATTR_IEEE_PEER_PFC, sizeof(pfc), &pfc)) ··· 1286 1280 /* peer info if available */ 1287 1281 if (ops->cee_peer_getpg) { 1288 1282 struct cee_pg pg; 1283 + memset(&pg, 0, sizeof(pg)); 1289 1284 err = ops->cee_peer_getpg(netdev, &pg); 1290 1285 if (!err && 1291 1286 nla_put(skb, DCB_ATTR_CEE_PEER_PG, sizeof(pg), &pg)) ··· 1295 1288 1296 1289 if (ops->cee_peer_getpfc) { 1297 1290 struct cee_pfc pfc; 1291 + memset(&pfc, 0, sizeof(pfc)); 1298 1292 err = ops->cee_peer_getpfc(netdev, &pfc); 1299 1293 if (!err && 1300 1294 nla_put(skb, DCB_ATTR_CEE_PEER_PFC, sizeof(pfc), &pfc))
+1 -1
net/ieee802154/6lowpan.h
··· 84 84 (memcmp(addr1, addr2, length >> 3) == 0) 85 85 86 86 /* local link, i.e. FE80::/10 */ 87 - #define is_addr_link_local(a) (((a)->s6_addr16[0]) == 0x80FE) 87 + #define is_addr_link_local(a) (((a)->s6_addr16[0]) == htons(0xFE80)) 88 88 89 89 /* 90 90 * check whether we can compress the IID to 16 bits,
+1
net/ipv4/inet_connection_sock.c
··· 735 735 * tcp/dccp_create_openreq_child(). 736 736 */ 737 737 void inet_csk_prepare_forced_close(struct sock *sk) 738 + __releases(&sk->sk_lock.slock) 738 739 { 739 740 /* sk_clone_lock locked the socket and set refcnt to 2 */ 740 741 bh_unlock_sock(sk);
+2 -5
net/ipv4/ip_options.c
··· 370 370 } 371 371 switch (optptr[3]&0xF) { 372 372 case IPOPT_TS_TSONLY: 373 - opt->ts = optptr - iph; 374 373 if (skb) 375 374 timeptr = &optptr[optptr[2]-1]; 376 375 opt->ts_needtime = 1; ··· 380 381 pp_ptr = optptr + 2; 381 382 goto error; 382 383 } 383 - opt->ts = optptr - iph; 384 384 if (rt) { 385 385 spec_dst_fill(&spec_dst, skb); 386 386 memcpy(&optptr[optptr[2]-1], &spec_dst, 4); ··· 394 396 pp_ptr = optptr + 2; 395 397 goto error; 396 398 } 397 - opt->ts = optptr - iph; 398 399 { 399 400 __be32 addr; 400 401 memcpy(&addr, &optptr[optptr[2]-1], 4); ··· 420 423 put_unaligned_be32(midtime, timeptr); 421 424 opt->is_changed = 1; 422 425 } 423 - } else { 426 + } else if ((optptr[3]&0xF) != IPOPT_TS_PRESPEC) { 424 427 unsigned int overflow = optptr[3]>>4; 425 428 if (overflow == 15) { 426 429 pp_ptr = optptr + 3; 427 430 goto error; 428 431 } 429 - opt->ts = optptr - iph; 430 432 if (skb) { 431 433 optptr[3] = (optptr[3]&0xF)|((overflow+1)<<4); 432 434 opt->is_changed = 1; 433 435 } 434 436 } 437 + opt->ts = optptr - iph; 435 438 break; 436 439 case IPOPT_RA: 437 440 if (optlen < 4) {
+2 -1
net/ipv6/ip6_input.c
··· 281 281 * IPv6 multicast router mode is now supported ;) 282 282 */ 283 283 if (dev_net(skb->dev)->ipv6.devconf_all->mc_forwarding && 284 - !(ipv6_addr_type(&hdr->daddr) & IPV6_ADDR_LINKLOCAL) && 284 + !(ipv6_addr_type(&hdr->daddr) & 285 + (IPV6_ADDR_LOOPBACK|IPV6_ADDR_LINKLOCAL)) && 285 286 likely(!(IP6CB(skb)->flags & IP6SKB_FORWARDED))) { 286 287 /* 287 288 * Okay, we try to forward - split and duplicate
+16 -13
net/irda/ircomm/ircomm_tty.c
··· 280 280 struct tty_port *port = &self->port; 281 281 DECLARE_WAITQUEUE(wait, current); 282 282 int retval; 283 - int do_clocal = 0, extra_count = 0; 283 + int do_clocal = 0; 284 284 unsigned long flags; 285 285 286 286 IRDA_DEBUG(2, "%s()\n", __func__ ); ··· 289 289 * If non-blocking mode is set, or the port is not enabled, 290 290 * then make the check up front and then exit. 291 291 */ 292 - if (filp->f_flags & O_NONBLOCK || tty->flags & (1 << TTY_IO_ERROR)){ 293 - /* nonblock mode is set or port is not enabled */ 292 + if (test_bit(TTY_IO_ERROR, &tty->flags)) { 293 + port->flags |= ASYNC_NORMAL_ACTIVE; 294 + return 0; 295 + } 296 + 297 + if (filp->f_flags & O_NONBLOCK) { 298 + /* nonblock mode is set */ 299 + if (tty->termios.c_cflag & CBAUD) 300 + tty_port_raise_dtr_rts(port); 294 301 port->flags |= ASYNC_NORMAL_ACTIVE; 295 302 IRDA_DEBUG(1, "%s(), O_NONBLOCK requested!\n", __func__ ); 296 303 return 0; ··· 322 315 __FILE__, __LINE__, tty->driver->name, port->count); 323 316 324 317 spin_lock_irqsave(&port->lock, flags); 325 - if (!tty_hung_up_p(filp)) { 326 - extra_count = 1; 318 + if (!tty_hung_up_p(filp)) 327 319 port->count--; 328 - } 329 - spin_unlock_irqrestore(&port->lock, flags); 330 320 port->blocked_open++; 321 + spin_unlock_irqrestore(&port->lock, flags); 331 322 332 323 while (1) { 333 324 if (tty->termios.c_cflag & CBAUD) 334 325 tty_port_raise_dtr_rts(port); 335 326 336 - current->state = TASK_INTERRUPTIBLE; 327 + set_current_state(TASK_INTERRUPTIBLE); 337 328 338 329 if (tty_hung_up_p(filp) || 339 330 !test_bit(ASYNCB_INITIALIZED, &port->flags)) { ··· 366 361 __set_current_state(TASK_RUNNING); 367 362 remove_wait_queue(&port->open_wait, &wait); 368 363 369 - if (extra_count) { 370 - /* ++ is not atomic, so this should be protected - Jean II */ 371 - spin_lock_irqsave(&port->lock, flags); 364 + spin_lock_irqsave(&port->lock, flags); 365 + if (!tty_hung_up_p(filp)) 372 366 port->count++; 373 - spin_unlock_irqrestore(&port->lock, flags); 374 - } 375 367 port->blocked_open--; 368 + spin_unlock_irqrestore(&port->lock, flags); 376 369 377 370 IRDA_DEBUG(1, "%s(%d):block_til_ready after blocking on %s open_count=%d\n", 378 371 __FILE__, __LINE__, tty->driver->name, port->count);
+4 -4
net/key/af_key.c
··· 2201 2201 XFRM_POLICY_BLOCK : XFRM_POLICY_ALLOW); 2202 2202 xp->priority = pol->sadb_x_policy_priority; 2203 2203 2204 - sa = ext_hdrs[SADB_EXT_ADDRESS_SRC-1], 2204 + sa = ext_hdrs[SADB_EXT_ADDRESS_SRC-1]; 2205 2205 xp->family = pfkey_sadb_addr2xfrm_addr(sa, &xp->selector.saddr); 2206 2206 if (!xp->family) { 2207 2207 err = -EINVAL; ··· 2214 2214 if (xp->selector.sport) 2215 2215 xp->selector.sport_mask = htons(0xffff); 2216 2216 2217 - sa = ext_hdrs[SADB_EXT_ADDRESS_DST-1], 2217 + sa = ext_hdrs[SADB_EXT_ADDRESS_DST-1]; 2218 2218 pfkey_sadb_addr2xfrm_addr(sa, &xp->selector.daddr); 2219 2219 xp->selector.prefixlen_d = sa->sadb_address_prefixlen; 2220 2220 ··· 2315 2315 2316 2316 memset(&sel, 0, sizeof(sel)); 2317 2317 2318 - sa = ext_hdrs[SADB_EXT_ADDRESS_SRC-1], 2318 + sa = ext_hdrs[SADB_EXT_ADDRESS_SRC-1]; 2319 2319 sel.family = pfkey_sadb_addr2xfrm_addr(sa, &sel.saddr); 2320 2320 sel.prefixlen_s = sa->sadb_address_prefixlen; 2321 2321 sel.proto = pfkey_proto_to_xfrm(sa->sadb_address_proto); ··· 2323 2323 if (sel.sport) 2324 2324 sel.sport_mask = htons(0xffff); 2325 2325 2326 - sa = ext_hdrs[SADB_EXT_ADDRESS_DST-1], 2326 + sa = ext_hdrs[SADB_EXT_ADDRESS_DST-1]; 2327 2327 pfkey_sadb_addr2xfrm_addr(sa, &sel.daddr); 2328 2328 sel.prefixlen_d = sa->sadb_address_prefixlen; 2329 2329 sel.proto = pfkey_proto_to_xfrm(sa->sadb_address_proto);
+13 -8
net/mac80211/cfg.c
··· 3290 3290 int ret = -ENODATA; 3291 3291 3292 3292 rcu_read_lock(); 3293 - if (local->use_chanctx) { 3294 - chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf); 3295 - if (chanctx_conf) { 3296 - *chandef = chanctx_conf->def; 3297 - ret = 0; 3298 - } 3299 - } else if (local->open_count == local->monitors) { 3300 - *chandef = local->monitor_chandef; 3293 + chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf); 3294 + if (chanctx_conf) { 3295 + *chandef = chanctx_conf->def; 3296 + ret = 0; 3297 + } else if (local->open_count > 0 && 3298 + local->open_count == local->monitors && 3299 + sdata->vif.type == NL80211_IFTYPE_MONITOR) { 3300 + if (local->use_chanctx) 3301 + *chandef = local->monitor_chandef; 3302 + else 3303 + cfg80211_chandef_create(chandef, 3304 + local->_oper_channel, 3305 + local->_oper_channel_type); 3301 3306 ret = 0; 3302 3307 } 3303 3308 rcu_read_unlock();
+6
net/mac80211/iface.c
··· 541 541 542 542 ieee80211_adjust_monitor_flags(sdata, 1); 543 543 ieee80211_configure_filter(local); 544 + mutex_lock(&local->mtx); 545 + ieee80211_recalc_idle(local); 546 + mutex_unlock(&local->mtx); 544 547 545 548 netif_carrier_on(dev); 546 549 break; ··· 815 812 816 813 ieee80211_adjust_monitor_flags(sdata, -1); 817 814 ieee80211_configure_filter(local); 815 + mutex_lock(&local->mtx); 816 + ieee80211_recalc_idle(local); 817 + mutex_unlock(&local->mtx); 818 818 break; 819 819 case NL80211_IFTYPE_P2P_DEVICE: 820 820 /* relies on synchronize_rcu() below */
+23 -5
net/mac80211/mlme.c
··· 647 647 our_mcs = (le16_to_cpu(vht_cap.vht_mcs.rx_mcs_map) & 648 648 mask) >> shift; 649 649 650 + if (our_mcs == IEEE80211_VHT_MCS_NOT_SUPPORTED) 651 + continue; 652 + 650 653 switch (ap_mcs) { 651 654 default: 652 655 if (our_mcs <= ap_mcs) ··· 3506 3503 struct ieee80211_if_managed *ifmgd = &sdata->u.mgd; 3507 3504 3508 3505 /* 3506 + * Stop timers before deleting work items, as timers 3507 + * could race and re-add the work-items. They will be 3508 + * re-established on connection. 3509 + */ 3510 + del_timer_sync(&ifmgd->conn_mon_timer); 3511 + del_timer_sync(&ifmgd->bcn_mon_timer); 3512 + 3513 + /* 3509 3514 * we need to use atomic bitops for the running bits 3510 3515 * only because both timers might fire at the same 3511 3516 * time -- the code here is properly synchronised. ··· 3527 3516 if (del_timer_sync(&ifmgd->timer)) 3528 3517 set_bit(TMR_RUNNING_TIMER, &ifmgd->timers_running); 3529 3518 3530 - cancel_work_sync(&ifmgd->chswitch_work); 3531 3519 if (del_timer_sync(&ifmgd->chswitch_timer)) 3532 3520 set_bit(TMR_RUNNING_CHANSW, &ifmgd->timers_running); 3533 - 3534 - /* these will just be re-established on connection */ 3535 - del_timer_sync(&ifmgd->conn_mon_timer); 3536 - del_timer_sync(&ifmgd->bcn_mon_timer); 3521 + cancel_work_sync(&ifmgd->chswitch_work); 3537 3522 } 3538 3523 3539 3524 void ieee80211_sta_restart(struct ieee80211_sub_if_data *sdata) ··· 4321 4314 void ieee80211_mgd_stop(struct ieee80211_sub_if_data *sdata) 4322 4315 { 4323 4316 struct ieee80211_if_managed *ifmgd = &sdata->u.mgd; 4317 + 4318 + /* 4319 + * Make sure some work items will not run after this, 4320 + * they will not do anything but might not have been 4321 + * cancelled when disconnecting. 4322 + */ 4323 + cancel_work_sync(&ifmgd->monitor_work); 4324 + cancel_work_sync(&ifmgd->beacon_connection_loss_work); 4325 + cancel_work_sync(&ifmgd->request_smps_work); 4326 + cancel_work_sync(&ifmgd->csa_connection_drop_work); 4327 + cancel_work_sync(&ifmgd->chswitch_work); 4324 4328 4325 4329 mutex_lock(&ifmgd->mtx); 4326 4330 if (ifmgd->assoc_data)
+2 -1
net/mac80211/tx.c
··· 2745 2745 cpu_to_le16(IEEE80211_FCTL_MOREDATA); 2746 2746 } 2747 2747 2748 - sdata = IEEE80211_DEV_TO_SUB_IF(skb->dev); 2748 + if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN) 2749 + sdata = IEEE80211_DEV_TO_SUB_IF(skb->dev); 2749 2750 if (!ieee80211_tx_prepare(sdata, &tx, skb)) 2750 2751 break; 2751 2752 dev_kfree_skb_any(skb);
+10 -1
net/netfilter/nf_conntrack_helper.c
··· 339 339 { 340 340 const struct nf_conn_help *help; 341 341 const struct nf_conntrack_helper *helper; 342 + struct va_format vaf; 343 + va_list args; 344 + 345 + va_start(args, fmt); 346 + 347 + vaf.fmt = fmt; 348 + vaf.va = &args; 342 349 343 350 /* Called from the helper function, this call never fails */ 344 351 help = nfct_help(ct); ··· 354 347 helper = rcu_dereference(help->helper); 355 348 356 349 nf_log_packet(nf_ct_l3num(ct), 0, skb, NULL, NULL, NULL, 357 - "nf_ct_%s: dropping packet: %s ", helper->name, fmt); 350 + "nf_ct_%s: dropping packet: %pV ", helper->name, &vaf); 351 + 352 + va_end(args); 358 353 } 359 354 EXPORT_SYMBOL_GPL(nf_ct_helper_log); 360 355
+1 -6
net/netfilter/nfnetlink.c
··· 62 62 } 63 63 EXPORT_SYMBOL_GPL(nfnl_unlock); 64 64 65 - static struct mutex *nfnl_get_lock(__u8 subsys_id) 66 - { 67 - return &table[subsys_id].mutex; 68 - } 69 - 70 65 int nfnetlink_subsys_register(const struct nfnetlink_subsystem *n) 71 66 { 72 67 nfnl_lock(n->subsys_id); ··· 194 199 rcu_read_unlock(); 195 200 nfnl_lock(subsys_id); 196 201 if (rcu_dereference_protected(table[subsys_id].subsys, 197 - lockdep_is_held(nfnl_get_lock(subsys_id))) != ss || 202 + lockdep_is_held(&table[subsys_id].mutex)) != ss || 198 203 nfnetlink_find_client(type, ss) != nc) 199 204 err = -EAGAIN; 200 205 else if (nc->call)
+3
net/netfilter/xt_AUDIT.c
··· 124 124 const struct xt_audit_info *info = par->targinfo; 125 125 struct audit_buffer *ab; 126 126 127 + if (audit_enabled == 0) 128 + goto errout; 129 + 127 130 ab = audit_log_start(NULL, GFP_ATOMIC, AUDIT_NETFILTER_PKT); 128 131 if (ab == NULL) 129 132 goto errout;
+11 -16
net/netlabel/netlabel_unlabeled.c
··· 1189 1189 struct netlbl_unlhsh_walk_arg cb_arg; 1190 1190 u32 skip_bkt = cb->args[0]; 1191 1191 u32 skip_chain = cb->args[1]; 1192 - u32 skip_addr4 = cb->args[2]; 1193 - u32 skip_addr6 = cb->args[3]; 1194 1192 u32 iter_bkt; 1195 1193 u32 iter_chain = 0, iter_addr4 = 0, iter_addr6 = 0; 1196 1194 struct netlbl_unlhsh_iface *iface; ··· 1213 1215 continue; 1214 1216 netlbl_af4list_foreach_rcu(addr4, 1215 1217 &iface->addr4_list) { 1216 - if (iter_addr4++ < skip_addr4) 1218 + if (iter_addr4++ < cb->args[2]) 1217 1219 continue; 1218 1220 if (netlbl_unlabel_staticlist_gen( 1219 1221 NLBL_UNLABEL_C_STATICLIST, ··· 1229 1231 #if IS_ENABLED(CONFIG_IPV6) 1230 1232 netlbl_af6list_foreach_rcu(addr6, 1231 1233 &iface->addr6_list) { 1232 - if (iter_addr6++ < skip_addr6) 1234 + if (iter_addr6++ < cb->args[3]) 1233 1235 continue; 1234 1236 if (netlbl_unlabel_staticlist_gen( 1235 1237 NLBL_UNLABEL_C_STATICLIST, ··· 1248 1250 1249 1251 unlabel_staticlist_return: 1250 1252 rcu_read_unlock(); 1251 - cb->args[0] = skip_bkt; 1252 - cb->args[1] = skip_chain; 1253 - cb->args[2] = skip_addr4; 1254 - cb->args[3] = skip_addr6; 1253 + cb->args[0] = iter_bkt; 1254 + cb->args[1] = iter_chain; 1255 + cb->args[2] = iter_addr4; 1256 + cb->args[3] = iter_addr6; 1255 1257 return skb->len; 1256 1258 } 1257 1259 ··· 1271 1273 { 1272 1274 struct netlbl_unlhsh_walk_arg cb_arg; 1273 1275 struct netlbl_unlhsh_iface *iface; 1274 - u32 skip_addr4 = cb->args[0]; 1275 - u32 skip_addr6 = cb->args[1]; 1276 - u32 iter_addr4 = 0; 1276 + u32 iter_addr4 = 0, iter_addr6 = 0; 1277 1277 struct netlbl_af4list *addr4; 1278 1278 #if IS_ENABLED(CONFIG_IPV6) 1279 - u32 iter_addr6 = 0; 1280 1279 struct netlbl_af6list *addr6; 1281 1280 #endif 1282 1281 ··· 1287 1292 goto unlabel_staticlistdef_return; 1288 1293 1289 1294 netlbl_af4list_foreach_rcu(addr4, &iface->addr4_list) { 1290 - if (iter_addr4++ < skip_addr4) 1295 + if (iter_addr4++ < cb->args[0]) 1291 1296 continue; 1292 1297 if 
(netlbl_unlabel_staticlist_gen(NLBL_UNLABEL_C_STATICLISTDEF, 1293 1298 iface, ··· 1300 1305 } 1301 1306 #if IS_ENABLED(CONFIG_IPV6) 1302 1307 netlbl_af6list_foreach_rcu(addr6, &iface->addr6_list) { 1303 - if (iter_addr6++ < skip_addr6) 1308 + if (iter_addr6++ < cb->args[1]) 1304 1309 continue; 1305 1310 if (netlbl_unlabel_staticlist_gen(NLBL_UNLABEL_C_STATICLISTDEF, 1306 1311 iface, ··· 1315 1320 1316 1321 unlabel_staticlistdef_return: 1317 1322 rcu_read_unlock(); 1318 - cb->args[0] = skip_addr4; 1319 - cb->args[1] = skip_addr6; 1323 + cb->args[0] = iter_addr4; 1324 + cb->args[1] = iter_addr6; 1320 1325 return skb->len; 1321 1326 } 1322 1327
+1
net/rds/stats.c
··· 87 87 for (i = 0; i < nr; i++) { 88 88 BUG_ON(strlen(names[i]) >= sizeof(ctr.name)); 89 89 strncpy(ctr.name, names[i], sizeof(ctr.name) - 1); 90 + ctr.name[sizeof(ctr.name) - 1] = '\0'; 90 91 ctr.value = values[i]; 91 92 92 93 rds_info_copy(iter, &ctr, sizeof(ctr));
+45 -21
net/sched/sch_qfq.c
··· 298 298 new_num_classes == q->max_agg_classes - 1) /* agg no more full */ 299 299 hlist_add_head(&agg->nonfull_next, &q->nonfull_aggs); 300 300 301 + /* The next assignment may let 302 + * agg->initial_budget > agg->budgetmax 303 + * hold, we will take it into account in charge_actual_service(). 304 + */ 301 305 agg->budgetmax = new_num_classes * agg->lmax; 302 306 new_agg_weight = agg->class_weight * new_num_classes; 303 307 agg->inv_w = ONE_FP/new_agg_weight; ··· 821 817 unsigned long old_vslot = q->oldV >> q->min_slot_shift; 822 818 823 819 if (vslot != old_vslot) { 824 - unsigned long mask = (1UL << fls(vslot ^ old_vslot)) - 1; 820 + unsigned long mask = (1ULL << fls(vslot ^ old_vslot)) - 1; 825 821 qfq_move_groups(q, mask, IR, ER); 826 822 qfq_move_groups(q, mask, IB, EB); 827 823 } ··· 992 988 /* Update F according to the actual service received by the aggregate. */ 993 989 static inline void charge_actual_service(struct qfq_aggregate *agg) 994 990 { 995 - /* compute the service received by the aggregate */ 996 - u32 service_received = agg->initial_budget - agg->budget; 991 + /* Compute the service received by the aggregate, taking into 992 + * account that, after decreasing the number of classes in 993 + * agg, it may happen that 994 + * agg->initial_budget - agg->budget > agg->budgetmax 995 + */ 996 + u32 service_received = min(agg->budgetmax, 997 + agg->initial_budget - agg->budget); 997 998 998 999 agg->F = agg->S + (u64)service_received * agg->inv_w; 999 1000 } 1001 + 1002 + static inline void qfq_update_agg_ts(struct qfq_sched *q, 1003 + struct qfq_aggregate *agg, 1004 + enum update_reason reason); 1005 + 1006 + static void qfq_schedule_agg(struct qfq_sched *q, struct qfq_aggregate *agg); 1000 1007 1001 1008 static struct sk_buff *qfq_dequeue(struct Qdisc *sch) 1002 1009 { ··· 1036 1021 in_serv_agg->initial_budget = in_serv_agg->budget = 1037 1022 in_serv_agg->budgetmax; 1038 1023 1039 1024 - if 
(!list_empty(&in_serv_agg->active)) { 1040 1025 /* 1041 1026 * Still active: reschedule for 1042 1027 * service. Possible optimization: if no other ··· 1047 1032 * handle it, we would need to maintain an 1048 1033 * extra num_active_aggs field. 1049 1034 */ 1050 - qfq_activate_agg(q, in_serv_agg, requeue); 1051 - else if (sch->q.qlen == 0) { /* no aggregate to serve */ 1035 + qfq_update_agg_ts(q, in_serv_agg, requeue); 1036 + qfq_schedule_agg(q, in_serv_agg); 1037 + } else if (sch->q.qlen == 0) { /* no aggregate to serve */ 1052 1038 q->in_serv_agg = NULL; 1053 1039 return NULL; 1054 1040 } ··· 1068 1052 qdisc_bstats_update(sch, skb); 1069 1053 1070 1054 agg_dequeue(in_serv_agg, cl, len); 1071 - in_serv_agg->budget -= len; 1055 + /* If lmax is lowered, through qfq_change_class, for a class 1056 + * owning pending packets with larger size than the new value 1057 + * of lmax, then the following condition may hold. 1058 + */ 1059 + if (unlikely(in_serv_agg->budget < len)) 1060 + in_serv_agg->budget = 0; 1061 + else 1062 + in_serv_agg->budget -= len; 1063 + 1072 1064 q->V += (u64)len * IWSUM; 1073 1065 pr_debug("qfq dequeue: len %u F %lld now %lld\n", 1074 1066 len, (unsigned long long) in_serv_agg->F, ··· 1241 1217 cl->deficit = agg->lmax; 1242 1218 list_add_tail(&cl->alist, &agg->active); 1243 1219 1244 - if (list_first_entry(&agg->active, struct qfq_class, alist) != cl) 1245 - return err; /* aggregate was not empty, nothing else to do */ 1220 + if (list_first_entry(&agg->active, struct qfq_class, alist) != cl || 1221 + q->in_serv_agg == agg) 1222 + return err; /* non-empty or in service, nothing else to do */ 1246 1223 1247 - /* recharge budget */ 1248 - agg->initial_budget = agg->budget = agg->budgetmax; 1249 - 1250 - qfq_update_agg_ts(q, agg, enqueue); 1251 - if (q->in_serv_agg == NULL) 1252 - q->in_serv_agg = agg; 1253 - else if (agg != q->in_serv_agg) 1254 - qfq_schedule_agg(q, agg); 1224 + qfq_activate_agg(q, agg, enqueue); 1255 1225 1256 1226 return err; 1257 
1227 } ··· 1279 1261 /* group was surely ineligible, remove */ 1280 1262 __clear_bit(grp->index, &q->bitmaps[IR]); 1281 1263 __clear_bit(grp->index, &q->bitmaps[IB]); 1282 - } else if (!q->bitmaps[ER] && qfq_gt(roundedS, q->V)) 1264 + } else if (!q->bitmaps[ER] && qfq_gt(roundedS, q->V) && 1265 + q->in_serv_agg == NULL) 1283 1266 q->V = roundedS; 1284 1267 1285 1268 grp->S = roundedS; ··· 1303 1284 static void qfq_activate_agg(struct qfq_sched *q, struct qfq_aggregate *agg, 1304 1285 enum update_reason reason) 1305 1286 { 1287 + agg->initial_budget = agg->budget = agg->budgetmax; /* recharge budg. */ 1288 + 1306 1289 qfq_update_agg_ts(q, agg, reason); 1307 - qfq_schedule_agg(q, agg); 1290 + if (q->in_serv_agg == NULL) { /* no aggr. in service or scheduled */ 1291 + q->in_serv_agg = agg; /* start serving this aggregate */ 1292 + /* update V: to be in service, agg must be eligible */ 1293 + q->oldV = q->V = agg->S; 1294 + } else if (agg != q->in_serv_agg) 1295 + qfq_schedule_agg(q, agg); 1308 1296 } 1309 1297 1310 1298 static void qfq_slot_remove(struct qfq_sched *q, struct qfq_group *grp, ··· 1383 1357 __set_bit(grp->index, &q->bitmaps[s]); 1384 1358 } 1385 1359 } 1386 - 1387 - qfq_update_eligible(q); 1388 1360 } 1389 1361 1390 1362 static void qfq_qlen_notify(struct Qdisc *sch, unsigned long arg)
+1 -3
net/sunrpc/rpc_pipe.c
··· 1174 1174 .mount = rpc_mount, 1175 1175 .kill_sb = rpc_kill_sb, 1176 1176 }; 1177 + MODULE_ALIAS_FS("rpc_pipefs"); 1177 1178 1178 1179 static void 1179 1180 init_once(void *foo) ··· 1219 1218 kmem_cache_destroy(rpc_inode_cachep); 1220 1219 unregister_filesystem(&rpc_pipe_fs_type); 1221 1220 } 1222 - 1223 - /* Make 'mount -t rpc_pipefs ...' autoload this module. */ 1224 - MODULE_ALIAS("rpc_pipefs");
+1 -2
net/wireless/core.c
··· 367 367 rdev->wiphy.rts_threshold = (u32) -1; 368 368 rdev->wiphy.coverage_class = 0; 369 369 370 - rdev->wiphy.features = NL80211_FEATURE_SCAN_FLUSH | 371 - NL80211_FEATURE_ADVERTISE_CHAN_LIMITS; 370 + rdev->wiphy.features = NL80211_FEATURE_SCAN_FLUSH; 372 371 373 372 return &rdev->wiphy; 374 373 }
+25 -26
net/wireless/nl80211.c
··· 557 557 if ((chan->flags & IEEE80211_CHAN_RADAR) && 558 558 nla_put_flag(msg, NL80211_FREQUENCY_ATTR_RADAR)) 559 559 goto nla_put_failure; 560 - if ((chan->flags & IEEE80211_CHAN_NO_HT40MINUS) && 561 - nla_put_flag(msg, NL80211_FREQUENCY_ATTR_NO_HT40_MINUS)) 562 - goto nla_put_failure; 563 - if ((chan->flags & IEEE80211_CHAN_NO_HT40PLUS) && 564 - nla_put_flag(msg, NL80211_FREQUENCY_ATTR_NO_HT40_PLUS)) 565 - goto nla_put_failure; 566 - if ((chan->flags & IEEE80211_CHAN_NO_80MHZ) && 567 - nla_put_flag(msg, NL80211_FREQUENCY_ATTR_NO_80MHZ)) 568 - goto nla_put_failure; 569 - if ((chan->flags & IEEE80211_CHAN_NO_160MHZ) && 570 - nla_put_flag(msg, NL80211_FREQUENCY_ATTR_NO_160MHZ)) 571 - goto nla_put_failure; 572 560 573 561 if (nla_put_u32(msg, NL80211_FREQUENCY_ATTR_MAX_TX_POWER, 574 562 DBM_TO_MBM(chan->max_power))) ··· 1298 1310 dev->wiphy.max_acl_mac_addrs)) 1299 1311 goto nla_put_failure; 1300 1312 1301 - if (dev->wiphy.extended_capabilities && 1302 - (nla_put(msg, NL80211_ATTR_EXT_CAPA, 1303 - dev->wiphy.extended_capabilities_len, 1304 - dev->wiphy.extended_capabilities) || 1305 - nla_put(msg, NL80211_ATTR_EXT_CAPA_MASK, 1306 - dev->wiphy.extended_capabilities_len, 1307 - dev->wiphy.extended_capabilities_mask))) 1308 - goto nla_put_failure; 1309 - 1310 1313 return genlmsg_end(msg, hdr); 1311 1314 1312 1315 nla_put_failure: ··· 1307 1328 1308 1329 static int nl80211_dump_wiphy(struct sk_buff *skb, struct netlink_callback *cb) 1309 1330 { 1310 - int idx = 0; 1331 + int idx = 0, ret; 1311 1332 int start = cb->args[0]; 1312 1333 struct cfg80211_registered_device *dev; 1313 1334 ··· 1317 1338 continue; 1318 1339 if (++idx <= start) 1319 1340 continue; 1320 - if (nl80211_send_wiphy(skb, NETLINK_CB(cb->skb).portid, 1321 - cb->nlh->nlmsg_seq, NLM_F_MULTI, 1322 - dev) < 0) { 1341 + ret = nl80211_send_wiphy(skb, NETLINK_CB(cb->skb).portid, 1342 + cb->nlh->nlmsg_seq, NLM_F_MULTI, 1343 + dev); 1344 + if (ret < 0) { 1345 + /* 1346 + * If sending the wiphy data didn't fit 
(ENOBUFS or 1347 + * EMSGSIZE returned), this SKB is still empty (so 1348 + * it's not too big because another wiphy dataset is 1349 + * already in the skb) and we've not tried to adjust 1350 + * the dump allocation yet ... then adjust the alloc 1351 + * size to be bigger, and return 1 but with the empty 1352 + * skb. This results in an empty message being RX'ed 1353 + * in userspace, but that is ignored. 1354 + * 1355 + * We can then retry with the larger buffer. 1356 + */ 1357 + if ((ret == -ENOBUFS || ret == -EMSGSIZE) && 1358 + !skb->len && 1359 + cb->min_dump_alloc < 4096) { 1360 + cb->min_dump_alloc = 4096; 1361 + mutex_unlock(&cfg80211_mutex); 1362 + return 1; 1363 + } 1323 1364 idx--; 1324 1365 break; 1325 1366 } ··· 1356 1357 struct sk_buff *msg; 1357 1358 struct cfg80211_registered_device *dev = info->user_ptr[0]; 1358 1359 1359 - msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); 1360 + msg = nlmsg_new(4096, GFP_KERNEL); 1360 1361 if (!msg) 1361 1362 return -ENOMEM; 1362 1363
+1 -1
security/keys/process_keys.c
··· 839 839 new-> sgid = old-> sgid; 840 840 new->fsgid = old->fsgid; 841 841 new->user = get_uid(old->user); 842 - new->user_ns = get_user_ns(new->user_ns); 842 + new->user_ns = get_user_ns(old->user_ns); 843 843 new->group_info = get_group_info(old->group_info); 844 844 845 845 new->securebits = old->securebits;
+12 -2
sound/core/seq/oss/seq_oss_event.c
··· 285 285 static int 286 286 note_on_event(struct seq_oss_devinfo *dp, int dev, int ch, int note, int vel, struct snd_seq_event *ev) 287 287 { 288 - struct seq_oss_synthinfo *info = &dp->synths[dev]; 288 + struct seq_oss_synthinfo *info; 289 + 290 + if (!snd_seq_oss_synth_is_valid(dp, dev)) 291 + return -ENXIO; 292 + 293 + info = &dp->synths[dev]; 289 294 switch (info->arg.event_passing) { 290 295 case SNDRV_SEQ_OSS_PROCESS_EVENTS: 291 296 if (! info->ch || ch < 0 || ch >= info->nr_voices) { ··· 345 340 static int 346 341 note_off_event(struct seq_oss_devinfo *dp, int dev, int ch, int note, int vel, struct snd_seq_event *ev) 347 342 { 348 - struct seq_oss_synthinfo *info = &dp->synths[dev]; 343 + struct seq_oss_synthinfo *info; 344 + 345 + if (!snd_seq_oss_synth_is_valid(dp, dev)) 346 + return -ENXIO; 347 + 348 + info = &dp->synths[dev]; 349 349 switch (info->arg.event_passing) { 350 350 case SNDRV_SEQ_OSS_PROCESS_EVENTS: 351 351 if (! info->ch || ch < 0 || ch >= info->nr_voices) {
+4 -1
sound/core/vmaster.c
··· 213 213 } 214 214 if (!changed) 215 215 return 0; 216 - return slave_put_val(slave, ucontrol); 216 + err = slave_put_val(slave, ucontrol); 217 + if (err < 0) 218 + return err; 219 + return 1; 217 220 } 218 221 219 222 static int slave_tlv_cmd(struct snd_kcontrol *kcontrol,
+9 -2
sound/pci/hda/hda_codec.c
··· 3334 3334 return -EBUSY; 3335 3335 } 3336 3336 spdif = snd_array_new(&codec->spdif_out); 3337 + if (!spdif) 3338 + return -ENOMEM; 3337 3339 for (dig_mix = dig_mixes; dig_mix->name; dig_mix++) { 3338 3340 kctl = snd_ctl_new1(dig_mix, codec); 3339 3341 if (!kctl) ··· 3433 3431 int snd_hda_create_spdif_share_sw(struct hda_codec *codec, 3434 3432 struct hda_multi_out *mout) 3435 3433 { 3434 + struct snd_kcontrol *kctl; 3435 + 3436 3436 if (!mout->dig_out_nid) 3437 3437 return 0; 3438 + 3439 + kctl = snd_ctl_new1(&spdif_share_sw, mout); 3440 + if (!kctl) 3441 + return -ENOMEM; 3438 3442 /* ATTENTION: here mout is passed as private_data, instead of codec */ 3439 - return snd_hda_ctl_add(codec, mout->dig_out_nid, 3440 - snd_ctl_new1(&spdif_share_sw, mout)); 3443 + return snd_hda_ctl_add(codec, mout->dig_out_nid, kctl); 3441 3444 } 3442 3445 EXPORT_SYMBOL_HDA(snd_hda_create_spdif_share_sw); 3443 3446
+6 -2
sound/pci/hda/patch_ca0132.c
··· 2298 2298 hda_frame_size_words = ((sample_rate_div == 0) ? 0 : 2299 2299 (num_chans * sample_rate_mul / sample_rate_div)); 2300 2300 2301 + if (hda_frame_size_words == 0) { 2302 + snd_printdd(KERN_ERR "frmsz zero\n"); 2303 + return -EINVAL; 2304 + } 2305 + 2301 2306 buffer_size_words = min(buffer_size_words, 2302 2307 (unsigned int)(UC_RANGE(chip_addx, 1) ? 2303 2308 65536 : 32768)); ··· 2313 2308 chip_addx, hda_frame_size_words, num_chans, 2314 2309 sample_rate_mul, sample_rate_div, buffer_size_words); 2315 2310 2316 - if ((buffer_addx == NULL) || (hda_frame_size_words == 0) || 2317 - (buffer_size_words < hda_frame_size_words)) { 2311 + if (buffer_size_words < hda_frame_size_words) { 2318 2312 snd_printdd(KERN_ERR "dspxfr_one_seg:failed\n"); 2319 2313 return -EINVAL; 2320 2314 }
+2
sound/pci/hda/patch_realtek.c
··· 3163 3163 case 0x10ec0290: 3164 3164 spec->codec_variant = ALC269_TYPE_ALC280; 3165 3165 break; 3166 + case 0x10ec0233: 3166 3167 case 0x10ec0282: 3167 3168 case 0x10ec0283: 3168 3169 spec->codec_variant = ALC269_TYPE_ALC282; ··· 3863 3862 */ 3864 3863 static const struct hda_codec_preset snd_hda_preset_realtek[] = { 3865 3864 { .id = 0x10ec0221, .name = "ALC221", .patch = patch_alc269 }, 3865 + { .id = 0x10ec0233, .name = "ALC233", .patch = patch_alc269 }, 3866 3866 { .id = 0x10ec0260, .name = "ALC260", .patch = patch_alc260 }, 3867 3867 { .id = 0x10ec0262, .name = "ALC262", .patch = patch_alc262 }, 3868 3868 { .id = 0x10ec0267, .name = "ALC267", .patch = patch_alc268 },
+2
sound/pci/ice1712/ice1712.c
··· 2594 2594 snd_ice1712_proc_init(ice); 2595 2595 synchronize_irq(pci->irq); 2596 2596 2597 + card->private_data = ice; 2598 + 2597 2599 err = pci_request_regions(pci, "ICE1712"); 2598 2600 if (err < 0) { 2599 2601 kfree(ice);
+13 -2
sound/soc/codecs/wm5102.c
··· 573 573 { 0x025e, 0x0112 }, 574 574 }; 575 575 576 + static const struct reg_default wm5102_sysclk_revb_patch[] = { 577 + { 0x3081, 0x08FE }, 578 + { 0x3083, 0x00ED }, 579 + { 0x30C1, 0x08FE }, 580 + { 0x30C3, 0x00ED }, 581 + }; 582 + 576 583 static int wm5102_sysclk_ev(struct snd_soc_dapm_widget *w, 577 584 struct snd_kcontrol *kcontrol, int event) 578 585 { ··· 593 586 case 0: 594 587 patch = wm5102_sysclk_reva_patch; 595 588 patch_size = ARRAY_SIZE(wm5102_sysclk_reva_patch); 589 + break; 590 + default: 591 + patch = wm5102_sysclk_revb_patch; 592 + patch_size = ARRAY_SIZE(wm5102_sysclk_revb_patch); 596 593 break; 597 594 } 598 595 ··· 766 755 767 756 SOC_DOUBLE_R("HPOUT1 Digital Switch", ARIZONA_DAC_DIGITAL_VOLUME_1L, 768 757 ARIZONA_DAC_DIGITAL_VOLUME_1R, ARIZONA_OUT1L_MUTE_SHIFT, 1, 1), 769 - SOC_DOUBLE_R("OUT2 Digital Switch", ARIZONA_DAC_DIGITAL_VOLUME_2L, 758 + SOC_DOUBLE_R("HPOUT2 Digital Switch", ARIZONA_DAC_DIGITAL_VOLUME_2L, 770 759 ARIZONA_DAC_DIGITAL_VOLUME_2R, ARIZONA_OUT2L_MUTE_SHIFT, 1, 1), 771 760 SOC_SINGLE("EPOUT Digital Switch", ARIZONA_DAC_DIGITAL_VOLUME_3L, 772 761 ARIZONA_OUT3L_MUTE_SHIFT, 1, 1), ··· 778 767 SOC_DOUBLE_R_TLV("HPOUT1 Digital Volume", ARIZONA_DAC_DIGITAL_VOLUME_1L, 779 768 ARIZONA_DAC_DIGITAL_VOLUME_1R, ARIZONA_OUT1L_VOL_SHIFT, 780 769 0xbf, 0, digital_tlv), 781 - SOC_DOUBLE_R_TLV("OUT2 Digital Volume", ARIZONA_DAC_DIGITAL_VOLUME_2L, 770 + SOC_DOUBLE_R_TLV("HPOUT2 Digital Volume", ARIZONA_DAC_DIGITAL_VOLUME_2L, 782 771 ARIZONA_DAC_DIGITAL_VOLUME_2R, ARIZONA_OUT2L_VOL_SHIFT, 783 772 0xbf, 0, digital_tlv), 784 773 SOC_SINGLE_TLV("EPOUT Digital Volume", ARIZONA_DAC_DIGITAL_VOLUME_3L,
+8 -8
sound/soc/codecs/wm5110.c
··· 213 213 214 214 SOC_SINGLE("HPOUT1 High Performance Switch", ARIZONA_OUTPUT_PATH_CONFIG_1L, 215 215 ARIZONA_OUT1_OSR_SHIFT, 1, 0), 216 - SOC_SINGLE("OUT2 High Performance Switch", ARIZONA_OUTPUT_PATH_CONFIG_2L, 216 + SOC_SINGLE("HPOUT2 High Performance Switch", ARIZONA_OUTPUT_PATH_CONFIG_2L, 217 217 ARIZONA_OUT2_OSR_SHIFT, 1, 0), 218 - SOC_SINGLE("OUT3 High Performance Switch", ARIZONA_OUTPUT_PATH_CONFIG_3L, 218 + SOC_SINGLE("HPOUT3 High Performance Switch", ARIZONA_OUTPUT_PATH_CONFIG_3L, 219 219 ARIZONA_OUT3_OSR_SHIFT, 1, 0), 220 220 SOC_SINGLE("Speaker High Performance Switch", ARIZONA_OUTPUT_PATH_CONFIG_4L, 221 221 ARIZONA_OUT4_OSR_SHIFT, 1, 0), ··· 226 226 227 227 SOC_DOUBLE_R("HPOUT1 Digital Switch", ARIZONA_DAC_DIGITAL_VOLUME_1L, 228 228 ARIZONA_DAC_DIGITAL_VOLUME_1R, ARIZONA_OUT1L_MUTE_SHIFT, 1, 1), 229 - SOC_DOUBLE_R("OUT2 Digital Switch", ARIZONA_DAC_DIGITAL_VOLUME_2L, 229 + SOC_DOUBLE_R("HPOUT2 Digital Switch", ARIZONA_DAC_DIGITAL_VOLUME_2L, 230 230 ARIZONA_DAC_DIGITAL_VOLUME_2R, ARIZONA_OUT2L_MUTE_SHIFT, 1, 1), 231 - SOC_DOUBLE_R("OUT3 Digital Switch", ARIZONA_DAC_DIGITAL_VOLUME_3L, 231 + SOC_DOUBLE_R("HPOUT3 Digital Switch", ARIZONA_DAC_DIGITAL_VOLUME_3L, 232 232 ARIZONA_DAC_DIGITAL_VOLUME_3R, ARIZONA_OUT3L_MUTE_SHIFT, 1, 1), 233 233 SOC_DOUBLE_R("Speaker Digital Switch", ARIZONA_DAC_DIGITAL_VOLUME_4L, 234 234 ARIZONA_DAC_DIGITAL_VOLUME_4R, ARIZONA_OUT4L_MUTE_SHIFT, 1, 1), ··· 240 240 SOC_DOUBLE_R_TLV("HPOUT1 Digital Volume", ARIZONA_DAC_DIGITAL_VOLUME_1L, 241 241 ARIZONA_DAC_DIGITAL_VOLUME_1R, ARIZONA_OUT1L_VOL_SHIFT, 242 242 0xbf, 0, digital_tlv), 243 - SOC_DOUBLE_R_TLV("OUT2 Digital Volume", ARIZONA_DAC_DIGITAL_VOLUME_2L, 243 + SOC_DOUBLE_R_TLV("HPOUT2 Digital Volume", ARIZONA_DAC_DIGITAL_VOLUME_2L, 244 244 ARIZONA_DAC_DIGITAL_VOLUME_2R, ARIZONA_OUT2L_VOL_SHIFT, 245 245 0xbf, 0, digital_tlv), 246 - SOC_DOUBLE_R_TLV("OUT3 Digital Volume", ARIZONA_DAC_DIGITAL_VOLUME_3L, 246 + SOC_DOUBLE_R_TLV("HPOUT3 Digital Volume", ARIZONA_DAC_DIGITAL_VOLUME_3L, 
247 247 ARIZONA_DAC_DIGITAL_VOLUME_3R, ARIZONA_OUT3L_VOL_SHIFT, 248 248 0xbf, 0, digital_tlv), 249 249 SOC_DOUBLE_R_TLV("Speaker Digital Volume", ARIZONA_DAC_DIGITAL_VOLUME_4L, ··· 260 260 ARIZONA_OUTPUT_PATH_CONFIG_1R, 261 261 ARIZONA_OUT1L_PGA_VOL_SHIFT, 262 262 0x34, 0x40, 0, ana_tlv), 263 - SOC_DOUBLE_R_RANGE_TLV("OUT2 Volume", ARIZONA_OUTPUT_PATH_CONFIG_2L, 263 + SOC_DOUBLE_R_RANGE_TLV("HPOUT2 Volume", ARIZONA_OUTPUT_PATH_CONFIG_2L, 264 264 ARIZONA_OUTPUT_PATH_CONFIG_2R, 265 265 ARIZONA_OUT2L_PGA_VOL_SHIFT, 266 266 0x34, 0x40, 0, ana_tlv), 267 - SOC_DOUBLE_R_RANGE_TLV("OUT3 Volume", ARIZONA_OUTPUT_PATH_CONFIG_3L, 267 + SOC_DOUBLE_R_RANGE_TLV("HPOUT3 Volume", ARIZONA_OUTPUT_PATH_CONFIG_3L, 268 268 ARIZONA_OUTPUT_PATH_CONFIG_3R, 269 269 ARIZONA_OUT3L_PGA_VOL_SHIFT, 0x34, 0x40, 0, ana_tlv), 270 270
+2 -2
sound/soc/codecs/wm8350.c
··· 1301 1301 if (device_may_wakeup(wm8350->dev)) 1302 1302 pm_wakeup_event(wm8350->dev, 250); 1303 1303 1304 - schedule_delayed_work(&priv->hpl.work, 200); 1304 + schedule_delayed_work(&priv->hpl.work, msecs_to_jiffies(200)); 1305 1305 1306 1306 return IRQ_HANDLED; 1307 1307 } ··· 1318 1318 if (device_may_wakeup(wm8350->dev)) 1319 1319 pm_wakeup_event(wm8350->dev, 250); 1320 1320 1321 - schedule_delayed_work(&priv->hpr.work, 200); 1321 + schedule_delayed_work(&priv->hpr.work, msecs_to_jiffies(200)); 1322 1322 1323 1323 return IRQ_HANDLED; 1324 1324 }
+4 -4
sound/soc/codecs/wm8960.c
··· 53 53 * using 2 wire for device control, so we cache them instead. 54 54 */ 55 55 static const struct reg_default wm8960_reg_defaults[] = { 56 - { 0x0, 0x0097 }, 57 - { 0x1, 0x0097 }, 56 + { 0x0, 0x00a7 }, 57 + { 0x1, 0x00a7 }, 58 58 { 0x2, 0x0000 }, 59 59 { 0x3, 0x0000 }, 60 60 { 0x4, 0x0000 }, ··· 323 323 SND_SOC_DAPM_MIXER("Right Input Mixer", WM8960_POWER3, 4, 0, 324 324 wm8960_rin, ARRAY_SIZE(wm8960_rin)), 325 325 326 - SND_SOC_DAPM_ADC("Left ADC", "Capture", WM8960_POWER2, 3, 0), 327 - SND_SOC_DAPM_ADC("Right ADC", "Capture", WM8960_POWER2, 2, 0), 326 + SND_SOC_DAPM_ADC("Left ADC", "Capture", WM8960_POWER1, 3, 0), 327 + SND_SOC_DAPM_ADC("Right ADC", "Capture", WM8960_POWER1, 2, 0), 328 328 329 329 SND_SOC_DAPM_DAC("Left DAC", "Playback", WM8960_POWER2, 8, 0), 330 330 SND_SOC_DAPM_DAC("Right DAC", "Playback", WM8960_POWER2, 7, 0),
+1 -1
sound/soc/tegra/tegra20_i2s.h
··· 121 121 122 122 #define TEGRA20_I2S_TIMING_NON_SYM_ENABLE (1 << 12) 123 123 #define TEGRA20_I2S_TIMING_CHANNEL_BIT_COUNT_SHIFT 0 124 - #define TEGRA20_I2S_TIMING_CHANNEL_BIT_COUNT_MASK_US 0x7fff 124 + #define TEGRA20_I2S_TIMING_CHANNEL_BIT_COUNT_MASK_US 0x7ff 125 125 #define TEGRA20_I2S_TIMING_CHANNEL_BIT_COUNT_MASK (TEGRA20_I2S_TIMING_CHANNEL_BIT_COUNT_MASK_US << TEGRA20_I2S_TIMING_CHANNEL_BIT_COUNT_SHIFT) 126 126 127 127 /* Fields in TEGRA20_I2S_FIFO_SCR */
+1 -1
sound/soc/tegra/tegra30_i2s.h
··· 110 110 111 111 #define TEGRA30_I2S_TIMING_NON_SYM_ENABLE (1 << 12) 112 112 #define TEGRA30_I2S_TIMING_CHANNEL_BIT_COUNT_SHIFT 0 113 - #define TEGRA30_I2S_TIMING_CHANNEL_BIT_COUNT_MASK_US 0x7fff 113 + #define TEGRA30_I2S_TIMING_CHANNEL_BIT_COUNT_MASK_US 0x7ff 114 114 #define TEGRA30_I2S_TIMING_CHANNEL_BIT_COUNT_MASK (TEGRA30_I2S_TIMING_CHANNEL_BIT_COUNT_MASK_US << TEGRA30_I2S_TIMING_CHANNEL_BIT_COUNT_SHIFT) 115 115 116 116 /* Fields in TEGRA30_I2S_OFFSET */
+59
tools/testing/selftests/efivarfs/efivarfs.sh
··· 125 125 ./open-unlink $file 126 126 } 127 127 128 + # test that we can create a range of filenames 129 + test_valid_filenames() 130 + { 131 + local attrs='\x07\x00\x00\x00' 132 + local ret=0 133 + 134 + local file_list="abc dump-type0-11-1-1362436005 1234 -" 135 + for f in $file_list; do 136 + local file=$efivarfs_mount/$f-$test_guid 137 + 138 + printf "$attrs\x00" > $file 139 + 140 + if [ ! -e $file ]; then 141 + echo "$file could not be created" >&2 142 + ret=1 143 + else 144 + rm $file 145 + fi 146 + done 147 + 148 + exit $ret 149 + } 150 + 151 + test_invalid_filenames() 152 + { 153 + local attrs='\x07\x00\x00\x00' 154 + local ret=0 155 + 156 + local file_list=" 157 + -1234-1234-1234-123456789abc 158 + foo 159 + foo-bar 160 + -foo- 161 + foo-barbazba-foob-foob-foob-foobarbazfoo 162 + foo------------------------------------- 163 + -12345678-1234-1234-1234-123456789abc 164 + a-12345678=1234-1234-1234-123456789abc 165 + a-12345678-1234=1234-1234-123456789abc 166 + a-12345678-1234-1234=1234-123456789abc 167 + a-12345678-1234-1234-1234=123456789abc 168 + 1112345678-1234-1234-1234-123456789abc" 169 + 170 + for f in $file_list; do 171 + local file=$efivarfs_mount/$f 172 + 173 + printf "$attrs\x00" 2>/dev/null > $file 174 + 175 + if [ -e $file ]; then 176 + echo "Creating $file should have failed" >&2 177 + rm $file 178 + ret=1 179 + fi 180 + done 181 + 182 + exit $ret 183 + } 184 + 128 185 check_prereqs 129 186 130 187 rc=0 ··· 192 135 run_test test_delete 193 136 run_test test_zero_size_delete 194 137 run_test test_open_unlink 138 + run_test test_valid_filenames 139 + run_test test_invalid_filenames 195 140 196 141 exit $rc