Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Conflicts:
net/ipv4/ip_gre.c

Minor conflicts between tunnel bug fixes in net and
ipv6 tunnel cleanups in net-next.

Signed-off-by: David S. Miller <davem@davemloft.net>

+2440 -1224
+4
.mailmap
··· 48 48 Felix Moeller <felix@derklecks.de> 49 49 Filipe Lautert <filipe@icewall.org> 50 50 Franck Bui-Huu <vagabon.xyz@gmail.com> 51 + Frank Rowand <frowand.list@gmail.com> <frowand@mvista.com> 52 + Frank Rowand <frowand.list@gmail.com> <frank.rowand@am.sony.com> 53 + Frank Rowand <frowand.list@gmail.com> <frank.rowand@sonymobile.com> 51 54 Frank Zago <fzago@systemfabricworks.com> 52 55 Greg Kroah-Hartman <greg@echidna.(none)> 53 56 Greg Kroah-Hartman <gregkh@suse.de> ··· 82 79 Kenneth W Chen <kenneth.w.chen@intel.com> 83 80 Konstantin Khlebnikov <koct9i@gmail.com> <k.khlebnikov@samsung.com> 84 81 Koushik <raghavendra.koushik@neterion.com> 82 + Krzysztof Kozlowski <krzk@kernel.org> <k.kozlowski.k@gmail.com> 85 83 Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> 86 84 Leonid I Ananiev <leonid.i.ananiev@intel.com> 87 85 Linas Vepstas <linas@austin.ibm.com>
+1 -1
Documentation/devicetree/bindings/arc/archs-pct.txt
··· 2 2 3 3 The ARC HS can be configured with a pipeline performance monitor for counting 4 4 CPU and cache events like cache misses and hits. Like conventional PCT there 5 - are 100+ hardware conditions dynamically mapped to upto 32 counters. 5 + are 100+ hardware conditions dynamically mapped to up to 32 counters. 6 6 It also supports overflow interrupts. 7 7 8 8 Required properties:
+1 -1
Documentation/devicetree/bindings/arc/pct.txt
··· 2 2 3 3 The ARC700 can be configured with a pipeline performance monitor for counting 4 4 CPU and cache events like cache misses and hits. Like conventional PCT there 5 - are 100+ hardware conditions dynamically mapped to upto 32 counters 5 + are 100+ hardware conditions dynamically mapped to up to 32 counters 6 6 7 7 Note that: 8 8 * The ARC 700 PCT does not support interrupts; although HW events may be
+2 -2
Documentation/devicetree/bindings/i2c/i2c-rk3x.txt
··· 6 6 Required properties : 7 7 8 8 - reg : Offset and length of the register set for the device 9 - - compatible : should be "rockchip,rk3066-i2c", "rockchip,rk3188-i2c" or 10 - "rockchip,rk3288-i2c". 9 + - compatible : should be "rockchip,rk3066-i2c", "rockchip,rk3188-i2c", 10 + "rockchip,rk3228-i2c" or "rockchip,rk3288-i2c". 11 11 - interrupts : interrupt number 12 12 - clocks : parent clock 13 13
+3 -3
Documentation/devicetree/bindings/net/cpsw.txt
··· 45 45 Optional properties: 46 46 - dual_emac_res_vlan : Specifies VID to be used to segregate the ports 47 47 - mac-address : See ethernet.txt file in the same directory 48 - - phy_id : Specifies slave phy id 48 + - phy_id : Specifies slave phy id (deprecated, use phy-handle) 49 49 - phy-handle : See ethernet.txt file in the same directory 50 50 51 51 Slave sub-nodes: 52 52 - fixed-link : See fixed-link.txt file in the same directory 53 - Either the property phy_id, or the sub-node 54 - fixed-link can be specified 53 + 54 + Note: Exactly one of phy_id, phy-handle, or fixed-link must be specified. 55 55 56 56 Note: "ti,hwmods" field is used to fetch the base address and irq 57 57 resources from TI, omap hwmod data base during device registration.
+3 -3
Documentation/networking/altera_tse.txt
··· 6 6 using the SGDMA and MSGDMA soft DMA IP components. The driver uses the 7 7 platform bus to obtain component resources. The designs used to test this 8 8 driver were built for a Cyclone(R) V SOC FPGA board, a Cyclone(R) V FPGA board, 9 - and tested with ARM and NIOS processor hosts seperately. The anticipated use 9 + and tested with ARM and NIOS processor hosts separately. The anticipated use 10 10 cases are simple communications between an embedded system and an external peer 11 11 for status and simple configuration of the embedded system. 12 12 ··· 65 65 4.1) Transmit process 66 66 When the driver's transmit routine is called by the kernel, it sets up a 67 67 transmit descriptor by calling the underlying DMA transmit routine (SGDMA or 68 - MSGDMA), and initites a transmit operation. Once the transmit is complete, an 68 + MSGDMA), and initiates a transmit operation. Once the transmit is complete, an 69 69 interrupt is driven by the transmit DMA logic. The driver handles the transmit 70 70 completion in the context of the interrupt handling chain by recycling 71 71 resource required to send and track the requested transmit operation. 72 72 73 73 4.2) Receive process 74 74 The driver will post receive buffers to the receive DMA logic during driver 75 - intialization. Receive buffers may or may not be queued depending upon the 75 + initialization. Receive buffers may or may not be queued depending upon the 76 76 underlying DMA logic (MSGDMA is able queue receive buffers, SGDMA is not able 77 77 to queue receive buffers to the SGDMA receive logic). When a packet is 78 78 received, the DMA logic generates an interrupt. The driver handles a receive
+3 -3
Documentation/networking/ipvlan.txt
··· 8 8 This is conceptually very similar to the macvlan driver with one major 9 9 exception of using L3 for mux-ing /demux-ing among slaves. This property makes 10 10 the master device share the L2 with it's slave devices. I have developed this 11 - driver in conjuntion with network namespaces and not sure if there is use case 11 + driver in conjunction with network namespaces and not sure if there is use case 12 12 outside of it. 13 13 14 14 ··· 42 42 as well. 43 43 44 44 4.2 L3 mode: 45 - In this mode TX processing upto L3 happens on the stack instance attached 45 + In this mode TX processing up to L3 happens on the stack instance attached 46 46 to the slave device and packets are switched to the stack instance of the 47 47 master device for the L2 processing and routing from that instance will be 48 48 used before packets are queued on the outbound device. In this mode the slaves ··· 56 56 (a) The Linux host that is connected to the external switch / router has 57 57 policy configured that allows only one mac per port. 58 58 (b) No of virtual devices created on a master exceed the mac capacity and 59 - puts the NIC in promiscous mode and degraded performance is a concern. 59 + puts the NIC in promiscuous mode and degraded performance is a concern. 60 60 (c) If the slave device is to be put into the hostile / untrusted network 61 61 namespace where L2 on the slave could be changed / misused. 62 62
+3 -3
Documentation/networking/pktgen.txt
··· 67 67 * add_device DEVICE@NAME -- adds a single device 68 68 * rem_device_all -- remove all associated devices 69 69 70 - When adding a device to a thread, a corrosponding procfile is created 70 + When adding a device to a thread, a corresponding procfile is created 71 71 which is used for configuring this device. Thus, device names need to 72 72 be unique. 73 73 74 74 To support adding the same device to multiple threads, which is useful 75 - with multi queue NICs, a the device naming scheme is extended with "@": 75 + with multi queue NICs, the device naming scheme is extended with "@": 76 76 device@something 77 77 78 78 The part after "@" can be anything, but it is custom to use the thread ··· 221 221 222 222 A collection of tutorial scripts and helpers for pktgen is in the 223 223 samples/pktgen directory. The helper parameters.sh file support easy 224 - and consistant parameter parsing across the sample scripts. 224 + and consistent parameter parsing across the sample scripts. 225 225 226 226 Usage example and help: 227 227 ./pktgen_sample01_simple.sh -i eth4 -m 00:1B:21:3C:9D:F8 -d 192.168.8.2
+1 -1
Documentation/networking/vrf.txt
··· 41 41 the VRF device. Similarly on egress routing rules are used to send packets 42 42 to the VRF device driver before getting sent out the actual interface. This 43 43 allows tcpdump on a VRF device to capture all packets into and out of the 44 - VRF as a whole.[1] Similiarly, netfilter [2] and tc rules can be applied 44 + VRF as a whole.[1] Similarly, netfilter [2] and tc rules can be applied 45 45 using the VRF device to specify rules that apply to the VRF domain as a whole. 46 46 47 47 [1] Packets in the forwarded state do not flow through the device, so those
+3 -3
Documentation/networking/xfrm_sync.txt
··· 4 4 from Jamal <hadi@cyberus.ca>. 5 5 6 6 The end goal for syncing is to be able to insert attributes + generate 7 - events so that the an SA can be safely moved from one machine to another 7 + events so that the SA can be safely moved from one machine to another 8 8 for HA purposes. 9 9 The idea is to synchronize the SA so that the takeover machine can do 10 10 the processing of the SA as accurate as possible if it has access to it. ··· 13 13 These patches add ability to sync and have accurate lifetime byte (to 14 14 ensure proper decay of SAs) and replay counters to avoid replay attacks 15 15 with as minimal loss at failover time. 16 - This way a backup stays as closely uptodate as an active member. 16 + This way a backup stays as closely up-to-date as an active member. 17 17 18 18 Because the above items change for every packet the SA receives, 19 19 it is possible for a lot of the events to be generated. ··· 163 163 there is a period where the timer threshold expires with no packets 164 164 seen, then an odd behavior is seen as follows: 165 165 The first packet arrival after a timer expiry will trigger a timeout 166 - aevent; i.e we dont wait for a timeout period or a packet threshold 166 + event; i.e we don't wait for a timeout period or a packet threshold 167 167 to be reached. This is done for simplicity and efficiency reasons. 168 168 169 169 -JHS
+9 -8
Documentation/sysctl/vm.txt
··· 581 581 "Zone Order" orders the zonelists by zone type, then by node within each 582 582 zone. Specify "[Zz]one" for zone order. 583 583 584 - Specify "[Dd]efault" to request automatic configuration. Autoconfiguration 585 - will select "node" order in following case. 586 - (1) if the DMA zone does not exist or 587 - (2) if the DMA zone comprises greater than 50% of the available memory or 588 - (3) if any node's DMA zone comprises greater than 70% of its local memory and 589 - the amount of local memory is big enough. 584 + Specify "[Dd]efault" to request automatic configuration. 590 585 591 - Otherwise, "zone" order will be selected. Default order is recommended unless 592 - this is causing problems for your system/application. 586 + On 32-bit, the Normal zone needs to be preserved for allocations accessible 587 + by the kernel, so "zone" order will be selected. 588 + 589 + On 64-bit, devices that require DMA32/DMA are relatively rare, so "node" 590 + order will be selected. 591 + 592 + Default order is recommended unless this is causing problems for your 593 + system/application. 593 594 594 595 ============================================================== 595 596
+7 -6
MAINTAINERS
··· 4745 4745 4746 4746 FUSE: FILESYSTEM IN USERSPACE 4747 4747 M: Miklos Szeredi <miklos@szeredi.hu> 4748 - L: fuse-devel@lists.sourceforge.net 4748 + L: linux-fsdevel@vger.kernel.org 4749 4749 W: http://fuse.sourceforge.net/ 4750 4750 T: git git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse.git 4751 4751 S: Maintained ··· 4904 4904 F: include/net/gre.h 4905 4905 4906 4906 GRETH 10/100/1G Ethernet MAC device driver 4907 - M: Kristoffer Glembo <kristoffer@gaisler.com> 4907 + M: Andreas Larsson <andreas@gaisler.com> 4908 4908 L: netdev@vger.kernel.org 4909 4909 S: Maintained 4910 4910 F: drivers/net/ethernet/aeroflex/ ··· 6028 6028 6029 6029 ISCSI EXTENSIONS FOR RDMA (ISER) INITIATOR 6030 6030 M: Or Gerlitz <ogerlitz@mellanox.com> 6031 - M: Sagi Grimberg <sagig@mellanox.com> 6031 + M: Sagi Grimberg <sagi@grimberg.me> 6032 6032 M: Roi Dayan <roid@mellanox.com> 6033 6033 L: linux-rdma@vger.kernel.org 6034 6034 S: Supported ··· 6038 6038 F: drivers/infiniband/ulp/iser/ 6039 6039 6040 6040 ISCSI EXTENSIONS FOR RDMA (ISER) TARGET 6041 - M: Sagi Grimberg <sagig@mellanox.com> 6041 + M: Sagi Grimberg <sagi@grimberg.me> 6042 6042 T: git git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending.git master 6043 6043 L: linux-rdma@vger.kernel.org 6044 6044 L: target-devel@vger.kernel.org ··· 6401 6401 F: mm/kmemleak-test.c 6402 6402 6403 6403 KPROBES 6404 - M: Ananth N Mavinakayanahalli <ananth@in.ibm.com> 6404 + M: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com> 6405 6405 M: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> 6406 6406 M: "David S. Miller" <davem@davemloft.net> 6407 6407 M: Masami Hiramatsu <mhiramat@kernel.org> ··· 10015 10015 10016 10016 SFC NETWORK DRIVER 10017 10017 M: Solarflare linux maintainers <linux-net-drivers@solarflare.com> 10018 - M: Shradha Shah <sshah@solarflare.com> 10018 + M: Edward Cree <ecree@solarflare.com> 10019 + M: Bert Kenward <bkenward@solarflare.com> 10019 10020 L: netdev@vger.kernel.org 10020 10021 S: Supported 10021 10022 F: drivers/net/ethernet/sfc/
+2 -2
Makefile
··· 1 1 VERSION = 4 2 2 PATCHLEVEL = 6 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc5 5 - NAME = Blurry Fish Butt 4 + EXTRAVERSION = -rc6 5 + NAME = Charred Weasel 6 6 7 7 # *DOCUMENTATION* 8 8 # To see a list of typical targets execute "make help"
+2
arch/arc/Kconfig
··· 35 35 select NO_BOOTMEM 36 36 select OF 37 37 select OF_EARLY_FLATTREE 38 + select OF_RESERVED_MEM 38 39 select PERF_USE_VMALLOC 39 40 select HAVE_DEBUG_STACKOVERFLOW 41 + select HAVE_GENERIC_DMA_COHERENT 40 42 41 43 config MIGHT_HAVE_PCI 42 44 bool
+35 -1
arch/arc/include/asm/irqflags-arcv2.h
··· 18 18 #define STATUS_AD_MASK (1<<STATUS_AD_BIT) 19 19 #define STATUS_IE_MASK (1<<STATUS_IE_BIT) 20 20 21 + /* status32 Bits as encoded/expected by CLRI/SETI */ 22 + #define CLRI_STATUS_IE_BIT 4 23 + 24 + #define CLRI_STATUS_E_MASK 0xF 25 + #define CLRI_STATUS_IE_MASK (1 << CLRI_STATUS_IE_BIT) 26 + 21 27 #define AUX_USER_SP 0x00D 22 28 #define AUX_IRQ_CTRL 0x00E 23 29 #define AUX_IRQ_ACT 0x043 /* Active Intr across all levels */ ··· 106 100 : 107 101 : "memory"); 108 102 103 + /* To be compatible with irq_save()/irq_restore() 104 + * encode the irq bits as expected by CLRI/SETI 105 + * (this was needed to make CONFIG_TRACE_IRQFLAGS work) 106 + */ 107 + temp = (1 << 5) | 108 + ((!!(temp & STATUS_IE_MASK)) << CLRI_STATUS_IE_BIT) | 109 + (temp & CLRI_STATUS_E_MASK); 109 110 return temp; 110 111 } 111 112 ··· 121 108 */ 122 109 static inline int arch_irqs_disabled_flags(unsigned long flags) 123 110 { 124 - return !(flags & (STATUS_IE_MASK)); 111 + return !(flags & CLRI_STATUS_IE_MASK); 125 112 } 126 113 127 114 static inline int arch_irqs_disabled(void) ··· 141 128 142 129 #else 143 130 131 + #ifdef CONFIG_TRACE_IRQFLAGS 132 + 133 + .macro TRACE_ASM_IRQ_DISABLE 134 + bl trace_hardirqs_off 135 + .endm 136 + 137 + .macro TRACE_ASM_IRQ_ENABLE 138 + bl trace_hardirqs_on 139 + .endm 140 + 141 + #else 142 + 143 + .macro TRACE_ASM_IRQ_DISABLE 144 + .endm 145 + 146 + .macro TRACE_ASM_IRQ_ENABLE 147 + .endm 148 + 149 + #endif 144 150 .macro IRQ_DISABLE scratch 145 151 clri 152 + TRACE_ASM_IRQ_DISABLE 146 153 .endm 147 154 148 155 .macro IRQ_ENABLE scratch 156 + TRACE_ASM_IRQ_ENABLE 149 157 seti 150 158 .endm 151 159
+9 -1
arch/arc/kernel/entry-arcv2.S
··· 69 69 70 70 clri ; To make status32.IE agree with CPU internal state 71 71 72 - lr r0, [ICAUSE] 72 + #ifdef CONFIG_TRACE_IRQFLAGS 73 + TRACE_ASM_IRQ_DISABLE 74 + #endif 73 75 76 + lr r0, [ICAUSE] 74 77 mov blink, ret_from_exception 75 78 76 79 b.d arch_do_IRQ ··· 171 168 ; All 2 entry points to here already disable interrupts 172 169 173 170 .Lrestore_regs: 171 + 172 + # Interrupts are actually disabled from this point on, but will get 173 + # reenabled after we return from interrupt/exception. 174 + # But irq tracer needs to be told now... 175 + TRACE_ASM_IRQ_ENABLE 174 176 175 177 ld r0, [sp, PT_status32] ; U/K mode at time of entry 176 178 lr r10, [AUX_IRQ_ACT]
+3
arch/arc/kernel/entry-compact.S
··· 341 341 342 342 .Lrestore_regs: 343 343 344 + # Interrupts are actually disabled from this point on, but will get 345 + # reenabled after we return from interrupt/exception. 346 + # But irq tracer needs to be told now... 344 347 TRACE_ASM_IRQ_ENABLE 345 348 346 349 lr r10, [status32]
+4
arch/arc/mm/init.c
··· 13 13 #ifdef CONFIG_BLK_DEV_INITRD 14 14 #include <linux/initrd.h> 15 15 #endif 16 + #include <linux/of_fdt.h> 16 17 #include <linux/swap.h> 17 18 #include <linux/module.h> 18 19 #include <linux/highmem.h> ··· 136 135 if (initrd_start) 137 136 memblock_reserve(__pa(initrd_start), initrd_end - initrd_start); 138 137 #endif 138 + 139 + early_init_fdt_reserve_self(); 140 + early_init_fdt_scan_reserved_mem(); 139 141 140 142 memblock_dump_all(); 141 143
+1 -1
arch/nios2/lib/memset.c
··· 68 68 "=r" (charcnt), /* %1 Output */ 69 69 "=r" (dwordcnt), /* %2 Output */ 70 70 "=r" (fill8reg), /* %3 Output */ 71 - "=r" (wrkrega) /* %4 Output */ 71 + "=&r" (wrkrega) /* %4 Output only */ 72 72 : "r" (c), /* %5 Input */ 73 73 "0" (s), /* %0 Input/Output */ 74 74 "1" (count) /* %1 Input/Output */
+2
arch/powerpc/include/asm/systbl.h
··· 384 384 SYSCALL(ni_syscall) 385 385 SYSCALL(mlock2) 386 386 SYSCALL(copy_file_range) 387 + COMPAT_SYS_SPU(preadv2) 388 + COMPAT_SYS_SPU(pwritev2)
+1 -1
arch/powerpc/include/asm/unistd.h
··· 12 12 #include <uapi/asm/unistd.h> 13 13 14 14 15 - #define NR_syscalls 380 15 + #define NR_syscalls 382 16 16 17 17 #define __NR__exit __NR_exit 18 18
+2
arch/powerpc/include/uapi/asm/unistd.h
··· 390 390 #define __NR_membarrier 365 391 391 #define __NR_mlock2 378 392 392 #define __NR_copy_file_range 379 393 + #define __NR_preadv2 380 394 + #define __NR_pwritev2 381 393 395 394 396 #endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */
+1 -1
arch/s390/include/asm/mmu.h
··· 11 11 spinlock_t list_lock; 12 12 struct list_head pgtable_list; 13 13 struct list_head gmap_list; 14 - unsigned long asce_bits; 14 + unsigned long asce; 15 15 unsigned long asce_limit; 16 16 unsigned long vdso_base; 17 17 /* The mmu context allocates 4K page tables. */
+22 -6
arch/s390/include/asm/mmu_context.h
··· 26 26 mm->context.has_pgste = 0; 27 27 mm->context.use_skey = 0; 28 28 #endif 29 - if (mm->context.asce_limit == 0) { 29 + switch (mm->context.asce_limit) { 30 + case 1UL << 42: 31 + /* 32 + * forked 3-level task, fall through to set new asce with new 33 + * mm->pgd 34 + */ 35 + case 0: 30 36 /* context created by exec, set asce limit to 4TB */ 31 - mm->context.asce_bits = _ASCE_TABLE_LENGTH | 32 - _ASCE_USER_BITS | _ASCE_TYPE_REGION3; 33 37 mm->context.asce_limit = STACK_TOP_MAX; 34 - } else if (mm->context.asce_limit == (1UL << 31)) { 38 + mm->context.asce = __pa(mm->pgd) | _ASCE_TABLE_LENGTH | 39 + _ASCE_USER_BITS | _ASCE_TYPE_REGION3; 40 + break; 41 + case 1UL << 53: 42 + /* forked 4-level task, set new asce with new mm->pgd */ 43 + mm->context.asce = __pa(mm->pgd) | _ASCE_TABLE_LENGTH | 44 + _ASCE_USER_BITS | _ASCE_TYPE_REGION2; 45 + break; 46 + case 1UL << 31: 47 + /* forked 2-level compat task, set new asce with new mm->pgd */ 48 + mm->context.asce = __pa(mm->pgd) | _ASCE_TABLE_LENGTH | 49 + _ASCE_USER_BITS | _ASCE_TYPE_SEGMENT; 50 + /* pgd_alloc() did not increase mm->nr_pmds */ 35 51 mm_inc_nr_pmds(mm); 36 52 } 37 53 crst_table_init((unsigned long *) mm->pgd, pgd_entry_type(mm)); ··· 58 42 59 43 static inline void set_user_asce(struct mm_struct *mm) 60 44 { 61 - S390_lowcore.user_asce = mm->context.asce_bits | __pa(mm->pgd); 45 + S390_lowcore.user_asce = mm->context.asce; 62 46 if (current->thread.mm_segment.ar4) 63 47 __ctl_load(S390_lowcore.user_asce, 7, 7); 64 48 set_cpu_flag(CIF_ASCE); ··· 87 71 { 88 72 int cpu = smp_processor_id(); 89 73 90 - S390_lowcore.user_asce = next->context.asce_bits | __pa(next->pgd); 74 + S390_lowcore.user_asce = next->context.asce; 91 75 if (prev == next) 92 76 return; 93 77 if (MACHINE_HAS_TLB_LC)
+2 -2
arch/s390/include/asm/pgalloc.h
··· 52 52 return _REGION2_ENTRY_EMPTY; 53 53 } 54 54 55 - int crst_table_upgrade(struct mm_struct *, unsigned long limit); 56 - void crst_table_downgrade(struct mm_struct *, unsigned long limit); 55 + int crst_table_upgrade(struct mm_struct *); 56 + void crst_table_downgrade(struct mm_struct *); 57 57 58 58 static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long address) 59 59 {
+1 -1
arch/s390/include/asm/processor.h
··· 175 175 regs->psw.mask = PSW_USER_BITS | PSW_MASK_BA; \ 176 176 regs->psw.addr = new_psw; \ 177 177 regs->gprs[15] = new_stackp; \ 178 - crst_table_downgrade(current->mm, 1UL << 31); \ 178 + crst_table_downgrade(current->mm); \ 179 179 execve_tail(); \ 180 180 } while (0) 181 181
+3 -6
arch/s390/include/asm/tlbflush.h
··· 110 110 static inline void __tlb_flush_kernel(void) 111 111 { 112 112 if (MACHINE_HAS_IDTE) 113 - __tlb_flush_idte((unsigned long) init_mm.pgd | 114 - init_mm.context.asce_bits); 113 + __tlb_flush_idte(init_mm.context.asce); 115 114 else 116 115 __tlb_flush_global(); 117 116 } ··· 132 133 static inline void __tlb_flush_kernel(void) 133 134 { 134 135 if (MACHINE_HAS_TLB_LC) 135 - __tlb_flush_idte_local((unsigned long) init_mm.pgd | 136 - init_mm.context.asce_bits); 136 + __tlb_flush_idte_local(init_mm.context.asce); 137 137 else 138 138 __tlb_flush_local(); 139 139 } ··· 146 148 * only ran on the local cpu. 147 149 */ 148 150 if (MACHINE_HAS_IDTE && list_empty(&mm->context.gmap_list)) 149 - __tlb_flush_asce(mm, (unsigned long) mm->pgd | 150 - mm->context.asce_bits); 151 + __tlb_flush_asce(mm, mm->context.asce); 151 152 else 152 153 __tlb_flush_full(mm); 153 154 }
+2 -1
arch/s390/mm/init.c
··· 89 89 asce_bits = _ASCE_TYPE_REGION3 | _ASCE_TABLE_LENGTH; 90 90 pgd_type = _REGION3_ENTRY_EMPTY; 91 91 } 92 - S390_lowcore.kernel_asce = (__pa(init_mm.pgd) & PAGE_MASK) | asce_bits; 92 + init_mm.context.asce = (__pa(init_mm.pgd) & PAGE_MASK) | asce_bits; 93 + S390_lowcore.kernel_asce = init_mm.context.asce; 93 94 clear_table((unsigned long *) init_mm.pgd, pgd_type, 94 95 sizeof(unsigned long)*2048); 95 96 vmem_map_init();
+3 -3
arch/s390/mm/mmap.c
··· 174 174 if (!(flags & MAP_FIXED)) 175 175 addr = 0; 176 176 if ((addr + len) >= TASK_SIZE) 177 - return crst_table_upgrade(current->mm, TASK_MAX_SIZE); 177 + return crst_table_upgrade(current->mm); 178 178 return 0; 179 179 } 180 180 ··· 191 191 return area; 192 192 if (area == -ENOMEM && !is_compat_task() && TASK_SIZE < TASK_MAX_SIZE) { 193 193 /* Upgrade the page table to 4 levels and retry. */ 194 - rc = crst_table_upgrade(mm, TASK_MAX_SIZE); 194 + rc = crst_table_upgrade(mm); 195 195 if (rc) 196 196 return (unsigned long) rc; 197 197 area = arch_get_unmapped_area(filp, addr, len, pgoff, flags); ··· 213 213 return area; 214 214 if (area == -ENOMEM && !is_compat_task() && TASK_SIZE < TASK_MAX_SIZE) { 215 215 /* Upgrade the page table to 4 levels and retry. */ 216 - rc = crst_table_upgrade(mm, TASK_MAX_SIZE); 216 + rc = crst_table_upgrade(mm); 217 217 if (rc) 218 218 return (unsigned long) rc; 219 219 area = arch_get_unmapped_area_topdown(filp, addr, len,
+28 -57
arch/s390/mm/pgalloc.c
··· 76 76 __tlb_flush_local(); 77 77 } 78 78 79 - int crst_table_upgrade(struct mm_struct *mm, unsigned long limit) 79 + int crst_table_upgrade(struct mm_struct *mm) 80 80 { 81 81 unsigned long *table, *pgd; 82 - unsigned long entry; 83 - int flush; 84 82 85 - BUG_ON(limit > TASK_MAX_SIZE); 86 - flush = 0; 87 - repeat: 83 + /* upgrade should only happen from 3 to 4 levels */ 84 + BUG_ON(mm->context.asce_limit != (1UL << 42)); 85 + 88 86 table = crst_table_alloc(mm); 89 87 if (!table) 90 88 return -ENOMEM; 89 + 91 90 spin_lock_bh(&mm->page_table_lock); 92 - if (mm->context.asce_limit < limit) { 93 - pgd = (unsigned long *) mm->pgd; 94 - if (mm->context.asce_limit <= (1UL << 31)) { 95 - entry = _REGION3_ENTRY_EMPTY; 96 - mm->context.asce_limit = 1UL << 42; 97 - mm->context.asce_bits = _ASCE_TABLE_LENGTH | 98 - _ASCE_USER_BITS | 99 - _ASCE_TYPE_REGION3; 100 - } else { 101 - entry = _REGION2_ENTRY_EMPTY; 102 - mm->context.asce_limit = 1UL << 53; 103 - mm->context.asce_bits = _ASCE_TABLE_LENGTH | 104 - _ASCE_USER_BITS | 105 - _ASCE_TYPE_REGION2; 106 - } 107 - crst_table_init(table, entry); 108 - pgd_populate(mm, (pgd_t *) table, (pud_t *) pgd); 109 - mm->pgd = (pgd_t *) table; 110 - mm->task_size = mm->context.asce_limit; 111 - table = NULL; 112 - flush = 1; 113 - } 91 + pgd = (unsigned long *) mm->pgd; 92 + crst_table_init(table, _REGION2_ENTRY_EMPTY); 93 + pgd_populate(mm, (pgd_t *) table, (pud_t *) pgd); 94 + mm->pgd = (pgd_t *) table; 95 + mm->context.asce_limit = 1UL << 53; 96 + mm->context.asce = __pa(mm->pgd) | _ASCE_TABLE_LENGTH | 97 + _ASCE_USER_BITS | _ASCE_TYPE_REGION2; 98 + mm->task_size = mm->context.asce_limit; 114 99 spin_unlock_bh(&mm->page_table_lock); 115 - if (table) 116 - crst_table_free(mm, table); 117 - if (mm->context.asce_limit < limit) 118 - goto repeat; 119 - if (flush) 120 - on_each_cpu(__crst_table_upgrade, mm, 0); 100 + 101 + on_each_cpu(__crst_table_upgrade, mm, 0); 121 102 return 0; 122 103 } 123 104 124 - void crst_table_downgrade(struct mm_struct *mm, unsigned long limit) 105 + void crst_table_downgrade(struct mm_struct *mm) 125 106 { 126 107 pgd_t *pgd; 108 + 109 + /* downgrade should only happen from 3 to 2 levels (compat only) */ 110 + BUG_ON(mm->context.asce_limit != (1UL << 42)); 127 111 128 112 if (current->active_mm == mm) { 129 113 clear_user_asce(); 130 114 __tlb_flush_mm(mm); 131 115 } 132 - while (mm->context.asce_limit > limit) { 133 - pgd = mm->pgd; 134 - switch (pgd_val(*pgd) & _REGION_ENTRY_TYPE_MASK) { 135 - case _REGION_ENTRY_TYPE_R2: 136 - mm->context.asce_limit = 1UL << 42; 137 - mm->context.asce_bits = _ASCE_TABLE_LENGTH | 138 - _ASCE_USER_BITS | 139 - _ASCE_TYPE_REGION3; 140 - break; 141 - case _REGION_ENTRY_TYPE_R3: 142 - mm->context.asce_limit = 1UL << 31; 143 - mm->context.asce_bits = _ASCE_TABLE_LENGTH | 144 - _ASCE_USER_BITS | 145 - _ASCE_TYPE_SEGMENT; 146 - break; 147 - default: 148 - BUG(); 149 - } 150 - mm->pgd = (pgd_t *) (pgd_val(*pgd) & _REGION_ENTRY_ORIGIN); 151 - mm->task_size = mm->context.asce_limit; 152 - crst_table_free(mm, (unsigned long *) pgd); 153 - } 116 + 117 + pgd = mm->pgd; 118 + mm->pgd = (pgd_t *) (pgd_val(*pgd) & _REGION_ENTRY_ORIGIN); 119 + mm->context.asce_limit = 1UL << 31; 120 + mm->context.asce = __pa(mm->pgd) | _ASCE_TABLE_LENGTH | 121 + _ASCE_USER_BITS | _ASCE_TYPE_SEGMENT; 122 + mm->task_size = mm->context.asce_limit; 123 + crst_table_free(mm, (unsigned long *) pgd); 124 + 154 125 if (current->active_mm == mm) 155 126 set_user_asce(mm); 156 127 }
+10 -6
arch/s390/pci/pci_dma.c
··· 457 457 zdev->dma_table = dma_alloc_cpu_table(); 458 458 if (!zdev->dma_table) { 459 459 rc = -ENOMEM; 460 - goto out_clean; 460 + goto out; 461 461 } 462 462 463 463 /* ··· 477 477 zdev->iommu_bitmap = vzalloc(zdev->iommu_pages / 8); 478 478 if (!zdev->iommu_bitmap) { 479 479 rc = -ENOMEM; 480 - goto out_reg; 480 + goto free_dma_table; 481 481 } 482 482 483 483 rc = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma, 484 484 (u64) zdev->dma_table); 485 485 if (rc) 486 - goto out_reg; 487 - return 0; 486 + goto free_bitmap; 488 487 489 - out_reg: 488 + return 0; 489 + free_bitmap: 490 + vfree(zdev->iommu_bitmap); 491 + zdev->iommu_bitmap = NULL; 492 + free_dma_table: 490 493 dma_free_cpu_table(zdev->dma_table); 491 - out_clean: 494 + zdev->dma_table = NULL; 495 + out: 492 496 return rc; 493 497 } 494 498
-1
arch/sparc/configs/sparc32_defconfig
··· 24 24 CONFIG_INET_ESP=y 25 25 CONFIG_INET_IPCOMP=y 26 26 # CONFIG_INET_LRO is not set 27 - CONFIG_IPV6_PRIVACY=y 28 27 CONFIG_INET6_AH=m 29 28 CONFIG_INET6_ESP=m 30 29 CONFIG_INET6_IPCOMP=m
-1
arch/sparc/configs/sparc64_defconfig
··· 48 48 CONFIG_INET_AH=y 49 49 CONFIG_INET_ESP=y 50 50 CONFIG_INET_IPCOMP=y 51 - CONFIG_IPV6_PRIVACY=y 52 51 CONFIG_IPV6_ROUTER_PREF=y 53 52 CONFIG_IPV6_ROUTE_INFO=y 54 53 CONFIG_IPV6_OPTIMISTIC_DAD=y
+1
arch/sparc/include/asm/spitfire.h
··· 48 48 #define SUN4V_CHIP_SPARC_M6 0x06 49 49 #define SUN4V_CHIP_SPARC_M7 0x07 50 50 #define SUN4V_CHIP_SPARC64X 0x8a 51 + #define SUN4V_CHIP_SPARC_SN 0x8b 51 52 #define SUN4V_CHIP_UNKNOWN 0xff 52 53 53 54 #ifndef __ASSEMBLY__
+3 -1
arch/sparc/include/uapi/asm/unistd.h
··· 423 423 #define __NR_setsockopt 355 424 424 #define __NR_mlock2 356 425 425 #define __NR_copy_file_range 357 426 + #define __NR_preadv2 358 427 + #define __NR_pwritev2 359 426 428 427 - #define NR_syscalls 358 429 + #define NR_syscalls 360 428 430 429 431 /* Bitmask values returned from kern_features system call. */ 430 432 #define KERN_FEATURE_MIXED_MODE_STACK 0x00000001
+5 -9
arch/sparc/kernel/cherrs.S
··· 214 214 subcc %g1, %g2, %g1 ! Next cacheline 215 215 bge,pt %icc, 1b 216 216 nop 217 - ba,pt %xcc, dcpe_icpe_tl1_common 218 - nop 217 + ba,a,pt %xcc, dcpe_icpe_tl1_common 219 218 220 219 do_dcpe_tl1_fatal: 221 220 sethi %hi(1f), %g7 ··· 223 224 mov 0x2, %o0 224 225 call cheetah_plus_parity_error 225 226 add %sp, PTREGS_OFF, %o1 226 - ba,pt %xcc, rtrap 227 - nop 227 + ba,a,pt %xcc, rtrap 228 228 .size do_dcpe_tl1,.-do_dcpe_tl1 229 229 230 230 .globl do_icpe_tl1 ··· 257 259 subcc %g1, %g2, %g1 258 260 bge,pt %icc, 1b 259 261 nop 260 - ba,pt %xcc, dcpe_icpe_tl1_common 261 - nop 262 + ba,a,pt %xcc, dcpe_icpe_tl1_common 262 263 263 264 do_icpe_tl1_fatal: 264 265 sethi %hi(1f), %g7 ··· 266 269 mov 0x3, %o0 267 270 call cheetah_plus_parity_error 268 271 add %sp, PTREGS_OFF, %o1 269 - ba,pt %xcc, rtrap 270 - nop 272 + ba,a,pt %xcc, rtrap 271 273 .size do_icpe_tl1,.-do_icpe_tl1 272 274 273 275 .type dcpe_icpe_tl1_common,#function ··· 452 456 cmp %g2, 0x63 453 457 be c_cee 454 458 nop 455 - ba,pt %xcc, c_deferred 459 + ba,a,pt %xcc, c_deferred 456 460 .size __cheetah_log_error,.-__cheetah_log_error 457 461 458 462 /* Cheetah FECC trap handling, we get here from tl{0,1}_fecc
+6
arch/sparc/kernel/cpu.c
··· 506 506 sparc_pmu_type = "sparc-m7"; 507 507 break; 508 508 509 + case SUN4V_CHIP_SPARC_SN: 510 + sparc_cpu_type = "SPARC-SN"; 511 + sparc_fpu_type = "SPARC-SN integrated FPU"; 512 + sparc_pmu_type = "sparc-sn"; 513 + break; 514 + 509 515 case SUN4V_CHIP_SPARC64X: 510 516 sparc_cpu_type = "SPARC64-X"; 511 517 sparc_fpu_type = "SPARC64-X integrated FPU";
+1
arch/sparc/kernel/cpumap.c
··· 328 328 case SUN4V_CHIP_NIAGARA5: 329 329 case SUN4V_CHIP_SPARC_M6: 330 330 case SUN4V_CHIP_SPARC_M7: 331 + case SUN4V_CHIP_SPARC_SN: 331 332 case SUN4V_CHIP_SPARC64X: 332 333 rover_inc_table = niagara_iterate_method; 333 334 break;
+5 -6
arch/sparc/kernel/fpu_traps.S
··· 100 100 fmuld %f0, %f2, %f26 101 101 faddd %f0, %f2, %f28 102 102 fmuld %f0, %f2, %f30 103 - b,pt %xcc, fpdis_exit 104 - nop 103 + ba,a,pt %xcc, fpdis_exit 104 + 105 105 2: andcc %g5, FPRS_DU, %g0 106 106 bne,pt %icc, 3f 107 107 fzero %f32 ··· 144 144 fmuld %f32, %f34, %f58 145 145 faddd %f32, %f34, %f60 146 146 fmuld %f32, %f34, %f62 147 - ba,pt %xcc, fpdis_exit 148 - nop 147 + ba,a,pt %xcc, fpdis_exit 148 + 149 149 3: mov SECONDARY_CONTEXT, %g3 150 150 add %g6, TI_FPREGS, %g1 151 151 ··· 197 197 fp_other_bounce: 198 198 call do_fpother 199 199 add %sp, PTREGS_OFF, %o0 200 - ba,pt %xcc, rtrap 201 - nop 200 + ba,a,pt %xcc, rtrap 202 201 .size fp_other_bounce,.-fp_other_bounce 203 202 204 203 .align 32
+16 -16
arch/sparc/kernel/head_64.S
··· 414 414 cmp %g2, 'T' 415 415 be,pt %xcc, 70f 416 416 cmp %g2, 'M' 417 + be,pt %xcc, 70f 418 + cmp %g2, 'S' 417 419 bne,pn %xcc, 49f 418 420 nop 419 421 ··· 435 433 cmp %g2, '7' 436 434 be,pt %xcc, 5f 437 435 mov SUN4V_CHIP_SPARC_M7, %g4 436 + cmp %g2, 'N' 437 + be,pt %xcc, 5f 438 + mov SUN4V_CHIP_SPARC_SN, %g4 438 439 ba,pt %xcc, 49f 439 440 nop 440 441 ··· 466 461 subcc %g3, 1, %g3 467 462 bne,pt %xcc, 41b 468 463 add %g1, 1, %g1 469 - mov SUN4V_CHIP_SPARC64X, %g4 470 464 ba,pt %xcc, 5f 471 - nop 465 + mov SUN4V_CHIP_SPARC64X, %g4 472 466 473 467 49: 474 468 mov SUN4V_CHIP_UNKNOWN, %g4 ··· 552 548 stxa %g0, [%g7] ASI_DMMU 553 549 membar #Sync 554 550 555 - ba,pt %xcc, sun4u_continue 556 - nop 551 + ba,a,pt %xcc, sun4u_continue 557 552 558 553 sun4v_init: 559 554 /* Set ctx 0 */ ··· 563 560 mov SECONDARY_CONTEXT, %g7 564 561 stxa %g0, [%g7] ASI_MMU 565 562 membar #Sync 566 - ba,pt %xcc, niagara_tlb_fixup 567 - nop 563 + ba,a,pt %xcc, niagara_tlb_fixup 568 564 569 565 sun4u_continue: 570 566 BRANCH_IF_ANY_CHEETAH(g1, g7, cheetah_tlb_fixup) 571 567 572 - ba,pt %xcc, spitfire_tlb_fixup 573 - nop 568 + ba,a,pt %xcc, spitfire_tlb_fixup 574 569 575 570 niagara_tlb_fixup: 576 571 mov 3, %g2 /* Set TLB type to hypervisor. */ ··· 596 595 be,pt %xcc, niagara4_patch 597 596 nop 598 597 cmp %g1, SUN4V_CHIP_SPARC_M7 598 + be,pt %xcc, niagara4_patch 599 + nop 600 + cmp %g1, SUN4V_CHIP_SPARC_SN 599 601 be,pt %xcc, niagara4_patch 600 602 nop 601 603 ··· 643 639 call hypervisor_patch_cachetlbops 644 640 nop 645 641 646 - ba,pt %xcc, tlb_fixup_done 647 - nop 642 + ba,a,pt %xcc, tlb_fixup_done 648 643 649 644 cheetah_tlb_fixup: 650 645 mov 2, %g2 /* Set TLB type to cheetah+. */ ··· 662 659 call cheetah_patch_cachetlbops 663 660 nop 664 661 665 - ba,pt %xcc, tlb_fixup_done 666 - nop 662 + ba,a,pt %xcc, tlb_fixup_done 667 663 668 664 spitfire_tlb_fixup: 669 665 /* Set TLB type to spitfire. */ ··· 776 774 call %o1 777 775 add %sp, (2047 + 128), %o0 778 776 779 - ba,pt %xcc, 2f 780 - nop 777 + ba,a,pt %xcc, 2f 781 778 782 779 1: sethi %hi(sparc64_ttable_tl0), %o0 783 780 set prom_set_trap_table_name, %g2 ··· 815 814 816 815 BRANCH_IF_ANY_CHEETAH(o2, o3, 1f) 817 816 818 - ba,pt %xcc, 2f 819 - nop 817 + ba,a,pt %xcc, 2f 820 818 821 819 /* Disable STICK_INT interrupts. */ 822 820 1:
+4 -8
arch/sparc/kernel/misctrap.S
··· 18 18 109: or %g7, %lo(109b), %g7 19 19 call do_privact 20 20 add %sp, PTREGS_OFF, %o0 21 - ba,pt %xcc, rtrap 22 - nop 21 + ba,a,pt %xcc, rtrap 23 22 .size __do_privact,.-__do_privact 24 23 25 24 .type do_mna,#function ··· 45 46 mov %l5, %o2 46 47 call mem_address_unaligned 47 48 add %sp, PTREGS_OFF, %o0 48 - ba,pt %xcc, rtrap 49 - nop 49 + ba,a,pt %xcc, rtrap 50 50 .size do_mna,.-do_mna 51 51 52 52 .type do_lddfmna,#function ··· 63 65 mov %l5, %o2 64 66 call handle_lddfmna 65 67 add %sp, PTREGS_OFF, %o0 66 - ba,pt %xcc, rtrap 67 - nop 68 + ba,a,pt %xcc, rtrap 68 69 .size do_lddfmna,.-do_lddfmna 69 70 70 71 .type do_stdfmna,#function ··· 81 84 mov %l5, %o2 82 85 call handle_stdfmna 83 86 add %sp, PTREGS_OFF, %o0 84 - ba,pt %xcc, rtrap 85 - nop 87 + ba,a,pt %xcc, rtrap 86 88 .size do_stdfmna,.-do_stdfmna 87 89 88 90 .type breakpoint_trap,#function
+36 -6
arch/sparc/kernel/pci.c
··· 245 245 } 246 246 } 247 247 248 + static void pci_init_dev_archdata(struct dev_archdata *sd, void *iommu, 249 + void *stc, void *host_controller, 250 + struct platform_device *op, 251 + int numa_node) 252 + { 253 + sd->iommu = iommu; 254 + sd->stc = stc; 255 + sd->host_controller = host_controller; 256 + sd->op = op; 257 + sd->numa_node = numa_node; 258 + } 259 + 248 260 static struct pci_dev *of_create_pci_dev(struct pci_pbm_info *pbm, 249 261 struct device_node *node, 250 262 struct pci_bus *bus, int devfn) ··· 271 259 if (!dev) 272 260 return NULL; 273 261 262 + op = of_find_device_by_node(node); 274 263 sd = &dev->dev.archdata; 275 - sd->iommu = pbm->iommu; 276 - sd->stc = &pbm->stc; 277 - sd->host_controller = pbm; 278 - sd->op = op = of_find_device_by_node(node); 279 - sd->numa_node = pbm->numa_node; 280 - 264 + pci_init_dev_archdata(sd, pbm->iommu, &pbm->stc, pbm, op, 265 + pbm->numa_node); 281 266 sd = &op->dev.archdata; 282 267 sd->iommu = pbm->iommu; 283 268 sd->stc = &pbm->stc; ··· 1002 993 { 1003 994 /* No special bus mastering setup handling */ 1004 995 } 996 + 997 + #ifdef CONFIG_PCI_IOV 998 + int pcibios_add_device(struct pci_dev *dev) 999 + { 1000 + struct pci_dev *pdev; 1001 + 1002 + /* Add sriov arch specific initialization here. 1003 + * Copy dev_archdata from PF to VF 1004 + */ 1005 + if (dev->is_virtfn) { 1006 + struct dev_archdata *psd; 1007 + 1008 + pdev = dev->physfn; 1009 + psd = &pdev->dev.archdata; 1010 + pci_init_dev_archdata(&dev->dev.archdata, psd->iommu, 1011 + psd->stc, psd->host_controller, NULL, 1012 + psd->numa_node); 1013 + } 1014 + return 0; 1015 + } 1016 + #endif /* CONFIG_PCI_IOV */ 1005 1017 1006 1018 static int __init pcibios_init(void) 1007 1019 {
+6 -1
arch/sparc/kernel/setup_64.c
··· 285 285 286 286 sun4v_patch_2insn_range(&__sun4v_2insn_patch, 287 287 &__sun4v_2insn_patch_end); 288 - if (sun4v_chip_type == SUN4V_CHIP_SPARC_M7) 288 + if (sun4v_chip_type == SUN4V_CHIP_SPARC_M7 || 289 + sun4v_chip_type == SUN4V_CHIP_SPARC_SN) 289 290 sun_m7_patch_2insn_range(&__sun_m7_2insn_patch, 290 291 &__sun_m7_2insn_patch_end); 291 292 ··· 525 524 sun4v_chip_type == SUN4V_CHIP_NIAGARA5 || 526 525 sun4v_chip_type == SUN4V_CHIP_SPARC_M6 || 527 526 sun4v_chip_type == SUN4V_CHIP_SPARC_M7 || 527 + sun4v_chip_type == SUN4V_CHIP_SPARC_SN || 528 528 sun4v_chip_type == SUN4V_CHIP_SPARC64X) 529 529 cap |= HWCAP_SPARC_BLKINIT; 530 530 if (sun4v_chip_type == SUN4V_CHIP_NIAGARA2 || ··· 534 532 sun4v_chip_type == SUN4V_CHIP_NIAGARA5 || 535 533 sun4v_chip_type == SUN4V_CHIP_SPARC_M6 || 536 534 sun4v_chip_type == SUN4V_CHIP_SPARC_M7 || 535 + sun4v_chip_type == SUN4V_CHIP_SPARC_SN || 537 536 sun4v_chip_type == SUN4V_CHIP_SPARC64X) 538 537 cap |= HWCAP_SPARC_N2; 539 538 } ··· 564 561 sun4v_chip_type == SUN4V_CHIP_NIAGARA5 || 565 562 sun4v_chip_type == SUN4V_CHIP_SPARC_M6 || 566 563 sun4v_chip_type == SUN4V_CHIP_SPARC_M7 || 564 + sun4v_chip_type == SUN4V_CHIP_SPARC_SN || 567 565 sun4v_chip_type == SUN4V_CHIP_SPARC64X) 568 566 cap |= (AV_SPARC_VIS | AV_SPARC_VIS2 | 569 567 AV_SPARC_ASI_BLK_INIT | ··· 574 570 sun4v_chip_type == SUN4V_CHIP_NIAGARA5 || 575 571 sun4v_chip_type == SUN4V_CHIP_SPARC_M6 || 576 572 sun4v_chip_type == SUN4V_CHIP_SPARC_M7 || 573 + sun4v_chip_type == SUN4V_CHIP_SPARC_SN || 577 574 sun4v_chip_type == SUN4V_CHIP_SPARC64X) 578 575 cap |= (AV_SPARC_VIS3 | AV_SPARC_HPC | 579 576 AV_SPARC_FMAF);
+6 -12
arch/sparc/kernel/spiterrs.S
··· 85 85 ba,pt %xcc, etraptl1 86 86 rd %pc, %g7 87 87 88 - ba,pt %xcc, 2f 89 - nop 88 + ba,a,pt %xcc, 2f 90 89 91 90 1: ba,pt %xcc, etrap_irq 92 91 rd %pc, %g7 ··· 99 100 mov %l5, %o2 100 101 call spitfire_access_error 101 102 add %sp, PTREGS_OFF, %o0 102 - ba,pt %xcc, rtrap 103 - nop 103 + ba,a,pt %xcc, rtrap 104 104 .size __spitfire_access_error,.-__spitfire_access_error 105 105 106 106 /* This is the trap handler entry point for ECC correctable ··· 177 179 mov %l5, %o2 178 180 call spitfire_data_access_exception_tl1 179 181 add %sp, PTREGS_OFF, %o0 180 - ba,pt %xcc, rtrap 181 - nop 182 + ba,a,pt %xcc, rtrap 182 183 .size __spitfire_data_access_exception_tl1,.-__spitfire_data_access_exception_tl1 183 184 184 185 .type __spitfire_data_access_exception,#function ··· 197 200 mov %l5, %o2 198 201 call spitfire_data_access_exception 199 202 add %sp, PTREGS_OFF, %o0 200 - ba,pt %xcc, rtrap 201 - nop 203 + ba,a,pt %xcc, rtrap 202 204 .size __spitfire_data_access_exception,.-__spitfire_data_access_exception 203 205 204 206 .type __spitfire_insn_access_exception_tl1,#function ··· 216 220 mov %l5, %o2 217 221 call spitfire_insn_access_exception_tl1 218 222 add %sp, PTREGS_OFF, %o0 219 - ba,pt %xcc, rtrap 220 - nop 223 + ba,a,pt %xcc, rtrap 221 224 .size __spitfire_insn_access_exception_tl1,.-__spitfire_insn_access_exception_tl1 222 225 223 226 .type __spitfire_insn_access_exception,#function ··· 235 240 mov %l5, %o2 236 241 call spitfire_insn_access_exception 237 242 add %sp, PTREGS_OFF, %o0 238 - ba,pt %xcc, rtrap 239 - nop 243 + ba,a,pt %xcc, rtrap 240 244 .size __spitfire_insn_access_exception,.-__spitfire_insn_access_exception
+1 -1
arch/sparc/kernel/systbls_32.S
··· 88 88 /*340*/ .long sys_ni_syscall, sys_kcmp, sys_finit_module, sys_sched_setattr, sys_sched_getattr 89 89 /*345*/ .long sys_renameat2, sys_seccomp, sys_getrandom, sys_memfd_create, sys_bpf 90 90 /*350*/ .long sys_execveat, sys_membarrier, sys_userfaultfd, sys_bind, sys_listen 91 - /*355*/ .long sys_setsockopt, sys_mlock2, sys_copy_file_range 91 + /*355*/ .long sys_setsockopt, sys_mlock2, sys_copy_file_range, sys_preadv2, sys_pwritev2
+2 -2
arch/sparc/kernel/systbls_64.S
··· 89 89 /*340*/ .word sys_kern_features, sys_kcmp, sys_finit_module, sys_sched_setattr, sys_sched_getattr 90 90 .word sys32_renameat2, sys_seccomp, sys_getrandom, sys_memfd_create, sys_bpf 91 91 /*350*/ .word sys32_execveat, sys_membarrier, sys_userfaultfd, sys_bind, sys_listen 92 - .word compat_sys_setsockopt, sys_mlock2, sys_copy_file_range 92 + .word compat_sys_setsockopt, sys_mlock2, sys_copy_file_range, compat_sys_preadv2, compat_sys_pwritev2 93 93 94 94 #endif /* CONFIG_COMPAT */ 95 95 ··· 170 170 /*340*/ .word sys_kern_features, sys_kcmp, sys_finit_module, sys_sched_setattr, sys_sched_getattr 171 171 .word sys_renameat2, sys_seccomp, sys_getrandom, sys_memfd_create, sys_bpf 172 172 /*350*/ .word sys64_execveat, sys_membarrier, sys_userfaultfd, sys_bind, sys_listen 173 - .word sys_setsockopt, sys_mlock2, sys_copy_file_range 173 + .word sys_setsockopt, sys_mlock2, sys_copy_file_range, sys_preadv2, sys_pwritev2
+1 -2
arch/sparc/kernel/utrap.S
··· 11 11 mov %l4, %o1 12 12 call bad_trap 13 13 add %sp, PTREGS_OFF, %o0 14 - ba,pt %xcc, rtrap 15 - nop 14 + ba,a,pt %xcc, rtrap 16 15 17 16 invoke_utrap: 18 17 sllx %g3, 3, %g3
+18
arch/sparc/kernel/vio.c
··· 45 45 return NULL; 46 46 } 47 47 48 + static int vio_hotplug(struct device *dev, struct kobj_uevent_env *env) 49 + { 50 + const struct vio_dev *vio_dev = to_vio_dev(dev); 51 + 52 + add_uevent_var(env, "MODALIAS=vio:T%sS%s", vio_dev->type, vio_dev->compat); 53 + return 0; 54 + } 55 + 48 56 static int vio_bus_match(struct device *dev, struct device_driver *drv) 49 57 { 50 58 struct vio_dev *vio_dev = to_vio_dev(dev); ··· 113 105 return sprintf(buf, "%s\n", vdev->type); 114 106 } 115 107 108 + static ssize_t modalias_show(struct device *dev, struct device_attribute *attr, 109 + char *buf) 110 + { 111 + const struct vio_dev *vdev = to_vio_dev(dev); 112 + 113 + return sprintf(buf, "vio:T%sS%s\n", vdev->type, vdev->compat); 114 + } 115 + 116 116 static struct device_attribute vio_dev_attrs[] = { 117 117 __ATTR_RO(devspec), 118 118 __ATTR_RO(type), 119 + __ATTR_RO(modalias), 119 120 __ATTR_NULL 120 121 }; 121 122 122 123 static struct bus_type vio_bus_type = { 123 124 .name = "vio", 124 125 .dev_attrs = vio_dev_attrs, 126 + .uevent = vio_hotplug, 125 127 .match = vio_bus_match, 126 128 .probe = vio_device_probe, 127 129 .remove = vio_device_remove,
+4
arch/sparc/kernel/vmlinux.lds.S
··· 33 33 jiffies = jiffies_64; 34 34 #endif 35 35 36 + #ifdef CONFIG_SPARC64 37 + ASSERT((swapper_tsb == 0x0000000000408000), "Error: sparc64 early assembler too large") 38 + #endif 39 + 36 40 SECTIONS 37 41 { 38 42 #ifdef CONFIG_SPARC64
+1 -2
arch/sparc/kernel/winfixup.S
··· 32 32 rd %pc, %g7 33 33 call do_sparc64_fault 34 34 add %sp, PTREGS_OFF, %o0 35 - ba,pt %xcc, rtrap 36 - nop 35 + ba,a,pt %xcc, rtrap 37 36 38 37 /* Be very careful about usage of the trap globals here. 39 38 * You cannot touch %g5 as that has the fault information.
+3
arch/sparc/mm/init_64.c
··· 1769 1769 max_phys_bits = 47; 1770 1770 break; 1771 1771 case SUN4V_CHIP_SPARC_M7: 1772 + case SUN4V_CHIP_SPARC_SN: 1772 1773 default: 1773 1774 /* M7 and later support 52-bit virtual addresses. */ 1774 1775 sparc64_va_hole_top = 0xfff8000000000000UL; ··· 1987 1986 */ 1988 1987 switch (sun4v_chip_type) { 1989 1988 case SUN4V_CHIP_SPARC_M7: 1989 + case SUN4V_CHIP_SPARC_SN: 1990 1990 pagecv_flag = 0x00; 1991 1991 break; 1992 1992 default: ··· 2140 2138 */ 2141 2139 switch (sun4v_chip_type) { 2142 2140 case SUN4V_CHIP_SPARC_M7: 2141 + case SUN4V_CHIP_SPARC_SN: 2143 2142 page_cache4v_flag = _PAGE_CP_4V; 2144 2143 break; 2145 2144 default:
+1 -1
arch/x86/events/amd/core.c
··· 115 115 /* 116 116 * AMD Performance Monitor K7 and later. 117 117 */ 118 - static const u64 amd_perfmon_event_map[] = 118 + static const u64 amd_perfmon_event_map[PERF_COUNT_HW_MAX] = 119 119 { 120 120 [PERF_COUNT_HW_CPU_CYCLES] = 0x0076, 121 121 [PERF_COUNT_HW_INSTRUCTIONS] = 0x00c0,
+1
arch/x86/events/intel/core.c
··· 3639 3639 3640 3640 case 78: /* 14nm Skylake Mobile */ 3641 3641 case 94: /* 14nm Skylake Desktop */ 3642 + case 85: /* 14nm Skylake Server */ 3642 3643 x86_pmu.late_ack = true; 3643 3644 memcpy(hw_cache_event_ids, skl_hw_cache_event_ids, sizeof(hw_cache_event_ids)); 3644 3645 memcpy(hw_cache_extra_regs, skl_hw_cache_extra_regs, sizeof(hw_cache_extra_regs));
+4 -2
arch/x86/events/intel/lbr.c
··· 63 63 64 64 #define LBR_PLM (LBR_KERNEL | LBR_USER) 65 65 66 - #define LBR_SEL_MASK 0x1ff /* valid bits in LBR_SELECT */ 66 + #define LBR_SEL_MASK 0x3ff /* valid bits in LBR_SELECT */ 67 67 #define LBR_NOT_SUPP -1 /* LBR filter not supported */ 68 68 #define LBR_IGN 0 /* ignored */ 69 69 ··· 610 610 * The first 9 bits (LBR_SEL_MASK) in LBR_SELECT operate 611 611 * in suppress mode. So LBR_SELECT should be set to 612 612 * (~mask & LBR_SEL_MASK) | (mask & ~LBR_SEL_MASK) 613 + * But the 10th bit LBR_CALL_STACK does not operate 614 + * in suppress mode. 613 615 */ 614 - reg->config = mask ^ x86_pmu.lbr_sel_mask; 616 + reg->config = mask ^ (x86_pmu.lbr_sel_mask & ~LBR_CALL_STACK); 615 617 616 618 if ((br_type & PERF_SAMPLE_BRANCH_NO_CYCLES) && 617 619 (br_type & PERF_SAMPLE_BRANCH_NO_FLAGS) &&
+64 -11
arch/x86/events/intel/pt.c
··· 136 136 struct dev_ext_attribute *de_attrs; 137 137 struct attribute **attrs; 138 138 size_t size; 139 + u64 reg; 139 140 int ret; 140 141 long i; 142 + 143 + if (boot_cpu_has(X86_FEATURE_VMX)) { 144 + /* 145 + * Intel SDM, 36.5 "Tracing post-VMXON" says that 146 + * "IA32_VMX_MISC[bit 14]" being 1 means PT can trace 147 + * post-VMXON. 148 + */ 149 + rdmsrl(MSR_IA32_VMX_MISC, reg); 150 + if (reg & BIT(14)) 151 + pt_pmu.vmx = true; 152 + } 141 153 142 154 attrs = NULL; 143 155 ··· 281 269 282 270 reg |= (event->attr.config & PT_CONFIG_MASK); 283 271 272 + event->hw.config = reg; 284 273 wrmsrl(MSR_IA32_RTIT_CTL, reg); 285 274 } 286 275 287 - static void pt_config_start(bool start) 276 + static void pt_config_stop(struct perf_event *event) 288 277 { 289 - u64 ctl; 278 + u64 ctl = READ_ONCE(event->hw.config); 290 279 291 - rdmsrl(MSR_IA32_RTIT_CTL, ctl); 292 - if (start) 293 - ctl |= RTIT_CTL_TRACEEN; 294 - else 295 - ctl &= ~RTIT_CTL_TRACEEN; 280 + /* may be already stopped by a PMI */ 281 + if (!(ctl & RTIT_CTL_TRACEEN)) 282 + return; 283 + 284 + ctl &= ~RTIT_CTL_TRACEEN; 296 285 wrmsrl(MSR_IA32_RTIT_CTL, ctl); 286 + 287 + WRITE_ONCE(event->hw.config, ctl); 297 288 298 289 /* 299 290 * A wrmsr that disables trace generation serializes other PT ··· 306 291 * The below WMB, separating data store and aux_head store matches 307 292 * the consumer's RMB that separates aux_head load and data load. 308 293 */ 309 - if (!start) 310 - wmb(); 294 + wmb(); 311 295 } 312 296 313 297 static void pt_config_buffer(void *buf, unsigned int topa_idx, ··· 956 942 if (!ACCESS_ONCE(pt->handle_nmi)) 957 943 return; 958 944 959 - pt_config_start(false); 945 + /* 946 + * If VMX is on and PT does not support it, don't touch anything. 
947 + */ 948 + if (READ_ONCE(pt->vmx_on)) 949 + return; 960 950 961 951 if (!event) 962 952 return; 953 + 954 + pt_config_stop(event); 963 955 964 956 buf = perf_get_aux(&pt->handle); 965 957 if (!buf) ··· 1003 983 } 1004 984 } 1005 985 986 + void intel_pt_handle_vmx(int on) 987 + { 988 + struct pt *pt = this_cpu_ptr(&pt_ctx); 989 + struct perf_event *event; 990 + unsigned long flags; 991 + 992 + /* PT plays nice with VMX, do nothing */ 993 + if (pt_pmu.vmx) 994 + return; 995 + 996 + /* 997 + * VMXON will clear RTIT_CTL.TraceEn; we need to make 998 + * sure to not try to set it while VMX is on. Disable 999 + * interrupts to avoid racing with pmu callbacks; 1000 + * concurrent PMI should be handled fine. 1001 + */ 1002 + local_irq_save(flags); 1003 + WRITE_ONCE(pt->vmx_on, on); 1004 + 1005 + if (on) { 1006 + /* prevent pt_config_stop() from writing RTIT_CTL */ 1007 + event = pt->handle.event; 1008 + if (event) 1009 + event->hw.config = 0; 1010 + } 1011 + local_irq_restore(flags); 1012 + } 1013 + EXPORT_SYMBOL_GPL(intel_pt_handle_vmx); 1014 + 1006 1015 /* 1007 1016 * PMU callbacks 1008 1017 */ ··· 1040 991 { 1041 992 struct pt *pt = this_cpu_ptr(&pt_ctx); 1042 993 struct pt_buffer *buf = perf_get_aux(&pt->handle); 994 + 995 + if (READ_ONCE(pt->vmx_on)) 996 + return; 1043 997 1044 998 if (!buf || pt_buffer_is_full(buf, pt)) { 1045 999 event->hw.state = PERF_HES_STOPPED; ··· 1066 1014 * see comment in intel_pt_interrupt(). 1067 1015 */ 1068 1016 ACCESS_ONCE(pt->handle_nmi) = 0; 1069 - pt_config_start(false); 1017 + 1018 + pt_config_stop(event); 1070 1019 1071 1020 if (event->hw.state == PERF_HES_STOPPED) 1072 1021 return;
+3
arch/x86/events/intel/pt.h
··· 65 65 struct pt_pmu { 66 66 struct pmu pmu; 67 67 u32 caps[PT_CPUID_REGS_NUM * PT_CPUID_LEAVES]; 68 + bool vmx; 68 69 }; 69 70 70 71 /** ··· 108 107 * struct pt - per-cpu pt context 109 108 * @handle: perf output handle 110 109 * @handle_nmi: do handle PT PMI on this cpu, there's an active event 110 + * @vmx_on: 1 if VMX is ON on this cpu 111 111 */ 112 112 struct pt { 113 113 struct perf_output_handle handle; 114 114 int handle_nmi; 115 + int vmx_on; 115 116 }; 116 117 117 118 #endif /* __INTEL_PT_H__ */
+1
arch/x86/events/intel/rapl.c
··· 718 718 break; 719 719 case 60: /* Haswell */ 720 720 case 69: /* Haswell-Celeron */ 721 + case 70: /* Haswell GT3e */ 721 722 case 61: /* Broadwell */ 722 723 case 71: /* Broadwell-H */ 723 724 rapl_cntr_mask = RAPL_IDX_HSW;
+4
arch/x86/include/asm/perf_event.h
··· 285 285 static inline void perf_check_microcode(void) { } 286 286 #endif 287 287 288 + #ifdef CONFIG_CPU_SUP_INTEL 289 + extern void intel_pt_handle_vmx(int on); 290 + #endif 291 + 288 292 #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_AMD) 289 293 extern void amd_pmu_enable_virt(void); 290 294 extern void amd_pmu_disable_virt(void);
+2 -1
arch/x86/kernel/apic/vector.c
··· 256 256 struct irq_desc *desc; 257 257 int cpu, vector; 258 258 259 - BUG_ON(!data->cfg.vector); 259 + if (!data->cfg.vector) 260 + return; 260 261 261 262 vector = data->cfg.vector; 262 263 for_each_cpu_and(cpu, data->domain, cpu_online_mask)
-6
arch/x86/kernel/head_32.S
··· 389 389 /* Make changes effective */ 390 390 wrmsr 391 391 392 - /* 393 - * And make sure that all the mappings we set up have NX set from 394 - * the beginning. 395 - */ 396 - orl $(1 << (_PAGE_BIT_NX - 32)), pa(__supported_pte_mask + 4) 397 - 398 392 enable_paging: 399 393 400 394 /*
+4
arch/x86/kvm/vmx.c
··· 3103 3103 3104 3104 static void kvm_cpu_vmxon(u64 addr) 3105 3105 { 3106 + intel_pt_handle_vmx(1); 3107 + 3106 3108 asm volatile (ASM_VMX_VMXON_RAX 3107 3109 : : "a"(&addr), "m"(addr) 3108 3110 : "memory", "cc"); ··· 3174 3172 static void kvm_cpu_vmxoff(void) 3175 3173 { 3176 3174 asm volatile (__ex(ASM_VMX_VMXOFF) : : : "cc"); 3175 + 3176 + intel_pt_handle_vmx(0); 3177 3177 } 3178 3178 3179 3179 static void hardware_disable(void)
+3 -2
arch/x86/mm/setup_nx.c
··· 32 32 33 33 void x86_configure_nx(void) 34 34 { 35 - /* If disable_nx is set, clear NX on all new mappings going forward. */ 36 - if (disable_nx) 35 + if (boot_cpu_has(X86_FEATURE_NX) && !disable_nx) 36 + __supported_pte_mask |= _PAGE_NX; 37 + else 37 38 __supported_pte_mask &= ~_PAGE_NX; 38 39 } 39 40
+6
arch/x86/xen/spinlock.c
··· 27 27 28 28 static void xen_qlock_kick(int cpu) 29 29 { 30 + int irq = per_cpu(lock_kicker_irq, cpu); 31 + 32 + /* Don't kick if the target's kicker interrupt is not initialized. */ 33 + if (irq == -1) 34 + return; 35 + 30 36 xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR); 31 37 } 32 38
+25 -27
drivers/block/rbd.c
··· 538 538 u8 *order, u64 *snap_size); 539 539 static int _rbd_dev_v2_snap_features(struct rbd_device *rbd_dev, u64 snap_id, 540 540 u64 *snap_features); 541 - static u64 rbd_snap_id_by_name(struct rbd_device *rbd_dev, const char *name); 542 541 543 542 static int rbd_open(struct block_device *bdev, fmode_t mode) 544 543 { ··· 3126 3127 struct rbd_device *rbd_dev = (struct rbd_device *)data; 3127 3128 int ret; 3128 3129 3129 - if (!rbd_dev) 3130 - return; 3131 - 3132 3130 dout("%s: \"%s\" notify_id %llu opcode %u\n", __func__, 3133 3131 rbd_dev->header_name, (unsigned long long)notify_id, 3134 3132 (unsigned int)opcode); ··· 3259 3263 3260 3264 ceph_osdc_cancel_event(rbd_dev->watch_event); 3261 3265 rbd_dev->watch_event = NULL; 3266 + 3267 + dout("%s flushing notifies\n", __func__); 3268 + ceph_osdc_flush_notifies(&rbd_dev->rbd_client->client->osdc); 3262 3269 } 3263 3270 3264 3271 /* ··· 3641 3642 static void rbd_dev_update_size(struct rbd_device *rbd_dev) 3642 3643 { 3643 3644 sector_t size; 3644 - bool removing; 3645 3645 3646 3646 /* 3647 - * Don't hold the lock while doing disk operations, 3648 - * or lock ordering will conflict with the bdev mutex via: 3649 - * rbd_add() -> blkdev_get() -> rbd_open() 3647 + * If EXISTS is not set, rbd_dev->disk may be NULL, so don't 3648 + * try to update its size. If REMOVING is set, updating size 3649 + * is just useless work since the device can't be opened. 
3650 3650 */ 3651 - spin_lock_irq(&rbd_dev->lock); 3652 - removing = test_bit(RBD_DEV_FLAG_REMOVING, &rbd_dev->flags); 3653 - spin_unlock_irq(&rbd_dev->lock); 3654 - /* 3655 - * If the device is being removed, rbd_dev->disk has 3656 - * been destroyed, so don't try to update its size 3657 - */ 3658 - if (!removing) { 3651 + if (test_bit(RBD_DEV_FLAG_EXISTS, &rbd_dev->flags) && 3652 + !test_bit(RBD_DEV_FLAG_REMOVING, &rbd_dev->flags)) { 3659 3653 size = (sector_t)rbd_dev->mapping.size / SECTOR_SIZE; 3660 3654 dout("setting size to %llu sectors", (unsigned long long)size); 3661 3655 set_capacity(rbd_dev->disk, size); ··· 4183 4191 __le64 features; 4184 4192 __le64 incompat; 4185 4193 } __attribute__ ((packed)) features_buf = { 0 }; 4186 - u64 incompat; 4194 + u64 unsup; 4187 4195 int ret; 4188 4196 4189 4197 ret = rbd_obj_method_sync(rbd_dev, rbd_dev->header_name, ··· 4196 4204 if (ret < sizeof (features_buf)) 4197 4205 return -ERANGE; 4198 4206 4199 - incompat = le64_to_cpu(features_buf.incompat); 4200 - if (incompat & ~RBD_FEATURES_SUPPORTED) 4207 + unsup = le64_to_cpu(features_buf.incompat) & ~RBD_FEATURES_SUPPORTED; 4208 + if (unsup) { 4209 + rbd_warn(rbd_dev, "image uses unsupported features: 0x%llx", 4210 + unsup); 4201 4211 return -ENXIO; 4212 + } 4202 4213 4203 4214 *snap_features = le64_to_cpu(features_buf.features); 4204 4215 ··· 5182 5187 return ret; 5183 5188 } 5184 5189 5190 + /* 5191 + * rbd_dev->header_rwsem must be locked for write and will be unlocked 5192 + * upon return. 5193 + */ 5185 5194 static int rbd_dev_device_setup(struct rbd_device *rbd_dev) 5186 5195 { 5187 5196 int ret; ··· 5194 5195 5195 5196 ret = rbd_dev_id_get(rbd_dev); 5196 5197 if (ret) 5197 - return ret; 5198 + goto err_out_unlock; 5198 5199 5199 5200 BUILD_BUG_ON(DEV_NAME_LEN 5200 5201 < sizeof (RBD_DRV_NAME) + MAX_INT_FORMAT_WIDTH); ··· 5235 5236 /* Everything's ready. Announce the disk to the world. 
*/ 5236 5237 5237 5238 set_bit(RBD_DEV_FLAG_EXISTS, &rbd_dev->flags); 5238 - add_disk(rbd_dev->disk); 5239 + up_write(&rbd_dev->header_rwsem); 5239 5240 5241 + add_disk(rbd_dev->disk); 5240 5242 pr_info("%s: added with size 0x%llx\n", rbd_dev->disk->disk_name, 5241 5243 (unsigned long long) rbd_dev->mapping.size); 5242 5244 ··· 5252 5252 unregister_blkdev(rbd_dev->major, rbd_dev->name); 5253 5253 err_out_id: 5254 5254 rbd_dev_id_put(rbd_dev); 5255 + err_out_unlock: 5256 + up_write(&rbd_dev->header_rwsem); 5255 5257 return ret; 5256 5258 } 5257 5259 ··· 5444 5442 spec = NULL; /* rbd_dev now owns this */ 5445 5443 rbd_opts = NULL; /* rbd_dev now owns this */ 5446 5444 5445 + down_write(&rbd_dev->header_rwsem); 5447 5446 rc = rbd_dev_image_probe(rbd_dev, 0); 5448 5447 if (rc < 0) 5449 5448 goto err_out_rbd_dev; ··· 5474 5471 return rc; 5475 5472 5476 5473 err_out_rbd_dev: 5474 + up_write(&rbd_dev->header_rwsem); 5477 5475 rbd_dev_destroy(rbd_dev); 5478 5476 err_out_client: 5479 5477 rbd_put_client(rbdc); ··· 5581 5577 return ret; 5582 5578 5583 5579 rbd_dev_header_unwatch_sync(rbd_dev); 5584 - /* 5585 - * flush remaining watch callbacks - these must be complete 5586 - * before the osd_client is shutdown 5587 - */ 5588 - dout("%s: flushing notifies", __func__); 5589 - ceph_osdc_flush_notifies(&rbd_dev->rbd_client->client->osdc); 5590 5580 5591 5581 /* 5592 5582 * Don't free anything from rbd_dev->disk until after all
+1 -1
drivers/clk/imx/clk-imx6q.c
··· 394 394 clk[IMX6QDL_CLK_LDB_DI1_DIV_3_5] = imx_clk_fixed_factor("ldb_di1_div_3_5", "ldb_di1", 2, 7); 395 395 } else { 396 396 clk[IMX6QDL_CLK_ECSPI_ROOT] = imx_clk_divider("ecspi_root", "pll3_60m", base + 0x38, 19, 6); 397 - clk[IMX6QDL_CLK_CAN_ROOT] = imx_clk_divider("can_root", "pll3_60", base + 0x20, 2, 6); 397 + clk[IMX6QDL_CLK_CAN_ROOT] = imx_clk_divider("can_root", "pll3_60m", base + 0x20, 2, 6); 398 398 clk[IMX6QDL_CLK_IPG_PER] = imx_clk_fixup_divider("ipg_per", "ipg", base + 0x1c, 0, 6, imx_cscmr1_fixup); 399 399 clk[IMX6QDL_CLK_UART_SERIAL_PODF] = imx_clk_divider("uart_serial_podf", "pll3_80m", base + 0x24, 0, 6); 400 400 clk[IMX6QDL_CLK_LDB_DI0_DIV_3_5] = imx_clk_fixed_factor("ldb_di0_div_3_5", "ldb_di0_sel", 2, 7);
+2 -6
drivers/cpufreq/cpufreq_governor.c
··· 193 193 wall_time = cur_wall_time - j_cdbs->prev_cpu_wall; 194 194 j_cdbs->prev_cpu_wall = cur_wall_time; 195 195 196 - if (cur_idle_time <= j_cdbs->prev_cpu_idle) { 197 - idle_time = 0; 198 - } else { 199 - idle_time = cur_idle_time - j_cdbs->prev_cpu_idle; 200 - j_cdbs->prev_cpu_idle = cur_idle_time; 201 - } 196 + idle_time = cur_idle_time - j_cdbs->prev_cpu_idle; 197 + j_cdbs->prev_cpu_idle = cur_idle_time; 202 198 203 199 if (ignore_nice) { 204 200 u64 cur_nice = kcpustat_cpu(j).cpustat[CPUTIME_NICE];
+5
drivers/cpufreq/intel_pstate.c
··· 813 813 if (err) 814 814 goto skip_tar; 815 815 816 + /* For level 1 and 2, bits[23:16] contain the ratio */ 817 + if (tdp_ctrl) 818 + tdp_ratio >>= 16; 819 + 820 + tdp_ratio &= 0xff; /* ratios are only 8 bits long */ 816 821 if (tdp_ratio - 1 == tar) { 817 822 max_pstate = tar; 818 823 pr_debug("max_pstate=TAC %x\n", max_pstate);
+1 -1
drivers/edac/i7core_edac.c
··· 1866 1866 1867 1867 i7_dev = get_i7core_dev(mce->socketid); 1868 1868 if (!i7_dev) 1869 - return NOTIFY_BAD; 1869 + return NOTIFY_DONE; 1870 1870 1871 1871 mci = i7_dev->mci; 1872 1872 pvt = mci->pvt_info;
+1 -1
drivers/edac/sb_edac.c
··· 3168 3168 3169 3169 mci = get_mci_for_node_id(mce->socketid); 3170 3170 if (!mci) 3171 - return NOTIFY_BAD; 3171 + return NOTIFY_DONE; 3172 3172 pvt = mci->pvt_info; 3173 3173 3174 3174 /*
+26 -11
drivers/firmware/efi/vars.c
··· 202 202 { NULL_GUID, "", NULL }, 203 203 }; 204 204 205 + /* 206 + * Check if @var_name matches the pattern given in @match_name. 207 + * 208 + * @var_name: an array of @len non-NUL characters. 209 + * @match_name: a NUL-terminated pattern string, optionally ending in "*". A 210 + * final "*" character matches any trailing characters @var_name, 211 + * including the case when there are none left in @var_name. 212 + * @match: on output, the number of non-wildcard characters in @match_name 213 + * that @var_name matches, regardless of the return value. 214 + * @return: whether @var_name fully matches @match_name. 215 + */ 205 216 static bool 206 217 variable_matches(const char *var_name, size_t len, const char *match_name, 207 218 int *match) 208 219 { 209 220 for (*match = 0; ; (*match)++) { 210 221 char c = match_name[*match]; 211 - char u = var_name[*match]; 212 222 213 - /* Wildcard in the matching name means we've matched */ 214 - if (c == '*') 223 + switch (c) { 224 + case '*': 225 + /* Wildcard in @match_name means we've matched. */ 215 226 return true; 216 227 217 - /* Case sensitive match */ 218 - if (!c && *match == len) 219 - return true; 228 + case '\0': 229 + /* @match_name has ended. Has @var_name too? */ 230 + return (*match == len); 220 231 221 - if (c != u) 232 + default: 233 + /* 234 + * We've reached a non-wildcard char in @match_name. 235 + * Continue only if there's an identical character in 236 + * @var_name. 237 + */ 238 + if (*match < len && c == var_name[*match]) 239 + continue; 222 240 return false; 223 - 224 - if (!c) 225 - return true; 241 + } 226 242 } 227 - return true; 228 243 } 229 244 230 245 bool
+6 -59
drivers/gpio/gpio-rcar.c
··· 196 196 return 0; 197 197 } 198 198 199 - static void gpio_rcar_irq_bus_lock(struct irq_data *d) 200 - { 201 - struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 202 - struct gpio_rcar_priv *p = gpiochip_get_data(gc); 203 - 204 - pm_runtime_get_sync(&p->pdev->dev); 205 - } 206 - 207 - static void gpio_rcar_irq_bus_sync_unlock(struct irq_data *d) 208 - { 209 - struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 210 - struct gpio_rcar_priv *p = gpiochip_get_data(gc); 211 - 212 - pm_runtime_put(&p->pdev->dev); 213 - } 214 - 215 - 216 - static int gpio_rcar_irq_request_resources(struct irq_data *d) 217 - { 218 - struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 219 - struct gpio_rcar_priv *p = gpiochip_get_data(gc); 220 - int error; 221 - 222 - error = pm_runtime_get_sync(&p->pdev->dev); 223 - if (error < 0) 224 - return error; 225 - 226 - return 0; 227 - } 228 - 229 - static void gpio_rcar_irq_release_resources(struct irq_data *d) 230 - { 231 - struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 232 - struct gpio_rcar_priv *p = gpiochip_get_data(gc); 233 - 234 - pm_runtime_put(&p->pdev->dev); 235 - } 236 - 237 199 static irqreturn_t gpio_rcar_irq_handler(int irq, void *dev_id) 238 200 { 239 201 struct gpio_rcar_priv *p = dev_id; ··· 242 280 243 281 static int gpio_rcar_request(struct gpio_chip *chip, unsigned offset) 244 282 { 245 - struct gpio_rcar_priv *p = gpiochip_get_data(chip); 246 - int error; 247 - 248 - error = pm_runtime_get_sync(&p->pdev->dev); 249 - if (error < 0) 250 - return error; 251 - 252 - error = pinctrl_request_gpio(chip->base + offset); 253 - if (error) 254 - pm_runtime_put(&p->pdev->dev); 255 - 256 - return error; 283 + return pinctrl_request_gpio(chip->base + offset); 257 284 } 258 285 259 286 static void gpio_rcar_free(struct gpio_chip *chip, unsigned offset) 260 287 { 261 - struct gpio_rcar_priv *p = gpiochip_get_data(chip); 262 - 263 288 pinctrl_free_gpio(chip->base + offset); 264 289 265 - /* Set the GPIO as an input to ensure 
that the next GPIO request won't 290 + /* 291 + * Set the GPIO as an input to ensure that the next GPIO request won't 266 292 * drive the GPIO pin as an output. 267 293 */ 268 294 gpio_rcar_config_general_input_output_mode(chip, offset, false); 269 - 270 - pm_runtime_put(&p->pdev->dev); 271 295 } 272 296 273 297 static int gpio_rcar_direction_input(struct gpio_chip *chip, unsigned offset) ··· 400 452 } 401 453 402 454 pm_runtime_enable(dev); 455 + pm_runtime_get_sync(dev); 403 456 404 457 io = platform_get_resource(pdev, IORESOURCE_MEM, 0); 405 458 irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0); ··· 437 488 irq_chip->irq_unmask = gpio_rcar_irq_enable; 438 489 irq_chip->irq_set_type = gpio_rcar_irq_set_type; 439 490 irq_chip->irq_set_wake = gpio_rcar_irq_set_wake; 440 - irq_chip->irq_bus_lock = gpio_rcar_irq_bus_lock; 441 - irq_chip->irq_bus_sync_unlock = gpio_rcar_irq_bus_sync_unlock; 442 - irq_chip->irq_request_resources = gpio_rcar_irq_request_resources; 443 - irq_chip->irq_release_resources = gpio_rcar_irq_release_resources; 444 491 irq_chip->flags = IRQCHIP_SET_TYPE_MASKED | IRQCHIP_MASK_ON_SUSPEND; 445 492 446 493 ret = gpiochip_add_data(gpio_chip, p); ··· 467 522 err1: 468 523 gpiochip_remove(gpio_chip); 469 524 err0: 525 + pm_runtime_put(dev); 470 526 pm_runtime_disable(dev); 471 527 return ret; 472 528 } ··· 478 532 479 533 gpiochip_remove(&p->gpio_chip); 480 534 535 + pm_runtime_put(&pdev->dev); 481 536 pm_runtime_disable(&pdev->dev); 482 537 return 0; 483 538 }
+1 -1
drivers/gpio/gpiolib-acpi.c
··· 977 977 lookup = kmalloc(sizeof(*lookup), GFP_KERNEL); 978 978 if (lookup) { 979 979 lookup->adev = adev; 980 - lookup->con_id = con_id; 980 + lookup->con_id = kstrdup(con_id, GFP_KERNEL); 981 981 list_add_tail(&lookup->node, &acpi_crs_lookup_list); 982 982 } 983 983 }
+7 -4
drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c
··· 63 63 return amdgpu_atpx_priv.atpx_detected; 64 64 } 65 65 66 - bool amdgpu_has_atpx_dgpu_power_cntl(void) { 67 - return amdgpu_atpx_priv.atpx.functions.power_cntl; 68 - } 69 - 70 66 /** 71 67 * amdgpu_atpx_call - call an ATPX method 72 68 * ··· 142 146 */ 143 147 static int amdgpu_atpx_validate(struct amdgpu_atpx *atpx) 144 148 { 149 + /* make sure required functions are enabled */ 150 + /* dGPU power control is required */ 151 + if (atpx->functions.power_cntl == false) { 152 + printk("ATPX dGPU power cntl not present, forcing\n"); 153 + atpx->functions.power_cntl = true; 154 + } 155 + 145 156 if (atpx->functions.px_params) { 146 157 union acpi_object *info; 147 158 struct atpx_px_params output;
+1 -7
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 62 62 "LAST", 63 63 }; 64 64 65 - #if defined(CONFIG_VGA_SWITCHEROO) 66 - bool amdgpu_has_atpx_dgpu_power_cntl(void); 67 - #else 68 - static inline bool amdgpu_has_atpx_dgpu_power_cntl(void) { return false; } 69 - #endif 70 - 71 65 bool amdgpu_device_is_px(struct drm_device *dev) 72 66 { 73 67 struct amdgpu_device *adev = dev->dev_private; ··· 1479 1485 1480 1486 if (amdgpu_runtime_pm == 1) 1481 1487 runtime = true; 1482 - if (amdgpu_device_is_px(ddev) && amdgpu_has_atpx_dgpu_power_cntl()) 1488 + if (amdgpu_device_is_px(ddev)) 1483 1489 runtime = true; 1484 1490 vga_switcheroo_register_client(adev->pdev, &amdgpu_switcheroo_ops, runtime); 1485 1491 if (runtime)
+4 -1
drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
··· 910 910 { 911 911 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 912 912 913 - return amdgpu_irq_get(adev, &adev->mc.vm_fault, 0); 913 + if (amdgpu_vm_fault_stop != AMDGPU_VM_FAULT_STOP_ALWAYS) 914 + return amdgpu_irq_get(adev, &adev->mc.vm_fault, 0); 915 + else 916 + return 0; 914 917 } 915 918 916 919 static int gmc_v7_0_sw_init(void *handle)
+4 -1
drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
··· 870 870 { 871 871 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 872 872 873 - return amdgpu_irq_get(adev, &adev->mc.vm_fault, 0); 873 + if (amdgpu_vm_fault_stop != AMDGPU_VM_FAULT_STOP_ALWAYS) 874 + return amdgpu_irq_get(adev, &adev->mc.vm_fault, 0); 875 + else 876 + return 0; 874 877 } 875 878 876 879 #define mmMC_SEQ_MISC0_FIJI 0xA71
+20
drivers/gpu/drm/drm_dp_mst_topology.c
··· 1796 1796 req_payload.start_slot = cur_slots; 1797 1797 if (mgr->proposed_vcpis[i]) { 1798 1798 port = container_of(mgr->proposed_vcpis[i], struct drm_dp_mst_port, vcpi); 1799 + port = drm_dp_get_validated_port_ref(mgr, port); 1800 + if (!port) { 1801 + mutex_unlock(&mgr->payload_lock); 1802 + return -EINVAL; 1803 + } 1799 1804 req_payload.num_slots = mgr->proposed_vcpis[i]->num_slots; 1800 1805 req_payload.vcpi = mgr->proposed_vcpis[i]->vcpi; 1801 1806 } else { ··· 1828 1823 mgr->payloads[i].payload_state = req_payload.payload_state; 1829 1824 } 1830 1825 cur_slots += req_payload.num_slots; 1826 + 1827 + if (port) 1828 + drm_dp_put_port(port); 1831 1829 } 1832 1830 1833 1831 for (i = 0; i < mgr->max_payloads; i++) { ··· 2136 2128 2137 2129 if (mgr->mst_primary) { 2138 2130 int sret; 2131 + u8 guid[16]; 2132 + 2139 2133 sret = drm_dp_dpcd_read(mgr->aux, DP_DPCD_REV, mgr->dpcd, DP_RECEIVER_CAP_SIZE); 2140 2134 if (sret != DP_RECEIVER_CAP_SIZE) { 2141 2135 DRM_DEBUG_KMS("dpcd read failed - undocked during suspend?\n"); ··· 2152 2142 ret = -1; 2153 2143 goto out_unlock; 2154 2144 } 2145 + 2146 + /* Some hubs forget their guids after they resume */ 2147 + sret = drm_dp_dpcd_read(mgr->aux, DP_GUID, guid, 16); 2148 + if (sret != 16) { 2149 + DRM_DEBUG_KMS("dpcd read failed - undocked during suspend?\n"); 2150 + ret = -1; 2151 + goto out_unlock; 2152 + } 2153 + drm_dp_check_mstb_guid(mgr->mst_primary, guid); 2154 + 2155 2155 ret = 0; 2156 2156 } else 2157 2157 ret = -1;
+18 -13
drivers/gpu/drm/etnaviv/etnaviv_gpu.c
··· 572 572 goto fail; 573 573 } 574 574 575 + /* 576 + * Set the GPU linear window to be at the end of the DMA window, where 577 + * the CMA area is likely to reside. This ensures that we are able to 578 + * map the command buffers while having the linear window overlap as 579 + * much RAM as possible, so we can optimize mappings for other buffers. 580 + * 581 + * For 3D cores only do this if MC2.0 is present, as with MC1.0 it leads 582 + * to different views of the memory on the individual engines. 583 + */ 584 + if (!(gpu->identity.features & chipFeatures_PIPE_3D) || 585 + (gpu->identity.minor_features0 & chipMinorFeatures0_MC20)) { 586 + u32 dma_mask = (u32)dma_get_required_mask(gpu->dev); 587 + if (dma_mask < PHYS_OFFSET + SZ_2G) 588 + gpu->memory_base = PHYS_OFFSET; 589 + else 590 + gpu->memory_base = dma_mask - SZ_2G + 1; 591 + } 592 + 575 593 ret = etnaviv_hw_reset(gpu); 576 594 if (ret) 577 595 goto fail; ··· 1584 1566 { 1585 1567 struct device *dev = &pdev->dev; 1586 1568 struct etnaviv_gpu *gpu; 1587 - u32 dma_mask; 1588 1569 int err = 0; 1589 1570 1590 1571 gpu = devm_kzalloc(dev, sizeof(*gpu), GFP_KERNEL); ··· 1592 1575 1593 1576 gpu->dev = &pdev->dev; 1594 1577 mutex_init(&gpu->lock); 1595 - 1596 - /* 1597 - * Set the GPU linear window to be at the end of the DMA window, where 1598 - * the CMA area is likely to reside. This ensures that we are able to 1599 - * map the command buffers while having the linear window overlap as 1600 - * much RAM as possible, so we can optimize mappings for other buffers. 1601 - */ 1602 - dma_mask = (u32)dma_get_required_mask(dev); 1603 - if (dma_mask < PHYS_OFFSET + SZ_2G) 1604 - gpu->memory_base = PHYS_OFFSET; 1605 - else 1606 - gpu->memory_base = dma_mask - SZ_2G + 1; 1607 1578 1608 1579 /* Map registers: */ 1609 1580 gpu->mmio = etnaviv_ioremap(pdev, NULL, dev_name(gpu->dev));
+153 -1
drivers/gpu/drm/radeon/evergreen.c
··· 2608 2608 WREG32(VM_CONTEXT1_CNTL, 0); 2609 2609 } 2610 2610 2611 + static const unsigned ni_dig_offsets[] = 2612 + { 2613 + NI_DIG0_REGISTER_OFFSET, 2614 + NI_DIG1_REGISTER_OFFSET, 2615 + NI_DIG2_REGISTER_OFFSET, 2616 + NI_DIG3_REGISTER_OFFSET, 2617 + NI_DIG4_REGISTER_OFFSET, 2618 + NI_DIG5_REGISTER_OFFSET 2619 + }; 2620 + 2621 + static const unsigned ni_tx_offsets[] = 2622 + { 2623 + NI_DCIO_UNIPHY0_UNIPHY_TX_CONTROL1, 2624 + NI_DCIO_UNIPHY1_UNIPHY_TX_CONTROL1, 2625 + NI_DCIO_UNIPHY2_UNIPHY_TX_CONTROL1, 2626 + NI_DCIO_UNIPHY3_UNIPHY_TX_CONTROL1, 2627 + NI_DCIO_UNIPHY4_UNIPHY_TX_CONTROL1, 2628 + NI_DCIO_UNIPHY5_UNIPHY_TX_CONTROL1 2629 + }; 2630 + 2631 + static const unsigned evergreen_dp_offsets[] = 2632 + { 2633 + EVERGREEN_DP0_REGISTER_OFFSET, 2634 + EVERGREEN_DP1_REGISTER_OFFSET, 2635 + EVERGREEN_DP2_REGISTER_OFFSET, 2636 + EVERGREEN_DP3_REGISTER_OFFSET, 2637 + EVERGREEN_DP4_REGISTER_OFFSET, 2638 + EVERGREEN_DP5_REGISTER_OFFSET 2639 + }; 2640 + 2641 + 2642 + /* 2643 + * Assumption is that EVERGREEN_CRTC_MASTER_EN is enabled for the requested crtc. 2644 + * We go from crtc to connector, which is not reliable since it should be 2645 + * the opposite direction. If the crtc is enabled, then find the dig_fe 2646 + * which selects this crtc and ensure that it is enabled. If such a dig_fe 2647 + * is found, then find the dig_be which selects the found dig_fe and ensure 2648 + * that it is enabled and in DP_SST mode. 2649 + * If UNIPHY_PLL_CONTROL1 is enabled then we should disconnect the timing 2650 + * from the dp symbol clocks.
2651 + */ 2652 + static bool evergreen_is_dp_sst_stream_enabled(struct radeon_device *rdev, 2653 + unsigned crtc_id, unsigned *ret_dig_fe) 2654 + { 2655 + unsigned i; 2656 + unsigned dig_fe; 2657 + unsigned dig_be; 2658 + unsigned dig_en_be; 2659 + unsigned uniphy_pll; 2660 + unsigned digs_fe_selected; 2661 + unsigned dig_be_mode; 2662 + unsigned dig_fe_mask; 2663 + bool is_enabled = false; 2664 + bool found_crtc = false; 2665 + 2666 + /* loop through all running dig_fe to find selected crtc */ 2667 + for (i = 0; i < ARRAY_SIZE(ni_dig_offsets); i++) { 2668 + dig_fe = RREG32(NI_DIG_FE_CNTL + ni_dig_offsets[i]); 2669 + if (dig_fe & NI_DIG_FE_CNTL_SYMCLK_FE_ON && 2670 + crtc_id == NI_DIG_FE_CNTL_SOURCE_SELECT(dig_fe)) { 2671 + /* found running pipe */ 2672 + found_crtc = true; 2673 + dig_fe_mask = 1 << i; 2674 + dig_fe = i; 2675 + break; 2676 + } 2677 + } 2678 + 2679 + if (found_crtc) { 2680 + /* loop through all running dig_be to find selected dig_fe */ 2681 + for (i = 0; i < ARRAY_SIZE(ni_dig_offsets); i++) { 2682 + dig_be = RREG32(NI_DIG_BE_CNTL + ni_dig_offsets[i]); 2683 + /* if dig_fe_selected by dig_be? */ 2684 + digs_fe_selected = NI_DIG_BE_CNTL_FE_SOURCE_SELECT(dig_be); 2685 + dig_be_mode = NI_DIG_FE_CNTL_MODE(dig_be); 2686 + if (dig_fe_mask & digs_fe_selected && 2687 + /* if dig_be in sst mode? 
*/ 2688 + dig_be_mode == NI_DIG_BE_DPSST) { 2689 + dig_en_be = RREG32(NI_DIG_BE_EN_CNTL + 2690 + ni_dig_offsets[i]); 2691 + uniphy_pll = RREG32(NI_DCIO_UNIPHY0_PLL_CONTROL1 + 2692 + ni_tx_offsets[i]); 2693 + /* dig_be enabled and tx is running */ 2694 + if (dig_en_be & NI_DIG_BE_EN_CNTL_ENABLE && 2695 + dig_en_be & NI_DIG_BE_EN_CNTL_SYMBCLK_ON && 2696 + uniphy_pll & NI_DCIO_UNIPHY0_PLL_CONTROL1_ENABLE) { 2697 + is_enabled = true; 2698 + *ret_dig_fe = dig_fe; 2699 + break; 2700 + } 2701 + } 2702 + } 2703 + } 2704 + 2705 + return is_enabled; 2706 + } 2707 + 2708 + /* 2709 + * Blank the dig when in dp sst mode; 2710 + * the dig ignores the crtc timing. 2711 + */ 2712 + static void evergreen_blank_dp_output(struct radeon_device *rdev, 2713 + unsigned dig_fe) 2714 + { 2715 + unsigned stream_ctrl; 2716 + unsigned fifo_ctrl; 2717 + unsigned counter = 0; 2718 + 2719 + if (dig_fe >= ARRAY_SIZE(evergreen_dp_offsets)) { 2720 + DRM_ERROR("invalid dig_fe %d\n", dig_fe); 2721 + return; 2722 + } 2723 + 2724 + stream_ctrl = RREG32(EVERGREEN_DP_VID_STREAM_CNTL + 2725 + evergreen_dp_offsets[dig_fe]); 2726 + if (!(stream_ctrl & EVERGREEN_DP_VID_STREAM_CNTL_ENABLE)) { 2727 + DRM_ERROR("dig %d should be enabled\n", dig_fe); 2728 + return; 2729 + } 2730 + 2731 + stream_ctrl &= ~EVERGREEN_DP_VID_STREAM_CNTL_ENABLE; 2732 + WREG32(EVERGREEN_DP_VID_STREAM_CNTL + 2733 + evergreen_dp_offsets[dig_fe], stream_ctrl); 2734 + 2735 + stream_ctrl = RREG32(EVERGREEN_DP_VID_STREAM_CNTL + 2736 + evergreen_dp_offsets[dig_fe]); 2737 + while (counter < 32 && stream_ctrl & EVERGREEN_DP_VID_STREAM_STATUS) { 2738 + msleep(1); 2739 + counter++; 2740 + stream_ctrl = RREG32(EVERGREEN_DP_VID_STREAM_CNTL + 2741 + evergreen_dp_offsets[dig_fe]); 2742 + } 2743 + if (counter >= 32) 2744 + DRM_ERROR("counter exceeds %d\n", counter); 2745 + 2746 + fifo_ctrl = RREG32(EVERGREEN_DP_STEER_FIFO + evergreen_dp_offsets[dig_fe]); 2747 + fifo_ctrl |= EVERGREEN_DP_STEER_FIFO_RESET; 2748 + WREG32(EVERGREEN_DP_STEER_FIFO + 
evergreen_dp_offsets[dig_fe], fifo_ctrl); 2749 + 2750 + } 2751 + 2611 2752 void evergreen_mc_stop(struct radeon_device *rdev, struct evergreen_mc_save *save) 2612 2753 { 2613 2754 u32 crtc_enabled, tmp, frame_count, blackout; 2614 2755 int i, j; 2756 + unsigned dig_fe; 2615 2757 2616 2758 if (!ASIC_IS_NODCE(rdev)) { 2617 2759 save->vga_render_control = RREG32(VGA_RENDER_CONTROL); ··· 2793 2651 break; 2794 2652 udelay(1); 2795 2653 } 2796 - 2654 + /* We should disable the dig if it drives dp sst, 2655 + * but we are in radeon_device_init and the topology is unknown; 2656 + * it only becomes available after radeon_modeset_init. 2657 + * The method radeon_atom_encoder_dpms_dig would do the job 2658 + * if we initialized it properly; 2659 + * for now we do it manually here. 2660 + */ 2661 + if (ASIC_IS_DCE5(rdev) && 2662 + evergreen_is_dp_sst_stream_enabled(rdev, i, &dig_fe)) 2663 + evergreen_blank_dp_output(rdev, dig_fe); 2664 + /* We could remove the 6 lines below. */ 2797 2665 /* XXX this is a hack to avoid strange behavior with EFI on certain systems */ 2798 2666 WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 1); 2799 2667 tmp = RREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i]);
+46
drivers/gpu/drm/radeon/evergreen_reg.h
··· 250 250 251 251 /* HDMI blocks at 0x7030, 0x7c30, 0x10830, 0x11430, 0x12030, 0x12c30 */ 252 252 #define EVERGREEN_HDMI_BASE 0x7030 253 + /* DIG block */ 254 + #define NI_DIG0_REGISTER_OFFSET (0x7000 - 0x7000) 255 + #define NI_DIG1_REGISTER_OFFSET (0x7C00 - 0x7000) 256 + #define NI_DIG2_REGISTER_OFFSET (0x10800 - 0x7000) 257 + #define NI_DIG3_REGISTER_OFFSET (0x11400 - 0x7000) 258 + #define NI_DIG4_REGISTER_OFFSET (0x12000 - 0x7000) 259 + #define NI_DIG5_REGISTER_OFFSET (0x12C00 - 0x7000) 260 + 261 + 262 + #define NI_DIG_FE_CNTL 0x7000 263 + # define NI_DIG_FE_CNTL_SOURCE_SELECT(x) ((x) & 0x3) 264 + # define NI_DIG_FE_CNTL_SYMCLK_FE_ON (1 << 24) 265 + 266 + 267 + #define NI_DIG_BE_CNTL 0x7140 268 + # define NI_DIG_BE_CNTL_FE_SOURCE_SELECT(x) (((x) >> 8) & 0x3F) 269 + # define NI_DIG_FE_CNTL_MODE(x) (((x) >> 16) & 0x7) 270 + 271 + #define NI_DIG_BE_EN_CNTL 0x7144 272 + # define NI_DIG_BE_EN_CNTL_ENABLE (1 << 0) 273 + # define NI_DIG_BE_EN_CNTL_SYMBCLK_ON (1 << 8) 274 + # define NI_DIG_BE_DPSST 0 253 275 254 276 /* Display Port block */ 277 + #define EVERGREEN_DP0_REGISTER_OFFSET (0x730C - 0x730C) 278 + #define EVERGREEN_DP1_REGISTER_OFFSET (0x7F0C - 0x730C) 279 + #define EVERGREEN_DP2_REGISTER_OFFSET (0x10B0C - 0x730C) 280 + #define EVERGREEN_DP3_REGISTER_OFFSET (0x1170C - 0x730C) 281 + #define EVERGREEN_DP4_REGISTER_OFFSET (0x1230C - 0x730C) 282 + #define EVERGREEN_DP5_REGISTER_OFFSET (0x12F0C - 0x730C) 283 + 284 + 285 + #define EVERGREEN_DP_VID_STREAM_CNTL 0x730C 286 + # define EVERGREEN_DP_VID_STREAM_CNTL_ENABLE (1 << 0) 287 + # define EVERGREEN_DP_VID_STREAM_STATUS (1 << 16) 288 + #define EVERGREEN_DP_STEER_FIFO 0x7310 289 + # define EVERGREEN_DP_STEER_FIFO_RESET (1 << 0) 255 290 #define EVERGREEN_DP_SEC_CNTL 0x7280 256 291 # define EVERGREEN_DP_SEC_STREAM_ENABLE (1 << 0) 257 292 # define EVERGREEN_DP_SEC_ASP_ENABLE (1 << 4) ··· 300 265 #define EVERGREEN_DP_SEC_AUD_N 0x7294 301 266 # define EVERGREEN_DP_SEC_N_BASE_MULTIPLE(x) (((x) & 0xf) << 24) 302 267 # define EVERGREEN_DP_SEC_SS_EN (1 << 28) 268 + 269 + /* DCIO_UNIPHY block */ 270 + #define NI_DCIO_UNIPHY0_UNIPHY_TX_CONTROL1 (0x6600 - 0x6600) 271 + #define NI_DCIO_UNIPHY1_UNIPHY_TX_CONTROL1 (0x6640 - 0x6600) 272 + #define NI_DCIO_UNIPHY2_UNIPHY_TX_CONTROL1 (0x6680 - 0x6600) 273 + #define NI_DCIO_UNIPHY3_UNIPHY_TX_CONTROL1 (0x66C0 - 0x6600) 274 + #define NI_DCIO_UNIPHY4_UNIPHY_TX_CONTROL1 (0x6700 - 0x6600) 275 + #define NI_DCIO_UNIPHY5_UNIPHY_TX_CONTROL1 (0x6740 - 0x6600) 276 + 277 + #define NI_DCIO_UNIPHY0_PLL_CONTROL1 0x6618 278 + # define NI_DCIO_UNIPHY0_PLL_CONTROL1_ENABLE (1 << 0) 303 279 304 280 #endif
+4 -13
drivers/gpu/drm/ttm/ttm_bo.c
··· 230 230 231 231 void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo) 232 232 { 233 - struct ttm_bo_device *bdev = bo->bdev; 234 - struct ttm_mem_type_manager *man; 233 + int put_count = 0; 235 234 236 235 lockdep_assert_held(&bo->resv->lock.base); 237 236 238 - if (bo->mem.placement & TTM_PL_FLAG_NO_EVICT) { 239 - list_del_init(&bo->swap); 240 - list_del_init(&bo->lru); 241 - 242 - } else { 243 - if (bo->ttm && !(bo->ttm->page_flags & TTM_PAGE_FLAG_SG)) 244 - list_move_tail(&bo->swap, &bo->glob->swap_lru); 245 - 246 - man = &bdev->man[bo->mem.mem_type]; 247 - list_move_tail(&bo->lru, &man->lru); 248 - } 237 + put_count = ttm_bo_del_from_lru(bo); 238 + ttm_bo_list_ref_sub(bo, put_count, true); 239 + ttm_bo_add_to_lru(bo); 249 240 } 250 241 EXPORT_SYMBOL(ttm_bo_move_to_lru_tail); 251 242
+12
drivers/gpu/drm/virtio/virtgpu_display.c
··· 267 267 return 0; 268 268 } 269 269 270 + static void virtio_gpu_crtc_atomic_flush(struct drm_crtc *crtc, 271 + struct drm_crtc_state *old_state) 272 + { 273 + unsigned long flags; 274 + 275 + spin_lock_irqsave(&crtc->dev->event_lock, flags); 276 + if (crtc->state->event) 277 + drm_crtc_send_vblank_event(crtc, crtc->state->event); 278 + spin_unlock_irqrestore(&crtc->dev->event_lock, flags); 279 + } 280 + 270 281 static const struct drm_crtc_helper_funcs virtio_gpu_crtc_helper_funcs = { 271 282 .enable = virtio_gpu_crtc_enable, 272 283 .disable = virtio_gpu_crtc_disable, 273 284 .mode_set_nofb = virtio_gpu_crtc_mode_set_nofb, 274 285 .atomic_check = virtio_gpu_crtc_atomic_check, 286 + .atomic_flush = virtio_gpu_crtc_atomic_flush, 275 287 }; 276 288 277 289 static void virtio_gpu_enc_mode_set(struct drm_encoder *encoder,
+5 -5
drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
··· 3293 3293 &vmw_cmd_dx_cid_check, true, false, true), 3294 3294 VMW_CMD_DEF(SVGA_3D_CMD_DX_DEFINE_QUERY, &vmw_cmd_dx_define_query, 3295 3295 true, false, true), 3296 - VMW_CMD_DEF(SVGA_3D_CMD_DX_DESTROY_QUERY, &vmw_cmd_ok, 3296 + VMW_CMD_DEF(SVGA_3D_CMD_DX_DESTROY_QUERY, &vmw_cmd_dx_cid_check, 3297 3297 true, false, true), 3298 3298 VMW_CMD_DEF(SVGA_3D_CMD_DX_BIND_QUERY, &vmw_cmd_dx_bind_query, 3299 3299 true, false, true), 3300 3300 VMW_CMD_DEF(SVGA_3D_CMD_DX_SET_QUERY_OFFSET, 3301 - &vmw_cmd_ok, true, false, true), 3302 - VMW_CMD_DEF(SVGA_3D_CMD_DX_BEGIN_QUERY, &vmw_cmd_ok, 3301 + &vmw_cmd_dx_cid_check, true, false, true), 3302 + VMW_CMD_DEF(SVGA_3D_CMD_DX_BEGIN_QUERY, &vmw_cmd_dx_cid_check, 3303 3303 true, false, true), 3304 - VMW_CMD_DEF(SVGA_3D_CMD_DX_END_QUERY, &vmw_cmd_ok, 3304 + VMW_CMD_DEF(SVGA_3D_CMD_DX_END_QUERY, &vmw_cmd_dx_cid_check, 3305 3305 true, false, true), 3306 3306 VMW_CMD_DEF(SVGA_3D_CMD_DX_READBACK_QUERY, &vmw_cmd_invalid, 3307 3307 true, false, true), 3308 - VMW_CMD_DEF(SVGA_3D_CMD_DX_SET_PREDICATION, &vmw_cmd_invalid, 3308 + VMW_CMD_DEF(SVGA_3D_CMD_DX_SET_PREDICATION, &vmw_cmd_dx_cid_check, 3309 3309 true, false, true), 3310 3310 VMW_CMD_DEF(SVGA_3D_CMD_DX_SET_VIEWPORTS, &vmw_cmd_dx_cid_check, 3311 3311 true, false, true),
+3 -3
drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
··· 573 573 mode = old_mode; 574 574 old_mode = NULL; 575 575 } else if (!vmw_kms_validate_mode_vram(vmw_priv, 576 - mode->hdisplay * 577 - (var->bits_per_pixel + 7) / 8, 578 - mode->vdisplay)) { 576 + mode->hdisplay * 577 + DIV_ROUND_UP(var->bits_per_pixel, 8), 578 + mode->vdisplay)) { 579 579 drm_mode_destroy(vmw_priv->dev, mode); 580 580 return -EINVAL; 581 581 }
+1
drivers/hid/hid-ids.h
··· 259 259 #define USB_DEVICE_ID_CORSAIR_K90 0x1b02 260 260 261 261 #define USB_VENDOR_ID_CREATIVELABS 0x041e 262 + #define USB_DEVICE_ID_CREATIVE_SB_OMNI_SURROUND_51 0x322c 262 263 #define USB_DEVICE_ID_PRODIKEYS_PCMIDI 0x2801 263 264 264 265 #define USB_VENDOR_ID_CVTOUCH 0x1ff7
+1
drivers/hid/usbhid/hid-quirks.c
··· 71 71 { USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_3AXIS_5BUTTON_STICK, HID_QUIRK_NOGET }, 72 72 { USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_AXIS_295, HID_QUIRK_NOGET }, 73 73 { USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_PIXART_USB_OPTICAL_MOUSE, HID_QUIRK_ALWAYS_POLL }, 74 + { USB_VENDOR_ID_CREATIVELABS, USB_DEVICE_ID_CREATIVE_SB_OMNI_SURROUND_51, HID_QUIRK_NOGET }, 74 75 { USB_VENDOR_ID_DMI, USB_DEVICE_ID_DMI_ENC, HID_QUIRK_NOGET }, 75 76 { USB_VENDOR_ID_DRAGONRISE, USB_DEVICE_ID_DRAGONRISE_WIIU, HID_QUIRK_MULTI_INPUT }, 76 77 { USB_VENDOR_ID_ELAN, HID_ANY_ID, HID_QUIRK_ALWAYS_POLL },
+6
drivers/hid/wacom_wac.c
··· 684 684 685 685 wacom->tool[idx] = wacom_intuos_get_tool_type(wacom->id[idx]); 686 686 687 + wacom->shared->stylus_in_proximity = true; 687 688 return 1; 688 689 } 689 690 ··· 3396 3395 { "Wacom Intuos PT M 2", 21600, 13500, 2047, 63, 3397 3396 INTUOSHT2, WACOM_INTUOS_RES, WACOM_INTUOS_RES, .touch_max = 16, 3398 3397 .check_for_hid_type = true, .hid_type = HID_TYPE_USBNONE }; 3398 + static const struct wacom_features wacom_features_0x343 = 3399 + { "Wacom DTK1651", 34616, 19559, 1023, 0, 3400 + DTUS, WACOM_INTUOS_RES, WACOM_INTUOS_RES, 4, 3401 + WACOM_DTU_OFFSET, WACOM_DTU_OFFSET }; 3399 3402 3400 3403 static const struct wacom_features wacom_features_HID_ANY_ID = 3401 3404 { "Wacom HID", .type = HID_GENERIC }; ··· 3565 3560 { USB_DEVICE_WACOM(0x33C) }, 3566 3561 { USB_DEVICE_WACOM(0x33D) }, 3567 3562 { USB_DEVICE_WACOM(0x33E) }, 3563 + { USB_DEVICE_WACOM(0x343) }, 3568 3564 { USB_DEVICE_WACOM(0x4001) }, 3569 3565 { USB_DEVICE_WACOM(0x4004) }, 3570 3566 { USB_DEVICE_WACOM(0x5000) },
+2 -2
drivers/i2c/busses/Kconfig
··· 975 975 976 976 config I2C_XLP9XX 977 977 tristate "XLP9XX I2C support" 978 - depends on CPU_XLP || COMPILE_TEST 978 + depends on CPU_XLP || ARCH_VULCAN || COMPILE_TEST 979 979 help 980 980 This driver enables support for the on-chip I2C interface of 981 - the Broadcom XLP9xx/XLP5xx MIPS processors. 981 + the Broadcom XLP9xx/XLP5xx MIPS and Vulcan ARM64 processors. 982 982 983 983 This driver can also be built as a module. If so, the module will 984 984 be called i2c-xlp9xx.
+2 -2
drivers/i2c/busses/i2c-cpm.c
··· 116 116 cbd_t __iomem *rbase; 117 117 u_char *txbuf[CPM_MAXBD]; 118 118 u_char *rxbuf[CPM_MAXBD]; 119 - u32 txdma[CPM_MAXBD]; 120 - u32 rxdma[CPM_MAXBD]; 119 + dma_addr_t txdma[CPM_MAXBD]; 120 + dma_addr_t rxdma[CPM_MAXBD]; 121 121 }; 122 122 123 123 static irqreturn_t cpm_i2c_interrupt(int irq, void *dev_id)
+19 -5
drivers/i2c/busses/i2c-exynos5.c
··· 671 671 return -EIO; 672 672 } 673 673 674 - clk_prepare_enable(i2c->clk); 674 + ret = clk_enable(i2c->clk); 675 + if (ret) 676 + return ret; 675 677 676 678 for (i = 0; i < num; i++, msgs++) { 677 679 stop = (i == num - 1); ··· 697 695 } 698 696 699 697 out: 700 - clk_disable_unprepare(i2c->clk); 698 + clk_disable(i2c->clk); 701 699 return ret; 702 700 } 703 701 ··· 749 747 return -ENOENT; 750 748 } 751 749 752 - clk_prepare_enable(i2c->clk); 750 + ret = clk_prepare_enable(i2c->clk); 751 + if (ret) 752 + return ret; 753 753 754 754 mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 755 755 i2c->regs = devm_ioremap_resource(&pdev->dev, mem); ··· 803 799 804 800 platform_set_drvdata(pdev, i2c); 805 801 802 + clk_disable(i2c->clk); 803 + 804 + return 0; 805 + 806 806 err_clk: 807 807 clk_disable_unprepare(i2c->clk); 808 808 return ret; ··· 817 809 struct exynos5_i2c *i2c = platform_get_drvdata(pdev); 818 810 819 811 i2c_del_adapter(&i2c->adap); 812 + 813 + clk_unprepare(i2c->clk); 820 814 821 815 return 0; 822 816 } ··· 831 821 832 822 i2c->suspended = 1; 833 823 824 + clk_unprepare(i2c->clk); 825 + 834 826 return 0; 835 827 } 836 828 ··· 842 830 struct exynos5_i2c *i2c = platform_get_drvdata(pdev); 843 831 int ret = 0; 844 832 845 - clk_prepare_enable(i2c->clk); 833 + ret = clk_prepare_enable(i2c->clk); 834 + if (ret) 835 + return ret; 846 836 847 837 ret = exynos5_hsi2c_clock_setup(i2c); 848 838 if (ret) { ··· 853 839 } 854 840 855 841 exynos5_i2c_init(i2c); 856 - clk_disable_unprepare(i2c->clk); 842 + clk_disable(i2c->clk); 857 843 i2c->suspended = 0; 858 844 859 845 return 0;
+2
drivers/i2c/busses/i2c-ismt.c
··· 75 75 /* PCI DIDs for the Intel SMBus Message Transport (SMT) Devices */ 76 76 #define PCI_DEVICE_ID_INTEL_S1200_SMT0 0x0c59 77 77 #define PCI_DEVICE_ID_INTEL_S1200_SMT1 0x0c5a 78 + #define PCI_DEVICE_ID_INTEL_DNV_SMT 0x19ac 78 79 #define PCI_DEVICE_ID_INTEL_AVOTON_SMT 0x1f15 79 80 80 81 #define ISMT_DESC_ENTRIES 2 /* number of descriptor entries */ ··· 181 180 static const struct pci_device_id ismt_ids[] = { 182 181 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_S1200_SMT0) }, 183 182 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_S1200_SMT1) }, 183 + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_DNV_SMT) }, 184 184 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_AVOTON_SMT) }, 185 185 { 0, } 186 186 };
+1
drivers/i2c/busses/i2c-rk3x.c
··· 855 855 static const struct of_device_id rk3x_i2c_match[] = { 856 856 { .compatible = "rockchip,rk3066-i2c", .data = (void *)&soc_data[0] }, 857 857 { .compatible = "rockchip,rk3188-i2c", .data = (void *)&soc_data[1] }, 858 + { .compatible = "rockchip,rk3228-i2c", .data = (void *)&soc_data[2] }, 858 859 { .compatible = "rockchip,rk3288-i2c", .data = (void *)&soc_data[2] }, 859 860 {}, 860 861 };
+2 -1
drivers/infiniband/core/cache.c
··· 691 691 NULL); 692 692 693 693 /* Couldn't find default GID location */ 694 - WARN_ON(ix < 0); 694 + if (WARN_ON(ix < 0)) 695 + goto release; 695 696 696 697 zattr_type.gid_type = gid_type; 697 698
+4
drivers/infiniband/core/ucm.c
··· 48 48 49 49 #include <asm/uaccess.h> 50 50 51 + #include <rdma/ib.h> 51 52 #include <rdma/ib_cm.h> 52 53 #include <rdma/ib_user_cm.h> 53 54 #include <rdma/ib_marshall.h> ··· 1103 1102 struct ib_ucm_file *file = filp->private_data; 1104 1103 struct ib_ucm_cmd_hdr hdr; 1105 1104 ssize_t result; 1105 + 1106 + if (WARN_ON_ONCE(!ib_safe_file_access(filp))) 1107 + return -EACCES; 1106 1108 1107 1109 if (len < sizeof(hdr)) 1108 1110 return -EINVAL;
+3
drivers/infiniband/core/ucma.c
··· 1574 1574 struct rdma_ucm_cmd_hdr hdr; 1575 1575 ssize_t ret; 1576 1576 1577 + if (WARN_ON_ONCE(!ib_safe_file_access(filp))) 1578 + return -EACCES; 1579 + 1577 1580 if (len < sizeof(hdr)) 1578 1581 return -EINVAL; 1579 1582
+5
drivers/infiniband/core/uverbs_main.c
··· 48 48 49 49 #include <asm/uaccess.h> 50 50 51 + #include <rdma/ib.h> 52 + 51 53 #include "uverbs.h" 52 54 53 55 MODULE_AUTHOR("Roland Dreier"); ··· 710 708 __u32 flags; 711 709 int srcu_key; 712 710 ssize_t ret; 711 + 712 + if (WARN_ON_ONCE(!ib_safe_file_access(filp))) 713 + return -EACCES; 713 714 714 715 if (count < sizeof hdr) 715 716 return -EINVAL;
+2 -1
drivers/infiniband/core/verbs.c
··· 1860 1860 void ib_drain_qp(struct ib_qp *qp) 1861 1861 { 1862 1862 ib_drain_sq(qp); 1863 - ib_drain_rq(qp); 1863 + if (!qp->srq) 1864 + ib_drain_rq(qp); 1864 1865 } 1865 1866 EXPORT_SYMBOL(ib_drain_qp);
+2
drivers/infiniband/hw/cxgb3/iwch_provider.c
··· 1390 1390 dev->ibdev.iwcm->add_ref = iwch_qp_add_ref; 1391 1391 dev->ibdev.iwcm->rem_ref = iwch_qp_rem_ref; 1392 1392 dev->ibdev.iwcm->get_qp = iwch_get_qp; 1393 + memcpy(dev->ibdev.iwcm->ifname, dev->rdev.t3cdev_p->lldev->name, 1394 + sizeof(dev->ibdev.iwcm->ifname)); 1393 1395 1394 1396 ret = ib_register_device(&dev->ibdev, NULL); 1395 1397 if (ret)
+1 -1
drivers/infiniband/hw/cxgb4/cq.c
··· 162 162 cq->bar2_va = c4iw_bar2_addrs(rdev, cq->cqid, T4_BAR2_QTYPE_INGRESS, 163 163 &cq->bar2_qid, 164 164 user ? &cq->bar2_pa : NULL); 165 - if (user && !cq->bar2_va) { 165 + if (user && !cq->bar2_pa) { 166 166 pr_warn(MOD "%s: cqid %u not in BAR2 range.\n", 167 167 pci_name(rdev->lldi.pdev), cq->cqid); 168 168 ret = -EINVAL;
+2
drivers/infiniband/hw/cxgb4/provider.c
··· 580 580 dev->ibdev.iwcm->add_ref = c4iw_qp_add_ref; 581 581 dev->ibdev.iwcm->rem_ref = c4iw_qp_rem_ref; 582 582 dev->ibdev.iwcm->get_qp = c4iw_get_qp; 583 + memcpy(dev->ibdev.iwcm->ifname, dev->rdev.lldi.ports[0]->name, 584 + sizeof(dev->ibdev.iwcm->ifname)); 583 585 584 586 ret = ib_register_device(&dev->ibdev, NULL); 585 587 if (ret)
+21 -3
drivers/infiniband/hw/cxgb4/qp.c
··· 185 185 186 186 if (pbar2_pa) 187 187 *pbar2_pa = (rdev->bar2_pa + bar2_qoffset) & PAGE_MASK; 188 + 189 + if (is_t4(rdev->lldi.adapter_type)) 190 + return NULL; 191 + 188 192 return rdev->bar2_kva + bar2_qoffset; 189 193 } 190 194 ··· 274 270 /* 275 271 * User mode must have bar2 access. 276 272 */ 277 - if (user && (!wq->sq.bar2_va || !wq->rq.bar2_va)) { 273 + if (user && (!wq->sq.bar2_pa || !wq->rq.bar2_pa)) { 278 274 pr_warn(MOD "%s: sqid %u or rqid %u not in BAR2 range.\n", 279 275 pci_name(rdev->lldi.pdev), wq->sq.qid, wq->rq.qid); 280 276 goto free_dma; ··· 1899 1895 void c4iw_drain_sq(struct ib_qp *ibqp) 1900 1896 { 1901 1897 struct c4iw_qp *qp = to_c4iw_qp(ibqp); 1898 + unsigned long flag; 1899 + bool need_to_wait; 1902 1900 1903 - wait_for_completion(&qp->sq_drained); 1901 + spin_lock_irqsave(&qp->lock, flag); 1902 + need_to_wait = !t4_sq_empty(&qp->wq); 1903 + spin_unlock_irqrestore(&qp->lock, flag); 1904 + 1905 + if (need_to_wait) 1906 + wait_for_completion(&qp->sq_drained); 1904 1907 } 1905 1908 1906 1909 void c4iw_drain_rq(struct ib_qp *ibqp) 1907 1910 { 1908 1911 struct c4iw_qp *qp = to_c4iw_qp(ibqp); 1912 + unsigned long flag; 1913 + bool need_to_wait; 1909 1914 1910 - wait_for_completion(&qp->rq_drained); 1915 + spin_lock_irqsave(&qp->lock, flag); 1916 + need_to_wait = !t4_rq_empty(&qp->wq); 1917 + spin_unlock_irqrestore(&qp->lock, flag); 1918 + 1919 + if (need_to_wait) 1920 + wait_for_completion(&qp->rq_drained); 1911 1921 }
+1 -1
drivers/infiniband/hw/mlx5/main.c
··· 530 530 sizeof(struct mlx5_wqe_ctrl_seg)) / 531 531 sizeof(struct mlx5_wqe_data_seg); 532 532 props->max_sge = min(max_rq_sg, max_sq_sg); 533 - props->max_sge_rd = props->max_sge; 533 + props->max_sge_rd = MLX5_MAX_SGE_RD; 534 534 props->max_cq = 1 << MLX5_CAP_GEN(mdev, log_max_cq); 535 535 props->max_cqe = (1 << MLX5_CAP_GEN(mdev, log_max_cq_sz)) - 1; 536 536 props->max_mr = 1 << MLX5_CAP_GEN(mdev, log_max_mkey);
-3
drivers/infiniband/hw/nes/nes_nic.c
··· 498 498 * skb_shinfo(skb)->nr_frags, skb_is_gso(skb)); 499 499 */ 500 500 501 - if (!netif_carrier_ok(netdev)) 502 - return NETDEV_TX_OK; 503 - 504 501 if (netif_queue_stopped(netdev)) 505 502 return NETDEV_TX_BUSY; 506 503
+5
drivers/infiniband/hw/qib/qib_file_ops.c
··· 45 45 #include <linux/export.h> 46 46 #include <linux/uio.h> 47 47 48 + #include <rdma/ib.h> 49 + 48 50 #include "qib.h" 49 51 #include "qib_common.h" 50 52 #include "qib_user_sdma.h" ··· 2068 2066 struct qib_cmd cmd; 2069 2067 ssize_t ret = 0; 2070 2068 void *dest; 2069 + 2070 + if (WARN_ON_ONCE(!ib_safe_file_access(fp))) 2071 + return -EACCES; 2071 2072 2072 2073 if (count < sizeof(cmd.type)) { 2073 2074 ret = -EINVAL;
+2 -2
drivers/infiniband/sw/rdmavt/qp.c
··· 1637 1637 spin_unlock_irqrestore(&qp->s_hlock, flags); 1638 1638 if (nreq) { 1639 1639 if (call_send) 1640 - rdi->driver_f.schedule_send_no_lock(qp); 1641 - else 1642 1640 rdi->driver_f.do_send(qp); 1641 + else 1642 + rdi->driver_f.schedule_send_no_lock(qp); 1643 1643 } 1644 1644 return err; 1645 1645 }
+2
drivers/md/md.c
··· 284 284 * go away inside make_request 285 285 */ 286 286 sectors = bio_sectors(bio); 287 + /* bio could be mergeable after passing to underlayer */ 288 + bio->bi_rw &= ~REQ_NOMERGE; 287 289 mddev->pers->make_request(mddev, bio); 288 290 289 291 cpu = part_stat_lock();
+1 -1
drivers/md/raid0.c
··· 70 70 (unsigned long long)zone_size>>1); 71 71 zone_start = conf->strip_zone[j].zone_end; 72 72 } 73 - printk(KERN_INFO "\n"); 74 73 } 75 74 76 75 static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf) ··· 84 85 struct r0conf *conf = kzalloc(sizeof(*conf), GFP_KERNEL); 85 86 unsigned short blksize = 512; 86 87 88 + *private_conf = ERR_PTR(-ENOMEM); 87 89 if (!conf) 88 90 return -ENOMEM; 89 91 rdev_for_each(rdev1, mddev) {
-2
drivers/md/raid5.c
··· 3502 3502 dev = &sh->dev[i]; 3503 3503 } else if (test_bit(R5_Discard, &dev->flags)) 3504 3504 discard_pending = 1; 3505 - WARN_ON(test_bit(R5_SkipCopy, &dev->flags)); 3506 - WARN_ON(dev->page != dev->orig_page); 3507 3505 } 3508 3506 3509 3507 r5l_stripe_write_finished(sh);
-7
drivers/media/usb/usbvision/usbvision-video.c
··· 1452 1452 printk(KERN_INFO "%s: %s found\n", __func__, 1453 1453 usbvision_device_data[model].model_string); 1454 1454 1455 - /* 1456 - * this is a security check. 1457 - * an exploit using an incorrect bInterfaceNumber is known 1458 - */ 1459 - if (ifnum >= USB_MAXINTERFACES || !dev->actconfig->interface[ifnum]) 1460 - return -ENODEV; 1461 - 1462 1455 if (usbvision_device_data[model].interface >= 0) 1463 1456 interface = &dev->actconfig->interface[usbvision_device_data[model].interface]->altsetting[0]; 1464 1457 else if (ifnum < dev->actconfig->desc.bNumInterfaces)
+15 -5
drivers/media/v4l2-core/videobuf2-core.c
··· 1645 1645 * Will sleep if required for nonblocking == false. 1646 1646 */ 1647 1647 static int __vb2_get_done_vb(struct vb2_queue *q, struct vb2_buffer **vb, 1648 - int nonblocking) 1648 + void *pb, int nonblocking) 1649 1649 { 1650 1650 unsigned long flags; 1651 1651 int ret; ··· 1666 1666 /* 1667 1667 * Only remove the buffer from done_list if v4l2_buffer can handle all 1668 1668 * the planes. 1669 - * Verifying planes is NOT necessary since it already has been checked 1670 - * before the buffer is queued/prepared. So it can never fail. 1671 1669 */ 1672 - list_del(&(*vb)->done_entry); 1670 + ret = call_bufop(q, verify_planes_array, *vb, pb); 1671 + if (!ret) 1672 + list_del(&(*vb)->done_entry); 1673 1673 spin_unlock_irqrestore(&q->done_lock, flags); 1674 1674 1675 1675 return ret; ··· 1748 1748 struct vb2_buffer *vb = NULL; 1749 1749 int ret; 1750 1750 1751 - ret = __vb2_get_done_vb(q, &vb, nonblocking); 1751 + ret = __vb2_get_done_vb(q, &vb, pb, nonblocking); 1752 1752 if (ret < 0) 1753 1753 return ret; 1754 1754 ··· 2295 2295 * error flag is set. 2296 2296 */ 2297 2297 if (!vb2_is_streaming(q) || q->error) 2298 + return POLLERR; 2299 + 2300 + /* 2301 + * If this quirk is set and QBUF hasn't been called yet then 2302 + * return POLLERR as well. This only affects capture queues, output 2303 + * queues will always initialize waiting_for_buffers to false. 2304 + * This quirk is set by V4L2 for backwards compatibility reasons. 2305 + */ 2306 + if (q->quirk_poll_must_check_waiting_for_buffers && 2307 + q->waiting_for_buffers && (req_events & (POLLIN | POLLRDNORM))) 2298 2308 return POLLERR; 2299 2309 2300 2310 /*
+1 -1
drivers/media/v4l2-core/videobuf2-memops.c
··· 49 49 vec = frame_vector_create(nr); 50 50 if (!vec) 51 51 return ERR_PTR(-ENOMEM); 52 - ret = get_vaddr_frames(start, nr, write, 1, vec); 52 + ret = get_vaddr_frames(start & PAGE_MASK, nr, write, true, vec); 53 53 if (ret < 0) 54 54 goto out_destroy; 55 55 /* We accept only complete set of PFNs */
+12 -8
drivers/media/v4l2-core/videobuf2-v4l2.c
··· 74 74 return 0; 75 75 } 76 76 77 + static int __verify_planes_array_core(struct vb2_buffer *vb, const void *pb) 78 + { 79 + return __verify_planes_array(vb, pb); 80 + } 81 + 77 82 /** 78 83 * __verify_length() - Verify that the bytesused value for each plane fits in 79 84 * the plane length and that the data offset doesn't exceed the bytesused value. ··· 442 437 } 443 438 444 439 static const struct vb2_buf_ops v4l2_buf_ops = { 440 + .verify_planes_array = __verify_planes_array_core, 445 441 .fill_user_buffer = __fill_v4l2_buffer, 446 442 .fill_vb2_buffer = __fill_vb2_buffer, 447 443 .copy_timestamp = __copy_timestamp, ··· 771 765 q->is_output = V4L2_TYPE_IS_OUTPUT(q->type); 772 766 q->copy_timestamp = (q->timestamp_flags & V4L2_BUF_FLAG_TIMESTAMP_MASK) 773 767 == V4L2_BUF_FLAG_TIMESTAMP_COPY; 768 + /* 769 + * For compatibility with vb1: if QBUF hasn't been called yet, then 770 + * return POLLERR as well. This only affects capture queues, output 771 + * queues will always initialize waiting_for_buffers to false. 772 + */ 773 + q->quirk_poll_must_check_waiting_for_buffers = true; 774 774 775 775 return vb2_core_queue_init(q); 776 776 } ··· 829 817 else if (req_events & POLLPRI) 830 818 poll_wait(file, &fh->wait, wait); 831 819 } 832 - 833 - /* 834 - * For compatibility with vb1: if QBUF hasn't been called yet, then 835 - * return POLLERR as well. This only affects capture queues, output 836 - * queues will always initialize waiting_for_buffers to false. 837 - */ 838 - if (q->waiting_for_buffers && (req_events & (POLLIN | POLLRDNORM))) 839 - return POLLERR; 840 820 841 821 return res | vb2_core_poll(q, file, wait); 842 822 }
+7
drivers/misc/cxl/context.c
··· 223 223 cxl_ops->link_ok(ctx->afu->adapter, ctx->afu)); 224 224 flush_work(&ctx->fault_work); /* Only needed for dedicated process */ 225 225 226 + /* 227 + * Wait until no further interrupts are presented by the PSL 228 + * for this context. 229 + */ 230 + if (cxl_ops->irq_wait) 231 + cxl_ops->irq_wait(ctx); 232 + 226 233 /* release the reference to the group leader and mm handling pid */ 227 234 put_pid(ctx->pid); 228 235 put_pid(ctx->glpid);
+2
drivers/misc/cxl/cxl.h
··· 274 274 #define CXL_PSL_DSISR_An_PE (1ull << (63-4)) /* PSL Error (implementation specific) */ 275 275 #define CXL_PSL_DSISR_An_AE (1ull << (63-5)) /* AFU Error */ 276 276 #define CXL_PSL_DSISR_An_OC (1ull << (63-6)) /* OS Context Warning */ 277 + #define CXL_PSL_DSISR_PENDING (CXL_PSL_DSISR_TRANS | CXL_PSL_DSISR_An_PE | CXL_PSL_DSISR_An_AE | CXL_PSL_DSISR_An_OC) 277 278 /* NOTE: Bits 32:63 are undefined if DSISR[DS] = 1 */ 278 279 #define CXL_PSL_DSISR_An_M DSISR_NOHPTE /* PTE not found */ 279 280 #define CXL_PSL_DSISR_An_P DSISR_PROTFAULT /* Storage protection violation */ ··· 856 855 u64 dsisr, u64 errstat); 857 856 irqreturn_t (*psl_interrupt)(int irq, void *data); 858 857 int (*ack_irq)(struct cxl_context *ctx, u64 tfc, u64 psl_reset_mask); 858 + void (*irq_wait)(struct cxl_context *ctx); 859 859 int (*attach_process)(struct cxl_context *ctx, bool kernel, 860 860 u64 wed, u64 amr); 861 861 int (*detach_process)(struct cxl_context *ctx);
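The cxl.h hunk above ORs the individual fault bits into a single `CXL_PSL_DSISR_PENDING` mask so callers can ask "is any interrupt still outstanding?" with one AND. A minimal userspace sketch of the same flag-mask idiom follows; the bit positions here are made up for illustration and are not the real PSL register layout:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative fault bits in the style of the CXL_PSL_DSISR_An_*
 * defines; the positions are invented for this sketch. */
#define FAULT_TRANS (1ull << 63)
#define FAULT_PE    (1ull << 59)
#define FAULT_AE    (1ull << 58)
#define FAULT_OC    (1ull << 57)
#define FAULT_PENDING (FAULT_TRANS | FAULT_PE | FAULT_AE | FAULT_OC)

/* True if any of the interesting fault bits is still set. */
static bool fault_pending(uint64_t dsisr)
{
    return (dsisr & FAULT_PENDING) != 0;
}
```

The combined mask keeps the "is anything pending" test in sync with the list of fault bits, instead of repeating the OR at every call site.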
-1
drivers/misc/cxl/irq.c
··· 203 203 void cxl_unmap_irq(unsigned int virq, void *cookie) 204 204 { 205 205 free_irq(virq, cookie); 206 - irq_dispose_mapping(virq); 207 206 } 208 207 209 208 int cxl_register_one_irq(struct cxl *adapter,
+31
drivers/misc/cxl/native.c
··· 14 14 #include <linux/mutex.h> 15 15 #include <linux/mm.h> 16 16 #include <linux/uaccess.h> 17 + #include <linux/delay.h> 17 18 #include <asm/synch.h> 18 19 #include <misc/cxl-base.h> 19 20 ··· 798 797 return fail_psl_irq(afu, &irq_info); 799 798 } 800 799 800 + void native_irq_wait(struct cxl_context *ctx) 801 + { 802 + u64 dsisr; 803 + int timeout = 1000; 804 + int ph; 805 + 806 + /* 807 + * Wait until no further interrupts are presented by the PSL 808 + * for this context. 809 + */ 810 + while (timeout--) { 811 + ph = cxl_p2n_read(ctx->afu, CXL_PSL_PEHandle_An) & 0xffff; 812 + if (ph != ctx->pe) 813 + return; 814 + dsisr = cxl_p2n_read(ctx->afu, CXL_PSL_DSISR_An); 815 + if ((dsisr & CXL_PSL_DSISR_PENDING) == 0) 816 + return; 817 + /* 818 + * We are waiting for the workqueue to process our 819 + * irq, so need to let that run here. 820 + */ 821 + msleep(1); 822 + } 823 + 824 + dev_warn(&ctx->afu->dev, "WARNING: waiting on DSI for PE %i" 825 + " DSISR %016llx!\n", ph, dsisr); 826 + return; 827 + } 828 + 801 829 static irqreturn_t native_slice_irq_err(int irq, void *data) 802 830 { 803 831 struct cxl_afu *afu = data; ··· 1106 1076 .handle_psl_slice_error = native_handle_psl_slice_error, 1107 1077 .psl_interrupt = NULL, 1108 1078 .ack_irq = native_ack_irq, 1079 + .irq_wait = native_irq_wait, 1109 1080 .attach_process = native_attach_process, 1110 1081 .detach_process = native_detach_process, 1111 1082 .support_attributes = native_support_attributes,
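`native_irq_wait()` above is a bounded polling loop: re-read the status up to 1000 times with `msleep(1)` between reads, bail out early once the condition clears, and warn if the budget runs out. A hedged userspace sketch of the same shape, with a draining counter standing in for the register reads:

```c
#include <assert.h>
#include <stdbool.h>

/* Poll until `status` reaches zero or the retry budget runs out.
 * `status` is a stand-in for re-reading a hardware register; each
 * iteration it "drains" by one. Returns true if the condition
 * cleared in time — the kernel version instead dev_warn()s and
 * gives up on timeout. */
static bool wait_for_idle(int status, int timeout)
{
    while (timeout--) {
        if (status == 0)
            return true;
        status--;  /* the kernel code msleep(1)s here so the
                    * irq workqueue gets a chance to run */
    }
    return false;
}
```

The early returns (wrong PE handle, no pending bits) in the real function are what keep the common case cheap; the timeout only matters when the PSL misbehaves.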
+1
drivers/mmc/host/Kconfig
··· 97 97 config MMC_SDHCI_ACPI 98 98 tristate "SDHCI support for ACPI enumerated SDHCI controllers" 99 99 depends on MMC_SDHCI && ACPI 100 + select IOSF_MBI if X86 100 101 help 101 102 This selects support for ACPI enumerated SDHCI controllers, 102 103 identified by ACPI Compatibility ID PNP0D40 or specific
+81
drivers/mmc/host/sdhci-acpi.c
··· 41 41 #include <linux/mmc/pm.h> 42 42 #include <linux/mmc/slot-gpio.h> 43 43 44 + #ifdef CONFIG_X86 45 + #include <asm/cpu_device_id.h> 46 + #include <asm/iosf_mbi.h> 47 + #endif 48 + 44 49 #include "sdhci.h" 45 50 46 51 enum { ··· 120 115 static const struct sdhci_acpi_chip sdhci_acpi_chip_int = { 121 116 .ops = &sdhci_acpi_ops_int, 122 117 }; 118 + 119 + #ifdef CONFIG_X86 120 + 121 + static bool sdhci_acpi_byt(void) 122 + { 123 + static const struct x86_cpu_id byt[] = { 124 + { X86_VENDOR_INTEL, 6, 0x37 }, 125 + {} 126 + }; 127 + 128 + return x86_match_cpu(byt); 129 + } 130 + 131 + #define BYT_IOSF_SCCEP 0x63 132 + #define BYT_IOSF_OCP_NETCTRL0 0x1078 133 + #define BYT_IOSF_OCP_TIMEOUT_BASE GENMASK(10, 8) 134 + 135 + static void sdhci_acpi_byt_setting(struct device *dev) 136 + { 137 + u32 val = 0; 138 + 139 + if (!sdhci_acpi_byt()) 140 + return; 141 + 142 + if (iosf_mbi_read(BYT_IOSF_SCCEP, MBI_CR_READ, BYT_IOSF_OCP_NETCTRL0, 143 + &val)) { 144 + dev_err(dev, "%s read error\n", __func__); 145 + return; 146 + } 147 + 148 + if (!(val & BYT_IOSF_OCP_TIMEOUT_BASE)) 149 + return; 150 + 151 + val &= ~BYT_IOSF_OCP_TIMEOUT_BASE; 152 + 153 + if (iosf_mbi_write(BYT_IOSF_SCCEP, MBI_CR_WRITE, BYT_IOSF_OCP_NETCTRL0, 154 + val)) { 155 + dev_err(dev, "%s write error\n", __func__); 156 + return; 157 + } 158 + 159 + dev_dbg(dev, "%s completed\n", __func__); 160 + } 161 + 162 + static bool sdhci_acpi_byt_defer(struct device *dev) 163 + { 164 + if (!sdhci_acpi_byt()) 165 + return false; 166 + 167 + if (!iosf_mbi_available()) 168 + return true; 169 + 170 + sdhci_acpi_byt_setting(dev); 171 + 172 + return false; 173 + } 174 + 175 + #else 176 + 177 + static inline void sdhci_acpi_byt_setting(struct device *dev) 178 + { 179 + } 180 + 181 + static inline bool sdhci_acpi_byt_defer(struct device *dev) 182 + { 183 + return false; 184 + } 185 + 186 + #endif 123 187 124 188 static int bxt_get_cd(struct mmc_host *mmc) 125 189 { ··· 396 322 if (acpi_bus_get_status(device) || 
!device->status.present) 397 323 return -ENODEV; 398 324 325 + if (sdhci_acpi_byt_defer(dev)) 326 + return -EPROBE_DEFER; 327 + 399 328 hid = acpi_device_hid(device); 400 329 uid = device->pnp.unique_id; 401 330 ··· 524 447 { 525 448 struct sdhci_acpi_host *c = dev_get_drvdata(dev); 526 449 450 + sdhci_acpi_byt_setting(&c->pdev->dev); 451 + 527 452 return sdhci_resume_host(c->host); 528 453 } 529 454 ··· 548 469 static int sdhci_acpi_runtime_resume(struct device *dev) 549 470 { 550 471 struct sdhci_acpi_host *c = dev_get_drvdata(dev); 472 + 473 + sdhci_acpi_byt_setting(&c->pdev->dev); 551 474 552 475 return sdhci_runtime_resume_host(c->host); 553 476 }
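The `sdhci_acpi_byt_defer()` path above returns true (mapped to `-EPROBE_DEFER`) when the IOSF MBI provider hasn't loaded yet, so the driver core retries the probe later rather than failing outright. A sketch of that decide-or-defer shape, with the two predicates as stand-ins for `sdhci_acpi_byt()` and `iosf_mbi_available()`:

```c
#include <assert.h>
#include <stdbool.h>

#define EPROBE_DEFER 517  /* value the kernel uses for deferred probe */

/* needs_quirk ~ "is this a Baytrail part", dep_ready ~ "is the
 * IOSF MBI driver available yet". Stand-ins for the real checks. */
static int probe(bool needs_quirk, bool dep_ready)
{
    if (needs_quirk && !dep_ready)
        return -EPROBE_DEFER;  /* retried once the dependency loads */
    /* ... apply the quirk if needed, then the normal probe path ... */
    return 0;
}
```

Deferral only triggers when the quirk is actually required, so unaffected hardware never waits on the dependency.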
+5
drivers/mmc/host/sunxi-mmc.c
··· 1129 1129 MMC_CAP_1_8V_DDR | 1130 1130 MMC_CAP_ERASE | MMC_CAP_SDIO_IRQ; 1131 1131 1132 + /* TODO MMC DDR is not working on A80 */ 1133 + if (of_device_is_compatible(pdev->dev.of_node, 1134 + "allwinner,sun9i-a80-mmc")) 1135 + mmc->caps &= ~MMC_CAP_1_8V_DDR; 1136 + 1132 1137 ret = mmc_of_parse(mmc); 1133 1138 if (ret) 1134 1139 goto error_free_dma;
+1 -1
drivers/net/dsa/mv88e6xxx.c
··· 2202 2202 struct net_device *bridge) 2203 2203 { 2204 2204 struct mv88e6xxx_priv_state *ps = ds_to_priv(ds); 2205 - int i, err; 2205 + int i, err = 0; 2206 2206 2207 2207 mutex_lock(&ps->smi_mutex); 2208 2208
+39 -14
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 588 588 struct page *page; 589 589 dma_addr_t mapping; 590 590 u16 sw_prod = rxr->rx_sw_agg_prod; 591 + unsigned int offset = 0; 591 592 592 - page = alloc_page(gfp); 593 - if (!page) 594 - return -ENOMEM; 593 + if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) { 594 + page = rxr->rx_page; 595 + if (!page) { 596 + page = alloc_page(gfp); 597 + if (!page) 598 + return -ENOMEM; 599 + rxr->rx_page = page; 600 + rxr->rx_page_offset = 0; 601 + } 602 + offset = rxr->rx_page_offset; 603 + rxr->rx_page_offset += BNXT_RX_PAGE_SIZE; 604 + if (rxr->rx_page_offset == PAGE_SIZE) 605 + rxr->rx_page = NULL; 606 + else 607 + get_page(page); 608 + } else { 609 + page = alloc_page(gfp); 610 + if (!page) 611 + return -ENOMEM; 612 + } 595 613 596 - mapping = dma_map_page(&pdev->dev, page, 0, PAGE_SIZE, 614 + mapping = dma_map_page(&pdev->dev, page, offset, BNXT_RX_PAGE_SIZE, 597 615 PCI_DMA_FROMDEVICE); 598 616 if (dma_mapping_error(&pdev->dev, mapping)) { 599 617 __free_page(page); ··· 626 608 rxr->rx_sw_agg_prod = NEXT_RX_AGG(sw_prod); 627 609 628 610 rx_agg_buf->page = page; 611 + rx_agg_buf->offset = offset; 629 612 rx_agg_buf->mapping = mapping; 630 613 rxbd->rx_bd_haddr = cpu_to_le64(mapping); 631 614 rxbd->rx_bd_opaque = sw_prod; ··· 668 649 page = cons_rx_buf->page; 669 650 cons_rx_buf->page = NULL; 670 651 prod_rx_buf->page = page; 652 + prod_rx_buf->offset = cons_rx_buf->offset; 671 653 672 654 prod_rx_buf->mapping = cons_rx_buf->mapping; 673 655 ··· 736 716 RX_AGG_CMP_LEN) >> RX_AGG_CMP_LEN_SHIFT; 737 717 738 718 cons_rx_buf = &rxr->rx_agg_ring[cons]; 739 - skb_fill_page_desc(skb, i, cons_rx_buf->page, 0, frag_len); 719 + skb_fill_page_desc(skb, i, cons_rx_buf->page, 720 + cons_rx_buf->offset, frag_len); 740 721 __clear_bit(cons, rxr->rx_agg_bmap); 741 722 742 723 /* It is possible for bnxt_alloc_rx_page() to allocate ··· 768 747 return NULL; 769 748 } 770 749 771 - dma_unmap_page(&pdev->dev, mapping, PAGE_SIZE, 750 + dma_unmap_page(&pdev->dev, mapping, BNXT_RX_PAGE_SIZE, 772 751 
PCI_DMA_FROMDEVICE); 773 752 774 753 skb->data_len += frag_len; ··· 1656 1635 1657 1636 dma_unmap_page(&pdev->dev, 1658 1637 dma_unmap_addr(rx_agg_buf, mapping), 1659 - PAGE_SIZE, PCI_DMA_FROMDEVICE); 1638 + BNXT_RX_PAGE_SIZE, PCI_DMA_FROMDEVICE); 1660 1639 1661 1640 rx_agg_buf->page = NULL; 1662 1641 __clear_bit(j, rxr->rx_agg_bmap); 1663 1642 1664 1643 __free_page(page); 1644 + } 1645 + if (rxr->rx_page) { 1646 + __free_page(rxr->rx_page); 1647 + rxr->rx_page = NULL; 1665 1648 } 1666 1649 } 1667 1650 } ··· 2049 2024 if (!(bp->flags & BNXT_FLAG_AGG_RINGS)) 2050 2025 return 0; 2051 2026 2052 - type = ((u32)PAGE_SIZE << RX_BD_LEN_SHIFT) | 2027 + type = ((u32)BNXT_RX_PAGE_SIZE << RX_BD_LEN_SHIFT) | 2053 2028 RX_BD_TYPE_RX_AGG_BD | RX_BD_FLAGS_SOP; 2054 2029 2055 2030 bnxt_init_rxbd_pages(ring, type); ··· 2240 2215 bp->rx_agg_nr_pages = 0; 2241 2216 2242 2217 if (bp->flags & BNXT_FLAG_TPA) 2243 - agg_factor = 4; 2218 + agg_factor = min_t(u32, 4, 65536 / BNXT_RX_PAGE_SIZE); 2244 2219 2245 2220 bp->flags &= ~BNXT_FLAG_JUMBO; 2246 2221 if (rx_space > PAGE_SIZE) { ··· 3101 3076 /* Number of segs are log2 units, and first packet is not 3102 3077 * included as part of this units. 3103 3078 */ 3104 - if (mss <= PAGE_SIZE) { 3105 - n = PAGE_SIZE / mss; 3079 + if (mss <= BNXT_RX_PAGE_SIZE) { 3080 + n = BNXT_RX_PAGE_SIZE / mss; 3106 3081 nsegs = (MAX_SKB_FRAGS - 1) * n; 3107 3082 } else { 3108 - n = mss / PAGE_SIZE; 3109 - if (mss & (PAGE_SIZE - 1)) 3083 + n = mss / BNXT_RX_PAGE_SIZE; 3084 + if (mss & (BNXT_RX_PAGE_SIZE - 1)) 3110 3085 n++; 3111 3086 nsegs = (MAX_SKB_FRAGS - n) / n; 3112 3087 } ··· 4392 4367 if (bp->flags & BNXT_FLAG_MSIX_CAP) 4393 4368 rc = bnxt_setup_msix(bp); 4394 4369 4395 - if (!(bp->flags & BNXT_FLAG_USING_MSIX)) { 4370 + if (!(bp->flags & BNXT_FLAG_USING_MSIX) && BNXT_PF(bp)) { 4396 4371 /* fallback to INTA */ 4397 4372 rc = bnxt_setup_inta(bp); 4398 4373 }
+13
drivers/net/ethernet/broadcom/bnxt/bnxt.h
··· 407 407 408 408 #define BNXT_PAGE_SIZE (1 << BNXT_PAGE_SHIFT) 409 409 410 + /* The RXBD length is 16-bit so we can only support page sizes < 64K */ 411 + #if (PAGE_SHIFT > 15) 412 + #define BNXT_RX_PAGE_SHIFT 15 413 + #else 414 + #define BNXT_RX_PAGE_SHIFT PAGE_SHIFT 415 + #endif 416 + 417 + #define BNXT_RX_PAGE_SIZE (1 << BNXT_RX_PAGE_SHIFT) 418 + 410 419 #define BNXT_MIN_PKT_SIZE 45 411 420 412 421 #define BNXT_NUM_TESTS(bp) 0 ··· 515 506 516 507 struct bnxt_sw_rx_agg_bd { 517 508 struct page *page; 509 + unsigned int offset; 518 510 dma_addr_t mapping; 519 511 }; 520 512 ··· 595 585 596 586 unsigned long *rx_agg_bmap; 597 587 u16 rx_agg_bmap_size; 588 + 589 + struct page *rx_page; 590 + unsigned int rx_page_offset; 598 591 599 592 dma_addr_t rx_desc_mapping[MAX_RX_PAGES]; 600 593 dma_addr_t rx_agg_desc_mapping[MAX_RX_AGG_PAGES];
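The bnxt.h hunk caps the aggregation buffer at 32KB because the RXBD length field is 16-bit: on a 64KB-page architecture, `PAGE_SHIFT` would overflow it, so `BNXT_RX_PAGE_SHIFT` clamps to 15. The same clamp, parameterized so it is testable on any host:

```c
#include <assert.h>

/* Clamp a page shift to 15 (32KB) so buffer lengths always fit in
 * a 16-bit descriptor field, mirroring BNXT_RX_PAGE_SHIFT. */
static unsigned int rx_page_shift(unsigned int page_shift)
{
    return page_shift > 15 ? 15 : page_shift;
}

static unsigned int rx_page_size(unsigned int page_shift)
{
    return 1u << rx_page_shift(page_shift);
}
```

This is why the bnxt.c hunk also grows a `rx_page`/`rx_page_offset` pair: on 64KB-page systems one `alloc_page()` now backs two 32KB RX buffers, carved out at successive offsets.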
+21 -13
drivers/net/ethernet/cadence/macb.c
··· 440 440 snprintf(bp->mii_bus->id, MII_BUS_ID_SIZE, "%s-%x", 441 441 bp->pdev->name, bp->pdev->id); 442 442 bp->mii_bus->priv = bp; 443 - bp->mii_bus->parent = &bp->dev->dev; 443 + bp->mii_bus->parent = &bp->pdev->dev; 444 444 pdata = dev_get_platdata(&bp->pdev->dev); 445 445 446 446 dev_set_drvdata(&bp->dev->dev, bp->mii_bus); ··· 458 458 struct phy_device *phydev; 459 459 460 460 phydev = mdiobus_scan(bp->mii_bus, i); 461 - if (IS_ERR(phydev)) { 461 + if (IS_ERR(phydev) && 462 + PTR_ERR(phydev) != -ENODEV) { 462 463 err = PTR_ERR(phydev); 463 464 break; 464 465 } ··· 3006 3005 if (err) 3007 3006 goto err_out_free_netdev; 3008 3007 3008 + err = macb_mii_init(bp); 3009 + if (err) 3010 + goto err_out_free_netdev; 3011 + 3012 + phydev = bp->phy_dev; 3013 + 3014 + netif_carrier_off(dev); 3015 + 3009 3016 err = register_netdev(dev); 3010 3017 if (err) { 3011 3018 dev_err(&pdev->dev, "Cannot register net device, aborting.\n"); 3012 - goto err_out_unregister_netdev; 3019 + goto err_out_unregister_mdio; 3013 3020 } 3014 3021 3015 - err = macb_mii_init(bp); 3016 - if (err) 3017 - goto err_out_unregister_netdev; 3018 - 3019 - netif_carrier_off(dev); 3022 + phy_attached_info(phydev); 3020 3023 3021 3024 netdev_info(dev, "Cadence %s rev 0x%08x at 0x%08lx irq %d (%pM)\n", 3022 3025 macb_is_gem(bp) ? "GEM" : "MACB", macb_readl(bp, MID), 3023 3026 dev->base_addr, dev->irq, dev->dev_addr); 3024 3027 3025 - phydev = bp->phy_dev; 3026 - phy_attached_info(phydev); 3027 - 3028 3028 return 0; 3029 3029 3030 - err_out_unregister_netdev: 3031 - unregister_netdev(dev); 3030 + err_out_unregister_mdio: 3031 + phy_disconnect(bp->phy_dev); 3032 + mdiobus_unregister(bp->mii_bus); 3033 + mdiobus_free(bp->mii_bus); 3034 + 3035 + /* Shutdown the PHY if there is a GPIO reset */ 3036 + if (bp->reset_gpio) 3037 + gpiod_set_value(bp->reset_gpio, 0); 3032 3038 3033 3039 err_out_free_netdev: 3034 3040 free_netdev(dev);
+2 -1
drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
··· 576 576 unsigned int nq0 = adap2pinfo(adap, 0)->nqsets; 577 577 unsigned int nq1 = adap->port[1] ? adap2pinfo(adap, 1)->nqsets : 1; 578 578 u8 cpus[SGE_QSETS + 1]; 579 - u16 rspq_map[RSS_TABLE_SIZE]; 579 + u16 rspq_map[RSS_TABLE_SIZE + 1]; 580 580 581 581 for (i = 0; i < SGE_QSETS; ++i) 582 582 cpus[i] = i; ··· 586 586 rspq_map[i] = i % nq0; 587 587 rspq_map[i + RSS_TABLE_SIZE / 2] = (i % nq1) + nq0; 588 588 } 589 + rspq_map[RSS_TABLE_SIZE] = 0xffff; /* terminator */ 589 590 590 591 t3_config_rss(adap, F_RQFEEDBACKENABLE | F_TNLLKPEN | F_TNLMAPEN | 591 592 F_TNLPRTEN | F_TNL2TUPEN | F_TNL4TUPEN |
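The cxgb3 fix grows `rspq_map` by one slot and writes `0xffff` into it as a terminator, so a consumer walking the table knows where it ends. A sketch of that sentinel-terminated-table convention (assuming, as the patch comment does, that `0xffff` marks end-of-table):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define TERMINATOR 0xffff

/* Count entries in a 0xffff-terminated u16 table. */
static size_t map_len(const uint16_t *map)
{
    size_t n = 0;
    while (map[n] != TERMINATOR)
        n++;
    return n;
}
```

Without the extra slot, a walker like this would read past the end of the array — exactly the overrun the one-element enlargement prevents.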
+2 -4
drivers/net/ethernet/marvell/mvneta.c
··· 3354 3354 /* Enable per-CPU interrupts on the CPU that is 3355 3355 * brought up. 3356 3356 */ 3357 - smp_call_function_single(cpu, mvneta_percpu_enable, 3358 - pp, true); 3357 + mvneta_percpu_enable(pp); 3359 3358 3360 3359 /* Enable per-CPU interrupt on the one CPU we care 3361 3360 * about. ··· 3386 3387 /* Disable per-CPU interrupts on the CPU that is 3387 3388 * brought down. 3388 3389 */ 3389 - smp_call_function_single(cpu, mvneta_percpu_disable, 3390 - pp, true); 3390 + mvneta_percpu_disable(pp); 3391 3391 3392 3392 break; 3393 3393 case CPU_DEAD:
+2
drivers/net/ethernet/marvell/pxa168_eth.c
··· 979 979 return 0; 980 980 981 981 pep->phy = mdiobus_scan(pep->smi_bus, pep->phy_addr); 982 + if (IS_ERR(pep->phy)) 983 + return PTR_ERR(pep->phy); 982 984 if (!pep->phy) 983 985 return -ENODEV; 984 986
+1
drivers/net/ethernet/mellanox/mlx5/core/Kconfig
··· 14 14 bool "Mellanox Technologies ConnectX-4 Ethernet support" 15 15 depends on NETDEVICES && ETHERNET && PCI && MLX5_CORE 16 16 select PTP_1588_CLOCK 17 + select VXLAN if MLX5_CORE=y 17 18 default n 18 19 ---help--- 19 20 Ethernet support in Mellanox Technologies ConnectX-4 NIC.
+1
drivers/net/ethernet/mellanox/mlx5/core/en.h
··· 525 525 struct mlx5e_vxlan_db vxlan; 526 526 527 527 struct mlx5e_params params; 528 + struct workqueue_struct *wq; 528 529 struct work_struct update_carrier_work; 529 530 struct work_struct set_rx_mode_work; 530 531 struct delayed_work update_stats_work;
+21 -13
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 231 231 mutex_lock(&priv->state_lock); 232 232 if (test_bit(MLX5E_STATE_OPENED, &priv->state)) { 233 233 mlx5e_update_stats(priv); 234 - schedule_delayed_work(dwork, 235 - msecs_to_jiffies( 236 - MLX5E_UPDATE_STATS_INTERVAL)); 234 + queue_delayed_work(priv->wq, dwork, 235 + msecs_to_jiffies(MLX5E_UPDATE_STATS_INTERVAL)); 237 236 } 238 237 mutex_unlock(&priv->state_lock); 239 238 } ··· 248 249 switch (event) { 249 250 case MLX5_DEV_EVENT_PORT_UP: 250 251 case MLX5_DEV_EVENT_PORT_DOWN: 251 - schedule_work(&priv->update_carrier_work); 252 + queue_work(priv->wq, &priv->update_carrier_work); 252 253 break; 253 254 254 255 default: ··· 1694 1695 priv->netdev->rx_cpu_rmap = priv->mdev->rmap; 1695 1696 #endif 1696 1697 1697 - schedule_delayed_work(&priv->update_stats_work, 0); 1698 + queue_delayed_work(priv->wq, &priv->update_stats_work, 0); 1698 1699 1699 1700 return 0; 1700 1701 ··· 2201 2202 { 2202 2203 struct mlx5e_priv *priv = netdev_priv(dev); 2203 2204 2204 - schedule_work(&priv->set_rx_mode_work); 2205 + queue_work(priv->wq, &priv->set_rx_mode_work); 2205 2206 } 2206 2207 2207 2208 static int mlx5e_set_mac(struct net_device *netdev, void *addr) ··· 2216 2217 ether_addr_copy(netdev->dev_addr, saddr->sa_data); 2217 2218 netif_addr_unlock_bh(netdev); 2218 2219 2219 - schedule_work(&priv->set_rx_mode_work); 2220 + queue_work(priv->wq, &priv->set_rx_mode_work); 2220 2221 2221 2222 return 0; 2222 2223 } ··· 2502 2503 if (!mlx5e_vxlan_allowed(priv->mdev)) 2503 2504 return; 2504 2505 2505 - mlx5e_vxlan_add_port(priv, be16_to_cpu(port)); 2506 + mlx5e_vxlan_queue_work(priv, sa_family, be16_to_cpu(port), 1); 2506 2507 } 2507 2508 2508 2509 static void mlx5e_del_vxlan_port(struct net_device *netdev, ··· 2513 2514 if (!mlx5e_vxlan_allowed(priv->mdev)) 2514 2515 return; 2515 2516 2516 - mlx5e_vxlan_del_port(priv, be16_to_cpu(port)); 2517 + mlx5e_vxlan_queue_work(priv, sa_family, be16_to_cpu(port), 0); 2517 2518 } 2518 2519 2519 2520 static netdev_features_t 
mlx5e_vxlan_features_check(struct mlx5e_priv *priv, ··· 2946 2947 2947 2948 priv = netdev_priv(netdev); 2948 2949 2950 + priv->wq = create_singlethread_workqueue("mlx5e"); 2951 + if (!priv->wq) 2952 + goto err_free_netdev; 2953 + 2949 2954 err = mlx5_alloc_map_uar(mdev, &priv->cq_uar, false); 2950 2955 if (err) { 2951 2956 mlx5_core_err(mdev, "alloc_map uar failed, %d\n", err); 2952 - goto err_free_netdev; 2957 + goto err_destroy_wq; 2953 2958 } 2954 2959 2955 2960 err = mlx5_core_alloc_pd(mdev, &priv->pdn); ··· 3037 3034 } 3038 3035 3039 3036 mlx5e_enable_async_events(priv); 3040 - schedule_work(&priv->set_rx_mode_work); 3037 + queue_work(priv->wq, &priv->set_rx_mode_work); 3041 3038 3042 3039 return priv; 3043 3040 ··· 3075 3072 err_unmap_free_uar: 3076 3073 mlx5_unmap_free_uar(mdev, &priv->cq_uar); 3077 3074 3075 + err_destroy_wq: 3076 + destroy_workqueue(priv->wq); 3077 + 3078 3078 err_free_netdev: 3079 3079 free_netdev(netdev); 3080 3080 ··· 3091 3085 3092 3086 set_bit(MLX5E_STATE_DESTROYING, &priv->state); 3093 3087 3094 - schedule_work(&priv->set_rx_mode_work); 3088 + queue_work(priv->wq, &priv->set_rx_mode_work); 3095 3089 mlx5e_disable_async_events(priv); 3096 - flush_scheduled_work(); 3090 + flush_workqueue(priv->wq); 3097 3091 if (test_bit(MLX5_INTERFACE_STATE_SHUTDOWN, &mdev->intf_state)) { 3098 3092 netif_device_detach(netdev); 3099 3093 mutex_lock(&priv->state_lock); ··· 3117 3111 mlx5_core_dealloc_transport_domain(priv->mdev, priv->tdn); 3118 3112 mlx5_core_dealloc_pd(priv->mdev, priv->pdn); 3119 3113 mlx5_unmap_free_uar(priv->mdev, &priv->cq_uar); 3114 + cancel_delayed_work_sync(&priv->update_stats_work); 3115 + destroy_workqueue(priv->wq); 3120 3116 3121 3117 if (!test_bit(MLX5_INTERFACE_STATE_SHUTDOWN, &mdev->intf_state)) 3122 3118 free_netdev(netdev);
+4 -2
drivers/net/ethernet/mellanox/mlx5/core/uar.c
··· 269 269 270 270 void mlx5_unmap_free_uar(struct mlx5_core_dev *mdev, struct mlx5_uar *uar) 271 271 { 272 - iounmap(uar->map); 273 - iounmap(uar->bf_map); 272 + if (uar->map) 273 + iounmap(uar->map); 274 + else 275 + iounmap(uar->bf_map); 274 276 mlx5_cmd_free_uar(mdev, uar->index); 275 277 } 276 278 EXPORT_SYMBOL(mlx5_unmap_free_uar);
+38 -12
drivers/net/ethernet/mellanox/mlx5/core/vxlan.c
··· 95 95 return vxlan; 96 96 } 97 97 98 - int mlx5e_vxlan_add_port(struct mlx5e_priv *priv, u16 port) 98 + static void mlx5e_vxlan_add_port(struct work_struct *work) 99 99 { 100 + struct mlx5e_vxlan_work *vxlan_work = 101 + container_of(work, struct mlx5e_vxlan_work, work); 102 + struct mlx5e_priv *priv = vxlan_work->priv; 100 103 struct mlx5e_vxlan_db *vxlan_db = &priv->vxlan; 104 + u16 port = vxlan_work->port; 101 105 struct mlx5e_vxlan *vxlan; 102 106 int err; 103 107 104 - err = mlx5e_vxlan_core_add_port_cmd(priv->mdev, port); 105 - if (err) 106 - return err; 108 + if (mlx5e_vxlan_core_add_port_cmd(priv->mdev, port)) 109 + goto free_work; 107 110 108 111 vxlan = kzalloc(sizeof(*vxlan), GFP_KERNEL); 109 - if (!vxlan) { 110 - err = -ENOMEM; 112 + if (!vxlan) 111 113 goto err_delete_port; 112 - } 113 114 114 115 vxlan->udp_port = port; 115 116 ··· 120 119 if (err) 121 120 goto err_free; 122 121 123 - return 0; 122 + goto free_work; 124 123 125 124 err_free: 126 125 kfree(vxlan); 127 126 err_delete_port: 128 127 mlx5e_vxlan_core_del_port_cmd(priv->mdev, port); 129 - return err; 128 + free_work: 129 + kfree(vxlan_work); 130 130 } 131 131 132 132 static void __mlx5e_vxlan_core_del_port(struct mlx5e_priv *priv, u16 port) ··· 147 145 kfree(vxlan); 148 146 } 149 147 150 - void mlx5e_vxlan_del_port(struct mlx5e_priv *priv, u16 port) 148 + static void mlx5e_vxlan_del_port(struct work_struct *work) 151 149 { 152 - if (!mlx5e_vxlan_lookup_port(priv, port)) 153 - return; 150 + struct mlx5e_vxlan_work *vxlan_work = 151 + container_of(work, struct mlx5e_vxlan_work, work); 152 + struct mlx5e_priv *priv = vxlan_work->priv; 153 + u16 port = vxlan_work->port; 154 154 155 155 __mlx5e_vxlan_core_del_port(priv, port); 156 + 157 + kfree(vxlan_work); 158 + } 159 + 160 + void mlx5e_vxlan_queue_work(struct mlx5e_priv *priv, sa_family_t sa_family, 161 + u16 port, int add) 162 + { 163 + struct mlx5e_vxlan_work *vxlan_work; 164 + 165 + vxlan_work = kmalloc(sizeof(*vxlan_work), GFP_ATOMIC); 
166 + if (!vxlan_work) 167 + return; 168 + 169 + if (add) 170 + INIT_WORK(&vxlan_work->work, mlx5e_vxlan_add_port); 171 + else 172 + INIT_WORK(&vxlan_work->work, mlx5e_vxlan_del_port); 173 + 174 + vxlan_work->priv = priv; 175 + vxlan_work->port = port; 176 + vxlan_work->sa_family = sa_family; 177 + queue_work(priv->wq, &vxlan_work->work); 156 178 } 157 179 158 180 void mlx5e_vxlan_cleanup(struct mlx5e_priv *priv)
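The vxlan.c rewrite above moves port add/del off the caller's (atomic) context onto a workqueue: allocate a work item with `GFP_ATOMIC`, capture the parameters in it, queue it, and let worker context run the firmware commands that may sleep; the item frees itself when done. A single-threaded userspace sketch of that allocate-queue-run-free shape (all names here are illustrative, not the mlx5e API):

```c
#include <assert.h>
#include <stdlib.h>

struct work {
    void (*fn)(struct work *w);
    struct work *next;
    int port;                  /* parameters captured for the worker */
};

static struct work *head;
static int ports_added;

static void queue_work(struct work *w) { w->next = head; head = w; }

static void add_port_fn(struct work *w) { ports_added += 1; free(w); }

/* Caller side: capture arguments now, run the slow part later. */
static void queue_add_port(int port)
{
    struct work *w = malloc(sizeof(*w));
    if (!w)
        return;                /* as in the patch: drop on alloc failure */
    w->fn = add_port_fn;
    w->port = port;
    queue_work(w);
}

/* Worker side: drain the queue; each item frees itself. */
static void run_queue(void)
{
    while (head) {
        struct work *w = head;
        head = w->next;
        w->fn(w);
    }
}

static int demo(void)
{
    queue_add_port(4789);
    queue_add_port(8472);
    run_queue();
    return ports_added;
}
```

This pairs with the en_main.c hunk that creates a private `priv->wq`: queueing everything on one driver-owned workqueue makes `flush_workqueue()` at teardown sufficient to know no port work is still in flight.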
+9 -2
drivers/net/ethernet/mellanox/mlx5/core/vxlan.h
··· 39 39 u16 udp_port; 40 40 }; 41 41 42 + struct mlx5e_vxlan_work { 43 + struct work_struct work; 44 + struct mlx5e_priv *priv; 45 + sa_family_t sa_family; 46 + u16 port; 47 + }; 48 + 42 49 static inline bool mlx5e_vxlan_allowed(struct mlx5_core_dev *mdev) 43 50 { 44 51 return (MLX5_CAP_ETH(mdev, tunnel_stateless_vxlan) && ··· 53 46 } 54 47 55 48 void mlx5e_vxlan_init(struct mlx5e_priv *priv); 56 - int mlx5e_vxlan_add_port(struct mlx5e_priv *priv, u16 port); 57 - void mlx5e_vxlan_del_port(struct mlx5e_priv *priv, u16 port); 49 + void mlx5e_vxlan_queue_work(struct mlx5e_priv *priv, sa_family_t sa_family, 50 + u16 port, int add); 58 51 struct mlx5e_vxlan *mlx5e_vxlan_lookup_port(struct mlx5e_priv *priv, u16 port); 59 52 void mlx5e_vxlan_cleanup(struct mlx5e_priv *priv); 60 53
+2 -2
drivers/net/ethernet/myricom/myri10ge/myri10ge.c
··· 2668 2668 2669 2669 del_timer_sync(&mgp->watchdog_timer); 2670 2670 mgp->running = MYRI10GE_ETH_STOPPING; 2671 - local_bh_disable(); /* myri10ge_ss_lock_napi needs bh disabled */ 2672 2671 for (i = 0; i < mgp->num_slices; i++) { 2673 2672 napi_disable(&mgp->ss[i].napi); 2673 + local_bh_disable(); /* myri10ge_ss_lock_napi needs this */ 2674 2674 /* Lock the slice to prevent the busy_poll handler from 2675 2675 * accessing it. Later when we bring the NIC up, myri10ge_open 2676 2676 * resets the slice including this lock. ··· 2679 2679 pr_info("Slice %d locked\n", i); 2680 2680 mdelay(1); 2681 2681 } 2682 + local_bh_enable(); 2682 2683 } 2683 - local_bh_enable(); 2684 2684 netif_carrier_off(dev); 2685 2685 2686 2686 netif_tx_stop_all_queues(dev);
+13 -2
drivers/net/ethernet/sfc/ef10.c
··· 1920 1920 return 0; 1921 1921 } 1922 1922 1923 + if (nic_data->datapath_caps & 1924 + 1 << MC_CMD_GET_CAPABILITIES_OUT_RX_RSS_LIMITED_LBN) 1925 + return -EOPNOTSUPP; 1926 + 1923 1927 MCDI_SET_DWORD(inbuf, RSS_CONTEXT_ALLOC_IN_UPSTREAM_PORT_ID, 1924 1928 nic_data->vport_id); 1925 1929 MCDI_SET_DWORD(inbuf, RSS_CONTEXT_ALLOC_IN_TYPE, alloc_type); ··· 2927 2923 bool replacing) 2928 2924 { 2929 2925 struct efx_ef10_nic_data *nic_data = efx->nic_data; 2926 + u32 flags = spec->flags; 2930 2927 2931 2928 memset(inbuf, 0, MC_CMD_FILTER_OP_IN_LEN); 2929 + 2930 + /* Remove RSS flag if we don't have an RSS context. */ 2931 + if (flags & EFX_FILTER_FLAG_RX_RSS && 2932 + spec->rss_context == EFX_FILTER_RSS_CONTEXT_DEFAULT && 2933 + nic_data->rx_rss_context == EFX_EF10_RSS_CONTEXT_INVALID) 2934 + flags &= ~EFX_FILTER_FLAG_RX_RSS; 2932 2935 2933 2936 if (replacing) { 2934 2937 MCDI_SET_DWORD(inbuf, FILTER_OP_IN_OP, ··· 2996 2985 spec->dmaq_id == EFX_FILTER_RX_DMAQ_ID_DROP ? 2997 2986 0 : spec->dmaq_id); 2998 2987 MCDI_SET_DWORD(inbuf, FILTER_OP_IN_RX_MODE, 2999 - (spec->flags & EFX_FILTER_FLAG_RX_RSS) ? 2988 + (flags & EFX_FILTER_FLAG_RX_RSS) ? 3000 2989 MC_CMD_FILTER_OP_IN_RX_MODE_RSS : 3001 2990 MC_CMD_FILTER_OP_IN_RX_MODE_SIMPLE); 3002 - if (spec->flags & EFX_FILTER_FLAG_RX_RSS) 2991 + if (flags & EFX_FILTER_FLAG_RX_RSS) 3003 2992 MCDI_SET_DWORD(inbuf, FILTER_OP_IN_RX_CONTEXT, 3004 2993 spec->rss_context != 3005 2994 EFX_FILTER_RSS_CONTEXT_DEFAULT ?
+37 -32
drivers/net/ethernet/ti/cpsw.c
··· 367 367 spinlock_t lock; 368 368 struct platform_device *pdev; 369 369 struct net_device *ndev; 370 - struct device_node *phy_node; 371 370 struct napi_struct napi_rx; 372 371 struct napi_struct napi_tx; 373 372 struct device *dev; ··· 1141 1142 cpsw_ale_add_mcast(priv->ale, priv->ndev->broadcast, 1142 1143 1 << slave_port, 0, 0, ALE_MCAST_FWD_2); 1143 1144 1144 - if (priv->phy_node) 1145 - slave->phy = of_phy_connect(priv->ndev, priv->phy_node, 1145 + if (slave->data->phy_node) { 1146 + slave->phy = of_phy_connect(priv->ndev, slave->data->phy_node, 1146 1147 &cpsw_adjust_link, 0, slave->data->phy_if); 1147 - else 1148 + if (!slave->phy) { 1149 + dev_err(priv->dev, "phy \"%s\" not found on slave %d\n", 1150 + slave->data->phy_node->full_name, 1151 + slave->slave_num); 1152 + return; 1153 + } 1154 + } else { 1148 1155 slave->phy = phy_connect(priv->ndev, slave->data->phy_id, 1149 1156 &cpsw_adjust_link, slave->data->phy_if); 1150 - if (IS_ERR(slave->phy)) { 1151 - dev_err(priv->dev, "phy %s not found on slave %d\n", 1152 - slave->data->phy_id, slave->slave_num); 1153 - slave->phy = NULL; 1154 - } else { 1155 - phy_attached_info(slave->phy); 1156 - 1157 - phy_start(slave->phy); 1158 - 1159 - /* Configure GMII_SEL register */ 1160 - cpsw_phy_sel(&priv->pdev->dev, slave->phy->interface, 1161 - slave->slave_num); 1157 + if (IS_ERR(slave->phy)) { 1158 + dev_err(priv->dev, 1159 + "phy \"%s\" not found on slave %d, err %ld\n", 1160 + slave->data->phy_id, slave->slave_num, 1161 + PTR_ERR(slave->phy)); 1162 + slave->phy = NULL; 1163 + return; 1164 + } 1162 1165 } 1166 + 1167 + phy_attached_info(slave->phy); 1168 + 1169 + phy_start(slave->phy); 1170 + 1171 + /* Configure GMII_SEL register */ 1172 + cpsw_phy_sel(&priv->pdev->dev, slave->phy->interface, slave->slave_num); 1163 1173 } 1164 1174 1165 1175 static inline void cpsw_add_default_vlan(struct cpsw_priv *priv) ··· 1940 1932 slave->port_vlan = data->dual_emac_res_vlan; 1941 1933 } 1942 1934 1943 - static int 
cpsw_probe_dt(struct cpsw_priv *priv, 1935 + static int cpsw_probe_dt(struct cpsw_platform_data *data, 1944 1936 struct platform_device *pdev) 1945 1937 { 1946 1938 struct device_node *node = pdev->dev.of_node; 1947 1939 struct device_node *slave_node; 1948 - struct cpsw_platform_data *data = &priv->data; 1949 1940 int i = 0, ret; 1950 1941 u32 prop; 1951 1942 ··· 2032 2025 if (strcmp(slave_node->name, "slave")) 2033 2026 continue; 2034 2027 2035 - priv->phy_node = of_parse_phandle(slave_node, "phy-handle", 0); 2028 + slave_data->phy_node = of_parse_phandle(slave_node, 2029 + "phy-handle", 0); 2036 2030 parp = of_get_property(slave_node, "phy_id", &lenp); 2037 - if (of_phy_is_fixed_link(slave_node)) { 2038 - struct device_node *phy_node; 2039 - struct phy_device *phy_dev; 2040 - 2031 + if (slave_data->phy_node) { 2032 + dev_dbg(&pdev->dev, 2033 + "slave[%d] using phy-handle=\"%s\"\n", 2034 + i, slave_data->phy_node->full_name); 2035 + } else if (of_phy_is_fixed_link(slave_node)) { 2041 2036 /* In the case of a fixed PHY, the DT node associated 2042 2037 * to the PHY is the Ethernet MAC DT node. 
2043 2038 */ 2044 2039 ret = of_phy_register_fixed_link(slave_node); 2045 2040 if (ret) 2046 2041 return ret; 2047 - phy_node = of_node_get(slave_node); 2048 - phy_dev = of_phy_find_device(phy_node); 2049 - if (!phy_dev) 2050 - return -ENODEV; 2051 - snprintf(slave_data->phy_id, sizeof(slave_data->phy_id), 2052 - PHY_ID_FMT, phy_dev->mdio.bus->id, 2053 - phy_dev->mdio.addr); 2042 + slave_data->phy_node = of_node_get(slave_node); 2054 2043 } else if (parp) { 2055 2044 u32 phyid; 2056 2045 struct device_node *mdio_node; ··· 2067 2064 snprintf(slave_data->phy_id, sizeof(slave_data->phy_id), 2068 2065 PHY_ID_FMT, mdio->name, phyid); 2069 2066 } else { 2070 - dev_err(&pdev->dev, "No slave[%d] phy_id or fixed-link property\n", i); 2067 + dev_err(&pdev->dev, 2068 + "No slave[%d] phy_id, phy-handle, or fixed-link property\n", 2069 + i); 2071 2070 goto no_phy_slave; 2072 2071 } 2073 2072 slave_data->phy_if = of_get_phy_mode(slave_node); ··· 2271 2266 /* Select default pin state */ 2272 2267 pinctrl_pm_select_default_state(&pdev->dev); 2273 2268 2274 - if (cpsw_probe_dt(priv, pdev)) { 2269 + if (cpsw_probe_dt(&priv->data, pdev)) { 2275 2270 dev_err(&pdev->dev, "cpsw: platform data missing\n"); 2276 2271 ret = -ENODEV; 2277 2272 goto clean_runtime_disable_ret;
+1
drivers/net/ethernet/ti/cpsw.h
··· 18 18 #include <linux/phy.h> 19 19 20 20 struct cpsw_slave_data { 21 + struct device_node *phy_node; 21 22 char phy_id[MII_BUS_ID_SIZE]; 22 23 int phy_if; 23 24 u8 mac_addr[ETH_ALEN];
+4 -1
drivers/net/ethernet/ti/davinci_emac.c
··· 1512 1512 1513 1513 /* TODO: Add phy read and write and private statistics get feature */ 1514 1514 1515 - return phy_mii_ioctl(priv->phydev, ifrq, cmd); 1515 + if (priv->phydev) 1516 + return phy_mii_ioctl(priv->phydev, ifrq, cmd); 1517 + else 1518 + return -EOPNOTSUPP; 1516 1519 } 1517 1520 1518 1521 static int match_first_device(struct device *dev, void *data)
+1 -1
drivers/net/ethernet/toshiba/ps3_gelic_wireless.c
··· 1622 1622 continue; 1623 1623 1624 1624 /* copy hw scan info */ 1625 - memcpy(target->hwinfo, scan_info, scan_info->size); 1625 + memcpy(target->hwinfo, scan_info, be16_to_cpu(scan_info->size)); 1626 1626 target->essid_len = strnlen(scan_info->essid, 1627 1627 sizeof(scan_info->essid)); 1628 1628 target->rate_len = 0;
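The one-line gelic fix converts `scan_info->size` from wire (big-endian) order before using it as a `memcpy` length; on a little-endian host the raw value would be byte-swapped and wildly wrong. A portable userspace sketch of what `be16_to_cpu()` does:

```c
#include <assert.h>
#include <stdint.h>

/* Convert a big-endian 16-bit value to host order — the job
 * be16_to_cpu() does in the kernel, written as byte shifts so
 * it is correct on any host endianness. */
static uint16_t be16_to_host(const uint8_t *p)
{
    return (uint16_t)((p[0] << 8) | p[1]);
}
```

Reading the bytes individually sidesteps the host's byte order entirely, which is why this form needs no `#ifdef` on endianness.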
+14 -18
drivers/net/phy/at803x.c
··· 359 359 * in the FIFO. In such cases, the FIFO enters an error mode it 360 360 * cannot recover from by software. 361 361 */ 362 - if (phydev->drv->phy_id == ATH8030_PHY_ID) { 363 - if (phydev->state == PHY_NOLINK) { 364 - if (priv->gpiod_reset && !priv->phy_reset) { 365 - struct at803x_context context; 362 + if (phydev->state == PHY_NOLINK) { 363 + if (priv->gpiod_reset && !priv->phy_reset) { 364 + struct at803x_context context; 366 365 367 - at803x_context_save(phydev, &context); 366 + at803x_context_save(phydev, &context); 368 367 369 - gpiod_set_value(priv->gpiod_reset, 1); 370 - msleep(1); 371 - gpiod_set_value(priv->gpiod_reset, 0); 372 - msleep(1); 368 + gpiod_set_value(priv->gpiod_reset, 1); 369 + msleep(1); 370 + gpiod_set_value(priv->gpiod_reset, 0); 371 + msleep(1); 373 372 374 - at803x_context_restore(phydev, &context); 373 + at803x_context_restore(phydev, &context); 375 374 376 - phydev_dbg(phydev, "%s(): phy was reset\n", 377 - __func__); 378 - priv->phy_reset = true; 379 - } 380 - } else { 381 - priv->phy_reset = false; 375 + phydev_dbg(phydev, "%s(): phy was reset\n", 376 + __func__); 377 + priv->phy_reset = true; 382 378 } 379 + } else { 380 + priv->phy_reset = false; 383 381 } 384 382 } 385 383 ··· 389 391 .phy_id_mask = 0xffffffef, 390 392 .probe = at803x_probe, 391 393 .config_init = at803x_config_init, 392 - .link_change_notify = at803x_link_change_notify, 393 394 .set_wol = at803x_set_wol, 394 395 .get_wol = at803x_get_wol, 395 396 .suspend = at803x_suspend, ··· 424 427 .phy_id_mask = 0xffffffef, 425 428 .probe = at803x_probe, 426 429 .config_init = at803x_config_init, 427 - .link_change_notify = at803x_link_change_notify, 428 430 .set_wol = at803x_set_wol, 429 431 .get_wol = at803x_get_wol, 430 432 .suspend = at803x_suspend,
+38 -6
drivers/net/usb/lan78xx.c
··· 269 269 struct lan78xx_net *dev; 270 270 enum skb_state state; 271 271 size_t length; 272 + int num_of_packet; 272 273 }; 273 274 274 275 struct usb_context { ··· 1804 1803 1805 1804 static void lan78xx_link_status_change(struct net_device *net) 1806 1805 { 1807 - /* nothing to do */ 1806 + struct phy_device *phydev = net->phydev; 1807 + int ret, temp; 1808 + 1809 + /* At forced 100 F/H mode, chip may fail to set mode correctly 1810 + * when cable is switched between long(~50+m) and short one. 1811 + * As workaround, set to 10 before setting to 100 1812 + * at forced 100 F/H mode. 1813 + */ 1814 + if (!phydev->autoneg && (phydev->speed == 100)) { 1815 + /* disable phy interrupt */ 1816 + temp = phy_read(phydev, LAN88XX_INT_MASK); 1817 + temp &= ~LAN88XX_INT_MASK_MDINTPIN_EN_; 1818 + ret = phy_write(phydev, LAN88XX_INT_MASK, temp); 1819 + 1820 + temp = phy_read(phydev, MII_BMCR); 1821 + temp &= ~(BMCR_SPEED100 | BMCR_SPEED1000); 1822 + phy_write(phydev, MII_BMCR, temp); /* set to 10 first */ 1823 + temp |= BMCR_SPEED100; 1824 + phy_write(phydev, MII_BMCR, temp); /* set to 100 later */ 1825 + 1826 + /* clear pending interrupt generated while workaround */ 1827 + temp = phy_read(phydev, LAN88XX_INT_STS); 1828 + 1829 + /* enable phy interrupt back */ 1830 + temp = phy_read(phydev, LAN88XX_INT_MASK); 1831 + temp |= LAN88XX_INT_MASK_MDINTPIN_EN_; 1832 + ret = phy_write(phydev, LAN88XX_INT_MASK, temp); 1833 + } 1808 1834 } 1809 1835 1810 1836 static int lan78xx_phy_init(struct lan78xx_net *dev) ··· 2492 2464 struct lan78xx_net *dev = entry->dev; 2493 2465 2494 2466 if (urb->status == 0) { 2495 - dev->net->stats.tx_packets++; 2467 + dev->net->stats.tx_packets += entry->num_of_packet; 2496 2468 dev->net->stats.tx_bytes += entry->length; 2497 2469 } else { 2498 2470 dev->net->stats.tx_errors++; ··· 2709 2681 return; 2710 2682 } 2711 2683 2712 - skb->protocol = eth_type_trans(skb, dev->net); 2713 2684 dev->net->stats.rx_packets++; 2714 2685 dev->net->stats.rx_bytes += skb->len; 2686 + 2687 + skb->protocol = eth_type_trans(skb, dev->net); 2715 2688 2716 2689 netif_dbg(dev, rx_status, dev->net, "< rx, len %zu, type 0x%x\n", 2717 2690 skb->len + sizeof(struct ethhdr), skb->protocol); ··· 2963 2934 2964 2935 skb_totallen = 0; 2965 2936 pkt_cnt = 0; 2937 + count = 0; 2938 + length = 0; 2966 2939 for (skb = tqp->next; pkt_cnt < tqp->qlen; skb = skb->next) { 2967 2940 if (skb_is_gso(skb)) { 2968 2941 if (pkt_cnt) { 2969 2942 /* handle previous packets first */ 2970 2943 break; 2971 2944 } 2972 - length = skb->len; 2945 + count = 1; 2946 + length = skb->len - TX_OVERHEAD; 2973 2947 skb2 = skb_dequeue(tqp); 2974 2948 goto gso_skb; 2975 2949 } ··· 2993 2961 for (count = pos = 0; count < pkt_cnt; count++) { 2994 2962 skb2 = skb_dequeue(tqp); 2995 2963 if (skb2) { 2964 + length += (skb2->len - TX_OVERHEAD); 2996 2965 memcpy(skb->data + pos, skb2->data, skb2->len); 2997 2966 pos += roundup(skb2->len, sizeof(u32)); 2998 2967 dev_kfree_skb(skb2); 2999 2968 } 3000 2969 } 3001 - 3002 - length = skb_totallen; 3003 2970 3004 2971 gso_skb: 3005 2972 urb = usb_alloc_urb(0, GFP_ATOMIC); ··· 3011 2980 entry->urb = urb; 3012 2981 entry->dev = dev; 3013 2982 entry->length = length; 2983 + entry->num_of_packet = count; 3014 2984 3015 2985 spin_lock_irqsave(&dev->txq.lock, flags); 3016 2986 ret = usb_autopm_get_interface_async(dev->intf);
+5 -5
drivers/net/usb/pegasus.c
··· 411 411 int ret; 412 412 413 413 read_mii_word(pegasus, pegasus->phy, MII_LPA, &linkpart); 414 - data[0] = 0xc9; 414 + data[0] = 0xc8; /* TX & RX enable, append status, no CRC */ 415 415 data[1] = 0; 416 416 if (linkpart & (ADVERTISE_100FULL | ADVERTISE_10FULL)) 417 417 data[1] |= 0x20; /* set full duplex */ ··· 497 497 pkt_len = buf[count - 3] << 8; 498 498 pkt_len += buf[count - 4]; 499 499 pkt_len &= 0xfff; 500 - pkt_len -= 8; 500 + pkt_len -= 4; 501 501 } 502 502 503 503 /* ··· 528 528 goon: 529 529 usb_fill_bulk_urb(pegasus->rx_urb, pegasus->usb, 530 530 usb_rcvbulkpipe(pegasus->usb, 1), 531 - pegasus->rx_skb->data, PEGASUS_MTU + 8, 531 + pegasus->rx_skb->data, PEGASUS_MTU, 532 532 read_bulk_callback, pegasus); 533 533 rx_status = usb_submit_urb(pegasus->rx_urb, GFP_ATOMIC); 534 534 if (rx_status == -ENODEV) ··· 569 569 } 570 570 usb_fill_bulk_urb(pegasus->rx_urb, pegasus->usb, 571 571 usb_rcvbulkpipe(pegasus->usb, 1), 572 - pegasus->rx_skb->data, PEGASUS_MTU + 8, 572 + pegasus->rx_skb->data, PEGASUS_MTU, 573 573 read_bulk_callback, pegasus); 574 574 try_again: 575 575 status = usb_submit_urb(pegasus->rx_urb, GFP_ATOMIC); ··· 823 823 824 824 usb_fill_bulk_urb(pegasus->rx_urb, pegasus->usb, 825 825 usb_rcvbulkpipe(pegasus->usb, 1), 826 - pegasus->rx_skb->data, PEGASUS_MTU + 8, 826 + pegasus->rx_skb->data, PEGASUS_MTU, 827 827 read_bulk_callback, pegasus); 828 828 if ((res = usb_submit_urb(pegasus->rx_urb, GFP_KERNEL))) { 829 829 if (res == -ENODEV)
+11 -1
drivers/net/usb/smsc75xx.c
··· 29 29 #include <linux/crc32.h> 30 30 #include <linux/usb/usbnet.h> 31 31 #include <linux/slab.h> 32 + #include <linux/of_net.h> 32 33 #include "smsc75xx.h" 33 34 34 35 #define SMSC_CHIPNAME "smsc75xx" ··· 762 761 763 762 static void smsc75xx_init_mac_address(struct usbnet *dev) 764 763 { 764 + const u8 *mac_addr; 765 + 766 + /* maybe the boot loader passed the MAC address in devicetree */ 767 + mac_addr = of_get_mac_address(dev->udev->dev.of_node); 768 + if (mac_addr) { 769 + memcpy(dev->net->dev_addr, mac_addr, ETH_ALEN); 770 + return; 771 + } 772 + 765 773 /* try reading mac address from EEPROM */ 766 774 if (smsc75xx_read_eeprom(dev, EEPROM_MAC_OFFSET, ETH_ALEN, 767 775 dev->net->dev_addr) == 0) { ··· 782 772 } 783 773 } 784 774 785 - /* no eeprom, or eeprom values are invalid. generate random MAC */ 775 + /* no useful static MAC address found. generate a random one */ 786 776 eth_hw_addr_random(dev->net); 787 777 netif_dbg(dev, ifup, dev->net, "MAC address set to eth_random_addr\n"); 788 778 }
+11 -1
drivers/net/usb/smsc95xx.c
··· 29 29 #include <linux/crc32.h> 30 30 #include <linux/usb/usbnet.h> 31 31 #include <linux/slab.h> 32 + #include <linux/of_net.h> 32 33 #include "smsc95xx.h" 33 34 34 35 #define SMSC_CHIPNAME "smsc95xx" ··· 766 765 767 766 static void smsc95xx_init_mac_address(struct usbnet *dev) 768 767 { 768 + const u8 *mac_addr; 769 + 770 + /* maybe the boot loader passed the MAC address in devicetree */ 771 + mac_addr = of_get_mac_address(dev->udev->dev.of_node); 772 + if (mac_addr) { 773 + memcpy(dev->net->dev_addr, mac_addr, ETH_ALEN); 774 + return; 775 + } 776 + 769 777 /* try reading mac address from EEPROM */ 770 778 if (smsc95xx_read_eeprom(dev, EEPROM_MAC_OFFSET, ETH_ALEN, 771 779 dev->net->dev_addr) == 0) { ··· 785 775 } 786 776 } 787 777 788 - /* no eeprom, or eeprom values are invalid. generate random MAC */ 778 + /* no useful static MAC address found. generate a random one */ 789 779 eth_hw_addr_random(dev->net); 790 780 netif_dbg(dev, ifup, dev->net, "MAC address set to eth_random_addr\n"); 791 781 }
+3 -5
drivers/net/wireless/ath/ath9k/ar5008_phy.c
··· 274 274 }; 275 275 static const int inc[4] = { 0, 100, 0, 0 }; 276 276 277 + memset(&mask_m, 0, sizeof(int8_t) * 123); 278 + memset(&mask_p, 0, sizeof(int8_t) * 123); 279 + 277 280 cur_bin = -6000; 278 281 upper = bin + 100; 279 282 lower = bin - 100; ··· 427 424 int tmp, new; 428 425 int i; 429 426 430 - int8_t mask_m[123]; 431 - int8_t mask_p[123]; 432 427 int cur_bb_spur; 433 428 bool is2GHz = IS_CHAN_2GHZ(chan); 434 - 435 - memset(&mask_m, 0, sizeof(int8_t) * 123); 436 - memset(&mask_p, 0, sizeof(int8_t) * 123); 437 429 438 430 for (i = 0; i < AR_EEPROM_MODAL_SPURS; i++) { 439 431 cur_bb_spur = ah->eep_ops->get_spur_channel(ah, i, is2GHz);
-5
drivers/net/wireless/ath/ath9k/ar9002_phy.c
··· 178 178 int i; 179 179 struct chan_centers centers; 180 180 181 - int8_t mask_m[123]; 182 - int8_t mask_p[123]; 183 181 int cur_bb_spur; 184 182 bool is2GHz = IS_CHAN_2GHZ(chan); 185 - 186 - memset(&mask_m, 0, sizeof(int8_t) * 123); 187 - memset(&mask_p, 0, sizeof(int8_t) * 123); 188 183 189 184 ath9k_hw_get_channel_centers(ah, chan, &centers); 190 185 freq = centers.synth_center;
+1 -1
drivers/net/wireless/intel/iwlwifi/iwl-8000.c
··· 89 89 #define IWL8260_SMEM_OFFSET 0x400000 90 90 #define IWL8260_SMEM_LEN 0x68000 91 91 92 - #define IWL8000_FW_PRE "iwlwifi-8000" 92 + #define IWL8000_FW_PRE "iwlwifi-8000C-" 93 93 #define IWL8000_MODULE_FIRMWARE(api) \ 94 94 IWL8000_FW_PRE "-" __stringify(api) ".ucode" 95 95
-13
drivers/net/wireless/intel/iwlwifi/iwl-drv.c
··· 240 240 snprintf(drv->firmware_name, sizeof(drv->firmware_name), "%s%s.ucode", 241 241 name_pre, tag); 242 242 243 - /* 244 - * Starting 8000B - FW name format has changed. This overwrites the 245 - * previous name and uses the new format. 246 - */ 247 - if (drv->trans->cfg->device_family == IWL_DEVICE_FAMILY_8000) { 248 - char rev_step = 'A' + CSR_HW_REV_STEP(drv->trans->hw_rev); 249 - 250 - if (rev_step != 'A') 251 - snprintf(drv->firmware_name, 252 - sizeof(drv->firmware_name), "%s%c-%s.ucode", 253 - name_pre, rev_step, tag); 254 - } 255 - 256 243 IWL_DEBUG_INFO(drv, "attempting to load firmware %s'%s'\n", 257 244 (drv->fw_index == UCODE_EXPERIMENTAL_INDEX) 258 245 ? "EXPERIMENTAL " : "",
+4 -2
drivers/net/wireless/intel/iwlwifi/mvm/fw-dbg.c
··· 609 609 } 610 610 611 611 /* Make room for fw's virtual image pages, if it exists */ 612 - if (mvm->fw->img[mvm->cur_ucode].paging_mem_size) 612 + if (mvm->fw->img[mvm->cur_ucode].paging_mem_size && 613 + mvm->fw_paging_db[0].fw_paging_block) 613 614 file_len += mvm->num_of_paging_blk * 614 615 (sizeof(*dump_data) + 615 616 sizeof(struct iwl_fw_error_dump_paging) + ··· 751 750 } 752 751 753 752 /* Dump fw's virtual image */ 754 - if (mvm->fw->img[mvm->cur_ucode].paging_mem_size) { 753 + if (mvm->fw->img[mvm->cur_ucode].paging_mem_size && 754 + mvm->fw_paging_db[0].fw_paging_block) { 755 755 for (i = 1; i < mvm->num_of_paging_blk + 1; i++) { 756 756 struct iwl_fw_error_dump_paging *paging; 757 757 struct page *pages =
+2
drivers/net/wireless/intel/iwlwifi/mvm/fw.c
··· 149 149 150 150 __free_pages(mvm->fw_paging_db[i].fw_paging_block, 151 151 get_order(mvm->fw_paging_db[i].fw_paging_size)); 152 + mvm->fw_paging_db[i].fw_paging_block = NULL; 152 153 } 153 154 kfree(mvm->trans->paging_download_buf); 154 155 mvm->trans->paging_download_buf = NULL; 156 + mvm->trans->paging_db = NULL; 155 157 156 158 memset(mvm->fw_paging_db, 0, sizeof(mvm->fw_paging_db)); 157 159 }
+10
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
··· 479 479 {IWL_PCI_DEVICE(0x24F3, 0x0930, iwl8260_2ac_cfg)}, 480 480 {IWL_PCI_DEVICE(0x24F3, 0x0000, iwl8265_2ac_cfg)}, 481 481 {IWL_PCI_DEVICE(0x24FD, 0x0010, iwl8265_2ac_cfg)}, 482 + {IWL_PCI_DEVICE(0x24FD, 0x0110, iwl8265_2ac_cfg)}, 483 + {IWL_PCI_DEVICE(0x24FD, 0x1110, iwl8265_2ac_cfg)}, 484 + {IWL_PCI_DEVICE(0x24FD, 0x1010, iwl8265_2ac_cfg)}, 485 + {IWL_PCI_DEVICE(0x24FD, 0x0050, iwl8265_2ac_cfg)}, 486 + {IWL_PCI_DEVICE(0x24FD, 0x0150, iwl8265_2ac_cfg)}, 487 + {IWL_PCI_DEVICE(0x24FD, 0x9010, iwl8265_2ac_cfg)}, 488 + {IWL_PCI_DEVICE(0x24FD, 0x8110, iwl8265_2ac_cfg)}, 489 + {IWL_PCI_DEVICE(0x24FD, 0x8050, iwl8265_2ac_cfg)}, 482 490 {IWL_PCI_DEVICE(0x24FD, 0x8010, iwl8265_2ac_cfg)}, 483 491 {IWL_PCI_DEVICE(0x24FD, 0x0810, iwl8265_2ac_cfg)}, 492 + {IWL_PCI_DEVICE(0x24FD, 0x9110, iwl8265_2ac_cfg)}, 493 + {IWL_PCI_DEVICE(0x24FD, 0x8130, iwl8265_2ac_cfg)}, 484 494 485 495 /* 9000 Series */ 486 496 {IWL_PCI_DEVICE(0x9DF0, 0x0A10, iwl9560_2ac_cfg)},
+1 -1
drivers/platform/x86/toshiba_acpi.c
··· 135 135 /* Field definitions */ 136 136 #define HCI_ACCEL_MASK 0x7fff 137 137 #define HCI_HOTKEY_DISABLE 0x0b 138 - #define HCI_HOTKEY_ENABLE 0x01 138 + #define HCI_HOTKEY_ENABLE 0x09 139 139 #define HCI_HOTKEY_SPECIAL_FUNCTIONS 0x10 140 140 #define HCI_LCD_BRIGHTNESS_BITS 3 141 141 #define HCI_LCD_BRIGHTNESS_SHIFT (16-HCI_LCD_BRIGHTNESS_BITS)
+2 -2
drivers/rapidio/devices/rio_mport_cdev.c
··· 2669 2669 2670 2670 /* Create device class needed by udev */ 2671 2671 dev_class = class_create(THIS_MODULE, DRV_NAME); 2672 - if (!dev_class) { 2672 + if (IS_ERR(dev_class)) { 2673 2673 rmcd_error("Unable to create " DRV_NAME " class"); 2674 - return -EINVAL; 2674 + return PTR_ERR(dev_class); 2675 2675 } 2676 2676 2677 2677 ret = alloc_chrdev_region(&dev_number, 0, RIO_MAX_MPORTS, DRV_NAME);
+7 -5
drivers/s390/char/sclp_ctl.c
··· 56 56 { 57 57 struct sclp_ctl_sccb ctl_sccb; 58 58 struct sccb_header *sccb; 59 + unsigned long copied; 59 60 int rc; 60 61 61 62 if (copy_from_user(&ctl_sccb, user_area, sizeof(ctl_sccb))) ··· 66 65 sccb = (void *) get_zeroed_page(GFP_KERNEL | GFP_DMA); 67 66 if (!sccb) 68 67 return -ENOMEM; 69 - if (copy_from_user(sccb, u64_to_uptr(ctl_sccb.sccb), sizeof(*sccb))) { 68 + copied = PAGE_SIZE - 69 + copy_from_user(sccb, u64_to_uptr(ctl_sccb.sccb), PAGE_SIZE); 70 + if (offsetof(struct sccb_header, length) + 71 + sizeof(sccb->length) > copied || sccb->length > copied) { 70 72 rc = -EFAULT; 71 73 goto out_free; 72 74 } 73 - if (sccb->length > PAGE_SIZE || sccb->length < 8) 74 - return -EINVAL; 75 - if (copy_from_user(sccb, u64_to_uptr(ctl_sccb.sccb), sccb->length)) { 76 - rc = -EFAULT; 75 + if (sccb->length < 8) { 76 + rc = -EINVAL; 77 77 goto out_free; 78 78 } 79 79 rc = sclp_sync_request(ctl_sccb.cmdw, sccb);
+34 -20
drivers/staging/media/davinci_vpfe/vpfe_video.c
··· 172 172 static int vpfe_update_pipe_state(struct vpfe_video_device *video) 173 173 { 174 174 struct vpfe_pipeline *pipe = &video->pipe; 175 + int ret; 175 176 176 - if (vpfe_prepare_pipeline(video)) 177 - return vpfe_prepare_pipeline(video); 177 + ret = vpfe_prepare_pipeline(video); 178 + if (ret) 179 + return ret; 178 180 179 181 /* 180 182 * Find out if there is any input video ··· 184 182 */ 185 183 if (pipe->input_num == 0) { 186 184 pipe->state = VPFE_PIPELINE_STREAM_CONTINUOUS; 187 - if (vpfe_update_current_ext_subdev(video)) { 185 + ret = vpfe_update_current_ext_subdev(video); 186 + if (ret) { 188 187 pr_err("Invalid external subdev\n"); 189 - return vpfe_update_current_ext_subdev(video); 188 + return ret; 190 189 } 191 190 } else { 192 191 pipe->state = VPFE_PIPELINE_STREAM_SINGLESHOT; ··· 670 667 struct v4l2_subdev *subdev; 671 668 struct v4l2_format format; 672 669 struct media_pad *remote; 670 + int ret; 673 671 674 672 v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_enum_fmt\n"); 675 673 ··· 699 695 sd_fmt.pad = remote->index; 700 696 sd_fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE; 701 697 /* get output format of remote subdev */ 702 - if (v4l2_subdev_call(subdev, pad, get_fmt, NULL, &sd_fmt)) { 698 + ret = v4l2_subdev_call(subdev, pad, get_fmt, NULL, &sd_fmt); 699 + if (ret) { 703 700 v4l2_err(&vpfe_dev->v4l2_dev, 704 701 "invalid remote subdev for video node\n"); 705 - return v4l2_subdev_call(subdev, pad, get_fmt, NULL, &sd_fmt); 702 + return ret; 706 703 } 707 704 /* convert to pix format */ 708 705 mbus.code = sd_fmt.format.code; ··· 730 725 struct vpfe_video_device *video = video_drvdata(file); 731 726 struct vpfe_device *vpfe_dev = video->vpfe_dev; 732 727 struct v4l2_format format; 728 + int ret; 733 729 734 730 v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_s_fmt\n"); 735 731 /* If streaming is started, return error */ ··· 739 733 return -EBUSY; 740 734 } 741 735 /* get adjacent subdev's output pad format */ 742 - if (__vpfe_video_get_format(video, &format)) 743 - return __vpfe_video_get_format(video, &format); 736 + ret = __vpfe_video_get_format(video, &format); 737 + if (ret) 738 + return ret; 744 739 *fmt = format; 745 740 video->fmt = *fmt; 746 741 return 0; ··· 764 757 struct vpfe_video_device *video = video_drvdata(file); 765 758 struct vpfe_device *vpfe_dev = video->vpfe_dev; 766 759 struct v4l2_format format; 760 + int ret; 767 761 768 762 v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_try_fmt\n"); 769 763 /* get adjacent subdev's output pad format */ 770 - if (__vpfe_video_get_format(video, &format)) 771 - return __vpfe_video_get_format(video, &format); 764 + ret = __vpfe_video_get_format(video, &format); 765 + if (ret) 766 + return ret; 772 767 773 768 *fmt = format; 774 769 return 0; ··· 847 838 848 839 v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_s_input\n"); 849 840 850 - if (mutex_lock_interruptible(&video->lock)) 851 - return mutex_lock_interruptible(&video->lock); 841 + ret = mutex_lock_interruptible(&video->lock); 842 + if (ret) 843 + return ret; 852 844 /* 853 845 * If streaming is started return device busy 854 846 * error ··· 950 940 v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_s_std\n"); 951 941 952 942 /* Call decoder driver function to set the standard */ 953 - if (mutex_lock_interruptible(&video->lock)) 954 - return mutex_lock_interruptible(&video->lock); 943 + ret = mutex_lock_interruptible(&video->lock); 944 + if (ret) 945 + return ret; 955 946 sdinfo = video->current_ext_subdev; 956 947 /* If streaming is started, return device busy error */ 957 948 if (video->started) { ··· 1338 1327 return -EINVAL; 1339 1328 } 1340 1329 1341 - if (mutex_lock_interruptible(&video->lock)) 1342 - return mutex_lock_interruptible(&video->lock); 1330 + ret = mutex_lock_interruptible(&video->lock); 1331 + if (ret) 1332 + return ret; 1343 1333 1344 1334 if (video->io_usrs != 0) { 1345 1335 v4l2_err(&vpfe_dev->v4l2_dev, "Only one IO user allowed\n"); ··· 1366 1354 q->buf_struct_size = sizeof(struct vpfe_cap_buffer); 1367 1355 q->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC; 1368 1356 1369 - if (vb2_queue_init(q)) { 1357 + ret = vb2_queue_init(q); 1358 + if (ret) { 1370 1359 v4l2_err(&vpfe_dev->v4l2_dev, "vb2_queue_init() failed\n"); 1371 1360 vb2_dma_contig_cleanup_ctx(vpfe_dev->pdev); 1372 - return vb2_queue_init(q); 1361 + return ret; 1373 1362 } 1374 1363 1375 1364 fh->io_allowed = 1; ··· 1546 1533 return -EINVAL; 1547 1534 } 1548 1535 1549 - if (mutex_lock_interruptible(&video->lock)) 1550 - return mutex_lock_interruptible(&video->lock); 1536 + ret = mutex_lock_interruptible(&video->lock); 1537 + if (ret) 1538 + return ret; 1551 1539 1552 1540 vpfe_stop_capture(video); 1553 1541 ret = vb2_streamoff(&video->buffer_queue, buf_type);
+1 -1
drivers/staging/rdma/hfi1/TODO
··· 3 3 - Remove unneeded file entries in sysfs 4 4 - Remove software processing of IB protocol and place in library for use 5 5 by qib, ipath (if still present), hfi1, and eventually soft-roce 6 - 6 + - Replace incorrect uAPI
+35 -56
drivers/staging/rdma/hfi1/file_ops.c
··· 49 49 #include <linux/vmalloc.h> 50 50 #include <linux/io.h> 51 51 52 + #include <rdma/ib.h> 53 + 52 54 #include "hfi.h" 53 55 #include "pio.h" 54 56 #include "device.h" ··· 191 189 __u64 user_val = 0; 192 190 int uctxt_required = 1; 193 191 int must_be_root = 0; 192 + 193 + /* FIXME: This interface cannot continue out of staging */ 194 + if (WARN_ON_ONCE(!ib_safe_file_access(fp))) 195 + return -EACCES; 194 196 195 197 if (count < sizeof(cmd)) { 196 198 ret = -EINVAL; ··· 797 791 spin_unlock_irqrestore(&dd->uctxt_lock, flags); 798 792 799 793 dd->rcd[uctxt->ctxt] = NULL; 794 + 795 + hfi1_user_exp_rcv_free(fdata); 796 + hfi1_clear_ctxt_pkey(dd, uctxt->ctxt); 797 + 800 798 uctxt->rcvwait_to = 0; 801 799 uctxt->piowait_to = 0; 802 800 uctxt->rcvnowait = 0; 803 801 uctxt->pionowait = 0; 804 802 uctxt->event_flags = 0; 805 - 806 - hfi1_user_exp_rcv_free(fdata); 807 - hfi1_clear_ctxt_pkey(dd, uctxt->ctxt); 808 803 809 804 hfi1_stats.sps_ctxts--; 810 805 if (++dd->freectxts == dd->num_user_contexts) ··· 1134 1127 1135 1128 static int user_init(struct file *fp) 1136 1129 { 1137 - int ret; 1138 1130 unsigned int rcvctrl_ops = 0; 1139 1131 struct hfi1_filedata *fd = fp->private_data; 1140 1132 struct hfi1_ctxtdata *uctxt = fd->uctxt; 1141 1133 1142 1134 /* make sure that the context has already been setup */ 1143 - if (!test_bit(HFI1_CTXT_SETUP_DONE, &uctxt->event_flags)) { 1144 - ret = -EFAULT; 1145 - goto done; 1146 - } 1147 - 1148 - /* 1149 - * Subctxts don't need to initialize anything since master 1150 - * has done it. 1151 - */ 1152 - if (fd->subctxt) { 1153 - ret = wait_event_interruptible(uctxt->wait, !test_bit( 1154 - HFI1_CTXT_MASTER_UNINIT, 1155 - &uctxt->event_flags)); 1156 - goto expected; 1157 - } 1135 + if (!test_bit(HFI1_CTXT_SETUP_DONE, &uctxt->event_flags)) 1136 + return -EFAULT; 1158 1137 1159 1138 /* initialize poll variables... */ 1160 1139 uctxt->urgent = 0; ··· 1195 1202 wake_up(&uctxt->wait); 1196 1203 } 1197 1204 1198 - expected: 1199 - /* 1200 - * Expected receive has to be setup for all processes (including 1201 - * shared contexts). However, it has to be done after the master 1202 - * context has been fully configured as it depends on the 1203 - * eager/expected split of the RcvArray entries. 1204 - * Setting it up here ensures that the subcontexts will be waiting 1205 - * (due to the above wait_event_interruptible() until the master 1206 - * is setup. 1207 - */ 1208 - ret = hfi1_user_exp_rcv_init(fp); 1209 - done: 1210 - return ret; 1205 + return 0; 1211 1206 } 1212 1207 1213 1208 static int get_ctxt_info(struct file *fp, void __user *ubase, __u32 len) ··· 1242 1261 int ret = 0; 1243 1262 1244 1263 /* 1245 - * Context should be set up only once (including allocation and 1264 + * Context should be set up only once, including allocation and 1246 1265 * programming of eager buffers. This is done if context sharing 1247 1266 * is not requested or by the master process. 1248 1267 */ ··· 1263 1282 if (ret) 1264 1283 goto done; 1265 1284 } 1285 + } else { 1286 + ret = wait_event_interruptible(uctxt->wait, !test_bit( 1287 + HFI1_CTXT_MASTER_UNINIT, 1288 + &uctxt->event_flags)); 1289 + if (ret) 1290 + goto done; 1266 1291 } 1292 + 1267 1293 ret = hfi1_user_sdma_alloc_queues(uctxt, fp); 1294 + if (ret) 1295 + goto done; 1296 + /* 1297 + * Expected receive has to be setup for all processes (including 1298 + * shared contexts). However, it has to be done after the master 1299 + * context has been fully configured as it depends on the 1300 + * eager/expected split of the RcvArray entries. 1301 + * Setting it up here ensures that the subcontexts will be waiting 1302 + * (due to the above wait_event_interruptible() until the master 1303 + * is setup. 1304 + */ 1305 + ret = hfi1_user_exp_rcv_init(fp); 1268 1306 if (ret) 1269 1307 goto done; 1270 1308 ··· 1565 1565 { 1566 1566 struct hfi1_devdata *dd = filp->private_data; 1567 1567 1568 - switch (whence) { 1569 - case SEEK_SET: 1570 - break; 1571 - case SEEK_CUR: 1572 - offset += filp->f_pos; 1573 - break; 1574 - case SEEK_END: 1575 - offset = ((dd->kregend - dd->kregbase) + DC8051_DATA_MEM_SIZE) - 1576 - offset; 1577 - break; 1578 - default: 1579 - return -EINVAL; 1580 - } 1581 - 1582 - if (offset < 0) 1583 - return -EINVAL; 1584 - 1585 - if (offset >= (dd->kregend - dd->kregbase) + DC8051_DATA_MEM_SIZE) 1586 - return -EINVAL; 1587 - 1588 - filp->f_pos = offset; 1589 - 1590 - return filp->f_pos; 1568 + return fixed_size_llseek(filp, offset, whence, 1569 + (dd->kregend - dd->kregbase) + DC8051_DATA_MEM_SIZE); 1591 1570 } 1592 1571 1593 1572 /* NOTE: assumes unsigned long is 8 bytes */
+25 -15
drivers/staging/rdma/hfi1/mmu_rb.c
··· 71 71 struct mm_struct *, 72 72 unsigned long, unsigned long); 73 73 static void mmu_notifier_mem_invalidate(struct mmu_notifier *, 74 + struct mm_struct *, 74 75 unsigned long, unsigned long); 75 76 static struct mmu_rb_node *__mmu_rb_search(struct mmu_rb_handler *, 76 77 unsigned long, unsigned long); ··· 138 137 rbnode = rb_entry(node, struct mmu_rb_node, node); 139 138 rb_erase(node, root); 140 139 if (handler->ops->remove) 141 - handler->ops->remove(root, rbnode, false); 140 + handler->ops->remove(root, rbnode, NULL); 142 141 } 143 142 } ··· 177 176 return ret; 178 177 } 179 178 180 - /* Caller must host handler lock */ 179 + /* Caller must hold handler lock */ 181 180 static struct mmu_rb_node *__mmu_rb_search(struct mmu_rb_handler *handler, 182 181 unsigned long addr, 183 182 unsigned long len) ··· 201 200 return node; 202 201 } 203 202 203 + /* Caller must *not* hold handler lock. */ 204 204 static void __mmu_rb_remove(struct mmu_rb_handler *handler, 205 - struct mmu_rb_node *node, bool arg) 205 + struct mmu_rb_node *node, struct mm_struct *mm) 206 206 { 207 + unsigned long flags; 208 + 207 209 /* Validity of handler and node pointers has been checked by caller. */ 208 210 hfi1_cdbg(MMU, "Removing node addr 0x%llx, len %u", node->addr, 209 211 node->len); 212 + spin_lock_irqsave(&handler->lock, flags); 210 213 __mmu_int_rb_remove(node, handler->root); 214 + spin_unlock_irqrestore(&handler->lock, flags); 215 + 211 216 if (handler->ops->remove) 212 - handler->ops->remove(handler->root, node, arg); 217 + handler->ops->remove(handler->root, node, mm); 213 218 } 214 219 215 220 struct mmu_rb_node *hfi1_mmu_rb_search(struct rb_root *root, unsigned long addr, ··· 238 231 void hfi1_mmu_rb_remove(struct rb_root *root, struct mmu_rb_node *node) 239 232 { 240 233 struct mmu_rb_handler *handler = find_mmu_handler(root); 241 - unsigned long flags; 242 234 243 235 if (!handler || !node) 244 236 return; 245 237 246 - spin_lock_irqsave(&handler->lock, flags); 247 - __mmu_rb_remove(handler, node, false); 248 - spin_unlock_irqrestore(&handler->lock, flags); 238 + __mmu_rb_remove(handler, node, NULL); 249 239 } 250 240 251 241 static struct mmu_rb_handler *find_mmu_handler(struct rb_root *root) ··· 264 260 static inline void mmu_notifier_page(struct mmu_notifier *mn, 265 261 struct mm_struct *mm, unsigned long addr) 266 262 { 267 - mmu_notifier_mem_invalidate(mn, addr, addr + PAGE_SIZE); 263 + mmu_notifier_mem_invalidate(mn, mm, addr, addr + PAGE_SIZE); 268 264 } 269 265 270 266 static inline void mmu_notifier_range_start(struct mmu_notifier *mn, ··· 272 268 unsigned long start, 273 269 unsigned long end) 274 270 { 275 - mmu_notifier_mem_invalidate(mn, start, end); 271 + mmu_notifier_mem_invalidate(mn, mm, start, end); 276 272 } 277 273 278 274 static void mmu_notifier_mem_invalidate(struct mmu_notifier *mn, 275 + struct mm_struct *mm, 279 276 unsigned long start, unsigned long end) 280 277 { 281 278 struct mmu_rb_handler *handler = 282 279 container_of(mn, struct mmu_rb_handler, mn); 283 280 struct rb_root *root = handler->root; 284 - struct mmu_rb_node *node; 281 + struct mmu_rb_node *node, *ptr = NULL; 285 282 unsigned long flags; 286 283 287 284 spin_lock_irqsave(&handler->lock, flags); 288 - for (node = __mmu_int_rb_iter_first(root, start, end - 1); node; 289 - node = __mmu_int_rb_iter_next(node, start, end - 1)) { 285 + for (node = __mmu_int_rb_iter_first(root, start, end - 1); 286 + node; node = ptr) { 287 + /* Guard against node removal. */ 288 + ptr = __mmu_int_rb_iter_next(node, start, end - 1); 290 289 hfi1_cdbg(MMU, "Invalidating node addr 0x%llx, len %u", 291 290 node->addr, node->len); 292 - if (handler->ops->invalidate(root, node)) 293 - __mmu_rb_remove(handler, node, true); 291 + if (handler->ops->invalidate(root, node)) { 292 + spin_unlock_irqrestore(&handler->lock, flags); 293 + __mmu_rb_remove(handler, node, mm); 294 + spin_lock_irqsave(&handler->lock, flags); 295 + } 294 296 } 295 297 spin_unlock_irqrestore(&handler->lock, flags); 296 298 }
+2 -1
drivers/staging/rdma/hfi1/mmu_rb.h
··· 59 59 struct mmu_rb_ops { 60 60 bool (*filter)(struct mmu_rb_node *, unsigned long, unsigned long); 61 61 int (*insert)(struct rb_root *, struct mmu_rb_node *); 62 - void (*remove)(struct rb_root *, struct mmu_rb_node *, bool); 62 + void (*remove)(struct rb_root *, struct mmu_rb_node *, 63 + struct mm_struct *); 63 64 int (*invalidate)(struct rb_root *, struct mmu_rb_node *); 64 65 }; 65 66
+2
drivers/staging/rdma/hfi1/qp.c
··· 519 519 * do the flush work until that QP's 520 520 * sdma work has finished. 521 521 */ 522 + spin_lock(&qp->s_lock); 522 523 if (qp->s_flags & RVT_S_WAIT_DMA) { 523 524 qp->s_flags &= ~RVT_S_WAIT_DMA; 524 525 hfi1_schedule_send(qp); 525 526 } 527 + spin_unlock(&qp->s_lock); 526 528 } 527 529 528 530 /**
+7 -4
drivers/staging/rdma/hfi1/user_exp_rcv.c
··· 87 87 static int set_rcvarray_entry(struct file *, unsigned long, u32, 88 88 struct tid_group *, struct page **, unsigned); 89 89 static int mmu_rb_insert(struct rb_root *, struct mmu_rb_node *); 90 - static void mmu_rb_remove(struct rb_root *, struct mmu_rb_node *, bool); 90 + static void mmu_rb_remove(struct rb_root *, struct mmu_rb_node *, 91 + struct mm_struct *); 91 92 static int mmu_rb_invalidate(struct rb_root *, struct mmu_rb_node *); 92 93 static int program_rcvarray(struct file *, unsigned long, struct tid_group *, 93 94 struct tid_pageset *, unsigned, u16, struct page **, ··· 255 254 struct hfi1_ctxtdata *uctxt = fd->uctxt; 256 255 struct tid_group *grp, *gptr; 257 256 257 + if (!test_bit(HFI1_CTXT_SETUP_DONE, &uctxt->event_flags)) 258 + return 0; 258 259 /* 259 260 * The notifier would have been removed when the process'es mm 260 261 * was freed. ··· 902 899 if (!node || node->rcventry != (uctxt->expected_base + rcventry)) 903 900 return -EBADF; 904 901 if (HFI1_CAP_IS_USET(TID_UNMAP)) 905 - mmu_rb_remove(&fd->tid_rb_root, &node->mmu, false); 902 + mmu_rb_remove(&fd->tid_rb_root, &node->mmu, NULL); 906 903 else 907 904 hfi1_mmu_rb_remove(&fd->tid_rb_root, &node->mmu); 908 905 ··· 968 965 continue; 969 966 if (HFI1_CAP_IS_USET(TID_UNMAP)) 970 967 mmu_rb_remove(&fd->tid_rb_root, 971 - &node->mmu, false); 968 + &node->mmu, NULL); 972 969 else 973 970 hfi1_mmu_rb_remove(&fd->tid_rb_root, 974 971 &node->mmu); ··· 1035 1032 } 1036 1033 1037 1034 static void mmu_rb_remove(struct rb_root *root, struct mmu_rb_node *node, 1038 - bool notifier) 1035 + struct mm_struct *mm) 1039 1036 { 1040 1037 struct hfi1_filedata *fdata = 1041 1038 container_of(root, struct hfi1_filedata, tid_rb_root);
+22 -11
drivers/staging/rdma/hfi1/user_sdma.c
··· 278 278 static void user_sdma_free_request(struct user_sdma_request *, bool); 279 279 static int pin_vector_pages(struct user_sdma_request *, 280 280 struct user_sdma_iovec *); 281 - static void unpin_vector_pages(struct mm_struct *, struct page **, unsigned); 281 + static void unpin_vector_pages(struct mm_struct *, struct page **, unsigned, 282 + unsigned); 282 283 static int check_header_template(struct user_sdma_request *, 283 284 struct hfi1_pkt_header *, u32, u32); 284 285 static int set_txreq_header(struct user_sdma_request *, ··· 300 299 static void activate_packet_queue(struct iowait *, int); 301 300 static bool sdma_rb_filter(struct mmu_rb_node *, unsigned long, unsigned long); 302 301 static int sdma_rb_insert(struct rb_root *, struct mmu_rb_node *); 303 - static void sdma_rb_remove(struct rb_root *, struct mmu_rb_node *, bool); 302 + static void sdma_rb_remove(struct rb_root *, struct mmu_rb_node *, 303 + struct mm_struct *); 304 304 static int sdma_rb_invalidate(struct rb_root *, struct mmu_rb_node *); 305 305 306 306 static struct mmu_rb_ops sdma_rb_ops = { ··· 1065 1063 rb_node = hfi1_mmu_rb_search(&pq->sdma_rb_root, 1066 1064 (unsigned long)iovec->iov.iov_base, 1067 1065 iovec->iov.iov_len); 1068 - if (rb_node) 1066 + if (rb_node && !IS_ERR(rb_node)) 1069 1067 node = container_of(rb_node, struct sdma_mmu_node, rb); 1068 + else 1069 + rb_node = NULL; 1070 1070 1071 1071 if (!node) { 1072 1072 node = kzalloc(sizeof(*node), GFP_KERNEL); ··· 1111 1107 goto bail; 1112 1108 } 1113 1109 if (pinned != npages) { 1114 - unpin_vector_pages(current->mm, pages, pinned); 1110 + unpin_vector_pages(current->mm, pages, node->npages, 1111 + pinned); 1115 1112 ret = -EFAULT; 1116 1113 goto bail; 1117 1114 } ··· 1152 1147 } 1153 1148 1154 1149 static void unpin_vector_pages(struct mm_struct *mm, struct page **pages, 1155 - unsigned npages) 1150 + unsigned start, unsigned npages) 1156 1151 { 1157 - hfi1_release_user_pages(mm, pages, npages, 0); 1152 + hfi1_release_user_pages(mm, pages + start, npages, 0); 1158 1153 kfree(pages); 1159 1154 } 1160 1155 ··· 1507 1502 &req->pq->sdma_rb_root, 1508 1503 (unsigned long)req->iovs[i].iov.iov_base, 1509 1504 req->iovs[i].iov.iov_len); 1510 - if (!mnode) 1505 + if (!mnode || IS_ERR(mnode)) 1511 1506 continue; 1512 1507 1513 1508 node = container_of(mnode, struct sdma_mmu_node, rb); ··· 1552 1547 } 1553 1548 1554 1549 static void sdma_rb_remove(struct rb_root *root, struct mmu_rb_node *mnode, 1555 - bool notifier) 1550 + struct mm_struct *mm) 1556 1551 { 1557 1552 struct sdma_mmu_node *node = 1558 1553 container_of(mnode, struct sdma_mmu_node, rb); ··· 1562 1557 node->pq->n_locked -= node->npages; 1563 1558 spin_unlock(&node->pq->evict_lock); 1564 1559 1565 - unpin_vector_pages(notifier ? NULL : current->mm, node->pages, 1560 + /* 1561 + * If mm is set, we are being called by the MMU notifier and we 1562 + * should not pass a mm_struct to unpin_vector_page(). This is to 1563 + * prevent a deadlock when hfi1_release_user_pages() attempts to 1564 + * take the mmap_sem, which the MMU notifier has already taken. 1565 + */ 1566 + unpin_vector_pages(mm ? NULL : current->mm, node->pages, 0, 1566 1567 node->npages); 1567 1568 /* 1568 1569 * If called by the MMU notifier, we have to adjust the pinned 1569 1570 * page count ourselves. 1570 1571 */ 1571 - if (notifier) 1572 - current->mm->pinned_vm -= node->npages; 1572 + if (mm) 1573 + mm->pinned_vm -= node->npages; 1573 1574 kfree(node); 1574 1575 } 1575 1576
+2 -2
drivers/thermal/hisi_thermal.c
··· 68 68 * Every step equals (1 * 200) / 255 celsius, and finally 69 69 * need convert to millicelsius. 70 70 */ 71 - return (HISI_TEMP_BASE + (step * 200 / 255)) * 1000; 71 + return (HISI_TEMP_BASE * 1000 + (step * 200000 / 255)); 72 72 } 73 73 74 74 static inline long _temp_to_step(long temp) 75 75 { 76 - return ((temp / 1000 - HISI_TEMP_BASE) * 255 / 200); 76 + return ((temp - HISI_TEMP_BASE * 1000) * 255) / 200000; 77 77 } 78 78 79 79 static long hisi_thermal_get_sensor_temp(struct hisi_thermal_data *data,
+1 -1
drivers/thermal/thermal_core.c
··· 959 959 struct thermal_zone_device *tz = to_thermal_zone(dev); \ 960 960 \ 961 961 if (tz->tzp) \ 962 - return sprintf(buf, "%u\n", tz->tzp->name); \ 962 + return sprintf(buf, "%d\n", tz->tzp->name); \ 963 963 else \ 964 964 return -EIO; \ 965 965 } \
+2 -4
fs/ceph/mds_client.c
··· 386 386 atomic_read(&s->s_ref), atomic_read(&s->s_ref)-1); 387 387 if (atomic_dec_and_test(&s->s_ref)) { 388 388 if (s->s_auth.authorizer) 389 - ceph_auth_destroy_authorizer( 390 - s->s_mdsc->fsc->client->monc.auth, 391 - s->s_auth.authorizer); 389 + ceph_auth_destroy_authorizer(s->s_auth.authorizer); 392 390 kfree(s); 393 391 } 394 392 } ··· 3898 3900 struct ceph_auth_handshake *auth = &s->s_auth; 3899 3901 3900 3902 if (force_new && auth->authorizer) { 3901 - ceph_auth_destroy_authorizer(ac, auth->authorizer); 3903 + ceph_auth_destroy_authorizer(auth->authorizer); 3902 3904 auth->authorizer = NULL; 3903 3905 } 3904 3906 if (!auth->authorizer) {
+1 -1
fs/fuse/file.c
··· 1295 1295 1296 1296 *nbytesp = nbytes; 1297 1297 1298 - return ret; 1298 + return ret < 0 ? ret : 0; 1299 1299 } 1300 1300 1301 1301 static inline int fuse_iter_npages(const struct iov_iter *ii_p)
+2
fs/ocfs2/dlm/dlmmaster.c
··· 2455 2455 2456 2456 spin_unlock(&dlm->spinlock); 2457 2457 2458 + ret = 0; 2459 + 2458 2460 done: 2459 2461 dlm_put(dlm); 2460 2462 return ret;
+30 -3
fs/proc/task_mmu.c
··· 1518 1518 return page; 1519 1519 } 1520 1520 1521 + #ifdef CONFIG_TRANSPARENT_HUGEPAGE 1522 + static struct page *can_gather_numa_stats_pmd(pmd_t pmd, 1523 + struct vm_area_struct *vma, 1524 + unsigned long addr) 1525 + { 1526 + struct page *page; 1527 + int nid; 1528 + 1529 + if (!pmd_present(pmd)) 1530 + return NULL; 1531 + 1532 + page = vm_normal_page_pmd(vma, addr, pmd); 1533 + if (!page) 1534 + return NULL; 1535 + 1536 + if (PageReserved(page)) 1537 + return NULL; 1538 + 1539 + nid = page_to_nid(page); 1540 + if (!node_isset(nid, node_states[N_MEMORY])) 1541 + return NULL; 1542 + 1543 + return page; 1544 + } 1545 + #endif 1546 + 1521 1547 static int gather_pte_stats(pmd_t *pmd, unsigned long addr, 1522 1548 unsigned long end, struct mm_walk *walk) 1523 1549 { ··· 1553 1527 pte_t *orig_pte; 1554 1528 pte_t *pte; 1555 1529 1530 + #ifdef CONFIG_TRANSPARENT_HUGEPAGE 1556 1531 ptl = pmd_trans_huge_lock(pmd, vma); 1557 1532 if (ptl) { 1558 - pte_t huge_pte = *(pte_t *)pmd; 1559 1533 struct page *page; 1560 1534 1561 - page = can_gather_numa_stats(huge_pte, vma, addr); 1535 + page = can_gather_numa_stats_pmd(*pmd, vma, addr); 1562 1536 if (page) 1563 - gather_stats(page, md, pte_dirty(huge_pte), 1537 + gather_stats(page, md, pmd_dirty(*pmd), 1564 1538 HPAGE_PMD_SIZE/PAGE_SIZE); 1565 1539 spin_unlock(ptl); 1566 1540 return 0; ··· 1568 1542 1569 1543 if (pmd_trans_unstable(pmd)) 1570 1544 return 0; 1545 + #endif 1571 1546 orig_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl); 1572 1547 do { 1573 1548 struct page *page = can_gather_numa_stats(*pte, vma, addr);
+2 -2
fs/udf/super.c
··· 919 919 #endif 920 920 } 921 921 922 - ret = udf_CS0toUTF8(outstr, 31, pvoldesc->volIdent, 32); 922 + ret = udf_dstrCS0toUTF8(outstr, 31, pvoldesc->volIdent, 32); 923 923 if (ret < 0) 924 924 goto out_bh; 925 925 926 926 strncpy(UDF_SB(sb)->s_volume_ident, outstr, ret); 927 927 udf_debug("volIdent[] = '%s'\n", UDF_SB(sb)->s_volume_ident); 928 928 929 - ret = udf_CS0toUTF8(outstr, 127, pvoldesc->volSetIdent, 128); 929 + ret = udf_dstrCS0toUTF8(outstr, 127, pvoldesc->volSetIdent, 128); 930 930 if (ret < 0) 931 931 goto out_bh; 932 932
+1 -1
fs/udf/udfdecl.h
··· 212 212 uint8_t *, int); 213 213 extern int udf_put_filename(struct super_block *, const uint8_t *, int, 214 214 uint8_t *, int); 215 - extern int udf_CS0toUTF8(uint8_t *, int, const uint8_t *, int); 215 + extern int udf_dstrCS0toUTF8(uint8_t *, int, const uint8_t *, int); 216 216 217 217 /* ialloc.c */ 218 218 extern void udf_free_inode(struct inode *);
+14 -2
fs/udf/unicode.c
··· 335 335 return u_len; 336 336 } 337 337 338 - int udf_CS0toUTF8(uint8_t *utf_o, int o_len, const uint8_t *ocu_i, int i_len) 338 + int udf_dstrCS0toUTF8(uint8_t *utf_o, int o_len, 339 + const uint8_t *ocu_i, int i_len) 339 340 { 340 - return udf_name_from_CS0(utf_o, o_len, ocu_i, i_len, 341 + int s_len = 0; 342 + 343 + if (i_len > 0) { 344 + s_len = ocu_i[i_len - 1]; 345 + if (s_len >= i_len) { 346 + pr_err("incorrect dstring lengths (%d/%d)\n", 347 + s_len, i_len); 348 + return -EINVAL; 349 + } 350 + } 351 + 352 + return udf_name_from_CS0(utf_o, o_len, ocu_i, s_len, 341 353 udf_uni2char_utf8, 0); 342 354 } 343 355
+2 -1
include/linux/bpf.h
··· 180 180 void bpf_register_map_type(struct bpf_map_type_list *tl); 181 181 182 182 struct bpf_prog *bpf_prog_get(u32 ufd); 183 + struct bpf_prog *bpf_prog_inc(struct bpf_prog *prog); 183 184 void bpf_prog_put(struct bpf_prog *prog); 184 185 void bpf_prog_put_rcu(struct bpf_prog *prog); 185 186 186 187 struct bpf_map *bpf_map_get_with_uref(u32 ufd); 187 188 struct bpf_map *__bpf_map_get(struct fd f); 188 - void bpf_map_inc(struct bpf_map *map, bool uref); 189 + struct bpf_map *bpf_map_inc(struct bpf_map *map, bool uref); 189 190 void bpf_map_put_with_uref(struct bpf_map *map); 190 191 void bpf_map_put(struct bpf_map *map); 191 192 int bpf_map_precharge_memlock(u32 pages);
+5 -5
include/linux/ceph/auth.h
··· 12 12 */ 13 13 14 14 struct ceph_auth_client; 15 - struct ceph_authorizer; 16 15 struct ceph_msg; 16 + 17 + struct ceph_authorizer { 18 + void (*destroy)(struct ceph_authorizer *); 19 + }; 17 20 18 21 struct ceph_auth_handshake { 19 22 struct ceph_authorizer *authorizer; ··· 65 62 struct ceph_auth_handshake *auth); 66 63 int (*verify_authorizer_reply)(struct ceph_auth_client *ac, 67 64 struct ceph_authorizer *a, size_t len); 68 - void (*destroy_authorizer)(struct ceph_auth_client *ac, 69 - struct ceph_authorizer *a); 70 65 void (*invalidate_authorizer)(struct ceph_auth_client *ac, 71 66 int peer_type); 72 67 ··· 113 112 extern int ceph_auth_create_authorizer(struct ceph_auth_client *ac, 114 113 int peer_type, 115 114 struct ceph_auth_handshake *auth); 116 - extern void ceph_auth_destroy_authorizer(struct ceph_auth_client *ac, 117 - struct ceph_authorizer *a); 115 + void ceph_auth_destroy_authorizer(struct ceph_authorizer *a); 118 116 extern int ceph_auth_update_authorizer(struct ceph_auth_client *ac, 119 117 int peer_type, 120 118 struct ceph_auth_handshake *a);
-1
include/linux/ceph/osd_client.h
··· 16 16 struct ceph_snap_context; 17 17 struct ceph_osd_request; 18 18 struct ceph_osd_client; 19 - struct ceph_authorizer; 20 19 21 20 /* 22 21 * completion callback for async writepages
+1
include/linux/cgroup-defs.h
··· 444 444 int (*can_attach)(struct cgroup_taskset *tset); 445 445 void (*cancel_attach)(struct cgroup_taskset *tset); 446 446 void (*attach)(struct cgroup_taskset *tset); 447 + void (*post_attach)(void); 447 448 int (*can_fork)(struct task_struct *task); 448 449 void (*cancel_fork)(struct task_struct *task); 449 450 void (*fork)(struct task_struct *task);
-6
include/linux/cpuset.h
··· 137 137 task_unlock(current); 138 138 } 139 139 140 - extern void cpuset_post_attach_flush(void); 141 - 142 140 #else /* !CONFIG_CPUSETS */ 143 141 144 142 static inline bool cpusets_enabled(void) { return false; } ··· 241 243 static inline bool read_mems_allowed_retry(unsigned int seq) 242 244 { 243 245 return false; 244 - } 245 - 246 - static inline void cpuset_post_attach_flush(void) 247 - { 248 246 } 249 247 250 248 #endif /* !CONFIG_CPUSETS */
+18 -2
include/linux/hash.h
··· 32 32 #error Wordsize not 32 or 64 33 33 #endif 34 34 35 + /* 36 + * The above primes are actively bad for hashing, since they are 37 + * too sparse. The 32-bit one is mostly ok, the 64-bit one causes 38 + * real problems. Besides, the "prime" part is pointless for the 39 + * multiplicative hash. 40 + * 41 + * Although a random odd number will do, it turns out that the golden 42 + * ratio phi = (sqrt(5)-1)/2, or its negative, has particularly nice 43 + * properties. 44 + * 45 + * These are the negative, (1 - phi) = (phi^2) = (3 - sqrt(5))/2. 46 + * (See Knuth vol 3, section 6.4, exercise 9.) 47 + */ 48 + #define GOLDEN_RATIO_32 0x61C88647 49 + #define GOLDEN_RATIO_64 0x61C8864680B583EBull 50 + 35 51 static __always_inline u64 hash_64(u64 val, unsigned int bits) 36 52 { 37 53 u64 hash = val; 38 54 39 - #if defined(CONFIG_ARCH_HAS_FAST_MULTIPLIER) && BITS_PER_LONG == 64 40 - hash = hash * GOLDEN_RATIO_PRIME_64; 55 + #if BITS_PER_LONG == 64 56 + hash = hash * GOLDEN_RATIO_64; 41 57 #else 42 58 /* Sigh, gcc can't optimise this alone like it does for 32 bits. */ 43 59 u64 n = hash;
+5
include/linux/huge_mm.h
··· 152 152 } 153 153 154 154 struct page *get_huge_zero_page(void); 155 + void put_huge_zero_page(void); 155 156 156 157 #else /* CONFIG_TRANSPARENT_HUGEPAGE */ 157 158 #define HPAGE_PMD_SHIFT ({ BUILD_BUG(); 0; }) ··· 209 208 return false; 210 209 } 211 210 211 + static inline void put_huge_zero_page(void) 212 + { 213 + BUILD_BUG(); 214 + } 212 215 213 216 static inline struct page *follow_devmap_pmd(struct vm_area_struct *vma, 214 217 unsigned long addr, pmd_t *pmd, int flags)
+5
include/linux/if_ether.h
··· 28 28 return (struct ethhdr *)skb_mac_header(skb); 29 29 } 30 30 31 + static inline struct ethhdr *inner_eth_hdr(const struct sk_buff *skb) 32 + { 33 + return (struct ethhdr *)skb_inner_mac_header(skb); 34 + } 35 + 31 36 int eth_header_parse(const struct sk_buff *skb, unsigned char *haddr); 32 37 33 38 extern ssize_t sysfs_format_mac(char *buf, const unsigned char *addr, int len);
+5 -3
include/linux/lockdep.h
··· 196 196 * We record lock dependency chains, so that we can cache them: 197 197 */ 198 198 struct lock_chain { 199 - u8 irq_context; 200 - u8 depth; 201 - u16 base; 199 + /* see BUILD_BUG_ON()s in lookup_chain_cache() */ 200 + unsigned int irq_context : 2, 201 + depth : 6, 202 + base : 24; 203 + /* 4 byte hole */ 202 204 struct hlist_node entry; 203 205 u64 chain_key; 204 206 };
+11
include/linux/mlx5/device.h
··· 393 393 MLX5_CAP_OFF_CMDIF_CSUM = 46, 394 394 }; 395 395 396 + enum { 397 + /* 398 + * Max wqe size for rdma read is 512 bytes, so this 399 + * limits our max_sge_rd as the wqe needs to fit: 400 + * - ctrl segment (16 bytes) 401 + * - rdma segment (16 bytes) 402 + * - scatter elements (16 bytes each) 403 + */ 404 + MLX5_MAX_SGE_RD = (512 - 16 - 16) / 16 405 + }; 406 + 396 407 struct mlx5_inbox_hdr { 397 408 __be16 opcode; 398 409 u8 rsvd[4];
+4
include/linux/mm.h
··· 1031 1031 page = compound_head(page); 1032 1032 if (atomic_read(compound_mapcount_ptr(page)) >= 0) 1033 1033 return true; 1034 + if (PageHuge(page)) 1035 + return false; 1034 1036 for (i = 0; i < hpage_nr_pages(page); i++) { 1035 1037 if (atomic_read(&page[i]._mapcount) >= 0) 1036 1038 return true; ··· 1140 1138 1141 1139 struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr, 1142 1140 pte_t pte); 1141 + struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr, 1142 + pmd_t pmd); 1143 1143 1144 1144 int zap_vma_ptes(struct vm_area_struct *vma, unsigned long address, 1145 1145 unsigned long size);
+9 -1
include/linux/net.h
··· 245 245 net_ratelimited_function(pr_warn, fmt, ##__VA_ARGS__) 246 246 #define net_info_ratelimited(fmt, ...) \ 247 247 net_ratelimited_function(pr_info, fmt, ##__VA_ARGS__) 248 - #if defined(DEBUG) 248 + #if defined(CONFIG_DYNAMIC_DEBUG) 249 + #define net_dbg_ratelimited(fmt, ...) \ 250 + do { \ 251 + DEFINE_DYNAMIC_DEBUG_METADATA(descriptor, fmt); \ 252 + if (unlikely(descriptor.flags & _DPRINTK_FLAGS_PRINT) && \ 253 + net_ratelimit()) \ 254 + __dynamic_pr_debug(&descriptor, fmt, ##__VA_ARGS__); \ 255 + } while (0) 256 + #elif defined(DEBUG) 249 257 #define net_dbg_ratelimited(fmt, ...) \ 250 258 net_ratelimited_function(pr_debug, fmt, ##__VA_ARGS__) 251 259 #else
+1 -1
include/linux/netdevice.h
··· 3992 3992 3993 3993 static inline bool net_gso_ok(netdev_features_t features, int gso_type) 3994 3994 { 3995 - netdev_features_t feature = gso_type << NETIF_F_GSO_SHIFT; 3995 + netdev_features_t feature = (netdev_features_t)gso_type << NETIF_F_GSO_SHIFT; 3996 3996 3997 3997 /* check flags correspondence */ 3998 3998 BUILD_BUG_ON(SKB_GSO_TCPV4 != (NETIF_F_TSO >> NETIF_F_GSO_SHIFT));
+8
include/media/videobuf2-core.h
··· 375 375 /** 376 376 * struct vb2_ops - driver-specific callbacks 377 377 * 378 + * @verify_planes_array: Verify that a given user space structure contains 379 + * enough planes for the buffer. This is called 380 + * for each dequeued buffer. 378 381 * @fill_user_buffer: given a vb2_buffer fill in the userspace structure. 379 382 * For V4L2 this is a struct v4l2_buffer. 380 383 * @fill_vb2_buffer: given a userspace structure, fill in the vb2_buffer. ··· 387 384 * the vb2_buffer struct. 388 385 */ 389 386 struct vb2_buf_ops { 387 + int (*verify_planes_array)(struct vb2_buffer *vb, const void *pb); 390 388 void (*fill_user_buffer)(struct vb2_buffer *vb, void *pb); 391 389 int (*fill_vb2_buffer)(struct vb2_buffer *vb, const void *pb, 392 390 struct vb2_plane *planes); ··· 404 400 * @fileio_read_once: report EOF after reading the first buffer 405 401 * @fileio_write_immediately: queue buffer after each write() call 406 402 * @allow_zero_bytesused: allow bytesused == 0 to be passed to the driver 403 + * @quirk_poll_must_check_waiting_for_buffers: Return POLLERR at poll when QBUF 404 + * has not been called. This is a vb1 idiom that has been adopted 405 + * also by vb2. 407 406 * @lock: pointer to a mutex that protects the vb2_queue struct. The 408 407 * driver can set this to a mutex to let the v4l2 core serialize 409 408 * the queuing ioctls. If the driver wants to handle locking ··· 470 463 unsigned fileio_read_once:1; 471 464 unsigned fileio_write_immediately:1; 472 465 unsigned allow_zero_bytesused:1; 466 + unsigned quirk_poll_must_check_waiting_for_buffers:1; 473 467 474 468 struct mutex *lock; 475 469 void *owner;
+3 -1
include/net/vxlan.h
··· 317 317 (skb->inner_protocol_type != ENCAP_TYPE_ETHER || 318 318 skb->inner_protocol != htons(ETH_P_TEB) || 319 319 (skb_inner_mac_header(skb) - skb_transport_header(skb) != 320 - sizeof(struct udphdr) + sizeof(struct vxlanhdr)))) 320 + sizeof(struct udphdr) + sizeof(struct vxlanhdr)) || 321 + (skb->ip_summed != CHECKSUM_NONE && 322 + !can_checksum_protocol(features, inner_eth_hdr(skb)->h_proto)))) 321 323 return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK); 322 324 323 325 return features;
+16
include/rdma/ib.h
··· 34 34 #define _RDMA_IB_H 35 35 36 36 #include <linux/types.h> 37 + #include <linux/sched.h> 37 38 38 39 struct ib_addr { 39 40 union { ··· 86 85 __be64 sib_sid_mask; 87 86 __u64 sib_scope_id; 88 87 }; 88 + 89 + /* 90 + * The IB interfaces that use write() as bi-directional ioctl() are 91 + * fundamentally unsafe, since there are lots of ways to trigger "write()" 92 + * calls from various contexts with elevated privileges. That includes the 93 + * traditional suid executable error message writes, but also various kernel 94 + * interfaces that can write to file descriptors. 95 + * 96 + * This function provides protection for the legacy API by restricting the 97 + * calling context. 98 + */ 99 + static inline bool ib_safe_file_access(struct file *filp) 100 + { 101 + return filp->f_cred == current_cred() && segment_eq(get_fs(), USER_DS); 102 + } 89 103 90 104 #endif /* _RDMA_IB_H */
+2 -3
include/sound/hda_i915.h
··· 9 9 #ifdef CONFIG_SND_HDA_I915 10 10 int snd_hdac_set_codec_wakeup(struct hdac_bus *bus, bool enable); 11 11 int snd_hdac_display_power(struct hdac_bus *bus, bool enable); 12 - int snd_hdac_get_display_clk(struct hdac_bus *bus); 12 + void snd_hdac_i915_set_bclk(struct hdac_bus *bus); 13 13 int snd_hdac_sync_audio_rate(struct hdac_bus *bus, hda_nid_t nid, int rate); 14 14 int snd_hdac_acomp_get_eld(struct hdac_bus *bus, hda_nid_t nid, 15 15 bool *audio_enabled, char *buffer, int max_bytes); ··· 25 25 { 26 26 return 0; 27 27 } 28 - static inline int snd_hdac_get_display_clk(struct hdac_bus *bus) 28 + static inline void snd_hdac_i915_set_bclk(struct hdac_bus *bus) 29 29 { 30 - return 0; 31 30 } 32 31 static inline int snd_hdac_sync_audio_rate(struct hdac_bus *bus, hda_nid_t nid, 33 32 int rate)
+20 -10
include/uapi/linux/v4l2-dv-timings.h
··· 183 183 184 184 #define V4L2_DV_BT_CEA_3840X2160P24 { \ 185 185 .type = V4L2_DV_BT_656_1120, \ 186 - V4L2_INIT_BT_TIMINGS(3840, 2160, 0, V4L2_DV_HSYNC_POS_POL, \ 186 + V4L2_INIT_BT_TIMINGS(3840, 2160, 0, \ 187 + V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \ 187 188 297000000, 1276, 88, 296, 8, 10, 72, 0, 0, 0, \ 188 189 V4L2_DV_BT_STD_CEA861, \ 189 190 V4L2_DV_FL_CAN_REDUCE_FPS | V4L2_DV_FL_IS_CE_VIDEO) \ ··· 192 191 193 192 #define V4L2_DV_BT_CEA_3840X2160P25 { \ 194 193 .type = V4L2_DV_BT_656_1120, \ 195 - V4L2_INIT_BT_TIMINGS(3840, 2160, 0, V4L2_DV_HSYNC_POS_POL, \ 194 + V4L2_INIT_BT_TIMINGS(3840, 2160, 0, \ 195 + V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \ 196 196 297000000, 1056, 88, 296, 8, 10, 72, 0, 0, 0, \ 197 197 V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_IS_CE_VIDEO) \ 198 198 } 199 199 200 200 #define V4L2_DV_BT_CEA_3840X2160P30 { \ 201 201 .type = V4L2_DV_BT_656_1120, \ 202 - V4L2_INIT_BT_TIMINGS(3840, 2160, 0, V4L2_DV_HSYNC_POS_POL, \ 202 + V4L2_INIT_BT_TIMINGS(3840, 2160, 0, \ 203 + V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \ 203 204 297000000, 176, 88, 296, 8, 10, 72, 0, 0, 0, \ 204 205 V4L2_DV_BT_STD_CEA861, \ 205 206 V4L2_DV_FL_CAN_REDUCE_FPS | V4L2_DV_FL_IS_CE_VIDEO) \ ··· 209 206 210 207 #define V4L2_DV_BT_CEA_3840X2160P50 { \ 211 208 .type = V4L2_DV_BT_656_1120, \ 212 - V4L2_INIT_BT_TIMINGS(3840, 2160, 0, V4L2_DV_HSYNC_POS_POL, \ 209 + V4L2_INIT_BT_TIMINGS(3840, 2160, 0, \ 210 + V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \ 213 211 594000000, 1056, 88, 296, 8, 10, 72, 0, 0, 0, \ 214 212 V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_IS_CE_VIDEO) \ 215 213 } 216 214 217 215 #define V4L2_DV_BT_CEA_3840X2160P60 { \ 218 216 .type = V4L2_DV_BT_656_1120, \ 219 - V4L2_INIT_BT_TIMINGS(3840, 2160, 0, V4L2_DV_HSYNC_POS_POL, \ 217 + V4L2_INIT_BT_TIMINGS(3840, 2160, 0, \ 218 + V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \ 220 219 594000000, 176, 88, 296, 8, 10, 72, 0, 0, 0, \ 221 220 V4L2_DV_BT_STD_CEA861, \ 222 221 V4L2_DV_FL_CAN_REDUCE_FPS | V4L2_DV_FL_IS_CE_VIDEO) \ ··· 226 221 227 222 #define V4L2_DV_BT_CEA_4096X2160P24 { \ 228 223 .type = V4L2_DV_BT_656_1120, \ 229 - V4L2_INIT_BT_TIMINGS(4096, 2160, 0, V4L2_DV_HSYNC_POS_POL, \ 224 + V4L2_INIT_BT_TIMINGS(4096, 2160, 0, \ 225 + V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \ 230 226 297000000, 1020, 88, 296, 8, 10, 72, 0, 0, 0, \ 231 227 V4L2_DV_BT_STD_CEA861, \ 232 228 V4L2_DV_FL_CAN_REDUCE_FPS | V4L2_DV_FL_IS_CE_VIDEO) \ ··· 235 229 236 230 #define V4L2_DV_BT_CEA_4096X2160P25 { \ 237 231 .type = V4L2_DV_BT_656_1120, \ 238 - V4L2_INIT_BT_TIMINGS(4096, 2160, 0, V4L2_DV_HSYNC_POS_POL, \ 232 + V4L2_INIT_BT_TIMINGS(4096, 2160, 0, \ 233 + V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \ 239 234 297000000, 968, 88, 128, 8, 10, 72, 0, 0, 0, \ 240 235 V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_IS_CE_VIDEO) \ 241 236 } 242 237 243 238 #define V4L2_DV_BT_CEA_4096X2160P30 { \ 244 239 .type = V4L2_DV_BT_656_1120, \ 245 - V4L2_INIT_BT_TIMINGS(4096, 2160, 0, V4L2_DV_HSYNC_POS_POL, \ 240 + V4L2_INIT_BT_TIMINGS(4096, 2160, 0, \ 241 + V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \ 246 242 297000000, 88, 88, 128, 8, 10, 72, 0, 0, 0, \ 247 243 V4L2_DV_BT_STD_CEA861, \ 248 244 V4L2_DV_FL_CAN_REDUCE_FPS | V4L2_DV_FL_IS_CE_VIDEO) \ ··· 252 244 253 245 #define V4L2_DV_BT_CEA_4096X2160P50 { \ 254 246 .type = V4L2_DV_BT_656_1120, \ 255 - V4L2_INIT_BT_TIMINGS(4096, 2160, 0, V4L2_DV_HSYNC_POS_POL, \ 247 + V4L2_INIT_BT_TIMINGS(4096, 2160, 0, \ 248 + V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \ 256 249 594000000, 968, 88, 128, 8, 10, 72, 0, 0, 0, \ 257 250 V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_IS_CE_VIDEO) \ 258 251 } 259 252 260 253 #define V4L2_DV_BT_CEA_4096X2160P60 { \ 261 254 .type = V4L2_DV_BT_656_1120, \ 262 - V4L2_INIT_BT_TIMINGS(4096, 2160, 0, V4L2_DV_HSYNC_POS_POL, \ 255 + V4L2_INIT_BT_TIMINGS(4096, 2160, 0, \ 256 + V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \ 263 257 594000000, 88, 88, 128, 8, 10, 72, 0, 0, 0, \ 264 258 V4L2_DV_BT_STD_CEA861, \ 265 259 V4L2_DV_FL_CAN_REDUCE_FPS | V4L2_DV_FL_IS_CE_VIDEO) \
+4 -3
kernel/bpf/inode.c
··· 31 31 { 32 32 switch (type) { 33 33 case BPF_TYPE_PROG: 34 - atomic_inc(&((struct bpf_prog *)raw)->aux->refcnt); 34 + raw = bpf_prog_inc(raw); 35 35 break; 36 36 case BPF_TYPE_MAP: 37 - bpf_map_inc(raw, true); 37 + raw = bpf_map_inc(raw, true); 38 38 break; 39 39 default: 40 40 WARN_ON_ONCE(1); ··· 297 297 goto out; 298 298 299 299 raw = bpf_any_get(inode->i_private, *type); 300 - touch_atime(&path); 300 + if (!IS_ERR(raw)) 301 + touch_atime(&path); 301 302 302 303 path_put(&path); 303 304 return raw;
+20 -4
kernel/bpf/syscall.c
··· 218 218 return f.file->private_data; 219 219 } 220 220 221 - void bpf_map_inc(struct bpf_map *map, bool uref) 221 + /* prog's and map's refcnt limit */ 222 + #define BPF_MAX_REFCNT 32768 223 + 224 + struct bpf_map *bpf_map_inc(struct bpf_map *map, bool uref) 222 225 { 223 - atomic_inc(&map->refcnt); 226 + if (atomic_inc_return(&map->refcnt) > BPF_MAX_REFCNT) { 227 + atomic_dec(&map->refcnt); 228 + return ERR_PTR(-EBUSY); 229 + } 224 230 if (uref) 225 231 atomic_inc(&map->usercnt); 232 + return map; 226 233 } 227 234 228 235 struct bpf_map *bpf_map_get_with_uref(u32 ufd) ··· 241 234 if (IS_ERR(map)) 242 235 return map; 243 236 244 - bpf_map_inc(map, true); 237 + map = bpf_map_inc(map, true); 245 238 fdput(f); 246 239 247 240 return map; ··· 665 658 return f.file->private_data; 666 659 } 667 660 661 + struct bpf_prog *bpf_prog_inc(struct bpf_prog *prog) 662 + { 663 + if (atomic_inc_return(&prog->aux->refcnt) > BPF_MAX_REFCNT) { 664 + atomic_dec(&prog->aux->refcnt); 665 + return ERR_PTR(-EBUSY); 666 + } 667 + return prog; 668 + } 669 + 668 670 /* called by sockets/tracing/seccomp before attaching program to an event 669 671 * pairs with bpf_prog_put() 670 672 */ ··· 686 670 if (IS_ERR(prog)) 687 671 return prog; 688 672 689 - atomic_inc(&prog->aux->refcnt); 673 + prog = bpf_prog_inc(prog); 690 674 fdput(f); 691 675 692 676 return prog;
+47 -29
kernel/bpf/verifier.c
··· 249 249 [CONST_IMM] = "imm", 250 250 }; 251 251 252 - static const struct { 253 - int map_type; 254 - int func_id; 255 - } func_limit[] = { 256 - {BPF_MAP_TYPE_PROG_ARRAY, BPF_FUNC_tail_call}, 257 - {BPF_MAP_TYPE_PERF_EVENT_ARRAY, BPF_FUNC_perf_event_read}, 258 - {BPF_MAP_TYPE_PERF_EVENT_ARRAY, BPF_FUNC_perf_event_output}, 259 - {BPF_MAP_TYPE_STACK_TRACE, BPF_FUNC_get_stackid}, 260 - }; 261 - 262 252 static void print_verifier_state(struct verifier_env *env) 263 253 { 264 254 enum bpf_reg_type t; ··· 933 943 934 944 static int check_map_func_compatibility(struct bpf_map *map, int func_id) 935 945 { 936 - bool bool_map, bool_func; 937 - int i; 938 - 939 946 if (!map) 940 947 return 0; 941 948 942 - for (i = 0; i < ARRAY_SIZE(func_limit); i++) { 943 - bool_map = (map->map_type == func_limit[i].map_type); 944 - bool_func = (func_id == func_limit[i].func_id); 945 - /* only when map & func pair match it can continue. 946 - * don't allow any other map type to be passed into 947 - * the special func; 948 - */ 949 - if (bool_func && bool_map != bool_func) { 950 - verbose("cannot pass map_type %d into func %d\n", 951 - map->map_type, func_id); 952 - return -EINVAL; 953 - } 949 + /* We need a two way check, first is from map perspective ... */ 950 + switch (map->map_type) { 951 + case BPF_MAP_TYPE_PROG_ARRAY: 952 + if (func_id != BPF_FUNC_tail_call) 953 + goto error; 954 + break; 955 + case BPF_MAP_TYPE_PERF_EVENT_ARRAY: 956 + if (func_id != BPF_FUNC_perf_event_read && 957 + func_id != BPF_FUNC_perf_event_output) 958 + goto error; 959 + break; 960 + case BPF_MAP_TYPE_STACK_TRACE: 961 + if (func_id != BPF_FUNC_get_stackid) 962 + goto error; 963 + break; 964 + default: 965 + break; 966 + } 967 + 968 + /* ... and second from the function itself. */ 969 + switch (func_id) { 970 + case BPF_FUNC_tail_call: 971 + if (map->map_type != BPF_MAP_TYPE_PROG_ARRAY) 972 + goto error; 973 + break; 974 + case BPF_FUNC_perf_event_read: 975 + case BPF_FUNC_perf_event_output: 976 + if (map->map_type != BPF_MAP_TYPE_PERF_EVENT_ARRAY) 977 + goto error; 978 + break; 979 + case BPF_FUNC_get_stackid: 980 + if (map->map_type != BPF_MAP_TYPE_STACK_TRACE) 981 + goto error; 982 + break; 983 + default: 984 + break; 954 985 } 955 986 956 987 return 0; 988 + error: 989 + verbose("cannot pass map_type %d into func %d\n", 990 + map->map_type, func_id); 991 + return -EINVAL; 957 992 } 958 993 959 994 static int check_raw_mode(const struct bpf_func_proto *fn) ··· 2126 2111 return -E2BIG; 2127 2112 } 2128 2113 2129 - /* remember this map */ 2130 - env->used_maps[env->used_map_cnt++] = map; 2131 - 2132 2114 /* hold the map. If the program is rejected by verifier, 2133 2115 * the map will be released by release_maps() or it 2134 2116 * will be used by the valid program until it's unloaded 2135 2117 * and all maps are released in free_bpf_prog_info() 2136 2118 */ 2137 - bpf_map_inc(map, false); 2119 + map = bpf_map_inc(map, false); 2120 + if (IS_ERR(map)) { 2121 + fdput(f); 2122 + return PTR_ERR(map); 2123 + } 2124 + env->used_maps[env->used_map_cnt++] = map; 2125 + 2138 2126 fdput(f); 2139 2127 next_insn: 2140 2128 insn++;
+5 -2
kernel/cgroup.c
··· 2825 2825 size_t nbytes, loff_t off, bool threadgroup) 2826 2826 { 2827 2827 struct task_struct *tsk; 2828 + struct cgroup_subsys *ss; 2828 2829 struct cgroup *cgrp; 2829 2830 pid_t pid; 2830 - int ret; 2831 + int ssid, ret; 2831 2832 2832 2833 if (kstrtoint(strstrip(buf), 0, &pid) || pid < 0) 2833 2834 return -EINVAL; ··· 2876 2875 rcu_read_unlock(); 2877 2876 out_unlock_threadgroup: 2878 2877 percpu_up_write(&cgroup_threadgroup_rwsem); 2878 + for_each_subsys(ss, ssid) 2879 + if (ss->post_attach) 2880 + ss->post_attach(); 2879 2881 cgroup_kn_unlock(of->kn); 2880 - cpuset_post_attach_flush(); 2881 2882 return ret ?: nbytes; 2882 2883 } 2883 2884
+2 -2
kernel/cpuset.c
··· 58 58 #include <asm/uaccess.h> 59 59 #include <linux/atomic.h> 60 60 #include <linux/mutex.h> 61 - #include <linux/workqueue.h> 62 61 #include <linux/cgroup.h> 63 62 #include <linux/wait.h> 64 63 ··· 1015 1016 } 1016 1017 } 1017 1018 1018 - void cpuset_post_attach_flush(void) 1019 + static void cpuset_post_attach(void) 1019 1020 { 1020 1021 flush_workqueue(cpuset_migrate_mm_wq); 1021 1022 } ··· 2086 2087 .can_attach = cpuset_can_attach, 2087 2088 .cancel_attach = cpuset_cancel_attach, 2088 2089 .attach = cpuset_attach, 2090 + .post_attach = cpuset_post_attach, 2089 2091 .bind = cpuset_bind, 2090 2092 .legacy_cftypes = files, 2091 2093 .early_init = true,
+38 -17
kernel/events/core.c
··· 412 412 if (ret || !write) 413 413 return ret; 414 414 415 - if (sysctl_perf_cpu_time_max_percent == 100) { 415 + if (sysctl_perf_cpu_time_max_percent == 100 || 416 + sysctl_perf_cpu_time_max_percent == 0) { 416 417 printk(KERN_WARNING 417 418 "perf: Dynamic interrupt throttling disabled, can hang your system!\n"); 418 419 WRITE_ONCE(perf_sample_allowed_ns, 0); ··· 1106 1105 * function. 1107 1106 * 1108 1107 * Lock order: 1108 + * cred_guard_mutex 1109 1109 * task_struct::perf_event_mutex 1110 1110 * perf_event_context::mutex 1111 1111 * perf_event::child_mutex; ··· 3422 3420 find_lively_task_by_vpid(pid_t vpid) 3423 3421 { 3424 3422 struct task_struct *task; 3425 - int err; 3426 3423 3427 3424 rcu_read_lock(); 3428 3425 if (!vpid) ··· 3435 3434 if (!task) 3436 3435 return ERR_PTR(-ESRCH); 3437 3436 3438 - /* Reuse ptrace permission checks for now. */ 3439 - err = -EACCES; 3440 - if (!ptrace_may_access(task, PTRACE_MODE_READ_REALCREDS)) 3441 - goto errout; 3442 - 3443 3437 return task; 3444 - errout: 3445 - put_task_struct(task); 3446 - return ERR_PTR(err); 3447 - 3448 3438 } 3449 3439 3450 3440 /* ··· 8438 8446 8439 8447 get_online_cpus(); 8440 8448 8449 + if (task) { 8450 + err = mutex_lock_interruptible(&task->signal->cred_guard_mutex); 8451 + if (err) 8452 + goto err_cpus; 8453 + 8454 + /* 8455 + * Reuse ptrace permission checks for now. 8456 + * 8457 + * We must hold cred_guard_mutex across this and any potential 8458 + * perf_install_in_context() call for this new event to 8459 + * serialize against exec() altering our credentials (and the 8460 + * perf_event_exit_task() that could imply). 8461 + */ 8462 + err = -EACCES; 8463 + if (!ptrace_may_access(task, PTRACE_MODE_READ_REALCREDS)) 8464 + goto err_cred; 8465 + } 8466 + 8441 8467 if (flags & PERF_FLAG_PID_CGROUP) 8442 8468 cgroup_fd = pid; 8443 8469 ··· 8463 8453 NULL, NULL, cgroup_fd); 8464 8454 if (IS_ERR(event)) { 8465 8455 err = PTR_ERR(event); 8466 - goto err_cpus; 8456 + goto err_cred; 8467 8457 } 8468 8458 8469 8459 if (is_sampling_event(event)) { ··· 8520 8510 if ((pmu->capabilities & PERF_PMU_CAP_EXCLUSIVE) && group_leader) { 8521 8511 err = -EBUSY; 8522 8512 goto err_context; 8523 8513 } 8524 8514 8525 - if (task) { 8526 - put_task_struct(task); 8527 - task = NULL; 8528 8513 } 8529 8515 8530 8516 /* ··· 8619 8614 8620 8615 WARN_ON_ONCE(ctx->parent_ctx); 8621 8616 8617 + /* 8618 + * This is the point on no return; we cannot fail hereafter. This is 8619 + * where we start modifying current state. 8620 + */ 8621 + 8622 8622 if (move_group) { 8623 8623 /* 8624 8624 * See perf_event_ctx_lock() for comments on the details ··· 8695 8685 mutex_unlock(&gctx->mutex); 8696 8686 mutex_unlock(&ctx->mutex); 8697 8687 8688 + if (task) { 8689 + mutex_unlock(&task->signal->cred_guard_mutex); 8690 + put_task_struct(task); 8691 + } 8692 + 8698 8693 put_online_cpus(); 8699 8694 8700 8695 mutex_lock(&current->perf_event_mutex); ··· 8732 8717 */ 8733 8718 if (!event_file) 8734 8719 free_event(event); 8720 + err_cred: 8721 + if (task) 8722 + mutex_unlock(&task->signal->cred_guard_mutex); 8735 8723 err_cpus: 8736 8724 put_online_cpus(); 8737 8725 err_task: ··· 9019 9001 9020 9002 /* 9021 9003 * When a child task exits, feed back event values to parent events. 9004 + * 9005 + * Can be called with cred_guard_mutex held when called from 9006 + * install_exec_creds(). 9022 9007 */ 9023 9008 void perf_event_exit_task(struct task_struct *child) 9024 9009 {
+2 -1
kernel/kcov.c
··· 1 1 #define pr_fmt(fmt) "kcov: " fmt 2 2 3 + #define DISABLE_BRANCH_PROFILING 3 4 #include <linux/compiler.h> 4 5 #include <linux/types.h> 5 6 #include <linux/file.h> ··· 44 43 * Entry point from instrumented code. 45 44 * This is called once per basic-block/edge. 46 45 */ 47 - void __sanitizer_cov_trace_pc(void) 46 + void notrace __sanitizer_cov_trace_pc(void) 48 47 { 49 48 struct task_struct *t; 50 49 enum kcov_mode mode;
+5 -2
kernel/kexec_core.c
··· 1415 1415 VMCOREINFO_OFFSET(page, lru); 1416 1416 VMCOREINFO_OFFSET(page, _mapcount); 1417 1417 VMCOREINFO_OFFSET(page, private); 1418 + VMCOREINFO_OFFSET(page, compound_dtor); 1419 + VMCOREINFO_OFFSET(page, compound_order); 1420 + VMCOREINFO_OFFSET(page, compound_head); 1418 1421 VMCOREINFO_OFFSET(pglist_data, node_zones); 1419 1422 VMCOREINFO_OFFSET(pglist_data, nr_zones); 1420 1423 #ifdef CONFIG_FLAT_NODE_MEM_MAP ··· 1450 1447 #ifdef CONFIG_X86 1451 1448 VMCOREINFO_NUMBER(KERNEL_IMAGE_SIZE); 1452 1449 #endif 1453 - #ifdef CONFIG_HUGETLBFS 1454 - VMCOREINFO_SYMBOL(free_huge_page); 1450 + #ifdef CONFIG_HUGETLB_PAGE 1451 + VMCOREINFO_NUMBER(HUGETLB_PAGE_DTOR); 1455 1452 #endif 1456 1453 1457 1454 arch_crash_save_vmcoreinfo();
+34 -3
kernel/locking/lockdep.c
··· 2176 2176 chain->irq_context = hlock->irq_context; 2177 2177 i = get_first_held_lock(curr, hlock); 2178 2178 chain->depth = curr->lockdep_depth + 1 - i; 2179 + 2180 + BUILD_BUG_ON((1UL << 24) <= ARRAY_SIZE(chain_hlocks)); 2181 + BUILD_BUG_ON((1UL << 6) <= ARRAY_SIZE(curr->held_locks)); 2182 + BUILD_BUG_ON((1UL << 8*sizeof(chain_hlocks[0])) <= ARRAY_SIZE(lock_classes)); 2183 + 2179 2184 if (likely(nr_chain_hlocks + chain->depth <= MAX_LOCKDEP_CHAIN_HLOCKS)) { 2180 2185 chain->base = nr_chain_hlocks; 2181 - nr_chain_hlocks += chain->depth; 2182 2186 for (j = 0; j < chain->depth - 1; j++, i++) { 2183 2187 int lock_id = curr->held_locks[i].class_idx - 1; 2184 2188 chain_hlocks[chain->base + j] = lock_id; 2185 2189 } 2186 2190 chain_hlocks[chain->base + j] = class - lock_classes; 2187 2191 } 2192 + 2193 + if (nr_chain_hlocks < MAX_LOCKDEP_CHAIN_HLOCKS) 2194 + nr_chain_hlocks += chain->depth; 2195 + 2196 + #ifdef CONFIG_DEBUG_LOCKDEP 2197 + /* 2198 + * Important for check_no_collision(). 2199 + */ 2200 + if (unlikely(nr_chain_hlocks > MAX_LOCKDEP_CHAIN_HLOCKS)) { 2201 + if (debug_locks_off_graph_unlock()) 2202 + return 0; 2203 + 2204 + print_lockdep_off("BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!"); 2205 + dump_stack(); 2206 + return 0; 2207 + } 2208 + #endif 2209 + 2188 2210 hlist_add_head_rcu(&chain->entry, hash_head); 2189 2211 debug_atomic_inc(chain_lookup_misses); 2190 2212 inc_chains(); ··· 2954 2932 return 1; 2955 2933 } 2956 2934 2935 + static inline unsigned int task_irq_context(struct task_struct *task) 2936 + { 2937 + return 2 * !!task->hardirq_context + !!task->softirq_context; 2938 + } 2939 + 2957 2940 static int separate_irq_context(struct task_struct *curr, 2958 2941 struct held_lock *hlock) 2959 2942 { ··· 2967 2940 /* 2968 2941 * Keep track of points where we cross into an interrupt context: 2969 2942 */ 2970 - hlock->irq_context = 2*(curr->hardirq_context ? 
1 : 0) + 2971 - curr->softirq_context; 2972 2943 if (depth) { 2973 2944 struct held_lock *prev_hlock; 2974 2945 ··· 2996 2971 struct held_lock *hlock) 2997 2972 { 2998 2973 return 1; 2974 + } 2975 + 2976 + static inline unsigned int task_irq_context(struct task_struct *task) 2977 + { 2978 + return 0; 2999 2979 } 3000 2980 3001 2981 static inline int separate_irq_context(struct task_struct *curr, ··· 3271 3241 hlock->acquire_ip = ip; 3272 3242 hlock->instance = lock; 3273 3243 hlock->nest_lock = nest_lock; 3244 + hlock->irq_context = task_irq_context(curr); 3274 3245 hlock->trylock = trylock; 3275 3246 hlock->read = read; 3276 3247 hlock->check = check;
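The new task_irq_context() helper and the BUILD_BUG_ON() range checks added above are easy to mirror in isolation. A free-standing sketch, reusing the kernel names for clarity (the macro here is a minimal stand-in for the kernel's BUILD_BUG_ON):

```c
/* Minimal BUILD_BUG_ON(): a true condition produces a negative array
 * size, so the translation unit fails to compile. */
#define BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

/* hlock->irq_context is now filled from one helper instead of an inline
 * expression: 2*hardirq + softirq packs the context into two bits,
 * giving the four values 0 (process), 1 (softirq), 2 (hardirq), 3. */
static unsigned int task_irq_context(int hardirq_context, int softirq_context)
{
    BUILD_BUG_ON(sizeof(unsigned int) < 1);   /* condition is false: compiles */
    return 2 * !!hardirq_context + !!softirq_context;
}
```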
+2
kernel/locking/lockdep_proc.c
··· 141 141 int i; 142 142 143 143 if (v == SEQ_START_TOKEN) { 144 + if (nr_chain_hlocks > MAX_LOCKDEP_CHAIN_HLOCKS) 145 + seq_printf(m, "(buggered) "); 144 146 seq_printf(m, "all lock chains:\n"); 145 147 return 0; 146 148 }
+29
kernel/workqueue.c
··· 666 666 */ 667 667 smp_wmb(); 668 668 set_work_data(work, (unsigned long)pool_id << WORK_OFFQ_POOL_SHIFT, 0); 669 + /* 670 + * The following mb guarantees that previous clear of a PENDING bit 671 + * will not be reordered with any speculative LOADS or STORES from 672 + * work->current_func, which is executed afterwards. This possible 673 + * reordering can lead to a missed execution on attempt to queue 674 + * the same @work. E.g. consider this case: 675 + * 676 + * CPU#0 CPU#1 677 + * ---------------------------- -------------------------------- 678 + * 679 + * 1 STORE event_indicated 680 + * 2 queue_work_on() { 681 + * 3 test_and_set_bit(PENDING) 682 + * 4 } set_..._and_clear_pending() { 683 + * 5 set_work_data() # clear bit 684 + * 6 smp_mb() 685 + * 7 work->current_func() { 686 + * 8 LOAD event_indicated 687 + * } 688 + * 689 + * Without an explicit full barrier speculative LOAD on line 8 can 690 + * be executed before CPU#0 does STORE on line 1. If that happens, 691 + * CPU#0 observes the PENDING bit is still set and new execution of 692 + * a @work is not queued in the hope that CPU#1 will eventually 693 + * finish the queued @work. Meanwhile CPU#1 does not see 694 + * event_indicated is set, because speculative LOAD was executed 695 + * before actual STORE. 696 + */ 697 + smp_mb(); 669 698 } 670 699 671 700 static void clear_work_data(struct work_struct *work)
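The comment above explains why a full barrier must follow the clearing of PENDING. The shape of that fix can be sketched with C11 atomics; the field and function names are simplified stand-ins for the workqueue internals:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Miniature work item: bit 0 of `data` plays the role of
 * WORK_STRUCT_PENDING_BIT. */
struct work {
    atomic_ulong data;
};

/* test_and_set_bit(PENDING): only the queuer that flips the bit wins. */
static bool queue_work(struct work *w)
{
    return !(atomic_fetch_or(&w->data, 1UL) & 1UL);
}

/* Store the new pool id with PENDING clear, then issue a full barrier so
 * the clear cannot be reordered with the loads/stores performed by
 * work->current_func() afterwards - the smp_mb() the patch adds. */
static void set_work_data_and_clear_pending(struct work *w, unsigned long pool_id)
{
    atomic_store_explicit(&w->data, pool_id << 1, memory_order_relaxed);
    atomic_thread_fence(memory_order_seq_cst);
}
```

Single-threaded, the observable effect is just that the work becomes queueable again after the clear; the barrier matters only under concurrent queuers, as the CPU#0/CPU#1 diagram shows.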
-4
lib/stackdepot.c
··· 210 210 goto fast_exit; 211 211 212 212 hash = hash_stack(trace->entries, trace->nr_entries); 213 - /* Bad luck, we won't store this stack. */ 214 - if (hash == 0) 215 - goto exit; 216 - 217 213 bucket = &stack_table[hash & STACK_HASH_MASK]; 218 214 219 215 /*
+5 -7
mm/huge_memory.c
··· 232 232 return READ_ONCE(huge_zero_page); 233 233 } 234 234 235 - static void put_huge_zero_page(void) 235 + void put_huge_zero_page(void) 236 236 { 237 237 /* 238 238 * Counter should never go to zero here. Only shrinker can put ··· 1684 1684 if (vma_is_dax(vma)) { 1685 1685 spin_unlock(ptl); 1686 1686 if (is_huge_zero_pmd(orig_pmd)) 1687 - put_huge_zero_page(); 1687 + tlb_remove_page(tlb, pmd_page(orig_pmd)); 1688 1688 } else if (is_huge_zero_pmd(orig_pmd)) { 1689 1689 pte_free(tlb->mm, pgtable_trans_huge_withdraw(tlb->mm, pmd)); 1690 1690 atomic_long_dec(&tlb->mm->nr_ptes); 1691 1691 spin_unlock(ptl); 1692 - put_huge_zero_page(); 1692 + tlb_remove_page(tlb, pmd_page(orig_pmd)); 1693 1693 } else { 1694 1694 struct page *page = pmd_page(orig_pmd); 1695 1695 page_remove_rmap(page, true); ··· 1960 1960 * page fault if needed. 1961 1961 */ 1962 1962 return 0; 1963 - if (vma->vm_ops) 1963 + if (vma->vm_ops || (vm_flags & VM_NO_THP)) 1964 1964 /* khugepaged not yet working on file or special mappings */ 1965 1965 return 0; 1966 - VM_BUG_ON_VMA(vm_flags & VM_NO_THP, vma); 1967 1966 hstart = (vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK; 1968 1967 hend = vma->vm_end & HPAGE_PMD_MASK; 1969 1968 if (hstart < hend) ··· 2351 2352 return false; 2352 2353 if (is_vma_temporary_stack(vma)) 2353 2354 return false; 2354 - VM_BUG_ON_VMA(vma->vm_flags & VM_NO_THP, vma); 2355 - return true; 2355 + return !(vma->vm_flags & VM_NO_THP); 2356 2356 } 2357 2357 2358 2358 static void collapse_huge_page(struct mm_struct *mm,
+19 -18
mm/memcontrol.c
··· 207 207 /* "mc" and its members are protected by cgroup_mutex */ 208 208 static struct move_charge_struct { 209 209 spinlock_t lock; /* for from, to */ 210 + struct mm_struct *mm; 210 211 struct mem_cgroup *from; 211 212 struct mem_cgroup *to; 212 213 unsigned long flags; ··· 4668 4667 4669 4668 static void mem_cgroup_clear_mc(void) 4670 4669 { 4670 + struct mm_struct *mm = mc.mm; 4671 + 4671 4672 /* 4672 4673 * we must clear moving_task before waking up waiters at the end of 4673 4674 * task migration. ··· 4679 4676 spin_lock(&mc.lock); 4680 4677 mc.from = NULL; 4681 4678 mc.to = NULL; 4679 + mc.mm = NULL; 4682 4680 spin_unlock(&mc.lock); 4681 + 4682 + mmput(mm); 4683 4683 } 4684 4684 4685 4685 static int mem_cgroup_can_attach(struct cgroup_taskset *tset) ··· 4739 4733 VM_BUG_ON(mc.moved_swap); 4740 4734 4741 4735 spin_lock(&mc.lock); 4736 + mc.mm = mm; 4742 4737 mc.from = from; 4743 4738 mc.to = memcg; 4744 4739 mc.flags = move_flags; ··· 4749 4742 ret = mem_cgroup_precharge_mc(mm); 4750 4743 if (ret) 4751 4744 mem_cgroup_clear_mc(); 4745 + } else { 4746 + mmput(mm); 4752 4747 } 4753 - mmput(mm); 4754 4748 return ret; 4755 4749 } 4756 4750 ··· 4860 4852 return ret; 4861 4853 } 4862 4854 4863 - static void mem_cgroup_move_charge(struct mm_struct *mm) 4855 + static void mem_cgroup_move_charge(void) 4864 4856 { 4865 4857 struct mm_walk mem_cgroup_move_charge_walk = { 4866 4858 .pmd_entry = mem_cgroup_move_charge_pte_range, 4867 - .mm = mm, 4859 + .mm = mc.mm, 4868 4860 }; 4869 4861 4870 4862 lru_add_drain_all(); ··· 4876 4868 atomic_inc(&mc.from->moving_account); 4877 4869 synchronize_rcu(); 4878 4870 retry: 4879 - if (unlikely(!down_read_trylock(&mm->mmap_sem))) { 4871 + if (unlikely(!down_read_trylock(&mc.mm->mmap_sem))) { 4880 4872 /* 4881 4873 * Someone who are holding the mmap_sem might be waiting in 4882 4874 * waitq. So we cancel all extra charges, wake up all waiters, ··· 4893 4885 * additional charge, the page walk just aborts. 
4894 4886 */ 4895 4887 walk_page_range(0, ~0UL, &mem_cgroup_move_charge_walk); 4896 - up_read(&mm->mmap_sem); 4888 + up_read(&mc.mm->mmap_sem); 4897 4889 atomic_dec(&mc.from->moving_account); 4898 4890 } 4899 4891 4900 - static void mem_cgroup_move_task(struct cgroup_taskset *tset) 4892 + static void mem_cgroup_move_task(void) 4901 4893 { 4902 - struct cgroup_subsys_state *css; 4903 - struct task_struct *p = cgroup_taskset_first(tset, &css); 4904 - struct mm_struct *mm = get_task_mm(p); 4905 - 4906 - if (mm) { 4907 - if (mc.to) 4908 - mem_cgroup_move_charge(mm); 4909 - mmput(mm); 4910 - } 4911 - if (mc.to) 4894 + if (mc.to) { 4895 + mem_cgroup_move_charge(); 4912 4896 mem_cgroup_clear_mc(); 4897 + } 4913 4898 } 4914 4899 #else /* !CONFIG_MMU */ 4915 4900 static int mem_cgroup_can_attach(struct cgroup_taskset *tset) ··· 4912 4911 static void mem_cgroup_cancel_attach(struct cgroup_taskset *tset) 4913 4912 { 4914 4913 } 4915 - static void mem_cgroup_move_task(struct cgroup_taskset *tset) 4914 + static void mem_cgroup_move_task(void) 4916 4915 { 4917 4916 } 4918 4917 #endif ··· 5196 5195 .css_reset = mem_cgroup_css_reset, 5197 5196 .can_attach = mem_cgroup_can_attach, 5198 5197 .cancel_attach = mem_cgroup_cancel_attach, 5199 - .attach = mem_cgroup_move_task, 5198 + .post_attach = mem_cgroup_move_task, 5200 5199 .bind = mem_cgroup_bind, 5201 5200 .dfl_cftypes = memory_files, 5202 5201 .legacy_cftypes = mem_cgroup_legacy_files,
+9 -1
mm/memory-failure.c
··· 888 888 } 889 889 } 890 890 891 - return get_page_unless_zero(head); 891 + if (get_page_unless_zero(head)) { 892 + if (head == compound_head(page)) 893 + return 1; 894 + 895 + pr_info("MCE: %#lx cannot catch tail\n", page_to_pfn(page)); 896 + put_page(head); 897 + } 898 + 899 + return 0; 892 900 } 893 901 EXPORT_SYMBOL_GPL(get_hwpoison_page); 894 902
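The fix above takes a speculative reference on the presumed head page and then re-checks compound_head() to catch a concurrent split. A toy single-threaded model of that get-then-revalidate pattern (the struct layout and helpers are invented for illustration, not the real struct page):

```c
#include <stdbool.h>

/* Toy page: the refcount lives on the head page; a tail page points at
 * its head, and a split would redirect that pointer. */
struct page {
    int refcount;
    struct page *head;       /* == itself for a head page */
};

static struct page *compound_head(struct page *p) { return p->head; }

static bool get_page_unless_zero(struct page *p)
{
    if (p->refcount == 0)
        return false;        /* page already being freed */
    p->refcount++;
    return true;
}

static void put_page(struct page *p) { p->refcount--; }

/* Grab a reference on what looked like the head, then verify the tail
 * still belongs to it; if a split raced in between, drop it and fail
 * instead of returning a reference on the wrong page. */
static bool get_hwpoison_page(struct page *page)
{
    struct page *head = compound_head(page);

    if (!get_page_unless_zero(head))
        return false;

    if (head == compound_head(page))
        return true;

    put_page(head);          /* "cannot catch tail" case in the patch */
    return false;
}
```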
+40
mm/memory.c
··· 789 789 return pfn_to_page(pfn); 790 790 } 791 791 792 + #ifdef CONFIG_TRANSPARENT_HUGEPAGE 793 + struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr, 794 + pmd_t pmd) 795 + { 796 + unsigned long pfn = pmd_pfn(pmd); 797 + 798 + /* 799 + * There is no pmd_special() but there may be special pmds, e.g. 800 + * in a direct-access (dax) mapping, so let's just replicate the 801 + * !HAVE_PTE_SPECIAL case from vm_normal_page() here. 802 + */ 803 + if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) { 804 + if (vma->vm_flags & VM_MIXEDMAP) { 805 + if (!pfn_valid(pfn)) 806 + return NULL; 807 + goto out; 808 + } else { 809 + unsigned long off; 810 + off = (addr - vma->vm_start) >> PAGE_SHIFT; 811 + if (pfn == vma->vm_pgoff + off) 812 + return NULL; 813 + if (!is_cow_mapping(vma->vm_flags)) 814 + return NULL; 815 + } 816 + } 817 + 818 + if (is_zero_pfn(pfn)) 819 + return NULL; 820 + if (unlikely(pfn > highest_memmap_pfn)) 821 + return NULL; 822 + 823 + /* 824 + * NOTE! We still have PageReserved() pages in the page tables. 825 + * eg. VDSO mappings can cause them to exist. 826 + */ 827 + out: 828 + return pfn_to_page(pfn); 829 + } 830 + #endif 831 + 792 832 /* 793 833 * copy one vm_area from one task to the other. Assumes the page tables 794 834 * already present in the new task to be cleared in the whole range
+7 -1
mm/migrate.c
··· 975 975 dec_zone_page_state(page, NR_ISOLATED_ANON + 976 976 page_is_file_cache(page)); 977 977 /* Soft-offlined page shouldn't go through lru cache list */ 978 - if (reason == MR_MEMORY_FAILURE) { 978 + if (reason == MR_MEMORY_FAILURE && rc == MIGRATEPAGE_SUCCESS) { 979 + /* 980 + * With this release, we free successfully migrated 981 + * page and set PG_HWPoison on just freed page 982 + * intentionally. Although it's rather weird, it's how 983 + * HWPoison flag works at the moment. 984 + */ 979 985 put_page(page); 980 986 if (!test_set_page_hwpoison(page)) 981 987 num_poisoned_pages_inc();
+5 -1
mm/page_io.c
··· 353 353 354 354 ret = bdev_read_page(sis->bdev, swap_page_sector(page), page); 355 355 if (!ret) { 356 - swap_slot_free_notify(page); 356 + if (trylock_page(page)) { 357 + swap_slot_free_notify(page); 358 + unlock_page(page); 359 + } 360 + 357 361 count_vm_event(PSWPIN); 358 362 return 0; 359 363 }
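swap_slot_free_notify() must run with the page locked, so the fix wraps it in trylock_page()/unlock_page() and simply skips the notification when the lock is contended. A free-standing sketch of that opportunistic-trylock pattern (an atomic flag stands in for the kernel's page lock; names are illustrative):

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_flag page_lock = ATOMIC_FLAG_INIT;
static int notify_calls;

/* trylock succeeds only if the flag was previously clear */
static bool trylock_page(void) { return !atomic_flag_test_and_set(&page_lock); }
static void unlock_page(void)  { atomic_flag_clear(&page_lock); }

/* Only notify when we can take the lock without blocking; otherwise skip
 * silently, since the notification is an optimization, not a correctness
 * requirement. */
static void maybe_notify(void)
{
    if (trylock_page()) {
        notify_calls++;      /* stands in for swap_slot_free_notify() */
        unlock_page();
    }
}
```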
+5
mm/swap.c
··· 728 728 zone = NULL; 729 729 } 730 730 731 + if (is_huge_zero_page(page)) { 732 + put_huge_zero_page(); 733 + continue; 734 + } 735 + 731 736 page = compound_head(page); 732 737 if (!put_page_testzero(page)) 733 738 continue;
+15 -15
mm/vmscan.c
··· 2553 2553 sc->gfp_mask |= __GFP_HIGHMEM; 2554 2554 2555 2555 for_each_zone_zonelist_nodemask(zone, z, zonelist, 2556 - requested_highidx, sc->nodemask) { 2556 + gfp_zone(sc->gfp_mask), sc->nodemask) { 2557 2557 enum zone_type classzone_idx; 2558 2558 2559 2559 if (!populated_zone(zone)) ··· 3318 3318 /* Try to sleep for a short interval */ 3319 3319 if (prepare_kswapd_sleep(pgdat, order, remaining, 3320 3320 balanced_classzone_idx)) { 3321 + /* 3322 + * Compaction records what page blocks it recently failed to 3323 + * isolate pages from and skips them in the future scanning. 3324 + * When kswapd is going to sleep, it is reasonable to assume 3325 + * that pages and compaction may succeed so reset the cache. 3326 + */ 3327 + reset_isolation_suitable(pgdat); 3328 + 3329 + /* 3330 + * We have freed the memory, now we should compact it to make 3331 + * allocation of the requested order possible. 3332 + */ 3333 + wakeup_kcompactd(pgdat, order, classzone_idx); 3334 + 3321 3335 remaining = schedule_timeout(HZ/10); 3322 3336 finish_wait(&pgdat->kswapd_wait, &wait); 3323 3337 prepare_to_wait(&pgdat->kswapd_wait, &wait, TASK_INTERRUPTIBLE); ··· 3354 3340 * them before going back to sleep. 3355 3341 */ 3356 3342 set_pgdat_percpu_threshold(pgdat, calculate_normal_threshold); 3357 - 3358 - /* 3359 - * Compaction records what page blocks it recently failed to 3360 - * isolate pages from and skips them in the future scanning. 3361 - * When kswapd is going to sleep, it is reasonable to assume 3362 - * that pages and compaction may succeed so reset the cache. 3363 - */ 3364 - reset_isolation_suitable(pgdat); 3365 - 3366 - /* 3367 - * We have freed the memory, now we should compact it to make 3368 - * allocation of the requested order possible. 3369 - */ 3370 - wakeup_kcompactd(pgdat, order, classzone_idx); 3371 3343 3372 3344 if (!kthread_should_stop()) 3373 3345 schedule();
+12
net/batman-adv/bat_v.c
··· 32 32 33 33 #include "bat_v_elp.h" 34 34 #include "bat_v_ogm.h" 35 + #include "hard-interface.h" 35 36 #include "hash.h" 36 37 #include "originator.h" 37 38 #include "packet.h" 39 + 40 + static void batadv_v_iface_activate(struct batadv_hard_iface *hard_iface) 41 + { 42 + /* B.A.T.M.A.N. V does not use any queuing mechanism, therefore it can 43 + * set the interface as ACTIVE right away, without any risk of race 44 + * condition 45 + */ 46 + if (hard_iface->if_status == BATADV_IF_TO_BE_ACTIVATED) 47 + hard_iface->if_status = BATADV_IF_ACTIVE; 48 + } 38 49 39 50 static int batadv_v_iface_enable(struct batadv_hard_iface *hard_iface) 40 51 { ··· 285 274 286 275 static struct batadv_algo_ops batadv_batman_v __read_mostly = { 287 276 .name = "BATMAN_V", 277 + .bat_iface_activate = batadv_v_iface_activate, 288 278 .bat_iface_enable = batadv_v_iface_enable, 289 279 .bat_iface_disable = batadv_v_iface_disable, 290 280 .bat_iface_update_mac = batadv_v_iface_update_mac,
+10 -7
net/batman-adv/distributed-arp-table.c
··· 568 568 * be sent to 569 569 * @bat_priv: the bat priv with all the soft interface information 570 570 * @ip_dst: ipv4 to look up in the DHT 571 + * @vid: VLAN identifier 571 572 * 572 573 * An originator O is selected if and only if its DHT_ID value is one of three 573 574 * closest values (from the LEFT, with wrap around if needed) then the hash ··· 577 576 * Return: the candidate array of size BATADV_DAT_CANDIDATE_NUM. 578 577 */ 579 578 static struct batadv_dat_candidate * 580 - batadv_dat_select_candidates(struct batadv_priv *bat_priv, __be32 ip_dst) 579 + batadv_dat_select_candidates(struct batadv_priv *bat_priv, __be32 ip_dst, 580 + unsigned short vid) 581 581 { 582 582 int select; 583 583 batadv_dat_addr_t last_max = BATADV_DAT_ADDR_MAX, ip_key; ··· 594 592 return NULL; 595 593 596 594 dat.ip = ip_dst; 597 - dat.vid = 0; 595 + dat.vid = vid; 598 596 ip_key = (batadv_dat_addr_t)batadv_hash_dat(&dat, 599 597 BATADV_DAT_ADDR_MAX); 600 598 ··· 614 612 * @bat_priv: the bat priv with all the soft interface information 615 613 * @skb: payload to send 616 614 * @ip: the DHT key 615 + * @vid: VLAN identifier 617 616 * @packet_subtype: unicast4addr packet subtype to use 618 617 * 619 618 * This function copies the skb with pskb_copy() and is sent as unicast packet ··· 625 622 */ 626 623 static bool batadv_dat_send_data(struct batadv_priv *bat_priv, 627 624 struct sk_buff *skb, __be32 ip, 628 - int packet_subtype) 625 + unsigned short vid, int packet_subtype) 629 626 { 630 627 int i; 631 628 bool ret = false; ··· 634 631 struct sk_buff *tmp_skb; 635 632 struct batadv_dat_candidate *cand; 636 633 637 - cand = batadv_dat_select_candidates(bat_priv, ip); 634 + cand = batadv_dat_select_candidates(bat_priv, ip, vid); 638 635 if (!cand) 639 636 goto out; 640 637 ··· 1025 1022 ret = true; 1026 1023 } else { 1027 1024 /* Send the request to the DHT */ 1028 - ret = batadv_dat_send_data(bat_priv, skb, ip_dst, 1025 + ret = batadv_dat_send_data(bat_priv, skb, ip_dst, vid, 
1029 1026 BATADV_P_DAT_DHT_GET); 1030 1027 } 1031 1028 out: ··· 1153 1150 /* Send the ARP reply to the candidates for both the IP addresses that 1154 1151 * the node obtained from the ARP reply 1155 1152 */ 1156 - batadv_dat_send_data(bat_priv, skb, ip_src, BATADV_P_DAT_DHT_PUT); 1157 - batadv_dat_send_data(bat_priv, skb, ip_dst, BATADV_P_DAT_DHT_PUT); 1153 + batadv_dat_send_data(bat_priv, skb, ip_src, vid, BATADV_P_DAT_DHT_PUT); 1154 + batadv_dat_send_data(bat_priv, skb, ip_dst, vid, BATADV_P_DAT_DHT_PUT); 1158 1155 } 1159 1156 1160 1157 /**
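The bug fixed above was that batadv_dat_select_candidates() always hashed with vid = 0, so entries for the same IP on different VLANs collided on the same DHT owners. A hypothetical key derivation illustrating why the VLAN id must feed the hash (the mixing function is invented for this sketch and is not batadv_hash_dat()):

```c
#include <stdint.h>

/* Derive a DHT key from both the IPv4 address and the VLAN id, so the
 * same IP on two VLANs can land on different candidate owners. */
static uint16_t dat_key(uint32_t ip, uint16_t vid)
{
    uint32_t h = ip;

    h ^= (uint32_t)vid * 0x9e3779b1u;   /* golden-ratio mix, illustrative */
    h ^= h >> 16;                       /* fold high bits into the low half */
    return (uint16_t)h;                 /* clamp into the DHT address space */
}
```

With vid folded in, lookups and announcements for a VLAN-tagged client consistently target the same candidates, while different VLANs are free to diverge.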
+4 -2
net/batman-adv/hard-interface.c
··· 407 407 408 408 batadv_update_min_mtu(hard_iface->soft_iface); 409 409 410 + if (bat_priv->bat_algo_ops->bat_iface_activate) 411 + bat_priv->bat_algo_ops->bat_iface_activate(hard_iface); 412 + 410 413 out: 411 414 if (primary_if) 412 415 batadv_hardif_put(primary_if); ··· 575 572 struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface); 576 573 struct batadv_hard_iface *primary_if = NULL; 577 574 578 - if (hard_iface->if_status == BATADV_IF_ACTIVE) 579 - batadv_hardif_deactivate_interface(hard_iface); 575 + batadv_hardif_deactivate_interface(hard_iface); 580 576 581 577 if (hard_iface->if_status != BATADV_IF_INACTIVE) 582 578 goto out;
+6 -11
net/batman-adv/originator.c
··· 250 250 { 251 251 struct hlist_node *node_tmp; 252 252 struct batadv_neigh_node *neigh_node; 253 - struct batadv_hardif_neigh_node *hardif_neigh; 254 253 struct batadv_neigh_ifinfo *neigh_ifinfo; 255 254 struct batadv_algo_ops *bao; 256 255 ··· 261 262 batadv_neigh_ifinfo_put(neigh_ifinfo); 262 263 } 263 264 264 - hardif_neigh = batadv_hardif_neigh_get(neigh_node->if_incoming, 265 - neigh_node->addr); 266 - if (hardif_neigh) { 267 - /* batadv_hardif_neigh_get() increases refcount too */ 268 - batadv_hardif_neigh_put(hardif_neigh); 269 - batadv_hardif_neigh_put(hardif_neigh); 270 - } 265 + batadv_hardif_neigh_put(neigh_node->hardif_neigh); 271 266 272 267 if (bao->bat_neigh_free) 273 268 bao->bat_neigh_free(neigh_node); ··· 656 663 ether_addr_copy(neigh_node->addr, neigh_addr); 657 664 neigh_node->if_incoming = hard_iface; 658 665 neigh_node->orig_node = orig_node; 666 + neigh_node->last_seen = jiffies; 667 + 668 + /* increment unique neighbor refcount */ 669 + kref_get(&hardif_neigh->refcount); 670 + neigh_node->hardif_neigh = hardif_neigh; 659 671 660 672 /* extra reference for return */ 661 673 kref_init(&neigh_node->refcount); ··· 669 671 spin_lock_bh(&orig_node->neigh_list_lock); 670 672 hlist_add_head_rcu(&neigh_node->list, &orig_node->neigh_list); 671 673 spin_unlock_bh(&orig_node->neigh_list_lock); 672 - 673 - /* increment unique neighbor refcount */ 674 - kref_get(&hardif_neigh->refcount); 675 674 676 675 batadv_dbg(BATADV_DBG_BATMAN, orig_node->bat_priv, 677 676 "Creating new neighbor %pM for orig_node %pM on interface %s\n",
+9
net/batman-adv/routing.c
··· 105 105 neigh_node = NULL; 106 106 107 107 spin_lock_bh(&orig_node->neigh_list_lock); 108 + /* curr_router used earlier may not be the current orig_ifinfo->router 109 + * anymore because it was dereferenced outside of the neigh_list_lock 110 + * protected region. After the new best neighbor has replaced the current 111 + * best neighbor, the reference counter needs to decrease. Consequently, 112 + * the code needs to ensure the curr_router variable contains a pointer 113 + * to the replaced best neighbor. 114 + */ 115 + curr_router = rcu_dereference_protected(orig_ifinfo->router, true); 116 + 108 117 rcu_assign_pointer(orig_ifinfo->router, neigh_node); 109 118 spin_unlock_bh(&orig_node->neigh_list_lock); 110 119 batadv_orig_ifinfo_put(orig_ifinfo);
+6
net/batman-adv/send.c
··· 675 675 676 676 if (pending) { 677 677 hlist_del(&forw_packet->list); 678 + if (!forw_packet->own) 679 + atomic_inc(&bat_priv->bcast_queue_left); 680 + 678 681 batadv_forw_packet_free(forw_packet); 679 682 } 680 683 } ··· 705 702 706 703 if (pending) { 707 704 hlist_del(&forw_packet->list); 705 + if (!forw_packet->own) 706 + atomic_inc(&bat_priv->batman_queue_left); 707 + 708 708 batadv_forw_packet_free(forw_packet); 709 709 } 710 710 }
+6 -2
net/batman-adv/soft-interface.c
··· 408 408 */ 409 409 nf_reset(skb); 410 410 411 + if (unlikely(!pskb_may_pull(skb, ETH_HLEN))) 412 + goto dropped; 413 + 411 414 vid = batadv_get_vid(skb, 0); 412 415 ethhdr = eth_hdr(skb); 413 416 414 417 switch (ntohs(ethhdr->h_proto)) { 415 418 case ETH_P_8021Q: 419 + if (!pskb_may_pull(skb, VLAN_ETH_HLEN)) 420 + goto dropped; 421 + 416 422 vhdr = (struct vlan_ethhdr *)skb->data; 417 423 418 424 if (vhdr->h_vlan_encapsulated_proto != ethertype) ··· 430 424 } 431 425 432 426 /* skb->dev & skb->pkt_type are set here */ 433 - if (unlikely(!pskb_may_pull(skb, ETH_HLEN))) 434 - goto dropped; 435 427 skb->protocol = eth_type_trans(skb, soft_iface); 436 428 437 429 /* should not be necessary anymore as we use skb_pull_rcsum()
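The reordered checks above make sure pskb_may_pull() runs before eth_hdr() or the VLAN header is dereferenced, instead of after. The same validate-before-read shape in plain C, with may_pull() as a stand-in for pskb_may_pull():

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define ETH_HLEN      14
#define VLAN_ETH_HLEN 18

/* Stand-in for pskb_may_pull(): are at least `need` header bytes
 * actually present in the frame? */
static bool may_pull(size_t len, size_t need) { return len >= need; }

/* Check ETH_HLEN before reading the ethertype, and VLAN_ETH_HLEN before
 * touching the 802.1Q tag; a truncated frame is dropped, never read. */
static bool frame_parsable(const uint8_t *data, size_t len)
{
    uint16_t proto;

    if (!may_pull(len, ETH_HLEN))
        return false;                       /* runt frame: drop */

    proto = (uint16_t)(data[12] << 8 | data[13]);
    if (proto == 0x8100 && !may_pull(len, VLAN_ETH_HLEN))
        return false;                       /* truncated VLAN tag: drop */

    return true;
}
```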
+4 -38
net/batman-adv/translation-table.c
··· 215 215 tt_local_entry = container_of(ref, struct batadv_tt_local_entry, 216 216 common.refcount); 217 217 218 + batadv_softif_vlan_put(tt_local_entry->vlan); 219 + 218 220 kfree_rcu(tt_local_entry, common.rcu); 219 221 } 220 222 ··· 675 673 kref_get(&tt_local->common.refcount); 676 674 tt_local->last_seen = jiffies; 677 675 tt_local->common.added_at = tt_local->last_seen; 676 + tt_local->vlan = vlan; 678 677 679 678 /* the batman interface mac and multicast addresses should never be 680 679 * purged ··· 994 991 struct batadv_tt_common_entry *tt_common_entry; 995 992 struct batadv_tt_local_entry *tt_local; 996 993 struct batadv_hard_iface *primary_if; 997 - struct batadv_softif_vlan *vlan; 998 994 struct hlist_head *head; 999 995 unsigned short vid; 1000 996 u32 i; ··· 1029 1027 last_seen_msecs = last_seen_msecs % 1000; 1030 1028 1031 1029 no_purge = tt_common_entry->flags & np_flag; 1032 - 1033 - vlan = batadv_softif_vlan_get(bat_priv, vid); 1034 - if (!vlan) { 1035 - seq_printf(seq, "Cannot retrieve VLAN %d\n", 1036 - BATADV_PRINT_VID(vid)); 1037 - continue; 1038 - } 1039 - 1040 1030 seq_printf(seq, 1041 1031 " * %pM %4i [%c%c%c%c%c%c] %3u.%03u (%#.8x)\n", 1042 1032 tt_common_entry->addr, ··· 1046 1052 BATADV_TT_CLIENT_ISOLA) ? 'I' : '.'), 1047 1053 no_purge ? 0 : last_seen_secs, 1048 1054 no_purge ? 
0 : last_seen_msecs, 1049 - vlan->tt.crc); 1050 - 1051 - batadv_softif_vlan_put(vlan); 1055 + tt_local->vlan->tt.crc); 1052 1056 } 1053 1057 rcu_read_unlock(); 1054 1058 } ··· 1091 1099 { 1092 1100 struct batadv_tt_local_entry *tt_local_entry; 1093 1101 u16 flags, curr_flags = BATADV_NO_FLAGS; 1094 - struct batadv_softif_vlan *vlan; 1095 1102 void *tt_entry_exists; 1096 1103 1097 1104 tt_local_entry = batadv_tt_local_hash_find(bat_priv, addr, vid); ··· 1129 1138 1130 1139 /* extra call to free the local tt entry */ 1131 1140 batadv_tt_local_entry_put(tt_local_entry); 1132 - 1133 - /* decrease the reference held for this vlan */ 1134 - vlan = batadv_softif_vlan_get(bat_priv, vid); 1135 - if (!vlan) 1136 - goto out; 1137 - 1138 - batadv_softif_vlan_put(vlan); 1139 - batadv_softif_vlan_put(vlan); 1140 1141 1141 1142 out: 1142 1143 if (tt_local_entry) ··· 1202 1219 spinlock_t *list_lock; /* protects write access to the hash lists */ 1203 1220 struct batadv_tt_common_entry *tt_common_entry; 1204 1221 struct batadv_tt_local_entry *tt_local; 1205 - struct batadv_softif_vlan *vlan; 1206 1222 struct hlist_node *node_tmp; 1207 1223 struct hlist_head *head; 1208 1224 u32 i; ··· 1222 1240 tt_local = container_of(tt_common_entry, 1223 1241 struct batadv_tt_local_entry, 1224 1242 common); 1225 - 1226 - /* decrease the reference held for this vlan */ 1227 - vlan = batadv_softif_vlan_get(bat_priv, 1228 - tt_common_entry->vid); 1229 - if (vlan) { 1230 - batadv_softif_vlan_put(vlan); 1231 - batadv_softif_vlan_put(vlan); 1232 - } 1233 1243 1234 1244 batadv_tt_local_entry_put(tt_local); 1235 1245 } ··· 3283 3309 struct batadv_hashtable *hash = bat_priv->tt.local_hash; 3284 3310 struct batadv_tt_common_entry *tt_common; 3285 3311 struct batadv_tt_local_entry *tt_local; 3286 - struct batadv_softif_vlan *vlan; 3287 3312 struct hlist_node *node_tmp; 3288 3313 struct hlist_head *head; 3289 3314 spinlock_t *list_lock; /* protects write access to the hash lists */ ··· 3311 3338 tt_local = 
container_of(tt_common, 3312 3339 struct batadv_tt_local_entry, 3313 3340 common); 3314 - 3315 - /* decrease the reference held for this vlan */ 3316 - vlan = batadv_softif_vlan_get(bat_priv, tt_common->vid); 3317 - if (vlan) { 3318 - batadv_softif_vlan_put(vlan); 3319 - batadv_softif_vlan_put(vlan); 3320 - } 3321 3341 3322 3342 batadv_tt_local_entry_put(tt_local); 3323 3343 }
+7
net/batman-adv/types.h
··· 433 433 * @ifinfo_lock: lock protecting private ifinfo members and list 434 434 * @if_incoming: pointer to incoming hard-interface 435 435 * @last_seen: when last packet via this neighbor was received 436 + * @hardif_neigh: hardif_neigh of this neighbor 436 437 * @refcount: number of contexts the object is used 437 438 * @rcu: struct used for freeing in an RCU-safe manner 438 439 */ ··· 445 444 spinlock_t ifinfo_lock; /* protects ifinfo_list and its members */ 446 445 struct batadv_hard_iface *if_incoming; 447 446 unsigned long last_seen; 447 + struct batadv_hardif_neigh_node *hardif_neigh; 448 448 struct kref refcount; 449 449 struct rcu_head rcu; 450 450 }; ··· 1075 1073 * struct batadv_tt_local_entry - translation table local entry data 1076 1074 * @common: general translation table data 1077 1075 * @last_seen: timestamp used for purging stale tt local entries 1076 + * @vlan: soft-interface vlan of the entry 1078 1077 */ 1079 1078 struct batadv_tt_local_entry { 1080 1079 struct batadv_tt_common_entry common; 1081 1080 unsigned long last_seen; 1081 + struct batadv_softif_vlan *vlan; 1082 1082 }; 1083 1083 1084 1084 /** ··· 1254 1250 * struct batadv_algo_ops - mesh algorithm callbacks 1255 1251 * @list: list node for the batadv_algo_list 1256 1252 * @name: name of the algorithm 1253 + * @bat_iface_activate: start routing mechanisms when hard-interface is brought 1254 + * up 1257 1255 * @bat_iface_enable: init routing info when hard-interface is enabled 1258 1256 * @bat_iface_disable: de-init routing info when hard-interface is disabled 1259 1257 * @bat_iface_update_mac: (re-)init mac addresses of the protocol information ··· 1283 1277 struct batadv_algo_ops { 1284 1278 struct hlist_node list; 1285 1279 char *name; 1280 + void (*bat_iface_activate)(struct batadv_hard_iface *hard_iface); 1286 1281 int (*bat_iface_enable)(struct batadv_hard_iface *hard_iface); 1287 1282 void (*bat_iface_disable)(struct batadv_hard_iface *hard_iface); 1288 1283 void 
(*bat_iface_update_mac)(struct batadv_hard_iface *hard_iface);
+2 -6
net/ceph/auth.c
··· 293 293 } 294 294 EXPORT_SYMBOL(ceph_auth_create_authorizer); 295 295 296 - void ceph_auth_destroy_authorizer(struct ceph_auth_client *ac, 297 - struct ceph_authorizer *a) 296 + void ceph_auth_destroy_authorizer(struct ceph_authorizer *a) 298 297 { 299 - mutex_lock(&ac->mutex); 300 - if (ac->ops && ac->ops->destroy_authorizer) 301 - ac->ops->destroy_authorizer(ac, a); 302 - mutex_unlock(&ac->mutex); 298 + a->destroy(a); 303 299 } 304 300 EXPORT_SYMBOL(ceph_auth_destroy_authorizer); 305 301
+39 -32
net/ceph/auth_none.c
··· 16 16 struct ceph_auth_none_info *xi = ac->private; 17 17 18 18 xi->starting = true; 19 - xi->built_authorizer = false; 20 19 } 21 20 22 21 static void destroy(struct ceph_auth_client *ac) ··· 38 39 return xi->starting; 39 40 } 40 41 42 + static int ceph_auth_none_build_authorizer(struct ceph_auth_client *ac, 43 + struct ceph_none_authorizer *au) 44 + { 45 + void *p = au->buf; 46 + void *const end = p + sizeof(au->buf); 47 + int ret; 48 + 49 + ceph_encode_8_safe(&p, end, 1, e_range); 50 + ret = ceph_entity_name_encode(ac->name, &p, end); 51 + if (ret < 0) 52 + return ret; 53 + 54 + ceph_encode_64_safe(&p, end, ac->global_id, e_range); 55 + au->buf_len = p - (void *)au->buf; 56 + dout("%s built authorizer len %d\n", __func__, au->buf_len); 57 + return 0; 58 + 59 + e_range: 60 + return -ERANGE; 61 + } 62 + 41 63 static int build_request(struct ceph_auth_client *ac, void *buf, void *end) 42 64 { 43 65 return 0; ··· 77 57 return result; 78 58 } 79 59 60 + static void ceph_auth_none_destroy_authorizer(struct ceph_authorizer *a) 61 + { 62 + kfree(a); 63 + } 64 + 80 65 /* 81 - * build an 'authorizer' with our entity_name and global_id. we can 82 - * reuse a single static copy since it is identical for all services 83 - * we connect to. 66 + * build an 'authorizer' with our entity_name and global_id. it is 67 + * identical for all services we connect to. 
84 68 */ 85 69 static int ceph_auth_none_create_authorizer( 86 70 struct ceph_auth_client *ac, int peer_type, 87 71 struct ceph_auth_handshake *auth) 88 72 { 89 - struct ceph_auth_none_info *ai = ac->private; 90 - struct ceph_none_authorizer *au = &ai->au; 91 - void *p, *end; 73 + struct ceph_none_authorizer *au; 92 74 int ret; 93 75 94 - if (!ai->built_authorizer) { 95 - p = au->buf; 96 - end = p + sizeof(au->buf); 97 - ceph_encode_8(&p, 1); 98 - ret = ceph_entity_name_encode(ac->name, &p, end - 8); 99 - if (ret < 0) 100 - goto bad; 101 - ceph_decode_need(&p, end, sizeof(u64), bad2); 102 - ceph_encode_64(&p, ac->global_id); 103 - au->buf_len = p - (void *)au->buf; 104 - ai->built_authorizer = true; 105 - dout("built authorizer len %d\n", au->buf_len); 76 + au = kmalloc(sizeof(*au), GFP_NOFS); 77 + if (!au) 78 + return -ENOMEM; 79 + 80 + au->base.destroy = ceph_auth_none_destroy_authorizer; 81 + 82 + ret = ceph_auth_none_build_authorizer(ac, au); 83 + if (ret) { 84 + kfree(au); 85 + return ret; 106 86 } 107 87 108 88 auth->authorizer = (struct ceph_authorizer *) au; ··· 112 92 auth->authorizer_reply_buf_len = sizeof (au->reply_buf); 113 93 114 94 return 0; 115 - 116 - bad2: 117 - ret = -ERANGE; 118 - bad: 119 - return ret; 120 - } 121 - 122 - static void ceph_auth_none_destroy_authorizer(struct ceph_auth_client *ac, 123 - struct ceph_authorizer *a) 124 - { 125 - /* nothing to do */ 126 95 } 127 96 128 97 static const struct ceph_auth_client_ops ceph_auth_none_ops = { ··· 123 114 .build_request = build_request, 124 115 .handle_reply = handle_reply, 125 116 .create_authorizer = ceph_auth_none_create_authorizer, 126 - .destroy_authorizer = ceph_auth_none_destroy_authorizer, 127 117 }; 128 118 129 119 int ceph_auth_none_init(struct ceph_auth_client *ac) ··· 135 127 return -ENOMEM; 136 128 137 129 xi->starting = true; 138 - xi->built_authorizer = false; 139 130 140 131 ac->protocol = CEPH_AUTH_NONE; 141 132 ac->private = xi;
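The new ceph_auth_none_build_authorizer() above leans on the ceph_encode_*_safe() pattern: every write is bounds-checked against `end`, and an overflow bails out with -ERANGE instead of scribbling past the buffer. A self-contained sketch of that pattern (the real Ceph wire format, including the entity-name encoding, is not reproduced here; names and the error constant are illustrative):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define MY_ERANGE 34   /* stand-in for the kernel's ERANGE */

/* Advance *p only if the value fits before `end`; otherwise report
 * -MY_ERANGE, the role the e_range label plays in the patch. */
static int encode_8_safe(uint8_t **p, const uint8_t *end, uint8_t v)
{
    if (*p + 1 > end)
        return -MY_ERANGE;
    *(*p)++ = v;
    return 0;
}

static int encode_64_safe(uint8_t **p, const uint8_t *end, uint64_t v)
{
    if (*p + 8 > end)
        return -MY_ERANGE;
    memcpy(*p, &v, 8);       /* host order here; the real code encodes LE */
    *p += 8;
    return 0;
}

/* Build a toy authorizer: a version byte followed by a 64-bit global id,
 * mirroring the shape (not the content) of the kernel function. */
static int build_authorizer(uint8_t *buf, size_t buf_size, uint64_t global_id,
                            int *out_len)
{
    uint8_t *p = buf;
    const uint8_t *end = buf + buf_size;
    int ret;

    ret = encode_8_safe(&p, end, 1);
    if (ret)
        return ret;
    ret = encode_64_safe(&p, end, global_id);
    if (ret)
        return ret;

    *out_len = (int)(p - buf);
    return 0;
}
```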
+1 -2
net/ceph/auth_none.h
··· 12 12 */ 13 13 14 14 struct ceph_none_authorizer { 15 + struct ceph_authorizer base; 15 16 char buf[128]; 16 17 int buf_len; 17 18 char reply_buf[0]; ··· 20 19 21 20 struct ceph_auth_none_info { 22 21 bool starting; 23 - bool built_authorizer; 24 - struct ceph_none_authorizer au; /* we only need one; it's static */ 25 22 }; 26 23 27 24 int ceph_auth_none_init(struct ceph_auth_client *ac);
+10 -11
net/ceph/auth_x.c
··· 565 565 return -EAGAIN; 566 566 } 567 567 568 + static void ceph_x_destroy_authorizer(struct ceph_authorizer *a) 569 + { 570 + struct ceph_x_authorizer *au = (void *)a; 571 + 572 + ceph_x_authorizer_cleanup(au); 573 + kfree(au); 574 + } 575 + 568 576 static int ceph_x_create_authorizer( 569 577 struct ceph_auth_client *ac, int peer_type, 570 578 struct ceph_auth_handshake *auth) ··· 588 580 au = kzalloc(sizeof(*au), GFP_NOFS); 589 581 if (!au) 590 582 return -ENOMEM; 583 + 584 + au->base.destroy = ceph_x_destroy_authorizer; 591 585 592 586 ret = ceph_x_build_authorizer(ac, th, au); 593 587 if (ret) { ··· 652 642 au->nonce, le64_to_cpu(reply.nonce_plus_one), ret); 653 643 return ret; 654 644 } 655 - 656 - static void ceph_x_destroy_authorizer(struct ceph_auth_client *ac, 657 - struct ceph_authorizer *a) 658 - { 659 - struct ceph_x_authorizer *au = (void *)a; 660 - 661 - ceph_x_authorizer_cleanup(au); 662 - kfree(au); 663 - } 664 - 665 645 666 646 static void ceph_x_reset(struct ceph_auth_client *ac) 667 647 { ··· 770 770 .create_authorizer = ceph_x_create_authorizer, 771 771 .update_authorizer = ceph_x_update_authorizer, 772 772 .verify_authorizer_reply = ceph_x_verify_authorizer_reply, 773 - .destroy_authorizer = ceph_x_destroy_authorizer, 774 773 .invalidate_authorizer = ceph_x_invalidate_authorizer, 775 774 .reset = ceph_x_reset, 776 775 .destroy = ceph_x_destroy,
+1
net/ceph/auth_x.h
··· 26 26 27 27 28 28 struct ceph_x_authorizer { 29 + struct ceph_authorizer base; 29 30 struct ceph_crypto_key session_key; 30 31 struct ceph_buffer *buf; 31 32 unsigned int service;
+2 -4
net/ceph/osd_client.c
··· 1087 1087 dout("put_osd %p %d -> %d\n", osd, atomic_read(&osd->o_ref), 1088 1088 atomic_read(&osd->o_ref) - 1); 1089 1089 if (atomic_dec_and_test(&osd->o_ref)) { 1090 - struct ceph_auth_client *ac = osd->o_osdc->client->monc.auth; 1091 - 1092 1090 if (osd->o_auth.authorizer) 1093 - ceph_auth_destroy_authorizer(ac, osd->o_auth.authorizer); 1091 + ceph_auth_destroy_authorizer(osd->o_auth.authorizer); 1094 1092 kfree(osd); 1095 1093 } 1096 1094 } ··· 2982 2984 struct ceph_auth_handshake *auth = &o->o_auth; 2983 2985 2984 2986 if (force_new && auth->authorizer) { 2985 - ceph_auth_destroy_authorizer(ac, auth->authorizer); 2987 + ceph_auth_destroy_authorizer(auth->authorizer); 2986 2988 auth->authorizer = NULL; 2987 2989 } 2988 2990 if (!auth->authorizer) {
+1 -1
net/core/dev.c
··· 2815 2815 2816 2816 if (skb->ip_summed != CHECKSUM_NONE && 2817 2817 !can_checksum_protocol(features, type)) { 2818 - features &= ~NETIF_F_CSUM_MASK; 2818 + features &= ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK); 2819 2819 } else if (illegal_highdma(skb->dev, skb)) { 2820 2820 features &= ~NETIF_F_SG; 2821 2821 }
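The one-line net/core/dev.c change encodes a dependency between offload flags: if the device cannot checksum this packet's protocol, it must not be asked to GSO it either, since segmenting in hardware would also require generating checksums. A toy model of the mask logic — the flag values are hypothetical placeholders, not the real NETIF_F_* bits:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the NETIF_F_* feature bits. */
#define F_CSUM_MASK 0x1u
#define F_GSO_MASK  0x2u
#define F_SG        0x4u

/* Mirror of the fixed branch: dropping checksum offload now drops GSO
 * along with it, instead of leaving GSO advertised on its own. */
static uint32_t harmonize_features(uint32_t features, int can_checksum,
				   int illegal_highdma)
{
	if (!can_checksum)
		features &= ~(F_CSUM_MASK | F_GSO_MASK);
	else if (illegal_highdma)
		features &= ~F_SG;
	return features;
}
```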
+2
net/ipv4/inet_hashtables.c
··· 438 438 const struct sock *sk2, 439 439 bool match_wildcard)) 440 440 { 441 + struct inet_bind_bucket *tb = inet_csk(sk)->icsk_bind_hash; 441 442 struct sock *sk2; 442 443 kuid_t uid = sock_i_uid(sk); 443 444 ··· 447 446 sk2->sk_family == sk->sk_family && 448 447 ipv6_only_sock(sk2) == ipv6_only_sock(sk) && 449 448 sk2->sk_bound_dev_if == sk->sk_bound_dev_if && 449 + inet_csk(sk2)->icsk_bind_hash == tb && 450 450 sk2->sk_reuseport && uid_eq(uid, sock_i_uid(sk2)) && 451 451 saddr_same(sk, sk2, false)) 452 452 return reuseport_add_sock(sk, sk2);
+13 -6
net/ipv4/ip_gre.c
··· 369 369 return ip_route_output_key(net, fl); 370 370 } 371 371 372 - static void gre_fb_xmit(struct sk_buff *skb, struct net_device *dev) 372 + static void gre_fb_xmit(struct sk_buff *skb, struct net_device *dev, 373 + __be16 proto) 373 374 { 374 375 struct ip_tunnel_info *tun_info; 375 376 const struct ip_tunnel_key *key; ··· 419 418 goto err_free_rt; 420 419 421 420 flags = tun_info->key.tun_flags & (TUNNEL_CSUM | TUNNEL_KEY); 422 - gre_build_header(skb, tunnel_hlen, flags, htons(ETH_P_TEB), 421 + gre_build_header(skb, tunnel_hlen, flags, proto, 423 422 tunnel_id_to_key(tun_info->key.tun_id), 0); 424 423 425 424 df = key->tun_flags & TUNNEL_DONT_FRAGMENT ? htons(IP_DF) : 0; ··· 460 459 const struct iphdr *tnl_params; 461 460 462 461 if (tunnel->collect_md) { 463 - gre_fb_xmit(skb, dev); 462 + gre_fb_xmit(skb, dev, skb->protocol); 464 463 return NETDEV_TX_OK; 465 464 } 466 465 ··· 502 501 struct ip_tunnel *tunnel = netdev_priv(dev); 503 502 504 503 if (tunnel->collect_md) { 505 - gre_fb_xmit(skb, dev); 504 + gre_fb_xmit(skb, dev, htons(ETH_P_TEB)); 506 505 return NETDEV_TX_OK; 507 506 } 508 507 ··· 733 732 netif_keep_dst(dev); 734 733 dev->addr_len = 4; 735 734 736 - if (iph->daddr) { 735 + if (iph->daddr && !tunnel->collect_md) { 737 736 #ifdef CONFIG_NET_IPGRE_BROADCAST 738 737 if (ipv4_is_multicast(iph->daddr)) { 739 738 if (!iph->saddr) ··· 742 741 dev->header_ops = &ipgre_header_ops; 743 742 } 744 743 #endif 745 - } else 744 + } else if (!tunnel->collect_md) { 746 745 dev->header_ops = &ipgre_header_ops; 746 + } 747 747 748 748 return ip_tunnel_init(dev); 749 749 } ··· 785 783 if (data[IFLA_GRE_OFLAGS]) 786 784 flags |= nla_get_be16(data[IFLA_GRE_OFLAGS]); 787 785 if (flags & (GRE_VERSION|GRE_ROUTING)) 786 + return -EINVAL; 787 + 788 + if (data[IFLA_GRE_COLLECT_METADATA] && 789 + data[IFLA_GRE_ENCAP_TYPE] && 790 + nla_get_u16(data[IFLA_GRE_ENCAP_TYPE]) != TUNNEL_ENCAP_NONE) 788 791 return -EINVAL; 789 792 790 793 return 0;
+2 -2
net/ipv4/ip_tunnel.c
··· 326 326 327 327 if (!IS_ERR(rt)) { 328 328 tdev = rt->dst.dev; 329 - dst_cache_set_ip4(&tunnel->dst_cache, &rt->dst, 330 - fl4.saddr); 331 329 ip_rt_put(rt); 332 330 } 333 331 if (dev->type != ARPHRD_ETHER) 334 332 dev->flags |= IFF_POINTOPOINT; 333 + 334 + dst_cache_reset(&tunnel->dst_cache); 335 335 } 336 336 337 337 if (!tdev && tunnel->parms.link)
+1 -2
net/ipv6/ila/ila_lwt.c
··· 144 144 145 145 static int ila_encap_nlsize(struct lwtunnel_state *lwtstate) 146 146 { 147 - /* No encapsulation overhead */ 148 - return 0; 147 + return nla_total_size(sizeof(u64)); /* ILA_ATTR_LOCATOR */ 149 148 } 150 149 151 150 static int ila_encap_cmp(struct lwtunnel_state *a, struct lwtunnel_state *b)
+2 -2
net/l2tp/l2tp_core.c
··· 1376 1376 memcpy(&udp_conf.peer_ip6, cfg->peer_ip6, 1377 1377 sizeof(udp_conf.peer_ip6)); 1378 1378 udp_conf.use_udp6_tx_checksums = 1379 - cfg->udp6_zero_tx_checksums; 1379 + ! cfg->udp6_zero_tx_checksums; 1380 1380 udp_conf.use_udp6_rx_checksums = 1381 - cfg->udp6_zero_rx_checksums; 1381 + ! cfg->udp6_zero_rx_checksums; 1382 1382 } else 1383 1383 #endif 1384 1384 {
+2 -2
net/mac80211/iface.c
··· 1761 1761 1762 1762 ret = dev_alloc_name(ndev, ndev->name); 1763 1763 if (ret < 0) { 1764 - free_netdev(ndev); 1764 + ieee80211_if_free(ndev); 1765 1765 return ret; 1766 1766 } 1767 1767 ··· 1847 1847 1848 1848 ret = register_netdevice(ndev); 1849 1849 if (ret) { 1850 - free_netdev(ndev); 1850 + ieee80211_if_free(ndev); 1851 1851 return ret; 1852 1852 } 1853 1853 }
+2 -1
net/rds/tcp.c
··· 127 127 128 128 /* 129 129 * This is the only path that sets tc->t_sock. Send and receive trust that 130 - * it is set. The RDS_CONN_CONNECTED bit protects those paths from being 130 + * it is set. The RDS_CONN_UP bit protects those paths from being 131 131 * called while it isn't set. 132 132 */ 133 133 void rds_tcp_set_callbacks(struct socket *sock, struct rds_connection *conn) ··· 216 216 if (!tc) 217 217 return -ENOMEM; 218 218 219 + mutex_init(&tc->t_conn_lock); 219 220 tc->t_sock = NULL; 220 221 tc->t_tinc = NULL; 221 222 tc->t_tinc_hdr_rem = sizeof(struct rds_header);
+4
net/rds/tcp.h
··· 12 12 13 13 struct list_head t_tcp_node; 14 14 struct rds_connection *conn; 15 + /* t_conn_lock synchronizes the connection establishment between 16 + * rds_tcp_accept_one and rds_tcp_conn_connect 17 + */ 18 + struct mutex t_conn_lock; 15 19 struct socket *t_sock; 16 20 void *t_orig_write_space; 17 21 void *t_orig_data_ready;
+8
net/rds/tcp_connect.c
··· 78 78 struct socket *sock = NULL; 79 79 struct sockaddr_in src, dest; 80 80 int ret; 81 + struct rds_tcp_connection *tc = conn->c_transport_data; 81 82 83 + mutex_lock(&tc->t_conn_lock); 84 + 85 + if (rds_conn_up(conn)) { 86 + mutex_unlock(&tc->t_conn_lock); 87 + return 0; 88 + } 82 89 ret = sock_create_kern(rds_conn_net(conn), PF_INET, 83 90 SOCK_STREAM, IPPROTO_TCP, &sock); 84 91 if (ret < 0) ··· 127 120 } 128 121 129 122 out: 123 + mutex_unlock(&tc->t_conn_lock); 130 124 if (sock) 131 125 sock_release(sock); 132 126 return ret;
+36 -18
net/rds/tcp_listen.c
··· 76 76 struct rds_connection *conn; 77 77 int ret; 78 78 struct inet_sock *inet; 79 - struct rds_tcp_connection *rs_tcp; 79 + struct rds_tcp_connection *rs_tcp = NULL; 80 + int conn_state; 81 + struct sock *nsk; 80 82 81 83 ret = sock_create_kern(sock_net(sock->sk), sock->sk->sk_family, 82 84 sock->sk->sk_type, sock->sk->sk_protocol, ··· 117 115 * rds_tcp_state_change() will do that cleanup 118 116 */ 119 117 rs_tcp = (struct rds_tcp_connection *)conn->c_transport_data; 120 - if (rs_tcp->t_sock && 121 - ntohl(inet->inet_saddr) < ntohl(inet->inet_daddr)) { 122 - struct sock *nsk = new_sock->sk; 123 - 124 - nsk->sk_user_data = NULL; 125 - nsk->sk_prot->disconnect(nsk, 0); 126 - tcp_done(nsk); 127 - new_sock = NULL; 128 - ret = 0; 129 - goto out; 130 - } else if (rs_tcp->t_sock) { 131 - rds_tcp_restore_callbacks(rs_tcp->t_sock, rs_tcp); 132 - conn->c_outgoing = 0; 133 - } 134 - 135 118 rds_conn_transition(conn, RDS_CONN_DOWN, RDS_CONN_CONNECTING); 119 + mutex_lock(&rs_tcp->t_conn_lock); 120 + conn_state = rds_conn_state(conn); 121 + if (conn_state != RDS_CONN_CONNECTING && conn_state != RDS_CONN_UP) 122 + goto rst_nsk; 123 + if (rs_tcp->t_sock) { 124 + /* Need to resolve a duelling SYN between peers. 125 + * We have an outstanding SYN to this peer, which may 126 + * potentially have transitioned to the RDS_CONN_UP state, 127 + * so we must quiesce any send threads before resetting 128 + * c_transport_data. 
129 + */ 130 + wait_event(conn->c_waitq, 131 + !test_bit(RDS_IN_XMIT, &conn->c_flags)); 132 + if (ntohl(inet->inet_saddr) < ntohl(inet->inet_daddr)) { 133 + goto rst_nsk; 134 + } else if (rs_tcp->t_sock) { 135 + rds_tcp_restore_callbacks(rs_tcp->t_sock, rs_tcp); 136 + conn->c_outgoing = 0; 137 + } 138 + } 136 139 rds_tcp_set_callbacks(new_sock, conn); 137 - rds_connect_complete(conn); 140 + rds_connect_complete(conn); /* marks RDS_CONN_UP */ 138 141 new_sock = NULL; 139 142 ret = 0; 140 - 143 + goto out; 144 + rst_nsk: 145 + /* reset the newly returned accept sock and bail */ 146 + nsk = new_sock->sk; 147 + rds_tcp_stats_inc(s_tcp_listen_closed_stale); 148 + nsk->sk_user_data = NULL; 149 + nsk->sk_prot->disconnect(nsk, 0); 150 + tcp_done(nsk); 151 + new_sock = NULL; 152 + ret = 0; 141 153 out: 154 + if (rs_tcp) 155 + mutex_unlock(&rs_tcp->t_conn_lock); 142 156 if (new_sock) 143 157 sock_release(new_sock); 144 158 return ret;
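The tcp_listen.c hunk resolves a "duelling SYN" (both peers connecting to each other at once) with an address comparison: the accepting side resets the freshly accepted socket exactly when its own address is numerically lower than the peer's, so the two sides make opposite decisions and one connection survives. The tiebreak itself is just a host-byte-order compare; a standalone sketch with hypothetical names:

```c
#include <assert.h>
#include <stdint.h>

/* Portable stand-in for ntohl(): big-endian bytes to a host-order value. */
static uint32_t be32_to_host(const unsigned char b[4])
{
	return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
	       ((uint32_t)b[2] << 8) | (uint32_t)b[3];
}

/* Duelling-SYN tiebreak: reset the accepted socket iff the local address
 * is lower, so peers A and B never both keep (or both drop) a side. */
static int should_reset_accepted(const unsigned char saddr[4],
				 const unsigned char daddr[4])
{
	return be32_to_host(saddr) < be32_to_host(daddr);
}
```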
+59 -2
net/sched/sch_netem.c
··· 395 395 sch->q.qlen++; 396 396 } 397 397 398 + /* netem can't properly corrupt a megapacket (like we get from GSO), so instead 399 + * when we statistically choose to corrupt one, we instead segment it, returning 400 + * the first packet to be corrupted, and re-enqueue the remaining frames 401 + */ 402 + static struct sk_buff *netem_segment(struct sk_buff *skb, struct Qdisc *sch) 403 + { 404 + struct sk_buff *segs; 405 + netdev_features_t features = netif_skb_features(skb); 406 + 407 + segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK); 408 + 409 + if (IS_ERR_OR_NULL(segs)) { 410 + qdisc_reshape_fail(skb, sch); 411 + return NULL; 412 + } 413 + consume_skb(skb); 414 + return segs; 415 + } 416 + 398 417 /* 399 418 * Insert one skb into qdisc. 400 419 * Note: parent depends on return value to account for queue length. ··· 426 407 /* We don't fill cb now as skb_unshare() may invalidate it */ 427 408 struct netem_skb_cb *cb; 428 409 struct sk_buff *skb2; 410 + struct sk_buff *segs = NULL; 411 + unsigned int len = 0, last_len, prev_len = qdisc_pkt_len(skb); 412 + int nb = 0; 429 413 int count = 1; 414 + int rc = NET_XMIT_SUCCESS; 430 415 431 416 /* Random duplication */ 432 417 if (q->duplicate && q->duplicate >= get_crandom(&q->dup_cor)) ··· 476 453 * do it now in software before we mangle it. 
477 454 */ 478 455 if (q->corrupt && q->corrupt >= get_crandom(&q->corrupt_cor)) { 456 + if (skb_is_gso(skb)) { 457 + segs = netem_segment(skb, sch); 458 + if (!segs) 459 + return NET_XMIT_DROP; 460 + } else { 461 + segs = skb; 462 + } 463 + 464 + skb = segs; 465 + segs = segs->next; 466 + 479 467 if (!(skb = skb_unshare(skb, GFP_ATOMIC)) || 480 468 (skb->ip_summed == CHECKSUM_PARTIAL && 481 - skb_checksum_help(skb))) 482 - return qdisc_drop(skb, sch); 469 + skb_checksum_help(skb))) { 470 + rc = qdisc_drop(skb, sch); 471 + goto finish_segs; 472 + } 483 473 484 474 skb->data[prandom_u32() % skb_headlen(skb)] ^= 485 475 1<<(prandom_u32() % 8); ··· 552 516 sch->qstats.requeues++; 553 517 } 554 518 519 + finish_segs: 520 + if (segs) { 521 + while (segs) { 522 + skb2 = segs->next; 523 + segs->next = NULL; 524 + qdisc_skb_cb(segs)->pkt_len = segs->len; 525 + last_len = segs->len; 526 + rc = qdisc_enqueue(segs, sch); 527 + if (rc != NET_XMIT_SUCCESS) { 528 + if (net_xmit_drop_count(rc)) 529 + qdisc_qstats_drop(sch); 530 + } else { 531 + nb++; 532 + len += last_len; 533 + } 534 + segs = skb2; 535 + } 536 + sch->q.qlen += nb; 537 + if (nb > 1) 538 + qdisc_tree_reduce_backlog(sch, 1 - nb, prev_len - len); 539 + } 555 540 return NET_XMIT_SUCCESS; 556 541 } 557 542
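When netem segments a GSO packet in order to corrupt it, the parent qdiscs still account for the single original packet, which is why the hunk ends with qdisc_tree_reduce_backlog(sch, 1 - nb, prev_len - len): a negative "reduction" recording the extra nb - 1 packets and len - prev_len bytes. A small model of that arithmetic (struct and names hypothetical):

```c
#include <assert.h>

/* Toy accounting for netem's GSO segmentation: one queued packet of
 * prev_len bytes becomes nb segments totalling len bytes, so parents
 * must learn about (nb - 1) extra packets and (len - prev_len) extra
 * bytes -- expressed as a "reduction" by the negated deltas. */
struct backlog_adjust {
	int dpackets;	/* packet delta passed to qdisc_tree_reduce_backlog() */
	int dbytes;	/* byte delta passed alongside it */
};

static struct backlog_adjust segment_adjust(int nb, unsigned int prev_len,
					    unsigned int len)
{
	struct backlog_adjust a = { 1 - nb, (int)prev_len - (int)len };
	return a;
}
```

Splitting one 3000-byte packet into three segments totalling 3120 bytes, for example, yields a reduction of -2 packets and -120 bytes, i.e. an increase of both.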
+5
net/tipc/node.c
··· 1469 1469 int bearer_id = b->identity; 1470 1470 struct tipc_link_entry *le; 1471 1471 u16 bc_ack = msg_bcast_ack(hdr); 1472 + u32 self = tipc_own_addr(net); 1472 1473 int rc = 0; 1473 1474 1474 1475 __skb_queue_head_init(&xmitq); ··· 1485 1484 else 1486 1485 return tipc_node_bc_rcv(net, skb, bearer_id); 1487 1486 } 1487 + 1488 + /* Discard unicast link messages destined for another node */ 1489 + if (unlikely(!msg_short(hdr) && (msg_destnode(hdr) != self))) 1490 + goto discard; 1488 1491 1489 1492 /* Locate neighboring node that sent packet */ 1490 1493 n = tipc_node_find(net, msg_prevnode(hdr));
-1
samples/bpf/trace_output_kern.c
··· 18 18 u64 cookie; 19 19 } data; 20 20 21 - memset(&data, 0, sizeof(data)); 22 21 data.pid = bpf_get_current_pid_tgid(); 23 22 data.cookie = 0x12345678; 24 23
+2 -3
sound/hda/ext/hdac_ext_stream.c
··· 104 104 */ 105 105 void snd_hdac_stream_free_all(struct hdac_ext_bus *ebus) 106 106 { 107 - struct hdac_stream *s; 107 + struct hdac_stream *s, *_s; 108 108 struct hdac_ext_stream *stream; 109 109 struct hdac_bus *bus = ebus_to_hbus(ebus); 110 110 111 - while (!list_empty(&bus->stream_list)) { 112 - s = list_first_entry(&bus->stream_list, struct hdac_stream, list); 111 + list_for_each_entry_safe(s, _s, &bus->stream_list, list) { 113 112 stream = stream_to_hdac_ext_stream(s); 114 113 snd_hdac_ext_stream_decouple(ebus, stream, false); 115 114 list_del(&s->list);
+50 -10
sound/hda/hdac_i915.c
··· 20 20 #include <sound/core.h> 21 21 #include <sound/hdaudio.h> 22 22 #include <sound/hda_i915.h> 23 + #include <sound/hda_register.h> 23 24 24 25 static struct i915_audio_component *hdac_acomp; 25 26 ··· 98 97 } 99 98 EXPORT_SYMBOL_GPL(snd_hdac_display_power); 100 99 100 + #define CONTROLLER_IN_GPU(pci) (((pci)->device == 0x0a0c) || \ 101 + ((pci)->device == 0x0c0c) || \ 102 + ((pci)->device == 0x0d0c) || \ 103 + ((pci)->device == 0x160c)) 104 + 101 105 /** 102 - * snd_hdac_get_display_clk - Get CDCLK in kHz 106 + * snd_hdac_i915_set_bclk - Reprogram BCLK for HSW/BDW 103 107 * @bus: HDA core bus 104 108 * 105 - * This function is supposed to be used only by a HD-audio controller 106 - * driver that needs the interaction with i915 graphics. 109 + * Intel HSW/BDW display HDA controller is in GPU. Both its power and link BCLK 110 + * depends on GPU. Two Extended Mode registers EM4 (M value) and EM5 (N Value) 111 + * are used to convert CDClk (Core Display Clock) to 24MHz BCLK: 112 + * BCLK = CDCLK * M / N 113 + * The values will be lost when the display power well is disabled and need to 114 + * be restored to avoid abnormal playback speed. 107 115 * 108 - * This function queries CDCLK value in kHz from the graphics driver and 109 - * returns the value. A negative code is returned in error. 116 + * Call this function at initializing and changing power well, as well as 117 + * at ELD notifier for the hotplug. 
110 118 */ 111 - int snd_hdac_get_display_clk(struct hdac_bus *bus) 119 + void snd_hdac_i915_set_bclk(struct hdac_bus *bus) 112 120 { 113 121 struct i915_audio_component *acomp = bus->audio_component; 122 + struct pci_dev *pci = to_pci_dev(bus->dev); 123 + int cdclk_freq; 124 + unsigned int bclk_m, bclk_n; 114 125 115 - if (!acomp || !acomp->ops) 116 - return -ENODEV; 126 + if (!acomp || !acomp->ops || !acomp->ops->get_cdclk_freq) 127 + return; /* only for i915 binding */ 128 + if (!CONTROLLER_IN_GPU(pci)) 129 + return; /* only HSW/BDW */ 117 130 118 - return acomp->ops->get_cdclk_freq(acomp->dev); 131 + cdclk_freq = acomp->ops->get_cdclk_freq(acomp->dev); 132 + switch (cdclk_freq) { 133 + case 337500: 134 + bclk_m = 16; 135 + bclk_n = 225; 136 + break; 137 + 138 + case 450000: 139 + default: /* default CDCLK 450MHz */ 140 + bclk_m = 4; 141 + bclk_n = 75; 142 + break; 143 + 144 + case 540000: 145 + bclk_m = 4; 146 + bclk_n = 90; 147 + break; 148 + 149 + case 675000: 150 + bclk_m = 8; 151 + bclk_n = 225; 152 + break; 153 + } 154 + 155 + snd_hdac_chip_writew(bus, HSW_EM4, bclk_m); 156 + snd_hdac_chip_writew(bus, HSW_EM5, bclk_n); 119 157 } 120 - EXPORT_SYMBOL_GPL(snd_hdac_get_display_clk); 158 + EXPORT_SYMBOL_GPL(snd_hdac_i915_set_bclk); 121 159 122 160 /* There is a fixed mapping between audio pin node and display port 123 161 * on current Intel platforms:
+4 -52
sound/pci/hda/hda_intel.c
··· 857 857 #define azx_del_card_list(chip) /* NOP */ 858 858 #endif /* CONFIG_PM */ 859 859 860 - /* Intel HSW/BDW display HDA controller is in GPU. Both its power and link BCLK 861 - * depends on GPU. Two Extended Mode registers EM4 (M value) and EM5 (N Value) 862 - * are used to convert CDClk (Core Display Clock) to 24MHz BCLK: 863 - * BCLK = CDCLK * M / N 864 - * The values will be lost when the display power well is disabled and need to 865 - * be restored to avoid abnormal playback speed. 866 - */ 867 - static void haswell_set_bclk(struct hda_intel *hda) 868 - { 869 - struct azx *chip = &hda->chip; 870 - int cdclk_freq; 871 - unsigned int bclk_m, bclk_n; 872 - 873 - if (!hda->need_i915_power) 874 - return; 875 - 876 - cdclk_freq = snd_hdac_get_display_clk(azx_bus(chip)); 877 - switch (cdclk_freq) { 878 - case 337500: 879 - bclk_m = 16; 880 - bclk_n = 225; 881 - break; 882 - 883 - case 450000: 884 - default: /* default CDCLK 450MHz */ 885 - bclk_m = 4; 886 - bclk_n = 75; 887 - break; 888 - 889 - case 540000: 890 - bclk_m = 4; 891 - bclk_n = 90; 892 - break; 893 - 894 - case 675000: 895 - bclk_m = 8; 896 - bclk_n = 225; 897 - break; 898 - } 899 - 900 - azx_writew(chip, HSW_EM4, bclk_m); 901 - azx_writew(chip, HSW_EM5, bclk_n); 902 - } 903 - 904 860 #if defined(CONFIG_PM_SLEEP) || defined(SUPPORT_VGA_SWITCHEROO) 905 861 /* 906 862 * power management ··· 914 958 if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL 915 959 && hda->need_i915_power) { 916 960 snd_hdac_display_power(azx_bus(chip), true); 917 - haswell_set_bclk(hda); 961 + snd_hdac_i915_set_bclk(azx_bus(chip)); 918 962 } 919 963 if (chip->msi) 920 964 if (pci_enable_msi(pci) < 0) ··· 1014 1058 bus = azx_bus(chip); 1015 1059 if (hda->need_i915_power) { 1016 1060 snd_hdac_display_power(bus, true); 1017 - haswell_set_bclk(hda); 1061 + snd_hdac_i915_set_bclk(bus); 1018 1062 } else { 1019 1063 /* toggle codec wakeup bit for STATESTS read */ 1020 1064 snd_hdac_set_codec_wakeup(bus, true); ··· 1752 1796 /* initialize chip */ 1753 1797 azx_init_pci(chip); 1754 1798 1755 - if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL) { 1756 - struct hda_intel *hda; 1757 - 1758 - hda = container_of(chip, struct hda_intel, chip); 1759 - haswell_set_bclk(hda); 1760 - } 1799 + if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL) 1800 + snd_hdac_i915_set_bclk(bus); 1761 1801 1762 1802 hda_intel_init_chip(chip, (probe_only[dev] & 2) == 0); 1763 1803
+1
sound/pci/hda/patch_hdmi.c
··· 2232 2232 if (atomic_read(&(codec)->core.in_pm)) 2233 2233 return; 2234 2234 2235 + snd_hdac_i915_set_bclk(&codec->bus->core); 2235 2236 check_presence_and_report(codec, pin_nid); 2236 2237 } 2237 2238
+1
sound/pci/hda/patch_realtek.c
··· 5584 5584 SND_PCI_QUIRK(0x17aa, 0x5034, "Thinkpad T450", ALC292_FIXUP_TPT440_DOCK), 5585 5585 SND_PCI_QUIRK(0x17aa, 0x5036, "Thinkpad T450s", ALC292_FIXUP_TPT440_DOCK), 5586 5586 SND_PCI_QUIRK(0x17aa, 0x503c, "Thinkpad L450", ALC292_FIXUP_TPT440_DOCK), 5587 + SND_PCI_QUIRK(0x17aa, 0x504a, "ThinkPad X260", ALC292_FIXUP_TPT440_DOCK), 5587 5588 SND_PCI_QUIRK(0x17aa, 0x504b, "Thinkpad", ALC293_FIXUP_LENOVO_SPK_NOISE), 5588 5589 SND_PCI_QUIRK(0x17aa, 0x5109, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 5589 5590 SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
+1
sound/soc/codecs/Kconfig
··· 629 629 630 630 config SND_SOC_RT5616 631 631 tristate "Realtek RT5616 CODEC" 632 + depends on I2C 632 633 633 634 config SND_SOC_RT5631 634 635 tristate "Realtek ALC5631/RT5631 CODEC"
+12
sound/soc/codecs/arizona.c
··· 249 249 } 250 250 EXPORT_SYMBOL_GPL(arizona_init_spk); 251 251 252 + int arizona_free_spk(struct snd_soc_codec *codec) 253 + { 254 + struct arizona_priv *priv = snd_soc_codec_get_drvdata(codec); 255 + struct arizona *arizona = priv->arizona; 256 + 257 + arizona_free_irq(arizona, ARIZONA_IRQ_SPK_OVERHEAT_WARN, arizona); 258 + arizona_free_irq(arizona, ARIZONA_IRQ_SPK_OVERHEAT, arizona); 259 + 260 + return 0; 261 + } 262 + EXPORT_SYMBOL_GPL(arizona_free_spk); 263 + 252 264 static const struct snd_soc_dapm_route arizona_mono_routes[] = { 253 265 { "OUT1R", NULL, "OUT1L" }, 254 266 { "OUT2R", NULL, "OUT2L" },
+2
sound/soc/codecs/arizona.h
··· 307 307 extern int arizona_init_gpio(struct snd_soc_codec *codec); 308 308 extern int arizona_init_mono(struct snd_soc_codec *codec); 309 309 310 + extern int arizona_free_spk(struct snd_soc_codec *codec); 311 + 310 312 extern int arizona_init_dai(struct arizona_priv *priv, int dai); 311 313 312 314 int arizona_set_output_mode(struct snd_soc_codec *codec, int output,
+13 -4
sound/soc/codecs/cs35l32.c
··· 274 274 if (of_property_read_u32(np, "cirrus,sdout-share", &val) >= 0) 275 275 pdata->sdout_share = val; 276 276 277 - of_property_read_u32(np, "cirrus,boost-manager", &val); 277 + if (of_property_read_u32(np, "cirrus,boost-manager", &val)) 278 + val = -1u; 279 + 278 280 switch (val) { 279 281 case CS35L32_BOOST_MGR_AUTO: 280 282 case CS35L32_BOOST_MGR_AUTO_AUDIO: ··· 284 282 case CS35L32_BOOST_MGR_FIXED: 285 283 pdata->boost_mng = val; 286 284 break; 285 + case -1u: 287 286 default: 288 287 dev_err(&i2c_client->dev, 289 288 "Wrong cirrus,boost-manager DT value %d\n", val); 290 289 pdata->boost_mng = CS35L32_BOOST_MGR_BYPASS; 291 290 } 292 291 293 - of_property_read_u32(np, "cirrus,sdout-datacfg", &val); 292 + if (of_property_read_u32(np, "cirrus,sdout-datacfg", &val)) 293 + val = -1u; 294 294 switch (val) { 295 295 case CS35L32_DATA_CFG_LR_VP: 296 296 case CS35L32_DATA_CFG_LR_STAT: ··· 300 296 case CS35L32_DATA_CFG_LR_VPSTAT: 301 297 pdata->sdout_datacfg = val; 302 298 break; 299 + case -1u: 303 300 default: 304 301 dev_err(&i2c_client->dev, 305 302 "Wrong cirrus,sdout-datacfg DT value %d\n", val); 306 303 pdata->sdout_datacfg = CS35L32_DATA_CFG_LR; 307 304 } 308 305 309 - of_property_read_u32(np, "cirrus,battery-threshold", &val); 306 + if (of_property_read_u32(np, "cirrus,battery-threshold", &val)) 307 + val = -1u; 310 308 switch (val) { 311 309 case CS35L32_BATT_THRESH_3_1V: 312 310 case CS35L32_BATT_THRESH_3_2V: ··· 316 310 case CS35L32_BATT_THRESH_3_4V: 317 311 pdata->batt_thresh = val; 318 312 break; 313 + case -1u: 319 314 default: 320 315 dev_err(&i2c_client->dev, 321 316 "Wrong cirrus,battery-threshold DT value %d\n", val); 322 317 pdata->batt_thresh = CS35L32_BATT_THRESH_3_3V; 323 318 } 324 319 325 - of_property_read_u32(np, "cirrus,battery-recovery", &val); 320 + if (of_property_read_u32(np, "cirrus,battery-recovery", &val)) 321 + val = -1u; 326 322 switch (val) { 327 323 case CS35L32_BATT_RECOV_3_1V: 328 324 case CS35L32_BATT_RECOV_3_2V: ··· 334 326 case CS35L32_BATT_RECOV_3_6V: 335 327 pdata->batt_recov = val; 336 328 break; 329 + case -1u: 337 330 default: 338 331 dev_err(&i2c_client->dev, 339 332 "Wrong cirrus,battery-recovery DT value %d\n", val);
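Each cs35l32 hunk follows the same shape: a failed of_property_read_u32() forces val to the out-of-range sentinel -1u, and the added `case -1u:` label routes a missing property onto the same warn-and-fallback path as a bogus value, instead of switching on an uninitialized variable. The pattern reduced to a standalone sketch — the 0..3 "valid codes" are placeholders, not the real CS35L32_* constants:

```c
#include <assert.h>

/* Sketch of the DT-parsing fix: when the property read fails, val is
 * forced to -1u so the switch deterministically takes the fallback
 * branch rather than reading an uninitialized value. */
static unsigned int parse_prop(int read_failed, unsigned int raw,
			       unsigned int fallback)
{
	unsigned int val = read_failed ? -1u : raw;

	switch (val) {
	case 0: case 1: case 2: case 3:
		return val;		/* valid value from the DT */
	case -1u:			/* property missing */
	default:			/* property present but bogus */
		return fallback;	/* the driver logs dev_err() here */
	}
}
```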
+3
sound/soc/codecs/cs47l24.c
··· 1108 1108 priv->core.arizona->dapm = NULL; 1109 1109 1110 1110 arizona_free_irq(arizona, ARIZONA_IRQ_DSP_IRQ1, priv); 1111 + 1112 + arizona_free_spk(codec); 1113 + 1111 1114 return 0; 1112 1115 } 1113 1116
+42 -52
sound/soc/codecs/hdac_hdmi.c
··· 1420 1420 } 1421 1421 1422 1422 #ifdef CONFIG_PM 1423 - static int hdmi_codec_resume(struct snd_soc_codec *codec) 1423 + static int hdmi_codec_prepare(struct device *dev) 1424 1424 { 1425 - struct hdac_ext_device *edev = snd_soc_codec_get_drvdata(codec); 1425 + struct hdac_ext_device *edev = to_hda_ext_device(dev); 1426 + struct hdac_device *hdac = &edev->hdac; 1427 + 1428 + pm_runtime_get_sync(&edev->hdac.dev); 1429 + 1430 + /* 1431 + * Power down afg. 1432 + * codec_read is preferred over codec_write to set the power state. 1433 + * This way verb is send to set the power state and response 1434 + * is received. So setting power state is ensured without using loop 1435 + * to read the state. 1436 + */ 1437 + snd_hdac_codec_read(hdac, hdac->afg, 0, AC_VERB_SET_POWER_STATE, 1438 + AC_PWRST_D3); 1439 + 1440 + return 0; 1441 + } 1442 + 1443 + static void hdmi_codec_complete(struct device *dev) 1444 + { 1445 + struct hdac_ext_device *edev = to_hda_ext_device(dev); 1426 1446 struct hdac_hdmi_priv *hdmi = edev->private_data; 1427 1447 struct hdac_hdmi_pin *pin; 1428 1448 struct hdac_device *hdac = &edev->hdac; 1429 - struct hdac_bus *bus = hdac->bus; 1430 - int err; 1431 - unsigned long timeout; 1449 + 1450 + /* Power up afg */ 1451 + snd_hdac_codec_read(hdac, hdac->afg, 0, AC_VERB_SET_POWER_STATE, 1452 + AC_PWRST_D0); 1432 1453 1433 1454 hdac_hdmi_skl_enable_all_pins(&edev->hdac); 1434 1455 hdac_hdmi_skl_enable_dp12(&edev->hdac); 1435 - 1436 - /* Power up afg */ 1437 - if (!snd_hdac_check_power_state(hdac, hdac->afg, AC_PWRST_D0)) { 1438 - 1439 - snd_hdac_codec_write(hdac, hdac->afg, 0, 1440 - AC_VERB_SET_POWER_STATE, AC_PWRST_D0); 1441 - 1442 - /* Wait till power state is set to D0 */ 1443 - timeout = jiffies + msecs_to_jiffies(1000); 1444 - while (!snd_hdac_check_power_state(hdac, hdac->afg, AC_PWRST_D0) 1445 - && time_before(jiffies, timeout)) { 1446 - msleep(50); 1447 - } 1448 - } 1449 1456 1450 1457 /* 1451 1458 * As the ELD notify callback request is not entertained while the ··· 1462 1455 list_for_each_entry(pin, &hdmi->pin_list, head) 1463 1456 hdac_hdmi_present_sense(pin, 1); 1464 1457 1465 - /* 1466 - * Codec power is turned ON during controller resume. 1467 - * Turn it OFF here 1468 - */ 1469 - err = snd_hdac_display_power(bus, false); 1470 - if (err < 0) { 1471 - dev_err(bus->dev, 1472 - "Cannot turn OFF display power on i915, err: %d\n", 1473 - err); 1474 - return err; 1475 - } 1476 - 1477 - return 0; 1458 + pm_runtime_put_sync(&edev->hdac.dev); 1478 1459 } 1479 1460 #else 1480 - #define hdmi_codec_resume NULL 1461 + #define hdmi_codec_prepare NULL 1462 + #define hdmi_codec_complete NULL 1481 1463 #endif 1482 1464 1483 1465 static struct snd_soc_codec_driver hdmi_hda_codec = { 1484 1466 .probe = hdmi_codec_probe, 1485 1467 .remove = hdmi_codec_remove, 1486 - .resume = hdmi_codec_resume, 1487 1468 .idle_bias_off = true, 1488 1469 }; 1489 1470 ··· 1556 1561 struct hdac_ext_device *edev = to_hda_ext_device(dev); 1557 1562 struct hdac_device *hdac = &edev->hdac; 1558 1563 struct hdac_bus *bus = hdac->bus; 1559 - unsigned long timeout; 1560 1564 int err; 1561 1565 1562 1566 dev_dbg(dev, "Enter: %s\n", __func__); ··· 1564 1570 if (!bus) 1565 1571 return 0; 1566 1572 1567 - /* Power down afg */ 1568 - if (!snd_hdac_check_power_state(hdac, hdac->afg, AC_PWRST_D3)) { 1569 - snd_hdac_codec_write(hdac, hdac->afg, 0, 1570 - AC_VERB_SET_POWER_STATE, AC_PWRST_D3); 1571 - 1572 - /* Wait till power state is set to D3 */ 1573 - timeout = jiffies + msecs_to_jiffies(1000); 1574 - while (!snd_hdac_check_power_state(hdac, hdac->afg, AC_PWRST_D3) 1575 - && time_before(jiffies, timeout)) { 1576 - 1577 - msleep(50); 1578 - } 1579 - } 1580 - 1573 + /* 1574 + * Power down afg. 1575 + * codec_read is preferred over codec_write to set the power state. 1576 + * This way verb is send to set the power state and response 1577 + * is received. So setting power state is ensured without using loop 1578 + * to read the state. 
1579 + */ 1580 + snd_hdac_codec_read(hdac, hdac->afg, 0, AC_VERB_SET_POWER_STATE, 1581 + AC_PWRST_D3); 1581 1582 err = snd_hdac_display_power(bus, false); 1582 1583 if (err < 0) { 1583 1584 dev_err(bus->dev, "Cannot turn on display power on i915\n"); ··· 1605 1616 hdac_hdmi_skl_enable_dp12(&edev->hdac); 1606 1617 1607 1618 /* Power up afg */ 1608 - if (!snd_hdac_check_power_state(hdac, hdac->afg, AC_PWRST_D0)) 1609 - snd_hdac_codec_write(hdac, hdac->afg, 0, 1610 - AC_VERB_SET_POWER_STATE, AC_PWRST_D0); 1619 + snd_hdac_codec_read(hdac, hdac->afg, 0, AC_VERB_SET_POWER_STATE, 1620 + AC_PWRST_D0); 1611 1621 1612 1622 return 0; 1613 1623 } ··· 1617 1629 1618 1630 static const struct dev_pm_ops hdac_hdmi_pm = { 1619 1631 SET_RUNTIME_PM_OPS(hdac_hdmi_runtime_suspend, hdac_hdmi_runtime_resume, NULL) 1632 + .prepare = hdmi_codec_prepare, 1633 + .complete = hdmi_codec_complete, 1620 1634 }; 1621 1635 1622 1636 static const struct hda_device_id hdmi_list[] = {
+71 -55
sound/soc/codecs/nau8825.c
··· 343 343 SND_SOC_DAPM_SUPPLY("ADC Power", NAU8825_REG_ANALOG_ADC_2, 6, 0, NULL, 344 344 0), 345 345 346 - /* ADC for button press detection */ 347 - SND_SOC_DAPM_ADC("SAR", NULL, NAU8825_REG_SAR_CTRL, 348 - NAU8825_SAR_ADC_EN_SFT, 0), 346 + /* ADC for button press detection. A dapm supply widget is used to 347 + * prevent dapm_power_widgets keeping the codec at SND_SOC_BIAS_ON 348 + * during suspend. 349 + */ 350 + SND_SOC_DAPM_SUPPLY("SAR", NAU8825_REG_SAR_CTRL, 351 + NAU8825_SAR_ADC_EN_SFT, 0, NULL, 0), 349 352 350 353 SND_SOC_DAPM_PGA_S("ADACL", 2, NAU8825_REG_RDAC, 12, 0, NULL, 0), 351 354 SND_SOC_DAPM_PGA_S("ADACR", 2, NAU8825_REG_RDAC, 13, 0, NULL, 0), ··· 610 607 611 608 static void nau8825_restart_jack_detection(struct regmap *regmap) 612 609 { 610 + /* Chip needs one FSCLK cycle in order to generate interrupts, 611 + * as we cannot guarantee one will be provided by the system. Turning 612 + * master mode on then off enables us to generate that FSCLK cycle 613 + * with a minimum of contention on the clock bus. 614 + */ 615 + regmap_update_bits(regmap, NAU8825_REG_I2S_PCM_CTRL2, 616 + NAU8825_I2S_MS_MASK, NAU8825_I2S_MS_MASTER); 617 + regmap_update_bits(regmap, NAU8825_REG_I2S_PCM_CTRL2, 618 + NAU8825_I2S_MS_MASK, NAU8825_I2S_MS_SLAVE); 619 + 613 620 /* this will restart the entire jack detection process including MIC/GND 614 621 * switching and create interrupts. We have to go from 0 to 1 and back 615 622 * to 0 to restart. 
··· 741 728 struct regmap *regmap = nau8825->regmap; 742 729 int active_irq, clear_irq = 0, event = 0, event_mask = 0; 743 730 744 - regmap_read(regmap, NAU8825_REG_IRQ_STATUS, &active_irq); 731 + if (regmap_read(regmap, NAU8825_REG_IRQ_STATUS, &active_irq)) { 732 + dev_err(nau8825->dev, "failed to read irq status\n"); 733 + return IRQ_NONE; 734 + } 745 735 746 736 if ((active_irq & NAU8825_JACK_EJECTION_IRQ_MASK) == 747 737 NAU8825_JACK_EJECTION_DETECTED) { ··· 1157 1141 return ret; 1158 1142 } 1159 1143 } 1160 - 1161 - ret = regcache_sync(nau8825->regmap); 1162 - if (ret) { 1163 - dev_err(codec->dev, 1164 - "Failed to sync cache: %d\n", ret); 1165 - return ret; 1166 - } 1167 1144 } 1168 - 1169 1145 break; 1170 1146 1171 1147 case SND_SOC_BIAS_OFF: 1172 1148 if (nau8825->mclk_freq) 1173 1149 clk_disable_unprepare(nau8825->mclk); 1174 - 1175 - regcache_mark_dirty(nau8825->regmap); 1176 1150 break; 1177 1151 } 1178 1152 return 0; 1179 1153 } 1154 + 1155 + #ifdef CONFIG_PM 1156 + static int nau8825_suspend(struct snd_soc_codec *codec) 1157 + { 1158 + struct nau8825 *nau8825 = snd_soc_codec_get_drvdata(codec); 1159 + 1160 + disable_irq(nau8825->irq); 1161 + regcache_cache_only(nau8825->regmap, true); 1162 + regcache_mark_dirty(nau8825->regmap); 1163 + 1164 + return 0; 1165 + } 1166 + 1167 + static int nau8825_resume(struct snd_soc_codec *codec) 1168 + { 1169 + struct nau8825 *nau8825 = snd_soc_codec_get_drvdata(codec); 1170 + 1171 + /* The chip may lose power and reset in S3. regcache_sync restores 1172 + * register values including configurations for sysclk, irq, and 1173 + * jack/button detection. 1174 + */ 1175 + regcache_cache_only(nau8825->regmap, false); 1176 + regcache_sync(nau8825->regmap); 1177 + 1178 + /* Check the jack plug status directly. If the headset is unplugged 1179 + * during S3 when the chip has no power, there will be no jack 1180 + * detection irq even after the nau8825_restart_jack_detection below, 1181 + * because the chip just thinks no headset has ever been plugged in. 1182 + */ 1183 + if (!nau8825_is_jack_inserted(nau8825->regmap)) { 1184 + nau8825_eject_jack(nau8825); 1185 + snd_soc_jack_report(nau8825->jack, 0, SND_JACK_HEADSET); 1186 + } 1187 + 1188 + enable_irq(nau8825->irq); 1189 + 1190 + /* Run jack detection to check the type (OMTP or CTIA) of the headset 1191 + * if there is one. This handles the case where a different type of 1192 + * headset is plugged in during S3. This triggers an IRQ iff a headset 1193 + * is already plugged in. 1194 + */ 1195 + nau8825_restart_jack_detection(nau8825->regmap); 1196 + 1197 + return 0; 1198 + } 1199 + #else 1200 + #define nau8825_suspend NULL 1201 + #define nau8825_resume NULL 1202 + #endif 1180 1203 1181 1204 static struct snd_soc_codec_driver nau8825_codec_driver = { 1182 1205 .probe = nau8825_codec_probe, ··· 1223 1168 .set_pll = nau8825_set_pll, 1224 1169 .set_bias_level = nau8825_set_bias_level, 1225 1170 .suspend_bias_off = true, 1171 + .suspend = nau8825_suspend, 1172 + .resume = nau8825_resume, 1226 1173 1227 1174 .controls = nau8825_controls, 1228 1175 .num_controls = ARRAY_SIZE(nau8825_controls), ··· 1334 1277 regmap_update_bits(regmap, NAU8825_REG_ENA_CTRL, 1335 1278 NAU8825_ENABLE_DACR, NAU8825_ENABLE_DACR); 1336 1279 1337 - /* Chip needs one FSCLK cycle in order to generate interrupts, 1338 - * as we cannot guarantee one will be provided by the system. Turning 1339 - * master mode on then off enables us to generate that FSCLK cycle 1340 - * with a minimum of contention on the clock bus. 
1341 - */ 1342 - regmap_update_bits(regmap, NAU8825_REG_I2S_PCM_CTRL2, 1343 - NAU8825_I2S_MS_MASK, NAU8825_I2S_MS_MASTER); 1344 - regmap_update_bits(regmap, NAU8825_REG_I2S_PCM_CTRL2, 1345 - NAU8825_I2S_MS_MASK, NAU8825_I2S_MS_SLAVE); 1346 - 1347 1280 ret = devm_request_threaded_irq(nau8825->dev, nau8825->irq, NULL, 1348 1281 nau8825_interrupt, IRQF_TRIGGER_LOW | IRQF_ONESHOT, 1349 1282 "nau8825", nau8825); ··· 1401 1354 return 0; 1402 1355 } 1403 1356 1404 - #ifdef CONFIG_PM_SLEEP 1405 - static int nau8825_suspend(struct device *dev) 1406 - { 1407 - struct i2c_client *client = to_i2c_client(dev); 1408 - struct nau8825 *nau8825 = dev_get_drvdata(dev); 1409 - 1410 - disable_irq(client->irq); 1411 - regcache_cache_only(nau8825->regmap, true); 1412 - regcache_mark_dirty(nau8825->regmap); 1413 - 1414 - return 0; 1415 - } 1416 - 1417 - static int nau8825_resume(struct device *dev) 1418 - { 1419 - struct i2c_client *client = to_i2c_client(dev); 1420 - struct nau8825 *nau8825 = dev_get_drvdata(dev); 1421 - 1422 - regcache_cache_only(nau8825->regmap, false); 1423 - regcache_sync(nau8825->regmap); 1424 - enable_irq(client->irq); 1425 - 1426 - return 0; 1427 - } 1428 - #endif 1429 - 1430 - static const struct dev_pm_ops nau8825_pm = { 1431 - SET_SYSTEM_SLEEP_PM_OPS(nau8825_suspend, nau8825_resume) 1432 - }; 1433 - 1434 1357 static const struct i2c_device_id nau8825_i2c_ids[] = { 1435 1358 { "nau8825", 0 }, 1436 1359 { } ··· 1427 1410 .name = "nau8825", 1428 1411 .of_match_table = of_match_ptr(nau8825_of_ids), 1429 1412 .acpi_match_table = ACPI_PTR(nau8825_acpi_match), 1430 - .pm = &nau8825_pm, 1431 1413 }, 1432 1414 .probe = nau8825_i2c_probe, 1433 1415 .remove = nau8825_i2c_remove,
+1 -1
sound/soc/codecs/rt5640.c
···
359 359 
360 360 /* Interface data select */
361 361 static const char * const rt5640_data_select[] = {
362   - 	"Normal", "left copy to right", "right copy to left", "Swap"};
362   + 	"Normal", "Swap", "left copy to right", "right copy to left"};
363 363 
364 364 static SOC_ENUM_SINGLE_DECL(rt5640_if1_dac_enum, RT5640_DIG_INF_DATA,
365 365 	RT5640_IF1_DAC_SEL_SFT, rt5640_data_select);
+18 -18
sound/soc/codecs/rt5640.h
···
443 443 #define RT5640_IF1_DAC_SEL_MASK		(0x3 << 14)
444 444 #define RT5640_IF1_DAC_SEL_SFT			14
445 445 #define RT5640_IF1_DAC_SEL_NOR			(0x0 << 14)
446   - #define RT5640_IF1_DAC_SEL_L2R			(0x1 << 14)
447   - #define RT5640_IF1_DAC_SEL_R2L			(0x2 << 14)
448   - #define RT5640_IF1_DAC_SEL_SWAP		(0x3 << 14)
446   + #define RT5640_IF1_DAC_SEL_SWAP		(0x1 << 14)
447   + #define RT5640_IF1_DAC_SEL_L2R			(0x2 << 14)
448   + #define RT5640_IF1_DAC_SEL_R2L			(0x3 << 14)
449 449 #define RT5640_IF1_ADC_SEL_MASK		(0x3 << 12)
450 450 #define RT5640_IF1_ADC_SEL_SFT			12
451 451 #define RT5640_IF1_ADC_SEL_NOR			(0x0 << 12)
452   - #define RT5640_IF1_ADC_SEL_L2R			(0x1 << 12)
453   - #define RT5640_IF1_ADC_SEL_R2L			(0x2 << 12)
454   - #define RT5640_IF1_ADC_SEL_SWAP		(0x3 << 12)
452   + #define RT5640_IF1_ADC_SEL_SWAP		(0x1 << 12)
453   + #define RT5640_IF1_ADC_SEL_L2R			(0x2 << 12)
454   + #define RT5640_IF1_ADC_SEL_R2L			(0x3 << 12)
455 455 #define RT5640_IF2_DAC_SEL_MASK		(0x3 << 10)
456 456 #define RT5640_IF2_DAC_SEL_SFT			10
457 457 #define RT5640_IF2_DAC_SEL_NOR			(0x0 << 10)
458   - #define RT5640_IF2_DAC_SEL_L2R			(0x1 << 10)
459   - #define RT5640_IF2_DAC_SEL_R2L			(0x2 << 10)
460   - #define RT5640_IF2_DAC_SEL_SWAP		(0x3 << 10)
458   + #define RT5640_IF2_DAC_SEL_SWAP		(0x1 << 10)
459   + #define RT5640_IF2_DAC_SEL_L2R			(0x2 << 10)
460   + #define RT5640_IF2_DAC_SEL_R2L			(0x3 << 10)
461 461 #define RT5640_IF2_ADC_SEL_MASK		(0x3 << 8)
462 462 #define RT5640_IF2_ADC_SEL_SFT			8
463 463 #define RT5640_IF2_ADC_SEL_NOR			(0x0 << 8)
464   - #define RT5640_IF2_ADC_SEL_L2R			(0x1 << 8)
465   - #define RT5640_IF2_ADC_SEL_R2L			(0x2 << 8)
466   - #define RT5640_IF2_ADC_SEL_SWAP		(0x3 << 8)
464   + #define RT5640_IF2_ADC_SEL_SWAP		(0x1 << 8)
465   + #define RT5640_IF2_ADC_SEL_L2R			(0x2 << 8)
466   + #define RT5640_IF2_ADC_SEL_R2L			(0x3 << 8)
467 467 #define RT5640_IF3_DAC_SEL_MASK		(0x3 << 6)
468 468 #define RT5640_IF3_DAC_SEL_SFT			6
469 469 #define RT5640_IF3_DAC_SEL_NOR			(0x0 << 6)
470   - #define RT5640_IF3_DAC_SEL_L2R			(0x1 << 6)
471   - #define RT5640_IF3_DAC_SEL_R2L			(0x2 << 6)
472   - #define RT5640_IF3_DAC_SEL_SWAP		(0x3 << 6)
470   + #define RT5640_IF3_DAC_SEL_SWAP		(0x1 << 6)
471   + #define RT5640_IF3_DAC_SEL_L2R			(0x2 << 6)
472   + #define RT5640_IF3_DAC_SEL_R2L			(0x3 << 6)
473 473 #define RT5640_IF3_ADC_SEL_MASK		(0x3 << 4)
474 474 #define RT5640_IF3_ADC_SEL_SFT			4
475 475 #define RT5640_IF3_ADC_SEL_NOR			(0x0 << 4)
476   - #define RT5640_IF3_ADC_SEL_L2R			(0x1 << 4)
477   - #define RT5640_IF3_ADC_SEL_R2L			(0x2 << 4)
478   - #define RT5640_IF3_ADC_SEL_SWAP		(0x3 << 4)
476   + #define RT5640_IF3_ADC_SEL_SWAP		(0x1 << 4)
477   + #define RT5640_IF3_ADC_SEL_L2R			(0x2 << 4)
478   + #define RT5640_IF3_ADC_SEL_R2L			(0x3 << 4)
479 479 
480 480 /* REC Left Mixer Control 1 (0x3b) */
481 481 #define RT5640_G_HP_L_RM_L_MASK		(0x7 << 13)
+5
sound/soc/codecs/wm5102.c
···
1955 1955 static int wm5102_codec_remove(struct snd_soc_codec *codec)
1956 1956 {
1957 1957 	struct wm5102_priv *priv = snd_soc_codec_get_drvdata(codec);
1958    + 	struct arizona *arizona = priv->core.arizona;
1958 1959 
1959 1960 	wm_adsp2_codec_remove(&priv->core.adsp[0], codec);
1960 1961 
1961 1962 	priv->core.arizona->dapm = NULL;
1963    + 
1964    + 	arizona_free_irq(arizona, ARIZONA_IRQ_DSP_IRQ1, priv);
1965    + 
1966    + 	arizona_free_spk(codec);
1962 1967 
1963 1968 	return 0;
1964 1969 }
+2
sound/soc/codecs/wm5110.c
···
2298 2298 
2299 2299 	arizona_free_irq(arizona, ARIZONA_IRQ_DSP_IRQ1, priv);
2300 2300 
2301    + 	arizona_free_spk(codec);
2302    + 
2301 2303 	return 0;
2302 2304 }
2303 2305 
+1 -1
sound/soc/codecs/wm8962.c
···
2471 2471 		break;
2472 2472 	default:
2473 2473 		dev_warn(codec->dev, "Unknown DSPCLK divisor read back\n");
2474    - 		dspclk = wm8962->sysclk;
2474    + 		dspclk = wm8962->sysclk_rate;
2475 2475 	}
2476 2476 
2477 2477 	dev_dbg(codec->dev, "DSPCLK is %dHz, BCLK %d\n", dspclk, wm8962->bclk);
+2
sound/soc/codecs/wm8997.c
···
1072 1072 
1073 1073 	priv->core.arizona->dapm = NULL;
1074 1074 
1075    + 	arizona_free_spk(codec);
1076    + 
1075 1077 	return 0;
1076 1078 }
1077 1079 
+2
sound/soc/codecs/wm8998.c
···
1324 1324 
1325 1325 	priv->core.arizona->dapm = NULL;
1326 1326 
1327    + 	arizona_free_spk(codec);
1328    + 
1327 1329 	return 0;
1328 1330 }
1329 1331 
-1
sound/soc/intel/Kconfig
···
163 163 	tristate
164 164 	select SND_HDA_EXT_CORE
165 165 	select SND_SOC_TOPOLOGY
166   - 	select SND_HDA_I915
167 166 	select SND_SOC_INTEL_SST
168 167 
169 168 config SND_SOC_INTEL_SKL_RT286_MACH
+1 -1
sound/soc/intel/haswell/sst-haswell-ipc.c
···
1345 1345 		return 0;
1346 1346 
1347 1347 	/* wait for pause to complete before we reset the stream */
1348    - 	while (stream->running && tries--)
1348    + 	while (stream->running && --tries)
1349 1349 		msleep(1);
1350 1350 	if (!tries) {
1351 1351 		dev_err(hsw->dev, "error: reset stream %d still running\n",
+5
sound/soc/intel/skylake/skl-sst-dsp.c
···
336 336 	skl_ipc_int_disable(dsp);
337 337 
338 338 	free_irq(dsp->irq, dsp);
339   + 	dsp->cl_dev.ops.cl_cleanup_controller(dsp);
340   + 	skl_cldma_int_disable(dsp);
341   + 	skl_ipc_op_int_disable(dsp);
342   + 	skl_ipc_int_disable(dsp);
343   + 
339 344 	skl_dsp_disable_core(dsp);
340 345 }
341 346 EXPORT_SYMBOL_GPL(skl_dsp_free);
+27 -13
sound/soc/intel/skylake/skl-topology.c
···
239 239 {
240 240 	int multiplier = 1;
241 241 	struct skl_module_fmt *in_fmt, *out_fmt;
242   + 	int in_rate, out_rate;
242 243 
243 244 
244 245 	/* Since fixups is applied to pin 0 only, ibs, obs needs
···
250 249 
251 250 	if (mcfg->m_type == SKL_MODULE_TYPE_SRCINT)
252 251 		multiplier = 5;
253   - 	mcfg->ibs = (in_fmt->s_freq / 1000) *
254   - 		(mcfg->in_fmt->channels) *
255   - 		(mcfg->in_fmt->bit_depth >> 3) *
256   - 		multiplier;
257 252 
258   - 	mcfg->obs = (mcfg->out_fmt->s_freq / 1000) *
259   - 		(mcfg->out_fmt->channels) *
260   - 		(mcfg->out_fmt->bit_depth >> 3) *
261   - 		multiplier;
253   + 	if (in_fmt->s_freq % 1000)
254   + 		in_rate = (in_fmt->s_freq / 1000) + 1;
255   + 	else
256   + 		in_rate = (in_fmt->s_freq / 1000);
257   + 
258   + 	mcfg->ibs = in_rate * (mcfg->in_fmt->channels) *
259   + 		(mcfg->in_fmt->bit_depth >> 3) *
260   + 		multiplier;
261   + 
262   + 	if (mcfg->out_fmt->s_freq % 1000)
263   + 		out_rate = (mcfg->out_fmt->s_freq / 1000) + 1;
264   + 	else
265   + 		out_rate = (mcfg->out_fmt->s_freq / 1000);
266   + 
267   + 	mcfg->obs = out_rate * (mcfg->out_fmt->channels) *
268   + 		(mcfg->out_fmt->bit_depth >> 3) *
269   + 		multiplier;
262 270 }
263 271 
264 272 static int skl_tplg_update_be_blob(struct snd_soc_dapm_widget *w,
···
495 485 	if (!skl_is_pipe_mcps_avail(skl, mconfig))
496 486 		return -ENOMEM;
497 487 
488   + 	skl_tplg_alloc_pipe_mcps(skl, mconfig);
489   + 
498 490 	if (mconfig->is_loadable && ctx->dsp->fw_ops.load_mod) {
499 491 		ret = ctx->dsp->fw_ops.load_mod(ctx->dsp,
500 492 			mconfig->id.module_id, mconfig->guid);
501 493 		if (ret < 0)
502 494 			return ret;
495   + 
496   + 		mconfig->m_state = SKL_MODULE_LOADED;
503 497 	}
504 498 
505 499 	/* update blob if blob is null for be with default value */
···
523 509 		ret = skl_tplg_set_module_params(w, ctx);
524 510 		if (ret < 0)
525 511 			return ret;
526   - 		skl_tplg_alloc_pipe_mcps(skl, mconfig);
527 512 	}
528 513 
529 514 	return 0;
···
537 524 	list_for_each_entry(w_module, &pipe->w_list, node) {
538 525 		mconfig = w_module->w->priv;
539 526 
540   - 		if (mconfig->is_loadable && ctx->dsp->fw_ops.unload_mod)
527   + 		if (mconfig->is_loadable && ctx->dsp->fw_ops.unload_mod &&
528   + 			mconfig->m_state > SKL_MODULE_UNINIT)
541 529 			return ctx->dsp->fw_ops.unload_mod(ctx->dsp,
542 530 				mconfig->id.module_id);
543 531 	}
···
571 557 
572 558 	if (!skl_is_pipe_mem_avail(skl, mconfig))
573 559 		return -ENOMEM;
560   + 
561   + 	skl_tplg_alloc_pipe_mem(skl, mconfig);
562   + 	skl_tplg_alloc_pipe_mcps(skl, mconfig);
574 563 
575 564 	/* Create a list of modules for pipe.
···
617 600 
618 601 		src_module = dst_module;
619 602 	}
620   - 
621   - 	skl_tplg_alloc_pipe_mem(skl, mconfig);
622   - 	skl_tplg_alloc_pipe_mcps(skl, mconfig);
623 603 
624 604 	return 0;
625 605 }
+4 -4
sound/soc/intel/skylake/skl-topology.h
···
274 274 
275 275 enum skl_module_state {
276 276 	SKL_MODULE_UNINIT = 0,
277   - 	SKL_MODULE_INIT_DONE = 1,
278   - 	SKL_MODULE_LOADED = 2,
279   - 	SKL_MODULE_UNLOADED = 3,
280   - 	SKL_MODULE_BIND_DONE = 4
277   + 	SKL_MODULE_LOADED = 1,
278   + 	SKL_MODULE_INIT_DONE = 2,
279   + 	SKL_MODULE_BIND_DONE = 3,
280   + 	SKL_MODULE_UNLOADED = 4,
281 281 };
282 282 
283 283 struct skl_module_cfg {
+23 -9
sound/soc/intel/skylake/skl.c
···
222 222 	struct hdac_ext_bus *ebus = pci_get_drvdata(pci);
223 223 	struct skl *skl = ebus_to_skl(ebus);
224 224 	struct hdac_bus *bus = ebus_to_hbus(ebus);
225   + 	int ret = 0;
225 226 
226 227 	/*
227 228 	 * Do not suspend if streams which are marked ignore suspend are
···
233 232 		enable_irq_wake(bus->irq);
234 233 		pci_save_state(pci);
235 234 		pci_disable_device(pci);
236   - 		return 0;
237 235 	} else {
238   - 		return _skl_suspend(ebus);
236   + 		ret = _skl_suspend(ebus);
237   + 		if (ret < 0)
238   + 			return ret;
239 239 	}
240   + 
241   + 	if (IS_ENABLED(CONFIG_SND_SOC_HDAC_HDMI)) {
242   + 		ret = snd_hdac_display_power(bus, false);
243   + 		if (ret < 0)
244   + 			dev_err(bus->dev,
245   + 				"Cannot turn OFF display power on i915\n");
246   + 	}
247   + 
248   + 	return ret;
240 249 }
241 250 
242 251 static int skl_resume(struct device *dev)
···
327 316 
328 317 	if (bus->irq >= 0)
329 318 		free_irq(bus->irq, (void *)bus);
330   - 	if (bus->remap_addr)
331   - 		iounmap(bus->remap_addr);
332   - 
333 319 	snd_hdac_bus_free_stream_pages(bus);
334 320 	snd_hdac_stream_free_all(ebus);
335 321 	snd_hdac_link_free_all(ebus);
322   + 
323   + 	if (bus->remap_addr)
324   + 		iounmap(bus->remap_addr);
325   + 
336 326 	pci_release_regions(skl->pci);
337 327 	pci_disable_device(skl->pci);
338 328 
339 329 	snd_hdac_ext_bus_exit(ebus);
340 330 
331   + 	if (IS_ENABLED(CONFIG_SND_SOC_HDAC_HDMI))
332   + 		snd_hdac_i915_exit(&ebus->bus);
341 333 	return 0;
342 334 }
343 335 
···
733 719 	if (skl->tplg)
734 720 		release_firmware(skl->tplg);
735 721 
736   - 	if (IS_ENABLED(CONFIG_SND_SOC_HDAC_HDMI))
737   - 		snd_hdac_i915_exit(&ebus->bus);
738   - 
739 722 	if (pci_dev_run_wake(pci))
740 723 		pm_runtime_get_noresume(&pci->dev);
741   - 	pci_dev_put(pci);
724   + 
725   + 	/* codec removal, invoke bus_device_remove */
726   + 	snd_hdac_ext_bus_device_remove(ebus);
727   + 
742 728 	skl_platform_unregister(&pci->dev);
743 729 	skl_free_dsp(skl);
744 730 	skl_machine_device_unregister(skl);
+7
sound/soc/soc-dapm.c
···
2188 2188 	int count = 0;
2189 2189 	char *state = "not set";
2190 2190 
2191    + 	/* card won't be set for the dummy component, as a spot fix
2192    + 	 * we're checking for that case specifically here but in future
2193    + 	 * we will ensure that the dummy component looks like others.
2194    + 	 */
2195    + 	if (!cmpnt->card)
2196    + 		return 0;
2197    + 
2191 2198 	list_for_each_entry(w, &cmpnt->card->widgets, list) {
2192 2199 		if (w->dapm != dapm)
2193 2200 			continue;