+1
.mailmap
+1
.mailmap
+1
-1
Documentation/networking/switchdev.txt
+1
-1
Documentation/networking/switchdev.txt
···
386
386
memory allocation, etc. The goal is to handle the stuff that is not unlikely
387
387
to fail here. The second phase is to "commit" the actual changes.
388
388
389
-
Switchdev provides an inftrastructure for sharing items (for example memory
389
+
Switchdev provides an infrastructure for sharing items (for example memory
390
390
allocations) between the two phases.
391
391
392
392
The object created by a driver in "prepare" phase and it is queued up by:
+208
Documentation/x86/topology.txt
+208
Documentation/x86/topology.txt
···
1
+
x86 Topology
2
+
============
3
+
4
+
This documents and clarifies the main aspects of x86 topology modelling and
5
+
representation in the kernel. Update/change when doing changes to the
6
+
respective code.
7
+
8
+
The architecture-agnostic topology definitions are in
9
+
Documentation/cputopology.txt. This file holds x86-specific
10
+
differences/specialities which must not necessarily apply to the generic
11
+
definitions. Thus, the way to read up on Linux topology on x86 is to start
12
+
with the generic one and look at this one in parallel for the x86 specifics.
13
+
14
+
Needless to say, code should use the generic functions - this file is *only*
15
+
here to *document* the inner workings of x86 topology.
16
+
17
+
Started by Thomas Gleixner <tglx@linutronix.de> and Borislav Petkov <bp@alien8.de>.
18
+
19
+
The main aim of the topology facilities is to present adequate interfaces to
20
+
code which needs to know/query/use the structure of the running system wrt
21
+
threads, cores, packages, etc.
22
+
23
+
The kernel does not care about the concept of physical sockets because a
24
+
socket has no relevance to software. It's an electromechanical component. In
25
+
the past a socket always contained a single package (see below), but with the
26
+
advent of Multi Chip Modules (MCM) a socket can hold more than one package. So
27
+
there might be still references to sockets in the code, but they are of
28
+
historical nature and should be cleaned up.
29
+
30
+
The topology of a system is described in the units of:
31
+
32
+
- packages
33
+
- cores
34
+
- threads
35
+
36
+
* Package:
37
+
38
+
Packages contain a number of cores plus shared resources, e.g. DRAM
39
+
controller, shared caches etc.
40
+
41
+
AMD nomenclature for package is 'Node'.
42
+
43
+
Package-related topology information in the kernel:
44
+
45
+
- cpuinfo_x86.x86_max_cores:
46
+
47
+
The number of cores in a package. This information is retrieved via CPUID.
48
+
49
+
- cpuinfo_x86.phys_proc_id:
50
+
51
+
The physical ID of the package. This information is retrieved via CPUID
52
+
and deduced from the APIC IDs of the cores in the package.
53
+
54
+
- cpuinfo_x86.logical_id:
55
+
56
+
The logical ID of the package. As we do not trust BIOSes to enumerate the
57
+
packages in a consistent way, we introduced the concept of logical package
58
+
ID so we can sanely calculate the number of maximum possible packages in
59
+
the system and have the packages enumerated linearly.
60
+
61
+
- topology_max_packages():
62
+
63
+
The maximum possible number of packages in the system. Helpful for per
64
+
package facilities to preallocate per package information.
65
+
66
+
67
+
* Cores:
68
+
69
+
A core consists of 1 or more threads. It does not matter whether the threads
70
+
are SMT- or CMT-type threads.
71
+
72
+
AMDs nomenclature for a CMT core is "Compute Unit". The kernel always uses
73
+
"core".
74
+
75
+
Core-related topology information in the kernel:
76
+
77
+
- smp_num_siblings:
78
+
79
+
The number of threads in a core. The number of threads in a package can be
80
+
calculated by:
81
+
82
+
threads_per_package = cpuinfo_x86.x86_max_cores * smp_num_siblings
83
+
84
+
85
+
* Threads:
86
+
87
+
A thread is a single scheduling unit. It's the equivalent to a logical Linux
88
+
CPU.
89
+
90
+
AMDs nomenclature for CMT threads is "Compute Unit Core". The kernel always
91
+
uses "thread".
92
+
93
+
Thread-related topology information in the kernel:
94
+
95
+
- topology_core_cpumask():
96
+
97
+
The cpumask contains all online threads in the package to which a thread
98
+
belongs.
99
+
100
+
The number of online threads is also printed in /proc/cpuinfo "siblings."
101
+
102
+
- topology_sibling_mask():
103
+
104
+
The cpumask contains all online threads in the core to which a thread
105
+
belongs.
106
+
107
+
- topology_logical_package_id():
108
+
109
+
The logical package ID to which a thread belongs.
110
+
111
+
- topology_physical_package_id():
112
+
113
+
The physical package ID to which a thread belongs.
114
+
115
+
- topology_core_id();
116
+
117
+
The ID of the core to which a thread belongs. It is also printed in /proc/cpuinfo
118
+
"core_id."
119
+
120
+
121
+
122
+
System topology examples
123
+
124
+
Note:
125
+
126
+
The alternative Linux CPU enumeration depends on how the BIOS enumerates the
127
+
threads. Many BIOSes enumerate all threads 0 first and then all threads 1.
128
+
That has the "advantage" that the logical Linux CPU numbers of threads 0 stay
129
+
the same whether threads are enabled or not. That's merely an implementation
130
+
detail and has no practical impact.
131
+
132
+
1) Single Package, Single Core
133
+
134
+
[package 0] -> [core 0] -> [thread 0] -> Linux CPU 0
135
+
136
+
2) Single Package, Dual Core
137
+
138
+
a) One thread per core
139
+
140
+
[package 0] -> [core 0] -> [thread 0] -> Linux CPU 0
141
+
-> [core 1] -> [thread 0] -> Linux CPU 1
142
+
143
+
b) Two threads per core
144
+
145
+
[package 0] -> [core 0] -> [thread 0] -> Linux CPU 0
146
+
-> [thread 1] -> Linux CPU 1
147
+
-> [core 1] -> [thread 0] -> Linux CPU 2
148
+
-> [thread 1] -> Linux CPU 3
149
+
150
+
Alternative enumeration:
151
+
152
+
[package 0] -> [core 0] -> [thread 0] -> Linux CPU 0
153
+
-> [thread 1] -> Linux CPU 2
154
+
-> [core 1] -> [thread 0] -> Linux CPU 1
155
+
-> [thread 1] -> Linux CPU 3
156
+
157
+
AMD nomenclature for CMT systems:
158
+
159
+
[node 0] -> [Compute Unit 0] -> [Compute Unit Core 0] -> Linux CPU 0
160
+
-> [Compute Unit Core 1] -> Linux CPU 1
161
+
-> [Compute Unit 1] -> [Compute Unit Core 0] -> Linux CPU 2
162
+
-> [Compute Unit Core 1] -> Linux CPU 3
163
+
164
+
4) Dual Package, Dual Core
165
+
166
+
a) One thread per core
167
+
168
+
[package 0] -> [core 0] -> [thread 0] -> Linux CPU 0
169
+
-> [core 1] -> [thread 0] -> Linux CPU 1
170
+
171
+
[package 1] -> [core 0] -> [thread 0] -> Linux CPU 2
172
+
-> [core 1] -> [thread 0] -> Linux CPU 3
173
+
174
+
b) Two threads per core
175
+
176
+
[package 0] -> [core 0] -> [thread 0] -> Linux CPU 0
177
+
-> [thread 1] -> Linux CPU 1
178
+
-> [core 1] -> [thread 0] -> Linux CPU 2
179
+
-> [thread 1] -> Linux CPU 3
180
+
181
+
[package 1] -> [core 0] -> [thread 0] -> Linux CPU 4
182
+
-> [thread 1] -> Linux CPU 5
183
+
-> [core 1] -> [thread 0] -> Linux CPU 6
184
+
-> [thread 1] -> Linux CPU 7
185
+
186
+
Alternative enumeration:
187
+
188
+
[package 0] -> [core 0] -> [thread 0] -> Linux CPU 0
189
+
-> [thread 1] -> Linux CPU 4
190
+
-> [core 1] -> [thread 0] -> Linux CPU 1
191
+
-> [thread 1] -> Linux CPU 5
192
+
193
+
[package 1] -> [core 0] -> [thread 0] -> Linux CPU 2
194
+
-> [thread 1] -> Linux CPU 6
195
+
-> [core 1] -> [thread 0] -> Linux CPU 3
196
+
-> [thread 1] -> Linux CPU 7
197
+
198
+
AMD nomenclature for CMT systems:
199
+
200
+
[node 0] -> [Compute Unit 0] -> [Compute Unit Core 0] -> Linux CPU 0
201
+
-> [Compute Unit Core 1] -> Linux CPU 1
202
+
-> [Compute Unit 1] -> [Compute Unit Core 0] -> Linux CPU 2
203
+
-> [Compute Unit Core 1] -> Linux CPU 3
204
+
205
+
[node 1] -> [Compute Unit 0] -> [Compute Unit Core 0] -> Linux CPU 4
206
+
-> [Compute Unit Core 1] -> Linux CPU 5
207
+
-> [Compute Unit 1] -> [Compute Unit Core 0] -> Linux CPU 6
208
+
-> [Compute Unit Core 1] -> Linux CPU 7
+7
-4
MAINTAINERS
+7
-4
MAINTAINERS
···
5042
5042
HARDWARE SPINLOCK CORE
5043
5043
M: Ohad Ben-Cohen <ohad@wizery.com>
5044
5044
M: Bjorn Andersson <bjorn.andersson@linaro.org>
5045
+
L: linux-remoteproc@vger.kernel.org
5045
5046
S: Maintained
5046
5047
T: git git://git.kernel.org/pub/scm/linux/kernel/git/ohad/hwspinlock.git
5047
5048
F: Documentation/hwspinlock.txt
···
6403
6402
M: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
6404
6403
M: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
6405
6404
M: "David S. Miller" <davem@davemloft.net>
6406
-
M: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
6405
+
M: Masami Hiramatsu <mhiramat@kernel.org>
6407
6406
S: Maintained
6408
6407
F: Documentation/kprobes.txt
6409
6408
F: include/linux/kprobes.h
···
8254
8253
8255
8254
ORANGEFS FILESYSTEM
8256
8255
M: Mike Marshall <hubcap@omnibond.com>
8257
-
L: pvfs2-developers@beowulf-underground.org
8256
+
L: pvfs2-developers@beowulf-underground.org (subscribers-only)
8258
8257
T: git git://git.kernel.org/pub/scm/linux/kernel/git/hubcap/linux.git
8259
8258
S: Supported
8260
8259
F: fs/orangefs/
···
9315
9314
REMOTE PROCESSOR (REMOTEPROC) SUBSYSTEM
9316
9315
M: Ohad Ben-Cohen <ohad@wizery.com>
9317
9316
M: Bjorn Andersson <bjorn.andersson@linaro.org>
9317
+
L: linux-remoteproc@vger.kernel.org
9318
9318
T: git git://git.kernel.org/pub/scm/linux/kernel/git/ohad/remoteproc.git
9319
9319
S: Maintained
9320
9320
F: drivers/remoteproc/
···
9325
9323
REMOTE PROCESSOR MESSAGING (RPMSG) SUBSYSTEM
9326
9324
M: Ohad Ben-Cohen <ohad@wizery.com>
9327
9325
M: Bjorn Andersson <bjorn.andersson@linaro.org>
9326
+
L: linux-remoteproc@vger.kernel.org
9328
9327
T: git git://git.kernel.org/pub/scm/linux/kernel/git/ohad/rpmsg.git
9329
9328
S: Maintained
9330
9329
F: drivers/rpmsg/
···
11140
11137
F: net/tipc/
11141
11138
11142
11139
TILE ARCHITECTURE
11143
-
M: Chris Metcalf <cmetcalf@ezchip.com>
11144
-
W: http://www.ezchip.com/scm/
11140
+
M: Chris Metcalf <cmetcalf@mellanox.com>
11141
+
W: http://www.mellanox.com/repository/solutions/tile-scm/
11145
11142
T: git git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile.git
11146
11143
S: Supported
11147
11144
F: arch/tile/
+1
-1
Makefile
+1
-1
Makefile
+20
-8
arch/arm64/configs/defconfig
+20
-8
arch/arm64/configs/defconfig
···
68
68
CONFIG_TRANSPARENT_HUGEPAGE=y
69
69
CONFIG_CMA=y
70
70
CONFIG_XEN=y
71
-
CONFIG_CMDLINE="console=ttyAMA0"
72
71
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
73
72
CONFIG_COMPAT=y
74
73
CONFIG_CPU_IDLE=y
75
74
CONFIG_ARM_CPUIDLE=y
75
+
CONFIG_CPU_FREQ=y
76
+
CONFIG_ARM_BIG_LITTLE_CPUFREQ=y
77
+
CONFIG_ARM_SCPI_CPUFREQ=y
76
78
CONFIG_NET=y
77
79
CONFIG_PACKET=y
78
80
CONFIG_UNIX=y
···
82
80
CONFIG_IP_PNP=y
83
81
CONFIG_IP_PNP_DHCP=y
84
82
CONFIG_IP_PNP_BOOTP=y
85
-
# CONFIG_INET_LRO is not set
86
83
# CONFIG_IPV6 is not set
87
84
CONFIG_BPF_JIT=y
88
85
# CONFIG_WIRELESS is not set
···
145
144
CONFIG_SERIAL_MVEBU_UART=y
146
145
CONFIG_VIRTIO_CONSOLE=y
147
146
# CONFIG_HW_RANDOM is not set
148
-
CONFIG_I2C=y
149
147
CONFIG_I2C_CHARDEV=y
148
+
CONFIG_I2C_DESIGNWARE_PLATFORM=y
150
149
CONFIG_I2C_MV64XXX=y
151
150
CONFIG_I2C_QUP=y
151
+
CONFIG_I2C_TEGRA=y
152
152
CONFIG_I2C_UNIPHIER_F=y
153
153
CONFIG_I2C_RCAR=y
154
154
CONFIG_SPI=y
155
155
CONFIG_SPI_PL022=y
156
156
CONFIG_SPI_QUP=y
157
157
CONFIG_SPMI=y
158
+
CONFIG_PINCTRL_SINGLE=y
158
159
CONFIG_PINCTRL_MSM8916=y
159
160
CONFIG_PINCTRL_QCOM_SPMI_PMIC=y
160
161
CONFIG_GPIO_SYSFS=y
···
199
196
CONFIG_USB_OHCI_HCD=y
200
197
CONFIG_USB_OHCI_HCD_PLATFORM=y
201
198
CONFIG_USB_STORAGE=y
199
+
CONFIG_USB_DWC2=y
202
200
CONFIG_USB_CHIPIDEA=y
203
201
CONFIG_USB_CHIPIDEA_UDC=y
204
202
CONFIG_USB_CHIPIDEA_HOST=y
···
209
205
CONFIG_USB_ULPI=y
210
206
CONFIG_USB_GADGET=y
211
207
CONFIG_MMC=y
212
-
CONFIG_MMC_BLOCK_MINORS=16
208
+
CONFIG_MMC_BLOCK_MINORS=32
213
209
CONFIG_MMC_ARMMMCI=y
214
210
CONFIG_MMC_SDHCI=y
215
211
CONFIG_MMC_SDHCI_PLTFM=y
216
212
CONFIG_MMC_SDHCI_TEGRA=y
217
213
CONFIG_MMC_SDHCI_MSM=y
218
214
CONFIG_MMC_SPI=y
219
-
CONFIG_MMC_SUNXI=y
220
215
CONFIG_MMC_DW=y
221
216
CONFIG_MMC_DW_EXYNOS=y
222
-
CONFIG_MMC_BLOCK_MINORS=16
217
+
CONFIG_MMC_DW_K3=y
218
+
CONFIG_MMC_SUNXI=y
223
219
CONFIG_NEW_LEDS=y
224
220
CONFIG_LEDS_CLASS=y
221
+
CONFIG_LEDS_GPIO=y
225
222
CONFIG_LEDS_SYSCON=y
226
223
CONFIG_LEDS_TRIGGERS=y
227
224
CONFIG_LEDS_TRIGGER_HEARTBEAT=y
···
234
229
CONFIG_RTC_DRV_SUN6I=y
235
230
CONFIG_RTC_DRV_XGENE=y
236
231
CONFIG_DMADEVICES=y
237
-
CONFIG_QCOM_BAM_DMA=y
238
232
CONFIG_TEGRA20_APB_DMA=y
233
+
CONFIG_QCOM_BAM_DMA=y
239
234
CONFIG_RCAR_DMAC=y
240
235
CONFIG_VFIO=y
241
236
CONFIG_VFIO_PCI=y
···
244
239
CONFIG_VIRTIO_MMIO=y
245
240
CONFIG_XEN_GNTDEV=y
246
241
CONFIG_XEN_GRANT_DEV_ALLOC=y
242
+
CONFIG_COMMON_CLK_SCPI=y
247
243
CONFIG_COMMON_CLK_CS2000_CP=y
248
244
CONFIG_COMMON_CLK_QCOM=y
249
245
CONFIG_MSM_GCC_8916=y
250
246
CONFIG_HWSPINLOCK_QCOM=y
247
+
CONFIG_MAILBOX=y
248
+
CONFIG_ARM_MHU=y
249
+
CONFIG_HI6220_MBOX=y
251
250
CONFIG_ARM_SMMU=y
252
251
CONFIG_QCOM_SMEM=y
253
252
CONFIG_QCOM_SMD=y
254
253
CONFIG_QCOM_SMD_RPM=y
255
254
CONFIG_ARCH_TEGRA_132_SOC=y
256
255
CONFIG_ARCH_TEGRA_210_SOC=y
257
-
CONFIG_HISILICON_IRQ_MBIGEN=y
258
256
CONFIG_EXTCON_USB_GPIO=y
257
+
CONFIG_COMMON_RESET_HI6220=y
259
258
CONFIG_PHY_RCAR_GEN3_USB2=y
259
+
CONFIG_PHY_HI6220_USB=y
260
260
CONFIG_PHY_XGENE=y
261
+
CONFIG_ARM_SCPI_PROTOCOL=y
261
262
CONFIG_EXT2_FS=y
262
263
CONFIG_EXT3_FS=y
263
264
CONFIG_FANOTIFY=y
···
275
264
CONFIG_VFAT_FS=y
276
265
CONFIG_TMPFS=y
277
266
CONFIG_HUGETLBFS=y
267
+
CONFIG_CONFIGFS_FS=y
278
268
CONFIG_EFIVAR_FS=y
279
269
CONFIG_SQUASHFS=y
280
270
CONFIG_NFS_FS=y
-1
arch/arm64/include/asm/kvm_host.h
-1
arch/arm64/include/asm/kvm_host.h
-1
arch/arm64/include/asm/kvm_hyp.h
-1
arch/arm64/include/asm/kvm_hyp.h
-68
arch/arm64/include/asm/kvm_perf_event.h
-68
arch/arm64/include/asm/kvm_perf_event.h
···
1
-
/*
2
-
* Copyright (C) 2012 ARM Ltd.
3
-
*
4
-
* This program is free software; you can redistribute it and/or modify
5
-
* it under the terms of the GNU General Public License version 2 as
6
-
* published by the Free Software Foundation.
7
-
*
8
-
* This program is distributed in the hope that it will be useful,
9
-
* but WITHOUT ANY WARRANTY; without even the implied warranty of
10
-
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
11
-
* GNU General Public License for more details.
12
-
*
13
-
* You should have received a copy of the GNU General Public License
14
-
* along with this program. If not, see <http://www.gnu.org/licenses/>.
15
-
*/
16
-
17
-
#ifndef __ASM_KVM_PERF_EVENT_H
18
-
#define __ASM_KVM_PERF_EVENT_H
19
-
20
-
#define ARMV8_PMU_MAX_COUNTERS 32
21
-
#define ARMV8_PMU_COUNTER_MASK (ARMV8_PMU_MAX_COUNTERS - 1)
22
-
23
-
/*
24
-
* Per-CPU PMCR: config reg
25
-
*/
26
-
#define ARMV8_PMU_PMCR_E (1 << 0) /* Enable all counters */
27
-
#define ARMV8_PMU_PMCR_P (1 << 1) /* Reset all counters */
28
-
#define ARMV8_PMU_PMCR_C (1 << 2) /* Cycle counter reset */
29
-
#define ARMV8_PMU_PMCR_D (1 << 3) /* CCNT counts every 64th cpu cycle */
30
-
#define ARMV8_PMU_PMCR_X (1 << 4) /* Export to ETM */
31
-
#define ARMV8_PMU_PMCR_DP (1 << 5) /* Disable CCNT if non-invasive debug*/
32
-
/* Determines which bit of PMCCNTR_EL0 generates an overflow */
33
-
#define ARMV8_PMU_PMCR_LC (1 << 6)
34
-
#define ARMV8_PMU_PMCR_N_SHIFT 11 /* Number of counters supported */
35
-
#define ARMV8_PMU_PMCR_N_MASK 0x1f
36
-
#define ARMV8_PMU_PMCR_MASK 0x7f /* Mask for writable bits */
37
-
38
-
/*
39
-
* PMOVSR: counters overflow flag status reg
40
-
*/
41
-
#define ARMV8_PMU_OVSR_MASK 0xffffffff /* Mask for writable bits */
42
-
#define ARMV8_PMU_OVERFLOWED_MASK ARMV8_PMU_OVSR_MASK
43
-
44
-
/*
45
-
* PMXEVTYPER: Event selection reg
46
-
*/
47
-
#define ARMV8_PMU_EVTYPE_MASK 0xc80003ff /* Mask for writable bits */
48
-
#define ARMV8_PMU_EVTYPE_EVENT 0x3ff /* Mask for EVENT bits */
49
-
50
-
#define ARMV8_PMU_EVTYPE_EVENT_SW_INCR 0 /* Software increment event */
51
-
52
-
/*
53
-
* Event filters for PMUv3
54
-
*/
55
-
#define ARMV8_PMU_EXCLUDE_EL1 (1 << 31)
56
-
#define ARMV8_PMU_EXCLUDE_EL0 (1 << 30)
57
-
#define ARMV8_PMU_INCLUDE_EL2 (1 << 27)
58
-
59
-
/*
60
-
* PMUSERENR: user enable reg
61
-
*/
62
-
#define ARMV8_PMU_USERENR_MASK 0xf /* Mask for writable bits */
63
-
#define ARMV8_PMU_USERENR_EN (1 << 0) /* PMU regs can be accessed at EL0 */
64
-
#define ARMV8_PMU_USERENR_SW (1 << 1) /* PMSWINC can be written at EL0 */
65
-
#define ARMV8_PMU_USERENR_CR (1 << 2) /* Cycle counter can be read at EL0 */
66
-
#define ARMV8_PMU_USERENR_ER (1 << 3) /* Event counter can be read at EL0 */
67
-
68
-
#endif
+4
arch/arm64/include/asm/opcodes.h
+4
arch/arm64/include/asm/opcodes.h
+47
arch/arm64/include/asm/perf_event.h
+47
arch/arm64/include/asm/perf_event.h
···
17
17
#ifndef __ASM_PERF_EVENT_H
18
18
#define __ASM_PERF_EVENT_H
19
19
20
+
#define ARMV8_PMU_MAX_COUNTERS 32
21
+
#define ARMV8_PMU_COUNTER_MASK (ARMV8_PMU_MAX_COUNTERS - 1)
22
+
23
+
/*
24
+
* Per-CPU PMCR: config reg
25
+
*/
26
+
#define ARMV8_PMU_PMCR_E (1 << 0) /* Enable all counters */
27
+
#define ARMV8_PMU_PMCR_P (1 << 1) /* Reset all counters */
28
+
#define ARMV8_PMU_PMCR_C (1 << 2) /* Cycle counter reset */
29
+
#define ARMV8_PMU_PMCR_D (1 << 3) /* CCNT counts every 64th cpu cycle */
30
+
#define ARMV8_PMU_PMCR_X (1 << 4) /* Export to ETM */
31
+
#define ARMV8_PMU_PMCR_DP (1 << 5) /* Disable CCNT if non-invasive debug*/
32
+
#define ARMV8_PMU_PMCR_LC (1 << 6) /* Overflow on 64 bit cycle counter */
33
+
#define ARMV8_PMU_PMCR_N_SHIFT 11 /* Number of counters supported */
34
+
#define ARMV8_PMU_PMCR_N_MASK 0x1f
35
+
#define ARMV8_PMU_PMCR_MASK 0x7f /* Mask for writable bits */
36
+
37
+
/*
38
+
* PMOVSR: counters overflow flag status reg
39
+
*/
40
+
#define ARMV8_PMU_OVSR_MASK 0xffffffff /* Mask for writable bits */
41
+
#define ARMV8_PMU_OVERFLOWED_MASK ARMV8_PMU_OVSR_MASK
42
+
43
+
/*
44
+
* PMXEVTYPER: Event selection reg
45
+
*/
46
+
#define ARMV8_PMU_EVTYPE_MASK 0xc800ffff /* Mask for writable bits */
47
+
#define ARMV8_PMU_EVTYPE_EVENT 0xffff /* Mask for EVENT bits */
48
+
49
+
#define ARMV8_PMU_EVTYPE_EVENT_SW_INCR 0 /* Software increment event */
50
+
51
+
/*
52
+
* Event filters for PMUv3
53
+
*/
54
+
#define ARMV8_PMU_EXCLUDE_EL1 (1 << 31)
55
+
#define ARMV8_PMU_EXCLUDE_EL0 (1 << 30)
56
+
#define ARMV8_PMU_INCLUDE_EL2 (1 << 27)
57
+
58
+
/*
59
+
* PMUSERENR: user enable reg
60
+
*/
61
+
#define ARMV8_PMU_USERENR_MASK 0xf /* Mask for writable bits */
62
+
#define ARMV8_PMU_USERENR_EN (1 << 0) /* PMU regs can be accessed at EL0 */
63
+
#define ARMV8_PMU_USERENR_SW (1 << 1) /* PMSWINC can be written at EL0 */
64
+
#define ARMV8_PMU_USERENR_CR (1 << 2) /* Cycle counter can be read at EL0 */
65
+
#define ARMV8_PMU_USERENR_ER (1 << 3) /* Event counter can be read at EL0 */
66
+
20
67
#ifdef CONFIG_PERF_EVENTS
21
68
struct pt_regs;
22
69
extern unsigned long perf_instruction_pointer(struct pt_regs *regs);
+19
-53
arch/arm64/kernel/perf_event.c
+19
-53
arch/arm64/kernel/perf_event.c
···
20
20
*/
21
21
22
22
#include <asm/irq_regs.h>
23
+
#include <asm/perf_event.h>
23
24
#include <asm/virt.h>
24
25
25
26
#include <linux/of.h>
···
385
384
#define ARMV8_IDX_COUNTER_LAST(cpu_pmu) \
386
385
(ARMV8_IDX_CYCLE_COUNTER + cpu_pmu->num_events - 1)
387
386
388
-
#define ARMV8_MAX_COUNTERS 32
389
-
#define ARMV8_COUNTER_MASK (ARMV8_MAX_COUNTERS - 1)
390
-
391
387
/*
392
388
* ARMv8 low level PMU access
393
389
*/
···
393
395
* Perf Event to low level counters mapping
394
396
*/
395
397
#define ARMV8_IDX_TO_COUNTER(x) \
396
-
(((x) - ARMV8_IDX_COUNTER0) & ARMV8_COUNTER_MASK)
397
-
398
-
/*
399
-
* Per-CPU PMCR: config reg
400
-
*/
401
-
#define ARMV8_PMCR_E (1 << 0) /* Enable all counters */
402
-
#define ARMV8_PMCR_P (1 << 1) /* Reset all counters */
403
-
#define ARMV8_PMCR_C (1 << 2) /* Cycle counter reset */
404
-
#define ARMV8_PMCR_D (1 << 3) /* CCNT counts every 64th cpu cycle */
405
-
#define ARMV8_PMCR_X (1 << 4) /* Export to ETM */
406
-
#define ARMV8_PMCR_DP (1 << 5) /* Disable CCNT if non-invasive debug*/
407
-
#define ARMV8_PMCR_LC (1 << 6) /* Overflow on 64 bit cycle counter */
408
-
#define ARMV8_PMCR_N_SHIFT 11 /* Number of counters supported */
409
-
#define ARMV8_PMCR_N_MASK 0x1f
410
-
#define ARMV8_PMCR_MASK 0x7f /* Mask for writable bits */
411
-
412
-
/*
413
-
* PMOVSR: counters overflow flag status reg
414
-
*/
415
-
#define ARMV8_OVSR_MASK 0xffffffff /* Mask for writable bits */
416
-
#define ARMV8_OVERFLOWED_MASK ARMV8_OVSR_MASK
417
-
418
-
/*
419
-
* PMXEVTYPER: Event selection reg
420
-
*/
421
-
#define ARMV8_EVTYPE_MASK 0xc800ffff /* Mask for writable bits */
422
-
#define ARMV8_EVTYPE_EVENT 0xffff /* Mask for EVENT bits */
423
-
424
-
/*
425
-
* Event filters for PMUv3
426
-
*/
427
-
#define ARMV8_EXCLUDE_EL1 (1 << 31)
428
-
#define ARMV8_EXCLUDE_EL0 (1 << 30)
429
-
#define ARMV8_INCLUDE_EL2 (1 << 27)
398
+
(((x) - ARMV8_IDX_COUNTER0) & ARMV8_PMU_COUNTER_MASK)
430
399
431
400
static inline u32 armv8pmu_pmcr_read(void)
432
401
{
···
404
439
405
440
static inline void armv8pmu_pmcr_write(u32 val)
406
441
{
407
-
val &= ARMV8_PMCR_MASK;
442
+
val &= ARMV8_PMU_PMCR_MASK;
408
443
isb();
409
444
asm volatile("msr pmcr_el0, %0" :: "r" (val));
410
445
}
411
446
412
447
static inline int armv8pmu_has_overflowed(u32 pmovsr)
413
448
{
414
-
return pmovsr & ARMV8_OVERFLOWED_MASK;
449
+
return pmovsr & ARMV8_PMU_OVERFLOWED_MASK;
415
450
}
416
451
417
452
static inline int armv8pmu_counter_valid(struct arm_pmu *cpu_pmu, int idx)
···
477
512
static inline void armv8pmu_write_evtype(int idx, u32 val)
478
513
{
479
514
if (armv8pmu_select_counter(idx) == idx) {
480
-
val &= ARMV8_EVTYPE_MASK;
515
+
val &= ARMV8_PMU_EVTYPE_MASK;
481
516
asm volatile("msr pmxevtyper_el0, %0" :: "r" (val));
482
517
}
483
518
}
···
523
558
asm volatile("mrs %0, pmovsclr_el0" : "=r" (value));
524
559
525
560
/* Write to clear flags */
526
-
value &= ARMV8_OVSR_MASK;
561
+
value &= ARMV8_PMU_OVSR_MASK;
527
562
asm volatile("msr pmovsclr_el0, %0" :: "r" (value));
528
563
529
564
return value;
···
661
696
662
697
raw_spin_lock_irqsave(&events->pmu_lock, flags);
663
698
/* Enable all counters */
664
-
armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMCR_E);
699
+
armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
665
700
raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
666
701
}
667
702
···
672
707
673
708
raw_spin_lock_irqsave(&events->pmu_lock, flags);
674
709
/* Disable all counters */
675
-
armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMCR_E);
710
+
armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
676
711
raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
677
712
}
678
713
···
682
717
int idx;
683
718
struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
684
719
struct hw_perf_event *hwc = &event->hw;
685
-
unsigned long evtype = hwc->config_base & ARMV8_EVTYPE_EVENT;
720
+
unsigned long evtype = hwc->config_base & ARMV8_PMU_EVTYPE_EVENT;
686
721
687
722
/* Always place a cycle counter into the cycle counter. */
688
723
if (evtype == ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES) {
···
719
754
attr->exclude_kernel != attr->exclude_hv)
720
755
return -EINVAL;
721
756
if (attr->exclude_user)
722
-
config_base |= ARMV8_EXCLUDE_EL0;
757
+
config_base |= ARMV8_PMU_EXCLUDE_EL0;
723
758
if (!is_kernel_in_hyp_mode() && attr->exclude_kernel)
724
-
config_base |= ARMV8_EXCLUDE_EL1;
759
+
config_base |= ARMV8_PMU_EXCLUDE_EL1;
725
760
if (!attr->exclude_hv)
726
-
config_base |= ARMV8_INCLUDE_EL2;
761
+
config_base |= ARMV8_PMU_INCLUDE_EL2;
727
762
728
763
/*
729
764
* Install the filter into config_base as this is used to
···
749
784
* Initialize & Reset PMNC. Request overflow interrupt for
750
785
* 64 bit cycle counter but cheat in armv8pmu_write_counter().
751
786
*/
752
-
armv8pmu_pmcr_write(ARMV8_PMCR_P | ARMV8_PMCR_C | ARMV8_PMCR_LC);
787
+
armv8pmu_pmcr_write(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C |
788
+
ARMV8_PMU_PMCR_LC);
753
789
}
754
790
755
791
static int armv8_pmuv3_map_event(struct perf_event *event)
756
792
{
757
793
return armpmu_map_event(event, &armv8_pmuv3_perf_map,
758
794
&armv8_pmuv3_perf_cache_map,
759
-
ARMV8_EVTYPE_EVENT);
795
+
ARMV8_PMU_EVTYPE_EVENT);
760
796
}
761
797
762
798
static int armv8_a53_map_event(struct perf_event *event)
763
799
{
764
800
return armpmu_map_event(event, &armv8_a53_perf_map,
765
801
&armv8_a53_perf_cache_map,
766
-
ARMV8_EVTYPE_EVENT);
802
+
ARMV8_PMU_EVTYPE_EVENT);
767
803
}
768
804
769
805
static int armv8_a57_map_event(struct perf_event *event)
770
806
{
771
807
return armpmu_map_event(event, &armv8_a57_perf_map,
772
808
&armv8_a57_perf_cache_map,
773
-
ARMV8_EVTYPE_EVENT);
809
+
ARMV8_PMU_EVTYPE_EVENT);
774
810
}
775
811
776
812
static int armv8_thunder_map_event(struct perf_event *event)
777
813
{
778
814
return armpmu_map_event(event, &armv8_thunder_perf_map,
779
815
&armv8_thunder_perf_cache_map,
780
-
ARMV8_EVTYPE_EVENT);
816
+
ARMV8_PMU_EVTYPE_EVENT);
781
817
}
782
818
783
819
static void armv8pmu_read_num_pmnc_events(void *info)
···
786
820
int *nb_cnt = info;
787
821
788
822
/* Read the nb of CNTx counters supported from PMNC */
789
-
*nb_cnt = (armv8pmu_pmcr_read() >> ARMV8_PMCR_N_SHIFT) & ARMV8_PMCR_N_MASK;
823
+
*nb_cnt = (armv8pmu_pmcr_read() >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
790
824
791
825
/* Add the CPU cycles counter */
792
826
*nb_cnt += 1;
+1
-2
arch/nios2/kernel/prom.c
+1
-2
arch/nios2/kernel/prom.c
+1
arch/parisc/Kconfig
+1
arch/parisc/Kconfig
+7
arch/parisc/include/asm/compat.h
+7
arch/parisc/include/asm/compat.h
···
183
183
int _band; /* POLL_IN, POLL_OUT, POLL_MSG */
184
184
int _fd;
185
185
} _sigpoll;
186
+
187
+
/* SIGSYS */
188
+
struct {
189
+
compat_uptr_t _call_addr; /* calling user insn */
190
+
int _syscall; /* triggering system call number */
191
+
compat_uint_t _arch; /* AUDIT_ARCH_* of syscall */
192
+
} _sigsys;
186
193
} _sifields;
187
194
} compat_siginfo_t;
188
195
+13
arch/parisc/include/asm/syscall.h
+13
arch/parisc/include/asm/syscall.h
···
39
39
}
40
40
}
41
41
42
+
static inline void syscall_set_return_value(struct task_struct *task,
43
+
struct pt_regs *regs,
44
+
int error, long val)
45
+
{
46
+
regs->gr[28] = error ? error : val;
47
+
}
48
+
49
+
static inline void syscall_rollback(struct task_struct *task,
50
+
struct pt_regs *regs)
51
+
{
52
+
/* do nothing */
53
+
}
54
+
42
55
static inline int syscall_get_arch(void)
43
56
{
44
57
int arch = AUDIT_ARCH_PARISC;
+7
-2
arch/parisc/kernel/ptrace.c
+7
-2
arch/parisc/kernel/ptrace.c
···
270
270
long do_syscall_trace_enter(struct pt_regs *regs)
271
271
{
272
272
/* Do the secure computing check first. */
273
-
secure_computing_strict(regs->gr[20]);
273
+
if (secure_computing() == -1)
274
+
return -1;
274
275
275
276
if (test_thread_flag(TIF_SYSCALL_TRACE) &&
276
277
tracehook_report_syscall_entry(regs)) {
···
297
296
regs->gr[23] & 0xffffffff);
298
297
299
298
out:
300
-
return regs->gr[20];
299
+
/*
300
+
* Sign extend the syscall number to 64bit since it may have been
301
+
* modified by a compat ptrace call
302
+
*/
303
+
return (int) ((u32) regs->gr[20]);
301
304
}
302
305
303
306
void do_syscall_trace_exit(struct pt_regs *regs)
+5
arch/parisc/kernel/signal32.c
+5
arch/parisc/kernel/signal32.c
···
371
371
val = (compat_int_t)from->si_int;
372
372
err |= __put_user(val, &to->si_int);
373
373
break;
374
+
case __SI_SYS >> 16:
375
+
err |= __put_user(ptr_to_compat(from->si_call_addr), &to->si_call_addr);
376
+
err |= __put_user(from->si_syscall, &to->si_syscall);
377
+
err |= __put_user(from->si_arch, &to->si_arch);
378
+
break;
374
379
}
375
380
}
376
381
return err;
+2
arch/parisc/kernel/syscall.S
+2
arch/parisc/kernel/syscall.S
···
329
329
330
330
ldo -THREAD_SZ_ALGN-FRAME_SIZE(%r30),%r1 /* get task ptr */
331
331
LDREG TI_TASK(%r1), %r1
332
+
LDREG TASK_PT_GR28(%r1), %r28 /* Restore return value */
332
333
LDREG TASK_PT_GR26(%r1), %r26 /* Restore the users args */
333
334
LDREG TASK_PT_GR25(%r1), %r25
334
335
LDREG TASK_PT_GR24(%r1), %r24
···
343
342
stw %r21, -56(%r30) /* 6th argument */
344
343
#endif
345
344
345
+
cmpib,COND(=),n -1,%r20,tracesys_exit /* seccomp may have returned -1 */
346
346
comiclr,>>= __NR_Linux_syscalls, %r20, %r0
347
347
b,n .Ltracesys_nosys
348
348
+1
-1
arch/powerpc/include/asm/processor.h
+1
-1
arch/powerpc/include/asm/processor.h
···
246
246
#endif /* CONFIG_ALTIVEC */
247
247
#ifdef CONFIG_VSX
248
248
/* VSR status */
249
-
int used_vsr; /* set if process has used altivec */
249
+
int used_vsr; /* set if process has used VSX */
250
250
#endif /* CONFIG_VSX */
251
251
#ifdef CONFIG_SPE
252
252
unsigned long evr[32]; /* upper 32-bits of SPE regs */
+1
-1
arch/powerpc/kernel/process.c
+1
-1
arch/powerpc/kernel/process.c
···
983
983
static inline void save_sprs(struct thread_struct *t)
984
984
{
985
985
#ifdef CONFIG_ALTIVEC
986
-
if (cpu_has_feature(cpu_has_feature(CPU_FTR_ALTIVEC)))
986
+
if (cpu_has_feature(CPU_FTR_ALTIVEC))
987
987
t->vrsave = mfspr(SPRN_VRSAVE);
988
988
#endif
989
989
#ifdef CONFIG_PPC_BOOK3S_64
+2
-2
arch/powerpc/mm/hugetlbpage.c
+2
-2
arch/powerpc/mm/hugetlbpage.c
···
413
413
{
414
414
struct hugepd_freelist **batchp;
415
415
416
-
batchp = this_cpu_ptr(&hugepd_freelist_cur);
416
+
batchp = &get_cpu_var(hugepd_freelist_cur);
417
417
418
418
if (atomic_read(&tlb->mm->mm_users) < 2 ||
419
419
cpumask_equal(mm_cpumask(tlb->mm),
420
420
cpumask_of(smp_processor_id()))) {
421
421
kmem_cache_free(hugepte_cache, hugepte);
422
-
put_cpu_var(hugepd_freelist_cur);
422
+
put_cpu_var(hugepd_freelist_cur);
423
423
return;
424
424
}
425
425
+3
arch/s390/Kconfig
+3
arch/s390/Kconfig
+2
arch/s390/crypto/prng.c
+2
arch/s390/crypto/prng.c
···
669
669
static struct miscdevice prng_sha512_dev = {
670
670
.name = "prandom",
671
671
.minor = MISC_DYNAMIC_MINOR,
672
+
.mode = 0644,
672
673
.fops = &prng_sha512_fops,
673
674
};
674
675
static struct miscdevice prng_tdes_dev = {
675
676
.name = "prandom",
676
677
.minor = MISC_DYNAMIC_MINOR,
678
+
.mode = 0644,
677
679
.fops = &prng_tdes_fops,
678
680
};
679
681
+3
arch/s390/include/asm/cache.h
+3
arch/s390/include/asm/cache.h
+3
-1
arch/s390/include/uapi/asm/unistd.h
+3
-1
arch/s390/include/uapi/asm/unistd.h
···
311
311
#define __NR_shutdown 373
312
312
#define __NR_mlock2 374
313
313
#define __NR_copy_file_range 375
314
-
#define NR_syscalls 376
314
+
#define __NR_preadv2 376
315
+
#define __NR_pwritev2 377
316
+
#define NR_syscalls 378
315
317
316
318
/*
317
319
* There are some system calls that are not present on 64 bit, some
+1
arch/s390/kernel/perf_cpum_cf.c
+1
arch/s390/kernel/perf_cpum_cf.c
+1
-1
arch/s390/kernel/perf_cpum_sf.c
+1
-1
arch/s390/kernel/perf_cpum_sf.c
+2
arch/s390/kernel/syscalls.S
+2
arch/s390/kernel/syscalls.S
+5
-3
arch/s390/mm/gup.c
+5
-3
arch/s390/mm/gup.c
···
20
20
static inline int gup_pte_range(pmd_t *pmdp, pmd_t pmd, unsigned long addr,
21
21
unsigned long end, int write, struct page **pages, int *nr)
22
22
{
23
+
struct page *head, *page;
23
24
unsigned long mask;
24
25
pte_t *ptep, pte;
25
-
struct page *page;
26
26
27
27
mask = (write ? _PAGE_PROTECT : 0) | _PAGE_INVALID | _PAGE_SPECIAL;
28
28
···
37
37
return 0;
38
38
VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
39
39
page = pte_page(pte);
40
-
if (!page_cache_get_speculative(page))
40
+
head = compound_head(page);
41
+
if (!page_cache_get_speculative(head))
41
42
return 0;
42
43
if (unlikely(pte_val(pte) != pte_val(*ptep))) {
43
-
put_page(page);
44
+
put_page(head);
44
45
return 0;
45
46
}
47
+
VM_BUG_ON_PAGE(compound_head(page) != head, page);
46
48
pages[*nr] = page;
47
49
(*nr)++;
48
50
+7
-3
arch/s390/mm/init.c
+7
-3
arch/s390/mm/init.c
···
108
108
free_area_init_nodes(max_zone_pfns);
109
109
}
110
110
111
+
void mark_rodata_ro(void)
112
+
{
113
+
/* Text and rodata are already protected. Nothing to do here. */
114
+
pr_info("Write protecting the kernel read-only data: %luk\n",
115
+
((unsigned long)&_eshared - (unsigned long)&_stext) >> 10);
116
+
}
117
+
111
118
void __init mem_init(void)
112
119
{
113
120
if (MACHINE_HAS_TLB_LC)
···
133
126
setup_zero_pages(); /* Setup zeroed pages. */
134
127
135
128
mem_init_print_info(NULL);
136
-
printk("Write protected kernel read-only data: %#lx - %#lx\n",
137
-
(unsigned long)&_stext,
138
-
PFN_ALIGN((unsigned long)&_eshared) - 1);
139
129
}
140
130
141
131
void free_initmem(void)
+1
-2
arch/s390/pci/pci_clp.c
+1
-2
arch/s390/pci/pci_clp.c
···
176
176
rc = clp_store_query_pci_fn(zdev, &rrb->response);
177
177
if (rc)
178
178
goto out;
179
-
if (rrb->response.pfgid)
180
-
rc = clp_query_pci_fngrp(zdev, rrb->response.pfgid);
179
+
rc = clp_query_pci_fngrp(zdev, rrb->response.pfgid);
181
180
} else {
182
181
zpci_err("Q PCI FN:\n");
183
182
zpci_err_clp(rrb->response.hdr.rsp, rc);
+4
-4
arch/sparc/include/asm/compat_signal.h
+4
-4
arch/sparc/include/asm/compat_signal.h
···
6
6
7
7
#ifdef CONFIG_COMPAT
8
8
struct __new_sigaction32 {
9
-
unsigned sa_handler;
9
+
unsigned int sa_handler;
10
10
unsigned int sa_flags;
11
-
unsigned sa_restorer; /* not used by Linux/SPARC yet */
11
+
unsigned int sa_restorer; /* not used by Linux/SPARC yet */
12
12
compat_sigset_t sa_mask;
13
13
};
14
14
15
15
struct __old_sigaction32 {
16
-
unsigned sa_handler;
16
+
unsigned int sa_handler;
17
17
compat_old_sigset_t sa_mask;
18
18
unsigned int sa_flags;
19
-
unsigned sa_restorer; /* not used by Linux/SPARC yet */
19
+
unsigned int sa_restorer; /* not used by Linux/SPARC yet */
20
20
};
21
21
#endif
22
22
+16
-16
arch/sparc/include/asm/obio.h
+16
-16
arch/sparc/include/asm/obio.h
···
117
117
"i" (ASI_M_CTL));
118
118
}
119
119
120
-
static inline unsigned bw_get_prof_limit(int cpu)
120
+
static inline unsigned int bw_get_prof_limit(int cpu)
121
121
{
122
-
unsigned limit;
122
+
unsigned int limit;
123
123
124
124
__asm__ __volatile__ ("lda [%1] %2, %0" :
125
125
"=r" (limit) :
···
128
128
return limit;
129
129
}
130
130
131
-
static inline void bw_set_prof_limit(int cpu, unsigned limit)
131
+
static inline void bw_set_prof_limit(int cpu, unsigned int limit)
132
132
{
133
133
__asm__ __volatile__ ("sta %0, [%1] %2" : :
134
134
"r" (limit),
···
136
136
"i" (ASI_M_CTL));
137
137
}
138
138
139
-
static inline unsigned bw_get_ctrl(int cpu)
139
+
static inline unsigned int bw_get_ctrl(int cpu)
140
140
{
141
-
unsigned ctrl;
141
+
unsigned int ctrl;
142
142
143
143
__asm__ __volatile__ ("lda [%1] %2, %0" :
144
144
"=r" (ctrl) :
···
147
147
return ctrl;
148
148
}
149
149
150
-
static inline void bw_set_ctrl(int cpu, unsigned ctrl)
150
+
static inline void bw_set_ctrl(int cpu, unsigned int ctrl)
151
151
{
152
152
__asm__ __volatile__ ("sta %0, [%1] %2" : :
153
153
"r" (ctrl),
···
155
155
"i" (ASI_M_CTL));
156
156
}
157
157
158
-
static inline unsigned cc_get_ipen(void)
158
+
static inline unsigned int cc_get_ipen(void)
159
159
{
160
-
unsigned pending;
160
+
unsigned int pending;
161
161
162
162
__asm__ __volatile__ ("lduha [%1] %2, %0" :
163
163
"=r" (pending) :
···
166
166
return pending;
167
167
}
168
168
169
-
static inline void cc_set_iclr(unsigned clear)
169
+
static inline void cc_set_iclr(unsigned int clear)
170
170
{
171
171
__asm__ __volatile__ ("stha %0, [%1] %2" : :
172
172
"r" (clear),
···
174
174
"i" (ASI_M_MXCC));
175
175
}
176
176
177
-
static inline unsigned cc_get_imsk(void)
177
+
static inline unsigned int cc_get_imsk(void)
178
178
{
179
-
unsigned mask;
179
+
unsigned int mask;
180
180
181
181
__asm__ __volatile__ ("lduha [%1] %2, %0" :
182
182
"=r" (mask) :
···
185
185
return mask;
186
186
}
187
187
188
-
static inline void cc_set_imsk(unsigned mask)
188
+
static inline void cc_set_imsk(unsigned int mask)
189
189
{
190
190
__asm__ __volatile__ ("stha %0, [%1] %2" : :
191
191
"r" (mask),
···
193
193
"i" (ASI_M_MXCC));
194
194
}
195
195
196
-
static inline unsigned cc_get_imsk_other(int cpuid)
196
+
static inline unsigned int cc_get_imsk_other(int cpuid)
197
197
{
198
-
unsigned mask;
198
+
unsigned int mask;
199
199
200
200
__asm__ __volatile__ ("lduha [%1] %2, %0" :
201
201
"=r" (mask) :
···
204
204
return mask;
205
205
}
206
206
207
-
static inline void cc_set_imsk_other(int cpuid, unsigned mask)
207
+
static inline void cc_set_imsk_other(int cpuid, unsigned int mask)
208
208
{
209
209
__asm__ __volatile__ ("stha %0, [%1] %2" : :
210
210
"r" (mask),
···
212
212
"i" (ASI_M_CTL));
213
213
}
214
214
215
-
static inline void cc_set_igen(unsigned gen)
215
+
static inline void cc_set_igen(unsigned int gen)
216
216
{
217
217
__asm__ __volatile__ ("sta %0, [%1] %2" : :
218
218
"r" (gen),
+5
-5
arch/sparc/include/asm/openprom.h
+5
-5
arch/sparc/include/asm/openprom.h
···
29
29
/* V2 and later prom device operations. */
30
30
struct linux_dev_v2_funcs {
31
31
phandle (*v2_inst2pkg)(int d); /* Convert ihandle to phandle */
32
-
char * (*v2_dumb_mem_alloc)(char *va, unsigned sz);
33
-
void (*v2_dumb_mem_free)(char *va, unsigned sz);
32
+
char * (*v2_dumb_mem_alloc)(char *va, unsigned int sz);
33
+
void (*v2_dumb_mem_free)(char *va, unsigned int sz);
34
34
35
35
/* To map devices into virtual I/O space. */
36
-
char * (*v2_dumb_mmap)(char *virta, int which_io, unsigned paddr, unsigned sz);
37
-
void (*v2_dumb_munmap)(char *virta, unsigned size);
36
+
char * (*v2_dumb_mmap)(char *virta, int which_io, unsigned int paddr, unsigned int sz);
37
+
void (*v2_dumb_munmap)(char *virta, unsigned int size);
38
38
39
39
int (*v2_dev_open)(char *devpath);
40
40
void (*v2_dev_close)(int d);
···
50
50
struct linux_mlist_v0 {
51
51
struct linux_mlist_v0 *theres_more;
52
52
unsigned int start_adr;
53
-
unsigned num_bytes;
53
+
unsigned int num_bytes;
54
54
};
55
55
56
56
struct linux_mem_v0 {
+1
-1
arch/sparc/include/asm/pgtable_64.h
+1
-1
arch/sparc/include/asm/pgtable_64.h
···
218
218
extern pgprot_t PAGE_COPY;
219
219
extern pgprot_t PAGE_SHARED;
220
220
221
-
/* XXX This uglyness is for the atyfb driver's sparc mmap() support. XXX */
221
+
/* XXX This ugliness is for the atyfb driver's sparc mmap() support. XXX */
222
222
extern unsigned long _PAGE_IE;
223
223
extern unsigned long _PAGE_E;
224
224
extern unsigned long _PAGE_CACHE;
+1
-1
arch/sparc/include/asm/processor_64.h
+1
-1
arch/sparc/include/asm/processor_64.h
···
201
201
#define KSTK_ESP(tsk) (task_pt_regs(tsk)->u_regs[UREG_FP])
202
202
203
203
/* Please see the commentary in asm/backoff.h for a description of
204
-
* what these instructions are doing and how they have been choosen.
204
+
* what these instructions are doing and how they have been chosen.
205
205
* To make a long story short, we are trying to yield the current cpu
206
206
* strand during busy loops.
207
207
*/
+1
-1
arch/sparc/include/asm/sigcontext.h
+1
-1
arch/sparc/include/asm/sigcontext.h
+1
-1
arch/sparc/include/asm/tsb.h
+1
-1
arch/sparc/include/asm/tsb.h
···
149
149
* page size in question. So for PMD mappings (which fall on
150
150
* bit 23, for 8MB per PMD) we must propagate bit 22 for a
151
151
* 4MB huge page. For huge PUDs (which fall on bit 33, for
152
-
* 8GB per PUD), we have to accomodate 256MB and 2GB huge
152
+
* 8GB per PUD), we have to accommodate 256MB and 2GB huge
153
153
* pages. So for those we propagate bits 32 to 28.
154
154
*/
155
155
#define KERN_PGTABLE_WALK(VADDR, REG1, REG2, FAIL_LABEL) \
+2
-2
arch/sparc/include/uapi/asm/stat.h
+2
-2
arch/sparc/include/uapi/asm/stat.h
···
6
6
#if defined(__sparc__) && defined(__arch64__)
7
7
/* 64 bit sparc */
8
8
struct stat {
9
-
unsigned st_dev;
9
+
unsigned int st_dev;
10
10
ino_t st_ino;
11
11
mode_t st_mode;
12
12
short st_nlink;
13
13
uid_t st_uid;
14
14
gid_t st_gid;
15
-
unsigned st_rdev;
15
+
unsigned int st_rdev;
16
16
off_t st_size;
17
17
time_t st_atime;
18
18
time_t st_mtime;
+6
-6
arch/sparc/kernel/audit.c
+6
-6
arch/sparc/kernel/audit.c
···
5
5
6
6
#include "kernel.h"
7
7
8
-
static unsigned dir_class[] = {
8
+
static unsigned int dir_class[] = {
9
9
#include <asm-generic/audit_dir_write.h>
10
10
~0U
11
11
};
12
12
13
-
static unsigned read_class[] = {
13
+
static unsigned int read_class[] = {
14
14
#include <asm-generic/audit_read.h>
15
15
~0U
16
16
};
17
17
18
-
static unsigned write_class[] = {
18
+
static unsigned int write_class[] = {
19
19
#include <asm-generic/audit_write.h>
20
20
~0U
21
21
};
22
22
23
-
static unsigned chattr_class[] = {
23
+
static unsigned int chattr_class[] = {
24
24
#include <asm-generic/audit_change_attr.h>
25
25
~0U
26
26
};
27
27
28
-
static unsigned signal_class[] = {
28
+
static unsigned int signal_class[] = {
29
29
#include <asm-generic/audit_signal.h>
30
30
~0U
31
31
};
···
39
39
return 0;
40
40
}
41
41
42
-
int audit_classify_syscall(int abi, unsigned syscall)
42
+
int audit_classify_syscall(int abi, unsigned int syscall)
43
43
{
44
44
#ifdef CONFIG_COMPAT
45
45
if (abi == AUDIT_ARCH_SPARC)
+6
-6
arch/sparc/kernel/compat_audit.c
+6
-6
arch/sparc/kernel/compat_audit.c
···
2
2
#include <asm/unistd.h>
3
3
#include "kernel.h"
4
4
5
-
unsigned sparc32_dir_class[] = {
5
+
unsigned int sparc32_dir_class[] = {
6
6
#include <asm-generic/audit_dir_write.h>
7
7
~0U
8
8
};
9
9
10
-
unsigned sparc32_chattr_class[] = {
10
+
unsigned int sparc32_chattr_class[] = {
11
11
#include <asm-generic/audit_change_attr.h>
12
12
~0U
13
13
};
14
14
15
-
unsigned sparc32_write_class[] = {
15
+
unsigned int sparc32_write_class[] = {
16
16
#include <asm-generic/audit_write.h>
17
17
~0U
18
18
};
19
19
20
-
unsigned sparc32_read_class[] = {
20
+
unsigned int sparc32_read_class[] = {
21
21
#include <asm-generic/audit_read.h>
22
22
~0U
23
23
};
24
24
25
-
unsigned sparc32_signal_class[] = {
25
+
unsigned int sparc32_signal_class[] = {
26
26
#include <asm-generic/audit_signal.h>
27
27
~0U
28
28
};
29
29
30
-
int sparc32_classify_syscall(unsigned syscall)
30
+
int sparc32_classify_syscall(unsigned int syscall)
31
31
{
32
32
switch(syscall) {
33
33
case __NR_open:
+1
-1
arch/sparc/kernel/entry.S
+1
-1
arch/sparc/kernel/entry.S
···
1255
1255
kuw_patch1_7win: sll %o3, 6, %o3
1256
1256
1257
1257
/* No matter how much overhead this routine has in the worst
1258
-
* case scenerio, it is several times better than taking the
1258
+
* case scenario, it is several times better than taking the
1259
1259
* traps with the old method of just doing flush_user_windows().
1260
1260
*/
1261
1261
kill_user_windows:
+3
-3
arch/sparc/kernel/ioport.c
+3
-3
arch/sparc/kernel/ioport.c
···
131
131
EXPORT_SYMBOL(ioremap);
132
132
133
133
/*
134
-
* Comlimentary to ioremap().
134
+
* Complementary to ioremap().
135
135
*/
136
136
void iounmap(volatile void __iomem *virtual)
137
137
{
···
233
233
}
234
234
235
235
/*
236
-
* Comlimentary to _sparc_ioremap().
236
+
* Complementary to _sparc_ioremap().
237
237
*/
238
238
static void _sparc_free_io(struct resource *res)
239
239
{
···
532
532
}
533
533
534
534
/* Map a set of buffers described by scatterlist in streaming
535
-
* mode for DMA. This is the scather-gather version of the
535
+
* mode for DMA. This is the scatter-gather version of the
536
536
* above pci_map_single interface. Here the scatter gather list
537
537
* elements are each tagged with the appropriate dma address
538
538
* and length. They are obtained via sg_dma_{address,length}(SG).
+6
-6
arch/sparc/kernel/kernel.h
+6
-6
arch/sparc/kernel/kernel.h
···
54
54
asmlinkage int do_sys32_sigstack(u32 u_ssptr, u32 u_ossptr, unsigned long sp);
55
55
56
56
/* compat_audit.c */
57
-
extern unsigned sparc32_dir_class[];
58
-
extern unsigned sparc32_chattr_class[];
59
-
extern unsigned sparc32_write_class[];
60
-
extern unsigned sparc32_read_class[];
61
-
extern unsigned sparc32_signal_class[];
62
-
int sparc32_classify_syscall(unsigned syscall);
57
+
extern unsigned int sparc32_dir_class[];
58
+
extern unsigned int sparc32_chattr_class[];
59
+
extern unsigned int sparc32_write_class[];
60
+
extern unsigned int sparc32_read_class[];
61
+
extern unsigned int sparc32_signal_class[];
62
+
int sparc32_classify_syscall(unsigned int syscall);
63
63
#endif
64
64
65
65
#ifdef CONFIG_SPARC32
+1
-1
arch/sparc/kernel/leon_kernel.c
+1
-1
arch/sparc/kernel/leon_kernel.c
···
203
203
204
204
/*
205
205
* Build a LEON IRQ for the edge triggered LEON IRQ controller:
206
-
* Edge (normal) IRQ - handle_simple_irq, ack=DONT-CARE, never ack
206
+
* Edge (normal) IRQ - handle_simple_irq, ack=DON'T-CARE, never ack
207
207
* Level IRQ (PCI|Level-GPIO) - handle_fasteoi_irq, ack=1, ack after ISR
208
208
* Per-CPU Edge - handle_percpu_irq, ack=0
209
209
*/
+1
-1
arch/sparc/kernel/process_64.c
+1
-1
arch/sparc/kernel/process_64.c
+1
-1
arch/sparc/kernel/setup_32.c
+1
-1
arch/sparc/kernel/setup_32.c
···
109
109
unsigned char boot_cpu_id = 0xff; /* 0xff will make it into DATA section... */
110
110
111
111
static void
112
-
prom_console_write(struct console *con, const char *s, unsigned n)
112
+
prom_console_write(struct console *con, const char *s, unsigned int n)
113
113
{
114
114
prom_write(s, n);
115
115
}
+1
-1
arch/sparc/kernel/setup_64.c
+1
-1
arch/sparc/kernel/setup_64.c
+1
-1
arch/sparc/kernel/signal32.c
+1
-1
arch/sparc/kernel/signal32.c
+2
-2
arch/sparc/kernel/sys_sparc_64.c
+2
-2
arch/sparc/kernel/sys_sparc_64.c
···
337
337
switch (call) {
338
338
case SEMOP:
339
339
err = sys_semtimedop(first, ptr,
340
-
(unsigned)second, NULL);
340
+
(unsigned int)second, NULL);
341
341
goto out;
342
342
case SEMTIMEDOP:
343
-
err = sys_semtimedop(first, ptr, (unsigned)second,
343
+
err = sys_semtimedop(first, ptr, (unsigned int)second,
344
344
(const struct timespec __user *)
345
345
(unsigned long) fifth);
346
346
goto out;
+1
-1
arch/sparc/kernel/sysfs.c
+1
-1
arch/sparc/kernel/sysfs.c
+2
-2
arch/sparc/kernel/unaligned_64.c
+2
-2
arch/sparc/kernel/unaligned_64.c
···
209
209
if (size == 16) {
210
210
size = 8;
211
211
zero = (((long)(reg_num ?
212
-
(unsigned)fetch_reg(reg_num, regs) : 0)) << 32) |
213
-
(unsigned)fetch_reg(reg_num + 1, regs);
212
+
(unsigned int)fetch_reg(reg_num, regs) : 0)) << 32) |
213
+
(unsigned int)fetch_reg(reg_num + 1, regs);
214
214
} else if (reg_num) {
215
215
src_val_p = fetch_reg_addr(reg_num, regs);
216
216
}
+4
-4
arch/sparc/mm/fault_32.c
+4
-4
arch/sparc/mm/fault_32.c
···
303
303
fixup = search_extables_range(regs->pc, &g2);
304
304
/* Values below 10 are reserved for other things */
305
305
if (fixup > 10) {
306
-
extern const unsigned __memset_start[];
307
-
extern const unsigned __memset_end[];
308
-
extern const unsigned __csum_partial_copy_start[];
309
-
extern const unsigned __csum_partial_copy_end[];
306
+
extern const unsigned int __memset_start[];
307
+
extern const unsigned int __memset_end[];
308
+
extern const unsigned int __csum_partial_copy_start[];
309
+
extern const unsigned int __csum_partial_copy_end[];
310
310
311
311
#ifdef DEBUG_EXCEPTIONS
312
312
printk("Exception: PC<%08lx> faddr<%08lx>\n",
+1
-1
arch/sparc/net/bpf_jit_comp.c
+1
-1
arch/sparc/net/bpf_jit_comp.c
···
351
351
*
352
352
* Sometimes we need to emit a branch earlier in the code
353
353
* sequence. And in these situations we adjust "destination"
354
-
* to accomodate this difference. For example, if we needed
354
+
* to accommodate this difference. For example, if we needed
355
355
* to emit a branch (and it's delay slot) right before the
356
356
* final instruction emitted for a BPF opcode, we'd use
357
357
* "destination + 4" instead of just plain "destination" above.
+13
-13
arch/tile/include/hv/drv_mpipe_intf.h
+13
-13
arch/tile/include/hv/drv_mpipe_intf.h
···
211
211
* request shared data permission on the same link.
212
212
*
213
213
* No more than one of ::GXIO_MPIPE_LINK_DATA, ::GXIO_MPIPE_LINK_NO_DATA,
214
-
* or ::GXIO_MPIPE_LINK_EXCL_DATA may be specifed in a gxio_mpipe_link_open()
214
+
* or ::GXIO_MPIPE_LINK_EXCL_DATA may be specified in a gxio_mpipe_link_open()
215
215
* call. If none are specified, ::GXIO_MPIPE_LINK_DATA is assumed.
216
216
*/
217
217
#define GXIO_MPIPE_LINK_DATA 0x00000001UL
···
219
219
/** Do not request data permission on the specified link.
220
220
*
221
221
* No more than one of ::GXIO_MPIPE_LINK_DATA, ::GXIO_MPIPE_LINK_NO_DATA,
222
-
* or ::GXIO_MPIPE_LINK_EXCL_DATA may be specifed in a gxio_mpipe_link_open()
222
+
* or ::GXIO_MPIPE_LINK_EXCL_DATA may be specified in a gxio_mpipe_link_open()
223
223
* call. If none are specified, ::GXIO_MPIPE_LINK_DATA is assumed.
224
224
*/
225
225
#define GXIO_MPIPE_LINK_NO_DATA 0x00000002UL
···
230
230
* data permission on it, this open will fail.
231
231
*
232
232
* No more than one of ::GXIO_MPIPE_LINK_DATA, ::GXIO_MPIPE_LINK_NO_DATA,
233
-
* or ::GXIO_MPIPE_LINK_EXCL_DATA may be specifed in a gxio_mpipe_link_open()
233
+
* or ::GXIO_MPIPE_LINK_EXCL_DATA may be specified in a gxio_mpipe_link_open()
234
234
* call. If none are specified, ::GXIO_MPIPE_LINK_DATA is assumed.
235
235
*/
236
236
#define GXIO_MPIPE_LINK_EXCL_DATA 0x00000004UL
···
241
241
* permission on the same link.
242
242
*
243
243
* No more than one of ::GXIO_MPIPE_LINK_STATS, ::GXIO_MPIPE_LINK_NO_STATS,
244
-
* or ::GXIO_MPIPE_LINK_EXCL_STATS may be specifed in a gxio_mpipe_link_open()
244
+
* or ::GXIO_MPIPE_LINK_EXCL_STATS may be specified in a gxio_mpipe_link_open()
245
245
* call. If none are specified, ::GXIO_MPIPE_LINK_STATS is assumed.
246
246
*/
247
247
#define GXIO_MPIPE_LINK_STATS 0x00000008UL
···
249
249
/** Do not request stats permission on the specified link.
250
250
*
251
251
* No more than one of ::GXIO_MPIPE_LINK_STATS, ::GXIO_MPIPE_LINK_NO_STATS,
252
-
* or ::GXIO_MPIPE_LINK_EXCL_STATS may be specifed in a gxio_mpipe_link_open()
252
+
* or ::GXIO_MPIPE_LINK_EXCL_STATS may be specified in a gxio_mpipe_link_open()
253
253
* call. If none are specified, ::GXIO_MPIPE_LINK_STATS is assumed.
254
254
*/
255
255
#define GXIO_MPIPE_LINK_NO_STATS 0x00000010UL
···
267
267
* reset by other statistics programs.
268
268
*
269
269
* No more than one of ::GXIO_MPIPE_LINK_STATS, ::GXIO_MPIPE_LINK_NO_STATS,
270
-
* or ::GXIO_MPIPE_LINK_EXCL_STATS may be specifed in a gxio_mpipe_link_open()
270
+
* or ::GXIO_MPIPE_LINK_EXCL_STATS may be specified in a gxio_mpipe_link_open()
271
271
* call. If none are specified, ::GXIO_MPIPE_LINK_STATS is assumed.
272
272
*/
273
273
#define GXIO_MPIPE_LINK_EXCL_STATS 0x00000020UL
···
278
278
* permission on the same link.
279
279
*
280
280
* No more than one of ::GXIO_MPIPE_LINK_CTL, ::GXIO_MPIPE_LINK_NO_CTL,
281
-
* or ::GXIO_MPIPE_LINK_EXCL_CTL may be specifed in a gxio_mpipe_link_open()
281
+
* or ::GXIO_MPIPE_LINK_EXCL_CTL may be specified in a gxio_mpipe_link_open()
282
282
* call. If none are specified, ::GXIO_MPIPE_LINK_CTL is assumed.
283
283
*/
284
284
#define GXIO_MPIPE_LINK_CTL 0x00000040UL
···
286
286
/** Do not request control permission on the specified link.
287
287
*
288
288
* No more than one of ::GXIO_MPIPE_LINK_CTL, ::GXIO_MPIPE_LINK_NO_CTL,
289
-
* or ::GXIO_MPIPE_LINK_EXCL_CTL may be specifed in a gxio_mpipe_link_open()
289
+
* or ::GXIO_MPIPE_LINK_EXCL_CTL may be specified in a gxio_mpipe_link_open()
290
290
* call. If none are specified, ::GXIO_MPIPE_LINK_CTL is assumed.
291
291
*/
292
292
#define GXIO_MPIPE_LINK_NO_CTL 0x00000080UL
···
301
301
* it prevents programs like mpipe-link from configuring the link.
302
302
*
303
303
* No more than one of ::GXIO_MPIPE_LINK_CTL, ::GXIO_MPIPE_LINK_NO_CTL,
304
-
* or ::GXIO_MPIPE_LINK_EXCL_CTL may be specifed in a gxio_mpipe_link_open()
304
+
* or ::GXIO_MPIPE_LINK_EXCL_CTL may be specified in a gxio_mpipe_link_open()
305
305
* call. If none are specified, ::GXIO_MPIPE_LINK_CTL is assumed.
306
306
*/
307
307
#define GXIO_MPIPE_LINK_EXCL_CTL 0x00000100UL
···
311
311
* change the desired state of the link when it is closed or the process
312
312
* exits. No more than one of ::GXIO_MPIPE_LINK_AUTO_UP,
313
313
* ::GXIO_MPIPE_LINK_AUTO_UPDOWN, ::GXIO_MPIPE_LINK_AUTO_DOWN, or
314
-
* ::GXIO_MPIPE_LINK_AUTO_NONE may be specifed in a gxio_mpipe_link_open()
314
+
* ::GXIO_MPIPE_LINK_AUTO_NONE may be specified in a gxio_mpipe_link_open()
315
315
* call. If none are specified, ::GXIO_MPIPE_LINK_AUTO_UPDOWN is assumed.
316
316
*/
317
317
#define GXIO_MPIPE_LINK_AUTO_UP 0x00000200UL
···
322
322
* open, set the desired state of the link to down. No more than one of
323
323
* ::GXIO_MPIPE_LINK_AUTO_UP, ::GXIO_MPIPE_LINK_AUTO_UPDOWN,
324
324
* ::GXIO_MPIPE_LINK_AUTO_DOWN, or ::GXIO_MPIPE_LINK_AUTO_NONE may be
325
-
* specifed in a gxio_mpipe_link_open() call. If none are specified,
325
+
* specified in a gxio_mpipe_link_open() call. If none are specified,
326
326
* ::GXIO_MPIPE_LINK_AUTO_UPDOWN is assumed.
327
327
*/
328
328
#define GXIO_MPIPE_LINK_AUTO_UPDOWN 0x00000400UL
···
332
332
* process has the link open, set the desired state of the link to down.
333
333
* No more than one of ::GXIO_MPIPE_LINK_AUTO_UP,
334
334
* ::GXIO_MPIPE_LINK_AUTO_UPDOWN, ::GXIO_MPIPE_LINK_AUTO_DOWN, or
335
-
* ::GXIO_MPIPE_LINK_AUTO_NONE may be specifed in a gxio_mpipe_link_open()
335
+
* ::GXIO_MPIPE_LINK_AUTO_NONE may be specified in a gxio_mpipe_link_open()
336
336
* call. If none are specified, ::GXIO_MPIPE_LINK_AUTO_UPDOWN is assumed.
337
337
*/
338
338
#define GXIO_MPIPE_LINK_AUTO_DOWN 0x00000800UL
···
342
342
* closed or the process exits. No more than one of
343
343
* ::GXIO_MPIPE_LINK_AUTO_UP, ::GXIO_MPIPE_LINK_AUTO_UPDOWN,
344
344
* ::GXIO_MPIPE_LINK_AUTO_DOWN, or ::GXIO_MPIPE_LINK_AUTO_NONE may be
345
-
* specifed in a gxio_mpipe_link_open() call. If none are specified,
345
+
* specified in a gxio_mpipe_link_open() call. If none are specified,
346
346
* ::GXIO_MPIPE_LINK_AUTO_UPDOWN is assumed.
347
347
*/
348
348
#define GXIO_MPIPE_LINK_AUTO_NONE 0x00001000UL
+8
-8
arch/tile/kernel/kgdb.c
+8
-8
arch/tile/kernel/kgdb.c
···
126
126
sleeping_thread_to_gdb_regs(unsigned long *gdb_regs, struct task_struct *task)
127
127
{
128
128
struct pt_regs *thread_regs;
129
+
const int NGPRS = TREG_LAST_GPR + 1;
129
130
130
131
if (task == NULL)
131
132
return;
132
133
133
-
/* Initialize to zero. */
134
-
memset(gdb_regs, 0, NUMREGBYTES);
135
-
136
134
thread_regs = task_pt_regs(task);
137
-
memcpy(gdb_regs, thread_regs, TREG_LAST_GPR * sizeof(unsigned long));
135
+
memcpy(gdb_regs, thread_regs, NGPRS * sizeof(unsigned long));
136
+
memset(&gdb_regs[NGPRS], 0,
137
+
(TILEGX_PC_REGNUM - NGPRS) * sizeof(unsigned long));
138
138
gdb_regs[TILEGX_PC_REGNUM] = thread_regs->pc;
139
139
gdb_regs[TILEGX_FAULTNUM_REGNUM] = thread_regs->faultnum;
140
140
}
···
433
433
struct kgdb_arch arch_kgdb_ops;
434
434
435
435
/*
436
-
* kgdb_arch_init - Perform any architecture specific initalization.
436
+
* kgdb_arch_init - Perform any architecture specific initialization.
437
437
*
438
-
* This function will handle the initalization of any architecture
438
+
* This function will handle the initialization of any architecture
439
439
* specific callbacks.
440
440
*/
441
441
int kgdb_arch_init(void)
···
447
447
}
448
448
449
449
/*
450
-
* kgdb_arch_exit - Perform any architecture specific uninitalization.
450
+
* kgdb_arch_exit - Perform any architecture specific uninitialization.
451
451
*
452
-
* This function will handle the uninitalization of any architecture
452
+
* This function will handle the uninitialization of any architecture
453
453
* specific callbacks, for dynamic registration and unregistration.
454
454
*/
455
455
void kgdb_arch_exit(void)
+1
-1
arch/tile/kernel/pci_gx.c
+1
-1
arch/tile/kernel/pci_gx.c
···
1326
1326
1327
1327
1328
1328
/*
1329
-
* See tile_cfg_read() for relevent comments.
1329
+
* See tile_cfg_read() for relevant comments.
1330
1330
* Note that "val" is the value to write, not a pointer to that value.
1331
1331
*/
1332
1332
static int tile_cfg_write(struct pci_bus *bus, unsigned int devfn, int offset,
+18
-3
arch/x86/events/amd/core.c
+18
-3
arch/x86/events/amd/core.c
···
369
369
370
370
WARN_ON_ONCE(cpuc->amd_nb);
371
371
372
-
if (boot_cpu_data.x86_max_cores < 2)
372
+
if (!x86_pmu.amd_nb_constraints)
373
373
return NOTIFY_OK;
374
374
375
375
cpuc->amd_nb = amd_alloc_nb(cpu);
···
388
388
389
389
cpuc->perf_ctr_virt_mask = AMD64_EVENTSEL_HOSTONLY;
390
390
391
-
if (boot_cpu_data.x86_max_cores < 2)
391
+
if (!x86_pmu.amd_nb_constraints)
392
392
return;
393
393
394
394
nb_id = amd_get_nb_id(cpu);
···
414
414
{
415
415
struct cpu_hw_events *cpuhw;
416
416
417
-
if (boot_cpu_data.x86_max_cores < 2)
417
+
if (!x86_pmu.amd_nb_constraints)
418
418
return;
419
419
420
420
cpuhw = &per_cpu(cpu_hw_events, cpu);
···
648
648
.cpu_prepare = amd_pmu_cpu_prepare,
649
649
.cpu_starting = amd_pmu_cpu_starting,
650
650
.cpu_dead = amd_pmu_cpu_dead,
651
+
652
+
.amd_nb_constraints = 1,
651
653
};
652
654
653
655
static int __init amd_core_pmu_init(void)
···
676
674
x86_pmu.eventsel = MSR_F15H_PERF_CTL;
677
675
x86_pmu.perfctr = MSR_F15H_PERF_CTR;
678
676
x86_pmu.num_counters = AMD64_NUM_COUNTERS_CORE;
677
+
/*
678
+
* AMD Core perfctr has separate MSRs for the NB events, see
679
+
* the amd/uncore.c driver.
680
+
*/
681
+
x86_pmu.amd_nb_constraints = 0;
679
682
680
683
pr_cont("core perfctr, ");
681
684
return 0;
···
699
692
ret = amd_core_pmu_init();
700
693
if (ret)
701
694
return ret;
695
+
696
+
if (num_possible_cpus() == 1) {
697
+
/*
698
+
* No point in allocating data structures to serialize
699
+
* against other CPUs, when there is only the one CPU.
700
+
*/
701
+
x86_pmu.amd_nb_constraints = 0;
702
+
}
702
703
703
704
/* Events are common for all AMDs */
704
705
memcpy(hw_cache_event_ids, amd_hw_cache_event_ids,
+45
-7
arch/x86/events/amd/ibs.c
···
28
28
#define IBS_FETCH_CONFIG_MASK (IBS_FETCH_RAND_EN | IBS_FETCH_MAX_CNT)
29
29
#define IBS_OP_CONFIG_MASK IBS_OP_MAX_CNT
30
30
31
+
32
+
/*
33
+
* IBS states:
34
+
*
35
+
* ENABLED; tracks the pmu::add(), pmu::del() state, when set the counter is taken
36
+
* and any further add()s must fail.
37
+
*
38
+
* STARTED/STOPPING/STOPPED; deal with pmu::start(), pmu::stop() state but are
39
+
* complicated by the fact that the IBS hardware can send late NMIs (ie. after
40
+
* we've cleared the EN bit).
41
+
*
42
+
* In order to consume these late NMIs we have the STOPPED state, any NMI that
43
+
* happens after we've cleared the EN state will clear this bit and report the
44
+
* NMI handled (this is fundamentally racy in the face or multiple NMI sources,
45
+
* someone else can consume our BIT and our NMI will go unhandled).
46
+
*
47
+
* And since we cannot set/clear this separate bit together with the EN bit,
48
+
* there are races; if we cleared STARTED early, an NMI could land in
49
+
* between clearing STARTED and clearing the EN bit (in fact multiple NMIs
50
+
* could happen if the period is small enough), and consume our STOPPED bit
51
+
* and trigger streams of unhandled NMIs.
52
+
*
53
+
* If, however, we clear STARTED late, an NMI can hit between clearing the
54
+
* EN bit and clearing STARTED, still see STARTED set and process the event.
55
+
* If this event will have the VALID bit clear, we bail properly, but this
56
+
* is not a given. With VALID set we can end up calling pmu::stop() again
57
+
* (the throttle logic) and trigger the WARNs in there.
58
+
*
59
+
* So what we do is set STOPPING before clearing EN to avoid the pmu::stop()
60
+
* nesting, and clear STARTED late, so that we have a well defined state over
61
+
* the clearing of the EN bit.
62
+
*
63
+
* XXX: we could probably be using !atomic bitops for all this.
64
+
*/
65
+
31
66
enum ibs_states {
32
67
IBS_ENABLED = 0,
33
68
IBS_STARTED = 1,
34
69
IBS_STOPPING = 2,
70
+
IBS_STOPPED = 3,
35
71
36
72
IBS_MAX_STATES,
37
73
};
···
413
377
414
378
perf_ibs_set_period(perf_ibs, hwc, &period);
415
379
/*
416
-
* Set STARTED before enabling the hardware, such that
417
-
* a subsequent NMI must observe it. Then clear STOPPING
418
-
* such that we don't consume NMIs by accident.
380
+
* Set STARTED before enabling the hardware, such that a subsequent NMI
381
+
* must observe it.
419
382
*/
420
-
set_bit(IBS_STARTED, pcpu->state);
383
+
set_bit(IBS_STARTED, pcpu->state);
421
384
clear_bit(IBS_STOPPING, pcpu->state);
422
385
perf_ibs_enable_event(perf_ibs, hwc, period >> 4);
423
386
···
431
396
u64 config;
432
397
int stopping;
433
398
399
+
if (test_and_set_bit(IBS_STOPPING, pcpu->state))
400
+
return;
401
+
434
402
stopping = test_bit(IBS_STARTED, pcpu->state);
435
403
436
404
if (!stopping && (hwc->state & PERF_HES_UPTODATE))
···
443
405
444
406
if (stopping) {
445
407
/*
446
-
* Set STOPPING before disabling the hardware, such that it
408
+
* Set STOPPED before disabling the hardware, such that it
447
409
* must be visible to NMIs the moment we clear the EN bit,
448
410
* at which point we can generate an !VALID sample which
449
411
* we need to consume.
450
412
*/
451
-
set_bit(IBS_STOPPING, pcpu->state);
413
+
set_bit(IBS_STOPPED, pcpu->state);
452
414
perf_ibs_disable_event(perf_ibs, hwc, config);
453
415
/*
454
416
* Clear STARTED after disabling the hardware; if it were
···
594
556
* with samples that even have the valid bit cleared.
595
557
* Mark all this NMIs as handled.
596
558
*/
597
-
if (test_and_clear_bit(IBS_STOPPING, pcpu->state))
559
+
if (test_and_clear_bit(IBS_STOPPED, pcpu->state))
598
560
return 1;
599
561
600
562
return 0;
+8
-3
arch/x86/events/perf_event.h
···
608
608
atomic_t lbr_exclusive[x86_lbr_exclusive_max];
609
609
610
610
/*
611
+
* AMD bits
612
+
*/
613
+
unsigned int amd_nb_constraints : 1;
614
+
615
+
/*
611
616
* Extra registers for events
612
617
*/
613
618
struct extra_reg *extra_regs;
···
800
795
801
796
struct attribute **merge_attr(struct attribute **a, struct attribute **b);
802
797
798
+
ssize_t events_sysfs_show(struct device *dev, struct device_attribute *attr,
799
+
char *page);
800
+
803
801
#ifdef CONFIG_CPU_SUP_AMD
804
802
805
803
int amd_pmu_init(void);
···
932
924
int p6_pmu_init(void);
933
925
934
926
int knc_pmu_init(void);
935
-
936
-
ssize_t events_sysfs_show(struct device *dev, struct device_attribute *attr,
937
-
char *page);
938
927
939
928
static inline int is_ht_workaround_enabled(void)
940
929
{
+1
-7
arch/x86/include/asm/msr-index.h
···
190
190
#define MSR_PP1_ENERGY_STATUS 0x00000641
191
191
#define MSR_PP1_POLICY 0x00000642
192
192
193
+
/* Config TDP MSRs */
193
194
#define MSR_CONFIG_TDP_NOMINAL 0x00000648
194
195
#define MSR_CONFIG_TDP_LEVEL_1 0x00000649
195
196
#define MSR_CONFIG_TDP_LEVEL_2 0x0000064A
···
210
209
#define MSR_CORE_PERF_LIMIT_REASONS 0x00000690
211
210
#define MSR_GFX_PERF_LIMIT_REASONS 0x000006B0
212
211
#define MSR_RING_PERF_LIMIT_REASONS 0x000006B1
213
-
214
-
/* Config TDP MSRs */
215
-
#define MSR_CONFIG_TDP_NOMINAL 0x00000648
216
-
#define MSR_CONFIG_TDP_LEVEL1 0x00000649
217
-
#define MSR_CONFIG_TDP_LEVEL2 0x0000064A
218
-
#define MSR_CONFIG_TDP_CONTROL 0x0000064B
219
-
#define MSR_TURBO_ACTIVATION_RATIO 0x0000064C
220
212
221
213
/* Hardware P state interface */
222
214
#define MSR_PPERF 0x0000064e
+9
arch/x86/include/asm/pmem.h
···
47
47
BUG();
48
48
}
49
49
50
+
static inline int arch_memcpy_from_pmem(void *dst, const void __pmem *src,
51
+
size_t n)
52
+
{
53
+
if (static_cpu_has(X86_FEATURE_MCE_RECOVERY))
54
+
return memcpy_mcsafe(dst, (void __force *) src, n);
55
+
memcpy(dst, (void __force *) src, n);
56
+
return 0;
57
+
}
58
+
50
59
/**
51
60
* arch_wmb_pmem - synchronize writes to persistent memory
52
61
*
-2
arch/x86/include/asm/processor.h
+1
arch/x86/include/asm/smp.h
+2
-4
arch/x86/include/asm/thread_info.h
···
276
276
*/
277
277
#define force_iret() set_thread_flag(TIF_NOTIFY_RESUME)
278
278
279
-
#endif /* !__ASSEMBLY__ */
280
-
281
-
#ifndef __ASSEMBLY__
282
279
extern void arch_task_cache_init(void);
283
280
extern int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src);
284
281
extern void arch_release_task_struct(struct task_struct *tsk);
285
-
#endif
282
+
#endif /* !__ASSEMBLY__ */
283
+
286
284
#endif /* _ASM_X86_THREAD_INFO_H */
-6
arch/x86/include/asm/tlbflush.h
···
319
319
320
320
#endif /* SMP */
321
321
322
-
/* Not inlined due to inc_irq_stat not being defined yet */
323
-
#define flush_tlb_local() { \
324
-
inc_irq_stat(irq_tlb_count); \
325
-
local_flush_tlb(); \
326
-
}
327
-
328
322
#ifndef CONFIG_PARAVIRT
329
323
#define flush_tlb_others(mask, mm, start, end) \
330
324
native_flush_tlb_others(mask, mm, start, end)
+2
-4
arch/x86/kernel/amd_nb.c
···
170
170
{
171
171
struct pci_dev *link = node_to_amd_nb(amd_get_nb_id(cpu))->link;
172
172
unsigned int mask;
173
-
int cuid;
174
173
175
174
if (!amd_nb_has_feature(AMD_NB_L3_PARTITIONING))
176
175
return 0;
177
176
178
177
pci_read_config_dword(link, 0x1d4, &mask);
179
178
180
-
cuid = cpu_data(cpu).compute_unit_id;
181
-
return (mask >> (4 * cuid)) & 0xf;
179
+
return (mask >> (4 * cpu_data(cpu).cpu_core_id)) & 0xf;
182
180
}
183
181
184
182
int amd_set_subcaches(int cpu, unsigned long mask)
···
202
204
pci_write_config_dword(nb->misc, 0x1b8, reg & ~0x180000);
203
205
}
204
206
205
-
cuid = cpu_data(cpu).compute_unit_id;
207
+
cuid = cpu_data(cpu).cpu_core_id;
206
208
mask <<= 4 * cuid;
207
209
mask |= (0xf ^ (1 << cuid)) << 26;
208
210
+4
-8
arch/x86/kernel/cpu/amd.c
···
300
300
#ifdef CONFIG_SMP
301
301
static void amd_get_topology(struct cpuinfo_x86 *c)
302
302
{
303
-
u32 cores_per_cu = 1;
304
303
u8 node_id;
305
304
int cpu = smp_processor_id();
306
305
···
312
313
313
314
/* get compute unit information */
314
315
smp_num_siblings = ((ebx >> 8) & 3) + 1;
315
-
c->compute_unit_id = ebx & 0xff;
316
-
cores_per_cu += ((ebx >> 8) & 3);
316
+
c->x86_max_cores /= smp_num_siblings;
317
+
c->cpu_core_id = ebx & 0xff;
317
318
} else if (cpu_has(c, X86_FEATURE_NODEID_MSR)) {
318
319
u64 value;
319
320
···
324
325
325
326
/* fixup multi-node processor information */
326
327
if (nodes_per_socket > 1) {
327
-
u32 cores_per_node;
328
328
u32 cus_per_node;
329
329
330
330
set_cpu_cap(c, X86_FEATURE_AMD_DCM);
331
-
cores_per_node = c->x86_max_cores / nodes_per_socket;
332
-
cus_per_node = cores_per_node / cores_per_cu;
331
+
cus_per_node = c->x86_max_cores / nodes_per_socket;
333
332
334
333
/* store NodeID, use llc_shared_map to store sibling info */
335
334
per_cpu(cpu_llc_id, cpu) = node_id;
336
335
337
336
/* core id has to be in the [0 .. cores_per_node - 1] range */
338
-
c->cpu_core_id %= cores_per_node;
339
-
c->compute_unit_id %= cus_per_node;
337
+
c->cpu_core_id %= cus_per_node;
340
338
}
341
339
}
342
340
#endif
+3
arch/x86/kernel/cpu/mcheck/therm_throt.c
+2
arch/x86/kernel/cpu/powerflags.c
+1
-1
arch/x86/kernel/smpboot.c
···
422
422
423
423
if (c->phys_proc_id == o->phys_proc_id &&
424
424
per_cpu(cpu_llc_id, cpu1) == per_cpu(cpu_llc_id, cpu2) &&
425
-
c->compute_unit_id == o->compute_unit_id)
425
+
c->cpu_core_id == o->cpu_core_id)
426
426
return topology_sane(c, o, "smt");
427
427
428
428
} else if (c->phys_proc_id == o->phys_proc_id &&
+10
-4
arch/x86/mm/tlb.c
···
104
104
105
105
inc_irq_stat(irq_tlb_count);
106
106
107
-
if (f->flush_mm != this_cpu_read(cpu_tlbstate.active_mm))
107
+
if (f->flush_mm && f->flush_mm != this_cpu_read(cpu_tlbstate.active_mm))
108
108
return;
109
-
if (!f->flush_end)
110
-
f->flush_end = f->flush_start + PAGE_SIZE;
111
109
112
110
count_vm_tlb_event(NR_TLB_REMOTE_FLUSH_RECEIVED);
113
111
if (this_cpu_read(cpu_tlbstate.state) == TLBSTATE_OK) {
···
133
135
unsigned long end)
134
136
{
135
137
struct flush_tlb_info info;
138
+
139
+
if (end == 0)
140
+
end = start + PAGE_SIZE;
136
141
info.flush_mm = mm;
137
142
info.flush_start = start;
138
143
info.flush_end = end;
139
144
140
145
count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
141
-
trace_tlb_flush(TLB_REMOTE_SEND_IPI, end - start);
146
+
if (end == TLB_FLUSH_ALL)
147
+
trace_tlb_flush(TLB_REMOTE_SEND_IPI, TLB_FLUSH_ALL);
148
+
else
149
+
trace_tlb_flush(TLB_REMOTE_SEND_IPI,
150
+
(end - start) >> PAGE_SHIFT);
151
+
142
152
if (is_uv_system()) {
143
153
unsigned int cpu;
144
154
+2
-1
arch/x86/ras/mce_amd_inj.c
···
20
20
#include <linux/pci.h>
21
21
22
22
#include <asm/mce.h>
23
+
#include <asm/smp.h>
23
24
#include <asm/amd_nb.h>
24
25
#include <asm/irq_vectors.h>
25
26
···
207
206
struct cpuinfo_x86 *c = &boot_cpu_data;
208
207
u32 cores_per_node;
209
208
210
-
cores_per_node = c->x86_max_cores / amd_get_nodes_per_socket();
209
+
cores_per_node = (c->x86_max_cores * smp_num_siblings) / amd_get_nodes_per_socket();
211
210
212
211
return cores_per_node * node_id;
213
212
}
+2
crypto/asymmetric_keys/pkcs7_trust.c
+52
drivers/acpi/acpi_processor.c
···
491
491
}
492
492
#endif /* CONFIG_ACPI_HOTPLUG_CPU */
493
493
494
+
#ifdef CONFIG_X86
495
+
static bool acpi_hwp_native_thermal_lvt_set;
496
+
static acpi_status __init acpi_hwp_native_thermal_lvt_osc(acpi_handle handle,
497
+
u32 lvl,
498
+
void *context,
499
+
void **rv)
500
+
{
501
+
u8 sb_uuid_str[] = "4077A616-290C-47BE-9EBD-D87058713953";
502
+
u32 capbuf[2];
503
+
struct acpi_osc_context osc_context = {
504
+
.uuid_str = sb_uuid_str,
505
+
.rev = 1,
506
+
.cap.length = 8,
507
+
.cap.pointer = capbuf,
508
+
};
509
+
510
+
if (acpi_hwp_native_thermal_lvt_set)
511
+
return AE_CTRL_TERMINATE;
512
+
513
+
capbuf[0] = 0x0000;
514
+
capbuf[1] = 0x1000; /* set bit 12 */
515
+
516
+
if (ACPI_SUCCESS(acpi_run_osc(handle, &osc_context))) {
517
+
if (osc_context.ret.pointer && osc_context.ret.length > 1) {
518
+
u32 *capbuf_ret = osc_context.ret.pointer;
519
+
520
+
if (capbuf_ret[1] & 0x1000) {
521
+
acpi_handle_info(handle,
522
+
"_OSC native thermal LVT Acked\n");
523
+
acpi_hwp_native_thermal_lvt_set = true;
524
+
}
525
+
}
526
+
kfree(osc_context.ret.pointer);
527
+
}
528
+
529
+
return AE_OK;
530
+
}
531
+
532
+
void __init acpi_early_processor_osc(void)
533
+
{
534
+
if (boot_cpu_has(X86_FEATURE_HWP)) {
535
+
acpi_walk_namespace(ACPI_TYPE_PROCESSOR, ACPI_ROOT_OBJECT,
536
+
ACPI_UINT32_MAX,
537
+
acpi_hwp_native_thermal_lvt_osc,
538
+
NULL, NULL, NULL);
539
+
acpi_get_devices(ACPI_PROCESSOR_DEVICE_HID,
540
+
acpi_hwp_native_thermal_lvt_osc,
541
+
NULL, NULL);
542
+
}
543
+
}
544
+
#endif
545
+
494
546
/*
495
547
* The following ACPI IDs are known to be suitable for representing as
496
548
* processor devices.
+3
drivers/acpi/bus.c
+6
drivers/acpi/internal.h
···
145
145
static inline void acpi_early_processor_set_pdc(void) {}
146
146
#endif
147
147
148
+
#ifdef CONFIG_X86
149
+
void acpi_early_processor_osc(void);
150
+
#else
151
+
static inline void acpi_early_processor_osc(void) {}
152
+
#endif
153
+
148
154
/* --------------------------------------------------------------------------
149
155
Embedded Controller
150
156
-------------------------------------------------------------------------- */
+1
-1
drivers/clk/mediatek/reset.c
+1
-1
drivers/clk/mmp/reset.c
+35
-35
drivers/clk/qcom/gcc-ipq4019.c
···
129
129
};
130
130
131
131
#define F(f, s, h, m, n) { (f), (s), (2 * (h) - 1), (m), (n) }
132
-
#define P_XO 0
133
-
#define FE_PLL_200 1
134
-
#define FE_PLL_500 2
135
-
#define DDRC_PLL_666 3
136
-
137
-
#define DDRC_PLL_666_SDCC 1
138
-
#define FE_PLL_125_DLY 1
139
-
140
-
#define FE_PLL_WCSS2G 1
141
-
#define FE_PLL_WCSS5G 1
142
132
143
133
static const struct freq_tbl ftbl_gcc_audio_pwm_clk[] = {
144
134
F(48000000, P_XO, 1, 0, 0),
145
-
F(200000000, FE_PLL_200, 1, 0, 0),
135
+
F(200000000, P_FEPLL200, 1, 0, 0),
146
136
{ }
147
137
};
148
138
···
324
334
};
325
335
326
336
static const struct freq_tbl ftbl_gcc_blsp1_uart1_2_apps_clk[] = {
327
-
F(1843200, FE_PLL_200, 1, 144, 15625),
328
-
F(3686400, FE_PLL_200, 1, 288, 15625),
329
-
F(7372800, FE_PLL_200, 1, 576, 15625),
330
-
F(14745600, FE_PLL_200, 1, 1152, 15625),
331
-
F(16000000, FE_PLL_200, 1, 2, 25),
337
+
F(1843200, P_FEPLL200, 1, 144, 15625),
338
+
F(3686400, P_FEPLL200, 1, 288, 15625),
339
+
F(7372800, P_FEPLL200, 1, 576, 15625),
340
+
F(14745600, P_FEPLL200, 1, 1152, 15625),
341
+
F(16000000, P_FEPLL200, 1, 2, 25),
332
342
F(24000000, P_XO, 1, 1, 2),
333
-
F(32000000, FE_PLL_200, 1, 4, 25),
334
-
F(40000000, FE_PLL_200, 1, 1, 5),
335
-
F(46400000, FE_PLL_200, 1, 29, 125),
343
+
F(32000000, P_FEPLL200, 1, 4, 25),
344
+
F(40000000, P_FEPLL200, 1, 1, 5),
345
+
F(46400000, P_FEPLL200, 1, 29, 125),
336
346
F(48000000, P_XO, 1, 0, 0),
337
347
{ }
338
348
};
···
400
410
};
401
411
402
412
static const struct freq_tbl ftbl_gcc_gp_clk[] = {
403
-
F(1250000, FE_PLL_200, 1, 16, 0),
404
-
F(2500000, FE_PLL_200, 1, 8, 0),
405
-
F(5000000, FE_PLL_200, 1, 4, 0),
413
+
F(1250000, P_FEPLL200, 1, 16, 0),
414
+
F(2500000, P_FEPLL200, 1, 8, 0),
415
+
F(5000000, P_FEPLL200, 1, 4, 0),
406
416
{ }
407
417
};
408
418
···
502
512
static const struct freq_tbl ftbl_gcc_sdcc1_apps_clk[] = {
503
513
F(144000, P_XO, 1, 3, 240),
504
514
F(400000, P_XO, 1, 1, 0),
505
-
F(20000000, FE_PLL_500, 1, 1, 25),
506
-
F(25000000, FE_PLL_500, 1, 1, 20),
507
-
F(50000000, FE_PLL_500, 1, 1, 10),
508
-
F(100000000, FE_PLL_500, 1, 1, 5),
509
-
F(193000000, DDRC_PLL_666_SDCC, 1, 0, 0),
515
+
F(20000000, P_FEPLL500, 1, 1, 25),
516
+
F(25000000, P_FEPLL500, 1, 1, 20),
517
+
F(50000000, P_FEPLL500, 1, 1, 10),
518
+
F(100000000, P_FEPLL500, 1, 1, 5),
519
+
F(193000000, P_DDRPLL, 1, 0, 0),
510
520
{ }
511
521
};
512
522
···
526
536
527
537
static const struct freq_tbl ftbl_gcc_apps_clk[] = {
528
538
F(48000000, P_XO, 1, 0, 0),
529
-
F(200000000, FE_PLL_200, 1, 0, 0),
530
-
F(500000000, FE_PLL_500, 1, 0, 0),
531
-
F(626000000, DDRC_PLL_666, 1, 0, 0),
539
+
F(200000000, P_FEPLL200, 1, 0, 0),
540
+
F(500000000, P_FEPLL500, 1, 0, 0),
541
+
F(626000000, P_DDRPLLAPSS, 1, 0, 0),
532
542
{ }
533
543
};
534
544
···
547
557
548
558
static const struct freq_tbl ftbl_gcc_apps_ahb_clk[] = {
549
559
F(48000000, P_XO, 1, 0, 0),
550
-
F(100000000, FE_PLL_200, 2, 0, 0),
560
+
F(100000000, P_FEPLL200, 2, 0, 0),
551
561
{ }
552
562
};
553
563
···
930
940
};
931
941
932
942
static const struct freq_tbl ftbl_gcc_usb30_mock_utmi_clk[] = {
933
-
F(2000000, FE_PLL_200, 10, 0, 0),
943
+
F(2000000, P_FEPLL200, 10, 0, 0),
934
944
{ }
935
945
};
936
946
···
997
1007
};
998
1008
999
1009
static const struct freq_tbl ftbl_gcc_fephy_dly_clk[] = {
1000
-
F(125000000, FE_PLL_125_DLY, 1, 0, 0),
1010
+
F(125000000, P_FEPLL125DLY, 1, 0, 0),
1001
1011
{ }
1002
1012
};
1003
1013
···
1017
1027
1018
1028
static const struct freq_tbl ftbl_gcc_wcss2g_clk[] = {
1019
1029
F(48000000, P_XO, 1, 0, 0),
1020
-
F(250000000, FE_PLL_WCSS2G, 1, 0, 0),
1030
+
F(250000000, P_FEPLLWCSS2G, 1, 0, 0),
1021
1031
{ }
1022
1032
};
1023
1033
···
1087
1097
1088
1098
static const struct freq_tbl ftbl_gcc_wcss5g_clk[] = {
1089
1099
F(48000000, P_XO, 1, 0, 0),
1090
-
F(250000000, FE_PLL_WCSS5G, 1, 0, 0),
1100
+
F(250000000, P_FEPLLWCSS5G, 1, 0, 0),
1091
1101
{ }
1092
1102
};
1093
1103
···
1315
1325
1316
1326
static int gcc_ipq4019_probe(struct platform_device *pdev)
1317
1327
{
1328
+
struct device *dev = &pdev->dev;
1329
+
1330
+
clk_register_fixed_rate(dev, "fepll125", "xo", 0, 200000000);
1331
+
clk_register_fixed_rate(dev, "fepll125dly", "xo", 0, 200000000);
1332
+
clk_register_fixed_rate(dev, "fepllwcss2g", "xo", 0, 200000000);
1333
+
clk_register_fixed_rate(dev, "fepllwcss5g", "xo", 0, 200000000);
1334
+
clk_register_fixed_rate(dev, "fepll200", "xo", 0, 200000000);
1335
+
clk_register_fixed_rate(dev, "fepll500", "xo", 0, 200000000);
1336
+
clk_register_fixed_rate(dev, "ddrpllapss", "xo", 0, 666000000);
1337
+
1318
1338
return qcom_cc_probe(pdev, &gcc_ipq4019_desc);
1319
1339
}
1320
1340
+1
-1
drivers/clk/qcom/reset.c
+1
-1
drivers/clk/qcom/reset.h
+1
-1
drivers/clk/rockchip/softrst.c
+1
-1
drivers/clk/sirf/clk-atlas7.c
+1
-1
drivers/clk/sunxi/clk-a10-ve.c
+1
-1
drivers/clk/sunxi/clk-sun9i-mmc.c
+1
-1
drivers/clk/sunxi/clk-usb.c
+1
-1
drivers/clk/tegra/clk.c
+1
-2
drivers/extcon/extcon-palmas.c
+4
-5
drivers/gpio/gpio-menz127.c
···
37
37
void __iomem *reg_base;
38
38
struct mcb_device *mdev;
39
39
struct resource *mem;
40
-
spinlock_t lock;
41
40
};
42
41
43
42
static int men_z127_debounce(struct gpio_chip *gc, unsigned gpio,
···
68
69
debounce /= 50;
69
70
}
70
71
71
-
spin_lock(&priv->lock);
72
+
spin_lock(&gc->bgpio_lock);
72
73
73
74
db_en = readl(priv->reg_base + MEN_Z127_DBER);
74
75
···
83
84
writel(db_en, priv->reg_base + MEN_Z127_DBER);
84
85
writel(db_cnt, priv->reg_base + GPIO_TO_DBCNT_REG(gpio));
85
86
86
-
spin_unlock(&priv->lock);
87
+
spin_unlock(&gc->bgpio_lock);
87
88
88
89
return 0;
89
90
}
···
96
97
if (gpio_pin >= gc->ngpio)
97
98
return -EINVAL;
98
99
99
-
spin_lock(&priv->lock);
100
+
spin_lock(&gc->bgpio_lock);
100
101
od_en = readl(priv->reg_base + MEN_Z127_ODER);
101
102
102
103
if (gpiochip_line_is_open_drain(gc, gpio_pin))
···
105
106
od_en &= ~BIT(gpio_pin);
106
107
107
108
writel(od_en, priv->reg_base + MEN_Z127_ODER);
108
-
spin_unlock(&priv->lock);
109
+
spin_unlock(&gc->bgpio_lock);
109
110
110
111
return 0;
111
112
}
+5
drivers/gpio/gpio-xgene.c
+6
-2
drivers/gpu/drm/amd/acp/Kconfig
···
1
-
menu "ACP Configuration"
1
+
menu "ACP (Audio CoProcessor) Configuration"
2
2
3
3
config DRM_AMD_ACP
4
-
bool "Enable ACP IP support"
4
+
bool "Enable AMD Audio CoProcessor IP support"
5
5
select MFD_CORE
6
6
select PM_GENERIC_DOMAINS if PM
7
7
help
8
8
Choose this option to enable ACP IP support for AMD SOCs.
9
+
This adds the ACP (Audio CoProcessor) IP driver and wires
10
+
it up into the amdgpu driver. The ACP block provides the DMA
11
+
engine for the i2s-based ALSA driver. It is required for audio
12
+
on APUs which utilize an i2s codec.
9
13
10
14
endmenu
+4
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
···
608
608
if ((offset + size) <= adev->mc.visible_vram_size)
609
609
return 0;
610
610
611
+
/* Can't move a pinned BO to visible VRAM */
612
+
if (abo->pin_count > 0)
613
+
return -EINVAL;
614
+
611
615
/* hurrah the memory is not visible ! */
612
616
amdgpu_ttm_placement_from_domain(abo, AMDGPU_GEM_DOMAIN_VRAM);
613
617
lpfn = adev->mc.visible_vram_size >> PAGE_SHIFT;
+6
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
···
384
384
struct ttm_mem_reg *new_mem)
385
385
{
386
386
struct amdgpu_device *adev;
387
+
struct amdgpu_bo *abo;
387
388
struct ttm_mem_reg *old_mem = &bo->mem;
388
389
int r;
390
+
391
+
/* Can't move a pinned BO */
392
+
abo = container_of(bo, struct amdgpu_bo, tbo);
393
+
if (WARN_ON_ONCE(abo->pin_count > 0))
394
+
return -EINVAL;
389
395
390
396
adev = amdgpu_get_adev(bo->bdev);
391
397
if (old_mem->mem_type == TTM_PL_SYSTEM && bo->ttm == NULL) {
+17
-10
drivers/gpu/drm/drm_dp_helper.c
···
179
179
{
180
180
struct drm_dp_aux_msg msg;
181
181
unsigned int retry;
182
-
int err;
182
+
int err = 0;
183
183
184
184
memset(&msg, 0, sizeof(msg));
185
185
msg.address = offset;
186
186
msg.request = request;
187
187
msg.buffer = buffer;
188
188
msg.size = size;
189
+
190
+
mutex_lock(&aux->hw_mutex);
189
191
190
192
/*
191
193
* The specification doesn't give any recommendation on how often to
···
197
195
*/
198
196
for (retry = 0; retry < 32; retry++) {
199
197
200
-
mutex_lock(&aux->hw_mutex);
201
198
err = aux->transfer(aux, &msg);
202
-
mutex_unlock(&aux->hw_mutex);
203
199
if (err < 0) {
204
200
if (err == -EBUSY)
205
201
continue;
206
202
207
-
return err;
203
+
goto unlock;
208
204
}
209
205
210
206
211
207
switch (msg.reply & DP_AUX_NATIVE_REPLY_MASK) {
212
208
case DP_AUX_NATIVE_REPLY_ACK:
213
209
if (err < size)
214
-
return -EPROTO;
215
-
return err;
210
+
err = -EPROTO;
211
+
goto unlock;
216
212
217
213
case DP_AUX_NATIVE_REPLY_NACK:
218
-
return -EIO;
214
+
err = -EIO;
215
+
goto unlock;
219
216
220
217
case DP_AUX_NATIVE_REPLY_DEFER:
221
218
usleep_range(AUX_RETRY_INTERVAL, AUX_RETRY_INTERVAL + 100);
···
223
222
}
224
223
225
224
DRM_DEBUG_KMS("too many retries, giving up\n");
226
-
return -EIO;
225
+
err = -EIO;
226
+
227
+
unlock:
228
+
mutex_unlock(&aux->hw_mutex);
229
+
return err;
227
230
}
228
231
229
232
/**
···
549
544
int max_retries = max(7, drm_dp_i2c_retry_count(msg, dp_aux_i2c_speed_khz));
550
545
551
546
for (retry = 0, defer_i2c = 0; retry < (max_retries + defer_i2c); retry++) {
552
-
mutex_lock(&aux->hw_mutex);
553
547
ret = aux->transfer(aux, msg);
554
-
mutex_unlock(&aux->hw_mutex);
555
548
if (ret < 0) {
556
549
if (ret == -EBUSY)
557
550
continue;
···
688
685
689
686
memset(&msg, 0, sizeof(msg));
690
687
688
+
mutex_lock(&aux->hw_mutex);
689
+
691
690
for (i = 0; i < num; i++) {
692
691
msg.address = msgs[i].addr;
693
692
drm_dp_i2c_msg_set_request(&msg, &msgs[i]);
···
743
738
msg.buffer = NULL;
744
739
msg.size = 0;
745
740
(void)drm_dp_i2c_do_msg(aux, &msg);
741
+
742
+
mutex_unlock(&aux->hw_mutex);
746
743
747
744
return err;
748
745
}
+1
-1
drivers/gpu/drm/msm/hdmi/hdmi.h
···
196
196
int msm_hdmi_pll_8960_init(struct platform_device *pdev);
197
197
int msm_hdmi_pll_8996_init(struct platform_device *pdev);
198
198
#else
199
-
static inline int msm_hdmi_pll_8960_init(struct platform_device *pdev);
199
+
static inline int msm_hdmi_pll_8960_init(struct platform_device *pdev)
200
200
{
201
201
return -ENODEV;
202
202
}
-3
drivers/gpu/drm/msm/msm_drv.c
-1
drivers/gpu/drm/msm/msm_kms.h
+4
drivers/gpu/drm/radeon/radeon_object.c
···
799
799
if ((offset + size) <= rdev->mc.visible_vram_size)
800
800
return 0;
801
801
802
+
/* Can't move a pinned BO to visible VRAM */
803
+
if (rbo->pin_count > 0)
804
+
return -EINVAL;
805
+
802
806
/* hurrah the memory is not visible ! */
803
807
radeon_ttm_placement_from_domain(rbo, RADEON_GEM_DOMAIN_VRAM);
804
808
lpfn = rdev->mc.visible_vram_size >> PAGE_SHIFT;
+6
drivers/gpu/drm/radeon/radeon_ttm.c
···
397
397
struct ttm_mem_reg *new_mem)
398
398
{
399
399
struct radeon_device *rdev;
400
+
struct radeon_bo *rbo;
400
401
struct ttm_mem_reg *old_mem = &bo->mem;
401
402
int r;
403
+
404
+
/* Can't move a pinned BO */
405
+
rbo = container_of(bo, struct radeon_bo, tbo);
406
+
if (WARN_ON_ONCE(rbo->pin_count > 0))
407
+
return -EINVAL;
402
408
403
409
rdev = radeon_get_rdev(bo->bdev);
404
410
if (old_mem->mem_type == TTM_PL_SYSTEM && bo->ttm == NULL) {
+6
drivers/gpu/drm/radeon/si_dpm.c
···
2926
2926
/* PITCAIRN - https://bugs.freedesktop.org/show_bug.cgi?id=76490 */
2927
2927
{ PCI_VENDOR_ID_ATI, 0x6810, 0x1462, 0x3036, 0, 120000 },
2928
2928
{ PCI_VENDOR_ID_ATI, 0x6811, 0x174b, 0xe271, 0, 120000 },
2929
+
{ PCI_VENDOR_ID_ATI, 0x6811, 0x174b, 0x2015, 0, 120000 },
2929
2930
{ PCI_VENDOR_ID_ATI, 0x6810, 0x174b, 0xe271, 85000, 90000 },
2930
2931
{ PCI_VENDOR_ID_ATI, 0x6811, 0x1462, 0x2015, 0, 120000 },
2931
2932
{ PCI_VENDOR_ID_ATI, 0x6811, 0x1043, 0x2015, 0, 120000 },
2933
+
{ PCI_VENDOR_ID_ATI, 0x6811, 0x148c, 0x2015, 0, 120000 },
2932
2934
{ 0, 0, 0, 0 },
2933
2935
};
2934
2936
···
3010
3008
}
3011
3009
++p;
3012
3010
}
3011
+
/* limit mclk on all R7 370 parts for stability */
3012
+
if (rdev->pdev->device == 0x6811 &&
3013
+
rdev->pdev->revision == 0x81)
3014
+
max_mclk = 120000;
3013
3015
3014
3016
if (rps->vce_active) {
3015
3017
rps->evclk = rdev->pm.dpm.vce_states[rdev->pm.dpm.vce_level].evclk;
+10
-3
drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
···
271
271
if (!iores)
272
272
return -ENXIO;
273
273
274
-
platform_set_drvdata(pdev, hdmi);
275
-
276
274
encoder->possible_crtcs = drm_of_find_possible_crtcs(drm, dev->of_node);
277
275
/*
278
276
* If we failed to find the CRTC(s) which this encoder is
···
291
293
drm_encoder_init(drm, encoder, &dw_hdmi_rockchip_encoder_funcs,
292
294
DRM_MODE_ENCODER_TMDS, NULL);
293
295
294
-
return dw_hdmi_bind(dev, master, data, encoder, iores, irq, plat_data);
296
+
ret = dw_hdmi_bind(dev, master, data, encoder, iores, irq, plat_data);
297
+
298
+
/*
299
+
* If dw_hdmi_bind() fails we'll never call dw_hdmi_unbind(),
300
+
* which would have called the encoder cleanup. Do it manually.
301
+
*/
302
+
if (ret)
303
+
drm_encoder_cleanup(encoder);
304
+
305
+
return ret;
295
306
}
296
307
297
308
static void dw_hdmi_rockchip_unbind(struct device *dev, struct device *master,
+22
drivers/gpu/drm/rockchip/rockchip_drm_drv.c
···
251
251
return 0;
252
252
}
253
253
254
+
static void rockchip_drm_crtc_cancel_pending_vblank(struct drm_crtc *crtc,
255
+
struct drm_file *file_priv)
256
+
{
257
+
struct rockchip_drm_private *priv = crtc->dev->dev_private;
258
+
int pipe = drm_crtc_index(crtc);
259
+
260
+
if (pipe < ROCKCHIP_MAX_CRTC &&
261
+
priv->crtc_funcs[pipe] &&
262
+
priv->crtc_funcs[pipe]->cancel_pending_vblank)
263
+
priv->crtc_funcs[pipe]->cancel_pending_vblank(crtc, file_priv);
264
+
}
265
+
266
+
static void rockchip_drm_preclose(struct drm_device *dev,
267
+
struct drm_file *file_priv)
268
+
{
269
+
struct drm_crtc *crtc;
270
+
271
+
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head)
272
+
rockchip_drm_crtc_cancel_pending_vblank(crtc, file_priv);
273
+
}
274
+
254
275
void rockchip_drm_lastclose(struct drm_device *dev)
255
276
{
256
277
struct rockchip_drm_private *priv = dev->dev_private;
···
302
281
DRIVER_PRIME | DRIVER_ATOMIC,
303
282
.load = rockchip_drm_load,
304
283
.unload = rockchip_drm_unload,
284
+
.preclose = rockchip_drm_preclose,
305
285
.lastclose = rockchip_drm_lastclose,
306
286
.get_vblank_counter = drm_vblank_no_hw_counter,
307
287
.enable_vblank = rockchip_drm_crtc_enable_vblank,
+1
drivers/gpu/drm/rockchip/rockchip_drm_drv.h
···
40
40
int (*enable_vblank)(struct drm_crtc *crtc);
41
41
void (*disable_vblank)(struct drm_crtc *crtc);
42
42
void (*wait_for_update)(struct drm_crtc *crtc);
43
+
void (*cancel_pending_vblank)(struct drm_crtc *crtc, struct drm_file *file_priv);
43
44
};
44
45
45
46
struct rockchip_atomic_commit {
+67
-12
drivers/gpu/drm/rockchip/rockchip_drm_vop.c
···
499
499
static void vop_crtc_disable(struct drm_crtc *crtc)
500
500
{
501
501
struct vop *vop = to_vop(crtc);
502
+
int i;
502
503
503
504
if (!vop->is_enabled)
504
505
return;
506
+
507
+
/*
508
+
* We need to make sure that all windows are disabled before we
509
+
* disable that crtc. Otherwise we might try to scan from a destroyed
510
+
* buffer later.
511
+
*/
512
+
for (i = 0; i < vop->data->win_size; i++) {
513
+
struct vop_win *vop_win = &vop->win[i];
514
+
const struct vop_win_data *win = vop_win->data;
515
+
516
+
spin_lock(&vop->reg_lock);
517
+
VOP_WIN_SET(vop, win, enable, 0);
518
+
spin_unlock(&vop->reg_lock);
519
+
}
505
520
506
521
drm_crtc_vblank_off(crtc);
507
522
···
564
549
struct drm_plane_state *state)
565
550
{
566
551
struct drm_crtc *crtc = state->crtc;
552
+
struct drm_crtc_state *crtc_state;
567
553
struct drm_framebuffer *fb = state->fb;
568
554
struct vop_win *vop_win = to_vop_win(plane);
569
555
struct vop_plane_state *vop_plane_state = to_vop_plane_state(state);
···
579
563
int max_scale = win->phy->scl ? FRAC_16_16(8, 1) :
580
564
DRM_PLANE_HELPER_NO_SCALING;
581
565
582
-
crtc = crtc ? crtc : plane->state->crtc;
583
-
/*
584
-
* Both crtc or plane->state->crtc can be null.
585
-
*/
586
566
if (!crtc || !fb)
587
567
goto out_disable;
568
+
569
+
crtc_state = drm_atomic_get_existing_crtc_state(state->state, crtc);
570
+
if (WARN_ON(!crtc_state))
571
+
return -EINVAL;
572
+
588
573
src->x1 = state->src_x;
589
574
src->y1 = state->src_y;
590
575
src->x2 = state->src_x + state->src_w;
···
597
580
598
581
clip.x1 = 0;
599
582
clip.y1 = 0;
600
-
clip.x2 = crtc->mode.hdisplay;
601
-
clip.y2 = crtc->mode.vdisplay;
583
+
clip.x2 = crtc_state->adjusted_mode.hdisplay;
584
+
clip.y2 = crtc_state->adjusted_mode.vdisplay;
602
585
603
586
ret = drm_plane_helper_check_update(plane, crtc, state->fb,
604
587
src, dest, &clip,
···
890
873
WARN_ON(!wait_for_completion_timeout(&vop->wait_update_complete, 100));
891
874
}
892
875
876
+
static void vop_crtc_cancel_pending_vblank(struct drm_crtc *crtc,
877
+
struct drm_file *file_priv)
878
+
{
879
+
struct drm_device *drm = crtc->dev;
880
+
struct vop *vop = to_vop(crtc);
881
+
struct drm_pending_vblank_event *e;
882
+
unsigned long flags;
883
+
884
+
spin_lock_irqsave(&drm->event_lock, flags);
885
+
e = vop->event;
886
+
if (e && e->base.file_priv == file_priv) {
887
+
vop->event = NULL;
888
+
889
+
e->base.destroy(&e->base);
890
+
file_priv->event_space += sizeof(e->event);
891
+
}
892
+
spin_unlock_irqrestore(&drm->event_lock, flags);
893
+
}
894
+
893
895
static const struct rockchip_crtc_funcs private_crtc_funcs = {
894
896
.enable_vblank = vop_crtc_enable_vblank,
895
897
.disable_vblank = vop_crtc_disable_vblank,
896
898
.wait_for_update = vop_crtc_wait_for_update,
899
+
.cancel_pending_vblank = vop_crtc_cancel_pending_vblank,
897
900
};
898
901
899
902
static bool vop_crtc_mode_fixup(struct drm_crtc *crtc,
···
921
884
struct drm_display_mode *adjusted_mode)
922
885
{
923
886
struct vop *vop = to_vop(crtc);
924
-
925
-
if (adjusted_mode->htotal == 0 || adjusted_mode->vtotal == 0)
926
-
return false;
927
887
928
888
adjusted_mode->clock =
929
889
clk_round_rate(vop->dclk, mode->clock * 1000) / 1000;
···
1142
1108
const struct vop_data *vop_data = vop->data;
1143
1109
struct device *dev = vop->dev;
1144
1110
struct drm_device *drm_dev = vop->drm_dev;
1145
-
struct drm_plane *primary = NULL, *cursor = NULL, *plane;
1111
+
struct drm_plane *primary = NULL, *cursor = NULL, *plane, *tmp;
1146
1112
struct drm_crtc *crtc = &vop->crtc;
1147
1113
struct device_node *port;
1148
1114
int ret;
···
1182
1148
ret = drm_crtc_init_with_planes(drm_dev, crtc, primary, cursor,
1183
1149
&vop_crtc_funcs, NULL);
1184
1150
if (ret)
1185
-
return ret;
1151
+
goto err_cleanup_planes;
1186
1152
1187
1153
drm_crtc_helper_add(crtc, &vop_crtc_helper_funcs);
1188
1154
···
1215
1181
if (!port) {
1216
1182
DRM_ERROR("no port node found in %s\n",
1217
1183
dev->of_node->full_name);
1184
+
ret = -ENOENT;
1218
1185
goto err_cleanup_crtc;
1219
1186
}
1220
1187
···
1229
1194
err_cleanup_crtc:
1230
1195
drm_crtc_cleanup(crtc);
1231
1196
err_cleanup_planes:
1232
-
list_for_each_entry(plane, &drm_dev->mode_config.plane_list, head)
1197
+
list_for_each_entry_safe(plane, tmp, &drm_dev->mode_config.plane_list,
1198
+
head)
1233
1199
drm_plane_cleanup(plane);
1234
1200
return ret;
1235
1201
}
···
1238
1202
static void vop_destroy_crtc(struct vop *vop)
1239
1203
{
1240
1204
struct drm_crtc *crtc = &vop->crtc;
1205
+
struct drm_device *drm_dev = vop->drm_dev;
1206
+
struct drm_plane *plane, *tmp;
1241
1207
1242
1208
rockchip_unregister_crtc_funcs(crtc);
1243
1209
of_node_put(crtc->port);
1210
+
1211
+
/*
1212
+
* We need to cleanup the planes now. Why?
1213
+
*
1214
+
* The planes are "&vop->win[i].base". That means the memory is
1215
+
* all part of the big "struct vop" chunk of memory. That memory
1216
+
* was devm allocated and associated with this component. We need to
1217
+
* free it ourselves before vop_unbind() finishes.
1218
+
*/
1219
+
list_for_each_entry_safe(plane, tmp, &drm_dev->mode_config.plane_list,
1220
+
head)
1221
+
vop_plane_destroy(plane);
1222
+
1223
+
/*
1224
+
* Destroy CRTC after vop_plane_destroy() since vop_disable_plane()
1225
+
* references the CRTC.
1226
+
*/
1244
1227
drm_crtc_cleanup(crtc);
1245
1228
}
1246
1229
+1
-1
drivers/gpu/drm/udl/udl_fb.c
+1
-1
drivers/gpu/drm/udl/udl_gem.c
+6
drivers/hwmon/max1111.c
···
85
85
86
86
int max1111_read_channel(int channel)
87
87
{
88
+
if (!the_max1111 || !the_max1111->spi)
89
+
return -ENODEV;
90
+
88
91
return max1111_read(&the_max1111->spi->dev, channel);
89
92
}
90
93
EXPORT_SYMBOL(max1111_read_channel);
···
261
258
{
262
259
struct max1111_data *data = spi_get_drvdata(spi);
263
260
261
+
#ifdef CONFIG_SHARPSL_PM
262
+
the_max1111 = NULL;
263
+
#endif
264
264
hwmon_device_unregister(data->hwmon_dev);
265
265
sysfs_remove_group(&spi->dev.kobj, &max1110_attr_group);
266
266
sysfs_remove_group(&spi->dev.kobj, &max1111_attr_group);
+1
-1
drivers/ide/icside.c
···
451
451
return ret;
452
452
}
453
453
454
-
static const struct ide_port_info icside_v6_port_info __initconst = {
454
+
static const struct ide_port_info icside_v6_port_info = {
455
455
.init_dma = icside_dma_off_init,
456
456
.port_ops = &icside_v6_no_dma_port_ops,
457
457
.host_flags = IDE_HFLAG_SERIALIZE | IDE_HFLAG_MMIO,
+2
drivers/ide/palm_bk3710.c
+4
-35
drivers/infiniband/ulp/isert/ib_isert.c
···
63
63
struct rdma_cm_id *isert_setup_id(struct isert_np *isert_np);
64
64
65
65
static void isert_release_work(struct work_struct *work);
66
-
static void isert_wait4flush(struct isert_conn *isert_conn);
67
66
static void isert_recv_done(struct ib_cq *cq, struct ib_wc *wc);
68
67
static void isert_send_done(struct ib_cq *cq, struct ib_wc *wc);
69
68
static void isert_login_recv_done(struct ib_cq *cq, struct ib_wc *wc);
···
140
141
attr.qp_context = isert_conn;
141
142
attr.send_cq = comp->cq;
142
143
attr.recv_cq = comp->cq;
143
-
attr.cap.max_send_wr = ISERT_QP_MAX_REQ_DTOS;
144
+
attr.cap.max_send_wr = ISERT_QP_MAX_REQ_DTOS + 1;
144
145
attr.cap.max_recv_wr = ISERT_QP_MAX_RECV_DTOS + 1;
145
146
attr.cap.max_send_sge = device->ib_device->attrs.max_sge;
146
147
isert_conn->max_sge = min(device->ib_device->attrs.max_sge,
···
886
887
break;
887
888
case ISER_CONN_UP:
888
889
isert_conn_terminate(isert_conn);
889
-
isert_wait4flush(isert_conn);
890
+
ib_drain_qp(isert_conn->qp);
890
891
isert_handle_unbound_conn(isert_conn);
891
892
break;
892
893
case ISER_CONN_BOUND:
···
3212
3213
}
3213
3214
}
3214
3215
3215
-
static void
3216
-
isert_beacon_done(struct ib_cq *cq, struct ib_wc *wc)
3217
-
{
3218
-
struct isert_conn *isert_conn = wc->qp->qp_context;
3219
-
3220
-
isert_print_wc(wc, "beacon");
3221
-
3222
-
isert_info("conn %p completing wait_comp_err\n", isert_conn);
3223
-
complete(&isert_conn->wait_comp_err);
3224
-
}
3225
-
3226
-
static void
3227
-
isert_wait4flush(struct isert_conn *isert_conn)
3228
-
{
3229
-
struct ib_recv_wr *bad_wr;
3230
-
static struct ib_cqe cqe = { .done = isert_beacon_done };
3231
-
3232
-
isert_info("conn %p\n", isert_conn);
3233
-
3234
-
init_completion(&isert_conn->wait_comp_err);
3235
-
isert_conn->beacon.wr_cqe = &cqe;
3236
-
/* post an indication that all flush errors were consumed */
3237
-
if (ib_post_recv(isert_conn->qp, &isert_conn->beacon, &bad_wr)) {
3238
-
isert_err("conn %p failed to post beacon", isert_conn);
3239
-
return;
3240
-
}
3241
-
3242
-
wait_for_completion(&isert_conn->wait_comp_err);
3243
-
}
3244
-
3245
3216
/**
3246
3217
* isert_put_unsol_pending_cmds() - Drop commands waiting for
3247
3218
* unsolicitate dataout
···
3257
3288
isert_conn_terminate(isert_conn);
3258
3289
mutex_unlock(&isert_conn->mutex);
3259
3290
3260
-
isert_wait4flush(isert_conn);
3291
+
ib_drain_qp(isert_conn->qp);
3261
3292
isert_put_unsol_pending_cmds(conn);
3262
3293
isert_wait4cmds(conn);
3263
3294
isert_wait4logout(isert_conn);
···
3269
3300
{
3270
3301
struct isert_conn *isert_conn = conn->context;
3271
3302
3272
-
isert_wait4flush(isert_conn);
3303
+
ib_drain_qp(isert_conn->qp);
3273
3304
isert_put_conn(isert_conn);
3274
3305
}
3275
3306
-2
drivers/infiniband/ulp/isert/ib_isert.h
···
209
209
struct ib_qp *qp;
210
210
struct isert_device *device;
211
211
struct mutex mutex;
212
-
struct completion wait_comp_err;
213
212
struct kref kref;
214
213
struct list_head fr_pool;
215
214
int fr_pool_size;
216
215
/* lock to protect fastreg pool */
217
216
spinlock_t pool_lock;
218
217
struct work_struct release_work;
219
-
struct ib_recv_wr beacon;
220
218
bool logout_posted;
221
219
bool snd_w_inv;
222
220
};
+10
-5
drivers/isdn/hisax/isac.c
···
215
215
if (count == 0)
216
216
count = 32;
217
217
isac_empty_fifo(cs, count);
218
-
if ((count = cs->rcvidx) > 0) {
218
+
count = cs->rcvidx;
219
+
if (count > 0) {
219
220
cs->rcvidx = 0;
220
-
if (!(skb = alloc_skb(count, GFP_ATOMIC)))
221
+
skb = alloc_skb(count, GFP_ATOMIC);
222
+
if (!skb)
221
223
printk(KERN_WARNING "HiSax: D receive out of memory\n");
222
224
else {
223
225
memcpy(skb_put(skb, count), cs->rcvbuf, count);
···
253
251
cs->tx_skb = NULL;
254
252
}
255
253
}
256
-
if ((cs->tx_skb = skb_dequeue(&cs->sq))) {
254
+
cs->tx_skb = skb_dequeue(&cs->sq);
255
+
if (cs->tx_skb) {
257
256
cs->tx_cnt = 0;
258
257
isac_fill_fifo(cs);
259
258
} else
···
316
313
#if ARCOFI_USE
317
314
if (v1 & 0x08) {
318
315
if (!cs->dc.isac.mon_rx) {
319
-
if (!(cs->dc.isac.mon_rx = kmalloc(MAX_MON_FRAME, GFP_ATOMIC))) {
316
+
cs->dc.isac.mon_rx = kmalloc(MAX_MON_FRAME, GFP_ATOMIC);
317
+
if (!cs->dc.isac.mon_rx) {
320
318
if (cs->debug & L1_DEB_WARN)
321
319
debugl1(cs, "ISAC MON RX out of memory!");
322
320
cs->dc.isac.mocr &= 0xf0;
···
347
343
afterMONR0:
348
344
if (v1 & 0x80) {
349
345
if (!cs->dc.isac.mon_rx) {
350
-
if (!(cs->dc.isac.mon_rx = kmalloc(MAX_MON_FRAME, GFP_ATOMIC))) {
346
+
cs->dc.isac.mon_rx = kmalloc(MAX_MON_FRAME, GFP_ATOMIC);
347
+
if (!cs->dc.isac.mon_rx) {
351
348
if (cs->debug & L1_DEB_WARN)
352
349
debugl1(cs, "ISAC MON RX out of memory!");
353
350
cs->dc.isac.mocr &= 0x0f;
+1
-1
drivers/media/v4l2-core/v4l2-mc.c
···
34
34
{
35
35
struct media_entity *entity;
36
36
struct media_entity *if_vid = NULL, *if_aud = NULL;
37
-
struct media_entity *tuner = NULL, *decoder = NULL, *dtv_demod = NULL;
37
+
struct media_entity *tuner = NULL, *decoder = NULL;
38
38
struct media_entity *io_v4l = NULL, *io_vbi = NULL, *io_swradio = NULL;
39
39
bool is_webcam = false;
40
40
u32 flags;
+72
-13
drivers/net/dsa/mv88e6xxx.c
···
2264
2264
mutex_unlock(&ps->smi_mutex);
2265
2265
}
2266
2266
2267
+
static int _mv88e6xxx_phy_page_write(struct dsa_switch *ds, int port, int page,
2268
+
int reg, int val)
2269
+
{
2270
+
int ret;
2271
+
2272
+
ret = _mv88e6xxx_phy_write_indirect(ds, port, 0x16, page);
2273
+
if (ret < 0)
2274
+
goto restore_page_0;
2275
+
2276
+
ret = _mv88e6xxx_phy_write_indirect(ds, port, reg, val);
2277
+
restore_page_0:
2278
+
_mv88e6xxx_phy_write_indirect(ds, port, 0x16, 0x0);
2279
+
2280
+
return ret;
2281
+
}
2282
+
2283
+
static int _mv88e6xxx_phy_page_read(struct dsa_switch *ds, int port, int page,
2284
+
int reg)
2285
+
{
2286
+
int ret;
2287
+
2288
+
ret = _mv88e6xxx_phy_write_indirect(ds, port, 0x16, page);
2289
+
if (ret < 0)
2290
+
goto restore_page_0;
2291
+
2292
+
ret = _mv88e6xxx_phy_read_indirect(ds, port, reg);
2293
+
restore_page_0:
2294
+
_mv88e6xxx_phy_write_indirect(ds, port, 0x16, 0x0);
2295
+
2296
+
return ret;
2297
+
}
2298
+
2299
+
static int mv88e6xxx_power_on_serdes(struct dsa_switch *ds)
2300
+
{
2301
+
int ret;
2302
+
2303
+
ret = _mv88e6xxx_phy_page_read(ds, REG_FIBER_SERDES, PAGE_FIBER_SERDES,
2304
+
MII_BMCR);
2305
+
if (ret < 0)
2306
+
return ret;
2307
+
2308
+
if (ret & BMCR_PDOWN) {
2309
+
ret &= ~BMCR_PDOWN;
2310
+
ret = _mv88e6xxx_phy_page_write(ds, REG_FIBER_SERDES,
2311
+
PAGE_FIBER_SERDES, MII_BMCR,
2312
+
ret);
2313
+
}
2314
+
2315
+
return ret;
2316
+
}
2317
+
2267
2318
static int mv88e6xxx_setup_port(struct dsa_switch *ds, int port)
2268
2319
{
2269
2320
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
···
2416
2365
PORT_CONTROL, reg);
2417
2366
if (ret)
2418
2367
goto abort;
2368
+
}
2369
+
2370
+
/* If this port is connected to a SerDes, make sure the SerDes is not
2371
+
* powered down.
2372
+
*/
2373
+
if (mv88e6xxx_6352_family(ds)) {
2374
+
ret = _mv88e6xxx_reg_read(ds, REG_PORT(port), PORT_STATUS);
2375
+
if (ret < 0)
2376
+
goto abort;
2377
+
ret &= PORT_STATUS_CMODE_MASK;
2378
+
if ((ret == PORT_STATUS_CMODE_100BASE_X) ||
2379
+
(ret == PORT_STATUS_CMODE_1000BASE_X) ||
2380
+
(ret == PORT_STATUS_CMODE_SGMII)) {
2381
+
ret = mv88e6xxx_power_on_serdes(ds);
2382
+
if (ret < 0)
2383
+
goto abort;
2384
+
}
2419
2385
}
2420
2386
2421
2387
/* Port Control 2: don't force a good FCS, set the maximum frame size to
···
2782
2714
int ret;
2783
2715
2784
2716
mutex_lock(&ps->smi_mutex);
2785
-
ret = _mv88e6xxx_phy_write_indirect(ds, port, 0x16, page);
2786
-
if (ret < 0)
2787
-
goto error;
2788
-
ret = _mv88e6xxx_phy_read_indirect(ds, port, reg);
2789
-
error:
2790
-
_mv88e6xxx_phy_write_indirect(ds, port, 0x16, 0x0);
2717
+
ret = _mv88e6xxx_phy_page_read(ds, port, page, reg);
2791
2718
mutex_unlock(&ps->smi_mutex);
2719
+
2792
2720
return ret;
2793
2721
}
2794
2722
···
2795
2731
int ret;
2796
2732
2797
2733
mutex_lock(&ps->smi_mutex);
2798
-
ret = _mv88e6xxx_phy_write_indirect(ds, port, 0x16, page);
2799
-
if (ret < 0)
2800
-
goto error;
2801
-
2802
-
ret = _mv88e6xxx_phy_write_indirect(ds, port, reg, val);
2803
-
error:
2804
-
_mv88e6xxx_phy_write_indirect(ds, port, 0x16, 0x0);
2734
+
ret = _mv88e6xxx_phy_page_write(ds, port, page, reg, val);
2805
2735
mutex_unlock(&ps->smi_mutex);
2736
+
2806
2737
return ret;
2807
2738
}
2808
2739
+8
drivers/net/dsa/mv88e6xxx.h
···
28
28
#define SMI_CMD_OP_45_READ_DATA_INC ((3 << 10) | SMI_CMD_BUSY)
29
29
#define SMI_DATA 0x01
30
30
31
+
/* Fiber/SERDES Registers are located at SMI address F, page 1 */
32
+
#define REG_FIBER_SERDES 0x0f
33
+
#define PAGE_FIBER_SERDES 0x01
34
+
31
35
#define REG_PORT(p) (0x10 + (p))
32
36
#define PORT_STATUS 0x00
33
37
#define PORT_STATUS_PAUSE_EN BIT(15)
···
49
45
#define PORT_STATUS_MGMII BIT(6) /* 6185 */
50
46
#define PORT_STATUS_TX_PAUSED BIT(5)
51
47
#define PORT_STATUS_FLOW_CTRL BIT(4)
48
+
#define PORT_STATUS_CMODE_MASK 0x0f
49
+
#define PORT_STATUS_CMODE_100BASE_X 0x8
50
+
#define PORT_STATUS_CMODE_1000BASE_X 0x9
51
+
#define PORT_STATUS_CMODE_SGMII 0xa
52
52
#define PORT_PCS_CTRL 0x01
53
53
#define PORT_PCS_CTRL_RGMII_DELAY_RXCLK BIT(15)
54
54
#define PORT_PCS_CTRL_RGMII_DELAY_TXCLK BIT(14)
+7
-3
drivers/net/ethernet/broadcom/bnxt/bnxt.c
···
2653
2653
/* Write request msg to hwrm channel */
2654
2654
__iowrite32_copy(bp->bar0, data, msg_len / 4);
2655
2655
2656
-
for (i = msg_len; i < HWRM_MAX_REQ_LEN; i += 4)
2656
+
for (i = msg_len; i < BNXT_HWRM_MAX_REQ_LEN; i += 4)
2657
2657
writel(0, bp->bar0 + i);
2658
2658
2659
2659
/* currently supports only one outstanding message */
···
3391
3391
struct bnxt_cp_ring_info *cpr = &bnapi->cp_ring;
3392
3392
struct bnxt_ring_struct *ring = &cpr->cp_ring_struct;
3393
3393
3394
+
cpr->cp_doorbell = bp->bar1 + i * 0x80;
3394
3395
rc = hwrm_ring_alloc_send_msg(bp, ring, HWRM_RING_ALLOC_CMPL, i,
3395
3396
INVALID_STATS_CTX_ID);
3396
3397
if (rc)
3397
3398
goto err_out;
3398
-
cpr->cp_doorbell = bp->bar1 + i * 0x80;
3399
3399
BNXT_CP_DB(cpr->cp_doorbell, cpr->cp_raw_cons);
3400
3400
bp->grp_info[i].cp_fw_ring_id = ring->fw_ring_id;
3401
3401
}
···
3830
3830
struct hwrm_ver_get_input req = {0};
3831
3831
struct hwrm_ver_get_output *resp = bp->hwrm_cmd_resp_addr;
3832
3832
3833
+
bp->hwrm_max_req_len = HWRM_MAX_REQ_LEN;
3833
3834
bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_VER_GET, -1, -1);
3834
3835
req.hwrm_intf_maj = HWRM_VERSION_MAJOR;
3835
3836
req.hwrm_intf_min = HWRM_VERSION_MINOR;
···
3855
3854
bp->hwrm_cmd_timeout = le16_to_cpu(resp->def_req_timeout);
3856
3855
if (!bp->hwrm_cmd_timeout)
3857
3856
bp->hwrm_cmd_timeout = DFLT_HWRM_CMD_TIMEOUT;
3857
+
3858
+
if (resp->hwrm_intf_maj >= 1)
3859
+
bp->hwrm_max_req_len = le16_to_cpu(resp->max_req_win_len);
3858
3860
3859
3861
hwrm_ver_get_exit:
3860
3862
mutex_unlock(&bp->hwrm_cmd_lock);
···
4559
4555
if (bp->link_info.req_flow_ctrl & BNXT_LINK_PAUSE_RX)
4560
4556
req->auto_pause |= PORT_PHY_CFG_REQ_AUTO_PAUSE_RX;
4561
4557
if (bp->link_info.req_flow_ctrl & BNXT_LINK_PAUSE_TX)
4562
-
req->auto_pause |= PORT_PHY_CFG_REQ_AUTO_PAUSE_RX;
4558
+
req->auto_pause |= PORT_PHY_CFG_REQ_AUTO_PAUSE_TX;
4563
4559
req->enables |=
4564
4560
cpu_to_le32(PORT_PHY_CFG_REQ_ENABLES_AUTO_PAUSE);
4565
4561
} else {
+2
drivers/net/ethernet/broadcom/bnxt/bnxt.h
···
477
477
#define RING_CMP(idx) ((idx) & bp->cp_ring_mask)
478
478
#define NEXT_CMP(idx) RING_CMP(ADV_RAW_CMP(idx, 1))
479
479
480
+
#define BNXT_HWRM_MAX_REQ_LEN (bp->hwrm_max_req_len)
480
481
#define DFLT_HWRM_CMD_TIMEOUT 500
481
482
#define HWRM_CMD_TIMEOUT (bp->hwrm_cmd_timeout)
482
483
#define HWRM_RESET_TIMEOUT ((HWRM_CMD_TIMEOUT) * 4)
···
954
953
dma_addr_t hw_tx_port_stats_map;
955
954
int hw_port_stats_size;
956
955
956
+
u16 hwrm_max_req_len;
957
957
int hwrm_cmd_timeout;
958
958
struct mutex hwrm_cmd_lock; /* serialize hwrm messages */
959
959
struct hwrm_ver_get_output ver_resp;
+2
-4
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
···
855
855
if (BNXT_VF(bp))
856
856
return;
857
857
epause->autoneg = !!(link_info->autoneg & BNXT_AUTONEG_FLOW_CTRL);
858
-
epause->rx_pause =
859
-
((link_info->auto_pause_setting & BNXT_LINK_PAUSE_RX) != 0);
860
-
epause->tx_pause =
861
-
((link_info->auto_pause_setting & BNXT_LINK_PAUSE_TX) != 0);
858
+
epause->rx_pause = !!(link_info->req_flow_ctrl & BNXT_LINK_PAUSE_RX);
859
+
epause->tx_pause = !!(link_info->req_flow_ctrl & BNXT_LINK_PAUSE_TX);
862
860
}
863
861
864
862
static int bnxt_set_pauseparam(struct net_device *dev,
+11
-5
drivers/net/ethernet/broadcom/genet/bcmgenet.c
···
1171
1171
struct enet_cb *tx_cb_ptr;
1172
1172
struct netdev_queue *txq;
1173
1173
unsigned int pkts_compl = 0;
1174
+
unsigned int bytes_compl = 0;
1174
1175
unsigned int c_index;
1175
1176
unsigned int txbds_ready;
1176
1177
unsigned int txbds_processed = 0;
···
1194
1193
tx_cb_ptr = &priv->tx_cbs[ring->clean_ptr];
1195
1194
if (tx_cb_ptr->skb) {
1196
1195
pkts_compl++;
1197
-
dev->stats.tx_packets++;
1198
-
dev->stats.tx_bytes += tx_cb_ptr->skb->len;
1196
+
bytes_compl += GENET_CB(tx_cb_ptr->skb)->bytes_sent;
1199
1197
dma_unmap_single(&dev->dev,
1200
1198
dma_unmap_addr(tx_cb_ptr, dma_addr),
1201
1199
dma_unmap_len(tx_cb_ptr, dma_len),
1202
1200
DMA_TO_DEVICE);
1203
1201
bcmgenet_free_cb(tx_cb_ptr);
1204
1202
} else if (dma_unmap_addr(tx_cb_ptr, dma_addr)) {
1205
-
dev->stats.tx_bytes +=
1206
-
dma_unmap_len(tx_cb_ptr, dma_len);
1207
1203
dma_unmap_page(&dev->dev,
1208
1204
dma_unmap_addr(tx_cb_ptr, dma_addr),
1209
1205
dma_unmap_len(tx_cb_ptr, dma_len),
···
1217
1219
1218
1220
ring->free_bds += txbds_processed;
1219
1221
ring->c_index = (ring->c_index + txbds_processed) & DMA_C_INDEX_MASK;
1222
+
1223
+
dev->stats.tx_packets += pkts_compl;
1224
+
dev->stats.tx_bytes += bytes_compl;
1220
1225
1221
1226
if (ring->free_bds > (MAX_SKB_FRAGS + 1)) {
1222
1227
txq = netdev_get_tx_queue(dev, ring->queue);
···
1297
1296
1298
1297
tx_cb_ptr->skb = skb;
1299
1298
1300
-
skb_len = skb_headlen(skb) < ETH_ZLEN ? ETH_ZLEN : skb_headlen(skb);
1299
+
skb_len = skb_headlen(skb);
1301
1300
1302
1301
mapping = dma_map_single(kdev, skb->data, skb_len, DMA_TO_DEVICE);
1303
1302
ret = dma_mapping_error(kdev, mapping);
···
1464
1463
ret = NETDEV_TX_OK;
1465
1464
goto out;
1466
1465
}
1466
+
1467
+
/* Retain how many bytes will be sent on the wire, without TSB inserted
1468
+
* by transmit checksum offload
1469
+
*/
1470
+
GENET_CB(skb)->bytes_sent = skb->len;
1467
1471
1468
1472
/* set the SKB transmit checksum */
1469
1473
if (priv->desc_64b_en) {
+6
drivers/net/ethernet/broadcom/genet/bcmgenet.h
···
531
531
u32 flags;
532
532
};
533
533
534
+
struct bcmgenet_skb_cb {
535
+
unsigned int bytes_sent; /* bytes on the wire (no TSB) */
536
+
};
537
+
538
+
#define GENET_CB(skb) ((struct bcmgenet_skb_cb *)((skb)->cb))
539
+
534
540
struct bcmgenet_tx_ring {
535
541
spinlock_t lock; /* ring lock */
536
542
struct napi_struct napi; /* NAPI per tx queue */
+55
-14
drivers/net/ethernet/cadence/macb.c
···
917
917
unsigned int frag_len = bp->rx_buffer_size;
918
918
919
919
if (offset + frag_len > len) {
920
-
BUG_ON(frag != last_frag);
920
+
if (unlikely(frag != last_frag)) {
921
+
dev_kfree_skb_any(skb);
922
+
return -1;
923
+
}
921
924
frag_len = len - offset;
922
925
}
923
926
skb_copy_to_linear_data_offset(skb, offset,
···
948
945
return 0;
949
946
}
950
947
948
+
static inline void macb_init_rx_ring(struct macb *bp)
949
+
{
950
+
dma_addr_t addr;
951
+
int i;
952
+
953
+
addr = bp->rx_buffers_dma;
954
+
for (i = 0; i < RX_RING_SIZE; i++) {
955
+
bp->rx_ring[i].addr = addr;
956
+
bp->rx_ring[i].ctrl = 0;
957
+
addr += bp->rx_buffer_size;
958
+
}
959
+
bp->rx_ring[RX_RING_SIZE - 1].addr |= MACB_BIT(RX_WRAP);
960
+
}
961
+
951
962
static int macb_rx(struct macb *bp, int budget)
952
963
{
964
+
bool reset_rx_queue = false;
953
965
int received = 0;
954
966
unsigned int tail;
955
967
int first_frag = -1;
···
990
972
991
973
if (ctrl & MACB_BIT(RX_EOF)) {
992
974
int dropped;
993
-
BUG_ON(first_frag == -1);
975
+
976
+
if (unlikely(first_frag == -1)) {
977
+
reset_rx_queue = true;
978
+
continue;
979
+
}
994
980
995
981
dropped = macb_rx_frame(bp, first_frag, tail);
996
982
first_frag = -1;
983
+
if (unlikely(dropped < 0)) {
984
+
reset_rx_queue = true;
985
+
continue;
986
+
}
997
987
if (!dropped) {
998
988
received++;
999
989
budget--;
1000
990
}
1001
991
}
992
+
}
993
+
994
+
if (unlikely(reset_rx_queue)) {
995
+
unsigned long flags;
996
+
u32 ctrl;
997
+
998
+
netdev_err(bp->dev, "RX queue corruption: reset it\n");
999
+
1000
+
spin_lock_irqsave(&bp->lock, flags);
1001
+
1002
+
ctrl = macb_readl(bp, NCR);
1003
+
macb_writel(bp, NCR, ctrl & ~MACB_BIT(RE));
1004
+
1005
+
macb_init_rx_ring(bp);
1006
+
macb_writel(bp, RBQP, bp->rx_ring_dma);
1007
+
1008
+
macb_writel(bp, NCR, ctrl | MACB_BIT(RE));
1009
+
1010
+
spin_unlock_irqrestore(&bp->lock, flags);
1011
+
return received;
1002
1012
}
1003
1013
1004
1014
if (first_frag != -1)
···
1146
1100
macb_writel(bp, NCR, ctrl | MACB_BIT(RE));
1147
1101
1148
1102
if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
1149
-
macb_writel(bp, ISR, MACB_BIT(RXUBR));
1103
+
queue_writel(queue, ISR, MACB_BIT(RXUBR));
1150
1104
}
1151
1105
1152
1106
if (status & MACB_BIT(ISR_ROVR)) {
···
1569
1523
static void macb_init_rings(struct macb *bp)
1570
1524
{
1571
1525
int i;
1572
-
dma_addr_t addr;
1573
1526
1574
-
addr = bp->rx_buffers_dma;
1575
-
for (i = 0; i < RX_RING_SIZE; i++) {
1576
-
bp->rx_ring[i].addr = addr;
1577
-
bp->rx_ring[i].ctrl = 0;
1578
-
addr += bp->rx_buffer_size;
1579
-
}
1580
-
bp->rx_ring[RX_RING_SIZE - 1].addr |= MACB_BIT(RX_WRAP);
1527
+
macb_init_rx_ring(bp);
1581
1528
1582
1529
for (i = 0; i < TX_RING_SIZE; i++) {
1583
1530
bp->queues[0].tx_ring[i].addr = 0;
···
2996
2957
phy_node = of_get_next_available_child(np, NULL);
2997
2958
if (phy_node) {
2998
2959
int gpio = of_get_named_gpio(phy_node, "reset-gpios", 0);
2999
-
if (gpio_is_valid(gpio))
2960
+
if (gpio_is_valid(gpio)) {
3000
2961
bp->reset_gpio = gpio_to_desc(gpio);
3001
-
gpiod_direction_output(bp->reset_gpio, 1);
2962
+
gpiod_direction_output(bp->reset_gpio, 1);
2963
+
}
3002
2964
}
3003
2965
of_node_put(phy_node);
3004
2966
···
3069
3029
mdiobus_free(bp->mii_bus);
3070
3030
3071
3031
/* Shutdown the PHY if there is a GPIO reset */
3072
-
gpiod_set_value(bp->reset_gpio, 0);
3032
+
if (bp->reset_gpio)
3033
+
gpiod_set_value(bp->reset_gpio, 0);
3073
3034
3074
3035
unregister_netdev(dev);
3075
3036
clk_disable_unprepare(bp->tx_clk);
+1
-1
drivers/net/ethernet/freescale/fec_main.c
+1
-1
drivers/net/ethernet/hisilicon/hns/hnae.h
···
469
469
u32 *tx_usecs, u32 *rx_usecs);
470
470
void (*get_rx_max_coalesced_frames)(struct hnae_handle *handle,
471
471
u32 *tx_frames, u32 *rx_frames);
472
-
void (*set_coalesce_usecs)(struct hnae_handle *handle, u32 timeout);
472
+
int (*set_coalesce_usecs)(struct hnae_handle *handle, u32 timeout);
473
473
int (*set_coalesce_frames)(struct hnae_handle *handle,
474
474
u32 coalesce_frames);
475
475
void (*set_promisc_mode)(struct hnae_handle *handle, u32 en);
+23
-41
drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c
···
159
159
ae_handle->qs[i]->tx_ring.q = ae_handle->qs[i];
160
160
161
161
ring_pair_cb->used_by_vf = 1;
162
-
if (port_idx < DSAF_SERVICE_PORT_NUM_PER_DSAF)
163
-
ring_pair_cb->port_id_in_dsa = port_idx;
164
-
else
165
-
ring_pair_cb->port_id_in_dsa = 0;
166
-
167
162
ring_pair_cb++;
168
163
}
169
164
···
448
453
static void hns_ae_get_coalesce_usecs(struct hnae_handle *handle,
449
454
u32 *tx_usecs, u32 *rx_usecs)
450
455
{
451
-
int port;
456
+
struct ring_pair_cb *ring_pair =
457
+
container_of(handle->qs[0], struct ring_pair_cb, q);
452
458
453
-
port = hns_ae_map_eport_to_dport(handle->eport_id);
454
-
455
-
*tx_usecs = hns_rcb_get_coalesce_usecs(
456
-
hns_ae_get_dsaf_dev(handle->dev),
457
-
hns_dsaf_get_comm_idx_by_port(port));
458
-
*rx_usecs = hns_rcb_get_coalesce_usecs(
459
-
hns_ae_get_dsaf_dev(handle->dev),
460
-
hns_dsaf_get_comm_idx_by_port(port));
459
+
*tx_usecs = hns_rcb_get_coalesce_usecs(ring_pair->rcb_common,
460
+
ring_pair->port_id_in_comm);
461
+
*rx_usecs = hns_rcb_get_coalesce_usecs(ring_pair->rcb_common,
462
+
ring_pair->port_id_in_comm);
461
463
}
462
464
463
465
static void hns_ae_get_rx_max_coalesced_frames(struct hnae_handle *handle,
464
466
u32 *tx_frames, u32 *rx_frames)
465
467
{
466
-
int port;
468
+
struct ring_pair_cb *ring_pair =
469
+
container_of(handle->qs[0], struct ring_pair_cb, q);
467
470
468
-
assert(handle);
469
-
470
-
port = hns_ae_map_eport_to_dport(handle->eport_id);
471
-
472
-
*tx_frames = hns_rcb_get_coalesced_frames(
473
-
hns_ae_get_dsaf_dev(handle->dev), port);
474
-
*rx_frames = hns_rcb_get_coalesced_frames(
475
-
hns_ae_get_dsaf_dev(handle->dev), port);
471
+
*tx_frames = hns_rcb_get_coalesced_frames(ring_pair->rcb_common,
472
+
ring_pair->port_id_in_comm);
473
+
*rx_frames = hns_rcb_get_coalesced_frames(ring_pair->rcb_common,
474
+
ring_pair->port_id_in_comm);
476
475
}
477
476
478
-
static void hns_ae_set_coalesce_usecs(struct hnae_handle *handle,
479
-
u32 timeout)
477
+
static int hns_ae_set_coalesce_usecs(struct hnae_handle *handle,
478
+
u32 timeout)
480
479
{
481
-
int port;
480
+
struct ring_pair_cb *ring_pair =
481
+
container_of(handle->qs[0], struct ring_pair_cb, q);
482
482
483
-
assert(handle);
484
-
485
-
port = hns_ae_map_eport_to_dport(handle->eport_id);
486
-
487
-
hns_rcb_set_coalesce_usecs(hns_ae_get_dsaf_dev(handle->dev),
488
-
port, timeout);
483
+
return hns_rcb_set_coalesce_usecs(
484
+
ring_pair->rcb_common, ring_pair->port_id_in_comm, timeout);
489
485
}
490
486
491
487
static int hns_ae_set_coalesce_frames(struct hnae_handle *handle,
492
488
u32 coalesce_frames)
493
489
{
494
-
int port;
495
-
int ret;
490
+
struct ring_pair_cb *ring_pair =
491
+
container_of(handle->qs[0], struct ring_pair_cb, q);
496
492
497
-
assert(handle);
498
-
499
-
port = hns_ae_map_eport_to_dport(handle->eport_id);
500
-
501
-
ret = hns_rcb_set_coalesced_frames(hns_ae_get_dsaf_dev(handle->dev),
502
-
port, coalesce_frames);
503
-
return ret;
493
+
return hns_rcb_set_coalesced_frames(
494
+
ring_pair->rcb_common,
495
+
ring_pair->port_id_in_comm, coalesce_frames);
504
496
}
505
497
506
498
void hns_ae_update_stats(struct hnae_handle *handle,
drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c (+2, -1)

drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c (+6, -6)
···
  /* dsaf onode registers */
  for (i = 0; i < DSAF_XOD_NUM; i++) {
  p[311 + i] = dsaf_read_dev(ddev,
- DSAF_XOD_ETS_TSA_TC0_TC3_CFG_0_REG + j * 0x90);
+ DSAF_XOD_ETS_TSA_TC0_TC3_CFG_0_REG + i * 0x90);
  p[319 + i] = dsaf_read_dev(ddev,
- DSAF_XOD_ETS_TSA_TC4_TC7_CFG_0_REG + j * 0x90);
+ DSAF_XOD_ETS_TSA_TC4_TC7_CFG_0_REG + i * 0x90);
  p[327 + i] = dsaf_read_dev(ddev,
- DSAF_XOD_ETS_BW_TC0_TC3_CFG_0_REG + j * 0x90);
+ DSAF_XOD_ETS_BW_TC0_TC3_CFG_0_REG + i * 0x90);
  p[335 + i] = dsaf_read_dev(ddev,
- DSAF_XOD_ETS_BW_TC4_TC7_CFG_0_REG + j * 0x90);
+ DSAF_XOD_ETS_BW_TC4_TC7_CFG_0_REG + i * 0x90);
  p[343 + i] = dsaf_read_dev(ddev,
- DSAF_XOD_ETS_BW_OFFSET_CFG_0_REG + j * 0x90);
+ DSAF_XOD_ETS_BW_OFFSET_CFG_0_REG + i * 0x90);
  p[351 + i] = dsaf_read_dev(ddev,
- DSAF_XOD_ETS_TOKEN_CFG_0_REG + j * 0x90);
+ DSAF_XOD_ETS_TOKEN_CFG_0_REG + i * 0x90);
  }

  p[359] = dsaf_read_dev(ddev, DSAF_XOD_PFS_CFG_0_0_REG + port * 0x90);
drivers/net/ethernet/hisilicon/hns/hns_dsaf_misc.c (+24, -20)
···
  */
  phy_interface_t hns_mac_get_phy_if(struct hns_mac_cb *mac_cb)
  {
- u32 hilink3_mode;
- u32 hilink4_mode;
+ u32 mode;
+ u32 reg;
+ u32 shift;
+ bool is_ver1 = AE_IS_VER1(mac_cb->dsaf_dev->dsaf_ver);
  void __iomem *sys_ctl_vaddr = mac_cb->sys_ctl_vaddr;
- int dev_id = mac_cb->mac_id;
+ int mac_id = mac_cb->mac_id;
  phy_interface_t phy_if = PHY_INTERFACE_MODE_NA;

- hilink3_mode = dsaf_read_reg(sys_ctl_vaddr, HNS_MAC_HILINK3_REG);
- hilink4_mode = dsaf_read_reg(sys_ctl_vaddr, HNS_MAC_HILINK4_REG);
- if (dev_id >= 0 && dev_id <= 3) {
- if (hilink4_mode == 0)
- phy_if = PHY_INTERFACE_MODE_SGMII;
- else
- phy_if = PHY_INTERFACE_MODE_XGMII;
- } else if (dev_id >= 4 && dev_id <= 5) {
- if (hilink3_mode == 0)
- phy_if = PHY_INTERFACE_MODE_SGMII;
- else
- phy_if = PHY_INTERFACE_MODE_XGMII;
- } else {
+ if (is_ver1 && (mac_id >= 6 && mac_id <= 7)) {
  phy_if = PHY_INTERFACE_MODE_SGMII;
+ } else if (mac_id >= 0 && mac_id <= 3) {
+ reg = is_ver1 ? HNS_MAC_HILINK4_REG : HNS_MAC_HILINK4V2_REG;
+ mode = dsaf_read_reg(sys_ctl_vaddr, reg);
+ /* mac_id 0, 1, 2, 3 ---> hilink4 lane 0, 1, 2, 3 */
+ shift = is_ver1 ? 0 : mac_id;
+ if (dsaf_get_bit(mode, shift))
+ phy_if = PHY_INTERFACE_MODE_XGMII;
+ else
+ phy_if = PHY_INTERFACE_MODE_SGMII;
+ } else if (mac_id >= 4 && mac_id <= 7) {
+ reg = is_ver1 ? HNS_MAC_HILINK3_REG : HNS_MAC_HILINK3V2_REG;
+ mode = dsaf_read_reg(sys_ctl_vaddr, reg);
+ /* mac_id 4, 5, 6, 7 ---> hilink3 lane 2, 3, 0, 1 */
+ shift = is_ver1 ? 0 : mac_id <= 5 ? mac_id - 2 : mac_id - 6;
+ if (dsaf_get_bit(mode, shift))
+ phy_if = PHY_INTERFACE_MODE_XGMII;
+ else
+ phy_if = PHY_INTERFACE_MODE_SGMII;
  }
-
- dev_dbg(mac_cb->dev,
- "hilink3_mode=%d, hilink4_mode=%d dev_id=%d, phy_if=%d\n",
- hilink3_mode, hilink4_mode, dev_id, phy_if);
  return phy_if;
  }

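The new lane selection above encodes a fixed mac_id-to-serdes-lane mapping (mac_id 0-3 on hilink4 lanes 0-3, mac_id 4-7 on hilink3 lanes 2, 3, 0, 1). A minimal standalone sketch of that index arithmetic, useful only as an illustration of the shift computation and not as driver code, would be:

/* Standalone sketch of the hilink lane selection above (v2 silicon path,
 * is_ver1 == false). Illustration only -- not part of the driver. */
#include <stdio.h>

static unsigned int lane_shift(unsigned int mac_id)
{
	/* mac_id 0..3 -> hilink4 lane 0..3; mac_id 4..7 -> hilink3 lane 2,3,0,1 */
	if (mac_id <= 3)
		return mac_id;
	return mac_id <= 5 ? mac_id - 2 : mac_id - 6;
}

int main(void)
{
	unsigned int mac_id;

	for (mac_id = 0; mac_id < 8; mac_id++)
		printf("mac_id %u -> %s lane %u\n", mac_id,
		       mac_id <= 3 ? "hilink4" : "hilink3", lane_shift(mac_id));
	return 0;
}

Compiled with any C compiler, it prints the lane each MAC lands on, which matches the mapping comments in the hunk above.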
drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c (+92, -104)
···
  dsaf_write_dev(q, RCB_RING_RX_RING_BD_LEN_REG,
  bd_size_type);
  dsaf_write_dev(q, RCB_RING_RX_RING_BD_NUM_REG,
- ring_pair->port_id_in_dsa);
+ ring_pair->port_id_in_comm);
  dsaf_write_dev(q, RCB_RING_RX_RING_PKTLINE_REG,
- ring_pair->port_id_in_dsa);
+ ring_pair->port_id_in_comm);
  } else {
  dsaf_write_dev(q, RCB_RING_TX_RING_BASEADDR_L_REG,
  (u32)dma);
···
  dsaf_write_dev(q, RCB_RING_TX_RING_BD_LEN_REG,
  bd_size_type);
  dsaf_write_dev(q, RCB_RING_TX_RING_BD_NUM_REG,
- ring_pair->port_id_in_dsa);
+ ring_pair->port_id_in_comm);
  dsaf_write_dev(q, RCB_RING_TX_RING_PKTLINE_REG,
- ring_pair->port_id_in_dsa);
+ ring_pair->port_id_in_comm);
  }
  }

···
  desc_cnt);
  }

- /**
- *hns_rcb_set_port_coalesced_frames - set rcb port coalesced frames
- *@rcb_common: rcb_common device
- *@port_idx:port index
- *@coalesced_frames:BD num for coalesced frames
- */
- static int hns_rcb_set_port_coalesced_frames(struct rcb_common_cb *rcb_common,
- u32 port_idx,
- u32 coalesced_frames)
+ static void hns_rcb_set_port_timeout(
+ struct rcb_common_cb *rcb_common, u32 port_idx, u32 timeout)
  {
- if (coalesced_frames >= rcb_common->desc_num ||
- coalesced_frames > HNS_RCB_MAX_COALESCED_FRAMES)
- return -EINVAL;
-
- dsaf_write_dev(rcb_common, RCB_CFG_PKTLINE_REG + port_idx * 4,
- coalesced_frames);
- return 0;
- }
-
- /**
- *hns_rcb_get_port_coalesced_frames - set rcb port coalesced frames
- *@rcb_common: rcb_common device
- *@port_idx:port index
- * return coaleseced frames value
- */
- static u32 hns_rcb_get_port_coalesced_frames(struct rcb_common_cb *rcb_common,
- u32 port_idx)
- {
- if (port_idx >= HNS_RCB_SERVICE_NW_ENGINE_NUM)
- port_idx = 0;
-
- return dsaf_read_dev(rcb_common,
- RCB_CFG_PKTLINE_REG + port_idx * 4);
- }
-
- /**
- *hns_rcb_set_timeout - set rcb port coalesced time_out
- *@rcb_common: rcb_common device
- *@time_out:time for coalesced time_out
- */
- static void hns_rcb_set_timeout(struct rcb_common_cb *rcb_common,
- u32 timeout)
- {
- dsaf_write_dev(rcb_common, RCB_CFG_OVERTIME_REG, timeout);
+ if (AE_IS_VER1(rcb_common->dsaf_dev->dsaf_ver))
+ dsaf_write_dev(rcb_common, RCB_CFG_OVERTIME_REG,
+ timeout * HNS_RCB_CLK_FREQ_MHZ);
+ else
+ dsaf_write_dev(rcb_common,
+ RCB_PORT_CFG_OVERTIME_REG + port_idx * 4,
+ timeout);
  }

  static int hns_rcb_common_get_port_num(struct rcb_common_cb *rcb_common)
···

  for (i = 0; i < port_num; i++) {
  hns_rcb_set_port_desc_cnt(rcb_common, i, rcb_common->desc_num);
- (void)hns_rcb_set_port_coalesced_frames(
- rcb_common, i, rcb_common->coalesced_frames);
+ (void)hns_rcb_set_coalesced_frames(
+ rcb_common, i, HNS_RCB_DEF_COALESCED_FRAMES);
+ hns_rcb_set_port_timeout(
+ rcb_common, i, HNS_RCB_DEF_COALESCED_USECS);
  }
- hns_rcb_set_timeout(rcb_common, rcb_common->timeout);

  dsaf_write_dev(rcb_common, RCB_COM_CFG_ENDIAN_REG,
  HNS_RCB_COMMON_ENDIAN);
···
  hns_rcb_ring_get_cfg(&ring_pair_cb->q, TX_RING);
  }

- static int hns_rcb_get_port(struct rcb_common_cb *rcb_common, int ring_idx)
+ static int hns_rcb_get_port_in_comm(
+ struct rcb_common_cb *rcb_common, int ring_idx)
  {
  int comm_index = rcb_common->comm_index;
  int port;
···
  q_num = (int)rcb_common->max_q_per_vf * rcb_common->max_vfn;
  port = ring_idx / q_num;
  } else {
- port = HNS_RCB_SERVICE_NW_ENGINE_NUM + comm_index - 1;
+ port = 0; /* config debug-ports port_id_in_comm to 0*/
  }

  return port;
···
  ring_pair_cb->index = i;
  ring_pair_cb->q.io_base =
  RCB_COMM_BASE_TO_RING_BASE(rcb_common->io_base, i);
- ring_pair_cb->port_id_in_dsa = hns_rcb_get_port(rcb_common, i);
+ ring_pair_cb->port_id_in_comm =
+ hns_rcb_get_port_in_comm(rcb_common, i);
  ring_pair_cb->virq[HNS_RCB_IRQ_IDX_TX] =
  is_ver1 ? irq_of_parse_and_map(np, base_irq_idx + i * 2) :
  platform_get_irq(pdev, base_irq_idx + i * 3 + 1);
···
  /**
  *hns_rcb_get_coalesced_frames - get rcb port coalesced frames
  *@rcb_common: rcb_common device
- *@comm_index:port index
- *return coalesced_frames
+ *@port_idx:port id in comm
+ *
+ *Returns: coalesced_frames
  */
- u32 hns_rcb_get_coalesced_frames(struct dsaf_device *dsaf_dev, int port)
+ u32 hns_rcb_get_coalesced_frames(
+ struct rcb_common_cb *rcb_common, u32 port_idx)
  {
- int comm_index = hns_dsaf_get_comm_idx_by_port(port);
- struct rcb_common_cb *rcb_comm = dsaf_dev->rcb_common[comm_index];
-
- return hns_rcb_get_port_coalesced_frames(rcb_comm, port);
+ return dsaf_read_dev(rcb_common, RCB_CFG_PKTLINE_REG + port_idx * 4);
  }

  /**
  *hns_rcb_get_coalesce_usecs - get rcb port coalesced time_out
  *@rcb_common: rcb_common device
- *@comm_index:port index
- *return time_out
+ *@port_idx:port id in comm
+ *
+ *Returns: time_out
  */
- u32 hns_rcb_get_coalesce_usecs(struct dsaf_device *dsaf_dev, int comm_index)
+ u32 hns_rcb_get_coalesce_usecs(
+ struct rcb_common_cb *rcb_common, u32 port_idx)
  {
- struct rcb_common_cb *rcb_comm = dsaf_dev->rcb_common[comm_index];
-
- return rcb_comm->timeout;
+ if (AE_IS_VER1(rcb_common->dsaf_dev->dsaf_ver))
+ return dsaf_read_dev(rcb_common, RCB_CFG_OVERTIME_REG) /
+ HNS_RCB_CLK_FREQ_MHZ;
+ else
+ return dsaf_read_dev(rcb_common,
+ RCB_PORT_CFG_OVERTIME_REG + port_idx * 4);
  }

  /**
  *hns_rcb_set_coalesce_usecs - set rcb port coalesced time_out
  *@rcb_common: rcb_common device
- *@comm_index: comm :index
- *@etx_usecs:tx time for coalesced time_out
- *@rx_usecs:rx time for coalesced time_out
+ *@port_idx:port id in comm
+ *@timeout:tx/rx time for coalesced time_out
+ *
+ * Returns:
+ * Zero for success, or an error code in case of failure
  */
- void hns_rcb_set_coalesce_usecs(struct dsaf_device *dsaf_dev,
- int port, u32 timeout)
+ int hns_rcb_set_coalesce_usecs(
+ struct rcb_common_cb *rcb_common, u32 port_idx, u32 timeout)
  {
- int comm_index = hns_dsaf_get_comm_idx_by_port(port);
- struct rcb_common_cb *rcb_comm = dsaf_dev->rcb_common[comm_index];
+ u32 old_timeout = hns_rcb_get_coalesce_usecs(rcb_common, port_idx);

- if (rcb_comm->timeout == timeout)
- return;
+ if (timeout == old_timeout)
+ return 0;

- if (comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX) {
- dev_err(dsaf_dev->dev,
- "error: not support coalesce_usecs setting!\n");
- return;
+ if (AE_IS_VER1(rcb_common->dsaf_dev->dsaf_ver)) {
+ if (rcb_common->comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX) {
+ dev_err(rcb_common->dsaf_dev->dev,
+ "error: not support coalesce_usecs setting!\n");
+ return -EINVAL;
+ }
  }
- rcb_comm->timeout = timeout;
- hns_rcb_set_timeout(rcb_comm, rcb_comm->timeout);
+ if (timeout > HNS_RCB_MAX_COALESCED_USECS) {
+ dev_err(rcb_common->dsaf_dev->dev,
+ "error: not support coalesce %dus!\n", timeout);
+ return -EINVAL;
+ }
+ hns_rcb_set_port_timeout(rcb_common, port_idx, timeout);
+ return 0;
  }

  /**
  *hns_rcb_set_coalesced_frames - set rcb coalesced frames
  *@rcb_common: rcb_common device
- *@tx_frames:tx BD num for coalesced frames
- *@rx_frames:rx BD num for coalesced frames
- *Return 0 on success, negative on failure
+ *@port_idx:port id in comm
+ *@coalesced_frames:tx/rx BD num for coalesced frames
+ *
+ * Returns:
+ * Zero for success, or an error code in case of failure
  */
- int hns_rcb_set_coalesced_frames(struct dsaf_device *dsaf_dev,
- int port, u32 coalesced_frames)
+ int hns_rcb_set_coalesced_frames(
+ struct rcb_common_cb *rcb_common, u32 port_idx, u32 coalesced_frames)
  {
- int comm_index = hns_dsaf_get_comm_idx_by_port(port);
- struct rcb_common_cb *rcb_comm = dsaf_dev->rcb_common[comm_index];
- u32 coalesced_reg_val;
- int ret;
+ u32 old_waterline = hns_rcb_get_coalesced_frames(rcb_common, port_idx);

- coalesced_reg_val = hns_rcb_get_port_coalesced_frames(rcb_comm, port);
-
- if (coalesced_reg_val == coalesced_frames)
+ if (coalesced_frames == old_waterline)
  return 0;

- if (coalesced_frames >= HNS_RCB_MIN_COALESCED_FRAMES) {
- ret = hns_rcb_set_port_coalesced_frames(rcb_comm, port,
- coalesced_frames);
- return ret;
- } else {
+ if (coalesced_frames >= rcb_common->desc_num ||
+ coalesced_frames > HNS_RCB_MAX_COALESCED_FRAMES ||
+ coalesced_frames < HNS_RCB_MIN_COALESCED_FRAMES) {
+ dev_err(rcb_common->dsaf_dev->dev,
+ "error: not support coalesce_frames setting!\n");
  return -EINVAL;
  }
+
+ dsaf_write_dev(rcb_common, RCB_CFG_PKTLINE_REG + port_idx * 4,
+ coalesced_frames);
+ return 0;
  }

  /**
···
  rcb_common->dsaf_dev = dsaf_dev;

  rcb_common->desc_num = dsaf_dev->desc_num;
- rcb_common->coalesced_frames = HNS_RCB_DEF_COALESCED_FRAMES;
- rcb_common->timeout = HNS_RCB_MAX_TIME_OUT;

  hns_rcb_get_queue_mode(dsaf_mode, comm_index, &max_vfn, &max_q_per_vf);
  rcb_common->max_vfn = max_vfn;
···
  void hns_rcb_get_common_regs(struct rcb_common_cb *rcb_com, void *data)
  {
  u32 *regs = data;
+ bool is_ver1 = AE_IS_VER1(rcb_com->dsaf_dev->dsaf_ver);
+ bool is_dbg = (rcb_com->comm_index != HNS_DSAF_COMM_SERVICE_NW_IDX);
+ u32 reg_tmp;
+ u32 reg_num_tmp;
  u32 i = 0;

  /*rcb common registers */
···
  = dsaf_read_dev(rcb_com, RCB_CFG_PKTLINE_REG + 4 * i);
  }

- regs[70] = dsaf_read_dev(rcb_com, RCB_CFG_OVERTIME_REG);
- regs[71] = dsaf_read_dev(rcb_com, RCB_CFG_PKTLINE_INT_NUM_REG);
- regs[72] = dsaf_read_dev(rcb_com, RCB_CFG_OVERTIME_INT_NUM_REG);
+ reg_tmp = is_ver1 ? RCB_CFG_OVERTIME_REG : RCB_PORT_CFG_OVERTIME_REG;
+ reg_num_tmp = (is_ver1 || is_dbg) ? 1 : 6;
+ for (i = 0; i < reg_num_tmp; i++)
+ regs[70 + i] = dsaf_read_dev(rcb_com, reg_tmp);
+
+ regs[76] = dsaf_read_dev(rcb_com, RCB_CFG_PKTLINE_INT_NUM_REG);
+ regs[77] = dsaf_read_dev(rcb_com, RCB_CFG_OVERTIME_INT_NUM_REG);

  /* mark end of rcb common regs */
- for (i = 73; i < 80; i++)
+ for (i = 78; i < 80; i++)
  regs[i] = 0xcccccccc;
  }

drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h (+12, -11)
···
  #define HNS_RCB_MAX_COALESCED_FRAMES 1023
  #define HNS_RCB_MIN_COALESCED_FRAMES 1
  #define HNS_RCB_DEF_COALESCED_FRAMES 50
- #define HNS_RCB_MAX_TIME_OUT 0x500
+ #define HNS_RCB_CLK_FREQ_MHZ 350
+ #define HNS_RCB_MAX_COALESCED_USECS 0x3ff
+ #define HNS_RCB_DEF_COALESCED_USECS 3

  #define HNS_RCB_COMMON_ENDIAN 1

···

  int virq[HNS_RCB_IRQ_NUM_PER_QUEUE];

- u8 port_id_in_dsa;
+ u8 port_id_in_comm;
  u8 used_by_vf;

  struct hns_ring_hw_stats hw_stats;
···

  u8 comm_index;
  u32 ring_num;
- u32 coalesced_frames; /* frames threshold of rx interrupt */
- u32 timeout; /* time threshold of rx interrupt */
  u32 desc_num; /* desc num per queue*/

  struct ring_pair_cb ring_pair_cb[0];
···
  void hns_rcb_init_hw(struct ring_pair_cb *ring);
  void hns_rcb_reset_ring_hw(struct hnae_queue *q);
  void hns_rcb_wait_fbd_clean(struct hnae_queue **qs, int q_num, u32 flag);
-
- u32 hns_rcb_get_coalesced_frames(struct dsaf_device *dsaf_dev, int comm_index);
- u32 hns_rcb_get_coalesce_usecs(struct dsaf_device *dsaf_dev, int comm_index);
- void hns_rcb_set_coalesce_usecs(struct dsaf_device *dsaf_dev,
- int comm_index, u32 timeout);
- int hns_rcb_set_coalesced_frames(struct dsaf_device *dsaf_dev,
- int comm_index, u32 coalesce_frames);
+ u32 hns_rcb_get_coalesced_frames(
+ struct rcb_common_cb *rcb_common, u32 port_idx);
+ u32 hns_rcb_get_coalesce_usecs(
+ struct rcb_common_cb *rcb_common, u32 port_idx);
+ int hns_rcb_set_coalesce_usecs(
+ struct rcb_common_cb *rcb_common, u32 port_idx, u32 timeout);
+ int hns_rcb_set_coalesced_frames(
+ struct rcb_common_cb *rcb_common, u32 port_idx, u32 coalesced_frames);
  void hns_rcb_update_stats(struct hnae_queue *queue);

  void hns_rcb_get_stats(struct hnae_queue *queue, u64 *data);
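The reworked per-port API above bounds what ethtool may request: coalesced frames must stay within HNS_RCB_MIN/MAX_COALESCED_FRAMES and below the ring's descriptor count, and the timeout may not exceed HNS_RCB_MAX_COALESCED_USECS (the extra v1 restriction on the service ports is left out here). A hedged, userspace-only model of just that validation, reusing the constants from this header with a made-up desc_num, is:

/* Userspace model of the value ranges accepted by the reworked per-port
 * coalesce setters. Constants are copied from hns_dsaf_rcb.h; desc_num is a
 * made-up example. Illustration only, not the kernel implementation. */
#include <stdio.h>

#define HNS_RCB_MAX_COALESCED_FRAMES	1023
#define HNS_RCB_MIN_COALESCED_FRAMES	1
#define HNS_RCB_MAX_COALESCED_USECS	0x3ff

static int frames_valid(unsigned int frames, unsigned int desc_num)
{
	return frames < desc_num &&
	       frames <= HNS_RCB_MAX_COALESCED_FRAMES &&
	       frames >= HNS_RCB_MIN_COALESCED_FRAMES;
}

static int usecs_valid(unsigned int usecs)
{
	return usecs <= HNS_RCB_MAX_COALESCED_USECS;
}

int main(void)
{
	unsigned int desc_num = 1024;	/* example ring size, assumption */

	printf("50 frames:   %s\n", frames_valid(50, desc_num) ? "ok" : "rejected");
	printf("2048 frames: %s\n", frames_valid(2048, desc_num) ? "ok" : "rejected");
	printf("3 usecs:     %s\n", usecs_valid(3) ? "ok" : "rejected");
	printf("5000 usecs:  %s\n", usecs_valid(5000) ? "ok" : "rejected");
	return 0;
}

Values outside those windows make the kernel setters return -EINVAL, which the new int return type lets the ethtool path propagate back to the caller.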
drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h (+3)
···
  /*serdes offset**/
  #define HNS_MAC_HILINK3_REG DSAF_SUB_SC_HILINK3_CRG_CTRL0_REG
  #define HNS_MAC_HILINK4_REG DSAF_SUB_SC_HILINK4_CRG_CTRL0_REG
+ #define HNS_MAC_HILINK3V2_REG DSAF_SUB_SC_HILINK3_CRG_CTRL1_REG
+ #define HNS_MAC_HILINK4V2_REG DSAF_SUB_SC_HILINK4_CRG_CTRL1_REG
  #define HNS_MAC_LANE0_CTLEDFE_REG 0x000BFFCCULL
  #define HNS_MAC_LANE1_CTLEDFE_REG 0x000BFFBCULL
  #define HNS_MAC_LANE2_CTLEDFE_REG 0x000BFFACULL
···
  #define RCB_CFG_OVERTIME_REG 0x9300
  #define RCB_CFG_PKTLINE_INT_NUM_REG 0x9304
  #define RCB_CFG_OVERTIME_INT_NUM_REG 0x9308
+ #define RCB_PORT_CFG_OVERTIME_REG 0x9430

  #define RCB_RING_RX_RING_BASEADDR_L_REG 0x00000
  #define RCB_RING_RX_RING_BASEADDR_H_REG 0x00004
drivers/net/ethernet/hisilicon/hns/hns_enet.c (+7, -9)
···
  static void hns_nic_tx_fini_pro(struct hns_nic_ring_data *ring_data)
  {
  struct hnae_ring *ring = ring_data->ring;
- int head = ring->next_to_clean;
-
- /* for hardware bug fixed */
- head = readl_relaxed(ring->io_base + RCB_REG_HEAD);
+ int head = readl_relaxed(ring->io_base + RCB_REG_HEAD);

  if (head != ring->next_to_clean) {
  ring_data->ring->q->handle->dev->ops->toggle_ring_irq(
···
  napi_complete(napi);
  ring_data->ring->q->handle->dev->ops->toggle_ring_irq(
  ring_data->ring, 0);
-
- ring_data->fini_process(ring_data);
+ if (ring_data->fini_process)
+ ring_data->fini_process(ring_data);
  return 0;
  }

···
  {
  struct hnae_handle *h = priv->ae_handle;
  struct hns_nic_ring_data *rd;
+ bool is_ver1 = AE_IS_VER1(priv->enet_ver);
  int i;

  if (h->q_num > NIC_MAX_Q_PER_VF) {
···
  rd->queue_index = i;
  rd->ring = &h->qs[i]->tx_ring;
  rd->poll_one = hns_nic_tx_poll_one;
- rd->fini_process = hns_nic_tx_fini_pro;
+ rd->fini_process = is_ver1 ? hns_nic_tx_fini_pro : NULL;

  netif_napi_add(priv->netdev, &rd->napi,
  hns_nic_common_poll, NIC_TX_CLEAN_MAX_NUM);
···
  rd->ring = &h->qs[i - h->q_num]->rx_ring;
  rd->poll_one = hns_nic_rx_poll_one;
  rd->ex_process = hns_nic_rx_up_pro;
- rd->fini_process = hns_nic_rx_fini_pro;
+ rd->fini_process = is_ver1 ? hns_nic_rx_fini_pro : NULL;

  netif_napi_add(priv->netdev, &rd->napi,
  hns_nic_common_poll, NIC_RX_CLEAN_MAX_NUM);
···
  h = hnae_get_handle(&priv->netdev->dev,
  priv->ae_node, priv->port_id, NULL);
  if (IS_ERR_OR_NULL(h)) {
- ret = PTR_ERR(h);
+ ret = -ENODEV;
  dev_dbg(priv->dev, "has not handle, register notifier!\n");
  goto out;
  }
drivers/net/ethernet/hisilicon/hns/hns_ethtool.c (+6, -4)
···
  (!ops->set_coalesce_frames))
  return -ESRCH;

- ops->set_coalesce_usecs(priv->ae_handle,
- ec->rx_coalesce_usecs);
+ ret = ops->set_coalesce_usecs(priv->ae_handle,
+ ec->rx_coalesce_usecs);
+ if (ret)
+ return ret;

  ret = ops->set_coalesce_frames(
  priv->ae_handle,
···
  struct phy_device *phy_dev = priv->phy;

  retval = phy_write(phy_dev, HNS_PHY_PAGE_REG, HNS_PHY_PAGE_LED);
- retval = phy_write(phy_dev, HNS_LED_FC_REG, value);
- retval = phy_write(phy_dev, HNS_PHY_PAGE_REG, HNS_PHY_PAGE_COPPER);
+ retval |= phy_write(phy_dev, HNS_LED_FC_REG, value);
+ retval |= phy_write(phy_dev, HNS_PHY_PAGE_REG, HNS_PHY_PAGE_COPPER);
  if (retval) {
  netdev_err(netdev, "mdiobus_write fail !\n");
  return retval;
drivers/net/ethernet/intel/ixgbe/ixgbe.h (+5, -5)
···
  #define IXGBE_FLAG2_RSS_FIELD_IPV6_UDP (u32)(1 << 9)
  #define IXGBE_FLAG2_PTP_PPS_ENABLED (u32)(1 << 10)
  #define IXGBE_FLAG2_PHY_INTERRUPT (u32)(1 << 11)
- #ifdef CONFIG_IXGBE_VXLAN
  #define IXGBE_FLAG2_VXLAN_REREG_NEEDED BIT(12)
- #endif
  #define IXGBE_FLAG2_VLAN_PROMISC BIT(13)

  /* Tx fast path data */
···
  /* Rx fast path data */
  int num_rx_queues;
  u16 rx_itr_setting;
+
+ /* Port number used to identify VXLAN traffic */
+ __be16 vxlan_port;

  /* TX */
  struct ixgbe_ring *tx_ring[MAX_TX_QUEUES] ____cacheline_aligned_in_smp;
···
  u32 timer_event_accumulator;
  u32 vferr_refcount;
  struct ixgbe_mac_addr *mac_table;
- #ifdef CONFIG_IXGBE_VXLAN
- u16 vxlan_port;
- #endif
  struct kobject *info_kobj;
  #ifdef CONFIG_IXGBE_HWMON
  struct hwmon_buff *ixgbe_hwmon_buff;
···
  extern char ixgbe_default_device_descr[];
  #endif /* IXGBE_FCOE */

+ int ixgbe_open(struct net_device *netdev);
+ int ixgbe_close(struct net_device *netdev);
  void ixgbe_up(struct ixgbe_adapter *adapter);
  void ixgbe_down(struct ixgbe_adapter *adapter);
  void ixgbe_reinit_locked(struct ixgbe_adapter *adapter);
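Because vxlan_port is now stored as __be16 (network byte order), the driver can compare it directly against udp_hdr(skb)->dest and only byte-swaps when printing or when writing a host-order register value. A small standalone illustration of that convention, using plain htons/ntohs and not ixgbe code, is:

/* Standalone illustration of keeping a UDP port in network byte order and
 * converting only at the edges. Not ixgbe code. */
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

int main(void)
{
	uint16_t stored_port = htons(4789);	/* kept big-endian, like adapter->vxlan_port */
	uint16_t pkt_dest = htons(4789);	/* as it appears in the UDP header */

	/* Both values are big-endian, so compare directly without ntohs(). */
	if (pkt_dest == stored_port)
		printf("match, port %u\n", (unsigned)ntohs(stored_port)); /* swap only for display */
	return 0;
}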
drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c (+2, -2)
···

  if (if_running)
  /* indicate we're in test mode */
- dev_close(netdev);
+ ixgbe_close(netdev);
  else
  ixgbe_reset(adapter);

···
  /* clear testing bit and return adapter to previous state */
  clear_bit(__IXGBE_TESTING, &adapter->state);
  if (if_running)
- dev_open(netdev);
+ ixgbe_open(netdev);
  else if (hw->mac.ops.disable_tx_laser)
  hw->mac.ops.disable_tx_laser(hw);
  } else {
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c (+76, -89)
···
4531
4531
case ixgbe_mac_X550:
4532
4532
case ixgbe_mac_X550EM_x:
4533
4533
IXGBE_WRITE_REG(&adapter->hw, IXGBE_VXLANCTRL, 0);
4534
-
#ifdef CONFIG_IXGBE_VXLAN
4535
4534
adapter->vxlan_port = 0;
4536
-
#endif
4537
4535
break;
4538
4536
default:
4539
4537
break;
···
5992
5994
* handler is registered with the OS, the watchdog timer is started,
5993
5995
* and the stack is notified that the interface is ready.
5994
5996
**/
5995
-
static int ixgbe_open(struct net_device *netdev)
5997
+
int ixgbe_open(struct net_device *netdev)
5996
5998
{
5997
5999
struct ixgbe_adapter *adapter = netdev_priv(netdev);
5998
6000
struct ixgbe_hw *hw = &adapter->hw;
···
6094
6096
* needs to be disabled. A global MAC reset is issued to stop the
6095
6097
* hardware, and all transmit and receive resources are freed.
6096
6098
**/
6097
-
static int ixgbe_close(struct net_device *netdev)
6099
+
int ixgbe_close(struct net_device *netdev)
6098
6100
{
6099
6101
struct ixgbe_adapter *adapter = netdev_priv(netdev);
6100
6102
···
7558
7560
struct ipv6hdr *ipv6;
7559
7561
} hdr;
7560
7562
struct tcphdr *th;
7563
+
unsigned int hlen;
7561
7564
struct sk_buff *skb;
7562
-
#ifdef CONFIG_IXGBE_VXLAN
7563
-
u8 encap = false;
7564
-
#endif /* CONFIG_IXGBE_VXLAN */
7565
7565
__be16 vlan_id;
7566
+
int l4_proto;
7566
7567
7567
7568
/* if ring doesn't have a interrupt vector, cannot perform ATR */
7568
7569
if (!q_vector)
···
7573
7576
7574
7577
ring->atr_count++;
7575
7578
7579
+
/* currently only IPv4/IPv6 with TCP is supported */
7580
+
if ((first->protocol != htons(ETH_P_IP)) &&
7581
+
(first->protocol != htons(ETH_P_IPV6)))
7582
+
return;
7583
+
7576
7584
/* snag network header to get L4 type and address */
7577
7585
skb = first->skb;
7578
7586
hdr.network = skb_network_header(skb);
7579
-
if (!skb->encapsulation) {
7580
-
th = tcp_hdr(skb);
7581
-
} else {
7582
7587
#ifdef CONFIG_IXGBE_VXLAN
7588
+
if (skb->encapsulation &&
7589
+
first->protocol == htons(ETH_P_IP) &&
7590
+
hdr.ipv4->protocol != IPPROTO_UDP) {
7583
7591
struct ixgbe_adapter *adapter = q_vector->adapter;
7584
7592
7585
-
if (!adapter->vxlan_port)
7586
-
return;
7587
-
if (first->protocol != htons(ETH_P_IP) ||
7588
-
hdr.ipv4->version != IPVERSION ||
7589
-
hdr.ipv4->protocol != IPPROTO_UDP) {
7590
-
return;
7591
-
}
7592
-
if (ntohs(udp_hdr(skb)->dest) != adapter->vxlan_port)
7593
-
return;
7594
-
encap = true;
7595
-
hdr.network = skb_inner_network_header(skb);
7596
-
th = inner_tcp_hdr(skb);
7597
-
#else
7598
-
return;
7599
-
#endif /* CONFIG_IXGBE_VXLAN */
7593
+
/* verify the port is recognized as VXLAN */
7594
+
if (adapter->vxlan_port &&
7595
+
udp_hdr(skb)->dest == adapter->vxlan_port)
7596
+
hdr.network = skb_inner_network_header(skb);
7600
7597
}
7598
+
#endif /* CONFIG_IXGBE_VXLAN */
7601
7599
7602
7600
/* Currently only IPv4/IPv6 with TCP is supported */
7603
7601
switch (hdr.ipv4->version) {
7604
7602
case IPVERSION:
7605
-
if (hdr.ipv4->protocol != IPPROTO_TCP)
7606
-
return;
7603
+
/* access ihl as u8 to avoid unaligned access on ia64 */
7604
+
hlen = (hdr.network[0] & 0x0F) << 2;
7605
+
l4_proto = hdr.ipv4->protocol;
7607
7606
break;
7608
7607
case 6:
7609
-
if (likely((unsigned char *)th - hdr.network ==
7610
-
sizeof(struct ipv6hdr))) {
7611
-
if (hdr.ipv6->nexthdr != IPPROTO_TCP)
7612
-
return;
7613
-
} else {
7614
-
__be16 frag_off;
7615
-
u8 l4_hdr;
7616
-
7617
-
ipv6_skip_exthdr(skb, hdr.network - skb->data +
7618
-
sizeof(struct ipv6hdr),
7619
-
&l4_hdr, &frag_off);
7620
-
if (unlikely(frag_off))
7621
-
return;
7622
-
if (l4_hdr != IPPROTO_TCP)
7623
-
return;
7624
-
}
7608
+
hlen = hdr.network - skb->data;
7609
+
l4_proto = ipv6_find_hdr(skb, &hlen, IPPROTO_TCP, NULL, NULL);
7610
+
hlen -= hdr.network - skb->data;
7625
7611
break;
7626
7612
default:
7627
7613
return;
7628
7614
}
7629
7615
7630
-
/* skip this packet since it is invalid or the socket is closing */
7631
-
if (!th || th->fin)
7616
+
if (l4_proto != IPPROTO_TCP)
7617
+
return;
7618
+
7619
+
th = (struct tcphdr *)(hdr.network + hlen);
7620
+
7621
+
/* skip this packet since the socket is closing */
7622
+
if (th->fin)
7632
7623
return;
7633
7624
7634
7625
/* sample on all syn packets or once every atr sample count */
···
7667
7682
break;
7668
7683
}
7669
7684
7670
-
#ifdef CONFIG_IXGBE_VXLAN
7671
-
if (encap)
7685
+
if (hdr.network != skb_network_header(skb))
7672
7686
input.formatted.flow_type |= IXGBE_ATR_L4TYPE_TUNNEL_MASK;
7673
-
#endif /* CONFIG_IXGBE_VXLAN */
7674
7687
7675
7688
/* This assumes the Rx queue and Tx queue are bound to the same CPU */
7676
7689
ixgbe_fdir_add_signature_filter_82599(&q_vector->adapter->hw,
···
8192
8209
static int ixgbe_delete_clsu32(struct ixgbe_adapter *adapter,
8193
8210
struct tc_cls_u32_offload *cls)
8194
8211
{
8212
+
u32 uhtid = TC_U32_USERHTID(cls->knode.handle);
8213
+
u32 loc;
8195
8214
int err;
8196
8215
8216
+
if ((uhtid != 0x800) && (uhtid >= IXGBE_MAX_LINK_HANDLE))
8217
+
return -EINVAL;
8218
+
8219
+
loc = cls->knode.handle & 0xfffff;
8220
+
8197
8221
spin_lock(&adapter->fdir_perfect_lock);
8198
-
err = ixgbe_update_ethtool_fdir_entry(adapter, NULL, cls->knode.handle);
8222
+
err = ixgbe_update_ethtool_fdir_entry(adapter, NULL, loc);
8199
8223
spin_unlock(&adapter->fdir_perfect_lock);
8200
8224
return err;
8201
8225
}
···
8211
8221
__be16 protocol,
8212
8222
struct tc_cls_u32_offload *cls)
8213
8223
{
8224
+
u32 uhtid = TC_U32_USERHTID(cls->hnode.handle);
8225
+
8226
+
if (uhtid >= IXGBE_MAX_LINK_HANDLE)
8227
+
return -EINVAL;
8228
+
8214
8229
/* This ixgbe devices do not support hash tables at the moment
8215
8230
* so abort when given hash tables.
8216
8231
*/
8217
8232
if (cls->hnode.divisor > 0)
8218
8233
return -EINVAL;
8219
8234
8220
-
set_bit(TC_U32_USERHTID(cls->hnode.handle), &adapter->tables);
8235
+
set_bit(uhtid - 1, &adapter->tables);
8221
8236
return 0;
8222
8237
}
8223
8238
8224
8239
static int ixgbe_configure_clsu32_del_hnode(struct ixgbe_adapter *adapter,
8225
8240
struct tc_cls_u32_offload *cls)
8226
8241
{
8227
-
clear_bit(TC_U32_USERHTID(cls->hnode.handle), &adapter->tables);
8242
+
u32 uhtid = TC_U32_USERHTID(cls->hnode.handle);
8243
+
8244
+
if (uhtid >= IXGBE_MAX_LINK_HANDLE)
8245
+
return -EINVAL;
8246
+
8247
+
clear_bit(uhtid - 1, &adapter->tables);
8228
8248
return 0;
8229
8249
}
8230
8250
···
8252
8252
#endif
8253
8253
int i, err = 0;
8254
8254
u8 queue;
8255
-
u32 handle;
8255
+
u32 uhtid, link_uhtid;
8256
8256
8257
8257
memset(&mask, 0, sizeof(union ixgbe_atr_input));
8258
-
handle = cls->knode.handle;
8258
+
uhtid = TC_U32_USERHTID(cls->knode.handle);
8259
+
link_uhtid = TC_U32_USERHTID(cls->knode.link_handle);
8259
8260
8260
-
/* At the moment cls_u32 jumps to transport layer and skips past
8261
+
/* At the moment cls_u32 jumps to network layer and skips past
8261
8262
* L2 headers. The canonical method to match L2 frames is to use
8262
8263
* negative values. However this is error prone at best but really
8263
8264
* just broken because there is no way to "know" what sort of hdr
8264
-
* is in front of the transport layer. Fix cls_u32 to support L2
8265
+
* is in front of the network layer. Fix cls_u32 to support L2
8265
8266
* headers when needed.
8266
8267
*/
8267
8268
if (protocol != htons(ETH_P_IP))
8268
8269
return -EINVAL;
8269
8270
8270
-
if (cls->knode.link_handle ||
8271
-
cls->knode.link_handle >= IXGBE_MAX_LINK_HANDLE) {
8271
+
if (link_uhtid) {
8272
8272
struct ixgbe_nexthdr *nexthdr = ixgbe_ipv4_jumps;
8273
-
u32 uhtid = TC_U32_USERHTID(cls->knode.link_handle);
8274
8273
8275
-
if (!test_bit(uhtid, &adapter->tables))
8274
+
if (link_uhtid >= IXGBE_MAX_LINK_HANDLE)
8275
+
return -EINVAL;
8276
+
8277
+
if (!test_bit(link_uhtid - 1, &adapter->tables))
8276
8278
return -EINVAL;
8277
8279
8278
8280
for (i = 0; nexthdr[i].jump; i++) {
···
8290
8288
nexthdr->mask != cls->knode.sel->keys[0].mask)
8291
8289
return -EINVAL;
8292
8290
8293
-
if (uhtid >= IXGBE_MAX_LINK_HANDLE)
8294
-
return -EINVAL;
8295
-
8296
-
adapter->jump_tables[uhtid] = nexthdr->jump;
8291
+
adapter->jump_tables[link_uhtid] = nexthdr->jump;
8297
8292
}
8298
8293
return 0;
8299
8294
}
···
8307
8308
* To add support for new nodes update ixgbe_model.h parse structures
8308
8309
* this function _should_ be generic try not to hardcode values here.
8309
8310
*/
8310
-
if (TC_U32_USERHTID(handle) == 0x800) {
8311
+
if (uhtid == 0x800) {
8311
8312
field_ptr = adapter->jump_tables[0];
8312
8313
} else {
8313
-
if (TC_U32_USERHTID(handle) >= ARRAY_SIZE(adapter->jump_tables))
8314
+
if (uhtid >= IXGBE_MAX_LINK_HANDLE)
8314
8315
return -EINVAL;
8315
8316
8316
-
field_ptr = adapter->jump_tables[TC_U32_USERHTID(handle)];
8317
+
field_ptr = adapter->jump_tables[uhtid];
8317
8318
}
8318
8319
8319
8320
if (!field_ptr)
···
8331
8332
int j;
8332
8333
8333
8334
for (j = 0; field_ptr[j].val; j++) {
8334
-
if (field_ptr[j].off == off &&
8335
-
field_ptr[j].mask == m) {
8335
+
if (field_ptr[j].off == off) {
8336
8336
field_ptr[j].val(input, &mask, val, m);
8337
8337
input->filter.formatted.flow_type |=
8338
8338
field_ptr[j].type;
···
8391
8393
return -EINVAL;
8392
8394
}
8393
8395
8394
-
int __ixgbe_setup_tc(struct net_device *dev, u32 handle, __be16 proto,
8395
-
struct tc_to_netdev *tc)
8396
+
static int __ixgbe_setup_tc(struct net_device *dev, u32 handle, __be16 proto,
8397
+
struct tc_to_netdev *tc)
8396
8398
{
8397
8399
struct ixgbe_adapter *adapter = netdev_priv(dev);
8398
8400
···
8552
8554
{
8553
8555
struct ixgbe_adapter *adapter = netdev_priv(dev);
8554
8556
struct ixgbe_hw *hw = &adapter->hw;
8555
-
u16 new_port = ntohs(port);
8556
8557
8557
8558
if (!(adapter->flags & IXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE))
8558
8559
return;
···
8559
8562
if (sa_family == AF_INET6)
8560
8563
return;
8561
8564
8562
-
if (adapter->vxlan_port == new_port)
8565
+
if (adapter->vxlan_port == port)
8563
8566
return;
8564
8567
8565
8568
if (adapter->vxlan_port) {
8566
8569
netdev_info(dev,
8567
8570
"Hit Max num of VXLAN ports, not adding port %d\n",
8568
-
new_port);
8571
+
ntohs(port));
8569
8572
return;
8570
8573
}
8571
8574
8572
-
adapter->vxlan_port = new_port;
8573
-
IXGBE_WRITE_REG(hw, IXGBE_VXLANCTRL, new_port);
8575
+
adapter->vxlan_port = port;
8576
+
IXGBE_WRITE_REG(hw, IXGBE_VXLANCTRL, ntohs(port));
8574
8577
}
8575
8578
8576
8579
/**
···
8583
8586
__be16 port)
8584
8587
{
8585
8588
struct ixgbe_adapter *adapter = netdev_priv(dev);
8586
-
u16 new_port = ntohs(port);
8587
8589
8588
8590
if (!(adapter->flags & IXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE))
8589
8591
return;
···
8590
8594
if (sa_family == AF_INET6)
8591
8595
return;
8592
8596
8593
-
if (adapter->vxlan_port != new_port) {
8597
+
if (adapter->vxlan_port != port) {
8594
8598
netdev_info(dev, "Port %d was not found, not deleting\n",
8595
-
new_port);
8599
+
ntohs(port));
8596
8600
return;
8597
8601
}
8598
8602
···
9261
9265
netdev->priv_flags |= IFF_UNICAST_FLT;
9262
9266
netdev->priv_flags |= IFF_SUPP_NOFCS;
9263
9267
9264
-
#ifdef CONFIG_IXGBE_VXLAN
9265
-
switch (adapter->hw.mac.type) {
9266
-
case ixgbe_mac_X550:
9267
-
case ixgbe_mac_X550EM_x:
9268
-
netdev->hw_enc_features |= NETIF_F_RXCSUM;
9269
-
break;
9270
-
default:
9271
-
break;
9272
-
}
9273
-
#endif /* CONFIG_IXGBE_VXLAN */
9274
-
9275
9268
#ifdef CONFIG_IXGBE_DCB
9276
9269
netdev->dcbnl_ops = &dcbnl_ops;
9277
9270
#endif
···
9314
9329
goto err_sw_init;
9315
9330
}
9316
9331
9332
+
/* Set hw->mac.addr to permanent MAC address */
9333
+
ether_addr_copy(hw->mac.addr, hw->mac.perm_addr);
9317
9334
ixgbe_mac_set_default_filter(adapter);
9318
9335
9319
9336
setup_timer(&adapter->service_timer, &ixgbe_service_timer,
+6
-15
drivers/net/ethernet/intel/ixgbe/ixgbe_model.h
+6
-15
drivers/net/ethernet/intel/ixgbe/ixgbe_model.h
···
32
32
33
33
struct ixgbe_mat_field {
34
34
unsigned int off;
35
-
unsigned int mask;
36
35
int (*val)(struct ixgbe_fdir_filter *input,
37
36
union ixgbe_atr_input *mask,
38
37
u32 val, u32 m);
···
57
58
}
58
59
59
60
static struct ixgbe_mat_field ixgbe_ipv4_fields[] = {
60
-
{ .off = 12, .mask = -1, .val = ixgbe_mat_prgm_sip,
61
+
{ .off = 12, .val = ixgbe_mat_prgm_sip,
61
62
.type = IXGBE_ATR_FLOW_TYPE_IPV4},
62
-
{ .off = 16, .mask = -1, .val = ixgbe_mat_prgm_dip,
63
+
{ .off = 16, .val = ixgbe_mat_prgm_dip,
63
64
.type = IXGBE_ATR_FLOW_TYPE_IPV4},
64
65
{ .val = NULL } /* terminal node */
65
66
};
66
67
67
-
static inline int ixgbe_mat_prgm_sport(struct ixgbe_fdir_filter *input,
68
+
static inline int ixgbe_mat_prgm_ports(struct ixgbe_fdir_filter *input,
68
69
union ixgbe_atr_input *mask,
69
70
u32 val, u32 m)
70
71
{
71
72
input->filter.formatted.src_port = val & 0xffff;
72
73
mask->formatted.src_port = m & 0xffff;
73
-
return 0;
74
-
};
74
+
input->filter.formatted.dst_port = val >> 16;
75
+
mask->formatted.dst_port = m >> 16;
75
76
76
-
static inline int ixgbe_mat_prgm_dport(struct ixgbe_fdir_filter *input,
77
-
union ixgbe_atr_input *mask,
78
-
u32 val, u32 m)
79
-
{
80
-
input->filter.formatted.dst_port = val & 0xffff;
81
-
mask->formatted.dst_port = m & 0xffff;
82
77
return 0;
83
78
};
84
79
85
80
static struct ixgbe_mat_field ixgbe_tcp_fields[] = {
86
-
{.off = 0, .mask = 0xffff, .val = ixgbe_mat_prgm_sport,
87
-
.type = IXGBE_ATR_FLOW_TYPE_TCPV4},
88
-
{.off = 2, .mask = 0xffff, .val = ixgbe_mat_prgm_dport,
81
+
{.off = 0, .val = ixgbe_mat_prgm_ports,
89
82
.type = IXGBE_ATR_FLOW_TYPE_TCPV4},
90
83
{ .val = NULL } /* terminal node */
91
84
};
+1
-1
drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
+1
-1
drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
+2
-2
drivers/net/ethernet/intel/ixgbevf/ethtool.c
+2
-2
drivers/net/ethernet/intel/ixgbevf/ethtool.c
···
680
680
681
681
if (if_running)
682
682
/* indicate we're in test mode */
683
-
dev_close(netdev);
683
+
ixgbevf_close(netdev);
684
684
else
685
685
ixgbevf_reset(adapter);
686
686
···
692
692
693
693
clear_bit(__IXGBEVF_TESTING, &adapter->state);
694
694
if (if_running)
695
-
dev_open(netdev);
695
+
ixgbevf_open(netdev);
696
696
} else {
697
697
hw_dbg(&adapter->hw, "online testing starting\n");
698
698
/* Online tests */
+2
drivers/net/ethernet/intel/ixgbevf/ixgbevf.h
+2
drivers/net/ethernet/intel/ixgbevf/ixgbevf.h
···
486
486
extern const char ixgbevf_driver_name[];
487
487
extern const char ixgbevf_driver_version[];
488
488
489
+
int ixgbevf_open(struct net_device *netdev);
490
+
int ixgbevf_close(struct net_device *netdev);
489
491
void ixgbevf_up(struct ixgbevf_adapter *adapter);
490
492
void ixgbevf_down(struct ixgbevf_adapter *adapter);
491
493
void ixgbevf_reinit_locked(struct ixgbevf_adapter *adapter);
+10
-6
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+10
-6
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
···
3122
3122
* handler is registered with the OS, the watchdog timer is started,
3123
3123
* and the stack is notified that the interface is ready.
3124
3124
**/
3125
-
static int ixgbevf_open(struct net_device *netdev)
3125
+
int ixgbevf_open(struct net_device *netdev)
3126
3126
{
3127
3127
struct ixgbevf_adapter *adapter = netdev_priv(netdev);
3128
3128
struct ixgbe_hw *hw = &adapter->hw;
···
3205
3205
* needs to be disabled. A global MAC reset is issued to stop the
3206
3206
* hardware, and all transmit and receive resources are freed.
3207
3207
**/
3208
-
static int ixgbevf_close(struct net_device *netdev)
3208
+
int ixgbevf_close(struct net_device *netdev)
3209
3209
{
3210
3210
struct ixgbevf_adapter *adapter = netdev_priv(netdev);
3211
3211
···
3692
3692
struct ixgbevf_adapter *adapter = netdev_priv(netdev);
3693
3693
struct ixgbe_hw *hw = &adapter->hw;
3694
3694
struct sockaddr *addr = p;
3695
+
int err;
3695
3696
3696
3697
if (!is_valid_ether_addr(addr->sa_data))
3697
3698
return -EADDRNOTAVAIL;
3698
3699
3699
-
ether_addr_copy(netdev->dev_addr, addr->sa_data);
3700
-
ether_addr_copy(hw->mac.addr, addr->sa_data);
3701
-
3702
3700
spin_lock_bh(&adapter->mbx_lock);
3703
3701
3704
-
hw->mac.ops.set_rar(hw, 0, hw->mac.addr, 0);
3702
+
err = hw->mac.ops.set_rar(hw, 0, addr->sa_data, 0);
3705
3703
3706
3704
spin_unlock_bh(&adapter->mbx_lock);
3705
+
3706
+
if (err)
3707
+
return -EPERM;
3708
+
3709
+
ether_addr_copy(hw->mac.addr, addr->sa_data);
3710
+
ether_addr_copy(netdev->dev_addr, addr->sa_data);
3707
3711
3708
3712
return 0;
3709
3713
}
+3
-1
drivers/net/ethernet/intel/ixgbevf/vf.c
+3
-1
drivers/net/ethernet/intel/ixgbevf/vf.c
···
408
408
409
409
/* if nacked the address was rejected, use "perm_addr" */
410
410
if (!ret_val &&
411
-
(msgbuf[0] == (IXGBE_VF_SET_MAC_ADDR | IXGBE_VT_MSGTYPE_NACK)))
411
+
(msgbuf[0] == (IXGBE_VF_SET_MAC_ADDR | IXGBE_VT_MSGTYPE_NACK))) {
412
412
ixgbevf_get_mac_addr_vf(hw, hw->mac.addr);
413
+
return IXGBE_ERR_MBX;
414
+
}
413
415
414
416
return ret_val;
415
417
}
+17
-23
drivers/net/ethernet/marvell/mvneta.c
+17
-23
drivers/net/ethernet/marvell/mvneta.c
···
260
260
261
261
#define MVNETA_VLAN_TAG_LEN 4
262
262
263
-
#define MVNETA_CPU_D_CACHE_LINE_SIZE 32
264
263
#define MVNETA_TX_CSUM_DEF_SIZE 1600
265
264
#define MVNETA_TX_CSUM_MAX_SIZE 9800
266
265
#define MVNETA_ACC_MODE_EXT1 1
···
299
300
#define MVNETA_RX_PKT_SIZE(mtu) \
300
301
ALIGN((mtu) + MVNETA_MH_SIZE + MVNETA_VLAN_TAG_LEN + \
301
302
ETH_HLEN + ETH_FCS_LEN, \
302
-
MVNETA_CPU_D_CACHE_LINE_SIZE)
303
+
cache_line_size())
303
304
304
305
#define IS_TSO_HEADER(txq, addr) \
305
306
((addr >= txq->tso_hdrs_phys) && \
···
2763
2764
if (rxq->descs == NULL)
2764
2765
return -ENOMEM;
2765
2766
2766
-
BUG_ON(rxq->descs !=
2767
-
PTR_ALIGN(rxq->descs, MVNETA_CPU_D_CACHE_LINE_SIZE));
2768
-
2769
2767
rxq->last_desc = rxq->size - 1;
2770
2768
2771
2769
/* Set Rx descriptors queue starting address */
···
2832
2836
&txq->descs_phys, GFP_KERNEL);
2833
2837
if (txq->descs == NULL)
2834
2838
return -ENOMEM;
2835
-
2836
-
/* Make sure descriptor address is cache line size aligned */
2837
-
BUG_ON(txq->descs !=
2838
-
PTR_ALIGN(txq->descs, MVNETA_CPU_D_CACHE_LINE_SIZE));
2839
2839
2840
2840
txq->last_desc = txq->size - 1;
2841
2841
···
3042
3050
return mtu;
3043
3051
}
3044
3052
3053
+
static void mvneta_percpu_enable(void *arg)
3054
+
{
3055
+
struct mvneta_port *pp = arg;
3056
+
3057
+
enable_percpu_irq(pp->dev->irq, IRQ_TYPE_NONE);
3058
+
}
3059
+
3060
+
static void mvneta_percpu_disable(void *arg)
3061
+
{
3062
+
struct mvneta_port *pp = arg;
3063
+
3064
+
disable_percpu_irq(pp->dev->irq);
3065
+
}
3066
+
3045
3067
/* Change the device mtu */
3046
3068
static int mvneta_change_mtu(struct net_device *dev, int mtu)
3047
3069
{
···
3080
3074
* reallocation of the queues
3081
3075
*/
3082
3076
mvneta_stop_dev(pp);
3077
+
on_each_cpu(mvneta_percpu_disable, pp, true);
3083
3078
3084
3079
mvneta_cleanup_txqs(pp);
3085
3080
mvneta_cleanup_rxqs(pp);
···
3104
3097
return ret;
3105
3098
}
3106
3099
3100
+
on_each_cpu(mvneta_percpu_enable, pp, true);
3107
3101
mvneta_start_dev(pp);
3108
3102
mvneta_port_up(pp);
3109
3103
···
3256
3248
{
3257
3249
phy_disconnect(pp->phy_dev);
3258
3250
pp->phy_dev = NULL;
3259
-
}
3260
-
3261
-
static void mvneta_percpu_enable(void *arg)
3262
-
{
3263
-
struct mvneta_port *pp = arg;
3264
-
3265
-
enable_percpu_irq(pp->dev->irq, IRQ_TYPE_NONE);
3266
-
}
3267
-
3268
-
static void mvneta_percpu_disable(void *arg)
3269
-
{
3270
-
struct mvneta_port *pp = arg;
3271
-
3272
-
disable_percpu_irq(pp->dev->irq);
3273
3251
}
3274
3252
3275
3253
/* Electing a CPU must be done in an atomic way: it should be done
+4
-14
drivers/net/ethernet/marvell/mvpp2.c
+4
-14
drivers/net/ethernet/marvell/mvpp2.c
···
321
321
/* Lbtd 802.3 type */
322
322
#define MVPP2_IP_LBDT_TYPE 0xfffa
323
323
324
-
#define MVPP2_CPU_D_CACHE_LINE_SIZE 32
325
324
#define MVPP2_TX_CSUM_MAX_SIZE 9800
326
325
327
326
/* Timeout constants */
···
376
377
377
378
#define MVPP2_RX_PKT_SIZE(mtu) \
378
379
ALIGN((mtu) + MVPP2_MH_SIZE + MVPP2_VLAN_TAG_LEN + \
379
-
ETH_HLEN + ETH_FCS_LEN, MVPP2_CPU_D_CACHE_LINE_SIZE)
380
+
ETH_HLEN + ETH_FCS_LEN, cache_line_size())
380
381
381
382
#define MVPP2_RX_BUF_SIZE(pkt_size) ((pkt_size) + NET_SKB_PAD)
382
383
#define MVPP2_RX_TOTAL_SIZE(buf_size) ((buf_size) + MVPP2_SKB_SHINFO_SIZE)
···
4492
4493
if (!aggr_txq->descs)
4493
4494
return -ENOMEM;
4494
4495
4495
-
/* Make sure descriptor address is cache line size aligned */
4496
-
BUG_ON(aggr_txq->descs !=
4497
-
PTR_ALIGN(aggr_txq->descs, MVPP2_CPU_D_CACHE_LINE_SIZE));
4498
-
4499
4496
aggr_txq->last_desc = aggr_txq->size - 1;
4500
4497
4501
4498
/* Aggr TXQ no reset WA */
···
4520
4525
&rxq->descs_phys, GFP_KERNEL);
4521
4526
if (!rxq->descs)
4522
4527
return -ENOMEM;
4523
-
4524
-
BUG_ON(rxq->descs !=
4525
-
PTR_ALIGN(rxq->descs, MVPP2_CPU_D_CACHE_LINE_SIZE));
4526
4528
4527
4529
rxq->last_desc = rxq->size - 1;
4528
4530
···
4607
4615
&txq->descs_phys, GFP_KERNEL);
4608
4616
if (!txq->descs)
4609
4617
return -ENOMEM;
4610
-
4611
-
/* Make sure descriptor address is cache line size aligned */
4612
-
BUG_ON(txq->descs !=
4613
-
PTR_ALIGN(txq->descs, MVPP2_CPU_D_CACHE_LINE_SIZE));
4614
4618
4615
4619
txq->last_desc = txq->size - 1;
4616
4620
···
6047
6059
6048
6060
/* Map physical Rx queue to port's logical Rx queue */
6049
6061
rxq = devm_kzalloc(dev, sizeof(*rxq), GFP_KERNEL);
6050
-
if (!rxq)
6062
+
if (!rxq) {
6063
+
err = -ENOMEM;
6051
6064
goto err_free_percpu;
6065
+
}
6052
6066
/* Map this Rx queue to a physical queue */
6053
6067
rxq->id = port->first_rxq + queue;
6054
6068
rxq->port = port->id;
+1
-1
drivers/net/ethernet/qlogic/qed/qed_int.c
+1
-1
drivers/net/ethernet/qlogic/qed/qed_int.c
···
2750
2750
int qed_int_igu_enable(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
2751
2751
enum qed_int_mode int_mode)
2752
2752
{
2753
-
int rc;
2753
+
int rc = 0;
2754
2754
2755
2755
/* Configure AEU signal change to produce attentions */
2756
2756
qed_wr(p_hwfn, p_ptt, IGU_REG_ATTENTION_ENABLE, 0);
+1
-1
drivers/net/ethernet/qlogic/qlge/qlge.h
+1
-1
drivers/net/ethernet/qlogic/qlge/qlge.h
+1
-1
drivers/net/ethernet/renesas/ravb_main.c
+1
-1
drivers/net/ethernet/renesas/ravb_main.c
···
1377
1377
1378
1378
/* TAG and timestamp required flag */
1379
1379
skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
1380
-
skb_tx_timestamp(skb);
1381
1380
desc->tagh_tsr = (ts_skb->tag >> 4) | TX_TSR;
1382
1381
desc->ds_tagl |= le16_to_cpu(ts_skb->tag << 12);
1383
1382
}
1384
1383
1384
+
skb_tx_timestamp(skb);
1385
1385
/* Descriptor type must be set after all the above writes */
1386
1386
dma_wmb();
1387
1387
desc->die_dt = DT_FEND;
+2
-2
drivers/net/ethernet/samsung/sxgbe/sxgbe_platform.c
+2
-2
drivers/net/ethernet/samsung/sxgbe/sxgbe_platform.c
···
155
155
return 0;
156
156
157
157
err_rx_irq_unmap:
158
-
while (--i)
158
+
while (i--)
159
159
irq_dispose_mapping(priv->rxq[i]->irq_no);
160
160
i = SXGBE_TX_QUEUES;
161
161
err_tx_irq_unmap:
162
-
while (--i)
162
+
while (i--)
163
163
irq_dispose_mapping(priv->txq[i]->irq_no);
164
164
irq_dispose_mapping(priv->irq);
165
165
err_drv_remove:
+8
-8
drivers/net/ethernet/stmicro/stmmac/norm_desc.c
+8
-8
drivers/net/ethernet/stmicro/stmmac/norm_desc.c
···
199
199
{
200
200
unsigned int tdes1 = p->des1;
201
201
202
-
if (mode == STMMAC_CHAIN_MODE)
203
-
norm_set_tx_desc_len_on_chain(p, len);
204
-
else
205
-
norm_set_tx_desc_len_on_ring(p, len);
206
-
207
202
if (is_fs)
208
203
tdes1 |= TDES1_FIRST_SEGMENT;
209
204
else
···
212
217
if (ls)
213
218
tdes1 |= TDES1_LAST_SEGMENT;
214
219
215
-
if (tx_own)
216
-
tdes1 |= TDES0_OWN;
217
-
218
220
p->des1 = tdes1;
221
+
222
+
if (mode == STMMAC_CHAIN_MODE)
223
+
norm_set_tx_desc_len_on_chain(p, len);
224
+
else
225
+
norm_set_tx_desc_len_on_ring(p, len);
226
+
227
+
if (tx_own)
228
+
p->des0 |= TDES0_OWN;
219
229
}
220
230
221
231
static void ndesc_set_tx_ic(struct dma_desc *p)
+5
-11
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+5
-11
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
···
278
278
*/
279
279
bool stmmac_eee_init(struct stmmac_priv *priv)
280
280
{
281
-
char *phy_bus_name = priv->plat->phy_bus_name;
282
281
unsigned long flags;
283
282
bool ret = false;
284
283
···
289
290
goto out;
290
291
291
292
/* Never init EEE in case of a switch is attached */
292
-
if (phy_bus_name && (!strcmp(phy_bus_name, "fixed")))
293
+
if (priv->phydev->is_pseudo_fixed_link)
293
294
goto out;
294
295
295
296
/* MAC core supports the EEE feature. */
···
826
827
phydev = of_phy_connect(dev, priv->plat->phy_node,
827
828
&stmmac_adjust_link, 0, interface);
828
829
} else {
829
-
if (priv->plat->phy_bus_name)
830
-
snprintf(bus_id, MII_BUS_ID_SIZE, "%s-%x",
831
-
priv->plat->phy_bus_name, priv->plat->bus_id);
832
-
else
833
-
snprintf(bus_id, MII_BUS_ID_SIZE, "stmmac-%x",
834
-
priv->plat->bus_id);
830
+
snprintf(bus_id, MII_BUS_ID_SIZE, "stmmac-%x",
831
+
priv->plat->bus_id);
835
832
836
833
snprintf(phy_id_fmt, MII_BUS_ID_SIZE + 3, PHY_ID_FMT, bus_id,
837
834
priv->plat->phy_addr);
···
866
871
}
867
872
868
873
/* If attached to a switch, there is no reason to poll phy handler */
869
-
if (priv->plat->phy_bus_name)
870
-
if (!strcmp(priv->plat->phy_bus_name, "fixed"))
871
-
phydev->irq = PHY_IGNORE_INTERRUPT;
874
+
if (phydev->is_pseudo_fixed_link)
875
+
phydev->irq = PHY_IGNORE_INTERRUPT;
872
876
873
877
pr_debug("stmmac_init_phy: %s: attached to PHY (UID 0x%x)"
874
878
" Link = %d\n", dev->name, phydev->phy_id, phydev->link);
+1
-9
drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
+1
-9
drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
···
198
198
struct mii_bus *new_bus;
199
199
struct stmmac_priv *priv = netdev_priv(ndev);
200
200
struct stmmac_mdio_bus_data *mdio_bus_data = priv->plat->mdio_bus_data;
201
-
int addr, found;
202
201
struct device_node *mdio_node = priv->plat->mdio_node;
202
+
int addr, found;
203
203
204
204
if (!mdio_bus_data)
205
205
return 0;
206
-
207
-
if (IS_ENABLED(CONFIG_OF)) {
208
-
if (mdio_node) {
209
-
netdev_dbg(ndev, "FOUND MDIO subnode\n");
210
-
} else {
211
-
netdev_warn(ndev, "No MDIO subnode found\n");
212
-
}
213
-
}
214
206
215
207
new_bus = mdiobus_alloc();
216
208
if (new_bus == NULL)
+66
-25
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+66
-25
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
···
132
132
}
133
133
134
134
/**
135
+
* stmmac_dt_phy - parse device-tree driver parameters to allocate PHY resources
136
+
* @plat: driver data platform structure
137
+
* @np: device tree node
138
+
* @dev: device pointer
139
+
* Description:
140
+
* The mdio bus will be allocated in case of a phy transceiver is on board;
141
+
* it will be NULL if the fixed-link is configured.
142
+
* If there is the "snps,dwmac-mdio" sub-node the mdio will be allocated
143
+
* in any case (for DSA, mdio must be registered even if fixed-link).
144
+
* The table below sums the supported configurations:
145
+
* -------------------------------
146
+
* snps,phy-addr | Y
147
+
* -------------------------------
148
+
* phy-handle | Y
149
+
* -------------------------------
150
+
* fixed-link | N
151
+
* -------------------------------
152
+
* snps,dwmac-mdio |
153
+
* even if | Y
154
+
* fixed-link |
155
+
* -------------------------------
156
+
*
157
+
* It returns 0 in case of success otherwise -ENODEV.
158
+
*/
159
+
static int stmmac_dt_phy(struct plat_stmmacenet_data *plat,
160
+
struct device_node *np, struct device *dev)
161
+
{
162
+
bool mdio = true;
163
+
164
+
/* If phy-handle property is passed from DT, use it as the PHY */
165
+
plat->phy_node = of_parse_phandle(np, "phy-handle", 0);
166
+
if (plat->phy_node)
167
+
dev_dbg(dev, "Found phy-handle subnode\n");
168
+
169
+
/* If phy-handle is not specified, check if we have a fixed-phy */
170
+
if (!plat->phy_node && of_phy_is_fixed_link(np)) {
171
+
if ((of_phy_register_fixed_link(np) < 0))
172
+
return -ENODEV;
173
+
174
+
dev_dbg(dev, "Found fixed-link subnode\n");
175
+
plat->phy_node = of_node_get(np);
176
+
mdio = false;
177
+
}
178
+
179
+
/* If snps,dwmac-mdio is passed from DT, always register the MDIO */
180
+
for_each_child_of_node(np, plat->mdio_node) {
181
+
if (of_device_is_compatible(plat->mdio_node, "snps,dwmac-mdio"))
182
+
break;
183
+
}
184
+
185
+
if (plat->mdio_node) {
186
+
dev_dbg(dev, "Found MDIO subnode\n");
187
+
mdio = true;
188
+
}
189
+
190
+
if (mdio)
191
+
plat->mdio_bus_data =
192
+
devm_kzalloc(dev, sizeof(struct stmmac_mdio_bus_data),
193
+
GFP_KERNEL);
194
+
return 0;
195
+
}
196
+
197
+
/**
135
198
* stmmac_probe_config_dt - parse device-tree driver parameters
136
199
* @pdev: platform_device structure
137
200
* @plat: driver data platform structure
···
209
146
struct device_node *np = pdev->dev.of_node;
210
147
struct plat_stmmacenet_data *plat;
211
148
struct stmmac_dma_cfg *dma_cfg;
212
-
struct device_node *child_node = NULL;
213
149
214
150
plat = devm_kzalloc(&pdev->dev, sizeof(*plat), GFP_KERNEL);
215
151
if (!plat)
···
228
166
/* Default to phy auto-detection */
229
167
plat->phy_addr = -1;
230
168
231
-
/* If we find a phy-handle property, use it as the PHY */
232
-
plat->phy_node = of_parse_phandle(np, "phy-handle", 0);
233
-
234
-
/* If phy-handle is not specified, check if we have a fixed-phy */
235
-
if (!plat->phy_node && of_phy_is_fixed_link(np)) {
236
-
if ((of_phy_register_fixed_link(np) < 0))
237
-
return ERR_PTR(-ENODEV);
238
-
239
-
plat->phy_node = of_node_get(np);
240
-
}
241
-
242
-
for_each_child_of_node(np, child_node)
243
-
if (of_device_is_compatible(child_node, "snps,dwmac-mdio")) {
244
-
plat->mdio_node = child_node;
245
-
break;
246
-
}
247
-
248
169
/* "snps,phy-addr" is not a standard property. Mark it as deprecated
249
170
* and warn of its use. Remove this when phy node support is added.
250
171
*/
251
172
if (of_property_read_u32(np, "snps,phy-addr", &plat->phy_addr) == 0)
252
173
dev_warn(&pdev->dev, "snps,phy-addr property is deprecated\n");
253
174
254
-
if ((plat->phy_node && !of_phy_is_fixed_link(np)) || !plat->mdio_node)
255
-
plat->mdio_bus_data = NULL;
256
-
else
257
-
plat->mdio_bus_data =
258
-
devm_kzalloc(&pdev->dev,
259
-
sizeof(struct stmmac_mdio_bus_data),
260
-
GFP_KERNEL);
175
+
/* To Configure PHY by using all device-tree supported properties */
176
+
if (stmmac_dt_phy(plat, np, &pdev->dev))
177
+
return ERR_PTR(-ENODEV);
261
178
262
179
of_property_read_u32(np, "tx-fifo-depth", &plat->tx_fifo_size);
263
180
+4
drivers/net/phy/bcm7xxx.c
+4
drivers/net/phy/bcm7xxx.c
···
339
339
BCM7XXX_28NM_GPHY(PHY_ID_BCM7439, "Broadcom BCM7439"),
340
340
BCM7XXX_28NM_GPHY(PHY_ID_BCM7439_2, "Broadcom BCM7439 (2)"),
341
341
BCM7XXX_28NM_GPHY(PHY_ID_BCM7445, "Broadcom BCM7445"),
342
+
BCM7XXX_40NM_EPHY(PHY_ID_BCM7346, "Broadcom BCM7346"),
343
+
BCM7XXX_40NM_EPHY(PHY_ID_BCM7362, "Broadcom BCM7362"),
342
344
BCM7XXX_40NM_EPHY(PHY_ID_BCM7425, "Broadcom BCM7425"),
343
345
BCM7XXX_40NM_EPHY(PHY_ID_BCM7429, "Broadcom BCM7429"),
344
346
BCM7XXX_40NM_EPHY(PHY_ID_BCM7435, "Broadcom BCM7435"),
···
350
348
{ PHY_ID_BCM7250, 0xfffffff0, },
351
349
{ PHY_ID_BCM7364, 0xfffffff0, },
352
350
{ PHY_ID_BCM7366, 0xfffffff0, },
351
+
{ PHY_ID_BCM7346, 0xfffffff0, },
352
+
{ PHY_ID_BCM7362, 0xfffffff0, },
353
353
{ PHY_ID_BCM7425, 0xfffffff0, },
354
354
{ PHY_ID_BCM7429, 0xfffffff0, },
355
355
{ PHY_ID_BCM7439, 0xfffffff0, },
+5
drivers/net/team/team.c
+5
drivers/net/team/team.c
···
1198
1198
goto err_dev_open;
1199
1199
}
1200
1200
1201
+
dev_uc_sync_multiple(port_dev, dev);
1202
+
dev_mc_sync_multiple(port_dev, dev);
1203
+
1201
1204
err = vlan_vids_add_by_dev(port_dev, dev);
1202
1205
if (err) {
1203
1206
netdev_err(dev, "Failed to add vlan ids to device %s\n",
···
1264
1261
vlan_vids_del_by_dev(port_dev, dev);
1265
1262
1266
1263
err_vids_add:
1264
+
dev_uc_unsync(port_dev, dev);
1265
+
dev_mc_unsync(port_dev, dev);
1267
1266
dev_close(port_dev);
1268
1267
1269
1268
err_dev_open:
+5
-3
drivers/net/tun.c
+5
-3
drivers/net/tun.c
···
622
622
623
623
/* Re-attach the filter to persist device */
624
624
if (!skip_filter && (tun->filter_attached == true)) {
625
-
err = sk_attach_filter(&tun->fprog, tfile->socket.sk);
625
+
err = __sk_attach_filter(&tun->fprog, tfile->socket.sk,
626
+
lockdep_rtnl_is_held());
626
627
if (!err)
627
628
goto out;
628
629
}
···
1823
1822
1824
1823
for (i = 0; i < n; i++) {
1825
1824
tfile = rtnl_dereference(tun->tfiles[i]);
1826
-
sk_detach_filter(tfile->socket.sk);
1825
+
__sk_detach_filter(tfile->socket.sk, lockdep_rtnl_is_held());
1827
1826
}
1828
1827
1829
1828
tun->filter_attached = false;
···
1836
1835
1837
1836
for (i = 0; i < tun->numqueues; i++) {
1838
1837
tfile = rtnl_dereference(tun->tfiles[i]);
1839
-
ret = sk_attach_filter(&tun->fprog, tfile->socket.sk);
1838
+
ret = __sk_attach_filter(&tun->fprog, tfile->socket.sk,
1839
+
lockdep_rtnl_is_held());
1840
1840
if (ret) {
1841
1841
tun_detach_filter(tun, i);
1842
1842
return ret;
+7
drivers/net/usb/cdc_ncm.c
···
1626
1626
.driver_info = (unsigned long) &wwan_info,
1627
1627
},
1628
1628
1629
+
/* Telit LE910 V2 */
1630
+
{ USB_DEVICE_AND_INTERFACE_INFO(0x1bc7, 0x0036,
1631
+
USB_CLASS_COMM,
1632
+
USB_CDC_SUBCLASS_NCM, USB_CDC_PROTO_NONE),
1633
+
.driver_info = (unsigned long)&wwan_noarp_info,
1634
+
},
1635
+
1629
1636
/* DW5812 LTE Verizon Mobile Broadband Card
1630
1637
* Unlike DW5550 this device requires FLAG_NOARP
1631
1638
*/
+1
-1
drivers/net/usb/plusb.c
···
38
38
* HEADS UP: this handshaking isn't all that robust. This driver
39
39
* gets confused easily if you unplug one end of the cable then
40
40
* try to connect it again; you'll need to restart both ends. The
41
-
* "naplink" software (used by some PlayStation/2 deveopers) does
41
+
* "naplink" software (used by some PlayStation/2 developers) does
42
42
* the handshaking much better! Also, sometimes this hardware
43
43
* seems to get wedged under load. Prolific docs are weak, and
44
44
* don't identify differences between PL2301 and PL2302, much less
+1
drivers/net/usb/qmi_wwan.c
···
844
844
{QMI_FIXED_INTF(0x19d2, 0x1426, 2)}, /* ZTE MF91 */
845
845
{QMI_FIXED_INTF(0x19d2, 0x1428, 2)}, /* Telewell TW-LTE 4G v2 */
846
846
{QMI_FIXED_INTF(0x19d2, 0x2002, 4)}, /* ZTE (Vodafone) K3765-Z */
847
+
{QMI_FIXED_INTF(0x2001, 0x7e19, 4)}, /* D-Link DWM-221 B1 */
847
848
{QMI_FIXED_INTF(0x0f3d, 0x68a2, 8)}, /* Sierra Wireless MC7700 */
848
849
{QMI_FIXED_INTF(0x114f, 0x68a2, 8)}, /* Sierra Wireless MC7750 */
849
850
{QMI_FIXED_INTF(0x1199, 0x68a2, 8)}, /* Sierra Wireless MC7710 in QMI mode */
+2
-2
drivers/nvdimm/pmem.c
···
99
99
if (unlikely(bad_pmem))
100
100
rc = -EIO;
101
101
else {
102
-
memcpy_from_pmem(mem + off, pmem_addr, len);
102
+
rc = memcpy_from_pmem(mem + off, pmem_addr, len);
103
103
flush_dcache_page(page);
104
104
}
105
105
} else {
···
295
295
296
296
if (unlikely(is_bad_pmem(&pmem->bb, offset / 512, sz_align)))
297
297
return -EIO;
298
-
memcpy_from_pmem(buf, pmem->virt_addr + offset, size);
298
+
return memcpy_from_pmem(buf, pmem->virt_addr + offset, size);
299
299
} else {
300
300
memcpy_to_pmem(pmem->virt_addr + offset, buf, size);
301
301
wmb_pmem();
+1
-2
drivers/platform/goldfish/goldfish_pipe.c
···
309
309
* much memory to the process.
310
310
*/
311
311
down_read(&current->mm->mmap_sem);
312
-
ret = get_user_pages(current, current->mm, address, 1,
313
-
!is_write, 0, &page, NULL);
312
+
ret = get_user_pages(address, 1, !is_write, 0, &page, NULL);
314
313
up_read(&current->mm->mmap_sem);
315
314
if (ret < 0)
316
315
break;
+1
-1
drivers/rapidio/devices/rio_mport_cdev.c
+2
-2
drivers/remoteproc/st_remoteproc.c
···
189
189
}
190
190
191
191
ddata->boot_base = syscon_regmap_lookup_by_phandle(np, "st,syscfg");
192
-
if (!ddata->boot_base) {
192
+
if (IS_ERR(ddata->boot_base)) {
193
193
dev_err(dev, "Boot base not found\n");
194
-
return -EINVAL;
194
+
return PTR_ERR(ddata->boot_base);
195
195
}
196
196
197
197
err = of_property_read_u32_index(np, "st,syscfg", 1,
+53
-173
drivers/s390/block/dasd_alias.c
···
317
317
struct alias_pav_group *group;
318
318
struct dasd_uid uid;
319
319
320
+
spin_lock(get_ccwdev_lock(device->cdev));
320
321
private->uid.type = lcu->uac->unit[private->uid.real_unit_addr].ua_type;
321
322
private->uid.base_unit_addr =
322
323
lcu->uac->unit[private->uid.real_unit_addr].base_ua;
323
324
uid = private->uid;
324
-
325
+
spin_unlock(get_ccwdev_lock(device->cdev));
325
326
/* if we have no PAV anyway, we don't need to bother with PAV groups */
326
327
if (lcu->pav == NO_PAV) {
327
328
list_move(&device->alias_list, &lcu->active_devices);
328
329
return 0;
329
330
}
330
-
331
331
group = _find_group(lcu, &uid);
332
332
if (!group) {
333
333
group = kzalloc(sizeof(*group), GFP_ATOMIC);
···
395
395
return 1;
396
396
397
397
return 0;
398
-
}
399
-
400
-
/*
401
-
* This function tries to lock all devices on an lcu via trylock
402
-
* return NULL on success otherwise return first failed device
403
-
*/
404
-
static struct dasd_device *_trylock_all_devices_on_lcu(struct alias_lcu *lcu,
405
-
struct dasd_device *pos)
406
-
407
-
{
408
-
struct alias_pav_group *pavgroup;
409
-
struct dasd_device *device;
410
-
411
-
list_for_each_entry(device, &lcu->active_devices, alias_list) {
412
-
if (device == pos)
413
-
continue;
414
-
if (!spin_trylock(get_ccwdev_lock(device->cdev)))
415
-
return device;
416
-
}
417
-
list_for_each_entry(device, &lcu->inactive_devices, alias_list) {
418
-
if (device == pos)
419
-
continue;
420
-
if (!spin_trylock(get_ccwdev_lock(device->cdev)))
421
-
return device;
422
-
}
423
-
list_for_each_entry(pavgroup, &lcu->grouplist, group) {
424
-
list_for_each_entry(device, &pavgroup->baselist, alias_list) {
425
-
if (device == pos)
426
-
continue;
427
-
if (!spin_trylock(get_ccwdev_lock(device->cdev)))
428
-
return device;
429
-
}
430
-
list_for_each_entry(device, &pavgroup->aliaslist, alias_list) {
431
-
if (device == pos)
432
-
continue;
433
-
if (!spin_trylock(get_ccwdev_lock(device->cdev)))
434
-
return device;
435
-
}
436
-
}
437
-
return NULL;
438
-
}
439
-
440
-
/*
441
-
* unlock all devices except the one that is specified as pos
442
-
* stop if enddev is specified and reached
443
-
*/
444
-
static void _unlock_all_devices_on_lcu(struct alias_lcu *lcu,
445
-
struct dasd_device *pos,
446
-
struct dasd_device *enddev)
447
-
448
-
{
449
-
struct alias_pav_group *pavgroup;
450
-
struct dasd_device *device;
451
-
452
-
list_for_each_entry(device, &lcu->active_devices, alias_list) {
453
-
if (device == pos)
454
-
continue;
455
-
if (device == enddev)
456
-
return;
457
-
spin_unlock(get_ccwdev_lock(device->cdev));
458
-
}
459
-
list_for_each_entry(device, &lcu->inactive_devices, alias_list) {
460
-
if (device == pos)
461
-
continue;
462
-
if (device == enddev)
463
-
return;
464
-
spin_unlock(get_ccwdev_lock(device->cdev));
465
-
}
466
-
list_for_each_entry(pavgroup, &lcu->grouplist, group) {
467
-
list_for_each_entry(device, &pavgroup->baselist, alias_list) {
468
-
if (device == pos)
469
-
continue;
470
-
if (device == enddev)
471
-
return;
472
-
spin_unlock(get_ccwdev_lock(device->cdev));
473
-
}
474
-
list_for_each_entry(device, &pavgroup->aliaslist, alias_list) {
475
-
if (device == pos)
476
-
continue;
477
-
if (device == enddev)
478
-
return;
479
-
spin_unlock(get_ccwdev_lock(device->cdev));
480
-
}
481
-
}
482
-
}
483
-
484
-
/*
485
-
* this function is needed because the locking order
486
-
* device lock -> lcu lock
487
-
* needs to be assured when iterating over devices in an LCU
488
-
*
489
-
* if a device is specified in pos then the device lock is already hold
490
-
*/
491
-
static void _trylock_and_lock_lcu_irqsave(struct alias_lcu *lcu,
492
-
struct dasd_device *pos,
493
-
unsigned long *flags)
494
-
{
495
-
struct dasd_device *failed;
496
-
497
-
do {
498
-
spin_lock_irqsave(&lcu->lock, *flags);
499
-
failed = _trylock_all_devices_on_lcu(lcu, pos);
500
-
if (failed) {
501
-
_unlock_all_devices_on_lcu(lcu, pos, failed);
502
-
spin_unlock_irqrestore(&lcu->lock, *flags);
503
-
cpu_relax();
504
-
}
505
-
} while (failed);
506
-
}
507
-
508
-
static void _trylock_and_lock_lcu(struct alias_lcu *lcu,
509
-
struct dasd_device *pos)
510
-
{
511
-
struct dasd_device *failed;
512
-
513
-
do {
514
-
spin_lock(&lcu->lock);
515
-
failed = _trylock_all_devices_on_lcu(lcu, pos);
516
-
if (failed) {
517
-
_unlock_all_devices_on_lcu(lcu, pos, failed);
518
-
spin_unlock(&lcu->lock);
519
-
cpu_relax();
520
-
}
521
-
} while (failed);
522
398
}
523
399
524
400
static int read_unit_address_configuration(struct dasd_device *device,
···
491
615
if (rc)
492
616
return rc;
493
617
494
-
_trylock_and_lock_lcu_irqsave(lcu, NULL, &flags);
618
+
spin_lock_irqsave(&lcu->lock, flags);
495
619
lcu->pav = NO_PAV;
496
620
for (i = 0; i < MAX_DEVICES_PER_LCU; ++i) {
497
621
switch (lcu->uac->unit[i].ua_type) {
···
510
634
alias_list) {
511
635
_add_device_to_lcu(lcu, device, refdev);
512
636
}
513
-
_unlock_all_devices_on_lcu(lcu, NULL, NULL);
514
637
spin_unlock_irqrestore(&lcu->lock, flags);
515
638
return 0;
516
639
}
···
597
722
598
723
lcu = private->lcu;
599
724
rc = 0;
600
-
spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
601
-
spin_lock(&lcu->lock);
725
+
spin_lock_irqsave(&lcu->lock, flags);
602
726
if (!(lcu->flags & UPDATE_PENDING)) {
603
727
rc = _add_device_to_lcu(lcu, device, device);
604
728
if (rc)
···
607
733
list_move(&device->alias_list, &lcu->active_devices);
608
734
_schedule_lcu_update(lcu, device);
609
735
}
610
-
spin_unlock(&lcu->lock);
611
-
spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
736
+
spin_unlock_irqrestore(&lcu->lock, flags);
612
737
return rc;
613
738
}
614
739
···
806
933
struct alias_pav_group *pavgroup;
807
934
struct dasd_device *device;
808
935
809
-
list_for_each_entry(device, &lcu->active_devices, alias_list)
936
+
list_for_each_entry(device, &lcu->active_devices, alias_list) {
937
+
spin_lock(get_ccwdev_lock(device->cdev));
810
938
dasd_device_set_stop_bits(device, DASD_STOPPED_SU);
811
-
list_for_each_entry(device, &lcu->inactive_devices, alias_list)
939
+
spin_unlock(get_ccwdev_lock(device->cdev));
940
+
}
941
+
list_for_each_entry(device, &lcu->inactive_devices, alias_list) {
942
+
spin_lock(get_ccwdev_lock(device->cdev));
812
943
dasd_device_set_stop_bits(device, DASD_STOPPED_SU);
944
+
spin_unlock(get_ccwdev_lock(device->cdev));
945
+
}
813
946
list_for_each_entry(pavgroup, &lcu->grouplist, group) {
814
-
list_for_each_entry(device, &pavgroup->baselist, alias_list)
947
+
list_for_each_entry(device, &pavgroup->baselist, alias_list) {
948
+
spin_lock(get_ccwdev_lock(device->cdev));
815
949
dasd_device_set_stop_bits(device, DASD_STOPPED_SU);
816
-
list_for_each_entry(device, &pavgroup->aliaslist, alias_list)
950
+
spin_unlock(get_ccwdev_lock(device->cdev));
951
+
}
952
+
list_for_each_entry(device, &pavgroup->aliaslist, alias_list) {
953
+
spin_lock(get_ccwdev_lock(device->cdev));
817
954
dasd_device_set_stop_bits(device, DASD_STOPPED_SU);
955
+
spin_unlock(get_ccwdev_lock(device->cdev));
956
+
}
818
957
}
819
958
}
820
959
···
835
950
struct alias_pav_group *pavgroup;
836
951
struct dasd_device *device;
837
952
838
-
list_for_each_entry(device, &lcu->active_devices, alias_list)
953
+
list_for_each_entry(device, &lcu->active_devices, alias_list) {
954
+
spin_lock(get_ccwdev_lock(device->cdev));
839
955
dasd_device_remove_stop_bits(device, DASD_STOPPED_SU);
840
-
list_for_each_entry(device, &lcu->inactive_devices, alias_list)
956
+
spin_unlock(get_ccwdev_lock(device->cdev));
957
+
}
958
+
list_for_each_entry(device, &lcu->inactive_devices, alias_list) {
959
+
spin_lock(get_ccwdev_lock(device->cdev));
841
960
dasd_device_remove_stop_bits(device, DASD_STOPPED_SU);
961
+
spin_unlock(get_ccwdev_lock(device->cdev));
962
+
}
842
963
list_for_each_entry(pavgroup, &lcu->grouplist, group) {
843
-
list_for_each_entry(device, &pavgroup->baselist, alias_list)
964
+
list_for_each_entry(device, &pavgroup->baselist, alias_list) {
965
+
spin_lock(get_ccwdev_lock(device->cdev));
844
966
dasd_device_remove_stop_bits(device, DASD_STOPPED_SU);
845
-
list_for_each_entry(device, &pavgroup->aliaslist, alias_list)
967
+
spin_unlock(get_ccwdev_lock(device->cdev));
968
+
}
969
+
list_for_each_entry(device, &pavgroup->aliaslist, alias_list) {
970
+
spin_lock(get_ccwdev_lock(device->cdev));
846
971
dasd_device_remove_stop_bits(device, DASD_STOPPED_SU);
972
+
spin_unlock(get_ccwdev_lock(device->cdev));
973
+
}
847
974
}
848
975
}
849
976
···
881
984
spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
882
985
reset_summary_unit_check(lcu, device, suc_data->reason);
883
986
884
-
_trylock_and_lock_lcu_irqsave(lcu, NULL, &flags);
987
+
spin_lock_irqsave(&lcu->lock, flags);
885
988
_unstop_all_devices_on_lcu(lcu);
886
989
_restart_all_base_devices_on_lcu(lcu);
887
990
/* 3. read new alias configuration */
888
991
_schedule_lcu_update(lcu, device);
889
992
lcu->suc_data.device = NULL;
890
993
dasd_put_device(device);
891
-
_unlock_all_devices_on_lcu(lcu, NULL, NULL);
892
994
spin_unlock_irqrestore(&lcu->lock, flags);
893
995
}
894
996
895
-
/*
896
-
* note: this will be called from int handler context (cdev locked)
897
-
*/
898
-
void dasd_alias_handle_summary_unit_check(struct dasd_device *device,
899
-
struct irb *irb)
997
+
void dasd_alias_handle_summary_unit_check(struct work_struct *work)
900
998
{
999
+
struct dasd_device *device = container_of(work, struct dasd_device,
1000
+
suc_work);
901
1001
struct dasd_eckd_private *private = device->private;
902
1002
struct alias_lcu *lcu;
903
-
char reason;
904
-
char *sense;
905
-
906
-
sense = dasd_get_sense(irb);
907
-
if (sense) {
908
-
reason = sense[8];
909
-
DBF_DEV_EVENT(DBF_NOTICE, device, "%s %x",
910
-
"eckd handle summary unit check: reason", reason);
911
-
} else {
912
-
DBF_DEV_EVENT(DBF_WARNING, device, "%s",
913
-
"eckd handle summary unit check:"
914
-
" no reason code available");
915
-
return;
916
-
}
1003
+
unsigned long flags;
917
1004
918
1005
lcu = private->lcu;
919
1006
if (!lcu) {
920
1007
DBF_DEV_EVENT(DBF_WARNING, device, "%s",
921
1008
"device not ready to handle summary"
922
1009
" unit check (no lcu structure)");
923
-
return;
1010
+
goto out;
924
1011
}
925
-
_trylock_and_lock_lcu(lcu, device);
1012
+
spin_lock_irqsave(&lcu->lock, flags);
926
1013
/* If this device is about to be removed just return and wait for
927
1014
* the next interrupt on a different device
928
1015
*/
···
914
1033
DBF_DEV_EVENT(DBF_WARNING, device, "%s",
915
1034
"device is in offline processing,"
916
1035
" don't do summary unit check handling");
917
-
_unlock_all_devices_on_lcu(lcu, device, NULL);
918
-
spin_unlock(&lcu->lock);
919
-
return;
1036
+
goto out_unlock;
920
1037
}
921
1038
if (lcu->suc_data.device) {
922
1039
/* already scheduled or running */
923
1040
DBF_DEV_EVENT(DBF_WARNING, device, "%s",
924
1041
"previous instance of summary unit check worker"
925
1042
" still pending");
926
-
_unlock_all_devices_on_lcu(lcu, device, NULL);
927
-
spin_unlock(&lcu->lock);
928
-
return ;
1043
+
goto out_unlock;
929
1044
}
930
1045
_stop_all_devices_on_lcu(lcu);
931
1046
/* prepare for lcu_update */
932
-
private->lcu->flags |= NEED_UAC_UPDATE | UPDATE_PENDING;
933
-
lcu->suc_data.reason = reason;
1047
+
lcu->flags |= NEED_UAC_UPDATE | UPDATE_PENDING;
1048
+
lcu->suc_data.reason = private->suc_reason;
934
1049
lcu->suc_data.device = device;
935
1050
dasd_get_device(device);
936
-
_unlock_all_devices_on_lcu(lcu, device, NULL);
937
-
spin_unlock(&lcu->lock);
938
1051
if (!schedule_work(&lcu->suc_data.worker))
939
1052
dasd_put_device(device);
1053
+
out_unlock:
1054
+
spin_unlock_irqrestore(&lcu->lock, flags);
1055
+
out:
1056
+
clear_bit(DASD_FLAG_SUC, &device->flags);
1057
+
dasd_put_device(device);
940
1058
};
+29
-9
drivers/s390/block/dasd_eckd.c
···
1682
1682
1683
1683
/* setup work queue for validate server*/
1684
1684
INIT_WORK(&device->kick_validate, dasd_eckd_do_validate_server);
1685
+
/* setup work queue for summary unit check */
1686
+
INIT_WORK(&device->suc_work, dasd_alias_handle_summary_unit_check);
1685
1687
1686
1688
if (!ccw_device_is_pathgroup(device->cdev)) {
1687
1689
dev_warn(&device->cdev->dev,
···
2551
2549
device->state == DASD_STATE_ONLINE &&
2552
2550
!test_bit(DASD_FLAG_OFFLINE, &device->flags) &&
2553
2551
!test_bit(DASD_FLAG_SUSPENDED, &device->flags)) {
2554
-
/*
2555
-
* the state change could be caused by an alias
2556
-
* reassignment remove device from alias handling
2557
-
* to prevent new requests from being scheduled on
2558
-
* the wrong alias device
2559
-
*/
2560
-
dasd_alias_remove_device(device);
2561
-
2562
2552
/* schedule worker to reload device */
2563
2553
dasd_reload_device(device);
2564
2554
}
···
2565
2571
/* summary unit check */
2566
2572
if ((sense[27] & DASD_SENSE_BIT_0) && (sense[7] == 0x0D) &&
2567
2573
(scsw_dstat(&irb->scsw) & DEV_STAT_UNIT_CHECK)) {
2568
-
dasd_alias_handle_summary_unit_check(device, irb);
2574
+
if (test_and_set_bit(DASD_FLAG_SUC, &device->flags)) {
2575
+
DBF_DEV_EVENT(DBF_WARNING, device, "%s",
2576
+
"eckd suc: device already notified");
2577
+
return;
2578
+
}
2579
+
sense = dasd_get_sense(irb);
2580
+
if (!sense) {
2581
+
DBF_DEV_EVENT(DBF_WARNING, device, "%s",
2582
+
"eckd suc: no reason code available");
2583
+
clear_bit(DASD_FLAG_SUC, &device->flags);
2584
+
return;
2585
+
2586
+
}
2587
+
private->suc_reason = sense[8];
2588
+
DBF_DEV_EVENT(DBF_NOTICE, device, "%s %x",
2589
+
"eckd handle summary unit check: reason",
2590
+
private->suc_reason);
2591
+
dasd_get_device(device);
2592
+
if (!schedule_work(&device->suc_work))
2593
+
dasd_put_device(device);
2594
+
2569
2595
return;
2570
2596
}
2571
2597
···
4508
4494
char print_uid[60];
4509
4495
struct dasd_uid uid;
4510
4496
unsigned long flags;
4497
+
4498
+
/*
4499
+
* remove device from alias handling to prevent new requests
4500
+
* from being scheduled on the wrong alias device
4501
+
*/
4502
+
dasd_alias_remove_device(device);
4511
4503
4512
4504
spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags);
4513
4505
old_base = private->uid.base_unit_addr;
+2
-1
drivers/s390/block/dasd_eckd.h
···
525
525
int count;
526
526
527
527
u32 fcx_max_data;
528
+
char suc_reason;
528
529
};
529
530
530
531
···
535
534
int dasd_alias_add_device(struct dasd_device *);
536
535
int dasd_alias_remove_device(struct dasd_device *);
537
536
struct dasd_device *dasd_alias_get_start_dev(struct dasd_device *);
538
-
void dasd_alias_handle_summary_unit_check(struct dasd_device *, struct irb *);
537
+
void dasd_alias_handle_summary_unit_check(struct work_struct *);
539
538
void dasd_eckd_reset_ccw_to_base_io(struct dasd_ccw_req *);
540
539
void dasd_alias_lcu_setup_complete(struct dasd_device *);
541
540
void dasd_alias_wait_for_lcu_setup(struct dasd_device *);
+2
drivers/s390/block/dasd_int.h
···
470
470
struct work_struct restore_device;
471
471
struct work_struct reload_device;
472
472
struct work_struct kick_validate;
473
+
struct work_struct suc_work;
473
474
struct timer_list timer;
474
475
475
476
debug_info_t *debug_area;
···
543
542
#define DASD_FLAG_SAFE_OFFLINE_RUNNING 11 /* safe offline running */
544
543
#define DASD_FLAG_ABORTALL 12 /* Abort all noretry requests */
545
544
#define DASD_FLAG_PATH_VERIFY 13 /* Path verification worker running */
545
+
#define DASD_FLAG_SUC 14 /* unhandled summary unit check */
546
546
547
547
#define DASD_SLEEPON_START_TAG ((void *) 1)
548
548
#define DASD_SLEEPON_END_TAG ((void *) 2)
+11
-17
drivers/target/iscsi/iscsi_target_configfs.c
···
779
779
return 0;
780
780
}
781
781
782
-
static void lio_target_cleanup_nodeacl( struct se_node_acl *se_nacl)
783
-
{
784
-
struct iscsi_node_acl *acl = container_of(se_nacl,
785
-
struct iscsi_node_acl, se_node_acl);
786
-
787
-
configfs_remove_default_groups(&acl->se_node_acl.acl_fabric_stat_group);
788
-
}
789
-
790
782
/* End items for lio_target_acl_cit */
791
783
792
784
/* Start items for lio_target_tpg_attrib_cit */
···
1239
1247
if (IS_ERR(tiqn))
1240
1248
return ERR_CAST(tiqn);
1241
1249
1250
+
pr_debug("LIO_Target_ConfigFS: REGISTER -> %s\n", tiqn->tiqn);
1251
+
pr_debug("LIO_Target_ConfigFS: REGISTER -> Allocated Node:"
1252
+
" %s\n", name);
1253
+
return &tiqn->tiqn_wwn;
1254
+
}
1255
+
1256
+
static void lio_target_add_wwn_groups(struct se_wwn *wwn)
1257
+
{
1258
+
struct iscsi_tiqn *tiqn = container_of(wwn, struct iscsi_tiqn, tiqn_wwn);
1259
+
1242
1260
config_group_init_type_name(&tiqn->tiqn_stat_grps.iscsi_instance_group,
1243
1261
"iscsi_instance", &iscsi_stat_instance_cit);
1244
1262
configfs_add_default_group(&tiqn->tiqn_stat_grps.iscsi_instance_group,
···
1273
1271
"iscsi_logout_stats", &iscsi_stat_logout_cit);
1274
1272
configfs_add_default_group(&tiqn->tiqn_stat_grps.iscsi_logout_stats_group,
1275
1273
&tiqn->tiqn_wwn.fabric_stat_group);
1276
-
1277
-
1278
-
pr_debug("LIO_Target_ConfigFS: REGISTER -> %s\n", tiqn->tiqn);
1279
-
pr_debug("LIO_Target_ConfigFS: REGISTER -> Allocated Node:"
1280
-
" %s\n", name);
1281
-
return &tiqn->tiqn_wwn;
1282
1274
}
1283
1275
1284
1276
static void lio_target_call_coredeltiqn(
1285
1277
struct se_wwn *wwn)
1286
1278
{
1287
1279
struct iscsi_tiqn *tiqn = container_of(wwn, struct iscsi_tiqn, tiqn_wwn);
1288
-
1289
-
configfs_remove_default_groups(&tiqn->tiqn_wwn.fabric_stat_group);
1290
1280
1291
1281
pr_debug("LIO_Target_ConfigFS: DEREGISTER -> %s\n",
1292
1282
tiqn->tiqn);
···
1654
1660
.aborted_task = lio_aborted_task,
1655
1661
.fabric_make_wwn = lio_target_call_coreaddtiqn,
1656
1662
.fabric_drop_wwn = lio_target_call_coredeltiqn,
1663
+
.add_wwn_groups = lio_target_add_wwn_groups,
1657
1664
.fabric_make_tpg = lio_target_tiqn_addtpg,
1658
1665
.fabric_drop_tpg = lio_target_tiqn_deltpg,
1659
1666
.fabric_make_np = lio_target_call_addnptotpg,
1660
1667
.fabric_drop_np = lio_target_call_delnpfromtpg,
1661
1668
.fabric_init_nodeacl = lio_target_init_nodeacl,
1662
-
.fabric_cleanup_nodeacl = lio_target_cleanup_nodeacl,
1663
1669
1664
1670
.tfc_discovery_attrs = lio_target_discovery_auth_attrs,
1665
1671
.tfc_wwn_attrs = lio_target_wwn_attrs,
+13
-11
drivers/target/target_core_fabric_configfs.c
···
338
338
{
339
339
struct se_node_acl *se_nacl = container_of(to_config_group(item),
340
340
struct se_node_acl, acl_group);
341
-
struct target_fabric_configfs *tf = se_nacl->se_tpg->se_tpg_wwn->wwn_tf;
342
341
343
-
if (tf->tf_ops->fabric_cleanup_nodeacl)
344
-
tf->tf_ops->fabric_cleanup_nodeacl(se_nacl);
342
+
configfs_remove_default_groups(&se_nacl->acl_fabric_stat_group);
345
343
core_tpg_del_initiator_node_acl(se_nacl);
346
344
}
347
345
···
381
383
if (IS_ERR(se_nacl))
382
384
return ERR_CAST(se_nacl);
383
385
384
-
if (tf->tf_ops->fabric_init_nodeacl) {
385
-
int ret = tf->tf_ops->fabric_init_nodeacl(se_nacl, name);
386
-
if (ret) {
387
-
core_tpg_del_initiator_node_acl(se_nacl);
388
-
return ERR_PTR(ret);
389
-
}
390
-
}
391
-
392
386
config_group_init_type_name(&se_nacl->acl_group, name,
393
387
&tf->tf_tpg_nacl_base_cit);
394
388
···
403
413
"fabric_statistics", &tf->tf_tpg_nacl_stat_cit);
404
414
configfs_add_default_group(&se_nacl->acl_fabric_stat_group,
405
415
&se_nacl->acl_group);
416
+
417
+
if (tf->tf_ops->fabric_init_nodeacl) {
418
+
int ret = tf->tf_ops->fabric_init_nodeacl(se_nacl, name);
419
+
if (ret) {
420
+
configfs_remove_default_groups(&se_nacl->acl_fabric_stat_group);
421
+
core_tpg_del_initiator_node_acl(se_nacl);
422
+
return ERR_PTR(ret);
423
+
}
424
+
}
406
425
407
426
return &se_nacl->acl_group;
408
427
}
···
891
892
struct se_wwn, wwn_group);
892
893
struct target_fabric_configfs *tf = wwn->wwn_tf;
893
894
895
+
configfs_remove_default_groups(&wwn->fabric_stat_group);
894
896
tf->tf_ops->fabric_drop_wwn(wwn);
895
897
}
896
898
···
945
945
&tf->tf_wwn_fabric_stats_cit);
946
946
configfs_add_default_group(&wwn->fabric_stat_group, &wwn->wwn_group);
947
947
948
+
if (tf->tf_ops->add_wwn_groups)
949
+
tf->tf_ops->add_wwn_groups(wwn);
948
950
return &wwn->wwn_group;
949
951
}
950
952
+25
-20
fs/btrfs/disk-io.c
···
25
25
#include <linux/buffer_head.h>
26
26
#include <linux/workqueue.h>
27
27
#include <linux/kthread.h>
28
-
#include <linux/freezer.h>
29
28
#include <linux/slab.h>
30
29
#include <linux/migrate.h>
31
30
#include <linux/ratelimit.h>
···
302
303
err = map_private_extent_buffer(buf, offset, 32,
303
304
&kaddr, &map_start, &map_len);
304
305
if (err)
305
-
return 1;
306
+
return err;
306
307
cur_len = min(len, map_len - (offset - map_start));
307
308
crc = btrfs_csum_data(kaddr + offset - map_start,
308
309
crc, cur_len);
···
312
313
if (csum_size > sizeof(inline_result)) {
313
314
result = kzalloc(csum_size, GFP_NOFS);
314
315
if (!result)
315
-
return 1;
316
+
return -ENOMEM;
316
317
} else {
317
318
result = (char *)&inline_result;
318
319
}
···
333
334
val, found, btrfs_header_level(buf));
334
335
if (result != (char *)&inline_result)
335
336
kfree(result);
336
-
return 1;
337
+
return -EUCLEAN;
337
338
}
338
339
} else {
339
340
write_extent_buffer(buf, result, 0, csum_size);
···
512
513
eb = (struct extent_buffer *)page->private;
513
514
if (page != eb->pages[0])
514
515
return 0;
516
+
515
517
found_start = btrfs_header_bytenr(eb);
516
-
if (WARN_ON(found_start != start || !PageUptodate(page)))
517
-
return 0;
518
-
csum_tree_block(fs_info, eb, 0);
519
-
return 0;
518
+
/*
519
+
* Please do not consolidate these warnings into a single if.
520
+
* It is useful to know what went wrong.
521
+
*/
522
+
if (WARN_ON(found_start != start))
523
+
return -EUCLEAN;
524
+
if (WARN_ON(!PageUptodate(page)))
525
+
return -EUCLEAN;
526
+
527
+
ASSERT(memcmp_extent_buffer(eb, fs_info->fsid,
528
+
btrfs_header_fsid(), BTRFS_FSID_SIZE) == 0);
529
+
530
+
return csum_tree_block(fs_info, eb, 0);
520
531
}
521
532
522
533
static int check_tree_block_fsid(struct btrfs_fs_info *fs_info,
···
670
661
eb, found_level);
671
662
672
663
ret = csum_tree_block(fs_info, eb, 1);
673
-
if (ret) {
674
-
ret = -EIO;
664
+
if (ret)
675
665
goto err;
676
-
}
677
666
678
667
/*
679
668
* If this is a leaf block and it is corrupt, set the corrupt bit so
···
1838
1831
*/
1839
1832
btrfs_delete_unused_bgs(root->fs_info);
1840
1833
sleep:
1841
-
if (!try_to_freeze() && !again) {
1834
+
if (!again) {
1842
1835
set_current_state(TASK_INTERRUPTIBLE);
1843
1836
if (!kthread_should_stop())
1844
1837
schedule();
···
1928
1921
if (unlikely(test_bit(BTRFS_FS_STATE_ERROR,
1929
1922
&root->fs_info->fs_state)))
1930
1923
btrfs_cleanup_transaction(root);
1931
-
if (!try_to_freeze()) {
1932
-
set_current_state(TASK_INTERRUPTIBLE);
1933
-
if (!kthread_should_stop() &&
1934
-
(!btrfs_transaction_blocked(root->fs_info) ||
1935
-
cannot_commit))
1936
-
schedule_timeout(delay);
1937
-
__set_current_state(TASK_RUNNING);
1938
-
}
1924
+
set_current_state(TASK_INTERRUPTIBLE);
1925
+
if (!kthread_should_stop() &&
1926
+
(!btrfs_transaction_blocked(root->fs_info) ||
1927
+
cannot_commit))
1928
+
schedule_timeout(delay);
1929
+
__set_current_state(TASK_RUNNING);
1939
1930
} while (!kthread_should_stop());
1940
1931
return 0;
1941
1932
}
+1
-2
fs/dlm/config.c
···
343
343
struct dlm_cluster *cl = NULL;
344
344
struct dlm_spaces *sps = NULL;
345
345
struct dlm_comms *cms = NULL;
346
-
void *gps = NULL;
347
346
348
347
cl = kzalloc(sizeof(struct dlm_cluster), GFP_NOFS);
349
348
sps = kzalloc(sizeof(struct dlm_spaces), GFP_NOFS);
350
349
cms = kzalloc(sizeof(struct dlm_comms), GFP_NOFS);
351
350
352
-
if (!cl || !gps || !sps || !cms)
351
+
if (!cl || !sps || !cms)
353
352
goto fail;
354
353
355
354
config_group_init_type_name(&cl->group, name, &cluster_type);
+6
-4
fs/namei.c
···
1740
1740
nd->flags);
1741
1741
if (IS_ERR(path.dentry))
1742
1742
return PTR_ERR(path.dentry);
1743
-
if (unlikely(d_is_negative(path.dentry))) {
1744
-
dput(path.dentry);
1745
-
return -ENOENT;
1746
-
}
1743
+
1747
1744
path.mnt = nd->path.mnt;
1748
1745
err = follow_managed(&path, nd);
1749
1746
if (unlikely(err < 0))
1750
1747
return err;
1748
+
1749
+
if (unlikely(d_is_negative(path.dentry))) {
1750
+
path_to_nameidata(&path, nd);
1751
+
return -ENOENT;
1752
+
}
1751
1753
1752
1754
seq = 0; /* we are already out of RCU mode */
1753
1755
inode = d_backing_inode(path.dentry);
+3
-5
fs/orangefs/dir.c
···
235
235
if (ret == -EIO && op_state_purged(new_op)) {
236
236
gossip_err("%s: Client is down. Aborting readdir call.\n",
237
237
__func__);
238
-
goto out_slot;
238
+
goto out_free_op;
239
239
}
240
240
241
241
if (ret < 0 || new_op->downcall.status != 0) {
···
244
244
new_op->downcall.status);
245
245
if (ret >= 0)
246
246
ret = new_op->downcall.status;
247
-
goto out_slot;
247
+
goto out_free_op;
248
248
}
249
249
250
250
dents_buf = new_op->downcall.trailer_buf;
251
251
if (dents_buf == NULL) {
252
252
gossip_err("Invalid NULL buffer in readdir response\n");
253
253
ret = -ENOMEM;
254
-
goto out_slot;
254
+
goto out_free_op;
255
255
}
256
256
257
257
bytes_decoded = decode_dirents(dents_buf, new_op->downcall.trailer_size,
···
363
363
out_vfree:
364
364
gossip_debug(GOSSIP_DIR_DEBUG, "vfree %p\n", dents_buf);
365
365
vfree(dents_buf);
366
-
out_slot:
367
-
orangefs_readdir_index_put(buffer_index);
368
366
out_free_op:
369
367
op_release(new_op);
370
368
gossip_debug(GOSSIP_DIR_DEBUG, "orangefs_readdir returning %d\n", ret);
+1
-1
fs/orangefs/protocol.h
···
407
407
* space. Zero signifies the upstream version of the kernel module.
408
408
*/
409
409
#define ORANGEFS_KERNEL_PROTO_VERSION 0
410
-
#define ORANGEFS_MINIMUM_USERSPACE_VERSION 20904
410
+
#define ORANGEFS_MINIMUM_USERSPACE_VERSION 20903
411
411
412
412
/*
413
413
* describes memory regions to map in the ORANGEFS_DEV_MAP ioctl.
+17
-17
include/linux/atomic.h
···
559
559
#endif
560
560
561
561
/**
562
-
* fetch_or - perform *ptr |= mask and return old value of *ptr
563
-
* @ptr: pointer to value
564
-
* @mask: mask to OR on the value
565
-
*
566
-
* cmpxchg based fetch_or, macro so it works for different integer types
562
+
* atomic_fetch_or - perform *p |= mask and return old value of *p
563
+
* @p: pointer to atomic_t
564
+
* @mask: mask to OR on the atomic_t
567
565
*/
568
-
#ifndef fetch_or
569
-
#define fetch_or(ptr, mask) \
570
-
({ typeof(*(ptr)) __old, __val = *(ptr); \
571
-
for (;;) { \
572
-
__old = cmpxchg((ptr), __val, __val | (mask)); \
573
-
if (__old == __val) \
574
-
break; \
575
-
__val = __old; \
576
-
} \
577
-
__old; \
578
-
})
579
-
#endif
566
+
#ifndef atomic_fetch_or
567
+
static inline int atomic_fetch_or(atomic_t *p, int mask)
568
+
{
569
+
int old, val = atomic_read(p);
580
570
571
+
for (;;) {
572
+
old = atomic_cmpxchg(p, val, val | mask);
573
+
if (old == val)
574
+
break;
575
+
val = old;
576
+
}
577
+
578
+
return old;
579
+
}
580
+
#endif
581
581
582
582
#ifdef CONFIG_GENERIC_ATOMIC64
583
583
#include <asm-generic/atomic64.h>
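The new atomic_fetch_or() helper above is the classic compare-and-swap retry loop behind a typed atomic_t interface. As a rough userspace illustration only (C11 atomics; the names below are invented for this sketch and are not the kernel API), the same fetch-or-and-return-old-value pattern looks like this:

	#include <stdatomic.h>
	#include <stdio.h>

	/* Retry the cmpxchg until the OR lands on an unchanged value. */
	static int fetch_or_demo(atomic_int *p, int mask)
	{
		int old = atomic_load(p);

		while (!atomic_compare_exchange_weak(p, &old, old | mask))
			;	/* on failure, 'old' is refreshed with the current value */

		return old;	/* previous value, as atomic_fetch_or() returns */
	}

	int main(void)
	{
		atomic_int deps = 0;

		printf("prev=%d\n", fetch_or_demo(&deps, 1 << 2));	/* prev=0: first setter */
		printf("prev=%d\n", fetch_or_demo(&deps, 1 << 2));	/* prev=4: bit already set */
		return 0;
	}

Returning the previous value is what lets a caller such as tick_nohz_dep_set_all() kick other CPUs only when it was the first to set the bit.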
+2
include/linux/brcmphy.h
···
24
24
#define PHY_ID_BCM7250 0xae025280
25
25
#define PHY_ID_BCM7364 0xae025260
26
26
#define PHY_ID_BCM7366 0x600d8490
27
+
#define PHY_ID_BCM7346 0x600d8650
28
+
#define PHY_ID_BCM7362 0x600d84b0
27
29
#define PHY_ID_BCM7425 0x600d86b0
28
30
#define PHY_ID_BCM7429 0x600d8730
29
31
#define PHY_ID_BCM7435 0x600d8750
+2
-2
include/linux/configfs.h
···
188
188
}
189
189
190
190
#define CONFIGFS_BIN_ATTR_RO(_pfx, _name, _priv, _maxsz) \
191
-
static struct configfs_attribute _pfx##attr_##_name = { \
191
+
static struct configfs_bin_attribute _pfx##attr_##_name = { \
192
192
.cb_attr = { \
193
193
.ca_name = __stringify(_name), \
194
194
.ca_mode = S_IRUGO, \
···
200
200
}
201
201
202
202
#define CONFIGFS_BIN_ATTR_WO(_pfx, _name, _priv, _maxsz) \
203
-
static struct configfs_attribute _pfx##attr_##_name = { \
203
+
static struct configfs_bin_attribute _pfx##attr_##_name = { \
204
204
.cb_attr = { \
205
205
.ca_name = __stringify(_name), \
206
206
.ca_mode = S_IWUSR, \
+4
include/linux/filter.h
···
465
465
void bpf_prog_destroy(struct bpf_prog *fp);
466
466
467
467
int sk_attach_filter(struct sock_fprog *fprog, struct sock *sk);
468
+
int __sk_attach_filter(struct sock_fprog *fprog, struct sock *sk,
469
+
bool locked);
468
470
int sk_attach_bpf(u32 ufd, struct sock *sk);
469
471
int sk_reuseport_attach_filter(struct sock_fprog *fprog, struct sock *sk);
470
472
int sk_reuseport_attach_bpf(u32 ufd, struct sock *sk);
471
473
int sk_detach_filter(struct sock *sk);
474
+
int __sk_detach_filter(struct sock *sk, bool locked);
475
+
472
476
int sk_get_filter(struct sock *sk, struct sock_filter __user *filter,
473
477
unsigned int len);
474
478
+1
-1
include/linux/huge_mm.h
+4
include/linux/netfilter/ipset/ip_set.h
···
234
234
spinlock_t lock;
235
235
/* References to the set */
236
236
u32 ref;
237
+
/* References to the set for netlink events like dump,
238
+
* ref can be swapped out by ip_set_swap
239
+
*/
240
+
u32 ref_netlink;
237
241
/* The core set type */
238
242
struct ip_set_type *type;
239
243
/* The type variant doing the real job */
+16
-6
include/linux/pmem.h
···
42
42
BUG();
43
43
}
44
44
45
+
static inline int arch_memcpy_from_pmem(void *dst, const void __pmem *src,
46
+
size_t n)
47
+
{
48
+
BUG();
49
+
return -EFAULT;
50
+
}
51
+
45
52
static inline size_t arch_copy_from_iter_pmem(void __pmem *addr, size_t bytes,
46
53
struct iov_iter *i)
47
54
{
···
73
66
#endif
74
67
75
68
/*
76
-
* Architectures that define ARCH_HAS_PMEM_API must provide
77
-
* implementations for arch_memcpy_to_pmem(), arch_wmb_pmem(),
78
-
* arch_copy_from_iter_pmem(), arch_clear_pmem(), arch_wb_cache_pmem()
79
-
* and arch_has_wmb_pmem().
69
+
* memcpy_from_pmem - read from persistent memory with error handling
70
+
* @dst: destination buffer
71
+
* @src: source buffer
72
+
* @size: transfer length
73
+
*
74
+
* Returns 0 on success negative error code on failure.
80
75
*/
81
-
static inline void memcpy_from_pmem(void *dst, void __pmem const *src, size_t size)
76
+
static inline int memcpy_from_pmem(void *dst, void __pmem const *src,
77
+
size_t size)
82
78
{
83
-
memcpy(dst, (void __force const *) src, size);
79
+
return arch_memcpy_from_pmem(dst, src, size);
84
80
}
85
81
86
82
static inline bool arch_has_pmem_api(void)
+2
-2
include/linux/sched.h
···
720
720
struct task_cputime cputime_expires;
721
721
722
722
#ifdef CONFIG_NO_HZ_FULL
723
-
unsigned long tick_dep_mask;
723
+
atomic_t tick_dep_mask;
724
724
#endif
725
725
726
726
struct list_head cpu_timers[3];
···
1549
1549
#endif
1550
1550
1551
1551
#ifdef CONFIG_NO_HZ_FULL
1552
-
unsigned long tick_dep_mask;
1552
+
atomic_t tick_dep_mask;
1553
1553
#endif
1554
1554
unsigned long nvcsw, nivcsw; /* context switch counts */
1555
1555
u64 start_time; /* monotonic time in nsec */
-1
include/linux/stmmac.h
+1
-1
include/target/target_core_fabric.h
···
76
76
struct se_wwn *(*fabric_make_wwn)(struct target_fabric_configfs *,
77
77
struct config_group *, const char *);
78
78
void (*fabric_drop_wwn)(struct se_wwn *);
79
+
void (*add_wwn_groups)(struct se_wwn *);
79
80
struct se_portal_group *(*fabric_make_tpg)(struct se_wwn *,
80
81
struct config_group *, const char *);
81
82
void (*fabric_drop_tpg)(struct se_portal_group *);
···
88
87
struct config_group *, const char *);
89
88
void (*fabric_drop_np)(struct se_tpg_np *);
90
89
int (*fabric_init_nodeacl)(struct se_node_acl *, const char *);
91
-
void (*fabric_cleanup_nodeacl)(struct se_node_acl *);
92
90
93
91
struct configfs_attribute **tfc_discovery_attrs;
94
92
struct configfs_attribute **tfc_wwn_attrs;
+1
-1
include/trace/events/page_isolation.h
···
29
29
30
30
TP_printk("start_pfn=0x%lx end_pfn=0x%lx fin_pfn=0x%lx ret=%s",
31
31
__entry->start_pfn, __entry->end_pfn, __entry->fin_pfn,
32
-
__entry->end_pfn == __entry->fin_pfn ? "success" : "fail")
32
+
__entry->end_pfn <= __entry->fin_pfn ? "success" : "fail")
33
33
);
34
34
35
35
#endif /* _TRACE_PAGE_ISOLATION_H */
+1
include/uapi/linux/bpf.h
+4
include/uapi/linux/stddef.h
+2
-1
init/Kconfig
···
272
272
See the man page for more details.
273
273
274
274
config FHANDLE
275
-
bool "open by fhandle syscalls"
275
+
bool "open by fhandle syscalls" if EXPERT
276
276
select EXPORTFS
277
+
default y
277
278
help
278
279
If you say Y here, a user level program will be able to map
279
280
file names to handle and then later use the handle for
+4
-2
kernel/bpf/syscall.c
···
137
137
"map_type:\t%u\n"
138
138
"key_size:\t%u\n"
139
139
"value_size:\t%u\n"
140
-
"max_entries:\t%u\n",
140
+
"max_entries:\t%u\n"
141
+
"map_flags:\t%#x\n",
141
142
map->map_type,
142
143
map->key_size,
143
144
map->value_size,
144
-
map->max_entries);
145
+
map->max_entries,
146
+
map->map_flags);
145
147
}
146
148
#endif
147
149
+13
-2
kernel/events/core.c
···
2417
2417
cpuctx->task_ctx = NULL;
2418
2418
}
2419
2419
2420
-
is_active ^= ctx->is_active; /* changed bits */
2421
-
2420
+
/*
2421
+
* Always update time if it was set; not only when it changes.
2422
+
* Otherwise we can 'forget' to update time for any but the last
2423
+
* context we sched out. For example:
2424
+
*
2425
+
* ctx_sched_out(.event_type = EVENT_FLEXIBLE)
2426
+
* ctx_sched_out(.event_type = EVENT_PINNED)
2427
+
*
2428
+
* would only update time for the pinned events.
2429
+
*/
2422
2430
if (is_active & EVENT_TIME) {
2423
2431
/* update (and stop) ctx time */
2424
2432
update_context_time(ctx);
2425
2433
update_cgrp_time_from_cpuctx(cpuctx);
2426
2434
}
2435
+
2436
+
is_active ^= ctx->is_active; /* changed bits */
2427
2437
2428
2438
if (!ctx->nr_active || !(is_active & EVENT_ALL))
2429
2439
return;
···
8542
8532
f_flags);
8543
8533
if (IS_ERR(event_file)) {
8544
8534
err = PTR_ERR(event_file);
8535
+
event_file = NULL;
8545
8536
goto err_context;
8546
8537
}
8547
8538
+77
-2
kernel/locking/lockdep.c
···
2000
2000
}
2001
2001
2002
2002
/*
2003
+
* Returns the next chain_key iteration
2004
+
*/
2005
+
static u64 print_chain_key_iteration(int class_idx, u64 chain_key)
2006
+
{
2007
+
u64 new_chain_key = iterate_chain_key(chain_key, class_idx);
2008
+
2009
+
printk(" class_idx:%d -> chain_key:%016Lx",
2010
+
class_idx,
2011
+
(unsigned long long)new_chain_key);
2012
+
return new_chain_key;
2013
+
}
2014
+
2015
+
static void
2016
+
print_chain_keys_held_locks(struct task_struct *curr, struct held_lock *hlock_next)
2017
+
{
2018
+
struct held_lock *hlock;
2019
+
u64 chain_key = 0;
2020
+
int depth = curr->lockdep_depth;
2021
+
int i;
2022
+
2023
+
printk("depth: %u\n", depth + 1);
2024
+
for (i = get_first_held_lock(curr, hlock_next); i < depth; i++) {
2025
+
hlock = curr->held_locks + i;
2026
+
chain_key = print_chain_key_iteration(hlock->class_idx, chain_key);
2027
+
2028
+
print_lock(hlock);
2029
+
}
2030
+
2031
+
print_chain_key_iteration(hlock_next->class_idx, chain_key);
2032
+
print_lock(hlock_next);
2033
+
}
2034
+
2035
+
static void print_chain_keys_chain(struct lock_chain *chain)
2036
+
{
2037
+
int i;
2038
+
u64 chain_key = 0;
2039
+
int class_id;
2040
+
2041
+
printk("depth: %u\n", chain->depth);
2042
+
for (i = 0; i < chain->depth; i++) {
2043
+
class_id = chain_hlocks[chain->base + i];
2044
+
chain_key = print_chain_key_iteration(class_id + 1, chain_key);
2045
+
2046
+
print_lock_name(lock_classes + class_id);
2047
+
printk("\n");
2048
+
}
2049
+
}
2050
+
2051
+
static void print_collision(struct task_struct *curr,
2052
+
struct held_lock *hlock_next,
2053
+
struct lock_chain *chain)
2054
+
{
2055
+
printk("\n");
2056
+
printk("======================\n");
2057
+
printk("[chain_key collision ]\n");
2058
+
print_kernel_ident();
2059
+
printk("----------------------\n");
2060
+
printk("%s/%d: ", current->comm, task_pid_nr(current));
2061
+
printk("Hash chain already cached but the contents don't match!\n");
2062
+
2063
+
printk("Held locks:");
2064
+
print_chain_keys_held_locks(curr, hlock_next);
2065
+
2066
+
printk("Locks in cached chain:");
2067
+
print_chain_keys_chain(chain);
2068
+
2069
+
printk("\nstack backtrace:\n");
2070
+
dump_stack();
2071
+
}
2072
+
2073
+
/*
2003
2074
* Checks whether the chain and the current held locks are consistent
2004
2075
* in depth and also in content. If they are not it most likely means
2005
2076
* that there was a collision during the calculation of the chain_key.
···
2085
2014
2086
2015
i = get_first_held_lock(curr, hlock);
2087
2016
2088
-
if (DEBUG_LOCKS_WARN_ON(chain->depth != curr->lockdep_depth - (i - 1)))
2017
+
if (DEBUG_LOCKS_WARN_ON(chain->depth != curr->lockdep_depth - (i - 1))) {
2018
+
print_collision(curr, hlock, chain);
2089
2019
return 0;
2020
+
}
2090
2021
2091
2022
for (j = 0; j < chain->depth - 1; j++, i++) {
2092
2023
id = curr->held_locks[i].class_idx - 1;
2093
2024
2094
-
if (DEBUG_LOCKS_WARN_ON(chain_hlocks[chain->base + j] != id))
2025
+
if (DEBUG_LOCKS_WARN_ON(chain_hlocks[chain->base + j] != id)) {
2026
+
print_collision(curr, hlock, chain);
2095
2027
return 0;
2028
+
}
2096
2029
}
2097
2030
#endif
2098
2031
return 1;
+18
kernel/sched/core.c
···
321
321
}
322
322
#endif /* CONFIG_SCHED_HRTICK */
323
323
324
+
/*
325
+
* cmpxchg based fetch_or, macro so it works for different integer types
326
+
*/
327
+
#define fetch_or(ptr, mask) \
328
+
({ \
329
+
typeof(ptr) _ptr = (ptr); \
330
+
typeof(mask) _mask = (mask); \
331
+
typeof(*_ptr) _old, _val = *_ptr; \
332
+
\
333
+
for (;;) { \
334
+
_old = cmpxchg(_ptr, _val, _val | _mask); \
335
+
if (_old == _val) \
336
+
break; \
337
+
_val = _old; \
338
+
} \
339
+
_old; \
340
+
})
341
+
324
342
#if defined(CONFIG_SMP) && defined(TIF_POLLING_NRFLAG)
325
343
/*
326
344
* Atomically set TIF_NEED_RESCHED and test for TIF_POLLING_NRFLAG,
+30
-31
kernel/time/tick-sched.c
···
157
157
cpumask_var_t tick_nohz_full_mask;
158
158
cpumask_var_t housekeeping_mask;
159
159
bool tick_nohz_full_running;
160
-
static unsigned long tick_dep_mask;
160
+
static atomic_t tick_dep_mask;
161
161
162
-
static void trace_tick_dependency(unsigned long dep)
162
+
static bool check_tick_dependency(atomic_t *dep)
163
163
{
164
-
if (dep & TICK_DEP_MASK_POSIX_TIMER) {
164
+
int val = atomic_read(dep);
165
+
166
+
if (val & TICK_DEP_MASK_POSIX_TIMER) {
165
167
trace_tick_stop(0, TICK_DEP_MASK_POSIX_TIMER);
166
-
return;
168
+
return true;
167
169
}
168
170
169
-
if (dep & TICK_DEP_MASK_PERF_EVENTS) {
171
+
if (val & TICK_DEP_MASK_PERF_EVENTS) {
170
172
trace_tick_stop(0, TICK_DEP_MASK_PERF_EVENTS);
171
-
return;
173
+
return true;
172
174
}
173
175
174
-
if (dep & TICK_DEP_MASK_SCHED) {
176
+
if (val & TICK_DEP_MASK_SCHED) {
175
177
trace_tick_stop(0, TICK_DEP_MASK_SCHED);
176
-
return;
178
+
return true;
177
179
}
178
180
179
-
if (dep & TICK_DEP_MASK_CLOCK_UNSTABLE)
181
+
if (val & TICK_DEP_MASK_CLOCK_UNSTABLE) {
180
182
trace_tick_stop(0, TICK_DEP_MASK_CLOCK_UNSTABLE);
183
+
return true;
184
+
}
185
+
186
+
return false;
181
187
}
182
188
183
189
static bool can_stop_full_tick(struct tick_sched *ts)
184
190
{
185
191
WARN_ON_ONCE(!irqs_disabled());
186
192
187
-
if (tick_dep_mask) {
188
-
trace_tick_dependency(tick_dep_mask);
193
+
if (check_tick_dependency(&tick_dep_mask))
189
194
return false;
190
-
}
191
195
192
-
if (ts->tick_dep_mask) {
193
-
trace_tick_dependency(ts->tick_dep_mask);
196
+
if (check_tick_dependency(&ts->tick_dep_mask))
194
197
return false;
195
-
}
196
198
197
-
if (current->tick_dep_mask) {
198
-
trace_tick_dependency(current->tick_dep_mask);
199
+
if (check_tick_dependency(&current->tick_dep_mask))
199
200
return false;
200
-
}
201
201
202
-
if (current->signal->tick_dep_mask) {
203
-
trace_tick_dependency(current->signal->tick_dep_mask);
202
+
if (check_tick_dependency(&current->signal->tick_dep_mask))
204
203
return false;
205
-
}
206
204
207
205
return true;
208
206
}
···
257
259
preempt_enable();
258
260
}
259
261
260
-
static void tick_nohz_dep_set_all(unsigned long *dep,
262
+
static void tick_nohz_dep_set_all(atomic_t *dep,
261
263
enum tick_dep_bits bit)
262
264
{
263
-
unsigned long prev;
265
+
int prev;
264
266
265
-
prev = fetch_or(dep, BIT_MASK(bit));
267
+
prev = atomic_fetch_or(dep, BIT(bit));
266
268
if (!prev)
267
269
tick_nohz_full_kick_all();
268
270
}
···
278
280
279
281
void tick_nohz_dep_clear(enum tick_dep_bits bit)
280
282
{
281
-
clear_bit(bit, &tick_dep_mask);
283
+
atomic_andnot(BIT(bit), &tick_dep_mask);
282
284
}
283
285
284
286
/*
···
287
289
*/
288
290
void tick_nohz_dep_set_cpu(int cpu, enum tick_dep_bits bit)
289
291
{
290
-
unsigned long prev;
292
+
int prev;
291
293
struct tick_sched *ts;
292
294
293
295
ts = per_cpu_ptr(&tick_cpu_sched, cpu);
294
296
295
-
prev = fetch_or(&ts->tick_dep_mask, BIT_MASK(bit));
297
+
prev = atomic_fetch_or(&ts->tick_dep_mask, BIT(bit));
296
298
if (!prev) {
297
299
preempt_disable();
298
300
/* Perf needs local kick that is NMI safe */
···
311
313
{
312
314
struct tick_sched *ts = per_cpu_ptr(&tick_cpu_sched, cpu);
313
315
314
-
clear_bit(bit, &ts->tick_dep_mask);
316
+
atomic_andnot(BIT(bit), &ts->tick_dep_mask);
315
317
}
316
318
317
319
/*
···
329
331
330
332
void tick_nohz_dep_clear_task(struct task_struct *tsk, enum tick_dep_bits bit)
331
333
{
332
-
clear_bit(bit, &tsk->tick_dep_mask);
334
+
atomic_andnot(BIT(bit), &tsk->tick_dep_mask);
333
335
}
334
336
335
337
/*
···
343
345
344
346
void tick_nohz_dep_clear_signal(struct signal_struct *sig, enum tick_dep_bits bit)
345
347
{
346
-
clear_bit(bit, &sig->tick_dep_mask);
348
+
atomic_andnot(BIT(bit), &sig->tick_dep_mask);
347
349
}
348
350
349
351
/*
···
364
366
ts = this_cpu_ptr(&tick_cpu_sched);
365
367
366
368
if (ts->tick_stopped) {
367
-
if (current->tick_dep_mask || current->signal->tick_dep_mask)
369
+
if (atomic_read(&current->tick_dep_mask) ||
370
+
atomic_read(&current->signal->tick_dep_mask))
368
371
tick_nohz_full_kick();
369
372
}
370
373
out:
+1
-1
kernel/time/tick-sched.h
+1
-1
mm/kasan/kasan.c
+5
-1
mm/oom_kill.c
···
547
547
548
548
static void wake_oom_reaper(struct task_struct *tsk)
549
549
{
550
-
if (!oom_reaper_th || tsk->oom_reaper_list)
550
+
if (!oom_reaper_th)
551
+
return;
552
+
553
+
/* tsk is already queued? */
554
+
if (tsk == oom_reaper_list || tsk->oom_reaper_list)
551
555
return;
552
556
553
557
get_task_struct(tsk);
+5
-5
mm/page_isolation.c
···
215
215
* all pages in [start_pfn...end_pfn) must be in the same zone.
216
216
* zone->lock must be held before call this.
217
217
*
218
-
* Returns 1 if all pages in the range are isolated.
218
+
* Returns the last tested pfn.
219
219
*/
220
220
static unsigned long
221
221
__test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn,
···
289
289
* now as a simple work-around, we use the next node for destination.
290
290
*/
291
291
if (PageHuge(page)) {
292
-
nodemask_t src = nodemask_of_node(page_to_nid(page));
293
-
nodemask_t dst;
294
-
nodes_complement(dst, src);
292
+
int node = next_online_node(page_to_nid(page));
293
+
if (node == MAX_NUMNODES)
294
+
node = first_online_node;
295
295
return alloc_huge_page_node(page_hstate(compound_head(page)),
296
-
next_node(page_to_nid(page), dst));
296
+
node);
297
297
}
298
298
299
299
if (PageHighMem(page))
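The page_isolation hunk above replaces the nodemask-complement dance with a plain "next online node, wrapping to the first" choice for the hugepage migration target. A rough userspace sketch of that wrap-around selection (the array and function names here are invented for illustration, not kernel interfaces):

	#include <stdio.h>

	#define MAX_NODES 4

	static int next_online_node_demo(int node, const int online[MAX_NODES])
	{
		int next;

		for (next = node + 1; next < MAX_NODES; next++)
			if (online[next])
				return next;
		/* Walked off the end: wrap around to the first online node. */
		for (next = 0; next < MAX_NODES; next++)
			if (online[next])
				return next;
		return node;	/* nothing else is online */
	}

	int main(void)
	{
		const int online[MAX_NODES] = { 1, 0, 1, 0 };

		printf("%d\n", next_online_node_demo(0, online));	/* prints 2 */
		printf("%d\n", next_online_node_demo(2, online));	/* wraps, prints 0 */
		return 0;
	}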
+7
-21
mm/rmap.c
···
569
569
}
570
570
571
571
#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
572
-
static void percpu_flush_tlb_batch_pages(void *data)
573
-
{
574
-
/*
575
-
* All TLB entries are flushed on the assumption that it is
576
-
* cheaper to flush all TLBs and let them be refilled than
577
-
* flushing individual PFNs. Note that we do not track mm's
578
-
* to flush as that might simply be multiple full TLB flushes
579
-
* for no gain.
580
-
*/
581
-
count_vm_tlb_event(NR_TLB_REMOTE_FLUSH_RECEIVED);
582
-
flush_tlb_local();
583
-
}
584
-
585
572
/*
586
573
* Flush TLB entries for recently unmapped pages from remote CPUs. It is
587
574
* important if a PTE was dirty when it was unmapped that it's flushed
···
585
598
586
599
cpu = get_cpu();
587
600
588
-
trace_tlb_flush(TLB_REMOTE_SHOOTDOWN, -1UL);
589
-
590
-
if (cpumask_test_cpu(cpu, &tlb_ubc->cpumask))
591
-
percpu_flush_tlb_batch_pages(&tlb_ubc->cpumask);
592
-
593
-
if (cpumask_any_but(&tlb_ubc->cpumask, cpu) < nr_cpu_ids) {
594
-
smp_call_function_many(&tlb_ubc->cpumask,
595
-
percpu_flush_tlb_batch_pages, (void *)tlb_ubc, true);
601
+
if (cpumask_test_cpu(cpu, &tlb_ubc->cpumask)) {
602
+
count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
603
+
local_flush_tlb();
604
+
trace_tlb_flush(TLB_LOCAL_SHOOTDOWN, TLB_FLUSH_ALL);
596
605
}
606
+
607
+
if (cpumask_any_but(&tlb_ubc->cpumask, cpu) < nr_cpu_ids)
608
+
flush_tlb_others(&tlb_ubc->cpumask, NULL, 0, TLB_FLUSH_ALL);
597
609
cpumask_clear(&tlb_ubc->cpumask);
598
610
tlb_ubc->flush_required = false;
599
611
tlb_ubc->writable = false;
+1
-1
net/bridge/br_stp.c
+4
net/bridge/netfilter/ebtables.c
···
1521
1521
if (copy_from_user(&tmp, user, sizeof(tmp)))
1522
1522
return -EFAULT;
1523
1523
1524
+
tmp.name[sizeof(tmp.name) - 1] = '\0';
1525
+
1524
1526
t = find_table_lock(net, tmp.name, &ret, &ebt_mutex);
1525
1527
if (!t)
1526
1528
return ret;
···
2333
2331
2334
2332
if (copy_from_user(&tmp, user, sizeof(tmp)))
2335
2333
return -EFAULT;
2334
+
2335
+
tmp.name[sizeof(tmp.name) - 1] = '\0';
2336
2336
2337
2337
t = find_table_lock(net, tmp.name, &ret, &ebt_mutex);
2338
2338
if (!t)
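Both ebtables hunks above guard against a user-supplied table name that is not NUL-terminated within its fixed-size buffer before it is used as a lookup key. A rough userspace sketch of that defensive pattern (buffer size and function names are illustrative only, not kernel code):

	#include <stdio.h>
	#include <string.h>

	#define TABLE_NAME_LEN 32

	/* Force termination of a name copied from an untrusted, fixed-size source. */
	static void copy_table_name(char dst[TABLE_NAME_LEN], const char *untrusted)
	{
		memcpy(dst, untrusted, TABLE_NAME_LEN);
		dst[TABLE_NAME_LEN - 1] = '\0';	/* same guarantee the patch adds */
	}

	int main(void)
	{
		char overlong[64];
		char name[TABLE_NAME_LEN];

		memset(overlong, 'A', sizeof(overlong));	/* no terminator in range */
		copy_table_name(name, overlong);
		printf("len=%zu\n", strlen(name));		/* bounded at 31 */
		return 0;
	}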
+10
-10
net/bridge/netfilter/nft_reject_bridge.c
···
40
40
/* We cannot use oldskb->dev, it can be either bridge device (NF_BRIDGE INPUT)
41
41
* or the bridge port (NF_BRIDGE PREROUTING).
42
42
*/
43
-
static void nft_reject_br_send_v4_tcp_reset(struct sk_buff *oldskb,
43
+
static void nft_reject_br_send_v4_tcp_reset(struct net *net,
44
+
struct sk_buff *oldskb,
44
45
const struct net_device *dev,
45
46
int hook)
46
47
{
···
49
48
struct iphdr *niph;
50
49
const struct tcphdr *oth;
51
50
struct tcphdr _oth;
52
-
struct net *net = sock_net(oldskb->sk);
53
51
54
52
if (!nft_bridge_iphdr_validate(oldskb))
55
53
return;
···
75
75
br_deliver(br_port_get_rcu(dev), nskb);
76
76
}
77
77
78
-
static void nft_reject_br_send_v4_unreach(struct sk_buff *oldskb,
78
+
static void nft_reject_br_send_v4_unreach(struct net *net,
79
+
struct sk_buff *oldskb,
79
80
const struct net_device *dev,
80
81
int hook, u8 code)
81
82
{
···
87
86
void *payload;
88
87
__wsum csum;
89
88
u8 proto;
90
-
struct net *net = sock_net(oldskb->sk);
91
89
92
90
if (oldskb->csum_bad || !nft_bridge_iphdr_validate(oldskb))
93
91
return;
···
273
273
case htons(ETH_P_IP):
274
274
switch (priv->type) {
275
275
case NFT_REJECT_ICMP_UNREACH:
276
-
nft_reject_br_send_v4_unreach(pkt->skb, pkt->in,
277
-
pkt->hook,
276
+
nft_reject_br_send_v4_unreach(pkt->net, pkt->skb,
277
+
pkt->in, pkt->hook,
278
278
priv->icmp_code);
279
279
break;
280
280
case NFT_REJECT_TCP_RST:
281
-
nft_reject_br_send_v4_tcp_reset(pkt->skb, pkt->in,
282
-
pkt->hook);
281
+
nft_reject_br_send_v4_tcp_reset(pkt->net, pkt->skb,
282
+
pkt->in, pkt->hook);
283
283
break;
284
284
case NFT_REJECT_ICMPX_UNREACH:
285
-
nft_reject_br_send_v4_unreach(pkt->skb, pkt->in,
286
-
pkt->hook,
285
+
nft_reject_br_send_v4_unreach(pkt->net, pkt->skb,
286
+
pkt->in, pkt->hook,
287
287
nft_reject_icmp_code(priv->icmp_code));
288
288
break;
289
289
}
+25
-13
net/core/filter.c
···
1149
1149
}
1150
1150
EXPORT_SYMBOL_GPL(bpf_prog_destroy);
1151
1151
1152
-
static int __sk_attach_prog(struct bpf_prog *prog, struct sock *sk)
1152
+
static int __sk_attach_prog(struct bpf_prog *prog, struct sock *sk,
1153
+
bool locked)
1153
1154
{
1154
1155
struct sk_filter *fp, *old_fp;
1155
1156
···
1166
1165
return -ENOMEM;
1167
1166
}
1168
1167
1169
-
old_fp = rcu_dereference_protected(sk->sk_filter,
1170
-
sock_owned_by_user(sk));
1168
+
old_fp = rcu_dereference_protected(sk->sk_filter, locked);
1171
1169
rcu_assign_pointer(sk->sk_filter, fp);
1172
-
1173
1170
if (old_fp)
1174
1171
sk_filter_uncharge(sk, old_fp);
1175
1172
···
1246
1247
* occurs or there is insufficient memory for the filter a negative
1247
1248
* errno code is returned. On success the return is zero.
1248
1249
*/
1249
-
int sk_attach_filter(struct sock_fprog *fprog, struct sock *sk)
1250
+
int __sk_attach_filter(struct sock_fprog *fprog, struct sock *sk,
1251
+
bool locked)
1250
1252
{
1251
1253
struct bpf_prog *prog = __get_filter(fprog, sk);
1252
1254
int err;
···
1255
1255
if (IS_ERR(prog))
1256
1256
return PTR_ERR(prog);
1257
1257
1258
-
err = __sk_attach_prog(prog, sk);
1258
+
err = __sk_attach_prog(prog, sk, locked);
1259
1259
if (err < 0) {
1260
1260
__bpf_prog_release(prog);
1261
1261
return err;
···
1263
1263
1264
1264
return 0;
1265
1265
}
1266
-
EXPORT_SYMBOL_GPL(sk_attach_filter);
1266
+
EXPORT_SYMBOL_GPL(__sk_attach_filter);
1267
+
1268
+
int sk_attach_filter(struct sock_fprog *fprog, struct sock *sk)
1269
+
{
1270
+
return __sk_attach_filter(fprog, sk, sock_owned_by_user(sk));
1271
+
}
1267
1272
1268
1273
int sk_reuseport_attach_filter(struct sock_fprog *fprog, struct sock *sk)
1269
1274
{
···
1314
1309
if (IS_ERR(prog))
1315
1310
return PTR_ERR(prog);
1316
1311
1317
-
err = __sk_attach_prog(prog, sk);
1312
+
err = __sk_attach_prog(prog, sk, sock_owned_by_user(sk));
1318
1313
if (err < 0) {
1319
1314
bpf_prog_put(prog);
1320
1315
return err;
···
1769
1764
if (unlikely(size != sizeof(struct bpf_tunnel_key))) {
1770
1765
switch (size) {
1771
1766
case offsetof(struct bpf_tunnel_key, tunnel_label):
1767
+
case offsetof(struct bpf_tunnel_key, tunnel_ext):
1772
1768
goto set_compat;
1773
1769
case offsetof(struct bpf_tunnel_key, remote_ipv6[1]):
1774
1770
/* Fixup deprecated structure layouts here, so we have
···
1855
1849
if (unlikely(size != sizeof(struct bpf_tunnel_key))) {
1856
1850
switch (size) {
1857
1851
case offsetof(struct bpf_tunnel_key, tunnel_label):
1852
+
case offsetof(struct bpf_tunnel_key, tunnel_ext):
1858
1853
case offsetof(struct bpf_tunnel_key, remote_ipv6[1]):
1859
1854
/* Fixup deprecated structure layouts here, so we have
1860
1855
* a common path later on.
···
1868
1861
return -EINVAL;
1869
1862
}
1870
1863
}
1871
-
if (unlikely(!(flags & BPF_F_TUNINFO_IPV6) && from->tunnel_label))
1864
+
if (unlikely((!(flags & BPF_F_TUNINFO_IPV6) && from->tunnel_label) ||
1865
+
from->tunnel_ext))
1872
1866
return -EINVAL;
1873
1867
1874
1868
skb_dst_drop(skb);
···
2255
2247
}
2256
2248
late_initcall(register_sk_filter_ops);
2257
2249
2258
-
int sk_detach_filter(struct sock *sk)
2250
+
int __sk_detach_filter(struct sock *sk, bool locked)
2259
2251
{
2260
2252
int ret = -ENOENT;
2261
2253
struct sk_filter *filter;
···
2263
2255
if (sock_flag(sk, SOCK_FILTER_LOCKED))
2264
2256
return -EPERM;
2265
2257
2266
-
filter = rcu_dereference_protected(sk->sk_filter,
2267
-
sock_owned_by_user(sk));
2258
+
filter = rcu_dereference_protected(sk->sk_filter, locked);
2268
2259
if (filter) {
2269
2260
RCU_INIT_POINTER(sk->sk_filter, NULL);
2270
2261
sk_filter_uncharge(sk, filter);
···
2272
2265
2273
2266
return ret;
2274
2267
}
2275
-
EXPORT_SYMBOL_GPL(sk_detach_filter);
2268
+
EXPORT_SYMBOL_GPL(__sk_detach_filter);
2269
+
2270
+
int sk_detach_filter(struct sock *sk)
2271
+
{
2272
+
return __sk_detach_filter(sk, sock_owned_by_user(sk));
2273
+
}
2276
2274
2277
2275
int sk_get_filter(struct sock *sk, struct sock_filter __user *ubuf,
2278
2276
unsigned int len)
+2
-1
net/core/netpoll.c
···
603
603
const struct net_device_ops *ops;
604
604
int err;
605
605
606
-
np->dev = ndev;
607
606
strlcpy(np->dev_name, ndev->name, IFNAMSIZ);
608
607
INIT_WORK(&np->cleanup_work, netpoll_async_cleanup);
609
608
···
669
670
goto unlock;
670
671
}
671
672
dev_hold(ndev);
673
+
np->dev = ndev;
672
674
673
675
if (netdev_master_upper_dev_get(ndev)) {
674
676
np_err(np, "%s is a slave device, aborting\n", np->dev_name);
···
770
770
return 0;
771
771
772
772
put:
773
+
np->dev = NULL;
773
774
dev_put(ndev);
774
775
unlock:
775
776
rtnl_unlock();
+1
net/core/rtnetlink.c
···
909
909
+ rtnl_link_get_af_size(dev, ext_filter_mask) /* IFLA_AF_SPEC */
910
910
+ nla_total_size(MAX_PHYS_ITEM_ID_LEN) /* IFLA_PHYS_PORT_ID */
911
911
+ nla_total_size(MAX_PHYS_ITEM_ID_LEN) /* IFLA_PHYS_SWITCH_ID */
912
+
+ nla_total_size(IFNAMSIZ) /* IFLA_PHYS_PORT_NAME */
912
913
+ nla_total_size(1); /* IFLA_PROTO_DOWN */
913
914
914
915
}
+16
net/ipv4/fou.c
···
195
195
u8 proto = NAPI_GRO_CB(skb)->proto;
196
196
const struct net_offload **offloads;
197
197
198
+
/* We can clear the encap_mark for FOU as we are essentially doing
199
+
* one of two possible things. We are either adding an L4 tunnel
200
+
* header to the outer L3 tunnel header, or we are simply
201
+
* treating the GRE tunnel header as though it is a UDP protocol
202
+
* specific header such as VXLAN or GENEVE.
203
+
*/
204
+
NAPI_GRO_CB(skb)->encap_mark = 0;
205
+
198
206
rcu_read_lock();
199
207
offloads = NAPI_GRO_CB(skb)->is_ipv6 ? inet6_offloads : inet_offloads;
200
208
ops = rcu_dereference(offloads[proto]);
···
359
351
continue;
360
352
}
361
353
}
354
+
355
+
/* We can clear the encap_mark for GUE as we are essentially doing
356
+
* one of two possible things. We are either adding an L4 tunnel
357
+
* header to the outer L3 tunnel header, or we are simply
358
+
* treating the GRE tunnel header as though it is a UDP protocol
359
+
* specific header such as VXLAN or GENEVE.
360
+
*/
361
+
NAPI_GRO_CB(skb)->encap_mark = 0;
362
362
363
363
rcu_read_lock();
364
364
offloads = NAPI_GRO_CB(skb)->is_ipv6 ? inet6_offloads : inet_offloads;
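
The fou.c hunks clear NAPI_GRO_CB(skb)->encap_mark before handing the packet to the inner offload, so a tunnel stacked below FOU/GUE can still be aggregated by GRO. A compilable toy version of that check-and-reset handshake, with all names as illustrative stand-ins for the kernel's GRO callback state:

#include <stdbool.h>
#include <stdio.h>

struct gro_cb_stub { bool encap_mark; };

static int inner_tunnel_receive(struct gro_cb_stub *cb)
{
	if (cb->encap_mark)		/* already inside an encap level */
		return -1;		/* give up on aggregation        */
	cb->encap_mark = true;		/* claim this encapsulation level */
	return 0;
}

static int fou_like_receive(struct gro_cb_stub *cb)
{
	/* Clearing the mark lets the tunnel stacked below still use GRO,
	 * because the header stripped here belongs to the outer tunnel.
	 */
	cb->encap_mark = false;
	return inner_tunnel_receive(cb);
}

int main(void)
{
	struct gro_cb_stub cb = { .encap_mark = true };

	printf("stacked tunnel result: %d\n", fou_like_receive(&cb));
	return 0;
}
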
+2
-2
net/ipv4/ip_tunnel_core.c
···
372
372
if (nla_put_be64(skb, LWTUNNEL_IP6_ID, tun_info->key.tun_id) ||
373
373
nla_put_in6_addr(skb, LWTUNNEL_IP6_DST, &tun_info->key.u.ipv6.dst) ||
374
374
nla_put_in6_addr(skb, LWTUNNEL_IP6_SRC, &tun_info->key.u.ipv6.src) ||
375
-
nla_put_u8(skb, LWTUNNEL_IP6_HOPLIMIT, tun_info->key.tos) ||
376
-
nla_put_u8(skb, LWTUNNEL_IP6_TC, tun_info->key.ttl) ||
375
+
nla_put_u8(skb, LWTUNNEL_IP6_TC, tun_info->key.tos) ||
376
+
nla_put_u8(skb, LWTUNNEL_IP6_HOPLIMIT, tun_info->key.ttl) ||
377
377
nla_put_be16(skb, LWTUNNEL_IP6_FLAGS, tun_info->key.tun_flags))
378
378
return -ENOMEM;
379
379
+23
-20
net/ipv4/netfilter/arp_tables.c
···
359
359
}
360
360
361
361
/* All zeroes == unconditional rule. */
362
-
static inline bool unconditional(const struct arpt_arp *arp)
362
+
static inline bool unconditional(const struct arpt_entry *e)
363
363
{
364
364
static const struct arpt_arp uncond;
365
365
366
-
return memcmp(arp, &uncond, sizeof(uncond)) == 0;
366
+
return e->target_offset == sizeof(struct arpt_entry) &&
367
+
memcmp(&e->arp, &uncond, sizeof(uncond)) == 0;
367
368
}
368
369
369
370
/* Figures out from what hook each rule can be called: returns 0 if
···
403
402
|= ((1 << hook) | (1 << NF_ARP_NUMHOOKS));
404
403
405
404
/* Unconditional return/END. */
406
-
if ((e->target_offset == sizeof(struct arpt_entry) &&
405
+
if ((unconditional(e) &&
407
406
(strcmp(t->target.u.user.name,
408
407
XT_STANDARD_TARGET) == 0) &&
409
-
t->verdict < 0 && unconditional(&e->arp)) ||
410
-
visited) {
408
+
t->verdict < 0) || visited) {
411
409
unsigned int oldpos, size;
412
410
413
411
if ((strcmp(t->target.u.user.name,
···
474
474
return 1;
475
475
}
476
476
477
-
static inline int check_entry(const struct arpt_entry *e, const char *name)
477
+
static inline int check_entry(const struct arpt_entry *e)
478
478
{
479
479
const struct xt_entry_target *t;
480
480
481
-
if (!arp_checkentry(&e->arp)) {
482
-
duprintf("arp_tables: arp check failed %p %s.\n", e, name);
481
+
if (!arp_checkentry(&e->arp))
483
482
return -EINVAL;
484
-
}
485
483
486
484
if (e->target_offset + sizeof(struct xt_entry_target) > e->next_offset)
487
485
return -EINVAL;
···
520
522
struct xt_target *target;
521
523
int ret;
522
524
523
-
ret = check_entry(e, name);
524
-
if (ret)
525
-
return ret;
526
-
527
525
e->counters.pcnt = xt_percpu_counter_alloc();
528
526
if (IS_ERR_VALUE(e->counters.pcnt))
529
527
return -ENOMEM;
···
551
557
const struct xt_entry_target *t;
552
558
unsigned int verdict;
553
559
554
-
if (!unconditional(&e->arp))
560
+
if (!unconditional(e))
555
561
return false;
556
562
t = arpt_get_target_c(e);
557
563
if (strcmp(t->u.user.name, XT_STANDARD_TARGET) != 0)
···
570
576
unsigned int valid_hooks)
571
577
{
572
578
unsigned int h;
579
+
int err;
573
580
574
581
if ((unsigned long)e % __alignof__(struct arpt_entry) != 0 ||
575
-
(unsigned char *)e + sizeof(struct arpt_entry) >= limit) {
582
+
(unsigned char *)e + sizeof(struct arpt_entry) >= limit ||
583
+
(unsigned char *)e + e->next_offset > limit) {
576
584
duprintf("Bad offset %p\n", e);
577
585
return -EINVAL;
578
586
}
···
586
590
return -EINVAL;
587
591
}
588
592
593
+
err = check_entry(e);
594
+
if (err)
595
+
return err;
596
+
589
597
/* Check hooks & underflows */
590
598
for (h = 0; h < NF_ARP_NUMHOOKS; h++) {
591
599
if (!(valid_hooks & (1 << h)))
···
598
598
newinfo->hook_entry[h] = hook_entries[h];
599
599
if ((unsigned char *)e - base == underflows[h]) {
600
600
if (!check_underflow(e)) {
601
-
pr_err("Underflows must be unconditional and "
602
-
"use the STANDARD target with "
603
-
"ACCEPT/DROP\n");
601
+
pr_debug("Underflows must be unconditional and "
602
+
"use the STANDARD target with "
603
+
"ACCEPT/DROP\n");
604
604
return -EINVAL;
605
605
}
606
606
newinfo->underflow[h] = underflows[h];
···
969
969
sizeof(struct arpt_get_entries) + get.size);
970
970
return -EINVAL;
971
971
}
972
+
get.name[sizeof(get.name) - 1] = '\0';
972
973
973
974
t = xt_find_table_lock(net, NFPROTO_ARP, get.name);
974
975
if (!IS_ERR_OR_NULL(t)) {
···
1234
1233
1235
1234
duprintf("check_compat_entry_size_and_hooks %p\n", e);
1236
1235
if ((unsigned long)e % __alignof__(struct compat_arpt_entry) != 0 ||
1237
-
(unsigned char *)e + sizeof(struct compat_arpt_entry) >= limit) {
1236
+
(unsigned char *)e + sizeof(struct compat_arpt_entry) >= limit ||
1237
+
(unsigned char *)e + e->next_offset > limit) {
1238
1238
duprintf("Bad offset %p, limit = %p\n", e, limit);
1239
1239
return -EINVAL;
1240
1240
}
···
1248
1246
}
1249
1247
1250
1248
/* For purposes of check_entry casting the compat entry is fine */
1251
-
ret = check_entry((struct arpt_entry *)e, name);
1249
+
ret = check_entry((struct arpt_entry *)e);
1252
1250
if (ret)
1253
1251
return ret;
1254
1252
···
1664
1662
*len, sizeof(get) + get.size);
1665
1663
return -EINVAL;
1666
1664
}
1665
+
get.name[sizeof(get.name) - 1] = '\0';
1667
1666
1668
1667
xt_compat_lock(NFPROTO_ARP);
1669
1668
t = xt_find_table_lock(net, NFPROTO_ARP, get.name);
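
The arp_tables.c hunks (and the matching ip_tables.c/ip6_tables.c ones below) tighten validation of userspace-supplied rule blobs: unconditional() now also checks target_offset, check_entry() runs before the hook/underflow walk, the GET_ENTRIES name is NUL-terminated, and the size check additionally rejects entries whose next_offset points past the blob. A small standalone sketch of that bounds check, with hypothetical structure names rather than the xtables ones:

#include <stdio.h>
#include <string.h>

struct entry_stub {
	unsigned short target_offset;	/* where the target record starts   */
	unsigned short next_offset;	/* distance to the following entry  */
};

static int entry_in_bounds(size_t limit, size_t entry_off,
			   const struct entry_stub *e)
{
	if (entry_off + sizeof(*e) > limit)	/* header itself overruns    */
		return 0;
	if (e->next_offset < sizeof(*e))	/* cannot even cover itself  */
		return 0;
	if (entry_off + e->next_offset > limit)	/* claimed size overruns     */
		return 0;
	return 1;
}

int main(void)
{
	unsigned char blob[32];
	struct entry_stub e;

	memset(blob, 0, sizeof(blob));
	memcpy(&e, blob, sizeof(e));
	e.next_offset = 64;			/* lies about its own size   */
	printf("bogus entry accepted: %d\n",
	       entry_in_bounds(sizeof(blob), 0, &e));
	return 0;
}
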
+25
-23
net/ipv4/netfilter/ip_tables.c
···
168
168
169
169
/* All zeroes == unconditional rule. */
170
170
/* Mildly perf critical (only if packet tracing is on) */
171
-
static inline bool unconditional(const struct ipt_ip *ip)
171
+
static inline bool unconditional(const struct ipt_entry *e)
172
172
{
173
173
static const struct ipt_ip uncond;
174
174
175
-
return memcmp(ip, &uncond, sizeof(uncond)) == 0;
175
+
return e->target_offset == sizeof(struct ipt_entry) &&
176
+
memcmp(&e->ip, &uncond, sizeof(uncond)) == 0;
176
177
#undef FWINV
177
178
}
178
179
···
230
229
} else if (s == e) {
231
230
(*rulenum)++;
232
231
233
-
if (s->target_offset == sizeof(struct ipt_entry) &&
232
+
if (unconditional(s) &&
234
233
strcmp(t->target.u.kernel.target->name,
235
234
XT_STANDARD_TARGET) == 0 &&
236
-
t->verdict < 0 &&
237
-
unconditional(&s->ip)) {
235
+
t->verdict < 0) {
238
236
/* Tail of chains: STANDARD target (return/policy) */
239
237
*comment = *chainname == hookname
240
238
? comments[NF_IP_TRACE_COMMENT_POLICY]
···
476
476
e->comefrom |= ((1 << hook) | (1 << NF_INET_NUMHOOKS));
477
477
478
478
/* Unconditional return/END. */
479
-
if ((e->target_offset == sizeof(struct ipt_entry) &&
479
+
if ((unconditional(e) &&
480
480
(strcmp(t->target.u.user.name,
481
481
XT_STANDARD_TARGET) == 0) &&
482
-
t->verdict < 0 && unconditional(&e->ip)) ||
483
-
visited) {
482
+
t->verdict < 0) || visited) {
484
483
unsigned int oldpos, size;
485
484
486
485
if ((strcmp(t->target.u.user.name,
···
568
569
}
569
570
570
571
static int
571
-
check_entry(const struct ipt_entry *e, const char *name)
572
+
check_entry(const struct ipt_entry *e)
572
573
{
573
574
const struct xt_entry_target *t;
574
575
575
-
if (!ip_checkentry(&e->ip)) {
576
-
duprintf("ip check failed %p %s.\n", e, name);
576
+
if (!ip_checkentry(&e->ip))
577
577
return -EINVAL;
578
-
}
579
578
580
579
if (e->target_offset + sizeof(struct xt_entry_target) >
581
580
e->next_offset)
···
663
666
struct xt_mtchk_param mtpar;
664
667
struct xt_entry_match *ematch;
665
668
666
-
ret = check_entry(e, name);
667
-
if (ret)
668
-
return ret;
669
-
670
669
e->counters.pcnt = xt_percpu_counter_alloc();
671
670
if (IS_ERR_VALUE(e->counters.pcnt))
672
671
return -ENOMEM;
···
714
721
const struct xt_entry_target *t;
715
722
unsigned int verdict;
716
723
717
-
if (!unconditional(&e->ip))
724
+
if (!unconditional(e))
718
725
return false;
719
726
t = ipt_get_target_c(e);
720
727
if (strcmp(t->u.user.name, XT_STANDARD_TARGET) != 0)
···
734
741
unsigned int valid_hooks)
735
742
{
736
743
unsigned int h;
744
+
int err;
737
745
738
746
if ((unsigned long)e % __alignof__(struct ipt_entry) != 0 ||
739
-
(unsigned char *)e + sizeof(struct ipt_entry) >= limit) {
747
+
(unsigned char *)e + sizeof(struct ipt_entry) >= limit ||
748
+
(unsigned char *)e + e->next_offset > limit) {
740
749
duprintf("Bad offset %p\n", e);
741
750
return -EINVAL;
742
751
}
···
750
755
return -EINVAL;
751
756
}
752
757
758
+
err = check_entry(e);
759
+
if (err)
760
+
return err;
761
+
753
762
/* Check hooks & underflows */
754
763
for (h = 0; h < NF_INET_NUMHOOKS; h++) {
755
764
if (!(valid_hooks & (1 << h)))
···
762
763
newinfo->hook_entry[h] = hook_entries[h];
763
764
if ((unsigned char *)e - base == underflows[h]) {
764
765
if (!check_underflow(e)) {
765
-
pr_err("Underflows must be unconditional and "
766
-
"use the STANDARD target with "
767
-
"ACCEPT/DROP\n");
766
+
pr_debug("Underflows must be unconditional and "
767
+
"use the STANDARD target with "
768
+
"ACCEPT/DROP\n");
768
769
return -EINVAL;
769
770
}
770
771
newinfo->underflow[h] = underflows[h];
···
1156
1157
*len, sizeof(get) + get.size);
1157
1158
return -EINVAL;
1158
1159
}
1160
+
get.name[sizeof(get.name) - 1] = '\0';
1159
1161
1160
1162
t = xt_find_table_lock(net, AF_INET, get.name);
1161
1163
if (!IS_ERR_OR_NULL(t)) {
···
1493
1493
1494
1494
duprintf("check_compat_entry_size_and_hooks %p\n", e);
1495
1495
if ((unsigned long)e % __alignof__(struct compat_ipt_entry) != 0 ||
1496
-
(unsigned char *)e + sizeof(struct compat_ipt_entry) >= limit) {
1496
+
(unsigned char *)e + sizeof(struct compat_ipt_entry) >= limit ||
1497
+
(unsigned char *)e + e->next_offset > limit) {
1497
1498
duprintf("Bad offset %p, limit = %p\n", e, limit);
1498
1499
return -EINVAL;
1499
1500
}
···
1507
1506
}
1508
1507
1509
1508
/* For purposes of check_entry casting the compat entry is fine */
1510
-
ret = check_entry((struct ipt_entry *)e, name);
1509
+
ret = check_entry((struct ipt_entry *)e);
1511
1510
if (ret)
1512
1511
return ret;
1513
1512
···
1936
1935
*len, sizeof(get) + get.size);
1937
1936
return -EINVAL;
1938
1937
}
1938
+
get.name[sizeof(get.name) - 1] = '\0';
1939
1939
1940
1940
xt_compat_lock(AF_INET);
1941
1941
t = xt_find_table_lock(net, AF_INET, get.name);
+28
-26
net/ipv4/netfilter/ipt_SYNPROXY.c
···
18
18
#include <net/netfilter/nf_conntrack_synproxy.h>
19
19
20
20
static struct iphdr *
21
-
synproxy_build_ip(struct sk_buff *skb, __be32 saddr, __be32 daddr)
21
+
synproxy_build_ip(struct net *net, struct sk_buff *skb, __be32 saddr,
22
+
__be32 daddr)
22
23
{
23
24
struct iphdr *iph;
24
-
struct net *net = sock_net(skb->sk);
25
25
26
26
skb_reset_network_header(skb);
27
27
iph = (struct iphdr *)skb_put(skb, sizeof(*iph));
···
40
40
}
41
41
42
42
static void
43
-
synproxy_send_tcp(const struct synproxy_net *snet,
43
+
synproxy_send_tcp(struct net *net,
44
44
const struct sk_buff *skb, struct sk_buff *nskb,
45
45
struct nf_conntrack *nfct, enum ip_conntrack_info ctinfo,
46
46
struct iphdr *niph, struct tcphdr *nth,
47
47
unsigned int tcp_hdr_size)
48
48
{
49
-
struct net *net = nf_ct_net(snet->tmpl);
50
-
51
49
nth->check = ~tcp_v4_check(tcp_hdr_size, niph->saddr, niph->daddr, 0);
52
50
nskb->ip_summed = CHECKSUM_PARTIAL;
53
51
nskb->csum_start = (unsigned char *)nth - nskb->head;
···
70
72
}
71
73
72
74
static void
73
-
synproxy_send_client_synack(const struct synproxy_net *snet,
75
+
synproxy_send_client_synack(struct net *net,
74
76
const struct sk_buff *skb, const struct tcphdr *th,
75
77
const struct synproxy_options *opts)
76
78
{
···
89
91
return;
90
92
skb_reserve(nskb, MAX_TCP_HEADER);
91
93
92
-
niph = synproxy_build_ip(nskb, iph->daddr, iph->saddr);
94
+
niph = synproxy_build_ip(net, nskb, iph->daddr, iph->saddr);
93
95
94
96
skb_reset_transport_header(nskb);
95
97
nth = (struct tcphdr *)skb_put(nskb, tcp_hdr_size);
···
107
109
108
110
synproxy_build_options(nth, opts);
109
111
110
-
synproxy_send_tcp(snet, skb, nskb, skb->nfct, IP_CT_ESTABLISHED_REPLY,
112
+
synproxy_send_tcp(net, skb, nskb, skb->nfct, IP_CT_ESTABLISHED_REPLY,
111
113
niph, nth, tcp_hdr_size);
112
114
}
113
115
114
116
static void
115
-
synproxy_send_server_syn(const struct synproxy_net *snet,
117
+
synproxy_send_server_syn(struct net *net,
116
118
const struct sk_buff *skb, const struct tcphdr *th,
117
119
const struct synproxy_options *opts, u32 recv_seq)
118
120
{
121
+
struct synproxy_net *snet = synproxy_pernet(net);
119
122
struct sk_buff *nskb;
120
123
struct iphdr *iph, *niph;
121
124
struct tcphdr *nth;
···
131
132
return;
132
133
skb_reserve(nskb, MAX_TCP_HEADER);
133
134
134
-
niph = synproxy_build_ip(nskb, iph->saddr, iph->daddr);
135
+
niph = synproxy_build_ip(net, nskb, iph->saddr, iph->daddr);
135
136
136
137
skb_reset_transport_header(nskb);
137
138
nth = (struct tcphdr *)skb_put(nskb, tcp_hdr_size);
···
152
153
153
154
synproxy_build_options(nth, opts);
154
155
155
-
synproxy_send_tcp(snet, skb, nskb, &snet->tmpl->ct_general, IP_CT_NEW,
156
+
synproxy_send_tcp(net, skb, nskb, &snet->tmpl->ct_general, IP_CT_NEW,
156
157
niph, nth, tcp_hdr_size);
157
158
}
158
159
159
160
static void
160
-
synproxy_send_server_ack(const struct synproxy_net *snet,
161
+
synproxy_send_server_ack(struct net *net,
161
162
const struct ip_ct_tcp *state,
162
163
const struct sk_buff *skb, const struct tcphdr *th,
163
164
const struct synproxy_options *opts)
···
176
177
return;
177
178
skb_reserve(nskb, MAX_TCP_HEADER);
178
179
179
-
niph = synproxy_build_ip(nskb, iph->daddr, iph->saddr);
180
+
niph = synproxy_build_ip(net, nskb, iph->daddr, iph->saddr);
180
181
181
182
skb_reset_transport_header(nskb);
182
183
nth = (struct tcphdr *)skb_put(nskb, tcp_hdr_size);
···
192
193
193
194
synproxy_build_options(nth, opts);
194
195
195
-
synproxy_send_tcp(snet, skb, nskb, NULL, 0, niph, nth, tcp_hdr_size);
196
+
synproxy_send_tcp(net, skb, nskb, NULL, 0, niph, nth, tcp_hdr_size);
196
197
}
197
198
198
199
static void
199
-
synproxy_send_client_ack(const struct synproxy_net *snet,
200
+
synproxy_send_client_ack(struct net *net,
200
201
const struct sk_buff *skb, const struct tcphdr *th,
201
202
const struct synproxy_options *opts)
202
203
{
···
214
215
return;
215
216
skb_reserve(nskb, MAX_TCP_HEADER);
216
217
217
-
niph = synproxy_build_ip(nskb, iph->saddr, iph->daddr);
218
+
niph = synproxy_build_ip(net, nskb, iph->saddr, iph->daddr);
218
219
219
220
skb_reset_transport_header(nskb);
220
221
nth = (struct tcphdr *)skb_put(nskb, tcp_hdr_size);
···
230
231
231
232
synproxy_build_options(nth, opts);
232
233
233
-
synproxy_send_tcp(snet, skb, nskb, skb->nfct, IP_CT_ESTABLISHED_REPLY,
234
+
synproxy_send_tcp(net, skb, nskb, skb->nfct, IP_CT_ESTABLISHED_REPLY,
234
235
niph, nth, tcp_hdr_size);
235
236
}
236
237
237
238
static bool
238
-
synproxy_recv_client_ack(const struct synproxy_net *snet,
239
+
synproxy_recv_client_ack(struct net *net,
239
240
const struct sk_buff *skb, const struct tcphdr *th,
240
241
struct synproxy_options *opts, u32 recv_seq)
241
242
{
243
+
struct synproxy_net *snet = synproxy_pernet(net);
242
244
int mss;
243
245
244
246
mss = __cookie_v4_check(ip_hdr(skb), th, ntohl(th->ack_seq) - 1);
···
255
255
if (opts->options & XT_SYNPROXY_OPT_TIMESTAMP)
256
256
synproxy_check_timestamp_cookie(opts);
257
257
258
-
synproxy_send_server_syn(snet, skb, th, opts, recv_seq);
258
+
synproxy_send_server_syn(net, skb, th, opts, recv_seq);
259
259
return true;
260
260
}
261
261
···
263
263
synproxy_tg4(struct sk_buff *skb, const struct xt_action_param *par)
264
264
{
265
265
const struct xt_synproxy_info *info = par->targinfo;
266
-
struct synproxy_net *snet = synproxy_pernet(par->net);
266
+
struct net *net = par->net;
267
+
struct synproxy_net *snet = synproxy_pernet(net);
267
268
struct synproxy_options opts = {};
268
269
struct tcphdr *th, _th;
269
270
···
293
292
XT_SYNPROXY_OPT_SACK_PERM |
294
293
XT_SYNPROXY_OPT_ECN);
295
294
296
-
synproxy_send_client_synack(snet, skb, th, &opts);
295
+
synproxy_send_client_synack(net, skb, th, &opts);
297
296
return NF_DROP;
298
297
299
298
} else if (th->ack && !(th->fin || th->rst || th->syn)) {
300
299
/* ACK from client */
301
-
synproxy_recv_client_ack(snet, skb, th, &opts, ntohl(th->seq));
300
+
synproxy_recv_client_ack(net, skb, th, &opts, ntohl(th->seq));
302
301
return NF_DROP;
303
302
}
304
303
···
309
308
struct sk_buff *skb,
310
309
const struct nf_hook_state *nhs)
311
310
{
312
-
struct synproxy_net *snet = synproxy_pernet(nhs->net);
311
+
struct net *net = nhs->net;
312
+
struct synproxy_net *snet = synproxy_pernet(net);
313
313
enum ip_conntrack_info ctinfo;
314
314
struct nf_conn *ct;
315
315
struct nf_conn_synproxy *synproxy;
···
367
365
* therefore we need to add 1 to make the SYN sequence
368
366
* number match the one of first SYN.
369
367
*/
370
-
if (synproxy_recv_client_ack(snet, skb, th, &opts,
368
+
if (synproxy_recv_client_ack(net, skb, th, &opts,
371
369
ntohl(th->seq) + 1))
372
370
this_cpu_inc(snet->stats->cookie_retrans);
373
371
···
393
391
XT_SYNPROXY_OPT_SACK_PERM);
394
392
395
393
swap(opts.tsval, opts.tsecr);
396
-
synproxy_send_server_ack(snet, state, skb, th, &opts);
394
+
synproxy_send_server_ack(net, state, skb, th, &opts);
397
395
398
396
nf_ct_seqadj_init(ct, ctinfo, synproxy->isn - ntohl(th->seq));
399
397
400
398
swap(opts.tsval, opts.tsecr);
401
-
synproxy_send_client_ack(snet, skb, th, &opts);
399
+
synproxy_send_client_ack(net, skb, th, &opts);
402
400
403
401
consume_skb(skb);
404
402
return NF_STOLEN;
+25
-23
net/ipv6/netfilter/ip6_tables.c
···
198
198
199
199
/* All zeroes == unconditional rule. */
200
200
/* Mildly perf critical (only if packet tracing is on) */
201
-
static inline bool unconditional(const struct ip6t_ip6 *ipv6)
201
+
static inline bool unconditional(const struct ip6t_entry *e)
202
202
{
203
203
static const struct ip6t_ip6 uncond;
204
204
205
-
return memcmp(ipv6, &uncond, sizeof(uncond)) == 0;
205
+
return e->target_offset == sizeof(struct ip6t_entry) &&
206
+
memcmp(&e->ipv6, &uncond, sizeof(uncond)) == 0;
206
207
}
207
208
208
209
static inline const struct xt_entry_target *
···
259
258
} else if (s == e) {
260
259
(*rulenum)++;
261
260
262
-
if (s->target_offset == sizeof(struct ip6t_entry) &&
261
+
if (unconditional(s) &&
263
262
strcmp(t->target.u.kernel.target->name,
264
263
XT_STANDARD_TARGET) == 0 &&
265
-
t->verdict < 0 &&
266
-
unconditional(&s->ipv6)) {
264
+
t->verdict < 0) {
267
265
/* Tail of chains: STANDARD target (return/policy) */
268
266
*comment = *chainname == hookname
269
267
? comments[NF_IP6_TRACE_COMMENT_POLICY]
···
488
488
e->comefrom |= ((1 << hook) | (1 << NF_INET_NUMHOOKS));
489
489
490
490
/* Unconditional return/END. */
491
-
if ((e->target_offset == sizeof(struct ip6t_entry) &&
491
+
if ((unconditional(e) &&
492
492
(strcmp(t->target.u.user.name,
493
493
XT_STANDARD_TARGET) == 0) &&
494
-
t->verdict < 0 &&
495
-
unconditional(&e->ipv6)) || visited) {
494
+
t->verdict < 0) || visited) {
496
495
unsigned int oldpos, size;
497
496
498
497
if ((strcmp(t->target.u.user.name,
···
580
581
}
581
582
582
583
static int
583
-
check_entry(const struct ip6t_entry *e, const char *name)
584
+
check_entry(const struct ip6t_entry *e)
584
585
{
585
586
const struct xt_entry_target *t;
586
587
587
-
if (!ip6_checkentry(&e->ipv6)) {
588
-
duprintf("ip_tables: ip check failed %p %s.\n", e, name);
588
+
if (!ip6_checkentry(&e->ipv6))
589
589
return -EINVAL;
590
-
}
591
590
592
591
if (e->target_offset + sizeof(struct xt_entry_target) >
593
592
e->next_offset)
···
676
679
struct xt_mtchk_param mtpar;
677
680
struct xt_entry_match *ematch;
678
681
679
-
ret = check_entry(e, name);
680
-
if (ret)
681
-
return ret;
682
-
683
682
e->counters.pcnt = xt_percpu_counter_alloc();
684
683
if (IS_ERR_VALUE(e->counters.pcnt))
685
684
return -ENOMEM;
···
726
733
const struct xt_entry_target *t;
727
734
unsigned int verdict;
728
735
729
-
if (!unconditional(&e->ipv6))
736
+
if (!unconditional(e))
730
737
return false;
731
738
t = ip6t_get_target_c(e);
732
739
if (strcmp(t->u.user.name, XT_STANDARD_TARGET) != 0)
···
746
753
unsigned int valid_hooks)
747
754
{
748
755
unsigned int h;
756
+
int err;
749
757
750
758
if ((unsigned long)e % __alignof__(struct ip6t_entry) != 0 ||
751
-
(unsigned char *)e + sizeof(struct ip6t_entry) >= limit) {
759
+
(unsigned char *)e + sizeof(struct ip6t_entry) >= limit ||
760
+
(unsigned char *)e + e->next_offset > limit) {
752
761
duprintf("Bad offset %p\n", e);
753
762
return -EINVAL;
754
763
}
···
762
767
return -EINVAL;
763
768
}
764
769
770
+
err = check_entry(e);
771
+
if (err)
772
+
return err;
773
+
765
774
/* Check hooks & underflows */
766
775
for (h = 0; h < NF_INET_NUMHOOKS; h++) {
767
776
if (!(valid_hooks & (1 << h)))
···
774
775
newinfo->hook_entry[h] = hook_entries[h];
775
776
if ((unsigned char *)e - base == underflows[h]) {
776
777
if (!check_underflow(e)) {
777
-
pr_err("Underflows must be unconditional and "
778
-
"use the STANDARD target with "
779
-
"ACCEPT/DROP\n");
778
+
pr_debug("Underflows must be unconditional and "
779
+
"use the STANDARD target with "
780
+
"ACCEPT/DROP\n");
780
781
return -EINVAL;
781
782
}
782
783
newinfo->underflow[h] = underflows[h];
···
1168
1169
*len, sizeof(get) + get.size);
1169
1170
return -EINVAL;
1170
1171
}
1172
+
get.name[sizeof(get.name) - 1] = '\0';
1171
1173
1172
1174
t = xt_find_table_lock(net, AF_INET6, get.name);
1173
1175
if (!IS_ERR_OR_NULL(t)) {
···
1505
1505
1506
1506
duprintf("check_compat_entry_size_and_hooks %p\n", e);
1507
1507
if ((unsigned long)e % __alignof__(struct compat_ip6t_entry) != 0 ||
1508
-
(unsigned char *)e + sizeof(struct compat_ip6t_entry) >= limit) {
1508
+
(unsigned char *)e + sizeof(struct compat_ip6t_entry) >= limit ||
1509
+
(unsigned char *)e + e->next_offset > limit) {
1509
1510
duprintf("Bad offset %p, limit = %p\n", e, limit);
1510
1511
return -EINVAL;
1511
1512
}
···
1519
1518
}
1520
1519
1521
1520
/* For purposes of check_entry casting the compat entry is fine */
1522
-
ret = check_entry((struct ip6t_entry *)e, name);
1521
+
ret = check_entry((struct ip6t_entry *)e);
1523
1522
if (ret)
1524
1523
return ret;
1525
1524
···
1945
1944
*len, sizeof(get) + get.size);
1946
1945
return -EINVAL;
1947
1946
}
1947
+
get.name[sizeof(get.name) - 1] = '\0';
1948
1948
1949
1949
xt_compat_lock(AF_INET6);
1950
1950
t = xt_find_table_lock(net, AF_INET6, get.name);
+2
-2
net/ipv6/udp.c
···
843
843
flush_stack(stack, count, skb, count - 1);
844
844
} else {
845
845
if (!inner_flushed)
846
-
UDP_INC_STATS_BH(net, UDP_MIB_IGNOREDMULTI,
847
-
proto == IPPROTO_UDPLITE);
846
+
UDP6_INC_STATS_BH(net, UDP_MIB_IGNOREDMULTI,
847
+
proto == IPPROTO_UDPLITE);
848
848
consume_skb(skb);
849
849
}
850
850
return 0;
+1
-1
net/netfilter/ipset/ip_set_bitmap_gen.h
···
95
95
if (!nested)
96
96
goto nla_put_failure;
97
97
if (mtype_do_head(skb, map) ||
98
-
nla_put_net32(skb, IPSET_ATTR_REFERENCES, htonl(set->ref - 1)) ||
98
+
nla_put_net32(skb, IPSET_ATTR_REFERENCES, htonl(set->ref)) ||
99
99
nla_put_net32(skb, IPSET_ATTR_MEMSIZE, htonl(memsize)))
100
100
goto nla_put_failure;
101
101
if (unlikely(ip_set_put_flags(skb, set)))
+28
-5
net/netfilter/ipset/ip_set_core.c
···
497
497
write_unlock_bh(&ip_set_ref_lock);
498
498
}
499
499
500
+
/* set->ref can be swapped out by ip_set_swap, netlink events (like dump) need
501
+
* a separate reference counter
502
+
*/
503
+
static inline void
504
+
__ip_set_get_netlink(struct ip_set *set)
505
+
{
506
+
write_lock_bh(&ip_set_ref_lock);
507
+
set->ref_netlink++;
508
+
write_unlock_bh(&ip_set_ref_lock);
509
+
}
510
+
511
+
static inline void
512
+
__ip_set_put_netlink(struct ip_set *set)
513
+
{
514
+
write_lock_bh(&ip_set_ref_lock);
515
+
BUG_ON(set->ref_netlink == 0);
516
+
set->ref_netlink--;
517
+
write_unlock_bh(&ip_set_ref_lock);
518
+
}
519
+
500
520
/* Add, del and test set entries from kernel.
501
521
*
502
522
* The set behind the index must exist and must be referenced
···
1022
1002
if (!attr[IPSET_ATTR_SETNAME]) {
1023
1003
for (i = 0; i < inst->ip_set_max; i++) {
1024
1004
s = ip_set(inst, i);
1025
-
if (s && s->ref) {
1005
+
if (s && (s->ref || s->ref_netlink)) {
1026
1006
ret = -IPSET_ERR_BUSY;
1027
1007
goto out;
1028
1008
}
···
1044
1024
if (!s) {
1045
1025
ret = -ENOENT;
1046
1026
goto out;
1047
-
} else if (s->ref) {
1027
+
} else if (s->ref || s->ref_netlink) {
1048
1028
ret = -IPSET_ERR_BUSY;
1049
1029
goto out;
1050
1030
}
···
1191
1171
from->family == to->family))
1192
1172
return -IPSET_ERR_TYPE_MISMATCH;
1193
1173
1174
+
if (from->ref_netlink || to->ref_netlink)
1175
+
return -EBUSY;
1176
+
1194
1177
strncpy(from_name, from->name, IPSET_MAXNAMELEN);
1195
1178
strncpy(from->name, to->name, IPSET_MAXNAMELEN);
1196
1179
strncpy(to->name, from_name, IPSET_MAXNAMELEN);
···
1229
1206
if (set->variant->uref)
1230
1207
set->variant->uref(set, cb, false);
1231
1208
pr_debug("release set %s\n", set->name);
1232
-
__ip_set_put_byindex(inst, index);
1209
+
__ip_set_put_netlink(set);
1233
1210
}
1234
1211
return 0;
1235
1212
}
···
1351
1328
if (!cb->args[IPSET_CB_ARG0]) {
1352
1329
/* Start listing: make sure set won't be destroyed */
1353
1330
pr_debug("reference set\n");
1354
-
set->ref++;
1331
+
set->ref_netlink++;
1355
1332
}
1356
1333
write_unlock_bh(&ip_set_ref_lock);
1357
1334
nlh = start_msg(skb, NETLINK_CB(cb->skb).portid,
···
1419
1396
if (set->variant->uref)
1420
1397
set->variant->uref(set, cb, false);
1421
1398
pr_debug("release set %s\n", set->name);
1422
-
__ip_set_put_byindex(inst, index);
1399
+
__ip_set_put_netlink(set);
1423
1400
cb->args[IPSET_CB_ARG0] = 0;
1424
1401
}
1425
1402
out:
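
The ip_set_core.c hunks introduce a second counter, ref_netlink, taken for the lifetime of a netlink dump so that swap/destroy can return -EBUSY instead of racing with an in-flight dump, and the REFERENCES attribute no longer has to subtract the dump's own reference. A compilable sketch of the two-counter idea; pthread locking stands in for the kernel's rwlock and all names are illustrative:

#include <pthread.h>
#include <stdio.h>

struct set_stub {
	unsigned int ref;		/* kernel-side users            */
	unsigned int ref_netlink;	/* in-flight netlink dumps      */
	pthread_mutex_t lock;
};

static int set_destroy(struct set_stub *s)
{
	int busy;

	pthread_mutex_lock(&s->lock);
	busy = (s->ref || s->ref_netlink);	/* either kind blocks destroy */
	pthread_mutex_unlock(&s->lock);

	return busy ? -1 /* -EBUSY */ : 0;
}

int main(void)
{
	struct set_stub s = { .ref = 0, .ref_netlink = 1,
			      .lock = PTHREAD_MUTEX_INITIALIZER };

	printf("destroy while dump running: %d\n", set_destroy(&s));
	s.ref_netlink = 0;
	printf("destroy afterwards: %d\n", set_destroy(&s));
	return 0;
}
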
+1
-1
net/netfilter/ipset/ip_set_hash_gen.h
···
1082
1082
if (nla_put_u32(skb, IPSET_ATTR_MARKMASK, h->markmask))
1083
1083
goto nla_put_failure;
1084
1084
#endif
1085
-
if (nla_put_net32(skb, IPSET_ATTR_REFERENCES, htonl(set->ref - 1)) ||
1085
+
if (nla_put_net32(skb, IPSET_ATTR_REFERENCES, htonl(set->ref)) ||
1086
1086
nla_put_net32(skb, IPSET_ATTR_MEMSIZE, htonl(memsize)))
1087
1087
goto nla_put_failure;
1088
1088
if (unlikely(ip_set_put_flags(skb, set)))
+1
-1
net/netfilter/ipset/ip_set_list_set.c
···
458
458
if (!nested)
459
459
goto nla_put_failure;
460
460
if (nla_put_net32(skb, IPSET_ATTR_SIZE, htonl(map->size)) ||
461
-
nla_put_net32(skb, IPSET_ATTR_REFERENCES, htonl(set->ref - 1)) ||
461
+
nla_put_net32(skb, IPSET_ATTR_REFERENCES, htonl(set->ref)) ||
462
462
nla_put_net32(skb, IPSET_ATTR_MEMSIZE,
463
463
htonl(sizeof(*map) + n * set->dsize)))
464
464
goto nla_put_failure;
+6
-1
net/netfilter/nfnetlink_queue.c
···
582
582
/* nfnetlink_unicast will either free the nskb or add it to a socket */
583
583
err = nfnetlink_unicast(nskb, net, queue->peer_portid, MSG_DONTWAIT);
584
584
if (err < 0) {
585
-
queue->queue_user_dropped++;
585
+
if (queue->flags & NFQA_CFG_F_FAIL_OPEN) {
586
+
failopen = 1;
587
+
err = 0;
588
+
} else {
589
+
queue->queue_user_dropped++;
590
+
}
586
591
goto err_out_unlock;
587
592
}
588
593
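
The nfnetlink_queue.c hunk extends fail-open handling to delivery failures: if unicasting the packet to userspace fails and the queue was configured fail-open, the packet is accepted rather than accounted as a user drop. A toy sketch of that decision, with illustrative names and constants rather than the NFQA_CFG_F_FAIL_OPEN plumbing:

#include <stdbool.h>
#include <stdio.h>

enum verdict_stub { VERDICT_DROP, VERDICT_ACCEPT };

static enum verdict_stub on_delivery_failure(bool fail_open,
					     unsigned int *dropped)
{
	if (fail_open)
		return VERDICT_ACCEPT;	/* let traffic through on overload  */
	(*dropped)++;			/* old behaviour: account and drop  */
	return VERDICT_DROP;
}

int main(void)
{
	unsigned int dropped = 0;

	printf("fail-open queue: %d\n", on_delivery_failure(true, &dropped));
	printf("default queue:  %d (dropped=%u)\n",
	       on_delivery_failure(false, &dropped), dropped);
	return 0;
}
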
+3
-1
net/openvswitch/Kconfig
···
7
7
depends on INET
8
8
depends on !NF_CONNTRACK || \
9
9
(NF_CONNTRACK && ((!NF_DEFRAG_IPV6 || NF_DEFRAG_IPV6) && \
10
-
(!NF_NAT || NF_NAT)))
10
+
(!NF_NAT || NF_NAT) && \
11
+
(!NF_NAT_IPV4 || NF_NAT_IPV4) && \
12
+
(!NF_NAT_IPV6 || NF_NAT_IPV6)))
11
13
select LIBCRC32C
12
14
select MPLS
13
15
select NET_MPLS_GSO
+13
-11
net/openvswitch/conntrack.c
···
535
535
switch (ctinfo) {
536
536
case IP_CT_RELATED:
537
537
case IP_CT_RELATED_REPLY:
538
-
if (skb->protocol == htons(ETH_P_IP) &&
538
+
if (IS_ENABLED(CONFIG_NF_NAT_IPV4) &&
539
+
skb->protocol == htons(ETH_P_IP) &&
539
540
ip_hdr(skb)->protocol == IPPROTO_ICMP) {
540
541
if (!nf_nat_icmp_reply_translation(skb, ct, ctinfo,
541
542
hooknum))
542
543
err = NF_DROP;
543
544
goto push;
544
-
#if IS_ENABLED(CONFIG_NF_NAT_IPV6)
545
-
} else if (skb->protocol == htons(ETH_P_IPV6)) {
545
+
} else if (IS_ENABLED(CONFIG_NF_NAT_IPV6) &&
546
+
skb->protocol == htons(ETH_P_IPV6)) {
546
547
__be16 frag_off;
547
548
u8 nexthdr = ipv6_hdr(skb)->nexthdr;
548
549
int hdrlen = ipv6_skip_exthdr(skb,
···
558
557
err = NF_DROP;
559
558
goto push;
560
559
}
561
-
#endif
562
560
}
563
561
/* Non-ICMP, fall thru to initialize if needed. */
564
562
case IP_CT_NEW:
···
664
664
665
665
/* Determine NAT type.
666
666
* Check if the NAT type can be deduced from the tracked connection.
667
-
* Make sure expected traffic is NATted only when committing.
667
+
* Make sure new expected connections (IP_CT_RELATED) are NATted only
668
+
* when committing.
668
669
*/
669
670
if (info->nat & OVS_CT_NAT && ctinfo != IP_CT_NEW &&
670
671
ct->status & IPS_NAT_MASK &&
671
-
(!(ct->status & IPS_EXPECTED_BIT) || info->commit)) {
672
+
(ctinfo != IP_CT_RELATED || info->commit)) {
672
673
/* NAT an established or related connection like before. */
673
674
if (CTINFO2DIR(ctinfo) == IP_CT_DIR_REPLY)
674
675
/* This is the REPLY direction for a connection
···
969
968
break;
970
969
971
970
case OVS_NAT_ATTR_IP_MIN:
972
-
nla_memcpy(&info->range.min_addr, a, nla_len(a));
971
+
nla_memcpy(&info->range.min_addr, a,
972
+
sizeof(info->range.min_addr));
973
973
info->range.flags |= NF_NAT_RANGE_MAP_IPS;
974
974
break;
975
975
···
1240
1238
}
1241
1239
1242
1240
if (info->range.flags & NF_NAT_RANGE_MAP_IPS) {
1243
-
if (info->family == NFPROTO_IPV4) {
1241
+
if (IS_ENABLED(CONFIG_NF_NAT_IPV4) &&
1242
+
info->family == NFPROTO_IPV4) {
1244
1243
if (nla_put_in_addr(skb, OVS_NAT_ATTR_IP_MIN,
1245
1244
info->range.min_addr.ip) ||
1246
1245
(info->range.max_addr.ip
···
1249
1246
(nla_put_in_addr(skb, OVS_NAT_ATTR_IP_MAX,
1250
1247
info->range.max_addr.ip))))
1251
1248
return false;
1252
-
#if IS_ENABLED(CONFIG_NF_NAT_IPV6)
1253
-
} else if (info->family == NFPROTO_IPV6) {
1249
+
} else if (IS_ENABLED(CONFIG_NF_NAT_IPV6) &&
1250
+
info->family == NFPROTO_IPV6) {
1254
1251
if (nla_put_in6_addr(skb, OVS_NAT_ATTR_IP_MIN,
1255
1252
&info->range.min_addr.in6) ||
1256
1253
(memcmp(&info->range.max_addr.in6,
···
1259
1256
(nla_put_in6_addr(skb, OVS_NAT_ATTR_IP_MAX,
1260
1257
&info->range.max_addr.in6))))
1261
1258
return false;
1262
-
#endif
1263
1259
} else {
1264
1260
return false;
1265
1261
}
+3
-3
net/sctp/output.c
···
401
401
sk = chunk->skb->sk;
402
402
403
403
/* Allocate the new skb. */
404
-
nskb = alloc_skb(packet->size + MAX_HEADER, GFP_ATOMIC);
404
+
nskb = alloc_skb(packet->size + MAX_HEADER, gfp);
405
405
if (!nskb)
406
406
goto nomem;
407
407
···
523
523
*/
524
524
if (auth)
525
525
sctp_auth_calculate_hmac(asoc, nskb,
526
-
(struct sctp_auth_chunk *)auth,
527
-
GFP_ATOMIC);
526
+
(struct sctp_auth_chunk *)auth,
527
+
gfp);
528
528
529
529
/* 2) Calculate the Adler-32 checksum of the whole packet,
530
530
* including the SCTP common header and all the
+1
-1
net/switchdev/switchdev.c
+3
net/xfrm/xfrm_input.c
···
292
292
XFRM_SKB_CB(skb)->seq.input.hi = seq_hi;
293
293
294
294
skb_dst_force(skb);
295
+
dev_hold(skb->dev);
295
296
296
297
nexthdr = x->type->input(x, skb);
297
298
298
299
if (nexthdr == -EINPROGRESS)
299
300
return 0;
300
301
resume:
302
+
dev_put(skb->dev);
303
+
301
304
spin_lock(&x->lock);
302
305
if (nexthdr <= 0) {
303
306
if (nexthdr == -EBADMSG) {
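
The xfrm_input.c hunk pins skb->dev with dev_hold() before calling the transform's input handler, which may complete asynchronously (-EINPROGRESS), and releases the reference on the resume path. A small userspace sketch of that pin-across-async pattern; the counter helpers below are stand-ins for dev_hold()/dev_put(), not the kernel's:

#include <stdio.h>

struct dev_stub { int refcnt; };

static void dev_hold_stub(struct dev_stub *d) { d->refcnt++; }
static void dev_put_stub(struct dev_stub *d)  { d->refcnt--; }

static int async_input(struct dev_stub *dev)
{
	dev_hold_stub(dev);	/* pinned for the duration of the async work */
	/* ... the crypto offload would run here and call back later ...     */
	return -1;		/* "-EINPROGRESS": resume happens later       */
}

static void resume(struct dev_stub *dev)
{
	dev_put_stub(dev);	/* balanced only on the resume path           */
}

int main(void)
{
	struct dev_stub dev = { .refcnt = 1 };

	if (async_input(&dev) == -1)
		resume(&dev);	/* completion path                            */
	printf("refcnt back to %d\n", dev.refcnt);
	return 0;
}
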
+15
-9
sound/core/timer.c
···
1019
1019
njiff += timer->sticks - priv->correction;
1020
1020
priv->correction = 0;
1021
1021
}
1022
-
priv->last_expires = priv->tlist.expires = njiff;
1023
-
add_timer(&priv->tlist);
1022
+
priv->last_expires = njiff;
1023
+
mod_timer(&priv->tlist, njiff);
1024
1024
return 0;
1025
1025
}
1026
1026
···
1502
1502
return err;
1503
1503
}
1504
1504
1505
-
static int snd_timer_user_gparams(struct file *file,
1506
-
struct snd_timer_gparams __user *_gparams)
1505
+
static int timer_set_gparams(struct snd_timer_gparams *gparams)
1507
1506
{
1508
-
struct snd_timer_gparams gparams;
1509
1507
struct snd_timer *t;
1510
1508
int err;
1511
1509
1512
-
if (copy_from_user(&gparams, _gparams, sizeof(gparams)))
1513
-
return -EFAULT;
1514
1510
mutex_lock(&register_mutex);
1515
-
t = snd_timer_find(&gparams.tid);
1511
+
t = snd_timer_find(&gparams->tid);
1516
1512
if (!t) {
1517
1513
err = -ENODEV;
1518
1514
goto _error;
···
1521
1525
err = -ENOSYS;
1522
1526
goto _error;
1523
1527
}
1524
-
err = t->hw.set_period(t, gparams.period_num, gparams.period_den);
1528
+
err = t->hw.set_period(t, gparams->period_num, gparams->period_den);
1525
1529
_error:
1526
1530
mutex_unlock(&register_mutex);
1527
1531
return err;
1532
+
}
1533
+
1534
+
static int snd_timer_user_gparams(struct file *file,
1535
+
struct snd_timer_gparams __user *_gparams)
1536
+
{
1537
+
struct snd_timer_gparams gparams;
1538
+
1539
+
if (copy_from_user(&gparams, _gparams, sizeof(gparams)))
1540
+
return -EFAULT;
1541
+
return timer_set_gparams(&gparams);
1528
1542
}
1529
1543
1530
1544
static int snd_timer_user_gstatus(struct file *file,
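
The sound/core/timer.c hunks switch the system-timer rearm to mod_timer() and split snd_timer_user_gparams() into a copy_from_user() shim plus timer_set_gparams(), so the new 32-bit compat handler in timer_compat.c can repack its layout and reuse the same core. A compilable sketch of that shim-plus-core split; struct layouts and function names are illustrative, and plain memcpy() stands in for copy_from_user():

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct gparams   { uint32_t period_num, period_den; };
struct gparams32 { uint32_t period_num, period_den; } __attribute__((packed));

static int set_gparams(const struct gparams *gp)	/* shared core helper */
{
	printf("period = %u/%u\n", gp->period_num, gp->period_den);
	return 0;
}

static int ioctl_gparams(const void *user_arg)		/* native entry point */
{
	struct gparams gp;

	memcpy(&gp, user_arg, sizeof(gp));		/* "copy_from_user"   */
	return set_gparams(&gp);
}

static int ioctl_gparams_compat(const void *user_arg)	/* 32-bit entry point */
{
	const struct gparams32 *u = user_arg;
	struct gparams gp = { .period_num = u->period_num,
			      .period_den = u->period_den };

	return set_gparams(&gp);			/* same core as above */
}

int main(void)
{
	struct gparams32 arg = { 1, 1000 };

	ioctl_gparams_compat(&arg);
	ioctl_gparams(&(struct gparams){ 1, 48000 });
	return 0;
}
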
+29
-1
sound/core/timer_compat.c
···
22
22
23
23
#include <linux/compat.h>
24
24
25
+
/*
26
+
* ILP32/LP64 has different size for 'long' type. Additionally, the size
27
+
* of storage alignment differs depending on architectures. Here, '__packed'
28
+
* qualifier is used so that the size of this structure is multiple of 4 and
29
+
* it fits to any architectures with 32 bit storage alignment.
30
+
*/
31
+
struct snd_timer_gparams32 {
32
+
struct snd_timer_id tid;
33
+
u32 period_num;
34
+
u32 period_den;
35
+
unsigned char reserved[32];
36
+
} __packed;
37
+
25
38
struct snd_timer_info32 {
26
39
u32 flags;
27
40
s32 card;
···
44
31
u32 resolution;
45
32
unsigned char reserved[64];
46
33
};
34
+
35
+
static int snd_timer_user_gparams_compat(struct file *file,
36
+
struct snd_timer_gparams32 __user *user)
37
+
{
38
+
struct snd_timer_gparams gparams;
39
+
40
+
if (copy_from_user(&gparams.tid, &user->tid, sizeof(gparams.tid)) ||
41
+
get_user(gparams.period_num, &user->period_num) ||
42
+
get_user(gparams.period_den, &user->period_den))
43
+
return -EFAULT;
44
+
45
+
return timer_set_gparams(&gparams);
46
+
}
47
47
48
48
static int snd_timer_user_info_compat(struct file *file,
49
49
struct snd_timer_info32 __user *_info)
···
125
99
*/
126
100
127
101
enum {
102
+
SNDRV_TIMER_IOCTL_GPARAMS32 = _IOW('T', 0x04, struct snd_timer_gparams32),
128
103
SNDRV_TIMER_IOCTL_INFO32 = _IOR('T', 0x11, struct snd_timer_info32),
129
104
SNDRV_TIMER_IOCTL_STATUS32 = _IOW('T', 0x14, struct snd_timer_status32),
130
105
#ifdef CONFIG_X86_X32
···
141
114
case SNDRV_TIMER_IOCTL_PVERSION:
142
115
case SNDRV_TIMER_IOCTL_TREAD:
143
116
case SNDRV_TIMER_IOCTL_GINFO:
144
-
case SNDRV_TIMER_IOCTL_GPARAMS:
145
117
case SNDRV_TIMER_IOCTL_GSTATUS:
146
118
case SNDRV_TIMER_IOCTL_SELECT:
147
119
case SNDRV_TIMER_IOCTL_PARAMS:
···
154
128
case SNDRV_TIMER_IOCTL_PAUSE_OLD:
155
129
case SNDRV_TIMER_IOCTL_NEXT_DEVICE:
156
130
return snd_timer_user_ioctl(file, cmd, (unsigned long)argp);
131
+
case SNDRV_TIMER_IOCTL_GPARAMS32:
132
+
return snd_timer_user_gparams_compat(file, argp);
157
133
case SNDRV_TIMER_IOCTL_INFO32:
158
134
return snd_timer_user_info_compat(file, argp);
159
135
case SNDRV_TIMER_IOCTL_STATUS32:
+4
-10
sound/firewire/dice/dice-stream.c
···
446
446
447
447
void snd_dice_stream_destroy_duplex(struct snd_dice *dice)
448
448
{
449
-
struct reg_params tx_params, rx_params;
449
+
unsigned int i;
450
450
451
-
snd_dice_transaction_clear_enable(dice);
452
-
453
-
if (get_register_params(dice, &tx_params, &rx_params) == 0) {
454
-
stop_streams(dice, AMDTP_IN_STREAM, &tx_params);
455
-
stop_streams(dice, AMDTP_OUT_STREAM, &rx_params);
451
+
for (i = 0; i < MAX_STREAMS; i++) {
452
+
destroy_stream(dice, AMDTP_IN_STREAM, i);
453
+
destroy_stream(dice, AMDTP_OUT_STREAM, i);
456
454
}
457
-
458
-
release_resources(dice);
459
-
460
-
dice->substreams_counter = 0;
461
455
}
462
456
463
457
void snd_dice_stream_update_duplex(struct snd_dice *dice)
+4
sound/pci/hda/hda_intel.c
···
2361
2361
.driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
2362
2362
{ PCI_DEVICE(0x1002, 0xaae8),
2363
2363
.driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
2364
+
{ PCI_DEVICE(0x1002, 0xaae0),
2365
+
.driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
2366
+
{ PCI_DEVICE(0x1002, 0xaaf0),
2367
+
.driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
2364
2368
/* VIA VT8251/VT8237A */
2365
2369
{ PCI_DEVICE(0x1106, 0x3288), .driver_data = AZX_DRIVER_VIA },
2366
2370
/* VIA GFX VT7122/VX900 */
+18
-1
sound/pci/hda/patch_realtek.c
···
4759
4759
ALC255_FIXUP_DELL_SPK_NOISE,
4760
4760
ALC225_FIXUP_DELL1_MIC_NO_PRESENCE,
4761
4761
ALC280_FIXUP_HP_HEADSET_MIC,
4762
+
ALC221_FIXUP_HP_FRONT_MIC,
4762
4763
};
4763
4764
4764
4765
static const struct hda_fixup alc269_fixups[] = {
···
5402
5401
.chained = true,
5403
5402
.chain_id = ALC269_FIXUP_HEADSET_MIC,
5404
5403
},
5404
+
[ALC221_FIXUP_HP_FRONT_MIC] = {
5405
+
.type = HDA_FIXUP_PINS,
5406
+
.v.pins = (const struct hda_pintbl[]) {
5407
+
{ 0x19, 0x02a19020 }, /* Front Mic */
5408
+
{ }
5409
+
},
5410
+
},
5405
5411
};
5406
5412
5407
5413
static const struct snd_pci_quirk alc269_fixup_tbl[] = {
···
5514
5506
SND_PCI_QUIRK(0x103c, 0x2336, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
5515
5507
SND_PCI_QUIRK(0x103c, 0x2337, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
5516
5508
SND_PCI_QUIRK(0x103c, 0x221c, "HP EliteBook 755 G2", ALC280_FIXUP_HP_HEADSET_MIC),
5509
+
SND_PCI_QUIRK(0x103c, 0x8256, "HP", ALC221_FIXUP_HP_FRONT_MIC),
5517
5510
SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
5518
5511
SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
5519
5512
SND_PCI_QUIRK(0x1043, 0x115d, "Asus 1015E", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
···
6415
6406
ALC668_FIXUP_AUTO_MUTE,
6416
6407
ALC668_FIXUP_DELL_DISABLE_AAMIX,
6417
6408
ALC668_FIXUP_DELL_XPS13,
6409
+
ALC662_FIXUP_ASUS_Nx50,
6418
6410
};
6419
6411
6420
6412
static const struct hda_fixup alc662_fixups[] = {
···
6656
6646
.type = HDA_FIXUP_FUNC,
6657
6647
.v.func = alc_fixup_bass_chmap,
6658
6648
},
6649
+
[ALC662_FIXUP_ASUS_Nx50] = {
6650
+
.type = HDA_FIXUP_FUNC,
6651
+
.v.func = alc_fixup_auto_mute_via_amp,
6652
+
.chained = true,
6653
+
.chain_id = ALC662_FIXUP_BASS_1A
6654
+
},
6659
6655
};
6660
6656
6661
6657
static const struct snd_pci_quirk alc662_fixup_tbl[] = {
···
6684
6668
SND_PCI_QUIRK(0x1028, 0x0698, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
6685
6669
SND_PCI_QUIRK(0x1028, 0x069f, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
6686
6670
SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800),
6687
-
SND_PCI_QUIRK(0x1043, 0x11cd, "Asus N550", ALC662_FIXUP_BASS_1A),
6671
+
SND_PCI_QUIRK(0x1043, 0x11cd, "Asus N550", ALC662_FIXUP_ASUS_Nx50),
6688
6672
SND_PCI_QUIRK(0x1043, 0x13df, "Asus N550JX", ALC662_FIXUP_BASS_1A),
6673
+
SND_PCI_QUIRK(0x1043, 0x129d, "Asus N750", ALC662_FIXUP_ASUS_Nx50),
6689
6674
SND_PCI_QUIRK(0x1043, 0x1477, "ASUS N56VZ", ALC662_FIXUP_BASS_MODE4_CHMAP),
6690
6675
SND_PCI_QUIRK(0x1043, 0x15a7, "ASUS UX51VZH", ALC662_FIXUP_BASS_16),
6691
6676
SND_PCI_QUIRK(0x1043, 0x1b73, "ASUS N55SF", ALC662_FIXUP_BASS_16),
+4
sound/usb/quirks.c
···
150
150
usb_audio_err(chip, "cannot memdup\n");
151
151
return -ENOMEM;
152
152
}
153
+
INIT_LIST_HEAD(&fp->list);
153
154
if (fp->nr_rates > MAX_NR_RATES) {
154
155
kfree(fp);
155
156
return -EINVAL;
···
194
193
return 0;
195
194
196
195
error:
196
+
list_del(&fp->list); /* unlink for avoiding double-free */
197
197
kfree(fp);
198
198
kfree(rate_table);
199
199
return err;
···
471
469
fp->ep_attr = get_endpoint(alts, 0)->bmAttributes;
472
470
fp->datainterval = 0;
473
471
fp->maxpacksize = le16_to_cpu(get_endpoint(alts, 0)->wMaxPacketSize);
472
+
INIT_LIST_HEAD(&fp->list);
474
473
475
474
switch (fp->maxpacksize) {
476
475
case 0x120:
···
495
492
? SNDRV_PCM_STREAM_CAPTURE : SNDRV_PCM_STREAM_PLAYBACK;
496
493
err = snd_usb_add_audio_stream(chip, stream, fp);
497
494
if (err < 0) {
495
+
list_del(&fp->list); /* unlink for avoiding double-free */
498
496
kfree(fp);
499
497
return err;
500
498
}
+5
-1
sound/usb/stream.c
···
316
316
/*
317
317
* add this endpoint to the chip instance.
318
318
* if a stream with the same endpoint already exists, append to it.
319
-
* if not, create a new pcm stream.
319
+
* if not, create a new pcm stream. note, fp is added to the substream
320
+
* fmt_list and will be freed on the chip instance release. do not free
321
+
* fp or remove it from the substream fmt_list to avoid double-free.
320
322
*/
321
323
int snd_usb_add_audio_stream(struct snd_usb_audio *chip,
322
324
int stream,
···
679
677
* (fp->maxpacksize & 0x7ff);
680
678
fp->attributes = parse_uac_endpoint_attributes(chip, alts, protocol, iface_no);
681
679
fp->clock = clock;
680
+
INIT_LIST_HEAD(&fp->list);
682
681
683
682
/* some quirks for attributes here */
684
683
···
728
725
dev_dbg(&dev->dev, "%u:%d: add audio endpoint %#x\n", iface_no, altno, fp->endpoint);
729
726
err = snd_usb_add_audio_stream(chip, stream, fp);
730
727
if (err < 0) {
728
+
list_del(&fp->list); /* unlink for avoiding double-free */
731
729
kfree(fp->rate_table);
732
730
kfree(fp->chmap);
733
731
kfree(fp);
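
The sound/usb quirks.c/stream.c hunks make the audioformat's list node always valid (INIT_LIST_HEAD at allocation) and unlink it with list_del() before freeing on error paths, so the substream fmt_list never points at freed memory. A standalone sketch of that unlink-before-free rule, using a tiny hand-rolled doubly linked list in place of the kernel's list_head; all names are illustrative:

#include <stdio.h>
#include <stdlib.h>

struct node { struct node *prev, *next; };

static void list_init(struct node *n) { n->prev = n->next = n; }

static void list_add_stub(struct node *head, struct node *n)
{
	n->next = head->next;  n->prev = head;
	head->next->prev = n;  head->next = n;
}

static void list_del_stub(struct node *n)
{
	n->prev->next = n->next;  n->next->prev = n->prev;
	list_init(n);			/* harmless to delete again later */
}

struct fmt { struct node list; int rate; };

int main(void)
{
	struct node fmt_list;
	struct fmt *fp = calloc(1, sizeof(*fp));

	if (!fp)
		return 1;
	list_init(&fmt_list);
	list_init(&fp->list);		/* valid even if never added      */
	list_add_stub(&fmt_list, &fp->list);

	/* error path: unlink first, then free, so fmt_list stays sane */
	list_del_stub(&fp->list);
	free(fp);

	printf("list empty after error path: %d\n", fmt_list.next == &fmt_list);
	return 0;
}
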
+8
-4
tools/lib/lockdep/run_tests.sh
···
3
3
make &> /dev/null
4
4
5
5
for i in `ls tests/*.c`; do
6
-
testname=$(basename -s .c "$i")
6
+
testname=$(basename "$i" .c)
7
7
gcc -o tests/$testname -pthread -lpthread $i liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &> /dev/null
8
8
echo -ne "$testname... "
9
9
if [ $(timeout 1 ./tests/$testname | wc -l) -gt 0 ]; then
···
11
11
else
12
12
echo "FAILED!"
13
13
fi
14
-
rm tests/$testname
14
+
if [ -f "tests/$testname" ]; then
15
+
rm tests/$testname
16
+
fi
15
17
done
16
18
17
19
for i in `ls tests/*.c`; do
18
-
testname=$(basename -s .c "$i")
20
+
testname=$(basename "$i" .c)
19
21
gcc -o tests/$testname -pthread -lpthread -Iinclude $i &> /dev/null
20
22
echo -ne "(PRELOAD) $testname... "
21
23
if [ $(timeout 1 ./lockdep ./tests/$testname | wc -l) -gt 0 ]; then
···
25
23
else
26
24
echo "FAILED!"
27
25
fi
28
-
rm tests/$testname
26
+
if [ -f "tests/$testname" ]; then
27
+
rm tests/$testname
28
+
fi
29
29
done
+1
tools/perf/MANIFEST
+2
tools/perf/arch/powerpc/util/header.c
+1
-1
tools/perf/tests/perf-targz-src-pkg
+1
-1
tools/perf/ui/browsers/hists.c
···
337
337
chain = list_entry(node->val.next, struct callchain_list, list);
338
338
chain->has_children = has_sibling;
339
339
340
-
if (node->val.next != node->val.prev) {
340
+
if (!list_empty(&node->val)) {
341
341
chain = list_entry(node->val.prev, struct callchain_list, list);
342
342
chain->has_children = !RB_EMPTY_ROOT(&node->rb_root);
343
343
}
+16
-7
tools/perf/util/event.c
···
56
56
return perf_event__names[id];
57
57
}
58
58
59
-
static struct perf_sample synth_sample = {
59
+
static int perf_tool__process_synth_event(struct perf_tool *tool,
60
+
union perf_event *event,
61
+
struct machine *machine,
62
+
perf_event__handler_t process)
63
+
{
64
+
struct perf_sample synth_sample = {
60
65
.pid = -1,
61
66
.tid = -1,
62
67
.time = -1,
63
68
.stream_id = -1,
64
69
.cpu = -1,
65
70
.period = 1,
71
+
.cpumode = event->header.misc & PERF_RECORD_MISC_CPUMODE_MASK,
72
+
};
73
+
74
+
return process(tool, event, &synth_sample, machine);
66
75
};
67
76
68
77
/*
···
195
186
if (perf_event__prepare_comm(event, pid, machine, &tgid, &ppid) != 0)
196
187
return -1;
197
188
198
-
if (process(tool, event, &synth_sample, machine) != 0)
189
+
if (perf_tool__process_synth_event(tool, event, machine, process) != 0)
199
190
return -1;
200
191
201
192
return tgid;
···
227
218
228
219
event->fork.header.size = (sizeof(event->fork) + machine->id_hdr_size);
229
220
230
-
if (process(tool, event, &synth_sample, machine) != 0)
221
+
if (perf_tool__process_synth_event(tool, event, machine, process) != 0)
231
222
return -1;
232
223
233
224
return 0;
···
353
344
event->mmap2.pid = tgid;
354
345
event->mmap2.tid = pid;
355
346
356
-
if (process(tool, event, &synth_sample, machine) != 0) {
347
+
if (perf_tool__process_synth_event(tool, event, machine, process) != 0) {
357
348
rc = -1;
358
349
break;
359
350
}
···
411
402
412
403
memcpy(event->mmap.filename, pos->dso->long_name,
413
404
pos->dso->long_name_len + 1);
414
-
if (process(tool, event, &synth_sample, machine) != 0) {
405
+
if (perf_tool__process_synth_event(tool, event, machine, process) != 0) {
415
406
rc = -1;
416
407
break;
417
408
}
···
481
472
/*
482
473
* Send the prepared comm event
483
474
*/
484
-
if (process(tool, comm_event, &synth_sample, machine) != 0)
475
+
if (perf_tool__process_synth_event(tool, comm_event, machine, process) != 0)
485
476
break;
486
477
487
478
rc = 0;
···
710
701
event->mmap.len = map->end - event->mmap.start;
711
702
event->mmap.pid = machine->pid;
712
703
713
-
err = process(tool, event, &synth_sample, machine);
704
+
err = perf_tool__process_synth_event(tool, event, machine, process);
714
705
free(event);
715
706
716
707
return err;
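
The util/event.c hunks replace the static synth_sample template with perf_tool__process_synth_event(), which builds the sample per event and copies the cpumode bits out of event->header.misc, so synthesized events carry a valid cpumode (the intel-pt/intel-bts/jitdump hunks below set it explicitly for the same reason). A toy sketch of that per-event helper; the structs and mask below are stand-ins, not perf's real definitions:

#include <stdint.h>
#include <stdio.h>

#define MISC_CPUMODE_MASK 0x7		/* illustrative mask, not perf's */

struct event_stub  { uint16_t misc; };
struct sample_stub { int pid, tid; uint16_t cpumode; };

static int process_synth_event(const struct event_stub *ev,
			       int (*process)(const struct sample_stub *))
{
	struct sample_stub sample = {
		.pid = -1,
		.tid = -1,
		/* take the cpumode from the event header instead of leaving
		 * a fixed template value in place
		 */
		.cpumode = (uint16_t)(ev->misc & MISC_CPUMODE_MASK),
	};

	return process(&sample);
}

static int print_sample(const struct sample_stub *s)
{
	printf("cpumode=%u\n", s->cpumode);
	return 0;
}

int main(void)
{
	struct event_stub ev = { .misc = 0x2 /* e.g. "user" */ };

	return process_synth_event(&ev, print_sample);
}
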
+10
-14
tools/perf/util/genelf.h
···
9
9
10
10
#if defined(__arm__)
11
11
#define GEN_ELF_ARCH EM_ARM
12
-
#define GEN_ELF_ENDIAN ELFDATA2LSB
13
12
#define GEN_ELF_CLASS ELFCLASS32
14
13
#elif defined(__aarch64__)
15
14
#define GEN_ELF_ARCH EM_AARCH64
16
-
#define GEN_ELF_ENDIAN ELFDATA2LSB
17
15
#define GEN_ELF_CLASS ELFCLASS64
18
16
#elif defined(__x86_64__)
19
17
#define GEN_ELF_ARCH EM_X86_64
20
-
#define GEN_ELF_ENDIAN ELFDATA2LSB
21
18
#define GEN_ELF_CLASS ELFCLASS64
22
19
#elif defined(__i386__)
23
20
#define GEN_ELF_ARCH EM_386
24
-
#define GEN_ELF_ENDIAN ELFDATA2LSB
25
21
#define GEN_ELF_CLASS ELFCLASS32
26
-
#elif defined(__ppcle__)
27
-
#define GEN_ELF_ARCH EM_PPC
28
-
#define GEN_ELF_ENDIAN ELFDATA2LSB
22
+
#elif defined(__powerpc64__)
23
+
#define GEN_ELF_ARCH EM_PPC64
29
24
#define GEN_ELF_CLASS ELFCLASS64
30
25
#elif defined(__powerpc__)
31
-
#define GEN_ELF_ARCH EM_PPC64
32
-
#define GEN_ELF_ENDIAN ELFDATA2MSB
33
-
#define GEN_ELF_CLASS ELFCLASS64
34
-
#elif defined(__powerpcle__)
35
-
#define GEN_ELF_ARCH EM_PPC64
36
-
#define GEN_ELF_ENDIAN ELFDATA2LSB
37
-
#define GEN_ELF_CLASS ELFCLASS64
26
+
#define GEN_ELF_ARCH EM_PPC
27
+
#define GEN_ELF_CLASS ELFCLASS32
38
28
#else
39
29
#error "unsupported architecture"
30
+
#endif
31
+
32
+
#if __BYTE_ORDER == __BIG_ENDIAN
33
+
#define GEN_ELF_ENDIAN ELFDATA2MSB
34
+
#else
35
+
#define GEN_ELF_ENDIAN ELFDATA2LSB
40
36
#endif
41
37
42
38
#if GEN_ELF_CLASS == ELFCLASS64
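
The genelf.h hunk stops hard-coding the ELF data encoding per architecture and derives it once from the toolchain's byte order (it also fixes the powerpc/powerpc64 machine/class pairs). A compilable sketch of that byte-order selection; SKETCH_ELF_ENDIAN is an illustrative macro, not perf's, while <elf.h> and <endian.h> are standard glibc headers:

#include <elf.h>
#include <endian.h>
#include <stdio.h>

#if __BYTE_ORDER == __BIG_ENDIAN
#define SKETCH_ELF_ENDIAN ELFDATA2MSB
#else
#define SKETCH_ELF_ENDIAN ELFDATA2LSB
#endif

int main(void)
{
	/* This value would go into e_ident[EI_DATA] of the generated ELF. */
	printf("data encoding: %d\n", SKETCH_ELF_ENDIAN);
	return 0;
}
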
+1
tools/perf/util/intel-bts.c
+3
tools/perf/util/intel-pt.c
···
979
979
if (!pt->timeless_decoding)
980
980
sample.time = tsc_to_perf_time(ptq->timestamp, &pt->tc);
981
981
982
+
sample.cpumode = PERF_RECORD_MISC_USER;
982
983
sample.ip = ptq->state->from_ip;
983
984
sample.pid = ptq->pid;
984
985
sample.tid = ptq->tid;
···
1036
1035
if (!pt->timeless_decoding)
1037
1036
sample.time = tsc_to_perf_time(ptq->timestamp, &pt->tc);
1038
1037
1038
+
sample.cpumode = PERF_RECORD_MISC_USER;
1039
1039
sample.ip = ptq->state->from_ip;
1040
1040
sample.pid = ptq->pid;
1041
1041
sample.tid = ptq->tid;
···
1094
1092
if (!pt->timeless_decoding)
1095
1093
sample.time = tsc_to_perf_time(ptq->timestamp, &pt->tc);
1096
1094
1095
+
sample.cpumode = PERF_RECORD_MISC_USER;
1097
1096
sample.ip = ptq->state->from_ip;
1098
1097
sample.pid = ptq->pid;
1099
1098
sample.tid = ptq->tid;
+2
tools/perf/util/jitdump.c
···
417
417
* use first address as sample address
418
418
*/
419
419
memset(&sample, 0, sizeof(sample));
420
+
sample.cpumode = PERF_RECORD_MISC_USER;
420
421
sample.pid = pid;
421
422
sample.tid = tid;
422
423
sample.time = id->time;
···
506
505
* use first address as sample address
507
506
*/
508
507
memset(&sample, 0, sizeof(sample));
508
+
sample.cpumode = PERF_RECORD_MISC_USER;
509
509
sample.pid = pid;
510
510
sample.tid = tid;
511
511
sample.time = id->time;