Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

ASoC: Improve coverage in default KUnit runs

Merge series from Mark Brown <broonie@kernel.org>:

We have KUnit tests for ASoC, but they are not run as often as they
should be: ASoC is not enabled in the configs KUnit uses by default,
and in the case of the topology tests there is no way to enable them
without also enabling drivers that use them. This series provides a
Kconfig option which KUnit can use directly, rather than worrying
about drivers.
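
As a rough sketch of the approach (the symbol names and help text below are illustrative assumptions, not the exact options defined by the series), such a test-enablement option might look like:

```kconfig
# Hypothetical Kconfig fragment, for illustration only.
config SND_SOC_TOPOLOGY_BUILD
	tristate
	help
	  Internal symbol selected by users of the topology code, so the
	  KUnit tests can pull it in without enabling whole drivers.

config SND_SOC_TOPOLOGY_KUNIT_TEST
	tristate "KUnit tests for SoC topology" if !KUNIT_ALL_TESTS
	depends on KUNIT
	default KUNIT_ALL_TESTS
	select SND_SOC_TOPOLOGY_BUILD
	help
	  Build the ASoC topology KUnit tests.
```

The `default KUNIT_ALL_TESTS` pattern is the standard way KUnit tests get enabled by a `kunit --alltests` run without being forced on in normal builds.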

Further, since KUnit is typically run in UML but ALSA cannot be built
for UML, we need to remove that Kconfig conflict. As far as I can tell
the motivation for the restriction is that many ALSA drivers use iomem
APIs which are not available under UML, and it is more trouble than
it's worth to add per-driver dependencies. To avoid these issues we
also provide stubs for the iomem APIs, so there are no build time
problems if a driver relies on iomem but does not declare a dependency
on it. With these stubs I am able to build all the sound drivers
available in a UML defconfig (UML allmodconfig appears to have
substantial other issues in a quick test).
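
The stub idea can be illustrated in plain C. This is a hypothetical userspace sketch of the pattern, not the merged kernel patch: the function names mirror the kernel's iomem API, and the point is simply that a mapping request always fails cleanly at runtime instead of failing at link time.

```c
#include <stddef.h>

/* Sketch of the stub pattern, assuming no real MMIO exists (as under UML).
 * __iomem is a sparse annotation in the kernel; it is empty here. */
#define __iomem

static void __iomem *ioremap_stub(unsigned long offset, unsigned long size)
{
	(void)offset;
	(void)size;
	return NULL;	/* mapping always fails; callers bail out at probe */
}

static void iounmap_stub(volatile void __iomem *addr)
{
	(void)addr;	/* nothing was mapped, so nothing to release */
}
```

A driver linked against such stubs still compiles, and a probe path that checks the `ioremap()` return value simply fails gracefully instead of breaking the build.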

With this series I am able to run the topology KUnit tests as part of a
kunit --alltests run.

+2940 -1645
+1
.mailmap
```diff
--- a/.mailmap
+++ b/.mailmap
@@ -246,6 +246,7 @@
 John Stultz <johnstul@us.ibm.com>
 <jon.toppins+linux@gmail.com> <jtoppins@cumulusnetworks.com>
 <jon.toppins+linux@gmail.com> <jtoppins@redhat.com>
+Jonas Gorski <jonas.gorski@gmail.com> <jogo@openwrt.org>
 Jordan Crouse <jordan@cosmicpenguin.net> <jcrouse@codeaurora.org>
 <josh@joshtriplett.org> <josh@freedesktop.org>
 <josh@joshtriplett.org> <josh@kernel.org>
```
+38 -38
Documentation/ABI/testing/sysfs-driver-ufs
··· 994 994 What: /sys/bus/platform/drivers/ufshcd/*/rpm_lvl 995 995 What: /sys/bus/platform/devices/*.ufs/rpm_lvl 996 996 Date: September 2014 997 - Contact: Subhash Jadavani <subhashj@codeaurora.org> 997 + Contact: Can Guo <quic_cang@quicinc.com> 998 998 Description: This entry could be used to set or show the UFS device 999 999 runtime power management level. The current driver 1000 1000 implementation supports 7 levels with next target states: ··· 1021 1021 What: /sys/bus/platform/drivers/ufshcd/*/rpm_target_dev_state 1022 1022 What: /sys/bus/platform/devices/*.ufs/rpm_target_dev_state 1023 1023 Date: February 2018 1024 - Contact: Subhash Jadavani <subhashj@codeaurora.org> 1024 + Contact: Can Guo <quic_cang@quicinc.com> 1025 1025 Description: This entry shows the target power mode of an UFS device 1026 1026 for the chosen runtime power management level. 1027 1027 ··· 1030 1030 What: /sys/bus/platform/drivers/ufshcd/*/rpm_target_link_state 1031 1031 What: /sys/bus/platform/devices/*.ufs/rpm_target_link_state 1032 1032 Date: February 2018 1033 - Contact: Subhash Jadavani <subhashj@codeaurora.org> 1033 + Contact: Can Guo <quic_cang@quicinc.com> 1034 1034 Description: This entry shows the target state of an UFS UIC link 1035 1035 for the chosen runtime power management level. 1036 1036 ··· 1039 1039 What: /sys/bus/platform/drivers/ufshcd/*/spm_lvl 1040 1040 What: /sys/bus/platform/devices/*.ufs/spm_lvl 1041 1041 Date: September 2014 1042 - Contact: Subhash Jadavani <subhashj@codeaurora.org> 1042 + Contact: Can Guo <quic_cang@quicinc.com> 1043 1043 Description: This entry could be used to set or show the UFS device 1044 1044 system power management level. 
The current driver 1045 1045 implementation supports 7 levels with next target states: ··· 1066 1066 What: /sys/bus/platform/drivers/ufshcd/*/spm_target_dev_state 1067 1067 What: /sys/bus/platform/devices/*.ufs/spm_target_dev_state 1068 1068 Date: February 2018 1069 - Contact: Subhash Jadavani <subhashj@codeaurora.org> 1069 + Contact: Can Guo <quic_cang@quicinc.com> 1070 1070 Description: This entry shows the target power mode of an UFS device 1071 1071 for the chosen system power management level. 1072 1072 ··· 1075 1075 What: /sys/bus/platform/drivers/ufshcd/*/spm_target_link_state 1076 1076 What: /sys/bus/platform/devices/*.ufs/spm_target_link_state 1077 1077 Date: February 2018 1078 - Contact: Subhash Jadavani <subhashj@codeaurora.org> 1078 + Contact: Can Guo <quic_cang@quicinc.com> 1079 1079 Description: This entry shows the target state of an UFS UIC link 1080 1080 for the chosen system power management level. 1081 1081 ··· 1084 1084 What: /sys/bus/platform/drivers/ufshcd/*/monitor/monitor_enable 1085 1085 What: /sys/bus/platform/devices/*.ufs/monitor/monitor_enable 1086 1086 Date: January 2021 1087 - Contact: Can Guo <cang@codeaurora.org> 1087 + Contact: Can Guo <quic_cang@quicinc.com> 1088 1088 Description: This file shows the status of performance monitor enablement 1089 1089 and it can be used to start/stop the monitor. When the monitor 1090 1090 is stopped, the performance data collected is also cleared. ··· 1092 1092 What: /sys/bus/platform/drivers/ufshcd/*/monitor/monitor_chunk_size 1093 1093 What: /sys/bus/platform/devices/*.ufs/monitor/monitor_chunk_size 1094 1094 Date: January 2021 1095 - Contact: Can Guo <cang@codeaurora.org> 1095 + Contact: Can Guo <quic_cang@quicinc.com> 1096 1096 Description: This file tells the monitor to focus on requests transferring 1097 1097 data of specific chunk size (in Bytes). 0 means any chunk size. 1098 1098 It can only be changed when monitor is disabled. 
··· 1100 1100 What: /sys/bus/platform/drivers/ufshcd/*/monitor/read_total_sectors 1101 1101 What: /sys/bus/platform/devices/*.ufs/monitor/read_total_sectors 1102 1102 Date: January 2021 1103 - Contact: Can Guo <cang@codeaurora.org> 1103 + Contact: Can Guo <quic_cang@quicinc.com> 1104 1104 Description: This file shows how many sectors (in 512 Bytes) have been 1105 1105 sent from device to host after monitor gets started. 1106 1106 ··· 1109 1109 What: /sys/bus/platform/drivers/ufshcd/*/monitor/read_total_busy 1110 1110 What: /sys/bus/platform/devices/*.ufs/monitor/read_total_busy 1111 1111 Date: January 2021 1112 - Contact: Can Guo <cang@codeaurora.org> 1112 + Contact: Can Guo <quic_cang@quicinc.com> 1113 1113 Description: This file shows how long (in micro seconds) has been spent 1114 1114 sending data from device to host after monitor gets started. 1115 1115 ··· 1118 1118 What: /sys/bus/platform/drivers/ufshcd/*/monitor/read_nr_requests 1119 1119 What: /sys/bus/platform/devices/*.ufs/monitor/read_nr_requests 1120 1120 Date: January 2021 1121 - Contact: Can Guo <cang@codeaurora.org> 1121 + Contact: Can Guo <quic_cang@quicinc.com> 1122 1122 Description: This file shows how many read requests have been sent after 1123 1123 monitor gets started. 1124 1124 ··· 1127 1127 What: /sys/bus/platform/drivers/ufshcd/*/monitor/read_req_latency_max 1128 1128 What: /sys/bus/platform/devices/*.ufs/monitor/read_req_latency_max 1129 1129 Date: January 2021 1130 - Contact: Can Guo <cang@codeaurora.org> 1130 + Contact: Can Guo <quic_cang@quicinc.com> 1131 1131 Description: This file shows the maximum latency (in micro seconds) of 1132 1132 read requests after monitor gets started. 
1133 1133 ··· 1136 1136 What: /sys/bus/platform/drivers/ufshcd/*/monitor/read_req_latency_min 1137 1137 What: /sys/bus/platform/devices/*.ufs/monitor/read_req_latency_min 1138 1138 Date: January 2021 1139 - Contact: Can Guo <cang@codeaurora.org> 1139 + Contact: Can Guo <quic_cang@quicinc.com> 1140 1140 Description: This file shows the minimum latency (in micro seconds) of 1141 1141 read requests after monitor gets started. 1142 1142 ··· 1145 1145 What: /sys/bus/platform/drivers/ufshcd/*/monitor/read_req_latency_avg 1146 1146 What: /sys/bus/platform/devices/*.ufs/monitor/read_req_latency_avg 1147 1147 Date: January 2021 1148 - Contact: Can Guo <cang@codeaurora.org> 1148 + Contact: Can Guo <quic_cang@quicinc.com> 1149 1149 Description: This file shows the average latency (in micro seconds) of 1150 1150 read requests after monitor gets started. 1151 1151 ··· 1154 1154 What: /sys/bus/platform/drivers/ufshcd/*/monitor/read_req_latency_sum 1155 1155 What: /sys/bus/platform/devices/*.ufs/monitor/read_req_latency_sum 1156 1156 Date: January 2021 1157 - Contact: Can Guo <cang@codeaurora.org> 1157 + Contact: Can Guo <quic_cang@quicinc.com> 1158 1158 Description: This file shows the total latency (in micro seconds) of 1159 1159 read requests sent after monitor gets started. 1160 1160 ··· 1163 1163 What: /sys/bus/platform/drivers/ufshcd/*/monitor/write_total_sectors 1164 1164 What: /sys/bus/platform/devices/*.ufs/monitor/write_total_sectors 1165 1165 Date: January 2021 1166 - Contact: Can Guo <cang@codeaurora.org> 1166 + Contact: Can Guo <quic_cang@quicinc.com> 1167 1167 Description: This file shows how many sectors (in 512 Bytes) have been sent 1168 1168 from host to device after monitor gets started. 
1169 1169 ··· 1172 1172 What: /sys/bus/platform/drivers/ufshcd/*/monitor/write_total_busy 1173 1173 What: /sys/bus/platform/devices/*.ufs/monitor/write_total_busy 1174 1174 Date: January 2021 1175 - Contact: Can Guo <cang@codeaurora.org> 1175 + Contact: Can Guo <quic_cang@quicinc.com> 1176 1176 Description: This file shows how long (in micro seconds) has been spent 1177 1177 sending data from host to device after monitor gets started. 1178 1178 ··· 1181 1181 What: /sys/bus/platform/drivers/ufshcd/*/monitor/write_nr_requests 1182 1182 What: /sys/bus/platform/devices/*.ufs/monitor/write_nr_requests 1183 1183 Date: January 2021 1184 - Contact: Can Guo <cang@codeaurora.org> 1184 + Contact: Can Guo <quic_cang@quicinc.com> 1185 1185 Description: This file shows how many write requests have been sent after 1186 1186 monitor gets started. 1187 1187 ··· 1190 1190 What: /sys/bus/platform/drivers/ufshcd/*/monitor/write_req_latency_max 1191 1191 What: /sys/bus/platform/devices/*.ufs/monitor/write_req_latency_max 1192 1192 Date: January 2021 1193 - Contact: Can Guo <cang@codeaurora.org> 1193 + Contact: Can Guo <quic_cang@quicinc.com> 1194 1194 Description: This file shows the maximum latency (in micro seconds) of write 1195 1195 requests after monitor gets started. 1196 1196 ··· 1199 1199 What: /sys/bus/platform/drivers/ufshcd/*/monitor/write_req_latency_min 1200 1200 What: /sys/bus/platform/devices/*.ufs/monitor/write_req_latency_min 1201 1201 Date: January 2021 1202 - Contact: Can Guo <cang@codeaurora.org> 1202 + Contact: Can Guo <quic_cang@quicinc.com> 1203 1203 Description: This file shows the minimum latency (in micro seconds) of write 1204 1204 requests after monitor gets started. 
1205 1205 ··· 1208 1208 What: /sys/bus/platform/drivers/ufshcd/*/monitor/write_req_latency_avg 1209 1209 What: /sys/bus/platform/devices/*.ufs/monitor/write_req_latency_avg 1210 1210 Date: January 2021 1211 - Contact: Can Guo <cang@codeaurora.org> 1211 + Contact: Can Guo <quic_cang@quicinc.com> 1212 1212 Description: This file shows the average latency (in micro seconds) of write 1213 1213 requests after monitor gets started. 1214 1214 ··· 1217 1217 What: /sys/bus/platform/drivers/ufshcd/*/monitor/write_req_latency_sum 1218 1218 What: /sys/bus/platform/devices/*.ufs/monitor/write_req_latency_sum 1219 1219 Date: January 2021 1220 - Contact: Can Guo <cang@codeaurora.org> 1220 + Contact: Can Guo <quic_cang@quicinc.com> 1221 1221 Description: This file shows the total latency (in micro seconds) of write 1222 1222 requests after monitor gets started. 1223 1223 ··· 1226 1226 What: /sys/bus/platform/drivers/ufshcd/*/device_descriptor/wb_presv_us_en 1227 1227 What: /sys/bus/platform/devices/*.ufs/device_descriptor/wb_presv_us_en 1228 1228 Date: June 2020 1229 - Contact: Asutosh Das <asutoshd@codeaurora.org> 1229 + Contact: Asutosh Das <quic_asutoshd@quicinc.com> 1230 1230 Description: This entry shows if preserve user-space was configured 1231 1231 1232 1232 The file is read only. ··· 1234 1234 What: /sys/bus/platform/drivers/ufshcd/*/device_descriptor/wb_shared_alloc_units 1235 1235 What: /sys/bus/platform/devices/*.ufs/device_descriptor/wb_shared_alloc_units 1236 1236 Date: June 2020 1237 - Contact: Asutosh Das <asutoshd@codeaurora.org> 1237 + Contact: Asutosh Das <quic_asutoshd@quicinc.com> 1238 1238 Description: This entry shows the shared allocated units of WB buffer 1239 1239 1240 1240 The file is read only. 
··· 1242 1242 What: /sys/bus/platform/drivers/ufshcd/*/device_descriptor/wb_type 1243 1243 What: /sys/bus/platform/devices/*.ufs/device_descriptor/wb_type 1244 1244 Date: June 2020 1245 - Contact: Asutosh Das <asutoshd@codeaurora.org> 1245 + Contact: Asutosh Das <quic_asutoshd@quicinc.com> 1246 1246 Description: This entry shows the configured WB type. 1247 1247 0x1 for shared buffer mode. 0x0 for dedicated buffer mode. 1248 1248 ··· 1251 1251 What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_buff_cap_adj 1252 1252 What: /sys/bus/platform/devices/*.ufs/geometry_descriptor/wb_buff_cap_adj 1253 1253 Date: June 2020 1254 - Contact: Asutosh Das <asutoshd@codeaurora.org> 1254 + Contact: Asutosh Das <quic_asutoshd@quicinc.com> 1255 1255 Description: This entry shows the total user-space decrease in shared 1256 1256 buffer mode. 1257 1257 The value of this parameter is 3 for TLC NAND when SLC mode ··· 1262 1262 What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_max_alloc_units 1263 1263 What: /sys/bus/platform/devices/*.ufs/geometry_descriptor/wb_max_alloc_units 1264 1264 Date: June 2020 1265 - Contact: Asutosh Das <asutoshd@codeaurora.org> 1265 + Contact: Asutosh Das <quic_asutoshd@quicinc.com> 1266 1266 Description: This entry shows the Maximum total WriteBooster Buffer size 1267 1267 which is supported by the entire device. 1268 1268 ··· 1271 1271 What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_max_wb_luns 1272 1272 What: /sys/bus/platform/devices/*.ufs/geometry_descriptor/wb_max_wb_luns 1273 1273 Date: June 2020 1274 - Contact: Asutosh Das <asutoshd@codeaurora.org> 1274 + Contact: Asutosh Das <quic_asutoshd@quicinc.com> 1275 1275 Description: This entry shows the maximum number of luns that can support 1276 1276 WriteBooster. 
1277 1277 ··· 1280 1280 What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_sup_red_type 1281 1281 What: /sys/bus/platform/devices/*.ufs/geometry_descriptor/wb_sup_red_type 1282 1282 Date: June 2020 1283 - Contact: Asutosh Das <asutoshd@codeaurora.org> 1283 + Contact: Asutosh Das <quic_asutoshd@quicinc.com> 1284 1284 Description: The supportability of user space reduction mode 1285 1285 and preserve user space mode. 1286 1286 00h: WriteBooster Buffer can be configured only in ··· 1295 1295 What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_sup_wb_type 1296 1296 What: /sys/bus/platform/devices/*.ufs/geometry_descriptor/wb_sup_wb_type 1297 1297 Date: June 2020 1298 - Contact: Asutosh Das <asutoshd@codeaurora.org> 1298 + Contact: Asutosh Das <quic_asutoshd@quicinc.com> 1299 1299 Description: The supportability of WriteBooster Buffer type. 1300 1300 1301 1301 === ========================================================== ··· 1310 1310 What: /sys/bus/platform/drivers/ufshcd/*/flags/wb_enable 1311 1311 What: /sys/bus/platform/devices/*.ufs/flags/wb_enable 1312 1312 Date: June 2020 1313 - Contact: Asutosh Das <asutoshd@codeaurora.org> 1313 + Contact: Asutosh Das <quic_asutoshd@quicinc.com> 1314 1314 Description: This entry shows the status of WriteBooster. 1315 1315 1316 1316 == ============================ ··· 1323 1323 What: /sys/bus/platform/drivers/ufshcd/*/flags/wb_flush_en 1324 1324 What: /sys/bus/platform/devices/*.ufs/flags/wb_flush_en 1325 1325 Date: June 2020 1326 - Contact: Asutosh Das <asutoshd@codeaurora.org> 1326 + Contact: Asutosh Das <quic_asutoshd@quicinc.com> 1327 1327 Description: This entry shows if flush is enabled. 
1328 1328 1329 1329 == ================================= ··· 1336 1336 What: /sys/bus/platform/drivers/ufshcd/*/flags/wb_flush_during_h8 1337 1337 What: /sys/bus/platform/devices/*.ufs/flags/wb_flush_during_h8 1338 1338 Date: June 2020 1339 - Contact: Asutosh Das <asutoshd@codeaurora.org> 1339 + Contact: Asutosh Das <quic_asutoshd@quicinc.com> 1340 1340 Description: Flush WriteBooster Buffer during hibernate state. 1341 1341 1342 1342 == ================================================= ··· 1351 1351 What: /sys/bus/platform/drivers/ufshcd/*/attributes/wb_avail_buf 1352 1352 What: /sys/bus/platform/devices/*.ufs/attributes/wb_avail_buf 1353 1353 Date: June 2020 1354 - Contact: Asutosh Das <asutoshd@codeaurora.org> 1354 + Contact: Asutosh Das <quic_asutoshd@quicinc.com> 1355 1355 Description: This entry shows the amount of unused WriteBooster buffer 1356 1356 available. 1357 1357 ··· 1360 1360 What: /sys/bus/platform/drivers/ufshcd/*/attributes/wb_cur_buf 1361 1361 What: /sys/bus/platform/devices/*.ufs/attributes/wb_cur_buf 1362 1362 Date: June 2020 1363 - Contact: Asutosh Das <asutoshd@codeaurora.org> 1363 + Contact: Asutosh Das <quic_asutoshd@quicinc.com> 1364 1364 Description: This entry shows the amount of unused current buffer. 1365 1365 1366 1366 The file is read only. ··· 1368 1368 What: /sys/bus/platform/drivers/ufshcd/*/attributes/wb_flush_status 1369 1369 What: /sys/bus/platform/devices/*.ufs/attributes/wb_flush_status 1370 1370 Date: June 2020 1371 - Contact: Asutosh Das <asutoshd@codeaurora.org> 1371 + Contact: Asutosh Das <quic_asutoshd@quicinc.com> 1372 1372 Description: This entry shows the flush operation status. 
1373 1373 1374 1374 ··· 1385 1385 What: /sys/bus/platform/drivers/ufshcd/*/attributes/wb_life_time_est 1386 1386 What: /sys/bus/platform/devices/*.ufs/attributes/wb_life_time_est 1387 1387 Date: June 2020 1388 - Contact: Asutosh Das <asutoshd@codeaurora.org> 1388 + Contact: Asutosh Das <quic_asutoshd@quicinc.com> 1389 1389 Description: This entry shows an indication of the WriteBooster Buffer 1390 1390 lifetime based on the amount of performed program/erase cycles 1391 1391 ··· 1399 1399 1400 1400 What: /sys/class/scsi_device/*/device/unit_descriptor/wb_buf_alloc_units 1401 1401 Date: June 2020 1402 - Contact: Asutosh Das <asutoshd@codeaurora.org> 1402 + Contact: Asutosh Das <quic_asutoshd@quicinc.com> 1403 1403 Description: This entry shows the configured size of WriteBooster buffer. 1404 1404 0400h corresponds to 4GB. 1405 1405
+42
Documentation/devicetree/bindings/watchdog/loongson,ls1x-wdt.yaml
```yaml
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/watchdog/loongson,ls1x-wdt.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Loongson-1 Watchdog Timer

maintainers:
  - Keguang Zhang <keguang.zhang@gmail.com>

allOf:
  - $ref: watchdog.yaml#

properties:
  compatible:
    enum:
      - loongson,ls1b-wdt
      - loongson,ls1c-wdt

  reg:
    maxItems: 1

  clocks:
    maxItems: 1

required:
  - compatible
  - reg
  - clocks

unevaluatedProperties: false

examples:
  - |
    #include <dt-bindings/clock/loongson,ls1x-clk.h>
    watchdog: watchdog@1fe5c060 {
        compatible = "loongson,ls1b-wdt";
        reg = <0x1fe5c060 0xc>;

        clocks = <&clkc LS1X_CLKID_APB>;
    };
```
+1 -1
Documentation/process/maintainer-netdev.rst
```diff
--- a/Documentation/process/maintainer-netdev.rst
+++ b/Documentation/process/maintainer-netdev.rst
@@ -98,7 +98,7 @@
 repository link above for any new networking-related commits. You may
 also check the following website for the current status:
 
-  http://vger.kernel.org/~davem/net-next.html
+  https://patchwork.hopto.org/net-next.html
 
 The ``net`` tree continues to collect fixes for the vX.Y content, and is
 fed back to Linus at regular (~weekly) intervals. Meaning that the
```
+1 -1
Documentation/riscv/hwprobe.rst
```diff
--- a/Documentation/riscv/hwprobe.rst
+++ b/Documentation/riscv/hwprobe.rst
@@ -49,7 +49,7 @@
 privileged ISA, with the following known exceptions (more exceptions may be
 added, but only if it can be demonstrated that the user ABI is not broken):
 
-* The :fence.i: instruction cannot be directly executed by userspace
+* The ``fence.i`` instruction cannot be directly executed by userspace
   programs (it may still be executed in userspace via a
   kernel-controlled mechanism such as the vDSO).
 
```
+2 -1
Documentation/wmi/devices/dell-wmi-ddv.rst
```diff
--- a/Documentation/wmi/devices/dell-wmi-ddv.rst
+++ b/Documentation/wmi/devices/dell-wmi-ddv.rst
@@ -187,7 +187,8 @@
 
 Returns a buffer usually containg 12 blocks of analytics data.
 Those blocks contain:
-- block number starting with 0 (u8)
+
+- a block number starting with 0 (u8)
 - 31 bytes of unknown data
 
 .. note::
```
+13 -2
MAINTAINERS
```diff
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4121,6 +4121,13 @@
 F:	drivers/spi/spi-bcm63xx-hsspi.c
 F:	drivers/spi/spi-bcmbca-hsspi.c
 
+BROADCOM BCM6348/BCM6358 SPI controller DRIVER
+M:	Jonas Gorski <jonas.gorski@gmail.com>
+L:	linux-spi@vger.kernel.org
+S:	Odd Fixes
+F:	Documentation/devicetree/bindings/spi/spi-bcm63xx.txt
+F:	drivers/spi/spi-bcm63xx.c
+
 BROADCOM ETHERNET PHY DRIVERS
 M:	Florian Fainelli <florian.fainelli@broadcom.com>
 R:	Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
@@ -8672,8 +8679,11 @@
 F:	drivers/input/touchscreen/resistive-adc-touch.c
 
 GENERIC STRING LIBRARY
+M:	Kees Cook <keescook@chromium.org>
 R:	Andy Shevchenko <andy@kernel.org>
-S:	Maintained
+L:	linux-hardening@vger.kernel.org
+S:	Supported
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git for-next/hardening
 F:	include/linux/string.h
 F:	include/linux/string_choices.h
 F:	include/linux/string_helpers.h
@@ -13968,7 +13978,7 @@
 F:	drivers/soc/microchip/
 
 MICROCHIP SPI DRIVER
-M:	Tudor Ambarus <tudor.ambarus@linaro.org>
+M:	Ryan Wanner <ryan.wanner@microchip.com>
 S:	Supported
 F:	drivers/spi/spi-atmel.*
 
@@ -17543,6 +17553,7 @@
 M:	Vinod Koul <vkoul@kernel.org>
 R:	Bhupesh Sharma <bhupesh.sharma@linaro.org>
 L:	netdev@vger.kernel.org
+L:	linux-arm-msm@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/net/qcom,ethqos.yaml
 F:	drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
```
+1 -1
Makefile
```diff
--- a/Makefile
+++ b/Makefile
@@ -2,7 +2,7 @@
 VERSION = 6
 PATCHLEVEL = 5
 SUBLEVEL = 0
-EXTRAVERSION = -rc1
+EXTRAVERSION = -rc2
 NAME = Hurr durr I'ma ninja sloth
 
 # *DOCUMENTATION*
```
+2
arch/arm64/Kconfig
```diff
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -197,6 +197,8 @@
 				   !CC_OPTIMIZE_FOR_SIZE)
 	select FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY \
 		if DYNAMIC_FTRACE_WITH_ARGS
+	select HAVE_SAMPLE_FTRACE_DIRECT
+	select HAVE_SAMPLE_FTRACE_DIRECT_MULTI
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS
 	select HAVE_FAST_GUP
 	select HAVE_FTRACE_MCOUNT_RECORD
```
+4
arch/arm64/include/asm/ftrace.h
```diff
--- a/arch/arm64/include/asm/ftrace.h
+++ b/arch/arm64/include/asm/ftrace.h
@@ -211,6 +211,10 @@
 {
 	return ret_regs->fp;
 }
+
+void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent,
+			   unsigned long frame_pointer);
+
 #endif /* ifdef CONFIG_FUNCTION_GRAPH_TRACER */
 #endif
```
+3
arch/arm64/include/asm/syscall.h
```diff
--- a/arch/arm64/include/asm/syscall.h
+++ b/arch/arm64/include/asm/syscall.h
@@ -85,4 +85,7 @@
 	return AUDIT_ARCH_AARCH64;
 }
 
+int syscall_trace_enter(struct pt_regs *regs);
+void syscall_trace_exit(struct pt_regs *regs);
+
 #endif /* __ASM_SYSCALL_H */
```
-3
arch/arm64/kernel/syscall.c
```diff
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -75,9 +75,6 @@
 	return unlikely(flags & _TIF_SYSCALL_WORK);
 }
 
-int syscall_trace_enter(struct pt_regs *regs);
-void syscall_trace_exit(struct pt_regs *regs);
-
 static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
 			   const syscall_fn_t syscall_table[])
 {
```
+4 -2
arch/openrisc/include/uapi/asm/sigcontext.h
```diff
--- a/arch/openrisc/include/uapi/asm/sigcontext.h
+++ b/arch/openrisc/include/uapi/asm/sigcontext.h
@@ -28,8 +28,10 @@
 
 struct sigcontext {
 	struct user_regs_struct regs;  /* needs to be first */
-	struct __or1k_fpu_state fpu;
-	unsigned long oldmask;
+	union {
+		unsigned long fpcsr;
+		unsigned long oldmask;	/* unused */
+	};
 };
 
 #endif /* __ASM_OPENRISC_SIGCONTEXT_H */
```
+2 -2
arch/openrisc/kernel/signal.c
```diff
--- a/arch/openrisc/kernel/signal.c
+++ b/arch/openrisc/kernel/signal.c
@@ -50,7 +50,7 @@
 	err |= __copy_from_user(regs, sc->regs.gpr, 32 * sizeof(unsigned long));
 	err |= __copy_from_user(&regs->pc, &sc->regs.pc, sizeof(unsigned long));
 	err |= __copy_from_user(&regs->sr, &sc->regs.sr, sizeof(unsigned long));
-	err |= __copy_from_user(&regs->fpcsr, &sc->fpu.fpcsr, sizeof(unsigned long));
+	err |= __copy_from_user(&regs->fpcsr, &sc->fpcsr, sizeof(unsigned long));
 
 	/* make sure the SM-bit is cleared so user-mode cannot fool us */
 	regs->sr &= ~SPR_SR_SM;
@@ -113,7 +113,7 @@
 	err |= __copy_to_user(sc->regs.gpr, regs, 32 * sizeof(unsigned long));
 	err |= __copy_to_user(&sc->regs.pc, &regs->pc, sizeof(unsigned long));
 	err |= __copy_to_user(&sc->regs.sr, &regs->sr, sizeof(unsigned long));
-	err |= __copy_to_user(&sc->fpu.fpcsr, &regs->fpcsr, sizeof(unsigned long));
+	err |= __copy_to_user(&sc->fpcsr, &regs->fpcsr, sizeof(unsigned long));
 
 	return err;
 }
```
-6
arch/powerpc/include/asm/book3s/64/hash-4k.h
```diff
--- a/arch/powerpc/include/asm/book3s/64/hash-4k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h
@@ -136,12 +136,6 @@
 	return 0;
 }
 
-static inline int hash__pmd_same(pmd_t pmd_a, pmd_t pmd_b)
-{
-	BUG();
-	return 0;
-}
-
 static inline pmd_t hash__pmd_mkhuge(pmd_t pmd)
 {
 	BUG();
```
-5
arch/powerpc/include/asm/book3s/64/hash-64k.h
```diff
--- a/arch/powerpc/include/asm/book3s/64/hash-64k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h
@@ -263,11 +263,6 @@
 					(_PAGE_PTE | H_PAGE_THP_HUGE));
 }
 
-static inline int hash__pmd_same(pmd_t pmd_a, pmd_t pmd_b)
-{
-	return (((pmd_raw(pmd_a) ^ pmd_raw(pmd_b)) & ~cpu_to_be64(_PAGE_HPTEFLAGS)) == 0);
-}
-
 static inline pmd_t hash__pmd_mkhuge(pmd_t pmd)
 {
 	return __pmd(pmd_val(pmd) | (_PAGE_PTE | H_PAGE_THP_HUGE));
```
+5
arch/powerpc/include/asm/book3s/64/hash.h
```diff
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -132,6 +132,11 @@
 	return region_id;
 }
 
+static inline int hash__pmd_same(pmd_t pmd_a, pmd_t pmd_b)
+{
+	return (((pmd_raw(pmd_a) ^ pmd_raw(pmd_b)) & ~cpu_to_be64(_PAGE_HPTEFLAGS)) == 0);
+}
+
 #define hash__pmd_bad(pmd)	(pmd_val(pmd) & H_PMD_BAD_BITS)
 #define hash__pud_bad(pud)	(pud_val(pud) & H_PUD_BAD_BITS)
 static inline int hash__p4d_bad(p4d_t p4d)
```
+18 -13
arch/powerpc/kernel/exceptions-64e.S
```diff
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -5,6 +5,7 @@
  * Copyright (C) 2007 Ben. Herrenschmidt (benh@kernel.crashing.org), IBM Corp.
  */
 
+#include <linux/linkage.h>
 #include <linux/threads.h>
 #include <asm/reg.h>
 #include <asm/page.h>
@@ -66,7 +67,7 @@
 #define SPECIAL_EXC_LOAD(reg, name)					\
 	ld	reg, (SPECIAL_EXC_##name * 8 + SPECIAL_EXC_FRAME_OFFS)(r1)
 
-special_reg_save:
+SYM_CODE_START_LOCAL(special_reg_save)
 	/*
 	 * We only need (or have stack space) to save this stuff if
 	 * we interrupted the kernel.
@@ -131,8 +132,9 @@
 	SPECIAL_EXC_STORE(r10,CSRR1)
 
 	blr
+SYM_CODE_END(special_reg_save)
 
-ret_from_level_except:
+SYM_CODE_START_LOCAL(ret_from_level_except)
 	ld	r3,_MSR(r1)
 	andi.	r3,r3,MSR_PR
 	beq	1f
@@ -206,6 +208,7 @@
 	mtxer	r11
 
 	blr
+SYM_CODE_END(ret_from_level_except)
 
 .macro ret_from_level srr0 srr1 paca_ex scratch
 	bl	ret_from_level_except
@@ -232,13 +235,15 @@
 	mfspr	r13,\scratch
 .endm
 
-ret_from_crit_except:
+SYM_CODE_START_LOCAL(ret_from_crit_except)
 	ret_from_level SPRN_CSRR0 SPRN_CSRR1 PACA_EXCRIT SPRN_SPRG_CRIT_SCRATCH
 	rfci
+SYM_CODE_END(ret_from_crit_except)
 
-ret_from_mc_except:
+SYM_CODE_START_LOCAL(ret_from_mc_except)
 	ret_from_level SPRN_MCSRR0 SPRN_MCSRR1 PACA_EXMC SPRN_SPRG_MC_SCRATCH
 	rfmci
+SYM_CODE_END(ret_from_mc_except)
 
 /* Exception prolog code for all exceptions */
 #define EXCEPTION_PROLOG(n, intnum, type, addition)			\
@@ -978,20 +983,22 @@
  * r14 and r15 containing the fault address and error code, with the
  * original values stashed away in the PACA
  */
-storage_fault_common:
+SYM_CODE_START_LOCAL(storage_fault_common)
 	addi	r3,r1,STACK_INT_FRAME_REGS
 	bl	do_page_fault
 	b	interrupt_return
+SYM_CODE_END(storage_fault_common)
 
 /*
  * Alignment exception doesn't fit entirely in the 0x100 bytes so it
  * continues here.
  */
-alignment_more:
+SYM_CODE_START_LOCAL(alignment_more)
 	addi	r3,r1,STACK_INT_FRAME_REGS
 	bl	alignment_exception
 	REST_NVGPRS(r1)
 	b	interrupt_return
+SYM_CODE_END(alignment_more)
 
 /*
  * Trampolines used when spotting a bad kernel stack pointer in
@@ -1030,8 +1037,7 @@
 	BAD_STACK_TRAMPOLINE(0xf00)
 	BAD_STACK_TRAMPOLINE(0xf20)
 
-	.globl	bad_stack_book3e
-bad_stack_book3e:
+_GLOBAL(bad_stack_book3e)
 	/* XXX: Needs to make SPRN_SPRG_GEN depend on exception type */
 	mfspr	r10,SPRN_SRR0;	/* read SRR0 before touching stack */
 	ld	r1,PACAEMERGSP(r13)
@@ -1285,8 +1291,7 @@
  * ever takes any parameters, the SCOM code must also be updated to
  * provide them.
  */
-	.globl a2_tlbinit_code_start
-a2_tlbinit_code_start:
+_GLOBAL(a2_tlbinit_code_start)
 
 	ori	r11,r3,MAS0_WQ_ALLWAYS
 	oris	r11,r11,MAS0_ESEL(3)@h /* Use way 3: workaround A2 erratum 376 */
@@ -1479,8 +1484,7 @@
 	mflr	r28
 	b	3b
 
-	.globl init_core_book3e
-init_core_book3e:
+_GLOBAL(init_core_book3e)
 	/* Establish the interrupt vector base */
 	tovirt(r2,r2)
 	LOAD_REG_ADDR(r3, interrupt_base_book3e)
@@ -1488,6 +1492,6 @@
 	sync
 	blr
 
-init_thread_book3e:
+SYM_CODE_START_LOCAL(init_thread_book3e)
 	lis	r3,(SPRN_EPCR_ICM | SPRN_EPCR_GICM)@h
 	mtspr	SPRN_EPCR,r3
@@ -1502,6 +1506,7 @@
 	mtspr	SPRN_TSR,r3
 
 	blr
+SYM_CODE_END(init_thread_book3e)
 
 _GLOBAL(__setup_base_ivors)
 	SET_IVOR(0, 0x020)	/* Critical Input */
```
+19 -18
arch/powerpc/kernel/security.c
```diff
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -364,26 +364,27 @@
 
 static int ssb_prctl_get(struct task_struct *task)
 {
-	if (stf_enabled_flush_types == STF_BARRIER_NONE)
-		/*
-		 * We don't have an explicit signal from firmware that we're
-		 * vulnerable or not, we only have certain CPU revisions that
-		 * are known to be vulnerable.
-		 *
-		 * We assume that if we're on another CPU, where the barrier is
-		 * NONE, then we are not vulnerable.
-		 */
+	/*
+	 * The STF_BARRIER feature is on by default, so if it's off that means
+	 * firmware has explicitly said the CPU is not vulnerable via either
+	 * the hypercall or device tree.
+	 */
+	if (!security_ftr_enabled(SEC_FTR_STF_BARRIER))
 		return PR_SPEC_NOT_AFFECTED;
-	else
-		/*
-		 * If we do have a barrier type then we are vulnerable. The
-		 * barrier is not a global or per-process mitigation, so the
-		 * only value we can report here is PR_SPEC_ENABLE, which
-		 * appears as "vulnerable" in /proc.
-		 */
-		return PR_SPEC_ENABLE;
 
-	return -EINVAL;
+	/*
+	 * If the system's CPU has no known barrier (see setup_stf_barrier())
+	 * then assume that the CPU is not vulnerable.
+	 */
+	if (stf_enabled_flush_types == STF_BARRIER_NONE)
+		return PR_SPEC_NOT_AFFECTED;
+
+	/*
+	 * Otherwise the CPU is vulnerable. The barrier is not a global or
+	 * per-process mitigation, so the only value that can be reported here
+	 * is PR_SPEC_ENABLE, which appears as "vulnerable" in /proc.
+	 */
+	return PR_SPEC_ENABLE;
 }
 
 int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)
```
+9 -4
arch/powerpc/mm/book3s64/hash_native.c
```diff
--- a/arch/powerpc/mm/book3s64/hash_native.c
+++ b/arch/powerpc/mm/book3s64/hash_native.c
@@ -328,10 +328,12 @@
 
 static long native_hpte_remove(unsigned long hpte_group)
 {
+	unsigned long hpte_v, flags;
 	struct hash_pte *hptep;
 	int i;
 	int slot_offset;
-	unsigned long hpte_v;
+
+	local_irq_save(flags);
 
 	DBG_LOW("    remove(group=%lx)\n", hpte_group);
 
@@ -356,14 +358,17 @@
 		slot_offset &= 0x7;
 	}
 
-	if (i == HPTES_PER_GROUP)
-		return -1;
+	if (i == HPTES_PER_GROUP) {
+		i = -1;
+		goto out;
+	}
 
 	/* Invalidate the hpte. NOTE: this also unlocks it */
 	release_hpte_lock();
 	hptep->v = 0;
-
+out:
+	local_irq_restore(flags);
 	return i;
 }
 
```
+2 -7
arch/riscv/kernel/cpufeature.c
··· 318 318 } 319 319 320 320 /* 321 - * Linux requires the following extensions, so we may as well 322 - * always set them. 323 - */ 324 - set_bit(RISCV_ISA_EXT_ZICSR, isainfo->isa); 325 - set_bit(RISCV_ISA_EXT_ZIFENCEI, isainfo->isa); 326 - 327 - /* 328 321 * These ones were as they were part of the base ISA when the 329 322 * port & dt-bindings were upstreamed, and so can be set 330 323 * unconditionally where `i` is in riscv,isa on DT systems. 331 324 */ 332 325 if (acpi_disabled) { 326 + set_bit(RISCV_ISA_EXT_ZICSR, isainfo->isa); 327 + set_bit(RISCV_ISA_EXT_ZIFENCEI, isainfo->isa); 333 328 set_bit(RISCV_ISA_EXT_ZICNTR, isainfo->isa); 334 329 set_bit(RISCV_ISA_EXT_ZIHPM, isainfo->isa); 335 330 }
+1 -1
arch/riscv/mm/init.c
··· 1346 1346 */ 1347 1347 crash_base = memblock_phys_alloc_range(crash_size, PMD_SIZE, 1348 1348 search_start, 1349 - min(search_end, (unsigned long) SZ_4G)); 1349 + min(search_end, (unsigned long)(SZ_4G - 1))); 1350 1350 if (crash_base == 0) { 1351 1351 /* Try again without restricting region to 32bit addressible memory */ 1352 1352 crash_base = memblock_phys_alloc_range(crash_size, PMD_SIZE,
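The one-line crashkernel fix above matters on 32-bit RISC-V: `SZ_4G` is `0x100000000`, which does not fit in a 32-bit `unsigned long`, so the old cast truncated the search bound to 0 and collapsed the allocation range. `SZ_4G - 1` is the largest representable address and survives the cast. A sketch using `uint32_t` to emulate a 32-bit `unsigned long`:

```c
#include <assert.h>
#include <stdint.h>

#define SZ_4G 0x100000000ULL

/* Old bound: truncates to 0 when unsigned long is 32 bits wide. */
uint32_t bound_old(void)
{
	return (uint32_t)SZ_4G;
}

/* New bound: the last addressable byte below 4G, representable in 32 bits. */
uint32_t bound_new(void)
{
	return (uint32_t)(SZ_4G - 1);
}
```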
+3 -3
arch/riscv/net/bpf_jit.h
··· 69 69 struct bpf_prog *prog; 70 70 u16 *insns; /* RV insns */ 71 71 int ninsns; 72 - int body_len; 72 + int prologue_len; 73 73 int epilogue_offset; 74 74 int *offset; /* BPF to RV */ 75 75 int nexentries; ··· 216 216 int from, to; 217 217 218 218 off++; /* BPF branch is from PC+1, RV is from PC */ 219 - from = (insn > 0) ? ctx->offset[insn - 1] : 0; 220 - to = (insn + off > 0) ? ctx->offset[insn + off - 1] : 0; 219 + from = (insn > 0) ? ctx->offset[insn - 1] : ctx->prologue_len; 220 + to = (insn + off > 0) ? ctx->offset[insn + off - 1] : ctx->prologue_len; 221 221 return ninsns_rvoff(to - from); 222 222 } 223 223
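With the `bpf_jit_core.c` change later in this series, the per-instruction offset table is filled after the prologue is emitted, so each entry counts RV instructions from the start of the image. `rv_offset()` then uses `prologue_len` (instead of 0) whenever an index resolves to "before BPF instruction 0", so a branch targeting the first BPF instruction lands just past the prologue. A simplified sketch of that lookup, with an illustrative offset table (the `jit_ctx` struct here is a cut-down stand-in for the kernel's `rv_jit_context`):

```c
#include <assert.h>

struct jit_ctx {
	const int *offset;	/* cumulative RV insn count per BPF insn */
	int prologue_len;	/* RV insns emitted for the prologue */
};

/* Sketch of the fixed rv_offset(): byte offset between two BPF insns. */
int rv_offset_sketch(int insn, int off, const struct jit_ctx *ctx)
{
	int from, to;

	off++; /* BPF branches are relative to PC + 1, RV to PC */
	from = (insn > 0) ? ctx->offset[insn - 1] : ctx->prologue_len;
	to = (insn + off > 0) ? ctx->offset[insn + off - 1]
			      : ctx->prologue_len;
	return 2 * (to - from); /* ninsns_rvoff(): 2 bytes per RVC unit */
}

/* Example: a 4-insn prologue, then three BPF insns ending at 6, 8, 12. */
static const int example_offset[] = { 6, 8, 12 };
static const struct jit_ctx example_ctx = { example_offset, 4 };

int example_rv_offset(int insn, int off)
{
	return rv_offset_sketch(insn, off, &example_ctx);
}
```

Note how a backward branch from insn 1 to insn 0 now yields `prologue_len - offset[0]` rather than `0 - offset[0]`, which is the bug the `prologue_len` fallback fixes.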
+13 -6
arch/riscv/net/bpf_jit_core.c
··· 44 44 unsigned int prog_size = 0, extable_size = 0; 45 45 bool tmp_blinded = false, extra_pass = false; 46 46 struct bpf_prog *tmp, *orig_prog = prog; 47 - int pass = 0, prev_ninsns = 0, prologue_len, i; 47 + int pass = 0, prev_ninsns = 0, i; 48 48 struct rv_jit_data *jit_data; 49 49 struct rv_jit_context *ctx; 50 50 ··· 83 83 prog = orig_prog; 84 84 goto out_offset; 85 85 } 86 + 87 + if (build_body(ctx, extra_pass, NULL)) { 88 + prog = orig_prog; 89 + goto out_offset; 90 + } 91 + 86 92 for (i = 0; i < prog->len; i++) { 87 93 prev_ninsns += 32; 88 94 ctx->offset[i] = prev_ninsns; ··· 97 91 for (i = 0; i < NR_JIT_ITERATIONS; i++) { 98 92 pass++; 99 93 ctx->ninsns = 0; 94 + 95 + bpf_jit_build_prologue(ctx); 96 + ctx->prologue_len = ctx->ninsns; 97 + 100 98 if (build_body(ctx, extra_pass, ctx->offset)) { 101 99 prog = orig_prog; 102 100 goto out_offset; 103 101 } 104 - ctx->body_len = ctx->ninsns; 105 - bpf_jit_build_prologue(ctx); 102 + 106 103 ctx->epilogue_offset = ctx->ninsns; 107 104 bpf_jit_build_epilogue(ctx); 108 105 ··· 171 162 172 163 if (!prog->is_func || extra_pass) { 173 164 bpf_jit_binary_lock_ro(jit_data->header); 174 - prologue_len = ctx->epilogue_offset - ctx->body_len; 175 165 for (i = 0; i < prog->len; i++) 176 - ctx->offset[i] = ninsns_rvoff(prologue_len + 177 - ctx->offset[i]); 166 + ctx->offset[i] = ninsns_rvoff(ctx->offset[i]); 178 167 bpf_prog_fill_jited_linfo(prog, ctx->offset); 179 168 out_offset: 180 169 kfree(ctx->offset);
+3 -3
arch/sh/boards/mach-dreamcast/irq.c
··· 108 108 __u32 j, bit; 109 109 110 110 switch (irq) { 111 - case 13: 111 + case 13 + 16: 112 112 level = 0; 113 113 break; 114 - case 11: 114 + case 11 + 16: 115 115 level = 1; 116 116 break; 117 - case 9: 117 + case 9 + 16: 118 118 level = 2; 119 119 break; 120 120 default:
+2 -2
arch/sh/boards/mach-highlander/setup.c
··· 389 389 390 390 static int highlander_irq_demux(int irq) 391 391 { 392 - if (irq >= HL_NR_IRL || irq < 0 || !irl2irq[irq]) 392 + if (irq >= HL_NR_IRL + 16 || irq < 16 || !irl2irq[irq - 16]) 393 393 return irq; 394 394 395 - return irl2irq[irq]; 395 + return irl2irq[irq - 16]; 396 396 } 397 397 398 398 static void __init highlander_init_irq(void)
+2 -2
arch/sh/boards/mach-r2d/irq.c
··· 117 117 118 118 int rts7751r2d_irq_demux(int irq) 119 119 { 120 - if (irq >= R2D_NR_IRL || irq < 0 || !irl2irq[irq]) 120 + if (irq >= R2D_NR_IRL + 16 || irq < 16 || !irl2irq[irq - 16]) 121 121 return irq; 122 122 123 - return irl2irq[irq]; 123 + return irl2irq[irq - 16]; 124 124 } 125 125 126 126 /*
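The dreamcast, highlander, and r2d hunks all apply the same translation: SH Linux IRQ numbers now sit 16 above the hardware IRL vector, so the demux helpers compare against shifted bounds and subtract the offset before indexing the lookup table. A sketch of the adjusted demux pattern (the table size and contents below are illustrative, not a real board map):

```c
#include <assert.h>

#define SH_IRQ_OFFSET 16
#define NR_IRL 4

/* Illustrative IRL -> Linux IRQ map; 0 means "no translation". */
static const int irl2irq[NR_IRL] = { 0, 65, 66, 0 };

/* Sketch of the shifted demux: translate only in-range, mapped vectors. */
int irq_demux_sketch(int irq)
{
	if (irq >= NR_IRL + SH_IRQ_OFFSET || irq < SH_IRQ_OFFSET ||
	    !irl2irq[irq - SH_IRQ_OFFSET])
		return irq;	/* out of range or unmapped: pass through */

	return irl2irq[irq - SH_IRQ_OFFSET];
}
```

The same +16 shift explains the `case 13 + 16:` style in the dreamcast switch and the HD64461 `OFFCHIP_IRQ_BASE (64 + 16)` change that follows.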
+2 -2
arch/sh/cchips/Kconfig
··· 29 29 config HD64461_IRQ 30 30 int "HD64461 IRQ" 31 31 depends on HD64461 32 - default "36" 32 + default "52" 33 33 help 34 - The default setting of the HD64461 IRQ is 36. 34 + The default setting of the HD64461 IRQ is 52. 35 35 36 36 Do not change this unless you know what you are doing. 37 37
+1 -1
arch/sh/include/asm/hd64461.h
··· 229 229 #define HD64461_NIMR HD64461_IO_OFFSET(0x5002) 230 230 231 231 #define HD64461_IRQBASE OFFCHIP_IRQ_BASE 232 - #define OFFCHIP_IRQ_BASE 64 232 + #define OFFCHIP_IRQ_BASE (64 + 16) 233 233 #define HD64461_IRQ_NUM 16 234 234 235 235 #define HD64461_IRQ_UART (HD64461_IRQBASE+5)
+1 -1
arch/sparc/include/asm/cmpxchg_32.h
··· 15 15 unsigned long __xchg_u32(volatile u32 *m, u32 new); 16 16 void __xchg_called_with_bad_pointer(void); 17 17 18 - static inline unsigned long __arch_xchg(unsigned long x, __volatile__ void * ptr, int size) 18 + static __always_inline unsigned long __arch_xchg(unsigned long x, __volatile__ void * ptr, int size) 19 19 { 20 20 switch (size) { 21 21 case 4:
+1 -1
arch/sparc/include/asm/cmpxchg_64.h
··· 87 87 return (load32 & mask) >> bit_shift; 88 88 } 89 89 90 - static inline unsigned long 90 + static __always_inline unsigned long 91 91 __arch_xchg(unsigned long x, __volatile__ void * ptr, int size) 92 92 { 93 93 switch (size) {
+1 -1
arch/um/kernel/um_arch.c
··· 437 437 os_check_bugs(); 438 438 } 439 439 440 - void apply_ibt_endbr(s32 *start, s32 *end) 440 + void apply_seal_endbr(s32 *start, s32 *end) 441 441 { 442 442 } 443 443
+13 -40
arch/x86/entry/entry_32.S
··· 720 720 .popsection 721 721 722 722 /* 723 - * The unwinder expects the last frame on the stack to always be at the same 724 - * offset from the end of the page, which allows it to validate the stack. 725 - * Calling schedule_tail() directly would break that convention because its an 726 - * asmlinkage function so its argument has to be pushed on the stack. This 727 - * wrapper creates a proper "end of stack" frame header before the call. 728 - */ 729 - .pushsection .text, "ax" 730 - SYM_FUNC_START(schedule_tail_wrapper) 731 - FRAME_BEGIN 732 - 733 - pushl %eax 734 - call schedule_tail 735 - popl %eax 736 - 737 - FRAME_END 738 - RET 739 - SYM_FUNC_END(schedule_tail_wrapper) 740 - .popsection 741 - 742 - /* 743 723 * A newly forked process directly context switches into this address. 744 724 * 745 725 * eax: prev task we switched from ··· 727 747 * edi: kernel thread arg 728 748 */ 729 749 .pushsection .text, "ax" 730 - SYM_CODE_START(ret_from_fork) 731 - call schedule_tail_wrapper 750 + SYM_CODE_START(ret_from_fork_asm) 751 + movl %esp, %edx /* regs */ 732 752 733 - testl %ebx, %ebx 734 - jnz 1f /* kernel threads are uncommon */ 753 + /* return address for the stack unwinder */ 754 + pushl $.Lsyscall_32_done 735 755 736 - 2: 737 - /* When we fork, we trace the syscall return in the child, too. */ 738 - movl %esp, %eax 739 - call syscall_exit_to_user_mode 740 - jmp .Lsyscall_32_done 756 + FRAME_BEGIN 757 + /* prev already in EAX */ 758 + movl %ebx, %ecx /* fn */ 759 + pushl %edi /* fn_arg */ 760 + call ret_from_fork 761 + addl $4, %esp 762 + FRAME_END 741 763 742 - /* kernel thread */ 743 - 1: movl %edi, %eax 744 - CALL_NOSPEC ebx 745 - /* 746 - * A kernel thread is allowed to return here after successfully 747 - * calling kernel_execve(). Exit to userspace to complete the execve() 748 - * syscall. 
749 - */ 750 - movl $0, PT_EAX(%esp) 751 - jmp 2b 752 - SYM_CODE_END(ret_from_fork) 764 + RET 765 + SYM_CODE_END(ret_from_fork_asm) 753 766 .popsection 754 767 755 768 SYM_ENTRY(__begin_SYSENTER_singlestep_region, SYM_L_GLOBAL, SYM_A_NONE)
+8 -25
arch/x86/entry/entry_64.S
··· 284 284 * r12: kernel thread arg 285 285 */ 286 286 .pushsection .text, "ax" 287 - __FUNC_ALIGN 288 - SYM_CODE_START_NOALIGN(ret_from_fork) 289 - UNWIND_HINT_END_OF_STACK 287 + SYM_CODE_START(ret_from_fork_asm) 288 + UNWIND_HINT_REGS 290 289 ANNOTATE_NOENDBR // copy_thread 291 290 CALL_DEPTH_ACCOUNT 292 - movq %rax, %rdi 293 - call schedule_tail /* rdi: 'prev' task parameter */ 294 291 295 - testq %rbx, %rbx /* from kernel_thread? */ 296 - jnz 1f /* kernel threads are uncommon */ 292 + movq %rax, %rdi /* prev */ 293 + movq %rsp, %rsi /* regs */ 294 + movq %rbx, %rdx /* fn */ 295 + movq %r12, %rcx /* fn_arg */ 296 + call ret_from_fork 297 297 298 - 2: 299 - UNWIND_HINT_REGS 300 - movq %rsp, %rdi 301 - call syscall_exit_to_user_mode /* returns with IRQs disabled */ 302 298 jmp swapgs_restore_regs_and_return_to_usermode 303 - 304 - 1: 305 - /* kernel thread */ 306 - UNWIND_HINT_END_OF_STACK 307 - movq %r12, %rdi 308 - CALL_NOSPEC rbx 309 - /* 310 - * A kernel thread is allowed to return here after successfully 311 - * calling kernel_execve(). Exit to userspace to complete the execve() 312 - * syscall. 313 - */ 314 - movq $0, RAX(%rsp) 315 - jmp 2b 316 - SYM_CODE_END(ret_from_fork) 299 + SYM_CODE_END(ret_from_fork_asm) 317 300 .popsection 318 301 319 302 .macro DEBUG_ENTRY_ASSERT_IRQS_OFF
+7
arch/x86/events/intel/core.c
··· 3993 3993 struct perf_event *leader = event->group_leader; 3994 3994 struct perf_event *sibling = NULL; 3995 3995 3996 + /* 3997 + * When this memload event is also the first event (no group 3998 + * exists yet), then there is no aux event before it. 3999 + */ 4000 + if (leader == event) 4001 + return -ENODATA; 4002 + 3996 4003 if (!is_mem_loads_aux_event(leader)) { 3997 4004 for_each_sibling_event(sibling, leader) { 3998 4005 if (is_mem_loads_aux_event(sibling))
+1 -1
arch/x86/include/asm/alternative.h
··· 96 96 extern void apply_alternatives(struct alt_instr *start, struct alt_instr *end); 97 97 extern void apply_retpolines(s32 *start, s32 *end); 98 98 extern void apply_returns(s32 *start, s32 *end); 99 - extern void apply_ibt_endbr(s32 *start, s32 *end); 99 + extern void apply_seal_endbr(s32 *start, s32 *end); 100 100 extern void apply_fineibt(s32 *start_retpoline, s32 *end_retpoine, 101 101 s32 *start_cfi, s32 *end_cfi); 102 102
+1 -1
arch/x86/include/asm/ibt.h
··· 34 34 /* 35 35 * Create a dummy function pointer reference to prevent objtool from marking 36 36 * the function as needing to be "sealed" (i.e. ENDBR converted to NOP by 37 - * apply_ibt_endbr()). 37 + * apply_seal_endbr()). 38 38 */ 39 39 #define IBT_NOSEAL(fname) \ 40 40 ".pushsection .discard.ibt_endbr_noseal\n\t" \
+4
arch/x86/include/asm/nospec-branch.h
··· 234 234 * JMP_NOSPEC and CALL_NOSPEC macros can be used instead of a simple 235 235 * indirect jmp/call which may be susceptible to the Spectre variant 2 236 236 * attack. 237 + * 238 + * NOTE: these do not take kCFI into account and are thus not comparable to C 239 + * indirect calls, take care when using. The target of these should be an ENDBR 240 + * instruction irrespective of kCFI. 237 241 */ 238 242 .macro JMP_NOSPEC reg:req 239 243 #ifdef CONFIG_RETPOLINE
+3 -1
arch/x86/include/asm/switch_to.h
··· 12 12 __visible struct task_struct *__switch_to(struct task_struct *prev, 13 13 struct task_struct *next); 14 14 15 - asmlinkage void ret_from_fork(void); 15 + asmlinkage void ret_from_fork_asm(void); 16 + __visible void ret_from_fork(struct task_struct *prev, struct pt_regs *regs, 17 + int (*fn)(void *), void *fn_arg); 16 18 17 19 /* 18 20 * This is the structure pointed to by thread.sp for an inactive task. The
+67 -4
arch/x86/kernel/alternative.c
··· 778 778 779 779 #ifdef CONFIG_X86_KERNEL_IBT 780 780 781 + static void poison_cfi(void *addr); 782 + 781 783 static void __init_or_module poison_endbr(void *addr, bool warn) 782 784 { 783 785 u32 endbr, poison = gen_endbr_poison(); ··· 804 802 805 803 /* 806 804 * Generated by: objtool --ibt 805 + * 806 + * Seal the functions for indirect calls by clobbering the ENDBR instructions 807 + * and the kCFI hash value. 807 808 */ 808 - void __init_or_module noinline apply_ibt_endbr(s32 *start, s32 *end) 809 + void __init_or_module noinline apply_seal_endbr(s32 *start, s32 *end) 809 810 { 810 811 s32 *s; 811 812 ··· 817 812 818 813 poison_endbr(addr, true); 819 814 if (IS_ENABLED(CONFIG_FINEIBT)) 820 - poison_endbr(addr - 16, false); 815 + poison_cfi(addr - 16); 821 816 } 822 817 } 823 818 824 819 #else 825 820 826 - void __init_or_module apply_ibt_endbr(s32 *start, s32 *end) { } 821 + void __init_or_module apply_seal_endbr(s32 *start, s32 *end) { } 827 822 828 823 #endif /* CONFIG_X86_KERNEL_IBT */ 829 824 ··· 1068 1063 return 0; 1069 1064 } 1070 1065 1066 + static void cfi_rewrite_endbr(s32 *start, s32 *end) 1067 + { 1068 + s32 *s; 1069 + 1070 + for (s = start; s < end; s++) { 1071 + void *addr = (void *)s + *s; 1072 + 1073 + poison_endbr(addr+16, false); 1074 + } 1075 + } 1076 + 1071 1077 /* .retpoline_sites */ 1072 1078 static int cfi_rand_callers(s32 *start, s32 *end) 1073 1079 { ··· 1173 1157 return; 1174 1158 1175 1159 case CFI_FINEIBT: 1160 + /* place the FineIBT preamble at func()-16 */ 1176 1161 ret = cfi_rewrite_preamble(start_cfi, end_cfi); 1177 1162 if (ret) 1178 1163 goto err; 1179 1164 1165 + /* rewrite the callers to target func()-16 */ 1180 1166 ret = cfi_rewrite_callers(start_retpoline, end_retpoline); 1181 1167 if (ret) 1182 1168 goto err; 1169 + 1170 + /* now that nobody targets func()+0, remove ENDBR there */ 1171 + cfi_rewrite_endbr(start_cfi, end_cfi); 1183 1172 1184 1173 if (builtin) 1185 1174 pr_info("Using FineIBT CFI\n"); ··· 1198 1177 
pr_err("Something went horribly wrong trying to rewrite the CFI implementation.\n"); 1199 1178 } 1200 1179 1180 + static inline void poison_hash(void *addr) 1181 + { 1182 + *(u32 *)addr = 0; 1183 + } 1184 + 1185 + static void poison_cfi(void *addr) 1186 + { 1187 + switch (cfi_mode) { 1188 + case CFI_FINEIBT: 1189 + /* 1190 + * __cfi_\func: 1191 + * osp nopl (%rax) 1192 + * subl $0, %r10d 1193 + * jz 1f 1194 + * ud2 1195 + * 1: nop 1196 + */ 1197 + poison_endbr(addr, false); 1198 + poison_hash(addr + fineibt_preamble_hash); 1199 + break; 1200 + 1201 + case CFI_KCFI: 1202 + /* 1203 + * __cfi_\func: 1204 + * movl $0, %eax 1205 + * .skip 11, 0x90 1206 + */ 1207 + poison_hash(addr + 1); 1208 + break; 1209 + 1210 + default: 1211 + break; 1212 + } 1213 + } 1214 + 1201 1215 #else 1202 1216 1203 1217 static void __apply_fineibt(s32 *start_retpoline, s32 *end_retpoline, 1204 1218 s32 *start_cfi, s32 *end_cfi, bool builtin) 1205 1219 { 1206 1220 } 1221 + 1222 + #ifdef CONFIG_X86_KERNEL_IBT 1223 + static void poison_cfi(void *addr) { } 1224 + #endif 1207 1225 1208 1226 #endif 1209 1227 ··· 1625 1565 */ 1626 1566 callthunks_patch_builtin_calls(); 1627 1567 1628 - apply_ibt_endbr(__ibt_endbr_seal, __ibt_endbr_seal_end); 1568 + /* 1569 + * Seal all functions that do not have their address taken. 1570 + */ 1571 + apply_seal_endbr(__ibt_endbr_seal, __ibt_endbr_seal_end); 1629 1572 1630 1573 #ifdef CONFIG_SMP 1631 1574 /* Patch to UP if other cpus not imminent. */
-1
arch/x86/kernel/ftrace.c
··· 282 282 283 283 /* Defined as markers to the end of the ftrace default trampolines */ 284 284 extern void ftrace_regs_caller_end(void); 285 - extern void ftrace_regs_caller_ret(void); 286 285 extern void ftrace_caller_end(void); 287 286 extern void ftrace_caller_op_ptr(void); 288 287 extern void ftrace_regs_caller_op_ptr(void);
+1 -1
arch/x86/kernel/module.c
··· 358 358 } 359 359 if (ibt_endbr) { 360 360 void *iseg = (void *)ibt_endbr->sh_addr; 361 - apply_ibt_endbr(iseg, iseg + ibt_endbr->sh_size); 361 + apply_seal_endbr(iseg, iseg + ibt_endbr->sh_size); 362 362 } 363 363 if (locks) { 364 364 void *lseg = (void *)locks->sh_addr;
+21 -1
arch/x86/kernel/process.c
··· 28 28 #include <linux/static_call.h> 29 29 #include <trace/events/power.h> 30 30 #include <linux/hw_breakpoint.h> 31 + #include <linux/entry-common.h> 31 32 #include <asm/cpu.h> 32 33 #include <asm/apic.h> 33 34 #include <linux/uaccess.h> ··· 135 134 return do_set_thread_area_64(p, ARCH_SET_FS, tls); 136 135 } 137 136 137 + __visible void ret_from_fork(struct task_struct *prev, struct pt_regs *regs, 138 + int (*fn)(void *), void *fn_arg) 139 + { 140 + schedule_tail(prev); 141 + 142 + /* Is this a kernel thread? */ 143 + if (unlikely(fn)) { 144 + fn(fn_arg); 145 + /* 146 + * A kernel thread is allowed to return here after successfully 147 + * calling kernel_execve(). Exit to userspace to complete the 148 + * execve() syscall. 149 + */ 150 + regs->ax = 0; 151 + } 152 + 153 + syscall_exit_to_user_mode(regs); 154 + } 155 + 138 156 int copy_thread(struct task_struct *p, const struct kernel_clone_args *args) 139 157 { 140 158 unsigned long clone_flags = args->flags; ··· 169 149 frame = &fork_frame->frame; 170 150 171 151 frame->bp = encode_frame_pointer(childregs); 172 - frame->ret_addr = (unsigned long) ret_from_fork; 152 + frame->ret_addr = (unsigned long) ret_from_fork_asm; 173 153 p->thread.sp = (unsigned long) fork_frame; 174 154 p->thread.io_bitmap = NULL; 175 155 p->thread.iopl_warn = 0;
+21 -16
arch/x86/xen/xen-head.S
··· 90 90 ELFNOTE(Xen, XEN_ELFNOTE_GUEST_OS, .asciz "linux") 91 91 ELFNOTE(Xen, XEN_ELFNOTE_GUEST_VERSION, .asciz "2.6") 92 92 ELFNOTE(Xen, XEN_ELFNOTE_XEN_VERSION, .asciz "xen-3.0") 93 - #ifdef CONFIG_X86_32 94 - ELFNOTE(Xen, XEN_ELFNOTE_VIRT_BASE, _ASM_PTR __PAGE_OFFSET) 95 - #else 93 + #ifdef CONFIG_XEN_PV 96 94 ELFNOTE(Xen, XEN_ELFNOTE_VIRT_BASE, _ASM_PTR __START_KERNEL_map) 97 95 /* Map the p2m table to a 512GB-aligned user address. */ 98 96 ELFNOTE(Xen, XEN_ELFNOTE_INIT_P2M, .quad (PUD_SIZE * PTRS_PER_PUD)) 99 - #endif 100 - #ifdef CONFIG_XEN_PV 101 97 ELFNOTE(Xen, XEN_ELFNOTE_ENTRY, _ASM_PTR startup_xen) 102 - #endif 103 - ELFNOTE(Xen, XEN_ELFNOTE_HYPERCALL_PAGE, _ASM_PTR hypercall_page) 104 - ELFNOTE(Xen, XEN_ELFNOTE_FEATURES, 105 - .ascii "!writable_page_tables|pae_pgdir_above_4gb") 106 - ELFNOTE(Xen, XEN_ELFNOTE_SUPPORTED_FEATURES, 107 - .long (1 << XENFEAT_writable_page_tables) | \ 108 - (1 << XENFEAT_dom0) | \ 109 - (1 << XENFEAT_linux_rsdp_unrestricted)) 98 + ELFNOTE(Xen, XEN_ELFNOTE_FEATURES, .ascii "!writable_page_tables") 110 99 ELFNOTE(Xen, XEN_ELFNOTE_PAE_MODE, .asciz "yes") 111 - ELFNOTE(Xen, XEN_ELFNOTE_LOADER, .asciz "generic") 112 100 ELFNOTE(Xen, XEN_ELFNOTE_L1_MFN_VALID, 113 101 .quad _PAGE_PRESENT; .quad _PAGE_PRESENT) 114 - ELFNOTE(Xen, XEN_ELFNOTE_SUSPEND_CANCEL, .long 1) 115 102 ELFNOTE(Xen, XEN_ELFNOTE_MOD_START_PFN, .long 1) 116 - ELFNOTE(Xen, XEN_ELFNOTE_HV_START_LOW, _ASM_PTR __HYPERVISOR_VIRT_START) 117 103 ELFNOTE(Xen, XEN_ELFNOTE_PADDR_OFFSET, _ASM_PTR 0) 104 + # define FEATURES_PV (1 << XENFEAT_writable_page_tables) 105 + #else 106 + # define FEATURES_PV 0 107 + #endif 108 + #ifdef CONFIG_XEN_PVH 109 + # define FEATURES_PVH (1 << XENFEAT_linux_rsdp_unrestricted) 110 + #else 111 + # define FEATURES_PVH 0 112 + #endif 113 + #ifdef CONFIG_XEN_DOM0 114 + # define FEATURES_DOM0 (1 << XENFEAT_dom0) 115 + #else 116 + # define FEATURES_DOM0 0 117 + #endif 118 + ELFNOTE(Xen, XEN_ELFNOTE_HYPERCALL_PAGE, _ASM_PTR hypercall_page) 119 + 
ELFNOTE(Xen, XEN_ELFNOTE_SUPPORTED_FEATURES, 120 + .long FEATURES_PV | FEATURES_PVH | FEATURES_DOM0) 121 + ELFNOTE(Xen, XEN_ELFNOTE_LOADER, .asciz "generic") 122 + ELFNOTE(Xen, XEN_ELFNOTE_SUSPEND_CANCEL, .long 1) 118 123 119 124 #endif /*CONFIG_XEN */
+14 -20
arch/xtensa/kernel/align.S
··· 1 1 /* 2 2 * arch/xtensa/kernel/align.S 3 3 * 4 - * Handle unalignment exceptions in kernel space. 4 + * Handle unalignment and load/store exceptions. 5 5 * 6 6 * This file is subject to the terms and conditions of the GNU General 7 7 * Public License. See the file "COPYING" in the main directory of ··· 26 26 #define LOAD_EXCEPTION_HANDLER 27 27 #endif 28 28 29 - #if XCHAL_UNALIGNED_STORE_EXCEPTION || defined LOAD_EXCEPTION_HANDLER 29 + #if XCHAL_UNALIGNED_STORE_EXCEPTION || defined CONFIG_XTENSA_LOAD_STORE 30 + #define STORE_EXCEPTION_HANDLER 31 + #endif 32 + 33 + #if defined LOAD_EXCEPTION_HANDLER || defined STORE_EXCEPTION_HANDLER 30 34 #define ANY_EXCEPTION_HANDLER 31 35 #endif 32 36 33 - #if XCHAL_HAVE_WINDOWED 37 + #if XCHAL_HAVE_WINDOWED && defined CONFIG_MMU 34 38 #define UNALIGNED_USER_EXCEPTION 35 39 #endif 36 - 37 - /* First-level exception handler for unaligned exceptions. 38 - * 39 - * Note: This handler works only for kernel exceptions. Unaligned user 40 - * access should get a seg fault. 41 - */ 42 40 43 41 /* Big and little endian 16-bit values are located in 44 42 * different halves of a register. HWORD_START helps to ··· 226 228 #ifdef ANY_EXCEPTION_HANDLER 227 229 ENTRY(fast_unaligned) 228 230 229 - #if XCHAL_UNALIGNED_LOAD_EXCEPTION || XCHAL_UNALIGNED_STORE_EXCEPTION 230 - 231 231 call0 .Lsave_and_load_instruction 232 232 233 233 /* Analyze the instruction (load or store?). */ ··· 240 244 /* 'store indicator bit' not set, jump */ 241 245 _bbci.l a4, OP1_SI_BIT + INSN_OP1, .Lload 242 246 243 - #endif 244 - #if XCHAL_UNALIGNED_STORE_EXCEPTION 247 + #ifdef STORE_EXCEPTION_HANDLER 245 248 246 249 /* Store: Jump to table entry to get the value in the source register.*/ 247 250 ··· 249 254 addx8 a5, a6, a5 250 255 jx a5 # jump into table 251 256 #endif 252 - #if XCHAL_UNALIGNED_LOAD_EXCEPTION 257 + #ifdef LOAD_EXCEPTION_HANDLER 253 258 254 259 /* Load: Load memory address. 
*/ 255 260 ··· 323 328 mov a14, a3 ; _j .Lexit; .align 8 324 329 mov a15, a3 ; _j .Lexit; .align 8 325 330 #endif 326 - #if XCHAL_UNALIGNED_STORE_EXCEPTION 331 + #ifdef STORE_EXCEPTION_HANDLER 327 332 .Lstore_table: 328 333 l32i a3, a2, PT_AREG0; _j .Lstore_w; .align 8 329 334 mov a3, a1; _j .Lstore_w; .align 8 # fishy?? ··· 343 348 mov a3, a15 ; _j .Lstore_w; .align 8 344 349 #endif 345 350 346 - #ifdef ANY_EXCEPTION_HANDLER 347 351 /* We cannot handle this exception. */ 348 352 349 353 .extern _kernel_exception ··· 371 377 372 378 2: movi a0, _user_exception 373 379 jx a0 374 - #endif 375 - #if XCHAL_UNALIGNED_STORE_EXCEPTION 380 + 381 + #ifdef STORE_EXCEPTION_HANDLER 376 382 377 383 # a7: instruction pointer, a4: instruction, a3: value 378 384 .Lstore_w: ··· 438 444 s32i a6, a4, 4 439 445 #endif 440 446 #endif 441 - #ifdef ANY_EXCEPTION_HANDLER 447 + 442 448 .Lexit: 443 449 #if XCHAL_HAVE_LOOPS 444 450 rsr a4, lend # check if we reached LEND ··· 533 539 __src_b a4, a4, a5 # a4 has the instruction 534 540 535 541 ret 536 - #endif 542 + 537 543 ENDPROC(fast_unaligned) 538 544 539 545 ENTRY(fast_unaligned_fixup)
+2 -1
arch/xtensa/kernel/traps.c
··· 102 102 #endif 103 103 { EXCCAUSE_INTEGER_DIVIDE_BY_ZERO, 0, do_div0 }, 104 104 /* EXCCAUSE_PRIVILEGED unhandled */ 105 - #if XCHAL_UNALIGNED_LOAD_EXCEPTION || XCHAL_UNALIGNED_STORE_EXCEPTION 105 + #if XCHAL_UNALIGNED_LOAD_EXCEPTION || XCHAL_UNALIGNED_STORE_EXCEPTION || \ 106 + IS_ENABLED(CONFIG_XTENSA_LOAD_STORE) 106 107 #ifdef CONFIG_XTENSA_UNALIGNED_USER 107 108 { EXCCAUSE_UNALIGNED, USER, fast_unaligned }, 108 109 #endif
+2 -1
arch/xtensa/platforms/iss/network.c
··· 237 237 238 238 init += sizeof(TRANSPORT_TUNTAP_NAME) - 1; 239 239 if (*init == ',') { 240 - rem = split_if_spec(init + 1, &mac_str, &dev_name); 240 + rem = split_if_spec(init + 1, &mac_str, &dev_name, NULL); 241 241 if (rem != NULL) { 242 242 pr_err("%s: extra garbage on specification : '%s'\n", 243 243 dev->name, rem); ··· 540 540 rtnl_unlock(); 541 541 pr_err("%s: error registering net device!\n", dev->name); 542 542 platform_device_unregister(&lp->pdev); 543 + /* dev is freed by the iss_net_pdev_release callback */ 543 544 return; 544 545 } 545 546 rtnl_unlock();
+10 -2
block/blk-crypto-profile.c
··· 79 79 unsigned int slot_hashtable_size; 80 80 81 81 memset(profile, 0, sizeof(*profile)); 82 - init_rwsem(&profile->lock); 82 + 83 + /* 84 + * profile->lock of an underlying device can nest inside profile->lock 85 + * of a device-mapper device, so use a dynamic lock class to avoid 86 + * false-positive lockdep reports. 87 + */ 88 + lockdep_register_key(&profile->lockdep_key); 89 + __init_rwsem(&profile->lock, "&profile->lock", &profile->lockdep_key); 83 90 84 91 if (num_slots == 0) 85 92 return 0; ··· 96 89 profile->slots = kvcalloc(num_slots, sizeof(profile->slots[0]), 97 90 GFP_KERNEL); 98 91 if (!profile->slots) 99 - return -ENOMEM; 92 + goto err_destroy; 100 93 101 94 profile->num_slots = num_slots; 102 95 ··· 442 435 { 443 436 if (!profile) 444 437 return; 438 + lockdep_unregister_key(&profile->lockdep_key); 445 439 kvfree(profile->slot_hashtable); 446 440 kvfree_sensitive(profile->slots, 447 441 sizeof(profile->slots[0]) * profile->num_slots);
+1 -1
block/blk-flush.c
··· 189 189 case REQ_FSEQ_DATA: 190 190 list_move_tail(&rq->flush.list, &fq->flush_data_in_flight); 191 191 spin_lock(&q->requeue_lock); 192 - list_add_tail(&rq->queuelist, &q->flush_list); 192 + list_add(&rq->queuelist, &q->requeue_list); 193 193 spin_unlock(&q->requeue_lock); 194 194 blk_mq_kick_requeue_list(q); 195 195 break;
+30 -17
block/blk-mq.c
··· 328 328 } 329 329 EXPORT_SYMBOL(blk_rq_init); 330 330 331 + /* Set start and alloc time when the allocated request is actually used */ 332 + static inline void blk_mq_rq_time_init(struct request *rq, u64 alloc_time_ns) 333 + { 334 + if (blk_mq_need_time_stamp(rq)) 335 + rq->start_time_ns = ktime_get_ns(); 336 + else 337 + rq->start_time_ns = 0; 338 + 339 + #ifdef CONFIG_BLK_RQ_ALLOC_TIME 340 + if (blk_queue_rq_alloc_time(rq->q)) 341 + rq->alloc_time_ns = alloc_time_ns ?: rq->start_time_ns; 342 + else 343 + rq->alloc_time_ns = 0; 344 + #endif 345 + } 346 + 331 347 static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data, 332 - struct blk_mq_tags *tags, unsigned int tag, u64 alloc_time_ns) 348 + struct blk_mq_tags *tags, unsigned int tag) 333 349 { 334 350 struct blk_mq_ctx *ctx = data->ctx; 335 351 struct blk_mq_hw_ctx *hctx = data->hctx; ··· 372 356 } 373 357 rq->timeout = 0; 374 358 375 - if (blk_mq_need_time_stamp(rq)) 376 - rq->start_time_ns = ktime_get_ns(); 377 - else 378 - rq->start_time_ns = 0; 379 359 rq->part = NULL; 380 - #ifdef CONFIG_BLK_RQ_ALLOC_TIME 381 - rq->alloc_time_ns = alloc_time_ns; 382 - #endif 383 360 rq->io_start_time_ns = 0; 384 361 rq->stats_sectors = 0; 385 362 rq->nr_phys_segments = 0; ··· 402 393 } 403 394 404 395 static inline struct request * 405 - __blk_mq_alloc_requests_batch(struct blk_mq_alloc_data *data, 406 - u64 alloc_time_ns) 396 + __blk_mq_alloc_requests_batch(struct blk_mq_alloc_data *data) 407 397 { 408 398 unsigned int tag, tag_offset; 409 399 struct blk_mq_tags *tags; ··· 421 413 tag = tag_offset + i; 422 414 prefetch(tags->static_rqs[tag]); 423 415 tag_mask &= ~(1UL << i); 424 - rq = blk_mq_rq_ctx_init(data, tags, tag, alloc_time_ns); 416 + rq = blk_mq_rq_ctx_init(data, tags, tag); 425 417 rq_list_add(data->cached_rq, rq); 426 418 nr++; 427 419 } ··· 482 474 * Try batched alloc if we want more than 1 tag. 
483 475 */ 484 476 if (data->nr_tags > 1) { 485 - rq = __blk_mq_alloc_requests_batch(data, alloc_time_ns); 486 - if (rq) 477 + rq = __blk_mq_alloc_requests_batch(data); 478 + if (rq) { 479 + blk_mq_rq_time_init(rq, alloc_time_ns); 487 480 return rq; 481 + } 488 482 data->nr_tags = 1; 489 483 } 490 484 ··· 509 499 goto retry; 510 500 } 511 501 512 - return blk_mq_rq_ctx_init(data, blk_mq_tags_from_data(data), tag, 513 - alloc_time_ns); 502 + rq = blk_mq_rq_ctx_init(data, blk_mq_tags_from_data(data), tag); 503 + blk_mq_rq_time_init(rq, alloc_time_ns); 504 + return rq; 514 505 } 515 506 516 507 static struct request *blk_mq_rq_cache_fill(struct request_queue *q, ··· 566 555 return NULL; 567 556 568 557 plug->cached_rq = rq_list_next(rq); 558 + blk_mq_rq_time_init(rq, 0); 569 559 } 570 560 571 561 rq->cmd_flags = opf; ··· 668 656 tag = blk_mq_get_tag(&data); 669 657 if (tag == BLK_MQ_NO_TAG) 670 658 goto out_queue_exit; 671 - rq = blk_mq_rq_ctx_init(&data, blk_mq_tags_from_data(&data), tag, 672 - alloc_time_ns); 659 + rq = blk_mq_rq_ctx_init(&data, blk_mq_tags_from_data(&data), tag); 660 + blk_mq_rq_time_init(rq, alloc_time_ns); 673 661 rq->__data_len = 0; 674 662 rq->__sector = (sector_t) -1; 675 663 rq->bio = rq->biotail = NULL; ··· 2908 2896 plug->cached_rq = rq_list_next(rq); 2909 2897 rq_qos_throttle(q, *bio); 2910 2898 2899 + blk_mq_rq_time_init(rq, 0); 2911 2900 rq->cmd_flags = (*bio)->bi_opf; 2912 2901 INIT_LIST_HEAD(&rq->queuelist); 2913 2902 return rq;
+50 -36
block/blk-zoned.c
··· 442 442 unsigned long *conv_zones_bitmap; 443 443 unsigned long *seq_zones_wlock; 444 444 unsigned int nr_zones; 445 - sector_t zone_sectors; 446 445 sector_t sector; 447 446 }; 448 447 ··· 455 456 struct gendisk *disk = args->disk; 456 457 struct request_queue *q = disk->queue; 457 458 sector_t capacity = get_capacity(disk); 459 + sector_t zone_sectors = q->limits.chunk_sectors; 460 + 461 + /* Check for bad zones and holes in the zone report */ 462 + if (zone->start != args->sector) { 463 + pr_warn("%s: Zone gap at sectors %llu..%llu\n", 464 + disk->disk_name, args->sector, zone->start); 465 + return -ENODEV; 466 + } 467 + 468 + if (zone->start >= capacity || !zone->len) { 469 + pr_warn("%s: Invalid zone start %llu, length %llu\n", 470 + disk->disk_name, zone->start, zone->len); 471 + return -ENODEV; 472 + } 458 473 459 474 /* 460 475 * All zones must have the same size, with the exception on an eventual 461 476 * smaller last zone. 462 477 */ 463 - if (zone->start == 0) { 464 - if (zone->len == 0 || !is_power_of_2(zone->len)) { 465 - pr_warn("%s: Invalid zoned device with non power of two zone size (%llu)\n", 466 - disk->disk_name, zone->len); 467 - return -ENODEV; 468 - } 469 - 470 - args->zone_sectors = zone->len; 471 - args->nr_zones = (capacity + zone->len - 1) >> ilog2(zone->len); 472 - } else if (zone->start + args->zone_sectors < capacity) { 473 - if (zone->len != args->zone_sectors) { 478 + if (zone->start + zone->len < capacity) { 479 + if (zone->len != zone_sectors) { 474 480 pr_warn("%s: Invalid zoned device with non constant zone size\n", 475 481 disk->disk_name); 476 482 return -ENODEV; 477 483 } 478 - } else { 479 - if (zone->len > args->zone_sectors) { 480 - pr_warn("%s: Invalid zoned device with larger last zone size\n", 481 - disk->disk_name); 482 - return -ENODEV; 483 - } 484 - } 485 - 486 - /* Check for holes in the zone report */ 487 - if (zone->start != args->sector) { 488 - pr_warn("%s: Zone gap at sectors %llu..%llu\n", 489 - 
disk->disk_name, args->sector, zone->start); 484 + } else if (zone->len > zone_sectors) { 485 + pr_warn("%s: Invalid zoned device with larger last zone size\n", 486 + disk->disk_name); 490 487 return -ENODEV; 491 488 } 492 489 ··· 521 526 * @disk: Target disk 522 527 * @update_driver_data: Callback to update driver data on the frozen disk 523 528 * 524 - * Helper function for low-level device drivers to (re) allocate and initialize 525 - * a disk request queue zone bitmaps. This functions should normally be called 526 - * within the disk ->revalidate method for blk-mq based drivers. For BIO based 527 - * drivers only q->nr_zones needs to be updated so that the sysfs exposed value 528 - * is correct. 529 + * Helper function for low-level device drivers to check and (re) allocate and 530 + * initialize a disk request queue zone bitmaps. This functions should normally 531 + * be called within the disk ->revalidate method for blk-mq based drivers. 532 + * Before calling this function, the device driver must already have set the 533 + * device zone size (chunk_sector limit) and the max zone append limit. 534 + * For BIO based drivers, this function cannot be used. BIO based device drivers 535 + * only need to set disk->nr_zones so that the sysfs exposed value is correct. 529 536 * If the @update_driver_data callback function is not NULL, the callback is 530 537 * executed with the device request queue frozen after all zones have been 531 538 * checked. 
··· 536 539 void (*update_driver_data)(struct gendisk *disk)) 537 540 { 538 541 struct request_queue *q = disk->queue; 539 - struct blk_revalidate_zone_args args = { 540 - .disk = disk, 541 - }; 542 + sector_t zone_sectors = q->limits.chunk_sectors; 543 + sector_t capacity = get_capacity(disk); 544 + struct blk_revalidate_zone_args args = { }; 542 545 unsigned int noio_flag; 543 546 int ret; 544 547 ··· 547 550 if (WARN_ON_ONCE(!queue_is_mq(q))) 548 551 return -EIO; 549 552 550 - if (!get_capacity(disk)) 551 - return -EIO; 553 + if (!capacity) 554 + return -ENODEV; 555 + 556 + /* 557 + * Checks that the device driver indicated a valid zone size and that 558 + * the max zone append limit is set. 559 + */ 560 + if (!zone_sectors || !is_power_of_2(zone_sectors)) { 561 + pr_warn("%s: Invalid non power of two zone size (%llu)\n", 562 + disk->disk_name, zone_sectors); 563 + return -ENODEV; 564 + } 565 + 566 + if (!q->limits.max_zone_append_sectors) { 567 + pr_warn("%s: Invalid 0 maximum zone append limit\n", 568 + disk->disk_name); 569 + return -ENODEV; 570 + } 552 571 553 572 /* 554 573 * Ensure that all memory allocations in this context are done as if 555 574 * GFP_NOIO was specified. 556 575 */ 576 + args.disk = disk; 577 + args.nr_zones = (capacity + zone_sectors - 1) >> ilog2(zone_sectors); 557 578 noio_flag = memalloc_noio_save(); 558 579 ret = disk->fops->report_zones(disk, 0, UINT_MAX, 559 580 blk_revalidate_zone_cb, &args); ··· 585 570 * If zones where reported, make sure that the entire disk capacity 586 571 * has been checked. 
587 572 */ 588 - if (ret > 0 && args.sector != get_capacity(disk)) { 573 + if (ret > 0 && args.sector != capacity) { 589 574 pr_warn("%s: Missing zones from sector %llu\n", 590 575 disk->disk_name, args.sector); 591 576 ret = -ENODEV; ··· 598 583 */ 599 584 blk_mq_freeze_queue(q); 600 585 if (ret > 0) { 601 - blk_queue_chunk_sectors(q, args.zone_sectors); 602 586 disk->nr_zones = args.nr_zones; 603 587 swap(disk->seq_zones_wlock, args.seq_zones_wlock); 604 588 swap(disk->conv_zones_bitmap, args.conv_zones_bitmap);
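For context, the new up-front validation in blk_revalidate_disk_zones() can be sketched in plain userspace C (the helper names below are illustrative, not kernel APIs): a zone size of zero or one that is not a power of two rejects the device, and the zone count is the capacity rounded up to a whole zone.

```c
#include <stdbool.h>
#include <stdint.h>

/* Userspace sketch (not kernel code) of the checks the patched
 * blk_revalidate_disk_zones() performs before reporting zones: the
 * driver must already have set a non-zero, power-of-two zone size. */
static bool zone_size_valid(uint64_t zone_sectors)
{
	return zone_sectors != 0 && (zone_sectors & (zone_sectors - 1)) == 0;
}

/* Zone count: capacity rounded up to a whole zone, equivalent to the
 * kernel's "(capacity + zone_sectors - 1) >> ilog2(zone_sectors)" for
 * a power-of-two zone size. */
static uint64_t nr_zones(uint64_t capacity, uint64_t zone_sectors)
{
	return (capacity + zone_sectors - 1) / zone_sectors;
}
```

A device whose last zone is smaller than zone_sectors is still accepted (hence the round-up); only a *larger* last zone is rejected, per the hunk above.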
+1 -1
block/mq-deadline.c
··· 176 176 * zoned writes, start searching from the start of a zone. 177 177 */ 178 178 if (blk_rq_is_seq_zoned_write(rq)) 179 - pos -= round_down(pos, rq->q->limits.chunk_sectors); 179 + pos = round_down(pos, rq->q->limits.chunk_sectors); 180 180 181 181 while (node) { 182 182 rq = rb_entry_rq(node);
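The mq-deadline change is a one-character fix with a large behavioral difference: `pos -= round_down(pos, chunk)` leaves the *offset within* the zone, while the intended `pos = round_down(pos, chunk)` yields the *start* of the zone. A minimal sketch (round_down modeled for a power-of-two chunk size, as in the kernel):

```c
#include <stdint.h>

/* round_down() for a power-of-two chunk size. */
static uint64_t round_down_sectors(uint64_t pos, uint64_t chunk)
{
	return pos & ~(chunk - 1);
}

/* Fixed behavior: pos = round_down(pos, chunk) -> start of the zone,
 * which is where the search for pending zoned writes must begin. */
static uint64_t fixed_pos(uint64_t pos, uint64_t chunk)
{
	return round_down_sectors(pos, chunk);
}

/* Old bug: pos -= round_down(pos, chunk) -> offset inside the zone,
 * i.e. a sector far before the zone being searched. */
static uint64_t buggy_pos(uint64_t pos, uint64_t chunk)
{
	return pos - round_down_sectors(pos, chunk);
}
```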
+1 -1
block/partitions/amiga.c
··· 90 90 } 91 91 blk = be32_to_cpu(rdb->rdb_PartitionList); 92 92 put_dev_sector(sect); 93 - for (part = 1; blk>0 && part<=16; part++, put_dev_sector(sect)) { 93 + for (part = 1; (s32) blk>0 && part<=16; part++, put_dev_sector(sect)) { 94 94 /* Read in terms partition table understands */ 95 95 if (check_mul_overflow(blk, (sector_t) blksize, &blk)) { 96 96 pr_err("Dev %s: overflow calculating partition block %llu! Skipping partitions %u and beyond\n",
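The amiga partition fix hinges on signedness: the on-disk RDB partition list is terminated by the block value 0xFFFFFFFF, which an unsigned `blk > 0` test never recognizes as "stop". Casting to signed 32-bit makes the terminator negative, ending the walk. A sketch of the loop condition:

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the fixed loop condition: the RDB partition-list
 * terminator 0xFFFFFFFF becomes -1 when viewed as signed 32-bit, so
 * "(s32) blk > 0" stops the walk where the unsigned test did not. */
static bool keep_walking(uint32_t blk)
{
	return (int32_t)blk > 0;
}
```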
+4 -3
crypto/af_alg.c
··· 992 992 ssize_t plen; 993 993 994 994 /* use the existing memory in an allocated page */ 995 - if (ctx->merge) { 995 + if (ctx->merge && !(msg->msg_flags & MSG_SPLICE_PAGES)) { 996 996 sgl = list_entry(ctx->tsgl_list.prev, 997 997 struct af_alg_tsgl, list); 998 998 sg = sgl->sg + sgl->cur - 1; ··· 1054 1054 ctx->used += plen; 1055 1055 copied += plen; 1056 1056 size -= plen; 1057 + ctx->merge = 0; 1057 1058 } else { 1058 1059 do { 1059 1060 struct page *pg; ··· 1086 1085 size -= plen; 1087 1086 sgl->cur++; 1088 1087 } while (len && sgl->cur < MAX_SGL_ENTS); 1088 + 1089 + ctx->merge = plen & (PAGE_SIZE - 1); 1089 1090 } 1090 1091 1091 1092 if (!size) 1092 1093 sg_mark_end(sg + sgl->cur - 1); 1093 - 1094 - ctx->merge = plen & (PAGE_SIZE - 1); 1095 1094 } 1096 1095 1097 1096 err = 0;
+3 -1
crypto/algif_hash.c
··· 68 68 struct hash_ctx *ctx = ask->private; 69 69 ssize_t copied = 0; 70 70 size_t len, max_pages, npages; 71 - bool continuing = ctx->more, need_init = false; 71 + bool continuing, need_init = false; 72 72 int err; 73 73 74 74 max_pages = min_t(size_t, ALG_MAX_PAGES, 75 75 DIV_ROUND_UP(sk->sk_sndbuf, PAGE_SIZE)); 76 76 77 77 lock_sock(sk); 78 + continuing = ctx->more; 79 + 78 80 if (!continuing) { 79 81 /* Discard a previous request that wasn't marked MSG_MORE. */ 80 82 hash_free_result(sk, ctx);
+15 -5
crypto/asymmetric_keys/public_key.c
··· 185 185 186 186 if (issig) { 187 187 sig = crypto_alloc_sig(alg_name, 0, 0); 188 - if (IS_ERR(sig)) 188 + if (IS_ERR(sig)) { 189 + ret = PTR_ERR(sig); 189 190 goto error_free_key; 191 + } 190 192 191 193 if (pkey->key_is_private) 192 194 ret = crypto_sig_set_privkey(sig, key, pkey->keylen); ··· 210 208 } 211 209 } else { 212 210 tfm = crypto_alloc_akcipher(alg_name, 0, 0); 213 - if (IS_ERR(tfm)) 211 + if (IS_ERR(tfm)) { 212 + ret = PTR_ERR(tfm); 214 213 goto error_free_key; 214 + } 215 215 216 216 if (pkey->key_is_private) 217 217 ret = crypto_akcipher_set_priv_key(tfm, key, pkey->keylen); ··· 304 300 305 301 if (issig) { 306 302 sig = crypto_alloc_sig(alg_name, 0, 0); 307 - if (IS_ERR(sig)) 303 + if (IS_ERR(sig)) { 304 + ret = PTR_ERR(sig); 308 305 goto error_free_key; 306 + } 309 307 310 308 if (pkey->key_is_private) 311 309 ret = crypto_sig_set_privkey(sig, key, pkey->keylen); ··· 319 313 ksz = crypto_sig_maxsize(sig); 320 314 } else { 321 315 tfm = crypto_alloc_akcipher(alg_name, 0, 0); 322 - if (IS_ERR(tfm)) 316 + if (IS_ERR(tfm)) { 317 + ret = PTR_ERR(tfm); 323 318 goto error_free_key; 319 + } 324 320 325 321 if (pkey->key_is_private) 326 322 ret = crypto_akcipher_set_priv_key(tfm, key, pkey->keylen); ··· 419 411 420 412 key = kmalloc(pkey->keylen + sizeof(u32) * 2 + pkey->paramlen, 421 413 GFP_KERNEL); 422 - if (!key) 414 + if (!key) { 415 + ret = -ENOMEM; 423 416 goto error_free_tfm; 417 + } 424 418 425 419 memcpy(key, pkey->key, pkey->keylen); 426 420 ptr = key + pkey->keylen;
+1
drivers/accel/ivpu/ivpu_drv.h
··· 75 75 bool punit_disabled; 76 76 bool clear_runtime_mem; 77 77 bool d3hot_after_power_off; 78 + bool interrupt_clear_with_0; 78 79 }; 79 80 80 81 struct ivpu_hw_info;
+13 -7
drivers/accel/ivpu/ivpu_hw_mtl.c
··· 101 101 vdev->wa.punit_disabled = ivpu_is_fpga(vdev); 102 102 vdev->wa.clear_runtime_mem = false; 103 103 vdev->wa.d3hot_after_power_off = true; 104 + 105 + if (ivpu_device_id(vdev) == PCI_DEVICE_ID_MTL && ivpu_revision(vdev) < 4) 106 + vdev->wa.interrupt_clear_with_0 = true; 104 107 } 105 108 106 109 static void ivpu_hw_timeouts_init(struct ivpu_device *vdev) ··· 888 885 REGB_WR32(MTL_BUTTRESS_GLOBAL_INT_MASK, 0x1); 889 886 REGB_WR32(MTL_BUTTRESS_LOCAL_INT_MASK, BUTTRESS_IRQ_DISABLE_MASK); 890 887 REGV_WR64(MTL_VPU_HOST_SS_ICB_ENABLE_0, 0x0ull); 891 - REGB_WR32(MTL_VPU_HOST_SS_FW_SOC_IRQ_EN, 0x0); 888 + REGV_WR32(MTL_VPU_HOST_SS_FW_SOC_IRQ_EN, 0x0); 892 889 } 893 890 894 891 static void ivpu_hw_mtl_irq_wdt_nce_handler(struct ivpu_device *vdev) ··· 976 973 schedule_recovery = true; 977 974 } 978 975 979 - /* 980 - * Clear local interrupt status by writing 0 to all bits. 981 - * This must be done after interrupts are cleared at the source. 982 - * Writing 1 triggers an interrupt, so we can't perform read update write. 983 - */ 984 - REGB_WR32(MTL_BUTTRESS_INTERRUPT_STAT, 0x0); 976 + /* This must be done after interrupts are cleared at the source. */ 977 + if (IVPU_WA(interrupt_clear_with_0)) 978 + /* 979 + * Writing 1 triggers an interrupt, so we can't perform read update write. 980 + * Clear local interrupt status by writing 0 to all bits. 981 + */ 982 + REGB_WR32(MTL_BUTTRESS_INTERRUPT_STAT, 0x0); 983 + else 984 + REGB_WR32(MTL_BUTTRESS_INTERRUPT_STAT, status); 985 985 986 986 /* Re-enable global interrupt */ 987 987 REGB_WR32(MTL_BUTTRESS_GLOBAL_INT_MASK, 0x0);
+1 -1
drivers/base/regmap/regmap-irq.c
··· 717 717 if (!d->config_buf) 718 718 goto err_alloc; 719 719 720 - for (i = 0; i < chip->num_config_regs; i++) { 720 + for (i = 0; i < chip->num_config_bases; i++) { 721 721 d->config_buf[i] = kcalloc(chip->num_config_regs, 722 722 sizeof(**d->config_buf), 723 723 GFP_KERNEL);
+5 -11
drivers/block/null_blk/zoned.c
··· 162 162 disk_set_zoned(nullb->disk, BLK_ZONED_HM); 163 163 blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q); 164 164 blk_queue_required_elevator_features(q, ELEVATOR_F_ZBD_SEQ_WRITE); 165 - 166 - if (queue_is_mq(q)) { 167 - int ret = blk_revalidate_disk_zones(nullb->disk, NULL); 168 - 169 - if (ret) 170 - return ret; 171 - } else { 172 - blk_queue_chunk_sectors(q, dev->zone_size_sects); 173 - nullb->disk->nr_zones = bdev_nr_zones(nullb->disk->part0); 174 - } 175 - 165 + blk_queue_chunk_sectors(q, dev->zone_size_sects); 166 + nullb->disk->nr_zones = bdev_nr_zones(nullb->disk->part0); 176 167 blk_queue_max_zone_append_sectors(q, dev->zone_size_sects); 177 168 disk_set_max_open_zones(nullb->disk, dev->zone_max_open); 178 169 disk_set_max_active_zones(nullb->disk, dev->zone_max_active); 170 + 171 + if (queue_is_mq(q)) 172 + return blk_revalidate_disk_zones(nullb->disk, NULL); 179 173 180 174 return 0; 181 175 }
+15 -19
drivers/block/virtio_blk.c
··· 751 751 { 752 752 u32 v, wg; 753 753 u8 model; 754 - int ret; 755 754 756 755 virtio_cread(vdev, struct virtio_blk_config, 757 756 zoned.model, &model); ··· 805 806 vblk->zone_sectors); 806 807 return -ENODEV; 807 808 } 809 + blk_queue_chunk_sectors(q, vblk->zone_sectors); 808 810 dev_dbg(&vdev->dev, "zone sectors = %u\n", vblk->zone_sectors); 809 811 810 812 if (virtio_has_feature(vdev, VIRTIO_BLK_F_DISCARD)) { ··· 814 814 blk_queue_max_discard_sectors(q, 0); 815 815 } 816 816 817 - ret = blk_revalidate_disk_zones(vblk->disk, NULL); 818 - if (!ret) { 819 - virtio_cread(vdev, struct virtio_blk_config, 820 - zoned.max_append_sectors, &v); 821 - if (!v) { 822 - dev_warn(&vdev->dev, "zero max_append_sectors reported\n"); 823 - return -ENODEV; 824 - } 825 - if ((v << SECTOR_SHIFT) < wg) { 826 - dev_err(&vdev->dev, 827 - "write granularity %u exceeds max_append_sectors %u limit\n", 828 - wg, v); 829 - return -ENODEV; 830 - } 831 - 832 - blk_queue_max_zone_append_sectors(q, v); 833 - dev_dbg(&vdev->dev, "max append sectors = %u\n", v); 817 + virtio_cread(vdev, struct virtio_blk_config, 818 + zoned.max_append_sectors, &v); 819 + if (!v) { 820 + dev_warn(&vdev->dev, "zero max_append_sectors reported\n"); 821 + return -ENODEV; 834 822 } 823 + if ((v << SECTOR_SHIFT) < wg) { 824 + dev_err(&vdev->dev, 825 + "write granularity %u exceeds max_append_sectors %u limit\n", 826 + wg, v); 827 + return -ENODEV; 828 + } 829 + blk_queue_max_zone_append_sectors(q, v); 830 + dev_dbg(&vdev->dev, "max append sectors = %u\n", v); 835 831 836 - return ret; 832 + return blk_revalidate_disk_zones(vblk->disk, NULL); 837 833 } 838 834 839 835 #else
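The virtio-blk reordering moves the sanity checks on the reported append limit *before* zone revalidation, since blk_revalidate_disk_zones() now requires the limit to be set. The check itself can be sketched as (helper name illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

#define SECTOR_SHIFT 9

/* Sketch of the virtio-blk checks done before revalidating zones:
 * a zero max_append_sectors, or one whose byte size is below the
 * write granularity, rejects the device with -ENODEV. */
static bool max_append_ok(uint32_t max_append_sectors, uint32_t wg_bytes)
{
	if (max_append_sectors == 0)
		return false;
	return ((uint64_t)max_append_sectors << SECTOR_SHIFT) >= wg_bytes;
}
```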
+1 -1
drivers/cpufreq/sparc-us2e-cpufreq.c
··· 269 269 return smp_call_function_single(cpu, __us2e_freq_target, &index, 1); 270 270 } 271 271 272 - static int __init us2e_freq_cpu_init(struct cpufreq_policy *policy) 272 + static int us2e_freq_cpu_init(struct cpufreq_policy *policy) 273 273 { 274 274 unsigned int cpu = policy->cpu; 275 275 unsigned long clock_tick = sparc64_get_clock_tick(cpu) / 1000;
+1 -1
drivers/cpufreq/sparc-us3-cpufreq.c
··· 117 117 return smp_call_function_single(cpu, update_safari_cfg, &new_bits, 1); 118 118 } 119 119 120 - static int __init us3_freq_cpu_init(struct cpufreq_policy *policy) 120 + static int us3_freq_cpu_init(struct cpufreq_policy *policy) 121 121 { 122 122 unsigned int cpu = policy->cpu; 123 123 unsigned long clock_tick = sparc64_get_clock_tick(cpu) / 1000;
+22 -4
drivers/dma-buf/dma-fence-unwrap.c
··· 66 66 { 67 67 struct dma_fence_array *result; 68 68 struct dma_fence *tmp, **array; 69 + ktime_t timestamp; 69 70 unsigned int i; 70 71 size_t count; 71 72 72 73 count = 0; 74 + timestamp = ns_to_ktime(0); 73 75 for (i = 0; i < num_fences; ++i) { 74 - dma_fence_unwrap_for_each(tmp, &iter[i], fences[i]) 75 - if (!dma_fence_is_signaled(tmp)) 76 + dma_fence_unwrap_for_each(tmp, &iter[i], fences[i]) { 77 + if (!dma_fence_is_signaled(tmp)) { 76 78 ++count; 79 + } else if (test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT, 80 + &tmp->flags)) { 81 + if (ktime_after(tmp->timestamp, timestamp)) 82 + timestamp = tmp->timestamp; 83 + } else { 84 + /* 85 + * Use the current time if the fence is 86 + * currently signaling. 87 + */ 88 + timestamp = ktime_get(); 89 + } 90 + } 77 91 } 78 92 93 + /* 94 + * If we couldn't find a pending fence just return a private signaled 95 + * fence with the timestamp of the last signaled one. 96 + */ 79 97 if (count == 0) 80 - return dma_fence_get_stub(); 98 + return dma_fence_allocate_private_stub(timestamp); 81 99 82 100 array = kmalloc_array(count, sizeof(*array), GFP_KERNEL); 83 101 if (!array) ··· 156 138 } while (tmp); 157 139 158 140 if (count == 0) { 159 - tmp = dma_fence_get_stub(); 141 + tmp = dma_fence_allocate_private_stub(ktime_get()); 160 142 goto return_tmp; 161 143 } 162 144
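When every input fence is already signaled, the merge now returns a private stub fence carrying the most recent signaling timestamp seen, rather than the shared stub. The selection logic can be sketched with plain integers standing in for ktime_t:

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the timestamp selection added to dma_fence_unwrap_merge():
 * starting from ns_to_ktime(0), keep the latest timestamp among the
 * signaled fences; that value seeds the private stub fence when no
 * pending fence remains. int64_t stands in for ktime_t. */
static int64_t latest_timestamp(const int64_t *ts, size_t n)
{
	int64_t latest = 0;	/* ns_to_ktime(0) equivalent */
	size_t i;

	for (i = 0; i < n; i++)
		if (ts[i] > latest)
			latest = ts[i];
	return latest;
}
```

A fence still mid-signaling (timestamp bit not yet set) falls back to the current time in the real code, which this sketch omits.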
+4 -3
drivers/dma-buf/dma-fence.c
··· 150 150 151 151 /** 152 152 * dma_fence_allocate_private_stub - return a private, signaled fence 153 + * @timestamp: timestamp when the fence was signaled 153 154 * 154 155 * Return a newly allocated and signaled stub fence. 155 156 */ 156 - struct dma_fence *dma_fence_allocate_private_stub(void) 157 + struct dma_fence *dma_fence_allocate_private_stub(ktime_t timestamp) 157 158 { 158 159 struct dma_fence *fence; 159 160 160 161 fence = kzalloc(sizeof(*fence), GFP_KERNEL); 161 162 if (fence == NULL) 162 - return ERR_PTR(-ENOMEM); 163 + return NULL; 163 164 164 165 dma_fence_init(fence, 165 166 &dma_fence_stub_ops, ··· 170 169 set_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT, 171 170 &fence->flags); 172 171 173 - dma_fence_signal(fence); 172 + dma_fence_signal_timestamp(fence, timestamp); 174 173 175 174 return fence; 176 175 }
+1
drivers/gpu/drm/amd/amdgpu/amdgpu.h
··· 1296 1296 void amdgpu_device_pci_config_reset(struct amdgpu_device *adev); 1297 1297 int amdgpu_device_pci_reset(struct amdgpu_device *adev); 1298 1298 bool amdgpu_device_need_post(struct amdgpu_device *adev); 1299 + bool amdgpu_device_pcie_dynamic_switching_supported(void); 1299 1300 bool amdgpu_device_should_use_aspm(struct amdgpu_device *adev); 1300 1301 bool amdgpu_device_aspm_support_quirk(void); 1301 1302
+3
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
··· 2881 2881 if (!attachment->is_mapped) 2882 2882 continue; 2883 2883 2884 + if (attachment->bo_va->base.bo->tbo.pin_count) 2885 + continue; 2886 + 2884 2887 kfd_mem_dmaunmap_attachment(mem, attachment); 2885 2888 ret = update_gpuvm_pte(mem, attachment, &sync_obj); 2886 2889 if (ret) {
+19
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 1458 1458 return true; 1459 1459 } 1460 1460 1461 + /* 1462 + * Intel hosts such as Raptor Lake and Sapphire Rapids don't support dynamic 1463 + * speed switching. Until we have confirmation from Intel that a specific host 1464 + * supports it, it's safer that we keep it disabled for all. 1465 + * 1466 + * https://edc.intel.com/content/www/us/en/design/products/platforms/details/raptor-lake-s/13th-generation-core-processors-datasheet-volume-1-of-2/005/pci-express-support/ 1467 + * https://gitlab.freedesktop.org/drm/amd/-/issues/2663 1468 + */ 1469 + bool amdgpu_device_pcie_dynamic_switching_supported(void) 1470 + { 1471 + #if IS_ENABLED(CONFIG_X86) 1472 + struct cpuinfo_x86 *c = &cpu_data(0); 1473 + 1474 + if (c->x86_vendor == X86_VENDOR_INTEL) 1475 + return false; 1476 + #endif 1477 + return true; 1478 + } 1479 + 1461 1480 /** 1462 1481 * amdgpu_device_should_use_aspm - check if the device should program ASPM 1463 1482 *
+4
drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
··· 295 295 uint32_t *size, 296 296 uint32_t pptable_id); 297 297 298 + int smu_v13_0_update_pcie_parameters(struct smu_context *smu, 299 + uint32_t pcie_gen_cap, 300 + uint32_t pcie_width_cap); 301 + 298 302 #endif 299 303 #endif
+1 -1
drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
··· 2113 2113 } 2114 2114 mutex_lock(&adev->pm.mutex); 2115 2115 r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true); 2116 - mutex_unlock(&adev->pm.mutex); 2117 2116 if (r) 2118 2117 goto fail; 2119 2118 ··· 2129 2130 } 2130 2131 r = num_msgs; 2131 2132 fail: 2133 + mutex_unlock(&adev->pm.mutex); 2132 2134 kfree(req); 2133 2135 return r; 2134 2136 }
+1 -1
drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
··· 3021 3021 } 3022 3022 mutex_lock(&adev->pm.mutex); 3023 3023 r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true); 3024 - mutex_unlock(&adev->pm.mutex); 3025 3024 if (r) 3026 3025 goto fail; 3027 3026 ··· 3037 3038 } 3038 3039 r = num_msgs; 3039 3040 fail: 3041 + mutex_unlock(&adev->pm.mutex); 3040 3042 kfree(req); 3041 3043 return r; 3042 3044 }
+19 -72
drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
··· 2077 2077 return ret; 2078 2078 } 2079 2079 2080 - static void sienna_cichlid_get_override_pcie_settings(struct smu_context *smu, 2081 - uint32_t *gen_speed_override, 2082 - uint32_t *lane_width_override) 2083 - { 2084 - struct amdgpu_device *adev = smu->adev; 2085 - 2086 - *gen_speed_override = 0xff; 2087 - *lane_width_override = 0xff; 2088 - 2089 - switch (adev->pdev->device) { 2090 - case 0x73A0: 2091 - case 0x73A1: 2092 - case 0x73A2: 2093 - case 0x73A3: 2094 - case 0x73AB: 2095 - case 0x73AE: 2096 - /* Bit 7:0: PCIE lane width, 1 to 7 corresponds is x1 to x32 */ 2097 - *lane_width_override = 6; 2098 - break; 2099 - case 0x73E0: 2100 - case 0x73E1: 2101 - case 0x73E3: 2102 - *lane_width_override = 4; 2103 - break; 2104 - case 0x7420: 2105 - case 0x7421: 2106 - case 0x7422: 2107 - case 0x7423: 2108 - case 0x7424: 2109 - *lane_width_override = 3; 2110 - break; 2111 - default: 2112 - break; 2113 - } 2114 - } 2115 - 2116 - #define MAX(a, b) ((a) > (b) ? (a) : (b)) 2117 - 2118 2080 static int sienna_cichlid_update_pcie_parameters(struct smu_context *smu, 2119 2081 uint32_t pcie_gen_cap, 2120 2082 uint32_t pcie_width_cap) 2121 2083 { 2122 2084 struct smu_11_0_dpm_context *dpm_context = smu->smu_dpm.dpm_context; 2123 2085 struct smu_11_0_pcie_table *pcie_table = &dpm_context->dpm_tables.pcie_table; 2124 - uint32_t gen_speed_override, lane_width_override; 2125 - uint8_t *table_member1, *table_member2; 2126 - uint32_t min_gen_speed, max_gen_speed; 2127 - uint32_t min_lane_width, max_lane_width; 2128 - uint32_t smu_pcie_arg; 2086 + u32 smu_pcie_arg; 2129 2087 int ret, i; 2130 2088 2131 - GET_PPTABLE_MEMBER(PcieGenSpeed, &table_member1); 2132 - GET_PPTABLE_MEMBER(PcieLaneCount, &table_member2); 2089 + /* PCIE gen speed and lane width override */ 2090 + if (!amdgpu_device_pcie_dynamic_switching_supported()) { 2091 + if (pcie_table->pcie_gen[NUM_LINK_LEVELS - 1] < pcie_gen_cap) 2092 + pcie_gen_cap = pcie_table->pcie_gen[NUM_LINK_LEVELS - 1]; 2133 2093 2134 - 
sienna_cichlid_get_override_pcie_settings(smu, 2135 - &gen_speed_override, 2136 - &lane_width_override); 2094 + if (pcie_table->pcie_lane[NUM_LINK_LEVELS - 1] < pcie_width_cap) 2095 + pcie_width_cap = pcie_table->pcie_lane[NUM_LINK_LEVELS - 1]; 2137 2096 2138 - /* PCIE gen speed override */ 2139 - if (gen_speed_override != 0xff) { 2140 - min_gen_speed = MIN(pcie_gen_cap, gen_speed_override); 2141 - max_gen_speed = MIN(pcie_gen_cap, gen_speed_override); 2097 + /* Force all levels to use the same settings */ 2098 + for (i = 0; i < NUM_LINK_LEVELS; i++) { 2099 + pcie_table->pcie_gen[i] = pcie_gen_cap; 2100 + pcie_table->pcie_lane[i] = pcie_width_cap; 2101 + } 2142 2102 } else { 2143 - min_gen_speed = MAX(0, table_member1[0]); 2144 - max_gen_speed = MIN(pcie_gen_cap, table_member1[1]); 2145 - min_gen_speed = min_gen_speed > max_gen_speed ? 2146 - max_gen_speed : min_gen_speed; 2103 + for (i = 0; i < NUM_LINK_LEVELS; i++) { 2104 + if (pcie_table->pcie_gen[i] > pcie_gen_cap) 2105 + pcie_table->pcie_gen[i] = pcie_gen_cap; 2106 + if (pcie_table->pcie_lane[i] > pcie_width_cap) 2107 + pcie_table->pcie_lane[i] = pcie_width_cap; 2108 + } 2147 2109 } 2148 - pcie_table->pcie_gen[0] = min_gen_speed; 2149 - pcie_table->pcie_gen[1] = max_gen_speed; 2150 - 2151 - /* PCIE lane width override */ 2152 - if (lane_width_override != 0xff) { 2153 - min_lane_width = MIN(pcie_width_cap, lane_width_override); 2154 - max_lane_width = MIN(pcie_width_cap, lane_width_override); 2155 - } else { 2156 - min_lane_width = MAX(1, table_member2[0]); 2157 - max_lane_width = MIN(pcie_width_cap, table_member2[1]); 2158 - min_lane_width = min_lane_width > max_lane_width ? 
2159 - max_lane_width : min_lane_width; 2160 - } 2161 - pcie_table->pcie_lane[0] = min_lane_width; 2162 - pcie_table->pcie_lane[1] = max_lane_width; 2163 2110 2164 2111 for (i = 0; i < NUM_LINK_LEVELS; i++) { 2165 2112 smu_pcie_arg = (i << 16 | ··· 3789 3842 } 3790 3843 mutex_lock(&adev->pm.mutex); 3791 3844 r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true); 3792 - mutex_unlock(&adev->pm.mutex); 3793 3845 if (r) 3794 3846 goto fail; 3795 3847 ··· 3805 3859 } 3806 3860 r = num_msgs; 3807 3861 fail: 3862 + mutex_unlock(&adev->pm.mutex); 3808 3863 kfree(req); 3809 3864 return r; 3810 3865 }
+1 -1
drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
··· 1525 1525 } 1526 1526 mutex_lock(&adev->pm.mutex); 1527 1527 r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true); 1528 - mutex_unlock(&adev->pm.mutex); 1529 1528 if (r) 1530 1529 goto fail; 1531 1530 ··· 1541 1542 } 1542 1543 r = num_msgs; 1543 1544 fail: 1545 + mutex_unlock(&adev->pm.mutex); 1544 1546 kfree(req); 1545 1547 return r; 1546 1548 }
+48
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
··· 2424 2424 2425 2425 return ret; 2426 2426 } 2427 + 2428 + int smu_v13_0_update_pcie_parameters(struct smu_context *smu, 2429 + uint32_t pcie_gen_cap, 2430 + uint32_t pcie_width_cap) 2431 + { 2432 + struct smu_13_0_dpm_context *dpm_context = smu->smu_dpm.dpm_context; 2433 + struct smu_13_0_pcie_table *pcie_table = 2434 + &dpm_context->dpm_tables.pcie_table; 2435 + int num_of_levels = pcie_table->num_of_link_levels; 2436 + uint32_t smu_pcie_arg; 2437 + int ret, i; 2438 + 2439 + if (!amdgpu_device_pcie_dynamic_switching_supported()) { 2440 + if (pcie_table->pcie_gen[num_of_levels - 1] < pcie_gen_cap) 2441 + pcie_gen_cap = pcie_table->pcie_gen[num_of_levels - 1]; 2442 + 2443 + if (pcie_table->pcie_lane[num_of_levels - 1] < pcie_width_cap) 2444 + pcie_width_cap = pcie_table->pcie_lane[num_of_levels - 1]; 2445 + 2446 + /* Force all levels to use the same settings */ 2447 + for (i = 0; i < num_of_levels; i++) { 2448 + pcie_table->pcie_gen[i] = pcie_gen_cap; 2449 + pcie_table->pcie_lane[i] = pcie_width_cap; 2450 + } 2451 + } else { 2452 + for (i = 0; i < num_of_levels; i++) { 2453 + if (pcie_table->pcie_gen[i] > pcie_gen_cap) 2454 + pcie_table->pcie_gen[i] = pcie_gen_cap; 2455 + if (pcie_table->pcie_lane[i] > pcie_width_cap) 2456 + pcie_table->pcie_lane[i] = pcie_width_cap; 2457 + } 2458 + } 2459 + 2460 + for (i = 0; i < num_of_levels; i++) { 2461 + smu_pcie_arg = i << 16; 2462 + smu_pcie_arg |= pcie_table->pcie_gen[i] << 8; 2463 + smu_pcie_arg |= pcie_table->pcie_lane[i]; 2464 + 2465 + ret = smu_cmn_send_smc_msg_with_param(smu, 2466 + SMU_MSG_OverridePcieParameters, 2467 + smu_pcie_arg, 2468 + NULL); 2469 + if (ret) 2470 + return ret; 2471 + } 2472 + 2473 + return 0; 2474 + }
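The shared clamp logic in smu_v13_0_update_pcie_parameters() has two modes: with dynamic speed switching unsupported, every link level is forced to one capped gen/width pair (taken from the top level if it is lower than the cap); otherwise each level is clamped individually. A userspace sketch of just the table adjustment:

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the per-level PCIe clamping in
 * smu_v13_0_update_pcie_parameters(); the register-write loop that
 * follows it in the kernel is omitted. */
static void clamp_pcie(uint32_t *gen, uint32_t *lane, int n,
		       uint32_t gen_cap, uint32_t lane_cap, bool dynamic)
{
	int i;

	if (!dynamic) {
		/* Lower the caps to the top level's values if needed. */
		if (gen[n - 1] < gen_cap)
			gen_cap = gen[n - 1];
		if (lane[n - 1] < lane_cap)
			lane_cap = lane[n - 1];
		/* Force all levels to the same settings. */
		for (i = 0; i < n; i++) {
			gen[i] = gen_cap;
			lane[i] = lane_cap;
		}
	} else {
		for (i = 0; i < n; i++) {
			if (gen[i] > gen_cap)
				gen[i] = gen_cap;
			if (lane[i] > lane_cap)
				lane[i] = lane_cap;
		}
	}
}
```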
+2 -33
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
··· 1645 1645 return ret; 1646 1646 } 1647 1647 1648 - static int smu_v13_0_0_update_pcie_parameters(struct smu_context *smu, 1649 - uint32_t pcie_gen_cap, 1650 - uint32_t pcie_width_cap) 1651 - { 1652 - struct smu_13_0_dpm_context *dpm_context = smu->smu_dpm.dpm_context; 1653 - struct smu_13_0_pcie_table *pcie_table = 1654 - &dpm_context->dpm_tables.pcie_table; 1655 - uint32_t smu_pcie_arg; 1656 - int ret, i; 1657 - 1658 - for (i = 0; i < pcie_table->num_of_link_levels; i++) { 1659 - if (pcie_table->pcie_gen[i] > pcie_gen_cap) 1660 - pcie_table->pcie_gen[i] = pcie_gen_cap; 1661 - if (pcie_table->pcie_lane[i] > pcie_width_cap) 1662 - pcie_table->pcie_lane[i] = pcie_width_cap; 1663 - 1664 - smu_pcie_arg = i << 16; 1665 - smu_pcie_arg |= pcie_table->pcie_gen[i] << 8; 1666 - smu_pcie_arg |= pcie_table->pcie_lane[i]; 1667 - 1668 - ret = smu_cmn_send_smc_msg_with_param(smu, 1669 - SMU_MSG_OverridePcieParameters, 1670 - smu_pcie_arg, 1671 - NULL); 1672 - if (ret) 1673 - return ret; 1674 - } 1675 - 1676 - return 0; 1677 - } 1678 - 1679 1648 static const struct smu_temperature_range smu13_thermal_policy[] = { 1680 1649 {-273150, 99000, 99000, -273150, 99000, 99000, -273150, 99000, 99000}, 1681 1650 { 120000, 120000, 120000, 120000, 120000, 120000, 120000, 120000, 120000}, ··· 2289 2320 } 2290 2321 mutex_lock(&adev->pm.mutex); 2291 2322 r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true); 2292 - mutex_unlock(&adev->pm.mutex); 2293 2323 if (r) 2294 2324 goto fail; 2295 2325 ··· 2305 2337 } 2306 2338 r = num_msgs; 2307 2339 fail: 2340 + mutex_unlock(&adev->pm.mutex); 2308 2341 kfree(req); 2309 2342 return r; 2310 2343 } ··· 2623 2654 .feature_is_enabled = smu_cmn_feature_is_enabled, 2624 2655 .print_clk_levels = smu_v13_0_0_print_clk_levels, 2625 2656 .force_clk_levels = smu_v13_0_0_force_clk_levels, 2626 - .update_pcie_parameters = smu_v13_0_0_update_pcie_parameters, 2657 + .update_pcie_parameters = smu_v13_0_update_pcie_parameters, 2627 2658 
.get_thermal_temperature_range = smu_v13_0_0_get_thermal_temperature_range, 2628 2659 .register_irq_handler = smu_v13_0_register_irq_handler, 2629 2660 .enable_thermal_alert = smu_v13_0_enable_thermal_alert,
+1 -1
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
··· 1763 1763 } 1764 1764 mutex_lock(&adev->pm.mutex); 1765 1765 r = smu_v13_0_6_request_i2c_xfer(smu, req); 1766 - mutex_unlock(&adev->pm.mutex); 1767 1766 if (r) 1768 1767 goto fail; 1769 1768 ··· 1779 1780 } 1780 1781 r = num_msgs; 1781 1782 fail: 1783 + mutex_unlock(&adev->pm.mutex); 1782 1784 kfree(req); 1783 1785 return r; 1784 1786 }
+1 -32
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
··· 1635 1635 return ret; 1636 1636 } 1637 1637 1638 - static int smu_v13_0_7_update_pcie_parameters(struct smu_context *smu, 1639 - uint32_t pcie_gen_cap, 1640 - uint32_t pcie_width_cap) 1641 - { 1642 - struct smu_13_0_dpm_context *dpm_context = smu->smu_dpm.dpm_context; 1643 - struct smu_13_0_pcie_table *pcie_table = 1644 - &dpm_context->dpm_tables.pcie_table; 1645 - uint32_t smu_pcie_arg; 1646 - int ret, i; 1647 - 1648 - for (i = 0; i < pcie_table->num_of_link_levels; i++) { 1649 - if (pcie_table->pcie_gen[i] > pcie_gen_cap) 1650 - pcie_table->pcie_gen[i] = pcie_gen_cap; 1651 - if (pcie_table->pcie_lane[i] > pcie_width_cap) 1652 - pcie_table->pcie_lane[i] = pcie_width_cap; 1653 - 1654 - smu_pcie_arg = i << 16; 1655 - smu_pcie_arg |= pcie_table->pcie_gen[i] << 8; 1656 - smu_pcie_arg |= pcie_table->pcie_lane[i]; 1657 - 1658 - ret = smu_cmn_send_smc_msg_with_param(smu, 1659 - SMU_MSG_OverridePcieParameters, 1660 - smu_pcie_arg, 1661 - NULL); 1662 - if (ret) 1663 - return ret; 1664 - } 1665 - 1666 - return 0; 1667 - } 1668 - 1669 1638 static const struct smu_temperature_range smu13_thermal_policy[] = 1670 1639 { 1671 1640 {-273150, 99000, 99000, -273150, 99000, 99000, -273150, 99000, 99000}, ··· 2203 2234 .feature_is_enabled = smu_cmn_feature_is_enabled, 2204 2235 .print_clk_levels = smu_v13_0_7_print_clk_levels, 2205 2236 .force_clk_levels = smu_v13_0_7_force_clk_levels, 2206 - .update_pcie_parameters = smu_v13_0_7_update_pcie_parameters, 2237 + .update_pcie_parameters = smu_v13_0_update_pcie_parameters, 2207 2238 .get_thermal_temperature_range = smu_v13_0_7_get_thermal_temperature_range, 2208 2239 .register_irq_handler = smu_v13_0_register_irq_handler, 2209 2240 .enable_thermal_alert = smu_v13_0_enable_thermal_alert,
-4
drivers/gpu/drm/armada/armada_fbdev.c
··· 209 209 goto err_drm_client_init; 210 210 } 211 211 212 - ret = armada_fbdev_client_hotplug(&fbh->client); 213 - if (ret) 214 - drm_dbg_kms(dev, "client hotplug ret=%d\n", ret); 215 - 216 212 drm_client_register(&fbh->client); 217 213 218 214 return;
+5 -4
drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
··· 1426 1426 /* Control for TMDS Bit Period/TMDS Clock-Period Ratio */ 1427 1427 if (dw_hdmi_support_scdc(hdmi, display)) { 1428 1428 if (mtmdsclock > HDMI14_MAX_TMDSCLK) 1429 - drm_scdc_set_high_tmds_clock_ratio(&hdmi->connector, 1); 1429 + drm_scdc_set_high_tmds_clock_ratio(hdmi->curr_conn, 1); 1430 1430 else 1431 - drm_scdc_set_high_tmds_clock_ratio(&hdmi->connector, 0); 1431 + drm_scdc_set_high_tmds_clock_ratio(hdmi->curr_conn, 0); 1432 1432 } 1433 1433 } 1434 1434 EXPORT_SYMBOL_GPL(dw_hdmi_set_high_tmds_clock_ratio); ··· 2116 2116 min_t(u8, bytes, SCDC_MIN_SOURCE_VERSION)); 2117 2117 2118 2118 /* Enabled Scrambling in the Sink */ 2119 - drm_scdc_set_scrambling(&hdmi->connector, 1); 2119 + drm_scdc_set_scrambling(hdmi->curr_conn, 1); 2120 2120 2121 2121 /* 2122 2122 * To activate the scrambler feature, you must ensure ··· 2132 2132 hdmi_writeb(hdmi, 0, HDMI_FC_SCRAMBLER_CTRL); 2133 2133 hdmi_writeb(hdmi, (u8)~HDMI_MC_SWRSTZ_TMDSSWRST_REQ, 2134 2134 HDMI_MC_SWRSTZ); 2135 - drm_scdc_set_scrambling(&hdmi->connector, 0); 2135 + drm_scdc_set_scrambling(hdmi->curr_conn, 0); 2136 2136 } 2137 2137 } 2138 2138 ··· 3553 3553 hdmi->bridge.ops = DRM_BRIDGE_OP_DETECT | DRM_BRIDGE_OP_EDID 3554 3554 | DRM_BRIDGE_OP_HPD; 3555 3555 hdmi->bridge.interlace_allowed = true; 3556 + hdmi->bridge.ddc = hdmi->ddc; 3556 3557 #ifdef CONFIG_OF 3557 3558 hdmi->bridge.of_node = pdev->dev.of_node; 3558 3559 #endif
+22 -13
drivers/gpu/drm/bridge/ti-sn65dsi86.c
··· 170 170 * @pwm_refclk_freq: Cache for the reference clock input to the PWM. 171 171 */ 172 172 struct ti_sn65dsi86 { 173 - struct auxiliary_device bridge_aux; 174 - struct auxiliary_device gpio_aux; 175 - struct auxiliary_device aux_aux; 176 - struct auxiliary_device pwm_aux; 173 + struct auxiliary_device *bridge_aux; 174 + struct auxiliary_device *gpio_aux; 175 + struct auxiliary_device *aux_aux; 176 + struct auxiliary_device *pwm_aux; 177 177 178 178 struct device *dev; 179 179 struct regmap *regmap; ··· 468 468 auxiliary_device_delete(data); 469 469 } 470 470 471 - /* 472 - * AUX bus docs say that a non-NULL release is mandatory, but it makes no 473 - * sense for the model used here where all of the aux devices are allocated 474 - * in the single shared structure. We'll use this noop as a workaround. 475 - */ 476 - static void ti_sn65dsi86_noop(struct device *dev) {} 471 + static void ti_sn65dsi86_aux_device_release(struct device *dev) 472 + { 473 + struct auxiliary_device *aux = container_of(dev, struct auxiliary_device, dev); 474 + 475 + kfree(aux); 476 + } 477 477 478 478 static int ti_sn65dsi86_add_aux_device(struct ti_sn65dsi86 *pdata, 479 - struct auxiliary_device *aux, 479 + struct auxiliary_device **aux_out, 480 480 const char *name) 481 481 { 482 482 struct device *dev = pdata->dev; 483 + struct auxiliary_device *aux; 483 484 int ret; 485 + 486 + aux = kzalloc(sizeof(*aux), GFP_KERNEL); 487 + if (!aux) 488 + return -ENOMEM; 484 489 485 490 aux->name = name; 486 491 aux->dev.parent = dev; 487 - aux->dev.release = ti_sn65dsi86_noop; 492 + aux->dev.release = ti_sn65dsi86_aux_device_release; 488 493 device_set_of_node_from_dev(&aux->dev, dev); 489 494 ret = auxiliary_device_init(aux); 490 - if (ret) 495 + if (ret) { 496 + kfree(aux); 491 497 return ret; 498 + } 492 499 ret = devm_add_action_or_reset(dev, ti_sn65dsi86_uninit_aux, aux); 493 500 if (ret) 494 501 return ret; ··· 504 497 if (ret) 505 498 return ret; 506 499 ret = 
devm_add_action_or_reset(dev, ti_sn65dsi86_delete_aux, aux); 500 + if (!ret) 501 + *aux_out = aux; 507 502 508 503 return ret; 509 504 }
+21
drivers/gpu/drm/drm_client.c
··· 122 122 * drm_client_register() it is no longer permissible to call drm_client_release() 123 123 * directly (outside the unregister callback), instead cleanup will happen 124 124 * automatically on driver unload. 125 + * 126 + * Registering a client generates a hotplug event that allows the client 127 + * to set up its display from pre-existing outputs. The client must have 128 + * initialized its state to be able to handle the hotplug event successfully. 125 129 */ 126 130 void drm_client_register(struct drm_client_dev *client) 127 131 { 128 132 struct drm_device *dev = client->dev; 133 + int ret; 129 134 130 135 mutex_lock(&dev->clientlist_mutex); 131 136 list_add(&client->list, &dev->clientlist); 137 + 138 + if (client->funcs && client->funcs->hotplug) { 139 + /* 140 + * Perform an initial hotplug event to pick up the 141 + * display configuration for the client. This step 142 + * has to be performed *after* registering the client 143 + * in the list of clients, or a concurrent hotplug 144 + * event might be lost, leaving the display off. 145 + * 146 + * Hold the clientlist_mutex as for a regular hotplug 147 + * event. 148 + */ 149 + ret = client->funcs->hotplug(client); 150 + if (ret) 151 + drm_dbg_kms(dev, "client hotplug ret=%d\n", ret); 152 + } 132 153 mutex_unlock(&dev->clientlist_mutex); 133 154 } 134 155 EXPORT_SYMBOL(drm_client_register);
+1 -5
drivers/gpu/drm/drm_fbdev_dma.c
··· 217 217 * drm_fbdev_dma_setup() - Setup fbdev emulation for GEM DMA helpers 218 218 * @dev: DRM device 219 219 * @preferred_bpp: Preferred bits per pixel for the device. 220 - * @dev->mode_config.preferred_depth is used if this is zero. 220 + * 32 is used if this is zero. 221 221 * 222 222 * This function sets up fbdev emulation for GEM DMA drivers that support 223 223 * dumb buffers with a virtual address and that can be mmap'ed. ··· 251 251 drm_err(dev, "Failed to register client: %d\n", ret); 252 252 goto err_drm_client_init; 253 253 } 254 - 255 - ret = drm_fbdev_dma_client_hotplug(&fb_helper->client); 256 - if (ret) 257 - drm_dbg_kms(dev, "client hotplug ret=%d\n", ret); 258 254 259 255 drm_client_register(&fb_helper->client); 260 256
-4
drivers/gpu/drm/drm_fbdev_generic.c
··· 339 339 goto err_drm_client_init; 340 340 } 341 341 342 - ret = drm_fbdev_generic_client_hotplug(&fb_helper->client); 343 - if (ret) 344 - drm_dbg_kms(dev, "client hotplug ret=%d\n", ret); 345 - 346 342 drm_client_register(&fb_helper->client); 347 343 348 344 return;
+3 -3
drivers/gpu/drm/drm_syncobj.c
··· 353 353 */ 354 354 static int drm_syncobj_assign_null_handle(struct drm_syncobj *syncobj) 355 355 { 356 - struct dma_fence *fence = dma_fence_allocate_private_stub(); 356 + struct dma_fence *fence = dma_fence_allocate_private_stub(ktime_get()); 357 357 358 - if (IS_ERR(fence)) 359 - return PTR_ERR(fence); 358 + if (!fence) 359 + return -ENOMEM; 360 360 361 361 drm_syncobj_replace_fence(syncobj, fence); 362 362 dma_fence_put(fence);
-4
drivers/gpu/drm/exynos/exynos_drm_fbdev.c
··· 215 215 if (ret) 216 216 goto err_drm_client_init; 217 217 218 - ret = exynos_drm_fbdev_client_hotplug(&fb_helper->client); 219 - if (ret) 220 - drm_dbg_kms(dev, "client hotplug ret=%d\n", ret); 221 - 222 218 drm_client_register(&fb_helper->client); 223 219 224 220 return;
-4
drivers/gpu/drm/gma500/fbdev.c
··· 328 328 goto err_drm_fb_helper_unprepare; 329 329 } 330 330 331 - ret = psb_fbdev_client_hotplug(&fb_helper->client); 332 - if (ret) 333 - drm_dbg_kms(dev, "client hotplug ret=%d\n", ret); 334 - 335 331 drm_client_register(&fb_helper->client); 336 332 337 333 return;
-1
drivers/gpu/drm/i915/display/intel_display.c
··· 4564 4564 saved_state->uapi = slave_crtc_state->uapi; 4565 4565 saved_state->scaler_state = slave_crtc_state->scaler_state; 4566 4566 saved_state->shared_dpll = slave_crtc_state->shared_dpll; 4567 - saved_state->dpll_hw_state = slave_crtc_state->dpll_hw_state; 4568 4567 saved_state->crc_enabled = slave_crtc_state->crc_enabled; 4569 4568 4570 4569 intel_crtc_free_hw_state(slave_crtc_state);
-3
drivers/gpu/drm/i915/gt/gen8_ppgtt.c
··· 37 37 if (unlikely(flags & PTE_READ_ONLY)) 38 38 pte &= ~GEN8_PAGE_RW; 39 39 40 - if (flags & PTE_LM) 41 - pte |= GEN12_PPGTT_PTE_LM; 42 - 43 40 /* 44 41 * For pre-gen12 platforms pat_index is the same as enum 45 42 * i915_cache_level, so the switch-case here is still valid.
+1 -1
drivers/gpu/drm/i915/gt/intel_gtt.c
··· 670 670 if (IS_ERR(obj)) 671 671 return ERR_CAST(obj); 672 672 673 - i915_gem_object_set_cache_coherency(obj, I915_CACHING_CACHED); 673 + i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC); 674 674 675 675 vma = i915_vma_instance(obj, vm, NULL); 676 676 if (IS_ERR(vma)) {
+10 -1
drivers/gpu/drm/i915/i915_perf.c
··· 868 868 oa_report_id_clear(stream, report32); 869 869 oa_timestamp_clear(stream, report32); 870 870 } else { 871 + u8 *oa_buf_end = stream->oa_buffer.vaddr + 872 + OA_BUFFER_SIZE; 873 + u32 part = oa_buf_end - (u8 *)report32; 874 + 871 875 /* Zero out the entire report */ 872 - memset(report32, 0, report_size); 876 + if (report_size <= part) { 877 + memset(report32, 0, report_size); 878 + } else { 879 + memset(report32, 0, part); 880 + memset(oa_buf_base, 0, report_size - part); 881 + } 873 882 } 874 883 } 875 884
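The i915_perf hunk above clears a report that may straddle the end of the OA buffer by splitting the memset into a tail part and a wrapped head part. The same wrap-aware clear, reduced to a self-contained sketch (buffer layout and names are hypothetical, not the driver's types):

```c
#include <stdint.h>
#include <string.h>

/* Zero `len` bytes starting at offset `off` in a circular buffer of
 * `size` bytes. If the span crosses the end of the buffer, split the
 * clear into the tail segment and the wrapped head segment. */
static void ring_clear(uint8_t *buf, size_t off, size_t len, size_t size)
{
    size_t part = size - off;   /* bytes left until the buffer end */

    if (len <= part) {
        memset(buf + off, 0, len);
    } else {
        memset(buf + off, 0, part);   /* clear up to the end */
        memset(buf, 0, len - part);   /* clear the wrapped remainder */
    }
}
```

The driver's version computes `part` from the report pointer and the OA buffer bounds; the shape of the two-part memset is the same.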
-4
drivers/gpu/drm/msm/msm_fbdev.c
··· 246 246 goto err_drm_fb_helper_unprepare; 247 247 } 248 248 249 - ret = msm_fbdev_client_hotplug(&helper->client); 250 - if (ret) 251 - drm_dbg_kms(dev, "client hotplug ret=%d\n", ret); 252 - 253 249 drm_client_register(&helper->client); 254 250 255 251 return;
+6 -2
drivers/gpu/drm/nouveau/dispnv50/disp.c
··· 910 910 struct nouveau_drm *drm = nouveau_drm(msto->encoder.dev); 911 911 struct nv50_mstc *mstc = msto->mstc; 912 912 struct nv50_mstm *mstm = mstc->mstm; 913 - struct drm_dp_mst_atomic_payload *payload; 913 + struct drm_dp_mst_topology_state *old_mst_state; 914 + struct drm_dp_mst_atomic_payload *payload, *old_payload; 914 915 915 916 NV_ATOMIC(drm, "%s: msto prepare\n", msto->encoder.name); 916 917 918 + old_mst_state = drm_atomic_get_old_mst_topology_state(state, mgr); 919 + 917 920 payload = drm_atomic_get_mst_payload_state(mst_state, mstc->port); 921 + old_payload = drm_atomic_get_mst_payload_state(old_mst_state, mstc->port); 918 922 919 923 // TODO: Figure out if we want to do a better job of handling VCPI allocation failures here? 920 924 if (msto->disabled) { 921 - drm_dp_remove_payload(mgr, mst_state, payload, payload); 925 + drm_dp_remove_payload(mgr, mst_state, old_payload, payload); 922 926 923 927 nvif_outp_dp_mst_vcpi(&mstm->outp->outp, msto->head->base.index, 0, 0, 0, 0); 924 928 } else {
+1
drivers/gpu/drm/nouveau/nouveau_chan.c
··· 90 90 if (cli) 91 91 nouveau_svmm_part(chan->vmm->svmm, chan->inst); 92 92 93 + nvif_object_dtor(&chan->blit); 93 94 nvif_object_dtor(&chan->nvsw); 94 95 nvif_object_dtor(&chan->gart); 95 96 nvif_object_dtor(&chan->vram);
+1
drivers/gpu/drm/nouveau/nouveau_chan.h
··· 53 53 u32 user_put; 54 54 55 55 struct nvif_object user; 56 + struct nvif_object blit; 56 57 57 58 struct nvif_event kill; 58 59 atomic_t killed;
+17 -3
drivers/gpu/drm/nouveau/nouveau_drm.c
··· 375 375 ret = nvif_object_ctor(&drm->channel->user, "drmNvsw", 376 376 NVDRM_NVSW, nouveau_abi16_swclass(drm), 377 377 NULL, 0, &drm->channel->nvsw); 378 + 379 + if (ret == 0 && device->info.chipset >= 0x11) { 380 + ret = nvif_object_ctor(&drm->channel->user, "drmBlit", 381 + 0x005f, 0x009f, 382 + NULL, 0, &drm->channel->blit); 383 + } 384 + 378 385 if (ret == 0) { 379 386 struct nvif_push *push = drm->channel->chan.push; 380 - ret = PUSH_WAIT(push, 2); 381 - if (ret == 0) 387 + ret = PUSH_WAIT(push, 8); 388 + if (ret == 0) { 389 + if (device->info.chipset >= 0x11) { 390 + PUSH_NVSQ(push, NV05F, 0x0000, drm->channel->blit.handle); 391 + PUSH_NVSQ(push, NV09F, 0x0120, 0, 392 + 0x0124, 1, 393 + 0x0128, 2); 394 + } 382 395 PUSH_NVSQ(push, NV_SW, 0x0000, drm->channel->nvsw.handle); 396 + } 383 397 } 384 398 385 399 if (ret) { 386 - NV_ERROR(drm, "failed to allocate sw class, %d\n", ret); 400 + NV_ERROR(drm, "failed to allocate sw or blit class, %d\n", ret); 387 401 nouveau_accel_gr_fini(drm); 388 402 return; 389 403 }
+1
drivers/gpu/drm/nouveau/nvkm/engine/disp/g94.c
··· 295 295 .clock = nv50_sor_clock, 296 296 .war_2 = g94_sor_war_2, 297 297 .war_3 = g94_sor_war_3, 298 + .hdmi = &g84_sor_hdmi, 298 299 .dp = &g94_sor_dp, 299 300 }; 300 301
+1 -1
drivers/gpu/drm/nouveau/nvkm/engine/disp/gt215.c
··· 125 125 pack_hdmi_infoframe(&avi, data, size); 126 126 127 127 nvkm_mask(device, 0x61c520 + soff, 0x00000001, 0x00000000); 128 - if (size) 128 + if (!size) 129 129 return; 130 130 131 131 nvkm_wr32(device, 0x61c528 + soff, avi.header);
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/acr/base.c
··· 224 224 u64 falcons; 225 225 int ret, i; 226 226 227 - if (list_empty(&acr->hsfw)) { 227 + if (list_empty(&acr->hsfw) || !acr->func || !acr->func->wpr_layout) { 228 228 nvkm_debug(subdev, "No HSFW(s)\n"); 229 229 nvkm_acr_cleanup(acr); 230 230 return 0;
-4
drivers/gpu/drm/omapdrm/omap_fbdev.c
··· 318 318 319 319 INIT_WORK(&fbdev->work, pan_worker); 320 320 321 - ret = omap_fbdev_client_hotplug(&helper->client); 322 - if (ret) 323 - drm_dbg_kms(dev, "client hotplug ret=%d\n", ret); 324 - 325 321 drm_client_register(&helper->client); 326 322 327 323 return;
+2
drivers/gpu/drm/panel/panel-simple.c
··· 2178 2178 .height = 54, 2179 2179 }, 2180 2180 .bus_format = MEDIA_BUS_FMT_RGB888_1X24, 2181 + .connector_type = DRM_MODE_CONNECTOR_DPI, 2181 2182 .bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE, 2182 2183 }; 2183 2184 ··· 3203 3202 .vsync_start = 480 + 49, 3204 3203 .vsync_end = 480 + 49 + 2, 3205 3204 .vtotal = 480 + 49 + 2 + 22, 3205 + .flags = DRM_MODE_FLAG_NVSYNC | DRM_MODE_FLAG_NHSYNC, 3206 3206 }; 3207 3207 3208 3208 static const struct panel_desc powertip_ph800480t013_idf02 = {
-4
drivers/gpu/drm/radeon/radeon_fbdev.c
··· 383 383 goto err_drm_client_init; 384 384 } 385 385 386 - ret = radeon_fbdev_client_hotplug(&fb_helper->client); 387 - if (ret) 388 - drm_dbg_kms(rdev->ddev, "client hotplug ret=%d\n", ret); 389 - 390 386 drm_client_register(&fb_helper->client); 391 387 392 388 return;
+33 -8
drivers/gpu/drm/scheduler/sched_entity.c
··· 176 176 { 177 177 struct drm_sched_job *job = container_of(cb, struct drm_sched_job, 178 178 finish_cb); 179 - int r; 179 + unsigned long index; 180 180 181 181 dma_fence_put(f); 182 182 183 183 /* Wait for all dependencies to avoid data corruptions */ 184 - while (!xa_empty(&job->dependencies)) { 185 - f = xa_erase(&job->dependencies, job->last_dependency++); 186 - r = dma_fence_add_callback(f, &job->finish_cb, 187 - drm_sched_entity_kill_jobs_cb); 188 - if (!r) 184 + xa_for_each(&job->dependencies, index, f) { 185 + struct drm_sched_fence *s_fence = to_drm_sched_fence(f); 186 + 187 + if (s_fence && f == &s_fence->scheduled) { 188 + /* The dependencies array had a reference on the scheduled 189 + * fence, and the finished fence refcount might have 190 + * dropped to zero. Use dma_fence_get_rcu() so we get 191 + * a NULL fence in that case. 192 + */ 193 + f = dma_fence_get_rcu(&s_fence->finished); 194 + 195 + /* Now that we have a reference on the finished fence, 196 + * we can release the reference the dependencies array 197 + * had on the scheduled fence. 198 + */ 199 + dma_fence_put(&s_fence->scheduled); 200 + } 201 + 202 + xa_erase(&job->dependencies, index); 203 + if (f && !dma_fence_add_callback(f, &job->finish_cb, 204 + drm_sched_entity_kill_jobs_cb)) 189 205 return; 190 206 191 207 dma_fence_put(f); ··· 431 415 drm_sched_job_dependency(struct drm_sched_job *job, 432 416 struct drm_sched_entity *entity) 433 417 { 434 - if (!xa_empty(&job->dependencies)) 435 - return xa_erase(&job->dependencies, job->last_dependency++); 418 + struct dma_fence *f; 419 + 420 + /* We keep the fence around, so we can iterate over all dependencies 421 + * in drm_sched_entity_kill_jobs_cb() to ensure all deps are signaled 422 + * before killing the job. 
423 + */ 424 + f = xa_load(&job->dependencies, job->last_dependency); 425 + if (f) { 426 + job->last_dependency++; 427 + return dma_fence_get(f); 428 + } 436 429 437 430 if (job->sched->ops->prepare_job) 438 431 return job->sched->ops->prepare_job(job, entity);
+25 -15
drivers/gpu/drm/scheduler/sched_fence.c
··· 48 48 kmem_cache_destroy(sched_fence_slab); 49 49 } 50 50 51 - void drm_sched_fence_scheduled(struct drm_sched_fence *fence) 51 + static void drm_sched_fence_set_parent(struct drm_sched_fence *s_fence, 52 + struct dma_fence *fence) 52 53 { 54 + /* 55 + * smp_store_release() to ensure another thread racing us 56 + * in drm_sched_fence_set_deadline_finished() sees the 57 + * fence's parent set before test_bit() 58 + */ 59 + smp_store_release(&s_fence->parent, dma_fence_get(fence)); 60 + if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, 61 + &s_fence->finished.flags)) 62 + dma_fence_set_deadline(fence, s_fence->deadline); 63 + } 64 + 65 + void drm_sched_fence_scheduled(struct drm_sched_fence *fence, 66 + struct dma_fence *parent) 67 + { 68 + /* Set the parent before signaling the scheduled fence, such that, 69 + * any waiter expecting the parent to be filled after the job has 70 + * been scheduled (which is the case for drivers delegating waits 71 + * to some firmware) doesn't have to busy wait for parent to show 72 + * up. 73 + */ 74 + if (!IS_ERR_OR_NULL(parent)) 75 + drm_sched_fence_set_parent(fence, parent); 76 + 53 77 dma_fence_signal(&fence->scheduled); 54 78 } 55 79 ··· 204 180 return NULL; 205 181 } 206 182 EXPORT_SYMBOL(to_drm_sched_fence); 207 - 208 - void drm_sched_fence_set_parent(struct drm_sched_fence *s_fence, 209 - struct dma_fence *fence) 210 - { 211 - /* 212 - * smp_store_release() to ensure another thread racing us 213 - * in drm_sched_fence_set_deadline_finished() sees the 214 - * fence's parent set before test_bit() 215 - */ 216 - smp_store_release(&s_fence->parent, dma_fence_get(fence)); 217 - if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, 218 - &s_fence->finished.flags)) 219 - dma_fence_set_deadline(fence, s_fence->deadline); 220 - } 221 183 222 184 struct drm_sched_fence *drm_sched_fence_alloc(struct drm_sched_entity *entity, 223 185 void *owner)
+1 -2
drivers/gpu/drm/scheduler/sched_main.c
··· 1043 1043 trace_drm_run_job(sched_job, entity); 1044 1044 fence = sched->ops->run_job(sched_job); 1045 1045 complete_all(&entity->entity_idle); 1046 - drm_sched_fence_scheduled(s_fence); 1046 + drm_sched_fence_scheduled(s_fence, fence); 1047 1047 1048 1048 if (!IS_ERR_OR_NULL(fence)) { 1049 - drm_sched_fence_set_parent(s_fence, fence); 1050 1049 /* Drop for original kref_init of the fence */ 1051 1050 dma_fence_put(fence); 1052 1051
-4
drivers/gpu/drm/tegra/fbdev.c
··· 225 225 if (ret) 226 226 goto err_drm_client_init; 227 227 228 - ret = tegra_fbdev_client_hotplug(&helper->client); 229 - if (ret) 230 - drm_dbg_kms(dev, "client hotplug ret=%d\n", ret); 231 - 232 228 drm_client_register(&helper->client); 233 229 234 230 return;
+18 -11
drivers/gpu/drm/ttm/ttm_bo.c
··· 458 458 goto out; 459 459 } 460 460 461 - bounce: 462 - ret = ttm_bo_handle_move_mem(bo, evict_mem, true, ctx, &hop); 463 - if (ret == -EMULTIHOP) { 461 + do { 462 + ret = ttm_bo_handle_move_mem(bo, evict_mem, true, ctx, &hop); 463 + if (ret != -EMULTIHOP) 464 + break; 465 + 464 466 ret = ttm_bo_bounce_temp_buffer(bo, &evict_mem, ctx, &hop); 465 - if (ret) { 466 - if (ret != -ERESTARTSYS && ret != -EINTR) 467 - pr_err("Buffer eviction failed\n"); 468 - ttm_resource_free(bo, &evict_mem); 469 - goto out; 470 - } 471 - /* try and move to final place now. */ 472 - goto bounce; 467 + } while (!ret); 468 + 469 + if (ret) { 470 + ttm_resource_free(bo, &evict_mem); 471 + if (ret != -ERESTARTSYS && ret != -EINTR) 472 + pr_err("Buffer eviction failed\n"); 473 473 } 474 474 out: 475 475 return ret; ··· 516 516 bool *locked, bool *busy) 517 517 { 518 518 bool ret = false; 519 + 520 + if (bo->pin_count) { 521 + *locked = false; 522 + *busy = false; 523 + return false; 524 + } 519 525 520 526 if (bo->base.resv == ctx->resv) { 521 527 dma_resv_assert_held(bo->base.resv); ··· 1173 1167 ret = ttm_bo_handle_move_mem(bo, evict_mem, true, &ctx, &hop); 1174 1168 if (unlikely(ret != 0)) { 1175 1169 WARN(ret == -EMULTIHOP, "Unexpected multihop in swaput - likely driver bug.\n"); 1170 + ttm_resource_free(bo, &evict_mem); 1176 1171 goto out; 1177 1172 } 1178 1173 }
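The ttm_bo.c eviction rework above replaces a backwards `goto bounce` with a loop: attempt the move, and while it fails with -EMULTIHOP, bounce through a temporary placement and retry. The control-flow shape in isolation (the stub functions are hypothetical stand-ins for ttm_bo_handle_move_mem() and ttm_bo_bounce_temp_buffer(); the error value is illustrative):

```c
#define EMULTIHOP 72   /* illustrative errno value for this sketch */

static int hops_needed;   /* intermediate placements still required */

/* Stand-in for ttm_bo_handle_move_mem(): reports a multihop until
 * all intermediate placements have been consumed. */
static int handle_move(void)
{
    return hops_needed > 0 ? -EMULTIHOP : 0;
}

/* Stand-in for ttm_bo_bounce_temp_buffer(): consumes one hop. */
static int bounce_temp_buffer(void)
{
    hops_needed--;
    return 0;
}

/* Retry the move through temporary placements until it either
 * succeeds or fails with something other than a multihop. */
static int evict(void)
{
    int ret;

    do {
        ret = handle_move();
        if (ret != -EMULTIHOP)
            break;             /* done, or a real error */
        ret = bounce_temp_buffer();
    } while (!ret);

    return ret;
}
```

Compared with the old `goto bounce`, the do/while makes the exit conditions explicit: the loop ends on success, on a bounce failure, or on a non-multihop move error.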
+4 -1
drivers/gpu/drm/ttm/ttm_resource.c
··· 86 86 struct ttm_resource *res) 87 87 { 88 88 if (pos->last != res) { 89 + if (pos->first == res) 90 + pos->first = list_next_entry(res, lru); 89 91 list_move(&res->lru, &pos->last->lru); 90 92 pos->last = res; 91 93 } ··· 113 111 { 114 112 struct ttm_lru_bulk_move_pos *pos = ttm_lru_bulk_move_pos(bulk, res); 115 113 116 - if (unlikely(pos->first == res && pos->last == res)) { 114 + if (unlikely(WARN_ON(!pos->first || !pos->last) || 115 + (pos->first == res && pos->last == res))) { 117 116 pos->first = NULL; 118 117 pos->last = NULL; 119 118 } else if (pos->first == res) {
+23 -7
drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_desc.c
··· 132 132 common->event_type = HID_USAGE_SENSOR_EVENT_DATA_UPDATED_ENUM; 133 133 } 134 134 135 - static int float_to_int(u32 float32) 135 + static int float_to_int(u32 flt32_val) 136 136 { 137 137 int fraction, shift, mantissa, sign, exp, zeropre; 138 138 139 - mantissa = float32 & GENMASK(22, 0); 140 - sign = (float32 & BIT(31)) ? -1 : 1; 141 - exp = (float32 & ~BIT(31)) >> 23; 139 + mantissa = flt32_val & GENMASK(22, 0); 140 + sign = (flt32_val & BIT(31)) ? -1 : 1; 141 + exp = (flt32_val & ~BIT(31)) >> 23; 142 142 143 143 if (!exp && !mantissa) 144 144 return 0; 145 145 146 + /* 147 + * Calculate the exponent and fraction part of floating 148 + * point representation. 149 + */ 146 150 exp -= 127; 147 151 if (exp < 0) { 148 152 exp = -exp; 153 + if (exp >= BITS_PER_TYPE(u32)) 154 + return 0; 149 155 zeropre = (((BIT(23) + mantissa) * 100) >> 23) >> exp; 150 156 return zeropre >= 50 ? sign : 0; 151 157 } 152 158 153 159 shift = 23 - exp; 154 - float32 = BIT(exp) + (mantissa >> shift); 155 - fraction = mantissa & GENMASK(shift - 1, 0); 160 + if (abs(shift) >= BITS_PER_TYPE(u32)) 161 + return 0; 156 162 157 - return (((fraction * 100) >> shift) >= 50) ? sign * (float32 + 1) : sign * float32; 163 + if (shift < 0) { 164 + shift = -shift; 165 + flt32_val = BIT(exp) + (mantissa << shift); 166 + shift = 0; 167 + } else { 168 + flt32_val = BIT(exp) + (mantissa >> shift); 169 + } 170 + 171 + fraction = (shift == 0) ? 0 : mantissa & GENMASK(shift - 1, 0); 172 + 173 + return (((fraction * 100) >> shift) >= 50) ? sign * (flt32_val + 1) : sign * flt32_val; 158 174 } 159 175 160 176 static u8 get_input_rep(u8 current_index, int sensor_idx, int report_id,
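The amd_sfh float_to_int() fix above bounds the shift counts while decoding an IEEE-754 binary32 bit pattern with integer math only. A simplified standalone version of the same sign/exponent/mantissa decode (plain masks instead of GENMASK()/BIT(), range limited for brevity; a sketch, not the driver function):

```c
#include <stdint.h>

/* Round a binary32 bit pattern to the nearest integer using integer
 * arithmetic only. Handles |x| < 2^23; anything else returns 0 here. */
static int f32_bits_to_int(uint32_t bits)
{
    int sign = (bits >> 31) ? -1 : 1;
    int exp = (int)((bits >> 23) & 0xFF) - 127;   /* remove the bias */
    uint64_t m = (bits & 0x7FFFFFu) | 0x800000u;  /* implicit leading 1 */
    uint64_t integer, frac;
    int shift;

    if ((bits & 0x7FFFFFFFu) == 0)
        return 0;                 /* +/- zero */
    if (exp < -1 || exp > 22)
        return 0;                 /* tiny, subnormal, or out of range */

    shift = 23 - exp;             /* bits right of the binary point */
    integer = m >> shift;
    frac = m & ((1ull << shift) - 1);

    if (2 * frac >= (1ull << shift))   /* round half away from zero */
        integer++;

    return sign * (int)integer;
}
```

Bounding `exp` before shifting is the point of the fix: a shift count at or beyond the operand width is undefined behavior in C, which is what the added BITS_PER_TYPE() checks guard against.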
+4 -6
drivers/hid/hid-hyperv.c
··· 258 258 259 259 switch (hid_msg_hdr->type) { 260 260 case SYNTH_HID_PROTOCOL_RESPONSE: 261 + len = struct_size(pipe_msg, data, pipe_msg->size); 262 + 261 263 /* 262 264 * While it will be impossible for us to protect against 263 265 * malicious/buggy hypervisor/host, add a check here to 264 266 * ensure we don't corrupt memory. 265 267 */ 266 - if (struct_size(pipe_msg, data, pipe_msg->size) 267 - > sizeof(struct mousevsc_prt_msg)) { 268 - WARN_ON(1); 268 + if (WARN_ON(len > sizeof(struct mousevsc_prt_msg))) 269 269 break; 270 - } 271 270 272 - memcpy(&input_dev->protocol_resp, pipe_msg, 273 - struct_size(pipe_msg, data, pipe_msg->size)); 271 + memcpy(&input_dev->protocol_resp, pipe_msg, len); 274 272 complete(&input_dev->wait_event); 275 273 break; 276 274
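The hid-hyperv fix above computes the message length once with struct_size() and rejects anything larger than the destination before the memcpy(). The same validate-then-copy pattern in miniature (header size, buffer size, and names are hypothetical):

```c
#include <stdint.h>
#include <string.h>

#define HDR_SIZE      4U    /* bytes of header before the payload */
#define RESP_BUF_SIZE 64U   /* capacity of the fixed destination buffer */

/* Copy a host-supplied message (header + payload) only after checking
 * that its claimed total length fits the destination buffer - the
 * struct_size()-style bound check from the fix above. */
static int copy_resp(uint8_t *dst, const uint8_t *msg, uint32_t payload_len)
{
    size_t len = (size_t)HDR_SIZE + payload_len;

    if (len > RESP_BUF_SIZE)
        return -1;          /* malformed or malicious length: drop it */

    memcpy(dst, msg, len);
    return 0;
}
```

Computing the length once and reusing it for both the check and the copy is what the refactor buys: the original code evaluated struct_size() twice, inviting the two call sites to drift apart.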
+4 -3
drivers/hid/hid-input.c
··· 1093 1093 case 0x074: map_key_clear(KEY_BRIGHTNESS_MAX); break; 1094 1094 case 0x075: map_key_clear(KEY_BRIGHTNESS_AUTO); break; 1095 1095 1096 + case 0x076: map_key_clear(KEY_CAMERA_ACCESS_ENABLE); break; 1097 + case 0x077: map_key_clear(KEY_CAMERA_ACCESS_DISABLE); break; 1098 + case 0x078: map_key_clear(KEY_CAMERA_ACCESS_TOGGLE); break; 1099 + 1096 1100 case 0x079: map_key_clear(KEY_KBDILLUMUP); break; 1097 1101 case 0x07a: map_key_clear(KEY_KBDILLUMDOWN); break; 1098 1102 case 0x07c: map_key_clear(KEY_KBDILLUMTOGGLE); break; ··· 1143 1139 case 0x0cd: map_key_clear(KEY_PLAYPAUSE); break; 1144 1140 case 0x0cf: map_key_clear(KEY_VOICECOMMAND); break; 1145 1141 1146 - case 0x0d5: map_key_clear(KEY_CAMERA_ACCESS_ENABLE); break; 1147 - case 0x0d6: map_key_clear(KEY_CAMERA_ACCESS_DISABLE); break; 1148 - case 0x0d7: map_key_clear(KEY_CAMERA_ACCESS_TOGGLE); break; 1149 1142 case 0x0d8: map_key_clear(KEY_DICTATE); break; 1150 1143 case 0x0d9: map_key_clear(KEY_EMOJI_PICKER); break; 1151 1144
+2
drivers/hid/hid-logitech-hidpp.c
··· 4598 4598 4599 4599 { /* Logitech G403 Wireless Gaming Mouse over USB */ 4600 4600 HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0xC082) }, 4601 + { /* Logitech G502 Lightspeed Wireless Gaming Mouse over USB */ 4602 + HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0xC08D) }, 4601 4603 { /* Logitech G703 Gaming Mouse over USB */ 4602 4604 HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0xC087) }, 4603 4605 { /* Logitech G703 Hero Gaming Mouse over USB */
+6 -6
drivers/hid/hid-nvidia-shield.c
··· 63 63 struct thunderstrike_hostcmd_board_info { 64 64 __le16 revision; 65 65 __le16 serial[7]; 66 - }; 66 + } __packed; 67 67 68 68 struct thunderstrike_hostcmd_haptics { 69 69 u8 motor_left; 70 70 u8 motor_right; 71 - }; 71 + } __packed; 72 72 73 73 struct thunderstrike_hostcmd_resp_report { 74 74 u8 report_id; /* THUNDERSTRIKE_HOSTCMD_RESP_REPORT_ID */ ··· 81 81 __le16 fw_version; 82 82 enum thunderstrike_led_state led_state; 83 83 u8 payload[30]; 84 - }; 84 + } __packed; 85 85 } __packed; 86 86 static_assert(sizeof(struct thunderstrike_hostcmd_resp_report) == 87 87 THUNDERSTRIKE_HOSTCMD_REPORT_SIZE); ··· 92 92 u8 reserved_at_10; 93 93 94 94 union { 95 - struct { 95 + struct __packed { 96 96 u8 update; 97 97 enum thunderstrike_led_state state; 98 98 } led; 99 - struct { 99 + struct __packed { 100 100 u8 update; 101 101 struct thunderstrike_hostcmd_haptics motors; 102 102 } haptics; 103 - }; 103 + } __packed; 104 104 u8 reserved_at_30[27]; 105 105 } __packed; 106 106 static_assert(sizeof(struct thunderstrike_hostcmd_req_report) ==
+2 -1
drivers/iommu/iommu-sva.c
··· 34 34 } 35 35 36 36 ret = ida_alloc_range(&iommu_global_pasid_ida, min, max, GFP_KERNEL); 37 - if (ret < min) 37 + if (ret < 0) 38 38 goto out; 39 + 39 40 mm->pasid = ret; 40 41 ret = 0; 41 42 out:
+16 -15
drivers/iommu/iommu.c
··· 2891 2891 ret = __iommu_group_set_domain_internal( 2892 2892 group, dom, IOMMU_SET_DOMAIN_MUST_SUCCEED); 2893 2893 if (WARN_ON(ret)) 2894 - goto out_free; 2894 + goto out_free_old; 2895 2895 } else { 2896 2896 ret = __iommu_group_set_domain(group, dom); 2897 - if (ret) { 2898 - iommu_domain_free(dom); 2899 - group->default_domain = old_dom; 2900 - return ret; 2901 - } 2897 + if (ret) 2898 + goto err_restore_def_domain; 2902 2899 } 2903 2900 2904 2901 /* ··· 2908 2911 for_each_group_device(group, gdev) { 2909 2912 ret = iommu_create_device_direct_mappings(dom, gdev->dev); 2910 2913 if (ret) 2911 - goto err_restore; 2914 + goto err_restore_domain; 2912 2915 } 2913 2916 } 2914 2917 2915 - err_restore: 2916 - if (old_dom) { 2917 - __iommu_group_set_domain_internal( 2918 - group, old_dom, IOMMU_SET_DOMAIN_MUST_SUCCEED); 2919 - iommu_domain_free(dom); 2920 - old_dom = NULL; 2921 - } 2922 - out_free: 2918 + out_free_old: 2923 2919 if (old_dom) 2924 2920 iommu_domain_free(old_dom); 2921 + return ret; 2922 + 2923 + err_restore_domain: 2924 + if (old_dom) 2925 + __iommu_group_set_domain_internal( 2926 + group, old_dom, IOMMU_SET_DOMAIN_MUST_SUCCEED); 2927 + err_restore_def_domain: 2928 + if (old_dom) { 2929 + iommu_domain_free(dom); 2930 + group->default_domain = old_dom; 2931 + } 2925 2932 return ret; 2926 2933 } 2927 2934
+4 -6
drivers/net/dsa/ocelot/felix.c
··· 1286 1286 if (err < 0) { 1287 1287 dev_info(dev, "Unsupported PHY mode %s on port %d\n", 1288 1288 phy_modes(phy_mode), port); 1289 - of_node_put(child); 1290 1289 1291 1290 /* Leave port_phy_modes[port] = 0, which is also 1292 1291 * PHY_INTERFACE_MODE_NA. This will perform a ··· 1785 1786 { 1786 1787 struct ocelot *ocelot = ds->priv; 1787 1788 struct ocelot_port *ocelot_port = ocelot->ports[port]; 1788 - struct felix *felix = ocelot_to_felix(ocelot); 1789 1789 1790 1790 ocelot_port_set_maxlen(ocelot, port, new_mtu); 1791 1791 1792 - mutex_lock(&ocelot->tas_lock); 1792 + mutex_lock(&ocelot->fwd_domain_lock); 1793 1793 1794 - if (ocelot_port->taprio && felix->info->tas_guard_bands_update) 1795 - felix->info->tas_guard_bands_update(ocelot, port); 1794 + if (ocelot_port->taprio && ocelot->ops->tas_guard_bands_update) 1795 + ocelot->ops->tas_guard_bands_update(ocelot, port); 1796 1796 1797 - mutex_unlock(&ocelot->tas_lock); 1797 + mutex_unlock(&ocelot->fwd_domain_lock); 1798 1798 1799 1799 return 0; 1800 1800 }
-1
drivers/net/dsa/ocelot/felix.h
··· 57 57 void (*mdio_bus_free)(struct ocelot *ocelot); 58 58 int (*port_setup_tc)(struct dsa_switch *ds, int port, 59 59 enum tc_setup_type type, void *type_data); 60 - void (*tas_guard_bands_update)(struct ocelot *ocelot, int port); 61 60 void (*port_sched_speed_set)(struct ocelot *ocelot, int port, 62 61 u32 speed); 63 62 void (*phylink_mac_config)(struct ocelot *ocelot, int port,
+40 -19
drivers/net/dsa/ocelot/felix_vsc9959.c
··· 1209 1209 static void vsc9959_tas_guard_bands_update(struct ocelot *ocelot, int port) 1210 1210 { 1211 1211 struct ocelot_port *ocelot_port = ocelot->ports[port]; 1212 + struct ocelot_mm_state *mm = &ocelot->mm[port]; 1212 1213 struct tc_taprio_qopt_offload *taprio; 1213 1214 u64 min_gate_len[OCELOT_NUM_TC]; 1215 + u32 val, maxlen, add_frag_size; 1216 + u64 needed_min_frag_time_ps; 1214 1217 int speed, picos_per_byte; 1215 1218 u64 needed_bit_time_ps; 1216 - u32 val, maxlen; 1217 1219 u8 tas_speed; 1218 1220 int tc; 1219 1221 1220 - lockdep_assert_held(&ocelot->tas_lock); 1222 + lockdep_assert_held(&ocelot->fwd_domain_lock); 1221 1223 1222 1224 taprio = ocelot_port->taprio; 1223 1225 ··· 1255 1253 */ 1256 1254 needed_bit_time_ps = (u64)(maxlen + 24) * picos_per_byte; 1257 1255 1256 + /* Preemptible TCs don't need to pass a full MTU, the port will 1257 + * automatically emit a HOLD request when a preemptible TC gate closes 1258 + */ 1259 + val = ocelot_read_rix(ocelot, QSYS_PREEMPTION_CFG, port); 1260 + add_frag_size = QSYS_PREEMPTION_CFG_MM_ADD_FRAG_SIZE_X(val); 1261 + needed_min_frag_time_ps = picos_per_byte * 1262 + (u64)(24 + 2 * ethtool_mm_frag_size_add_to_min(add_frag_size)); 1263 + 1258 1264 dev_dbg(ocelot->dev, 1259 - "port %d: max frame size %d needs %llu ps at speed %d\n", 1260 - port, maxlen, needed_bit_time_ps, speed); 1265 + "port %d: max frame size %d needs %llu ps, %llu ps for mPackets at speed %d\n", 1266 + port, maxlen, needed_bit_time_ps, needed_min_frag_time_ps, 1267 + speed); 1261 1268 1262 1269 vsc9959_tas_min_gate_lengths(taprio, min_gate_len); 1263 - 1264 - mutex_lock(&ocelot->fwd_domain_lock); 1265 1270 1266 1271 for (tc = 0; tc < OCELOT_NUM_TC; tc++) { 1267 1272 u32 requested_max_sdu = vsc9959_tas_tc_max_sdu(taprio, tc); ··· 1278 1269 remaining_gate_len_ps = 1279 1270 vsc9959_tas_remaining_gate_len_ps(min_gate_len[tc]); 1280 1271 1281 - if (remaining_gate_len_ps > needed_bit_time_ps) { 1272 + if ((mm->active_preemptible_tcs & BIT(tc)) ? 
1273 + remaining_gate_len_ps > needed_min_frag_time_ps : 1274 + remaining_gate_len_ps > needed_bit_time_ps) { 1282 1275 /* Setting QMAXSDU_CFG to 0 disables oversized frame 1283 1276 * dropping. 1284 1277 */ ··· 1334 1323 ocelot_write_rix(ocelot, maxlen, QSYS_PORT_MAX_SDU, port); 1335 1324 1336 1325 ocelot->ops->cut_through_fwd(ocelot); 1337 - 1338 - mutex_unlock(&ocelot->fwd_domain_lock); 1339 1326 } 1340 1327 1341 1328 static void vsc9959_sched_speed_set(struct ocelot *ocelot, int port, ··· 1360 1351 break; 1361 1352 } 1362 1353 1363 - mutex_lock(&ocelot->tas_lock); 1354 + mutex_lock(&ocelot->fwd_domain_lock); 1364 1355 1365 1356 ocelot_rmw_rix(ocelot, 1366 1357 QSYS_TAG_CONFIG_LINK_SPEED(tas_speed), ··· 1370 1361 if (ocelot_port->taprio) 1371 1362 vsc9959_tas_guard_bands_update(ocelot, port); 1372 1363 1373 - mutex_unlock(&ocelot->tas_lock); 1364 + mutex_unlock(&ocelot->fwd_domain_lock); 1374 1365 } 1375 1366 1376 1367 static void vsc9959_new_base_time(struct ocelot *ocelot, ktime_t base_time, ··· 1418 1409 int ret, i; 1419 1410 u32 val; 1420 1411 1421 - mutex_lock(&ocelot->tas_lock); 1412 + mutex_lock(&ocelot->fwd_domain_lock); 1422 1413 1423 1414 if (taprio->cmd == TAPRIO_CMD_DESTROY) { 1424 1415 ocelot_port_mqprio(ocelot, port, &taprio->mqprio); ··· 1430 1421 1431 1422 vsc9959_tas_guard_bands_update(ocelot, port); 1432 1423 1433 - mutex_unlock(&ocelot->tas_lock); 1424 + mutex_unlock(&ocelot->fwd_domain_lock); 1434 1425 return 0; 1435 1426 } else if (taprio->cmd != TAPRIO_CMD_REPLACE) { 1436 1427 ret = -EOPNOTSUPP; ··· 1513 1504 ocelot_port->taprio = taprio_offload_get(taprio); 1514 1505 vsc9959_tas_guard_bands_update(ocelot, port); 1515 1506 1516 - mutex_unlock(&ocelot->tas_lock); 1507 + mutex_unlock(&ocelot->fwd_domain_lock); 1517 1508 1518 1509 return 0; 1519 1510 ··· 1521 1512 taprio->mqprio.qopt.num_tc = 0; 1522 1513 ocelot_port_mqprio(ocelot, port, &taprio->mqprio); 1523 1514 err_unlock:
1524 - mutex_unlock(&ocelot->tas_lock); 1515 + mutex_unlock(&ocelot->fwd_domain_lock); 1525 1516 1526 1517 return ret; 1527 1518 } ··· 1534 1525 int port; 1535 1526 u32 val; 1536 1527 1537 - mutex_lock(&ocelot->tas_lock); 1528 + mutex_lock(&ocelot->fwd_domain_lock); 1538 1529 1539 1530 for (port = 0; port < ocelot->num_phys_ports; port++) { 1540 1531 ocelot_port = ocelot->ports[port]; ··· 1572 1563 QSYS_TAG_CONFIG_ENABLE, 1573 1564 QSYS_TAG_CONFIG, port); 1574 1565 } 1575 - mutex_unlock(&ocelot->tas_lock); 1566 + mutex_unlock(&ocelot->fwd_domain_lock); 1576 1567 } 1577 1568 1578 1569 static int vsc9959_qos_port_cbs_set(struct dsa_switch *ds, int port, ··· 1643 1634 } 1644 1635 1645 1636 1637 + static int vsc9959_qos_port_mqprio(struct ocelot *ocelot, int port, 1638 + struct tc_mqprio_qopt_offload *mqprio) 1639 + { 1640 + int ret; 1641 + 1642 + mutex_lock(&ocelot->fwd_domain_lock); 1643 + ret = ocelot_port_mqprio(ocelot, port, mqprio); 1644 + mutex_unlock(&ocelot->fwd_domain_lock); 1645 + 1646 + return ret; 1647 + } 1648 + 1646 1649 static int vsc9959_port_setup_tc(struct dsa_switch *ds, int port, 1647 1650 enum tc_setup_type type, 1648 1651 void *type_data) ··· 1667 1646 case TC_SETUP_QDISC_TAPRIO: 1668 1647 return vsc9959_qos_port_tas_set(ocelot, port, type_data); 1669 1648 case TC_SETUP_QDISC_MQPRIO: 1670 - return ocelot_port_mqprio(ocelot, port, type_data); 1649 + return vsc9959_qos_port_mqprio(ocelot, port, type_data); 1671 1650 case TC_SETUP_QDISC_CBS: 1672 1651 return vsc9959_qos_port_cbs_set(ds, port, type_data); 1673 1652 default: ··· 2612 2591 .cut_through_fwd = vsc9959_cut_through_fwd, 2613 2592 .tas_clock_adjust = vsc9959_tas_clock_adjust, 2614 2593 .update_stats = vsc9959_update_stats, 2594 + .tas_guard_bands_update = vsc9959_tas_guard_bands_update, 2615 2595 }; 2616 2596 2617 2597 static const struct felix_info felix_info_vsc9959 = { ··· 2638 2616 .port_modes = vsc9959_port_modes, 2639 2617 .port_setup_tc = vsc9959_port_setup_tc, 2640 2618 .port_sched_speed_set = vsc9959_sched_speed_set,
2641 - .tas_guard_bands_update = vsc9959_tas_guard_bands_update, 2642 2619 }; 2643 2620 2644 2621 /* The INTB interrupt is shared between for PTP TX timestamp availability
+3
drivers/net/dsa/qca/qca8k-8xxx.c
··· 588 588 bool ack; 589 589 int ret; 590 590 591 + if (!skb) 592 + return -ENOMEM; 593 + 591 594 reinit_completion(&mgmt_eth_data->rw_done); 592 595 593 596 /* Increment seq_num and set it in the copy pkt */
+3
drivers/net/ethernet/amazon/ena/ena_com.c
··· 35 35 36 36 #define ENA_REGS_ADMIN_INTR_MASK 1 37 37 38 + #define ENA_MAX_BACKOFF_DELAY_EXP 16U 39 + 38 40 #define ENA_MIN_ADMIN_POLL_US 100 39 41 40 42 #define ENA_MAX_ADMIN_POLL_US 5000 ··· 538 536 539 537 static void ena_delay_exponential_backoff_us(u32 exp, u32 delay_us) 540 538 { 539 + exp = min_t(u32, exp, ENA_MAX_BACKOFF_DELAY_EXP); 541 540 delay_us = max_t(u32, ENA_MIN_ADMIN_POLL_US, delay_us); 542 541 delay_us = min_t(u32, delay_us * (1U << exp), ENA_MAX_ADMIN_POLL_US); 543 542 usleep_range(delay_us, 2 * delay_us);
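The ENA fix above caps the exponent before computing `1U << exp`, since shifting a 32-bit value by 32 or more bits is undefined behavior; only afterwards is the result clamped to the polling window. The clamping order as a sketch (the constants mirror the driver's defines; the helper names are hypothetical):

```c
#include <stdint.h>

#define MAX_BACKOFF_DELAY_EXP 16U
#define MIN_ADMIN_POLL_US     100U
#define MAX_ADMIN_POLL_US     5000U

static uint32_t u32_min(uint32_t a, uint32_t b) { return a < b ? a : b; }
static uint32_t u32_max(uint32_t a, uint32_t b) { return a > b ? a : b; }

/* Bounded exponential backoff: clamp the exponent first so the shift
 * stays defined, then clamp the resulting delay to the poll window. */
static uint32_t backoff_delay_us(uint32_t exp, uint32_t delay_us)
{
    exp = u32_min(exp, MAX_BACKOFF_DELAY_EXP);
    delay_us = u32_max(MIN_ADMIN_POLL_US, delay_us);
    return u32_min(delay_us * (1U << exp), MAX_ADMIN_POLL_US);
}
```

Without the first clamp, a large retry count would make the shift undefined before the min against MAX_ADMIN_POLL_US ever ran.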
+2 -2
drivers/net/ethernet/broadcom/bgmac.c
··· 1492 1492 1493 1493 bgmac->in_init = true; 1494 1494 1495 - bgmac_chip_intrs_off(bgmac); 1496 - 1497 1495 net_dev->irq = bgmac->irq; 1498 1496 SET_NETDEV_DEV(net_dev, bgmac->dev); 1499 1497 dev_set_drvdata(bgmac->dev, bgmac); ··· 1508 1510 * Broadcom does it in arch PCI code when enabling fake PCI device. 1509 1511 */ 1510 1512 bgmac_clk_enable(bgmac, 0); 1513 + 1514 + bgmac_chip_intrs_off(bgmac); 1511 1515 1512 1516 /* This seems to be fixing IRQ by assigning OOB #6 to the core */ 1513 1517 if (!(bgmac->feature_flags & BGMAC_FEAT_IDM_MASK)) {
+15 -2
drivers/net/ethernet/freescale/fec.h
··· 355 355 #define RX_RING_SIZE (FEC_ENET_RX_FRPPG * FEC_ENET_RX_PAGES) 356 356 #define FEC_ENET_TX_FRSIZE 2048 357 357 #define FEC_ENET_TX_FRPPG (PAGE_SIZE / FEC_ENET_TX_FRSIZE) 358 - #define TX_RING_SIZE 512 /* Must be power of two */ 358 + #define TX_RING_SIZE 1024 /* Must be power of two */ 359 359 #define TX_RING_MOD_MASK 511 /* for this to work */ 360 360 361 361 #define BD_ENET_RX_INT 0x00800000 ··· 544 544 XDP_STATS_TOTAL, 545 545 }; 546 546 547 + enum fec_txbuf_type { 548 + FEC_TXBUF_T_SKB, 549 + FEC_TXBUF_T_XDP_NDO, 550 + }; 551 + 552 + struct fec_tx_buffer { 553 + union { 554 + struct sk_buff *skb; 555 + struct xdp_frame *xdp; 556 + }; 557 + enum fec_txbuf_type type; 558 + }; 559 + 547 560 struct fec_enet_priv_tx_q { 548 561 struct bufdesc_prop bd; 549 562 unsigned char *tx_bounce[TX_RING_SIZE]; 550 - struct sk_buff *tx_skbuff[TX_RING_SIZE]; 563 + struct fec_tx_buffer tx_buf[TX_RING_SIZE]; 551 564 552 565 unsigned short tx_stop_threshold; 553 566 unsigned short tx_wake_threshold;
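The fec.h rework above turns a bare skb array into a tagged union: each ring slot records which union member is live so the free path knows how to release it. The idiom in isolation (types reduced to a sketch with plain allocations standing in for dev_kfree_skb_any() and xdp_return_frame(); names hypothetical):

```c
#include <stdlib.h>

enum buf_type { BUF_SKB, BUF_XDP };

struct tx_buffer {
    union {
        void *skb;          /* valid when type == BUF_SKB */
        void *xdp;          /* valid when type == BUF_XDP */
    };
    enum buf_type type;
};

/* Release whichever member is live, then restore the default tag,
 * mirroring the driver's "restore default tx buffer type" step. */
static int free_tx_buffer(struct tx_buffer *b)
{
    int was = b->type;

    if (b->type == BUF_SKB) {
        free(b->skb);       /* stand-in for dev_kfree_skb_any() */
        b->skb = NULL;
    } else {
        free(b->xdp);       /* stand-in for xdp_return_frame() */
        b->xdp = NULL;
        b->type = BUF_SKB;  /* default type for reused slots */
    }
    return was;
}
```

The tag is what the old `tx_skbuff[]` array lacked: once XDP frames share the ring with skbs, the cleanup paths in fec_main.c must dispatch on it rather than assume every pointer is an skb.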
+112 -54
drivers/net/ethernet/freescale/fec_main.c
··· 397 397 fec16_to_cpu(bdp->cbd_sc), 398 398 fec32_to_cpu(bdp->cbd_bufaddr), 399 399 fec16_to_cpu(bdp->cbd_datlen), 400 - txq->tx_skbuff[index]); 400 + txq->tx_buf[index].skb); 401 401 bdp = fec_enet_get_nextdesc(bdp, &txq->bd); 402 402 index++; 403 403 } while (bdp != txq->bd.base); ··· 654 654 655 655 index = fec_enet_get_bd_index(last_bdp, &txq->bd); 656 656 /* Save skb pointer */ 657 - txq->tx_skbuff[index] = skb; 657 + txq->tx_buf[index].skb = skb; 658 658 659 659 /* Make sure the updates to rest of the descriptor are performed before 660 660 * transferring ownership. ··· 672 672 673 673 skb_tx_timestamp(skb); 674 674 675 - /* Make sure the update to bdp and tx_skbuff are performed before 676 - * txq->bd.cur. 677 - */ 675 + /* Make sure the update to bdp is performed before txq->bd.cur. */ 678 676 wmb(); 679 677 txq->bd.cur = bdp; 680 678 ··· 860 862 } 861 863 862 864 /* Save skb pointer */ 863 - txq->tx_skbuff[index] = skb; 865 + txq->tx_buf[index].skb = skb; 864 866 865 867 skb_tx_timestamp(skb); 866 868 txq->bd.cur = bdp; ··· 950 952 for (i = 0; i < txq->bd.ring_size; i++) { 951 953 /* Initialize the BD for every fragment in the page. */
952 954 bdp->cbd_sc = cpu_to_fec16(0); 953 - if (bdp->cbd_bufaddr && 954 - !IS_TSO_HEADER(txq, fec32_to_cpu(bdp->cbd_bufaddr))) 955 - dma_unmap_single(&fep->pdev->dev, 956 - fec32_to_cpu(bdp->cbd_bufaddr), 957 - fec16_to_cpu(bdp->cbd_datlen), 958 - DMA_TO_DEVICE); 959 - if (txq->tx_skbuff[i]) { 960 - dev_kfree_skb_any(txq->tx_skbuff[i]); 961 - txq->tx_skbuff[i] = NULL; 955 + if (txq->tx_buf[i].type == FEC_TXBUF_T_SKB) { 956 + if (bdp->cbd_bufaddr && 957 + !IS_TSO_HEADER(txq, fec32_to_cpu(bdp->cbd_bufaddr))) 958 + dma_unmap_single(&fep->pdev->dev, 959 + fec32_to_cpu(bdp->cbd_bufaddr), 960 + fec16_to_cpu(bdp->cbd_datlen), 961 + DMA_TO_DEVICE); 962 + if (txq->tx_buf[i].skb) { 963 + dev_kfree_skb_any(txq->tx_buf[i].skb); 964 + txq->tx_buf[i].skb = NULL; 965 + } 966 + } else { 967 + if (bdp->cbd_bufaddr) 968 + dma_unmap_single(&fep->pdev->dev, 969 + fec32_to_cpu(bdp->cbd_bufaddr), 970 + fec16_to_cpu(bdp->cbd_datlen), 971 + DMA_TO_DEVICE); 972 + 973 + if (txq->tx_buf[i].xdp) { 974 + xdp_return_frame(txq->tx_buf[i].xdp); 975 + txq->tx_buf[i].xdp = NULL; 976 + } 977 + 978 + /* restore default tx buffer type: FEC_TXBUF_T_SKB */ 979 + txq->tx_buf[i].type = FEC_TXBUF_T_SKB; 962 980 } 981 + 963 982 bdp->cbd_bufaddr = cpu_to_fec32(0); 964 983 bdp = fec_enet_get_nextdesc(bdp, &txq->bd); 965 984 } ··· 1375 1360 fec_enet_tx_queue(struct net_device *ndev, u16 queue_id) 1376 1361 { 1377 1362 struct fec_enet_private *fep; 1363 + struct xdp_frame *xdpf; 1378 1364 struct bufdesc *bdp; 1379 1365 unsigned short status; 1380 1366 struct sk_buff *skb; ··· 1403 1387 1404 1388 index = fec_enet_get_bd_index(bdp, &txq->bd); 1405 1389 1406 - skb = txq->tx_skbuff[index]; 1407 - txq->tx_skbuff[index] = NULL; 1408 - if (!IS_TSO_HEADER(txq, fec32_to_cpu(bdp->cbd_bufaddr))) 1409 - dma_unmap_single(&fep->pdev->dev, 1410 - fec32_to_cpu(bdp->cbd_bufaddr), 1411 - fec16_to_cpu(bdp->cbd_datlen), 1412 - DMA_TO_DEVICE); 1413 - bdp->cbd_bufaddr = cpu_to_fec32(0); 1414 - if (!skb) 1415 - goto skb_done;
1390 + if (txq->tx_buf[index].type == FEC_TXBUF_T_SKB) { 1391 + skb = txq->tx_buf[index].skb; 1392 + txq->tx_buf[index].skb = NULL; 1393 + if (bdp->cbd_bufaddr && 1394 + !IS_TSO_HEADER(txq, fec32_to_cpu(bdp->cbd_bufaddr))) 1395 + dma_unmap_single(&fep->pdev->dev, 1396 + fec32_to_cpu(bdp->cbd_bufaddr), 1397 + fec16_to_cpu(bdp->cbd_datlen), 1398 + DMA_TO_DEVICE); 1399 + bdp->cbd_bufaddr = cpu_to_fec32(0); 1400 + if (!skb) 1401 + goto tx_buf_done; 1402 + } else { 1403 + xdpf = txq->tx_buf[index].xdp; 1404 + if (bdp->cbd_bufaddr) 1405 + dma_unmap_single(&fep->pdev->dev, 1406 + fec32_to_cpu(bdp->cbd_bufaddr), 1407 + fec16_to_cpu(bdp->cbd_datlen), 1408 + DMA_TO_DEVICE); 1409 + bdp->cbd_bufaddr = cpu_to_fec32(0); 1410 + if (!xdpf) { 1411 + txq->tx_buf[index].type = FEC_TXBUF_T_SKB; 1412 + goto tx_buf_done; 1413 + } 1414 + } 1416 1415 1417 1416 /* Check for errors. */ 1418 1417 if (status & (BD_ENET_TX_HB | BD_ENET_TX_LC | ··· 1446 1415 ndev->stats.tx_carrier_errors++; 1447 1416 } else { 1448 1417 ndev->stats.tx_packets++; 1449 - ndev->stats.tx_bytes += skb->len; 1450 - } 1451 1418 1452 - /* NOTE: SKBTX_IN_PROGRESS being set does not imply it's we who 1453 - * are to time stamp the packet, so we still need to check time 1454 - * stamping enabled flag. 
1455 - */ 1456 - if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS && 1457 - fep->hwts_tx_en) && 1458 - fep->bufdesc_ex) { 1459 - struct skb_shared_hwtstamps shhwtstamps; 1460 - struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp; 1461 - 1462 - fec_enet_hwtstamp(fep, fec32_to_cpu(ebdp->ts), &shhwtstamps); 1463 - skb_tstamp_tx(skb, &shhwtstamps); 1419 + if (txq->tx_buf[index].type == FEC_TXBUF_T_SKB) 1420 + ndev->stats.tx_bytes += skb->len; 1421 + else 1422 + ndev->stats.tx_bytes += xdpf->len; 1464 1423 } 1465 1424 1466 1425 /* Deferred means some collisions occurred during transmit, ··· 1459 1438 if (status & BD_ENET_TX_DEF) 1460 1439 ndev->stats.collisions++; 1461 1440 1462 - /* Free the sk buffer associated with this last transmit */ 1463 - dev_kfree_skb_any(skb); 1464 - skb_done: 1465 - /* Make sure the update to bdp and tx_skbuff are performed 1441 + if (txq->tx_buf[index].type == FEC_TXBUF_T_SKB) { 1442 + /* NOTE: SKBTX_IN_PROGRESS being set does not imply it's we who 1443 + * are to time stamp the packet, so we still need to check time 1444 + * stamping enabled flag. 
1445 + */ 1446 + if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS && 1447 + fep->hwts_tx_en) && fep->bufdesc_ex) { 1448 + struct skb_shared_hwtstamps shhwtstamps; 1449 + struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp; 1450 + 1451 + fec_enet_hwtstamp(fep, fec32_to_cpu(ebdp->ts), &shhwtstamps); 1452 + skb_tstamp_tx(skb, &shhwtstamps); 1453 + } 1454 + 1455 + /* Free the sk buffer associated with this last transmit */ 1456 + dev_kfree_skb_any(skb); 1457 + } else { 1458 + xdp_return_frame(xdpf); 1459 + 1460 + txq->tx_buf[index].xdp = NULL; 1461 + /* restore default tx buffer type: FEC_TXBUF_T_SKB */ 1462 + txq->tx_buf[index].type = FEC_TXBUF_T_SKB; 1463 + } 1464 + 1465 + tx_buf_done: 1466 + /* Make sure the update to bdp and tx_buf are performed 1466 1467 * before dirty_tx 1467 1468 */ 1468 1469 wmb(); ··· 3292 3249 for (i = 0; i < txq->bd.ring_size; i++) { 3293 3250 kfree(txq->tx_bounce[i]); 3294 3251 txq->tx_bounce[i] = NULL; 3295 - skb = txq->tx_skbuff[i]; 3296 - txq->tx_skbuff[i] = NULL; 3297 - dev_kfree_skb(skb); 3252 + 3253 + if (txq->tx_buf[i].type == FEC_TXBUF_T_SKB) { 3254 + skb = txq->tx_buf[i].skb; 3255 + txq->tx_buf[i].skb = NULL; 3256 + dev_kfree_skb(skb); 3257 + } else { 3258 + if (txq->tx_buf[i].xdp) { 3259 + xdp_return_frame(txq->tx_buf[i].xdp); 3260 + txq->tx_buf[i].xdp = NULL; 3261 + } 3262 + 3263 + txq->tx_buf[i].type = FEC_TXBUF_T_SKB; 3264 + } 3298 3265 } 3299 3266 } 3300 3267 } ··· 3349 3296 fep->total_tx_ring_size += fep->tx_queue[i]->bd.ring_size; 3350 3297 3351 3298 txq->tx_stop_threshold = FEC_MAX_SKB_DESCS; 3352 - txq->tx_wake_threshold = 3353 - (txq->bd.ring_size - txq->tx_stop_threshold) / 2; 3299 + txq->tx_wake_threshold = FEC_MAX_SKB_DESCS + 2 * MAX_SKB_FRAGS; 3354 3300 3355 3301 txq->tso_hdrs = dma_alloc_coherent(&fep->pdev->dev, 3356 3302 txq->bd.ring_size * TSO_HEADER_SIZE, ··· 3784 3732 if (fep->quirks & FEC_QUIRK_SWAP_FRAME) 3785 3733 return -EOPNOTSUPP; 3786 3734 3735 + if (!bpf->prog) 3736 + 
xdp_features_clear_redirect_target(dev); 3737 + 3787 3738 if (is_run) { 3788 3739 napi_disable(&fep->napi); 3789 3740 netif_tx_disable(dev); 3790 3741 } 3791 3742 3792 3743 old_prog = xchg(&fep->xdp_prog, bpf->prog); 3744 + if (old_prog) 3745 + bpf_prog_put(old_prog); 3746 + 3793 3747 fec_restart(dev); 3794 3748 3795 3749 if (is_run) { ··· 3803 3745 netif_tx_start_all_queues(dev); 3804 3746 } 3805 3747 3806 - if (old_prog) 3807 - bpf_prog_put(old_prog); 3748 + if (bpf->prog) 3749 + xdp_features_set_redirect_target(dev, false); 3808 3750 3809 3751 return 0; 3810 3752 ··· 3836 3778 3837 3779 entries_free = fec_enet_get_free_txdesc_num(txq); 3838 3780 if (entries_free < MAX_SKB_FRAGS + 1) { 3839 - netdev_err(fep->netdev, "NOT enough BD for SG!\n"); 3781 + netdev_err_once(fep->netdev, "NOT enough BD for SG!\n"); 3840 3782 return -EBUSY; 3841 3783 } 3842 3784 ··· 3869 3811 ebdp->cbd_esc = cpu_to_fec32(estatus); 3870 3812 } 3871 3813 3872 - txq->tx_skbuff[index] = NULL; 3814 + txq->tx_buf[index].type = FEC_TXBUF_T_XDP_NDO; 3815 + txq->tx_buf[index].xdp = frame; 3873 3816 3874 3817 /* Make sure the updates to rest of the descriptor are performed before 3875 3818 * transferring ownership. ··· 4075 4016 4076 4017 if (!(fep->quirks & FEC_QUIRK_SWAP_FRAME)) 4077 4018 ndev->xdp_features = NETDEV_XDP_ACT_BASIC | 4078 - NETDEV_XDP_ACT_REDIRECT | 4079 - NETDEV_XDP_ACT_NDO_XMIT; 4019 + NETDEV_XDP_ACT_REDIRECT; 4080 4020 4081 4021 fec_restart(ndev); 4082 4022
+1
drivers/net/ethernet/google/gve/gve.h
··· 964 964 /* exported by ethtool.c */ 965 965 extern const struct ethtool_ops gve_ethtool_ops; 966 966 /* needed by ethtool */ 967 + extern char gve_driver_name[]; 967 968 extern const char gve_version_str[]; 968 969 #endif /* _GVE_H_ */
+4 -1
drivers/net/ethernet/google/gve/gve_ethtool.c
··· 15 15 { 16 16 struct gve_priv *priv = netdev_priv(netdev); 17 17 18 - strscpy(info->driver, "gve", sizeof(info->driver)); 18 + strscpy(info->driver, gve_driver_name, sizeof(info->driver)); 19 19 strscpy(info->version, gve_version_str, sizeof(info->version)); 20 20 strscpy(info->bus_info, pci_name(priv->pdev), sizeof(info->bus_info)); 21 21 } ··· 590 590 err = gve_adminq_report_link_speed(priv); 591 591 592 592 cmd->base.speed = priv->link_speed; 593 + 594 + cmd->base.duplex = DUPLEX_FULL; 595 + 593 596 return err; 594 597 } 595 598
+6 -5
drivers/net/ethernet/google/gve/gve_main.c
··· 33 33 #define MIN_TX_TIMEOUT_GAP (1000 * 10) 34 34 #define DQO_TX_MAX 0x3FFFF 35 35 36 + char gve_driver_name[] = "gve"; 36 37 const char gve_version_str[] = GVE_VERSION; 37 38 static const char gve_version_prefix[] = GVE_VERSION_PREFIX; 38 39 ··· 2201 2200 if (err) 2202 2201 return err; 2203 2202 2204 - err = pci_request_regions(pdev, "gvnic-cfg"); 2203 + err = pci_request_regions(pdev, gve_driver_name); 2205 2204 if (err) 2206 2205 goto abort_with_enabled; 2207 2206 ··· 2394 2393 { } 2395 2394 }; 2396 2395 2397 - static struct pci_driver gvnic_driver = { 2398 - .name = "gvnic", 2396 + static struct pci_driver gve_driver = { 2397 + .name = gve_driver_name, 2399 2398 .id_table = gve_id_table, 2400 2399 .probe = gve_probe, 2401 2400 .remove = gve_remove, ··· 2406 2405 #endif 2407 2406 }; 2408 2407 2409 - module_pci_driver(gvnic_driver); 2408 + module_pci_driver(gve_driver); 2410 2409 2411 2410 MODULE_DEVICE_TABLE(pci, gve_id_table); 2412 2411 MODULE_AUTHOR("Google, Inc."); 2413 - MODULE_DESCRIPTION("gVNIC Driver"); 2412 + MODULE_DESCRIPTION("Google Virtual NIC Driver"); 2414 2413 MODULE_LICENSE("Dual MIT/GPL"); 2415 2414 MODULE_VERSION(GVE_VERSION);
+15 -8
drivers/net/ethernet/intel/ice/ice_main.c
··· 5739 5739 q_handle = vsi->tx_rings[queue_index]->q_handle; 5740 5740 tc = ice_dcb_get_tc(vsi, queue_index); 5741 5741 5742 + vsi = ice_locate_vsi_using_queue(vsi, queue_index); 5743 + if (!vsi) { 5744 + netdev_err(netdev, "Invalid VSI for given queue %d\n", 5745 + queue_index); 5746 + return -EINVAL; 5747 + } 5748 + 5742 5749 /* Set BW back to default, when user set maxrate to 0 */ 5743 5750 if (!maxrate) 5744 5751 status = ice_cfg_q_bw_dflt_lmt(vsi->port_info, vsi->idx, tc, ··· 7879 7872 ice_validate_mqprio_qopt(struct ice_vsi *vsi, 7880 7873 struct tc_mqprio_qopt_offload *mqprio_qopt) 7881 7874 { 7882 - u64 sum_max_rate = 0, sum_min_rate = 0; 7883 7875 int non_power_of_2_qcount = 0; 7884 7876 struct ice_pf *pf = vsi->back; 7885 7877 int max_rss_q_cnt = 0; 7878 + u64 sum_min_rate = 0; 7886 7879 struct device *dev; 7887 7880 int i, speed; 7888 7881 u8 num_tc; ··· 7898 7891 dev = ice_pf_to_dev(pf); 7899 7892 vsi->ch_rss_size = 0; 7900 7893 num_tc = mqprio_qopt->qopt.num_tc; 7894 + speed = ice_get_link_speed_kbps(vsi); 7901 7895 7902 7896 for (i = 0; num_tc; i++) { 7903 7897 int qcount = mqprio_qopt->qopt.count[i]; ··· 7939 7931 */ 7940 7932 max_rate = mqprio_qopt->max_rate[i]; 7941 7933 max_rate = div_u64(max_rate, ICE_BW_KBPS_DIVISOR); 7942 - sum_max_rate += max_rate; 7943 7934 7944 7935 /* min_rate is minimum guaranteed rate and it can't be zero */ 7945 7936 min_rate = mqprio_qopt->min_rate[i]; ··· 7948 7941 if (min_rate && min_rate < ICE_MIN_BW_LIMIT) { 7949 7942 dev_err(dev, "TC%d: min_rate(%llu Kbps) < %u Kbps\n", i, 7950 7943 min_rate, ICE_MIN_BW_LIMIT); 7944 + return -EINVAL; 7945 + } 7946 + 7947 + if (max_rate && max_rate > speed) { 7948 + dev_err(dev, "TC%d: max_rate(%llu Kbps) > link speed of %u Kbps\n", 7949 + i, max_rate, speed); 7951 7950 return -EINVAL; 7952 7951 } 7953 7952 ··· 7994 7981 (mqprio_qopt->qopt.offset[i] + mqprio_qopt->qopt.count[i])) 7995 7982 return -EINVAL; 7996 7983 7997 - speed = ice_get_link_speed_kbps(vsi); 7998 - if (sum_max_rate && sum_max_rate > (u64)speed) {
7999 - dev_err(dev, "Invalid max Tx rate(%llu) Kbps > speed(%u) Kbps specified\n", 8000 - sum_max_rate, speed); 8001 - return -EINVAL; 8002 - } 8003 7984 if (sum_min_rate && sum_min_rate > (u64)speed) { 8004 7985 dev_err(dev, "Invalid min Tx rate(%llu) Kbps > speed (%u) Kbps specified\n", 8005 7986 sum_min_rate, speed);
+11 -11
drivers/net/ethernet/intel/ice/ice_tc_lib.c
··· 750 750 /** 751 751 * ice_locate_vsi_using_queue - locate VSI using queue (forward to queue action) 752 752 * @vsi: Pointer to VSI 753 - * @tc_fltr: Pointer to tc_flower_filter 753 + * @queue: Queue index 754 754 * 755 - * Locate the VSI using specified queue. When ADQ is not enabled, always 756 - * return input VSI, otherwise locate corresponding VSI based on per channel 757 - * offset and qcount 755 + * Locate the VSI using specified "queue". When ADQ is not enabled, 756 + * always return input VSI, otherwise locate corresponding 757 + * VSI based on per channel "offset" and "qcount" 758 758 */ 759 - static struct ice_vsi * 760 - ice_locate_vsi_using_queue(struct ice_vsi *vsi, 761 - struct ice_tc_flower_fltr *tc_fltr) 759 + struct ice_vsi * 760 + ice_locate_vsi_using_queue(struct ice_vsi *vsi, int queue) 762 761 { 763 - int num_tc, tc, queue; 762 + int num_tc, tc; 764 763 765 764 /* if ADQ is not active, passed VSI is the candidate VSI */ 766 765 if (!ice_is_adq_active(vsi->back)) ··· 769 770 * upon queue number) 770 771 */ 771 772 num_tc = vsi->mqprio_qopt.qopt.num_tc; 772 - queue = tc_fltr->action.fwd.q.queue; 773 773 774 774 for (tc = 0; tc < num_tc; tc++) { 775 775 int qcount = vsi->mqprio_qopt.qopt.count[tc]; ··· 810 812 struct ice_pf *pf = vsi->back; 811 813 struct device *dev; 812 814 u32 tc_class; 815 + int q; 813 816 814 817 dev = ice_pf_to_dev(pf); 815 818 ··· 839 840 /* Determine destination VSI even though the action is 840 841 * FWD_TO_QUEUE, because QUEUE is associated with VSI 841 842 */ 842 - dest_vsi = tc_fltr->dest_vsi; 843 + q = tc_fltr->action.fwd.q.queue; 844 + dest_vsi = ice_locate_vsi_using_queue(vsi, q); 843 845 break; 844 846 default: 845 847 dev_err(dev, ··· 1716 1716 /* If ADQ is configured, and the queue belongs to ADQ VSI, then prepare 1717 1717 * ADQ switch filter 1718 1718 */ 1719 - ch_vsi = ice_locate_vsi_using_queue(vsi, fltr); 1719 + ch_vsi = ice_locate_vsi_using_queue(vsi, fltr->action.fwd.q.queue); 1720 1720 if (!ch_vsi) 
1721 1721 return -EINVAL; 1722 1722 fltr->dest_vsi = ch_vsi;
+1
drivers/net/ethernet/intel/ice/ice_tc_lib.h
··· 204 204 return pf->num_dmac_chnl_fltrs; 205 205 } 206 206 207 + struct ice_vsi *ice_locate_vsi_using_queue(struct ice_vsi *vsi, int queue); 207 208 int 208 209 ice_add_cls_flower(struct net_device *netdev, struct ice_vsi *vsi, 209 210 struct flow_cls_offload *cls_flower);
+8 -1
drivers/net/ethernet/intel/igc/igc.h
··· 14 14 #include <linux/timecounter.h> 15 15 #include <linux/net_tstamp.h> 16 16 #include <linux/bitfield.h> 17 + #include <linux/hrtimer.h> 17 18 18 19 #include "igc_hw.h" 19 20 ··· 102 101 u32 start_time; 103 102 u32 end_time; 104 103 u32 max_sdu; 104 + bool oper_gate_closed; /* Operating gate. True if the TX Queue is closed */ 105 + bool admin_gate_closed; /* Future gate. True if the TX Queue will be closed */ 105 106 106 107 /* CBS parameters */ 107 108 bool cbs_enable; /* indicates if CBS is enabled */ ··· 163 160 struct timer_list watchdog_timer; 164 161 struct timer_list dma_err_timer; 165 162 struct timer_list phy_info_timer; 163 + struct hrtimer hrtimer; 166 164 167 165 u32 wol; 168 166 u32 en_mng_pt; ··· 188 184 u32 max_frame_size; 189 185 u32 min_frame_size; 190 186 187 + int tc_setup_type; 191 188 ktime_t base_time; 192 189 ktime_t cycle_time; 193 - bool qbv_enable; 190 + bool taprio_offload_enable; 194 191 u32 qbv_config_change_errors; 192 + bool qbv_transition; 193 + unsigned int qbv_count; 195 194 196 195 /* OS defined structs */ 197 196 struct pci_dev *pdev;
+2
drivers/net/ethernet/intel/igc/igc_ethtool.c
··· 1708 1708 /* twisted pair */ 1709 1709 cmd->base.port = PORT_TP; 1710 1710 cmd->base.phy_address = hw->phy.addr; 1711 + ethtool_link_ksettings_add_link_mode(cmd, supported, TP); 1712 + ethtool_link_ksettings_add_link_mode(cmd, advertising, TP); 1711 1713 1712 1714 /* advertising link modes */ 1713 1715 if (hw->phy.autoneg_advertised & ADVERTISE_10_HALF)
+68 -30
drivers/net/ethernet/intel/igc/igc_main.c
··· 711 711 /* disable the queue */ 712 712 wr32(IGC_TXDCTL(reg_idx), 0); 713 713 wrfl(); 714 - mdelay(10); 715 714 716 715 wr32(IGC_TDLEN(reg_idx), 717 716 ring->count * sizeof(union igc_adv_tx_desc)); ··· 1016 1017 ktime_t base_time = adapter->base_time; 1017 1018 ktime_t now = ktime_get_clocktai(); 1018 1019 ktime_t baset_est, end_of_cycle; 1019 - u32 launchtime; 1020 + s32 launchtime; 1020 1021 s64 n; 1021 1022 1022 1023 n = div64_s64(ktime_sub_ns(now, base_time), cycle_time); ··· 1029 1030 *first_flag = true; 1030 1031 ring->last_ff_cycle = baset_est; 1031 1032 1032 - if (ktime_compare(txtime, ring->last_tx_cycle) > 0) 1033 + if (ktime_compare(end_of_cycle, ring->last_tx_cycle) > 0) 1033 1034 *insert_empty = true; 1034 1035 } 1035 1036 } ··· 1572 1573 first->bytecount = skb->len; 1573 1574 first->gso_segs = 1; 1574 1575 1575 - if (tx_ring->max_sdu > 0) { 1576 - u32 max_sdu = 0; 1576 + if (adapter->qbv_transition || tx_ring->oper_gate_closed) 1577 + goto out_drop; 1577 1578 1578 - max_sdu = tx_ring->max_sdu + 1579 - (skb_vlan_tagged(first->skb) ? VLAN_HLEN : 0);
1580 - 1581 - if (first->bytecount > max_sdu) { 1582 - adapter->stats.txdrop++; 1583 - goto out_drop; 1584 - } 1579 + if (tx_ring->max_sdu > 0 && first->bytecount > tx_ring->max_sdu) { 1580 + adapter->stats.txdrop++; 1581 + goto out_drop; 1585 1582 } 1586 1583 1587 1584 if (unlikely(test_bit(IGC_RING_FLAG_TX_HWTSTAMP, &tx_ring->flags) && ··· 3007 3012 time_after(jiffies, tx_buffer->time_stamp + 3008 3013 (adapter->tx_timeout_factor * HZ)) && 3009 3014 !(rd32(IGC_STATUS) & IGC_STATUS_TXOFF) && 3010 - (rd32(IGC_TDH(tx_ring->reg_idx)) != 3011 - readl(tx_ring->tail))) { 3015 + (rd32(IGC_TDH(tx_ring->reg_idx)) != readl(tx_ring->tail)) && 3016 + !tx_ring->oper_gate_closed) { 3012 3017 /* detected Tx unit hang */ 3013 3018 netdev_err(tx_ring->netdev, 3014 3019 "Detected Tx Unit Hang\n" ··· 6097 6102 6098 6103 adapter->base_time = 0; 6099 6104 adapter->cycle_time = NSEC_PER_SEC; 6105 + adapter->taprio_offload_enable = false; 6100 6106 adapter->qbv_config_change_errors = 0; 6107 + adapter->qbv_transition = false; 6108 + adapter->qbv_count = 0; 6101 6109 6102 6110 for (i = 0; i < adapter->num_tx_queues; i++) { 6103 6111 struct igc_ring *ring = adapter->tx_ring[i]; ··· 6108 6110 ring->start_time = 0; 6109 6111 ring->end_time = NSEC_PER_SEC; 6110 6112 ring->max_sdu = 0; 6113 + ring->oper_gate_closed = false; 6114 + ring->admin_gate_closed = false; 6111 6115 } 6112 6116 6113 6117 return 0; ··· 6121 6121 bool queue_configured[IGC_MAX_TX_QUEUES] = { }; 6122 6122 struct igc_hw *hw = &adapter->hw; 6123 6123 u32 start_time = 0, end_time = 0; 6124 + struct timespec64 now; 6124 6125 size_t n; 6125 6126 int i; 6126 6127 6127 - switch (qopt->cmd) { 6128 - case TAPRIO_CMD_REPLACE: 6129 - adapter->qbv_enable = true; 6130 - break; 6131 - case TAPRIO_CMD_DESTROY: 6132 - adapter->qbv_enable = false; 6133 - break; 6134 - default: 6135 - return -EOPNOTSUPP; 6136 - } 6137 - 6138 - if (!adapter->qbv_enable) 6128 + if (qopt->cmd == TAPRIO_CMD_DESTROY) 6139 6129 return igc_tsn_clear_schedule(adapter);
6130 + 6131 + if (qopt->cmd != TAPRIO_CMD_REPLACE) 6132 + return -EOPNOTSUPP; 6140 6133 6141 6134 if (qopt->base_time < 0) 6142 6135 return -ERANGE; 6143 6136 6144 - if (igc_is_device_id_i225(hw) && adapter->base_time) 6137 + if (igc_is_device_id_i225(hw) && adapter->taprio_offload_enable) 6145 6138 return -EALREADY; 6146 6139 6147 6140 if (!validate_schedule(adapter, qopt)) ··· 6142 6149 6143 6150 adapter->cycle_time = qopt->cycle_time; 6144 6151 adapter->base_time = qopt->base_time; 6152 + adapter->taprio_offload_enable = true; 6153 + 6154 + igc_ptp_read(adapter, &now); 6145 6155 6146 6156 for (n = 0; n < qopt->num_entries; n++) { 6147 6157 struct tc_taprio_sched_entry *e = &qopt->entries[n]; ··· 6180 6184 ring->start_time = start_time; 6181 6185 ring->end_time = end_time; 6182 6186 6183 - queue_configured[i] = true; 6187 + if (ring->start_time >= adapter->cycle_time) 6188 + queue_configured[i] = false; 6189 + else 6190 + queue_configured[i] = true; 6184 6191 } 6185 6192 6186 6193 start_time += e->interval; ··· 6193 6194 * If not, set the start and end time to be end time.
6194 6195 */ 6195 6196 for (i = 0; i < adapter->num_tx_queues; i++) { 6197 + struct igc_ring *ring = adapter->tx_ring[i]; 6198 + 6199 + if (!is_base_time_past(qopt->base_time, &now)) { 6200 + ring->admin_gate_closed = false; 6201 + } else { 6202 + ring->oper_gate_closed = false; 6203 + ring->admin_gate_closed = false; 6204 + } 6205 + 6196 6206 if (!queue_configured[i]) { 6197 - struct igc_ring *ring = adapter->tx_ring[i]; 6207 + if (!is_base_time_past(qopt->base_time, &now)) 6208 + ring->admin_gate_closed = true; 6209 + else 6210 + ring->oper_gate_closed = true; 6198 6211 6199 6212 ring->start_time = end_time; 6200 6213 ring->end_time = end_time; ··· 6218 6207 struct net_device *dev = adapter->netdev; 6219 6208 6220 6209 if (qopt->max_sdu[i]) 6221 - ring->max_sdu = qopt->max_sdu[i] + dev->hard_header_len; 6210 + ring->max_sdu = qopt->max_sdu[i] + dev->hard_header_len - ETH_TLEN; 6222 6211 else 6223 6212 ring->max_sdu = 0; 6224 6213 } ··· 6337 6326 void *type_data) 6338 6327 { 6339 6328 struct igc_adapter *adapter = netdev_priv(dev); 6329 + 6330 + adapter->tc_setup_type = type; 6340 6331 6341 6332 switch (type) { 6342 6333 case TC_QUERY_CAPS: ··· 6587 6574 .xmo_rx_timestamp = igc_xdp_rx_timestamp, 6588 6575 }; 6589 6576 6577 + static enum hrtimer_restart igc_qbv_scheduling_timer(struct hrtimer *timer) 6578 + { 6579 + struct igc_adapter *adapter = container_of(timer, struct igc_adapter, 6580 + hrtimer); 6581 + unsigned int i; 6582 + 6583 + adapter->qbv_transition = true; 6584 + for (i = 0; i < adapter->num_tx_queues; i++) { 6585 + struct igc_ring *tx_ring = adapter->tx_ring[i]; 6586 + 6587 + if (tx_ring->admin_gate_closed) { 6588 + tx_ring->admin_gate_closed = false; 6589 + tx_ring->oper_gate_closed = true; 6590 + } else { 6591 + tx_ring->oper_gate_closed = false; 6592 + } 6593 + } 6594 + adapter->qbv_transition = false; 6595 + return HRTIMER_NORESTART; 6596 + } 6597 + 6590 6598 /** 6591 6599 * igc_probe - Device Initialization Routine 6592 6600 * @pdev: PCI device information struct
··· 6786 6752 INIT_WORK(&adapter->reset_task, igc_reset_task); 6787 6753 INIT_WORK(&adapter->watchdog_task, igc_watchdog_task); 6788 6754 6755 + hrtimer_init(&adapter->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); 6756 + adapter->hrtimer.function = &igc_qbv_scheduling_timer; 6757 + 6789 6758 /* Initialize link properties that are user-changeable */ 6790 6759 adapter->fc_autoneg = true; 6791 6760 hw->mac.autoneg = true; ··· 6892 6855 6893 6856 cancel_work_sync(&adapter->reset_task); 6894 6857 cancel_work_sync(&adapter->watchdog_task); 6858 + hrtimer_cancel(&adapter->hrtimer); 6895 6859 6896 6860 /* Release control of h/w to f/w. If f/w is AMT enabled, this 6897 6861 * would have already happened in close and is redundant.
+22 -3
drivers/net/ethernet/intel/igc/igc_ptp.c
··· 356 356 tsim &= ~IGC_TSICR_TT0; 357 357 } 358 358 if (on) { 359 + struct timespec64 safe_start; 359 360 int i = rq->perout.index; 360 361 361 362 igc_pin_perout(igc, i, pin, use_freq); 362 - igc->perout[i].start.tv_sec = rq->perout.start.sec; 363 + igc_ptp_read(igc, &safe_start); 364 + 365 + /* PPS output start time is triggered by Target time(TT) 366 + * register. Programming any past time value into TT 367 + * register will cause PPS to never start. Need to make 368 + * sure we program the TT register a time ahead in 369 + * future. There isn't a stringent need to fire PPS out 370 + * right away. Adding +2 seconds should take care of 371 + * corner cases. Let's say if the SYSTIML is close to 372 + * wrap up and the timer keeps ticking as we program the 373 + * register, adding +2seconds is safe bet. 374 + */ 375 + safe_start.tv_sec += 2; 376 + 377 + if (rq->perout.start.sec < safe_start.tv_sec) 378 + igc->perout[i].start.tv_sec = safe_start.tv_sec; 379 + else 380 + igc->perout[i].start.tv_sec = rq->perout.start.sec; 363 381 igc->perout[i].start.tv_nsec = rq->perout.start.nsec; 364 382 igc->perout[i].period.tv_sec = ts.tv_sec; 365 383 igc->perout[i].period.tv_nsec = ts.tv_nsec; 366 - wr32(trgttimh, rq->perout.start.sec); 384 + wr32(trgttimh, (u32)igc->perout[i].start.tv_sec); 367 385 /* For now, always select timer 0 as source. */ 368 - wr32(trgttiml, rq->perout.start.nsec | IGC_TT_IO_TIMER_SEL_SYSTIM0); 386 + wr32(trgttiml, (u32)(igc->perout[i].start.tv_nsec | 387 + IGC_TT_IO_TIMER_SEL_SYSTIM0)); 369 388 if (use_freq) 370 389 wr32(freqout, ns); 371 390 tsauxc |= tsauxc_mask;
+51 -17
drivers/net/ethernet/intel/igc/igc_tsn.c
··· 37 37 { 38 38 unsigned int new_flags = adapter->flags & ~IGC_FLAG_TSN_ANY_ENABLED; 39 39 40 - if (adapter->qbv_enable) 40 + if (adapter->taprio_offload_enable) 41 41 new_flags |= IGC_FLAG_TSN_QBV_ENABLED; 42 42 43 43 if (is_any_launchtime(adapter)) ··· 114 114 static int igc_tsn_enable_offload(struct igc_adapter *adapter) 115 115 { 116 116 struct igc_hw *hw = &adapter->hw; 117 - bool tsn_mode_reconfig = false; 118 117 u32 tqavctrl, baset_l, baset_h; 119 118 u32 sec, nsec, cycle; 120 119 ktime_t base_time, systim; ··· 132 133 wr32(IGC_STQT(i), ring->start_time); 133 134 wr32(IGC_ENDQT(i), ring->end_time); 134 135 135 - txqctl |= IGC_TXQCTL_STRICT_CYCLE | 136 - IGC_TXQCTL_STRICT_END; 136 + if (adapter->taprio_offload_enable) { 137 + /* If taprio_offload_enable is set we are in "taprio" 138 + * mode and we need to be strict about the 139 + * cycles: only transmit a packet if it can be 140 + * completed during that cycle. 141 + * 142 + * If taprio_offload_enable is NOT true when 143 + * enabling TSN offload, the cycle should have 144 + * no external effects, but is only used internally 145 + * to adapt the base time register after a second 146 + * has passed. 147 + * 148 + * Enabling strict mode in this case would 149 + * unnecessarily prevent the transmission of 150 + * certain packets (i.e. at the boundary of a 151 + * second) and thus interfere with the launchtime 152 + * feature that promises transmission at a 153 + * certain point in time. 
154 + */ 155 + txqctl |= IGC_TXQCTL_STRICT_CYCLE | 156 + IGC_TXQCTL_STRICT_END; 157 + } 137 158 138 159 if (ring->launchtime_enable) 139 160 txqctl |= IGC_TXQCTL_QUEUE_MODE_LAUNCHT; ··· 247 228 248 229 tqavctrl = rd32(IGC_TQAVCTRL) & ~IGC_TQAVCTRL_FUTSCDDIS; 249 230 250 - if (tqavctrl & IGC_TQAVCTRL_TRANSMIT_MODE_TSN) 251 - tsn_mode_reconfig = true; 252 - 253 231 tqavctrl |= IGC_TQAVCTRL_TRANSMIT_MODE_TSN | IGC_TQAVCTRL_ENHANCED_QAV; 232 + 233 + adapter->qbv_count++; 254 234 255 235 cycle = adapter->cycle_time; 256 236 base_time = adapter->base_time; ··· 267 249 * Gate Control List (GCL) is running. 268 250 */ 269 251 if ((rd32(IGC_BASET_H) || rd32(IGC_BASET_L)) && 270 - tsn_mode_reconfig) 252 + (adapter->tc_setup_type == TC_SETUP_QDISC_TAPRIO) && 253 + (adapter->qbv_count > 1)) 271 254 adapter->qbv_config_change_errors++; 272 255 } else { 273 - /* According to datasheet section 7.5.2.9.3.3, FutScdDis bit 274 - * has to be configured before the cycle time and base time. 275 - * Tx won't hang if there is a GCL is already running, 276 - * so in this case we don't need to set FutScdDis. 277 - */ 278 - if (igc_is_device_id_i226(hw) && 279 - !(rd32(IGC_BASET_H) || rd32(IGC_BASET_L))) 280 - tqavctrl |= IGC_TQAVCTRL_FUTSCDDIS; 256 + if (igc_is_device_id_i226(hw)) { 257 + ktime_t adjust_time, expires_time; 258 + 259 + /* According to datasheet section 7.5.2.9.3.3, FutScdDis bit 260 + * has to be configured before the cycle time and base time. 261 + * Tx won't hang if a GCL is already running, 262 + * so in this case we don't need to set FutScdDis. 
263 + */ 264 + if (!(rd32(IGC_BASET_H) || rd32(IGC_BASET_L))) 265 + tqavctrl |= IGC_TQAVCTRL_FUTSCDDIS; 266 + 267 + nsec = rd32(IGC_SYSTIML); 268 + sec = rd32(IGC_SYSTIMH); 269 + systim = ktime_set(sec, nsec); 270 + 271 + adjust_time = adapter->base_time; 272 + expires_time = ktime_sub_ns(adjust_time, systim); 273 + hrtimer_start(&adapter->hrtimer, expires_time, HRTIMER_MODE_REL); 274 + } 281 275 } 282 276 283 277 wr32(IGC_TQAVCTRL, tqavctrl); ··· 335 305 { 336 306 struct igc_hw *hw = &adapter->hw; 337 307 338 - if (netif_running(adapter->netdev) && igc_is_device_id_i225(hw)) { 308 + /* Per I225/6 HW Design Section 7.5.2.1, transmit mode 309 + * cannot be changed dynamically. Require reset the adapter. 310 + */ 311 + if (netif_running(adapter->netdev) && 312 + (igc_is_device_id_i225(hw) || !adapter->qbv_count)) { 339 313 schedule_work(&adapter->reset_task); 340 314 return 0; 341 315 }
+2 -2
drivers/net/ethernet/marvell/mvneta.c
··· 1511 1511 */ 1512 1512 if (txq_number == 1) 1513 1513 txq_map = (cpu == pp->rxq_def) ? 1514 - MVNETA_CPU_TXQ_ACCESS(1) : 0; 1514 + MVNETA_CPU_TXQ_ACCESS(0) : 0; 1515 1515 1516 1516 } else { 1517 1517 txq_map = MVNETA_CPU_TXQ_ACCESS_ALL_MASK; ··· 4356 4356 */ 4357 4357 if (txq_number == 1) 4358 4358 txq_map = (cpu == elected_cpu) ? 4359 - MVNETA_CPU_TXQ_ACCESS(1) : 0; 4359 + MVNETA_CPU_TXQ_ACCESS(0) : 0; 4360 4360 else 4361 4361 txq_map = mvreg_read(pp, MVNETA_CPU_MAP(cpu)) & 4362 4362 MVNETA_CPU_TXQ_ACCESS_ALL_MASK;
+9 -10
drivers/net/ethernet/marvell/octeontx2/af/ptp.c
··· 208 208 /* Check driver is bound to PTP block */ 209 209 if (!ptp) 210 210 ptp = ERR_PTR(-EPROBE_DEFER); 211 - else 211 + else if (!IS_ERR(ptp)) 212 212 pci_dev_get(ptp->pdev); 213 213 214 214 return ptp; ··· 388 388 static int ptp_probe(struct pci_dev *pdev, 389 389 const struct pci_device_id *ent) 390 390 { 391 - struct device *dev = &pdev->dev; 392 391 struct ptp *ptp; 393 392 int err; 394 393 395 - ptp = devm_kzalloc(dev, sizeof(*ptp), GFP_KERNEL); 394 + ptp = kzalloc(sizeof(*ptp), GFP_KERNEL); 396 395 if (!ptp) { 397 396 err = -ENOMEM; 398 397 goto error; ··· 427 428 return 0; 428 429 429 430 error_free: 430 - devm_kfree(dev, ptp); 431 + kfree(ptp); 431 432 432 433 error: 433 434 /* For `ptp_get()` we need to differentiate between the case 434 435 * when the core has not tried to probe this device and the case when 435 - * the probe failed. In the later case we pretend that the 436 - * initialization was successful and keep the error in 436 + * the probe failed. In the later case we keep the error in 437 437 * `dev->driver_data`. 438 438 */ 439 439 pci_set_drvdata(pdev, ERR_PTR(err)); 440 440 if (!first_ptp_block) 441 441 first_ptp_block = ERR_PTR(err); 442 442 443 - return 0; 443 + return err; 444 444 } 445 445 446 446 static void ptp_remove(struct pci_dev *pdev) ··· 447 449 struct ptp *ptp = pci_get_drvdata(pdev); 448 450 u64 clock_cfg; 449 451 450 - if (cn10k_ptp_errata(ptp) && hrtimer_active(&ptp->hrtimer)) 451 - hrtimer_cancel(&ptp->hrtimer); 452 - 453 452 if (IS_ERR_OR_NULL(ptp)) 454 453 return; 454 + 455 + if (cn10k_ptp_errata(ptp) && hrtimer_active(&ptp->hrtimer)) 456 + hrtimer_cancel(&ptp->hrtimer); 455 457 456 458 /* Disable PTP clock */ 457 459 clock_cfg = readq(ptp->reg_base + PTP_CLOCK_CFG); 458 460 clock_cfg &= ~PTP_CLOCK_CFG_PTP_EN; 459 461 writeq(clock_cfg, ptp->reg_base + PTP_CLOCK_CFG); 462 + kfree(ptp); 460 463 } 461 464 462 465 static const struct pci_device_id ptp_id_table[] = {
+1 -1
drivers/net/ethernet/marvell/octeontx2/af/rvu.c
··· 3252 3252 rvu->ptp = ptp_get(); 3253 3253 if (IS_ERR(rvu->ptp)) { 3254 3254 err = PTR_ERR(rvu->ptp); 3255 - if (err == -EPROBE_DEFER) 3255 + if (err) 3256 3256 goto err_release_regions; 3257 3257 rvu->ptp = NULL; 3258 3258 }
+2 -9
drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
··· 4069 4069 } 4070 4070 4071 4071 /* install/uninstall promisc entry */ 4072 - if (promisc) { 4072 + if (promisc) 4073 4073 rvu_npc_install_promisc_entry(rvu, pcifunc, nixlf, 4074 4074 pfvf->rx_chan_base, 4075 4075 pfvf->rx_chan_cnt); 4076 - 4077 - if (rvu_npc_exact_has_match_table(rvu)) 4078 - rvu_npc_exact_promisc_enable(rvu, pcifunc); 4079 - } else { 4076 + else 4080 4077 if (!nix_rx_multicast) 4081 4078 rvu_npc_enable_promisc_entry(rvu, pcifunc, nixlf, false); 4082 - 4083 - if (rvu_npc_exact_has_match_table(rvu)) 4084 - rvu_npc_exact_promisc_disable(rvu, pcifunc); 4085 - } 4086 4079 4087 4080 return 0; 4088 4081 }
+21 -2
drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c
··· 1164 1164 { 1165 1165 struct npc_exact_table *table; 1166 1166 u16 *cnt, old_cnt; 1167 + bool promisc; 1167 1168 1168 1169 table = rvu->hw->table; 1170 + promisc = table->promisc_mode[drop_mcam_idx]; 1169 1171 1170 1172 cnt = &table->cnt_cmd_rules[drop_mcam_idx]; 1171 1173 old_cnt = *cnt; ··· 1179 1177 1180 1178 *enable_or_disable_cam = false; 1181 1179 1182 - /* If all rules are deleted, disable cam */ 1180 + if (promisc) 1181 + goto done; 1182 + 1183 + /* If all rules are deleted and not already in promisc mode; 1184 + * disable cam 1185 + */ 1183 1186 if (!*cnt && val < 0) { 1184 1187 *enable_or_disable_cam = true; 1185 1188 goto done; 1186 1189 } 1187 1190 1188 - /* If rule got added, enable cam */ 1191 + /* If rule got added and not already in promisc mode; enable cam */ 1189 1192 if (!old_cnt && val > 0) { 1190 1193 *enable_or_disable_cam = true; 1191 1194 goto done; ··· 1469 1462 *promisc = false; 1470 1463 mutex_unlock(&table->lock); 1471 1464 1465 + /* Enable drop rule */ 1466 + rvu_npc_enable_mcam_by_entry_index(rvu, drop_mcam_idx, NIX_INTF_RX, 1467 + true); 1468 + 1469 + dev_dbg(rvu->dev, "%s: disabled promisc mode (cgx=%d lmac=%d)\n", 1470 + __func__, cgx_id, lmac_id); 1472 1471 return 0; 1473 1472 } 1474 1473 ··· 1516 1503 *promisc = true; 1517 1504 mutex_unlock(&table->lock); 1518 1505 1506 + /* disable drop rule */ 1507 + rvu_npc_enable_mcam_by_entry_index(rvu, drop_mcam_idx, NIX_INTF_RX, 1508 + false); 1509 + 1510 + dev_dbg(rvu->dev, "%s: Enabled promisc mode (cgx=%d lmac=%d)\n", 1511 + __func__, cgx_id, lmac_id); 1519 1512 return 0; 1520 1513 } 1521 1514
+8
drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
··· 872 872 return -EINVAL; 873 873 874 874 vlan_etype = be16_to_cpu(fsp->h_ext.vlan_etype); 875 + 876 + /* Drop rule with vlan_etype == 802.1Q 877 + * and vlan_id == 0 is not supported 878 + */ 879 + if (vlan_etype == ETH_P_8021Q && !fsp->m_ext.vlan_tci && 880 + fsp->ring_cookie == RX_CLS_FLOW_DISC) 881 + return -EINVAL; 882 + 875 883 /* Only ETH_P_8021Q and ETH_P_802AD types supported */ 876 884 if (vlan_etype != ETH_P_8021Q && 877 885 vlan_etype != ETH_P_8021AD)
+15
drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
··· 597 597 return -EOPNOTSUPP; 598 598 } 599 599 600 + if (!match.mask->vlan_id) { 601 + struct flow_action_entry *act; 602 + int i; 603 + 604 + flow_action_for_each(i, act, &rule->action) { 605 + if (act->id == FLOW_ACTION_DROP) { 606 + netdev_err(nic->netdev, 607 + "vlan tpid 0x%x with vlan_id %d is not supported for DROP rule.\n", 608 + ntohs(match.key->vlan_tpid), 609 + match.key->vlan_id); 610 + return -EOPNOTSUPP; 611 + } 612 + } 613 + } 614 + 600 615 if (match.mask->vlan_id || 601 616 match.mask->vlan_dei || 602 617 match.mask->vlan_priority) {
+3 -3
drivers/net/ethernet/mellanox/mlx5/core/en/fs_tt_redirect.c
··· 594 594 595 595 err = fs_any_create_table(fs); 596 596 if (err) 597 - return err; 597 + goto err_free_any; 598 598 599 599 err = fs_any_enable(fs); 600 600 if (err) ··· 606 606 607 607 err_destroy_table: 608 608 fs_any_destroy_table(fs_any); 609 - 610 - kfree(fs_any); 609 + err_free_any: 611 610 mlx5e_fs_set_any(fs, NULL); 611 + kfree(fs_any); 612 612 return err; 613 613 }
+4 -2
drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
··· 729 729 730 730 c = kvzalloc_node(sizeof(*c), GFP_KERNEL, dev_to_node(mlx5_core_dma_dev(mdev))); 731 731 cparams = kvzalloc(sizeof(*cparams), GFP_KERNEL); 732 - if (!c || !cparams) 733 - return -ENOMEM; 732 + if (!c || !cparams) { 733 + err = -ENOMEM; 734 + goto err_free; 735 + } 734 736 735 737 c->priv = priv; 736 738 c->mdev = priv->mdev;
+11 -3
drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
··· 1545 1545 1546 1546 attr->ct_attr.ct_action |= act->ct.action; /* So we can have clear + ct */ 1547 1547 attr->ct_attr.zone = act->ct.zone; 1548 - attr->ct_attr.nf_ft = act->ct.flow_table; 1548 + if (!(act->ct.action & TCA_CT_ACT_CLEAR)) 1549 + attr->ct_attr.nf_ft = act->ct.flow_table; 1549 1550 attr->ct_attr.act_miss_cookie = act->miss_cookie; 1550 1551 1551 1552 return 0; ··· 1991 1990 if (!priv) 1992 1991 return -EOPNOTSUPP; 1993 1992 1993 + if (attr->ct_attr.offloaded) 1994 + return 0; 1995 + 1994 1996 if (attr->ct_attr.ct_action & TCA_CT_ACT_CLEAR) { 1995 1997 err = mlx5_tc_ct_entry_set_registers(priv, &attr->parse_attr->mod_hdr_acts, 1996 1998 0, 0, 0, 0); ··· 2003 1999 attr->action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR; 2004 2000 } 2005 2001 2006 - if (!attr->ct_attr.nf_ft) /* means only ct clear action, and not ct_clear,ct() */ 2002 + if (!attr->ct_attr.nf_ft) { /* means only ct clear action, and not ct_clear,ct() */ 2003 + attr->ct_attr.offloaded = true; 2007 2004 return 0; 2005 + } 2008 2006 2009 2007 mutex_lock(&priv->control_lock); 2010 2008 err = __mlx5_tc_ct_flow_offload(priv, attr); 2009 + if (!err) 2010 + attr->ct_attr.offloaded = true; 2011 2011 mutex_unlock(&priv->control_lock); 2012 2012 2013 2013 return err; ··· 2029 2021 mlx5_tc_ct_delete_flow(struct mlx5_tc_ct_priv *priv, 2030 2022 struct mlx5_flow_attr *attr) 2031 2023 { 2032 - if (!attr->ct_attr.ft) /* no ct action, return */ 2024 + if (!attr->ct_attr.offloaded) /* no ct action, return */ 2033 2025 return; 2034 2026 if (!attr->ct_attr.nf_ft) /* means only ct clear action, and not ct_clear,ct() */ 2035 2027 return;
+1
drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h
··· 29 29 u32 ct_labels_id; 30 30 u32 act_miss_mapping; 31 31 u64 act_miss_cookie; 32 + bool offloaded; 32 33 struct mlx5_ct_ft *ft; 33 34 }; 34 35
+1 -2
drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
··· 662 662 /* No need to check ((page->pp_magic & ~0x3UL) == PP_SIGNATURE) 663 663 * as we know this is a page_pool page. 664 664 */ 665 - page_pool_put_defragged_page(page->pp, 666 - page, -1, true); 665 + page_pool_recycle_direct(page->pp, page); 667 666 } while (++n < num); 668 667 669 668 break;
+1
drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.c
··· 190 190 in = kvzalloc(inlen, GFP_KERNEL); 191 191 if (!in || !ft->g) { 192 192 kfree(ft->g); 193 + ft->g = NULL; 193 194 kvfree(in); 194 195 return -ENOMEM; 195 196 }
+22 -22
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
··· 390 390 { 391 391 struct mlx5e_wqe_frag_info *wi = get_frag(rq, ix); 392 392 393 - if (rq->xsk_pool) 393 + if (rq->xsk_pool) { 394 394 mlx5e_xsk_free_rx_wqe(wi); 395 - else 395 + } else { 396 396 mlx5e_free_rx_wqe(rq, wi); 397 + 398 + /* Avoid a second release of the wqe pages: dealloc is called 399 + * for the same missing wqes on regular RQ flush and on regular 400 + * RQ close. This happens when XSK RQs come into play. 401 + */ 402 + for (int i = 0; i < rq->wqe.info.num_frags; i++, wi++) 403 + wi->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE); 404 + } 397 405 } 398 406 399 407 static void mlx5e_xsk_free_rx_wqes(struct mlx5e_rq *rq, u16 ix, int wqe_bulk) ··· 1751 1743 1752 1744 prog = rcu_dereference(rq->xdp_prog); 1753 1745 if (prog && mlx5e_xdp_handle(rq, prog, &mxbuf)) { 1754 - if (test_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) { 1746 + if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) { 1755 1747 struct mlx5e_wqe_frag_info *pwi; 1756 1748 1757 1749 for (pwi = head_wi; pwi < wi; pwi++) 1758 - pwi->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE); 1750 + pwi->frag_page->frags++; 1759 1751 } 1760 1752 return NULL; /* page/packet was consumed by XDP */ 1761 1753 } ··· 1825 1817 rq, wi, cqe, cqe_bcnt); 1826 1818 if (!skb) { 1827 1819 /* probably for XDP */ 1828 - if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) { 1829 - /* do not return page to cache, 1830 - * it will be returned on XDP_TX completion. 1831 - */ 1832 - wi->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE); 1833 - } 1820 + if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) 1821 + wi->frag_page->frags++; 1834 1822 goto wq_cyc_pop; 1835 1823 } 1836 1824 ··· 1872 1868 rq, wi, cqe, cqe_bcnt); 1873 1869 if (!skb) { 1874 1870 /* probably for XDP */ 1875 - if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) { 1876 - /* do not return page to cache, 1877 - * it will be returned on XDP_TX completion. 
1878 - */ 1879 - wi->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE); 1880 - } 1871 + if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) 1872 + wi->frag_page->frags++; 1881 1873 goto wq_cyc_pop; 1882 1874 } 1883 1875 ··· 2052 2052 if (prog) { 2053 2053 if (mlx5e_xdp_handle(rq, prog, &mxbuf)) { 2054 2054 if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) { 2055 - int i; 2055 + struct mlx5e_frag_page *pfp; 2056 2056 2057 - for (i = 0; i < sinfo->nr_frags; i++) 2058 - /* non-atomic */ 2059 - __set_bit(page_idx + i, wi->skip_release_bitmap); 2060 - return NULL; 2057 + for (pfp = head_page; pfp < frag_page; pfp++) 2058 + pfp->frags++; 2059 + 2060 + wi->linear_page.frags++; 2061 2061 } 2062 2062 mlx5e_page_release_fragmented(rq, &wi->linear_page); 2063 2063 return NULL; /* page/packet was consumed by XDP */ ··· 2155 2155 cqe_bcnt, &mxbuf); 2156 2156 if (mlx5e_xdp_handle(rq, prog, &mxbuf)) { 2157 2157 if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) 2158 - __set_bit(page_idx, wi->skip_release_bitmap); /* non-atomic */ 2158 + frag_page->frags++; 2159 2159 return NULL; /* page/packet was consumed by XDP */ 2160 2160 } 2161 2161
+3 -3
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 1639 1639 uplink_priv = &rpriv->uplink_priv; 1640 1640 1641 1641 mutex_lock(&uplink_priv->unready_flows_lock); 1642 - unready_flow_del(flow); 1642 + if (flow_flag_test(flow, NOT_READY)) 1643 + unready_flow_del(flow); 1643 1644 mutex_unlock(&uplink_priv->unready_flows_lock); 1644 1645 } 1645 1646 ··· 1933 1932 esw_attr = attr->esw_attr; 1934 1933 mlx5e_put_flow_tunnel_id(flow); 1935 1934 1936 - if (flow_flag_test(flow, NOT_READY)) 1937 - remove_unready_flow(flow); 1935 + remove_unready_flow(flow); 1938 1936 1939 1937 if (mlx5e_is_offloaded_flow(flow)) { 1940 1938 if (flow_flag_test(flow, SLOW))
+3
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
··· 807 807 hca_caps = MLX5_ADDR_OF(query_hca_cap_out, query_ctx, capability); 808 808 vport->info.roce_enabled = MLX5_GET(cmd_hca_cap, hca_caps, roce); 809 809 810 + if (!MLX5_CAP_GEN_MAX(esw->dev, hca_cap_2)) 811 + goto out_free; 812 + 810 813 memset(query_ctx, 0, query_out_sz); 811 814 err = mlx5_vport_get_other_func_cap(esw->dev, vport->vport, query_ctx, 812 815 MLX5_CAP_GENERAL_2);
+12 -7
drivers/net/ethernet/mellanox/mlx5/core/thermal.c
··· 68 68 69 69 int mlx5_thermal_init(struct mlx5_core_dev *mdev) 70 70 { 71 + char data[THERMAL_NAME_LENGTH]; 71 72 struct mlx5_thermal *thermal; 72 - struct thermal_zone_device *tzd; 73 - const char *data = "mlx5"; 73 + int err; 74 74 75 - tzd = thermal_zone_get_zone_by_name(data); 76 - if (!IS_ERR(tzd)) 75 + if (!mlx5_core_is_pf(mdev) && !mlx5_core_is_ecpf(mdev)) 77 76 return 0; 77 + 78 + err = snprintf(data, sizeof(data), "mlx5_%s", dev_name(mdev->device)); 79 + if (err < 0 || err >= sizeof(data)) { 80 + mlx5_core_err(mdev, "Failed to setup thermal zone name, %d\n", err); 81 + return -EINVAL; 82 + } 78 83 79 84 thermal = kzalloc(sizeof(*thermal), GFP_KERNEL); 80 85 if (!thermal) ··· 94 89 &mlx5_thermal_ops, 95 90 NULL, 0, MLX5_THERMAL_POLL_INT_MSEC); 96 91 if (IS_ERR(thermal->tzdev)) { 97 - dev_err(mdev->device, "Failed to register thermal zone device (%s) %ld\n", 98 - data, PTR_ERR(thermal->tzdev)); 92 + err = PTR_ERR(thermal->tzdev); 93 + mlx5_core_err(mdev, "Failed to register thermal zone device (%s) %d\n", data, err); 99 94 kfree(thermal); 100 - return -EINVAL; 95 + return err; 101 96 } 102 97 103 98 mdev->thermal = thermal;
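The thermal.c hunk above builds the zone name with `snprintf()` and rejects truncation by checking the return value: `snprintf()` returns the length it *would* have written, so a negative value or one greater than or equal to the buffer size means the name did not fit. A sketch of that check, using a hypothetical helper name for illustration:

```c
#include <assert.h>
#include <stdio.h>

/* Returns 1 if "mlx5_<dev_name>" fits in a buffer of `len` bytes
 * (including the NUL terminator), 0 otherwise. */
static int zone_name_fits(const char *dev_name, size_t len)
{
	char buf[64];
	int n;

	if (len > sizeof(buf))
		len = sizeof(buf);

	n = snprintf(buf, len, "mlx5_%s", dev_name);

	/* mirrors the hunk's `if (err < 0 || err >= sizeof(data))` check */
	return !(n < 0 || (size_t)n >= len);
}
```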
+1 -1
drivers/net/ethernet/microchip/Kconfig
··· 46 46 tristate "LAN743x support" 47 47 depends on PCI 48 48 depends on PTP_1588_CLOCK_OPTIONAL 49 - select PHYLIB 49 + select FIXED_PHY 50 50 select CRC16 51 51 select CRC32 52 52 help
-1
drivers/net/ethernet/mscc/ocelot.c
··· 2927 2927 2928 2928 mutex_init(&ocelot->mact_lock); 2929 2929 mutex_init(&ocelot->fwd_domain_lock); 2930 - mutex_init(&ocelot->tas_lock); 2931 2930 spin_lock_init(&ocelot->ptp_clock_lock); 2932 2931 spin_lock_init(&ocelot->ts_id_lock); 2933 2932
+7 -7
drivers/net/ethernet/mscc/ocelot_mm.c
··· 67 67 val = mm->preemptible_tcs; 68 68 69 69 /* Cut through switching doesn't work for preemptible priorities, 70 - * so first make sure it is disabled. 70 + * so first make sure it is disabled. Also, changing the preemptible 71 + * TCs affects the oversized frame dropping logic, so that needs to be 72 + * re-triggered. And since tas_guard_bands_update() also implicitly 73 + * calls cut_through_fwd(), we don't need to explicitly call it. 71 74 */ 72 75 mm->active_preemptible_tcs = val; 73 - ocelot->ops->cut_through_fwd(ocelot); 76 + ocelot->ops->tas_guard_bands_update(ocelot, port); 74 77 75 78 dev_dbg(ocelot->dev, 76 79 "port %d %s/%s, MM TX %s, preemptible TCs 0x%x, active 0x%x\n", ··· 92 89 { 93 90 struct ocelot_mm_state *mm = &ocelot->mm[port]; 94 91 95 - mutex_lock(&ocelot->fwd_domain_lock); 92 + lockdep_assert_held(&ocelot->fwd_domain_lock); 96 93 97 94 if (mm->preemptible_tcs == preemptible_tcs) 98 - goto out_unlock; 95 + return; 99 96 100 97 mm->preemptible_tcs = preemptible_tcs; 101 98 102 99 ocelot_port_update_active_preemptible_tcs(ocelot, port); 103 - 104 - out_unlock: 105 - mutex_unlock(&ocelot->fwd_domain_lock); 106 100 } 107 101 108 102 static void ocelot_mm_update_port_status(struct ocelot *ocelot, int port)
-6
drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
··· 353 353 ionic_reset(ionic); 354 354 err_out_teardown: 355 355 ionic_dev_teardown(ionic); 356 - pci_clear_master(pdev); 357 - /* Don't fail the probe for these errors, keep 358 - * the hw interface around for inspection 359 - */ 360 - return 0; 361 - 362 356 err_out_unmap_bars: 363 357 ionic_unmap_bars(ionic); 364 358 err_out_pci_release_regions:
-5
drivers/net/ethernet/pensando/ionic/ionic_lif.c
··· 475 475 static void ionic_link_qcq_interrupts(struct ionic_qcq *src_qcq, 476 476 struct ionic_qcq *n_qcq) 477 477 { 478 - if (WARN_ON(n_qcq->flags & IONIC_QCQ_F_INTR)) { 479 - ionic_intr_free(n_qcq->cq.lif->ionic, n_qcq->intr.index); 480 - n_qcq->flags &= ~IONIC_QCQ_F_INTR; 481 - } 482 - 483 478 n_qcq->intr.vector = src_qcq->intr.vector; 484 479 n_qcq->intr.index = src_qcq->intr.index; 485 480 n_qcq->napi_qcq = src_qcq->napi_qcq;
-3
drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
··· 186 186 if (eeprom_ptrs) 187 187 kvfree(eeprom_ptrs); 188 188 189 - if (*checksum > TXGBE_EEPROM_SUM) 190 - return -EINVAL; 191 - 192 189 *checksum = TXGBE_EEPROM_SUM - *checksum; 193 190 194 191 return 0;
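The txgbe hunk above computes `TXGBE_EEPROM_SUM - *checksum`, the usual EEPROM complement-checksum scheme: the stored checksum word is chosen so that all words sum (mod 2^16) to a fixed magic constant. A sketch of that arithmetic; the `0xBABA` magic below is an assumed value for illustration, not necessarily the driver's constant:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define EEPROM_MAGIC_SUM 0xBABA /* assumed magic target, for illustration */

/* Compute the checksum word that makes all words sum to the magic. */
static uint16_t eeprom_checksum_word(const uint16_t *words, size_t n)
{
	uint16_t sum = 0;

	for (size_t i = 0; i < n; i++)
		sum += words[i]; /* 16-bit wraparound is intended */

	return (uint16_t)(EEPROM_MAGIC_SUM - sum);
}
```

Verification is the inverse: summing all data words plus the stored checksum word must reproduce the magic constant.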
+3 -6
drivers/net/netdevsim/dev.c
··· 184 184 cookie_len = (count - 1) / 2; 185 185 if ((count - 1) % 2) 186 186 return -EINVAL; 187 - buf = kmalloc(count, GFP_KERNEL | __GFP_NOWARN); 188 - if (!buf) 189 - return -ENOMEM; 190 187 191 - ret = simple_write_to_buffer(buf, count, ppos, data, count); 192 - if (ret < 0) 193 - goto free_buf; 188 + buf = memdup_user(data, count); 189 + if (IS_ERR(buf)) 190 + return PTR_ERR(buf); 194 191 195 192 fa_cookie = kmalloc(sizeof(*fa_cookie) + cookie_len, 196 193 GFP_KERNEL | __GFP_NOWARN);
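The netdevsim hunk above keeps the existing length check before switching the copy to `memdup_user()`: the written buffer is a hex string with a trailing newline, so `count - 1` must be an even number of hex digits, yielding `(count - 1) / 2` cookie bytes. A sketch of that check as a hypothetical standalone helper:

```c
#include <assert.h>
#include <stddef.h>

/* Derive the binary cookie length from the number of bytes written:
 * a hex string plus one trailing newline. Returns -22 (-EINVAL) when
 * the digit count is odd or the write is empty. */
static long cookie_len_from_count(size_t count)
{
	if (count < 1 || (count - 1) % 2)
		return -22;

	return (long)((count - 1) / 2);
}
```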
+4 -1
drivers/net/wireless/cisco/airo.c
··· 6157 6157 struct iw_param *vwrq = &wrqu->bitrate; 6158 6158 struct airo_info *local = dev->ml_priv; 6159 6159 StatusRid status_rid; /* Card status info */ 6160 + int ret; 6160 6161 6161 - readStatusRid(local, &status_rid, 1); 6162 + ret = readStatusRid(local, &status_rid, 1); 6163 + if (ret) 6164 + return -EBUSY; 6162 6165 6163 6166 vwrq->value = le16_to_cpu(status_rid.currentXmitRate) * 500000; 6164 6167 /* If more than one rate, set auto */
-5
drivers/net/wireless/intel/iwlwifi/cfg/22000.c
··· 84 84 .mac_addr_from_csr = 0x380, \ 85 85 .ht_params = &iwl_22000_ht_params, \ 86 86 .nvm_ver = IWL_22000_NVM_VERSION, \ 87 - .trans.use_tfh = true, \ 88 87 .trans.rf_id = true, \ 89 88 .trans.gen2 = true, \ 90 89 .nvm_type = IWL_NVM_EXT, \ ··· 121 122 122 123 const struct iwl_cfg_trans_params iwl_qu_trans_cfg = { 123 124 .mq_rx_supported = true, 124 - .use_tfh = true, 125 125 .rf_id = true, 126 126 .gen2 = true, 127 127 .device_family = IWL_DEVICE_FAMILY_22000, ··· 132 134 133 135 const struct iwl_cfg_trans_params iwl_qu_medium_latency_trans_cfg = { 134 136 .mq_rx_supported = true, 135 - .use_tfh = true, 136 137 .rf_id = true, 137 138 .gen2 = true, 138 139 .device_family = IWL_DEVICE_FAMILY_22000, ··· 143 146 144 147 const struct iwl_cfg_trans_params iwl_qu_long_latency_trans_cfg = { 145 148 .mq_rx_supported = true, 146 - .use_tfh = true, 147 149 .rf_id = true, 148 150 .gen2 = true, 149 151 .device_family = IWL_DEVICE_FAMILY_22000, ··· 196 200 .device_family = IWL_DEVICE_FAMILY_22000, 197 201 .base_params = &iwl_22000_base_params, 198 202 .mq_rx_supported = true, 199 - .use_tfh = true, 200 203 .rf_id = true, 201 204 .gen2 = true, 202 205 .bisr_workaround = 1,
-2
drivers/net/wireless/intel/iwlwifi/iwl-config.h
··· 256 256 * @xtal_latency: power up latency to get the xtal stabilized 257 257 * @extra_phy_cfg_flags: extra configuration flags to pass to the PHY 258 258 * @rf_id: need to read rf_id to determine the firmware image 259 - * @use_tfh: use TFH 260 259 * @gen2: 22000 and on transport operation 261 260 * @mq_rx_supported: multi-queue rx support 262 261 * @integrated: discrete or integrated ··· 270 271 u32 xtal_latency; 271 272 u32 extra_phy_cfg_flags; 272 273 u32 rf_id:1, 273 - use_tfh:1, 274 274 gen2:1, 275 275 mq_rx_supported:1, 276 276 integrated:1,
+2 -2
drivers/net/wireless/intel/iwlwifi/iwl-fh.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ 2 2 /* 3 - * Copyright (C) 2005-2014, 2018-2021 Intel Corporation 3 + * Copyright (C) 2005-2014, 2018-2021, 2023 Intel Corporation 4 4 * Copyright (C) 2015-2017 Intel Deutschland GmbH 5 5 */ 6 6 #ifndef __iwl_fh_h__ ··· 71 71 static inline unsigned int FH_MEM_CBBC_QUEUE(struct iwl_trans *trans, 72 72 unsigned int chnl) 73 73 { 74 - if (trans->trans_cfg->use_tfh) { 74 + if (trans->trans_cfg->gen2) { 75 75 WARN_ON_ONCE(chnl >= 64); 76 76 return TFH_TFDQ_CBB_TABLE + 8 * chnl; 77 77 }
+3 -3
drivers/net/wireless/intel/iwlwifi/iwl-trans.c
··· 2 2 /* 3 3 * Copyright (C) 2015 Intel Mobile Communications GmbH 4 4 * Copyright (C) 2016-2017 Intel Deutschland GmbH 5 - * Copyright (C) 2019-2021 Intel Corporation 5 + * Copyright (C) 2019-2021, 2023 Intel Corporation 6 6 */ 7 7 #include <linux/kernel.h> 8 8 #include <linux/bsearch.h> ··· 42 42 43 43 WARN_ON(!ops->wait_txq_empty && !ops->wait_tx_queues_empty); 44 44 45 - if (trans->trans_cfg->use_tfh) { 45 + if (trans->trans_cfg->gen2) { 46 46 trans->txqs.tfd.addr_size = 64; 47 47 trans->txqs.tfd.max_tbs = IWL_TFH_NUM_TBS; 48 48 trans->txqs.tfd.size = sizeof(struct iwl_tfh_tfd); ··· 101 101 102 102 /* Some things must not change even if the config does */ 103 103 WARN_ON(trans->txqs.tfd.addr_size != 104 - (trans->trans_cfg->use_tfh ? 64 : 36)); 104 + (trans->trans_cfg->gen2 ? 64 : 36)); 105 105 106 106 snprintf(trans->dev_cmd_pool_name, sizeof(trans->dev_cmd_pool_name), 107 107 "iwl_cmd_pool:%s", dev_name(trans->dev));
+1 -1
drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
··· 1450 1450 static inline bool iwl_mvm_has_new_tx_api(struct iwl_mvm *mvm) 1451 1451 { 1452 1452 /* TODO - replace with TLV once defined */ 1453 - return mvm->trans->trans_cfg->use_tfh; 1453 + return mvm->trans->trans_cfg->gen2; 1454 1454 } 1455 1455 1456 1456 static inline bool iwl_mvm_has_unified_ucode(struct iwl_mvm *mvm)
+2 -2
drivers/net/wireless/intel/iwlwifi/pcie/trans.c
··· 819 819 820 820 iwl_enable_interrupts(trans); 821 821 822 - if (trans->trans_cfg->use_tfh) { 822 + if (trans->trans_cfg->gen2) { 823 823 if (cpu == 1) 824 824 iwl_write_prph(trans, UREG_UCODE_LOAD_STATUS, 825 825 0xFFFF); ··· 3394 3394 u8 tfdidx; 3395 3395 u32 caplen, cmdlen; 3396 3396 3397 - if (trans->trans_cfg->use_tfh) 3397 + if (trans->trans_cfg->gen2) 3398 3398 tfdidx = idx; 3399 3399 else 3400 3400 tfdidx = ptr;
+1 -1
drivers/net/wireless/intel/iwlwifi/pcie/tx.c
··· 364 364 for (txq_id = 0; txq_id < trans->trans_cfg->base_params->num_of_queues; 365 365 txq_id++) { 366 366 struct iwl_txq *txq = trans->txqs.txq[txq_id]; 367 - if (trans->trans_cfg->use_tfh) 367 + if (trans->trans_cfg->gen2) 368 368 iwl_write_direct64(trans, 369 369 FH_MEM_CBBC_QUEUE(trans, txq_id), 370 370 txq->dma_addr);
+5 -5
drivers/net/wireless/intel/iwlwifi/queue/tx.c
··· 985 985 bool active; 986 986 u8 fifo; 987 987 988 - if (trans->trans_cfg->use_tfh) { 988 + if (trans->trans_cfg->gen2) { 989 989 IWL_ERR(trans, "Queue %d is stuck %d %d\n", txq_id, 990 990 txq->read_ptr, txq->write_ptr); 991 991 /* TODO: access new SCD registers and dump them */ ··· 1040 1040 if (WARN_ON(txq->entries || txq->tfds)) 1041 1041 return -EINVAL; 1042 1042 1043 - if (trans->trans_cfg->use_tfh) 1043 + if (trans->trans_cfg->gen2) 1044 1044 tfd_sz = trans->txqs.tfd.size * slots_num; 1045 1045 1046 1046 timer_setup(&txq->stuck_timer, iwl_txq_stuck_timer, 0); ··· 1347 1347 dma_addr_t addr; 1348 1348 dma_addr_t hi_len; 1349 1349 1350 - if (trans->trans_cfg->use_tfh) { 1350 + if (trans->trans_cfg->gen2) { 1351 1351 struct iwl_tfh_tfd *tfh_tfd = _tfd; 1352 1352 struct iwl_tfh_tb *tfh_tb = &tfh_tfd->tbs[idx]; 1353 1353 ··· 1408 1408 1409 1409 meta->tbs = 0; 1410 1410 1411 - if (trans->trans_cfg->use_tfh) { 1411 + if (trans->trans_cfg->gen2) { 1412 1412 struct iwl_tfh_tfd *tfd_fh = (void *)tfd; 1413 1413 1414 1414 tfd_fh->num_tbs = 0; ··· 1625 1625 1626 1626 txq->entries[read_ptr].skb = NULL; 1627 1627 1628 - if (!trans->trans_cfg->use_tfh) 1628 + if (!trans->trans_cfg->gen2) 1629 1629 iwl_txq_gen1_inval_byte_cnt_tbl(trans, txq); 1630 1630 1631 1631 iwl_txq_free_tfd(trans, txq);
+4 -4
drivers/net/wireless/intel/iwlwifi/queue/tx.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ 2 2 /* 3 - * Copyright (C) 2020-2022 Intel Corporation 3 + * Copyright (C) 2020-2023 Intel Corporation 4 4 */ 5 5 #ifndef __iwl_trans_queue_tx_h__ 6 6 #define __iwl_trans_queue_tx_h__ ··· 38 38 static inline void *iwl_txq_get_tfd(struct iwl_trans *trans, 39 39 struct iwl_txq *txq, int idx) 40 40 { 41 - if (trans->trans_cfg->use_tfh) 41 + if (trans->trans_cfg->gen2) 42 42 idx = iwl_txq_get_cmd_index(txq, idx); 43 43 44 44 return (u8 *)txq->tfds + trans->txqs.tfd.size * idx; ··· 135 135 { 136 136 struct iwl_tfd *tfd; 137 137 138 - if (trans->trans_cfg->use_tfh) { 138 + if (trans->trans_cfg->gen2) { 139 139 struct iwl_tfh_tfd *tfh_tfd = _tfd; 140 140 141 141 return le16_to_cpu(tfh_tfd->num_tbs) & 0x1f; ··· 151 151 struct iwl_tfd *tfd; 152 152 struct iwl_tfd_tb *tb; 153 153 154 - if (trans->trans_cfg->use_tfh) { 154 + if (trans->trans_cfg->gen2) { 155 155 struct iwl_tfh_tfd *tfh_tfd = _tfd; 156 156 struct iwl_tfh_tb *tfh_tb = &tfh_tfd->tbs[idx]; 157 157
-4
drivers/net/wireless/mediatek/mt76/mt7921/dma.c
··· 231 231 if (ret) 232 232 return ret; 233 233 234 - ret = mt7921_wfsys_reset(dev); 235 - if (ret) 236 - return ret; 237 - 238 234 /* init tx queue */ 239 235 ret = mt76_connac_init_tx_queues(dev->phy.mt76, MT7921_TXQ_BAND0, 240 236 MT7921_TX_RING_SIZE,
-8
drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
··· 476 476 { 477 477 int ret; 478 478 479 - ret = mt76_get_field(dev, MT_CONN_ON_MISC, MT_TOP_MISC2_FW_N9_RDY); 480 - if (ret && mt76_is_mmio(&dev->mt76)) { 481 - dev_dbg(dev->mt76.dev, "Firmware is already download\n"); 482 - goto fw_loaded; 483 - } 484 - 485 479 ret = mt76_connac2_load_patch(&dev->mt76, mt7921_patch_name(dev)); 486 480 if (ret) 487 481 return ret; ··· 497 503 498 504 return -EIO; 499 505 } 500 - 501 - fw_loaded: 502 506 503 507 #ifdef CONFIG_PM 504 508 dev->mt76.hw->wiphy->wowlan = &mt76_connac_wowlan_support;
+8
drivers/net/wireless/mediatek/mt76/mt7921/pci.c
··· 325 325 bus_ops->rmw = mt7921_rmw; 326 326 dev->mt76.bus = bus_ops; 327 327 328 + ret = mt7921e_mcu_fw_pmctrl(dev); 329 + if (ret) 330 + goto err_free_dev; 331 + 328 332 ret = __mt7921e_mcu_drv_pmctrl(dev); 329 333 if (ret) 330 334 goto err_free_dev; ··· 336 332 mdev->rev = (mt7921_l1_rr(dev, MT_HW_CHIPID) << 16) | 337 333 (mt7921_l1_rr(dev, MT_HW_REV) & 0xff); 338 334 dev_info(mdev->dev, "ASIC revision: %04x\n", mdev->rev); 335 + 336 + ret = mt7921_wfsys_reset(dev); 337 + if (ret) 338 + goto err_free_dev; 339 339 340 340 mt76_wr(dev, MT_WFDMA0_HOST_INT_ENA, 0); 341 341
+3 -2
drivers/net/wireless/realtek/rtw89/debug.c
··· 3026 3026 struct rtw89_debugfs_priv *debugfs_priv = filp->private_data; 3027 3027 struct rtw89_dev *rtwdev = debugfs_priv->rtwdev; 3028 3028 u8 *h2c; 3029 + int ret; 3029 3030 u16 h2c_len = count / 2; 3030 3031 3031 3032 h2c = rtw89_hex2bin_user(rtwdev, user_buf, count); 3032 3033 if (IS_ERR(h2c)) 3033 3034 return -EFAULT; 3034 3035 3035 - rtw89_fw_h2c_raw(rtwdev, h2c, h2c_len); 3036 + ret = rtw89_fw_h2c_raw(rtwdev, h2c, h2c_len); 3036 3037 3037 3038 kfree(h2c); 3038 3039 3039 - return count; 3040 + return ret ? ret : count; 3040 3041 } 3041 3042 3042 3043 static int
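The rtw89 hunk above fixes a common debugfs write-handler bug: the handler returned `count` even when the underlying operation failed. The convention is to return the consumed byte count on success and the negative errno on failure, which the hunk expresses as `return ret ? ret : count;`. A sketch of that selection, with a hypothetical helper name:

```c
#include <assert.h>

/* Pick a write handler's return value: a negative errno from the
 * underlying operation wins over the consumed byte count. */
static long write_result(int ret, long count)
{
	return ret ? ret : count;
}
```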
+33 -3
drivers/nvme/host/core.c
··· 3431 3431 3432 3432 ret = nvme_global_check_duplicate_ids(ctrl->subsys, &info->ids); 3433 3433 if (ret) { 3434 - dev_err(ctrl->device, 3435 - "globally duplicate IDs for nsid %d\n", info->nsid); 3434 + /* 3435 + * We've found two different namespaces on two different 3436 + * subsystems that report the same ID. This is pretty nasty 3437 + * for anything that actually requires unique device 3438 + * identification. In the kernel we need this for multipathing, 3439 + * and in user space the /dev/disk/by-id/ links rely on it. 3440 + * 3441 + * If the device also claims to be multi-path capable back off 3442 + * here now and refuse the probe the second device as this is a 3443 + * recipe for data corruption. If not this is probably a 3444 + * cheap consumer device if on the PCIe bus, so let the user 3445 + * proceed and use the shiny toy, but warn that with changing 3446 + * probing order (which due to our async probing could just be 3447 + * device taking longer to startup) the other device could show 3448 + * up at any time. 3449 + */ 3436 3450 nvme_print_device_info(ctrl); 3437 - return ret; 3451 + if ((ns->ctrl->ops->flags & NVME_F_FABRICS) || /* !PCIe */ 3452 + ((ns->ctrl->subsys->cmic & NVME_CTRL_CMIC_MULTI_CTRL) && 3453 + info->is_shared)) { 3454 + dev_err(ctrl->device, 3455 + "ignoring nsid %d because of duplicate IDs\n", 3456 + info->nsid); 3457 + return ret; 3458 + } 3459 + 3460 + dev_err(ctrl->device, 3461 + "clearing duplicate IDs for nsid %d\n", info->nsid); 3462 + dev_err(ctrl->device, 3463 + "use of /dev/disk/by-id/ may cause data corruption\n"); 3464 + memset(&info->ids.nguid, 0, sizeof(info->ids.nguid)); 3465 + memset(&info->ids.uuid, 0, sizeof(info->ids.uuid)); 3466 + memset(&info->ids.eui64, 0, sizeof(info->ids.eui64)); 3467 + ctrl->quirks |= NVME_QUIRK_BOGUS_NID; 3438 3468 } 3439 3469 3440 3470 mutex_lock(&ctrl->subsys->lock);
+1 -1
drivers/nvme/host/fault_inject.c
··· 27 27 28 28 /* create debugfs directory and attribute */ 29 29 parent = debugfs_create_dir(dev_name, NULL); 30 - if (!parent) { 30 + if (IS_ERR(parent)) { 31 31 pr_warn("%s: failed to create debugfs directory\n", dev_name); 32 32 return; 33 33 }
+30 -7
drivers/nvme/host/fc.c
··· 2548 2548 * the controller. Abort any ios on the association and let the 2549 2549 * create_association error path resolve things. 2550 2550 */ 2551 - if (ctrl->ctrl.state == NVME_CTRL_CONNECTING) { 2552 - __nvme_fc_abort_outstanding_ios(ctrl, true); 2551 + enum nvme_ctrl_state state; 2552 + unsigned long flags; 2553 + 2554 + spin_lock_irqsave(&ctrl->lock, flags); 2555 + state = ctrl->ctrl.state; 2556 + if (state == NVME_CTRL_CONNECTING) { 2553 2557 set_bit(ASSOC_FAILED, &ctrl->flags); 2558 + spin_unlock_irqrestore(&ctrl->lock, flags); 2559 + __nvme_fc_abort_outstanding_ios(ctrl, true); 2560 + dev_warn(ctrl->ctrl.device, 2561 + "NVME-FC{%d}: transport error during (re)connect\n", 2562 + ctrl->cnum); 2554 2563 return; 2555 2564 } 2565 + spin_unlock_irqrestore(&ctrl->lock, flags); 2556 2566 2557 2567 /* Otherwise, only proceed if in LIVE state - e.g. on first error */ 2558 - if (ctrl->ctrl.state != NVME_CTRL_LIVE) 2568 + if (state != NVME_CTRL_LIVE) 2559 2569 return; 2560 2570 2561 2571 dev_warn(ctrl->ctrl.device, ··· 3120 3110 */ 3121 3111 3122 3112 ret = nvme_enable_ctrl(&ctrl->ctrl); 3123 - if (ret || test_bit(ASSOC_FAILED, &ctrl->flags)) 3113 + if (!ret && test_bit(ASSOC_FAILED, &ctrl->flags)) 3114 + ret = -EIO; 3115 + if (ret) 3124 3116 goto out_disconnect_admin_queue; 3125 3117 3126 3118 ctrl->ctrl.max_segments = ctrl->lport->ops->max_sgl_segments; ··· 3132 3120 nvme_unquiesce_admin_queue(&ctrl->ctrl); 3133 3121 3134 3122 ret = nvme_init_ctrl_finish(&ctrl->ctrl, false); 3135 - if (ret || test_bit(ASSOC_FAILED, &ctrl->flags)) 3123 + if (!ret && test_bit(ASSOC_FAILED, &ctrl->flags)) 3124 + ret = -EIO; 3125 + if (ret) 3136 3126 goto out_disconnect_admin_queue; 3137 3127 3138 3128 /* sanity checks */ ··· 3179 3165 else 3180 3166 ret = nvme_fc_recreate_io_queues(ctrl); 3181 3167 } 3182 - if (ret || test_bit(ASSOC_FAILED, &ctrl->flags)) 3183 - goto out_term_aen_ops; 3184 3168 3169 + spin_lock_irqsave(&ctrl->lock, flags); 3170 + if (!ret && test_bit(ASSOC_FAILED, 
··· 2548 2548 * the controller. Abort any ios on the association and let the 2549 2549 * create_association error path resolve things. 2550 2550 */ 2551 - if (ctrl->ctrl.state == NVME_CTRL_CONNECTING) { 2552 - __nvme_fc_abort_outstanding_ios(ctrl, true); 2551 + enum nvme_ctrl_state state; 2552 + unsigned long flags; 2553 + 2554 + spin_lock_irqsave(&ctrl->lock, flags); 2555 + state = ctrl->ctrl.state; 2556 + if (state == NVME_CTRL_CONNECTING) { 2553 2557 set_bit(ASSOC_FAILED, &ctrl->flags); 2558 + spin_unlock_irqrestore(&ctrl->lock, flags); 2559 + __nvme_fc_abort_outstanding_ios(ctrl, true); 2560 + dev_warn(ctrl->ctrl.device, 2561 + "NVME-FC{%d}: transport error during (re)connect\n", 2562 + ctrl->cnum); 2554 2563 return; 2555 2564 } 2565 + spin_unlock_irqrestore(&ctrl->lock, flags); 2556 2566 2557 2567 /* Otherwise, only proceed if in LIVE state - e.g. on first error */ 2558 - if (ctrl->ctrl.state != NVME_CTRL_LIVE) 2568 + if (state != NVME_CTRL_LIVE) 2559 2569 return; 2560 2570 2561 2571 dev_warn(ctrl->ctrl.device, ··· 3120 3110 */ 3121 3111 3122 3112 ret = nvme_enable_ctrl(&ctrl->ctrl); 3123 - if (ret || test_bit(ASSOC_FAILED, &ctrl->flags)) 3113 + if (!ret && test_bit(ASSOC_FAILED, &ctrl->flags)) 3114 + ret = -EIO; 3115 + if (ret) 3124 3116 goto out_disconnect_admin_queue; 3125 3117 3126 3118 ctrl->ctrl.max_segments = ctrl->lport->ops->max_sgl_segments; ··· 3132 3120 nvme_unquiesce_admin_queue(&ctrl->ctrl); 3133 3121 3134 3122 ret = nvme_init_ctrl_finish(&ctrl->ctrl, false); 3135 - if (ret || test_bit(ASSOC_FAILED, &ctrl->flags)) 3123 + if (!ret && test_bit(ASSOC_FAILED, &ctrl->flags)) 3124 + ret = -EIO; 3125 + if (ret) 3136 3126 goto out_disconnect_admin_queue; 3137 3127 3138 3128 /* sanity checks */ ··· 3179 3165 else 3180 3166 ret = nvme_fc_recreate_io_queues(ctrl); 3181 3167 } 3182 - if (ret || test_bit(ASSOC_FAILED, &ctrl->flags)) 3183 - goto out_term_aen_ops; 3184 3168 3169 + spin_lock_irqsave(&ctrl->lock, flags); 3170 + if (!ret && test_bit(ASSOC_FAILED, &ctrl->flags)) 3171 + ret = -EIO; 3172 + if (ret) { 3173 + spin_unlock_irqrestore(&ctrl->lock, flags); 3174 + goto out_term_aen_ops; 3175 + } 3185 3176 changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE); 3177 + spin_unlock_irqrestore(&ctrl->lock, flags); 3186 3178 3187 3179 ctrl->ctrl.nr_reconnects = 0; ··· 3200 3180 out_term_aen_ops: 3201 3181 nvme_fc_term_aen_ops(ctrl); 3202 3182 out_disconnect_admin_queue: 3183 + dev_warn(ctrl->ctrl.device, 3184 + "NVME-FC{%d}: create_assoc failed, assoc_id %llx ret %d\n", 3185 + ctrl->cnum, ctrl->association_id, ret); 3203 3186 /* send a Disconnect(association) LS to fc-nvme target */ 3204 3187 nvme_fc_xmt_disconnect_assoc(ctrl); 3205 3188 spin_lock_irqsave(&ctrl->lock, flags);
+20 -9
drivers/nvme/host/pci.c
··· 967 967 struct nvme_iod *iod = blk_mq_rq_to_pdu(req); 968 968 969 969 dma_unmap_page(dev->dev, iod->meta_dma, 970 - rq_integrity_vec(req)->bv_len, rq_data_dir(req)); 970 + rq_integrity_vec(req)->bv_len, rq_dma_dir(req)); 971 971 } 972 972 973 973 if (blk_rq_nr_phys_segments(req)) ··· 1298 1298 */ 1299 1299 if (nvme_should_reset(dev, csts)) { 1300 1300 nvme_warn_reset(dev, csts); 1301 - nvme_dev_disable(dev, false); 1302 - nvme_reset_ctrl(&dev->ctrl); 1303 - return BLK_EH_DONE; 1301 + goto disable; 1304 1302 } 1305 1303 1306 1304 /* ··· 1349 1351 "I/O %d QID %d timeout, reset controller\n", 1350 1352 req->tag, nvmeq->qid); 1351 1353 nvme_req(req)->flags |= NVME_REQ_CANCELLED; 1352 - nvme_dev_disable(dev, false); 1353 - nvme_reset_ctrl(&dev->ctrl); 1354 - 1355 - return BLK_EH_DONE; 1354 + goto disable; 1356 1355 } 1357 1356 1358 1357 if (atomic_dec_return(&dev->ctrl.abort_limit) < 0) { ··· 1386 1391 * as the device then is in a faulty state. 1387 1392 */ 1388 1393 return BLK_EH_RESET_TIMER; 1394 + 1395 + disable: 1396 + if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_RESETTING)) 1397 + return BLK_EH_DONE; 1398 + 1399 + nvme_dev_disable(dev, false); 1400 + if (nvme_try_sched_reset(&dev->ctrl)) 1401 + nvme_unquiesce_io_queues(&dev->ctrl); 1402 + return BLK_EH_DONE; 1389 1403 } 1390 1404 1391 1405 static void nvme_free_queue(struct nvme_queue *nvmeq) ··· 3282 3278 case pci_channel_io_frozen: 3283 3279 dev_warn(dev->ctrl.device, 3284 3280 "frozen state error detected, reset controller\n"); 3281 + if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_RESETTING)) { 3282 + nvme_dev_disable(dev, true); 3283 + return PCI_ERS_RESULT_DISCONNECT; 3284 + } 3285 3285 nvme_dev_disable(dev, false); 3286 3286 return PCI_ERS_RESULT_NEED_RESET; 3287 3287 case pci_channel_io_perm_failure: ··· 3302 3294 3303 3295 dev_info(dev->ctrl.device, "restart after slot reset\n"); 3304 3296 pci_restore_state(pdev); 3305 - nvme_reset_ctrl(&dev->ctrl); 3297 + if (!nvme_try_sched_reset(&dev->ctrl)) 
3298 + nvme_unquiesce_io_queues(&dev->ctrl); 3306 3299 return PCI_ERS_RESULT_RECOVERED; 3307 3300 } 3308 3301 ··· 3405 3396 .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, }, 3406 3397 { PCI_DEVICE(0x144d, 0xa809), /* Samsung MZALQ256HBJD 256G */ 3407 3398 .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, }, 3399 + { PCI_DEVICE(0x144d, 0xa802), /* Samsung SM953 */ 3400 + .driver_data = NVME_QUIRK_BOGUS_NID, }, 3408 3401 { PCI_DEVICE(0x1cc4, 0x6303), /* UMIS RPJTJ512MGE1QDY 512G */ 3409 3402 .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, }, 3410 3403 { PCI_DEVICE(0x1cc4, 0x6302), /* UMIS RPJTJ256MGE1QDY 256G */
+1 -1
drivers/nvme/host/sysfs.c
··· 92 92 * we have no UUID set 93 93 */ 94 94 if (uuid_is_null(&ids->uuid)) { 95 - dev_warn_ratelimited(dev, 95 + dev_warn_once(dev, 96 96 "No UUID available providing old NGUID\n"); 97 97 return sysfs_emit(buf, "%pU\n", ids->nguid); 98 98 }
+4 -5
drivers/nvme/host/zns.c
··· 10 10 int nvme_revalidate_zones(struct nvme_ns *ns) 11 11 { 12 12 struct request_queue *q = ns->queue; 13 - int ret; 14 13 15 - ret = blk_revalidate_disk_zones(ns->disk, NULL); 16 - if (!ret) 17 - blk_queue_max_zone_append_sectors(q, ns->ctrl->max_zone_append); 18 - return ret; 14 + blk_queue_chunk_sectors(q, ns->zsze); 15 + blk_queue_max_zone_append_sectors(q, ns->ctrl->max_zone_append); 16 + 17 + return blk_revalidate_disk_zones(ns->disk, NULL); 19 18 } 20 19 21 20 static int nvme_set_max_append(struct nvme_ctrl *ctrl)
+1 -1
drivers/nvme/target/loop.c
··· 373 373 goto out_cleanup_tagset; 374 374 375 375 ctrl->ctrl.max_hw_sectors = 376 - (NVME_LOOP_MAX_SEGMENTS - 1) << (PAGE_SHIFT - 9); 376 + (NVME_LOOP_MAX_SEGMENTS - 1) << PAGE_SECTORS_SHIFT; 377 377 378 378 nvme_unquiesce_admin_queue(&ctrl->ctrl); 379 379
+2 -2
drivers/nvme/target/passthru.c
··· 102 102 * which depends on the host's memory fragmentation. To solve this, 103 103 * ensure mdts is limited to the pages equal to the number of segments. 104 104 */ 105 - max_hw_sectors = min_not_zero(pctrl->max_segments << (PAGE_SHIFT - 9), 105 + max_hw_sectors = min_not_zero(pctrl->max_segments << PAGE_SECTORS_SHIFT, 106 106 pctrl->max_hw_sectors); 107 107 108 108 /* 109 109 * nvmet_passthru_map_sg is limited to using a single bio so limit 110 110 * the mdts based on BIO_MAX_VECS as well 111 111 */ 112 112 max_hw_sectors = min_not_zero(BIO_MAX_VECS << PAGE_SECTORS_SHIFT, 113 113 max_hw_sectors); 114 114 115 115 page_shift = NVME_CAP_MPSMIN(ctrl->cap) + 12;
-3
drivers/perf/riscv_pmu.c
··· 181 181 uint64_t max_period = riscv_pmu_ctr_get_width_mask(event); 182 182 u64 init_val; 183 183 184 - if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED))) 185 - return; 186 - 187 184 if (flags & PERF_EF_RELOAD) 188 185 WARN_ON_ONCE(!(event->hw.state & PERF_HES_UPTODATE)); 189 186
+23 -38
drivers/pinctrl/pinctrl-amd.c
··· 116 116 raw_spin_unlock_irqrestore(&gpio_dev->lock, flags); 117 117 } 118 118 119 - static int amd_gpio_set_debounce(struct gpio_chip *gc, unsigned offset, 120 - unsigned debounce) 119 + static int amd_gpio_set_debounce(struct amd_gpio *gpio_dev, unsigned int offset, 120 + unsigned int debounce) 121 121 { 122 122 u32 time; 123 123 u32 pin_reg; 124 124 int ret = 0; 125 - unsigned long flags; 126 - struct amd_gpio *gpio_dev = gpiochip_get_data(gc); 127 - 128 - raw_spin_lock_irqsave(&gpio_dev->lock, flags); 129 125 130 126 /* Use special handling for Pin0 debounce */ 131 - pin_reg = readl(gpio_dev->base + WAKE_INT_MASTER_REG); 132 - if (pin_reg & INTERNAL_GPIO0_DEBOUNCE) 133 - debounce = 0; 127 + if (offset == 0) { 128 + pin_reg = readl(gpio_dev->base + WAKE_INT_MASTER_REG); 129 + if (pin_reg & INTERNAL_GPIO0_DEBOUNCE) 130 + debounce = 0; 131 + } 134 132 135 133 pin_reg = readl(gpio_dev->base + offset * 4); 136 134 ··· 180 182 pin_reg &= ~(DB_CNTRl_MASK << DB_CNTRL_OFF); 181 183 } 182 184 writel(pin_reg, gpio_dev->base + offset * 4); 183 - raw_spin_unlock_irqrestore(&gpio_dev->lock, flags); 184 185 185 186 return ret; 186 - } 187 - 188 - static int amd_gpio_set_config(struct gpio_chip *gc, unsigned offset, 189 - unsigned long config) 190 - { 191 - u32 debounce; 192 - 193 - if (pinconf_to_config_param(config) != PIN_CONFIG_INPUT_DEBOUNCE) 194 - return -ENOTSUPP; 195 - 196 - debounce = pinconf_to_config_argument(config); 197 - return amd_gpio_set_debounce(gc, offset, debounce); 198 187 } 199 188 200 189 #ifdef CONFIG_DEBUG_FS ··· 205 220 char *pin_sts; 206 221 char *interrupt_sts; 207 222 char *wake_sts; 208 - char *pull_up_sel; 209 223 char *orientation; 210 224 char debounce_value[40]; 211 225 char *debounce_enable; ··· 312 328 seq_printf(s, " %s|", wake_sts); 313 329 314 330 if (pin_reg & BIT(PULL_UP_ENABLE_OFF)) { 315 - if (pin_reg & BIT(PULL_UP_SEL_OFF)) 316 - pull_up_sel = "8k"; 317 - else 318 - pull_up_sel = "4k"; 319 - seq_printf(s, "%s ↑|", 320 - 
pull_up_sel); 331 + seq_puts(s, " ↑ |"); 321 332 } else if (pin_reg & BIT(PULL_DOWN_ENABLE_OFF)) { 322 - seq_puts(s, " ↓|"); 333 + seq_puts(s, " ↓ |"); 323 334 } else { 324 335 seq_puts(s, " |"); 325 336 } ··· 740 761 break; 741 762 742 763 case PIN_CONFIG_BIAS_PULL_UP: 743 - arg = (pin_reg >> PULL_UP_SEL_OFF) & (BIT(0) | BIT(1)); 764 + arg = (pin_reg >> PULL_UP_ENABLE_OFF) & BIT(0); 744 765 break; 745 766 746 767 case PIN_CONFIG_DRIVE_STRENGTH: ··· 759 780 } 760 781 761 782 static int amd_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin, 762 - unsigned long *configs, unsigned num_configs) 783 + unsigned long *configs, unsigned int num_configs) 763 784 { 764 785 int i; 765 786 u32 arg; ··· 777 798 778 799 switch (param) { 779 800 case PIN_CONFIG_INPUT_DEBOUNCE: 780 - pin_reg &= ~DB_TMR_OUT_MASK; 781 - pin_reg |= arg & DB_TMR_OUT_MASK; 782 - break; 801 + ret = amd_gpio_set_debounce(gpio_dev, pin, arg); 802 + goto out_unlock; 783 803 784 804 case PIN_CONFIG_BIAS_PULL_DOWN: 785 805 pin_reg &= ~BIT(PULL_DOWN_ENABLE_OFF); ··· 786 808 break; 787 809 788 810 case PIN_CONFIG_BIAS_PULL_UP: 789 - pin_reg &= ~BIT(PULL_UP_SEL_OFF); 790 - pin_reg |= (arg & BIT(0)) << PULL_UP_SEL_OFF; 791 811 pin_reg &= ~BIT(PULL_UP_ENABLE_OFF); 792 - pin_reg |= ((arg>>1) & BIT(0)) << PULL_UP_ENABLE_OFF; 812 + pin_reg |= (arg & BIT(0)) << PULL_UP_ENABLE_OFF; 793 813 break; 794 814 795 815 case PIN_CONFIG_DRIVE_STRENGTH: ··· 805 829 806 830 writel(pin_reg, gpio_dev->base + pin*4); 807 831 } 832 + out_unlock: 808 833 raw_spin_unlock_irqrestore(&gpio_dev->lock, flags); 809 834 810 835 return ret; ··· 845 868 return -ENOTSUPP; 846 869 } 847 870 return 0; 871 + } 872 + 873 + static int amd_gpio_set_config(struct gpio_chip *gc, unsigned int pin, 874 + unsigned long config) 875 + { 876 + struct amd_gpio *gpio_dev = gpiochip_get_data(gc); 877 + 878 + return amd_pinconf_set(gpio_dev->pctrl, pin, &config, 1); 848 879 } 849 880 850 881 static const struct pinconf_ops amd_pinconf_ops = {
-1
drivers/pinctrl/pinctrl-amd.h
··· 36 36 #define WAKE_CNTRL_OFF_S4 15 37 37 #define PIN_STS_OFF 16 38 38 #define DRV_STRENGTH_SEL_OFF 17 39 - #define PULL_UP_SEL_OFF 19 40 39 #define PULL_UP_ENABLE_OFF 20 41 40 #define PULL_DOWN_ENABLE_OFF 21 42 41 #define OUTPUT_VALUE_OFF 22
+20 -8
drivers/pinctrl/renesas/pinctrl-rzg2l.c
··· 249 249 250 250 static int rzg2l_dt_subnode_to_map(struct pinctrl_dev *pctldev, 251 251 struct device_node *np, 252 + struct device_node *parent, 252 253 struct pinctrl_map **map, 253 254 unsigned int *num_maps, 254 255 unsigned int *index) ··· 267 266 struct property *prop; 268 267 int ret, gsel, fsel; 269 268 const char **pin_fn; 269 + const char *name; 270 270 const char *pin; 271 271 272 272 pinmux = of_find_property(np, "pinmux", NULL); ··· 351 349 psel_val[i] = MUX_FUNC(value); 352 350 } 353 351 352 + if (parent) { 353 + name = devm_kasprintf(pctrl->dev, GFP_KERNEL, "%pOFn.%pOFn", 354 + parent, np); 355 + if (!name) { 356 + ret = -ENOMEM; 357 + goto done; 358 + } 359 + } else { 360 + name = np->name; 361 + } 362 + 354 363 /* Register a single pin group listing all the pins we read from DT */ 355 - gsel = pinctrl_generic_add_group(pctldev, np->name, pins, num_pinmux, NULL); 364 + gsel = pinctrl_generic_add_group(pctldev, name, pins, num_pinmux, NULL); 356 365 if (gsel < 0) { 357 366 ret = gsel; 358 367 goto done; ··· 373 360 * Register a single group function where the 'data' is an array PSEL 374 361 * register values read from DT. 
375 362 */ 376 - pin_fn[0] = np->name; 377 - fsel = pinmux_generic_add_function(pctldev, np->name, pin_fn, 1, 378 - psel_val); 363 + pin_fn[0] = name; 364 + fsel = pinmux_generic_add_function(pctldev, name, pin_fn, 1, psel_val); 379 365 if (fsel < 0) { 380 366 ret = fsel; 381 367 goto remove_group; 382 368 } 383 369 384 370 maps[idx].type = PIN_MAP_TYPE_MUX_GROUP; 385 - maps[idx].data.mux.group = np->name; 386 - maps[idx].data.mux.function = np->name; 371 + maps[idx].data.mux.group = name; 372 + maps[idx].data.mux.function = name; 387 373 idx++; 388 374 389 375 dev_dbg(pctrl->dev, "Parsed %pOF with %d pins\n", np, num_pinmux); ··· 429 417 index = 0; 430 418 431 419 for_each_child_of_node(np, child) { 432 - ret = rzg2l_dt_subnode_to_map(pctldev, child, map, 420 + ret = rzg2l_dt_subnode_to_map(pctldev, child, np, map, 433 421 num_maps, &index); 434 422 if (ret < 0) { 435 423 of_node_put(child); ··· 438 426 } 439 427 440 428 if (*num_maps == 0) { 441 - ret = rzg2l_dt_subnode_to_map(pctldev, np, map, 429 + ret = rzg2l_dt_subnode_to_map(pctldev, np, NULL, map, 442 430 num_maps, &index); 443 431 if (ret < 0) 444 432 goto done;
+20 -8
drivers/pinctrl/renesas/pinctrl-rzv2m.c
··· 209 209 210 210 static int rzv2m_dt_subnode_to_map(struct pinctrl_dev *pctldev, 211 211 struct device_node *np, 212 + struct device_node *parent, 212 213 struct pinctrl_map **map, 213 214 unsigned int *num_maps, 214 215 unsigned int *index) ··· 227 226 struct property *prop; 228 227 int ret, gsel, fsel; 229 228 const char **pin_fn; 229 + const char *name; 230 230 const char *pin; 231 231 232 232 pinmux = of_find_property(np, "pinmux", NULL); ··· 311 309 psel_val[i] = MUX_FUNC(value); 312 310 } 313 311 312 + if (parent) { 313 + name = devm_kasprintf(pctrl->dev, GFP_KERNEL, "%pOFn.%pOFn", 314 + parent, np); 315 + if (!name) { 316 + ret = -ENOMEM; 317 + goto done; 318 + } 319 + } else { 320 + name = np->name; 321 + } 322 + 314 323 /* Register a single pin group listing all the pins we read from DT */ 315 - gsel = pinctrl_generic_add_group(pctldev, np->name, pins, num_pinmux, NULL); 324 + gsel = pinctrl_generic_add_group(pctldev, name, pins, num_pinmux, NULL); 316 325 if (gsel < 0) { 317 326 ret = gsel; 318 327 goto done; ··· 333 320 * Register a single group function where the 'data' is an array PSEL 334 321 * register values read from DT. 
335 322 */ 336 - pin_fn[0] = np->name; 337 - fsel = pinmux_generic_add_function(pctldev, np->name, pin_fn, 1, 338 - psel_val); 323 + pin_fn[0] = name; 324 + fsel = pinmux_generic_add_function(pctldev, name, pin_fn, 1, psel_val); 339 325 if (fsel < 0) { 340 326 ret = fsel; 341 327 goto remove_group; 342 328 } 343 329 344 330 maps[idx].type = PIN_MAP_TYPE_MUX_GROUP; 345 - maps[idx].data.mux.group = np->name; 346 - maps[idx].data.mux.function = np->name; 331 + maps[idx].data.mux.group = name; 332 + maps[idx].data.mux.function = name; 347 333 idx++; 348 334 349 335 dev_dbg(pctrl->dev, "Parsed %pOF with %d pins\n", np, num_pinmux); ··· 389 377 index = 0; 390 378 391 379 for_each_child_of_node(np, child) { 392 - ret = rzv2m_dt_subnode_to_map(pctldev, child, map, 380 + ret = rzv2m_dt_subnode_to_map(pctldev, child, np, map, 393 381 num_maps, &index); 394 382 if (ret < 0) { 395 383 of_node_put(child); ··· 398 386 } 399 387 400 388 if (*num_maps == 0) { 401 - ret = rzv2m_dt_subnode_to_map(pctldev, np, map, 389 + ret = rzv2m_dt_subnode_to_map(pctldev, np, NULL, map, 402 390 num_maps, &index); 403 391 if (ret < 0) 404 392 goto done;
+1 -1
drivers/platform/x86/amd/Makefile
··· 4 4 # AMD x86 Platform-Specific Drivers 5 5 # 6 6 7 - amd-pmc-y := pmc.o 7 + amd-pmc-y := pmc.o pmc-quirks.o 8 8 obj-$(CONFIG_AMD_PMC) += amd-pmc.o 9 9 amd_hsmp-y := hsmp.o 10 10 obj-$(CONFIG_AMD_HSMP) += amd_hsmp.o
+176
drivers/platform/x86/amd/pmc-quirks.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + /* 3 + * AMD SoC Power Management Controller Driver Quirks 4 + * 5 + * Copyright (c) 2023, Advanced Micro Devices, Inc. 6 + * All Rights Reserved. 7 + * 8 + * Author: Mario Limonciello <mario.limonciello@amd.com> 9 + */ 10 + 11 + #include <linux/dmi.h> 12 + #include <linux/io.h> 13 + #include <linux/ioport.h> 14 + #include <linux/slab.h> 15 + 16 + #include "pmc.h" 17 + 18 + struct quirk_entry { 19 + u32 s2idle_bug_mmio; 20 + }; 21 + 22 + static struct quirk_entry quirk_s2idle_bug = { 23 + .s2idle_bug_mmio = 0xfed80380, 24 + }; 25 + 26 + static const struct dmi_system_id fwbug_list[] = { 27 + { 28 + .ident = "L14 Gen2 AMD", 29 + .driver_data = &quirk_s2idle_bug, 30 + .matches = { 31 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 32 + DMI_MATCH(DMI_PRODUCT_NAME, "20X5"), 33 + } 34 + }, 35 + { 36 + .ident = "T14s Gen2 AMD", 37 + .driver_data = &quirk_s2idle_bug, 38 + .matches = { 39 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 40 + DMI_MATCH(DMI_PRODUCT_NAME, "20XF"), 41 + } 42 + }, 43 + { 44 + .ident = "X13 Gen2 AMD", 45 + .driver_data = &quirk_s2idle_bug, 46 + .matches = { 47 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 48 + DMI_MATCH(DMI_PRODUCT_NAME, "20XH"), 49 + } 50 + }, 51 + { 52 + .ident = "T14 Gen2 AMD", 53 + .driver_data = &quirk_s2idle_bug, 54 + .matches = { 55 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 56 + DMI_MATCH(DMI_PRODUCT_NAME, "20XK"), 57 + } 58 + }, 59 + { 60 + .ident = "T14 Gen1 AMD", 61 + .driver_data = &quirk_s2idle_bug, 62 + .matches = { 63 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 64 + DMI_MATCH(DMI_PRODUCT_NAME, "20UD"), 65 + } 66 + }, 67 + { 68 + .ident = "T14 Gen1 AMD", 69 + .driver_data = &quirk_s2idle_bug, 70 + .matches = { 71 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 72 + DMI_MATCH(DMI_PRODUCT_NAME, "20UE"), 73 + } 74 + }, 75 + { 76 + .ident = "T14s Gen1 AMD", 77 + .driver_data = &quirk_s2idle_bug, 78 + .matches = { 79 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 80 + DMI_MATCH(DMI_PRODUCT_NAME, 
"20UH"), 81 + } 82 + }, 83 + { 84 + .ident = "T14s Gen1 AMD", 85 + .driver_data = &quirk_s2idle_bug, 86 + .matches = { 87 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 88 + DMI_MATCH(DMI_PRODUCT_NAME, "20UJ"), 89 + } 90 + }, 91 + { 92 + .ident = "P14s Gen1 AMD", 93 + .driver_data = &quirk_s2idle_bug, 94 + .matches = { 95 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 96 + DMI_MATCH(DMI_PRODUCT_NAME, "20Y1"), 97 + } 98 + }, 99 + { 100 + .ident = "P14s Gen2 AMD", 101 + .driver_data = &quirk_s2idle_bug, 102 + .matches = { 103 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 104 + DMI_MATCH(DMI_PRODUCT_NAME, "21A0"), 105 + } 106 + }, 107 + { 108 + .ident = "P14s Gen2 AMD", 109 + .driver_data = &quirk_s2idle_bug, 110 + .matches = { 111 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 112 + DMI_MATCH(DMI_PRODUCT_NAME, "21A1"), 113 + } 114 + }, 115 + /* https://gitlab.freedesktop.org/drm/amd/-/issues/2684 */ 116 + { 117 + .ident = "HP Laptop 15s-eq2xxx", 118 + .driver_data = &quirk_s2idle_bug, 119 + .matches = { 120 + DMI_MATCH(DMI_SYS_VENDOR, "HP"), 121 + DMI_MATCH(DMI_PRODUCT_NAME, "HP Laptop 15s-eq2xxx"), 122 + } 123 + }, 124 + {} 125 + }; 126 + 127 + /* 128 + * Laptops that run a SMI handler during the D3->D0 transition that occurs 129 + * specifically when exiting suspend to idle which can cause 130 + * large delays during resume when the IOMMU translation layer is enabled (the default 131 + * behavior) for NVME devices: 132 + * 133 + * To avoid this firmware problem, skip the SMI handler on these machines before the 134 + * D0 transition occurs. 
135 + */ 136 + static void amd_pmc_skip_nvme_smi_handler(u32 s2idle_bug_mmio) 137 + { 138 + struct resource *res; 139 + void __iomem *addr; 140 + u8 val; 141 + 142 + res = request_mem_region_muxed(s2idle_bug_mmio, 1, "amd_pmc_pm80"); 143 + if (!res) 144 + return; 145 + 146 + addr = ioremap(s2idle_bug_mmio, 1); 147 + if (!addr) 148 + goto cleanup_resource; 149 + 150 + val = ioread8(addr); 151 + iowrite8(val & ~BIT(0), addr); 152 + 153 + iounmap(addr); 154 + cleanup_resource: 155 + release_resource(res); 156 + kfree(res); 157 + } 158 + 159 + void amd_pmc_process_restore_quirks(struct amd_pmc_dev *dev) 160 + { 161 + if (dev->quirks && dev->quirks->s2idle_bug_mmio) 162 + amd_pmc_skip_nvme_smi_handler(dev->quirks->s2idle_bug_mmio); 163 + } 164 + 165 + void amd_pmc_quirks_init(struct amd_pmc_dev *dev) 166 + { 167 + const struct dmi_system_id *dmi_id; 168 + 169 + dmi_id = dmi_first_match(fwbug_list); 170 + if (!dmi_id) 171 + return; 172 + dev->quirks = dmi_id->driver_data; 173 + if (dev->quirks->s2idle_bug_mmio) 174 + pr_info("Using s2idle quirk to avoid %s platform firmware bug\n", 175 + dmi_id->ident); 176 + }
+9 -23
drivers/platform/x86/amd/pmc.c
··· 28 28 #include <linux/seq_file.h> 29 29 #include <linux/uaccess.h> 30 30 31 + #include "pmc.h" 32 + 31 33 /* SMU communication registers */ 32 34 #define AMD_PMC_REGISTER_MESSAGE 0x538 33 35 #define AMD_PMC_REGISTER_RESPONSE 0x980 ··· 96 94 #define AMD_CPU_ID_CB 0x14D8 97 95 #define AMD_CPU_ID_PS 0x14E8 98 96 #define AMD_CPU_ID_SP 0x14A4 97 + #define PCI_DEVICE_ID_AMD_1AH_M20H_ROOT 0x1507 99 98 100 99 #define PMC_MSG_DELAY_MIN_US 50 101 100 #define RESPONSE_REGISTER_LOOP_MAX 20000 ··· 147 144 {"IPU", BIT(19)}, 148 145 {"UMSCH", BIT(20)}, 149 146 {} 150 - }; 151 - 152 - struct amd_pmc_dev { 153 - void __iomem *regbase; 154 - void __iomem *smu_virt_addr; 155 - void __iomem *stb_virt_addr; 156 - void __iomem *fch_virt_addr; 157 - bool msg_port; 158 - u32 base_addr; 159 - u32 cpu_id; 160 - u32 active_ips; 161 - u32 dram_size; 162 - u32 num_ips; 163 - u32 s2d_msg_id; 164 - /* SMU version information */ 165 - u8 smu_program; 166 - u8 major; 167 - u8 minor; 168 - u8 rev; 169 - struct device *dev; 170 - struct pci_dev *rdev; 171 - struct mutex lock; /* generic mutex lock */ 172 - struct dentry *dbgfs_dir; 173 147 }; 174 148 175 149 static bool enable_stb; ··· 871 891 872 892 /* Notify on failed entry */ 873 893 amd_pmc_validate_deepest(pdev); 894 + 895 + amd_pmc_process_restore_quirks(pdev); 874 896 } 875 897 876 898 static struct acpi_s2idle_dev_ops amd_pmc_s2idle_dev_ops = { ··· 908 926 { PCI_DEVICE(PCI_VENDOR_ID_AMD, AMD_CPU_ID_PCO) }, 909 927 { PCI_DEVICE(PCI_VENDOR_ID_AMD, AMD_CPU_ID_RV) }, 910 928 { PCI_DEVICE(PCI_VENDOR_ID_AMD, AMD_CPU_ID_SP) }, 929 + { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M20H_ROOT) }, 911 930 { } 912 931 }; 913 932 ··· 1070 1087 err = acpi_register_lps0_dev(&amd_pmc_s2idle_dev_ops); 1071 1088 if (err) 1072 1089 dev_warn(dev->dev, "failed to register LPS0 sleep handler, expect increased power consumption\n"); 1090 + if (!disable_workarounds) 1091 + amd_pmc_quirks_init(dev); 1073 1092 } 1074 1093 1075 1094 
amd_pmc_dbgfs_register(dev); ··· 1100 1115 {"AMDI0007", 0}, 1101 1116 {"AMDI0008", 0}, 1102 1117 {"AMDI0009", 0}, 1118 + {"AMDI000A", 0}, 1103 1119 {"AMD0004", 0}, 1104 1120 {"AMD0005", 0}, 1105 1121 { }
+44
drivers/platform/x86/amd/pmc.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 + /* 3 + * AMD SoC Power Management Controller Driver 4 + * 5 + * Copyright (c) 2023, Advanced Micro Devices, Inc. 6 + * All Rights Reserved. 7 + * 8 + * Author: Mario Limonciello <mario.limonciello@amd.com> 9 + */ 10 + 11 + #ifndef PMC_H 12 + #define PMC_H 13 + 14 + #include <linux/types.h> 15 + #include <linux/mutex.h> 16 + 17 + struct amd_pmc_dev { 18 + void __iomem *regbase; 19 + void __iomem *smu_virt_addr; 20 + void __iomem *stb_virt_addr; 21 + void __iomem *fch_virt_addr; 22 + bool msg_port; 23 + u32 base_addr; 24 + u32 cpu_id; 25 + u32 active_ips; 26 + u32 dram_size; 27 + u32 num_ips; 28 + u32 s2d_msg_id; 29 + /* SMU version information */ 30 + u8 smu_program; 31 + u8 major; 32 + u8 minor; 33 + u8 rev; 34 + struct device *dev; 35 + struct pci_dev *rdev; 36 + struct mutex lock; /* generic mutex lock */ 37 + struct dentry *dbgfs_dir; 38 + struct quirk_entry *quirks; 39 + }; 40 + 41 + void amd_pmc_process_restore_quirks(struct amd_pmc_dev *dev); 42 + void amd_pmc_quirks_init(struct amd_pmc_dev *dev); 43 + 44 + #endif /* PMC_H */
+3
drivers/platform/x86/amd/pmf/core.c
··· 40 40 /* List of supported CPU ids */ 41 41 #define AMD_CPU_ID_RMB 0x14b5 42 42 #define AMD_CPU_ID_PS 0x14e8 43 + #define PCI_DEVICE_ID_AMD_1AH_M20H_ROOT 0x1507 43 44 44 45 #define PMF_MSG_DELAY_MIN_US 50 45 46 #define RESPONSE_REGISTER_LOOP_MAX 20000 ··· 243 242 static const struct pci_device_id pmf_pci_ids[] = { 244 243 { PCI_DEVICE(PCI_VENDOR_ID_AMD, AMD_CPU_ID_RMB) }, 245 244 { PCI_DEVICE(PCI_VENDOR_ID_AMD, AMD_CPU_ID_PS) }, 245 + { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_1AH_M20H_ROOT) }, 246 246 { } 247 247 }; 248 248 ··· 335 333 static const struct acpi_device_id amd_pmf_acpi_ids[] = { 336 334 {"AMDI0100", 0x100}, 337 335 {"AMDI0102", 0}, 336 + {"AMDI0103", 0}, 338 337 { } 339 338 }; 340 339 MODULE_DEVICE_TABLE(acpi, amd_pmf_acpi_ids);
+4 -3
drivers/platform/x86/dell/dell-wmi-ddv.c
··· 616 616 } 617 617 618 618 if (index < 2) { 619 - ret = -ENODEV; 619 + /* Finding no available sensors is not an error */ 620 + ret = 0; 620 621 621 622 goto err_release; 622 623 } ··· 842 841 843 842 if (IS_REACHABLE(CONFIG_ACPI_BATTERY)) { 844 843 ret = dell_wmi_ddv_battery_add(data); 845 - if (ret < 0 && ret != -ENODEV) 844 + if (ret < 0) 846 845 dev_warn(&wdev->dev, "Unable to register ACPI battery hook: %d\n", ret); 847 846 } 848 847 849 848 if (IS_REACHABLE(CONFIG_HWMON)) { 850 849 ret = dell_wmi_ddv_hwmon_add(data); 851 - if (ret < 0 && ret != -ENODEV) 850 + if (ret < 0) 852 851 dev_warn(&wdev->dev, "Unable to register hwmon interface: %d\n", ret); 853 852 } 854 853
+1 -1
drivers/platform/x86/intel/int3472/clk_and_regulator.c
··· 260 260 * This DMI table contains the name of the second sensor. This is used to add 261 261 * entries for the second sensor to the supply_map. 262 262 */ 263 - const struct dmi_system_id skl_int3472_regulator_second_sensor[] = { 263 + static const struct dmi_system_id skl_int3472_regulator_second_sensor[] = { 264 264 { 265 265 /* Lenovo Miix 510-12IKB */ 266 266 .matches = {
+1 -3
drivers/platform/x86/intel/tpmi.c
··· 356 356 if (!pfs_start) 357 357 pfs_start = res_start; 358 358 359 - pfs->pfs_header.cap_offset *= TPMI_CAP_OFFSET_UNIT; 360 - 361 - pfs->vsec_offset = pfs_start + pfs->pfs_header.cap_offset; 359 + pfs->vsec_offset = pfs_start + pfs->pfs_header.cap_offset * TPMI_CAP_OFFSET_UNIT; 362 360 363 361 /* 364 362 * Process TPMI_INFO to get PCI device to CPU package ID.
-143
drivers/platform/x86/thinkpad_acpi.c
··· 315 315 /* DMI Quirks */ 316 316 struct quirk_entry { 317 317 bool btusb_bug; 318 - u32 s2idle_bug_mmio; 319 318 }; 320 319 321 320 static struct quirk_entry quirk_btusb_bug = { 322 321 .btusb_bug = true, 323 - }; 324 - 325 - static struct quirk_entry quirk_s2idle_bug = { 326 - .s2idle_bug_mmio = 0xfed80380, 327 322 }; 328 323 329 324 static struct { ··· 4417 4422 DMI_MATCH(DMI_BOARD_NAME, "20MV"), 4418 4423 }, 4419 4424 }, 4420 - { 4421 - .ident = "L14 Gen2 AMD", 4422 - .driver_data = &quirk_s2idle_bug, 4423 - .matches = { 4424 - DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 4425 - DMI_MATCH(DMI_PRODUCT_NAME, "20X5"), 4426 - } 4427 - }, 4428 - { 4429 - .ident = "T14s Gen2 AMD", 4430 - .driver_data = &quirk_s2idle_bug, 4431 - .matches = { 4432 - DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 4433 - DMI_MATCH(DMI_PRODUCT_NAME, "20XF"), 4434 - } 4435 - }, 4436 - { 4437 - .ident = "X13 Gen2 AMD", 4438 - .driver_data = &quirk_s2idle_bug, 4439 - .matches = { 4440 - DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 4441 - DMI_MATCH(DMI_PRODUCT_NAME, "20XH"), 4442 - } 4443 - }, 4444 - { 4445 - .ident = "T14 Gen2 AMD", 4446 - .driver_data = &quirk_s2idle_bug, 4447 - .matches = { 4448 - DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 4449 - DMI_MATCH(DMI_PRODUCT_NAME, "20XK"), 4450 - } 4451 - }, 4452 - { 4453 - .ident = "T14 Gen1 AMD", 4454 - .driver_data = &quirk_s2idle_bug, 4455 - .matches = { 4456 - DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 4457 - DMI_MATCH(DMI_PRODUCT_NAME, "20UD"), 4458 - } 4459 - }, 4460 - { 4461 - .ident = "T14 Gen1 AMD", 4462 - .driver_data = &quirk_s2idle_bug, 4463 - .matches = { 4464 - DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 4465 - DMI_MATCH(DMI_PRODUCT_NAME, "20UE"), 4466 - } 4467 - }, 4468 - { 4469 - .ident = "T14s Gen1 AMD", 4470 - .driver_data = &quirk_s2idle_bug, 4471 - .matches = { 4472 - DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 4473 - DMI_MATCH(DMI_PRODUCT_NAME, "20UH"), 4474 - } 4475 - }, 4476 - { 4477 - .ident = "T14s Gen1 AMD", 4478 - .driver_data = &quirk_s2idle_bug, 4479 
- .matches = { 4480 - DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 4481 - DMI_MATCH(DMI_PRODUCT_NAME, "20UJ"), 4482 - } 4483 - }, 4484 - { 4485 - .ident = "P14s Gen1 AMD", 4486 - .driver_data = &quirk_s2idle_bug, 4487 - .matches = { 4488 - DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 4489 - DMI_MATCH(DMI_PRODUCT_NAME, "20Y1"), 4490 - } 4491 - }, 4492 - { 4493 - .ident = "P14s Gen2 AMD", 4494 - .driver_data = &quirk_s2idle_bug, 4495 - .matches = { 4496 - DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 4497 - DMI_MATCH(DMI_PRODUCT_NAME, "21A0"), 4498 - } 4499 - }, 4500 - { 4501 - .ident = "P14s Gen2 AMD", 4502 - .driver_data = &quirk_s2idle_bug, 4503 - .matches = { 4504 - DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 4505 - DMI_MATCH(DMI_PRODUCT_NAME, "21A1"), 4506 - } 4507 - }, 4508 4425 {} 4509 4426 }; 4510 - 4511 - #ifdef CONFIG_SUSPEND 4512 - /* 4513 - * Lenovo laptops from a variety of generations run a SMI handler during the D3->D0 4514 - * transition that occurs specifically when exiting suspend to idle which can cause 4515 - * large delays during resume when the IOMMU translation layer is enabled (the default 4516 - * behavior) for NVME devices: 4517 - * 4518 - * To avoid this firmware problem, skip the SMI handler on these machines before the 4519 - * D0 transition occurs. 
4520 - */ 4521 - static void thinkpad_acpi_amd_s2idle_restore(void) 4522 - { 4523 - struct resource *res; 4524 - void __iomem *addr; 4525 - u8 val; 4526 - 4527 - res = request_mem_region_muxed(tp_features.quirks->s2idle_bug_mmio, 1, 4528 - "thinkpad_acpi_pm80"); 4529 - if (!res) 4530 - return; 4531 - 4532 - addr = ioremap(tp_features.quirks->s2idle_bug_mmio, 1); 4533 - if (!addr) 4534 - goto cleanup_resource; 4535 - 4536 - val = ioread8(addr); 4537 - iowrite8(val & ~BIT(0), addr); 4538 - 4539 - iounmap(addr); 4540 - cleanup_resource: 4541 - release_resource(res); 4542 - kfree(res); 4543 - } 4544 - 4545 - static struct acpi_s2idle_dev_ops thinkpad_acpi_s2idle_dev_ops = { 4546 - .restore = thinkpad_acpi_amd_s2idle_restore, 4547 - }; 4548 - #endif 4549 4427 4550 4428 static const struct pci_device_id fwbug_cards_ids[] __initconst = { 4551 4429 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x24F3) }, ··· 11536 11668 11537 11669 tpacpi_lifecycle = TPACPI_LIFE_EXITING; 11538 11670 11539 - #ifdef CONFIG_SUSPEND 11540 - if (tp_features.quirks && tp_features.quirks->s2idle_bug_mmio) 11541 - acpi_unregister_lps0_dev(&thinkpad_acpi_s2idle_dev_ops); 11542 - #endif 11543 11671 if (tpacpi_hwmon) 11544 11672 hwmon_device_unregister(tpacpi_hwmon); 11545 11673 if (tp_features.sensors_pdrv_registered) ··· 11725 11861 tp_features.input_device_registered = 1; 11726 11862 } 11727 11863 11728 - #ifdef CONFIG_SUSPEND 11729 - if (tp_features.quirks && tp_features.quirks->s2idle_bug_mmio) { 11730 - if (!acpi_register_lps0_dev(&thinkpad_acpi_s2idle_dev_ops)) 11731 - pr_info("Using s2idle quirk to avoid %s platform firmware bug\n", 11732 - (dmi_id && dmi_id->ident) ? dmi_id->ident : ""); 11733 - } 11734 - #endif 11735 11864 return 0; 11736 11865 } 11737 11866
+22
drivers/platform/x86/touchscreen_dmi.c
··· 26 26 27 27 /* NOTE: Please keep all entries sorted alphabetically */ 28 28 29 + static const struct property_entry archos_101_cesium_educ_props[] = { 30 + PROPERTY_ENTRY_U32("touchscreen-size-x", 1280), 31 + PROPERTY_ENTRY_U32("touchscreen-size-y", 1850), 32 + PROPERTY_ENTRY_BOOL("touchscreen-inverted-x"), 33 + PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 34 + PROPERTY_ENTRY_U32("silead,max-fingers", 10), 35 + PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-archos-101-cesium-educ.fw"), 36 + { } 37 + }; 38 + 39 + static const struct ts_dmi_data archos_101_cesium_educ_data = { 40 + .acpi_name = "MSSL1680:00", 41 + .properties = archos_101_cesium_educ_props, 42 + }; 43 + 29 44 static const struct property_entry chuwi_hi8_props[] = { 30 45 PROPERTY_ENTRY_U32("touchscreen-size-x", 1665), 31 46 PROPERTY_ENTRY_U32("touchscreen-size-y", 1140), ··· 1062 1047 1063 1048 /* NOTE: Please keep this table sorted alphabetically */ 1064 1049 const struct dmi_system_id touchscreen_dmi_table[] = { 1050 + { 1051 + /* Archos 101 Cesium Educ */ 1052 + .driver_data = (void *)&archos_101_cesium_educ_data, 1053 + .matches = { 1054 + DMI_MATCH(DMI_PRODUCT_NAME, "ARCHOS 101 Cesium Educ"), 1055 + }, 1056 + }, 1065 1057 { 1066 1058 /* Chuwi Hi8 */ 1067 1059 .driver_data = (void *)&chuwi_hi8_data,
+13 -15
drivers/platform/x86/wmi.c
··· 136 136 return AE_NOT_FOUND; 137 137 } 138 138 139 + static bool guid_parse_and_compare(const char *string, const guid_t *guid) 140 + { 141 + guid_t guid_input; 142 + 143 + if (guid_parse(string, &guid_input)) 144 + return false; 145 + 146 + return guid_equal(&guid_input, guid); 147 + } 148 + 139 149 static const void *find_guid_context(struct wmi_block *wblock, 140 150 struct wmi_driver *wdriver) 141 151 { ··· 156 146 return NULL; 157 147 158 148 while (*id->guid_string) { 159 - guid_t guid_input; 160 - 161 - if (guid_parse(id->guid_string, &guid_input)) 162 - continue; 163 - if (guid_equal(&wblock->gblock.guid, &guid_input)) 149 + if (guid_parse_and_compare(id->guid_string, &wblock->gblock.guid)) 164 150 return id->context; 165 151 id++; 166 152 } ··· 901 895 return 0; 902 896 903 897 while (*id->guid_string) { 904 - guid_t driver_guid; 905 - 906 - if (WARN_ON(guid_parse(id->guid_string, &driver_guid))) 907 - continue; 908 - if (guid_equal(&driver_guid, &wblock->gblock.guid)) 898 + if (guid_parse_and_compare(id->guid_string, &wblock->gblock.guid)) 909 899 return 1; 910 900 911 901 id++; ··· 1241 1239 list_for_each_entry(wblock, &wmi_block_list, list) { 1242 1240 /* skip warning and register if we know the driver will use struct wmi_driver */ 1243 1241 for (int i = 0; allow_duplicates[i] != NULL; i++) { 1244 - guid_t tmp; 1245 - 1246 - if (guid_parse(allow_duplicates[i], &tmp)) 1247 - continue; 1248 - if (guid_equal(&tmp, guid)) 1242 + if (guid_parse_and_compare(allow_duplicates[i], guid)) 1249 1243 return false; 1250 1244 } 1251 1245 if (guid_equal(&wblock->gblock.guid, guid)) {
+73 -80
drivers/s390/net/ism_drv.c
··· 36 36 static struct ism_client *clients[MAX_CLIENTS]; /* use an array rather than */ 37 37 /* a list for fast mapping */ 38 38 static u8 max_client; 39 - static DEFINE_SPINLOCK(clients_lock); 39 + static DEFINE_MUTEX(clients_lock); 40 40 struct ism_dev_list { 41 41 struct list_head list; 42 42 struct mutex mutex; /* protects ism device list */ ··· 47 47 .mutex = __MUTEX_INITIALIZER(ism_dev_list.mutex), 48 48 }; 49 49 50 + static void ism_setup_forwarding(struct ism_client *client, struct ism_dev *ism) 51 + { 52 + unsigned long flags; 53 + 54 + spin_lock_irqsave(&ism->lock, flags); 55 + ism->subs[client->id] = client; 56 + spin_unlock_irqrestore(&ism->lock, flags); 57 + } 58 + 50 59 int ism_register_client(struct ism_client *client) 51 60 { 52 61 struct ism_dev *ism; 53 - unsigned long flags; 54 62 int i, rc = -ENOSPC; 55 63 56 64 mutex_lock(&ism_dev_list.mutex); 57 - spin_lock_irqsave(&clients_lock, flags); 65 + mutex_lock(&clients_lock); 58 66 for (i = 0; i < MAX_CLIENTS; ++i) { 59 67 if (!clients[i]) { 60 68 clients[i] = client; ··· 73 65 break; 74 66 } 75 67 } 76 - spin_unlock_irqrestore(&clients_lock, flags); 68 + mutex_unlock(&clients_lock); 69 + 77 70 if (i < MAX_CLIENTS) { 78 71 /* initialize with all devices that we got so far */ 79 72 list_for_each_entry(ism, &ism_dev_list.list, list) { 80 73 ism->priv[i] = NULL; 81 74 client->add(ism); 75 + ism_setup_forwarding(client, ism); 82 76 } 83 77 } 84 78 mutex_unlock(&ism_dev_list.mutex); ··· 96 86 int rc = 0; 97 87 98 88 mutex_lock(&ism_dev_list.mutex); 99 - spin_lock_irqsave(&clients_lock, flags); 89 + list_for_each_entry(ism, &ism_dev_list.list, list) { 90 + spin_lock_irqsave(&ism->lock, flags); 91 + /* Stop forwarding IRQs and events */ 92 + ism->subs[client->id] = NULL; 93 + for (int i = 0; i < ISM_NR_DMBS; ++i) { 94 + if (ism->sba_client_arr[i] == client->id) { 95 + WARN(1, "%s: attempt to unregister '%s' with registered dmb(s)\n", 96 + __func__, client->name); 97 + rc = -EBUSY; 98 + goto err_reg_dmb; 
99 + } 100 + } 101 + spin_unlock_irqrestore(&ism->lock, flags); 102 + } 103 + mutex_unlock(&ism_dev_list.mutex); 104 + 105 + mutex_lock(&clients_lock); 100 106 clients[client->id] = NULL; 101 107 if (client->id + 1 == max_client) 102 108 max_client--; 103 - spin_unlock_irqrestore(&clients_lock, flags); 104 - list_for_each_entry(ism, &ism_dev_list.list, list) { 105 - for (int i = 0; i < ISM_NR_DMBS; ++i) { 106 - if (ism->sba_client_arr[i] == client->id) { 107 - pr_err("%s: attempt to unregister client '%s'" 108 - "with registered dmb(s)\n", __func__, 109 - client->name); 110 - rc = -EBUSY; 111 - goto out; 112 - } 113 - } 114 - } 115 - out: 116 - mutex_unlock(&ism_dev_list.mutex); 109 + mutex_unlock(&clients_lock); 110 + return rc; 117 111 112 + err_reg_dmb: 113 + spin_unlock_irqrestore(&ism->lock, flags); 114 + mutex_unlock(&ism_dev_list.mutex); 118 115 return rc; 119 116 } 120 117 EXPORT_SYMBOL_GPL(ism_unregister_client); ··· 345 328 struct ism_client *client) 346 329 { 347 330 union ism_reg_dmb cmd; 331 + unsigned long flags; 348 332 int ret; 349 333 350 334 ret = ism_alloc_dmb(ism, dmb); ··· 369 351 goto out; 370 352 } 371 353 dmb->dmb_tok = cmd.response.dmb_tok; 354 + spin_lock_irqsave(&ism->lock, flags); 372 355 ism->sba_client_arr[dmb->sba_idx - ISM_DMB_BIT_OFFSET] = client->id; 356 + spin_unlock_irqrestore(&ism->lock, flags); 373 357 out: 374 358 return ret; 375 359 } ··· 380 360 int ism_unregister_dmb(struct ism_dev *ism, struct ism_dmb *dmb) 381 361 { 382 362 union ism_unreg_dmb cmd; 363 + unsigned long flags; 383 364 int ret; 384 365 385 366 memset(&cmd, 0, sizeof(cmd)); ··· 389 368 390 369 cmd.request.dmb_tok = dmb->dmb_tok; 391 370 371 + spin_lock_irqsave(&ism->lock, flags); 392 372 ism->sba_client_arr[dmb->sba_idx - ISM_DMB_BIT_OFFSET] = NO_CLIENT; 373 + spin_unlock_irqrestore(&ism->lock, flags); 393 374 394 375 ret = ism_cmd(ism, &cmd); 395 376 if (ret && ret != ISM_ERROR) ··· 514 491 static void ism_handle_event(struct ism_dev *ism) 515 492 { 516 493 struct ism_event *entry; 494 + struct ism_client *clt; 517 495 int i; 518 496 519 497 while ((ism->ieq_idx + 1) != READ_ONCE(ism->ieq->header.idx)) { ··· 523 499 524 500 entry = &ism->ieq->entry[ism->ieq_idx]; 525 501 debug_event(ism_debug_info, 2, entry, sizeof(*entry)); 526 - spin_lock(&clients_lock); 527 - for (i = 0; i < max_client; ++i) 528 - if (clients[i]) 529 - clients[i]->handle_event(ism, entry); 530 - spin_unlock(&clients_lock); 502 + for (i = 0; i < max_client; ++i) { 503 + clt = ism->subs[i]; 504 + if (clt) 505 + clt->handle_event(ism, entry); 506 + } 531 507 } 532 508 } 533 509 534 510 static irqreturn_t ism_handle_irq(int irq, void *data) 535 511 { 536 512 struct ism_dev *ism = data; 537 - struct ism_client *clt; 538 513 unsigned long bit, end; 539 514 unsigned long *bv; 540 515 u16 dmbemask; 516 + u8 client_id; 541 517 542 518 bv = (void *) &ism->sba->dmb_bits[ISM_DMB_WORD_OFFSET]; 543 519 end = sizeof(ism->sba->dmb_bits) * BITS_PER_BYTE - ISM_DMB_BIT_OFFSET; ··· 554 530 dmbemask = ism->sba->dmbe_mask[bit + ISM_DMB_BIT_OFFSET]; 555 531 ism->sba->dmbe_mask[bit + ISM_DMB_BIT_OFFSET] = 0; 556 532 barrier(); 557 - clt = clients[ism->sba_client_arr[bit]]; 558 - clt->handle_irq(ism, bit + ISM_DMB_BIT_OFFSET, dmbemask); 533 + client_id = ism->sba_client_arr[bit]; 534 + if (unlikely(client_id == NO_CLIENT || !ism->subs[client_id])) 535 + continue; 536 + ism->subs[client_id]->handle_irq(ism, bit + ISM_DMB_BIT_OFFSET, dmbemask); 559 537 } 560 538 561 539 if (ism->sba->e) { ··· 574 548 return ism->local_gid; 575 549 } 576 550 577 - static void ism_dev_add_work_func(struct work_struct *work) 578 - { 579 - struct ism_client *client = container_of(work, struct ism_client, 580 - add_work); 581 - 582 - client->add(client->tgt_ism); 583 - atomic_dec(&client->tgt_ism->add_dev_cnt); 584 - wake_up(&client->tgt_ism->waitq); 585 - } 586 - 587 551 static int ism_dev_init(struct ism_dev *ism) 588 552 { 589 553 struct pci_dev *pdev = ism->pdev; 590 - unsigned long flags;
591 554 int i, ret; 592 555 593 556 ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI); ··· 609 594 /* hardware is V2 capable */ 610 595 ism_create_system_eid(); 611 596 612 - init_waitqueue_head(&ism->waitq); 613 - atomic_set(&ism->free_clients_cnt, 0); 614 - atomic_set(&ism->add_dev_cnt, 0); 615 - 616 - wait_event(ism->waitq, !atomic_read(&ism->add_dev_cnt)); 617 - spin_lock_irqsave(&clients_lock, flags); 618 - for (i = 0; i < max_client; ++i) 619 - if (clients[i]) { 620 - INIT_WORK(&clients[i]->add_work, 621 - ism_dev_add_work_func); 622 - clients[i]->tgt_ism = ism; 623 - atomic_inc(&ism->add_dev_cnt); 624 - schedule_work(&clients[i]->add_work); 625 - } 626 - spin_unlock_irqrestore(&clients_lock, flags); 627 - 628 - wait_event(ism->waitq, !atomic_read(&ism->add_dev_cnt)); 629 - 630 597 mutex_lock(&ism_dev_list.mutex); 598 + mutex_lock(&clients_lock); 599 + for (i = 0; i < max_client; ++i) { 600 + if (clients[i]) { 601 + clients[i]->add(ism); 602 + ism_setup_forwarding(clients[i], ism); 603 + } 604 + } 605 + mutex_unlock(&clients_lock); 606 + 631 607 list_add(&ism->list, &ism_dev_list.list); 632 608 mutex_unlock(&ism_dev_list.mutex); ··· 693 687 return ret; 694 688 } 695 689 696 - static void ism_dev_remove_work_func(struct work_struct *work) 697 - { 698 - struct ism_client *client = container_of(work, struct ism_client, 699 - remove_work); 700 - 701 - client->remove(client->tgt_ism); 702 - atomic_dec(&client->tgt_ism->free_clients_cnt); 703 - wake_up(&client->tgt_ism->waitq); 704 - } 705 - 706 - /* Callers must hold ism_dev_list.mutex */ 707 690 static void ism_dev_exit(struct ism_dev *ism) 708 691 { 709 692 struct pci_dev *pdev = ism->pdev; 710 693 unsigned long flags; 711 694 int i; 712 695 713 - wait_event(ism->waitq, !atomic_read(&ism->free_clients_cnt)); 714 - spin_lock_irqsave(&clients_lock, flags); 696 + spin_lock_irqsave(&ism->lock, flags); 715 697 for (i = 0; i < max_client; ++i) 716 - if (clients[i]) { 717 - INIT_WORK(&clients[i]->remove_work, 718 - ism_dev_remove_work_func); 719 - clients[i]->tgt_ism = ism; 720 - atomic_inc(&ism->free_clients_cnt); 721 - schedule_work(&clients[i]->remove_work); 722 - } 723 - spin_unlock_irqrestore(&clients_lock, flags); 698 + ism->subs[i] = NULL; 699 + spin_unlock_irqrestore(&ism->lock, flags); 724 700 725 - wait_event(ism->waitq, !atomic_read(&ism->free_clients_cnt)); 701 + mutex_lock(&ism_dev_list.mutex); 702 + mutex_lock(&clients_lock); 703 + for (i = 0; i < max_client; ++i) { 704 + if (clients[i]) 705 + clients[i]->remove(ism); 706 + } 707 + mutex_unlock(&clients_lock); 726 708 727 709 if (SYSTEM_EID.serial_number[0] != '0' || 728 710 SYSTEM_EID.type[0] != '0') ··· 721 727 kfree(ism->sba_client_arr); 722 728 pci_free_irq_vectors(pdev); 723 729 list_del_init(&ism->list); 730 + mutex_unlock(&ism_dev_list.mutex); 724 731 } 725 732 726 733 static void ism_remove(struct pci_dev *pdev) 727 734 { 728 735 struct ism_dev *ism = dev_get_drvdata(&pdev->dev); 729 736 730 - mutex_lock(&ism_dev_list.mutex); 731 737 ism_dev_exit(ism); 732 - mutex_unlock(&ism_dev_list.mutex); 733 738 734 739 pci_release_mem_regions(pdev); 735 740 pci_disable_device(pdev);
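The ism_drv.c rework above drops the per-client workqueue dance in favour of a mutex-guarded registration table plus a per-device subs[] forwarding array that the IRQ and event paths read without touching the global table. A minimal single-threaded userspace sketch of that split (names, sizes, and locking elided are illustrative, not the kernel's):

```c
#include <assert.h>
#include <string.h>

#define MAX_CLIENTS 8
#define NR_DMBS 16
#define NO_CLIENT 0xff

struct client { const char *name; int id; };

struct device {
	struct client *subs[MAX_CLIENTS];  /* IRQ/event forwarding array */
	unsigned char dmb_owner[NR_DMBS];  /* which client owns each DMB */
};

static struct client *clients[MAX_CLIENTS];  /* global registry */

/* In the kernel: clients[] under clients_lock (a mutex, process context),
 * dev->subs[] under the per-device spinlock (shared with the IRQ path). */
static int register_client(struct client *c, struct device *d)
{
	for (int i = 0; i < MAX_CLIENTS; i++) {
		if (!clients[i]) {
			clients[i] = c;
			c->id = i;
			d->subs[i] = c;  /* start forwarding to this client */
			return 0;
		}
	}
	return -1;  /* -ENOSPC in the kernel */
}

static int unregister_client(struct client *c, struct device *d)
{
	d->subs[c->id] = NULL;  /* stop IRQ/event forwarding first */
	for (int i = 0; i < NR_DMBS; i++)
		if (d->dmb_owner[i] == c->id)
			return -1;  /* -EBUSY: client still owns DMBs */
	clients[c->id] = NULL;
	return 0;
}
```

The ordering mirrors the patch: forwarding is torn down before the DMB ownership check, so a busy client stops receiving interrupts even when unregistration fails.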
+1 -1
drivers/scsi/aacraid/aacraid.h
··· 2618 2618 struct aac_aifcmd { 2619 2619 __le32 command; /* Tell host what type of notify this is */ 2620 2620 __le32 seqnum; /* To allow ordering of reports (if necessary) */ 2621 - u8 data[1]; /* Undefined length (from kernel viewpoint) */ 2621 + u8 data[]; /* Undefined length (from kernel viewpoint) */ 2622 2622 }; 2623 2623 2624 2624 /**
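The aacraid change above converts a one-element trailing array into a C99 flexible array member. The difference matters for sizeof()-based length math: the placeholder element is counted by sizeof(), a flexible array member is not. A small illustration with stand-in structs (not the driver's real layout):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Old style: data[1] is a placeholder that inflates sizeof(). */
struct aif_old {
	uint32_t command;
	uint32_t seqnum;
	uint8_t  data[1];
};

/* New style: flexible array member contributes nothing to sizeof(),
 * so sizeof(struct aif_new) is exactly the header size. */
struct aif_new {
	uint32_t command;
	uint32_t seqnum;
	uint8_t  data[];
};
```

With data[], allocating `sizeof(struct aif_new) + payload_len` is exact, and compilers can bounds-check accesses into data[] (e.g. with FORTIFY_SOURCE) instead of treating it as a one-byte array.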
+1 -1
drivers/scsi/fnic/fnic_trace.c
··· 465 465 fnic_max_trace_entries = (trace_max_pages * PAGE_SIZE)/ 466 466 FNIC_ENTRY_SIZE_BYTES; 467 467 468 - fnic_trace_buf_p = (unsigned long)vzalloc(trace_max_pages * PAGE_SIZE); 468 + fnic_trace_buf_p = (unsigned long)vcalloc(trace_max_pages, PAGE_SIZE); 469 469 if (!fnic_trace_buf_p) { 470 470 printk(KERN_ERR PFX "Failed to allocate memory " 471 471 "for fnic_trace_buf_p\n");
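The fnic change above swaps `vzalloc(trace_max_pages * PAGE_SIZE)` for `vcalloc(trace_max_pages, PAGE_SIZE)`: same zeroed allocation, but the element-count multiplication is overflow-checked instead of silently wrapping. A userspace sketch of the idea, where `xcalloc_like()` is a hypothetical stand-in for vcalloc:

```c
#include <assert.h>
#include <stdlib.h>
#include <stdint.h>

/* Hypothetical vcalloc-like wrapper: fail cleanly when n * size would
 * wrap around, instead of allocating a too-small buffer. */
static void *xcalloc_like(size_t n, size_t size)
{
	size_t bytes;

	if (__builtin_mul_overflow(n, size, &bytes))
		return NULL;          /* n * size would overflow */
	return calloc(n, size);       /* zeroed, like vzalloc/vcalloc */
}
```

With the unchecked multiply, an attacker-influenced `n` can wrap to a tiny allocation that later code indexes as if it were `n` elements; the two-argument form closes that off.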
+2
drivers/scsi/lpfc/lpfc_hbadisc.c
··· 6944 6944 if (rc) 6945 6945 return; 6946 6946 /* Reset HBA FCF states after successful unregister FCF */ 6947 + spin_lock_irq(&phba->hbalock); 6947 6948 phba->fcf.fcf_flag = 0; 6949 + spin_unlock_irq(&phba->hbalock); 6948 6950 phba->fcf.current_rec.flag = 0; 6949 6951 6950 6952 /*
-1
drivers/scsi/qla2xxx/qla_def.h
··· 4462 4462 4463 4463 /* n2n */ 4464 4464 struct fc_els_flogi plogi_els_payld; 4465 - #define LOGIN_TEMPLATE_SIZE (sizeof(struct fc_els_flogi) - 4) 4466 4465 4467 4466 void *swl; 4468 4467
+2 -2
drivers/scsi/qla2xxx/qla_init.c
··· 8434 8434 ql_dbg(ql_dbg_init, vha, 0x0163, 8435 8435 "-> fwdt%u template allocate template %#x words...\n", 8436 8436 j, risc_size); 8437 - fwdt->template = vmalloc(risc_size * sizeof(*dcode)); 8437 + fwdt->template = vmalloc_array(risc_size, sizeof(*dcode)); 8438 8438 if (!fwdt->template) { 8439 8439 ql_log(ql_log_warn, vha, 0x0164, 8440 8440 "-> fwdt%u failed allocate template.\n", j); ··· 8689 8689 ql_dbg(ql_dbg_init, vha, 0x0173, 8690 8690 "-> fwdt%u template allocate template %#x words...\n", 8691 8691 j, risc_size); 8692 - fwdt->template = vmalloc(risc_size * sizeof(*dcode)); 8692 + fwdt->template = vmalloc_array(risc_size, sizeof(*dcode)); 8693 8693 if (!fwdt->template) { 8694 8694 ql_log(ql_log_warn, vha, 0x0174, 8695 8695 "-> fwdt%u failed allocate template.\n", j);
+3 -2
drivers/scsi/qla2xxx/qla_iocb.c
··· 3073 3073 memset(ptr, 0, sizeof(struct els_plogi_payload)); 3074 3074 memset(resp_ptr, 0, sizeof(struct els_plogi_payload)); 3075 3075 memcpy(elsio->u.els_plogi.els_plogi_pyld->data, 3076 - &ha->plogi_els_payld.fl_csp, LOGIN_TEMPLATE_SIZE); 3076 + (void *)&ha->plogi_els_payld + offsetof(struct fc_els_flogi, fl_csp), 3077 + sizeof(ha->plogi_els_payld) - offsetof(struct fc_els_flogi, fl_csp)); 3077 3078 3078 3079 elsio->u.els_plogi.els_cmd = els_opcode; 3079 3080 elsio->u.els_plogi.els_plogi_pyld->opcode = els_opcode; ··· 3912 3911 3913 3912 pkt = __qla2x00_alloc_iocbs(sp->qpair, sp); 3914 3913 if (!pkt) { 3915 - rval = EAGAIN; 3914 + rval = -EAGAIN; 3916 3915 ql_log(ql_log_warn, vha, 0x700c, 3917 3916 "qla2x00_alloc_iocbs failed.\n"); 3918 3917 goto done;
-8
drivers/scsi/scsi_debug.c
··· 841 841 static int submit_queues = DEF_SUBMIT_QUEUES; /* > 1 for multi-queue (mq) */ 842 842 static int poll_queues; /* iouring iopoll interface.*/ 843 843 844 - static DEFINE_RWLOCK(atomic_rw); 845 - static DEFINE_RWLOCK(atomic_rw2); 846 - 847 - static rwlock_t *ramdisk_lck_a[2]; 848 - 849 844 static char sdebug_proc_name[] = MY_NAME; 850 845 static const char *my_name = MY_NAME; 851 846 ··· 6812 6817 unsigned long sz; 6813 6818 int k, ret, hosts_to_add; 6814 6819 int idx = -1; 6815 - 6816 - ramdisk_lck_a[0] = &atomic_rw; 6817 - ramdisk_lck_a[1] = &atomic_rw2; 6818 6820 6819 6821 if (sdebug_ndelay >= 1000 * 1000 * 1000) { 6820 6822 pr_warn("ndelay must be less than 1 second, ignored\n");
+5 -7
drivers/scsi/sd_zbc.c
··· 831 831 struct request_queue *q = disk->queue; 832 832 u32 zone_blocks = sdkp->early_zone_info.zone_blocks; 833 833 unsigned int nr_zones = sdkp->early_zone_info.nr_zones; 834 - u32 max_append; 835 834 int ret = 0; 836 835 unsigned int flags; 837 836 ··· 875 876 goto unlock; 876 877 } 877 878 879 + blk_queue_chunk_sectors(q, 880 + logical_to_sectors(sdkp->device, zone_blocks)); 881 + blk_queue_max_zone_append_sectors(q, 882 + q->limits.max_segments << PAGE_SECTORS_SHIFT); 883 + 878 884 ret = blk_revalidate_disk_zones(disk, sd_zbc_revalidate_zones_cb); 879 885 880 886 memalloc_noio_restore(flags); ··· 891 887 sdkp->capacity = 0; 892 888 goto unlock; 893 889 } 894 - 895 - max_append = min_t(u32, logical_to_sectors(sdkp->device, zone_blocks), 896 - q->limits.max_segments << PAGE_SECTORS_SHIFT); 897 - max_append = min_t(u32, max_append, queue_max_hw_sectors(q)); 898 - 899 - blk_queue_max_zone_append_sectors(q, max_append); 900 890 901 891 sd_zbc_print_zones(sdkp); 902 892
+2
drivers/scsi/storvsc_drv.c
··· 318 318 #define SRB_STATUS_INVALID_REQUEST 0x06 319 319 #define SRB_STATUS_DATA_OVERRUN 0x12 320 320 #define SRB_STATUS_INVALID_LUN 0x20 321 + #define SRB_STATUS_INTERNAL_ERROR 0x30 321 322 322 323 #define SRB_STATUS(status) \ 323 324 (status & ~(SRB_STATUS_AUTOSENSE_VALID | SRB_STATUS_QUEUE_FROZEN)) ··· 979 978 case SRB_STATUS_ERROR: 980 979 case SRB_STATUS_ABORTED: 981 980 case SRB_STATUS_INVALID_REQUEST: 981 + case SRB_STATUS_INTERNAL_ERROR: 982 982 if (vm_srb->srb_status & SRB_STATUS_AUTOSENSE_VALID) { 983 983 /* Check for capacity change */ 984 984 if ((asc == 0x2a) && (ascq == 0x9)) {
+1 -1
drivers/spi/spi-bcm63xx.c
··· 126 126 SPI_MSG_DATA_SIZE, 127 127 }; 128 128 129 - #define BCM63XX_SPI_MAX_PREPEND 15 129 + #define BCM63XX_SPI_MAX_PREPEND 7 130 130 131 131 #define BCM63XX_SPI_MAX_CS 8 132 132 #define BCM63XX_SPI_BUS_NUM 0
+2
drivers/spi/spi-s3c64xx.c
··· 684 684 685 685 if ((sdd->cur_mode & SPI_LOOP) && sdd->port_conf->has_loopback) 686 686 val |= S3C64XX_SPI_MODE_SELF_LOOPBACK; 687 + else 688 + val &= ~S3C64XX_SPI_MODE_SELF_LOOPBACK; 687 689 688 690 writel(val, regs + S3C64XX_SPI_MODE_CFG); 689 691
+38
drivers/ufs/core/ufshcd.c
··· 8520 8520 return ret; 8521 8521 } 8522 8522 8523 + static void ufshcd_set_timestamp_attr(struct ufs_hba *hba) 8524 + { 8525 + int err; 8526 + struct ufs_query_req *request = NULL; 8527 + struct ufs_query_res *response = NULL; 8528 + struct ufs_dev_info *dev_info = &hba->dev_info; 8529 + struct utp_upiu_query_v4_0 *upiu_data; 8530 + 8531 + if (dev_info->wspecversion < 0x400) 8532 + return; 8533 + 8534 + ufshcd_hold(hba); 8535 + 8536 + mutex_lock(&hba->dev_cmd.lock); 8537 + 8538 + ufshcd_init_query(hba, &request, &response, 8539 + UPIU_QUERY_OPCODE_WRITE_ATTR, 8540 + QUERY_ATTR_IDN_TIMESTAMP, 0, 0); 8541 + 8542 + request->query_func = UPIU_QUERY_FUNC_STANDARD_WRITE_REQUEST; 8543 + 8544 + upiu_data = (struct utp_upiu_query_v4_0 *)&request->upiu_req; 8545 + 8546 + put_unaligned_be64(ktime_get_real_ns(), &upiu_data->osf3); 8547 + 8548 + err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_QUERY, QUERY_REQ_TIMEOUT); 8549 + 8550 + if (err) 8551 + dev_err(hba->dev, "%s: failed to set timestamp %d\n", 8552 + __func__, err); 8553 + 8554 + mutex_unlock(&hba->dev_cmd.lock); 8555 + ufshcd_release(hba); 8556 + } 8557 + 8523 8558 /** 8524 8559 * ufshcd_add_lus - probe and add UFS logical units 8525 8560 * @hba: per-adapter instance ··· 8742 8707 /* UFS device is also active now */ 8743 8708 ufshcd_set_ufs_dev_active(hba); 8744 8709 ufshcd_force_reset_auto_bkops(hba); 8710 + 8711 + ufshcd_set_timestamp_attr(hba); 8745 8712 8746 8713 /* Gear up to HS gear if supported */ 8747 8714 if (hba->max_pwr_info.is_valid) { ··· 9786 9749 ret = ufshcd_set_dev_pwr_mode(hba, UFS_ACTIVE_PWR_MODE); 9787 9750 if (ret) 9788 9751 goto set_old_link_state; 9752 + ufshcd_set_timestamp_attr(hba); 9789 9753 } 9790 9754 9791 9755 if (ufshcd_keep_autobkops_enabled_except_suspend(hba))
+1
drivers/ufs/host/Kconfig
··· 72 72 config SCSI_UFS_MEDIATEK 73 73 tristate "Mediatek specific hooks to UFS controller platform driver" 74 74 depends on SCSI_UFSHCD_PLATFORM && ARCH_MEDIATEK 75 + depends on RESET_CONTROLLER 75 76 select PHY_MTK_UFS 76 77 select RESET_TI_SYSCON 77 78 help
+2
drivers/xen/grant-dma-ops.c
··· 303 303 while (!pci_is_root_bus(bus)) 304 304 bus = bus->parent; 305 305 306 + if (!bus->bridge->parent) 307 + return NULL; 306 308 return of_node_get(bus->bridge->parent->of_node); 307 309 } 308 310
+17 -20
fs/erofs/decompressor.c
··· 148 148 *maptype = 0; 149 149 return inpage; 150 150 } 151 - kunmap_atomic(inpage); 151 + kunmap_local(inpage); 152 152 might_sleep(); 153 153 src = erofs_vm_map_ram(rq->in, ctx->inpages); 154 154 if (!src) ··· 162 162 src = erofs_get_pcpubuf(ctx->inpages); 163 163 if (!src) { 164 164 DBG_BUGON(1); 165 - kunmap_atomic(inpage); 165 + kunmap_local(inpage); 166 166 return ERR_PTR(-EFAULT); 167 167 } 168 168 ··· 173 173 min_t(unsigned int, total, PAGE_SIZE - *inputmargin); 174 174 175 175 if (!inpage) 176 - inpage = kmap_atomic(*in); 176 + inpage = kmap_local_page(*in); 177 177 memcpy(tmp, inpage + *inputmargin, page_copycnt); 178 - kunmap_atomic(inpage); 178 + kunmap_local(inpage); 179 179 inpage = NULL; 180 180 tmp += page_copycnt; 181 181 total -= page_copycnt; ··· 214 214 int ret, maptype; 215 215 216 216 DBG_BUGON(*rq->in == NULL); 217 - headpage = kmap_atomic(*rq->in); 217 + headpage = kmap_local_page(*rq->in); 218 218 219 219 /* LZ4 decompression inplace is only safe if zero_padding is enabled */ 220 220 if (erofs_sb_has_zero_padding(EROFS_SB(rq->sb))) { ··· 223 223 min_t(unsigned int, rq->inputsize, 224 224 rq->sb->s_blocksize - rq->pageofs_in)); 225 225 if (ret) { 226 - kunmap_atomic(headpage); 226 + kunmap_local(headpage); 227 227 return ret; 228 228 } 229 229 may_inplace = !((rq->pageofs_in + rq->inputsize) & ··· 261 261 } 262 262 263 263 if (maptype == 0) { 264 - kunmap_atomic(headpage); 264 + kunmap_local(headpage); 265 265 } else if (maptype == 1) { 266 266 vm_unmap_ram(src, ctx->inpages); 267 267 } else if (maptype == 2) { ··· 289 289 /* one optimized fast path only for non bigpcluster cases yet */ 290 290 if (ctx.inpages == 1 && ctx.outpages == 1 && !rq->inplace_io) { 291 291 DBG_BUGON(!*rq->out); 292 - dst = kmap_atomic(*rq->out); 292 + dst = kmap_local_page(*rq->out); 293 293 dst_maptype = 0; 294 294 goto dstmap_out; 295 295 } ··· 311 311 dstmap_out: 312 312 ret = z_erofs_lz4_decompress_mem(&ctx, dst + rq->pageofs_out); 313 313 if (!dst_maptype) 
314 - kunmap_atomic(dst); 314 + kunmap_local(dst); 315 315 else if (dst_maptype == 2) 316 316 vm_unmap_ram(dst, ctx.outpages); 317 317 return ret; ··· 328 328 const unsigned int lefthalf = rq->outputsize - righthalf; 329 329 const unsigned int interlaced_offset = 330 330 rq->alg == Z_EROFS_COMPRESSION_SHIFTED ? 0 : rq->pageofs_out; 331 - unsigned char *src, *dst; 331 + u8 *src; 332 332 333 333 if (outpages > 2 && rq->alg == Z_EROFS_COMPRESSION_SHIFTED) { 334 334 DBG_BUGON(1); ··· 341 341 } 342 342 343 343 src = kmap_local_page(rq->in[inpages - 1]) + rq->pageofs_in; 344 - if (rq->out[0]) { 345 - dst = kmap_local_page(rq->out[0]); 346 - memcpy(dst + rq->pageofs_out, src + interlaced_offset, 347 - righthalf); 348 - kunmap_local(dst); 349 - } 344 + if (rq->out[0]) 345 + memcpy_to_page(rq->out[0], rq->pageofs_out, 346 + src + interlaced_offset, righthalf); 350 347 351 348 if (outpages > inpages) { 352 349 DBG_BUGON(!rq->out[outpages - 1]); 353 350 if (rq->out[outpages - 1] != rq->in[inpages - 1]) { 354 - dst = kmap_local_page(rq->out[outpages - 1]); 355 - memcpy(dst, interlaced_offset ? src : 356 - (src + righthalf), lefthalf); 357 - kunmap_local(dst); 351 + memcpy_to_page(rq->out[outpages - 1], 0, src + 352 + (interlaced_offset ? 0 : righthalf), 353 + lefthalf); 358 354 } else if (!interlaced_offset) { 359 355 memmove(src, src + righthalf, lefthalf); 356 + flush_dcache_page(rq->in[inpages - 1]); 360 357 } 361 358 } 362 359 kunmap_local(src);
+2 -1
fs/erofs/inode.c
··· 183 183 184 184 inode->i_flags &= ~S_DAX; 185 185 if (test_opt(&sbi->opt, DAX_ALWAYS) && S_ISREG(inode->i_mode) && 186 - vi->datalayout == EROFS_INODE_FLAT_PLAIN) 186 + (vi->datalayout == EROFS_INODE_FLAT_PLAIN || 187 + vi->datalayout == EROFS_INODE_CHUNK_BASED)) 187 188 inode->i_flags |= S_DAX; 188 189 189 190 if (!nblks)
+2 -2
fs/erofs/zdata.c
··· 1035 1035 */ 1036 1036 tight &= (fe->mode > Z_EROFS_PCLUSTER_FOLLOWED_NOINPLACE); 1037 1037 1038 - cur = end - min_t(unsigned int, offset + end - map->m_la, end); 1038 + cur = end - min_t(erofs_off_t, offset + end - map->m_la, end); 1039 1039 if (!(map->m_flags & EROFS_MAP_MAPPED)) { 1040 1040 zero_user_segment(page, cur, end); 1041 1041 goto next_part; ··· 1841 1841 } 1842 1842 1843 1843 cur = map->m_la + map->m_llen - 1; 1844 - while (cur >= end) { 1844 + while ((cur >= end) && (cur < i_size_read(inode))) { 1845 1845 pgoff_t index = cur >> PAGE_SHIFT; 1846 1846 struct page *page; 1847 1847
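The first zdata.c hunk widens the min_t() type from `unsigned int` to `erofs_off_t` because the operands are byte offsets that can exceed 32 bits: taking the minimum in a 32-bit type truncates before comparing. A quick model of the bug (erofs_off_t modelled as uint64_t; values illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* min_t(unsigned int, ...) casts both operands to 32 bits first */
static uint32_t min_u32(uint32_t a, uint32_t b) { return a < b ? a : b; }

/* min_t(erofs_off_t, ...) keeps the full 64-bit width */
static uint64_t min_u64(uint64_t a, uint64_t b) { return a < b ? a : b; }
```

For an offset just past 4 GiB, the truncated compare picks the wrong bound: `(uint32_t)((1ULL << 32) + 100)` is 100, so the 32-bit minimum against 200 returns 100 while the 64-bit minimum correctly returns 200.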
+1 -1
fs/smb/client/cifsglob.h
··· 532 532 /* Check for STATUS_IO_TIMEOUT */ 533 533 bool (*is_status_io_timeout)(char *buf); 534 534 /* Check for STATUS_NETWORK_NAME_DELETED */ 535 - void (*is_network_name_deleted)(char *buf, struct TCP_Server_Info *srv); 535 + bool (*is_network_name_deleted)(char *buf, struct TCP_Server_Info *srv); 536 536 }; 537 537 538 538 struct smb_version_values {
+1 -1
fs/smb/client/cifssmb.c
··· 3184 3184 param_offset = offsetof(struct smb_com_transaction2_spi_req, 3185 3185 InformationLevel) - 4; 3186 3186 offset = param_offset + params; 3187 - parm_data = ((char *) &pSMB->hdr.Protocol) + offset; 3187 + parm_data = ((char *)pSMB) + sizeof(pSMB->hdr.smb_buf_length) + offset; 3188 3188 pSMB->ParameterOffset = cpu_to_le16(param_offset); 3189 3189 3190 3190 /* convert to on the wire format for POSIX ACL */
+23 -7
fs/smb/client/connect.c
··· 60 60 #define TLINK_IDLE_EXPIRE (600 * HZ) 61 61 62 62 /* Drop the connection to not overload the server */ 63 - #define NUM_STATUS_IO_TIMEOUT 5 63 + #define MAX_STATUS_IO_TIMEOUT 5 64 64 65 65 static int ip_connect(struct TCP_Server_Info *server); 66 66 static int generic_ip_connect(struct TCP_Server_Info *server); ··· 1117 1117 struct mid_q_entry *mids[MAX_COMPOUND]; 1118 1118 char *bufs[MAX_COMPOUND]; 1119 1119 unsigned int noreclaim_flag, num_io_timeout = 0; 1120 + bool pending_reconnect = false; 1120 1121 1121 1122 noreclaim_flag = memalloc_noreclaim_save(); 1122 1123 cifs_dbg(FYI, "Demultiplex PID: %d\n", task_pid_nr(current)); ··· 1157 1156 cifs_dbg(FYI, "RFC1002 header 0x%x\n", pdu_length); 1158 1157 if (!is_smb_response(server, buf[0])) 1159 1158 continue; 1159 + 1160 + pending_reconnect = false; 1160 1161 next_pdu: 1161 1162 server->pdu_size = pdu_length; 1162 1163 ··· 1216 1213 if (server->ops->is_status_io_timeout && 1217 1214 server->ops->is_status_io_timeout(buf)) { 1218 1215 num_io_timeout++; 1219 - if (num_io_timeout > NUM_STATUS_IO_TIMEOUT) { 1220 - cifs_reconnect(server, false); 1216 + if (num_io_timeout > MAX_STATUS_IO_TIMEOUT) { 1217 + cifs_server_dbg(VFS, 1218 + "Number of request timeouts exceeded %d. Reconnecting", 1219 + MAX_STATUS_IO_TIMEOUT); 1220 + 1221 + pending_reconnect = true; 1221 1222 num_io_timeout = 0; 1222 - continue; 1223 1223 } 1224 1224 } 1225 1225 ··· 1232 1226 if (mids[i] != NULL) { 1233 1227 mids[i]->resp_buf_size = server->pdu_size; 1234 1228 1235 - if (bufs[i] && server->ops->is_network_name_deleted) 1236 - server->ops->is_network_name_deleted(bufs[i], 1237 - server); 1229 + if (bufs[i] != NULL) { 1230 + if (server->ops->is_network_name_deleted && 1231 + server->ops->is_network_name_deleted(bufs[i], 1232 + server)) { 1233 + cifs_server_dbg(FYI, 1234 + "Share deleted. Reconnect needed"); 1236 + } 1237 + } 1238 1237 1239 1238 if (!mids[i]->multiRsp || mids[i]->multiEnd) 1240 1239 mids[i]->callback(mids[i]); ··· 1274 1263 buf = server->smallbuf; 1275 1264 goto next_pdu; 1276 1265 } 1266 + 1267 + /* do this reconnect at the very end after processing all MIDs */ 1268 + if (pending_reconnect) 1269 + cifs_reconnect(server, true); 1270 + 1277 1271 } /* end while !EXITING */ 1278 1272 1279 1273 /* buffer usually freed in free_mid - need to free it here on exit */
+10 -16
fs/smb/client/dfs.c
··· 66 66 return rc; 67 67 } 68 68 69 + /* 70 + * Track individual DFS referral servers used by new DFS mount. 71 + * 72 + * On success, their lifetime will be shared by final tcon (dfs_ses_list). 73 + * Otherwise, they will be put by dfs_put_root_smb_sessions() in cifs_mount(). 74 + */ 69 75 static int add_root_smb_session(struct cifs_mount_ctx *mnt_ctx) 70 76 { 71 77 struct smb3_fs_context *ctx = mnt_ctx->fs_ctx; ··· 86 80 INIT_LIST_HEAD(&root_ses->list); 87 81 88 82 spin_lock(&cifs_tcp_ses_lock); 89 - ses->ses_count++; 83 + cifs_smb_ses_inc_refcount(ses); 90 84 spin_unlock(&cifs_tcp_ses_lock); 91 85 root_ses->ses = ses; 92 86 list_add_tail(&root_ses->list, &mnt_ctx->dfs_ses_list); 93 87 } 88 + /* Select new DFS referral server so that new referrals go through it */ 94 89 ctx->dfs_root_ses = ses; 95 90 return 0; 96 91 } ··· 249 242 int dfs_mount_share(struct cifs_mount_ctx *mnt_ctx, bool *isdfs) 250 243 { 251 244 struct smb3_fs_context *ctx = mnt_ctx->fs_ctx; 252 - struct cifs_ses *ses; 253 245 bool nodfs = ctx->nodfs; 254 246 int rc; 255 247 ··· 282 276 } 283 277 284 278 *isdfs = true; 285 - /* 286 - * Prevent DFS root session of being put in the first call to 287 - * cifs_mount_put_conns(). If another DFS root server was not found 288 - * while chasing the referrals (@ctx->dfs_root_ses == @ses), then we 289 - * can safely put extra refcount of @ses. 290 - */ 291 - ses = mnt_ctx->ses; 292 - mnt_ctx->ses = NULL; 293 - mnt_ctx->server = NULL; 294 - rc = __dfs_mount_share(mnt_ctx); 295 - if (ses == ctx->dfs_root_ses) 296 - cifs_put_smb_ses(ses); 297 - 298 - return rc; 279 + add_root_smb_session(mnt_ctx); 280 + return __dfs_mount_share(mnt_ctx); 299 281 } 300 282 301 283 /* Update dfs referral path of superblock */
+2 -2
fs/smb/client/file.c
··· 1080 1080 cfile = file->private_data; 1081 1081 file->private_data = NULL; 1082 1082 dclose = kmalloc(sizeof(struct cifs_deferred_close), GFP_KERNEL); 1083 - if ((cinode->oplock == CIFS_CACHE_RHW_FLG) && 1084 - cinode->lease_granted && 1083 + if ((cifs_sb->ctx->closetimeo && cinode->oplock == CIFS_CACHE_RHW_FLG) 1084 + && cinode->lease_granted && 1085 1085 !test_bit(CIFS_INO_CLOSE_ON_LOCK, &cinode->flags) && 1086 1086 dclose) { 1087 1087 if (test_and_clear_bit(CIFS_INO_MODIFIED_ATTR, &cinode->flags)) {
+5 -3
fs/smb/client/smb2ops.c
··· 2395 2395 return false; 2396 2396 } 2397 2397 2398 - static void 2398 + static bool 2399 2399 smb2_is_network_name_deleted(char *buf, struct TCP_Server_Info *server) 2400 2400 { 2401 2401 struct smb2_hdr *shdr = (struct smb2_hdr *)buf; ··· 2404 2404 struct cifs_tcon *tcon; 2405 2405 2406 2406 if (shdr->Status != STATUS_NETWORK_NAME_DELETED) 2407 - return; 2407 + return false; 2408 2408 2409 2409 /* If server is a channel, select the primary channel */ 2410 2410 pserver = CIFS_SERVER_IS_CHAN(server) ? server->primary_server : server; ··· 2419 2419 spin_unlock(&cifs_tcp_ses_lock); 2420 2420 pr_warn_once("Server share %s deleted.\n", 2421 2421 tcon->tree_name); 2422 - return; 2422 + return true; 2423 2423 } 2424 2424 } 2425 2425 } 2426 2426 spin_unlock(&cifs_tcp_ses_lock); 2427 + 2428 + return false; 2427 2429 } 2428 2430 2429 2431 static int
+1 -1
fs/smb/client/smb2transport.c
··· 160 160 spin_unlock(&ses->ses_lock); 161 161 continue; 162 162 } 163 - ++ses->ses_count; 163 + cifs_smb_ses_inc_refcount(ses); 164 164 spin_unlock(&ses->ses_lock); 165 165 return ses; 166 166 }
-1
include/asm-generic/vmlinux.lds.h
··· 578 578 *(.text.unlikely .text.unlikely.*) \ 579 579 *(.text.unknown .text.unknown.*) \ 580 580 NOINSTR_TEXT \ 581 - *(.text..refcount) \ 582 581 *(.ref.text) \ 583 582 *(.text.asan.* .text.tsan.*) \ 584 583 MEM_KEEP(init.text*) \
+2 -3
include/drm/gpu_scheduler.h
··· 583 583 bool drm_sched_entity_is_ready(struct drm_sched_entity *entity); 584 584 int drm_sched_entity_error(struct drm_sched_entity *entity); 585 585 586 - void drm_sched_fence_set_parent(struct drm_sched_fence *s_fence, 587 - struct dma_fence *fence); 588 586 struct drm_sched_fence *drm_sched_fence_alloc( 589 587 struct drm_sched_entity *s_entity, void *owner); 590 588 void drm_sched_fence_init(struct drm_sched_fence *fence, 591 589 struct drm_sched_entity *entity); 592 590 void drm_sched_fence_free(struct drm_sched_fence *fence); 593 591 594 - void drm_sched_fence_scheduled(struct drm_sched_fence *fence); 592 + void drm_sched_fence_scheduled(struct drm_sched_fence *fence, 593 + struct dma_fence *parent); 595 594 void drm_sched_fence_finished(struct drm_sched_fence *fence, int result); 596 595 597 596 unsigned long drm_sched_suspend_timeout(struct drm_gpu_scheduler *sched);
+1
include/linux/blk-crypto-profile.h
··· 111 111 * keyslots while ensuring that they can't be changed concurrently. 112 112 */ 113 113 struct rw_semaphore lock; 114 + struct lock_class_key lockdep_key; 114 115 115 116 /* List of idle slots, with least recently used slot at front */ 116 117 wait_queue_head_t idle_slots_wait_queue;
+3 -3
include/linux/blk-mq.h
··· 158 158 159 159 /* 160 160 * The rb_node is only used inside the io scheduler, requests 161 - * are pruned when moved to the dispatch queue. So let the 162 - * completion_data share space with the rb_node. 161 + * are pruned when moved to the dispatch queue. special_vec must 162 + * only be used if RQF_SPECIAL_PAYLOAD is set, and those cannot be 163 + * insert into an IO scheduler. 163 164 */ 164 165 union { 165 166 struct rb_node rb_node; /* sort/lookup */ 166 167 struct bio_vec special_vec; 167 - void *completion_data; 168 168 }; 169 169 170 170 /*
+26
include/linux/device.h
··· 349 349 gfp_t gfp_mask, unsigned int order); 350 350 void devm_free_pages(struct device *dev, unsigned long addr); 351 351 352 + #ifdef CONFIG_HAS_IOMEM 352 353 void __iomem *devm_ioremap_resource(struct device *dev, 353 354 const struct resource *res); 354 355 void __iomem *devm_ioremap_resource_wc(struct device *dev, ··· 358 357 void __iomem *devm_of_iomap(struct device *dev, 359 358 struct device_node *node, int index, 360 359 resource_size_t *size); 360 + #else 361 + 362 + static inline 363 + void __iomem *devm_ioremap_resource(struct device *dev, 364 + const struct resource *res) 365 + { 366 + return ERR_PTR(-EINVAL); 367 + } 368 + 369 + static inline 370 + void __iomem *devm_ioremap_resource_wc(struct device *dev, 371 + const struct resource *res) 372 + { 373 + return ERR_PTR(-EINVAL); 374 + } 375 + 376 + static inline 377 + void __iomem *devm_of_iomap(struct device *dev, 378 + struct device_node *node, int index, 379 + resource_size_t *size) 380 + { 381 + return ERR_PTR(-EINVAL); 382 + } 383 + 384 + #endif 361 385 362 386 /* allows to add/remove a custom action to devres stack */ 363 387 void devm_remove_action(struct device *dev, void (*action)(void *), void *data);
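The !CONFIG_HAS_IOMEM stubs above return `ERR_PTR(-EINVAL)` rather than NULL, so existing callers keep their single `IS_ERR()` error path whether or not ioremap support is built in (the point of the UML build fix in this series). A userspace model of the ERR_PTR convention, simplified from include/linux/err.h:

```c
#include <assert.h>
#include <errno.h>

#define MAX_ERRNO 4095

/* An errno value is encoded in the top 4095 addresses of the pointer
 * space, which no valid allocation can occupy. */
static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* What a stubbed devm_ioremap_resource() boils down to without iomem */
static void *stub_ioremap(void)
{
	return ERR_PTR(-EINVAL);
}
```

A driver calling the stub under UML fails probe cleanly with -EINVAL at runtime instead of failing to link at build time, which is what lets KUnit's `--alltests` runs cover sound drivers.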
+1 -1
include/linux/dma-fence.h
··· 606 606 void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline); 607 607 608 608 struct dma_fence *dma_fence_get_stub(void); 609 - struct dma_fence *dma_fence_allocate_private_stub(void); 609 + struct dma_fence *dma_fence_allocate_private_stub(ktime_t timestamp); 610 610 u64 dma_fence_context_alloc(unsigned num); 611 611 612 612 extern const struct dma_fence_ops dma_fence_array_ops;
+9
include/linux/ftrace.h
··· 41 41 struct ftrace_regs; 42 42 struct dyn_ftrace; 43 43 44 + char *arch_ftrace_match_adjust(char *str, const char *search); 45 + 46 + #ifdef CONFIG_HAVE_FUNCTION_GRAPH_RETVAL 47 + struct fgraph_ret_regs; 48 + unsigned long ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs); 49 + #else 50 + unsigned long ftrace_return_to_handler(unsigned long frame_pointer); 51 + #endif 52 + 44 53 #ifdef CONFIG_FUNCTION_TRACER 45 54 /* 46 55 * If the arch's mcount caller does not support all of ftrace's
+1 -6
include/linux/ism.h
··· 44 44 u64 local_gid; 45 45 int ieq_idx; 46 46 47 - atomic_t free_clients_cnt; 48 - atomic_t add_dev_cnt; 49 - wait_queue_head_t waitq; 47 + struct ism_client *subs[MAX_CLIENTS]; 50 48 }; 51 49 52 50 struct ism_event { ··· 66 68 */ 67 69 void (*handle_irq)(struct ism_dev *dev, unsigned int bit, u16 dmbemask); 68 70 /* Private area - don't touch! */ 69 - struct work_struct remove_work; 70 - struct work_struct add_work; 71 - struct ism_dev *tgt_ism; 72 71 u8 id; 73 72 }; 74 73
+1 -1
include/linux/nvme.h
··· 473 473 }; 474 474 475 475 enum { 476 - NVME_ID_NS_NVM_STS_MASK = 0x3f, 476 + NVME_ID_NS_NVM_STS_MASK = 0x7f, 477 477 NVME_ID_NS_NVM_GUARD_SHIFT = 7, 478 478 NVME_ID_NS_NVM_GUARD_MASK = 0x3, 479 479 };
+28
include/linux/platform_device.h
··· 63 63 extern struct device * 64 64 platform_find_device_by_driver(struct device *start, 65 65 const struct device_driver *drv); 66 + 67 + #ifdef CONFIG_HAS_IOMEM 66 68 extern void __iomem * 67 69 devm_platform_get_and_ioremap_resource(struct platform_device *pdev, 68 70 unsigned int index, struct resource **res); ··· 74 72 extern void __iomem * 75 73 devm_platform_ioremap_resource_byname(struct platform_device *pdev, 76 74 const char *name); 75 + #else 76 + 77 + static inline void __iomem * 78 + devm_platform_get_and_ioremap_resource(struct platform_device *pdev, 79 + unsigned int index, struct resource **res) 80 + { 81 + return ERR_PTR(-EINVAL); 82 + } 83 + 84 + 85 + static inline void __iomem * 86 + devm_platform_ioremap_resource(struct platform_device *pdev, 87 + unsigned int index) 88 + { 89 + return ERR_PTR(-EINVAL); 90 + } 91 + 92 + static inline void __iomem * 93 + devm_platform_ioremap_resource_byname(struct platform_device *pdev, 94 + const char *name) 95 + { 96 + return ERR_PTR(-EINVAL); 97 + } 98 + 99 + #endif 100 + 77 101 extern int platform_get_irq(struct platform_device *, unsigned int); 78 102 extern int platform_get_irq_optional(struct platform_device *, unsigned int); 79 103 extern int platform_irq_count(struct platform_device *);
+3 -2
include/linux/psi.h
··· 23 23 void psi_memstall_leave(unsigned long *flags); 24 24 25 25 int psi_show(struct seq_file *s, struct psi_group *group, enum psi_res res); 26 - struct psi_trigger *psi_trigger_create(struct psi_group *group, 27 - char *buf, enum psi_res res, struct file *file); 26 + struct psi_trigger *psi_trigger_create(struct psi_group *group, char *buf, 27 + enum psi_res res, struct file *file, 28 + struct kernfs_open_file *of); 28 29 void psi_trigger_destroy(struct psi_trigger *t); 29 30 30 31 __poll_t psi_trigger_poll(void **trigger_ptr, struct file *file,
+3
include/linux/psi_types.h
··· 137 137 /* Wait queue for polling */ 138 138 wait_queue_head_t event_wait; 139 139 140 + /* Kernfs file for cgroup triggers */ 141 + struct kernfs_open_file *of; 142 + 140 143 /* Pending event flag */ 141 144 int event; 142 145
+1
include/linux/rethook.h
··· 59 59 }; 60 60 61 61 struct rethook *rethook_alloc(void *data, rethook_handler_t handler); 62 + void rethook_stop(struct rethook *rh); 62 63 void rethook_free(struct rethook *rh); 63 64 void rethook_add_node(struct rethook *rh, struct rethook_node *node); 64 65 struct rethook_node *rethook_try_get(struct rethook *rh);
+3
include/net/netfilter/nf_conntrack_tuple.h
··· 67 67 /* The protocol. */ 68 68 u_int8_t protonum; 69 69 70 + /* The direction must be ignored for the tuplehash */ 71 + struct { } __nfct_hash_offsetend; 72 + 70 73 /* The direction (for tuplehash) */ 71 74 u_int8_t dir; 72 75 } dst;
+27 -4
include/net/netfilter/nf_tables.h
··· 1211 1211 1212 1212 unsigned int nft_do_chain(struct nft_pktinfo *pkt, void *priv); 1213 1213 1214 + static inline bool nft_use_inc(u32 *use) 1215 + { 1216 + if (*use == UINT_MAX) 1217 + return false; 1218 + 1219 + (*use)++; 1220 + 1221 + return true; 1222 + } 1223 + 1224 + static inline void nft_use_dec(u32 *use) 1225 + { 1226 + WARN_ON_ONCE((*use)-- == 0); 1227 + } 1228 + 1229 + /* For error and abort path: restore use counter to previous state. */ 1230 + static inline void nft_use_inc_restore(u32 *use) 1231 + { 1232 + WARN_ON_ONCE(!nft_use_inc(use)); 1233 + } 1234 + 1235 + #define nft_use_dec_restore nft_use_dec 1236 + 1214 1237 /** 1215 1238 * struct nft_table - nf_tables table 1216 1239 * ··· 1319 1296 struct list_head list; 1320 1297 struct rhlist_head rhlhead; 1321 1298 struct nft_object_hash_key key; 1322 - u32 genmask:2, 1323 - use:30; 1299 + u32 genmask:2; 1300 + u32 use; 1324 1301 u64 handle; 1325 1302 u16 udlen; 1326 1303 u8 *udata; ··· 1422 1399 char *name; 1423 1400 int hooknum; 1424 1401 int ops_len; 1425 - u32 genmask:2, 1426 - use:30; 1402 + u32 genmask:2; 1403 + u32 use; 1427 1404 u64 handle; 1428 1405 /* runtime data below here */ 1429 1406 struct list_head hook_list ____cacheline_aligned;
+1 -1
include/net/pkt_sched.h
··· 134 134 */ 135 135 static inline unsigned int psched_mtu(const struct net_device *dev) 136 136 { 137 - return dev->mtu + dev->hard_header_len; 137 + return READ_ONCE(dev->mtu) + dev->hard_header_len; 138 138 } 139 139 140 140 static inline struct net *qdisc_net(struct Qdisc *q)
+5 -4
include/soc/mscc/ocelot.h
··· 663 663 struct flow_stats *stats); 664 664 void (*cut_through_fwd)(struct ocelot *ocelot); 665 665 void (*tas_clock_adjust)(struct ocelot *ocelot); 666 + void (*tas_guard_bands_update)(struct ocelot *ocelot, int port); 666 667 void (*update_stats)(struct ocelot *ocelot); 667 668 }; 668 669 ··· 864 863 struct mutex stat_view_lock; 865 864 /* Lock for serializing access to the MAC table */ 866 865 struct mutex mact_lock; 867 - /* Lock for serializing forwarding domain changes */ 866 + /* Lock for serializing forwarding domain changes, including the 867 + * configuration of the Time-Aware Shaper, MAC Merge layer and 868 + * cut-through forwarding, on which it depends 869 + */ 868 870 struct mutex fwd_domain_lock; 869 - 870 - /* Lock for serializing Time-Aware Shaper changes */ 871 - struct mutex tas_lock; 872 871 873 872 struct workqueue_struct *owq; 874 873
+25
include/uapi/scsi/scsi_bsg_ufs.h
··· 71 71 }; 72 72 73 73 /** 74 + * struct utp_upiu_query_v4_0 - upiu request buffer structure for 75 + * query request >= UFS 4.0 spec. 76 + * @opcode: command to perform B-0 77 + * @idn: a value that indicates the particular type of data B-1 78 + * @index: Index to further identify data B-2 79 + * @selector: Index to further identify data B-3 80 + * @osf4: spec field B-5 81 + * @osf5: spec field B 6,7 82 + * @osf6: spec field DW 8,9 83 + * @osf7: spec field DW 10,11 84 + */ 85 + struct utp_upiu_query_v4_0 { 86 + __u8 opcode; 87 + __u8 idn; 88 + __u8 index; 89 + __u8 selector; 90 + __u8 osf3; 91 + __u8 osf4; 92 + __be16 osf5; 93 + __be32 osf6; 94 + __be32 osf7; 95 + __be32 reserved; 96 + }; 97 + 98 + /** 74 99 * struct utp_upiu_cmd - Command UPIU structure 75 100 * @data_transfer_len: Data Transfer Length DW-3 76 101 * @cdb: Command Descriptor Block CDB DW-4 to DW-7
+1
include/ufs/ufs.h
··· 170 170 QUERY_ATTR_IDN_WB_BUFF_LIFE_TIME_EST = 0x1E, 171 171 QUERY_ATTR_IDN_CURR_WB_BUFF_SIZE = 0x1F, 172 172 QUERY_ATTR_IDN_EXT_IID_EN = 0x2A, 173 + QUERY_ATTR_IDN_TIMESTAMP = 0x30 173 174 }; 174 175 175 176 /* Descriptor idn for Query requests */
+13 -2
io_uring/io_uring.c
··· 2489 2489 static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx, 2490 2490 struct io_wait_queue *iowq) 2491 2491 { 2492 + int token, ret; 2493 + 2492 2494 if (unlikely(READ_ONCE(ctx->check_cq))) 2493 2495 return 1; 2494 2496 if (unlikely(!llist_empty(&ctx->work_llist))) ··· 2501 2499 return -EINTR; 2502 2500 if (unlikely(io_should_wake(iowq))) 2503 2501 return 0; 2502 + 2503 + /* 2504 + * Use io_schedule_prepare/finish, so cpufreq can take into account 2505 + * that the task is waiting for IO - turns out to be important for low 2506 + * QD IO. 2507 + */ 2508 + token = io_schedule_prepare(); 2509 + ret = 0; 2504 2510 if (iowq->timeout == KTIME_MAX) 2505 2511 schedule(); 2506 2512 else if (!schedule_hrtimeout(&iowq->timeout, HRTIMER_MODE_ABS)) 2507 - return -ETIME; 2508 - return 0; 2513 + ret = -ETIME; 2514 + io_schedule_finish(token); 2515 + return ret; 2509 2516 } 2510 2517 2511 2518 /*
+24 -16
kernel/bpf/cpumap.c
··· 122 122 atomic_inc(&rcpu->refcnt); 123 123 } 124 124 125 - /* called from workqueue, to workaround syscall using preempt_disable */ 126 - static void cpu_map_kthread_stop(struct work_struct *work) 127 - { 128 - struct bpf_cpu_map_entry *rcpu; 129 - 130 - rcpu = container_of(work, struct bpf_cpu_map_entry, kthread_stop_wq); 131 - 132 - /* Wait for flush in __cpu_map_entry_free(), via full RCU barrier, 133 - * as it waits until all in-flight call_rcu() callbacks complete. 134 - */ 135 - rcu_barrier(); 136 - 137 - /* kthread_stop will wake_up_process and wait for it to complete */ 138 - kthread_stop(rcpu->kthread); 139 - } 140 - 141 125 static void __cpu_map_ring_cleanup(struct ptr_ring *ring) 142 126 { 143 127 /* The tear-down procedure should have made sure that queue is ··· 146 162 ptr_ring_cleanup(rcpu->queue, NULL); 147 163 kfree(rcpu->queue); 148 164 kfree(rcpu); 165 + } 166 + } 167 + 168 + /* called from workqueue, to workaround syscall using preempt_disable */ 169 + static void cpu_map_kthread_stop(struct work_struct *work) 170 + { 171 + struct bpf_cpu_map_entry *rcpu; 172 + int err; 173 + 174 + rcpu = container_of(work, struct bpf_cpu_map_entry, kthread_stop_wq); 175 + 176 + /* Wait for flush in __cpu_map_entry_free(), via full RCU barrier, 177 + * as it waits until all in-flight call_rcu() callbacks complete. 178 + */ 179 + rcu_barrier(); 180 + 181 + /* kthread_stop will wake_up_process and wait for it to complete */ 182 + err = kthread_stop(rcpu->kthread); 183 + if (err) { 184 + /* kthread_stop may be called before cpu_map_kthread_run 185 + * is executed, so we need to release the memory related 186 + * to rcpu. 187 + */ 188 + put_cpu_map_entry(rcpu); 149 189 } 150 190 } 151 191
+3 -2
kernel/bpf/verifier.c
··· 5642 5642 verbose(env, "verifier bug. subprog has tail_call and async cb\n"); 5643 5643 return -EFAULT; 5644 5644 } 5645 - /* async callbacks don't increase bpf prog stack size */ 5646 - continue; 5645 + /* async callbacks don't increase bpf prog stack size unless called directly */ 5646 + if (!bpf_pseudo_call(insn + i)) 5647 + continue; 5647 5648 } 5648 5649 i = next_insn; 5649 5650
+1 -1
kernel/cgroup/cgroup.c
··· 3730 3730 } 3731 3731 3732 3732 psi = cgroup_psi(cgrp); 3733 - new = psi_trigger_create(psi, buf, res, of->file); 3733 + new = psi_trigger_create(psi, buf, res, of->file, of); 3734 3734 if (IS_ERR(new)) { 3735 3735 cgroup_put(cgrp); 3736 3736 return PTR_ERR(new);
+2 -3
kernel/kallsyms.c
··· 174 174 * LLVM appends various suffixes for local functions and variables that 175 175 * must be promoted to global scope as part of LTO. This can break 176 176 * hooking of static functions with kprobes. '.' is not a valid 177 - * character in an identifier in C. Suffixes observed: 177 + * character in an identifier in C. Suffixes observed only with LLVM LTO: 178 178 * - foo.llvm.[0-9a-f]+ 179 - * - foo.[0-9a-f]+ 180 179 */ 181 - res = strchr(s, '.'); 180 + res = strstr(s, ".llvm."); 182 181 if (res) { 183 182 *res = '\0'; 184 183 return true;
+4 -4
kernel/kprobes.c
··· 1072 1072 static int __arm_kprobe_ftrace(struct kprobe *p, struct ftrace_ops *ops, 1073 1073 int *cnt) 1074 1074 { 1075 - int ret = 0; 1075 + int ret; 1076 1076 1077 1077 lockdep_assert_held(&kprobe_mutex); 1078 1078 ··· 1110 1110 static int __disarm_kprobe_ftrace(struct kprobe *p, struct ftrace_ops *ops, 1111 1111 int *cnt) 1112 1112 { 1113 - int ret = 0; 1113 + int ret; 1114 1114 1115 1115 lockdep_assert_held(&kprobe_mutex); 1116 1116 ··· 2007 2007 unsigned long __kretprobe_trampoline_handler(struct pt_regs *regs, 2008 2008 void *frame_pointer) 2009 2009 { 2010 - kprobe_opcode_t *correct_ret_addr = NULL; 2011 2010 struct kretprobe_instance *ri = NULL; 2012 2011 struct llist_node *first, *node = NULL; 2012 + kprobe_opcode_t *correct_ret_addr; 2013 2013 struct kretprobe *rp; 2014 2014 2015 2015 /* Find correct address and all nodes for this frame. */ ··· 2693 2693 2694 2694 static int __init init_kprobes(void) 2695 2695 { 2696 - int i, err = 0; 2696 + int i, err; 2697 2697 2698 2698 /* FIXME allocate the probe table, currently defined statically */ 2699 2699 /* initialize all list heads */
+1
kernel/power/hibernate.c
··· 1179 1179 unsigned maj, min, offset; 1180 1180 char *p, dummy; 1181 1181 1182 + error = 0; 1182 1183 if (sscanf(name, "%u:%u%c", &maj, &min, &dummy) == 2 || 1183 1184 sscanf(name, "%u:%u:%u:%c", &maj, &min, &offset, 1184 1185 &dummy) == 3) {
+7 -2
kernel/power/qos.c
··· 426 426 427 427 /* Definitions related to the frequency QoS below. */ 428 428 429 + static inline bool freq_qos_value_invalid(s32 value) 430 + { 431 + return value < 0 && value != PM_QOS_DEFAULT_VALUE; 432 + } 433 + 429 434 /** 430 435 * freq_constraints_init - Initialize frequency QoS constraints. 431 436 * @qos: Frequency QoS constraints to initialize. ··· 536 531 { 537 532 int ret; 538 533 539 - if (IS_ERR_OR_NULL(qos) || !req || value < 0) 534 + if (IS_ERR_OR_NULL(qos) || !req || freq_qos_value_invalid(value)) 540 535 return -EINVAL; 541 536 542 537 if (WARN(freq_qos_request_active(req), ··· 568 563 */ 569 564 int freq_qos_update_request(struct freq_qos_request *req, s32 new_value) 570 565 { 571 - if (!req || new_value < 0) 566 + if (!req || freq_qos_value_invalid(new_value)) 572 567 return -EINVAL; 573 568 574 569 if (WARN(!freq_qos_request_active(req),
+1 -1
kernel/sched/fair.c
··· 7174 7174 recent_used_cpu != target && 7175 7175 cpus_share_cache(recent_used_cpu, target) && 7176 7176 (available_idle_cpu(recent_used_cpu) || sched_idle_cpu(recent_used_cpu)) && 7177 - cpumask_test_cpu(p->recent_used_cpu, p->cpus_ptr) && 7177 + cpumask_test_cpu(recent_used_cpu, p->cpus_ptr) && 7178 7178 asym_fits_cpu(task_util, util_min, util_max, recent_used_cpu)) { 7179 7179 return recent_used_cpu; 7180 7180 }
+21 -8
kernel/sched/psi.c
··· 493 493 continue; 494 494 495 495 /* Generate an event */ 496 - if (cmpxchg(&t->event, 0, 1) == 0) 497 - wake_up_interruptible(&t->event_wait); 496 + if (cmpxchg(&t->event, 0, 1) == 0) { 497 + if (t->of) 498 + kernfs_notify(t->of->kn); 499 + else 500 + wake_up_interruptible(&t->event_wait); 501 + } 498 502 t->last_event_time = now; 499 503 /* Reset threshold breach flag once event got generated */ 500 504 t->pending_event = false; ··· 1275 1271 return 0; 1276 1272 } 1277 1273 1278 - struct psi_trigger *psi_trigger_create(struct psi_group *group, 1279 - char *buf, enum psi_res res, struct file *file) 1274 + struct psi_trigger *psi_trigger_create(struct psi_group *group, char *buf, 1275 + enum psi_res res, struct file *file, 1276 + struct kernfs_open_file *of) 1280 1277 { 1281 1278 struct psi_trigger *t; 1282 1279 enum psi_states state; ··· 1336 1331 1337 1332 t->event = 0; 1338 1333 t->last_event_time = 0; 1339 - init_waitqueue_head(&t->event_wait); 1334 + t->of = of; 1335 + if (!of) 1336 + init_waitqueue_head(&t->event_wait); 1340 1337 t->pending_event = false; 1341 1338 t->aggregator = privileged ? PSI_POLL : PSI_AVGS; 1342 1339 ··· 1395 1388 * being accessed later. Can happen if cgroup is deleted from under a 1396 1389 * polling process. 
1397 1390 */ 1398 - wake_up_pollfree(&t->event_wait); 1391 + if (t->of) 1392 + kernfs_notify(t->of->kn); 1393 + else 1394 + wake_up_interruptible(&t->event_wait); 1399 1395 1400 1396 if (t->aggregator == PSI_AVGS) { 1401 1397 mutex_lock(&group->avgs_lock); ··· 1475 1465 if (!t) 1476 1466 return DEFAULT_POLLMASK | EPOLLERR | EPOLLPRI; 1477 1467 1478 - poll_wait(file, &t->event_wait, wait); 1468 + if (t->of) 1469 + kernfs_generic_poll(t->of, wait); 1470 + else 1471 + poll_wait(file, &t->event_wait, wait); 1479 1472 1480 1473 if (cmpxchg(&t->event, 1, 0) == 1) 1481 1474 ret |= EPOLLPRI; ··· 1548 1535 return -EBUSY; 1549 1536 } 1550 1537 1551 - new = psi_trigger_create(&psi_system, buf, res, file); 1538 + new = psi_trigger_create(&psi_system, buf, res, file, NULL); 1552 1539 if (IS_ERR(new)) { 1553 1540 mutex_unlock(&seq->lock); 1554 1541 return PTR_ERR(new);
+1
kernel/trace/fgraph.c
··· 15 15 #include <trace/events/sched.h> 16 16 17 17 #include "ftrace_internal.h" 18 + #include "trace.h" 18 19 19 20 #ifdef CONFIG_DYNAMIC_FTRACE 20 21 #define ASSIGN_OPS_HASH(opsname, val) \
+13 -8
kernel/trace/fprobe.c
··· 100 100 return; 101 101 } 102 102 103 + /* 104 + * This user handler is shared with other kprobes and is not expected to be 105 + * called recursively. So if any other kprobe handler is running, this will 106 + * exit as kprobe does. See the section 'Share the callbacks with kprobes' 107 + * in Documentation/trace/fprobe.rst for more information. 108 + */ 103 109 if (unlikely(kprobe_running())) { 104 110 fp->nmissed++; 105 - return; 111 + goto recursion_unlock; 106 112 } 107 113 108 114 kprobe_busy_begin(); 109 115 __fprobe_handler(ip, parent_ip, ops, fregs); 110 116 kprobe_busy_end(); 117 + 118 + recursion_unlock: 111 119 ftrace_test_recursion_unlock(bit); 112 120 } 113 121 ··· 379 371 if (!fprobe_is_registered(fp)) 380 372 return -EINVAL; 381 373 382 - /* 383 - * rethook_free() starts disabling the rethook, but the rethook handlers 384 - * may be running on other processors at this point. To make sure that all 385 - * current running handlers are finished, call unregister_ftrace_function() 386 - * after this. 387 - */ 388 374 if (fp->rethook) 389 - rethook_free(fp->rethook); 375 + rethook_stop(fp->rethook); 390 376 391 377 ret = unregister_ftrace_function(&fp->ops); 392 378 if (ret < 0) 393 379 return ret; 380 + 381 + if (fp->rethook) 382 + rethook_free(fp->rethook); 394 383 395 384 ftrace_free_filter(&fp->ops); 396 385
+31 -14
kernel/trace/ftrace.c
··· 3305 3305 return cnt; 3306 3306 } 3307 3307 3308 + static void ftrace_free_pages(struct ftrace_page *pages) 3309 + { 3310 + struct ftrace_page *pg = pages; 3311 + 3312 + while (pg) { 3313 + if (pg->records) { 3314 + free_pages((unsigned long)pg->records, pg->order); 3315 + ftrace_number_of_pages -= 1 << pg->order; 3316 + } 3317 + pages = pg->next; 3318 + kfree(pg); 3319 + pg = pages; 3320 + ftrace_number_of_groups--; 3321 + } 3322 + } 3323 + 3308 3324 static struct ftrace_page * 3309 3325 ftrace_allocate_pages(unsigned long num_to_init) 3310 3326 { ··· 3359 3343 return start_pg; 3360 3344 3361 3345 free_pages: 3362 - pg = start_pg; 3363 - while (pg) { 3364 - if (pg->records) { 3365 - free_pages((unsigned long)pg->records, pg->order); 3366 - ftrace_number_of_pages -= 1 << pg->order; 3367 - } 3368 - start_pg = pg->next; 3369 - kfree(pg); 3370 - pg = start_pg; 3371 - ftrace_number_of_groups--; 3372 - } 3346 + ftrace_free_pages(start_pg); 3373 3347 pr_info("ftrace: FAILED to allocate memory for functions\n"); 3374 3348 return NULL; 3375 3349 } ··· 6477 6471 unsigned long *start, 6478 6472 unsigned long *end) 6479 6473 { 6474 + struct ftrace_page *pg_unuse = NULL; 6480 6475 struct ftrace_page *start_pg; 6481 6476 struct ftrace_page *pg; 6482 6477 struct dyn_ftrace *rec; 6478 + unsigned long skipped = 0; 6483 6479 unsigned long count; 6484 6480 unsigned long *p; 6485 6481 unsigned long addr; ··· 6544 6536 * object files to satisfy alignments. 6545 6537 * Skip any NULL pointers. 
6546 6538 */ 6547 - if (!addr) 6539 + if (!addr) { 6540 + skipped++; 6548 6541 continue; 6542 + } 6549 6543 6550 6544 end_offset = (pg->index+1) * sizeof(pg->records[0]); 6551 6545 if (end_offset > PAGE_SIZE << pg->order) { ··· 6561 6551 rec->ip = addr; 6562 6552 } 6563 6553 6564 - /* We should have used all pages */ 6565 - WARN_ON(pg->next); 6554 + if (pg->next) { 6555 + pg_unuse = pg->next; 6556 + pg->next = NULL; 6557 + } 6566 6558 6567 6559 /* Assign the last page to ftrace_pages */ 6568 6560 ftrace_pages = pg; ··· 6586 6574 out: 6587 6575 mutex_unlock(&ftrace_lock); 6588 6576 6577 + /* We should have used all pages unless we skipped some */ 6578 + if (pg_unuse) { 6579 + WARN_ON(!skipped); 6580 + ftrace_free_pages(pg_unuse); 6581 + } 6589 6582 return ret; 6590 6583 } 6591 6584
+3 -2
kernel/trace/ftrace_internal.h
··· 2 2 #ifndef _LINUX_KERNEL_FTRACE_INTERNAL_H 3 3 #define _LINUX_KERNEL_FTRACE_INTERNAL_H 4 4 5 + int __register_ftrace_function(struct ftrace_ops *ops); 6 + int __unregister_ftrace_function(struct ftrace_ops *ops); 7 + 5 8 #ifdef CONFIG_FUNCTION_TRACER 6 9 7 10 extern struct mutex ftrace_lock; ··· 18 15 19 16 #else /* !CONFIG_DYNAMIC_FTRACE */ 20 17 21 - int __register_ftrace_function(struct ftrace_ops *ops); 22 - int __unregister_ftrace_function(struct ftrace_ops *ops); 23 18 /* Keep as macros so we do not need to define the commands */ 24 19 # define ftrace_startup(ops, command) \ 25 20 ({ \
+13
kernel/trace/rethook.c
··· 54 54 } 55 55 56 56 /** 57 + * rethook_stop() - Stop using a rethook. 58 + * @rh: the struct rethook to stop. 59 + * 60 + * Stop using a rethook to prepare for freeing it. If you want to wait for 61 + * all running rethook handlers to finish before calling rethook_free(), 62 + * call this first, wait for an RCU grace period, then call rethook_free(). 63 + */ 64 + void rethook_stop(struct rethook *rh) 65 + { 66 + WRITE_ONCE(rh->handler, NULL); 67 + } 68 + 69 + /** 57 70 * rethook_free() - Free struct rethook. 58 71 * @rh: the struct rethook to be freed. 59 72 *
+15 -9
kernel/trace/ring_buffer.c
··· 5242 5242 } 5243 5243 EXPORT_SYMBOL_GPL(ring_buffer_size); 5244 5244 5245 + static void rb_clear_buffer_page(struct buffer_page *page) 5246 + { 5247 + local_set(&page->write, 0); 5248 + local_set(&page->entries, 0); 5249 + rb_init_page(page->page); 5250 + page->read = 0; 5251 + } 5252 + 5245 5253 static void 5246 5254 rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer) 5247 5255 { 5256 + struct buffer_page *page; 5257 + 5248 5258 rb_head_page_deactivate(cpu_buffer); 5249 5259 5250 5260 cpu_buffer->head_page 5251 5261 = list_entry(cpu_buffer->pages, struct buffer_page, list); 5252 - local_set(&cpu_buffer->head_page->write, 0); 5253 - local_set(&cpu_buffer->head_page->entries, 0); 5254 - local_set(&cpu_buffer->head_page->page->commit, 0); 5255 - 5256 - cpu_buffer->head_page->read = 0; 5262 + rb_clear_buffer_page(cpu_buffer->head_page); 5263 + list_for_each_entry(page, cpu_buffer->pages, list) { 5264 + rb_clear_buffer_page(page); 5265 + } 5257 5266 5258 5267 cpu_buffer->tail_page = cpu_buffer->head_page; 5259 5268 cpu_buffer->commit_page = cpu_buffer->head_page; 5260 5269 5261 5270 INIT_LIST_HEAD(&cpu_buffer->reader_page->list); 5262 5271 INIT_LIST_HEAD(&cpu_buffer->new_pages); 5263 - local_set(&cpu_buffer->reader_page->write, 0); 5264 - local_set(&cpu_buffer->reader_page->entries, 0); 5265 - local_set(&cpu_buffer->reader_page->page->commit, 0); 5266 - cpu_buffer->reader_page->read = 0; 5272 + rb_clear_buffer_page(cpu_buffer->reader_page); 5267 5273 5268 5274 local_set(&cpu_buffer->entries_bytes, 0); 5269 5275 local_set(&cpu_buffer->overrun, 0);
+20 -2
kernel/trace/trace.c
··· 3118 3118 struct ftrace_stack *fstack; 3119 3119 struct stack_entry *entry; 3120 3120 int stackidx; 3121 + void *ptr; 3121 3122 3122 3123 /* 3123 3124 * Add one, for this function and the call to save_stack_trace() ··· 3162 3161 trace_ctx); 3163 3162 if (!event) 3164 3163 goto out; 3165 - entry = ring_buffer_event_data(event); 3164 + ptr = ring_buffer_event_data(event); 3165 + entry = ptr; 3166 3166 3167 - memcpy(&entry->caller, fstack->calls, size); 3167 + /* 3168 + * For backward compatibility reasons, the entry->caller is an 3169 + * array of 8 slots to store the stack. This is also exported 3170 + * to user space. The amount allocated on the ring buffer actually 3171 + * holds enough for the stack specified by nr_entries. This will 3172 + * go into the location of entry->caller. Due to string fortifiers 3173 + * checking the size of the destination of memcpy() it triggers 3174 + * when it detects that size is greater than 8. To hide this from 3175 + * the fortifiers, we use "ptr" and pointer arithmetic to assign caller. 3176 + * 3177 + * The below is really just: 3178 + * memcpy(&entry->caller, fstack->calls, size); 3179 + */ 3180 + ptr += offsetof(typeof(*entry), caller); 3181 + memcpy(ptr, fstack->calls, size); 3182 + 3168 3183 entry->size = nr_entries; 3169 3184 3170 3185 if (!call_filter_check_discard(call, entry, buffer, event)) ··· 6781 6764 6782 6765 free_cpumask_var(iter->started); 6783 6766 kfree(iter->fmt); 6767 + kfree(iter->temp); 6784 6768 mutex_destroy(&iter->mutex); 6785 6769 kfree(iter); 6786 6770
+2
kernel/trace/trace.h
··· 113 113 #define MEM_FAIL(condition, fmt, ...) \ 114 114 DO_ONCE_LITE_IF(condition, pr_err, "ERROR: " fmt, ##__VA_ARGS__) 115 115 116 + #define FAULT_STRING "(fault)" 117 + 116 118 #define HIST_STACKTRACE_DEPTH 16 117 119 #define HIST_STACKTRACE_SIZE (HIST_STACKTRACE_DEPTH * sizeof(unsigned long)) 118 120 #define HIST_STACKTRACE_SKIP 5
+16 -2
kernel/trace/trace_eprobe.c
··· 644 644 struct trace_eprobe *ep; 645 645 bool enabled; 646 646 int ret = 0; 647 + int cnt = 0; 647 648 648 649 tp = trace_probe_primary_from_call(call); 649 650 if (WARN_ON_ONCE(!tp)) ··· 668 667 if (ret) 669 668 break; 670 669 enabled = true; 670 + cnt++; 671 671 } 672 672 673 673 if (ret) { 674 674 /* Failed to enable one of them. Roll back all */ 675 - if (enabled) 676 - disable_eprobe(ep, file->tr); 675 + if (enabled) { 676 + /* 677 + * It's a bug if one failed for something other than memory 678 + * not being available but another eprobe succeeded. 679 + */ 680 + WARN_ON_ONCE(ret != -ENOMEM); 681 + 682 + list_for_each_entry(pos, trace_probe_probe_list(tp), list) { 683 + ep = container_of(pos, struct trace_eprobe, tp); 684 + disable_eprobe(ep, file->tr); 685 + if (!--cnt) 686 + break; 687 + } 688 + } 677 689 if (file) 678 690 trace_probe_remove_file(tp, file); 679 691 else
+5 -3
kernel/trace/trace_events_hist.c
··· 6663 6663 if (get_named_trigger_data(trigger_data)) 6664 6664 goto enable; 6665 6665 6666 - if (has_hist_vars(hist_data)) 6667 - save_hist_vars(hist_data); 6668 - 6669 6666 ret = create_actions(hist_data); 6670 6667 if (ret) 6671 6668 goto out_unreg; 6669 + 6670 + if (has_hist_vars(hist_data) || hist_data->n_var_refs) { 6671 + if (save_hist_vars(hist_data)) 6672 + goto out_unreg; 6673 + } 6672 6674 6673 6675 ret = tracing_map_init(hist_data->map); 6674 6676 if (ret)
+3
kernel/trace/trace_events_user.c
··· 1317 1317 pos += snprintf(buf + pos, LEN_OR_ZERO, " "); 1318 1318 pos += snprintf(buf + pos, LEN_OR_ZERO, "%s", field->name); 1319 1319 1320 + if (str_has_prefix(field->type, "struct ")) 1321 + pos += snprintf(buf + pos, LEN_OR_ZERO, " %d", field->size); 1322 + 1320 1323 if (colon) 1321 1324 pos += snprintf(buf + pos, LEN_OR_ZERO, ";"); 1322 1325
+3
kernel/trace/trace_kprobe_selftest.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include "trace_kprobe_selftest.h" 4 + 2 5 /* 3 6 * Function used during the kprobe self test. This function is in a separate 4 7 * compile unit so it can be compile with CC_FLAGS_FTRACE to ensure that it
+1 -1
kernel/trace/trace_probe.c
··· 67 67 int len = *(u32 *)data >> 16; 68 68 69 69 if (!len) 70 - trace_seq_puts(s, "(fault)"); 70 + trace_seq_puts(s, FAULT_STRING); 71 71 else 72 72 trace_seq_printf(s, "\"%s\"", 73 73 (const char *)get_loc_data(data, ent));
+8 -22
kernel/trace/trace_probe_kernel.h
··· 2 2 #ifndef __TRACE_PROBE_KERNEL_H_ 3 3 #define __TRACE_PROBE_KERNEL_H_ 4 4 5 - #define FAULT_STRING "(fault)" 6 - 7 5 /* 8 6 * This depends on trace_probe.h, but can not include it due to 9 7 * the way trace_probe_tmpl.h is used by trace_kprobe.c and trace_eprobe.c. ··· 13 15 fetch_store_strlen_user(unsigned long addr) 14 16 { 15 17 const void __user *uaddr = (__force const void __user *)addr; 16 - int ret; 17 18 18 - ret = strnlen_user_nofault(uaddr, MAX_STRING_SIZE); 19 - /* 20 - * strnlen_user_nofault returns zero on fault, insert the 21 - * FAULT_STRING when that occurs. 22 - */ 23 - if (ret <= 0) 24 - return strlen(FAULT_STRING) + 1; 25 - return ret; 19 + return strnlen_user_nofault(uaddr, MAX_STRING_SIZE); 26 20 } 27 21 28 22 /* Return the length of string -- including null terminal byte */ ··· 34 44 len++; 35 45 } while (c && ret == 0 && len < MAX_STRING_SIZE); 36 46 37 - /* For faults, return enough to hold the FAULT_STRING */ 38 - return (ret < 0) ? strlen(FAULT_STRING) + 1 : len; 47 + return (ret < 0) ? ret : len; 39 48 } 40 49 41 - static nokprobe_inline void set_data_loc(int ret, void *dest, void *__dest, void *base, int len) 50 + static nokprobe_inline void set_data_loc(int ret, void *dest, void *__dest, void *base) 42 51 { 43 - if (ret >= 0) { 44 - *(u32 *)dest = make_data_loc(ret, __dest - base); 45 - } else { 46 - strscpy(__dest, FAULT_STRING, len); 47 - ret = strlen(__dest) + 1; 48 - } 52 + if (ret < 0) 53 + ret = 0; 54 + *(u32 *)dest = make_data_loc(ret, __dest - base); 49 55 } 50 56 51 57 /* ··· 62 76 __dest = get_loc_data(dest, base); 63 77 64 78 ret = strncpy_from_user_nofault(__dest, uaddr, maxlen); 65 - set_data_loc(ret, dest, __dest, base, maxlen); 79 + set_data_loc(ret, dest, __dest, base); 66 80 67 81 return ret; 68 82 } ··· 93 107 * probing. 
94 108 */ 95 109 ret = strncpy_from_kernel_nofault(__dest, (void *)addr, maxlen); 96 - set_data_loc(ret, dest, __dest, base, maxlen); 110 + set_data_loc(ret, dest, __dest, base); 97 111 98 112 return ret; 99 113 }
+5 -5
kernel/trace/trace_probe_tmpl.h
··· 156 156 code++; 157 157 goto array; 158 158 case FETCH_OP_ST_USTRING: 159 - ret += fetch_store_strlen_user(val + code->offset); 159 + ret = fetch_store_strlen_user(val + code->offset); 160 160 code++; 161 161 goto array; 162 162 case FETCH_OP_ST_SYMSTR: 163 - ret += fetch_store_symstrlen(val + code->offset); 163 + ret = fetch_store_symstrlen(val + code->offset); 164 164 code++; 165 165 goto array; 166 166 default: ··· 204 204 array: 205 205 /* the last stage: Loop on array */ 206 206 if (code->op == FETCH_OP_LP_ARRAY) { 207 + if (ret < 0) 208 + ret = 0; 207 209 total += ret; 208 210 if (++i < code->param) { 209 211 code = s3; ··· 267 265 if (unlikely(arg->dynamic)) 268 266 *dl = make_data_loc(maxlen, dyndata - base); 269 267 ret = process_fetch_insn(arg->code, rec, dl, base); 270 - if (unlikely(ret < 0 && arg->dynamic)) { 271 - *dl = make_data_loc(0, dyndata - base); 272 - } else { 268 + if (arg->dynamic && likely(ret > 0)) { 273 269 dyndata += ret; 274 270 maxlen -= ret; 275 271 }
+2 -1
kernel/trace/trace_uprobe.c
··· 170 170 */ 171 171 ret++; 172 172 *(u32 *)dest = make_data_loc(ret, (void *)dst - base); 173 - } 173 + } else 174 + *(u32 *)dest = make_data_loc(0, (void *)dst - base); 174 175 175 176 return ret; 176 177 }
+1 -1
lib/iov_iter.c
··· 1349 1349 return ret; 1350 1350 } 1351 1351 1352 - static int copy_iovec_from_user(struct iovec *iov, 1352 + static __noclone int copy_iovec_from_user(struct iovec *iov, 1353 1353 const struct iovec __user *uiov, unsigned long nr_segs) 1354 1354 { 1355 1355 int ret = -EFAULT;
+29 -18
net/ceph/messenger_v2.c
··· 390 390 int head_len; 391 391 int rem_len; 392 392 393 + BUG_ON(ctrl_len < 0 || ctrl_len > CEPH_MSG_MAX_CONTROL_LEN); 394 + 393 395 if (secure) { 394 396 head_len = CEPH_PREAMBLE_SECURE_LEN; 395 397 if (ctrl_len > CEPH_PREAMBLE_INLINE_LEN) { ··· 410 408 static int __tail_onwire_len(int front_len, int middle_len, int data_len, 411 409 bool secure) 412 410 { 411 + BUG_ON(front_len < 0 || front_len > CEPH_MSG_MAX_FRONT_LEN || 412 + middle_len < 0 || middle_len > CEPH_MSG_MAX_MIDDLE_LEN || 413 + data_len < 0 || data_len > CEPH_MSG_MAX_DATA_LEN); 414 + 413 415 if (!front_len && !middle_len && !data_len) 414 416 return 0; 415 417 ··· 526 520 desc->fd_aligns[i] = ceph_decode_16(&p); 527 521 } 528 522 523 + if (desc->fd_lens[0] < 0 || 524 + desc->fd_lens[0] > CEPH_MSG_MAX_CONTROL_LEN) { 525 + pr_err("bad control segment length %d\n", desc->fd_lens[0]); 526 + return -EINVAL; 527 + } 528 + if (desc->fd_lens[1] < 0 || 529 + desc->fd_lens[1] > CEPH_MSG_MAX_FRONT_LEN) { 530 + pr_err("bad front segment length %d\n", desc->fd_lens[1]); 531 + return -EINVAL; 532 + } 533 + if (desc->fd_lens[2] < 0 || 534 + desc->fd_lens[2] > CEPH_MSG_MAX_MIDDLE_LEN) { 535 + pr_err("bad middle segment length %d\n", desc->fd_lens[2]); 536 + return -EINVAL; 537 + } 538 + if (desc->fd_lens[3] < 0 || 539 + desc->fd_lens[3] > CEPH_MSG_MAX_DATA_LEN) { 540 + pr_err("bad data segment length %d\n", desc->fd_lens[3]); 541 + return -EINVAL; 542 + } 543 + 529 544 /* 530 545 * This would fire for FRAME_TAG_WAIT (it has one empty 531 546 * segment), but we should never get it as client. 
532 547 */ 533 548 if (!desc->fd_lens[desc->fd_seg_cnt - 1]) { 534 - pr_err("last segment empty\n"); 535 - return -EINVAL; 536 - } 537 - 538 - if (desc->fd_lens[0] > CEPH_MSG_MAX_CONTROL_LEN) { 539 - pr_err("control segment too big %d\n", desc->fd_lens[0]); 540 - return -EINVAL; 541 - } 542 - if (desc->fd_lens[1] > CEPH_MSG_MAX_FRONT_LEN) { 543 - pr_err("front segment too big %d\n", desc->fd_lens[1]); 544 - return -EINVAL; 545 - } 546 - if (desc->fd_lens[2] > CEPH_MSG_MAX_MIDDLE_LEN) { 547 - pr_err("middle segment too big %d\n", desc->fd_lens[2]); 548 - return -EINVAL; 549 - } 550 - if (desc->fd_lens[3] > CEPH_MSG_MAX_DATA_LEN) { 551 - pr_err("data segment too big %d\n", desc->fd_lens[3]); 549 + pr_err("last segment empty, segment count %d\n", 550 + desc->fd_seg_cnt); 552 551 return -EINVAL; 553 552 } 554 553
+2
net/core/net-traces.c
··· 63 63 EXPORT_TRACEPOINT_SYMBOL_GPL(tcp_send_reset); 64 64 EXPORT_TRACEPOINT_SYMBOL_GPL(tcp_bad_csum); 65 65 66 + EXPORT_TRACEPOINT_SYMBOL_GPL(udp_fail_queue_rcv_skb); 67 + 66 68 EXPORT_TRACEPOINT_SYMBOL_GPL(sk_data_ready);
+5
net/core/skbuff.c
··· 4261 4261 4262 4262 skb_push(skb, -skb_network_offset(skb) + offset); 4263 4263 4264 + /* Ensure the head is writeable before touching the shared info */ 4265 + err = skb_unclone(skb, GFP_ATOMIC); 4266 + if (err) 4267 + goto err_linearize; 4268 + 4264 4269 skb_shinfo(skb)->frag_list = NULL; 4265 4270 4266 4271 while (list_skb) {
+1 -1
net/core/xdp.c
··· 741 741 __diag_pop(); 742 742 743 743 BTF_SET8_START(xdp_metadata_kfunc_ids) 744 - #define XDP_METADATA_KFUNC(_, name) BTF_ID_FLAGS(func, name, 0) 744 + #define XDP_METADATA_KFUNC(_, name) BTF_ID_FLAGS(func, name, KF_TRUSTED_ARGS) 745 745 XDP_METADATA_KFUNC_xxx 746 746 #undef XDP_METADATA_KFUNC 747 747 BTF_SET8_END(xdp_metadata_kfunc_ids)
+1 -2
net/ipv6/addrconf.c
··· 318 318 static void addrconf_mod_rs_timer(struct inet6_dev *idev, 319 319 unsigned long when) 320 320 { 321 - if (!timer_pending(&idev->rs_timer)) 321 + if (!mod_timer(&idev->rs_timer, jiffies + when)) 322 322 in6_dev_hold(idev); 323 - mod_timer(&idev->rs_timer, jiffies + when); 324 323 } 325 324 326 325 static void addrconf_mod_dad_work(struct inet6_ifaddr *ifp,
+4 -1
net/ipv6/icmp.c
··· 424 424 if (unlikely(dev->ifindex == LOOPBACK_IFINDEX || netif_is_l3_master(skb->dev))) { 425 425 const struct rt6_info *rt6 = skb_rt6_info(skb); 426 426 427 - if (rt6) 427 + /* The destination could be an external IP in Ext Hdr (SRv6, RPL, etc.), 428 + * and ip6_null_entry could be set to skb if no route is found. 429 + */ 430 + if (rt6 && rt6->rt6i_idev) 428 431 dev = rt6->rt6i_idev->dev; 429 432 } 430 433
+3 -1
net/ipv6/udp.c
··· 45 45 #include <net/tcp_states.h> 46 46 #include <net/ip6_checksum.h> 47 47 #include <net/ip6_tunnel.h> 48 + #include <trace/events/udp.h> 48 49 #include <net/xfrm.h> 49 50 #include <net/inet_hashtables.h> 50 51 #include <net/inet6_hashtables.h> ··· 91 90 fhash = __ipv6_addr_jhash(faddr, udp_ipv6_hash_secret); 92 91 93 92 return __inet6_ehashfn(lhash, lport, fhash, fport, 94 - udp_ipv6_hash_secret + net_hash_mix(net)); 93 + udp6_ehash_secret + net_hash_mix(net)); 95 94 } 96 95 97 96 int udp_v6_get_port(struct sock *sk, unsigned short snum) ··· 681 680 } 682 681 UDP6_INC_STATS(sock_net(sk), UDP_MIB_INERRORS, is_udplite); 683 682 kfree_skb_reason(skb, drop_reason); 683 + trace_udp_fail_queue_rcv_skb(rc, sk); 684 684 return -1; 685 685 } 686 686
+7 -13
net/netfilter/nf_conntrack_core.c
··· 211 211 unsigned int zoneid, 212 212 const struct net *net) 213 213 { 214 - u64 a, b, c, d; 214 + siphash_key_t key; 215 215 216 216 get_random_once(&nf_conntrack_hash_rnd, sizeof(nf_conntrack_hash_rnd)); 217 217 218 - /* The direction must be ignored, handle usable tuplehash members manually */ 219 - a = (u64)tuple->src.u3.all[0] << 32 | tuple->src.u3.all[3]; 220 - b = (u64)tuple->dst.u3.all[0] << 32 | tuple->dst.u3.all[3]; 218 + key = nf_conntrack_hash_rnd; 221 219 222 - c = (__force u64)tuple->src.u.all << 32 | (__force u64)tuple->dst.u.all << 16; 223 - c |= tuple->dst.protonum; 220 + key.key[0] ^= zoneid; 221 + key.key[1] ^= net_hash_mix(net); 224 222 225 - d = (u64)zoneid << 32 | net_hash_mix(net); 226 - 227 - /* IPv4: u3.all[1,2,3] == 0 */ 228 - c ^= (u64)tuple->src.u3.all[1] << 32 | tuple->src.u3.all[2]; 229 - d += (u64)tuple->dst.u3.all[1] << 32 | tuple->dst.u3.all[2]; 230 - 231 - return (u32)siphash_4u64(a, b, c, d, &nf_conntrack_hash_rnd); 223 + return siphash((void *)tuple, 224 + offsetofend(struct nf_conntrack_tuple, dst.__nfct_hash_offsetend), 225 + &key); 232 226 } 233 227 234 228 static u32 scale_hash(u32 hash)
+4
net/netfilter/nf_conntrack_helper.c
··· 360 360 BUG_ON(me->expect_class_max >= NF_CT_MAX_EXPECT_CLASSES); 361 361 BUG_ON(strlen(me->name) > NF_CT_HELPER_NAME_LEN - 1); 362 362 363 + if (!nf_ct_helper_hash) 364 + return -ENOENT; 365 + 363 366 if (me->expect_policy->max_expected > NF_CT_EXPECT_MAX_CNT) 364 367 return -EINVAL; 365 368 ··· 518 515 void nf_conntrack_helper_fini(void) 519 516 { 520 517 kvfree(nf_ct_helper_hash); 518 + nf_ct_helper_hash = NULL; 521 519 }
+9 -1
net/netfilter/nf_conntrack_proto_gre.c
··· 205 205 enum ip_conntrack_info ctinfo, 206 206 const struct nf_hook_state *state) 207 207 { 208 + unsigned long status; 209 + 208 210 if (!nf_ct_is_confirmed(ct)) { 209 211 unsigned int *timeouts = nf_ct_timeout_lookup(ct); 210 212 ··· 219 217 ct->proto.gre.timeout = timeouts[GRE_CT_UNREPLIED]; 220 218 } 221 219 220 + status = READ_ONCE(ct->status); 222 221 /* If we've seen traffic both ways, this is a GRE connection. 223 222 * Extend timeout. */ 224 - if (ct->status & IPS_SEEN_REPLY) { 223 + if (status & IPS_SEEN_REPLY) { 225 224 nf_ct_refresh_acct(ct, ctinfo, skb, 226 225 ct->proto.gre.stream_timeout); 226 + 227 + /* never set ASSURED for IPS_NAT_CLASH, they time out soon */ 228 + if (unlikely((status & IPS_NAT_CLASH))) 229 + return NF_ACCEPT; 230 + 227 231 /* Also, more likely to be important, and not a probe. */ 228 232 if (!test_and_set_bit(IPS_ASSURED_BIT, &ct->status)) 229 233 nf_conntrack_event_cache(IPCT_ASSURED, ct);
+108 -66
net/netfilter/nf_tables_api.c
··· 253 253 if (chain->bound) 254 254 return -EBUSY; 255 255 256 + if (!nft_use_inc(&chain->use)) 257 + return -EMFILE; 258 + 256 259 chain->bound = true; 257 - chain->use++; 258 260 nft_chain_trans_bind(ctx, chain); 259 261 260 262 return 0; ··· 439 437 if (IS_ERR(trans)) 440 438 return PTR_ERR(trans); 441 439 442 - ctx->table->use--; 440 + nft_use_dec(&ctx->table->use); 443 441 nft_deactivate_next(ctx->net, ctx->chain); 444 442 445 443 return 0; ··· 478 476 /* You cannot delete the same rule twice */ 479 477 if (nft_is_active_next(ctx->net, rule)) { 480 478 nft_deactivate_next(ctx->net, rule); 481 - ctx->chain->use--; 479 + nft_use_dec(&ctx->chain->use); 482 480 return 0; 483 481 } 484 482 return -ENOENT; ··· 646 644 nft_map_deactivate(ctx, set); 647 645 648 646 nft_deactivate_next(ctx->net, set); 649 - ctx->table->use--; 647 + nft_use_dec(&ctx->table->use); 650 648 651 649 return err; 652 650 } ··· 678 676 return err; 679 677 680 678 nft_deactivate_next(ctx->net, obj); 681 - ctx->table->use--; 679 + nft_use_dec(&ctx->table->use); 682 680 683 681 return err; 684 682 } ··· 713 711 return err; 714 712 715 713 nft_deactivate_next(ctx->net, flowtable); 716 - ctx->table->use--; 714 + nft_use_dec(&ctx->table->use); 717 715 718 716 return err; 719 717 } ··· 2398 2396 struct nft_chain *chain; 2399 2397 int err; 2400 2398 2401 - if (table->use == UINT_MAX) 2402 - return -EOVERFLOW; 2403 - 2404 2399 if (nla[NFTA_CHAIN_HOOK]) { 2405 2400 struct nft_stats __percpu *stats = NULL; 2406 2401 struct nft_chain_hook hook = {}; ··· 2493 2494 if (err < 0) 2494 2495 goto err_destroy_chain; 2495 2496 2497 + if (!nft_use_inc(&table->use)) { 2498 + err = -EMFILE; 2499 + goto err_use; 2500 + } 2501 + 2496 2502 trans = nft_trans_chain_add(ctx, NFT_MSG_NEWCHAIN); 2497 2503 if (IS_ERR(trans)) { 2498 2504 err = PTR_ERR(trans); ··· 2514 2510 goto err_unregister_hook; 2515 2511 } 2516 2512 2517 - table->use++; 2518 - 2519 2513 return 0; 2514 + 2520 2515 err_unregister_hook: 2516 + 
nft_use_dec_restore(&table->use); 2517 + err_use: 2521 2518 nf_tables_unregister_hook(net, table, chain); 2522 2519 err_destroy_chain: 2523 2520 nf_tables_chain_destroy(ctx); ··· 2699 2694 2700 2695 static struct nft_chain *nft_chain_lookup_byid(const struct net *net, 2701 2696 const struct nft_table *table, 2702 - const struct nlattr *nla) 2697 + const struct nlattr *nla, u8 genmask) 2703 2698 { 2704 2699 struct nftables_pernet *nft_net = nft_pernet(net); 2705 2700 u32 id = ntohl(nla_get_be32(nla)); ··· 2710 2705 2711 2706 if (trans->msg_type == NFT_MSG_NEWCHAIN && 2712 2707 chain->table == table && 2713 - id == nft_trans_chain_id(trans)) 2708 + id == nft_trans_chain_id(trans) && 2709 + nft_active_genmask(chain, genmask)) 2714 2710 return chain; 2715 2711 } 2716 2712 return ERR_PTR(-ENOENT); ··· 3815 3809 return -EOPNOTSUPP; 3816 3810 3817 3811 } else if (nla[NFTA_RULE_CHAIN_ID]) { 3818 - chain = nft_chain_lookup_byid(net, table, nla[NFTA_RULE_CHAIN_ID]); 3812 + chain = nft_chain_lookup_byid(net, table, nla[NFTA_RULE_CHAIN_ID], 3813 + genmask); 3819 3814 if (IS_ERR(chain)) { 3820 3815 NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_CHAIN_ID]); 3821 3816 return PTR_ERR(chain); ··· 3846 3839 info->nlh->nlmsg_flags & NLM_F_REPLACE) 3847 3840 return -EINVAL; 3848 3841 handle = nf_tables_alloc_handle(table); 3849 - 3850 - if (chain->use == UINT_MAX) 3851 - return -EOVERFLOW; 3852 3842 3853 3843 if (nla[NFTA_RULE_POSITION]) { 3854 3844 pos_handle = be64_to_cpu(nla_get_be64(nla[NFTA_RULE_POSITION])); ··· 3940 3936 } 3941 3937 } 3942 3938 3939 + if (!nft_use_inc(&chain->use)) { 3940 + err = -EMFILE; 3941 + goto err_release_rule; 3942 + } 3943 + 3943 3944 if (info->nlh->nlmsg_flags & NLM_F_REPLACE) { 3944 3945 err = nft_delrule(&ctx, old_rule); 3945 3946 if (err < 0) ··· 3976 3967 } 3977 3968 } 3978 3969 kvfree(expr_info); 3979 - chain->use++; 3980 3970 3981 3971 if (flow) 3982 3972 nft_trans_flow_rule(trans) = flow; ··· 3986 3978 return 0; 3987 3979 3988 3980 err_destroy_flow_rule: 
3981 + nft_use_dec_restore(&chain->use); 3989 3982 if (flow) 3990 3983 nft_flow_rule_destroy(flow); 3991 3984 err_release_rule: ··· 5023 5014 alloc_size = sizeof(*set) + size + udlen; 5024 5015 if (alloc_size < size || alloc_size > INT_MAX) 5025 5016 return -ENOMEM; 5017 + 5018 + if (!nft_use_inc(&table->use)) 5019 + return -EMFILE; 5020 + 5026 5021 set = kvzalloc(alloc_size, GFP_KERNEL_ACCOUNT); 5027 - if (!set) 5028 - return -ENOMEM; 5022 + if (!set) { 5023 + err = -ENOMEM; 5024 + goto err_alloc; 5025 + } 5029 5026 5030 5027 name = nla_strdup(nla[NFTA_SET_NAME], GFP_KERNEL_ACCOUNT); 5031 5028 if (!name) { ··· 5089 5074 goto err_set_expr_alloc; 5090 5075 5091 5076 list_add_tail_rcu(&set->list, &table->sets); 5092 - table->use++; 5077 + 5093 5078 return 0; 5094 5079 5095 5080 err_set_expr_alloc: ··· 5101 5086 kfree(set->name); 5102 5087 err_set_name: 5103 5088 kvfree(set); 5089 + err_alloc: 5090 + nft_use_dec_restore(&table->use); 5091 + 5104 5092 return err; 5105 5093 } 5106 5094 ··· 5242 5224 struct nft_set_binding *i; 5243 5225 struct nft_set_iter iter; 5244 5226 5245 - if (set->use == UINT_MAX) 5246 - return -EOVERFLOW; 5247 - 5248 5227 if (!list_empty(&set->bindings) && nft_set_is_anonymous(set)) 5249 5228 return -EBUSY; 5250 5229 ··· 5269 5254 return iter.err; 5270 5255 } 5271 5256 bind: 5257 + if (!nft_use_inc(&set->use)) 5258 + return -EMFILE; 5259 + 5272 5260 binding->chain = ctx->chain; 5273 5261 list_add_tail_rcu(&binding->list, &set->bindings); 5274 5262 nft_set_trans_bind(ctx, set); 5275 - set->use++; 5276 5263 5277 5264 return 0; 5278 5265 } ··· 5348 5331 nft_clear(ctx->net, set); 5349 5332 } 5350 5333 5351 - set->use++; 5334 + nft_use_inc_restore(&set->use); 5352 5335 } 5353 5336 EXPORT_SYMBOL_GPL(nf_tables_activate_set); 5354 5337 ··· 5364 5347 else 5365 5348 list_del_rcu(&binding->list); 5366 5349 5367 - set->use--; 5350 + nft_use_dec(&set->use); 5368 5351 break; 5369 5352 case NFT_TRANS_PREPARE: 5370 5353 if (nft_set_is_anonymous(set)) { ··· 5373 
5356 5374 5357 nft_deactivate_next(ctx->net, set); 5375 5358 } 5376 - set->use--; 5359 + nft_use_dec(&set->use); 5377 5360 return; 5378 5361 case NFT_TRANS_ABORT: 5379 5362 case NFT_TRANS_RELEASE: ··· 5381 5364 set->flags & (NFT_SET_MAP | NFT_SET_OBJECT)) 5382 5365 nft_map_deactivate(ctx, set); 5383 5366 5384 - set->use--; 5367 + nft_use_dec(&set->use); 5385 5368 fallthrough; 5386 5369 default: 5387 5370 nf_tables_unbind_set(ctx, set, binding, ··· 6172 6155 nft_set_elem_expr_destroy(&ctx, nft_set_ext_expr(ext)); 6173 6156 6174 6157 if (nft_set_ext_exists(ext, NFT_SET_EXT_OBJREF)) 6175 - (*nft_set_ext_obj(ext))->use--; 6158 + nft_use_dec(&(*nft_set_ext_obj(ext))->use); 6176 6159 kfree(elem); 6177 6160 } 6178 6161 EXPORT_SYMBOL_GPL(nft_set_elem_destroy); ··· 6674 6657 set->objtype, genmask); 6675 6658 if (IS_ERR(obj)) { 6676 6659 err = PTR_ERR(obj); 6660 + obj = NULL; 6677 6661 goto err_parse_key_end; 6678 6662 } 6663 + 6664 + if (!nft_use_inc(&obj->use)) { 6665 + err = -EMFILE; 6666 + obj = NULL; 6667 + goto err_parse_key_end; 6668 + } 6669 + 6679 6670 err = nft_set_ext_add(&tmpl, NFT_SET_EXT_OBJREF); 6680 6671 if (err < 0) 6681 6672 goto err_parse_key_end; ··· 6752 6727 if (flags) 6753 6728 *nft_set_ext_flags(ext) = flags; 6754 6729 6755 - if (obj) { 6730 + if (obj) 6756 6731 *nft_set_ext_obj(ext) = obj; 6757 - obj->use++; 6758 - } 6732 + 6759 6733 if (ulen > 0) { 6760 6734 if (nft_set_ext_check(&tmpl, NFT_SET_EXT_USERDATA, ulen) < 0) { 6761 6735 err = -EINVAL; ··· 6822 6798 kfree(trans); 6823 6799 err_elem_free: 6824 6800 nf_tables_set_elem_destroy(ctx, set, elem.priv); 6825 - if (obj) 6826 - obj->use--; 6827 6801 err_parse_data: 6828 6802 if (nla[NFTA_SET_ELEM_DATA] != NULL) 6829 6803 nft_data_release(&elem.data.val, desc.type); 6830 6804 err_parse_key_end: 6805 + if (obj) 6806 + nft_use_dec_restore(&obj->use); 6807 + 6831 6808 nft_data_release(&elem.key_end.val, NFT_DATA_VALUE); 6832 6809 err_parse_key: 6833 6810 nft_data_release(&elem.key.val, NFT_DATA_VALUE); 
··· 6908 6883 case NFT_JUMP: 6909 6884 case NFT_GOTO: 6910 6885 chain = data->verdict.chain; 6911 - chain->use++; 6886 + nft_use_inc_restore(&chain->use); 6912 6887 break; 6913 6888 } 6914 6889 } ··· 6923 6898 if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA)) 6924 6899 nft_data_hold(nft_set_ext_data(ext), set->dtype); 6925 6900 if (nft_set_ext_exists(ext, NFT_SET_EXT_OBJREF)) 6926 - (*nft_set_ext_obj(ext))->use++; 6901 + nft_use_inc_restore(&(*nft_set_ext_obj(ext))->use); 6927 6902 } 6928 6903 6929 6904 static void nft_setelem_data_deactivate(const struct net *net, ··· 6935 6910 if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA)) 6936 6911 nft_data_release(nft_set_ext_data(ext), set->dtype); 6937 6912 if (nft_set_ext_exists(ext, NFT_SET_EXT_OBJREF)) 6938 - (*nft_set_ext_obj(ext))->use--; 6913 + nft_use_dec(&(*nft_set_ext_obj(ext))->use); 6939 6914 } 6940 6915 6941 6916 static int nft_del_setelem(struct nft_ctx *ctx, struct nft_set *set, ··· 7478 7453 7479 7454 nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla); 7480 7455 7456 + if (!nft_use_inc(&table->use)) 7457 + return -EMFILE; 7458 + 7481 7459 type = nft_obj_type_get(net, objtype); 7482 - if (IS_ERR(type)) 7483 - return PTR_ERR(type); 7460 + if (IS_ERR(type)) { 7461 + err = PTR_ERR(type); 7462 + goto err_type; 7463 + } 7484 7464 7485 7465 obj = nft_obj_init(&ctx, type, nla[NFTA_OBJ_DATA]); 7486 7466 if (IS_ERR(obj)) { ··· 7519 7489 goto err_obj_ht; 7520 7490 7521 7491 list_add_tail_rcu(&obj->list, &table->objects); 7522 - table->use++; 7492 + 7523 7493 return 0; 7524 7494 err_obj_ht: 7525 7495 /* queued in transaction log */ ··· 7535 7505 kfree(obj); 7536 7506 err_init: 7537 7507 module_put(type->owner); 7508 + err_type: 7509 + nft_use_dec_restore(&table->use); 7510 + 7538 7511 return err; 7539 7512 } 7540 7513 ··· 7939 7906 case NFT_TRANS_PREPARE: 7940 7907 case NFT_TRANS_ABORT: 7941 7908 case NFT_TRANS_RELEASE: 7942 - flowtable->use--; 7909 + nft_use_dec(&flowtable->use); 7943 7910 fallthrough; 7944 
7911 default: 7945 7912 return; ··· 8293 8260 8294 8261 nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla); 8295 8262 8263 + if (!nft_use_inc(&table->use)) 8264 + return -EMFILE; 8265 + 8296 8266 flowtable = kzalloc(sizeof(*flowtable), GFP_KERNEL_ACCOUNT); 8297 - if (!flowtable) 8298 - return -ENOMEM; 8267 + if (!flowtable) { 8268 + err = -ENOMEM; 8269 + goto flowtable_alloc; 8270 + } 8299 8271 8300 8272 flowtable->table = table; 8301 8273 flowtable->handle = nf_tables_alloc_handle(table); ··· 8355 8317 goto err5; 8356 8318 8357 8319 list_add_tail_rcu(&flowtable->list, &table->flowtables); 8358 - table->use++; 8359 8320 8360 8321 return 0; 8361 8322 err5: ··· 8371 8334 kfree(flowtable->name); 8372 8335 err1: 8373 8336 kfree(flowtable); 8337 + flowtable_alloc: 8338 + nft_use_dec_restore(&table->use); 8339 + 8374 8340 return err; 8375 8341 } 8376 8342 ··· 9753 9713 */ 9754 9714 if (nft_set_is_anonymous(nft_trans_set(trans)) && 9755 9715 !list_empty(&nft_trans_set(trans)->bindings)) 9756 - trans->ctx.table->use--; 9716 + nft_use_dec(&trans->ctx.table->use); 9757 9717 } 9758 9718 nf_tables_set_notify(&trans->ctx, nft_trans_set(trans), 9759 9719 NFT_MSG_NEWSET, GFP_KERNEL); ··· 9983 9943 nft_trans_destroy(trans); 9984 9944 break; 9985 9945 } 9986 - trans->ctx.table->use--; 9946 + nft_use_dec_restore(&trans->ctx.table->use); 9987 9947 nft_chain_del(trans->ctx.chain); 9988 9948 nf_tables_unregister_hook(trans->ctx.net, 9989 9949 trans->ctx.table, ··· 9996 9956 list_splice(&nft_trans_chain_hooks(trans), 9997 9957 &nft_trans_basechain(trans)->hook_list); 9998 9958 } else { 9999 - trans->ctx.table->use++; 9959 + nft_use_inc_restore(&trans->ctx.table->use); 10000 9960 nft_clear(trans->ctx.net, trans->ctx.chain); 10001 9961 } 10002 9962 nft_trans_destroy(trans); ··· 10006 9966 nft_trans_destroy(trans); 10007 9967 break; 10008 9968 } 10009 - trans->ctx.chain->use--; 9969 + nft_use_dec_restore(&trans->ctx.chain->use); 10010 9970 
list_del_rcu(&nft_trans_rule(trans)->list); 10011 9971 nft_rule_expr_deactivate(&trans->ctx, 10012 9972 nft_trans_rule(trans), ··· 10016 9976 break; 10017 9977 case NFT_MSG_DELRULE: 10018 9978 case NFT_MSG_DESTROYRULE: 10019 - trans->ctx.chain->use++; 9979 + nft_use_inc_restore(&trans->ctx.chain->use); 10020 9980 nft_clear(trans->ctx.net, nft_trans_rule(trans)); 10021 9981 nft_rule_expr_activate(&trans->ctx, nft_trans_rule(trans)); 10022 9982 if (trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD) ··· 10029 9989 nft_trans_destroy(trans); 10030 9990 break; 10031 9991 } 10032 - trans->ctx.table->use--; 9992 + nft_use_dec_restore(&trans->ctx.table->use); 10033 9993 if (nft_trans_set_bound(trans)) { 10034 9994 nft_trans_destroy(trans); 10035 9995 break; ··· 10038 9998 break; 10039 9999 case NFT_MSG_DELSET: 10040 10000 case NFT_MSG_DESTROYSET: 10041 - trans->ctx.table->use++; 10001 + nft_use_inc_restore(&trans->ctx.table->use); 10042 10002 nft_clear(trans->ctx.net, nft_trans_set(trans)); 10043 10003 if (nft_trans_set(trans)->flags & (NFT_SET_MAP | NFT_SET_OBJECT)) 10044 10004 nft_map_activate(&trans->ctx, nft_trans_set(trans)); ··· 10082 10042 nft_obj_destroy(&trans->ctx, nft_trans_obj_newobj(trans)); 10083 10043 nft_trans_destroy(trans); 10084 10044 } else { 10085 - trans->ctx.table->use--; 10045 + nft_use_dec_restore(&trans->ctx.table->use); 10086 10046 nft_obj_del(nft_trans_obj(trans)); 10087 10047 } 10088 10048 break; 10089 10049 case NFT_MSG_DELOBJ: 10090 10050 case NFT_MSG_DESTROYOBJ: 10091 - trans->ctx.table->use++; 10051 + nft_use_inc_restore(&trans->ctx.table->use); 10092 10052 nft_clear(trans->ctx.net, nft_trans_obj(trans)); 10093 10053 nft_trans_destroy(trans); 10094 10054 break; ··· 10097 10057 nft_unregister_flowtable_net_hooks(net, 10098 10058 &nft_trans_flowtable_hooks(trans)); 10099 10059 } else { 10100 - trans->ctx.table->use--; 10060 + nft_use_dec_restore(&trans->ctx.table->use); 10101 10061 list_del_rcu(&nft_trans_flowtable(trans)->list); 10102 10062 
nft_unregister_flowtable_net_hooks(net, 10103 10063 &nft_trans_flowtable(trans)->hook_list); ··· 10109 10069 list_splice(&nft_trans_flowtable_hooks(trans), 10110 10070 &nft_trans_flowtable(trans)->hook_list); 10111 10071 } else { 10112 - trans->ctx.table->use++; 10072 + nft_use_inc_restore(&trans->ctx.table->use); 10113 10073 nft_clear(trans->ctx.net, nft_trans_flowtable(trans)); 10114 10074 } 10115 10075 nft_trans_destroy(trans); ··· 10542 10502 genmask); 10543 10503 } else if (tb[NFTA_VERDICT_CHAIN_ID]) { 10544 10504 chain = nft_chain_lookup_byid(ctx->net, ctx->table, 10545 - tb[NFTA_VERDICT_CHAIN_ID]); 10505 + tb[NFTA_VERDICT_CHAIN_ID], 10506 + genmask); 10546 10507 if (IS_ERR(chain)) 10547 10508 return PTR_ERR(chain); 10548 10509 } else { ··· 10559 10518 if (desc->flags & NFT_DATA_DESC_SETELEM && 10560 10519 chain->flags & NFT_CHAIN_BINDING) 10561 10520 return -EINVAL; 10521 + if (!nft_use_inc(&chain->use)) 10522 + return -EMFILE; 10562 10523 10563 - chain->use++; 10564 10524 data->verdict.chain = chain; 10565 10525 break; 10566 10526 } ··· 10579 10537 case NFT_JUMP: 10580 10538 case NFT_GOTO: 10581 10539 chain = data->verdict.chain; 10582 - chain->use--; 10540 + nft_use_dec(&chain->use); 10583 10541 break; 10584 10542 } 10585 10543 } ··· 10748 10706 nf_tables_unregister_hook(ctx->net, ctx->chain->table, ctx->chain); 10749 10707 list_for_each_entry_safe(rule, nr, &ctx->chain->rules, list) { 10750 10708 list_del(&rule->list); 10751 - ctx->chain->use--; 10709 + nft_use_dec(&ctx->chain->use); 10752 10710 nf_tables_rule_release(ctx, rule); 10753 10711 } 10754 10712 nft_chain_del(ctx->chain); 10755 - ctx->table->use--; 10713 + nft_use_dec(&ctx->table->use); 10756 10714 nf_tables_chain_destroy(ctx); 10757 10715 10758 10716 return 0; ··· 10802 10760 ctx.chain = chain; 10803 10761 list_for_each_entry_safe(rule, nr, &chain->rules, list) { 10804 10762 list_del(&rule->list); 10805 - chain->use--; 10763 + nft_use_dec(&chain->use); 10806 10764 nf_tables_rule_release(&ctx, 
rule); 10807 10765 } 10808 10766 } 10809 10767 list_for_each_entry_safe(flowtable, nf, &table->flowtables, list) { 10810 10768 list_del(&flowtable->list); 10811 - table->use--; 10769 + nft_use_dec(&table->use); 10812 10770 nf_tables_flowtable_destroy(flowtable); 10813 10771 } 10814 10772 list_for_each_entry_safe(set, ns, &table->sets, list) { 10815 10773 list_del(&set->list); 10816 - table->use--; 10774 + nft_use_dec(&table->use); 10817 10775 if (set->flags & (NFT_SET_MAP | NFT_SET_OBJECT)) 10818 10776 nft_map_deactivate(&ctx, set); 10819 10777 ··· 10821 10779 } 10822 10780 list_for_each_entry_safe(obj, ne, &table->objects, list) { 10823 10781 nft_obj_del(obj); 10824 - table->use--; 10782 + nft_use_dec(&table->use); 10825 10783 nft_obj_destroy(&ctx, obj); 10826 10784 } 10827 10785 list_for_each_entry_safe(chain, nc, &table->chains, list) { 10828 10786 ctx.chain = chain; 10829 10787 nft_chain_del(chain); 10830 - table->use--; 10788 + nft_use_dec(&table->use); 10831 10789 nf_tables_chain_destroy(&ctx); 10832 10790 } 10833 10791 nf_tables_table_destroy(&ctx);
+7 -7
net/netfilter/nft_byteorder.c
··· 30 30 const struct nft_byteorder *priv = nft_expr_priv(expr); 31 31 u32 *src = &regs->data[priv->sreg]; 32 32 u32 *dst = &regs->data[priv->dreg]; 33 - union { u32 u32; u16 u16; } *s, *d; 33 + u16 *s16, *d16; 34 34 unsigned int i; 35 35 36 - s = (void *)src; 37 - d = (void *)dst; 36 + s16 = (void *)src; 37 + d16 = (void *)dst; 38 38 39 39 switch (priv->size) { 40 40 case 8: { ··· 62 62 switch (priv->op) { 63 63 case NFT_BYTEORDER_NTOH: 64 64 for (i = 0; i < priv->len / 4; i++) 65 - d[i].u32 = ntohl((__force __be32)s[i].u32); 65 + dst[i] = ntohl((__force __be32)src[i]); 66 66 break; 67 67 case NFT_BYTEORDER_HTON: 68 68 for (i = 0; i < priv->len / 4; i++) 69 - d[i].u32 = (__force __u32)htonl(s[i].u32); 69 + dst[i] = (__force __u32)htonl(src[i]); 70 70 break; 71 71 } 72 72 break; ··· 74 74 switch (priv->op) { 75 75 case NFT_BYTEORDER_NTOH: 76 76 for (i = 0; i < priv->len / 2; i++) 77 - d[i].u16 = ntohs((__force __be16)s[i].u16); 77 + d16[i] = ntohs((__force __be16)s16[i]); 78 78 break; 79 79 case NFT_BYTEORDER_HTON: 80 80 for (i = 0; i < priv->len / 2; i++) 81 - d[i].u16 = (__force __u16)htons(s[i].u16); 81 + d16[i] = (__force __u16)htons(s16[i]); 82 82 break; 83 83 } 84 84 break;
+4 -2
net/netfilter/nft_flow_offload.c
··· 408 408 if (IS_ERR(flowtable)) 409 409 return PTR_ERR(flowtable); 410 410 411 + if (!nft_use_inc(&flowtable->use)) 412 + return -EMFILE; 413 + 411 414 priv->flowtable = flowtable; 412 - flowtable->use++; 413 415 414 416 return nf_ct_netns_get(ctx->net, ctx->family); 415 417 } ··· 430 428 { 431 429 struct nft_flow_offload *priv = nft_expr_priv(expr); 432 430 433 - priv->flowtable->use++; 431 + nft_use_inc_restore(&priv->flowtable->use); 434 432 } 435 433 436 434 static void nft_flow_offload_destroy(const struct nft_ctx *ctx,
+4 -4
net/netfilter/nft_immediate.c
··· 159 159 default: 160 160 nft_chain_del(chain); 161 161 chain->bound = false; 162 - chain->table->use--; 162 + nft_use_dec(&chain->table->use); 163 163 break; 164 164 } 165 165 break; ··· 198 198 * let the transaction records release this chain and its rules. 199 199 */ 200 200 if (chain->bound) { 201 - chain->use--; 201 + nft_use_dec(&chain->use); 202 202 break; 203 203 } 204 204 ··· 206 206 chain_ctx = *ctx; 207 207 chain_ctx.chain = chain; 208 208 209 - chain->use--; 209 + nft_use_dec(&chain->use); 210 210 list_for_each_entry_safe(rule, n, &chain->rules, list) { 211 - chain->use--; 211 + nft_use_dec(&chain->use); 212 212 list_del(&rule->list); 213 213 nf_tables_rule_destroy(&chain_ctx, rule); 214 214 }
+5 -3
net/netfilter/nft_objref.c
··· 41 41 if (IS_ERR(obj)) 42 42 return -ENOENT; 43 43 44 + if (!nft_use_inc(&obj->use)) 45 + return -EMFILE; 46 + 44 47 nft_objref_priv(expr) = obj; 45 - obj->use++; 46 48 47 49 return 0; 48 50 } ··· 74 72 if (phase == NFT_TRANS_COMMIT) 75 73 return; 76 74 77 - obj->use--; 75 + nft_use_dec(&obj->use); 78 76 } 79 77 80 78 static void nft_objref_activate(const struct nft_ctx *ctx, ··· 82 80 { 83 81 struct nft_object *obj = nft_objref_priv(expr); 84 82 85 - obj->use++; 83 + nft_use_inc_restore(&obj->use); 86 84 } 87 85 88 86 static const struct nft_expr_ops nft_objref_ops = {
+1 -1
net/sched/act_api.c
··· 1320 1320 return ERR_PTR(err); 1321 1321 } 1322 1322 } else { 1323 - if (strlcpy(act_name, "police", IFNAMSIZ) >= IFNAMSIZ) { 1323 + if (strscpy(act_name, "police", IFNAMSIZ) < 0) { 1324 1324 NL_SET_ERR_MSG(extack, "TC action name too long"); 1325 1325 return ERR_PTR(-EINVAL); 1326 1326 }
+10
net/sched/cls_flower.c
··· 812 812 TCA_FLOWER_KEY_PORT_SRC_MAX, &mask->tp_range.tp_max.src, 813 813 TCA_FLOWER_UNSPEC, sizeof(key->tp_range.tp_max.src)); 814 814 815 + if (mask->tp_range.tp_min.dst != mask->tp_range.tp_max.dst) { 816 + NL_SET_ERR_MSG(extack, 817 + "Both min and max destination ports must be specified"); 818 + return -EINVAL; 819 + } 820 + if (mask->tp_range.tp_min.src != mask->tp_range.tp_max.src) { 821 + NL_SET_ERR_MSG(extack, 822 + "Both min and max source ports must be specified"); 823 + return -EINVAL; 824 + } 815 825 if (mask->tp_range.tp_min.dst && mask->tp_range.tp_max.dst && 816 826 ntohs(key->tp_range.tp_max.dst) <= 817 827 ntohs(key->tp_range.tp_min.dst)) {
+5 -5
net/sched/cls_fw.c
··· 212 212 if (err < 0) 213 213 return err; 214 214 215 - if (tb[TCA_FW_CLASSID]) { 216 - f->res.classid = nla_get_u32(tb[TCA_FW_CLASSID]); 217 - tcf_bind_filter(tp, &f->res, base); 218 - } 219 - 220 215 if (tb[TCA_FW_INDEV]) { 221 216 int ret; 222 217 ret = tcf_change_indev(net, tb[TCA_FW_INDEV], extack); ··· 227 232 return err; 228 233 } else if (head->mask != 0xFFFFFFFF) 229 234 return err; 235 + 236 + if (tb[TCA_FW_CLASSID]) { 237 + f->res.classid = nla_get_u32(tb[TCA_FW_CLASSID]); 238 + tcf_bind_filter(tp, &f->res, base); 239 + } 230 240 231 241 return 0; 232 242 }
+15 -3
net/sched/sch_qfq.c
··· 381 381 u32 lmax) 382 382 { 383 383 struct qfq_sched *q = qdisc_priv(sch); 384 - struct qfq_aggregate *new_agg = qfq_find_agg(q, lmax, weight); 384 + struct qfq_aggregate *new_agg; 385 385 386 + /* 'lmax' can range from [QFQ_MIN_LMAX, pktlen + stab overhead] */ 387 + if (lmax > QFQ_MAX_LMAX) 388 + return -EINVAL; 389 + 390 + new_agg = qfq_find_agg(q, lmax, weight); 386 391 if (new_agg == NULL) { /* create new aggregate */ 387 392 new_agg = kzalloc(sizeof(*new_agg), GFP_ATOMIC); 388 393 if (new_agg == NULL) ··· 428 423 else 429 424 weight = 1; 430 425 431 - if (tb[TCA_QFQ_LMAX]) 426 + if (tb[TCA_QFQ_LMAX]) { 432 427 lmax = nla_get_u32(tb[TCA_QFQ_LMAX]); 433 - else 428 + } else { 429 + /* MTU size is user controlled */ 434 430 lmax = psched_mtu(qdisc_dev(sch)); 431 + if (lmax < QFQ_MIN_LMAX || lmax > QFQ_MAX_LMAX) { 432 + NL_SET_ERR_MSG_MOD(extack, 433 + "MTU size out of bounds for qfq"); 434 + return -EINVAL; 435 + } 436 + } 435 437 436 438 inv_w = ONE_FP / weight; 437 439 weight = ONE_FP / inv_w;
+2
net/wireless/util.c
··· 580 580 hdrlen += ETH_ALEN + 2; 581 581 else if (!pskb_may_pull(skb, hdrlen)) 582 582 return -EINVAL; 583 + else 584 + payload.eth.h_proto = htons(skb->len - hdrlen); 583 585 584 586 mesh_addr = skb->data + sizeof(payload.eth) + ETH_ALEN; 585 587 switch (payload.flags & MESH_FLAGS_AE) {
+34
samples/ftrace/ftrace-direct-modify.c
··· 2 2 #include <linux/module.h> 3 3 #include <linux/kthread.h> 4 4 #include <linux/ftrace.h> 5 + #ifndef CONFIG_ARM64 5 6 #include <asm/asm-offsets.h> 7 + #endif 6 8 7 9 extern void my_direct_func1(void); 8 10 extern void my_direct_func2(void); ··· 97 95 ); 98 96 99 97 #endif /* CONFIG_S390 */ 98 + 99 + #ifdef CONFIG_ARM64 100 + 101 + asm ( 102 + " .pushsection .text, \"ax\", @progbits\n" 103 + " .type my_tramp1, @function\n" 104 + " .globl my_tramp1\n" 105 + " my_tramp1:" 106 + " bti c\n" 107 + " sub sp, sp, #16\n" 108 + " stp x9, x30, [sp]\n" 109 + " bl my_direct_func1\n" 110 + " ldp x30, x9, [sp]\n" 111 + " add sp, sp, #16\n" 112 + " ret x9\n" 113 + " .size my_tramp1, .-my_tramp1\n" 114 + 115 + " .type my_tramp2, @function\n" 116 + " .globl my_tramp2\n" 117 + " my_tramp2:" 118 + " bti c\n" 119 + " sub sp, sp, #16\n" 120 + " stp x9, x30, [sp]\n" 121 + " bl my_direct_func2\n" 122 + " ldp x30, x9, [sp]\n" 123 + " add sp, sp, #16\n" 124 + " ret x9\n" 125 + " .size my_tramp2, .-my_tramp2\n" 126 + " .popsection\n" 127 + ); 128 + 129 + #endif /* CONFIG_ARM64 */ 100 130 101 131 #ifdef CONFIG_LOONGARCH 102 132
+40
samples/ftrace/ftrace-direct-multi-modify.c
··· 2 2 #include <linux/module.h> 3 3 #include <linux/kthread.h> 4 4 #include <linux/ftrace.h> 5 + #ifndef CONFIG_ARM64 5 6 #include <asm/asm-offsets.h> 7 + #endif 6 8 7 9 extern void my_direct_func1(unsigned long ip); 8 10 extern void my_direct_func2(unsigned long ip); ··· 104 102 ); 105 103 106 104 #endif /* CONFIG_S390 */ 105 + 106 + #ifdef CONFIG_ARM64 107 + 108 + asm ( 109 + " .pushsection .text, \"ax\", @progbits\n" 110 + " .type my_tramp1, @function\n" 111 + " .globl my_tramp1\n" 112 + " my_tramp1:" 113 + " bti c\n" 114 + " sub sp, sp, #32\n" 115 + " stp x9, x30, [sp]\n" 116 + " str x0, [sp, #16]\n" 117 + " mov x0, x30\n" 118 + " bl my_direct_func1\n" 119 + " ldp x30, x9, [sp]\n" 120 + " ldr x0, [sp, #16]\n" 121 + " add sp, sp, #32\n" 122 + " ret x9\n" 123 + " .size my_tramp1, .-my_tramp1\n" 124 + 125 + " .type my_tramp2, @function\n" 126 + " .globl my_tramp2\n" 127 + " my_tramp2:" 128 + " bti c\n" 129 + " sub sp, sp, #32\n" 130 + " stp x9, x30, [sp]\n" 131 + " str x0, [sp, #16]\n" 132 + " mov x0, x30\n" 133 + " bl my_direct_func2\n" 134 + " ldp x30, x9, [sp]\n" 135 + " ldr x0, [sp, #16]\n" 136 + " add sp, sp, #32\n" 137 + " ret x9\n" 138 + " .size my_tramp2, .-my_tramp2\n" 139 + " .popsection\n" 140 + ); 141 + 142 + #endif /* CONFIG_ARM64 */ 107 143 108 144 #ifdef CONFIG_LOONGARCH 109 145 #include <asm/asm.h>
+25
samples/ftrace/ftrace-direct-multi.c
··· 4 4 #include <linux/mm.h> /* for handle_mm_fault() */ 5 5 #include <linux/ftrace.h> 6 6 #include <linux/sched/stat.h> 7 + #ifndef CONFIG_ARM64 7 8 #include <asm/asm-offsets.h> 9 + #endif 8 10 9 11 extern void my_direct_func(unsigned long ip); 10 12 ··· 67 65 ); 68 66 69 67 #endif /* CONFIG_S390 */ 68 + 69 + #ifdef CONFIG_ARM64 70 + 71 + asm ( 72 + " .pushsection .text, \"ax\", @progbits\n" 73 + " .type my_tramp, @function\n" 74 + " .globl my_tramp\n" 75 + " my_tramp:" 76 + " bti c\n" 77 + " sub sp, sp, #32\n" 78 + " stp x9, x30, [sp]\n" 79 + " str x0, [sp, #16]\n" 80 + " mov x0, x30\n" 81 + " bl my_direct_func\n" 82 + " ldp x30, x9, [sp]\n" 83 + " ldr x0, [sp, #16]\n" 84 + " add sp, sp, #32\n" 85 + " ret x9\n" 86 + " .size my_tramp, .-my_tramp\n" 87 + " .popsection\n" 88 + ); 89 + 90 + #endif /* CONFIG_ARM64 */ 70 91 71 92 #ifdef CONFIG_LOONGARCH 72 93
+34 -6
samples/ftrace/ftrace-direct-too.c
··· 3 3 4 4 #include <linux/mm.h> /* for handle_mm_fault() */ 5 5 #include <linux/ftrace.h> 6 + #ifndef CONFIG_ARM64 6 7 #include <asm/asm-offsets.h> 8 + #endif 7 9 8 - extern void my_direct_func(struct vm_area_struct *vma, 9 - unsigned long address, unsigned int flags); 10 + extern void my_direct_func(struct vm_area_struct *vma, unsigned long address, 11 + unsigned int flags, struct pt_regs *regs); 10 12 11 - void my_direct_func(struct vm_area_struct *vma, 12 - unsigned long address, unsigned int flags) 13 + void my_direct_func(struct vm_area_struct *vma, unsigned long address, 14 + unsigned int flags, struct pt_regs *regs) 13 15 { 14 - trace_printk("handle mm fault vma=%p address=%lx flags=%x\n", 15 - vma, address, flags); 16 + trace_printk("handle mm fault vma=%p address=%lx flags=%x regs=%p\n", 17 + vma, address, flags, regs); 16 18 } 17 19 18 20 extern void my_tramp(void *); ··· 36 34 " pushq %rdi\n" 37 35 " pushq %rsi\n" 38 36 " pushq %rdx\n" 37 + " pushq %rcx\n" 39 38 " call my_direct_func\n" 39 + " popq %rcx\n" 40 40 " popq %rdx\n" 41 41 " popq %rsi\n" 42 42 " popq %rdi\n" ··· 73 69 ); 74 70 75 71 #endif /* CONFIG_S390 */ 72 + 73 + #ifdef CONFIG_ARM64 74 + 75 + asm ( 76 + " .pushsection .text, \"ax\", @progbits\n" 77 + " .type my_tramp, @function\n" 78 + " .globl my_tramp\n" 79 + " my_tramp:" 80 + " bti c\n" 81 + " sub sp, sp, #48\n" 82 + " stp x9, x30, [sp]\n" 83 + " stp x0, x1, [sp, #16]\n" 84 + " stp x2, x3, [sp, #32]\n" 85 + " bl my_direct_func\n" 86 + " ldp x30, x9, [sp]\n" 87 + " ldp x0, x1, [sp, #16]\n" 88 + " ldp x2, x3, [sp, #32]\n" 89 + " add sp, sp, #48\n" 90 + " ret x9\n" 91 + " .size my_tramp, .-my_tramp\n" 92 + " .popsection\n" 93 + ); 94 + 95 + #endif /* CONFIG_ARM64 */ 76 96 77 97 #ifdef CONFIG_LOONGARCH 78 98
+24
samples/ftrace/ftrace-direct.c
··· 3 3 4 4 #include <linux/sched.h> /* for wake_up_process() */ 5 5 #include <linux/ftrace.h> 6 + #ifndef CONFIG_ARM64 6 7 #include <asm/asm-offsets.h> 8 + #endif 7 9 8 10 extern void my_direct_func(struct task_struct *p); 9 11 ··· 64 62 ); 65 63 66 64 #endif /* CONFIG_S390 */ 65 + 66 + #ifdef CONFIG_ARM64 67 + 68 + asm ( 69 + " .pushsection .text, \"ax\", @progbits\n" 70 + " .type my_tramp, @function\n" 71 + " .globl my_tramp\n" 72 + " my_tramp:" 73 + " bti c\n" 74 + " sub sp, sp, #32\n" 75 + " stp x9, x30, [sp]\n" 76 + " str x0, [sp, #16]\n" 77 + " bl my_direct_func\n" 78 + " ldp x30, x9, [sp]\n" 79 + " ldr x0, [sp, #16]\n" 80 + " add sp, sp, #32\n" 81 + " ret x9\n" 82 + " .size my_tramp, .-my_tramp\n" 83 + " .popsection\n" 84 + ); 85 + 86 + #endif /* CONFIG_ARM64 */ 67 87 68 88 #ifdef CONFIG_LOONGARCH 69 89
+3 -3
scripts/kallsyms.c
··· 349 349 * ASCII[_] = 5f 350 350 * ASCII[a-z] = 61,7a 351 351 * 352 - * As above, replacing '.' with '\0' does not affect the main sorting, 353 - * but it helps us with subsorting. 352 + * As above, replacing the first '.' in ".llvm." with '\0' does not 353 + * affect the main sorting, but it helps us with subsorting. 354 354 */ 355 - p = strchr(s, '.'); 355 + p = strstr(s, ".llvm."); 356 356 if (p) 357 357 *p = '\0'; 358 358 }
-4
sound/Kconfig
··· 39 39 40 40 source "sound/oss/dmasound/Kconfig" 41 41 42 - if !UML 43 - 44 42 menuconfig SND 45 43 tristate "Advanced Linux Sound Architecture" 46 44 help ··· 100 102 source "sound/virtio/Kconfig" 101 103 102 104 endif # SND 103 - 104 - endif # !UML 105 105 106 106 endif # SOUND 107 107
+11
sound/soc/Kconfig
··· 38 38 bool
 39 39 select SND_DYNAMIC_MINORS
 40 40 
 41 + config SND_SOC_TOPOLOGY_BUILD
 42 + bool "Build topology core"
 43 + select SND_SOC_TOPOLOGY
 44 + depends on KUNIT
 45 + help
 46 + This option exists to facilitate running the KUnit tests for
 47 + the topology core. KUnit is frequently tested in virtual
 48 + environments with minimal drivers enabled, but the topology
 49 + core is usually selected by drivers. There is little reason
 50 + to enable it if not doing a KUnit build.
 51 + 
 41 52 config SND_SOC_TOPOLOGY_KUNIT_TEST
 42 53 tristate "KUnit tests for SoC topology"
 43 54 depends on KUNIT
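With the new option, the topology tests can also be pulled into a custom KUnit run via a `.kunitconfig` fragment. A plausible minimal one (using only option names visible in this series, plus the pre-existing `SND_SOC_TOPOLOGY_KUNIT_TEST` test option) would be:

```
CONFIG_KUNIT=y
CONFIG_SOUND=y
CONFIG_SND=y
CONFIG_SND_SOC=y
CONFIG_SND_SOC_TOPOLOGY_BUILD=y
CONFIG_SND_SOC_TOPOLOGY_KUNIT_TEST=y
```

These are the same options the `all_tests.config` hunk adds so that a plain `kunit --alltests` run covers them.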
+1 -1
tools/objtool/elf.c
··· 1005 1005 perror("malloc"); 1006 1006 return NULL; 1007 1007 } 1008 - memset(elf, 0, offsetof(struct elf, sections)); 1008 + memset(elf, 0, sizeof(*elf)); 1009 1009 1010 1010 INIT_LIST_HEAD(&elf->sections); 1011 1011
+5
tools/testing/kunit/configs/all_tests.config
··· 35 35 36 36 CONFIG_SECURITY=y 37 37 CONFIG_SECURITY_APPARMOR=y 38 + 39 + CONFIG_SOUND=y 40 + CONFIG_SND=y 41 + CONFIG_SND_SOC=y 42 + CONFIG_SND_SOC_TOPOLOGY_BUILD=y
+9
tools/testing/selftests/bpf/prog_tests/async_stack_depth.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <test_progs.h> 3 + 4 + #include "async_stack_depth.skel.h" 5 + 6 + void test_async_stack_depth(void) 7 + { 8 + RUN_TESTS(async_stack_depth); 9 + }
+40
tools/testing/selftests/bpf/progs/async_stack_depth.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <vmlinux.h> 3 + #include <bpf/bpf_helpers.h> 4 + 5 + #include "bpf_misc.h" 6 + 7 + struct hmap_elem { 8 + struct bpf_timer timer; 9 + }; 10 + 11 + struct { 12 + __uint(type, BPF_MAP_TYPE_HASH); 13 + __uint(max_entries, 64); 14 + __type(key, int); 15 + __type(value, struct hmap_elem); 16 + } hmap SEC(".maps"); 17 + 18 + __attribute__((noinline)) 19 + static int timer_cb(void *map, int *key, struct bpf_timer *timer) 20 + { 21 + volatile char buf[256] = {}; 22 + return buf[69]; 23 + } 24 + 25 + SEC("tc") 26 + __failure __msg("combined stack size of 2 calls") 27 + int prog(struct __sk_buff *ctx) 28 + { 29 + struct hmap_elem *elem; 30 + volatile char buf[256] = {}; 31 + 32 + elem = bpf_map_lookup_elem(&hmap, &(int){0}); 33 + if (!elem) 34 + return 0; 35 + 36 + timer_cb(NULL, NULL, NULL); 37 + return bpf_timer_set_callback(&elem->timer, timer_cb) + buf[0]; 38 + } 39 + 40 + char _license[] SEC("license") = "GPL";
+1
tools/testing/selftests/hid/vmtest.sh
··· 79 79 cd "${kernel_checkout}" 80 80 81 81 ${make_command} olddefconfig 82 + ${make_command} headers 82 83 ${make_command} 83 84 } 84 85
+86
tools/testing/selftests/tc-testing/tc-tests/qdiscs/qfq.json
··· 213 213 "$TC qdisc del dev $DUMMY handle 1: root",
 214 214 "$IP link del dev $DUMMY type dummy"
 215 215 ]
 216 + },
 217 + {
 218 + "id": "85ee",
 219 + "name": "QFQ with big MTU",
 220 + "category": [
 221 + "qdisc",
 222 + "qfq"
 223 + ],
 224 + "plugins": {
 225 + "requires": "nsPlugin"
 226 + },
 227 + "setup": [
 228 + "$IP link add dev $DUMMY type dummy || /bin/true",
 229 + "$IP link set dev $DUMMY mtu 2147483647 || /bin/true",
 230 + "$TC qdisc add dev $DUMMY handle 1: root qfq"
 231 + ],
 232 + "cmdUnderTest": "$TC class add dev $DUMMY parent 1: classid 1:1 qfq weight 100",
 233 + "expExitCode": "2",
 234 + "verifyCmd": "$TC class show dev $DUMMY",
 235 + "matchPattern": "class qfq 1:",
 236 + "matchCount": "0",
 237 + "teardown": [
 238 + "$IP link del dev $DUMMY type dummy"
 239 + ]
 240 + },
 241 + {
 242 + "id": "ddfa",
 243 + "name": "QFQ with small MTU",
 244 + "category": [
 245 + "qdisc",
 246 + "qfq"
 247 + ],
 248 + "plugins": {
 249 + "requires": "nsPlugin"
 250 + },
 251 + "setup": [
 252 + "$IP link add dev $DUMMY type dummy || /bin/true",
 253 + "$IP link set dev $DUMMY mtu 256 || /bin/true",
 254 + "$TC qdisc add dev $DUMMY handle 1: root qfq"
 255 + ],
 256 + "cmdUnderTest": "$TC class add dev $DUMMY parent 1: classid 1:1 qfq weight 100",
 257 + "expExitCode": "2",
 258 + "verifyCmd": "$TC class show dev $DUMMY",
 259 + "matchPattern": "class qfq 1:",
 260 + "matchCount": "0",
 261 + "teardown": [
 262 + "$IP link del dev $DUMMY type dummy"
 263 + ]
 264 + },
 265 + {
 266 + "id": "5993",
 267 + "name": "QFQ with stab overhead greater than max packet len",
 268 + "category": [
 269 + "qdisc",
 270 + "qfq",
 271 + "scapy"
 272 + ],
 273 + "plugins": {
 274 + "requires": [
 275 + "nsPlugin",
 276 + "scapyPlugin"
 277 + ]
 278 + },
 279 + "setup": [
 280 + "$IP link add dev $DUMMY type dummy || /bin/true",
 281 + "$IP link set dev $DUMMY up || /bin/true",
 282 + "$TC qdisc add dev $DUMMY handle 1: stab mtu 2048 tsize 512 mpu 0 overhead 999999999 linklayer ethernet root qfq",
 283 + "$TC class add dev $DUMMY parent 1: classid 1:1 qfq weight 100",
 284 + "$TC qdisc add dev $DEV1 clsact",
 285 + "$TC filter add dev $DEV1 ingress protocol ip flower dst_ip 1.3.3.7/32 action mirred egress mirror dev $DUMMY"
 286 + ],
 287 + "cmdUnderTest": "$TC filter add dev $DUMMY parent 1: matchall classid 1:1",
 288 + "scapy": [
 289 + {
 290 + "iface": "$DEV0",
 291 + "count": 22,
 292 + "packet": "Ether(type=0x800)/IP(src='10.0.0.10',dst='1.3.3.7')/TCP(sport=5000,dport=10)"
 293 + }
 294 + ],
 295 + "expExitCode": "0",
 296 + "verifyCmd": "$TC -s qdisc ls dev $DUMMY",
 297 + "matchPattern": "dropped 22",
 298 + "matchCount": "1",
 299 + "teardown": [
 300 + "$TC qdisc del dev $DUMMY handle 1: root qfq"
 301 + ]
 216 302 }
 217 303 ]
+12
tools/testing/selftests/user_events/dyn_test.c
··· 217 217 /* Types don't match */
 218 218 TEST_NMATCH("__test_event u64 a; u64 b",
 219 219 "__test_event u32 a; u32 b");
 220 + 
 221 + /* Struct name and size match */
 222 + TEST_MATCH("__test_event struct my_struct a 20",
 223 + "__test_event struct my_struct a 20");
 224 + 
 225 + /* Struct name doesn't match */
 226 + TEST_NMATCH("__test_event struct my_struct a 20",
 227 + "__test_event struct my_struct b 20");
 228 + 
 229 + /* Struct size doesn't match */
 230 + TEST_NMATCH("__test_event struct my_struct a 20",
 231 + "__test_event struct my_struct a 21");
 220 232 }
 221 233 
 222 234 int main(int argc, char **argv)