···
 What:		/sys/bus/platform/drivers/ufshcd/*/rpm_lvl
 What:		/sys/bus/platform/devices/*.ufs/rpm_lvl
 Date:		September 2014
-Contact:	Subhash Jadavani <subhashj@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This entry could be used to set or show the UFS device
 		runtime power management level. The current driver
 		implementation supports 7 levels with next target states:
···
 What:		/sys/bus/platform/drivers/ufshcd/*/rpm_target_dev_state
 What:		/sys/bus/platform/devices/*.ufs/rpm_target_dev_state
 Date:		February 2018
-Contact:	Subhash Jadavani <subhashj@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This entry shows the target power mode of an UFS device
 		for the chosen runtime power management level.
 
···
 What:		/sys/bus/platform/drivers/ufshcd/*/rpm_target_link_state
 What:		/sys/bus/platform/devices/*.ufs/rpm_target_link_state
 Date:		February 2018
-Contact:	Subhash Jadavani <subhashj@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This entry shows the target state of an UFS UIC link
 		for the chosen runtime power management level.
 
···
 What:		/sys/bus/platform/drivers/ufshcd/*/spm_lvl
 What:		/sys/bus/platform/devices/*.ufs/spm_lvl
 Date:		September 2014
-Contact:	Subhash Jadavani <subhashj@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This entry could be used to set or show the UFS device
 		system power management level. The current driver
 		implementation supports 7 levels with next target states:
···
 What:		/sys/bus/platform/drivers/ufshcd/*/spm_target_dev_state
 What:		/sys/bus/platform/devices/*.ufs/spm_target_dev_state
 Date:		February 2018
-Contact:	Subhash Jadavani <subhashj@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This entry shows the target power mode of an UFS device
 		for the chosen system power management level.
 
···
 What:		/sys/bus/platform/drivers/ufshcd/*/spm_target_link_state
 What:		/sys/bus/platform/devices/*.ufs/spm_target_link_state
 Date:		February 2018
-Contact:	Subhash Jadavani <subhashj@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This entry shows the target state of an UFS UIC link
 		for the chosen system power management level.
 
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/monitor_enable
 What:		/sys/bus/platform/devices/*.ufs/monitor/monitor_enable
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows the status of performance monitor enablement
 		and it can be used to start/stop the monitor. When the monitor
 		is stopped, the performance data collected is also cleared.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/monitor_chunk_size
 What:		/sys/bus/platform/devices/*.ufs/monitor/monitor_chunk_size
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file tells the monitor to focus on requests transferring
 		data of specific chunk size (in Bytes). 0 means any chunk size.
 		It can only be changed when monitor is disabled.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/read_total_sectors
 What:		/sys/bus/platform/devices/*.ufs/monitor/read_total_sectors
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows how many sectors (in 512 Bytes) have been
 		sent from device to host after monitor gets started.
 
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/read_total_busy
 What:		/sys/bus/platform/devices/*.ufs/monitor/read_total_busy
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows how long (in micro seconds) has been spent
 		sending data from device to host after monitor gets started.
 
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/read_nr_requests
 What:		/sys/bus/platform/devices/*.ufs/monitor/read_nr_requests
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows how many read requests have been sent after
 		monitor gets started.
 
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/read_req_latency_max
 What:		/sys/bus/platform/devices/*.ufs/monitor/read_req_latency_max
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows the maximum latency (in micro seconds) of
 		read requests after monitor gets started.
 
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/read_req_latency_min
 What:		/sys/bus/platform/devices/*.ufs/monitor/read_req_latency_min
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows the minimum latency (in micro seconds) of
 		read requests after monitor gets started.
 
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/read_req_latency_avg
 What:		/sys/bus/platform/devices/*.ufs/monitor/read_req_latency_avg
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows the average latency (in micro seconds) of
 		read requests after monitor gets started.
 
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/read_req_latency_sum
 What:		/sys/bus/platform/devices/*.ufs/monitor/read_req_latency_sum
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows the total latency (in micro seconds) of
 		read requests sent after monitor gets started.
 
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/write_total_sectors
 What:		/sys/bus/platform/devices/*.ufs/monitor/write_total_sectors
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows how many sectors (in 512 Bytes) have been sent
 		from host to device after monitor gets started.
 
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/write_total_busy
 What:		/sys/bus/platform/devices/*.ufs/monitor/write_total_busy
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows how long (in micro seconds) has been spent
 		sending data from host to device after monitor gets started.
 
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/write_nr_requests
 What:		/sys/bus/platform/devices/*.ufs/monitor/write_nr_requests
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows how many write requests have been sent after
 		monitor gets started.
 
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/write_req_latency_max
 What:		/sys/bus/platform/devices/*.ufs/monitor/write_req_latency_max
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows the maximum latency (in micro seconds) of write
 		requests after monitor gets started.
 
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/write_req_latency_min
 What:		/sys/bus/platform/devices/*.ufs/monitor/write_req_latency_min
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows the minimum latency (in micro seconds) of write
 		requests after monitor gets started.
 
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/write_req_latency_avg
 What:		/sys/bus/platform/devices/*.ufs/monitor/write_req_latency_avg
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows the average latency (in micro seconds) of write
 		requests after monitor gets started.
 
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/write_req_latency_sum
 What:		/sys/bus/platform/devices/*.ufs/monitor/write_req_latency_sum
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows the total latency (in micro seconds) of write
 		requests after monitor gets started.
 
···
 What:		/sys/bus/platform/drivers/ufshcd/*/device_descriptor/wb_presv_us_en
 What:		/sys/bus/platform/devices/*.ufs/device_descriptor/wb_presv_us_en
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows if preserve user-space was configured
 
 		The file is read only.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/device_descriptor/wb_shared_alloc_units
 What:		/sys/bus/platform/devices/*.ufs/device_descriptor/wb_shared_alloc_units
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows the shared allocated units of WB buffer
 
 		The file is read only.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/device_descriptor/wb_type
 What:		/sys/bus/platform/devices/*.ufs/device_descriptor/wb_type
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows the configured WB type.
 		0x1 for shared buffer mode. 0x0 for dedicated buffer mode.
 
···
 What:		/sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_buff_cap_adj
 What:		/sys/bus/platform/devices/*.ufs/geometry_descriptor/wb_buff_cap_adj
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows the total user-space decrease in shared
 		buffer mode.
 		The value of this parameter is 3 for TLC NAND when SLC mode
···
 What:		/sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_max_alloc_units
 What:		/sys/bus/platform/devices/*.ufs/geometry_descriptor/wb_max_alloc_units
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows the Maximum total WriteBooster Buffer size
 		which is supported by the entire device.
 
···
 What:		/sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_max_wb_luns
 What:		/sys/bus/platform/devices/*.ufs/geometry_descriptor/wb_max_wb_luns
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows the maximum number of luns that can support
 		WriteBooster.
 
···
 What:		/sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_sup_red_type
 What:		/sys/bus/platform/devices/*.ufs/geometry_descriptor/wb_sup_red_type
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	The supportability of user space reduction mode
 		and preserve user space mode.
 		00h: WriteBooster Buffer can be configured only in
···
 What:		/sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_sup_wb_type
 What:		/sys/bus/platform/devices/*.ufs/geometry_descriptor/wb_sup_wb_type
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	The supportability of WriteBooster Buffer type.
 
 		===	==========================================================
···
 What:		/sys/bus/platform/drivers/ufshcd/*/flags/wb_enable
 What:		/sys/bus/platform/devices/*.ufs/flags/wb_enable
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows the status of WriteBooster.
 
 		==	============================
···
 What:		/sys/bus/platform/drivers/ufshcd/*/flags/wb_flush_en
 What:		/sys/bus/platform/devices/*.ufs/flags/wb_flush_en
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows if flush is enabled.
 
 		==	=================================
···
 What:		/sys/bus/platform/drivers/ufshcd/*/flags/wb_flush_during_h8
 What:		/sys/bus/platform/devices/*.ufs/flags/wb_flush_during_h8
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	Flush WriteBooster Buffer during hibernate state.
 
 		==	=================================================
···
 What:		/sys/bus/platform/drivers/ufshcd/*/attributes/wb_avail_buf
 What:		/sys/bus/platform/devices/*.ufs/attributes/wb_avail_buf
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows the amount of unused WriteBooster buffer
 		available.
 
···
 What:		/sys/bus/platform/drivers/ufshcd/*/attributes/wb_cur_buf
 What:		/sys/bus/platform/devices/*.ufs/attributes/wb_cur_buf
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows the amount of unused current buffer.
 
 		The file is read only.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/attributes/wb_flush_status
 What:		/sys/bus/platform/devices/*.ufs/attributes/wb_flush_status
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows the flush operation status.
 
···
 What:		/sys/bus/platform/drivers/ufshcd/*/attributes/wb_life_time_est
 What:		/sys/bus/platform/devices/*.ufs/attributes/wb_life_time_est
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows an indication of the WriteBooster Buffer
 		lifetime based on the amount of performed program/erase cycles
···
 
 What:		/sys/class/scsi_device/*/device/unit_descriptor/wb_buf_alloc_units
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows the configured size of WriteBooster buffer.
 		0400h corresponds to 4GB.
···
 repository link above for any new networking-related commits. You may
 also check the following website for the current status:
 
-  http://vger.kernel.org/~davem/net-next.html
+  https://patchwork.hopto.org/net-next.html
 
 The ``net`` tree continues to collect fixes for the vX.Y content, and is
 fed back to Linus at regular (~weekly) intervals. Meaning that the
+1-1
Documentation/riscv/hwprobe.rst
···
   privileged ISA, with the following known exceptions (more exceptions may be
   added, but only if it can be demonstrated that the user ABI is not broken):
 
-  * The :fence.i: instruction cannot be directly executed by userspace
+  * The ``fence.i`` instruction cannot be directly executed by userspace
     programs (it may still be executed in userspace via a
     kernel-controlled mechanism such as the vDSO).
 
+2-1
Documentation/wmi/devices/dell-wmi-ddv.rst
···
 
 Returns a buffer usually containg 12 blocks of analytics data.
 Those blocks contain:
-- block number starting with 0 (u8)
+
+- a block number starting with 0 (u8)
 - 31 bytes of unknown data
 
 .. note::
···
 
 struct sigcontext {
 	struct user_regs_struct regs;  /* needs to be first */
-	struct __or1k_fpu_state fpu;
-	unsigned long oldmask;
+	union {
+		unsigned long fpcsr;
+		unsigned long oldmask;	/* unused */
+	};
 };
 
 #endif /* __ASM_OPENRISC_SIGCONTEXT_H */
···
  * Copyright (C) 2007 Ben. Herrenschmidt (benh@kernel.crashing.org), IBM Corp.
  */
 
+#include <linux/linkage.h>
 #include <linux/threads.h>
 #include <asm/reg.h>
 #include <asm/page.h>
···
 #define SPECIAL_EXC_LOAD(reg, name) \
 	ld	reg, (SPECIAL_EXC_##name * 8 + SPECIAL_EXC_FRAME_OFFS)(r1)
 
-special_reg_save:
+SYM_CODE_START_LOCAL(special_reg_save)
 	/*
 	 * We only need (or have stack space) to save this stuff if
 	 * we interrupted the kernel.
···
 	SPECIAL_EXC_STORE(r10,CSRR1)
 
 	blr
+SYM_CODE_END(special_reg_save)
 
-ret_from_level_except:
+SYM_CODE_START_LOCAL(ret_from_level_except)
 	ld	r3,_MSR(r1)
 	andi.	r3,r3,MSR_PR
 	beq	1f
···
 	mtxer	r11
 
 	blr
+SYM_CODE_END(ret_from_level_except)
 
 .macro ret_from_level srr0 srr1 paca_ex scratch
 	bl	ret_from_level_except
···
 	mfspr	r13,\scratch
 .endm
 
-ret_from_crit_except:
+SYM_CODE_START_LOCAL(ret_from_crit_except)
 	ret_from_level SPRN_CSRR0 SPRN_CSRR1 PACA_EXCRIT SPRN_SPRG_CRIT_SCRATCH
 	rfci
+SYM_CODE_END(ret_from_crit_except)
 
-ret_from_mc_except:
+SYM_CODE_START_LOCAL(ret_from_mc_except)
 	ret_from_level SPRN_MCSRR0 SPRN_MCSRR1 PACA_EXMC SPRN_SPRG_MC_SCRATCH
 	rfmci
+SYM_CODE_END(ret_from_mc_except)
 
 /* Exception prolog code for all exceptions */
 #define EXCEPTION_PROLOG(n, intnum, type, addition)	\
···
  * r14 and r15 containing the fault address and error code, with the
  * original values stashed away in the PACA
  */
-storage_fault_common:
+SYM_CODE_START_LOCAL(storage_fault_common)
 	addi	r3,r1,STACK_INT_FRAME_REGS
 	bl	do_page_fault
 	b	interrupt_return
+SYM_CODE_END(storage_fault_common)
 
 /*
  * Alignment exception doesn't fit entirely in the 0x100 bytes so it
  * continues here.
  */
-alignment_more:
+SYM_CODE_START_LOCAL(alignment_more)
 	addi	r3,r1,STACK_INT_FRAME_REGS
 	bl	alignment_exception
 	REST_NVGPRS(r1)
 	b	interrupt_return
+SYM_CODE_END(alignment_more)
 
 /*
  * Trampolines used when spotting a bad kernel stack pointer in
···
 BAD_STACK_TRAMPOLINE(0xf00)
 BAD_STACK_TRAMPOLINE(0xf20)
 
-	.globl	bad_stack_book3e
-bad_stack_book3e:
+_GLOBAL(bad_stack_book3e)
 	/* XXX: Needs to make SPRN_SPRG_GEN depend on exception type */
 	mfspr	r10,SPRN_SRR0;		/* read SRR0 before touching stack */
 	ld	r1,PACAEMERGSP(r13)
···
  * ever takes any parameters, the SCOM code must also be updated to
  * provide them.
  */
-	.globl a2_tlbinit_code_start
-a2_tlbinit_code_start:
+_GLOBAL(a2_tlbinit_code_start)
 
 	ori	r11,r3,MAS0_WQ_ALLWAYS
 	oris	r11,r11,MAS0_ESEL(3)@h /* Use way 3: workaround A2 erratum 376 */
···
 	mflr	r28
 	b	3b
 
-	.globl init_core_book3e
-init_core_book3e:
+_GLOBAL(init_core_book3e)
 	/* Establish the interrupt vector base */
 	tovirt(r2,r2)
 	LOAD_REG_ADDR(r3, interrupt_base_book3e)
···
 	sync
 	blr
 
-init_thread_book3e:
+SYM_CODE_START_LOCAL(init_thread_book3e)
 	lis	r3,(SPRN_EPCR_ICM | SPRN_EPCR_GICM)@h
 	mtspr	SPRN_EPCR,r3
···
 	mtspr	SPRN_TSR,r3
 
 	blr
+SYM_CODE_END(init_thread_book3e)
 
 _GLOBAL(__setup_base_ivors)
 	SET_IVOR(0, 0x020)	/* Critical Input */
+19-18
arch/powerpc/kernel/security.c
···
 
 static int ssb_prctl_get(struct task_struct *task)
 {
-	if (stf_enabled_flush_types == STF_BARRIER_NONE)
-		/*
-		 * We don't have an explicit signal from firmware that we're
-		 * vulnerable or not, we only have certain CPU revisions that
-		 * are known to be vulnerable.
-		 *
-		 * We assume that if we're on another CPU, where the barrier is
-		 * NONE, then we are not vulnerable.
-		 */
+	/*
+	 * The STF_BARRIER feature is on by default, so if it's off that means
+	 * firmware has explicitly said the CPU is not vulnerable via either
+	 * the hypercall or device tree.
+	 */
+	if (!security_ftr_enabled(SEC_FTR_STF_BARRIER))
 		return PR_SPEC_NOT_AFFECTED;
-	else
-		/*
-		 * If we do have a barrier type then we are vulnerable. The
-		 * barrier is not a global or per-process mitigation, so the
-		 * only value we can report here is PR_SPEC_ENABLE, which
-		 * appears as "vulnerable" in /proc.
-		 */
-		return PR_SPEC_ENABLE;
 
-	return -EINVAL;
+	/*
+	 * If the system's CPU has no known barrier (see setup_stf_barrier())
+	 * then assume that the CPU is not vulnerable.
+	 */
+	if (stf_enabled_flush_types == STF_BARRIER_NONE)
+		return PR_SPEC_NOT_AFFECTED;
+
+	/*
+	 * Otherwise the CPU is vulnerable. The barrier is not a global or
+	 * per-process mitigation, so the only value that can be reported here
+	 * is PR_SPEC_ENABLE, which appears as "vulnerable" in /proc.
+	 */
+	return PR_SPEC_ENABLE;
 }
 
 int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)
+9-4
arch/powerpc/mm/book3s64/hash_native.c
···
 
 static long native_hpte_remove(unsigned long hpte_group)
 {
+	unsigned long hpte_v, flags;
 	struct hash_pte *hptep;
 	int i;
 	int slot_offset;
-	unsigned long hpte_v;
+
+	local_irq_save(flags);
 
 	DBG_LOW("    remove(group=%lx)\n", hpte_group);
 
···
 		slot_offset &= 0x7;
 	}
 
-	if (i == HPTES_PER_GROUP)
-		return -1;
+	if (i == HPTES_PER_GROUP) {
+		i = -1;
+		goto out;
+	}
 
 	/* Invalidate the hpte. NOTE: this also unlocks it */
 	release_hpte_lock();
 	hptep->v = 0;
-
+out:
+	local_irq_restore(flags);
 	return i;
 }
 
+2-7
arch/riscv/kernel/cpufeature.c
···
 	}
 
 	/*
-	 * Linux requires the following extensions, so we may as well
-	 * always set them.
-	 */
-	set_bit(RISCV_ISA_EXT_ZICSR, isainfo->isa);
-	set_bit(RISCV_ISA_EXT_ZIFENCEI, isainfo->isa);
-
-	/*
 	 * These ones were as they were part of the base ISA when the
 	 * port & dt-bindings were upstreamed, and so can be set
 	 * unconditionally where `i` is in riscv,isa on DT systems.
 	 */
 	if (acpi_disabled) {
+		set_bit(RISCV_ISA_EXT_ZICSR, isainfo->isa);
+		set_bit(RISCV_ISA_EXT_ZIFENCEI, isainfo->isa);
 		set_bit(RISCV_ISA_EXT_ZICNTR, isainfo->isa);
 		set_bit(RISCV_ISA_EXT_ZIHPM, isainfo->isa);
 	}
+1-1
arch/riscv/mm/init.c
···
 	 */
 	crash_base = memblock_phys_alloc_range(crash_size, PMD_SIZE,
 					       search_start,
-					       min(search_end, (unsigned long) SZ_4G));
+					       min(search_end, (unsigned long)(SZ_4G - 1)));
 	if (crash_base == 0) {
 		/* Try again without restricting region to 32bit addressible memory */
 		crash_base = memblock_phys_alloc_range(crash_size, PMD_SIZE,
+3-3
arch/riscv/net/bpf_jit.h
···
 	struct bpf_prog *prog;
 	u16 *insns;		/* RV insns */
 	int ninsns;
-	int body_len;
+	int prologue_len;
 	int epilogue_offset;
 	int *offset;		/* BPF to RV */
 	int nexentries;
···
 	int from, to;
 
 	off++; /* BPF branch is from PC+1, RV is from PC */
-	from = (insn > 0) ? ctx->offset[insn - 1] : 0;
-	to = (insn + off > 0) ? ctx->offset[insn + off - 1] : 0;
+	from = (insn > 0) ? ctx->offset[insn - 1] : ctx->prologue_len;
+	to = (insn + off > 0) ? ctx->offset[insn + off - 1] : ctx->prologue_len;
 	return ninsns_rvoff(to - from);
 }
 
+13-6
arch/riscv/net/bpf_jit_core.c
···
 	unsigned int prog_size = 0, extable_size = 0;
 	bool tmp_blinded = false, extra_pass = false;
 	struct bpf_prog *tmp, *orig_prog = prog;
-	int pass = 0, prev_ninsns = 0, prologue_len, i;
+	int pass = 0, prev_ninsns = 0, i;
 	struct rv_jit_data *jit_data;
 	struct rv_jit_context *ctx;
 
···
 		prog = orig_prog;
 		goto out_offset;
 	}
+
+	if (build_body(ctx, extra_pass, NULL)) {
+		prog = orig_prog;
+		goto out_offset;
+	}
+
 	for (i = 0; i < prog->len; i++) {
 		prev_ninsns += 32;
 		ctx->offset[i] = prev_ninsns;
···
 	for (i = 0; i < NR_JIT_ITERATIONS; i++) {
 		pass++;
 		ctx->ninsns = 0;
+
+		bpf_jit_build_prologue(ctx);
+		ctx->prologue_len = ctx->ninsns;
+
 		if (build_body(ctx, extra_pass, ctx->offset)) {
 			prog = orig_prog;
 			goto out_offset;
 		}
-		ctx->body_len = ctx->ninsns;
-		bpf_jit_build_prologue(ctx);
+
 		ctx->epilogue_offset = ctx->ninsns;
 		bpf_jit_build_epilogue(ctx);
 
···
 
 	if (!prog->is_func || extra_pass) {
 		bpf_jit_binary_lock_ro(jit_data->header);
-		prologue_len = ctx->epilogue_offset - ctx->body_len;
 		for (i = 0; i < prog->len; i++)
-			ctx->offset[i] = ninsns_rvoff(prologue_len +
-						      ctx->offset[i]);
+			ctx->offset[i] = ninsns_rvoff(ctx->offset[i]);
 		bpf_prog_fill_jited_linfo(prog, ctx->offset);
 out_offset:
 	kfree(ctx->offset);
···
 
 config HD64461_IRQ
 	int "HD64461 IRQ"
 	depends on HD64461
-	default "36"
+	default "52"
 	help
-	  The default setting of the HD64461 IRQ is 36.
+	  The default setting of the HD64461 IRQ is 52.
 
 	  Do not change this unless you know what you are doing.
 
···
 
 unsigned long __xchg_u32(volatile u32 *m, u32 new);
 void __xchg_called_with_bad_pointer(void);
 
-static inline unsigned long __arch_xchg(unsigned long x, __volatile__ void * ptr, int size)
+static __always_inline unsigned long __arch_xchg(unsigned long x, __volatile__ void * ptr, int size)
 {
 	switch (size) {
 	case 4:
···
 .popsection
 
 /*
- * The unwinder expects the last frame on the stack to always be at the same
- * offset from the end of the page, which allows it to validate the stack.
- * Calling schedule_tail() directly would break that convention because its an
- * asmlinkage function so its argument has to be pushed on the stack. This
- * wrapper creates a proper "end of stack" frame header before the call.
- */
-.pushsection .text, "ax"
-SYM_FUNC_START(schedule_tail_wrapper)
-	FRAME_BEGIN
-
-	pushl	%eax
-	call	schedule_tail
-	popl	%eax
-
-	FRAME_END
-	RET
-SYM_FUNC_END(schedule_tail_wrapper)
-.popsection
-
-/*
  * A newly forked process directly context switches into this address.
  *
  * eax: prev task we switched from
···
  * edi: kernel thread arg
  */
 .pushsection .text, "ax"
-SYM_CODE_START(ret_from_fork)
-	call	schedule_tail_wrapper
+SYM_CODE_START(ret_from_fork_asm)
+	movl	%esp, %edx	/* regs */
 
-	testl	%ebx, %ebx
-	jnz	1f		/* kernel threads are uncommon */
+	/* return address for the stack unwinder */
+	pushl	$.Lsyscall_32_done
 
-2:
-	/* When we fork, we trace the syscall return in the child, too. */
-	movl	%esp, %eax
-	call	syscall_exit_to_user_mode
-	jmp	.Lsyscall_32_done
+	FRAME_BEGIN
+	/* prev already in EAX */
+	movl	%ebx, %ecx	/* fn */
+	pushl	%edi		/* fn_arg */
+	call	ret_from_fork
+	addl	$4, %esp
+	FRAME_END
 
-	/* kernel thread */
-1:	movl	%edi, %eax
-	CALL_NOSPEC ebx
-	/*
-	 * A kernel thread is allowed to return here after successfully
-	 * calling kernel_execve(). Exit to userspace to complete the execve()
-	 * syscall.
-	 */
-	movl	$0, PT_EAX(%esp)
-	jmp	2b
-SYM_CODE_END(ret_from_fork)
+	RET
+SYM_CODE_END(ret_from_fork_asm)
 .popsection
 
 SYM_ENTRY(__begin_SYSENTER_singlestep_region, SYM_L_GLOBAL, SYM_A_NONE)
···
 	struct perf_event *leader = event->group_leader;
 	struct perf_event *sibling = NULL;
 
+	/*
+	 * When this memload event is also the first event (no group
+	 * exists yet), then there is no aux event before it.
+	 */
+	if (leader == event)
+		return -ENODATA;
+
 	if (!is_mem_loads_aux_event(leader)) {
 		for_each_sibling_event(sibling, leader) {
 			if (is_mem_loads_aux_event(sibling))
···
 /*
  * Create a dummy function pointer reference to prevent objtool from marking
  * the function as needing to be "sealed" (i.e. ENDBR converted to NOP by
- * apply_ibt_endbr()).
+ * apply_seal_endbr()).
  */
 #define IBT_NOSEAL(fname)				\
 	".pushsection .discard.ibt_endbr_noseal\n\t"	\
+4
arch/x86/include/asm/nospec-branch.h
···
  * JMP_NOSPEC and CALL_NOSPEC macros can be used instead of a simple
  * indirect jmp/call which may be susceptible to the Spectre variant 2
  * attack.
+ *
+ * NOTE: these do not take kCFI into account and are thus not comparable to C
+ * indirect calls, take care when using. The target of these should be an ENDBR
+ * instruction irrespective of kCFI.
  */
 .macro JMP_NOSPEC reg:req
 #ifdef CONFIG_RETPOLINE
+3-1
arch/x86/include/asm/switch_to.h
···
 __visible struct task_struct *__switch_to(struct task_struct *prev,
 					  struct task_struct *next);
 
-asmlinkage void ret_from_fork(void);
+asmlinkage void ret_from_fork_asm(void);
+__visible void ret_from_fork(struct task_struct *prev, struct pt_regs *regs,
+			     int (*fn)(void *), void *fn_arg);
 
 /*
  * This is the structure pointed to by thread.sp for an inactive task. The
+67-4
arch/x86/kernel/alternative.c
···
 
 #ifdef CONFIG_X86_KERNEL_IBT
 
+static void poison_cfi(void *addr);
+
 static void __init_or_module poison_endbr(void *addr, bool warn)
 {
 	u32 endbr, poison = gen_endbr_poison();
···
 
 /*
  * Generated by: objtool --ibt
+ *
+ * Seal the functions for indirect calls by clobbering the ENDBR instructions
+ * and the kCFI hash value.
  */
-void __init_or_module noinline apply_ibt_endbr(s32 *start, s32 *end)
+void __init_or_module noinline apply_seal_endbr(s32 *start, s32 *end)
 {
 	s32 *s;
 
···
 
 		poison_endbr(addr, true);
 		if (IS_ENABLED(CONFIG_FINEIBT))
-			poison_endbr(addr - 16, false);
+			poison_cfi(addr - 16);
 	}
 }
 
 #else
 
-void __init_or_module apply_ibt_endbr(s32 *start, s32 *end) { }
+void __init_or_module apply_seal_endbr(s32 *start, s32 *end) { }
 
 #endif /* CONFIG_X86_KERNEL_IBT */
 
···
 	return 0;
 }
 
+static void cfi_rewrite_endbr(s32 *start, s32 *end)
+{
+	s32 *s;
+
+	for (s = start; s < end; s++) {
+		void *addr = (void *)s + *s;
+
+		poison_endbr(addr+16, false);
+	}
+}
+
 /* .retpoline_sites */
 static int cfi_rand_callers(s32 *start, s32 *end)
 {
···
 		return;
 
 	case CFI_FINEIBT:
+		/* place the FineIBT preamble at func()-16 */
 		ret = cfi_rewrite_preamble(start_cfi, end_cfi);
 		if (ret)
 			goto err;
 
+		/* rewrite the callers to target func()-16 */
 		ret = cfi_rewrite_callers(start_retpoline, end_retpoline);
 		if (ret)
 			goto err;
+
+		/* now that nobody targets func()+0, remove ENDBR there */
+		cfi_rewrite_endbr(start_cfi, end_cfi);
 
 		if (builtin)
 			pr_info("Using FineIBT CFI\n");
···
 	pr_err("Something went horribly wrong trying to rewrite the CFI implementation.\n");
 }
 
+static inline void poison_hash(void *addr)
+{
+	*(u32 *)addr = 0;
+}
+
+static void poison_cfi(void *addr)
+{
+	switch (cfi_mode) {
+	case CFI_FINEIBT:
+		/*
+		 * __cfi_\func:
+		 *	osp nopl (%rax)
+		 *	subl	$0, %r10d
+		 *	jz	1f
+		 *	ud2
+		 * 1:	nop
+		 */
+		poison_endbr(addr, false);
+		poison_hash(addr + fineibt_preamble_hash);
+		break;
+
+	case CFI_KCFI:
+		/*
+		 * __cfi_\func:
+		 *	movl	$0, %eax
+		 *	.skip	11, 0x90
+		 */
+		poison_hash(addr + 1);
+		break;
+
+	default:
+		break;
+	}
+}
+
 #else
 
 static void __apply_fineibt(s32 *start_retpoline, s32 *end_retpoline,
 			    s32 *start_cfi, s32 *end_cfi, bool builtin)
 {
 }
+
+#ifdef CONFIG_X86_KERNEL_IBT
+static void poison_cfi(void *addr) { }
+#endif
 
 #endif
 
···
 	 */
 	callthunks_patch_builtin_calls();
 
-	apply_ibt_endbr(__ibt_endbr_seal, __ibt_endbr_seal_end);
+	/*
+	 * Seal all functions that do not have their address taken.
+	 */
+	apply_seal_endbr(__ibt_endbr_seal, __ibt_endbr_seal_end);
 
 #ifdef CONFIG_SMP
 	/* Patch to UP if other cpus not imminent. */
arch/x86/kernel/ftrace.c (-1)
@@
 
 /* Defined as markers to the end of the ftrace default trampolines */
 extern void ftrace_regs_caller_end(void);
-extern void ftrace_regs_caller_ret(void);
 extern void ftrace_caller_end(void);
 extern void ftrace_caller_op_ptr(void);
 extern void ftrace_regs_caller_op_ptr(void);
@@
 /*
  * arch/xtensa/kernel/align.S
  *
- * Handle unalignment exceptions in kernel space.
+ * Handle unalignment and load/store exceptions.
  *
  * This file is subject to the terms and conditions of the GNU General
  * Public License.  See the file "COPYING" in the main directory of
@@
 #define LOAD_EXCEPTION_HANDLER
 #endif
 
-#if XCHAL_UNALIGNED_STORE_EXCEPTION || defined LOAD_EXCEPTION_HANDLER
+#if XCHAL_UNALIGNED_STORE_EXCEPTION || defined CONFIG_XTENSA_LOAD_STORE
+#define STORE_EXCEPTION_HANDLER
+#endif
+
+#if defined LOAD_EXCEPTION_HANDLER || defined STORE_EXCEPTION_HANDLER
 #define ANY_EXCEPTION_HANDLER
 #endif
 
-#if XCHAL_HAVE_WINDOWED
+#if XCHAL_HAVE_WINDOWED && defined CONFIG_MMU
 #define UNALIGNED_USER_EXCEPTION
 #endif
-
-/* First-level exception handler for unaligned exceptions.
- *
- * Note: This handler works only for kernel exceptions. Unaligned user
- *       access should get a seg fault.
- */
 
 /* Big and little endian 16-bit values are located in
  * different halves of a register. HWORD_START helps to
@@
 #ifdef ANY_EXCEPTION_HANDLER
 ENTRY(fast_unaligned)
 
-#if XCHAL_UNALIGNED_LOAD_EXCEPTION || XCHAL_UNALIGNED_STORE_EXCEPTION
-
 	call0	.Lsave_and_load_instruction
 
 	/* Analyze the instruction (load or store?). */
@@
 	/* 'store indicator bit' not set, jump */
 	_bbci.l	a4, OP1_SI_BIT + INSN_OP1, .Lload
 
-#endif
-#if XCHAL_UNALIGNED_STORE_EXCEPTION
+#ifdef STORE_EXCEPTION_HANDLER
 
 	/* Store: Jump to table entry to get the value in the source register.*/
@@
 	addx8	a5, a6, a5
 	jx	a5			# jump into table
 #endif
-#if XCHAL_UNALIGNED_LOAD_EXCEPTION
+#ifdef LOAD_EXCEPTION_HANDLER
 
 	/* Load: Load memory address. */
@@
 	mov	a14, a3		;	_j .Lexit;	.align 8
 	mov	a15, a3		;	_j .Lexit;	.align 8
 #endif
-#if XCHAL_UNALIGNED_STORE_EXCEPTION
+#ifdef STORE_EXCEPTION_HANDLER
 .Lstore_table:
 	l32i	a3, a2, PT_AREG0;	_j .Lstore_w;	.align 8
 	mov	a3, a1;			_j .Lstore_w;	.align 8	# fishy??
@@
 	mov	a3, a15		;	_j .Lstore_w;	.align 8
 #endif
 
-#ifdef ANY_EXCEPTION_HANDLER
 	/* We cannot handle this exception. */
 
 	.extern _kernel_exception
@@
 2:	movi	a0, _user_exception
 	jx	a0
-#endif
-#if XCHAL_UNALIGNED_STORE_EXCEPTION
+
+#ifdef STORE_EXCEPTION_HANDLER
 
 	# a7: instruction pointer, a4: instruction, a3: value
 .Lstore_w:
@@
 	s32i	a6, a4, 4
 #endif
 #endif
-#ifdef ANY_EXCEPTION_HANDLER
+
 .Lexit:
 #if XCHAL_HAVE_LOOPS
 	rsr	a4, lend		# check if we reached LEND
@@
 	__src_b	a4, a4, a5	# a4 has the instruction
 
 	ret
-#endif
+
 ENDPROC(fast_unaligned)
 
 ENTRY(fast_unaligned_fixup)
@@
 
 	init += sizeof(TRANSPORT_TUNTAP_NAME) - 1;
 	if (*init == ',') {
-		rem = split_if_spec(init + 1, &mac_str, &dev_name);
+		rem = split_if_spec(init + 1, &mac_str, &dev_name, NULL);
 		if (rem != NULL) {
 			pr_err("%s: extra garbage on specification : '%s'\n",
 			       dev->name, rem);
@@
 		rtnl_unlock();
 		pr_err("%s: error registering net device!\n", dev->name);
 		platform_device_unregister(&lp->pdev);
+		/* dev is freed by the iss_net_pdev_release callback */
 		return;
 	}
 	rtnl_unlock();
block/blk-crypto-profile.c (+10, -2)
@@
 	unsigned int slot_hashtable_size;
 
 	memset(profile, 0, sizeof(*profile));
-	init_rwsem(&profile->lock);
+
+	/*
+	 * profile->lock of an underlying device can nest inside profile->lock
+	 * of a device-mapper device, so use a dynamic lock class to avoid
+	 * false-positive lockdep reports.
+	 */
+	lockdep_register_key(&profile->lockdep_key);
+	__init_rwsem(&profile->lock, "&profile->lock", &profile->lockdep_key);
 
 	if (num_slots == 0)
 		return 0;
@@
 	profile->slots = kvcalloc(num_slots, sizeof(profile->slots[0]),
 				  GFP_KERNEL);
 	if (!profile->slots)
-		return -ENOMEM;
+		goto err_destroy;
 
 	profile->num_slots = num_slots;
@@
 {
 	if (!profile)
 		return;
+	lockdep_unregister_key(&profile->lockdep_key);
 	kvfree(profile->slot_hashtable);
 	kvfree_sensitive(profile->slots,
 			 sizeof(profile->slots[0]) * profile->num_slots);
@@
 	unsigned long	*conv_zones_bitmap;
 	unsigned long	*seq_zones_wlock;
 	unsigned int	nr_zones;
-	sector_t	zone_sectors;
 	sector_t	sector;
 };
@@
 	struct gendisk *disk = args->disk;
 	struct request_queue *q = disk->queue;
 	sector_t capacity = get_capacity(disk);
+	sector_t zone_sectors = q->limits.chunk_sectors;
+
+	/* Check for bad zones and holes in the zone report */
+	if (zone->start != args->sector) {
+		pr_warn("%s: Zone gap at sectors %llu..%llu\n",
+			disk->disk_name, args->sector, zone->start);
+		return -ENODEV;
+	}
+
+	if (zone->start >= capacity || !zone->len) {
+		pr_warn("%s: Invalid zone start %llu, length %llu\n",
+			disk->disk_name, zone->start, zone->len);
+		return -ENODEV;
+	}
 
 	/*
 	 * All zones must have the same size, with the exception on an eventual
 	 * smaller last zone.
 	 */
-	if (zone->start == 0) {
-		if (zone->len == 0 || !is_power_of_2(zone->len)) {
-			pr_warn("%s: Invalid zoned device with non power of two zone size (%llu)\n",
-				disk->disk_name, zone->len);
-			return -ENODEV;
-		}
-
-		args->zone_sectors = zone->len;
-		args->nr_zones = (capacity + zone->len - 1) >> ilog2(zone->len);
-	} else if (zone->start + args->zone_sectors < capacity) {
-		if (zone->len != args->zone_sectors) {
+	if (zone->start + zone->len < capacity) {
+		if (zone->len != zone_sectors) {
 			pr_warn("%s: Invalid zoned device with non constant zone size\n",
 				disk->disk_name);
 			return -ENODEV;
 		}
-	} else {
-		if (zone->len > args->zone_sectors) {
-			pr_warn("%s: Invalid zoned device with larger last zone size\n",
-				disk->disk_name);
-			return -ENODEV;
-		}
-	}
-
-	/* Check for holes in the zone report */
-	if (zone->start != args->sector) {
-		pr_warn("%s: Zone gap at sectors %llu..%llu\n",
-			disk->disk_name, args->sector, zone->start);
+	} else if (zone->len > zone_sectors) {
+		pr_warn("%s: Invalid zoned device with larger last zone size\n",
+			disk->disk_name);
 		return -ENODEV;
 	}
@@
  * @disk:	Target disk
  * @update_driver_data:	Callback to update driver data on the frozen disk
  *
- * Helper function for low-level device drivers to (re) allocate and initialize
- * a disk request queue zone bitmaps. This functions should normally be called
- * within the disk ->revalidate method for blk-mq based drivers. For BIO based
- * drivers only q->nr_zones needs to be updated so that the sysfs exposed value
- * is correct.
+ * Helper function for low-level device drivers to check and (re) allocate and
+ * initialize a disk request queue zone bitmaps. This functions should normally
+ * be called within the disk ->revalidate method for blk-mq based drivers.
+ * Before calling this function, the device driver must already have set the
+ * device zone size (chunk_sector limit) and the max zone append limit.
+ * For BIO based drivers, this function cannot be used. BIO based device drivers
+ * only need to set disk->nr_zones so that the sysfs exposed value is correct.
  * If the @update_driver_data callback function is not NULL, the callback is
  * executed with the device request queue frozen after all zones have been
  * checked.
@@
 			      void (*update_driver_data)(struct gendisk *disk))
 {
 	struct request_queue *q = disk->queue;
-	struct blk_revalidate_zone_args args = {
-		.disk		= disk,
-	};
+	sector_t zone_sectors = q->limits.chunk_sectors;
+	sector_t capacity = get_capacity(disk);
+	struct blk_revalidate_zone_args args = { };
 	unsigned int noio_flag;
 	int ret;
@@
 	if (WARN_ON_ONCE(!queue_is_mq(q)))
 		return -EIO;
 
-	if (!get_capacity(disk))
-		return -EIO;
+	if (!capacity)
+		return -ENODEV;
+
+	/*
+	 * Checks that the device driver indicated a valid zone size and that
+	 * the max zone append limit is set.
+	 */
+	if (!zone_sectors || !is_power_of_2(zone_sectors)) {
+		pr_warn("%s: Invalid non power of two zone size (%llu)\n",
+			disk->disk_name, zone_sectors);
+		return -ENODEV;
+	}
+
+	if (!q->limits.max_zone_append_sectors) {
+		pr_warn("%s: Invalid 0 maximum zone append limit\n",
+			disk->disk_name);
+		return -ENODEV;
+	}
 
 	/*
 	 * Ensure that all memory allocations in this context are done as if
 	 * GFP_NOIO was specified.
 	 */
+	args.disk = disk;
+	args.nr_zones = (capacity + zone_sectors - 1) >> ilog2(zone_sectors);
 	noio_flag = memalloc_noio_save();
 	ret = disk->fops->report_zones(disk, 0, UINT_MAX,
 				       blk_revalidate_zone_cb, &args);
@@
 	 * If zones where reported, make sure that the entire disk capacity
 	 * has been checked.
 	 */
-	if (ret > 0 && args.sector != get_capacity(disk)) {
+	if (ret > 0 && args.sector != capacity) {
 		pr_warn("%s: Missing zones from sector %llu\n",
 			disk->disk_name, args.sector);
 		ret = -ENODEV;
@@
 	 */
 	blk_mq_freeze_queue(q);
 	if (ret > 0) {
-		blk_queue_chunk_sectors(q, args.zone_sectors);
 		disk->nr_zones = args.nr_zones;
 		swap(disk->seq_zones_wlock, args.seq_zones_wlock);
 		swap(disk->conv_zones_bitmap, args.conv_zones_bitmap);
block/mq-deadline.c (+1, -1)
@@
 	 * zoned writes, start searching from the start of a zone.
 	 */
 	if (blk_rq_is_seq_zoned_write(rq))
-		pos -= round_down(pos, rq->q->limits.chunk_sectors);
+		pos = round_down(pos, rq->q->limits.chunk_sectors);
 
 	while (node) {
 		rq = rb_entry_rq(node);
@@
 	vdev->wa.punit_disabled = ivpu_is_fpga(vdev);
 	vdev->wa.clear_runtime_mem = false;
 	vdev->wa.d3hot_after_power_off = true;
+
+	if (ivpu_device_id(vdev) == PCI_DEVICE_ID_MTL && ivpu_revision(vdev) < 4)
+		vdev->wa.interrupt_clear_with_0 = true;
 }
 
 static void ivpu_hw_timeouts_init(struct ivpu_device *vdev)
@@
 	REGB_WR32(MTL_BUTTRESS_GLOBAL_INT_MASK, 0x1);
 	REGB_WR32(MTL_BUTTRESS_LOCAL_INT_MASK, BUTTRESS_IRQ_DISABLE_MASK);
 	REGV_WR64(MTL_VPU_HOST_SS_ICB_ENABLE_0, 0x0ull);
-	REGB_WR32(MTL_VPU_HOST_SS_FW_SOC_IRQ_EN, 0x0);
+	REGV_WR32(MTL_VPU_HOST_SS_FW_SOC_IRQ_EN, 0x0);
 }
 
 static void ivpu_hw_mtl_irq_wdt_nce_handler(struct ivpu_device *vdev)
@@
 		schedule_recovery = true;
 	}
 
-	/*
-	 * Clear local interrupt status by writing 0 to all bits.
-	 * This must be done after interrupts are cleared at the source.
-	 * Writing 1 triggers an interrupt, so we can't perform read update write.
-	 */
-	REGB_WR32(MTL_BUTTRESS_INTERRUPT_STAT, 0x0);
+	/* This must be done after interrupts are cleared at the source. */
+	if (IVPU_WA(interrupt_clear_with_0))
+		/*
+		 * Writing 1 triggers an interrupt, so we can't perform read update write.
+		 * Clear local interrupt status by writing 0 to all bits.
+		 */
+		REGB_WR32(MTL_BUTTRESS_INTERRUPT_STAT, 0x0);
+	else
+		REGB_WR32(MTL_BUTTRESS_INTERRUPT_STAT, status);
 
 	/* Re-enable global interrupt */
 	REGB_WR32(MTL_BUTTRESS_GLOBAL_INT_MASK, 0x0);
drivers/base/regmap/regmap-irq.c (+1, -1)
@@
 	if (!d->config_buf)
 		goto err_alloc;
 
-	for (i = 0; i < chip->num_config_regs; i++) {
+	for (i = 0; i < chip->num_config_bases; i++) {
 		d->config_buf[i] = kcalloc(chip->num_config_regs,
 					   sizeof(**d->config_buf),
 					   GFP_KERNEL);
drivers/block/null_blk/zoned.c (+5, -11)
@@
 	disk_set_zoned(nullb->disk, BLK_ZONED_HM);
 	blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q);
 	blk_queue_required_elevator_features(q, ELEVATOR_F_ZBD_SEQ_WRITE);
-
-	if (queue_is_mq(q)) {
-		int ret = blk_revalidate_disk_zones(nullb->disk, NULL);
-
-		if (ret)
-			return ret;
-	} else {
-		blk_queue_chunk_sectors(q, dev->zone_size_sects);
-		nullb->disk->nr_zones = bdev_nr_zones(nullb->disk->part0);
-	}
-
+	blk_queue_chunk_sectors(q, dev->zone_size_sects);
+	nullb->disk->nr_zones = bdev_nr_zones(nullb->disk->part0);
 	blk_queue_max_zone_append_sectors(q, dev->zone_size_sects);
 	disk_set_max_open_zones(nullb->disk, dev->zone_max_open);
 	disk_set_max_active_zones(nullb->disk, dev->zone_max_active);
+
+	if (queue_is_mq(q))
+		return blk_revalidate_disk_zones(nullb->disk, NULL);
 
 	return 0;
 }
@@
 	return smp_call_function_single(cpu, __us2e_freq_target, &index, 1);
 }
 
-static int __init us2e_freq_cpu_init(struct cpufreq_policy *policy)
+static int us2e_freq_cpu_init(struct cpufreq_policy *policy)
 {
 	unsigned int cpu = policy->cpu;
 	unsigned long clock_tick = sparc64_get_clock_tick(cpu) / 1000;
drivers/cpufreq/sparc-us3-cpufreq.c (+1, -1)
@@
 	return smp_call_function_single(cpu, update_safari_cfg, &new_bits, 1);
 }
 
-static int __init us3_freq_cpu_init(struct cpufreq_policy *policy)
+static int us3_freq_cpu_init(struct cpufreq_policy *policy)
 {
 	unsigned int cpu = policy->cpu;
 	unsigned long clock_tick = sparc64_get_clock_tick(cpu) / 1000;
drivers/dma-buf/dma-fence-unwrap.c (+22, -4)
@@
 {
 	struct dma_fence_array *result;
 	struct dma_fence *tmp, **array;
+	ktime_t timestamp;
 	unsigned int i;
 	size_t count;
 
 	count = 0;
+	timestamp = ns_to_ktime(0);
 	for (i = 0; i < num_fences; ++i) {
-		dma_fence_unwrap_for_each(tmp, &iter[i], fences[i])
-			if (!dma_fence_is_signaled(tmp))
+		dma_fence_unwrap_for_each(tmp, &iter[i], fences[i]) {
+			if (!dma_fence_is_signaled(tmp)) {
 				++count;
+			} else if (test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT,
+					    &tmp->flags)) {
+				if (ktime_after(tmp->timestamp, timestamp))
+					timestamp = tmp->timestamp;
+			} else {
+				/*
+				 * Use the current time if the fence is
+				 * currently signaling.
+				 */
+				timestamp = ktime_get();
+			}
+		}
 	}
 
+	/*
+	 * If we couldn't find a pending fence just return a private signaled
+	 * fence with the timestamp of the last signaled one.
+	 */
 	if (count == 0)
-		return dma_fence_get_stub();
+		return dma_fence_allocate_private_stub(timestamp);
 
 	array = kmalloc_array(count, sizeof(*array), GFP_KERNEL);
 	if (!array)
@@
 	} while (tmp);
 
 	if (count == 0) {
-		tmp = dma_fence_get_stub();
+		tmp = dma_fence_allocate_private_stub(ktime_get());
 		goto return_tmp;
 	}
drivers/dma-buf/dma-fence.c (+4, -3)
@@
 
 /**
  * dma_fence_allocate_private_stub - return a private, signaled fence
+ * @timestamp: timestamp when the fence was signaled
  *
  * Return a newly allocated and signaled stub fence.
  */
-struct dma_fence *dma_fence_allocate_private_stub(void)
+struct dma_fence *dma_fence_allocate_private_stub(ktime_t timestamp)
 {
 	struct dma_fence *fence;
 
 	fence = kzalloc(sizeof(*fence), GFP_KERNEL);
 	if (fence == NULL)
-		return ERR_PTR(-ENOMEM);
+		return NULL;
 
 	dma_fence_init(fence,
 		       &dma_fence_stub_ops,
@@
 	set_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
 		&fence->flags);
 
-	dma_fence_signal(fence);
+	dma_fence_signal_timestamp(fence, timestamp);
 
 	return fence;
 }
@@
 		if (!attachment->is_mapped)
 			continue;
 
+		if (attachment->bo_va->base.bo->tbo.pin_count)
+			continue;
+
 		kfd_mem_dmaunmap_attachment(mem, attachment);
 		ret = update_gpuvm_pte(mem, attachment, &sync_obj);
 		if (ret) {
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c (+19)
@@
 	return true;
 }
 
+/*
+ * Intel hosts such as Raptor Lake and Sapphire Rapids don't support dynamic
+ * speed switching. Until we have confirmation from Intel that a specific host
+ * supports it, it's safer that we keep it disabled for all.
+ *
+ * https://edc.intel.com/content/www/us/en/design/products/platforms/details/raptor-lake-s/13th-generation-core-processors-datasheet-volume-1-of-2/005/pci-express-support/
+ * https://gitlab.freedesktop.org/drm/amd/-/issues/2663
+ */
+bool amdgpu_device_pcie_dynamic_switching_supported(void)
+{
+#if IS_ENABLED(CONFIG_X86)
+	struct cpuinfo_x86 *c = &cpu_data(0);
+
+	if (c->x86_vendor == X86_VENDOR_INTEL)
+		return false;
+#endif
+	return true;
+}
+
 /**
  * amdgpu_device_should_use_aspm - check if the device should program ASPM
  *
@@
 		goto err_drm_client_init;
 	}
 
-	ret = armada_fbdev_client_hotplug(&fbh->client);
-	if (ret)
-		drm_dbg_kms(dev, "client hotplug ret=%d\n", ret);
-
 	drm_client_register(&fbh->client);
 
 	return;
drivers/gpu/drm/bridge/synopsys/dw-hdmi.c (+5, -4)
@@
 	/* Control for TMDS Bit Period/TMDS Clock-Period Ratio */
 	if (dw_hdmi_support_scdc(hdmi, display)) {
 		if (mtmdsclock > HDMI14_MAX_TMDSCLK)
-			drm_scdc_set_high_tmds_clock_ratio(&hdmi->connector, 1);
+			drm_scdc_set_high_tmds_clock_ratio(hdmi->curr_conn, 1);
 		else
-			drm_scdc_set_high_tmds_clock_ratio(&hdmi->connector, 0);
+			drm_scdc_set_high_tmds_clock_ratio(hdmi->curr_conn, 0);
 	}
 }
 EXPORT_SYMBOL_GPL(dw_hdmi_set_high_tmds_clock_ratio);
@@
 			min_t(u8, bytes, SCDC_MIN_SOURCE_VERSION));
 
 		/* Enabled Scrambling in the Sink */
-		drm_scdc_set_scrambling(&hdmi->connector, 1);
+		drm_scdc_set_scrambling(hdmi->curr_conn, 1);
 
 		/*
 		 * To activate the scrambler feature, you must ensure
@@
 		hdmi_writeb(hdmi, 0, HDMI_FC_SCRAMBLER_CTRL);
 		hdmi_writeb(hdmi, (u8)~HDMI_MC_SWRSTZ_TMDSSWRST_REQ,
 			    HDMI_MC_SWRSTZ);
-		drm_scdc_set_scrambling(&hdmi->connector, 0);
+		drm_scdc_set_scrambling(hdmi->curr_conn, 0);
 	}
 }
@@
 	hdmi->bridge.ops = DRM_BRIDGE_OP_DETECT | DRM_BRIDGE_OP_EDID
 			 | DRM_BRIDGE_OP_HPD;
 	hdmi->bridge.interlace_allowed = true;
+	hdmi->bridge.ddc = hdmi->ddc;
 #ifdef CONFIG_OF
 	hdmi->bridge.of_node = pdev->dev.of_node;
 #endif
drivers/gpu/drm/bridge/ti-sn65dsi86.c (+22, -13)
@@
  * @pwm_refclk_freq: Cache for the reference clock input to the PWM.
  */
 struct ti_sn65dsi86 {
-	struct auxiliary_device		bridge_aux;
-	struct auxiliary_device		gpio_aux;
-	struct auxiliary_device		aux_aux;
-	struct auxiliary_device		pwm_aux;
+	struct auxiliary_device		*bridge_aux;
+	struct auxiliary_device		*gpio_aux;
+	struct auxiliary_device		*aux_aux;
+	struct auxiliary_device		*pwm_aux;
 
 	struct device			*dev;
 	struct regmap			*regmap;
@@
 	auxiliary_device_delete(data);
 }
 
-/*
- * AUX bus docs say that a non-NULL release is mandatory, but it makes no
- * sense for the model used here where all of the aux devices are allocated
- * in the single shared structure. We'll use this noop as a workaround.
- */
-static void ti_sn65dsi86_noop(struct device *dev) {}
+static void ti_sn65dsi86_aux_device_release(struct device *dev)
+{
+	struct auxiliary_device *aux = container_of(dev, struct auxiliary_device, dev);
+
+	kfree(aux);
+}
 
 static int ti_sn65dsi86_add_aux_device(struct ti_sn65dsi86 *pdata,
-				       struct auxiliary_device *aux,
+				       struct auxiliary_device **aux_out,
 				       const char *name)
 {
 	struct device *dev = pdata->dev;
+	struct auxiliary_device *aux;
 	int ret;
+
+	aux = kzalloc(sizeof(*aux), GFP_KERNEL);
+	if (!aux)
+		return -ENOMEM;
 
 	aux->name = name;
 	aux->dev.parent = dev;
-	aux->dev.release = ti_sn65dsi86_noop;
+	aux->dev.release = ti_sn65dsi86_aux_device_release;
 	device_set_of_node_from_dev(&aux->dev, dev);
 	ret = auxiliary_device_init(aux);
-	if (ret)
+	if (ret) {
+		kfree(aux);
 		return ret;
+	}
 	ret = devm_add_action_or_reset(dev, ti_sn65dsi86_uninit_aux, aux);
 	if (ret)
 		return ret;
@@
 	if (ret)
 		return ret;
 	ret = devm_add_action_or_reset(dev, ti_sn65dsi86_delete_aux, aux);
+	if (!ret)
+		*aux_out = aux;
 
 	return ret;
 }
drivers/gpu/drm/drm_client.c (+21)
@@
  * drm_client_register() it is no longer permissible to call drm_client_release()
  * directly (outside the unregister callback), instead cleanup will happen
  * automatically on driver unload.
+ *
+ * Registering a client generates a hotplug event that allows the client
+ * to set up its display from pre-existing outputs. The client must have
+ * initialized its state to be able to handle the hotplug event successfully.
  */
 void drm_client_register(struct drm_client_dev *client)
 {
 	struct drm_device *dev = client->dev;
+	int ret;
 
 	mutex_lock(&dev->clientlist_mutex);
 	list_add(&client->list, &dev->clientlist);
+
+	if (client->funcs && client->funcs->hotplug) {
+		/*
+		 * Perform an initial hotplug event to pick up the
+		 * display configuration for the client. This step
+		 * has to be performed *after* registering the client
+		 * in the list of clients, or a concurrent hotplug
+		 * event might be lost; leaving the display off.
+		 *
+		 * Hold the clientlist_mutex as for a regular hotplug
+		 * event.
+		 */
+		ret = client->funcs->hotplug(client);
+		if (ret)
+			drm_dbg_kms(dev, "client hotplug ret=%d\n", ret);
+	}
 	mutex_unlock(&dev->clientlist_mutex);
 }
 EXPORT_SYMBOL(drm_client_register);
drivers/gpu/drm/drm_fbdev_dma.c (+1, -5)
@@
  * drm_fbdev_dma_setup() - Setup fbdev emulation for GEM DMA helpers
  * @dev: DRM device
  * @preferred_bpp: Preferred bits per pixel for the device.
- *                 @dev->mode_config.preferred_depth is used if this is zero.
+ *                 32 is used if this is zero.
  *
  * This function sets up fbdev emulation for GEM DMA drivers that support
  * dumb buffers with a virtual address and that can be mmap'ed.
@@
 		drm_err(dev, "Failed to register client: %d\n", ret);
 		goto err_drm_client_init;
 	}
-
-	ret = drm_fbdev_dma_client_hotplug(&fb_helper->client);
-	if (ret)
-		drm_dbg_kms(dev, "client hotplug ret=%d\n", ret);
 
 	drm_client_register(&fb_helper->client);
drivers/gpu/drm/drm_fbdev_generic.c (-4)
@@
 		goto err_drm_client_init;
 	}
 
-	ret = drm_fbdev_generic_client_hotplug(&fb_helper->client);
-	if (ret)
-		drm_dbg_kms(dev, "client hotplug ret=%d\n", ret);
-
 	drm_client_register(&fb_helper->client);
 
 	return;
drivers/gpu/drm/drm_syncobj.c (+3, -3)
@@
  */
 static int drm_syncobj_assign_null_handle(struct drm_syncobj *syncobj)
 {
-	struct dma_fence *fence = dma_fence_allocate_private_stub();
+	struct dma_fence *fence = dma_fence_allocate_private_stub(ktime_get());
 
-	if (IS_ERR(fence))
-		return PTR_ERR(fence);
+	if (!fence)
+		return -ENOMEM;
 
 	drm_syncobj_replace_fence(syncobj, fence);
 	dma_fence_put(fence);
drivers/gpu/drm/exynos/exynos_drm_fbdev.c (-4)
@@
 	if (ret)
 		goto err_drm_client_init;
 
-	ret = exynos_drm_fbdev_client_hotplug(&fb_helper->client);
-	if (ret)
-		drm_dbg_kms(dev, "client hotplug ret=%d\n", ret);
-
 	drm_client_register(&fb_helper->client);
 
 	return;
drivers/gpu/drm/gma500/fbdev.c (-4)
@@
 		goto err_drm_fb_helper_unprepare;
 	}
 
-	ret = psb_fbdev_client_hotplug(&fb_helper->client);
-	if (ret)
-		drm_dbg_kms(dev, "client hotplug ret=%d\n", ret);
-
 	drm_client_register(&fb_helper->client);
 
 	return;
@@
 	if (unlikely(flags & PTE_READ_ONLY))
 		pte &= ~GEN8_PAGE_RW;
 
-	if (flags & PTE_LM)
-		pte |= GEN12_PPGTT_PTE_LM;
-
 	/*
 	 * For pre-gen12 platforms pat_index is the same as enum
 	 * i915_cache_level, so the switch-case here is still valid.
drivers/gpu/drm/i915/gt/intel_gtt.c (+1, -1)
@@
 	if (IS_ERR(obj))
 		return ERR_CAST(obj);
 
-	i915_gem_object_set_cache_coherency(obj, I915_CACHING_CACHED);
+	i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC);
 
 	vma = i915_vma_instance(obj, vm, NULL);
 	if (IS_ERR(vma)) {
@@
 		goto err_drm_client_init;
 	}
 
-	ret = radeon_fbdev_client_hotplug(&fb_helper->client);
-	if (ret)
-		drm_dbg_kms(rdev->ddev, "client hotplug ret=%d\n", ret);
-
 	drm_client_register(&fb_helper->client);
 
 	return;
drivers/gpu/drm/scheduler/sched_entity.c (+33, -8)
@@
 {
 	struct drm_sched_job *job = container_of(cb, struct drm_sched_job,
 						 finish_cb);
-	int r;
+	unsigned long index;
 
 	dma_fence_put(f);
 
 	/* Wait for all dependencies to avoid data corruptions */
-	while (!xa_empty(&job->dependencies)) {
-		f = xa_erase(&job->dependencies, job->last_dependency++);
-		r = dma_fence_add_callback(f, &job->finish_cb,
-					   drm_sched_entity_kill_jobs_cb);
-		if (!r)
+	xa_for_each(&job->dependencies, index, f) {
+		struct drm_sched_fence *s_fence = to_drm_sched_fence(f);
+
+		if (s_fence && f == &s_fence->scheduled) {
+			/* The dependencies array had a reference on the scheduled
+			 * fence, and the finished fence refcount might have
+			 * dropped to zero. Use dma_fence_get_rcu() so we get
+			 * a NULL fence in that case.
+			 */
+			f = dma_fence_get_rcu(&s_fence->finished);
+
+			/* Now that we have a reference on the finished fence,
+			 * we can release the reference the dependencies array
+			 * had on the scheduled fence.
+			 */
+			dma_fence_put(&s_fence->scheduled);
+		}
+
+		xa_erase(&job->dependencies, index);
+		if (f && !dma_fence_add_callback(f, &job->finish_cb,
+						 drm_sched_entity_kill_jobs_cb))
 			return;
 
 		dma_fence_put(f);
@@
 drm_sched_job_dependency(struct drm_sched_job *job,
 			 struct drm_sched_entity *entity)
 {
-	if (!xa_empty(&job->dependencies))
-		return xa_erase(&job->dependencies, job->last_dependency++);
+	struct dma_fence *f;
+
+	/* We keep the fence around, so we can iterate over all dependencies
+	 * in drm_sched_entity_kill_jobs_cb() to ensure all deps are signaled
+	 * before killing the job.
+	 */
+	f = xa_load(&job->dependencies, job->last_dependency);
+	if (f) {
+		job->last_dependency++;
+		return dma_fence_get(f);
+	}
 
 	if (job->sched->ops->prepare_job)
 		return job->sched->ops->prepare_job(job, entity);
drivers/gpu/drm/scheduler/sched_fence.c (+25, -15)
@@
 	kmem_cache_destroy(sched_fence_slab);
 }
 
-void drm_sched_fence_scheduled(struct drm_sched_fence *fence)
+static void drm_sched_fence_set_parent(struct drm_sched_fence *s_fence,
+				       struct dma_fence *fence)
 {
+	/*
+	 * smp_store_release() to ensure another thread racing us
+	 * in drm_sched_fence_set_deadline_finished() sees the
+	 * fence's parent set before test_bit()
+	 */
+	smp_store_release(&s_fence->parent, dma_fence_get(fence));
+	if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT,
+		     &s_fence->finished.flags))
+		dma_fence_set_deadline(fence, s_fence->deadline);
+}
+
+void drm_sched_fence_scheduled(struct drm_sched_fence *fence,
+			       struct dma_fence *parent)
+{
+	/* Set the parent before signaling the scheduled fence, such that,
+	 * any waiter expecting the parent to be filled after the job has
+	 * been scheduled (which is the case for drivers delegating waits
+	 * to some firmware) doesn't have to busy wait for parent to show
+	 * up.
+	 */
+	if (!IS_ERR_OR_NULL(parent))
+		drm_sched_fence_set_parent(fence, parent);
+
 	dma_fence_signal(&fence->scheduled);
 }
@@
 	return NULL;
 }
 EXPORT_SYMBOL(to_drm_sched_fence);
-
-void drm_sched_fence_set_parent(struct drm_sched_fence *s_fence,
-				struct dma_fence *fence)
-{
-	/*
-	 * smp_store_release() to ensure another thread racing us
-	 * in drm_sched_fence_set_deadline_finished() sees the
-	 * fence's parent set before test_bit()
-	 */
-	smp_store_release(&s_fence->parent, dma_fence_get(fence));
-	if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT,
-		     &s_fence->finished.flags))
-		dma_fence_set_deadline(fence, s_fence->deadline);
-}
 
 struct drm_sched_fence *drm_sched_fence_alloc(struct drm_sched_entity *entity,
 					      void *owner)
drivers/gpu/drm/scheduler/sched_main.c (+1, -2)
@@
 		trace_drm_run_job(sched_job, entity);
 		fence = sched->ops->run_job(sched_job);
 		complete_all(&entity->entity_idle);
-		drm_sched_fence_scheduled(s_fence);
+		drm_sched_fence_scheduled(s_fence, fence);
 
 		if (!IS_ERR_OR_NULL(fence)) {
-			drm_sched_fence_set_parent(s_fence, fence);
 			/* Drop for original kref_init of the fence */
 			dma_fence_put(fence);
drivers/gpu/drm/tegra/fbdev.c (-4)
@@
 	if (ret)
 		goto err_drm_client_init;
 
-	ret = tegra_fbdev_client_hotplug(&helper->client);
-	if (ret)
-		drm_dbg_kms(dev, "client hotplug ret=%d\n", ret);
-
 	drm_client_register(&helper->client);
 
 	return;
+18-11
drivers/gpu/drm/ttm/ttm_bo.c
···458458 goto out;459459 }460460461461-bounce:462462- ret = ttm_bo_handle_move_mem(bo, evict_mem, true, ctx, &hop);463463- if (ret == -EMULTIHOP) {461461+ do {462462+ ret = ttm_bo_handle_move_mem(bo, evict_mem, true, ctx, &hop);463463+ if (ret != -EMULTIHOP)464464+ break;465465+464466 ret = ttm_bo_bounce_temp_buffer(bo, &evict_mem, ctx, &hop);465465- if (ret) {466466- if (ret != -ERESTARTSYS && ret != -EINTR)467467- pr_err("Buffer eviction failed\n");468468- ttm_resource_free(bo, &evict_mem);469469- goto out;470470- }471471- /* try and move to final place now. */472472- goto bounce;467467+ } while (!ret);468468+469469+ if (ret) {470470+ ttm_resource_free(bo, &evict_mem);471471+ if (ret != -ERESTARTSYS && ret != -EINTR)472472+ pr_err("Buffer eviction failed\n");473473 }474474out:475475 return ret;···516516 bool *locked, bool *busy)517517{518518 bool ret = false;519519+520520+ if (bo->pin_count) {521521+ *locked = false;522522+ *busy = false;523523+ return false;524524+ }519525520526 if (bo->base.resv == ctx->resv) {521527 dma_resv_assert_held(bo->base.resv);···11731167 ret = ttm_bo_handle_move_mem(bo, evict_mem, true, &ctx, &hop);11741168 if (unlikely(ret != 0)) {11751169 WARN(ret == -EMULTIHOP, "Unexpected multihop in swaput - likely driver bug.\n");11701170+ ttm_resource_free(bo, &evict_mem);11761171 goto out;11771172 }11781173 }
+4-1
drivers/gpu/drm/ttm/ttm_resource.c
···8686 struct ttm_resource *res)8787{8888 if (pos->last != res) {8989+ if (pos->first == res)9090+ pos->first = list_next_entry(res, lru);8991 list_move(&res->lru, &pos->last->lru);9092 pos->last = res;9193 }···113111{114112 struct ttm_lru_bulk_move_pos *pos = ttm_lru_bulk_move_pos(bulk, res);115113116116- if (unlikely(pos->first == res && pos->last == res)) {114114+ if (unlikely(WARN_ON(!pos->first || !pos->last) ||115115+ (pos->first == res && pos->last == res))) {117116 pos->first = NULL;118117 pos->last = NULL;119118 } else if (pos->first == res) {
···258258259259 switch (hid_msg_hdr->type) {260260 case SYNTH_HID_PROTOCOL_RESPONSE:261261+ len = struct_size(pipe_msg, data, pipe_msg->size);262262+261263 /*262264 * While it will be impossible for us to protect against263265 * malicious/buggy hypervisor/host, add a check here to264266 * ensure we don't corrupt memory.265267 */266266- if (struct_size(pipe_msg, data, pipe_msg->size)267267- > sizeof(struct mousevsc_prt_msg)) {268268- WARN_ON(1);268268+ if (WARN_ON(len > sizeof(struct mousevsc_prt_msg)))269269 break;270270- }271270272272- memcpy(&input_dev->protocol_resp, pipe_msg,273273- struct_size(pipe_msg, data, pipe_msg->size));271271+ memcpy(&input_dev->protocol_resp, pipe_msg, len);274272 complete(&input_dev->wait_event);275273 break;276274
+4-3
drivers/hid/hid-input.c
···10931093 case 0x074: map_key_clear(KEY_BRIGHTNESS_MAX); break;10941094 case 0x075: map_key_clear(KEY_BRIGHTNESS_AUTO); break;1095109510961096+ case 0x076: map_key_clear(KEY_CAMERA_ACCESS_ENABLE); break;10971097+ case 0x077: map_key_clear(KEY_CAMERA_ACCESS_DISABLE); break;10981098+ case 0x078: map_key_clear(KEY_CAMERA_ACCESS_TOGGLE); break;10991099+10961100 case 0x079: map_key_clear(KEY_KBDILLUMUP); break;10971101 case 0x07a: map_key_clear(KEY_KBDILLUMDOWN); break;10981102 case 0x07c: map_key_clear(KEY_KBDILLUMTOGGLE); break;···11431139 case 0x0cd: map_key_clear(KEY_PLAYPAUSE); break;11441140 case 0x0cf: map_key_clear(KEY_VOICECOMMAND); break;1145114111461146- case 0x0d5: map_key_clear(KEY_CAMERA_ACCESS_ENABLE); break;11471147- case 0x0d6: map_key_clear(KEY_CAMERA_ACCESS_DISABLE); break;11481148- case 0x0d7: map_key_clear(KEY_CAMERA_ACCESS_TOGGLE); break;11491142 case 0x0d8: map_key_clear(KEY_DICTATE); break;11501143 case 0x0d9: map_key_clear(KEY_EMOJI_PICKER); break;11511144
+2
drivers/hid/hid-logitech-hidpp.c
···4598459845994599 { /* Logitech G403 Wireless Gaming Mouse over USB */46004600 HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0xC082) },46014601+ { /* Logitech G502 Lightspeed Wireless Gaming Mouse over USB */46024602+ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0xC08D) },46014603 { /* Logitech G703 Gaming Mouse over USB */46024604 HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0xC087) },46034605 { /* Logitech G703 Hero Gaming Mouse over USB */
···3434 }35353636 ret = ida_alloc_range(&iommu_global_pasid_ida, min, max, GFP_KERNEL);3737- if (ret < min)3737+ if (ret < 0)3838 goto out;3939+3940 mm->pasid = ret;4041 ret = 0;4142out:
+16-15
drivers/iommu/iommu.c
···28912891 ret = __iommu_group_set_domain_internal(28922892 group, dom, IOMMU_SET_DOMAIN_MUST_SUCCEED);28932893 if (WARN_ON(ret))28942894- goto out_free;28942894+ goto out_free_old;28952895 } else {28962896 ret = __iommu_group_set_domain(group, dom);28972897- if (ret) {28982898- iommu_domain_free(dom);28992899- group->default_domain = old_dom;29002900- return ret;29012901- }28972897+ if (ret)28982898+ goto err_restore_def_domain;29022899 }2903290029042901 /*···29082911 for_each_group_device(group, gdev) {29092912 ret = iommu_create_device_direct_mappings(dom, gdev->dev);29102913 if (ret)29112911- goto err_restore;29142914+ goto err_restore_domain;29122915 }29132916 }2914291729152915-err_restore:29162916- if (old_dom) {29172917- __iommu_group_set_domain_internal(29182918- group, old_dom, IOMMU_SET_DOMAIN_MUST_SUCCEED);29192919- iommu_domain_free(dom);29202920- old_dom = NULL;29212921- }29222922-out_free:29182918+out_free_old:29232919 if (old_dom)29242920 iommu_domain_free(old_dom);29212921+ return ret;29222922+29232923+err_restore_domain:29242924+ if (old_dom)29252925+ __iommu_group_set_domain_internal(29262926+ group, old_dom, IOMMU_SET_DOMAIN_MUST_SUCCEED);29272927+err_restore_def_domain:29282928+ if (old_dom) {29292929+ iommu_domain_free(dom);29302930+ group->default_domain = old_dom;29312931+ }29252932 return ret;29262933}29272934
+4-6
drivers/net/dsa/ocelot/felix.c
···12861286 if (err < 0) {12871287 dev_info(dev, "Unsupported PHY mode %s on port %d\n",12881288 phy_modes(phy_mode), port);12891289- of_node_put(child);1290128912911290 /* Leave port_phy_modes[port] = 0, which is also12921291 * PHY_INTERFACE_MODE_NA. This will perform a···17851786{17861787 struct ocelot *ocelot = ds->priv;17871788 struct ocelot_port *ocelot_port = ocelot->ports[port];17881788- struct felix *felix = ocelot_to_felix(ocelot);1789178917901790 ocelot_port_set_maxlen(ocelot, port, new_mtu);1791179117921792- mutex_lock(&ocelot->tas_lock);17921792+ mutex_lock(&ocelot->fwd_domain_lock);1793179317941794- if (ocelot_port->taprio && felix->info->tas_guard_bands_update)17951795- felix->info->tas_guard_bands_update(ocelot, port);17941794+ if (ocelot_port->taprio && ocelot->ops->tas_guard_bands_update)17951795+ ocelot->ops->tas_guard_bands_update(ocelot, port);1796179617971797- mutex_unlock(&ocelot->tas_lock);17971797+ mutex_unlock(&ocelot->fwd_domain_lock);1798179817991799 return 0;18001800}
-1
drivers/net/dsa/ocelot/felix.h
···5757 void (*mdio_bus_free)(struct ocelot *ocelot);5858 int (*port_setup_tc)(struct dsa_switch *ds, int port,5959 enum tc_setup_type type, void *type_data);6060- void (*tas_guard_bands_update)(struct ocelot *ocelot, int port);6160 void (*port_sched_speed_set)(struct ocelot *ocelot, int port,6261 u32 speed);6362 void (*phylink_mac_config)(struct ocelot *ocelot, int port,
+40-19
drivers/net/dsa/ocelot/felix_vsc9959.c
···12091209static void vsc9959_tas_guard_bands_update(struct ocelot *ocelot, int port)12101210{12111211 struct ocelot_port *ocelot_port = ocelot->ports[port];12121212+ struct ocelot_mm_state *mm = &ocelot->mm[port];12121213 struct tc_taprio_qopt_offload *taprio;12131214 u64 min_gate_len[OCELOT_NUM_TC];12151215+ u32 val, maxlen, add_frag_size;12161216+ u64 needed_min_frag_time_ps;12141217 int speed, picos_per_byte;12151218 u64 needed_bit_time_ps;12161216- u32 val, maxlen;12171219 u8 tas_speed;12181220 int tc;1219122112201220- lockdep_assert_held(&ocelot->tas_lock);12221222+ lockdep_assert_held(&ocelot->fwd_domain_lock);1221122312221224 taprio = ocelot_port->taprio;12231225···12551253 */12561254 needed_bit_time_ps = (u64)(maxlen + 24) * picos_per_byte;1257125512561256+ /* Preemptible TCs don't need to pass a full MTU, the port will12571257+ * automatically emit a HOLD request when a preemptible TC gate closes12581258+ */12591259+ val = ocelot_read_rix(ocelot, QSYS_PREEMPTION_CFG, port);12601260+ add_frag_size = QSYS_PREEMPTION_CFG_MM_ADD_FRAG_SIZE_X(val);12611261+ needed_min_frag_time_ps = picos_per_byte *12621262+ (u64)(24 + 2 * ethtool_mm_frag_size_add_to_min(add_frag_size));12631263+12581264 dev_dbg(ocelot->dev,12591259- "port %d: max frame size %d needs %llu ps at speed %d\n",12601260- port, maxlen, needed_bit_time_ps, speed);12651265+ "port %d: max frame size %d needs %llu ps, %llu ps for mPackets at speed %d\n",12661266+ port, maxlen, needed_bit_time_ps, needed_min_frag_time_ps,12671267+ speed);1261126812621269 vsc9959_tas_min_gate_lengths(taprio, min_gate_len);12631263-12641264- mutex_lock(&ocelot->fwd_domain_lock);1265127012661271 for (tc = 0; tc < OCELOT_NUM_TC; tc++) {12671272 u32 requested_max_sdu = vsc9959_tas_tc_max_sdu(taprio, tc);···12781269 remaining_gate_len_ps =12791270 vsc9959_tas_remaining_gate_len_ps(min_gate_len[tc]);1280127112811281- if (remaining_gate_len_ps > needed_bit_time_ps) {12721272+ if ((mm->active_preemptible_tcs & BIT(tc)) ?12731273+ 
remaining_gate_len_ps > needed_min_frag_time_ps :12741274+ remaining_gate_len_ps > needed_bit_time_ps) {12821275 /* Setting QMAXSDU_CFG to 0 disables oversized frame12831276 * dropping.12841277 */···13341323 ocelot_write_rix(ocelot, maxlen, QSYS_PORT_MAX_SDU, port);1335132413361325 ocelot->ops->cut_through_fwd(ocelot);13371337-13381338- mutex_unlock(&ocelot->fwd_domain_lock);13391326}1340132713411328static void vsc9959_sched_speed_set(struct ocelot *ocelot, int port,···13601351 break;13611352 }1362135313631363- mutex_lock(&ocelot->tas_lock);13541354+ mutex_lock(&ocelot->fwd_domain_lock);1364135513651356 ocelot_rmw_rix(ocelot,13661357 QSYS_TAG_CONFIG_LINK_SPEED(tas_speed),···13701361 if (ocelot_port->taprio)13711362 vsc9959_tas_guard_bands_update(ocelot, port);1372136313731373- mutex_unlock(&ocelot->tas_lock);13641364+ mutex_unlock(&ocelot->fwd_domain_lock);13741365}1375136613761367static void vsc9959_new_base_time(struct ocelot *ocelot, ktime_t base_time,···14181409 int ret, i;14191410 u32 val;1420141114211421- mutex_lock(&ocelot->tas_lock);14121412+ mutex_lock(&ocelot->fwd_domain_lock);1422141314231414 if (taprio->cmd == TAPRIO_CMD_DESTROY) {14241415 ocelot_port_mqprio(ocelot, port, &taprio->mqprio);···1430142114311422 vsc9959_tas_guard_bands_update(ocelot, port);1432142314331433- mutex_unlock(&ocelot->tas_lock);14241424+ mutex_unlock(&ocelot->fwd_domain_lock);14341425 return 0;14351426 } else if (taprio->cmd != TAPRIO_CMD_REPLACE) {14361427 ret = -EOPNOTSUPP;···15131504 ocelot_port->taprio = taprio_offload_get(taprio);15141505 vsc9959_tas_guard_bands_update(ocelot, port);1515150615161516- mutex_unlock(&ocelot->tas_lock);15071507+ mutex_unlock(&ocelot->fwd_domain_lock);1517150815181509 return 0;15191510···15211512 taprio->mqprio.qopt.num_tc = 0;15221513 ocelot_port_mqprio(ocelot, port, &taprio->mqprio);15231514err_unlock:15241524- mutex_unlock(&ocelot->tas_lock);15151515+ mutex_unlock(&ocelot->fwd_domain_lock);1525151615261517 return ret;15271518}···15341525 int 
port;15351526 u32 val;1536152715371537- mutex_lock(&ocelot->tas_lock);15281528+ mutex_lock(&ocelot->fwd_domain_lock);1538152915391530 for (port = 0; port < ocelot->num_phys_ports; port++) {15401531 ocelot_port = ocelot->ports[port];···15721563 QSYS_TAG_CONFIG_ENABLE,15731564 QSYS_TAG_CONFIG, port);15741565 }15751575- mutex_unlock(&ocelot->tas_lock);15661566+ mutex_unlock(&ocelot->fwd_domain_lock);15761567}1577156815781569static int vsc9959_qos_port_cbs_set(struct dsa_switch *ds, int port,···16431634 }16441635}1645163616371637+static int vsc9959_qos_port_mqprio(struct ocelot *ocelot, int port,16381638+ struct tc_mqprio_qopt_offload *mqprio)16391639+{16401640+ int ret;16411641+16421642+ mutex_lock(&ocelot->fwd_domain_lock);16431643+ ret = ocelot_port_mqprio(ocelot, port, mqprio);16441644+ mutex_unlock(&ocelot->fwd_domain_lock);16451645+16461646+ return ret;16471647+}16481648+16461649static int vsc9959_port_setup_tc(struct dsa_switch *ds, int port,16471650 enum tc_setup_type type,16481651 void *type_data)···16671646 case TC_SETUP_QDISC_TAPRIO:16681647 return vsc9959_qos_port_tas_set(ocelot, port, type_data);16691648 case TC_SETUP_QDISC_MQPRIO:16701670- return ocelot_port_mqprio(ocelot, port, type_data);16491649+ return vsc9959_qos_port_mqprio(ocelot, port, type_data);16711650 case TC_SETUP_QDISC_CBS:16721651 return vsc9959_qos_port_cbs_set(ds, port, type_data);16731652 default:···26122591 .cut_through_fwd = vsc9959_cut_through_fwd,26132592 .tas_clock_adjust = vsc9959_tas_clock_adjust,26142593 .update_stats = vsc9959_update_stats,25942594+ .tas_guard_bands_update = vsc9959_tas_guard_bands_update,26152595};2616259626172597static const struct felix_info felix_info_vsc9959 = {···26382616 .port_modes = vsc9959_port_modes,26392617 .port_setup_tc = vsc9959_port_setup_tc,26402618 .port_sched_speed_set = vsc9959_sched_speed_set,26412641- .tas_guard_bands_update = vsc9959_tas_guard_bands_update,26422619};2643262026442621/* The INTB interrupt is shared between for PTP TX timestamp availability
+3
drivers/net/dsa/qca/qca8k-8xxx.c
···588588 bool ack;589589 int ret;590590591591+ if (!skb)592592+ return -ENOMEM;593593+591594 reinit_completion(&mgmt_eth_data->rw_done);592595593596 /* Increment seq_num and set it in the copy pkt */
···1492149214931493 bgmac->in_init = true;1494149414951495- bgmac_chip_intrs_off(bgmac);14961496-14971495 net_dev->irq = bgmac->irq;14981496 SET_NETDEV_DEV(net_dev, bgmac->dev);14991497 dev_set_drvdata(bgmac->dev, bgmac);···15081510 * Broadcom does it in arch PCI code when enabling fake PCI device.15091511 */15101512 bgmac_clk_enable(bgmac, 0);15131513+15141514+ bgmac_chip_intrs_off(bgmac);1511151515121516 /* This seems to be fixing IRQ by assigning OOB #6 to the core */15131517 if (!(bgmac->feature_flags & BGMAC_FEAT_IDM_MASK)) {
+15-2
drivers/net/ethernet/freescale/fec.h
···355355#define RX_RING_SIZE (FEC_ENET_RX_FRPPG * FEC_ENET_RX_PAGES)356356#define FEC_ENET_TX_FRSIZE 2048357357#define FEC_ENET_TX_FRPPG (PAGE_SIZE / FEC_ENET_TX_FRSIZE)358358-#define TX_RING_SIZE 512 /* Must be power of two */358358+#define TX_RING_SIZE 1024 /* Must be power of two */359359#define TX_RING_MOD_MASK 511 /* for this to work */360360361361#define BD_ENET_RX_INT 0x00800000···544544 XDP_STATS_TOTAL,545545};546546547547+enum fec_txbuf_type {548548+ FEC_TXBUF_T_SKB,549549+ FEC_TXBUF_T_XDP_NDO,550550+};551551+552552+struct fec_tx_buffer {553553+ union {554554+ struct sk_buff *skb;555555+ struct xdp_frame *xdp;556556+ };557557+ enum fec_txbuf_type type;558558+};559559+547560struct fec_enet_priv_tx_q {548561 struct bufdesc_prop bd;549562 unsigned char *tx_bounce[TX_RING_SIZE];550550- struct sk_buff *tx_skbuff[TX_RING_SIZE];563563+ struct fec_tx_buffer tx_buf[TX_RING_SIZE];551564552565 unsigned short tx_stop_threshold;553566 unsigned short tx_wake_threshold;
+112-54
drivers/net/ethernet/freescale/fec_main.c
···397397 fec16_to_cpu(bdp->cbd_sc),398398 fec32_to_cpu(bdp->cbd_bufaddr),399399 fec16_to_cpu(bdp->cbd_datlen),400400- txq->tx_skbuff[index]);400400+ txq->tx_buf[index].skb);401401 bdp = fec_enet_get_nextdesc(bdp, &txq->bd);402402 index++;403403 } while (bdp != txq->bd.base);···654654655655 index = fec_enet_get_bd_index(last_bdp, &txq->bd);656656 /* Save skb pointer */657657- txq->tx_skbuff[index] = skb;657657+ txq->tx_buf[index].skb = skb;658658659659 /* Make sure the updates to rest of the descriptor are performed before660660 * transferring ownership.···672672673673 skb_tx_timestamp(skb);674674675675- /* Make sure the update to bdp and tx_skbuff are performed before676676- * txq->bd.cur.677677- */675675+ /* Make sure the update to bdp is performed before txq->bd.cur. */678676 wmb();679677 txq->bd.cur = bdp;680678···860862 }861863862864 /* Save skb pointer */863863- txq->tx_skbuff[index] = skb;865865+ txq->tx_buf[index].skb = skb;864866865867 skb_tx_timestamp(skb);866868 txq->bd.cur = bdp;···950952 for (i = 0; i < txq->bd.ring_size; i++) {951953 /* Initialize the BD for every fragment in the page. 
*/952954 bdp->cbd_sc = cpu_to_fec16(0);953953- if (bdp->cbd_bufaddr &&954954- !IS_TSO_HEADER(txq, fec32_to_cpu(bdp->cbd_bufaddr)))955955- dma_unmap_single(&fep->pdev->dev,956956- fec32_to_cpu(bdp->cbd_bufaddr),957957- fec16_to_cpu(bdp->cbd_datlen),958958- DMA_TO_DEVICE);959959- if (txq->tx_skbuff[i]) {960960- dev_kfree_skb_any(txq->tx_skbuff[i]);961961- txq->tx_skbuff[i] = NULL;955955+ if (txq->tx_buf[i].type == FEC_TXBUF_T_SKB) {956956+ if (bdp->cbd_bufaddr &&957957+ !IS_TSO_HEADER(txq, fec32_to_cpu(bdp->cbd_bufaddr)))958958+ dma_unmap_single(&fep->pdev->dev,959959+ fec32_to_cpu(bdp->cbd_bufaddr),960960+ fec16_to_cpu(bdp->cbd_datlen),961961+ DMA_TO_DEVICE);962962+ if (txq->tx_buf[i].skb) {963963+ dev_kfree_skb_any(txq->tx_buf[i].skb);964964+ txq->tx_buf[i].skb = NULL;965965+ }966966+ } else {967967+ if (bdp->cbd_bufaddr)968968+ dma_unmap_single(&fep->pdev->dev,969969+ fec32_to_cpu(bdp->cbd_bufaddr),970970+ fec16_to_cpu(bdp->cbd_datlen),971971+ DMA_TO_DEVICE);972972+973973+ if (txq->tx_buf[i].xdp) {974974+ xdp_return_frame(txq->tx_buf[i].xdp);975975+ txq->tx_buf[i].xdp = NULL;976976+ }977977+978978+ /* restore default tx buffer type: FEC_TXBUF_T_SKB */979979+ txq->tx_buf[i].type = FEC_TXBUF_T_SKB;962980 }981981+963982 bdp->cbd_bufaddr = cpu_to_fec32(0);964983 bdp = fec_enet_get_nextdesc(bdp, &txq->bd);965984 }···13751360fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)13761361{13771362 struct fec_enet_private *fep;13631363+ struct xdp_frame *xdpf;13781364 struct bufdesc *bdp;13791365 unsigned short status;13801366 struct sk_buff *skb;···1403138714041388 index = fec_enet_get_bd_index(bdp, &txq->bd);1405138914061406- skb = txq->tx_skbuff[index];14071407- txq->tx_skbuff[index] = NULL;14081408- if (!IS_TSO_HEADER(txq, fec32_to_cpu(bdp->cbd_bufaddr)))14091409- dma_unmap_single(&fep->pdev->dev,14101410- fec32_to_cpu(bdp->cbd_bufaddr),14111411- fec16_to_cpu(bdp->cbd_datlen),14121412- DMA_TO_DEVICE);14131413- bdp->cbd_bufaddr = cpu_to_fec32(0);14141414- if 
(!skb)14151415- goto skb_done;13901390+ if (txq->tx_buf[index].type == FEC_TXBUF_T_SKB) {13911391+ skb = txq->tx_buf[index].skb;13921392+ txq->tx_buf[index].skb = NULL;13931393+ if (bdp->cbd_bufaddr &&13941394+ !IS_TSO_HEADER(txq, fec32_to_cpu(bdp->cbd_bufaddr)))13951395+ dma_unmap_single(&fep->pdev->dev,13961396+ fec32_to_cpu(bdp->cbd_bufaddr),13971397+ fec16_to_cpu(bdp->cbd_datlen),13981398+ DMA_TO_DEVICE);13991399+ bdp->cbd_bufaddr = cpu_to_fec32(0);14001400+ if (!skb)14011401+ goto tx_buf_done;14021402+ } else {14031403+ xdpf = txq->tx_buf[index].xdp;14041404+ if (bdp->cbd_bufaddr)14051405+ dma_unmap_single(&fep->pdev->dev,14061406+ fec32_to_cpu(bdp->cbd_bufaddr),14071407+ fec16_to_cpu(bdp->cbd_datlen),14081408+ DMA_TO_DEVICE);14091409+ bdp->cbd_bufaddr = cpu_to_fec32(0);14101410+ if (!xdpf) {14111411+ txq->tx_buf[index].type = FEC_TXBUF_T_SKB;14121412+ goto tx_buf_done;14131413+ }14141414+ }1416141514171416 /* Check for errors. */14181417 if (status & (BD_ENET_TX_HB | BD_ENET_TX_LC |···14461415 ndev->stats.tx_carrier_errors++;14471416 } else {14481417 ndev->stats.tx_packets++;14491449- ndev->stats.tx_bytes += skb->len;14501450- }1451141814521452- /* NOTE: SKBTX_IN_PROGRESS being set does not imply it's we who14531453- * are to time stamp the packet, so we still need to check time14541454- * stamping enabled flag.14551455- */14561456- if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS &&14571457- fep->hwts_tx_en) &&14581458- fep->bufdesc_ex) {14591459- struct skb_shared_hwtstamps shhwtstamps;14601460- struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;14611461-14621462- fec_enet_hwtstamp(fep, fec32_to_cpu(ebdp->ts), &shhwtstamps);14631463- skb_tstamp_tx(skb, &shhwtstamps);14191419+ if (txq->tx_buf[index].type == FEC_TXBUF_T_SKB)14201420+ ndev->stats.tx_bytes += skb->len;14211421+ else14221422+ ndev->stats.tx_bytes += xdpf->len;14641423 }1465142414661425 /* Deferred means some collisions occurred during transmit,···14591438 if (status & 
BD_ENET_TX_DEF)14601439 ndev->stats.collisions++;1461144014621462- /* Free the sk buffer associated with this last transmit */14631463- dev_kfree_skb_any(skb);14641464-skb_done:14651465- /* Make sure the update to bdp and tx_skbuff are performed14411441+ if (txq->tx_buf[index].type == FEC_TXBUF_T_SKB) {14421442+ /* NOTE: SKBTX_IN_PROGRESS being set does not imply it's we who14431443+ * are to time stamp the packet, so we still need to check time14441444+ * stamping enabled flag.14451445+ */14461446+ if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS &&14471447+ fep->hwts_tx_en) && fep->bufdesc_ex) {14481448+ struct skb_shared_hwtstamps shhwtstamps;14491449+ struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;14501450+14511451+ fec_enet_hwtstamp(fep, fec32_to_cpu(ebdp->ts), &shhwtstamps);14521452+ skb_tstamp_tx(skb, &shhwtstamps);14531453+ }14541454+14551455+ /* Free the sk buffer associated with this last transmit */14561456+ dev_kfree_skb_any(skb);14571457+ } else {14581458+ xdp_return_frame(xdpf);14591459+14601460+ txq->tx_buf[index].xdp = NULL;14611461+ /* restore default tx buffer type: FEC_TXBUF_T_SKB */14621462+ txq->tx_buf[index].type = FEC_TXBUF_T_SKB;14631463+ }14641464+14651465+tx_buf_done:14661466+ /* Make sure the update to bdp and tx_buf are performed14661467 * before dirty_tx14671468 */14681469 wmb();···32923249 for (i = 0; i < txq->bd.ring_size; i++) {32933250 kfree(txq->tx_bounce[i]);32943251 txq->tx_bounce[i] = NULL;32953295- skb = txq->tx_skbuff[i];32963296- txq->tx_skbuff[i] = NULL;32973297- dev_kfree_skb(skb);32523252+32533253+ if (txq->tx_buf[i].type == FEC_TXBUF_T_SKB) {32543254+ skb = txq->tx_buf[i].skb;32553255+ txq->tx_buf[i].skb = NULL;32563256+ dev_kfree_skb(skb);32573257+ } else {32583258+ if (txq->tx_buf[i].xdp) {32593259+ xdp_return_frame(txq->tx_buf[i].xdp);32603260+ txq->tx_buf[i].xdp = NULL;32613261+ }32623262+32633263+ txq->tx_buf[i].type = FEC_TXBUF_T_SKB;32643264+ }32983265 }32993266 }33003267}···33493296 
fep->total_tx_ring_size += fep->tx_queue[i]->bd.ring_size;3350329733513298 txq->tx_stop_threshold = FEC_MAX_SKB_DESCS;33523352- txq->tx_wake_threshold =33533353- (txq->bd.ring_size - txq->tx_stop_threshold) / 2;32993299+ txq->tx_wake_threshold = FEC_MAX_SKB_DESCS + 2 * MAX_SKB_FRAGS;3354330033553301 txq->tso_hdrs = dma_alloc_coherent(&fep->pdev->dev,33563302 txq->bd.ring_size * TSO_HEADER_SIZE,···37843732 if (fep->quirks & FEC_QUIRK_SWAP_FRAME)37853733 return -EOPNOTSUPP;3786373437353735+ if (!bpf->prog)37363736+ xdp_features_clear_redirect_target(dev);37373737+37873738 if (is_run) {37883739 napi_disable(&fep->napi);37893740 netif_tx_disable(dev);37903741 }3791374237923743 old_prog = xchg(&fep->xdp_prog, bpf->prog);37443744+ if (old_prog)37453745+ bpf_prog_put(old_prog);37463746+37933747 fec_restart(dev);3794374837953749 if (is_run) {···38033745 netif_tx_start_all_queues(dev);38043746 }3805374738063806- if (old_prog)38073807- bpf_prog_put(old_prog);37483748+ if (bpf->prog)37493749+ xdp_features_set_redirect_target(dev, false);3808375038093751 return 0;38103752···3836377838373779 entries_free = fec_enet_get_free_txdesc_num(txq);38383780 if (entries_free < MAX_SKB_FRAGS + 1) {38393839- netdev_err(fep->netdev, "NOT enough BD for SG!\n");37813781+ netdev_err_once(fep->netdev, "NOT enough BD for SG!\n");38403782 return -EBUSY;38413783 }38423784···38693811 ebdp->cbd_esc = cpu_to_fec32(estatus);38703812 }3871381338723872- txq->tx_skbuff[index] = NULL;38143814+ txq->tx_buf[index].type = FEC_TXBUF_T_XDP_NDO;38153815+ txq->tx_buf[index].xdp = frame;3873381638743817 /* Make sure the updates to rest of the descriptor are performed before38753818 * transferring ownership.···4075401640764017 if (!(fep->quirks & FEC_QUIRK_SWAP_FRAME))40774018 ndev->xdp_features = NETDEV_XDP_ACT_BASIC |40784078- NETDEV_XDP_ACT_REDIRECT |40794079- NETDEV_XDP_ACT_NDO_XMIT;40194019+ NETDEV_XDP_ACT_REDIRECT;4080402040814021 fec_restart(ndev);40824022
···57395739 q_handle = vsi->tx_rings[queue_index]->q_handle;57405740 tc = ice_dcb_get_tc(vsi, queue_index);5741574157425742+ vsi = ice_locate_vsi_using_queue(vsi, queue_index);57435743+ if (!vsi) {57445744+ netdev_err(netdev, "Invalid VSI for given queue %d\n",57455745+ queue_index);57465746+ return -EINVAL;57475747+ }57485748+57425749 /* Set BW back to default, when user set maxrate to 0 */57435750 if (!maxrate)57445751 status = ice_cfg_q_bw_dflt_lmt(vsi->port_info, vsi->idx, tc,···78797872ice_validate_mqprio_qopt(struct ice_vsi *vsi,78807873 struct tc_mqprio_qopt_offload *mqprio_qopt)78817874{78827882- u64 sum_max_rate = 0, sum_min_rate = 0;78837875 int non_power_of_2_qcount = 0;78847876 struct ice_pf *pf = vsi->back;78857877 int max_rss_q_cnt = 0;78787878+ u64 sum_min_rate = 0;78867879 struct device *dev;78877880 int i, speed;78887881 u8 num_tc;···78987891 dev = ice_pf_to_dev(pf);78997892 vsi->ch_rss_size = 0;79007893 num_tc = mqprio_qopt->qopt.num_tc;78947894+ speed = ice_get_link_speed_kbps(vsi);7901789579027896 for (i = 0; num_tc; i++) {79037897 int qcount = mqprio_qopt->qopt.count[i];···79397931 */79407932 max_rate = mqprio_qopt->max_rate[i];79417933 max_rate = div_u64(max_rate, ICE_BW_KBPS_DIVISOR);79427942- sum_max_rate += max_rate;7943793479447935 /* min_rate is minimum guaranteed rate and it can't be zero */79457936 min_rate = mqprio_qopt->min_rate[i];···79487941 if (min_rate && min_rate < ICE_MIN_BW_LIMIT) {79497942 dev_err(dev, "TC%d: min_rate(%llu Kbps) < %u Kbps\n", i,79507943 min_rate, ICE_MIN_BW_LIMIT);79447944+ return -EINVAL;79457945+ }79467946+79477947+ if (max_rate && max_rate > speed) {79487948+ dev_err(dev, "TC%d: max_rate(%llu Kbps) > link speed of %u Kbps\n",79497949+ i, max_rate, speed);79517950 return -EINVAL;79527951 }79537952···79947981 (mqprio_qopt->qopt.offset[i] + mqprio_qopt->qopt.count[i]))79957982 return -EINVAL;7996798379977997- speed = ice_get_link_speed_kbps(vsi);79987998- if (sum_max_rate && sum_max_rate > (u64)speed) 
{79997999- dev_err(dev, "Invalid max Tx rate(%llu) Kbps > speed(%u) Kbps specified\n",80008000- sum_max_rate, speed);80018001- return -EINVAL;80028002- }80037984 if (sum_min_rate && sum_min_rate > (u64)speed) {80047985 dev_err(dev, "Invalid min Tx rate(%llu) Kbps > speed (%u) Kbps specified\n",80057986 sum_min_rate, speed);
+11-11
drivers/net/ethernet/intel/ice/ice_tc_lib.c
···750750/**751751 * ice_locate_vsi_using_queue - locate VSI using queue (forward to queue action)752752 * @vsi: Pointer to VSI753753- * @tc_fltr: Pointer to tc_flower_filter753753+ * @queue: Queue index754754 *755755- * Locate the VSI using specified queue. When ADQ is not enabled, always756756- * return input VSI, otherwise locate corresponding VSI based on per channel757757- * offset and qcount755755+ * Locate the VSI using specified "queue". When ADQ is not enabled,756756+ * always return input VSI, otherwise locate corresponding757757+ * VSI based on per channel "offset" and "qcount"758758 */759759-static struct ice_vsi *760760-ice_locate_vsi_using_queue(struct ice_vsi *vsi,761761- struct ice_tc_flower_fltr *tc_fltr)759759+struct ice_vsi *760760+ice_locate_vsi_using_queue(struct ice_vsi *vsi, int queue)762761{763763- int num_tc, tc, queue;762762- int num_tc, tc;764763765764 /* if ADQ is not active, passed VSI is the candidate VSI */766765 if (!ice_is_adq_active(vsi->back))···769770 * upon queue number)770771 */771772 num_tc = vsi->mqprio_qopt.qopt.num_tc;772772- queue = tc_fltr->action.fwd.q.queue;773773774774 for (tc = 0; tc < num_tc; tc++) {775775 int qcount = vsi->mqprio_qopt.qopt.count[tc];···810812 struct ice_pf *pf = vsi->back;811813 struct device *dev;812814 u32 tc_class;815815+ int q;813816814817 dev = ice_pf_to_dev(pf);815818···839840 /* Determine destination VSI even though the action is840841 * FWD_TO_QUEUE, because QUEUE is associated with VSI841842 */842842- dest_vsi = tc_fltr->dest_vsi;843843+ q = tc_fltr->action.fwd.q.queue;844844+ dest_vsi = ice_locate_vsi_using_queue(vsi, q);843845 break;844846 default:845847 dev_err(dev,···17161716 /* If ADQ is configured, and the queue belongs to ADQ VSI, then prepare17171717 * ADQ switch filter17181718 */17191719- ch_vsi = ice_locate_vsi_using_queue(vsi, fltr);17191719+ ch_vsi = ice_locate_vsi_using_queue(vsi, fltr->action.fwd.q.queue);17201720 if (!ch_vsi)17211721 return -EINVAL;17221722 fltr->dest_vsi = ch_vsi;
···711711 /* disable the queue */712712 wr32(IGC_TXDCTL(reg_idx), 0);713713 wrfl();714714- mdelay(10);715714716715 wr32(IGC_TDLEN(reg_idx),717716 ring->count * sizeof(union igc_adv_tx_desc));···10161017 ktime_t base_time = adapter->base_time;10171018 ktime_t now = ktime_get_clocktai();10181019 ktime_t baset_est, end_of_cycle;10191019- u32 launchtime;10201020+ s32 launchtime;10201021 s64 n;1021102210221023 n = div64_s64(ktime_sub_ns(now, base_time), cycle_time);···10291030 *first_flag = true;10301031 ring->last_ff_cycle = baset_est;1031103210321032- if (ktime_compare(txtime, ring->last_tx_cycle) > 0)10331033+ if (ktime_compare(end_of_cycle, ring->last_tx_cycle) > 0)10331034 *insert_empty = true;10341035 }10351036 }···15721573 first->bytecount = skb->len;15731574 first->gso_segs = 1;1574157515751575- if (tx_ring->max_sdu > 0) {15761576- u32 max_sdu = 0;15761576+ if (adapter->qbv_transition || tx_ring->oper_gate_closed)15771577+ goto out_drop;1577157815781578- max_sdu = tx_ring->max_sdu +15791579- (skb_vlan_tagged(first->skb) ? 
 				VLAN_HLEN : 0);
 
-	if (first->bytecount > max_sdu) {
-		adapter->stats.txdrop++;
-		goto out_drop;
-	}
+	if (tx_ring->max_sdu > 0 && first->bytecount > tx_ring->max_sdu) {
+		adapter->stats.txdrop++;
+		goto out_drop;
 	}
 
 	if (unlikely(test_bit(IGC_RING_FLAG_TX_HWTSTAMP, &tx_ring->flags) &&
···
 		    time_after(jiffies, tx_buffer->time_stamp +
 			       (adapter->tx_timeout_factor * HZ)) &&
 		    !(rd32(IGC_STATUS) & IGC_STATUS_TXOFF) &&
-		    (rd32(IGC_TDH(tx_ring->reg_idx)) !=
-		     readl(tx_ring->tail))) {
+		    (rd32(IGC_TDH(tx_ring->reg_idx)) != readl(tx_ring->tail)) &&
+		    !tx_ring->oper_gate_closed) {
 			/* detected Tx unit hang */
 			netdev_err(tx_ring->netdev,
 				   "Detected Tx Unit Hang\n"
···
 
 	adapter->base_time = 0;
 	adapter->cycle_time = NSEC_PER_SEC;
+	adapter->taprio_offload_enable = false;
 	adapter->qbv_config_change_errors = 0;
+	adapter->qbv_transition = false;
+	adapter->qbv_count = 0;
 
 	for (i = 0; i < adapter->num_tx_queues; i++) {
 		struct igc_ring *ring = adapter->tx_ring[i];
···
 		ring->start_time = 0;
 		ring->end_time = NSEC_PER_SEC;
 		ring->max_sdu = 0;
+		ring->oper_gate_closed = false;
+		ring->admin_gate_closed = false;
 	}
 
 	return 0;
···
 	bool queue_configured[IGC_MAX_TX_QUEUES] = { };
 	struct igc_hw *hw = &adapter->hw;
 	u32 start_time = 0, end_time = 0;
+	struct timespec64 now;
 	size_t n;
 	int i;
 
-	switch (qopt->cmd) {
-	case TAPRIO_CMD_REPLACE:
-		adapter->qbv_enable = true;
-		break;
-	case TAPRIO_CMD_DESTROY:
-		adapter->qbv_enable = false;
-		break;
-	default:
-		return -EOPNOTSUPP;
-	}
-
-	if (!adapter->qbv_enable)
+	if (qopt->cmd == TAPRIO_CMD_DESTROY)
 		return igc_tsn_clear_schedule(adapter);
+
+	if (qopt->cmd != TAPRIO_CMD_REPLACE)
+		return -EOPNOTSUPP;
 
 	if (qopt->base_time < 0)
 		return -ERANGE;
 
-	if (igc_is_device_id_i225(hw) && adapter->base_time)
+	if (igc_is_device_id_i225(hw) && adapter->taprio_offload_enable)
 		return -EALREADY;
 
 	if (!validate_schedule(adapter, qopt))
···
 	adapter->cycle_time = qopt->cycle_time;
 	adapter->base_time = qopt->base_time;
+	adapter->taprio_offload_enable = true;
+
+	igc_ptp_read(adapter, &now);
 
 	for (n = 0; n < qopt->num_entries; n++) {
 		struct tc_taprio_sched_entry *e = &qopt->entries[n];
···
 			ring->start_time = start_time;
 			ring->end_time = end_time;
 
-			queue_configured[i] = true;
+			if (ring->start_time >= adapter->cycle_time)
+				queue_configured[i] = false;
+			else
+				queue_configured[i] = true;
 		}
 
 		start_time += e->interval;
···
 	 * If not, set the start and end time to be end time.
 	 */
 	for (i = 0; i < adapter->num_tx_queues; i++) {
+		struct igc_ring *ring = adapter->tx_ring[i];
+
+		if (!is_base_time_past(qopt->base_time, &now)) {
+			ring->admin_gate_closed = false;
+		} else {
+			ring->oper_gate_closed = false;
+			ring->admin_gate_closed = false;
+		}
+
 		if (!queue_configured[i]) {
-			struct igc_ring *ring = adapter->tx_ring[i];
+			if (!is_base_time_past(qopt->base_time, &now))
+				ring->admin_gate_closed = true;
+			else
+				ring->oper_gate_closed = true;
 
 			ring->start_time = end_time;
 			ring->end_time = end_time;
···
 		struct net_device *dev = adapter->netdev;
 
 		if (qopt->max_sdu[i])
-			ring->max_sdu = qopt->max_sdu[i] + dev->hard_header_len;
+			ring->max_sdu = qopt->max_sdu[i] + dev->hard_header_len - ETH_TLEN;
 		else
 			ring->max_sdu = 0;
 	}
···
 			 void *type_data)
 {
 	struct igc_adapter *adapter = netdev_priv(dev);
+
+	adapter->tc_setup_type = type;
 
 	switch (type) {
 	case TC_QUERY_CAPS:
···
 	.xmo_rx_timestamp		= igc_xdp_rx_timestamp,
 };
 
+static enum hrtimer_restart igc_qbv_scheduling_timer(struct hrtimer *timer)
+{
+	struct igc_adapter *adapter = container_of(timer, struct igc_adapter,
+						   hrtimer);
+	unsigned int i;
+
+	adapter->qbv_transition = true;
+	for (i = 0; i < adapter->num_tx_queues; i++) {
+		struct igc_ring *tx_ring = adapter->tx_ring[i];
+
+		if (tx_ring->admin_gate_closed) {
+			tx_ring->admin_gate_closed = false;
+			tx_ring->oper_gate_closed = true;
+		} else {
+			tx_ring->oper_gate_closed = false;
+		}
+	}
+	adapter->qbv_transition = false;
+	return HRTIMER_NORESTART;
+}
+
 /**
  * igc_probe - Device Initialization Routine
  * @pdev: PCI device information struct
···
 	INIT_WORK(&adapter->reset_task, igc_reset_task);
 	INIT_WORK(&adapter->watchdog_task, igc_watchdog_task);
 
+	hrtimer_init(&adapter->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	adapter->hrtimer.function = &igc_qbv_scheduling_timer;
+
 	/* Initialize link properties that are user-changeable */
 	adapter->fc_autoneg = true;
 	hw->mac.autoneg = true;
···
 
 	cancel_work_sync(&adapter->reset_task);
 	cancel_work_sync(&adapter->watchdog_task);
+	hrtimer_cancel(&adapter->hrtimer);
 
 	/* Release control of h/w to f/w. If f/w is AMT enabled, this
 	 * would have already happened in close and is redundant.
+22 -3
drivers/net/ethernet/intel/igc/igc_ptp.c
···
 		tsim &= ~IGC_TSICR_TT0;
 	}
 	if (on) {
+		struct timespec64 safe_start;
 		int i = rq->perout.index;
 
 		igc_pin_perout(igc, i, pin, use_freq);
-		igc->perout[i].start.tv_sec = rq->perout.start.sec;
+		igc_ptp_read(igc, &safe_start);
+
+		/* PPS output start time is triggered by Target time(TT)
+		 * register. Programming any past time value into TT
+		 * register will cause PPS to never start. Need to make
+		 * sure we program the TT register a time ahead in
+		 * future. There isn't a stringent need to fire PPS out
+		 * right away. Adding +2 seconds should take care of
+		 * corner cases. Let's say if the SYSTIML is close to
+		 * wrap up and the timer keeps ticking as we program the
+		 * register, adding +2seconds is safe bet.
+		 */
+		safe_start.tv_sec += 2;
+
+		if (rq->perout.start.sec < safe_start.tv_sec)
+			igc->perout[i].start.tv_sec = safe_start.tv_sec;
+		else
+			igc->perout[i].start.tv_sec = rq->perout.start.sec;
 		igc->perout[i].start.tv_nsec = rq->perout.start.nsec;
 		igc->perout[i].period.tv_sec = ts.tv_sec;
 		igc->perout[i].period.tv_nsec = ts.tv_nsec;
-		wr32(trgttimh, rq->perout.start.sec);
+		wr32(trgttimh, (u32)igc->perout[i].start.tv_sec);
 		/* For now, always select timer 0 as source. */
-		wr32(trgttiml, rq->perout.start.nsec | IGC_TT_IO_TIMER_SEL_SYSTIM0);
+		wr32(trgttiml, (u32)(igc->perout[i].start.tv_nsec |
+				     IGC_TT_IO_TIMER_SEL_SYSTIM0));
 		if (use_freq)
 			wr32(freqout, ns);
 		tsauxc |= tsauxc_mask;
+51 -17
drivers/net/ethernet/intel/igc/igc_tsn.c
···
 {
 	unsigned int new_flags = adapter->flags & ~IGC_FLAG_TSN_ANY_ENABLED;
 
-	if (adapter->qbv_enable)
+	if (adapter->taprio_offload_enable)
 		new_flags |= IGC_FLAG_TSN_QBV_ENABLED;
 
 	if (is_any_launchtime(adapter))
···
 static int igc_tsn_enable_offload(struct igc_adapter *adapter)
 {
 	struct igc_hw *hw = &adapter->hw;
-	bool tsn_mode_reconfig = false;
 	u32 tqavctrl, baset_l, baset_h;
 	u32 sec, nsec, cycle;
 	ktime_t base_time, systim;
···
 		wr32(IGC_STQT(i), ring->start_time);
 		wr32(IGC_ENDQT(i), ring->end_time);
 
-		txqctl |= IGC_TXQCTL_STRICT_CYCLE |
-			IGC_TXQCTL_STRICT_END;
+		if (adapter->taprio_offload_enable) {
+			/* If taprio_offload_enable is set we are in "taprio"
+			 * mode and we need to be strict about the
+			 * cycles: only transmit a packet if it can be
+			 * completed during that cycle.
+			 *
+			 * If taprio_offload_enable is NOT true when
+			 * enabling TSN offload, the cycle should have
+			 * no external effects, but is only used internally
+			 * to adapt the base time register after a second
+			 * has passed.
+			 *
+			 * Enabling strict mode in this case would
+			 * unnecessarily prevent the transmission of
+			 * certain packets (i.e. at the boundary of a
+			 * second) and thus interfere with the launchtime
+			 * feature that promises transmission at a
+			 * certain point in time.
+			 */
+			txqctl |= IGC_TXQCTL_STRICT_CYCLE |
+				IGC_TXQCTL_STRICT_END;
+		}
 
 		if (ring->launchtime_enable)
 			txqctl |= IGC_TXQCTL_QUEUE_MODE_LAUNCHT;
···
 
 	tqavctrl = rd32(IGC_TQAVCTRL) & ~IGC_TQAVCTRL_FUTSCDDIS;
 
-	if (tqavctrl & IGC_TQAVCTRL_TRANSMIT_MODE_TSN)
-		tsn_mode_reconfig = true;
-
 	tqavctrl |= IGC_TQAVCTRL_TRANSMIT_MODE_TSN | IGC_TQAVCTRL_ENHANCED_QAV;
+
+	adapter->qbv_count++;
 
 	cycle = adapter->cycle_time;
 	base_time = adapter->base_time;
···
 		 * Gate Control List (GCL) is running.
 		 */
 		if ((rd32(IGC_BASET_H) || rd32(IGC_BASET_L)) &&
-		    tsn_mode_reconfig)
+		    (adapter->tc_setup_type == TC_SETUP_QDISC_TAPRIO) &&
+		    (adapter->qbv_count > 1))
 			adapter->qbv_config_change_errors++;
 	} else {
-		/* According to datasheet section 7.5.2.9.3.3, FutScdDis bit
-		 * has to be configured before the cycle time and base time.
-		 * Tx won't hang if there is a GCL is already running,
-		 * so in this case we don't need to set FutScdDis.
-		 */
-		if (igc_is_device_id_i226(hw) &&
-		    !(rd32(IGC_BASET_H) || rd32(IGC_BASET_L)))
-			tqavctrl |= IGC_TQAVCTRL_FUTSCDDIS;
+		if (igc_is_device_id_i226(hw)) {
+			ktime_t adjust_time, expires_time;
+
+			/* According to datasheet section 7.5.2.9.3.3, FutScdDis bit
+			 * has to be configured before the cycle time and base time.
+			 * Tx won't hang if a GCL is already running,
+			 * so in this case we don't need to set FutScdDis.
+			 */
+			if (!(rd32(IGC_BASET_H) || rd32(IGC_BASET_L)))
+				tqavctrl |= IGC_TQAVCTRL_FUTSCDDIS;
+
+			nsec = rd32(IGC_SYSTIML);
+			sec = rd32(IGC_SYSTIMH);
+			systim = ktime_set(sec, nsec);
+
+			adjust_time = adapter->base_time;
+			expires_time = ktime_sub_ns(adjust_time, systim);
+			hrtimer_start(&adapter->hrtimer, expires_time, HRTIMER_MODE_REL);
+		}
 	}
 
 	wr32(IGC_TQAVCTRL, tqavctrl);
···
 {
 	struct igc_hw *hw = &adapter->hw;
 
-	if (netif_running(adapter->netdev) && igc_is_device_id_i225(hw)) {
+	/* Per I225/6 HW Design Section 7.5.2.1, transmit mode
+	 * cannot be changed dynamically. Require reset the adapter.
+	 */
+	if (netif_running(adapter->netdev) &&
+	    (igc_is_device_id_i225(hw) || !adapter->qbv_count)) {
 		schedule_work(&adapter->reset_task);
 		return 0;
 	}
···
 	/* Check driver is bound to PTP block */
 	if (!ptp)
 		ptp = ERR_PTR(-EPROBE_DEFER);
-	else
+	else if (!IS_ERR(ptp))
 		pci_dev_get(ptp->pdev);
 
 	return ptp;
···
 static int ptp_probe(struct pci_dev *pdev,
 		     const struct pci_device_id *ent)
 {
-	struct device *dev = &pdev->dev;
 	struct ptp *ptp;
 	int err;
 
-	ptp = devm_kzalloc(dev, sizeof(*ptp), GFP_KERNEL);
+	ptp = kzalloc(sizeof(*ptp), GFP_KERNEL);
 	if (!ptp) {
 		err = -ENOMEM;
 		goto error;
···
 	return 0;
 
 error_free:
-	devm_kfree(dev, ptp);
+	kfree(ptp);
 
 error:
 	/* For `ptp_get()` we need to differentiate between the case
 	 * when the core has not tried to probe this device and the case when
-	 * the probe failed. In the later case we pretend that the
-	 * initialization was successful and keep the error in
+	 * the probe failed. In the later case we keep the error in
 	 * `dev->driver_data`.
 	 */
 	pci_set_drvdata(pdev, ERR_PTR(err));
 	if (!first_ptp_block)
 		first_ptp_block = ERR_PTR(err);
 
-	return 0;
+	return err;
 }
 
 static void ptp_remove(struct pci_dev *pdev)
···
 	struct ptp *ptp = pci_get_drvdata(pdev);
 	u64 clock_cfg;
 
-	if (cn10k_ptp_errata(ptp) && hrtimer_active(&ptp->hrtimer))
-		hrtimer_cancel(&ptp->hrtimer);
-
 	if (IS_ERR_OR_NULL(ptp))
 		return;
+
+	if (cn10k_ptp_errata(ptp) && hrtimer_active(&ptp->hrtimer))
+		hrtimer_cancel(&ptp->hrtimer);
 
 	/* Disable PTP clock */
 	clock_cfg = readq(ptp->reg_base + PTP_CLOCK_CFG);
 	clock_cfg &= ~PTP_CLOCK_CFG_PTP_EN;
 	writeq(clock_cfg, ptp->reg_base + PTP_CLOCK_CFG);
+	kfree(ptp);
 }
 
 static const struct pci_device_id ptp_id_table[] = {
···
 
 	attr->ct_attr.ct_action |= act->ct.action; /* So we can have clear + ct */
 	attr->ct_attr.zone = act->ct.zone;
-	attr->ct_attr.nf_ft = act->ct.flow_table;
+	if (!(act->ct.action & TCA_CT_ACT_CLEAR))
+		attr->ct_attr.nf_ft = act->ct.flow_table;
 	attr->ct_attr.act_miss_cookie = act->miss_cookie;
 
 	return 0;
···
 	if (!priv)
 		return -EOPNOTSUPP;
 
+	if (attr->ct_attr.offloaded)
+		return 0;
+
 	if (attr->ct_attr.ct_action & TCA_CT_ACT_CLEAR) {
 		err = mlx5_tc_ct_entry_set_registers(priv, &attr->parse_attr->mod_hdr_acts,
 						     0, 0, 0, 0);
···
 		attr->action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR;
 	}
 
-	if (!attr->ct_attr.nf_ft) /* means only ct clear action, and not ct_clear,ct() */
+	if (!attr->ct_attr.nf_ft) { /* means only ct clear action, and not ct_clear,ct() */
+		attr->ct_attr.offloaded = true;
 		return 0;
+	}
 
 	mutex_lock(&priv->control_lock);
 	err = __mlx5_tc_ct_flow_offload(priv, attr);
+	if (!err)
+		attr->ct_attr.offloaded = true;
 	mutex_unlock(&priv->control_lock);
 
 	return err;
···
 mlx5_tc_ct_delete_flow(struct mlx5_tc_ct_priv *priv,
 		       struct mlx5_flow_attr *attr)
 {
-	if (!attr->ct_attr.ft) /* no ct action, return */
+	if (!attr->ct_attr.offloaded) /* no ct action, return */
 		return;
 	if (!attr->ct_attr.nf_ft) /* means only ct clear action, and not ct_clear,ct() */
 		return;
···
 			/* No need to check ((page->pp_magic & ~0x3UL) == PP_SIGNATURE)
 			 * as we know this is a page_pool page.
 			 */
-			page_pool_put_defragged_page(page->pp,
-						     page, -1, true);
+			page_pool_recycle_direct(page->pp, page);
 		} while (++n < num);
 
 		break;
···
 	in = kvzalloc(inlen, GFP_KERNEL);
 	if (!in || !ft->g) {
 		kfree(ft->g);
+		ft->g = NULL;
 		kvfree(in);
 		return -ENOMEM;
 	}
+22 -22
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
···
 {
 	struct mlx5e_wqe_frag_info *wi = get_frag(rq, ix);
 
-	if (rq->xsk_pool)
+	if (rq->xsk_pool) {
 		mlx5e_xsk_free_rx_wqe(wi);
-	else
+	} else {
 		mlx5e_free_rx_wqe(rq, wi);
+
+		/* Avoid a second release of the wqe pages: dealloc is called
+		 * for the same missing wqes on regular RQ flush and on regular
+		 * RQ close. This happens when XSK RQs come into play.
+		 */
+		for (int i = 0; i < rq->wqe.info.num_frags; i++, wi++)
+			wi->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
+	}
 }
 
 static void mlx5e_xsk_free_rx_wqes(struct mlx5e_rq *rq, u16 ix, int wqe_bulk)
···
 
 	prog = rcu_dereference(rq->xdp_prog);
 	if (prog && mlx5e_xdp_handle(rq, prog, &mxbuf)) {
-		if (test_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
+		if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
 			struct mlx5e_wqe_frag_info *pwi;
 
 			for (pwi = head_wi; pwi < wi; pwi++)
-				pwi->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
+				pwi->frag_page->frags++;
 		}
 		return NULL; /* page/packet was consumed by XDP */
 	}
···
 			      rq, wi, cqe, cqe_bcnt);
 	if (!skb) {
 		/* probably for XDP */
-		if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
-			/* do not return page to cache,
-			 * it will be returned on XDP_TX completion.
-			 */
-			wi->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
-		}
+		if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
+			wi->frag_page->frags++;
 		goto wq_cyc_pop;
 	}
···
 			      rq, wi, cqe, cqe_bcnt);
 	if (!skb) {
 		/* probably for XDP */
-		if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
-			/* do not return page to cache,
-			 * it will be returned on XDP_TX completion.
-			 */
-			wi->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
-		}
+		if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
+			wi->frag_page->frags++;
 		goto wq_cyc_pop;
 	}
···
 	if (prog) {
 		if (mlx5e_xdp_handle(rq, prog, &mxbuf)) {
 			if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
-				int i;
+				struct mlx5e_frag_page *pfp;
 
-				for (i = 0; i < sinfo->nr_frags; i++)
-					/* non-atomic */
-					__set_bit(page_idx + i, wi->skip_release_bitmap);
-				return NULL;
+				for (pfp = head_page; pfp < frag_page; pfp++)
+					pfp->frags++;
+
+				wi->linear_page.frags++;
 			}
 			mlx5e_page_release_fragmented(rq, &wi->linear_page);
 			return NULL; /* page/packet was consumed by XDP */
···
 					 cqe_bcnt, &mxbuf);
 		if (mlx5e_xdp_handle(rq, prog, &mxbuf)) {
 			if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
-				__set_bit(page_idx, wi->skip_release_bitmap); /* non-atomic */
+				frag_page->frags++;
 			return NULL; /* page/packet was consumed by XDP */
 		}
+3 -3
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
···
 	uplink_priv = &rpriv->uplink_priv;
 
 	mutex_lock(&uplink_priv->unready_flows_lock);
-	unready_flow_del(flow);
+	if (flow_flag_test(flow, NOT_READY))
+		unready_flow_del(flow);
 	mutex_unlock(&uplink_priv->unready_flows_lock);
 }
 
···
 	esw_attr = attr->esw_attr;
 	mlx5e_put_flow_tunnel_id(flow);
 
-	if (flow_flag_test(flow, NOT_READY))
-		remove_unready_flow(flow);
+	remove_unready_flow(flow);
 
 	if (mlx5e_is_offloaded_flow(flow)) {
 		if (flow_flag_test(flow, SLOW))
···
 	val = mm->preemptible_tcs;
 
 	/* Cut through switching doesn't work for preemptible priorities,
-	 * so first make sure it is disabled.
+	 * so first make sure it is disabled. Also, changing the preemptible
+	 * TCs affects the oversized frame dropping logic, so that needs to be
+	 * re-triggered. And since tas_guard_bands_update() also implicitly
+	 * calls cut_through_fwd(), we don't need to explicitly call it.
 	 */
 	mm->active_preemptible_tcs = val;
-	ocelot->ops->cut_through_fwd(ocelot);
+	ocelot->ops->tas_guard_bands_update(ocelot, port);
 
 	dev_dbg(ocelot->dev,
 		"port %d %s/%s, MM TX %s, preemptible TCs 0x%x, active 0x%x\n",
···
 {
 	struct ocelot_mm_state *mm = &ocelot->mm[port];
 
-	mutex_lock(&ocelot->fwd_domain_lock);
+	lockdep_assert_held(&ocelot->fwd_domain_lock);
 
 	if (mm->preemptible_tcs == preemptible_tcs)
-		goto out_unlock;
+		return;
 
 	mm->preemptible_tcs = preemptible_tcs;
 
 	ocelot_port_update_active_preemptible_tcs(ocelot, port);
-
-out_unlock:
-	mutex_unlock(&ocelot->fwd_domain_lock);
 }
 
 static void ocelot_mm_update_port_status(struct ocelot *ocelot, int port)
···
 	ionic_reset(ionic);
 err_out_teardown:
 	ionic_dev_teardown(ionic);
-	pci_clear_master(pdev);
-	/* Don't fail the probe for these errors, keep
-	 * the hw interface around for inspection
-	 */
-	return 0;
-
 err_out_unmap_bars:
 	ionic_unmap_bars(ionic);
 err_out_pci_release_regions:
···
 	struct iw_param *vwrq = &wrqu->bitrate;
 	struct airo_info *local = dev->ml_priv;
 	StatusRid status_rid;		/* Card status info */
+	int ret;
 
-	readStatusRid(local, &status_rid, 1);
+	ret = readStatusRid(local, &status_rid, 1);
+	if (ret)
+		return -EBUSY;
 
 	vwrq->value = le16_to_cpu(status_rid.currentXmitRate) * 500000;
 	/* If more than one rate, set auto */
···
  * @xtal_latency: power up latency to get the xtal stabilized
  * @extra_phy_cfg_flags: extra configuration flags to pass to the PHY
  * @rf_id: need to read rf_id to determine the firmware image
- * @use_tfh: use TFH
  * @gen2: 22000 and on transport operation
  * @mq_rx_supported: multi-queue rx support
  * @integrated: discrete or integrated
···
 	u32 xtal_latency;
 	u32 extra_phy_cfg_flags;
 	u32 rf_id:1,
-	    use_tfh:1,
 	    gen2:1,
 	    mq_rx_supported:1,
 	    integrated:1,
+2 -2
drivers/net/wireless/intel/iwlwifi/iwl-fh.h
···
 /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
 /*
- * Copyright (C) 2005-2014, 2018-2021 Intel Corporation
+ * Copyright (C) 2005-2014, 2018-2021, 2023 Intel Corporation
  * Copyright (C) 2015-2017 Intel Deutschland GmbH
  */
 #ifndef __iwl_fh_h__
···
 static inline unsigned int FH_MEM_CBBC_QUEUE(struct iwl_trans *trans,
 					     unsigned int chnl)
 {
-	if (trans->trans_cfg->use_tfh) {
+	if (trans->trans_cfg->gen2) {
 		WARN_ON_ONCE(chnl >= 64);
 		return TFH_TFDQ_CBB_TABLE + 8 * chnl;
 	}
+3 -3
drivers/net/wireless/intel/iwlwifi/iwl-trans.c
···
 /*
  * Copyright (C) 2015 Intel Mobile Communications GmbH
  * Copyright (C) 2016-2017 Intel Deutschland GmbH
- * Copyright (C) 2019-2021 Intel Corporation
+ * Copyright (C) 2019-2021, 2023 Intel Corporation
  */
 #include <linux/kernel.h>
 #include <linux/bsearch.h>
···
 
 	WARN_ON(!ops->wait_txq_empty && !ops->wait_tx_queues_empty);
 
-	if (trans->trans_cfg->use_tfh) {
+	if (trans->trans_cfg->gen2) {
 		trans->txqs.tfd.addr_size = 64;
 		trans->txqs.tfd.max_tbs = IWL_TFH_NUM_TBS;
 		trans->txqs.tfd.size = sizeof(struct iwl_tfh_tfd);
···
 
 	/* Some things must not change even if the config does */
 	WARN_ON(trans->txqs.tfd.addr_size !=
-		(trans->trans_cfg->use_tfh ? 64 : 36));
+		(trans->trans_cfg->gen2 ? 64 : 36));
 
 	snprintf(trans->dev_cmd_pool_name, sizeof(trans->dev_cmd_pool_name),
 		 "iwl_cmd_pool:%s", dev_name(trans->dev));
+1 -1
drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
···
 static inline bool iwl_mvm_has_new_tx_api(struct iwl_mvm *mvm)
 {
 	/* TODO - replace with TLV once defined */
-	return mvm->trans->trans_cfg->use_tfh;
+	return mvm->trans->trans_cfg->gen2;
 }
 
 static inline bool iwl_mvm_has_unified_ucode(struct iwl_mvm *mvm)
+2 -2
drivers/net/wireless/intel/iwlwifi/pcie/trans.c
···
 
 	iwl_enable_interrupts(trans);
 
-	if (trans->trans_cfg->use_tfh) {
+	if (trans->trans_cfg->gen2) {
 		if (cpu == 1)
 			iwl_write_prph(trans, UREG_UCODE_LOAD_STATUS,
 				       0xFFFF);
···
 		u8 tfdidx;
 		u32 caplen, cmdlen;
 
-		if (trans->trans_cfg->use_tfh)
+		if (trans->trans_cfg->gen2)
 			tfdidx = idx;
 		else
 			tfdidx = ptr;
+1 -1
drivers/net/wireless/intel/iwlwifi/pcie/tx.c
···
 	for (txq_id = 0; txq_id < trans->trans_cfg->base_params->num_of_queues;
 	     txq_id++) {
 		struct iwl_txq *txq = trans->txqs.txq[txq_id];
-		if (trans->trans_cfg->use_tfh)
+		if (trans->trans_cfg->gen2)
 			iwl_write_direct64(trans,
 					   FH_MEM_CBBC_QUEUE(trans, txq_id),
 					   txq->dma_addr);
+5 -5
drivers/net/wireless/intel/iwlwifi/queue/tx.c
···
 	bool active;
 	u8 fifo;
 
-	if (trans->trans_cfg->use_tfh) {
+	if (trans->trans_cfg->gen2) {
 		IWL_ERR(trans, "Queue %d is stuck %d %d\n", txq_id,
 			txq->read_ptr, txq->write_ptr);
 		/* TODO: access new SCD registers and dump them */
···
 	if (WARN_ON(txq->entries || txq->tfds))
 		return -EINVAL;
 
-	if (trans->trans_cfg->use_tfh)
+	if (trans->trans_cfg->gen2)
 		tfd_sz = trans->txqs.tfd.size * slots_num;
 
 	timer_setup(&txq->stuck_timer, iwl_txq_stuck_timer, 0);
···
 	dma_addr_t addr;
 	dma_addr_t hi_len;
 
-	if (trans->trans_cfg->use_tfh) {
+	if (trans->trans_cfg->gen2) {
 		struct iwl_tfh_tfd *tfh_tfd = _tfd;
 		struct iwl_tfh_tb *tfh_tb = &tfh_tfd->tbs[idx];
 
···
 
 	meta->tbs = 0;
 
-	if (trans->trans_cfg->use_tfh) {
+	if (trans->trans_cfg->gen2) {
 		struct iwl_tfh_tfd *tfd_fh = (void *)tfd;
 
 		tfd_fh->num_tbs = 0;
···
 
 	txq->entries[read_ptr].skb = NULL;
 
-	if (!trans->trans_cfg->use_tfh)
+	if (!trans->trans_cfg->gen2)
 		iwl_txq_gen1_inval_byte_cnt_tbl(trans, txq);
 
 	iwl_txq_free_tfd(trans, txq);
···
 
 	ret = nvme_global_check_duplicate_ids(ctrl->subsys, &info->ids);
 	if (ret) {
-		dev_err(ctrl->device,
-			"globally duplicate IDs for nsid %d\n", info->nsid);
+		/*
+		 * We've found two different namespaces on two different
+		 * subsystems that report the same ID.  This is pretty nasty
+		 * for anything that actually requires unique device
+		 * identification.  In the kernel we need this for multipathing,
+		 * and in user space the /dev/disk/by-id/ links rely on it.
+		 *
+		 * If the device also claims to be multi-path capable back off
+		 * here now and refuse the probe the second device as this is a
+		 * recipe for data corruption.  If not this is probably a
+		 * cheap consumer device if on the PCIe bus, so let the user
+		 * proceed and use the shiny toy, but warn that with changing
+		 * probing order (which due to our async probing could just be
+		 * device taking longer to startup) the other device could show
+		 * up at any time.
+		 */
 		nvme_print_device_info(ctrl);
-		return ret;
+		if ((ns->ctrl->ops->flags & NVME_F_FABRICS) || /* !PCIe */
+		    ((ns->ctrl->subsys->cmic & NVME_CTRL_CMIC_MULTI_CTRL) &&
+		     info->is_shared)) {
+			dev_err(ctrl->device,
+				"ignoring nsid %d because of duplicate IDs\n",
+				info->nsid);
+			return ret;
+		}
+
+		dev_err(ctrl->device,
+			"clearing duplicate IDs for nsid %d\n", info->nsid);
+		dev_err(ctrl->device,
+			"use of /dev/disk/by-id/ may cause data corruption\n");
+		memset(&info->ids.nguid, 0, sizeof(info->ids.nguid));
+		memset(&info->ids.uuid, 0, sizeof(info->ids.uuid));
+		memset(&info->ids.eui64, 0, sizeof(info->ids.eui64));
+		ctrl->quirks |= NVME_QUIRK_BOGUS_NID;
 	}
 
 	mutex_lock(&ctrl->subsys->lock);
+1 -1
drivers/nvme/host/fault_inject.c
···
 
 	/* create debugfs directory and attribute */
 	parent = debugfs_create_dir(dev_name, NULL);
-	if (!parent) {
+	if (IS_ERR(parent)) {
 		pr_warn("%s: failed to create debugfs directory\n", dev_name);
 		return;
 	}
+30 -7
drivers/nvme/host/fc.c
···
 	 * the controller.  Abort any ios on the association and let the
 	 * create_association error path resolve things.
 	 */
-	if (ctrl->ctrl.state == NVME_CTRL_CONNECTING) {
-		__nvme_fc_abort_outstanding_ios(ctrl, true);
+	enum nvme_ctrl_state state;
+	unsigned long flags;
+
+	spin_lock_irqsave(&ctrl->lock, flags);
+	state = ctrl->ctrl.state;
+	if (state == NVME_CTRL_CONNECTING) {
 		set_bit(ASSOC_FAILED, &ctrl->flags);
+		spin_unlock_irqrestore(&ctrl->lock, flags);
+		__nvme_fc_abort_outstanding_ios(ctrl, true);
+		dev_warn(ctrl->ctrl.device,
+			 "NVME-FC{%d}: transport error during (re)connect\n",
+			 ctrl->cnum);
 		return;
 	}
+	spin_unlock_irqrestore(&ctrl->lock, flags);
 
 	/* Otherwise, only proceed if in LIVE state - e.g. on first error */
-	if (ctrl->ctrl.state != NVME_CTRL_LIVE)
+	if (state != NVME_CTRL_LIVE)
 		return;
 
 	dev_warn(ctrl->ctrl.device,
···
 	 */
 
 	ret = nvme_enable_ctrl(&ctrl->ctrl);
-	if (ret || test_bit(ASSOC_FAILED, &ctrl->flags))
+	if (!ret && test_bit(ASSOC_FAILED, &ctrl->flags))
+		ret = -EIO;
+	if (ret)
 		goto out_disconnect_admin_queue;
 
 	ctrl->ctrl.max_segments = ctrl->lport->ops->max_sgl_segments;
···
 	nvme_unquiesce_admin_queue(&ctrl->ctrl);
 
 	ret = nvme_init_ctrl_finish(&ctrl->ctrl, false);
-	if (ret || test_bit(ASSOC_FAILED, &ctrl->flags))
+	if (!ret && test_bit(ASSOC_FAILED, &ctrl->flags))
+		ret = -EIO;
+	if (ret)
 		goto out_disconnect_admin_queue;
 
 	/* sanity checks */
···
 		else
 			ret = nvme_fc_recreate_io_queues(ctrl);
 	}
-	if (ret || test_bit(ASSOC_FAILED, &ctrl->flags))
-		goto out_term_aen_ops;
 
+	spin_lock_irqsave(&ctrl->lock, flags);
+	if (!ret && test_bit(ASSOC_FAILED, &ctrl->flags))
+		ret = -EIO;
+	if (ret) {
+		spin_unlock_irqrestore(&ctrl->lock, flags);
+		goto out_term_aen_ops;
+	}
 	changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE);
+	spin_unlock_irqrestore(&ctrl->lock, flags);
 
 	ctrl->ctrl.nr_reconnects = 0;
 
···
 out_term_aen_ops:
 	nvme_fc_term_aen_ops(ctrl);
 out_disconnect_admin_queue:
+	dev_warn(ctrl->ctrl.device,
+		 "NVME-FC{%d}: create_assoc failed, assoc_id %llx ret %d\n",
+		 ctrl->cnum, ctrl->association_id, ret);
 	/* send a Disconnect(association) LS to fc-nvme target */
 	nvme_fc_xmt_disconnect_assoc(ctrl);
 	spin_lock_irqsave(&ctrl->lock, flags);
+20 -9
drivers/nvme/host/pci.c
···
 		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
 
 		dma_unmap_page(dev->dev, iod->meta_dma,
-			       rq_integrity_vec(req)->bv_len, rq_data_dir(req));
+			       rq_integrity_vec(req)->bv_len, rq_dma_dir(req));
 	}
 
 	if (blk_rq_nr_phys_segments(req))
···
 	 */
 	if (nvme_should_reset(dev, csts)) {
 		nvme_warn_reset(dev, csts);
-		nvme_dev_disable(dev, false);
-		nvme_reset_ctrl(&dev->ctrl);
-		return BLK_EH_DONE;
+		goto disable;
 	}
 
 	/*
···
 			 "I/O %d QID %d timeout, reset controller\n",
 			 req->tag, nvmeq->qid);
 		nvme_req(req)->flags |= NVME_REQ_CANCELLED;
-		nvme_dev_disable(dev, false);
-		nvme_reset_ctrl(&dev->ctrl);
-
-		return BLK_EH_DONE;
+		goto disable;
 	}
 
 	if (atomic_dec_return(&dev->ctrl.abort_limit) < 0) {
···
 	 * as the device then is in a faulty state.
 	 */
 	return BLK_EH_RESET_TIMER;
+
+disable:
+	if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_RESETTING))
+		return BLK_EH_DONE;
+
+	nvme_dev_disable(dev, false);
+	if (nvme_try_sched_reset(&dev->ctrl))
+		nvme_unquiesce_io_queues(&dev->ctrl);
+	return BLK_EH_DONE;
 }
 
 static void nvme_free_queue(struct nvme_queue *nvmeq)
···
 	case pci_channel_io_frozen:
 		dev_warn(dev->ctrl.device,
 			 "frozen state error detected, reset controller\n");
+		if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_RESETTING)) {
+			nvme_dev_disable(dev, true);
+			return PCI_ERS_RESULT_DISCONNECT;
+		}
 		nvme_dev_disable(dev, false);
 		return PCI_ERS_RESULT_NEED_RESET;
 	case pci_channel_io_perm_failure:
···
 
 	dev_info(dev->ctrl.device, "restart after slot reset\n");
 	pci_restore_state(pdev);
-	nvme_reset_ctrl(&dev->ctrl);
+	if (!nvme_try_sched_reset(&dev->ctrl))
+		nvme_unquiesce_io_queues(&dev->ctrl);
 	return PCI_ERS_RESULT_RECOVERED;
 }
···
 		.driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
 	{ PCI_DEVICE(0x144d, 0xa809),   /* Samsung MZALQ256HBJD 256G */
 		.driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+	{ PCI_DEVICE(0x144d, 0xa802),   /* Samsung SM953 */
+		.driver_data = NVME_QUIRK_BOGUS_NID, },
 	{ PCI_DEVICE(0x1cc4, 0x6303),   /* UMIS RPJTJ512MGE1QDY 512G */
 		.driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
 	{ PCI_DEVICE(0x1cc4, 0x6302),   /* UMIS RPJTJ256MGE1QDY 256G */
+1 -1
drivers/nvme/host/sysfs.c
···
 	 * we have no UUID set
 	 */
 	if (uuid_is_null(&ids->uuid)) {
-		dev_warn_ratelimited(dev,
+		dev_warn_once(dev,
 			"No UUID available providing old NGUID\n");
 		return sysfs_emit(buf, "%pU\n", ids->nguid);
 	}
+4 -5
drivers/nvme/host/zns.c
···
 int nvme_revalidate_zones(struct nvme_ns *ns)
 {
 	struct request_queue *q = ns->queue;
-	int ret;
 
-	ret = blk_revalidate_disk_zones(ns->disk, NULL);
-	if (!ret)
-		blk_queue_max_zone_append_sectors(q, ns->ctrl->max_zone_append);
-	return ret;
+	blk_queue_chunk_sectors(q, ns->zsze);
+	blk_queue_max_zone_append_sectors(q, ns->ctrl->max_zone_append);
+
+	return blk_revalidate_disk_zones(ns->disk, NULL);
 }
 
 static int nvme_set_max_append(struct nvme_ctrl *ctrl)
···
 	 * which depends on the host's memory fragementation. To solve this,
 	 * ensure mdts is limited to the pages equal to the number of segments.
 	 */
-	max_hw_sectors = min_not_zero(pctrl->max_segments << (PAGE_SHIFT - 9),
+	max_hw_sectors = min_not_zero(pctrl->max_segments << PAGE_SECTORS_SHIFT,
 				      pctrl->max_hw_sectors);
 
 	/*
 	 * nvmet_passthru_map_sg is limitted to using a single bio so limit
 	 * the mdts based on BIO_MAX_VECS as well
 	 */
-	max_hw_sectors = min_not_zero(BIO_MAX_VECS << (PAGE_SHIFT - 9),
+	max_hw_sectors = min_not_zero(BIO_MAX_VECS << PAGE_SECTORS_SHIFT,
 				      max_hw_sectors);
 
 	page_shift = NVME_CAP_MPSMIN(ctrl->cap) + 12;
-3
drivers/perf/riscv_pmu.c
···
 	uint64_t max_period = riscv_pmu_ctr_get_width_mask(event);
 	u64 init_val;
 
-	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
-		return;
-
 	if (flags & PERF_EF_RELOAD)
 		WARN_ON_ONCE(!(event->hw.state & PERF_HES_UPTODATE));
+23 -38
drivers/pinctrl/pinctrl-amd.c
···
 	raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
 }
 
-static int amd_gpio_set_debounce(struct gpio_chip *gc, unsigned offset,
-		unsigned debounce)
+static int amd_gpio_set_debounce(struct amd_gpio *gpio_dev, unsigned int offset,
+		unsigned int debounce)
 {
 	u32 time;
 	u32 pin_reg;
 	int ret = 0;
-	unsigned long flags;
-	struct amd_gpio *gpio_dev = gpiochip_get_data(gc);
-
-	raw_spin_lock_irqsave(&gpio_dev->lock, flags);
 
 	/* Use special handling for Pin0 debounce */
-	pin_reg = readl(gpio_dev->base + WAKE_INT_MASTER_REG);
-	if (pin_reg & INTERNAL_GPIO0_DEBOUNCE)
-		debounce = 0;
+	if (offset == 0) {
+		pin_reg = readl(gpio_dev->base + WAKE_INT_MASTER_REG);
+		if (pin_reg & INTERNAL_GPIO0_DEBOUNCE)
+			debounce = 0;
+	}
 
 	pin_reg = readl(gpio_dev->base + offset * 4);
···
 		pin_reg &= ~(DB_CNTRl_MASK << DB_CNTRL_OFF);
 	}
 	writel(pin_reg, gpio_dev->base + offset * 4);
-	raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
 
 	return ret;
-}
-
-static int amd_gpio_set_config(struct gpio_chip *gc, unsigned offset,
-			       unsigned long config)
-{
-	u32 debounce;
-
-	if (pinconf_to_config_param(config) != PIN_CONFIG_INPUT_DEBOUNCE)
-		return -ENOTSUPP;
-
-	debounce = pinconf_to_config_argument(config);
-	return amd_gpio_set_debounce(gc, offset, debounce);
 }
 
 #ifdef CONFIG_DEBUG_FS
···
 	char *pin_sts;
 	char *interrupt_sts;
 	char *wake_sts;
-	char *pull_up_sel;
 	char *orientation;
 	char debounce_value[40];
 	char *debounce_enable;
···
 		seq_printf(s, " %s|", wake_sts);
 
 		if (pin_reg & BIT(PULL_UP_ENABLE_OFF)) {
-			if (pin_reg & BIT(PULL_UP_SEL_OFF))
-				pull_up_sel = "8k";
-			else
-				pull_up_sel = "4k";
-			seq_printf(s, "%s ↑|",
-				   pull_up_sel);
+			seq_puts(s, "  ↑  |");
 		} else if (pin_reg & BIT(PULL_DOWN_ENABLE_OFF)) {
-			seq_puts(s, "  ↓|");
+			seq_puts(s, "  ↓  |");
 		} else {
 			seq_puts(s, "     |");
 		}
···
 		break;
 
 	case PIN_CONFIG_BIAS_PULL_UP:
-		arg = (pin_reg >> PULL_UP_SEL_OFF) & (BIT(0) | BIT(1));
+		arg = (pin_reg >> PULL_UP_ENABLE_OFF) & BIT(0);
 		break;
 
 	case PIN_CONFIG_DRIVE_STRENGTH:
···
 }
 
 static int amd_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin,
-			   unsigned long *configs, unsigned num_configs)
+			   unsigned long *configs, unsigned int num_configs)
 {
 	int i;
 	u32 arg;
···
 
 		switch (param) {
 		case PIN_CONFIG_INPUT_DEBOUNCE:
-			pin_reg &= ~DB_TMR_OUT_MASK;
-			pin_reg |= arg & DB_TMR_OUT_MASK;
-			break;
+			ret = amd_gpio_set_debounce(gpio_dev, pin, arg);
+			goto out_unlock;
 
 		case PIN_CONFIG_BIAS_PULL_DOWN:
 			pin_reg &= ~BIT(PULL_DOWN_ENABLE_OFF);
···
 			break;
 
 		case PIN_CONFIG_BIAS_PULL_UP:
-			pin_reg &= ~BIT(PULL_UP_SEL_OFF);
-			pin_reg |= (arg & BIT(0)) << PULL_UP_SEL_OFF;
 			pin_reg &= ~BIT(PULL_UP_ENABLE_OFF);
-			pin_reg |= ((arg>>1) & BIT(0)) << PULL_UP_ENABLE_OFF;
+			pin_reg |= (arg & BIT(0)) << PULL_UP_ENABLE_OFF;
 			break;
 
 		case PIN_CONFIG_DRIVE_STRENGTH:
···
 
 		writel(pin_reg, gpio_dev->base + pin*4);
 	}
+out_unlock:
 	raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
 
 	return ret;
···
 			return -ENOTSUPP;
 	}
 	return 0;
+}
+
+static int amd_gpio_set_config(struct gpio_chip *gc, unsigned int pin,
+			       unsigned long config)
+{
+	struct amd_gpio *gpio_dev = gpiochip_get_data(gc);
+
+	return amd_pinconf_set(gpio_dev->pctrl, pin, &config, 1);
 }
 
 static const struct pinconf_ops amd_pinconf_ops = {
···
 	}
 
 	if (index < 2) {
-		ret = -ENODEV;
+		/* Finding no available sensors is not an error */
+		ret = 0;
 
 		goto err_release;
 	}
···
 
 	if (IS_REACHABLE(CONFIG_ACPI_BATTERY)) {
 		ret = dell_wmi_ddv_battery_add(data);
-		if (ret < 0 && ret != -ENODEV)
+		if (ret < 0)
 			dev_warn(&wdev->dev, "Unable to register ACPI battery hook: %d\n", ret);
 	}
 
 	if (IS_REACHABLE(CONFIG_HWMON)) {
 		ret = dell_wmi_ddv_hwmon_add(data);
-		if (ret < 0 && ret != -ENODEV)
+		if (ret < 0)
 			dev_warn(&wdev->dev, "Unable to register hwmon interface: %d\n", ret);
 	}
···
  * This DMI table contains the name of the second sensor. This is used to add
  * entries for the second sensor to the supply_map.
  */
-const struct dmi_system_id skl_int3472_regulator_second_sensor[] = {
+static const struct dmi_system_id skl_int3472_regulator_second_sensor[] = {
 	{
 		/* Lenovo Miix 510-12IKB */
 		.matches = {
+1-3
drivers/platform/x86/intel/tpmi.c
···
 	if (!pfs_start)
 		pfs_start = res_start;
 
-	pfs->pfs_header.cap_offset *= TPMI_CAP_OFFSET_UNIT;
-
-	pfs->vsec_offset = pfs_start + pfs->pfs_header.cap_offset;
+	pfs->vsec_offset = pfs_start + pfs->pfs_header.cap_offset * TPMI_CAP_OFFSET_UNIT;
 
 	/*
 	 * Process TPMI_INFO to get PCI device to CPU package ID.
···
 
 	if ((sdd->cur_mode & SPI_LOOP) && sdd->port_conf->has_loopback)
 		val |= S3C64XX_SPI_MODE_SELF_LOOPBACK;
+	else
+		val &= ~S3C64XX_SPI_MODE_SELF_LOOPBACK;
 
 	writel(val, regs + S3C64XX_SPI_MODE_CFG);
+38
drivers/ufs/core/ufshcd.c
···
 	return ret;
 }
 
+static void ufshcd_set_timestamp_attr(struct ufs_hba *hba)
+{
+	int err;
+	struct ufs_query_req *request = NULL;
+	struct ufs_query_res *response = NULL;
+	struct ufs_dev_info *dev_info = &hba->dev_info;
+	struct utp_upiu_query_v4_0 *upiu_data;
+
+	if (dev_info->wspecversion < 0x400)
+		return;
+
+	ufshcd_hold(hba);
+
+	mutex_lock(&hba->dev_cmd.lock);
+
+	ufshcd_init_query(hba, &request, &response,
+			  UPIU_QUERY_OPCODE_WRITE_ATTR,
+			  QUERY_ATTR_IDN_TIMESTAMP, 0, 0);
+
+	request->query_func = UPIU_QUERY_FUNC_STANDARD_WRITE_REQUEST;
+
+	upiu_data = (struct utp_upiu_query_v4_0 *)&request->upiu_req;
+
+	put_unaligned_be64(ktime_get_real_ns(), &upiu_data->osf3);
+
+	err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_QUERY, QUERY_REQ_TIMEOUT);
+
+	if (err)
+		dev_err(hba->dev, "%s: failed to set timestamp %d\n",
+			__func__, err);
+
+	mutex_unlock(&hba->dev_cmd.lock);
+	ufshcd_release(hba);
+}
+
 /**
  * ufshcd_add_lus - probe and add UFS logical units
  * @hba: per-adapter instance
···
 	/* UFS device is also active now */
 	ufshcd_set_ufs_dev_active(hba);
 	ufshcd_force_reset_auto_bkops(hba);
+
+	ufshcd_set_timestamp_attr(hba);
 
 	/* Gear up to HS gear if supported */
 	if (hba->max_pwr_info.is_valid) {
···
 		ret = ufshcd_set_dev_pwr_mode(hba, UFS_ACTIVE_PWR_MODE);
 		if (ret)
 			goto set_old_link_state;
+		ufshcd_set_timestamp_attr(hba);
 	}
 
 	if (ufshcd_keep_autobkops_enabled_except_suspend(hba))
+1
drivers/ufs/host/Kconfig
···
 config SCSI_UFS_MEDIATEK
 	tristate "Mediatek specific hooks to UFS controller platform driver"
 	depends on SCSI_UFSHCD_PLATFORM && ARCH_MEDIATEK
+	depends on RESET_CONTROLLER
 	select PHY_MTK_UFS
 	select RESET_TI_SYSCON
 	help
+2
drivers/xen/grant-dma-ops.c
···
 		while (!pci_is_root_bus(bus))
 			bus = bus->parent;
 
+		if (!bus->bridge->parent)
+			return NULL;
 		return of_node_get(bus->bridge->parent->of_node);
 	}
···
 	param_offset = offsetof(struct smb_com_transaction2_spi_req,
 				InformationLevel) - 4;
 	offset = param_offset + params;
-	parm_data = ((char *) &pSMB->hdr.Protocol) + offset;
+	parm_data = ((char *)pSMB) + sizeof(pSMB->hdr.smb_buf_length) + offset;
 	pSMB->ParameterOffset = cpu_to_le16(param_offset);
 
 	/* convert to on the wire format for POSIX ACL */
+23-7
fs/smb/client/connect.c
···
 #define TLINK_IDLE_EXPIRE	(600 * HZ)
 
 /* Drop the connection to not overload the server */
-#define NUM_STATUS_IO_TIMEOUT 5
+#define MAX_STATUS_IO_TIMEOUT 5
 
 static int ip_connect(struct TCP_Server_Info *server);
 static int generic_ip_connect(struct TCP_Server_Info *server);
···
 	struct mid_q_entry *mids[MAX_COMPOUND];
 	char *bufs[MAX_COMPOUND];
 	unsigned int noreclaim_flag, num_io_timeout = 0;
+	bool pending_reconnect = false;
 
 	noreclaim_flag = memalloc_noreclaim_save();
 	cifs_dbg(FYI, "Demultiplex PID: %d\n", task_pid_nr(current));
···
 		cifs_dbg(FYI, "RFC1002 header 0x%x\n", pdu_length);
 		if (!is_smb_response(server, buf[0]))
 			continue;
+
+		pending_reconnect = false;
 next_pdu:
 		server->pdu_size = pdu_length;
···
 		if (server->ops->is_status_io_timeout &&
 		    server->ops->is_status_io_timeout(buf)) {
 			num_io_timeout++;
-			if (num_io_timeout > NUM_STATUS_IO_TIMEOUT) {
-				cifs_reconnect(server, false);
+			if (num_io_timeout > MAX_STATUS_IO_TIMEOUT) {
+				cifs_server_dbg(VFS,
+						"Number of request timeouts exceeded %d. Reconnecting",
+						MAX_STATUS_IO_TIMEOUT);
+
+				pending_reconnect = true;
 				num_io_timeout = 0;
-				continue;
 			}
 		}
···
 			if (mids[i] != NULL) {
 				mids[i]->resp_buf_size = server->pdu_size;
 
-				if (bufs[i] && server->ops->is_network_name_deleted)
-					server->ops->is_network_name_deleted(bufs[i],
-									server);
+				if (bufs[i] != NULL) {
+					if (server->ops->is_network_name_deleted &&
+					    server->ops->is_network_name_deleted(bufs[i],
+										 server)) {
+						cifs_server_dbg(FYI,
+								"Share deleted. Reconnect needed");
+					}
+				}
 
 				if (!mids[i]->multiRsp || mids[i]->multiEnd)
 					mids[i]->callback(mids[i]);
···
 			buf = server->smallbuf;
 			goto next_pdu;
 		}
+
+		/* do this reconnect at the very end after processing all MIDs */
+		if (pending_reconnect)
+			cifs_reconnect(server, true);
+
 	} /* end while !EXITING */
 
 	/* buffer usually freed in free_mid - need to free it here on exit */
+10-16
fs/smb/client/dfs.c
···
 	return rc;
 }
 
+/*
+ * Track individual DFS referral servers used by new DFS mount.
+ *
+ * On success, their lifetime will be shared by final tcon (dfs_ses_list).
+ * Otherwise, they will be put by dfs_put_root_smb_sessions() in cifs_mount().
+ */
 static int add_root_smb_session(struct cifs_mount_ctx *mnt_ctx)
 {
 	struct smb3_fs_context *ctx = mnt_ctx->fs_ctx;
···
 		INIT_LIST_HEAD(&root_ses->list);
 
 		spin_lock(&cifs_tcp_ses_lock);
-		ses->ses_count++;
+		cifs_smb_ses_inc_refcount(ses);
 		spin_unlock(&cifs_tcp_ses_lock);
 		root_ses->ses = ses;
 		list_add_tail(&root_ses->list, &mnt_ctx->dfs_ses_list);
 	}
+	/* Select new DFS referral server so that new referrals go through it */
 	ctx->dfs_root_ses = ses;
 	return 0;
 }
···
 int dfs_mount_share(struct cifs_mount_ctx *mnt_ctx, bool *isdfs)
 {
 	struct smb3_fs_context *ctx = mnt_ctx->fs_ctx;
-	struct cifs_ses *ses;
 	bool nodfs = ctx->nodfs;
 	int rc;
···
 	}
 
 	*isdfs = true;
-	/*
-	 * Prevent DFS root session of being put in the first call to
-	 * cifs_mount_put_conns(). If another DFS root server was not found
-	 * while chasing the referrals (@ctx->dfs_root_ses == @ses), then we
-	 * can safely put extra refcount of @ses.
-	 */
-	ses = mnt_ctx->ses;
-	mnt_ctx->ses = NULL;
-	mnt_ctx->server = NULL;
-	rc = __dfs_mount_share(mnt_ctx);
-	if (ses == ctx->dfs_root_ses)
-		cifs_put_smb_ses(ses);
-
-	return rc;
+	add_root_smb_session(mnt_ctx);
+	return __dfs_mount_share(mnt_ctx);
 }
 
 /* Update dfs referral path of superblock */
···
 	 * keyslots while ensuring that they can't be changed concurrently.
 	 */
 	struct rw_semaphore lock;
+	struct lock_class_key lockdep_key;
 
 	/* List of idle slots, with least recently used slot at front */
 	wait_queue_head_t idle_slots_wait_queue;
+3-3
include/linux/blk-mq.h
···
 
 	/*
 	 * The rb_node is only used inside the io scheduler, requests
-	 * are pruned when moved to the dispatch queue. So let the
-	 * completion_data share space with the rb_node.
+	 * are pruned when moved to the dispatch queue. special_vec must
+	 * only be used if RQF_SPECIAL_PAYLOAD is set, and those cannot be
+	 * inserted into an IO scheduler.
 	 */
 	union {
 		struct rb_node rb_node;	/* sort/lookup */
 		struct bio_vec special_vec;
-		void *completion_data;
 	};
 
 	/*
···
 struct ftrace_regs;
 struct dyn_ftrace;
 
+char *arch_ftrace_match_adjust(char *str, const char *search);
+
+#ifdef CONFIG_HAVE_FUNCTION_GRAPH_RETVAL
+struct fgraph_ret_regs;
+unsigned long ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs);
+#else
+unsigned long ftrace_return_to_handler(unsigned long frame_pointer);
+#endif
+
 #ifdef CONFIG_FUNCTION_TRACER
 /*
  * If the arch's mcount caller does not support all of ftrace's
···
 		/* The protocol. */
 		u_int8_t protonum;
 
+		/* The direction must be ignored for the tuplehash */
+		struct { } __nfct_hash_offsetend;
+
 		/* The direction (for tuplehash) */
 		u_int8_t dir;
 	} dst;
+27-4
include/net/netfilter/nf_tables.h
···
 
 unsigned int nft_do_chain(struct nft_pktinfo *pkt, void *priv);
 
+static inline bool nft_use_inc(u32 *use)
+{
+	if (*use == UINT_MAX)
+		return false;
+
+	(*use)++;
+
+	return true;
+}
+
+static inline void nft_use_dec(u32 *use)
+{
+	WARN_ON_ONCE((*use)-- == 0);
+}
+
+/* For error and abort path: restore use counter to previous state. */
+static inline void nft_use_inc_restore(u32 *use)
+{
+	WARN_ON_ONCE(!nft_use_inc(use));
+}
+
+#define nft_use_dec_restore	nft_use_dec
+
 /**
  * struct nft_table - nf_tables table
  *
···
 	struct list_head list;
 	struct rhlist_head rhlhead;
 	struct nft_object_hash_key key;
-	u32 genmask:2,
-	    use:30;
+	u32 genmask:2;
+	u32 use;
 	u64 handle;
 	u16 udlen;
 	u8 *udata;
···
 	char *name;
 	int hooknum;
 	int ops_len;
-	u32 genmask:2,
-	    use:30;
+	u32 genmask:2;
+	u32 use;
 	u64 handle;
 	/* runtime data below here */
 	struct list_head hook_list ____cacheline_aligned;
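The nft_use_inc()/nft_use_dec() helpers above widen the 30-bit `use:30` bitfield to a full u32 and make the increment saturate instead of wrapping. The same pattern can be sketched in plain user-space C (names and the bool-returning decrement are illustrative; only the UINT_MAX bound and the restore semantics come from the hunk):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Saturating increment: refuse the increment rather than wrap to 0. */
static bool use_inc(uint32_t *use)
{
	if (*use == UINT32_MAX)
		return false;	/* caller must treat this as a hard failure */
	(*use)++;
	return true;
}

/* Decrement that reports underflow, mirroring the WARN_ON_ONCE() above. */
static bool use_dec(uint32_t *use)
{
	bool underflow = (*use == 0);

	(*use)--;	/* unsigned wrap is defined; flagged, not hidden */
	return !underflow;
}
```

The error-path helper in the patch (`nft_use_inc_restore()`) then simply asserts that re-taking a reference that was just dropped cannot fail.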
···
 };
 
 /**
+ * struct utp_upiu_query_v4_0 - upiu request buffer structure for
+ * query request >= UFS 4.0 spec.
+ * @opcode: command to perform B-0
+ * @idn: a value that indicates the particular type of data B-1
+ * @index: Index to further identify data B-2
+ * @selector: Index to further identify data B-3
+ * @osf4: spec field B-5
+ * @osf5: spec field B 6,7
+ * @osf6: spec field DW 8,9
+ * @osf7: spec field DW 10,11
+ */
+struct utp_upiu_query_v4_0 {
+	__u8 opcode;
+	__u8 idn;
+	__u8 index;
+	__u8 selector;
+	__u8 osf3;
+	__u8 osf4;
+	__be16 osf5;
+	__be32 osf6;
+	__be32 osf7;
+	__be32 reserved;
+};
+
 /**
  * struct utp_upiu_cmd - Command UPIU structure
  * @data_transfer_len: Data Transfer Length DW-3
  * @cdb: Command Descriptor Block CDB DW-4 to DW-7
···
 static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
 					  struct io_wait_queue *iowq)
 {
+	int token, ret;
+
 	if (unlikely(READ_ONCE(ctx->check_cq)))
 		return 1;
 	if (unlikely(!llist_empty(&ctx->work_llist)))
···
 		return -EINTR;
 	if (unlikely(io_should_wake(iowq)))
 		return 0;
+
+	/*
+	 * Use io_schedule_prepare/finish, so cpufreq can take into account
+	 * that the task is waiting for IO - turns out to be important for low
+	 * QD IO.
+	 */
+	token = io_schedule_prepare();
+	ret = 0;
 	if (iowq->timeout == KTIME_MAX)
 		schedule();
 	else if (!schedule_hrtimeout(&iowq->timeout, HRTIMER_MODE_ABS))
-		return -ETIME;
-	return 0;
+		ret = -ETIME;
+	io_schedule_finish(token);
+	return ret;
 }
 
 /*
+24-16
kernel/bpf/cpumap.c
···
 	atomic_inc(&rcpu->refcnt);
 }
 
-/* called from workqueue, to workaround syscall using preempt_disable */
-static void cpu_map_kthread_stop(struct work_struct *work)
-{
-	struct bpf_cpu_map_entry *rcpu;
-
-	rcpu = container_of(work, struct bpf_cpu_map_entry, kthread_stop_wq);
-
-	/* Wait for flush in __cpu_map_entry_free(), via full RCU barrier,
-	 * as it waits until all in-flight call_rcu() callbacks complete.
-	 */
-	rcu_barrier();
-
-	/* kthread_stop will wake_up_process and wait for it to complete */
-	kthread_stop(rcpu->kthread);
-}
-
 static void __cpu_map_ring_cleanup(struct ptr_ring *ring)
 {
 	/* The tear-down procedure should have made sure that queue is
···
 		ptr_ring_cleanup(rcpu->queue, NULL);
 		kfree(rcpu->queue);
 		kfree(rcpu);
+	}
+}
+
+/* called from workqueue, to workaround syscall using preempt_disable */
+static void cpu_map_kthread_stop(struct work_struct *work)
+{
+	struct bpf_cpu_map_entry *rcpu;
+	int err;
+
+	rcpu = container_of(work, struct bpf_cpu_map_entry, kthread_stop_wq);
+
+	/* Wait for flush in __cpu_map_entry_free(), via full RCU barrier,
+	 * as it waits until all in-flight call_rcu() callbacks complete.
+	 */
+	rcu_barrier();
+
+	/* kthread_stop will wake_up_process and wait for it to complete */
+	err = kthread_stop(rcpu->kthread);
+	if (err) {
+		/* kthread_stop may be called before cpu_map_kthread_run
+		 * is executed, so we need to release the memory related
+		 * to rcpu.
+		 */
+		put_cpu_map_entry(rcpu);
 	}
 }
+3-2
kernel/bpf/verifier.c
···
 			verbose(env, "verifier bug. subprog has tail_call and async cb\n");
 			return -EFAULT;
 		}
-		/* async callbacks don't increase bpf prog stack size */
-		continue;
+		/* async callbacks don't increase bpf prog stack size unless called directly */
+		if (!bpf_pseudo_call(insn + i))
+			continue;
 	}
 	i = next_insn;
+1-1
kernel/cgroup/cgroup.c
···
 	}
 
 	psi = cgroup_psi(cgrp);
-	new = psi_trigger_create(psi, buf, res, of->file);
+	new = psi_trigger_create(psi, buf, res, of->file, of);
 	if (IS_ERR(new)) {
 		cgroup_put(cgrp);
 		return PTR_ERR(new);
+2-3
kernel/kallsyms.c
···
 	 * LLVM appends various suffixes for local functions and variables that
 	 * must be promoted to global scope as part of LTO. This can break
 	 * hooking of static functions with kprobes. '.' is not a valid
-	 * character in an identifier in C. Suffixes observed:
+	 * character in an identifier in C. Suffixes only in LLVM LTO observed:
 	 * - foo.llvm.[0-9a-f]+
-	 * - foo.[0-9a-f]+
 	 */
-	res = strchr(s, '.');
+	res = strstr(s, ".llvm.");
 	if (res) {
 		*res = '\0';
 		return true;
+4-4
kernel/kprobes.c
···
 static int __arm_kprobe_ftrace(struct kprobe *p, struct ftrace_ops *ops,
 			       int *cnt)
 {
-	int ret = 0;
+	int ret;
 
 	lockdep_assert_held(&kprobe_mutex);
···
 static int __disarm_kprobe_ftrace(struct kprobe *p, struct ftrace_ops *ops,
 				  int *cnt)
 {
-	int ret = 0;
+	int ret;
 
 	lockdep_assert_held(&kprobe_mutex);
···
 unsigned long __kretprobe_trampoline_handler(struct pt_regs *regs,
 					     void *frame_pointer)
 {
-	kprobe_opcode_t *correct_ret_addr = NULL;
 	struct kretprobe_instance *ri = NULL;
 	struct llist_node *first, *node = NULL;
+	kprobe_opcode_t *correct_ret_addr;
 	struct kretprobe *rp;
 
 	/* Find correct address and all nodes for this frame. */
···
 
 static int __init init_kprobes(void)
 {
-	int i, err = 0;
+	int i, err;
 
 	/* FIXME allocate the probe table, currently defined statically */
 	/* initialize all list heads */
···
 
 /* Definitions related to the frequency QoS below. */
 
+static inline bool freq_qos_value_invalid(s32 value)
+{
+	return value < 0 && value != PM_QOS_DEFAULT_VALUE;
+}
+
 /**
  * freq_constraints_init - Initialize frequency QoS constraints.
  * @qos: Frequency QoS constraints to initialize.
···
 {
 	int ret;
 
-	if (IS_ERR_OR_NULL(qos) || !req || value < 0)
+	if (IS_ERR_OR_NULL(qos) || !req || freq_qos_value_invalid(value))
 		return -EINVAL;
 
 	if (WARN(freq_qos_request_active(req),
···
  */
 int freq_qos_update_request(struct freq_qos_request *req, s32 new_value)
 {
-	if (!req || new_value < 0)
+	if (!req || freq_qos_value_invalid(new_value))
 		return -EINVAL;
 
 	if (WARN(!freq_qos_request_active(req),
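The new freq_qos_value_invalid() helper still rejects negative frequencies but now admits the sentinel PM_QOS_DEFAULT_VALUE, which the old `value < 0` check wrongly refused. A minimal user-space sketch of the predicate (the `-1` definition of PM_QOS_DEFAULT_VALUE is an assumption taken from the kernel's pm_qos headers, not from this hunk):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Assumed to match the kernel's PM_QOS_DEFAULT_VALUE definition. */
#define PM_QOS_DEFAULT_VALUE (-1)

/*
 * A frequency QoS value is invalid if it is negative, unless it is the
 * "no request" default sentinel.
 */
static bool freq_qos_value_invalid(int32_t value)
{
	return value < 0 && value != PM_QOS_DEFAULT_VALUE;
}
```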
···
 		return;
 	}
 
+	/*
+	 * This user handler is shared with other kprobes and is not expected to be
+	 * called recursively. So if any other kprobe handler is running, this will
+	 * exit as kprobe does. See the section 'Share the callbacks with kprobes'
+	 * in Documentation/trace/fprobe.rst for more information.
+	 */
 	if (unlikely(kprobe_running())) {
 		fp->nmissed++;
-		return;
+		goto recursion_unlock;
 	}
 
 	kprobe_busy_begin();
 	__fprobe_handler(ip, parent_ip, ops, fregs);
 	kprobe_busy_end();
+
+recursion_unlock:
 	ftrace_test_recursion_unlock(bit);
 }
···
 	if (!fprobe_is_registered(fp))
 		return -EINVAL;
 
-	/*
-	 * rethook_free() starts disabling the rethook, but the rethook handlers
-	 * may be running on other processors at this point. To make sure that all
-	 * current running handlers are finished, call unregister_ftrace_function()
-	 * after this.
-	 */
 	if (fp->rethook)
-		rethook_free(fp->rethook);
+		rethook_stop(fp->rethook);
 
 	ret = unregister_ftrace_function(&fp->ops);
 	if (ret < 0)
 		return ret;
+
+	if (fp->rethook)
+		rethook_free(fp->rethook);
 
 	ftrace_free_filter(&fp->ops);
+31-14
kernel/trace/ftrace.c
···
 	return cnt;
 }
 
+static void ftrace_free_pages(struct ftrace_page *pages)
+{
+	struct ftrace_page *pg = pages;
+
+	while (pg) {
+		if (pg->records) {
+			free_pages((unsigned long)pg->records, pg->order);
+			ftrace_number_of_pages -= 1 << pg->order;
+		}
+		pages = pg->next;
+		kfree(pg);
+		pg = pages;
+		ftrace_number_of_groups--;
+	}
+}
+
 static struct ftrace_page *
 ftrace_allocate_pages(unsigned long num_to_init)
 {
···
 	return start_pg;
 
  free_pages:
-	pg = start_pg;
-	while (pg) {
-		if (pg->records) {
-			free_pages((unsigned long)pg->records, pg->order);
-			ftrace_number_of_pages -= 1 << pg->order;
-		}
-		start_pg = pg->next;
-		kfree(pg);
-		pg = start_pg;
-		ftrace_number_of_groups--;
-	}
+	ftrace_free_pages(start_pg);
 	pr_info("ftrace: FAILED to allocate memory for functions\n");
 	return NULL;
 }
···
 			 unsigned long *start,
 			 unsigned long *end)
 {
+	struct ftrace_page *pg_unuse = NULL;
 	struct ftrace_page *start_pg;
 	struct ftrace_page *pg;
 	struct dyn_ftrace *rec;
+	unsigned long skipped = 0;
 	unsigned long count;
 	unsigned long *p;
 	unsigned long addr;
···
 		 * object files to satisfy alignments.
 		 * Skip any NULL pointers.
 		 */
-		if (!addr)
+		if (!addr) {
+			skipped++;
 			continue;
+		}
 
 		end_offset = (pg->index+1) * sizeof(pg->records[0]);
 		if (end_offset > PAGE_SIZE << pg->order) {
···
 		rec->ip = addr;
 	}
 
-	/* We should have used all pages */
-	WARN_ON(pg->next);
+	if (pg->next) {
+		pg_unuse = pg->next;
+		pg->next = NULL;
+	}
 
 	/* Assign the last page to ftrace_pages */
 	ftrace_pages = pg;
···
  out:
 	mutex_unlock(&ftrace_lock);
 
+	/* We should have used all pages unless we skipped some */
+	if (pg_unuse) {
+		WARN_ON(!skipped);
+		ftrace_free_pages(pg_unuse);
+	}
 	return ret;
 }
+3-2
kernel/trace/ftrace_internal.h
···
 #ifndef _LINUX_KERNEL_FTRACE_INTERNAL_H
 #define _LINUX_KERNEL_FTRACE_INTERNAL_H
 
+int __register_ftrace_function(struct ftrace_ops *ops);
+int __unregister_ftrace_function(struct ftrace_ops *ops);
+
 #ifdef CONFIG_FUNCTION_TRACER
 
 extern struct mutex ftrace_lock;
···
 
 #else /* !CONFIG_DYNAMIC_FTRACE */
 
-int __register_ftrace_function(struct ftrace_ops *ops);
-int __unregister_ftrace_function(struct ftrace_ops *ops);
 /* Keep as macros so we do not need to define the commands */
 # define ftrace_startup(ops, command)			\
 	({						\
+13
kernel/trace/rethook.c
···
 }
 
 /**
+ * rethook_stop() - Stop using a rethook.
+ * @rh: the struct rethook to stop.
+ *
+ * Stop using a rethook to prepare for freeing it. If you want to wait for
+ * all running rethook handler before calling rethook_free(), you need to
+ * call this first and wait RCU, and call rethook_free().
+ */
+void rethook_stop(struct rethook *rh)
+{
+	WRITE_ONCE(rh->handler, NULL);
+}
+
+/**
  * rethook_free() - Free struct rethook.
  * @rh: the struct rethook to be freed.
  *
···
 	struct ftrace_stack *fstack;
 	struct stack_entry *entry;
 	int stackidx;
+	void *ptr;
 
 	/*
 	 * Add one, for this function and the call to save_stack_trace()
···
 					    trace_ctx);
 	if (!event)
 		goto out;
-	entry = ring_buffer_event_data(event);
+	ptr = ring_buffer_event_data(event);
+	entry = ptr;
 
-	memcpy(&entry->caller, fstack->calls, size);
+	/*
+	 * For backward compatibility reasons, the entry->caller is an
+	 * array of 8 slots to store the stack. This is also exported
+	 * to user space. The amount allocated on the ring buffer actually
+	 * holds enough for the stack specified by nr_entries. This will
+	 * go into the location of entry->caller. Due to string fortifiers
+	 * checking the size of the destination of memcpy() it triggers
+	 * when it detects that size is greater than 8. To hide this from
+	 * the fortifiers, we use "ptr" and pointer arithmetic to assign caller.
+	 *
+	 * The below is really just:
+	 *   memcpy(&entry->caller, fstack->calls, size);
+	 */
+	ptr += offsetof(typeof(*entry), caller);
+	memcpy(ptr, fstack->calls, size);
+
 	entry->size = nr_entries;
 
 	if (!call_filter_check_discard(call, entry, buffer, event))
···
 
 	free_cpumask_var(iter->started);
 	kfree(iter->fmt);
+	kfree(iter->temp);
 	mutex_destroy(&iter->mutex);
 	kfree(iter);
···
 	struct trace_eprobe *ep;
 	bool enabled;
 	int ret = 0;
+	int cnt = 0;
 
 	tp = trace_probe_primary_from_call(call);
 	if (WARN_ON_ONCE(!tp))
···
 		if (ret)
 			break;
 		enabled = true;
+		cnt++;
 	}
 
 	if (ret) {
 		/* Failed to enable one of them. Roll back all */
-		if (enabled)
-			disable_eprobe(ep, file->tr);
+		if (enabled) {
+			/*
+			 * It's a bug if one failed for something other than memory
+			 * not being available but another eprobe succeeded.
+			 */
+			WARN_ON_ONCE(ret != -ENOMEM);
+
+			list_for_each_entry(pos, trace_probe_probe_list(tp), list) {
+				ep = container_of(pos, struct trace_eprobe, tp);
+				disable_eprobe(ep, file->tr);
+				if (!--cnt)
+					break;
+			}
+		}
 		if (file)
 			trace_probe_remove_file(tp, file);
 		else
+5-3
kernel/trace/trace_events_hist.c
···
 	if (get_named_trigger_data(trigger_data))
 		goto enable;
 
-	if (has_hist_vars(hist_data))
-		save_hist_vars(hist_data);
-
 	ret = create_actions(hist_data);
 	if (ret)
 		goto out_unreg;
+
+	if (has_hist_vars(hist_data) || hist_data->n_var_refs) {
+		if (save_hist_vars(hist_data))
+			goto out_unreg;
+	}
 
 	ret = tracing_map_init(hist_data->map);
 	if (ret)
···
 // SPDX-License-Identifier: GPL-2.0
+
+#include "trace_kprobe_selftest.h"
+
 /*
  * Function used during the kprobe self test. This function is in a separate
  * compile unit so it can be compile with CC_FLAGS_FTRACE to ensure that it
+1-1
kernel/trace/trace_probe.c
···
 	int len = *(u32 *)data >> 16;
 
 	if (!len)
-		trace_seq_puts(s, "(fault)");
+		trace_seq_puts(s, FAULT_STRING);
 	else
 		trace_seq_printf(s, "\"%s\"",
 				 (const char *)get_loc_data(data, ent));
+8-22
kernel/trace/trace_probe_kernel.h
···
 #ifndef __TRACE_PROBE_KERNEL_H_
 #define __TRACE_PROBE_KERNEL_H_
 
-#define FAULT_STRING "(fault)"
-
 /*
  * This depends on trace_probe.h, but can not include it due to
  * the way trace_probe_tmpl.h is used by trace_kprobe.c and trace_eprobe.c.
···
 fetch_store_strlen_user(unsigned long addr)
 {
 	const void __user *uaddr = (__force const void __user *)addr;
-	int ret;
 
-	ret = strnlen_user_nofault(uaddr, MAX_STRING_SIZE);
-	/*
-	 * strnlen_user_nofault returns zero on fault, insert the
-	 * FAULT_STRING when that occurs.
-	 */
-	if (ret <= 0)
-		return strlen(FAULT_STRING) + 1;
-	return ret;
+	return strnlen_user_nofault(uaddr, MAX_STRING_SIZE);
 }
 
 /* Return the length of string -- including null terminal byte */
···
 		len++;
 	} while (c && ret == 0 && len < MAX_STRING_SIZE);
 
-	/* For faults, return enough to hold the FAULT_STRING */
-	return (ret < 0) ? strlen(FAULT_STRING) + 1 : len;
+	return (ret < 0) ? ret : len;
 }
 
-static nokprobe_inline void set_data_loc(int ret, void *dest, void *__dest, void *base, int len)
+static nokprobe_inline void set_data_loc(int ret, void *dest, void *__dest, void *base)
 {
-	if (ret >= 0) {
-		*(u32 *)dest = make_data_loc(ret, __dest - base);
-	} else {
-		strscpy(__dest, FAULT_STRING, len);
-		ret = strlen(__dest) + 1;
-	}
+	if (ret < 0)
+		ret = 0;
+	*(u32 *)dest = make_data_loc(ret, __dest - base);
 }
 
 /*
···
 	__dest = get_loc_data(dest, base);
 
 	ret = strncpy_from_user_nofault(__dest, uaddr, maxlen);
-	set_data_loc(ret, dest, __dest, base, maxlen);
+	set_data_loc(ret, dest, __dest, base);
 
 	return ret;
 }
···
 	 * probing.
 	 */
 	ret = strncpy_from_kernel_nofault(__dest, (void *)addr, maxlen);
-	set_data_loc(ret, dest, __dest, base, maxlen);
+	set_data_loc(ret, dest, __dest, base);
 
 	return ret;
 }
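set_data_loc() above records a fetched string as a packed (length, offset) "data_loc" word; the print side in the trace_probe.c hunk earlier in this series decodes the length with `*(u32 *)data >> 16` and emits FAULT_STRING when it is zero. A user-space sketch of that packing, assuming the usual length-in-high-16 / offset-in-low-16 layout implied by that decode (helper names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Pack a dynamic-array (length, offset) pair the way trace_probe's
 * make_data_loc() is used here: length in the high 16 bits, offset to
 * the string data in the low 16 bits.
 */
static uint32_t make_data_loc(uint16_t len, uint16_t offs)
{
	return ((uint32_t)len << 16) | offs;
}

static uint16_t data_loc_len(uint32_t loc)  { return loc >> 16; }
static uint16_t data_loc_offs(uint32_t loc) { return loc & 0xffff; }
```

With the patch above, a faulted fetch stores length 0 instead of copying "(fault)" into the buffer, and the reader reconstructs FAULT_STRING purely from the zero length.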
+5-5
kernel/trace/trace_probe_tmpl.h
···
 		code++;
 		goto array;
 	case FETCH_OP_ST_USTRING:
-		ret += fetch_store_strlen_user(val + code->offset);
+		ret = fetch_store_strlen_user(val + code->offset);
 		code++;
 		goto array;
 	case FETCH_OP_ST_SYMSTR:
-		ret += fetch_store_symstrlen(val + code->offset);
+		ret = fetch_store_symstrlen(val + code->offset);
 		code++;
 		goto array;
 	default:
···
 array:
 	/* the last stage: Loop on array */
 	if (code->op == FETCH_OP_LP_ARRAY) {
+		if (ret < 0)
+			ret = 0;
 		total += ret;
 		if (++i < code->param) {
 			code = s3;
···
 	if (unlikely(arg->dynamic))
 		*dl = make_data_loc(maxlen, dyndata - base);
 	ret = process_fetch_insn(arg->code, rec, dl, base);
-	if (unlikely(ret < 0 && arg->dynamic)) {
-		*dl = make_data_loc(0, dyndata - base);
-	} else {
+	if (arg->dynamic && likely(ret > 0)) {
 		dyndata += ret;
 		maxlen -= ret;
 	}
···
 	return ret;
 }
 
-static int copy_iovec_from_user(struct iovec *iov,
+static __noclone int copy_iovec_from_user(struct iovec *iov,
 		const struct iovec __user *uiov, unsigned long nr_segs)
 {
 	int ret = -EFAULT;
+29-18
net/ceph/messenger_v2.c
···
 	int head_len;
 	int rem_len;
 
+	BUG_ON(ctrl_len < 0 || ctrl_len > CEPH_MSG_MAX_CONTROL_LEN);
+
 	if (secure) {
 		head_len = CEPH_PREAMBLE_SECURE_LEN;
 		if (ctrl_len > CEPH_PREAMBLE_INLINE_LEN) {
···
 static int __tail_onwire_len(int front_len, int middle_len, int data_len,
 			     bool secure)
 {
+	BUG_ON(front_len < 0 || front_len > CEPH_MSG_MAX_FRONT_LEN ||
+	       middle_len < 0 || middle_len > CEPH_MSG_MAX_MIDDLE_LEN ||
+	       data_len < 0 || data_len > CEPH_MSG_MAX_DATA_LEN);
+
 	if (!front_len && !middle_len && !data_len)
 		return 0;
···
 		desc->fd_aligns[i] = ceph_decode_16(&p);
 	}
 
+	if (desc->fd_lens[0] < 0 ||
+	    desc->fd_lens[0] > CEPH_MSG_MAX_CONTROL_LEN) {
+		pr_err("bad control segment length %d\n", desc->fd_lens[0]);
+		return -EINVAL;
+	}
+	if (desc->fd_lens[1] < 0 ||
+	    desc->fd_lens[1] > CEPH_MSG_MAX_FRONT_LEN) {
+		pr_err("bad front segment length %d\n", desc->fd_lens[1]);
+		return -EINVAL;
+	}
+	if (desc->fd_lens[2] < 0 ||
+	    desc->fd_lens[2] > CEPH_MSG_MAX_MIDDLE_LEN) {
+		pr_err("bad middle segment length %d\n", desc->fd_lens[2]);
+		return -EINVAL;
+	}
+	if (desc->fd_lens[3] < 0 ||
+	    desc->fd_lens[3] > CEPH_MSG_MAX_DATA_LEN) {
+		pr_err("bad data segment length %d\n", desc->fd_lens[3]);
+		return -EINVAL;
+	}
+
 	/*
 	 * This would fire for FRAME_TAG_WAIT (it has one empty
 	 * segment), but we should never get it as client.
 	 */
 	if (!desc->fd_lens[desc->fd_seg_cnt - 1]) {
-		pr_err("last segment empty\n");
-		return -EINVAL;
-	}
-
-	if (desc->fd_lens[0] > CEPH_MSG_MAX_CONTROL_LEN) {
-		pr_err("control segment too big %d\n", desc->fd_lens[0]);
-		return -EINVAL;
-	}
-	if (desc->fd_lens[1] > CEPH_MSG_MAX_FRONT_LEN) {
-		pr_err("front segment too big %d\n", desc->fd_lens[1]);
-		return -EINVAL;
-	}
-	if (desc->fd_lens[2] > CEPH_MSG_MAX_MIDDLE_LEN) {
-		pr_err("middle segment too big %d\n", desc->fd_lens[2]);
-		return -EINVAL;
-	}
-	if (desc->fd_lens[3] > CEPH_MSG_MAX_DATA_LEN) {
-		pr_err("data segment too big %d\n", desc->fd_lens[3]);
+		pr_err("last segment empty, segment count %d\n",
+		       desc->fd_seg_cnt);
 		return -EINVAL;
 	}
···

 	skb_push(skb, -skb_network_offset(skb) + offset);

+	/* Ensure the head is writeable before touching the shared info */
+	err = skb_unclone(skb, GFP_ATOMIC);
+	if (err)
+		goto err_linearize;
+
 	skb_shinfo(skb)->frag_list = NULL;

 	while (list_skb) {
···
 static void addrconf_mod_rs_timer(struct inet6_dev *idev,
 				  unsigned long when)
 {
-	if (!timer_pending(&idev->rs_timer))
+	if (!mod_timer(&idev->rs_timer, jiffies + when))
 		in6_dev_hold(idev);
-	mod_timer(&idev->rs_timer, jiffies + when);
 }

 static void addrconf_mod_dad_work(struct inet6_ifaddr *ifp,
+4 -1
net/ipv6/icmp.c
···
 	if (unlikely(dev->ifindex == LOOPBACK_IFINDEX || netif_is_l3_master(skb->dev))) {
 		const struct rt6_info *rt6 = skb_rt6_info(skb);

-		if (rt6)
+		/* The destination could be an external IP in Ext Hdr (SRv6, RPL, etc.),
+		 * and ip6_null_entry could be set to skb if no route is found.
+		 */
+		if (rt6 && rt6->rt6i_idev)
 			dev = rt6->rt6i_idev->dev;
 	}

···
 			  enum ip_conntrack_info ctinfo,
 			  const struct nf_hook_state *state)
 {
+	unsigned long status;
+
 	if (!nf_ct_is_confirmed(ct)) {
 		unsigned int *timeouts = nf_ct_timeout_lookup(ct);
···
 		ct->proto.gre.timeout = timeouts[GRE_CT_UNREPLIED];
 	}

+	status = READ_ONCE(ct->status);
 	/* If we've seen traffic both ways, this is a GRE connection.
 	 * Extend timeout. */
-	if (ct->status & IPS_SEEN_REPLY) {
+	if (status & IPS_SEEN_REPLY) {
 		nf_ct_refresh_acct(ct, ctinfo, skb,
 				   ct->proto.gre.stream_timeout);
+
+		/* never set ASSURED for IPS_NAT_CLASH, they time out soon */
+		if (unlikely((status & IPS_NAT_CLASH)))
+			return NF_ACCEPT;
+
 		/* Also, more likely to be important, and not a probe. */
 		if (!test_and_set_bit(IPS_ASSURED_BIT, &ct->status))
 			nf_conntrack_event_cache(IPCT_ASSURED, ct);
···
 			return ERR_PTR(err);
 		}
 	} else {
-		if (strlcpy(act_name, "police", IFNAMSIZ) >= IFNAMSIZ) {
+		if (strscpy(act_name, "police", IFNAMSIZ) < 0) {
 			NL_SET_ERR_MSG(extack, "TC action name too long");
 			return ERR_PTR(-EINVAL);
 		}
+10
net/sched/cls_flower.c
···
 		       TCA_FLOWER_KEY_PORT_SRC_MAX, &mask->tp_range.tp_max.src,
 		       TCA_FLOWER_UNSPEC, sizeof(key->tp_range.tp_max.src));

+	if (mask->tp_range.tp_min.dst != mask->tp_range.tp_max.dst) {
+		NL_SET_ERR_MSG(extack,
+			       "Both min and max destination ports must be specified");
+		return -EINVAL;
+	}
+	if (mask->tp_range.tp_min.src != mask->tp_range.tp_max.src) {
+		NL_SET_ERR_MSG(extack,
+			       "Both min and max source ports must be specified");
+		return -EINVAL;
+	}
 	if (mask->tp_range.tp_min.dst && mask->tp_range.tp_max.dst &&
 	    ntohs(key->tp_range.tp_max.dst) <=
 	    ntohs(key->tp_range.tp_min.dst)) {
+5 -5
net/sched/cls_fw.c
···
 	if (err < 0)
 		return err;

-	if (tb[TCA_FW_CLASSID]) {
-		f->res.classid = nla_get_u32(tb[TCA_FW_CLASSID]);
-		tcf_bind_filter(tp, &f->res, base);
-	}
-
 	if (tb[TCA_FW_INDEV]) {
 		int ret;
 		ret = tcf_change_indev(net, tb[TCA_FW_INDEV], extack);
···
 			return err;
 	} else if (head->mask != 0xFFFFFFFF)
 		return err;
+
+	if (tb[TCA_FW_CLASSID]) {
+		f->res.classid = nla_get_u32(tb[TCA_FW_CLASSID]);
+		tcf_bind_filter(tp, &f->res, base);
+	}

 	return 0;
 }
+15 -3
net/sched/sch_qfq.c
···
 			   u32 lmax)
 {
 	struct qfq_sched *q = qdisc_priv(sch);
-	struct qfq_aggregate *new_agg = qfq_find_agg(q, lmax, weight);
+	struct qfq_aggregate *new_agg;

+	/* 'lmax' can range from [QFQ_MIN_LMAX, pktlen + stab overhead] */
+	if (lmax > QFQ_MAX_LMAX)
+		return -EINVAL;
+
+	new_agg = qfq_find_agg(q, lmax, weight);
 	if (new_agg == NULL) { /* create new aggregate */
 		new_agg = kzalloc(sizeof(*new_agg), GFP_ATOMIC);
 		if (new_agg == NULL)
···
 	else
 		weight = 1;

-	if (tb[TCA_QFQ_LMAX])
+	if (tb[TCA_QFQ_LMAX]) {
 		lmax = nla_get_u32(tb[TCA_QFQ_LMAX]);
-	else
+	} else {
+		/* MTU size is user controlled */
 		lmax = psched_mtu(qdisc_dev(sch));
+		if (lmax < QFQ_MIN_LMAX || lmax > QFQ_MAX_LMAX) {
+			NL_SET_ERR_MSG_MOD(extack,
+					   "MTU size out of bounds for qfq");
+			return -EINVAL;
+		}
+	}

 	inv_w = ONE_FP / weight;
 	weight = ONE_FP / inv_w;
···
 	 * ASCII[_]   = 5f
 	 * ASCII[a-z] = 61,7a
 	 *
-	 * As above, replacing '.' with '\0' does not affect the main sorting,
-	 * but it helps us with subsorting.
+	 * As above, replacing the first '.' in ".llvm." with '\0' does not
+	 * affect the main sorting, but it helps us with subsorting.
 	 */
-	p = strchr(s, '.');
+	p = strstr(s, ".llvm.");
 	if (p)
 		*p = '\0';
 }
+1 -6
sound/hda/hdac_i915.c
···
 #include <sound/hda_i915.h>
 #include <sound/hda_register.h>

-#define IS_HSW_CONTROLLER(pci) (((pci)->device == 0x0a0c) || \
-				((pci)->device == 0x0c0c) || \
-				((pci)->device == 0x0d0c) || \
-				((pci)->device == 0x160c))
-
 /**
  * snd_hdac_i915_set_bclk - Reprogram BCLK for HSW/BDW
  * @bus: HDA core bus
···
 	if (!acomp || !acomp->ops || !acomp->ops->get_cdclk_freq)
 		return; /* only for i915 binding */
-	if (!IS_HSW_CONTROLLER(pci))
+	if (!HDA_CONTROLLER_IS_HSW(pci))
 		return; /* only HSW/BDW */

 	cdclk_freq = acomp->ops->get_cdclk_freq(acomp->dev);
···
 #include <linux/interrupt.h>
 #include <linux/io.h>
 #include <linux/firmware.h>
+#include <linux/pci.h>
 #include <linux/pm_runtime.h>
 #include <linux/pm_qos.h>
 #include <linux/async.h>
···
 {

 	switch (sst->dev_id) {
-	case SST_MRFLD_PCI_ID:
-	case SST_BYT_ACPI_ID:
-	case SST_CHV_ACPI_ID:
+	case PCI_DEVICE_ID_INTEL_SST_TNG:
+	case PCI_DEVICE_ID_INTEL_SST_BYT:
+	case PCI_DEVICE_ID_INTEL_SST_BSW:
 		sst->tstamp = SST_TIME_STAMP_MRFLD;
 		sst->ops = &mrfld_ops;
 		return 0;
···
 	spin_lock_init(&ctx->block_lock);
 }

+/*
+ * Driver handles PCI IDs in ACPI - sst_acpi_probe() - and we are using only
+ * device ID part. If real ACPI ID appears, the kstrtouint() returns error, so
+ * we are fine with using unsigned short as dev_id type.
+ */
 int sst_alloc_drv_context(struct intel_sst_drv **ctx,
-			  struct device *dev, unsigned int dev_id)
+			  struct device *dev, unsigned short dev_id)
 {
 	*ctx = devm_kzalloc(dev, sizeof(struct intel_sst_drv), GFP_KERNEL);
 	if (!(*ctx))
···
 #include <linux/pci.h>
 #include <linux/pm_runtime.h>
 #include <linux/delay.h>
+#include <sound/hdaudio.h>
 #include <sound/pcm_params.h>
 #include <sound/soc.h>
 #include "skl.h"
···
 	 * The recommended SDxFMT programming sequence for BXT
 	 * platforms is to couple the stream before writing the format
 	 */
-	if (IS_BXT(skl->pci)) {
+	if (HDA_CONTROLLER_IS_APL(skl->pci)) {
 		snd_hdac_ext_stream_decouple(bus, stream, false);
 		err = snd_hdac_stream_setup(hdac_stream(stream));
 		snd_hdac_ext_stream_decouple(bus, stream, true);