···
 What:		/sys/bus/platform/drivers/ufshcd/*/rpm_lvl
 What:		/sys/bus/platform/devices/*.ufs/rpm_lvl
 Date:		September 2014
-Contact:	Subhash Jadavani <subhashj@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This entry could be used to set or show the UFS device
		runtime power management level. The current driver
		implementation supports 7 levels with next target states:
···
 What:		/sys/bus/platform/drivers/ufshcd/*/rpm_target_dev_state
 What:		/sys/bus/platform/devices/*.ufs/rpm_target_dev_state
 Date:		February 2018
-Contact:	Subhash Jadavani <subhashj@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This entry shows the target power mode of an UFS device
		for the chosen runtime power management level.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/rpm_target_link_state
 What:		/sys/bus/platform/devices/*.ufs/rpm_target_link_state
 Date:		February 2018
-Contact:	Subhash Jadavani <subhashj@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This entry shows the target state of an UFS UIC link
		for the chosen runtime power management level.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/spm_lvl
 What:		/sys/bus/platform/devices/*.ufs/spm_lvl
 Date:		September 2014
-Contact:	Subhash Jadavani <subhashj@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This entry could be used to set or show the UFS device
		system power management level. The current driver
		implementation supports 7 levels with next target states:
···
 What:		/sys/bus/platform/drivers/ufshcd/*/spm_target_dev_state
 What:		/sys/bus/platform/devices/*.ufs/spm_target_dev_state
 Date:		February 2018
-Contact:	Subhash Jadavani <subhashj@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This entry shows the target power mode of an UFS device
		for the chosen system power management level.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/spm_target_link_state
 What:		/sys/bus/platform/devices/*.ufs/spm_target_link_state
 Date:		February 2018
-Contact:	Subhash Jadavani <subhashj@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This entry shows the target state of an UFS UIC link
		for the chosen system power management level.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/monitor_enable
 What:		/sys/bus/platform/devices/*.ufs/monitor/monitor_enable
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows the status of performance monitor enablement
		and it can be used to start/stop the monitor. When the monitor
		is stopped, the performance data collected is also cleared.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/monitor_chunk_size
 What:		/sys/bus/platform/devices/*.ufs/monitor/monitor_chunk_size
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file tells the monitor to focus on requests transferring
		data of specific chunk size (in Bytes). 0 means any chunk size.
		It can only be changed when monitor is disabled.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/read_total_sectors
 What:		/sys/bus/platform/devices/*.ufs/monitor/read_total_sectors
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows how many sectors (in 512 Bytes) have been
		sent from device to host after monitor gets started.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/read_total_busy
 What:		/sys/bus/platform/devices/*.ufs/monitor/read_total_busy
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows how long (in micro seconds) has been spent
		sending data from device to host after monitor gets started.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/read_nr_requests
 What:		/sys/bus/platform/devices/*.ufs/monitor/read_nr_requests
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows how many read requests have been sent after
		monitor gets started.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/read_req_latency_max
 What:		/sys/bus/platform/devices/*.ufs/monitor/read_req_latency_max
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows the maximum latency (in micro seconds) of
		read requests after monitor gets started.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/read_req_latency_min
 What:		/sys/bus/platform/devices/*.ufs/monitor/read_req_latency_min
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows the minimum latency (in micro seconds) of
		read requests after monitor gets started.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/read_req_latency_avg
 What:		/sys/bus/platform/devices/*.ufs/monitor/read_req_latency_avg
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows the average latency (in micro seconds) of
		read requests after monitor gets started.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/read_req_latency_sum
 What:		/sys/bus/platform/devices/*.ufs/monitor/read_req_latency_sum
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows the total latency (in micro seconds) of
		read requests sent after monitor gets started.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/write_total_sectors
 What:		/sys/bus/platform/devices/*.ufs/monitor/write_total_sectors
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows how many sectors (in 512 Bytes) have been sent
		from host to device after monitor gets started.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/write_total_busy
 What:		/sys/bus/platform/devices/*.ufs/monitor/write_total_busy
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows how long (in micro seconds) has been spent
		sending data from host to device after monitor gets started.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/write_nr_requests
 What:		/sys/bus/platform/devices/*.ufs/monitor/write_nr_requests
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows how many write requests have been sent after
		monitor gets started.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/write_req_latency_max
 What:		/sys/bus/platform/devices/*.ufs/monitor/write_req_latency_max
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows the maximum latency (in micro seconds) of write
		requests after monitor gets started.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/write_req_latency_min
 What:		/sys/bus/platform/devices/*.ufs/monitor/write_req_latency_min
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows the minimum latency (in micro seconds) of write
		requests after monitor gets started.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/write_req_latency_avg
 What:		/sys/bus/platform/devices/*.ufs/monitor/write_req_latency_avg
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows the average latency (in micro seconds) of write
		requests after monitor gets started.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/monitor/write_req_latency_sum
 What:		/sys/bus/platform/devices/*.ufs/monitor/write_req_latency_sum
 Date:		January 2021
-Contact:	Can Guo <cang@codeaurora.org>
+Contact:	Can Guo <quic_cang@quicinc.com>
 Description:	This file shows the total latency (in micro seconds) of write
		requests after monitor gets started.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/device_descriptor/wb_presv_us_en
 What:		/sys/bus/platform/devices/*.ufs/device_descriptor/wb_presv_us_en
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows if preserve user-space was configured

		The file is read only.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/device_descriptor/wb_shared_alloc_units
 What:		/sys/bus/platform/devices/*.ufs/device_descriptor/wb_shared_alloc_units
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows the shared allocated units of WB buffer

		The file is read only.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/device_descriptor/wb_type
 What:		/sys/bus/platform/devices/*.ufs/device_descriptor/wb_type
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows the configured WB type.
		0x1 for shared buffer mode. 0x0 for dedicated buffer mode.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_buff_cap_adj
 What:		/sys/bus/platform/devices/*.ufs/geometry_descriptor/wb_buff_cap_adj
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows the total user-space decrease in shared
		buffer mode.
		The value of this parameter is 3 for TLC NAND when SLC mode
···
 What:		/sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_max_alloc_units
 What:		/sys/bus/platform/devices/*.ufs/geometry_descriptor/wb_max_alloc_units
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows the Maximum total WriteBooster Buffer size
		which is supported by the entire device.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_max_wb_luns
 What:		/sys/bus/platform/devices/*.ufs/geometry_descriptor/wb_max_wb_luns
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows the maximum number of luns that can support
		WriteBooster.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_sup_red_type
 What:		/sys/bus/platform/devices/*.ufs/geometry_descriptor/wb_sup_red_type
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	The supportability of user space reduction mode
		and preserve user space mode.
		00h: WriteBooster Buffer can be configured only in
···
 What:		/sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_sup_wb_type
 What:		/sys/bus/platform/devices/*.ufs/geometry_descriptor/wb_sup_wb_type
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	The supportability of WriteBooster Buffer type.

		===	==========================================================
···
 What:		/sys/bus/platform/drivers/ufshcd/*/flags/wb_enable
 What:		/sys/bus/platform/devices/*.ufs/flags/wb_enable
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows the status of WriteBooster.

		==	============================
···
 What:		/sys/bus/platform/drivers/ufshcd/*/flags/wb_flush_en
 What:		/sys/bus/platform/devices/*.ufs/flags/wb_flush_en
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows if flush is enabled.

		==	=================================
···
 What:		/sys/bus/platform/drivers/ufshcd/*/flags/wb_flush_during_h8
 What:		/sys/bus/platform/devices/*.ufs/flags/wb_flush_during_h8
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	Flush WriteBooster Buffer during hibernate state.

		==	=================================================
···
 What:		/sys/bus/platform/drivers/ufshcd/*/attributes/wb_avail_buf
 What:		/sys/bus/platform/devices/*.ufs/attributes/wb_avail_buf
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows the amount of unused WriteBooster buffer
		available.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/attributes/wb_cur_buf
 What:		/sys/bus/platform/devices/*.ufs/attributes/wb_cur_buf
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows the amount of unused current buffer.

		The file is read only.
···
 What:		/sys/bus/platform/drivers/ufshcd/*/attributes/wb_flush_status
 What:		/sys/bus/platform/devices/*.ufs/attributes/wb_flush_status
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows the flush operation status.

···
 What:		/sys/bus/platform/drivers/ufshcd/*/attributes/wb_life_time_est
 What:		/sys/bus/platform/devices/*.ufs/attributes/wb_life_time_est
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows an indication of the WriteBooster Buffer
		lifetime based on the amount of performed program/erase cycles
···
 What:		/sys/class/scsi_device/*/device/unit_descriptor/wb_buf_alloc_units
 Date:		June 2020
-Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Contact:	Asutosh Das <quic_asutoshd@quicinc.com>
 Description:	This entry shows the configured size of WriteBooster buffer.
		0400h corresponds to 4GB.
+1-1
Documentation/riscv/hwprobe.rst
···
 privileged ISA, with the following known exceptions (more exceptions may be
 added, but only if it can be demonstrated that the user ABI is not broken):

-  * The :fence.i: instruction cannot be directly executed by userspace
+  * The ``fence.i`` instruction cannot be directly executed by userspace
    programs (it may still be executed in userspace via a
    kernel-controlled mechanism such as the vDSO).
···
 *
 */

-	emit_bti(A64_BTI_C, ctx);
+	/* bpf function may be invoked by 3 instruction types:
+	 * 1. bl, attached via freplace to bpf prog via short jump
+	 * 2. br, attached via freplace to bpf prog via long jump
+	 * 3. blr, working as a function pointer, used by emit_call.
+	 * So BTI_JC should be used here to support both br and blr.
+	 */
+	emit_bti(A64_BTI_JC, ctx);

	emit(A64_MOV(1, A64_R(9), A64_LR), ctx);
	emit(A64_NOP, ctx);
···
 * Copyright (C) 2007 Ben. Herrenschmidt (benh@kernel.crashing.org), IBM Corp.
 */

+#include <linux/linkage.h>
 #include <linux/threads.h>
 #include <asm/reg.h>
 #include <asm/page.h>
···
 #define SPECIAL_EXC_LOAD(reg, name) \
	ld	reg, (SPECIAL_EXC_##name * 8 + SPECIAL_EXC_FRAME_OFFS)(r1)

-special_reg_save:
+SYM_CODE_START_LOCAL(special_reg_save)
	/*
	 * We only need (or have stack space) to save this stuff if
	 * we interrupted the kernel.
···
	SPECIAL_EXC_STORE(r10,CSRR1)

	blr
+SYM_CODE_END(special_reg_save)

-ret_from_level_except:
+SYM_CODE_START_LOCAL(ret_from_level_except)
	ld	r3,_MSR(r1)
	andi.	r3,r3,MSR_PR
	beq	1f
···
	mtxer	r11

	blr
+SYM_CODE_END(ret_from_level_except)

.macro ret_from_level srr0 srr1 paca_ex scratch
	bl	ret_from_level_except
···
	mfspr	r13,\scratch
.endm

-ret_from_crit_except:
+SYM_CODE_START_LOCAL(ret_from_crit_except)
	ret_from_level SPRN_CSRR0 SPRN_CSRR1 PACA_EXCRIT SPRN_SPRG_CRIT_SCRATCH
	rfci
+SYM_CODE_END(ret_from_crit_except)

-ret_from_mc_except:
+SYM_CODE_START_LOCAL(ret_from_mc_except)
	ret_from_level SPRN_MCSRR0 SPRN_MCSRR1 PACA_EXMC SPRN_SPRG_MC_SCRATCH
	rfmci
+SYM_CODE_END(ret_from_mc_except)

/* Exception prolog code for all exceptions */
#define EXCEPTION_PROLOG(n, intnum, type, addition)	\
···
 * r14 and r15 containing the fault address and error code, with the
 * original values stashed away in the PACA
 */
-storage_fault_common:
+SYM_CODE_START_LOCAL(storage_fault_common)
	addi	r3,r1,STACK_INT_FRAME_REGS
	bl	do_page_fault
	b	interrupt_return
+SYM_CODE_END(storage_fault_common)

/*
 * Alignment exception doesn't fit entirely in the 0x100 bytes so it
 * continues here.
 */
-alignment_more:
+SYM_CODE_START_LOCAL(alignment_more)
	addi	r3,r1,STACK_INT_FRAME_REGS
	bl	alignment_exception
	REST_NVGPRS(r1)
	b	interrupt_return
+SYM_CODE_END(alignment_more)

/*
 * Trampolines used when spotting a bad kernel stack pointer in
···
BAD_STACK_TRAMPOLINE(0xf00)
BAD_STACK_TRAMPOLINE(0xf20)

-	.globl	bad_stack_book3e
-bad_stack_book3e:
+_GLOBAL(bad_stack_book3e)
	/* XXX: Needs to make SPRN_SPRG_GEN depend on exception type */
	mfspr	r10,SPRN_SRR0;	/* read SRR0 before touching stack */
	ld	r1,PACAEMERGSP(r13)
···
 * ever takes any parameters, the SCOM code must also be updated to
 * provide them.
 */
-	.globl	a2_tlbinit_code_start
-a2_tlbinit_code_start:
+_GLOBAL(a2_tlbinit_code_start)

	ori	r11,r3,MAS0_WQ_ALLWAYS
	oris	r11,r11,MAS0_ESEL(3)@h /* Use way 3: workaround A2 erratum 376 */
···
	mflr	r28
	b	3b

-	.globl	init_core_book3e
-init_core_book3e:
+_GLOBAL(init_core_book3e)
	/* Establish the interrupt vector base */
	tovirt(r2,r2)
	LOAD_REG_ADDR(r3, interrupt_base_book3e)
···
	sync
	blr

-init_thread_book3e:
+SYM_CODE_START_LOCAL(init_thread_book3e)
	lis	r3,(SPRN_EPCR_ICM | SPRN_EPCR_GICM)@h
	mtspr	SPRN_EPCR,r3
···
	mtspr	SPRN_TSR,r3

	blr
+SYM_CODE_END(init_thread_book3e)

_GLOBAL(__setup_base_ivors)
	SET_IVOR(0, 0x020)	/* Critical Input */
+19-18
arch/powerpc/kernel/security.c
···

static int ssb_prctl_get(struct task_struct *task)
{
-	if (stf_enabled_flush_types == STF_BARRIER_NONE)
-		/*
-		 * We don't have an explicit signal from firmware that we're
-		 * vulnerable or not, we only have certain CPU revisions that
-		 * are known to be vulnerable.
-		 *
-		 * We assume that if we're on another CPU, where the barrier is
-		 * NONE, then we are not vulnerable.
-		 */
+	/*
+	 * The STF_BARRIER feature is on by default, so if it's off that means
+	 * firmware has explicitly said the CPU is not vulnerable via either
+	 * the hypercall or device tree.
+	 */
+	if (!security_ftr_enabled(SEC_FTR_STF_BARRIER))
		return PR_SPEC_NOT_AFFECTED;
-	else
-		/*
-		 * If we do have a barrier type then we are vulnerable. The
-		 * barrier is not a global or per-process mitigation, so the
-		 * only value we can report here is PR_SPEC_ENABLE, which
-		 * appears as "vulnerable" in /proc.
-		 */
-		return PR_SPEC_ENABLE;

-	return -EINVAL;
+	/*
+	 * If the system's CPU has no known barrier (see setup_stf_barrier())
+	 * then assume that the CPU is not vulnerable.
+	 */
+	if (stf_enabled_flush_types == STF_BARRIER_NONE)
+		return PR_SPEC_NOT_AFFECTED;
+
+	/*
+	 * Otherwise the CPU is vulnerable. The barrier is not a global or
+	 * per-process mitigation, so the only value that can be reported here
+	 * is PR_SPEC_ENABLE, which appears as "vulnerable" in /proc.
+	 */
+	return PR_SPEC_ENABLE;
}

int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)
+9-4
arch/powerpc/mm/book3s64/hash_native.c
···

static long native_hpte_remove(unsigned long hpte_group)
{
+	unsigned long hpte_v, flags;
	struct hash_pte *hptep;
	int i;
	int slot_offset;
-	unsigned long hpte_v;
+
+	local_irq_save(flags);

	DBG_LOW(" remove(group=%lx)\n", hpte_group);
···
		slot_offset &= 0x7;
	}

-	if (i == HPTES_PER_GROUP)
-		return -1;
+	if (i == HPTES_PER_GROUP) {
+		i = -1;
+		goto out;
+	}

	/* Invalidate the hpte. NOTE: this also unlocks it */
	release_hpte_lock();
	hptep->v = 0;
-
+out:
+	local_irq_restore(flags);
	return i;
}
+2-7
arch/riscv/kernel/cpufeature.c
···
	}

	/*
-	 * Linux requires the following extensions, so we may as well
-	 * always set them.
-	 */
-	set_bit(RISCV_ISA_EXT_ZICSR, isainfo->isa);
-	set_bit(RISCV_ISA_EXT_ZIFENCEI, isainfo->isa);
-
-	/*
	 * These ones were as they were part of the base ISA when the
	 * port & dt-bindings were upstreamed, and so can be set
	 * unconditionally where `i` is in riscv,isa on DT systems.
	 */
	if (acpi_disabled) {
+		set_bit(RISCV_ISA_EXT_ZICSR, isainfo->isa);
+		set_bit(RISCV_ISA_EXT_ZIFENCEI, isainfo->isa);
		set_bit(RISCV_ISA_EXT_ZICNTR, isainfo->isa);
		set_bit(RISCV_ISA_EXT_ZIHPM, isainfo->isa);
	}
+1-1
arch/riscv/mm/init.c
···
	 */
	crash_base = memblock_phys_alloc_range(crash_size, PMD_SIZE,
					       search_start,
-					       min(search_end, (unsigned long) SZ_4G));
+					       min(search_end, (unsigned long)(SZ_4G - 1)));
	if (crash_base == 0) {
		/* Try again without restricting region to 32bit addressible memory */
		crash_base = memblock_phys_alloc_range(crash_size, PMD_SIZE,
+1-1
arch/sparc/include/asm/cmpxchg_32.h
···
unsigned long __xchg_u32(volatile u32 *m, u32 new);
void __xchg_called_with_bad_pointer(void);

-static inline unsigned long __arch_xchg(unsigned long x, __volatile__ void * ptr, int size)
+static __always_inline unsigned long __arch_xchg(unsigned long x, __volatile__ void * ptr, int size)
{
	switch (size) {
	case 4:
···
.popsection

/*
- * The unwinder expects the last frame on the stack to always be at the same
- * offset from the end of the page, which allows it to validate the stack.
- * Calling schedule_tail() directly would break that convention because its an
- * asmlinkage function so its argument has to be pushed on the stack. This
- * wrapper creates a proper "end of stack" frame header before the call.
- */
-.pushsection .text, "ax"
-SYM_FUNC_START(schedule_tail_wrapper)
-	FRAME_BEGIN
-
-	pushl	%eax
-	call	schedule_tail
-	popl	%eax
-
-	FRAME_END
-	RET
-SYM_FUNC_END(schedule_tail_wrapper)
-.popsection
-
-/*
 * A newly forked process directly context switches into this address.
 *
 * eax: prev task we switched from
···
 * edi: kernel thread arg
 */
.pushsection .text, "ax"
-SYM_CODE_START(ret_from_fork)
-	call	schedule_tail_wrapper
+SYM_CODE_START(ret_from_fork_asm)
+	movl	%esp, %edx	/* regs */

-	testl	%ebx, %ebx
-	jnz	1f		/* kernel threads are uncommon */
+	/* return address for the stack unwinder */
+	pushl	$.Lsyscall_32_done

-2:
-	/* When we fork, we trace the syscall return in the child, too. */
-	movl	%esp, %eax
-	call	syscall_exit_to_user_mode
-	jmp	.Lsyscall_32_done
+	FRAME_BEGIN
+	/* prev already in EAX */
+	movl	%ebx, %ecx	/* fn */
+	pushl	%edi		/* fn_arg */
+	call	ret_from_fork
+	addl	$4, %esp
+	FRAME_END

-	/* kernel thread */
-1:	movl	%edi, %eax
-	CALL_NOSPEC ebx
-	/*
-	 * A kernel thread is allowed to return here after successfully
-	 * calling kernel_execve(). Exit to userspace to complete the execve()
-	 * syscall.
-	 */
-	movl	$0, PT_EAX(%esp)
-	jmp	2b
-SYM_CODE_END(ret_from_fork)
+	RET
+SYM_CODE_END(ret_from_fork_asm)
.popsection

SYM_ENTRY(__begin_SYSENTER_singlestep_region, SYM_L_GLOBAL, SYM_A_NONE)
···
	struct perf_event *leader = event->group_leader;
	struct perf_event *sibling = NULL;

+	/*
+	 * When this memload event is also the first event (no group
+	 * exists yet), then there is no aux event before it.
+	 */
+	if (leader == event)
+		return -ENODATA;
+
	if (!is_mem_loads_aux_event(leader)) {
		for_each_sibling_event(sibling, leader) {
			if (is_mem_loads_aux_event(sibling))
···
/*
 * Create a dummy function pointer reference to prevent objtool from marking
 * the function as needing to be "sealed" (i.e. ENDBR converted to NOP by
- * apply_ibt_endbr()).
+ * apply_seal_endbr()).
 */
#define IBT_NOSEAL(fname)				\
	".pushsection .discard.ibt_endbr_noseal\n\t"	\
+4
arch/x86/include/asm/nospec-branch.h
···
 * JMP_NOSPEC and CALL_NOSPEC macros can be used instead of a simple
 * indirect jmp/call which may be susceptible to the Spectre variant 2
 * attack.
+ *
+ * NOTE: these do not take kCFI into account and are thus not comparable to C
+ * indirect calls, take care when using. The target of these should be an ENDBR
+ * instruction irrespective of kCFI.
 */
.macro JMP_NOSPEC reg:req
#ifdef CONFIG_RETPOLINE
+3-1
arch/x86/include/asm/switch_to.h
···
__visible struct task_struct *__switch_to(struct task_struct *prev,
					  struct task_struct *next);

-asmlinkage void ret_from_fork(void);
+asmlinkage void ret_from_fork_asm(void);
+__visible void ret_from_fork(struct task_struct *prev, struct pt_regs *regs,
+			     int (*fn)(void *), void *fn_arg);

/*
 * This is the structure pointed to by thread.sp for an inactive task. The
+67-4
arch/x86/kernel/alternative.c
···

#ifdef CONFIG_X86_KERNEL_IBT

+static void poison_cfi(void *addr);
+
static void __init_or_module poison_endbr(void *addr, bool warn)
{
	u32 endbr, poison = gen_endbr_poison();
···

/*
 * Generated by: objtool --ibt
+ *
+ * Seal the functions for indirect calls by clobbering the ENDBR instructions
+ * and the kCFI hash value.
 */
-void __init_or_module noinline apply_ibt_endbr(s32 *start, s32 *end)
+void __init_or_module noinline apply_seal_endbr(s32 *start, s32 *end)
{
	s32 *s;
···

		poison_endbr(addr, true);
		if (IS_ENABLED(CONFIG_FINEIBT))
-			poison_endbr(addr - 16, false);
+			poison_cfi(addr - 16);
	}
}

#else

-void __init_or_module apply_ibt_endbr(s32 *start, s32 *end) { }
+void __init_or_module apply_seal_endbr(s32 *start, s32 *end) { }

#endif /* CONFIG_X86_KERNEL_IBT */
···
	return 0;
}

+static void cfi_rewrite_endbr(s32 *start, s32 *end)
+{
+	s32 *s;
+
+	for (s = start; s < end; s++) {
+		void *addr = (void *)s + *s;
+
+		poison_endbr(addr+16, false);
+	}
+}
+
/* .retpoline_sites */
static int cfi_rand_callers(s32 *start, s32 *end)
{
···
		return;

	case CFI_FINEIBT:
+		/* place the FineIBT preamble at func()-16 */
		ret = cfi_rewrite_preamble(start_cfi, end_cfi);
		if (ret)
			goto err;

+		/* rewrite the callers to target func()-16 */
		ret = cfi_rewrite_callers(start_retpoline, end_retpoline);
		if (ret)
			goto err;
+
+		/* now that nobody targets func()+0, remove ENDBR there */
+		cfi_rewrite_endbr(start_cfi, end_cfi);

		if (builtin)
			pr_info("Using FineIBT CFI\n");
···
	pr_err("Something went horribly wrong trying to rewrite the CFI implementation.\n");
}

+static inline void poison_hash(void *addr)
+{
+	*(u32 *)addr = 0;
+}
+
+static void poison_cfi(void *addr)
+{
+	switch (cfi_mode) {
+	case CFI_FINEIBT:
+		/*
+		 * __cfi_\func:
+		 *	osp nopl (%rax)
+		 *	subl	$0, %r10d
+		 *	jz	1f
+		 *	ud2
+		 * 1:	nop
+		 */
+		poison_endbr(addr, false);
+		poison_hash(addr + fineibt_preamble_hash);
+		break;
+
+	case CFI_KCFI:
+		/*
+		 * __cfi_\func:
+		 *	movl	$0, %eax
+		 *	.skip	11, 0x90
+		 */
+		poison_hash(addr + 1);
+		break;
+
+	default:
+		break;
+	}
+}
+
#else

static void __apply_fineibt(s32 *start_retpoline, s32 *end_retpoline,
			    s32 *start_cfi, s32 *end_cfi, bool builtin)
{
}
+
+#ifdef CONFIG_X86_KERNEL_IBT
+static void poison_cfi(void *addr) { }
+#endif

#endif
···
	 */
	callthunks_patch_builtin_calls();

-	apply_ibt_endbr(__ibt_endbr_seal, __ibt_endbr_seal_end);
+	/*
+	 * Seal all functions that do not have their address taken.
+	 */
+	apply_seal_endbr(__ibt_endbr_seal, __ibt_endbr_seal_end);

#ifdef CONFIG_SMP
	/* Patch to UP if other cpus not imminent. */
···
#include <linux/static_call.h>
#include <trace/events/power.h>
#include <linux/hw_breakpoint.h>
+#include <linux/entry-common.h>
#include <asm/cpu.h>
#include <asm/apic.h>
#include <linux/uaccess.h>
···
	return do_set_thread_area_64(p, ARCH_SET_FS, tls);
}

+__visible void ret_from_fork(struct task_struct *prev, struct pt_regs *regs,
+			     int (*fn)(void *), void *fn_arg)
+{
+	schedule_tail(prev);
+
+	/* Is this a kernel thread? */
+	if (unlikely(fn)) {
+		fn(fn_arg);
+		/*
+		 * A kernel thread is allowed to return here after successfully
+		 * calling kernel_execve(). Exit to userspace to complete the
+		 * execve() syscall.
+		 */
+		regs->ax = 0;
+	}
+
+	syscall_exit_to_user_mode(regs);
+}
+
int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
{
	unsigned long clone_flags = args->flags;
···
	frame = &fork_frame->frame;

	frame->bp = encode_frame_pointer(childregs);
-	frame->ret_addr = (unsigned long) ret_from_fork;
+	frame->ret_addr = (unsigned long) ret_from_fork_asm;
	p->thread.sp = (unsigned long) fork_frame;
	p->thread.io_bitmap = NULL;
	p->thread.iopl_warn = 0;
+14-20
arch/xtensa/kernel/align.S
···
 /*
  * arch/xtensa/kernel/align.S
  *
- * Handle unalignment exceptions in kernel space.
+ * Handle unalignment and load/store exceptions.
  *
  * This file is subject to the terms and conditions of the GNU General
  * Public License.  See the file "COPYING" in the main directory of
···
 #define LOAD_EXCEPTION_HANDLER
 #endif
 
-#if XCHAL_UNALIGNED_STORE_EXCEPTION || defined LOAD_EXCEPTION_HANDLER
+#if XCHAL_UNALIGNED_STORE_EXCEPTION || defined CONFIG_XTENSA_LOAD_STORE
+#define STORE_EXCEPTION_HANDLER
+#endif
+
+#if defined LOAD_EXCEPTION_HANDLER || defined STORE_EXCEPTION_HANDLER
 #define ANY_EXCEPTION_HANDLER
 #endif
 
-#if XCHAL_HAVE_WINDOWED
+#if XCHAL_HAVE_WINDOWED && defined CONFIG_MMU
 #define UNALIGNED_USER_EXCEPTION
 #endif
-
-/* First-level exception handler for unaligned exceptions.
- *
- * Note: This handler works only for kernel exceptions. Unaligned user
- *       access should get a seg fault.
- */
 
 /* Big and little endian 16-bit values are located in
  * different halves of a register. HWORD_START helps to
···
 #ifdef ANY_EXCEPTION_HANDLER
 ENTRY(fast_unaligned)
 
-#if XCHAL_UNALIGNED_LOAD_EXCEPTION || XCHAL_UNALIGNED_STORE_EXCEPTION
-
 	call0	.Lsave_and_load_instruction
 
 	/* Analyze the instruction (load or store?). */
···
 	/* 'store indicator bit' not set, jump */
 	_bbci.l	a4, OP1_SI_BIT + INSN_OP1, .Lload
 
-#endif
-#if XCHAL_UNALIGNED_STORE_EXCEPTION
+#ifdef STORE_EXCEPTION_HANDLER
 
 	/* Store: Jump to table entry to get the value in the source register.*/
···
 	addx8	a5, a6, a5
 	jx	a5			# jump into table
 #endif
-#if XCHAL_UNALIGNED_LOAD_EXCEPTION
+#ifdef LOAD_EXCEPTION_HANDLER
 
 	/* Load: Load memory address. */
···
 	mov	a14, a3		;	_j .Lexit;	.align 8
 	mov	a15, a3		;	_j .Lexit;	.align 8
 #endif
-#if XCHAL_UNALIGNED_STORE_EXCEPTION
+#ifdef STORE_EXCEPTION_HANDLER
 .Lstore_table:
 	l32i	a3, a2, PT_AREG0;	_j .Lstore_w;	.align 8
 	mov	a3, a1;			_j .Lstore_w;	.align 8	# fishy??
···
 	mov	a3, a15		;	_j .Lstore_w;	.align 8
 #endif
 
-#ifdef ANY_EXCEPTION_HANDLER
 	/* We cannot handle this exception. */
 
 	.extern _kernel_exception
···
 2:	movi	a0, _user_exception
 	jx	a0
-#endif
-#if XCHAL_UNALIGNED_STORE_EXCEPTION
+
+#ifdef STORE_EXCEPTION_HANDLER
 
 	# a7: instruction pointer, a4: instruction, a3: value
 .Lstore_w:
···
 	s32i	a6, a4, 4
 #endif
 #endif
-#ifdef ANY_EXCEPTION_HANDLER
+
 .Lexit:
 #if XCHAL_HAVE_LOOPS
 	rsr	a4, lend		# check if we reached LEND
···
 	__src_b	a4, a4, a5	# a4 has the instruction
 
 	ret
-#endif
+
 ENDPROC(fast_unaligned)
 
 ENTRY(fast_unaligned_fixup)
arch/xtensa/platforms/iss/network.c
···
 
 	init += sizeof(TRANSPORT_TUNTAP_NAME) - 1;
 	if (*init == ',') {
-		rem = split_if_spec(init + 1, &mac_str, &dev_name);
+		rem = split_if_spec(init + 1, &mac_str, &dev_name, NULL);
 		if (rem != NULL) {
 			pr_err("%s: extra garbage on specification : '%s'\n",
 			       dev->name, rem);
···
 		rtnl_unlock();
 		pr_err("%s: error registering net device!\n", dev->name);
 		platform_device_unregister(&lp->pdev);
+		/* dev is freed by the iss_net_pdev_release callback */
 		return;
 	}
 	rtnl_unlock();
+10-2
block/blk-crypto-profile.c
···
 	unsigned int slot_hashtable_size;
 
 	memset(profile, 0, sizeof(*profile));
-	init_rwsem(&profile->lock);
+
+	/*
+	 * profile->lock of an underlying device can nest inside profile->lock
+	 * of a device-mapper device, so use a dynamic lock class to avoid
+	 * false-positive lockdep reports.
+	 */
+	lockdep_register_key(&profile->lockdep_key);
+	__init_rwsem(&profile->lock, "&profile->lock", &profile->lockdep_key);
 
 	if (num_slots == 0)
 		return 0;
···
 	profile->slots = kvcalloc(num_slots, sizeof(profile->slots[0]),
 				  GFP_KERNEL);
 	if (!profile->slots)
-		return -ENOMEM;
+		goto err_destroy;
 
 	profile->num_slots = num_slots;
···
 {
 	if (!profile)
 		return;
+	lockdep_unregister_key(&profile->lockdep_key);
 	kvfree(profile->slot_hashtable);
 	kvfree_sensitive(profile->slots,
 			 sizeof(profile->slots[0]) * profile->num_slots);
block/blk-zoned.c
···
 	unsigned long	*conv_zones_bitmap;
 	unsigned long	*seq_zones_wlock;
 	unsigned int	nr_zones;
-	sector_t	zone_sectors;
 	sector_t	sector;
 };
···
 	struct gendisk *disk = args->disk;
 	struct request_queue *q = disk->queue;
 	sector_t capacity = get_capacity(disk);
+	sector_t zone_sectors = q->limits.chunk_sectors;
+
+	/* Check for bad zones and holes in the zone report */
+	if (zone->start != args->sector) {
+		pr_warn("%s: Zone gap at sectors %llu..%llu\n",
+			disk->disk_name, args->sector, zone->start);
+		return -ENODEV;
+	}
+
+	if (zone->start >= capacity || !zone->len) {
+		pr_warn("%s: Invalid zone start %llu, length %llu\n",
+			disk->disk_name, zone->start, zone->len);
+		return -ENODEV;
+	}
 
 	/*
 	 * All zones must have the same size, with the exception on an eventual
 	 * smaller last zone.
 	 */
-	if (zone->start == 0) {
-		if (zone->len == 0 || !is_power_of_2(zone->len)) {
-			pr_warn("%s: Invalid zoned device with non power of two zone size (%llu)\n",
-				disk->disk_name, zone->len);
-			return -ENODEV;
-		}
-
-		args->zone_sectors = zone->len;
-		args->nr_zones = (capacity + zone->len - 1) >> ilog2(zone->len);
-	} else if (zone->start + args->zone_sectors < capacity) {
-		if (zone->len != args->zone_sectors) {
+	if (zone->start + zone->len < capacity) {
+		if (zone->len != zone_sectors) {
 			pr_warn("%s: Invalid zoned device with non constant zone size\n",
 				disk->disk_name);
 			return -ENODEV;
 		}
-	} else {
-		if (zone->len > args->zone_sectors) {
-			pr_warn("%s: Invalid zoned device with larger last zone size\n",
-				disk->disk_name);
-			return -ENODEV;
-		}
-	}
-
-	/* Check for holes in the zone report */
-	if (zone->start != args->sector) {
-		pr_warn("%s: Zone gap at sectors %llu..%llu\n",
-			disk->disk_name, args->sector, zone->start);
+	} else if (zone->len > zone_sectors) {
+		pr_warn("%s: Invalid zoned device with larger last zone size\n",
+			disk->disk_name);
 		return -ENODEV;
 	}
···
 * @disk:	Target disk
 * @update_driver_data:	Callback to update driver data on the frozen disk
 *
- * Helper function for low-level device drivers to (re) allocate and initialize
- * a disk request queue zone bitmaps. This functions should normally be called
- * within the disk ->revalidate method for blk-mq based drivers. For BIO based
- * drivers only q->nr_zones needs to be updated so that the sysfs exposed value
- * is correct.
+ * Helper function for low-level device drivers to check and (re) allocate and
+ * initialize a disk request queue zone bitmaps. This functions should normally
+ * be called within the disk ->revalidate method for blk-mq based drivers.
+ * Before calling this function, the device driver must already have set the
+ * device zone size (chunk_sector limit) and the max zone append limit.
+ * For BIO based drivers, this function cannot be used. BIO based device drivers
+ * only need to set disk->nr_zones so that the sysfs exposed value is correct.
 * If the @update_driver_data callback function is not NULL, the callback is
 * executed with the device request queue frozen after all zones have been
 * checked.
···
 			      void (*update_driver_data)(struct gendisk *disk))
 {
 	struct request_queue *q = disk->queue;
-	struct blk_revalidate_zone_args args = {
-		.disk		= disk,
-	};
+	sector_t zone_sectors = q->limits.chunk_sectors;
+	sector_t capacity = get_capacity(disk);
+	struct blk_revalidate_zone_args args = { };
 	unsigned int noio_flag;
 	int ret;
···
 	if (WARN_ON_ONCE(!queue_is_mq(q)))
 		return -EIO;
 
-	if (!get_capacity(disk))
-		return -EIO;
+	if (!capacity)
+		return -ENODEV;
+
+	/*
+	 * Checks that the device driver indicated a valid zone size and that
+	 * the max zone append limit is set.
+	 */
+	if (!zone_sectors || !is_power_of_2(zone_sectors)) {
+		pr_warn("%s: Invalid non power of two zone size (%llu)\n",
+			disk->disk_name, zone_sectors);
+		return -ENODEV;
+	}
+
+	if (!q->limits.max_zone_append_sectors) {
+		pr_warn("%s: Invalid 0 maximum zone append limit\n",
+			disk->disk_name);
+		return -ENODEV;
+	}
 
 	/*
 	 * Ensure that all memory allocations in this context are done as if
 	 * GFP_NOIO was specified.
 	 */
+	args.disk = disk;
+	args.nr_zones = (capacity + zone_sectors - 1) >> ilog2(zone_sectors);
 	noio_flag = memalloc_noio_save();
 	ret = disk->fops->report_zones(disk, 0, UINT_MAX,
 				       blk_revalidate_zone_cb, &args);
···
 	 * If zones where reported, make sure that the entire disk capacity
 	 * has been checked.
 	 */
-	if (ret > 0 && args.sector != get_capacity(disk)) {
+	if (ret > 0 && args.sector != capacity) {
 		pr_warn("%s: Missing zones from sector %llu\n",
 			disk->disk_name, args.sector);
 		ret = -ENODEV;
···
 	 */
 	blk_mq_freeze_queue(q);
 	if (ret > 0) {
-		blk_queue_chunk_sectors(q, args.zone_sectors);
 		disk->nr_zones = args.nr_zones;
 		swap(disk->seq_zones_wlock, args.seq_zones_wlock);
 		swap(disk->conv_zones_bitmap, args.conv_zones_bitmap);
+1-1
block/mq-deadline.c
···
 	 * zoned writes, start searching from the start of a zone.
 	 */
 	if (blk_rq_is_seq_zoned_write(rq))
-		pos -= round_down(pos, rq->q->limits.chunk_sectors);
+		pos = round_down(pos, rq->q->limits.chunk_sectors);
 
 	while (node) {
 		rq = rb_entry_rq(node);
drivers/accel/ivpu/ivpu_hw_mtl.c
···
 	vdev->wa.punit_disabled = ivpu_is_fpga(vdev);
 	vdev->wa.clear_runtime_mem = false;
 	vdev->wa.d3hot_after_power_off = true;
+
+	if (ivpu_device_id(vdev) == PCI_DEVICE_ID_MTL && ivpu_revision(vdev) < 4)
+		vdev->wa.interrupt_clear_with_0 = true;
 }
 
 static void ivpu_hw_timeouts_init(struct ivpu_device *vdev)
···
 	REGB_WR32(MTL_BUTTRESS_GLOBAL_INT_MASK, 0x1);
 	REGB_WR32(MTL_BUTTRESS_LOCAL_INT_MASK, BUTTRESS_IRQ_DISABLE_MASK);
 	REGV_WR64(MTL_VPU_HOST_SS_ICB_ENABLE_0, 0x0ull);
-	REGB_WR32(MTL_VPU_HOST_SS_FW_SOC_IRQ_EN, 0x0);
+	REGV_WR32(MTL_VPU_HOST_SS_FW_SOC_IRQ_EN, 0x0);
 }
 
 static void ivpu_hw_mtl_irq_wdt_nce_handler(struct ivpu_device *vdev)
···
 		schedule_recovery = true;
 	}
 
-	/*
-	 * Clear local interrupt status by writing 0 to all bits.
-	 * This must be done after interrupts are cleared at the source.
-	 * Writing 1 triggers an interrupt, so we can't perform read update write.
-	 */
-	REGB_WR32(MTL_BUTTRESS_INTERRUPT_STAT, 0x0);
+	/* This must be done after interrupts are cleared at the source. */
+	if (IVPU_WA(interrupt_clear_with_0))
+		/*
+		 * Writing 1 triggers an interrupt, so we can't perform read update write.
+		 * Clear local interrupt status by writing 0 to all bits.
+		 */
+		REGB_WR32(MTL_BUTTRESS_INTERRUPT_STAT, 0x0);
+	else
+		REGB_WR32(MTL_BUTTRESS_INTERRUPT_STAT, status);
 
 	/* Re-enable global interrupt */
 	REGB_WR32(MTL_BUTTRESS_GLOBAL_INT_MASK, 0x0);
+1-1
drivers/base/regmap/regmap-irq.c
···
 	if (!d->config_buf)
 		goto err_alloc;
 
-	for (i = 0; i < chip->num_config_regs; i++) {
+	for (i = 0; i < chip->num_config_bases; i++) {
 		d->config_buf[i] = kcalloc(chip->num_config_regs,
 					   sizeof(**d->config_buf),
 					   GFP_KERNEL);
+5-11
drivers/block/null_blk/zoned.c
···
 	disk_set_zoned(nullb->disk, BLK_ZONED_HM);
 	blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q);
 	blk_queue_required_elevator_features(q, ELEVATOR_F_ZBD_SEQ_WRITE);
-
-	if (queue_is_mq(q)) {
-		int ret = blk_revalidate_disk_zones(nullb->disk, NULL);
-
-		if (ret)
-			return ret;
-	} else {
-		blk_queue_chunk_sectors(q, dev->zone_size_sects);
-		nullb->disk->nr_zones = bdev_nr_zones(nullb->disk->part0);
-	}
-
+	blk_queue_chunk_sectors(q, dev->zone_size_sects);
+	nullb->disk->nr_zones = bdev_nr_zones(nullb->disk->part0);
 	blk_queue_max_zone_append_sectors(q, dev->zone_size_sects);
 	disk_set_max_open_zones(nullb->disk, dev->zone_max_open);
 	disk_set_max_active_zones(nullb->disk, dev->zone_max_active);
+
+	if (queue_is_mq(q))
+		return blk_revalidate_disk_zones(nullb->disk, NULL);
 
 	return 0;
 }
drivers/char/tpm/tpm_crb.c
···
 	u32 rsp_size;
 	int ret;
 
-	INIT_LIST_HEAD(&acpi_resource_list);
-	ret = acpi_dev_get_resources(device, &acpi_resource_list,
-				     crb_check_resource, iores_array);
-	if (ret < 0)
-		return ret;
-	acpi_dev_free_resource_list(&acpi_resource_list);
-
-	/* Pluton doesn't appear to define ACPI memory regions */
+	/*
+	 * Pluton sometimes does not define ACPI memory regions.
+	 * Mapping is then done in crb_map_pluton
+	 */
 	if (priv->sm != ACPI_TPM2_COMMAND_BUFFER_WITH_PLUTON) {
+		INIT_LIST_HEAD(&acpi_resource_list);
+		ret = acpi_dev_get_resources(device, &acpi_resource_list,
+					     crb_check_resource, iores_array);
+		if (ret < 0)
+			return ret;
+		acpi_dev_free_resource_list(&acpi_resource_list);
+
 		if (resource_type(iores_array) != IORESOURCE_MEM) {
 			dev_err(dev, FW_BUG "TPM2 ACPI table does not define a memory resource\n");
 			return -EINVAL;
drivers/cpufreq/sparc-us2e-cpufreq.c
···
 	return smp_call_function_single(cpu, __us2e_freq_target, &index, 1);
 }
 
-static int __init us2e_freq_cpu_init(struct cpufreq_policy *policy)
+static int us2e_freq_cpu_init(struct cpufreq_policy *policy)
 {
 	unsigned int cpu = policy->cpu;
 	unsigned long clock_tick = sparc64_get_clock_tick(cpu) / 1000;
+1-1
drivers/cpufreq/sparc-us3-cpufreq.c
···
 	return smp_call_function_single(cpu, update_safari_cfg, &new_bits, 1);
 }
 
-static int __init us3_freq_cpu_init(struct cpufreq_policy *policy)
+static int us3_freq_cpu_init(struct cpufreq_policy *policy)
 {
 	unsigned int cpu = policy->cpu;
 	unsigned long clock_tick = sparc64_get_clock_tick(cpu) / 1000;
+22-4
drivers/dma-buf/dma-fence-unwrap.c
···
 {
 	struct dma_fence_array *result;
 	struct dma_fence *tmp, **array;
+	ktime_t timestamp;
 	unsigned int i;
 	size_t count;
 
 	count = 0;
+	timestamp = ns_to_ktime(0);
 	for (i = 0; i < num_fences; ++i) {
-		dma_fence_unwrap_for_each(tmp, &iter[i], fences[i])
-			if (!dma_fence_is_signaled(tmp))
+		dma_fence_unwrap_for_each(tmp, &iter[i], fences[i]) {
+			if (!dma_fence_is_signaled(tmp)) {
 				++count;
+			} else if (test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT,
+					    &tmp->flags)) {
+				if (ktime_after(tmp->timestamp, timestamp))
+					timestamp = tmp->timestamp;
+			} else {
+				/*
+				 * Use the current time if the fence is
+				 * currently signaling.
+				 */
+				timestamp = ktime_get();
+			}
+		}
 	}
 
+	/*
+	 * If we couldn't find a pending fence just return a private signaled
+	 * fence with the timestamp of the last signaled one.
+	 */
 	if (count == 0)
-		return dma_fence_get_stub();
+		return dma_fence_allocate_private_stub(timestamp);
 
 	array = kmalloc_array(count, sizeof(*array), GFP_KERNEL);
 	if (!array)
···
 	} while (tmp);
 
 	if (count == 0) {
-		tmp = dma_fence_get_stub();
+		tmp = dma_fence_allocate_private_stub(ktime_get());
 		goto return_tmp;
 	}
 
+4-3
drivers/dma-buf/dma-fence.c
···
 
 /**
  * dma_fence_allocate_private_stub - return a private, signaled fence
+ * @timestamp: timestamp when the fence was signaled
  *
  * Return a newly allocated and signaled stub fence.
  */
-struct dma_fence *dma_fence_allocate_private_stub(void)
+struct dma_fence *dma_fence_allocate_private_stub(ktime_t timestamp)
 {
 	struct dma_fence *fence;
 
 	fence = kzalloc(sizeof(*fence), GFP_KERNEL);
 	if (fence == NULL)
-		return ERR_PTR(-ENOMEM);
+		return NULL;
 
 	dma_fence_init(fence,
 		       &dma_fence_stub_ops,
···
 	set_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
 		&fence->flags);
 
-	dma_fence_signal(fence);
+	dma_fence_signal_timestamp(fence, timestamp);
 
 	return fence;
 }
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
···
 		if (!attachment->is_mapped)
 			continue;
 
+		if (attachment->bo_va->base.bo->tbo.pin_count)
+			continue;
+
 		kfd_mem_dmaunmap_attachment(mem, attachment);
 		ret = update_gpuvm_pte(mem, attachment, &sync_obj);
 		if (ret) {
+19
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
···
 	return true;
 }
 
+/*
+ * Intel hosts such as Raptor Lake and Sapphire Rapids don't support dynamic
+ * speed switching. Until we have confirmation from Intel that a specific host
+ * supports it, it's safer that we keep it disabled for all.
+ *
+ * https://edc.intel.com/content/www/us/en/design/products/platforms/details/raptor-lake-s/13th-generation-core-processors-datasheet-volume-1-of-2/005/pci-express-support/
+ * https://gitlab.freedesktop.org/drm/amd/-/issues/2663
+ */
+bool amdgpu_device_pcie_dynamic_switching_supported(void)
+{
+#if IS_ENABLED(CONFIG_X86)
+	struct cpuinfo_x86 *c = &cpu_data(0);
+
+	if (c->x86_vendor == X86_VENDOR_INTEL)
+		return false;
+#endif
+	return true;
+}
+
 /**
  * amdgpu_device_should_use_aspm - check if the device should program ASPM
  *
drivers/gpu/drm/armada/armada_fbdev.c
···
 		goto err_drm_client_init;
 	}
 
-	ret = armada_fbdev_client_hotplug(&fbh->client);
-	if (ret)
-		drm_dbg_kms(dev, "client hotplug ret=%d\n", ret);
-
 	drm_client_register(&fbh->client);
 
 	return;
+5-4
drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
···
 	/* Control for TMDS Bit Period/TMDS Clock-Period Ratio */
 	if (dw_hdmi_support_scdc(hdmi, display)) {
 		if (mtmdsclock > HDMI14_MAX_TMDSCLK)
-			drm_scdc_set_high_tmds_clock_ratio(&hdmi->connector, 1);
+			drm_scdc_set_high_tmds_clock_ratio(hdmi->curr_conn, 1);
 		else
-			drm_scdc_set_high_tmds_clock_ratio(&hdmi->connector, 0);
+			drm_scdc_set_high_tmds_clock_ratio(hdmi->curr_conn, 0);
 	}
 }
 EXPORT_SYMBOL_GPL(dw_hdmi_set_high_tmds_clock_ratio);
···
 			min_t(u8, bytes, SCDC_MIN_SOURCE_VERSION));
 
 		/* Enabled Scrambling in the Sink */
-		drm_scdc_set_scrambling(&hdmi->connector, 1);
+		drm_scdc_set_scrambling(hdmi->curr_conn, 1);
 
 		/*
 		 * To activate the scrambler feature, you must ensure
···
 		hdmi_writeb(hdmi, 0, HDMI_FC_SCRAMBLER_CTRL);
 		hdmi_writeb(hdmi, (u8)~HDMI_MC_SWRSTZ_TMDSSWRST_REQ,
 			    HDMI_MC_SWRSTZ);
-		drm_scdc_set_scrambling(&hdmi->connector, 0);
+		drm_scdc_set_scrambling(hdmi->curr_conn, 0);
 	}
 }
···
 	hdmi->bridge.ops = DRM_BRIDGE_OP_DETECT | DRM_BRIDGE_OP_EDID
 			 | DRM_BRIDGE_OP_HPD;
 	hdmi->bridge.interlace_allowed = true;
+	hdmi->bridge.ddc = hdmi->ddc;
 #ifdef CONFIG_OF
 	hdmi->bridge.of_node = pdev->dev.of_node;
 #endif
+22-13
drivers/gpu/drm/bridge/ti-sn65dsi86.c
···
 * @pwm_refclk_freq: Cache for the reference clock input to the PWM.
 */
 struct ti_sn65dsi86 {
-	struct auxiliary_device		bridge_aux;
-	struct auxiliary_device		gpio_aux;
-	struct auxiliary_device		aux_aux;
-	struct auxiliary_device		pwm_aux;
+	struct auxiliary_device		*bridge_aux;
+	struct auxiliary_device		*gpio_aux;
+	struct auxiliary_device		*aux_aux;
+	struct auxiliary_device		*pwm_aux;
 
 	struct device			*dev;
 	struct regmap			*regmap;
···
 	auxiliary_device_delete(data);
 }
 
-/*
- * AUX bus docs say that a non-NULL release is mandatory, but it makes no
- * sense for the model used here where all of the aux devices are allocated
- * in the single shared structure. We'll use this noop as a workaround.
- */
-static void ti_sn65dsi86_noop(struct device *dev) {}
+static void ti_sn65dsi86_aux_device_release(struct device *dev)
+{
+	struct auxiliary_device *aux = container_of(dev, struct auxiliary_device, dev);
+
+	kfree(aux);
+}
 
 static int ti_sn65dsi86_add_aux_device(struct ti_sn65dsi86 *pdata,
-				       struct auxiliary_device *aux,
+				       struct auxiliary_device **aux_out,
 				       const char *name)
 {
 	struct device *dev = pdata->dev;
+	struct auxiliary_device *aux;
 	int ret;
+
+	aux = kzalloc(sizeof(*aux), GFP_KERNEL);
+	if (!aux)
+		return -ENOMEM;
 
 	aux->name = name;
 	aux->dev.parent = dev;
-	aux->dev.release = ti_sn65dsi86_noop;
+	aux->dev.release = ti_sn65dsi86_aux_device_release;
 	device_set_of_node_from_dev(&aux->dev, dev);
 	ret = auxiliary_device_init(aux);
-	if (ret)
+	if (ret) {
+		kfree(aux);
 		return ret;
+	}
 	ret = devm_add_action_or_reset(dev, ti_sn65dsi86_uninit_aux, aux);
 	if (ret)
 		return ret;
···
 	if (ret)
 		return ret;
 	ret = devm_add_action_or_reset(dev, ti_sn65dsi86_delete_aux, aux);
+	if (!ret)
+		*aux_out = aux;
 
 	return ret;
 }
+21
drivers/gpu/drm/drm_client.c
···
 * drm_client_register() it is no longer permissible to call drm_client_release()
 * directly (outside the unregister callback), instead cleanup will happen
 * automatically on driver unload.
+ *
+ * Registering a client generates a hotplug event that allows the client
+ * to set up its display from pre-existing outputs. The client must have
+ * initialized its state to be able to handle the hotplug event successfully.
 */
 void drm_client_register(struct drm_client_dev *client)
 {
 	struct drm_device *dev = client->dev;
+	int ret;
 
 	mutex_lock(&dev->clientlist_mutex);
 	list_add(&client->list, &dev->clientlist);
+
+	if (client->funcs && client->funcs->hotplug) {
+		/*
+		 * Perform an initial hotplug event to pick up the
+		 * display configuration for the client. This step
+		 * has to be performed *after* registering the client
+		 * in the list of clients, or a concurrent hotplug
+		 * event might be lost; leaving the display off.
+		 *
+		 * Hold the clientlist_mutex as for a regular hotplug
+		 * event.
+		 */
+		ret = client->funcs->hotplug(client);
+		if (ret)
+			drm_dbg_kms(dev, "client hotplug ret=%d\n", ret);
+	}
 	mutex_unlock(&dev->clientlist_mutex);
 }
 EXPORT_SYMBOL(drm_client_register);
+1-5
drivers/gpu/drm/drm_fbdev_dma.c
···
 * drm_fbdev_dma_setup() - Setup fbdev emulation for GEM DMA helpers
 * @dev: DRM device
 * @preferred_bpp: Preferred bits per pixel for the device.
- *                 @dev->mode_config.preferred_depth is used if this is zero.
+ *                 32 is used if this is zero.
 *
 * This function sets up fbdev emulation for GEM DMA drivers that support
 * dumb buffers with a virtual address and that can be mmap'ed.
···
 		drm_err(dev, "Failed to register client: %d\n", ret);
 		goto err_drm_client_init;
 	}
-
-	ret = drm_fbdev_dma_client_hotplug(&fb_helper->client);
-	if (ret)
-		drm_dbg_kms(dev, "client hotplug ret=%d\n", ret);
 
 	drm_client_register(&fb_helper->client);
 
-4
drivers/gpu/drm/drm_fbdev_generic.c
···
 		goto err_drm_client_init;
 	}
 
-	ret = drm_fbdev_generic_client_hotplug(&fb_helper->client);
-	if (ret)
-		drm_dbg_kms(dev, "client hotplug ret=%d\n", ret);
-
 	drm_client_register(&fb_helper->client);
 
 	return;
+3-3
drivers/gpu/drm/drm_syncobj.c
···
 */
 static int drm_syncobj_assign_null_handle(struct drm_syncobj *syncobj)
 {
-	struct dma_fence *fence = dma_fence_allocate_private_stub();
+	struct dma_fence *fence = dma_fence_allocate_private_stub(ktime_get());
 
-	if (IS_ERR(fence))
-		return PTR_ERR(fence);
+	if (!fence)
+		return -ENOMEM;
 
 	drm_syncobj_replace_fence(syncobj, fence);
 	dma_fence_put(fence);
-4
drivers/gpu/drm/exynos/exynos_drm_fbdev.c
···
 	if (ret)
 		goto err_drm_client_init;
 
-	ret = exynos_drm_fbdev_client_hotplug(&fb_helper->client);
-	if (ret)
-		drm_dbg_kms(dev, "client hotplug ret=%d\n", ret);
-
 	drm_client_register(&fb_helper->client);
 
 	return;
-4
drivers/gpu/drm/gma500/fbdev.c
···
 		goto err_drm_fb_helper_unprepare;
 	}
 
-	ret = psb_fbdev_client_hotplug(&fb_helper->client);
-	if (ret)
-		drm_dbg_kms(dev, "client hotplug ret=%d\n", ret);
-
 	drm_client_register(&fb_helper->client);
 
 	return;
drivers/gpu/drm/i915/gt/gen8_ppgtt.c
···
 	if (unlikely(flags & PTE_READ_ONLY))
 		pte &= ~GEN8_PAGE_RW;
 
-	if (flags & PTE_LM)
-		pte |= GEN12_PPGTT_PTE_LM;
-
 	/*
 	 * For pre-gen12 platforms pat_index is the same as enum
 	 * i915_cache_level, so the switch-case here is still valid.
+1-1
drivers/gpu/drm/i915/gt/intel_gtt.c
···
 	if (IS_ERR(obj))
 		return ERR_CAST(obj);
 
-	i915_gem_object_set_cache_coherency(obj, I915_CACHING_CACHED);
+	i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC);
 
 	vma = i915_vma_instance(obj, vm, NULL);
 	if (IS_ERR(vma)) {
drivers/gpu/drm/radeon/radeon_fbdev.c
···
 		goto err_drm_client_init;
 	}
 
-	ret = radeon_fbdev_client_hotplug(&fb_helper->client);
-	if (ret)
-		drm_dbg_kms(rdev->ddev, "client hotplug ret=%d\n", ret);
-
 	drm_client_register(&fb_helper->client);
 
 	return;
+33-8
drivers/gpu/drm/scheduler/sched_entity.c
···
 {
 	struct drm_sched_job *job = container_of(cb, struct drm_sched_job,
 						 finish_cb);
-	int r;
+	unsigned long index;
 
 	dma_fence_put(f);
 
 	/* Wait for all dependencies to avoid data corruptions */
-	while (!xa_empty(&job->dependencies)) {
-		f = xa_erase(&job->dependencies, job->last_dependency++);
-		r = dma_fence_add_callback(f, &job->finish_cb,
-					   drm_sched_entity_kill_jobs_cb);
-		if (!r)
+	xa_for_each(&job->dependencies, index, f) {
+		struct drm_sched_fence *s_fence = to_drm_sched_fence(f);
+
+		if (s_fence && f == &s_fence->scheduled) {
+			/* The dependencies array had a reference on the scheduled
+			 * fence, and the finished fence refcount might have
+			 * dropped to zero. Use dma_fence_get_rcu() so we get
+			 * a NULL fence in that case.
+			 */
+			f = dma_fence_get_rcu(&s_fence->finished);
+
+			/* Now that we have a reference on the finished fence,
+			 * we can release the reference the dependencies array
+			 * had on the scheduled fence.
+			 */
+			dma_fence_put(&s_fence->scheduled);
+		}
+
+		xa_erase(&job->dependencies, index);
+		if (f && !dma_fence_add_callback(f, &job->finish_cb,
+						 drm_sched_entity_kill_jobs_cb))
 			return;
 
 		dma_fence_put(f);
···
 drm_sched_job_dependency(struct drm_sched_job *job,
 			 struct drm_sched_entity *entity)
 {
-	if (!xa_empty(&job->dependencies))
-		return xa_erase(&job->dependencies, job->last_dependency++);
+	struct dma_fence *f;
+
+	/* We keep the fence around, so we can iterate over all dependencies
+	 * in drm_sched_entity_kill_jobs_cb() to ensure all deps are signaled
+	 * before killing the job.
+	 */
+	f = xa_load(&job->dependencies, job->last_dependency);
+	if (f) {
+		job->last_dependency++;
+		return dma_fence_get(f);
+	}
 
 	if (job->sched->ops->prepare_job)
 		return job->sched->ops->prepare_job(job, entity);
+25-15
drivers/gpu/drm/scheduler/sched_fence.c
···
 	kmem_cache_destroy(sched_fence_slab);
 }
 
-void drm_sched_fence_scheduled(struct drm_sched_fence *fence)
+static void drm_sched_fence_set_parent(struct drm_sched_fence *s_fence,
+				       struct dma_fence *fence)
 {
+	/*
+	 * smp_store_release() to ensure another thread racing us
+	 * in drm_sched_fence_set_deadline_finished() sees the
+	 * fence's parent set before test_bit()
+	 */
+	smp_store_release(&s_fence->parent, dma_fence_get(fence));
+	if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT,
+		     &s_fence->finished.flags))
+		dma_fence_set_deadline(fence, s_fence->deadline);
+}
+
+void drm_sched_fence_scheduled(struct drm_sched_fence *fence,
+			       struct dma_fence *parent)
+{
+	/* Set the parent before signaling the scheduled fence, such that,
+	 * any waiter expecting the parent to be filled after the job has
+	 * been scheduled (which is the case for drivers delegating waits
+	 * to some firmware) doesn't have to busy wait for parent to show
+	 * up.
+	 */
+	if (!IS_ERR_OR_NULL(parent))
+		drm_sched_fence_set_parent(fence, parent);
+
 	dma_fence_signal(&fence->scheduled);
 }
 
···
 	return NULL;
 }
 EXPORT_SYMBOL(to_drm_sched_fence);
-
-void drm_sched_fence_set_parent(struct drm_sched_fence *s_fence,
-				struct dma_fence *fence)
-{
-	/*
-	 * smp_store_release() to ensure another thread racing us
-	 * in drm_sched_fence_set_deadline_finished() sees the
-	 * fence's parent set before test_bit()
-	 */
-	smp_store_release(&s_fence->parent, dma_fence_get(fence));
-	if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT,
-		     &s_fence->finished.flags))
-		dma_fence_set_deadline(fence, s_fence->deadline);
-}
 
 struct drm_sched_fence *drm_sched_fence_alloc(struct drm_sched_entity *entity,
 					      void *owner)
+1-2
drivers/gpu/drm/scheduler/sched_main.c
···
 		trace_drm_run_job(sched_job, entity);
 		fence = sched->ops->run_job(sched_job);
 		complete_all(&entity->entity_idle);
-		drm_sched_fence_scheduled(s_fence);
+		drm_sched_fence_scheduled(s_fence, fence);
 
 		if (!IS_ERR_OR_NULL(fence)) {
-			drm_sched_fence_set_parent(s_fence, fence);
 			/* Drop for original kref_init of the fence */
 			dma_fence_put(fence);
 
-4
drivers/gpu/drm/tegra/fbdev.c
···
 	if (ret)
 		goto err_drm_client_init;
 
-	ret = tegra_fbdev_client_hotplug(&helper->client);
-	if (ret)
-		drm_dbg_kms(dev, "client hotplug ret=%d\n", ret);
-
 	drm_client_register(&helper->client);
 
 	return;
+18-11
drivers/gpu/drm/ttm/ttm_bo.c
···
 		goto out;
 	}
 
-bounce:
-	ret = ttm_bo_handle_move_mem(bo, evict_mem, true, ctx, &hop);
-	if (ret == -EMULTIHOP) {
+	do {
+		ret = ttm_bo_handle_move_mem(bo, evict_mem, true, ctx, &hop);
+		if (ret != -EMULTIHOP)
+			break;
+
 		ret = ttm_bo_bounce_temp_buffer(bo, &evict_mem, ctx, &hop);
-		if (ret) {
-			if (ret != -ERESTARTSYS && ret != -EINTR)
-				pr_err("Buffer eviction failed\n");
-			ttm_resource_free(bo, &evict_mem);
-			goto out;
-		}
-		/* try and move to final place now. */
-		goto bounce;
+	} while (!ret);
+
+	if (ret) {
+		ttm_resource_free(bo, &evict_mem);
+		if (ret != -ERESTARTSYS && ret != -EINTR)
+			pr_err("Buffer eviction failed\n");
 	}
 out:
 	return ret;
···
 			bool *locked, bool *busy)
 {
 	bool ret = false;
+
+	if (bo->pin_count) {
+		*locked = false;
+		*busy = false;
+		return false;
+	}
 
 	if (bo->base.resv == ctx->resv) {
 		dma_resv_assert_held(bo->base.resv);
···
 		ret = ttm_bo_handle_move_mem(bo, evict_mem, true, &ctx, &hop);
 		if (unlikely(ret != 0)) {
 			WARN(ret == -EMULTIHOP, "Unexpected multihop in swaput - likely driver bug.\n");
+			ttm_resource_free(bo, &evict_mem);
 			goto out;
 		}
 	}
drivers/gpu/drm/ttm/ttm_resource.c (+4 -1)
@@ -86 +86 @@
 			       struct ttm_resource *res)
 {
 	if (pos->last != res) {
+		if (pos->first == res)
+			pos->first = list_next_entry(res, lru);
 		list_move(&res->lru, &pos->last->lru);
 		pos->last = res;
 	}
@@ -113 +111 @@
 {
 	struct ttm_lru_bulk_move_pos *pos = ttm_lru_bulk_move_pos(bulk, res);
 
-	if (unlikely(pos->first == res && pos->last == res)) {
+	if (unlikely(WARN_ON(!pos->first || !pos->last) ||
+		     (pos->first == res && pos->last == res))) {
 		pos->first = NULL;
 		pos->last = NULL;
 	} else if (pos->first == res) {
drivers/iommu/iommu-sva.c (+2 -1)
@@ -34 +34 @@
 	}
 
 	ret = ida_alloc_range(&iommu_global_pasid_ida, min, max, GFP_KERNEL);
-	if (ret < min)
+	if (ret < 0)
		goto out;
+
	mm->pasid = ret;
	ret = 0;
out:
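The iommu-sva.c fix switches the failure test on `ida_alloc_range()` from `ret < min` to the idiomatic `ret < 0`: the call by contract returns either an ID already `>= min` or a negative errno, so a negative check is the precise failure test. A toy sketch of that contract and the corrected caller pattern (`toy_alloc_range`, `assign_id`, and `demo` are hypothetical names, not kernel API):

```c
#include <errno.h>

/* Toy stand-in for ida_alloc_range(), assumed to mirror the kernel
 * contract: return an ID in [min, max] on success, or a negative errno
 * on failure.  `used` marks already-allocated IDs. */
static int toy_alloc_range(unsigned char *used, int min, int max)
{
	for (int id = min; id <= max; id++) {
		if (!used[id]) {
			used[id] = 1;
			return id;
		}
	}
	return -ENOSPC;
}

/* The pattern the fix restores: test "ret < 0", not "ret < min" --
 * every successful return is by contract already >= min. */
static int assign_id(unsigned char *used, int min, int max, int *out)
{
	int ret = toy_alloc_range(used, min, max);

	if (ret < 0)
		return ret;	/* negative errno, not an ID */

	*out = ret;
	return 0;
}

/* Two allocations succeed; the third exhausts the range. */
int demo(void)
{
	unsigned char used[4] = { 0 };
	int id;

	if (assign_id(used, 1, 2, &id) || id != 1)
		return -1;
	if (assign_id(used, 1, 2, &id) || id != 2)
		return -2;
	if (assign_id(used, 1, 2, &id) != -ENOSPC)
		return -3;
	return 0;
}
```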
drivers/iommu/iommu.c (+16 -15)
@@ -2891 +2891 @@
 		ret = __iommu_group_set_domain_internal(
 			group, dom, IOMMU_SET_DOMAIN_MUST_SUCCEED);
 		if (WARN_ON(ret))
-			goto out_free;
+			goto out_free_old;
 	} else {
 		ret = __iommu_group_set_domain(group, dom);
-		if (ret) {
-			iommu_domain_free(dom);
-			group->default_domain = old_dom;
-			return ret;
-		}
+		if (ret)
+			goto err_restore_def_domain;
 	}
 
 	/*
@@ -2908 +2911 @@
 		for_each_group_device(group, gdev) {
 			ret = iommu_create_device_direct_mappings(dom, gdev->dev);
 			if (ret)
-				goto err_restore;
+				goto err_restore_domain;
 		}
 	}
 
-err_restore:
-	if (old_dom) {
-		__iommu_group_set_domain_internal(
-			group, old_dom, IOMMU_SET_DOMAIN_MUST_SUCCEED);
-		iommu_domain_free(dom);
-		old_dom = NULL;
-	}
-out_free:
+out_free_old:
 	if (old_dom)
 		iommu_domain_free(old_dom);
+	return ret;
+
+err_restore_domain:
+	if (old_dom)
+		__iommu_group_set_domain_internal(
+			group, old_dom, IOMMU_SET_DOMAIN_MUST_SUCCEED);
+err_restore_def_domain:
+	if (old_dom) {
+		iommu_domain_free(dom);
+		group->default_domain = old_dom;
+	}
	return ret;
}
@@ -2681 +2681 @@
 
 	ring->rx_max_pending = ICE_MAX_NUM_DESC;
 	ring->tx_max_pending = ICE_MAX_NUM_DESC;
-	ring->rx_pending = vsi->rx_rings[0]->count;
-	ring->tx_pending = vsi->tx_rings[0]->count;
+	if (vsi->tx_rings && vsi->rx_rings) {
+		ring->rx_pending = vsi->rx_rings[0]->count;
+		ring->tx_pending = vsi->tx_rings[0]->count;
+	} else {
+		ring->rx_pending = 0;
+		ring->tx_pending = 0;
+	}
 
 	/* Rx mini and jumbo rings are not supported */
 	ring->rx_mini_max_pending = 0;
@@ -2720 +2715 @@
 			ICE_REQ_DESC_MULTIPLE);
 		return -EINVAL;
 	}
+
+	/* Return if there is no rings (device is reloading) */
+	if (!vsi->tx_rings || !vsi->rx_rings)
+		return -EBUSY;
 
 	new_tx_cnt = ALIGN(ring->tx_pending, ICE_REQ_DESC_MULTIPLE);
 	if (new_tx_cnt != ring->tx_pending)
drivers/net/ethernet/intel/ice/ice_lib.c (-27)
@@ -2972 +2972 @@
 		return -ENODEV;
 	pf = vsi->back;
 
-	/* do not unregister while driver is in the reset recovery pending
-	 * state. Since reset/rebuild happens through PF service task workqueue,
-	 * it's not a good idea to unregister netdev that is associated to the
-	 * PF that is running the work queue items currently. This is done to
-	 * avoid check_flush_dependency() warning on this wq
-	 */
-	if (vsi->netdev && !ice_is_reset_in_progress(pf->state) &&
-	    (test_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state))) {
-		unregister_netdev(vsi->netdev);
-		clear_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state);
-	}
-
-	if (vsi->type == ICE_VSI_PF)
-		ice_devlink_destroy_pf_port(pf);
-
 	if (test_bit(ICE_FLAG_RSS_ENA, pf->flags))
 		ice_rss_clean(vsi);
 
 	ice_vsi_close(vsi);
 	ice_vsi_decfg(vsi);
-
-	if (vsi->netdev) {
-		if (test_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state)) {
-			unregister_netdev(vsi->netdev);
-			clear_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state);
-		}
-		if (test_bit(ICE_VSI_NETDEV_ALLOCD, vsi->state)) {
-			free_netdev(vsi->netdev);
-			vsi->netdev = NULL;
-			clear_bit(ICE_VSI_NETDEV_ALLOCD, vsi->state);
-		}
-	}
 
 	/* retain SW VSI data structure since it is needed to unregister and
 	 * free VSI netdev when PF is not in reset recovery pending state,\
@@ -1260 +1260 @@
 	if (skb->protocol == htons(ETH_P_IP)) {
 		u32 pkt_len = ((unsigned char *)ip_hdr(skb) - skb->data)
 			      + ntohs(ip_hdr(skb)->tot_len);
-		if (skb->len > pkt_len)
-			pskb_trim(skb, pkt_len);
+		if (skb->len > pkt_len) {
+			ret = pskb_trim(skb, pkt_len);
+			if (unlikely(ret))
+				return ret;
+		}
 	}
 
 	hdr_len = skb_tcp_all_headers(skb);
drivers/net/ethernet/realtek/r8169_main.c (+34 -11)
@@ -623 +623 @@
 	int cfg9346_usage_count;
 
 	unsigned supports_gmii:1;
+	unsigned aspm_manageable:1;
 	dma_addr_t counters_phys_addr;
 	struct rtl8169_counters *counters;
 	struct rtl8169_tc_offsets tc_offset;
@@ -2747 +2746 @@
 	if (tp->mac_version < RTL_GIGA_MAC_VER_32)
 		return;
 
-	if (enable) {
+	/* Don't enable ASPM in the chip if OS can't control ASPM */
+	if (enable && tp->aspm_manageable) {
+		/* On these chip versions ASPM can even harm
+		 * bus communication of other PCI devices.
+		 */
+		if (tp->mac_version == RTL_GIGA_MAC_VER_42 ||
+		    tp->mac_version == RTL_GIGA_MAC_VER_43)
+			return;
+
 		rtl_mod_config5(tp, 0, ASPM_en);
 		rtl_mod_config2(tp, 0, ClkReqEn);
@@ -4523 +4514 @@
 	}
 
 	if (napi_schedule_prep(&tp->napi)) {
-		rtl_unlock_config_regs(tp);
-		rtl_hw_aspm_clkreq_enable(tp, false);
-		rtl_lock_config_regs(tp);
-
 		rtl_irq_disable(tp);
 		__napi_schedule(&tp->napi);
 	}
@@ -4582 +4577 @@
 
 	work_done = rtl_rx(dev, tp, budget);
 
-	if (work_done < budget && napi_complete_done(napi, work_done)) {
+	if (work_done < budget && napi_complete_done(napi, work_done))
 		rtl_irq_enable(tp);
-
-		rtl_unlock_config_regs(tp);
-		rtl_hw_aspm_clkreq_enable(tp, true);
-		rtl_lock_config_regs(tp);
-	}
 
 	return work_done;
 }
@@ -5158 +5158 @@
 	rtl_rar_set(tp, mac_addr);
 }
 
+/* register is set if system vendor successfully tested ASPM 1.2 */
+static bool rtl_aspm_is_safe(struct rtl8169_private *tp)
+{
+	if (tp->mac_version >= RTL_GIGA_MAC_VER_61 &&
+	    r8168_mac_ocp_read(tp, 0xc0b2) & 0xf)
+		return true;
+
+	return false;
+}
+
 static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 {
 	struct rtl8169_private *tp;
@@ -5236 +5226 @@
 			"unknown chip XID %03x, contact r8169 maintainers (see MAINTAINERS file)\n",
 			xid);
 	tp->mac_version = chipset;
+
+	/* Disable ASPM L1 as that cause random device stop working
+	 * problems as well as full system hangs for some PCIe devices users.
+	 * Chips from RTL8168h partially have issues with L1.2, but seem
+	 * to work fine with L1 and L1.1.
+	 */
+	if (rtl_aspm_is_safe(tp))
+		rc = 0;
+	else if (tp->mac_version >= RTL_GIGA_MAC_VER_46)
+		rc = pci_disable_link_state(pdev, PCIE_LINK_STATE_L1_2);
+	else
+		rc = pci_disable_link_state(pdev, PCIE_LINK_STATE_L1);
+	tp->aspm_manageable = !rc;
 
 	tp->dash_type = rtl_check_dash(tp);
 
@@ -3451 +3451 @@
 {
 	int rc;
 
+	ethtool_set_ethtool_phy_ops(&phy_ethtool_phy_ops);
+
 	rc = mdio_bus_init();
 	if (rc)
-		return rc;
+		goto err_ethtool_phy_ops;
 
-	ethtool_set_ethtool_phy_ops(&phy_ethtool_phy_ops);
 	features_init();
 
 	rc = phy_driver_register(&genphy_c45_driver, THIS_MODULE);
 	if (rc)
-		goto err_c45;
+		goto err_mdio_bus;
 
 	rc = phy_driver_register(&genphy_driver, THIS_MODULE);
-	if (rc) {
-		phy_driver_unregister(&genphy_c45_driver);
+	if (rc)
+		goto err_c45;
+
+	return 0;
+
err_c45:
-		mdio_bus_exit();
-	}
+	phy_driver_unregister(&genphy_c45_driver);
+err_mdio_bus:
+	mdio_bus_exit();
+err_ethtool_phy_ops:
+	ethtool_set_ethtool_phy_ops(NULL);
 
	return rc;
}
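The phy init hunk above converts ad-hoc partial cleanup into the standard goto-ladder: each successful step gets a matching unwind label, and a failure at step N falls through the labels so steps N-1..1 are undone in reverse order. A self-contained sketch of that ladder (step names in the comments are only loose analogues of the phy code; `fail_at` is a hypothetical knob that picks which step fails, 0 meaning none):

```c
#include <string.h>

/* Each init step appends an uppercase letter to `log`; each unwind step
 * appends its lowercase counterpart, so the teardown order is visible. */
static int subsystem_init(int fail_at, char *log)
{
	int n = 0;
	int rc;

	log[n++] = 'A';			/* step 1, e.g. set ethtool ops */
	log[n] = '\0';

	rc = (fail_at == 2) ? -1 : 0;	/* step 2, e.g. mdio_bus_init() */
	if (rc)
		goto err_step1;
	log[n++] = 'B';
	log[n] = '\0';

	rc = (fail_at == 3) ? -1 : 0;	/* step 3, e.g. driver register */
	if (rc)
		goto err_step2;
	log[n++] = 'C';
	log[n] = '\0';

	return 0;

err_step2:
	log[n++] = 'b';			/* undo step 2 */
err_step1:
	log[n++] = 'a';			/* undo step 1 */
	log[n] = '\0';
	return rc;
}
```

Falling through the labels is the point: a failure at step 3 produces the unwind sequence "b" then "a", mirroring init order exactly.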
drivers/net/usb/usbnet.c (+6)
@@ -1775 +1775 @@
 	} else if (!info->in || !info->out)
 		status = usbnet_get_endpoints (dev, udev);
 	else {
+		u8 ep_addrs[3] = {
+			info->in + USB_DIR_IN, info->out + USB_DIR_OUT, 0
+		};
+
 		dev->in = usb_rcvbulkpipe (xdev, info->in);
 		dev->out = usb_sndbulkpipe (xdev, info->out);
 		if (!(info->flags & FLAG_NO_SETINT))
@@ -1788 +1784 @@
 		else
 			status = 0;
 
+		if (status == 0 && !usb_check_bulk_endpoints(udev, ep_addrs))
+			status = -EINVAL;
 	}
 	if (status >= 0 && dev->status)
 		status = init_status (dev, udev);
drivers/net/vrf.c (+6 -6)
@@ -664 +664 @@
 	skb->protocol = htons(ETH_P_IPV6);
 	skb->dev = dev;
 
-	rcu_read_lock_bh();
+	rcu_read_lock();
 	nexthop = rt6_nexthop((struct rt6_info *)dst, &ipv6_hdr(skb)->daddr);
 	neigh = __ipv6_neigh_lookup_noref(dst->dev, nexthop);
 	if (unlikely(!neigh))
@@ -672 +672 @@
 	if (!IS_ERR(neigh)) {
 		sock_confirm_neigh(skb, neigh);
 		ret = neigh_output(neigh, skb, false);
-		rcu_read_unlock_bh();
+		rcu_read_unlock();
 		return ret;
 	}
-	rcu_read_unlock_bh();
+	rcu_read_unlock();
 
 	IP6_INC_STATS(dev_net(dst->dev),
 		      ip6_dst_idev(dst), IPSTATS_MIB_OUTNOROUTES);
@@ -889 +889 @@
 		}
 	}
 
-	rcu_read_lock_bh();
+	rcu_read_lock();
 
 	neigh = ip_neigh_for_gw(rt, skb, &is_v6gw);
 	if (!IS_ERR(neigh)) {
@@ -898 +898 @@
 		sock_confirm_neigh(skb, neigh);
 		/* if crossing protocols, can not use the cached header */
 		ret = neigh_output(neigh, skb, is_v6gw);
-		rcu_read_unlock_bh();
+		rcu_read_unlock();
 		return ret;
 	}
 
-	rcu_read_unlock_bh();
+	rcu_read_unlock();
 	vrf_tx_error(skb->dev, skb);
 	return -EINVAL;
}
drivers/nvme/host/core.c (+33 -3)
@@ -3431 +3431 @@
 
 	ret = nvme_global_check_duplicate_ids(ctrl->subsys, &info->ids);
 	if (ret) {
-		dev_err(ctrl->device,
-			"globally duplicate IDs for nsid %d\n", info->nsid);
+		/*
+		 * We've found two different namespaces on two different
+		 * subsystems that report the same ID. This is pretty nasty
+		 * for anything that actually requires unique device
+		 * identification. In the kernel we need this for multipathing,
+		 * and in user space the /dev/disk/by-id/ links rely on it.
+		 *
+		 * If the device also claims to be multi-path capable back off
+		 * here now and refuse the probe the second device as this is a
+		 * recipe for data corruption. If not this is probably a
+		 * cheap consumer device if on the PCIe bus, so let the user
+		 * proceed and use the shiny toy, but warn that with changing
+		 * probing order (which due to our async probing could just be
+		 * device taking longer to startup) the other device could show
+		 * up at any time.
+		 */
 		nvme_print_device_info(ctrl);
-		return ret;
+		if ((ns->ctrl->ops->flags & NVME_F_FABRICS) || /* !PCIe */
+		    ((ns->ctrl->subsys->cmic & NVME_CTRL_CMIC_MULTI_CTRL) &&
+		     info->is_shared)) {
+			dev_err(ctrl->device,
+				"ignoring nsid %d because of duplicate IDs\n",
+				info->nsid);
+			return ret;
+		}
+
+		dev_err(ctrl->device,
+			"clearing duplicate IDs for nsid %d\n", info->nsid);
+		dev_err(ctrl->device,
+			"use of /dev/disk/by-id/ may cause data corruption\n");
+		memset(&info->ids.nguid, 0, sizeof(info->ids.nguid));
+		memset(&info->ids.uuid, 0, sizeof(info->ids.uuid));
+		memset(&info->ids.eui64, 0, sizeof(info->ids.eui64));
+		ctrl->quirks |= NVME_QUIRK_BOGUS_NID;
 	}
 
 	mutex_lock(&ctrl->subsys->lock);
drivers/nvme/host/fault_inject.c (+1 -1)
@@ -27 +27 @@
 
 	/* create debugfs directory and attribute */
 	parent = debugfs_create_dir(dev_name, NULL);
-	if (!parent) {
+	if (IS_ERR(parent)) {
 		pr_warn("%s: failed to create debugfs directory\n", dev_name);
 		return;
 	}
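The fault_inject.c fix works because `debugfs_create_dir()` follows the kernel's ERR_PTR convention: failure is reported by encoding a negative errno into the pointer value, never by returning NULL, so `!parent` can never detect it. A userspace sketch of that convention (these helpers imitate the kernel's `ERR_PTR`/`PTR_ERR`/`IS_ERR`, they are not the kernel implementations):

```c
#include <errno.h>
#include <stdint.h>

/* Pointers in the last page of the address space are treated as
 * encoded errno values; everything else, including NULL, is a
 * legitimate pointer. */
#define MAX_ERRNO 4095

static inline void *err_ptr(long error)
{
	return (void *)error;		/* e.g. -ENOMEM becomes a pointer */
}

static inline long ptr_err(const void *ptr)
{
	return (long)ptr;		/* recover the errno */
}

static inline int is_err(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}
```

Note that `is_err(NULL)` is false: in this convention NULL is a valid (if empty) result, which is exactly why the original `if (!parent)` check missed every real failure.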
drivers/nvme/host/fc.c (+30 -7)
@@ -2548 +2548 @@
 	 * the controller. Abort any ios on the association and let the
 	 * create_association error path resolve things.
 	 */
-	if (ctrl->ctrl.state == NVME_CTRL_CONNECTING) {
-		__nvme_fc_abort_outstanding_ios(ctrl, true);
+	enum nvme_ctrl_state state;
+	unsigned long flags;
+
+	spin_lock_irqsave(&ctrl->lock, flags);
+	state = ctrl->ctrl.state;
+	if (state == NVME_CTRL_CONNECTING) {
 		set_bit(ASSOC_FAILED, &ctrl->flags);
+		spin_unlock_irqrestore(&ctrl->lock, flags);
+		__nvme_fc_abort_outstanding_ios(ctrl, true);
+		dev_warn(ctrl->ctrl.device,
+			"NVME-FC{%d}: transport error during (re)connect\n",
+			ctrl->cnum);
 		return;
 	}
+	spin_unlock_irqrestore(&ctrl->lock, flags);
 
 	/* Otherwise, only proceed if in LIVE state - e.g. on first error */
-	if (ctrl->ctrl.state != NVME_CTRL_LIVE)
+	if (state != NVME_CTRL_LIVE)
 		return;
 
 	dev_warn(ctrl->ctrl.device,
@@ -3120 +3110 @@
 	 */
 
 	ret = nvme_enable_ctrl(&ctrl->ctrl);
-	if (ret || test_bit(ASSOC_FAILED, &ctrl->flags))
+	if (!ret && test_bit(ASSOC_FAILED, &ctrl->flags))
+		ret = -EIO;
+	if (ret)
 		goto out_disconnect_admin_queue;
 
 	ctrl->ctrl.max_segments = ctrl->lport->ops->max_sgl_segments;
@@ -3132 +3120 @@
 	nvme_unquiesce_admin_queue(&ctrl->ctrl);
 
 	ret = nvme_init_ctrl_finish(&ctrl->ctrl, false);
-	if (ret || test_bit(ASSOC_FAILED, &ctrl->flags))
+	if (!ret && test_bit(ASSOC_FAILED, &ctrl->flags))
+		ret = -EIO;
+	if (ret)
 		goto out_disconnect_admin_queue;
 
 	/* sanity checks */
@@ -3179 +3165 @@
 	else
 		ret = nvme_fc_recreate_io_queues(ctrl);
 	}
-	if (ret || test_bit(ASSOC_FAILED, &ctrl->flags))
-		goto out_term_aen_ops;
 
+	spin_lock_irqsave(&ctrl->lock, flags);
+	if (!ret && test_bit(ASSOC_FAILED, &ctrl->flags))
+		ret = -EIO;
+	if (ret) {
+		spin_unlock_irqrestore(&ctrl->lock, flags);
+		goto out_term_aen_ops;
+	}
 	changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE);
+	spin_unlock_irqrestore(&ctrl->lock, flags);
 
 	ctrl->ctrl.nr_reconnects = 0;
 
@@ -3200 +3180 @@
out_term_aen_ops:
	nvme_fc_term_aen_ops(ctrl);
out_disconnect_admin_queue:
+	dev_warn(ctrl->ctrl.device,
+		"NVME-FC{%d}: create_assoc failed, assoc_id %llx ret %d\n",
+		ctrl->cnum, ctrl->association_id, ret);
 	/* send a Disconnect(association) LS to fc-nvme target */
 	nvme_fc_xmt_disconnect_assoc(ctrl);
 	spin_lock_irqsave(&ctrl->lock, flags);
drivers/nvme/host/pci.c (+20 -9)
@@ -967 +967 @@
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
 
 	dma_unmap_page(dev->dev, iod->meta_dma,
-		       rq_integrity_vec(req)->bv_len, rq_data_dir(req));
+		       rq_integrity_vec(req)->bv_len, rq_dma_dir(req));
 	}
 
 	if (blk_rq_nr_phys_segments(req))
@@ -1298 +1298 @@
 	 */
 	if (nvme_should_reset(dev, csts)) {
 		nvme_warn_reset(dev, csts);
-		nvme_dev_disable(dev, false);
-		nvme_reset_ctrl(&dev->ctrl);
-		return BLK_EH_DONE;
+		goto disable;
 	}
 
 	/*
@@ -1349 +1351 @@
 			 "I/O %d QID %d timeout, reset controller\n",
 			 req->tag, nvmeq->qid);
 		nvme_req(req)->flags |= NVME_REQ_CANCELLED;
-		nvme_dev_disable(dev, false);
-		nvme_reset_ctrl(&dev->ctrl);
-
-		return BLK_EH_DONE;
+		goto disable;
 	}
 
 	if (atomic_dec_return(&dev->ctrl.abort_limit) < 0) {
@@ -1386 +1391 @@
 	 * as the device then is in a faulty state.
 	 */
 	return BLK_EH_RESET_TIMER;
+
+disable:
+	if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_RESETTING))
+		return BLK_EH_DONE;
+
+	nvme_dev_disable(dev, false);
+	if (nvme_try_sched_reset(&dev->ctrl))
+		nvme_unquiesce_io_queues(&dev->ctrl);
+	return BLK_EH_DONE;
}

static void nvme_free_queue(struct nvme_queue *nvmeq)
@@ -3282 +3278 @@
 	case pci_channel_io_frozen:
 		dev_warn(dev->ctrl.device,
 			 "frozen state error detected, reset controller\n");
+		if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_RESETTING)) {
+			nvme_dev_disable(dev, true);
+			return PCI_ERS_RESULT_DISCONNECT;
+		}
 		nvme_dev_disable(dev, false);
 		return PCI_ERS_RESULT_NEED_RESET;
 	case pci_channel_io_perm_failure:
@@ -3302 +3294 @@
 
 	dev_info(dev->ctrl.device, "restart after slot reset\n");
 	pci_restore_state(pdev);
-	nvme_reset_ctrl(&dev->ctrl);
+	if (!nvme_try_sched_reset(&dev->ctrl))
+		nvme_unquiesce_io_queues(&dev->ctrl);
 	return PCI_ERS_RESULT_RECOVERED;
}

@@ -3405 +3396 @@
 		.driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
 	{ PCI_DEVICE(0x144d, 0xa809),	/* Samsung MZALQ256HBJD 256G */
 		.driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+	{ PCI_DEVICE(0x144d, 0xa802),	/* Samsung SM953 */
+		.driver_data = NVME_QUIRK_BOGUS_NID, },
 	{ PCI_DEVICE(0x1cc4, 0x6303),	/* UMIS RPJTJ512MGE1QDY 512G */
 		.driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
 	{ PCI_DEVICE(0x1cc4, 0x6302),	/* UMIS RPJTJ256MGE1QDY 256G */
drivers/nvme/host/sysfs.c (+1 -1)
@@ -92 +92 @@
 	 * we have no UUID set
 	 */
 	if (uuid_is_null(&ids->uuid)) {
-		dev_warn_ratelimited(dev,
+		dev_warn_once(dev,
 			"No UUID available providing old NGUID\n");
 		return sysfs_emit(buf, "%pU\n", ids->nguid);
 	}
drivers/nvme/host/zns.c (+4 -5)
@@ -10 +10 @@
int nvme_revalidate_zones(struct nvme_ns *ns)
{
 	struct request_queue *q = ns->queue;
-	int ret;
 
-	ret = blk_revalidate_disk_zones(ns->disk, NULL);
-	if (!ret)
-		blk_queue_max_zone_append_sectors(q, ns->ctrl->max_zone_append);
-	return ret;
+	blk_queue_chunk_sectors(q, ns->zsze);
+	blk_queue_max_zone_append_sectors(q, ns->ctrl->max_zone_append);
+
+	return blk_revalidate_disk_zones(ns->disk, NULL);
}

static int nvme_set_max_append(struct nvme_ctrl *ctrl)
@@ -102 +102 @@
 	 * which depends on the host's memory fragementation. To solve this,
 	 * ensure mdts is limited to the pages equal to the number of segments.
 	 */
-	max_hw_sectors = min_not_zero(pctrl->max_segments << (PAGE_SHIFT - 9),
+	max_hw_sectors = min_not_zero(pctrl->max_segments << PAGE_SECTORS_SHIFT,
 				      pctrl->max_hw_sectors);
 
 	/*
 	 * nvmet_passthru_map_sg is limitted to using a single bio so limit
 	 * the mdts based on BIO_MAX_VECS as well
 	 */
-	max_hw_sectors = min_not_zero(BIO_MAX_VECS << (PAGE_SHIFT - 9),
+	max_hw_sectors = min_not_zero(BIO_MAX_VECS << PAGE_SECTORS_SHIFT,
 				      max_hw_sectors);
 
 	page_shift = NVME_CAP_MPSMIN(ctrl->cap) + 12;
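Both renamed sites above compute the same quantity: how many 512-byte sectors fit in one page, i.e. a shift by `PAGE_SHIFT - 9`. A small sketch of that arithmetic with an assumed 4 KiB page size (the `TOY_*` names are hypothetical, standing in for the kernel's `SECTOR_SHIFT`/`PAGE_SECTORS_SHIFT`):

```c
#define TOY_SECTOR_SHIFT 9			/* 512-byte sectors */
#define TOY_PAGE_SHIFT 12			/* assumed 4 KiB pages */
#define TOY_PAGE_SECTORS_SHIFT (TOY_PAGE_SHIFT - TOY_SECTOR_SHIFT)

/* pages -> 512-byte sectors, the conversion both hunks perform */
static unsigned long pages_to_sectors(unsigned long npages)
{
	return npages << TOY_PAGE_SECTORS_SHIFT;
}
```

With 4 KiB pages the shift is 3, so one page is 8 sectors; naming the constant instead of open-coding `- 9` keeps the conversion correct on architectures with larger page sizes.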
drivers/perf/riscv_pmu.c (-3)
@@ -181 +181 @@
 	uint64_t max_period = riscv_pmu_ctr_get_width_mask(event);
 	u64 init_val;
 
-	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
-		return;
-
 	if (flags & PERF_EF_RELOAD)
 		WARN_ON_ONCE(!(event->hw.state & PERF_HES_UPTODATE));
 
drivers/pinctrl/pinctrl-amd.c (+23 -38)
@@ -116 +116 @@
 	raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
 }
 
-static int amd_gpio_set_debounce(struct gpio_chip *gc, unsigned offset,
-		unsigned debounce)
+static int amd_gpio_set_debounce(struct amd_gpio *gpio_dev, unsigned int offset,
+				 unsigned int debounce)
 {
 	u32 time;
 	u32 pin_reg;
 	int ret = 0;
-	unsigned long flags;
-	struct amd_gpio *gpio_dev = gpiochip_get_data(gc);
-
-	raw_spin_lock_irqsave(&gpio_dev->lock, flags);
 
 	/* Use special handling for Pin0 debounce */
-	pin_reg = readl(gpio_dev->base + WAKE_INT_MASTER_REG);
-	if (pin_reg & INTERNAL_GPIO0_DEBOUNCE)
-		debounce = 0;
+	if (offset == 0) {
+		pin_reg = readl(gpio_dev->base + WAKE_INT_MASTER_REG);
+		if (pin_reg & INTERNAL_GPIO0_DEBOUNCE)
+			debounce = 0;
+	}
 
 	pin_reg = readl(gpio_dev->base + offset * 4);
@@ -180 +182 @@
 		pin_reg &= ~(DB_CNTRl_MASK << DB_CNTRL_OFF);
 	}
 	writel(pin_reg, gpio_dev->base + offset * 4);
-	raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
 
 	return ret;
-}
-
-static int amd_gpio_set_config(struct gpio_chip *gc, unsigned offset,
-			       unsigned long config)
-{
-	u32 debounce;
-
-	if (pinconf_to_config_param(config) != PIN_CONFIG_INPUT_DEBOUNCE)
-		return -ENOTSUPP;
-
-	debounce = pinconf_to_config_argument(config);
-	return amd_gpio_set_debounce(gc, offset, debounce);
}

#ifdef CONFIG_DEBUG_FS
@@ -205 +220 @@
 	char *pin_sts;
 	char *interrupt_sts;
 	char *wake_sts;
-	char *pull_up_sel;
 	char *orientation;
 	char debounce_value[40];
 	char *debounce_enable;
@@ -312 +328 @@
 		seq_printf(s, " %s|", wake_sts);
 
 		if (pin_reg & BIT(PULL_UP_ENABLE_OFF)) {
-			if (pin_reg & BIT(PULL_UP_SEL_OFF))
-				pull_up_sel = "8k";
-			else
-				pull_up_sel = "4k";
-			seq_printf(s, "%s ↑|",
-				   pull_up_sel);
+			seq_puts(s, " ↑ |");
 		} else if (pin_reg & BIT(PULL_DOWN_ENABLE_OFF)) {
-			seq_puts(s, " ↓|");
+			seq_puts(s, " ↓ |");
 		} else {
 			seq_puts(s, "   |");
 		}
@@ -740 +761 @@
 		break;
 
 	case PIN_CONFIG_BIAS_PULL_UP:
-		arg = (pin_reg >> PULL_UP_SEL_OFF) & (BIT(0) | BIT(1));
+		arg = (pin_reg >> PULL_UP_ENABLE_OFF) & BIT(0);
 		break;
 
 	case PIN_CONFIG_DRIVE_STRENGTH:
@@ -759 +780 @@
}

static int amd_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin,
-			   unsigned long *configs, unsigned num_configs)
+			   unsigned long *configs, unsigned int num_configs)
{
 	int i;
 	u32 arg;
@@ -777 +798 @@
 
 		switch (param) {
 		case PIN_CONFIG_INPUT_DEBOUNCE:
-			pin_reg &= ~DB_TMR_OUT_MASK;
-			pin_reg |= arg & DB_TMR_OUT_MASK;
-			break;
+			ret = amd_gpio_set_debounce(gpio_dev, pin, arg);
+			goto out_unlock;
 
 		case PIN_CONFIG_BIAS_PULL_DOWN:
 			pin_reg &= ~BIT(PULL_DOWN_ENABLE_OFF);
@@ -786 +808 @@
 			break;
 
 		case PIN_CONFIG_BIAS_PULL_UP:
-			pin_reg &= ~BIT(PULL_UP_SEL_OFF);
-			pin_reg |= (arg & BIT(0)) << PULL_UP_SEL_OFF;
 			pin_reg &= ~BIT(PULL_UP_ENABLE_OFF);
-			pin_reg |= ((arg>>1) & BIT(0)) << PULL_UP_ENABLE_OFF;
+			pin_reg |= (arg & BIT(0)) << PULL_UP_ENABLE_OFF;
 			break;
 
 		case PIN_CONFIG_DRIVE_STRENGTH:
@@ -805 +829 @@
 
 		writel(pin_reg, gpio_dev->base + pin*4);
 	}
+out_unlock:
 	raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
 
 	return ret;
@@ -845 +868 @@
 		return -ENOTSUPP;
 	}
 	return 0;
+}
+
+static int amd_gpio_set_config(struct gpio_chip *gc, unsigned int pin,
+			       unsigned long config)
+{
+	struct amd_gpio *gpio_dev = gpiochip_get_data(gc);
+
+	return amd_pinconf_set(gpio_dev->pctrl, pin, &config, 1);
}

static const struct pinconf_ops amd_pinconf_ops = {
@@ -249 +249 @@
 
static int rzg2l_dt_subnode_to_map(struct pinctrl_dev *pctldev,
 				   struct device_node *np,
+				   struct device_node *parent,
 				   struct pinctrl_map **map,
 				   unsigned int *num_maps,
 				   unsigned int *index)
@@ -267 +266 @@
 	struct property *prop;
 	int ret, gsel, fsel;
 	const char **pin_fn;
+	const char *name;
 	const char *pin;
 
 	pinmux = of_find_property(np, "pinmux", NULL);
@@ -351 +349 @@
 		psel_val[i] = MUX_FUNC(value);
 	}
 
+	if (parent) {
+		name = devm_kasprintf(pctrl->dev, GFP_KERNEL, "%pOFn.%pOFn",
+				      parent, np);
+		if (!name) {
+			ret = -ENOMEM;
+			goto done;
+		}
+	} else {
+		name = np->name;
+	}
+
 	/* Register a single pin group listing all the pins we read from DT */
-	gsel = pinctrl_generic_add_group(pctldev, np->name, pins, num_pinmux, NULL);
+	gsel = pinctrl_generic_add_group(pctldev, name, pins, num_pinmux, NULL);
 	if (gsel < 0) {
 		ret = gsel;
 		goto done;
@@ -373 +360 @@
 	 * Register a single group function where the 'data' is an array PSEL
 	 * register values read from DT.
 	 */
-	pin_fn[0] = np->name;
-	fsel = pinmux_generic_add_function(pctldev, np->name, pin_fn, 1,
-					   psel_val);
+	pin_fn[0] = name;
+	fsel = pinmux_generic_add_function(pctldev, name, pin_fn, 1, psel_val);
 	if (fsel < 0) {
 		ret = fsel;
 		goto remove_group;
 	}
 
 	maps[idx].type = PIN_MAP_TYPE_MUX_GROUP;
-	maps[idx].data.mux.group = np->name;
-	maps[idx].data.mux.function = np->name;
+	maps[idx].data.mux.group = name;
+	maps[idx].data.mux.function = name;
 	idx++;
 
 	dev_dbg(pctrl->dev, "Parsed %pOF with %d pins\n", np, num_pinmux);
@@ -429 +417 @@
 	index = 0;
 
 	for_each_child_of_node(np, child) {
-		ret = rzg2l_dt_subnode_to_map(pctldev, child, map,
+		ret = rzg2l_dt_subnode_to_map(pctldev, child, np, map,
 					      num_maps, &index);
 		if (ret < 0) {
 			of_node_put(child);
@@ -438 +426 @@
 		}
 	}
 
 	if (*num_maps == 0) {
-		ret = rzg2l_dt_subnode_to_map(pctldev, np, map,
+		ret = rzg2l_dt_subnode_to_map(pctldev, np, NULL, map,
 					      num_maps, &index);
 		if (ret < 0)
 			goto done;
drivers/pinctrl/renesas/pinctrl-rzv2m.c (+20 -8)
@@ -209 +209 @@
 
static int rzv2m_dt_subnode_to_map(struct pinctrl_dev *pctldev,
 				   struct device_node *np,
+				   struct device_node *parent,
 				   struct pinctrl_map **map,
 				   unsigned int *num_maps,
 				   unsigned int *index)
@@ -227 +226 @@
 	struct property *prop;
 	int ret, gsel, fsel;
 	const char **pin_fn;
+	const char *name;
 	const char *pin;
 
 	pinmux = of_find_property(np, "pinmux", NULL);
@@ -311 +309 @@
 		psel_val[i] = MUX_FUNC(value);
 	}
 
+	if (parent) {
+		name = devm_kasprintf(pctrl->dev, GFP_KERNEL, "%pOFn.%pOFn",
+				      parent, np);
+		if (!name) {
+			ret = -ENOMEM;
+			goto done;
+		}
+	} else {
+		name = np->name;
+	}
+
 	/* Register a single pin group listing all the pins we read from DT */
-	gsel = pinctrl_generic_add_group(pctldev, np->name, pins, num_pinmux, NULL);
+	gsel = pinctrl_generic_add_group(pctldev, name, pins, num_pinmux, NULL);
 	if (gsel < 0) {
 		ret = gsel;
 		goto done;
@@ -333 +320 @@
 	 * Register a single group function where the 'data' is an array PSEL
 	 * register values read from DT.
 	 */
-	pin_fn[0] = np->name;
-	fsel = pinmux_generic_add_function(pctldev, np->name, pin_fn, 1,
-					   psel_val);
+	pin_fn[0] = name;
+	fsel = pinmux_generic_add_function(pctldev, name, pin_fn, 1, psel_val);
 	if (fsel < 0) {
 		ret = fsel;
 		goto remove_group;
 	}
 
 	maps[idx].type = PIN_MAP_TYPE_MUX_GROUP;
-	maps[idx].data.mux.group = np->name;
-	maps[idx].data.mux.function = np->name;
+	maps[idx].data.mux.group = name;
+	maps[idx].data.mux.function = name;
 	idx++;
 
 	dev_dbg(pctrl->dev, "Parsed %pOF with %d pins\n", np, num_pinmux);
@@ -389 +377 @@
 	index = 0;
 
 	for_each_child_of_node(np, child) {
-		ret = rzv2m_dt_subnode_to_map(pctldev, child, map,
+		ret = rzv2m_dt_subnode_to_map(pctldev, child, np, map,
 					      num_maps, &index);
 		if (ret < 0) {
 			of_node_put(child);
@@ -398 +386 @@
 		}
 	}
 
 	if (*num_maps == 0) {
-		ret = rzv2m_dt_subnode_to_map(pctldev, np, map,
+		ret = rzv2m_dt_subnode_to_map(pctldev, np, NULL, map,
 					      num_maps, &index);
 		if (ret < 0)
 			goto done;
drivers/regulator/da9063-regulator.c (+3)
@@ -778 +778 @@
 	const struct notification_limit *uv_l = &constr->under_voltage_limits;
 	const struct notification_limit *ov_l = &constr->over_voltage_limits;
 
+	if (!config->init_data) /* No config in DT, pointers will be invalid */
+		return 0;
+
 	/* make sure that only one severity is used to clarify if unchanged, enabled or disabled */
 	if ((!!uv_l->prot + !!uv_l->err + !!uv_l->warn) > 1) {
 		dev_err(config->dev, "%s: at most one voltage monitoring severity allowed!\n",
drivers/scsi/aacraid/aacraid.h (+1 -1)
@@ -2618 +2618 @@
struct aac_aifcmd {
 	__le32 command;		/* Tell host what type of notify this is */
 	__le32 seqnum;		/* To allow ordering of reports (if necessary) */
-	u8 data[1];		/* Undefined length (from kernel viewpoint) */
+	u8 data[];		/* Undefined length (from kernel viewpoint) */
};

/**
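The aacraid.h change converts the old one-element-array idiom into a C99 flexible array member. With `data[]` the trailing payload contributes nothing to `sizeof`, so "header plus payload" allocations are sized exactly and bounds checkers can see the real extent. A sketch with a hypothetical mirror of the structure (`fake_aifcmd` is illustrative, using fixed-width types in place of `__le32`/`u8`):

```c
#include <stddef.h>
#include <stdint.h>

struct fake_aifcmd {
	uint32_t command;	/* notify type */
	uint32_t seqnum;	/* report ordering */
	uint8_t data[];		/* flexible array member, was data[1] */
};
```

A caller would allocate `malloc(sizeof(struct fake_aifcmd) + payload_len)`; with the old `data[1]` form, `sizeof` silently included one payload byte, inviting off-by-one sizing.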
@@ -684 +684 @@
 
 	if ((sdd->cur_mode & SPI_LOOP) && sdd->port_conf->has_loopback)
 		val |= S3C64XX_SPI_MODE_SELF_LOOPBACK;
+	else
+		val &= ~S3C64XX_SPI_MODE_SELF_LOOPBACK;
 
 	writel(val, regs + S3C64XX_SPI_MODE_CFG);
 
drivers/ufs/core/ufshcd.c (+38)
@@ -8520 +8520 @@
 	return ret;
 }
 
+static void ufshcd_set_timestamp_attr(struct ufs_hba *hba)
+{
+	int err;
+	struct ufs_query_req *request = NULL;
+	struct ufs_query_res *response = NULL;
+	struct ufs_dev_info *dev_info = &hba->dev_info;
+	struct utp_upiu_query_v4_0 *upiu_data;
+
+	if (dev_info->wspecversion < 0x400)
+		return;
+
+	ufshcd_hold(hba);
+
+	mutex_lock(&hba->dev_cmd.lock);
+
+	ufshcd_init_query(hba, &request, &response,
+			  UPIU_QUERY_OPCODE_WRITE_ATTR,
+			  QUERY_ATTR_IDN_TIMESTAMP, 0, 0);
+
+	request->query_func = UPIU_QUERY_FUNC_STANDARD_WRITE_REQUEST;
+
+	upiu_data = (struct utp_upiu_query_v4_0 *)&request->upiu_req;
+
+	put_unaligned_be64(ktime_get_real_ns(), &upiu_data->osf3);
+
+	err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_QUERY, QUERY_REQ_TIMEOUT);
+
+	if (err)
+		dev_err(hba->dev, "%s: failed to set timestamp %d\n",
+			__func__, err);
+
+	mutex_unlock(&hba->dev_cmd.lock);
+	ufshcd_release(hba);
+}
+
/**
 * ufshcd_add_lus - probe and add UFS logical units
 * @hba: per-adapter instance
@@ -8742 +8707 @@
 	/* UFS device is also active now */
 	ufshcd_set_ufs_dev_active(hba);
 	ufshcd_force_reset_auto_bkops(hba);
+
+	ufshcd_set_timestamp_attr(hba);
 
 	/* Gear up to HS gear if supported */
 	if (hba->max_pwr_info.is_valid) {
@@ -9786 +9749 @@
 		ret = ufshcd_set_dev_pwr_mode(hba, UFS_ACTIVE_PWR_MODE);
 		if (ret)
 			goto set_old_link_state;
+		ufshcd_set_timestamp_attr(hba);
 	}
 
 	if (ufshcd_keep_autobkops_enabled_except_suspend(hba))
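The new `ufshcd_set_timestamp_attr()` stores the wall-clock nanoseconds with `put_unaligned_be64()`, i.e. big-endian byte order at an address that need not be 8-byte aligned. A portable userspace sketch of what that helper produces (`put_be64` is a stand-in name, not the kernel function):

```c
#include <stdint.h>

/* Write a 64-bit value big-endian (most significant byte first),
 * one byte at a time so any alignment is safe. */
static void put_be64(uint64_t val, uint8_t *p)
{
	for (int i = 0; i < 8; i++)
		p[i] = (uint8_t)(val >> (56 - 8 * i));
}
```

Byte-at-a-time stores sidestep both host endianness and alignment traps, which is why the wire format of a query request is filled this way rather than with a plain 64-bit store.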
drivers/ufs/host/Kconfig (+1)
@@ -72 +72 @@
config SCSI_UFS_MEDIATEK
 	tristate "Mediatek specific hooks to UFS controller platform driver"
 	depends on SCSI_UFSHCD_PLATFORM && ARCH_MEDIATEK
+	depends on RESET_CONTROLLER
 	select PHY_MTK_UFS
 	select RESET_TI_SYSCON
 	help
fs/btrfs/block-group.c (+12 -2)
@@ -1640 +1640 @@
{
 	struct btrfs_fs_info *fs_info = bg->fs_info;
 
-	trace_btrfs_add_unused_block_group(bg);
 	spin_lock(&fs_info->unused_bgs_lock);
 	if (list_empty(&bg->bg_list)) {
 		btrfs_get_block_group(bg);
+		trace_btrfs_add_unused_block_group(bg);
 		list_add_tail(&bg->bg_list, &fs_info->unused_bgs);
-	} else {
+	} else if (!test_bit(BLOCK_GROUP_FLAG_NEW, &bg->runtime_flags)) {
 		/* Pull out the block group from the reclaim_bgs list. */
+		trace_btrfs_add_unused_block_group(bg);
 		list_move_tail(&bg->bg_list, &fs_info->unused_bgs);
 	}
 	spin_unlock(&fs_info->unused_bgs_lock);
@@ -2088 +2087 @@
 
 		/* Shouldn't have super stripes in sequential zones */
 		if (zoned && nr) {
+			kfree(logical);
 			btrfs_err(fs_info,
 		"zoned: block group %llu must not contain super block",
 				  cache->start);
@@ -2670 +2668 @@
next:
 		btrfs_delayed_refs_rsv_release(fs_info, 1);
 		list_del_init(&block_group->bg_list);
+		clear_bit(BLOCK_GROUP_FLAG_NEW, &block_group->runtime_flags);
 	}
 	btrfs_trans_release_chunk_metadata(trans);
}
@@ -2709 +2706 @@
 	cache = btrfs_create_block_group_cache(fs_info, chunk_offset);
 	if (!cache)
 		return ERR_PTR(-ENOMEM);
+
+	/*
+	 * Mark it as new before adding it to the rbtree of block groups or any
+	 * list, so that no other task finds it and calls btrfs_mark_bg_unused()
+	 * before the new flag is set.
+	 */
+	set_bit(BLOCK_GROUP_FLAG_NEW, &cache->runtime_flags);
 
 	cache->length = size;
 	set_free_space_tree_thresholds(cache);
fs/btrfs/block-group.h (+5)

···
 	BLOCK_GROUP_FLAG_NEEDS_FREE_SPACE,
 	/* Indicate that the block group is placed on a sequential zone */
 	BLOCK_GROUP_FLAG_SEQUENTIAL_ZONE,
+	/*
+	 * Indicate that block group is in the list of new block groups of a
+	 * transaction.
+	 */
+	BLOCK_GROUP_FLAG_NEW,
 };
 
 enum btrfs_caching_type {
···
fs/btrfs/inode.c (+52, -25)

···
 void btrfs_add_delayed_iput(struct btrfs_inode *inode)
 {
 	struct btrfs_fs_info *fs_info = inode->root->fs_info;
+	unsigned long flags;
 
 	if (atomic_add_unless(&inode->vfs_inode.i_count, -1, 1))
 		return;
 
 	atomic_inc(&fs_info->nr_delayed_iputs);
-	spin_lock(&fs_info->delayed_iput_lock);
+	/*
+	 * Need to be irq safe here because we can be called from either an irq
+	 * context (see bio.c and btrfs_put_ordered_extent()) or a non-irq
+	 * context.
+	 */
+	spin_lock_irqsave(&fs_info->delayed_iput_lock, flags);
 	ASSERT(list_empty(&inode->delayed_iput));
 	list_add_tail(&inode->delayed_iput, &fs_info->delayed_iputs);
-	spin_unlock(&fs_info->delayed_iput_lock);
+	spin_unlock_irqrestore(&fs_info->delayed_iput_lock, flags);
 	if (!test_bit(BTRFS_FS_CLEANER_RUNNING, &fs_info->flags))
 		wake_up_process(fs_info->cleaner_kthread);
 }
···
 				   struct btrfs_inode *inode)
 {
 	list_del_init(&inode->delayed_iput);
-	spin_unlock(&fs_info->delayed_iput_lock);
+	spin_unlock_irq(&fs_info->delayed_iput_lock);
 	iput(&inode->vfs_inode);
 	if (atomic_dec_and_test(&fs_info->nr_delayed_iputs))
 		wake_up(&fs_info->delayed_iputs_wait);
-	spin_lock(&fs_info->delayed_iput_lock);
+	spin_lock_irq(&fs_info->delayed_iput_lock);
 }
 
 static void btrfs_run_delayed_iput(struct btrfs_fs_info *fs_info,
 				   struct btrfs_inode *inode)
 {
 	if (!list_empty(&inode->delayed_iput)) {
-		spin_lock(&fs_info->delayed_iput_lock);
+		spin_lock_irq(&fs_info->delayed_iput_lock);
 		if (!list_empty(&inode->delayed_iput))
 			run_delayed_iput_locked(fs_info, inode);
-		spin_unlock(&fs_info->delayed_iput_lock);
+		spin_unlock_irq(&fs_info->delayed_iput_lock);
 	}
 }
 
 void btrfs_run_delayed_iputs(struct btrfs_fs_info *fs_info)
 {
-
-	spin_lock(&fs_info->delayed_iput_lock);
+	/*
+	 * btrfs_put_ordered_extent() can run in irq context (see bio.c), which
+	 * calls btrfs_add_delayed_iput() and that needs to lock
+	 * fs_info->delayed_iput_lock. So we need to disable irqs here to
+	 * prevent a deadlock.
+	 */
+	spin_lock_irq(&fs_info->delayed_iput_lock);
 	while (!list_empty(&fs_info->delayed_iputs)) {
 		struct btrfs_inode *inode;
 
 		inode = list_first_entry(&fs_info->delayed_iputs,
 					 struct btrfs_inode, delayed_iput);
 		run_delayed_iput_locked(fs_info, inode);
-		cond_resched_lock(&fs_info->delayed_iput_lock);
+		if (need_resched()) {
+			spin_unlock_irq(&fs_info->delayed_iput_lock);
+			cond_resched();
+			spin_lock_irq(&fs_info->delayed_iput_lock);
+		}
 	}
-	spin_unlock(&fs_info->delayed_iput_lock);
+	spin_unlock_irq(&fs_info->delayed_iput_lock);
 }
 
 /*
···
 		found_key.type = BTRFS_INODE_ITEM_KEY;
 		found_key.offset = 0;
 		inode = btrfs_iget(fs_info->sb, last_objectid, root);
-		ret = PTR_ERR_OR_ZERO(inode);
-		if (ret && ret != -ENOENT)
-			goto out;
+		if (IS_ERR(inode)) {
+			ret = PTR_ERR(inode);
+			inode = NULL;
+			if (ret != -ENOENT)
+				goto out;
+		}
 
-		if (ret == -ENOENT && root == fs_info->tree_root) {
+		if (!inode && root == fs_info->tree_root) {
 			struct btrfs_root *dead_root;
 			int is_dead_root = 0;
···
 		 * deleted but wasn't. The inode number may have been reused,
 		 * but either way, we can delete the orphan item.
 		 */
-		if (ret == -ENOENT || inode->i_nlink) {
-			if (!ret) {
+		if (!inode || inode->i_nlink) {
+			if (inode) {
 				ret = btrfs_drop_verity_items(BTRFS_I(inode));
 				iput(inode);
+				inode = NULL;
 				if (ret)
 					goto out;
 			}
 			trans = btrfs_start_transaction(root, 1);
 			if (IS_ERR(trans)) {
 				ret = PTR_ERR(trans);
-				iput(inode);
 				goto out;
 			}
 			btrfs_debug(fs_info, "auto deleting %Lu",
···
 			ret = btrfs_del_orphan_item(trans, root,
 						    found_key.objectid);
 			btrfs_end_transaction(trans);
-			if (ret) {
-				iput(inode);
+			if (ret)
 				goto out;
-			}
 			continue;
 		}
···
 		ret = -ENOMEM;
 		goto out;
 	}
-	ret = set_page_extent_mapped(page);
-	if (ret < 0)
-		goto out_unlock;
 
 	if (!PageUptodate(page)) {
 		ret = btrfs_read_folio(NULL, page_folio(page));
···
 			goto out_unlock;
 		}
 	}
+
+	/*
+	 * We unlock the page after the io is completed and then re-lock it
+	 * above. release_folio() could have come in between that and cleared
+	 * PagePrivate(), but left the page in the mapping. Set the page mapped
+	 * here to make sure it's properly set for the subpage stuff.
+	 */
+	ret = set_page_extent_mapped(page);
+	if (ret < 0)
+		goto out_unlock;
+
 	wait_on_page_writeback(page);
 
 	lock_extent(io_tree, block_start, block_end, &cached_state);
···
 
 		ret = btrfs_extract_ordered_extent(bbio, dio_data->ordered);
 		if (ret) {
-			bbio->bio.bi_status = errno_to_blk_status(ret);
-			btrfs_dio_end_io(bbio);
+			btrfs_finish_ordered_extent(dio_data->ordered, NULL,
+						    file_offset, dip->bytes,
+						    !ret);
+			bio->bi_status = errno_to_blk_status(ret);
+			iomap_dio_bio_end_io(bio);
 			return;
 		}
 	}
···
fs/btrfs/raid56.c

···
 static void index_rbio_pages(struct btrfs_raid_bio *rbio);
 static int alloc_rbio_pages(struct btrfs_raid_bio *rbio);
 
-static int finish_parity_scrub(struct btrfs_raid_bio *rbio, int need_check);
+static int finish_parity_scrub(struct btrfs_raid_bio *rbio);
 static void scrub_rbio_work_locked(struct work_struct *work);
 
 static void free_raid_bio_pointers(struct btrfs_raid_bio *rbio)
···
 	return 0;
 }
 
-static int finish_parity_scrub(struct btrfs_raid_bio *rbio, int need_check)
+static int finish_parity_scrub(struct btrfs_raid_bio *rbio)
 {
 	struct btrfs_io_context *bioc = rbio->bioc;
 	const u32 sectorsize = bioc->fs_info->sectorsize;
···
 	 * it.
 	 */
 	clear_bit(RBIO_CACHE_READY_BIT, &rbio->flags);
-
-	if (!need_check)
-		goto writeback;
 
 	p_sector.page = alloc_page(GFP_NOFS);
 	if (!p_sector.page)
···
 		q_sector.page = NULL;
 	}
 
-writeback:
 	/*
 	 * time to start writing. Make bios for everything from the
 	 * higher layers (the bio_list in our rbio) and our p/q. Ignore
···
 
 static void scrub_rbio(struct btrfs_raid_bio *rbio)
 {
-	bool need_check = false;
 	int sector_nr;
 	int ret;
 
···
 	 * We have every sector properly prepared. Can finish the scrub
 	 * and writeback the good content.
 	 */
-	ret = finish_parity_scrub(rbio, need_check);
+	ret = finish_parity_scrub(rbio);
 	wait_event(rbio->io_wait, atomic_read(&rbio->stripes_pending) == 0);
 	for (sector_nr = 0; sector_nr < rbio->stripe_nsectors; sector_nr++) {
 		int found_errors;
···
fs/btrfs/volumes.c (+6, -11)

···
 	return has_single_bit_set(flags);
 }
 
-static inline int balance_need_close(struct btrfs_fs_info *fs_info)
-{
-	/* cancel requested || normal exit path */
-	return atomic_read(&fs_info->balance_cancel_req) ||
-		(atomic_read(&fs_info->balance_pause_req) == 0 &&
-		 atomic_read(&fs_info->balance_cancel_req) == 0);
-}
-
 /*
  * Validate target profile against allowed profiles and return true if it's OK.
  * Otherwise print the error message and return false.
···
 	u64 num_devices;
 	unsigned seq;
 	bool reducing_redundancy;
+	bool paused = false;
 	int i;
 
 	if (btrfs_fs_closing(fs_info) ||
···
 	if (ret == -ECANCELED && atomic_read(&fs_info->balance_pause_req)) {
 		btrfs_info(fs_info, "balance: paused");
 		btrfs_exclop_balance(fs_info, BTRFS_EXCLOP_BALANCE_PAUSED);
+		paused = true;
 	}
 	/*
 	 * Balance can be canceled by:
···
 		btrfs_update_ioctl_balance_args(fs_info, bargs);
 	}
 
-	if ((ret && ret != -ECANCELED && ret != -ENOSPC) ||
-	    balance_need_close(fs_info)) {
+	/* We didn't pause, we can clean everything up. */
+	if (!paused) {
 		reset_balance_state(fs_info);
 		btrfs_exclop_finish(fs_info);
 	}
···
 	    (op == BTRFS_MAP_READ || !dev_replace_is_ongoing ||
 	     !dev_replace->tgtdev)) {
 		set_io_stripe(smap, map, stripe_index, stripe_offset, stripe_nr);
-		*mirror_num_ret = mirror_num;
+		if (mirror_num_ret)
+			*mirror_num_ret = mirror_num;
 		*bioc_ret = NULL;
 		ret = 0;
 		goto out;
···
fs/smb/client/cifssmb.c

···
 	param_offset = offsetof(struct smb_com_transaction2_spi_req,
 				InformationLevel) - 4;
 	offset = param_offset + params;
-	parm_data = ((char *) &pSMB->hdr.Protocol) + offset;
+	parm_data = ((char *)pSMB) + sizeof(pSMB->hdr.smb_buf_length) + offset;
 	pSMB->ParameterOffset = cpu_to_le16(param_offset);
 
 	/* convert to on the wire format for POSIX ACL */
···
fs/smb/client/connect.c (+23, -7)

···
 #define TLINK_IDLE_EXPIRE	(600 * HZ)
 
 /* Drop the connection to not overload the server */
-#define NUM_STATUS_IO_TIMEOUT	5
+#define MAX_STATUS_IO_TIMEOUT	5
 
 static int ip_connect(struct TCP_Server_Info *server);
 static int generic_ip_connect(struct TCP_Server_Info *server);
···
 	struct mid_q_entry *mids[MAX_COMPOUND];
 	char *bufs[MAX_COMPOUND];
 	unsigned int noreclaim_flag, num_io_timeout = 0;
+	bool pending_reconnect = false;
 
 	noreclaim_flag = memalloc_noreclaim_save();
 	cifs_dbg(FYI, "Demultiplex PID: %d\n", task_pid_nr(current));
···
 		cifs_dbg(FYI, "RFC1002 header 0x%x\n", pdu_length);
 		if (!is_smb_response(server, buf[0]))
 			continue;
+
+		pending_reconnect = false;
 next_pdu:
 		server->pdu_size = pdu_length;
 
···
 		if (server->ops->is_status_io_timeout &&
 		    server->ops->is_status_io_timeout(buf)) {
 			num_io_timeout++;
-			if (num_io_timeout > NUM_STATUS_IO_TIMEOUT) {
-				cifs_reconnect(server, false);
+			if (num_io_timeout > MAX_STATUS_IO_TIMEOUT) {
+				cifs_server_dbg(VFS,
+						"Number of request timeouts exceeded %d. Reconnecting",
+						MAX_STATUS_IO_TIMEOUT);
+
+				pending_reconnect = true;
 				num_io_timeout = 0;
-				continue;
 			}
 		}
 
···
 			if (mids[i] != NULL) {
 				mids[i]->resp_buf_size = server->pdu_size;
 
-				if (bufs[i] && server->ops->is_network_name_deleted)
-					server->ops->is_network_name_deleted(bufs[i],
-									     server);
+				if (bufs[i] != NULL) {
+					if (server->ops->is_network_name_deleted &&
+					    server->ops->is_network_name_deleted(bufs[i],
+										 server)) {
+						cifs_server_dbg(FYI,
+								"Share deleted. Reconnect needed");
+					}
+				}
 
 				if (!mids[i]->multiRsp || mids[i]->multiEnd)
 					mids[i]->callback(mids[i]);
···
 			buf = server->smallbuf;
 			goto next_pdu;
 		}
+
+		/* do this reconnect at the very end after processing all MIDs */
+		if (pending_reconnect)
+			cifs_reconnect(server, true);
+
 	} /* end while !EXITING */
 
 	/* buffer usually freed in free_mid - need to free it here on exit */
···
fs/smb/client/dfs.c (+10, -16)

···
 	return rc;
 }
 
+/*
+ * Track individual DFS referral servers used by new DFS mount.
+ *
+ * On success, their lifetime will be shared by final tcon (dfs_ses_list).
+ * Otherwise, they will be put by dfs_put_root_smb_sessions() in cifs_mount().
+ */
 static int add_root_smb_session(struct cifs_mount_ctx *mnt_ctx)
 {
 	struct smb3_fs_context *ctx = mnt_ctx->fs_ctx;
···
 		INIT_LIST_HEAD(&root_ses->list);
 
 		spin_lock(&cifs_tcp_ses_lock);
-		ses->ses_count++;
+		cifs_smb_ses_inc_refcount(ses);
 		spin_unlock(&cifs_tcp_ses_lock);
 		root_ses->ses = ses;
 		list_add_tail(&root_ses->list, &mnt_ctx->dfs_ses_list);
 	}
+	/* Select new DFS referral server so that new referrals go through it */
 	ctx->dfs_root_ses = ses;
 	return 0;
 }
···
 int dfs_mount_share(struct cifs_mount_ctx *mnt_ctx, bool *isdfs)
 {
 	struct smb3_fs_context *ctx = mnt_ctx->fs_ctx;
-	struct cifs_ses *ses;
 	bool nodfs = ctx->nodfs;
 	int rc;
 
···
 	}
 
 	*isdfs = true;
-	/*
-	 * Prevent DFS root session of being put in the first call to
-	 * cifs_mount_put_conns(). If another DFS root server was not found
-	 * while chasing the referrals (@ctx->dfs_root_ses == @ses), then we
-	 * can safely put extra refcount of @ses.
-	 */
-	ses = mnt_ctx->ses;
-	mnt_ctx->ses = NULL;
-	mnt_ctx->server = NULL;
-	rc = __dfs_mount_share(mnt_ctx);
-	if (ses == ctx->dfs_root_ses)
-		cifs_put_smb_ses(ses);
-
-	return rc;
+	add_root_smb_session(mnt_ctx);
+	return __dfs_mount_share(mnt_ctx);
 }
 
 /* Update dfs referral path of superblock */
···
fs/xfs/libxfs/xfs_da_format.h

···
 		uint8_t	valuelen;	/* actual length of value (no NULL) */
 		uint8_t	flags;		/* flags bits (see xfs_attr_leaf.h) */
 		uint8_t	nameval[];	/* name & value bytes concatenated */
-	} list[1];			/* variable sized array */
+	} list[];			/* variable sized array */
 };
 
 typedef struct xfs_attr_leaf_map {	/* RLE map of free bytes */
···
 typedef struct xfs_attr_leaf_name_local {
 	__be16	valuelen;		/* number of bytes in value */
 	__u8	namelen;		/* length of name bytes */
-	__u8	nameval[1];		/* name/value bytes */
+	/*
+	 * In Linux 6.5 this flex array was converted from nameval[1] to
+	 * nameval[]. Be very careful here about extra padding at the end;
+	 * see xfs_attr_leaf_entsize_local() for details.
+	 */
+	__u8	nameval[];		/* name/value bytes */
 } xfs_attr_leaf_name_local_t;
 
 typedef struct xfs_attr_leaf_name_remote {
 	__be32	valueblk;		/* block number of value bytes */
 	__be32	valuelen;		/* number of bytes in value */
 	__u8	namelen;		/* length of name bytes */
-	__u8	name[1];		/* name bytes */
+	/*
+	 * In Linux 6.5 this flex array was converted from name[1] to name[].
+	 * Be very careful here about extra padding at the end; see
+	 * xfs_attr_leaf_entsize_remote() for details.
+	 */
+	__u8	name[];			/* name bytes */
 } xfs_attr_leaf_name_remote_t;
 
 typedef struct xfs_attr_leafblock {
 	xfs_attr_leaf_hdr_t	hdr;	/* constant-structure header block */
-	xfs_attr_leaf_entry_t	entries[1];	/* sorted on key, not name */
+	xfs_attr_leaf_entry_t	entries[];	/* sorted on key, not name */
 	/*
 	 * The rest of the block contains the following structures after the
 	 * leaf entries, growing from the bottom up. The variables are never
···
 
 struct xfs_attr3_leafblock {
 	struct xfs_attr3_leaf_hdr	hdr;
-	struct xfs_attr_leaf_entry	entries[1];
+	struct xfs_attr_leaf_entry	entries[];
 
 	/*
 	 * The rest of the block contains the following structures after the
···
  */
 static inline int xfs_attr_leaf_entsize_remote(int nlen)
 {
-	return round_up(sizeof(struct xfs_attr_leaf_name_remote) - 1 +
-			nlen, XFS_ATTR_LEAF_NAME_ALIGN);
+	/*
+	 * Prior to Linux 6.5, struct xfs_attr_leaf_name_remote ended with
+	 * name[1], which was used as a flexarray. The layout of this struct
+	 * is 9 bytes of fixed-length fields followed by a __u8 flex array at
+	 * offset 9.
+	 *
+	 * On most architectures, struct xfs_attr_leaf_name_remote had two
+	 * bytes of implicit padding at the end of the struct to make the
+	 * struct length 12. After converting name[1] to name[], there are
+	 * three implicit padding bytes and the struct size remains 12.
+	 * However, there are compiler configurations that do not add implicit
+	 * padding at all (m68k) and have been broken for years.
+	 *
+	 * This entsize computation historically added (the xattr name length)
+	 * to (the padded struct length - 1) and rounded that sum up to the
+	 * nearest multiple of 4 (NAME_ALIGN). IOWs, round_up(11 + nlen, 4).
+	 * This is encoded in the ondisk format, so we cannot change this.
+	 *
+	 * Compute the entsize from offsetof of the flexarray and manually
+	 * adding bytes for the implicit padding.
+	 */
+	const size_t remotesize =
+			offsetof(struct xfs_attr_leaf_name_remote, name) + 2;
+
+	return round_up(remotesize + nlen, XFS_ATTR_LEAF_NAME_ALIGN);
 }
 
 static inline int xfs_attr_leaf_entsize_local(int nlen, int vlen)
 {
-	return round_up(sizeof(struct xfs_attr_leaf_name_local) - 1 +
-			nlen + vlen, XFS_ATTR_LEAF_NAME_ALIGN);
+	/*
+	 * Prior to Linux 6.5, struct xfs_attr_leaf_name_local ended with
+	 * nameval[1], which was used as a flexarray. The layout of this
+	 * struct is 3 bytes of fixed-length fields followed by a __u8 flex
+	 * array at offset 3.
+	 *
+	 * struct xfs_attr_leaf_name_local had zero bytes of implicit padding
+	 * at the end of the struct to make the struct length 4. On most
+	 * architectures, after converting nameval[1] to nameval[], there is
+	 * one implicit padding byte and the struct size remains 4. However,
+	 * there are compiler configurations that do not add implicit padding
+	 * at all (m68k) and would break.
+	 *
+	 * This entsize computation historically added (the xattr name and
+	 * value length) to (the padded struct length - 1) and rounded that sum
+	 * up to the nearest multiple of 4 (NAME_ALIGN). IOWs, the formula is
+	 * round_up(3 + nlen + vlen, 4). This is encoded in the ondisk format,
+	 * so we cannot change this.
+	 *
+	 * Compute the entsize from offsetof of the flexarray and manually
+	 * adding bytes for the implicit padding.
+	 */
+	const size_t localsize =
+			offsetof(struct xfs_attr_leaf_name_local, nameval);
+
+	return round_up(localsize + nlen + vlen, XFS_ATTR_LEAF_NAME_ALIGN);
 }
 
 static inline int xfs_attr_leaf_entsize_local_max(int bsize)
···
fs/xfs/libxfs/xfs_fs.h (+2, -2)

···
 struct xfs_attrlist {
 	__s32	al_count;	/* number of entries in attrlist */
 	__s32	al_more;	/* T/F: more attrs (do call again) */
-	__s32	al_offset[1];	/* byte offsets of attrs [var-sized] */
+	__s32	al_offset[];	/* byte offsets of attrs [var-sized] */
 };
 
 struct xfs_attrlist_ent {	/* data from attr_list() */
 	__u32	a_valuelen;	/* number bytes in value of attr */
-	char	a_name[1];	/* attr name (NULL terminated) */
+	char	a_name[];	/* attr name (NULL terminated) */
 };
 
 typedef struct xfs_fsop_attrlist_handlereq {
···
include/linux/blk-crypto-profile.h

···
 	 * keyslots while ensuring that they can't be changed concurrently.
 	 */
 	struct rw_semaphore lock;
+	struct lock_class_key lockdep_key;
 
 	/* List of idle slots, with least recently used slot at front */
 	wait_queue_head_t idle_slots_wait_queue;
···
include/linux/blk-mq.h (+3, -3)

···
 
 	/*
 	 * The rb_node is only used inside the io scheduler, requests
-	 * are pruned when moved to the dispatch queue. So let the
-	 * completion_data share space with the rb_node.
+	 * are pruned when moved to the dispatch queue. special_vec must
+	 * only be used if RQF_SPECIAL_PAYLOAD is set, and those cannot be
+	 * inserted into an IO scheduler.
 	 */
 	union {
 		struct rb_node rb_node;	/* sort/lookup */
 		struct bio_vec special_vec;
-		void *completion_data;
 	};
 
 	/*
···
include/net/bonding.h

···
 	unsigned short	vlan_id;
 };
 
-/**
+/*
  * Returns NULL if the net_device does not belong to any of the bond's slaves
  *
  * Caller must hold bond lock for read
···
include/net/cfg802154.h (+2, -1)

···
 }
 
 /**
- * @WPAN_PHY_FLAG_TRANSMIT_POWER: Indicates that transceiver will support
+ * enum wpan_phy_flags - WPAN PHY state flags
+ * @WPAN_PHY_FLAG_TXPOWER: Indicates that transceiver will support
  *	transmit power setting.
  * @WPAN_PHY_FLAG_CCA_ED_LEVEL: Indicates that transceiver will support cca ed
  *	level setting.
···
include/net/codel.h (+2, -2)

···
  * @maxpacket:	largest packet we've seen so far
  * @drop_count:	temp count of dropped packets in dequeue()
  * @drop_len:	bytes of dropped packets in dequeue()
- * ecn_mark:	number of packets we ECN marked instead of dropping
- * ce_mark:	number of packets CE marked because sojourn time was above ce_threshold
+ * @ecn_mark:	number of packets we ECN marked instead of dropping
+ * @ce_mark:	number of packets CE marked because sojourn time was above ce_threshold
  */
 struct codel_stats {
 	u32		maxpacket;
···
include/net/devlink.h (+16, -12)

···
 /**
  * struct devlink_dpipe_header - dpipe header object
  * @name: header name
- * @id: index, global/local detrmined by global bit
+ * @id: index, global/local determined by global bit
  * @fields: fields
  * @fields_count: number of fields
  * @global: indicates if header is shared like most protocol header
···
  * @header_index: header index (packets can have several headers of same
  *		  type like in case of tunnels)
  * @header: header
- * @fieled_id: field index
+ * @field_id: field index
  */
 struct devlink_dpipe_match {
 	enum devlink_dpipe_match_type type;
···
  * @header_index: header index (packets can have several headers of same
  *		  type like in case of tunnels)
  * @header: header
- * @fieled_id: field index
+ * @field_id: field index
  */
 struct devlink_dpipe_action {
 	enum devlink_dpipe_action_type type;
···
  * struct devlink_dpipe_entry - table entry object
  * @index: index of the entry in the table
  * @match_values: match values
- * @matche_values_count: count of matches tuples
+ * @match_values_count: count of matches tuples
  * @action_values: actions values
  * @action_values_count: count of actions values
  * @counter: value of counter
···
  */
 struct devlink_dpipe_table {
 	void *priv;
+	/* private: */
 	struct list_head list;
+	/* public: */
 	const char *name;
 	bool counters_enabled;
 	bool counter_control_extern;
···
 
 /**
  * struct devlink_dpipe_table_ops - dpipe_table ops
- * @actions_dump - dumps all tables actions
- * @matches_dump - dumps all tables matches
- * @entries_dump - dumps all active entries in the table
- * @counters_set_update - when changing the counter status hardware sync
+ * @actions_dump: dumps all tables actions
+ * @matches_dump: dumps all tables matches
+ * @entries_dump: dumps all active entries in the table
+ * @counters_set_update: when changing the counter status hardware sync
  *			 maybe needed to allocate/free counter related
  *			 resources
- * @size_get - get size
+ * @size_get: get size
  */
 struct devlink_dpipe_table_ops {
 	int (*actions_dump)(void *priv, struct sk_buff *skb);
···
 
 /**
  * struct devlink_dpipe_headers - dpipe headers
- * @headers - header array can be shared (global bit) or driver specific
- * @headers_count - count of headers
+ * @headers: header array can be shared (global bit) or driver specific
+ * @headers_count: count of headers
  */
 struct devlink_dpipe_headers {
 	struct devlink_dpipe_header **headers;
···
  * @size_min: minimum size which can be set
  * @size_max: maximum size which can be set
  * @size_granularity: size granularity
- * @size_unit: resource's basic unit
+ * @unit: resource's basic unit
  */
 struct devlink_resource_size_params {
 	u64 size_min;
···
 
 /**
  * struct devlink_param - devlink configuration parameter data
+ * @id: devlink parameter id number
  * @name: name of the parameter
  * @generic: indicates if the parameter is generic or driver specific
  * @type: parameter type
···
  * struct devlink_flash_update_params - Flash Update parameters
  * @fw: pointer to the firmware data to update from
  * @component: the flash component to update
+ * @overwrite_mask: which types of flash update are supported (may be %0)
  *
  * With the exception of fw, drivers must opt-in to parameters by
  * setting the appropriate bit in the supported_flash_update_params field in
···
include/net/inet_frag.h (+1, -1)

···
 };
 
 /**
- * fragment queue flags
+ * enum: fragment queue flags
  *
  * @INET_FRAG_FIRST_IN: first fragment has arrived
  * @INET_FRAG_LAST_IN: final fragment has arrived
···
include/net/llc_pdu.h

···
 /**
  * llc_pdu_decode_da - extracts dest address of input frame
  * @skb: input skb that destination address must be extracted from it
- * @sa: pointer to destination address (6 byte array).
+ * @da: pointer to destination address (6 byte array).
  *
  * This function extracts destination address(MAC) of input frame.
  */
···
 
 /**
  * llc_pdu_init_as_test_cmd - sets PDU as TEST
- * @skb - Address of the skb to build
+ * @skb: Address of the skb to build
  *
  * Sets a PDU as TEST
  */
···
 /**
  * llc_pdu_init_as_xid_cmd - sets bytes 3, 4 & 5 of LLC header as XID
  * @skb: input skb that header must be set into it.
+ * @svcs_supported: The class of the LLC (I or II)
+ * @rx_window: The size of the receive window of the LLC
  *
  * This function sets third,fourth,fifth and sixth bytes of LLC header as
  * a XID PDU.
···
include/net/pie.h

···
 /**
  * struct pie_params - contains pie parameters
  * @target:		target delay in pschedtime
- * @tudpate:		interval at which drop probability is calculated
+ * @tupdate:		interval at which drop probability is calculated
  * @limit:		total number of packets that can be in the queue
  * @alpha:		parameter to control drop probability
  * @beta:		parameter to control drop probability
···
include/net/rsi_91x.h (+1, -1)

···
-/**
+/*
  * Copyright (c) 2017 Redpine Signals Inc.
  *
  * Permission to use, copy, modify, and/or distribute this software for any
···
include/net/tcp.h (+24, -7)

···
 static inline int keepalive_intvl_when(const struct tcp_sock *tp)
 {
 	struct net *net = sock_net((struct sock *)tp);
+	int val;
 
-	return tp->keepalive_intvl ? :
-		READ_ONCE(net->ipv4.sysctl_tcp_keepalive_intvl);
+	/* Paired with WRITE_ONCE() in tcp_sock_set_keepintvl()
+	 * and do_tcp_setsockopt().
+	 */
+	val = READ_ONCE(tp->keepalive_intvl);
+
+	return val ? : READ_ONCE(net->ipv4.sysctl_tcp_keepalive_intvl);
 }
 
 static inline int keepalive_time_when(const struct tcp_sock *tp)
 {
 	struct net *net = sock_net((struct sock *)tp);
+	int val;
 
-	return tp->keepalive_time ? :
-		READ_ONCE(net->ipv4.sysctl_tcp_keepalive_time);
+	/* Paired with WRITE_ONCE() in tcp_sock_set_keepidle_locked() */
+	val = READ_ONCE(tp->keepalive_time);
+
+	return val ? : READ_ONCE(net->ipv4.sysctl_tcp_keepalive_time);
 }
 
 static inline int keepalive_probes(const struct tcp_sock *tp)
 {
 	struct net *net = sock_net((struct sock *)tp);
+	int val;
 
-	return tp->keepalive_probes ? :
-		READ_ONCE(net->ipv4.sysctl_tcp_keepalive_probes);
+	/* Paired with WRITE_ONCE() in tcp_sock_set_keepcnt()
+	 * and do_tcp_setsockopt().
+	 */
+	val = READ_ONCE(tp->keepalive_probes);
+
+	return val ? : READ_ONCE(net->ipv4.sysctl_tcp_keepalive_probes);
 }
 
 static inline u32 keepalive_time_elapsed(const struct tcp_sock *tp)
···
 static inline u32 tcp_notsent_lowat(const struct tcp_sock *tp)
 {
 	struct net *net = sock_net((struct sock *)tp);
-	return tp->notsent_lowat ?: READ_ONCE(net->ipv4.sysctl_tcp_notsent_lowat);
+	u32 val;
+
+	val = READ_ONCE(tp->notsent_lowat);
+
+	return val ?: READ_ONCE(net->ipv4.sysctl_tcp_notsent_lowat);
 }
 
 bool tcp_stream_memory_free(const struct sock *sk, int wake);
···
include/uapi/scsi/scsi_bsg_ufs.h

···
 };
 
 /**
+ * struct utp_upiu_query_v4_0 - upiu request buffer structure for
+ * query request >= UFS 4.0 spec.
+ * @opcode: command to perform B-0
+ * @idn: a value that indicates the particular type of data B-1
+ * @index: Index to further identify data B-2
+ * @selector: Index to further identify data B-3
+ * @osf4: spec field B-5
+ * @osf5: spec field B 6,7
+ * @osf6: spec field DW 8,9
+ * @osf7: spec field DW 10,11
+ */
+struct utp_upiu_query_v4_0 {
+	__u8 opcode;
+	__u8 idn;
+	__u8 index;
+	__u8 selector;
+	__u8 osf3;
+	__u8 osf4;
+	__be16 osf5;
+	__be32 osf6;
+	__be32 osf7;
+	__be32 reserved;
+};
+
+/**
  * struct utp_upiu_cmd - Command UPIU structure
  * @data_transfer_len: Data Transfer Length DW-3
  * @cdb: Command Descriptor Block CDB DW-4 to DW-7
···
io_uring/io_uring.c

···
 static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
 					  struct io_wait_queue *iowq)
 {
+	int token, ret;
+
 	if (unlikely(READ_ONCE(ctx->check_cq)))
 		return 1;
 	if (unlikely(!llist_empty(&ctx->work_llist)))
···
 		return -EINTR;
 	if (unlikely(io_should_wake(iowq)))
 		return 0;
+
+	/*
+	 * Use io_schedule_prepare/finish, so cpufreq can take into account
+	 * that the task is waiting for IO - turns out to be important for low
+	 * QD IO.
+	 */
+	token = io_schedule_prepare();
+	ret = 0;
 	if (iowq->timeout == KTIME_MAX)
 		schedule();
 	else if (!schedule_hrtimeout(&iowq->timeout, HRTIMER_MODE_ABS))
-		return -ETIME;
-	return 0;
+		ret = -ETIME;
+	io_schedule_finish(token);
+	return ret;
 }
 
 /*
···
kernel/bpf/verifier.c (+25, -7)

···
  * Since recursion is prevented by check_cfg() this algorithm
  * only needs a local stack of MAX_CALL_FRAMES to remember callsites
  */
-static int check_max_stack_depth(struct bpf_verifier_env *env)
+static int check_max_stack_depth_subprog(struct bpf_verifier_env *env, int idx)
 {
-	int depth = 0, frame = 0, idx = 0, i = 0, subprog_end;
 	struct bpf_subprog_info *subprog = env->subprog_info;
 	struct bpf_insn *insn = env->prog->insnsi;
+	int depth = 0, frame = 0, i, subprog_end;
 	bool tail_call_reachable = false;
 	int ret_insn[MAX_CALL_FRAMES];
 	int ret_prog[MAX_CALL_FRAMES];
 	int j;
 
+	i = subprog[idx].start;
 process_func:
 	/* protect against potential stack overflow that might happen when
 	 * bpf2bpf calls get combined with tailcalls. Limit the caller's stack
···
 continue_func:
 	subprog_end = subprog[idx + 1].start;
 	for (; i < subprog_end; i++) {
-		int next_insn;
+		int next_insn, sidx;
 
 		if (!bpf_pseudo_call(insn + i) && !bpf_pseudo_func(insn + i))
 			continue;
···
 
 		/* find the callee */
 		next_insn = i + insn[i].imm + 1;
-		idx = find_subprog(env, next_insn);
-		if (idx < 0) {
+		sidx = find_subprog(env, next_insn);
+		if (sidx < 0) {
 			WARN_ONCE(1, "verifier bug. No program starts at insn %d\n",
 				  next_insn);
 			return -EFAULT;
 		}
-		if (subprog[idx].is_async_cb) {
-			if (subprog[idx].has_tail_call) {
+		if (subprog[sidx].is_async_cb) {
+			if (subprog[sidx].has_tail_call) {
 				verbose(env, "verifier bug. subprog has tail_call and async cb\n");
 				return -EFAULT;
 			}
···
 			continue;
 		}
 		i = next_insn;
+		idx = sidx;
 
 		if (subprog[idx].has_tail_call)
 			tail_call_reachable = true;
···
 	i = ret_insn[frame];
 	idx = ret_prog[frame];
 	goto continue_func;
+}
+
+static int check_max_stack_depth(struct bpf_verifier_env *env)
+{
+	struct bpf_subprog_info *si = env->subprog_info;
+	int ret;
+
+	for (int i = 0; i < env->subprog_cnt; i++) {
+		if (!i || si[i].is_async_cb) {
+			ret = check_max_stack_depth_subprog(env, i);
+			if (ret < 0)
+				return ret;
+		}
+		continue;
+	}
+	return 0;
 }
 
 #ifndef CONFIG_BPF_JIT_ALWAYS_ON
···
+1-1
kernel/cgroup/cgroup.c
···
 	}
 
 	psi = cgroup_psi(cgrp);
-	new = psi_trigger_create(psi, buf, res, of->file);
+	new = psi_trigger_create(psi, buf, res, of->file, of);
 	if (IS_ERR(new)) {
 		cgroup_put(cgrp);
 		return PTR_ERR(new);
+2-3
kernel/kallsyms.c
···
 	 * LLVM appends various suffixes for local functions and variables that
 	 * must be promoted to global scope as part of LTO. This can break
 	 * hooking of static functions with kprobes. '.' is not a valid
-	 * character in an identifier in C. Suffixes observed:
+	 * character in an identifier in C. Suffixes only in LLVM LTO observed:
 	 * - foo.llvm.[0-9a-f]+
-	 * - foo.[0-9a-f]+
 	 */
-	res = strchr(s, '.');
+	res = strstr(s, ".llvm.");
 	if (res) {
 		*res = '\0';
 		return true;
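The behavioral point of the hunk above is that truncation now happens only at a ".llvm." suffix rather than at the first '.', so symbols that legitimately contain dots elsewhere survive untouched. A minimal stand-alone sketch of the tightened check:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Sketch of the kallsyms suffix handling after the change: strstr()
 * for the full ".llvm." marker instead of strchr() for any '.'. */
static bool strip_llvm_suffix(char *s)
{
	char *res = strstr(s, ".llvm.");

	if (res) {
		*res = '\0';	/* truncate at the LTO-added suffix */
		return true;
	}
	return false;		/* other dotted names are left alone */
}
```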
···
 
 /* Definitions related to the frequency QoS below. */
 
+static inline bool freq_qos_value_invalid(s32 value)
+{
+	return value < 0 && value != PM_QOS_DEFAULT_VALUE;
+}
+
 /**
  * freq_constraints_init - Initialize frequency QoS constraints.
  * @qos: Frequency QoS constraints to initialize.
···
 {
 	int ret;
 
-	if (IS_ERR_OR_NULL(qos) || !req || value < 0)
+	if (IS_ERR_OR_NULL(qos) || !req || freq_qos_value_invalid(value))
 		return -EINVAL;
 
 	if (WARN(freq_qos_request_active(req),
···
  */
 int freq_qos_update_request(struct freq_qos_request *req, s32 new_value)
 {
-	if (!req || new_value < 0)
+	if (!req || freq_qos_value_invalid(new_value))
 		return -EINVAL;
 
 	if (WARN(!freq_qos_request_active(req),
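The helper above relaxes the old `value < 0` rejection so that PM_QOS_DEFAULT_VALUE (which is -1 in the kernel headers) passes validation while other negative values still fail. A self-contained sketch, with the constant redefined locally only so the snippet compiles on its own:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* PM_QOS_DEFAULT_VALUE is (-1) in include/linux/pm_qos.h; redefined
 * here so the sketch is stand-alone. */
#define PM_QOS_DEFAULT_VALUE (-1)
typedef int32_t s32;

/* Same predicate as the hunk above: negative values are invalid,
 * except the sentinel meaning "use the default". */
static inline bool freq_qos_value_invalid(s32 value)
{
	return value < 0 && value != PM_QOS_DEFAULT_VALUE;
}
```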
···
 		continue;
 
 		/* Generate an event */
-		if (cmpxchg(&t->event, 0, 1) == 0)
-			wake_up_interruptible(&t->event_wait);
+		if (cmpxchg(&t->event, 0, 1) == 0) {
+			if (t->of)
+				kernfs_notify(t->of->kn);
+			else
+				wake_up_interruptible(&t->event_wait);
+		}
 		t->last_event_time = now;
 		/* Reset threshold breach flag once event got generated */
 		t->pending_event = false;
···
 	return 0;
 }
 
-struct psi_trigger *psi_trigger_create(struct psi_group *group,
-			char *buf, enum psi_res res, struct file *file)
+struct psi_trigger *psi_trigger_create(struct psi_group *group, char *buf,
+				       enum psi_res res, struct file *file,
+				       struct kernfs_open_file *of)
 {
 	struct psi_trigger *t;
 	enum psi_states state;
···
 
 	t->event = 0;
 	t->last_event_time = 0;
-	init_waitqueue_head(&t->event_wait);
+	t->of = of;
+	if (!of)
+		init_waitqueue_head(&t->event_wait);
 	t->pending_event = false;
 	t->aggregator = privileged ? PSI_POLL : PSI_AVGS;
···
 	 * being accessed later. Can happen if cgroup is deleted from under a
 	 * polling process.
 	 */
-	wake_up_pollfree(&t->event_wait);
+	if (t->of)
+		kernfs_notify(t->of->kn);
+	else
+		wake_up_interruptible(&t->event_wait);
 
 	if (t->aggregator == PSI_AVGS) {
 		mutex_lock(&group->avgs_lock);
···
 	if (!t)
 		return DEFAULT_POLLMASK | EPOLLERR | EPOLLPRI;
 
-	poll_wait(file, &t->event_wait, wait);
+	if (t->of)
+		kernfs_generic_poll(t->of, wait);
+	else
+		poll_wait(file, &t->event_wait, wait);
 
 	if (cmpxchg(&t->event, 1, 0) == 1)
 		ret |= EPOLLPRI;
···
 		return -EBUSY;
 	}
 
-	new = psi_trigger_create(&psi_system, buf, res, file);
+	new = psi_trigger_create(&psi_system, buf, res, file, NULL);
 	if (IS_ERR(new)) {
 		mutex_unlock(&seq->lock);
 		return PTR_ERR(new);
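The `cmpxchg(&t->event, 0, 1)` in the PSI code above guarantees that exactly one caller performs the notification for a pending event; the reader later swaps 1 back to 0 to re-arm it. A user-space sketch of that idiom with C11 atomics:

```c
#include <assert.h>
#include <stdatomic.h>

/* Sketch of the event flag pattern: whichever caller flips event from
 * 0 to 1 "wins" and is the only one that notifies; the event stays
 * pending (no duplicate notifications) until a reader clears it. */
static atomic_int event;
static int notifications;

static void maybe_notify(void)
{
	int expected = 0;

	if (atomic_compare_exchange_strong(&event, &expected, 1))
		notifications++;	/* kernfs_notify()/wake_up would go here */
}

/* Reader side: consume the event, re-arming the flag. */
static int consume_event(void)
{
	int expected = 1;

	return atomic_compare_exchange_strong(&event, &expected, 0);
}
```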
+5-5
kernel/sys.c
···
 		else
 			return -EINVAL;
 		break;
-	case PR_GET_AUXV:
-		if (arg4 || arg5)
-			return -EINVAL;
-		error = prctl_get_auxv((void __user *)arg2, arg3);
-		break;
 	default:
 		return -EINVAL;
 	}
···
 		break;
 	case PR_SET_VMA:
 		error = prctl_set_vma(arg2, arg3, arg4, arg5);
+		break;
+	case PR_GET_AUXV:
+		if (arg4 || arg5)
+			return -EINVAL;
+		error = prctl_get_auxv((void __user *)arg2, arg3);
 		break;
 #ifdef CONFIG_KSM
 	case PR_SET_MEMORY_MERGE:
+6
kernel/trace/fprobe.c
···
 		return;
 	}
 
+	/*
+	 * This user handler is shared with other kprobes and is not expected to be
+	 * called recursively. So if any other kprobe handler is running, this will
+	 * exit as kprobe does. See the section 'Share the callbacks with kprobes'
+	 * in Documentation/trace/fprobe.rst for more information.
+	 */
 	if (unlikely(kprobe_running())) {
 		fp->nmissed++;
 		goto recursion_unlock;
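The comment added above documents a reentrancy guard: if another handler is already running on this CPU, the hit is counted as missed instead of being executed recursively. A user-space sketch of that guard with a per-thread flag and illustrative names:

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the recursion guard described above: re-entry while the
 * handler is active is counted in nmissed, mirroring fp->nmissed++. */
static _Thread_local bool handler_running;
static int nmissed, handled;

static void shared_handler(void)
{
	if (handler_running) {		/* analogous to kprobe_running() */
		nmissed++;
		return;
	}
	handler_running = true;
	handled++;			/* user callback body runs here */
	if (handled == 1)
		shared_handler();	/* simulate a nested hit */
	handler_running = false;
}
```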
···
 	return ret;
 }
 
-static int copy_iovec_from_user(struct iovec *iov,
+static __noclone int copy_iovec_from_user(struct iovec *iov,
 		const struct iovec __user *uiov, unsigned long nr_segs)
 {
 	int ret = -EFAULT;
···
 	 */
 	params->explicit_connect = false;
 
-	list_del_init(&params->action);
+	hci_pend_le_list_del_init(params);
 
 	switch (params->auto_connect) {
 	case HCI_AUTO_CONN_EXPLICIT:
···
 		return;
 	case HCI_AUTO_CONN_DIRECT:
 	case HCI_AUTO_CONN_ALWAYS:
-		list_add(&params->action, &hdev->pend_le_conns);
+		hci_pend_le_list_add(params, &hdev->pend_le_conns);
 		break;
 	case HCI_AUTO_CONN_REPORT:
-		list_add(&params->action, &hdev->pend_le_reports);
+		hci_pend_le_list_add(params, &hdev->pend_le_reports);
 		break;
 	default:
 		break;
···
 	if (params->auto_connect == HCI_AUTO_CONN_DISABLED ||
 	    params->auto_connect == HCI_AUTO_CONN_REPORT ||
 	    params->auto_connect == HCI_AUTO_CONN_EXPLICIT) {
-		list_del_init(&params->action);
-		list_add(&params->action, &hdev->pend_le_conns);
+		hci_pend_le_list_del_init(params);
+		hci_pend_le_list_add(params, &hdev->pend_le_conns);
 	}
 
 	params->explicit_connect = true;
···
 	if (!link) {
 		hci_conn_drop(acl);
 		hci_conn_drop(sco);
-		return NULL;
+		return ERR_PTR(-ENOLINK);
 	}
 
 	sco->setting = setting;
···
 	if (!link) {
 		hci_conn_drop(le);
 		hci_conn_drop(cis);
-		return NULL;
+		return ERR_PTR(-ENOLINK);
 	}
 
 	/* If LE is already connected and CIS handle is already set proceed to
+34-8
net/bluetooth/hci_core.c
···
 				 struct adv_monitor *monitor)
 {
 	int status = 0;
+	int handle;
 
 	switch (hci_get_adv_monitor_offload_ext(hdev)) {
 	case HCI_ADV_MONITOR_EXT_NONE: /* also goes here when powered off */
···
 		goto free_monitor;
 
 	case HCI_ADV_MONITOR_EXT_MSFT:
+		handle = monitor->handle;
 		status = msft_remove_monitor(hdev, monitor);
 		bt_dev_dbg(hdev, "%s remove monitor %d msft status %d",
-			   hdev->name, monitor->handle, status);
+			   hdev->name, handle, status);
 		break;
 	}
 
···
 	return NULL;
 }
 
-/* This function requires the caller holds hdev->lock */
+/* This function requires the caller holds hdev->lock or rcu_read_lock */
 struct hci_conn_params *hci_pend_le_action_lookup(struct list_head *list,
 						  bdaddr_t *addr, u8 addr_type)
 {
 	struct hci_conn_params *param;
 
-	list_for_each_entry(param, list, action) {
+	rcu_read_lock();
+
+	list_for_each_entry_rcu(param, list, action) {
 		if (bacmp(&param->addr, addr) == 0 &&
-		    param->addr_type == addr_type)
+		    param->addr_type == addr_type) {
+			rcu_read_unlock();
 			return param;
+		}
 	}
 
+	rcu_read_unlock();
+
 	return NULL;
+}
+
+/* This function requires the caller holds hdev->lock */
+void hci_pend_le_list_del_init(struct hci_conn_params *param)
+{
+	if (list_empty(&param->action))
+		return;
+
+	list_del_rcu(&param->action);
+	synchronize_rcu();
+	INIT_LIST_HEAD(&param->action);
+}
+
+/* This function requires the caller holds hdev->lock */
+void hci_pend_le_list_add(struct hci_conn_params *param,
+			  struct list_head *list)
+{
+	list_add_rcu(&param->action, list);
 }
 
 /* This function requires the caller holds hdev->lock */
···
 	return params;
 }
 
-static void hci_conn_params_free(struct hci_conn_params *params)
+void hci_conn_params_free(struct hci_conn_params *params)
 {
+	hci_pend_le_list_del_init(params);
+
 	if (params->conn) {
 		hci_conn_drop(params->conn);
 		hci_conn_put(params->conn);
 	}
 
-	list_del(&params->action);
 	list_del(&params->list);
 	kfree(params);
 }
···
 			continue;
 		}
 
-		list_del(&params->list);
-		kfree(params);
+		hci_conn_params_free(params);
 	}
 
 	BT_DBG("All LE disabled connection parameters were removed");
+9-6
net/bluetooth/hci_event.c
···
 
 	params = hci_conn_params_lookup(hdev, &cp->bdaddr, cp->bdaddr_type);
 	if (params)
-		params->privacy_mode = cp->mode;
+		WRITE_ONCE(params->privacy_mode, cp->mode);
 
 	hci_dev_unlock(hdev);
···
 			hci_enable_advertising(hdev);
 		}
 
+		/* Inform sockets conn is gone before we delete it */
+		hci_disconn_cfm(conn, HCI_ERROR_UNSPECIFIED);
+
 		goto done;
 	}
···
 
 	case HCI_AUTO_CONN_DIRECT:
 	case HCI_AUTO_CONN_ALWAYS:
-		list_del_init(&params->action);
-		list_add(&params->action, &hdev->pend_le_conns);
+		hci_pend_le_list_del_init(params);
+		hci_pend_le_list_add(params, &hdev->pend_le_conns);
 		break;
 
 	default:
···
 
 	case HCI_AUTO_CONN_DIRECT:
 	case HCI_AUTO_CONN_ALWAYS:
-		list_del_init(&params->action);
-		list_add(&params->action, &hdev->pend_le_conns);
+		hci_pend_le_list_del_init(params);
+		hci_pend_le_list_add(params, &hdev->pend_le_conns);
 		hci_update_passive_scan(hdev);
 		break;
···
 	params = hci_pend_le_action_lookup(&hdev->pend_le_conns, &conn->dst,
 					   conn->dst_type);
 	if (params) {
-		list_del_init(&params->action);
+		hci_pend_le_list_del_init(params);
 		if (params->conn) {
 			hci_conn_drop(params->conn);
 			hci_conn_put(params->conn);
+108-13
net/bluetooth/hci_sync.c
···
 	return 0;
 }
 
+struct conn_params {
+	bdaddr_t addr;
+	u8 addr_type;
+	hci_conn_flags_t flags;
+	u8 privacy_mode;
+};
+
 /* Adds connection to resolve list if needed.
  * Setting params to NULL programs local hdev->irk
  */
 static int hci_le_add_resolve_list_sync(struct hci_dev *hdev,
-					struct hci_conn_params *params)
+					struct conn_params *params)
 {
 	struct hci_cp_le_add_to_resolv_list cp;
 	struct smp_irk *irk;
 	struct bdaddr_list_with_irk *entry;
+	struct hci_conn_params *p;
 
 	if (!use_ll_privacy(hdev))
 		return 0;
···
 	/* Default privacy mode is always Network */
 	params->privacy_mode = HCI_NETWORK_PRIVACY;
 
+	rcu_read_lock();
+	p = hci_pend_le_action_lookup(&hdev->pend_le_conns,
+				      &params->addr, params->addr_type);
+	if (!p)
+		p = hci_pend_le_action_lookup(&hdev->pend_le_reports,
+					      &params->addr, params->addr_type);
+	if (p)
+		WRITE_ONCE(p->privacy_mode, HCI_NETWORK_PRIVACY);
+	rcu_read_unlock();
+
 done:
 	if (hci_dev_test_flag(hdev, HCI_PRIVACY))
 		memcpy(cp.local_irk, hdev->irk, 16);
···
 
 /* Set Device Privacy Mode. */
 static int hci_le_set_privacy_mode_sync(struct hci_dev *hdev,
-					struct hci_conn_params *params)
+					struct conn_params *params)
 {
 	struct hci_cp_le_set_privacy_mode cp;
 	struct smp_irk *irk;
···
 	bacpy(&cp.bdaddr, &irk->bdaddr);
 	cp.mode = HCI_DEVICE_PRIVACY;
 
+	/* Note: params->privacy_mode is not updated since it is a copy */
+
 	return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_PRIVACY_MODE,
 				     sizeof(cp), &cp, HCI_CMD_TIMEOUT);
 }
···
  * properly set the privacy mode.
  */
 static int hci_le_add_accept_list_sync(struct hci_dev *hdev,
-				       struct hci_conn_params *params,
+				       struct conn_params *params,
 				       u8 *num_entries)
 {
 	struct hci_cp_le_add_to_accept_list cp;
···
 	return __hci_cmd_sync_sk(hdev, opcode, 0, NULL, 0, HCI_CMD_TIMEOUT, sk);
 }
 
+static struct conn_params *conn_params_copy(struct list_head *list, size_t *n)
+{
+	struct hci_conn_params *params;
+	struct conn_params *p;
+	size_t i;
+
+	rcu_read_lock();
+
+	i = 0;
+	list_for_each_entry_rcu(params, list, action)
+		++i;
+	*n = i;
+
+	rcu_read_unlock();
+
+	p = kvcalloc(*n, sizeof(struct conn_params), GFP_KERNEL);
+	if (!p)
+		return NULL;
+
+	rcu_read_lock();
+
+	i = 0;
+	list_for_each_entry_rcu(params, list, action) {
+		/* Racing adds are handled in next scan update */
+		if (i >= *n)
+			break;
+
+		/* No hdev->lock, but: addr, addr_type are immutable.
+		 * privacy_mode is only written by us or in
+		 * hci_cc_le_set_privacy_mode that we wait for.
+		 * We should be idempotent so MGMT updating flags
+		 * while we are processing is OK.
+		 */
+		bacpy(&p[i].addr, &params->addr);
+		p[i].addr_type = params->addr_type;
+		p[i].flags = READ_ONCE(params->flags);
+		p[i].privacy_mode = READ_ONCE(params->privacy_mode);
+		++i;
+	}
+
+	rcu_read_unlock();
+
+	*n = i;
+	return p;
+}
+
 /* Device must not be scanning when updating the accept list.
  *
  * Update is done using the following sequence:
···
  */
 static u8 hci_update_accept_list_sync(struct hci_dev *hdev)
 {
-	struct hci_conn_params *params;
+	struct conn_params *params;
 	struct bdaddr_list *b, *t;
 	u8 num_entries = 0;
 	bool pend_conn, pend_report;
 	u8 filter_policy;
+	size_t i, n;
 	int err;
 
 	/* Pause advertising if resolving list can be used as controllers
···
 		if (hci_conn_hash_lookup_le(hdev, &b->bdaddr, b->bdaddr_type))
 			continue;
 
+		/* Pointers not dereferenced, no locks needed */
 		pend_conn = hci_pend_le_action_lookup(&hdev->pend_le_conns,
 						      &b->bdaddr,
 						      b->bdaddr_type);
···
 	 * available accept list entries in the controller, then
 	 * just abort and return filter policy value to not use the
 	 * accept list.
+	 *
+	 * The list and params may be mutated while we wait for events,
+	 * so make a copy and iterate it.
 	 */
-	list_for_each_entry(params, &hdev->pend_le_conns, action) {
-		err = hci_le_add_accept_list_sync(hdev, params, &num_entries);
-		if (err)
-			goto done;
+
+	params = conn_params_copy(&hdev->pend_le_conns, &n);
+	if (!params) {
+		err = -ENOMEM;
+		goto done;
 	}
+
+	for (i = 0; i < n; ++i) {
+		err = hci_le_add_accept_list_sync(hdev, &params[i],
+						  &num_entries);
+		if (err) {
+			kvfree(params);
+			goto done;
+		}
+	}
+
+	kvfree(params);
 
 	/* After adding all new pending connections, walk through
 	 * the list of pending reports and also add these to the
 	 * accept list if there is still space. Abort if space runs out.
 	 */
-	list_for_each_entry(params, &hdev->pend_le_reports, action) {
-		err = hci_le_add_accept_list_sync(hdev, params, &num_entries);
-		if (err)
-			goto done;
+
+	params = conn_params_copy(&hdev->pend_le_reports, &n);
+	if (!params) {
+		err = -ENOMEM;
+		goto done;
 	}
+
+	for (i = 0; i < n; ++i) {
+		err = hci_le_add_accept_list_sync(hdev, &params[i],
+						  &num_entries);
+		if (err) {
+			kvfree(params);
+			goto done;
+		}
+	}
+
+	kvfree(params);
 
 	/* Use the allowlist unless the following conditions are all true:
 	 * - We are not currently suspending
···
 	struct hci_conn_params *p;
 
 	list_for_each_entry(p, &hdev->le_conn_params, list) {
+		hci_pend_le_list_del_init(p);
 		if (p->conn) {
 			hci_conn_drop(p->conn);
 			hci_conn_put(p->conn);
 			p->conn = NULL;
 		}
-		list_del_init(&p->action);
 	}
 
 	BT_DBG("All LE pending actions cleared");
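The `conn_params_copy()` pattern above snapshots a mutable list under the read-side lock, then does the slow per-entry work (here, HCI commands that sleep waiting for events) on the private copy with no lock held. A user-space sketch with a mutex standing in for RCU; names and types are illustrative:

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/* Sketch of the snapshot-then-iterate idea: count under the lock,
 * allocate unlocked, copy under the lock again bounded by the earlier
 * count. Entries added in between are simply picked up on the next
 * pass, as the kernel comment about racing adds notes. */
struct node { int value; struct node *next; };

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

static int *snapshot(struct node *head, size_t *n)
{
	struct node *p;
	size_t i = 0;
	int *copy;

	pthread_mutex_lock(&list_lock);
	for (p = head; p; p = p->next)
		i++;
	pthread_mutex_unlock(&list_lock);
	*n = i;

	copy = calloc(*n, sizeof(*copy));	/* no lock while allocating */
	if (!copy)
		return NULL;

	pthread_mutex_lock(&list_lock);
	i = 0;
	for (p = head; p && i < *n; p = p->next)
		copy[i++] = p->value;
	pthread_mutex_unlock(&list_lock);

	*n = i;			/* actual number copied */
	return copy;
}
```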
+32-23
net/bluetooth/iso.c
···
 {
 	struct iso_conn *conn = hcon->iso_data;
 
-	if (conn)
+	if (conn) {
+		if (!conn->hcon)
+			conn->hcon = hcon;
 		return conn;
+	}
 
 	conn = kzalloc(sizeof(*conn), GFP_KERNEL);
 	if (!conn)
···
 		goto unlock;
 	}
 
-	hci_dev_unlock(hdev);
-	hci_dev_put(hdev);
+	lock_sock(sk);
 
 	err = iso_chan_add(conn, sk, NULL);
-	if (err)
-		return err;
-
-	lock_sock(sk);
+	if (err) {
+		release_sock(sk);
+		goto unlock;
+	}
 
 	/* Update source addr of the socket */
 	bacpy(&iso_pi(sk)->src, &hcon->src);
···
 	}
 
 	release_sock(sk);
-	return err;
 
 unlock:
 	hci_dev_unlock(hdev);
···
 		goto unlock;
 	}
 
-	hci_dev_unlock(hdev);
-	hci_dev_put(hdev);
+	lock_sock(sk);
 
 	err = iso_chan_add(conn, sk, NULL);
-	if (err)
-		return err;
-
-	lock_sock(sk);
+	if (err) {
+		release_sock(sk);
+		goto unlock;
+	}
 
 	/* Update source addr of the socket */
 	bacpy(&iso_pi(sk)->src, &hcon->src);
···
 	}
 
 	release_sock(sk);
-	return err;
 
 unlock:
 	hci_dev_unlock(hdev);
···
 			size_t len)
 {
 	struct sock *sk = sock->sk;
-	struct iso_conn *conn = iso_pi(sk)->conn;
 	struct sk_buff *skb, **frag;
+	size_t mtu;
 	int err;
 
 	BT_DBG("sock %p, sk %p", sock, sk);
···
 	if (msg->msg_flags & MSG_OOB)
 		return -EOPNOTSUPP;
 
-	if (sk->sk_state != BT_CONNECTED)
-		return -ENOTCONN;
+	lock_sock(sk);
 
-	skb = bt_skb_sendmsg(sk, msg, len, conn->hcon->hdev->iso_mtu,
-			     HCI_ISO_DATA_HDR_SIZE, 0);
+	if (sk->sk_state != BT_CONNECTED) {
+		release_sock(sk);
+		return -ENOTCONN;
+	}
+
+	mtu = iso_pi(sk)->conn->hcon->hdev->iso_mtu;
+
+	release_sock(sk);
+
+	skb = bt_skb_sendmsg(sk, msg, len, mtu, HCI_ISO_DATA_HDR_SIZE, 0);
 	if (IS_ERR(skb))
 		return PTR_ERR(skb);
···
 	while (len) {
 		struct sk_buff *tmp;
 
-		tmp = bt_skb_sendmsg(sk, msg, len, conn->hcon->hdev->iso_mtu,
-				     0, 0);
+		tmp = bt_skb_sendmsg(sk, msg, len, mtu, 0, 0);
 		if (IS_ERR(tmp)) {
 			kfree_skb(skb);
 			return PTR_ERR(tmp);
···
 	BT_DBG("sk %p", sk);
 
 	if (test_and_clear_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags)) {
+		lock_sock(sk);
 		switch (sk->sk_state) {
 		case BT_CONNECT2:
-			lock_sock(sk);
 			iso_conn_defer_accept(pi->conn->hcon);
 			sk->sk_state = BT_CONFIG;
 			release_sock(sk);
 			return 0;
 		case BT_CONNECT:
+			release_sock(sk);
 			return iso_connect_cis(sk);
+		default:
+			release_sock(sk);
+			break;
 		}
 	}
+12-16
net/bluetooth/mgmt.c
···
 	/* Needed for AUTO_OFF case where might not "really"
 	 * have been powered off.
 	 */
-	list_del_init(&p->action);
+	hci_pend_le_list_del_init(p);
 
 	switch (p->auto_connect) {
 	case HCI_AUTO_CONN_DIRECT:
 	case HCI_AUTO_CONN_ALWAYS:
-		list_add(&p->action, &hdev->pend_le_conns);
+		hci_pend_le_list_add(p, &hdev->pend_le_conns);
 		break;
 	case HCI_AUTO_CONN_REPORT:
-		list_add(&p->action, &hdev->pend_le_reports);
+		hci_pend_le_list_add(p, &hdev->pend_le_reports);
 		break;
 	default:
 		break;
···
 		goto unlock;
 	}
 
-	params->flags = current_flags;
+	WRITE_ONCE(params->flags, current_flags);
 	status = MGMT_STATUS_SUCCESS;
 
 	/* Update passive scan if HCI_CONN_FLAG_DEVICE_PRIVACY
···
 
 	bt_dev_dbg(hdev, "err %d", err);
 
-	memcpy(&rp.addr, &cp->addr.bdaddr, sizeof(rp.addr));
+	memcpy(&rp.addr, &cp->addr, sizeof(rp.addr));
 
 	status = mgmt_status(err);
 	if (status == MGMT_STATUS_SUCCESS) {
···
 	if (params->auto_connect == auto_connect)
 		return 0;
 
-	list_del_init(&params->action);
+	hci_pend_le_list_del_init(params);
 
 	switch (auto_connect) {
 	case HCI_AUTO_CONN_DISABLED:
···
 		 * connect to device, keep connecting.
 		 */
 		if (params->explicit_connect)
-			list_add(&params->action, &hdev->pend_le_conns);
+			hci_pend_le_list_add(params, &hdev->pend_le_conns);
 		break;
 	case HCI_AUTO_CONN_REPORT:
 		if (params->explicit_connect)
-			list_add(&params->action, &hdev->pend_le_conns);
+			hci_pend_le_list_add(params, &hdev->pend_le_conns);
 		else
-			list_add(&params->action, &hdev->pend_le_reports);
+			hci_pend_le_list_add(params, &hdev->pend_le_reports);
 		break;
 	case HCI_AUTO_CONN_DIRECT:
 	case HCI_AUTO_CONN_ALWAYS:
 		if (!is_connected(hdev, addr, addr_type))
-			list_add(&params->action, &hdev->pend_le_conns);
+			hci_pend_le_list_add(params, &hdev->pend_le_conns);
 		break;
 	}
···
 			goto unlock;
 		}
 
-		list_del(&params->action);
-		list_del(&params->list);
-		kfree(params);
+		hci_conn_params_free(params);
 
 		device_removed(sk, hdev, &cp->addr.bdaddr, cp->addr.type);
 	} else {
···
 			p->auto_connect = HCI_AUTO_CONN_EXPLICIT;
 			continue;
 		}
-		list_del(&p->action);
-		list_del(&p->list);
-		kfree(p);
+		hci_conn_params_free(p);
 	}
 
 	bt_dev_dbg(hdev, "All LE connection parameters were removed");
+12-11
net/bluetooth/sco.c
···
 	struct hci_dev *hdev = hcon->hdev;
 	struct sco_conn *conn = hcon->sco_data;
 
-	if (conn)
+	if (conn) {
+		if (!conn->hcon)
+			conn->hcon = hcon;
 		return conn;
+	}
 
 	conn = kzalloc(sizeof(struct sco_conn), GFP_KERNEL);
 	if (!conn)
···
 		goto unlock;
 	}
 
-	hci_dev_unlock(hdev);
-	hci_dev_put(hdev);
-
 	conn = sco_conn_add(hcon);
 	if (!conn) {
 		hci_conn_drop(hcon);
-		return -ENOMEM;
+		err = -ENOMEM;
+		goto unlock;
 	}
 
-	err = sco_chan_add(conn, sk, NULL);
-	if (err)
-		return err;
-
 	lock_sock(sk);
+
+	err = sco_chan_add(conn, sk, NULL);
+	if (err) {
+		release_sock(sk);
+		goto unlock;
+	}
 
 	/* Update source addr of the socket */
 	bacpy(&sco_pi(sk)->src, &hcon->src);
···
 	}
 
 	release_sock(sk);
-
-	return err;
 
 unlock:
 	hci_dev_unlock(hdev);
···
 
 	icsk = inet_csk(sk_listener);
 	net = sock_net(sk_listener);
-	max_syn_ack_retries = icsk->icsk_syn_retries ? :
+	max_syn_ack_retries = READ_ONCE(icsk->icsk_syn_retries) ? :
 		READ_ONCE(net->ipv4.sysctl_tcp_synack_retries);
 	/* Normally all the openreqs are young and become mature
 	 * (i.e. converted to established socket) for first timeout.
+2-15
net/ipv4/inet_hashtables.c
···
 	spin_lock(lock);
 	if (osk) {
 		WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
-		ret = sk_hashed(osk);
-		if (ret) {
-			/* Before deleting the node, we insert a new one to make
-			 * sure that the look-up-sk process would not miss either
-			 * of them and that at least one node would exist in ehash
-			 * table all the time. Otherwise there's a tiny chance
-			 * that lookup process could find nothing in ehash table.
-			 */
-			__sk_nulls_add_node_tail_rcu(sk, list);
-			sk_nulls_del_node_init_rcu(osk);
-		}
-		goto unlock;
-	}
-	if (found_dup_sk) {
+		ret = sk_nulls_del_node_init_rcu(osk);
+	} else if (found_dup_sk) {
 		*found_dup_sk = inet_ehash_lookup_by_sk(sk, list);
 		if (*found_dup_sk)
 			ret = false;
···
 	if (ret)
 		__sk_nulls_add_node_rcu(sk, list);
 
-unlock:
 	spin_unlock(lock);
 
 	return ret;
···
 		return -EINVAL;
 
 	lock_sock(sk);
-	inet_csk(sk)->icsk_syn_retries = val;
+	WRITE_ONCE(inet_csk(sk)->icsk_syn_retries, val);
 	release_sock(sk);
 	return 0;
 }
···
 void tcp_sock_set_user_timeout(struct sock *sk, u32 val)
 {
 	lock_sock(sk);
-	inet_csk(sk)->icsk_user_timeout = val;
+	WRITE_ONCE(inet_csk(sk)->icsk_user_timeout, val);
 	release_sock(sk);
 }
 EXPORT_SYMBOL(tcp_sock_set_user_timeout);
···
 	if (val < 1 || val > MAX_TCP_KEEPIDLE)
 		return -EINVAL;
 
-	tp->keepalive_time = val * HZ;
+	/* Paired with READ_ONCE() in keepalive_time_when() */
+	WRITE_ONCE(tp->keepalive_time, val * HZ);
 	if (sock_flag(sk, SOCK_KEEPOPEN) &&
 	    !((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN))) {
 		u32 elapsed = keepalive_time_elapsed(tp);
···
 		return -EINVAL;
 
 	lock_sock(sk);
-	tcp_sk(sk)->keepalive_intvl = val * HZ;
+	WRITE_ONCE(tcp_sk(sk)->keepalive_intvl, val * HZ);
 	release_sock(sk);
 	return 0;
 }
···
 		return -EINVAL;
 
 	lock_sock(sk);
-	tcp_sk(sk)->keepalive_probes = val;
+	/* Paired with READ_ONCE() in keepalive_probes() */
+	WRITE_ONCE(tcp_sk(sk)->keepalive_probes, val);
 	release_sock(sk);
 	return 0;
 }
···
 		if (val < 1 || val > MAX_TCP_KEEPINTVL)
 			err = -EINVAL;
 		else
-			tp->keepalive_intvl = val * HZ;
+			WRITE_ONCE(tp->keepalive_intvl, val * HZ);
 		break;
 	case TCP_KEEPCNT:
 		if (val < 1 || val > MAX_TCP_KEEPCNT)
 			err = -EINVAL;
 		else
-			tp->keepalive_probes = val;
+			WRITE_ONCE(tp->keepalive_probes, val);
 		break;
 	case TCP_SYNCNT:
 		if (val < 1 || val > MAX_TCP_SYNCNT)
 			err = -EINVAL;
 		else
-			icsk->icsk_syn_retries = val;
+			WRITE_ONCE(icsk->icsk_syn_retries, val);
 		break;
 
 	case TCP_SAVE_SYN:
···
 
 	case TCP_LINGER2:
 		if (val < 0)
-			tp->linger2 = -1;
+			WRITE_ONCE(tp->linger2, -1);
 		else if (val > TCP_FIN_TIMEOUT_MAX / HZ)
-			tp->linger2 = TCP_FIN_TIMEOUT_MAX;
+			WRITE_ONCE(tp->linger2, TCP_FIN_TIMEOUT_MAX);
 		else
-			tp->linger2 = val * HZ;
+			WRITE_ONCE(tp->linger2, val * HZ);
 		break;
 
 	case TCP_DEFER_ACCEPT:
 		/* Translate value in seconds to number of retransmits */
-		icsk->icsk_accept_queue.rskq_defer_accept =
-			secs_to_retrans(val, TCP_TIMEOUT_INIT / HZ,
-					TCP_RTO_MAX / HZ);
+		WRITE_ONCE(icsk->icsk_accept_queue.rskq_defer_accept,
+			   secs_to_retrans(val, TCP_TIMEOUT_INIT / HZ,
+					   TCP_RTO_MAX / HZ));
 		break;
 
 	case TCP_WINDOW_CLAMP:
···
 		if (val < 0)
 			err = -EINVAL;
 		else
-			icsk->icsk_user_timeout = val;
+			WRITE_ONCE(icsk->icsk_user_timeout, val);
 		break;
 
 	case TCP_FASTOPEN:
···
 		if (!tp->repair)
 			err = -EPERM;
 		else
-			tp->tsoffset = val - tcp_time_stamp_raw();
+			WRITE_ONCE(tp->tsoffset, val - tcp_time_stamp_raw());
 		break;
 	case TCP_REPAIR_WINDOW:
 		err = tcp_repair_set_window(tp, optval, optlen);
 		break;
 	case TCP_NOTSENT_LOWAT:
-		tp->notsent_lowat = val;
+		WRITE_ONCE(tp->notsent_lowat, val);
 		sk->sk_write_space(sk);
 		break;
 	case TCP_INQ:
···
 	case TCP_TX_DELAY:
 		if (val)
 			tcp_enable_tx_delay();
-		tp->tcp_tx_delay = val;
+		WRITE_ONCE(tp->tcp_tx_delay, val);
 		break;
 	default:
 		err = -ENOPROTOOPT;
···
 		val = keepalive_probes(tp);
 		break;
 	case TCP_SYNCNT:
-		val = icsk->icsk_syn_retries ? :
+		val = READ_ONCE(icsk->icsk_syn_retries) ? :
 		      READ_ONCE(net->ipv4.sysctl_tcp_syn_retries);
 		break;
 	case TCP_LINGER2:
-		val = tp->linger2;
+		val = READ_ONCE(tp->linger2);
 		if (val >= 0)
 			val = (val ? : READ_ONCE(net->ipv4.sysctl_tcp_fin_timeout)) / HZ;
 		break;
 	case TCP_DEFER_ACCEPT:
-		val = retrans_to_secs(icsk->icsk_accept_queue.rskq_defer_accept,
-				      TCP_TIMEOUT_INIT / HZ, TCP_RTO_MAX / HZ);
+		val = READ_ONCE(icsk->icsk_accept_queue.rskq_defer_accept);
+		val = retrans_to_secs(val, TCP_TIMEOUT_INIT / HZ,
+				      TCP_RTO_MAX / HZ);
 		break;
 	case TCP_WINDOW_CLAMP:
 		val = tp->window_clamp;
···
 		break;
 
 	case TCP_USER_TIMEOUT:
-		val = icsk->icsk_user_timeout;
+		val = READ_ONCE(icsk->icsk_user_timeout);
 		break;
 
 	case TCP_FASTOPEN:
-		val = icsk->icsk_accept_queue.fastopenq.max_qlen;
+		val = READ_ONCE(icsk->icsk_accept_queue.fastopenq.max_qlen);
 		break;
 
 	case TCP_FASTOPEN_CONNECT:
···
 		break;
 
 	case TCP_TX_DELAY:
-		val = tp->tcp_tx_delay;
+		val = READ_ONCE(tp->tcp_tx_delay);
 		break;
 
 	case TCP_TIMESTAMP:
-		val = tcp_time_stamp_raw() + tp->tsoffset;
+		val = tcp_time_stamp_raw() + READ_ONCE(tp->tsoffset);
 		break;
 	case TCP_NOTSENT_LOWAT:
-		val = tp->notsent_lowat;
+		val = READ_ONCE(tp->notsent_lowat);
 		break;
 	case TCP_INQ:
 		val = tp->recvmsg_inq;
+4-2
net/ipv4/tcp_fastopen.c
···
 static bool tcp_fastopen_queue_check(struct sock *sk)
 {
 	struct fastopen_queue *fastopenq;
+	int max_qlen;
 
 	/* Make sure the listener has enabled fastopen, and we don't
 	 * exceed the max # of pending TFO requests allowed before trying
···
 	 * temporarily vs a server not supporting Fast Open at all.
 	 */
 	fastopenq = &inet_csk(sk)->icsk_accept_queue.fastopenq;
-	if (fastopenq->max_qlen == 0)
+	max_qlen = READ_ONCE(fastopenq->max_qlen);
+	if (max_qlen == 0)
 		return false;
 
-	if (fastopenq->qlen >= fastopenq->max_qlen) {
+	if (fastopenq->qlen >= max_qlen) {
 		struct request_sock *req1;
 		spin_lock(&fastopenq->lock);
 		req1 = fastopenq->rskq_rst_head;
···
 	newicsk->icsk_ack.lrcvtime = tcp_jiffies32;

 	newtp->lsndtime = tcp_jiffies32;
-	newsk->sk_txhash = treq->txhash;
+	newsk->sk_txhash = READ_ONCE(treq->txhash);
 	newtp->total_retrans = req->num_retrans;

 	tcp_init_xmit_timers(newsk);
···
 	newtp->max_window = newtp->snd_wnd;

 	if (newtp->rx_opt.tstamp_ok) {
-		newtp->rx_opt.ts_recent = req->ts_recent;
+		newtp->rx_opt.ts_recent = READ_ONCE(req->ts_recent);
 		newtp->rx_opt.ts_recent_stamp = ktime_get_seconds();
 		newtp->tcp_header_len = sizeof(struct tcphdr) + TCPOLEN_TSTAMP_ALIGNED;
 	} else {
···
 		tcp_parse_options(sock_net(sk), skb, &tmp_opt, 0, NULL);

 		if (tmp_opt.saw_tstamp) {
-			tmp_opt.ts_recent = req->ts_recent;
+			tmp_opt.ts_recent = READ_ONCE(req->ts_recent);
 			if (tmp_opt.rcv_tsecr)
 				tmp_opt.rcv_tsecr -= tcp_rsk(req)->ts_off;
 			/* We do not store true stamp, but it is not required,
···

 	/* In sequence, PAWS is OK. */

+	/* TODO: We probably should defer ts_recent change once
+	 * we take ownership of @req.
+	 */
 	if (tmp_opt.saw_tstamp && !after(TCP_SKB_CB(skb)->seq, tcp_rsk(req)->rcv_nxt))
-		req->ts_recent = tmp_opt.rcv_tsval;
+		WRITE_ONCE(req->ts_recent, tmp_opt.rcv_tsval);

 	if (TCP_SKB_CB(skb)->seq == tcp_rsk(req)->rcv_isn) {
 		/* Truncate SYN, it is out of window starting
+3-3
net/ipv4/tcp_output.c
···
 	if (likely(ireq->tstamp_ok)) {
 		opts->options |= OPTION_TS;
 		opts->tsval = tcp_skb_timestamp(skb) + tcp_rsk(req)->ts_off;
-		opts->tsecr = req->ts_recent;
+		opts->tsecr = READ_ONCE(req->ts_recent);
 		remaining -= TCPOLEN_TSTAMP_ALIGNED;
 	}
 	if (likely(ireq->sack_ok)) {
···
 	rcu_read_lock();
 	md5 = tcp_rsk(req)->af_specific->req_md5_lookup(sk, req_to_sk(req));
 #endif
-	skb_set_hash(skb, tcp_rsk(req)->txhash, PKT_HASH_TYPE_L4);
+	skb_set_hash(skb, READ_ONCE(tcp_rsk(req)->txhash), PKT_HASH_TYPE_L4);
 	/* bpf program will be interested in the tcp_flags */
 	TCP_SKB_CB(skb)->tcp_flags = TCPHDR_SYN | TCPHDR_ACK;
 	tcp_header_size = tcp_synack_options(sk, req, mss, skb, &opts, md5,
···

 	/* Paired with WRITE_ONCE() in sock_setsockopt() */
 	if (READ_ONCE(sk->sk_txrehash) == SOCK_TXREHASH_ENABLED)
-		tcp_rsk(req)->txhash = net_tx_rndhash();
+		WRITE_ONCE(tcp_rsk(req)->txhash, net_tx_rndhash());
 	res = af_ops->send_synack(sk, NULL, &fl, req, NULL, TCP_SYNACK_NORMAL,
 				  NULL);
 	if (!res) {
+11-5
net/ipv4/udp_offload.c
···
 	__sum16 check;
 	__be16 newlen;

-	if (skb_shinfo(gso_skb)->gso_type & SKB_GSO_FRAGLIST)
-		return __udp_gso_segment_list(gso_skb, features, is_ipv6);
-
 	mss = skb_shinfo(gso_skb)->gso_size;
 	if (gso_skb->len <= sizeof(*uh) + mss)
 		return ERR_PTR(-EINVAL);
+
+	if (skb_gso_ok(gso_skb, features | NETIF_F_GSO_ROBUST)) {
+		/* Packet is from an untrusted source, reset gso_segs. */
+		skb_shinfo(gso_skb)->gso_segs = DIV_ROUND_UP(gso_skb->len - sizeof(*uh),
+							     mss);
+		return NULL;
+	}
+
+	if (skb_shinfo(gso_skb)->gso_type & SKB_GSO_FRAGLIST)
+		return __udp_gso_segment_list(gso_skb, features, is_ipv6);

 	skb_pull(gso_skb, sizeof(*uh));
···
 	if (!pskb_may_pull(skb, sizeof(struct udphdr)))
 		goto out;

-	if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4 &&
-	    !skb_gso_ok(skb, features | NETIF_F_GSO_ROBUST))
+	if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4)
 		return __udp_gso_segment(skb, features, false);

 	mss = skb_shinfo(skb)->gso_size;
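The moved check above recomputes gso_segs for packets from untrusted sources as DIV_ROUND_UP(payload, mss). The arithmetic can be checked in isolation (UDP_HDR_LEN stands in for sizeof(struct udphdr)):

```c
#define UDP_HDR_LEN 8U  /* sizeof(struct udphdr) */

/* Mirror of the DIV_ROUND_UP() used when resetting gso_segs: payload
 * bytes (skb length minus the UDP header) divided by the MSS, rounded
 * up so a partial final segment still counts. */
static unsigned int udp_gso_segs(unsigned int skb_len, unsigned int mss)
{
	unsigned int payload = skb_len - UDP_HDR_LEN;

	return (payload + mss - 1) / mss;  /* DIV_ROUND_UP(payload, mss) */
}
```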
···
 	void (*sta_handler)(struct sk_buff *skb);
 	void (*sap_handler)(struct llc_sap *sap, struct sk_buff *skb);

-	if (!net_eq(dev_net(dev), &init_net))
-		goto drop;
-
 	/*
 	 * When the interface is in promisc. mode, drop all the crap that it
 	 * receives, do not try to analyse it.
+11-7
net/llc/llc_sap.c
···

 static inline bool llc_dgram_match(const struct llc_sap *sap,
 				   const struct llc_addr *laddr,
-				   const struct sock *sk)
+				   const struct sock *sk,
+				   const struct net *net)
 {
 	struct llc_sock *llc = llc_sk(sk);

 	return sk->sk_type == SOCK_DGRAM &&
-	       llc->laddr.lsap == laddr->lsap &&
-	       ether_addr_equal(llc->laddr.mac, laddr->mac);
+	       net_eq(sock_net(sk), net) &&
+	       llc->laddr.lsap == laddr->lsap &&
+	       ether_addr_equal(llc->laddr.mac, laddr->mac);
 }

 /**
  * llc_lookup_dgram - Finds dgram socket for the local sap/mac
  * @sap: SAP
  * @laddr: address of local LLC (MAC + SAP)
+ * @net: netns to look up a socket in
  *
  * Search socket list of the SAP and finds connection using the local
  * mac, and local sap. Returns pointer for socket found, %NULL otherwise.
  */
 static struct sock *llc_lookup_dgram(struct llc_sap *sap,
-				     const struct llc_addr *laddr)
+				     const struct llc_addr *laddr,
+				     const struct net *net)
 {
 	struct sock *rc;
 	struct hlist_nulls_node *node;
···
 	rcu_read_lock_bh();
 again:
 	sk_nulls_for_each_rcu(rc, node, laddr_hb) {
-		if (llc_dgram_match(sap, laddr, rc)) {
+		if (llc_dgram_match(sap, laddr, rc, net)) {
 			/* Extra checks required by SLAB_TYPESAFE_BY_RCU */
 			if (unlikely(!refcount_inc_not_zero(&rc->sk_refcnt)))
 				goto again;
 			if (unlikely(llc_sk(rc)->sap != sap ||
-				     !llc_dgram_match(sap, laddr, rc))) {
+				     !llc_dgram_match(sap, laddr, rc, net))) {
 				sock_put(rc);
 				continue;
 			}
···
 		llc_sap_mcast(sap, &laddr, skb);
 		kfree_skb(skb);
 	} else {
-		struct sock *sk = llc_lookup_dgram(sap, &laddr);
+		struct sock *sk = llc_lookup_dgram(sap, &laddr, dev_net(skb->dev));
 		if (sk) {
 			llc_sap_rcv(sap, skb, sk);
 			sock_put(sk);
···
 	[TCA_U32_FLAGS]		= { .type = NLA_U32 },
 };

+static void u32_unbind_filter(struct tcf_proto *tp, struct tc_u_knode *n,
+			      struct nlattr **tb)
+{
+	if (tb[TCA_U32_CLASSID])
+		tcf_unbind_filter(tp, &n->res);
+}
+
+static void u32_bind_filter(struct tcf_proto *tp, struct tc_u_knode *n,
+			    unsigned long base, struct nlattr **tb)
+{
+	if (tb[TCA_U32_CLASSID]) {
+		n->res.classid = nla_get_u32(tb[TCA_U32_CLASSID]);
+		tcf_bind_filter(tp, &n->res, base);
+	}
+}
+
 static int u32_set_parms(struct net *net, struct tcf_proto *tp,
-			 unsigned long base,
 			 struct tc_u_knode *n, struct nlattr **tb,
 			 struct nlattr *est, u32 flags, u32 fl_flags,
 			 struct netlink_ext_ack *extack)
···

 		if (ht_old)
 			ht_old->refcnt--;
-	}
-	if (tb[TCA_U32_CLASSID]) {
-		n->res.classid = nla_get_u32(tb[TCA_U32_CLASSID]);
-		tcf_bind_filter(tp, &n->res, base);
 	}

 	if (ifindex >= 0)
···
 	if (!new)
 		return -ENOMEM;

-	err = u32_set_parms(net, tp, base, new, tb,
-			    tca[TCA_RATE], flags, new->flags,
-			    extack);
+	err = u32_set_parms(net, tp, new, tb, tca[TCA_RATE],
+			    flags, new->flags, extack);

 	if (err) {
 		__u32_destroy_key(new);
 		return err;
 	}

+	u32_bind_filter(tp, new, base, tb);
+
 	err = u32_replace_hw_knode(tp, new, flags, extack);
 	if (err) {
+		u32_unbind_filter(tp, new, tb);
+
+		if (tb[TCA_U32_LINK]) {
+			struct tc_u_hnode *ht_old;
+
+			ht_old = rtnl_dereference(n->ht_down);
+			if (ht_old)
+				ht_old->refcnt++;
+		}
 		__u32_destroy_key(new);
 		return err;
 	}
···
 	}
 #endif

-	err = u32_set_parms(net, tp, base, n, tb, tca[TCA_RATE],
+	err = u32_set_parms(net, tp, n, tb, tca[TCA_RATE],
 			    flags, n->flags, extack);
+
+	u32_bind_filter(tp, n, base, tb);
+
 	if (err == 0) {
 		struct tc_u_knode __rcu **ins;
 		struct tc_u_knode *pins;

 		err = u32_replace_hw_knode(tp, n, flags, extack);
 		if (err)
-			goto errhw;
+			goto errunbind;

 		if (!tc_in_hw(n->flags))
 			n->flags |= TCA_CLS_FLAGS_NOT_IN_HW;
···
 		return 0;
 	}

-errhw:
+errunbind:
+	u32_unbind_filter(tp, n, tb);
+
 #ifdef CONFIG_CLS_U32_MARK
 	free_percpu(n->pcpu_success);
 #endif
+3-3
scripts/kallsyms.c
···
 	 * ASCII[_]   = 5f
 	 * ASCII[a-z] = 61,7a
 	 *
-	 * As above, replacing '.' with '\0' does not affect the main sorting,
-	 * but it helps us with subsorting.
+	 * As above, replacing the first '.' in ".llvm." with '\0' does not
+	 * affect the main sorting, but it helps us with subsorting.
 	 */
-	p = strchr(s, '.');
+	p = strstr(s, ".llvm.");
 	if (p)
 		*p = '\0';
 }
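The kallsyms change narrows the trim point from the first '.' (strchr) to the first ".llvm." occurrence (strstr), so symbol names that legitimately contain dots survive while LLVM's appended ".llvm.<hash>" suffixes are still cut off. The trimming logic, extracted as a standalone function:

```c
#include <string.h>

/* Trim an LLVM-appended ".llvm.<hash>" suffix in place. A plain
 * strchr(s, '.') would also truncate symbols whose names contain
 * dots for other reasons; strstr() only matches the suffix marker. */
static void trim_llvm_suffix(char *s)
{
	char *p = strstr(s, ".llvm.");

	if (p)
		*p = '\0';
}
```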
+24-11
security/keys/request_key.c
···
 	set_bit(KEY_FLAG_USER_CONSTRUCT, &key->flags);

 	if (dest_keyring) {
-		ret = __key_link_lock(dest_keyring, &ctx->index_key);
+		ret = __key_link_lock(dest_keyring, &key->index_key);
 		if (ret < 0)
 			goto link_lock_failed;
-		ret = __key_link_begin(dest_keyring, &ctx->index_key, &edit);
-		if (ret < 0)
-			goto link_prealloc_failed;
 	}

-	/* attach the key to the destination keyring under lock, but we do need
+	/*
+	 * Attach the key to the destination keyring under lock, but we do need
 	 * to do another check just in case someone beat us to it whilst we
-	 * waited for locks */
+	 * waited for locks.
+	 *
+	 * The caller might specify a comparison function which looks for keys
+	 * that do not exactly match but are still equivalent from the caller's
+	 * perspective. The __key_link_begin() operation must be done only after
+	 * an actual key is determined.
+	 */
 	mutex_lock(&key_construction_mutex);

 	rcu_read_lock();
···
 	if (!IS_ERR(key_ref))
 		goto key_already_present;

-	if (dest_keyring)
+	if (dest_keyring) {
+		ret = __key_link_begin(dest_keyring, &key->index_key, &edit);
+		if (ret < 0)
+			goto link_alloc_failed;
 		__key_link(dest_keyring, key, &edit);
+	}

 	mutex_unlock(&key_construction_mutex);
 	if (dest_keyring)
-		__key_link_end(dest_keyring, &ctx->index_key, edit);
+		__key_link_end(dest_keyring, &key->index_key, edit);
 	mutex_unlock(&user->cons_lock);
 	*_key = key;
 	kleave(" = 0 [%d]", key_serial(key));
···
 	mutex_unlock(&key_construction_mutex);
 	key = key_ref_to_ptr(key_ref);
 	if (dest_keyring) {
+		ret = __key_link_begin(dest_keyring, &key->index_key, &edit);
+		if (ret < 0)
+			goto link_alloc_failed_unlocked;
 		ret = __key_link_check_live_key(dest_keyring, key);
 		if (ret == 0)
 			__key_link(dest_keyring, key, &edit);
-		__key_link_end(dest_keyring, &ctx->index_key, edit);
+		__key_link_end(dest_keyring, &key->index_key, edit);
 		if (ret < 0)
 			goto link_check_failed;
 	}
···
 	kleave(" = %d [linkcheck]", ret);
 	return ret;

-link_prealloc_failed:
-	__key_link_end(dest_keyring, &ctx->index_key, edit);
+link_alloc_failed:
+	mutex_unlock(&key_construction_mutex);
+link_alloc_failed_unlocked:
+	__key_link_end(dest_keyring, &key->index_key, edit);
 link_lock_failed:
 	mutex_unlock(&user->cons_lock);
 	key_put(key);
+1-1
security/keys/trusted-keys/trusted_tpm2.c
···
 }

 /**
- * tpm_buf_append_auth() - append TPMS_AUTH_COMMAND to the buffer.
+ * tpm2_buf_append_auth() - append TPMS_AUTH_COMMAND to the buffer.
  *
  * @buf: an allocated tpm_buf instance
  * @session_handle: session handle
···
 #define __NR_set_mempolicy_home_node 450
 __SYSCALL(__NR_set_mempolicy_home_node, sys_set_mempolicy_home_node)

+#define __NR_cachestat 451
+__SYSCALL(__NR_cachestat, sys_cachestat)
+
 #undef __NR_syscalls
-#define __NR_syscalls 451
+#define __NR_syscalls 452

 /*
  * 32 bit systems traditionally used different
+93-2
tools/include/uapi/drm/i915_drm.h
···
 #define I915_PMU_ENGINE_SEMA(class, instance) \
 	__I915_PMU_ENGINE(class, instance, I915_SAMPLE_SEMA)

-#define __I915_PMU_OTHER(x) (__I915_PMU_ENGINE(0xff, 0xff, 0xf) + 1 + (x))
+/*
+ * Top 4 bits of every non-engine counter are GT id.
+ */
+#define __I915_PMU_GT_SHIFT (60)
+
+#define ___I915_PMU_OTHER(gt, x) \
+	(((__u64)__I915_PMU_ENGINE(0xff, 0xff, 0xf) + 1 + (x)) | \
+	((__u64)(gt) << __I915_PMU_GT_SHIFT))
+
+#define __I915_PMU_OTHER(x) ___I915_PMU_OTHER(0, x)

 #define I915_PMU_ACTUAL_FREQUENCY	__I915_PMU_OTHER(0)
 #define I915_PMU_REQUESTED_FREQUENCY	__I915_PMU_OTHER(1)
···
 #define I915_PMU_SOFTWARE_GT_AWAKE_TIME	__I915_PMU_OTHER(4)

 #define I915_PMU_LAST /* Deprecated - do not use */ I915_PMU_RC6_RESIDENCY
+
+#define __I915_PMU_ACTUAL_FREQUENCY(gt)		___I915_PMU_OTHER(gt, 0)
+#define __I915_PMU_REQUESTED_FREQUENCY(gt)	___I915_PMU_OTHER(gt, 1)
+#define __I915_PMU_INTERRUPTS(gt)		___I915_PMU_OTHER(gt, 2)
+#define __I915_PMU_RC6_RESIDENCY(gt)		___I915_PMU_OTHER(gt, 3)
+#define __I915_PMU_SOFTWARE_GT_AWAKE_TIME(gt)	___I915_PMU_OTHER(gt, 4)

 /* Each region is a minimum of 16k, and there are at most 255 of them.
  */
···
  * If the IOCTL is successful, the returned parameter will be set to one of the
  * following values:
  * * 0 if HuC firmware load is not complete,
- * * 1 if HuC firmware is authenticated and running.
+ * * 1 if HuC firmware is loaded and fully authenticated,
+ * * 2 if HuC firmware is loaded and authenticated for clear media only
  */
 #define I915_PARAM_HUC_STATUS		 42

···
  * timestamp frequency, but differs on some platforms.
  */
 #define I915_PARAM_OA_TIMESTAMP_FREQUENCY 57
+
+/*
+ * Query the status of PXP support in i915.
+ *
+ * The query can fail in the following scenarios with the listed error codes:
+ *     -ENODEV = PXP support is not available on the GPU device or in the
+ *               kernel due to missing component drivers or kernel configs.
+ *
+ * If the IOCTL is successful, the returned parameter will be set to one of
+ * the following values:
+ *     1 = PXP feature is supported and is ready for use.
+ *     2 = PXP feature is supported but should be ready soon (pending
+ *         initialization of non-i915 system dependencies).
+ *
+ * NOTE: When param is supported (positive return values), user space should
+ * still refer to the GEM PXP context-creation UAPI header specs to be
+ * aware of possible failure due to system state machine at the time.
+ */
+#define I915_PARAM_PXP_STATUS		 58

 /* Must be kept compact -- no holes and well documented */
···
  *
  * -ENODEV: feature not available
  * -EPERM: trying to mark a recoverable or not bannable context as protected
+ * -ENXIO: A dependency such as a component driver or firmware is not yet
+ *         loaded so user space may need to attempt again. Depending on the
+ *         device, this error may be reported if protected context creation is
+ *         attempted very early after kernel start because the internal timeout
+ *         waiting for such dependencies is not guaranteed to be larger than
+ *         required (numbers differ depending on system and kernel config):
+ *            - ADL/RPL: dependencies may take up to 3 seconds from kernel start
+ *                       while context creation internal timeout is 250 milisecs
+ *            - MTL: dependencies may take up to 8 seconds from kernel start
+ *                   while context creation internal timeout is 250 milisecs
+ *         NOTE: such dependencies happen once, so a subsequent call to create a
+ *         protected context after a prior successful call will not experience
+ *         such timeouts and will not return -ENXIO (unless the driver is reloaded,
+ *         or, depending on the device, resumes from a suspended state).
+ * -EIO: The firmware did not succeed in creating the protected context.
  */
 #define I915_CONTEXT_PARAM_PROTECTED_CONTENT    0xd
 /* Must be kept compact -- no holes and well documented */
···
 	 *
 	 * For I915_GEM_CREATE_EXT_PROTECTED_CONTENT usage see
 	 * struct drm_i915_gem_create_ext_protected_content.
+	 *
+	 * For I915_GEM_CREATE_EXT_SET_PAT usage see
+	 * struct drm_i915_gem_create_ext_set_pat.
 	 */
 #define I915_GEM_CREATE_EXT_MEMORY_REGIONS 0
 #define I915_GEM_CREATE_EXT_PROTECTED_CONTENT 1
+#define I915_GEM_CREATE_EXT_SET_PAT 2
 	__u64 extensions;
 };
···
 	struct i915_user_extension base;
 	/** @flags: reserved for future usage, currently MBZ */
 	__u32 flags;
+};
+
+/**
+ * struct drm_i915_gem_create_ext_set_pat - The
+ * I915_GEM_CREATE_EXT_SET_PAT extension.
+ *
+ * If this extension is provided, the specified caching policy (PAT index) is
+ * applied to the buffer object.
+ *
+ * Below is an example on how to create an object with specific caching policy:
+ *
+ * .. code-block:: C
+ *
+ *      struct drm_i915_gem_create_ext_set_pat set_pat_ext = {
+ *              .base = { .name = I915_GEM_CREATE_EXT_SET_PAT },
+ *              .pat_index = 0,
+ *      };
+ *      struct drm_i915_gem_create_ext create_ext = {
+ *              .size = PAGE_SIZE,
+ *              .extensions = (uintptr_t)&set_pat_ext,
+ *      };
+ *
+ *      int err = ioctl(fd, DRM_IOCTL_I915_GEM_CREATE_EXT, &create_ext);
+ *      if (err) ...
+ */
+struct drm_i915_gem_create_ext_set_pat {
+	/** @base: Extension link. See struct i915_user_extension. */
+	struct i915_user_extension base;
+	/**
+	 * @pat_index: PAT index to be set
+	 * PAT index is a bit field in Page Table Entry to control caching
+	 * behaviors for GPU accesses. The definition of PAT index is
+	 * platform dependent and can be found in hardware specifications,
+	 */
+	__u32 pat_index;
+	/** @rsvd: reserved for future use */
+	__u32 rsvd;
 };

 /* ID of the protected content session managed by i915 when PXP is active */
+5
tools/include/uapi/linux/fcntl.h
···

 #define AT_RECURSIVE		0x8000	/* Apply to the entire subtree */

+/* Flags for name_to_handle_at(2). We reuse AT_ flag space to save bits... */
+#define AT_HANDLE_FID		AT_REMOVEDIR	/* file handle is needed to
+						   compare object identity and may not
+						   be usable to open_by_handle_at(2) */
+
 #endif /* _UAPI_LINUX_FCNTL_H */
+5-1
tools/include/uapi/linux/kvm.h
···
 #define KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP 225
 #define KVM_CAP_PMU_EVENT_MASKED_EVENTS 226
 #define KVM_CAP_COUNTER_OFFSET 227
+#define KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE 228
+#define KVM_CAP_ARM_SUPPORTED_BLOCK_SIZES 229

 #ifdef KVM_CAP_IRQ_ROUTING
···
 #define KVM_DEV_TYPE_XIVE	KVM_DEV_TYPE_XIVE
 	KVM_DEV_TYPE_ARM_PV_TIME,
 #define KVM_DEV_TYPE_ARM_PV_TIME	KVM_DEV_TYPE_ARM_PV_TIME
+	KVM_DEV_TYPE_RISCV_AIA,
+#define KVM_DEV_TYPE_RISCV_AIA	KVM_DEV_TYPE_RISCV_AIA
 	KVM_DEV_TYPE_MAX,
 };
···
 #define KVM_GET_DEBUGREGS         _IOR(KVMIO,  0xa1, struct kvm_debugregs)
 #define KVM_SET_DEBUGREGS         _IOW(KVMIO,  0xa2, struct kvm_debugregs)
 /*
- * vcpu version available with KVM_ENABLE_CAP
+ * vcpu version available with KVM_CAP_ENABLE_CAP
  * vm version available with KVM_CAP_ENABLE_CAP_VM
  */
 #define KVM_ENABLE_CAP            _IOW(KVMIO,  0xa3, struct kvm_enable_cap)
···
 #define VHOST_SET_LOG_BASE _IOW(VHOST_VIRTIO, 0x04, __u64)
 /* Specify an eventfd file descriptor to signal on log write. */
 #define VHOST_SET_LOG_FD _IOW(VHOST_VIRTIO, 0x07, int)
+/* By default, a device gets one vhost_worker that its virtqueues share. This
+ * command allows the owner of the device to create an additional vhost_worker
+ * for the device. It can later be bound to 1 or more of its virtqueues using
+ * the VHOST_ATTACH_VRING_WORKER command.
+ *
+ * This must be called after VHOST_SET_OWNER and the caller must be the owner
+ * of the device. The new thread will inherit caller's cgroups and namespaces,
+ * and will share the caller's memory space. The new thread will also be
+ * counted against the caller's RLIMIT_NPROC value.
+ *
+ * The worker's ID used in other commands will be returned in
+ * vhost_worker_state.
+ */
+#define VHOST_NEW_WORKER _IOR(VHOST_VIRTIO, 0x8, struct vhost_worker_state)
+/* Free a worker created with VHOST_NEW_WORKER if it's not attached to any
+ * virtqueue. If userspace is not able to call this for workers its created,
+ * the kernel will free all the device's workers when the device is closed.
+ */
+#define VHOST_FREE_WORKER _IOW(VHOST_VIRTIO, 0x9, struct vhost_worker_state)

 /* Ring setup. */
 /* Set number of descriptors in ring. This parameter can not
···
 #define VHOST_VRING_BIG_ENDIAN 1
 #define VHOST_SET_VRING_ENDIAN _IOW(VHOST_VIRTIO, 0x13, struct vhost_vring_state)
 #define VHOST_GET_VRING_ENDIAN _IOW(VHOST_VIRTIO, 0x14, struct vhost_vring_state)
+/* Attach a vhost_worker created with VHOST_NEW_WORKER to one of the device's
+ * virtqueues.
+ *
+ * This will replace the virtqueue's existing worker. If the replaced worker
+ * is no longer attached to any virtqueues, it can be freed with
+ * VHOST_FREE_WORKER.
+ */
+#define VHOST_ATTACH_VRING_WORKER _IOW(VHOST_VIRTIO, 0x15,		\
+				       struct vhost_vring_worker)
+/* Return the vring worker's ID */
+#define VHOST_GET_VRING_WORKER _IOWR(VHOST_VIRTIO, 0x16,		\
+				     struct vhost_vring_worker)

 /* The following ioctls use eventfd file descriptors to signal and poll
  * for events. */
+79-2
tools/include/uapi/sound/asound.h
···
 #define SNDRV_PCM_INFO_DOUBLE		0x00000004	/* Double buffering needed for PCM start/stop */
 #define SNDRV_PCM_INFO_BATCH		0x00000010	/* double buffering */
 #define SNDRV_PCM_INFO_SYNC_APPLPTR	0x00000020	/* need the explicit sync of appl_ptr update */
+#define SNDRV_PCM_INFO_PERFECT_DRAIN	0x00000040	/* silencing at the end of stream is not required */
 #define SNDRV_PCM_INFO_INTERLEAVED	0x00000100	/* channels are interleaved */
 #define SNDRV_PCM_INFO_NONINTERLEAVED	0x00000200	/* channels are not interleaved */
 #define SNDRV_PCM_INFO_COMPLEX		0x00000400	/* complex frame organization (mmap only) */
···
 #define SNDRV_PCM_HW_PARAMS_NORESAMPLE	(1<<0)	/* avoid rate resampling */
 #define SNDRV_PCM_HW_PARAMS_EXPORT_BUFFER	(1<<1)	/* export buffer */
 #define SNDRV_PCM_HW_PARAMS_NO_PERIOD_WAKEUP	(1<<2)	/* disable period wakeups */
+#define SNDRV_PCM_HW_PARAMS_NO_DRAIN_SILENCE	(1<<3)	/* suppress drain with the filling
+							 * of the silence samples
+							 */

 struct snd_interval {
 	unsigned int min, max;
···
  * Raw MIDI section - /dev/snd/midi??
  */

-#define SNDRV_RAWMIDI_VERSION		SNDRV_PROTOCOL_VERSION(2, 0, 2)
+#define SNDRV_RAWMIDI_VERSION		SNDRV_PROTOCOL_VERSION(2, 0, 4)

 enum {
 	SNDRV_RAWMIDI_STREAM_OUTPUT = 0,
···
 #define SNDRV_RAWMIDI_INFO_OUTPUT		0x00000001
 #define SNDRV_RAWMIDI_INFO_INPUT		0x00000002
 #define SNDRV_RAWMIDI_INFO_DUPLEX		0x00000004
+#define SNDRV_RAWMIDI_INFO_UMP			0x00000008

 struct snd_rawmidi_info {
 	unsigned int device;		/* RO/WR (control): device number */
···
 };
 #endif

+/* UMP EP info flags */
+#define SNDRV_UMP_EP_INFO_STATIC_BLOCKS		0x01
+
+/* UMP EP Protocol / JRTS capability bits */
+#define SNDRV_UMP_EP_INFO_PROTO_MIDI_MASK	0x0300
+#define SNDRV_UMP_EP_INFO_PROTO_MIDI1		0x0100 /* MIDI 1.0 */
+#define SNDRV_UMP_EP_INFO_PROTO_MIDI2		0x0200 /* MIDI 2.0 */
+#define SNDRV_UMP_EP_INFO_PROTO_JRTS_MASK	0x0003
+#define SNDRV_UMP_EP_INFO_PROTO_JRTS_TX		0x0001 /* JRTS Transmit */
+#define SNDRV_UMP_EP_INFO_PROTO_JRTS_RX		0x0002 /* JRTS Receive */
+
+/* UMP Endpoint information */
+struct snd_ump_endpoint_info {
+	int card;			/* card number */
+	int device;			/* device number */
+	unsigned int flags;		/* additional info */
+	unsigned int protocol_caps;	/* protocol capabilities */
+	unsigned int protocol;		/* current protocol */
+	unsigned int num_blocks;	/* # of function blocks */
+	unsigned short version;		/* UMP major/minor version */
+	unsigned short family_id;	/* MIDI device family ID */
+	unsigned short model_id;	/* MIDI family model ID */
+	unsigned int manufacturer_id;	/* MIDI manufacturer ID */
+	unsigned char sw_revision[4];	/* software revision */
+	unsigned short padding;
+	unsigned char name[128];	/* endpoint name string */
+	unsigned char product_id[128];	/* unique product id string */
+	unsigned char reserved[32];
+} __packed;
+
+/* UMP direction */
+#define SNDRV_UMP_DIR_INPUT		0x01
+#define SNDRV_UMP_DIR_OUTPUT		0x02
+#define SNDRV_UMP_DIR_BIDIRECTION	0x03
+
+/* UMP block info flags */
+#define SNDRV_UMP_BLOCK_IS_MIDI1	(1U << 0) /* MIDI 1.0 port w/o restrict */
+#define SNDRV_UMP_BLOCK_IS_LOWSPEED	(1U << 1) /* 31.25Kbps B/W MIDI1 port */
+
+/* UMP block user-interface hint */
+#define SNDRV_UMP_BLOCK_UI_HINT_UNKNOWN		0x00
+#define SNDRV_UMP_BLOCK_UI_HINT_RECEIVER	0x01
+#define SNDRV_UMP_BLOCK_UI_HINT_SENDER		0x02
+#define SNDRV_UMP_BLOCK_UI_HINT_BOTH		0x03
+
+/* UMP groups and blocks */
+#define SNDRV_UMP_MAX_GROUPS		16
+#define SNDRV_UMP_MAX_BLOCKS		32
+
+/* UMP Block information */
+struct snd_ump_block_info {
+	int card;			/* card number */
+	int device;			/* device number */
+	unsigned char block_id;		/* block ID (R/W) */
+	unsigned char direction;	/* UMP direction */
+	unsigned char active;		/* Activeness */
+	unsigned char first_group;	/* first group ID */
+	unsigned char num_groups;	/* number of groups */
+	unsigned char midi_ci_version;	/* MIDI-CI support version */
+	unsigned char sysex8_streams;	/* max number of sysex8 streams */
+	unsigned char ui_hint;		/* user interface hint */
+	unsigned int flags;		/* various info flags */
+	unsigned char name[128];	/* block name string */
+	unsigned char reserved[32];
+} __packed;
+
 #define SNDRV_RAWMIDI_IOCTL_PVERSION	_IOR('W', 0x00, int)
 #define SNDRV_RAWMIDI_IOCTL_INFO	_IOR('W', 0x01, struct snd_rawmidi_info)
 #define SNDRV_RAWMIDI_IOCTL_USER_PVERSION _IOW('W', 0x02, int)
···
 #define SNDRV_RAWMIDI_IOCTL_STATUS	_IOWR('W', 0x20, struct snd_rawmidi_status)
 #define SNDRV_RAWMIDI_IOCTL_DROP	_IOW('W', 0x30, int)
 #define SNDRV_RAWMIDI_IOCTL_DRAIN	_IOW('W', 0x31, int)
+/* Additional ioctls for UMP rawmidi devices */
+#define SNDRV_UMP_IOCTL_ENDPOINT_INFO	_IOR('W', 0x40, struct snd_ump_endpoint_info)
+#define SNDRV_UMP_IOCTL_BLOCK_INFO	_IOR('W', 0x41, struct snd_ump_block_info)

 /*
  * Timer section - /dev/snd/timer
···
  *                                                                          *
  ****************************************************************************/

-#define SNDRV_CTL_VERSION		SNDRV_PROTOCOL_VERSION(2, 0, 8)
+#define SNDRV_CTL_VERSION		SNDRV_PROTOCOL_VERSION(2, 0, 9)

 struct snd_ctl_card_info {
 	int card;			/* card number */
···
 #define SNDRV_CTL_IOCTL_RAWMIDI_NEXT_DEVICE _IOWR('U', 0x40, int)
 #define SNDRV_CTL_IOCTL_RAWMIDI_INFO	_IOWR('U', 0x41, struct snd_rawmidi_info)
 #define SNDRV_CTL_IOCTL_RAWMIDI_PREFER_SUBDEVICE _IOW('U', 0x42, int)
+#define SNDRV_CTL_IOCTL_UMP_NEXT_DEVICE	_IOWR('U', 0x43, int)
+#define SNDRV_CTL_IOCTL_UMP_ENDPOINT_INFO _IOWR('U', 0x44, struct snd_ump_endpoint_info)
+#define SNDRV_CTL_IOCTL_UMP_BLOCK_INFO	_IOWR('U', 0x45, struct snd_ump_block_info)
 #define SNDRV_CTL_IOCTL_POWER		_IOWR('U', 0xd0, int)
 #define SNDRV_CTL_IOCTL_POWER_STATE	_IOR('U', 0xd1, int)
+12-6
tools/lib/subcmd/help.c
···
 	while (ci < cmds->cnt && ei < excludes->cnt) {
 		cmp = strcmp(cmds->names[ci]->name, excludes->names[ei]->name);
 		if (cmp < 0) {
-			zfree(&cmds->names[cj]);
-			cmds->names[cj++] = cmds->names[ci++];
+			if (ci == cj) {
+				ci++;
+				cj++;
+			} else {
+				zfree(&cmds->names[cj]);
+				cmds->names[cj++] = cmds->names[ci++];
+			}
 		} else if (cmp == 0) {
 			ci++;
 			ei++;
···
 			ei++;
 		}
 	}
-
-	while (ci < cmds->cnt) {
-		zfree(&cmds->names[cj]);
-		cmds->names[cj++] = cmds->names[ci++];
+	if (ci != cj) {
+		while (ci < cmds->cnt) {
+			zfree(&cmds->names[cj]);
+			cmds->names[cj++] = cmds->names[ci++];
+		}
 	}
 	for (ci = cj; ci < cmds->cnt; ci++)
 		zfree(&cmds->names[ci]);
···
 448	common	process_mrelease		sys_process_mrelease
 449	common	futex_waitv			sys_futex_waitv
 450	nospu	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	common	cachestat			sys_cachestat
+1
tools/perf/arch/s390/entry/syscalls/syscall.tbl
···
 448  common	process_mrelease	sys_process_mrelease		sys_process_mrelease
 449  common	futex_waitv		sys_futex_waitv			sys_futex_waitv
 450  common	set_mempolicy_home_node	sys_set_mempolicy_home_node	sys_set_mempolicy_home_node
+451  common	cachestat		sys_cachestat			sys_cachestat
+1
tools/perf/arch/x86/entry/syscalls/syscall_64.tbl
···
 448	common	process_mrelease	sys_process_mrelease
 449	common	futex_waitv		sys_futex_waitv
 450	common	set_mempolicy_home_node	sys_set_mempolicy_home_node
+451	common	cachestat		sys_cachestat

 #
 # Due to a historical design error, certain syscalls are numbered differently
···
 #define SCM_RIGHTS	0x01		/* rw: access rights (array of int) */
 #define SCM_CREDENTIALS 0x02		/* rw: struct ucred		*/
 #define SCM_SECURITY	0x03		/* rw: security label		*/
+#define SCM_PIDFD	0x04		/* ro: pidfd (int)		*/

 struct ucred {
 	__u32	pid;
···
  */

 #define MSG_ZEROCOPY	0x4000000	/* Use user data in kernel path */
+#define MSG_SPLICE_PAGES 0x8000000	/* Splice the pages from the iterator in sendmsg() */
 #define MSG_FASTOPEN	0x20000000	/* Send data in TCP SYN */
 #define MSG_CMSG_CLOEXEC 0x40000000	/* Set close_on_exec for file
 					   descriptor received through
···
 #define MSG_CMSG_COMPAT	0		/* We never have 32 bit fixups */
 #endif

+/* Flags to be cleared on entry by sendmsg and sendmmsg syscalls */
+#define MSG_INTERNAL_SENDMSG_FLAGS \
+	(MSG_SPLICE_PAGES | MSG_SENDPAGE_NOPOLICY | MSG_SENDPAGE_DECRYPTED)

 /* Setsockoptions(2) level. Thanks to BSD these must match IPPROTO_xxx */
 #define SOL_IP	0
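MSG_INTERNAL_SENDMSG_FLAGS exists so the sendmsg()/sendmmsg() entry points can strip kernel-internal flags (such as MSG_SPLICE_PAGES) that userspace must not be able to pass in. A sketch of that masking; MSG_SPLICE_PAGES is copied from the hunk above, while the MSG_SENDPAGE_* values here are illustrative stand-ins, not authoritative:

```c
#define MSG_SPLICE_PAGES        0x8000000  /* value from the header above */
#define MSG_SENDPAGE_NOPOLICY   0x10000    /* illustrative value */
#define MSG_SENDPAGE_DECRYPTED  0x100000   /* illustrative value */
#define MSG_DONTWAIT            0x40       /* ordinary userspace flag */

#define MSG_INTERNAL_SENDMSG_FLAGS \
	(MSG_SPLICE_PAGES | MSG_SENDPAGE_NOPOLICY | MSG_SENDPAGE_DECRYPTED)

/* Clear the internal-only bits on syscall entry; userspace-visible
 * flags like MSG_DONTWAIT pass through unchanged. */
static unsigned int sanitize_msg_flags(unsigned int flags)
{
	return flags & ~MSG_INTERNAL_SENDMSG_FLAGS;
}
```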
···
 	done

 # Avoid any output on non arm64 on emit_tests
-emit_tests: all
+emit_tests:
 	@for DIR in $(ARM64_SUBTARGETS); do				\
 		BUILD_TARGET=$(OUTPUT)/$$DIR;			\
 		make OUTPUT=$$BUILD_TARGET -C $$DIR $@;		\
···
 	done

 # Avoid any output on non riscv on emit_tests
-emit_tests: all
+emit_tests:
 	@for DIR in $(RISCV_SUBTARGETS); do				\
 		BUILD_TARGET=$(OUTPUT)/$$DIR;			\
 		$(MAKE) OUTPUT=$$BUILD_TARGET -C $$DIR $@;	\