Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

scsi: lpfc: Fix buffer free/clear order in deferred receive path

Fix a use-after-free window by correcting the buffer release sequence in
the deferred receive path. The code freed the RQ buffer first and only
then cleared the context pointer (ctxp->rqb_buffer) under the lock.
Concurrent paths (e.g., the ABTS and repost paths) also inspect and
release the same pointer under the lock, so the old order left a window
for a double free or use-after-free.

Note that the repost path already uses the correct pattern: detach the
pointer under the lock, then free it after dropping the lock. The
deferred path should do the same.

Fixes: 472e146d1cf3 ("scsi: lpfc: Correct upcalling nvmet_fc transport during io done downcall")
Cc: stable@vger.kernel.org
Signed-off-by: John Evans <evans1210144@gmail.com>
Link: https://lore.kernel.org/r/20250828044008.743-1-evans1210144@gmail.com
Reviewed-by: Justin Tee <justin.tee@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>

 drivers/scsi/lpfc/lpfc_nvmet.c | 10 ++++++----
--- a/drivers/scsi/lpfc/lpfc_nvmet.c
+++ b/drivers/scsi/lpfc/lpfc_nvmet.c
@@ -1243,7 +1243,7 @@
 	struct lpfc_nvmet_tgtport *tgtp;
 	struct lpfc_async_xchg_ctx *ctxp =
 		container_of(rsp, struct lpfc_async_xchg_ctx, hdlrctx.fcp_req);
-	struct rqb_dmabuf *nvmebuf = ctxp->rqb_buffer;
+	struct rqb_dmabuf *nvmebuf;
 	struct lpfc_hba *phba = ctxp->phba;
 	unsigned long iflag;
 
@@ -1251,13 +1251,18 @@
 	lpfc_nvmeio_data(phba, "NVMET DEFERRCV: xri x%x sz %d CPU %02x\n",
 			 ctxp->oxid, ctxp->size, raw_smp_processor_id());
 
+	spin_lock_irqsave(&ctxp->ctxlock, iflag);
+	nvmebuf = ctxp->rqb_buffer;
 	if (!nvmebuf) {
+		spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
 		lpfc_printf_log(phba, KERN_INFO, LOG_NVME_IOERR,
 				"6425 Defer rcv: no buffer oxid x%x: "
 				"flg %x ste %x\n",
 				ctxp->oxid, ctxp->flag, ctxp->state);
 		return;
 	}
+	ctxp->rqb_buffer = NULL;
+	spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
 
 	tgtp = phba->targetport->private;
 	if (tgtp)
@@ -1270,9 +1275,6 @@
 
 	/* Free the nvmebuf since a new buffer already replaced it */
 	nvmebuf->hrq->rqbp->rqb_free_buffer(phba, nvmebuf);
-	spin_lock_irqsave(&ctxp->ctxlock, iflag);
-	ctxp->rqb_buffer = NULL;
-	spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
 }
 
 /**