Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mei: vsc: Run event callback from a workqueue

The event_notify callback in some cases calls vsc_tp_xfer(), which checks
tp->assert_cnt and waits for it through the tp->xfer_wait wait-queue.

And tp->assert_cnt is increased and the tp->xfer_wait queue is woken up
from the interrupt handler.

So the interrupt handler which is running the event callback is waiting for
itself to signal that it can continue.

This happens to work because the event callback runs from the threaded
ISR handler and while that is running the hard ISR handler will still
get called a second / third time for further interrupts and it is the hard
ISR handler which does the atomic_inc() and wake_up() calls.

But having the threaded ISR handler wait for its own interrupt to trigger
again is not how a threaded ISR handler is supposed to be used.

Move the running of the event callback from a threaded interrupt handler
to a workqueue since a threaded ISR should not wait for events from its
own interrupt.

This is a preparation patch for moving the atomic_inc() and wake_up() calls
to the threaded ISR handler, which is necessary to fix a locking issue.

Fixes: 566f5ca97680 ("mei: Add transport driver for IVSC device")
Signed-off-by: Hans de Goede <hansg@kernel.org>
Link: https://lore.kernel.org/r/20250623085052.12347-9-hansg@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Authored by Hans de Goede, committed by Greg Kroah-Hartman
de88b02c 6175c697

+11 -6
drivers/misc/mei/vsc-tp.c
···
 #include <linux/platform_device.h>
 #include <linux/spi/spi.h>
 #include <linux/types.h>
+#include <linux/workqueue.h>

 #include "vsc-tp.h"
···
	atomic_t assert_cnt;
	wait_queue_head_t xfer_wait;
+	struct work_struct event_work;

	vsc_tp_event_cb_t event_notify;
	void *event_notify_context;
···
	wake_up(&tp->xfer_wait);

-	return IRQ_WAKE_THREAD;
+	schedule_work(&tp->event_work);
+
+	return IRQ_HANDLED;
 }

-static irqreturn_t vsc_tp_thread_isr(int irq, void *data)
+static void vsc_tp_event_work(struct work_struct *work)
 {
-	struct vsc_tp *tp = data;
+	struct vsc_tp *tp = container_of(work, struct vsc_tp, event_work);

	guard(mutex)(&tp->event_notify_mutex);

	if (tp->event_notify)
		tp->event_notify(tp->event_notify_context);
-
-	return IRQ_HANDLED;
 }

 /* wakeup firmware and wait for response */
···
	tp->spi = spi;

	irq_set_status_flags(spi->irq, IRQ_DISABLE_UNLAZY);
-	ret = request_threaded_irq(spi->irq, vsc_tp_isr, vsc_tp_thread_isr,
+	ret = request_threaded_irq(spi->irq, vsc_tp_isr, NULL,
				   IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
				   dev_name(dev), tp);
	if (ret)
···
	mutex_init(&tp->mutex);
	mutex_init(&tp->event_notify_mutex);
+	INIT_WORK(&tp->event_work, vsc_tp_event_work);

	/* only one child acpi device */
	ret = acpi_dev_for_each_child(ACPI_COMPANION(dev),
···
err_destroy_lock:
	free_irq(spi->irq, tp);

+	cancel_work_sync(&tp->event_work);
	mutex_destroy(&tp->event_notify_mutex);
	mutex_destroy(&tp->mutex);
···
	free_irq(spi->irq, tp);

+	cancel_work_sync(&tp->event_work);
	mutex_destroy(&tp->event_notify_mutex);
	mutex_destroy(&tp->mutex);
 }