Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

HID: intel-ish-hid: ishtp: Fix ishtp client sending disordered message

A timing issue was captured during ishtp client send stress tests: the
ISH firmware received out-of-order messages. This is a rare scenario,
as the current set of ISH client drivers don't send much data to the
firmware, but that may not remain the case going forward.

When the message size is bigger than the IPC MTU, ishtp splits the message
into fragments and uses a serialized asynchronous method to send them.
The call stack:
ishtp_cl_send_msg_ipc->ipc_tx_callback(first fragment)->
ishtp_send_msg(with callback)->write_ipc_to_queue->
write_ipc_from_queue->callback->ipc_tx_callback(next fragment)......

When an IPC write-complete interrupt is received, the driver also calls
write_ipc_from_queue->ipc_tx_callback in the ISR to start sending the next
fragment.

Although ipc_tx_callback uses a spin_lock to protect message splitting, the
serialized sending method calls back into ipc_tx_callback, so the actual
sending is not done under the spin_lock. As a result, the driver cannot
guarantee that all fragments are sent in order.

Consider this scenario:
ipc_tx_callback has just finished splitting a fragment but has not yet called
ishtp_send_msg when a write-complete interrupt arrives; then
ISR->write_ipc_from_queue->ipc_tx_callback->ishtp_send_msg->write_ipc_to_queue......

Because the ISR has higher execution priority than a normal thread, the new
fragment is sent out before the previous one. The resulting disordered
message is invalid to the firmware.

The solution is to send fragments synchronously: use ishtp_write_message to
write the fragments into the tx queue directly, one by one, instead of
ishtp_send_msg, which writes a single fragment and relies on a completion
callback. As no completion callback is used anymore, rename ipc_tx_callback
to ipc_tx_send.

Signed-off-by: Even Xu <even.xu@intel.com>
Acked-by: Srinivas Pandruvada <srinivas.pandruvada@intel.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>

authored by Even Xu, committed by Jiri Kosina
e1fa0767 94553f8a

+39 -29
drivers/hid/intel-ish-hid/ishtp/client.c
@@ -626,13 +626,14 @@
 }
 
 /**
- * ipc_tx_callback() - IPC tx callback function
+ * ipc_tx_send() - IPC tx send function
  * @prm: Pointer to client device instance
  *
- * Send message over IPC either first time or on callback on previous message
- * completion
+ * Send message over IPC. Message will be split into fragments
+ * if message size is bigger than IPC FIFO size, and all
+ * fragments will be sent one by one.
  */
-static void ipc_tx_callback(void *prm)
+static void ipc_tx_send(void *prm)
 {
 	struct ishtp_cl *cl = prm;
 	struct ishtp_cl_tx_ring *cl_msg;
@@ -678,32 +677,41 @@
 			    list);
 	rem = cl_msg->send_buf.size - cl->tx_offs;
 
-	ishtp_hdr.host_addr = cl->host_client_id;
-	ishtp_hdr.fw_addr = cl->fw_client_id;
-	ishtp_hdr.reserved = 0;
-	pmsg = cl_msg->send_buf.data + cl->tx_offs;
+	while (rem > 0) {
+		ishtp_hdr.host_addr = cl->host_client_id;
+		ishtp_hdr.fw_addr = cl->fw_client_id;
+		ishtp_hdr.reserved = 0;
+		pmsg = cl_msg->send_buf.data + cl->tx_offs;
 
-	if (rem <= dev->mtu) {
-		ishtp_hdr.length = rem;
-		ishtp_hdr.msg_complete = 1;
-		cl->sending = 0;
-		list_del_init(&cl_msg->list);	/* Must be before write */
-		spin_unlock_irqrestore(&cl->tx_list_spinlock, tx_flags);
-		/* Submit to IPC queue with no callback */
-		ishtp_write_message(dev, &ishtp_hdr, pmsg);
-		spin_lock_irqsave(&cl->tx_free_list_spinlock, tx_free_flags);
-		list_add_tail(&cl_msg->list, &cl->tx_free_list.list);
-		++cl->tx_ring_free_size;
-		spin_unlock_irqrestore(&cl->tx_free_list_spinlock,
-			tx_free_flags);
-	} else {
-		/* Send IPC fragment */
-		spin_unlock_irqrestore(&cl->tx_list_spinlock, tx_flags);
-		cl->tx_offs += dev->mtu;
-		ishtp_hdr.length = dev->mtu;
-		ishtp_hdr.msg_complete = 0;
-		ishtp_send_msg(dev, &ishtp_hdr, pmsg, ipc_tx_callback, cl);
+		if (rem <= dev->mtu) {
+			/* Last fragment or only one packet */
+			ishtp_hdr.length = rem;
+			ishtp_hdr.msg_complete = 1;
+			/* Submit to IPC queue with no callback */
+			ishtp_write_message(dev, &ishtp_hdr, pmsg);
+			cl->tx_offs = 0;
+			cl->sending = 0;
+
+			break;
+		} else {
+			/* Send IPC fragment */
+			ishtp_hdr.length = dev->mtu;
+			ishtp_hdr.msg_complete = 0;
+			/* All fragments submitted to IPC queue with no callback */
+			ishtp_write_message(dev, &ishtp_hdr, pmsg);
+			cl->tx_offs += dev->mtu;
+			rem = cl_msg->send_buf.size - cl->tx_offs;
+		}
 	}
+
+	list_del_init(&cl_msg->list);
+	spin_unlock_irqrestore(&cl->tx_list_spinlock, tx_flags);
+
+	spin_lock_irqsave(&cl->tx_free_list_spinlock, tx_free_flags);
+	list_add_tail(&cl_msg->list, &cl->tx_free_list.list);
+	++cl->tx_ring_free_size;
+	spin_unlock_irqrestore(&cl->tx_free_list_spinlock,
+			       tx_free_flags);
 }
 
 /**
@@ -730,7 +720,7 @@
 		return;
 
 	cl->tx_offs = 0;
-	ipc_tx_callback(cl);
+	ipc_tx_send(cl);
 	++cl->send_msg_cnt_ipc;
 }
 