Merge branch 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull IRQ chip updates from Ingo Molnar:
"A late irqchips update:

- New TI INTR/INTA set of drivers

- Rewrite of the stm32mp1-exti driver as a platform driver

- Update the IOMMU MSI mapping API to be RT friendly

- A number of cleanups and other low impact fixes"

* 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (34 commits)
iommu/dma-iommu: Remove iommu_dma_map_msi_msg()
irqchip/gic-v3-mbi: Don't map the MSI page in mbi_compose_m{b,s}i_msg()
irqchip/ls-scfg-msi: Don't map the MSI page in ls_scfg_msi_compose_msg()
irqchip/gic-v3-its: Don't map the MSI page in its_irq_compose_msi_msg()
irqchip/gicv2m: Don't map the MSI page in gicv2m_compose_msi_msg()
iommu/dma-iommu: Split iommu_dma_map_msi_msg() in two parts
genirq/msi: Add a new field in msi_desc to store an IOMMU cookie
arm64: arch_k3: Enable interrupt controller drivers
irqchip/ti-sci-inta: Add msi domain support
soc: ti: Add MSI domain bus support for Interrupt Aggregator
irqchip/ti-sci-inta: Add support for Interrupt Aggregator driver
dt-bindings: irqchip: Introduce TISCI Interrupt Aggregator bindings
irqchip/ti-sci-intr: Add support for Interrupt Router driver
dt-bindings: irqchip: Introduce TISCI Interrupt router bindings
gpio: thunderx: Use the default parent apis for {request,release}_resources
genirq: Introduce irq_chip_{request,release}_resource_parent() apis
firmware: ti_sci: Add helper apis to manage resources
firmware: ti_sci: Add RM mapping table for am654
firmware: ti_sci: Add support for IRQ management
firmware: ti_sci: Add support for RM core ops
...

+2510 -228
+2 -1
Documentation/devicetree/bindings/arm/keystone/ti,sci.txt
···
 
 Required properties:
 -------------------
- - compatible:	should be "ti,k2g-sci"
+ - compatible:	should be "ti,k2g-sci" for TI 66AK2G SoC
+		should be "ti,am654-sci" for TI AM654 SoC
 - mbox-names:
 	"rx" - Mailbox corresponding to receive path
 	"tx" - Mailbox corresponding to transmit path
+66
Documentation/devicetree/bindings/interrupt-controller/ti,sci-inta.txt
···
+Texas Instruments K3 Interrupt Aggregator
+=========================================
+
+The Interrupt Aggregator (INTA) provides a centralized machine
+which handles the termination of system events so that they can
+be coherently processed by the host(s) in the system. A maximum
+of 64 events can be mapped to a single interrupt.
+
+
+                              Interrupt Aggregator
+                     +-----------------------------------------+
+                     |      Intmap                VINT         |
+                     | +--------------+    +------------+      |
+            m ------>| | vint  | bit  |    | 0 |.....|63| vint0|
+             .       | +--------------+    +------------+      |      +------+
+             .       |        .                  .             |      | HOST |
+Globalevents  ------>|        .                  .             |----->| IRQ  |
+             .       |        .                  .             |      | CTRL |
+             .       |        .                  .             |      +------+
+            n ------>| +--------------+    +------------+      |
+                     | | vint  | bit  |    | 0 |.....|63| vintx|
+                     | +--------------+    +------------+      |
+                     |                                         |
+                     +-----------------------------------------+
+
+Configuration of the Intmap registers that map global events to vints is done
+by a system controller (like the Device Memory and Security Controller on K3
+AM654 SoC). The driver should request the range of global events and vints
+assigned to the requesting host from the system controller. The driver then
+manages these requested resources and asks the system controller to map a
+specific global event to a (vint, bit) pair.
+
+Communication between the host processor running an OS and the system
+controller happens through a protocol called TI System Control Interface
+(TISCI protocol). For more details refer:
+Documentation/devicetree/bindings/arm/keystone/ti,sci.txt
+
+TISCI Interrupt Aggregator Node:
+--------------------------------
+- compatible:		Must be "ti,sci-inta".
+- reg:			Should contain registers location and length.
+- interrupt-controller:	Identifies the node as an interrupt controller.
+- msi-controller:	Identifies the node as an MSI controller.
+- interrupt-parent:	phandle of irq parent.
+- ti,sci:		Phandle to TI-SCI compatible System controller node.
+- ti,sci-dev-id:	TISCI device ID of the Interrupt Aggregator.
+- ti,sci-rm-range-vint:	Array of TISCI subtype ids representing vints (inta
+			outputs) range within this INTA, assigned to the
+			requesting host context.
+- ti,sci-rm-range-global-event:	Array of TISCI subtype ids representing the
+			global events range reaching this IA and assigned
+			to the requesting host context.
+
+Example:
+--------
+main_udmass_inta: interrupt-controller@33d00000 {
+	compatible = "ti,sci-inta";
+	reg = <0x0 0x33d00000 0x0 0x100000>;
+	interrupt-controller;
+	msi-controller;
+	interrupt-parent = <&main_navss_intr>;
+	ti,sci = <&dmsc>;
+	ti,sci-dev-id = <179>;
+	ti,sci-rm-range-vint = <0x0>;
+	ti,sci-rm-range-global-event = <0x1>;
+};
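[Editor's note] The binding above terminates up to 64 events per virtual interrupt. As a hypothetical illustration (not part of the binding, where the mapping is programmed per event via TISCI), a dense event-to-(vint, status bit) mapping reduces to div/mod arithmetic:

```c
#include <stdint.h>

/* Hypothetical dense mapping: 64 events per virtual interrupt line. */
struct vint_map {
	uint16_t vint;       /* which VINT output line */
	uint8_t status_bit;  /* bit within that VINT's 64-bit status */
};

static struct vint_map map_global_event(uint32_t event_index)
{
	struct vint_map m = {
		.vint = (uint16_t)(event_index / 64),      /* group of 64 */
		.status_bit = (uint8_t)(event_index % 64), /* bit in group */
	};
	return m;
}
```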
+82
Documentation/devicetree/bindings/interrupt-controller/ti,sci-intr.txt
···
+Texas Instruments K3 Interrupt Router
+=====================================
+
+The Interrupt Router (INTR) module provides a mechanism to mux M
+interrupt inputs to N interrupt outputs, where any of the M inputs can be
+routed to any of the N outputs. Whether an Interrupt Router handles edge
+triggered or level triggered interrupts is fixed in hardware.
+
+                                 Interrupt Router
+                             +----------------------+
+                             |  Inputs     Outputs  |
+        +-------+            | +------+    +-----+  |
+        | GPIO  |----------->| | irq0 |    |  0  |  |       Host IRQ
+        +-------+            | +------+    +-----+  |      controller
+                             |    .           .     |      +-------+
+        +-------+            |    .           .     |----->|  IRQ  |
+        | INTA  |----------->|    .           .     |      +-------+
+        +-------+            |    .        +-----+  |
+                             | +------+    |  N  |  |
+                             | | irqM |    +-----+  |
+                             | +------+             |
+                             |                      |
+                             +----------------------+
+
+There is one register per output (MUXCNTL_N) that controls the selection.
+Configuration of these MUXCNTL_N registers is done by a system controller
+(like the Device Memory and Security Controller on K3 AM654 SoC). The system
+controller keeps track of the used and unused registers within the Router.
+The driver should request the range of GIC IRQs assigned to the requesting
+host from the system controller. It is the driver's responsibility to keep
+track of Host IRQs.
+
+Communication between the host processor running an OS and the system
+controller happens through a protocol called TI System Control Interface
+(TISCI protocol). For more details refer:
+Documentation/devicetree/bindings/arm/keystone/ti,sci.txt
+
+TISCI Interrupt Router Node:
+----------------------------
+Required Properties:
+- compatible:		Must be "ti,sci-intr".
+- ti,intr-trigger-type:	Should be one of the following:
+			1: If intr supports edge triggered interrupts.
+			4: If intr supports level triggered interrupts.
+- interrupt-controller:	Identifies the node as an interrupt controller.
+- #interrupt-cells:	Specifies the number of cells needed to encode an
+			interrupt source. The value should be 2.
+			First cell should contain the TISCI device ID of source.
+			Second cell should contain the interrupt source offset
+			within the device.
+- ti,sci:		Phandle to TI-SCI compatible System controller node.
+- ti,sci-dst-id:	TISCI device ID of the destination IRQ controller.
+- ti,sci-rm-range-girq:	Array of TISCI subtype ids representing the host irqs
+			assigned to this interrupt router. Each subtype id
+			corresponds to a range of host irqs.
+
+For more details on TISCI IRQ resource management refer:
+http://downloads.ti.com/tisci/esd/latest/2_tisci_msgs/rm/rm_irq.html
+
+Example:
+--------
+The following example demonstrates both the interrupt router node and the
+consumer node (main gpio) on the AM654 SoC:
+
+main_intr: interrupt-controller0 {
+	compatible = "ti,sci-intr";
+	ti,intr-trigger-type = <1>;
+	interrupt-controller;
+	interrupt-parent = <&gic500>;
+	#interrupt-cells = <2>;
+	ti,sci = <&dmsc>;
+	ti,sci-dst-id = <56>;
+	ti,sci-rm-range-girq = <0x1>;
+};
+
+main_gpio0: gpio@600000 {
+	...
+	interrupt-parent = <&main_intr>;
+	interrupts = <57 256>, <57 257>, <57 258>,
+		     <57 259>, <57 260>, <57 261>;
+	...
+};
+6
MAINTAINERS
···
 F:	Documentation/devicetree/bindings/clock/ti,sci-clk.txt
 F:	drivers/clk/keystone/sci-clk.c
 F:	drivers/reset/reset-ti-sci.c
+F:	Documentation/devicetree/bindings/interrupt-controller/ti,sci-intr.txt
+F:	Documentation/devicetree/bindings/interrupt-controller/ti,sci-inta.txt
+F:	drivers/irqchip/irq-ti-sci-intr.c
+F:	drivers/irqchip/irq-ti-sci-inta.c
+F:	include/linux/soc/ti/ti_sci_inta_msi.h
+F:	drivers/soc/ti/ti_sci_inta_msi.c
 
 Texas Instruments ASoC drivers
 M:	Peter Ujfalusi <peter.ujfalusi@ti.com>
+5
arch/arm64/Kconfig.platforms
···
 config ARCH_K3
 	bool "Texas Instruments Inc. K3 multicore SoC architecture"
 	select PM_GENERIC_DOMAINS if PM
+	select MAILBOX
+	select TI_MESSAGE_MANAGER
+	select TI_SCI_PROTOCOL
+	select TI_SCI_INTR_IRQCHIP
+	select TI_SCI_INTA_IRQCHIP
 	help
 	  This enables support for Texas Instruments' K3 multicore SoC
 	  architecture.
+651
drivers/firmware/ti_sci.c
··· 65 65 }; 66 66 67 67 /** 68 + * struct ti_sci_rm_type_map - Structure representing TISCI Resource 69 + * management representation of dev_ids. 70 + * @dev_id: TISCI device ID 71 + * @type: Corresponding id as identified by TISCI RM. 72 + * 73 + * Note: This is used only as a work around for using RM range apis 74 + * for AM654 SoC. For future SoCs dev_id will be used as type 75 + * for RM range APIs. In order to maintain ABI backward compatibility 76 + * type is not being changed for AM654 SoC. 77 + */ 78 + struct ti_sci_rm_type_map { 79 + u32 dev_id; 80 + u16 type; 81 + }; 82 + 83 + /** 68 84 * struct ti_sci_desc - Description of SoC integration 69 85 * @default_host_id: Host identifier representing the compute entity 70 86 * @max_rx_timeout_ms: Timeout for communication with SoC (in Milliseconds) 71 87 * @max_msgs: Maximum number of messages that can be pending 72 88 * simultaneously in the system 73 89 * @max_msg_size: Maximum size of data per message that can be handled. 90 + * @rm_type_map: RM resource type mapping structure. 
74 91 */ 75 92 struct ti_sci_desc { 76 93 u8 default_host_id; 77 94 int max_rx_timeout_ms; 78 95 int max_msgs; 79 96 int max_msg_size; 97 + struct ti_sci_rm_type_map *rm_type_map; 80 98 }; 81 99 82 100 /** ··· 1618 1600 return ret; 1619 1601 } 1620 1602 1603 + static int ti_sci_get_resource_type(struct ti_sci_info *info, u16 dev_id, 1604 + u16 *type) 1605 + { 1606 + struct ti_sci_rm_type_map *rm_type_map = info->desc->rm_type_map; 1607 + bool found = false; 1608 + int i; 1609 + 1610 + /* If map is not provided then assume dev_id is used as type */ 1611 + if (!rm_type_map) { 1612 + *type = dev_id; 1613 + return 0; 1614 + } 1615 + 1616 + for (i = 0; rm_type_map[i].dev_id; i++) { 1617 + if (rm_type_map[i].dev_id == dev_id) { 1618 + *type = rm_type_map[i].type; 1619 + found = true; 1620 + break; 1621 + } 1622 + } 1623 + 1624 + if (!found) 1625 + return -EINVAL; 1626 + 1627 + return 0; 1628 + } 1629 + 1630 + /** 1631 + * ti_sci_get_resource_range - Helper to get a range of resources assigned 1632 + * to a host. Resource is uniquely identified by 1633 + * type and subtype. 1634 + * @handle: Pointer to TISCI handle. 1635 + * @dev_id: TISCI device ID. 1636 + * @subtype: Resource assignment subtype that is being requested 1637 + * from the given device. 1638 + * @s_host: Host processor ID to which the resources are allocated 1639 + * @range_start: Start index of the resource range 1640 + * @range_num: Number of resources in the range 1641 + * 1642 + * Return: 0 if all went fine, else return appropriate error. 
1643 + */ 1644 + static int ti_sci_get_resource_range(const struct ti_sci_handle *handle, 1645 + u32 dev_id, u8 subtype, u8 s_host, 1646 + u16 *range_start, u16 *range_num) 1647 + { 1648 + struct ti_sci_msg_resp_get_resource_range *resp; 1649 + struct ti_sci_msg_req_get_resource_range *req; 1650 + struct ti_sci_xfer *xfer; 1651 + struct ti_sci_info *info; 1652 + struct device *dev; 1653 + u16 type; 1654 + int ret = 0; 1655 + 1656 + if (IS_ERR(handle)) 1657 + return PTR_ERR(handle); 1658 + if (!handle) 1659 + return -EINVAL; 1660 + 1661 + info = handle_to_ti_sci_info(handle); 1662 + dev = info->dev; 1663 + 1664 + xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_GET_RESOURCE_RANGE, 1665 + TI_SCI_FLAG_REQ_ACK_ON_PROCESSED, 1666 + sizeof(*req), sizeof(*resp)); 1667 + if (IS_ERR(xfer)) { 1668 + ret = PTR_ERR(xfer); 1669 + dev_err(dev, "Message alloc failed(%d)\n", ret); 1670 + return ret; 1671 + } 1672 + 1673 + ret = ti_sci_get_resource_type(info, dev_id, &type); 1674 + if (ret) { 1675 + dev_err(dev, "rm type lookup failed for %u\n", dev_id); 1676 + goto fail; 1677 + } 1678 + 1679 + req = (struct ti_sci_msg_req_get_resource_range *)xfer->xfer_buf; 1680 + req->secondary_host = s_host; 1681 + req->type = type & MSG_RM_RESOURCE_TYPE_MASK; 1682 + req->subtype = subtype & MSG_RM_RESOURCE_SUBTYPE_MASK; 1683 + 1684 + ret = ti_sci_do_xfer(info, xfer); 1685 + if (ret) { 1686 + dev_err(dev, "Mbox send fail %d\n", ret); 1687 + goto fail; 1688 + } 1689 + 1690 + resp = (struct ti_sci_msg_resp_get_resource_range *)xfer->xfer_buf; 1691 + 1692 + if (!ti_sci_is_response_ack(resp)) { 1693 + ret = -ENODEV; 1694 + } else if (!resp->range_start && !resp->range_num) { 1695 + ret = -ENODEV; 1696 + } else { 1697 + *range_start = resp->range_start; 1698 + *range_num = resp->range_num; 1699 + }; 1700 + 1701 + fail: 1702 + ti_sci_put_one_xfer(&info->minfo, xfer); 1703 + 1704 + return ret; 1705 + } 1706 + 1707 + /** 1708 + * ti_sci_cmd_get_resource_range - Get a range of resources assigned to host 1709 
+ * that is same as ti sci interface host. 1710 + * @handle: Pointer to TISCI handle. 1711 + * @dev_id: TISCI device ID. 1712 + * @subtype: Resource assignment subtype that is being requested 1713 + * from the given device. 1714 + * @range_start: Start index of the resource range 1715 + * @range_num: Number of resources in the range 1716 + * 1717 + * Return: 0 if all went fine, else return appropriate error. 1718 + */ 1719 + static int ti_sci_cmd_get_resource_range(const struct ti_sci_handle *handle, 1720 + u32 dev_id, u8 subtype, 1721 + u16 *range_start, u16 *range_num) 1722 + { 1723 + return ti_sci_get_resource_range(handle, dev_id, subtype, 1724 + TI_SCI_IRQ_SECONDARY_HOST_INVALID, 1725 + range_start, range_num); 1726 + } 1727 + 1728 + /** 1729 + * ti_sci_cmd_get_resource_range_from_shost - Get a range of resources 1730 + * assigned to a specified host. 1731 + * @handle: Pointer to TISCI handle. 1732 + * @dev_id: TISCI device ID. 1733 + * @subtype: Resource assignment subtype that is being requested 1734 + * from the given device. 1735 + * @s_host: Host processor ID to which the resources are allocated 1736 + * @range_start: Start index of the resource range 1737 + * @range_num: Number of resources in the range 1738 + * 1739 + * Return: 0 if all went fine, else return appropriate error. 1740 + */ 1741 + static 1742 + int ti_sci_cmd_get_resource_range_from_shost(const struct ti_sci_handle *handle, 1743 + u32 dev_id, u8 subtype, u8 s_host, 1744 + u16 *range_start, u16 *range_num) 1745 + { 1746 + return ti_sci_get_resource_range(handle, dev_id, subtype, s_host, 1747 + range_start, range_num); 1748 + } 1749 + 1750 + /** 1751 + * ti_sci_manage_irq() - Helper api to configure/release the irq route between 1752 + * the requested source and destination 1753 + * @handle: Pointer to TISCI handle. 
1754 + * @valid_params: Bit fields defining the validity of certain params 1755 + * @src_id: Device ID of the IRQ source 1756 + * @src_index: IRQ source index within the source device 1757 + * @dst_id: Device ID of the IRQ destination 1758 + * @dst_host_irq: IRQ number of the destination device 1759 + * @ia_id: Device ID of the IA, if the IRQ flows through this IA 1760 + * @vint: Virtual interrupt to be used within the IA 1761 + * @global_event: Global event number to be used for the requesting event 1762 + * @vint_status_bit: Virtual interrupt status bit to be used for the event 1763 + * @s_host: Secondary host ID to which the irq/event is being 1764 + * requested for. 1765 + * @type: Request type irq set or release. 1766 + * 1767 + * Return: 0 if all went fine, else return appropriate error. 1768 + */ 1769 + static int ti_sci_manage_irq(const struct ti_sci_handle *handle, 1770 + u32 valid_params, u16 src_id, u16 src_index, 1771 + u16 dst_id, u16 dst_host_irq, u16 ia_id, u16 vint, 1772 + u16 global_event, u8 vint_status_bit, u8 s_host, 1773 + u16 type) 1774 + { 1775 + struct ti_sci_msg_req_manage_irq *req; 1776 + struct ti_sci_msg_hdr *resp; 1777 + struct ti_sci_xfer *xfer; 1778 + struct ti_sci_info *info; 1779 + struct device *dev; 1780 + int ret = 0; 1781 + 1782 + if (IS_ERR(handle)) 1783 + return PTR_ERR(handle); 1784 + if (!handle) 1785 + return -EINVAL; 1786 + 1787 + info = handle_to_ti_sci_info(handle); 1788 + dev = info->dev; 1789 + 1790 + xfer = ti_sci_get_one_xfer(info, type, TI_SCI_FLAG_REQ_ACK_ON_PROCESSED, 1791 + sizeof(*req), sizeof(*resp)); 1792 + if (IS_ERR(xfer)) { 1793 + ret = PTR_ERR(xfer); 1794 + dev_err(dev, "Message alloc failed(%d)\n", ret); 1795 + return ret; 1796 + } 1797 + req = (struct ti_sci_msg_req_manage_irq *)xfer->xfer_buf; 1798 + req->valid_params = valid_params; 1799 + req->src_id = src_id; 1800 + req->src_index = src_index; 1801 + req->dst_id = dst_id; 1802 + req->dst_host_irq = dst_host_irq; 1803 + req->ia_id = ia_id; 1804 + 
req->vint = vint; 1805 + req->global_event = global_event; 1806 + req->vint_status_bit = vint_status_bit; 1807 + req->secondary_host = s_host; 1808 + 1809 + ret = ti_sci_do_xfer(info, xfer); 1810 + if (ret) { 1811 + dev_err(dev, "Mbox send fail %d\n", ret); 1812 + goto fail; 1813 + } 1814 + 1815 + resp = (struct ti_sci_msg_hdr *)xfer->xfer_buf; 1816 + 1817 + ret = ti_sci_is_response_ack(resp) ? 0 : -ENODEV; 1818 + 1819 + fail: 1820 + ti_sci_put_one_xfer(&info->minfo, xfer); 1821 + 1822 + return ret; 1823 + } 1824 + 1825 + /** 1826 + * ti_sci_set_irq() - Helper api to configure the irq route between the 1827 + * requested source and destination 1828 + * @handle: Pointer to TISCI handle. 1829 + * @valid_params: Bit fields defining the validity of certain params 1830 + * @src_id: Device ID of the IRQ source 1831 + * @src_index: IRQ source index within the source device 1832 + * @dst_id: Device ID of the IRQ destination 1833 + * @dst_host_irq: IRQ number of the destination device 1834 + * @ia_id: Device ID of the IA, if the IRQ flows through this IA 1835 + * @vint: Virtual interrupt to be used within the IA 1836 + * @global_event: Global event number to be used for the requesting event 1837 + * @vint_status_bit: Virtual interrupt status bit to be used for the event 1838 + * @s_host: Secondary host ID to which the irq/event is being 1839 + * requested for. 1840 + * 1841 + * Return: 0 if all went fine, else return appropriate error. 
1842 + */ 1843 + static int ti_sci_set_irq(const struct ti_sci_handle *handle, u32 valid_params, 1844 + u16 src_id, u16 src_index, u16 dst_id, 1845 + u16 dst_host_irq, u16 ia_id, u16 vint, 1846 + u16 global_event, u8 vint_status_bit, u8 s_host) 1847 + { 1848 + pr_debug("%s: IRQ set with valid_params = 0x%x from src = %d, index = %d, to dst = %d, irq = %d,via ia_id = %d, vint = %d, global event = %d,status_bit = %d\n", 1849 + __func__, valid_params, src_id, src_index, 1850 + dst_id, dst_host_irq, ia_id, vint, global_event, 1851 + vint_status_bit); 1852 + 1853 + return ti_sci_manage_irq(handle, valid_params, src_id, src_index, 1854 + dst_id, dst_host_irq, ia_id, vint, 1855 + global_event, vint_status_bit, s_host, 1856 + TI_SCI_MSG_SET_IRQ); 1857 + } 1858 + 1859 + /** 1860 + * ti_sci_free_irq() - Helper api to free the irq route between the 1861 + * requested source and destination 1862 + * @handle: Pointer to TISCI handle. 1863 + * @valid_params: Bit fields defining the validity of certain params 1864 + * @src_id: Device ID of the IRQ source 1865 + * @src_index: IRQ source index within the source device 1866 + * @dst_id: Device ID of the IRQ destination 1867 + * @dst_host_irq: IRQ number of the destination device 1868 + * @ia_id: Device ID of the IA, if the IRQ flows through this IA 1869 + * @vint: Virtual interrupt to be used within the IA 1870 + * @global_event: Global event number to be used for the requesting event 1871 + * @vint_status_bit: Virtual interrupt status bit to be used for the event 1872 + * @s_host: Secondary host ID to which the irq/event is being 1873 + * requested for. 1874 + * 1875 + * Return: 0 if all went fine, else return appropriate error. 
1876 + */ 1877 + static int ti_sci_free_irq(const struct ti_sci_handle *handle, u32 valid_params, 1878 + u16 src_id, u16 src_index, u16 dst_id, 1879 + u16 dst_host_irq, u16 ia_id, u16 vint, 1880 + u16 global_event, u8 vint_status_bit, u8 s_host) 1881 + { 1882 + pr_debug("%s: IRQ release with valid_params = 0x%x from src = %d, index = %d, to dst = %d, irq = %d,via ia_id = %d, vint = %d, global event = %d,status_bit = %d\n", 1883 + __func__, valid_params, src_id, src_index, 1884 + dst_id, dst_host_irq, ia_id, vint, global_event, 1885 + vint_status_bit); 1886 + 1887 + return ti_sci_manage_irq(handle, valid_params, src_id, src_index, 1888 + dst_id, dst_host_irq, ia_id, vint, 1889 + global_event, vint_status_bit, s_host, 1890 + TI_SCI_MSG_FREE_IRQ); 1891 + } 1892 + 1893 + /** 1894 + * ti_sci_cmd_set_irq() - Configure a host irq route between the requested 1895 + * source and destination. 1896 + * @handle: Pointer to TISCI handle. 1897 + * @src_id: Device ID of the IRQ source 1898 + * @src_index: IRQ source index within the source device 1899 + * @dst_id: Device ID of the IRQ destination 1900 + * @dst_host_irq: IRQ number of the destination device 1901 + * @vint_irq: Boolean specifying if this interrupt belongs to 1902 + * Interrupt Aggregator. 1903 + * 1904 + * Return: 0 if all went fine, else return appropriate error. 1905 + */ 1906 + static int ti_sci_cmd_set_irq(const struct ti_sci_handle *handle, u16 src_id, 1907 + u16 src_index, u16 dst_id, u16 dst_host_irq) 1908 + { 1909 + u32 valid_params = MSG_FLAG_DST_ID_VALID | MSG_FLAG_DST_HOST_IRQ_VALID; 1910 + 1911 + return ti_sci_set_irq(handle, valid_params, src_id, src_index, dst_id, 1912 + dst_host_irq, 0, 0, 0, 0, 0); 1913 + } 1914 + 1915 + /** 1916 + * ti_sci_cmd_set_event_map() - Configure an event based irq route between the 1917 + * requested source and Interrupt Aggregator. 1918 + * @handle: Pointer to TISCI handle. 
+ * @src_id:		Device ID of the IRQ source
+ * @src_index:		IRQ source index within the source device
+ * @ia_id:		Device ID of the IA, if the IRQ flows through this IA
+ * @vint:		Virtual interrupt to be used within the IA
+ * @global_event:	Global event number to be used for the requesting event
+ * @vint_status_bit:	Virtual interrupt status bit to be used for the event
+ *
+ * Return: 0 if all went fine, else return appropriate error.
+ */
+static int ti_sci_cmd_set_event_map(const struct ti_sci_handle *handle,
+				    u16 src_id, u16 src_index, u16 ia_id,
+				    u16 vint, u16 global_event,
+				    u8 vint_status_bit)
+{
+	u32 valid_params = MSG_FLAG_IA_ID_VALID | MSG_FLAG_VINT_VALID |
+			   MSG_FLAG_GLB_EVNT_VALID |
+			   MSG_FLAG_VINT_STS_BIT_VALID;
+
+	return ti_sci_set_irq(handle, valid_params, src_id, src_index, 0, 0,
+			      ia_id, vint, global_event, vint_status_bit, 0);
+}
+
+/**
+ * ti_sci_cmd_free_irq() - Free a host irq route between the requested
+ *			   source and destination.
+ * @handle:		Pointer to TISCI handle.
+ * @src_id:		Device ID of the IRQ source
+ * @src_index:		IRQ source index within the source device
+ * @dst_id:		Device ID of the IRQ destination
+ * @dst_host_irq:	IRQ number of the destination device
+ * @vint_irq:		Boolean specifying if this interrupt belongs to
+ *			Interrupt Aggregator.
+ *
+ * Return: 0 if all went fine, else return appropriate error.
1953 + */ 1954 + static int ti_sci_cmd_free_irq(const struct ti_sci_handle *handle, u16 src_id, 1955 + u16 src_index, u16 dst_id, u16 dst_host_irq) 1956 + { 1957 + u32 valid_params = MSG_FLAG_DST_ID_VALID | MSG_FLAG_DST_HOST_IRQ_VALID; 1958 + 1959 + return ti_sci_free_irq(handle, valid_params, src_id, src_index, dst_id, 1960 + dst_host_irq, 0, 0, 0, 0, 0); 1961 + } 1962 + 1963 + /** 1964 + * ti_sci_cmd_free_event_map() - Free an event map between the requested source 1965 + * and Interrupt Aggregator. 1966 + * @handle: Pointer to TISCI handle. 1967 + * @src_id: Device ID of the IRQ source 1968 + * @src_index: IRQ source index within the source device 1969 + * @ia_id: Device ID of the IA, if the IRQ flows through this IA 1970 + * @vint: Virtual interrupt to be used within the IA 1971 + * @global_event: Global event number to be used for the requesting event 1972 + * @vint_status_bit: Virtual interrupt status bit to be used for the event 1973 + * 1974 + * Return: 0 if all went fine, else return appropriate error. 
1975 + */ 1976 + static int ti_sci_cmd_free_event_map(const struct ti_sci_handle *handle, 1977 + u16 src_id, u16 src_index, u16 ia_id, 1978 + u16 vint, u16 global_event, 1979 + u8 vint_status_bit) 1980 + { 1981 + u32 valid_params = MSG_FLAG_IA_ID_VALID | 1982 + MSG_FLAG_VINT_VALID | MSG_FLAG_GLB_EVNT_VALID | 1983 + MSG_FLAG_VINT_STS_BIT_VALID; 1984 + 1985 + return ti_sci_free_irq(handle, valid_params, src_id, src_index, 0, 0, 1986 + ia_id, vint, global_event, vint_status_bit, 0); 1987 + } 1988 + 1621 1989 /* 1622 1990 * ti_sci_setup_ops() - Setup the operations structures 1623 1991 * @info: pointer to TISCI pointer ··· 2014 1610 struct ti_sci_core_ops *core_ops = &ops->core_ops; 2015 1611 struct ti_sci_dev_ops *dops = &ops->dev_ops; 2016 1612 struct ti_sci_clk_ops *cops = &ops->clk_ops; 1613 + struct ti_sci_rm_core_ops *rm_core_ops = &ops->rm_core_ops; 1614 + struct ti_sci_rm_irq_ops *iops = &ops->rm_irq_ops; 2017 1615 2018 1616 core_ops->reboot_device = ti_sci_cmd_core_reboot; 2019 1617 ··· 2046 1640 cops->get_best_match_freq = ti_sci_cmd_clk_get_match_freq; 2047 1641 cops->set_freq = ti_sci_cmd_clk_set_freq; 2048 1642 cops->get_freq = ti_sci_cmd_clk_get_freq; 1643 + 1644 + rm_core_ops->get_range = ti_sci_cmd_get_resource_range; 1645 + rm_core_ops->get_range_from_shost = 1646 + ti_sci_cmd_get_resource_range_from_shost; 1647 + 1648 + iops->set_irq = ti_sci_cmd_set_irq; 1649 + iops->set_event_map = ti_sci_cmd_set_event_map; 1650 + iops->free_irq = ti_sci_cmd_free_irq; 1651 + iops->free_event_map = ti_sci_cmd_free_event_map; 2049 1652 } 2050 1653 2051 1654 /** ··· 2179 1764 } 2180 1765 EXPORT_SYMBOL_GPL(devm_ti_sci_get_handle); 2181 1766 1767 + /** 1768 + * ti_sci_get_by_phandle() - Get the TI SCI handle using DT phandle 1769 + * @np: device node 1770 + * @property: property name containing phandle on TISCI node 1771 + * 1772 + * NOTE: The function does not track individual clients of the framework 1773 + * and is expected to be maintained by caller of TI SCI 
protocol library. 1774 + * ti_sci_put_handle must be balanced with successful ti_sci_get_by_phandle 1775 + * Return: pointer to handle if successful, else: 1776 + * -EPROBE_DEFER if the instance is not ready 1777 + * -ENODEV if the required node handler is missing 1778 + * -EINVAL if invalid conditions are encountered. 1779 + */ 1780 + const struct ti_sci_handle *ti_sci_get_by_phandle(struct device_node *np, 1781 + const char *property) 1782 + { 1783 + struct ti_sci_handle *handle = NULL; 1784 + struct device_node *ti_sci_np; 1785 + struct ti_sci_info *info; 1786 + struct list_head *p; 1787 + 1788 + if (!np) { 1789 + pr_err("I need a device pointer\n"); 1790 + return ERR_PTR(-EINVAL); 1791 + } 1792 + 1793 + ti_sci_np = of_parse_phandle(np, property, 0); 1794 + if (!ti_sci_np) 1795 + return ERR_PTR(-ENODEV); 1796 + 1797 + mutex_lock(&ti_sci_list_mutex); 1798 + list_for_each(p, &ti_sci_list) { 1799 + info = list_entry(p, struct ti_sci_info, node); 1800 + if (ti_sci_np == info->dev->of_node) { 1801 + handle = &info->handle; 1802 + info->users++; 1803 + break; 1804 + } 1805 + } 1806 + mutex_unlock(&ti_sci_list_mutex); 1807 + of_node_put(ti_sci_np); 1808 + 1809 + if (!handle) 1810 + return ERR_PTR(-EPROBE_DEFER); 1811 + 1812 + return handle; 1813 + } 1814 + EXPORT_SYMBOL_GPL(ti_sci_get_by_phandle); 1815 + 1816 + /** 1817 + * devm_ti_sci_get_by_phandle() - Managed get handle using phandle 1818 + * @dev: Device pointer requesting TISCI handle 1819 + * @property: property name containing phandle on TISCI node 1820 + * 1821 + * NOTE: This releases the handle once the device resources are 1822 + * no longer needed. MUST NOT BE released with ti_sci_put_handle. 1823 + * The function does not track individual clients of the framework 1824 + * and is expected to be maintained by caller of TI SCI protocol library. 1825 + * 1826 + * Return: 0 if all went fine, else corresponding error. 
1827 + */ 1828 + const struct ti_sci_handle *devm_ti_sci_get_by_phandle(struct device *dev, 1829 + const char *property) 1830 + { 1831 + const struct ti_sci_handle *handle; 1832 + const struct ti_sci_handle **ptr; 1833 + 1834 + ptr = devres_alloc(devm_ti_sci_release, sizeof(*ptr), GFP_KERNEL); 1835 + if (!ptr) 1836 + return ERR_PTR(-ENOMEM); 1837 + handle = ti_sci_get_by_phandle(dev_of_node(dev), property); 1838 + 1839 + if (!IS_ERR(handle)) { 1840 + *ptr = handle; 1841 + devres_add(dev, ptr); 1842 + } else { 1843 + devres_free(ptr); 1844 + } 1845 + 1846 + return handle; 1847 + } 1848 + EXPORT_SYMBOL_GPL(devm_ti_sci_get_by_phandle); 1849 + 1850 + /** 1851 + * ti_sci_get_free_resource() - Get a free resource from TISCI resource. 1852 + * @res: Pointer to the TISCI resource 1853 + * 1854 + * Return: resource num if all went ok else TI_SCI_RESOURCE_NULL. 1855 + */ 1856 + u16 ti_sci_get_free_resource(struct ti_sci_resource *res) 1857 + { 1858 + unsigned long flags; 1859 + u16 set, free_bit; 1860 + 1861 + raw_spin_lock_irqsave(&res->lock, flags); 1862 + for (set = 0; set < res->sets; set++) { 1863 + free_bit = find_first_zero_bit(res->desc[set].res_map, 1864 + res->desc[set].num); 1865 + if (free_bit != res->desc[set].num) { 1866 + set_bit(free_bit, res->desc[set].res_map); 1867 + raw_spin_unlock_irqrestore(&res->lock, flags); 1868 + return res->desc[set].start + free_bit; 1869 + } 1870 + } 1871 + raw_spin_unlock_irqrestore(&res->lock, flags); 1872 + 1873 + return TI_SCI_RESOURCE_NULL; 1874 + } 1875 + EXPORT_SYMBOL_GPL(ti_sci_get_free_resource); 1876 + 1877 + /** 1878 + * ti_sci_release_resource() - Release a resource from TISCI resource. 1879 + * @res: Pointer to the TISCI resource 1880 + * @id: Resource id to be released. 
1881 + */ 1882 + void ti_sci_release_resource(struct ti_sci_resource *res, u16 id) 1883 + { 1884 + unsigned long flags; 1885 + u16 set; 1886 + 1887 + raw_spin_lock_irqsave(&res->lock, flags); 1888 + for (set = 0; set < res->sets; set++) { 1889 + if (res->desc[set].start <= id && 1890 + (res->desc[set].num + res->desc[set].start) > id) 1891 + clear_bit(id - res->desc[set].start, 1892 + res->desc[set].res_map); 1893 + } 1894 + raw_spin_unlock_irqrestore(&res->lock, flags); 1895 + } 1896 + EXPORT_SYMBOL_GPL(ti_sci_release_resource); 1897 + 1898 + /** 1899 + * ti_sci_get_num_resources() - Get the number of resources in TISCI resource 1900 + * @res: Pointer to the TISCI resource 1901 + * 1902 + * Return: Total number of available resources. 1903 + */ 1904 + u32 ti_sci_get_num_resources(struct ti_sci_resource *res) 1905 + { 1906 + u32 set, count = 0; 1907 + 1908 + for (set = 0; set < res->sets; set++) 1909 + count += res->desc[set].num; 1910 + 1911 + return count; 1912 + } 1913 + EXPORT_SYMBOL_GPL(ti_sci_get_num_resources); 1914 + 1915 + /** 1916 + * devm_ti_sci_get_of_resource() - Get a TISCI resource assigned to a device 1917 + * @handle: TISCI handle 1918 + * @dev: Device pointer to which the resource is assigned 1919 + * @dev_id: TISCI device id to which the resource is assigned 1920 + * @of_prop: property name by which the resource are represented 1921 + * 1922 + * Return: Pointer to ti_sci_resource if all went well else appropriate 1923 + * error pointer. 
1924 + */ 1925 + struct ti_sci_resource * 1926 + devm_ti_sci_get_of_resource(const struct ti_sci_handle *handle, 1927 + struct device *dev, u32 dev_id, char *of_prop) 1928 + { 1929 + struct ti_sci_resource *res; 1930 + u32 resource_subtype; 1931 + int i, ret; 1932 + 1933 + res = devm_kzalloc(dev, sizeof(*res), GFP_KERNEL); 1934 + if (!res) 1935 + return ERR_PTR(-ENOMEM); 1936 + 1937 + res->sets = of_property_count_elems_of_size(dev_of_node(dev), of_prop, 1938 + sizeof(u32)); 1939 + if (res->sets < 0) { 1940 + dev_err(dev, "%s resource type ids not available\n", of_prop); 1941 + return ERR_PTR(res->sets); 1942 + } 1943 + 1944 + res->desc = devm_kcalloc(dev, res->sets, sizeof(*res->desc), 1945 + GFP_KERNEL); 1946 + if (!res->desc) 1947 + return ERR_PTR(-ENOMEM); 1948 + 1949 + for (i = 0; i < res->sets; i++) { 1950 + ret = of_property_read_u32_index(dev_of_node(dev), of_prop, i, 1951 + &resource_subtype); 1952 + if (ret) 1953 + return ERR_PTR(-EINVAL); 1954 + 1955 + ret = handle->ops.rm_core_ops.get_range(handle, dev_id, 1956 + resource_subtype, 1957 + &res->desc[i].start, 1958 + &res->desc[i].num); 1959 + if (ret) { 1960 + dev_err(dev, "dev = %d subtype %d not allocated for this host\n", 1961 + dev_id, resource_subtype); 1962 + return ERR_PTR(ret); 1963 + } 1964 + 1965 + dev_dbg(dev, "dev = %d, subtype = %d, start = %d, num = %d\n", 1966 + dev_id, resource_subtype, res->desc[i].start, 1967 + res->desc[i].num); 1968 + 1969 + res->desc[i].res_map = 1970 + devm_kzalloc(dev, BITS_TO_LONGS(res->desc[i].num) * 1971 + sizeof(*res->desc[i].res_map), GFP_KERNEL); 1972 + if (!res->desc[i].res_map) 1973 + return ERR_PTR(-ENOMEM); 1974 + } 1975 + raw_spin_lock_init(&res->lock); 1976 + 1977 + return res; 1978 + } 1979 + 2182 1980 static int tisci_reboot_handler(struct notifier_block *nb, unsigned long mode, 2183 1981 void *cmd) 2184 1982 { ··· 2412 1784 /* Limited by MBOX_TX_QUEUE_LEN. K2G can handle upto 128 messages! 
*/ 2413 1785 .max_msgs = 20, 2414 1786 .max_msg_size = 64, 1787 + .rm_type_map = NULL, 1788 + }; 1789 + 1790 + static struct ti_sci_rm_type_map ti_sci_am654_rm_type_map[] = { 1791 + {.dev_id = 56, .type = 0x00b}, /* GIC_IRQ */ 1792 + {.dev_id = 179, .type = 0x000}, /* MAIN_NAV_UDMASS_IA0 */ 1793 + {.dev_id = 187, .type = 0x009}, /* MAIN_NAV_RA */ 1794 + {.dev_id = 188, .type = 0x006}, /* MAIN_NAV_UDMAP */ 1795 + {.dev_id = 194, .type = 0x007}, /* MCU_NAV_UDMAP */ 1796 + {.dev_id = 195, .type = 0x00a}, /* MCU_NAV_RA */ 1797 + {.dev_id = 0, .type = 0x000}, /* end of table */ 1798 + }; 1799 + 1800 + /* Description for AM654 */ 1801 + static const struct ti_sci_desc ti_sci_pmmc_am654_desc = { 1802 + .default_host_id = 12, 1803 + /* Conservative duration */ 1804 + .max_rx_timeout_ms = 10000, 1805 + /* Limited by MBOX_TX_QUEUE_LEN. K2G can handle upto 128 messages! */ 1806 + .max_msgs = 20, 1807 + .max_msg_size = 60, 1808 + .rm_type_map = ti_sci_am654_rm_type_map, 2415 1809 }; 2416 1810 2417 1811 static const struct of_device_id ti_sci_of_match[] = { 2418 1812 {.compatible = "ti,k2g-sci", .data = &ti_sci_pmmc_k2g_desc}, 1813 + {.compatible = "ti,am654-sci", .data = &ti_sci_pmmc_am654_desc}, 2419 1814 { /* Sentinel */ }, 2420 1815 }; 2421 1816 MODULE_DEVICE_TABLE(of, ti_sci_of_match);
+102
drivers/firmware/ti_sci.h
··· 35 35 #define TI_SCI_MSG_QUERY_CLOCK_FREQ 0x010d 36 36 #define TI_SCI_MSG_GET_CLOCK_FREQ 0x010e 37 37 38 + /* Resource Management Requests */ 39 + #define TI_SCI_MSG_GET_RESOURCE_RANGE 0x1500 40 + 41 + /* IRQ requests */ 42 + #define TI_SCI_MSG_SET_IRQ 0x1000 43 + #define TI_SCI_MSG_FREE_IRQ 0x1001 44 + 38 45 /** 39 46 * struct ti_sci_msg_hdr - Generic Message Header for All messages and responses 40 47 * @type: Type of messages: One of TI_SCI_MSG* values ··· 466 459 struct ti_sci_msg_resp_get_clock_freq { 467 460 struct ti_sci_msg_hdr hdr; 468 461 u64 freq_hz; 462 + } __packed; 463 + 464 + #define TI_SCI_IRQ_SECONDARY_HOST_INVALID 0xff 465 + 466 + /** 467 + * struct ti_sci_msg_req_get_resource_range - Request to get a host's assigned 468 + * range of resources. 469 + * @hdr: Generic Header 470 + * @type: Unique resource assignment type 471 + * @subtype: Resource assignment subtype within the resource type. 472 + * @secondary_host: Host processing entity to which the resources are 473 + * allocated. This is required only when the destination 474 + * host id is different from ti sci interface host id, 475 + * else TI_SCI_IRQ_SECONDARY_HOST_INVALID can be passed. 476 + * 477 + * Request type is TI_SCI_MSG_GET_RESOURCE_RANGE. Responded with requested 478 + * resource range which is of type TI_SCI_MSG_GET_RESOURCE_RANGE. 479 + */ 480 + struct ti_sci_msg_req_get_resource_range { 481 + struct ti_sci_msg_hdr hdr; 482 + #define MSG_RM_RESOURCE_TYPE_MASK GENMASK(9, 0) 483 + #define MSG_RM_RESOURCE_SUBTYPE_MASK GENMASK(5, 0) 484 + u16 type; 485 + u8 subtype; 486 + u8 secondary_host; 487 + } __packed; 488 + 489 + /** 490 + * struct ti_sci_msg_resp_get_resource_range - Response to resource get range. 491 + * @hdr: Generic Header 492 + * @range_start: Start index of the resource range. 493 + * @range_num: Number of resources in the range. 494 + * 495 + * Response to request TI_SCI_MSG_GET_RESOURCE_RANGE. 
496 + */ 497 + struct ti_sci_msg_resp_get_resource_range { 498 + struct ti_sci_msg_hdr hdr; 499 + u16 range_start; 500 + u16 range_num; 501 + } __packed; 502 + 503 + /** 504 + * struct ti_sci_msg_req_manage_irq - Request to configure/release the route 505 + * between the dev and the host. 506 + * @hdr: Generic Header 507 + * @valid_params: Bit fields defining the validity of interrupt source 508 + * parameters. If a bit is not set, then corresponding 509 + * field is not valid and will not be used for route set. 510 + * Bit field definitions: 511 + * 0 - Valid bit for @dst_id 512 + * 1 - Valid bit for @dst_host_irq 513 + * 2 - Valid bit for @ia_id 514 + * 3 - Valid bit for @vint 515 + * 4 - Valid bit for @global_event 516 + * 5 - Valid bit for @vint_status_bit_index 517 + * 31 - Valid bit for @secondary_host 518 + * @src_id: IRQ source peripheral ID. 519 + * @src_index: IRQ source index within the peripheral 520 + * @dst_id: IRQ Destination ID. Based on the architecture it can be 521 + * IRQ controller or host processor ID. 522 + * @dst_host_irq: IRQ number of the destination host IRQ controller 523 + * @ia_id: Device ID of the interrupt aggregator in which the 524 + * vint resides. 525 + * @vint: Virtual interrupt number if the interrupt route 526 + * is through an interrupt aggregator. 527 + * @global_event: Global event that is to be mapped to interrupt 528 + * aggregator virtual interrupt status bit. 529 + * @vint_status_bit: Virtual interrupt status bit if the interrupt route 530 + * utilizes an interrupt aggregator status bit. 531 + * @secondary_host: Host ID of the IRQ destination computing entity. This is 532 + * required only when destination host id is different 533 + * from ti sci interface host id. 534 + * 535 + * Request type is TI_SCI_MSG_SET/RELEASE_IRQ. 536 + * Response is generic ACK / NACK message. 
537 + */ 538 + struct ti_sci_msg_req_manage_irq { 539 + struct ti_sci_msg_hdr hdr; 540 + #define MSG_FLAG_DST_ID_VALID TI_SCI_MSG_FLAG(0) 541 + #define MSG_FLAG_DST_HOST_IRQ_VALID TI_SCI_MSG_FLAG(1) 542 + #define MSG_FLAG_IA_ID_VALID TI_SCI_MSG_FLAG(2) 543 + #define MSG_FLAG_VINT_VALID TI_SCI_MSG_FLAG(3) 544 + #define MSG_FLAG_GLB_EVNT_VALID TI_SCI_MSG_FLAG(4) 545 + #define MSG_FLAG_VINT_STS_BIT_VALID TI_SCI_MSG_FLAG(5) 546 + #define MSG_FLAG_SHOST_VALID TI_SCI_MSG_FLAG(31) 547 + u32 valid_params; 548 + u16 src_id; 549 + u16 src_index; 550 + u16 dst_id; 551 + u16 dst_host_irq; 552 + u16 ia_id; 553 + u16 vint; 554 + u16 global_event; 555 + u8 vint_status_bit; 556 + u8 secondary_host; 469 557 } __packed; 470 558 471 559 #endif /* __TI_SCI_H */
+4 -12
drivers/gpio/gpio-thunderx.c
··· 363 363 { 364 364 struct thunderx_line *txline = irq_data_get_irq_chip_data(data); 365 365 struct thunderx_gpio *txgpio = txline->txgpio; 366 - struct irq_data *parent_data = data->parent_data; 367 366 int r; 368 367 369 368 r = gpiochip_lock_as_irq(&txgpio->chip, txline->line); 370 369 if (r) 371 370 return r; 372 371 373 - if (parent_data && parent_data->chip->irq_request_resources) { 374 - r = parent_data->chip->irq_request_resources(parent_data); 375 - if (r) 376 - goto error; 377 - } 372 + r = irq_chip_request_resources_parent(data); 373 + if (r) 374 + gpiochip_unlock_as_irq(&txgpio->chip, txline->line); 378 375 379 - return 0; 380 - error: 381 - gpiochip_unlock_as_irq(&txgpio->chip, txline->line); 382 376 return r; 383 377 } 384 378 ··· 380 386 { 381 387 struct thunderx_line *txline = irq_data_get_irq_chip_data(data); 382 388 struct thunderx_gpio *txgpio = txline->txgpio; 383 - struct irq_data *parent_data = data->parent_data; 384 389 385 - if (parent_data && parent_data->chip->irq_release_resources) 386 - parent_data->chip->irq_release_resources(parent_data); 390 + irq_chip_release_resources_parent(data); 387 391 388 392 gpiochip_unlock_as_irq(&txgpio->chip, txline->line); 389 393 }
+1
drivers/iommu/Kconfig
··· 94 94 bool 95 95 select IOMMU_API 96 96 select IOMMU_IOVA 97 + select IRQ_MSI_IOMMU 97 98 select NEED_SG_DMA_LENGTH 98 99 99 100 config FSL_PAMU
+28 -20
drivers/iommu/dma-iommu.c
··· 907 907 return NULL; 908 908 } 909 909 910 - void iommu_dma_map_msi_msg(int irq, struct msi_msg *msg) 910 + int iommu_dma_prepare_msi(struct msi_desc *desc, phys_addr_t msi_addr) 911 911 { 912 - struct device *dev = msi_desc_to_dev(irq_get_msi_desc(irq)); 912 + struct device *dev = msi_desc_to_dev(desc); 913 913 struct iommu_domain *domain = iommu_get_domain_for_dev(dev); 914 914 struct iommu_dma_cookie *cookie; 915 915 struct iommu_dma_msi_page *msi_page; 916 - phys_addr_t msi_addr = (u64)msg->address_hi << 32 | msg->address_lo; 917 916 unsigned long flags; 918 917 919 - if (!domain || !domain->iova_cookie) 920 - return; 918 + if (!domain || !domain->iova_cookie) { 919 + desc->iommu_cookie = NULL; 920 + return 0; 921 + } 921 922 922 923 cookie = domain->iova_cookie; 923 924 ··· 931 930 msi_page = iommu_dma_get_msi_page(dev, msi_addr, domain); 932 931 spin_unlock_irqrestore(&cookie->msi_lock, flags); 933 932 934 - if (WARN_ON(!msi_page)) { 935 - /* 936 - * We're called from a void callback, so the best we can do is 937 - * 'fail' by filling the message with obviously bogus values. 938 - * Since we got this far due to an IOMMU being present, it's 939 - * not like the existing address would have worked anyway... 
940 - */ 941 - msg->address_hi = ~0U; 942 - msg->address_lo = ~0U; 943 - msg->data = ~0U; 944 - } else { 945 - msg->address_hi = upper_32_bits(msi_page->iova); 946 - msg->address_lo &= cookie_msi_granule(cookie) - 1; 947 - msg->address_lo += lower_32_bits(msi_page->iova); 948 - } 933 + msi_desc_set_iommu_cookie(desc, msi_page); 934 + 935 + if (!msi_page) 936 + return -ENOMEM; 937 + return 0; 938 + } 939 + 940 + void iommu_dma_compose_msi_msg(struct msi_desc *desc, 941 + struct msi_msg *msg) 942 + { 943 + struct device *dev = msi_desc_to_dev(desc); 944 + const struct iommu_domain *domain = iommu_get_domain_for_dev(dev); 945 + const struct iommu_dma_msi_page *msi_page; 946 + 947 + msi_page = msi_desc_get_iommu_cookie(desc); 948 + 949 + if (!domain || !domain->iova_cookie || WARN_ON(!msi_page)) 950 + return; 951 + 952 + msg->address_hi = upper_32_bits(msi_page->iova); 953 + msg->address_lo &= cookie_msi_granule(domain->iova_cookie) - 1; 954 + msg->address_lo += lower_32_bits(msi_page->iova); 949 955 }
+21 -6
drivers/irqchip/Kconfig
··· 6 6 7 7 config ARM_GIC 8 8 bool 9 - select IRQ_DOMAIN 10 9 select IRQ_DOMAIN_HIERARCHY 11 10 select GENERIC_IRQ_MULTI_HANDLER 12 11 select GENERIC_IRQ_EFFECTIVE_AFF_MASK ··· 32 33 33 34 config ARM_GIC_V3 34 35 bool 35 - select IRQ_DOMAIN 36 36 select GENERIC_IRQ_MULTI_HANDLER 37 37 select IRQ_DOMAIN_HIERARCHY 38 38 select PARTITION_PERCPU ··· 57 59 58 60 config ARM_NVIC 59 61 bool 60 - select IRQ_DOMAIN 61 62 select IRQ_DOMAIN_HIERARCHY 62 63 select GENERIC_IRQ_CHIP 63 64 ··· 355 358 config QCOM_IRQ_COMBINER 356 359 bool "QCOM IRQ combiner support" 357 360 depends on ARCH_QCOM && ACPI 358 - select IRQ_DOMAIN 359 361 select IRQ_DOMAIN_HIERARCHY 360 362 help 361 363 Say yes here to add support for the IRQ combiner devices embedded ··· 371 375 config MESON_IRQ_GPIO 372 376 bool "Meson GPIO Interrupt Multiplexer" 373 377 depends on ARCH_MESON 374 - select IRQ_DOMAIN 375 378 select IRQ_DOMAIN_HIERARCHY 376 379 help 377 380 Support Meson SoC Family GPIO Interrupt Multiplexer ··· 386 391 config QCOM_PDC 387 392 bool "QCOM PDC" 388 393 depends on ARCH_QCOM 389 - select IRQ_DOMAIN 390 394 select IRQ_DOMAIN_HIERARCHY 391 395 help 392 396 Power Domain Controller driver to manage and configure wakeup ··· 424 430 select GENERIC_IRQ_CHIP 425 431 help 426 432 Support for the Loongson-1 platform Interrupt Controller. 433 + 434 + config TI_SCI_INTR_IRQCHIP 435 + bool 436 + depends on TI_SCI_PROTOCOL 437 + select IRQ_DOMAIN_HIERARCHY 438 + help 439 + This enables the irqchip driver support for K3 Interrupt router 440 + over TI System Control Interface available on some new TI's SoCs. 441 + If you wish to use interrupt router irq resources managed by the 442 + TI System Controller, say Y here. Otherwise, say N. 
443 + 444 + config TI_SCI_INTA_IRQCHIP 445 + bool 446 + depends on TI_SCI_PROTOCOL 447 + select IRQ_DOMAIN_HIERARCHY 448 + select TI_SCI_INTA_MSI_DOMAIN 449 + help 450 + This enables the irqchip driver support for K3 Interrupt aggregator 451 + over TI System Control Interface available on some new TI's SoCs. 452 + If you wish to use interrupt aggregator irq resources managed by the 453 + TI System Controller, say Y here. Otherwise, say N. 427 454 428 455 endmenu 429 456
+2
drivers/irqchip/Makefile
··· 98 98 obj-$(CONFIG_IMX_IRQSTEER) += irq-imx-irqsteer.o 99 99 obj-$(CONFIG_MADERA_IRQ) += irq-madera.o 100 100 obj-$(CONFIG_LS1X_IRQ) += irq-ls1x.o 101 + obj-$(CONFIG_TI_SCI_INTR_IRQCHIP) += irq-ti-sci-intr.o 102 + obj-$(CONFIG_TI_SCI_INTA_IRQCHIP) += irq-ti-sci-inta.o
+3
drivers/irqchip/irq-bcm7038-l1.c
··· 343 343 goto out_unmap; 344 344 } 345 345 346 + pr_info("registered BCM7038 L1 intc (%pOF, IRQs: %d)\n", 347 + dn, IRQS_PER_WORD * intc->n_words); 348 + 346 349 return 0; 347 350 348 351 out_unmap:
+3
drivers/irqchip/irq-bcm7120-l2.c
··· 318 318 } 319 319 } 320 320 321 + pr_info("registered %s intc (%pOF, parent IRQ(s): %d)\n", 322 + intc_name, dn, data->num_parent_irqs); 323 + 321 324 return 0; 322 325 323 326 out_free_domain:
+2
drivers/irqchip/irq-brcmstb-l2.c
··· 264 264 ct->chip.irq_set_wake = irq_gc_set_wake; 265 265 } 266 266 267 + pr_info("registered L2 intc (%pOF, parent irq: %d)\n", np, parent_irq); 268 + 267 269 return 0; 268 270 269 271 out_free_domain:
+39 -37
drivers/irqchip/irq-gic-pm.c
··· 19 19 #include <linux/of_irq.h> 20 20 #include <linux/irqchip/arm-gic.h> 21 21 #include <linux/platform_device.h> 22 - #include <linux/pm_clock.h> 23 22 #include <linux/pm_runtime.h> 24 23 #include <linux/slab.h> 25 24 ··· 27 28 const char *const *clocks; 28 29 }; 29 30 31 + struct gic_chip_pm { 32 + struct gic_chip_data *chip_data; 33 + const struct gic_clk_data *clk_data; 34 + struct clk_bulk_data *clks; 35 + }; 36 + 30 37 static int gic_runtime_resume(struct device *dev) 31 38 { 32 - struct gic_chip_data *gic = dev_get_drvdata(dev); 39 + struct gic_chip_pm *chip_pm = dev_get_drvdata(dev); 40 + struct gic_chip_data *gic = chip_pm->chip_data; 41 + const struct gic_clk_data *data = chip_pm->clk_data; 33 42 int ret; 34 43 35 - ret = pm_clk_resume(dev); 36 - if (ret) 44 + ret = clk_bulk_prepare_enable(data->num_clocks, chip_pm->clks); 45 + if (ret) { 46 + dev_err(dev, "clk_enable failed: %d\n", ret); 37 47 return ret; 48 + } 38 49 39 50 /* 40 - * On the very first resume, the pointer to the driver data 51 + * On the very first resume, the pointer to chip_pm->chip_data 41 52 * will be NULL and this is intentional, because we do not 42 53 * want to restore the GIC on the very first resume. So if 43 54 * the pointer is not valid just return. 
··· 63 54 64 55 static int gic_runtime_suspend(struct device *dev) 65 56 { 66 - struct gic_chip_data *gic = dev_get_drvdata(dev); 57 + struct gic_chip_pm *chip_pm = dev_get_drvdata(dev); 58 + struct gic_chip_data *gic = chip_pm->chip_data; 59 + const struct gic_clk_data *data = chip_pm->clk_data; 67 60 68 61 gic_dist_save(gic); 69 62 gic_cpu_save(gic); 70 63 71 - return pm_clk_suspend(dev); 72 - } 73 - 74 - static int gic_get_clocks(struct device *dev, const struct gic_clk_data *data) 75 - { 76 - unsigned int i; 77 - int ret; 78 - 79 - if (!dev || !data) 80 - return -EINVAL; 81 - 82 - ret = pm_clk_create(dev); 83 - if (ret) 84 - return ret; 85 - 86 - for (i = 0; i < data->num_clocks; i++) { 87 - ret = of_pm_clk_add_clk(dev, data->clocks[i]); 88 - if (ret) { 89 - dev_err(dev, "failed to add clock %s\n", 90 - data->clocks[i]); 91 - pm_clk_destroy(dev); 92 - return ret; 93 - } 94 - } 64 + clk_bulk_disable_unprepare(data->num_clocks, chip_pm->clks); 95 65 96 66 return 0; 97 67 } ··· 79 91 { 80 92 struct device *dev = &pdev->dev; 81 93 const struct gic_clk_data *data; 82 - struct gic_chip_data *gic; 83 - int ret, irq; 94 + struct gic_chip_pm *chip_pm; 95 + int ret, irq, i; 84 96 85 97 data = of_device_get_match_data(&pdev->dev); 86 98 if (!data) { ··· 88 100 return -ENODEV; 89 101 } 90 102 103 + chip_pm = devm_kzalloc(dev, sizeof(*chip_pm), GFP_KERNEL); 104 + if (!chip_pm) 105 + return -ENOMEM; 106 + 91 107 irq = irq_of_parse_and_map(dev->of_node, 0); 92 108 if (!irq) { 93 109 dev_err(dev, "no parent interrupt found!\n"); 94 110 return -EINVAL; 95 111 } 96 112 97 - ret = gic_get_clocks(dev, data); 113 + chip_pm->clks = devm_kcalloc(dev, data->num_clocks, 114 + sizeof(*chip_pm->clks), GFP_KERNEL); 115 + if (!chip_pm->clks) 116 + return -ENOMEM; 117 + 118 + for (i = 0; i < data->num_clocks; i++) 119 + chip_pm->clks[i].id = data->clocks[i]; 120 + 121 + ret = devm_clk_bulk_get(dev, data->num_clocks, chip_pm->clks); 98 122 if (ret) 99 123 goto irq_dispose; 124 + 125 + 
chip_pm->clk_data = data; 126 + dev_set_drvdata(dev, chip_pm); 100 127 101 128 pm_runtime_enable(dev); 102 129 ··· 119 116 if (ret < 0) 120 117 goto rpm_disable; 121 118 122 - ret = gic_of_init_child(dev, &gic, irq); 119 + ret = gic_of_init_child(dev, &chip_pm->chip_data, irq); 123 120 if (ret) 124 121 goto rpm_put; 125 - 126 - platform_set_drvdata(pdev, gic); 127 122 128 123 pm_runtime_put(dev); 129 124 ··· 133 132 pm_runtime_put_sync(dev); 134 133 rpm_disable: 135 134 pm_runtime_disable(dev); 136 - pm_clk_destroy(dev); 137 135 irq_dispose: 138 136 irq_dispose_mapping(irq); 139 137 ··· 142 142 static const struct dev_pm_ops gic_pm_ops = { 143 143 SET_RUNTIME_PM_OPS(gic_runtime_suspend, 144 144 gic_runtime_resume, NULL) 145 + SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, 146 + pm_runtime_force_resume) 145 147 }; 146 148 147 149 static const char * const gic400_clocks[] = {
+7 -1
drivers/irqchip/irq-gic-v2m.c
··· 110 110 if (v2m->flags & GICV2M_NEEDS_SPI_OFFSET) 111 111 msg->data -= v2m->spi_offset; 112 112 113 - iommu_dma_map_msi_msg(data->irq, msg); 113 + iommu_dma_compose_msi_msg(irq_data_get_msi_desc(data), msg); 114 114 } 115 115 116 116 static struct irq_chip gicv2m_irq_chip = { ··· 167 167 static int gicv2m_irq_domain_alloc(struct irq_domain *domain, unsigned int virq, 168 168 unsigned int nr_irqs, void *args) 169 169 { 170 + msi_alloc_info_t *info = args; 170 171 struct v2m_data *v2m = NULL, *tmp; 171 172 int hwirq, offset, i, err = 0; 172 173 ··· 186 185 return -ENOSPC; 187 186 188 187 hwirq = v2m->spi_start + offset; 188 + 189 + err = iommu_dma_prepare_msi(info->desc, 190 + v2m->res.start + V2M_MSI_SETSPI_NS); 191 + if (err) 192 + return err; 189 193 190 194 for (i = 0; i < nr_irqs; i++) { 191 195 err = gicv2m_irq_gic_domain_alloc(domain, virq + i, hwirq + i);
+43 -41
drivers/irqchip/irq-gic-v3-its.c
··· 26 26 #include <linux/interrupt.h> 27 27 #include <linux/irqdomain.h> 28 28 #include <linux/list.h> 29 - #include <linux/list_sort.h> 30 29 #include <linux/log2.h> 31 30 #include <linux/memblock.h> 32 31 #include <linux/mm.h> ··· 1178 1179 msg->address_hi = upper_32_bits(addr); 1179 1180 msg->data = its_get_event_id(d); 1180 1181 1181 - iommu_dma_map_msi_msg(d->irq, msg); 1182 + iommu_dma_compose_msi_msg(irq_data_get_msi_desc(d), msg); 1182 1183 } 1183 1184 1184 1185 static int its_irq_set_irqchip_state(struct irq_data *d, ··· 1464 1465 { 1465 1466 struct lpi_range *range; 1466 1467 1467 - range = kzalloc(sizeof(*range), GFP_KERNEL); 1468 + range = kmalloc(sizeof(*range), GFP_KERNEL); 1468 1469 if (range) { 1469 - INIT_LIST_HEAD(&range->entry); 1470 1470 range->base_id = base; 1471 1471 range->span = span; 1472 1472 } 1473 1473 1474 1474 return range; 1475 - } 1476 - 1477 - static int lpi_range_cmp(void *priv, struct list_head *a, struct list_head *b) 1478 - { 1479 - struct lpi_range *ra, *rb; 1480 - 1481 - ra = container_of(a, struct lpi_range, entry); 1482 - rb = container_of(b, struct lpi_range, entry); 1483 - 1484 - return ra->base_id - rb->base_id; 1485 - } 1486 - 1487 - static void merge_lpi_ranges(void) 1488 - { 1489 - struct lpi_range *range, *tmp; 1490 - 1491 - list_for_each_entry_safe(range, tmp, &lpi_range_list, entry) { 1492 - if (!list_is_last(&range->entry, &lpi_range_list) && 1493 - (tmp->base_id == (range->base_id + range->span))) { 1494 - tmp->base_id = range->base_id; 1495 - tmp->span += range->span; 1496 - list_del(&range->entry); 1497 - kfree(range); 1498 - } 1499 - } 1500 1475 } 1501 1476 1502 1477 static int alloc_lpi_range(u32 nr_lpis, u32 *base) ··· 1502 1529 return err; 1503 1530 } 1504 1531 1532 + static void merge_lpi_ranges(struct lpi_range *a, struct lpi_range *b) 1533 + { 1534 + if (&a->entry == &lpi_range_list || &b->entry == &lpi_range_list) 1535 + return; 1536 + if (a->base_id + a->span != b->base_id) 1537 + return; 1538 + 
b->base_id = a->base_id; 1539 + b->span += a->span; 1540 + list_del(&a->entry); 1541 + kfree(a); 1542 + } 1543 + 1505 1544 static int free_lpi_range(u32 base, u32 nr_lpis) 1506 1545 { 1507 - struct lpi_range *new; 1508 - int err = 0; 1546 + struct lpi_range *new, *old; 1547 + 1548 + new = mk_lpi_range(base, nr_lpis); 1549 + if (!new) 1550 + return -ENOMEM; 1509 1551 1510 1552 mutex_lock(&lpi_range_lock); 1511 1553 1512 - new = mk_lpi_range(base, nr_lpis); 1513 - if (!new) { 1514 - err = -ENOMEM; 1515 - goto out; 1554 + list_for_each_entry_reverse(old, &lpi_range_list, entry) { 1555 + if (old->base_id < base) 1556 + break; 1516 1557 } 1558 + /* 1559 + * old is the last element with ->base_id smaller than base, 1560 + * so new goes right after it. If there are no elements with 1561 + * ->base_id smaller than base, &old->entry ends up pointing 1562 + * at the head of the list, and inserting new at the start of 1563 + * the list is the right thing to do in that case as well. 1564 + */ 1565 + list_add(&new->entry, &old->entry); 1566 + /* 1567 + * Now check if we can merge with the preceding and/or 1568 + * following ranges. 1569 + */ 1570 + merge_lpi_ranges(old, new); 1571 + merge_lpi_ranges(new, list_next_entry(new, entry)); 1517 1572 1518 - list_add(&new->entry, &lpi_range_list); 1519 - list_sort(NULL, &lpi_range_list, lpi_range_cmp); 1520 - merge_lpi_ranges(); 1521 - out: 1522 1573 mutex_unlock(&lpi_range_lock); 1523 - return err; 1574 + return 0; 1524 1575 } 1525 1576 1526 1577 static int __init its_lpi_init(u32 id_bits) ··· 2484 2487 int err = 0; 2485 2488 2486 2489 /* 2487 - * We ignore "dev" entierely, and rely on the dev_id that has 2490 + * We ignore "dev" entirely, and rely on the dev_id that has 2488 2491 * been passed via the scratchpad. This limits this domain's 2489 2492 * usefulness to upper layers that definitely know that they 2490 2493 * are built on top of the ITS. 
··· 2563 2566 { 2564 2567 msi_alloc_info_t *info = args; 2565 2568 struct its_device *its_dev = info->scratchpad[0].ptr; 2569 + struct its_node *its = its_dev->its; 2566 2570 irq_hw_number_t hwirq; 2567 2571 int err; 2568 2572 int i; 2569 2573 2570 2574 err = its_alloc_device_irq(its_dev, nr_irqs, &hwirq); 2575 + if (err) 2576 + return err; 2577 + 2578 + err = iommu_dma_prepare_msi(info->desc, its->get_msi_base(its_dev)); 2571 2579 if (err) 2572 2580 return err; 2573 2581
+8 -2
drivers/irqchip/irq-gic-v3-mbi.c
··· 84 84 static int mbi_irq_domain_alloc(struct irq_domain *domain, unsigned int virq, 85 85 unsigned int nr_irqs, void *args) 86 86 { 87 + msi_alloc_info_t *info = args; 87 88 struct mbi_range *mbi = NULL; 88 89 int hwirq, offset, i, err = 0; 89 90 ··· 104 103 return -ENOSPC; 105 104 106 105 hwirq = mbi->spi_start + offset; 106 + 107 + err = iommu_dma_prepare_msi(info->desc, 108 + mbi_phys_base + GICD_SETSPI_NSR); 109 + if (err) 110 + return err; 107 111 108 112 for (i = 0; i < nr_irqs; i++) { 109 113 err = mbi_irq_gic_domain_alloc(domain, virq + i, hwirq + i); ··· 148 142 msg[0].address_lo = lower_32_bits(mbi_phys_base + GICD_SETSPI_NSR); 149 143 msg[0].data = data->parent_data->hwirq; 150 144 151 - iommu_dma_map_msi_msg(data->irq, msg); 145 + iommu_dma_compose_msi_msg(irq_data_get_msi_desc(data), msg); 152 146 } 153 147 154 148 #ifdef CONFIG_PCI_MSI ··· 208 202 msg[1].address_lo = lower_32_bits(mbi_phys_base + GICD_CLRSPI_NSR); 209 203 msg[1].data = data->parent_data->hwirq; 210 204 211 - iommu_dma_map_msi_msg(data->irq, &msg[1]); 205 + iommu_dma_compose_msi_msg(irq_data_get_msi_desc(data), &msg[1]); 212 206 } 213 207 214 208 /* Platform-MSI specific irqchip */
+1 -3
drivers/irqchip/irq-imx-irqsteer.c
··· 144 144 { 145 145 struct device_node *np = pdev->dev.of_node; 146 146 struct irqsteer_data *data; 147 - struct resource *res; 148 147 u32 irqs_num; 149 148 int i, ret; 150 149 ··· 151 152 if (!data) 152 153 return -ENOMEM; 153 154 154 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 155 - data->regs = devm_ioremap_resource(&pdev->dev, res); 155 + data->regs = devm_platform_ioremap_resource(pdev, 0); 156 156 if (IS_ERR(data->regs)) { 157 157 dev_err(&pdev->dev, "failed to initialize reg\n"); 158 158 return PTR_ERR(data->regs);
+6 -1
drivers/irqchip/irq-ls-scfg-msi.c
··· 100 100 msg->data |= cpumask_first(mask); 101 101 } 102 102 103 - iommu_dma_map_msi_msg(data->irq, msg); 103 + iommu_dma_compose_msi_msg(irq_data_get_msi_desc(data), msg); 104 104 } 105 105 106 106 static int ls_scfg_msi_set_affinity(struct irq_data *irq_data, ··· 141 141 unsigned int nr_irqs, 142 142 void *args) 143 143 { 144 + msi_alloc_info_t *info = args; 144 145 struct ls_scfg_msi *msi_data = domain->host_data; 145 146 int pos, err = 0; 146 147 ··· 155 154 err = -ENOSPC; 156 155 spin_unlock(&msi_data->lock); 157 156 157 + if (err) 158 + return err; 159 + 160 + err = iommu_dma_prepare_msi(info->desc, msi_data->msiir_addr); 158 161 if (err) 159 162 return err; 160 163
+1 -3
drivers/irqchip/irq-renesas-intc-irqpin.c
··· 389 389 int k; 390 390 391 391 p = devm_kzalloc(dev, sizeof(*p), GFP_KERNEL); 392 - if (!p) { 393 - dev_err(dev, "failed to allocate driver data\n"); 392 + if (!p) 394 393 return -ENOMEM; 395 - } 396 394 397 395 /* deal with driver instance configuration */ 398 396 of_property_read_u32(dev->of_node, "sense-bitfield-width",
+139 -92
drivers/irqchip/irq-stm32-exti.c
··· 14 14 #include <linux/irqchip.h> 15 15 #include <linux/irqchip/chained_irq.h> 16 16 #include <linux/irqdomain.h> 17 + #include <linux/module.h> 17 18 #include <linux/of_address.h> 18 19 #include <linux/of_irq.h> 20 + #include <linux/of_platform.h> 19 21 #include <linux/syscore_ops.h> 20 22 21 23 #include <dt-bindings/interrupt-controller/arm-gic.h> ··· 38 36 }; 39 37 40 38 #define UNDEF_REG ~0 41 - 42 - enum stm32_exti_hwspinlock { 43 - HWSPINLOCK_UNKNOWN, 44 - HWSPINLOCK_NONE, 45 - HWSPINLOCK_READY, 46 - }; 47 39 48 40 struct stm32_desc_irq { 49 41 u32 exti; ··· 65 69 void __iomem *base; 66 70 struct stm32_exti_chip_data *chips_data; 67 71 const struct stm32_exti_drv_data *drv_data; 68 - struct device_node *node; 69 - enum stm32_exti_hwspinlock hwlock_state; 70 72 struct hwspinlock *hwlock; 71 73 }; 72 74 ··· 279 285 280 286 static int stm32_exti_hwspin_lock(struct stm32_exti_chip_data *chip_data) 281 287 { 282 - struct stm32_exti_host_data *host_data = chip_data->host_data; 283 - struct hwspinlock *hwlock; 284 - int id, ret = 0, timeout = 0; 288 + int ret, timeout = 0; 285 289 286 - /* first time, check for hwspinlock availability */ 287 - if (unlikely(host_data->hwlock_state == HWSPINLOCK_UNKNOWN)) { 288 - id = of_hwspin_lock_get_id(host_data->node, 0); 289 - if (id >= 0) { 290 - hwlock = hwspin_lock_request_specific(id); 291 - if (hwlock) { 292 - /* found valid hwspinlock */ 293 - host_data->hwlock_state = HWSPINLOCK_READY; 294 - host_data->hwlock = hwlock; 295 - pr_debug("%s hwspinlock = %d\n", __func__, id); 296 - } else { 297 - host_data->hwlock_state = HWSPINLOCK_NONE; 298 - } 299 - } else if (id != -EPROBE_DEFER) { 300 - host_data->hwlock_state = HWSPINLOCK_NONE; 301 - } else { 302 - /* hwspinlock driver shall be ready at that stage */ 303 - ret = -EPROBE_DEFER; 304 - } 305 - } 290 + if (!chip_data->host_data->hwlock) 291 + return 0; 306 292 307 - if (likely(host_data->hwlock_state == HWSPINLOCK_READY)) { 308 - /* 309 - * Use the x_raw API since we are 
under spin_lock protection. 310 - * Do not use the x_timeout API because we are under irq_disable 311 - * mode (see __setup_irq()) 312 - */ 313 - do { 314 - ret = hwspin_trylock_raw(host_data->hwlock); 315 - if (!ret) 316 - return 0; 293 + /* 294 + * Use the x_raw API since we are under spin_lock protection. 295 + * Do not use the x_timeout API because we are under irq_disable 296 + * mode (see __setup_irq()) 297 + */ 298 + do { 299 + ret = hwspin_trylock_raw(chip_data->host_data->hwlock); 300 + if (!ret) 301 + return 0; 317 302 318 - udelay(HWSPNLCK_RETRY_DELAY); 319 - timeout += HWSPNLCK_RETRY_DELAY; 320 - } while (timeout < HWSPNLCK_TIMEOUT); 303 + udelay(HWSPNLCK_RETRY_DELAY); 304 + timeout += HWSPNLCK_RETRY_DELAY; 305 + } while (timeout < HWSPNLCK_TIMEOUT); 321 306 322 - if (ret == -EBUSY) 323 - ret = -ETIMEDOUT; 324 - } 307 + if (ret == -EBUSY) 308 + ret = -ETIMEDOUT; 325 309 326 310 if (ret) 327 311 pr_err("%s can't get hwspinlock (%d)\n", __func__, ret); ··· 309 337 310 338 static void stm32_exti_hwspin_unlock(struct stm32_exti_chip_data *chip_data) 311 339 { 312 - if (likely(chip_data->host_data->hwlock_state == HWSPINLOCK_READY)) 340 + if (chip_data->host_data->hwlock) 313 341 hwspin_unlock_raw(chip_data->host_data->hwlock); 314 342 } 315 343 ··· 558 586 return -EINVAL; 559 587 } 560 588 561 - #ifdef CONFIG_PM 562 - static int stm32_exti_h_suspend(void) 589 + static int __maybe_unused stm32_exti_h_suspend(void) 563 590 { 564 591 struct stm32_exti_chip_data *chip_data; 565 592 int i; ··· 573 602 return 0; 574 603 } 575 604 576 - static void stm32_exti_h_resume(void) 605 + static void __maybe_unused stm32_exti_h_resume(void) 577 606 { 578 607 struct stm32_exti_chip_data *chip_data; 579 608 int i; ··· 587 616 } 588 617 589 618 static struct syscore_ops stm32_exti_h_syscore_ops = { 619 + #ifdef CONFIG_PM_SLEEP 590 620 .suspend = stm32_exti_h_suspend, 591 621 .resume = stm32_exti_h_resume, 622 + #endif 592 623 }; 593 624 594 - static void 
stm32_exti_h_syscore_init(void) 625 + static void stm32_exti_h_syscore_init(struct stm32_exti_host_data *host_data) 595 626 { 627 + stm32_host_data = host_data; 596 628 register_syscore_ops(&stm32_exti_h_syscore_ops); 597 629 } 598 - #else 599 - static inline void stm32_exti_h_syscore_init(void) {} 600 - #endif 630 + 631 + static void stm32_exti_h_syscore_deinit(void) 632 + { 633 + unregister_syscore_ops(&stm32_exti_h_syscore_ops); 634 + } 601 635 602 636 static struct irq_chip stm32_exti_h_chip = { 603 637 .name = "stm32-exti-h", ··· 659 683 return NULL; 660 684 661 685 host_data->drv_data = dd; 662 - host_data->node = node; 663 - host_data->hwlock_state = HWSPINLOCK_UNKNOWN; 664 686 host_data->chips_data = kcalloc(dd->bank_nr, 665 687 sizeof(struct stm32_exti_chip_data), 666 688 GFP_KERNEL); ··· 685 711 686 712 static struct 687 713 stm32_exti_chip_data *stm32_exti_chip_init(struct stm32_exti_host_data *h_data, 688 - u32 bank_idx) 714 + u32 bank_idx, 715 + struct device_node *node) 689 716 { 690 717 const struct stm32_exti_bank *stm32_bank; 691 718 struct stm32_exti_chip_data *chip_data; ··· 706 731 writel_relaxed(0, base + stm32_bank->imr_ofst); 707 732 writel_relaxed(0, base + stm32_bank->emr_ofst); 708 733 709 - pr_info("%pOF: bank%d\n", h_data->node, bank_idx); 734 + pr_info("%pOF: bank%d\n", node, bank_idx); 710 735 711 736 return chip_data; 712 737 } ··· 746 771 struct stm32_exti_chip_data *chip_data; 747 772 748 773 stm32_bank = drv_data->exti_banks[i]; 749 - chip_data = stm32_exti_chip_init(host_data, i); 774 + chip_data = stm32_exti_chip_init(host_data, i, node); 750 775 751 776 gc = irq_get_domain_generic_chip(domain, i * IRQS_PER_BANK); 752 777 ··· 790 815 .xlate = irq_domain_xlate_twocell, 791 816 }; 792 817 793 - static int 794 - __init stm32_exti_hierarchy_init(const struct stm32_exti_drv_data *drv_data, 795 - struct device_node *node, 796 - struct device_node *parent) 818 + static void stm32_exti_remove_irq(void *data) 797 819 { 820 + struct 
irq_domain *domain = data; 821 + 822 + irq_domain_remove(domain); 823 + } 824 + 825 + static int stm32_exti_remove(struct platform_device *pdev) 826 + { 827 + stm32_exti_h_syscore_deinit(); 828 + return 0; 829 + } 830 + 831 + static int stm32_exti_probe(struct platform_device *pdev) 832 + { 833 + int ret, i; 834 + struct device *dev = &pdev->dev; 835 + struct device_node *np = dev->of_node; 798 836 struct irq_domain *parent_domain, *domain; 799 837 struct stm32_exti_host_data *host_data; 800 - int ret, i; 838 + const struct stm32_exti_drv_data *drv_data; 839 + struct resource *res; 801 840 802 - parent_domain = irq_find_host(parent); 803 - if (!parent_domain) { 804 - pr_err("interrupt-parent not found\n"); 805 - return -EINVAL; 806 - } 807 - 808 - host_data = stm32_exti_host_init(drv_data, node); 841 + host_data = devm_kzalloc(dev, sizeof(*host_data), GFP_KERNEL); 809 842 if (!host_data) 810 843 return -ENOMEM; 811 844 845 + /* check for optional hwspinlock which may be not available yet */ 846 + ret = of_hwspin_lock_get_id(np, 0); 847 + if (ret == -EPROBE_DEFER) 848 + /* hwspinlock framework not yet ready */ 849 + return ret; 850 + 851 + if (ret >= 0) { 852 + host_data->hwlock = devm_hwspin_lock_request_specific(dev, ret); 853 + if (!host_data->hwlock) { 854 + dev_err(dev, "Failed to request hwspinlock\n"); 855 + return -EINVAL; 856 + } 857 + } else if (ret != -ENOENT) { 858 + /* note: ENOENT is a valid case (means 'no hwspinlock') */ 859 + dev_err(dev, "Failed to get hwspinlock\n"); 860 + return ret; 861 + } 862 + 863 + /* initialize host_data */ 864 + drv_data = of_device_get_match_data(dev); 865 + if (!drv_data) { 866 + dev_err(dev, "no of match data\n"); 867 + return -ENODEV; 868 + } 869 + host_data->drv_data = drv_data; 870 + 871 + host_data->chips_data = devm_kcalloc(dev, drv_data->bank_nr, 872 + sizeof(*host_data->chips_data), 873 + GFP_KERNEL); 874 + if (!host_data->chips_data) 875 + return -ENOMEM; 876 + 877 + res = platform_get_resource(pdev, 
IORESOURCE_MEM, 0); 878 + host_data->base = devm_ioremap_resource(dev, res); 879 + if (IS_ERR(host_data->base)) { 880 + dev_err(dev, "Unable to map registers\n"); 881 + return PTR_ERR(host_data->base); 882 + } 883 + 812 884 for (i = 0; i < drv_data->bank_nr; i++) 813 - stm32_exti_chip_init(host_data, i); 885 + stm32_exti_chip_init(host_data, i, np); 886 + 887 + parent_domain = irq_find_host(of_irq_find_parent(np)); 888 + if (!parent_domain) { 889 + dev_err(dev, "GIC interrupt-parent not found\n"); 890 + return -EINVAL; 891 + } 814 892 815 893 domain = irq_domain_add_hierarchy(parent_domain, 0, 816 894 drv_data->bank_nr * IRQS_PER_BANK, 817 - node, &stm32_exti_h_domain_ops, 895 + np, &stm32_exti_h_domain_ops, 818 896 host_data); 819 897 820 898 if (!domain) { 821 - pr_err("%pOFn: Could not register exti domain.\n", node); 822 - ret = -ENOMEM; 823 - goto out_unmap; 899 + dev_err(dev, "Could not register exti domain\n"); 900 + return -ENOMEM; 824 901 } 825 902 826 - stm32_exti_h_syscore_init(); 903 + ret = devm_add_action_or_reset(dev, stm32_exti_remove_irq, domain); 904 + if (ret) 905 + return ret; 906 + 907 + stm32_exti_h_syscore_init(host_data); 827 908 828 909 return 0; 829 - 830 - out_unmap: 831 - iounmap(host_data->base); 832 - kfree(host_data->chips_data); 833 - kfree(host_data); 834 - return ret; 835 910 } 836 911 912 + /* platform driver only for MP1 */ 913 + static const struct of_device_id stm32_exti_ids[] = { 914 + { .compatible = "st,stm32mp1-exti", .data = &stm32mp1_drv_data}, 915 + {}, 916 + }; 917 + MODULE_DEVICE_TABLE(of, stm32_exti_ids); 918 + 919 + static struct platform_driver stm32_exti_driver = { 920 + .probe = stm32_exti_probe, 921 + .remove = stm32_exti_remove, 922 + .driver = { 923 + .name = "stm32_exti", 924 + .of_match_table = stm32_exti_ids, 925 + }, 926 + }; 927 + 928 + static int __init stm32_exti_arch_init(void) 929 + { 930 + return platform_driver_register(&stm32_exti_driver); 931 + } 932 + 933 + static void __exit 
stm32_exti_arch_exit(void) 934 + { 935 + return platform_driver_unregister(&stm32_exti_driver); 936 + } 937 + 938 + arch_initcall(stm32_exti_arch_init); 939 + module_exit(stm32_exti_arch_exit); 940 + 941 + /* no platform driver for F4 and H7 */ 837 942 static int __init stm32f4_exti_of_init(struct device_node *np, 838 943 struct device_node *parent) 839 944 { ··· 929 874 } 930 875 931 876 IRQCHIP_DECLARE(stm32h7_exti, "st,stm32h7-exti", stm32h7_exti_of_init); 932 - 933 - static int __init stm32mp1_exti_of_init(struct device_node *np, 934 - struct device_node *parent) 935 - { 936 - return stm32_exti_hierarchy_init(&stm32mp1_drv_data, np, parent); 937 - } 938 - 939 - IRQCHIP_DECLARE(stm32mp1_exti, "st,stm32mp1-exti", stm32mp1_exti_of_init);
+615
drivers/irqchip/irq-ti-sci-inta.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Texas Instruments' K3 Interrupt Aggregator irqchip driver 4 + * 5 + * Copyright (C) 2018-2019 Texas Instruments Incorporated - http://www.ti.com/ 6 + * Lokesh Vutla <lokeshvutla@ti.com> 7 + */ 8 + 9 + #include <linux/err.h> 10 + #include <linux/io.h> 11 + #include <linux/irqchip.h> 12 + #include <linux/irqdomain.h> 13 + #include <linux/interrupt.h> 14 + #include <linux/msi.h> 15 + #include <linux/module.h> 16 + #include <linux/moduleparam.h> 17 + #include <linux/of_address.h> 18 + #include <linux/of_irq.h> 19 + #include <linux/of_platform.h> 20 + #include <linux/irqchip/chained_irq.h> 21 + #include <linux/soc/ti/ti_sci_inta_msi.h> 22 + #include <linux/soc/ti/ti_sci_protocol.h> 23 + #include <asm-generic/msi.h> 24 + 25 + #define TI_SCI_DEV_ID_MASK 0xffff 26 + #define TI_SCI_DEV_ID_SHIFT 16 27 + #define TI_SCI_IRQ_ID_MASK 0xffff 28 + #define TI_SCI_IRQ_ID_SHIFT 0 29 + #define HWIRQ_TO_DEVID(hwirq) (((hwirq) >> (TI_SCI_DEV_ID_SHIFT)) & \ 30 + (TI_SCI_DEV_ID_MASK)) 31 + #define HWIRQ_TO_IRQID(hwirq) ((hwirq) & (TI_SCI_IRQ_ID_MASK)) 32 + #define TO_HWIRQ(dev, index) ((((dev) & TI_SCI_DEV_ID_MASK) << \ 33 + TI_SCI_DEV_ID_SHIFT) | \ 34 + ((index) & TI_SCI_IRQ_ID_MASK)) 35 + 36 + #define MAX_EVENTS_PER_VINT 64 37 + #define VINT_ENABLE_SET_OFFSET 0x0 38 + #define VINT_ENABLE_CLR_OFFSET 0x8 39 + #define VINT_STATUS_OFFSET 0x18 40 + 41 + /** 42 + * struct ti_sci_inta_event_desc - Description of an event coming to 43 + * Interrupt Aggregator. This serves 44 + * as a mapping table for global event, 45 + * hwirq and vint bit. 46 + * @global_event: Global event number corresponding to this event 47 + * @hwirq: Hwirq of the incoming interrupt 48 + * @vint_bit: Corresponding vint bit to which this event is attached. 
49 + */ 50 + struct ti_sci_inta_event_desc { 51 + u16 global_event; 52 + u32 hwirq; 53 + u8 vint_bit; 54 + }; 55 + 56 + /** 57 + * struct ti_sci_inta_vint_desc - Description of a virtual interrupt coming out 58 + * of Interrupt Aggregator. 59 + * @domain: Pointer to IRQ domain to which this vint belongs. 60 + * @list: List entry for the vint list 61 + * @event_map: Bitmap to manage the allocation of events to vint. 62 + * @events: Array of event descriptors assigned to this vint. 63 + * @parent_virq: Linux IRQ number that gets attached to parent 64 + * @vint_id: TISCI vint ID 65 + */ 66 + struct ti_sci_inta_vint_desc { 67 + struct irq_domain *domain; 68 + struct list_head list; 69 + DECLARE_BITMAP(event_map, MAX_EVENTS_PER_VINT); 70 + struct ti_sci_inta_event_desc events[MAX_EVENTS_PER_VINT]; 71 + unsigned int parent_virq; 72 + u16 vint_id; 73 + }; 74 + 75 + /** 76 + * struct ti_sci_inta_irq_domain - Structure representing a TISCI based 77 + * Interrupt Aggregator IRQ domain. 78 + * @sci: Pointer to TISCI handle 79 + * @vint: TISCI resource pointer representing IA interrupts. 80 + * @global_event: TISCI resource pointer representing global events. 81 + * @vint_list: List of the vints active in the system 82 + * @vint_mutex: Mutex to protect vint_list 83 + * @base: Base address of the memory mapped IO registers 84 + * @pdev: Pointer to platform device. 
85 + */ 86 + struct ti_sci_inta_irq_domain { 87 + const struct ti_sci_handle *sci; 88 + struct ti_sci_resource *vint; 89 + struct ti_sci_resource *global_event; 90 + struct list_head vint_list; 91 + /* Mutex to protect vint list */ 92 + struct mutex vint_mutex; 93 + void __iomem *base; 94 + struct platform_device *pdev; 95 + }; 96 + 97 + #define to_vint_desc(e, i) container_of(e, struct ti_sci_inta_vint_desc, \ 98 + events[i]) 99 + 100 + /** 101 + * ti_sci_inta_irq_handler() - Chained IRQ handler for the vint irqs 102 + * @desc: Pointer to irq_desc corresponding to the irq 103 + */ 104 + static void ti_sci_inta_irq_handler(struct irq_desc *desc) 105 + { 106 + struct ti_sci_inta_vint_desc *vint_desc; 107 + struct ti_sci_inta_irq_domain *inta; 108 + struct irq_domain *domain; 109 + unsigned int virq, bit; 110 + unsigned long val; 111 + 112 + vint_desc = irq_desc_get_handler_data(desc); 113 + domain = vint_desc->domain; 114 + inta = domain->host_data; 115 + 116 + chained_irq_enter(irq_desc_get_chip(desc), desc); 117 + 118 + val = readq_relaxed(inta->base + vint_desc->vint_id * 0x1000 + 119 + VINT_STATUS_OFFSET); 120 + 121 + for_each_set_bit(bit, &val, MAX_EVENTS_PER_VINT) { 122 + virq = irq_find_mapping(domain, vint_desc->events[bit].hwirq); 123 + if (virq) 124 + generic_handle_irq(virq); 125 + } 126 + 127 + chained_irq_exit(irq_desc_get_chip(desc), desc); 128 + } 129 + 130 + /** 131 + * ti_sci_inta_alloc_parent_irq() - Allocate parent irq to Interrupt aggregator 132 + * @domain: IRQ domain corresponding to Interrupt Aggregator 133 + * 134 + * Return pointer to vint_desc if all went well else corresponding ERR_PTR value. 
135 + */ 136 + static struct ti_sci_inta_vint_desc *ti_sci_inta_alloc_parent_irq(struct irq_domain *domain) 137 + { 138 + struct ti_sci_inta_irq_domain *inta = domain->host_data; 139 + struct ti_sci_inta_vint_desc *vint_desc; 140 + struct irq_fwspec parent_fwspec; 141 + unsigned int parent_virq; 142 + u16 vint_id; 143 + 144 + vint_id = ti_sci_get_free_resource(inta->vint); 145 + if (vint_id == TI_SCI_RESOURCE_NULL) 146 + return ERR_PTR(-EINVAL); 147 + 148 + vint_desc = kzalloc(sizeof(*vint_desc), GFP_KERNEL); 149 + if (!vint_desc) 150 + return ERR_PTR(-ENOMEM); 151 + 152 + vint_desc->domain = domain; 153 + vint_desc->vint_id = vint_id; 154 + INIT_LIST_HEAD(&vint_desc->list); 155 + 156 + parent_fwspec.fwnode = of_node_to_fwnode(of_irq_find_parent(dev_of_node(&inta->pdev->dev))); 157 + parent_fwspec.param_count = 2; 158 + parent_fwspec.param[0] = inta->pdev->id; 159 + parent_fwspec.param[1] = vint_desc->vint_id; 160 + 161 + parent_virq = irq_create_fwspec_mapping(&parent_fwspec); 162 + if (parent_virq <= 0) { 163 + kfree(vint_desc); 164 + return ERR_PTR(parent_virq); 165 + } 166 + vint_desc->parent_virq = parent_virq; 167 + 168 + list_add_tail(&vint_desc->list, &inta->vint_list); 169 + irq_set_chained_handler_and_data(vint_desc->parent_virq, 170 + ti_sci_inta_irq_handler, vint_desc); 171 + 172 + return vint_desc; 173 + } 174 + 175 + /** 176 + * ti_sci_inta_alloc_event() - Attach an event to a IA vint. 177 + * @vint_desc: Pointer to vint_desc to which the event gets attached 178 + * @free_bit: Bit inside vint to which event gets attached 179 + * @hwirq: hwirq of the input event 180 + * 181 + * Return event_desc pointer if all went ok else appropriate error value. 
182 + */ 183 + static struct ti_sci_inta_event_desc *ti_sci_inta_alloc_event(struct ti_sci_inta_vint_desc *vint_desc, 184 + u16 free_bit, 185 + u32 hwirq) 186 + { 187 + struct ti_sci_inta_irq_domain *inta = vint_desc->domain->host_data; 188 + struct ti_sci_inta_event_desc *event_desc; 189 + u16 dev_id, dev_index; 190 + int err; 191 + 192 + dev_id = HWIRQ_TO_DEVID(hwirq); 193 + dev_index = HWIRQ_TO_IRQID(hwirq); 194 + 195 + event_desc = &vint_desc->events[free_bit]; 196 + event_desc->hwirq = hwirq; 197 + event_desc->vint_bit = free_bit; 198 + event_desc->global_event = ti_sci_get_free_resource(inta->global_event); 199 + if (event_desc->global_event == TI_SCI_RESOURCE_NULL) 200 + return ERR_PTR(-EINVAL); 201 + 202 + err = inta->sci->ops.rm_irq_ops.set_event_map(inta->sci, 203 + dev_id, dev_index, 204 + inta->pdev->id, 205 + vint_desc->vint_id, 206 + event_desc->global_event, 207 + free_bit); 208 + if (err) 209 + goto free_global_event; 210 + 211 + return event_desc; 212 + free_global_event: 213 + ti_sci_release_resource(inta->global_event, event_desc->global_event); 214 + return ERR_PTR(err); 215 + } 216 + 217 + /** 218 + * ti_sci_inta_alloc_irq() - Allocate an irq within INTA domain 219 + * @domain: irq_domain pointer corresponding to INTA 220 + * @hwirq: hwirq of the input event 221 + * 222 + * Note: Allocation happens in the following manner: 223 + * - Find a free bit available in any of the vints available in the list. 224 + * - If not found, allocate a vint from the vint pool 225 + * - Attach the free bit to input hwirq. 226 + * Return event_desc if all went ok else appropriate error value. 
227 + */ 228 + static struct ti_sci_inta_event_desc *ti_sci_inta_alloc_irq(struct irq_domain *domain, 229 + u32 hwirq) 230 + { 231 + struct ti_sci_inta_irq_domain *inta = domain->host_data; 232 + struct ti_sci_inta_vint_desc *vint_desc = NULL; 233 + struct ti_sci_inta_event_desc *event_desc; 234 + u16 free_bit; 235 + 236 + mutex_lock(&inta->vint_mutex); 237 + list_for_each_entry(vint_desc, &inta->vint_list, list) { 238 + free_bit = find_first_zero_bit(vint_desc->event_map, 239 + MAX_EVENTS_PER_VINT); 240 + if (free_bit != MAX_EVENTS_PER_VINT) { 241 + set_bit(free_bit, vint_desc->event_map); 242 + goto alloc_event; 243 + } 244 + } 245 + 246 + /* No free bits available. Allocate a new vint */ 247 + vint_desc = ti_sci_inta_alloc_parent_irq(domain); 248 + if (IS_ERR(vint_desc)) { 249 + mutex_unlock(&inta->vint_mutex); 250 + return ERR_PTR(PTR_ERR(vint_desc)); 251 + } 252 + 253 + free_bit = find_first_zero_bit(vint_desc->event_map, 254 + MAX_EVENTS_PER_VINT); 255 + set_bit(free_bit, vint_desc->event_map); 256 + 257 + alloc_event: 258 + event_desc = ti_sci_inta_alloc_event(vint_desc, free_bit, hwirq); 259 + if (IS_ERR(event_desc)) 260 + clear_bit(free_bit, vint_desc->event_map); 261 + 262 + mutex_unlock(&inta->vint_mutex); 263 + return event_desc; 264 + } 265 + 266 + /** 267 + * ti_sci_inta_free_parent_irq() - Free a parent irq to INTA 268 + * @inta: Pointer to inta domain. 269 + * @vint_desc: Pointer to vint_desc that needs to be freed. 
270 + */ 271 + static void ti_sci_inta_free_parent_irq(struct ti_sci_inta_irq_domain *inta, 272 + struct ti_sci_inta_vint_desc *vint_desc) 273 + { 274 + if (find_first_bit(vint_desc->event_map, MAX_EVENTS_PER_VINT) == MAX_EVENTS_PER_VINT) { 275 + list_del(&vint_desc->list); 276 + ti_sci_release_resource(inta->vint, vint_desc->vint_id); 277 + irq_dispose_mapping(vint_desc->parent_virq); 278 + kfree(vint_desc); 279 + } 280 + } 281 + 282 + /** 283 + * ti_sci_inta_free_irq() - Free an IRQ within INTA domain 284 + * @event_desc: Pointer to event_desc that needs to be freed. 285 + * @hwirq: Hwirq number within INTA domain that needs to be freed 286 + */ 287 + static void ti_sci_inta_free_irq(struct ti_sci_inta_event_desc *event_desc, 288 + u32 hwirq) 289 + { 290 + struct ti_sci_inta_vint_desc *vint_desc; 291 + struct ti_sci_inta_irq_domain *inta; 292 + 293 + vint_desc = to_vint_desc(event_desc, event_desc->vint_bit); 294 + inta = vint_desc->domain->host_data; 295 + /* free event irq */ 296 + mutex_lock(&inta->vint_mutex); 297 + inta->sci->ops.rm_irq_ops.free_event_map(inta->sci, 298 + HWIRQ_TO_DEVID(hwirq), 299 + HWIRQ_TO_IRQID(hwirq), 300 + inta->pdev->id, 301 + vint_desc->vint_id, 302 + event_desc->global_event, 303 + event_desc->vint_bit); 304 + 305 + clear_bit(event_desc->vint_bit, vint_desc->event_map); 306 + ti_sci_release_resource(inta->global_event, event_desc->global_event); 307 + event_desc->global_event = TI_SCI_RESOURCE_NULL; 308 + event_desc->hwirq = 0; 309 + 310 + ti_sci_inta_free_parent_irq(inta, vint_desc); 311 + mutex_unlock(&inta->vint_mutex); 312 + } 313 + 314 + /** 315 + * ti_sci_inta_request_resources() - Allocate resources for input irq 316 + * @data: Pointer to corresponding irq_data 317 + * 318 + * Note: This is the core api where the actual allocation happens for input 319 + * hwirq. This allocation involves creating a parent irq for vint. 320 + * If this is done in irq_domain_ops.alloc() then a deadlock is reached 321 + * for allocation. 
So this allocation is being done in request_resources() 322 + * 323 + * Return: 0 if all went well else corresponding error. 324 + */ 325 + static int ti_sci_inta_request_resources(struct irq_data *data) 326 + { 327 + struct ti_sci_inta_event_desc *event_desc; 328 + 329 + event_desc = ti_sci_inta_alloc_irq(data->domain, data->hwirq); 330 + if (IS_ERR(event_desc)) 331 + return PTR_ERR(event_desc); 332 + 333 + data->chip_data = event_desc; 334 + 335 + return 0; 336 + } 337 + 338 + /** 339 + * ti_sci_inta_release_resources - Release resources for input irq 340 + * @data: Pointer to corresponding irq_data 341 + * 342 + * Note: Corresponding to request_resources(), all the unmapping and deletion 343 + * of parent vint irqs happens in this api. 344 + */ 345 + static void ti_sci_inta_release_resources(struct irq_data *data) 346 + { 347 + struct ti_sci_inta_event_desc *event_desc; 348 + 349 + event_desc = irq_data_get_irq_chip_data(data); 350 + ti_sci_inta_free_irq(event_desc, data->hwirq); 351 + } 352 + 353 + /** 354 + * ti_sci_inta_manage_event() - Control the event based on the offset 355 + * @data: Pointer to corresponding irq_data 356 + * @offset: register offset using which event is controlled. 
357 + */ 358 + static void ti_sci_inta_manage_event(struct irq_data *data, u32 offset) 359 + { 360 + struct ti_sci_inta_event_desc *event_desc; 361 + struct ti_sci_inta_vint_desc *vint_desc; 362 + struct ti_sci_inta_irq_domain *inta; 363 + 364 + event_desc = irq_data_get_irq_chip_data(data); 365 + vint_desc = to_vint_desc(event_desc, event_desc->vint_bit); 366 + inta = data->domain->host_data; 367 + 368 + writeq_relaxed(BIT(event_desc->vint_bit), 369 + inta->base + vint_desc->vint_id * 0x1000 + offset); 370 + } 371 + 372 + /** 373 + * ti_sci_inta_mask_irq() - Mask an event 374 + * @data: Pointer to corresponding irq_data 375 + */ 376 + static void ti_sci_inta_mask_irq(struct irq_data *data) 377 + { 378 + ti_sci_inta_manage_event(data, VINT_ENABLE_CLR_OFFSET); 379 + } 380 + 381 + /** 382 + * ti_sci_inta_unmask_irq() - Unmask an event 383 + * @data: Pointer to corresponding irq_data 384 + */ 385 + static void ti_sci_inta_unmask_irq(struct irq_data *data) 386 + { 387 + ti_sci_inta_manage_event(data, VINT_ENABLE_SET_OFFSET); 388 + } 389 + 390 + /** 391 + * ti_sci_inta_ack_irq() - Ack an event 392 + * @data: Pointer to corresponding irq_data 393 + */ 394 + static void ti_sci_inta_ack_irq(struct irq_data *data) 395 + { 396 + /* 397 + * Do not clear the event if hardware is capable of sending 398 + * a down event. 399 + */ 400 + if (irqd_get_trigger_type(data) != IRQF_TRIGGER_HIGH) 401 + ti_sci_inta_manage_event(data, VINT_STATUS_OFFSET); 402 + } 403 + 404 + static int ti_sci_inta_set_affinity(struct irq_data *d, 405 + const struct cpumask *mask_val, bool force) 406 + { 407 + return -EINVAL; 408 + } 409 + 410 + /** 411 + * ti_sci_inta_set_type() - Update the trigger type of the irq. 412 + * @data: Pointer to corresponding irq_data 413 + * @type: Trigger type as specified by user 414 + * 415 + * Note: This updates the handle_irq callback for level msi. 416 + * 417 + * Return 0 if all went well else appropriate error. 
418 + */ 419 + static int ti_sci_inta_set_type(struct irq_data *data, unsigned int type) 420 + { 421 + /* 422 + * .alloc default sets handle_edge_irq. But if the user specifies 423 + * that IRQ is level MSI, then update the handle to handle_level_irq 424 + */ 425 + switch (type & IRQ_TYPE_SENSE_MASK) { 426 + case IRQF_TRIGGER_HIGH: 427 + irq_set_handler_locked(data, handle_level_irq); 428 + return 0; 429 + case IRQF_TRIGGER_RISING: 430 + return 0; 431 + default: 432 + return -EINVAL; 433 + } 434 + 435 + return -EINVAL; 436 + } 437 + 438 + static struct irq_chip ti_sci_inta_irq_chip = { 439 + .name = "INTA", 440 + .irq_ack = ti_sci_inta_ack_irq, 441 + .irq_mask = ti_sci_inta_mask_irq, 442 + .irq_set_type = ti_sci_inta_set_type, 443 + .irq_unmask = ti_sci_inta_unmask_irq, 444 + .irq_set_affinity = ti_sci_inta_set_affinity, 445 + .irq_request_resources = ti_sci_inta_request_resources, 446 + .irq_release_resources = ti_sci_inta_release_resources, 447 + }; 448 + 449 + /** 450 + * ti_sci_inta_irq_domain_free() - Free an IRQ from the IRQ domain 451 + * @domain: Domain to which the irqs belong 452 + * @virq: base linux virtual IRQ to be freed. 453 + * @nr_irqs: Number of continuous irqs to be freed 454 + */ 455 + static void ti_sci_inta_irq_domain_free(struct irq_domain *domain, 456 + unsigned int virq, unsigned int nr_irqs) 457 + { 458 + struct irq_data *data = irq_domain_get_irq_data(domain, virq); 459 + 460 + irq_domain_reset_irq_data(data); 461 + } 462 + 463 + /** 464 + * ti_sci_inta_irq_domain_alloc() - Allocate Interrupt aggregator IRQs 465 + * @domain: Point to the interrupt aggregator IRQ domain 466 + * @virq: Corresponding Linux virtual IRQ number 467 + * @nr_irqs: Continuous irqs to be allocated 468 + * @data: Pointer to firmware specifier 469 + * 470 + * No actual allocation happens here. 471 + * 472 + * Return 0 if all went well else appropriate error value. 
473 + */ 474 + static int ti_sci_inta_irq_domain_alloc(struct irq_domain *domain, 475 + unsigned int virq, unsigned int nr_irqs, 476 + void *data) 477 + { 478 + msi_alloc_info_t *arg = data; 479 + 480 + irq_domain_set_info(domain, virq, arg->hwirq, &ti_sci_inta_irq_chip, 481 + NULL, handle_edge_irq, NULL, NULL); 482 + 483 + return 0; 484 + } 485 + 486 + static const struct irq_domain_ops ti_sci_inta_irq_domain_ops = { 487 + .free = ti_sci_inta_irq_domain_free, 488 + .alloc = ti_sci_inta_irq_domain_alloc, 489 + }; 490 + 491 + static struct irq_chip ti_sci_inta_msi_irq_chip = { 492 + .name = "MSI-INTA", 493 + .flags = IRQCHIP_SUPPORTS_LEVEL_MSI, 494 + }; 495 + 496 + static void ti_sci_inta_msi_set_desc(msi_alloc_info_t *arg, 497 + struct msi_desc *desc) 498 + { 499 + struct platform_device *pdev = to_platform_device(desc->dev); 500 + 501 + arg->desc = desc; 502 + arg->hwirq = TO_HWIRQ(pdev->id, desc->inta.dev_index); 503 + } 504 + 505 + static struct msi_domain_ops ti_sci_inta_msi_ops = { 506 + .set_desc = ti_sci_inta_msi_set_desc, 507 + }; 508 + 509 + static struct msi_domain_info ti_sci_inta_msi_domain_info = { 510 + .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS | 511 + MSI_FLAG_LEVEL_CAPABLE), 512 + .ops = &ti_sci_inta_msi_ops, 513 + .chip = &ti_sci_inta_msi_irq_chip, 514 + }; 515 + 516 + static int ti_sci_inta_irq_domain_probe(struct platform_device *pdev) 517 + { 518 + struct irq_domain *parent_domain, *domain, *msi_domain; 519 + struct device_node *parent_node, *node; 520 + struct ti_sci_inta_irq_domain *inta; 521 + struct device *dev = &pdev->dev; 522 + struct resource *res; 523 + int ret; 524 + 525 + node = dev_of_node(dev); 526 + parent_node = of_irq_find_parent(node); 527 + if (!parent_node) { 528 + dev_err(dev, "Failed to get IRQ parent node\n"); 529 + return -ENODEV; 530 + } 531 + 532 + parent_domain = irq_find_host(parent_node); 533 + if (!parent_domain) 534 + return -EPROBE_DEFER; 535 + 536 + inta = devm_kzalloc(dev, sizeof(*inta), 
GFP_KERNEL); 537 + if (!inta) 538 + return -ENOMEM; 539 + 540 + inta->pdev = pdev; 541 + inta->sci = devm_ti_sci_get_by_phandle(dev, "ti,sci"); 542 + if (IS_ERR(inta->sci)) { 543 + ret = PTR_ERR(inta->sci); 544 + if (ret != -EPROBE_DEFER) 545 + dev_err(dev, "ti,sci read fail %d\n", ret); 546 + inta->sci = NULL; 547 + return ret; 548 + } 549 + 550 + ret = of_property_read_u32(dev->of_node, "ti,sci-dev-id", &pdev->id); 551 + if (ret) { 552 + dev_err(dev, "missing 'ti,sci-dev-id' property\n"); 553 + return -EINVAL; 554 + } 555 + 556 + inta->vint = devm_ti_sci_get_of_resource(inta->sci, dev, pdev->id, 557 + "ti,sci-rm-range-vint"); 558 + if (IS_ERR(inta->vint)) { 559 + dev_err(dev, "VINT resource allocation failed\n"); 560 + return PTR_ERR(inta->vint); 561 + } 562 + 563 + inta->global_event = devm_ti_sci_get_of_resource(inta->sci, dev, pdev->id, 564 + "ti,sci-rm-range-global-event"); 565 + if (IS_ERR(inta->global_event)) { 566 + dev_err(dev, "Global event resource allocation failed\n"); 567 + return PTR_ERR(inta->global_event); 568 + } 569 + 570 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 571 + inta->base = devm_ioremap_resource(dev, res); 572 + if (IS_ERR(inta->base)) 573 + return -ENODEV; 574 + 575 + domain = irq_domain_add_linear(dev_of_node(dev), 576 + ti_sci_get_num_resources(inta->vint), 577 + &ti_sci_inta_irq_domain_ops, inta); 578 + if (!domain) { 579 + dev_err(dev, "Failed to allocate IRQ domain\n"); 580 + return -ENOMEM; 581 + } 582 + 583 + msi_domain = ti_sci_inta_msi_create_irq_domain(of_node_to_fwnode(node), 584 + &ti_sci_inta_msi_domain_info, 585 + domain); 586 + if (!msi_domain) { 587 + irq_domain_remove(domain); 588 + dev_err(dev, "Failed to allocate msi domain\n"); 589 + return -ENOMEM; 590 + } 591 + 592 + INIT_LIST_HEAD(&inta->vint_list); 593 + mutex_init(&inta->vint_mutex); 594 + 595 + return 0; 596 + } 597 + 598 + static const struct of_device_id ti_sci_inta_irq_domain_of_match[] = { 599 + { .compatible = "ti,sci-inta", }, 600 + { /* 
sentinel */ }, 601 + }; 602 + MODULE_DEVICE_TABLE(of, ti_sci_inta_irq_domain_of_match); 603 + 604 + static struct platform_driver ti_sci_inta_irq_domain_driver = { 605 + .probe = ti_sci_inta_irq_domain_probe, 606 + .driver = { 607 + .name = "ti-sci-inta", 608 + .of_match_table = ti_sci_inta_irq_domain_of_match, 609 + }, 610 + }; 611 + module_platform_driver(ti_sci_inta_irq_domain_driver); 612 + 613 + MODULE_AUTHOR("Lokesh Vutla <lokeshvutla@ti.com>"); 614 + MODULE_DESCRIPTION("K3 Interrupt Aggregator driver over TI SCI protocol"); 615 + MODULE_LICENSE("GPL v2");
+275
drivers/irqchip/irq-ti-sci-intr.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Texas Instruments' K3 Interrupt Router irqchip driver 4 + * 5 + * Copyright (C) 2018-2019 Texas Instruments Incorporated - http://www.ti.com/ 6 + * Lokesh Vutla <lokeshvutla@ti.com> 7 + */ 8 + 9 + #include <linux/err.h> 10 + #include <linux/module.h> 11 + #include <linux/moduleparam.h> 12 + #include <linux/io.h> 13 + #include <linux/irqchip.h> 14 + #include <linux/irqdomain.h> 15 + #include <linux/of_platform.h> 16 + #include <linux/of_address.h> 17 + #include <linux/of_irq.h> 18 + #include <linux/soc/ti/ti_sci_protocol.h> 19 + 20 + #define TI_SCI_DEV_ID_MASK 0xffff 21 + #define TI_SCI_DEV_ID_SHIFT 16 22 + #define TI_SCI_IRQ_ID_MASK 0xffff 23 + #define TI_SCI_IRQ_ID_SHIFT 0 24 + #define HWIRQ_TO_DEVID(hwirq) (((hwirq) >> (TI_SCI_DEV_ID_SHIFT)) & \ 25 + (TI_SCI_DEV_ID_MASK)) 26 + #define HWIRQ_TO_IRQID(hwirq) ((hwirq) & (TI_SCI_IRQ_ID_MASK)) 27 + #define TO_HWIRQ(dev, index) ((((dev) & TI_SCI_DEV_ID_MASK) << \ 28 + TI_SCI_DEV_ID_SHIFT) | \ 29 + ((index) & TI_SCI_IRQ_ID_MASK)) 30 + 31 + /** 32 + * struct ti_sci_intr_irq_domain - Structure representing a TISCI based 33 + * Interrupt Router IRQ domain. 34 + * @sci: Pointer to TISCI handle 35 + * @dst_irq: TISCI resource pointer representing GIC irq controller. 36 + * @dst_id: TISCI device ID of the GIC irq controller. 
37 + * @type: Specifies the trigger type supported by this Interrupt Router 38 + */ 39 + struct ti_sci_intr_irq_domain { 40 + const struct ti_sci_handle *sci; 41 + struct ti_sci_resource *dst_irq; 42 + u32 dst_id; 43 + u32 type; 44 + }; 45 + 46 + static struct irq_chip ti_sci_intr_irq_chip = { 47 + .name = "INTR", 48 + .irq_eoi = irq_chip_eoi_parent, 49 + .irq_mask = irq_chip_mask_parent, 50 + .irq_unmask = irq_chip_unmask_parent, 51 + .irq_set_type = irq_chip_set_type_parent, 52 + .irq_retrigger = irq_chip_retrigger_hierarchy, 53 + .irq_set_affinity = irq_chip_set_affinity_parent, 54 + }; 55 + 56 + /** 57 + * ti_sci_intr_irq_domain_translate() - Retrieve hwirq and type from 58 + * IRQ firmware specific handler. 59 + * @domain: Pointer to IRQ domain 60 + * @fwspec: Pointer to IRQ specific firmware structure 61 + * @hwirq: IRQ number identified by hardware 62 + * @type: IRQ type 63 + * 64 + * Return 0 if all went ok else appropriate error. 65 + */ 66 + static int ti_sci_intr_irq_domain_translate(struct irq_domain *domain, 67 + struct irq_fwspec *fwspec, 68 + unsigned long *hwirq, 69 + unsigned int *type) 70 + { 71 + struct ti_sci_intr_irq_domain *intr = domain->host_data; 72 + 73 + if (fwspec->param_count != 2) 74 + return -EINVAL; 75 + 76 + *hwirq = TO_HWIRQ(fwspec->param[0], fwspec->param[1]); 77 + *type = intr->type; 78 + 79 + return 0; 80 + } 81 + 82 + /** 83 + * ti_sci_intr_irq_domain_free() - Free the specified IRQs from the domain. 84 + * @domain: Domain to which the irqs belong 85 + * @virq: Linux virtual IRQ to be freed. 
86 + * @nr_irqs: Number of continuous irqs to be freed 87 + */ 88 + static void ti_sci_intr_irq_domain_free(struct irq_domain *domain, 89 + unsigned int virq, unsigned int nr_irqs) 90 + { 91 + struct ti_sci_intr_irq_domain *intr = domain->host_data; 92 + struct irq_data *data, *parent_data; 93 + u16 dev_id, irq_index; 94 + 95 + parent_data = irq_domain_get_irq_data(domain->parent, virq); 96 + data = irq_domain_get_irq_data(domain, virq); 97 + irq_index = HWIRQ_TO_IRQID(data->hwirq); 98 + dev_id = HWIRQ_TO_DEVID(data->hwirq); 99 + 100 + intr->sci->ops.rm_irq_ops.free_irq(intr->sci, dev_id, irq_index, 101 + intr->dst_id, parent_data->hwirq); 102 + ti_sci_release_resource(intr->dst_irq, parent_data->hwirq); 103 + irq_domain_free_irqs_parent(domain, virq, 1); 104 + irq_domain_reset_irq_data(data); 105 + } 106 + 107 + /** 108 + * ti_sci_intr_alloc_gic_irq() - Allocate GIC specific IRQ 109 + * @domain: Pointer to the interrupt router IRQ domain 110 + * @virq: Corresponding Linux virtual IRQ number 111 + * @hwirq: Corresponding hwirq for the IRQ within this IRQ domain 112 + * 113 + * Returns 0 if all went well else appropriate error value. 
114 + */ 115 + static int ti_sci_intr_alloc_gic_irq(struct irq_domain *domain, 116 + unsigned int virq, u32 hwirq) 117 + { 118 + struct ti_sci_intr_irq_domain *intr = domain->host_data; 119 + struct irq_fwspec fwspec; 120 + u16 dev_id, irq_index; 121 + u16 dst_irq; 122 + int err; 123 + 124 + dev_id = HWIRQ_TO_DEVID(hwirq); 125 + irq_index = HWIRQ_TO_IRQID(hwirq); 126 + 127 + dst_irq = ti_sci_get_free_resource(intr->dst_irq); 128 + if (dst_irq == TI_SCI_RESOURCE_NULL) 129 + return -EINVAL; 130 + 131 + fwspec.fwnode = domain->parent->fwnode; 132 + fwspec.param_count = 3; 133 + fwspec.param[0] = 0; /* SPI */ 134 + fwspec.param[1] = dst_irq - 32; /* SPI offset */ 135 + fwspec.param[2] = intr->type; 136 + 137 + err = irq_domain_alloc_irqs_parent(domain, virq, 1, &fwspec); 138 + if (err) 139 + goto err_irqs; 140 + 141 + err = intr->sci->ops.rm_irq_ops.set_irq(intr->sci, dev_id, irq_index, 142 + intr->dst_id, dst_irq); 143 + if (err) 144 + goto err_msg; 145 + 146 + return 0; 147 + 148 + err_msg: 149 + irq_domain_free_irqs_parent(domain, virq, 1); 150 + err_irqs: 151 + ti_sci_release_resource(intr->dst_irq, dst_irq); 152 + return err; 153 + } 154 + 155 + /** 156 + * ti_sci_intr_irq_domain_alloc() - Allocate Interrupt router IRQs 157 + * @domain: Point to the interrupt router IRQ domain 158 + * @virq: Corresponding Linux virtual IRQ number 159 + * @nr_irqs: Continuous irqs to be allocated 160 + * @data: Pointer to firmware specifier 161 + * 162 + * Return 0 if all went well else appropriate error value. 
163 + */ 164 + static int ti_sci_intr_irq_domain_alloc(struct irq_domain *domain, 165 + unsigned int virq, unsigned int nr_irqs, 166 + void *data) 167 + { 168 + struct irq_fwspec *fwspec = data; 169 + unsigned long hwirq; 170 + unsigned int flags; 171 + int err; 172 + 173 + err = ti_sci_intr_irq_domain_translate(domain, fwspec, &hwirq, &flags); 174 + if (err) 175 + return err; 176 + 177 + err = ti_sci_intr_alloc_gic_irq(domain, virq, hwirq); 178 + if (err) 179 + return err; 180 + 181 + irq_domain_set_hwirq_and_chip(domain, virq, hwirq, 182 + &ti_sci_intr_irq_chip, NULL); 183 + 184 + return 0; 185 + } 186 + 187 + static const struct irq_domain_ops ti_sci_intr_irq_domain_ops = { 188 + .free = ti_sci_intr_irq_domain_free, 189 + .alloc = ti_sci_intr_irq_domain_alloc, 190 + .translate = ti_sci_intr_irq_domain_translate, 191 + }; 192 + 193 + static int ti_sci_intr_irq_domain_probe(struct platform_device *pdev) 194 + { 195 + struct irq_domain *parent_domain, *domain; 196 + struct ti_sci_intr_irq_domain *intr; 197 + struct device_node *parent_node; 198 + struct device *dev = &pdev->dev; 199 + int ret; 200 + 201 + parent_node = of_irq_find_parent(dev_of_node(dev)); 202 + if (!parent_node) { 203 + dev_err(dev, "Failed to get IRQ parent node\n"); 204 + return -ENODEV; 205 + } 206 + 207 + parent_domain = irq_find_host(parent_node); 208 + if (!parent_domain) { 209 + dev_err(dev, "Failed to find IRQ parent domain\n"); 210 + return -ENODEV; 211 + } 212 + 213 + intr = devm_kzalloc(dev, sizeof(*intr), GFP_KERNEL); 214 + if (!intr) 215 + return -ENOMEM; 216 + 217 + ret = of_property_read_u32(dev_of_node(dev), "ti,intr-trigger-type", 218 + &intr->type); 219 + if (ret) { 220 + dev_err(dev, "missing ti,intr-trigger-type property\n"); 221 + return -EINVAL; 222 + } 223 + 224 + intr->sci = devm_ti_sci_get_by_phandle(dev, "ti,sci"); 225 + if (IS_ERR(intr->sci)) { 226 + ret = PTR_ERR(intr->sci); 227 + if (ret != -EPROBE_DEFER) 228 + dev_err(dev, "ti,sci read fail %d\n", ret); 229 + 
intr->sci = NULL; 230 + return ret; 231 + } 232 + 233 + ret = of_property_read_u32(dev_of_node(dev), "ti,sci-dst-id", 234 + &intr->dst_id); 235 + if (ret) { 236 + dev_err(dev, "missing 'ti,sci-dst-id' property\n"); 237 + return -EINVAL; 238 + } 239 + 240 + intr->dst_irq = devm_ti_sci_get_of_resource(intr->sci, dev, 241 + intr->dst_id, 242 + "ti,sci-rm-range-girq"); 243 + if (IS_ERR(intr->dst_irq)) { 244 + dev_err(dev, "Destination irq resource allocation failed\n"); 245 + return PTR_ERR(intr->dst_irq); 246 + } 247 + 248 + domain = irq_domain_add_hierarchy(parent_domain, 0, 0, dev_of_node(dev), 249 + &ti_sci_intr_irq_domain_ops, intr); 250 + if (!domain) { 251 + dev_err(dev, "Failed to allocate IRQ domain\n"); 252 + return -ENOMEM; 253 + } 254 + 255 + return 0; 256 + } 257 + 258 + static const struct of_device_id ti_sci_intr_irq_domain_of_match[] = { 259 + { .compatible = "ti,sci-intr", }, 260 + { /* sentinel */ }, 261 + }; 262 + MODULE_DEVICE_TABLE(of, ti_sci_intr_irq_domain_of_match); 263 + 264 + static struct platform_driver ti_sci_intr_irq_domain_driver = { 265 + .probe = ti_sci_intr_irq_domain_probe, 266 + .driver = { 267 + .name = "ti-sci-intr", 268 + .of_match_table = ti_sci_intr_irq_domain_of_match, 269 + }, 270 + }; 271 + module_platform_driver(ti_sci_intr_irq_domain_driver); 272 + 273 + MODULE_AUTHOR("Lokesh Vutla <lokeshvutla@ti.com>"); 274 + MODULE_DESCRIPTION("K3 Interrupt Router driver over TI SCI protocol"); 275 + MODULE_LICENSE("GPL v2");
+6
drivers/soc/ti/Kconfig
··· 74 74 called ti_sci_pm_domains. Note this is needed early in boot before 75 75 rootfs may be available. 76 76 77 + config TI_SCI_INTA_MSI_DOMAIN 78 + bool 79 + select GENERIC_MSI_IRQ_DOMAIN 80 + help 81 + Driver to enable Interrupt Aggregator specific MSI Domain. 82 + 77 83 endif # SOC_TI
+1
drivers/soc/ti/Makefile
··· 8 8 obj-$(CONFIG_AMX3_PM) += pm33xx.o 9 9 obj-$(CONFIG_WKUP_M3_IPC) += wkup_m3_ipc.o 10 10 obj-$(CONFIG_TI_SCI_PM_DOMAINS) += ti_sci_pm_domains.o 11 + obj-$(CONFIG_TI_SCI_INTA_MSI_DOMAIN) += ti_sci_inta_msi.o
+146
drivers/soc/ti/ti_sci_inta_msi.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Texas Instruments' K3 Interrupt Aggregator MSI bus 4 + * 5 + * Copyright (C) 2018-2019 Texas Instruments Incorporated - http://www.ti.com/ 6 + * Lokesh Vutla <lokeshvutla@ti.com> 7 + */ 8 + 9 + #include <linux/irq.h> 10 + #include <linux/irqdomain.h> 11 + #include <linux/msi.h> 12 + #include <linux/of_address.h> 13 + #include <linux/of_device.h> 14 + #include <linux/of_irq.h> 15 + #include <linux/soc/ti/ti_sci_inta_msi.h> 16 + #include <linux/soc/ti/ti_sci_protocol.h> 17 + 18 + static void ti_sci_inta_msi_write_msg(struct irq_data *data, 19 + struct msi_msg *msg) 20 + { 21 + /* Nothing to do */ 22 + } 23 + 24 + static void ti_sci_inta_msi_compose_msi_msg(struct irq_data *data, 25 + struct msi_msg *msg) 26 + { 27 + /* Nothing to do */ 28 + } 29 + 30 + static void ti_sci_inta_msi_update_chip_ops(struct msi_domain_info *info) 31 + { 32 + struct irq_chip *chip = info->chip; 33 + 34 + if (WARN_ON(!chip)) 35 + return; 36 + 37 + chip->irq_request_resources = irq_chip_request_resources_parent; 38 + chip->irq_release_resources = irq_chip_release_resources_parent; 39 + chip->irq_compose_msi_msg = ti_sci_inta_msi_compose_msi_msg; 40 + chip->irq_write_msi_msg = ti_sci_inta_msi_write_msg; 41 + chip->irq_set_type = irq_chip_set_type_parent; 42 + chip->irq_unmask = irq_chip_unmask_parent; 43 + chip->irq_mask = irq_chip_mask_parent; 44 + chip->irq_ack = irq_chip_ack_parent; 45 + } 46 + 47 + struct irq_domain *ti_sci_inta_msi_create_irq_domain(struct fwnode_handle *fwnode, 48 + struct msi_domain_info *info, 49 + struct irq_domain *parent) 50 + { 51 + struct irq_domain *domain; 52 + 53 + ti_sci_inta_msi_update_chip_ops(info); 54 + 55 + domain = msi_create_irq_domain(fwnode, info, parent); 56 + if (domain) 57 + irq_domain_update_bus_token(domain, DOMAIN_BUS_TI_SCI_INTA_MSI); 58 + 59 + return domain; 60 + } 61 + EXPORT_SYMBOL_GPL(ti_sci_inta_msi_create_irq_domain); 62 + 63 + static void ti_sci_inta_msi_free_descs(struct device 
*dev) 64 + { 65 + struct msi_desc *desc, *tmp; 66 + 67 + list_for_each_entry_safe(desc, tmp, dev_to_msi_list(dev), list) { 68 + list_del(&desc->list); 69 + free_msi_entry(desc); 70 + } 71 + } 72 + 73 + static int ti_sci_inta_msi_alloc_descs(struct device *dev, 74 + struct ti_sci_resource *res) 75 + { 76 + struct msi_desc *msi_desc; 77 + int set, i, count = 0; 78 + 79 + for (set = 0; set < res->sets; set++) { 80 + for (i = 0; i < res->desc[set].num; i++) { 81 + msi_desc = alloc_msi_entry(dev, 1, NULL); 82 + if (!msi_desc) { 83 + ti_sci_inta_msi_free_descs(dev); 84 + return -ENOMEM; 85 + } 86 + 87 + msi_desc->inta.dev_index = res->desc[set].start + i; 88 + INIT_LIST_HEAD(&msi_desc->list); 89 + list_add_tail(&msi_desc->list, dev_to_msi_list(dev)); 90 + count++; 91 + } 92 + } 93 + 94 + return count; 95 + } 96 + 97 + int ti_sci_inta_msi_domain_alloc_irqs(struct device *dev, 98 + struct ti_sci_resource *res) 99 + { 100 + struct platform_device *pdev = to_platform_device(dev); 101 + struct irq_domain *msi_domain; 102 + int ret, nvec; 103 + 104 + msi_domain = dev_get_msi_domain(dev); 105 + if (!msi_domain) 106 + return -EINVAL; 107 + 108 + if (pdev->id < 0) 109 + return -ENODEV; 110 + 111 + nvec = ti_sci_inta_msi_alloc_descs(dev, res); 112 + if (nvec <= 0) 113 + return nvec; 114 + 115 + ret = msi_domain_alloc_irqs(msi_domain, dev, nvec); 116 + if (ret) { 117 + dev_err(dev, "Failed to allocate IRQs %d\n", ret); 118 + goto cleanup; 119 + } 120 + 121 + return 0; 122 + 123 + cleanup: 124 + ti_sci_inta_msi_free_descs(&pdev->dev); 125 + return ret; 126 + } 127 + EXPORT_SYMBOL_GPL(ti_sci_inta_msi_domain_alloc_irqs); 128 + 129 + void ti_sci_inta_msi_domain_free_irqs(struct device *dev) 130 + { 131 + msi_domain_free_irqs(dev->msi_domain, dev); 132 + ti_sci_inta_msi_free_descs(dev); 133 + } 134 + EXPORT_SYMBOL_GPL(ti_sci_inta_msi_domain_free_irqs); 135 + 136 + unsigned int ti_sci_inta_msi_get_virq(struct device *dev, u32 dev_index) 137 + { 138 + struct msi_desc *desc; 139 + 140 + 
for_each_msi_entry(desc, dev) 141 + if (desc->inta.dev_index == dev_index) 142 + return desc->irq; 143 + 144 + return -ENODEV; 145 + } 146 + EXPORT_SYMBOL_GPL(ti_sci_inta_msi_get_virq);
+22 -2
include/linux/dma-iommu.h
··· 71 71 size_t size, enum dma_data_direction dir, unsigned long attrs); 72 72 73 73 /* The DMA API isn't _quite_ the whole story, though... */ 74 - void iommu_dma_map_msi_msg(int irq, struct msi_msg *msg); 74 + /* 75 + * iommu_dma_prepare_msi() - Map the MSI page in the IOMMU device 76 + * 77 + * The MSI page will be stored in @desc. 78 + * 79 + * Return: 0 on success otherwise an error describing the failure. 80 + */ 81 + int iommu_dma_prepare_msi(struct msi_desc *desc, phys_addr_t msi_addr); 82 + 83 + /* Update the MSI message if required. */ 84 + void iommu_dma_compose_msi_msg(struct msi_desc *desc, 85 + struct msi_msg *msg); 86 + 75 87 void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list); 76 88 77 89 #else 78 90 79 91 struct iommu_domain; 92 + struct msi_desc; 80 93 struct msi_msg; 81 94 struct device; 82 95 ··· 112 99 { 113 100 } 114 101 115 - static inline void iommu_dma_map_msi_msg(int irq, struct msi_msg *msg) 102 + static inline int iommu_dma_prepare_msi(struct msi_desc *desc, 103 + phys_addr_t msi_addr) 104 + { 105 + return 0; 106 + } 107 + 108 + static inline void iommu_dma_compose_msi_msg(struct msi_desc *desc, 109 + struct msi_msg *msg) 116 110 { 117 111 } 118 112
+2
include/linux/irq.h
··· 625 625 extern int irq_chip_set_vcpu_affinity_parent(struct irq_data *data, 626 626 void *vcpu_info); 627 627 extern int irq_chip_set_type_parent(struct irq_data *data, unsigned int type); 628 + extern int irq_chip_request_resources_parent(struct irq_data *data); 629 + extern void irq_chip_release_resources_parent(struct irq_data *data); 628 630 #endif 629 631 630 632 /* Handling of unhandled and spurious interrupts: */
+6 -6
include/linux/irqchip/arm-gic-v3.h
··· 165 165 #define GICR_PROPBASER_nCnB GIC_BASER_CACHEABILITY(GICR_PROPBASER, INNER, nCnB) 166 166 #define GICR_PROPBASER_nC GIC_BASER_CACHEABILITY(GICR_PROPBASER, INNER, nC) 167 167 #define GICR_PROPBASER_RaWt GIC_BASER_CACHEABILITY(GICR_PROPBASER, INNER, RaWt) 168 - #define GICR_PROPBASER_RaWb GIC_BASER_CACHEABILITY(GICR_PROPBASER, INNER, RaWt) 168 + #define GICR_PROPBASER_RaWb GIC_BASER_CACHEABILITY(GICR_PROPBASER, INNER, RaWb) 169 169 #define GICR_PROPBASER_WaWt GIC_BASER_CACHEABILITY(GICR_PROPBASER, INNER, WaWt) 170 170 #define GICR_PROPBASER_WaWb GIC_BASER_CACHEABILITY(GICR_PROPBASER, INNER, WaWb) 171 171 #define GICR_PROPBASER_RaWaWt GIC_BASER_CACHEABILITY(GICR_PROPBASER, INNER, RaWaWt) ··· 192 192 #define GICR_PENDBASER_nCnB GIC_BASER_CACHEABILITY(GICR_PENDBASER, INNER, nCnB) 193 193 #define GICR_PENDBASER_nC GIC_BASER_CACHEABILITY(GICR_PENDBASER, INNER, nC) 194 194 #define GICR_PENDBASER_RaWt GIC_BASER_CACHEABILITY(GICR_PENDBASER, INNER, RaWt) 195 - #define GICR_PENDBASER_RaWb GIC_BASER_CACHEABILITY(GICR_PENDBASER, INNER, RaWt) 195 + #define GICR_PENDBASER_RaWb GIC_BASER_CACHEABILITY(GICR_PENDBASER, INNER, RaWb) 196 196 #define GICR_PENDBASER_WaWt GIC_BASER_CACHEABILITY(GICR_PENDBASER, INNER, WaWt) 197 197 #define GICR_PENDBASER_WaWb GIC_BASER_CACHEABILITY(GICR_PENDBASER, INNER, WaWb) 198 198 #define GICR_PENDBASER_RaWaWt GIC_BASER_CACHEABILITY(GICR_PENDBASER, INNER, RaWaWt) ··· 251 251 #define GICR_VPROPBASER_nCnB GIC_BASER_CACHEABILITY(GICR_VPROPBASER, INNER, nCnB) 252 252 #define GICR_VPROPBASER_nC GIC_BASER_CACHEABILITY(GICR_VPROPBASER, INNER, nC) 253 253 #define GICR_VPROPBASER_RaWt GIC_BASER_CACHEABILITY(GICR_VPROPBASER, INNER, RaWt) 254 - #define GICR_VPROPBASER_RaWb GIC_BASER_CACHEABILITY(GICR_VPROPBASER, INNER, RaWt) 254 + #define GICR_VPROPBASER_RaWb GIC_BASER_CACHEABILITY(GICR_VPROPBASER, INNER, RaWb) 255 255 #define GICR_VPROPBASER_WaWt GIC_BASER_CACHEABILITY(GICR_VPROPBASER, INNER, WaWt) 256 256 #define GICR_VPROPBASER_WaWb 
GIC_BASER_CACHEABILITY(GICR_VPROPBASER, INNER, WaWb) 257 257 #define GICR_VPROPBASER_RaWaWt GIC_BASER_CACHEABILITY(GICR_VPROPBASER, INNER, RaWaWt) ··· 277 277 #define GICR_VPENDBASER_nCnB GIC_BASER_CACHEABILITY(GICR_VPENDBASER, INNER, nCnB) 278 278 #define GICR_VPENDBASER_nC GIC_BASER_CACHEABILITY(GICR_VPENDBASER, INNER, nC) 279 279 #define GICR_VPENDBASER_RaWt GIC_BASER_CACHEABILITY(GICR_VPENDBASER, INNER, RaWt) 280 - #define GICR_VPENDBASER_RaWb GIC_BASER_CACHEABILITY(GICR_VPENDBASER, INNER, RaWt) 280 + #define GICR_VPENDBASER_RaWb GIC_BASER_CACHEABILITY(GICR_VPENDBASER, INNER, RaWb) 281 281 #define GICR_VPENDBASER_WaWt GIC_BASER_CACHEABILITY(GICR_VPENDBASER, INNER, WaWt) 282 282 #define GICR_VPENDBASER_WaWb GIC_BASER_CACHEABILITY(GICR_VPENDBASER, INNER, WaWb) 283 283 #define GICR_VPENDBASER_RaWaWt GIC_BASER_CACHEABILITY(GICR_VPENDBASER, INNER, RaWaWt) ··· 351 351 #define GITS_CBASER_nCnB GIC_BASER_CACHEABILITY(GITS_CBASER, INNER, nCnB) 352 352 #define GITS_CBASER_nC GIC_BASER_CACHEABILITY(GITS_CBASER, INNER, nC) 353 353 #define GITS_CBASER_RaWt GIC_BASER_CACHEABILITY(GITS_CBASER, INNER, RaWt) 354 - #define GITS_CBASER_RaWb GIC_BASER_CACHEABILITY(GITS_CBASER, INNER, RaWt) 354 + #define GITS_CBASER_RaWb GIC_BASER_CACHEABILITY(GITS_CBASER, INNER, RaWb) 355 355 #define GITS_CBASER_WaWt GIC_BASER_CACHEABILITY(GITS_CBASER, INNER, WaWt) 356 356 #define GITS_CBASER_WaWb GIC_BASER_CACHEABILITY(GITS_CBASER, INNER, WaWb) 357 357 #define GITS_CBASER_RaWaWt GIC_BASER_CACHEABILITY(GITS_CBASER, INNER, RaWaWt) ··· 377 377 #define GITS_BASER_nCnB GIC_BASER_CACHEABILITY(GITS_BASER, INNER, nCnB) 378 378 #define GITS_BASER_nC GIC_BASER_CACHEABILITY(GITS_BASER, INNER, nC) 379 379 #define GITS_BASER_RaWt GIC_BASER_CACHEABILITY(GITS_BASER, INNER, RaWt) 380 - #define GITS_BASER_RaWb GIC_BASER_CACHEABILITY(GITS_BASER, INNER, RaWt) 380 + #define GITS_BASER_RaWb GIC_BASER_CACHEABILITY(GITS_BASER, INNER, RaWb) 381 381 #define GITS_BASER_WaWt GIC_BASER_CACHEABILITY(GITS_BASER, INNER, WaWt) 
382 382 #define GITS_BASER_WaWb GIC_BASER_CACHEABILITY(GITS_BASER, INNER, WaWb) 383 383 #define GITS_BASER_RaWaWt GIC_BASER_CACHEABILITY(GITS_BASER, INNER, RaWaWt)
+1
include/linux/irqdomain.h
··· 82 82 DOMAIN_BUS_NEXUS, 83 83 DOMAIN_BUS_IPI, 84 84 DOMAIN_BUS_FSL_MC_MSI, 85 + DOMAIN_BUS_TI_SCI_INTA_MSI, 85 86 }; 86 87 87 88 /**
+36
include/linux/msi.h
··· 48 48 }; 49 49 50 50 /** 51 + * ti_sci_inta_msi_desc - TISCI based INTA specific msi descriptor data 52 + * @dev_index: TISCI device index 53 + */ 54 + struct ti_sci_inta_msi_desc { 55 + u16 dev_index; 56 + }; 57 + 58 + /** 51 59 * struct msi_desc - Descriptor structure for MSI based interrupts 52 60 * @list: List head for management 53 61 * @irq: The base interrupt number ··· 76 68 * @mask_base: [PCI MSI-X] Mask register base address 77 69 * @platform: [platform] Platform device specific msi descriptor data 78 70 * @fsl_mc: [fsl-mc] FSL MC device specific msi descriptor data 71 + * @inta: [INTA] TISCI based INTA specific msi descriptor data 79 72 */ 80 73 struct msi_desc { 81 74 /* Shared device/bus type independent data */ ··· 86 77 struct device *dev; 87 78 struct msi_msg msg; 88 79 struct irq_affinity_desc *affinity; 80 + #ifdef CONFIG_IRQ_MSI_IOMMU 81 + const void *iommu_cookie; 82 + #endif 89 83 90 84 union { 91 85 /* PCI MSI/X specific data */ ··· 118 106 */ 119 107 struct platform_msi_desc platform; 120 108 struct fsl_mc_msi_desc fsl_mc; 109 + struct ti_sci_inta_msi_desc inta; 121 110 }; 122 111 }; 123 112 ··· 131 118 list_for_each_entry((desc), dev_to_msi_list((dev)), list) 132 119 #define for_each_msi_entry_safe(desc, tmp, dev) \ 133 120 list_for_each_entry_safe((desc), (tmp), dev_to_msi_list((dev)), list) 121 + 122 + #ifdef CONFIG_IRQ_MSI_IOMMU 123 + static inline const void *msi_desc_get_iommu_cookie(struct msi_desc *desc) 124 + { 125 + return desc->iommu_cookie; 126 + } 127 + 128 + static inline void msi_desc_set_iommu_cookie(struct msi_desc *desc, 129 + const void *iommu_cookie) 130 + { 131 + desc->iommu_cookie = iommu_cookie; 132 + } 133 + #else 134 + static inline const void *msi_desc_get_iommu_cookie(struct msi_desc *desc) 135 + { 136 + return NULL; 137 + } 138 + 139 + static inline void msi_desc_set_iommu_cookie(struct msi_desc *desc, 140 + const void *iommu_cookie) 141 + { 142 + } 143 + #endif 134 144 135 145 #ifdef CONFIG_PCI_MSI 136 146 
#define first_pci_msi_entry(pdev) first_msi_entry(&(pdev)->dev)
+23
include/linux/soc/ti/ti_sci_inta_msi.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Texas Instruments' K3 TI SCI INTA MSI helper 4 + * 5 + * Copyright (C) 2018-2019 Texas Instruments Incorporated - http://www.ti.com/ 6 + * Lokesh Vutla <lokeshvutla@ti.com> 7 + */ 8 + 9 + #ifndef __INCLUDE_LINUX_TI_SCI_INTA_MSI_H 10 + #define __INCLUDE_LINUX_TI_SCI_INTA_MSI_H 11 + 12 + #include <linux/msi.h> 13 + #include <linux/soc/ti/ti_sci_protocol.h> 14 + 15 + struct irq_domain 16 + *ti_sci_inta_msi_create_irq_domain(struct fwnode_handle *fwnode, 17 + struct msi_domain_info *info, 18 + struct irq_domain *parent); 19 + int ti_sci_inta_msi_domain_alloc_irqs(struct device *dev, 20 + struct ti_sci_resource *res); 21 + unsigned int ti_sci_inta_msi_get_virq(struct device *dev, u32 index); 22 + void ti_sci_inta_msi_domain_free_irqs(struct device *dev); 23 + #endif /* __INCLUDE_LINUX_TI_SCI_INTA_MSI_H */
+124
include/linux/soc/ti/ti_sci_protocol.h
··· 193 193 }; 194 194 195 195 /** 196 + * struct ti_sci_rm_core_ops - Resource management core operations 197 + * @get_range: Get a range of resources belonging to ti sci host. 198 + * @get_range_from_shost: Get a range of resources belonging to 199 + * specified host id. 200 + * - s_host: Host processing entity to which the 201 + * resources are allocated 202 + * 203 + * NOTE: for these functions, all the parameters are consolidated and defined 204 + * as below: 205 + * - handle: Pointer to TISCI handle as retrieved by *ti_sci_get_handle 206 + * - dev_id: TISCI device ID. 207 + * - subtype: Resource assignment subtype that is being requested 208 + * from the given device. 209 + * - range_start: Start index of the resource range 210 + * - range_num: Number of resources in the range 211 + */ 212 + struct ti_sci_rm_core_ops { 213 + int (*get_range)(const struct ti_sci_handle *handle, u32 dev_id, 214 + u8 subtype, u16 *range_start, u16 *range_num); 215 + int (*get_range_from_shost)(const struct ti_sci_handle *handle, 216 + u32 dev_id, u8 subtype, u8 s_host, 217 + u16 *range_start, u16 *range_num); 218 + }; 219 + 220 + /** 221 + * struct ti_sci_rm_irq_ops - IRQ management operations 222 + * @set_irq: Set an IRQ route between the requested source 223 + * and destination 224 + * @set_event_map: Set an Event based peripheral irq to Interrupt 225 + * Aggregator. 226 + * @free_irq: Free an IRQ route between the requested source 227 + * and destination. 228 + * @free_event_map: Free an event based peripheral irq to Interrupt 229 + * Aggregator.
230 + */ 231 + struct ti_sci_rm_irq_ops { 232 + int (*set_irq)(const struct ti_sci_handle *handle, u16 src_id, 233 + u16 src_index, u16 dst_id, u16 dst_host_irq); 234 + int (*set_event_map)(const struct ti_sci_handle *handle, u16 src_id, 235 + u16 src_index, u16 ia_id, u16 vint, 236 + u16 global_event, u8 vint_status_bit); 237 + int (*free_irq)(const struct ti_sci_handle *handle, u16 src_id, 238 + u16 src_index, u16 dst_id, u16 dst_host_irq); 239 + int (*free_event_map)(const struct ti_sci_handle *handle, u16 src_id, 240 + u16 src_index, u16 ia_id, u16 vint, 241 + u16 global_event, u8 vint_status_bit); 242 + }; 243 + 244 + /** 196 245 * struct ti_sci_ops - Function support for TI SCI 197 246 * @dev_ops: Device specific operations 198 247 * @clk_ops: Clock specific operations 248 + * @rm_core_ops: Resource management core operations. 249 + * @rm_irq_ops: IRQ management specific operations 199 250 */ 200 251 struct ti_sci_ops { 201 252 struct ti_sci_core_ops core_ops; 202 253 struct ti_sci_dev_ops dev_ops; 203 254 struct ti_sci_clk_ops clk_ops; 255 + struct ti_sci_rm_core_ops rm_core_ops; 256 + struct ti_sci_rm_irq_ops rm_irq_ops; 204 257 }; 205 258 206 259 /** ··· 266 213 struct ti_sci_ops ops; 267 214 }; 268 215 216 + #define TI_SCI_RESOURCE_NULL 0xffff 217 + 218 + /** 219 + * struct ti_sci_resource_desc - Description of TI SCI resource instance range. 220 + * @start: Start index of the resource. 221 + * @num: Number of resources. 222 + * @res_map: Bitmap to manage the allocation of these resources. 223 + */ 224 + struct ti_sci_resource_desc { 225 + u16 start; 226 + u16 num; 227 + unsigned long *res_map; 228 + }; 229 + 230 + /** 231 + * struct ti_sci_resource - Structure representing a resource assigned 232 + * to a device. 233 + * @sets: Number of sets available from this resource type 234 + * @lock: Lock to guard the res map in each set. 235 + * @desc: Array of resource descriptors. 
236 + */ 237 + struct ti_sci_resource { 238 + u16 sets; 239 + raw_spinlock_t lock; 240 + struct ti_sci_resource_desc *desc; 241 + }; 242 + 269 243 #if IS_ENABLED(CONFIG_TI_SCI_PROTOCOL) 270 244 const struct ti_sci_handle *ti_sci_get_handle(struct device *dev); 271 245 int ti_sci_put_handle(const struct ti_sci_handle *handle); 272 246 const struct ti_sci_handle *devm_ti_sci_get_handle(struct device *dev); 247 + const struct ti_sci_handle *ti_sci_get_by_phandle(struct device_node *np, 248 + const char *property); 249 + const struct ti_sci_handle *devm_ti_sci_get_by_phandle(struct device *dev, 250 + const char *property); 251 + u16 ti_sci_get_free_resource(struct ti_sci_resource *res); 252 + void ti_sci_release_resource(struct ti_sci_resource *res, u16 id); 253 + u32 ti_sci_get_num_resources(struct ti_sci_resource *res); 254 + struct ti_sci_resource * 255 + devm_ti_sci_get_of_resource(const struct ti_sci_handle *handle, 256 + struct device *dev, u32 dev_id, char *of_prop); 273 257 274 258 #else /* CONFIG_TI_SCI_PROTOCOL */ 275 259 ··· 326 236 return ERR_PTR(-EINVAL); 327 237 } 328 238 239 + static inline 240 + const struct ti_sci_handle *ti_sci_get_by_phandle(struct device_node *np, 241 + const char *property) 242 + { 243 + return ERR_PTR(-EINVAL); 244 + } 245 + 246 + static inline 247 + const struct ti_sci_handle *devm_ti_sci_get_by_phandle(struct device *dev, 248 + const char *property) 249 + { 250 + return ERR_PTR(-EINVAL); 251 + } 252 + 253 + static inline u16 ti_sci_get_free_resource(struct ti_sci_resource *res) 254 + { 255 + return TI_SCI_RESOURCE_NULL; 256 + } 257 + 258 + static inline void ti_sci_release_resource(struct ti_sci_resource *res, u16 id) 259 + { 260 + } 261 + 262 + static inline u32 ti_sci_get_num_resources(struct ti_sci_resource *res) 263 + { 264 + return 0; 265 + } 266 + 267 + static inline struct ti_sci_resource * 268 + devm_ti_sci_get_of_resource(const struct ti_sci_handle *handle, 269 + struct device *dev, u32 dev_id, char *of_prop) 270 + { 
271 + return ERR_PTR(-EINVAL); 272 + } 329 273 #endif /* CONFIG_TI_SCI_PROTOCOL */ 330 274 331 275 #endif /* __TISCI_PROTOCOL_H */
+3
kernel/irq/Kconfig
··· 91 91 select IRQ_DOMAIN_HIERARCHY 92 92 select GENERIC_MSI_IRQ 93 93 94 + config IRQ_MSI_IOMMU 95 + bool 96 + 94 97 config HANDLE_DOMAIN_IRQ 95 98 bool 96 99
+27
kernel/irq/chip.c
··· 1459 1459 return -ENOSYS; 1460 1460 } 1461 1461 EXPORT_SYMBOL_GPL(irq_chip_set_wake_parent); 1462 + 1463 + /** 1464 + * irq_chip_request_resources_parent - Request resources on the parent interrupt 1465 + * @data: Pointer to interrupt specific data 1466 + */ 1467 + int irq_chip_request_resources_parent(struct irq_data *data) 1468 + { 1469 + data = data->parent_data; 1470 + 1471 + if (data->chip->irq_request_resources) 1472 + return data->chip->irq_request_resources(data); 1473 + 1474 + return -ENOSYS; 1475 + } 1476 + EXPORT_SYMBOL_GPL(irq_chip_request_resources_parent); 1477 + 1478 + /** 1479 + * irq_chip_release_resources_parent - Release resources on the parent interrupt 1480 + * @data: Pointer to interrupt specific data 1481 + */ 1482 + void irq_chip_release_resources_parent(struct irq_data *data) 1483 + { 1484 + data = data->parent_data; 1485 + if (data->chip->irq_release_resources) 1486 + data->chip->irq_release_resources(data); 1487 + } 1488 + EXPORT_SYMBOL_GPL(irq_chip_release_resources_parent); 1462 1489 #endif 1463 1490 1464 1491 /**
+1 -1
kernel/irq/irqdomain.c
··· 1297 1297 /** 1298 1298 * __irq_domain_alloc_irqs - Allocate IRQs from domain 1299 1299 * @domain: domain to allocate from 1300 - * @irq_base: allocate specified IRQ nubmer if irq_base >= 0 1300 + * @irq_base: allocate specified IRQ number if irq_base >= 0 1301 1301 * @nr_irqs: number of IRQs to allocate 1302 1302 * @node: NUMA node id for memory allocation 1303 1303 * @arg: domain specific argument