Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'remotes/lorenzo/pci/keystone'

- Move IRQ register address computation inside macros (Kishon Vijay
Abraham I)

- Separate legacy IRQ and MSI configuration (Kishon Vijay Abraham I)

- Use hwirq, not virq, to get MSI IRQ number offset (Kishon Vijay Abraham
I)

- Squash ks_pcie_handle_msi_irq() into ks_pcie_msi_irq_handler() (Kishon
Vijay Abraham I)

- Add dwc support for platforms with custom MSI controllers (Kishon Vijay
Abraham I)

- Add keystone-specific MSI controller (Kishon Vijay Abraham I)

- Remove dwc host_ops previously used for keystone-specific MSI (Kishon
Vijay Abraham I)

- Skip dwc default MSI init if platform has custom MSI controller (Kishon
Vijay Abraham I)

- Implement .start_link() and .stop_link() for keystone endpoint support
(Kishon Vijay Abraham I)

- Add keystone "reg-names" DT binding (Kishon Vijay Abraham I)

- Squash ks_pcie_dw_host_init() into ks_pcie_add_pcie_port() (Kishon
Vijay Abraham I)

- Get keystone register resources from DT by name, not index (Kishon
Vijay Abraham I)

- Get DT resources in .probe() to prepare for endpoint support (Kishon
Vijay Abraham I)

- Add "ti,syscon-pcie-mode" DT property for PCIe mode configuration
(Kishon Vijay Abraham I)

- Explicitly set keystone to host mode (Kishon Vijay Abraham I)

- Document DT "atu" reg-names requirement for DesignWare core >= 4.80
(Kishon Vijay Abraham I)

- Enable dwc iATU unroll for endpoint mode as well as host mode (Kishon
Vijay Abraham I)

- Add dwc "version" to identify core >= 4.80 for ATU programming (Kishon
Vijay Abraham I)

- Don't build ARM32-specific keystone code on ARM64 (Kishon Vijay Abraham
I)

- Add DT binding for keystone PCIe RC in AM654 SoC (Kishon Vijay Abraham
I)

- Add keystone support for AM654 SoC PCIe RC (Kishon Vijay Abraham I)

- Reset keystone PHYs before enabling them (Kishon Vijay Abraham I)

- Make of_pci_get_max_link_speed() available to endpoint drivers as well
as host drivers (Kishon Vijay Abraham I)

- Add keystone support for DT "max-link-speed" property (Kishon Vijay
Abraham I)

- Add endpoint library support for BAR buffer alignment (Kishon Vijay
Abraham I)

- Make all dw_pcie_ep_ops structs const (Kishon Vijay Abraham I)

- Fix fencepost error in dw_pcie_ep_find_capability() (Kishon Vijay
Abraham I)

- Add dwc hooks for dbi/dbi2 that share the same address space (Kishon
Vijay Abraham I)

- Add keystone support for TI AM654x in endpoint mode (Kishon Vijay
Abraham I)

- Configure designware endpoints to advertise smallest resizable BAR
(1MB) (Kishon Vijay Abraham I)

- Align designware endpoint ATU windows for raising MSIs (Kishon Vijay
Abraham I)

- Add endpoint test support for TI AM654x (Kishon Vijay Abraham I)

- Fix endpoint test to save test_reg_bar from driver data into struct
  pci_endpoint_test (Kishon Vijay Abraham I)

* remotes/lorenzo/pci/keystone:
misc: pci_endpoint_test: Fix test_reg_bar to be updated in pci_endpoint_test
misc: pci_endpoint_test: Add support to test PCI EP in AM654x
PCI: designware-ep: Use aligned ATU window for raising MSI interrupts
PCI: designware-ep: Configure Resizable BAR cap to advertise the smallest size
PCI: keystone: Add support for PCIe EP in AM654x Platforms
dt-bindings: PCI: Add PCI EP DT binding documentation for AM654
PCI: dwc: Add callbacks for accessing dbi2 address space
PCI: dwc: Fix dw_pcie_ep_find_capability() to return correct capability offset
PCI: dwc: Add const qualifier to struct dw_pcie_ep_ops
PCI: endpoint: Add support to specify alignment for buffers allocated to BARs
PCI: keystone: Add support to set the max link speed from DT
PCI: OF: Allow of_pci_get_max_link_speed() to be used by PCI Endpoint drivers
PCI: keystone: Invoke phy_reset() API before enabling PHY
PCI: keystone: Add support for PCIe RC in AM654x Platforms
dt-bindings: PCI: Add PCI RC DT binding documentation for AM654
PCI: keystone: Prevent ARM32 specific code to be compiled for ARM64
PCI: dwc: Fix ATU identification for designware version >= 4.80
PCI: dwc: Enable iATU unroll for endpoint too
dt-bindings: PCI: Document "atu" reg-names
PCI: keystone: Explicitly set the PCIe mode
dt-bindings: PCI: Add dt-binding to configure PCIe mode
PCI: keystone: Move resources initialization to prepare for EP support
PCI: keystone: Use platform_get_resource_byname() to get memory resources
PCI: keystone: Perform host initialization in a single function
dt-bindings: PCI: keystone: Add "reg-names" binding information
PCI: keystone: Cleanup error_irq configuration
PCI: keystone: Add start_link()/stop_link() dw_pcie_ops
PCI: dwc: Remove default MSI initialization for platform specific MSI chips
PCI: dwc: Remove Keystone specific dw_pcie_host_ops
PCI: keystone: Use Keystone specific msi_irq_chip
PCI: dwc: Add support to use non default msi_irq_chip
PCI: keystone: Cleanup ks_pcie_msi_irq_handler()
PCI: keystone: Use hwirq to get the MSI IRQ number offset
PCI: keystone: Add separate functions for configuring MSI and legacy interrupt
PCI: keystone: Cleanup interrupt related macros

# Conflicts:
# drivers/pci/controller/dwc/pcie-designware.h

+966 -399
+5 -2
Documentation/devicetree/bindings/pci/designware-pcie.txt
···
 - compatible:
	"snps,dw-pcie" for RC mode;
	"snps,dw-pcie-ep" for EP mode;
-- reg: Should contain the configuration address space.
-- reg-names: Must be "config" for the PCIe configuration space.
+- reg: For designware cores version < 4.80 contains the configuration
+       address space. For designware core version >= 4.80, contains
+       the configuration and ATU address space
+- reg-names: Must be "config" for the PCIe configuration space and "atu" for
+             the ATU address space.
 (The old way of getting the configuration address space from "ranges"
 is deprecated and should be avoided.)
 - num-lanes: number of lanes to use
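For a core at version 4.80 or later, the amended binding expects both ranges. A minimal sketch of what such a node could look like (addresses are made up, and all properties unrelated to this change are omitted):

```dts
/* Illustrative fragment only: base addresses and the node name are
 * hypothetical; a real snps,dw-pcie node needs further properties. */
pcie: pcie@fc000000 {
	compatible = "snps,dw-pcie";
	/* core >= 4.80: configuration space followed by ATU space */
	reg = <0xfc000000 0x2000>, <0xfc002000 0x2000>;
	reg-names = "config", "atu";
};
```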
+55 -3
Documentation/devicetree/bindings/pci/pci-keystone.txt
···
 
 Required Properties:-
 
-	compatibility: "ti,keystone-pcie"
-	reg:	index 1 is the base address and length of DW application registers.
-		index 2 is the base address and length of PCI device ID register.
+	compatibility: Should be "ti,keystone-pcie" for RC on Keystone2 SoC
+		       Should be "ti,am654-pcie-rc" for RC on AM654x SoC
+	reg: Three register ranges as listed in the reg-names property
+	reg-names: "dbics" for the DesignWare PCIe registers, "app" for the
+		   TI specific application registers, "config" for the
+		   configuration space address
 
 	pcie_msi_intc : Interrupt controller device node for MSI IRQ chip
 		interrupt-cells: should be set to 1
 		interrupts: GIC interrupt lines connected to PCI MSI interrupt lines
+			    (required if the compatible is "ti,keystone-pcie")
+	msi-map: As specified in Documentation/devicetree/bindings/pci/pci-msi.txt
+		 (required if the compatible is "ti,am654-pcie-rc")
 
 	ti,syscon-pcie-id : phandle to the device control module required to set device
 			    id and vendor id.
+	ti,syscon-pcie-mode : phandle to the device control module required to configure
+			      PCI in either RC mode or EP mode.
 
 Example:
 	pcie_msi_intc: msi-interrupt-controller {
···
 DesignWare DT Properties not applicable for Keystone PCI
 
 1. pcie_bus clock-names not used. Instead, a phandle to phys is used.
+
+AM654 PCIe Endpoint
+===================
+
+Required Properties:-
+
+	compatibility: Should be "ti,am654-pcie-ep" for EP on AM654x SoC
+	reg: Four register ranges as listed in the reg-names property
+	reg-names: "dbics" for the DesignWare PCIe registers, "app" for the
+		   TI specific application registers, "atu" for the
+		   Address Translation Unit configuration registers and
+		   "addr_space" used to map remote RC address space
+	num-ib-windows: As specified in
+			Documentation/devicetree/bindings/pci/designware-pcie.txt
+	num-ob-windows: As specified in
+			Documentation/devicetree/bindings/pci/designware-pcie.txt
+	num-lanes: As specified in
+		   Documentation/devicetree/bindings/pci/designware-pcie.txt
+	power-domains: As documented by the generic PM domain bindings in
+		       Documentation/devicetree/bindings/power/power_domain.txt.
+	ti,syscon-pcie-mode: phandle to the device control module required to configure
+			     PCI in either RC mode or EP mode.
+
+Optional properties:-
+
+	phys: list of PHY specifiers (used by generic PHY framework)
+	phy-names: must be "pcie-phy0", "pcie-phy1", "pcie-phyN".. based on the
+		   number of lanes as specified in *num-lanes* property.
+		   ("phys" and "phy-names" DT bindings are specified in
+		   Documentation/devicetree/bindings/phy/phy-bindings.txt)
+	interrupts: platform interrupt for error interrupts.
+
+pcie-ep {
+	compatible = "ti,am654-pcie-ep";
+	reg = <0x5500000 0x1000>, <0x5501000 0x1000>,
+	      <0x10000000 0x8000000>, <0x5506000 0x1000>;
+	reg-names = "app", "dbics", "addr_space", "atu";
+	power-domains = <&k3_pds 120>;
+	ti,syscon-pcie-mode = <&pcie0_mode>;
+	num-lanes = <1>;
+	num-ib-windows = <16>;
+	num-ob-windows = <16>;
+	interrupts = <GIC_SPI 340 IRQ_TYPE_EDGE_RISING>;
+};
+18
drivers/misc/pci_endpoint_test.c
···
 #define PCI_ENDPOINT_TEST_IRQ_TYPE		0x24
 #define PCI_ENDPOINT_TEST_IRQ_NUMBER		0x28
 
+#define PCI_DEVICE_ID_TI_AM654			0xb00c
+
+#define is_am654_pci_dev(pdev)		\
+		((pdev)->device == PCI_DEVICE_ID_TI_AM654)
+
 static DEFINE_IDA(pci_endpoint_test_ida);
 
 #define to_endpoint_test(priv) container_of((priv), struct pci_endpoint_test, \
···
 	int ret = -EINVAL;
 	enum pci_barno bar;
 	struct pci_endpoint_test *test = to_endpoint_test(file->private_data);
+	struct pci_dev *pdev = test->pdev;
 
 	mutex_lock(&test->mutex);
 	switch (cmd) {
 	case PCITEST_BAR:
 		bar = arg;
 		if (bar < 0 || bar > 5)
+			goto ret;
+		if (is_am654_pci_dev(pdev) && bar == BAR_0)
 			goto ret;
 		ret = pci_endpoint_test_bar(test, bar);
 		break;
···
 	data = (struct pci_endpoint_test_data *)ent->driver_data;
 	if (data) {
 		test_reg_bar = data->test_reg_bar;
+		test->test_reg_bar = test_reg_bar;
 		test->alignment = data->alignment;
 		irq_type = data->irq_type;
 	}
···
 	pci_disable_device(pdev);
 }
 
+static const struct pci_endpoint_test_data am654_data = {
+	.test_reg_bar = BAR_2,
+	.alignment = SZ_64K,
+	.irq_type = IRQ_TYPE_MSI,
+};
+
 static const struct pci_device_id pci_endpoint_test_tbl[] = {
 	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA74x) },
 	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA72x) },
 	{ PCI_DEVICE(PCI_VENDOR_ID_FREESCALE, 0x81c0) },
 	{ PCI_DEVICE(PCI_VENDOR_ID_SYNOPSYS, 0xedda) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_AM654),
+	  .driver_data = (kernel_ulong_t)&am654_data
+	},
 	{ }
 };
 MODULE_DEVICE_TABLE(pci, pci_endpoint_test_tbl);
+1 -1
drivers/pci/Makefile
···
 ifdef CONFIG_PCI
 obj-$(CONFIG_PROC_FS)		+= proc.o
 obj-$(CONFIG_SYSFS)		+= slot.o
-obj-$(CONFIG_OF)		+= of.o
 obj-$(CONFIG_ACPI)		+= pci-acpi.o
 endif
 
+obj-$(CONFIG_OF)		+= of.o
 obj-$(CONFIG_PCI_QUIRKS)	+= quirks.o
 obj-$(CONFIG_PCIEPORTBUS)	+= pcie/
 obj-$(CONFIG_HOTPLUG_PCI)	+= hotplug/
+23 -6
drivers/pci/controller/dwc/Kconfig
···
 	  Say Y here if you want PCIe support on SPEAr13XX SoCs.
 
 config PCI_KEYSTONE
-	bool "TI Keystone PCIe controller"
-	depends on ARCH_KEYSTONE || (ARM && COMPILE_TEST)
+	bool
+
+config PCI_KEYSTONE_HOST
+	bool "PCI Keystone Host Mode"
+	depends on ARCH_KEYSTONE || ARCH_K3 || ((ARM || ARM64) && COMPILE_TEST)
 	depends on PCI_MSI_IRQ_DOMAIN
 	select PCIE_DW_HOST
+	select PCI_KEYSTONE
+	default y
 	help
-	  Say Y here if you want to enable PCI controller support on Keystone
-	  SoCs. The PCI controller on Keystone is based on DesignWare hardware
-	  and therefore the driver re-uses the DesignWare core functions to
-	  implement the driver.
+	  Enables support for the PCIe controller in the Keystone SoC to
+	  work in host mode. The PCI controller on Keystone is based on
+	  DesignWare hardware and therefore the driver re-uses the
+	  DesignWare core functions to implement the driver.
+
+config PCI_KEYSTONE_EP
+	bool "PCI Keystone Endpoint Mode"
+	depends on ARCH_KEYSTONE || ARCH_K3 || ((ARM || ARM64) && COMPILE_TEST)
+	depends on PCI_ENDPOINT
+	select PCIE_DW_EP
+	select PCI_KEYSTONE
+	help
+	  Enables support for the PCIe controller in the Keystone SoC to
+	  work in endpoint mode. The PCI controller on Keystone is based
+	  on DesignWare hardware and therefore the driver re-uses the
+	  DesignWare core functions to implement the driver.
 
 config PCI_LAYERSCAPE
 	bool "Freescale Layerscape PCIe controller"
+1 -1
drivers/pci/controller/dwc/pci-dra7xx.c
···
 	return &dra7xx_pcie_epc_features;
 }
 
-static struct dw_pcie_ep_ops pcie_ep_ops = {
+static const struct dw_pcie_ep_ops pcie_ep_ops = {
 	.ep_init = dra7xx_pcie_ep_init,
 	.raise_irq = dra7xx_pcie_raise_irq,
 	.get_features = dra7xx_pcie_get_features,
+684 -272
drivers/pci/controller/dwc/pci-keystone.c
···
 
 #include <linux/clk.h>
 #include <linux/delay.h>
+#include <linux/gpio/consumer.h>
 #include <linux/init.h>
 #include <linux/interrupt.h>
 #include <linux/irqchip/chained_irq.h>
···
 #include <linux/mfd/syscon.h>
 #include <linux/msi.h>
 #include <linux/of.h>
+#include <linux/of_device.h>
 #include <linux/of_irq.h>
 #include <linux/of_pci.h>
 #include <linux/phy/phy.h>
···
 #include <linux/resource.h>
 #include <linux/signal.h>
 
+#include "../../pci.h"
 #include "pcie-designware.h"
 
 #define PCIE_VENDORID_MASK	0xffff
···
 #define CFG_TYPE1			BIT(24)
 
 #define OB_SIZE				0x030
-#define SPACE0_REMOTE_CFG_OFFSET	0x1000
 #define OB_OFFSET_INDEX(n)		(0x200 + (8 * (n)))
 #define OB_OFFSET_HI(n)			(0x204 + (8 * (n)))
 #define OB_ENABLEN			BIT(0)
 #define OB_WIN_SIZE			8	/* 8MB */
 
+#define PCIE_LEGACY_IRQ_ENABLE_SET(n)	(0x188 + (0x10 * ((n) - 1)))
+#define PCIE_LEGACY_IRQ_ENABLE_CLR(n)	(0x18c + (0x10 * ((n) - 1)))
+#define PCIE_EP_IRQ_SET			0x64
+#define PCIE_EP_IRQ_CLR			0x68
+#define INT_ENABLE			BIT(0)
+
 /* IRQ register defines */
 #define IRQ_EOI				0x050
-#define IRQ_STATUS			0x184
-#define IRQ_ENABLE_SET			0x188
-#define IRQ_ENABLE_CLR			0x18c
 
 #define MSI_IRQ				0x054
-#define MSI0_IRQ_STATUS			0x104
-#define MSI0_IRQ_ENABLE_SET		0x108
-#define MSI0_IRQ_ENABLE_CLR		0x10c
-#define IRQ_STATUS			0x184
+#define MSI_IRQ_STATUS(n)		(0x104 + ((n) << 4))
+#define MSI_IRQ_ENABLE_SET(n)		(0x108 + ((n) << 4))
+#define MSI_IRQ_ENABLE_CLR(n)		(0x10c + ((n) << 4))
 #define MSI_IRQ_OFFSET			4
+
+#define IRQ_STATUS(n)			(0x184 + ((n) << 4))
+#define IRQ_ENABLE_SET(n)		(0x188 + ((n) << 4))
+#define INTx_EN				BIT(0)
 
 #define ERR_IRQ_STATUS			0x1c4
 #define ERR_IRQ_ENABLE_SET		0x1c8
 #define ERR_AER		BIT(5)	/* ECRC error */
+#define AM6_ERR_AER	BIT(4)	/* AM6 ECRC error */
 #define ERR_AXI		BIT(4)	/* AXI tag lookup fatal error */
 #define ERR_CORR	BIT(3)	/* Correctable error */
 #define ERR_NONFATAL	BIT(2)	/* Non-fatal error */
···
 #define ERR_IRQ_ALL	(ERR_AER | ERR_AXI | ERR_CORR | \
			 ERR_NONFATAL | ERR_FATAL | ERR_SYS)
 
-#define MAX_MSI_HOST_IRQS		8
 /* PCIE controller device IDs */
 #define PCIE_RC_K2HK			0xb008
 #define PCIE_RC_K2E			0xb009
 #define PCIE_RC_K2L			0xb00a
 #define PCIE_RC_K2G			0xb00b
 
+#define KS_PCIE_DEV_TYPE_MASK		(0x3 << 1)
+#define KS_PCIE_DEV_TYPE(mode)		((mode) << 1)
+
+#define EP				0x0
+#define LEG_EP				0x1
+#define RC				0x2
+
+#define EXP_CAP_ID_OFFSET		0x70
+
+#define KS_PCIE_SYSCLOCKOUTEN		BIT(0)
+
+#define AM654_PCIE_DEV_TYPE_MASK	0x3
+#define AM654_WIN_SIZE			SZ_64K
+
+#define APP_ADDR_SPACE_0		(16 * SZ_1K)
+
 #define to_keystone_pcie(x)		dev_get_drvdata((x)->dev)
+
+struct ks_pcie_of_data {
+	enum dw_pcie_device_mode mode;
+	const struct dw_pcie_host_ops *host_ops;
+	const struct dw_pcie_ep_ops *ep_ops;
+	unsigned int version;
+};
 
 struct keystone_pcie {
 	struct dw_pcie		*pci;
 	/* PCI Device ID */
 	u32			device_id;
-	int			num_legacy_host_irqs;
 	int			legacy_host_irqs[PCI_NUM_INTX];
 	struct			device_node *legacy_intc_np;
 
-	int			num_msi_host_irqs;
-	int			msi_host_irqs[MAX_MSI_HOST_IRQS];
+	int			msi_host_irq;
 	int			num_lanes;
 	u32			num_viewport;
 	struct phy		**phy;
···
 	struct irq_domain	*legacy_irq_domain;
 	struct device_node	*np;
 
-	int error_irq;
-
 	/* Application register space */
 	void __iomem		*va_app_base;	/* DT 1st resource */
 	struct resource		app;
+	bool			is_am6;
 };
-
-static inline void update_reg_offset_bit_pos(u32 offset, u32 *reg_offset,
-					     u32 *bit_pos)
-{
-	*reg_offset = offset % 8;
-	*bit_pos = offset >> 3;
-}
-
-static phys_addr_t ks_pcie_get_msi_addr(struct pcie_port *pp)
-{
-	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
-
-	return ks_pcie->app.start + MSI_IRQ;
-}
 
 static u32 ks_pcie_app_readl(struct keystone_pcie *ks_pcie, u32 offset)
 {
···
 	writel(val, ks_pcie->va_app_base + offset);
 }
 
-static void ks_pcie_handle_msi_irq(struct keystone_pcie *ks_pcie, int offset)
+static void ks_pcie_msi_irq_ack(struct irq_data *data)
 {
-	struct dw_pcie *pci = ks_pcie->pci;
-	struct pcie_port *pp = &pci->pp;
-	struct device *dev = pci->dev;
-	u32 pending, vector;
-	int src, virq;
-
-	pending = ks_pcie_app_readl(ks_pcie, MSI0_IRQ_STATUS + (offset << 4));
-
-	/*
-	 * MSI0 status bit 0-3 shows vectors 0, 8, 16, 24, MSI1 status bit
-	 * shows 1, 9, 17, 25 and so forth
-	 */
-	for (src = 0; src < 4; src++) {
-		if (BIT(src) & pending) {
-			vector = offset + (src << 3);
-			virq = irq_linear_revmap(pp->irq_domain, vector);
-			dev_dbg(dev, "irq: bit %d, vector %d, virq %d\n",
-				src, vector, virq);
-			generic_handle_irq(virq);
-		}
-	}
-}
-
-static void ks_pcie_msi_irq_ack(int irq, struct pcie_port *pp)
-{
-	u32 reg_offset, bit_pos;
+	struct pcie_port *pp = irq_data_get_irq_chip_data(data);
 	struct keystone_pcie *ks_pcie;
+	u32 irq = data->hwirq;
 	struct dw_pcie *pci;
+	u32 reg_offset;
+	u32 bit_pos;
 
 	pci = to_dw_pcie_from_pp(pp);
 	ks_pcie = to_keystone_pcie(pci);
-	update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos);
 
-	ks_pcie_app_writel(ks_pcie, MSI0_IRQ_STATUS + (reg_offset << 4),
+	reg_offset = irq % 8;
+	bit_pos = irq >> 3;
+
+	ks_pcie_app_writel(ks_pcie, MSI_IRQ_STATUS(reg_offset),
			   BIT(bit_pos));
 	ks_pcie_app_writel(ks_pcie, IRQ_EOI, reg_offset + MSI_IRQ_OFFSET);
 }
 
-static void ks_pcie_msi_set_irq(struct pcie_port *pp, int irq)
+static void ks_pcie_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
 {
-	u32 reg_offset, bit_pos;
-	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
+	struct pcie_port *pp = irq_data_get_irq_chip_data(data);
+	struct keystone_pcie *ks_pcie;
+	struct dw_pcie *pci;
+	u64 msi_target;
 
-	update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos);
-	ks_pcie_app_writel(ks_pcie, MSI0_IRQ_ENABLE_SET + (reg_offset << 4),
-			   BIT(bit_pos));
+	pci = to_dw_pcie_from_pp(pp);
+	ks_pcie = to_keystone_pcie(pci);
+
+	msi_target = ks_pcie->app.start + MSI_IRQ;
+	msg->address_lo = lower_32_bits(msi_target);
+	msg->address_hi = upper_32_bits(msi_target);
+	msg->data = data->hwirq;
+
+	dev_dbg(pci->dev, "msi#%d address_hi %#x address_lo %#x\n",
+		(int)data->hwirq, msg->address_hi, msg->address_lo);
 }
 
-static void ks_pcie_msi_clear_irq(struct pcie_port *pp, int irq)
+static int ks_pcie_msi_set_affinity(struct irq_data *irq_data,
+				    const struct cpumask *mask, bool force)
 {
-	u32 reg_offset, bit_pos;
-	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
-
-	update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos);
-	ks_pcie_app_writel(ks_pcie, MSI0_IRQ_ENABLE_CLR + (reg_offset << 4),
-			   BIT(bit_pos));
+	return -EINVAL;
 }
+
+static void ks_pcie_msi_mask(struct irq_data *data)
+{
+	struct pcie_port *pp = irq_data_get_irq_chip_data(data);
+	struct keystone_pcie *ks_pcie;
+	u32 irq = data->hwirq;
+	struct dw_pcie *pci;
+	unsigned long flags;
+	u32 reg_offset;
+	u32 bit_pos;
+
+	raw_spin_lock_irqsave(&pp->lock, flags);
+
+	pci = to_dw_pcie_from_pp(pp);
+	ks_pcie = to_keystone_pcie(pci);
+
+	reg_offset = irq % 8;
+	bit_pos = irq >> 3;
+
+	ks_pcie_app_writel(ks_pcie, MSI_IRQ_ENABLE_CLR(reg_offset),
+			   BIT(bit_pos));
+
+	raw_spin_unlock_irqrestore(&pp->lock, flags);
+}
+
+static void ks_pcie_msi_unmask(struct irq_data *data)
+{
+	struct pcie_port *pp = irq_data_get_irq_chip_data(data);
+	struct keystone_pcie *ks_pcie;
+	u32 irq = data->hwirq;
+	struct dw_pcie *pci;
+	unsigned long flags;
+	u32 reg_offset;
+	u32 bit_pos;
+
+	raw_spin_lock_irqsave(&pp->lock, flags);
+
+	pci = to_dw_pcie_from_pp(pp);
+	ks_pcie = to_keystone_pcie(pci);
+
+	reg_offset = irq % 8;
+	bit_pos = irq >> 3;
+
+	ks_pcie_app_writel(ks_pcie, MSI_IRQ_ENABLE_SET(reg_offset),
+			   BIT(bit_pos));
+
+	raw_spin_unlock_irqrestore(&pp->lock, flags);
+}
+
+static struct irq_chip ks_pcie_msi_irq_chip = {
+	.name = "KEYSTONE-PCI-MSI",
+	.irq_ack = ks_pcie_msi_irq_ack,
+	.irq_compose_msi_msg = ks_pcie_compose_msi_msg,
+	.irq_set_affinity = ks_pcie_msi_set_affinity,
+	.irq_mask = ks_pcie_msi_mask,
+	.irq_unmask = ks_pcie_msi_unmask,
+};
 
 static int ks_pcie_msi_host_init(struct pcie_port *pp)
 {
+	pp->msi_irq_chip = &ks_pcie_msi_irq_chip;
 	return dw_pcie_allocate_domains(pp);
-}
-
-static void ks_pcie_enable_legacy_irqs(struct keystone_pcie *ks_pcie)
-{
-	int i;
-
-	for (i = 0; i < PCI_NUM_INTX; i++)
-		ks_pcie_app_writel(ks_pcie, IRQ_ENABLE_SET + (i << 4), 0x1);
 }
 
 static void ks_pcie_handle_legacy_irq(struct keystone_pcie *ks_pcie,
···
 	u32 pending;
 	int virq;
 
-	pending = ks_pcie_app_readl(ks_pcie, IRQ_STATUS + (offset << 4));
+	pending = ks_pcie_app_readl(ks_pcie, IRQ_STATUS(offset));
 
 	if (BIT(0) & pending) {
 		virq = irq_linear_revmap(ks_pcie->legacy_irq_domain, offset);
···
 
 	/* EOI the INTx interrupt */
 	ks_pcie_app_writel(ks_pcie, IRQ_EOI, offset);
+}
+
+/*
+ * Dummy function so that DW core doesn't configure MSI
+ */
+static int ks_pcie_am654_msi_host_init(struct pcie_port *pp)
+{
+	return 0;
 }
 
 static void ks_pcie_enable_error_irq(struct keystone_pcie *ks_pcie)
···
 	if (reg & ERR_CORR)
 		dev_dbg(dev, "Correctable Error\n");
 
-	if (reg & ERR_AXI)
+	if (!ks_pcie->is_am6 && (reg & ERR_AXI))
 		dev_err(dev, "AXI tag lookup fatal Error\n");
 
-	if (reg & ERR_AER)
+	if (reg & ERR_AER || (ks_pcie->is_am6 && (reg & AM6_ERR_AER)))
 		dev_err(dev, "ECRC Error\n");
 
 	ks_pcie_app_writel(ks_pcie, ERR_IRQ_STATUS, reg);
···
 	dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0);
 	ks_pcie_clear_dbi_mode(ks_pcie);
 
+	if (ks_pcie->is_am6)
+		return;
+
 	val = ilog2(OB_WIN_SIZE);
 	ks_pcie_app_writel(ks_pcie, OB_SIZE, val);
 
···
 	return (val == PORT_LOGIC_LTSSM_STATE_L0);
 }
 
-static void ks_pcie_initiate_link_train(struct keystone_pcie *ks_pcie)
+static void ks_pcie_stop_link(struct dw_pcie *pci)
 {
+	struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
 	u32 val;
 
 	/* Disable Link training */
 	val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
 	val &= ~LTSSM_EN_VAL;
 	ks_pcie_app_writel(ks_pcie, CMD_STATUS, LTSSM_EN_VAL | val);
+}
+
+static int ks_pcie_start_link(struct dw_pcie *pci)
+{
+	struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
+	struct device *dev = pci->dev;
+	u32 val;
+
+	if (dw_pcie_link_up(pci)) {
+		dev_dbg(dev, "link is already up\n");
+		return 0;
+	}
 
 	/* Initiate Link Training */
 	val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
 	ks_pcie_app_writel(ks_pcie, CMD_STATUS, LTSSM_EN_VAL | val);
-}
 
-/**
- * ks_pcie_dw_host_init() - initialize host for v3_65 dw hardware
- *
- * Ioremap the register resources, initialize legacy irq domain
- * and call dw_pcie_v3_65_host_init() API to initialize the Keystone
- * PCI host controller.
- */
-static int __init ks_pcie_dw_host_init(struct keystone_pcie *ks_pcie)
-{
-	struct dw_pcie *pci = ks_pcie->pci;
-	struct pcie_port *pp = &pci->pp;
-	struct device *dev = pci->dev;
-	struct platform_device *pdev = to_platform_device(dev);
-	struct resource *res;
-
-	/* Index 0 is the config reg. space address */
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	pci->dbi_base = devm_pci_remap_cfg_resource(dev, res);
-	if (IS_ERR(pci->dbi_base))
-		return PTR_ERR(pci->dbi_base);
-
-	/*
-	 * We set these same and is used in pcie rd/wr_other_conf
-	 * functions
-	 */
-	pp->va_cfg0_base = pci->dbi_base + SPACE0_REMOTE_CFG_OFFSET;
-	pp->va_cfg1_base = pp->va_cfg0_base;
-
-	/* Index 1 is the application reg. space address */
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
-	ks_pcie->va_app_base = devm_ioremap_resource(dev, res);
-	if (IS_ERR(ks_pcie->va_app_base))
-		return PTR_ERR(ks_pcie->va_app_base);
-
-	ks_pcie->app = *res;
-
-	/* Create legacy IRQ domain */
-	ks_pcie->legacy_irq_domain =
-			irq_domain_add_linear(ks_pcie->legacy_intc_np,
-					      PCI_NUM_INTX,
-					      &ks_pcie_legacy_irq_domain_ops,
-					      NULL);
-	if (!ks_pcie->legacy_irq_domain) {
-		dev_err(dev, "Failed to add irq domain for legacy irqs\n");
-		return -EINVAL;
-	}
-
-	return dw_pcie_host_init(pp);
+	return 0;
 }
 
 static void ks_pcie_quirk(struct pci_dev *dev)
···
 }
 DECLARE_PCI_FIXUP_ENABLE(PCI_ANY_ID, PCI_ANY_ID, ks_pcie_quirk);
 
-static int ks_pcie_establish_link(struct keystone_pcie *ks_pcie)
-{
-	struct dw_pcie *pci = ks_pcie->pci;
-	struct device *dev = pci->dev;
-
-	if (dw_pcie_link_up(pci)) {
-		dev_info(dev, "Link already up\n");
-		return 0;
-	}
-
-	ks_pcie_initiate_link_train(ks_pcie);
-
-	/* check if the link is up or not */
-	if (!dw_pcie_wait_for_link(pci))
-		return 0;
-
-	dev_err(dev, "phy link never came up\n");
-	return -ETIMEDOUT;
-}
-
 static void ks_pcie_msi_irq_handler(struct irq_desc *desc)
 {
-	unsigned int irq = irq_desc_get_irq(desc);
+	unsigned int irq = desc->irq_data.hwirq;
 	struct keystone_pcie *ks_pcie = irq_desc_get_handler_data(desc);
-	u32 offset = irq - ks_pcie->msi_host_irqs[0];
+	u32 offset = irq - ks_pcie->msi_host_irq;
 	struct dw_pcie *pci = ks_pcie->pci;
+	struct pcie_port *pp = &pci->pp;
 	struct device *dev = pci->dev;
 	struct irq_chip *chip = irq_desc_get_chip(desc);
+	u32 vector, virq, reg, pos;
 
 	dev_dbg(dev, "%s, irq %d\n", __func__, irq);
 
···
	 * ack operation.
	 */
 	chained_irq_enter(chip, desc);
-	ks_pcie_handle_msi_irq(ks_pcie, offset);
+
+	reg = ks_pcie_app_readl(ks_pcie, MSI_IRQ_STATUS(offset));
+	/*
+	 * MSI0 status bit 0-3 shows vectors 0, 8, 16, 24, MSI1 status bit
+	 * shows 1, 9, 17, 25 and so forth
+	 */
+	for (pos = 0; pos < 4; pos++) {
+		if (!(reg & BIT(pos)))
+			continue;
+
+		vector = offset + (pos << 3);
+		virq = irq_linear_revmap(pp->irq_domain, vector);
+		dev_dbg(dev, "irq: bit %d, vector %d, virq %d\n", pos, vector,
+			virq);
+		generic_handle_irq(virq);
+	}
+
 	chained_irq_exit(chip, desc);
 }
 
···
 	chained_irq_exit(chip, desc);
 }
 
-static int ks_pcie_get_irq_controller_info(struct keystone_pcie *ks_pcie,
-					   char *controller, int *num_irqs)
+static int ks_pcie_config_msi_irq(struct keystone_pcie *ks_pcie)
 {
-	int temp, max_host_irqs, legacy = 1, *host_irqs;
 	struct device *dev = ks_pcie->pci->dev;
-	struct device_node *np_pcie = dev->of_node, **np_temp;
+	struct device_node *np = ks_pcie->np;
+	struct device_node *intc_np;
+	struct irq_data *irq_data;
+	int irq_count, irq, ret, i;
 
-	if (!strcmp(controller, "msi-interrupt-controller"))
-		legacy = 0;
-
-	if (legacy) {
-		np_temp = &ks_pcie->legacy_intc_np;
-		max_host_irqs = PCI_NUM_INTX;
-		host_irqs = &ks_pcie->legacy_host_irqs[0];
-	} else {
-		np_temp = &ks_pcie->msi_intc_np;
-		max_host_irqs = MAX_MSI_HOST_IRQS;
-		host_irqs = &ks_pcie->msi_host_irqs[0];
-	}
-
-	/* interrupt controller is in a child node */
-	*np_temp = of_get_child_by_name(np_pcie, controller);
-	if (!(*np_temp)) {
-		dev_err(dev, "Node for %s is absent\n", controller);
-		return -EINVAL;
-	}
-
-	temp = of_irq_count(*np_temp);
-	if (!temp) {
-		dev_err(dev, "No IRQ entries in %s\n", controller);
-		of_node_put(*np_temp);
-		return -EINVAL;
-	}
-
-	if (temp > max_host_irqs)
-		dev_warn(dev, "Too many %s interrupts defined %u\n",
-			 (legacy ? "legacy" : "MSI"), temp);
-
-	/*
-	 * support upto max_host_irqs. In dt from index 0 to 3 (legacy) or 0 to
-	 * 7 (MSI)
-	 */
-	for (temp = 0; temp < max_host_irqs; temp++) {
-		host_irqs[temp] = irq_of_parse_and_map(*np_temp, temp);
-		if (!host_irqs[temp])
-			break;
-	}
-
-	of_node_put(*np_temp);
-
-	if (temp) {
-		*num_irqs = temp;
+	if (!IS_ENABLED(CONFIG_PCI_MSI))
 		return 0;
+
+	intc_np = of_get_child_by_name(np, "msi-interrupt-controller");
+	if (!intc_np) {
+		if (ks_pcie->is_am6)
+			return 0;
+		dev_warn(dev, "msi-interrupt-controller node is absent\n");
+		return -EINVAL;
 	}
 
-	return -EINVAL;
+	irq_count = of_irq_count(intc_np);
+	if (!irq_count) {
+		dev_err(dev, "No IRQ entries in msi-interrupt-controller\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	for (i = 0; i < irq_count; i++) {
+		irq = irq_of_parse_and_map(intc_np, i);
+		if (!irq) {
+			ret = -EINVAL;
+			goto err;
+		}
+
+		if (!ks_pcie->msi_host_irq) {
+			irq_data = irq_get_irq_data(irq);
+			if (!irq_data) {
+				ret = -EINVAL;
+				goto err;
+			}
+			ks_pcie->msi_host_irq = irq_data->hwirq;
+		}
+
+		irq_set_chained_handler_and_data(irq, ks_pcie_msi_irq_handler,
+						 ks_pcie);
+	}
+
+	of_node_put(intc_np);
+	return 0;
+
+err:
+	of_node_put(intc_np);
+	return ret;
 }
 
-static void ks_pcie_setup_interrupts(struct keystone_pcie *ks_pcie)
+static int ks_pcie_config_legacy_irq(struct keystone_pcie *ks_pcie)
 {
-	int i;
+	struct device *dev = ks_pcie->pci->dev;
+	struct irq_domain *legacy_irq_domain;
+	struct device_node *np = ks_pcie->np;
+	struct device_node *intc_np;
+	int irq_count, irq, ret = 0, i;
 
-	/* Legacy IRQ */
-	for (i = 0; i < ks_pcie->num_legacy_host_irqs; i++) {
-		irq_set_chained_handler_and_data(ks_pcie->legacy_host_irqs[i],
+	intc_np = of_get_child_by_name(np, "legacy-interrupt-controller");
+	if (!intc_np) {
+		/*
+		 * Since legacy interrupts are modeled as edge-interrupts in
+		 * AM6, keep it disabled for now.
+		 */
+		if (ks_pcie->is_am6)
+			return 0;
+		dev_warn(dev, "legacy-interrupt-controller node is absent\n");
+		return -EINVAL;
+	}
+
+	irq_count = of_irq_count(intc_np);
+	if (!irq_count) {
+		dev_err(dev, "No IRQ entries in legacy-interrupt-controller\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	for (i = 0; i < irq_count; i++) {
+		irq = irq_of_parse_and_map(intc_np, i);
+		if (!irq) {
+			ret = -EINVAL;
+			goto err;
+		}
+		ks_pcie->legacy_host_irqs[i] = irq;
+
+		irq_set_chained_handler_and_data(irq,
						 ks_pcie_legacy_irq_handler,
						 ks_pcie);
 	}
-	ks_pcie_enable_legacy_irqs(ks_pcie);
 
-	/* MSI IRQ */
-	if (IS_ENABLED(CONFIG_PCI_MSI)) {
-		for (i = 0; i < ks_pcie->num_msi_host_irqs; i++) {
-			irq_set_chained_handler_and_data(ks_pcie->msi_host_irqs[i],
-							 ks_pcie_msi_irq_handler,
-							 ks_pcie);
-		}
+	legacy_irq_domain =
+		irq_domain_add_linear(intc_np, PCI_NUM_INTX,
+				      &ks_pcie_legacy_irq_domain_ops, NULL);
+	if (!legacy_irq_domain) {
+		dev_err(dev, "Failed to add irq domain for legacy irqs\n");
+		ret = -EINVAL;
+		goto err;
 	}
+	ks_pcie->legacy_irq_domain = legacy_irq_domain;
 
-	if (ks_pcie->error_irq > 0)
-		ks_pcie_enable_error_irq(ks_pcie);
+	for (i = 0; i < PCI_NUM_INTX; i++)
+		ks_pcie_app_writel(ks_pcie, IRQ_ENABLE_SET(i), INTx_EN);
+
+err:
+	of_node_put(intc_np);
+	return ret;
 }
 
+#ifdef CONFIG_ARM
 /*
  * When a PCI device does not exist during config cycles, keystone host gets a
  * bus error instead of returning 0xffffffff. This handler always returns 0
···
 
 	return 0;
 }
+#endif
 
 static int __init ks_pcie_init_id(struct keystone_pcie *ks_pcie)
 {
···
 	if (ret)
 		return ret;
 
+	dw_pcie_dbi_ro_wr_en(pci);
 	dw_pcie_writew_dbi(pci, PCI_VENDOR_ID, id & PCIE_VENDORID_MASK);
 	dw_pcie_writew_dbi(pci, PCI_DEVICE_ID, id >> PCIE_DEVICEID_SHIFT);
+	dw_pcie_dbi_ro_wr_dis(pci);
 
 	return 0;
 }
···
 	struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
 	int ret;
 
+	ret = ks_pcie_config_legacy_irq(ks_pcie);
+	if (ret)
+		return ret;
+
+	ret = ks_pcie_config_msi_irq(ks_pcie);
+	if (ret)
+		return ret;
+
 	dw_pcie_setup_rc(pp);
 
-	ks_pcie_establish_link(ks_pcie);
+	ks_pcie_stop_link(pci);
 	ks_pcie_setup_rc_app_regs(ks_pcie);
-	ks_pcie_setup_interrupts(ks_pcie);
 	writew(PCI_IO_RANGE_TYPE_32 | (PCI_IO_RANGE_TYPE_32 << 8),
	       pci->dbi_base + PCI_IO_BASE);
 
···
 	if (ret < 0)
 		return ret;
 
+#ifdef CONFIG_ARM
 	/*
	 * PCIe access errors that result into OCP errors are caught by ARM as
	 * "External aborts"
	 */
 	hook_fault_code(17, ks_pcie_fault, SIGBUS, 0,
			"Asynchronous external abort");
+#endif
+
+	ks_pcie_start_link(pci);
+	dw_pcie_wait_for_link(pci);
 
 	return 0;
 }
···
 	.rd_other_conf = ks_pcie_rd_other_conf,
 	.wr_other_conf = ks_pcie_wr_other_conf,
 	.host_init = ks_pcie_host_init,
-	.msi_set_irq = ks_pcie_msi_set_irq,
-	.msi_clear_irq = ks_pcie_msi_clear_irq,
-	.get_msi_addr = ks_pcie_get_msi_addr,
 	.msi_host_init = ks_pcie_msi_host_init,
-	.msi_irq_ack = ks_pcie_msi_irq_ack,
 	.scan_bus = ks_pcie_v3_65_scan_bus,
+};
+
+static const struct 
dw_pcie_host_ops ks_pcie_am654_host_ops = { 788 + .host_init = ks_pcie_host_init, 789 + .msi_host_init = ks_pcie_am654_msi_host_init, 854 790 }; 855 791 856 792 static irqreturn_t ks_pcie_err_irq_handler(int irq, void *priv) ··· 867 801 struct dw_pcie *pci = ks_pcie->pci; 868 802 struct pcie_port *pp = &pci->pp; 869 803 struct device *dev = &pdev->dev; 804 + struct resource *res; 870 805 int ret; 871 806 872 - ret = ks_pcie_get_irq_controller_info(ks_pcie, 873 - "legacy-interrupt-controller", 874 - &ks_pcie->num_legacy_host_irqs); 875 - if (ret) 876 - return ret; 807 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config"); 808 + pp->va_cfg0_base = devm_pci_remap_cfg_resource(dev, res); 809 + if (IS_ERR(pp->va_cfg0_base)) 810 + return PTR_ERR(pp->va_cfg0_base); 877 811 878 - if (IS_ENABLED(CONFIG_PCI_MSI)) { 879 - ret = ks_pcie_get_irq_controller_info(ks_pcie, 880 - "msi-interrupt-controller", 881 - &ks_pcie->num_msi_host_irqs); 882 - if (ret) 883 - return ret; 884 - } 812 + pp->va_cfg1_base = pp->va_cfg0_base; 885 813 886 - /* 887 - * Index 0 is the platform interrupt for error interrupt 888 - * from RC. This is optional. 
889 - */ 890 - ks_pcie->error_irq = irq_of_parse_and_map(ks_pcie->np, 0); 891 - if (ks_pcie->error_irq <= 0) 892 - dev_info(dev, "no error IRQ defined\n"); 893 - else { 894 - ret = request_irq(ks_pcie->error_irq, ks_pcie_err_irq_handler, 895 - IRQF_SHARED, "pcie-error-irq", ks_pcie); 896 - if (ret < 0) { 897 - dev_err(dev, "failed to request error IRQ %d\n", 898 - ks_pcie->error_irq); 899 - return ret; 900 - } 901 - } 902 - 903 - pp->ops = &ks_pcie_host_ops; 904 - ret = ks_pcie_dw_host_init(ks_pcie); 814 + ret = dw_pcie_host_init(pp); 905 815 if (ret) { 906 816 dev_err(dev, "failed to initialize host\n"); 907 817 return ret; ··· 886 844 return 0; 887 845 } 888 846 889 - static const struct of_device_id ks_pcie_of_match[] = { 890 - { 891 - .type = "pci", 892 - .compatible = "ti,keystone-pcie", 893 - }, 894 - { }, 895 - }; 847 + static u32 ks_pcie_am654_read_dbi2(struct dw_pcie *pci, void __iomem *base, 848 + u32 reg, size_t size) 849 + { 850 + struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 851 + u32 val; 852 + 853 + ks_pcie_set_dbi_mode(ks_pcie); 854 + dw_pcie_read(base + reg, size, &val); 855 + ks_pcie_clear_dbi_mode(ks_pcie); 856 + return val; 857 + } 858 + 859 + static void ks_pcie_am654_write_dbi2(struct dw_pcie *pci, void __iomem *base, 860 + u32 reg, size_t size, u32 val) 861 + { 862 + struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 863 + 864 + ks_pcie_set_dbi_mode(ks_pcie); 865 + dw_pcie_write(base + reg, size, val); 866 + ks_pcie_clear_dbi_mode(ks_pcie); 867 + } 896 868 897 869 static const struct dw_pcie_ops ks_pcie_dw_pcie_ops = { 870 + .start_link = ks_pcie_start_link, 871 + .stop_link = ks_pcie_stop_link, 898 872 .link_up = ks_pcie_link_up, 873 + .read_dbi2 = ks_pcie_am654_read_dbi2, 874 + .write_dbi2 = ks_pcie_am654_write_dbi2, 899 875 }; 876 + 877 + static void ks_pcie_am654_ep_init(struct dw_pcie_ep *ep) 878 + { 879 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 880 + int flags; 881 + 882 + ep->page_size = AM654_WIN_SIZE; 883 + flags = 
PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_32; 884 + dw_pcie_writel_dbi2(pci, PCI_BASE_ADDRESS_0, APP_ADDR_SPACE_0 - 1); 885 + dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, flags); 886 + } 887 + 888 + static void ks_pcie_am654_raise_legacy_irq(struct keystone_pcie *ks_pcie) 889 + { 890 + struct dw_pcie *pci = ks_pcie->pci; 891 + u8 int_pin; 892 + 893 + int_pin = dw_pcie_readb_dbi(pci, PCI_INTERRUPT_PIN); 894 + if (int_pin == 0 || int_pin > 4) 895 + return; 896 + 897 + ks_pcie_app_writel(ks_pcie, PCIE_LEGACY_IRQ_ENABLE_SET(int_pin), 898 + INT_ENABLE); 899 + ks_pcie_app_writel(ks_pcie, PCIE_EP_IRQ_SET, INT_ENABLE); 900 + mdelay(1); 901 + ks_pcie_app_writel(ks_pcie, PCIE_EP_IRQ_CLR, INT_ENABLE); 902 + ks_pcie_app_writel(ks_pcie, PCIE_LEGACY_IRQ_ENABLE_CLR(int_pin), 903 + INT_ENABLE); 904 + } 905 + 906 + static int ks_pcie_am654_raise_irq(struct dw_pcie_ep *ep, u8 func_no, 907 + enum pci_epc_irq_type type, 908 + u16 interrupt_num) 909 + { 910 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 911 + struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 912 + 913 + switch (type) { 914 + case PCI_EPC_IRQ_LEGACY: 915 + ks_pcie_am654_raise_legacy_irq(ks_pcie); 916 + break; 917 + case PCI_EPC_IRQ_MSI: 918 + dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num); 919 + break; 920 + default: 921 + dev_err(pci->dev, "UNKNOWN IRQ type\n"); 922 + return -EINVAL; 923 + } 924 + 925 + return 0; 926 + } 927 + 928 + static const struct pci_epc_features ks_pcie_am654_epc_features = { 929 + .linkup_notifier = false, 930 + .msi_capable = true, 931 + .msix_capable = false, 932 + .reserved_bar = 1 << BAR_0 | 1 << BAR_1, 933 + .bar_fixed_64bit = 1 << BAR_0, 934 + .bar_fixed_size[2] = SZ_1M, 935 + .bar_fixed_size[3] = SZ_64K, 936 + .bar_fixed_size[4] = 256, 937 + .bar_fixed_size[5] = SZ_1M, 938 + .align = SZ_1M, 939 + }; 940 + 941 + static const struct pci_epc_features* 942 + ks_pcie_am654_get_features(struct dw_pcie_ep *ep) 943 + { 944 + return &ks_pcie_am654_epc_features; 945 + } 
946 + 947 + static const struct dw_pcie_ep_ops ks_pcie_am654_ep_ops = { 948 + .ep_init = ks_pcie_am654_ep_init, 949 + .raise_irq = ks_pcie_am654_raise_irq, 950 + .get_features = &ks_pcie_am654_get_features, 951 + }; 952 + 953 + static int __init ks_pcie_add_pcie_ep(struct keystone_pcie *ks_pcie, 954 + struct platform_device *pdev) 955 + { 956 + int ret; 957 + struct dw_pcie_ep *ep; 958 + struct resource *res; 959 + struct device *dev = &pdev->dev; 960 + struct dw_pcie *pci = ks_pcie->pci; 961 + 962 + ep = &pci->ep; 963 + 964 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "addr_space"); 965 + if (!res) 966 + return -EINVAL; 967 + 968 + ep->phys_base = res->start; 969 + ep->addr_size = resource_size(res); 970 + 971 + ret = dw_pcie_ep_init(ep); 972 + if (ret) { 973 + dev_err(dev, "failed to initialize endpoint\n"); 974 + return ret; 975 + } 976 + 977 + return 0; 978 + } 900 979 901 980 static void ks_pcie_disable_phy(struct keystone_pcie *ks_pcie) 902 981 { ··· 1036 873 int num_lanes = ks_pcie->num_lanes; 1037 874 1038 875 for (i = 0; i < num_lanes; i++) { 876 + ret = phy_reset(ks_pcie->phy[i]); 877 + if (ret < 0) 878 + goto err_phy; 879 + 1039 880 ret = phy_init(ks_pcie->phy[i]); 1040 881 if (ret < 0) 1041 882 goto err_phy; ··· 1062 895 return ret; 1063 896 } 1064 897 898 + static int ks_pcie_set_mode(struct device *dev) 899 + { 900 + struct device_node *np = dev->of_node; 901 + struct regmap *syscon; 902 + u32 val; 903 + u32 mask; 904 + int ret = 0; 905 + 906 + syscon = syscon_regmap_lookup_by_phandle(np, "ti,syscon-pcie-mode"); 907 + if (IS_ERR(syscon)) 908 + return 0; 909 + 910 + mask = KS_PCIE_DEV_TYPE_MASK | KS_PCIE_SYSCLOCKOUTEN; 911 + val = KS_PCIE_DEV_TYPE(RC) | KS_PCIE_SYSCLOCKOUTEN; 912 + 913 + ret = regmap_update_bits(syscon, 0, mask, val); 914 + if (ret) { 915 + dev_err(dev, "failed to set pcie mode\n"); 916 + return ret; 917 + } 918 + 919 + return 0; 920 + } 921 + 922 + static int ks_pcie_am654_set_mode(struct device *dev, 923 + enum 
dw_pcie_device_mode mode) 924 + { 925 + struct device_node *np = dev->of_node; 926 + struct regmap *syscon; 927 + u32 val; 928 + u32 mask; 929 + int ret = 0; 930 + 931 + syscon = syscon_regmap_lookup_by_phandle(np, "ti,syscon-pcie-mode"); 932 + if (IS_ERR(syscon)) 933 + return 0; 934 + 935 + mask = AM654_PCIE_DEV_TYPE_MASK; 936 + 937 + switch (mode) { 938 + case DW_PCIE_RC_TYPE: 939 + val = RC; 940 + break; 941 + case DW_PCIE_EP_TYPE: 942 + val = EP; 943 + break; 944 + default: 945 + dev_err(dev, "INVALID device type %d\n", mode); 946 + return -EINVAL; 947 + } 948 + 949 + ret = regmap_update_bits(syscon, 0, mask, val); 950 + if (ret) { 951 + dev_err(dev, "failed to set pcie mode\n"); 952 + return ret; 953 + } 954 + 955 + return 0; 956 + } 957 + 958 + static void ks_pcie_set_link_speed(struct dw_pcie *pci, int link_speed) 959 + { 960 + u32 val; 961 + 962 + dw_pcie_dbi_ro_wr_en(pci); 963 + 964 + val = dw_pcie_readl_dbi(pci, EXP_CAP_ID_OFFSET + PCI_EXP_LNKCAP); 965 + if ((val & PCI_EXP_LNKCAP_SLS) != link_speed) { 966 + val &= ~((u32)PCI_EXP_LNKCAP_SLS); 967 + val |= link_speed; 968 + dw_pcie_writel_dbi(pci, EXP_CAP_ID_OFFSET + PCI_EXP_LNKCAP, 969 + val); 970 + } 971 + 972 + val = dw_pcie_readl_dbi(pci, EXP_CAP_ID_OFFSET + PCI_EXP_LNKCTL2); 973 + if ((val & PCI_EXP_LNKCAP_SLS) != link_speed) { 974 + val &= ~((u32)PCI_EXP_LNKCAP_SLS); 975 + val |= link_speed; 976 + dw_pcie_writel_dbi(pci, EXP_CAP_ID_OFFSET + PCI_EXP_LNKCTL2, 977 + val); 978 + } 979 + 980 + dw_pcie_dbi_ro_wr_dis(pci); 981 + } 982 + 983 + static const struct ks_pcie_of_data ks_pcie_rc_of_data = { 984 + .host_ops = &ks_pcie_host_ops, 985 + .version = 0x365A, 986 + }; 987 + 988 + static const struct ks_pcie_of_data ks_pcie_am654_rc_of_data = { 989 + .host_ops = &ks_pcie_am654_host_ops, 990 + .mode = DW_PCIE_RC_TYPE, 991 + .version = 0x490A, 992 + }; 993 + 994 + static const struct ks_pcie_of_data ks_pcie_am654_ep_of_data = { 995 + .ep_ops = &ks_pcie_am654_ep_ops, 996 + .mode = DW_PCIE_EP_TYPE, 997 + 
.version = 0x490A, 998 + }; 999 + 1000 + static const struct of_device_id ks_pcie_of_match[] = { 1001 + { 1002 + .type = "pci", 1003 + .data = &ks_pcie_rc_of_data, 1004 + .compatible = "ti,keystone-pcie", 1005 + }, 1006 + { 1007 + .data = &ks_pcie_am654_rc_of_data, 1008 + .compatible = "ti,am654-pcie-rc", 1009 + }, 1010 + { 1011 + .data = &ks_pcie_am654_ep_of_data, 1012 + .compatible = "ti,am654-pcie-ep", 1013 + }, 1014 + { }, 1015 + }; 1016 + 1065 1017 static int __init ks_pcie_probe(struct platform_device *pdev) 1066 1018 { 1019 + const struct dw_pcie_host_ops *host_ops; 1020 + const struct dw_pcie_ep_ops *ep_ops; 1067 1021 struct device *dev = &pdev->dev; 1068 1022 struct device_node *np = dev->of_node; 1023 + const struct ks_pcie_of_data *data; 1024 + const struct of_device_id *match; 1025 + enum dw_pcie_device_mode mode; 1069 1026 struct dw_pcie *pci; 1070 1027 struct keystone_pcie *ks_pcie; 1071 1028 struct device_link **link; 1029 + struct gpio_desc *gpiod; 1030 + void __iomem *atu_base; 1031 + struct resource *res; 1032 + unsigned int version; 1033 + void __iomem *base; 1072 1034 u32 num_viewport; 1073 1035 struct phy **phy; 1036 + int link_speed; 1074 1037 u32 num_lanes; 1075 1038 char name[10]; 1076 1039 int ret; 1040 + int irq; 1077 1041 int i; 1042 + 1043 + match = of_match_device(of_match_ptr(ks_pcie_of_match), dev); 1044 + data = (struct ks_pcie_of_data *)match->data; 1045 + if (!data) 1046 + return -EINVAL; 1047 + 1048 + version = data->version; 1049 + host_ops = data->host_ops; 1050 + ep_ops = data->ep_ops; 1051 + mode = data->mode; 1078 1052 1079 1053 ks_pcie = devm_kzalloc(dev, sizeof(*ks_pcie), GFP_KERNEL); 1080 1054 if (!ks_pcie) ··· 1225 917 if (!pci) 1226 918 return -ENOMEM; 1227 919 920 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "app"); 921 + ks_pcie->va_app_base = devm_ioremap_resource(dev, res); 922 + if (IS_ERR(ks_pcie->va_app_base)) 923 + return PTR_ERR(ks_pcie->va_app_base); 924 + 925 + ks_pcie->app = *res; 926 + 927 + 
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbics"); 928 + base = devm_pci_remap_cfg_resource(dev, res); 929 + if (IS_ERR(base)) 930 + return PTR_ERR(base); 931 + 932 + if (of_device_is_compatible(np, "ti,am654-pcie-rc")) 933 + ks_pcie->is_am6 = true; 934 + 935 + pci->dbi_base = base; 936 + pci->dbi_base2 = base; 1228 937 pci->dev = dev; 1229 938 pci->ops = &ks_pcie_dw_pcie_ops; 939 + pci->version = version; 1230 940 1231 - ret = of_property_read_u32(np, "num-viewport", &num_viewport); 941 + irq = platform_get_irq(pdev, 0); 942 + if (irq < 0) { 943 + dev_err(dev, "missing IRQ resource: %d\n", irq); 944 + return irq; 945 + } 946 + 947 + ret = request_irq(irq, ks_pcie_err_irq_handler, IRQF_SHARED, 948 + "ks-pcie-error-irq", ks_pcie); 1232 949 if (ret < 0) { 1233 - dev_err(dev, "unable to read *num-viewport* property\n"); 950 + dev_err(dev, "failed to request error IRQ %d\n", 951 + irq); 1234 952 return ret; 1235 953 } 1236 954 ··· 1294 960 ks_pcie->pci = pci; 1295 961 ks_pcie->link = link; 1296 962 ks_pcie->num_lanes = num_lanes; 1297 - ks_pcie->num_viewport = num_viewport; 1298 963 ks_pcie->phy = phy; 964 + 965 + gpiod = devm_gpiod_get_optional(dev, "reset", 966 + GPIOD_OUT_LOW); 967 + if (IS_ERR(gpiod)) { 968 + ret = PTR_ERR(gpiod); 969 + if (ret != -EPROBE_DEFER) 970 + dev_err(dev, "Failed to get reset GPIO\n"); 971 + goto err_link; 972 + } 1299 973 1300 974 ret = ks_pcie_enable_phy(ks_pcie); 1301 975 if (ret) { ··· 1319 977 goto err_get_sync; 1320 978 } 1321 979 1322 - ret = ks_pcie_add_pcie_port(ks_pcie, pdev); 1323 - if (ret < 0) 1324 - goto err_get_sync; 980 + if (pci->version >= 0x480A) { 981 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "atu"); 982 + atu_base = devm_ioremap_resource(dev, res); 983 + if (IS_ERR(atu_base)) { 984 + ret = PTR_ERR(atu_base); 985 + goto err_get_sync; 986 + } 987 + 988 + pci->atu_base = atu_base; 989 + 990 + ret = ks_pcie_am654_set_mode(dev, mode); 991 + if (ret < 0) 992 + goto err_get_sync; 993 + } else 
{ 994 + ret = ks_pcie_set_mode(dev); 995 + if (ret < 0) 996 + goto err_get_sync; 997 + } 998 + 999 + link_speed = of_pci_get_max_link_speed(np); 1000 + if (link_speed < 0) 1001 + link_speed = 2; 1002 + 1003 + ks_pcie_set_link_speed(pci, link_speed); 1004 + 1005 + switch (mode) { 1006 + case DW_PCIE_RC_TYPE: 1007 + if (!IS_ENABLED(CONFIG_PCI_KEYSTONE_HOST)) { 1008 + ret = -ENODEV; 1009 + goto err_get_sync; 1010 + } 1011 + 1012 + ret = of_property_read_u32(np, "num-viewport", &num_viewport); 1013 + if (ret < 0) { 1014 + dev_err(dev, "unable to read *num-viewport* property\n"); 1015 + return ret; 1016 + } 1017 + 1018 + /* 1019 + * "Power Sequencing and Reset Signal Timings" table in 1020 + * PCI EXPRESS CARD ELECTROMECHANICAL SPECIFICATION, REV. 2.0 1021 + * indicates PERST# should be deasserted after minimum of 100us 1022 + * once REFCLK is stable. The REFCLK to the connector in RC 1023 + * mode is selected while enabling the PHY. So deassert PERST# 1024 + * after 100 us. 1025 + */ 1026 + if (gpiod) { 1027 + usleep_range(100, 200); 1028 + gpiod_set_value_cansleep(gpiod, 1); 1029 + } 1030 + 1031 + ks_pcie->num_viewport = num_viewport; 1032 + pci->pp.ops = host_ops; 1033 + ret = ks_pcie_add_pcie_port(ks_pcie, pdev); 1034 + if (ret < 0) 1035 + goto err_get_sync; 1036 + break; 1037 + case DW_PCIE_EP_TYPE: 1038 + if (!IS_ENABLED(CONFIG_PCI_KEYSTONE_EP)) { 1039 + ret = -ENODEV; 1040 + goto err_get_sync; 1041 + } 1042 + 1043 + pci->ep.ops = ep_ops; 1044 + ret = ks_pcie_add_pcie_ep(ks_pcie, pdev); 1045 + if (ret < 0) 1046 + goto err_get_sync; 1047 + break; 1048 + default: 1049 + dev_err(dev, "INVALID device type %d\n", mode); 1050 + } 1051 + 1052 + ks_pcie_enable_error_irq(ks_pcie); 1325 1053 1326 1054 return 0; 1327 1055
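The keystone changes above program the requested link speed by read-modify-writing the Supported Link Speeds field of the Link Capabilities and Link Control 2 registers. A minimal userspace sketch of that field update, with an illustrative register value (only `PCI_EXP_LNKCAP_SLS` is the real spec-defined mask):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the read-modify-write ks_pcie_set_link_speed() performs on
 * PCI_EXP_LNKCAP: clear the Supported Link Speeds field, substitute the
 * target speed. PCI_EXP_LNKCAP_SLS is bits 3:0 per the PCIe spec.
 */
#define PCI_EXP_LNKCAP_SLS 0x0000000fu

static uint32_t set_link_speed(uint32_t lnkcap, uint32_t link_speed)
{
	if ((lnkcap & PCI_EXP_LNKCAP_SLS) != link_speed) {
		lnkcap &= ~PCI_EXP_LNKCAP_SLS;	/* clear current max speed */
		lnkcap |= link_speed;		/* program requested speed */
	}
	return lnkcap;
}
```

The same update is applied to both LNKCAP and LNKCTL2 in the driver, bracketed by `dw_pcie_dbi_ro_wr_en()`/`dw_pcie_dbi_ro_wr_dis()` because LNKCAP is read-only without the DBI write-enable.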
+1 -1
drivers/pci/controller/dwc/pci-layerscape-ep.c
··· 79 79 } 80 80 } 81 81 82 - static struct dw_pcie_ep_ops pcie_ep_ops = { 82 + static const struct dw_pcie_ep_ops pcie_ep_ops = { 83 83 .ep_init = ls_pcie_ep_init, 84 84 .raise_irq = ls_pcie_ep_raise_irq, 85 85 .get_features = ls_pcie_ep_get_features,
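This hunk (and the matching ones in artpec6 and designware-plat below) only adds `const` to the `dw_pcie_ep_ops` tables. The payoff of const-qualifying a static ops struct is that it lands in read-only data, so an accidental write faults rather than silently redirecting a callback. A toy illustration with a stand-in ops struct (not the kernel's definition):

```c
#include <assert.h>

/* Stand-in for a dw_pcie_ep_ops-style callback table. */
struct ep_ops {
	int (*raise_irq)(int irq);
};

static int raise_irq_stub(int irq)
{
	return irq + 1;	/* placeholder behavior for the sketch */
}

/* const: the compiler places this table in .rodata. */
static const struct ep_ops pcie_ep_ops = {
	.raise_irq = raise_irq_stub,
};
```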
+1 -1
drivers/pci/controller/dwc/pcie-artpec6.c
··· 444 444 return 0; 445 445 } 446 446 447 - static struct dw_pcie_ep_ops pcie_ep_ops = { 447 + static const struct dw_pcie_ep_ops pcie_ep_ops = { 448 448 .ep_init = artpec6_pcie_ep_init, 449 449 .raise_irq = artpec6_pcie_raise_irq, 450 450 };
+44 -11
drivers/pci/controller/dwc/pcie-designware-ep.c
··· 46 46 u8 cap_id, next_cap_ptr; 47 47 u16 reg; 48 48 49 + if (!cap_ptr) 50 + return 0; 51 + 49 52 reg = dw_pcie_readw_dbi(pci, cap_ptr); 50 - next_cap_ptr = (reg & 0xff00) >> 8; 51 53 cap_id = (reg & 0x00ff); 52 54 53 - if (!next_cap_ptr || cap_id > PCI_CAP_ID_MAX) 55 + if (cap_id > PCI_CAP_ID_MAX) 54 56 return 0; 55 57 56 58 if (cap_id == cap) 57 59 return cap_ptr; 58 60 61 + next_cap_ptr = (reg & 0xff00) >> 8; 59 62 return __dw_pcie_ep_find_next_cap(pci, next_cap_ptr, cap); 60 63 } 61 64 ··· 69 66 70 67 reg = dw_pcie_readw_dbi(pci, PCI_CAPABILITY_LIST); 71 68 next_cap_ptr = (reg & 0x00ff); 72 - 73 - if (!next_cap_ptr) 74 - return 0; 75 69 76 70 return __dw_pcie_ep_find_next_cap(pci, next_cap_ptr, cap); 77 71 } ··· 397 397 { 398 398 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 399 399 struct pci_epc *epc = ep->epc; 400 + unsigned int aligned_offset; 400 401 u16 msg_ctrl, msg_data; 401 402 u32 msg_addr_lower, msg_addr_upper, reg; 402 403 u64 msg_addr; ··· 423 422 reg = ep->msi_cap + PCI_MSI_DATA_32; 424 423 msg_data = dw_pcie_readw_dbi(pci, reg); 425 424 } 426 - msg_addr = ((u64) msg_addr_upper) << 32 | msg_addr_lower; 425 + aligned_offset = msg_addr_lower & (epc->mem->page_size - 1); 426 + msg_addr = ((u64)msg_addr_upper) << 32 | 427 + (msg_addr_lower & ~aligned_offset); 427 428 ret = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phys, msg_addr, 428 429 epc->mem->page_size); 429 430 if (ret) 430 431 return ret; 431 432 432 - writel(msg_data | (interrupt_num - 1), ep->msi_mem); 433 + writel(msg_data | (interrupt_num - 1), ep->msi_mem + aligned_offset); 433 434 434 435 dw_pcie_ep_unmap_addr(epc, func_no, ep->msi_mem_phys); 435 436 ··· 507 504 pci_epc_mem_exit(epc); 508 505 } 509 506 507 + static unsigned int dw_pcie_ep_find_ext_capability(struct dw_pcie *pci, int cap) 508 + { 509 + u32 header; 510 + int pos = PCI_CFG_SPACE_SIZE; 511 + 512 + while (pos) { 513 + header = dw_pcie_readl_dbi(pci, pos); 514 + if (PCI_EXT_CAP_ID(header) == cap) 515 + return pos; 516 + 
517 + pos = PCI_EXT_CAP_NEXT(header); 518 + if (!pos) 519 + break; 520 + } 521 + 522 + return 0; 523 + } 524 + 510 525 int dw_pcie_ep_init(struct dw_pcie_ep *ep) 511 526 { 527 + int i; 512 528 int ret; 529 + u32 reg; 513 530 void *addr; 531 + unsigned int nbars; 532 + unsigned int offset; 514 533 struct pci_epc *epc; 515 534 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 516 535 struct device *dev = pci->dev; ··· 540 515 541 516 if (!pci->dbi_base || !pci->dbi_base2) { 542 517 dev_err(dev, "dbi_base/dbi_base2 is not populated\n"); 543 - return -EINVAL; 544 - } 545 - if (pci->iatu_unroll_enabled && !pci->atu_base) { 546 - dev_err(dev, "atu_base is not populated\n"); 547 518 return -EINVAL; 548 519 } 549 520 ··· 615 594 ep->msi_cap = dw_pcie_ep_find_capability(pci, PCI_CAP_ID_MSI); 616 595 617 596 ep->msix_cap = dw_pcie_ep_find_capability(pci, PCI_CAP_ID_MSIX); 597 + 598 + offset = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR); 599 + if (offset) { 600 + reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL); 601 + nbars = (reg & PCI_REBAR_CTRL_NBAR_MASK) >> 602 + PCI_REBAR_CTRL_NBAR_SHIFT; 603 + 604 + dw_pcie_dbi_ro_wr_en(pci); 605 + for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL) 606 + dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, 0x0); 607 + dw_pcie_dbi_ro_wr_dis(pci); 608 + } 618 609 619 610 dw_pcie_setup(pci); 620 611
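The new `dw_pcie_ep_find_ext_capability()` above walks the PCIe extended capability list starting at offset 0x100 (`PCI_CFG_SPACE_SIZE`): each 32-bit header packs the capability ID in bits 15:0 and the next-header offset in bits 31:20. A self-contained userspace sketch of the same walk over a fabricated config space:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PCI_CFG_SPACE_SIZE 0x100
/* Mirrors the kernel's PCI_EXT_CAP_ID() / PCI_EXT_CAP_NEXT() macros. */
#define EXT_CAP_ID(h)   ((h) & 0x0000ffffu)
#define EXT_CAP_NEXT(h) (((h) >> 20) & 0xffcu)

static uint32_t cfg_read(const uint8_t *cfg, unsigned int pos)
{
	uint32_t v;

	memcpy(&v, cfg + pos, sizeof(v));
	return v;
}

static unsigned int find_ext_capability(const uint8_t *cfg, uint16_t cap)
{
	unsigned int pos = PCI_CFG_SPACE_SIZE;

	while (pos) {
		uint32_t header = cfg_read(cfg, pos);

		if (EXT_CAP_ID(header) == cap)
			return pos;

		pos = EXT_CAP_NEXT(header);	/* 0 terminates the list */
	}
	return 0;
}
```

The driver uses this to find the Resizable BAR capability (ID 0x15) so it can zero out the REBAR capability registers during endpoint init.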
+29 -66
drivers/pci/controller/dwc/pcie-designware-host.c
··· 126 126 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 127 127 u64 msi_target; 128 128 129 - if (pp->ops->get_msi_addr) 130 - msi_target = pp->ops->get_msi_addr(pp); 131 - else 132 - msi_target = (u64)pp->msi_data; 129 + msi_target = (u64)pp->msi_data; 133 130 134 131 msg->address_lo = lower_32_bits(msi_target); 135 132 msg->address_hi = upper_32_bits(msi_target); 136 133 137 - if (pp->ops->get_msi_data) 138 - msg->data = pp->ops->get_msi_data(pp, d->hwirq); 139 - else 140 - msg->data = d->hwirq; 134 + msg->data = d->hwirq; 141 135 142 136 dev_dbg(pci->dev, "msi#%d address_hi %#x address_lo %#x\n", 143 137 (int)d->hwirq, msg->address_hi, msg->address_lo); ··· 151 157 152 158 raw_spin_lock_irqsave(&pp->lock, flags); 153 159 154 - if (pp->ops->msi_clear_irq) { 155 - pp->ops->msi_clear_irq(pp, d->hwirq); 156 - } else { 157 - ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL; 158 - res = ctrl * MSI_REG_CTRL_BLOCK_SIZE; 159 - bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL; 160 + ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL; 161 + res = ctrl * MSI_REG_CTRL_BLOCK_SIZE; 162 + bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL; 160 163 161 - pp->irq_mask[ctrl] |= BIT(bit); 162 - dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK + res, 4, 163 - pp->irq_mask[ctrl]); 164 - } 164 + pp->irq_mask[ctrl] |= BIT(bit); 165 + dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK + res, 4, 166 + pp->irq_mask[ctrl]); 165 167 166 168 raw_spin_unlock_irqrestore(&pp->lock, flags); 167 169 } ··· 170 180 171 181 raw_spin_lock_irqsave(&pp->lock, flags); 172 182 173 - if (pp->ops->msi_set_irq) { 174 - pp->ops->msi_set_irq(pp, d->hwirq); 175 - } else { 176 - ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL; 177 - res = ctrl * MSI_REG_CTRL_BLOCK_SIZE; 178 - bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL; 183 + ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL; 184 + res = ctrl * MSI_REG_CTRL_BLOCK_SIZE; 185 + bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL; 179 186 180 - pp->irq_mask[ctrl] &= ~BIT(bit); 181 - dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK + res, 4, 182 - 
pp->irq_mask[ctrl]); 183 - } 187 + pp->irq_mask[ctrl] &= ~BIT(bit); 188 + dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK + res, 4, 189 + pp->irq_mask[ctrl]); 184 190 185 191 raw_spin_unlock_irqrestore(&pp->lock, flags); 186 192 } ··· 185 199 { 186 200 struct pcie_port *pp = irq_data_get_irq_chip_data(d); 187 201 unsigned int res, bit, ctrl; 188 - unsigned long flags; 189 202 190 203 ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL; 191 204 res = ctrl * MSI_REG_CTRL_BLOCK_SIZE; 192 205 bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL; 193 206 194 - raw_spin_lock_irqsave(&pp->lock, flags); 195 - 196 207 dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_STATUS + res, 4, BIT(bit)); 197 - 198 - if (pp->ops->msi_irq_ack) 199 - pp->ops->msi_irq_ack(d->hwirq, pp); 200 - 201 - raw_spin_unlock_irqrestore(&pp->lock, flags); 202 208 } 203 209 204 210 static struct irq_chip dw_pci_msi_bottom_irq_chip = { ··· 223 245 224 246 for (i = 0; i < nr_irqs; i++) 225 247 irq_domain_set_info(domain, virq + i, bit + i, 226 - &dw_pci_msi_bottom_irq_chip, 248 + pp->msi_irq_chip, 227 249 pp, handle_edge_irq, 228 250 NULL, NULL); 229 251 ··· 440 462 } 441 463 442 464 if (!pp->ops->msi_host_init) { 465 + pp->msi_irq_chip = &dw_pci_msi_bottom_irq_chip; 466 + 443 467 ret = dw_pcie_allocate_domains(pp); 444 468 if (ret) 445 469 return ret; ··· 612 632 .write = dw_pcie_wr_conf, 613 633 }; 614 634 615 - static u8 dw_pcie_iatu_unroll_enabled(struct dw_pcie *pci) 616 - { 617 - u32 val; 618 - 619 - val = dw_pcie_readl_dbi(pci, PCIE_ATU_VIEWPORT); 620 - if (val == 0xffffffff) 621 - return 1; 622 - 623 - return 0; 624 - } 625 - 626 635 void dw_pcie_setup_rc(struct pcie_port *pp) 627 636 { 628 637 u32 val, ctrl, num_ctrls; ··· 619 650 620 651 dw_pcie_setup(pci); 621 652 622 - num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL; 653 + if (!pp->ops->msi_host_init) { 654 + num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL; 623 655 624 - /* Initialize IRQ Status array */ 625 - for (ctrl = 0; ctrl < num_ctrls; ctrl++) { 626 - 
pp->irq_mask[ctrl] = ~0; 627 - dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK + 628 - (ctrl * MSI_REG_CTRL_BLOCK_SIZE), 629 - 4, pp->irq_mask[ctrl]); 630 - dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + 631 - (ctrl * MSI_REG_CTRL_BLOCK_SIZE), 632 - 4, ~0); 656 + /* Initialize IRQ Status array */ 657 + for (ctrl = 0; ctrl < num_ctrls; ctrl++) { 658 + pp->irq_mask[ctrl] = ~0; 659 + dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK + 660 + (ctrl * MSI_REG_CTRL_BLOCK_SIZE), 661 + 4, pp->irq_mask[ctrl]); 662 + dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + 663 + (ctrl * MSI_REG_CTRL_BLOCK_SIZE), 664 + 4, ~0); 665 + } 633 666 } 634 667 635 668 /* Setup RC BARs */ ··· 665 694 * we should not program the ATU here. 666 695 */ 667 696 if (!pp->ops->rd_other_conf) { 668 - /* Get iATU unroll support */ 669 - pci->iatu_unroll_enabled = dw_pcie_iatu_unroll_enabled(pci); 670 - dev_dbg(pci->dev, "iATU unroll: %s\n", 671 - pci->iatu_unroll_enabled ? "enabled" : "disabled"); 672 - 673 - if (pci->iatu_unroll_enabled && !pci->atu_base) 674 - pci->atu_base = pci->dbi_base + DEFAULT_DBI_ATU_OFFSET; 675 - 676 697 dw_pcie_prog_outbound_atu(pci, PCIE_ATU_REGION_INDEX0, 677 698 PCIE_ATU_TYPE_MEM, pp->mem_base, 678 699 pp->mem_bus_addr, pp->mem_size);
+1 -1
drivers/pci/controller/dwc/pcie-designware-plat.c
··· 106 106 return &dw_plat_pcie_epc_features; 107 107 } 108 108 109 - static struct dw_pcie_ep_ops pcie_ep_ops = { 109 + static const struct dw_pcie_ep_ops pcie_ep_ops = { 110 110 .ep_init = dw_plat_pcie_ep_init, 111 111 .raise_irq = dw_plat_pcie_ep_raise_irq, 112 112 .get_features = dw_plat_pcie_get_features,
+52
drivers/pci/controller/dwc/pcie-designware.c
··· 83 83 dev_err(pci->dev, "Write DBI address failed\n"); 84 84 } 85 85 86 + u32 __dw_pcie_read_dbi2(struct dw_pcie *pci, void __iomem *base, u32 reg, 87 + size_t size) 88 + { 89 + int ret; 90 + u32 val; 91 + 92 + if (pci->ops->read_dbi2) 93 + return pci->ops->read_dbi2(pci, base, reg, size); 94 + 95 + ret = dw_pcie_read(base + reg, size, &val); 96 + if (ret) 97 + dev_err(pci->dev, "read DBI address failed\n"); 98 + 99 + return val; 100 + } 101 + 102 + void __dw_pcie_write_dbi2(struct dw_pcie *pci, void __iomem *base, u32 reg, 103 + size_t size, u32 val) 104 + { 105 + int ret; 106 + 107 + if (pci->ops->write_dbi2) { 108 + pci->ops->write_dbi2(pci, base, reg, size, val); 109 + return; 110 + } 111 + 112 + ret = dw_pcie_write(base + reg, size, val); 113 + if (ret) 114 + dev_err(pci->dev, "write DBI address failed\n"); 115 + } 116 + 86 117 static u32 dw_pcie_readl_ob_unroll(struct dw_pcie *pci, u32 index, u32 reg) 87 118 { 88 119 u32 offset = PCIE_GET_ATU_OUTB_UNR_REG_OFFSET(index); ··· 364 333 (!(val & PCIE_PORT_DEBUG1_LINK_IN_TRAINING))); 365 334 } 366 335 336 + static u8 dw_pcie_iatu_unroll_enabled(struct dw_pcie *pci) 337 + { 338 + u32 val; 339 + 340 + val = dw_pcie_readl_dbi(pci, PCIE_ATU_VIEWPORT); 341 + if (val == 0xffffffff) 342 + return 1; 343 + 344 + return 0; 345 + } 346 + 367 347 void dw_pcie_setup(struct dw_pcie *pci) 368 348 { 369 349 int ret; ··· 382 340 u32 lanes; 383 341 struct device *dev = pci->dev; 384 342 struct device_node *np = dev->of_node; 343 + 344 + if (pci->version >= 0x480A || (!pci->version && 345 + dw_pcie_iatu_unroll_enabled(pci))) { 346 + pci->iatu_unroll_enabled = true; 347 + if (!pci->atu_base) 348 + pci->atu_base = pci->dbi_base + DEFAULT_DBI_ATU_OFFSET; 349 + } 350 + dev_dbg(pci->dev, "iATU unroll: %s\n", pci->iatu_unroll_enabled ? 351 + "enabled" : "disabled"); 352 + 385 353 386 354 ret = of_property_read_u32(np, "num-lanes", &lanes); 387 355 if (ret)
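The hunk above moves iATU unroll detection into `dw_pcie_setup()` and gates it on the new `version` field: cores at or above 4.80A always use unrolled iATU registers, while unversioned cores fall back to probing `PCIE_ATU_VIEWPORT`, which reads back all-ones when the viewport scheme is absent. A sketch of that decision as a pure function:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Mirrors the version check added to dw_pcie_setup(): the viewport
 * probe result only matters when no version was supplied by the
 * platform driver.
 */
static bool iatu_unroll_enabled(unsigned int version, uint32_t viewport_val)
{
	if (version >= 0x480A)
		return true;
	if (!version && viewport_val == 0xffffffff)
		return true;
	return false;
}
```

This is why keystone sets `.version = 0x490A` for AM654 and `0x365A` for the older RC: the AM654 core gets unrolled ATU programming (via the "atu" reg resource) without relying on the viewport probe.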
+13 -8
drivers/pci/controller/dwc/pcie-designware.h
@@ -148,14 +148,9 @@
 	int (*wr_other_conf)(struct pcie_port *pp, struct pci_bus *bus,
 			     unsigned int devfn, int where, int size, u32 val);
 	int (*host_init)(struct pcie_port *pp);
-	void (*msi_set_irq)(struct pcie_port *pp, int irq);
-	void (*msi_clear_irq)(struct pcie_port *pp, int irq);
-	phys_addr_t (*get_msi_addr)(struct pcie_port *pp);
-	u32 (*get_msi_data)(struct pcie_port *pp, int pos);
 	void (*scan_bus)(struct pcie_port *pp);
 	void (*set_num_vectors)(struct pcie_port *pp);
 	int (*msi_host_init)(struct pcie_port *pp);
-	void (*msi_irq_ack)(int irq, struct pcie_port *pp);
 };
 
 struct pcie_port {
@@ -178,6 +183,7 @@
 	struct irq_domain	*msi_domain;
 	dma_addr_t		msi_data;
 	struct page		*msi_page;
+	struct irq_chip		*msi_irq_chip;
 	u32			num_vectors;
 	u32			irq_mask[MAX_MSI_CTRLS];
 	struct pci_bus		*root_bus;
@@ -201,7 +205,7 @@
 
 struct dw_pcie_ep {
 	struct pci_epc		*epc;
-	struct dw_pcie_ep_ops	*ops;
+	const struct dw_pcie_ep_ops *ops;
 	phys_addr_t		phys_base;
 	size_t			addr_size;
 	size_t			page_size;
@@ -223,6 +227,10 @@
 			 size_t size);
 	void	(*write_dbi)(struct dw_pcie *pcie, void __iomem *base, u32 reg,
 			     size_t size, u32 val);
+	u32	(*read_dbi2)(struct dw_pcie *pcie, void __iomem *base, u32 reg,
+			     size_t size);
+	void	(*write_dbi2)(struct dw_pcie *pcie, void __iomem *base, u32 reg,
+			      size_t size, u32 val);
 	int	(*link_up)(struct dw_pcie *pcie);
 	int	(*start_link)(struct dw_pcie *pcie);
 	void	(*stop_link)(struct dw_pcie *pcie);
@@ -243,6 +243,7 @@
 	struct pcie_port	pp;
 	struct dw_pcie_ep	ep;
 	const struct dw_pcie_ops *ops;
+	unsigned int		version;
 };
 
 #define to_dw_pcie_from_pp(port) container_of((port), struct dw_pcie, pp)
@@ -258,6 +257,10 @@
 			size_t size);
 void __dw_pcie_write_dbi(struct dw_pcie *pci, void __iomem *base, u32 reg,
			 size_t size, u32 val);
+u32 __dw_pcie_read_dbi2(struct dw_pcie *pci, void __iomem *base, u32 reg,
+			size_t size);
+void __dw_pcie_write_dbi2(struct dw_pcie *pci, void __iomem *base, u32 reg,
+			  size_t size, u32 val);
 int dw_pcie_link_up(struct dw_pcie *pci);
 int dw_pcie_wait_for_link(struct dw_pcie *pci);
 void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index,
@@ -305,12 +300,12 @@
 
 static inline void dw_pcie_writel_dbi2(struct dw_pcie *pci, u32 reg, u32 val)
 {
-	__dw_pcie_write_dbi(pci, pci->dbi_base2, reg, 0x4, val);
+	__dw_pcie_write_dbi2(pci, pci->dbi_base2, reg, 0x4, val);
 }
 
 static inline u32 dw_pcie_readl_dbi2(struct dw_pcie *pci, u32 reg)
 {
-	return __dw_pcie_read_dbi(pci, pci->dbi_base2, reg, 0x4);
+	return __dw_pcie_read_dbi2(pci, pci->dbi_base2, reg, 0x4);
 }
 
 static inline void dw_pcie_writel_atu(struct dw_pcie *pci, u32 reg, u32 val)
drivers/pci/endpoint/functions/pci-epf-test.c (+3 -2)
@@ -438,7 +438,7 @@
 	epc_features = epf_test->epc_features;
 
 	base = pci_epf_alloc_space(epf, sizeof(struct pci_epf_test_reg),
-				   test_reg_bar);
+				   test_reg_bar, epc_features->align);
 	if (!base) {
 		dev_err(dev, "Failed to allocated register space\n");
 		return -ENOMEM;
@@ -453,7 +453,8 @@
 		if (!!(epc_features->reserved_bar & (1 << bar)))
 			continue;
 
-		base = pci_epf_alloc_space(epf, bar_size[bar], bar);
+		base = pci_epf_alloc_space(epf, bar_size[bar], bar,
+					   epc_features->align);
 		if (!base)
 			dev_err(dev, "Failed to allocate space for BAR%d\n",
 				bar);
drivers/pci/endpoint/pci-epf-core.c (+8 -2)
@@ -109,10 +109,12 @@
  * pci_epf_alloc_space() - allocate memory for the PCI EPF register space
  * @size: the size of the memory that has to be allocated
  * @bar: the BAR number corresponding to the allocated register space
+ * @align: alignment size for the allocation region
  *
  * Invoke to allocate memory for the PCI EPF register space.
  */
-void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar)
+void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
+			  size_t align)
 {
 	void *space;
 	struct device *dev = epf->epc->dev.parent;
@@ -122,7 +120,11 @@
 
 	if (size < 128)
 		size = 128;
-	size = roundup_pow_of_two(size);
+
+	if (align)
+		size = ALIGN(size, align);
+	else
+		size = roundup_pow_of_two(size);
 
 	space = dma_alloc_coherent(dev, size, &phys_addr, GFP_KERNEL);
 	if (!space) {
drivers/pci/of.c (+23 -21)
@@ -15,6 +15,7 @@
 #include <linux/of_pci.h>
 #include "pci.h"
 
+#ifdef CONFIG_PCI
 void pci_set_of_node(struct pci_dev *dev)
 {
 	if (!dev->bus->dev.of_node)
@@ -202,27 +201,6 @@
 	return (u16)domain;
 }
 EXPORT_SYMBOL_GPL(of_get_pci_domain_nr);
-
-/**
- * This function will try to find the limitation of link speed by finding
- * a property called "max-link-speed" of the given device node.
- *
- * @node: device tree node with the max link speed information
- *
- * Returns the associated max link speed from DT, or a negative value if the
- * required property is not found or is invalid.
- */
-int of_pci_get_max_link_speed(struct device_node *node)
-{
-	u32 max_link_speed;
-
-	if (of_property_read_u32(node, "max-link-speed", &max_link_speed) ||
-	    max_link_speed > 4)
-		return -EINVAL;
-
-	return max_link_speed;
-}
-EXPORT_SYMBOL_GPL(of_pci_get_max_link_speed);
 
 /**
  * of_pci_check_probe_only - Setup probe only mode if linux,pci-probe-only
@@ -523,3 +543,25 @@
 	return err;
 }
 
+#endif /* CONFIG_PCI */
+
+/**
+ * This function will try to find the limitation of link speed by finding
+ * a property called "max-link-speed" of the given device node.
+ *
+ * @node: device tree node with the max link speed information
+ *
+ * Returns the associated max link speed from DT, or a negative value if the
+ * required property is not found or is invalid.
+ */
+int of_pci_get_max_link_speed(struct device_node *node)
+{
+	u32 max_link_speed;
+
+	if (of_property_read_u32(node, "max-link-speed", &max_link_speed) ||
+	    max_link_speed > 4)
+		return -EINVAL;
+
+	return max_link_speed;
+}
+EXPORT_SYMBOL_GPL(of_pci_get_max_link_speed);
include/linux/pci-epc.h (+2)
@@ -109,6 +109,7 @@
  * @reserved_bar: bitmap to indicate reserved BAR unavailable to function driver
  * @bar_fixed_64bit: bitmap to indicate fixed 64bit BARs
  * @bar_fixed_size: Array specifying the size supported by each BAR
+ * @align: alignment size required for BAR buffer allocation
  */
 struct pci_epc_features {
 	unsigned int	linkup_notifier : 1;
@@ -118,6 +117,7 @@
 	u8	reserved_bar;
 	u8	bar_fixed_64bit;
 	u64	bar_fixed_size[BAR_5 + 1];
+	size_t	align;
 };
 
 #define to_pci_epc(device) container_of((device), struct pci_epc, dev)
include/linux/pci-epf.h (+2 -1)
@@ -149,7 +149,8 @@
 int __pci_epf_register_driver(struct pci_epf_driver *driver,
 			      struct module *owner);
 void pci_epf_unregister_driver(struct pci_epf_driver *driver);
-void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar);
+void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
+			  size_t align);
 void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar);
 int pci_epf_bind(struct pci_epf *epf);
 void pci_epf_unbind(struct pci_epf *epf);