
Merge tag 'pci-v4.5-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
"PCI changes for the v4.5 merge window:

Enumeration:
- Simplify config space size computation (Bjorn Helgaas)
- Avoid iterating through ROM outside the resource window (Edward O'Callaghan)
- Support PCIe devices with short cfg_size (Jason S. McMullan)
- Add Netronome vendor and device IDs (Jason S. McMullan)
- Limit config space size for Netronome NFP6000 family (Jason S. McMullan)
- Add Netronome NFP4000 PF device ID (Simon Horman)
- Limit config space size for Netronome NFP4000 (Simon Horman)
- Print warnings for all invalid expansion ROM headers (Vladis Dronov)

Resource management:
- Fix minimum allocation address overwrite (Christoph Biedl)

PCI device hotplug:
- acpiphp_ibm: Fix null dereferences on null ibm_slot (Colin Ian King)
- pciehp: Always protect pciehp_disable_slot() with hotplug mutex (Guenter Roeck)
- shpchp: Constify hpc_ops structure (Julia Lawall)
- ibmphp: Remove unneeded NULL test (Julia Lawall)

Power management:
- Make ASPM sysfs link_state_store() consistent with link_state_show() (Andy Lutomirski)

Virtualization:
- Add function 1 DMA alias quirk for Lite-On/Plextor M6e/Marvell 88SS9183 (Tim Sander)

MSI:
- Remove empty pci_msi_init_pci_dev() (Bjorn Helgaas)
- Mark PCIe/PCI (MSI) IRQ cascade handlers as IRQF_NO_THREAD (Grygorii Strashko)
- Initialize MSI capability for all architectures (Guilherme G. Piccoli)
- Relax msi_domain_alloc() to support parentless MSI irqdomains (Liu Jiang)

ARM Versatile host bridge driver:
- Remove unused pci_sys_data structures (Lorenzo Pieralisi)

Broadcom iProc host bridge driver:
- Hide CONFIG_PCIE_IPROC (Arnd Bergmann)
- Do not use 0x in front of %pap (Dmitry V. Krivenok)
- Update iProc PCIe device tree binding (Ray Jui)
- Add PAXC interface support (Ray Jui)
- Add iProc PCIe MSI device tree binding (Ray Jui)
- Add iProc PCIe MSI support (Ray Jui)

Freescale i.MX6 host bridge driver:
- Use gpio_set_value_cansleep() (Fabio Estevam)
- Add support for active-low reset GPIO (Petr Štetiar)

HiSilicon host bridge driver:
- Add support for HiSilicon Hip06 PCIe host controllers (Gabriele Paoloni)

Intel VMD host bridge driver:
- Export irq_domain_set_info() for module use (Keith Busch)
- x86/PCI: Allow DMA ops specific to a PCI domain (Keith Busch)
- Use 32 bit PCI domain numbers (Keith Busch)
- Add driver for Intel Volume Management Device (VMD) (Keith Busch)

Qualcomm host bridge driver:
- Document PCIe devicetree bindings (Stanimir Varbanov)
- Add Qualcomm PCIe controller driver (Stanimir Varbanov)
- dts: apq8064: add PCIe devicetree node (Stanimir Varbanov)
- dts: ifc6410: enable PCIe DT node for this board (Stanimir Varbanov)

Renesas R-Car host bridge driver:
- Add support for R-Car H3 to pcie-rcar (Harunobu Kurokawa)
- Allow DT to override default window settings (Phil Edworthy)
- Convert to DT resource parsing API (Phil Edworthy)
- Revert "PCI: rcar: Build pcie-rcar.c only on ARM" (Phil Edworthy)
- Remove unused pci_sys_data struct from pcie-rcar (Phil Edworthy)
- Add runtime PM support to pcie-rcar (Phil Edworthy)
- Add Gen2 PHY setup to pcie-rcar (Phil Edworthy)
- Add gen2 fallback compatibility string for pci-rcar-gen2 (Simon Horman)
- Add gen2 fallback compatibility string for pcie-rcar (Simon Horman)

Synopsys DesignWare host bridge driver:
- Simplify control flow (Bjorn Helgaas)
- Make config accessor override checking symmetric (Bjorn Helgaas)
- Ensure ATU is enabled before IO/conf space accesses (Stanimir Varbanov)

Miscellaneous:
- Add of_pci_get_host_bridge_resources() stub (Arnd Bergmann)
- Check for PCI_HEADER_TYPE_BRIDGE equality, not bitmask (Bjorn Helgaas)
- Fix all whitespace issues (Bogicevic Sasa)
- x86/PCI: Simplify pci_bios_{read,write} (Geliang Tang)
- Use to_pci_dev() instead of open-coding it (Geliang Tang)
- Use kobj_to_dev() instead of open-coding it (Geliang Tang)
- Use list_for_each_entry() to simplify code (Geliang Tang)
- Fix typos in <linux/msi.h> (Thomas Petazzoni)
- x86/PCI: Clarify AMD Fam10h config access restrictions comment (Tomasz Nowicki)"

* tag 'pci-v4.5-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (58 commits)
PCI: Add function 1 DMA alias quirk for Lite-On/Plextor M6e/Marvell 88SS9183
PCI: Limit config space size for Netronome NFP4000
PCI: Add Netronome NFP4000 PF device ID
x86/PCI: Add driver for Intel Volume Management Device (VMD)
PCI/AER: Use 32 bit PCI domain numbers
x86/PCI: Allow DMA ops specific to a PCI domain
irqdomain: Export irq_domain_set_info() for module use
PCI: host: Add of_pci_get_host_bridge_resources() stub
genirq/MSI: Relax msi_domain_alloc() to support parentless MSI irqdomains
PCI: rcar: Add Gen2 PHY setup to pcie-rcar
PCI: rcar: Add runtime PM support to pcie-rcar
PCI: designware: Make config accessor override checking symmetric
PCI: ibmphp: Remove unneeded NULL test
ARM: dts: ifc6410: enable PCIe DT node for this board
ARM: dts: apq8064: add PCIe devicetree node
PCI: hotplug: Use list_for_each_entry() to simplify code
PCI: rcar: Remove unused pci_sys_data struct from pcie-rcar
PCI: hisi: Add support for HiSilicon Hip06 PCIe host controllers
PCI: Avoid iterating through memory outside the resource window
PCI: acpiphp_ibm: Fix null dereferences on null ibm_slot
...

+4682 -1896
+39 -1
Documentation/devicetree/bindings/pci/brcm,iproc-pcie.txt
···
  * Broadcom iProc PCIe controller with the platform bus interface
 
  Required properties:
- - compatible: Must be "brcm,iproc-pcie"
+ - compatible: Must be "brcm,iproc-pcie" for PAXB, or "brcm,iproc-pcie-paxc"
+   for PAXC.  PAXB-based root complex is used for external endpoint devices.
+   PAXC-based root complex is connected to emulated endpoint devices
+   internal to the ASIC
  - reg: base address and length of the PCIe controller I/O register space
  - #interrupt-cells: set to <1>
  - interrupt-map-mask and interrupt-map, standard PCI properties to define the
···
  - brcm,pcie-ob-oarr-size: Some iProc SoCs need the OARR size bit to be set to
    increase the outbound window size
 
+ MSI support (optional):
+
+ For older platforms without MSI integrated in the GIC, iProc PCIe core provides
+ an event queue based MSI support.  The iProc MSI uses host memories to store
+ MSI posted writes in the event queues
+
+ - msi-parent: Link to the device node of the MSI controller.  On newer iProc
+   platforms, the MSI controller may be gicv2m or gicv3-its.  On older iProc
+   platforms without MSI support in its interrupt controller, one may use the
+   event queue based MSI support integrated within the iProc PCIe core.
+
+ When the iProc event queue based MSI is used, one needs to define the
+ following properties in the MSI device node:
+ - compatible: Must be "brcm,iproc-msi"
+ - msi-controller: claims itself as an MSI controller
+ - interrupt-parent: Link to its parent interrupt device
+ - interrupts: List of interrupt IDs from its parent interrupt device
+
+ Optional properties:
+ - brcm,pcie-msi-inten: Needs to be present for some older iProc platforms that
+   require the interrupt enable registers to be set explicitly to enable MSI
+
  Example:
 	pcie0: pcie@18012000 {
 		compatible = "brcm,iproc-pcie";
···
 		brcm,pcie-ob-oarr-size;
 		brcm,pcie-ob-axi-offset = <0x00000000>;
 		brcm,pcie-ob-window-size = <256>;
+
+		msi-parent = <&msi0>;
+
+		/* iProc event queue based MSI */
+		msi0: msi@18012000 {
+			compatible = "brcm,iproc-msi";
+			msi-controller;
+			interrupt-parent = <&gic>;
+			interrupts = <GIC_SPI 96 IRQ_TYPE_NONE>,
+				     <GIC_SPI 97 IRQ_TYPE_NONE>,
+				     <GIC_SPI 98 IRQ_TYPE_NONE>,
+				     <GIC_SPI 99 IRQ_TYPE_NONE>,
+		};
 	};
 
 	pcie1: pcie@18013000 {
+4 -4
Documentation/devicetree/bindings/pci/hisilicon-pcie.txt
···
- HiSilicon PCIe host bridge DT description
+ HiSilicon Hip05 and Hip06 PCIe host bridge DT description
 
  HiSilicon PCIe host controller is based on Designware PCI core.
  It shares common functions with PCIe Designware core driver and inherits
···
  Additional properties are described here:
 
- Required properties:
- - compatible: Should contain "hisilicon,hip05-pcie".
+ Required properties
+ - compatible: Should contain "hisilicon,hip05-pcie" or "hisilicon,hip06-pcie".
  - reg: Should contain rc_dbi, config registers location and length.
  - reg-names: Must include the following entries:
    "rc_dbi": controller configuration registers;
···
  - status: Either "ok" or "disabled".
  - dma-coherent: Present if DMA operations are coherent.
 
- Example:
+ Hip05 Example (note that Hip06 is the same except compatible):
 	pcie@0xb0080000 {
 		compatible = "hisilicon,hip05-pcie", "snps,dw-pcie";
 		reg = <0 0xb0080000 0 0x10000>, <0x220 0x00000000 0 0x2000>;
+15 -2
Documentation/devicetree/bindings/pci/pci-rcar-gen2.txt
···
  Required properties:
  - compatible: "renesas,pci-r8a7790" for the R8A7790 SoC;
 	       "renesas,pci-r8a7791" for the R8A7791 SoC;
- 	       "renesas,pci-r8a7794" for the R8A7794 SoC.
+ 	       "renesas,pci-r8a7794" for the R8A7794 SoC;
+ 	       "renesas,pci-rcar-gen2" for a generic R-Car Gen2 compatible device
+
+
+   When compatible with the generic version, nodes must list the
+   SoC-specific version corresponding to the platform first
+   followed by the generic version.
+
  - reg: A list of physical regions to access the device: the first is
    the operational registers for the OHCI/EHCI controllers and the
    second is for the bridge configuration and control registers.
···
  - interrupt-map-mask: standard property that helps to define the interrupt
    mapping.
 
+ Optional properties:
+ - dma-ranges: a single range for the inbound memory region. If not supplied,
+   defaults to 1GiB at 0x40000000. Note there are hardware restrictions on the
+   allowed combinations of address and size.
+
  Example SoC configuration:
 
 	pci0: pci@ee090000 {
- 		compatible = "renesas,pci-r8a7790";
+ 		compatible = "renesas,pci-r8a7790", "renesas,pci-rcar-gen2";
 		clocks = <&mstp7_clks R8A7790_CLK_EHCI>;
 		reg = <0x0 0xee090000 0x0 0xc00>,
 		      <0x0 0xee080000 0x0 0x1100>;
···
 		#address-cells = <3>;
 		#size-cells = <2>;
 		#interrupt-cells = <1>;
+		dma-ranges = <0x42000000 0 0x40000000 0 0x40000000 0 0x40000000>;
 		interrupt-map-mask = <0xff00 0 0 0x7>;
 		interrupt-map = <0x0000 0 0 1 &gic 0 108 IRQ_TYPE_LEVEL_HIGH
 				 0x0800 0 0 1 &gic 0 108 IRQ_TYPE_LEVEL_HIGH
+233
Documentation/devicetree/bindings/pci/qcom,pcie.txt
* Qualcomm PCI express root complex

- compatible:
	Usage: required
	Value type: <stringlist>
	Definition: Value should contain
			- "qcom,pcie-ipq8064" for ipq8064
			- "qcom,pcie-apq8064" for apq8064
			- "qcom,pcie-apq8084" for apq8084

- reg:
	Usage: required
	Value type: <prop-encoded-array>
	Definition: Register ranges as listed in the reg-names property

- reg-names:
	Usage: required
	Value type: <stringlist>
	Definition: Must include the following entries
			- "parf"   Qualcomm specific registers
			- "dbi"    Designware PCIe registers
			- "elbi"   External local bus interface registers
			- "config" PCIe configuration space

- device_type:
	Usage: required
	Value type: <string>
	Definition: Should be "pci". As specified in designware-pcie.txt

- #address-cells:
	Usage: required
	Value type: <u32>
	Definition: Should be 3. As specified in designware-pcie.txt

- #size-cells:
	Usage: required
	Value type: <u32>
	Definition: Should be 2. As specified in designware-pcie.txt

- ranges:
	Usage: required
	Value type: <prop-encoded-array>
	Definition: As specified in designware-pcie.txt

- interrupts:
	Usage: required
	Value type: <prop-encoded-array>
	Definition: MSI interrupt

- interrupt-names:
	Usage: required
	Value type: <stringlist>
	Definition: Should contain "msi"

- #interrupt-cells:
	Usage: required
	Value type: <u32>
	Definition: Should be 1. As specified in designware-pcie.txt

- interrupt-map-mask:
	Usage: required
	Value type: <prop-encoded-array>
	Definition: As specified in designware-pcie.txt

- interrupt-map:
	Usage: required
	Value type: <prop-encoded-array>
	Definition: As specified in designware-pcie.txt

- clocks:
	Usage: required
	Value type: <prop-encoded-array>
	Definition: List of phandle and clock specifier pairs as listed
		    in clock-names property

- clock-names:
	Usage: required
	Value type: <stringlist>
	Definition: Should contain the following entries
			- "iface"	Configuration AHB clock

- clock-names:
	Usage: required for ipq/apq8064
	Value type: <stringlist>
	Definition: Should contain the following entries
			- "core"	Clocks the pcie hw block
			- "phy"		Clocks the pcie PHY block

- clock-names:
	Usage: required for apq8084
	Value type: <stringlist>
	Definition: Should contain the following entries
			- "aux"		Auxiliary (AUX) clock
			- "bus_master"	Master AXI clock
			- "bus_slave"	Slave AXI clock

- resets:
	Usage: required
	Value type: <prop-encoded-array>
	Definition: List of phandle and reset specifier pairs as listed
		    in reset-names property

- reset-names:
	Usage: required for ipq/apq8064
	Value type: <stringlist>
	Definition: Should contain the following entries
			- "axi"	AXI reset
			- "ahb"	AHB reset
			- "por"	POR reset
			- "pci"	PCI reset
			- "phy"	PHY reset

- reset-names:
	Usage: required for apq8084
	Value type: <stringlist>
	Definition: Should contain the following entries
			- "core" Core reset

- power-domains:
	Usage: required for apq8084
	Value type: <prop-encoded-array>
	Definition: A phandle and power domain specifier pair to the
		    power domain which is responsible for collapsing
		    and restoring power to the peripheral

- vdda-supply:
	Usage: required
	Value type: <phandle>
	Definition: A phandle to the core analog power supply

- vdda_phy-supply:
	Usage: required for ipq/apq8064
	Value type: <phandle>
	Definition: A phandle to the analog power supply for PHY

- vdda_refclk-supply:
	Usage: required for ipq/apq8064
	Value type: <phandle>
	Definition: A phandle to the analog power supply for IC which generates
		    reference clock

- phys:
	Usage: required for apq8084
	Value type: <phandle>
	Definition: List of phandle(s) as listed in phy-names property

- phy-names:
	Usage: required for apq8084
	Value type: <stringlist>
	Definition: Should contain "pciephy"

- <name>-gpios:
	Usage: optional
	Value type: <prop-encoded-array>
	Definition: List of phandle and gpio specifier pairs. Should contain
			- "perst-gpios"	PCIe endpoint reset signal line
			- "wake-gpios"	PCIe endpoint wake signal line

* Example for ipq/apq8064
	pcie@1b500000 {
		compatible = "qcom,pcie-apq8064", "qcom,pcie-ipq8064", "snps,dw-pcie";
		reg = <0x1b500000 0x1000
		       0x1b502000 0x80
		       0x1b600000 0x100
		       0x0ff00000 0x100000>;
		reg-names = "dbi", "elbi", "parf", "config";
		device_type = "pci";
		linux,pci-domain = <0>;
		bus-range = <0x00 0xff>;
		num-lanes = <1>;
		#address-cells = <3>;
		#size-cells = <2>;
		ranges = <0x81000000 0 0 0x0fe00000 0 0x00100000   /* I/O */
			  0x82000000 0 0 0x08000000 0 0x07e00000>; /* memory */
		interrupts = <GIC_SPI 238 IRQ_TYPE_NONE>;
		interrupt-names = "msi";
		#interrupt-cells = <1>;
		interrupt-map-mask = <0 0 0 0x7>;
		interrupt-map = <0 0 0 1 &intc 0 36 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
				<0 0 0 2 &intc 0 37 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
				<0 0 0 3 &intc 0 38 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
				<0 0 0 4 &intc 0 39 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
		clocks = <&gcc PCIE_A_CLK>,
			 <&gcc PCIE_H_CLK>,
			 <&gcc PCIE_PHY_CLK>;
		clock-names = "core", "iface", "phy";
		resets = <&gcc PCIE_ACLK_RESET>,
			 <&gcc PCIE_HCLK_RESET>,
			 <&gcc PCIE_POR_RESET>,
			 <&gcc PCIE_PCI_RESET>,
			 <&gcc PCIE_PHY_RESET>;
		reset-names = "axi", "ahb", "por", "pci", "phy";
		pinctrl-0 = <&pcie_pins_default>;
		pinctrl-names = "default";
	};

* Example for apq8084
	pcie0@fc520000 {
		compatible = "qcom,pcie-apq8084", "snps,dw-pcie";
		reg = <0xfc520000 0x2000>,
		      <0xff000000 0x1000>,
		      <0xff001000 0x1000>,
		      <0xff002000 0x2000>;
		reg-names = "parf", "dbi", "elbi", "config";
		device_type = "pci";
		linux,pci-domain = <0>;
		bus-range = <0x00 0xff>;
		num-lanes = <1>;
		#address-cells = <3>;
		#size-cells = <2>;
		ranges = <0x81000000 0 0          0xff200000 0 0x00100000   /* I/O */
			  0x82000000 0 0x00300000 0xff300000 0 0x00d00000>; /* memory */
		interrupts = <GIC_SPI 243 IRQ_TYPE_NONE>;
		interrupt-names = "msi";
		#interrupt-cells = <1>;
		interrupt-map-mask = <0 0 0 0x7>;
		interrupt-map = <0 0 0 1 &intc 0 244 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
				<0 0 0 2 &intc 0 245 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
				<0 0 0 3 &intc 0 247 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
				<0 0 0 4 &intc 0 248 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
		clocks = <&gcc GCC_PCIE_0_CFG_AHB_CLK>,
			 <&gcc GCC_PCIE_0_MSTR_AXI_CLK>,
			 <&gcc GCC_PCIE_0_SLV_AXI_CLK>,
			 <&gcc GCC_PCIE_0_AUX_CLK>;
		clock-names = "iface", "master_bus", "slave_bus", "aux";
		resets = <&gcc GCC_PCIE_0_BCR>;
		reset-names = "core";
		power-domains = <&gcc PCIE0_GDSC>;
		vdda-supply = <&pma8084_l3>;
		phys = <&pciephy0>;
		phy-names = "pciephy";
		perst-gpio = <&tlmm 70 GPIO_ACTIVE_LOW>;
		pinctrl-0 = <&pcie0_pins_default>;
		pinctrl-names = "default";
	};
+11 -3
Documentation/devicetree/bindings/pci/rcar-pci.txt
···
  * Renesas RCar PCIe interface
 
  Required properties:
- - compatible: should contain one of the following
- 	"renesas,pcie-r8a7779", "renesas,pcie-r8a7790", "renesas,pcie-r8a7791"
+ compatible: "renesas,pcie-r8a7779" for the R8A7779 SoC;
+ 	    "renesas,pcie-r8a7790" for the R8A7790 SoC;
+ 	    "renesas,pcie-r8a7791" for the R8A7791 SoC;
+ 	    "renesas,pcie-r8a7795" for the R8A7795 SoC;
+ 	    "renesas,pcie-rcar-gen2" for a generic R-Car Gen2 compatible device.
+
+   When compatible with the generic version, nodes must list the
+   SoC-specific version corresponding to the platform first
+   followed by the generic version.
+
  - reg: base address and length of the pcie controller registers.
  - #address-cells: set to <3>
  - #size-cells: set to <2>
···
  SoC specific DT Entry:
 
 	pcie: pcie@fe000000 {
- 		compatible = "renesas,pcie-r8a7791";
+ 		compatible = "renesas,pcie-r8a7791", "renesas,pcie-rcar-gen2";
 		reg = <0 0xfe000000 0 0x80000>;
 		#address-cells = <3>;
 		#size-cells = <2>;
+14
MAINTAINERS
···
  F:	Documentation/devicetree/bindings/pci/host-generic-pci.txt
  F:	drivers/pci/host/pci-host-generic.c
 
+ PCI DRIVER FOR INTEL VOLUME MANAGEMENT DEVICE (VMD)
+ M:	Keith Busch <keith.busch@intel.com>
+ L:	linux-pci@vger.kernel.org
+ S:	Supported
+ F:	arch/x86/pci/vmd.c
+
  PCIE DRIVER FOR ST SPEAR13XX
  M:	Pratyush Anand <pratyush.anand@gmail.com>
  L:	linux-pci@vger.kernel.org
···
 
  PCIE DRIVER FOR HISILICON
  M:	Zhou Wang <wangzhou1@hisilicon.com>
+ M:	Gabriele Paoloni <gabriele.paoloni@huawei.com>
  L:	linux-pci@vger.kernel.org
  S:	Maintained
  F:	Documentation/devicetree/bindings/pci/hisilicon-pcie.txt
  F:	drivers/pci/host/pcie-hisi.c
+
+ PCIE DRIVER FOR QUALCOMM MSM
+ M:	Stanimir Varbanov <svarbanov@mm-sol.com>
+ L:	linux-pci@vger.kernel.org
+ L:	linux-arm-msm@vger.kernel.org
+ S:	Maintained
+ F:	drivers/pci/host/*qcom*
 
  PCMCIA SUBSYSTEM
  P:	Linux PCMCIA Team
+26
arch/arm/boot/dts/qcom-apq8064-ifc6410.dts
···
 			bias-disable;
 		};
 	};
+
+	pcie_pins: pcie_pinmux {
+		mux {
+			pins = "gpio27";
+			function = "gpio";
+		};
+		conf {
+			pins = "gpio27";
+			drive-strength = <12>;
+			bias-disable;
+		};
+	};
 };
 
 rpm@108000 {
···
 		};
 
 		lvs1 {
+			bias-pull-down;
+		};
+
+		lvs6 {
 			bias-pull-down;
 		};
 	};
···
 
 	usb4: usb@12530000 {
 		status = "okay";
+	};
+
+	pci@1b500000 {
+		status = "ok";
+		vdda-supply = <&pm8921_s3>;
+		vdda_phy-supply = <&pm8921_lvs6>;
+		vdda_refclk-supply = <&ext_3p3v>;
+		pinctrl-0 = <&pcie_pins>;
+		pinctrl-names = "default";
+		perst-gpio = <&tlmm_pinmux 27 GPIO_ACTIVE_LOW>;
 	};
 
 	qcom,ssbi@500000 {
+36
arch/arm/boot/dts/qcom-apq8064.dtsi
···
 		compatible = "qcom,tcsr-apq8064", "syscon";
 		reg = <0x1a400000 0x100>;
 	};
+
+	pcie: pci@1b500000 {
+		compatible = "qcom,pcie-apq8064", "snps,dw-pcie";
+		reg = <0x1b500000 0x1000
+		       0x1b502000 0x80
+		       0x1b600000 0x100
+		       0x0ff00000 0x100000>;
+		reg-names = "dbi", "elbi", "parf", "config";
+		device_type = "pci";
+		linux,pci-domain = <0>;
+		bus-range = <0x00 0xff>;
+		num-lanes = <1>;
+		#address-cells = <3>;
+		#size-cells = <2>;
+		ranges = <0x81000000 0 0 0x0fe00000 0 0x00100000   /* I/O */
+			  0x82000000 0 0 0x08000000 0 0x07e00000>; /* memory */
+		interrupts = <GIC_SPI 238 IRQ_TYPE_NONE>;
+		interrupt-names = "msi";
+		#interrupt-cells = <1>;
+		interrupt-map-mask = <0 0 0 0x7>;
+		interrupt-map = <0 0 0 1 &intc 0 36 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
+				<0 0 0 2 &intc 0 37 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
+				<0 0 0 3 &intc 0 38 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
+				<0 0 0 4 &intc 0 39 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
+		clocks = <&gcc PCIE_A_CLK>,
+			 <&gcc PCIE_H_CLK>,
+			 <&gcc PCIE_PHY_REF_CLK>;
+		clock-names = "core", "iface", "phy";
+		resets = <&gcc PCIE_ACLK_RESET>,
+			 <&gcc PCIE_HCLK_RESET>,
+			 <&gcc PCIE_POR_RESET>,
+			 <&gcc PCIE_PCI_RESET>,
+			 <&gcc PCIE_PHY_RESET>;
+		reset-names = "axi", "ahb", "por", "pci", "phy";
+		status = "disabled";
+	};
 };
 };
+1 -1
arch/powerpc/kernel/eeh_driver.c
···
 	 * support EEH. So we just care about PCI devices for
 	 * simplicity here.
 	 */
-	if (!dev || (dev->hdr_type & PCI_HEADER_TYPE_BRIDGE))
+	if (!dev || (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE))
 		return NULL;
 
 	/*
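The eeh_driver.c one-liner above is a semantic fix, not a cleanup: PCI_HEADER_TYPE_BRIDGE is the value 1, so the bitwise AND happens to give the right answer for the valid header types 0-2, but it also "matches" any malformed hdr_type with bit 0 set, while equality identifies exactly a PCI-to-PCI bridge header. A minimal illustration (the helper names are ours, not kernel API):

```c
#include <assert.h>

#define PCI_HEADER_TYPE_NORMAL	0
#define PCI_HEADER_TYPE_BRIDGE	1
#define PCI_HEADER_TYPE_CARDBUS	2

/* Old test: "is bit 0 set?" -- also true for bogus values such as 0x7f. */
static int is_bridge_bitmask(unsigned char hdr_type)
{
	return (hdr_type & PCI_HEADER_TYPE_BRIDGE) != 0;
}

/* New test: exactly the bridge header type, nothing else. */
static int is_bridge_equality(unsigned char hdr_type)
{
	return hdr_type == PCI_HEADER_TYPE_BRIDGE;
}
```

For the in-range types the two agree, so the change is behavior-preserving on well-formed hardware; it only stops misclassifying garbage values.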
-3
arch/powerpc/kernel/pci_of_scan.c
···
 
 	pci_device_add(dev, bus);
 
-	/* Setup MSI caps & disable MSI/MSI-X interrupts */
-	pci_msi_setup_pci_dev(dev);
-
 	return dev;
 }
 EXPORT_SYMBOL(of_create_pci_dev);
+13
arch/x86/Kconfig
···
 	def_bool y
 	depends on PCI
 
+config VMD
+	depends on PCI_MSI
+	tristate "Volume Management Device Driver"
+	default N
+	---help---
+	  Adds support for the Intel Volume Management Device (VMD).  VMD is a
+	  secondary PCI host bridge that allows PCI Express root ports,
+	  and devices attached to them, to be removed from the default
+	  PCI domain and placed within the VMD domain.  This provides
+	  more bus resources than are otherwise possible with a
+	  single domain.  If you know your system provides one of these and
+	  has devices attached to it, say Y; if you are not sure, say N.
+
 source "net/Kconfig"
 
 source "drivers/Kconfig"
+10
arch/x86/include/asm/device.h
···
 #endif
 };
 
+#if defined(CONFIG_X86_DEV_DMA_OPS) && defined(CONFIG_PCI_DOMAINS)
+struct dma_domain {
+	struct list_head node;
+	struct dma_map_ops *dma_ops;
+	int domain_nr;
+};
+void add_dma_domain(struct dma_domain *domain);
+void del_dma_domain(struct dma_domain *domain);
+#endif
+
 struct pdev_archdata {
 };
+5
arch/x86/include/asm/hw_irq.h
···
 		char *uv_name;
 	};
 #endif
+#if IS_ENABLED(CONFIG_VMD)
+	struct {
+		struct msi_desc *desc;
+	};
+#endif
 	};
 };
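The hw_irq.h hunk adds the VMD member to the per-vector allocation info using the anonymous-struct-in-union idiom: each subsystem names only its own fields, and they all share one storage area. A self-contained sketch of the idiom (C11 anonymous members; the struct and field names here are illustrative, not the kernel's):

```c
#include <assert.h>

/* One allocation-info object; subsystems see only their own fields
 * through anonymous structs inside a shared union. */
struct alloc_info {
	int type;
	union {
		struct {		/* VMD-style member */
			void *desc;
		};
		struct {		/* another subsystem's member */
			unsigned long offset;
			char *name;
		};
	};
};
```

The members alias each other, so only the subsystem matching `type` may interpret the union; the kernel follows the same discipline.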
+5 -5
arch/x86/include/asm/pci_x86.h
···
 #define PCI_MMCFG_BUS_OFFSET(bus) ((bus) << 20)
 
 /*
- * AMD Fam10h CPUs are buggy, and cannot access MMIO config space
- * on their northbrige except through the * %eax register. As such, you MUST
- * NOT use normal IOMEM accesses, you need to only use the magic mmio-config
- * accessor functions.
- * In fact just use pci_config_*, nothing else please.
+ * On AMD Fam10h CPUs, all PCI MMIO configuration space accesses must use
+ * %eax.  No other source or target registers may be used.  The following
+ * mmio_config_* accessors enforce this.  See "BIOS and Kernel Developer's
+ * Guide (BKDG) For AMD Family 10h Processors", rev. 3.48, sec 2.11.1,
+ * "MMIO Configuration Coding Requirements".
  */
 static inline unsigned char mmio_config_readb(void __iomem *pos)
 {
+2
arch/x86/pci/Makefile
···
 obj-$(CONFIG_AMD_NB)		+= amd_bus.o
 obj-$(CONFIG_PCI_CNB20LE_QUIRK)	+= broadcom_bus.o
 
+obj-$(CONFIG_VMD)		+= vmd.o
+
 ifeq ($(CONFIG_PCI_DEBUG),y)
 EXTRA_CFLAGS += -DDEBUG
 endif
+38
arch/x86/pci/common.c
···
 	return (pci_probe & PCI_ASSIGN_ALL_BUSSES) ? 1 : 0;
 }
 
+#if defined(CONFIG_X86_DEV_DMA_OPS) && defined(CONFIG_PCI_DOMAINS)
+static LIST_HEAD(dma_domain_list);
+static DEFINE_SPINLOCK(dma_domain_list_lock);
+
+void add_dma_domain(struct dma_domain *domain)
+{
+	spin_lock(&dma_domain_list_lock);
+	list_add(&domain->node, &dma_domain_list);
+	spin_unlock(&dma_domain_list_lock);
+}
+EXPORT_SYMBOL_GPL(add_dma_domain);
+
+void del_dma_domain(struct dma_domain *domain)
+{
+	spin_lock(&dma_domain_list_lock);
+	list_del(&domain->node);
+	spin_unlock(&dma_domain_list_lock);
+}
+EXPORT_SYMBOL_GPL(del_dma_domain);
+
+static void set_dma_domain_ops(struct pci_dev *pdev)
+{
+	struct dma_domain *domain;
+
+	spin_lock(&dma_domain_list_lock);
+	list_for_each_entry(domain, &dma_domain_list, node) {
+		if (pci_domain_nr(pdev->bus) == domain->domain_nr) {
+			pdev->dev.archdata.dma_ops = domain->dma_ops;
+			break;
+		}
+	}
+	spin_unlock(&dma_domain_list_lock);
+}
+#else
+static void set_dma_domain_ops(struct pci_dev *pdev) {}
+#endif
+
 int pcibios_add_device(struct pci_dev *dev)
 {
 	struct setup_data *data;
···
 		pa_data = data->next;
 		iounmap(data);
 	}
+	set_dma_domain_ops(dev);
 	return 0;
 }
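The mechanism set_dma_domain_ops() implements is just a registry lookup: drivers such as VMD register a (domain number, dma_map_ops) pair, and each device added in that PCI domain gets the matching ops installed. A freestanding sketch of that lookup pattern (plain C, no kernel list or locking primitives; all names here are ours):

```c
#include <assert.h>
#include <stddef.h>

struct dma_domain {
	struct dma_domain *next;
	const void *dma_ops;	/* stand-in for struct dma_map_ops * */
	int domain_nr;
};

static struct dma_domain *dma_domain_list;

/* Register a per-domain ops entry (prepend to the list). */
static void add_dma_domain(struct dma_domain *d)
{
	d->next = dma_domain_list;
	dma_domain_list = d;
}

/* Return the ops registered for a PCI domain number, or NULL. */
static const void *lookup_dma_ops(int domain_nr)
{
	const struct dma_domain *d;

	for (d = dma_domain_list; d; d = d->next)
		if (d->domain_nr == domain_nr)
			return d->dma_ops;
	return NULL;
}
```

The kernel version guards the list with a spinlock and runs the lookup once per device from pcibios_add_device(), so devices outside any registered domain keep the default ops.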
+38 -70
arch/x86/pci/pcbios.c
···
 	unsigned long result = 0;
 	unsigned long flags;
 	unsigned long bx = (bus << 8) | devfn;
+	u16 number = 0, mask = 0;
 
 	WARN_ON(seg);
 	if (!value || (bus > 255) || (devfn > 255) || (reg > 255))
···
 	switch (len) {
 	case 1:
-		__asm__("lcall *(%%esi); cld\n\t"
-			"jc 1f\n\t"
-			"xor %%ah, %%ah\n"
-			"1:"
-			: "=c" (*value),
-			  "=a" (result)
-			: "1" (PCIBIOS_READ_CONFIG_BYTE),
-			  "b" (bx),
-			  "D" ((long)reg),
-			  "S" (&pci_indirect));
-		/*
-		 * Zero-extend the result beyond 8 bits, do not trust the
-		 * BIOS having done it:
-		 */
-		*value &= 0xff;
+		number = PCIBIOS_READ_CONFIG_BYTE;
+		mask = 0xff;
 		break;
 	case 2:
-		__asm__("lcall *(%%esi); cld\n\t"
-			"jc 1f\n\t"
-			"xor %%ah, %%ah\n"
-			"1:"
-			: "=c" (*value),
-			  "=a" (result)
-			: "1" (PCIBIOS_READ_CONFIG_WORD),
-			  "b" (bx),
-			  "D" ((long)reg),
-			  "S" (&pci_indirect));
-		/*
-		 * Zero-extend the result beyond 16 bits, do not trust the
-		 * BIOS having done it:
-		 */
-		*value &= 0xffff;
+		number = PCIBIOS_READ_CONFIG_WORD;
+		mask = 0xffff;
 		break;
 	case 4:
-		__asm__("lcall *(%%esi); cld\n\t"
-			"jc 1f\n\t"
-			"xor %%ah, %%ah\n"
-			"1:"
-			: "=c" (*value),
-			  "=a" (result)
-			: "1" (PCIBIOS_READ_CONFIG_DWORD),
-			  "b" (bx),
-			  "D" ((long)reg),
-			  "S" (&pci_indirect));
+		number = PCIBIOS_READ_CONFIG_DWORD;
 		break;
 	}
+
+	__asm__("lcall *(%%esi); cld\n\t"
+		"jc 1f\n\t"
+		"xor %%ah, %%ah\n"
+		"1:"
+		: "=c" (*value),
+		  "=a" (result)
+		: "1" (number),
+		  "b" (bx),
+		  "D" ((long)reg),
+		  "S" (&pci_indirect));
+	/*
+	 * Zero-extend the result beyond 8 or 16 bits, do not trust the
+	 * BIOS having done it:
+	 */
+	if (mask)
+		*value &= mask;
 
 	raw_spin_unlock_irqrestore(&pci_config_lock, flags);
···
 	unsigned long result = 0;
 	unsigned long flags;
 	unsigned long bx = (bus << 8) | devfn;
+	u16 number = 0;
 
 	WARN_ON(seg);
 	if ((bus > 255) || (devfn > 255) || (reg > 255))
···
 	switch (len) {
 	case 1:
-		__asm__("lcall *(%%esi); cld\n\t"
-			"jc 1f\n\t"
-			"xor %%ah, %%ah\n"
-			"1:"
-			: "=a" (result)
-			: "0" (PCIBIOS_WRITE_CONFIG_BYTE),
-			  "c" (value),
-			  "b" (bx),
-			  "D" ((long)reg),
-			  "S" (&pci_indirect));
+		number = PCIBIOS_WRITE_CONFIG_BYTE;
 		break;
 	case 2:
-		__asm__("lcall *(%%esi); cld\n\t"
-			"jc 1f\n\t"
-			"xor %%ah, %%ah\n"
-			"1:"
-			: "=a" (result)
-			: "0" (PCIBIOS_WRITE_CONFIG_WORD),
-			  "c" (value),
-			  "b" (bx),
-			  "D" ((long)reg),
-			  "S" (&pci_indirect));
+		number = PCIBIOS_WRITE_CONFIG_WORD;
 		break;
 	case 4:
-		__asm__("lcall *(%%esi); cld\n\t"
-			"jc 1f\n\t"
-			"xor %%ah, %%ah\n"
-			"1:"
-			: "=a" (result)
-			: "0" (PCIBIOS_WRITE_CONFIG_DWORD),
-			  "c" (value),
-			  "b" (bx),
-			  "D" ((long)reg),
-			  "S" (&pci_indirect));
+		number = PCIBIOS_WRITE_CONFIG_DWORD;
 		break;
 	}
+
+	__asm__("lcall *(%%esi); cld\n\t"
+		"jc 1f\n\t"
+		"xor %%ah, %%ah\n"
+		"1:"
+		: "=a" (result)
+		: "0" (number),
+		  "c" (value),
+		  "b" (bx),
+		  "D" ((long)reg),
+		  "S" (&pci_indirect));
 
 	raw_spin_unlock_irqrestore(&pci_config_lock, flags);
 
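The pcbios.c rework follows a classic de-duplication pattern: the switch now selects only the per-width parameters (BIOS function number and zero-extension mask), and the call itself exists once after the switch. A userspace sketch of the same shape, with the far call into the BIOS replaced by a stub and hypothetical function numbers (the real interface is the inline asm in the diff):

```c
#include <assert.h>

#define READ_BYTE	0xb108u		/* illustrative function numbers */
#define READ_WORD	0xb109u
#define READ_DWORD	0xb10au

/* Stand-in for the BIOS far call: returns a fixed 32-bit pattern. */
static unsigned int bios_call(unsigned int number)
{
	(void)number;
	return 0xdeadbeefu;
}

static int pci_bios_read_sketch(int len, unsigned int *value)
{
	unsigned int number = 0, mask = 0;

	switch (len) {			/* only the parameters vary... */
	case 1:
		number = READ_BYTE;
		mask = 0xff;
		break;
	case 2:
		number = READ_WORD;
		mask = 0xffff;
		break;
	case 4:
		number = READ_DWORD;
		break;
	default:
		return -1;
	}

	*value = bios_call(number);	/* ...so the call site exists once */
	if (mask)			/* zero-extend narrow reads ourselves */
		*value &= mask;
	return 0;
}
```

The kernel version keeps the semantics of the original three asm blocks, including not trusting the BIOS to zero-extend byte and word reads.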
+723
arch/x86/pci/vmd.c
/*
 * Volume Management Device driver
 * Copyright (c) 2015, Intel Corporation.
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms and conditions of the GNU General Public License,
 * version 2, as published by the Free Software Foundation.
 *
 * This program is distributed in the hope it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
 * more details.
 */

#include <linux/device.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/msi.h>
#include <linux/pci.h>
#include <linux/rculist.h>
#include <linux/rcupdate.h>

#include <asm/irqdomain.h>
#include <asm/device.h>
#include <asm/msi.h>
#include <asm/msidef.h>

#define VMD_CFGBAR	0
#define VMD_MEMBAR1	2
#define VMD_MEMBAR2	4

/*
 * Lock for manipulating VMD IRQ lists.
 */
static DEFINE_RAW_SPINLOCK(list_lock);

/**
 * struct vmd_irq - private data to map driver IRQ to the VMD shared vector
 * @node:	list item for parent traversal.
 * @rcu:	RCU callback item for freeing.
 * @irq:	back pointer to parent.
 * @virq:	the virtual IRQ value provided to the requesting driver.
 *
 * Every MSI/MSI-X IRQ requested for a device in a VMD domain will be mapped to
 * a VMD IRQ using this structure.
 */
struct vmd_irq {
	struct list_head	node;
	struct rcu_head		rcu;
	struct vmd_irq_list	*irq;
	unsigned int		virq;
};

/**
 * struct vmd_irq_list - list of driver requested IRQs mapping to a VMD vector
 * @irq_list:	the list of irq's the VMD one demuxes to.
 * @vmd_vector:	the h/w IRQ assigned to the VMD.
 * @index:	index into the VMD MSI-X table; used for message routing.
 * @count:	number of child IRQs assigned to this vector; used to track
 *		sharing.
 */
struct vmd_irq_list {
	struct list_head	irq_list;
	struct vmd_dev		*vmd;
	unsigned int		vmd_vector;
	unsigned int		index;
	unsigned int		count;
};

struct vmd_dev {
	struct pci_dev		*dev;

	spinlock_t		cfg_lock;
	char __iomem		*cfgbar;

	int msix_count;
	struct msix_entry	*msix_entries;
	struct vmd_irq_list	*irqs;

	struct pci_sysdata	sysdata;
	struct resource		resources[3];
	struct irq_domain	*irq_domain;
	struct pci_bus		*bus;

#ifdef CONFIG_X86_DEV_DMA_OPS
	struct dma_map_ops	dma_ops;
	struct dma_domain	dma_domain;
#endif
};

static inline struct vmd_dev *vmd_from_bus(struct pci_bus *bus)
{
	return container_of(bus->sysdata, struct vmd_dev, sysdata);
}

/*
 * Drivers managing a device in a VMD domain allocate their own IRQs as before,
 * but the MSI entry for the hardware it's driving will be programmed with a
 * destination ID for the VMD MSI-X table.  The VMD muxes interrupts in its
 * domain into one of its own, and the VMD driver de-muxes these for the
 * handlers sharing that VMD IRQ.  The vmd irq_domain provides the operations
 * and irq_chip to set this up.
 */
static void vmd_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
	struct vmd_irq *vmdirq = data->chip_data;
	struct vmd_irq_list *irq = vmdirq->irq;

	msg->address_hi = MSI_ADDR_BASE_HI;
	msg->address_lo = MSI_ADDR_BASE_LO | MSI_ADDR_DEST_ID(irq->index);
	msg->data = 0;
}

/*
 * We rely on MSI_FLAG_USE_DEF_CHIP_OPS to set the IRQ mask/unmask ops.
 */
static void vmd_irq_enable(struct irq_data *data)
{
	struct vmd_irq *vmdirq = data->chip_data;

	raw_spin_lock(&list_lock);
	list_add_tail_rcu(&vmdirq->node, &vmdirq->irq->irq_list);
	raw_spin_unlock(&list_lock);

	data->chip->irq_unmask(data);
}

static void vmd_irq_disable(struct irq_data *data)
{
	struct vmd_irq *vmdirq = data->chip_data;

	data->chip->irq_mask(data);

	raw_spin_lock(&list_lock);
	list_del_rcu(&vmdirq->node);
	raw_spin_unlock(&list_lock);
}

/*
 * XXX: Stubbed until we develop acceptable way to not create conflicts with
 * other devices sharing the same vector.
 */
static int vmd_irq_set_affinity(struct irq_data *data,
				const struct cpumask *dest, bool force)
{
	return -EINVAL;
}

static struct irq_chip vmd_msi_controller = {
	.name			= "VMD-MSI",
	.irq_enable		= vmd_irq_enable,
	.irq_disable		= vmd_irq_disable,
	.irq_compose_msi_msg	= vmd_compose_msi_msg,
	.irq_set_affinity	= vmd_irq_set_affinity,
};

static irq_hw_number_t vmd_get_hwirq(struct msi_domain_info *info,
				     msi_alloc_info_t *arg)
{
	return 0;
}

/*
 * XXX: We can be even smarter selecting the best IRQ once we solve the
168 + */ 169 + static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd) 170 + { 171 + int i, best = 0; 172 + 173 + raw_spin_lock(&list_lock); 174 + for (i = 1; i < vmd->msix_count; i++) 175 + if (vmd->irqs[i].count < vmd->irqs[best].count) 176 + best = i; 177 + vmd->irqs[best].count++; 178 + raw_spin_unlock(&list_lock); 179 + 180 + return &vmd->irqs[best]; 181 + } 182 + 183 + static int vmd_msi_init(struct irq_domain *domain, struct msi_domain_info *info, 184 + unsigned int virq, irq_hw_number_t hwirq, 185 + msi_alloc_info_t *arg) 186 + { 187 + struct vmd_dev *vmd = vmd_from_bus(msi_desc_to_pci_dev(arg->desc)->bus); 188 + struct vmd_irq *vmdirq = kzalloc(sizeof(*vmdirq), GFP_KERNEL); 189 + 190 + if (!vmdirq) 191 + return -ENOMEM; 192 + 193 + INIT_LIST_HEAD(&vmdirq->node); 194 + vmdirq->irq = vmd_next_irq(vmd); 195 + vmdirq->virq = virq; 196 + 197 + irq_domain_set_info(domain, virq, vmdirq->irq->vmd_vector, info->chip, 198 + vmdirq, handle_simple_irq, vmd, NULL); 199 + return 0; 200 + } 201 + 202 + static void vmd_msi_free(struct irq_domain *domain, 203 + struct msi_domain_info *info, unsigned int virq) 204 + { 205 + struct vmd_irq *vmdirq = irq_get_chip_data(virq); 206 + 207 + /* XXX: Potential optimization to rebalance */ 208 + raw_spin_lock(&list_lock); 209 + vmdirq->irq->count--; 210 + raw_spin_unlock(&list_lock); 211 + 212 + kfree_rcu(vmdirq, rcu); 213 + } 214 + 215 + static int vmd_msi_prepare(struct irq_domain *domain, struct device *dev, 216 + int nvec, msi_alloc_info_t *arg) 217 + { 218 + struct pci_dev *pdev = to_pci_dev(dev); 219 + struct vmd_dev *vmd = vmd_from_bus(pdev->bus); 220 + 221 + if (nvec > vmd->msix_count) 222 + return vmd->msix_count; 223 + 224 + memset(arg, 0, sizeof(*arg)); 225 + return 0; 226 + } 227 + 228 + static void vmd_set_desc(msi_alloc_info_t *arg, struct msi_desc *desc) 229 + { 230 + arg->desc = desc; 231 + } 232 + 233 + static struct msi_domain_ops vmd_msi_domain_ops = { 234 + .get_hwirq = vmd_get_hwirq, 235 + .msi_init = 
vmd_msi_init, 236 + .msi_free = vmd_msi_free, 237 + .msi_prepare = vmd_msi_prepare, 238 + .set_desc = vmd_set_desc, 239 + }; 240 + 241 + static struct msi_domain_info vmd_msi_domain_info = { 242 + .flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS | 243 + MSI_FLAG_PCI_MSIX, 244 + .ops = &vmd_msi_domain_ops, 245 + .chip = &vmd_msi_controller, 246 + }; 247 + 248 + #ifdef CONFIG_X86_DEV_DMA_OPS 249 + /* 250 + * VMD replaces the requester ID with its own. DMA mappings for devices in a 251 + * VMD domain need to be mapped for the VMD, not the device requiring 252 + * the mapping. 253 + */ 254 + static struct device *to_vmd_dev(struct device *dev) 255 + { 256 + struct pci_dev *pdev = to_pci_dev(dev); 257 + struct vmd_dev *vmd = vmd_from_bus(pdev->bus); 258 + 259 + return &vmd->dev->dev; 260 + } 261 + 262 + static struct dma_map_ops *vmd_dma_ops(struct device *dev) 263 + { 264 + return to_vmd_dev(dev)->archdata.dma_ops; 265 + } 266 + 267 + static void *vmd_alloc(struct device *dev, size_t size, dma_addr_t *addr, 268 + gfp_t flag, struct dma_attrs *attrs) 269 + { 270 + return vmd_dma_ops(dev)->alloc(to_vmd_dev(dev), size, addr, flag, 271 + attrs); 272 + } 273 + 274 + static void vmd_free(struct device *dev, size_t size, void *vaddr, 275 + dma_addr_t addr, struct dma_attrs *attrs) 276 + { 277 + return vmd_dma_ops(dev)->free(to_vmd_dev(dev), size, vaddr, addr, 278 + attrs); 279 + } 280 + 281 + static int vmd_mmap(struct device *dev, struct vm_area_struct *vma, 282 + void *cpu_addr, dma_addr_t addr, size_t size, 283 + struct dma_attrs *attrs) 284 + { 285 + return vmd_dma_ops(dev)->mmap(to_vmd_dev(dev), vma, cpu_addr, addr, 286 + size, attrs); 287 + } 288 + 289 + static int vmd_get_sgtable(struct device *dev, struct sg_table *sgt, 290 + void *cpu_addr, dma_addr_t addr, size_t size, 291 + struct dma_attrs *attrs) 292 + { 293 + return vmd_dma_ops(dev)->get_sgtable(to_vmd_dev(dev), sgt, cpu_addr, 294 + addr, size, attrs); 295 + } 296 + 297 + static dma_addr_t 
vmd_map_page(struct device *dev, struct page *page, 298 + unsigned long offset, size_t size, 299 + enum dma_data_direction dir, 300 + struct dma_attrs *attrs) 301 + { 302 + return vmd_dma_ops(dev)->map_page(to_vmd_dev(dev), page, offset, size, 303 + dir, attrs); 304 + } 305 + 306 + static void vmd_unmap_page(struct device *dev, dma_addr_t addr, size_t size, 307 + enum dma_data_direction dir, struct dma_attrs *attrs) 308 + { 309 + vmd_dma_ops(dev)->unmap_page(to_vmd_dev(dev), addr, size, dir, attrs); 310 + } 311 + 312 + static int vmd_map_sg(struct device *dev, struct scatterlist *sg, int nents, 313 + enum dma_data_direction dir, struct dma_attrs *attrs) 314 + { 315 + return vmd_dma_ops(dev)->map_sg(to_vmd_dev(dev), sg, nents, dir, attrs); 316 + } 317 + 318 + static void vmd_unmap_sg(struct device *dev, struct scatterlist *sg, int nents, 319 + enum dma_data_direction dir, struct dma_attrs *attrs) 320 + { 321 + vmd_dma_ops(dev)->unmap_sg(to_vmd_dev(dev), sg, nents, dir, attrs); 322 + } 323 + 324 + static void vmd_sync_single_for_cpu(struct device *dev, dma_addr_t addr, 325 + size_t size, enum dma_data_direction dir) 326 + { 327 + vmd_dma_ops(dev)->sync_single_for_cpu(to_vmd_dev(dev), addr, size, dir); 328 + } 329 + 330 + static void vmd_sync_single_for_device(struct device *dev, dma_addr_t addr, 331 + size_t size, enum dma_data_direction dir) 332 + { 333 + vmd_dma_ops(dev)->sync_single_for_device(to_vmd_dev(dev), addr, size, 334 + dir); 335 + } 336 + 337 + static void vmd_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, 338 + int nents, enum dma_data_direction dir) 339 + { 340 + vmd_dma_ops(dev)->sync_sg_for_cpu(to_vmd_dev(dev), sg, nents, dir); 341 + } 342 + 343 + static void vmd_sync_sg_for_device(struct device *dev, struct scatterlist *sg, 344 + int nents, enum dma_data_direction dir) 345 + { 346 + vmd_dma_ops(dev)->sync_sg_for_device(to_vmd_dev(dev), sg, nents, dir); 347 + } 348 + 349 + static int vmd_mapping_error(struct device *dev, dma_addr_t addr) 
350 + { 351 + return vmd_dma_ops(dev)->mapping_error(to_vmd_dev(dev), addr); 352 + } 353 + 354 + static int vmd_dma_supported(struct device *dev, u64 mask) 355 + { 356 + return vmd_dma_ops(dev)->dma_supported(to_vmd_dev(dev), mask); 357 + } 358 + 359 + #ifdef ARCH_HAS_DMA_GET_REQUIRED_MASK 360 + static u64 vmd_get_required_mask(struct device *dev) 361 + { 362 + return vmd_dma_ops(dev)->get_required_mask(to_vmd_dev(dev)); 363 + } 364 + #endif 365 + 366 + static void vmd_teardown_dma_ops(struct vmd_dev *vmd) 367 + { 368 + struct dma_domain *domain = &vmd->dma_domain; 369 + 370 + if (vmd->dev->dev.archdata.dma_ops) 371 + del_dma_domain(domain); 372 + } 373 + 374 + #define ASSIGN_VMD_DMA_OPS(source, dest, fn) \ 375 + do { \ 376 + if (source->fn) \ 377 + dest->fn = vmd_##fn; \ 378 + } while (0) 379 + 380 + static void vmd_setup_dma_ops(struct vmd_dev *vmd) 381 + { 382 + const struct dma_map_ops *source = vmd->dev->dev.archdata.dma_ops; 383 + struct dma_map_ops *dest = &vmd->dma_ops; 384 + struct dma_domain *domain = &vmd->dma_domain; 385 + 386 + domain->domain_nr = vmd->sysdata.domain; 387 + domain->dma_ops = dest; 388 + 389 + if (!source) 390 + return; 391 + ASSIGN_VMD_DMA_OPS(source, dest, alloc); 392 + ASSIGN_VMD_DMA_OPS(source, dest, free); 393 + ASSIGN_VMD_DMA_OPS(source, dest, mmap); 394 + ASSIGN_VMD_DMA_OPS(source, dest, get_sgtable); 395 + ASSIGN_VMD_DMA_OPS(source, dest, map_page); 396 + ASSIGN_VMD_DMA_OPS(source, dest, unmap_page); 397 + ASSIGN_VMD_DMA_OPS(source, dest, map_sg); 398 + ASSIGN_VMD_DMA_OPS(source, dest, unmap_sg); 399 + ASSIGN_VMD_DMA_OPS(source, dest, sync_single_for_cpu); 400 + ASSIGN_VMD_DMA_OPS(source, dest, sync_single_for_device); 401 + ASSIGN_VMD_DMA_OPS(source, dest, sync_sg_for_cpu); 402 + ASSIGN_VMD_DMA_OPS(source, dest, sync_sg_for_device); 403 + ASSIGN_VMD_DMA_OPS(source, dest, mapping_error); 404 + ASSIGN_VMD_DMA_OPS(source, dest, dma_supported); 405 + #ifdef ARCH_HAS_DMA_GET_REQUIRED_MASK 406 + ASSIGN_VMD_DMA_OPS(source, dest, 
get_required_mask); 407 + #endif 408 + add_dma_domain(domain); 409 + } 410 + #undef ASSIGN_VMD_DMA_OPS 411 + #else 412 + static void vmd_teardown_dma_ops(struct vmd_dev *vmd) {} 413 + static void vmd_setup_dma_ops(struct vmd_dev *vmd) {} 414 + #endif 415 + 416 + static char __iomem *vmd_cfg_addr(struct vmd_dev *vmd, struct pci_bus *bus, 417 + unsigned int devfn, int reg, int len) 418 + { 419 + char __iomem *addr = vmd->cfgbar + 420 + (bus->number << 20) + (devfn << 12) + reg; 421 + 422 + if ((addr - vmd->cfgbar) + len >= 423 + resource_size(&vmd->dev->resource[VMD_CFGBAR])) 424 + return NULL; 425 + 426 + return addr; 427 + } 428 + 429 + /* 430 + * CPU may deadlock if config space is not serialized on some versions of this 431 + * hardware, so all config space access is done under a spinlock. 432 + */ 433 + static int vmd_pci_read(struct pci_bus *bus, unsigned int devfn, int reg, 434 + int len, u32 *value) 435 + { 436 + struct vmd_dev *vmd = vmd_from_bus(bus); 437 + char __iomem *addr = vmd_cfg_addr(vmd, bus, devfn, reg, len); 438 + unsigned long flags; 439 + int ret = 0; 440 + 441 + if (!addr) 442 + return -EFAULT; 443 + 444 + spin_lock_irqsave(&vmd->cfg_lock, flags); 445 + switch (len) { 446 + case 1: 447 + *value = readb(addr); 448 + break; 449 + case 2: 450 + *value = readw(addr); 451 + break; 452 + case 4: 453 + *value = readl(addr); 454 + break; 455 + default: 456 + ret = -EINVAL; 457 + break; 458 + } 459 + spin_unlock_irqrestore(&vmd->cfg_lock, flags); 460 + return ret; 461 + } 462 + 463 + /* 464 + * VMD h/w converts non-posted config writes to posted memory writes. The 465 + * read-back in this function forces the completion so it returns only after 466 + * the config space was written, as expected. 
467 + */ 468 + static int vmd_pci_write(struct pci_bus *bus, unsigned int devfn, int reg, 469 + int len, u32 value) 470 + { 471 + struct vmd_dev *vmd = vmd_from_bus(bus); 472 + char __iomem *addr = vmd_cfg_addr(vmd, bus, devfn, reg, len); 473 + unsigned long flags; 474 + int ret = 0; 475 + 476 + if (!addr) 477 + return -EFAULT; 478 + 479 + spin_lock_irqsave(&vmd->cfg_lock, flags); 480 + switch (len) { 481 + case 1: 482 + writeb(value, addr); 483 + readb(addr); 484 + break; 485 + case 2: 486 + writew(value, addr); 487 + readw(addr); 488 + break; 489 + case 4: 490 + writel(value, addr); 491 + readl(addr); 492 + break; 493 + default: 494 + ret = -EINVAL; 495 + break; 496 + } 497 + spin_unlock_irqrestore(&vmd->cfg_lock, flags); 498 + return ret; 499 + } 500 + 501 + static struct pci_ops vmd_ops = { 502 + .read = vmd_pci_read, 503 + .write = vmd_pci_write, 504 + }; 505 + 506 + /* 507 + * VMD domains start at 0x10000 to not clash with ACPI _SEG domains. 508 + */ 509 + static int vmd_find_free_domain(void) 510 + { 511 + int domain = 0xffff; 512 + struct pci_bus *bus = NULL; 513 + 514 + while ((bus = pci_find_next_bus(bus)) != NULL) 515 + domain = max_t(int, domain, pci_domain_nr(bus)); 516 + return domain + 1; 517 + } 518 + 519 + static int vmd_enable_domain(struct vmd_dev *vmd) 520 + { 521 + struct pci_sysdata *sd = &vmd->sysdata; 522 + struct resource *res; 523 + u32 upper_bits; 524 + unsigned long flags; 525 + LIST_HEAD(resources); 526 + 527 + res = &vmd->dev->resource[VMD_CFGBAR]; 528 + vmd->resources[0] = (struct resource) { 529 + .name = "VMD CFGBAR", 530 + .start = 0, 531 + .end = (resource_size(res) >> 20) - 1, 532 + .flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED, 533 + }; 534 + 535 + res = &vmd->dev->resource[VMD_MEMBAR1]; 536 + upper_bits = upper_32_bits(res->end); 537 + flags = res->flags & ~IORESOURCE_SIZEALIGN; 538 + if (!upper_bits) 539 + flags &= ~IORESOURCE_MEM_64; 540 + vmd->resources[1] = (struct resource) { 541 + .name = "VMD MEMBAR1", 542 +
.start = res->start, 543 + .end = res->end, 544 + .flags = flags, 545 + }; 546 + 547 + res = &vmd->dev->resource[VMD_MEMBAR2]; 548 + upper_bits = upper_32_bits(res->end); 549 + flags = res->flags & ~IORESOURCE_SIZEALIGN; 550 + if (!upper_bits) 551 + flags &= ~IORESOURCE_MEM_64; 552 + vmd->resources[2] = (struct resource) { 553 + .name = "VMD MEMBAR2", 554 + .start = res->start + 0x2000, 555 + .end = res->end, 556 + .flags = flags, 557 + }; 558 + 559 + sd->domain = vmd_find_free_domain(); 560 + if (sd->domain < 0) 561 + return sd->domain; 562 + 563 + sd->node = pcibus_to_node(vmd->dev->bus); 564 + 565 + vmd->irq_domain = pci_msi_create_irq_domain(NULL, &vmd_msi_domain_info, 566 + NULL); 567 + if (!vmd->irq_domain) 568 + return -ENODEV; 569 + 570 + pci_add_resource(&resources, &vmd->resources[0]); 571 + pci_add_resource(&resources, &vmd->resources[1]); 572 + pci_add_resource(&resources, &vmd->resources[2]); 573 + vmd->bus = pci_create_root_bus(&vmd->dev->dev, 0, &vmd_ops, sd, 574 + &resources); 575 + if (!vmd->bus) { 576 + pci_free_resource_list(&resources); 577 + irq_domain_remove(vmd->irq_domain); 578 + return -ENODEV; 579 + } 580 + 581 + vmd_setup_dma_ops(vmd); 582 + dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain); 583 + pci_rescan_bus(vmd->bus); 584 + 585 + WARN(sysfs_create_link(&vmd->dev->dev.kobj, &vmd->bus->dev.kobj, 586 + "domain"), "Can't create symlink to domain\n"); 587 + return 0; 588 + } 589 + 590 + static irqreturn_t vmd_irq(int irq, void *data) 591 + { 592 + struct vmd_irq_list *irqs = data; 593 + struct vmd_irq *vmdirq; 594 + 595 + rcu_read_lock(); 596 + list_for_each_entry_rcu(vmdirq, &irqs->irq_list, node) 597 + generic_handle_irq(vmdirq->virq); 598 + rcu_read_unlock(); 599 + 600 + return IRQ_HANDLED; 601 + } 602 + 603 + static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id) 604 + { 605 + struct vmd_dev *vmd; 606 + int i, err; 607 + 608 + if (resource_size(&dev->resource[VMD_CFGBAR]) < (1 << 20)) 609 + return -ENOMEM; 610 + 
611 + vmd = devm_kzalloc(&dev->dev, sizeof(*vmd), GFP_KERNEL); 612 + if (!vmd) 613 + return -ENOMEM; 614 + 615 + vmd->dev = dev; 616 + err = pcim_enable_device(dev); 617 + if (err < 0) 618 + return err; 619 + 620 + vmd->cfgbar = pcim_iomap(dev, VMD_CFGBAR, 0); 621 + if (!vmd->cfgbar) 622 + return -ENOMEM; 623 + 624 + pci_set_master(dev); 625 + if (dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(64)) && 626 + dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(32))) 627 + return -ENODEV; 628 + 629 + vmd->msix_count = pci_msix_vec_count(dev); 630 + if (vmd->msix_count < 0) 631 + return -ENODEV; 632 + 633 + vmd->irqs = devm_kcalloc(&dev->dev, vmd->msix_count, sizeof(*vmd->irqs), 634 + GFP_KERNEL); 635 + if (!vmd->irqs) 636 + return -ENOMEM; 637 + 638 + vmd->msix_entries = devm_kcalloc(&dev->dev, vmd->msix_count, 639 + sizeof(*vmd->msix_entries), 640 + GFP_KERNEL); 641 + if (!vmd->msix_entries) 642 + return -ENOMEM; 643 + for (i = 0; i < vmd->msix_count; i++) 644 + vmd->msix_entries[i].entry = i; 645 + 646 + vmd->msix_count = pci_enable_msix_range(vmd->dev, vmd->msix_entries, 1, 647 + vmd->msix_count); 648 + if (vmd->msix_count < 0) 649 + return vmd->msix_count; 650 + 651 + for (i = 0; i < vmd->msix_count; i++) { 652 + INIT_LIST_HEAD(&vmd->irqs[i].irq_list); 653 + vmd->irqs[i].vmd_vector = vmd->msix_entries[i].vector; 654 + vmd->irqs[i].index = i; 655 + 656 + err = devm_request_irq(&dev->dev, vmd->irqs[i].vmd_vector, 657 + vmd_irq, 0, "vmd", &vmd->irqs[i]); 658 + if (err) 659 + return err; 660 + } 661 + 662 + spin_lock_init(&vmd->cfg_lock); 663 + pci_set_drvdata(dev, vmd); 664 + err = vmd_enable_domain(vmd); 665 + if (err) 666 + return err; 667 + 668 + dev_info(&vmd->dev->dev, "Bound to PCI domain %04x\n", 669 + vmd->sysdata.domain); 670 + return 0; 671 + } 672 + 673 + static void vmd_remove(struct pci_dev *dev) 674 + { 675 + struct vmd_dev *vmd = pci_get_drvdata(dev); 676 + 677 + pci_set_drvdata(dev, NULL); 678 + sysfs_remove_link(&vmd->dev->dev.kobj, "domain"); 679 + 
pci_stop_root_bus(vmd->bus); 680 + pci_remove_root_bus(vmd->bus); 681 + vmd_teardown_dma_ops(vmd); 682 + irq_domain_remove(vmd->irq_domain); 683 + } 684 + 685 + #ifdef CONFIG_PM 686 + static int vmd_suspend(struct device *dev) 687 + { 688 + struct pci_dev *pdev = to_pci_dev(dev); 689 + 690 + pci_save_state(pdev); 691 + return 0; 692 + } 693 + 694 + static int vmd_resume(struct device *dev) 695 + { 696 + struct pci_dev *pdev = to_pci_dev(dev); 697 + 698 + pci_restore_state(pdev); 699 + return 0; 700 + } 701 + #endif 702 + static SIMPLE_DEV_PM_OPS(vmd_dev_pm_ops, vmd_suspend, vmd_resume); 703 + 704 + static const struct pci_device_id vmd_ids[] = { 705 + {PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x201d),}, 706 + {0,} 707 + }; 708 + MODULE_DEVICE_TABLE(pci, vmd_ids); 709 + 710 + static struct pci_driver vmd_drv = { 711 + .name = "vmd", 712 + .id_table = vmd_ids, 713 + .probe = vmd_probe, 714 + .remove = vmd_remove, 715 + .driver = { 716 + .pm = &vmd_dev_pm_ops, 717 + }, 718 + }; 719 + module_pci_driver(vmd_drv); 720 + 721 + MODULE_AUTHOR("Intel Corporation"); 722 + MODULE_LICENSE("GPL v2"); 723 + MODULE_VERSION("0.6");
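vmd_irq() in the file above demuxes one shared MSI-X vector to every child IRQ on the vector's RCU-protected list. A simplified userspace sketch of that fan-out, with a plain singly linked list and a stub standing in for `generic_handle_irq()` (the real driver walks the list under `rcu_read_lock()` with `list_for_each_entry_rcu()`; all names here are illustrative):

```c
#include <stddef.h>

/* Analog of struct vmd_irq: one child IRQ on the shared vector's list. */
struct child_irq {
	unsigned int virq;
	struct child_irq *next;
};

static int fired[8];	/* records which virqs the demux delivered */

/* Stand-in for generic_handle_irq(). */
static void handle_irq_stub(unsigned int virq)
{
	fired[virq] = 1;
}

/* Analog of vmd_irq(): fire every child handler sharing the vector,
 * returning how many were delivered. */
static int demux(struct child_irq *head)
{
	int n = 0;
	struct child_irq *c;

	for (c = head; c; c = c->next) {
		handle_irq_stub(c->virq);
		n++;
	}
	return n;
}
```

In the driver, RCU lets `vmd_irq_enable()`/`vmd_irq_disable()` add and remove children while this handler traverses the list without taking `list_lock`.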
+4 -4
drivers/pci/access.c
··· 25 25 #define PCI_word_BAD (pos & 1) 26 26 #define PCI_dword_BAD (pos & 3) 27 27 28 - #define PCI_OP_READ(size,type,len) \ 28 + #define PCI_OP_READ(size, type, len) \ 29 29 int pci_bus_read_config_##size \ 30 30 (struct pci_bus *bus, unsigned int devfn, int pos, type *value) \ 31 31 { \ ··· 40 40 return res; \ 41 41 } 42 42 43 - #define PCI_OP_WRITE(size,type,len) \ 43 + #define PCI_OP_WRITE(size, type, len) \ 44 44 int pci_bus_write_config_##size \ 45 45 (struct pci_bus *bus, unsigned int devfn, int pos, type value) \ 46 46 { \ ··· 231 231 } 232 232 233 233 /* Returns 0 on success, negative values indicate error. */ 234 - #define PCI_USER_READ_CONFIG(size,type) \ 234 + #define PCI_USER_READ_CONFIG(size, type) \ 235 235 int pci_user_read_config_##size \ 236 236 (struct pci_dev *dev, int pos, type *val) \ 237 237 { \ ··· 251 251 EXPORT_SYMBOL_GPL(pci_user_read_config_##size); 252 252 253 253 /* Returns 0 on success, negative values indicate error. */ 254 - #define PCI_USER_WRITE_CONFIG(size,type) \ 254 + #define PCI_USER_WRITE_CONFIG(size, type) \ 255 255 int pci_user_write_config_##size \ 256 256 (struct pci_dev *dev, int pos, type val) \ 257 257 { \
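The `PCI_OP_READ`/`PCI_OP_WRITE` macros whose spacing is cleaned up above stamp out one typed accessor per access width via token pasting. A self-contained sketch of the same pattern against a plain byte array (the `cfg_space` contents are made up for illustration; the real macros dispatch to `bus->ops` under `pci_lock`):

```c
#include <stdint.h>

/* Fake little-endian config space; first dword resembles a vendor/device
 * ID pair, chosen arbitrarily for the demo. */
static uint8_t cfg_space[256] = { [0] = 0x86, [1] = 0x80,
				  [2] = 0x0d, [3] = 0x20 };

/* One macro generates a correctly typed reader per width, exactly the
 * token-pasting shape PCI_OP_READ uses. */
#define DEMO_OP_READ(size, type, len)				\
static type demo_read_##size(int pos)				\
{								\
	type v = 0;						\
	int i;							\
								\
	for (i = 0; i < (len); i++)				\
		v |= (type)((type)cfg_space[pos + i] << (8 * i)); \
	return v;						\
}

DEMO_OP_READ(byte,  uint8_t,  1)
DEMO_OP_READ(word,  uint16_t, 2)
DEMO_OP_READ(dword, uint32_t, 4)
```

The generated names (`demo_read_byte`, `demo_read_word`, `demo_read_dword`) mirror `pci_bus_read_config_{byte,word,dword}`.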
+4 -2
drivers/pci/bus.c
··· 140 140 type_mask |= IORESOURCE_TYPE_BITS; 141 141 142 142 pci_bus_for_each_resource(bus, r, i) { 143 + resource_size_t min_used = min; 144 + 143 145 if (!r) 144 146 continue; 145 147 ··· 165 163 * overrides "min". 166 164 */ 167 165 if (avail.start) 168 - min = avail.start; 166 + min_used = avail.start; 169 167 170 168 max = avail.end; 171 169 172 170 /* Ok, try it out.. */ 173 - ret = allocate_resource(r, res, size, min, max, 171 + ret = allocate_resource(r, res, size, min_used, max, 174 172 align, alignf, alignf_data); 175 173 if (ret == 0) 176 174 return 0;
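The bus.c fix matters because the old code overwrote the caller's `min` inside the resource loop, so a per-window override from one iteration leaked into every later one. A miniature reproduction of the bug and of the `min_used` per-iteration copy (the window data is invented):

```c
/* Per-window minimum overrides; 0 means "no override", as when
 * avail.start is zero in pci_bus_alloc_from_region(). */
static int starts[3] = { 0, 0x5000, 0 };

/* Buggy shape: the override from window 1 clobbers the parameter,
 * so window 2 wrongly sees 0x5000 instead of the caller's minimum. */
static int last_min_buggy(int min)
{
	int i;

	for (i = 0; i < 3; i++)
		if (starts[i])
			min = starts[i];
	return min;	/* minimum window 2 would allocate from */
}

/* Fixed shape, as in the patch: copy into a per-iteration local. */
static int last_min_fixed(int min)
{
	int seen = min;
	int i;

	for (i = 0; i < 3; i++) {
		int min_used = min;	/* fresh copy each window */

		if (starts[i])
			min_used = starts[i];
		seen = min_used;
	}
	return seen;
}
```

With a caller minimum of 0x1000, the buggy loop ends up at 0x5000 for the last window while the fixed loop correctly returns 0x1000.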
+27 -9
drivers/pci/host/Kconfig
··· 49 49 50 50 config PCI_RCAR_GEN2_PCIE 51 51 bool "Renesas R-Car PCIe controller" 52 - depends on ARM 53 - depends on ARCH_SHMOBILE || COMPILE_TEST 52 + depends on ARCH_SHMOBILE || (ARM && COMPILE_TEST) 54 53 help 55 54 Say Y here if you want PCIe controller support on R-Car Gen2 SoCs. 56 55 ··· 118 119 depends on ARCH_VERSATILE 119 120 120 121 config PCIE_IPROC 121 - tristate "Broadcom iProc PCIe controller" 122 - depends on OF && (ARM || ARM64) 123 - default n 122 + tristate 124 123 help 125 124 This enables the iProc PCIe core controller support for Broadcom's 126 - iProc family of SoCs. An appropriate bus interface driver also needs 127 - to be enabled 125 + iProc family of SoCs. An appropriate bus interface driver needs 126 + to be enabled to select this. 128 127 129 128 config PCIE_IPROC_PLATFORM 130 129 tristate "Broadcom iProc PCIe platform bus driver" ··· 145 148 Say Y here if you want to use the Broadcom iProc PCIe controller 146 149 through the BCMA bus interface 147 150 151 + config PCIE_IPROC_MSI 152 + bool "Broadcom iProc PCIe MSI support" 153 + depends on PCIE_IPROC_PLATFORM || PCIE_IPROC_BCMA 154 + depends on PCI_MSI 155 + select PCI_MSI_IRQ_DOMAIN 156 + default ARCH_BCM_IPROC 157 + help 158 + Say Y here if you want to enable MSI support for Broadcom's iProc 159 + PCIe controller 160 + 148 161 config PCIE_ALTERA 149 162 bool "Altera PCIe controller" 150 163 depends on ARM || NIOS2 ··· 174 167 175 168 config PCI_HISI 176 169 depends on OF && ARM64 177 - bool "HiSilicon SoC HIP05 PCIe controller" 170 + bool "HiSilicon Hip05 and Hip06 SoCs PCIe controllers" 178 171 select PCIEPORTBUS 179 172 select PCIE_DW 180 173 help 181 - Say Y here if you want PCIe controller support on HiSilicon HIP05 SoC 174 + Say Y here if you want PCIe controller support on HiSilicon 175 + Hip05 and Hip06 SoCs 176 + 177 + config PCIE_QCOM 178 + bool "Qualcomm PCIe controller" 179 + depends on ARCH_QCOM && OF 180 + select PCIE_DW 181 + select PCIEPORTBUS 182 + help 183 + Say 
Y here to enable PCIe controller support on Qualcomm SoCs. The 184 + PCIe controller uses the Designware core plus Qualcomm-specific 185 + hardware wrappers. 182 186 183 187 endmenu
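Hiding `CONFIG_PCIE_IPROC` follows the usual Kconfig idiom for a core library symbol: drop the user-visible prompt and let the bus-interface drivers `select` it, so the core code is built exactly when something needs it. A hypothetical fragment of that shape (symbol names invented, not from this tree):

```kconfig
config PCIE_FOO
	tristate
	help
	  Core controller support; not user-selectable, pulled in by the
	  bus-interface drivers below.

config PCIE_FOO_PLATFORM
	tristate "Foo PCIe platform bus driver"
	depends on OF
	select PCIE_FOO
```

This prevents the invalid configuration where the core is enabled with no bus interface, which is what the reworded PCIE_IPROC help text is describing.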
+2
drivers/pci/host/Makefile
··· 15 15 obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o 16 16 obj-$(CONFIG_PCI_VERSATILE) += pci-versatile.o 17 17 obj-$(CONFIG_PCIE_IPROC) += pcie-iproc.o 18 + obj-$(CONFIG_PCIE_IPROC_MSI) += pcie-iproc-msi.o 18 19 obj-$(CONFIG_PCIE_IPROC_PLATFORM) += pcie-iproc-platform.o 19 20 obj-$(CONFIG_PCIE_IPROC_BCMA) += pcie-iproc-bcma.o 20 21 obj-$(CONFIG_PCIE_ALTERA) += pcie-altera.o 21 22 obj-$(CONFIG_PCIE_ALTERA_MSI) += pcie-altera-msi.o 22 23 obj-$(CONFIG_PCI_HISI) += pcie-hisi.o 24 + obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o
+2 -1
drivers/pci/host/pci-dra7xx.c
··· 302 302 } 303 303 304 304 ret = devm_request_irq(&pdev->dev, pp->irq, 305 - dra7xx_pcie_msi_irq_handler, IRQF_SHARED, 305 + dra7xx_pcie_msi_irq_handler, 306 + IRQF_SHARED | IRQF_NO_THREAD, 306 307 "dra7-pcie-msi", pp); 307 308 if (ret) { 308 309 dev_err(&pdev->dev, "failed to request irq\n");
+2 -1
drivers/pci/host/pci-exynos.c
··· 522 522 523 523 ret = devm_request_irq(&pdev->dev, pp->msi_irq, 524 524 exynos_pcie_msi_irq_handler, 525 - IRQF_SHARED, "exynos-pcie", pp); 525 + IRQF_SHARED | IRQF_NO_THREAD, 526 + "exynos-pcie", pp); 526 527 if (ret) { 527 528 dev_err(&pdev->dev, "failed to request msi irq\n"); 528 529 return ret;
-9
drivers/pci/host/pci-host-generic.c
··· 38 38 struct gen_pci_cfg_bus_ops *ops; 39 39 }; 40 40 41 - /* 42 - * ARM pcibios functions expect the ARM struct pci_sys_data as the PCI 43 - * sysdata. Add pci_sys_data as the first element in struct gen_pci so 44 - * that when we use a gen_pci pointer as sysdata, it is also a pointer to 45 - * a struct pci_sys_data. 46 - */ 47 41 struct gen_pci { 48 - #ifdef CONFIG_ARM 49 - struct pci_sys_data sys; 50 - #endif 51 42 struct pci_host_bridge host; 52 43 struct gen_pci_cfg_windows cfg; 53 44 struct list_head resources;
+9 -16
drivers/pci/host/pci-imx6.c
··· 32 32 #define to_imx6_pcie(x) container_of(x, struct imx6_pcie, pp) 33 33 34 34 struct imx6_pcie { 35 - int reset_gpio; 35 + struct gpio_desc *reset_gpio; 36 36 struct clk *pcie_bus; 37 37 struct clk *pcie_phy; 38 38 struct clk *pcie; ··· 122 122 } 123 123 124 124 /* Read from the 16-bit PCIe PHY control registers (not memory-mapped) */ 125 - static int pcie_phy_read(void __iomem *dbi_base, int addr , int *data) 125 + static int pcie_phy_read(void __iomem *dbi_base, int addr, int *data) 126 126 { 127 127 u32 val, phy_ctl; 128 128 int ret; ··· 287 287 usleep_range(200, 500); 288 288 289 289 /* Some boards don't have PCIe reset GPIO. */ 290 - if (gpio_is_valid(imx6_pcie->reset_gpio)) { 291 - gpio_set_value(imx6_pcie->reset_gpio, 0); 290 + if (imx6_pcie->reset_gpio) { 291 + gpiod_set_value_cansleep(imx6_pcie->reset_gpio, 0); 292 292 msleep(100); 293 - gpio_set_value(imx6_pcie->reset_gpio, 1); 293 + gpiod_set_value_cansleep(imx6_pcie->reset_gpio, 1); 294 294 } 295 295 return 0; 296 296 ··· 537 537 538 538 ret = devm_request_irq(&pdev->dev, pp->msi_irq, 539 539 imx6_pcie_msi_handler, 540 - IRQF_SHARED, "mx6-pcie-msi", pp); 540 + IRQF_SHARED | IRQF_NO_THREAD, 541 + "mx6-pcie-msi", pp); 541 542 if (ret) { 542 543 dev_err(&pdev->dev, "failed to request MSI irq\n"); 543 544 return ret; ··· 561 560 { 562 561 struct imx6_pcie *imx6_pcie; 563 562 struct pcie_port *pp; 564 - struct device_node *np = pdev->dev.of_node; 565 563 struct resource *dbi_base; 566 564 int ret; 567 565 ··· 581 581 return PTR_ERR(pp->dbi_base); 582 582 583 583 /* Fetch GPIOs */ 584 - imx6_pcie->reset_gpio = of_get_named_gpio(np, "reset-gpio", 0); 585 - if (gpio_is_valid(imx6_pcie->reset_gpio)) { 586 - ret = devm_gpio_request_one(&pdev->dev, imx6_pcie->reset_gpio, 587 - GPIOF_OUT_INIT_LOW, "PCIe reset"); 588 - if (ret) { 589 - dev_err(&pdev->dev, "unable to get reset gpio\n"); 590 - return ret; 591 - } 592 - } 584 + imx6_pcie->reset_gpio = devm_gpiod_get_optional(&pdev->dev, "reset", 585 + 
GPIOD_OUT_LOW); 593 586 594 587 /* Fetch clocks */ 595 588 imx6_pcie->pcie_phy = devm_clk_get(&pdev->dev, "pcie_phy");
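The imx6 conversion above relies on the gpiod "optional" contract: `devm_gpiod_get_optional()` returns NULL when the board provides no such GPIO (rather than an error or an invalid number), so callers just test the pointer instead of `gpio_is_valid()` and skip the toggle on boards without a reset line. A userspace analog of that contract (all names hypothetical):

```c
#include <stddef.h>

/* Stand-in for struct gpio_desc. */
struct fake_gpio { int value; };

static struct fake_gpio board_gpio = { 0 };

/* Analog of devm_gpiod_get_optional(): absence is NULL, not an error.
 * present == 0 models a board with no PCIe reset GPIO. */
static struct fake_gpio *get_optional_gpio(int present)
{
	return present ? &board_gpio : NULL;
}

/* Analog of the reset toggle in imx6_pcie_deassert_core_reset():
 * silently a no-op when the GPIO is absent. */
static int toggle_reset(struct fake_gpio *g)
{
	if (!g)			/* no GPIO on this board: skip */
		return 0;
	g->value = 0;		/* assert reset */
	g->value = 1;		/* deassert reset */
	return 1;
}
```

The same shape is why the patch can delete the `gpio_is_valid()`/`devm_gpio_request_one()` boilerplate: acquisition and the absent case are both handled by the one `_optional` call.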
+74 -3
drivers/pci/host/pci-rcar-gen2.c
··· 15 15 #include <linux/io.h> 16 16 #include <linux/kernel.h> 17 17 #include <linux/module.h> 18 + #include <linux/of_address.h> 18 19 #include <linux/of_pci.h> 19 20 #include <linux/pci.h> 20 21 #include <linux/platform_device.h> ··· 103 102 unsigned busnr; 104 103 int irq; 105 104 unsigned long window_size; 105 + unsigned long window_addr; 106 + unsigned long window_pci; 106 107 }; 107 108 108 109 /* PCI configuration space operations */ ··· 242 239 RCAR_PCI_ARBITER_PCIBP_MODE; 243 240 iowrite32(val, reg + RCAR_PCI_ARBITER_CTR_REG); 244 241 245 - /* PCI-AHB mapping: 0x40000000 base */ 246 - iowrite32(0x40000000 | RCAR_PCIAHB_PREFETCH16, 242 + /* PCI-AHB mapping */ 243 + iowrite32(priv->window_addr | RCAR_PCIAHB_PREFETCH16, 247 244 reg + RCAR_PCIAHB_WIN1_CTR_REG); 248 245 249 246 /* AHB-PCI mapping: OHCI/EHCI registers */ ··· 254 251 iowrite32(RCAR_AHBPCI_WIN1_HOST | RCAR_AHBPCI_WIN_CTR_CFG, 255 252 reg + RCAR_AHBPCI_WIN1_CTR_REG); 256 253 /* Set PCI-AHB Window1 address */ 257 - iowrite32(0x40000000 | PCI_BASE_ADDRESS_MEM_PREFETCH, 254 + iowrite32(priv->window_pci | PCI_BASE_ADDRESS_MEM_PREFETCH, 258 255 reg + PCI_BASE_ADDRESS_1); 259 256 /* Set AHB-PCI bridge PCI communication area address */ 260 257 val = priv->cfg_res->start + RCAR_AHBPCI_PCICOM_OFFSET; ··· 286 283 .read = pci_generic_config_read, 287 284 .write = pci_generic_config_write, 288 285 }; 286 + 287 + static int pci_dma_range_parser_init(struct of_pci_range_parser *parser, 288 + struct device_node *node) 289 + { 290 + const int na = 3, ns = 2; 291 + int rlen; 292 + 293 + parser->node = node; 294 + parser->pna = of_n_addr_cells(node); 295 + parser->np = parser->pna + na + ns; 296 + 297 + parser->range = of_get_property(node, "dma-ranges", &rlen); 298 + if (!parser->range) 299 + return -ENOENT; 300 + 301 + parser->end = parser->range + rlen / sizeof(__be32); 302 + return 0; 303 + } 304 + 305 + static int rcar_pci_parse_map_dma_ranges(struct rcar_pci_priv *pci, 306 + struct device_node *np) 307 + { 
308 + struct of_pci_range range; 309 + struct of_pci_range_parser parser; 310 + int index = 0; 311 + 312 + /* Failure to parse is ok as we fall back to defaults */ 313 + if (pci_dma_range_parser_init(&parser, np)) 314 + return 0; 315 + 316 + /* Get the dma-ranges from DT */ 317 + for_each_of_pci_range(&parser, &range) { 318 + /* Hardware only allows one inbound 32-bit range */ 319 + if (index) 320 + return -EINVAL; 321 + 322 + pci->window_addr = (unsigned long)range.cpu_addr; 323 + pci->window_pci = (unsigned long)range.pci_addr; 324 + pci->window_size = (unsigned long)range.size; 325 + 326 + /* Catch HW limitations */ 327 + if (!(range.flags & IORESOURCE_PREFETCH)) { 328 + dev_err(pci->dev, "window must be prefetchable\n"); 329 + return -EINVAL; 330 + } 331 + if (pci->window_addr) { 332 + u32 lowaddr = 1 << (ffs(pci->window_addr) - 1); 333 + 334 + if (lowaddr < pci->window_size) { 335 + dev_err(pci->dev, "invalid window size/addr\n"); 336 + return -EINVAL; 337 + } 338 + } 339 + index++; 340 + } 341 + 342 + return 0; 343 + } 289 344 290 345 static int rcar_pci_probe(struct platform_device *pdev) 291 346 { ··· 390 329 return priv->irq; 391 330 } 392 331 332 + /* default window addr and size if not specified in DT */ 333 + priv->window_addr = 0x40000000; 334 + priv->window_pci = 0x40000000; 393 335 priv->window_size = SZ_1G; 394 336 395 337 if (pdev->dev.of_node) { ··· 408 344 priv->busnr = busnr.start; 409 345 if (busnr.end != busnr.start) 410 346 dev_warn(&pdev->dev, "only one bus number supported\n"); 347 + 348 + ret = rcar_pci_parse_map_dma_ranges(priv, pdev->dev.of_node); 349 + if (ret < 0) { 350 + dev_err(&pdev->dev, "failed to parse dma-range\n"); 351 + return ret; 352 + } 411 353 } else { 412 354 priv->busnr = pdev->id; 413 355 } ··· 430 360 } 431 361 432 362 static struct of_device_id rcar_pci_of_match[] = { 363 + { .compatible = "renesas,pci-rcar-gen2", }, 433 364 { .compatible = "renesas,pci-r8a7790", }, 434 365 { .compatible = "renesas,pci-r8a7791", }, 
435 366 { .compatible = "renesas,pci-r8a7794", },
+1 -1
drivers/pci/host/pci-tegra.c
··· 1288 1288 1289 1289 msi->irq = err; 1290 1290 1291 - err = request_irq(msi->irq, tegra_pcie_msi_irq, 0, 1291 + err = request_irq(msi->irq, tegra_pcie_msi_irq, IRQF_NO_THREAD, 1292 1292 tegra_msi_irq_chip.name, pcie); 1293 1293 if (err < 0) { 1294 1294 dev_err(&pdev->dev, "failed to request IRQ: %d\n", err);
+1 -4
drivers/pci/host/pci-versatile.c
··· 125 125 return err; 126 126 } 127 127 128 - /* Unused, temporary to satisfy ARM arch code */ 129 - struct pci_sys_data sys; 130 - 131 128 static int versatile_pci_probe(struct platform_device *pdev) 132 129 { 133 130 struct resource *res; ··· 205 208 pci_add_flags(PCI_ENABLE_PROC_DOMAINS); 206 209 pci_add_flags(PCI_REASSIGN_ALL_BUS | PCI_REASSIGN_ALL_RSRC); 207 210 208 - bus = pci_scan_root_bus(&pdev->dev, 0, &pci_versatile_ops, &sys, &pci_res); 211 + bus = pci_scan_root_bus(&pdev->dev, 0, &pci_versatile_ops, NULL, &pci_res); 209 212 if (!bus) 210 213 return -ENOMEM; 211 214
+26 -36
drivers/pci/host/pcie-designware.c
··· 128 128 static int dw_pcie_rd_own_conf(struct pcie_port *pp, int where, int size, 129 129 u32 *val) 130 130 { 131 - int ret; 132 - 133 131 if (pp->ops->rd_own_conf) 134 - ret = pp->ops->rd_own_conf(pp, where, size, val); 135 - else 136 - ret = dw_pcie_cfg_read(pp->dbi_base + where, size, val); 132 + return pp->ops->rd_own_conf(pp, where, size, val); 137 133 138 - return ret; 134 + return dw_pcie_cfg_read(pp->dbi_base + where, size, val); 139 135 } 140 136 141 137 static int dw_pcie_wr_own_conf(struct pcie_port *pp, int where, int size, 142 138 u32 val) 143 139 { 144 - int ret; 145 - 146 140 if (pp->ops->wr_own_conf) 147 - ret = pp->ops->wr_own_conf(pp, where, size, val); 148 - else 149 - ret = dw_pcie_cfg_write(pp->dbi_base + where, size, val); 141 + return pp->ops->wr_own_conf(pp, where, size, val); 150 142 151 - return ret; 143 + return dw_pcie_cfg_write(pp->dbi_base + where, size, val); 152 144 } 153 145 154 146 static void dw_pcie_prog_outbound_atu(struct pcie_port *pp, int index, 155 147 int type, u64 cpu_addr, u64 pci_addr, u32 size) 156 148 { 149 + u32 val; 150 + 157 151 dw_pcie_writel_rc(pp, PCIE_ATU_REGION_OUTBOUND | index, 158 152 PCIE_ATU_VIEWPORT); 159 153 dw_pcie_writel_rc(pp, lower_32_bits(cpu_addr), PCIE_ATU_LOWER_BASE); ··· 158 164 dw_pcie_writel_rc(pp, upper_32_bits(pci_addr), PCIE_ATU_UPPER_TARGET); 159 165 dw_pcie_writel_rc(pp, type, PCIE_ATU_CR1); 160 166 dw_pcie_writel_rc(pp, PCIE_ATU_ENABLE, PCIE_ATU_CR2); 167 + 168 + /* 169 + * Make sure ATU enable takes effect before any subsequent config 170 + * and I/O accesses. 
171 + */ 172 + dw_pcie_readl_rc(pp, PCIE_ATU_CR2, &val); 161 173 } 162 174 163 175 static struct irq_chip dw_msi_irq_chip = { ··· 384 384 { 385 385 if (pp->ops->link_up) 386 386 return pp->ops->link_up(pp); 387 - else 388 - return 0; 387 + 388 + return 0; 389 389 } 390 390 391 391 static int dw_pcie_msi_map(struct irq_domain *domain, unsigned int irq, ··· 571 571 u64 cpu_addr; 572 572 void __iomem *va_cfg_base; 573 573 574 + if (pp->ops->rd_other_conf) 575 + return pp->ops->rd_other_conf(pp, bus, devfn, where, size, val); 576 + 574 577 busdev = PCIE_ATU_BUS(bus->number) | PCIE_ATU_DEV(PCI_SLOT(devfn)) | 575 578 PCIE_ATU_FUNC(PCI_FUNC(devfn)); 576 579 ··· 607 604 u32 busdev, cfg_size; 608 605 u64 cpu_addr; 609 606 void __iomem *va_cfg_base; 607 + 608 + if (pp->ops->wr_other_conf) 609 + return pp->ops->wr_other_conf(pp, bus, devfn, where, size, val); 610 610 611 611 busdev = PCIE_ATU_BUS(bus->number) | PCIE_ATU_DEV(PCI_SLOT(devfn)) | 612 612 PCIE_ATU_FUNC(PCI_FUNC(devfn)); ··· 664 658 int size, u32 *val) 665 659 { 666 660 struct pcie_port *pp = bus->sysdata; 667 - int ret; 668 661 669 662 if (dw_pcie_valid_config(pp, bus, PCI_SLOT(devfn)) == 0) { 670 663 *val = 0xffffffff; 671 664 return PCIBIOS_DEVICE_NOT_FOUND; 672 665 } 673 666 674 - if (bus->number != pp->root_bus_nr) 675 - if (pp->ops->rd_other_conf) 676 - ret = pp->ops->rd_other_conf(pp, bus, devfn, 677 - where, size, val); 678 - else 679 - ret = dw_pcie_rd_other_conf(pp, bus, devfn, 680 - where, size, val); 681 - else 682 - ret = dw_pcie_rd_own_conf(pp, where, size, val); 667 + if (bus->number == pp->root_bus_nr) 668 + return dw_pcie_rd_own_conf(pp, where, size, val); 683 669 684 - return ret; 670 + return dw_pcie_rd_other_conf(pp, bus, devfn, where, size, val); 685 671 } 686 672 687 673 static int dw_pcie_wr_conf(struct pci_bus *bus, u32 devfn, 688 674 int where, int size, u32 val) 689 675 { 690 676 struct pcie_port *pp = bus->sysdata; 691 - int ret; 692 677 693 678 if (dw_pcie_valid_config(pp, bus, 
PCI_SLOT(devfn)) == 0) 694 679 return PCIBIOS_DEVICE_NOT_FOUND; 695 680 696 - if (bus->number != pp->root_bus_nr) 697 - if (pp->ops->wr_other_conf) 698 - ret = pp->ops->wr_other_conf(pp, bus, devfn, 699 - where, size, val); 700 - else 701 - ret = dw_pcie_wr_other_conf(pp, bus, devfn, 702 - where, size, val); 703 - else 704 - ret = dw_pcie_wr_own_conf(pp, where, size, val); 681 + if (bus->number == pp->root_bus_nr) 682 + return dw_pcie_wr_own_conf(pp, where, size, val); 705 683 706 - return ret; 684 + return dw_pcie_wr_other_conf(pp, bus, devfn, where, size, val); 707 685 } 708 686 709 687 static struct pci_ops dw_pcie_ops = {
+65 -11
drivers/pci/host/pcie-hisi.c
··· 1 1 /* 2 - * PCIe host controller driver for HiSilicon Hip05 SoC 2 + * PCIe host controller driver for HiSilicon SoCs 3 3 * 4 4 * Copyright (C) 2015 HiSilicon Co., Ltd. http://www.hisilicon.com 5 5 * 6 - * Author: Zhou Wang <wangzhou1@hisilicon.com> 7 - * Dacai Zhu <zhudacai@hisilicon.com> 6 + * Authors: Zhou Wang <wangzhou1@hisilicon.com> 7 + * Dacai Zhu <zhudacai@hisilicon.com> 8 + * Gabriele Paoloni <gabriele.paoloni@huawei.com> 8 9 * 9 10 * This program is free software; you can redistribute it and/or modify 10 11 * it under the terms of the GNU General Public License version 2 as ··· 17 16 #include <linux/of_address.h> 18 17 #include <linux/of_pci.h> 19 18 #include <linux/platform_device.h> 19 + #include <linux/of_device.h> 20 20 #include <linux/regmap.h> 21 21 22 22 #include "pcie-designware.h" 23 23 24 - #define PCIE_SUBCTRL_SYS_STATE4_REG 0x6818 25 - #define PCIE_LTSSM_LINKUP_STATE 0x11 26 - #define PCIE_LTSSM_STATE_MASK 0x3F 24 + #define PCIE_LTSSM_LINKUP_STATE 0x11 25 + #define PCIE_LTSSM_STATE_MASK 0x3F 26 + #define PCIE_SUBCTRL_SYS_STATE4_REG 0x6818 27 + #define PCIE_SYS_STATE4 0x31c 28 + #define PCIE_HIP06_CTRL_OFF 0x1000 27 29 28 30 #define to_hisi_pcie(x) container_of(x, struct hisi_pcie, pp) 31 + 32 + struct hisi_pcie; 33 + 34 + struct pcie_soc_ops { 35 + int (*hisi_pcie_link_up)(struct hisi_pcie *pcie); 36 + }; 29 37 30 38 struct hisi_pcie { 31 39 struct regmap *subctrl; 32 40 void __iomem *reg_base; 33 41 u32 port_id; 34 42 struct pcie_port pp; 43 + struct pcie_soc_ops *soc_ops; 35 44 }; 36 45 37 46 static inline void hisi_pcie_apb_writel(struct hisi_pcie *pcie, ··· 55 44 return readl(pcie->reg_base + reg); 56 45 } 57 46 58 - /* Hip05 PCIe host only supports 32-bit config access */ 47 + /* HipXX PCIe host only supports 32-bit config access */ 59 48 static int hisi_pcie_cfg_read(struct pcie_port *pp, int where, int size, 60 49 u32 *val) 61 50 { ··· 80 69 return PCIBIOS_SUCCESSFUL; 81 70 } 82 71 83 - /* Hip05 PCIe host only supports 32-bit 
config access */ 72 + /* HipXX PCIe host only supports 32-bit config access */ 84 73 static int hisi_pcie_cfg_write(struct pcie_port *pp, int where, int size, 85 74 u32 val) 86 75 { ··· 107 96 return PCIBIOS_SUCCESSFUL; 108 97 } 109 98 110 - static int hisi_pcie_link_up(struct pcie_port *pp) 99 + static int hisi_pcie_link_up_hip05(struct hisi_pcie *hisi_pcie) 111 100 { 112 101 u32 val; 113 - struct hisi_pcie *hisi_pcie = to_hisi_pcie(pp); 114 102 115 103 regmap_read(hisi_pcie->subctrl, PCIE_SUBCTRL_SYS_STATE4_REG + 116 104 0x100 * hisi_pcie->port_id, &val); 117 105 118 106 return ((val & PCIE_LTSSM_STATE_MASK) == PCIE_LTSSM_LINKUP_STATE); 107 + } 108 + 109 + static int hisi_pcie_link_up_hip06(struct hisi_pcie *hisi_pcie) 110 + { 111 + u32 val; 112 + 113 + val = hisi_pcie_apb_readl(hisi_pcie, PCIE_HIP06_CTRL_OFF + 114 + PCIE_SYS_STATE4); 115 + 116 + return ((val & PCIE_LTSSM_STATE_MASK) == PCIE_LTSSM_LINKUP_STATE); 117 + } 118 + 119 + static int hisi_pcie_link_up(struct pcie_port *pp) 120 + { 121 + struct hisi_pcie *hisi_pcie = to_hisi_pcie(pp); 122 + 123 + return hisi_pcie->soc_ops->hisi_pcie_link_up(hisi_pcie); 119 124 } 120 125 121 126 static struct pcie_host_ops hisi_pcie_host_ops = { ··· 172 145 { 173 146 struct hisi_pcie *hisi_pcie; 174 147 struct pcie_port *pp; 148 + const struct of_device_id *match; 175 149 struct resource *reg; 150 + struct device_driver *driver; 176 151 int ret; 177 152 178 153 hisi_pcie = devm_kzalloc(&pdev->dev, sizeof(*hisi_pcie), GFP_KERNEL); ··· 183 154 184 155 pp = &hisi_pcie->pp; 185 156 pp->dev = &pdev->dev; 157 + driver = (pdev->dev).driver; 158 + 159 + match = of_match_device(driver->of_match_table, &pdev->dev); 160 + hisi_pcie->soc_ops = (struct pcie_soc_ops *) match->data; 186 161 187 162 hisi_pcie->subctrl = 188 163 syscon_regmap_lookup_by_compatible("hisilicon,pcie-sas-subctrl"); ··· 215 182 return 0; 216 183 } 217 184 185 + static struct pcie_soc_ops hip05_ops = { 186 + &hisi_pcie_link_up_hip05 187 + }; 188 + 189 + static 
struct pcie_soc_ops hip06_ops = { 190 + &hisi_pcie_link_up_hip06 191 + }; 192 + 218 193 static const struct of_device_id hisi_pcie_of_match[] = { 219 - {.compatible = "hisilicon,hip05-pcie",}, 194 + { 195 + .compatible = "hisilicon,hip05-pcie", 196 + .data = (void *) &hip05_ops, 197 + }, 198 + { 199 + .compatible = "hisilicon,hip06-pcie", 200 + .data = (void *) &hip06_ops, 201 + }, 220 202 {}, 221 203 }; 204 + 222 205 223 206 MODULE_DEVICE_TABLE(of, hisi_pcie_of_match); 224 207 ··· 247 198 }; 248 199 249 200 module_platform_driver(hisi_pcie_driver); 201 + 202 + MODULE_AUTHOR("Zhou Wang <wangzhou1@hisilicon.com>"); 203 + MODULE_AUTHOR("Dacai Zhu <zhudacai@hisilicon.com>"); 204 + MODULE_AUTHOR("Gabriele Paoloni <gabriele.paoloni@huawei.com>"); 205 + MODULE_LICENSE("GPL v2");
+1
drivers/pci/host/pcie-iproc-bcma.c
··· 55 55 bcma_set_drvdata(bdev, pcie); 56 56 57 57 pcie->base = bdev->io_addr; 58 + pcie->base_addr = bdev->addr; 58 59 59 60 res_mem.start = bdev->addr_s[0]; 60 61 res_mem.end = bdev->addr_s[0] + SZ_128M - 1;
+675
drivers/pci/host/pcie-iproc-msi.c
··· 1 + /* 2 + * Copyright (C) 2015 Broadcom Corporation 3 + * 4 + * This program is free software; you can redistribute it and/or 5 + * modify it under the terms of the GNU General Public License as 6 + * published by the Free Software Foundation version 2. 7 + * 8 + * This program is distributed "as is" WITHOUT ANY WARRANTY of any 9 + * kind, whether express or implied; without even the implied warranty 10 + * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 + * GNU General Public License for more details. 12 + */ 13 + 14 + #include <linux/interrupt.h> 15 + #include <linux/irqchip/chained_irq.h> 16 + #include <linux/irqdomain.h> 17 + #include <linux/msi.h> 18 + #include <linux/of_irq.h> 19 + #include <linux/of_pci.h> 20 + #include <linux/pci.h> 21 + 22 + #include "pcie-iproc.h" 23 + 24 + #define IPROC_MSI_INTR_EN_SHIFT 11 25 + #define IPROC_MSI_INTR_EN BIT(IPROC_MSI_INTR_EN_SHIFT) 26 + #define IPROC_MSI_INT_N_EVENT_SHIFT 1 27 + #define IPROC_MSI_INT_N_EVENT BIT(IPROC_MSI_INT_N_EVENT_SHIFT) 28 + #define IPROC_MSI_EQ_EN_SHIFT 0 29 + #define IPROC_MSI_EQ_EN BIT(IPROC_MSI_EQ_EN_SHIFT) 30 + 31 + #define IPROC_MSI_EQ_MASK 0x3f 32 + 33 + /* Max number of GIC interrupts */ 34 + #define NR_HW_IRQS 6 35 + 36 + /* Number of entries in each event queue */ 37 + #define EQ_LEN 64 38 + 39 + /* Size of each event queue memory region */ 40 + #define EQ_MEM_REGION_SIZE SZ_4K 41 + 42 + /* Size of each MSI address region */ 43 + #define MSI_MEM_REGION_SIZE SZ_4K 44 + 45 + enum iproc_msi_reg { 46 + IPROC_MSI_EQ_PAGE = 0, 47 + IPROC_MSI_EQ_PAGE_UPPER, 48 + IPROC_MSI_PAGE, 49 + IPROC_MSI_PAGE_UPPER, 50 + IPROC_MSI_CTRL, 51 + IPROC_MSI_EQ_HEAD, 52 + IPROC_MSI_EQ_TAIL, 53 + IPROC_MSI_INTS_EN, 54 + IPROC_MSI_REG_SIZE, 55 + }; 56 + 57 + struct iproc_msi; 58 + 59 + /** 60 + * iProc MSI group 61 + * 62 + * One MSI group is allocated per GIC interrupt, serviced by one iProc MSI 63 + * event queue. 
64 + * 65 + * @msi: pointer to iProc MSI data 66 + * @gic_irq: GIC interrupt 67 + * @eq: Event queue number 68 + */ 69 + struct iproc_msi_grp { 70 + struct iproc_msi *msi; 71 + int gic_irq; 72 + unsigned int eq; 73 + }; 74 + 75 + /** 76 + * iProc event queue based MSI 77 + * 78 + * Only meant to be used on platforms without MSI support integrated into the 79 + * GIC. 80 + * 81 + * @pcie: pointer to iProc PCIe data 82 + * @reg_offsets: MSI register offsets 83 + * @grps: MSI groups 84 + * @nr_irqs: number of total interrupts connected to GIC 85 + * @nr_cpus: number of total CPUs 86 + * @has_inten_reg: indicates the MSI interrupt enable register needs to be 87 + * set explicitly (required for some legacy platforms) 88 + * @bitmap: MSI vector bitmap 89 + * @bitmap_lock: lock to protect access to the MSI bitmap 90 + * @nr_msi_vecs: total number of MSI vectors 91 + * @inner_domain: inner IRQ domain 92 + * @msi_domain: MSI IRQ domain 93 + * @nr_eq_region: required number of 4K aligned memory region for MSI event 94 + * queues 95 + * @nr_msi_region: required number of 4K aligned address region for MSI posted 96 + * writes 97 + * @eq_cpu: pointer to allocated memory region for MSI event queues 98 + * @eq_dma: DMA address of MSI event queues 99 + * @msi_addr: MSI address 100 + */ 101 + struct iproc_msi { 102 + struct iproc_pcie *pcie; 103 + const u16 (*reg_offsets)[IPROC_MSI_REG_SIZE]; 104 + struct iproc_msi_grp *grps; 105 + int nr_irqs; 106 + int nr_cpus; 107 + bool has_inten_reg; 108 + unsigned long *bitmap; 109 + struct mutex bitmap_lock; 110 + unsigned int nr_msi_vecs; 111 + struct irq_domain *inner_domain; 112 + struct irq_domain *msi_domain; 113 + unsigned int nr_eq_region; 114 + unsigned int nr_msi_region; 115 + void *eq_cpu; 116 + dma_addr_t eq_dma; 117 + phys_addr_t msi_addr; 118 + }; 119 + 120 + static const u16 iproc_msi_reg_paxb[NR_HW_IRQS][IPROC_MSI_REG_SIZE] = { 121 + { 0x200, 0x2c0, 0x204, 0x2c4, 0x210, 0x250, 0x254, 0x208 }, 122 + { 0x200, 0x2c0, 0x204, 0x2c4,
0x214, 0x258, 0x25c, 0x208 }, 123 + { 0x200, 0x2c0, 0x204, 0x2c4, 0x218, 0x260, 0x264, 0x208 }, 124 + { 0x200, 0x2c0, 0x204, 0x2c4, 0x21c, 0x268, 0x26c, 0x208 }, 125 + { 0x200, 0x2c0, 0x204, 0x2c4, 0x220, 0x270, 0x274, 0x208 }, 126 + { 0x200, 0x2c0, 0x204, 0x2c4, 0x224, 0x278, 0x27c, 0x208 }, 127 + }; 128 + 129 + static const u16 iproc_msi_reg_paxc[NR_HW_IRQS][IPROC_MSI_REG_SIZE] = { 130 + { 0xc00, 0xc04, 0xc08, 0xc0c, 0xc40, 0xc50, 0xc60 }, 131 + { 0xc10, 0xc14, 0xc18, 0xc1c, 0xc44, 0xc54, 0xc64 }, 132 + { 0xc20, 0xc24, 0xc28, 0xc2c, 0xc48, 0xc58, 0xc68 }, 133 + { 0xc30, 0xc34, 0xc38, 0xc3c, 0xc4c, 0xc5c, 0xc6c }, 134 + }; 135 + 136 + static inline u32 iproc_msi_read_reg(struct iproc_msi *msi, 137 + enum iproc_msi_reg reg, 138 + unsigned int eq) 139 + { 140 + struct iproc_pcie *pcie = msi->pcie; 141 + 142 + return readl_relaxed(pcie->base + msi->reg_offsets[eq][reg]); 143 + } 144 + 145 + static inline void iproc_msi_write_reg(struct iproc_msi *msi, 146 + enum iproc_msi_reg reg, 147 + int eq, u32 val) 148 + { 149 + struct iproc_pcie *pcie = msi->pcie; 150 + 151 + writel_relaxed(val, pcie->base + msi->reg_offsets[eq][reg]); 152 + } 153 + 154 + static inline u32 hwirq_to_group(struct iproc_msi *msi, unsigned long hwirq) 155 + { 156 + return (hwirq % msi->nr_irqs); 157 + } 158 + 159 + static inline unsigned int iproc_msi_addr_offset(struct iproc_msi *msi, 160 + unsigned long hwirq) 161 + { 162 + if (msi->nr_msi_region > 1) 163 + return hwirq_to_group(msi, hwirq) * MSI_MEM_REGION_SIZE; 164 + else 165 + return hwirq_to_group(msi, hwirq) * sizeof(u32); 166 + } 167 + 168 + static inline unsigned int iproc_msi_eq_offset(struct iproc_msi *msi, u32 eq) 169 + { 170 + if (msi->nr_eq_region > 1) 171 + return eq * EQ_MEM_REGION_SIZE; 172 + else 173 + return eq * EQ_LEN * sizeof(u32); 174 + } 175 + 176 + static struct irq_chip iproc_msi_irq_chip = { 177 + .name = "iProc-MSI", 178 + }; 179 + 180 + static struct msi_domain_info iproc_msi_domain_info = { 181 + .flags = 
MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS | 182 + MSI_FLAG_PCI_MSIX, 183 + .chip = &iproc_msi_irq_chip, 184 + }; 185 + 186 + /* 187 + * In iProc PCIe core, each MSI group is serviced by a GIC interrupt and a 188 + * dedicated event queue. Each MSI group can support up to 64 MSI vectors. 189 + * 190 + * The number of MSI groups varies between different iProc SoCs. The total 191 + * number of CPU cores also varies. To support MSI IRQ affinity, we 192 + * distribute GIC interrupts across all available CPUs. MSI vector is moved 193 + * from one GIC interrupt to another to steer to the target CPU. 194 + * 195 + * Assuming: 196 + * - the number of MSI groups is M 197 + * - the number of CPU cores is N 198 + * - M is always a multiple of N 199 + * 200 + * Total number of raw MSI vectors = M * 64 201 + * Total number of supported MSI vectors = (M * 64) / N 202 + */ 203 + static inline int hwirq_to_cpu(struct iproc_msi *msi, unsigned long hwirq) 204 + { 205 + return (hwirq % msi->nr_cpus); 206 + } 207 + 208 + static inline unsigned long hwirq_to_canonical_hwirq(struct iproc_msi *msi, 209 + unsigned long hwirq) 210 + { 211 + return (hwirq - hwirq_to_cpu(msi, hwirq)); 212 + } 213 + 214 + static int iproc_msi_irq_set_affinity(struct irq_data *data, 215 + const struct cpumask *mask, bool force) 216 + { 217 + struct iproc_msi *msi = irq_data_get_irq_chip_data(data); 218 + int target_cpu = cpumask_first(mask); 219 + int curr_cpu; 220 + 221 + curr_cpu = hwirq_to_cpu(msi, data->hwirq); 222 + if (curr_cpu == target_cpu) 223 + return IRQ_SET_MASK_OK_DONE; 224 + 225 + /* steer MSI to the target CPU */ 226 + data->hwirq = hwirq_to_canonical_hwirq(msi, data->hwirq) + target_cpu; 227 + 228 + return IRQ_SET_MASK_OK; 229 + } 230 + 231 + static void iproc_msi_irq_compose_msi_msg(struct irq_data *data, 232 + struct msi_msg *msg) 233 + { 234 + struct iproc_msi *msi = irq_data_get_irq_chip_data(data); 235 + dma_addr_t addr; 236 + 237 + addr = msi->msi_addr + 
iproc_msi_addr_offset(msi, data->hwirq); 238 + msg->address_lo = lower_32_bits(addr); 239 + msg->address_hi = upper_32_bits(addr); 240 + msg->data = data->hwirq; 241 + } 242 + 243 + static struct irq_chip iproc_msi_bottom_irq_chip = { 244 + .name = "MSI", 245 + .irq_set_affinity = iproc_msi_irq_set_affinity, 246 + .irq_compose_msi_msg = iproc_msi_irq_compose_msi_msg, 247 + }; 248 + 249 + static int iproc_msi_irq_domain_alloc(struct irq_domain *domain, 250 + unsigned int virq, unsigned int nr_irqs, 251 + void *args) 252 + { 253 + struct iproc_msi *msi = domain->host_data; 254 + int hwirq; 255 + 256 + mutex_lock(&msi->bitmap_lock); 257 + 258 + /* Allocate 'nr_cpus' number of MSI vectors each time */ 259 + hwirq = bitmap_find_next_zero_area(msi->bitmap, msi->nr_msi_vecs, 0, 260 + msi->nr_cpus, 0); 261 + if (hwirq < msi->nr_msi_vecs) { 262 + bitmap_set(msi->bitmap, hwirq, msi->nr_cpus); 263 + } else { 264 + mutex_unlock(&msi->bitmap_lock); 265 + return -ENOSPC; 266 + } 267 + 268 + mutex_unlock(&msi->bitmap_lock); 269 + 270 + irq_domain_set_info(domain, virq, hwirq, &iproc_msi_bottom_irq_chip, 271 + domain->host_data, handle_simple_irq, NULL, NULL); 272 + 273 + return 0; 274 + } 275 + 276 + static void iproc_msi_irq_domain_free(struct irq_domain *domain, 277 + unsigned int virq, unsigned int nr_irqs) 278 + { 279 + struct irq_data *data = irq_domain_get_irq_data(domain, virq); 280 + struct iproc_msi *msi = irq_data_get_irq_chip_data(data); 281 + unsigned int hwirq; 282 + 283 + mutex_lock(&msi->bitmap_lock); 284 + 285 + hwirq = hwirq_to_canonical_hwirq(msi, data->hwirq); 286 + bitmap_clear(msi->bitmap, hwirq, msi->nr_cpus); 287 + 288 + mutex_unlock(&msi->bitmap_lock); 289 + 290 + irq_domain_free_irqs_parent(domain, virq, nr_irqs); 291 + } 292 + 293 + static const struct irq_domain_ops msi_domain_ops = { 294 + .alloc = iproc_msi_irq_domain_alloc, 295 + .free = iproc_msi_irq_domain_free, 296 + }; 297 + 298 + static inline u32 decode_msi_hwirq(struct iproc_msi *msi, u32 eq, 
u32 head) 299 + { 300 + u32 *msg, hwirq; 301 + unsigned int offs; 302 + 303 + offs = iproc_msi_eq_offset(msi, eq) + head * sizeof(u32); 304 + msg = (u32 *)(msi->eq_cpu + offs); 305 + hwirq = *msg & IPROC_MSI_EQ_MASK; 306 + 307 + /* 308 + * Since we have multiple hwirqs mapped to a single MSI vector, 309 + * we now need to derive the hwirq at CPU0. It can then be used to 310 + * map back to virq. 311 + */ 312 + return hwirq_to_canonical_hwirq(msi, hwirq); 313 + } 314 + 315 + static void iproc_msi_handler(struct irq_desc *desc) 316 + { 317 + struct irq_chip *chip = irq_desc_get_chip(desc); 318 + struct iproc_msi_grp *grp; 319 + struct iproc_msi *msi; 320 + struct iproc_pcie *pcie; 321 + u32 eq, head, tail, nr_events; 322 + unsigned long hwirq; 323 + int virq; 324 + 325 + chained_irq_enter(chip, desc); 326 + 327 + grp = irq_desc_get_handler_data(desc); 328 + msi = grp->msi; 329 + pcie = msi->pcie; 330 + eq = grp->eq; 331 + 332 + /* 333 + * iProc MSI event queue is tracked by head and tail pointers. Head 334 + * pointer indicates the next entry (MSI data) to be consumed by SW in 335 + * the queue and needs to be updated by SW. iProc MSI core uses the 336 + * tail pointer as the next data insertion point. 337 + * 338 + * Entries between head and tail pointers contain valid MSI data. MSI 339 + * data is guaranteed to be in the event queue memory before the tail 340 + * pointer is updated by the iProc MSI core. 341 + */ 342 + head = iproc_msi_read_reg(msi, IPROC_MSI_EQ_HEAD, 343 + eq) & IPROC_MSI_EQ_MASK; 344 + do { 345 + tail = iproc_msi_read_reg(msi, IPROC_MSI_EQ_TAIL, 346 + eq) & IPROC_MSI_EQ_MASK; 347 + 348 + /* 349 + * Figure out total number of events (MSI data) to be 350 + * processed. 351 + */ 352 + nr_events = (tail < head) ?
353 + (EQ_LEN - (head - tail)) : (tail - head); 354 + if (!nr_events) 355 + break; 356 + 357 + /* process all outstanding events */ 358 + while (nr_events--) { 359 + hwirq = decode_msi_hwirq(msi, eq, head); 360 + virq = irq_find_mapping(msi->inner_domain, hwirq); 361 + generic_handle_irq(virq); 362 + 363 + head++; 364 + head %= EQ_LEN; 365 + } 366 + 367 + /* 368 + * Now all outstanding events have been processed. Update the 369 + * head pointer. 370 + */ 371 + iproc_msi_write_reg(msi, IPROC_MSI_EQ_HEAD, eq, head); 372 + 373 + /* 374 + * Now go read the tail pointer again to see if there are new 375 + * outstanding events that came in during the above window. 376 + */ 377 + } while (true); 378 + 379 + chained_irq_exit(chip, desc); 380 + } 381 + 382 + static void iproc_msi_enable(struct iproc_msi *msi) 383 + { 384 + int i, eq; 385 + u32 val; 386 + 387 + /* Program memory region for each event queue */ 388 + for (i = 0; i < msi->nr_eq_region; i++) { 389 + dma_addr_t addr = msi->eq_dma + (i * EQ_MEM_REGION_SIZE); 390 + 391 + iproc_msi_write_reg(msi, IPROC_MSI_EQ_PAGE, i, 392 + lower_32_bits(addr)); 393 + iproc_msi_write_reg(msi, IPROC_MSI_EQ_PAGE_UPPER, i, 394 + upper_32_bits(addr)); 395 + } 396 + 397 + /* Program address region for MSI posted writes */ 398 + for (i = 0; i < msi->nr_msi_region; i++) { 399 + phys_addr_t addr = msi->msi_addr + (i * MSI_MEM_REGION_SIZE); 400 + 401 + iproc_msi_write_reg(msi, IPROC_MSI_PAGE, i, 402 + lower_32_bits(addr)); 403 + iproc_msi_write_reg(msi, IPROC_MSI_PAGE_UPPER, i, 404 + upper_32_bits(addr)); 405 + } 406 + 407 + for (eq = 0; eq < msi->nr_irqs; eq++) { 408 + /* Enable MSI event queue */ 409 + val = IPROC_MSI_INTR_EN | IPROC_MSI_INT_N_EVENT | 410 + IPROC_MSI_EQ_EN; 411 + iproc_msi_write_reg(msi, IPROC_MSI_CTRL, eq, val); 412 + 413 + /* 414 + * Some legacy platforms require the MSI interrupt enable 415 + * register to be set explicitly.
416 + */ 417 + if (msi->has_inten_reg) { 418 + val = iproc_msi_read_reg(msi, IPROC_MSI_INTS_EN, eq); 419 + val |= BIT(eq); 420 + iproc_msi_write_reg(msi, IPROC_MSI_INTS_EN, eq, val); 421 + } 422 + } 423 + } 424 + 425 + static void iproc_msi_disable(struct iproc_msi *msi) 426 + { 427 + u32 eq, val; 428 + 429 + for (eq = 0; eq < msi->nr_irqs; eq++) { 430 + if (msi->has_inten_reg) { 431 + val = iproc_msi_read_reg(msi, IPROC_MSI_INTS_EN, eq); 432 + val &= ~BIT(eq); 433 + iproc_msi_write_reg(msi, IPROC_MSI_INTS_EN, eq, val); 434 + } 435 + 436 + val = iproc_msi_read_reg(msi, IPROC_MSI_CTRL, eq); 437 + val &= ~(IPROC_MSI_INTR_EN | IPROC_MSI_INT_N_EVENT | 438 + IPROC_MSI_EQ_EN); 439 + iproc_msi_write_reg(msi, IPROC_MSI_CTRL, eq, val); 440 + } 441 + } 442 + 443 + static int iproc_msi_alloc_domains(struct device_node *node, 444 + struct iproc_msi *msi) 445 + { 446 + msi->inner_domain = irq_domain_add_linear(NULL, msi->nr_msi_vecs, 447 + &msi_domain_ops, msi); 448 + if (!msi->inner_domain) 449 + return -ENOMEM; 450 + 451 + msi->msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(node), 452 + &iproc_msi_domain_info, 453 + msi->inner_domain); 454 + if (!msi->msi_domain) { 455 + irq_domain_remove(msi->inner_domain); 456 + return -ENOMEM; 457 + } 458 + 459 + return 0; 460 + } 461 + 462 + static void iproc_msi_free_domains(struct iproc_msi *msi) 463 + { 464 + if (msi->msi_domain) 465 + irq_domain_remove(msi->msi_domain); 466 + 467 + if (msi->inner_domain) 468 + irq_domain_remove(msi->inner_domain); 469 + } 470 + 471 + static void iproc_msi_irq_free(struct iproc_msi *msi, unsigned int cpu) 472 + { 473 + int i; 474 + 475 + for (i = cpu; i < msi->nr_irqs; i += msi->nr_cpus) { 476 + irq_set_chained_handler_and_data(msi->grps[i].gic_irq, 477 + NULL, NULL); 478 + } 479 + } 480 + 481 + static int iproc_msi_irq_setup(struct iproc_msi *msi, unsigned int cpu) 482 + { 483 + int i, ret; 484 + cpumask_var_t mask; 485 + struct iproc_pcie *pcie = msi->pcie; 486 + 487 + for (i = cpu; i < 
msi->nr_irqs; i += msi->nr_cpus) { 488 + irq_set_chained_handler_and_data(msi->grps[i].gic_irq, 489 + iproc_msi_handler, 490 + &msi->grps[i]); 491 + /* Dedicate GIC interrupt to each CPU core */ 492 + if (alloc_cpumask_var(&mask, GFP_KERNEL)) { 493 + cpumask_clear(mask); 494 + cpumask_set_cpu(cpu, mask); 495 + ret = irq_set_affinity(msi->grps[i].gic_irq, mask); 496 + if (ret) 497 + dev_err(pcie->dev, 498 + "failed to set affinity for IRQ%d\n", 499 + msi->grps[i].gic_irq); 500 + free_cpumask_var(mask); 501 + } else { 502 + dev_err(pcie->dev, "failed to alloc CPU mask\n"); 503 + ret = -EINVAL; 504 + } 505 + 506 + if (ret) { 507 + /* Free all configured/unconfigured IRQs */ 508 + iproc_msi_irq_free(msi, cpu); 509 + return ret; 510 + } 511 + } 512 + 513 + return 0; 514 + } 515 + 516 + int iproc_msi_init(struct iproc_pcie *pcie, struct device_node *node) 517 + { 518 + struct iproc_msi *msi; 519 + int i, ret; 520 + unsigned int cpu; 521 + 522 + if (!of_device_is_compatible(node, "brcm,iproc-msi")) 523 + return -ENODEV; 524 + 525 + if (!of_find_property(node, "msi-controller", NULL)) 526 + return -ENODEV; 527 + 528 + if (pcie->msi) 529 + return -EBUSY; 530 + 531 + msi = devm_kzalloc(pcie->dev, sizeof(*msi), GFP_KERNEL); 532 + if (!msi) 533 + return -ENOMEM; 534 + 535 + msi->pcie = pcie; 536 + pcie->msi = msi; 537 + msi->msi_addr = pcie->base_addr; 538 + mutex_init(&msi->bitmap_lock); 539 + msi->nr_cpus = num_possible_cpus(); 540 + 541 + msi->nr_irqs = of_irq_count(node); 542 + if (!msi->nr_irqs) { 543 + dev_err(pcie->dev, "found no MSI GIC interrupt\n"); 544 + return -ENODEV; 545 + } 546 + 547 + if (msi->nr_irqs > NR_HW_IRQS) { 548 + dev_warn(pcie->dev, "too many MSI GIC interrupts defined %d\n", 549 + msi->nr_irqs); 550 + msi->nr_irqs = NR_HW_IRQS; 551 + } 552 + 553 + if (msi->nr_irqs < msi->nr_cpus) { 554 + dev_err(pcie->dev, 555 + "not enough GIC interrupts for MSI affinity\n"); 556 + return -EINVAL; 557 + } 558 + 559 + if (msi->nr_irqs % msi->nr_cpus != 0) { 560 + 
msi->nr_irqs -= msi->nr_irqs % msi->nr_cpus; 561 + dev_warn(pcie->dev, "Reducing number of interrupts to %d\n", 562 + msi->nr_irqs); 563 + } 564 + 565 + switch (pcie->type) { 566 + case IPROC_PCIE_PAXB: 567 + msi->reg_offsets = iproc_msi_reg_paxb; 568 + msi->nr_eq_region = 1; 569 + msi->nr_msi_region = 1; 570 + break; 571 + case IPROC_PCIE_PAXC: 572 + msi->reg_offsets = iproc_msi_reg_paxc; 573 + msi->nr_eq_region = msi->nr_irqs; 574 + msi->nr_msi_region = msi->nr_irqs; 575 + break; 576 + default: 577 + dev_err(pcie->dev, "incompatible iProc PCIe interface\n"); 578 + return -EINVAL; 579 + } 580 + 581 + if (of_find_property(node, "brcm,pcie-msi-inten", NULL)) 582 + msi->has_inten_reg = true; 583 + 584 + msi->nr_msi_vecs = msi->nr_irqs * EQ_LEN; 585 + msi->bitmap = devm_kcalloc(pcie->dev, BITS_TO_LONGS(msi->nr_msi_vecs), 586 + sizeof(*msi->bitmap), GFP_KERNEL); 587 + if (!msi->bitmap) 588 + return -ENOMEM; 589 + 590 + msi->grps = devm_kcalloc(pcie->dev, msi->nr_irqs, sizeof(*msi->grps), 591 + GFP_KERNEL); 592 + if (!msi->grps) 593 + return -ENOMEM; 594 + 595 + for (i = 0; i < msi->nr_irqs; i++) { 596 + unsigned int irq = irq_of_parse_and_map(node, i); 597 + 598 + if (!irq) { 599 + dev_err(pcie->dev, "unable to parse/map interrupt\n"); 600 + ret = -ENODEV; 601 + goto free_irqs; 602 + } 603 + msi->grps[i].gic_irq = irq; 604 + msi->grps[i].msi = msi; 605 + msi->grps[i].eq = i; 606 + } 607 + 608 + /* Reserve memory for event queue and make sure memories are zeroed */ 609 + msi->eq_cpu = dma_zalloc_coherent(pcie->dev, 610 + msi->nr_eq_region * EQ_MEM_REGION_SIZE, 611 + &msi->eq_dma, GFP_KERNEL); 612 + if (!msi->eq_cpu) { 613 + ret = -ENOMEM; 614 + goto free_irqs; 615 + } 616 + 617 + ret = iproc_msi_alloc_domains(node, msi); 618 + if (ret) { 619 + dev_err(pcie->dev, "failed to create MSI domains\n"); 620 + goto free_eq_dma; 621 + } 622 + 623 + for_each_online_cpu(cpu) { 624 + ret = iproc_msi_irq_setup(msi, cpu); 625 + if (ret) 626 + goto free_msi_irq; 627 + } 628 + 629 + 
iproc_msi_enable(msi); 630 + 631 + return 0; 632 + 633 + free_msi_irq: 634 + for_each_online_cpu(cpu) 635 + iproc_msi_irq_free(msi, cpu); 636 + iproc_msi_free_domains(msi); 637 + 638 + free_eq_dma: 639 + dma_free_coherent(pcie->dev, msi->nr_eq_region * EQ_MEM_REGION_SIZE, 640 + msi->eq_cpu, msi->eq_dma); 641 + 642 + free_irqs: 643 + for (i = 0; i < msi->nr_irqs; i++) { 644 + if (msi->grps[i].gic_irq) 645 + irq_dispose_mapping(msi->grps[i].gic_irq); 646 + } 647 + pcie->msi = NULL; 648 + return ret; 649 + } 650 + EXPORT_SYMBOL(iproc_msi_init); 651 + 652 + void iproc_msi_exit(struct iproc_pcie *pcie) 653 + { 654 + struct iproc_msi *msi = pcie->msi; 655 + unsigned int i, cpu; 656 + 657 + if (!msi) 658 + return; 659 + 660 + iproc_msi_disable(msi); 661 + 662 + for_each_online_cpu(cpu) 663 + iproc_msi_irq_free(msi, cpu); 664 + 665 + iproc_msi_free_domains(msi); 666 + 667 + dma_free_coherent(pcie->dev, msi->nr_eq_region * EQ_MEM_REGION_SIZE, 668 + msi->eq_cpu, msi->eq_dma); 669 + 670 + for (i = 0; i < msi->nr_irqs; i++) { 671 + if (msi->grps[i].gic_irq) 672 + irq_dispose_mapping(msi->grps[i].gic_irq); 673 + } 674 + } 675 + EXPORT_SYMBOL(iproc_msi_exit);
+19 -6
drivers/pci/host/pcie-iproc-platform.c
··· 26 26 27 27 #include "pcie-iproc.h" 28 28 29 + static const struct of_device_id iproc_pcie_of_match_table[] = { 30 + { 31 + .compatible = "brcm,iproc-pcie", 32 + .data = (int *)IPROC_PCIE_PAXB, 33 + }, { 34 + .compatible = "brcm,iproc-pcie-paxc", 35 + .data = (int *)IPROC_PCIE_PAXC, 36 + }, 37 + { /* sentinel */ } 38 + }; 39 + MODULE_DEVICE_TABLE(of, iproc_pcie_of_match_table); 40 + 29 41 static int iproc_pcie_pltfm_probe(struct platform_device *pdev) 30 42 { 43 + const struct of_device_id *of_id; 31 44 struct iproc_pcie *pcie; 32 45 struct device_node *np = pdev->dev.of_node; 33 46 struct resource reg; ··· 48 35 LIST_HEAD(res); 49 36 int ret; 50 37 38 + of_id = of_match_device(iproc_pcie_of_match_table, &pdev->dev); 39 + if (!of_id) 40 + return -EINVAL; 41 + 51 42 pcie = devm_kzalloc(&pdev->dev, sizeof(struct iproc_pcie), GFP_KERNEL); 52 43 if (!pcie) 53 44 return -ENOMEM; 54 45 55 46 pcie->dev = &pdev->dev; 47 + pcie->type = (enum iproc_pcie_type)of_id->data; 56 48 platform_set_drvdata(pdev, pcie); 57 49 58 50 ret = of_address_to_resource(np, 0, &reg); ··· 71 53 dev_err(pcie->dev, "unable to map controller registers\n"); 72 54 return -ENOMEM; 73 55 } 56 + pcie->base_addr = reg.start; 74 57 75 58 if (of_property_read_bool(np, "brcm,pcie-ob")) { 76 59 u32 val; ··· 132 113 133 114 return iproc_pcie_remove(pcie); 134 115 } 135 - 136 - static const struct of_device_id iproc_pcie_of_match_table[] = { 137 - { .compatible = "brcm,iproc-pcie", }, 138 - { /* sentinel */ } 139 - }; 140 - MODULE_DEVICE_TABLE(of, iproc_pcie_of_match_table); 141 116 142 117 static struct platform_driver iproc_pcie_pltfm_driver = { 143 118 .driver = {
+195 -35
drivers/pci/host/pcie-iproc.c
··· 30 30 31 31 #include "pcie-iproc.h" 32 32 33 - #define CLK_CONTROL_OFFSET 0x000 34 33 #define EP_PERST_SOURCE_SELECT_SHIFT 2 35 34 #define EP_PERST_SOURCE_SELECT BIT(EP_PERST_SOURCE_SELECT_SHIFT) 36 35 #define EP_MODE_SURVIVE_PERST_SHIFT 1 37 36 #define EP_MODE_SURVIVE_PERST BIT(EP_MODE_SURVIVE_PERST_SHIFT) 38 37 #define RC_PCIE_RST_OUTPUT_SHIFT 0 39 38 #define RC_PCIE_RST_OUTPUT BIT(RC_PCIE_RST_OUTPUT_SHIFT) 39 + #define PAXC_RESET_MASK 0x7f 40 40 41 - #define CFG_IND_ADDR_OFFSET 0x120 42 41 #define CFG_IND_ADDR_MASK 0x00001ffc 43 42 44 - #define CFG_IND_DATA_OFFSET 0x124 45 - 46 - #define CFG_ADDR_OFFSET 0x1f8 47 43 #define CFG_ADDR_BUS_NUM_SHIFT 20 48 44 #define CFG_ADDR_BUS_NUM_MASK 0x0ff00000 49 45 #define CFG_ADDR_DEV_NUM_SHIFT 15 ··· 51 55 #define CFG_ADDR_CFG_TYPE_SHIFT 0 52 56 #define CFG_ADDR_CFG_TYPE_MASK 0x00000003 53 57 54 - #define CFG_DATA_OFFSET 0x1fc 55 - 56 - #define SYS_RC_INTX_EN 0x330 57 58 #define SYS_RC_INTX_MASK 0xf 58 59 59 - #define PCIE_LINK_STATUS_OFFSET 0xf0c 60 60 #define PCIE_PHYLINKUP_SHIFT 3 61 61 #define PCIE_PHYLINKUP BIT(PCIE_PHYLINKUP_SHIFT) 62 62 #define PCIE_DL_ACTIVE_SHIFT 2 ··· 63 71 #define OARR_SIZE_CFG_SHIFT 1 64 72 #define OARR_SIZE_CFG BIT(OARR_SIZE_CFG_SHIFT) 65 73 66 - #define OARR_LO(window) (0xd20 + (window) * 8) 67 - #define OARR_HI(window) (0xd24 + (window) * 8) 68 - #define OMAP_LO(window) (0xd40 + (window) * 8) 69 - #define OMAP_HI(window) (0xd44 + (window) * 8) 70 - 71 74 #define MAX_NUM_OB_WINDOWS 2 75 + #define MAX_NUM_PAXC_PF 4 76 + 77 + #define IPROC_PCIE_REG_INVALID 0xffff 78 + 79 + enum iproc_pcie_reg { 80 + IPROC_PCIE_CLK_CTRL = 0, 81 + IPROC_PCIE_CFG_IND_ADDR, 82 + IPROC_PCIE_CFG_IND_DATA, 83 + IPROC_PCIE_CFG_ADDR, 84 + IPROC_PCIE_CFG_DATA, 85 + IPROC_PCIE_INTX_EN, 86 + IPROC_PCIE_OARR_LO, 87 + IPROC_PCIE_OARR_HI, 88 + IPROC_PCIE_OMAP_LO, 89 + IPROC_PCIE_OMAP_HI, 90 + IPROC_PCIE_LINK_STATUS, 91 + }; 92 + 93 + /* iProc PCIe PAXB registers */ 94 + static const u16 iproc_pcie_reg_paxb[] = { 95 + 
[IPROC_PCIE_CLK_CTRL] = 0x000, 96 + [IPROC_PCIE_CFG_IND_ADDR] = 0x120, 97 + [IPROC_PCIE_CFG_IND_DATA] = 0x124, 98 + [IPROC_PCIE_CFG_ADDR] = 0x1f8, 99 + [IPROC_PCIE_CFG_DATA] = 0x1fc, 100 + [IPROC_PCIE_INTX_EN] = 0x330, 101 + [IPROC_PCIE_OARR_LO] = 0xd20, 102 + [IPROC_PCIE_OARR_HI] = 0xd24, 103 + [IPROC_PCIE_OMAP_LO] = 0xd40, 104 + [IPROC_PCIE_OMAP_HI] = 0xd44, 105 + [IPROC_PCIE_LINK_STATUS] = 0xf0c, 106 + }; 107 + 108 + /* iProc PCIe PAXC v1 registers */ 109 + static const u16 iproc_pcie_reg_paxc[] = { 110 + [IPROC_PCIE_CLK_CTRL] = 0x000, 111 + [IPROC_PCIE_CFG_IND_ADDR] = 0x1f0, 112 + [IPROC_PCIE_CFG_IND_DATA] = 0x1f4, 113 + [IPROC_PCIE_CFG_ADDR] = 0x1f8, 114 + [IPROC_PCIE_CFG_DATA] = 0x1fc, 115 + [IPROC_PCIE_INTX_EN] = IPROC_PCIE_REG_INVALID, 116 + [IPROC_PCIE_OARR_LO] = IPROC_PCIE_REG_INVALID, 117 + [IPROC_PCIE_OARR_HI] = IPROC_PCIE_REG_INVALID, 118 + [IPROC_PCIE_OMAP_LO] = IPROC_PCIE_REG_INVALID, 119 + [IPROC_PCIE_OMAP_HI] = IPROC_PCIE_REG_INVALID, 120 + [IPROC_PCIE_LINK_STATUS] = IPROC_PCIE_REG_INVALID, 121 + }; 72 122 73 123 static inline struct iproc_pcie *iproc_data(struct pci_bus *bus) 74 124 { ··· 123 89 pcie = bus->sysdata; 124 90 #endif 125 91 return pcie; 92 + } 93 + 94 + static inline bool iproc_pcie_reg_is_invalid(u16 reg_offset) 95 + { 96 + return !!(reg_offset == IPROC_PCIE_REG_INVALID); 97 + } 98 + 99 + static inline u16 iproc_pcie_reg_offset(struct iproc_pcie *pcie, 100 + enum iproc_pcie_reg reg) 101 + { 102 + return pcie->reg_offsets[reg]; 103 + } 104 + 105 + static inline u32 iproc_pcie_read_reg(struct iproc_pcie *pcie, 106 + enum iproc_pcie_reg reg) 107 + { 108 + u16 offset = iproc_pcie_reg_offset(pcie, reg); 109 + 110 + if (iproc_pcie_reg_is_invalid(offset)) 111 + return 0; 112 + 113 + return readl(pcie->base + offset); 114 + } 115 + 116 + static inline void iproc_pcie_write_reg(struct iproc_pcie *pcie, 117 + enum iproc_pcie_reg reg, u32 val) 118 + { 119 + u16 offset = iproc_pcie_reg_offset(pcie, reg); 120 + 121 + if 
(iproc_pcie_reg_is_invalid(offset)) 122 + return; 123 + 124 + writel(val, pcie->base + offset); 125 + } 126 + 127 + static inline void iproc_pcie_ob_write(struct iproc_pcie *pcie, 128 + enum iproc_pcie_reg reg, 129 + unsigned window, u32 val) 130 + { 131 + u16 offset = iproc_pcie_reg_offset(pcie, reg); 132 + 133 + if (iproc_pcie_reg_is_invalid(offset)) 134 + return; 135 + 136 + writel(val, pcie->base + offset + (window * 8)); 137 + } 138 + 139 + static inline bool iproc_pcie_device_is_valid(struct iproc_pcie *pcie, 140 + unsigned int slot, 141 + unsigned int fn) 142 + { 143 + if (slot > 0) 144 + return false; 145 + 146 + /* PAXC can only support limited number of functions */ 147 + if (pcie->type == IPROC_PCIE_PAXC && fn >= MAX_NUM_PAXC_PF) 148 + return false; 149 + 150 + return true; 126 151 } 127 152 128 153 /** ··· 197 104 unsigned fn = PCI_FUNC(devfn); 198 105 unsigned busno = bus->number; 199 106 u32 val; 107 + u16 offset; 108 + 109 + if (!iproc_pcie_device_is_valid(pcie, slot, fn)) 110 + return NULL; 200 111 201 112 /* root complex access */ 202 113 if (busno == 0) { 203 - if (slot >= 1) 114 + iproc_pcie_write_reg(pcie, IPROC_PCIE_CFG_IND_ADDR, 115 + where & CFG_IND_ADDR_MASK); 116 + offset = iproc_pcie_reg_offset(pcie, IPROC_PCIE_CFG_IND_DATA); 117 + if (iproc_pcie_reg_is_invalid(offset)) 204 118 return NULL; 205 - writel(where & CFG_IND_ADDR_MASK, 206 - pcie->base + CFG_IND_ADDR_OFFSET); 207 - return (pcie->base + CFG_IND_DATA_OFFSET); 119 + else 120 + return (pcie->base + offset); 208 121 } 209 - 210 - if (fn > 1) 211 - return NULL; 212 122 213 123 /* EP device access */ 214 124 val = (busno << CFG_ADDR_BUS_NUM_SHIFT) | ··· 219 123 (fn << CFG_ADDR_FUNC_NUM_SHIFT) | 220 124 (where & CFG_ADDR_REG_NUM_MASK) | 221 125 (1 & CFG_ADDR_CFG_TYPE_MASK); 222 - writel(val, pcie->base + CFG_ADDR_OFFSET); 223 - 224 - return (pcie->base + CFG_DATA_OFFSET); 126 + iproc_pcie_write_reg(pcie, IPROC_PCIE_CFG_ADDR, val); 127 + offset = iproc_pcie_reg_offset(pcie, 
IPROC_PCIE_CFG_DATA); 128 + if (iproc_pcie_reg_is_invalid(offset)) 129 + return NULL; 130 + else 131 + return (pcie->base + offset); 225 132 } 226 133 227 134 static struct pci_ops iproc_pcie_ops = { ··· 237 138 { 238 139 u32 val; 239 140 141 + if (pcie->type == IPROC_PCIE_PAXC) { 142 + val = iproc_pcie_read_reg(pcie, IPROC_PCIE_CLK_CTRL); 143 + val &= ~PAXC_RESET_MASK; 144 + iproc_pcie_write_reg(pcie, IPROC_PCIE_CLK_CTRL, val); 145 + udelay(100); 146 + val |= PAXC_RESET_MASK; 147 + iproc_pcie_write_reg(pcie, IPROC_PCIE_CLK_CTRL, val); 148 + udelay(100); 149 + return; 150 + } 151 + 240 152 /* 241 153 * Select perst_b signal as reset source. Put the device into reset, 242 154 * and then bring it out of reset 243 155 */ 244 - val = readl(pcie->base + CLK_CONTROL_OFFSET); 156 + val = iproc_pcie_read_reg(pcie, IPROC_PCIE_CLK_CTRL); 245 157 val &= ~EP_PERST_SOURCE_SELECT & ~EP_MODE_SURVIVE_PERST & 246 158 ~RC_PCIE_RST_OUTPUT; 247 - writel(val, pcie->base + CLK_CONTROL_OFFSET); 159 + iproc_pcie_write_reg(pcie, IPROC_PCIE_CLK_CTRL, val); 248 160 udelay(250); 249 161 250 162 val |= RC_PCIE_RST_OUTPUT; 251 - writel(val, pcie->base + CLK_CONTROL_OFFSET); 163 + iproc_pcie_write_reg(pcie, IPROC_PCIE_CLK_CTRL, val); 252 164 msleep(100); 253 165 } 254 166 ··· 270 160 u16 pos, link_status; 271 161 bool link_is_active = false; 272 162 273 - val = readl(pcie->base + PCIE_LINK_STATUS_OFFSET); 163 + /* 164 + * PAXC connects to emulated endpoint devices directly and does not 165 + * have a Serdes. Therefore skip the link detection logic here. 
166 + */ 167 + if (pcie->type == IPROC_PCIE_PAXC) 168 + return 0; 169 + 170 + val = iproc_pcie_read_reg(pcie, IPROC_PCIE_LINK_STATUS); 274 171 if (!(val & PCIE_PHYLINKUP) || !(val & PCIE_DL_ACTIVE)) { 275 172 dev_err(pcie->dev, "PHY or data link is INACTIVE!\n"); 276 173 return -ENODEV; ··· 338 221 339 222 static void iproc_pcie_enable(struct iproc_pcie *pcie) 340 223 { 341 - writel(SYS_RC_INTX_MASK, pcie->base + SYS_RC_INTX_EN); 224 + iproc_pcie_write_reg(pcie, IPROC_PCIE_INTX_EN, SYS_RC_INTX_MASK); 342 225 } 343 226 344 227 /** ··· 362 245 363 246 if (size > max_size) { 364 247 dev_err(pcie->dev, 365 - "res size 0x%pap exceeds max supported size 0x%llx\n", 248 + "res size %pap exceeds max supported size 0x%llx\n", 366 249 &size, max_size); 367 250 return -EINVAL; 368 251 } ··· 389 272 axi_addr -= ob->axi_offset; 390 273 391 274 for (i = 0; i < MAX_NUM_OB_WINDOWS; i++) { 392 - writel(lower_32_bits(axi_addr) | OARR_VALID | 393 - (ob->set_oarr_size ? 1 : 0), pcie->base + OARR_LO(i)); 394 - writel(upper_32_bits(axi_addr), pcie->base + OARR_HI(i)); 395 - writel(lower_32_bits(pci_addr), pcie->base + OMAP_LO(i)); 396 - writel(upper_32_bits(pci_addr), pcie->base + OMAP_HI(i)); 275 + iproc_pcie_ob_write(pcie, IPROC_PCIE_OARR_LO, i, 276 + lower_32_bits(axi_addr) | OARR_VALID | 277 + (ob->set_oarr_size ? 
1 : 0)); 278 + iproc_pcie_ob_write(pcie, IPROC_PCIE_OARR_HI, i, 279 + upper_32_bits(axi_addr)); 280 + iproc_pcie_ob_write(pcie, IPROC_PCIE_OMAP_LO, i, 281 + lower_32_bits(pci_addr)); 282 + iproc_pcie_ob_write(pcie, IPROC_PCIE_OMAP_HI, i, 283 + upper_32_bits(pci_addr)); 397 284 398 285 size -= ob->window_size; 399 286 if (size == 0) ··· 440 319 return 0; 441 320 } 442 321 322 + static int iproc_pcie_msi_enable(struct iproc_pcie *pcie) 323 + { 324 + struct device_node *msi_node; 325 + 326 + msi_node = of_parse_phandle(pcie->dev->of_node, "msi-parent", 0); 327 + if (!msi_node) 328 + return -ENODEV; 329 + 330 + /* 331 + * If another MSI controller is being used, the call below should fail 332 + * but that is okay 333 + */ 334 + return iproc_msi_init(pcie, msi_node); 335 + } 336 + 337 + static void iproc_pcie_msi_disable(struct iproc_pcie *pcie) 338 + { 339 + iproc_msi_exit(pcie); 340 + } 341 + 443 342 int iproc_pcie_setup(struct iproc_pcie *pcie, struct list_head *res) 444 343 { 445 344 int ret; ··· 479 338 if (ret) { 480 339 dev_err(pcie->dev, "unable to power on PCIe PHY\n"); 481 340 goto err_exit_phy; 341 + } 342 + 343 + switch (pcie->type) { 344 + case IPROC_PCIE_PAXB: 345 + pcie->reg_offsets = iproc_pcie_reg_paxb; 346 + break; 347 + case IPROC_PCIE_PAXC: 348 + pcie->reg_offsets = iproc_pcie_reg_paxc; 349 + break; 350 + default: 351 + dev_err(pcie->dev, "incompatible iProc PCIe interface\n"); 352 + ret = -EINVAL; 353 + goto err_power_off_phy; 482 354 } 483 355 484 356 iproc_pcie_reset(pcie); ··· 527 373 528 374 iproc_pcie_enable(pcie); 529 375 376 + if (IS_ENABLED(CONFIG_PCI_MSI)) 377 + if (iproc_pcie_msi_enable(pcie)) 378 + dev_info(pcie->dev, "not using iProc MSI\n"); 379 + 530 380 pci_scan_child_bus(bus); 531 381 pci_assign_unassigned_bus_resources(bus); 532 382 pci_fixup_irqs(pci_common_swizzle, pcie->map_irq); ··· 554 396 { 555 397 pci_stop_root_bus(pcie->root_bus); 556 398 pci_remove_root_bus(pcie->root_bus); 399 + 400 + iproc_pcie_msi_disable(pcie); 557 401 
558 402 phy_power_off(pcie->phy); 559 403 phy_exit(pcie->phy);
+40 -2
drivers/pci/host/pcie-iproc.h
··· 15 15 #define _PCIE_IPROC_H 16 16 17 17 /** 18 + * iProc PCIe interface type 19 + * 20 + * PAXB is the wrapper used in root complex that can be connected to an 21 + * external endpoint device. 22 + * 23 + * PAXC is the wrapper used in root complex dedicated for internal emulated 24 + * endpoint devices. 25 + */ 26 + enum iproc_pcie_type { 27 + IPROC_PCIE_PAXB = 0, 28 + IPROC_PCIE_PAXC, 29 + }; 30 + 31 + /** 18 32 * iProc PCIe outbound mapping 19 33 * @set_oarr_size: indicates the OARR size bit needs to be set 20 34 * @axi_offset: offset from the AXI address to the internal address used by ··· 41 27 resource_size_t window_size; 42 28 }; 43 29 30 + struct iproc_msi; 31 + 44 32 /** 45 33 * iProc PCIe device 34 + * 46 35 * @dev: pointer to device data structure 36 + * @type: iProc PCIe interface type 37 + * @reg_offsets: register offsets 47 38 * @base: PCIe host controller I/O register base 39 + * @base_addr: PCIe host controller register base physical address 48 40 * @sysdata: Per PCI controller data (ARM-specific) 49 41 * @root_bus: pointer to root bus 50 42 * @phy: optional PHY device that controls the Serdes 51 - * @irqs: interrupt IDs 52 43 * @map_irq: function callback to map interrupts 53 - * @need_ob_cfg: indidates SW needs to configure the outbound mapping window 44 + * @need_ob_cfg: indicates SW needs to configure the outbound mapping window 54 45 * @ob: outbound mapping parameters 46 + * @msi: MSI data 55 47 */ 56 48 struct iproc_pcie { 57 49 struct device *dev; 50 + enum iproc_pcie_type type; 51 + const u16 *reg_offsets; 58 52 void __iomem *base; 53 + phys_addr_t base_addr; 59 54 #ifdef CONFIG_ARM 60 55 struct pci_sys_data sysdata; 61 56 #endif ··· 73 50 int (*map_irq)(const struct pci_dev *, u8, u8); 74 51 bool need_ob_cfg; 75 52 struct iproc_pcie_ob ob; 53 + struct iproc_msi *msi; 76 54 }; 77 55 78 56 int iproc_pcie_setup(struct iproc_pcie *pcie, struct list_head *res); 79 57 int iproc_pcie_remove(struct iproc_pcie *pcie); 58 + 59 + #ifdef 
CONFIG_PCIE_IPROC_MSI 60 + int iproc_msi_init(struct iproc_pcie *pcie, struct device_node *node); 61 + void iproc_msi_exit(struct iproc_pcie *pcie); 62 + #else 63 + static inline int iproc_msi_init(struct iproc_pcie *pcie, 64 + struct device_node *node) 65 + { 66 + return -ENODEV; 67 + } 68 + static inline void iproc_msi_exit(struct iproc_pcie *pcie) 69 + { 70 + } 71 + #endif 80 72 81 73 #endif /* _PCIE_IPROC_H */
+616
drivers/pci/host/pcie-qcom.c
··· 1 + /* 2 + * Copyright (c) 2014-2015, The Linux Foundation. All rights reserved. 3 + * Copyright 2015 Linaro Limited. 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License version 2 and 7 + * only version 2 as published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + */ 14 + 15 + #include <linux/clk.h> 16 + #include <linux/delay.h> 17 + #include <linux/gpio.h> 18 + #include <linux/interrupt.h> 19 + #include <linux/io.h> 20 + #include <linux/iopoll.h> 21 + #include <linux/kernel.h> 22 + #include <linux/module.h> 23 + #include <linux/of_device.h> 24 + #include <linux/of_gpio.h> 25 + #include <linux/pci.h> 26 + #include <linux/platform_device.h> 27 + #include <linux/phy/phy.h> 28 + #include <linux/regulator/consumer.h> 29 + #include <linux/reset.h> 30 + #include <linux/slab.h> 31 + #include <linux/types.h> 32 + 33 + #include "pcie-designware.h" 34 + 35 + #define PCIE20_PARF_PHY_CTRL 0x40 36 + #define PCIE20_PARF_PHY_REFCLK 0x4C 37 + #define PCIE20_PARF_DBI_BASE_ADDR 0x168 38 + #define PCIE20_PARF_SLV_ADDR_SPACE_SIZE 0x16c 39 + #define PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT 0x178 40 + 41 + #define PCIE20_ELBI_SYS_CTRL 0x04 42 + #define PCIE20_ELBI_SYS_CTRL_LT_ENABLE BIT(0) 43 + 44 + #define PCIE20_CAP 0x70 45 + 46 + #define PERST_DELAY_US 1000 47 + 48 + struct qcom_pcie_resources_v0 { 49 + struct clk *iface_clk; 50 + struct clk *core_clk; 51 + struct clk *phy_clk; 52 + struct reset_control *pci_reset; 53 + struct reset_control *axi_reset; 54 + struct reset_control *ahb_reset; 55 + struct reset_control *por_reset; 56 + struct reset_control *phy_reset; 57 + struct regulator *vdda; 58 + struct regulator *vdda_phy; 59 + struct regulator 
*vdda_refclk; 60 + }; 61 + 62 + struct qcom_pcie_resources_v1 { 63 + struct clk *iface; 64 + struct clk *aux; 65 + struct clk *master_bus; 66 + struct clk *slave_bus; 67 + struct reset_control *core; 68 + struct regulator *vdda; 69 + }; 70 + 71 + union qcom_pcie_resources { 72 + struct qcom_pcie_resources_v0 v0; 73 + struct qcom_pcie_resources_v1 v1; 74 + }; 75 + 76 + struct qcom_pcie; 77 + 78 + struct qcom_pcie_ops { 79 + int (*get_resources)(struct qcom_pcie *pcie); 80 + int (*init)(struct qcom_pcie *pcie); 81 + void (*deinit)(struct qcom_pcie *pcie); 82 + }; 83 + 84 + struct qcom_pcie { 85 + struct pcie_port pp; 86 + struct device *dev; 87 + union qcom_pcie_resources res; 88 + void __iomem *parf; 89 + void __iomem *dbi; 90 + void __iomem *elbi; 91 + struct phy *phy; 92 + struct gpio_desc *reset; 93 + struct qcom_pcie_ops *ops; 94 + }; 95 + 96 + #define to_qcom_pcie(x) container_of(x, struct qcom_pcie, pp) 97 + 98 + static void qcom_ep_reset_assert(struct qcom_pcie *pcie) 99 + { 100 + gpiod_set_value(pcie->reset, 1); 101 + usleep_range(PERST_DELAY_US, PERST_DELAY_US + 500); 102 + } 103 + 104 + static void qcom_ep_reset_deassert(struct qcom_pcie *pcie) 105 + { 106 + gpiod_set_value(pcie->reset, 0); 107 + usleep_range(PERST_DELAY_US, PERST_DELAY_US + 500); 108 + } 109 + 110 + static irqreturn_t qcom_pcie_msi_irq_handler(int irq, void *arg) 111 + { 112 + struct pcie_port *pp = arg; 113 + 114 + return dw_handle_msi_irq(pp); 115 + } 116 + 117 + static int qcom_pcie_establish_link(struct qcom_pcie *pcie) 118 + { 119 + struct device *dev = pcie->dev; 120 + unsigned int retries = 0; 121 + u32 val; 122 + 123 + if (dw_pcie_link_up(&pcie->pp)) 124 + return 0; 125 + 126 + /* enable link training */ 127 + val = readl(pcie->elbi + PCIE20_ELBI_SYS_CTRL); 128 + val |= PCIE20_ELBI_SYS_CTRL_LT_ENABLE; 129 + writel(val, pcie->elbi + PCIE20_ELBI_SYS_CTRL); 130 + 131 + do { 132 + if (dw_pcie_link_up(&pcie->pp)) 133 + return 0; 134 + usleep_range(250, 1000); 135 + } while (retries < 
200); 136 + 137 + dev_warn(dev, "phy link never came up\n"); 138 + 139 + return -ETIMEDOUT; 140 + } 141 + 142 + static int qcom_pcie_get_resources_v0(struct qcom_pcie *pcie) 143 + { 144 + struct qcom_pcie_resources_v0 *res = &pcie->res.v0; 145 + struct device *dev = pcie->dev; 146 + 147 + res->vdda = devm_regulator_get(dev, "vdda"); 148 + if (IS_ERR(res->vdda)) 149 + return PTR_ERR(res->vdda); 150 + 151 + res->vdda_phy = devm_regulator_get(dev, "vdda_phy"); 152 + if (IS_ERR(res->vdda_phy)) 153 + return PTR_ERR(res->vdda_phy); 154 + 155 + res->vdda_refclk = devm_regulator_get(dev, "vdda_refclk"); 156 + if (IS_ERR(res->vdda_refclk)) 157 + return PTR_ERR(res->vdda_refclk); 158 + 159 + res->iface_clk = devm_clk_get(dev, "iface"); 160 + if (IS_ERR(res->iface_clk)) 161 + return PTR_ERR(res->iface_clk); 162 + 163 + res->core_clk = devm_clk_get(dev, "core"); 164 + if (IS_ERR(res->core_clk)) 165 + return PTR_ERR(res->core_clk); 166 + 167 + res->phy_clk = devm_clk_get(dev, "phy"); 168 + if (IS_ERR(res->phy_clk)) 169 + return PTR_ERR(res->phy_clk); 170 + 171 + res->pci_reset = devm_reset_control_get(dev, "pci"); 172 + if (IS_ERR(res->pci_reset)) 173 + return PTR_ERR(res->pci_reset); 174 + 175 + res->axi_reset = devm_reset_control_get(dev, "axi"); 176 + if (IS_ERR(res->axi_reset)) 177 + return PTR_ERR(res->axi_reset); 178 + 179 + res->ahb_reset = devm_reset_control_get(dev, "ahb"); 180 + if (IS_ERR(res->ahb_reset)) 181 + return PTR_ERR(res->ahb_reset); 182 + 183 + res->por_reset = devm_reset_control_get(dev, "por"); 184 + if (IS_ERR(res->por_reset)) 185 + return PTR_ERR(res->por_reset); 186 + 187 + res->phy_reset = devm_reset_control_get(dev, "phy"); 188 + if (IS_ERR(res->phy_reset)) 189 + return PTR_ERR(res->phy_reset); 190 + 191 + return 0; 192 + } 193 + 194 + static int qcom_pcie_get_resources_v1(struct qcom_pcie *pcie) 195 + { 196 + struct qcom_pcie_resources_v1 *res = &pcie->res.v1; 197 + struct device *dev = pcie->dev; 198 + 199 + res->vdda = devm_regulator_get(dev, 
"vdda"); 200 + if (IS_ERR(res->vdda)) 201 + return PTR_ERR(res->vdda); 202 + 203 + res->iface = devm_clk_get(dev, "iface"); 204 + if (IS_ERR(res->iface)) 205 + return PTR_ERR(res->iface); 206 + 207 + res->aux = devm_clk_get(dev, "aux"); 208 + if (IS_ERR(res->aux)) 209 + return PTR_ERR(res->aux); 210 + 211 + res->master_bus = devm_clk_get(dev, "master_bus"); 212 + if (IS_ERR(res->master_bus)) 213 + return PTR_ERR(res->master_bus); 214 + 215 + res->slave_bus = devm_clk_get(dev, "slave_bus"); 216 + if (IS_ERR(res->slave_bus)) 217 + return PTR_ERR(res->slave_bus); 218 + 219 + res->core = devm_reset_control_get(dev, "core"); 220 + if (IS_ERR(res->core)) 221 + return PTR_ERR(res->core); 222 + 223 + return 0; 224 + } 225 + 226 + static void qcom_pcie_deinit_v0(struct qcom_pcie *pcie) 227 + { 228 + struct qcom_pcie_resources_v0 *res = &pcie->res.v0; 229 + 230 + reset_control_assert(res->pci_reset); 231 + reset_control_assert(res->axi_reset); 232 + reset_control_assert(res->ahb_reset); 233 + reset_control_assert(res->por_reset); 234 + reset_control_assert(res->pci_reset); 235 + clk_disable_unprepare(res->iface_clk); 236 + clk_disable_unprepare(res->core_clk); 237 + clk_disable_unprepare(res->phy_clk); 238 + regulator_disable(res->vdda); 239 + regulator_disable(res->vdda_phy); 240 + regulator_disable(res->vdda_refclk); 241 + } 242 + 243 + static int qcom_pcie_init_v0(struct qcom_pcie *pcie) 244 + { 245 + struct qcom_pcie_resources_v0 *res = &pcie->res.v0; 246 + struct device *dev = pcie->dev; 247 + u32 val; 248 + int ret; 249 + 250 + ret = regulator_enable(res->vdda); 251 + if (ret) { 252 + dev_err(dev, "cannot enable vdda regulator\n"); 253 + return ret; 254 + } 255 + 256 + ret = regulator_enable(res->vdda_refclk); 257 + if (ret) { 258 + dev_err(dev, "cannot enable vdda_refclk regulator\n"); 259 + goto err_refclk; 260 + } 261 + 262 + ret = regulator_enable(res->vdda_phy); 263 + if (ret) { 264 + dev_err(dev, "cannot enable vdda_phy regulator\n"); 265 + goto err_vdda_phy; 266 
+ } 267 + 268 + ret = reset_control_assert(res->ahb_reset); 269 + if (ret) { 270 + dev_err(dev, "cannot assert ahb reset\n"); 271 + goto err_assert_ahb; 272 + } 273 + 274 + ret = clk_prepare_enable(res->iface_clk); 275 + if (ret) { 276 + dev_err(dev, "cannot prepare/enable iface clock\n"); 277 + goto err_assert_ahb; 278 + } 279 + 280 + ret = clk_prepare_enable(res->phy_clk); 281 + if (ret) { 282 + dev_err(dev, "cannot prepare/enable phy clock\n"); 283 + goto err_clk_phy; 284 + } 285 + 286 + ret = clk_prepare_enable(res->core_clk); 287 + if (ret) { 288 + dev_err(dev, "cannot prepare/enable core clock\n"); 289 + goto err_clk_core; 290 + } 291 + 292 + ret = reset_control_deassert(res->ahb_reset); 293 + if (ret) { 294 + dev_err(dev, "cannot deassert ahb reset\n"); 295 + goto err_deassert_ahb; 296 + } 297 + 298 + /* enable PCIe clocks and resets */ 299 + val = readl(pcie->parf + PCIE20_PARF_PHY_CTRL); 300 + val &= ~BIT(0); 301 + writel(val, pcie->parf + PCIE20_PARF_PHY_CTRL); 302 + 303 + /* enable external reference clock */ 304 + val = readl(pcie->parf + PCIE20_PARF_PHY_REFCLK); 305 + val |= BIT(16); 306 + writel(val, pcie->parf + PCIE20_PARF_PHY_REFCLK); 307 + 308 + ret = reset_control_deassert(res->phy_reset); 309 + if (ret) { 310 + dev_err(dev, "cannot deassert phy reset\n"); 311 + return ret; 312 + } 313 + 314 + ret = reset_control_deassert(res->pci_reset); 315 + if (ret) { 316 + dev_err(dev, "cannot deassert pci reset\n"); 317 + return ret; 318 + } 319 + 320 + ret = reset_control_deassert(res->por_reset); 321 + if (ret) { 322 + dev_err(dev, "cannot deassert por reset\n"); 323 + return ret; 324 + } 325 + 326 + ret = reset_control_deassert(res->axi_reset); 327 + if (ret) { 328 + dev_err(dev, "cannot deassert axi reset\n"); 329 + return ret; 330 + } 331 + 332 + /* wait for clock acquisition */ 333 + usleep_range(1000, 1500); 334 + 335 + return 0; 336 + 337 + err_deassert_ahb: 338 + clk_disable_unprepare(res->core_clk); 339 + err_clk_core: 340 + 
clk_disable_unprepare(res->phy_clk); 341 + err_clk_phy: 342 + clk_disable_unprepare(res->iface_clk); 343 + err_assert_ahb: 344 + regulator_disable(res->vdda_phy); 345 + err_vdda_phy: 346 + regulator_disable(res->vdda_refclk); 347 + err_refclk: 348 + regulator_disable(res->vdda); 349 + 350 + return ret; 351 + } 352 + 353 + static void qcom_pcie_deinit_v1(struct qcom_pcie *pcie) 354 + { 355 + struct qcom_pcie_resources_v1 *res = &pcie->res.v1; 356 + 357 + reset_control_assert(res->core); 358 + clk_disable_unprepare(res->slave_bus); 359 + clk_disable_unprepare(res->master_bus); 360 + clk_disable_unprepare(res->iface); 361 + clk_disable_unprepare(res->aux); 362 + regulator_disable(res->vdda); 363 + } 364 + 365 + static int qcom_pcie_init_v1(struct qcom_pcie *pcie) 366 + { 367 + struct qcom_pcie_resources_v1 *res = &pcie->res.v1; 368 + struct device *dev = pcie->dev; 369 + int ret; 370 + 371 + ret = reset_control_deassert(res->core); 372 + if (ret) { 373 + dev_err(dev, "cannot deassert core reset\n"); 374 + return ret; 375 + } 376 + 377 + ret = clk_prepare_enable(res->aux); 378 + if (ret) { 379 + dev_err(dev, "cannot prepare/enable aux clock\n"); 380 + goto err_res; 381 + } 382 + 383 + ret = clk_prepare_enable(res->iface); 384 + if (ret) { 385 + dev_err(dev, "cannot prepare/enable iface clock\n"); 386 + goto err_aux; 387 + } 388 + 389 + ret = clk_prepare_enable(res->master_bus); 390 + if (ret) { 391 + dev_err(dev, "cannot prepare/enable master_bus clock\n"); 392 + goto err_iface; 393 + } 394 + 395 + ret = clk_prepare_enable(res->slave_bus); 396 + if (ret) { 397 + dev_err(dev, "cannot prepare/enable slave_bus clock\n"); 398 + goto err_master; 399 + } 400 + 401 + ret = regulator_enable(res->vdda); 402 + if (ret) { 403 + dev_err(dev, "cannot enable vdda regulator\n"); 404 + goto err_slave; 405 + } 406 + 407 + /* change DBI base address */ 408 + writel(0, pcie->parf + PCIE20_PARF_DBI_BASE_ADDR); 409 + 410 + if (IS_ENABLED(CONFIG_PCI_MSI)) { 411 + u32 val = readl(pcie->parf 
+ PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT); 412 + 413 + val |= BIT(31); 414 + writel(val, pcie->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT); 415 + } 416 + 417 + return 0; 418 + err_slave: 419 + clk_disable_unprepare(res->slave_bus); 420 + err_master: 421 + clk_disable_unprepare(res->master_bus); 422 + err_iface: 423 + clk_disable_unprepare(res->iface); 424 + err_aux: 425 + clk_disable_unprepare(res->aux); 426 + err_res: 427 + reset_control_assert(res->core); 428 + 429 + return ret; 430 + } 431 + 432 + static int qcom_pcie_link_up(struct pcie_port *pp) 433 + { 434 + struct qcom_pcie *pcie = to_qcom_pcie(pp); 435 + u16 val = readw(pcie->dbi + PCIE20_CAP + PCI_EXP_LNKSTA); 436 + 437 + return !!(val & PCI_EXP_LNKSTA_DLLLA); 438 + } 439 + 440 + static void qcom_pcie_host_init(struct pcie_port *pp) 441 + { 442 + struct qcom_pcie *pcie = to_qcom_pcie(pp); 443 + int ret; 444 + 445 + qcom_ep_reset_assert(pcie); 446 + 447 + ret = pcie->ops->init(pcie); 448 + if (ret) 449 + goto err_deinit; 450 + 451 + ret = phy_power_on(pcie->phy); 452 + if (ret) 453 + goto err_deinit; 454 + 455 + dw_pcie_setup_rc(pp); 456 + 457 + if (IS_ENABLED(CONFIG_PCI_MSI)) 458 + dw_pcie_msi_init(pp); 459 + 460 + qcom_ep_reset_deassert(pcie); 461 + 462 + ret = qcom_pcie_establish_link(pcie); 463 + if (ret) 464 + goto err; 465 + 466 + return; 467 + err: 468 + qcom_ep_reset_assert(pcie); 469 + phy_power_off(pcie->phy); 470 + err_deinit: 471 + pcie->ops->deinit(pcie); 472 + } 473 + 474 + static int qcom_pcie_rd_own_conf(struct pcie_port *pp, int where, int size, 475 + u32 *val) 476 + { 477 + /* the device class is not reported correctly from the register */ 478 + if (where == PCI_CLASS_REVISION && size == 4) { 479 + *val = readl(pp->dbi_base + PCI_CLASS_REVISION); 480 + *val &= 0xff; /* keep revision id */ 481 + *val |= PCI_CLASS_BRIDGE_PCI << 16; 482 + return PCIBIOS_SUCCESSFUL; 483 + } 484 + 485 + return dw_pcie_cfg_read(pp->dbi_base + where, size, val); 486 + } 487 + 488 + static struct pcie_host_ops 
qcom_pcie_dw_ops = { 489 + .link_up = qcom_pcie_link_up, 490 + .host_init = qcom_pcie_host_init, 491 + .rd_own_conf = qcom_pcie_rd_own_conf, 492 + }; 493 + 494 + static const struct qcom_pcie_ops ops_v0 = { 495 + .get_resources = qcom_pcie_get_resources_v0, 496 + .init = qcom_pcie_init_v0, 497 + .deinit = qcom_pcie_deinit_v0, 498 + }; 499 + 500 + static const struct qcom_pcie_ops ops_v1 = { 501 + .get_resources = qcom_pcie_get_resources_v1, 502 + .init = qcom_pcie_init_v1, 503 + .deinit = qcom_pcie_deinit_v1, 504 + }; 505 + 506 + static int qcom_pcie_probe(struct platform_device *pdev) 507 + { 508 + struct device *dev = &pdev->dev; 509 + struct resource *res; 510 + struct qcom_pcie *pcie; 511 + struct pcie_port *pp; 512 + int ret; 513 + 514 + pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL); 515 + if (!pcie) 516 + return -ENOMEM; 517 + 518 + pcie->ops = (struct qcom_pcie_ops *)of_device_get_match_data(dev); 519 + pcie->dev = dev; 520 + 521 + pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_LOW); 522 + if (IS_ERR(pcie->reset)) 523 + return PTR_ERR(pcie->reset); 524 + 525 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "parf"); 526 + pcie->parf = devm_ioremap_resource(dev, res); 527 + if (IS_ERR(pcie->parf)) 528 + return PTR_ERR(pcie->parf); 529 + 530 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi"); 531 + pcie->dbi = devm_ioremap_resource(dev, res); 532 + if (IS_ERR(pcie->dbi)) 533 + return PTR_ERR(pcie->dbi); 534 + 535 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "elbi"); 536 + pcie->elbi = devm_ioremap_resource(dev, res); 537 + if (IS_ERR(pcie->elbi)) 538 + return PTR_ERR(pcie->elbi); 539 + 540 + pcie->phy = devm_phy_optional_get(dev, "pciephy"); 541 + if (IS_ERR(pcie->phy)) 542 + return PTR_ERR(pcie->phy); 543 + 544 + ret = pcie->ops->get_resources(pcie); 545 + if (ret) 546 + return ret; 547 + 548 + pp = &pcie->pp; 549 + pp->dev = dev; 550 + pp->dbi_base = pcie->dbi; 551 + pp->root_bus_nr = -1; 552 + 
pp->ops = &qcom_pcie_dw_ops; 553 + 554 + if (IS_ENABLED(CONFIG_PCI_MSI)) { 555 + pp->msi_irq = platform_get_irq_byname(pdev, "msi"); 556 + if (pp->msi_irq < 0) 557 + return pp->msi_irq; 558 + 559 + ret = devm_request_irq(dev, pp->msi_irq, 560 + qcom_pcie_msi_irq_handler, 561 + IRQF_SHARED, "qcom-pcie-msi", pp); 562 + if (ret) { 563 + dev_err(dev, "cannot request msi irq\n"); 564 + return ret; 565 + } 566 + } 567 + 568 + ret = phy_init(pcie->phy); 569 + if (ret) 570 + return ret; 571 + 572 + ret = dw_pcie_host_init(pp); 573 + if (ret) { 574 + dev_err(dev, "cannot initialize host\n"); 575 + return ret; 576 + } 577 + 578 + platform_set_drvdata(pdev, pcie); 579 + 580 + return 0; 581 + } 582 + 583 + static int qcom_pcie_remove(struct platform_device *pdev) 584 + { 585 + struct qcom_pcie *pcie = platform_get_drvdata(pdev); 586 + 587 + qcom_ep_reset_assert(pcie); 588 + phy_power_off(pcie->phy); 589 + phy_exit(pcie->phy); 590 + pcie->ops->deinit(pcie); 591 + 592 + return 0; 593 + } 594 + 595 + static const struct of_device_id qcom_pcie_match[] = { 596 + { .compatible = "qcom,pcie-ipq8064", .data = &ops_v0 }, 597 + { .compatible = "qcom,pcie-apq8064", .data = &ops_v0 }, 598 + { .compatible = "qcom,pcie-apq8084", .data = &ops_v1 }, 599 + { } 600 + }; 601 + MODULE_DEVICE_TABLE(of, qcom_pcie_match); 602 + 603 + static struct platform_driver qcom_pcie_driver = { 604 + .probe = qcom_pcie_probe, 605 + .remove = qcom_pcie_remove, 606 + .driver = { 607 + .name = "qcom-pcie", 608 + .of_match_table = qcom_pcie_match, 609 + }, 610 + }; 611 + 612 + module_platform_driver(qcom_pcie_driver); 613 + 614 + MODULE_AUTHOR("Stanimir Varbanov <svarbanov@mm-sol.com>"); 615 + MODULE_DESCRIPTION("Qualcomm PCIe root complex driver"); 616 + MODULE_LICENSE("GPL v2");
+140 -70
drivers/pci/host/pcie-rcar.c
··· 26 26 #include <linux/of_platform.h> 27 27 #include <linux/pci.h> 28 28 #include <linux/platform_device.h> 29 + #include <linux/pm_runtime.h> 29 30 #include <linux/slab.h> 30 31 31 32 #define DRV_NAME "rcar-pcie" ··· 95 94 #define H1_PCIEPHYDOUTR 0x040014 96 95 #define H1_PCIEPHYSR 0x040018 97 96 97 + /* R-Car Gen2 PHY */ 98 + #define GEN2_PCIEPHYADDR 0x780 99 + #define GEN2_PCIEPHYDATA 0x784 100 + #define GEN2_PCIEPHYCTRL 0x78c 101 + 98 102 #define INT_PCI_MSI_NR 32 99 103 100 104 #define RCONF(x) (PCICONF(0)+(x)) ··· 113 107 114 108 #define RCAR_PCI_MAX_RESOURCES 4 115 109 #define MAX_NR_INBOUND_MAPS 6 116 - 117 - static unsigned long global_io_offset; 118 110 119 111 struct rcar_msi { 120 112 DECLARE_BITMAP(used, INT_PCI_MSI_NR); ··· 130 126 } 131 127 132 128 /* Structure representing the PCIe interface */ 133 - /* 134 - * ARM pcibios functions expect the ARM struct pci_sys_data as the PCI 135 - * sysdata. Add pci_sys_data as the first element in struct gen_pci so 136 - * that when we use a gen_pci pointer as sysdata, it is also a pointer to 137 - * a struct pci_sys_data. 
138 - */ 139 129 struct rcar_pcie { 140 - #ifdef CONFIG_ARM 141 - struct pci_sys_data sys; 142 - #endif 143 130 struct device *dev; 144 131 void __iomem *base; 145 - struct resource res[RCAR_PCI_MAX_RESOURCES]; 146 - struct resource busn; 132 + struct list_head resources; 147 133 int root_bus_nr; 148 134 struct clk *clk; 149 135 struct clk *bus_clk; ··· 317 323 .write = rcar_pcie_write_conf, 318 324 }; 319 325 320 - static void rcar_pcie_setup_window(int win, struct rcar_pcie *pcie) 326 + static void rcar_pcie_setup_window(int win, struct rcar_pcie *pcie, 327 + struct resource *res) 321 328 { 322 - struct resource *res = &pcie->res[win]; 323 - 324 329 /* Setup PCIe address space mappings for each resource */ 325 330 resource_size_t size; 326 331 resource_size_t res_start; ··· 352 359 rcar_pci_write_reg(pcie, mask, PCIEPTCTLR(win)); 353 360 } 354 361 355 - static int rcar_pcie_setup(struct list_head *resource, struct rcar_pcie *pcie) 362 + static int rcar_pcie_setup(struct list_head *resource, struct rcar_pcie *pci) 356 363 { 357 - struct resource *res; 358 - int i; 359 - 360 - pcie->root_bus_nr = pcie->busn.start; 364 + struct resource_entry *win; 365 + int i = 0; 361 366 362 367 /* Setup PCI resources */ 363 - for (i = 0; i < RCAR_PCI_MAX_RESOURCES; i++) { 368 + resource_list_for_each_entry(win, &pci->resources) { 369 + struct resource *res = win->res; 364 370 365 - res = &pcie->res[i]; 366 371 if (!res->flags) 367 372 continue; 368 373 369 - rcar_pcie_setup_window(i, pcie); 370 - 371 - if (res->flags & IORESOURCE_IO) { 372 - phys_addr_t io_start = pci_pio_to_address(res->start); 373 - pci_ioremap_io(global_io_offset, io_start); 374 - global_io_offset += SZ_64K; 374 + switch (resource_type(res)) { 375 + case IORESOURCE_IO: 376 + case IORESOURCE_MEM: 377 + rcar_pcie_setup_window(i, pci, res); 378 + i++; 379 + break; 380 + case IORESOURCE_BUS: 381 + pci->root_bus_nr = res->start; 382 + break; 383 + default: 384 + continue; 375 385 } 376 386 377 387 
pci_add_resource(resource, res); 378 388 } 379 - pci_add_resource(resource, &pcie->busn); 380 389 381 390 return 1; 382 391 } ··· 573 578 return -ETIMEDOUT; 574 579 } 575 580 581 + static int rcar_pcie_hw_init_gen2(struct rcar_pcie *pcie) 582 + { 583 + /* 584 + * These settings come from the R-Car Series, 2nd Generation User's 585 + * Manual, section 50.3.1 (2) Initialization of the physical layer. 586 + */ 587 + rcar_pci_write_reg(pcie, 0x000f0030, GEN2_PCIEPHYADDR); 588 + rcar_pci_write_reg(pcie, 0x00381203, GEN2_PCIEPHYDATA); 589 + rcar_pci_write_reg(pcie, 0x00000001, GEN2_PCIEPHYCTRL); 590 + rcar_pci_write_reg(pcie, 0x00000006, GEN2_PCIEPHYCTRL); 591 + 592 + rcar_pci_write_reg(pcie, 0x000f0054, GEN2_PCIEPHYADDR); 593 + /* The following value is for DC connection, no termination resistor */ 594 + rcar_pci_write_reg(pcie, 0x13802007, GEN2_PCIEPHYDATA); 595 + rcar_pci_write_reg(pcie, 0x00000001, GEN2_PCIEPHYCTRL); 596 + rcar_pci_write_reg(pcie, 0x00000006, GEN2_PCIEPHYCTRL); 597 + 598 + return rcar_pcie_hw_init(pcie); 599 + } 600 + 576 601 static int rcar_msi_alloc(struct rcar_msi *chip) 577 602 { 578 603 int msi; ··· 735 720 736 721 /* Two irqs are for MSI, but they are also used for non-MSI irqs */ 737 722 err = devm_request_irq(&pdev->dev, msi->irq1, rcar_pcie_msi_irq, 738 - IRQF_SHARED, rcar_msi_irq_chip.name, pcie); 723 + IRQF_SHARED | IRQF_NO_THREAD, 724 + rcar_msi_irq_chip.name, pcie); 739 725 if (err < 0) { 740 726 dev_err(&pdev->dev, "failed to request IRQ: %d\n", err); 741 727 goto err; 742 728 } 743 729 744 730 err = devm_request_irq(&pdev->dev, msi->irq2, rcar_pcie_msi_irq, 745 - IRQF_SHARED, rcar_msi_irq_chip.name, pcie); 731 + IRQF_SHARED | IRQF_NO_THREAD, 732 + rcar_msi_irq_chip.name, pcie); 746 733 if (err < 0) { 747 734 dev_err(&pdev->dev, "failed to request IRQ: %d\n", err); 748 735 goto err; ··· 934 917 935 918 static const struct of_device_id rcar_pcie_of_match[] = { 936 919 { .compatible = "renesas,pcie-r8a7779", .data = rcar_pcie_hw_init_h1 
}, 937 - { .compatible = "renesas,pcie-r8a7790", .data = rcar_pcie_hw_init }, 938 - { .compatible = "renesas,pcie-r8a7791", .data = rcar_pcie_hw_init }, 920 + { .compatible = "renesas,pcie-rcar-gen2", .data = rcar_pcie_hw_init_gen2 }, 921 + { .compatible = "renesas,pcie-r8a7790", .data = rcar_pcie_hw_init_gen2 }, 922 + { .compatible = "renesas,pcie-r8a7791", .data = rcar_pcie_hw_init_gen2 }, 923 + { .compatible = "renesas,pcie-r8a7795", .data = rcar_pcie_hw_init }, 939 924 {}, 940 925 }; 941 926 MODULE_DEVICE_TABLE(of, rcar_pcie_of_match); 927 + 928 + static void rcar_pcie_release_of_pci_ranges(struct rcar_pcie *pci) 929 + { 930 + pci_free_resource_list(&pci->resources); 931 + } 932 + 933 + static int rcar_pcie_parse_request_of_pci_ranges(struct rcar_pcie *pci) 934 + { 935 + int err; 936 + struct device *dev = pci->dev; 937 + struct device_node *np = dev->of_node; 938 + resource_size_t iobase; 939 + struct resource_entry *win; 940 + 941 + err = of_pci_get_host_bridge_resources(np, 0, 0xff, &pci->resources, &iobase); 942 + if (err) 943 + return err; 944 + 945 + resource_list_for_each_entry(win, &pci->resources) { 946 + struct resource *parent, *res = win->res; 947 + 948 + switch (resource_type(res)) { 949 + case IORESOURCE_IO: 950 + parent = &ioport_resource; 951 + err = pci_remap_iospace(res, iobase); 952 + if (err) { 953 + dev_warn(dev, "error %d: failed to map resource %pR\n", 954 + err, res); 955 + continue; 956 + } 957 + break; 958 + case IORESOURCE_MEM: 959 + parent = &iomem_resource; 960 + break; 961 + 962 + case IORESOURCE_BUS: 963 + default: 964 + continue; 965 + } 966 + 967 + err = devm_request_resource(dev, parent, res); 968 + if (err) 969 + goto out_release_res; 970 + } 971 + 972 + return 0; 973 + 974 + out_release_res: 975 + rcar_pcie_release_of_pci_ranges(pci); 976 + return err; 977 + } 942 978 943 979 static int rcar_pcie_probe(struct platform_device *pdev) 944 980 { 945 981 struct rcar_pcie *pcie; 946 982 unsigned int data; 947 - struct of_pci_range 
range; 948 - struct of_pci_range_parser parser; 949 983 const struct of_device_id *of_id; 950 - int err, win = 0; 984 + int err; 951 985 int (*hw_init_fn)(struct rcar_pcie *); 952 986 953 987 pcie = devm_kzalloc(&pdev->dev, sizeof(*pcie), GFP_KERNEL); ··· 1008 940 pcie->dev = &pdev->dev; 1009 941 platform_set_drvdata(pdev, pcie); 1010 942 1011 - /* Get the bus range */ 1012 - if (of_pci_parse_bus_range(pdev->dev.of_node, &pcie->busn)) { 1013 - dev_err(&pdev->dev, "failed to parse bus-range property\n"); 1014 - return -EINVAL; 1015 - } 943 + INIT_LIST_HEAD(&pcie->resources); 1016 944 1017 - if (of_pci_range_parser_init(&parser, pdev->dev.of_node)) { 1018 - dev_err(&pdev->dev, "missing ranges property\n"); 1019 - return -EINVAL; 1020 - } 945 + rcar_pcie_parse_request_of_pci_ranges(pcie); 1021 946 1022 947 err = rcar_pcie_get_resources(pdev, pcie); 1023 948 if (err < 0) { ··· 1018 957 return err; 1019 958 } 1020 959 1021 - for_each_of_pci_range(&parser, &range) { 1022 - err = of_pci_range_to_resource(&range, pdev->dev.of_node, 1023 - &pcie->res[win++]); 1024 - if (err < 0) 1025 - return err; 1026 - 1027 - if (win > RCAR_PCI_MAX_RESOURCES) 1028 - break; 1029 - } 1030 - 1031 960 err = rcar_pcie_parse_map_dma_ranges(pcie, pdev->dev.of_node); 1032 961 if (err) 1033 962 return err; 963 + 964 + of_id = of_match_device(rcar_pcie_of_match, pcie->dev); 965 + if (!of_id || !of_id->data) 966 + return -EINVAL; 967 + hw_init_fn = of_id->data; 968 + 969 + pm_runtime_enable(pcie->dev); 970 + err = pm_runtime_get_sync(pcie->dev); 971 + if (err < 0) { 972 + dev_err(pcie->dev, "pm_runtime_get_sync failed\n"); 973 + goto err_pm_disable; 974 + } 975 + 976 + /* Failure to get a link might just be that no cards are inserted */ 977 + err = hw_init_fn(pcie); 978 + if (err) { 979 + dev_info(&pdev->dev, "PCIe link down\n"); 980 + err = 0; 981 + goto err_pm_put; 982 + } 983 + 984 + data = rcar_pci_read_reg(pcie, MACSR); 985 + dev_info(&pdev->dev, "PCIe x%d: link up\n", (data >> 20) & 0x3f); 
1034 986 1035 987 if (IS_ENABLED(CONFIG_PCI_MSI)) { 1036 988 err = rcar_pcie_enable_msi(pcie); ··· 1051 977 dev_err(&pdev->dev, 1052 978 "failed to enable MSI support: %d\n", 1053 979 err); 1054 - return err; 980 + goto err_pm_put; 1055 981 } 1056 982 } 1057 983 1058 - of_id = of_match_device(rcar_pcie_of_match, pcie->dev); 1059 - if (!of_id || !of_id->data) 1060 - return -EINVAL; 1061 - hw_init_fn = of_id->data; 984 + err = rcar_pcie_enable(pcie); 985 + if (err) 986 + goto err_pm_put; 1062 987 1063 - /* Failure to get a link might just be that no cards are inserted */ 1064 - err = hw_init_fn(pcie); 1065 - if (err) { 1066 - dev_info(&pdev->dev, "PCIe link down\n"); 1067 - return 0; 1068 - } 988 + return 0; 1069 989 1070 - data = rcar_pci_read_reg(pcie, MACSR); 1071 - dev_info(&pdev->dev, "PCIe x%d: link up\n", (data >> 20) & 0x3f); 990 + err_pm_put: 991 + pm_runtime_put(pcie->dev); 1072 992 1073 - return rcar_pcie_enable(pcie); 993 + err_pm_disable: 994 + pm_runtime_disable(pcie->dev); 995 + return err; 1074 996 } 1075 997 1076 998 static struct platform_driver rcar_pcie_driver = {
+2 -1
drivers/pci/host/pcie-spear13xx.c
··· 279 279 return -ENODEV; 280 280 } 281 281 ret = devm_request_irq(dev, pp->irq, spear13xx_pcie_irq_handler, 282 - IRQF_SHARED, "spear1340-pcie", pp); 282 + IRQF_SHARED | IRQF_NO_THREAD, 283 + "spear1340-pcie", pp); 283 284 if (ret) { 284 285 dev_err(dev, "failed to request irq %d\n", pp->irq); 285 286 return ret;
+2 -1
drivers/pci/host/pcie-xilinx.c
··· 781 781 782 782 port->irq = irq_of_parse_and_map(node, 0); 783 783 err = devm_request_irq(dev, port->irq, xilinx_pcie_intr_handler, 784 - IRQF_SHARED, "xilinx-pcie", port); 784 + IRQF_SHARED | IRQF_NO_THREAD, 785 + "xilinx-pcie", port); 785 786 if (err) { 786 787 dev_err(dev, "unable to request irq %d\n", port->irq); 787 788 return err;
+5 -5
drivers/pci/hotplug/acpi_pcihp.c
··· 36 36 37 37 #define MY_NAME "acpi_pcihp" 38 38 39 - #define dbg(fmt, arg...) do { if (debug_acpi) printk(KERN_DEBUG "%s: %s: " fmt , MY_NAME , __func__ , ## arg); } while (0) 40 - #define err(format, arg...) printk(KERN_ERR "%s: " format , MY_NAME , ## arg) 41 - #define info(format, arg...) printk(KERN_INFO "%s: " format , MY_NAME , ## arg) 42 - #define warn(format, arg...) printk(KERN_WARNING "%s: " format , MY_NAME , ## arg) 39 + #define dbg(fmt, arg...) do { if (debug_acpi) printk(KERN_DEBUG "%s: %s: " fmt, MY_NAME, __func__, ## arg); } while (0) 40 + #define err(format, arg...) printk(KERN_ERR "%s: " format, MY_NAME, ## arg) 41 + #define info(format, arg...) printk(KERN_INFO "%s: " format, MY_NAME, ## arg) 42 + #define warn(format, arg...) printk(KERN_WARNING "%s: " format, MY_NAME, ## arg) 43 43 44 44 #define METHOD_NAME__SUN "_SUN" 45 45 #define METHOD_NAME_OSHP "OSHP" ··· 132 132 133 133 while (handle) { 134 134 acpi_get_name(handle, ACPI_FULL_PATHNAME, &string); 135 - dbg("Trying to get hotplug control for %s \n", 135 + dbg("Trying to get hotplug control for %s\n", 136 136 (char *)string.pointer); 137 137 status = acpi_run_oshp(handle); 138 138 if (ACPI_SUCCESS(status))
+1 -1
drivers/pci/hotplug/acpiphp.h
··· 181 181 /* function prototypes */ 182 182 183 183 /* acpiphp_core.c */ 184 - int acpiphp_register_attention(struct acpiphp_attention_info*info); 184 + int acpiphp_register_attention(struct acpiphp_attention_info *info); 185 185 int acpiphp_unregister_attention(struct acpiphp_attention_info *info); 186 186 int acpiphp_register_hotplug_slot(struct acpiphp_slot *slot, unsigned int sun); 187 187 void acpiphp_unregister_hotplug_slot(struct acpiphp_slot *slot);
+7 -7
drivers/pci/hotplug/acpiphp_core.c
··· 63 63 MODULE_PARM_DESC(disable, "disable acpiphp driver"); 64 64 module_param_named(disable, acpiphp_disabled, bool, 0444); 65 65 66 - static int enable_slot (struct hotplug_slot *slot); 67 - static int disable_slot (struct hotplug_slot *slot); 68 - static int set_attention_status (struct hotplug_slot *slot, u8 value); 69 - static int get_power_status (struct hotplug_slot *slot, u8 *value); 70 - static int get_attention_status (struct hotplug_slot *slot, u8 *value); 71 - static int get_latch_status (struct hotplug_slot *slot, u8 *value); 72 - static int get_adapter_status (struct hotplug_slot *slot, u8 *value); 66 + static int enable_slot(struct hotplug_slot *slot); 67 + static int disable_slot(struct hotplug_slot *slot); 68 + static int set_attention_status(struct hotplug_slot *slot, u8 value); 69 + static int get_power_status(struct hotplug_slot *slot, u8 *value); 70 + static int get_attention_status(struct hotplug_slot *slot, u8 *value); 71 + static int get_latch_status(struct hotplug_slot *slot, u8 *value); 72 + static int get_adapter_status(struct hotplug_slot *slot, u8 *value); 73 73 74 74 static struct hotplug_slot_ops acpi_hotplug_slot_ops = { 75 75 .enable_slot = enable_slot,
+1 -1
drivers/pci/hotplug/acpiphp_glue.c
··· 707 707 unsigned long type_mask = IORESOURCE_IO | IORESOURCE_MEM; 708 708 709 709 list_for_each_entry_safe_reverse(dev, tmp, &bus->devices, bus_list) { 710 - for (i=0; i<PCI_BRIDGE_RESOURCES; i++) { 710 + for (i = 0; i < PCI_BRIDGE_RESOURCES; i++) { 711 711 struct resource *res = &dev->resource[i]; 712 712 if ((res->flags & type_mask) && !res->start && 713 713 res->end) {
+15 -4
drivers/pci/hotplug/acpiphp_ibm.c
··· 154 154 ibm_slot_done: 155 155 if (ret) { 156 156 ret = kmalloc(sizeof(union apci_descriptor), GFP_KERNEL); 157 - memcpy(ret, des, sizeof(union apci_descriptor)); 157 + if (ret) 158 + memcpy(ret, des, sizeof(union apci_descriptor)); 158 159 } 159 160 kfree(table); 160 161 return ret; ··· 176 175 acpi_status stat; 177 176 unsigned long long rc; 178 177 union apci_descriptor *ibm_slot; 178 + int id = hpslot_to_sun(slot); 179 179 180 - ibm_slot = ibm_slot_from_id(hpslot_to_sun(slot)); 180 + ibm_slot = ibm_slot_from_id(id); 181 + if (!ibm_slot) { 182 + pr_err("APLS null ACPI descriptor for slot %d\n", id); 183 + return -ENODEV; 184 + } 181 185 182 186 pr_debug("%s: set slot %d (%d) attention status to %d\n", __func__, 183 187 ibm_slot->slot.slot_num, ibm_slot->slot.slot_id, ··· 221 215 static int ibm_get_attention_status(struct hotplug_slot *slot, u8 *status) 222 216 { 223 217 union apci_descriptor *ibm_slot; 218 + int id = hpslot_to_sun(slot); 224 219 225 - ibm_slot = ibm_slot_from_id(hpslot_to_sun(slot)); 220 + ibm_slot = ibm_slot_from_id(id); 221 + if (!ibm_slot) { 222 + pr_err("APLS null ACPI descriptor for slot %d\n", id); 223 + return -ENODEV; 224 + } 226 225 227 226 if (ibm_slot->slot.attn & 0xa0 || ibm_slot->slot.status[1] & 0x08) 228 227 *status = 1; ··· 336 325 } 337 326 338 327 size = 0; 339 - for (i=0; i<package->package.count; i++) { 328 + for (i = 0; i < package->package.count; i++) { 340 329 memcpy(&lbuf[size], 341 330 package->package.elements[i].buffer.pointer, 342 331 package->package.elements[i].buffer.length);
+7 -7
drivers/pci/hotplug/cpci_hotplug.h
··· 52 52 }; 53 53 54 54 struct cpci_hp_controller_ops { 55 - int (*query_enum) (void); 56 - int (*enable_irq) (void); 57 - int (*disable_irq) (void); 58 - int (*check_irq) (void *dev_id); 59 - int (*hardware_test) (struct slot *slot, u32 value); 60 - u8 (*get_power) (struct slot *slot); 61 - int (*set_power) (struct slot *slot, int value); 55 + int (*query_enum)(void); 56 + int (*enable_irq)(void); 57 + int (*disable_irq)(void); 58 + int (*check_irq)(void *dev_id); 59 + int (*hardware_test)(struct slot *slot, u32 value); 60 + u8 (*get_power)(struct slot *slot); 61 + int (*set_power)(struct slot *slot, int value); 62 62 }; 63 63 64 64 struct cpci_hp_controller {
+8 -8
drivers/pci/hotplug/cpci_hotplug_core.c
··· 45 45 #define dbg(format, arg...) \ 46 46 do { \ 47 47 if (cpci_debug) \ 48 - printk (KERN_DEBUG "%s: " format "\n", \ 49 - MY_NAME , ## arg); \ 48 + printk(KERN_DEBUG "%s: " format "\n", \ 49 + MY_NAME, ## arg); \ 50 50 } while (0) 51 - #define err(format, arg...) printk(KERN_ERR "%s: " format "\n", MY_NAME , ## arg) 52 - #define info(format, arg...) printk(KERN_INFO "%s: " format "\n", MY_NAME , ## arg) 53 - #define warn(format, arg...) printk(KERN_WARNING "%s: " format "\n", MY_NAME , ## arg) 51 + #define err(format, arg...) printk(KERN_ERR "%s: " format "\n", MY_NAME, ## arg) 52 + #define info(format, arg...) printk(KERN_INFO "%s: " format "\n", MY_NAME, ## arg) 53 + #define warn(format, arg...) printk(KERN_WARNING "%s: " format "\n", MY_NAME, ## arg) 54 54 55 55 /* local variables */ 56 56 static DECLARE_RWSEM(list_rwsem); ··· 238 238 * with the pci_hotplug subsystem. 239 239 */ 240 240 for (i = first; i <= last; ++i) { 241 - slot = kzalloc(sizeof (struct slot), GFP_KERNEL); 241 + slot = kzalloc(sizeof(struct slot), GFP_KERNEL); 242 242 if (!slot) { 243 243 status = -ENOMEM; 244 244 goto error; 245 245 } 246 246 247 247 hotplug_slot = 248 - kzalloc(sizeof (struct hotplug_slot), GFP_KERNEL); 248 + kzalloc(sizeof(struct hotplug_slot), GFP_KERNEL); 249 249 if (!hotplug_slot) { 250 250 status = -ENOMEM; 251 251 goto error_slot; 252 252 } 253 253 slot->hotplug_slot = hotplug_slot; 254 254 255 - info = kzalloc(sizeof (struct hotplug_slot_info), GFP_KERNEL); 255 + info = kzalloc(sizeof(struct hotplug_slot_info), GFP_KERNEL); 256 256 if (!info) { 257 257 status = -ENOMEM; 258 258 goto error_hpslot;
+5 -5
drivers/pci/hotplug/cpci_hotplug_pci.c
··· 38 38 #define dbg(format, arg...) \ 39 39 do { \ 40 40 if (cpci_debug) \ 41 - printk (KERN_DEBUG "%s: " format "\n", \ 42 - MY_NAME , ## arg); \ 41 + printk(KERN_DEBUG "%s: " format "\n", \ 42 + MY_NAME, ## arg); \ 43 43 } while (0) 44 - #define err(format, arg...) printk(KERN_ERR "%s: " format "\n", MY_NAME , ## arg) 45 - #define info(format, arg...) printk(KERN_INFO "%s: " format "\n", MY_NAME , ## arg) 46 - #define warn(format, arg...) printk(KERN_WARNING "%s: " format "\n", MY_NAME , ## arg) 44 + #define err(format, arg...) printk(KERN_ERR "%s: " format "\n", MY_NAME, ## arg) 45 + #define info(format, arg...) printk(KERN_INFO "%s: " format "\n", MY_NAME, ## arg) 46 + #define warn(format, arg...) printk(KERN_WARNING "%s: " format "\n", MY_NAME, ## arg) 47 47 48 48 49 49 u8 cpci_get_attention_status(struct slot *slot)
+6 -6
drivers/pci/hotplug/cpcihp_generic.c
··· 54 54 #define dbg(format, arg...) \ 55 55 do { \ 56 56 if (debug) \ 57 - printk (KERN_DEBUG "%s: " format "\n", \ 58 - MY_NAME , ## arg); \ 57 + printk(KERN_DEBUG "%s: " format "\n", \ 58 + MY_NAME, ## arg); \ 59 59 } while (0) 60 - #define err(format, arg...) printk(KERN_ERR "%s: " format "\n", MY_NAME , ## arg) 61 - #define info(format, arg...) printk(KERN_INFO "%s: " format "\n", MY_NAME , ## arg) 62 - #define warn(format, arg...) printk(KERN_WARNING "%s: " format "\n", MY_NAME , ## arg) 60 + #define err(format, arg...) printk(KERN_ERR "%s: " format "\n", MY_NAME, ## arg) 61 + #define info(format, arg...) printk(KERN_INFO "%s: " format "\n", MY_NAME, ## arg) 62 + #define warn(format, arg...) printk(KERN_WARNING "%s: " format "\n", MY_NAME, ## arg) 63 63 64 64 /* local variables */ 65 65 static bool debug; ··· 164 164 bus = dev->subordinate; 165 165 pci_dev_put(dev); 166 166 167 - memset(&generic_hpc, 0, sizeof (struct cpci_hp_controller)); 167 + memset(&generic_hpc, 0, sizeof(struct cpci_hp_controller)); 168 168 generic_hpc_ops.query_enum = query_enum; 169 169 generic_hpc.ops = &generic_hpc_ops; 170 170
+7 -7
drivers/pci/hotplug/cpcihp_zt5550.c
··· 49 49 #define dbg(format, arg...) \ 50 50 do { \ 51 51 if (debug) \ 52 - printk (KERN_DEBUG "%s: " format "\n", \ 53 - MY_NAME , ## arg); \ 52 + printk(KERN_DEBUG "%s: " format "\n", \ 53 + MY_NAME, ## arg); \ 54 54 } while (0) 55 - #define err(format, arg...) printk(KERN_ERR "%s: " format "\n", MY_NAME , ## arg) 56 - #define info(format, arg...) printk(KERN_INFO "%s: " format "\n", MY_NAME , ## arg) 57 - #define warn(format, arg...) printk(KERN_WARNING "%s: " format "\n", MY_NAME , ## arg) 55 + #define err(format, arg...) printk(KERN_ERR "%s: " format "\n", MY_NAME, ## arg) 56 + #define info(format, arg...) printk(KERN_INFO "%s: " format "\n", MY_NAME, ## arg) 57 + #define warn(format, arg...) printk(KERN_WARNING "%s: " format "\n", MY_NAME, ## arg) 58 58 59 59 /* local variables */ 60 60 static bool debug; ··· 204 204 return 0; 205 205 } 206 206 207 - static int zt5550_hc_init_one (struct pci_dev *pdev, const struct pci_device_id *ent) 207 + static int zt5550_hc_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) 208 208 { 209 209 int status; 210 210 ··· 214 214 215 215 dbg("returned from zt5550_hc_config"); 216 216 217 - memset(&zt5550_hpc, 0, sizeof (struct cpci_hp_controller)); 217 + memset(&zt5550_hpc, 0, sizeof(struct cpci_hp_controller)); 218 218 zt5550_hpc_ops.query_enum = zt5550_hc_query_enum; 219 219 zt5550_hpc.ops = &zt5550_hpc_ops; 220 220 if (!poll) {
+7 -7
drivers/pci/hotplug/cpqphp.h
··· 36 36 37 37 #define MY_NAME "cpqphp" 38 38 39 - #define dbg(fmt, arg...) do { if (cpqhp_debug) printk(KERN_DEBUG "%s: " fmt , MY_NAME , ## arg); } while (0) 40 - #define err(format, arg...) printk(KERN_ERR "%s: " format , MY_NAME , ## arg) 41 - #define info(format, arg...) printk(KERN_INFO "%s: " format , MY_NAME , ## arg) 42 - #define warn(format, arg...) printk(KERN_WARNING "%s: " format , MY_NAME , ## arg) 39 + #define dbg(fmt, arg...) do { if (cpqhp_debug) printk(KERN_DEBUG "%s: " fmt, MY_NAME, ## arg); } while (0) 40 + #define err(format, arg...) printk(KERN_ERR "%s: " format, MY_NAME, ## arg) 41 + #define info(format, arg...) printk(KERN_INFO "%s: " format, MY_NAME, ## arg) 42 + #define warn(format, arg...) printk(KERN_WARNING "%s: " format, MY_NAME, ## arg) 43 43 44 44 45 45 ··· 424 424 int cpqhp_hardware_test(struct controller *ctrl, int test_num); 425 425 426 426 /* resource functions */ 427 - int cpqhp_resource_sort_and_combine (struct pci_resource **head); 427 + int cpqhp_resource_sort_and_combine(struct pci_resource **head); 428 428 429 429 /* pci functions */ 430 430 int cpqhp_set_irq(u8 bus_num, u8 dev_num, u8 int_pin, u8 irq_num); ··· 685 685 u8 hp_slot; 686 686 687 687 hp_slot = slot->device - ctrl->slot_device_offset; 688 - dbg("%s: slot->device = %d, ctrl->slot_device_offset = %d \n", 688 + dbg("%s: slot->device = %d, ctrl->slot_device_offset = %d\n", 689 689 __func__, slot->device, ctrl->slot_device_offset); 690 690 691 691 status = (readl(ctrl->hpc_reg + INT_INPUT_CLEAR) & (0x01L << hp_slot)); ··· 712 712 713 713 static inline int wait_for_ctrl_irq(struct controller *ctrl) 714 714 { 715 - DECLARE_WAITQUEUE(wait, current); 715 + DECLARE_WAITQUEUE(wait, current); 716 716 int retval = 0; 717 717 718 718 dbg("%s - start\n", __func__);
+16 -16
drivers/pci/hotplug/cpqphp_core.c
··· 291 291 kfree(slot); 292 292 } 293 293 294 - static int ctrl_slot_cleanup (struct controller *ctrl) 294 + static int ctrl_slot_cleanup(struct controller *ctrl) 295 295 { 296 296 struct slot *old_slot, *next_slot; 297 297 ··· 301 301 while (old_slot) { 302 302 /* memory will be freed by the release_slot callback */ 303 303 next_slot = old_slot->next; 304 - pci_hp_deregister (old_slot->hotplug_slot); 304 + pci_hp_deregister(old_slot->hotplug_slot); 305 305 old_slot = next_slot; 306 306 } 307 307 ··· 413 413 mutex_lock(&ctrl->crit_sect); 414 414 415 415 if (status == 1) 416 - amber_LED_on (ctrl, hp_slot); 416 + amber_LED_on(ctrl, hp_slot); 417 417 else if (status == 0) 418 - amber_LED_off (ctrl, hp_slot); 418 + amber_LED_off(ctrl, hp_slot); 419 419 else { 420 420 /* Done with exclusive hardware access */ 421 421 mutex_unlock(&ctrl->crit_sect); ··· 425 425 set_SOGO(ctrl); 426 426 427 427 /* Wait for SOBS to be unset */ 428 - wait_for_ctrl_irq (ctrl); 428 + wait_for_ctrl_irq(ctrl); 429 429 430 430 /* Done with exclusive hardware access */ 431 431 mutex_unlock(&ctrl->crit_sect); ··· 439 439 * @hotplug_slot: slot to change LED on 440 440 * @status: LED control flag 441 441 */ 442 - static int set_attention_status (struct hotplug_slot *hotplug_slot, u8 status) 442 + static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status) 443 443 { 444 444 struct pci_func *slot_func; 445 445 struct slot *slot = hotplug_slot->private; ··· 610 610 u8 ctrl_slot; 611 611 u32 tempdword; 612 612 char name[SLOT_NAME_SIZE]; 613 - void __iomem *slot_entry= NULL; 613 + void __iomem *slot_entry = NULL; 614 614 int result; 615 615 616 616 dbg("%s\n", __func__); ··· 755 755 if (cpqhp_debug) 756 756 pci_print_IRQ_route(); 757 757 758 - dbg("Initialize + Start the notification mechanism \n"); 758 + dbg("Initialize + Start the notification mechanism\n"); 759 759 760 760 retval = cpqhp_event_start_thread(); 761 761 if (retval) ··· 772 772 /* Map rom address */ 773 773 
cpqhp_rom_start = ioremap(ROM_PHY_ADDR, ROM_PHY_LEN); 774 774 if (!cpqhp_rom_start) { 775 - err ("Could not ioremap memory region for ROM\n"); 775 + err("Could not ioremap memory region for ROM\n"); 776 776 retval = -EIO; 777 777 goto error; 778 778 } ··· 786 786 smbios_table = detect_SMBIOS_pointer(cpqhp_rom_start, 787 787 cpqhp_rom_start + ROM_PHY_LEN); 788 788 if (!smbios_table) { 789 - err ("Could not find the SMBIOS pointer in memory\n"); 789 + err("Could not find the SMBIOS pointer in memory\n"); 790 790 retval = -EIO; 791 791 goto error_rom_start; 792 792 } ··· 794 794 smbios_start = ioremap(readl(smbios_table + ST_ADDRESS), 795 795 readw(smbios_table + ST_LENGTH)); 796 796 if (!smbios_start) { 797 - err ("Could not ioremap memory region taken from SMBIOS values\n"); 797 + err("Could not ioremap memory region taken from SMBIOS values\n"); 798 798 retval = -EIO; 799 799 goto error_smbios_start; 800 800 } ··· 1181 1181 * Finish setting up the hot plug ctrl device 1182 1182 */ 1183 1183 ctrl->slot_device_offset = readb(ctrl->hpc_reg + SLOT_MASK) >> 4; 1184 - dbg("NumSlots %d \n", ctrl->slot_device_offset); 1184 + dbg("NumSlots %d\n", ctrl->slot_device_offset); 1185 1185 1186 1186 ctrl->next_event = 0; 1187 1187 ··· 1198 1198 writel(0xFFFFFFFFL, ctrl->hpc_reg + INT_MASK); 1199 1199 1200 1200 /* set up the interrupt */ 1201 - dbg("HPC interrupt = %d \n", ctrl->interrupt); 1201 + dbg("HPC interrupt = %d\n", ctrl->interrupt); 1202 1202 if (request_irq(ctrl->interrupt, cpqhp_ctrl_intr, 1203 1203 IRQF_SHARED, MY_NAME, ctrl)) { 1204 1204 err("Can't get irq %d for the hotplug pci controller\n", ··· 1321 1321 while (ctrl) { 1322 1322 if (ctrl->hpc_reg) { 1323 1323 u16 misc; 1324 - rc = read_slot_enable (ctrl); 1324 + rc = read_slot_enable(ctrl); 1325 1325 1326 1326 writeb(0, ctrl->hpc_reg + SLOT_SERR); 1327 1327 writel(0xFFFFFFC0L | ~rc, ctrl->hpc_reg + INT_MASK); ··· 1361 1361 kfree(tres); 1362 1362 } 1363 1363 1364 - kfree (ctrl->pci_bus); 1364 + kfree(ctrl->pci_bus); 
1365 1365 1366 1366 tctrl = ctrl; 1367 1367 ctrl = ctrl->next; ··· 1446 1446 1447 1447 cpqhp_debug = debug; 1448 1448 1449 - info (DRIVER_DESC " version: " DRIVER_VERSION "\n"); 1449 + info(DRIVER_DESC " version: " DRIVER_VERSION "\n"); 1450 1450 cpqhp_initialize_debugfs(); 1451 1451 result = pci_register_driver(&cpqhpc_driver); 1452 1452 dbg("pci_register_driver = %d\n", result);
+100 -100
drivers/pci/hotplug/cpqphp_ctrl.c
··· 155 155 * Presence Change 156 156 */ 157 157 dbg("cpqsbd: Presence/Notify input change.\n"); 158 - dbg(" Changed bits are 0x%4.4x\n", change ); 158 + dbg(" Changed bits are 0x%4.4x\n", change); 159 159 160 160 for (hp_slot = 0; hp_slot < 6; hp_slot++) { 161 161 if (change & (0x0101 << hp_slot)) { ··· 276 276 taskInfo->event_type = INT_POWER_FAULT; 277 277 278 278 if (ctrl->rev < 4) { 279 - amber_LED_on (ctrl, hp_slot); 280 - green_LED_off (ctrl, hp_slot); 281 - set_SOGO (ctrl); 279 + amber_LED_on(ctrl, hp_slot); 280 + green_LED_off(ctrl, hp_slot); 281 + set_SOGO(ctrl); 282 282 283 283 /* this is a fatal condition, we want 284 284 * to crash the machine to protect from ··· 438 438 439 439 node = *head; 440 440 441 - if (node->length & (alignment -1)) { 441 + if (node->length & (alignment - 1)) { 442 442 /* this one isn't an aligned length, so we'll make a new entry 443 443 * and split it up. 444 444 */ ··· 835 835 if (!(*head)) 836 836 return 1; 837 837 838 - dbg("*head->next = %p\n",(*head)->next); 838 + dbg("*head->next = %p\n", (*head)->next); 839 839 840 840 if (!(*head)->next) 841 841 return 0; /* only one item on the list, already sorted! 
*/ 842 842 843 - dbg("*head->base = 0x%x\n",(*head)->base); 844 - dbg("*head->next->base = 0x%x\n",(*head)->next->base); 843 + dbg("*head->base = 0x%x\n", (*head)->base); 844 + dbg("*head->next->base = 0x%x\n", (*head)->next->base); 845 845 while (out_of_order) { 846 846 out_of_order = 0; 847 847 ··· 917 917 /* Read to clear posted writes */ 918 918 misc = readw(ctrl->hpc_reg + MISC); 919 919 920 - dbg ("%s - waking up\n", __func__); 920 + dbg("%s - waking up\n", __func__); 921 921 wake_up_interruptible(&ctrl->queue); 922 922 } 923 923 ··· 1285 1285 /* 1286 1286 * The board is already on 1287 1287 */ 1288 - else if (is_slot_enabled (ctrl, hp_slot)) 1288 + else if (is_slot_enabled(ctrl, hp_slot)) 1289 1289 rc = CARD_FUNCTIONING; 1290 1290 else { 1291 1291 mutex_lock(&ctrl->crit_sect); 1292 1292 1293 1293 /* turn on board without attaching to the bus */ 1294 - enable_slot_power (ctrl, hp_slot); 1294 + enable_slot_power(ctrl, hp_slot); 1295 1295 1296 1296 set_SOGO(ctrl); 1297 1297 1298 1298 /* Wait for SOBS to be unset */ 1299 - wait_for_ctrl_irq (ctrl); 1299 + wait_for_ctrl_irq(ctrl); 1300 1300 1301 1301 /* Change bits in slot power register to force another shift out 1302 1302 * NOTE: this is to work around the timer bug */ ··· 1307 1307 set_SOGO(ctrl); 1308 1308 1309 1309 /* Wait for SOBS to be unset */ 1310 - wait_for_ctrl_irq (ctrl); 1310 + wait_for_ctrl_irq(ctrl); 1311 1311 1312 1312 adapter_speed = get_adapter_speed(ctrl, hp_slot); 1313 1313 if (bus->cur_bus_speed != adapter_speed) ··· 1315 1315 rc = WRONG_BUS_FREQUENCY; 1316 1316 1317 1317 /* turn off board without attaching to the bus */ 1318 - disable_slot_power (ctrl, hp_slot); 1318 + disable_slot_power(ctrl, hp_slot); 1319 1319 1320 1320 set_SOGO(ctrl); 1321 1321 1322 1322 /* Wait for SOBS to be unset */ 1323 - wait_for_ctrl_irq (ctrl); 1323 + wait_for_ctrl_irq(ctrl); 1324 1324 1325 1325 mutex_unlock(&ctrl->crit_sect); 1326 1326 ··· 1329 1329 1330 1330 mutex_lock(&ctrl->crit_sect); 1331 1331 1332 - 
- slot_enable (ctrl, hp_slot);
- green_LED_blink (ctrl, hp_slot);
+ slot_enable(ctrl, hp_slot);
+ green_LED_blink(ctrl, hp_slot);

- amber_LED_off (ctrl, hp_slot);
+ amber_LED_off(ctrl, hp_slot);

  set_SOGO(ctrl);

  /* Wait for SOBS to be unset */
- wait_for_ctrl_irq (ctrl);
+ wait_for_ctrl_irq(ctrl);

  mutex_unlock(&ctrl->crit_sect);

···

  mutex_lock(&ctrl->crit_sect);

- amber_LED_on (ctrl, hp_slot);
- green_LED_off (ctrl, hp_slot);
- slot_disable (ctrl, hp_slot);
+ amber_LED_on(ctrl, hp_slot);
+ green_LED_off(ctrl, hp_slot);
+ slot_disable(ctrl, hp_slot);

  set_SOGO(ctrl);

  /* Wait for SOBS to be unset */
- wait_for_ctrl_irq (ctrl);
+ wait_for_ctrl_irq(ctrl);

  mutex_unlock(&ctrl->crit_sect);

···

  mutex_lock(&ctrl->crit_sect);

- amber_LED_on (ctrl, hp_slot);
- green_LED_off (ctrl, hp_slot);
- slot_disable (ctrl, hp_slot);
+ amber_LED_on(ctrl, hp_slot);
+ green_LED_off(ctrl, hp_slot);
+ slot_disable(ctrl, hp_slot);

  set_SOGO(ctrl);

  /* Wait for SOBS to be unset */
- wait_for_ctrl_irq (ctrl);
+ wait_for_ctrl_irq(ctrl);

  mutex_unlock(&ctrl->crit_sect);
  }
···
  set_SOGO(ctrl);

  /* Wait for SOBS to be unset */
- wait_for_ctrl_irq (ctrl);
+ wait_for_ctrl_irq(ctrl);

  /* Change bits in slot power register to force another shift out
  * NOTE: this is to work around the timer bug
···
  set_SOGO(ctrl);

  /* Wait for SOBS to be unset */
- wait_for_ctrl_irq (ctrl);
+ wait_for_ctrl_irq(ctrl);

  adapter_speed = get_adapter_speed(ctrl, hp_slot);
  if (bus->cur_bus_speed != adapter_speed)
···
  rc = WRONG_BUS_FREQUENCY;

  /* turn off board without attaching to the bus */
- disable_slot_power (ctrl, hp_slot);
+ disable_slot_power(ctrl, hp_slot);

  set_SOGO(ctrl);

···
  dbg("%s: after down\n", __func__);

  dbg("%s: before slot_enable\n", __func__);
- slot_enable (ctrl, hp_slot);
+ slot_enable(ctrl, hp_slot);

  dbg("%s: before green_LED_blink\n", __func__);
- green_LED_blink (ctrl, hp_slot);
+ green_LED_blink(ctrl, hp_slot);

  dbg("%s: before amber_LED_blink\n", __func__);
- amber_LED_off (ctrl, hp_slot);
+ amber_LED_off(ctrl, hp_slot);

  dbg("%s: before set_SOGO\n", __func__);
  set_SOGO(ctrl);

  /* Wait for SOBS to be unset */
  dbg("%s: before wait_for_ctrl_irq\n", __func__);
- wait_for_ctrl_irq (ctrl);
+ wait_for_ctrl_irq(ctrl);
  dbg("%s: after wait_for_ctrl_irq\n", __func__);

  dbg("%s: before up\n", __func__);
···
  } else {
  /* Get vendor/device ID u32 */
  ctrl->pci_bus->number = func->bus;
- rc = pci_bus_read_config_dword (ctrl->pci_bus, PCI_DEVFN(func->device, func->function), PCI_VENDOR_ID, &temp_register);
+ rc = pci_bus_read_config_dword(ctrl->pci_bus, PCI_DEVFN(func->device, func->function), PCI_VENDOR_ID, &temp_register);
  dbg("%s: pci_read_config_dword returns %d\n", __func__, rc);
  dbg("%s: temp_register is %x\n", __func__, temp_register);

···
  if (rc) {
  mutex_lock(&ctrl->crit_sect);

- amber_LED_on (ctrl, hp_slot);
- green_LED_off (ctrl, hp_slot);
- slot_disable (ctrl, hp_slot);
+ amber_LED_on(ctrl, hp_slot);
+ green_LED_off(ctrl, hp_slot);
+ slot_disable(ctrl, hp_slot);

  set_SOGO(ctrl);

  /* Wait for SOBS to be unset */
- wait_for_ctrl_irq (ctrl);
+ wait_for_ctrl_irq(ctrl);

  mutex_unlock(&ctrl->crit_sect);
  return rc;
···

  mutex_lock(&ctrl->crit_sect);

- green_LED_on (ctrl, hp_slot);
+ green_LED_on(ctrl, hp_slot);

  set_SOGO(ctrl);

  /* Wait for SOBS to be unset */
- wait_for_ctrl_irq (ctrl);
+ wait_for_ctrl_irq(ctrl);

  mutex_unlock(&ctrl->crit_sect);
  } else {
  mutex_lock(&ctrl->crit_sect);

- amber_LED_on (ctrl, hp_slot);
- green_LED_off (ctrl, hp_slot);
- slot_disable (ctrl, hp_slot);
+ amber_LED_on(ctrl, hp_slot);
+ green_LED_off(ctrl, hp_slot);
+ slot_disable(ctrl, hp_slot);

  set_SOGO(ctrl);

  /* Wait for SOBS to be unset */
- wait_for_ctrl_irq (ctrl);
+ wait_for_ctrl_irq(ctrl);

  mutex_unlock(&ctrl->crit_sect);

···

  mutex_lock(&ctrl->crit_sect);

- green_LED_off (ctrl, hp_slot);
- slot_disable (ctrl, hp_slot);
+ green_LED_off(ctrl, hp_slot);
+ slot_disable(ctrl, hp_slot);

  set_SOGO(ctrl);

···
  writeb(temp_byte, ctrl->hpc_reg + SLOT_SERR);

  /* Wait for SOBS to be unset */
- wait_for_ctrl_irq (ctrl);
+ wait_for_ctrl_irq(ctrl);

  mutex_unlock(&ctrl->crit_sect);

···
  if (pushbutton_pending)
  cpqhp_pushbutton_thread(pushbutton_pending);
  else
- for (ctrl = cpqhp_ctrl_list; ctrl; ctrl=ctrl->next)
+ for (ctrl = cpqhp_ctrl_list; ctrl; ctrl = ctrl->next)
  interrupt_event_handler(ctrl);
  }
  dbg("event_thread signals exit\n");
···
  {
  cpqhp_event_thread = kthread_run(event_thread, NULL, "phpd_event");
  if (IS_ERR(cpqhp_event_thread)) {
- err ("Can't start up our event thread\n");
+ err("Can't start up our event thread\n");
  return PTR_ERR(cpqhp_event_thread);
  }

···
  info->latch_status = cpq_get_latch_status(ctrl, slot);
  info->adapter_status = get_presence_status(ctrl, slot);
  result = pci_hp_change_slot_info(slot->hotplug_slot, info);
- kfree (info);
+ kfree(info);
  return result;
  }

···
  if (p_slot->state == BLINKINGOFF_STATE) {
  /* slot is on */
  dbg("turn on green LED\n");
- green_LED_on (ctrl, hp_slot);
+ green_LED_on(ctrl, hp_slot);
  } else if (p_slot->state == BLINKINGON_STATE) {
  /* slot is off */
  dbg("turn off green LED\n");
- green_LED_off (ctrl, hp_slot);
+ green_LED_off(ctrl, hp_slot);
  }

  info(msg_button_cancel, p_slot->number);

  p_slot->state = STATIC_STATE;

- amber_LED_off (ctrl, hp_slot);
+ amber_LED_off(ctrl, hp_slot);

  set_SOGO(ctrl);

  /* Wait for SOBS to be unset */
- wait_for_ctrl_irq (ctrl);
+ wait_for_ctrl_irq(ctrl);

  mutex_unlock(&ctrl->crit_sect);
  }
···
  else if (ctrl->event_queue[loop].event_type == INT_BUTTON_RELEASE) {
  dbg("button release\n");

- if (is_slot_enabled (ctrl, hp_slot)) {
+ if (is_slot_enabled(ctrl, hp_slot)) {
  dbg("slot is on\n");
  p_slot->state = BLINKINGOFF_STATE;
  info(msg_button_off, p_slot->number);
···

  dbg("blink green LED and turn off amber\n");

- amber_LED_off (ctrl, hp_slot);
- green_LED_blink (ctrl, hp_slot);
+ amber_LED_off(ctrl, hp_slot);
+ green_LED_blink(ctrl, hp_slot);

  set_SOGO(ctrl);

  /* Wait for SOBS to be unset */
- wait_for_ctrl_irq (ctrl);
+ wait_for_ctrl_irq(ctrl);

  mutex_unlock(&ctrl->crit_sect);
  init_timer(&p_slot->task_event);
···
  dbg("In power_down_board, func = %p, ctrl = %p\n", func, ctrl);
  if (!func) {
  dbg("Error! func NULL in %s\n", __func__);
- return ;
+ return;
  }

  if (cpqhp_process_SS(ctrl, func) != 0) {
···
  dbg("In add_board, func = %p, ctrl = %p\n", func, ctrl);
  if (!func) {
  dbg("Error! func NULL in %s\n", __func__);
- return ;
+ return;
  }

  if (ctrl != NULL) {
···
  set_SOGO(ctrl);

  /* Wait for SOBS to be unset */
- wait_for_ctrl_irq (ctrl);
+ wait_for_ctrl_irq(ctrl);
  }
  }

···
  unsigned int devfn;
  struct slot *p_slot;
  struct pci_bus *pci_bus = ctrl->pci_bus;
- int physical_slot=0;
+ int physical_slot = 0;

  device = func->device;
  func = cpqhp_slot_find(ctrl->bus, device, index++);
···
  devfn = PCI_DEVFN(func->device, func->function);

  /* Check the Class Code */
- rc = pci_bus_read_config_byte (pci_bus, devfn, 0x0B, &class_code);
+ rc = pci_bus_read_config_byte(pci_bus, devfn, 0x0B, &class_code);
  if (rc)
  return rc;

···
  rc = REMOVE_NOT_SUPPORTED;
  } else {
  /* See if it's a bridge */
- rc = pci_bus_read_config_byte (pci_bus, devfn, PCI_HEADER_TYPE, &header_type);
+ rc = pci_bus_read_config_byte(pci_bus, devfn, PCI_HEADER_TYPE, &header_type);
  if (rc)
  return rc;

  /* If it's a bridge, check the VGA Enable bit */
  if ((header_type & 0x7F) == PCI_HEADER_TYPE_BRIDGE) {
- rc = pci_bus_read_config_byte (pci_bus, devfn, PCI_BRIDGE_CONTROL, &BCR);
+ rc = pci_bus_read_config_byte(pci_bus, devfn, PCI_BRIDGE_CONTROL, &BCR);
  if (rc)
  return rc;

···
  set_SOGO(ctrl);

  /* Wait for SOGO interrupt */
- wait_for_ctrl_irq (ctrl);
+ wait_for_ctrl_irq(ctrl);

  /* Get ready for next iteration */
  long_delay((3*HZ)/10);
···
  set_SOGO(ctrl);

  /* Wait for SOGO interrupt */
- wait_for_ctrl_irq (ctrl);
+ wait_for_ctrl_irq(ctrl);

  /* Get ready for next iteration */
  long_delay((3*HZ)/10);
···
  set_SOGO(ctrl);

  /* Wait for SOBS to be unset */
- wait_for_ctrl_irq (ctrl);
+ wait_for_ctrl_irq(ctrl);
  break;
  case 2:
  /* Do other stuff here! */
···
  dbg("%s\n", __func__);
  /* Check for Multi-function device */
  ctrl->pci_bus->number = func->bus;
- rc = pci_bus_read_config_byte (ctrl->pci_bus, PCI_DEVFN(func->device, func->function), 0x0E, &temp_byte);
+ rc = pci_bus_read_config_byte(ctrl->pci_bus, PCI_DEVFN(func->device, func->function), 0x0E, &temp_byte);
  if (rc) {
  dbg("%s: rc = %d\n", __func__, rc);
  return rc;
···
  rc = configure_new_function(ctrl, new_slot, behind_bridge, resources);

  if (rc) {
- dbg("configure_new_function failed %d\n",rc);
+ dbg("configure_new_function failed %d\n", rc);
  index = 0;

  while (new_slot) {
···
  * and creates a board structure */

  while ((function < max_functions) && (!stop_it)) {
- pci_bus_read_config_dword (ctrl->pci_bus, PCI_DEVFN(func->device, function), 0x00, &ID);
+ pci_bus_read_config_dword(ctrl->pci_bus, PCI_DEVFN(func->device, function), 0x00, &ID);

  if (ID == 0xFFFFFFFF) {
  function++;
···

  /* set Pre Mem base and Limit registers */
  temp_word = p_mem_node->base >> 16;
- rc = pci_bus_write_config_word (pci_bus, devfn, PCI_PREF_MEMORY_BASE, temp_word);
+ rc = pci_bus_write_config_word(pci_bus, devfn, PCI_PREF_MEMORY_BASE, temp_word);

  temp_word = (p_mem_node->base + p_mem_node->length - 1) >> 16;
- rc = pci_bus_write_config_word (pci_bus, devfn, PCI_PREF_MEMORY_LIMIT, temp_word);
+ rc = pci_bus_write_config_word(pci_bus, devfn, PCI_PREF_MEMORY_LIMIT, temp_word);

  /* Adjust this to compensate for extra adjustment in first loop
  */
···

  ID = 0xFFFFFFFF;
  pci_bus->number = hold_bus_node->base;
- pci_bus_read_config_dword (pci_bus, PCI_DEVFN(device, 0), 0x00, &ID);
+ pci_bus_read_config_dword(pci_bus, PCI_DEVFN(device, 0), 0x00, &ID);
  pci_bus->number = func->bus;

  if (ID != 0xFFFFFFFF) { /* device present */
···
  new_slot->status = 0;

  rc = configure_new_device(ctrl, new_slot, 1, &temp_resources);
- dbg("configure_new_device rc=0x%x\n",rc);
+ dbg("configure_new_device rc=0x%x\n", rc);
  } /* End of IF (device in slot?) */
  } /* End of FOR loop */

···
  temp_byte = temp_resources.bus_head->base - 1;

  /* set subordinate bus */
- rc = pci_bus_write_config_byte (pci_bus, devfn, PCI_SUBORDINATE_BUS, temp_byte);
+ rc = pci_bus_write_config_byte(pci_bus, devfn, PCI_SUBORDINATE_BUS, temp_byte);

  if (temp_resources.bus_head->length == 0) {
  kfree(temp_resources.bus_head);
···
  hold_IO_node->base = io_node->base + io_node->length;

  temp_byte = (hold_IO_node->base) >> 8;
- rc = pci_bus_write_config_word (pci_bus, devfn, PCI_IO_BASE, temp_byte);
+ rc = pci_bus_write_config_word(pci_bus, devfn, PCI_IO_BASE, temp_byte);

  return_resource(&(resources->io_head), io_node);
  }
···
  func->io_head = hold_IO_node;

  temp_byte = (io_node->base - 1) >> 8;
- rc = pci_bus_write_config_byte (pci_bus, devfn, PCI_IO_LIMIT, temp_byte);
+ rc = pci_bus_write_config_byte(pci_bus, devfn, PCI_IO_LIMIT, temp_byte);

  return_resource(&(resources->io_head), io_node);
  } else {
  /* it doesn't need any IO */
  temp_word = 0x0000;
- rc = pci_bus_write_config_word (pci_bus, devfn, PCI_IO_LIMIT, temp_word);
+ rc = pci_bus_write_config_word(pci_bus, devfn, PCI_IO_LIMIT, temp_word);

  return_resource(&(resources->io_head), io_node);
  kfree(hold_IO_node);
···
  hold_mem_node->base = mem_node->base + mem_node->length;

  temp_word = (hold_mem_node->base) >> 16;
- rc = pci_bus_write_config_word (pci_bus, devfn, PCI_MEMORY_BASE, temp_word);
+ rc = pci_bus_write_config_word(pci_bus, devfn, PCI_MEMORY_BASE, temp_word);

  return_resource(&(resources->mem_head), mem_node);
  }
···

  /* configure end address */
  temp_word = (mem_node->base - 1) >> 16;
- rc = pci_bus_write_config_word (pci_bus, devfn, PCI_MEMORY_LIMIT, temp_word);
+ rc = pci_bus_write_config_word(pci_bus, devfn, PCI_MEMORY_LIMIT, temp_word);

  /* Return unused resources to the pool */
  return_resource(&(resources->mem_head), mem_node);
  } else {
  /* it doesn't need any Mem */
  temp_word = 0x0000;
- rc = pci_bus_write_config_word (pci_bus, devfn, PCI_MEMORY_LIMIT, temp_word);
+ rc = pci_bus_write_config_word(pci_bus, devfn, PCI_MEMORY_LIMIT, temp_word);

  return_resource(&(resources->mem_head), mem_node);
  kfree(hold_mem_node);
···
  hold_p_mem_node->base = p_mem_node->base + p_mem_node->length;

  temp_word = (hold_p_mem_node->base) >> 16;
- rc = pci_bus_write_config_word (pci_bus, devfn, PCI_PREF_MEMORY_BASE, temp_word);
+ rc = pci_bus_write_config_word(pci_bus, devfn, PCI_PREF_MEMORY_BASE, temp_word);

  return_resource(&(resources->p_mem_head), p_mem_node);
  }
···
  func->p_mem_head = hold_p_mem_node;

  temp_word = (p_mem_node->base - 1) >> 16;
- rc = pci_bus_write_config_word (pci_bus, devfn, PCI_PREF_MEMORY_LIMIT, temp_word);
+ rc = pci_bus_write_config_word(pci_bus, devfn, PCI_PREF_MEMORY_LIMIT, temp_word);

  return_resource(&(resources->p_mem_head), p_mem_node);
  } else {
  /* it doesn't need any PMem */
  temp_word = 0x0000;
- rc = pci_bus_write_config_word (pci_bus, devfn, PCI_PREF_MEMORY_LIMIT, temp_word);
+ rc = pci_bus_write_config_word(pci_bus, devfn, PCI_PREF_MEMORY_LIMIT, temp_word);

  return_resource(&(resources->p_mem_head), p_mem_node);
  kfree(hold_p_mem_node);
···
  * PCI_COMMAND_INVALIDATE |
  * PCI_COMMAND_PARITY |
  * PCI_COMMAND_SERR */
- rc = pci_bus_write_config_word (pci_bus, devfn, PCI_COMMAND, command);
+ rc = pci_bus_write_config_word(pci_bus, devfn, PCI_COMMAND, command);

  /* set Bridge Control Register */
  command = 0x07; /* = PCI_BRIDGE_CTL_PARITY |
  * PCI_BRIDGE_CTL_SERR |
  * PCI_BRIDGE_CTL_NO_ISA */
- rc = pci_bus_write_config_word (pci_bus, devfn, PCI_BRIDGE_CONTROL, command);
+ rc = pci_bus_write_config_word(pci_bus, devfn, PCI_BRIDGE_CONTROL, command);
  } else if ((temp_byte & 0x7F) == PCI_HEADER_TYPE_NORMAL) {
  /* Standard device */
- rc = pci_bus_read_config_byte (pci_bus, devfn, 0x0B, &class_code);
+ rc = pci_bus_read_config_byte(pci_bus, devfn, 0x0B, &class_code);

  if (class_code == PCI_BASE_CLASS_DISPLAY) {
  /* Display (video) adapter (not supported) */
···
  temp_register = 0xFFFFFFFF;

  dbg("CND: bus=%d, devfn=%d, offset=%d\n", pci_bus->number, devfn, cloop);
- rc = pci_bus_write_config_dword (pci_bus, devfn, cloop, temp_register);
+ rc = pci_bus_write_config_dword(pci_bus, devfn, cloop, temp_register);

- rc = pci_bus_read_config_dword (pci_bus, devfn, cloop, &temp_register);
+ rc = pci_bus_read_config_dword(pci_bus, devfn, cloop, &temp_register);
  dbg("CND: base = 0x%x\n", temp_register);

  if (temp_register) { /* If this register is implemented */
···
  } /* End of base register loop */
  if (cpqhp_legacy_mode) {
  /* Figure out which interrupt pin this function uses */
- rc = pci_bus_read_config_byte (pci_bus, devfn,
+ rc = pci_bus_read_config_byte(pci_bus, devfn,
  PCI_INTERRUPT_PIN, &temp_byte);

  /* If this function needs an interrupt and we are behind
···
  resources->irqs->barber_pole - 1) & 0x03];
  } else {
  /* Program IRQ based on card type */
- rc = pci_bus_read_config_byte (pci_bus, devfn, 0x0B, &class_code);
+ rc = pci_bus_read_config_byte(pci_bus, devfn, 0x0B, &class_code);

  if (class_code == PCI_BASE_CLASS_STORAGE)
  IRQ = cpqhp_disk_irq;
···
  }

  /* IRQ Line */
- rc = pci_bus_write_config_byte (pci_bus, devfn, PCI_INTERRUPT_LINE, IRQ);
+ rc = pci_bus_write_config_byte(pci_bus, devfn, PCI_INTERRUPT_LINE, IRQ);
  }

  if (!behind_bridge) {
···
  * PCI_COMMAND_INVALIDATE |
  * PCI_COMMAND_PARITY |
  * PCI_COMMAND_SERR */
- rc = pci_bus_write_config_word (pci_bus, devfn,
+ rc = pci_bus_write_config_word(pci_bus, devfn,
  PCI_COMMAND, temp_word);
  } else { /* End of Not-A-Bridge else */
  /* It's some strange type of PCI adapter (Cardbus?) */
···

  return 0;
  free_and_out:
- cpqhp_destroy_resource_list (&temp_resources);
+ cpqhp_destroy_resource_list(&temp_resources);

- return_resource(&(resources-> bus_head), hold_bus_node);
- return_resource(&(resources-> io_head), hold_IO_node);
- return_resource(&(resources-> mem_head), hold_mem_node);
- return_resource(&(resources-> p_mem_head), hold_p_mem_node);
+ return_resource(&(resources->bus_head), hold_bus_node);
+ return_resource(&(resources->io_head), hold_IO_node);
+ return_resource(&(resources->mem_head), hold_mem_node);
+ return_resource(&(resources->p_mem_head), hold_p_mem_node);
  return rc;
  }
drivers/pci/hotplug/cpqphp_nvram.c (+46 -46)
···
  if ((*used + 1) > *avail)
  return(1);
- *((u8*)*p_buffer) = value;
- tByte = (u8**)p_buffer;
+ *((u8 *)*p_buffer) = value;
+ tByte = (u8 **)p_buffer;
  (*tByte)++;
- *used+=1;
+ *used += 1;
  return(0);
  }

···

  **p_buffer = value;
  (*p_buffer)++;
- *used+=4;
+ *used += 4;
  return(0);
  }

···
  *
  * returns 0 for non-Compaq ROM, 1 for Compaq ROM
  */
- static int check_for_compaq_ROM (void __iomem *rom_start)
+ static int check_for_compaq_ROM(void __iomem *rom_start)
  {
  u8 temp1, temp2, temp3, temp4, temp5, temp6;
  int result = 0;
···
  (temp6 == 'Q')) {
  result = 1;
  }
- dbg ("%s - returned %d\n", __func__, result);
+ dbg("%s - returned %d\n", __func__, result);
  return result;
  }


- static u32 access_EV (u16 operation, u8 *ev_name, u8 *buffer, u32 *buf_size)
+ static u32 access_EV(u16 operation, u8 *ev_name, u8 *buffer, u32 *buf_size)
  {
  unsigned long flags;
  int op = operation;
···
  *
  * Read the hot plug Resource Table from NVRAM
  */
- static int load_HRT (void __iomem *rom_start)
+ static int load_HRT(void __iomem *rom_start)
  {
  u32 available;
  u32 temp_dword;
···
  *
  * Save the hot plug Resource Table in NVRAM
  */
- static u32 store_HRT (void __iomem *rom_start)
+ static u32 store_HRT(void __iomem *rom_start)
  {
  u32 *buffer;
  u32 *pFill;
···
  if (!check_for_compaq_ROM(rom_start))
  return(1);

- buffer = (u32*) evbuffer;
+ buffer = (u32 *) evbuffer;

  if (!buffer)
  return(1);
···
  loop = 0;

  while (resNode) {
- loop ++;
+ loop++;

  /* base */
  rc = add_dword(&pFill, resNode->base, &usedbytes, &available);
···
  loop = 0;

  while (resNode) {
- loop ++;
+ loop++;

  /* base */
  rc = add_dword(&pFill, resNode->base, &usedbytes, &available);
···
  loop = 0;

  while (resNode) {
- loop ++;
+ loop++;

  /* base */
  rc = add_dword(&pFill, resNode->base, &usedbytes, &available);
···
  loop = 0;

  while (resNode) {
- loop ++;
+ loop++;

  /* base */
  rc = add_dword(&pFill, resNode->base, &usedbytes, &available);
···

  temp_dword = usedbytes;

- rc = access_EV(WRITE_EV, "CQTHPS", (u8*) buffer, &temp_dword);
+ rc = access_EV(WRITE_EV, "CQTHPS", (u8 *) buffer, &temp_dword);

  dbg("usedbytes = 0x%x, length = 0x%x\n", usedbytes, temp_dword);

···
  }


- void compaq_nvram_init (void __iomem *rom_start)
+ void compaq_nvram_init(void __iomem *rom_start)
  {
  if (rom_start)
  compaq_int15_entry_point = (rom_start + ROM_INT15_PHY_ADDR - ROM_PHY_ADDR);
···
  }


- int compaq_nvram_load (void __iomem *rom_start, struct controller *ctrl)
+ int compaq_nvram_load(void __iomem *rom_start, struct controller *ctrl)
  {
  u8 bus, device, function;
  u8 nummem, numpmem, numio, numbus;
···
  if (!evbuffer_init) {
  /* Read the resource list information in from NVRAM */
  if (load_HRT(rom_start))
- memset (evbuffer, 0, 1024);
+ memset(evbuffer, 0, 1024);

  evbuffer_init = 1;
  }
···

  p_byte += 3;

- if (p_byte > ((u8*)p_EV_header + evbuffer_length))
+ if (p_byte > ((u8 *)p_EV_header + evbuffer_length))
  return 2;

  bus = p_ev_ctrl->bus;
···

  p_byte += 4;

- if (p_byte > ((u8*)p_EV_header + evbuffer_length))
+ if (p_byte > ((u8 *)p_EV_header + evbuffer_length))
  return 2;

  /* Skip forward to the next entry */
  p_byte += (nummem + numpmem + numio + numbus) * 8;

- if (p_byte > ((u8*)p_EV_header + evbuffer_length))
+ if (p_byte > ((u8 *)p_EV_header + evbuffer_length))
  return 2;

  p_ev_ctrl = (struct ev_hrt_ctrl *) p_byte;

  p_byte += 3;

- if (p_byte > ((u8*)p_EV_header + evbuffer_length))
+ if (p_byte > ((u8 *)p_EV_header + evbuffer_length))
  return 2;

  bus = p_ev_ctrl->bus;
···

  p_byte += 4;

- if (p_byte > ((u8*)p_EV_header + evbuffer_length))
+ if (p_byte > ((u8 *)p_EV_header + evbuffer_length))
  return 2;

  while (nummem--) {
···

  if (!mem_node)
  break;

- mem_node->base = *(u32*)p_byte;
- dbg("mem base = %8.8x\n",mem_node->base);
+ mem_node->base = *(u32 *)p_byte;
+ dbg("mem base = %8.8x\n", mem_node->base);
  p_byte += 4;

- if (p_byte > ((u8*)p_EV_header + evbuffer_length)) {
+ if (p_byte > ((u8 *)p_EV_header + evbuffer_length)) {
  kfree(mem_node);
  return 2;
  }

- mem_node->length = *(u32*)p_byte;
- dbg("mem length = %8.8x\n",mem_node->length);
+ mem_node->length = *(u32 *)p_byte;
+ dbg("mem length = %8.8x\n", mem_node->length);
  p_byte += 4;

- if (p_byte > ((u8*)p_EV_header + evbuffer_length)) {
+ if (p_byte > ((u8 *)p_EV_header + evbuffer_length)) {
  kfree(mem_node);
  return 2;
  }
···
  if (!p_mem_node)
  break;

- p_mem_node->base = *(u32*)p_byte;
- dbg("pre-mem base = %8.8x\n",p_mem_node->base);
+ p_mem_node->base = *(u32 *)p_byte;
+ dbg("pre-mem base = %8.8x\n", p_mem_node->base);
  p_byte += 4;

- if (p_byte > ((u8*)p_EV_header + evbuffer_length)) {
+ if (p_byte > ((u8 *)p_EV_header + evbuffer_length)) {
  kfree(p_mem_node);
  return 2;
  }

- p_mem_node->length = *(u32*)p_byte;
- dbg("pre-mem length = %8.8x\n",p_mem_node->length);
+ p_mem_node->length = *(u32 *)p_byte;
+ dbg("pre-mem length = %8.8x\n", p_mem_node->length);
  p_byte += 4;

- if (p_byte > ((u8*)p_EV_header + evbuffer_length)) {
+ if (p_byte > ((u8 *)p_EV_header + evbuffer_length)) {
  kfree(p_mem_node);
  return 2;
  }
···
  if (!io_node)
  break;

- io_node->base = *(u32*)p_byte;
- dbg("io base = %8.8x\n",io_node->base);
+ io_node->base = *(u32 *)p_byte;
+ dbg("io base = %8.8x\n", io_node->base);
  p_byte += 4;

- if (p_byte > ((u8*)p_EV_header + evbuffer_length)) {
+ if (p_byte > ((u8 *)p_EV_header + evbuffer_length)) {
  kfree(io_node);
  return 2;
  }

- io_node->length = *(u32*)p_byte;
- dbg("io length = %8.8x\n",io_node->length);
+ io_node->length = *(u32 *)p_byte;
+ dbg("io length = %8.8x\n", io_node->length);
  p_byte += 4;

- if (p_byte > ((u8*)p_EV_header + evbuffer_length)) {
+ if (p_byte > ((u8 *)p_EV_header + evbuffer_length)) {
  kfree(io_node);
  return 2;
  }
···
  if (!bus_node)
  break;

- bus_node->base = *(u32*)p_byte;
+ bus_node->base = *(u32 *)p_byte;
  p_byte += 4;

- if (p_byte > ((u8*)p_EV_header + evbuffer_length)) {
+ if (p_byte > ((u8 *)p_EV_header + evbuffer_length)) {
  kfree(bus_node);
  return 2;
  }

- bus_node->length = *(u32*)p_byte;
+ bus_node->length = *(u32 *)p_byte;
  p_byte += 4;

- if (p_byte > ((u8*)p_EV_header + evbuffer_length)) {
+ if (p_byte > ((u8 *)p_EV_header + evbuffer_length)) {
  kfree(bus_node);
  return 2;
  }
···
  }


- int compaq_nvram_store (void __iomem *rom_start)
+ int compaq_nvram_store(void __iomem *rom_start)
  {
  int rc = 1;
drivers/pci/hotplug/cpqphp_pci.c (+42 -42)
···
  }


- int cpqhp_configure_device (struct controller *ctrl, struct pci_func *func)
+ int cpqhp_configure_device(struct controller *ctrl, struct pci_func *func)
  {
  struct pci_bus *child;
  int num;
···
  pci_lock_rescan_remove();

  if (func->pci_dev == NULL)
- func->pci_dev = pci_get_bus_and_slot(func->bus,PCI_DEVFN(func->device, func->function));
+ func->pci_dev = pci_get_bus_and_slot(func->bus, PCI_DEVFN(func->device, func->function));

  /* No pci device, we need to create it then */
  if (func->pci_dev == NULL) {
···
  dbg("%s: bus/dev/func = %x/%x/%x\n", __func__, func->bus, func->device, func->function);

  pci_lock_rescan_remove();
- for (j=0; j<8 ; j++) {
+ for (j = 0; j < 8 ; j++) {
  struct pci_dev *temp = pci_get_bus_and_slot(func->bus, PCI_DEVFN(func->device, j));
  if (temp) {
  pci_dev_put(temp);
···
  {
  u32 vendID = 0;

- if (pci_bus_read_config_dword (bus, devfn, PCI_VENDOR_ID, &vendID) == -1)
+ if (pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, &vendID) == -1)
  return -1;
  if (vendID == 0xffffffff)
  return -1;
- return pci_bus_read_config_dword (bus, devfn, offset, value);
+ return pci_bus_read_config_dword(bus, devfn, offset, value);
  }


···
  * @dev_num: device number of PCI device
  * @slot: pointer to u8 where slot number will be returned
  */
- int cpqhp_set_irq (u8 bus_num, u8 dev_num, u8 int_pin, u8 irq_num)
+ int cpqhp_set_irq(u8 bus_num, u8 dev_num, u8 int_pin, u8 irq_num)
  {
  int rc = 0;

···
  dbg("Looking for bridge bus_num %d dev_num %d\n", bus_num, tdevice);
  /* Yep we got one. bridge ? */
  if ((work >> 8) == PCI_TO_PCI_BRIDGE_CLASS) {
- pci_bus_read_config_byte (ctrl->pci_bus, PCI_DEVFN(tdevice, 0), PCI_SECONDARY_BUS, &tbus);
+ pci_bus_read_config_byte(ctrl->pci_bus, PCI_DEVFN(tdevice, 0), PCI_SECONDARY_BUS, &tbus);
  /* XXX: no recursion, wtf? */
  dbg("Recurse on bus_num %d tdevice %d\n", tbus, tdevice);
  return 0;
···
  *bus_num = tbus;
  *dev_num = tdevice;
  ctrl->pci_bus->number = tbus;
- pci_bus_read_config_dword (ctrl->pci_bus, *dev_num, PCI_VENDOR_ID, &work);
+ pci_bus_read_config_dword(ctrl->pci_bus, *dev_num, PCI_VENDOR_ID, &work);
  if (!nobridge || (work == 0xffffffff))
  return 0;

  dbg("bus_num %d devfn %d\n", *bus_num, *dev_num);
- pci_bus_read_config_dword (ctrl->pci_bus, *dev_num, PCI_CLASS_REVISION, &work);
+ pci_bus_read_config_dword(ctrl->pci_bus, *dev_num, PCI_CLASS_REVISION, &work);
  dbg("work >> 8 (%x) = BRIDGE (%x)\n", work >> 8, PCI_TO_PCI_BRIDGE_CLASS);

  if ((work >> 8) == PCI_TO_PCI_BRIDGE_CLASS) {
- pci_bus_read_config_byte (ctrl->pci_bus, *dev_num, PCI_SECONDARY_BUS, &tbus);
+ pci_bus_read_config_byte(ctrl->pci_bus, *dev_num, PCI_SECONDARY_BUS, &tbus);
  dbg("Scan bus for Non Bridge: bus %d\n", tbus);
  if (PCI_ScanBusForNonBridge(ctrl, tbus, dev_num) == 0) {
  *bus_num = tbus;
···
  }


- int cpqhp_get_bus_dev (struct controller *ctrl, u8 *bus_num, u8 *dev_num, u8 slot)
+ int cpqhp_get_bus_dev(struct controller *ctrl, u8 *bus_num, u8 *dev_num, u8 slot)
  {
  /* plain (bridges allowed) */
  return PCI_GetBusDevHelper(ctrl, bus_num, dev_num, slot, 0);
···
  new_slot->pci_dev = pci_get_bus_and_slot(new_slot->bus, (new_slot->device << 3) | new_slot->function);

  for (cloop = 0; cloop < 0x20; cloop++) {
- rc = pci_bus_read_config_dword(ctrl->pci_bus, PCI_DEVFN(device, function), cloop << 2, (u32 *) & (new_slot-> config_space [cloop]));
+ rc = pci_bus_read_config_dword(ctrl->pci_bus, PCI_DEVFN(device, function), cloop << 2, (u32 *) &(new_slot->config_space[cloop]));
  if (rc)
  return rc;
  }
···
  *
  * returns 0 if success
  */
- int cpqhp_save_slot_config (struct controller *ctrl, struct pci_func *new_slot)
+ int cpqhp_save_slot_config(struct controller *ctrl, struct pci_func *new_slot)
  {
  long rc;
  u8 class_code;
···
  ID = 0xFFFFFFFF;

  ctrl->pci_bus->number = new_slot->bus;
- pci_bus_read_config_dword (ctrl->pci_bus, PCI_DEVFN(new_slot->device, 0), PCI_VENDOR_ID, &ID);
+ pci_bus_read_config_dword(ctrl->pci_bus, PCI_DEVFN(new_slot->device, 0), PCI_VENDOR_ID, &ID);

  if (ID == 0xFFFFFFFF)
  return 2;
···
  while (function < max_functions) {
  if ((header_type & 0x7F) == PCI_HEADER_TYPE_BRIDGE) {
  /* Recurse the subordinate bus */
- pci_bus_read_config_byte (ctrl->pci_bus, PCI_DEVFN(new_slot->device, function), PCI_SECONDARY_BUS, &secondary_bus);
+ pci_bus_read_config_byte(ctrl->pci_bus, PCI_DEVFN(new_slot->device, function), PCI_SECONDARY_BUS, &secondary_bus);

  sub_bus = (int) secondary_bus;

···
  new_slot->status = 0;

  for (cloop = 0; cloop < 0x20; cloop++)
- pci_bus_read_config_dword(ctrl->pci_bus, PCI_DEVFN(new_slot->device, function), cloop << 2, (u32 *) & (new_slot-> config_space [cloop]));
+ pci_bus_read_config_dword(ctrl->pci_bus, PCI_DEVFN(new_slot->device, function), cloop << 2, (u32 *) &(new_slot->config_space[cloop]));

  function++;

···
  devfn = PCI_DEVFN(func->device, func->function);

  /* Check for Bridge */
- pci_bus_read_config_byte (pci_bus, devfn, PCI_HEADER_TYPE, &header_type);
+ pci_bus_read_config_byte(pci_bus, devfn, PCI_HEADER_TYPE, &header_type);

  if ((header_type & 0x7F) == PCI_HEADER_TYPE_BRIDGE) {
- pci_bus_read_config_byte (pci_bus, devfn, PCI_SECONDARY_BUS, &secondary_bus);
+ pci_bus_read_config_byte(pci_bus, devfn, PCI_SECONDARY_BUS, &secondary_bus);

  sub_bus = (int) secondary_bus;

···
  */
  for (cloop = 0x10; cloop <= 0x14; cloop += 4) {
  temp_register = 0xFFFFFFFF;
- pci_bus_write_config_dword (pci_bus, devfn, cloop, temp_register);
- pci_bus_read_config_dword (pci_bus, devfn, cloop, &base);
+ pci_bus_write_config_dword(pci_bus, devfn, cloop, temp_register);
+ pci_bus_read_config_dword(pci_bus, devfn, cloop, &base);
  /* If this register is implemented */
  if (base) {
  if (base & 0x01L) {
···
  /* Figure out IO and memory base lengths */
  for (cloop = 0x10; cloop <= 0x24; cloop += 4) {
  temp_register = 0xFFFFFFFF;
- pci_bus_write_config_dword (pci_bus, devfn, cloop, temp_register);
- pci_bus_read_config_dword (pci_bus, devfn, cloop, &base);
+ pci_bus_write_config_dword(pci_bus, devfn, cloop, temp_register);
+ pci_bus_read_config_dword(pci_bus, devfn, cloop, &base);

  /* If this register is implemented */
  if (base) {
···
  *
  * returns 0 if success
  */
- int cpqhp_save_used_resources (struct controller *ctrl, struct pci_func *func)
+ int cpqhp_save_used_resources(struct controller *ctrl, struct pci_func *func)
  {
  u8 cloop;
  u8 header_type;
···
  }
  /* Figure out IO and memory base lengths */
  for (cloop = 0x10; cloop <= 0x14; cloop += 4) {
- pci_bus_read_config_dword (pci_bus, devfn, cloop, &save_base);
+ pci_bus_read_config_dword(pci_bus, devfn, cloop, &save_base);

  temp_register = 0xFFFFFFFF;
  pci_bus_write_config_dword(pci_bus, devfn, cloop, temp_register);
···
  * registers are programmed last
  */
  for (cloop = 0x3C; cloop > 0; cloop -= 4)
- pci_bus_write_config_dword (pci_bus, devfn, cloop, func->config_space[cloop >> 2]);
+ pci_bus_write_config_dword(pci_bus, devfn, cloop, func->config_space[cloop >> 2]);

- pci_bus_read_config_byte (pci_bus, devfn, PCI_HEADER_TYPE, &header_type);
+ pci_bus_read_config_byte(pci_bus, devfn, PCI_HEADER_TYPE, &header_type);

  /* If this is a bridge device, restore subordinate devices */
  if ((header_type & 0x7F) == PCI_HEADER_TYPE_BRIDGE) {
- pci_bus_read_config_byte (pci_bus, devfn, PCI_SECONDARY_BUS, &secondary_bus);
+ pci_bus_read_config_byte(pci_bus, devfn, PCI_SECONDARY_BUS, &secondary_bus);

  sub_bus = (int) secondary_bus;

···
  */

  for (cloop = 16; cloop < 40; cloop += 4) {
- pci_bus_read_config_dword (pci_bus, devfn, cloop, &temp);
+ pci_bus_read_config_dword(pci_bus, devfn, cloop, &temp);

  if (temp != func->config_space[cloop >> 2]) {
  dbg("Config space compare failure!!!
offset = %x\n", cloop); ··· 1050 1050 pci_bus->number = func->bus; 1051 1051 devfn = PCI_DEVFN(func->device, func->function); 1052 1052 1053 - pci_bus_read_config_dword (pci_bus, devfn, PCI_VENDOR_ID, &temp_register); 1053 + pci_bus_read_config_dword(pci_bus, devfn, PCI_VENDOR_ID, &temp_register); 1054 1054 1055 1055 /* No adapter present */ 1056 1056 if (temp_register == 0xFFFFFFFF) ··· 1060 1060 return(ADAPTER_NOT_SAME); 1061 1061 1062 1062 /* Check for same revision number and class code */ 1063 - pci_bus_read_config_dword (pci_bus, devfn, PCI_CLASS_REVISION, &temp_register); 1063 + pci_bus_read_config_dword(pci_bus, devfn, PCI_CLASS_REVISION, &temp_register); 1064 1064 1065 1065 /* Adapter not the same */ 1066 1066 if (temp_register != func->config_space[0x08 >> 2]) 1067 1067 return(ADAPTER_NOT_SAME); 1068 1068 1069 1069 /* Check for Bridge */ 1070 - pci_bus_read_config_byte (pci_bus, devfn, PCI_HEADER_TYPE, &header_type); 1070 + pci_bus_read_config_byte(pci_bus, devfn, PCI_HEADER_TYPE, &header_type); 1071 1071 1072 1072 if ((header_type & 0x7F) == PCI_HEADER_TYPE_BRIDGE) { 1073 1073 /* In order to continue checking, we must program the ··· 1076 1076 */ 1077 1077 1078 1078 temp_register = func->config_space[0x18 >> 2]; 1079 - pci_bus_write_config_dword (pci_bus, devfn, PCI_PRIMARY_BUS, temp_register); 1079 + pci_bus_write_config_dword(pci_bus, devfn, PCI_PRIMARY_BUS, temp_register); 1080 1080 1081 1081 secondary_bus = (temp_register >> 8) & 0xFF; 1082 1082 ··· 1094 1094 /* Check to see if it is a standard config header */ 1095 1095 else if ((header_type & 0x7F) == PCI_HEADER_TYPE_NORMAL) { 1096 1096 /* Check subsystem vendor and ID */ 1097 - pci_bus_read_config_dword (pci_bus, devfn, PCI_SUBSYSTEM_VENDOR_ID, &temp_register); 1097 + pci_bus_read_config_dword(pci_bus, devfn, PCI_SUBSYSTEM_VENDOR_ID, &temp_register); 1098 1098 1099 1099 if (temp_register != func->config_space[0x2C >> 2]) { 1100 1100 /* If it's a SMART-2 and the register isn't ··· 1108 1108 /* 
Figure out IO and memory base lengths */ 1109 1109 for (cloop = 0x10; cloop <= 0x24; cloop += 4) { 1110 1110 temp_register = 0xFFFFFFFF; 1111 - pci_bus_write_config_dword (pci_bus, devfn, cloop, temp_register); 1112 - pci_bus_read_config_dword (pci_bus, devfn, cloop, &base); 1111 + pci_bus_write_config_dword(pci_bus, devfn, cloop, temp_register); 1112 + pci_bus_read_config_dword(pci_bus, devfn, cloop, &base); 1113 1113 1114 1114 /* If this register is implemented */ 1115 1115 if (base) { ··· 1234 1234 if (rc) 1235 1235 return rc; 1236 1236 1237 - one_slot = rom_resource_table + sizeof (struct hrt); 1237 + one_slot = rom_resource_table + sizeof(struct hrt); 1238 1238 1239 1239 i = readb(rom_resource_table + NUMBER_OF_ENTRIES); 1240 1240 dbg("number_of_entries = %d\n", i); ··· 1263 1263 /* If this entry isn't for our controller's bus, ignore it */ 1264 1264 if (primary_bus != ctrl->bus) { 1265 1265 i--; 1266 - one_slot += sizeof (struct slot_rt); 1266 + one_slot += sizeof(struct slot_rt); 1267 1267 continue; 1268 1268 } 1269 1269 /* find out if this entry is for an occupied slot */ 1270 1270 ctrl->pci_bus->number = primary_bus; 1271 - pci_bus_read_config_dword (ctrl->pci_bus, dev_func, PCI_VENDOR_ID, &temp_dword); 1271 + pci_bus_read_config_dword(ctrl->pci_bus, dev_func, PCI_VENDOR_ID, &temp_dword); 1272 1272 dbg("temp_D_word = %x\n", temp_dword); 1273 1273 1274 1274 if (temp_dword != 0xFFFFFFFF) { ··· 1283 1283 /* If we can't find a match, skip this table entry */ 1284 1284 if (!func) { 1285 1285 i--; 1286 - one_slot += sizeof (struct slot_rt); 1286 + one_slot += sizeof(struct slot_rt); 1287 1287 continue; 1288 1288 } 1289 1289 /* this may not work and shouldn't be used */ ··· 1395 1395 } 1396 1396 1397 1397 i--; 1398 - one_slot += sizeof (struct slot_rt); 1398 + one_slot += sizeof(struct slot_rt); 1399 1399 } 1400 1400 1401 1401 /* If all of the following fail, we don't have any resources for ··· 1475 1475 * 1476 1476 * Puts node back in the resource list pointed 
to by head 1477 1477 */ 1478 - void cpqhp_destroy_resource_list (struct resource_lists *resources) 1478 + void cpqhp_destroy_resource_list(struct resource_lists *resources) 1479 1479 { 1480 1480 struct pci_resource *res, *tres; 1481 1481 ··· 1522 1522 * 1523 1523 * Puts node back in the resource list pointed to by head 1524 1524 */ 1525 - void cpqhp_destroy_board_resources (struct pci_func *func) 1525 + void cpqhp_destroy_board_resources(struct pci_func *func) 1526 1526 { 1527 1527 struct pci_resource *res, *tres; 1528 1528
+3 -3
drivers/pci/hotplug/cpqphp_sysfs.c
··· 39 39 #include "cpqphp.h" 40 40 41 41 static DEFINE_MUTEX(cpqphp_mutex); 42 - static int show_ctrl (struct controller *ctrl, char *buf) 42 + static int show_ctrl(struct controller *ctrl, char *buf) 43 43 { 44 44 char *out = buf; 45 45 int index; ··· 77 77 return out - buf; 78 78 } 79 79 80 - static int show_dev (struct controller *ctrl, char *buf) 80 + static int show_dev(struct controller *ctrl, char *buf) 81 81 { 82 82 char *out = buf; 83 83 int index; ··· 119 119 out += sprintf(out, "start = %8.8x, length = %8.8x\n", res->base, res->length); 120 120 res = res->next; 121 121 } 122 - slot=slot->next; 122 + slot = slot->next; 123 123 } 124 124 125 125 return out - buf;
+6 -6
drivers/pci/hotplug/ibmphp.h
··· 39 39 #else 40 40 #define MY_NAME THIS_MODULE->name 41 41 #endif 42 - #define debug(fmt, arg...) do { if (ibmphp_debug == 1) printk(KERN_DEBUG "%s: " fmt , MY_NAME , ## arg); } while (0) 43 - #define debug_pci(fmt, arg...) do { if (ibmphp_debug) printk(KERN_DEBUG "%s: " fmt , MY_NAME , ## arg); } while (0) 44 - #define err(format, arg...) printk(KERN_ERR "%s: " format , MY_NAME , ## arg) 45 - #define info(format, arg...) printk(KERN_INFO "%s: " format , MY_NAME , ## arg) 46 - #define warn(format, arg...) printk(KERN_WARNING "%s: " format , MY_NAME , ## arg) 42 + #define debug(fmt, arg...) do { if (ibmphp_debug == 1) printk(KERN_DEBUG "%s: " fmt, MY_NAME, ## arg); } while (0) 43 + #define debug_pci(fmt, arg...) do { if (ibmphp_debug) printk(KERN_DEBUG "%s: " fmt, MY_NAME, ## arg); } while (0) 44 + #define err(format, arg...) printk(KERN_ERR "%s: " format, MY_NAME, ## arg) 45 + #define info(format, arg...) printk(KERN_INFO "%s: " format, MY_NAME, ## arg) 46 + #define warn(format, arg...) printk(KERN_WARNING "%s: " format, MY_NAME, ## arg) 47 47 48 48 49 49 /* EBDA stuff */ ··· 603 603 #define SLOT_CONNECT(s) ((u8) ((s & HPC_SLOT_CONNECT) \ 604 604 ? HPC_SLOT_DISCONNECTED : HPC_SLOT_CONNECTED)) 605 605 606 - #define SLOT_ATTN(s,es) ((u8) ((es & HPC_SLOT_BLINK_ATTN) \ 606 + #define SLOT_ATTN(s, es) ((u8) ((es & HPC_SLOT_BLINK_ATTN) \ 607 607 ? HPC_SLOT_ATTN_BLINK \ 608 608 : ((s & HPC_SLOT_ATTN) ? HPC_SLOT_ATTN_ON : HPC_SLOT_ATTN_OFF))) 609 609
+21 -33
drivers/pci/hotplug/ibmphp_core.c
··· 39 39 #include <asm/io_apic.h> 40 40 #include "ibmphp.h" 41 41 42 - #define attn_on(sl) ibmphp_hpc_writeslot (sl, HPC_SLOT_ATTNON) 43 - #define attn_off(sl) ibmphp_hpc_writeslot (sl, HPC_SLOT_ATTNOFF) 44 - #define attn_LED_blink(sl) ibmphp_hpc_writeslot (sl, HPC_SLOT_BLINKLED) 45 - #define get_ctrl_revision(sl, rev) ibmphp_hpc_readslot (sl, READ_REVLEVEL, rev) 46 - #define get_hpc_options(sl, opt) ibmphp_hpc_readslot (sl, READ_HPCOPTIONS, opt) 42 + #define attn_on(sl) ibmphp_hpc_writeslot(sl, HPC_SLOT_ATTNON) 43 + #define attn_off(sl) ibmphp_hpc_writeslot(sl, HPC_SLOT_ATTNOFF) 44 + #define attn_LED_blink(sl) ibmphp_hpc_writeslot(sl, HPC_SLOT_BLINKLED) 45 + #define get_ctrl_revision(sl, rev) ibmphp_hpc_readslot(sl, READ_REVLEVEL, rev) 46 + #define get_hpc_options(sl, opt) ibmphp_hpc_readslot(sl, READ_HPCOPTIONS, opt) 47 47 48 48 #define DRIVER_VERSION "0.6" 49 49 #define DRIVER_DESC "IBM Hot Plug PCI Controller Driver" ··· 52 52 53 53 static bool debug; 54 54 module_param(debug, bool, S_IRUGO | S_IWUSR); 55 - MODULE_PARM_DESC (debug, "Debugging mode enabled or not"); 56 - MODULE_LICENSE ("GPL"); 57 - MODULE_DESCRIPTION (DRIVER_DESC); 55 + MODULE_PARM_DESC(debug, "Debugging mode enabled or not"); 56 + MODULE_LICENSE("GPL"); 57 + MODULE_DESCRIPTION(DRIVER_DESC); 58 58 59 59 struct pci_bus *ibmphp_pci_bus; 60 60 static int max_slots; ··· 113 113 return rc; 114 114 } 115 115 116 - static int __init get_max_slots (void) 116 + static int __init get_max_slots(void) 117 117 { 118 118 struct slot *slot_cur; 119 - struct list_head *tmp; 120 119 u8 slot_count = 0; 121 120 122 - list_for_each(tmp, &ibmphp_slot_head) { 123 - slot_cur = list_entry(tmp, struct slot, ibm_slot_list); 121 + list_for_each_entry(slot_cur, &ibmphp_slot_head, ibm_slot_list) { 124 122 /* sometimes the hot-pluggable slots start with 4 (not always from 1) */ 125 123 slot_count = max(slot_count, slot_cur->number); 126 124 } ··· 457 459 *value = SLOT_SPEED(myslot.ext_status); 458 460 } else 459 461 *value 
= MAX_ADAPTER_NONE; 460 - } 462 + } 461 463 } 462 464 463 465 if (flag) ··· 499 501 static int __init init_ops(void) 500 502 { 501 503 struct slot *slot_cur; 502 - struct list_head *tmp; 503 504 int retval; 504 505 int rc; 505 506 506 - list_for_each(tmp, &ibmphp_slot_head) { 507 - slot_cur = list_entry(tmp, struct slot, ibm_slot_list); 508 - 509 - if (!slot_cur) 510 - return -ENODEV; 511 - 507 + list_for_each_entry(slot_cur, &ibmphp_slot_head, ibm_slot_list) { 512 508 debug("BEFORE GETTING SLOT STATUS, slot # %x\n", 513 509 slot_cur->number); 514 510 if (slot_cur->ctrl->revision == 0xFF) ··· 612 620 info->attention_status = SLOT_ATTN(slot_cur->status, 613 621 slot_cur->ext_status); 614 622 info->latch_status = SLOT_LATCH(slot_cur->status); 615 - if (!SLOT_PRESENT(slot_cur->status)) { 616 - info->adapter_status = 0; 623 + if (!SLOT_PRESENT(slot_cur->status)) { 624 + info->adapter_status = 0; 617 625 /* info->max_adapter_speed_status = MAX_ADAPTER_NONE; */ 618 626 } else { 619 - info->adapter_status = 1; 627 + info->adapter_status = 1; 620 628 /* get_max_adapter_speed_1(slot_cur->hotplug_slot, 621 629 &info->max_adapter_speed_status, 0); */ 622 630 } ··· 661 669 { 662 670 struct pci_func *func_cur; 663 671 struct slot *slot_cur; 664 - struct list_head *tmp; 665 - list_for_each(tmp, &ibmphp_slot_head) { 666 - slot_cur = list_entry(tmp, struct slot, ibm_slot_list); 672 + list_for_each_entry(slot_cur, &ibmphp_slot_head, ibm_slot_list) { 667 673 if (slot_cur->func) { 668 674 func_cur = slot_cur->func; 669 675 while (func_cur) { ··· 683 693 *************************************************************/ 684 694 static void free_slots(void) 685 695 { 686 - struct slot *slot_cur; 687 - struct list_head *tmp; 688 - struct list_head *next; 696 + struct slot *slot_cur, *next; 689 697 690 698 debug("%s -- enter\n", __func__); 691 699 692 - list_for_each_safe(tmp, next, &ibmphp_slot_head) { 693 - slot_cur = list_entry(tmp, struct slot, ibm_slot_list); 700 + 
list_for_each_entry_safe(slot_cur, next, &ibmphp_slot_head, 701 + ibm_slot_list) { 694 702 pci_hp_deregister(slot_cur->hotplug_slot); 695 703 } 696 704 debug("%s -- exit\n", __func__); ··· 854 866 int retval; 855 867 static struct pci_device_id ciobx[] = { 856 868 { PCI_DEVICE(PCI_VENDOR_ID_SERVERWORKS, 0x0101) }, 857 - { }, 869 + { }, 858 870 }; 859 871 860 872 debug("%s - entry slot # %d\n", __func__, slot_cur->number); ··· 1170 1182 * HOT REMOVING ADAPTER CARD * 1171 1183 * INPUT: POINTER TO THE HOTPLUG SLOT STRUCTURE * 1172 1184 * OUTPUT: SUCCESS 0 ; FAILURE: UNCONFIGURE , VALIDATE * 1173 - DISABLE POWER , * 1185 + * DISABLE POWER , * 1174 1186 **************************************************************/ 1175 1187 static int ibmphp_disable_slot(struct hotplug_slot *hotplug_slot) 1176 1188 {
+262 -268
drivers/pci/hotplug/ibmphp_ebda.c
··· 49 49 */ 50 50 51 51 /* Global lists */ 52 - LIST_HEAD (ibmphp_ebda_pci_rsrc_head); 53 - LIST_HEAD (ibmphp_slot_head); 52 + LIST_HEAD(ibmphp_ebda_pci_rsrc_head); 53 + LIST_HEAD(ibmphp_slot_head); 54 54 55 55 /* Local variables */ 56 56 static struct ebda_hpc_list *hpc_list_ptr; 57 57 static struct ebda_rsrc_list *rsrc_list_ptr; 58 58 static struct rio_table_hdr *rio_table_ptr = NULL; 59 - static LIST_HEAD (ebda_hpc_head); 60 - static LIST_HEAD (bus_info_head); 61 - static LIST_HEAD (rio_vg_head); 62 - static LIST_HEAD (rio_lo_head); 63 - static LIST_HEAD (opt_vg_head); 64 - static LIST_HEAD (opt_lo_head); 59 + static LIST_HEAD(ebda_hpc_head); 60 + static LIST_HEAD(bus_info_head); 61 + static LIST_HEAD(rio_vg_head); 62 + static LIST_HEAD(rio_lo_head); 63 + static LIST_HEAD(opt_vg_head); 64 + static LIST_HEAD(opt_lo_head); 65 65 static void __iomem *io_mem; 66 66 67 67 /* Local functions */ 68 - static int ebda_rsrc_controller (void); 69 - static int ebda_rsrc_rsrc (void); 70 - static int ebda_rio_table (void); 68 + static int ebda_rsrc_controller(void); 69 + static int ebda_rsrc_rsrc(void); 70 + static int ebda_rio_table(void); 71 71 72 - static struct ebda_hpc_list * __init alloc_ebda_hpc_list (void) 72 + static struct ebda_hpc_list * __init alloc_ebda_hpc_list(void) 73 73 { 74 74 return kzalloc(sizeof(struct ebda_hpc_list), GFP_KERNEL); 75 75 } 76 76 77 - static struct controller *alloc_ebda_hpc (u32 slot_count, u32 bus_count) 77 + static struct controller *alloc_ebda_hpc(u32 slot_count, u32 bus_count) 78 78 { 79 79 struct controller *controller; 80 80 struct ebda_hpc_slot *slots; ··· 103 103 return NULL; 104 104 } 105 105 106 - static void free_ebda_hpc (struct controller *controller) 106 + static void free_ebda_hpc(struct controller *controller) 107 107 { 108 - kfree (controller->slots); 109 - kfree (controller->buses); 110 - kfree (controller); 108 + kfree(controller->slots); 109 + kfree(controller->buses); 110 + kfree(controller); 111 111 } 112 112 113 - 
static struct ebda_rsrc_list * __init alloc_ebda_rsrc_list (void) 113 + static struct ebda_rsrc_list * __init alloc_ebda_rsrc_list(void) 114 114 { 115 115 return kzalloc(sizeof(struct ebda_rsrc_list), GFP_KERNEL); 116 116 } 117 117 118 - static struct ebda_pci_rsrc *alloc_ebda_pci_rsrc (void) 118 + static struct ebda_pci_rsrc *alloc_ebda_pci_rsrc(void) 119 119 { 120 120 return kzalloc(sizeof(struct ebda_pci_rsrc), GFP_KERNEL); 121 121 } 122 122 123 - static void __init print_bus_info (void) 123 + static void __init print_bus_info(void) 124 124 { 125 125 struct bus_info *ptr; 126 126 127 127 list_for_each_entry(ptr, &bus_info_head, bus_info_list) { 128 - debug ("%s - slot_min = %x\n", __func__, ptr->slot_min); 129 - debug ("%s - slot_max = %x\n", __func__, ptr->slot_max); 130 - debug ("%s - slot_count = %x\n", __func__, ptr->slot_count); 131 - debug ("%s - bus# = %x\n", __func__, ptr->busno); 132 - debug ("%s - current_speed = %x\n", __func__, ptr->current_speed); 133 - debug ("%s - controller_id = %x\n", __func__, ptr->controller_id); 128 + debug("%s - slot_min = %x\n", __func__, ptr->slot_min); 129 + debug("%s - slot_max = %x\n", __func__, ptr->slot_max); 130 + debug("%s - slot_count = %x\n", __func__, ptr->slot_count); 131 + debug("%s - bus# = %x\n", __func__, ptr->busno); 132 + debug("%s - current_speed = %x\n", __func__, ptr->current_speed); 133 + debug("%s - controller_id = %x\n", __func__, ptr->controller_id); 134 134 135 - debug ("%s - slots_at_33_conv = %x\n", __func__, ptr->slots_at_33_conv); 136 - debug ("%s - slots_at_66_conv = %x\n", __func__, ptr->slots_at_66_conv); 137 - debug ("%s - slots_at_66_pcix = %x\n", __func__, ptr->slots_at_66_pcix); 138 - debug ("%s - slots_at_100_pcix = %x\n", __func__, ptr->slots_at_100_pcix); 139 - debug ("%s - slots_at_133_pcix = %x\n", __func__, ptr->slots_at_133_pcix); 135 + debug("%s - slots_at_33_conv = %x\n", __func__, ptr->slots_at_33_conv); 136 + debug("%s - slots_at_66_conv = %x\n", __func__, 
ptr->slots_at_66_conv); 137 + debug("%s - slots_at_66_pcix = %x\n", __func__, ptr->slots_at_66_pcix); 138 + debug("%s - slots_at_100_pcix = %x\n", __func__, ptr->slots_at_100_pcix); 139 + debug("%s - slots_at_133_pcix = %x\n", __func__, ptr->slots_at_133_pcix); 140 140 141 141 } 142 142 } 143 143 144 - static void print_lo_info (void) 144 + static void print_lo_info(void) 145 145 { 146 146 struct rio_detail *ptr; 147 - debug ("print_lo_info ----\n"); 147 + debug("print_lo_info ----\n"); 148 148 list_for_each_entry(ptr, &rio_lo_head, rio_detail_list) { 149 - debug ("%s - rio_node_id = %x\n", __func__, ptr->rio_node_id); 150 - debug ("%s - rio_type = %x\n", __func__, ptr->rio_type); 151 - debug ("%s - owner_id = %x\n", __func__, ptr->owner_id); 152 - debug ("%s - first_slot_num = %x\n", __func__, ptr->first_slot_num); 153 - debug ("%s - wpindex = %x\n", __func__, ptr->wpindex); 154 - debug ("%s - chassis_num = %x\n", __func__, ptr->chassis_num); 149 + debug("%s - rio_node_id = %x\n", __func__, ptr->rio_node_id); 150 + debug("%s - rio_type = %x\n", __func__, ptr->rio_type); 151 + debug("%s - owner_id = %x\n", __func__, ptr->owner_id); 152 + debug("%s - first_slot_num = %x\n", __func__, ptr->first_slot_num); 153 + debug("%s - wpindex = %x\n", __func__, ptr->wpindex); 154 + debug("%s - chassis_num = %x\n", __func__, ptr->chassis_num); 155 155 156 156 } 157 157 } 158 158 159 - static void print_vg_info (void) 159 + static void print_vg_info(void) 160 160 { 161 161 struct rio_detail *ptr; 162 - debug ("%s ---\n", __func__); 162 + debug("%s ---\n", __func__); 163 163 list_for_each_entry(ptr, &rio_vg_head, rio_detail_list) { 164 - debug ("%s - rio_node_id = %x\n", __func__, ptr->rio_node_id); 165 - debug ("%s - rio_type = %x\n", __func__, ptr->rio_type); 166 - debug ("%s - owner_id = %x\n", __func__, ptr->owner_id); 167 - debug ("%s - first_slot_num = %x\n", __func__, ptr->first_slot_num); 168 - debug ("%s - wpindex = %x\n", __func__, ptr->wpindex); 169 - debug ("%s - 
chassis_num = %x\n", __func__, ptr->chassis_num); 164 + debug("%s - rio_node_id = %x\n", __func__, ptr->rio_node_id); 165 + debug("%s - rio_type = %x\n", __func__, ptr->rio_type); 166 + debug("%s - owner_id = %x\n", __func__, ptr->owner_id); 167 + debug("%s - first_slot_num = %x\n", __func__, ptr->first_slot_num); 168 + debug("%s - wpindex = %x\n", __func__, ptr->wpindex); 169 + debug("%s - chassis_num = %x\n", __func__, ptr->chassis_num); 170 170 171 171 } 172 172 } 173 173 174 - static void __init print_ebda_pci_rsrc (void) 174 + static void __init print_ebda_pci_rsrc(void) 175 175 { 176 176 struct ebda_pci_rsrc *ptr; 177 177 178 178 list_for_each_entry(ptr, &ibmphp_ebda_pci_rsrc_head, ebda_pci_rsrc_list) { 179 - debug ("%s - rsrc type: %x bus#: %x dev_func: %x start addr: %x end addr: %x\n", 180 - __func__, ptr->rsrc_type ,ptr->bus_num, ptr->dev_fun,ptr->start_addr, ptr->end_addr); 179 + debug("%s - rsrc type: %x bus#: %x dev_func: %x start addr: %x end addr: %x\n", 180 + __func__, ptr->rsrc_type, ptr->bus_num, ptr->dev_fun, ptr->start_addr, ptr->end_addr); 181 181 } 182 182 } 183 183 184 - static void __init print_ibm_slot (void) 184 + static void __init print_ibm_slot(void) 185 185 { 186 186 struct slot *ptr; 187 187 188 188 list_for_each_entry(ptr, &ibmphp_slot_head, ibm_slot_list) { 189 - debug ("%s - slot_number: %x\n", __func__, ptr->number); 189 + debug("%s - slot_number: %x\n", __func__, ptr->number); 190 190 } 191 191 } 192 192 193 - static void __init print_opt_vg (void) 193 + static void __init print_opt_vg(void) 194 194 { 195 195 struct opt_rio *ptr; 196 - debug ("%s ---\n", __func__); 196 + debug("%s ---\n", __func__); 197 197 list_for_each_entry(ptr, &opt_vg_head, opt_rio_list) { 198 - debug ("%s - rio_type %x\n", __func__, ptr->rio_type); 199 - debug ("%s - chassis_num: %x\n", __func__, ptr->chassis_num); 200 - debug ("%s - first_slot_num: %x\n", __func__, ptr->first_slot_num); 201 - debug ("%s - middle_num: %x\n", __func__, ptr->middle_num); 198 
+ debug("%s - rio_type %x\n", __func__, ptr->rio_type); 199 + debug("%s - chassis_num: %x\n", __func__, ptr->chassis_num); 200 + debug("%s - first_slot_num: %x\n", __func__, ptr->first_slot_num); 201 + debug("%s - middle_num: %x\n", __func__, ptr->middle_num); 202 202 } 203 203 } 204 204 205 - static void __init print_ebda_hpc (void) 205 + static void __init print_ebda_hpc(void) 206 206 { 207 207 struct controller *hpc_ptr; 208 208 u16 index; 209 209 210 210 list_for_each_entry(hpc_ptr, &ebda_hpc_head, ebda_hpc_list) { 211 211 for (index = 0; index < hpc_ptr->slot_count; index++) { 212 - debug ("%s - physical slot#: %x\n", __func__, hpc_ptr->slots[index].slot_num); 213 - debug ("%s - pci bus# of the slot: %x\n", __func__, hpc_ptr->slots[index].slot_bus_num); 214 - debug ("%s - index into ctlr addr: %x\n", __func__, hpc_ptr->slots[index].ctl_index); 215 - debug ("%s - cap of the slot: %x\n", __func__, hpc_ptr->slots[index].slot_cap); 212 + debug("%s - physical slot#: %x\n", __func__, hpc_ptr->slots[index].slot_num); 213 + debug("%s - pci bus# of the slot: %x\n", __func__, hpc_ptr->slots[index].slot_bus_num); 214 + debug("%s - index into ctlr addr: %x\n", __func__, hpc_ptr->slots[index].ctl_index); 215 + debug("%s - cap of the slot: %x\n", __func__, hpc_ptr->slots[index].slot_cap); 216 216 } 217 217 218 218 for (index = 0; index < hpc_ptr->bus_count; index++) 219 - debug ("%s - bus# of each bus controlled by this ctlr: %x\n", __func__, hpc_ptr->buses[index].bus_num); 219 + debug("%s - bus# of each bus controlled by this ctlr: %x\n", __func__, hpc_ptr->buses[index].bus_num); 220 220 221 - debug ("%s - type of hpc: %x\n", __func__, hpc_ptr->ctlr_type); 221 + debug("%s - type of hpc: %x\n", __func__, hpc_ptr->ctlr_type); 222 222 switch (hpc_ptr->ctlr_type) { 223 223 case 1: 224 - debug ("%s - bus: %x\n", __func__, hpc_ptr->u.pci_ctlr.bus); 225 - debug ("%s - dev_fun: %x\n", __func__, hpc_ptr->u.pci_ctlr.dev_fun); 226 - debug ("%s - irq: %x\n", __func__, hpc_ptr->irq); 
224 + debug("%s - bus: %x\n", __func__, hpc_ptr->u.pci_ctlr.bus); 225 + debug("%s - dev_fun: %x\n", __func__, hpc_ptr->u.pci_ctlr.dev_fun); 226 + debug("%s - irq: %x\n", __func__, hpc_ptr->irq); 227 227 break; 228 228 229 229 case 0: 230 - debug ("%s - io_start: %x\n", __func__, hpc_ptr->u.isa_ctlr.io_start); 231 - debug ("%s - io_end: %x\n", __func__, hpc_ptr->u.isa_ctlr.io_end); 232 - debug ("%s - irq: %x\n", __func__, hpc_ptr->irq); 230 + debug("%s - io_start: %x\n", __func__, hpc_ptr->u.isa_ctlr.io_start); 231 + debug("%s - io_end: %x\n", __func__, hpc_ptr->u.isa_ctlr.io_end); 232 + debug("%s - irq: %x\n", __func__, hpc_ptr->irq); 233 233 break; 234 234 235 235 case 2: 236 236 case 4: 237 - debug ("%s - wpegbbar: %lx\n", __func__, hpc_ptr->u.wpeg_ctlr.wpegbbar); 238 - debug ("%s - i2c_addr: %x\n", __func__, hpc_ptr->u.wpeg_ctlr.i2c_addr); 239 - debug ("%s - irq: %x\n", __func__, hpc_ptr->irq); 237 + debug("%s - wpegbbar: %lx\n", __func__, hpc_ptr->u.wpeg_ctlr.wpegbbar); 238 + debug("%s - i2c_addr: %x\n", __func__, hpc_ptr->u.wpeg_ctlr.i2c_addr); 239 + debug("%s - irq: %x\n", __func__, hpc_ptr->irq); 240 240 break; 241 241 } 242 242 } 243 243 } 244 244 245 - int __init ibmphp_access_ebda (void) 245 + int __init ibmphp_access_ebda(void) 246 246 { 247 247 u8 format, num_ctlrs, rio_complete, hs_complete, ebda_sz; 248 248 u16 ebda_seg, num_entries, next_offset, offset, blk_id, sub_addr, re, rc_id, re_id, base; ··· 252 252 rio_complete = 0; 253 253 hs_complete = 0; 254 254 255 - io_mem = ioremap ((0x40 << 4) + 0x0e, 2); 256 - if (!io_mem ) 255 + io_mem = ioremap((0x40 << 4) + 0x0e, 2); 256 + if (!io_mem) 257 257 return -ENOMEM; 258 - ebda_seg = readw (io_mem); 259 - iounmap (io_mem); 260 - debug ("returned ebda segment: %x\n", ebda_seg); 258 + ebda_seg = readw(io_mem); 259 + iounmap(io_mem); 260 + debug("returned ebda segment: %x\n", ebda_seg); 261 261 262 262 io_mem = ioremap(ebda_seg<<4, 1); 263 263 if (!io_mem) ··· 269 269 return -ENOMEM; 270 270 271 271 io_mem = 
ioremap(ebda_seg<<4, (ebda_sz * 1024)); 272 - if (!io_mem ) 272 + if (!io_mem) 273 273 return -ENOMEM; 274 274 next_offset = 0x180; 275 275 ··· 281 281 "ibmphp_ebda: next read is beyond ebda_sz\n")) 282 282 break; 283 283 284 - next_offset = readw (io_mem + offset); /* offset of next blk */ 284 + next_offset = readw(io_mem + offset); /* offset of next blk */ 285 285 286 286 offset += 2; 287 287 if (next_offset == 0) /* 0 indicate it's last blk */ 288 288 break; 289 - blk_id = readw (io_mem + offset); /* this blk id */ 289 + blk_id = readw(io_mem + offset); /* this blk id */ 290 290 291 291 offset += 2; 292 292 /* check if it is hot swap block or rio block */ ··· 294 294 continue; 295 295 /* found hs table */ 296 296 if (blk_id == 0x4853) { 297 - debug ("now enter hot swap block---\n"); 298 - debug ("hot blk id: %x\n", blk_id); 299 - format = readb (io_mem + offset); 297 + debug("now enter hot swap block---\n"); 298 + debug("hot blk id: %x\n", blk_id); 299 + format = readb(io_mem + offset); 300 300 301 301 offset += 1; 302 302 if (format != 4) 303 303 goto error_nodev; 304 - debug ("hot blk format: %x\n", format); 304 + debug("hot blk format: %x\n", format); 305 305 /* hot swap sub blk */ 306 306 base = offset; 307 307 308 308 sub_addr = base; 309 - re = readw (io_mem + sub_addr); /* next sub blk */ 309 + re = readw(io_mem + sub_addr); /* next sub blk */ 310 310 311 311 sub_addr += 2; 312 - rc_id = readw (io_mem + sub_addr); /* sub blk id */ 312 + rc_id = readw(io_mem + sub_addr); /* sub blk id */ 313 313 314 314 sub_addr += 2; 315 315 if (rc_id != 0x5243) 316 316 goto error_nodev; 317 317 /* rc sub blk signature */ 318 - num_ctlrs = readb (io_mem + sub_addr); 318 + num_ctlrs = readb(io_mem + sub_addr); 319 319 320 320 sub_addr += 1; 321 - hpc_list_ptr = alloc_ebda_hpc_list (); 321 + hpc_list_ptr = alloc_ebda_hpc_list(); 322 322 if (!hpc_list_ptr) { 323 323 rc = -ENOMEM; 324 324 goto out; ··· 326 326 hpc_list_ptr->format = format; 327 327 hpc_list_ptr->num_ctlrs = 
num_ctlrs; 328 328 hpc_list_ptr->phys_addr = sub_addr; /* offset of RSRC_CONTROLLER blk */ 329 - debug ("info about hpc descriptor---\n"); 330 - debug ("hot blk format: %x\n", format); 331 - debug ("num of controller: %x\n", num_ctlrs); 332 - debug ("offset of hpc data structure entries: %x\n ", sub_addr); 329 + debug("info about hpc descriptor---\n"); 330 + debug("hot blk format: %x\n", format); 331 + debug("num of controller: %x\n", num_ctlrs); 332 + debug("offset of hpc data structure entries: %x\n ", sub_addr); 333 333 334 334 sub_addr = base + re; /* re sub blk */ 335 335 /* FIXME: rc is never used/checked */ 336 - rc = readw (io_mem + sub_addr); /* next sub blk */ 336 + rc = readw(io_mem + sub_addr); /* next sub blk */ 337 337 338 338 sub_addr += 2; 339 - re_id = readw (io_mem + sub_addr); /* sub blk id */ 339 + re_id = readw(io_mem + sub_addr); /* sub blk id */ 340 340 341 341 sub_addr += 2; 342 342 if (re_id != 0x5245) 343 343 goto error_nodev; 344 344 345 345 /* signature of re */ 346 - num_entries = readw (io_mem + sub_addr); 346 + num_entries = readw(io_mem + sub_addr); 347 347 348 348 sub_addr += 2; /* offset of RSRC_ENTRIES blk */ 349 - rsrc_list_ptr = alloc_ebda_rsrc_list (); 350 - if (!rsrc_list_ptr ) { 349 + rsrc_list_ptr = alloc_ebda_rsrc_list(); 350 + if (!rsrc_list_ptr) { 351 351 rc = -ENOMEM; 352 352 goto out; 353 353 } ··· 355 355 rsrc_list_ptr->num_entries = num_entries; 356 356 rsrc_list_ptr->phys_addr = sub_addr; 357 357 358 - debug ("info about rsrc descriptor---\n"); 359 - debug ("format: %x\n", format); 360 - debug ("num of rsrc: %x\n", num_entries); 361 - debug ("offset of rsrc data structure entries: %x\n ", sub_addr); 358 + debug("info about rsrc descriptor---\n"); 359 + debug("format: %x\n", format); 360 + debug("num of rsrc: %x\n", num_entries); 361 + debug("offset of rsrc data structure entries: %x\n ", sub_addr); 362 362 363 363 hs_complete = 1; 364 364 } else { 365 365 /* found rio table, blk_id == 0x4752 */ 366 - debug ("now 
-    debug ("now enter io table ---\n");
-    debug ("rio blk id: %x\n", blk_id);
+    debug("now enter io table ---\n");
+    debug("rio blk id: %x\n", blk_id);
 
     rio_table_ptr = kzalloc(sizeof(struct rio_table_hdr), GFP_KERNEL);
     if (!rio_table_ptr) {
         rc = -ENOMEM;
         goto out;
     }
-    rio_table_ptr->ver_num = readb (io_mem + offset);
-    rio_table_ptr->scal_count = readb (io_mem + offset + 1);
-    rio_table_ptr->riodev_count = readb (io_mem + offset + 2);
-    rio_table_ptr->offset = offset +3 ;
+    rio_table_ptr->ver_num = readb(io_mem + offset);
+    rio_table_ptr->scal_count = readb(io_mem + offset + 1);
+    rio_table_ptr->riodev_count = readb(io_mem + offset + 2);
+    rio_table_ptr->offset = offset + 3 ;
 
     debug("info about rio table hdr ---\n");
     debug("ver_num: %x\nscal_count: %x\nriodev_count: %x\noffset of rio table: %x\n ",
···
 
     if (rio_table_ptr) {
         if (rio_complete && rio_table_ptr->ver_num == 3) {
-            rc = ebda_rio_table ();
+            rc = ebda_rio_table();
             if (rc)
                 goto out;
         }
     }
-    rc = ebda_rsrc_controller ();
+    rc = ebda_rsrc_controller();
     if (rc)
         goto out;
 
-    rc = ebda_rsrc_rsrc ();
+    rc = ebda_rsrc_rsrc();
     goto out;
 error_nodev:
     rc = -ENODEV;
 out:
-    iounmap (io_mem);
+    iounmap(io_mem);
     return rc;
 }
 
 /*
  * map info of scalability details and rio details from physical address
  */
-static int __init ebda_rio_table (void)
+static int __init ebda_rio_table(void)
 {
     u16 offset;
     u8 i;
···
     rio_detail_ptr = kzalloc(sizeof(struct rio_detail), GFP_KERNEL);
     if (!rio_detail_ptr)
         return -ENOMEM;
-    rio_detail_ptr->rio_node_id = readb (io_mem + offset);
-    rio_detail_ptr->bbar = readl (io_mem + offset + 1);
-    rio_detail_ptr->rio_type = readb (io_mem + offset + 5);
-    rio_detail_ptr->owner_id = readb (io_mem + offset + 6);
-    rio_detail_ptr->port0_node_connect = readb (io_mem + offset + 7);
-    rio_detail_ptr->port0_port_connect = readb (io_mem + offset + 8);
-    rio_detail_ptr->port1_node_connect = readb (io_mem + offset + 9);
-    rio_detail_ptr->port1_port_connect = readb (io_mem + offset + 10);
-    rio_detail_ptr->first_slot_num = readb (io_mem + offset + 11);
-    rio_detail_ptr->status = readb (io_mem + offset + 12);
-    rio_detail_ptr->wpindex = readb (io_mem + offset + 13);
-    rio_detail_ptr->chassis_num = readb (io_mem + offset + 14);
-    // debug ("rio_node_id: %x\nbbar: %x\nrio_type: %x\nowner_id: %x\nport0_node: %x\nport0_port: %x\nport1_node: %x\nport1_port: %x\nfirst_slot_num: %x\nstatus: %x\n", rio_detail_ptr->rio_node_id, rio_detail_ptr->bbar, rio_detail_ptr->rio_type, rio_detail_ptr->owner_id, rio_detail_ptr->port0_node_connect, rio_detail_ptr->port0_port_connect, rio_detail_ptr->port1_node_connect, rio_detail_ptr->port1_port_connect, rio_detail_ptr->first_slot_num, rio_detail_ptr->status);
+    rio_detail_ptr->rio_node_id = readb(io_mem + offset);
+    rio_detail_ptr->bbar = readl(io_mem + offset + 1);
+    rio_detail_ptr->rio_type = readb(io_mem + offset + 5);
+    rio_detail_ptr->owner_id = readb(io_mem + offset + 6);
+    rio_detail_ptr->port0_node_connect = readb(io_mem + offset + 7);
+    rio_detail_ptr->port0_port_connect = readb(io_mem + offset + 8);
+    rio_detail_ptr->port1_node_connect = readb(io_mem + offset + 9);
+    rio_detail_ptr->port1_port_connect = readb(io_mem + offset + 10);
+    rio_detail_ptr->first_slot_num = readb(io_mem + offset + 11);
+    rio_detail_ptr->status = readb(io_mem + offset + 12);
+    rio_detail_ptr->wpindex = readb(io_mem + offset + 13);
+    rio_detail_ptr->chassis_num = readb(io_mem + offset + 14);
+    // debug("rio_node_id: %x\nbbar: %x\nrio_type: %x\nowner_id: %x\nport0_node: %x\nport0_port: %x\nport1_node: %x\nport1_port: %x\nfirst_slot_num: %x\nstatus: %x\n", rio_detail_ptr->rio_node_id, rio_detail_ptr->bbar, rio_detail_ptr->rio_type, rio_detail_ptr->owner_id, rio_detail_ptr->port0_node_connect, rio_detail_ptr->port0_port_connect, rio_detail_ptr->port1_node_connect, rio_detail_ptr->port1_port_connect, rio_detail_ptr->first_slot_num, rio_detail_ptr->status);
     //create linked list of chassis
     if (rio_detail_ptr->rio_type == 4 || rio_detail_ptr->rio_type == 5)
-        list_add (&rio_detail_ptr->rio_detail_list, &rio_vg_head);
+        list_add(&rio_detail_ptr->rio_detail_list, &rio_vg_head);
     //create linked list of expansion box
     else if (rio_detail_ptr->rio_type == 6 || rio_detail_ptr->rio_type == 7)
-        list_add (&rio_detail_ptr->rio_detail_list, &rio_lo_head);
+        list_add(&rio_detail_ptr->rio_detail_list, &rio_lo_head);
     else
         // not in my concern
-        kfree (rio_detail_ptr);
+        kfree(rio_detail_ptr);
     offset += 15;
     }
-    print_lo_info ();
-    print_vg_info ();
+    print_lo_info();
+    print_vg_info();
     return 0;
 }
 
 /*
  * reorganizing linked list of chassis
  */
-static struct opt_rio *search_opt_vg (u8 chassis_num)
+static struct opt_rio *search_opt_vg(u8 chassis_num)
 {
     struct opt_rio *ptr;
     list_for_each_entry(ptr, &opt_vg_head, opt_rio_list) {
···
     return NULL;
 }
 
-static int __init combine_wpg_for_chassis (void)
+static int __init combine_wpg_for_chassis(void)
 {
     struct opt_rio *opt_rio_ptr = NULL;
     struct rio_detail *rio_detail_ptr = NULL;
 
     list_for_each_entry(rio_detail_ptr, &rio_vg_head, rio_detail_list) {
-        opt_rio_ptr = search_opt_vg (rio_detail_ptr->chassis_num);
+        opt_rio_ptr = search_opt_vg(rio_detail_ptr->chassis_num);
         if (!opt_rio_ptr) {
             opt_rio_ptr = kzalloc(sizeof(struct opt_rio), GFP_KERNEL);
             if (!opt_rio_ptr)
···
             opt_rio_ptr->chassis_num = rio_detail_ptr->chassis_num;
             opt_rio_ptr->first_slot_num = rio_detail_ptr->first_slot_num;
             opt_rio_ptr->middle_num = rio_detail_ptr->first_slot_num;
-            list_add (&opt_rio_ptr->opt_rio_list, &opt_vg_head);
+            list_add(&opt_rio_ptr->opt_rio_list, &opt_vg_head);
         } else {
-            opt_rio_ptr->first_slot_num = min (opt_rio_ptr->first_slot_num, rio_detail_ptr->first_slot_num);
-            opt_rio_ptr->middle_num = max (opt_rio_ptr->middle_num, rio_detail_ptr->first_slot_num);
+            opt_rio_ptr->first_slot_num = min(opt_rio_ptr->first_slot_num, rio_detail_ptr->first_slot_num);
+            opt_rio_ptr->middle_num = max(opt_rio_ptr->middle_num, rio_detail_ptr->first_slot_num);
         }
     }
-    print_opt_vg ();
+    print_opt_vg();
     return 0;
 }
 
 /*
  * reorganizing linked list of expansion box
  */
-static struct opt_rio_lo *search_opt_lo (u8 chassis_num)
+static struct opt_rio_lo *search_opt_lo(u8 chassis_num)
 {
     struct opt_rio_lo *ptr;
     list_for_each_entry(ptr, &opt_lo_head, opt_rio_lo_list) {
···
     return NULL;
 }
 
-static int combine_wpg_for_expansion (void)
+static int combine_wpg_for_expansion(void)
 {
     struct opt_rio_lo *opt_rio_lo_ptr = NULL;
     struct rio_detail *rio_detail_ptr = NULL;
 
     list_for_each_entry(rio_detail_ptr, &rio_lo_head, rio_detail_list) {
-        opt_rio_lo_ptr = search_opt_lo (rio_detail_ptr->chassis_num);
+        opt_rio_lo_ptr = search_opt_lo(rio_detail_ptr->chassis_num);
         if (!opt_rio_lo_ptr) {
             opt_rio_lo_ptr = kzalloc(sizeof(struct opt_rio_lo), GFP_KERNEL);
             if (!opt_rio_lo_ptr)
···
             opt_rio_lo_ptr->middle_num = rio_detail_ptr->first_slot_num;
             opt_rio_lo_ptr->pack_count = 1;
 
-            list_add (&opt_rio_lo_ptr->opt_rio_lo_list, &opt_lo_head);
+            list_add(&opt_rio_lo_ptr->opt_rio_lo_list, &opt_lo_head);
         } else {
-            opt_rio_lo_ptr->first_slot_num = min (opt_rio_lo_ptr->first_slot_num, rio_detail_ptr->first_slot_num);
-            opt_rio_lo_ptr->middle_num = max (opt_rio_lo_ptr->middle_num, rio_detail_ptr->first_slot_num);
+            opt_rio_lo_ptr->first_slot_num = min(opt_rio_lo_ptr->first_slot_num, rio_detail_ptr->first_slot_num);
+            opt_rio_lo_ptr->middle_num = max(opt_rio_lo_ptr->middle_num, rio_detail_ptr->first_slot_num);
             opt_rio_lo_ptr->pack_count = 2;
         }
     }
···
  * Arguments: slot_num, 1st slot number of the chassis we think we are on,
  * var (0 = chassis, 1 = expansion box)
  */
-static int first_slot_num (u8 slot_num, u8 first_slot, u8 var)
+static int first_slot_num(u8 slot_num, u8 first_slot, u8 var)
 {
     struct opt_rio *opt_vg_ptr = NULL;
     struct opt_rio_lo *opt_lo_ptr = NULL;
···
     return rc;
 }
 
-static struct opt_rio_lo *find_rxe_num (u8 slot_num)
+static struct opt_rio_lo *find_rxe_num(u8 slot_num)
 {
     struct opt_rio_lo *opt_lo_ptr;
 
     list_for_each_entry(opt_lo_ptr, &opt_lo_head, opt_rio_lo_list) {
         //check to see if this slot_num belongs to expansion box
-        if ((slot_num >= opt_lo_ptr->first_slot_num) && (!first_slot_num (slot_num, opt_lo_ptr->first_slot_num, 1)))
+        if ((slot_num >= opt_lo_ptr->first_slot_num) && (!first_slot_num(slot_num, opt_lo_ptr->first_slot_num, 1)))
             return opt_lo_ptr;
     }
     return NULL;
 }
 
-static struct opt_rio *find_chassis_num (u8 slot_num)
+static struct opt_rio *find_chassis_num(u8 slot_num)
 {
     struct opt_rio *opt_vg_ptr;
 
     list_for_each_entry(opt_vg_ptr, &opt_vg_head, opt_rio_list) {
         //check to see if this slot_num belongs to chassis
-        if ((slot_num >= opt_vg_ptr->first_slot_num) && (!first_slot_num (slot_num, opt_vg_ptr->first_slot_num, 0)))
+        if ((slot_num >= opt_vg_ptr->first_slot_num) && (!first_slot_num(slot_num, opt_vg_ptr->first_slot_num, 0)))
             return opt_vg_ptr;
     }
     return NULL;
···
 /* This routine will find out how many slots are in the chassis, so that
  * the slot numbers for rxe100 would start from 1, and not from 7, or 6 etc
  */
-static u8 calculate_first_slot (u8 slot_num)
+static u8 calculate_first_slot(u8 slot_num)
 {
     u8 first_slot = 1;
     struct slot *slot_cur;
···
 
 #define SLOT_NAME_SIZE 30
 
-static char *create_file_name (struct slot *slot_cur)
+static char *create_file_name(struct slot *slot_cur)
 {
     struct opt_rio *opt_vg_ptr = NULL;
     struct opt_rio_lo *opt_lo_ptr = NULL;
···
     u8 flag = 0;
 
     if (!slot_cur) {
-        err ("Structure passed is empty\n");
+        err("Structure passed is empty\n");
         return NULL;
     }
 
     slot_num = slot_cur->number;
 
-    memset (str, 0, sizeof(str));
+    memset(str, 0, sizeof(str));
 
     if (rio_table_ptr) {
         if (rio_table_ptr->ver_num == 3) {
-            opt_vg_ptr = find_chassis_num (slot_num);
-            opt_lo_ptr = find_rxe_num (slot_num);
+            opt_vg_ptr = find_chassis_num(slot_num);
+            opt_lo_ptr = find_rxe_num(slot_num);
         }
     }
     if (opt_vg_ptr) {
···
     }
     if (!flag) {
         if (slot_cur->ctrl->ctlr_type == 4) {
-            first_slot = calculate_first_slot (slot_num);
+            first_slot = calculate_first_slot(slot_num);
             which = 1;
         } else {
             which = 0;
···
     hotplug_slot->info->latch_status = SLOT_LATCH(slot->status);
 
     // pci board - present:1 not:0
-    if (SLOT_PRESENT (slot->status))
+    if (SLOT_PRESENT(slot->status))
         hotplug_slot->info->adapter_status = 1;
     else
         hotplug_slot->info->adapter_status = 0;
···
     /* we don't want to actually remove the resources, since free_resources will do just that */
     ibmphp_unconfigure_card(&slot, -1);
 
-    kfree (slot);
+    kfree(slot);
 }
 
 static struct pci_driver ibmphp_driver;
···
  * each hpc from physical address to a list of hot plug controllers based on
  * hpc descriptors.
  */
-static int __init ebda_rsrc_controller (void)
+static int __init ebda_rsrc_controller(void)
 {
     u16 addr, addr_slot, addr_bus;
     u8 ctlr_id, temp, bus_index;
···
     addr = hpc_list_ptr->phys_addr;
     for (ctlr = 0; ctlr < hpc_list_ptr->num_ctlrs; ctlr++) {
         bus_index = 1;
-        ctlr_id = readb (io_mem + addr);
+        ctlr_id = readb(io_mem + addr);
         addr += 1;
-        slot_num = readb (io_mem + addr);
+        slot_num = readb(io_mem + addr);
 
         addr += 1;
         addr_slot = addr;	/* offset of slot structure */
         addr += (slot_num * 4);
 
-        bus_num = readb (io_mem + addr);
+        bus_num = readb(io_mem + addr);
 
         addr += 1;
         addr_bus = addr;	/* offset of bus */
         addr += (bus_num * 9);	/* offset of ctlr_type */
-        temp = readb (io_mem + addr);
+        temp = readb(io_mem + addr);
 
         addr += 1;
         /* init hpc structure */
-        hpc_ptr = alloc_ebda_hpc (slot_num, bus_num);
-        if (!hpc_ptr ) {
+        hpc_ptr = alloc_ebda_hpc(slot_num, bus_num);
+        if (!hpc_ptr) {
             rc = -ENOMEM;
             goto error_no_hpc;
         }
···
         hpc_ptr->ctlr_relative_id = ctlr;
         hpc_ptr->slot_count = slot_num;
         hpc_ptr->bus_count = bus_num;
-        debug ("now enter ctlr data structure ---\n");
-        debug ("ctlr id: %x\n", ctlr_id);
-        debug ("ctlr_relative_id: %x\n", hpc_ptr->ctlr_relative_id);
-        debug ("count of slots controlled by this ctlr: %x\n", slot_num);
-        debug ("count of buses controlled by this ctlr: %x\n", bus_num);
+        debug("now enter ctlr data structure ---\n");
+        debug("ctlr id: %x\n", ctlr_id);
+        debug("ctlr_relative_id: %x\n", hpc_ptr->ctlr_relative_id);
+        debug("count of slots controlled by this ctlr: %x\n", slot_num);
+        debug("count of buses controlled by this ctlr: %x\n", bus_num);
 
         /* init slot structure, fetch slot, bus, cap... */
         slot_ptr = hpc_ptr->slots;
         for (slot = 0; slot < slot_num; slot++) {
-            slot_ptr->slot_num = readb (io_mem + addr_slot);
-            slot_ptr->slot_bus_num = readb (io_mem + addr_slot + slot_num);
-            slot_ptr->ctl_index = readb (io_mem + addr_slot + 2*slot_num);
-            slot_ptr->slot_cap = readb (io_mem + addr_slot + 3*slot_num);
+            slot_ptr->slot_num = readb(io_mem + addr_slot);
+            slot_ptr->slot_bus_num = readb(io_mem + addr_slot + slot_num);
+            slot_ptr->ctl_index = readb(io_mem + addr_slot + 2*slot_num);
+            slot_ptr->slot_cap = readb(io_mem + addr_slot + 3*slot_num);
 
             // create bus_info lined list --- if only one slot per bus: slot_min = slot_max
 
-            bus_info_ptr2 = ibmphp_find_same_bus_num (slot_ptr->slot_bus_num);
+            bus_info_ptr2 = ibmphp_find_same_bus_num(slot_ptr->slot_bus_num);
             if (!bus_info_ptr2) {
                 bus_info_ptr1 = kzalloc(sizeof(struct bus_info), GFP_KERNEL);
                 if (!bus_info_ptr1) {
···
 
                 bus_info_ptr1->controller_id = hpc_ptr->ctlr_id;
 
-                list_add_tail (&bus_info_ptr1->bus_info_list, &bus_info_head);
+                list_add_tail(&bus_info_ptr1->bus_info_list, &bus_info_head);
 
             } else {
-                bus_info_ptr2->slot_min = min (bus_info_ptr2->slot_min, slot_ptr->slot_num);
-                bus_info_ptr2->slot_max = max (bus_info_ptr2->slot_max, slot_ptr->slot_num);
+                bus_info_ptr2->slot_min = min(bus_info_ptr2->slot_min, slot_ptr->slot_num);
+                bus_info_ptr2->slot_max = max(bus_info_ptr2->slot_max, slot_ptr->slot_num);
                 bus_info_ptr2->slot_count += 1;
 
             }
···
         /* init bus structure */
         bus_ptr = hpc_ptr->buses;
         for (bus = 0; bus < bus_num; bus++) {
-            bus_ptr->bus_num = readb (io_mem + addr_bus + bus);
-            bus_ptr->slots_at_33_conv = readb (io_mem + addr_bus + bus_num + 8 * bus);
-            bus_ptr->slots_at_66_conv = readb (io_mem + addr_bus + bus_num + 8 * bus + 1);
+            bus_ptr->bus_num = readb(io_mem + addr_bus + bus);
+            bus_ptr->slots_at_33_conv = readb(io_mem + addr_bus + bus_num + 8 * bus);
+            bus_ptr->slots_at_66_conv = readb(io_mem + addr_bus + bus_num + 8 * bus + 1);
 
-            bus_ptr->slots_at_66_pcix = readb (io_mem + addr_bus + bus_num + 8 * bus + 2);
+            bus_ptr->slots_at_66_pcix = readb(io_mem + addr_bus + bus_num + 8 * bus + 2);
 
-            bus_ptr->slots_at_100_pcix = readb (io_mem + addr_bus + bus_num + 8 * bus + 3);
+            bus_ptr->slots_at_100_pcix = readb(io_mem + addr_bus + bus_num + 8 * bus + 3);
 
-            bus_ptr->slots_at_133_pcix = readb (io_mem + addr_bus + bus_num + 8 * bus + 4);
+            bus_ptr->slots_at_133_pcix = readb(io_mem + addr_bus + bus_num + 8 * bus + 4);
 
-            bus_info_ptr2 = ibmphp_find_same_bus_num (bus_ptr->bus_num);
+            bus_info_ptr2 = ibmphp_find_same_bus_num(bus_ptr->bus_num);
             if (bus_info_ptr2) {
                 bus_info_ptr2->slots_at_33_conv = bus_ptr->slots_at_33_conv;
                 bus_info_ptr2->slots_at_66_conv = bus_ptr->slots_at_66_conv;
···
 
         switch (hpc_ptr->ctlr_type) {
         case 1:
-            hpc_ptr->u.pci_ctlr.bus = readb (io_mem + addr);
-            hpc_ptr->u.pci_ctlr.dev_fun = readb (io_mem + addr + 1);
-            hpc_ptr->irq = readb (io_mem + addr + 2);
+            hpc_ptr->u.pci_ctlr.bus = readb(io_mem + addr);
+            hpc_ptr->u.pci_ctlr.dev_fun = readb(io_mem + addr + 1);
+            hpc_ptr->irq = readb(io_mem + addr + 2);
             addr += 3;
-            debug ("ctrl bus = %x, ctlr devfun = %x, irq = %x\n",
+            debug("ctrl bus = %x, ctlr devfun = %x, irq = %x\n",
                 hpc_ptr->u.pci_ctlr.bus,
                 hpc_ptr->u.pci_ctlr.dev_fun, hpc_ptr->irq);
             break;
 
         case 0:
-            hpc_ptr->u.isa_ctlr.io_start = readw (io_mem + addr);
-            hpc_ptr->u.isa_ctlr.io_end = readw (io_mem + addr + 2);
-            if (!request_region (hpc_ptr->u.isa_ctlr.io_start,
+            hpc_ptr->u.isa_ctlr.io_start = readw(io_mem + addr);
+            hpc_ptr->u.isa_ctlr.io_end = readw(io_mem + addr + 2);
+            if (!request_region(hpc_ptr->u.isa_ctlr.io_start,
                     (hpc_ptr->u.isa_ctlr.io_end - hpc_ptr->u.isa_ctlr.io_start + 1),
                     "ibmphp")) {
                 rc = -ENODEV;
                 goto error_no_hp_slot;
             }
-            hpc_ptr->irq = readb (io_mem + addr + 4);
+            hpc_ptr->irq = readb(io_mem + addr + 4);
             addr += 5;
             break;
 
         case 2:
         case 4:
-            hpc_ptr->u.wpeg_ctlr.wpegbbar = readl (io_mem + addr);
-            hpc_ptr->u.wpeg_ctlr.i2c_addr = readb (io_mem + addr + 4);
-            hpc_ptr->irq = readb (io_mem + addr + 5);
+            hpc_ptr->u.wpeg_ctlr.wpegbbar = readl(io_mem + addr);
+            hpc_ptr->u.wpeg_ctlr.i2c_addr = readb(io_mem + addr + 4);
+            hpc_ptr->irq = readb(io_mem + addr + 5);
             addr += 6;
             break;
         default:
···
         }
 
         //reorganize chassis' linked list
-        combine_wpg_for_chassis ();
-        combine_wpg_for_expansion ();
+        combine_wpg_for_chassis();
+        combine_wpg_for_expansion();
         hpc_ptr->revision = 0xff;
         hpc_ptr->options = 0xff;
         hpc_ptr->starting_slot_num = hpc_ptr->slots[0].slot_num;
···
 
             tmp_slot->bus = hpc_ptr->slots[index].slot_bus_num;
 
-            bus_info_ptr1 = ibmphp_find_same_bus_num (hpc_ptr->slots[index].slot_bus_num);
+            bus_info_ptr1 = ibmphp_find_same_bus_num(hpc_ptr->slots[index].slot_bus_num);
             if (!bus_info_ptr1) {
                 kfree(tmp_slot);
                 rc = -ENODEV;
···
             if (rc)
                 goto error;
 
-            rc = ibmphp_init_devno ((struct slot **) &hp_slot_ptr->private);
+            rc = ibmphp_init_devno((struct slot **) &hp_slot_ptr->private);
             if (rc)
                 goto error;
             hp_slot_ptr->ops = &ibmphp_hotplug_slot_ops;
 
             // end of registering ibm slot with hotplug core
 
-            list_add (& ((struct slot *)(hp_slot_ptr->private))->ibm_slot_list, &ibmphp_slot_head);
+            list_add(&((struct slot *)(hp_slot_ptr->private))->ibm_slot_list, &ibmphp_slot_head);
         }
 
-        print_bus_info ();
-        list_add (&hpc_ptr->ebda_hpc_list, &ebda_hpc_head );
+        print_bus_info();
+        list_add(&hpc_ptr->ebda_hpc_list, &ebda_hpc_head);
 
     }			/* each hpc */
 
···
                 pci_find_bus(0, tmp_slot->bus), tmp_slot->device, name);
     }
 
-    print_ebda_hpc ();
-    print_ibm_slot ();
+    print_ebda_hpc();
+    print_ibm_slot();
     return 0;
 
 error:
-    kfree (hp_slot_ptr->private);
+    kfree(hp_slot_ptr->private);
 error_no_slot:
-    kfree (hp_slot_ptr->info);
+    kfree(hp_slot_ptr->info);
 error_no_hp_info:
-    kfree (hp_slot_ptr);
+    kfree(hp_slot_ptr);
 error_no_hp_slot:
-    free_ebda_hpc (hpc_ptr);
+    free_ebda_hpc(hpc_ptr);
 error_no_hpc:
-    iounmap (io_mem);
+    iounmap(io_mem);
     return rc;
 }
 
···
  * map info (bus, devfun, start addr, end addr..) of i/o, memory,
  * pfm from the physical addr to a list of resource.
  */
-static int __init ebda_rsrc_rsrc (void)
+static int __init ebda_rsrc_rsrc(void)
 {
     u16 addr;
     short rsrc;
···
     struct ebda_pci_rsrc *rsrc_ptr;
 
     addr = rsrc_list_ptr->phys_addr;
-    debug ("now entering rsrc land\n");
-    debug ("offset of rsrc: %x\n", rsrc_list_ptr->phys_addr);
+    debug("now entering rsrc land\n");
+    debug("offset of rsrc: %x\n", rsrc_list_ptr->phys_addr);
 
     for (rsrc = 0; rsrc < rsrc_list_ptr->num_entries; rsrc++) {
-        type = readb (io_mem + addr);
+        type = readb(io_mem + addr);
 
         addr += 1;
         rsrc_type = type & EBDA_RSRC_TYPE_MASK;
 
         if (rsrc_type == EBDA_IO_RSRC_TYPE) {
-            rsrc_ptr = alloc_ebda_pci_rsrc ();
+            rsrc_ptr = alloc_ebda_pci_rsrc();
             if (!rsrc_ptr) {
-                iounmap (io_mem);
+                iounmap(io_mem);
                 return -ENOMEM;
             }
             rsrc_ptr->rsrc_type = type;
 
-            rsrc_ptr->bus_num = readb (io_mem + addr);
-            rsrc_ptr->dev_fun = readb (io_mem + addr + 1);
-            rsrc_ptr->start_addr = readw (io_mem + addr + 2);
-            rsrc_ptr->end_addr = readw (io_mem + addr + 4);
+            rsrc_ptr->bus_num = readb(io_mem + addr);
+            rsrc_ptr->dev_fun = readb(io_mem + addr + 1);
+            rsrc_ptr->start_addr = readw(io_mem + addr + 2);
+            rsrc_ptr->end_addr = readw(io_mem + addr + 4);
             addr += 6;
 
-            debug ("rsrc from io type ----\n");
-            debug ("rsrc type: %x bus#: %x dev_func: %x start addr: %x end addr: %x\n",
+            debug("rsrc from io type ----\n");
+            debug("rsrc type: %x bus#: %x dev_func: %x start addr: %x end addr: %x\n",
                 rsrc_ptr->rsrc_type, rsrc_ptr->bus_num, rsrc_ptr->dev_fun, rsrc_ptr->start_addr, rsrc_ptr->end_addr);
 
-            list_add (&rsrc_ptr->ebda_pci_rsrc_list, &ibmphp_ebda_pci_rsrc_head);
+            list_add(&rsrc_ptr->ebda_pci_rsrc_list, &ibmphp_ebda_pci_rsrc_head);
         }
 
         if (rsrc_type == EBDA_MEM_RSRC_TYPE || rsrc_type == EBDA_PFM_RSRC_TYPE) {
-            rsrc_ptr = alloc_ebda_pci_rsrc ();
-            if (!rsrc_ptr ) {
-                iounmap (io_mem);
+            rsrc_ptr = alloc_ebda_pci_rsrc();
+            if (!rsrc_ptr) {
+                iounmap(io_mem);
                 return -ENOMEM;
             }
             rsrc_ptr->rsrc_type = type;
 
-            rsrc_ptr->bus_num = readb (io_mem + addr);
-            rsrc_ptr->dev_fun = readb (io_mem + addr + 1);
-            rsrc_ptr->start_addr = readl (io_mem + addr + 2);
-            rsrc_ptr->end_addr = readl (io_mem + addr + 6);
+            rsrc_ptr->bus_num = readb(io_mem + addr);
+            rsrc_ptr->dev_fun = readb(io_mem + addr + 1);
+            rsrc_ptr->start_addr = readl(io_mem + addr + 2);
+            rsrc_ptr->end_addr = readl(io_mem + addr + 6);
             addr += 10;
 
-            debug ("rsrc from mem or pfm ---\n");
-            debug ("rsrc type: %x bus#: %x dev_func: %x start addr: %x end addr: %x\n",
+            debug("rsrc from mem or pfm ---\n");
+            debug("rsrc type: %x bus#: %x dev_func: %x start addr: %x end addr: %x\n",
                 rsrc_ptr->rsrc_type, rsrc_ptr->bus_num, rsrc_ptr->dev_fun, rsrc_ptr->start_addr, rsrc_ptr->end_addr);
 
-            list_add (&rsrc_ptr->ebda_pci_rsrc_list, &ibmphp_ebda_pci_rsrc_head);
+            list_add(&rsrc_ptr->ebda_pci_rsrc_list, &ibmphp_ebda_pci_rsrc_head);
         }
     }
-    kfree (rsrc_list_ptr);
+    kfree(rsrc_list_ptr);
     rsrc_list_ptr = NULL;
-    print_ebda_pci_rsrc ();
+    print_ebda_pci_rsrc();
     return 0;
 }
 
-u16 ibmphp_get_total_controllers (void)
+u16 ibmphp_get_total_controllers(void)
 {
     return hpc_list_ptr->num_ctlrs;
 }
 
-struct slot *ibmphp_get_slot_from_physical_num (u8 physical_num)
+struct slot *ibmphp_get_slot_from_physical_num(u8 physical_num)
 {
     struct slot *slot;
 
···
  * - the total number of the slots based on each bus
  * (if only one slot per bus slot_min = slot_max )
  */
-struct bus_info *ibmphp_find_same_bus_num (u32 num)
+struct bus_info *ibmphp_find_same_bus_num(u32 num)
 {
     struct bus_info *ptr;
 
···
 /* Finding relative bus number, in order to map corresponding
  * bus register
  */
-int ibmphp_get_bus_index (u8 num)
+int ibmphp_get_bus_index(u8 num)
 {
     struct bus_info *ptr;
 
···
     return -ENODEV;
 }
 
-void ibmphp_free_bus_info_queue (void)
+void ibmphp_free_bus_info_queue(void)
 {
-    struct bus_info *bus_info;
-    struct list_head *list;
-    struct list_head *next;
+    struct bus_info *bus_info, *next;
 
-    list_for_each_safe (list, next, &bus_info_head ) {
-        bus_info = list_entry (list, struct bus_info, bus_info_list);
+    list_for_each_entry_safe(bus_info, next, &bus_info_head,
+                 bus_info_list) {
         kfree (bus_info);
     }
 }
 
-void ibmphp_free_ebda_hpc_queue (void)
+void ibmphp_free_ebda_hpc_queue(void)
 {
-    struct controller *controller = NULL;
-    struct list_head *list;
-    struct list_head *next;
+    struct controller *controller = NULL, *next;
     int pci_flag = 0;
 
-    list_for_each_safe (list, next, &ebda_hpc_head) {
-        controller = list_entry (list, struct controller, ebda_hpc_list);
+    list_for_each_entry_safe(controller, next, &ebda_hpc_head,
+                 ebda_hpc_list) {
         if (controller->ctlr_type == 0)
-            release_region (controller->u.isa_ctlr.io_start, (controller->u.isa_ctlr.io_end - controller->u.isa_ctlr.io_start + 1));
+            release_region(controller->u.isa_ctlr.io_start, (controller->u.isa_ctlr.io_end - controller->u.isa_ctlr.io_start + 1));
         else if ((controller->ctlr_type == 1) && (!pci_flag)) {
             ++pci_flag;
-            pci_unregister_driver (&ibmphp_driver);
+            pci_unregister_driver(&ibmphp_driver);
         }
-        free_ebda_hpc (controller);
+        free_ebda_hpc(controller);
     }
 }
 
-void ibmphp_free_ebda_pci_rsrc_queue (void)
+void ibmphp_free_ebda_pci_rsrc_queue(void)
 {
-    struct ebda_pci_rsrc *resource;
-    struct list_head *list;
-    struct list_head *next;
+    struct ebda_pci_rsrc *resource, *next;
 
-    list_for_each_safe (list, next, &ibmphp_ebda_pci_rsrc_head) {
-        resource = list_entry (list, struct ebda_pci_rsrc, ebda_pci_rsrc_list);
+    list_for_each_entry_safe(resource, next, &ibmphp_ebda_pci_rsrc_head,
+                 ebda_pci_rsrc_list) {
         kfree (resource);
         resource = NULL;
     }
···
 
 MODULE_DEVICE_TABLE(pci, id_table);
 
-static int ibmphp_probe (struct pci_dev *, const struct pci_device_id *);
+static int ibmphp_probe(struct pci_dev *, const struct pci_device_id *);
 static struct pci_driver ibmphp_driver = {
     .name = "ibmphp",
     .id_table = id_table,
     .probe = ibmphp_probe,
 };
 
-int ibmphp_register_pci (void)
+int ibmphp_register_pci(void)
 {
     struct controller *ctrl;
     int rc = 0;
···
     }
     return rc;
 }
-static int ibmphp_probe (struct pci_dev *dev, const struct pci_device_id *ids)
+static int ibmphp_probe(struct pci_dev *dev, const struct pci_device_id *ids)
 {
     struct controller *ctrl;
 
-    debug ("inside ibmphp_probe\n");
+    debug("inside ibmphp_probe\n");
 
     list_for_each_entry(ctrl, &ebda_hpc_head, ebda_hpc_list) {
         if (ctrl->ctlr_type == 1) {
             if ((dev->devfn == ctrl->u.pci_ctlr.dev_fun) && (dev->bus->number == ctrl->u.pci_ctlr.bus)) {
                 ctrl->ctrl_dev = dev;
-                debug ("found device!!!\n");
-                debug ("dev->device = %x, dev->subsystem_device = %x\n", dev->device, dev->subsystem_device);
+                debug("found device!!!\n");
+                debug("dev->device = %x, dev->subsystem_device = %x\n", dev->device, dev->subsystem_device);
                 return 0;
             }
         }
drivers/pci/hotplug/ibmphp_hpc.c: +192 -194
···
 #include "ibmphp.h"
 
 static int to_debug = 0;
-#define debug_polling(fmt, arg...)	do { if (to_debug) debug (fmt, arg); } while (0)
+#define debug_polling(fmt, arg...)	do { if (to_debug) debug(fmt, arg); } while (0)
 
 //----------------------------------------------------------------------------
 // timeout values
···
 //----------------------------------------------------------------------------
 // local function prototypes
 //----------------------------------------------------------------------------
-static u8 i2c_ctrl_read (struct controller *, void __iomem *, u8);
-static u8 i2c_ctrl_write (struct controller *, void __iomem *, u8, u8);
-static u8 hpc_writecmdtoindex (u8, u8);
-static u8 hpc_readcmdtoindex (u8, u8);
-static void get_hpc_access (void);
-static void free_hpc_access (void);
+static u8 i2c_ctrl_read(struct controller *, void __iomem *, u8);
+static u8 i2c_ctrl_write(struct controller *, void __iomem *, u8, u8);
+static u8 hpc_writecmdtoindex(u8, u8);
+static u8 hpc_readcmdtoindex(u8, u8);
+static void get_hpc_access(void);
+static void free_hpc_access(void);
 static int poll_hpc(void *data);
-static int process_changeinstatus (struct slot *, struct slot *);
-static int process_changeinlatch (u8, u8, struct controller *);
-static int hpc_wait_ctlr_notworking (int, struct controller *, void __iomem *, u8 *);
+static int process_changeinstatus(struct slot *, struct slot *);
+static int process_changeinlatch(u8, u8, struct controller *);
+static int hpc_wait_ctlr_notworking(int, struct controller *, void __iomem *, u8 *);
 //----------------------------------------------------------------------------
 
 
···
 *
 * Action: initialize semaphores and variables
 *---------------------------------------------------------------------*/
-void __init ibmphp_hpc_initvars (void)
+void __init ibmphp_hpc_initvars(void)
 {
-    debug ("%s - Entry\n", __func__);
+    debug("%s - Entry\n", __func__);
 
     mutex_init(&sem_hpcaccess);
     sema_init(&semOperations, 1);
     sema_init(&sem_exit, 0);
     to_debug = 0;
 
-    debug ("%s - Exit\n", __func__);
+    debug("%s - Exit\n", __func__);
 }
 
 /*----------------------------------------------------------------------
···
 * Action: read from HPC over I2C
 *
 *---------------------------------------------------------------------*/
-static u8 i2c_ctrl_read (struct controller *ctlr_ptr, void __iomem *WPGBbar, u8 index)
+static u8 i2c_ctrl_read(struct controller *ctlr_ptr, void __iomem *WPGBbar, u8 index)
 {
     u8 status;
     int i;
···
     unsigned long ultemp;
     unsigned long data;	// actual data HILO format
 
-    debug_polling ("%s - Entry WPGBbar[%p] index[%x] \n", __func__, WPGBbar, index);
+    debug_polling("%s - Entry WPGBbar[%p] index[%x] \n", __func__, WPGBbar, index);
 
     //--------------------------------------------------------------------
     // READ - step 1
···
         ultemp = ultemp << 8;
         data |= ultemp;
     } else {
-        err ("this controller type is not supported \n");
+        err("this controller type is not supported \n");
         return HPC_ERROR;
     }
 
-    wpg_data = swab32 (data);	// swap data before writing
+    wpg_data = swab32(data);	// swap data before writing
     wpg_addr = WPGBbar + WPG_I2CMOSUP_OFFSET;
-    writel (wpg_data, wpg_addr);
+    writel(wpg_data, wpg_addr);
 
     //--------------------------------------------------------------------
     // READ - step 2 : clear the message buffer
     data = 0x00000000;
-    wpg_data = swab32 (data);
+    wpg_data = swab32(data);
     wpg_addr = WPGBbar + WPG_I2CMBUFL_OFFSET;
-    writel (wpg_data, wpg_addr);
+    writel(wpg_data, wpg_addr);
 
     //--------------------------------------------------------------------
     // READ - step 3 : issue start operation, I2C master control bit 30:ON
     //	2020 : [20] OR operation at [20] offset 0x20
     data = WPG_I2CMCNTL_STARTOP_MASK;
-    wpg_data = swab32 (data);
+    wpg_data = swab32(data);
     wpg_addr = WPGBbar + WPG_I2CMCNTL_OFFSET + WPG_I2C_OR;
-    writel (wpg_data, wpg_addr);
+    writel(wpg_data, wpg_addr);
 
     //--------------------------------------------------------------------
     // READ - step 4 : wait until start operation bit clears
···
     while (i) {
         msleep(10);
         wpg_addr = WPGBbar + WPG_I2CMCNTL_OFFSET;
-        wpg_data = readl (wpg_addr);
-        data = swab32 (wpg_data);
+        wpg_data = readl(wpg_addr);
+        data = swab32(wpg_data);
         if (!(data & WPG_I2CMCNTL_STARTOP_MASK))
             break;
         i--;
     }
     if (i == 0) {
-        debug ("%s - Error : WPG timeout\n", __func__);
+        debug("%s - Error : WPG timeout\n", __func__);
         return HPC_ERROR;
     }
     //--------------------------------------------------------------------
···
     while (i) {
         msleep(10);
         wpg_addr = WPGBbar + WPG_I2CSTAT_OFFSET;
-        wpg_data = readl (wpg_addr);
-        data = swab32 (wpg_data);
-        if (HPC_I2CSTATUS_CHECK (data))
+        wpg_data = readl(wpg_addr);
+        data = swab32(wpg_data);
+        if (HPC_I2CSTATUS_CHECK(data))
             break;
         i--;
     }
     if (i == 0) {
-        debug ("ctrl_read - Exit Error:I2C timeout\n");
+        debug("ctrl_read - Exit Error:I2C timeout\n");
         return HPC_ERROR;
     }
 
     //--------------------------------------------------------------------
     // READ - step 6 : get DATA
     wpg_addr = WPGBbar + WPG_I2CMBUFL_OFFSET;
-    wpg_data = readl (wpg_addr);
-    data = swab32 (wpg_data);
+    wpg_data = readl(wpg_addr);
+    data = swab32(wpg_data);
 
     status = (u8) data;
 
-    debug_polling ("%s - Exit index[%x] status[%x]\n", __func__, index, status);
+    debug_polling("%s - Exit index[%x] status[%x]\n", __func__, index, status);
 
     return (status);
 }
···
 *
 * Return 0 or error codes
 *---------------------------------------------------------------------*/
-static u8 i2c_ctrl_write (struct controller *ctlr_ptr, void __iomem *WPGBbar, u8 index, u8 cmd)
+static u8 i2c_ctrl_write(struct controller *ctlr_ptr, void __iomem *WPGBbar, u8 index, u8 cmd)
 {
     u8 rc;
     void __iomem *wpg_addr;	// base addr + offset
···
     unsigned long data;	// actual data HILO format
     int i;
 
-    debug_polling ("%s - Entry WPGBbar[%p] index[%x] cmd[%x]\n", __func__, WPGBbar, index, cmd);
+    debug_polling("%s - Entry WPGBbar[%p] index[%x] cmd[%x]\n", __func__, WPGBbar, index, cmd);
 
     rc = 0;
     //--------------------------------------------------------------------
···
         ultemp = ultemp << 8;
         data |= ultemp;
     } else {
-        err ("this controller type is not supported \n");
+        err("this controller type is not supported \n");
         return HPC_ERROR;
     }
 
-    wpg_data = swab32 (data);	// swap data before writing
+    wpg_data = swab32(data);	// swap data before writing
     wpg_addr = WPGBbar + WPG_I2CMOSUP_OFFSET;
-    writel (wpg_data, wpg_addr);
+    writel(wpg_data, wpg_addr);
 
     //--------------------------------------------------------------------
     // WRITE - step 2 : clear the message buffer
     data = 0x00000000 | (unsigned long)cmd;
-    wpg_data = swab32 (data);
+    wpg_data = swab32(data);
     wpg_addr = WPGBbar + WPG_I2CMBUFL_OFFSET;
-    writel (wpg_data, wpg_addr);
+    writel(wpg_data, wpg_addr);
 
     //--------------------------------------------------------------------
308 // WRITE - step 3 : issue start operation,I2C master control bit 30:ON 309 309 // 2020 : [20] OR operation at [20] offset 0x20 310 310 data = WPG_I2CMCNTL_STARTOP_MASK; 311 - wpg_data = swab32 (data); 311 + wpg_data = swab32(data); 312 312 wpg_addr = WPGBbar + WPG_I2CMCNTL_OFFSET + WPG_I2C_OR; 313 - writel (wpg_data, wpg_addr); 313 + writel(wpg_data, wpg_addr); 314 314 315 315 //-------------------------------------------------------------------- 316 316 // WRITE - step 4 : wait until start operation bit clears ··· 318 318 while (i) { 319 319 msleep(10); 320 320 wpg_addr = WPGBbar + WPG_I2CMCNTL_OFFSET; 321 - wpg_data = readl (wpg_addr); 322 - data = swab32 (wpg_data); 321 + wpg_data = readl(wpg_addr); 322 + data = swab32(wpg_data); 323 323 if (!(data & WPG_I2CMCNTL_STARTOP_MASK)) 324 324 break; 325 325 i--; 326 326 } 327 327 if (i == 0) { 328 - debug ("%s - Exit Error:WPG timeout\n", __func__); 328 + debug("%s - Exit Error:WPG timeout\n", __func__); 329 329 rc = HPC_ERROR; 330 330 } 331 331 ··· 335 335 while (i) { 336 336 msleep(10); 337 337 wpg_addr = WPGBbar + WPG_I2CSTAT_OFFSET; 338 - wpg_data = readl (wpg_addr); 339 - data = swab32 (wpg_data); 340 - if (HPC_I2CSTATUS_CHECK (data)) 338 + wpg_data = readl(wpg_addr); 339 + data = swab32(wpg_data); 340 + if (HPC_I2CSTATUS_CHECK(data)) 341 341 break; 342 342 i--; 343 343 } 344 344 if (i == 0) { 345 - debug ("ctrl_read - Error : I2C timeout\n"); 345 + debug("ctrl_read - Error : I2C timeout\n"); 346 346 rc = HPC_ERROR; 347 347 } 348 348 349 - debug_polling ("%s Exit rc[%x]\n", __func__, rc); 349 + debug_polling("%s Exit rc[%x]\n", __func__, rc); 350 350 return (rc); 351 351 } 352 352 353 353 //------------------------------------------------------------ 354 354 // Read from ISA type HPC 355 355 //------------------------------------------------------------ 356 - static u8 isa_ctrl_read (struct controller *ctlr_ptr, u8 offset) 356 + static u8 isa_ctrl_read(struct controller *ctlr_ptr, u8 offset) 357 357 { 358 358 
u16 start_address; 359 359 u16 end_address; ··· 361 361 362 362 start_address = ctlr_ptr->u.isa_ctlr.io_start; 363 363 end_address = ctlr_ptr->u.isa_ctlr.io_end; 364 - data = inb (start_address + offset); 364 + data = inb(start_address + offset); 365 365 return data; 366 366 } 367 367 368 368 //-------------------------------------------------------------- 369 369 // Write to ISA type HPC 370 370 //-------------------------------------------------------------- 371 - static void isa_ctrl_write (struct controller *ctlr_ptr, u8 offset, u8 data) 371 + static void isa_ctrl_write(struct controller *ctlr_ptr, u8 offset, u8 data) 372 372 { 373 373 u16 start_address; 374 374 u16 port_address; 375 375 376 376 start_address = ctlr_ptr->u.isa_ctlr.io_start; 377 377 port_address = start_address + (u16) offset; 378 - outb (data, port_address); 378 + outb(data, port_address); 379 379 } 380 380 381 - static u8 pci_ctrl_read (struct controller *ctrl, u8 offset) 381 + static u8 pci_ctrl_read(struct controller *ctrl, u8 offset) 382 382 { 383 383 u8 data = 0x00; 384 - debug ("inside pci_ctrl_read\n"); 384 + debug("inside pci_ctrl_read\n"); 385 385 if (ctrl->ctrl_dev) 386 - pci_read_config_byte (ctrl->ctrl_dev, HPC_PCI_OFFSET + offset, &data); 386 + pci_read_config_byte(ctrl->ctrl_dev, HPC_PCI_OFFSET + offset, &data); 387 387 return data; 388 388 } 389 389 390 - static u8 pci_ctrl_write (struct controller *ctrl, u8 offset, u8 data) 390 + static u8 pci_ctrl_write(struct controller *ctrl, u8 offset, u8 data) 391 391 { 392 392 u8 rc = -ENODEV; 393 - debug ("inside pci_ctrl_write\n"); 393 + debug("inside pci_ctrl_write\n"); 394 394 if (ctrl->ctrl_dev) { 395 - pci_write_config_byte (ctrl->ctrl_dev, HPC_PCI_OFFSET + offset, data); 395 + pci_write_config_byte(ctrl->ctrl_dev, HPC_PCI_OFFSET + offset, data); 396 396 rc = 0; 397 397 } 398 398 return rc; 399 399 } 400 400 401 - static u8 ctrl_read (struct controller *ctlr, void __iomem *base, u8 offset) 401 + static u8 ctrl_read(struct controller 
*ctlr, void __iomem *base, u8 offset) 402 402 { 403 403 u8 rc; 404 404 switch (ctlr->ctlr_type) { 405 405 case 0: 406 - rc = isa_ctrl_read (ctlr, offset); 406 + rc = isa_ctrl_read(ctlr, offset); 407 407 break; 408 408 case 1: 409 - rc = pci_ctrl_read (ctlr, offset); 409 + rc = pci_ctrl_read(ctlr, offset); 410 410 break; 411 411 case 2: 412 412 case 4: 413 - rc = i2c_ctrl_read (ctlr, base, offset); 413 + rc = i2c_ctrl_read(ctlr, base, offset); 414 414 break; 415 415 default: 416 416 return -ENODEV; ··· 418 418 return rc; 419 419 } 420 420 421 - static u8 ctrl_write (struct controller *ctlr, void __iomem *base, u8 offset, u8 data) 421 + static u8 ctrl_write(struct controller *ctlr, void __iomem *base, u8 offset, u8 data) 422 422 { 423 423 u8 rc = 0; 424 424 switch (ctlr->ctlr_type) { ··· 426 426 isa_ctrl_write(ctlr, offset, data); 427 427 break; 428 428 case 1: 429 - rc = pci_ctrl_write (ctlr, offset, data); 429 + rc = pci_ctrl_write(ctlr, offset, data); 430 430 break; 431 431 case 2: 432 432 case 4: ··· 444 444 * 445 445 * Return index, HPC_ERROR 446 446 *---------------------------------------------------------------------*/ 447 - static u8 hpc_writecmdtoindex (u8 cmd, u8 index) 447 + static u8 hpc_writecmdtoindex(u8 cmd, u8 index) 448 448 { 449 449 u8 rc; 450 450 ··· 476 476 break; 477 477 478 478 default: 479 - err ("hpc_writecmdtoindex - Error invalid cmd[%x]\n", cmd); 479 + err("hpc_writecmdtoindex - Error invalid cmd[%x]\n", cmd); 480 480 rc = HPC_ERROR; 481 481 } 482 482 ··· 490 490 * 491 491 * Return index, HPC_ERROR 492 492 *---------------------------------------------------------------------*/ 493 - static u8 hpc_readcmdtoindex (u8 cmd, u8 index) 493 + static u8 hpc_readcmdtoindex(u8 cmd, u8 index) 494 494 { 495 495 u8 rc; 496 496 ··· 533 533 * 534 534 * Return 0 or error codes 535 535 *---------------------------------------------------------------------*/ 536 - int ibmphp_hpc_readslot (struct slot *pslot, u8 cmd, u8 *pstatus) 536 + int 
ibmphp_hpc_readslot(struct slot *pslot, u8 cmd, u8 *pstatus) 537 537 { 538 538 void __iomem *wpg_bbar = NULL; 539 539 struct controller *ctlr_ptr; 540 - struct list_head *pslotlist; 541 540 u8 index, status; 542 541 int rc = 0; 543 542 int busindex; 544 543 545 - debug_polling ("%s - Entry pslot[%p] cmd[%x] pstatus[%p]\n", __func__, pslot, cmd, pstatus); 544 + debug_polling("%s - Entry pslot[%p] cmd[%x] pstatus[%p]\n", __func__, pslot, cmd, pstatus); 546 545 547 546 if ((pslot == NULL) 548 547 || ((pstatus == NULL) && (cmd != READ_ALLSTAT) && (cmd != READ_BUSSTATUS))) { 549 548 rc = -EINVAL; 550 - err ("%s - Error invalid pointer, rc[%d]\n", __func__, rc); 549 + err("%s - Error invalid pointer, rc[%d]\n", __func__, rc); 551 550 return rc; 552 551 } 553 552 554 553 if (cmd == READ_BUSSTATUS) { 555 - busindex = ibmphp_get_bus_index (pslot->bus); 554 + busindex = ibmphp_get_bus_index(pslot->bus); 556 555 if (busindex < 0) { 557 556 rc = -EINVAL; 558 - err ("%s - Exit Error:invalid bus, rc[%d]\n", __func__, rc); 557 + err("%s - Exit Error:invalid bus, rc[%d]\n", __func__, rc); 559 558 return rc; 560 559 } else 561 560 index = (u8) busindex; 562 561 } else 563 562 index = pslot->ctlr_index; 564 563 565 - index = hpc_readcmdtoindex (cmd, index); 564 + index = hpc_readcmdtoindex(cmd, index); 566 565 567 566 if (index == HPC_ERROR) { 568 567 rc = -EINVAL; 569 - err ("%s - Exit Error:invalid index, rc[%d]\n", __func__, rc); 568 + err("%s - Exit Error:invalid index, rc[%d]\n", __func__, rc); 570 569 return rc; 571 570 } 572 571 573 572 ctlr_ptr = pslot->ctrl; 574 573 575 - get_hpc_access (); 574 + get_hpc_access(); 576 575 577 576 //-------------------------------------------------------------------- 578 577 // map physical address to logical address 579 578 //-------------------------------------------------------------------- 580 579 if ((ctlr_ptr->ctlr_type == 2) || (ctlr_ptr->ctlr_type == 4)) 581 - wpg_bbar = ioremap (ctlr_ptr->u.wpeg_ctlr.wpegbbar, 
WPG_I2C_IOREMAP_SIZE); 580 + wpg_bbar = ioremap(ctlr_ptr->u.wpeg_ctlr.wpegbbar, WPG_I2C_IOREMAP_SIZE); 582 581 583 582 //-------------------------------------------------------------------- 584 583 // check controller status before reading 585 584 //-------------------------------------------------------------------- 586 - rc = hpc_wait_ctlr_notworking (HPC_CTLR_WORKING_TOUT, ctlr_ptr, wpg_bbar, &status); 585 + rc = hpc_wait_ctlr_notworking(HPC_CTLR_WORKING_TOUT, ctlr_ptr, wpg_bbar, &status); 587 586 if (!rc) { 588 587 switch (cmd) { 589 588 case READ_ALLSTAT: 590 589 // update the slot structure 591 590 pslot->ctrl->status = status; 592 - pslot->status = ctrl_read (ctlr_ptr, wpg_bbar, index); 593 - rc = hpc_wait_ctlr_notworking (HPC_CTLR_WORKING_TOUT, ctlr_ptr, wpg_bbar, 591 + pslot->status = ctrl_read(ctlr_ptr, wpg_bbar, index); 592 + rc = hpc_wait_ctlr_notworking(HPC_CTLR_WORKING_TOUT, ctlr_ptr, wpg_bbar, 594 593 &status); 595 594 if (!rc) 596 - pslot->ext_status = ctrl_read (ctlr_ptr, wpg_bbar, index + WPG_1ST_EXTSLOT_INDEX); 595 + pslot->ext_status = ctrl_read(ctlr_ptr, wpg_bbar, index + WPG_1ST_EXTSLOT_INDEX); 597 596 598 597 break; 599 598 600 599 case READ_SLOTSTATUS: 601 600 // DO NOT update the slot structure 602 - *pstatus = ctrl_read (ctlr_ptr, wpg_bbar, index); 601 + *pstatus = ctrl_read(ctlr_ptr, wpg_bbar, index); 603 602 break; 604 603 605 604 case READ_EXTSLOTSTATUS: 606 605 // DO NOT update the slot structure 607 - *pstatus = ctrl_read (ctlr_ptr, wpg_bbar, index); 606 + *pstatus = ctrl_read(ctlr_ptr, wpg_bbar, index); 608 607 break; 609 608 610 609 case READ_CTLRSTATUS: ··· 612 613 break; 613 614 614 615 case READ_BUSSTATUS: 615 - pslot->busstatus = ctrl_read (ctlr_ptr, wpg_bbar, index); 616 + pslot->busstatus = ctrl_read(ctlr_ptr, wpg_bbar, index); 616 617 break; 617 618 case READ_REVLEVEL: 618 - *pstatus = ctrl_read (ctlr_ptr, wpg_bbar, index); 619 + *pstatus = ctrl_read(ctlr_ptr, wpg_bbar, index); 619 620 break; 620 621 case READ_HPCOPTIONS: 621 
- *pstatus = ctrl_read (ctlr_ptr, wpg_bbar, index); 622 + *pstatus = ctrl_read(ctlr_ptr, wpg_bbar, index); 622 623 break; 623 624 case READ_SLOTLATCHLOWREG: 624 625 // DO NOT update the slot structure 625 - *pstatus = ctrl_read (ctlr_ptr, wpg_bbar, index); 626 + *pstatus = ctrl_read(ctlr_ptr, wpg_bbar, index); 626 627 break; 627 628 628 629 // Not used 629 630 case READ_ALLSLOT: 630 - list_for_each (pslotlist, &ibmphp_slot_head) { 631 - pslot = list_entry (pslotlist, struct slot, ibm_slot_list); 631 + list_for_each_entry(pslot, &ibmphp_slot_head, 632 + ibm_slot_list) { 632 633 index = pslot->ctlr_index; 633 - rc = hpc_wait_ctlr_notworking (HPC_CTLR_WORKING_TOUT, ctlr_ptr, 634 + rc = hpc_wait_ctlr_notworking(HPC_CTLR_WORKING_TOUT, ctlr_ptr, 634 635 wpg_bbar, &status); 635 636 if (!rc) { 636 - pslot->status = ctrl_read (ctlr_ptr, wpg_bbar, index); 637 - rc = hpc_wait_ctlr_notworking (HPC_CTLR_WORKING_TOUT, 637 + pslot->status = ctrl_read(ctlr_ptr, wpg_bbar, index); 638 + rc = hpc_wait_ctlr_notworking(HPC_CTLR_WORKING_TOUT, 638 639 ctlr_ptr, wpg_bbar, &status); 639 640 if (!rc) 640 641 pslot->ext_status = 641 - ctrl_read (ctlr_ptr, wpg_bbar, 642 + ctrl_read(ctlr_ptr, wpg_bbar, 642 643 index + WPG_1ST_EXTSLOT_INDEX); 643 644 } else { 644 - err ("%s - Error ctrl_read failed\n", __func__); 645 + err("%s - Error ctrl_read failed\n", __func__); 645 646 rc = -EINVAL; 646 647 break; 647 648 } ··· 658 659 659 660 // remove physical to logical address mapping 660 661 if ((ctlr_ptr->ctlr_type == 2) || (ctlr_ptr->ctlr_type == 4)) 661 - iounmap (wpg_bbar); 662 + iounmap(wpg_bbar); 662 663 663 - free_hpc_access (); 664 + free_hpc_access(); 664 665 665 - debug_polling ("%s - Exit rc[%d]\n", __func__, rc); 666 + debug_polling("%s - Exit rc[%d]\n", __func__, rc); 666 667 return rc; 667 668 } 668 669 ··· 671 672 * 672 673 * Action: issue a WRITE command to HPC 673 674 *---------------------------------------------------------------------*/ 674 - int ibmphp_hpc_writeslot (struct slot 
*pslot, u8 cmd) 675 + int ibmphp_hpc_writeslot(struct slot *pslot, u8 cmd) 675 676 { 676 677 void __iomem *wpg_bbar = NULL; 677 678 struct controller *ctlr_ptr; ··· 681 682 int rc = 0; 682 683 int timeout; 683 684 684 - debug_polling ("%s - Entry pslot[%p] cmd[%x]\n", __func__, pslot, cmd); 685 + debug_polling("%s - Entry pslot[%p] cmd[%x]\n", __func__, pslot, cmd); 685 686 if (pslot == NULL) { 686 687 rc = -EINVAL; 687 - err ("%s - Error Exit rc[%d]\n", __func__, rc); 688 + err("%s - Error Exit rc[%d]\n", __func__, rc); 688 689 return rc; 689 690 } 690 691 691 692 if ((cmd == HPC_BUS_33CONVMODE) || (cmd == HPC_BUS_66CONVMODE) || 692 693 (cmd == HPC_BUS_66PCIXMODE) || (cmd == HPC_BUS_100PCIXMODE) || 693 694 (cmd == HPC_BUS_133PCIXMODE)) { 694 - busindex = ibmphp_get_bus_index (pslot->bus); 695 + busindex = ibmphp_get_bus_index(pslot->bus); 695 696 if (busindex < 0) { 696 697 rc = -EINVAL; 697 - err ("%s - Exit Error:invalid bus, rc[%d]\n", __func__, rc); 698 + err("%s - Exit Error:invalid bus, rc[%d]\n", __func__, rc); 698 699 return rc; 699 700 } else 700 701 index = (u8) busindex; 701 702 } else 702 703 index = pslot->ctlr_index; 703 704 704 - index = hpc_writecmdtoindex (cmd, index); 705 + index = hpc_writecmdtoindex(cmd, index); 705 706 706 707 if (index == HPC_ERROR) { 707 708 rc = -EINVAL; 708 - err ("%s - Error Exit rc[%d]\n", __func__, rc); 709 + err("%s - Error Exit rc[%d]\n", __func__, rc); 709 710 return rc; 710 711 } 711 712 712 713 ctlr_ptr = pslot->ctrl; 713 714 714 - get_hpc_access (); 715 + get_hpc_access(); 715 716 716 717 //-------------------------------------------------------------------- 717 718 // map physical address to logical address 718 719 //-------------------------------------------------------------------- 719 720 if ((ctlr_ptr->ctlr_type == 2) || (ctlr_ptr->ctlr_type == 4)) { 720 - wpg_bbar = ioremap (ctlr_ptr->u.wpeg_ctlr.wpegbbar, WPG_I2C_IOREMAP_SIZE); 721 + wpg_bbar = ioremap(ctlr_ptr->u.wpeg_ctlr.wpegbbar, WPG_I2C_IOREMAP_SIZE); 
721 722 722 - debug ("%s - ctlr id[%x] physical[%lx] logical[%lx] i2c[%x]\n", __func__, 723 + debug("%s - ctlr id[%x] physical[%lx] logical[%lx] i2c[%x]\n", __func__, 723 724 ctlr_ptr->ctlr_id, (ulong) (ctlr_ptr->u.wpeg_ctlr.wpegbbar), (ulong) wpg_bbar, 724 725 ctlr_ptr->u.wpeg_ctlr.i2c_addr); 725 726 } 726 727 //-------------------------------------------------------------------- 727 728 // check controller status before writing 728 729 //-------------------------------------------------------------------- 729 - rc = hpc_wait_ctlr_notworking (HPC_CTLR_WORKING_TOUT, ctlr_ptr, wpg_bbar, &status); 730 + rc = hpc_wait_ctlr_notworking(HPC_CTLR_WORKING_TOUT, ctlr_ptr, wpg_bbar, &status); 730 731 if (!rc) { 731 732 732 - ctrl_write (ctlr_ptr, wpg_bbar, index, cmd); 733 + ctrl_write(ctlr_ptr, wpg_bbar, index, cmd); 733 734 734 735 //-------------------------------------------------------------------- 735 736 // check controller is still not working on the command ··· 737 738 timeout = CMD_COMPLETE_TOUT_SEC; 738 739 done = 0; 739 740 while (!done) { 740 - rc = hpc_wait_ctlr_notworking (HPC_CTLR_WORKING_TOUT, ctlr_ptr, wpg_bbar, 741 + rc = hpc_wait_ctlr_notworking(HPC_CTLR_WORKING_TOUT, ctlr_ptr, wpg_bbar, 741 742 &status); 742 743 if (!rc) { 743 - if (NEEDTOCHECK_CMDSTATUS (cmd)) { 744 - if (CTLR_FINISHED (status) == HPC_CTLR_FINISHED_YES) 744 + if (NEEDTOCHECK_CMDSTATUS(cmd)) { 745 + if (CTLR_FINISHED(status) == HPC_CTLR_FINISHED_YES) 745 746 done = 1; 746 747 } else 747 748 done = 1; ··· 750 751 msleep(1000); 751 752 if (timeout < 1) { 752 753 done = 1; 753 - err ("%s - Error command complete timeout\n", __func__); 754 + err("%s - Error command complete timeout\n", __func__); 754 755 rc = -EFAULT; 755 756 } else 756 757 timeout--; ··· 762 763 763 764 // remove physical to logical address mapping 764 765 if ((ctlr_ptr->ctlr_type == 2) || (ctlr_ptr->ctlr_type == 4)) 765 - iounmap (wpg_bbar); 766 - free_hpc_access (); 766 + iounmap(wpg_bbar); 767 + free_hpc_access(); 767 
768 768 - debug_polling ("%s - Exit rc[%d]\n", __func__, rc); 769 + debug_polling("%s - Exit rc[%d]\n", __func__, rc); 769 770 return rc; 770 771 } 771 772 ··· 774 775 * 775 776 * Action: make sure only one process can access HPC at one time 776 777 *---------------------------------------------------------------------*/ 777 - static void get_hpc_access (void) 778 + static void get_hpc_access(void) 778 779 { 779 780 mutex_lock(&sem_hpcaccess); 780 781 } ··· 782 783 /*---------------------------------------------------------------------- 783 784 * Name: free_hpc_access() 784 785 *---------------------------------------------------------------------*/ 785 - void free_hpc_access (void) 786 + void free_hpc_access(void) 786 787 { 787 788 mutex_unlock(&sem_hpcaccess); 788 789 } ··· 792 793 * 793 794 * Action: make sure only one process can change the data structure 794 795 *---------------------------------------------------------------------*/ 795 - void ibmphp_lock_operations (void) 796 + void ibmphp_lock_operations(void) 796 797 { 797 - down (&semOperations); 798 + down(&semOperations); 798 799 to_debug = 1; 799 800 } 800 801 801 802 /*---------------------------------------------------------------------- 802 803 * Name: ibmphp_unlock_operations() 803 804 *---------------------------------------------------------------------*/ 804 - void ibmphp_unlock_operations (void) 805 + void ibmphp_unlock_operations(void) 805 806 { 806 - debug ("%s - Entry\n", __func__); 807 - up (&semOperations); 807 + debug("%s - Entry\n", __func__); 808 + up(&semOperations); 808 809 to_debug = 0; 809 - debug ("%s - Exit\n", __func__); 810 + debug("%s - Exit\n", __func__); 810 811 } 811 812 812 813 /*---------------------------------------------------------------------- ··· 819 820 { 820 821 struct slot myslot; 821 822 struct slot *pslot = NULL; 822 - struct list_head *pslotlist; 823 823 int rc; 824 824 int poll_state = POLL_LATCH_REGISTER; 825 825 u8 oldlatchlow = 0x00; ··· 826 828 int 
poll_count = 0; 827 829 u8 ctrl_count = 0x00; 828 830 829 - debug ("%s - Entry\n", __func__); 831 + debug("%s - Entry\n", __func__); 830 832 831 833 while (!kthread_should_stop()) { 832 834 /* try to get the lock to do some kind of hardware access */ 833 - down (&semOperations); 835 + down(&semOperations); 834 836 835 837 switch (poll_state) { 836 838 case POLL_LATCH_REGISTER: 837 839 oldlatchlow = curlatchlow; 838 840 ctrl_count = 0x00; 839 - list_for_each (pslotlist, &ibmphp_slot_head) { 841 + list_for_each_entry(pslot, &ibmphp_slot_head, 842 + ibm_slot_list) { 840 843 if (ctrl_count >= ibmphp_get_total_controllers()) 841 844 break; 842 - pslot = list_entry (pslotlist, struct slot, ibm_slot_list); 843 845 if (pslot->ctrl->ctlr_relative_id == ctrl_count) { 844 846 ctrl_count++; 845 - if (READ_SLOT_LATCH (pslot->ctrl)) { 846 - rc = ibmphp_hpc_readslot (pslot, 847 + if (READ_SLOT_LATCH(pslot->ctrl)) { 848 + rc = ibmphp_hpc_readslot(pslot, 847 849 READ_SLOTLATCHLOWREG, 848 850 &curlatchlow); 849 851 if (oldlatchlow != curlatchlow) 850 - process_changeinlatch (oldlatchlow, 852 + process_changeinlatch(oldlatchlow, 851 853 curlatchlow, 852 854 pslot->ctrl); 853 855 } ··· 857 859 poll_state = POLL_SLEEP; 858 860 break; 859 861 case POLL_SLOTS: 860 - list_for_each (pslotlist, &ibmphp_slot_head) { 861 - pslot = list_entry (pslotlist, struct slot, ibm_slot_list); 862 + list_for_each_entry(pslot, &ibmphp_slot_head, 863 + ibm_slot_list) { 862 864 // make a copy of the old status 863 - memcpy ((void *) &myslot, (void *) pslot, 864 - sizeof (struct slot)); 865 - rc = ibmphp_hpc_readslot (pslot, READ_ALLSTAT, NULL); 865 + memcpy((void *) &myslot, (void *) pslot, 866 + sizeof(struct slot)); 867 + rc = ibmphp_hpc_readslot(pslot, READ_ALLSTAT, NULL); 866 868 if ((myslot.status != pslot->status) 867 869 || (myslot.ext_status != pslot->ext_status)) 868 - process_changeinstatus (pslot, &myslot); 870 + process_changeinstatus(pslot, &myslot); 869 871 } 870 872 ctrl_count = 0x00; 871 - 
list_for_each (pslotlist, &ibmphp_slot_head) { 873 + list_for_each_entry(pslot, &ibmphp_slot_head, 874 + ibm_slot_list) { 872 875 if (ctrl_count >= ibmphp_get_total_controllers()) 873 876 break; 874 - pslot = list_entry (pslotlist, struct slot, ibm_slot_list); 875 877 if (pslot->ctrl->ctlr_relative_id == ctrl_count) { 876 878 ctrl_count++; 877 - if (READ_SLOT_LATCH (pslot->ctrl)) 878 - rc = ibmphp_hpc_readslot (pslot, 879 + if (READ_SLOT_LATCH(pslot->ctrl)) 880 + rc = ibmphp_hpc_readslot(pslot, 879 881 READ_SLOTLATCHLOWREG, 880 882 &curlatchlow); 881 883 } ··· 885 887 break; 886 888 case POLL_SLEEP: 887 889 /* don't sleep with a lock on the hardware */ 888 - up (&semOperations); 890 + up(&semOperations); 889 891 msleep(POLL_INTERVAL_SEC * 1000); 890 892 891 893 if (kthread_should_stop()) 892 894 goto out_sleep; 893 895 894 - down (&semOperations); 896 + down(&semOperations); 895 897 896 898 if (poll_count >= POLL_LATCH_CNT) { 897 899 poll_count = 0; ··· 901 903 break; 902 904 } 903 905 /* give up the hardware semaphore */ 904 - up (&semOperations); 906 + up(&semOperations); 905 907 /* sleep for a short time just for good measure */ 906 908 out_sleep: 907 909 msleep(100); 908 910 } 909 - up (&sem_exit); 910 - debug ("%s - Exit\n", __func__); 911 + up(&sem_exit); 912 + debug("%s - Exit\n", __func__); 911 913 return 0; 912 914 } 913 915 ··· 927 929 * 928 930 * Notes: 929 931 *---------------------------------------------------------------------*/ 930 - static int process_changeinstatus (struct slot *pslot, struct slot *poldslot) 932 + static int process_changeinstatus(struct slot *pslot, struct slot *poldslot) 931 933 { 932 934 u8 status; 933 935 int rc = 0; 934 936 u8 disable = 0; 935 937 u8 update = 0; 936 938 937 - debug ("process_changeinstatus - Entry pslot[%p], poldslot[%p]\n", pslot, poldslot); 939 + debug("process_changeinstatus - Entry pslot[%p], poldslot[%p]\n", pslot, poldslot); 938 940 939 941 // bit 0 - HPC_SLOT_POWER 940 942 if ((pslot->status & 0x01) != 
(poldslot->status & 0x01)) ··· 956 958 // bit 5 - HPC_SLOT_PWRGD 957 959 if ((pslot->status & 0x20) != (poldslot->status & 0x20)) 958 960 // OFF -> ON: ignore, ON -> OFF: disable slot 959 - if ((poldslot->status & 0x20) && (SLOT_CONNECT (poldslot->status) == HPC_SLOT_CONNECTED) && (SLOT_PRESENT (poldslot->status))) 961 + if ((poldslot->status & 0x20) && (SLOT_CONNECT(poldslot->status) == HPC_SLOT_CONNECTED) && (SLOT_PRESENT(poldslot->status))) 960 962 disable = 1; 961 963 962 964 // bit 6 - HPC_SLOT_BUS_SPEED ··· 967 969 update = 1; 968 970 // OPEN -> CLOSE 969 971 if (pslot->status & 0x80) { 970 - if (SLOT_PWRGD (pslot->status)) { 972 + if (SLOT_PWRGD(pslot->status)) { 971 973 // power goes on and off after closing latch 972 974 // check again to make sure power is still ON 973 975 msleep(1000); 974 - rc = ibmphp_hpc_readslot (pslot, READ_SLOTSTATUS, &status); 975 - if (SLOT_PWRGD (status)) 976 + rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS, &status); 977 + if (SLOT_PWRGD(status)) 976 978 update = 1; 977 979 else // overwrite power in pslot to OFF 978 980 pslot->status &= ~HPC_SLOT_POWER; 979 981 } 980 982 } 981 983 // CLOSE -> OPEN 982 - else if ((SLOT_PWRGD (poldslot->status) == HPC_SLOT_PWRGD_GOOD) 983 - && (SLOT_CONNECT (poldslot->status) == HPC_SLOT_CONNECTED) && (SLOT_PRESENT (poldslot->status))) { 984 + else if ((SLOT_PWRGD(poldslot->status) == HPC_SLOT_PWRGD_GOOD) 985 + && (SLOT_CONNECT(poldslot->status) == HPC_SLOT_CONNECTED) && (SLOT_PRESENT(poldslot->status))) { 984 986 disable = 1; 985 987 } 986 988 // else - ignore ··· 990 992 update = 1; 991 993 992 994 if (disable) { 993 - debug ("process_changeinstatus - disable slot\n"); 995 + debug("process_changeinstatus - disable slot\n"); 994 996 pslot->flag = 0; 995 - rc = ibmphp_do_disable_slot (pslot); 997 + rc = ibmphp_do_disable_slot(pslot); 996 998 } 997 999 998 1000 if (update || disable) 999 - ibmphp_update_slot_info (pslot); 1001 + ibmphp_update_slot_info(pslot); 1000 1002 1001 - debug ("%s - Exit 
rc[%d] disable[%x] update[%x]\n", __func__, rc, disable, update); 1003 + debug("%s - Exit rc[%d] disable[%x] update[%x]\n", __func__, rc, disable, update); 1002 1004 1003 1005 return rc; 1004 1006 } ··· 1013 1015 * Return 0 or error codes 1014 1016 * Value: 1015 1017 *---------------------------------------------------------------------*/ 1016 - static int process_changeinlatch (u8 old, u8 new, struct controller *ctrl) 1018 + static int process_changeinlatch(u8 old, u8 new, struct controller *ctrl) 1017 1019 { 1018 1020 struct slot myslot, *pslot; 1019 1021 u8 i; 1020 1022 u8 mask; 1021 1023 int rc = 0; 1022 1024 1023 - debug ("%s - Entry old[%x], new[%x]\n", __func__, old, new); 1025 + debug("%s - Entry old[%x], new[%x]\n", __func__, old, new); 1024 1026 // bit 0 reserved, 0 is LSB, check bit 1-6 for 6 slots 1025 1027 1026 1028 for (i = ctrl->starting_slot_num; i <= ctrl->ending_slot_num; i++) { 1027 1029 mask = 0x01 << i; 1028 1030 if ((mask & old) != (mask & new)) { 1029 - pslot = ibmphp_get_slot_from_physical_num (i); 1031 + pslot = ibmphp_get_slot_from_physical_num(i); 1030 1032 if (pslot) { 1031 - memcpy ((void *) &myslot, (void *) pslot, sizeof (struct slot)); 1032 - rc = ibmphp_hpc_readslot (pslot, READ_ALLSTAT, NULL); 1033 - debug ("%s - call process_changeinstatus for slot[%d]\n", __func__, i); 1034 - process_changeinstatus (pslot, &myslot); 1033 + memcpy((void *) &myslot, (void *) pslot, sizeof(struct slot)); 1034 + rc = ibmphp_hpc_readslot(pslot, READ_ALLSTAT, NULL); 1035 + debug("%s - call process_changeinstatus for slot[%d]\n", __func__, i); 1036 + process_changeinstatus(pslot, &myslot); 1035 1037 } else { 1036 1038 rc = -EINVAL; 1037 - err ("%s - Error bad pointer for slot[%d]\n", __func__, i); 1039 + err("%s - Error bad pointer for slot[%d]\n", __func__, i); 1038 1040 } 1039 1041 } 1040 1042 } 1041 - debug ("%s - Exit rc[%d]\n", __func__, rc); 1043 + debug("%s - Exit rc[%d]\n", __func__, rc); 1042 1044 return rc; 1043 1045 } 1044 1046 ··· 1047 1049 
* 1048 1050 * Action: start polling thread 1049 1051 *---------------------------------------------------------------------*/ 1050 - int __init ibmphp_hpc_start_poll_thread (void) 1052 + int __init ibmphp_hpc_start_poll_thread(void) 1051 1053 { 1052 - debug ("%s - Entry\n", __func__); 1054 + debug("%s - Entry\n", __func__); 1053 1055 1054 1056 ibmphp_poll_thread = kthread_run(poll_hpc, NULL, "hpc_poll"); 1055 1057 if (IS_ERR(ibmphp_poll_thread)) { 1056 - err ("%s - Error, thread not started\n", __func__); 1058 + err("%s - Error, thread not started\n", __func__); 1057 1059 return PTR_ERR(ibmphp_poll_thread); 1058 1060 } 1059 1061 return 0; ··· 1064 1066 * 1065 1067 * Action: stop polling thread and cleanup 1066 1068 *---------------------------------------------------------------------*/ 1067 - void __exit ibmphp_hpc_stop_poll_thread (void) 1069 + void __exit ibmphp_hpc_stop_poll_thread(void) 1068 1070 { 1069 - debug ("%s - Entry\n", __func__); 1071 + debug("%s - Entry\n", __func__); 1070 1072 1071 1073 kthread_stop(ibmphp_poll_thread); 1072 - debug ("before locking operations \n"); 1073 - ibmphp_lock_operations (); 1074 - debug ("after locking operations \n"); 1074 + debug("before locking operations\n"); 1075 + ibmphp_lock_operations(); 1076 + debug("after locking operations\n"); 1075 1077 1076 1078 // wait for poll thread to exit 1077 - debug ("before sem_exit down \n"); 1078 - down (&sem_exit); 1079 - debug ("after sem_exit down \n"); 1079 + debug("before sem_exit down\n"); 1080 + down(&sem_exit); 1081 + debug("after sem_exit down\n"); 1080 1082 1081 1083 // cleanup 1082 - debug ("before free_hpc_access \n"); 1083 - free_hpc_access (); 1084 - debug ("after free_hpc_access \n"); 1085 - ibmphp_unlock_operations (); 1086 - debug ("after unlock operations \n"); 1087 - up (&sem_exit); 1088 - debug ("after sem exit up\n"); 1084 + debug("before free_hpc_access\n"); 1085 + free_hpc_access(); 1086 + debug("after free_hpc_access\n"); 1087 + ibmphp_unlock_operations(); 1088 
+ debug("after unlock operations\n"); 1089 + up(&sem_exit); 1090 + debug("after sem exit up\n"); 1089 1091 1090 - debug ("%s - Exit\n", __func__); 1092 + debug("%s - Exit\n", __func__); 1091 1093 } 1092 1094 1093 1095 /*---------------------------------------------------------------------- ··· 1098 1100 * Return 0, HPC_ERROR 1099 1101 * Value: 1100 1102 *---------------------------------------------------------------------*/ 1101 - static int hpc_wait_ctlr_notworking (int timeout, struct controller *ctlr_ptr, void __iomem *wpg_bbar, 1103 + static int hpc_wait_ctlr_notworking(int timeout, struct controller *ctlr_ptr, void __iomem *wpg_bbar, 1102 1104 u8 *pstatus) 1103 1105 { 1104 1106 int rc = 0; 1105 1107 u8 done = 0; 1106 1108 1107 - debug_polling ("hpc_wait_ctlr_notworking - Entry timeout[%d]\n", timeout); 1109 + debug_polling("hpc_wait_ctlr_notworking - Entry timeout[%d]\n", timeout); 1108 1110 1109 1111 while (!done) { 1110 - *pstatus = ctrl_read (ctlr_ptr, wpg_bbar, WPG_CTLR_INDEX); 1112 + *pstatus = ctrl_read(ctlr_ptr, wpg_bbar, WPG_CTLR_INDEX); 1111 1113 if (*pstatus == HPC_ERROR) { 1112 1114 rc = HPC_ERROR; 1113 1115 done = 1; 1114 1116 } 1115 - if (CTLR_WORKING (*pstatus) == HPC_CTLR_WORKING_NO) 1117 + if (CTLR_WORKING(*pstatus) == HPC_CTLR_WORKING_NO) 1116 1118 done = 1; 1117 1119 if (!done) { 1118 1120 msleep(1000); 1119 1121 if (timeout < 1) { 1120 1122 done = 1; 1121 - err ("HPCreadslot - Error ctlr timeout\n"); 1123 + err("HPCreadslot - Error ctlr timeout\n"); 1122 1124 rc = HPC_ERROR; 1123 1125 } else 1124 1126 timeout--; 1125 1127 } 1126 1128 } 1127 - debug_polling ("hpc_wait_ctlr_notworking - Exit rc[%x] status[%x]\n", rc, *pstatus); 1129 + debug_polling("hpc_wait_ctlr_notworking - Exit rc[%x] status[%x]\n", rc, *pstatus); 1128 1130 return rc; 1129 1131 }
+365 -365
drivers/pci/hotplug/ibmphp_pci.c
···
 static int configure_device(struct pci_func *);
 static int configure_bridge(struct pci_func **, u8);
 static struct res_needed *scan_behind_bridge(struct pci_func *, u8);
-static int add_new_bus (struct bus_node *, struct resource_node *, struct resource_node *, struct resource_node *, u8);
-static u8 find_sec_number (u8 primary_busno, u8 slotno);
+static int add_new_bus(struct bus_node *, struct resource_node *, struct resource_node *, struct resource_node *, u8);
+static u8 find_sec_number(u8 primary_busno, u8 slotno);

 /*
  * NOTE..... If BIOS doesn't provide default routing, we assign:
···
 * We also assign the same irq numbers for multi function devices.
 * These are PIC mode, so shouldn't matter n.e.ways (hopefully)
 */
-static void assign_alt_irq (struct pci_func *cur_func, u8 class_code)
+static void assign_alt_irq(struct pci_func *cur_func, u8 class_code)
 {
 	int j;
 	for (j = 0; j < 4; j++) {
···
 * if there is an error, will need to go through all previous functions and
 * unconfigure....or can add some code into unconfigure_card....
 */
-int ibmphp_configure_card (struct pci_func *func, u8 slotno)
+int ibmphp_configure_card(struct pci_func *func, u8 slotno)
 {
 	u16 vendor_id;
 	u32 class;
···
 	u8 flag;
 	u8 valid_device = 0x00;	/* to see if we are able to read from card any device info at all */

-	debug ("inside configure_card, func->busno = %x\n", func->busno);
+	debug("inside configure_card, func->busno = %x\n", func->busno);

 	device = func->device;
 	cur_func = func;
···
 		cur_func->function = function;

-		debug ("inside the loop, cur_func->busno = %x, cur_func->device = %x, cur_func->function = %x\n",
+		debug("inside the loop, cur_func->busno = %x, cur_func->device = %x, cur_func->function = %x\n",
			cur_func->busno, cur_func->device, cur_func->function);

-		pci_bus_read_config_word (ibmphp_pci_bus, devfn, PCI_VENDOR_ID, &vendor_id);
+		pci_bus_read_config_word(ibmphp_pci_bus, devfn, PCI_VENDOR_ID, &vendor_id);

-		debug ("vendor_id is %x\n", vendor_id);
+		debug("vendor_id is %x\n", vendor_id);
 		if (vendor_id != PCI_VENDOR_ID_NOTVALID) {
 			/* found correct device!!! */
-			debug ("found valid device, vendor_id = %x\n", vendor_id);
+			debug("found valid device, vendor_id = %x\n", vendor_id);

 			++valid_device;
···
 			 *	|_=> 0 = single function device, 1 = multi-function device
 			 */

-			pci_bus_read_config_byte (ibmphp_pci_bus, devfn, PCI_HEADER_TYPE, &hdr_type);
-			pci_bus_read_config_dword (ibmphp_pci_bus, devfn, PCI_CLASS_REVISION, &class);
+			pci_bus_read_config_byte(ibmphp_pci_bus, devfn, PCI_HEADER_TYPE, &hdr_type);
+			pci_bus_read_config_dword(ibmphp_pci_bus, devfn, PCI_CLASS_REVISION, &class);

 			class_code = class >> 24;
-			debug ("hrd_type = %x, class = %x, class_code %x\n", hdr_type, class, class_code);
+			debug("hrd_type = %x, class = %x, class_code %x\n", hdr_type, class, class_code);
 			class >>= 8;	/* to take revision out, class = class.subclass.prog i/f */
 			if (class == PCI_CLASS_NOT_DEFINED_VGA) {
-				err ("The device %x is VGA compatible and as is not supported for hot plugging. "
+				err("The device %x is VGA compatible and as is not supported for hot plugging. "
				     "Please choose another device.\n", cur_func->device);
				return -ENODEV;
			} else if (class == PCI_CLASS_DISPLAY_VGA) {
-				err ("The device %x is not supported for hot plugging. Please choose another device.\n",
+				err("The device %x is not supported for hot plugging. Please choose another device.\n",
				     cur_func->device);
				return -ENODEV;
			}
			switch (hdr_type) {
			case PCI_HEADER_TYPE_NORMAL:
-				debug ("single device case.... vendor id = %x, hdr_type = %x, class = %x\n", vendor_id, hdr_type, class);
-				assign_alt_irq (cur_func, class_code);
+				debug("single device case....
vendor id = %x, hdr_type = %x, class = %x\n", vendor_id, hdr_type, class); 147 + assign_alt_irq(cur_func, class_code); 148 148 rc = configure_device(cur_func); 149 149 if (rc < 0) { 150 150 /* We need to do this in case some other BARs were properly inserted */ 151 - err ("was not able to configure devfunc %x on bus %x.\n", 151 + err("was not able to configure devfunc %x on bus %x.\n", 152 152 cur_func->device, cur_func->busno); 153 153 cleanup_count = 6; 154 154 goto error; ··· 157 157 function = 0x8; 158 158 break; 159 159 case PCI_HEADER_TYPE_MULTIDEVICE: 160 - assign_alt_irq (cur_func, class_code); 160 + assign_alt_irq(cur_func, class_code); 161 161 rc = configure_device(cur_func); 162 162 if (rc < 0) { 163 163 /* We need to do this in case some other BARs were properly inserted */ 164 - err ("was not able to configure devfunc %x on bus %x...bailing out\n", 164 + err("was not able to configure devfunc %x on bus %x...bailing out\n", 165 165 cur_func->device, cur_func->busno); 166 166 cleanup_count = 6; 167 167 goto error; 168 168 } 169 169 newfunc = kzalloc(sizeof(*newfunc), GFP_KERNEL); 170 170 if (!newfunc) { 171 - err ("out of system memory\n"); 171 + err("out of system memory\n"); 172 172 return -ENOMEM; 173 173 } 174 174 newfunc->busno = cur_func->busno; ··· 181 181 case PCI_HEADER_TYPE_MULTIBRIDGE: 182 182 class >>= 8; 183 183 if (class != PCI_CLASS_BRIDGE_PCI) { 184 - err ("This %x is not PCI-to-PCI bridge, and as is not supported for hot-plugging. Please insert another card.\n", 184 + err("This %x is not PCI-to-PCI bridge, and as is not supported for hot-plugging. 
Please insert another card.\n", 185 185 cur_func->device); 186 186 return -ENODEV; 187 187 } 188 - assign_alt_irq (cur_func, class_code); 189 - rc = configure_bridge (&cur_func, slotno); 188 + assign_alt_irq(cur_func, class_code); 189 + rc = configure_bridge(&cur_func, slotno); 190 190 if (rc == -ENODEV) { 191 - err ("You chose to insert Single Bridge, or nested bridges, this is not supported...\n"); 192 - err ("Bus %x, devfunc %x\n", cur_func->busno, cur_func->device); 191 + err("You chose to insert Single Bridge, or nested bridges, this is not supported...\n"); 192 + err("Bus %x, devfunc %x\n", cur_func->busno, cur_func->device); 193 193 return rc; 194 194 } 195 195 if (rc) { 196 196 /* We need to do this in case some other BARs were properly inserted */ 197 - err ("was not able to hot-add PPB properly.\n"); 197 + err("was not able to hot-add PPB properly.\n"); 198 198 func->bus = 1; /* To indicate to the unconfigure function that this is a PPB */ 199 199 cleanup_count = 2; 200 200 goto error; 201 201 } 202 202 203 - pci_bus_read_config_byte (ibmphp_pci_bus, devfn, PCI_SECONDARY_BUS, &sec_number); 203 + pci_bus_read_config_byte(ibmphp_pci_bus, devfn, PCI_SECONDARY_BUS, &sec_number); 204 204 flag = 0; 205 205 for (i = 0; i < 32; i++) { 206 206 if (func->devices[i]) { 207 207 newfunc = kzalloc(sizeof(*newfunc), GFP_KERNEL); 208 208 if (!newfunc) { 209 - err ("out of system memory\n"); 209 + err("out of system memory\n"); 210 210 return -ENOMEM; 211 211 } 212 212 newfunc->busno = sec_number; ··· 220 220 } else 221 221 cur_func->next = newfunc; 222 222 223 - rc = ibmphp_configure_card (newfunc, slotno); 223 + rc = ibmphp_configure_card(newfunc, slotno); 224 224 /* This could only happen if kmalloc failed */ 225 225 if (rc) { 226 226 /* We need to do this in case bridge itself got configured properly, but devices behind it failed */ ··· 234 234 235 235 newfunc = kzalloc(sizeof(*newfunc), GFP_KERNEL); 236 236 if (!newfunc) { 237 - err ("out of system memory\n"); 237 + 
err("out of system memory\n"); 238 238 return -ENOMEM; 239 239 } 240 240 newfunc->busno = cur_func->busno; 241 241 newfunc->device = device; 242 242 for (j = 0; j < 4; j++) 243 243 newfunc->irq[j] = cur_func->irq[j]; 244 - for (prev_func = cur_func; prev_func->next; prev_func = prev_func->next) ; 244 + for (prev_func = cur_func; prev_func->next; prev_func = prev_func->next); 245 245 prev_func->next = newfunc; 246 246 cur_func = newfunc; 247 247 break; 248 248 case PCI_HEADER_TYPE_BRIDGE: 249 249 class >>= 8; 250 - debug ("class now is %x\n", class); 250 + debug("class now is %x\n", class); 251 251 if (class != PCI_CLASS_BRIDGE_PCI) { 252 - err ("This %x is not PCI-to-PCI bridge, and as is not supported for hot-plugging. Please insert another card.\n", 252 + err("This %x is not PCI-to-PCI bridge, and as is not supported for hot-plugging. Please insert another card.\n", 253 253 cur_func->device); 254 254 return -ENODEV; 255 255 } 256 256 257 - assign_alt_irq (cur_func, class_code); 257 + assign_alt_irq(cur_func, class_code); 258 258 259 - debug ("cur_func->busno b4 configure_bridge is %x\n", cur_func->busno); 260 - rc = configure_bridge (&cur_func, slotno); 259 + debug("cur_func->busno b4 configure_bridge is %x\n", cur_func->busno); 260 + rc = configure_bridge(&cur_func, slotno); 261 261 if (rc == -ENODEV) { 262 - err ("You chose to insert Single Bridge, or nested bridges, this is not supported...\n"); 263 - err ("Bus %x, devfunc %x\n", cur_func->busno, cur_func->device); 262 + err("You chose to insert Single Bridge, or nested bridges, this is not supported...\n"); 263 + err("Bus %x, devfunc %x\n", cur_func->busno, cur_func->device); 264 264 return rc; 265 265 } 266 266 if (rc) { 267 267 /* We need to do this in case some other BARs were properly inserted */ 268 268 func->bus = 1; /* To indicate to the unconfigure function that this is a PPB */ 269 - err ("was not able to hot-add PPB properly.\n"); 269 + err("was not able to hot-add PPB properly.\n"); 270 270 
cleanup_count = 2; 271 271 goto error; 272 272 } 273 - debug ("cur_func->busno = %x, device = %x, function = %x\n", 273 + debug("cur_func->busno = %x, device = %x, function = %x\n", 274 274 cur_func->busno, device, function); 275 - pci_bus_read_config_byte (ibmphp_pci_bus, devfn, PCI_SECONDARY_BUS, &sec_number); 276 - debug ("after configuring bridge..., sec_number = %x\n", sec_number); 275 + pci_bus_read_config_byte(ibmphp_pci_bus, devfn, PCI_SECONDARY_BUS, &sec_number); 276 + debug("after configuring bridge..., sec_number = %x\n", sec_number); 277 277 flag = 0; 278 278 for (i = 0; i < 32; i++) { 279 279 if (func->devices[i]) { 280 - debug ("inside for loop, device is %x\n", i); 280 + debug("inside for loop, device is %x\n", i); 281 281 newfunc = kzalloc(sizeof(*newfunc), GFP_KERNEL); 282 282 if (!newfunc) { 283 - err (" out of system memory\n"); 283 + err(" out of system memory\n"); 284 284 return -ENOMEM; 285 285 } 286 286 newfunc->busno = sec_number; ··· 289 289 newfunc->irq[j] = cur_func->irq[j]; 290 290 291 291 if (flag) { 292 - for (prev_func = cur_func; prev_func->next; prev_func = prev_func->next) ; 292 + for (prev_func = cur_func; prev_func->next; prev_func = prev_func->next); 293 293 prev_func->next = newfunc; 294 294 } else 295 295 cur_func->next = newfunc; 296 296 297 - rc = ibmphp_configure_card (newfunc, slotno); 297 + rc = ibmphp_configure_card(newfunc, slotno); 298 298 299 299 /* Again, this case should not happen... For complete paranoia, will need to call remove_bus */ 300 300 if (rc) { ··· 310 310 function = 0x8; 311 311 break; 312 312 default: 313 - err ("MAJOR PROBLEM!!!!, header type not supported? %x\n", hdr_type); 313 + err("MAJOR PROBLEM!!!!, header type not supported? %x\n", hdr_type); 314 314 return -ENXIO; 315 315 break; 316 316 } /* end of switch */ ··· 318 318 } /* end of for */ 319 319 320 320 if (!valid_device) { 321 - err ("Cannot find any valid devices on the card. 
Or unable to read from card.\n"); 321 + err("Cannot find any valid devices on the card. Or unable to read from card.\n"); 322 322 return -ENODEV; 323 323 } 324 324 ··· 327 327 error: 328 328 for (i = 0; i < cleanup_count; i++) { 329 329 if (cur_func->io[i]) { 330 - ibmphp_remove_resource (cur_func->io[i]); 330 + ibmphp_remove_resource(cur_func->io[i]); 331 331 cur_func->io[i] = NULL; 332 332 } else if (cur_func->pfmem[i]) { 333 - ibmphp_remove_resource (cur_func->pfmem[i]); 333 + ibmphp_remove_resource(cur_func->pfmem[i]); 334 334 cur_func->pfmem[i] = NULL; 335 335 } else if (cur_func->mem[i]) { 336 - ibmphp_remove_resource (cur_func->mem[i]); 336 + ibmphp_remove_resource(cur_func->mem[i]); 337 337 cur_func->mem[i] = NULL; 338 338 } 339 339 } ··· 345 345 * Input: pointer to the pci_func 346 346 * Output: configured PCI, 0, or error 347 347 */ 348 - static int configure_device (struct pci_func *func) 348 + static int configure_device(struct pci_func *func) 349 349 { 350 350 u32 bar[6]; 351 351 u32 address[] = { ··· 366 366 struct resource_node *pfmem[6]; 367 367 unsigned int devfn; 368 368 369 - debug ("%s - inside\n", __func__); 369 + debug("%s - inside\n", __func__); 370 370 371 371 devfn = PCI_DEVFN(func->device, func->function); 372 372 ibmphp_pci_bus->number = func->busno; ··· 386 386 pcibios_write_config_dword(cur_func->busno, cur_func->device, 387 387 PCI_BASE_ADDRESS_0 + 4 * count, 0xFFFFFFFF); 388 388 */ 389 - pci_bus_write_config_dword (ibmphp_pci_bus, devfn, address[count], 0xFFFFFFFF); 390 - pci_bus_read_config_dword (ibmphp_pci_bus, devfn, address[count], &bar[count]); 389 + pci_bus_write_config_dword(ibmphp_pci_bus, devfn, address[count], 0xFFFFFFFF); 390 + pci_bus_read_config_dword(ibmphp_pci_bus, devfn, address[count], &bar[count]); 391 391 392 392 if (!bar[count]) /* This BAR is not implemented */ 393 393 continue; 394 394 395 - debug ("Device %x BAR %d wants %x\n", func->device, count, bar[count]); 395 + debug("Device %x BAR %d wants %x\n", 
func->device, count, bar[count]); 396 396 397 397 if (bar[count] & PCI_BASE_ADDRESS_SPACE_IO) { 398 398 /* This is IO */ 399 - debug ("inside IO SPACE\n"); 399 + debug("inside IO SPACE\n"); 400 400 401 401 len[count] = bar[count] & 0xFFFFFFFC; 402 402 len[count] = ~len[count] + 1; 403 403 404 - debug ("len[count] in IO %x, count %d\n", len[count], count); 404 + debug("len[count] in IO %x, count %d\n", len[count], count); 405 405 406 406 io[count] = kzalloc(sizeof(struct resource_node), GFP_KERNEL); 407 407 408 408 if (!io[count]) { 409 - err ("out of system memory\n"); 409 + err("out of system memory\n"); 410 410 return -ENOMEM; 411 411 } 412 412 io[count]->type = IO; ··· 414 414 io[count]->devfunc = PCI_DEVFN(func->device, func->function); 415 415 io[count]->len = len[count]; 416 416 if (ibmphp_check_resource(io[count], 0) == 0) { 417 - ibmphp_add_resource (io[count]); 417 + ibmphp_add_resource(io[count]); 418 418 func->io[count] = io[count]; 419 419 } else { 420 - err ("cannot allocate requested io for bus %x device %x function %x len %x\n", 420 + err("cannot allocate requested io for bus %x device %x function %x len %x\n", 421 421 func->busno, func->device, func->function, len[count]); 422 - kfree (io[count]); 422 + kfree(io[count]); 423 423 return -EIO; 424 424 } 425 - pci_bus_write_config_dword (ibmphp_pci_bus, devfn, address[count], func->io[count]->start); 425 + pci_bus_write_config_dword(ibmphp_pci_bus, devfn, address[count], func->io[count]->start); 426 426 427 427 /* _______________This is for debugging purposes only_____________________ */ 428 - debug ("b4 writing, the IO address is %x\n", func->io[count]->start); 429 - pci_bus_read_config_dword (ibmphp_pci_bus, devfn, address[count], &bar[count]); 430 - debug ("after writing.... 
the start address is %x\n", bar[count]); 428 + debug("b4 writing, the IO address is %x\n", func->io[count]->start); 429 + pci_bus_read_config_dword(ibmphp_pci_bus, devfn, address[count], &bar[count]); 430 + debug("after writing.... the start address is %x\n", bar[count]); 431 431 /* _________________________________________________________________________*/ 432 432 433 433 } else { 434 434 /* This is Memory */ 435 435 if (bar[count] & PCI_BASE_ADDRESS_MEM_PREFETCH) { 436 436 /* pfmem */ 437 - debug ("PFMEM SPACE\n"); 437 + debug("PFMEM SPACE\n"); 438 438 439 439 len[count] = bar[count] & 0xFFFFFFF0; 440 440 len[count] = ~len[count] + 1; 441 441 442 - debug ("len[count] in PFMEM %x, count %d\n", len[count], count); 442 + debug("len[count] in PFMEM %x, count %d\n", len[count], count); 443 443 444 444 pfmem[count] = kzalloc(sizeof(struct resource_node), GFP_KERNEL); 445 445 if (!pfmem[count]) { 446 - err ("out of system memory\n"); 446 + err("out of system memory\n"); 447 447 return -ENOMEM; 448 448 } 449 449 pfmem[count]->type = PFMEM; ··· 452 452 func->function); 453 453 pfmem[count]->len = len[count]; 454 454 pfmem[count]->fromMem = 0; 455 - if (ibmphp_check_resource (pfmem[count], 0) == 0) { 456 - ibmphp_add_resource (pfmem[count]); 455 + if (ibmphp_check_resource(pfmem[count], 0) == 0) { 456 + ibmphp_add_resource(pfmem[count]); 457 457 func->pfmem[count] = pfmem[count]; 458 458 } else { 459 459 mem_tmp = kzalloc(sizeof(*mem_tmp), GFP_KERNEL); 460 460 if (!mem_tmp) { 461 - err ("out of system memory\n"); 462 - kfree (pfmem[count]); 461 + err("out of system memory\n"); 462 + kfree(pfmem[count]); 463 463 return -ENOMEM; 464 464 } 465 465 mem_tmp->type = MEM; 466 466 mem_tmp->busno = pfmem[count]->busno; 467 467 mem_tmp->devfunc = pfmem[count]->devfunc; 468 468 mem_tmp->len = pfmem[count]->len; 469 - debug ("there's no pfmem... 
going into mem.\n"); 470 - if (ibmphp_check_resource (mem_tmp, 0) == 0) { 471 - ibmphp_add_resource (mem_tmp); 469 + debug("there's no pfmem... going into mem.\n"); 470 + if (ibmphp_check_resource(mem_tmp, 0) == 0) { 471 + ibmphp_add_resource(mem_tmp); 472 472 pfmem[count]->fromMem = 1; 473 473 pfmem[count]->rangeno = mem_tmp->rangeno; 474 474 pfmem[count]->start = mem_tmp->start; 475 475 pfmem[count]->end = mem_tmp->end; 476 - ibmphp_add_pfmem_from_mem (pfmem[count]); 476 + ibmphp_add_pfmem_from_mem(pfmem[count]); 477 477 func->pfmem[count] = pfmem[count]; 478 478 } else { 479 - err ("cannot allocate requested pfmem for bus %x, device %x, len %x\n", 479 + err("cannot allocate requested pfmem for bus %x, device %x, len %x\n", 480 480 func->busno, func->device, len[count]); 481 - kfree (mem_tmp); 482 - kfree (pfmem[count]); 481 + kfree(mem_tmp); 482 + kfree(pfmem[count]); 483 483 return -EIO; 484 484 } 485 485 } 486 486 487 - pci_bus_write_config_dword (ibmphp_pci_bus, devfn, address[count], func->pfmem[count]->start); 487 + pci_bus_write_config_dword(ibmphp_pci_bus, devfn, address[count], func->pfmem[count]->start); 488 488 489 489 /*_______________This is for debugging purposes only______________________________*/ 490 - debug ("b4 writing, start address is %x\n", func->pfmem[count]->start); 491 - pci_bus_read_config_dword (ibmphp_pci_bus, devfn, address[count], &bar[count]); 492 - debug ("after writing, start address is %x\n", bar[count]); 490 + debug("b4 writing, start address is %x\n", func->pfmem[count]->start); 491 + pci_bus_read_config_dword(ibmphp_pci_bus, devfn, address[count], &bar[count]); 492 + debug("after writing, start address is %x\n", bar[count]); 493 493 /*_________________________________________________________________________________*/ 494 494 495 495 if (bar[count] & PCI_BASE_ADDRESS_MEM_TYPE_64) { /* takes up another dword */ 496 - debug ("inside the mem 64 case, count %d\n", count); 496 + debug("inside the mem 64 case, count %d\n", count); 
497 497 count += 1; 498 498 /* on the 2nd dword, write all 0s, since we can't handle them n.e.ways */ 499 - pci_bus_write_config_dword (ibmphp_pci_bus, devfn, address[count], 0x00000000); 499 + pci_bus_write_config_dword(ibmphp_pci_bus, devfn, address[count], 0x00000000); 500 500 } 501 501 } else { 502 502 /* regular memory */ 503 - debug ("REGULAR MEM SPACE\n"); 503 + debug("REGULAR MEM SPACE\n"); 504 504 505 505 len[count] = bar[count] & 0xFFFFFFF0; 506 506 len[count] = ~len[count] + 1; 507 507 508 - debug ("len[count] in Mem %x, count %d\n", len[count], count); 508 + debug("len[count] in Mem %x, count %d\n", len[count], count); 509 509 510 510 mem[count] = kzalloc(sizeof(struct resource_node), GFP_KERNEL); 511 511 if (!mem[count]) { 512 - err ("out of system memory\n"); 512 + err("out of system memory\n"); 513 513 return -ENOMEM; 514 514 } 515 515 mem[count]->type = MEM; ··· 517 517 mem[count]->devfunc = PCI_DEVFN(func->device, 518 518 func->function); 519 519 mem[count]->len = len[count]; 520 - if (ibmphp_check_resource (mem[count], 0) == 0) { 521 - ibmphp_add_resource (mem[count]); 520 + if (ibmphp_check_resource(mem[count], 0) == 0) { 521 + ibmphp_add_resource(mem[count]); 522 522 func->mem[count] = mem[count]; 523 523 } else { 524 - err ("cannot allocate requested mem for bus %x, device %x, len %x\n", 524 + err("cannot allocate requested mem for bus %x, device %x, len %x\n", 525 525 func->busno, func->device, len[count]); 526 - kfree (mem[count]); 526 + kfree(mem[count]); 527 527 return -EIO; 528 528 } 529 - pci_bus_write_config_dword (ibmphp_pci_bus, devfn, address[count], func->mem[count]->start); 529 + pci_bus_write_config_dword(ibmphp_pci_bus, devfn, address[count], func->mem[count]->start); 530 530 /* _______________________This is for debugging purposes only _______________________*/ 531 - debug ("b4 writing, start address is %x\n", func->mem[count]->start); 532 - pci_bus_read_config_dword (ibmphp_pci_bus, devfn, address[count], &bar[count]); 533 - 
debug ("after writing, the address is %x\n", bar[count]); 531 + debug("b4 writing, start address is %x\n", func->mem[count]->start); 532 + pci_bus_read_config_dword(ibmphp_pci_bus, devfn, address[count], &bar[count]); 533 + debug("after writing, the address is %x\n", bar[count]); 534 534 /* __________________________________________________________________________________*/ 535 535 536 536 if (bar[count] & PCI_BASE_ADDRESS_MEM_TYPE_64) { 537 537 /* takes up another dword */ 538 - debug ("inside mem 64 case, reg. mem, count %d\n", count); 538 + debug("inside mem 64 case, reg. mem, count %d\n", count); 539 539 count += 1; 540 540 /* on the 2nd dword, write all 0s, since we can't handle them n.e.ways */ 541 - pci_bus_write_config_dword (ibmphp_pci_bus, devfn, address[count], 0x00000000); 541 + pci_bus_write_config_dword(ibmphp_pci_bus, devfn, address[count], 0x00000000); 542 542 } 543 543 } 544 544 } /* end of mem */ 545 545 } /* end of for */ 546 546 547 547 func->bus = 0; /* To indicate that this is not a PPB */ 548 - pci_bus_read_config_byte (ibmphp_pci_bus, devfn, PCI_INTERRUPT_PIN, &irq); 548 + pci_bus_read_config_byte(ibmphp_pci_bus, devfn, PCI_INTERRUPT_PIN, &irq); 549 549 if ((irq > 0x00) && (irq < 0x05)) 550 - pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_INTERRUPT_LINE, func->irq[irq - 1]); 550 + pci_bus_write_config_byte(ibmphp_pci_bus, devfn, PCI_INTERRUPT_LINE, func->irq[irq - 1]); 551 551 552 - pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_CACHE_LINE_SIZE, CACHE); 553 - pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_LATENCY_TIMER, LATENCY); 552 + pci_bus_write_config_byte(ibmphp_pci_bus, devfn, PCI_CACHE_LINE_SIZE, CACHE); 553 + pci_bus_write_config_byte(ibmphp_pci_bus, devfn, PCI_LATENCY_TIMER, LATENCY); 554 554 555 - pci_bus_write_config_dword (ibmphp_pci_bus, devfn, PCI_ROM_ADDRESS, 0x00L); 556 - pci_bus_write_config_word (ibmphp_pci_bus, devfn, PCI_COMMAND, DEVICEENABLE); 555 + pci_bus_write_config_dword(ibmphp_pci_bus, devfn, 
PCI_ROM_ADDRESS, 0x00L); 556 + pci_bus_write_config_word(ibmphp_pci_bus, devfn, PCI_COMMAND, DEVICEENABLE); 557 557 558 558 return 0; 559 559 } ··· 563 563 * Parameters: pci_func 564 564 * Returns: 565 565 ******************************************************************************/ 566 - static int configure_bridge (struct pci_func **func_passed, u8 slotno) 566 + static int configure_bridge(struct pci_func **func_passed, u8 slotno) 567 567 { 568 568 int count; 569 569 int i; ··· 597 597 u8 irq; 598 598 int retval; 599 599 600 - debug ("%s - enter\n", __func__); 600 + debug("%s - enter\n", __func__); 601 601 602 602 devfn = PCI_DEVFN(func->function, func->device); 603 603 ibmphp_pci_bus->number = func->busno; ··· 606 606 * behind it 607 607 */ 608 608 609 - pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_PRIMARY_BUS, func->busno); 609 + pci_bus_write_config_byte(ibmphp_pci_bus, devfn, PCI_PRIMARY_BUS, func->busno); 610 610 611 611 /* _____________________For debugging purposes only __________________________ 612 - pci_bus_config_byte (ibmphp_pci_bus, devfn, PCI_PRIMARY_BUS, &pri_number); 613 - debug ("primary # written into the bridge is %x\n", pri_number); 612 + pci_bus_config_byte(ibmphp_pci_bus, devfn, PCI_PRIMARY_BUS, &pri_number); 613 + debug("primary # written into the bridge is %x\n", pri_number); 614 614 ___________________________________________________________________________*/ 615 615 616 616 /* in EBDA, only get allocated 1 additional bus # per slot */ 617 - sec_number = find_sec_number (func->busno, slotno); 617 + sec_number = find_sec_number(func->busno, slotno); 618 618 if (sec_number == 0xff) { 619 - err ("cannot allocate secondary bus number for the bridged device\n"); 619 + err("cannot allocate secondary bus number for the bridged device\n"); 620 620 return -EINVAL; 621 621 } 622 622 623 - debug ("after find_sec_number, the number we got is %x\n", sec_number); 624 - debug ("AFTER FIND_SEC_NUMBER, func->busno IS %x\n", func->busno); 623 + 
debug("after find_sec_number, the number we got is %x\n", sec_number); 624 + debug("AFTER FIND_SEC_NUMBER, func->busno IS %x\n", func->busno); 625 625 626 - pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_SECONDARY_BUS, sec_number); 626 + pci_bus_write_config_byte(ibmphp_pci_bus, devfn, PCI_SECONDARY_BUS, sec_number); 627 627 628 628 /* __________________For debugging purposes only __________________________________ 629 - pci_bus_read_config_byte (ibmphp_pci_bus, devfn, PCI_SECONDARY_BUS, &sec_number); 630 - debug ("sec_number after write/read is %x\n", sec_number); 629 + pci_bus_read_config_byte(ibmphp_pci_bus, devfn, PCI_SECONDARY_BUS, &sec_number); 630 + debug("sec_number after write/read is %x\n", sec_number); 631 631 ________________________________________________________________________________*/ 632 632 633 - pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_SUBORDINATE_BUS, sec_number); 633 + pci_bus_write_config_byte(ibmphp_pci_bus, devfn, PCI_SUBORDINATE_BUS, sec_number); 634 634 635 635 /* __________________For debugging purposes only ____________________________________ 636 - pci_bus_read_config_byte (ibmphp_pci_bus, devfn, PCI_SUBORDINATE_BUS, &sec_number); 637 - debug ("subordinate number after write/read is %x\n", sec_number); 636 + pci_bus_read_config_byte(ibmphp_pci_bus, devfn, PCI_SUBORDINATE_BUS, &sec_number); 637 + debug("subordinate number after write/read is %x\n", sec_number); 638 638 __________________________________________________________________________________*/ 639 639 640 - pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_CACHE_LINE_SIZE, CACHE); 641 - pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_LATENCY_TIMER, LATENCY); 642 - pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_SEC_LATENCY_TIMER, LATENCY); 640 + pci_bus_write_config_byte(ibmphp_pci_bus, devfn, PCI_CACHE_LINE_SIZE, CACHE); 641 + pci_bus_write_config_byte(ibmphp_pci_bus, devfn, PCI_LATENCY_TIMER, LATENCY); 642 + 
pci_bus_write_config_byte(ibmphp_pci_bus, devfn, PCI_SEC_LATENCY_TIMER, LATENCY); 643 643 644 - debug ("func->busno is %x\n", func->busno); 645 - debug ("sec_number after writing is %x\n", sec_number); 644 + debug("func->busno is %x\n", func->busno); 645 + debug("sec_number after writing is %x\n", sec_number); 646 646 647 647 648 648 /* !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! ··· 652 652 653 653 /* First we need to allocate mem/io for the bridge itself in case it needs it */ 654 654 for (count = 0; address[count]; count++) { /* for 2 BARs */ 655 - pci_bus_write_config_dword (ibmphp_pci_bus, devfn, address[count], 0xFFFFFFFF); 656 - pci_bus_read_config_dword (ibmphp_pci_bus, devfn, address[count], &bar[count]); 655 + pci_bus_write_config_dword(ibmphp_pci_bus, devfn, address[count], 0xFFFFFFFF); 656 + pci_bus_read_config_dword(ibmphp_pci_bus, devfn, address[count], &bar[count]); 657 657 658 658 if (!bar[count]) { 659 659 /* This BAR is not implemented */ 660 - debug ("so we come here then, eh?, count = %d\n", count); 660 + debug("so we come here then, eh?, count = %d\n", count); 661 661 continue; 662 662 } 663 663 // tmp_bar = bar[count]; 664 664 665 - debug ("Bar %d wants %x\n", count, bar[count]); 665 + debug("Bar %d wants %x\n", count, bar[count]); 666 666 667 667 if (bar[count] & PCI_BASE_ADDRESS_SPACE_IO) { 668 668 /* This is IO */ 669 669 len[count] = bar[count] & 0xFFFFFFFC; 670 670 len[count] = ~len[count] + 1; 671 671 672 - debug ("len[count] in IO = %x\n", len[count]); 672 + debug("len[count] in IO = %x\n", len[count]); 673 673 674 674 bus_io[count] = kzalloc(sizeof(struct resource_node), GFP_KERNEL); 675 675 676 676 if (!bus_io[count]) { 677 - err ("out of system memory\n"); 677 + err("out of system memory\n"); 678 678 retval = -ENOMEM; 679 679 goto error; 680 680 } ··· 683 683 bus_io[count]->devfunc = PCI_DEVFN(func->device, 684 684 func->function); 685 685 bus_io[count]->len = len[count]; 686 - if 
(ibmphp_check_resource (bus_io[count], 0) == 0) { 687 - ibmphp_add_resource (bus_io[count]); 686 + if (ibmphp_check_resource(bus_io[count], 0) == 0) { 687 + ibmphp_add_resource(bus_io[count]); 688 688 func->io[count] = bus_io[count]; 689 689 } else { 690 - err ("cannot allocate requested io for bus %x, device %x, len %x\n", 690 + err("cannot allocate requested io for bus %x, device %x, len %x\n", 691 691 func->busno, func->device, len[count]); 692 - kfree (bus_io[count]); 692 + kfree(bus_io[count]); 693 693 return -EIO; 694 694 } 695 695 696 - pci_bus_write_config_dword (ibmphp_pci_bus, devfn, address[count], func->io[count]->start); 696 + pci_bus_write_config_dword(ibmphp_pci_bus, devfn, address[count], func->io[count]->start); 697 697 698 698 } else { 699 699 /* This is Memory */ ··· 702 702 len[count] = bar[count] & 0xFFFFFFF0; 703 703 len[count] = ~len[count] + 1; 704 704 705 - debug ("len[count] in PFMEM = %x\n", len[count]); 705 + debug("len[count] in PFMEM = %x\n", len[count]); 706 706 707 707 bus_pfmem[count] = kzalloc(sizeof(struct resource_node), GFP_KERNEL); 708 708 if (!bus_pfmem[count]) { 709 - err ("out of system memory\n"); 709 + err("out of system memory\n"); 710 710 retval = -ENOMEM; 711 711 goto error; 712 712 } ··· 716 716 func->function); 717 717 bus_pfmem[count]->len = len[count]; 718 718 bus_pfmem[count]->fromMem = 0; 719 - if (ibmphp_check_resource (bus_pfmem[count], 0) == 0) { 720 - ibmphp_add_resource (bus_pfmem[count]); 719 + if (ibmphp_check_resource(bus_pfmem[count], 0) == 0) { 720 + ibmphp_add_resource(bus_pfmem[count]); 721 721 func->pfmem[count] = bus_pfmem[count]; 722 722 } else { 723 723 mem_tmp = kzalloc(sizeof(*mem_tmp), GFP_KERNEL); 724 724 if (!mem_tmp) { 725 - err ("out of system memory\n"); 725 + err("out of system memory\n"); 726 726 retval = -ENOMEM; 727 727 goto error; 728 728 } ··· 730 730 mem_tmp->busno = bus_pfmem[count]->busno; 731 731 mem_tmp->devfunc = bus_pfmem[count]->devfunc; 732 732 mem_tmp->len = 
bus_pfmem[count]->len; 733 - if (ibmphp_check_resource (mem_tmp, 0) == 0) { 734 - ibmphp_add_resource (mem_tmp); 733 + if (ibmphp_check_resource(mem_tmp, 0) == 0) { 734 + ibmphp_add_resource(mem_tmp); 735 735 bus_pfmem[count]->fromMem = 1; 736 736 bus_pfmem[count]->rangeno = mem_tmp->rangeno; 737 - ibmphp_add_pfmem_from_mem (bus_pfmem[count]); 737 + ibmphp_add_pfmem_from_mem(bus_pfmem[count]); 738 738 func->pfmem[count] = bus_pfmem[count]; 739 739 } else { 740 - err ("cannot allocate requested pfmem for bus %x, device %x, len %x\n", 740 + err("cannot allocate requested pfmem for bus %x, device %x, len %x\n", 741 741 func->busno, func->device, len[count]); 742 - kfree (mem_tmp); 743 - kfree (bus_pfmem[count]); 742 + kfree(mem_tmp); 743 + kfree(bus_pfmem[count]); 744 744 return -EIO; 745 745 } 746 746 } 747 747 748 - pci_bus_write_config_dword (ibmphp_pci_bus, devfn, address[count], func->pfmem[count]->start); 748 + pci_bus_write_config_dword(ibmphp_pci_bus, devfn, address[count], func->pfmem[count]->start); 749 749 750 750 if (bar[count] & PCI_BASE_ADDRESS_MEM_TYPE_64) { 751 751 /* takes up another dword */ 752 752 count += 1; 753 753 /* on the 2nd dword, write all 0s, since we can't handle them n.e.ways */ 754 - pci_bus_write_config_dword (ibmphp_pci_bus, devfn, address[count], 0x00000000); 754 + pci_bus_write_config_dword(ibmphp_pci_bus, devfn, address[count], 0x00000000); 755 755 756 756 } 757 757 } else { ··· 759 759 len[count] = bar[count] & 0xFFFFFFF0; 760 760 len[count] = ~len[count] + 1; 761 761 762 - debug ("len[count] in Memory is %x\n", len[count]); 762 + debug("len[count] in Memory is %x\n", len[count]); 763 763 764 764 bus_mem[count] = kzalloc(sizeof(struct resource_node), GFP_KERNEL); 765 765 if (!bus_mem[count]) { 766 - err ("out of system memory\n"); 766 + err("out of system memory\n"); 767 767 retval = -ENOMEM; 768 768 goto error; 769 769 } ··· 772 772 bus_mem[count]->devfunc = PCI_DEVFN(func->device, 773 773 func->function); 774 774 
bus_mem[count]->len = len[count]; 775 - if (ibmphp_check_resource (bus_mem[count], 0) == 0) { 776 - ibmphp_add_resource (bus_mem[count]); 775 + if (ibmphp_check_resource(bus_mem[count], 0) == 0) { 776 + ibmphp_add_resource(bus_mem[count]); 777 777 func->mem[count] = bus_mem[count]; 778 778 } else { 779 - err ("cannot allocate requested mem for bus %x, device %x, len %x\n", 779 + err("cannot allocate requested mem for bus %x, device %x, len %x\n", 780 780 func->busno, func->device, len[count]); 781 - kfree (bus_mem[count]); 781 + kfree(bus_mem[count]); 782 782 return -EIO; 783 783 } 784 784 785 - pci_bus_write_config_dword (ibmphp_pci_bus, devfn, address[count], func->mem[count]->start); 785 + pci_bus_write_config_dword(ibmphp_pci_bus, devfn, address[count], func->mem[count]->start); 786 786 787 787 if (bar[count] & PCI_BASE_ADDRESS_MEM_TYPE_64) { 788 788 /* takes up another dword */ 789 789 count += 1; 790 790 /* on the 2nd dword, write all 0s, since we can't handle them n.e.ways */ 791 - pci_bus_write_config_dword (ibmphp_pci_bus, devfn, address[count], 0x00000000); 791 + pci_bus_write_config_dword(ibmphp_pci_bus, devfn, address[count], 0x00000000); 792 792 793 793 } 794 794 } ··· 796 796 } /* end of for */ 797 797 798 798 /* Now need to see how much space the devices behind the bridge needed */ 799 - amount_needed = scan_behind_bridge (func, sec_number); 799 + amount_needed = scan_behind_bridge(func, sec_number); 800 800 if (amount_needed == NULL) 801 801 return -ENOMEM; 802 802 803 803 ibmphp_pci_bus->number = func->busno; 804 - debug ("after coming back from scan_behind_bridge\n"); 805 - debug ("amount_needed->not_correct = %x\n", amount_needed->not_correct); 806 - debug ("amount_needed->io = %x\n", amount_needed->io); 807 - debug ("amount_needed->mem = %x\n", amount_needed->mem); 808 - debug ("amount_needed->pfmem = %x\n", amount_needed->pfmem); 804 + debug("after coming back from scan_behind_bridge\n"); 805 + debug("amount_needed->not_correct = %x\n", 
amount_needed->not_correct); 806 + debug("amount_needed->io = %x\n", amount_needed->io); 807 + debug("amount_needed->mem = %x\n", amount_needed->mem); 808 + debug("amount_needed->pfmem = %x\n", amount_needed->pfmem); 809 809 810 810 if (amount_needed->not_correct) { 811 - debug ("amount_needed is not correct\n"); 811 + debug("amount_needed is not correct\n"); 812 812 for (count = 0; address[count]; count++) { 813 813 /* for 2 BARs */ 814 814 if (bus_io[count]) { 815 - ibmphp_remove_resource (bus_io[count]); 815 + ibmphp_remove_resource(bus_io[count]); 816 816 func->io[count] = NULL; 817 817 } else if (bus_pfmem[count]) { 818 - ibmphp_remove_resource (bus_pfmem[count]); 818 + ibmphp_remove_resource(bus_pfmem[count]); 819 819 func->pfmem[count] = NULL; 820 820 } else if (bus_mem[count]) { 821 - ibmphp_remove_resource (bus_mem[count]); 821 + ibmphp_remove_resource(bus_mem[count]); 822 822 func->mem[count] = NULL; 823 823 } 824 824 } 825 - kfree (amount_needed); 825 + kfree(amount_needed); 826 826 return -ENODEV; 827 827 } 828 828 829 829 if (!amount_needed->io) { 830 - debug ("it doesn't want IO?\n"); 830 + debug("it doesn't want IO?\n"); 831 831 flag_io = 1; 832 832 } else { 833 - debug ("it wants %x IO behind the bridge\n", amount_needed->io); 833 + debug("it wants %x IO behind the bridge\n", amount_needed->io); 834 834 io = kzalloc(sizeof(*io), GFP_KERNEL); 835 835 836 836 if (!io) { 837 - err ("out of system memory\n"); 837 + err("out of system memory\n"); 838 838 retval = -ENOMEM; 839 839 goto error; 840 840 } ··· 842 842 io->busno = func->busno; 843 843 io->devfunc = PCI_DEVFN(func->device, func->function); 844 844 io->len = amount_needed->io; 845 - if (ibmphp_check_resource (io, 1) == 0) { 846 - debug ("were we able to add io\n"); 847 - ibmphp_add_resource (io); 845 + if (ibmphp_check_resource(io, 1) == 0) { 846 + debug("were we able to add io\n"); 847 + ibmphp_add_resource(io); 848 848 flag_io = 1; 849 849 } 850 850 } 851 851 852 852 if (!amount_needed->mem) { 
853 - debug ("it doesn't want n.e.memory?\n"); 853 + debug("it doesn't want n.e.memory?\n"); 854 854 flag_mem = 1; 855 855 } else { 856 - debug ("it wants %x memory behind the bridge\n", amount_needed->mem); 856 + debug("it wants %x memory behind the bridge\n", amount_needed->mem); 857 857 mem = kzalloc(sizeof(*mem), GFP_KERNEL); 858 858 if (!mem) { 859 - err ("out of system memory\n"); 859 + err("out of system memory\n"); 860 860 retval = -ENOMEM; 861 861 goto error; 862 862 } ··· 864 864 mem->busno = func->busno; 865 865 mem->devfunc = PCI_DEVFN(func->device, func->function); 866 866 mem->len = amount_needed->mem; 867 - if (ibmphp_check_resource (mem, 1) == 0) { 868 - ibmphp_add_resource (mem); 867 + if (ibmphp_check_resource(mem, 1) == 0) { 868 + ibmphp_add_resource(mem); 869 869 flag_mem = 1; 870 - debug ("were we able to add mem\n"); 870 + debug("were we able to add mem\n"); 871 871 } 872 872 } 873 873 874 874 if (!amount_needed->pfmem) { 875 - debug ("it doesn't want n.e.pfmem mem?\n"); 875 + debug("it doesn't want n.e.pfmem mem?\n"); 876 876 flag_pfmem = 1; 877 877 } else { 878 - debug ("it wants %x pfmemory behind the bridge\n", amount_needed->pfmem); 878 + debug("it wants %x pfmemory behind the bridge\n", amount_needed->pfmem); 879 879 pfmem = kzalloc(sizeof(*pfmem), GFP_KERNEL); 880 880 if (!pfmem) { 881 - err ("out of system memory\n"); 881 + err("out of system memory\n"); 882 882 retval = -ENOMEM; 883 883 goto error; 884 884 } ··· 887 887 pfmem->devfunc = PCI_DEVFN(func->device, func->function); 888 888 pfmem->len = amount_needed->pfmem; 889 889 pfmem->fromMem = 0; 890 - if (ibmphp_check_resource (pfmem, 1) == 0) { 891 - ibmphp_add_resource (pfmem); 890 + if (ibmphp_check_resource(pfmem, 1) == 0) { 891 + ibmphp_add_resource(pfmem); 892 892 flag_pfmem = 1; 893 893 } else { 894 894 mem_tmp = kzalloc(sizeof(*mem_tmp), GFP_KERNEL); 895 895 if (!mem_tmp) { 896 - err ("out of system memory\n"); 896 + err("out of system memory\n"); 897 897 retval = -ENOMEM; 
898 898 goto error; 899 899 } ··· 901 901 mem_tmp->busno = pfmem->busno; 902 902 mem_tmp->devfunc = pfmem->devfunc; 903 903 mem_tmp->len = pfmem->len; 904 - if (ibmphp_check_resource (mem_tmp, 1) == 0) { 905 - ibmphp_add_resource (mem_tmp); 904 + if (ibmphp_check_resource(mem_tmp, 1) == 0) { 905 + ibmphp_add_resource(mem_tmp); 906 906 pfmem->fromMem = 1; 907 907 pfmem->rangeno = mem_tmp->rangeno; 908 - ibmphp_add_pfmem_from_mem (pfmem); 908 + ibmphp_add_pfmem_from_mem(pfmem); 909 909 flag_pfmem = 1; 910 910 } 911 911 } 912 912 } 913 913 914 - debug ("b4 if (flag_io && flag_mem && flag_pfmem)\n"); 915 - debug ("flag_io = %x, flag_mem = %x, flag_pfmem = %x\n", flag_io, flag_mem, flag_pfmem); 914 + debug("b4 if (flag_io && flag_mem && flag_pfmem)\n"); 915 + debug("flag_io = %x, flag_mem = %x, flag_pfmem = %x\n", flag_io, flag_mem, flag_pfmem); 916 916 917 917 if (flag_io && flag_mem && flag_pfmem) { 918 918 /* If on bootup, there was a bridged card in this slot, ··· 920 920 * back again, there's no way for us to remove the bus 921 921 * struct, so no need to kmalloc, can use existing node 922 922 */ 923 - bus = ibmphp_find_res_bus (sec_number); 923 + bus = ibmphp_find_res_bus(sec_number); 924 924 if (!bus) { 925 925 bus = kzalloc(sizeof(*bus), GFP_KERNEL); 926 926 if (!bus) { 927 - err ("out of system memory\n"); 927 + err("out of system memory\n"); 928 928 retval = -ENOMEM; 929 929 goto error; 930 930 } 931 931 bus->busno = sec_number; 932 - debug ("b4 adding new bus\n"); 933 - rc = add_new_bus (bus, io, mem, pfmem, func->busno); 932 + debug("b4 adding new bus\n"); 933 + rc = add_new_bus(bus, io, mem, pfmem, func->busno); 934 934 } else if (!(bus->rangeIO) && !(bus->rangeMem) && !(bus->rangePFMem)) 935 - rc = add_new_bus (bus, io, mem, pfmem, 0xFF); 935 + rc = add_new_bus(bus, io, mem, pfmem, 0xFF); 936 936 else { 937 - err ("expected bus structure not empty?\n"); 937 + err("expected bus structure not empty?\n"); 938 938 retval = -EIO; 939 939 goto error; 940 940 } 
941 941 if (rc) { 942 942 if (rc == -ENOMEM) { 943 - ibmphp_remove_bus (bus, func->busno); 944 - kfree (amount_needed); 943 + ibmphp_remove_bus(bus, func->busno); 944 + kfree(amount_needed); 945 945 return rc; 946 946 } 947 947 retval = rc; 948 948 goto error; 949 949 } 950 - pci_bus_read_config_byte (ibmphp_pci_bus, devfn, PCI_IO_BASE, &io_base); 951 - pci_bus_read_config_word (ibmphp_pci_bus, devfn, PCI_PREF_MEMORY_BASE, &pfmem_base); 950 + pci_bus_read_config_byte(ibmphp_pci_bus, devfn, PCI_IO_BASE, &io_base); 951 + pci_bus_read_config_word(ibmphp_pci_bus, devfn, PCI_PREF_MEMORY_BASE, &pfmem_base); 952 952 953 953 if ((io_base & PCI_IO_RANGE_TYPE_MASK) == PCI_IO_RANGE_TYPE_32) { 954 - debug ("io 32\n"); 954 + debug("io 32\n"); 955 955 need_io_upper = 1; 956 956 } 957 957 if ((pfmem_base & PCI_PREF_RANGE_TYPE_MASK) == PCI_PREF_RANGE_TYPE_64) { 958 - debug ("pfmem 64\n"); 958 + debug("pfmem 64\n"); 959 959 need_pfmem_upper = 1; 960 960 } 961 961 962 962 if (bus->noIORanges) { 963 - pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_IO_BASE, 0x00 | bus->rangeIO->start >> 8); 964 - pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_IO_LIMIT, 0x00 | bus->rangeIO->end >> 8); 963 + pci_bus_write_config_byte(ibmphp_pci_bus, devfn, PCI_IO_BASE, 0x00 | bus->rangeIO->start >> 8); 964 + pci_bus_write_config_byte(ibmphp_pci_bus, devfn, PCI_IO_LIMIT, 0x00 | bus->rangeIO->end >> 8); 965 965 966 966 /* _______________This is for debugging purposes only ____________________ 967 - pci_bus_read_config_byte (ibmphp_pci_bus, devfn, PCI_IO_BASE, &temp); 968 - debug ("io_base = %x\n", (temp & PCI_IO_RANGE_TYPE_MASK) << 8); 969 - pci_bus_read_config_byte (ibmphp_pci_bus, devfn, PCI_IO_LIMIT, &temp); 970 - debug ("io_limit = %x\n", (temp & PCI_IO_RANGE_TYPE_MASK) << 8); 967 + pci_bus_read_config_byte(ibmphp_pci_bus, devfn, PCI_IO_BASE, &temp); 968 + debug("io_base = %x\n", (temp & PCI_IO_RANGE_TYPE_MASK) << 8); 969 + pci_bus_read_config_byte(ibmphp_pci_bus, devfn, PCI_IO_LIMIT, 
&temp); 970 + debug("io_limit = %x\n", (temp & PCI_IO_RANGE_TYPE_MASK) << 8); 971 971 ________________________________________________________________________*/ 972 972 973 973 if (need_io_upper) { /* since can't support n.e.ways */ 974 - pci_bus_write_config_word (ibmphp_pci_bus, devfn, PCI_IO_BASE_UPPER16, 0x0000); 975 - pci_bus_write_config_word (ibmphp_pci_bus, devfn, PCI_IO_LIMIT_UPPER16, 0x0000); 974 + pci_bus_write_config_word(ibmphp_pci_bus, devfn, PCI_IO_BASE_UPPER16, 0x0000); 975 + pci_bus_write_config_word(ibmphp_pci_bus, devfn, PCI_IO_LIMIT_UPPER16, 0x0000); 976 976 } 977 977 } else { 978 - pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_IO_BASE, 0x00); 979 - pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_IO_LIMIT, 0x00); 978 + pci_bus_write_config_byte(ibmphp_pci_bus, devfn, PCI_IO_BASE, 0x00); 979 + pci_bus_write_config_byte(ibmphp_pci_bus, devfn, PCI_IO_LIMIT, 0x00); 980 980 } 981 981 982 982 if (bus->noMemRanges) { 983 - pci_bus_write_config_word (ibmphp_pci_bus, devfn, PCI_MEMORY_BASE, 0x0000 | bus->rangeMem->start >> 16); 984 - pci_bus_write_config_word (ibmphp_pci_bus, devfn, PCI_MEMORY_LIMIT, 0x0000 | bus->rangeMem->end >> 16); 983 + pci_bus_write_config_word(ibmphp_pci_bus, devfn, PCI_MEMORY_BASE, 0x0000 | bus->rangeMem->start >> 16); 984 + pci_bus_write_config_word(ibmphp_pci_bus, devfn, PCI_MEMORY_LIMIT, 0x0000 | bus->rangeMem->end >> 16); 985 985 986 986 /* ____________________This is for debugging purposes only ________________________ 987 - pci_bus_read_config_word (ibmphp_pci_bus, devfn, PCI_MEMORY_BASE, &temp); 988 - debug ("mem_base = %x\n", (temp & PCI_MEMORY_RANGE_TYPE_MASK) << 16); 989 - pci_bus_read_config_word (ibmphp_pci_bus, devfn, PCI_MEMORY_LIMIT, &temp); 990 - debug ("mem_limit = %x\n", (temp & PCI_MEMORY_RANGE_TYPE_MASK) << 16); 987 + pci_bus_read_config_word(ibmphp_pci_bus, devfn, PCI_MEMORY_BASE, &temp); 988 + debug("mem_base = %x\n", (temp & PCI_MEMORY_RANGE_TYPE_MASK) << 16); 989 + 
pci_bus_read_config_word(ibmphp_pci_bus, devfn, PCI_MEMORY_LIMIT, &temp); 990 + debug("mem_limit = %x\n", (temp & PCI_MEMORY_RANGE_TYPE_MASK) << 16); 991 991 __________________________________________________________________________________*/ 992 992 993 993 } else { 994 - pci_bus_write_config_word (ibmphp_pci_bus, devfn, PCI_MEMORY_BASE, 0xffff); 995 - pci_bus_write_config_word (ibmphp_pci_bus, devfn, PCI_MEMORY_LIMIT, 0x0000); 994 + pci_bus_write_config_word(ibmphp_pci_bus, devfn, PCI_MEMORY_BASE, 0xffff); 995 + pci_bus_write_config_word(ibmphp_pci_bus, devfn, PCI_MEMORY_LIMIT, 0x0000); 996 996 } 997 997 if (bus->noPFMemRanges) { 998 - pci_bus_write_config_word (ibmphp_pci_bus, devfn, PCI_PREF_MEMORY_BASE, 0x0000 | bus->rangePFMem->start >> 16); 999 - pci_bus_write_config_word (ibmphp_pci_bus, devfn, PCI_PREF_MEMORY_LIMIT, 0x0000 | bus->rangePFMem->end >> 16); 998 + pci_bus_write_config_word(ibmphp_pci_bus, devfn, PCI_PREF_MEMORY_BASE, 0x0000 | bus->rangePFMem->start >> 16); 999 + pci_bus_write_config_word(ibmphp_pci_bus, devfn, PCI_PREF_MEMORY_LIMIT, 0x0000 | bus->rangePFMem->end >> 16); 1000 1000 1001 1001 /* __________________________This is for debugging purposes only _______________________ 1002 - pci_bus_read_config_word (ibmphp_pci_bus, devfn, PCI_PREF_MEMORY_BASE, &temp); 1003 - debug ("pfmem_base = %x", (temp & PCI_MEMORY_RANGE_TYPE_MASK) << 16); 1004 - pci_bus_read_config_word (ibmphp_pci_bus, devfn, PCI_PREF_MEMORY_LIMIT, &temp); 1005 - debug ("pfmem_limit = %x\n", (temp & PCI_MEMORY_RANGE_TYPE_MASK) << 16); 1002 + pci_bus_read_config_word(ibmphp_pci_bus, devfn, PCI_PREF_MEMORY_BASE, &temp); 1003 + debug("pfmem_base = %x", (temp & PCI_MEMORY_RANGE_TYPE_MASK) << 16); 1004 + pci_bus_read_config_word(ibmphp_pci_bus, devfn, PCI_PREF_MEMORY_LIMIT, &temp); 1005 + debug("pfmem_limit = %x\n", (temp & PCI_MEMORY_RANGE_TYPE_MASK) << 16); 1006 1006 ______________________________________________________________________________________*/ 1007 1007 1008 1008 if 
(need_pfmem_upper) { /* since can't support n.e.ways */ 1009 - pci_bus_write_config_dword (ibmphp_pci_bus, devfn, PCI_PREF_BASE_UPPER32, 0x00000000); 1010 - pci_bus_write_config_dword (ibmphp_pci_bus, devfn, PCI_PREF_LIMIT_UPPER32, 0x00000000); 1009 + pci_bus_write_config_dword(ibmphp_pci_bus, devfn, PCI_PREF_BASE_UPPER32, 0x00000000); 1010 + pci_bus_write_config_dword(ibmphp_pci_bus, devfn, PCI_PREF_LIMIT_UPPER32, 0x00000000); 1011 1011 } 1012 1012 } else { 1013 - pci_bus_write_config_word (ibmphp_pci_bus, devfn, PCI_PREF_MEMORY_BASE, 0xffff); 1014 - pci_bus_write_config_word (ibmphp_pci_bus, devfn, PCI_PREF_MEMORY_LIMIT, 0x0000); 1013 + pci_bus_write_config_word(ibmphp_pci_bus, devfn, PCI_PREF_MEMORY_BASE, 0xffff); 1014 + pci_bus_write_config_word(ibmphp_pci_bus, devfn, PCI_PREF_MEMORY_LIMIT, 0x0000); 1015 1015 } 1016 1016 1017 - debug ("b4 writing control information\n"); 1017 + debug("b4 writing control information\n"); 1018 1018 1019 - pci_bus_read_config_byte (ibmphp_pci_bus, devfn, PCI_INTERRUPT_PIN, &irq); 1019 + pci_bus_read_config_byte(ibmphp_pci_bus, devfn, PCI_INTERRUPT_PIN, &irq); 1020 1020 if ((irq > 0x00) && (irq < 0x05)) 1021 - pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_INTERRUPT_LINE, func->irq[irq - 1]); 1021 + pci_bus_write_config_byte(ibmphp_pci_bus, devfn, PCI_INTERRUPT_LINE, func->irq[irq - 1]); 1022 1022 /* 1023 - pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_BRIDGE_CONTROL, ctrl); 1024 - pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_BRIDGE_CONTROL, PCI_BRIDGE_CTL_PARITY); 1025 - pci_bus_write_config_byte (ibmphp_pci_bus, devfn, PCI_BRIDGE_CONTROL, PCI_BRIDGE_CTL_SERR); 1023 + pci_bus_write_config_byte(ibmphp_pci_bus, devfn, PCI_BRIDGE_CONTROL, ctrl); 1024 + pci_bus_write_config_byte(ibmphp_pci_bus, devfn, PCI_BRIDGE_CONTROL, PCI_BRIDGE_CTL_PARITY); 1025 + pci_bus_write_config_byte(ibmphp_pci_bus, devfn, PCI_BRIDGE_CONTROL, PCI_BRIDGE_CTL_SERR); 1026 1026 */ 1027 1027 1028 - pci_bus_write_config_word 
(ibmphp_pci_bus, devfn, PCI_COMMAND, DEVICEENABLE); 1029 - pci_bus_write_config_word (ibmphp_pci_bus, devfn, PCI_BRIDGE_CONTROL, 0x07); 1028 + pci_bus_write_config_word(ibmphp_pci_bus, devfn, PCI_COMMAND, DEVICEENABLE); 1029 + pci_bus_write_config_word(ibmphp_pci_bus, devfn, PCI_BRIDGE_CONTROL, 0x07); 1030 1030 for (i = 0; i < 32; i++) { 1031 1031 if (amount_needed->devices[i]) { 1032 - debug ("device where devices[i] is 1 = %x\n", i); 1032 + debug("device where devices[i] is 1 = %x\n", i); 1033 1033 func->devices[i] = 1; 1034 1034 } 1035 1035 } 1036 1036 func->bus = 1; /* For unconfiguring, to indicate it's PPB */ 1037 1037 func_passed = &func; 1038 - debug ("func->busno b4 returning is %x\n", func->busno); 1039 - debug ("func->busno b4 returning in the other structure is %x\n", (*func_passed)->busno); 1040 - kfree (amount_needed); 1038 + debug("func->busno b4 returning is %x\n", func->busno); 1039 + debug("func->busno b4 returning in the other structure is %x\n", (*func_passed)->busno); 1040 + kfree(amount_needed); 1041 1041 return 0; 1042 1042 } else { 1043 - err ("Configuring bridge was unsuccessful...\n"); 1043 + err("Configuring bridge was unsuccessful...\n"); 1044 1044 mem_tmp = NULL; 1045 1045 retval = -EIO; 1046 1046 goto error; ··· 1049 1049 error: 1050 1050 kfree(amount_needed); 1051 1051 if (pfmem) 1052 - ibmphp_remove_resource (pfmem); 1052 + ibmphp_remove_resource(pfmem); 1053 1053 if (io) 1054 - ibmphp_remove_resource (io); 1054 + ibmphp_remove_resource(io); 1055 1055 if (mem) 1056 - ibmphp_remove_resource (mem); 1056 + ibmphp_remove_resource(mem); 1057 1057 for (i = 0; i < 2; i++) { /* for 2 BARs */ 1058 1058 if (bus_io[i]) { 1059 - ibmphp_remove_resource (bus_io[i]); 1059 + ibmphp_remove_resource(bus_io[i]); 1060 1060 func->io[i] = NULL; 1061 1061 } else if (bus_pfmem[i]) { 1062 - ibmphp_remove_resource (bus_pfmem[i]); 1062 + ibmphp_remove_resource(bus_pfmem[i]); 1063 1063 func->pfmem[i] = NULL; 1064 1064 } else if (bus_mem[i]) { 1065 - 
ibmphp_remove_resource (bus_mem[i]); 1065 + ibmphp_remove_resource(bus_mem[i]); 1066 1066 func->mem[i] = NULL; 1067 1067 } 1068 1068 } ··· 1075 1075 * Input: bridge function 1076 1076 * Output: amount of resources needed 1077 1077 *****************************************************************************/ 1078 - static struct res_needed *scan_behind_bridge (struct pci_func *func, u8 busno) 1078 + static struct res_needed *scan_behind_bridge(struct pci_func *func, u8 busno) 1079 1079 { 1080 1080 int count, len[6]; 1081 1081 u16 vendor_id; ··· 1102 1102 1103 1103 ibmphp_pci_bus->number = busno; 1104 1104 1105 - debug ("the bus_no behind the bridge is %x\n", busno); 1106 - debug ("scanning devices behind the bridge...\n"); 1105 + debug("the bus_no behind the bridge is %x\n", busno); 1106 + debug("scanning devices behind the bridge...\n"); 1107 1107 for (device = 0; device < 32; device++) { 1108 1108 amount->devices[device] = 0; 1109 1109 for (function = 0; function < 8; function++) { 1110 1110 devfn = PCI_DEVFN(device, function); 1111 1111 1112 - pci_bus_read_config_word (ibmphp_pci_bus, devfn, PCI_VENDOR_ID, &vendor_id); 1112 + pci_bus_read_config_word(ibmphp_pci_bus, devfn, PCI_VENDOR_ID, &vendor_id); 1113 1113 1114 1114 if (vendor_id != PCI_VENDOR_ID_NOTVALID) { 1115 1115 /* found correct device!!! 
*/ 1116 1116 howmany++; 1117 1117 1118 - pci_bus_read_config_byte (ibmphp_pci_bus, devfn, PCI_HEADER_TYPE, &hdr_type); 1119 - pci_bus_read_config_dword (ibmphp_pci_bus, devfn, PCI_CLASS_REVISION, &class); 1118 + pci_bus_read_config_byte(ibmphp_pci_bus, devfn, PCI_HEADER_TYPE, &hdr_type); 1119 + pci_bus_read_config_dword(ibmphp_pci_bus, devfn, PCI_CLASS_REVISION, &class); 1120 1120 1121 - debug ("hdr_type behind the bridge is %x\n", hdr_type); 1122 - if (hdr_type & PCI_HEADER_TYPE_BRIDGE) { 1123 - err ("embedded bridges not supported for hot-plugging.\n"); 1121 + debug("hdr_type behind the bridge is %x\n", hdr_type); 1122 + if ((hdr_type & 0x7f) == PCI_HEADER_TYPE_BRIDGE) { 1123 + err("embedded bridges not supported for hot-plugging.\n"); 1124 1124 amount->not_correct = 1; 1125 1125 return amount; 1126 1126 } 1127 1127 1128 1128 class >>= 8; /* to take revision out, class = class.subclass.prog i/f */ 1129 1129 if (class == PCI_CLASS_NOT_DEFINED_VGA) { 1130 - err ("The device %x is VGA compatible and as is not supported for hot plugging. Please choose another device.\n", device); 1130 + err("The device %x is VGA compatible and as is not supported for hot plugging. Please choose another device.\n", device); 1131 1131 amount->not_correct = 1; 1132 1132 return amount; 1133 1133 } else if (class == PCI_CLASS_DISPLAY_VGA) { 1134 - err ("The device %x is not supported for hot plugging. Please choose another device.\n", device); 1134 + err("The device %x is not supported for hot plugging. 
Please choose another device.\n", device); 1135 1135 amount->not_correct = 1; 1136 1136 return amount; 1137 1137 } ··· 1141 1141 for (count = 0; address[count]; count++) { 1142 1142 /* for 6 BARs */ 1143 1143 /* 1144 - pci_bus_read_config_byte (ibmphp_pci_bus, devfn, address[count], &tmp); 1144 + pci_bus_read_config_byte(ibmphp_pci_bus, devfn, address[count], &tmp); 1145 1145 if (tmp & 0x01) // IO 1146 - pci_bus_write_config_dword (ibmphp_pci_bus, devfn, address[count], 0xFFFFFFFD); 1146 + pci_bus_write_config_dword(ibmphp_pci_bus, devfn, address[count], 0xFFFFFFFD); 1147 1147 else // MEMORY 1148 - pci_bus_write_config_dword (ibmphp_pci_bus, devfn, address[count], 0xFFFFFFFF); 1148 + pci_bus_write_config_dword(ibmphp_pci_bus, devfn, address[count], 0xFFFFFFFF); 1149 1149 */ 1150 - pci_bus_write_config_dword (ibmphp_pci_bus, devfn, address[count], 0xFFFFFFFF); 1151 - pci_bus_read_config_dword (ibmphp_pci_bus, devfn, address[count], &bar[count]); 1150 + pci_bus_write_config_dword(ibmphp_pci_bus, devfn, address[count], 0xFFFFFFFF); 1151 + pci_bus_read_config_dword(ibmphp_pci_bus, devfn, address[count], &bar[count]); 1152 1152 1153 - debug ("what is bar[count]? %x, count = %d\n", bar[count], count); 1153 + debug("what is bar[count]? 
%x, count = %d\n", bar[count], count); 1154 1154 1155 1155 if (!bar[count]) /* This BAR is not implemented */ 1156 1156 continue; 1157 1157 1158 1158 //tmp_bar = bar[count]; 1159 1159 1160 - debug ("count %d device %x function %x wants %x resources\n", count, device, function, bar[count]); 1160 + debug("count %d device %x function %x wants %x resources\n", count, device, function, bar[count]); 1161 1161 1162 1162 if (bar[count] & PCI_BASE_ADDRESS_SPACE_IO) { 1163 1163 /* This is IO */ ··· 1211 1211 * Change: we also call these functions even if we configured the card ourselves (i.e., not 1212 1212 * the bootup case), since it should work same way 1213 1213 */ 1214 - static int unconfigure_boot_device (u8 busno, u8 device, u8 function) 1214 + static int unconfigure_boot_device(u8 busno, u8 device, u8 function) 1215 1215 { 1216 1216 u32 start_address; 1217 1217 u32 address[] = { ··· 1234 1234 u32 tmp_address; 1235 1235 unsigned int devfn; 1236 1236 1237 - debug ("%s - enter\n", __func__); 1237 + debug("%s - enter\n", __func__); 1238 1238 1239 - bus = ibmphp_find_res_bus (busno); 1239 + bus = ibmphp_find_res_bus(busno); 1240 1240 if (!bus) { 1241 - debug ("cannot find corresponding bus.\n"); 1241 + debug("cannot find corresponding bus.\n"); 1242 1242 return -EINVAL; 1243 1243 } 1244 1244 1245 1245 devfn = PCI_DEVFN(device, function); 1246 1246 ibmphp_pci_bus->number = busno; 1247 1247 for (count = 0; address[count]; count++) { /* for 6 BARs */ 1248 - pci_bus_read_config_dword (ibmphp_pci_bus, devfn, address[count], &start_address); 1248 + pci_bus_read_config_dword(ibmphp_pci_bus, devfn, address[count], &start_address); 1249 1249 1250 1250 /* We can do this here, b/c by that time the device driver of the card has been stopped */ 1251 1251 1252 - pci_bus_write_config_dword (ibmphp_pci_bus, devfn, address[count], 0xFFFFFFFF); 1253 - pci_bus_read_config_dword (ibmphp_pci_bus, devfn, address[count], &size); 1254 - pci_bus_write_config_dword (ibmphp_pci_bus, devfn, 
address[count], start_address); 1252 + pci_bus_write_config_dword(ibmphp_pci_bus, devfn, address[count], 0xFFFFFFFF); 1253 + pci_bus_read_config_dword(ibmphp_pci_bus, devfn, address[count], &size); 1254 + pci_bus_write_config_dword(ibmphp_pci_bus, devfn, address[count], start_address); 1255 1255 1256 - debug ("start_address is %x\n", start_address); 1257 - debug ("busno, device, function %x %x %x\n", busno, device, function); 1256 + debug("start_address is %x\n", start_address); 1257 + debug("busno, device, function %x %x %x\n", busno, device, function); 1258 1258 if (!size) { 1259 1259 /* This BAR is not implemented */ 1260 - debug ("is this bar no implemented?, count = %d\n", count); 1260 + debug("is this bar no implemented?, count = %d\n", count); 1261 1261 continue; 1262 1262 } 1263 1263 tmp_address = start_address; ··· 1267 1267 size = size & 0xFFFFFFFC; 1268 1268 size = ~size + 1; 1269 1269 end_address = start_address + size - 1; 1270 - if (ibmphp_find_resource (bus, start_address, &io, IO) < 0) { 1271 - err ("cannot find corresponding IO resource to remove\n"); 1270 + if (ibmphp_find_resource(bus, start_address, &io, IO) < 0) { 1271 + err("cannot find corresponding IO resource to remove\n"); 1272 1272 return -EIO; 1273 1273 } 1274 - debug ("io->start = %x\n", io->start); 1274 + debug("io->start = %x\n", io->start); 1275 1275 temp_end = io->end; 1276 1276 start_address = io->end + 1; 1277 - ibmphp_remove_resource (io); 1277 + ibmphp_remove_resource(io); 1278 1278 /* This is needed b/c of the old I/O restrictions in the BIOS */ 1279 1279 while (temp_end < end_address) { 1280 - if (ibmphp_find_resource (bus, start_address, &io, IO) < 0) { 1281 - err ("cannot find corresponding IO resource to remove\n"); 1280 + if (ibmphp_find_resource(bus, start_address, &io, IO) < 0) { 1281 + err("cannot find corresponding IO resource to remove\n"); 1282 1282 return -EIO; 1283 1283 } 1284 - debug ("io->start = %x\n", io->start); 1284 + debug("io->start = %x\n", io->start); 
1285 1285 temp_end = io->end; 1286 1286 start_address = io->end + 1; 1287 - ibmphp_remove_resource (io); 1287 + ibmphp_remove_resource(io); 1288 1288 } 1289 1289 1290 1290 /* ????????? DO WE NEED TO WRITE ANYTHING INTO THE PCI CONFIG SPACE BACK ?????????? */ ··· 1292 1292 /* This is Memory */ 1293 1293 if (start_address & PCI_BASE_ADDRESS_MEM_PREFETCH) { 1294 1294 /* pfmem */ 1295 - debug ("start address of pfmem is %x\n", start_address); 1295 + debug("start address of pfmem is %x\n", start_address); 1296 1296 start_address &= PCI_BASE_ADDRESS_MEM_MASK; 1297 1297 1298 - if (ibmphp_find_resource (bus, start_address, &pfmem, PFMEM) < 0) { 1299 - err ("cannot find corresponding PFMEM resource to remove\n"); 1298 + if (ibmphp_find_resource(bus, start_address, &pfmem, PFMEM) < 0) { 1299 + err("cannot find corresponding PFMEM resource to remove\n"); 1300 1300 return -EIO; 1301 1301 } 1302 1302 if (pfmem) { 1303 - debug ("pfmem->start = %x\n", pfmem->start); 1303 + debug("pfmem->start = %x\n", pfmem->start); 1304 1304 1305 1305 ibmphp_remove_resource(pfmem); 1306 1306 } 1307 1307 } else { 1308 1308 /* regular memory */ 1309 - debug ("start address of mem is %x\n", start_address); 1309 + debug("start address of mem is %x\n", start_address); 1310 1310 start_address &= PCI_BASE_ADDRESS_MEM_MASK; 1311 1311 1312 - if (ibmphp_find_resource (bus, start_address, &mem, MEM) < 0) { 1313 - err ("cannot find corresponding MEM resource to remove\n"); 1312 + if (ibmphp_find_resource(bus, start_address, &mem, MEM) < 0) { 1313 + err("cannot find corresponding MEM resource to remove\n"); 1314 1314 return -EIO; 1315 1315 } 1316 1316 if (mem) { 1317 - debug ("mem->start = %x\n", mem->start); 1317 + debug("mem->start = %x\n", mem->start); 1318 1318 1319 1319 ibmphp_remove_resource(mem); 1320 1320 } ··· 1329 1329 return 0; 1330 1330 } 1331 1331 1332 - static int unconfigure_boot_bridge (u8 busno, u8 device, u8 function) 1332 + static int unconfigure_boot_bridge(u8 busno, u8 device, u8 
function) 1333 1333 { 1334 1334 int count; 1335 1335 int bus_no, pri_no, sub_no, sec_no = 0; ··· 1349 1349 devfn = PCI_DEVFN(device, function); 1350 1350 ibmphp_pci_bus->number = busno; 1351 1351 bus_no = (int) busno; 1352 - debug ("busno is %x\n", busno); 1353 - pci_bus_read_config_byte (ibmphp_pci_bus, devfn, PCI_PRIMARY_BUS, &pri_number); 1354 - debug ("%s - busno = %x, primary_number = %x\n", __func__, busno, pri_number); 1352 + debug("busno is %x\n", busno); 1353 + pci_bus_read_config_byte(ibmphp_pci_bus, devfn, PCI_PRIMARY_BUS, &pri_number); 1354 + debug("%s - busno = %x, primary_number = %x\n", __func__, busno, pri_number); 1355 1355 1356 - pci_bus_read_config_byte (ibmphp_pci_bus, devfn, PCI_SECONDARY_BUS, &sec_number); 1357 - debug ("sec_number is %x\n", sec_number); 1356 + pci_bus_read_config_byte(ibmphp_pci_bus, devfn, PCI_SECONDARY_BUS, &sec_number); 1357 + debug("sec_number is %x\n", sec_number); 1358 1358 sec_no = (int) sec_number; 1359 1359 pri_no = (int) pri_number; 1360 1360 if (pri_no != bus_no) { 1361 - err ("primary numbers in our structures and pci config space don't match.\n"); 1361 + err("primary numbers in our structures and pci config space don't match.\n"); 1362 1362 return -EINVAL; 1363 1363 } 1364 1364 1365 - pci_bus_read_config_byte (ibmphp_pci_bus, devfn, PCI_SUBORDINATE_BUS, &sub_number); 1365 + pci_bus_read_config_byte(ibmphp_pci_bus, devfn, PCI_SUBORDINATE_BUS, &sub_number); 1366 1366 sub_no = (int) sub_number; 1367 - debug ("sub_no is %d, sec_no is %d\n", sub_no, sec_no); 1367 + debug("sub_no is %d, sec_no is %d\n", sub_no, sec_no); 1368 1368 if (sec_no != sub_number) { 1369 - err ("there're more buses behind this bridge. Hot removal is not supported. Please choose another card\n"); 1369 + err("there're more buses behind this bridge. Hot removal is not supported. 
Please choose another card\n"); 1370 1370 return -ENODEV; 1371 1371 } 1372 1372 1373 - bus = ibmphp_find_res_bus (sec_number); 1373 + bus = ibmphp_find_res_bus(sec_number); 1374 1374 if (!bus) { 1375 - err ("cannot find Bus structure for the bridged device\n"); 1375 + err("cannot find Bus structure for the bridged device\n"); 1376 1376 return -EINVAL; 1377 1377 } 1378 1378 debug("bus->busno is %x\n", bus->busno); 1379 1379 debug("sec_number is %x\n", sec_number); 1380 1380 1381 - ibmphp_remove_bus (bus, busno); 1381 + ibmphp_remove_bus(bus, busno); 1382 1382 1383 1383 for (count = 0; address[count]; count++) { 1384 1384 /* for 2 BARs */ 1385 - pci_bus_read_config_dword (ibmphp_pci_bus, devfn, address[count], &start_address); 1385 + pci_bus_read_config_dword(ibmphp_pci_bus, devfn, address[count], &start_address); 1386 1386 1387 1387 if (!start_address) { 1388 1388 /* This BAR is not implemented */ ··· 1394 1394 if (start_address & PCI_BASE_ADDRESS_SPACE_IO) { 1395 1395 /* This is IO */ 1396 1396 start_address &= PCI_BASE_ADDRESS_IO_MASK; 1397 - if (ibmphp_find_resource (bus, start_address, &io, IO) < 0) { 1398 - err ("cannot find corresponding IO resource to remove\n"); 1397 + if (ibmphp_find_resource(bus, start_address, &io, IO) < 0) { 1398 + err("cannot find corresponding IO resource to remove\n"); 1399 1399 return -EIO; 1400 1400 } 1401 1401 if (io) 1402 - debug ("io->start = %x\n", io->start); 1402 + debug("io->start = %x\n", io->start); 1403 1403 1404 - ibmphp_remove_resource (io); 1404 + ibmphp_remove_resource(io); 1405 1405 1406 1406 /* ????????? DO WE NEED TO WRITE ANYTHING INTO THE PCI CONFIG SPACE BACK ?????????? 
 */
		} else {
···
			if (start_address & PCI_BASE_ADDRESS_MEM_PREFETCH) {
				/* pfmem */
				start_address &= PCI_BASE_ADDRESS_MEM_MASK;
-				if (ibmphp_find_resource (bus, start_address, &pfmem, PFMEM) < 0) {
-					err ("cannot find corresponding PFMEM resource to remove\n");
+				if (ibmphp_find_resource(bus, start_address, &pfmem, PFMEM) < 0) {
+					err("cannot find corresponding PFMEM resource to remove\n");
					return -EINVAL;
				}
				if (pfmem) {
-					debug ("pfmem->start = %x\n", pfmem->start);
+					debug("pfmem->start = %x\n", pfmem->start);

					ibmphp_remove_resource(pfmem);
				}
			} else {
				/* regular memory */
				start_address &= PCI_BASE_ADDRESS_MEM_MASK;
-				if (ibmphp_find_resource (bus, start_address, &mem, MEM) < 0) {
-					err ("cannot find corresponding MEM resource to remove\n");
+				if (ibmphp_find_resource(bus, start_address, &mem, MEM) < 0) {
+					err("cannot find corresponding MEM resource to remove\n");
					return -EINVAL;
				}
				if (mem) {
-					debug ("mem->start = %x\n", mem->start);
+					debug("mem->start = %x\n", mem->start);

					ibmphp_remove_resource(mem);
				}
···
			}
		} /* end of mem */
	} /* end of for */
-	debug ("%s - exiting, returning success\n", __func__);
+	debug("%s - exiting, returning success\n", __func__);
	return 0;
}

-static int unconfigure_boot_card (struct slot *slot_cur)
+static int unconfigure_boot_card(struct slot *slot_cur)
{
	u16 vendor_id;
	u32 class;
···
	unsigned int devfn;
	u8 valid_device = 0x00; /* To see if we are ever able to find valid device and read it */

-	debug ("%s - enter\n", __func__);
+	debug("%s - enter\n", __func__);

	device = slot_cur->device;
	busno = slot_cur->bus;

-	debug ("b4 for loop, device is %x\n", device);
+	debug("b4 for loop, device is %x\n", device);
	/* For every function on the card */
	for (function = 0x0; function < 0x08; function++) {
		devfn = PCI_DEVFN(device, function);
		ibmphp_pci_bus->number = busno;

-		pci_bus_read_config_word (ibmphp_pci_bus, devfn, PCI_VENDOR_ID, &vendor_id);
+		pci_bus_read_config_word(ibmphp_pci_bus, devfn, PCI_VENDOR_ID, &vendor_id);

		if (vendor_id != PCI_VENDOR_ID_NOTVALID) {
			/* found correct device!!! */
			++valid_device;

-			debug ("%s - found correct device\n", __func__);
+			debug("%s - found correct device\n", __func__);

			/* header: x x x x x x x x
			 * | |___________|=> 1=PPB bridge, 0=normal device, 2=CardBus Bridge
			 * |_=> 0 = single function device, 1 = multi-function device
			 */

-			pci_bus_read_config_byte (ibmphp_pci_bus, devfn, PCI_HEADER_TYPE, &hdr_type);
-			pci_bus_read_config_dword (ibmphp_pci_bus, devfn, PCI_CLASS_REVISION, &class);
+			pci_bus_read_config_byte(ibmphp_pci_bus, devfn, PCI_HEADER_TYPE, &hdr_type);
+			pci_bus_read_config_dword(ibmphp_pci_bus, devfn, PCI_CLASS_REVISION, &class);

-			debug ("hdr_type %x, class %x\n", hdr_type, class);
+			debug("hdr_type %x, class %x\n", hdr_type, class);
			class >>= 8;	/* to take revision out, class = class.subclass.prog i/f */
			if (class == PCI_CLASS_NOT_DEFINED_VGA) {
-				err ("The device %x function %x is VGA compatible and is not supported for hot removing. Please choose another device.\n", device, function);
+				err("The device %x function %x is VGA compatible and is not supported for hot removing. Please choose another device.\n", device, function);
				return -ENODEV;
			} else if (class == PCI_CLASS_DISPLAY_VGA) {
-				err ("The device %x function %x is not supported for hot removing. Please choose another device.\n", device, function);
+				err("The device %x function %x is not supported for hot removing. Please choose another device.\n", device, function);
				return -ENODEV;
			}

			switch (hdr_type) {
				case PCI_HEADER_TYPE_NORMAL:
-					rc = unconfigure_boot_device (busno, device, function);
+					rc = unconfigure_boot_device(busno, device, function);
					if (rc) {
-						err ("was not able to unconfigure device %x func %x on bus %x. bailing out...\n",
+						err("was not able to unconfigure device %x func %x on bus %x. bailing out...\n",
						     device, function, busno);
						return rc;
					}
					function = 0x8;
					break;
				case PCI_HEADER_TYPE_MULTIDEVICE:
-					rc = unconfigure_boot_device (busno, device, function);
+					rc = unconfigure_boot_device(busno, device, function);
					if (rc) {
-						err ("was not able to unconfigure device %x func %x on bus %x. bailing out...\n",
+						err("was not able to unconfigure device %x func %x on bus %x. bailing out...\n",
						     device, function, busno);
						return rc;
					}
···
				case PCI_HEADER_TYPE_BRIDGE:
					class >>= 8;
					if (class != PCI_CLASS_BRIDGE_PCI) {
-						err ("This device %x function %x is not PCI-to-PCI bridge, and is not supported for hot-removing. Please try another card.\n", device, function);
+						err("This device %x function %x is not PCI-to-PCI bridge, and is not supported for hot-removing. Please try another card.\n", device, function);
						return -ENODEV;
					}
-					rc = unconfigure_boot_bridge (busno, device, function);
+					rc = unconfigure_boot_bridge(busno, device, function);
					if (rc != 0) {
-						err ("was not able to hot-remove PPB properly.\n");
+						err("was not able to hot-remove PPB properly.\n");
						return rc;
					}

···
				case PCI_HEADER_TYPE_MULTIBRIDGE:
					class >>= 8;
					if (class != PCI_CLASS_BRIDGE_PCI) {
-						err ("This device %x function %x is not PCI-to-PCI bridge, and is not supported for hot-removing. Please try another card.\n", device, function);
+						err("This device %x function %x is not PCI-to-PCI bridge, and is not supported for hot-removing. Please try another card.\n", device, function);
						return -ENODEV;
					}
-					rc = unconfigure_boot_bridge (busno, device, function);
+					rc = unconfigure_boot_bridge(busno, device, function);
					if (rc != 0) {
-						err ("was not able to hot-remove PPB properly.\n");
+						err("was not able to hot-remove PPB properly.\n");
						return rc;
					}
					break;
				default:
-					err ("MAJOR PROBLEM!!!! Cannot read device's header\n");
+					err("MAJOR PROBLEM!!!! Cannot read device's header\n");
					return -1;
					break;
			} /* end of switch */
···
	} /* end of for */

	if (!valid_device) {
-		err ("Could not find device to unconfigure. Or could not read the card.\n");
+		err("Could not find device to unconfigure. Or could not read the card.\n");
		return -1;
	}
	return 0;
···
 * !!!!!!!!!!!!!!!!!!!!!!!!!FOR BUSES!!!!!!!!!!!!
 * Returns: 0, -1, -ENODEV
 */
-int ibmphp_unconfigure_card (struct slot **slot_cur, int the_end)
+int ibmphp_unconfigure_card(struct slot **slot_cur, int the_end)
{
	int i;
	int count;
···
	struct pci_func *cur_func = NULL;
	struct pci_func *temp_func;

-	debug ("%s - enter\n", __func__);
+	debug("%s - enter\n", __func__);

	if (!the_end) {
		/* Need to unconfigure the card */
-		rc = unconfigure_boot_card (sl);
+		rc = unconfigure_boot_card(sl);
		if ((rc == -ENODEV) || (rc == -EIO) || (rc == -EINVAL)) {
			/* In all other cases, will still need to get rid of func structure if it exists */
			return rc;
···

			for (i = 0; i < count; i++) {
				if (cur_func->io[i]) {
-					debug ("io[%d] exists\n", i);
+					debug("io[%d] exists\n", i);
					if (the_end > 0)
-						ibmphp_remove_resource (cur_func->io[i]);
+						ibmphp_remove_resource(cur_func->io[i]);
					cur_func->io[i] = NULL;
				}
				if (cur_func->mem[i]) {
-					debug ("mem[%d] exists\n", i);
+					debug("mem[%d] exists\n", i);
					if (the_end > 0)
-						ibmphp_remove_resource (cur_func->mem[i]);
+						ibmphp_remove_resource(cur_func->mem[i]);
					cur_func->mem[i] = NULL;
				}
				if (cur_func->pfmem[i]) {
-					debug ("pfmem[%d] exists\n", i);
+					debug("pfmem[%d] exists\n", i);
					if (the_end > 0)
-						ibmphp_remove_resource (cur_func->pfmem[i]);
+						ibmphp_remove_resource(cur_func->pfmem[i]);
					cur_func->pfmem[i] = NULL;
				}
			}

			temp_func = cur_func->next;
-			kfree (cur_func);
+			kfree(cur_func);
			cur_func = temp_func;
		}
	}

	sl->func = NULL;
	*slot_cur = sl;
-	debug ("%s - exit\n", __func__);
+	debug("%s - exit\n", __func__);
	return 0;
}

···
 * Output: bus added to the correct spot
 * 0, -1, error
 */
-static int add_new_bus (struct bus_node *bus, struct resource_node *io, struct resource_node *mem, struct resource_node *pfmem, u8 parent_busno)
+static int add_new_bus(struct bus_node *bus, struct resource_node *io, struct resource_node *mem, struct resource_node *pfmem, u8 parent_busno)
{
	struct range_node *io_range = NULL;
	struct range_node *mem_range = NULL;
···

	/* Trying to find the parent bus number */
	if (parent_busno != 0xFF) {
-		cur_bus = ibmphp_find_res_bus (parent_busno);
+		cur_bus = ibmphp_find_res_bus(parent_busno);
		if (!cur_bus) {
-			err ("strange, cannot find bus which is supposed to be at the system... something is terribly wrong...\n");
+			err("strange, cannot find bus which is supposed to be at the system... something is terribly wrong...\n");
			return -ENODEV;
		}

-		list_add (&bus->bus_list, &cur_bus->bus_list);
+		list_add(&bus->bus_list, &cur_bus->bus_list);
	}
	if (io) {
		io_range = kzalloc(sizeof(*io_range), GFP_KERNEL);
		if (!io_range) {
-			err ("out of system memory\n");
+			err("out of system memory\n");
			return -ENOMEM;
		}
		io_range->start = io->start;
···
	if (mem) {
		mem_range = kzalloc(sizeof(*mem_range), GFP_KERNEL);
		if (!mem_range) {
-			err ("out of system memory\n");
+			err("out of system memory\n");
			return -ENOMEM;
		}
		mem_range->start = mem->start;
···
	if (pfmem) {
		pfmem_range = kzalloc(sizeof(*pfmem_range), GFP_KERNEL);
		if (!pfmem_range) {
-			err ("out of system memory\n");
+			err("out of system memory\n");
			return -ENOMEM;
		}
		pfmem_range->start = pfmem->start;
···
 * Parameters: bus_number of the primary bus
 * Returns: bus_number of the secondary bus or 0xff in case of failure
 */
-static u8 find_sec_number (u8 primary_busno, u8 slotno)
+static u8 find_sec_number(u8 primary_busno, u8 slotno)
{
	int min, max;
	u8 busno;
	struct bus_info *bus;
	struct bus_node *bus_cur;

-	bus = ibmphp_find_same_bus_num (primary_busno);
+	bus = ibmphp_find_same_bus_num(primary_busno);
	if (!bus) {
-		err ("cannot get slot range of the bus from the BIOS\n");
+		err("cannot get slot range of the bus from the BIOS\n");
		return 0xff;
	}
	max = bus->slot_max;
	min = bus->slot_min;
	if ((slotno > max) || (slotno < min)) {
-		err ("got the wrong range\n");
+		err("got the wrong range\n");
		return 0xff;
	}
	busno = (u8) (slotno - (u8) min);
	busno += primary_busno + 0x01;
-	bus_cur = ibmphp_find_res_bus (busno);
+	bus_cur = ibmphp_find_res_bus(busno);
	/* either there is no such bus number, or there are no ranges, which
	 * can only happen if we removed the bridged device in previous load
	 * of the driver, and now only have the skeleton bus struct
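Aside (not part of this patch): the ASCII diagram in unconfigure_boot_card() above describes the standard PCI header-type register layout — bits 6:0 select the config-space layout (0 = normal device, 1 = PCI-to-PCI bridge, 2 = CardBus bridge) and bit 7 flags a multi-function device. A minimal userspace sketch of that decoding, with hypothetical helper and mask names:

```c
#include <stdint.h>

/* Hypothetical names; values follow the PCI spec's Header Type register. */
#define HDR_TYPE_LAYOUT_MASK	0x7f	/* bits 6:0 - config-space layout */
#define HDR_TYPE_MFD_BIT	0x80	/* bit 7 - multi-function device */

/* Extract the layout field (0 = normal, 1 = PCI-PCI bridge, 2 = CardBus). */
static inline uint8_t hdr_layout(uint8_t hdr_type)
{
	return hdr_type & HDR_TYPE_LAYOUT_MASK;
}

/* Nonzero when the device exposes more than one function. */
static inline int hdr_is_multifunction(uint8_t hdr_type)
{
	return (hdr_type & HDR_TYPE_MFD_BIT) != 0;
}
```

This is why the driver's switch above handles both a plain and a "MULTI" variant of each header type: the same layout value appears with or without bit 7 set.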
+250 -264
drivers/pci/hotplug/ibmphp_res.c
···

static int flags = 0;		/* for testing */

-static void update_resources (struct bus_node *bus_cur, int type, int rangeno);
-static int once_over (void);
-static int remove_ranges (struct bus_node *, struct bus_node *);
-static int update_bridge_ranges (struct bus_node **);
-static int add_bus_range (int type, struct range_node *, struct bus_node *);
-static void fix_resources (struct bus_node *);
-static struct bus_node *find_bus_wprev (u8, struct bus_node **, u8);
+static void update_resources(struct bus_node *bus_cur, int type, int rangeno);
+static int once_over(void);
+static int remove_ranges(struct bus_node *, struct bus_node *);
+static int update_bridge_ranges(struct bus_node **);
+static int add_bus_range(int type, struct range_node *, struct bus_node *);
+static void fix_resources(struct bus_node *);
+static struct bus_node *find_bus_wprev(u8, struct bus_node **, u8);

static LIST_HEAD(gbuses);

-static struct bus_node * __init alloc_error_bus (struct ebda_pci_rsrc *curr, u8 busno, int flag)
+static struct bus_node * __init alloc_error_bus(struct ebda_pci_rsrc *curr, u8 busno, int flag)
{
	struct bus_node *newbus;

	if (!(curr) && !(flag)) {
-		err ("NULL pointer passed\n");
+		err("NULL pointer passed\n");
		return NULL;
	}

	newbus = kzalloc(sizeof(struct bus_node), GFP_KERNEL);
	if (!newbus) {
-		err ("out of system memory\n");
+		err("out of system memory\n");
		return NULL;
	}

···
		newbus->busno = busno;
	else
		newbus->busno = curr->bus_num;
-	list_add_tail (&newbus->bus_list, &gbuses);
+	list_add_tail(&newbus->bus_list, &gbuses);
	return newbus;
}

-static struct resource_node * __init alloc_resources (struct ebda_pci_rsrc *curr)
+static struct resource_node * __init alloc_resources(struct ebda_pci_rsrc *curr)
{
	struct resource_node *rs;

	if (!curr) {
-		err ("NULL passed to allocate\n");
+		err("NULL passed to allocate\n");
		return NULL;
	}

	rs = kzalloc(sizeof(struct resource_node), GFP_KERNEL);
	if (!rs) {
-		err ("out of system memory\n");
+		err("out of system memory\n");
		return NULL;
	}
	rs->busno = curr->bus_num;
···
	return rs;
}

-static int __init alloc_bus_range (struct bus_node **new_bus, struct range_node **new_range, struct ebda_pci_rsrc *curr, int flag, u8 first_bus)
+static int __init alloc_bus_range(struct bus_node **new_bus, struct range_node **new_range, struct ebda_pci_rsrc *curr, int flag, u8 first_bus)
{
	struct bus_node *newbus;
	struct range_node *newrange;
···
	if (first_bus) {
		newbus = kzalloc(sizeof(struct bus_node), GFP_KERNEL);
		if (!newbus) {
-			err ("out of system memory.\n");
+			err("out of system memory.\n");
			return -ENOMEM;
		}
		newbus->busno = curr->bus_num;
···
	newrange = kzalloc(sizeof(struct range_node), GFP_KERNEL);
	if (!newrange) {
		if (first_bus)
-			kfree (newbus);
-		err ("out of system memory\n");
+			kfree(newbus);
+		err("out of system memory\n");
		return -ENOMEM;
	}
	newrange->start = curr->start_addr;
···
		newrange->rangeno = 1;
	else {
		/* need to insert our range */
-		add_bus_range (flag, newrange, newbus);
-		debug ("%d resource Primary Bus inserted on bus %x [%x - %x]\n", flag, newbus->busno, newrange->start, newrange->end);
+		add_bus_range(flag, newrange, newbus);
+		debug("%d resource Primary Bus inserted on bus %x [%x - %x]\n", flag, newbus->busno, newrange->start, newrange->end);
	}

	switch (flag) {
···
			if (first_bus)
				newbus->noMemRanges = 1;
			else {
-				debug ("First Memory Primary on bus %x, [%x - %x]\n", newbus->busno, newrange->start, newrange->end);
+				debug("First Memory Primary on bus %x, [%x - %x]\n", newbus->busno, newrange->start, newrange->end);
				++newbus->noMemRanges;
-				fix_resources (newbus);
+				fix_resources(newbus);
			}
			break;
		case IO:
···
			if (first_bus)
				newbus->noIORanges = 1;
			else {
-				debug ("First IO Primary on bus %x, [%x - %x]\n", newbus->busno, newrange->start, newrange->end);
+				debug("First IO Primary on bus %x, [%x - %x]\n", newbus->busno, newrange->start, newrange->end);
				++newbus->noIORanges;
-				fix_resources (newbus);
+				fix_resources(newbus);
			}
			break;
		case PFMEM:
···
			if (first_bus)
				newbus->noPFMemRanges = 1;
			else {
-				debug ("1st PFMemory Primary on Bus %x [%x - %x]\n", newbus->busno, newrange->start, newrange->end);
+				debug("1st PFMemory Primary on Bus %x [%x - %x]\n", newbus->busno, newrange->start, newrange->end);
				++newbus->noPFMemRanges;
-				fix_resources (newbus);
+				fix_resources(newbus);
			}

			break;
···
 * 2. If cannot allocate out of PFMem range, allocate from Mem ranges. PFmemFromMem
 * are not sorted. (no need since use mem node). To not change the entire code, we
 * also add mem node whenever this case happens so as not to change
- * ibmphp_check_mem_resource etc (and since it really is taking Mem resource)
+ * ibmphp_check_mem_resource etc(and since it really is taking Mem resource)
 */

/*****************************************************************************
···
 * Input: ptr to the head of the resource list from EBDA
 * Output: 0, -1 or error codes
 ***************************************************************************/
-int __init ibmphp_rsrc_init (void)
+int __init ibmphp_rsrc_init(void)
{
	struct ebda_pci_rsrc *curr;
	struct range_node *newrange = NULL;
	struct bus_node *newbus = NULL;
	struct bus_node *bus_cur;
	struct bus_node *bus_prev;
-	struct list_head *tmp;
	struct resource_node *new_io = NULL;
	struct resource_node *new_mem = NULL;
	struct resource_node *new_pfmem = NULL;
	int rc;
-	struct list_head *tmp_ebda;

-	list_for_each (tmp_ebda, &ibmphp_ebda_pci_rsrc_head) {
-		curr = list_entry (tmp_ebda, struct ebda_pci_rsrc, ebda_pci_rsrc_list);
+	list_for_each_entry(curr, &ibmphp_ebda_pci_rsrc_head,
+			    ebda_pci_rsrc_list) {
		if (!(curr->rsrc_type & PCIDEVMASK)) {
			/* EBDA still lists non PCI devices, so ignore... */
-			debug ("this is not a PCI DEVICE in rsrc_init, please take care\n");
+			debug("this is not a PCI DEVICE in rsrc_init, please take care\n");
			// continue;
		}

···
			/* memory */
			if ((curr->rsrc_type & RESTYPE) == MMASK) {
				/* no bus structure exists in place yet */
-				if (list_empty (&gbuses)) {
+				if (list_empty(&gbuses)) {
					rc = alloc_bus_range(&newbus, &newrange, curr, MEM, 1);
					if (rc)
						return rc;
-					list_add_tail (&newbus->bus_list, &gbuses);
-					debug ("gbuses = NULL, Memory Primary Bus %x [%x - %x]\n", newbus->busno, newrange->start, newrange->end);
+					list_add_tail(&newbus->bus_list, &gbuses);
+					debug("gbuses = NULL, Memory Primary Bus %x [%x - %x]\n", newbus->busno, newrange->start, newrange->end);
				} else {
-					bus_cur = find_bus_wprev (curr->bus_num, &bus_prev, 1);
+					bus_cur = find_bus_wprev(curr->bus_num, &bus_prev, 1);
					/* found our bus */
					if (bus_cur) {
-						rc = alloc_bus_range (&bus_cur, &newrange, curr, MEM, 0);
+						rc = alloc_bus_range(&bus_cur, &newrange, curr, MEM, 0);
						if (rc)
							return rc;
					} else {
···
						if (rc)
							return rc;

-						list_add_tail (&newbus->bus_list, &gbuses);
-						debug ("New Bus, Memory Primary Bus %x [%x - %x]\n", newbus->busno, newrange->start, newrange->end);
+						list_add_tail(&newbus->bus_list, &gbuses);
+						debug("New Bus, Memory Primary Bus %x [%x - %x]\n", newbus->busno, newrange->start, newrange->end);
					}
				}
			} else if ((curr->rsrc_type & RESTYPE) == PFMASK) {
				/* prefetchable memory */
-				if (list_empty (&gbuses)) {
+				if (list_empty(&gbuses)) {
					/* no bus structure exists in place yet */
					rc = alloc_bus_range(&newbus, &newrange, curr, PFMEM, 1);
					if (rc)
						return rc;
-					list_add_tail (&newbus->bus_list, &gbuses);
-					debug ("gbuses = NULL, PFMemory Primary Bus %x [%x - %x]\n", newbus->busno, newrange->start, newrange->end);
+					list_add_tail(&newbus->bus_list, &gbuses);
+					debug("gbuses = NULL, PFMemory Primary Bus %x [%x - %x]\n", newbus->busno, newrange->start, newrange->end);
				} else {
-					bus_cur = find_bus_wprev (curr->bus_num, &bus_prev, 1);
+					bus_cur = find_bus_wprev(curr->bus_num, &bus_prev, 1);
					if (bus_cur) {
						/* found our bus */
-						rc = alloc_bus_range (&bus_cur, &newrange, curr, PFMEM, 0);
+						rc = alloc_bus_range(&bus_cur, &newrange, curr, PFMEM, 0);
						if (rc)
							return rc;
					} else {
···
						rc = alloc_bus_range(&newbus, &newrange, curr, PFMEM, 1);
						if (rc)
							return rc;
-						list_add_tail (&newbus->bus_list, &gbuses);
-						debug ("1st Bus, PFMemory Primary Bus %x [%x - %x]\n", newbus->busno, newrange->start, newrange->end);
+						list_add_tail(&newbus->bus_list, &gbuses);
+						debug("1st Bus, PFMemory Primary Bus %x [%x - %x]\n", newbus->busno, newrange->start, newrange->end);
					}
				}
			} else if ((curr->rsrc_type & RESTYPE) == IOMASK) {
				/* IO */
-				if (list_empty (&gbuses)) {
+				if (list_empty(&gbuses)) {
					/* no bus structure exists in place yet */
					rc = alloc_bus_range(&newbus, &newrange, curr, IO, 1);
					if (rc)
						return rc;
-					list_add_tail (&newbus->bus_list, &gbuses);
-					debug ("gbuses = NULL, IO Primary Bus %x [%x - %x]\n", newbus->busno, newrange->start, newrange->end);
+					list_add_tail(&newbus->bus_list, &gbuses);
+					debug("gbuses = NULL, IO Primary Bus %x [%x - %x]\n", newbus->busno, newrange->start, newrange->end);
				} else {
-					bus_cur = find_bus_wprev (curr->bus_num, &bus_prev, 1);
+					bus_cur = find_bus_wprev(curr->bus_num, &bus_prev, 1);
					if (bus_cur) {
-						rc = alloc_bus_range (&bus_cur, &newrange, curr, IO, 0);
+						rc = alloc_bus_range(&bus_cur, &newrange, curr, IO, 0);
						if (rc)
							return rc;
					} else {
···
						rc = alloc_bus_range(&newbus, &newrange, curr, IO, 1);
						if (rc)
							return rc;
-						list_add_tail (&newbus->bus_list, &gbuses);
-						debug ("1st Bus, IO Primary Bus %x [%x - %x]\n", newbus->busno, newrange->start, newrange->end);
+						list_add_tail(&newbus->bus_list, &gbuses);
+						debug("1st Bus, IO Primary Bus %x [%x - %x]\n", newbus->busno, newrange->start, newrange->end);
					}
				}

···
			/* regular pci device resource */
			if ((curr->rsrc_type & RESTYPE) == MMASK) {
				/* Memory resource */
-				new_mem = alloc_resources (curr);
+				new_mem = alloc_resources(curr);
				if (!new_mem)
					return -ENOMEM;
				new_mem->type = MEM;
···
				 * assign a -1 and then update once the range
				 * actually appears...
				 */
-				if (ibmphp_add_resource (new_mem) < 0) {
-					newbus = alloc_error_bus (curr, 0, 0);
+				if (ibmphp_add_resource(new_mem) < 0) {
+					newbus = alloc_error_bus(curr, 0, 0);
					if (!newbus)
						return -ENOMEM;
					newbus->firstMem = new_mem;
					++newbus->needMemUpdate;
					new_mem->rangeno = -1;
				}
-				debug ("Memory resource for device %x, bus %x, [%x - %x]\n", new_mem->devfunc, new_mem->busno, new_mem->start, new_mem->end);
+				debug("Memory resource for device %x, bus %x, [%x - %x]\n", new_mem->devfunc, new_mem->busno, new_mem->start, new_mem->end);

			} else if ((curr->rsrc_type & RESTYPE) == PFMASK) {
				/* PFMemory resource */
-				new_pfmem = alloc_resources (curr);
+				new_pfmem = alloc_resources(curr);
				if (!new_pfmem)
					return -ENOMEM;
				new_pfmem->type = PFMEM;
				new_pfmem->fromMem = 0;
-				if (ibmphp_add_resource (new_pfmem) < 0) {
-					newbus = alloc_error_bus (curr, 0, 0);
+				if (ibmphp_add_resource(new_pfmem) < 0) {
+					newbus = alloc_error_bus(curr, 0, 0);
					if (!newbus)
						return -ENOMEM;
					newbus->firstPFMem = new_pfmem;
···
					new_pfmem->rangeno = -1;
				}

-				debug ("PFMemory resource for device %x, bus %x, [%x - %x]\n", new_pfmem->devfunc, new_pfmem->busno, new_pfmem->start, new_pfmem->end);
+				debug("PFMemory resource for device %x, bus %x, [%x - %x]\n", new_pfmem->devfunc, new_pfmem->busno, new_pfmem->start, new_pfmem->end);
			} else if ((curr->rsrc_type & RESTYPE) == IOMASK) {
				/* IO resource */
-				new_io = alloc_resources (curr);
+				new_io = alloc_resources(curr);
				if (!new_io)
					return -ENOMEM;
				new_io->type = IO;
···
				 * Can assign a -1 and then update once the
				 * range actually appears...
				 */
-				if (ibmphp_add_resource (new_io) < 0) {
-					newbus = alloc_error_bus (curr, 0, 0);
+				if (ibmphp_add_resource(new_io) < 0) {
+					newbus = alloc_error_bus(curr, 0, 0);
					if (!newbus)
						return -ENOMEM;
					newbus->firstIO = new_io;
					++newbus->needIOUpdate;
					new_io->rangeno = -1;
				}
-				debug ("IO resource for device %x, bus %x, [%x - %x]\n", new_io->devfunc, new_io->busno, new_io->start, new_io->end);
+				debug("IO resource for device %x, bus %x, [%x - %x]\n", new_io->devfunc, new_io->busno, new_io->start, new_io->end);
			}
		}
	}

-	list_for_each (tmp, &gbuses) {
-		bus_cur = list_entry (tmp, struct bus_node, bus_list);
+	list_for_each_entry(bus_cur, &gbuses, bus_list) {
		/* This is to get info about PPB resources, since EBDA doesn't put this info into the primary bus info */
-		rc = update_bridge_ranges (&bus_cur);
+		rc = update_bridge_ranges(&bus_cur);
		if (rc)
			return rc;
	}
-	return once_over ();	/* This is to align ranges (so no -1) */
+	return once_over();	/* This is to align ranges (so no -1) */
}

/********************************************************************************
···
 * Input: type of the resource, range to add, current bus
 * Output: 0 or -1, bus and range ptrs
 ********************************************************************************/
-static int add_bus_range (int type, struct range_node *range, struct bus_node *bus_cur)
+static int add_bus_range(int type, struct range_node *range, struct bus_node *bus_cur)
{
	struct range_node *range_cur = NULL;
	struct range_node *range_prev;
···
		range_cur = range_cur->next;
	}

-	update_resources (bus_cur, type, i_init + 1);
+	update_resources(bus_cur, type, i_init + 1);
	return 0;
}

···
 *
 * Input: bus, type of the resource, the rangeno starting from which to update
 ******************************************************************************/
-static void update_resources (struct bus_node *bus_cur, int type, int rangeno)
+static void update_resources(struct bus_node *bus_cur, int type, int rangeno)
{
	struct resource_node *res = NULL;
	u8 eol = 0;	/* end of list indicator */
···
	}
}

-static void fix_me (struct resource_node *res, struct bus_node *bus_cur, struct range_node *range)
+static void fix_me(struct resource_node *res, struct bus_node *bus_cur, struct range_node *range)
{
-	char * str = "";
+	char *str = "";
	switch (res->type) {
		case IO:
			str = "io";
···
	while (range) {
		if ((res->start >= range->start) && (res->end <= range->end)) {
			res->rangeno = range->rangeno;
-			debug ("%s->rangeno in fix_resources is %d\n", str, res->rangeno);
+			debug("%s->rangeno in fix_resources is %d\n", str, res->rangeno);
			switch (res->type) {
				case IO:
					--bus_cur->needIOUpdate;
···
 * Input: current bus
 * Output: none, list of resources for that bus are fixed if can be
 *******************************************************************************/
-static void fix_resources (struct bus_node *bus_cur)
+static void fix_resources(struct bus_node *bus_cur)
{
	struct range_node *range;
	struct resource_node *res;

-	debug ("%s - bus_cur->busno = %d\n", __func__, bus_cur->busno);
+	debug("%s - bus_cur->busno = %d\n", __func__, bus_cur->busno);

	if (bus_cur->needIOUpdate) {
		res = bus_cur->firstIO;
		range = bus_cur->rangeIO;
-		fix_me (res, bus_cur, range);
+		fix_me(res, bus_cur, range);
	}
	if (bus_cur->needMemUpdate) {
		res = bus_cur->firstMem;
		range = bus_cur->rangeMem;
-		fix_me (res, bus_cur, range);
+		fix_me(res, bus_cur, range);
	}
	if (bus_cur->needPFMemUpdate) {
		res = bus_cur->firstPFMem;
		range = bus_cur->rangePFMem;
-		fix_me (res, bus_cur, range);
+		fix_me(res, bus_cur, range);
	}
}

···
 * Output: ptrs assigned (to the node)
 * 0 or -1
 *******************************************************************************/
-int ibmphp_add_resource (struct resource_node *res)
+int ibmphp_add_resource(struct resource_node *res)
{
	struct resource_node *res_cur;
	struct resource_node *res_prev;
···
	struct range_node *range_cur = NULL;
	struct resource_node *res_start = NULL;

-	debug ("%s - enter\n", __func__);
+	debug("%s - enter\n", __func__);

	if (!res) {
-		err ("NULL passed to add\n");
+		err("NULL passed to add\n");
		return -ENODEV;
	}

-	bus_cur = find_bus_wprev (res->busno, NULL, 0);
+	bus_cur = find_bus_wprev(res->busno, NULL, 0);

	if (!bus_cur) {
		/* didn't find a bus, something's wrong!!! */
-		debug ("no bus in the system, either pci_dev's wrong or allocation failed\n");
+		debug("no bus in the system, either pci_dev's wrong or allocation failed\n");
		return -ENODEV;
	}

···
			res_start = bus_cur->firstPFMem;
			break;
		default:
-			err ("cannot read the type of the resource to add... problem\n");
+			err("cannot read the type of the resource to add... problem\n");
			return -EINVAL;
	}
	while (range_cur) {
···
		res->rangeno = -1;
	}

-	debug ("The range is %d\n", res->rangeno);
+	debug("The range is %d\n", res->rangeno);
	if (!res_start) {
		/* no first{IO,Mem,Pfmem} on the bus, 1st IO/Mem/Pfmem resource ever */
		switch (res->type) {
···
		res_cur = res_start;
		res_prev = NULL;

-		debug ("res_cur->rangeno is %d\n", res_cur->rangeno);
+		debug("res_cur->rangeno is %d\n", res_cur->rangeno);

		while (res_cur) {
			if (res_cur->rangeno >= res->rangeno)
···

		if (!res_cur) {
			/* at the end of the resource list */
-			debug ("i should be here, [%x - %x]\n", res->start, res->end);
+			debug("i should be here, [%x - %x]\n", res->start, res->end);
			res_prev->nextRange = res;
			res->next = NULL;
			res->nextRange = NULL;
···
		}
	}

-	debug ("%s - exit\n", __func__);
+	debug("%s - exit\n", __func__);
	return 0;
}

···
 * Output: modified resource list
 * 0 or error code
 ****************************************************************************/
-int ibmphp_remove_resource (struct resource_node *res)
+int ibmphp_remove_resource(struct resource_node *res)
{
	struct bus_node *bus_cur;
	struct resource_node *res_cur = NULL;
	struct resource_node *res_prev;
	struct resource_node *mem_cur;
-	char * type = "";
+	char *type = "";

	if (!res) {
-		err ("resource to remove is NULL\n");
+		err("resource to remove is NULL\n");
		return -ENODEV;
	}

-	bus_cur = find_bus_wprev (res->busno, NULL, 0);
+	bus_cur = find_bus_wprev(res->busno, NULL, 0);

	if (!bus_cur) {
-		err ("cannot find corresponding bus of the io resource to remove bailing out...\n");
+		err("cannot find corresponding bus of the io resource to remove bailing out...\n");
		return -ENODEV;
	}

···
			type = "pfmem";
			break;
		default:
-			err ("unknown type for resource to remove\n");
+			err("unknown type for resource to remove\n");
			return -EINVAL;
	}
	res_prev = NULL;
···
				mem_cur = mem_cur->nextRange;
			}
			if (!mem_cur) {
-				err ("cannot find corresponding mem node for pfmem...\n");
+				err("cannot find corresponding mem node for pfmem...\n");
				return -EINVAL;
			}

-			ibmphp_remove_resource (mem_cur);
+			ibmphp_remove_resource(mem_cur);
			if (!res_prev)
				bus_cur->firstPFMemFromMem = res_cur->next;
			else
				res_prev->next = res_cur->next;
-			kfree (res_cur);
+			kfree(res_cur);
			return 0;
		}
		res_prev = res_cur;
···
			res_cur = res_cur->nextRange;
		}
		if (!res_cur) {
-			err ("cannot find pfmem to delete...\n");
+			err("cannot find pfmem to delete...\n");
			return -EINVAL;
		}
	} else {
-		err ("the %s resource is not in the list to be deleted...\n", type);
+		err("the %s resource is not in the list to be deleted...\n", type);
		return -EINVAL;
	}
}
···
				break;
			}
		}
-		kfree (res_cur);
+		kfree(res_cur);
		return 0;
	} else {
		if (res_cur->next) {
···
			res_prev->next = NULL;
			res_prev->nextRange = NULL;
		}
-		kfree (res_cur);
+		kfree(res_cur);
		return 0;
	}

	return 0;
}

-static struct range_node *find_range (struct bus_node *bus_cur, struct resource_node *res)
+static struct range_node *find_range(struct bus_node *bus_cur, struct resource_node *res)
{
	struct range_node *range = NULL;

···
			range = bus_cur->rangePFMem;
			break;
		default:
-			err ("cannot read resource type in find_range\n");
+			err("cannot read resource type in find_range\n");
	}

	while (range) {
···
 * Output: the correct start and end address are inputted into the resource node,
 * 0 or -EINVAL
 *****************************************************************************/
-int ibmphp_check_resource (struct resource_node *res, u8 bridge)
+int ibmphp_check_resource(struct resource_node *res, u8 bridge)
{
	struct bus_node *bus_cur;
	struct range_node *range = NULL;
···
	} else
		tmp_divide = res->len;

-	bus_cur = find_bus_wprev (res->busno, NULL, 0);
+	bus_cur = find_bus_wprev(res->busno, NULL, 0);

	if (!bus_cur) {
		/* didn't find a bus, something's wrong!!! */
-		debug ("no bus in the system, either pci_dev's wrong or allocation failed\n");
+		debug("no bus in the system, either pci_dev's wrong or allocation failed\n");
		return -EINVAL;
	}

-	debug ("%s - enter\n", __func__);
-	debug ("bus_cur->busno is %d\n", bus_cur->busno);
+	debug("%s - enter\n", __func__);
+	debug("bus_cur->busno is %d\n", bus_cur->busno);

	/* This is a quick fix to not mess up with the code very much. i.e.,
	 * 2000-2fff, len = 1000, but when we compare, we need it to be fff */
···
			noranges = bus_cur->noPFMemRanges;
			break;
		default:
-			err ("wrong type of resource to check\n");
+			err("wrong type of resource to check\n");
			return -EINVAL;
	}
	res_prev = NULL;

	while (res_cur) {
-		range = find_range (bus_cur, res_cur);
-		debug ("%s - rangeno = %d\n", __func__, res_cur->rangeno);
+		range = find_range(bus_cur, res_cur);
+		debug("%s - rangeno = %d\n", __func__, res_cur->rangeno);

		if (!range) {
-			err ("no range for the device exists... bailing out...\n");
+			err("no range for the device exists... bailing out...\n");
			return -EINVAL;
		}

···
			len_tmp = res_cur->start - 1 - range->start;

			if ((res_cur->start != range->start) && (len_tmp >= res->len)) {
-				debug ("len_tmp = %x\n", len_tmp);
+				debug("len_tmp = %x\n", len_tmp);

				if ((len_tmp < len_cur) || (len_cur == 0)) {

···
			}

			if (flag && len_cur == res->len) {
-				debug ("but we are not here, right?\n");
+				debug("but we are not here, right?\n");
				res->start = start_cur;
				res->len += 1;	/* To restore the balance */
				res->end = res->start + res->len - 1;
···
			len_tmp = range->end - (res_cur->end + 1);

			if ((range->end != res_cur->end) && (len_tmp >= res->len)) {
-				debug ("len_tmp = %x\n", len_tmp);
+				debug("len_tmp = %x\n", len_tmp);
				if ((len_tmp < len_cur) || (len_cur == 0)) {

					if (((res_cur->end + 1) % tmp_divide) == 0) {
···

	if ((!range) && (len_cur == 0)) {
		/* have gone through the list of devices and ranges and haven't found n.e.thing */
-		err ("no appropriate range.. bailing out...\n");
+		err("no appropriate range..
bailing out...\n"); 1263 1266 return -EINVAL; 1264 1267 } else if (len_cur) { 1265 1268 res->start = start_cur; ··· 1270 1273 } 1271 1274 1272 1275 if (!res_cur) { 1273 - debug ("prev->rangeno = %d, noranges = %d\n", res_prev->rangeno, noranges); 1276 + debug("prev->rangeno = %d, noranges = %d\n", res_prev->rangeno, noranges); 1274 1277 if (res_prev->rangeno < noranges) { 1275 1278 /* if there're more ranges out there to check */ 1276 1279 switch (res->type) { ··· 1325 1328 1326 1329 if ((!range) && (len_cur == 0)) { 1327 1330 /* have gone through the list of devices and ranges and haven't found n.e.thing */ 1328 - err ("no appropriate range.. bailing out...\n"); 1331 + err("no appropriate range.. bailing out...\n"); 1329 1332 return -EINVAL; 1330 1333 } else if (len_cur) { 1331 1334 res->start = start_cur; ··· 1342 1345 return 0; 1343 1346 } else { 1344 1347 /* have gone through the list of devices and haven't found n.e.thing */ 1345 - err ("no appropriate range.. bailing out...\n"); 1348 + err("no appropriate range.. bailing out...\n"); 1346 1349 return -EINVAL; 1347 1350 } 1348 1351 } ··· 1356 1359 * Input: Bus 1357 1360 * Output: 0, -ENODEV 1358 1361 ********************************************************************************/ 1359 - int ibmphp_remove_bus (struct bus_node *bus, u8 parent_busno) 1362 + int ibmphp_remove_bus(struct bus_node *bus, u8 parent_busno) 1360 1363 { 1361 1364 struct resource_node *res_cur; 1362 1365 struct resource_node *res_tmp; 1363 1366 struct bus_node *prev_bus; 1364 1367 int rc; 1365 1368 1366 - prev_bus = find_bus_wprev (parent_busno, NULL, 0); 1369 + prev_bus = find_bus_wprev(parent_busno, NULL, 0); 1367 1370 1368 1371 if (!prev_bus) { 1369 - debug ("something terribly wrong. Cannot find parent bus to the one to remove\n"); 1372 + debug("something terribly wrong. Cannot find parent bus to the one to remove\n"); 1370 1373 return -ENODEV; 1371 1374 } 1372 1375 1373 - debug ("In ibmphp_remove_bus... 
prev_bus->busno is %x\n", prev_bus->busno); 1376 + debug("In ibmphp_remove_bus... prev_bus->busno is %x\n", prev_bus->busno); 1374 1377 1375 - rc = remove_ranges (bus, prev_bus); 1378 + rc = remove_ranges(bus, prev_bus); 1376 1379 if (rc) 1377 1380 return rc; 1378 1381 ··· 1384 1387 res_cur = res_cur->next; 1385 1388 else 1386 1389 res_cur = res_cur->nextRange; 1387 - kfree (res_tmp); 1390 + kfree(res_tmp); 1388 1391 res_tmp = NULL; 1389 1392 } 1390 1393 bus->firstIO = NULL; ··· 1397 1400 res_cur = res_cur->next; 1398 1401 else 1399 1402 res_cur = res_cur->nextRange; 1400 - kfree (res_tmp); 1403 + kfree(res_tmp); 1401 1404 res_tmp = NULL; 1402 1405 } 1403 1406 bus->firstMem = NULL; ··· 1410 1413 res_cur = res_cur->next; 1411 1414 else 1412 1415 res_cur = res_cur->nextRange; 1413 - kfree (res_tmp); 1416 + kfree(res_tmp); 1414 1417 res_tmp = NULL; 1415 1418 } 1416 1419 bus->firstPFMem = NULL; ··· 1422 1425 res_tmp = res_cur; 1423 1426 res_cur = res_cur->next; 1424 1427 1425 - kfree (res_tmp); 1428 + kfree(res_tmp); 1426 1429 res_tmp = NULL; 1427 1430 } 1428 1431 bus->firstPFMemFromMem = NULL; 1429 1432 } 1430 1433 1431 - list_del (&bus->bus_list); 1432 - kfree (bus); 1434 + list_del(&bus->bus_list); 1435 + kfree(bus); 1433 1436 return 0; 1434 1437 } 1435 1438 ··· 1439 1442 * Input: current bus, previous bus 1440 1443 * Output: 0, -EINVAL 1441 1444 ******************************************************************************/ 1442 - static int remove_ranges (struct bus_node *bus_cur, struct bus_node *bus_prev) 1445 + static int remove_ranges(struct bus_node *bus_cur, struct bus_node *bus_prev) 1443 1446 { 1444 1447 struct range_node *range_cur; 1445 1448 struct range_node *range_tmp; ··· 1449 1452 if (bus_cur->noIORanges) { 1450 1453 range_cur = bus_cur->rangeIO; 1451 1454 for (i = 0; i < bus_cur->noIORanges; i++) { 1452 - if (ibmphp_find_resource (bus_prev, range_cur->start, &res, IO) < 0) 1455 + if (ibmphp_find_resource(bus_prev, range_cur->start, &res, IO) < 0) 
1453 1456 return -EINVAL; 1454 - ibmphp_remove_resource (res); 1457 + ibmphp_remove_resource(res); 1455 1458 1456 1459 range_tmp = range_cur; 1457 1460 range_cur = range_cur->next; 1458 - kfree (range_tmp); 1461 + kfree(range_tmp); 1459 1462 range_tmp = NULL; 1460 1463 } 1461 1464 bus_cur->rangeIO = NULL; ··· 1463 1466 if (bus_cur->noMemRanges) { 1464 1467 range_cur = bus_cur->rangeMem; 1465 1468 for (i = 0; i < bus_cur->noMemRanges; i++) { 1466 - if (ibmphp_find_resource (bus_prev, range_cur->start, &res, MEM) < 0) 1469 + if (ibmphp_find_resource(bus_prev, range_cur->start, &res, MEM) < 0) 1467 1470 return -EINVAL; 1468 1471 1469 - ibmphp_remove_resource (res); 1472 + ibmphp_remove_resource(res); 1470 1473 range_tmp = range_cur; 1471 1474 range_cur = range_cur->next; 1472 - kfree (range_tmp); 1475 + kfree(range_tmp); 1473 1476 range_tmp = NULL; 1474 1477 } 1475 1478 bus_cur->rangeMem = NULL; ··· 1477 1480 if (bus_cur->noPFMemRanges) { 1478 1481 range_cur = bus_cur->rangePFMem; 1479 1482 for (i = 0; i < bus_cur->noPFMemRanges; i++) { 1480 - if (ibmphp_find_resource (bus_prev, range_cur->start, &res, PFMEM) < 0) 1483 + if (ibmphp_find_resource(bus_prev, range_cur->start, &res, PFMEM) < 0) 1481 1484 return -EINVAL; 1482 1485 1483 - ibmphp_remove_resource (res); 1486 + ibmphp_remove_resource(res); 1484 1487 range_tmp = range_cur; 1485 1488 range_cur = range_cur->next; 1486 - kfree (range_tmp); 1489 + kfree(range_tmp); 1487 1490 range_tmp = NULL; 1488 1491 } 1489 1492 bus_cur->rangePFMem = NULL; ··· 1495 1498 * find the resource node in the bus 1496 1499 * Input: Resource needed, start address of the resource, type of resource 1497 1500 */ 1498 - int ibmphp_find_resource (struct bus_node *bus, u32 start_address, struct resource_node **res, int flag) 1501 + int ibmphp_find_resource(struct bus_node *bus, u32 start_address, struct resource_node **res, int flag) 1499 1502 { 1500 1503 struct resource_node *res_cur = NULL; 1501 - char * type = ""; 1504 + char *type = ""; 
1502 1505 1503 1506 if (!bus) { 1504 - err ("The bus passed in NULL to find resource\n"); 1507 + err("The bus passed in NULL to find resource\n"); 1505 1508 return -ENODEV; 1506 1509 } 1507 1510 ··· 1519 1522 type = "pfmem"; 1520 1523 break; 1521 1524 default: 1522 - err ("wrong type of flag\n"); 1525 + err("wrong type of flag\n"); 1523 1526 return -EINVAL; 1524 1527 } 1525 1528 ··· 1545 1548 res_cur = res_cur->next; 1546 1549 } 1547 1550 if (!res_cur) { 1548 - debug ("SOS...cannot find %s resource in the bus.\n", type); 1551 + debug("SOS...cannot find %s resource in the bus.\n", type); 1549 1552 return -EINVAL; 1550 1553 } 1551 1554 } else { 1552 - debug ("SOS... cannot find %s resource in the bus.\n", type); 1555 + debug("SOS... cannot find %s resource in the bus.\n", type); 1553 1556 return -EINVAL; 1554 1557 } 1555 1558 } 1556 1559 1557 1560 if (*res) 1558 - debug ("*res->start = %x\n", (*res)->start); 1561 + debug("*res->start = %x\n", (*res)->start); 1559 1562 1560 1563 return 0; 1561 1564 } ··· 1566 1569 * Parameters: none 1567 1570 * Returns: none 1568 1571 ***********************************************************************/ 1569 - void ibmphp_free_resources (void) 1572 + void ibmphp_free_resources(void) 1570 1573 { 1571 - struct bus_node *bus_cur = NULL; 1574 + struct bus_node *bus_cur = NULL, *next; 1572 1575 struct bus_node *bus_tmp; 1573 1576 struct range_node *range_cur; 1574 1577 struct range_node *range_tmp; 1575 1578 struct resource_node *res_cur; 1576 1579 struct resource_node *res_tmp; 1577 - struct list_head *tmp; 1578 - struct list_head *next; 1579 1580 int i = 0; 1580 1581 flags = 1; 1581 1582 1582 - list_for_each_safe (tmp, next, &gbuses) { 1583 - bus_cur = list_entry (tmp, struct bus_node, bus_list); 1583 + list_for_each_entry_safe(bus_cur, next, &gbuses, bus_list) { 1584 1584 if (bus_cur->noIORanges) { 1585 1585 range_cur = bus_cur->rangeIO; 1586 1586 for (i = 0; i < bus_cur->noIORanges; i++) { ··· 1585 1591 break; 1586 1592 range_tmp = 
range_cur; 1587 1593 range_cur = range_cur->next; 1588 - kfree (range_tmp); 1594 + kfree(range_tmp); 1589 1595 range_tmp = NULL; 1590 1596 } 1591 1597 } ··· 1596 1602 break; 1597 1603 range_tmp = range_cur; 1598 1604 range_cur = range_cur->next; 1599 - kfree (range_tmp); 1605 + kfree(range_tmp); 1600 1606 range_tmp = NULL; 1601 1607 } 1602 1608 } ··· 1607 1613 break; 1608 1614 range_tmp = range_cur; 1609 1615 range_cur = range_cur->next; 1610 - kfree (range_tmp); 1616 + kfree(range_tmp); 1611 1617 range_tmp = NULL; 1612 1618 } 1613 1619 } ··· 1620 1626 res_cur = res_cur->next; 1621 1627 else 1622 1628 res_cur = res_cur->nextRange; 1623 - kfree (res_tmp); 1629 + kfree(res_tmp); 1624 1630 res_tmp = NULL; 1625 1631 } 1626 1632 bus_cur->firstIO = NULL; ··· 1633 1639 res_cur = res_cur->next; 1634 1640 else 1635 1641 res_cur = res_cur->nextRange; 1636 - kfree (res_tmp); 1642 + kfree(res_tmp); 1637 1643 res_tmp = NULL; 1638 1644 } 1639 1645 bus_cur->firstMem = NULL; ··· 1646 1652 res_cur = res_cur->next; 1647 1653 else 1648 1654 res_cur = res_cur->nextRange; 1649 - kfree (res_tmp); 1655 + kfree(res_tmp); 1650 1656 res_tmp = NULL; 1651 1657 } 1652 1658 bus_cur->firstPFMem = NULL; ··· 1658 1664 res_tmp = res_cur; 1659 1665 res_cur = res_cur->next; 1660 1666 1661 - kfree (res_tmp); 1667 + kfree(res_tmp); 1662 1668 res_tmp = NULL; 1663 1669 } 1664 1670 bus_cur->firstPFMemFromMem = NULL; 1665 1671 } 1666 1672 1667 1673 bus_tmp = bus_cur; 1668 - list_del (&bus_cur->bus_list); 1669 - kfree (bus_tmp); 1674 + list_del(&bus_cur->bus_list); 1675 + kfree(bus_tmp); 1670 1676 bus_tmp = NULL; 1671 1677 } 1672 1678 } ··· 1679 1685 * a new Mem node 1680 1686 * This routine is called right after initialization 1681 1687 *******************************************************************************/ 1682 - static int __init once_over (void) 1688 + static int __init once_over(void) 1683 1689 { 1684 1690 struct resource_node *pfmem_cur; 1685 1691 struct resource_node *pfmem_prev; 1686 1692 
struct resource_node *mem; 1687 1693 struct bus_node *bus_cur; 1688 - struct list_head *tmp; 1689 1694 1690 - list_for_each (tmp, &gbuses) { 1691 - bus_cur = list_entry (tmp, struct bus_node, bus_list); 1695 + list_for_each_entry(bus_cur, &gbuses, bus_list) { 1692 1696 if ((!bus_cur->rangePFMem) && (bus_cur->firstPFMem)) { 1693 1697 for (pfmem_cur = bus_cur->firstPFMem, pfmem_prev = NULL; pfmem_cur; pfmem_prev = pfmem_cur, pfmem_cur = pfmem_cur->next) { 1694 1698 pfmem_cur->fromMem = 1; ··· 1708 1716 1709 1717 mem = kzalloc(sizeof(struct resource_node), GFP_KERNEL); 1710 1718 if (!mem) { 1711 - err ("out of system memory\n"); 1719 + err("out of system memory\n"); 1712 1720 return -ENOMEM; 1713 1721 } 1714 1722 mem->type = MEM; ··· 1717 1725 mem->start = pfmem_cur->start; 1718 1726 mem->end = pfmem_cur->end; 1719 1727 mem->len = pfmem_cur->len; 1720 - if (ibmphp_add_resource (mem) < 0) 1721 - err ("Trouble...trouble... EBDA allocated pfmem from mem, but system doesn't display it has this space... unless not PCI device...\n"); 1728 + if (ibmphp_add_resource(mem) < 0) 1729 + err("Trouble...trouble... EBDA allocated pfmem from mem, but system doesn't display it has this space... 
unless not PCI device...\n"); 1722 1730 pfmem_cur->rangeno = mem->rangeno; 1723 1731 } /* end for pfmem */ 1724 1732 } /* end if */ ··· 1726 1734 return 0; 1727 1735 } 1728 1736 1729 - int ibmphp_add_pfmem_from_mem (struct resource_node *pfmem) 1737 + int ibmphp_add_pfmem_from_mem(struct resource_node *pfmem) 1730 1738 { 1731 - struct bus_node *bus_cur = find_bus_wprev (pfmem->busno, NULL, 0); 1739 + struct bus_node *bus_cur = find_bus_wprev(pfmem->busno, NULL, 0); 1732 1740 1733 1741 if (!bus_cur) { 1734 - err ("cannot find bus of pfmem to add...\n"); 1742 + err("cannot find bus of pfmem to add...\n"); 1735 1743 return -ENODEV; 1736 1744 } 1737 1745 ··· 1751 1759 * Parameters: bus_number 1752 1760 * Returns: Bus pointer or NULL 1753 1761 */ 1754 - struct bus_node *ibmphp_find_res_bus (u8 bus_number) 1762 + struct bus_node *ibmphp_find_res_bus(u8 bus_number) 1755 1763 { 1756 - return find_bus_wprev (bus_number, NULL, 0); 1764 + return find_bus_wprev(bus_number, NULL, 0); 1757 1765 } 1758 1766 1759 - static struct bus_node *find_bus_wprev (u8 bus_number, struct bus_node **prev, u8 flag) 1767 + static struct bus_node *find_bus_wprev(u8 bus_number, struct bus_node **prev, u8 flag) 1760 1768 { 1761 1769 struct bus_node *bus_cur; 1762 - struct list_head *tmp; 1763 - struct list_head *tmp_prev; 1764 1770 1765 - list_for_each (tmp, &gbuses) { 1766 - tmp_prev = tmp->prev; 1767 - bus_cur = list_entry (tmp, struct bus_node, bus_list); 1771 + list_for_each_entry(bus_cur, &gbuses, bus_list) { 1768 1772 if (flag) 1769 - *prev = list_entry (tmp_prev, struct bus_node, bus_list); 1773 + *prev = list_prev_entry(bus_cur, bus_list); 1770 1774 if (bus_cur->busno == bus_number) 1771 1775 return bus_cur; 1772 1776 } ··· 1770 1782 return NULL; 1771 1783 } 1772 1784 1773 - void ibmphp_print_test (void) 1785 + void ibmphp_print_test(void) 1774 1786 { 1775 1787 int i = 0; 1776 1788 struct bus_node *bus_cur = NULL; 1777 1789 struct range_node *range; 1778 1790 struct resource_node *res; 1779 
- struct list_head *tmp; 1780 1791 1781 - debug_pci ("*****************START**********************\n"); 1792 + debug_pci("*****************START**********************\n"); 1782 1793 1783 1794 if ((!list_empty(&gbuses)) && flags) { 1784 - err ("The GBUSES is not NULL?!?!?!?!?\n"); 1795 + err("The GBUSES is not NULL?!?!?!?!?\n"); 1785 1796 return; 1786 1797 } 1787 1798 1788 - list_for_each (tmp, &gbuses) { 1789 - bus_cur = list_entry (tmp, struct bus_node, bus_list); 1799 + list_for_each_entry(bus_cur, &gbuses, bus_list) { 1790 1800 debug_pci ("This is bus # %d. There are\n", bus_cur->busno); 1791 1801 debug_pci ("IORanges = %d\t", bus_cur->noIORanges); 1792 1802 debug_pci ("MemRanges = %d\t", bus_cur->noMemRanges); ··· 1793 1807 if (bus_cur->rangeIO) { 1794 1808 range = bus_cur->rangeIO; 1795 1809 for (i = 0; i < bus_cur->noIORanges; i++) { 1796 - debug_pci ("rangeno is %d\n", range->rangeno); 1797 - debug_pci ("[%x - %x]\n", range->start, range->end); 1810 + debug_pci("rangeno is %d\n", range->rangeno); 1811 + debug_pci("[%x - %x]\n", range->start, range->end); 1798 1812 range = range->next; 1799 1813 } 1800 1814 } 1801 1815 1802 - debug_pci ("The Mem Ranges are as follows:\n"); 1816 + debug_pci("The Mem Ranges are as follows:\n"); 1803 1817 if (bus_cur->rangeMem) { 1804 1818 range = bus_cur->rangeMem; 1805 1819 for (i = 0; i < bus_cur->noMemRanges; i++) { 1806 - debug_pci ("rangeno is %d\n", range->rangeno); 1807 - debug_pci ("[%x - %x]\n", range->start, range->end); 1820 + debug_pci("rangeno is %d\n", range->rangeno); 1821 + debug_pci("[%x - %x]\n", range->start, range->end); 1808 1822 range = range->next; 1809 1823 } 1810 1824 } 1811 1825 1812 - debug_pci ("The PFMem Ranges are as follows:\n"); 1826 + debug_pci("The PFMem Ranges are as follows:\n"); 1813 1827 1814 1828 if (bus_cur->rangePFMem) { 1815 1829 range = bus_cur->rangePFMem; 1816 1830 for (i = 0; i < bus_cur->noPFMemRanges; i++) { 1817 - debug_pci ("rangeno is %d\n", range->rangeno); 1818 - debug_pci 
("[%x - %x]\n", range->start, range->end); 1831 + debug_pci("rangeno is %d\n", range->rangeno); 1832 + debug_pci("[%x - %x]\n", range->start, range->end); 1819 1833 range = range->next; 1820 1834 } 1821 1835 } 1822 1836 1823 - debug_pci ("The resources on this bus are as follows\n"); 1837 + debug_pci("The resources on this bus are as follows\n"); 1824 1838 1825 - debug_pci ("IO...\n"); 1839 + debug_pci("IO...\n"); 1826 1840 if (bus_cur->firstIO) { 1827 1841 res = bus_cur->firstIO; 1828 1842 while (res) { 1829 - debug_pci ("The range # is %d\n", res->rangeno); 1830 - debug_pci ("The bus, devfnc is %d, %x\n", res->busno, res->devfunc); 1831 - debug_pci ("[%x - %x], len=%x\n", res->start, res->end, res->len); 1843 + debug_pci("The range # is %d\n", res->rangeno); 1844 + debug_pci("The bus, devfnc is %d, %x\n", res->busno, res->devfunc); 1845 + debug_pci("[%x - %x], len=%x\n", res->start, res->end, res->len); 1832 1846 if (res->next) 1833 1847 res = res->next; 1834 1848 else if (res->nextRange) ··· 1837 1851 break; 1838 1852 } 1839 1853 } 1840 - debug_pci ("Mem...\n"); 1854 + debug_pci("Mem...\n"); 1841 1855 if (bus_cur->firstMem) { 1842 1856 res = bus_cur->firstMem; 1843 1857 while (res) { 1844 - debug_pci ("The range # is %d\n", res->rangeno); 1845 - debug_pci ("The bus, devfnc is %d, %x\n", res->busno, res->devfunc); 1846 - debug_pci ("[%x - %x], len=%x\n", res->start, res->end, res->len); 1858 + debug_pci("The range # is %d\n", res->rangeno); 1859 + debug_pci("The bus, devfnc is %d, %x\n", res->busno, res->devfunc); 1860 + debug_pci("[%x - %x], len=%x\n", res->start, res->end, res->len); 1847 1861 if (res->next) 1848 1862 res = res->next; 1849 1863 else if (res->nextRange) ··· 1852 1866 break; 1853 1867 } 1854 1868 } 1855 - debug_pci ("PFMem...\n"); 1869 + debug_pci("PFMem...\n"); 1856 1870 if (bus_cur->firstPFMem) { 1857 1871 res = bus_cur->firstPFMem; 1858 1872 while (res) { 1859 - debug_pci ("The range # is %d\n", res->rangeno); 1860 - debug_pci ("The bus, 
devfnc is %d, %x\n", res->busno, res->devfunc); 1861 - debug_pci ("[%x - %x], len=%x\n", res->start, res->end, res->len); 1873 + debug_pci("The range # is %d\n", res->rangeno); 1874 + debug_pci("The bus, devfnc is %d, %x\n", res->busno, res->devfunc); 1875 + debug_pci("[%x - %x], len=%x\n", res->start, res->end, res->len); 1862 1876 if (res->next) 1863 1877 res = res->next; 1864 1878 else if (res->nextRange) ··· 1868 1882 } 1869 1883 } 1870 1884 1871 - debug_pci ("PFMemFromMem...\n"); 1885 + debug_pci("PFMemFromMem...\n"); 1872 1886 if (bus_cur->firstPFMemFromMem) { 1873 1887 res = bus_cur->firstPFMemFromMem; 1874 1888 while (res) { 1875 - debug_pci ("The range # is %d\n", res->rangeno); 1876 - debug_pci ("The bus, devfnc is %d, %x\n", res->busno, res->devfunc); 1877 - debug_pci ("[%x - %x], len=%x\n", res->start, res->end, res->len); 1889 + debug_pci("The range # is %d\n", res->rangeno); 1890 + debug_pci("The bus, devfnc is %d, %x\n", res->busno, res->devfunc); 1891 + debug_pci("[%x - %x], len=%x\n", res->start, res->end, res->len); 1878 1892 res = res->next; 1879 1893 } 1880 1894 } 1881 1895 } 1882 - debug_pci ("***********************END***********************\n"); 1896 + debug_pci("***********************END***********************\n"); 1883 1897 } 1884 1898 1885 - static int range_exists_already (struct range_node * range, struct bus_node * bus_cur, u8 type) 1899 + static int range_exists_already(struct range_node *range, struct bus_node *bus_cur, u8 type) 1886 1900 { 1887 - struct range_node * range_cur = NULL; 1901 + struct range_node *range_cur = NULL; 1888 1902 switch (type) { 1889 1903 case IO: 1890 1904 range_cur = bus_cur->rangeIO; ··· 1896 1910 range_cur = bus_cur->rangePFMem; 1897 1911 break; 1898 1912 default: 1899 - err ("wrong type passed to find out if range already exists\n"); 1913 + err("wrong type passed to find out if range already exists\n"); 1900 1914 return -ENODEV; 1901 1915 } 1902 1916 ··· 1923 1937 * behind them All these are TO DO. 
1924 1938 * Also need to add more error checkings... (from fnc returns etc) 1925 1939 */ 1926 - static int __init update_bridge_ranges (struct bus_node **bus) 1940 + static int __init update_bridge_ranges(struct bus_node **bus) 1927 1941 { 1928 1942 u8 sec_busno, device, function, hdr_type, start_io_address, end_io_address; 1929 1943 u16 vendor_id, upper_io_start, upper_io_end, start_mem_address, end_mem_address; ··· 1941 1955 return -ENODEV; 1942 1956 ibmphp_pci_bus->number = bus_cur->busno; 1943 1957 1944 - debug ("inside %s\n", __func__); 1945 - debug ("bus_cur->busno = %x\n", bus_cur->busno); 1958 + debug("inside %s\n", __func__); 1959 + debug("bus_cur->busno = %x\n", bus_cur->busno); 1946 1960 1947 1961 for (device = 0; device < 32; device++) { 1948 1962 for (function = 0x00; function < 0x08; function++) { 1949 1963 devfn = PCI_DEVFN(device, function); 1950 - pci_bus_read_config_word (ibmphp_pci_bus, devfn, PCI_VENDOR_ID, &vendor_id); 1964 + pci_bus_read_config_word(ibmphp_pci_bus, devfn, PCI_VENDOR_ID, &vendor_id); 1951 1965 1952 1966 if (vendor_id != PCI_VENDOR_ID_NOTVALID) { 1953 1967 /* found correct device!!! 
*/ 1954 - pci_bus_read_config_byte (ibmphp_pci_bus, devfn, PCI_HEADER_TYPE, &hdr_type); 1968 + pci_bus_read_config_byte(ibmphp_pci_bus, devfn, PCI_HEADER_TYPE, &hdr_type); 1955 1969 1956 1970 switch (hdr_type) { 1957 1971 case PCI_HEADER_TYPE_NORMAL: ··· 1970 1984 temp++; 1971 1985 } 1972 1986 */ 1973 - pci_bus_read_config_byte (ibmphp_pci_bus, devfn, PCI_SECONDARY_BUS, &sec_busno); 1974 - bus_sec = find_bus_wprev (sec_busno, NULL, 0); 1987 + pci_bus_read_config_byte(ibmphp_pci_bus, devfn, PCI_SECONDARY_BUS, &sec_busno); 1988 + bus_sec = find_bus_wprev(sec_busno, NULL, 0); 1975 1989 /* this bus structure doesn't exist yet, PPB was configured during previous loading of ibmphp */ 1976 1990 if (!bus_sec) { 1977 - bus_sec = alloc_error_bus (NULL, sec_busno, 1); 1991 + bus_sec = alloc_error_bus(NULL, sec_busno, 1); 1978 1992 /* the rest will be populated during NVRAM call */ 1979 1993 return 0; 1980 1994 } 1981 - pci_bus_read_config_byte (ibmphp_pci_bus, devfn, PCI_IO_BASE, &start_io_address); 1982 - pci_bus_read_config_byte (ibmphp_pci_bus, devfn, PCI_IO_LIMIT, &end_io_address); 1983 - pci_bus_read_config_word (ibmphp_pci_bus, devfn, PCI_IO_BASE_UPPER16, &upper_io_start); 1984 - pci_bus_read_config_word (ibmphp_pci_bus, devfn, PCI_IO_LIMIT_UPPER16, &upper_io_end); 1995 + pci_bus_read_config_byte(ibmphp_pci_bus, devfn, PCI_IO_BASE, &start_io_address); 1996 + pci_bus_read_config_byte(ibmphp_pci_bus, devfn, PCI_IO_LIMIT, &end_io_address); 1997 + pci_bus_read_config_word(ibmphp_pci_bus, devfn, PCI_IO_BASE_UPPER16, &upper_io_start); 1998 + pci_bus_read_config_word(ibmphp_pci_bus, devfn, PCI_IO_LIMIT_UPPER16, &upper_io_end); 1985 1999 start_address = (start_io_address & PCI_IO_RANGE_MASK) << 8; 1986 2000 start_address |= (upper_io_start << 16); 1987 2001 end_address = (end_io_address & PCI_IO_RANGE_MASK) << 8; ··· 1990 2004 if ((start_address) && (start_address <= end_address)) { 1991 2005 range = kzalloc(sizeof(struct range_node), GFP_KERNEL); 1992 2006 if (!range) { 1993 - 
err ("out of system memory\n"); 2007 + err("out of system memory\n"); 1994 2008 return -ENOMEM; 1995 2009 } 1996 2010 range->start = start_address; 1997 2011 range->end = end_address + 0xfff; 1998 2012 1999 2013 if (bus_sec->noIORanges > 0) { 2000 - if (!range_exists_already (range, bus_sec, IO)) { 2001 - add_bus_range (IO, range, bus_sec); 2014 + if (!range_exists_already(range, bus_sec, IO)) { 2015 + add_bus_range(IO, range, bus_sec); 2002 2016 ++bus_sec->noIORanges; 2003 2017 } else { 2004 - kfree (range); 2018 + kfree(range); 2005 2019 range = NULL; 2006 2020 } 2007 2021 } else { ··· 2010 2024 bus_sec->rangeIO = range; 2011 2025 ++bus_sec->noIORanges; 2012 2026 } 2013 - fix_resources (bus_sec); 2027 + fix_resources(bus_sec); 2014 2028 2015 - if (ibmphp_find_resource (bus_cur, start_address, &io, IO)) { 2029 + if (ibmphp_find_resource(bus_cur, start_address, &io, IO)) { 2016 2030 io = kzalloc(sizeof(struct resource_node), GFP_KERNEL); 2017 2031 if (!io) { 2018 - kfree (range); 2019 - err ("out of system memory\n"); 2032 + kfree(range); 2033 + err("out of system memory\n"); 2020 2034 return -ENOMEM; 2021 2035 } 2022 2036 io->type = IO; ··· 2025 2039 io->start = start_address; 2026 2040 io->end = end_address + 0xfff; 2027 2041 io->len = io->end - io->start + 1; 2028 - ibmphp_add_resource (io); 2042 + ibmphp_add_resource(io); 2029 2043 } 2030 2044 } 2031 2045 2032 - pci_bus_read_config_word (ibmphp_pci_bus, devfn, PCI_MEMORY_BASE, &start_mem_address); 2033 - pci_bus_read_config_word (ibmphp_pci_bus, devfn, PCI_MEMORY_LIMIT, &end_mem_address); 2046 + pci_bus_read_config_word(ibmphp_pci_bus, devfn, PCI_MEMORY_BASE, &start_mem_address); 2047 + pci_bus_read_config_word(ibmphp_pci_bus, devfn, PCI_MEMORY_LIMIT, &end_mem_address); 2034 2048 2035 2049 start_address = 0x00000000 | (start_mem_address & PCI_MEMORY_RANGE_MASK) << 16; 2036 2050 end_address = 0x00000000 | (end_mem_address & PCI_MEMORY_RANGE_MASK) << 16; ··· 2039 2053 2040 2054 range = kzalloc(sizeof(struct 
range_node), GFP_KERNEL); 2041 2055 if (!range) { 2042 - err ("out of system memory\n"); 2056 + err("out of system memory\n"); 2043 2057 return -ENOMEM; 2044 2058 } 2045 2059 range->start = start_address; 2046 2060 range->end = end_address + 0xfffff; 2047 2061 2048 2062 if (bus_sec->noMemRanges > 0) { 2049 - if (!range_exists_already (range, bus_sec, MEM)) { 2050 - add_bus_range (MEM, range, bus_sec); 2063 + if (!range_exists_already(range, bus_sec, MEM)) { 2064 + add_bus_range(MEM, range, bus_sec); 2051 2065 ++bus_sec->noMemRanges; 2052 2066 } else { 2053 - kfree (range); 2067 + kfree(range); 2054 2068 range = NULL; 2055 2069 } 2056 2070 } else { ··· 2060 2074 ++bus_sec->noMemRanges; 2061 2075 } 2062 2076 2063 - fix_resources (bus_sec); 2077 + fix_resources(bus_sec); 2064 2078 2065 - if (ibmphp_find_resource (bus_cur, start_address, &mem, MEM)) { 2079 + if (ibmphp_find_resource(bus_cur, start_address, &mem, MEM)) { 2066 2080 mem = kzalloc(sizeof(struct resource_node), GFP_KERNEL); 2067 2081 if (!mem) { 2068 - kfree (range); 2069 - err ("out of system memory\n"); 2082 + kfree(range); 2083 + err("out of system memory\n"); 2070 2084 return -ENOMEM; 2071 2085 } 2072 2086 mem->type = MEM; ··· 2075 2089 mem->start = start_address; 2076 2090 mem->end = end_address + 0xfffff; 2077 2091 mem->len = mem->end - mem->start + 1; 2078 - ibmphp_add_resource (mem); 2092 + ibmphp_add_resource(mem); 2079 2093 } 2080 2094 } 2081 - pci_bus_read_config_word (ibmphp_pci_bus, devfn, PCI_PREF_MEMORY_BASE, &start_mem_address); 2082 - pci_bus_read_config_word (ibmphp_pci_bus, devfn, PCI_PREF_MEMORY_LIMIT, &end_mem_address); 2083 - pci_bus_read_config_dword (ibmphp_pci_bus, devfn, PCI_PREF_BASE_UPPER32, &upper_start); 2084 - pci_bus_read_config_dword (ibmphp_pci_bus, devfn, PCI_PREF_LIMIT_UPPER32, &upper_end); 2095 + pci_bus_read_config_word(ibmphp_pci_bus, devfn, PCI_PREF_MEMORY_BASE, &start_mem_address); 2096 + pci_bus_read_config_word(ibmphp_pci_bus, devfn, PCI_PREF_MEMORY_LIMIT, 
&end_mem_address); 2097 + pci_bus_read_config_dword(ibmphp_pci_bus, devfn, PCI_PREF_BASE_UPPER32, &upper_start); 2098 + pci_bus_read_config_dword(ibmphp_pci_bus, devfn, PCI_PREF_LIMIT_UPPER32, &upper_end); 2085 2099 start_address = 0x00000000 | (start_mem_address & PCI_MEMORY_RANGE_MASK) << 16; 2086 2100 end_address = 0x00000000 | (end_mem_address & PCI_MEMORY_RANGE_MASK) << 16; 2087 2101 #if BITS_PER_LONG == 64 ··· 2093 2107 2094 2108 range = kzalloc(sizeof(struct range_node), GFP_KERNEL); 2095 2109 if (!range) { 2096 - err ("out of system memory\n"); 2110 + err("out of system memory\n"); 2097 2111 return -ENOMEM; 2098 2112 } 2099 2113 range->start = start_address; 2100 2114 range->end = end_address + 0xfffff; 2101 2115 2102 2116 if (bus_sec->noPFMemRanges > 0) { 2103 - if (!range_exists_already (range, bus_sec, PFMEM)) { 2104 - add_bus_range (PFMEM, range, bus_sec); 2117 + if (!range_exists_already(range, bus_sec, PFMEM)) { 2118 + add_bus_range(PFMEM, range, bus_sec); 2105 2119 ++bus_sec->noPFMemRanges; 2106 2120 } else { 2107 - kfree (range); 2121 + kfree(range); 2108 2122 range = NULL; 2109 2123 } 2110 2124 } else { ··· 2114 2128 ++bus_sec->noPFMemRanges; 2115 2129 } 2116 2130 2117 - fix_resources (bus_sec); 2118 - if (ibmphp_find_resource (bus_cur, start_address, &pfmem, PFMEM)) { 2131 + fix_resources(bus_sec); 2132 + if (ibmphp_find_resource(bus_cur, start_address, &pfmem, PFMEM)) { 2119 2133 pfmem = kzalloc(sizeof(struct resource_node), GFP_KERNEL); 2120 2134 if (!pfmem) { 2121 - kfree (range); 2122 - err ("out of system memory\n"); 2135 + kfree(range); 2136 + err("out of system memory\n"); 2123 2137 return -ENOMEM; 2124 2138 } 2125 2139 pfmem->type = PFMEM; ··· 2130 2144 pfmem->len = pfmem->end - pfmem->start + 1; 2131 2145 pfmem->fromMem = 0; 2132 2146 2133 - ibmphp_add_resource (pfmem); 2147 + ibmphp_add_resource(pfmem); 2134 2148 } 2135 2149 } 2136 2150 break;
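Beyond the whitespace cleanup, the ibmphp_res.c hunks above replace open-coded `list_for_each_safe()` + `list_entry()` pairs with `list_for_each_entry_safe()`, the idiom required whenever entries are freed mid-walk (as `ibmphp_free_resources()` does with its bus list). A minimal self-contained sketch of that pattern follows; the `node`/`sum_and_free` names are illustrative, and the macro spells out the entry type explicitly where the kernel version infers it with `typeof()`:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Minimal intrusive list, mirroring include/linux/list.h */
struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD_INIT(name) { &(name), &(name) }

static void list_add_tail(struct list_head *entry, struct list_head *head)
{
	entry->prev = head->prev;
	entry->next = head;
	head->prev->next = entry;
	head->prev = entry;
}

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Safe variant: 'n' caches the next entry, so the current one
 * may be freed inside the loop body without breaking the walk. */
#define list_for_each_entry_safe(pos, n, head, member, type)		\
	for (pos = container_of((head)->next, type, member),		\
	     n = container_of(pos->member.next, type, member);		\
	     &pos->member != (head);					\
	     pos = n, n = container_of(n->member.next, type, member))

struct node {
	int val;
	struct list_head link;
};

/* Sum all node values, freeing each node as we go. */
static int sum_and_free(struct list_head *head)
{
	struct node *cur, *next;
	int sum = 0;

	list_for_each_entry_safe(cur, next, head, link, struct node) {
		sum += cur->val;
		free(cur);
	}
	return sum;
}
```

The non-safe `list_for_each_entry()` would dereference the just-freed node when advancing, which is exactly why the free-and-iterate sites in this driver use the `_safe` form.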
+6 -8
drivers/pci/hotplug/pci_hotplug_core.c
··· 45 45 46 46 #define MY_NAME "pci_hotplug" 47 47 48 - #define dbg(fmt, arg...) do { if (debug) printk(KERN_DEBUG "%s: %s: " fmt , MY_NAME , __func__ , ## arg); } while (0) 49 - #define err(format, arg...) printk(KERN_ERR "%s: " format , MY_NAME , ## arg) 50 - #define info(format, arg...) printk(KERN_INFO "%s: " format , MY_NAME , ## arg) 51 - #define warn(format, arg...) printk(KERN_WARNING "%s: " format , MY_NAME , ## arg) 48 + #define dbg(fmt, arg...) do { if (debug) printk(KERN_DEBUG "%s: %s: " fmt, MY_NAME, __func__, ## arg); } while (0) 49 + #define err(format, arg...) printk(KERN_ERR "%s: " format, MY_NAME, ## arg) 50 + #define info(format, arg...) printk(KERN_INFO "%s: " format, MY_NAME, ## arg) 51 + #define warn(format, arg...) printk(KERN_WARNING "%s: " format, MY_NAME, ## arg) 52 52 53 53 54 54 /* local variables */ ··· 226 226 u32 test; 227 227 int retval = 0; 228 228 229 - ltest = simple_strtoul (buf, NULL, 10); 229 + ltest = simple_strtoul(buf, NULL, 10); 230 230 test = (u32)(ltest & 0xffffffff); 231 231 dbg("test = %d\n", test); 232 232 ··· 396 396 static struct hotplug_slot *get_slot_from_name(const char *name) 397 397 { 398 398 struct hotplug_slot *slot; 399 - struct list_head *tmp; 400 399 401 - list_for_each(tmp, &pci_hotplug_slot_list) { 402 - slot = list_entry(tmp, struct hotplug_slot, slot_list); 400 + list_for_each_entry(slot, &pci_hotplug_slot_list, slot_list) { 403 401 if (strcmp(hotplug_slot_name(slot), name) == 0) 404 402 return slot; 405 403 }
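The `get_slot_from_name()` hunk above drops the raw `struct list_head *tmp` cursor in favor of `list_for_each_entry()`, which hands back the containing structure directly. A self-contained sketch of that lookup shape, under the same assumptions as before (explicit type argument instead of the kernel's `typeof()`, hypothetical `demo_slot` names):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD_INIT(name) { &(name), &(name) }

static void list_add_tail(struct list_head *entry, struct list_head *head)
{
	entry->prev = head->prev;
	entry->next = head;
	head->prev->next = entry;
	head->prev = entry;
}

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Non-safe walk: fine for pure lookups where nothing is removed. */
#define list_for_each_entry(pos, head, member, type)		\
	for (pos = container_of((head)->next, type, member);	\
	     &pos->member != (head);				\
	     pos = container_of(pos->member.next, type, member))

struct demo_slot {
	const char *name;
	struct list_head slot_list;
};

/* Same shape as get_slot_from_name(): return the matching
 * entry, or NULL when no slot carries that name. */
static struct demo_slot *find_by_name(struct list_head *head, const char *name)
{
	struct demo_slot *slot;

	list_for_each_entry(slot, head, slot_list, struct demo_slot) {
		if (strcmp(slot->name, name) == 0)
			return slot;
	}
	return NULL;
}
```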
+4 -4
drivers/pci/hotplug/pciehp.h
··· 47 47 #define dbg(format, arg...) \ 48 48 do { \ 49 49 if (pciehp_debug) \ 50 - printk(KERN_DEBUG "%s: " format, MY_NAME , ## arg); \ 50 + printk(KERN_DEBUG "%s: " format, MY_NAME, ## arg); \ 51 51 } while (0) 52 52 #define err(format, arg...) \ 53 - printk(KERN_ERR "%s: " format, MY_NAME , ## arg) 53 + printk(KERN_ERR "%s: " format, MY_NAME, ## arg) 54 54 #define info(format, arg...) \ 55 - printk(KERN_INFO "%s: " format, MY_NAME , ## arg) 55 + printk(KERN_INFO "%s: " format, MY_NAME, ## arg) 56 56 #define warn(format, arg...) \ 57 - printk(KERN_WARNING "%s: " format, MY_NAME , ## arg) 57 + printk(KERN_WARNING "%s: " format, MY_NAME, ## arg) 58 58 59 59 #define ctrl_dbg(ctrl, format, arg...) \ 60 60 do { \
+8 -8
drivers/pci/hotplug/pciehp_core.c
··· 62 62 63 63 #define PCIE_MODULE_NAME "pciehp" 64 64 65 - static int set_attention_status (struct hotplug_slot *slot, u8 value); 66 - static int enable_slot (struct hotplug_slot *slot); 67 - static int disable_slot (struct hotplug_slot *slot); 68 - static int get_power_status (struct hotplug_slot *slot, u8 *value); 69 - static int get_attention_status (struct hotplug_slot *slot, u8 *value); 70 - static int get_latch_status (struct hotplug_slot *slot, u8 *value); 71 - static int get_adapter_status (struct hotplug_slot *slot, u8 *value); 72 - static int reset_slot (struct hotplug_slot *slot, int probe); 65 + static int set_attention_status(struct hotplug_slot *slot, u8 value); 66 + static int enable_slot(struct hotplug_slot *slot); 67 + static int disable_slot(struct hotplug_slot *slot); 68 + static int get_power_status(struct hotplug_slot *slot, u8 *value); 69 + static int get_attention_status(struct hotplug_slot *slot, u8 *value); 70 + static int get_latch_status(struct hotplug_slot *slot, u8 *value); 71 + static int get_adapter_status(struct hotplug_slot *slot, u8 *value); 72 + static int reset_slot(struct hotplug_slot *slot, int probe); 73 73 74 74 /** 75 75 * release_slot - free up the memory used by a slot
+2
drivers/pci/hotplug/pciehp_ctrl.c
··· 511 511 case STATIC_STATE: 512 512 p_slot->state = POWEROFF_STATE; 513 513 mutex_unlock(&p_slot->lock); 514 + mutex_lock(&p_slot->hotplug_lock); 514 515 retval = pciehp_disable_slot(p_slot); 516 + mutex_unlock(&p_slot->hotplug_lock); 515 517 mutex_lock(&p_slot->lock); 516 518 p_slot->state = STATIC_STATE; 517 519 break;
+14 -17
drivers/pci/hotplug/pcihp_skeleton.c
··· 52 52 do { \ 53 53 if (debug) \ 54 54 printk(KERN_DEBUG "%s: " format "\n", \ 55 - MY_NAME , ## arg); \ 55 + MY_NAME, ## arg); \ 56 56 } while (0) 57 - #define err(format, arg...) printk(KERN_ERR "%s: " format "\n", MY_NAME , ## arg) 58 - #define info(format, arg...) printk(KERN_INFO "%s: " format "\n", MY_NAME , ## arg) 59 - #define warn(format, arg...) printk(KERN_WARNING "%s: " format "\n", MY_NAME , ## arg) 57 + #define err(format, arg...) printk(KERN_ERR "%s: " format "\n", MY_NAME, ## arg) 58 + #define info(format, arg...) printk(KERN_INFO "%s: " format "\n", MY_NAME, ## arg) 59 + #define warn(format, arg...) printk(KERN_WARNING "%s: " format "\n", MY_NAME, ## arg) 60 60 61 61 /* local variables */ 62 62 static bool debug; ··· 72 72 module_param(debug, bool, 0644); 73 73 MODULE_PARM_DESC(debug, "Debugging mode enabled or not"); 74 74 75 - static int enable_slot (struct hotplug_slot *slot); 76 - static int disable_slot (struct hotplug_slot *slot); 77 - static int set_attention_status (struct hotplug_slot *slot, u8 value); 78 - static int hardware_test (struct hotplug_slot *slot, u32 value); 79 - static int get_power_status (struct hotplug_slot *slot, u8 *value); 80 - static int get_attention_status (struct hotplug_slot *slot, u8 *value); 81 - static int get_latch_status (struct hotplug_slot *slot, u8 *value); 82 - static int get_adapter_status (struct hotplug_slot *slot, u8 *value); 75 + static int enable_slot(struct hotplug_slot *slot); 76 + static int disable_slot(struct hotplug_slot *slot); 77 + static int set_attention_status(struct hotplug_slot *slot, u8 value); 78 + static int hardware_test(struct hotplug_slot *slot, u32 value); 79 + static int get_power_status(struct hotplug_slot *slot, u8 *value); 80 + static int get_attention_status(struct hotplug_slot *slot, u8 *value); 81 + static int get_latch_status(struct hotplug_slot *slot, u8 *value); 82 + static int get_adapter_status(struct hotplug_slot *slot, u8 *value); 83 83 84 84 static struct 
hotplug_slot_ops skel_hotplug_slot_ops = { 85 85 .enable_slot = enable_slot, ··· 321 321 322 322 static void __exit cleanup_slots(void) 323 323 { 324 - struct list_head *tmp; 325 - struct list_head *next; 326 - struct slot *slot; 324 + struct slot *slot, *next; 327 325 328 326 /* 329 327 * Unregister all of our slots with the pci_hotplug subsystem. 330 328 * Memory will be freed in release_slot() callback after slot's 331 329 * lifespan is finished. 332 330 */ 333 - list_for_each_safe(tmp, next, &slot_list) { 334 - slot = list_entry(tmp, struct slot, slot_list); 331 + list_for_each_entry_safe(slot, next, &slot_list, slot_list) { 335 332 list_del(&slot->slot_list); 336 333 pci_hp_deregister(slot->hotplug_slot); 337 334 }
+3 -4
drivers/pci/hotplug/rpadlpar_core.c
··· 114 114 */ 115 115 static struct slot *find_php_slot(struct device_node *dn) 116 116 { 117 - struct list_head *tmp, *n; 118 - struct slot *slot; 117 + struct slot *slot, *next; 119 118 120 - list_for_each_safe(tmp, n, &rpaphp_slot_head) { 121 - slot = list_entry(tmp, struct slot, rpaphp_slot_list); 119 + list_for_each_entry_safe(slot, next, &rpaphp_slot_head, 120 + rpaphp_slot_list) { 122 121 if (slot->dn == dn) 123 122 return slot; 124 123 }
+4 -4
drivers/pci/hotplug/rpaphp.h
··· 51 51 do { \ 52 52 if (rpaphp_debug) \ 53 53 printk(KERN_DEBUG "%s: " format, \ 54 - MY_NAME , ## arg); \ 54 + MY_NAME, ## arg); \ 55 55 } while (0) 56 - #define err(format, arg...) printk(KERN_ERR "%s: " format, MY_NAME , ## arg) 57 - #define info(format, arg...) printk(KERN_INFO "%s: " format, MY_NAME , ## arg) 58 - #define warn(format, arg...) printk(KERN_WARNING "%s: " format, MY_NAME , ## arg) 56 + #define err(format, arg...) printk(KERN_ERR "%s: " format, MY_NAME, ## arg) 57 + #define info(format, arg...) printk(KERN_INFO "%s: " format, MY_NAME, ## arg) 58 + #define warn(format, arg...) printk(KERN_WARNING "%s: " format, MY_NAME, ## arg) 59 59 60 60 /* slot states */ 61 61
+4 -5
drivers/pci/hotplug/rpaphp_core.c
··· 94 94 int retval, level; 95 95 struct slot *slot = (struct slot *)hotplug_slot->private; 96 96 97 - retval = rtas_get_power_level (slot->power_domain, &level); 97 + retval = rtas_get_power_level(slot->power_domain, &level); 98 98 if (!retval) 99 99 *value = level; 100 100 return retval; ··· 356 356 357 357 static void __exit cleanup_slots(void) 358 358 { 359 - struct list_head *tmp, *n; 360 - struct slot *slot; 359 + struct slot *slot, *next; 361 360 362 361 /* 363 362 * Unregister all of our slots with the pci_hotplug subsystem, ··· 364 365 * memory will be freed in release_slot callback. 365 366 */ 366 367 367 - list_for_each_safe(tmp, n, &rpaphp_slot_head) { 368 - slot = list_entry(tmp, struct slot, rpaphp_slot_list); 368 + list_for_each_entry_safe(slot, next, &rpaphp_slot_head, 369 + rpaphp_slot_list) { 369 370 list_del(&slot->rpaphp_slot_list); 370 371 pci_hp_deregister(slot->hotplug_slot); 371 372 }
+1 -1
drivers/pci/hotplug/rpaphp_pci.c
··· 126 126 if (rpaphp_debug) { 127 127 struct pci_dev *dev; 128 128 dbg("%s: pci_devs of slot[%s]\n", __func__, slot->dn->full_name); 129 - list_for_each_entry (dev, &bus->devices, bus_list) 129 + list_for_each_entry(dev, &bus->devices, bus_list) 130 130 dbg("\t%s\n", pci_name(dev)); 131 131 } 132 132 }
+1 -1
drivers/pci/hotplug/rpaphp_slot.c
··· 48 48 } 49 49 50 50 struct slot *alloc_slot_struct(struct device_node *dn, 51 - int drc_index, char *drc_name, int power_domain) 51 + int drc_index, char *drc_name, int power_domain) 52 52 { 53 53 struct slot *slot; 54 54
+3 -4
drivers/pci/hotplug/s390_pci_hpc.c
··· 201 201 202 202 void zpci_exit_slot(struct zpci_dev *zdev) 203 203 { 204 - struct list_head *tmp, *n; 205 - struct slot *slot; 204 + struct slot *slot, *next; 206 205 207 - list_for_each_safe(tmp, n, &s390_hotplug_slot_list) { 208 - slot = list_entry(tmp, struct slot, slot_list); 206 + list_for_each_entry_safe(slot, next, &s390_hotplug_slot_list, 207 + slot_list) { 209 208 if (slot->zdev != zdev) 210 209 continue; 211 210 list_del(&slot->slot_list);
+3 -3
drivers/pci/hotplug/sgi_hotplug.c
··· 99 99 if (!slot) 100 100 return retval; 101 101 102 - retval = sprintf (buf, "%s\n", slot->physical_path); 102 + retval = sprintf(buf, "%s\n", slot->physical_path); 103 103 return retval; 104 104 } 105 105 ··· 313 313 } 314 314 315 315 if ((action == PCI_REQ_SLOT_DISABLE) && rc) { 316 - dev_dbg(&slot->pci_bus->self->dev,"remove failed rc = %d\n", rc); 316 + dev_dbg(&slot->pci_bus->self->dev, "remove failed rc = %d\n", rc); 317 317 } 318 318 319 319 return rc; ··· 488 488 489 489 /* free the ACPI resources for the slot */ 490 490 if (SN_ACPI_BASE_SUPPORT() && 491 - PCI_CONTROLLER(slot->pci_bus)->companion) { 491 + PCI_CONTROLLER(slot->pci_bus)->companion) { 492 492 unsigned long long adr; 493 493 struct acpi_device *device; 494 494 acpi_handle phandle;
+7 -7
drivers/pci/hotplug/shpchp.h
··· 50 50 #define dbg(format, arg...) \ 51 51 do { \ 52 52 if (shpchp_debug) \ 53 - printk(KERN_DEBUG "%s: " format, MY_NAME , ## arg); \ 53 + printk(KERN_DEBUG "%s: " format, MY_NAME, ## arg); \ 54 54 } while (0) 55 55 #define err(format, arg...) \ 56 - printk(KERN_ERR "%s: " format, MY_NAME , ## arg) 56 + printk(KERN_ERR "%s: " format, MY_NAME, ## arg) 57 57 #define info(format, arg...) \ 58 - printk(KERN_INFO "%s: " format, MY_NAME , ## arg) 58 + printk(KERN_INFO "%s: " format, MY_NAME, ## arg) 59 59 #define warn(format, arg...) \ 60 - printk(KERN_WARNING "%s: " format, MY_NAME , ## arg) 60 + printk(KERN_WARNING "%s: " format, MY_NAME, ## arg) 61 61 62 62 #define ctrl_dbg(ctrl, format, arg...) \ 63 63 do { \ ··· 84 84 u8 presence_save; 85 85 u8 pwr_save; 86 86 struct controller *ctrl; 87 - struct hpc_ops *hpc_ops; 87 + const struct hpc_ops *hpc_ops; 88 88 struct hotplug_slot *hotplug_slot; 89 89 struct list_head slot_list; 90 90 struct delayed_work work; /* work for button event */ ··· 106 106 int slot_num_inc; /* 1 or -1 */ 107 107 struct pci_dev *pci_dev; 108 108 struct list_head slot_list; 109 - struct hpc_ops *hpc_ops; 109 + const struct hpc_ops *hpc_ops; 110 110 wait_queue_head_t queue; /* sleep & wake process */ 111 111 u8 slot_device_offset; 112 112 u32 pcix_misc2_reg; /* for amd pogo errata */ ··· 295 295 pci_write_config_dword(p_slot->ctrl->pci_dev, PCIX_MEM_BASE_LIMIT_OFFSET, rse_set); 296 296 } 297 297 /* restore MiscII register */ 298 - pci_read_config_dword(p_slot->ctrl->pci_dev, PCIX_MISCII_OFFSET, &pcix_misc2_temp ); 298 + pci_read_config_dword(p_slot->ctrl->pci_dev, PCIX_MISCII_OFFSET, &pcix_misc2_temp); 299 299 300 300 if (p_slot->ctrl->pcix_misc2_reg & SERRFATALENABLE_MASK) 301 301 pcix_misc2_temp |= SERRFATALENABLE_MASK;
+16 -19
drivers/pci/hotplug/shpchp_core.c
··· 57 57 58 58 #define SHPC_MODULE_NAME "shpchp" 59 59 60 - static int set_attention_status (struct hotplug_slot *slot, u8 value); 61 - static int enable_slot (struct hotplug_slot *slot); 62 - static int disable_slot (struct hotplug_slot *slot); 63 - static int get_power_status (struct hotplug_slot *slot, u8 *value); 64 - static int get_attention_status (struct hotplug_slot *slot, u8 *value); 65 - static int get_latch_status (struct hotplug_slot *slot, u8 *value); 66 - static int get_adapter_status (struct hotplug_slot *slot, u8 *value); 60 + static int set_attention_status(struct hotplug_slot *slot, u8 value); 61 + static int enable_slot(struct hotplug_slot *slot); 62 + static int disable_slot(struct hotplug_slot *slot); 63 + static int get_power_status(struct hotplug_slot *slot, u8 *value); 64 + static int get_attention_status(struct hotplug_slot *slot, u8 *value); 65 + static int get_latch_status(struct hotplug_slot *slot, u8 *value); 66 + static int get_adapter_status(struct hotplug_slot *slot, u8 *value); 67 67 68 68 static struct hotplug_slot_ops shpchp_hotplug_slot_ops = { 69 69 .set_attention_status = set_attention_status, ··· 178 178 179 179 void cleanup_slots(struct controller *ctrl) 180 180 { 181 - struct list_head *tmp; 182 - struct list_head *next; 183 - struct slot *slot; 181 + struct slot *slot, *next; 184 182 185 - list_for_each_safe(tmp, next, &ctrl->slot_list) { 186 - slot = list_entry(tmp, struct slot, slot_list); 183 + list_for_each_entry_safe(slot, next, &ctrl->slot_list, slot_list) { 187 184 list_del(&slot->slot_list); 188 185 cancel_delayed_work(&slot->work); 189 186 destroy_workqueue(slot->wq); ··· 191 194 /* 192 195 * set_attention_status - Turns the Amber LED for a slot on, off or blink 193 196 */ 194 - static int set_attention_status (struct hotplug_slot *hotplug_slot, u8 status) 197 + static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status) 195 198 { 196 199 struct slot *slot = get_slot(hotplug_slot); 197 200 ··· 
204 207 return 0; 205 208 } 206 209 207 - static int enable_slot (struct hotplug_slot *hotplug_slot) 210 + static int enable_slot(struct hotplug_slot *hotplug_slot) 208 211 { 209 212 struct slot *slot = get_slot(hotplug_slot); 210 213 ··· 214 217 return shpchp_sysfs_enable_slot(slot); 215 218 } 216 219 217 - static int disable_slot (struct hotplug_slot *hotplug_slot) 220 + static int disable_slot(struct hotplug_slot *hotplug_slot) 218 221 { 219 222 struct slot *slot = get_slot(hotplug_slot); 220 223 ··· 224 227 return shpchp_sysfs_disable_slot(slot); 225 228 } 226 229 227 - static int get_power_status (struct hotplug_slot *hotplug_slot, u8 *value) 230 + static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value) 228 231 { 229 232 struct slot *slot = get_slot(hotplug_slot); 230 233 int retval; ··· 239 242 return 0; 240 243 } 241 244 242 - static int get_attention_status (struct hotplug_slot *hotplug_slot, u8 *value) 245 + static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value) 243 246 { 244 247 struct slot *slot = get_slot(hotplug_slot); 245 248 int retval; ··· 254 257 return 0; 255 258 } 256 259 257 - static int get_latch_status (struct hotplug_slot *hotplug_slot, u8 *value) 260 + static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value) 258 261 { 259 262 struct slot *slot = get_slot(hotplug_slot); 260 263 int retval; ··· 269 272 return 0; 270 273 } 271 274 272 - static int get_adapter_status (struct hotplug_slot *hotplug_slot, u8 *value) 275 + static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value) 273 276 { 274 277 struct slot *slot = get_slot(hotplug_slot); 275 278 int retval;
+2 -2
drivers/pci/hotplug/shpchp_hpc.c
··· 542 542 u8 slot_cmd = 0; 543 543 544 544 switch (value) { 545 - case 0 : 545 + case 0: 546 546 slot_cmd = SET_ATTN_OFF; /* OFF */ 547 547 break; 548 548 case 1: ··· 910 910 return retval; 911 911 } 912 912 913 - static struct hpc_ops shpchp_hpc_ops = { 913 + static const struct hpc_ops shpchp_hpc_ops = { 914 914 .power_on_slot = hpc_power_on_slot, 915 915 .slot_enable = hpc_slot_enable, 916 916 .slot_disable = hpc_slot_disable,
+5 -5
drivers/pci/hotplug/shpchp_sysfs.c
··· 35 35 36 36 /* A few routines that create sysfs entries for the hot plug controller */ 37 37 38 - static ssize_t show_ctrl (struct device *dev, struct device_attribute *attr, char *buf) 38 + static ssize_t show_ctrl(struct device *dev, struct device_attribute *attr, char *buf) 39 39 { 40 40 struct pci_dev *pdev; 41 41 char *out = buf; ··· 43 43 struct resource *res; 44 44 struct pci_bus *bus; 45 45 46 - pdev = container_of (dev, struct pci_dev, dev); 46 + pdev = to_pci_dev(dev); 47 47 bus = pdev->subordinate; 48 48 49 49 out += sprintf(buf, "Free resources: memory\n"); ··· 83 83 84 84 return out - buf; 85 85 } 86 - static DEVICE_ATTR (ctrl, S_IRUGO, show_ctrl, NULL); 86 + static DEVICE_ATTR(ctrl, S_IRUGO, show_ctrl, NULL); 87 87 88 - int shpchp_create_ctrl_files (struct controller *ctrl) 88 + int shpchp_create_ctrl_files(struct controller *ctrl) 89 89 { 90 - return device_create_file (&ctrl->pci_dev->dev, &dev_attr_ctrl); 90 + return device_create_file(&ctrl->pci_dev->dev, &dev_attr_ctrl); 91 91 } 92 92 93 93 void shpchp_remove_ctrl_files(struct controller *ctrl)
-4
drivers/pci/msi.c
··· 1026 1026 } 1027 1027 EXPORT_SYMBOL(pci_msi_enabled); 1028 1028 1029 - void pci_msi_init_pci_dev(struct pci_dev *dev) 1030 - { 1031 - } 1032 - 1033 1029 /** 1034 1030 * pci_enable_msi_range - configure device's MSI capability structure 1035 1031 * @dev: device to configure
+2 -2
drivers/pci/pci-label.c
··· 77 77 struct device *dev; 78 78 struct pci_dev *pdev; 79 79 80 - dev = container_of(kobj, struct device, kobj); 80 + dev = kobj_to_dev(kobj); 81 81 pdev = to_pci_dev(dev); 82 82 83 83 return find_smbios_instance_string(pdev, NULL, SMBIOS_ATTR_NONE) ? ··· 221 221 { 222 222 struct device *dev; 223 223 224 - dev = container_of(kobj, struct device, kobj); 224 + dev = kobj_to_dev(kobj); 225 225 226 226 if (device_has_dsm(dev)) 227 227 return S_IRUGO;
+24 -34
drivers/pci/pci-sysfs.c
··· 630 630 struct bin_attribute *bin_attr, char *buf, 631 631 loff_t off, size_t count) 632 632 { 633 - struct pci_dev *dev = to_pci_dev(container_of(kobj, struct device, 634 - kobj)); 633 + struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj)); 635 634 unsigned int size = 64; 636 635 loff_t init_off = off; 637 636 u8 *data = (u8 *) buf; ··· 706 707 struct bin_attribute *bin_attr, char *buf, 707 708 loff_t off, size_t count) 708 709 { 709 - struct pci_dev *dev = to_pci_dev(container_of(kobj, struct device, 710 - kobj)); 710 + struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj)); 711 711 unsigned int size = count; 712 712 loff_t init_off = off; 713 713 u8 *data = (u8 *) buf; ··· 767 769 struct bin_attribute *bin_attr, char *buf, 768 770 loff_t off, size_t count) 769 771 { 770 - struct pci_dev *dev = 771 - to_pci_dev(container_of(kobj, struct device, kobj)); 772 + struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj)); 772 773 773 774 if (off > bin_attr->size) 774 775 count = 0; ··· 781 784 struct bin_attribute *bin_attr, char *buf, 782 785 loff_t off, size_t count) 783 786 { 784 - struct pci_dev *dev = 785 - to_pci_dev(container_of(kobj, struct device, kobj)); 787 + struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj)); 786 788 787 789 if (off > bin_attr->size) 788 790 count = 0; ··· 808 812 struct bin_attribute *bin_attr, char *buf, 809 813 loff_t off, size_t count) 810 814 { 811 - struct pci_bus *bus = to_pci_bus(container_of(kobj, struct device, 812 - kobj)); 815 + struct pci_bus *bus = to_pci_bus(kobj_to_dev(kobj)); 813 816 814 817 /* Only support 1, 2 or 4 byte accesses */ 815 818 if (count != 1 && count != 2 && count != 4) ··· 833 838 struct bin_attribute *bin_attr, char *buf, 834 839 loff_t off, size_t count) 835 840 { 836 - struct pci_bus *bus = to_pci_bus(container_of(kobj, struct device, 837 - kobj)); 841 + struct pci_bus *bus = to_pci_bus(kobj_to_dev(kobj)); 838 842 839 843 /* Only support 1, 2 or 4 byte accesses */ 840 844 if (count != 1 && count != 2 && count 
!= 4) ··· 857 863 struct bin_attribute *attr, 858 864 struct vm_area_struct *vma) 859 865 { 860 - struct pci_bus *bus = to_pci_bus(container_of(kobj, struct device, 861 - kobj)); 866 + struct pci_bus *bus = to_pci_bus(kobj_to_dev(kobj)); 862 867 863 868 return pci_mmap_legacy_page_range(bus, vma, pci_mmap_mem); 864 869 } ··· 877 884 struct bin_attribute *attr, 878 885 struct vm_area_struct *vma) 879 886 { 880 - struct pci_bus *bus = to_pci_bus(container_of(kobj, struct device, 881 - kobj)); 887 + struct pci_bus *bus = to_pci_bus(kobj_to_dev(kobj)); 882 888 883 889 return pci_mmap_legacy_page_range(bus, vma, pci_mmap_io); 884 890 } ··· 992 1000 static int pci_mmap_resource(struct kobject *kobj, struct bin_attribute *attr, 993 1001 struct vm_area_struct *vma, int write_combine) 994 1002 { 995 - struct pci_dev *pdev = to_pci_dev(container_of(kobj, 996 - struct device, kobj)); 1003 + struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj)); 997 1004 struct resource *res = attr->private; 998 1005 enum pci_mmap_state mmap_type; 999 1006 resource_size_t start, end; ··· 1045 1054 struct bin_attribute *attr, char *buf, 1046 1055 loff_t off, size_t count, bool write) 1047 1056 { 1048 - struct pci_dev *pdev = to_pci_dev(container_of(kobj, 1049 - struct device, kobj)); 1057 + struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj)); 1050 1058 struct resource *res = attr->private; 1051 1059 unsigned long port = off; 1052 1060 int i; ··· 1215 1225 struct bin_attribute *bin_attr, char *buf, 1216 1226 loff_t off, size_t count) 1217 1227 { 1218 - struct pci_dev *pdev = to_pci_dev(container_of(kobj, struct device, kobj)); 1228 + struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj)); 1219 1229 1220 1230 if ((off == 0) && (*buf == '0') && (count == 2)) 1221 1231 pdev->rom_attr_enabled = 0; ··· 1241 1251 struct bin_attribute *bin_attr, char *buf, 1242 1252 loff_t off, size_t count) 1243 1253 { 1244 - struct pci_dev *pdev = to_pci_dev(container_of(kobj, struct device, kobj)); 1254 + struct 
pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj)); 1245 1255 void __iomem *rom; 1246 1256 size_t size; 1247 1257 ··· 1362 1372 if (!sysfs_initialized) 1363 1373 return -EACCES; 1364 1374 1365 - if (pdev->cfg_size < PCI_CFG_SPACE_EXP_SIZE) 1366 - retval = sysfs_create_bin_file(&pdev->dev.kobj, &pci_config_attr); 1367 - else 1375 + if (pdev->cfg_size > PCI_CFG_SPACE_SIZE) 1368 1376 retval = sysfs_create_bin_file(&pdev->dev.kobj, &pcie_config_attr); 1377 + else 1378 + retval = sysfs_create_bin_file(&pdev->dev.kobj, &pci_config_attr); 1369 1379 if (retval) 1370 1380 goto err; 1371 1381 ··· 1417 1427 err_resource_files: 1418 1428 pci_remove_resource_files(pdev); 1419 1429 err_config_file: 1420 - if (pdev->cfg_size < PCI_CFG_SPACE_EXP_SIZE) 1421 - sysfs_remove_bin_file(&pdev->dev.kobj, &pci_config_attr); 1422 - else 1430 + if (pdev->cfg_size > PCI_CFG_SPACE_SIZE) 1423 1431 sysfs_remove_bin_file(&pdev->dev.kobj, &pcie_config_attr); 1432 + else 1433 + sysfs_remove_bin_file(&pdev->dev.kobj, &pci_config_attr); 1424 1434 err: 1425 1435 return retval; 1426 1436 } ··· 1454 1464 1455 1465 pci_remove_capabilities_sysfs(pdev); 1456 1466 1457 - if (pdev->cfg_size < PCI_CFG_SPACE_EXP_SIZE) 1458 - sysfs_remove_bin_file(&pdev->dev.kobj, &pci_config_attr); 1459 - else 1467 + if (pdev->cfg_size > PCI_CFG_SPACE_SIZE) 1460 1468 sysfs_remove_bin_file(&pdev->dev.kobj, &pcie_config_attr); 1469 + else 1470 + sysfs_remove_bin_file(&pdev->dev.kobj, &pci_config_attr); 1461 1471 1462 1472 pci_remove_resource_files(pdev); 1463 1473 ··· 1501 1511 static umode_t pci_dev_attrs_are_visible(struct kobject *kobj, 1502 1512 struct attribute *a, int n) 1503 1513 { 1504 - struct device *dev = container_of(kobj, struct device, kobj); 1514 + struct device *dev = kobj_to_dev(kobj); 1505 1515 struct pci_dev *pdev = to_pci_dev(dev); 1506 1516 1507 1517 if (a == &vga_attr.attr) ··· 1520 1530 static umode_t pci_dev_hp_attrs_are_visible(struct kobject *kobj, 1521 1531 struct attribute *a, int n) 1522 1532 { 1523 - 
struct device *dev = container_of(kobj, struct device, kobj); 1533 + struct device *dev = kobj_to_dev(kobj); 1524 1534 struct pci_dev *pdev = to_pci_dev(dev); 1525 1535 1526 1536 if (pdev->is_virtfn) ··· 1544 1554 static umode_t sriov_attrs_are_visible(struct kobject *kobj, 1545 1555 struct attribute *a, int n) 1546 1556 { 1547 - struct device *dev = container_of(kobj, struct device, kobj); 1557 + struct device *dev = kobj_to_dev(kobj); 1548 1558 1549 1559 if (!dev_is_pf(dev)) 1550 1560 return 0;
+2 -2
drivers/pci/pci.c
··· 1417 1417 1418 1418 static void pcim_release(struct device *gendev, void *res) 1419 1419 { 1420 - struct pci_dev *dev = container_of(gendev, struct pci_dev, dev); 1420 + struct pci_dev *dev = to_pci_dev(gendev); 1421 1421 struct pci_devres *this = res; 1422 1422 int i; 1423 1423 ··· 1534 1534 * is the default implementation. Architecture implementations can 1535 1535 * override this. 1536 1536 */ 1537 - void __weak pcibios_disable_device (struct pci_dev *dev) {} 1537 + void __weak pcibios_disable_device(struct pci_dev *dev) {} 1538 1538 1539 1539 /** 1540 1540 * pcibios_penalize_isa_irq - penalize an ISA IRQ
-2
drivers/pci/pci.h
··· 144 144 145 145 #ifdef CONFIG_PCI_MSI 146 146 void pci_no_msi(void); 147 - void pci_msi_init_pci_dev(struct pci_dev *dev); 148 147 #else 149 148 static inline void pci_no_msi(void) { } 150 - static inline void pci_msi_init_pci_dev(struct pci_dev *dev) { } 151 149 #endif 152 150 153 151 static inline void pci_msi_set_enable(struct pci_dev *dev, int enable)
+8 -8
drivers/pci/pcie/aer/aer_inject.c
··· 41 41 u32 header_log1; 42 42 u32 header_log2; 43 43 u32 header_log3; 44 - u16 domain; 44 + u32 domain; 45 45 }; 46 46 47 47 struct aer_error { 48 48 struct list_head list; 49 - u16 domain; 49 + u32 domain; 50 50 unsigned int bus; 51 51 unsigned int devfn; 52 52 int pos_cap_err; ··· 74 74 /* Protect einjected and pci_bus_ops_list */ 75 75 static DEFINE_SPINLOCK(inject_lock); 76 76 77 - static void aer_error_init(struct aer_error *err, u16 domain, 77 + static void aer_error_init(struct aer_error *err, u32 domain, 78 78 unsigned int bus, unsigned int devfn, 79 79 int pos_cap_err) 80 80 { ··· 86 86 } 87 87 88 88 /* inject_lock must be held before calling */ 89 - static struct aer_error *__find_aer_error(u16 domain, unsigned int bus, 89 + static struct aer_error *__find_aer_error(u32 domain, unsigned int bus, 90 90 unsigned int devfn) 91 91 { 92 92 struct aer_error *err; ··· 106 106 int domain = pci_domain_nr(dev->bus); 107 107 if (domain < 0) 108 108 return NULL; 109 - return __find_aer_error((u16)domain, dev->bus->number, dev->devfn); 109 + return __find_aer_error(domain, dev->bus->number, dev->devfn); 110 110 } 111 111 112 112 /* inject_lock must be held before calling */ ··· 196 196 domain = pci_domain_nr(bus); 197 197 if (domain < 0) 198 198 goto out; 199 - err = __find_aer_error((u16)domain, bus->number, devfn); 199 + err = __find_aer_error(domain, bus->number, devfn); 200 200 if (!err) 201 201 goto out; 202 202 ··· 228 228 domain = pci_domain_nr(bus); 229 229 if (domain < 0) 230 230 goto out; 231 - err = __find_aer_error((u16)domain, bus->number, devfn); 231 + err = __find_aer_error(domain, bus->number, devfn); 232 232 if (!err) 233 233 goto out; 234 234 ··· 329 329 u32 sever, cor_mask, uncor_mask, cor_mask_orig = 0, uncor_mask_orig = 0; 330 330 int ret = 0; 331 331 332 - dev = pci_get_domain_bus_and_slot((int)einj->domain, einj->bus, devfn); 332 + dev = pci_get_domain_bus_and_slot(einj->domain, einj->bus, devfn); 333 333 if (!dev) 334 334 return -ENODEV; 335 
335 rpdev = pcie_find_root_port(dev);
+5 -5
drivers/pci/pcie/aer/aerdrv_core.c
··· 246 246 !dev->driver->err_handler || 247 247 !dev->driver->err_handler->error_detected) { 248 248 if (result_data->state == pci_channel_io_frozen && 249 - !(dev->hdr_type & PCI_HEADER_TYPE_BRIDGE)) { 249 + dev->hdr_type != PCI_HEADER_TYPE_BRIDGE) { 250 250 /* 251 251 * In case of fatal recovery, if one of down- 252 252 * stream device has no driver. We might be ··· 269 269 * without recovery. 270 270 */ 271 271 272 - if (!(dev->hdr_type & PCI_HEADER_TYPE_BRIDGE)) 272 + if (dev->hdr_type != PCI_HEADER_TYPE_BRIDGE) 273 273 vote = PCI_ERS_RESULT_NO_AER_DRIVER; 274 274 else 275 275 vote = PCI_ERS_RESULT_NONE; ··· 369 369 else 370 370 result_data.result = PCI_ERS_RESULT_RECOVERED; 371 371 372 - if (dev->hdr_type & PCI_HEADER_TYPE_BRIDGE) { 372 + if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) { 373 373 /* 374 374 * If the error is reported by a bridge, we think this error 375 375 * is related to the downstream link of the bridge, so we ··· 440 440 pci_ers_result_t status; 441 441 struct pcie_port_service_driver *driver; 442 442 443 - if (dev->hdr_type & PCI_HEADER_TYPE_BRIDGE) { 443 + if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) { 444 444 /* Reset this port for all subordinates */ 445 445 udev = dev; 446 446 } else { ··· 660 660 &info->mask); 661 661 if (!(info->status & ~info->mask)) 662 662 return 0; 663 - } else if (dev->hdr_type & PCI_HEADER_TYPE_BRIDGE || 663 + } else if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE || 664 664 info->severity == AER_NONFATAL) { 665 665 666 666 /* Link is still healthy for IO reads */
+5 -11
drivers/pci/pcie/aspm.c
··· 834 834 { 835 835 struct pci_dev *pdev = to_pci_dev(dev); 836 836 struct pcie_link_state *link, *root = pdev->link_state->root; 837 - u32 val, state = 0; 838 - 839 - if (kstrtouint(buf, 10, &val)) 840 - return -EINVAL; 837 + u32 state; 841 838 842 839 if (aspm_disabled) 843 840 return -EPERM; 844 - if (n < 1 || val > 3) 845 - return -EINVAL; 846 841 847 - /* Convert requested state to ASPM state */ 848 - if (val & PCIE_LINK_STATE_L0S) 849 - state |= ASPM_STATE_L0S; 850 - if (val & PCIE_LINK_STATE_L1) 851 - state |= ASPM_STATE_L1; 842 + if (kstrtouint(buf, 10, &state)) 843 + return -EINVAL; 844 + if ((state & ~ASPM_STATE_ALL) != 0) 845 + return -EINVAL; 852 846 853 847 down_read(&pci_bus_sem); 854 848 mutex_lock(&aspm_lock);
+13 -20
drivers/pci/probe.c
··· 1109 1109 int pos = PCI_CFG_SPACE_SIZE; 1110 1110 1111 1111 if (pci_read_config_dword(dev, pos, &status) != PCIBIOS_SUCCESSFUL) 1112 - goto fail; 1112 + return PCI_CFG_SPACE_SIZE; 1113 1113 if (status == 0xffffffff || pci_ext_cfg_is_aliased(dev)) 1114 - goto fail; 1114 + return PCI_CFG_SPACE_SIZE; 1115 1115 1116 1116 return PCI_CFG_SPACE_EXP_SIZE; 1117 - 1118 - fail: 1119 - return PCI_CFG_SPACE_SIZE; 1120 1117 } 1121 1118 1122 1119 int pci_cfg_space_size(struct pci_dev *dev) ··· 1126 1129 if (class == PCI_CLASS_BRIDGE_HOST) 1127 1130 return pci_cfg_space_size_ext(dev); 1128 1131 1129 - if (!pci_is_pcie(dev)) { 1130 - pos = pci_find_capability(dev, PCI_CAP_ID_PCIX); 1131 - if (!pos) 1132 - goto fail; 1132 + if (pci_is_pcie(dev)) 1133 + return pci_cfg_space_size_ext(dev); 1133 1134 1134 - pci_read_config_dword(dev, pos + PCI_X_STATUS, &status); 1135 - if (!(status & (PCI_X_STATUS_266MHZ | PCI_X_STATUS_533MHZ))) 1136 - goto fail; 1137 - } 1135 + pos = pci_find_capability(dev, PCI_CAP_ID_PCIX); 1136 + if (!pos) 1137 + return PCI_CFG_SPACE_SIZE; 1138 1138 1139 - return pci_cfg_space_size_ext(dev); 1139 + pci_read_config_dword(dev, pos + PCI_X_STATUS, &status); 1140 + if (status & (PCI_X_STATUS_266MHZ | PCI_X_STATUS_533MHZ)) 1141 + return pci_cfg_space_size_ext(dev); 1140 1142 1141 - fail: 1142 1143 return PCI_CFG_SPACE_SIZE; 1143 1144 } 1144 1145 1145 1146 #define LEGACY_IO_RESOURCE (IORESOURCE_IO | IORESOURCE_PCI_FIXED) 1146 1147 1147 - void pci_msi_setup_pci_dev(struct pci_dev *dev) 1148 + static void pci_msi_setup_pci_dev(struct pci_dev *dev) 1148 1149 { 1149 1150 /* 1150 1151 * Disable the MSI hardware to avoid screaming interrupts ··· 1208 1213 1209 1214 /* "Unknown power state" */ 1210 1215 dev->current_state = PCI_UNKNOWN; 1211 - 1212 - pci_msi_setup_pci_dev(dev); 1213 1216 1214 1217 /* Early fixups, before probing the BARs */ 1215 1218 pci_fixup_device(pci_fixup_early, dev); ··· 1598 1605 /* Enhanced Allocation */ 1599 1606 pci_ea_init(dev); 1600 1607 1601 - 
/* MSI/MSI-X list */ 1602 - pci_msi_init_pci_dev(dev); 1608 + /* Setup MSI caps & disable MSI/MSI-X interrupts */ 1609 + pci_msi_setup_pci_dev(dev); 1603 1610 1604 1611 /* Buffers for saving PCIe and PCI-X capabilities */ 1605 1612 pci_allocate_cap_save_buffers(dev);
+16
drivers/pci/quirks.c
··· 287 287 } 288 288 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CITRINE, quirk_citrine); 289 289 290 + /* 291 + * This chip can cause bus lockups if config addresses above 0x600 292 + * are read or written. 293 + */ 294 + static void quirk_nfp6000(struct pci_dev *dev) 295 + { 296 + dev->cfg_size = 0x600; 297 + } 298 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_NETRONOME, PCI_DEVICE_ID_NETRONOME_NFP4000, quirk_nfp6000); 299 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_NETRONOME, PCI_DEVICE_ID_NETRONOME_NFP6000, quirk_nfp6000); 300 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_NETRONOME, PCI_DEVICE_ID_NETRONOME_NFP6000_VF, quirk_nfp6000); 301 + 290 302 /* On IBM Crocodile ipr SAS adapters, expand BAR to system page size */ 291 303 static void quirk_extend_bar_to_page(struct pci_dev *dev) 292 304 { ··· 3633 3621 /* https://bugs.gentoo.org/show_bug.cgi?id=497630 */ 3634 3622 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_JMICRON, 3635 3623 PCI_DEVICE_ID_JMICRON_JMB388_ESD, 3624 + quirk_dma_func1_alias); 3625 + /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c117 */ 3626 + DECLARE_PCI_FIXUP_HEADER(0x1c28, /* Lite-On */ 3627 + 0x0122, /* Plextor M6E (Marvell 88SS9183)*/ 3636 3628 quirk_dma_func1_alias); 3637 3629 3638 3630 /*
drivers/pci/rom.c (+11 -12)
···
 	do {
 		void __iomem *pds;
 		/* Standard PCI ROMs start out with these bytes 55 AA */
-		if (readb(image) != 0x55) {
-			dev_err(&pdev->dev, "Invalid ROM contents\n");
+		if (readw(image) != 0xAA55) {
+			dev_err(&pdev->dev, "Invalid PCI ROM header signature: expecting 0xaa55, got %#06x\n",
+				readw(image));
 			break;
 		}
-		if (readb(image + 1) != 0xAA)
-			break;
-		/* get the PCI data structure and check its signature */
+		/* get the PCI data structure and check its "PCIR" signature */
 		pds = image + readw(image + 24);
-		if (readb(pds) != 'P')
+		if (readl(pds) != 0x52494350) {
+			dev_err(&pdev->dev, "Invalid PCI ROM data signature: expecting 0x52494350, got %#010x\n",
+				readl(pds));
 			break;
-		if (readb(pds + 1) != 'C')
-			break;
-		if (readb(pds + 2) != 'I')
-			break;
-		if (readb(pds + 3) != 'R')
-			break;
+		}
 		last_image = readb(pds + 21) & 0x80;
 		length = readw(pds + 16);
 		image += length * 512;
+		/* Avoid iterating through memory outside the resource window */
+		if (image > rom + size)
+			break;
 	} while (length && !last_image);

 	/* never return a size larger than the PCI resource window */
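The rewrite above collapses four byte compares into one 16-bit and one 32-bit compare: read little-endian, the header bytes 0x55 0xAA form 0xAA55 and the ASCII bytes 'P' 'C' 'I' 'R' form 0x52494350. A small sketch verifying those constants, where le_readw()/le_readl() are plain little-endian loads standing in for the MMIO readw()/readl() accessors:

```c
#include <stdint.h>

/* Little-endian 16-bit load, modelling readw() on the ROM mapping. */
static uint16_t le_readw(const uint8_t *p)
{
	return (uint16_t)(p[0] | (p[1] << 8));
}

/* Little-endian 32-bit load, modelling readl(). */
static uint32_t le_readl(const uint8_t *p)
{
	return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
	       ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

/* Expansion ROM header signature check: bytes 55 AA == word 0xAA55. */
static int rom_header_valid(const uint8_t *image)
{
	return le_readw(image) == 0xAA55;
}

/* PCI Data Structure check: bytes "PCIR" == dword 0x52494350. */
static int pds_signature_valid(const uint8_t *pds)
{
	return le_readl(pds) == 0x52494350;
}
```

Note the kernel's readw()/readl() on PCI ROM mappings are little-endian by definition, which is why the single-compare form is equivalent to the old byte-by-byte checks.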
drivers/pci/setup-bus.c (+1 -1, whitespace-only change)
···
 				break;
 			}
 		}
-	}
+	}

 }
include/linux/msi.h (+2 -2)
···
  * @msi_free:		Domain specific function to free a MSI interrupts
  * @msi_check:		Callback for verification of the domain/info/dev data
  * @msi_prepare:	Prepare the allocation of the interrupts in the domain
- * @msi_finish:		Optional callbacl to finalize the allocation
+ * @msi_finish:		Optional callback to finalize the allocation
  * @set_desc:		Set the msi descriptor for an interrupt
  * @handle_error:	Optional error handler if the allocation fails
  *
···
  * msi_create_irq_domain() and related interfaces
  *
  * @msi_check, @msi_prepare, @msi_finish, @set_desc and @handle_error
- * are callbacks used by msi_irq_domain_alloc_irqs() and related
+ * are callbacks used by msi_domain_alloc_irqs() and related
  * interfaces which are based on msi_desc.
  */
 struct msi_domain_ops {
include/linux/of_pci.h (+7)
···
 int of_pci_get_host_bridge_resources(struct device_node *dev,
 			unsigned char busno, unsigned char bus_max,
 			struct list_head *resources, resource_size_t *io_base);
+#else
+static inline int of_pci_get_host_bridge_resources(struct device_node *dev,
+			unsigned char busno, unsigned char bus_max,
+			struct list_head *resources, resource_size_t *io_base)
+{
+	return -EINVAL;
+}
 #endif

 #if defined(CONFIG_OF) && defined(CONFIG_PCI_MSI)
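The of_pci.h change applies a standard kernel header idiom: when the config option is off, supply a static inline stub returning an error so callers compile and fail gracefully with no #ifdef at the call site. A self-contained sketch of the pattern (CONFIG_TOY and toy_get_resources() are made-up names for illustration):

```c
#include <errno.h>

/* When the feature is compiled in, the real implementation is linked;
 * otherwise this static inline stub satisfies the caller and reports
 * -EINVAL, the same error the real function could return. */
#ifdef CONFIG_TOY
int toy_get_resources(int busno);
#else
static inline int toy_get_resources(int busno)
{
	(void)busno;		/* unused in the stub */
	return -EINVAL;
}
#endif
```

The benefit is that callers like host bridge drivers need only check the return value; they build identically whether or not the option is enabled.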
include/linux/pci.h (-2)
···
 	u16	entry;	/* driver uses to specify entry, OS writes */
 };

-void pci_msi_setup_pci_dev(struct pci_dev *dev);
-
 #ifdef CONFIG_PCI_MSI
 int pci_msi_vec_count(struct pci_dev *dev);
 void pci_msi_shutdown(struct pci_dev *dev);
include/linux/pci_ids.h (+5)
···
 #define PCI_DEVICE_ID_KORENIX_JETCARDF3	0x17ff

 #define PCI_VENDOR_ID_NETRONOME		0x19ee
+#define PCI_DEVICE_ID_NETRONOME_NFP3200	0x3200
+#define PCI_DEVICE_ID_NETRONOME_NFP3240	0x3240
+#define PCI_DEVICE_ID_NETRONOME_NFP4000	0x4000
+#define PCI_DEVICE_ID_NETRONOME_NFP6000	0x6000
+#define PCI_DEVICE_ID_NETRONOME_NFP6000_VF	0x6003

 #define PCI_VENDOR_ID_QMI		0x1a32
kernel/irq/irqdomain.c (+1)
···
 	__irq_set_handler(virq, handler, 0, handler_name);
 	irq_set_handler_data(virq, handler_data);
 }
+EXPORT_SYMBOL(irq_domain_set_info);

 /**
  * irq_domain_reset_irq_data - Clear hwirq, chip and chip_data in @irq_data
kernel/irq/msi.c (+5 -3)
···
 	if (irq_find_mapping(domain, hwirq) > 0)
 		return -EEXIST;

-	ret = irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, arg);
-	if (ret < 0)
-		return ret;
+	if (domain->parent) {
+		ret = irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, arg);
+		if (ret < 0)
+			return ret;
+	}

 	for (i = 0; i < nr_irqs; i++) {
 		ret = ops->msi_init(domain, info, virq + i, hwirq + i, arg);
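The msi.c change above makes the parent allocation step conditional, so a "parentless" MSI irqdomain no longer fails during allocation. A compact model of that guard (struct toy_domain, toy_alloc_parent() and toy_msi_alloc() are stand-ins, not kernel types):

```c
#include <stddef.h>

/* Minimal domain with an optional parent, mirroring hierarchical
 * irqdomains where MSI domains usually stack on a parent. */
struct toy_domain {
	struct toy_domain *parent;
};

/* Models irq_domain_alloc_irqs_parent(), which cannot succeed
 * without a parent domain to recurse into. */
static int toy_alloc_parent(struct toy_domain *domain)
{
	return domain->parent ? 0 : -1;
}

/* Models the relaxed msi_domain_alloc(): recurse into the parent
 * only when one exists, otherwise skip straight to per-IRQ init. */
static int toy_msi_alloc(struct toy_domain *domain)
{
	if (domain->parent) {
		int ret = toy_alloc_parent(domain);
		if (ret < 0)
			return ret;
	}
	return 0;	/* proceed with msi_init() for each IRQ */
}
```

Before the change, a NULL parent made the allocation fail unconditionally; with the guard, both stacked and standalone MSI domains allocate successfully.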