Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drivers-for-3.17' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC driver changes from Olof Johansson:
"A handful of driver-related changes. We've had a bunch of them going
in through other branches as well, so it's only a part of what we
really have this release.

Larger pieces are:

- Removal of a now unused PWM driver for atmel
[ This includes AVR32 changes that have been appropriately acked ]
- Performance counter support for the arm CCN interconnect
- OMAP mailbox driver cleanups and consolidation
- PCI and SATA PHY drivers for SPEAr 13xx platforms
- Redefinition (with backwards compatibility!) of PCI DT bindings for
Tegra to better model regulators/power"

Note: this merge also fixes up the semantic conflict with the new
calling convention for devm_phy_create(), see commit f0ed817638b5 ("phy:
core: Let node ptr of PHY point to PHY and not of PHY provider") that
came in through Greg's USB tree.

Semantic merge patch by Stephen Rothwell <sfr@canb.auug.org.au> via the
linux-next tree.

* tag 'drivers-for-3.17' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (38 commits)
bus: arm-ccn: Fix error handling at event allocation
mailbox/omap: add a parent structure for every IP instance
mailbox/omap: remove the private mailbox structure
mailbox/omap: consolidate OMAP mailbox driver
mailbox/omap: simplify the fifo assignment by using macros
mailbox/omap: remove omap_mbox_type_t from mailbox ops
mailbox/omap: remove OMAP1 mailbox driver
mailbox/omap: use devm_* interfaces
bus: ARM CCN: add PERF_EVENTS dependency
bus: ARM CCN PMU driver
PCI: spear: Remove spear13xx_pcie_remove()
PCI: spear: Fix Section mismatch compilation warning for probe()
ARM: tegra: Remove legacy PCIe power supply properties
PCI: tegra: Remove deprecated power supply properties
PCI: tegra: Implement accurate power supply scheme
ARM: SPEAr13xx: Update defconfigs
ARM: SPEAr13xx: Add pcie and miphy DT nodes
ARM: SPEAr13xx: Add bindings and dt node for misc block
ARM: SPEAr13xx: Fix static mapping table
phy: Add drivers for PCIe and SATA phy on SPEAr13xx
...

+3452 -2023
+52
Documentation/arm/CCN.txt
+ARM Cache Coherent Network
+==========================
+
+CCN-504 is a ring-bus interconnect consisting of 11 crosspoints
+(XPs), with each crosspoint supporting up to two device ports,
+so nodes (devices) 0 and 1 are connected to crosspoint 0,
+nodes 2 and 3 to crosspoint 1 etc.
+
+PMU (perf) driver
+-----------------
+
+The CCN driver registers a perf PMU driver, which provides
+description of available events and configuration options
+in sysfs, see /sys/bus/event_source/devices/ccn*.
+
+The "format" directory describes format of the config, config1
+and config2 fields of the perf_event_attr structure. The "events"
+directory provides configuration templates for all documented
+events, that can be used with perf tool. For example "xp_valid_flit"
+is an equivalent of "type=0x8,event=0x4". Other parameters must be
+explicitly specified. For events originating from device, "node"
+defines its index. All crosspoint events require "xp" (index),
+"port" (device port number) and "vc" (virtual channel ID) and
+"dir" (direction). Watchpoints (special "event" value 0xfe) also
+require comparator values ("cmp_l" and "cmp_h") and "mask", being
+index of the comparator mask.
+
+Masks are defined separately from the event description
+(due to limited number of the config values) in the "cmp_mask"
+directory, with first 8 configurable by user and additional
+4 hardcoded for the most frequent use cases.
+
+Cycle counter is described by a "type" value 0xff and does
+not require any other settings.
+
+Example of perf tool use:
+
+/ # perf list | grep ccn
+  ccn/cycles/                                        [Kernel PMU event]
+<...>
+  ccn/xp_valid_flit/                                 [Kernel PMU event]
+<...>
+
+/ # perf stat -C 0 -e ccn/cycles/,ccn/xp_valid_flit,xp=1,port=0,vc=1,dir=1/ \
+                                                                        sleep 1
+
+The driver does not support sampling, therefore "perf record" will
+not work. Also notice that only single cpu is being selected
+("-C 0") - this is because perf framework does not support
+"non-CPU related" counters (yet?) so system-wide session ("-a")
+would try (and in most cases fail) to set up the same event
+per each CPU.
+21
Documentation/devicetree/bindings/arm/ccn.txt
+* ARM CCN (Cache Coherent Network)
+
+Required properties:
+
+- compatible: (standard compatible string) should be one of:
+	"arm,ccn-504"
+	"arm,ccn-508"
+
+- reg: (standard registers property) physical address and size
+	(16MB) of the configuration registers block
+
+- interrupts: (standard interrupt property) single interrupt
+	generated by the control block
+
+Example:
+
+	ccn@0x2000000000 {
+		compatible = "arm,ccn-504";
+		reg = <0x20 0x00000000 0 0x1000000>;
+		interrupts = <0 181 4>;
+	};
+9
Documentation/devicetree/bindings/arm/spear-misc.txt
+SPEAr Misc configuration
+========================
+SPEAr SoCs have some miscellaneous registers which are used to configure
+a few properties of different peripheral controllers.
+
+misc node required properties:
+
+- compatible: Should be "st,spear1340-misc", "syscon".
+- reg: Address range of misc space, up to 8K
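For illustration, a node matching these properties could look like the one
instantiated by the SPEAr13xx dtsi elsewhere in this series (the address is
that board family's; adjust for your SoC):

```dts
misc: syscon@e0700000 {
	compatible = "st,spear1340-misc", "syscon";
	reg = <0xe0700000 0x1000>;
};
```

Other nodes (such as the miphy PHYs) can then reference it by phandle, e.g.
`misc = <&misc>;`.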
+27 -3
Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt
···
 - interrupt-names: Must include the following entries:
   "intr": The Tegra interrupt that is asserted for controller interrupts
   "msi": The Tegra interrupt that is asserted when an MSI is received
-- pex-clk-supply: Supply voltage for internal reference clock
-- vdd-supply: Power supply for controller (1.05V)
-- avdd-supply: Power supply for controller (1.05V) (not required for Tegra20)
 - bus-range: Range of bus numbers associated with this controller
 - #address-cells: Address representation for root ports (must be 3)
   - cell 0 specifies the bus and device numbers of the root port:
···
   - pex
   - afi
   - pcie_x
+
+Power supplies for Tegra20:
+- avdd-pex-supply: Power supply for analog PCIe logic. Must supply 1.05 V.
+- vdd-pex-supply: Power supply for digital PCIe I/O. Must supply 1.05 V.
+- avdd-pex-pll-supply: Power supply for dedicated (internal) PCIe PLL. Must
+  supply 1.05 V.
+- avdd-plle-supply: Power supply for PLLE, which is shared with SATA. Must
+  supply 1.05 V.
+- vddio-pex-clk-supply: Power supply for PCIe clock. Must supply 3.3 V.
+
+Power supplies for Tegra30:
+- Required:
+  - avdd-pex-pll-supply: Power supply for dedicated (internal) PCIe PLL. Must
+    supply 1.05 V.
+  - avdd-plle-supply: Power supply for PLLE, which is shared with SATA. Must
+    supply 1.05 V.
+  - vddio-pex-ctl-supply: Power supply for PCIe control I/O partition. Must
+    supply 1.8 V.
+  - hvdd-pex-supply: High-voltage supply for PCIe I/O and PCIe output clocks.
+    Must supply 3.3 V.
+- Optional:
+  - If lanes 0 to 3 are used:
+    - avdd-pexa-supply: Power supply for analog PCIe logic. Must supply 1.05 V.
+    - vdd-pexa-supply: Power supply for digital PCIe I/O. Must supply 1.05 V.
+  - If lanes 4 or 5 are used:
+    - avdd-pexb-supply: Power supply for analog PCIe logic. Must supply 1.05 V.
+    - vdd-pexb-supply: Power supply for digital PCIe I/O. Must supply 1.05 V.
 
 Root ports are defined as subnodes of the PCIe controller node.
 
+14
Documentation/devicetree/bindings/pci/spear13xx-pcie.txt
+SPEAr13XX PCIe DT details
+=========================
+
+SPEAr13XX uses the Synopsys DesignWare PCIe controller and ST MiPHY as the
+PHY controller.
+
+Required properties:
+- compatible : should be "st,spear1340-pcie", "snps,dw-pcie".
+- phys : phandle to phy node associated with pcie controller
+- phy-names : must be "pcie-phy"
+- All other definitions as per generic PCI bindings
+
+Optional properties:
+- st,pcie-is-gen1: indicates that forced gen1 initialization is needed.
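An abridged sketch of a matching node, adapted from the spear1340.dtsi node
added later in this series (the `ranges` property and interrupt map are
omitted here for brevity; see the dtsi diff for the full node):

```dts
pcie0: pcie@b1000000 {
	compatible = "st,spear1340-pcie", "snps,dw-pcie";
	reg = <0xb1000000 0x4000>;
	interrupts = <0 68 0x4>;
	num-lanes = <1>;
	phys = <&miphy0 1>;		/* PHY cell 1 selects PCIe mode */
	phy-names = "pcie-phy";
	#address-cells = <3>;
	#size-cells = <2>;
	device_type = "pci";
	status = "disabled";
};
```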
+15
Documentation/devicetree/bindings/phy/st-spear-miphy.txt
+ST SPEAr miphy DT details
+=========================
+
+ST Microelectronics SPEAr miphy is a phy controller supporting PCIe and SATA.
+
+Required properties:
+- compatible : should be "st,spear1310-miphy" or "st,spear1340-miphy"
+- reg : offset and length of the PHY register set.
+- misc : phandle for the syscon node to access misc registers
+- #phy-cells : from the generic PHY bindings, must be 1.
+	- cell[1]: 0 if phy used for SATA, 1 for PCIe.
+
+Optional properties:
+- phy-id: Instance id of the phy. Only required when there are multiple phys
+  present on an implementation.
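As an example, the spear1340.dtsi diff in this series instantiates the PHY
and wires it to the AHCI controller like this (the consumer's PHY cell
selects the mode, 0 for SATA):

```dts
miphy0: miphy@eb800000 {
	compatible = "st,spear1340-miphy";
	reg = <0xeb800000 0x4000>;
	misc = <&misc>;
	#phy-cells = <1>;
};

ahci0: ahci@b1000000 {
	compatible = "snps,spear-ahci";
	reg = <0xb1000000 0x10000>;
	phys = <&miphy0 0>;	/* cell 0 = SATA mode */
	phy-names = "sata-phy";
};
```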
+6
MAINTAINERS
···
 F:	Documentation/devicetree/bindings/pci/host-generic-pci.txt
 F:	drivers/pci/host/pci-host-generic.c
 
+PCIE DRIVER FOR ST SPEAR13XX
+M:	Mohit Kumar <mohit.kumar@st.com>
+L:	linux-pci@vger.kernel.org
+S:	Maintained
+F:	drivers/pci/host/*spear*
+
 PCMCIA SUBSYSTEM
 P:	Linux PCMCIA Team
 L:	linux-pcmcia@lists.infradead.org
+4
arch/arm/boot/dts/spear1310-evb.dts
···
 		status = "okay";
 	};
 
+	miphy@eb800000 {
+		status = "okay";
+	};
+
 	cf@b2800000 {
 		status = "okay";
 	};
+90 -3
arch/arm/boot/dts/spear1310.dtsi
···
 			#gpio-cells = <2>;
 		};
 
-		ahci@b1000000 {
+		miphy0: miphy@eb800000 {
+			compatible = "st,spear1310-miphy";
+			reg = <0xeb800000 0x4000>;
+			misc = <&misc>;
+			phy-id = <0>;
+			#phy-cells = <1>;
+			status = "disabled";
+		};
+
+		miphy1: miphy@eb804000 {
+			compatible = "st,spear1310-miphy";
+			reg = <0xeb804000 0x4000>;
+			misc = <&misc>;
+			phy-id = <1>;
+			#phy-cells = <1>;
+			status = "disabled";
+		};
+
+		miphy2: miphy@eb808000 {
+			compatible = "st,spear1310-miphy";
+			reg = <0xeb808000 0x4000>;
+			misc = <&misc>;
+			phy-id = <2>;
+			#phy-cells = <1>;
+			status = "disabled";
+		};
+
+		ahci0: ahci@b1000000 {
 			compatible = "snps,spear-ahci";
 			reg = <0xb1000000 0x10000>;
 			interrupts = <0 68 0x4>;
+			phys = <&miphy0 0>;
+			phy-names = "sata-phy";
 			status = "disabled";
 		};
 
-		ahci@b1800000 {
+		ahci1: ahci@b1800000 {
 			compatible = "snps,spear-ahci";
 			reg = <0xb1800000 0x10000>;
 			interrupts = <0 69 0x4>;
+			phys = <&miphy1 0>;
+			phy-names = "sata-phy";
 			status = "disabled";
 		};
 
-		ahci@b4000000 {
+		ahci2: ahci@b4000000 {
 			compatible = "snps,spear-ahci";
 			reg = <0xb4000000 0x10000>;
 			interrupts = <0 70 0x4>;
+			phys = <&miphy2 0>;
+			phy-names = "sata-phy";
+			status = "disabled";
+		};
+
+		pcie0: pcie@b1000000 {
+			compatible = "st,spear1340-pcie", "snps,dw-pcie";
+			reg = <0xb1000000 0x4000>;
+			interrupts = <0 68 0x4>;
+			interrupt-map-mask = <0 0 0 0>;
+			interrupt-map = <0x0 0 &gic 0 68 0x4>;
+			num-lanes = <1>;
+			phys = <&miphy0 1>;
+			phy-names = "pcie-phy";
+			#address-cells = <3>;
+			#size-cells = <2>;
+			device_type = "pci";
+			ranges = <0x00000800 0 0x80000000 0x80000000 0 0x00020000   /* configuration space */
+				0x81000000 0 0	0x80020000 0 0x00010000   /* downstream I/O */
+				0x82000000 0 0x80030000 0xc0030000 0 0x0ffd0000>; /* non-prefetchable memory */
+			status = "disabled";
+		};
+
+		pcie1: pcie@b1800000 {
+			compatible = "st,spear1340-pcie", "snps,dw-pcie";
+			reg = <0xb1800000 0x4000>;
+			interrupts = <0 69 0x4>;
+			interrupt-map-mask = <0 0 0 0>;
+			interrupt-map = <0x0 0 &gic 0 69 0x4>;
+			num-lanes = <1>;
+			phys = <&miphy1 1>;
+			phy-names = "pcie-phy";
+			#address-cells = <3>;
+			#size-cells = <2>;
+			device_type = "pci";
+			ranges = <0x00000800 0 0x90000000 0x90000000 0 0x00020000   /* configuration space */
+				0x81000000 0 0	0x90020000 0 0x00010000   /* downstream I/O */
+				0x82000000 0 0x90030000 0x90030000 0 0x0ffd0000>; /* non-prefetchable memory */
+			status = "disabled";
+		};
+
+		pcie2: pcie@b4000000 {
+			compatible = "st,spear1340-pcie", "snps,dw-pcie";
+			reg = <0xb4000000 0x4000>;
+			interrupts = <0 70 0x4>;
+			interrupt-map-mask = <0 0 0 0>;
+			interrupt-map = <0x0 0 &gic 0 70 0x4>;
+			num-lanes = <1>;
+			phys = <&miphy2 1>;
+			phy-names = "pcie-phy";
+			#address-cells = <3>;
+			#size-cells = <2>;
+			device_type = "pci";
+			ranges = <0x00000800 0 0xc0000000 0xc0000000 0 0x00020000   /* configuration space */
+				0x81000000 0 0	0xc0020000 0 0x00010000   /* downstream I/O */
+				0x82000000 0 0xc0030000 0xc0030000 0 0x0ffd0000>; /* non-prefetchable memory */
 			status = "disabled";
 		};
 
+4
arch/arm/boot/dts/spear1340-evb.dts
···
 		status = "okay";
 	};
 
+	miphy@eb800000 {
+		status = "okay";
+	};
+
 	dma@ea800000 {
 		status = "okay";
 	};
+29 -1
arch/arm/boot/dts/spear1340.dtsi
···
 			status = "disabled";
 		};
 
-		ahci@b1000000 {
+		miphy0: miphy@eb800000 {
+			compatible = "st,spear1340-miphy";
+			reg = <0xeb800000 0x4000>;
+			misc = <&misc>;
+			#phy-cells = <1>;
+			status = "disabled";
+		};
+
+		ahci0: ahci@b1000000 {
 			compatible = "snps,spear-ahci";
 			reg = <0xb1000000 0x10000>;
 			interrupts = <0 72 0x4>;
+			phys = <&miphy0 0>;
+			phy-names = "sata-phy";
+			status = "disabled";
+		};
+
+		pcie0: pcie@b1000000 {
+			compatible = "st,spear1340-pcie", "snps,dw-pcie";
+			reg = <0xb1000000 0x4000>;
+			interrupts = <0 68 0x4>;
+			interrupt-map-mask = <0 0 0 0>;
+			interrupt-map = <0x0 0 &gic 0 68 0x4>;
+			num-lanes = <1>;
+			phys = <&miphy0 1>;
+			phy-names = "pcie-phy";
+			#address-cells = <3>;
+			#size-cells = <2>;
+			device_type = "pci";
+			ranges = <0x00000800 0 0x80000000 0x80000000 0 0x00020000   /* configuration space */
+				0x81000000 0 0	0x80020000 0 0x00010000   /* downstream I/O */
+				0x82000000 0 0x80030000 0xc0030000 0 0x0ffd0000>; /* non-prefetchable memory */
 			status = "disabled";
 		};
 
+7 -2
arch/arm/boot/dts/spear13xx.dtsi
···
 		#size-cells = <1>;
 		compatible = "simple-bus";
 		ranges = <0x50000000 0x50000000 0x10000000
-			  0xb0000000 0xb0000000 0x10000000
-			  0xd0000000 0xd0000000 0x02000000
+			  0x80000000 0x80000000 0x20000000
+			  0xb0000000 0xb0000000 0x22000000
 			  0xd8000000 0xd8000000 0x01000000
 			  0xe0000000 0xe0000000 0x10000000>;
···
 				  0xd0000000 0xd0000000 0x02000000
 				  0xd8000000 0xd8000000 0x01000000
 				  0xe0000000 0xe0000000 0x10000000>;
+
+			misc: syscon@e0700000 {
+				compatible = "st,spear1340-misc", "syscon";
+				reg = <0xe0700000 0x1000>;
+			};
 
 			gpio0: gpio@e0600000 {
 				compatible = "arm,pl061", "arm,primecell";
+6 -2
arch/arm/boot/dts/tegra20-harmony.dts
···
 	};
 
 	pcie-controller@80003000 {
-		pex-clk-supply = <&pci_clk_reg>;
-		vdd-supply = <&pci_vdd_reg>;
 		status = "okay";
+
+		avdd-pex-supply = <&pci_vdd_reg>;
+		vdd-pex-supply = <&pci_vdd_reg>;
+		avdd-pex-pll-supply = <&pci_vdd_reg>;
+		avdd-plle-supply = <&pci_vdd_reg>;
+		vddio-pex-clk-supply = <&pci_clk_reg>;
 
 		pci@1,0 {
 			status = "okay";
+5 -2
arch/arm/boot/dts/tegra20-tamonten.dtsi
···
 	};
 
 	pcie-controller@80003000 {
-		pex-clk-supply = <&pci_clk_reg>;
-		vdd-supply = <&pci_vdd_reg>;
+		avdd-pex-supply = <&pci_vdd_reg>;
+		vdd-pex-supply = <&pci_vdd_reg>;
+		avdd-pex-pll-supply = <&pci_vdd_reg>;
+		avdd-plle-supply = <&pci_vdd_reg>;
+		vddio-pex-clk-supply = <&pci_clk_reg>;
 	};
 
 	usb@c5008000 {
+6 -2
arch/arm/boot/dts/tegra20-trimslice.dts
···
 
 	pcie-controller@80003000 {
 		status = "okay";
-		pex-clk-supply = <&pci_clk_reg>;
-		vdd-supply = <&pci_vdd_reg>;
+
+		avdd-pex-supply = <&pci_vdd_reg>;
+		vdd-pex-supply = <&pci_vdd_reg>;
+		avdd-pex-pll-supply = <&pci_vdd_reg>;
+		avdd-plle-supply = <&pci_vdd_reg>;
+		vddio-pex-clk-supply = <&pci_clk_reg>;
 
 		pci@1,0 {
 			status = "okay";
+9 -3
arch/arm/boot/dts/tegra30-beaver.dts
···
 
 	pcie-controller@00003000 {
 		status = "okay";
-		pex-clk-supply = <&sys_3v3_pexs_reg>;
-		vdd-supply = <&ldo1_reg>;
-		avdd-supply = <&ldo2_reg>;
+
+		avdd-pexa-supply = <&ldo1_reg>;
+		vdd-pexa-supply = <&ldo1_reg>;
+		avdd-pexb-supply = <&ldo1_reg>;
+		vdd-pexb-supply = <&ldo1_reg>;
+		avdd-pex-pll-supply = <&ldo1_reg>;
+		avdd-plle-supply = <&ldo1_reg>;
+		vddio-pex-ctl-supply = <&sys_3v3_reg>;
+		hvdd-pex-supply = <&sys_3v3_pexs_reg>;
 
 		pci@1,0 {
 			status = "okay";
+8 -3
arch/arm/boot/dts/tegra30-cardhu.dtsi
···
 
 	pcie-controller@00003000 {
 		status = "okay";
-		pex-clk-supply = <&pex_hvdd_3v3_reg>;
-		vdd-supply = <&ldo1_reg>;
-		avdd-supply = <&ldo2_reg>;
+
+		/* AVDD_PEXA and VDD_PEXA inputs are grounded on Cardhu. */
+		avdd-pexb-supply = <&ldo1_reg>;
+		vdd-pexb-supply = <&ldo1_reg>;
+		avdd-pex-pll-supply = <&ldo1_reg>;
+		hvdd-pex-supply = <&pex_hvdd_3v3_reg>;
+		vddio-pex-ctl-supply = <&sys_3v3_reg>;
+		avdd-plle-supply = <&ldo2_reg>;
 
 		pci@1,0 {
 			nvidia,num-lanes = <4>;
-2
arch/arm/configs/omap1_defconfig
···
 CONFIG_ARCH_OMAP1=y
 CONFIG_OMAP_RESET_CLOCKS=y
 # CONFIG_OMAP_MUX is not set
-CONFIG_MAILBOX=y
-CONFIG_OMAP1_MBOX=y
 CONFIG_OMAP_32K_TIMER=y
 CONFIG_OMAP_DM_TIMER=y
 CONFIG_ARCH_OMAP730=y
+16
arch/arm/configs/spear13xx_defconfig
···
 CONFIG_MACH_SPEAR1310=y
 CONFIG_MACH_SPEAR1340=y
 # CONFIG_SWP_EMULATE is not set
+CONFIG_PCI=y
+CONFIG_PCI_MSI=y
+CONFIG_PCIE_SPEAR13XX=y
 CONFIG_SMP=y
 # CONFIG_SMP_ON_UP is not set
 # CONFIG_ARM_CPU_TOPOLOGY is not set
+CONFIG_AEABI=y
 CONFIG_ARM_APPENDED_DTB=y
 CONFIG_ARM_ATAG_DTB_COMPAT=y
+CONFIG_VFP=y
 CONFIG_BINFMT_MISC=y
 CONFIG_NET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+CONFIG_IP_PNP_BOOTP=y
+CONFIG_NET_IPIP=y
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_MTD=y
 CONFIG_MTD_OF_PARTS=y
···
 CONFIG_MTD_NAND_FSMC=y
 CONFIG_BLK_DEV_RAM=y
 CONFIG_BLK_DEV_RAM_SIZE=16384
+CONFIG_BLK_DEV_SD=y
 CONFIG_ATA=y
 # CONFIG_SATA_PMP is not set
 CONFIG_SATA_AHCI_PLATFORM=y
···
 # CONFIG_USB_DEVICE_CLASS is not set
 CONFIG_USB_EHCI_HCD=y
 CONFIG_USB_OHCI_HCD=y
+CONFIG_USB_STORAGE=y
 CONFIG_MMC=y
 CONFIG_MMC_SDHCI=y
 CONFIG_MMC_SDHCI_SPEAR=y
···
 CONFIG_EXT3_FS=y
 CONFIG_EXT3_FS_SECURITY=y
 CONFIG_AUTOFS4_FS=m
+CONFIG_FUSE_FS=y
 CONFIG_MSDOS_FS=m
 CONFIG_VFAT_FS=m
 CONFIG_FAT_DEFAULT_IOCHARSET="ascii"
 CONFIG_TMPFS=y
 CONFIG_JFFS2_FS=y
+CONFIG_NFS_FS=y
+CONFIG_ROOT_NFS=y
 CONFIG_NLS_DEFAULT="utf8"
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_ASCII=m
+1
arch/arm/mach-at91/at91sam9263.c
···
 	CLKDEV_CON_DEV_ID("spi_clk", "atmel_spi.1", &spi1_clk),
 	CLKDEV_CON_DEV_ID("t0_clk", "atmel_tcb.0", &tcb_clk),
 	CLKDEV_CON_DEV_ID(NULL, "i2c-at91sam9260.0", &twi_clk),
+	CLKDEV_CON_DEV_ID(NULL, "at91sam9rl-pwm", &pwm_clk),
 	/* fake hclk clock */
 	CLKDEV_CON_DEV_ID("hclk", "at91_ohci", &ohci_clk),
 	CLKDEV_CON_ID("pioA", &pioA_clk),
+2 -9
arch/arm/mach-at91/at91sam9263_devices.c
···
  *  PWM
  * --------------------------------------------------------------------*/
 
-#if defined(CONFIG_ATMEL_PWM)
-static u32 pwm_mask;
-
+#if IS_ENABLED(CONFIG_PWM_ATMEL)
 static struct resource pwm_resources[] = {
 	[0] = {
 		.start	= AT91SAM9263_BASE_PWMC,
···
 };
 
 static struct platform_device at91sam9263_pwm0_device = {
-	.name	= "atmel_pwm",
+	.name	= "at91sam9rl-pwm",
 	.id	= -1,
-	.dev	= {
-		.platform_data	= &pwm_mask,
-	},
 	.resource	= pwm_resources,
 	.num_resources	= ARRAY_SIZE(pwm_resources),
 };
···
 	if (mask & (1 << AT91_PWM3))
 		at91_set_B_periph(AT91_PIN_PB29, 1);	/* enable PWM3 */
-
-	pwm_mask = mask;
 
 	platform_device_register(&at91sam9263_pwm0_device);
 }
+1
arch/arm/mach-at91/at91sam9g45.c
···
 	CLKDEV_CON_DEV_ID(NULL, "atmel_sha", &aestdessha_clk),
 	CLKDEV_CON_DEV_ID(NULL, "atmel_tdes", &aestdessha_clk),
 	CLKDEV_CON_DEV_ID(NULL, "atmel_aes", &aestdessha_clk),
+	CLKDEV_CON_DEV_ID(NULL, "at91sam9rl-pwm", &pwm_clk),
 	/* more usart lookup table for DT entries */
 	CLKDEV_CON_DEV_ID("usart", "ffffee00.serial", &mck),
 	CLKDEV_CON_DEV_ID("usart", "fff8c000.serial", &usart0_clk),
+2 -9
arch/arm/mach-at91/at91sam9g45_devices.c
···
  *  PWM
  * --------------------------------------------------------------------*/
 
-#if defined(CONFIG_ATMEL_PWM) || defined(CONFIG_ATMEL_PWM_MODULE)
-static u32 pwm_mask;
-
+#if IS_ENABLED(CONFIG_PWM_ATMEL)
 static struct resource pwm_resources[] = {
 	[0] = {
 		.start	= AT91SAM9G45_BASE_PWMC,
···
 };
 
 static struct platform_device at91sam9g45_pwm0_device = {
-	.name	= "atmel_pwm",
+	.name	= "at91sam9rl-pwm",
 	.id	= -1,
-	.dev	= {
-		.platform_data	= &pwm_mask,
-	},
 	.resource	= pwm_resources,
 	.num_resources	= ARRAY_SIZE(pwm_resources),
 };
···
 	if (mask & (1 << AT91_PWM3))
 		at91_set_B_periph(AT91_PIN_PD0, 1);	/* enable PWM3 */
-
-	pwm_mask = mask;
 
 	platform_device_register(&at91sam9g45_pwm0_device);
 }
+1
arch/arm/mach-at91/at91sam9rl.c
···
 	CLKDEV_CON_DEV_ID("pclk", "fffc4000.ssc", &ssc1_clk),
 	CLKDEV_CON_DEV_ID(NULL, "i2c-at91sam9g20.0", &twi0_clk),
 	CLKDEV_CON_DEV_ID(NULL, "i2c-at91sam9g20.1", &twi1_clk),
+	CLKDEV_CON_DEV_ID(NULL, "at91sam9rl-pwm", &pwm_clk),
 	CLKDEV_CON_ID("pioA", &pioA_clk),
 	CLKDEV_CON_ID("pioB", &pioB_clk),
 	CLKDEV_CON_ID("pioC", &pioC_clk),
+2 -9
arch/arm/mach-at91/at91sam9rl_devices.c
···
  *  PWM
  * --------------------------------------------------------------------*/
 
-#if defined(CONFIG_ATMEL_PWM)
-static u32 pwm_mask;
-
+#if IS_ENABLED(CONFIG_PWM_ATMEL)
 static struct resource pwm_resources[] = {
 	[0] = {
 		.start	= AT91SAM9RL_BASE_PWMC,
···
 };
 
 static struct platform_device at91sam9rl_pwm0_device = {
-	.name	= "atmel_pwm",
+	.name	= "at91sam9rl-pwm",
 	.id	= -1,
-	.dev	= {
-		.platform_data	= &pwm_mask,
-	},
 	.resource	= pwm_resources,
 	.num_resources	= ARRAY_SIZE(pwm_resources),
 };
···
 	if (mask & (1 << AT91_PWM3))
 		at91_set_B_periph(AT91_PIN_PD8, 1);	/* enable PWM3 */
-
-	pwm_mask = mask;
 
 	platform_device_register(&at91sam9rl_pwm0_device);
 }
+48 -9
arch/arm/mach-at91/board-sam9263ek.c
···
 #include <linux/gpio_keys.h>
 #include <linux/input.h>
 #include <linux/leds.h>
+#include <linux/pwm.h>
+#include <linux/leds_pwm.h>
 
 #include <video/atmel_lcdc.h>
 
···
 		.name			= "ds3",
 		.gpio			= AT91_PIN_PB7,
 		.default_trigger	= "heartbeat",
+	},
+#if !IS_ENABLED(CONFIG_LEDS_PWM)
+	{
+		.name			= "ds1",
+		.gpio			= AT91_PIN_PB8,
+		.active_low		= 1,
+		.default_trigger	= "none",
 	}
+#endif
 };
 
 /*
  * PWM Leds
  */
-static struct gpio_led ek_pwm_led[] = {
-	/* For now only DS1 is PWM-driven (by pwm1) */
-	{
-		.name			= "ds1",
-		.gpio			= 1,	/* is PWM channel number */
-		.active_low		= 1,
-		.default_trigger	= "none",
-	}
+static struct pwm_lookup pwm_lookup[] = {
+	PWM_LOOKUP("at91sam9rl-pwm", 1, "leds_pwm", "ds1",
+		   5000, PWM_POLARITY_INVERSED),
 };
+
+#if IS_ENABLED(CONFIG_LEDS_PWM)
+static struct led_pwm pwm_leds[] = {
+	{
+		.name = "ds1",
+		.max_brightness = 255,
+	},
+};
+
+static struct led_pwm_platform_data pwm_data = {
+	.num_leds	= ARRAY_SIZE(pwm_leds),
+	.leds		= pwm_leds,
+};
+
+static struct platform_device leds_pwm = {
+	.name	= "leds_pwm",
+	.id	= -1,
+	.dev	= {
+		.platform_data = &pwm_data,
+	},
+};
+#endif
 
 /*
  * CAN
···
 
 static struct at91_can_data ek_can_data = {
 	.transceiver_switch = sam9263ek_transceiver_switch,
 };
+
+static struct platform_device *devices[] __initdata = {
+#if IS_ENABLED(CONFIG_LEDS_PWM)
+	&leds_pwm,
+#endif
+};
 
 static void __init ek_board_init(void)
···
 	at91_add_device_ac97(&ek_ac97_data);
 	/* LEDs */
 	at91_gpio_leds(ek_leds, ARRAY_SIZE(ek_leds));
-	at91_pwm_leds(ek_pwm_led, ARRAY_SIZE(ek_pwm_led));
+	pwm_add_table(pwm_lookup, ARRAY_SIZE(pwm_lookup));
+#if IS_ENABLED(CONFIG_LEDS_PWM)
+	at91_add_device_pwm(1 << AT91_PWM1);
+#endif
 	/* CAN */
 	at91_add_device_can(&ek_can_data);
+	/* Other platform devices */
+	platform_add_devices(devices, ARRAY_SIZE(devices));
 }
 
 MACHINE_START(AT91SAM9263EK, "Atmel AT91SAM9263-EK")
+35 -11
arch/arm/mach-at91/board-sam9m10g45ek.c
···
 #include <linux/leds.h>
 #include <linux/atmel-mci.h>
 #include <linux/delay.h>
+#include <linux/pwm.h>
+#include <linux/leds_pwm.h>
 
 #include <linux/platform_data/at91_adc.h>
 
···
 		.active_low		= 1,
 		.default_trigger	= "nand-disk",
 	},
-#if !(defined(CONFIG_LEDS_ATMEL_PWM) || defined(CONFIG_LEDS_ATMEL_PWM_MODULE))
+#if !IS_ENABLED(CONFIG_LEDS_PWM)
 	{	/* "right" led, green, userled1, pwm1 */
 		.name			= "d7",
 		.gpio			= AT91_PIN_PD31,
···
 /*
  * PWM Leds
  */
-static struct gpio_led ek_pwm_led[] = {
-#if defined(CONFIG_LEDS_ATMEL_PWM) || defined(CONFIG_LEDS_ATMEL_PWM_MODULE)
-	{	/* "right" led, green, userled1, pwm1 */
-		.name			= "d7",
-		.gpio			= 1,	/* is PWM channel number */
-		.active_low		= 1,
-		.default_trigger	= "none",
-	},
-#endif
+static struct pwm_lookup pwm_lookup[] = {
+	PWM_LOOKUP("at91sam9rl-pwm", 1, "leds_pwm", "d7",
+		   5000, PWM_POLARITY_INVERSED),
 };
+
+#if IS_ENABLED(CONFIG_LEDS_PWM)
+static struct led_pwm pwm_leds[] = {
+	{	/* "right" led, green, userled1, pwm1 */
+		.name = "d7",
+		.max_brightness = 255,
+	},
+};
+
+static struct led_pwm_platform_data pwm_data = {
+	.num_leds	= ARRAY_SIZE(pwm_leds),
+	.leds		= pwm_leds,
+};
+
+static struct platform_device leds_pwm = {
+	.name	= "leds_pwm",
+	.id	= -1,
+	.dev	= {
+		.platform_data = &pwm_data,
+	},
+};
+#endif
 
 static struct platform_device *devices[] __initdata = {
 #if defined(CONFIG_SOC_CAMERA_OV2640) || \
 	defined(CONFIG_SOC_CAMERA_OV2640_MODULE)
 	&isi_ov2640,
+#endif
+#if IS_ENABLED(CONFIG_LEDS_PWM)
+	&leds_pwm,
 #endif
 };
···
 	at91_add_device_ac97(&ek_ac97_data);
 	/* LEDs */
 	at91_gpio_leds(ek_leds, ARRAY_SIZE(ek_leds));
-	at91_pwm_leds(ek_pwm_led, ARRAY_SIZE(ek_pwm_led));
+	pwm_add_table(pwm_lookup, ARRAY_SIZE(pwm_lookup));
+#if IS_ENABLED(CONFIG_LEDS_PWM)
+	at91_add_device_pwm(1 << AT91_PWM1);
+#endif
 	/* Other platform devices */
 	platform_add_devices(devices, ARRAY_SIZE(devices));
 }
-1
arch/arm/mach-at91/board.h
···
 
 /* LEDs */
 extern void __init at91_gpio_leds(struct gpio_led *leds, int nr);
-extern void __init at91_pwm_leds(struct gpio_led *leds, int nr);
 
 #endif
-37
arch/arm/mach-at91/leds.c
···
 void __init at91_gpio_leds(struct gpio_led *leds, int nr) {}
 #endif
 
-
-/* ------------------------------------------------------------------------- */
-
-#if defined (CONFIG_LEDS_ATMEL_PWM)
-
-/*
- * PWM Leds
- */
-
-static struct gpio_led_platform_data pwm_led_data;
-
-static struct platform_device at91_pwm_leds_device = {
-	.name		= "leds-atmel-pwm",
-	.id		= -1,
-	.dev.platform_data	= &pwm_led_data,
-};
-
-void __init at91_pwm_leds(struct gpio_led *leds, int nr)
-{
-	int i;
-	u32 pwm_mask = 0;
-
-	if (!nr)
-		return;
-
-	for (i = 0; i < nr; i++)
-		pwm_mask |= (1 << leds[i].gpio);
-
-	pwm_led_data.leds = leds;
-	pwm_led_data.num_leds = nr;
-
-	at91_add_device_pwm(pwm_mask);
-	platform_device_register(&at91_pwm_leds_device);
-}
-#else
-void __init at91_pwm_leds(struct gpio_led *leds, int nr){}
-#endif
+4
arch/arm/mach-spear/Kconfig
···
 	select HAVE_ARM_SCU if SMP
 	select HAVE_ARM_TWD if SMP
 	select PINCTRL
+	select MFD_SYSCON
+	select MIGHT_HAVE_PCI
 	help
 	  Supports for ARM's SPEAR13XX family
···
 config MACH_SPEAR1310
 	bool "SPEAr1310 Machine support with Device Tree"
 	select PINCTRL_SPEAR1310
+	select PHY_ST_SPEAR1310_MIPHY
 	help
 	  Supports ST SPEAr1310 machine configured via the device-tree
 
 config MACH_SPEAR1340
 	bool "SPEAr1340 Machine support with Device Tree"
 	select PINCTRL_SPEAR1340
+	select PHY_ST_SPEAR1340_MIPHY
 	help
 	  Supports ST SPEAr1340 machine configured via the device-tree
 
+2 -2
arch/arm/mach-spear/include/mach/spear.h
···
 #ifdef CONFIG_ARCH_SPEAR13XX
 
 #define PERIP_GRP2_BASE				UL(0xB3000000)
-#define VA_PERIP_GRP2_BASE			IOMEM(0xFE000000)
+#define VA_PERIP_GRP2_BASE			IOMEM(0xF9000000)
 #define MCIF_SDHCI_BASE				UL(0xB3000000)
 #define SYSRAM0_BASE				UL(0xB3800000)
-#define VA_SYSRAM0_BASE				IOMEM(0xFE800000)
+#define VA_SYSRAM0_BASE				IOMEM(0xF9800000)
 #define SYS_LOCATION				(VA_SYSRAM0_BASE + 0x600)
 
 #define PERIP_GRP1_BASE				UL(0xE0000000)
+1 -124
arch/arm/mach-spear/spear1340.c
···
 
 #define pr_fmt(fmt) "SPEAr1340: " fmt
 
-#include <linux/ahci_platform.h>
-#include <linux/amba/serial.h>
-#include <linux/delay.h>
 #include <linux/of_platform.h>
 #include <asm/mach/arch.h>
 #include "generic.h"
-#include <mach/spear.h>
-
-/* FIXME: Move SATA PHY code into a standalone driver */
-
-/* Base addresses */
-#define SPEAR1340_SATA_BASE			UL(0xB1000000)
-
-/* Power Management Registers */
-#define SPEAR1340_PCM_CFG			(VA_MISC_BASE + 0x100)
-#define SPEAR1340_PCM_WKUP_CFG			(VA_MISC_BASE + 0x104)
-#define SPEAR1340_SWITCH_CTR			(VA_MISC_BASE + 0x108)
-
-#define SPEAR1340_PERIP1_SW_RST			(VA_MISC_BASE + 0x318)
-#define SPEAR1340_PERIP2_SW_RST			(VA_MISC_BASE + 0x31C)
-#define SPEAR1340_PERIP3_SW_RST			(VA_MISC_BASE + 0x320)
-
-/* PCIE - SATA configuration registers */
-#define SPEAR1340_PCIE_SATA_CFG			(VA_MISC_BASE + 0x424)
-/* PCIE CFG MASks */
-#define SPEAR1340_PCIE_CFG_DEVICE_PRESENT	(1 << 11)
-#define SPEAR1340_PCIE_CFG_POWERUP_RESET	(1 << 10)
-#define SPEAR1340_PCIE_CFG_CORE_CLK_EN		(1 << 9)
-#define SPEAR1340_PCIE_CFG_AUX_CLK_EN		(1 << 8)
-#define SPEAR1340_SATA_CFG_TX_CLK_EN		(1 << 4)
-#define SPEAR1340_SATA_CFG_RX_CLK_EN		(1 << 3)
-#define SPEAR1340_SATA_CFG_POWERUP_RESET	(1 << 2)
-#define SPEAR1340_SATA_CFG_PM_CLK_EN		(1 << 1)
-#define SPEAR1340_PCIE_SATA_SEL_PCIE		(0)
-#define SPEAR1340_PCIE_SATA_SEL_SATA		(1)
-#define SPEAR1340_SATA_PCIE_CFG_MASK		0xF1F
-#define SPEAR1340_PCIE_CFG_VAL	(SPEAR1340_PCIE_SATA_SEL_PCIE | \
-			SPEAR1340_PCIE_CFG_AUX_CLK_EN | \
-			SPEAR1340_PCIE_CFG_CORE_CLK_EN | \
-			SPEAR1340_PCIE_CFG_POWERUP_RESET | \
-			SPEAR1340_PCIE_CFG_DEVICE_PRESENT)
-#define SPEAR1340_SATA_CFG_VAL	(SPEAR1340_PCIE_SATA_SEL_SATA | \
-			SPEAR1340_SATA_CFG_PM_CLK_EN | \
-			SPEAR1340_SATA_CFG_POWERUP_RESET | \
-			SPEAR1340_SATA_CFG_RX_CLK_EN | \
-			SPEAR1340_SATA_CFG_TX_CLK_EN)
-
-#define SPEAR1340_PCIE_MIPHY_CFG		(VA_MISC_BASE + 0x428)
-#define SPEAR1340_MIPHY_OSC_BYPASS_EXT		(1 << 31)
-#define SPEAR1340_MIPHY_CLK_REF_DIV2		(1 << 27)
-#define SPEAR1340_MIPHY_CLK_REF_DIV4		(2 << 27)
-#define SPEAR1340_MIPHY_CLK_REF_DIV8		(3 << 27)
-#define SPEAR1340_MIPHY_PLL_RATIO_TOP(x)	(x << 0)
-#define SPEAR1340_PCIE_SATA_MIPHY_CFG_SATA \
-		(SPEAR1340_MIPHY_OSC_BYPASS_EXT | \
-		SPEAR1340_MIPHY_CLK_REF_DIV2 | \
-		SPEAR1340_MIPHY_PLL_RATIO_TOP(60))
-#define SPEAR1340_PCIE_SATA_MIPHY_CFG_SATA_25M_CRYSTAL_CLK \
-		(SPEAR1340_MIPHY_PLL_RATIO_TOP(120))
-#define SPEAR1340_PCIE_SATA_MIPHY_CFG_PCIE \
-		(SPEAR1340_MIPHY_OSC_BYPASS_EXT | \
-		SPEAR1340_MIPHY_PLL_RATIO_TOP(25))
-
-/* SATA device registration */
-static int sata_miphy_init(struct device *dev, void __iomem *addr)
-{
-	writel(SPEAR1340_SATA_CFG_VAL, SPEAR1340_PCIE_SATA_CFG);
-	writel(SPEAR1340_PCIE_SATA_MIPHY_CFG_SATA_25M_CRYSTAL_CLK,
-			SPEAR1340_PCIE_MIPHY_CFG);
-	/* Switch on sata power domain */
-	writel((readl(SPEAR1340_PCM_CFG) | (0x800)), SPEAR1340_PCM_CFG);
-	msleep(20);
-	/* Disable PCIE SATA Controller reset */
-	writel((readl(SPEAR1340_PERIP1_SW_RST) & (~0x1000)),
-			SPEAR1340_PERIP1_SW_RST);
-	msleep(20);
-
-	return 0;
-}
-
-static void sata_miphy_exit(struct device *dev)
-{
-	writel(0, SPEAR1340_PCIE_SATA_CFG);
-	writel(0, SPEAR1340_PCIE_MIPHY_CFG);
-
-	/* Enable PCIE SATA Controller reset */
-	writel((readl(SPEAR1340_PERIP1_SW_RST) | (0x1000)),
-			SPEAR1340_PERIP1_SW_RST);
-	msleep(20);
-	/* Switch off sata power domain */
-	writel((readl(SPEAR1340_PCM_CFG) & (~0x800)), SPEAR1340_PCM_CFG);
-	msleep(20);
-}
-
-static int sata_suspend(struct device *dev)
-{
-	if (dev->power.power_state.event == PM_EVENT_FREEZE)
-		return 0;
-
-	sata_miphy_exit(dev);
-
-	return 0;
-}
-
-static int sata_resume(struct device *dev)
-{
-	if (dev->power.power_state.event == PM_EVENT_THAW)
-		return 0;
-
-	return sata_miphy_init(dev, NULL);
-}
-
-static struct ahci_platform_data sata_pdata = {
-	.init	= sata_miphy_init,
-	.exit	= sata_miphy_exit,
-	.suspend = sata_suspend,
-	.resume = sata_resume,
-};
-
-/* Add SPEAr1340 auxdata to pass platform data */
-static struct of_dev_auxdata spear1340_auxdata_lookup[] __initdata = {
-	OF_DEV_AUXDATA("snps,spear-ahci", SPEAR1340_SATA_BASE, NULL,
-			&sata_pdata),
-	{}
-};
 
 static void __init spear1340_dt_init(void)
 {
-	of_platform_populate(NULL, of_default_bus_match_table,
-			spear1340_auxdata_lookup, NULL);
+	of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
 	platform_device_register_simple("spear-cpufreq", -1, NULL, 0);
 }
 
+1 -1
arch/arm/mach-spear/spear13xx.c
···
 /*
  * Following will create 16MB static virtual/physical mappings
  * PHYSICAL		VIRTUAL
- * 0xB3000000		0xFE000000
+ * 0xB3000000		0xF9000000
  * 0xE0000000		0xFD000000
  * 0xEC000000		0xFC000000
  * 0xED000000		0xFB000000
+22 -12
arch/avr32/boards/atngw100/mrmt.c
···
 #include <linux/types.h>
 #include <linux/fb.h>
 #include <linux/leds.h>
+#include <linux/pwm.h>
+#include <linux/leds_pwm.h>
 #include <linux/input.h>
 #include <linux/gpio_keys.h>
 #include <linux/atmel_serial.h>
···
 
 #ifdef CONFIG_BOARD_MRMT_BL_PWM
 /* PWM LEDs: LCD Backlight, etc */
-static struct gpio_led rmt_pwm_led[] = {
-	/* here the "gpio" is actually a PWM channel */
-	{ .name = "backlight",	.gpio = PWM_CH_BL, },
+static struct pwm_lookup pwm_lookup[] = {
+	PWM_LOOKUP("at91sam9rl-pwm", PWM_CH_BL, "leds_pwm", "ds1",
+		   5000, PWM_POLARITY_INVERSED),
 };
 
-static struct gpio_led_platform_data rmt_pwm_led_data = {
-	.num_leds = ARRAY_SIZE(rmt_pwm_led),
-	.leds = rmt_pwm_led,
+static struct led_pwm pwm_leds[] = {
+	{
+		.name = "backlight",
+		.max_brightness = 255,
+	},
 };
 
-static struct platform_device rmt_pwm_led_dev = {
-	.name = "leds-atmel-pwm",
-	.id = -1,
-	.dev = {
-		.platform_data = &rmt_pwm_led_data,
+static struct led_pwm_platform_data pwm_data = {
+	.num_leds = ARRAY_SIZE(pwm_leds),
+	.leds = pwm_leds,
+};
+
+static struct platform_device leds_pwm = {
+	.name = "leds_pwm",
+	.id = -1,
+	.dev = {
+		.platform_data = &pwm_data,
 	},
 };
 #endif
···
 #ifdef CONFIG_BOARD_MRMT_BL_PWM
 	/* Use PWM for Backlight controls */
 	at32_add_device_pwm(1 << PWM_CH_BL);
-	platform_device_register(&rmt_pwm_led_dev);
+	pwm_add_table(pwm_lookup, ARRAY_SIZE(pwm_lookup));
+	platform_device_register(&leds_pwm);
 #else
 	/* Backlight always on */
 	udelay( 1 );
+30 -18
arch/avr32/boards/favr-32/setup.c
···
 #include <linux/gpio.h>
 #include <linux/leds.h>
 #include <linux/atmel-mci.h>
-#include <linux/atmel-pwm-bl.h>
+#include <linux/pwm.h>
+#include <linux/pwm_backlight.h>
+#include <linux/regulator/fixed.h>
+#include <linux/regulator/machine.h>
 #include <linux/spi/spi.h>
 #include <linux/spi/ads7846.h>
 
···
 #include <mach/init.h>
 #include <mach/board.h>
 #include <mach/portmux.h>
+
+#define PWM_BL_CH 2
 
 /* Oscillator frequencies. These are board-specific */
 unsigned long at32_board_osc_rates[3] = {
···
 	platform_device_register(&favr32_led_dev);
 }
 
-static struct atmel_pwm_bl_platform_data atmel_pwm_bl_pdata = {
-	.pwm_channel		= 2,
-	.pwm_frequency		= 200000,
-	.pwm_compare_max	= 345,
-	.pwm_duty_max		= 345,
-	.pwm_duty_min		= 90,
-	.pwm_active_low		= 1,
-	.gpio_on		= GPIO_PIN_PA(28),
-	.on_active_low		= 0,
+static struct pwm_lookup pwm_lookup[] = {
+	PWM_LOOKUP("at91sam9rl-pwm", PWM_BL_CH, "pwm-backlight.0", NULL,
+		   5000, PWM_POLARITY_INVERSED),
 };
 
-static struct platform_device atmel_pwm_bl_dev = {
-	.name		= "atmel-pwm-bl",
-	.id		= 0,
-	.dev		= {
-		.platform_data = &atmel_pwm_bl_pdata,
+static struct regulator_consumer_supply fixed_power_consumers[] = {
+	REGULATOR_SUPPLY("power", "pwm-backlight.0"),
+};
+
+static struct platform_pwm_backlight_data pwm_bl_data = {
+	.enable_gpio		= GPIO_PIN_PA(28),
+	.max_brightness		= 255,
+	.dft_brightness		= 255,
+	.lth_brightness		= 50,
+};
+
+static struct platform_device pwm_bl_device = {
+	.name = "pwm-backlight",
+	.dev = {
+		.platform_data = &pwm_bl_data,
 	},
 };
 
 static void __init favr32_setup_atmel_pwm_bl(void)
 {
-	platform_device_register(&atmel_pwm_bl_dev);
-	at32_select_gpio(atmel_pwm_bl_pdata.gpio_on, 0);
+	pwm_add_table(pwm_lookup, ARRAY_SIZE(pwm_lookup));
+	regulator_register_always_on(0, "fixed", fixed_power_consumers,
+				     ARRAY_SIZE(fixed_power_consumers), 3300000);
+	platform_device_register(&pwm_bl_device);
+	at32_select_gpio(pwm_bl_data.enable_gpio, 0);
 }
 
 void __init setup_board(void)
···
 
 	set_abdac_rate(at32_add_device_abdac(0, &abdac0_data));
 
-	at32_add_device_pwm(1 << atmel_pwm_bl_pdata.pwm_channel);
+	at32_add_device_pwm(1 << PWM_BL_CH);
 	at32_add_device_spi(1, spi1_board_info, ARRAY_SIZE(spi1_board_info));
 	at32_add_device_mci(0, &mci0_data);
 	at32_add_device_usba(0, NULL);
+21 -13
arch/avr32/boards/merisc/setup.c
···
 #include <linux/irq.h>
 #include <linux/fb.h>
 #include <linux/atmel-mci.h>
+#include <linux/pwm.h>
+#include <linux/leds_pwm.h>
 
 #include <asm/io.h>
 #include <asm/setup.h>
···
 	},
 };
 
-#ifdef CONFIG_LEDS_ATMEL_PWM
-static struct gpio_led stk_pwm_led[] = {
+#if IS_ENABLED(CONFIG_LEDS_PWM)
+static struct pwm_lookup pwm_lookup[] = {
+	PWM_LOOKUP("at91sam9rl-pwm", 0, "leds_pwm", "backlight",
+		   5000, PWM_POLARITY_NORMAL),
+};
+
+static struct led_pwm pwm_leds[] = {
 	{
 		.name = "backlight",
-		.gpio = 0,		/* PWM channel 0 (LCD backlight) */
+		.max_brightness = 255,
 	},
 };
 
-static struct gpio_led_platform_data stk_pwm_led_data = {
-	.num_leds = ARRAY_SIZE(stk_pwm_led),
-	.leds = stk_pwm_led,
+static struct led_pwm_platform_data pwm_data = {
+	.num_leds = ARRAY_SIZE(pwm_leds),
+	.leds = pwm_leds,
 };
 
-static struct platform_device stk_pwm_led_dev = {
-	.name = "leds-atmel-pwm",
-	.id = -1,
-	.dev = {
-		.platform_data = &stk_pwm_led_data,
+static struct platform_device leds_pwm = {
+	.name = "leds_pwm",
+	.id = -1,
+	.dev = {
+		.platform_data = &pwm_data,
 	},
 };
 #endif
···
 
 	at32_add_device_mci(0, &mci0_data);
 
-#ifdef CONFIG_LEDS_ATMEL_PWM
+#if IS_ENABLED(CONFIG_LEDS_PWM)
+	pwm_add_table(pwm_lookup, ARRAY_SIZE(pwm_lookup));
 	at32_add_device_pwm((1 << 0) | (1 << 2));
-	platform_device_register(&stk_pwm_led_dev);
+	platform_device_register(&leds_pwm);
 #else
 	at32_add_device_pwm((1 << 2));
 #endif
+3 -2
arch/avr32/configs/atngw100_mrmt_defconfig
···
 CONFIG_MTD_PHYSMAP=y
 CONFIG_MTD_DATAFLASH=y
 CONFIG_BLK_DEV_LOOP=y
-CONFIG_ATMEL_PWM=y
 CONFIG_NETDEVICES=y
 CONFIG_NET_ETHERNET=y
 CONFIG_MACB=y
···
 CONFIG_MMC_ATMELMCI=y
 CONFIG_NEW_LEDS=y
 CONFIG_LEDS_CLASS=y
-CONFIG_LEDS_ATMEL_PWM=y
 CONFIG_LEDS_GPIO=y
+CONFIG_LEDS_PWM=y
 CONFIG_LEDS_TRIGGERS=y
 CONFIG_LEDS_TRIGGER_TIMER=y
 CONFIG_LEDS_TRIGGER_HEARTBEAT=y
···
 CONFIG_RTC_DRV_AT32AP700X=m
 CONFIG_DMADEVICES=y
 CONFIG_UIO=y
+CONFIG_PWM=y
+CONFIG_PWM_ATMEL=y
 CONFIG_EXT2_FS=y
 CONFIG_EXT2_FS_XATTR=y
 CONFIG_EXT3_FS=y
+3 -2
arch/avr32/configs/atstk1002_defconfig
···
 CONFIG_BLK_DEV_NBD=m
 CONFIG_BLK_DEV_RAM=m
 CONFIG_MISC_DEVICES=y
-CONFIG_ATMEL_PWM=m
 CONFIG_ATMEL_TCLIB=y
 CONFIG_ATMEL_SSC=m
 # CONFIG_SCSI_PROC_FS is not set
···
 CONFIG_MMC_ATMELMCI=y
 CONFIG_NEW_LEDS=y
 CONFIG_LEDS_CLASS=y
-CONFIG_LEDS_ATMEL_PWM=m
 CONFIG_LEDS_GPIO=m
+CONFIG_LEDS_PWM=m
 CONFIG_LEDS_TRIGGERS=y
 CONFIG_LEDS_TRIGGER_TIMER=m
 CONFIG_LEDS_TRIGGER_HEARTBEAT=m
 CONFIG_RTC_CLASS=y
 CONFIG_RTC_DRV_AT32AP700X=y
 CONFIG_DMADEVICES=y
+CONFIG_PWM=y
+CONFIG_PWM_ATMEL=m
 CONFIG_EXT2_FS=y
 CONFIG_EXT3_FS=y
 # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+3 -2
arch/avr32/configs/atstk1003_defconfig
···
 CONFIG_BLK_DEV_NBD=m
 CONFIG_BLK_DEV_RAM=m
 CONFIG_MISC_DEVICES=y
-CONFIG_ATMEL_PWM=m
 CONFIG_ATMEL_TCLIB=y
 CONFIG_ATMEL_SSC=m
 # CONFIG_SCSI_PROC_FS is not set
···
 CONFIG_MMC_ATMELMCI=y
 CONFIG_NEW_LEDS=y
 CONFIG_LEDS_CLASS=y
-CONFIG_LEDS_ATMEL_PWM=m
 CONFIG_LEDS_GPIO=m
+CONFIG_LEDS_PWM=m
 CONFIG_LEDS_TRIGGERS=y
 CONFIG_LEDS_TRIGGER_TIMER=m
 CONFIG_LEDS_TRIGGER_HEARTBEAT=m
 CONFIG_RTC_CLASS=y
 CONFIG_RTC_DRV_AT32AP700X=y
 CONFIG_DMADEVICES=y
+CONFIG_PWM=y
+CONFIG_PWM_ATMEL=m
 CONFIG_EXT2_FS=y
 CONFIG_EXT3_FS=y
 # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+3 -2
arch/avr32/configs/atstk1004_defconfig
···
 CONFIG_BLK_DEV_NBD=m
 CONFIG_BLK_DEV_RAM=m
 CONFIG_MISC_DEVICES=y
-CONFIG_ATMEL_PWM=m
 CONFIG_ATMEL_TCLIB=y
 CONFIG_ATMEL_SSC=m
 # CONFIG_SCSI_PROC_FS is not set
···
 CONFIG_MMC_ATMELMCI=y
 CONFIG_NEW_LEDS=y
 CONFIG_LEDS_CLASS=y
-CONFIG_LEDS_ATMEL_PWM=m
 CONFIG_LEDS_GPIO=m
+CONFIG_LEDS_PWM=m
 CONFIG_LEDS_TRIGGERS=y
 CONFIG_LEDS_TRIGGER_TIMER=m
 CONFIG_LEDS_TRIGGER_HEARTBEAT=m
 CONFIG_RTC_CLASS=y
 CONFIG_RTC_DRV_AT32AP700X=y
 CONFIG_DMADEVICES=y
+CONFIG_PWM=y
+CONFIG_PWM_ATMEL=m
 CONFIG_EXT2_FS=y
 CONFIG_EXT3_FS=y
 # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+3 -2
arch/avr32/configs/atstk1006_defconfig
···
 CONFIG_BLK_DEV_NBD=m
 CONFIG_BLK_DEV_RAM=m
 CONFIG_MISC_DEVICES=y
-CONFIG_ATMEL_PWM=m
 CONFIG_ATMEL_TCLIB=y
 CONFIG_ATMEL_SSC=m
 # CONFIG_SCSI_PROC_FS is not set
···
 CONFIG_MMC_ATMELMCI=y
 CONFIG_NEW_LEDS=y
 CONFIG_LEDS_CLASS=y
-CONFIG_LEDS_ATMEL_PWM=m
 CONFIG_LEDS_GPIO=m
+CONFIG_LEDS_PWM=m
 CONFIG_LEDS_TRIGGERS=y
 CONFIG_LEDS_TRIGGER_TIMER=m
 CONFIG_LEDS_TRIGGER_HEARTBEAT=m
 CONFIG_RTC_CLASS=y
 CONFIG_RTC_DRV_AT32AP700X=y
 CONFIG_DMADEVICES=y
+CONFIG_PWM=y
+CONFIG_PWM_ATMEL=m
 CONFIG_EXT2_FS=y
 CONFIG_EXT3_FS=y
 # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+3 -3
arch/avr32/configs/favr-32_defconfig
···
 CONFIG_BLK_DEV_LOOP=m
 CONFIG_BLK_DEV_NBD=m
 CONFIG_BLK_DEV_RAM=m
-CONFIG_ATMEL_PWM=m
 CONFIG_ATMEL_TCLIB=y
 CONFIG_ATMEL_SSC=m
 CONFIG_NETDEVICES=y
···
 CONFIG_FB_ATMEL=y
 CONFIG_BACKLIGHT_LCD_SUPPORT=y
 # CONFIG_LCD_CLASS_DEVICE is not set
-CONFIG_BACKLIGHT_ATMEL_PWM=m
+CONFIG_BACKLIGHT_PWM=m
 CONFIG_SOUND=m
 CONFIG_SOUND_PRIME=m
 # CONFIG_HID_SUPPORT is not set
···
 CONFIG_MMC_ATMELMCI=y
 CONFIG_NEW_LEDS=y
 CONFIG_LEDS_CLASS=y
-CONFIG_LEDS_ATMEL_PWM=m
 CONFIG_LEDS_GPIO=y
 CONFIG_LEDS_TRIGGERS=y
 CONFIG_LEDS_TRIGGER_TIMER=y
···
 CONFIG_RTC_CLASS=y
 CONFIG_RTC_DRV_AT32AP700X=y
 CONFIG_DMADEVICES=y
+CONFIG_PWM=y
+CONFIG_PWM_ATMEL=y
 CONFIG_EXT2_FS=y
 CONFIG_EXT3_FS=y
 # CONFIG_EXT3_FS_XATTR is not set
+3 -2
arch/avr32/configs/merisc_defconfig
···
 CONFIG_MTD_PHYSMAP=y
 CONFIG_MTD_BLOCK2MTD=y
 CONFIG_BLK_DEV_LOOP=y
-CONFIG_ATMEL_PWM=y
 CONFIG_ATMEL_SSC=y
 CONFIG_SCSI=y
 CONFIG_BLK_DEV_SD=y
···
 CONFIG_MMC_ATMELMCI=y
 CONFIG_NEW_LEDS=y
 CONFIG_LEDS_CLASS=y
-CONFIG_LEDS_ATMEL_PWM=y
+CONFIG_LEDS_PWM=y
 CONFIG_RTC_CLASS=y
 # CONFIG_RTC_HCTOSYS is not set
 CONFIG_RTC_DRV_PCF8563=y
 CONFIG_DMADEVICES=y
 CONFIG_UIO=y
+CONFIG_PWM=y
+CONFIG_PWM_ATMEL=m
 CONFIG_EXT2_FS=y
 # CONFIG_DNOTIFY is not set
 CONFIG_FUSE_FS=y
+2 -5
arch/avr32/mach-at32ap/at32ap700x.c
···
 	IRQ(24),
 };
 static struct clk atmel_pwm0_mck = {
-	.name		= "pwm_clk",
+	.name		= "at91sam9rl-pwm",
 	.parent		= &pbb_clk,
 	.mode		= pbb_clk_mode,
 	.get_rate	= pbb_clk_get_rate,
···
 	if (!mask)
 		return NULL;
 
-	pdev = platform_device_alloc("atmel_pwm", 0);
+	pdev = platform_device_alloc("at91sam9rl-pwm", 0);
 	if (!pdev)
 		return NULL;
 
 	if (platform_device_add_resources(pdev, atmel_pwm0_resource,
 				ARRAY_SIZE(atmel_pwm0_resource)))
-		goto out_free_pdev;
-
-	if (platform_device_add_data(pdev, &mask, sizeof(mask)))
 		goto out_free_pdev;
 
 	pin_mask = 0;
+8
drivers/bus/Kconfig
···
 	  Driver supporting the CCI cache coherent interconnect for ARM
 	  platforms.
 
+config ARM_CCN
+	bool "ARM CCN driver support"
+	depends on ARM || ARM64
+	depends on PERF_EVENTS
+	help
+	  PMU (perf) driver supporting the ARM CCN (Cache Coherent Network)
+	  interconnect.
+
 config VEXPRESS_CONFIG
 	bool "Versatile Express configuration bus"
 	default y if ARCH_VEXPRESS
+3 -1
drivers/bus/Makefile
···
 
 # Interconnect bus driver for OMAP SoCs.
 obj-$(CONFIG_OMAP_INTERCONNECT)	+= omap_l3_smx.o omap_l3_noc.o
-# CCI cache coherent interconnect for ARM platforms
+
+# Interconnect bus drivers for ARM platforms
 obj-$(CONFIG_ARM_CCI)		+= arm-cci.o
+obj-$(CONFIG_ARM_CCN)		+= arm-ccn.o
 
 obj-$(CONFIG_VEXPRESS_CONFIG)	+= vexpress-config.o
+1391
drivers/bus/arm-ccn.c
···
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2014 ARM Limited
+ */
+
+#include <linux/ctype.h>
+#include <linux/hrtimer.h>
+#include <linux/idr.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/perf_event.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+
+#define CCN_NUM_XP_PORTS 2
+#define CCN_NUM_VCS 4
+#define CCN_NUM_REGIONS	256
+#define CCN_REGION_SIZE	0x10000
+
+#define CCN_ALL_OLY_ID			0xff00
+#define CCN_ALL_OLY_ID__OLY_ID__SHIFT			0
+#define CCN_ALL_OLY_ID__OLY_ID__MASK			0x1f
+#define CCN_ALL_OLY_ID__NODE_ID__SHIFT			8
+#define CCN_ALL_OLY_ID__NODE_ID__MASK			0x3f
+
+#define CCN_MN_ERRINT_STATUS		0x0008
+#define CCN_MN_ERRINT_STATUS__INTREQ__DESSERT		0x11
+#define CCN_MN_ERRINT_STATUS__ALL_ERRORS__ENABLE	0x02
+#define CCN_MN_ERRINT_STATUS__ALL_ERRORS__DISABLED	0x20
+#define CCN_MN_ERRINT_STATUS__ALL_ERRORS__DISABLE	0x22
+#define CCN_MN_ERRINT_STATUS__CORRECTED_ERRORS_ENABLE	0x04
+#define CCN_MN_ERRINT_STATUS__CORRECTED_ERRORS_DISABLED	0x40
+#define CCN_MN_ERRINT_STATUS__CORRECTED_ERRORS_DISABLE	0x44
+#define CCN_MN_ERRINT_STATUS__PMU_EVENTS__ENABLE	0x08
+#define CCN_MN_ERRINT_STATUS__PMU_EVENTS__DISABLED	0x80
+#define CCN_MN_ERRINT_STATUS__PMU_EVENTS__DISABLE	0x88
+#define CCN_MN_OLY_COMP_LIST_63_0	0x01e0
+#define CCN_MN_ERR_SIG_VAL_63_0		0x0300
+#define CCN_MN_ERR_SIG_VAL_63_0__DT			(1 << 1)
+
+#define CCN_DT_ACTIVE_DSM		0x0000
+#define CCN_DT_ACTIVE_DSM__DSM_ID__SHIFT(n)		((n) * 8)
+#define CCN_DT_ACTIVE_DSM__DSM_ID__MASK			0xff
+#define CCN_DT_CTL			0x0028
+#define CCN_DT_CTL__DT_EN				(1 << 0)
+#define CCN_DT_PMEVCNT(n)		(0x0100 + (n) * 0x8)
+#define CCN_DT_PMCCNTR			0x0140
+#define CCN_DT_PMCCNTRSR		0x0190
+#define CCN_DT_PMOVSR			0x0198
+#define CCN_DT_PMOVSR_CLR		0x01a0
+#define CCN_DT_PMCR			0x01a8
+#define CCN_DT_PMCR__OVFL_INTR_EN			(1 << 6)
+#define CCN_DT_PMCR__PMU_EN				(1 << 0)
+#define CCN_DT_PMSR			0x01b0
+#define CCN_DT_PMSR_REQ			0x01b8
+#define CCN_DT_PMSR_CLR			0x01c0
+
+#define CCN_HNF_PMU_EVENT_SEL		0x0600
+#define CCN_HNF_PMU_EVENT_SEL__ID__SHIFT(n)		((n) * 4)
+#define CCN_HNF_PMU_EVENT_SEL__ID__MASK			0xf
+
+#define CCN_XP_DT_CONFIG		0x0300
+#define CCN_XP_DT_CONFIG__DT_CFG__SHIFT(n)		((n) * 4)
+#define CCN_XP_DT_CONFIG__DT_CFG__MASK			0xf
+#define CCN_XP_DT_CONFIG__DT_CFG__PASS_THROUGH		0x0
+#define CCN_XP_DT_CONFIG__DT_CFG__WATCHPOINT_0_OR_1	0x1
+#define CCN_XP_DT_CONFIG__DT_CFG__WATCHPOINT(n)		(0x2 + (n))
+#define CCN_XP_DT_CONFIG__DT_CFG__XP_PMU_EVENT(n)	(0x4 + (n))
+#define CCN_XP_DT_CONFIG__DT_CFG__DEVICE_PMU_EVENT(d, n) (0x8 + (d) * 4 + (n))
+#define CCN_XP_DT_INTERFACE_SEL		0x0308
+#define CCN_XP_DT_INTERFACE_SEL__DT_IO_SEL__SHIFT(n)	(0 + (n) * 8)
+#define CCN_XP_DT_INTERFACE_SEL__DT_IO_SEL__MASK	0x1
+#define CCN_XP_DT_INTERFACE_SEL__DT_DEV_SEL__SHIFT(n)	(1 + (n) * 8)
+#define CCN_XP_DT_INTERFACE_SEL__DT_DEV_SEL__MASK	0x1
+#define CCN_XP_DT_INTERFACE_SEL__DT_VC_SEL__SHIFT(n)	(2 + (n) * 8)
+#define CCN_XP_DT_INTERFACE_SEL__DT_VC_SEL__MASK	0x3
+#define CCN_XP_DT_CMP_VAL_L(n)		(0x0310 + (n) * 0x40)
+#define CCN_XP_DT_CMP_VAL_H(n)		(0x0318 + (n) * 0x40)
+#define CCN_XP_DT_CMP_MASK_L(n)		(0x0320 + (n) * 0x40)
+#define CCN_XP_DT_CMP_MASK_H(n)		(0x0328 + (n) * 0x40)
+#define CCN_XP_DT_CONTROL		0x0370
+#define CCN_XP_DT_CONTROL__DT_ENABLE			(1 << 0)
+#define CCN_XP_DT_CONTROL__WP_ARM_SEL__SHIFT(n)		(12 + (n) * 4)
+#define CCN_XP_DT_CONTROL__WP_ARM_SEL__MASK		0xf
+#define CCN_XP_DT_CONTROL__WP_ARM_SEL__ALWAYS		0xf
+#define CCN_XP_PMU_EVENT_SEL		0x0600
+#define CCN_XP_PMU_EVENT_SEL__ID__SHIFT(n)		((n) * 7)
+#define CCN_XP_PMU_EVENT_SEL__ID__MASK			0x3f
+
+#define CCN_SBAS_PMU_EVENT_SEL		0x0600
+#define CCN_SBAS_PMU_EVENT_SEL__ID__SHIFT(n)		((n) * 4)
+#define CCN_SBAS_PMU_EVENT_SEL__ID__MASK		0xf
+
+#define CCN_RNI_PMU_EVENT_SEL		0x0600
+#define CCN_RNI_PMU_EVENT_SEL__ID__SHIFT(n)		((n) * 4)
+#define CCN_RNI_PMU_EVENT_SEL__ID__MASK			0xf
+
+#define CCN_TYPE_MN	0x01
+#define CCN_TYPE_DT	0x02
+#define CCN_TYPE_HNF	0x04
+#define CCN_TYPE_HNI	0x05
+#define CCN_TYPE_XP	0x08
+#define CCN_TYPE_SBSX	0x0c
+#define CCN_TYPE_SBAS	0x10
+#define CCN_TYPE_RNI_1P	0x14
+#define CCN_TYPE_RNI_2P	0x15
+#define CCN_TYPE_RNI_3P	0x16
+#define CCN_TYPE_RND_1P	0x18 /* RN-D = RN-I + DVM */
+#define CCN_TYPE_RND_2P	0x19
+#define CCN_TYPE_RND_3P	0x1a
+#define CCN_TYPE_CYCLES	0xff /* Pseudotype */
+
+#define CCN_EVENT_WATCHPOINT 0xfe /* Pseudoevent */
+
+#define CCN_NUM_PMU_EVENTS 4
+#define CCN_NUM_XP_WATCHPOINTS 2 /* See DT.dbg_id.num_watchpoints */
+#define CCN_NUM_PMU_EVENT_COUNTERS 8 /* See DT.dbg_id.num_pmucntr */
+#define CCN_IDX_PMU_CYCLE_COUNTER CCN_NUM_PMU_EVENT_COUNTERS
+
+#define CCN_NUM_PREDEFINED_MASKS 4
+#define CCN_IDX_MASK_ANY (CCN_NUM_PMU_EVENT_COUNTERS + 0)
+#define CCN_IDX_MASK_EXACT (CCN_NUM_PMU_EVENT_COUNTERS + 1)
+#define CCN_IDX_MASK_ORDER (CCN_NUM_PMU_EVENT_COUNTERS + 2)
+#define CCN_IDX_MASK_OPCODE (CCN_NUM_PMU_EVENT_COUNTERS + 3)
+
+struct arm_ccn_component {
+	void __iomem *base;
+	u32 type;
+
+	DECLARE_BITMAP(pmu_events_mask, CCN_NUM_PMU_EVENTS);
+	union {
+		struct {
+			DECLARE_BITMAP(dt_cmp_mask, CCN_NUM_XP_WATCHPOINTS);
+		} xp;
+	};
+};
+
+#define pmu_to_arm_ccn(_pmu) container_of(container_of(_pmu, \
+		struct arm_ccn_dt, pmu), struct arm_ccn, dt)
+
+struct arm_ccn_dt {
+	int id;
+	void __iomem *base;
+
+	spinlock_t config_lock;
+
+	DECLARE_BITMAP(pmu_counters_mask, CCN_NUM_PMU_EVENT_COUNTERS + 1);
+	struct {
+		struct arm_ccn_component *source;
+		struct perf_event *event;
+	} pmu_counters[CCN_NUM_PMU_EVENT_COUNTERS + 1];
+
+	struct {
+		u64 l, h;
+	} cmp_mask[CCN_NUM_PMU_EVENT_COUNTERS + CCN_NUM_PREDEFINED_MASKS];
+
+	struct hrtimer hrtimer;
+
+	struct pmu pmu;
+};
+
+struct arm_ccn {
+	struct device *dev;
+	void __iomem *base;
+	unsigned irq_used:1;
+	unsigned sbas_present:1;
+	unsigned sbsx_present:1;
+
+	int num_nodes;
+	struct arm_ccn_component *node;
+
+	int num_xps;
+	struct arm_ccn_component *xp;
+
+	struct arm_ccn_dt dt;
+};
+
+
+static int arm_ccn_node_to_xp(int node)
+{
+	return node / CCN_NUM_XP_PORTS;
+}
+
+static int arm_ccn_node_to_xp_port(int node)
+{
+	return node % CCN_NUM_XP_PORTS;
+}
+
+
+/*
+ * Bit shifts and masks in these defines must be kept in sync with
+ * arm_ccn_pmu_config_set() and CCN_FORMAT_ATTRs below!
+ */
+#define CCN_CONFIG_NODE(_config)	(((_config) >> 0) & 0xff)
+#define CCN_CONFIG_XP(_config)		(((_config) >> 0) & 0xff)
+#define CCN_CONFIG_TYPE(_config)	(((_config) >> 8) & 0xff)
+#define CCN_CONFIG_EVENT(_config)	(((_config) >> 16) & 0xff)
+#define CCN_CONFIG_PORT(_config)	(((_config) >> 24) & 0x3)
+#define CCN_CONFIG_VC(_config)		(((_config) >> 26) & 0x7)
+#define CCN_CONFIG_DIR(_config)		(((_config) >> 29) & 0x1)
+#define CCN_CONFIG_MASK(_config)	(((_config) >> 30) & 0xf)
+
+static void arm_ccn_pmu_config_set(u64 *config, u32 node_xp, u32 type, u32 port)
+{
+	*config &= ~((0xff << 0) | (0xff << 8) | (0xff << 24));
+	*config |= (node_xp << 0) | (type << 8) | (port << 24);
+}
+
+static ssize_t arm_ccn_pmu_format_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	struct dev_ext_attribute *ea = container_of(attr,
+			struct dev_ext_attribute, attr);
+
+	return snprintf(buf, PAGE_SIZE, "%s\n", (char *)ea->var);
+}
+
+#define CCN_FORMAT_ATTR(_name, _config) \
+	struct dev_ext_attribute arm_ccn_pmu_format_attr_##_name = \
+			{ __ATTR(_name, S_IRUGO, arm_ccn_pmu_format_show, \
+			NULL), _config }
+
+static CCN_FORMAT_ATTR(node, "config:0-7");
+static CCN_FORMAT_ATTR(xp, "config:0-7");
+static CCN_FORMAT_ATTR(type, "config:8-15");
+static CCN_FORMAT_ATTR(event, "config:16-23");
+static CCN_FORMAT_ATTR(port, "config:24-25");
+static CCN_FORMAT_ATTR(vc, "config:26-28");
+static CCN_FORMAT_ATTR(dir, "config:29-29");
+static CCN_FORMAT_ATTR(mask, "config:30-33");
+static CCN_FORMAT_ATTR(cmp_l, "config1:0-62");
+static CCN_FORMAT_ATTR(cmp_h, "config2:0-59");
+
+static struct attribute *arm_ccn_pmu_format_attrs[] = {
+	&arm_ccn_pmu_format_attr_node.attr.attr,
+	&arm_ccn_pmu_format_attr_xp.attr.attr,
+	&arm_ccn_pmu_format_attr_type.attr.attr,
+	&arm_ccn_pmu_format_attr_event.attr.attr,
+	&arm_ccn_pmu_format_attr_port.attr.attr,
+	&arm_ccn_pmu_format_attr_vc.attr.attr,
+	&arm_ccn_pmu_format_attr_dir.attr.attr,
+	&arm_ccn_pmu_format_attr_mask.attr.attr,
+	&arm_ccn_pmu_format_attr_cmp_l.attr.attr,
+	&arm_ccn_pmu_format_attr_cmp_h.attr.attr,
+	NULL
+};
+
+static struct attribute_group arm_ccn_pmu_format_attr_group = {
+	.name = "format",
+	.attrs = arm_ccn_pmu_format_attrs,
+};
+
+
+struct arm_ccn_pmu_event {
+	struct device_attribute attr;
+	u32 type;
+	u32 event;
+	int num_ports;
+	int num_vcs;
+	const char *def;
+	int mask;
+};
+
+#define CCN_EVENT_ATTR(_name) \
+	__ATTR(_name, S_IRUGO, arm_ccn_pmu_event_show, NULL)
+
+/*
+ * Events defined in TRM for MN, HN-I and SBSX are actually watchpoints set on
+ * their ports in XP they are connected to. For the sake of usability they are
+ * explicitly defined here (and translated into a relevant watchpoint in
+ * arm_ccn_pmu_event_init()) so the user can easily request them without deep
+ * knowledge of the flit format.
+ */
+
+#define CCN_EVENT_MN(_name, _def, _mask) { .attr = CCN_EVENT_ATTR(mn_##_name), \
+		.type = CCN_TYPE_MN, .event = CCN_EVENT_WATCHPOINT, \
+		.num_ports = CCN_NUM_XP_PORTS, .num_vcs = CCN_NUM_VCS, \
+		.def = _def, .mask = _mask, }
+
+#define CCN_EVENT_HNI(_name, _def, _mask) { \
+		.attr = CCN_EVENT_ATTR(hni_##_name), .type = CCN_TYPE_HNI, \
+		.event = CCN_EVENT_WATCHPOINT, .num_ports = CCN_NUM_XP_PORTS, \
+		.num_vcs = CCN_NUM_VCS, .def = _def, .mask = _mask, }
+
+#define CCN_EVENT_SBSX(_name, _def, _mask) { \
+		.attr = CCN_EVENT_ATTR(sbsx_##_name), .type = CCN_TYPE_SBSX, \
+		.event = CCN_EVENT_WATCHPOINT, .num_ports = CCN_NUM_XP_PORTS, \
+		.num_vcs = CCN_NUM_VCS, .def = _def, .mask = _mask, }
+
+#define CCN_EVENT_HNF(_name, _event) { .attr = CCN_EVENT_ATTR(hnf_##_name), \
+		.type = CCN_TYPE_HNF, .event = _event, }
+
+#define CCN_EVENT_XP(_name, _event) { .attr = CCN_EVENT_ATTR(xp_##_name), \
+		.type = CCN_TYPE_XP, .event = _event, \
+		.num_ports = CCN_NUM_XP_PORTS, .num_vcs = CCN_NUM_VCS, }
+
+/*
+ * RN-I & RN-D (RN-D = RN-I + DVM) nodes have different type ID depending
+ * on configuration. One of them is picked to represent the whole group,
+ * as they all share the same event types.
+ */
+#define CCN_EVENT_RNI(_name, _event) { .attr = CCN_EVENT_ATTR(rni_##_name), \
+		.type = CCN_TYPE_RNI_3P, .event = _event, }
+
+#define CCN_EVENT_SBAS(_name, _event) { .attr = CCN_EVENT_ATTR(sbas_##_name), \
+		.type = CCN_TYPE_SBAS, .event = _event, }
+
+#define CCN_EVENT_CYCLES(_name) { .attr = CCN_EVENT_ATTR(_name), \
+		.type = CCN_TYPE_CYCLES }
+
+
+static ssize_t arm_ccn_pmu_event_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	struct arm_ccn_pmu_event *event = container_of(attr,
+			struct arm_ccn_pmu_event, attr);
+	ssize_t res;
+
+	res = snprintf(buf, PAGE_SIZE, "type=0x%x", event->type);
+	if (event->event)
+		res += snprintf(buf + res, PAGE_SIZE - res, ",event=0x%x",
+				event->event);
+	if (event->def)
+		res += snprintf(buf + res, PAGE_SIZE - res, ",%s",
+				event->def);
+	if (event->mask)
+		res += snprintf(buf + res, PAGE_SIZE - res, ",mask=0x%x",
+				event->mask);
+	res += snprintf(buf + res, PAGE_SIZE - res, "\n");
+
+	return res;
+}
+
+static umode_t arm_ccn_pmu_events_is_visible(struct kobject *kobj,
+		struct attribute *attr, int index)
+{
+	struct device *dev = kobj_to_dev(kobj);
+	struct arm_ccn *ccn = pmu_to_arm_ccn(dev_get_drvdata(dev));
+	struct device_attribute *dev_attr = container_of(attr,
+			struct device_attribute, attr);
+	struct arm_ccn_pmu_event *event = container_of(dev_attr,
+			struct arm_ccn_pmu_event, attr);
+
+	if (event->type == CCN_TYPE_SBAS && !ccn->sbas_present)
+		return 0;
+	if (event->type == CCN_TYPE_SBSX && !ccn->sbsx_present)
+		return 0;
+
+	return attr->mode;
+}
+
+static struct arm_ccn_pmu_event arm_ccn_pmu_events[] = {
+	CCN_EVENT_MN(eobarrier, "dir=0,vc=0,cmp_h=0x1c00", CCN_IDX_MASK_OPCODE),
+	CCN_EVENT_MN(ecbarrier, "dir=0,vc=0,cmp_h=0x1e00", CCN_IDX_MASK_OPCODE),
+	CCN_EVENT_MN(dvmop, "dir=0,vc=0,cmp_h=0x2800", CCN_IDX_MASK_OPCODE),
+	CCN_EVENT_HNI(txdatflits, "dir=1,vc=3", CCN_IDX_MASK_ANY),
+	CCN_EVENT_HNI(rxdatflits, "dir=0,vc=3", CCN_IDX_MASK_ANY),
+	CCN_EVENT_HNI(txreqflits, "dir=1,vc=0", CCN_IDX_MASK_ANY),
+	CCN_EVENT_HNI(rxreqflits, "dir=0,vc=0", CCN_IDX_MASK_ANY),
+	CCN_EVENT_HNI(rxreqflits_order, "dir=0,vc=0,cmp_h=0x8000",
+			CCN_IDX_MASK_ORDER),
+	CCN_EVENT_SBSX(txdatflits, "dir=1,vc=3", CCN_IDX_MASK_ANY),
+	CCN_EVENT_SBSX(rxdatflits, "dir=0,vc=3", CCN_IDX_MASK_ANY),
+	CCN_EVENT_SBSX(txreqflits, "dir=1,vc=0", CCN_IDX_MASK_ANY),
+	CCN_EVENT_SBSX(rxreqflits, "dir=0,vc=0", CCN_IDX_MASK_ANY),
+	CCN_EVENT_SBSX(rxreqflits_order, "dir=0,vc=0,cmp_h=0x8000",
+			CCN_IDX_MASK_ORDER),
+	CCN_EVENT_HNF(cache_miss, 0x1),
+	CCN_EVENT_HNF(l3_sf_cache_access, 0x02),
+	CCN_EVENT_HNF(cache_fill, 0x3),
+	CCN_EVENT_HNF(pocq_retry, 0x4),
+	CCN_EVENT_HNF(pocq_reqs_recvd, 0x5),
+	CCN_EVENT_HNF(sf_hit, 0x6),
+	CCN_EVENT_HNF(sf_evictions, 0x7),
+	CCN_EVENT_HNF(snoops_sent, 0x8),
+	CCN_EVENT_HNF(snoops_broadcast, 0x9),
+	CCN_EVENT_HNF(l3_eviction, 0xa),
+	CCN_EVENT_HNF(l3_fill_invalid_way, 0xb),
+	CCN_EVENT_HNF(mc_retries, 0xc),
+	CCN_EVENT_HNF(mc_reqs, 0xd),
+	CCN_EVENT_HNF(qos_hh_retry, 0xe),
+	CCN_EVENT_RNI(rdata_beats_p0, 0x1),
+	CCN_EVENT_RNI(rdata_beats_p1, 0x2),
+	CCN_EVENT_RNI(rdata_beats_p2, 0x3),
+	CCN_EVENT_RNI(rxdat_flits, 0x4),
+	CCN_EVENT_RNI(txdat_flits, 0x5),
+	CCN_EVENT_RNI(txreq_flits, 0x6),
+	CCN_EVENT_RNI(txreq_flits_retried, 0x7),
+	CCN_EVENT_RNI(rrt_full, 0x8),
+	CCN_EVENT_RNI(wrt_full, 0x9),
+	CCN_EVENT_RNI(txreq_flits_replayed, 0xa),
+	CCN_EVENT_XP(upload_starvation, 0x1),
+	CCN_EVENT_XP(download_starvation, 0x2),
+	CCN_EVENT_XP(respin, 0x3),
+	CCN_EVENT_XP(valid_flit, 0x4),
+	CCN_EVENT_XP(watchpoint, CCN_EVENT_WATCHPOINT),
+	CCN_EVENT_SBAS(rdata_beats_p0, 0x1),
+	CCN_EVENT_SBAS(rxdat_flits, 0x4),
+	CCN_EVENT_SBAS(txdat_flits, 0x5),
+	CCN_EVENT_SBAS(txreq_flits, 0x6),
+	CCN_EVENT_SBAS(txreq_flits_retried, 0x7),
+	CCN_EVENT_SBAS(rrt_full, 0x8),
+	CCN_EVENT_SBAS(wrt_full, 0x9),
+	CCN_EVENT_SBAS(txreq_flits_replayed, 0xa),
+	CCN_EVENT_CYCLES(cycles),
+};
+
+/* Populated in arm_ccn_init() */
+static struct attribute
+		*arm_ccn_pmu_events_attrs[ARRAY_SIZE(arm_ccn_pmu_events) + 1];
+
+static struct attribute_group arm_ccn_pmu_events_attr_group = {
+	.name = "events",
+	.is_visible = arm_ccn_pmu_events_is_visible,
+	.attrs = arm_ccn_pmu_events_attrs,
+};
+
+
+static u64 *arm_ccn_pmu_get_cmp_mask(struct arm_ccn *ccn, const char *name)
+{
+	unsigned long i;
+
+	if (WARN_ON(!name || !name[0] || !isxdigit(name[0]) || !name[1]))
+		return NULL;
+	i = isdigit(name[0]) ? name[0] - '0' : 0xa + tolower(name[0]) - 'a';
+
+	switch (name[1]) {
+	case 'l':
+		return &ccn->dt.cmp_mask[i].l;
+	case 'h':
+		return &ccn->dt.cmp_mask[i].h;
+	default:
+		return NULL;
+	}
+}
+
+static ssize_t arm_ccn_pmu_cmp_mask_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	struct arm_ccn *ccn = pmu_to_arm_ccn(dev_get_drvdata(dev));
+	u64 *mask = arm_ccn_pmu_get_cmp_mask(ccn, attr->attr.name);
+
+	return mask ? snprintf(buf, PAGE_SIZE, "0x%016llx\n", *mask) : -EINVAL;
+}
+
+static ssize_t arm_ccn_pmu_cmp_mask_store(struct device *dev,
+		struct device_attribute *attr, const char *buf, size_t count)
+{
+	struct arm_ccn *ccn = pmu_to_arm_ccn(dev_get_drvdata(dev));
+	u64 *mask = arm_ccn_pmu_get_cmp_mask(ccn, attr->attr.name);
+	int err = -EINVAL;
+
+	if (mask)
+		err = kstrtoull(buf, 0, mask);
+
+	return err ? err : count;
+}
+
+#define CCN_CMP_MASK_ATTR(_name) \
+	struct device_attribute arm_ccn_pmu_cmp_mask_attr_##_name = \
+			__ATTR(_name, S_IRUGO | S_IWUSR, \
+			arm_ccn_pmu_cmp_mask_show, arm_ccn_pmu_cmp_mask_store)
+
+#define CCN_CMP_MASK_ATTR_RO(_name) \
+	struct device_attribute arm_ccn_pmu_cmp_mask_attr_##_name = \
+			__ATTR(_name, S_IRUGO, arm_ccn_pmu_cmp_mask_show, NULL)
+
+static CCN_CMP_MASK_ATTR(0l);
+static CCN_CMP_MASK_ATTR(0h);
+static CCN_CMP_MASK_ATTR(1l);
+static CCN_CMP_MASK_ATTR(1h);
+static CCN_CMP_MASK_ATTR(2l);
+static CCN_CMP_MASK_ATTR(2h);
+static CCN_CMP_MASK_ATTR(3l);
+static CCN_CMP_MASK_ATTR(3h);
+static CCN_CMP_MASK_ATTR(4l);
+static CCN_CMP_MASK_ATTR(4h);
+static CCN_CMP_MASK_ATTR(5l);
+static CCN_CMP_MASK_ATTR(5h);
+static CCN_CMP_MASK_ATTR(6l);
+static CCN_CMP_MASK_ATTR(6h);
+static CCN_CMP_MASK_ATTR(7l);
+static CCN_CMP_MASK_ATTR(7h);
+static CCN_CMP_MASK_ATTR_RO(8l);
+static CCN_CMP_MASK_ATTR_RO(8h);
+static CCN_CMP_MASK_ATTR_RO(9l);
+static CCN_CMP_MASK_ATTR_RO(9h);
+static CCN_CMP_MASK_ATTR_RO(al);
+static CCN_CMP_MASK_ATTR_RO(ah);
+static CCN_CMP_MASK_ATTR_RO(bl);
+static CCN_CMP_MASK_ATTR_RO(bh);
+
+static struct attribute *arm_ccn_pmu_cmp_mask_attrs[] = {
+	&arm_ccn_pmu_cmp_mask_attr_0l.attr, &arm_ccn_pmu_cmp_mask_attr_0h.attr,
+	&arm_ccn_pmu_cmp_mask_attr_1l.attr, &arm_ccn_pmu_cmp_mask_attr_1h.attr,
+	&arm_ccn_pmu_cmp_mask_attr_2l.attr, &arm_ccn_pmu_cmp_mask_attr_2h.attr,
+	&arm_ccn_pmu_cmp_mask_attr_3l.attr, &arm_ccn_pmu_cmp_mask_attr_3h.attr,
+	&arm_ccn_pmu_cmp_mask_attr_4l.attr, &arm_ccn_pmu_cmp_mask_attr_4h.attr,
+	&arm_ccn_pmu_cmp_mask_attr_5l.attr, &arm_ccn_pmu_cmp_mask_attr_5h.attr,
+	&arm_ccn_pmu_cmp_mask_attr_6l.attr, &arm_ccn_pmu_cmp_mask_attr_6h.attr,
+	&arm_ccn_pmu_cmp_mask_attr_7l.attr,
&arm_ccn_pmu_cmp_mask_attr_7h.attr, 511 + &arm_ccn_pmu_cmp_mask_attr_8l.attr, &arm_ccn_pmu_cmp_mask_attr_8h.attr, 512 + &arm_ccn_pmu_cmp_mask_attr_9l.attr, &arm_ccn_pmu_cmp_mask_attr_9h.attr, 513 + &arm_ccn_pmu_cmp_mask_attr_al.attr, &arm_ccn_pmu_cmp_mask_attr_ah.attr, 514 + &arm_ccn_pmu_cmp_mask_attr_bl.attr, &arm_ccn_pmu_cmp_mask_attr_bh.attr, 515 + NULL 516 + }; 517 + 518 + static struct attribute_group arm_ccn_pmu_cmp_mask_attr_group = { 519 + .name = "cmp_mask", 520 + .attrs = arm_ccn_pmu_cmp_mask_attrs, 521 + }; 522 + 523 + 524 + /* 525 + * Default poll period is 10ms, which is way over the top anyway, 526 + * as in the worst case scenario (an event every cycle), with 1GHz 527 + * clocked bus, the smallest, 32 bit counter will overflow in 528 + * more than 4s. 529 + */ 530 + static unsigned int arm_ccn_pmu_poll_period_us = 10000; 531 + module_param_named(pmu_poll_period_us, arm_ccn_pmu_poll_period_us, uint, 532 + S_IRUGO | S_IWUSR); 533 + 534 + static ktime_t arm_ccn_pmu_timer_period(void) 535 + { 536 + return ns_to_ktime((u64)arm_ccn_pmu_poll_period_us * 1000); 537 + } 538 + 539 + 540 + static const struct attribute_group *arm_ccn_pmu_attr_groups[] = { 541 + &arm_ccn_pmu_events_attr_group, 542 + &arm_ccn_pmu_format_attr_group, 543 + &arm_ccn_pmu_cmp_mask_attr_group, 544 + NULL 545 + }; 546 + 547 + 548 + static int arm_ccn_pmu_alloc_bit(unsigned long *bitmap, unsigned long size) 549 + { 550 + int bit; 551 + 552 + do { 553 + bit = find_first_zero_bit(bitmap, size); 554 + if (bit >= size) 555 + return -EAGAIN; 556 + } while (test_and_set_bit(bit, bitmap)); 557 + 558 + return bit; 559 + } 560 + 561 + /* All RN-I and RN-D nodes have identical PMUs */ 562 + static int arm_ccn_pmu_type_eq(u32 a, u32 b) 563 + { 564 + if (a == b) 565 + return 1; 566 + 567 + switch (a) { 568 + case CCN_TYPE_RNI_1P: 569 + case CCN_TYPE_RNI_2P: 570 + case CCN_TYPE_RNI_3P: 571 + case CCN_TYPE_RND_1P: 572 + case CCN_TYPE_RND_2P: 573 + case CCN_TYPE_RND_3P: 574 + switch (b) { 575 + case 
CCN_TYPE_RNI_1P: 576 + case CCN_TYPE_RNI_2P: 577 + case CCN_TYPE_RNI_3P: 578 + case CCN_TYPE_RND_1P: 579 + case CCN_TYPE_RND_2P: 580 + case CCN_TYPE_RND_3P: 581 + return 1; 582 + } 583 + break; 584 + } 585 + 586 + return 0; 587 + } 588 + 589 + static int arm_ccn_pmu_event_init(struct perf_event *event) 590 + { 591 + struct arm_ccn *ccn; 592 + struct hw_perf_event *hw = &event->hw; 593 + u32 node_xp, type, event_id; 594 + int valid, bit; 595 + struct arm_ccn_component *source; 596 + int i; 597 + 598 + if (event->attr.type != event->pmu->type) 599 + return -ENOENT; 600 + 601 + ccn = pmu_to_arm_ccn(event->pmu); 602 + 603 + if (hw->sample_period) { 604 + dev_warn(ccn->dev, "Sampling not supported!\n"); 605 + return -EOPNOTSUPP; 606 + } 607 + 608 + if (has_branch_stack(event) || event->attr.exclude_user || 609 + event->attr.exclude_kernel || event->attr.exclude_hv || 610 + event->attr.exclude_idle) { 611 + dev_warn(ccn->dev, "Can't exclude execution levels!\n"); 612 + return -EOPNOTSUPP; 613 + } 614 + 615 + if (event->cpu < 0) { 616 + dev_warn(ccn->dev, "Can't provide per-task data!\n"); 617 + return -EOPNOTSUPP; 618 + } 619 + 620 + node_xp = CCN_CONFIG_NODE(event->attr.config); 621 + type = CCN_CONFIG_TYPE(event->attr.config); 622 + event_id = CCN_CONFIG_EVENT(event->attr.config); 623 + 624 + /* Validate node/xp vs topology */ 625 + switch (type) { 626 + case CCN_TYPE_XP: 627 + if (node_xp >= ccn->num_xps) { 628 + dev_warn(ccn->dev, "Invalid XP ID %d!\n", node_xp); 629 + return -EINVAL; 630 + } 631 + break; 632 + case CCN_TYPE_CYCLES: 633 + break; 634 + default: 635 + if (node_xp >= ccn->num_nodes) { 636 + dev_warn(ccn->dev, "Invalid node ID %d!\n", node_xp); 637 + return -EINVAL; 638 + } 639 + if (!arm_ccn_pmu_type_eq(type, ccn->node[node_xp].type)) { 640 + dev_warn(ccn->dev, "Invalid type 0x%x for node %d!\n", 641 + type, node_xp); 642 + return -EINVAL; 643 + } 644 + break; 645 + } 646 + 647 + /* Validate event ID vs available for the type */ 648 + for (i = 0, valid 
= 0; i < ARRAY_SIZE(arm_ccn_pmu_events) && !valid;
			i++) {
		struct arm_ccn_pmu_event *e = &arm_ccn_pmu_events[i];
		u32 port = CCN_CONFIG_PORT(event->attr.config);
		u32 vc = CCN_CONFIG_VC(event->attr.config);

		if (!arm_ccn_pmu_type_eq(type, e->type))
			continue;
		if (event_id != e->event)
			continue;
		if (e->num_ports && port >= e->num_ports) {
			dev_warn(ccn->dev, "Invalid port %d for node/XP %d!\n",
					port, node_xp);
			return -EINVAL;
		}
		if (e->num_vcs && vc >= e->num_vcs) {
			dev_warn(ccn->dev, "Invalid vc %d for node/XP %d!\n",
					vc, node_xp);
			return -EINVAL;
		}
		valid = 1;
	}
	if (!valid) {
		dev_warn(ccn->dev, "Invalid event 0x%x for node/XP %d!\n",
				event_id, node_xp);
		return -EINVAL;
	}

	/* Watchpoint-based event for a node is actually set on XP */
	if (event_id == CCN_EVENT_WATCHPOINT && type != CCN_TYPE_XP) {
		u32 port;

		type = CCN_TYPE_XP;
		port = arm_ccn_node_to_xp_port(node_xp);
		node_xp = arm_ccn_node_to_xp(node_xp);

		arm_ccn_pmu_config_set(&event->attr.config,
				node_xp, type, port);
	}

	/* Allocate the cycle counter */
	if (type == CCN_TYPE_CYCLES) {
		if (test_and_set_bit(CCN_IDX_PMU_CYCLE_COUNTER,
				ccn->dt.pmu_counters_mask))
			return -EAGAIN;

		hw->idx = CCN_IDX_PMU_CYCLE_COUNTER;
		ccn->dt.pmu_counters[CCN_IDX_PMU_CYCLE_COUNTER].event = event;

		return 0;
	}

	/* Allocate an event counter */
	hw->idx = arm_ccn_pmu_alloc_bit(ccn->dt.pmu_counters_mask,
			CCN_NUM_PMU_EVENT_COUNTERS);
	if (hw->idx < 0) {
		dev_warn(ccn->dev, "No more counters available!\n");
		return -EAGAIN;
	}

	if (type == CCN_TYPE_XP)
		source = &ccn->xp[node_xp];
	else
		source = &ccn->node[node_xp];
	ccn->dt.pmu_counters[hw->idx].source = source;

	/* Allocate
an event source or a watchpoint */
	if (type == CCN_TYPE_XP && event_id == CCN_EVENT_WATCHPOINT)
		bit = arm_ccn_pmu_alloc_bit(source->xp.dt_cmp_mask,
				CCN_NUM_XP_WATCHPOINTS);
	else
		bit = arm_ccn_pmu_alloc_bit(source->pmu_events_mask,
				CCN_NUM_PMU_EVENTS);
	if (bit < 0) {
		dev_warn(ccn->dev, "No more event sources/watchpoints on node/XP %d!\n",
				node_xp);
		clear_bit(hw->idx, ccn->dt.pmu_counters_mask);
		return -EAGAIN;
	}
	hw->config_base = bit;

	ccn->dt.pmu_counters[hw->idx].event = event;

	return 0;
}

static void arm_ccn_pmu_event_free(struct perf_event *event)
{
	struct arm_ccn *ccn = pmu_to_arm_ccn(event->pmu);
	struct hw_perf_event *hw = &event->hw;

	if (hw->idx == CCN_IDX_PMU_CYCLE_COUNTER) {
		clear_bit(CCN_IDX_PMU_CYCLE_COUNTER, ccn->dt.pmu_counters_mask);
	} else {
		struct arm_ccn_component *source =
				ccn->dt.pmu_counters[hw->idx].source;

		if (CCN_CONFIG_TYPE(event->attr.config) == CCN_TYPE_XP &&
				CCN_CONFIG_EVENT(event->attr.config) ==
				CCN_EVENT_WATCHPOINT)
			clear_bit(hw->config_base, source->xp.dt_cmp_mask);
		else
			clear_bit(hw->config_base, source->pmu_events_mask);
		clear_bit(hw->idx, ccn->dt.pmu_counters_mask);
	}

	ccn->dt.pmu_counters[hw->idx].source = NULL;
	ccn->dt.pmu_counters[hw->idx].event = NULL;
}

static u64 arm_ccn_pmu_read_counter(struct arm_ccn *ccn, int idx)
{
	u64 res;

	if (idx == CCN_IDX_PMU_CYCLE_COUNTER) {
#ifdef readq
		res = readq(ccn->dt.base + CCN_DT_PMCCNTR);
#else
		/* 40 bit counter, can do snapshot and read in two parts */
		writel(0x1, ccn->dt.base + CCN_DT_PMSR_REQ);
		while (!(readl(ccn->dt.base + CCN_DT_PMSR) & 0x1))
			;
		writel(0x1, ccn->dt.base + CCN_DT_PMSR_CLR);
		res = readl(ccn->dt.base + CCN_DT_PMCCNTRSR + 4) & 0xff;
		res <<= 32;
		res |= readl(ccn->dt.base + CCN_DT_PMCCNTRSR);
#endif
	} else {
		res = readl(ccn->dt.base + CCN_DT_PMEVCNT(idx));
	}

	return res;
}

static void arm_ccn_pmu_event_update(struct perf_event *event)
{
	struct arm_ccn *ccn = pmu_to_arm_ccn(event->pmu);
	struct hw_perf_event *hw = &event->hw;
	u64 prev_count, new_count, mask;

	do {
		prev_count = local64_read(&hw->prev_count);
		new_count = arm_ccn_pmu_read_counter(ccn, hw->idx);
	} while (local64_xchg(&hw->prev_count, new_count) != prev_count);

	mask = (1LLU << (hw->idx == CCN_IDX_PMU_CYCLE_COUNTER ? 40 : 32)) - 1;

	local64_add((new_count - prev_count) & mask, &event->count);
}

static void arm_ccn_pmu_xp_dt_config(struct perf_event *event, int enable)
{
	struct arm_ccn *ccn = pmu_to_arm_ccn(event->pmu);
	struct hw_perf_event *hw = &event->hw;
	struct arm_ccn_component *xp;
	u32 val, dt_cfg;

	if (CCN_CONFIG_TYPE(event->attr.config) == CCN_TYPE_XP)
		xp = &ccn->xp[CCN_CONFIG_XP(event->attr.config)];
	else
		xp = &ccn->xp[arm_ccn_node_to_xp(
				CCN_CONFIG_NODE(event->attr.config))];

	if (enable)
		dt_cfg = hw->event_base;
	else
		dt_cfg = CCN_XP_DT_CONFIG__DT_CFG__PASS_THROUGH;

	spin_lock(&ccn->dt.config_lock);

	val = readl(xp->base + CCN_XP_DT_CONFIG);
	val &= ~(CCN_XP_DT_CONFIG__DT_CFG__MASK <<
			CCN_XP_DT_CONFIG__DT_CFG__SHIFT(hw->idx));
	val |= dt_cfg << CCN_XP_DT_CONFIG__DT_CFG__SHIFT(hw->idx);
	writel(val, xp->base + CCN_XP_DT_CONFIG);

	spin_unlock(&ccn->dt.config_lock);
}

static void arm_ccn_pmu_event_start(struct perf_event *event, int flags)
{
	struct arm_ccn *ccn = pmu_to_arm_ccn(event->pmu);
	struct hw_perf_event *hw = &event->hw;

	local64_set(&event->hw.prev_count,
			arm_ccn_pmu_read_counter(ccn, hw->idx));
	hw->state = 0;

	if (!ccn->irq_used)
		hrtimer_start(&ccn->dt.hrtimer, arm_ccn_pmu_timer_period(),
				HRTIMER_MODE_REL);

	/* Set the DT bus input, engaging the counter */
	arm_ccn_pmu_xp_dt_config(event, 1);
}

static void arm_ccn_pmu_event_stop(struct perf_event *event, int flags)
{
	struct arm_ccn *ccn = pmu_to_arm_ccn(event->pmu);
	struct hw_perf_event *hw = &event->hw;
	u64 timeout;

	/* Disable counting, setting the DT bus to pass-through mode */
	arm_ccn_pmu_xp_dt_config(event, 0);

	if (!ccn->irq_used)
		hrtimer_cancel(&ccn->dt.hrtimer);

	/* Let the DT bus drain */
	timeout = arm_ccn_pmu_read_counter(ccn, CCN_IDX_PMU_CYCLE_COUNTER) +
			ccn->num_xps;
	while (arm_ccn_pmu_read_counter(ccn, CCN_IDX_PMU_CYCLE_COUNTER) <
			timeout)
		cpu_relax();

	if (flags & PERF_EF_UPDATE)
		arm_ccn_pmu_event_update(event);

	hw->state |= PERF_HES_STOPPED;
}

static void arm_ccn_pmu_xp_watchpoint_config(struct perf_event *event)
{
	struct arm_ccn *ccn = pmu_to_arm_ccn(event->pmu);
	struct hw_perf_event *hw = &event->hw;
	struct arm_ccn_component *source =
			ccn->dt.pmu_counters[hw->idx].source;
	unsigned long wp = hw->config_base;
	u32 val;
	u64 cmp_l = event->attr.config1;
	u64 cmp_h = event->attr.config2;
	u64 mask_l = ccn->dt.cmp_mask[CCN_CONFIG_MASK(event->attr.config)].l;
	u64 mask_h = ccn->dt.cmp_mask[CCN_CONFIG_MASK(event->attr.config)].h;

	hw->event_base = CCN_XP_DT_CONFIG__DT_CFG__WATCHPOINT(wp);

	/* Direction (RX/TX), device (port) & virtual channel */
	val = readl(source->base + CCN_XP_DT_INTERFACE_SEL);
	val &= ~(CCN_XP_DT_INTERFACE_SEL__DT_IO_SEL__MASK <<
			CCN_XP_DT_INTERFACE_SEL__DT_IO_SEL__SHIFT(wp));
	val |= CCN_CONFIG_DIR(event->attr.config) <<
			CCN_XP_DT_INTERFACE_SEL__DT_IO_SEL__SHIFT(wp);
	val &= ~(CCN_XP_DT_INTERFACE_SEL__DT_DEV_SEL__MASK <<
			CCN_XP_DT_INTERFACE_SEL__DT_DEV_SEL__SHIFT(wp));
	val |= CCN_CONFIG_PORT(event->attr.config) <<
			CCN_XP_DT_INTERFACE_SEL__DT_DEV_SEL__SHIFT(wp);
	val &= ~(CCN_XP_DT_INTERFACE_SEL__DT_VC_SEL__MASK <<
			CCN_XP_DT_INTERFACE_SEL__DT_VC_SEL__SHIFT(wp));
	val |= CCN_CONFIG_VC(event->attr.config) <<
			CCN_XP_DT_INTERFACE_SEL__DT_VC_SEL__SHIFT(wp);
	writel(val, source->base + CCN_XP_DT_INTERFACE_SEL);

	/* Comparison values */
	writel(cmp_l & 0xffffffff, source->base + CCN_XP_DT_CMP_VAL_L(wp));
	writel((cmp_l >> 32) & 0xefffffff,
			source->base + CCN_XP_DT_CMP_VAL_L(wp) + 4);
	writel(cmp_h & 0xffffffff, source->base + CCN_XP_DT_CMP_VAL_H(wp));
	writel((cmp_h >> 32) & 0x0fffffff,
			source->base + CCN_XP_DT_CMP_VAL_H(wp) + 4);

	/* Mask */
	writel(mask_l & 0xffffffff, source->base + CCN_XP_DT_CMP_MASK_L(wp));
	writel((mask_l >> 32) & 0xefffffff,
			source->base + CCN_XP_DT_CMP_MASK_L(wp) + 4);
	writel(mask_h & 0xffffffff, source->base + CCN_XP_DT_CMP_MASK_H(wp));
	writel((mask_h >> 32) & 0x0fffffff,
			source->base + CCN_XP_DT_CMP_MASK_H(wp) + 4);
}

static void arm_ccn_pmu_xp_event_config(struct perf_event *event)
{
	struct arm_ccn *ccn = pmu_to_arm_ccn(event->pmu);
	struct hw_perf_event *hw = &event->hw;
	struct arm_ccn_component *source =
			ccn->dt.pmu_counters[hw->idx].source;
	u32 val, id;

	hw->event_base = CCN_XP_DT_CONFIG__DT_CFG__XP_PMU_EVENT(hw->config_base);

	id = (CCN_CONFIG_VC(event->attr.config) << 4) |
			(CCN_CONFIG_PORT(event->attr.config) << 3) |
			(CCN_CONFIG_EVENT(event->attr.config) << 0);

	val = readl(source->base + CCN_XP_PMU_EVENT_SEL);
	val &= ~(CCN_XP_PMU_EVENT_SEL__ID__MASK <<
			CCN_XP_PMU_EVENT_SEL__ID__SHIFT(hw->config_base));
	val |= id << CCN_XP_PMU_EVENT_SEL__ID__SHIFT(hw->config_base);
	writel(val, source->base + CCN_XP_PMU_EVENT_SEL);
}

static void arm_ccn_pmu_node_event_config(struct perf_event *event)
{
	struct arm_ccn *ccn = pmu_to_arm_ccn(event->pmu);
	struct hw_perf_event *hw = &event->hw;
	struct arm_ccn_component *source =
			ccn->dt.pmu_counters[hw->idx].source;
	u32 type = CCN_CONFIG_TYPE(event->attr.config);
	u32 val, port;

	port = arm_ccn_node_to_xp_port(CCN_CONFIG_NODE(event->attr.config));
	hw->event_base = CCN_XP_DT_CONFIG__DT_CFG__DEVICE_PMU_EVENT(port,
			hw->config_base);

	/* These *_event_sel regs should be identical, but let's make sure... */
	BUILD_BUG_ON(CCN_HNF_PMU_EVENT_SEL != CCN_SBAS_PMU_EVENT_SEL);
	BUILD_BUG_ON(CCN_SBAS_PMU_EVENT_SEL != CCN_RNI_PMU_EVENT_SEL);
	BUILD_BUG_ON(CCN_HNF_PMU_EVENT_SEL__ID__SHIFT(1) !=
			CCN_SBAS_PMU_EVENT_SEL__ID__SHIFT(1));
	BUILD_BUG_ON(CCN_SBAS_PMU_EVENT_SEL__ID__SHIFT(1) !=
			CCN_RNI_PMU_EVENT_SEL__ID__SHIFT(1));
	BUILD_BUG_ON(CCN_HNF_PMU_EVENT_SEL__ID__MASK !=
			CCN_SBAS_PMU_EVENT_SEL__ID__MASK);
	BUILD_BUG_ON(CCN_SBAS_PMU_EVENT_SEL__ID__MASK !=
			CCN_RNI_PMU_EVENT_SEL__ID__MASK);
	if (WARN_ON(type != CCN_TYPE_HNF && type != CCN_TYPE_SBAS &&
			!arm_ccn_pmu_type_eq(type, CCN_TYPE_RNI_3P)))
		return;

	/* Set the event id for the pre-allocated counter */
	val = readl(source->base + CCN_HNF_PMU_EVENT_SEL);
	val &= ~(CCN_HNF_PMU_EVENT_SEL__ID__MASK <<
			CCN_HNF_PMU_EVENT_SEL__ID__SHIFT(hw->config_base));
	val |= CCN_CONFIG_EVENT(event->attr.config) <<
			CCN_HNF_PMU_EVENT_SEL__ID__SHIFT(hw->config_base);
	writel(val, source->base + CCN_HNF_PMU_EVENT_SEL);
}

static void arm_ccn_pmu_event_config(struct perf_event *event)
{
	struct arm_ccn *ccn = pmu_to_arm_ccn(event->pmu);
	struct hw_perf_event *hw = &event->hw;
	u32 xp, offset, val;

	/* Cycle counter requires no setup */
	if (hw->idx == CCN_IDX_PMU_CYCLE_COUNTER)
		return;

	if (CCN_CONFIG_TYPE(event->attr.config) == CCN_TYPE_XP)
		xp = CCN_CONFIG_XP(event->attr.config);
	else
		xp = arm_ccn_node_to_xp(CCN_CONFIG_NODE(event->attr.config));

	spin_lock(&ccn->dt.config_lock);

	/* Set the DT bus "distance" register */
	offset = (hw->idx / 4) * 4;
	val = readl(ccn->dt.base + CCN_DT_ACTIVE_DSM + offset);
	val &= ~(CCN_DT_ACTIVE_DSM__DSM_ID__MASK <<
			CCN_DT_ACTIVE_DSM__DSM_ID__SHIFT(hw->idx % 4));
	val |= xp << CCN_DT_ACTIVE_DSM__DSM_ID__SHIFT(hw->idx % 4);
	writel(val, ccn->dt.base + CCN_DT_ACTIVE_DSM + offset);

	if (CCN_CONFIG_TYPE(event->attr.config) == CCN_TYPE_XP) {
		if (CCN_CONFIG_EVENT(event->attr.config) ==
				CCN_EVENT_WATCHPOINT)
			arm_ccn_pmu_xp_watchpoint_config(event);
		else
			arm_ccn_pmu_xp_event_config(event);
	} else {
		arm_ccn_pmu_node_event_config(event);
	}

	spin_unlock(&ccn->dt.config_lock);
}

static int arm_ccn_pmu_event_add(struct perf_event *event, int flags)
{
	struct hw_perf_event *hw = &event->hw;

	arm_ccn_pmu_event_config(event);

	hw->state = PERF_HES_STOPPED;

	if (flags & PERF_EF_START)
		arm_ccn_pmu_event_start(event, PERF_EF_UPDATE);

	return 0;
}

static void arm_ccn_pmu_event_del(struct perf_event *event, int flags)
{
	arm_ccn_pmu_event_stop(event, PERF_EF_UPDATE);

	arm_ccn_pmu_event_free(event);
}

static void arm_ccn_pmu_event_read(struct perf_event *event)
{
	arm_ccn_pmu_event_update(event);
}

static irqreturn_t arm_ccn_pmu_overflow_handler(struct arm_ccn_dt *dt)
{
	u32 pmovsr = readl(dt->base + CCN_DT_PMOVSR);
	int idx;

	if (!pmovsr)
		return IRQ_NONE;

	writel(pmovsr, dt->base + CCN_DT_PMOVSR_CLR);

	BUILD_BUG_ON(CCN_IDX_PMU_CYCLE_COUNTER != CCN_NUM_PMU_EVENT_COUNTERS);

	for (idx = 0; idx < CCN_NUM_PMU_EVENT_COUNTERS + 1; idx++) {
		struct perf_event *event = dt->pmu_counters[idx].event;
		int overflowed = pmovsr & BIT(idx);

		WARN_ON_ONCE(overflowed && !event);

		if (!event || !overflowed)
			continue;

		arm_ccn_pmu_event_update(event);
	}

	return IRQ_HANDLED;
}

static enum hrtimer_restart arm_ccn_pmu_timer_handler(struct hrtimer *hrtimer)
{
	struct arm_ccn_dt *dt = container_of(hrtimer, struct arm_ccn_dt,
			hrtimer);
	unsigned long flags;

	local_irq_save(flags);
	arm_ccn_pmu_overflow_handler(dt);
	local_irq_restore(flags);

	hrtimer_forward_now(hrtimer, arm_ccn_pmu_timer_period());
	return HRTIMER_RESTART;
}


static DEFINE_IDA(arm_ccn_pmu_ida);

static int arm_ccn_pmu_init(struct arm_ccn *ccn)
{
	int i;
	char *name;

	/* Initialize DT subsystem */
	ccn->dt.base = ccn->base + CCN_REGION_SIZE;
	spin_lock_init(&ccn->dt.config_lock);
	writel(CCN_DT_CTL__DT_EN, ccn->dt.base + CCN_DT_CTL);
	writel(CCN_DT_PMCR__OVFL_INTR_EN | CCN_DT_PMCR__PMU_EN,
			ccn->dt.base + CCN_DT_PMCR);
	writel(0x1, ccn->dt.base + CCN_DT_PMSR_CLR);
	for (i = 0; i < ccn->num_xps; i++) {
		writel(0, ccn->xp[i].base + CCN_XP_DT_CONFIG);
		writel((CCN_XP_DT_CONTROL__WP_ARM_SEL__ALWAYS <<
				CCN_XP_DT_CONTROL__WP_ARM_SEL__SHIFT(0)) |
				(CCN_XP_DT_CONTROL__WP_ARM_SEL__ALWAYS <<
				CCN_XP_DT_CONTROL__WP_ARM_SEL__SHIFT(1)) |
				CCN_XP_DT_CONTROL__DT_ENABLE,
				ccn->xp[i].base + CCN_XP_DT_CONTROL);
	}
	ccn->dt.cmp_mask[CCN_IDX_MASK_ANY].l = ~0;
	ccn->dt.cmp_mask[CCN_IDX_MASK_ANY].h = ~0;
	ccn->dt.cmp_mask[CCN_IDX_MASK_EXACT].l = 0;
	ccn->dt.cmp_mask[CCN_IDX_MASK_EXACT].h = 0;
	ccn->dt.cmp_mask[CCN_IDX_MASK_ORDER].l = ~0;
	ccn->dt.cmp_mask[CCN_IDX_MASK_ORDER].h = ~(0x1 << 15);
	ccn->dt.cmp_mask[CCN_IDX_MASK_OPCODE].l = ~0;
	ccn->dt.cmp_mask[CCN_IDX_MASK_OPCODE].h = ~(0x1f << 9);

	/* Get a convenient /sys/event_source/devices/ name */
	ccn->dt.id = ida_simple_get(&arm_ccn_pmu_ida, 0, 0, GFP_KERNEL);
	if (ccn->dt.id == 0) {
		name = "ccn";
	} else {
		int len = snprintf(NULL, 0, "ccn_%d", ccn->dt.id);

		name = devm_kzalloc(ccn->dev, len + 1, GFP_KERNEL);
		snprintf(name, len + 1, "ccn_%d", ccn->dt.id);
	}

	/* Perf driver registration */
	ccn->dt.pmu = (struct pmu) {
		.attr_groups = arm_ccn_pmu_attr_groups,
		.task_ctx_nr = perf_invalid_context,
		.event_init = arm_ccn_pmu_event_init,
		.add = arm_ccn_pmu_event_add,
		.del = arm_ccn_pmu_event_del,
		.start = arm_ccn_pmu_event_start,
		.stop = arm_ccn_pmu_event_stop,
		.read = arm_ccn_pmu_event_read,
	};

	/* No overflow interrupt? Have to use a timer instead. */
	if (!ccn->irq_used) {
		dev_info(ccn->dev, "No access to interrupts, using timer.\n");
		hrtimer_init(&ccn->dt.hrtimer, CLOCK_MONOTONIC,
				HRTIMER_MODE_REL);
		ccn->dt.hrtimer.function = arm_ccn_pmu_timer_handler;
	}

	return perf_pmu_register(&ccn->dt.pmu, name, -1);
}

static void arm_ccn_pmu_cleanup(struct arm_ccn *ccn)
{
	int i;

	for (i = 0; i < ccn->num_xps; i++)
		writel(0, ccn->xp[i].base + CCN_XP_DT_CONTROL);
	writel(0, ccn->dt.base + CCN_DT_PMCR);
	perf_pmu_unregister(&ccn->dt.pmu);
	ida_simple_remove(&arm_ccn_pmu_ida, ccn->dt.id);
}


static int arm_ccn_for_each_valid_region(struct arm_ccn *ccn,
		int (*callback)(struct arm_ccn *ccn, int region,
		void __iomem *base, u32 type, u32 id))
{
	int region;

	for (region = 0; region < CCN_NUM_REGIONS; region++) {
		u32 val, type, id;
		void __iomem *base;
		int err;

		val = readl(ccn->base + CCN_MN_OLY_COMP_LIST_63_0 +
				4 * (region / 32));
		if (!(val & (1 << (region % 32))))
			continue;

		base = ccn->base + region * CCN_REGION_SIZE;
		val = readl(base + CCN_ALL_OLY_ID);
		type = (val >> CCN_ALL_OLY_ID__OLY_ID__SHIFT) &
				CCN_ALL_OLY_ID__OLY_ID__MASK;
		id = (val >> CCN_ALL_OLY_ID__NODE_ID__SHIFT) &
				CCN_ALL_OLY_ID__NODE_ID__MASK;

		err = callback(ccn, region, base, type, id);
		if (err)
			return err;
	}

	return 0;
}

static int arm_ccn_get_nodes_num(struct arm_ccn *ccn, int region,
		void __iomem *base, u32 type, u32 id)
{

	if (type == CCN_TYPE_XP && id >= ccn->num_xps)
		ccn->num_xps = id + 1;
	else if (id >= ccn->num_nodes)
		ccn->num_nodes = id + 1;

	return 0;
}

static int arm_ccn_init_nodes(struct arm_ccn *ccn, int region,
		void __iomem *base, u32 type, u32 id)
{
	struct arm_ccn_component *component;

	dev_dbg(ccn->dev, "Region %d: id=%u, type=0x%02x\n", region, id, type);

	switch (type) {
	case CCN_TYPE_MN:
	case CCN_TYPE_DT:
		return 0;
	case CCN_TYPE_XP:
		component = &ccn->xp[id];
		break;
	case CCN_TYPE_SBSX:
		ccn->sbsx_present = 1;
		component = &ccn->node[id];
		break;
	case CCN_TYPE_SBAS:
		ccn->sbas_present = 1;
		/* Fall-through */
	default:
		component = &ccn->node[id];
		break;
	}

	component->base = base;
	component->type = type;

	return 0;
}


static irqreturn_t arm_ccn_error_handler(struct arm_ccn *ccn,
		const u32 *err_sig_val)
{
	/* This should be really handled by firmware... */
	dev_err(ccn->dev, "Error reported in %08x%08x%08x%08x%08x%08x.\n",
			err_sig_val[5], err_sig_val[4], err_sig_val[3],
			err_sig_val[2], err_sig_val[1], err_sig_val[0]);
	dev_err(ccn->dev, "Disabling interrupt generation for all errors.\n");
	writel(CCN_MN_ERRINT_STATUS__ALL_ERRORS__DISABLE,
			ccn->base + CCN_MN_ERRINT_STATUS);

	return IRQ_HANDLED;
}


static irqreturn_t arm_ccn_irq_handler(int irq, void *dev_id)
{
	irqreturn_t res = IRQ_NONE;
	struct arm_ccn *ccn = dev_id;
	u32 err_sig_val[6];
	u32 err_or;
	int i;

	/* PMU overflow is a special case */
	err_or = err_sig_val[0] = readl(ccn->base + CCN_MN_ERR_SIG_VAL_63_0);
	if (err_or & CCN_MN_ERR_SIG_VAL_63_0__DT) {
		err_or &= ~CCN_MN_ERR_SIG_VAL_63_0__DT;
		res = arm_ccn_pmu_overflow_handler(&ccn->dt);
	}

	/* Have to read all err_sig_vals to clear them */
	for (i = 1; i < ARRAY_SIZE(err_sig_val); i++) {
		err_sig_val[i] = readl(ccn->base +
				CCN_MN_ERR_SIG_VAL_63_0 + i * 4);
		err_or |= err_sig_val[i];
	}
	if (err_or)
		res |= arm_ccn_error_handler(ccn, err_sig_val);

	if (res != IRQ_NONE)
		writel(CCN_MN_ERRINT_STATUS__INTREQ__DESSERT,
				ccn->base + CCN_MN_ERRINT_STATUS);

	return res;
}


static int arm_ccn_probe(struct platform_device *pdev)
{
	struct arm_ccn *ccn;
	struct resource *res;
	int err;

	ccn = devm_kzalloc(&pdev->dev, sizeof(*ccn), GFP_KERNEL);
	if (!ccn)
		return -ENOMEM;
	ccn->dev = &pdev->dev;
	platform_set_drvdata(pdev, ccn);

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	if (!res)
		return -EINVAL;

	if (!devm_request_mem_region(ccn->dev, res->start,
			resource_size(res), pdev->name))
		return -EBUSY;

	ccn->base = devm_ioremap(ccn->dev, res->start,
			resource_size(res));
	if (!ccn->base)
		return -EFAULT;

	res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
	if (!res)
		return -EINVAL;

	/* Check if we can use the interrupt */
	writel(CCN_MN_ERRINT_STATUS__PMU_EVENTS__DISABLE,
			ccn->base + CCN_MN_ERRINT_STATUS);
	if (readl(ccn->base + CCN_MN_ERRINT_STATUS) &
			CCN_MN_ERRINT_STATUS__PMU_EVENTS__DISABLED) {
		/* Can set 'disable' bits, so can acknowledge interrupts */
		writel(CCN_MN_ERRINT_STATUS__PMU_EVENTS__ENABLE,
				ccn->base + CCN_MN_ERRINT_STATUS);
		err = devm_request_irq(ccn->dev, res->start,
				arm_ccn_irq_handler, 0, dev_name(ccn->dev),
				ccn);
		if (err)
			return err;

		ccn->irq_used = 1;
	}


	/* Build topology */

	err = arm_ccn_for_each_valid_region(ccn, arm_ccn_get_nodes_num);
	if (err)
		return err;

	ccn->node = devm_kzalloc(ccn->dev, sizeof(*ccn->node) * ccn->num_nodes,
			GFP_KERNEL);
	ccn->xp = devm_kzalloc(ccn->dev, sizeof(*ccn->node) * ccn->num_xps,
			GFP_KERNEL);
	if (!ccn->node || !ccn->xp)
		return -ENOMEM;

	err = arm_ccn_for_each_valid_region(ccn, arm_ccn_init_nodes);
	if (err)
		return err;

	return arm_ccn_pmu_init(ccn);
}

static int arm_ccn_remove(struct platform_device *pdev)
{
	struct arm_ccn *ccn = platform_get_drvdata(pdev);

	arm_ccn_pmu_cleanup(ccn);

	return 0;
}

static const struct of_device_id arm_ccn_match[] = {
	{ .compatible = "arm,ccn-504", },
	{},
};

static struct platform_driver arm_ccn_driver = {
	.driver = {
		.name = "arm-ccn",
		.of_match_table = arm_ccn_match,
	},
	.probe = arm_ccn_probe,
	.remove = arm_ccn_remove,
};

static int __init arm_ccn_init(void)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(arm_ccn_pmu_events); i++)
		arm_ccn_pmu_events_attrs[i] = &arm_ccn_pmu_events[i].attr.attr;

	return platform_driver_register(&arm_ccn_driver);
}

static void __exit arm_ccn_exit(void)
{
	platform_driver_unregister(&arm_ccn_driver);
}

module_init(arm_ccn_init);
module_exit(arm_ccn_exit);

MODULE_AUTHOR("Pawel Moll <pawel.moll@arm.com>");
MODULE_LICENSE("GPL");
drivers/leds/Kconfig (-8)
···
 	  This option enables support for on-chip LED drivers found on Marvell
 	  Semiconductor 88PM8606 PMIC.

-config LEDS_ATMEL_PWM
-	tristate "LED Support using Atmel PWM outputs"
-	depends on LEDS_CLASS
-	depends on ATMEL_PWM
-	help
-	  This option enables support for LEDs driven using outputs
-	  of the dedicated PWM controller found on newer Atmel SOCs.
-
 config LEDS_LM3530
 	tristate "LCD Backlight driver for LM3530"
 	depends on LEDS_CLASS
drivers/leds/Makefile (-1)
···

 # LED Platform Drivers
 obj-$(CONFIG_LEDS_88PM860X)		+= leds-88pm860x.o
-obj-$(CONFIG_LEDS_ATMEL_PWM)		+= leds-atmel-pwm.o
 obj-$(CONFIG_LEDS_BD2802)		+= leds-bd2802.o
 obj-$(CONFIG_LEDS_LOCOMO)		+= leds-locomo.o
 obj-$(CONFIG_LEDS_LM3530)		+= leds-lm3530.o
drivers/leds/leds-atmel-pwm.c (-149)
··· 1 - #include <linux/kernel.h> 2 - #include <linux/platform_device.h> 3 - #include <linux/leds.h> 4 - #include <linux/io.h> 5 - #include <linux/atmel_pwm.h> 6 - #include <linux/slab.h> 7 - #include <linux/module.h> 8 - 9 - 10 - struct pwmled { 11 - struct led_classdev cdev; 12 - struct pwm_channel pwmc; 13 - struct gpio_led *desc; 14 - u32 mult; 15 - u8 active_low; 16 - }; 17 - 18 - 19 - /* 20 - * For simplicity, we use "brightness" as if it were a linear function 21 - * of PWM duty cycle. However, a logarithmic function of duty cycle is 22 - * probably a better match for perceived brightness: two is half as bright 23 - * as four, four is half as bright as eight, etc 24 - */ 25 - static void pwmled_brightness(struct led_classdev *cdev, enum led_brightness b) 26 - { 27 - struct pwmled *led; 28 - 29 - /* update the duty cycle for the *next* period */ 30 - led = container_of(cdev, struct pwmled, cdev); 31 - pwm_channel_writel(&led->pwmc, PWM_CUPD, led->mult * (unsigned) b); 32 - } 33 - 34 - /* 35 - * NOTE: we reuse the platform_data structure of GPIO leds, 36 - * but repurpose its "gpio" number as a PWM channel number. 
37 - */ 38 - static int pwmled_probe(struct platform_device *pdev) 39 - { 40 - const struct gpio_led_platform_data *pdata; 41 - struct pwmled *leds; 42 - int i; 43 - int status; 44 - 45 - pdata = dev_get_platdata(&pdev->dev); 46 - if (!pdata || pdata->num_leds < 1) 47 - return -ENODEV; 48 - 49 - leds = devm_kzalloc(&pdev->dev, pdata->num_leds * sizeof(*leds), 50 - GFP_KERNEL); 51 - if (!leds) 52 - return -ENOMEM; 53 - 54 - for (i = 0; i < pdata->num_leds; i++) { 55 - struct pwmled *led = leds + i; 56 - const struct gpio_led *dat = pdata->leds + i; 57 - u32 tmp; 58 - 59 - led->cdev.name = dat->name; 60 - led->cdev.brightness = LED_OFF; 61 - led->cdev.brightness_set = pwmled_brightness; 62 - led->cdev.default_trigger = dat->default_trigger; 63 - 64 - led->active_low = dat->active_low; 65 - 66 - status = pwm_channel_alloc(dat->gpio, &led->pwmc); 67 - if (status < 0) 68 - goto err; 69 - 70 - /* 71 - * Prescale clock by 2^x, so PWM counts in low MHz. 72 - * Start each cycle with the LED active, so increasing 73 - * the duty cycle gives us more time on (== brighter). 74 - */ 75 - tmp = 5; 76 - if (!led->active_low) 77 - tmp |= PWM_CPR_CPOL; 78 - pwm_channel_writel(&led->pwmc, PWM_CMR, tmp); 79 - 80 - /* 81 - * Pick a period so PWM cycles at 100+ Hz; and a multiplier 82 - * for scaling duty cycle: brightness * mult. 
83 - */ 84 - tmp = (led->pwmc.mck / (1 << 5)) / 100; 85 - tmp /= 255; 86 - led->mult = tmp; 87 - pwm_channel_writel(&led->pwmc, PWM_CDTY, 88 - led->cdev.brightness * 255); 89 - pwm_channel_writel(&led->pwmc, PWM_CPRD, 90 - LED_FULL * tmp); 91 - 92 - pwm_channel_enable(&led->pwmc); 93 - 94 - /* Hand it over to the LED framework */ 95 - status = led_classdev_register(&pdev->dev, &led->cdev); 96 - if (status < 0) { 97 - pwm_channel_free(&led->pwmc); 98 - goto err; 99 - } 100 - } 101 - 102 - platform_set_drvdata(pdev, leds); 103 - return 0; 104 - 105 - err: 106 - if (i > 0) { 107 - for (i = i - 1; i >= 0; i--) { 108 - led_classdev_unregister(&leds[i].cdev); 109 - pwm_channel_free(&leds[i].pwmc); 110 - } 111 - } 112 - 113 - return status; 114 - } 115 - 116 - static int pwmled_remove(struct platform_device *pdev) 117 - { 118 - const struct gpio_led_platform_data *pdata; 119 - struct pwmled *leds; 120 - unsigned i; 121 - 122 - pdata = dev_get_platdata(&pdev->dev); 123 - leds = platform_get_drvdata(pdev); 124 - 125 - for (i = 0; i < pdata->num_leds; i++) { 126 - struct pwmled *led = leds + i; 127 - 128 - led_classdev_unregister(&led->cdev); 129 - pwm_channel_free(&led->pwmc); 130 - } 131 - 132 - return 0; 133 - } 134 - 135 - static struct platform_driver pwmled_driver = { 136 - .driver = { 137 - .name = "leds-atmel-pwm", 138 - .owner = THIS_MODULE, 139 - }, 140 - /* REVISIT add suspend() and resume() methods */ 141 - .probe = pwmled_probe, 142 - .remove = pwmled_remove, 143 - }; 144 - 145 - module_platform_driver(pwmled_driver); 146 - 147 - MODULE_DESCRIPTION("Driver for LEDs with PWM-controlled brightness"); 148 - MODULE_LICENSE("GPL"); 149 - MODULE_ALIAS("platform:leds-atmel-pwm");
drivers/mailbox/Kconfig (+1, -18)
···
 	  Management Engine, primarily for cpufreq. Say Y here if you want
 	  to use the PL320 IPCM support.

-config OMAP_MBOX
-	tristate
-	help
-	  This option is selected by any OMAP architecture specific mailbox
-	  driver such as CONFIG_OMAP1_MBOX or CONFIG_OMAP2PLUS_MBOX. This
-	  enables the common OMAP mailbox framework code.
-
-config OMAP1_MBOX
-	tristate "OMAP1 Mailbox framework support"
-	depends on ARCH_OMAP1
-	select OMAP_MBOX
-	help
-	  Mailbox implementation for OMAP chips with hardware for
-	  interprocessor communication involving DSP in OMAP1. Say Y here
-	  if you want to use OMAP1 Mailbox framework support.
-
 config OMAP2PLUS_MBOX
 	tristate "OMAP2+ Mailbox framework support"
 	depends on ARCH_OMAP2PLUS
-	select OMAP_MBOX
 	help
 	  Mailbox implementation for OMAP family chips with hardware for
 	  interprocessor communication involving DSP, IVA1.0 and IVA2 in
···

 config OMAP_MBOX_KFIFO_SIZE
 	int "Mailbox kfifo default buffer size (bytes)"
-	depends on OMAP2PLUS_MBOX || OMAP1_MBOX
+	depends on OMAP2PLUS_MBOX
 	default 256
 	help
 	  Specify the default size of mailbox's kfifo buffers (bytes).
drivers/mailbox/Makefile (+1, -5)
···
 obj-$(CONFIG_PL320_MBOX)	+= pl320-ipc.o

-obj-$(CONFIG_OMAP_MBOX)		+= omap-mailbox.o
-obj-$(CONFIG_OMAP1_MBOX)	+= mailbox_omap1.o
-mailbox_omap1-objs		:= mailbox-omap1.o
-obj-$(CONFIG_OMAP2PLUS_MBOX)	+= mailbox_omap2.o
-mailbox_omap2-objs		:= mailbox-omap2.o
+obj-$(CONFIG_OMAP2PLUS_MBOX)	+= omap-mailbox.o
drivers/mailbox/mailbox-omap1.c (-203)
··· 1 - /* 2 - * Mailbox reservation modules for OMAP1 3 - * 4 - * Copyright (C) 2006-2009 Nokia Corporation 5 - * Written by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com> 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 - */ 11 - 12 - #include <linux/module.h> 13 - #include <linux/interrupt.h> 14 - #include <linux/platform_device.h> 15 - #include <linux/io.h> 16 - 17 - #include "omap-mbox.h" 18 - 19 - #define MAILBOX_ARM2DSP1 0x00 20 - #define MAILBOX_ARM2DSP1b 0x04 21 - #define MAILBOX_DSP2ARM1 0x08 22 - #define MAILBOX_DSP2ARM1b 0x0c 23 - #define MAILBOX_DSP2ARM2 0x10 24 - #define MAILBOX_DSP2ARM2b 0x14 25 - #define MAILBOX_ARM2DSP1_Flag 0x18 26 - #define MAILBOX_DSP2ARM1_Flag 0x1c 27 - #define MAILBOX_DSP2ARM2_Flag 0x20 28 - 29 - static void __iomem *mbox_base; 30 - 31 - struct omap_mbox1_fifo { 32 - unsigned long cmd; 33 - unsigned long data; 34 - unsigned long flag; 35 - }; 36 - 37 - struct omap_mbox1_priv { 38 - struct omap_mbox1_fifo tx_fifo; 39 - struct omap_mbox1_fifo rx_fifo; 40 - }; 41 - 42 - static inline int mbox_read_reg(size_t ofs) 43 - { 44 - return __raw_readw(mbox_base + ofs); 45 - } 46 - 47 - static inline void mbox_write_reg(u32 val, size_t ofs) 48 - { 49 - __raw_writew(val, mbox_base + ofs); 50 - } 51 - 52 - /* msg */ 53 - static mbox_msg_t omap1_mbox_fifo_read(struct omap_mbox *mbox) 54 - { 55 - struct omap_mbox1_fifo *fifo = 56 - &((struct omap_mbox1_priv *)mbox->priv)->rx_fifo; 57 - mbox_msg_t msg; 58 - 59 - msg = mbox_read_reg(fifo->data); 60 - msg |= ((mbox_msg_t) mbox_read_reg(fifo->cmd)) << 16; 61 - 62 - return msg; 63 - } 64 - 65 - static void 66 - omap1_mbox_fifo_write(struct omap_mbox *mbox, mbox_msg_t msg) 67 - { 68 - struct omap_mbox1_fifo *fifo = 69 - &((struct omap_mbox1_priv *)mbox->priv)->tx_fifo; 70 - 71 - mbox_write_reg(msg & 0xffff, fifo->data); 72 - mbox_write_reg(msg >> 16, fifo->cmd); 73 - 
} 74 - 75 - static int omap1_mbox_fifo_empty(struct omap_mbox *mbox) 76 - { 77 - return 0; 78 - } 79 - 80 - static int omap1_mbox_fifo_full(struct omap_mbox *mbox) 81 - { 82 - struct omap_mbox1_fifo *fifo = 83 - &((struct omap_mbox1_priv *)mbox->priv)->rx_fifo; 84 - 85 - return mbox_read_reg(fifo->flag); 86 - } 87 - 88 - /* irq */ 89 - static void 90 - omap1_mbox_enable_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq) 91 - { 92 - if (irq == IRQ_RX) 93 - enable_irq(mbox->irq); 94 - } 95 - 96 - static void 97 - omap1_mbox_disable_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq) 98 - { 99 - if (irq == IRQ_RX) 100 - disable_irq(mbox->irq); 101 - } 102 - 103 - static int 104 - omap1_mbox_is_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq) 105 - { 106 - if (irq == IRQ_TX) 107 - return 0; 108 - return 1; 109 - } 110 - 111 - static struct omap_mbox_ops omap1_mbox_ops = { 112 - .type = OMAP_MBOX_TYPE1, 113 - .fifo_read = omap1_mbox_fifo_read, 114 - .fifo_write = omap1_mbox_fifo_write, 115 - .fifo_empty = omap1_mbox_fifo_empty, 116 - .fifo_full = omap1_mbox_fifo_full, 117 - .enable_irq = omap1_mbox_enable_irq, 118 - .disable_irq = omap1_mbox_disable_irq, 119 - .is_irq = omap1_mbox_is_irq, 120 - }; 121 - 122 - /* FIXME: the following struct should be created automatically by the user id */ 123 - 124 - /* DSP */ 125 - static struct omap_mbox1_priv omap1_mbox_dsp_priv = { 126 - .tx_fifo = { 127 - .cmd = MAILBOX_ARM2DSP1b, 128 - .data = MAILBOX_ARM2DSP1, 129 - .flag = MAILBOX_ARM2DSP1_Flag, 130 - }, 131 - .rx_fifo = { 132 - .cmd = MAILBOX_DSP2ARM1b, 133 - .data = MAILBOX_DSP2ARM1, 134 - .flag = MAILBOX_DSP2ARM1_Flag, 135 - }, 136 - }; 137 - 138 - static struct omap_mbox mbox_dsp_info = { 139 - .name = "dsp", 140 - .ops = &omap1_mbox_ops, 141 - .priv = &omap1_mbox_dsp_priv, 142 - }; 143 - 144 - static struct omap_mbox *omap1_mboxes[] = { &mbox_dsp_info, NULL }; 145 - 146 - static int omap1_mbox_probe(struct platform_device *pdev) 147 - { 148 - struct resource *mem; 149 - int ret; 
150 - struct omap_mbox **list; 151 - 152 - list = omap1_mboxes; 153 - list[0]->irq = platform_get_irq_byname(pdev, "dsp"); 154 - 155 - mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 156 - if (!mem) 157 - return -ENOENT; 158 - 159 - mbox_base = ioremap(mem->start, resource_size(mem)); 160 - if (!mbox_base) 161 - return -ENOMEM; 162 - 163 - ret = omap_mbox_register(&pdev->dev, list); 164 - if (ret) { 165 - iounmap(mbox_base); 166 - return ret; 167 - } 168 - 169 - return 0; 170 - } 171 - 172 - static int omap1_mbox_remove(struct platform_device *pdev) 173 - { 174 - omap_mbox_unregister(); 175 - iounmap(mbox_base); 176 - return 0; 177 - } 178 - 179 - static struct platform_driver omap1_mbox_driver = { 180 - .probe = omap1_mbox_probe, 181 - .remove = omap1_mbox_remove, 182 - .driver = { 183 - .name = "omap-mailbox", 184 - }, 185 - }; 186 - 187 - static int __init omap1_mbox_init(void) 188 - { 189 - return platform_driver_register(&omap1_mbox_driver); 190 - } 191 - 192 - static void __exit omap1_mbox_exit(void) 193 - { 194 - platform_driver_unregister(&omap1_mbox_driver); 195 - } 196 - 197 - module_init(omap1_mbox_init); 198 - module_exit(omap1_mbox_exit); 199 - 200 - MODULE_LICENSE("GPL v2"); 201 - MODULE_DESCRIPTION("omap mailbox: omap1 architecture specific functions"); 202 - MODULE_AUTHOR("Hiroshi DOYU <Hiroshi.DOYU@nokia.com>"); 203 - MODULE_ALIAS("platform:omap1-mailbox");
drivers/mailbox/mailbox-omap2.c (-357)
··· 1 - /* 2 - * Mailbox reservation modules for OMAP2/3 3 - * 4 - * Copyright (C) 2006-2009 Nokia Corporation 5 - * Written by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com> 6 - * and Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 - */ 12 - 13 - #include <linux/module.h> 14 - #include <linux/slab.h> 15 - #include <linux/clk.h> 16 - #include <linux/err.h> 17 - #include <linux/platform_device.h> 18 - #include <linux/io.h> 19 - #include <linux/pm_runtime.h> 20 - #include <linux/platform_data/mailbox-omap.h> 21 - 22 - #include "omap-mbox.h" 23 - 24 - #define MAILBOX_REVISION 0x000 25 - #define MAILBOX_MESSAGE(m) (0x040 + 4 * (m)) 26 - #define MAILBOX_FIFOSTATUS(m) (0x080 + 4 * (m)) 27 - #define MAILBOX_MSGSTATUS(m) (0x0c0 + 4 * (m)) 28 - #define MAILBOX_IRQSTATUS(u) (0x100 + 8 * (u)) 29 - #define MAILBOX_IRQENABLE(u) (0x104 + 8 * (u)) 30 - 31 - #define OMAP4_MAILBOX_IRQSTATUS(u) (0x104 + 0x10 * (u)) 32 - #define OMAP4_MAILBOX_IRQENABLE(u) (0x108 + 0x10 * (u)) 33 - #define OMAP4_MAILBOX_IRQENABLE_CLR(u) (0x10c + 0x10 * (u)) 34 - 35 - #define MAILBOX_IRQ_NEWMSG(m) (1 << (2 * (m))) 36 - #define MAILBOX_IRQ_NOTFULL(m) (1 << (2 * (m) + 1)) 37 - 38 - #define MBOX_REG_SIZE 0x120 39 - 40 - #define OMAP4_MBOX_REG_SIZE 0x130 41 - 42 - #define MBOX_NR_REGS (MBOX_REG_SIZE / sizeof(u32)) 43 - #define OMAP4_MBOX_NR_REGS (OMAP4_MBOX_REG_SIZE / sizeof(u32)) 44 - 45 - static void __iomem *mbox_base; 46 - 47 - struct omap_mbox2_fifo { 48 - unsigned long msg; 49 - unsigned long fifo_stat; 50 - unsigned long msg_stat; 51 - }; 52 - 53 - struct omap_mbox2_priv { 54 - struct omap_mbox2_fifo tx_fifo; 55 - struct omap_mbox2_fifo rx_fifo; 56 - unsigned long irqenable; 57 - unsigned long irqstatus; 58 - u32 newmsg_bit; 59 - u32 notfull_bit; 60 - u32 ctx[OMAP4_MBOX_NR_REGS]; 61 - unsigned long irqdisable; 62 - u32 intr_type; 63 - }; 64 - 65 - static 
inline unsigned int mbox_read_reg(size_t ofs) 66 - { 67 - return __raw_readl(mbox_base + ofs); 68 - } 69 - 70 - static inline void mbox_write_reg(u32 val, size_t ofs) 71 - { 72 - __raw_writel(val, mbox_base + ofs); 73 - } 74 - 75 - /* Mailbox H/W preparations */ 76 - static int omap2_mbox_startup(struct omap_mbox *mbox) 77 - { 78 - u32 l; 79 - 80 - pm_runtime_enable(mbox->dev->parent); 81 - pm_runtime_get_sync(mbox->dev->parent); 82 - 83 - l = mbox_read_reg(MAILBOX_REVISION); 84 - pr_debug("omap mailbox rev %d.%d\n", (l & 0xf0) >> 4, (l & 0x0f)); 85 - 86 - return 0; 87 - } 88 - 89 - static void omap2_mbox_shutdown(struct omap_mbox *mbox) 90 - { 91 - pm_runtime_put_sync(mbox->dev->parent); 92 - pm_runtime_disable(mbox->dev->parent); 93 - } 94 - 95 - /* Mailbox FIFO handle functions */ 96 - static mbox_msg_t omap2_mbox_fifo_read(struct omap_mbox *mbox) 97 - { 98 - struct omap_mbox2_fifo *fifo = 99 - &((struct omap_mbox2_priv *)mbox->priv)->rx_fifo; 100 - return (mbox_msg_t) mbox_read_reg(fifo->msg); 101 - } 102 - 103 - static void omap2_mbox_fifo_write(struct omap_mbox *mbox, mbox_msg_t msg) 104 - { 105 - struct omap_mbox2_fifo *fifo = 106 - &((struct omap_mbox2_priv *)mbox->priv)->tx_fifo; 107 - mbox_write_reg(msg, fifo->msg); 108 - } 109 - 110 - static int omap2_mbox_fifo_empty(struct omap_mbox *mbox) 111 - { 112 - struct omap_mbox2_fifo *fifo = 113 - &((struct omap_mbox2_priv *)mbox->priv)->rx_fifo; 114 - return (mbox_read_reg(fifo->msg_stat) == 0); 115 - } 116 - 117 - static int omap2_mbox_fifo_full(struct omap_mbox *mbox) 118 - { 119 - struct omap_mbox2_fifo *fifo = 120 - &((struct omap_mbox2_priv *)mbox->priv)->tx_fifo; 121 - return mbox_read_reg(fifo->fifo_stat); 122 - } 123 - 124 - /* Mailbox IRQ handle functions */ 125 - static void omap2_mbox_enable_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq) 126 - { 127 - struct omap_mbox2_priv *p = mbox->priv; 128 - u32 l, bit = (irq == IRQ_TX) ? 
p->notfull_bit : p->newmsg_bit; 129 - 130 - l = mbox_read_reg(p->irqenable); 131 - l |= bit; 132 - mbox_write_reg(l, p->irqenable); 133 - } 134 - 135 - static void omap2_mbox_disable_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq) 136 - { 137 - struct omap_mbox2_priv *p = mbox->priv; 138 - u32 bit = (irq == IRQ_TX) ? p->notfull_bit : p->newmsg_bit; 139 - 140 - /* 141 - * Read and update the interrupt configuration register for pre-OMAP4. 142 - * OMAP4 and later SoCs have a dedicated interrupt disabling register. 143 - */ 144 - if (!p->intr_type) 145 - bit = mbox_read_reg(p->irqdisable) & ~bit; 146 - 147 - mbox_write_reg(bit, p->irqdisable); 148 - } 149 - 150 - static void omap2_mbox_ack_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq) 151 - { 152 - struct omap_mbox2_priv *p = mbox->priv; 153 - u32 bit = (irq == IRQ_TX) ? p->notfull_bit : p->newmsg_bit; 154 - 155 - mbox_write_reg(bit, p->irqstatus); 156 - 157 - /* Flush posted write for irq status to avoid spurious interrupts */ 158 - mbox_read_reg(p->irqstatus); 159 - } 160 - 161 - static int omap2_mbox_is_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq) 162 - { 163 - struct omap_mbox2_priv *p = mbox->priv; 164 - u32 bit = (irq == IRQ_TX) ? 
p->notfull_bit : p->newmsg_bit; 165 - u32 enable = mbox_read_reg(p->irqenable); 166 - u32 status = mbox_read_reg(p->irqstatus); 167 - 168 - return (int)(enable & status & bit); 169 - } 170 - 171 - static void omap2_mbox_save_ctx(struct omap_mbox *mbox) 172 - { 173 - int i; 174 - struct omap_mbox2_priv *p = mbox->priv; 175 - int nr_regs; 176 - 177 - if (p->intr_type) 178 - nr_regs = OMAP4_MBOX_NR_REGS; 179 - else 180 - nr_regs = MBOX_NR_REGS; 181 - for (i = 0; i < nr_regs; i++) { 182 - p->ctx[i] = mbox_read_reg(i * sizeof(u32)); 183 - 184 - dev_dbg(mbox->dev, "%s: [%02x] %08x\n", __func__, 185 - i, p->ctx[i]); 186 - } 187 - } 188 - 189 - static void omap2_mbox_restore_ctx(struct omap_mbox *mbox) 190 - { 191 - int i; 192 - struct omap_mbox2_priv *p = mbox->priv; 193 - int nr_regs; 194 - 195 - if (p->intr_type) 196 - nr_regs = OMAP4_MBOX_NR_REGS; 197 - else 198 - nr_regs = MBOX_NR_REGS; 199 - for (i = 0; i < nr_regs; i++) { 200 - mbox_write_reg(p->ctx[i], i * sizeof(u32)); 201 - 202 - dev_dbg(mbox->dev, "%s: [%02x] %08x\n", __func__, 203 - i, p->ctx[i]); 204 - } 205 - } 206 - 207 - static struct omap_mbox_ops omap2_mbox_ops = { 208 - .type = OMAP_MBOX_TYPE2, 209 - .startup = omap2_mbox_startup, 210 - .shutdown = omap2_mbox_shutdown, 211 - .fifo_read = omap2_mbox_fifo_read, 212 - .fifo_write = omap2_mbox_fifo_write, 213 - .fifo_empty = omap2_mbox_fifo_empty, 214 - .fifo_full = omap2_mbox_fifo_full, 215 - .enable_irq = omap2_mbox_enable_irq, 216 - .disable_irq = omap2_mbox_disable_irq, 217 - .ack_irq = omap2_mbox_ack_irq, 218 - .is_irq = omap2_mbox_is_irq, 219 - .save_ctx = omap2_mbox_save_ctx, 220 - .restore_ctx = omap2_mbox_restore_ctx, 221 - }; 222 - 223 - static int omap2_mbox_probe(struct platform_device *pdev) 224 - { 225 - struct resource *mem; 226 - int ret; 227 - struct omap_mbox **list, *mbox, *mboxblk; 228 - struct omap_mbox2_priv *priv, *privblk; 229 - struct omap_mbox_pdata *pdata = pdev->dev.platform_data; 230 - struct omap_mbox_dev_info *info; 231 - int 
i; 232 - 233 - if (!pdata || !pdata->info_cnt || !pdata->info) { 234 - pr_err("%s: platform not supported\n", __func__); 235 - return -ENODEV; 236 - } 237 - 238 - /* allocate one extra for marking end of list */ 239 - list = kzalloc((pdata->info_cnt + 1) * sizeof(*list), GFP_KERNEL); 240 - if (!list) 241 - return -ENOMEM; 242 - 243 - mboxblk = mbox = kzalloc(pdata->info_cnt * sizeof(*mbox), GFP_KERNEL); 244 - if (!mboxblk) { 245 - ret = -ENOMEM; 246 - goto free_list; 247 - } 248 - 249 - privblk = priv = kzalloc(pdata->info_cnt * sizeof(*priv), GFP_KERNEL); 250 - if (!privblk) { 251 - ret = -ENOMEM; 252 - goto free_mboxblk; 253 - } 254 - 255 - info = pdata->info; 256 - for (i = 0; i < pdata->info_cnt; i++, info++, priv++) { 257 - priv->tx_fifo.msg = MAILBOX_MESSAGE(info->tx_id); 258 - priv->tx_fifo.fifo_stat = MAILBOX_FIFOSTATUS(info->tx_id); 259 - priv->rx_fifo.msg = MAILBOX_MESSAGE(info->rx_id); 260 - priv->rx_fifo.msg_stat = MAILBOX_MSGSTATUS(info->rx_id); 261 - priv->notfull_bit = MAILBOX_IRQ_NOTFULL(info->tx_id); 262 - priv->newmsg_bit = MAILBOX_IRQ_NEWMSG(info->rx_id); 263 - if (pdata->intr_type) { 264 - priv->irqenable = OMAP4_MAILBOX_IRQENABLE(info->usr_id); 265 - priv->irqstatus = OMAP4_MAILBOX_IRQSTATUS(info->usr_id); 266 - priv->irqdisable = 267 - OMAP4_MAILBOX_IRQENABLE_CLR(info->usr_id); 268 - } else { 269 - priv->irqenable = MAILBOX_IRQENABLE(info->usr_id); 270 - priv->irqstatus = MAILBOX_IRQSTATUS(info->usr_id); 271 - priv->irqdisable = MAILBOX_IRQENABLE(info->usr_id); 272 - } 273 - priv->intr_type = pdata->intr_type; 274 - 275 - mbox->priv = priv; 276 - mbox->name = info->name; 277 - mbox->ops = &omap2_mbox_ops; 278 - mbox->irq = platform_get_irq(pdev, info->irq_id); 279 - if (mbox->irq < 0) { 280 - ret = mbox->irq; 281 - goto free_privblk; 282 - } 283 - list[i] = mbox++; 284 - } 285 - 286 - mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 287 - if (!mem) { 288 - ret = -ENOENT; 289 - goto free_privblk; 290 - } 291 - 292 - mbox_base = 
ioremap(mem->start, resource_size(mem)); 293 - if (!mbox_base) { 294 - ret = -ENOMEM; 295 - goto free_privblk; 296 - } 297 - 298 - ret = omap_mbox_register(&pdev->dev, list); 299 - if (ret) 300 - goto unmap_mbox; 301 - platform_set_drvdata(pdev, list); 302 - 303 - return 0; 304 - 305 - unmap_mbox: 306 - iounmap(mbox_base); 307 - free_privblk: 308 - kfree(privblk); 309 - free_mboxblk: 310 - kfree(mboxblk); 311 - free_list: 312 - kfree(list); 313 - return ret; 314 - } 315 - 316 - static int omap2_mbox_remove(struct platform_device *pdev) 317 - { 318 - struct omap_mbox2_priv *privblk; 319 - struct omap_mbox **list = platform_get_drvdata(pdev); 320 - struct omap_mbox *mboxblk = list[0]; 321 - 322 - privblk = mboxblk->priv; 323 - omap_mbox_unregister(); 324 - iounmap(mbox_base); 325 - kfree(privblk); 326 - kfree(mboxblk); 327 - kfree(list); 328 - 329 - return 0; 330 - } 331 - 332 - static struct platform_driver omap2_mbox_driver = { 333 - .probe = omap2_mbox_probe, 334 - .remove = omap2_mbox_remove, 335 - .driver = { 336 - .name = "omap-mailbox", 337 - }, 338 - }; 339 - 340 - static int __init omap2_mbox_init(void) 341 - { 342 - return platform_driver_register(&omap2_mbox_driver); 343 - } 344 - 345 - static void __exit omap2_mbox_exit(void) 346 - { 347 - platform_driver_unregister(&omap2_mbox_driver); 348 - } 349 - 350 - module_init(omap2_mbox_init); 351 - module_exit(omap2_mbox_exit); 352 - 353 - MODULE_LICENSE("GPL v2"); 354 - MODULE_DESCRIPTION("omap mailbox: omap2/3/4 architecture specific functions"); 355 - MODULE_AUTHOR("Hiroshi DOYU <Hiroshi.DOYU@nokia.com>"); 356 - MODULE_AUTHOR("Paul Mundt"); 357 - MODULE_ALIAS("platform:omap2-mailbox");
drivers/mailbox/omap-mailbox.c (+355, -83)
··· 2 2 * OMAP mailbox driver 3 3 * 4 4 * Copyright (C) 2006-2009 Nokia Corporation. All rights reserved. 5 + * Copyright (C) 2013-2014 Texas Instruments Inc. 5 6 * 6 7 * Contact: Hiroshi DOYU <Hiroshi.DOYU@nokia.com> 8 + * Suman Anna <s-anna@ti.com> 7 9 * 8 10 * This program is free software; you can redistribute it and/or 9 11 * modify it under the terms of the GNU General Public License ··· 26 24 #include <linux/interrupt.h> 27 25 #include <linux/spinlock.h> 28 26 #include <linux/mutex.h> 29 - #include <linux/delay.h> 30 27 #include <linux/slab.h> 31 28 #include <linux/kfifo.h> 32 29 #include <linux/err.h> 33 30 #include <linux/notifier.h> 34 31 #include <linux/module.h> 32 + #include <linux/platform_device.h> 33 + #include <linux/pm_runtime.h> 34 + #include <linux/platform_data/mailbox-omap.h> 35 + #include <linux/omap-mailbox.h> 35 36 36 - #include "omap-mbox.h" 37 + #define MAILBOX_REVISION 0x000 38 + #define MAILBOX_MESSAGE(m) (0x040 + 4 * (m)) 39 + #define MAILBOX_FIFOSTATUS(m) (0x080 + 4 * (m)) 40 + #define MAILBOX_MSGSTATUS(m) (0x0c0 + 4 * (m)) 37 41 38 - static struct omap_mbox **mboxes; 42 + #define OMAP2_MAILBOX_IRQSTATUS(u) (0x100 + 8 * (u)) 43 + #define OMAP2_MAILBOX_IRQENABLE(u) (0x104 + 8 * (u)) 39 44 40 - static int mbox_configured; 41 - static DEFINE_MUTEX(mbox_configured_lock); 45 + #define OMAP4_MAILBOX_IRQSTATUS(u) (0x104 + 0x10 * (u)) 46 + #define OMAP4_MAILBOX_IRQENABLE(u) (0x108 + 0x10 * (u)) 47 + #define OMAP4_MAILBOX_IRQENABLE_CLR(u) (0x10c + 0x10 * (u)) 48 + 49 + #define MAILBOX_IRQSTATUS(type, u) (type ? OMAP4_MAILBOX_IRQSTATUS(u) : \ 50 + OMAP2_MAILBOX_IRQSTATUS(u)) 51 + #define MAILBOX_IRQENABLE(type, u) (type ? OMAP4_MAILBOX_IRQENABLE(u) : \ 52 + OMAP2_MAILBOX_IRQENABLE(u)) 53 + #define MAILBOX_IRQDISABLE(type, u) (type ? 
OMAP4_MAILBOX_IRQENABLE_CLR(u) \ 54 + : OMAP2_MAILBOX_IRQENABLE(u)) 55 + 56 + #define MAILBOX_IRQ_NEWMSG(m) (1 << (2 * (m))) 57 + #define MAILBOX_IRQ_NOTFULL(m) (1 << (2 * (m) + 1)) 58 + 59 + #define MBOX_REG_SIZE 0x120 60 + 61 + #define OMAP4_MBOX_REG_SIZE 0x130 62 + 63 + #define MBOX_NR_REGS (MBOX_REG_SIZE / sizeof(u32)) 64 + #define OMAP4_MBOX_NR_REGS (OMAP4_MBOX_REG_SIZE / sizeof(u32)) 65 + 66 + struct omap_mbox_fifo { 67 + unsigned long msg; 68 + unsigned long fifo_stat; 69 + unsigned long msg_stat; 70 + unsigned long irqenable; 71 + unsigned long irqstatus; 72 + unsigned long irqdisable; 73 + u32 intr_bit; 74 + }; 75 + 76 + struct omap_mbox_queue { 77 + spinlock_t lock; 78 + struct kfifo fifo; 79 + struct work_struct work; 80 + struct tasklet_struct tasklet; 81 + struct omap_mbox *mbox; 82 + bool full; 83 + }; 84 + 85 + struct omap_mbox_device { 86 + struct device *dev; 87 + struct mutex cfg_lock; 88 + void __iomem *mbox_base; 89 + u32 num_users; 90 + u32 num_fifos; 91 + struct omap_mbox **mboxes; 92 + struct list_head elem; 93 + }; 94 + 95 + struct omap_mbox { 96 + const char *name; 97 + int irq; 98 + struct omap_mbox_queue *txq, *rxq; 99 + struct device *dev; 100 + struct omap_mbox_device *parent; 101 + struct omap_mbox_fifo tx_fifo; 102 + struct omap_mbox_fifo rx_fifo; 103 + u32 ctx[OMAP4_MBOX_NR_REGS]; 104 + u32 intr_type; 105 + int use_count; 106 + struct blocking_notifier_head notifier; 107 + }; 108 + 109 + /* global variables for the mailbox devices */ 110 + static DEFINE_MUTEX(omap_mbox_devices_lock); 111 + static LIST_HEAD(omap_mbox_devices); 42 112 43 113 static unsigned int mbox_kfifo_size = CONFIG_OMAP_MBOX_KFIFO_SIZE; 44 114 module_param(mbox_kfifo_size, uint, S_IRUGO); 45 115 MODULE_PARM_DESC(mbox_kfifo_size, "Size of omap's mailbox kfifo (bytes)"); 46 116 117 + static inline 118 + unsigned int mbox_read_reg(struct omap_mbox_device *mdev, size_t ofs) 119 + { 120 + return __raw_readl(mdev->mbox_base + ofs); 121 + } 122 + 123 + static inline 124 + 
void mbox_write_reg(struct omap_mbox_device *mdev, u32 val, size_t ofs) 125 + { 126 + __raw_writel(val, mdev->mbox_base + ofs); 127 + } 128 + 47 129 /* Mailbox FIFO handle functions */ 48 - static inline mbox_msg_t mbox_fifo_read(struct omap_mbox *mbox) 130 + static mbox_msg_t mbox_fifo_read(struct omap_mbox *mbox) 49 131 { 50 - return mbox->ops->fifo_read(mbox); 132 + struct omap_mbox_fifo *fifo = &mbox->rx_fifo; 133 + return (mbox_msg_t) mbox_read_reg(mbox->parent, fifo->msg); 51 134 } 52 - static inline void mbox_fifo_write(struct omap_mbox *mbox, mbox_msg_t msg) 135 + 136 + static void mbox_fifo_write(struct omap_mbox *mbox, mbox_msg_t msg) 53 137 { 54 - mbox->ops->fifo_write(mbox, msg); 138 + struct omap_mbox_fifo *fifo = &mbox->tx_fifo; 139 + mbox_write_reg(mbox->parent, msg, fifo->msg); 55 140 } 56 - static inline int mbox_fifo_empty(struct omap_mbox *mbox) 141 + 142 + static int mbox_fifo_empty(struct omap_mbox *mbox) 57 143 { 58 - return mbox->ops->fifo_empty(mbox); 144 + struct omap_mbox_fifo *fifo = &mbox->rx_fifo; 145 + return (mbox_read_reg(mbox->parent, fifo->msg_stat) == 0); 59 146 } 60 - static inline int mbox_fifo_full(struct omap_mbox *mbox) 147 + 148 + static int mbox_fifo_full(struct omap_mbox *mbox) 61 149 { 62 - return mbox->ops->fifo_full(mbox); 150 + struct omap_mbox_fifo *fifo = &mbox->tx_fifo; 151 + return mbox_read_reg(mbox->parent, fifo->fifo_stat); 63 152 } 64 153 65 154 /* Mailbox IRQ handle functions */ 66 - static inline void ack_mbox_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq) 155 + static void ack_mbox_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq) 67 156 { 68 - if (mbox->ops->ack_irq) 69 - mbox->ops->ack_irq(mbox, irq); 157 + struct omap_mbox_fifo *fifo = (irq == IRQ_TX) ? 
158 + &mbox->tx_fifo : &mbox->rx_fifo; 159 + u32 bit = fifo->intr_bit; 160 + u32 irqstatus = fifo->irqstatus; 161 + 162 + mbox_write_reg(mbox->parent, bit, irqstatus); 163 + 164 + /* Flush posted write for irq status to avoid spurious interrupts */ 165 + mbox_read_reg(mbox->parent, irqstatus); 70 166 } 71 - static inline int is_mbox_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq) 167 + 168 + static int is_mbox_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq) 72 169 { 73 - return mbox->ops->is_irq(mbox, irq); 170 + struct omap_mbox_fifo *fifo = (irq == IRQ_TX) ? 171 + &mbox->tx_fifo : &mbox->rx_fifo; 172 + u32 bit = fifo->intr_bit; 173 + u32 irqenable = fifo->irqenable; 174 + u32 irqstatus = fifo->irqstatus; 175 + 176 + u32 enable = mbox_read_reg(mbox->parent, irqenable); 177 + u32 status = mbox_read_reg(mbox->parent, irqstatus); 178 + 179 + return (int)(enable & status & bit); 74 180 } 75 181 76 182 /* 77 183 * message sender 78 184 */ 79 - static int __mbox_poll_for_space(struct omap_mbox *mbox) 80 - { 81 - int ret = 0, i = 1000; 82 - 83 - while (mbox_fifo_full(mbox)) { 84 - if (mbox->ops->type == OMAP_MBOX_TYPE2) 85 - return -1; 86 - if (--i == 0) 87 - return -1; 88 - udelay(1); 89 - } 90 - return ret; 91 - } 92 - 93 185 int omap_mbox_msg_send(struct omap_mbox *mbox, mbox_msg_t msg) 94 186 { 95 187 struct omap_mbox_queue *mq = mbox->txq; ··· 196 100 goto out; 197 101 } 198 102 199 - if (kfifo_is_empty(&mq->fifo) && !__mbox_poll_for_space(mbox)) { 103 + if (kfifo_is_empty(&mq->fifo) && !mbox_fifo_full(mbox)) { 200 104 mbox_fifo_write(mbox, msg); 201 105 goto out; 202 106 } ··· 214 118 215 119 void omap_mbox_save_ctx(struct omap_mbox *mbox) 216 120 { 217 - if (!mbox->ops->save_ctx) { 218 - dev_err(mbox->dev, "%s:\tno save\n", __func__); 219 - return; 220 - } 121 + int i; 122 + int nr_regs; 221 123 222 - mbox->ops->save_ctx(mbox); 124 + if (mbox->intr_type) 125 + nr_regs = OMAP4_MBOX_NR_REGS; 126 + else 127 + nr_regs = MBOX_NR_REGS; 128 + for (i = 0; i < nr_regs; 
i++) { 129 + mbox->ctx[i] = mbox_read_reg(mbox->parent, i * sizeof(u32)); 130 + 131 + dev_dbg(mbox->dev, "%s: [%02x] %08x\n", __func__, 132 + i, mbox->ctx[i]); 133 + } 223 134 } 224 135 EXPORT_SYMBOL(omap_mbox_save_ctx); 225 136 226 137 void omap_mbox_restore_ctx(struct omap_mbox *mbox) 227 138 { 228 - if (!mbox->ops->restore_ctx) { 229 - dev_err(mbox->dev, "%s:\tno restore\n", __func__); 230 - return; 231 - } 139 + int i; 140 + int nr_regs; 232 141 233 - mbox->ops->restore_ctx(mbox); 142 + if (mbox->intr_type) 143 + nr_regs = OMAP4_MBOX_NR_REGS; 144 + else 145 + nr_regs = MBOX_NR_REGS; 146 + for (i = 0; i < nr_regs; i++) { 147 + mbox_write_reg(mbox->parent, mbox->ctx[i], i * sizeof(u32)); 148 + 149 + dev_dbg(mbox->dev, "%s: [%02x] %08x\n", __func__, 150 + i, mbox->ctx[i]); 151 + } 234 152 } 235 153 EXPORT_SYMBOL(omap_mbox_restore_ctx); 236 154 237 155 void omap_mbox_enable_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq) 238 156 { 239 - mbox->ops->enable_irq(mbox, irq); 157 + u32 l; 158 + struct omap_mbox_fifo *fifo = (irq == IRQ_TX) ? 159 + &mbox->tx_fifo : &mbox->rx_fifo; 160 + u32 bit = fifo->intr_bit; 161 + u32 irqenable = fifo->irqenable; 162 + 163 + l = mbox_read_reg(mbox->parent, irqenable); 164 + l |= bit; 165 + mbox_write_reg(mbox->parent, l, irqenable); 240 166 } 241 167 EXPORT_SYMBOL(omap_mbox_enable_irq); 242 168 243 169 void omap_mbox_disable_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq) 244 170 { 245 - mbox->ops->disable_irq(mbox, irq); 171 + struct omap_mbox_fifo *fifo = (irq == IRQ_TX) ? 172 + &mbox->tx_fifo : &mbox->rx_fifo; 173 + u32 bit = fifo->intr_bit; 174 + u32 irqdisable = fifo->irqdisable; 175 + 176 + /* 177 + * Read and update the interrupt configuration register for pre-OMAP4. 178 + * OMAP4 and later SoCs have a dedicated interrupt disabling register. 
179 + */ 180 + if (!mbox->intr_type) 181 + bit = mbox_read_reg(mbox->parent, irqdisable) & ~bit; 182 + 183 + mbox_write_reg(mbox->parent, bit, irqdisable); 246 184 } 247 185 EXPORT_SYMBOL(omap_mbox_disable_irq); 248 186 ··· 288 158 int ret; 289 159 290 160 while (kfifo_len(&mq->fifo)) { 291 - if (__mbox_poll_for_space(mbox)) { 161 + if (mbox_fifo_full(mbox)) { 292 162 omap_mbox_enable_irq(mbox, IRQ_TX); 293 163 break; 294 164 } ··· 353 223 354 224 len = kfifo_in(&mq->fifo, (unsigned char *)&msg, sizeof(msg)); 355 225 WARN_ON(len != sizeof(msg)); 356 - 357 - if (mbox->ops->type == OMAP_MBOX_TYPE1) 358 - break; 359 226 } 360 227 361 228 /* no more messages in the fifo. clear IRQ source. */ ··· 410 283 { 411 284 int ret = 0; 412 285 struct omap_mbox_queue *mq; 286 + struct omap_mbox_device *mdev = mbox->parent; 413 287 414 - mutex_lock(&mbox_configured_lock); 415 - if (!mbox_configured++) { 416 - if (likely(mbox->ops->startup)) { 417 - ret = mbox->ops->startup(mbox); 418 - if (unlikely(ret)) 419 - goto fail_startup; 420 - } else 421 - goto fail_startup; 422 - } 288 + mutex_lock(&mdev->cfg_lock); 289 + ret = pm_runtime_get_sync(mdev->dev); 290 + if (unlikely(ret < 0)) 291 + goto fail_startup; 423 292 424 293 if (!mbox->use_count++) { 425 294 mq = mbox_queue_alloc(mbox, NULL, mbox_tx_tasklet); ··· 442 319 443 320 omap_mbox_enable_irq(mbox, IRQ_RX); 444 321 } 445 - mutex_unlock(&mbox_configured_lock); 322 + mutex_unlock(&mdev->cfg_lock); 446 323 return 0; 447 324 448 325 fail_request_irq: ··· 450 327 fail_alloc_rxq: 451 328 mbox_queue_free(mbox->txq); 452 329 fail_alloc_txq: 453 - if (mbox->ops->shutdown) 454 - mbox->ops->shutdown(mbox); 330 + pm_runtime_put_sync(mdev->dev); 455 331 mbox->use_count--; 456 332 fail_startup: 457 - mbox_configured--; 458 - mutex_unlock(&mbox_configured_lock); 333 + mutex_unlock(&mdev->cfg_lock); 459 334 return ret; 460 335 } 461 336 462 337 static void omap_mbox_fini(struct omap_mbox *mbox) 463 338 { 464 - mutex_lock(&mbox_configured_lock); 
339 + struct omap_mbox_device *mdev = mbox->parent; 340 + 341 + mutex_lock(&mdev->cfg_lock); 465 342 466 343 if (!--mbox->use_count) { 467 344 omap_mbox_disable_irq(mbox, IRQ_RX); ··· 472 349 mbox_queue_free(mbox->rxq); 473 350 } 474 351 475 - if (likely(mbox->ops->shutdown)) { 476 - if (!--mbox_configured) 477 - mbox->ops->shutdown(mbox); 478 - } 352 + pm_runtime_put_sync(mdev->dev); 479 353 480 - mutex_unlock(&mbox_configured_lock); 354 + mutex_unlock(&mdev->cfg_lock); 481 355 } 482 356 483 - struct omap_mbox *omap_mbox_get(const char *name, struct notifier_block *nb) 357 + static struct omap_mbox *omap_mbox_device_find(struct omap_mbox_device *mdev, 358 + const char *mbox_name) 484 359 { 485 360 struct omap_mbox *_mbox, *mbox = NULL; 486 - int i, ret; 361 + struct omap_mbox **mboxes = mdev->mboxes; 362 + int i; 487 363 488 364 if (!mboxes) 489 - return ERR_PTR(-EINVAL); 365 + return NULL; 490 366 491 367 for (i = 0; (_mbox = mboxes[i]); i++) { 492 - if (!strcmp(_mbox->name, name)) { 368 + if (!strcmp(_mbox->name, mbox_name)) { 493 369 mbox = _mbox; 494 370 break; 495 371 } 496 372 } 373 + return mbox; 374 + } 375 + 376 + struct omap_mbox *omap_mbox_get(const char *name, struct notifier_block *nb) 377 + { 378 + struct omap_mbox *mbox = NULL; 379 + struct omap_mbox_device *mdev; 380 + int ret; 381 + 382 + mutex_lock(&omap_mbox_devices_lock); 383 + list_for_each_entry(mdev, &omap_mbox_devices, elem) { 384 + mbox = omap_mbox_device_find(mdev, name); 385 + if (mbox) 386 + break; 387 + } 388 + mutex_unlock(&omap_mbox_devices_lock); 497 389 498 390 if (!mbox) 499 391 return ERR_PTR(-ENOENT); ··· 535 397 536 398 static struct class omap_mbox_class = { .name = "mbox", }; 537 399 538 - int omap_mbox_register(struct device *parent, struct omap_mbox **list) 400 + static int omap_mbox_register(struct omap_mbox_device *mdev) 539 401 { 540 402 int ret; 541 403 int i; 404 + struct omap_mbox **mboxes; 542 405 543 - mboxes = list; 544 - if (!mboxes) 406 + if (!mdev || 
!mdev->mboxes) 545 407 return -EINVAL; 546 408 409 + mboxes = mdev->mboxes; 547 410 for (i = 0; mboxes[i]; i++) { 548 411 struct omap_mbox *mbox = mboxes[i]; 549 412 mbox->dev = device_create(&omap_mbox_class, 550 - parent, 0, mbox, "%s", mbox->name); 413 + mdev->dev, 0, mbox, "%s", mbox->name); 551 414 if (IS_ERR(mbox->dev)) { 552 415 ret = PTR_ERR(mbox->dev); 553 416 goto err_out; ··· 556 417 557 418 BLOCKING_INIT_NOTIFIER_HEAD(&mbox->notifier); 558 419 } 420 + 421 + mutex_lock(&omap_mbox_devices_lock); 422 + list_add(&mdev->elem, &omap_mbox_devices); 423 + mutex_unlock(&omap_mbox_devices_lock); 424 + 559 425 return 0; 560 426 561 427 err_out: ··· 568 424 device_unregister(mboxes[i]->dev); 569 425 return ret; 570 426 } 571 - EXPORT_SYMBOL(omap_mbox_register); 572 427 573 - int omap_mbox_unregister(void) 428 + static int omap_mbox_unregister(struct omap_mbox_device *mdev) 574 429 { 575 430 int i; 431 + struct omap_mbox **mboxes; 576 432 577 - if (!mboxes) 433 + if (!mdev || !mdev->mboxes) 578 434 return -EINVAL; 579 435 436 + mutex_lock(&omap_mbox_devices_lock); 437 + list_del(&mdev->elem); 438 + mutex_unlock(&omap_mbox_devices_lock); 439 + 440 + mboxes = mdev->mboxes; 580 441 for (i = 0; mboxes[i]; i++) 581 442 device_unregister(mboxes[i]->dev); 582 - mboxes = NULL; 583 443 return 0; 584 444 } 585 - EXPORT_SYMBOL(omap_mbox_unregister); 445 + 446 + static int omap_mbox_probe(struct platform_device *pdev) 447 + { 448 + struct resource *mem; 449 + int ret; 450 + struct omap_mbox **list, *mbox, *mboxblk; 451 + struct omap_mbox_pdata *pdata = pdev->dev.platform_data; 452 + struct omap_mbox_dev_info *info; 453 + struct omap_mbox_device *mdev; 454 + struct omap_mbox_fifo *fifo; 455 + u32 intr_type; 456 + u32 l; 457 + int i; 458 + 459 + if (!pdata || !pdata->info_cnt || !pdata->info) { 460 + pr_err("%s: platform not supported\n", __func__); 461 + return -ENODEV; 462 + } 463 + 464 + mdev = devm_kzalloc(&pdev->dev, sizeof(*mdev), GFP_KERNEL); 465 + if (!mdev) 466 + return 
-ENOMEM; 467 + 468 + mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 469 + mdev->mbox_base = devm_ioremap_resource(&pdev->dev, mem); 470 + if (IS_ERR(mdev->mbox_base)) 471 + return PTR_ERR(mdev->mbox_base); 472 + 473 + /* allocate one extra for marking end of list */ 474 + list = devm_kzalloc(&pdev->dev, (pdata->info_cnt + 1) * sizeof(*list), 475 + GFP_KERNEL); 476 + if (!list) 477 + return -ENOMEM; 478 + 479 + mboxblk = devm_kzalloc(&pdev->dev, pdata->info_cnt * sizeof(*mbox), 480 + GFP_KERNEL); 481 + if (!mboxblk) 482 + return -ENOMEM; 483 + 484 + info = pdata->info; 485 + intr_type = pdata->intr_type; 486 + mbox = mboxblk; 487 + for (i = 0; i < pdata->info_cnt; i++, info++) { 488 + fifo = &mbox->tx_fifo; 489 + fifo->msg = MAILBOX_MESSAGE(info->tx_id); 490 + fifo->fifo_stat = MAILBOX_FIFOSTATUS(info->tx_id); 491 + fifo->intr_bit = MAILBOX_IRQ_NOTFULL(info->tx_id); 492 + fifo->irqenable = MAILBOX_IRQENABLE(intr_type, info->usr_id); 493 + fifo->irqstatus = MAILBOX_IRQSTATUS(intr_type, info->usr_id); 494 + fifo->irqdisable = MAILBOX_IRQDISABLE(intr_type, info->usr_id); 495 + 496 + fifo = &mbox->rx_fifo; 497 + fifo->msg = MAILBOX_MESSAGE(info->rx_id); 498 + fifo->msg_stat = MAILBOX_MSGSTATUS(info->rx_id); 499 + fifo->intr_bit = MAILBOX_IRQ_NEWMSG(info->rx_id); 500 + fifo->irqenable = MAILBOX_IRQENABLE(intr_type, info->usr_id); 501 + fifo->irqstatus = MAILBOX_IRQSTATUS(intr_type, info->usr_id); 502 + fifo->irqdisable = MAILBOX_IRQDISABLE(intr_type, info->usr_id); 503 + 504 + mbox->intr_type = intr_type; 505 + 506 + mbox->parent = mdev; 507 + mbox->name = info->name; 508 + mbox->irq = platform_get_irq(pdev, info->irq_id); 509 + if (mbox->irq < 0) 510 + return mbox->irq; 511 + list[i] = mbox++; 512 + } 513 + 514 + mutex_init(&mdev->cfg_lock); 515 + mdev->dev = &pdev->dev; 516 + mdev->num_users = pdata->num_users; 517 + mdev->num_fifos = pdata->num_fifos; 518 + mdev->mboxes = list; 519 + ret = omap_mbox_register(mdev); 520 + if (ret) 521 + return ret; 522 + 523 + 
platform_set_drvdata(pdev, mdev); 524 + pm_runtime_enable(mdev->dev); 525 + 526 + ret = pm_runtime_get_sync(mdev->dev); 527 + if (ret < 0) { 528 + pm_runtime_put_noidle(mdev->dev); 529 + goto unregister; 530 + } 531 + 532 + /* 533 + * just print the raw revision register, the format is not 534 + * uniform across all SoCs 535 + */ 536 + l = mbox_read_reg(mdev, MAILBOX_REVISION); 537 + dev_info(mdev->dev, "omap mailbox rev 0x%x\n", l); 538 + 539 + ret = pm_runtime_put_sync(mdev->dev); 540 + if (ret < 0) 541 + goto unregister; 542 + 543 + return 0; 544 + 545 + unregister: 546 + pm_runtime_disable(mdev->dev); 547 + omap_mbox_unregister(mdev); 548 + return ret; 549 + } 550 + 551 + static int omap_mbox_remove(struct platform_device *pdev) 552 + { 553 + struct omap_mbox_device *mdev = platform_get_drvdata(pdev); 554 + 555 + pm_runtime_disable(mdev->dev); 556 + omap_mbox_unregister(mdev); 557 + 558 + return 0; 559 + } 560 + 561 + static struct platform_driver omap_mbox_driver = { 562 + .probe = omap_mbox_probe, 563 + .remove = omap_mbox_remove, 564 + .driver = { 565 + .name = "omap-mailbox", 566 + .owner = THIS_MODULE, 567 + }, 568 + }; 586 569 587 570 static int __init omap_mbox_init(void) 588 571 { ··· 724 453 mbox_kfifo_size = max_t(unsigned int, mbox_kfifo_size, 725 454 sizeof(mbox_msg_t)); 726 455 727 - return 0; 456 + return platform_driver_register(&omap_mbox_driver); 728 457 } 729 458 subsys_initcall(omap_mbox_init); 730 459 731 460 static void __exit omap_mbox_exit(void) 732 461 { 462 + platform_driver_unregister(&omap_mbox_driver); 733 463 class_unregister(&omap_mbox_class); 734 464 } 735 465 module_exit(omap_mbox_exit);
-67
drivers/mailbox/omap-mbox.h
··· 1 - /* 2 - * omap-mbox.h: OMAP mailbox internal definitions 3 - * 4 - * This program is free software; you can redistribute it and/or modify 5 - * it under the terms of the GNU General Public License version 2 as 6 - * published by the Free Software Foundation. 7 - */ 8 - 9 - #ifndef OMAP_MBOX_H 10 - #define OMAP_MBOX_H 11 - 12 - #include <linux/device.h> 13 - #include <linux/interrupt.h> 14 - #include <linux/kfifo.h> 15 - #include <linux/spinlock.h> 16 - #include <linux/workqueue.h> 17 - #include <linux/omap-mailbox.h> 18 - 19 - typedef int __bitwise omap_mbox_type_t; 20 - #define OMAP_MBOX_TYPE1 ((__force omap_mbox_type_t) 1) 21 - #define OMAP_MBOX_TYPE2 ((__force omap_mbox_type_t) 2) 22 - 23 - struct omap_mbox_ops { 24 - omap_mbox_type_t type; 25 - int (*startup)(struct omap_mbox *mbox); 26 - void (*shutdown)(struct omap_mbox *mbox); 27 - /* fifo */ 28 - mbox_msg_t (*fifo_read)(struct omap_mbox *mbox); 29 - void (*fifo_write)(struct omap_mbox *mbox, mbox_msg_t msg); 30 - int (*fifo_empty)(struct omap_mbox *mbox); 31 - int (*fifo_full)(struct omap_mbox *mbox); 32 - /* irq */ 33 - void (*enable_irq)(struct omap_mbox *mbox, 34 - omap_mbox_irq_t irq); 35 - void (*disable_irq)(struct omap_mbox *mbox, 36 - omap_mbox_irq_t irq); 37 - void (*ack_irq)(struct omap_mbox *mbox, omap_mbox_irq_t irq); 38 - int (*is_irq)(struct omap_mbox *mbox, omap_mbox_irq_t irq); 39 - /* ctx */ 40 - void (*save_ctx)(struct omap_mbox *mbox); 41 - void (*restore_ctx)(struct omap_mbox *mbox); 42 - }; 43 - 44 - struct omap_mbox_queue { 45 - spinlock_t lock; 46 - struct kfifo fifo; 47 - struct work_struct work; 48 - struct tasklet_struct tasklet; 49 - struct omap_mbox *mbox; 50 - bool full; 51 - }; 52 - 53 - struct omap_mbox { 54 - const char *name; 55 - int irq; 56 - struct omap_mbox_queue *txq, *rxq; 57 - struct omap_mbox_ops *ops; 58 - struct device *dev; 59 - void *priv; 60 - int use_count; 61 - struct blocking_notifier_head notifier; 62 - }; 63 - 64 - int omap_mbox_register(struct 
device *parent, struct omap_mbox **); 65 - int omap_mbox_unregister(void); 66 - 67 - #endif /* OMAP_MBOX_H */
-10
drivers/misc/Kconfig
··· 51 51 To compile this driver as a module, choose M here: the 52 52 module will be called ad525x_dpot-spi. 53 53 54 - config ATMEL_PWM 55 - tristate "Atmel AT32/AT91 PWM support" 56 - depends on HAVE_CLK 57 - depends on AVR32 || ARCH_AT91SAM9263 || ARCH_AT91SAM9RL || ARCH_AT91SAM9G45 58 - help 59 - This option enables device driver support for the PWM channels 60 - on certain Atmel processors. Pulse Width Modulation is used for 61 - purposes including software controlled power-efficient backlights 62 - on LCD displays, motor control, and waveform generation. 63 - 64 54 config ATMEL_TCLIB 65 55 bool "Atmel AT32/AT91 Timer/Counter Library" 66 56 depends on (AVR32 || ARCH_AT91)
-1
drivers/misc/Makefile
··· 7 7 obj-$(CONFIG_AD525X_DPOT_I2C) += ad525x_dpot-i2c.o 8 8 obj-$(CONFIG_AD525X_DPOT_SPI) += ad525x_dpot-spi.o 9 9 obj-$(CONFIG_INTEL_MID_PTI) += pti.o 10 - obj-$(CONFIG_ATMEL_PWM) += atmel_pwm.o 11 10 obj-$(CONFIG_ATMEL_SSC) += atmel-ssc.o 12 11 obj-$(CONFIG_ATMEL_TCLIB) += atmel_tclib.o 13 12 obj-$(CONFIG_BMP085) += bmp085.o
-402
drivers/misc/atmel_pwm.c
··· 1 - #include <linux/module.h> 2 - #include <linux/clk.h> 3 - #include <linux/err.h> 4 - #include <linux/slab.h> 5 - #include <linux/io.h> 6 - #include <linux/interrupt.h> 7 - #include <linux/platform_device.h> 8 - #include <linux/atmel_pwm.h> 9 - 10 - 11 - /* 12 - * This is a simple driver for the PWM controller found in various newer 13 - * Atmel SOCs, including the AVR32 series and the AT91sam9263. 14 - * 15 - * Chips with current Linux ports have only 4 PWM channels, out of max 32. 16 - * AT32UC3A and AT32UC3B chips have 7 channels (but currently no Linux). 17 - * Docs are inconsistent about the width of the channel counter registers; 18 - * it's at least 16 bits, but several places say 20 bits. 19 - */ 20 - #define PWM_NCHAN 4 /* max 32 */ 21 - 22 - struct pwm { 23 - spinlock_t lock; 24 - struct platform_device *pdev; 25 - u32 mask; 26 - int irq; 27 - void __iomem *base; 28 - struct clk *clk; 29 - struct pwm_channel *channel[PWM_NCHAN]; 30 - void (*handler[PWM_NCHAN])(struct pwm_channel *); 31 - }; 32 - 33 - 34 - /* global PWM controller registers */ 35 - #define PWM_MR 0x00 36 - #define PWM_ENA 0x04 37 - #define PWM_DIS 0x08 38 - #define PWM_SR 0x0c 39 - #define PWM_IER 0x10 40 - #define PWM_IDR 0x14 41 - #define PWM_IMR 0x18 42 - #define PWM_ISR 0x1c 43 - 44 - static inline void pwm_writel(const struct pwm *p, unsigned offset, u32 val) 45 - { 46 - __raw_writel(val, p->base + offset); 47 - } 48 - 49 - static inline u32 pwm_readl(const struct pwm *p, unsigned offset) 50 - { 51 - return __raw_readl(p->base + offset); 52 - } 53 - 54 - static inline void __iomem *pwmc_regs(const struct pwm *p, int index) 55 - { 56 - return p->base + 0x200 + index * 0x20; 57 - } 58 - 59 - static struct pwm *pwm; 60 - 61 - static void pwm_dumpregs(struct pwm_channel *ch, char *tag) 62 - { 63 - struct device *dev = &pwm->pdev->dev; 64 - 65 - dev_dbg(dev, "%s: mr %08x, sr %08x, imr %08x\n", 66 - tag, 67 - pwm_readl(pwm, PWM_MR), 68 - pwm_readl(pwm, PWM_SR), 69 - pwm_readl(pwm, 
PWM_IMR)); 70 - dev_dbg(dev, 71 - "pwm ch%d - mr %08x, dty %u, prd %u, cnt %u\n", 72 - ch->index, 73 - pwm_channel_readl(ch, PWM_CMR), 74 - pwm_channel_readl(ch, PWM_CDTY), 75 - pwm_channel_readl(ch, PWM_CPRD), 76 - pwm_channel_readl(ch, PWM_CCNT)); 77 - } 78 - 79 - 80 - /** 81 - * pwm_channel_alloc - allocate an unused PWM channel 82 - * @index: identifies the channel 83 - * @ch: structure to be initialized 84 - * 85 - * Drivers allocate PWM channels according to the board's wiring, and 86 - * matching board-specific setup code. Returns zero or negative errno. 87 - */ 88 - int pwm_channel_alloc(int index, struct pwm_channel *ch) 89 - { 90 - unsigned long flags; 91 - int status = 0; 92 - 93 - if (!pwm) 94 - return -EPROBE_DEFER; 95 - 96 - if (!(pwm->mask & 1 << index)) 97 - return -ENODEV; 98 - 99 - if (index < 0 || index >= PWM_NCHAN || !ch) 100 - return -EINVAL; 101 - memset(ch, 0, sizeof *ch); 102 - 103 - spin_lock_irqsave(&pwm->lock, flags); 104 - if (pwm->channel[index]) 105 - status = -EBUSY; 106 - else { 107 - clk_enable(pwm->clk); 108 - 109 - ch->regs = pwmc_regs(pwm, index); 110 - ch->index = index; 111 - 112 - /* REVISIT: ap7000 seems to go 2x as fast as we expect!! 
*/ 113 - ch->mck = clk_get_rate(pwm->clk); 114 - 115 - pwm->channel[index] = ch; 116 - pwm->handler[index] = NULL; 117 - 118 - /* channel and irq are always disabled when we return */ 119 - pwm_writel(pwm, PWM_DIS, 1 << index); 120 - pwm_writel(pwm, PWM_IDR, 1 << index); 121 - } 122 - spin_unlock_irqrestore(&pwm->lock, flags); 123 - return status; 124 - } 125 - EXPORT_SYMBOL(pwm_channel_alloc); 126 - 127 - static int pwmcheck(struct pwm_channel *ch) 128 - { 129 - int index; 130 - 131 - if (!pwm) 132 - return -ENODEV; 133 - if (!ch) 134 - return -EINVAL; 135 - index = ch->index; 136 - if (index < 0 || index >= PWM_NCHAN || pwm->channel[index] != ch) 137 - return -EINVAL; 138 - 139 - return index; 140 - } 141 - 142 - /** 143 - * pwm_channel_free - release a previously allocated channel 144 - * @ch: the channel being released 145 - * 146 - * The channel is completely shut down (counter and IRQ disabled), 147 - * and made available for re-use. Returns zero, or negative errno. 148 - */ 149 - int pwm_channel_free(struct pwm_channel *ch) 150 - { 151 - unsigned long flags; 152 - int t; 153 - 154 - spin_lock_irqsave(&pwm->lock, flags); 155 - t = pwmcheck(ch); 156 - if (t >= 0) { 157 - pwm->channel[t] = NULL; 158 - pwm->handler[t] = NULL; 159 - 160 - /* channel and irq are always disabled when we return */ 161 - pwm_writel(pwm, PWM_DIS, 1 << t); 162 - pwm_writel(pwm, PWM_IDR, 1 << t); 163 - 164 - clk_disable(pwm->clk); 165 - t = 0; 166 - } 167 - spin_unlock_irqrestore(&pwm->lock, flags); 168 - return t; 169 - } 170 - EXPORT_SYMBOL(pwm_channel_free); 171 - 172 - int __pwm_channel_onoff(struct pwm_channel *ch, int enabled) 173 - { 174 - unsigned long flags; 175 - int t; 176 - 177 - /* OMITTED FUNCTIONALITY: starting several channels in synch */ 178 - 179 - spin_lock_irqsave(&pwm->lock, flags); 180 - t = pwmcheck(ch); 181 - if (t >= 0) { 182 - pwm_writel(pwm, enabled ? PWM_ENA : PWM_DIS, 1 << t); 183 - t = 0; 184 - pwm_dumpregs(ch, enabled ? 
"enable" : "disable"); 185 - } 186 - spin_unlock_irqrestore(&pwm->lock, flags); 187 - 188 - return t; 189 - } 190 - EXPORT_SYMBOL(__pwm_channel_onoff); 191 - 192 - /** 193 - * pwm_clk_alloc - allocate and configure CLKA or CLKB 194 - * @prescale: from 0..10, the power of two used to divide MCK 195 - * @div: from 1..255, the linear divisor to use 196 - * 197 - * Returns PWM_CPR_CLKA, PWM_CPR_CLKB, or negative errno. The allocated 198 - * clock will run with a period of (2^prescale * div) / MCK, or twice as 199 - * long if center aligned PWM output is used. The clock must later be 200 - * deconfigured using pwm_clk_free(). 201 - */ 202 - int pwm_clk_alloc(unsigned prescale, unsigned div) 203 - { 204 - unsigned long flags; 205 - u32 mr; 206 - u32 val = (prescale << 8) | div; 207 - int ret = -EBUSY; 208 - 209 - if (prescale >= 10 || div == 0 || div > 255) 210 - return -EINVAL; 211 - 212 - spin_lock_irqsave(&pwm->lock, flags); 213 - mr = pwm_readl(pwm, PWM_MR); 214 - if ((mr & 0xffff) == 0) { 215 - mr |= val; 216 - ret = PWM_CPR_CLKA; 217 - } else if ((mr & (0xffff << 16)) == 0) { 218 - mr |= val << 16; 219 - ret = PWM_CPR_CLKB; 220 - } 221 - if (ret > 0) 222 - pwm_writel(pwm, PWM_MR, mr); 223 - spin_unlock_irqrestore(&pwm->lock, flags); 224 - return ret; 225 - } 226 - EXPORT_SYMBOL(pwm_clk_alloc); 227 - 228 - /** 229 - * pwm_clk_free - deconfigure and release CLKA or CLKB 230 - * 231 - * Reverses the effect of pwm_clk_alloc(). 
232 - */ 233 - void pwm_clk_free(unsigned clk) 234 - { 235 - unsigned long flags; 236 - u32 mr; 237 - 238 - spin_lock_irqsave(&pwm->lock, flags); 239 - mr = pwm_readl(pwm, PWM_MR); 240 - if (clk == PWM_CPR_CLKA) 241 - pwm_writel(pwm, PWM_MR, mr & ~(0xffff << 0)); 242 - if (clk == PWM_CPR_CLKB) 243 - pwm_writel(pwm, PWM_MR, mr & ~(0xffff << 16)); 244 - spin_unlock_irqrestore(&pwm->lock, flags); 245 - } 246 - EXPORT_SYMBOL(pwm_clk_free); 247 - 248 - /** 249 - * pwm_channel_handler - manage channel's IRQ handler 250 - * @ch: the channel 251 - * @handler: the handler to use, possibly NULL 252 - * 253 - * If the handler is non-null, the handler will be called after every 254 - * period of this PWM channel. If the handler is null, this channel 255 - * won't generate an IRQ. 256 - */ 257 - int pwm_channel_handler(struct pwm_channel *ch, 258 - void (*handler)(struct pwm_channel *ch)) 259 - { 260 - unsigned long flags; 261 - int t; 262 - 263 - spin_lock_irqsave(&pwm->lock, flags); 264 - t = pwmcheck(ch); 265 - if (t >= 0) { 266 - pwm->handler[t] = handler; 267 - pwm_writel(pwm, handler ? 
PWM_IER : PWM_IDR, 1 << t); 268 - t = 0; 269 - } 270 - spin_unlock_irqrestore(&pwm->lock, flags); 271 - 272 - return t; 273 - } 274 - EXPORT_SYMBOL(pwm_channel_handler); 275 - 276 - static irqreturn_t pwm_irq(int id, void *_pwm) 277 - { 278 - struct pwm *p = _pwm; 279 - irqreturn_t handled = IRQ_NONE; 280 - u32 irqstat; 281 - int index; 282 - 283 - spin_lock(&p->lock); 284 - 285 - /* ack irqs, then handle them */ 286 - irqstat = pwm_readl(pwm, PWM_ISR); 287 - 288 - while (irqstat) { 289 - struct pwm_channel *ch; 290 - void (*handler)(struct pwm_channel *ch); 291 - 292 - index = ffs(irqstat) - 1; 293 - irqstat &= ~(1 << index); 294 - ch = pwm->channel[index]; 295 - handler = pwm->handler[index]; 296 - if (handler && ch) { 297 - spin_unlock(&p->lock); 298 - handler(ch); 299 - spin_lock(&p->lock); 300 - handled = IRQ_HANDLED; 301 - } 302 - } 303 - 304 - spin_unlock(&p->lock); 305 - return handled; 306 - } 307 - 308 - static int __init pwm_probe(struct platform_device *pdev) 309 - { 310 - struct resource *r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 311 - int irq = platform_get_irq(pdev, 0); 312 - u32 *mp = pdev->dev.platform_data; 313 - struct pwm *p; 314 - int status = -EIO; 315 - 316 - if (pwm) 317 - return -EBUSY; 318 - if (!r || irq < 0 || !mp || !*mp) 319 - return -ENODEV; 320 - if (*mp & ~((1<<PWM_NCHAN)-1)) { 321 - dev_warn(&pdev->dev, "mask 0x%x ... 
more than %d channels\n", 322 - *mp, PWM_NCHAN); 323 - return -EINVAL; 324 - } 325 - 326 - p = kzalloc(sizeof(*p), GFP_KERNEL); 327 - if (!p) 328 - return -ENOMEM; 329 - 330 - spin_lock_init(&p->lock); 331 - p->pdev = pdev; 332 - p->mask = *mp; 333 - p->irq = irq; 334 - p->base = ioremap(r->start, resource_size(r)); 335 - if (!p->base) 336 - goto fail; 337 - p->clk = clk_get(&pdev->dev, "pwm_clk"); 338 - if (IS_ERR(p->clk)) { 339 - status = PTR_ERR(p->clk); 340 - p->clk = NULL; 341 - goto fail; 342 - } 343 - 344 - status = request_irq(irq, pwm_irq, 0, pdev->name, p); 345 - if (status < 0) 346 - goto fail; 347 - 348 - pwm = p; 349 - platform_set_drvdata(pdev, p); 350 - 351 - return 0; 352 - 353 - fail: 354 - if (p->clk) 355 - clk_put(p->clk); 356 - if (p->base) 357 - iounmap(p->base); 358 - 359 - kfree(p); 360 - return status; 361 - } 362 - 363 - static int __exit pwm_remove(struct platform_device *pdev) 364 - { 365 - struct pwm *p = platform_get_drvdata(pdev); 366 - 367 - if (p != pwm) 368 - return -EINVAL; 369 - 370 - clk_enable(pwm->clk); 371 - pwm_writel(pwm, PWM_DIS, (1 << PWM_NCHAN) - 1); 372 - pwm_writel(pwm, PWM_IDR, (1 << PWM_NCHAN) - 1); 373 - clk_disable(pwm->clk); 374 - 375 - pwm = NULL; 376 - 377 - free_irq(p->irq, p); 378 - clk_put(p->clk); 379 - iounmap(p->base); 380 - kfree(p); 381 - 382 - return 0; 383 - } 384 - 385 - static struct platform_driver atmel_pwm_driver = { 386 - .driver = { 387 - .name = "atmel_pwm", 388 - .owner = THIS_MODULE, 389 - }, 390 - .remove = __exit_p(pwm_remove), 391 - 392 - /* NOTE: PWM can keep running in AVR32 "idle" and "frozen" states; 393 - * and all AT91sam9263 states, albeit at reduced clock rate if 394 - * MCK becomes the slow clock (i.e. what Linux labels STR). 395 - */ 396 - }; 397 - 398 - module_platform_driver_probe(atmel_pwm_driver, pwm_probe); 399 - 400 - MODULE_DESCRIPTION("Driver for AT32/AT91 PWM module"); 401 - MODULE_LICENSE("GPL"); 402 - MODULE_ALIAS("platform:atmel_pwm");
+8
drivers/pci/host/Kconfig
··· 46 46 Say Y here if you want to support a simple generic PCI host 47 47 controller, such as the one emulated by kvmtool. 48 48 49 + config PCIE_SPEAR13XX 50 + tristate "STMicroelectronics SPEAr PCIe controller" 51 + depends on ARCH_SPEAR13XX 52 + select PCIEPORTBUS 53 + select PCIE_DW 54 + help 55 + Say Y here if you want PCIe support on SPEAr13XX SoCs. 56 + 49 57 endmenu
+1
drivers/pci/host/Makefile
··· 6 6 obj-$(CONFIG_PCI_RCAR_GEN2) += pci-rcar-gen2.o 7 7 obj-$(CONFIG_PCI_RCAR_GEN2_PCIE) += pcie-rcar.o 8 8 obj-$(CONFIG_PCI_HOST_GENERIC) += pci-host-generic.o 9 + obj-$(CONFIG_PCIE_SPEAR13XX) += pcie-spear13xx.o
+161 -61
drivers/pci/host/pci-tegra.c
··· 234 234 bool has_pex_clkreq_en; 235 235 bool has_pex_bias_ctrl; 236 236 bool has_intr_prsnt_sense; 237 - bool has_avdd_supply; 238 237 bool has_cml_clk; 239 238 }; 240 239 ··· 272 273 unsigned int num_ports; 273 274 u32 xbar_config; 274 275 275 - struct regulator *pex_clk_supply; 276 - struct regulator *vdd_supply; 277 - struct regulator *avdd_supply; 276 + struct regulator_bulk_data *supplies; 277 + unsigned int num_supplies; 278 278 279 279 const struct tegra_pcie_soc_data *soc_data; 280 280 }; ··· 893 895 894 896 static void tegra_pcie_power_off(struct tegra_pcie *pcie) 895 897 { 896 - const struct tegra_pcie_soc_data *soc = pcie->soc_data; 897 898 int err; 898 899 899 900 /* TODO: disable and unprepare clocks? */ ··· 903 906 904 907 tegra_powergate_power_off(TEGRA_POWERGATE_PCIE); 905 908 906 - if (soc->has_avdd_supply) { 907 - err = regulator_disable(pcie->avdd_supply); 908 - if (err < 0) 909 - dev_warn(pcie->dev, 910 - "failed to disable AVDD regulator: %d\n", 911 - err); 912 - } 913 - 914 - err = regulator_disable(pcie->pex_clk_supply); 909 + err = regulator_bulk_disable(pcie->num_supplies, pcie->supplies); 915 910 if (err < 0) 916 - dev_warn(pcie->dev, "failed to disable pex-clk regulator: %d\n", 917 - err); 918 - 919 - err = regulator_disable(pcie->vdd_supply); 920 - if (err < 0) 921 - dev_warn(pcie->dev, "failed to disable VDD regulator: %d\n", 922 - err); 911 + dev_warn(pcie->dev, "failed to disable regulators: %d\n", err); 923 912 } 924 913 925 914 static int tegra_pcie_power_on(struct tegra_pcie *pcie) ··· 920 937 tegra_powergate_power_off(TEGRA_POWERGATE_PCIE); 921 938 922 939 /* enable regulators */ 923 - err = regulator_enable(pcie->vdd_supply); 924 - if (err < 0) { 925 - dev_err(pcie->dev, "failed to enable VDD regulator: %d\n", err); 926 - return err; 927 - } 928 - 929 - err = regulator_enable(pcie->pex_clk_supply); 930 - if (err < 0) { 931 - dev_err(pcie->dev, "failed to enable pex-clk regulator: %d\n", 932 - err); 933 - return err; 934 - } 
935 - 936 - if (soc->has_avdd_supply) { 937 - err = regulator_enable(pcie->avdd_supply); 938 - if (err < 0) { 939 - dev_err(pcie->dev, 940 - "failed to enable AVDD regulator: %d\n", 941 - err); 942 - return err; 943 - } 944 - } 940 + err = regulator_bulk_enable(pcie->num_supplies, pcie->supplies); 941 + if (err < 0) 942 + dev_err(pcie->dev, "failed to enable regulators: %d\n", err); 945 943 946 944 err = tegra_powergate_sequence_power_up(TEGRA_POWERGATE_PCIE, 947 945 pcie->pex_clk, ··· 1359 1395 return -EINVAL; 1360 1396 } 1361 1397 1398 + /* 1399 + * Check whether a given set of supplies is available in a device tree node. 1400 + * This is used to check whether the new or the legacy device tree bindings 1401 + * should be used. 1402 + */ 1403 + static bool of_regulator_bulk_available(struct device_node *np, 1404 + struct regulator_bulk_data *supplies, 1405 + unsigned int num_supplies) 1406 + { 1407 + char property[32]; 1408 + unsigned int i; 1409 + 1410 + for (i = 0; i < num_supplies; i++) { 1411 + snprintf(property, 32, "%s-supply", supplies[i].supply); 1412 + 1413 + if (of_find_property(np, property, NULL) == NULL) 1414 + return false; 1415 + } 1416 + 1417 + return true; 1418 + } 1419 + 1420 + /* 1421 + * Old versions of the device tree binding for this device used a set of power 1422 + * supplies that didn't match the hardware inputs. This happened to work for a 1423 + * number of cases but is not future proof. However to preserve backwards- 1424 + * compatibility with old device trees, this function will try to use the old 1425 + * set of supplies. 
1426 + */ 1427 + static int tegra_pcie_get_legacy_regulators(struct tegra_pcie *pcie) 1428 + { 1429 + struct device_node *np = pcie->dev->of_node; 1430 + 1431 + if (of_device_is_compatible(np, "nvidia,tegra30-pcie")) 1432 + pcie->num_supplies = 3; 1433 + else if (of_device_is_compatible(np, "nvidia,tegra20-pcie")) 1434 + pcie->num_supplies = 2; 1435 + 1436 + if (pcie->num_supplies == 0) { 1437 + dev_err(pcie->dev, "device %s not supported in legacy mode\n", 1438 + np->full_name); 1439 + return -ENODEV; 1440 + } 1441 + 1442 + pcie->supplies = devm_kcalloc(pcie->dev, pcie->num_supplies, 1443 + sizeof(*pcie->supplies), 1444 + GFP_KERNEL); 1445 + if (!pcie->supplies) 1446 + return -ENOMEM; 1447 + 1448 + pcie->supplies[0].supply = "pex-clk"; 1449 + pcie->supplies[1].supply = "vdd"; 1450 + 1451 + if (pcie->num_supplies > 2) 1452 + pcie->supplies[2].supply = "avdd"; 1453 + 1454 + return devm_regulator_bulk_get(pcie->dev, pcie->num_supplies, 1455 + pcie->supplies); 1456 + } 1457 + 1458 + /* 1459 + * Obtains the list of regulators required for a particular generation of the 1460 + * IP block. 1461 + * 1462 + * This would've been nice to do simply by providing static tables for use 1463 + * with the regulator_bulk_*() API, but unfortunately Tegra30 is a bit quirky 1464 + * in that it has two pairs of AVDD_PEX and VDD_PEX supplies (PEXA and PEXB) 1465 + * and either seems to be optional depending on which ports are being used.
1466 + */ 1467 + static int tegra_pcie_get_regulators(struct tegra_pcie *pcie, u32 lane_mask) 1468 + { 1469 + struct device_node *np = pcie->dev->of_node; 1470 + unsigned int i = 0; 1471 + 1472 + if (of_device_is_compatible(np, "nvidia,tegra30-pcie")) { 1473 + bool need_pexa = false, need_pexb = false; 1474 + 1475 + /* VDD_PEXA and AVDD_PEXA supply lanes 0 to 3 */ 1476 + if (lane_mask & 0x0f) 1477 + need_pexa = true; 1478 + 1479 + /* VDD_PEXB and AVDD_PEXB supply lanes 4 to 5 */ 1480 + if (lane_mask & 0x30) 1481 + need_pexb = true; 1482 + 1483 + pcie->num_supplies = 4 + (need_pexa ? 2 : 0) + 1484 + (need_pexb ? 2 : 0); 1485 + 1486 + pcie->supplies = devm_kcalloc(pcie->dev, pcie->num_supplies, 1487 + sizeof(*pcie->supplies), 1488 + GFP_KERNEL); 1489 + if (!pcie->supplies) 1490 + return -ENOMEM; 1491 + 1492 + pcie->supplies[i++].supply = "avdd-pex-pll"; 1493 + pcie->supplies[i++].supply = "hvdd-pex"; 1494 + pcie->supplies[i++].supply = "vddio-pex-ctl"; 1495 + pcie->supplies[i++].supply = "avdd-plle"; 1496 + 1497 + if (need_pexa) { 1498 + pcie->supplies[i++].supply = "avdd-pexa"; 1499 + pcie->supplies[i++].supply = "vdd-pexa"; 1500 + } 1501 + 1502 + if (need_pexb) { 1503 + pcie->supplies[i++].supply = "avdd-pexb"; 1504 + pcie->supplies[i++].supply = "vdd-pexb"; 1505 + } 1506 + } else if (of_device_is_compatible(np, "nvidia,tegra20-pcie")) { 1507 + pcie->num_supplies = 5; 1508 + 1509 + pcie->supplies = devm_kcalloc(pcie->dev, pcie->num_supplies, 1510 + sizeof(*pcie->supplies), 1511 + GFP_KERNEL); 1512 + if (!pcie->supplies) 1513 + return -ENOMEM; 1514 + 1515 + pcie->supplies[0].supply = "avdd-pex"; 1516 + pcie->supplies[1].supply = "vdd-pex"; 1517 + pcie->supplies[2].supply = "avdd-pex-pll"; 1518 + pcie->supplies[3].supply = "avdd-plle"; 1519 + pcie->supplies[4].supply = "vddio-pex-clk"; 1520 + } 1521 + 1522 + if (of_regulator_bulk_available(pcie->dev->of_node, pcie->supplies, 1523 + pcie->num_supplies)) 1524 + return devm_regulator_bulk_get(pcie->dev, 
						      pcie->num_supplies,
+						      pcie->supplies);
+
+	/*
+	 * If not all regulators are available for this new scheme, assume
+	 * that the device tree complies with an older version of the device
+	 * tree binding.
+	 */
+	dev_info(pcie->dev, "using legacy DT binding for power supplies\n");
+
+	devm_kfree(pcie->dev, pcie->supplies);
+	pcie->num_supplies = 0;
+
+	return tegra_pcie_get_legacy_regulators(pcie);
+}
+
 static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
 {
 	const struct tegra_pcie_soc_data *soc = pcie->soc_data;
 	struct device_node *np = pcie->dev->of_node, *port;
 	struct of_pci_range_parser parser;
 	struct of_pci_range range;
+	u32 lanes = 0, mask = 0;
+	unsigned int lane = 0;
 	struct resource res;
-	u32 lanes = 0;
 	int err;
 
 	if (of_pci_range_parser_init(&parser, np)) {
 		dev_err(pcie->dev, "missing \"ranges\" property\n");
 		return -EINVAL;
-	}
-
-	pcie->vdd_supply = devm_regulator_get(pcie->dev, "vdd");
-	if (IS_ERR(pcie->vdd_supply))
-		return PTR_ERR(pcie->vdd_supply);
-
-	pcie->pex_clk_supply = devm_regulator_get(pcie->dev, "pex-clk");
-	if (IS_ERR(pcie->pex_clk_supply))
-		return PTR_ERR(pcie->pex_clk_supply);
-
-	if (soc->has_avdd_supply) {
-		pcie->avdd_supply = devm_regulator_get(pcie->dev, "avdd");
-		if (IS_ERR(pcie->avdd_supply))
-			return PTR_ERR(pcie->avdd_supply);
 	}
 
 	for_each_of_pci_range(&parser, &range) {
···
 		lanes |= value << (index << 3);
 
-		if (!of_device_is_available(port))
+		if (!of_device_is_available(port)) {
+			lane += value;
 			continue;
+		}
+
+		mask |= ((1 << value) - 1) << lane;
+		lane += value;
 
 		rp = devm_kzalloc(pcie->dev, sizeof(*rp), GFP_KERNEL);
 		if (!rp)
···
 		dev_err(pcie->dev, "invalid lane configuration\n");
 		return err;
 	}
+
+	err = tegra_pcie_get_regulators(pcie, mask);
+	if (err < 0)
+		return err;
 
 	return 0;
 }
···
 	.has_pex_clkreq_en = false,
 	.has_pex_bias_ctrl = false,
 	.has_intr_prsnt_sense = false,
-	.has_avdd_supply = false,
 	.has_cml_clk = false,
 };
···
 	.has_pex_clkreq_en = true,
 	.has_pex_bias_ctrl = true,
 	.has_intr_prsnt_sense = true,
-	.has_avdd_supply = true,
 	.has_cml_clk = true,
 };
+393
drivers/pci/host/pcie-spear13xx.c
···

/*
 * PCIe host controller driver for ST Microelectronics SPEAr13xx SoCs
 *
 * SPEAr13xx PCIe Glue Layer Source Code
 *
 * Copyright (C) 2010-2014 ST Microelectronics
 * Pratyush Anand <pratyush.anand@st.com>
 * Mohit Kumar <mohit.kumar@st.com>
 *
 * This file is licensed under the terms of the GNU General Public
 * License version 2. This program is licensed "as is" without any
 * warranty of any kind, whether express or implied.
 */

#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/pci.h>
#include <linux/phy/phy.h>
#include <linux/platform_device.h>
#include <linux/resource.h>

#include "pcie-designware.h"

struct spear13xx_pcie {
	void __iomem		*app_base;
	struct phy		*phy;
	struct clk		*clk;
	struct pcie_port	pp;
	bool			is_gen1;
};

struct pcie_app_reg {
	u32	app_ctrl_0;		/* cr0 */
	u32	app_ctrl_1;		/* cr1 */
	u32	app_status_0;		/* cr2 */
	u32	app_status_1;		/* cr3 */
	u32	msg_status;		/* cr4 */
	u32	msg_payload;		/* cr5 */
	u32	int_sts;		/* cr6 */
	u32	int_clr;		/* cr7 */
	u32	int_mask;		/* cr8 */
	u32	mst_bmisc;		/* cr9 */
	u32	phy_ctrl;		/* cr10 */
	u32	phy_status;		/* cr11 */
	u32	cxpl_debug_info_0;	/* cr12 */
	u32	cxpl_debug_info_1;	/* cr13 */
	u32	ven_msg_ctrl_0;		/* cr14 */
	u32	ven_msg_ctrl_1;		/* cr15 */
	u32	ven_msg_data_0;		/* cr16 */
	u32	ven_msg_data_1;		/* cr17 */
	u32	ven_msi_0;		/* cr18 */
	u32	ven_msi_1;		/* cr19 */
	u32	mst_rmisc;		/* cr20 */
};

/* CR0 ID */
#define RX_LANE_FLIP_EN_ID			0
#define TX_LANE_FLIP_EN_ID			1
#define SYS_AUX_PWR_DET_ID			2
#define APP_LTSSM_ENABLE_ID			3
#define SYS_ATTEN_BUTTON_PRESSED_ID		4
#define SYS_MRL_SENSOR_STATE_ID			5
#define SYS_PWR_FAULT_DET_ID			6
#define SYS_MRL_SENSOR_CHGED_ID			7
#define SYS_PRE_DET_CHGED_ID			8
#define SYS_CMD_CPLED_INT_ID			9
#define APP_INIT_RST_0_ID			11
#define APP_REQ_ENTR_L1_ID			12
#define APP_READY_ENTR_L23_ID			13
#define APP_REQ_EXIT_L1_ID			14
#define DEVICE_TYPE_EP				(0 << 25)
#define DEVICE_TYPE_LEP				(1 << 25)
#define DEVICE_TYPE_RC				(4 << 25)
#define SYS_INT_ID				29
#define MISCTRL_EN_ID				30
#define REG_TRANSLATION_ENABLE			31

/* CR1 ID */
#define APPS_PM_XMT_TURNOFF_ID			2
#define APPS_PM_XMT_PME_ID			5

/* CR3 ID */
#define XMLH_LTSSM_STATE_DETECT_QUIET		0x00
#define XMLH_LTSSM_STATE_DETECT_ACT		0x01
#define XMLH_LTSSM_STATE_POLL_ACTIVE		0x02
#define XMLH_LTSSM_STATE_POLL_COMPLIANCE	0x03
#define XMLH_LTSSM_STATE_POLL_CONFIG		0x04
#define XMLH_LTSSM_STATE_PRE_DETECT_QUIET	0x05
#define XMLH_LTSSM_STATE_DETECT_WAIT		0x06
#define XMLH_LTSSM_STATE_CFG_LINKWD_START	0x07
#define XMLH_LTSSM_STATE_CFG_LINKWD_ACEPT	0x08
#define XMLH_LTSSM_STATE_CFG_LANENUM_WAIT	0x09
#define XMLH_LTSSM_STATE_CFG_LANENUM_ACEPT	0x0A
#define XMLH_LTSSM_STATE_CFG_COMPLETE		0x0B
#define XMLH_LTSSM_STATE_CFG_IDLE		0x0C
#define XMLH_LTSSM_STATE_RCVRY_LOCK		0x0D
#define XMLH_LTSSM_STATE_RCVRY_SPEED		0x0E
#define XMLH_LTSSM_STATE_RCVRY_RCVRCFG		0x0F
#define XMLH_LTSSM_STATE_RCVRY_IDLE		0x10
#define XMLH_LTSSM_STATE_L0			0x11
#define XMLH_LTSSM_STATE_L0S			0x12
#define XMLH_LTSSM_STATE_L123_SEND_EIDLE	0x13
#define XMLH_LTSSM_STATE_L1_IDLE		0x14
#define XMLH_LTSSM_STATE_L2_IDLE		0x15
#define XMLH_LTSSM_STATE_L2_WAKE		0x16
#define XMLH_LTSSM_STATE_DISABLED_ENTRY		0x17
#define XMLH_LTSSM_STATE_DISABLED_IDLE		0x18
#define XMLH_LTSSM_STATE_DISABLED		0x19
#define XMLH_LTSSM_STATE_LPBK_ENTRY		0x1A
#define XMLH_LTSSM_STATE_LPBK_ACTIVE		0x1B
#define XMLH_LTSSM_STATE_LPBK_EXIT		0x1C
#define XMLH_LTSSM_STATE_LPBK_EXIT_TIMEOUT	0x1D
#define XMLH_LTSSM_STATE_HOT_RESET_ENTRY	0x1E
#define XMLH_LTSSM_STATE_HOT_RESET		0x1F
#define XMLH_LTSSM_STATE_MASK			0x3F
#define XMLH_LINK_UP				(1 << 6)

/* CR4 ID */
#define CFG_MSI_EN_ID				18

/* CR6 */
#define INTA_CTRL_INT				(1 << 7)
#define INTB_CTRL_INT				(1 << 8)
#define INTC_CTRL_INT				(1 << 9)
#define INTD_CTRL_INT				(1 << 10)
#define MSI_CTRL_INT				(1 << 26)

/* CR19 ID */
#define VEN_MSI_REQ_ID				11
#define VEN_MSI_FUN_NUM_ID			8
#define VEN_MSI_TC_ID				5
#define VEN_MSI_VECTOR_ID			0
#define VEN_MSI_REQ_EN		((u32)0x1 << VEN_MSI_REQ_ID)
#define VEN_MSI_FUN_NUM_MASK	((u32)0x7 << VEN_MSI_FUN_NUM_ID)
#define VEN_MSI_TC_MASK		((u32)0x7 << VEN_MSI_TC_ID)
#define VEN_MSI_VECTOR_MASK	((u32)0x1F << VEN_MSI_VECTOR_ID)

#define EXP_CAP_ID_OFFSET			0x70

#define to_spear13xx_pcie(x)	container_of(x, struct spear13xx_pcie, pp)

static int spear13xx_pcie_establish_link(struct pcie_port *pp)
{
	u32 val;
	int count = 0;
	struct spear13xx_pcie *spear13xx_pcie = to_spear13xx_pcie(pp);
	struct pcie_app_reg *app_reg = spear13xx_pcie->app_base;
	u32 exp_cap_off = EXP_CAP_ID_OFFSET;

	if (dw_pcie_link_up(pp)) {
		dev_err(pp->dev, "link already up\n");
		return 0;
	}

	dw_pcie_setup_rc(pp);

	/*
	 * this controller support only 128 bytes read size, however its
	 * default value in capability register is 512 bytes. So force
	 * it to 128 here.
	 */
	dw_pcie_cfg_read(pp->dbi_base, exp_cap_off + PCI_EXP_DEVCTL, 4, &val);
	val &= ~PCI_EXP_DEVCTL_READRQ;
	dw_pcie_cfg_write(pp->dbi_base, exp_cap_off + PCI_EXP_DEVCTL, 4, val);

	dw_pcie_cfg_write(pp->dbi_base, PCI_VENDOR_ID, 2, 0x104A);
	dw_pcie_cfg_write(pp->dbi_base, PCI_DEVICE_ID, 2, 0xCD80);

	/*
	 * if is_gen1 is set then handle it, so that some buggy card
	 * also works
	 */
	if (spear13xx_pcie->is_gen1) {
		dw_pcie_cfg_read(pp->dbi_base, exp_cap_off + PCI_EXP_LNKCAP, 4,
				 &val);
		if ((val & PCI_EXP_LNKCAP_SLS) != PCI_EXP_LNKCAP_SLS_2_5GB) {
			val &= ~((u32)PCI_EXP_LNKCAP_SLS);
			val |= PCI_EXP_LNKCAP_SLS_2_5GB;
			dw_pcie_cfg_write(pp->dbi_base, exp_cap_off +
					  PCI_EXP_LNKCAP, 4, val);
		}

		dw_pcie_cfg_read(pp->dbi_base, exp_cap_off + PCI_EXP_LNKCTL2, 4,
				 &val);
		if ((val & PCI_EXP_LNKCAP_SLS) != PCI_EXP_LNKCAP_SLS_2_5GB) {
			val &= ~((u32)PCI_EXP_LNKCAP_SLS);
			val |= PCI_EXP_LNKCAP_SLS_2_5GB;
			dw_pcie_cfg_write(pp->dbi_base, exp_cap_off +
					  PCI_EXP_LNKCTL2, 4, val);
		}
	}

	/* enable ltssm */
	writel(DEVICE_TYPE_RC | (1 << MISCTRL_EN_ID)
			| (1 << APP_LTSSM_ENABLE_ID)
			| ((u32)1 << REG_TRANSLATION_ENABLE),
			&app_reg->app_ctrl_0);

	/* check if the link is up or not */
	while (!dw_pcie_link_up(pp)) {
		mdelay(100);
		count++;
		if (count == 10) {
			dev_err(pp->dev, "link Fail\n");
			return -EINVAL;
		}
	}
	dev_info(pp->dev, "link up\n");

	return 0;
}

static irqreturn_t spear13xx_pcie_irq_handler(int irq, void *arg)
{
	struct pcie_port *pp = arg;
	struct spear13xx_pcie *spear13xx_pcie = to_spear13xx_pcie(pp);
	struct pcie_app_reg *app_reg = spear13xx_pcie->app_base;
	unsigned int status;

	status = readl(&app_reg->int_sts);

	if (status & MSI_CTRL_INT) {
		if (!IS_ENABLED(CONFIG_PCI_MSI))
			BUG();
		dw_handle_msi_irq(pp);
	}

	writel(status, &app_reg->int_clr);

	return IRQ_HANDLED;
}

static void spear13xx_pcie_enable_interrupts(struct pcie_port *pp)
{
	struct spear13xx_pcie *spear13xx_pcie = to_spear13xx_pcie(pp);
	struct pcie_app_reg *app_reg = spear13xx_pcie->app_base;

	/* Enable MSI interrupt */
	if (IS_ENABLED(CONFIG_PCI_MSI)) {
		dw_pcie_msi_init(pp);
		writel(readl(&app_reg->int_mask) |
				MSI_CTRL_INT, &app_reg->int_mask);
	}
}

static int spear13xx_pcie_link_up(struct pcie_port *pp)
{
	struct spear13xx_pcie *spear13xx_pcie = to_spear13xx_pcie(pp);
	struct pcie_app_reg *app_reg = spear13xx_pcie->app_base;

	if (readl(&app_reg->app_status_1) & XMLH_LINK_UP)
		return 1;

	return 0;
}

static void spear13xx_pcie_host_init(struct pcie_port *pp)
{
	spear13xx_pcie_establish_link(pp);
	spear13xx_pcie_enable_interrupts(pp);
}

static struct pcie_host_ops spear13xx_pcie_host_ops = {
	.link_up = spear13xx_pcie_link_up,
	.host_init = spear13xx_pcie_host_init,
};

static int add_pcie_port(struct pcie_port *pp, struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	int ret;

	pp->irq = platform_get_irq(pdev, 0);
	if (!pp->irq) {
		dev_err(dev, "failed to get irq\n");
		return -ENODEV;
	}
	ret = devm_request_irq(dev, pp->irq, spear13xx_pcie_irq_handler,
			       IRQF_SHARED, "spear1340-pcie", pp);
	if (ret) {
		dev_err(dev, "failed to request irq %d\n", pp->irq);
		return ret;
	}

	pp->root_bus_nr = -1;
	pp->ops = &spear13xx_pcie_host_ops;

	ret = dw_pcie_host_init(pp);
	if (ret) {
		dev_err(dev, "failed to initialize host\n");
		return ret;
	}

	return 0;
}

static int __init spear13xx_pcie_probe(struct platform_device *pdev)
{
	struct spear13xx_pcie *spear13xx_pcie;
	struct pcie_port *pp;
	struct device *dev = &pdev->dev;
	struct device_node *np = pdev->dev.of_node;
	struct resource *dbi_base;
	int ret;

	spear13xx_pcie = devm_kzalloc(dev, sizeof(*spear13xx_pcie), GFP_KERNEL);
	if (!spear13xx_pcie) {
		dev_err(dev, "no memory for SPEAr13xx pcie\n");
		return -ENOMEM;
	}

	spear13xx_pcie->phy = devm_phy_get(dev, "pcie-phy");
	if (IS_ERR(spear13xx_pcie->phy)) {
		ret = PTR_ERR(spear13xx_pcie->phy);
		if (ret == -EPROBE_DEFER)
			dev_info(dev, "probe deferred\n");
		else
			dev_err(dev, "couldn't get pcie-phy\n");
		return ret;
	}

	phy_init(spear13xx_pcie->phy);

	spear13xx_pcie->clk = devm_clk_get(dev, NULL);
	if (IS_ERR(spear13xx_pcie->clk)) {
		dev_err(dev, "couldn't get clk for pcie\n");
		return PTR_ERR(spear13xx_pcie->clk);
	}
	ret = clk_prepare_enable(spear13xx_pcie->clk);
	if (ret) {
		dev_err(dev, "couldn't enable clk for pcie\n");
		return ret;
	}

	pp = &spear13xx_pcie->pp;

	pp->dev = dev;

	dbi_base = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	pp->dbi_base = devm_ioremap_resource(dev, dbi_base);
	if (IS_ERR(pp->dbi_base)) {
		dev_err(dev, "couldn't remap dbi base %p\n", dbi_base);
		ret = PTR_ERR(pp->dbi_base);
		goto fail_clk;
	}
	spear13xx_pcie->app_base = pp->dbi_base + 0x2000;

	if (of_property_read_bool(np, "st,pcie-is-gen1"))
		spear13xx_pcie->is_gen1 = true;

	ret = add_pcie_port(pp, pdev);
	if (ret < 0)
		goto fail_clk;

	platform_set_drvdata(pdev, spear13xx_pcie);
	return 0;

fail_clk:
	clk_disable_unprepare(spear13xx_pcie->clk);

	return ret;
}

static const struct of_device_id spear13xx_pcie_of_match[] = {
	{ .compatible = "st,spear1340-pcie", },
	{},
};
MODULE_DEVICE_TABLE(of, spear13xx_pcie_of_match);

static struct platform_driver spear13xx_pcie_driver __initdata = {
	.probe		= spear13xx_pcie_probe,
	.driver = {
		.name	= "spear-pcie",
		.owner	= THIS_MODULE,
		.of_match_table = of_match_ptr(spear13xx_pcie_of_match),
	},
};

/* SPEAr13xx PCIe driver does not allow module unload */

static int __init pcie_init(void)
{
	return platform_driver_register(&spear13xx_pcie_driver);
}
module_init(pcie_init);

MODULE_DESCRIPTION("ST Microelectronics SPEAr13xx PCIe host controller driver");
MODULE_AUTHOR("Pratyush Anand <pratyush.anand@st.com>");
MODULE_LICENSE("GPL v2");
+19 -7
drivers/phy/Kconfig
···
 	  This driver provides PHY interface for USB 3.0 DRD controller
 	  present on Exynos5 SoC series.
 
-config PHY_XGENE
-	tristate "APM X-Gene 15Gbps PHY support"
-	depends on HAS_IOMEM && OF && (ARM64 || COMPILE_TEST)
-	select GENERIC_PHY
-	help
-	  This option enables support for APM X-Gene SoC multi-purpose PHY.
-
 config PHY_QCOM_APQ8064_SATA
 	tristate "Qualcomm APQ8064 SATA SerDes/PHY driver"
 	depends on ARCH_QCOM
···
 	depends on HAS_IOMEM
 	depends on OF
 	select GENERIC_PHY
+
+config PHY_ST_SPEAR1310_MIPHY
+	tristate "ST SPEAR1310-MIPHY driver"
+	select GENERIC_PHY
+	help
+	  Support for ST SPEAr1310 MIPHY which can be used for PCIe and SATA.
+
+config PHY_ST_SPEAR1340_MIPHY
+	tristate "ST SPEAR1340-MIPHY driver"
+	select GENERIC_PHY
+	help
+	  Support for ST SPEAr1340 MIPHY which can be used for PCIe and SATA.
+
+config PHY_XGENE
+	tristate "APM X-Gene 15Gbps PHY support"
+	depends on HAS_IOMEM && OF && (ARM64 || COMPILE_TEST)
+	select GENERIC_PHY
+	help
+	  This option enables support for APM X-Gene SoC multi-purpose PHY.
 
 endmenu
+3 -1
drivers/phy/Makefile
···
 phy-exynos-usb2-$(CONFIG_PHY_EXYNOS5250_USB2)	+= phy-exynos5250-usb2.o
 phy-exynos-usb2-$(CONFIG_PHY_S5PV210_USB2)	+= phy-s5pv210-usb2.o
 obj-$(CONFIG_PHY_EXYNOS5_USBDRD)	+= phy-exynos5-usbdrd.o
-obj-$(CONFIG_PHY_XGENE)			+= phy-xgene.o
 obj-$(CONFIG_PHY_QCOM_APQ8064_SATA)	+= phy-qcom-apq8064-sata.o
 obj-$(CONFIG_PHY_QCOM_IPQ806X_SATA)	+= phy-qcom-ipq806x-sata.o
+obj-$(CONFIG_PHY_ST_SPEAR1310_MIPHY)	+= phy-spear1310-miphy.o
+obj-$(CONFIG_PHY_ST_SPEAR1340_MIPHY)	+= phy-spear1340-miphy.o
+obj-$(CONFIG_PHY_XGENE)			+= phy-xgene.o
+274
drivers/phy/phy-spear1310-miphy.c
···

/*
 * ST SPEAR1310-miphy driver
 *
 * Copyright (C) 2014 ST Microelectronics
 * Pratyush Anand <pratyush.anand@st.com>
 * Mohit Kumar <mohit.kumar@st.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 */

#include <linux/bitops.h>
#include <linux/delay.h>
#include <linux/dma-mapping.h>
#include <linux/kernel.h>
#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/phy/phy.h>
#include <linux/regmap.h>

/* SPEAr1310 Registers */
#define SPEAR1310_PCIE_SATA_CFG			0x3A4
#define SPEAR1310_PCIE_SATA2_SEL_PCIE		(0 << 31)
#define SPEAR1310_PCIE_SATA1_SEL_PCIE		(0 << 30)
#define SPEAR1310_PCIE_SATA0_SEL_PCIE		(0 << 29)
#define SPEAR1310_PCIE_SATA2_SEL_SATA		BIT(31)
#define SPEAR1310_PCIE_SATA1_SEL_SATA		BIT(30)
#define SPEAR1310_PCIE_SATA0_SEL_SATA		BIT(29)
#define SPEAR1310_SATA2_CFG_TX_CLK_EN		BIT(27)
#define SPEAR1310_SATA2_CFG_RX_CLK_EN		BIT(26)
#define SPEAR1310_SATA2_CFG_POWERUP_RESET	BIT(25)
#define SPEAR1310_SATA2_CFG_PM_CLK_EN		BIT(24)
#define SPEAR1310_SATA1_CFG_TX_CLK_EN		BIT(23)
#define SPEAR1310_SATA1_CFG_RX_CLK_EN		BIT(22)
#define SPEAR1310_SATA1_CFG_POWERUP_RESET	BIT(21)
#define SPEAR1310_SATA1_CFG_PM_CLK_EN		BIT(20)
#define SPEAR1310_SATA0_CFG_TX_CLK_EN		BIT(19)
#define SPEAR1310_SATA0_CFG_RX_CLK_EN		BIT(18)
#define SPEAR1310_SATA0_CFG_POWERUP_RESET	BIT(17)
#define SPEAR1310_SATA0_CFG_PM_CLK_EN		BIT(16)
#define SPEAR1310_PCIE2_CFG_DEVICE_PRESENT	BIT(11)
#define SPEAR1310_PCIE2_CFG_POWERUP_RESET	BIT(10)
#define SPEAR1310_PCIE2_CFG_CORE_CLK_EN		BIT(9)
#define SPEAR1310_PCIE2_CFG_AUX_CLK_EN		BIT(8)
#define SPEAR1310_PCIE1_CFG_DEVICE_PRESENT	BIT(7)
#define SPEAR1310_PCIE1_CFG_POWERUP_RESET	BIT(6)
#define SPEAR1310_PCIE1_CFG_CORE_CLK_EN		BIT(5)
#define SPEAR1310_PCIE1_CFG_AUX_CLK_EN		BIT(4)
#define SPEAR1310_PCIE0_CFG_DEVICE_PRESENT	BIT(3)
#define SPEAR1310_PCIE0_CFG_POWERUP_RESET	BIT(2)
#define SPEAR1310_PCIE0_CFG_CORE_CLK_EN		BIT(1)
#define SPEAR1310_PCIE0_CFG_AUX_CLK_EN		BIT(0)

#define SPEAR1310_PCIE_CFG_MASK(x)	((0xF << (x * 4)) | BIT((x + 29)))
#define SPEAR1310_SATA_CFG_MASK(x)	((0xF << (x * 4 + 16)) | \
					 BIT((x + 29)))
#define SPEAR1310_PCIE_CFG_VAL(x) \
			(SPEAR1310_PCIE_SATA##x##_SEL_PCIE | \
			SPEAR1310_PCIE##x##_CFG_AUX_CLK_EN | \
			SPEAR1310_PCIE##x##_CFG_CORE_CLK_EN | \
			SPEAR1310_PCIE##x##_CFG_POWERUP_RESET | \
			SPEAR1310_PCIE##x##_CFG_DEVICE_PRESENT)
#define SPEAR1310_SATA_CFG_VAL(x) \
			(SPEAR1310_PCIE_SATA##x##_SEL_SATA | \
			SPEAR1310_SATA##x##_CFG_PM_CLK_EN | \
			SPEAR1310_SATA##x##_CFG_POWERUP_RESET | \
			SPEAR1310_SATA##x##_CFG_RX_CLK_EN | \
			SPEAR1310_SATA##x##_CFG_TX_CLK_EN)

#define SPEAR1310_PCIE_MIPHY_CFG_1		0x3A8
#define SPEAR1310_MIPHY_DUAL_OSC_BYPASS_EXT	BIT(31)
#define SPEAR1310_MIPHY_DUAL_CLK_REF_DIV2	BIT(28)
#define SPEAR1310_MIPHY_DUAL_PLL_RATIO_TOP(x)	(x << 16)
#define SPEAR1310_MIPHY_SINGLE_OSC_BYPASS_EXT	BIT(15)
#define SPEAR1310_MIPHY_SINGLE_CLK_REF_DIV2	BIT(12)
#define SPEAR1310_MIPHY_SINGLE_PLL_RATIO_TOP(x)	(x << 0)
#define SPEAR1310_PCIE_SATA_MIPHY_CFG_SATA_MASK	(0xFFFF)
#define SPEAR1310_PCIE_SATA_MIPHY_CFG_PCIE_MASK	(0xFFFF << 16)
#define SPEAR1310_PCIE_SATA_MIPHY_CFG_SATA \
			(SPEAR1310_MIPHY_DUAL_OSC_BYPASS_EXT | \
			SPEAR1310_MIPHY_DUAL_CLK_REF_DIV2 | \
			SPEAR1310_MIPHY_DUAL_PLL_RATIO_TOP(60) | \
			SPEAR1310_MIPHY_SINGLE_OSC_BYPASS_EXT | \
			SPEAR1310_MIPHY_SINGLE_CLK_REF_DIV2 | \
			SPEAR1310_MIPHY_SINGLE_PLL_RATIO_TOP(60))
#define SPEAR1310_PCIE_SATA_MIPHY_CFG_SATA_25M_CRYSTAL_CLK \
			(SPEAR1310_MIPHY_SINGLE_PLL_RATIO_TOP(120))
#define SPEAR1310_PCIE_SATA_MIPHY_CFG_PCIE \
			(SPEAR1310_MIPHY_DUAL_OSC_BYPASS_EXT | \
			SPEAR1310_MIPHY_DUAL_PLL_RATIO_TOP(25) | \
			SPEAR1310_MIPHY_SINGLE_OSC_BYPASS_EXT | \
			SPEAR1310_MIPHY_SINGLE_PLL_RATIO_TOP(25))

#define SPEAR1310_PCIE_MIPHY_CFG_2		0x3AC

enum spear1310_miphy_mode {
	SATA,
	PCIE,
};

struct spear1310_miphy_priv {
	/* instance id of this phy */
	u32				id;
	/* phy mode: 0 for SATA 1 for PCIe */
	enum spear1310_miphy_mode	mode;
	/* regmap for any soc specific misc registers */
	struct regmap			*misc;
	/* phy struct pointer */
	struct phy			*phy;
};

static int spear1310_miphy_pcie_init(struct spear1310_miphy_priv *priv)
{
	u32 val;

	regmap_update_bits(priv->misc, SPEAR1310_PCIE_MIPHY_CFG_1,
			   SPEAR1310_PCIE_SATA_MIPHY_CFG_PCIE_MASK,
			   SPEAR1310_PCIE_SATA_MIPHY_CFG_PCIE);

	switch (priv->id) {
	case 0:
		val = SPEAR1310_PCIE_CFG_VAL(0);
		break;
	case 1:
		val = SPEAR1310_PCIE_CFG_VAL(1);
		break;
	case 2:
		val = SPEAR1310_PCIE_CFG_VAL(2);
		break;
	default:
		return -EINVAL;
	}

	regmap_update_bits(priv->misc, SPEAR1310_PCIE_SATA_CFG,
			   SPEAR1310_PCIE_CFG_MASK(priv->id), val);

	return 0;
}

static int spear1310_miphy_pcie_exit(struct spear1310_miphy_priv *priv)
{
	regmap_update_bits(priv->misc, SPEAR1310_PCIE_SATA_CFG,
			   SPEAR1310_PCIE_CFG_MASK(priv->id), 0);

	regmap_update_bits(priv->misc, SPEAR1310_PCIE_MIPHY_CFG_1,
			   SPEAR1310_PCIE_SATA_MIPHY_CFG_PCIE_MASK, 0);

	return 0;
}

static int spear1310_miphy_init(struct phy *phy)
{
	struct spear1310_miphy_priv *priv = phy_get_drvdata(phy);
	int ret = 0;

	if (priv->mode == PCIE)
		ret = spear1310_miphy_pcie_init(priv);

	return ret;
}

static int spear1310_miphy_exit(struct phy *phy)
{
	struct spear1310_miphy_priv *priv = phy_get_drvdata(phy);
	int ret = 0;

	if (priv->mode == PCIE)
		ret = spear1310_miphy_pcie_exit(priv);

	return ret;
}

static const struct of_device_id spear1310_miphy_of_match[] = {
	{ .compatible = "st,spear1310-miphy" },
	{ },
};
MODULE_DEVICE_TABLE(of, spear1310_miphy_of_match);

static struct phy_ops spear1310_miphy_ops = {
	.init = spear1310_miphy_init,
	.exit = spear1310_miphy_exit,
	.owner = THIS_MODULE,
};

static struct phy *spear1310_miphy_xlate(struct device *dev,
					 struct of_phandle_args *args)
{
	struct spear1310_miphy_priv *priv = dev_get_drvdata(dev);

	if (args->args_count < 1) {
		dev_err(dev, "DT did not pass correct no of args\n");
		return NULL;
	}

	priv->mode = args->args[0];

	if (priv->mode != SATA && priv->mode != PCIE) {
		dev_err(dev, "DT did not pass correct phy mode\n");
		return NULL;
	}

	return priv->phy;
}

static int spear1310_miphy_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct spear1310_miphy_priv *priv;
	struct phy_provider *phy_provider;

	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
	if (!priv) {
		dev_err(dev, "can't alloc spear1310_miphy private date memory\n");
		return -ENOMEM;
	}

	priv->misc =
		syscon_regmap_lookup_by_phandle(dev->of_node, "misc");
	if (IS_ERR(priv->misc)) {
		dev_err(dev, "failed to find misc regmap\n");
		return PTR_ERR(priv->misc);
	}

	if (of_property_read_u32(dev->of_node, "phy-id", &priv->id)) {
		dev_err(dev, "failed to find phy id\n");
		return -EINVAL;
	}

	priv->phy = devm_phy_create(dev, NULL, &spear1310_miphy_ops, NULL);
	if (IS_ERR(priv->phy)) {
		dev_err(dev, "failed to create SATA PCIe PHY\n");
		return PTR_ERR(priv->phy);
	}

	dev_set_drvdata(dev, priv);
	phy_set_drvdata(priv->phy, priv);

	phy_provider =
		devm_of_phy_provider_register(dev, spear1310_miphy_xlate);
	if (IS_ERR(phy_provider)) {
		dev_err(dev, "failed to register phy provider\n");
		return PTR_ERR(phy_provider);
	}

	return 0;
}

static struct platform_driver spear1310_miphy_driver = {
	.probe = spear1310_miphy_probe,
	.driver = {
		.name = "spear1310-miphy",
		.owner = THIS_MODULE,
		.of_match_table = of_match_ptr(spear1310_miphy_of_match),
	},
};

static int __init spear1310_miphy_phy_init(void)
{
	return platform_driver_register(&spear1310_miphy_driver);
}
module_init(spear1310_miphy_phy_init);

static void __exit spear1310_miphy_phy_exit(void)
{
	platform_driver_unregister(&spear1310_miphy_driver);
}
module_exit(spear1310_miphy_phy_exit);

MODULE_DESCRIPTION("ST SPEAR1310-MIPHY driver");
MODULE_AUTHOR("Pratyush Anand <pratyush.anand@st.com>");
MODULE_LICENSE("GPL v2");
+307
drivers/phy/phy-spear1340-miphy.c
··· 1 + /* 2 + * ST spear1340-miphy driver 3 + * 4 + * Copyright (C) 2014 ST Microelectronics 5 + * Pratyush Anand <pratyush.anand@st.com> 6 + * Mohit Kumar <mohit.kumar@st.com> 7 + * 8 + * This program is free software; you can redistribute it and/or modify 9 + * it under the terms of the GNU General Public License version 2 as 10 + * published by the Free Software Foundation. 11 + * 12 + */ 13 + 14 + #include <linux/bitops.h> 15 + #include <linux/delay.h> 16 + #include <linux/dma-mapping.h> 17 + #include <linux/kernel.h> 18 + #include <linux/mfd/syscon.h> 19 + #include <linux/module.h> 20 + #include <linux/of_device.h> 21 + #include <linux/phy/phy.h> 22 + #include <linux/regmap.h> 23 + 24 + /* SPEAr1340 Registers */ 25 + /* Power Management Registers */ 26 + #define SPEAR1340_PCM_CFG 0x100 27 + #define SPEAR1340_PCM_CFG_SATA_POWER_EN BIT(11) 28 + #define SPEAR1340_PCM_WKUP_CFG 0x104 29 + #define SPEAR1340_SWITCH_CTR 0x108 30 + 31 + #define SPEAR1340_PERIP1_SW_RST 0x318 32 + #define SPEAR1340_PERIP1_SW_RSATA BIT(12) 33 + #define SPEAR1340_PERIP2_SW_RST 0x31C 34 + #define SPEAR1340_PERIP3_SW_RST 0x320 35 + 36 + /* PCIE - SATA configuration registers */ 37 + #define SPEAR1340_PCIE_SATA_CFG 0x424 38 + /* PCIE CFG MASks */ 39 + #define SPEAR1340_PCIE_CFG_DEVICE_PRESENT BIT(11) 40 + #define SPEAR1340_PCIE_CFG_POWERUP_RESET BIT(10) 41 + #define SPEAR1340_PCIE_CFG_CORE_CLK_EN BIT(9) 42 + #define SPEAR1340_PCIE_CFG_AUX_CLK_EN BIT(8) 43 + #define SPEAR1340_SATA_CFG_TX_CLK_EN BIT(4) 44 + #define SPEAR1340_SATA_CFG_RX_CLK_EN BIT(3) 45 + #define SPEAR1340_SATA_CFG_POWERUP_RESET BIT(2) 46 + #define SPEAR1340_SATA_CFG_PM_CLK_EN BIT(1) 47 + #define SPEAR1340_PCIE_SATA_SEL_PCIE (0) 48 + #define SPEAR1340_PCIE_SATA_SEL_SATA (1) 49 + #define SPEAR1340_PCIE_SATA_CFG_MASK 0xF1F 50 + #define SPEAR1340_PCIE_CFG_VAL (SPEAR1340_PCIE_SATA_SEL_PCIE | \ 51 + SPEAR1340_PCIE_CFG_AUX_CLK_EN | \ 52 + SPEAR1340_PCIE_CFG_CORE_CLK_EN | \ 53 + SPEAR1340_PCIE_CFG_POWERUP_RESET | \ 54 + 
SPEAR1340_PCIE_CFG_DEVICE_PRESENT) 55 + #define SPEAR1340_SATA_CFG_VAL (SPEAR1340_PCIE_SATA_SEL_SATA | \ 56 + SPEAR1340_SATA_CFG_PM_CLK_EN | \ 57 + SPEAR1340_SATA_CFG_POWERUP_RESET | \ 58 + SPEAR1340_SATA_CFG_RX_CLK_EN | \ 59 + SPEAR1340_SATA_CFG_TX_CLK_EN) 60 + 61 + #define SPEAR1340_PCIE_MIPHY_CFG 0x428 62 + #define SPEAR1340_MIPHY_OSC_BYPASS_EXT BIT(31) 63 + #define SPEAR1340_MIPHY_CLK_REF_DIV2 BIT(27) 64 + #define SPEAR1340_MIPHY_CLK_REF_DIV4 (2 << 27) 65 + #define SPEAR1340_MIPHY_CLK_REF_DIV8 (3 << 27) 66 + #define SPEAR1340_MIPHY_PLL_RATIO_TOP(x) (x << 0) 67 + #define SPEAR1340_PCIE_MIPHY_CFG_MASK 0xF80000FF 68 + #define SPEAR1340_PCIE_SATA_MIPHY_CFG_SATA \ 69 + (SPEAR1340_MIPHY_OSC_BYPASS_EXT | \ 70 + SPEAR1340_MIPHY_CLK_REF_DIV2 | \ 71 + SPEAR1340_MIPHY_PLL_RATIO_TOP(60)) 72 + #define SPEAR1340_PCIE_SATA_MIPHY_CFG_SATA_25M_CRYSTAL_CLK \ 73 + (SPEAR1340_MIPHY_PLL_RATIO_TOP(120)) 74 + #define SPEAR1340_PCIE_SATA_MIPHY_CFG_PCIE \ 75 + (SPEAR1340_MIPHY_OSC_BYPASS_EXT | \ 76 + SPEAR1340_MIPHY_PLL_RATIO_TOP(25)) 77 + 78 + enum spear1340_miphy_mode { 79 + SATA, 80 + PCIE, 81 + }; 82 + 83 + struct spear1340_miphy_priv { 84 + /* phy mode: 0 for SATA 1 for PCIe */ 85 + enum spear1340_miphy_mode mode; 86 + /* regmap for any soc specific misc registers */ 87 + struct regmap *misc; 88 + /* phy struct pointer */ 89 + struct phy *phy; 90 + }; 91 + 92 + static int spear1340_miphy_sata_init(struct spear1340_miphy_priv *priv) 93 + { 94 + regmap_update_bits(priv->misc, SPEAR1340_PCIE_SATA_CFG, 95 + SPEAR1340_PCIE_SATA_CFG_MASK, 96 + SPEAR1340_SATA_CFG_VAL); 97 + regmap_update_bits(priv->misc, SPEAR1340_PCIE_MIPHY_CFG, 98 + SPEAR1340_PCIE_MIPHY_CFG_MASK, 99 + SPEAR1340_PCIE_SATA_MIPHY_CFG_SATA_25M_CRYSTAL_CLK); 100 + /* Switch on sata power domain */ 101 + regmap_update_bits(priv->misc, SPEAR1340_PCM_CFG, 102 + SPEAR1340_PCM_CFG_SATA_POWER_EN, 103 + SPEAR1340_PCM_CFG_SATA_POWER_EN); 104 + /* Wait for SATA power domain on */ 105 + msleep(20); 106 + 107 + /* Disable PCIE SATA 
Controller reset */ 108 + regmap_update_bits(priv->misc, SPEAR1340_PERIP1_SW_RST, 109 + SPEAR1340_PERIP1_SW_RSATA, 0); 110 + /* Wait for SATA reset de-assert completion */ 111 + msleep(20); 112 + 113 + return 0; 114 + } 115 + 116 + static int spear1340_miphy_sata_exit(struct spear1340_miphy_priv *priv) 117 + { 118 + regmap_update_bits(priv->misc, SPEAR1340_PCIE_SATA_CFG, 119 + SPEAR1340_PCIE_SATA_CFG_MASK, 0); 120 + regmap_update_bits(priv->misc, SPEAR1340_PCIE_MIPHY_CFG, 121 + SPEAR1340_PCIE_MIPHY_CFG_MASK, 0); 122 + 123 + /* Enable PCIE SATA Controller reset */ 124 + regmap_update_bits(priv->misc, SPEAR1340_PERIP1_SW_RST, 125 + SPEAR1340_PERIP1_SW_RSATA, 126 + SPEAR1340_PERIP1_SW_RSATA); 127 + /* Wait for SATA power domain off */ 128 + msleep(20); 129 + /* Switch off sata power domain */ 130 + regmap_update_bits(priv->misc, SPEAR1340_PCM_CFG, 131 + SPEAR1340_PCM_CFG_SATA_POWER_EN, 0); 132 + /* Wait for SATA reset assert completion */ 133 + msleep(20); 134 + 135 + return 0; 136 + } 137 + 138 + static int spear1340_miphy_pcie_init(struct spear1340_miphy_priv *priv) 139 + { 140 + regmap_update_bits(priv->misc, SPEAR1340_PCIE_MIPHY_CFG, 141 + SPEAR1340_PCIE_MIPHY_CFG_MASK, 142 + SPEAR1340_PCIE_SATA_MIPHY_CFG_PCIE); 143 + regmap_update_bits(priv->misc, SPEAR1340_PCIE_SATA_CFG, 144 + SPEAR1340_PCIE_SATA_CFG_MASK, 145 + SPEAR1340_PCIE_CFG_VAL); 146 + 147 + return 0; 148 + } 149 + 150 + static int spear1340_miphy_pcie_exit(struct spear1340_miphy_priv *priv) 151 + { 152 + regmap_update_bits(priv->misc, SPEAR1340_PCIE_MIPHY_CFG, 153 + SPEAR1340_PCIE_MIPHY_CFG_MASK, 0); 154 + regmap_update_bits(priv->misc, SPEAR1340_PCIE_SATA_CFG, 155 + SPEAR1340_PCIE_SATA_CFG_MASK, 0); 156 + 157 + return 0; 158 + } 159 + 160 + static int spear1340_miphy_init(struct phy *phy) 161 + { 162 + struct spear1340_miphy_priv *priv = phy_get_drvdata(phy); 163 + int ret = 0; 164 + 165 + if (priv->mode == SATA) 166 + ret = spear1340_miphy_sata_init(priv); 167 + else if (priv->mode == PCIE) 168 + ret = 
+		spear1340_miphy_pcie_init(priv);
+
+	return ret;
+}
+
+static int spear1340_miphy_exit(struct phy *phy)
+{
+	struct spear1340_miphy_priv *priv = phy_get_drvdata(phy);
+	int ret = 0;
+
+	if (priv->mode == SATA)
+		ret = spear1340_miphy_sata_exit(priv);
+	else if (priv->mode == PCIE)
+		ret = spear1340_miphy_pcie_exit(priv);
+
+	return ret;
+}
+
+static const struct of_device_id spear1340_miphy_of_match[] = {
+	{ .compatible = "st,spear1340-miphy" },
+	{ },
+};
+MODULE_DEVICE_TABLE(of, spear1340_miphy_of_match);
+
+static struct phy_ops spear1340_miphy_ops = {
+	.init = spear1340_miphy_init,
+	.exit = spear1340_miphy_exit,
+	.owner = THIS_MODULE,
+};
+
+#ifdef CONFIG_PM_SLEEP
+static int spear1340_miphy_suspend(struct device *dev)
+{
+	struct spear1340_miphy_priv *priv = dev_get_drvdata(dev);
+	int ret = 0;
+
+	if (priv->mode == SATA)
+		ret = spear1340_miphy_sata_exit(priv);
+
+	return ret;
+}
+
+static int spear1340_miphy_resume(struct device *dev)
+{
+	struct spear1340_miphy_priv *priv = dev_get_drvdata(dev);
+	int ret = 0;
+
+	if (priv->mode == SATA)
+		ret = spear1340_miphy_sata_init(priv);
+
+	return ret;
+}
+#endif
+
+static SIMPLE_DEV_PM_OPS(spear1340_miphy_pm_ops, spear1340_miphy_suspend,
+			 spear1340_miphy_resume);
+
+static struct phy *spear1340_miphy_xlate(struct device *dev,
+					 struct of_phandle_args *args)
+{
+	struct spear1340_miphy_priv *priv = dev_get_drvdata(dev);
+
+	if (args->args_count < 1) {
+		dev_err(dev, "DT did not pass correct no of args\n");
+		return NULL;
+	}
+
+	priv->mode = args->args[0];
+
+	if (priv->mode != SATA && priv->mode != PCIE) {
+		dev_err(dev, "DT did not pass correct phy mode\n");
+		return NULL;
+	}
+
+	return priv->phy;
+}
+
+static int spear1340_miphy_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct spear1340_miphy_priv *priv;
+	struct phy_provider *phy_provider;
+
+	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+	if (!priv) {
+		dev_err(dev, "can't alloc spear1340_miphy private date memory\n");
+		return -ENOMEM;
+	}
+
+	priv->misc =
+		syscon_regmap_lookup_by_phandle(dev->of_node, "misc");
+	if (IS_ERR(priv->misc)) {
+		dev_err(dev, "failed to find misc regmap\n");
+		return PTR_ERR(priv->misc);
+	}
+
+	priv->phy = devm_phy_create(dev, NULL, &spear1340_miphy_ops, NULL);
+	if (IS_ERR(priv->phy)) {
+		dev_err(dev, "failed to create SATA PCIe PHY\n");
+		return PTR_ERR(priv->phy);
+	}
+
+	dev_set_drvdata(dev, priv);
+	phy_set_drvdata(priv->phy, priv);
+
+	phy_provider =
+		devm_of_phy_provider_register(dev, spear1340_miphy_xlate);
+	if (IS_ERR(phy_provider)) {
+		dev_err(dev, "failed to register phy provider\n");
+		return PTR_ERR(phy_provider);
+	}
+
+	return 0;
+}
+
+static struct platform_driver spear1340_miphy_driver = {
+	.probe = spear1340_miphy_probe,
+	.driver = {
+		.name = "spear1340-miphy",
+		.owner = THIS_MODULE,
+		.pm = &spear1340_miphy_pm_ops,
+		.of_match_table = of_match_ptr(spear1340_miphy_of_match),
+	},
+};
+
+static int __init spear1340_miphy_phy_init(void)
+{
+	return platform_driver_register(&spear1340_miphy_driver);
+}
+module_init(spear1340_miphy_phy_init);
+
+static void __exit spear1340_miphy_phy_exit(void)
+{
+	platform_driver_unregister(&spear1340_miphy_driver);
+}
+module_exit(spear1340_miphy_phy_exit);
+
+MODULE_DESCRIPTION("ST SPEAR1340-MIPHY driver");
+MODULE_AUTHOR("Pratyush Anand <pratyush.anand@st.com>");
+MODULE_LICENSE("GPL v2");
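For context, the mode selection in spear1340_miphy_xlate() above reduces to two checks on the DT phandle arguments: at least one argument must be present, and it must name a supported mode. A minimal standalone sketch of just that logic (the MODE_* values and function name here are illustrative stand-ins, not the driver's actual identifiers):

```c
/* Illustrative stand-ins for the driver's SATA/PCIE mode values. */
enum miphy_mode { MODE_SATA, MODE_PCIE, MODE_INVALID };

/* Models the xlate checks: reject a missing argument or an unknown mode
 * (the real driver logs dev_err() and returns NULL in those cases). */
static int miphy_select_mode(int args_count, const int *args,
			     enum miphy_mode *mode)
{
	if (args_count < 1)
		return -1;	/* DT did not pass enough args */
	if (args[0] != MODE_SATA && args[0] != MODE_PCIE)
		return -1;	/* unsupported phy mode */
	*mode = (enum miphy_mode)args[0];
	return 0;
}
```

On success the real xlate hands back priv->phy; here the caller just gets the validated mode.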
+2 -2
drivers/pinctrl/pinctrl-tegra-xusb.c
···
 		goto reset;
 	}
 
-	phy = devm_phy_create(&pdev->dev, &pcie_phy_ops, NULL);
+	phy = devm_phy_create(&pdev->dev, NULL, &pcie_phy_ops, NULL);
 	if (IS_ERR(phy)) {
 		err = PTR_ERR(phy);
 		goto unregister;
···
 	padctl->phys[TEGRA_XUSB_PADCTL_PCIE] = phy;
 	phy_set_drvdata(phy, padctl);
 
-	phy = devm_phy_create(&pdev->dev, &sata_phy_ops, NULL);
+	phy = devm_phy_create(&pdev->dev, NULL, &sata_phy_ops, NULL);
 	if (IS_ERR(phy)) {
 		err = PTR_ERR(phy);
 		goto unregister;
+1 -1
drivers/pwm/Kconfig
···
 
 config PWM_ATMEL
 	tristate "Atmel PWM support"
-	depends on ARCH_AT91
+	depends on ARCH_AT91 || AVR32
 	help
 	  Generic PWM framework driver for Atmel SoC.
 
-11
drivers/video/backlight/Kconfig
···
 	  If in doubt, it's safe to enable this option; it doesn't kick
 	  in unless the board's description says it's wired that way.
 
-config BACKLIGHT_ATMEL_PWM
-	tristate "Atmel PWM backlight control"
-	depends on ATMEL_PWM
-	help
-	  Say Y here if you want to use the PWM peripheral in Atmel AT91 and
-	  AVR32 devices. This driver will need additional platform data to know
-	  which PWM instance to use and how to configure it.
-
-	  To compile this driver as a module, choose M here: the module will be
-	  called atmel-pwm-bl.
-
 config BACKLIGHT_EP93XX
 	tristate "Cirrus EP93xx Backlight Driver"
 	depends on FB_EP93XX
-1
drivers/video/backlight/Makefile
···
 obj-$(CONFIG_BACKLIGHT_ADP8870)		+= adp8870_bl.o
 obj-$(CONFIG_BACKLIGHT_APPLE)		+= apple_bl.o
 obj-$(CONFIG_BACKLIGHT_AS3711)		+= as3711_bl.o
-obj-$(CONFIG_BACKLIGHT_ATMEL_PWM)	+= atmel-pwm-bl.o
 obj-$(CONFIG_BACKLIGHT_BD6107)		+= bd6107.o
 obj-$(CONFIG_BACKLIGHT_CARILLO_RANCH)	+= cr_bllcd.o
 obj-$(CONFIG_BACKLIGHT_CLASS_DEVICE)	+= backlight.o
-223
drivers/video/backlight/atmel-pwm-bl.c
···
-/*
- * Copyright (C) 2008 Atmel Corporation
- *
- * Backlight driver using Atmel PWM peripheral.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License version 2 as published by
- * the Free Software Foundation.
- */
-#include <linux/init.h>
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/platform_device.h>
-#include <linux/fb.h>
-#include <linux/gpio.h>
-#include <linux/backlight.h>
-#include <linux/atmel_pwm.h>
-#include <linux/atmel-pwm-bl.h>
-#include <linux/slab.h>
-
-struct atmel_pwm_bl {
-	const struct atmel_pwm_bl_platform_data	*pdata;
-	struct backlight_device			*bldev;
-	struct platform_device			*pdev;
-	struct pwm_channel			pwmc;
-	int					gpio_on;
-};
-
-static void atmel_pwm_bl_set_gpio_on(struct atmel_pwm_bl *pwmbl, int on)
-{
-	if (!gpio_is_valid(pwmbl->gpio_on))
-		return;
-
-	gpio_set_value(pwmbl->gpio_on, on ^ pwmbl->pdata->on_active_low);
-}
-
-static int atmel_pwm_bl_set_intensity(struct backlight_device *bd)
-{
-	struct atmel_pwm_bl *pwmbl = bl_get_data(bd);
-	int intensity = bd->props.brightness;
-	int pwm_duty;
-
-	if (bd->props.power != FB_BLANK_UNBLANK)
-		intensity = 0;
-	if (bd->props.fb_blank != FB_BLANK_UNBLANK)
-		intensity = 0;
-
-	if (pwmbl->pdata->pwm_active_low)
-		pwm_duty = pwmbl->pdata->pwm_duty_min + intensity;
-	else
-		pwm_duty = pwmbl->pdata->pwm_duty_max - intensity;
-
-	if (pwm_duty > pwmbl->pdata->pwm_duty_max)
-		pwm_duty = pwmbl->pdata->pwm_duty_max;
-	if (pwm_duty < pwmbl->pdata->pwm_duty_min)
-		pwm_duty = pwmbl->pdata->pwm_duty_min;
-
-	if (!intensity) {
-		atmel_pwm_bl_set_gpio_on(pwmbl, 0);
-		pwm_channel_writel(&pwmbl->pwmc, PWM_CUPD, pwm_duty);
-		pwm_channel_disable(&pwmbl->pwmc);
-	} else {
-		pwm_channel_enable(&pwmbl->pwmc);
-		pwm_channel_writel(&pwmbl->pwmc, PWM_CUPD, pwm_duty);
-		atmel_pwm_bl_set_gpio_on(pwmbl, 1);
-	}
-
-	return 0;
-}
-
-static int atmel_pwm_bl_get_intensity(struct backlight_device *bd)
-{
-	struct atmel_pwm_bl *pwmbl = bl_get_data(bd);
-	u32 cdty;
-	u32 intensity;
-
-	cdty = pwm_channel_readl(&pwmbl->pwmc, PWM_CDTY);
-	if (pwmbl->pdata->pwm_active_low)
-		intensity = cdty - pwmbl->pdata->pwm_duty_min;
-	else
-		intensity = pwmbl->pdata->pwm_duty_max - cdty;
-
-	return intensity & 0xffff;
-}
-
-static int atmel_pwm_bl_init_pwm(struct atmel_pwm_bl *pwmbl)
-{
-	unsigned long pwm_rate = pwmbl->pwmc.mck;
-	unsigned long prescale = DIV_ROUND_UP(pwm_rate,
-			(pwmbl->pdata->pwm_frequency *
-			pwmbl->pdata->pwm_compare_max)) - 1;
-
-	/*
-	 * Prescale must be power of two and maximum 0xf in size because of
-	 * hardware limit. PWM speed will be:
-	 *	PWM module clock speed / (2 ^ prescale).
-	 */
-	prescale = fls(prescale);
-	if (prescale > 0xf)
-		prescale = 0xf;
-
-	pwm_channel_writel(&pwmbl->pwmc, PWM_CMR, prescale);
-	pwm_channel_writel(&pwmbl->pwmc, PWM_CDTY,
-			pwmbl->pdata->pwm_duty_min +
-			pwmbl->bldev->props.brightness);
-	pwm_channel_writel(&pwmbl->pwmc, PWM_CPRD,
-			pwmbl->pdata->pwm_compare_max);
-
-	dev_info(&pwmbl->pdev->dev, "Atmel PWM backlight driver (%lu Hz)\n",
-			pwmbl->pwmc.mck / pwmbl->pdata->pwm_compare_max /
-			(1 << prescale));
-
-	return pwm_channel_enable(&pwmbl->pwmc);
-}
-
-static const struct backlight_ops atmel_pwm_bl_ops = {
-	.get_brightness = atmel_pwm_bl_get_intensity,
-	.update_status  = atmel_pwm_bl_set_intensity,
-};
-
-static int atmel_pwm_bl_probe(struct platform_device *pdev)
-{
-	struct backlight_properties props;
-	const struct atmel_pwm_bl_platform_data *pdata;
-	struct backlight_device *bldev;
-	struct atmel_pwm_bl *pwmbl;
-	unsigned long flags;
-	int retval;
-
-	pdata = dev_get_platdata(&pdev->dev);
-	if (!pdata)
-		return -ENODEV;
-
-	if (pdata->pwm_compare_max < pdata->pwm_duty_max ||
-	    pdata->pwm_duty_min > pdata->pwm_duty_max ||
-	    pdata->pwm_frequency == 0)
-		return -EINVAL;
-
-	pwmbl = devm_kzalloc(&pdev->dev, sizeof(struct atmel_pwm_bl),
-			     GFP_KERNEL);
-	if (!pwmbl)
-		return -ENOMEM;
-
-	pwmbl->pdev = pdev;
-	pwmbl->pdata = pdata;
-	pwmbl->gpio_on = pdata->gpio_on;
-
-	retval = pwm_channel_alloc(pdata->pwm_channel, &pwmbl->pwmc);
-	if (retval)
-		return retval;
-
-	if (gpio_is_valid(pwmbl->gpio_on)) {
-		/* Turn display off by default. */
-		if (pdata->on_active_low)
-			flags = GPIOF_OUT_INIT_HIGH;
-		else
-			flags = GPIOF_OUT_INIT_LOW;
-
-		retval = devm_gpio_request_one(&pdev->dev, pwmbl->gpio_on,
-					       flags, "gpio_atmel_pwm_bl");
-		if (retval)
-			goto err_free_pwm;
-	}
-
-	memset(&props, 0, sizeof(struct backlight_properties));
-	props.type = BACKLIGHT_RAW;
-	props.max_brightness = pdata->pwm_duty_max - pdata->pwm_duty_min;
-	bldev = devm_backlight_device_register(&pdev->dev, "atmel-pwm-bl",
-					&pdev->dev, pwmbl, &atmel_pwm_bl_ops,
-					&props);
-	if (IS_ERR(bldev)) {
-		retval = PTR_ERR(bldev);
-		goto err_free_pwm;
-	}
-
-	pwmbl->bldev = bldev;
-
-	platform_set_drvdata(pdev, pwmbl);
-
-	/* Power up the backlight by default at middle intesity. */
-	bldev->props.power = FB_BLANK_UNBLANK;
-	bldev->props.brightness = bldev->props.max_brightness / 2;
-
-	retval = atmel_pwm_bl_init_pwm(pwmbl);
-	if (retval)
-		goto err_free_pwm;
-
-	atmel_pwm_bl_set_intensity(bldev);
-
-	return 0;
-
-err_free_pwm:
-	pwm_channel_free(&pwmbl->pwmc);
-
-	return retval;
-}
-
-static int atmel_pwm_bl_remove(struct platform_device *pdev)
-{
-	struct atmel_pwm_bl *pwmbl = platform_get_drvdata(pdev);
-
-	atmel_pwm_bl_set_gpio_on(pwmbl, 0);
-	pwm_channel_disable(&pwmbl->pwmc);
-	pwm_channel_free(&pwmbl->pwmc);
-
-	return 0;
-}
-
-static struct platform_driver atmel_pwm_bl_driver = {
-	.driver = {
-		.name = "atmel-pwm-bl",
-	},
-	/* REVISIT add suspend() and resume() */
-	.probe = atmel_pwm_bl_probe,
-	.remove = atmel_pwm_bl_remove,
-};
-
-module_platform_driver(atmel_pwm_bl_driver);
-
-MODULE_AUTHOR("Hans-Christian egtvedt <hans-christian.egtvedt@atmel.com>");
-MODULE_DESCRIPTION("Atmel PWM backlight driver");
-MODULE_LICENSE("GPL");
-MODULE_ALIAS("platform:atmel-pwm-bl");
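The prescaler math in the removed atmel_pwm_bl_init_pwm() is the one non-obvious piece: the driver rounds the required clock divider up to the next power of two and clamps the exponent to the 4-bit hardware field, so the channel runs at mck / (2^prescale). A standalone model of just that computation (fls() reimplemented here; the clock and frequency numbers used in testing are illustrative, not from any board):

```c
/* DIV_ROUND_UP as the kernel defines it, for positive integers. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Minimal stand-in for the kernel's fls(): position of the highest set
 * bit, counting from 1, with fls(0) == 0. */
static unsigned int fls_model(unsigned long x)
{
	unsigned int r = 0;

	while (x) {
		x >>= 1;
		r++;
	}
	return r;
}

/* Mirrors the removed driver's computation: 2^fls(n - 1) is the smallest
 * power of two >= n, and the exponent is clamped to the 0xf hardware
 * limit, giving a PWM base rate of mck / (2^prescale). */
static unsigned long atmel_bl_prescale(unsigned long mck,
				       unsigned long pwm_frequency,
				       unsigned long pwm_compare_max)
{
	unsigned long prescale = DIV_ROUND_UP(mck,
			pwm_frequency * pwm_compare_max) - 1;

	prescale = fls_model(prescale);
	if (prescale > 0xf)
		prescale = 0xf;
	return prescale;
}
```

For example, with a 66.5 MHz module clock, a 1 kHz target and a 255-tick period, the required divider is 261, which rounds up to 2^9.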
-43
include/linux/atmel-pwm-bl.h
···
-/*
- * Copyright (C) 2007 Atmel Corporation
- *
- * Driver for the AT32AP700X PS/2 controller (PSIF).
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License version 2 as published
- * by the Free Software Foundation.
- */
-
-#ifndef __INCLUDE_ATMEL_PWM_BL_H
-#define __INCLUDE_ATMEL_PWM_BL_H
-
-/**
- * struct atmel_pwm_bl_platform_data
- * @pwm_channel: which PWM channel in the PWM module to use.
- * @pwm_frequency: PWM frequency to generate, the driver will try to be as
- *	close as the prescaler allows.
- * @pwm_compare_max: value to use in the PWM channel compare register.
- * @pwm_duty_max: maximum duty cycle value, must be less than or equal to
- *	pwm_compare_max.
- * @pwm_duty_min: minimum duty cycle value, must be less than pwm_duty_max.
- * @pwm_active_low: set to one if the low part of the PWM signal increases the
- *	brightness of the backlight.
- * @gpio_on: GPIO line to control the backlight on/off, set to -1 if not used.
- * @on_active_low: set to one if the on/off signal is on when GPIO is low.
- *
- * This struct must be added to the platform device in the board code. It is
- * used by the atmel-pwm-bl driver to setup the GPIO to control on/off and the
- * PWM device.
- */
-struct atmel_pwm_bl_platform_data {
-	unsigned int pwm_channel;
-	unsigned int pwm_frequency;
-	unsigned int pwm_compare_max;
-	unsigned int pwm_duty_max;
-	unsigned int pwm_duty_min;
-	unsigned int pwm_active_low;
-	int gpio_on;
-	unsigned int on_active_low;
-};
-
-#endif /* __INCLUDE_ATMEL_PWM_BL_H */
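Before its removal, this platform data was filled in by board code, and atmel_pwm_bl_probe() rejected inconsistent values with -EINVAL. A self-contained sketch of that contract (the struct is repeated so it compiles on its own; the field values and helper name are invented for illustration, not taken from any real board file):

```c
/* Copy of the platform data struct from the removed header. */
struct atmel_pwm_bl_platform_data {
	unsigned int pwm_channel;
	unsigned int pwm_frequency;
	unsigned int pwm_compare_max;
	unsigned int pwm_duty_max;
	unsigned int pwm_duty_min;
	unsigned int pwm_active_low;
	int gpio_on;
	unsigned int on_active_low;
};

/* The same sanity checks atmel_pwm_bl_probe() applied: duty window must
 * fit inside the compare range and the frequency must be nonzero.
 * Returns 0 if valid, -1 for the combinations probe rejected. */
static int atmel_pwm_bl_check_pdata(const struct atmel_pwm_bl_platform_data *p)
{
	if (p->pwm_compare_max < p->pwm_duty_max ||
	    p->pwm_duty_min > p->pwm_duty_max ||
	    p->pwm_frequency == 0)
		return -1;
	return 0;
}

/* Example board definition: channel 2, ~200 Hz, duty window 90..345,
 * active-low PWM, no on/off GPIO (-1 per the header's convention). */
static const struct atmel_pwm_bl_platform_data example_bl_pdata = {
	.pwm_channel	 = 2,
	.pwm_frequency	 = 200,
	.pwm_compare_max = 345,
	.pwm_duty_max	 = 345,
	.pwm_duty_min	 = 90,
	.pwm_active_low	 = 1,
	.gpio_on	 = -1,
	.on_active_low	 = 0,
};
```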
-70
include/linux/atmel_pwm.h
···
-#ifndef __LINUX_ATMEL_PWM_H
-#define __LINUX_ATMEL_PWM_H
-
-/**
- * struct pwm_channel - driver handle to a PWM channel
- * @regs: base of this channel's registers
- * @index: number of this channel (0..31)
- * @mck: base clock rate, which can be prescaled and maybe subdivided
- *
- * Drivers initialize a pwm_channel structure using pwm_channel_alloc().
- * Then they configure its clock rate (derived from MCK), alignment,
- * polarity, and duty cycle by writing directly to the channel registers,
- * before enabling the channel by calling pwm_channel_enable().
- *
- * After emitting a PWM signal for the desired length of time, drivers
- * may then pwm_channel_disable() or pwm_channel_free().  Both of these
- * disable the channel, but when it's freed the IRQ is deconfigured and
- * the channel must later be re-allocated and reconfigured.
- *
- * Note that if the period or duty cycle need to be changed while the
- * PWM channel is operating, drivers must use the PWM_CUPD double buffer
- * mechanism, either polling until they change or getting implicitly
- * notified through a once-per-period interrupt handler.
- */
-struct pwm_channel {
-	void __iomem	*regs;
-	unsigned	index;
-	unsigned long	mck;
-};
-
-extern int pwm_channel_alloc(int index, struct pwm_channel *ch);
-extern int pwm_channel_free(struct pwm_channel *ch);
-
-extern int pwm_clk_alloc(unsigned prescale, unsigned div);
-extern void pwm_clk_free(unsigned clk);
-
-extern int __pwm_channel_onoff(struct pwm_channel *ch, int enabled);
-
-#define pwm_channel_enable(ch)	__pwm_channel_onoff((ch), 1)
-#define pwm_channel_disable(ch)	__pwm_channel_onoff((ch), 0)
-
-/* periodic interrupts, mostly for CUPD changes to period or cycle */
-extern int pwm_channel_handler(struct pwm_channel *ch,
-		void (*handler)(struct pwm_channel *ch));
-
-/* per-channel registers (banked at pwm_channel->regs) */
-#define PWM_CMR		0x00		/* mode register */
-#define PWM_CPR_CPD	(1 << 10)	/* set: CUPD modifies period */
-#define PWM_CPR_CPOL	(1 << 9)	/* set: idle high */
-#define PWM_CPR_CALG	(1 << 8)	/* set: center align */
-#define PWM_CPR_CPRE	(0xf << 0)	/* mask: rate is mck/(2^pre) */
-#define PWM_CPR_CLKA	(0xb << 0)	/* rate CLKA */
-#define PWM_CPR_CLKB	(0xc << 0)	/* rate CLKB */
-#define PWM_CDTY	0x04		/* duty cycle (max of CPRD) */
-#define PWM_CPRD	0x08		/* period (count up from zero) */
-#define PWM_CCNT	0x0c		/* counter (20 bits?) */
-#define PWM_CUPD	0x10		/* update CPRD (or CDTY) next period */
-
-static inline void
-pwm_channel_writel(struct pwm_channel *pwmc, unsigned offset, u32 val)
-{
-	__raw_writel(val, pwmc->regs + offset);
-}
-
-static inline u32 pwm_channel_readl(struct pwm_channel *pwmc, unsigned offset)
-{
-	return __raw_readl(pwmc->regs + offset);
-}
-
-#endif /* __LINUX_ATMEL_PWM_H */
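The header above banks a small window of 32-bit registers at pwm_channel->regs and accesses them via fixed offsets. A userspace model of that access pattern, with a plain array standing in for the __iomem mapping (the MODEL_* names are this sketch's, mirroring the removed PWM_CMR/PWM_CDTY/PWM_CPRD/PWM_CUPD offsets):

```c
#include <stdint.h>

/* Offsets mirroring the removed header's per-channel register layout. */
#define MODEL_PWM_CMR	0x00	/* mode register */
#define MODEL_PWM_CDTY	0x04	/* duty cycle */
#define MODEL_PWM_CPRD	0x08	/* period */
#define MODEL_PWM_CUPD	0x10	/* double-buffered update */

struct model_channel {
	uint32_t regs[8];	/* covers offsets 0x00..0x1c */
};

/* Equivalent of pwm_channel_writel(): store a 32-bit value at a
 * byte offset within the channel's register window. */
static void model_writel(struct model_channel *ch, unsigned int offset,
			 uint32_t val)
{
	ch->regs[offset / 4] = val;
}

/* Equivalent of pwm_channel_readl(). */
static uint32_t model_readl(const struct model_channel *ch,
			    unsigned int offset)
{
	return ch->regs[offset / 4];
}
```

The real helpers use __raw_writel()/__raw_readl() on an ioremapped region; the offset arithmetic is the same.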