Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'clk-for-linus-3.15' of git://git.linaro.org/people/mike.turquette/linux

Pull clock framework changes from Mike Turquette:
"The clock framework changes for 3.15 look similar to past pull
requests. Mostly clock driver updates, more Device Tree support in
the form of common functions useful across platforms and a handful of
features and fixes to the framework core"

* tag 'clk-for-linus-3.15' of git://git.linaro.org/people/mike.turquette/linux: (86 commits)
clk: shmobile: fix setting paretn clock rate
clk: shmobile: rcar-gen2: fix lb/sd0/sd1/sdh clock parent to pll1
clk: Fix minor errors in of_clk_init() function comments
clk: reverse default clk provider initialization order in of_clk_init()
clk: sirf: update copyright years to 2014
clk: mmp: try to use closer one when do round rate
clk: mmp: fix the wrong calculation formula
clk: mmp: fix wrong mask when calculate denominator
clk: st: Adds quadfs clock binding
clk: st: Adds clockgen-vcc and clockgen-mux clock binding
clk: st: Adds clockgen clock binding
clk: st: Adds divmux and prediv clock binding
clk: st: Support for A9 MUX clocks
clk: st: Support for ClockGenA9/DDR/GPU
clk: st: Support for QUADFS inside ClockGenB/C/D/E/F
clk: st: Support for VCC-mux and MUX clocks
clk: st: Support for PLLs inside ClockGenA(s)
clk: st: Support for DIVMUX and PreDiv Clocks
clk: support hardware-specific debugfs entries
clk: s2mps11: Use of_get_child_by_name
...

+5881 -732
+34
Documentation/clk.txt
··· 255 255 256 256 To bypass this disabling, include "clk_ignore_unused" in the bootargs to the 257 257 kernel. 258 + 259 + Part 7 - Locking 260 + 261 + The common clock framework uses two global locks, the prepare lock and the 262 + enable lock. 263 + 264 + The enable lock is a spinlock and is held across calls to the .enable, 265 + .disable and .is_enabled operations. Those operations are thus not allowed to 266 + sleep, and calls to the clk_enable(), clk_disable() and clk_is_enabled() API 267 + functions are allowed in atomic context. 268 + 269 + The prepare lock is a mutex and is held across calls to all other operations. 270 + All those operations are allowed to sleep, and calls to the corresponding API 271 + functions are not allowed in atomic context. 272 + 273 + This effectively divides operations in two groups from a locking perspective. 274 + 275 + Drivers don't need to manually protect resources shared between the operations 276 + of one group, regardless of whether those resources are shared by multiple 277 + clocks or not. However, access to resources that are shared between operations 278 + of the two groups needs to be protected by the drivers. An example of such a 279 + resource would be a register that controls both the clock rate and the clock 280 + enable/disable state. 281 + 282 + The clock framework is reentrant, in that a driver is allowed to call clock 283 + framework functions from within its implementation of clock operations. This 284 + can for instance cause a .set_rate operation of one clock being called from 285 + within the .set_rate operation of another clock. This case must be considered 286 + in the driver implementations, but the code flow is usually controlled by the 287 + driver in that case. 288 + 289 + Note that locking must also be considered when code outside of the common 290 + clock framework needs to access resources used by the clock operations. This 291 + is considered out of scope of this document.
+14
Documentation/devicetree/bindings/arm/hisilicon/hisilicon.txt
··· 30 30 resume-offset = <0x308>; 31 31 reboot-offset = <0x4>; 32 32 }; 33 + 34 + PCTRL: Peripheral misc control register 35 + 36 + Required Properties: 37 + - compatible: "hisilicon,pctrl" 38 + - reg: Address and size of pctrl. 39 + 40 + Example: 41 + 42 + /* for Hi3620 */ 43 + pctrl: pctrl@fca09000 { 44 + compatible = "hisilicon,pctrl"; 45 + reg = <0xfca09000 0x1000>; 46 + };
+5
Documentation/devicetree/bindings/clock/altr_socfpga.txt
··· 23 23 and the bit index. 24 24 - div-reg : For "socfpga-gate-clk", div-reg contains the divider register, bit shift, 25 25 and width. 26 + - clk-phase : For the sdmmc_clk, contains the value of the clock phase that controls 27 + the SDMMC CIU clock. The first value is the clk_sample(smpsel), and the second 28 + value is the cclk_in_drv(drvsel). The clk-phase is used to enable the correct 29 + hold/delay times that are needed for the SD/MMC CIU clock. The values of both 30 + can be 0-315 degrees, in 45 degree increments.
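For reference, the socfpga.dtsi hunk later in this same pull uses the property with a 0 degree sample phase and a 135 degree drive phase. The node label below is assumed (the dtsi hunk starts mid-node and does not show it):

```dts
sdmmc_clk: sdmmc_clk {
	compatible = "altr,socfpga-gate-clk";
	clocks = <&f2s_periph_ref_clk>, <&main_nand_sdmmc_clk>, <&per_nand_mmc_clk>;
	clk-gate = <0xa0 8>;
	/* clk_sample (smpsel) = 0 degrees, cclk_in_drv (drvsel) = 135 degrees */
	clk-phase = <0 135>;
};
```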
+1 -1
Documentation/devicetree/bindings/clock/axi-clkgen.txt
··· 5 5 [1] Documentation/devicetree/bindings/clock/clock-bindings.txt 6 6 7 7 Required properties: 8 - - compatible : shall be "adi,axi-clkgen". 8 + - compatible : shall be "adi,axi-clkgen-1.00.a" or "adi,axi-clkgen-2.00.a". 9 9 - #clock-cells : from common clock binding; Should always be set to 0. 10 10 - reg : Address and length of the axi-clkgen register set. 11 11 - clocks : Phandle and clock specifier for the parent clock.
+17
Documentation/devicetree/bindings/clock/clock-bindings.txt
··· 44 44 clocks by index. The names should reflect the clock output signal 45 45 names for the device. 46 46 47 + clock-indices: If the identifying number for the clocks in the node 48 + is not linear from zero, then this property allows 49 + the mapping of identifiers onto the clock-output-names 50 + array. 51 + 52 + For example, if we have two clocks <&oscillator 1> and <&oscillator 3>: 53 + 54 + oscillator { 55 + compatible = "myclocktype"; 56 + #clock-cells = <1>; 57 + clock-indices = <1>, <3>; 58 + clock-output-names = "clka", "clkb"; 59 + } 60 + 61 + This ensures we do not have any empty entries in clock-output-names. 62 + 63 + 47 64 ==Clock consumers== 48 65 49 66 Required properties:
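To make the clock-indices mapping concrete, a consumer of the oscillator in the example above might look like the fragment below (the consumer node, unit address and clock-names value are hypothetical; only the provider node comes from the binding text):

```dts
serial@1000 {
	/* <&oscillator 3> resolves via clock-indices to the output named "clkb" */
	clocks = <&oscillator 3>;
	clock-names = "baud";
};
```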
+1
Documentation/devicetree/bindings/clock/hi3620-clock.txt
··· 7 7 8 8 - compatible: should be one of the following. 9 9 - "hisilicon,hi3620-clock" - controller compatible with Hi3620 SoC. 10 + - "hisilicon,hi3620-mmc-clock" - controller specific for Hi3620 mmc. 10 11 11 12 - reg: physical base address of the controller and length of memory mapped 12 13 region.
+48
Documentation/devicetree/bindings/clock/moxa,moxart-clock.txt
··· 1 + Device Tree Clock bindings for arch-moxart 2 + 3 + This binding uses the common clock binding[1]. 4 + 5 + [1] Documentation/devicetree/bindings/clock/clock-bindings.txt 6 + 7 + On MOXA ART SoCs, the PLL output and APB frequencies can be determined 8 + by reading registers holding multiplier and divisor information. 9 + 10 + 11 + PLL: 12 + 13 + Required properties: 14 + - compatible : Must be "moxa,moxart-pll-clock" 15 + - #clock-cells : Should be 0 16 + - reg : Should contain registers location and length 17 + - clocks : Should contain phandle + clock-specifier for the parent clock 18 + 19 + Optional properties: 20 + - clock-output-names : Should contain clock name 21 + 22 + 23 + APB: 24 + 25 + Required properties: 26 + - compatible : Must be "moxa,moxart-apb-clock" 27 + - #clock-cells : Should be 0 28 + - reg : Should contain registers location and length 29 + - clocks : Should contain phandle + clock-specifier for the parent clock 30 + 31 + Optional properties: 32 + - clock-output-names : Should contain clock name 33 + 34 + 35 + For example: 36 + 37 + clk_pll: clk_pll@98100000 { 38 + compatible = "moxa,moxart-pll-clock"; 39 + #clock-cells = <0>; 40 + reg = <0x98100000 0x34>; 41 + }; 42 + 43 + clk_apb: clk_apb@98100000 { 44 + compatible = "moxa,moxart-apb-clock"; 45 + #clock-cells = <0>; 46 + reg = <0x98100000 0x34>; 47 + clocks = <&clk_pll>; 48 + };
+14
Documentation/devicetree/bindings/clock/mvebu-core-clock.txt
··· 11 11 3 = hclk (DRAM control clock) 12 12 4 = dramclk (DDR clock) 13 13 14 + The following is a list of provided IDs and clock names on Armada 375: 15 + 0 = tclk (Internal Bus clock) 16 + 1 = cpuclk (CPU clock) 17 + 2 = l2clk (L2 Cache clock) 18 + 3 = ddrclk (DDR clock) 19 + 20 + The following is a list of provided IDs and clock names on Armada 380/385: 21 + 0 = tclk (Internal Bus clock) 22 + 1 = cpuclk (CPU clock) 23 + 2 = l2clk (L2 Cache clock) 24 + 3 = ddrclk (DDR clock) 25 + 14 26 The following is a list of provided IDs and clock names on Kirkwood and Dove: 15 27 0 = tclk (Internal Bus clock) 16 28 1 = cpuclk (CPU0 clock) ··· 32 20 Required properties: 33 21 - compatible : shall be one of the following: 34 22 "marvell,armada-370-core-clock" - For Armada 370 SoC core clocks 23 + "marvell,armada-375-core-clock" - For Armada 375 SoC core clocks 24 + "marvell,armada-380-core-clock" - For Armada 380/385 SoC core clocks 35 25 "marvell,armada-xp-core-clock" - For Armada XP SoC core clocks 36 26 "marvell,dove-core-clock" - for Dove SoC core clocks 37 27 "marvell,kirkwood-core-clock" - for Kirkwood SoC (except mv88f6180)
+4 -1
Documentation/devicetree/bindings/clock/mvebu-corediv-clock.txt
··· 4 4 0 = nand (NAND clock) 5 5 6 6 Required properties: 7 - - compatible : must be "marvell,armada-370-corediv-clock" 7 + - compatible : must be "marvell,armada-370-corediv-clock", 8 + "marvell,armada-375-corediv-clock", 9 + "marvell,armada-380-corediv-clock", 10 + 8 11 - reg : must be the register address of Core Divider control register 9 12 - #clock-cells : from common clock binding; shall be set to 1 10 13 - clocks : must be set to the parent's phandle
+61 -4
Documentation/devicetree/bindings/clock/mvebu-gated-clock.txt
··· 1 1 * Gated Clock bindings for Marvell EBU SoCs 2 2 3 - Marvell Armada 370/XP, Dove and Kirkwood allow some peripheral clocks to be 4 - gated to save some power. The clock consumer should specify the desired clock 5 - by having the clock ID in its "clocks" phandle cell. The clock ID is directly 6 - mapped to the corresponding clock gating control bit in HW to ease manual clock 3 + Marvell Armada 370/375/380/385/XP, Dove and Kirkwood allow some 4 + peripheral clocks to be gated to save some power. The clock consumer 5 + should specify the desired clock by having the clock ID in its 6 + "clocks" phandle cell. The clock ID is directly mapped to the 7 + corresponding clock gating control bit in HW to ease manual clock 7 8 lookup in datasheet. 8 9 9 10 The following is a list of provided IDs for Armada 370: ··· 22 21 25 tdm Time Division Mplx 23 22 28 ddr DDR Cntrl 24 23 30 sata1 SATA Host 0 24 + 25 + The following is a list of provided IDs for Armada 375: 26 + ID Clock Peripheral 27 + ----------------------------------- 28 + 2 mu Management Unit 29 + 3 pp Packet Processor 30 + 4 ptp PTP 31 + 5 pex0 PCIe 0 Clock out 32 + 6 pex1 PCIe 1 Clock out 33 + 8 audio Audio Cntrl 34 + 11 nd_clk Nand Flash Cntrl 35 + 14 sata0_link SATA 0 Link 36 + 15 sata0_core SATA 0 Core 37 + 16 usb3 USB3 Host 38 + 17 sdio SDHCI Host 39 + 18 usb USB Host 40 + 19 gop Gigabit Ethernet MAC 41 + 20 sata1_link SATA 1 Link 42 + 21 sata1_core SATA 1 Core 43 + 22 xor0 XOR DMA 0 44 + 23 xor1 XOR DMA 1 45 + 24 copro Coprocessor 46 + 25 tdm Time Division Mplx 47 + 28 crypto0_enc Cryptographic Unit Port 0 Encryption 48 + 29 crypto0_core Cryptographic Unit Port 0 Core 49 + 30 crypto1_enc Cryptographic Unit Port 1 Encryption 50 + 31 crypto1_core Cryptographic Unit Port 1 Core 51 + 52 + The following is a list of provided IDs for Armada 380/385: 53 + ID Clock Peripheral 54 + ----------------------------------- 55 + 0 audio Audio 56 + 2 ge2 Gigabit Ethernet 2 57 + 3 ge1 Gigabit Ethernet 1 58 + 4 ge0 Gigabit Ethernet 0 59 + 5 pex1 PCIe 1 60 + 6 pex2 PCIe 2 61 + 7 pex3 PCIe 3 62 + 8 pex0 PCIe 0 63 + 9 usb3h0 USB3 Host 0 64 + 10 usb3h1 USB3 Host 1 65 + 11 usb3d USB3 Device 66 + 13 bm Buffer Management 67 + 14 crypto0z Cryptographic 0 Z 68 + 15 sata0 SATA 0 69 + 16 crypto1z Cryptographic 1 Z 70 + 17 sdio SDIO 71 + 18 usb2 USB 2 72 + 21 crypto1 Cryptographic 1 73 + 22 xor0 XOR 0 74 + 23 crypto0 Cryptographic 0 75 + 25 tdm Time Division Multiplexing 76 + 28 xor1 XOR 1 77 + 30 sata1 SATA 1 25 78 26 79 The following is a list of provided IDs for Armada XP: 27 80 ID Clock Peripheral ··· 150 95 Required properties: 151 96 - compatible : shall be one of the following: 152 97 "marvell,armada-370-gating-clock" - for Armada 370 SoC clock gating 98 + "marvell,armada-375-gating-clock" - for Armada 375 SoC clock gating 99 + "marvell,armada-380-gating-clock" - for Armada 380/385 SoC clock gating 153 100 "marvell,armada-xp-gating-clock" - for Armada XP SoC clock gating 154 101 "marvell,dove-gating-clock" - for Dove SoC clock gating 155 102 "marvell,kirkwood-gating-clock" - for Kirkwood SoC clock gating
+29
Documentation/devicetree/bindings/clock/renesas,rz-cpg-clocks.txt
··· 1 + * Renesas RZ Clock Pulse Generator (CPG) 2 + 3 + The CPG generates core clocks for the RZ SoCs. It includes the PLL, variable 4 + CPU and GPU clocks, and several fixed ratio dividers. 5 + 6 + Required Properties: 7 + 8 + - compatible: Must be one of 9 + - "renesas,r7s72100-cpg-clocks" for the r7s72100 CPG 10 + - "renesas,rz-cpg-clocks" for the generic RZ CPG 11 + - reg: Base address and length of the memory resource used by the CPG 12 + - clocks: References to possible parent clocks. Order must match clock modes 13 + in the datasheet. For the r7s72100, this is extal, usb_x1. 14 + - #clock-cells: Must be 1 15 + - clock-output-names: The names of the clocks. Supported clocks are "pll", 16 + "i", and "g" 17 + 18 + 19 + Example 20 + ------- 21 + 22 + cpg_clocks: cpg_clocks@fcfe0000 { 23 + #clock-cells = <1>; 24 + compatible = "renesas,r7s72100-cpg-clocks", 25 + "renesas,rz-cpg-clocks"; 26 + reg = <0xfcfe0000 0x18>; 27 + clocks = <&extal_clk>, <&usb_x1_clk>; 28 + clock-output-names = "pll", "i", "g"; 29 + };
+49
Documentation/devicetree/bindings/clock/st/st,clkgen-divmux.txt
··· 1 + Binding for a ST divider and multiplexer clock driver. 2 + 3 + This binding uses the common clock binding[1]. 4 + The base address is specified in the parent node; see the clock binding[2]. 5 + 6 + [1] Documentation/devicetree/bindings/clock/clock-bindings.txt 7 + [2] Documentation/devicetree/bindings/clock/st/st,clkgen.txt 8 + 9 + Required properties: 10 + 11 + - compatible : shall be: 12 + "st,clkgena-divmux-c65-hs", "st,clkgena-divmux" 13 + "st,clkgena-divmux-c65-ls", "st,clkgena-divmux" 14 + "st,clkgena-divmux-c32-odf0", "st,clkgena-divmux" 15 + "st,clkgena-divmux-c32-odf1", "st,clkgena-divmux" 16 + "st,clkgena-divmux-c32-odf2", "st,clkgena-divmux" 17 + "st,clkgena-divmux-c32-odf3", "st,clkgena-divmux" 18 + 19 + - #clock-cells : From common clock binding; shall be set to 1. 20 + 21 + - clocks : From common clock binding 22 + 23 + - clock-output-names : From common clock binding. 24 + 25 + Example: 26 + 27 + clockgenA@fd345000 { 28 + reg = <0xfd345000 0xb50>; 29 + 30 + CLK_M_A1_DIV1: CLK_M_A1_DIV1 { 31 + #clock-cells = <1>; 32 + compatible = "st,clkgena-divmux-c32-odf1", 33 + "st,clkgena-divmux"; 34 + 35 + clocks = <&CLK_M_A1_OSC_PREDIV>, 36 + <&CLK_M_A1_PLL0 1>, /* PLL0 PHI1 */ 37 + <&CLK_M_A1_PLL1 1>; /* PLL1 PHI1 */ 38 + 39 + clock-output-names = "CLK_M_RX_ICN_TS", 40 + "CLK_M_RX_ICN_VDP_0", 41 + "", /* Unused */ 42 + "CLK_M_PRV_T1_BUS", 43 + "CLK_M_ICN_REG_12", 44 + "CLK_M_ICN_REG_10", 45 + "", /* Unused */ 46 + "CLK_M_ICN_ST231"; 47 + }; 48 + }; 49 +
+36
Documentation/devicetree/bindings/clock/st/st,clkgen-mux.txt
··· 1 + Binding for a ST multiplexed clock driver. 2 + 3 + This binding supports only simple indexed multiplexers; it does not 4 + support table-based translation of parent indexes to hardware values. 5 + 6 + This binding uses the common clock binding[1]. 7 + 8 + [1] Documentation/devicetree/bindings/clock/clock-bindings.txt 9 + 10 + Required properties: 11 + 12 + - compatible : shall be: 13 + "st,stih416-clkgenc-vcc-hd", "st,clkgen-mux" 14 + "st,stih416-clkgenf-vcc-fvdp", "st,clkgen-mux" 15 + "st,stih416-clkgenf-vcc-hva", "st,clkgen-mux" 16 + "st,stih416-clkgenf-vcc-hd", "st,clkgen-mux" 17 + "st,stih416-clkgenf-vcc-sd", "st,clkgen-mux" 18 + "st,stih415-clkgen-a9-mux", "st,clkgen-mux" 19 + "st,stih416-clkgen-a9-mux", "st,clkgen-mux" 20 + 21 + 22 + - #clock-cells : from common clock binding; shall be set to 0. 23 + 24 + - reg : A base address and length of the register set. 25 + 26 + - clocks : from common clock binding 27 + 28 + Example: 29 + 30 + CLK_M_HVA: CLK_M_HVA { 31 + #clock-cells = <0>; 32 + compatible = "st,stih416-clkgenf-vcc-hva", "st,clkgen-mux"; 33 + reg = <0xfd690868 4>; 34 + 35 + clocks = <&CLOCKGEN_F 1>, <&CLK_M_A1_DIV0 3>; 36 + };
+48
Documentation/devicetree/bindings/clock/st/st,clkgen-pll.txt
··· 1 + Binding for a ST pll clock driver. 2 + 3 + This binding uses the common clock binding[1]. 4 + The base address is specified in the parent node; see the clock binding[2]. 5 + 6 + [1] Documentation/devicetree/bindings/clock/clock-bindings.txt 7 + [2] Documentation/devicetree/bindings/clock/st/st,clkgen.txt 8 + 9 + Required properties: 10 + 11 + - compatible : shall be: 12 + "st,clkgena-prediv-c65", "st,clkgena-prediv" 13 + "st,clkgena-prediv-c32", "st,clkgena-prediv" 14 + 15 + "st,clkgena-plls-c65" 16 + "st,plls-c32-a1x-0", "st,clkgen-plls-c32" 17 + "st,plls-c32-a1x-1", "st,clkgen-plls-c32" 18 + "st,stih415-plls-c32-a9", "st,clkgen-plls-c32" 19 + "st,stih415-plls-c32-ddr", "st,clkgen-plls-c32" 20 + "st,stih416-plls-c32-a9", "st,clkgen-plls-c32" 21 + "st,stih416-plls-c32-ddr", "st,clkgen-plls-c32" 22 + 23 + "st,stih415-gpu-pll-c32", "st,clkgengpu-pll-c32" 24 + "st,stih416-gpu-pll-c32", "st,clkgengpu-pll-c32" 25 + 26 + 27 + - #clock-cells : From common clock binding; shall be set to 1. 28 + 29 + - clocks : From common clock binding 30 + 31 + - clock-output-names : From common clock binding. 32 + 33 + Example: 34 + 35 + clockgenA@fee62000 { 36 + reg = <0xfee62000 0xb48>; 37 + 38 + CLK_S_A0_PLL: CLK_S_A0_PLL { 39 + #clock-cells = <1>; 40 + compatible = "st,clkgena-plls-c65"; 41 + 42 + clocks = <&CLK_SYSIN>; 43 + 44 + clock-output-names = "CLK_S_A0_PLL0_HS", 45 + "CLK_S_A0_PLL0_LS", 46 + "CLK_S_A0_PLL1"; 47 + }; 48 + };
+36
Documentation/devicetree/bindings/clock/st/st,clkgen-prediv.txt
··· 1 + Binding for a ST pre-divider clock driver. 2 + 3 + This binding uses the common clock binding[1]. 4 + The base address is specified in the parent node; see the clock binding[2]. 5 + 6 + [1] Documentation/devicetree/bindings/clock/clock-bindings.txt 7 + [2] Documentation/devicetree/bindings/clock/st/st,clkgen.txt 8 + 9 + Required properties: 10 + 11 + - compatible : shall be: 12 + "st,clkgena-prediv-c65", "st,clkgena-prediv" 13 + "st,clkgena-prediv-c32", "st,clkgena-prediv" 14 + 15 + - #clock-cells : From common clock binding; shall be set to 0. 16 + 17 + - clocks : From common clock binding 18 + 19 + - clock-output-names : From common clock binding. 20 + 21 + Example: 22 + 23 + clockgenA@fd345000 { 24 + reg = <0xfd345000 0xb50>; 25 + 26 + CLK_M_A2_OSC_PREDIV: CLK_M_A2_OSC_PREDIV { 27 + #clock-cells = <0>; 28 + compatible = "st,clkgena-prediv-c32", 29 + "st,clkgena-prediv"; 30 + 31 + clocks = <&CLK_SYSIN>; 32 + 33 + clock-output-names = "CLK_M_A2_OSC_PREDIV"; 34 + }; 35 + }; 36 +
+53
Documentation/devicetree/bindings/clock/st/st,clkgen-vcc.txt
··· 1 + Binding for a type of STMicroelectronics clock crossbar (VCC). 2 + 3 + The crossbar can take up to 4 input clocks and control up to 16 4 + output clocks. Not all inputs or outputs have to be in use in a 5 + particular instantiation. Each output can be individually enabled, 6 + select any of the input clocks and apply a divide (by 1,2,4 or 8) to 7 + that selected clock. 8 + 9 + This binding uses the common clock binding[1]. 10 + 11 + [1] Documentation/devicetree/bindings/clock/clock-bindings.txt 12 + 13 + Required properties: 14 + 15 + - compatible : shall be: 16 + "st,stih416-clkgenc", "st,vcc" 17 + "st,stih416-clkgenf", "st,vcc" 18 + 19 + - #clock-cells : from common clock binding; shall be set to 1. 20 + 21 + - reg : A Base address and length of the register set. 22 + 23 + - clocks : from common clock binding 24 + 25 + - clock-output-names : From common clock binding. The block has 16 26 + clock outputs but not all of them in a specific instance 27 + have to be used in the SoC. If a clock name is left as 28 + an empty string then no clock will be created for the 29 + output associated with that string index. If fewer than 30 + 16 strings are provided then no clocks will be created 31 + for the remaining outputs. 32 + 33 + Example: 34 + 35 + CLOCKGEN_C_VCC: CLOCKGEN_C_VCC { 36 + #clock-cells = <1>; 37 + compatible = "st,stih416-clkgenc", "st,clkgen-vcc"; 38 + reg = <0xfe8308ac 12>; 39 + 40 + clocks = <&CLK_S_VCC_HD>, <&CLOCKGEN_C 1>, 41 + <&CLK_S_TMDS_FROMPHY>, <&CLOCKGEN_C 2>; 42 + 43 + clock-output-names = 44 + "CLK_S_PIX_HDMI", "CLK_S_PIX_DVO", 45 + "CLK_S_OUT_DVO", "CLK_S_PIX_HD", 46 + "CLK_S_HDDAC", "CLK_S_DENC", 47 + "CLK_S_SDDAC", "CLK_S_PIX_MAIN", 48 + "CLK_S_PIX_AUX", "CLK_S_STFE_FRC_0", 49 + "CLK_S_REF_MCRU", "CLK_S_SLAVE_MCRU", 50 + "CLK_S_TMDS_HDMI", "CLK_S_HDMI_REJECT_PLL", 51 + "CLK_S_THSENS"; 52 + }; 53 +
+83
Documentation/devicetree/bindings/clock/st/st,clkgen.txt
··· 1 + Binding for a Clockgen hardware block found on 2 + certain STMicroelectronics consumer electronics SoC devices. 3 + 4 + A Clockgen node can contain pll, divider or multiplexer nodes. 5 + 6 + Only the base address of the Clockgen is given here; this base 7 + address is common to all of its subnodes. 8 + 9 + clockgen_node { 10 + reg = <>; 11 + 12 + pll_node { 13 + ... 14 + }; 15 + 16 + prediv_node { 17 + ... 18 + }; 19 + 20 + divmux_node { 21 + ... 22 + }; 23 + 24 + quadfs_node { 25 + ... 26 + }; 27 + ... 28 + }; 29 + 30 + This binding uses the common clock binding[1]. 31 + Each subnode should use the binding described in [2]..[4] 32 + 33 + [1] Documentation/devicetree/bindings/clock/clock-bindings.txt 34 + [2] Documentation/devicetree/bindings/clock/st/st,clkgen-pll.txt 35 + [3] Documentation/devicetree/bindings/clock/st/st,clkgen-divmux.txt 36 + [4] Documentation/devicetree/bindings/clock/st/st,quadfs.txt 37 + 38 + Required properties: 39 + - reg : A Base address and length of the register set. 40 + 41 + Example: 42 + 43 + clockgenA@fee62000 { 44 + 45 + reg = <0xfee62000 0xb48>; 46 + 47 + CLK_S_A0_PLL: CLK_S_A0_PLL { 48 + #clock-cells = <1>; 49 + compatible = "st,clkgena-plls-c65"; 50 + 51 + clocks = <&CLK_SYSIN>; 52 + 53 + clock-output-names = "CLK_S_A0_PLL0_HS", 54 + "CLK_S_A0_PLL0_LS", 55 + "CLK_S_A0_PLL1"; 56 + }; 57 + 58 + CLK_S_A0_OSC_PREDIV: CLK_S_A0_OSC_PREDIV { 59 + #clock-cells = <0>; 60 + compatible = "st,clkgena-prediv-c65", 61 + "st,clkgena-prediv"; 62 + 63 + clocks = <&CLK_SYSIN>; 64 + 65 + clock-output-names = "CLK_S_A0_OSC_PREDIV"; 66 + }; 67 + 68 + CLK_S_A0_HS: CLK_S_A0_HS { 69 + #clock-cells = <1>; 70 + compatible = "st,clkgena-divmux-c65-hs", 71 + "st,clkgena-divmux"; 72 + 73 + clocks = <&CLK_S_A0_OSC_PREDIV>, 74 + <&CLK_S_A0_PLL 0>, /* PLL0 HS */ 75 + <&CLK_S_A0_PLL 2>; /* PLL1 */ 76 + 77 + clock-output-names = "CLK_S_FDMA_0", 78 + "CLK_S_FDMA_1", 79 + ""; /* CLK_S_JIT_SENSE */ 80 + /* Fourth output unused */ 81 + }; 82 + }; 83 +
+45
Documentation/devicetree/bindings/clock/st/st,quadfs.txt
··· 1 + Binding for a type of quad channel digital frequency synthesizer found on 2 + certain STMicroelectronics consumer electronics SoC devices. 3 + 4 + This version contains a programmable PLL which can generate up to 216, 432 5 + or 660MHz (from a 30MHz oscillator input) as the input to the digital 6 + synthesizers. 7 + 8 + This binding uses the common clock binding[1]. 9 + 10 + [1] Documentation/devicetree/bindings/clock/clock-bindings.txt 11 + 12 + Required properties: 13 + - compatible : shall be: 14 + "st,stih416-quadfs216", "st,quadfs" 15 + "st,stih416-quadfs432", "st,quadfs" 16 + "st,stih416-quadfs660-E", "st,quadfs" 17 + "st,stih416-quadfs660-F", "st,quadfs" 18 + 19 + - #clock-cells : from common clock binding; shall be set to 1. 20 + 21 + - reg : A Base address and length of the register set. 22 + 23 + - clocks : from common clock binding 24 + 25 + - clock-output-names : From common clock binding. The block has 4 26 + clock outputs but not all of them in a specific instance 27 + have to be used in the SoC. If a clock name is left as 28 + an empty string then no clock will be created for the 29 + output associated with that string index. If fewer than 30 + 4 strings are provided then no clocks will be created 31 + for the remaining outputs. 32 + 33 + Example: 34 + 35 + CLOCKGEN_E: CLOCKGEN_E { 36 + #clock-cells = <1>; 37 + compatible = "st,stih416-quadfs660-E", "st,quadfs"; 38 + reg = <0xfd3208bc 0xB0>; 39 + 40 + clocks = <&CLK_SYSIN>; 41 + clock-output-names = "CLK_M_PIX_MDTP_0", 42 + "CLK_M_PIX_MDTP_1", 43 + "CLK_M_PIX_MDTP_2", 44 + "CLK_M_MPELPC"; 45 + };
+79 -23
Documentation/devicetree/bindings/clock/sunxi.txt
··· 6 6 7 7 Required properties: 8 8 - compatible : shall be one of the following: 9 - "allwinner,sun4i-osc-clk" - for a gatable oscillator 10 - "allwinner,sun4i-pll1-clk" - for the main PLL clock and PLL4 9 + "allwinner,sun4i-a10-osc-clk" - for a gatable oscillator 10 + "allwinner,sun4i-a10-pll1-clk" - for the main PLL clock and PLL4 11 11 "allwinner,sun6i-a31-pll1-clk" - for the main PLL clock on A31 12 - "allwinner,sun4i-pll5-clk" - for the PLL5 clock 13 - "allwinner,sun4i-pll6-clk" - for the PLL6 clock 14 - "allwinner,sun4i-cpu-clk" - for the CPU multiplexer clock 15 - "allwinner,sun4i-axi-clk" - for the AXI clock 16 - "allwinner,sun4i-axi-gates-clk" - for the AXI gates 17 - "allwinner,sun4i-ahb-clk" - for the AHB clock 18 - "allwinner,sun4i-ahb-gates-clk" - for the AHB gates on A10 12 + "allwinner,sun4i-a10-pll5-clk" - for the PLL5 clock 13 + "allwinner,sun4i-a10-pll6-clk" - for the PLL6 clock 14 + "allwinner,sun6i-a31-pll6-clk" - for the PLL6 clock on A31 15 + "allwinner,sun4i-a10-cpu-clk" - for the CPU multiplexer clock 16 + "allwinner,sun4i-a10-axi-clk" - for the AXI clock 17 + "allwinner,sun4i-a10-axi-gates-clk" - for the AXI gates 18 + "allwinner,sun4i-a10-ahb-clk" - for the AHB clock 19 + "allwinner,sun4i-a10-ahb-gates-clk" - for the AHB gates on A10 19 20 "allwinner,sun5i-a13-ahb-gates-clk" - for the AHB gates on A13 20 21 "allwinner,sun5i-a10s-ahb-gates-clk" - for the AHB gates on A10s 21 22 "allwinner,sun7i-a20-ahb-gates-clk" - for the AHB gates on A20 22 23 "allwinner,sun6i-a31-ahb1-mux-clk" - for the AHB1 multiplexer on A31 23 24 "allwinner,sun6i-a31-ahb1-gates-clk" - for the AHB1 gates on A31 24 - "allwinner,sun4i-apb0-clk" - for the APB0 clock 25 - "allwinner,sun4i-apb0-gates-clk" - for the APB0 gates on A10 25 + "allwinner,sun4i-a10-apb0-clk" - for the APB0 clock 26 + "allwinner,sun4i-a10-apb0-gates-clk" - for the APB0 gates on A10 26 27 "allwinner,sun5i-a13-apb0-gates-clk" - for the APB0 gates on A13 27 28 "allwinner,sun5i-a10s-apb0-gates-clk" - for the APB0 gates on A10s 28 29 "allwinner,sun7i-a20-apb0-gates-clk" - for the APB0 gates on A20 29 - "allwinner,sun4i-apb1-clk" - for the APB1 clock 30 - "allwinner,sun4i-apb1-mux-clk" - for the APB1 clock muxing 31 - "allwinner,sun4i-apb1-gates-clk" - for the APB1 gates on A10 30 + "allwinner,sun4i-a10-apb1-clk" - for the APB1 clock 31 + "allwinner,sun4i-a10-apb1-mux-clk" - for the APB1 clock muxing 32 + "allwinner,sun4i-a10-apb1-gates-clk" - for the APB1 gates on A10 32 33 "allwinner,sun5i-a13-apb1-gates-clk" - for the APB1 gates on A13 33 34 "allwinner,sun5i-a10s-apb1-gates-clk" - for the APB1 gates on A10s 34 35 "allwinner,sun6i-a31-apb1-gates-clk" - for the APB1 gates on A31 35 36 "allwinner,sun7i-a20-apb1-gates-clk" - for the APB1 gates on A20 36 37 "allwinner,sun6i-a31-apb2-div-clk" - for the APB2 gates on A31 37 38 "allwinner,sun6i-a31-apb2-gates-clk" - for the APB2 gates on A31 38 - "allwinner,sun4i-mod0-clk" - for the module 0 family of clocks 39 + "allwinner,sun4i-a10-mod0-clk" - for the module 0 family of clocks 39 40 "allwinner,sun7i-a20-out-clk" - for the external output clocks 41 + "allwinner,sun7i-a20-gmac-clk" - for the GMAC clock module on A20/A31 42 + "allwinner,sun4i-a10-usb-clk" - for usb gates + resets on A10 / A20 43 + "allwinner,sun5i-a13-usb-clk" - for usb gates + resets on A13 40 44 41 45 Required properties for all clocks: 42 46 - reg : shall be the control register address for the clock. ··· 48 44 multiplexed clocks, the list order must match the hardware 49 45 programming order. 50 46 - #clock-cells : from common clock binding; shall be set to 0 except for 51 - "allwinner,*-gates-clk" where it shall be set to 1 47 + "allwinner,*-gates-clk", "allwinner,sun4i-pll5-clk" and 48 + "allwinner,sun4i-pll6-clk" where it shall be set to 1 49 + - clock-output-names : shall be the corresponding names of the outputs. 50 + If the clock module only has one output, the name shall be the 51 + module name. 52 52 53 - Additionally, "allwinner,*-gates-clk" clocks require: 54 - - clock-output-names : the corresponding gate names that the clock controls 53 + And "allwinner,*-usb-clk" clocks also require: 54 + - reset-cells : shall be set to 1 55 + 56 + For "allwinner,sun7i-a20-gmac-clk", the parent clocks shall be fixed rate 57 + dummy clocks at 25 MHz and 125 MHz, respectively. See example. 55 58 56 59 Clock consumers should specify the desired clocks they use with a 57 60 "clocks" phandle cell. Consumers that are using a gated clock should ··· 67 56 68 57 For example: 69 58 70 - osc24M: osc24M@01c20050 { 59 + osc24M: clk@01c20050 { 71 60 #clock-cells = <0>; 72 - compatible = "allwinner,sun4i-osc-clk"; 61 + compatible = "allwinner,sun4i-a10-osc-clk"; 73 62 reg = <0x01c20050 0x4>; 74 63 clocks = <&osc24M_fixed>; 64 + clock-output-names = "osc24M"; 75 65 }; 76 66 77 - pll1: pll1@01c20000 { 67 + pll1: clk@01c20000 { 78 68 #clock-cells = <0>; 79 - compatible = "allwinner,sun4i-pll1-clk"; 69 + compatible = "allwinner,sun4i-a10-pll1-clk"; 80 70 reg = <0x01c20000 0x4>; 81 71 clocks = <&osc24M>; 72 + clock-output-names = "pll1"; 73 + }; 74 + 75 + pll5: clk@01c20020 { 76 + #clock-cells = <1>; 77 + compatible = "allwinner,sun4i-pll5-clk"; 78 + reg = <0x01c20020 0x4>; 79 + clocks = <&osc24M>; 80 + clock-output-names = "pll5_ddr", "pll5_other"; 82 81 }; 83 82 84 83 cpu: cpu@01c20054 { 85 84 #clock-cells = <0>; 86 - compatible = "allwinner,sun4i-cpu-clk"; 85 + compatible = "allwinner,sun4i-a10-cpu-clk"; 87 86 reg = <0x01c20054 0x4>; 88 87 clocks = <&osc32k>, <&osc24M>, <&pll1>; 88 + clock-output-names = "cpu"; 89 + }; 90 + 91 + mmc0_clk: clk@01c20088 { 92 + #clock-cells = <0>; 93 + compatible = "allwinner,sun4i-mod0-clk"; 94 + reg = <0x01c20088 0x4>; 95 + clocks = <&osc24M>, <&pll6 1>, <&pll5 1>; 96 + clock-output-names = "mmc0"; 97 + }; 98 + 99 + mii_phy_tx_clk: clk@2 { 100 + #clock-cells = <0>; 101 + compatible = "fixed-clock"; 102 + clock-frequency = <25000000>; 103 + clock-output-names = "mii_phy_tx"; 104 + }; 105 + 106 + gmac_int_tx_clk: clk@3 { 107 + #clock-cells = <0>; 108 + compatible = "fixed-clock"; 109 + clock-frequency = <125000000>; 110 + clock-output-names = "gmac_int_tx"; 111 + }; 112 + 113 + gmac_clk: clk@01c20164 { 114 + #clock-cells = <0>; 115 + compatible = "allwinner,sun7i-a20-gmac-clk"; 116 + reg = <0x01c20164 0x4>; 117 + /* 118 + * The first clock must be fixed at 25MHz; 119 + * the second clock must be fixed at 125MHz 120 + */ 121 + clocks = <&mii_phy_tx_clk>, <&gmac_int_tx_clk>; 122 + clock-output-names = "gmac"; 89 123 };
+1 -1
MAINTAINERS
··· 2318 2318 2319 2319 COMMON CLK FRAMEWORK 2320 2320 M: Mike Turquette <mturquette@linaro.org> 2321 - L: linux-arm-kernel@lists.infradead.org (same as CLK API & CLKDEV) 2321 + L: linux-kernel@vger.kernel.org 2322 2322 T: git git://git.linaro.org/people/mturquette/linux.git 2323 2323 S: Maintained 2324 2324 F: drivers/clk/
+1
arch/arm/boot/dts/socfpga.dtsi
··· 424 424 compatible = "altr,socfpga-gate-clk"; 425 425 clocks = <&f2s_periph_ref_clk>, <&main_nand_sdmmc_clk>, <&per_nand_mmc_clk>; 426 426 clk-gate = <0xa0 8>; 427 + clk-phase = <0 135>; 427 428 }; 428 429 429 430 nand_x_clk: nand_x_clk {
-5
arch/arm/mach-socfpga/socfpga.c
··· 29 29 void __iomem *socfpga_scu_base_addr = ((void __iomem *)(SOCFPGA_SCU_VIRT_BASE)); 30 30 void __iomem *sys_manager_base_addr; 31 31 void __iomem *rst_manager_base_addr; 32 - void __iomem *clk_mgr_base_addr; 33 32 unsigned long cpu1start_addr; 34 33 35 34 static struct map_desc scu_io_desc __initdata = { ··· 77 78 78 79 np = of_find_compatible_node(NULL, NULL, "altr,rst-mgr"); 79 80 rst_manager_base_addr = of_iomap(np, 0); 80 - 81 - np = of_find_compatible_node(NULL, NULL, "altr,clk-mgr"); 82 - clk_mgr_base_addr = of_iomap(np, 0); 83 81 } 84 82 85 83 static void __init socfpga_init_irq(void) ··· 102 106 { 103 107 l2x0_of_init(0, ~0UL); 104 108 of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); 105 - socfpga_init_clocks(); 106 109 } 107 110 108 111 static const char *altera_dt_match[] = {
+4 -2
drivers/clk/Kconfig
··· 65 65 clock generators. 66 66 67 67 config COMMON_CLK_S2MPS11 68 - tristate "Clock driver for S2MPS11 MFD" 68 + tristate "Clock driver for S2MPS11/S5M8767 MFD" 69 69 depends on MFD_SEC_CORE 70 70 ---help--- 71 - This driver supports S2MPS11 crystal oscillator clock. 71 + This driver supports S2MPS11/S5M8767 crystal oscillator clock. These 72 + multi-function devices have 3 fixed-rate oscillators, clocked at 73 + 32KHz each. 72 74 73 75 config CLK_TWL6040 74 76 tristate "External McPDM functional clock from twl6040"
+3
drivers/clk/Makefile
··· 17 17 obj-$(CONFIG_ARCH_HIGHBANK) += clk-highbank.o 18 18 obj-$(CONFIG_MACH_LOONGSON1) += clk-ls1x.o 19 19 obj-$(CONFIG_COMMON_CLK_MAX77686) += clk-max77686.o 20 + obj-$(CONFIG_ARCH_MOXART) += clk-moxart.o 20 21 obj-$(CONFIG_ARCH_NOMADIK) += clk-nomadik.o 21 22 obj-$(CONFIG_ARCH_NSPIRE) += clk-nspire.o 22 23 obj-$(CONFIG_CLK_PPC_CORENET) += clk-ppc-corenet.o ··· 32 31 obj-$(CONFIG_COMMON_CLK_AT91) += at91/ 33 32 obj-$(CONFIG_ARCH_BCM_MOBILE) += bcm/ 34 33 obj-$(CONFIG_ARCH_HI3xxx) += hisilicon/ 34 + obj-$(CONFIG_ARCH_HIP04) += hisilicon/ 35 35 obj-$(CONFIG_COMMON_CLK_KEYSTONE) += keystone/ 36 36 ifeq ($(CONFIG_COMMON_CLK), y) 37 37 obj-$(CONFIG_ARCH_MMP) += mmp/ ··· 46 44 obj-$(CONFIG_ARCH_SIRF) += sirf/ 47 45 obj-$(CONFIG_ARCH_SOCFPGA) += socfpga/ 48 46 obj-$(CONFIG_PLAT_SPEAR) += spear/ 47 + obj-$(CONFIG_ARCH_STI) += st/ 49 48 obj-$(CONFIG_ARCH_SUNXI) += sunxi/ 50 49 obj-$(CONFIG_ARCH_TEGRA) += tegra/ 51 50 obj-$(CONFIG_ARCH_OMAP2PLUS) += ti/
+61 -141
drivers/clk/at91/clk-programmable.c
··· 13 13 #include <linux/clk/at91_pmc.h> 14 14 #include <linux/of.h> 15 15 #include <linux/of_address.h> 16 - #include <linux/of_irq.h> 17 16 #include <linux/io.h> 18 17 #include <linux/wait.h> 19 18 #include <linux/sched.h> 20 - #include <linux/interrupt.h> 21 - #include <linux/irq.h> 22 19 23 20 #include "pmc.h" 24 21 ··· 35 38 struct clk_programmable { 36 39 struct clk_hw hw; 37 40 struct at91_pmc *pmc; 38 - unsigned int irq; 39 - wait_queue_head_t wait; 40 41 u8 id; 41 - u8 css; 42 - u8 pres; 43 - u8 slckmck; 44 42 const struct clk_programmable_layout *layout; 45 43 }; 46 44 47 45 #define to_clk_programmable(hw) container_of(hw, struct clk_programmable, hw) 48 46 49 - 50 - static irqreturn_t clk_programmable_irq_handler(int irq, void *dev_id) 51 - { 52 - struct clk_programmable *prog = (struct clk_programmable *)dev_id; 53 - 54 - wake_up(&prog->wait); 55 - 56 - return IRQ_HANDLED; 57 - } 58 - 59 - static int clk_programmable_prepare(struct clk_hw *hw) 60 - { 61 - u32 tmp; 62 - struct clk_programmable *prog = to_clk_programmable(hw); 63 - struct at91_pmc *pmc = prog->pmc; 64 - const struct clk_programmable_layout *layout = prog->layout; 65 - u8 id = prog->id; 66 - u32 mask = PROG_STATUS_MASK(id); 67 - 68 - tmp = prog->css | (prog->pres << layout->pres_shift); 69 - if (layout->have_slck_mck && prog->slckmck) 70 - tmp |= AT91_PMC_CSSMCK_MCK; 71 - 72 - pmc_write(pmc, AT91_PMC_PCKR(id), tmp); 73 - 74 - while (!(pmc_read(pmc, AT91_PMC_SR) & mask)) 75 - wait_event(prog->wait, pmc_read(pmc, AT91_PMC_SR) & mask); 76 - 77 - return 0; 78 - } 79 - 80 - static int clk_programmable_is_ready(struct clk_hw *hw) 81 - { 82 - struct clk_programmable *prog = to_clk_programmable(hw); 83 - struct at91_pmc *pmc = prog->pmc; 84 - 85 - return !!(pmc_read(pmc, AT91_PMC_SR) & AT91_PMC_PCKR(prog->id)); 86 - } 87 - 88 47 static unsigned long clk_programmable_recalc_rate(struct clk_hw *hw, 89 48 unsigned long parent_rate) 90 49 { 91 - u32 tmp; 50 + u32 pres; 92 51 struct clk_programmable 
*prog = to_clk_programmable(hw); 93 52 struct at91_pmc *pmc = prog->pmc; 94 53 const struct clk_programmable_layout *layout = prog->layout; 95 54 96 - tmp = pmc_read(pmc, AT91_PMC_PCKR(prog->id)); 97 - prog->pres = (tmp >> layout->pres_shift) & PROG_PRES_MASK; 98 - 99 - return parent_rate >> prog->pres; 55 + pres = (pmc_read(pmc, AT91_PMC_PCKR(prog->id)) >> layout->pres_shift) & 56 + PROG_PRES_MASK; 57 + return parent_rate >> pres; 100 58 } 101 59 102 - static long clk_programmable_round_rate(struct clk_hw *hw, unsigned long rate, 103 - unsigned long *parent_rate) 60 + static long clk_programmable_determine_rate(struct clk_hw *hw, 61 + unsigned long rate, 62 + unsigned long *best_parent_rate, 63 + struct clk **best_parent_clk) 104 64 { 105 - unsigned long best_rate = *parent_rate; 106 - unsigned long best_diff; 107 - unsigned long new_diff; 108 - unsigned long cur_rate; 109 - int shift = shift; 65 + struct clk *parent = NULL; 66 + long best_rate = -EINVAL; 67 + unsigned long parent_rate; 68 + unsigned long tmp_rate; 69 + int shift; 70 + int i; 110 71 111 - if (rate > *parent_rate) 112 - return *parent_rate; 113 - else 114 - best_diff = *parent_rate - rate; 72 + for (i = 0; i < __clk_get_num_parents(hw->clk); i++) { 73 + parent = clk_get_parent_by_index(hw->clk, i); 74 + if (!parent) 75 + continue; 115 76 116 - if (!best_diff) 117 - return best_rate; 118 - 119 - for (shift = 1; shift < PROG_PRES_MASK; shift++) { 120 - cur_rate = *parent_rate >> shift; 121 - 122 - if (cur_rate > rate) 123 - new_diff = cur_rate - rate; 124 - else 125 - new_diff = rate - cur_rate; 126 - 127 - if (!new_diff) 128 - return cur_rate; 129 - 130 - if (new_diff < best_diff) { 131 - best_diff = new_diff; 132 - best_rate = cur_rate; 77 + parent_rate = __clk_get_rate(parent); 78 + for (shift = 0; shift < PROG_PRES_MASK; shift++) { 79 + tmp_rate = parent_rate >> shift; 80 + if (tmp_rate <= rate) 81 + break; 133 82 } 134 83 135 - if (rate > cur_rate) 84 + if (tmp_rate > rate) 85 + continue; 86 + 
87 + if (best_rate < 0 || (rate - tmp_rate) < (rate - best_rate)) { 88 + best_rate = tmp_rate; 89 + *best_parent_rate = parent_rate; 90 + *best_parent_clk = parent; 91 + } 92 + 93 + if (!best_rate) 136 94 break; 137 95 } 138 96 ··· 98 146 { 99 147 struct clk_programmable *prog = to_clk_programmable(hw); 100 148 const struct clk_programmable_layout *layout = prog->layout; 149 + struct at91_pmc *pmc = prog->pmc; 150 + u32 tmp = pmc_read(pmc, AT91_PMC_PCKR(prog->id)) & ~layout->css_mask; 151 + 152 + if (layout->have_slck_mck) 153 + tmp &= AT91_PMC_CSSMCK_MCK; 154 + 101 155 if (index > layout->css_mask) { 102 156 if (index > PROG_MAX_RM9200_CSS && layout->have_slck_mck) { 103 - prog->css = 0; 104 - prog->slckmck = 1; 157 + tmp |= AT91_PMC_CSSMCK_MCK; 105 158 return 0; 106 159 } else { 107 160 return -EINVAL; 108 161 } 109 162 } 110 163 111 - prog->css = index; 164 + pmc_write(pmc, AT91_PMC_PCKR(prog->id), tmp | index); 112 165 return 0; 113 166 } 114 167 ··· 126 169 const struct clk_programmable_layout *layout = prog->layout; 127 170 128 171 tmp = pmc_read(pmc, AT91_PMC_PCKR(prog->id)); 129 - prog->css = tmp & layout->css_mask; 130 - ret = prog->css; 131 - if (layout->have_slck_mck) { 132 - prog->slckmck = !!(tmp & AT91_PMC_CSSMCK_MCK); 133 - if (prog->slckmck && !ret) 134 - ret = PROG_MAX_RM9200_CSS + 1; 135 - } 172 + ret = tmp & layout->css_mask; 173 + if (layout->have_slck_mck && (tmp & AT91_PMC_CSSMCK_MCK) && !ret) 174 + ret = PROG_MAX_RM9200_CSS + 1; 136 175 137 176 return ret; 138 177 } ··· 137 184 unsigned long parent_rate) 138 185 { 139 186 struct clk_programmable *prog = to_clk_programmable(hw); 140 - unsigned long best_rate = parent_rate; 141 - unsigned long best_diff; 142 - unsigned long new_diff; 143 - unsigned long cur_rate; 187 + struct at91_pmc *pmc = prog->pmc; 188 + const struct clk_programmable_layout *layout = prog->layout; 189 + unsigned long div = parent_rate / rate; 144 190 int shift = 0; 191 + u32 tmp = pmc_read(pmc, AT91_PMC_PCKR(prog->id)) & 
192 + ~(PROG_PRES_MASK << layout->pres_shift); 145 193 146 - if (rate > parent_rate) 147 - return parent_rate; 148 - else 149 - best_diff = parent_rate - rate; 194 + if (!div) 195 + return -EINVAL; 150 196 151 - if (!best_diff) { 152 - prog->pres = shift; 153 - return 0; 154 - } 197 + shift = fls(div) - 1; 155 198 156 - for (shift = 1; shift < PROG_PRES_MASK; shift++) { 157 - cur_rate = parent_rate >> shift; 199 + if (div != (1<<shift)) 200 + return -EINVAL; 158 201 159 - if (cur_rate > rate) 160 - new_diff = cur_rate - rate; 161 - else 162 - new_diff = rate - cur_rate; 202 + if (shift >= PROG_PRES_MASK) 203 + return -EINVAL; 163 204 164 - if (!new_diff) 165 - break; 205 + pmc_write(pmc, AT91_PMC_PCKR(prog->id), 206 + tmp | (shift << layout->pres_shift)); 166 207 167 - if (new_diff < best_diff) { 168 - best_diff = new_diff; 169 - best_rate = cur_rate; 170 - } 171 - 172 - if (rate > cur_rate) 173 - break; 174 - } 175 - 176 - prog->pres = shift; 177 208 return 0; 178 209 } 179 210 180 211 static const struct clk_ops programmable_ops = { 181 - .prepare = clk_programmable_prepare, 182 - .is_prepared = clk_programmable_is_ready, 183 212 .recalc_rate = clk_programmable_recalc_rate, 184 - .round_rate = clk_programmable_round_rate, 213 + .determine_rate = clk_programmable_determine_rate, 185 214 .get_parent = clk_programmable_get_parent, 186 215 .set_parent = clk_programmable_set_parent, 187 216 .set_rate = clk_programmable_set_rate, 188 217 }; 189 218 190 219 static struct clk * __init 191 - at91_clk_register_programmable(struct at91_pmc *pmc, unsigned int irq, 220 + at91_clk_register_programmable(struct at91_pmc *pmc, 192 221 const char *name, const char **parent_names, 193 222 u8 num_parents, u8 id, 194 223 const struct clk_programmable_layout *layout) 195 224 { 196 - int ret; 197 225 struct clk_programmable *prog; 198 226 struct clk *clk = NULL; 199 227 struct clk_init_data init; 200 - char irq_name[11]; 201 228 202 229 if (id > PROG_ID_MAX) 203 230 return 
ERR_PTR(-EINVAL); ··· 196 263 prog->layout = layout; 197 264 prog->hw.init = &init; 198 265 prog->pmc = pmc; 199 - prog->irq = irq; 200 - init_waitqueue_head(&prog->wait); 201 - irq_set_status_flags(prog->irq, IRQ_NOAUTOEN); 202 - snprintf(irq_name, sizeof(irq_name), "clk-prog%d", id); 203 - ret = request_irq(prog->irq, clk_programmable_irq_handler, 204 - IRQF_TRIGGER_HIGH, irq_name, prog); 205 - if (ret) 206 - return ERR_PTR(ret); 207 266 208 267 clk = clk_register(NULL, &prog->hw); 209 268 if (IS_ERR(clk)) ··· 229 304 int num; 230 305 u32 id; 231 306 int i; 232 - unsigned int irq; 233 307 struct clk *clk; 234 308 int num_parents; 235 309 const char *parent_names[PROG_SOURCE_MAX]; ··· 256 332 if (of_property_read_string(np, "clock-output-names", &name)) 257 333 name = progclknp->name; 258 334 259 - irq = irq_of_parse_and_map(progclknp, 0); 260 - if (!irq) 261 - continue; 262 - 263 - clk = at91_clk_register_programmable(pmc, irq, name, 335 + clk = at91_clk_register_programmable(pmc, name, 264 336 parent_names, num_parents, 265 337 id, layout); 266 338 if (IS_ERR(clk))
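The clk-programmable diff above replaces `.round_rate` with a `.determine_rate` that scans each candidate parent for the smallest power-of-two prescaler shift whose output does not exceed the requested rate. A minimal standalone sketch of that inner search, with `PROG_PRES_MASK` assumed to be the driver's 3-bit mask value:

```c
/* Hedged sketch of the prescaler search in clk_programmable_determine_rate()
 * above: walk power-of-two shifts until the prescaled parent rate drops to
 * or below the requested rate.  PROG_PRES_MASK = 0x7 is an assumption here,
 * standing in for the driver's real constant. */
#define PROG_PRES_MASK 0x7

static unsigned long best_prescaled_rate(unsigned long parent_rate,
					 unsigned long rate)
{
	unsigned long tmp_rate = parent_rate;
	int shift;

	for (shift = 0; shift < PROG_PRES_MASK; shift++) {
		tmp_rate = parent_rate >> shift;
		if (tmp_rate <= rate)	/* first shift that fits wins */
			break;
	}
	return tmp_rate;	/* may still exceed rate if no shift fits */
}
```

In the real driver a result still above the requested rate causes that parent to be skipped, and the closest fitting result across all parents is reported back through `*best_parent_rate` and `*best_parent_clk`.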
+65 -11
drivers/clk/at91/clk-system.c
··· 14 14 #include <linux/of.h> 15 15 #include <linux/of_address.h> 16 16 #include <linux/io.h> 17 + #include <linux/irq.h> 18 + #include <linux/of_irq.h> 19 + #include <linux/interrupt.h> 20 + #include <linux/wait.h> 21 + #include <linux/sched.h> 17 22 18 23 #include "pmc.h" 19 24 ··· 30 25 struct clk_system { 31 26 struct clk_hw hw; 32 27 struct at91_pmc *pmc; 28 + unsigned int irq; 29 + wait_queue_head_t wait; 33 30 u8 id; 34 31 }; 35 32 36 - static int clk_system_enable(struct clk_hw *hw) 33 + static inline int is_pck(int id) 34 + { 35 + return (id >= 8) && (id <= 15); 36 + } 37 + static irqreturn_t clk_system_irq_handler(int irq, void *dev_id) 38 + { 39 + struct clk_system *sys = (struct clk_system *)dev_id; 40 + 41 + wake_up(&sys->wait); 42 + disable_irq_nosync(sys->irq); 43 + 44 + return IRQ_HANDLED; 45 + } 46 + 47 + static int clk_system_prepare(struct clk_hw *hw) 37 48 { 38 49 struct clk_system *sys = to_clk_system(hw); 39 50 struct at91_pmc *pmc = sys->pmc; 51 + u32 mask = 1 << sys->id; 40 52 41 - pmc_write(pmc, AT91_PMC_SCER, 1 << sys->id); 53 + pmc_write(pmc, AT91_PMC_SCER, mask); 54 + 55 + if (!is_pck(sys->id)) 56 + return 0; 57 + 58 + while (!(pmc_read(pmc, AT91_PMC_SR) & mask)) { 59 + if (sys->irq) { 60 + enable_irq(sys->irq); 61 + wait_event(sys->wait, 62 + pmc_read(pmc, AT91_PMC_SR) & mask); 63 + } else 64 + cpu_relax(); 65 + } 42 66 return 0; 43 67 } 44 68 45 - static void clk_system_disable(struct clk_hw *hw) 69 + static void clk_system_unprepare(struct clk_hw *hw) 46 70 { 47 71 struct clk_system *sys = to_clk_system(hw); 48 72 struct at91_pmc *pmc = sys->pmc; ··· 79 45 pmc_write(pmc, AT91_PMC_SCDR, 1 << sys->id); 80 46 } 81 47 82 - static int clk_system_is_enabled(struct clk_hw *hw) 48 + static int clk_system_is_prepared(struct clk_hw *hw) 83 49 { 84 50 struct clk_system *sys = to_clk_system(hw); 85 51 struct at91_pmc *pmc = sys->pmc; 86 52 87 - return !!(pmc_read(pmc, AT91_PMC_SCSR) & (1 << sys->id)); 53 + if (!(pmc_read(pmc, AT91_PMC_SCSR) & 
(1 << sys->id))) 54 + return 0; 55 + 56 + if (!is_pck(sys->id)) 57 + return 1; 58 + 59 + return !!(pmc_read(pmc, AT91_PMC_SR) & (1 << sys->id)); 88 60 } 89 61 90 62 static const struct clk_ops system_ops = { 91 - .enable = clk_system_enable, 92 - .disable = clk_system_disable, 93 - .is_enabled = clk_system_is_enabled, 63 + .prepare = clk_system_prepare, 64 + .unprepare = clk_system_unprepare, 65 + .is_prepared = clk_system_is_prepared, 94 66 }; 95 67 96 68 static struct clk * __init 97 69 at91_clk_register_system(struct at91_pmc *pmc, const char *name, 98 - const char *parent_name, u8 id) 70 + const char *parent_name, u8 id, int irq) 99 71 { 100 72 struct clk_system *sys; 101 73 struct clk *clk = NULL; 102 74 struct clk_init_data init; 75 + int ret; 103 76 104 77 if (!parent_name || id > SYSTEM_MAX_ID) 105 78 return ERR_PTR(-EINVAL); ··· 125 84 * (see drivers/memory) which would request and enable the ddrck clock. 126 85 * When this is done we will be able to remove CLK_IGNORE_UNUSED flag. 
127 86 */ 128 - init.flags = CLK_IGNORE_UNUSED; 87 + init.flags = CLK_SET_RATE_PARENT | CLK_IGNORE_UNUSED; 129 88 130 89 sys->id = id; 131 90 sys->hw.init = &init; 132 91 sys->pmc = pmc; 92 + sys->irq = irq; 93 + if (irq) { 94 + init_waitqueue_head(&sys->wait); 95 + irq_set_status_flags(sys->irq, IRQ_NOAUTOEN); 96 + ret = request_irq(sys->irq, clk_system_irq_handler, 97 + IRQF_TRIGGER_HIGH, name, sys); 98 + if (ret) 99 + return ERR_PTR(ret); 100 + } 133 101 134 102 clk = clk_register(NULL, &sys->hw); 135 103 if (IS_ERR(clk)) ··· 151 101 of_at91_clk_sys_setup(struct device_node *np, struct at91_pmc *pmc) 152 102 { 153 103 int num; 104 + int irq = 0; 154 105 u32 id; 155 106 struct clk *clk; 156 107 const char *name; ··· 169 118 if (of_property_read_string(np, "clock-output-names", &name)) 170 119 name = sysclknp->name; 171 120 121 + if (is_pck(id)) 122 + irq = irq_of_parse_and_map(sysclknp, 0); 123 + 172 124 parent_name = of_clk_get_parent_name(sysclknp, 0); 173 125 174 - clk = at91_clk_register_system(pmc, name, parent_name, id); 126 + clk = at91_clk_register_system(pmc, name, parent_name, id, irq); 175 127 if (IS_ERR(clk)) 176 128 continue; 177 129
+268 -42
drivers/clk/clk-axi-clkgen.c
··· 17 17 #include <linux/module.h> 18 18 #include <linux/err.h> 19 19 20 - #define AXI_CLKGEN_REG_UPDATE_ENABLE 0x04 21 - #define AXI_CLKGEN_REG_CLK_OUT1 0x08 22 - #define AXI_CLKGEN_REG_CLK_OUT2 0x0c 23 - #define AXI_CLKGEN_REG_CLK_DIV 0x10 24 - #define AXI_CLKGEN_REG_CLK_FB1 0x14 25 - #define AXI_CLKGEN_REG_CLK_FB2 0x18 26 - #define AXI_CLKGEN_REG_LOCK1 0x1c 27 - #define AXI_CLKGEN_REG_LOCK2 0x20 28 - #define AXI_CLKGEN_REG_LOCK3 0x24 29 - #define AXI_CLKGEN_REG_FILTER1 0x28 30 - #define AXI_CLKGEN_REG_FILTER2 0x2c 20 + #define AXI_CLKGEN_V1_REG_UPDATE_ENABLE 0x04 21 + #define AXI_CLKGEN_V1_REG_CLK_OUT1 0x08 22 + #define AXI_CLKGEN_V1_REG_CLK_OUT2 0x0c 23 + #define AXI_CLKGEN_V1_REG_CLK_DIV 0x10 24 + #define AXI_CLKGEN_V1_REG_CLK_FB1 0x14 25 + #define AXI_CLKGEN_V1_REG_CLK_FB2 0x18 26 + #define AXI_CLKGEN_V1_REG_LOCK1 0x1c 27 + #define AXI_CLKGEN_V1_REG_LOCK2 0x20 28 + #define AXI_CLKGEN_V1_REG_LOCK3 0x24 29 + #define AXI_CLKGEN_V1_REG_FILTER1 0x28 30 + #define AXI_CLKGEN_V1_REG_FILTER2 0x2c 31 + 32 + #define AXI_CLKGEN_V2_REG_RESET 0x40 33 + #define AXI_CLKGEN_V2_REG_DRP_CNTRL 0x70 34 + #define AXI_CLKGEN_V2_REG_DRP_STATUS 0x74 35 + 36 + #define AXI_CLKGEN_V2_RESET_MMCM_ENABLE BIT(1) 37 + #define AXI_CLKGEN_V2_RESET_ENABLE BIT(0) 38 + 39 + #define AXI_CLKGEN_V2_DRP_CNTRL_SEL BIT(29) 40 + #define AXI_CLKGEN_V2_DRP_CNTRL_READ BIT(28) 41 + 42 + #define AXI_CLKGEN_V2_DRP_STATUS_BUSY BIT(16) 43 + 44 + #define MMCM_REG_CLKOUT0_1 0x08 45 + #define MMCM_REG_CLKOUT0_2 0x09 46 + #define MMCM_REG_CLK_FB1 0x14 47 + #define MMCM_REG_CLK_FB2 0x15 48 + #define MMCM_REG_CLK_DIV 0x16 49 + #define MMCM_REG_LOCK1 0x18 50 + #define MMCM_REG_LOCK2 0x19 51 + #define MMCM_REG_LOCK3 0x1a 52 + #define MMCM_REG_FILTER1 0x4e 53 + #define MMCM_REG_FILTER2 0x4f 54 + 55 + struct axi_clkgen; 56 + 57 + struct axi_clkgen_mmcm_ops { 58 + void (*enable)(struct axi_clkgen *axi_clkgen, bool enable); 59 + int (*write)(struct axi_clkgen *axi_clkgen, unsigned int reg, 60 + unsigned int val, unsigned 
int mask); 61 + int (*read)(struct axi_clkgen *axi_clkgen, unsigned int reg, 62 + unsigned int *val); 63 + }; 31 64 32 65 struct axi_clkgen { 33 66 void __iomem *base; 67 + const struct axi_clkgen_mmcm_ops *mmcm_ops; 34 68 struct clk_hw clk_hw; 35 69 }; 70 + 71 + static void axi_clkgen_mmcm_enable(struct axi_clkgen *axi_clkgen, 72 + bool enable) 73 + { 74 + axi_clkgen->mmcm_ops->enable(axi_clkgen, enable); 75 + } 76 + 77 + static int axi_clkgen_mmcm_write(struct axi_clkgen *axi_clkgen, 78 + unsigned int reg, unsigned int val, unsigned int mask) 79 + { 80 + return axi_clkgen->mmcm_ops->write(axi_clkgen, reg, val, mask); 81 + } 82 + 83 + static int axi_clkgen_mmcm_read(struct axi_clkgen *axi_clkgen, 84 + unsigned int reg, unsigned int *val) 85 + { 86 + return axi_clkgen->mmcm_ops->read(axi_clkgen, reg, val); 87 + } 36 88 37 89 static uint32_t axi_clkgen_lookup_filter(unsigned int m) 38 90 { ··· 208 156 *val = readl(axi_clkgen->base + reg); 209 157 } 210 158 159 + static unsigned int axi_clkgen_v1_map_mmcm_reg(unsigned int reg) 160 + { 161 + switch (reg) { 162 + case MMCM_REG_CLKOUT0_1: 163 + return AXI_CLKGEN_V1_REG_CLK_OUT1; 164 + case MMCM_REG_CLKOUT0_2: 165 + return AXI_CLKGEN_V1_REG_CLK_OUT2; 166 + case MMCM_REG_CLK_FB1: 167 + return AXI_CLKGEN_V1_REG_CLK_FB1; 168 + case MMCM_REG_CLK_FB2: 169 + return AXI_CLKGEN_V1_REG_CLK_FB2; 170 + case MMCM_REG_CLK_DIV: 171 + return AXI_CLKGEN_V1_REG_CLK_DIV; 172 + case MMCM_REG_LOCK1: 173 + return AXI_CLKGEN_V1_REG_LOCK1; 174 + case MMCM_REG_LOCK2: 175 + return AXI_CLKGEN_V1_REG_LOCK2; 176 + case MMCM_REG_LOCK3: 177 + return AXI_CLKGEN_V1_REG_LOCK3; 178 + case MMCM_REG_FILTER1: 179 + return AXI_CLKGEN_V1_REG_FILTER1; 180 + case MMCM_REG_FILTER2: 181 + return AXI_CLKGEN_V1_REG_FILTER2; 182 + default: 183 + return 0; 184 + } 185 + } 186 + 187 + static int axi_clkgen_v1_mmcm_write(struct axi_clkgen *axi_clkgen, 188 + unsigned int reg, unsigned int val, unsigned int mask) 189 + { 190 + reg = axi_clkgen_v1_map_mmcm_reg(reg); 191 + 
if (reg == 0) 192 + return -EINVAL; 193 + 194 + axi_clkgen_write(axi_clkgen, reg, val); 195 + 196 + return 0; 197 + } 198 + 199 + static int axi_clkgen_v1_mmcm_read(struct axi_clkgen *axi_clkgen, 200 + unsigned int reg, unsigned int *val) 201 + { 202 + reg = axi_clkgen_v1_map_mmcm_reg(reg); 203 + if (reg == 0) 204 + return -EINVAL; 205 + 206 + axi_clkgen_read(axi_clkgen, reg, val); 207 + 208 + return 0; 209 + } 210 + 211 + static void axi_clkgen_v1_mmcm_enable(struct axi_clkgen *axi_clkgen, 212 + bool enable) 213 + { 214 + axi_clkgen_write(axi_clkgen, AXI_CLKGEN_V1_REG_UPDATE_ENABLE, enable); 215 + } 216 + 217 + static const struct axi_clkgen_mmcm_ops axi_clkgen_v1_mmcm_ops = { 218 + .write = axi_clkgen_v1_mmcm_write, 219 + .read = axi_clkgen_v1_mmcm_read, 220 + .enable = axi_clkgen_v1_mmcm_enable, 221 + }; 222 + 223 + static int axi_clkgen_wait_non_busy(struct axi_clkgen *axi_clkgen) 224 + { 225 + unsigned int timeout = 10000; 226 + unsigned int val; 227 + 228 + do { 229 + axi_clkgen_read(axi_clkgen, AXI_CLKGEN_V2_REG_DRP_STATUS, &val); 230 + } while ((val & AXI_CLKGEN_V2_DRP_STATUS_BUSY) && --timeout); 231 + 232 + if (val & AXI_CLKGEN_V2_DRP_STATUS_BUSY) 233 + return -EIO; 234 + 235 + return val & 0xffff; 236 + } 237 + 238 + static int axi_clkgen_v2_mmcm_read(struct axi_clkgen *axi_clkgen, 239 + unsigned int reg, unsigned int *val) 240 + { 241 + unsigned int reg_val; 242 + int ret; 243 + 244 + ret = axi_clkgen_wait_non_busy(axi_clkgen); 245 + if (ret < 0) 246 + return ret; 247 + 248 + reg_val = AXI_CLKGEN_V2_DRP_CNTRL_SEL | AXI_CLKGEN_V2_DRP_CNTRL_READ; 249 + reg_val |= (reg << 16); 250 + 251 + axi_clkgen_write(axi_clkgen, AXI_CLKGEN_V2_REG_DRP_CNTRL, reg_val); 252 + 253 + ret = axi_clkgen_wait_non_busy(axi_clkgen); 254 + if (ret < 0) 255 + return ret; 256 + 257 + *val = ret; 258 + 259 + return 0; 260 + } 261 + 262 + static int axi_clkgen_v2_mmcm_write(struct axi_clkgen *axi_clkgen, 263 + unsigned int reg, unsigned int val, unsigned int mask) 264 + { 265 + 
unsigned int reg_val = 0; 266 + int ret; 267 + 268 + ret = axi_clkgen_wait_non_busy(axi_clkgen); 269 + if (ret < 0) 270 + return ret; 271 + 272 + if (mask != 0xffff) { 273 + axi_clkgen_v2_mmcm_read(axi_clkgen, reg, &reg_val); 274 + reg_val &= ~mask; 275 + } 276 + 277 + reg_val |= AXI_CLKGEN_V2_DRP_CNTRL_SEL | (reg << 16) | (val & mask); 278 + 279 + axi_clkgen_write(axi_clkgen, AXI_CLKGEN_V2_REG_DRP_CNTRL, reg_val); 280 + 281 + return 0; 282 + } 283 + 284 + static void axi_clkgen_v2_mmcm_enable(struct axi_clkgen *axi_clkgen, 285 + bool enable) 286 + { 287 + unsigned int val = AXI_CLKGEN_V2_RESET_ENABLE; 288 + 289 + if (enable) 290 + val |= AXI_CLKGEN_V2_RESET_MMCM_ENABLE; 291 + 292 + axi_clkgen_write(axi_clkgen, AXI_CLKGEN_V2_REG_RESET, val); 293 + } 294 + 295 + static const struct axi_clkgen_mmcm_ops axi_clkgen_v2_mmcm_ops = { 296 + .write = axi_clkgen_v2_mmcm_write, 297 + .read = axi_clkgen_v2_mmcm_read, 298 + .enable = axi_clkgen_v2_mmcm_enable, 299 + }; 300 + 211 301 static struct axi_clkgen *clk_hw_to_axi_clkgen(struct clk_hw *clk_hw) 212 302 { 213 303 return container_of(clk_hw, struct axi_clkgen, clk_hw); ··· 378 184 filter = axi_clkgen_lookup_filter(m - 1); 379 185 lock = axi_clkgen_lookup_lock(m - 1); 380 186 381 - axi_clkgen_write(axi_clkgen, AXI_CLKGEN_REG_UPDATE_ENABLE, 0); 382 - 383 187 axi_clkgen_calc_clk_params(dout, &low, &high, &edge, &nocount); 384 - axi_clkgen_write(axi_clkgen, AXI_CLKGEN_REG_CLK_OUT1, 385 - (high << 6) | low); 386 - axi_clkgen_write(axi_clkgen, AXI_CLKGEN_REG_CLK_OUT2, 387 - (edge << 7) | (nocount << 6)); 188 + axi_clkgen_mmcm_write(axi_clkgen, MMCM_REG_CLKOUT0_1, 189 + (high << 6) | low, 0xefff); 190 + axi_clkgen_mmcm_write(axi_clkgen, MMCM_REG_CLKOUT0_2, 191 + (edge << 7) | (nocount << 6), 0x03ff); 388 192 389 193 axi_clkgen_calc_clk_params(d, &low, &high, &edge, &nocount); 390 - axi_clkgen_write(axi_clkgen, AXI_CLKGEN_REG_CLK_DIV, 391 - (edge << 13) | (nocount << 12) | (high << 6) | low); 194 + 
axi_clkgen_mmcm_write(axi_clkgen, MMCM_REG_CLK_DIV, 195 + (edge << 13) | (nocount << 12) | (high << 6) | low, 0x3fff); 392 196 393 197 axi_clkgen_calc_clk_params(m, &low, &high, &edge, &nocount); 394 - axi_clkgen_write(axi_clkgen, AXI_CLKGEN_REG_CLK_FB1, 395 - (high << 6) | low); 396 - axi_clkgen_write(axi_clkgen, AXI_CLKGEN_REG_CLK_FB2, 397 - (edge << 7) | (nocount << 6)); 198 + axi_clkgen_mmcm_write(axi_clkgen, MMCM_REG_CLK_FB1, 199 + (high << 6) | low, 0xefff); 200 + axi_clkgen_mmcm_write(axi_clkgen, MMCM_REG_CLK_FB2, 201 + (edge << 7) | (nocount << 6), 0x03ff); 398 202 399 - axi_clkgen_write(axi_clkgen, AXI_CLKGEN_REG_LOCK1, lock & 0x3ff); 400 - axi_clkgen_write(axi_clkgen, AXI_CLKGEN_REG_LOCK2, 401 - (((lock >> 16) & 0x1f) << 10) | 0x1); 402 - axi_clkgen_write(axi_clkgen, AXI_CLKGEN_REG_LOCK3, 403 - (((lock >> 24) & 0x1f) << 10) | 0x3e9); 404 - axi_clkgen_write(axi_clkgen, AXI_CLKGEN_REG_FILTER1, filter >> 16); 405 - axi_clkgen_write(axi_clkgen, AXI_CLKGEN_REG_FILTER2, filter); 406 - 407 - axi_clkgen_write(axi_clkgen, AXI_CLKGEN_REG_UPDATE_ENABLE, 1); 203 + axi_clkgen_mmcm_write(axi_clkgen, MMCM_REG_LOCK1, lock & 0x3ff, 0x3ff); 204 + axi_clkgen_mmcm_write(axi_clkgen, MMCM_REG_LOCK2, 205 + (((lock >> 16) & 0x1f) << 10) | 0x1, 0x7fff); 206 + axi_clkgen_mmcm_write(axi_clkgen, MMCM_REG_LOCK3, 207 + (((lock >> 24) & 0x1f) << 10) | 0x3e9, 0x7fff); 208 + axi_clkgen_mmcm_write(axi_clkgen, MMCM_REG_FILTER1, filter >> 16, 0x9900); 209 + axi_clkgen_mmcm_write(axi_clkgen, MMCM_REG_FILTER2, filter, 0x9900); 408 210 409 211 return 0; 410 212 } ··· 426 236 unsigned int reg; 427 237 unsigned long long tmp; 428 238 429 - axi_clkgen_read(axi_clkgen, AXI_CLKGEN_REG_CLK_OUT1, &reg); 239 + axi_clkgen_mmcm_read(axi_clkgen, MMCM_REG_CLKOUT0_1, &reg); 430 240 dout = (reg & 0x3f) + ((reg >> 6) & 0x3f); 431 - axi_clkgen_read(axi_clkgen, AXI_CLKGEN_REG_CLK_DIV, &reg); 241 + axi_clkgen_mmcm_read(axi_clkgen, MMCM_REG_CLK_DIV, &reg); 432 242 d = (reg & 0x3f) + ((reg >> 6) & 0x3f); 433 - 
axi_clkgen_read(axi_clkgen, AXI_CLKGEN_REG_CLK_FB1, &reg); 243 + axi_clkgen_mmcm_read(axi_clkgen, MMCM_REG_CLK_FB1, &reg); 434 244 m = (reg & 0x3f) + ((reg >> 6) & 0x3f); 435 245 436 246 if (d == 0 || dout == 0) ··· 445 255 return tmp; 446 256 } 447 257 258 + static int axi_clkgen_enable(struct clk_hw *clk_hw) 259 + { 260 + struct axi_clkgen *axi_clkgen = clk_hw_to_axi_clkgen(clk_hw); 261 + 262 + axi_clkgen_mmcm_enable(axi_clkgen, true); 263 + 264 + return 0; 265 + } 266 + 267 + static void axi_clkgen_disable(struct clk_hw *clk_hw) 268 + { 269 + struct axi_clkgen *axi_clkgen = clk_hw_to_axi_clkgen(clk_hw); 270 + 271 + axi_clkgen_mmcm_enable(axi_clkgen, false); 272 + } 273 + 448 274 static const struct clk_ops axi_clkgen_ops = { 449 275 .recalc_rate = axi_clkgen_recalc_rate, 450 276 .round_rate = axi_clkgen_round_rate, 451 277 .set_rate = axi_clkgen_set_rate, 278 + .enable = axi_clkgen_enable, 279 + .disable = axi_clkgen_disable, 452 280 }; 281 + 282 + static const struct of_device_id axi_clkgen_ids[] = { 283 + { 284 + .compatible = "adi,axi-clkgen-1.00.a", 285 + .data = &axi_clkgen_v1_mmcm_ops 286 + }, { 287 + .compatible = "adi,axi-clkgen-2.00.a", 288 + .data = &axi_clkgen_v2_mmcm_ops, 289 + }, 290 + { }, 291 + }; 292 + MODULE_DEVICE_TABLE(of, axi_clkgen_ids); 453 293 454 294 static int axi_clkgen_probe(struct platform_device *pdev) 455 295 { 296 + const struct of_device_id *id; 456 297 struct axi_clkgen *axi_clkgen; 457 298 struct clk_init_data init; 458 299 const char *parent_name; ··· 491 270 struct resource *mem; 492 271 struct clk *clk; 493 272 273 + if (!pdev->dev.of_node) 274 + return -ENODEV; 275 + 276 + id = of_match_node(axi_clkgen_ids, pdev->dev.of_node); 277 + if (!id) 278 + return -ENODEV; 279 + 494 280 axi_clkgen = devm_kzalloc(&pdev->dev, sizeof(*axi_clkgen), GFP_KERNEL); 495 281 if (!axi_clkgen) 496 282 return -ENOMEM; 283 + 284 + axi_clkgen->mmcm_ops = id->data; 497 285 498 286 mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 499 287 
axi_clkgen->base = devm_ioremap_resource(&pdev->dev, mem); ··· 519 289 520 290 init.name = clk_name; 521 291 init.ops = &axi_clkgen_ops; 522 - init.flags = 0; 292 + init.flags = CLK_SET_RATE_GATE; 523 293 init.parent_names = &parent_name; 524 294 init.num_parents = 1; 295 + 296 + axi_clkgen_mmcm_enable(axi_clkgen, false); 525 297 526 298 axi_clkgen->clk_hw.init = &init; 527 299 clk = devm_clk_register(&pdev->dev, &axi_clkgen->clk_hw); ··· 540 308 541 309 return 0; 542 310 } 543 - 544 - static const struct of_device_id axi_clkgen_ids[] = { 545 - { .compatible = "adi,axi-clkgen-1.00.a" }, 546 - { }, 547 - }; 548 - MODULE_DEVICE_TABLE(of, axi_clkgen_ids); 549 311 550 312 static struct platform_driver axi_clkgen_driver = { 551 313 .driver = {
+5 -5
drivers/clk/clk-divider.c
··· 24 24 * Traits of this clock: 25 25 * prepare - clk_prepare only ensures that parents are prepared 26 26 * enable - clk_enable only ensures that parents are enabled 27 - * rate - rate is adjustable. clk->rate = parent->rate / divisor 27 + * rate - rate is adjustable. clk->rate = DIV_ROUND_UP(parent->rate / divisor) 28 28 * parent - fixed parent. No clk_set_parent support 29 29 */ 30 30 ··· 115 115 return parent_rate; 116 116 } 117 117 118 - return parent_rate / div; 118 + return DIV_ROUND_UP(parent_rate, div); 119 119 } 120 120 121 121 /* ··· 185 185 } 186 186 parent_rate = __clk_round_rate(__clk_get_parent(hw->clk), 187 187 MULT_ROUND_UP(rate, i)); 188 - now = parent_rate / i; 188 + now = DIV_ROUND_UP(parent_rate, i); 189 189 if (now <= rate && now > best) { 190 190 bestdiv = i; 191 191 best = now; ··· 207 207 int div; 208 208 div = clk_divider_bestdiv(hw, rate, prate); 209 209 210 - return *prate / div; 210 + return DIV_ROUND_UP(*prate, div); 211 211 } 212 212 213 213 static int clk_divider_set_rate(struct clk_hw *hw, unsigned long rate, ··· 218 218 unsigned long flags = 0; 219 219 u32 val; 220 220 221 - div = parent_rate / rate; 221 + div = DIV_ROUND_UP(parent_rate, rate); 222 222 value = _get_val(divider, div); 223 223 224 224 if (value > div_mask(divider))
+97
drivers/clk/clk-moxart.c
··· 1 + /* 2 + * MOXA ART SoCs clock driver. 3 + * 4 + * Copyright (C) 2013 Jonas Jensen 5 + * 6 + * Jonas Jensen <jonas.jensen@gmail.com> 7 + * 8 + * This file is licensed under the terms of the GNU General Public 9 + * License version 2. This program is licensed "as is" without any 10 + * warranty of any kind, whether express or implied. 11 + */ 12 + 13 + #include <linux/clk-provider.h> 14 + #include <linux/io.h> 15 + #include <linux/of_address.h> 16 + #include <linux/clkdev.h> 17 + 18 + void __init moxart_of_pll_clk_init(struct device_node *node) 19 + { 20 + static void __iomem *base; 21 + struct clk *clk, *ref_clk; 22 + unsigned int mul; 23 + const char *name = node->name; 24 + const char *parent_name; 25 + 26 + of_property_read_string(node, "clock-output-names", &name); 27 + parent_name = of_clk_get_parent_name(node, 0); 28 + 29 + base = of_iomap(node, 0); 30 + if (!base) { 31 + pr_err("%s: of_iomap failed\n", node->full_name); 32 + return; 33 + } 34 + 35 + mul = readl(base + 0x30) >> 3 & 0x3f; 36 + iounmap(base); 37 + 38 + ref_clk = of_clk_get(node, 0); 39 + if (IS_ERR(ref_clk)) { 40 + pr_err("%s: of_clk_get failed\n", node->full_name); 41 + return; 42 + } 43 + 44 + clk = clk_register_fixed_factor(NULL, name, parent_name, 0, mul, 1); 45 + if (IS_ERR(clk)) { 46 + pr_err("%s: failed to register clock\n", node->full_name); 47 + return; 48 + } 49 + 50 + clk_register_clkdev(clk, NULL, name); 51 + of_clk_add_provider(node, of_clk_src_simple_get, clk); 52 + } 53 + CLK_OF_DECLARE(moxart_pll_clock, "moxa,moxart-pll-clock", 54 + moxart_of_pll_clk_init); 55 + 56 + void __init moxart_of_apb_clk_init(struct device_node *node) 57 + { 58 + static void __iomem *base; 59 + struct clk *clk, *pll_clk; 60 + unsigned int div, val; 61 + unsigned int div_idx[] = { 2, 3, 4, 6, 8}; 62 + const char *name = node->name; 63 + const char *parent_name; 64 + 65 + of_property_read_string(node, "clock-output-names", &name); 66 + parent_name = of_clk_get_parent_name(node, 0); 67 + 68 + base = 
of_iomap(node, 0); 69 + if (!base) { 70 + pr_err("%s: of_iomap failed\n", node->full_name); 71 + return; 72 + } 73 + 74 + val = readl(base + 0xc) >> 4 & 0x7; 75 + iounmap(base); 76 + 77 + if (val > 4) 78 + val = 0; 79 + div = div_idx[val] * 2; 80 + 81 + pll_clk = of_clk_get(node, 0); 82 + if (IS_ERR(pll_clk)) { 83 + pr_err("%s: of_clk_get failed\n", node->full_name); 84 + return; 85 + } 86 + 87 + clk = clk_register_fixed_factor(NULL, name, parent_name, 0, 1, div); 88 + if (IS_ERR(clk)) { 89 + pr_err("%s: failed to register clock\n", node->full_name); 90 + return; 91 + } 92 + 93 + clk_register_clkdev(clk, NULL, name); 94 + of_clk_add_provider(node, of_clk_src_simple_get, clk); 95 + } 96 + CLK_OF_DECLARE(moxart_apb_clock, "moxa,moxart-apb-clock", 97 + moxart_of_apb_clk_init);
+48 -22
drivers/clk/clk-ppc-corenet.c
··· 27 27 #define CLKSEL_ADJUST BIT(0) 28 28 #define to_cmux_clk(p) container_of(p, struct cmux_clk, hw) 29 29 30 - static void __iomem *base; 31 30 static unsigned int clocks_per_pll; 32 31 33 32 static int cmux_set_parent(struct clk_hw *hw, u8 idx) ··· 99 100 pr_err("%s: could not allocate cmux_clk\n", __func__); 100 101 goto err_name; 101 102 } 102 - cmux_clk->reg = base + offset; 103 + cmux_clk->reg = of_iomap(np, 0); 104 + if (!cmux_clk->reg) { 105 + pr_err("%s: could not map register\n", __func__); 106 + goto err_clk; 107 + } 103 108 104 109 node = of_find_compatible_node(NULL, NULL, "fsl,p4080-clockgen"); 105 110 if (node && (offset >= 0x80)) ··· 146 143 147 144 static void __init core_pll_init(struct device_node *np) 148 145 { 149 - u32 offset, mult; 146 + u32 mult; 150 147 int i, rc, count; 151 148 const char *clk_name, *parent_name; 152 149 struct clk_onecell_data *onecell_data; 153 150 struct clk **subclks; 151 + void __iomem *base; 154 152 155 - rc = of_property_read_u32(np, "reg", &offset); 156 - if (rc) { 157 - pr_err("%s: could not get reg property\n", np->name); 153 + base = of_iomap(np, 0); 154 + if (!base) { 155 + pr_err("clk-ppc: iomap error\n"); 158 156 return; 159 157 } 160 158 161 159 /* get the multiple of PLL */ 162 - mult = ioread32be(base + offset); 160 + mult = ioread32be(base); 163 161 164 162 /* check if this PLL is disabled */ 165 163 if (mult & PLL_KILL) { 166 164 pr_debug("PLL:%s is disabled\n", np->name); 167 - return; 165 + goto err_map; 168 166 } 169 167 mult = (mult >> 1) & 0x3f; 170 168 171 169 parent_name = of_clk_get_parent_name(np, 0); 172 170 if (!parent_name) { 173 171 pr_err("PLL: %s must have a parent\n", np->name); 174 - return; 172 + goto err_map; 175 173 } 176 174 177 175 count = of_property_count_strings(np, "clock-output-names"); 178 176 if (count < 0 || count > 4) { 179 177 pr_err("%s: clock is not supported\n", np->name); 180 - return; 178 + goto err_map; 181 179 } 182 180 183 181 /* output clock number per PLL */ 
··· 187 183 subclks = kzalloc(sizeof(struct clk *) * count, GFP_KERNEL); 188 184 if (!subclks) { 189 185 pr_err("%s: could not allocate subclks\n", __func__); 190 - return; 186 + goto err_map; 191 187 } 192 188 193 189 onecell_data = kzalloc(sizeof(struct clk_onecell_data), GFP_KERNEL); ··· 234 230 goto err_cell; 235 231 } 236 232 233 + iounmap(base); 237 234 return; 238 235 err_cell: 239 236 kfree(onecell_data); 240 237 err_clks: 241 238 kfree(subclks); 239 + err_map: 240 + iounmap(base); 241 + } 242 + 243 + static void __init sysclk_init(struct device_node *node) 244 + { 245 + struct clk *clk; 246 + const char *clk_name = node->name; 247 + struct device_node *np = of_get_parent(node); 248 + u32 rate; 249 + 250 + if (!np) { 251 + pr_err("ppc-clk: could not get parent node\n"); 252 + return; 253 + } 254 + 255 + if (of_property_read_u32(np, "clock-frequency", &rate)) { 256 + of_node_put(node); 257 + return; 258 + } 259 + 260 + of_property_read_string(np, "clock-output-names", &clk_name); 261 + 262 + clk = clk_register_fixed_rate(NULL, clk_name, NULL, CLK_IS_ROOT, rate); 263 + if (!IS_ERR(clk)) 264 + of_clk_add_provider(np, of_clk_src_simple_get, clk); 242 265 } 243 266 244 267 static const struct of_device_id clk_match[] __initconst = { 245 - { .compatible = "fixed-clock", .data = of_fixed_clk_setup, }, 246 - { .compatible = "fsl,core-pll-clock", .data = core_pll_init, }, 247 - { .compatible = "fsl,core-mux-clock", .data = core_mux_init, }, 268 + { .compatible = "fsl,qoriq-sysclk-1.0", .data = sysclk_init, }, 269 + { .compatible = "fsl,qoriq-sysclk-2.0", .data = sysclk_init, }, 270 + { .compatible = "fsl,qoriq-core-pll-1.0", .data = core_pll_init, }, 271 + { .compatible = "fsl,qoriq-core-pll-2.0", .data = core_pll_init, }, 272 + { .compatible = "fsl,qoriq-core-mux-1.0", .data = core_mux_init, }, 273 + { .compatible = "fsl,qoriq-core-mux-2.0", .data = core_mux_init, }, 248 274 {} 249 275 }; 250 276 251 277 static int __init ppc_corenet_clk_probe(struct 
platform_device *pdev) 252 278 { 253 - struct device_node *np; 254 - 255 - np = pdev->dev.of_node; 256 - base = of_iomap(np, 0); 257 - if (!base) { 258 - dev_err(&pdev->dev, "iomap error\n"); 259 - return -ENOMEM; 260 - } 261 279 of_clk_init(clk_match); 262 280 263 281 return 0;
+23 -6
drivers/clk/clk-s2mps11.c
··· 27 27 #include <linux/clk-provider.h> 28 28 #include <linux/platform_device.h> 29 29 #include <linux/mfd/samsung/s2mps11.h> 30 + #include <linux/mfd/samsung/s5m8767.h> 30 31 #include <linux/mfd/samsung/core.h> 31 32 32 33 #define s2mps11_name(a) (a->hw.init->name) ··· 49 48 struct clk_lookup *lookup; 50 49 u32 mask; 51 50 bool enabled; 51 + unsigned int reg; 52 52 }; 53 53 54 54 static struct s2mps11_clk *to_s2mps11_clk(struct clk_hw *hw) ··· 63 61 int ret; 64 62 65 63 ret = regmap_update_bits(s2mps11->iodev->regmap_pmic, 66 - S2MPS11_REG_RTC_CTRL, 64 + s2mps11->reg, 67 65 s2mps11->mask, s2mps11->mask); 68 66 if (!ret) 69 67 s2mps11->enabled = true; ··· 76 74 struct s2mps11_clk *s2mps11 = to_s2mps11_clk(hw); 77 75 int ret; 78 76 79 - ret = regmap_update_bits(s2mps11->iodev->regmap_pmic, S2MPS11_REG_RTC_CTRL, 77 + ret = regmap_update_bits(s2mps11->iodev->regmap_pmic, s2mps11->reg, 80 78 s2mps11->mask, ~s2mps11->mask); 81 79 82 80 if (!ret) ··· 132 130 int i; 133 131 134 132 if (!iodev->dev->of_node) 135 - return NULL; 133 + return ERR_PTR(-EINVAL); 136 134 137 - clk_np = of_find_node_by_name(iodev->dev->of_node, "clocks"); 135 + clk_np = of_get_child_by_name(iodev->dev->of_node, "clocks"); 138 136 if (!clk_np) { 139 137 dev_err(&pdev->dev, "could not find clock sub-node\n"); 140 138 return ERR_PTR(-EINVAL); ··· 157 155 struct sec_pmic_dev *iodev = dev_get_drvdata(pdev->dev.parent); 158 156 struct s2mps11_clk *s2mps11_clks, *s2mps11_clk; 159 157 struct device_node *clk_np = NULL; 158 + unsigned int s2mps11_reg; 160 159 int i, ret = 0; 161 160 u32 val; 162 161 ··· 172 169 if (IS_ERR(clk_np)) 173 170 return PTR_ERR(clk_np); 174 171 172 + switch(platform_get_device_id(pdev)->driver_data) { 173 + case S2MPS11X: 174 + s2mps11_reg = S2MPS11_REG_RTC_CTRL; 175 + break; 176 + case S5M8767X: 177 + s2mps11_reg = S5M8767_REG_CTRL1; 178 + break; 179 + default: 180 + dev_err(&pdev->dev, "Invalid device type\n"); 181 + return -EINVAL; 182 + }; 183 + 175 184 for (i = 0; i < 
S2MPS11_CLKS_NUM; i++, s2mps11_clk++) { 176 185 s2mps11_clk->iodev = iodev; 177 186 s2mps11_clk->hw.init = &s2mps11_clks_init[i]; 178 187 s2mps11_clk->mask = 1 << i; 188 + s2mps11_clk->reg = s2mps11_reg; 179 189 180 190 ret = regmap_read(s2mps11_clk->iodev->regmap_pmic, 181 - S2MPS11_REG_RTC_CTRL, &val); 191 + s2mps11_clk->reg, &val); 182 192 if (ret < 0) 183 193 goto err_reg; 184 194 ··· 257 241 } 258 242 259 243 static const struct platform_device_id s2mps11_clk_id[] = { 260 - { "s2mps11-clk", 0}, 244 + { "s2mps11-clk", S2MPS11X}, 245 + { "s5m8767-clk", S5M8767X}, 261 246 { }, 262 247 }; 263 248 MODULE_DEVICE_TABLE(platform, s2mps11_clk_id);
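The s2mps11 change above parameterizes the control register by PMIC variant, dispatching on the platform device id's driver_data. A minimal standalone sketch of that dispatch pattern; the register offsets and the exact enum spellings here are illustrative placeholders, not the real S2MPS11/S5M8767 values:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical variant ids and register offsets, standing in for
 * the real S2MPS11X/S5M8767X constants and PMIC registers. */
enum chip_variant { CHIP_S2MPS11, CHIP_S5M8767 };

#define REG_RTC_CTRL	0x0b	/* placeholder offset */
#define REG_CTRL1	0x5a	/* placeholder offset */

struct chip_id {
	const char *name;
	enum chip_variant variant;	/* plays the role of driver_data */
};

static const struct chip_id id_table[] = {
	{ "s2mps11-clk", CHIP_S2MPS11 },
	{ "s5m8767-clk", CHIP_S5M8767 },
};

/* Map a matched device name to the control register it must use,
 * mirroring the switch on driver_data in the probe function. */
static int ctrl_reg_for(const char *name, unsigned int *reg)
{
	size_t i;

	for (i = 0; i < sizeof(id_table) / sizeof(id_table[0]); i++) {
		if (strcmp(id_table[i].name, name))
			continue;
		switch (id_table[i].variant) {
		case CHIP_S2MPS11:
			*reg = REG_RTC_CTRL;
			return 0;
		case CHIP_S5M8767:
			*reg = REG_CTRL1;
			return 0;
		}
	}
	return -1;	/* unknown device: probe would fail with -EINVAL */
}
```

Once the register is chosen at probe time, every enable/disable path reads it from the per-clock state instead of hardcoding one register, which is exactly what the `s2mps11->reg` field buys the driver.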
+111 -20
drivers/clk/clk.c
··· 277 277 if (!d) 278 278 goto err_out; 279 279 280 + if (clk->ops->debug_init) 281 + if (clk->ops->debug_init(clk->hw, clk->dentry)) 282 + goto err_out; 283 + 280 284 ret = 0; 281 285 goto out; 282 286 ··· 1343 1339 if (clk->notifier_count) 1344 1340 ret = __clk_notify(clk, PRE_RATE_CHANGE, clk->rate, new_rate); 1345 1341 1346 - if (ret & NOTIFY_STOP_MASK) 1342 + if (ret & NOTIFY_STOP_MASK) { 1343 + pr_debug("%s: clk notifier callback for clock %s aborted with error %d\n", 1344 + __func__, clk->name, ret); 1347 1345 goto out; 1346 + } 1348 1347 1349 1348 hlist_for_each_entry(child, &clk->children, child_node) { 1350 1349 ret = __clk_speculate_rates(child, new_rate); ··· 1595 1588 /* notify that we are about to change rates */ 1596 1589 fail_clk = clk_propagate_rate_change(top, PRE_RATE_CHANGE); 1597 1590 if (fail_clk) { 1598 - pr_warn("%s: failed to set %s rate\n", __func__, 1591 + pr_debug("%s: failed to set %s rate\n", __func__, 1599 1592 fail_clk->name); 1600 1593 clk_propagate_rate_change(top, ABORT_RATE_CHANGE); 1601 1594 ret = -EBUSY; ··· 2267 2260 * re-enter into the clk framework by calling any top-level clk APIs; 2268 2261 * this will cause a nested prepare_lock mutex. 2269 2262 * 2270 - * Pre-change notifier callbacks will be passed the current, pre-change 2271 - * rate of the clk via struct clk_notifier_data.old_rate. The new, 2272 - * post-change rate of the clk is passed via struct 2263 + * In all notification cases cases (pre, post and abort rate change) the 2264 + * original clock rate is passed to the callback via struct 2265 + * clk_notifier_data.old_rate and the new frequency is passed via struct 2273 2266 * clk_notifier_data.new_rate. 2274 - * 2275 - * Post-change notifiers will pass the now-current, post-change rate of 2276 - * the clk in both struct clk_notifier_data.old_rate and struct 2277 - * clk_notifier_data.new_rate. 
2278 - * 2279 - * Abort-change notifiers are effectively the opposite of pre-change 2280 - * notifiers: the original pre-change clk rate is passed in via struct 2281 - * clk_notifier_data.new_rate and the failed post-change rate is passed 2282 - * in via struct clk_notifier_data.old_rate. 2283 2267 * 2284 2268 * clk_notifier_register() must be called from non-atomic context. 2285 2269 * Returns -EINVAL if called with null arguments, -ENOMEM upon ··· 2471 2473 struct clk *__of_clk_get_from_provider(struct of_phandle_args *clkspec) 2472 2474 { 2473 2475 struct of_clk_provider *provider; 2474 - struct clk *clk = ERR_PTR(-ENOENT); 2476 + struct clk *clk = ERR_PTR(-EPROBE_DEFER); 2475 2477 2476 2478 /* Check if we have such a provider in our array */ 2477 2479 list_for_each_entry(provider, &of_clk_providers, link) { ··· 2504 2506 const char *of_clk_get_parent_name(struct device_node *np, int index) 2505 2507 { 2506 2508 struct of_phandle_args clkspec; 2509 + struct property *prop; 2507 2510 const char *clk_name; 2511 + const __be32 *vp; 2512 + u32 pv; 2508 2513 int rc; 2514 + int count; 2509 2515 2510 2516 if (index < 0) 2511 2517 return NULL; ··· 2519 2517 if (rc) 2520 2518 return NULL; 2521 2519 2520 + index = clkspec.args_count ? clkspec.args[0] : 0; 2521 + count = 0; 2522 + 2523 + /* if there is an indices property, use it to transfer the index 2524 + * specified into an array offset for the clock-output-names property. 2525 + */ 2526 + of_property_for_each_u32(clkspec.np, "clock-indices", prop, vp, pv) { 2527 + if (index == pv) { 2528 + index = count; 2529 + break; 2530 + } 2531 + count++; 2532 + } 2533 + 2522 2534 if (of_property_read_string_index(clkspec.np, "clock-output-names", 2523 - clkspec.args_count ? 
clkspec.args[0] : 0, 2535 + index, 2524 2536 &clk_name) < 0) 2525 2537 clk_name = clkspec.np->name; 2526 2538 ··· 2543 2527 } 2544 2528 EXPORT_SYMBOL_GPL(of_clk_get_parent_name); 2545 2529 2530 + struct clock_provider { 2531 + of_clk_init_cb_t clk_init_cb; 2532 + struct device_node *np; 2533 + struct list_head node; 2534 + }; 2535 + 2536 + static LIST_HEAD(clk_provider_list); 2537 + 2538 + /* 2539 + * This function looks for a parent clock. If there is one, then it 2540 + * checks that the provider for this parent clock was initialized, in 2541 + * this case the parent clock will be ready. 2542 + */ 2543 + static int parent_ready(struct device_node *np) 2544 + { 2545 + int i = 0; 2546 + 2547 + while (true) { 2548 + struct clk *clk = of_clk_get(np, i); 2549 + 2550 + /* this parent is ready we can check the next one */ 2551 + if (!IS_ERR(clk)) { 2552 + clk_put(clk); 2553 + i++; 2554 + continue; 2555 + } 2556 + 2557 + /* at least one parent is not ready, we exit now */ 2558 + if (PTR_ERR(clk) == -EPROBE_DEFER) 2559 + return 0; 2560 + 2561 + /* 2562 + * Here we make assumption that the device tree is 2563 + * written correctly. So an error means that there is 2564 + * no more parent. As we didn't exit yet, then the 2565 + * previous parent are ready. If there is no clock 2566 + * parent, no need to wait for them, then we can 2567 + * consider their absence as being ready 2568 + */ 2569 + return 1; 2570 + } 2571 + } 2572 + 2546 2573 /** 2547 2574 * of_clk_init() - Scan and init clock providers from the DT 2548 2575 * @matches: array of compatible values and init functions for providers. 2549 2576 * 2550 - * This function scans the device tree for matching clock providers and 2551 - * calls their initialization functions 2577 + * This function scans the device tree for matching clock providers 2578 + * and calls their initialization functions. It also does it by trying 2579 + * to follow the dependencies. 
2552 2580 */ 2553 2581 void __init of_clk_init(const struct of_device_id *matches) 2554 2582 { 2555 2583 const struct of_device_id *match; 2556 2584 struct device_node *np; 2585 + struct clock_provider *clk_provider, *next; 2586 + bool is_init_done; 2587 + bool force = false; 2557 2588 2558 2589 if (!matches) 2559 2590 matches = &__clk_of_table; 2560 2591 2592 + /* First prepare the list of the clocks providers */ 2561 2593 for_each_matching_node_and_match(np, matches, &match) { 2562 - of_clk_init_cb_t clk_init_cb = match->data; 2563 - clk_init_cb(np); 2594 + struct clock_provider *parent = 2595 + kzalloc(sizeof(struct clock_provider), GFP_KERNEL); 2596 + 2597 + parent->clk_init_cb = match->data; 2598 + parent->np = np; 2599 + list_add_tail(&parent->node, &clk_provider_list); 2600 + } 2601 + 2602 + while (!list_empty(&clk_provider_list)) { 2603 + is_init_done = false; 2604 + list_for_each_entry_safe(clk_provider, next, 2605 + &clk_provider_list, node) { 2606 + if (force || parent_ready(clk_provider->np)) { 2607 + clk_provider->clk_init_cb(clk_provider->np); 2608 + list_del(&clk_provider->node); 2609 + kfree(clk_provider); 2610 + is_init_done = true; 2611 + } 2612 + } 2613 + 2614 + /* 2615 + * We didn't manage to initialize any of the 2616 + * remaining providers during the last loop, so now we 2617 + * initialize all the remaining ones unconditionally 2618 + * in case the clock parent was not mandatory 2619 + */ 2620 + if (!is_init_done) 2621 + force = true; 2622 + 2564 2623 } 2565 2624 } 2566 2625 #endif
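The reworked of_clk_init() above defers providers whose parent clocks are not yet registered (signalled by -EPROBE_DEFER) and flips a `force` flag when a full pass makes no progress, so registration still terminates for cycles or external parents. The same two-phase loop can be modelled in plain C; the provider names and the one-parent dependency table are invented for illustration:

```c
#include <assert.h>
#include <string.h>

/* Each provider optionally depends on one parent provider. */
struct provider {
	const char *name;
	int parent;	/* index into the same array, -1 for none */
	int done;
};

/* Initialize providers in dependency order; when a pass initializes
 * nothing (a cycle, or a parent outside the list), force the rest,
 * mirroring the 'force' flag in of_clk_init(). */
static int init_providers(struct provider *p, int n, const char **order)
{
	int remaining = n, count = 0, force = 0;

	while (remaining) {
		int progressed = 0, i;

		for (i = 0; i < n; i++) {
			if (p[i].done)
				continue;
			if (!force && p[i].parent >= 0 &&
			    !p[p[i].parent].done)
				continue;	/* parent not ready: defer */
			p[i].done = 1;
			order[count++] = p[i].name;
			remaining--;
			progressed = 1;
		}
		if (!progressed)
			force = 1;	/* no progress: stop waiting */
	}
	return count;
}
```

With a chain mux → pll → osc listed in the "wrong" order, the loop still emits osc, pll, mux; a two-node cycle completes only once `force` kicks in.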
+2
drivers/clk/clkdev.c
··· 167 167 clk = of_clk_get_by_name(dev->of_node, con_id); 168 168 if (!IS_ERR(clk)) 169 169 return clk; 170 + if (PTR_ERR(clk) == -EPROBE_DEFER) 171 + return clk; 170 172 } 171 173 172 174 return clk_get_sys(dev_id, con_id);
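The two added lines in clkdev.c make the DT lookup's -EPROBE_DEFER reach the caller instead of being masked by the clk_get_sys() fallback. A toy model of that error-pointer flow, with a simplified stand-in for the kernel's ERR_PTR machinery and invented lookup states:

```c
#include <assert.h>
#include <stdint.h>

#define EPROBE_DEFER	517
#define ENOENT		2

/* Minimal ERR_PTR/PTR_ERR model, as used for struct clk * returns. */
static void *ERR_PTR(long err) { return (void *)err; }
static long PTR_ERR(const void *p) { return (long)p; }
static int IS_ERR(const void *p)
{
	return (uintptr_t)p >= (uintptr_t)-4095;
}

static int dummy_clk;	/* stands in for a real struct clk */

/* DT lookup outcome: 0 = found, 1 = not found, 2 = provider not
 * registered yet (defer). */
static void *of_clk_lookup(int state)
{
	if (state == 0)
		return &dummy_clk;
	if (state == 1)
		return ERR_PTR(-ENOENT);
	return ERR_PTR(-EPROBE_DEFER);
}

static void *sys_clk_lookup(void) { return &dummy_clk; }

/* Mirrors the patched clk_get(): only fall back to the system clock
 * table when the DT path failed with something other than
 * -EPROBE_DEFER. */
static void *clk_get_model(int dt_state)
{
	void *clk = of_clk_lookup(dt_state);

	if (!IS_ERR(clk))
		return clk;
	if (PTR_ERR(clk) == -EPROBE_DEFER)
		return clk;
	return sys_clk_lookup();
}
```

Together with the __of_clk_get_from_provider() change earlier in this pull (default -ENOENT → -EPROBE_DEFER), this is what lets consumer drivers retry probing once their clock provider shows up.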
+4 -1
drivers/clk/hisilicon/Makefile
··· 2 2 # Hisilicon Clock specific Makefile 3 3 # 4 4 5 - obj-y += clk.o clkgate-separated.o clk-hi3620.o 5 + obj-y += clk.o clkgate-separated.o 6 + 7 + obj-$(CONFIG_ARCH_HI3xxx) += clk-hi3620.o 8 + obj-$(CONFIG_ARCH_HIP04) += clk-hip04.o
+281 -17
drivers/clk/hisilicon/clk-hi3620.c
··· 210 210 211 211 static void __init hi3620_clk_init(struct device_node *np) 212 212 { 213 - void __iomem *base; 213 + struct hisi_clock_data *clk_data; 214 214 215 - if (np) { 216 - base = of_iomap(np, 0); 217 - if (!base) { 218 - pr_err("failed to map Hi3620 clock registers\n"); 219 - return; 220 - } 221 - } else { 222 - pr_err("failed to find Hi3620 clock node in DTS\n"); 215 + clk_data = hisi_clk_init(np, HI3620_NR_CLKS); 216 + if (!clk_data) 223 217 return; 224 - } 225 - 226 - hisi_clk_init(np, HI3620_NR_CLKS); 227 218 228 219 hisi_clk_register_fixed_rate(hi3620_fixed_rate_clks, 229 220 ARRAY_SIZE(hi3620_fixed_rate_clks), 230 - base); 221 + clk_data); 231 222 hisi_clk_register_fixed_factor(hi3620_fixed_factor_clks, 232 223 ARRAY_SIZE(hi3620_fixed_factor_clks), 233 - base); 224 + clk_data); 234 225 hisi_clk_register_mux(hi3620_mux_clks, ARRAY_SIZE(hi3620_mux_clks), 235 - base); 226 + clk_data); 236 227 hisi_clk_register_divider(hi3620_div_clks, ARRAY_SIZE(hi3620_div_clks), 237 - base); 228 + clk_data); 238 229 hisi_clk_register_gate_sep(hi3620_seperated_gate_clks, 239 230 ARRAY_SIZE(hi3620_seperated_gate_clks), 240 - base); 231 + clk_data); 241 232 } 242 233 CLK_OF_DECLARE(hi3620_clk, "hisilicon,hi3620-clock", hi3620_clk_init); 234 + 235 + struct hisi_mmc_clock { 236 + unsigned int id; 237 + const char *name; 238 + const char *parent_name; 239 + unsigned long flags; 240 + u32 clken_reg; 241 + u32 clken_bit; 242 + u32 div_reg; 243 + u32 div_off; 244 + u32 div_bits; 245 + u32 drv_reg; 246 + u32 drv_off; 247 + u32 drv_bits; 248 + u32 sam_reg; 249 + u32 sam_off; 250 + u32 sam_bits; 251 + }; 252 + 253 + struct clk_mmc { 254 + struct clk_hw hw; 255 + u32 id; 256 + void __iomem *clken_reg; 257 + u32 clken_bit; 258 + void __iomem *div_reg; 259 + u32 div_off; 260 + u32 div_bits; 261 + void __iomem *drv_reg; 262 + u32 drv_off; 263 + u32 drv_bits; 264 + void __iomem *sam_reg; 265 + u32 sam_off; 266 + u32 sam_bits; 267 + }; 268 + 269 + #define to_mmc(_hw) 
container_of(_hw, struct clk_mmc, hw) 270 + 271 + static struct hisi_mmc_clock hi3620_mmc_clks[] __initdata = { 272 + { HI3620_SD_CIUCLK, "sd_bclk1", "sd_clk", CLK_SET_RATE_PARENT, 0x1f8, 0, 0x1f8, 1, 3, 0x1f8, 4, 4, 0x1f8, 8, 4}, 273 + { HI3620_MMC_CIUCLK1, "mmc_bclk1", "mmc_clk1", CLK_SET_RATE_PARENT, 0x1f8, 12, 0x1f8, 13, 3, 0x1f8, 16, 4, 0x1f8, 20, 4}, 274 + { HI3620_MMC_CIUCLK2, "mmc_bclk2", "mmc_clk2", CLK_SET_RATE_PARENT, 0x1f8, 24, 0x1f8, 25, 3, 0x1f8, 28, 4, 0x1fc, 0, 4}, 275 + { HI3620_MMC_CIUCLK3, "mmc_bclk3", "mmc_clk3", CLK_SET_RATE_PARENT, 0x1fc, 4, 0x1fc, 5, 3, 0x1fc, 8, 4, 0x1fc, 12, 4}, 276 + }; 277 + 278 + static unsigned long mmc_clk_recalc_rate(struct clk_hw *hw, 279 + unsigned long parent_rate) 280 + { 281 + switch (parent_rate) { 282 + case 26000000: 283 + return 13000000; 284 + case 180000000: 285 + return 25000000; 286 + case 360000000: 287 + return 50000000; 288 + case 720000000: 289 + return 100000000; 290 + case 1440000000: 291 + return 180000000; 292 + default: 293 + return parent_rate; 294 + } 295 + } 296 + 297 + static long mmc_clk_determine_rate(struct clk_hw *hw, unsigned long rate, 298 + unsigned long *best_parent_rate, 299 + struct clk **best_parent_p) 300 + { 301 + struct clk_mmc *mclk = to_mmc(hw); 302 + unsigned long best = 0; 303 + 304 + if ((rate <= 13000000) && (mclk->id == HI3620_MMC_CIUCLK1)) { 305 + rate = 13000000; 306 + best = 26000000; 307 + } else if (rate <= 26000000) { 308 + rate = 25000000; 309 + best = 180000000; 310 + } else if (rate <= 52000000) { 311 + rate = 50000000; 312 + best = 360000000; 313 + } else if (rate <= 100000000) { 314 + rate = 100000000; 315 + best = 720000000; 316 + } else { 317 + /* max is 180M */ 318 + rate = 180000000; 319 + best = 1440000000; 320 + } 321 + *best_parent_rate = best; 322 + return rate; 323 + } 324 + 325 + static u32 mmc_clk_delay(u32 val, u32 para, u32 off, u32 len) 326 + { 327 + u32 i; 328 + 329 + for (i = 0; i < len; i++) { 330 + if (para % 2) 331 + val |= 1 << (off + i); 
332 + else 333 + val &= ~(1 << (off + i)); 334 + para = para >> 1; 335 + } 336 + 337 + return val; 338 + } 339 + 340 + static int mmc_clk_set_timing(struct clk_hw *hw, unsigned long rate) 341 + { 342 + struct clk_mmc *mclk = to_mmc(hw); 343 + unsigned long flags; 344 + u32 sam, drv, div, val; 345 + static DEFINE_SPINLOCK(mmc_clk_lock); 346 + 347 + switch (rate) { 348 + case 13000000: 349 + sam = 3; 350 + drv = 1; 351 + div = 1; 352 + break; 353 + case 25000000: 354 + sam = 13; 355 + drv = 6; 356 + div = 6; 357 + break; 358 + case 50000000: 359 + sam = 3; 360 + drv = 6; 361 + div = 6; 362 + break; 363 + case 100000000: 364 + sam = 6; 365 + drv = 4; 366 + div = 6; 367 + break; 368 + case 180000000: 369 + sam = 6; 370 + drv = 4; 371 + div = 7; 372 + break; 373 + default: 374 + return -EINVAL; 375 + } 376 + 377 + spin_lock_irqsave(&mmc_clk_lock, flags); 378 + 379 + val = readl_relaxed(mclk->clken_reg); 380 + val &= ~(1 << mclk->clken_bit); 381 + writel_relaxed(val, mclk->clken_reg); 382 + 383 + val = readl_relaxed(mclk->sam_reg); 384 + val = mmc_clk_delay(val, sam, mclk->sam_off, mclk->sam_bits); 385 + writel_relaxed(val, mclk->sam_reg); 386 + 387 + val = readl_relaxed(mclk->drv_reg); 388 + val = mmc_clk_delay(val, drv, mclk->drv_off, mclk->drv_bits); 389 + writel_relaxed(val, mclk->drv_reg); 390 + 391 + val = readl_relaxed(mclk->div_reg); 392 + val = mmc_clk_delay(val, div, mclk->div_off, mclk->div_bits); 393 + writel_relaxed(val, mclk->div_reg); 394 + 395 + val = readl_relaxed(mclk->clken_reg); 396 + val |= 1 << mclk->clken_bit; 397 + writel_relaxed(val, mclk->clken_reg); 398 + 399 + spin_unlock_irqrestore(&mmc_clk_lock, flags); 400 + 401 + return 0; 402 + } 403 + 404 + static int mmc_clk_prepare(struct clk_hw *hw) 405 + { 406 + struct clk_mmc *mclk = to_mmc(hw); 407 + unsigned long rate; 408 + 409 + if (mclk->id == HI3620_MMC_CIUCLK1) 410 + rate = 13000000; 411 + else 412 + rate = 25000000; 413 + 414 + return mmc_clk_set_timing(hw, rate); 415 + } 416 + 417 + static 
int mmc_clk_set_rate(struct clk_hw *hw, unsigned long rate, 418 + unsigned long parent_rate) 419 + { 420 + return mmc_clk_set_timing(hw, rate); 421 + } 422 + 423 + static struct clk_ops clk_mmc_ops = { 424 + .prepare = mmc_clk_prepare, 425 + .determine_rate = mmc_clk_determine_rate, 426 + .set_rate = mmc_clk_set_rate, 427 + .recalc_rate = mmc_clk_recalc_rate, 428 + }; 429 + 430 + static struct clk *hisi_register_clk_mmc(struct hisi_mmc_clock *mmc_clk, 431 + void __iomem *base, struct device_node *np) 432 + { 433 + struct clk_mmc *mclk; 434 + struct clk *clk; 435 + struct clk_init_data init; 436 + 437 + mclk = kzalloc(sizeof(*mclk), GFP_KERNEL); 438 + if (!mclk) { 439 + pr_err("%s: fail to allocate mmc clk\n", __func__); 440 + return ERR_PTR(-ENOMEM); 441 + } 442 + 443 + init.name = mmc_clk->name; 444 + init.ops = &clk_mmc_ops; 445 + init.flags = mmc_clk->flags | CLK_IS_BASIC; 446 + init.parent_names = (mmc_clk->parent_name ? &mmc_clk->parent_name : NULL); 447 + init.num_parents = (mmc_clk->parent_name ? 
1 : 0); 448 + mclk->hw.init = &init; 449 + 450 + mclk->id = mmc_clk->id; 451 + mclk->clken_reg = base + mmc_clk->clken_reg; 452 + mclk->clken_bit = mmc_clk->clken_bit; 453 + mclk->div_reg = base + mmc_clk->div_reg; 454 + mclk->div_off = mmc_clk->div_off; 455 + mclk->div_bits = mmc_clk->div_bits; 456 + mclk->drv_reg = base + mmc_clk->drv_reg; 457 + mclk->drv_off = mmc_clk->drv_off; 458 + mclk->drv_bits = mmc_clk->drv_bits; 459 + mclk->sam_reg = base + mmc_clk->sam_reg; 460 + mclk->sam_off = mmc_clk->sam_off; 461 + mclk->sam_bits = mmc_clk->sam_bits; 462 + 463 + clk = clk_register(NULL, &mclk->hw); 464 + if (WARN_ON(IS_ERR(clk))) 465 + kfree(mclk); 466 + return clk; 467 + } 468 + 469 + static void __init hi3620_mmc_clk_init(struct device_node *node) 470 + { 471 + void __iomem *base; 472 + int i, num = ARRAY_SIZE(hi3620_mmc_clks); 473 + struct clk_onecell_data *clk_data; 474 + 475 + if (!node) { 476 + pr_err("failed to find pctrl node in DTS\n"); 477 + return; 478 + } 479 + 480 + base = of_iomap(node, 0); 481 + if (!base) { 482 + pr_err("failed to map pctrl\n"); 483 + return; 484 + } 485 + 486 + clk_data = kzalloc(sizeof(*clk_data), GFP_KERNEL); 487 + if (WARN_ON(!clk_data)) 488 + return; 489 + 490 + clk_data->clks = kzalloc(sizeof(struct clk *) * num, GFP_KERNEL); 491 + if (!clk_data->clks) { 492 + pr_err("%s: fail to allocate mmc clk\n", __func__); 493 + return; 494 + } 495 + 496 + for (i = 0; i < num; i++) { 497 + struct hisi_mmc_clock *mmc_clk = &hi3620_mmc_clks[i]; 498 + clk_data->clks[mmc_clk->id] = 499 + hisi_register_clk_mmc(mmc_clk, base, node); 500 + } 501 + 502 + clk_data->clk_num = num; 503 + of_clk_add_provider(node, of_clk_src_onecell_get, clk_data); 504 + } 505 + 506 + CLK_OF_DECLARE(hi3620_mmc_clk, "hisilicon,hi3620-mmc-clock", hi3620_mmc_clk_init);
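mmc_clk_delay() above copies the low `len` bits of a delay parameter into an arbitrary bit position of a register value, one bit at a time. Reproduced standalone for clarity (same logic as the hi3620 helper, no hardware access):

```c
#include <assert.h>
#include <stdint.h>

/* Write the low 'len' bits of 'para' into 'val' at bit offset 'off',
 * bit by bit, exactly as the hi3620 helper does. */
static uint32_t mmc_clk_delay(uint32_t val, uint32_t para, uint32_t off,
			      uint32_t len)
{
	uint32_t i;

	for (i = 0; i < len; i++) {
		if (para % 2)
			val |= 1u << (off + i);
		else
			val &= ~(1u << (off + i));
		para >>= 1;
	}
	return val;
}
```

The loop is equivalent to the single masked-merge expression `(val & ~(((1u << len) - 1) << off)) | ((para & ((1u << len) - 1)) << off)`; the bit-by-bit form just makes the sample/drive/divider field widths explicit.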
+58
drivers/clk/hisilicon/clk-hip04.c
··· 1 + /* 2 + * Hisilicon HiP04 clock driver 3 + * 4 + * Copyright (c) 2013-2014 Hisilicon Limited. 5 + * Copyright (c) 2013-2014 Linaro Limited. 6 + * 7 + * Author: Haojian Zhuang <haojian.zhuang@linaro.org> 8 + * 9 + * This program is free software; you can redistribute it and/or modify 10 + * it under the terms of the GNU General Public License as published by 11 + * the Free Software Foundation; either version 2 of the License, or 12 + * (at your option) any later version. 13 + * 14 + * This program is distributed in the hope that it will be useful, 15 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 16 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 17 + * GNU General Public License for more details. 18 + * 19 + * You should have received a copy of the GNU General Public License along 20 + * with this program; if not, write to the Free Software Foundation, Inc., 21 + * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 22 + * 23 + */ 24 + 25 + #include <linux/kernel.h> 26 + #include <linux/clk-provider.h> 27 + #include <linux/clkdev.h> 28 + #include <linux/io.h> 29 + #include <linux/of.h> 30 + #include <linux/of_address.h> 31 + #include <linux/of_device.h> 32 + #include <linux/slab.h> 33 + #include <linux/clk.h> 34 + 35 + #include <dt-bindings/clock/hip04-clock.h> 36 + 37 + #include "clk.h" 38 + 39 + /* fixed rate clocks */ 40 + static struct hisi_fixed_rate_clock hip04_fixed_rate_clks[] __initdata = { 41 + { HIP04_OSC50M, "osc50m", NULL, CLK_IS_ROOT, 50000000, }, 42 + { HIP04_CLK_50M, "clk50m", NULL, CLK_IS_ROOT, 50000000, }, 43 + { HIP04_CLK_168M, "clk168m", NULL, CLK_IS_ROOT, 168750000, }, 44 + }; 45 + 46 + static void __init hip04_clk_init(struct device_node *np) 47 + { 48 + struct hisi_clock_data *clk_data; 49 + 50 + clk_data = hisi_clk_init(np, HIP04_NR_CLKS); 51 + if (!clk_data) 52 + return; 53 + 54 + hisi_clk_register_fixed_rate(hip04_fixed_rate_clks, 55 + ARRAY_SIZE(hip04_fixed_rate_clks), 56 + clk_data); 
57 + } 58 + CLK_OF_DECLARE(hip04_clk, "hisilicon,hip04-clock", hip04_clk_init);
+47 -15
drivers/clk/hisilicon/clk.c
··· 37 37 #include "clk.h" 38 38 39 39 static DEFINE_SPINLOCK(hisi_clk_lock); 40 - static struct clk **clk_table; 41 - static struct clk_onecell_data clk_data; 42 40 43 - void __init hisi_clk_init(struct device_node *np, int nr_clks) 41 + struct hisi_clock_data __init *hisi_clk_init(struct device_node *np, 42 + int nr_clks) 44 43 { 44 + struct hisi_clock_data *clk_data; 45 + struct clk **clk_table; 46 + void __iomem *base; 47 + 48 + if (np) { 49 + base = of_iomap(np, 0); 50 + if (!base) { 51 + pr_err("failed to map Hisilicon clock registers\n"); 52 + goto err; 53 + } 54 + } else { 55 + pr_err("failed to find Hisilicon clock node in DTS\n"); 56 + goto err; 57 + } 58 + 59 + clk_data = kzalloc(sizeof(*clk_data), GFP_KERNEL); 60 + if (!clk_data) { 61 + pr_err("%s: could not allocate clock data\n", __func__); 62 + goto err; 63 + } 64 + clk_data->base = base; 65 + 45 66 clk_table = kzalloc(sizeof(struct clk *) * nr_clks, GFP_KERNEL); 46 67 if (!clk_table) { 47 68 pr_err("%s: could not allocate clock lookup table\n", __func__); 48 - return; 69 + goto err_data; 49 70 } 50 - clk_data.clks = clk_table; 51 - clk_data.clk_num = nr_clks; 52 - of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data); 71 + clk_data->clk_data.clks = clk_table; 72 + clk_data->clk_data.clk_num = nr_clks; 73 + of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data->clk_data); 74 + return clk_data; 75 + err_data: 76 + kfree(clk_data); 77 + err: 78 + return NULL; 53 79 } 54 80 55 81 void __init hisi_clk_register_fixed_rate(struct hisi_fixed_rate_clock *clks, 56 - int nums, void __iomem *base) 82 + int nums, struct hisi_clock_data *data) 57 83 { 58 84 struct clk *clk; 59 85 int i; ··· 94 68 __func__, clks[i].name); 95 69 continue; 96 70 } 71 + data->clk_data.clks[clks[i].id] = clk; 97 72 } 98 73 } 99 74 100 75 void __init hisi_clk_register_fixed_factor(struct hisi_fixed_factor_clock *clks, 101 - int nums, void __iomem *base) 76 + int nums, 77 + struct hisi_clock_data *data) 102 78 { 103 79 struct 
clk *clk; 104 80 int i; ··· 115 87 __func__, clks[i].name); 116 88 continue; 117 89 } 90 + data->clk_data.clks[clks[i].id] = clk; 118 91 } 119 92 } 120 93 121 94 void __init hisi_clk_register_mux(struct hisi_mux_clock *clks, 122 - int nums, void __iomem *base) 95 + int nums, struct hisi_clock_data *data) 123 96 { 124 97 struct clk *clk; 98 + void __iomem *base = data->base; 125 99 int i; 126 100 127 101 for (i = 0; i < nums; i++) { ··· 141 111 if (clks[i].alias) 142 112 clk_register_clkdev(clk, clks[i].alias, NULL); 143 113 144 - clk_table[clks[i].id] = clk; 114 + data->clk_data.clks[clks[i].id] = clk; 145 115 } 146 116 } 147 117 148 118 void __init hisi_clk_register_divider(struct hisi_divider_clock *clks, 149 - int nums, void __iomem *base) 119 + int nums, struct hisi_clock_data *data) 150 120 { 151 121 struct clk *clk; 122 + void __iomem *base = data->base; 152 123 int i; 153 124 154 125 for (i = 0; i < nums; i++) { ··· 170 139 if (clks[i].alias) 171 140 clk_register_clkdev(clk, clks[i].alias, NULL); 172 141 173 - clk_table[clks[i].id] = clk; 142 + data->clk_data.clks[clks[i].id] = clk; 174 143 } 175 144 } 176 145 177 146 void __init hisi_clk_register_gate_sep(struct hisi_gate_clock *clks, 178 - int nums, void __iomem *base) 147 + int nums, struct hisi_clock_data *data) 179 148 { 180 149 struct clk *clk; 150 + void __iomem *base = data->base; 181 151 int i; 182 152 183 153 for (i = 0; i < nums; i++) { ··· 198 166 if (clks[i].alias) 199 167 clk_register_clkdev(clk, clks[i].alias, NULL); 200 168 201 - clk_table[clks[i].id] = clk; 169 + data->clk_data.clks[clks[i].id] = clk; 202 170 } 203 171 }
+11 -6
drivers/clk/hisilicon/clk.h
··· 30 30 #include <linux/io.h> 31 31 #include <linux/spinlock.h> 32 32 33 + struct hisi_clock_data { 34 + struct clk_onecell_data clk_data; 35 + void __iomem *base; 36 + }; 37 + 33 38 struct hisi_fixed_rate_clock { 34 39 unsigned int id; 35 40 char *name; ··· 94 89 void __iomem *, u8, 95 90 u8, spinlock_t *); 96 91 97 - void __init hisi_clk_init(struct device_node *, int); 92 + struct hisi_clock_data __init *hisi_clk_init(struct device_node *, int); 98 93 void __init hisi_clk_register_fixed_rate(struct hisi_fixed_rate_clock *, 99 - int, void __iomem *); 94 + int, struct hisi_clock_data *); 100 95 void __init hisi_clk_register_fixed_factor(struct hisi_fixed_factor_clock *, 101 - int, void __iomem *); 96 + int, struct hisi_clock_data *); 102 97 void __init hisi_clk_register_mux(struct hisi_mux_clock *, int, 103 - void __iomem *); 98 + struct hisi_clock_data *); 104 99 void __init hisi_clk_register_divider(struct hisi_divider_clock *, 105 - int, void __iomem *); 100 + int, struct hisi_clock_data *); 106 101 void __init hisi_clk_register_gate_sep(struct hisi_gate_clock *, 107 - int, void __iomem *); 102 + int, struct hisi_clock_data *); 108 103 #endif /* __HISI_CLK_H */
+12 -8
drivers/clk/mmp/clk-frac.c
··· 40 40 41 41 for (i = 0; i < factor->ftbl_cnt; i++) { 42 42 prev_rate = rate; 43 - rate = (((*prate / 10000) * factor->ftbl[i].num) / 44 - (factor->ftbl[i].den * factor->masks->factor)) * 10000; 43 + rate = (((*prate / 10000) * factor->ftbl[i].den) / 44 + (factor->ftbl[i].num * factor->masks->factor)) * 10000; 45 45 if (rate > drate) 46 46 break; 47 47 } 48 - if (i == 0) 48 + if ((i == 0) || (i == factor->ftbl_cnt)) { 49 49 return rate; 50 - else 51 - return prev_rate; 50 + } else { 51 + if ((drate - prev_rate) > (rate - drate)) 52 + return rate; 53 + else 54 + return prev_rate; 55 + } 52 56 } 53 57 54 58 static unsigned long clk_factor_recalc_rate(struct clk_hw *hw, ··· 68 64 num = (val >> masks->num_shift) & masks->num_mask; 69 65 70 66 /* calculate denominator */ 71 - den = (val >> masks->den_shift) & masks->num_mask; 67 + den = (val >> masks->den_shift) & masks->den_mask; 72 68 73 69 if (!den) 74 70 return 0; ··· 89 85 90 86 for (i = 0; i < factor->ftbl_cnt; i++) { 91 87 prev_rate = rate; 92 - rate = (((prate / 10000) * factor->ftbl[i].num) / 93 - (factor->ftbl[i].den * factor->masks->factor)) * 10000; 88 + rate = (((prate / 10000) * factor->ftbl[i].den) / 89 + (factor->ftbl[i].num * factor->masks->factor)) * 10000; 94 90 if (rate > drate) 95 91 break; 96 92 }
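The three mmp clk-frac fixes above correct the num/den orientation of the rate formula, mask the denominator with its own mask, and make round_rate pick whichever neighbouring table entry is *closest* to the request rather than always rounding down. A self-contained sketch of the corrected rounding, using an invented factor table ordered so computed rates ascend:

```c
#include <assert.h>

struct frac_entry {
	unsigned int num;
	unsigned int den;
};

/* Pick the table rate closest to the requested rate, using the
 * corrected den/num orientation from the mmp fix. */
static unsigned long frac_round_rate(const struct frac_entry *tbl, int cnt,
				     unsigned int factor,
				     unsigned long prate, unsigned long drate)
{
	unsigned long rate = 0, prev_rate = 0;
	int i;

	for (i = 0; i < cnt; i++) {
		prev_rate = rate;
		rate = (((prate / 10000) * tbl[i].den) /
			(tbl[i].num * factor)) * 10000;
		if (rate > drate)
			break;
	}
	/* first entry already too fast, or nothing exceeded the request */
	if (i == 0 || i == cnt)
		return rate;
	/* otherwise choose the closer of the two candidates around drate */
	if ((drate - prev_rate) > (rate - drate))
		return rate;
	return prev_rate;
}
```

With a 26 MHz parent and entries producing 13/26/52 MHz, a 20 MHz request now rounds up to 26 MHz (6 MHz away) instead of down to 13 MHz (7 MHz away), which is the behaviour the "try to use closer one" commit introduces.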
+8
drivers/clk/mvebu/Kconfig
··· 13 13 select MVEBU_CLK_CPU 14 14 select MVEBU_CLK_COREDIV 15 15 16 + config ARMADA_375_CLK 17 + bool 18 + select MVEBU_CLK_COMMON 19 + 20 + config ARMADA_38X_CLK 21 + bool 22 + select MVEBU_CLK_COMMON 23 + 16 24 config ARMADA_XP_CLK 17 25 bool 18 26 select MVEBU_CLK_COMMON
+2
drivers/clk/mvebu/Makefile
··· 3 3 obj-$(CONFIG_MVEBU_CLK_COREDIV) += clk-corediv.o 4 4 5 5 obj-$(CONFIG_ARMADA_370_CLK) += armada-370.o 6 + obj-$(CONFIG_ARMADA_375_CLK) += armada-375.o 7 + obj-$(CONFIG_ARMADA_38X_CLK) += armada-38x.o 6 8 obj-$(CONFIG_ARMADA_XP_CLK) += armada-xp.o 7 9 obj-$(CONFIG_DOVE_CLK) += dove.o 8 10 obj-$(CONFIG_KIRKWOOD_CLK) += kirkwood.o
+184
drivers/clk/mvebu/armada-375.c
··· 1 + /* 2 + * Marvell Armada 375 SoC clocks 3 + * 4 + * Copyright (C) 2014 Marvell 5 + * 6 + * Gregory CLEMENT <gregory.clement@free-electrons.com> 7 + * Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> 8 + * Andrew Lunn <andrew@lunn.ch> 9 + * 10 + * This file is licensed under the terms of the GNU General Public 11 + * License version 2. This program is licensed "as is" without any 12 + * warranty of any kind, whether express or implied. 13 + */ 14 + 15 + #include <linux/kernel.h> 16 + #include <linux/clk-provider.h> 17 + #include <linux/io.h> 18 + #include <linux/of.h> 19 + #include "common.h" 20 + 21 + /* 22 + * Core Clocks 23 + */ 24 + 25 + /* 26 + * For the Armada 375 SoCs, the CPU, DDR and L2 clocks frequencies are 27 + * all modified at the same time, and not separately as for the Armada 28 + * 370 or the Armada XP SoCs. 29 + * 30 + * SAR0[21:17] : CPU frequency DDR frequency L2 frequency 31 + * 6 = 400 MHz 400 MHz 200 MHz 32 + * 15 = 600 MHz 600 MHz 300 MHz 33 + * 21 = 800 MHz 534 MHz 400 MHz 34 + * 25 = 1000 MHz 500 MHz 500 MHz 35 + * others reserved. 
36 + * 37 + * SAR0[22] : TCLK frequency 38 + * 0 = 166 MHz 39 + * 1 = 200 MHz 40 + */ 41 + 42 + #define SAR1_A375_TCLK_FREQ_OPT 22 43 + #define SAR1_A375_TCLK_FREQ_OPT_MASK 0x1 44 + #define SAR1_A375_CPU_DDR_L2_FREQ_OPT 17 45 + #define SAR1_A375_CPU_DDR_L2_FREQ_OPT_MASK 0x1F 46 + 47 + static const u32 armada_375_tclk_frequencies[] __initconst = { 48 + 166000000, 49 + 200000000, 50 + }; 51 + 52 + static u32 __init armada_375_get_tclk_freq(void __iomem *sar) 53 + { 54 + u8 tclk_freq_select; 55 + 56 + tclk_freq_select = ((readl(sar) >> SAR1_A375_TCLK_FREQ_OPT) & 57 + SAR1_A375_TCLK_FREQ_OPT_MASK); 58 + return armada_375_tclk_frequencies[tclk_freq_select]; 59 + } 60 + 61 + 62 + static const u32 armada_375_cpu_frequencies[] __initconst = { 63 + 0, 0, 0, 0, 0, 0, 64 + 400000000, 65 + 0, 0, 0, 0, 0, 0, 0, 0, 66 + 600000000, 67 + 0, 0, 0, 0, 0, 68 + 800000000, 69 + 0, 0, 0, 70 + 1000000000, 71 + }; 72 + 73 + static u32 __init armada_375_get_cpu_freq(void __iomem *sar) 74 + { 75 + u8 cpu_freq_select; 76 + 77 + cpu_freq_select = ((readl(sar) >> SAR1_A375_CPU_DDR_L2_FREQ_OPT) & 78 + SAR1_A375_CPU_DDR_L2_FREQ_OPT_MASK); 79 + if (cpu_freq_select >= ARRAY_SIZE(armada_375_cpu_frequencies)) { 80 + pr_err("Selected CPU frequency (%d) unsupported\n", 81 + cpu_freq_select); 82 + return 0; 83 + } else 84 + return armada_375_cpu_frequencies[cpu_freq_select]; 85 + } 86 + 87 + enum { A375_CPU_TO_DDR, A375_CPU_TO_L2 }; 88 + 89 + static const struct coreclk_ratio armada_375_coreclk_ratios[] __initconst = { 90 + { .id = A375_CPU_TO_L2, .name = "l2clk" }, 91 + { .id = A375_CPU_TO_DDR, .name = "ddrclk" }, 92 + }; 93 + 94 + static const int armada_375_cpu_l2_ratios[32][2] __initconst = { 95 + {0, 1}, {0, 1}, {0, 1}, {0, 1}, 96 + {0, 1}, {0, 1}, {1, 2}, {0, 1}, 97 + {0, 1}, {0, 1}, {0, 1}, {0, 1}, 98 + {0, 1}, {0, 1}, {0, 1}, {1, 2}, 99 + {0, 1}, {0, 1}, {0, 1}, {0, 1}, 100 + {0, 1}, {1, 2}, {0, 1}, {0, 1}, 101 + {0, 1}, {1, 2}, {0, 1}, {0, 1}, 102 + {0, 1}, {0, 1}, {0, 1}, {0, 1}, 103 + }; 104 
+ 105 + static const int armada_375_cpu_ddr_ratios[32][2] __initconst = { 106 + {0, 1}, {0, 1}, {0, 1}, {0, 1}, 107 + {0, 1}, {0, 1}, {1, 1}, {0, 1}, 108 + {0, 1}, {0, 1}, {0, 1}, {0, 1}, 109 + {0, 1}, {0, 1}, {0, 1}, {2, 3}, 110 + {0, 1}, {0, 1}, {0, 1}, {0, 1}, 111 + {0, 1}, {2, 3}, {0, 1}, {0, 1}, 112 + {0, 1}, {1, 2}, {0, 1}, {0, 1}, 113 + {0, 1}, {0, 1}, {0, 1}, {0, 1}, 114 + }; 115 + 116 + static void __init armada_375_get_clk_ratio( 117 + void __iomem *sar, int id, int *mult, int *div) 118 + { 119 + u32 opt = ((readl(sar) >> SAR1_A375_CPU_DDR_L2_FREQ_OPT) & 120 + SAR1_A375_CPU_DDR_L2_FREQ_OPT_MASK); 121 + 122 + switch (id) { 123 + case A375_CPU_TO_L2: 124 + *mult = armada_375_cpu_l2_ratios[opt][0]; 125 + *div = armada_375_cpu_l2_ratios[opt][1]; 126 + break; 127 + case A375_CPU_TO_DDR: 128 + *mult = armada_375_cpu_ddr_ratios[opt][0]; 129 + *div = armada_375_cpu_ddr_ratios[opt][1]; 130 + break; 131 + } 132 + } 133 + 134 + static const struct coreclk_soc_desc armada_375_coreclks = { 135 + .get_tclk_freq = armada_375_get_tclk_freq, 136 + .get_cpu_freq = armada_375_get_cpu_freq, 137 + .get_clk_ratio = armada_375_get_clk_ratio, 138 + .ratios = armada_375_coreclk_ratios, 139 + .num_ratios = ARRAY_SIZE(armada_375_coreclk_ratios), 140 + }; 141 + 142 + static void __init armada_375_coreclk_init(struct device_node *np) 143 + { 144 + mvebu_coreclk_setup(np, &armada_375_coreclks); 145 + } 146 + CLK_OF_DECLARE(armada_375_core_clk, "marvell,armada-375-core-clock", 147 + armada_375_coreclk_init); 148 + 149 + /* 150 + * Clock Gating Control 151 + */ 152 + static const struct clk_gating_soc_desc armada_375_gating_desc[] __initconst = { 153 + { "mu", NULL, 2 }, 154 + { "pp", NULL, 3 }, 155 + { "ptp", NULL, 4 }, 156 + { "pex0", NULL, 5 }, 157 + { "pex1", NULL, 6 }, 158 + { "audio", NULL, 8 }, 159 + { "nd_clk", "nand", 11 }, 160 + { "sata0_link", "sata0_core", 14 }, 161 + { "sata0_core", NULL, 15 }, 162 + { "usb3", NULL, 16 }, 163 + { "sdio", NULL, 17 }, 164 + { "usb", NULL, 18 
}, 165 + { "gop", NULL, 19 }, 166 + { "sata1_link", "sata1_core", 20 }, 167 + { "sata1_core", NULL, 21 }, 168 + { "xor0", NULL, 22 }, 169 + { "xor1", NULL, 23 }, 170 + { "copro", NULL, 24 }, 171 + { "tdm", NULL, 25 }, 172 + { "crypto0_enc", NULL, 28 }, 173 + { "crypto0_core", NULL, 29 }, 174 + { "crypto1_enc", NULL, 30 }, 175 + { "crypto1_core", NULL, 31 }, 176 + { } 177 + }; 178 + 179 + static void __init armada_375_clk_gating_init(struct device_node *np) 180 + { 181 + mvebu_clk_gating_setup(np, armada_375_gating_desc); 182 + } 183 + CLK_OF_DECLARE(armada_375_clk_gating, "marvell,armada-375-gating-clock", 184 + armada_375_clk_gating_init);
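The Armada 375 hunks above derive the TCLK frequency by shifting and masking the Sample-At-Reset (SAR) register. A minimal standalone sketch of that decode, reusing the driver's bit definitions (the raw SAR words in the usage below are hypothetical values, not read from hardware):

```c
#include <stdint.h>

/* Same field definitions as the armada-375 driver above */
#define SAR1_A375_TCLK_FREQ_OPT       22
#define SAR1_A375_TCLK_FREQ_OPT_MASK  0x1

static const uint32_t a375_tclk_frequencies[] = {
	166000000,	/* SAR bit 22 == 0 */
	200000000,	/* SAR bit 22 == 1 */
};

/* Decode TCLK from a raw SAR word, as armada_375_get_tclk_freq() does */
uint32_t a375_tclk_from_sar(uint32_t sar)
{
	uint32_t sel = (sar >> SAR1_A375_TCLK_FREQ_OPT) &
		       SAR1_A375_TCLK_FREQ_OPT_MASK;

	return a375_tclk_frequencies[sel];
}
```

For example, a SAR word with bit 22 set selects 200 MHz; any other bits are masked off before indexing the table.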
+167
drivers/clk/mvebu/armada-38x.c
··· 1 + /* 2 + * Marvell Armada 380/385 SoC clocks 3 + * 4 + * Copyright (C) 2014 Marvell 5 + * 6 + * Gregory CLEMENT <gregory.clement@free-electrons.com> 7 + * Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> 8 + * Andrew Lunn <andrew@lunn.ch> 9 + * 10 + * This file is licensed under the terms of the GNU General Public 11 + * License version 2. This program is licensed "as is" without any 12 + * warranty of any kind, whether express or implied. 13 + */ 14 + 15 + #include <linux/kernel.h> 16 + #include <linux/clk-provider.h> 17 + #include <linux/io.h> 18 + #include <linux/of.h> 19 + #include "common.h" 20 + 21 + /* 22 + * SAR[14:10] : Ratios between PCLK0, NBCLK, HCLK and DRAM clocks 23 + * 24 + * SAR[15] : TCLK frequency 25 + * 0 = 250 MHz 26 + * 1 = 200 MHz 27 + */ 28 + 29 + #define SAR_A380_TCLK_FREQ_OPT 15 30 + #define SAR_A380_TCLK_FREQ_OPT_MASK 0x1 31 + #define SAR_A380_CPU_DDR_L2_FREQ_OPT 10 32 + #define SAR_A380_CPU_DDR_L2_FREQ_OPT_MASK 0x1F 33 + 34 + static const u32 armada_38x_tclk_frequencies[] __initconst = { 35 + 250000000, 36 + 200000000, 37 + }; 38 + 39 + static u32 __init armada_38x_get_tclk_freq(void __iomem *sar) 40 + { 41 + u8 tclk_freq_select; 42 + 43 + tclk_freq_select = ((readl(sar) >> SAR_A380_TCLK_FREQ_OPT) & 44 + SAR_A380_TCLK_FREQ_OPT_MASK); 45 + return armada_38x_tclk_frequencies[tclk_freq_select]; 46 + } 47 + 48 + static const u32 armada_38x_cpu_frequencies[] __initconst = { 49 + 0, 0, 0, 0, 50 + 1066 * 1000 * 1000, 0, 0, 0, 51 + 1332 * 1000 * 1000, 0, 0, 0, 52 + 1600 * 1000 * 1000, 53 + }; 54 + 55 + static u32 __init armada_38x_get_cpu_freq(void __iomem *sar) 56 + { 57 + u8 cpu_freq_select; 58 + 59 + cpu_freq_select = ((readl(sar) >> SAR_A380_CPU_DDR_L2_FREQ_OPT) & 60 + SAR_A380_CPU_DDR_L2_FREQ_OPT_MASK); 61 + if (cpu_freq_select >= ARRAY_SIZE(armada_38x_cpu_frequencies)) { 62 + pr_err("Selected CPU frequency (%d) unsupported\n", 63 + cpu_freq_select); 64 + return 0; 65 + } 66 + 67 + return 
armada_38x_cpu_frequencies[cpu_freq_select]; 68 + } 69 + 70 + enum { A380_CPU_TO_DDR, A380_CPU_TO_L2 }; 71 + 72 + static const struct coreclk_ratio armada_38x_coreclk_ratios[] __initconst = { 73 + { .id = A380_CPU_TO_L2, .name = "l2clk" }, 74 + { .id = A380_CPU_TO_DDR, .name = "ddrclk" }, 75 + }; 76 + 77 + static const int armada_38x_cpu_l2_ratios[32][2] __initconst = { 78 + {0, 1}, {0, 1}, {0, 1}, {0, 1}, 79 + {1, 2}, {0, 1}, {0, 1}, {0, 1}, 80 + {1, 2}, {0, 1}, {0, 1}, {0, 1}, 81 + {1, 2}, {0, 1}, {0, 1}, {0, 1}, 82 + {0, 1}, {0, 1}, {0, 1}, {0, 1}, 83 + {0, 1}, {0, 1}, {0, 1}, {0, 1}, 84 + {0, 1}, {0, 1}, {0, 1}, {0, 1}, 85 + {0, 1}, {0, 1}, {0, 1}, {0, 1}, 86 + }; 87 + 88 + static const int armada_38x_cpu_ddr_ratios[32][2] __initconst = { 89 + {0, 1}, {0, 1}, {0, 1}, {0, 1}, 90 + {1, 2}, {0, 1}, {0, 1}, {0, 1}, 91 + {1, 2}, {0, 1}, {0, 1}, {0, 1}, 92 + {1, 2}, {0, 1}, {0, 1}, {0, 1}, 93 + {0, 1}, {0, 1}, {0, 1}, {0, 1}, 94 + {0, 1}, {0, 1}, {0, 1}, {0, 1}, 95 + {0, 1}, {0, 1}, {0, 1}, {0, 1}, 96 + {0, 1}, {0, 1}, {0, 1}, {0, 1}, 97 + }; 98 + 99 + static void __init armada_38x_get_clk_ratio( 100 + void __iomem *sar, int id, int *mult, int *div) 101 + { 102 + u32 opt = ((readl(sar) >> SAR_A380_CPU_DDR_L2_FREQ_OPT) & 103 + SAR_A380_CPU_DDR_L2_FREQ_OPT_MASK); 104 + 105 + switch (id) { 106 + case A380_CPU_TO_L2: 107 + *mult = armada_38x_cpu_l2_ratios[opt][0]; 108 + *div = armada_38x_cpu_l2_ratios[opt][1]; 109 + break; 110 + case A380_CPU_TO_DDR: 111 + *mult = armada_38x_cpu_ddr_ratios[opt][0]; 112 + *div = armada_38x_cpu_ddr_ratios[opt][1]; 113 + break; 114 + } 115 + } 116 + 117 + static const struct coreclk_soc_desc armada_38x_coreclks = { 118 + .get_tclk_freq = armada_38x_get_tclk_freq, 119 + .get_cpu_freq = armada_38x_get_cpu_freq, 120 + .get_clk_ratio = armada_38x_get_clk_ratio, 121 + .ratios = armada_38x_coreclk_ratios, 122 + .num_ratios = ARRAY_SIZE(armada_38x_coreclk_ratios), 123 + }; 124 + 125 + static void __init armada_38x_coreclk_init(struct device_node 
*np) 126 + { 127 + mvebu_coreclk_setup(np, &armada_38x_coreclks); 128 + } 129 + CLK_OF_DECLARE(armada_38x_core_clk, "marvell,armada-380-core-clock", 130 + armada_38x_coreclk_init); 131 + 132 + /* 133 + * Clock Gating Control 134 + */ 135 + static const struct clk_gating_soc_desc armada_38x_gating_desc[] __initconst = { 136 + { "audio", NULL, 0 }, 137 + { "ge2", NULL, 2 }, 138 + { "ge1", NULL, 3 }, 139 + { "ge0", NULL, 4 }, 140 + { "pex1", NULL, 5 }, 141 + { "pex2", NULL, 6 }, 142 + { "pex3", NULL, 7 }, 143 + { "pex0", NULL, 8 }, 144 + { "usb3h0", NULL, 9 }, 145 + { "usb3h1", NULL, 10 }, 146 + { "usb3d", NULL, 11 }, 147 + { "bm", NULL, 13 }, 148 + { "crypto0z", NULL, 14 }, 149 + { "sata0", NULL, 15 }, 150 + { "crypto1z", NULL, 16 }, 151 + { "sdio", NULL, 17 }, 152 + { "usb2", NULL, 18 }, 153 + { "crypto1", NULL, 21 }, 154 + { "xor0", NULL, 22 }, 155 + { "crypto0", NULL, 23 }, 156 + { "tdm", NULL, 25 }, 157 + { "xor1", NULL, 28 }, 158 + { "sata1", NULL, 30 }, 159 + { } 160 + }; 161 + 162 + static void __init armada_38x_clk_gating_init(struct device_node *np) 163 + { 164 + mvebu_clk_gating_setup(np, armada_38x_gating_desc); 165 + } 166 + CLK_OF_DECLARE(armada_38x_clk_gating, "marvell,armada-380-gating-clock", 167 + armada_38x_clk_gating_init);
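Both the 375 and 38x drivers expose l2clk and ddrclk as ratios of the CPU clock, looked up as {mult, div} pairs in the tables above (e.g. SAR option 4 maps to a {1, 2} CPU-to-L2 ratio on Armada 38x). A sketch of how such a ratio is applied, using 64-bit intermediate math so the product cannot overflow; the frequencies below are illustrative:

```c
#include <stdint.h>

/* Derived clock rate = cpu_freq * mult / div, as the mvebu core clock
 * code computes from the {mult, div} ratio tables. */
uint32_t apply_ratio(uint32_t cpu_freq, int mult, int div)
{
	return (uint32_t)((uint64_t)cpu_freq * (uint32_t)mult / (uint32_t)div);
}
```

A {0, 1} table entry marks an unsupported SAR option and yields a 0 Hz derived clock.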
+123 -31
drivers/clk/mvebu/clk-corediv.c
··· 18 18 #include "common.h" 19 19 20 20 #define CORE_CLK_DIV_RATIO_MASK 0xff 21 - #define CORE_CLK_DIV_RATIO_RELOAD BIT(8) 22 - #define CORE_CLK_DIV_ENABLE_OFFSET 24 23 - #define CORE_CLK_DIV_RATIO_OFFSET 0x8 24 21 22 + /* 23 + * This structure describes the hardware details (bit offset and mask) 24 + * to configure one particular core divider clock. Those hardware 25 + * details may differ from one SoC to another. This structure is 26 + * therefore typically instantiated statically to describe the 27 + * hardware details. 28 + */ 25 29 struct clk_corediv_desc { 26 30 unsigned int mask; 27 31 unsigned int offset; 28 32 unsigned int fieldbit; 29 33 }; 30 34 35 + /* 36 + * This structure describes the hardware details to configure the core 37 + * divider clocks on a given SoC. Amongst others, it points to the 38 + * array of core divider clock descriptors for this SoC, as well as 39 + * the corresponding operations to manipulate them. 40 + */ 41 + struct clk_corediv_soc_desc { 42 + const struct clk_corediv_desc *descs; 43 + unsigned int ndescs; 44 + const struct clk_ops ops; 45 + u32 ratio_reload; 46 + u32 enable_bit_offset; 47 + u32 ratio_offset; 48 + }; 49 + 50 + /* 51 + * This structure represents one core divider clock for the clock 52 + * framework, and is dynamically allocated for each core divider clock 53 + * existing in the current SoC. 54 + */ 31 55 struct clk_corediv { 32 56 struct clk_hw hw; 33 57 void __iomem *reg; 34 - struct clk_corediv_desc desc; 58 + const struct clk_corediv_desc *desc; 59 + const struct clk_corediv_soc_desc *soc_desc; 35 60 spinlock_t lock; 36 61 }; 37 62 38 63 static struct clk_onecell_data clk_data; 39 64 40 - static const struct clk_corediv_desc mvebu_corediv_desc[] __initconst = { 65 + /* 66 + * Description of the core divider clocks available. For now, we 67 + * support only NAND, and it is available at the same register 68 + * locations regardless of the SoC. 
69 + */ 70 + static const struct clk_corediv_desc mvebu_corediv_desc[] = { 41 71 { .mask = 0x3f, .offset = 8, .fieldbit = 1 }, /* NAND clock */ 42 72 }; 43 73 ··· 76 46 static int clk_corediv_is_enabled(struct clk_hw *hwclk) 77 47 { 78 48 struct clk_corediv *corediv = to_corediv_clk(hwclk); 79 - struct clk_corediv_desc *desc = &corediv->desc; 80 - u32 enable_mask = BIT(desc->fieldbit) << CORE_CLK_DIV_ENABLE_OFFSET; 49 + const struct clk_corediv_soc_desc *soc_desc = corediv->soc_desc; 50 + const struct clk_corediv_desc *desc = corediv->desc; 51 + u32 enable_mask = BIT(desc->fieldbit) << soc_desc->enable_bit_offset; 81 52 82 53 return !!(readl(corediv->reg) & enable_mask); 83 54 } ··· 86 55 static int clk_corediv_enable(struct clk_hw *hwclk) 87 56 { 88 57 struct clk_corediv *corediv = to_corediv_clk(hwclk); 89 - struct clk_corediv_desc *desc = &corediv->desc; 58 + const struct clk_corediv_soc_desc *soc_desc = corediv->soc_desc; 59 + const struct clk_corediv_desc *desc = corediv->desc; 90 60 unsigned long flags = 0; 91 61 u32 reg; 92 62 93 63 spin_lock_irqsave(&corediv->lock, flags); 94 64 95 65 reg = readl(corediv->reg); 96 - reg |= (BIT(desc->fieldbit) << CORE_CLK_DIV_ENABLE_OFFSET); 66 + reg |= (BIT(desc->fieldbit) << soc_desc->enable_bit_offset); 97 67 writel(reg, corediv->reg); 98 68 99 69 spin_unlock_irqrestore(&corediv->lock, flags); ··· 105 73 static void clk_corediv_disable(struct clk_hw *hwclk) 106 74 { 107 75 struct clk_corediv *corediv = to_corediv_clk(hwclk); 108 - struct clk_corediv_desc *desc = &corediv->desc; 76 + const struct clk_corediv_soc_desc *soc_desc = corediv->soc_desc; 77 + const struct clk_corediv_desc *desc = corediv->desc; 109 78 unsigned long flags = 0; 110 79 u32 reg; 111 80 112 81 spin_lock_irqsave(&corediv->lock, flags); 113 82 114 83 reg = readl(corediv->reg); 115 - reg &= ~(BIT(desc->fieldbit) << CORE_CLK_DIV_ENABLE_OFFSET); 84 + reg &= ~(BIT(desc->fieldbit) << soc_desc->enable_bit_offset); 116 85 writel(reg, corediv->reg); 117 86 118 
87 spin_unlock_irqrestore(&corediv->lock, flags); ··· 123 90 unsigned long parent_rate) 124 91 { 125 92 struct clk_corediv *corediv = to_corediv_clk(hwclk); 126 - struct clk_corediv_desc *desc = &corediv->desc; 93 + const struct clk_corediv_soc_desc *soc_desc = corediv->soc_desc; 94 + const struct clk_corediv_desc *desc = corediv->desc; 127 95 u32 reg, div; 128 96 129 - reg = readl(corediv->reg + CORE_CLK_DIV_RATIO_OFFSET); 97 + reg = readl(corediv->reg + soc_desc->ratio_offset); 130 98 div = (reg >> desc->offset) & desc->mask; 131 99 return parent_rate / div; 132 100 } ··· 151 117 unsigned long parent_rate) 152 118 { 153 119 struct clk_corediv *corediv = to_corediv_clk(hwclk); 154 - struct clk_corediv_desc *desc = &corediv->desc; 120 + const struct clk_corediv_soc_desc *soc_desc = corediv->soc_desc; 121 + const struct clk_corediv_desc *desc = corediv->desc; 155 122 unsigned long flags = 0; 156 123 u32 reg, div; 157 124 ··· 161 126 spin_lock_irqsave(&corediv->lock, flags); 162 127 163 128 /* Write new divider to the divider ratio register */ 164 - reg = readl(corediv->reg + CORE_CLK_DIV_RATIO_OFFSET); 129 + reg = readl(corediv->reg + soc_desc->ratio_offset); 165 130 reg &= ~(desc->mask << desc->offset); 166 131 reg |= (div & desc->mask) << desc->offset; 167 - writel(reg, corediv->reg + CORE_CLK_DIV_RATIO_OFFSET); 132 + writel(reg, corediv->reg + soc_desc->ratio_offset); 168 133 169 134 /* Set reload-force for this clock */ 170 135 reg = readl(corediv->reg) | BIT(desc->fieldbit); 171 136 writel(reg, corediv->reg); 172 137 173 138 /* Now trigger the clock update */ 174 - reg = readl(corediv->reg) | CORE_CLK_DIV_RATIO_RELOAD; 139 + reg = readl(corediv->reg) | soc_desc->ratio_reload; 175 140 writel(reg, corediv->reg); 176 141 177 142 /* ··· 179 144 * ratios request and the reload request. 
180 145 */ 181 146 udelay(1000); 182 - reg &= ~(CORE_CLK_DIV_RATIO_MASK | CORE_CLK_DIV_RATIO_RELOAD); 147 + reg &= ~(CORE_CLK_DIV_RATIO_MASK | soc_desc->ratio_reload); 183 148 writel(reg, corediv->reg); 184 149 udelay(1000); 185 150 ··· 188 153 return 0; 189 154 } 190 155 191 - static const struct clk_ops corediv_ops = { 192 - .enable = clk_corediv_enable, 193 - .disable = clk_corediv_disable, 194 - .is_enabled = clk_corediv_is_enabled, 195 - .recalc_rate = clk_corediv_recalc_rate, 196 - .round_rate = clk_corediv_round_rate, 197 - .set_rate = clk_corediv_set_rate, 156 + static const struct clk_corediv_soc_desc armada370_corediv_soc = { 157 + .descs = mvebu_corediv_desc, 158 + .ndescs = ARRAY_SIZE(mvebu_corediv_desc), 159 + .ops = { 160 + .enable = clk_corediv_enable, 161 + .disable = clk_corediv_disable, 162 + .is_enabled = clk_corediv_is_enabled, 163 + .recalc_rate = clk_corediv_recalc_rate, 164 + .round_rate = clk_corediv_round_rate, 165 + .set_rate = clk_corediv_set_rate, 166 + }, 167 + .ratio_reload = BIT(8), 168 + .enable_bit_offset = 24, 169 + .ratio_offset = 0x8, 198 170 }; 199 171 200 - static void __init mvebu_corediv_clk_init(struct device_node *node) 172 + static const struct clk_corediv_soc_desc armada380_corediv_soc = { 173 + .descs = mvebu_corediv_desc, 174 + .ndescs = ARRAY_SIZE(mvebu_corediv_desc), 175 + .ops = { 176 + .enable = clk_corediv_enable, 177 + .disable = clk_corediv_disable, 178 + .is_enabled = clk_corediv_is_enabled, 179 + .recalc_rate = clk_corediv_recalc_rate, 180 + .round_rate = clk_corediv_round_rate, 181 + .set_rate = clk_corediv_set_rate, 182 + }, 183 + .ratio_reload = BIT(8), 184 + .enable_bit_offset = 16, 185 + .ratio_offset = 0x4, 186 + }; 187 + 188 + static const struct clk_corediv_soc_desc armada375_corediv_soc = { 189 + .descs = mvebu_corediv_desc, 190 + .ndescs = ARRAY_SIZE(mvebu_corediv_desc), 191 + .ops = { 192 + .recalc_rate = clk_corediv_recalc_rate, 193 + .round_rate = clk_corediv_round_rate, 194 + .set_rate = 
clk_corediv_set_rate, 195 + }, 196 + .ratio_reload = BIT(8), 197 + .ratio_offset = 0x4, 198 + }; 199 + 200 + static void __init 201 + mvebu_corediv_clk_init(struct device_node *node, 202 + const struct clk_corediv_soc_desc *soc_desc) 201 203 { 202 204 struct clk_init_data init; 203 205 struct clk_corediv *corediv; ··· 250 178 251 179 parent_name = of_clk_get_parent_name(node, 0); 252 180 253 - clk_data.clk_num = ARRAY_SIZE(mvebu_corediv_desc); 181 + clk_data.clk_num = soc_desc->ndescs; 254 182 255 183 /* clks holds the clock array */ 256 184 clks = kcalloc(clk_data.clk_num, sizeof(struct clk *), ··· 271 199 init.num_parents = 1; 272 200 init.parent_names = &parent_name; 273 201 init.name = clk_name; 274 - init.ops = &corediv_ops; 202 + init.ops = &soc_desc->ops; 275 203 init.flags = 0; 276 204 277 - corediv[i].desc = mvebu_corediv_desc[i]; 205 + corediv[i].soc_desc = soc_desc; 206 + corediv[i].desc = soc_desc->descs + i; 278 207 corediv[i].reg = base; 279 208 corediv[i].hw.init = &init; 280 209 ··· 292 219 err_unmap: 293 220 iounmap(base); 294 221 } 295 - CLK_OF_DECLARE(mvebu_corediv_clk, "marvell,armada-370-corediv-clock", 296 - mvebu_corediv_clk_init); 222 + 223 + static void __init armada370_corediv_clk_init(struct device_node *node) 224 + { 225 + return mvebu_corediv_clk_init(node, &armada370_corediv_soc); 226 + } 227 + CLK_OF_DECLARE(armada370_corediv_clk, "marvell,armada-370-corediv-clock", 228 + armada370_corediv_clk_init); 229 + 230 + static void __init armada375_corediv_clk_init(struct device_node *node) 231 + { 232 + return mvebu_corediv_clk_init(node, &armada375_corediv_soc); 233 + } 234 + CLK_OF_DECLARE(armada375_corediv_clk, "marvell,armada-375-corediv-clock", 235 + armada375_corediv_clk_init); 236 + 237 + static void __init armada380_corediv_clk_init(struct device_node *node) 238 + { 239 + return mvebu_corediv_clk_init(node, &armada380_corediv_soc); 240 + } 241 + CLK_OF_DECLARE(armada380_corediv_clk, "marvell,armada-380-corediv-clock", 242 + 
armada380_corediv_clk_init);
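The clk-corediv refactor above replaces the fixed CORE_CLK_DIV_* constants with a per-SoC descriptor, so the same ops can serve Armada 370 (enable bit offset 24, ratio register at 0x8) and Armada 380 (offset 16, ratio at 0x4). A minimal sketch of the resulting enable-mask computation; the struct and variable names here are illustrative stand-ins for the driver's clk_corediv_soc_desc:

```c
#include <stdint.h>

/* Per-SoC register layout, mirroring struct clk_corediv_soc_desc */
struct corediv_layout {
	uint32_t enable_bit_offset;
	uint32_t ratio_offset;
};

static const struct corediv_layout a370 = { .enable_bit_offset = 24, .ratio_offset = 0x8 };
static const struct corediv_layout a380 = { .enable_bit_offset = 16, .ratio_offset = 0x4 };

/* Enable mask as formed in clk_corediv_is_enabled()/enable()/disable():
 * the per-clock fieldbit shifted by the per-SoC enable offset. */
uint32_t corediv_enable_mask(const struct corediv_layout *soc, unsigned int fieldbit)
{
	return (1u << fieldbit) << soc->enable_bit_offset;
}
```

This is why the Armada 375 descriptor can simply omit .enable/.disable/.is_enabled from its ops: only the rate operations apply there, while the layout fields still parameterize the shared set_rate path.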
+1
drivers/clk/shmobile/Makefile
··· 1 1 obj-$(CONFIG_ARCH_EMEV2) += clk-emev2.o 2 + obj-$(CONFIG_ARCH_R7S72100) += clk-rz.o 2 3 obj-$(CONFIG_ARCH_R8A7790) += clk-rcar-gen2.o 3 4 obj-$(CONFIG_ARCH_R8A7791) += clk-rcar-gen2.o 4 5 obj-$(CONFIG_ARCH_SHMOBILE_MULTI) += clk-div6.o
+1 -1
drivers/clk/shmobile/clk-div6.c
··· 23 23 #define CPG_DIV6_DIV_MASK 0x3f 24 24 25 25 /** 26 - * struct div6_clock - MSTP gating clock 26 + * struct div6_clock - CPG 6 bit divider clock 27 27 * @hw: handle between common and hardware-specific interfaces 28 28 * @reg: IO-remapped register 29 29 * @div: divisor value (1-64)
+1 -1
drivers/clk/shmobile/clk-mstp.c
··· 137 137 138 138 init.name = name; 139 139 init.ops = &cpg_mstp_clock_ops; 140 - init.flags = CLK_IS_BASIC; 140 + init.flags = CLK_IS_BASIC | CLK_SET_RATE_PARENT; 141 141 init.parent_names = &parent_name; 142 142 init.num_parents = 1; 143 143
+4 -4
drivers/clk/shmobile/clk-rcar-gen2.c
··· 242 242 parent_name = "main"; 243 243 mult = config->pll3_mult; 244 244 } else if (!strcmp(name, "lb")) { 245 - parent_name = "pll1_div2"; 245 + parent_name = "pll1"; 246 246 div = cpg_mode & BIT(18) ? 36 : 24; 247 247 } else if (!strcmp(name, "qspi")) { 248 248 parent_name = "pll1_div2"; 249 249 div = (cpg_mode & (BIT(3) | BIT(2) | BIT(1))) == BIT(2) 250 250 ? 8 : 10; 251 251 } else if (!strcmp(name, "sdh")) { 252 - parent_name = "pll1_div2"; 252 + parent_name = "pll1"; 253 253 table = cpg_sdh_div_table; 254 254 shift = 8; 255 255 } else if (!strcmp(name, "sd0")) { 256 - parent_name = "pll1_div2"; 256 + parent_name = "pll1"; 257 257 table = cpg_sd01_div_table; 258 258 shift = 4; 259 259 } else if (!strcmp(name, "sd1")) { 260 - parent_name = "pll1_div2"; 260 + parent_name = "pll1"; 261 261 table = cpg_sd01_div_table; 262 262 shift = 0; 263 263 } else if (!strcmp(name, "z")) {
+103
drivers/clk/shmobile/clk-rz.c
··· 1 + /* 2 + * rz Core CPG Clocks 3 + * 4 + * Copyright (C) 2013 Ideas On Board SPRL 5 + * Copyright (C) 2014 Wolfram Sang, Sang Engineering <wsa@sang-engineering.com> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License as published by 9 + * the Free Software Foundation; version 2 of the License. 10 + */ 11 + 12 + #include <linux/clk-provider.h> 13 + #include <linux/init.h> 14 + #include <linux/kernel.h> 15 + #include <linux/of.h> 16 + #include <linux/of_address.h> 17 + #include <linux/slab.h> 18 + 19 + struct rz_cpg { 20 + struct clk_onecell_data data; 21 + void __iomem *reg; 22 + }; 23 + 24 + #define CPG_FRQCR 0x10 25 + #define CPG_FRQCR2 0x14 26 + 27 + /* ----------------------------------------------------------------------------- 28 + * Initialization 29 + */ 30 + 31 + static struct clk * __init 32 + rz_cpg_register_clock(struct device_node *np, struct rz_cpg *cpg, const char *name) 33 + { 34 + u32 val; 35 + unsigned mult; 36 + static const unsigned frqcr_tab[4] = { 3, 2, 0, 1 }; 37 + 38 + if (strcmp(name, "pll") == 0) { 39 + /* FIXME: cpg_mode should be read from GPIO. But no GPIO support yet */ 40 + unsigned cpg_mode = 0; /* hardcoded to EXTAL for now */ 41 + const char *parent_name = of_clk_get_parent_name(np, cpg_mode); 42 + 43 + mult = cpg_mode ? (32 / 4) : 30; 44 + 45 + return clk_register_fixed_factor(NULL, name, parent_name, 0, mult, 1); 46 + } 47 + 48 + /* If mapping regs failed, skip non-pll clocks. System will boot anyhow */ 49 + if (!cpg->reg) 50 + return ERR_PTR(-ENXIO); 51 + 52 + /* FIXME:"i" and "g" are variable clocks with non-integer dividers (e.g. 2/3) 53 + * and the constraint that always g <= i. To get the rz platform started, 54 + * let them run at fixed current speed and implement the details later. 
55 + */ 56 + if (strcmp(name, "i") == 0) 57 + val = (clk_readl(cpg->reg + CPG_FRQCR) >> 8) & 3; 58 + else if (strcmp(name, "g") == 0) 59 + val = clk_readl(cpg->reg + CPG_FRQCR2) & 3; 60 + else 61 + return ERR_PTR(-EINVAL); 62 + 63 + mult = frqcr_tab[val]; 64 + return clk_register_fixed_factor(NULL, name, "pll", 0, mult, 3); 65 + } 66 + 67 + static void __init rz_cpg_clocks_init(struct device_node *np) 68 + { 69 + struct rz_cpg *cpg; 70 + struct clk **clks; 71 + unsigned i; 72 + int num_clks; 73 + 74 + num_clks = of_property_count_strings(np, "clock-output-names"); 75 + if (WARN(num_clks <= 0, "can't count CPG clocks\n")) 76 + return; 77 + 78 + cpg = kzalloc(sizeof(*cpg), GFP_KERNEL); 79 + clks = kzalloc(num_clks * sizeof(*clks), GFP_KERNEL); 80 + BUG_ON(!cpg || !clks); 81 + 82 + cpg->data.clks = clks; 83 + cpg->data.clk_num = num_clks; 84 + 85 + cpg->reg = of_iomap(np, 0); 86 + 87 + for (i = 0; i < num_clks; ++i) { 88 + const char *name; 89 + struct clk *clk; 90 + 91 + of_property_read_string_index(np, "clock-output-names", i, &name); 92 + 93 + clk = rz_cpg_register_clock(np, cpg, name); 94 + if (IS_ERR(clk)) 95 + pr_err("%s: failed to register %s %s clock (%ld)\n", 96 + __func__, np->name, name, PTR_ERR(clk)); 97 + else 98 + cpg->data.clks[i] = clk; 99 + } 100 + 101 + of_clk_add_provider(np, of_clk_src_onecell_get, &cpg->data); 102 + } 103 + CLK_OF_DECLARE(rz_cpg_clks, "renesas,rz-cpg-clocks", rz_cpg_clocks_init);
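In the new clk-rz.c driver, the "i" and "g" clocks are registered as fixed-factor clocks whose multiplier comes from frqcr_tab (indexed by a 2-bit FRQCR/FRQCR2 field) over a fixed divisor of 3. A sketch of the resulting rate computation; the PLL rates used below are hypothetical:

```c
#include <stdint.h>

/* 2-bit FRQCR/FRQCR2 field -> multiplier, as in the driver's frqcr_tab */
static const unsigned int frqcr_tab[4] = { 3, 2, 0, 1 };

/* Fixed-factor rate: pll * frqcr_tab[field] / 3 */
uint32_t rz_fixed_factor_rate(uint32_t pll_rate, unsigned int field)
{
	return (uint32_t)((uint64_t)pll_rate * frqcr_tab[field & 3] / 3);
}
```

So field value 0 runs the clock at the PLL rate (3/3), 1 at two thirds, and 3 at one third, which matches the clk_register_fixed_factor(..., mult, 3) call in the driver.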
+2 -1
drivers/clk/sirf/clk-atlas6.c
··· 1 1 /* 2 2 * Clock tree for CSR SiRFatlasVI 3 3 * 4 - * Copyright (c) 2011 Cambridge Silicon Radio Limited, a CSR plc group company. 4 + * Copyright (c) 2011 - 2014 Cambridge Silicon Radio Limited, a CSR plc group 5 + * company. 5 6 * 6 7 * Licensed under GPLv2 or later. 7 8 */
+2 -1
drivers/clk/sirf/clk-common.c
··· 1 1 /* 2 2 * common clks module for all SiRF SoCs 3 3 * 4 - * Copyright (c) 2011 Cambridge Silicon Radio Limited, a CSR plc group company. 4 + * Copyright (c) 2011 - 2014 Cambridge Silicon Radio Limited, a CSR plc group 5 + * company. 5 6 * 6 7 * Licensed under GPLv2 or later. 7 8 */
+2 -1
drivers/clk/sirf/clk-prima2.c
··· 1 1 /* 2 2 * Clock tree for CSR SiRFprimaII 3 3 * 4 - * Copyright (c) 2011 Cambridge Silicon Radio Limited, a CSR plc group company. 4 + * Copyright (c) 2011 - 2014 Cambridge Silicon Radio Limited, a CSR plc group 5 + * company. 5 6 * 6 7 * Licensed under GPLv2 or later. 7 8 */
+3
drivers/clk/socfpga/Makefile
··· 1 1 obj-y += clk.o 2 + obj-y += clk-gate.o 3 + obj-y += clk-pll.o 4 + obj-y += clk-periph.o
+263
drivers/clk/socfpga/clk-gate.c
··· 1 + /* 2 + * Copyright 2011-2012 Calxeda, Inc. 3 + * Copyright (C) 2012-2013 Altera Corporation <www.altera.com> 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License as published by 7 + * the Free Software Foundation; either version 2 of the License, or 8 + * (at your option) any later version. 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 + * GNU General Public License for more details. 14 + * 15 + * Based from clk-highbank.c 16 + * 17 + */ 18 + #include <linux/clk.h> 19 + #include <linux/clkdev.h> 20 + #include <linux/clk-provider.h> 21 + #include <linux/io.h> 22 + #include <linux/mfd/syscon.h> 23 + #include <linux/of.h> 24 + #include <linux/regmap.h> 25 + 26 + #include "clk.h" 27 + 28 + #define SOCFPGA_L4_MP_CLK "l4_mp_clk" 29 + #define SOCFPGA_L4_SP_CLK "l4_sp_clk" 30 + #define SOCFPGA_NAND_CLK "nand_clk" 31 + #define SOCFPGA_NAND_X_CLK "nand_x_clk" 32 + #define SOCFPGA_MMC_CLK "sdmmc_clk" 33 + #define SOCFPGA_GPIO_DB_CLK_OFFSET 0xA8 34 + 35 + #define div_mask(width) ((1 << (width)) - 1) 36 + #define streq(a, b) (strcmp((a), (b)) == 0) 37 + 38 + #define to_socfpga_gate_clk(p) container_of(p, struct socfpga_gate_clk, hw.hw) 39 + 40 + /* SDMMC Group for System Manager defines */ 41 + #define SYSMGR_SDMMCGRP_CTRL_OFFSET 0x108 42 + #define SYSMGR_SDMMC_CTRL_SET(smplsel, drvsel) \ 43 + ((((smplsel) & 0x7) << 3) | (((drvsel) & 0x7) << 0)) 44 + 45 + static u8 socfpga_clk_get_parent(struct clk_hw *hwclk) 46 + { 47 + u32 l4_src; 48 + u32 perpll_src; 49 + 50 + if (streq(hwclk->init->name, SOCFPGA_L4_MP_CLK)) { 51 + l4_src = readl(clk_mgr_base_addr + CLKMGR_L4SRC); 52 + return l4_src &= 0x1; 53 + } 54 + if (streq(hwclk->init->name, SOCFPGA_L4_SP_CLK)) { 55 + l4_src = readl(clk_mgr_base_addr + CLKMGR_L4SRC); 56 + 
return !!(l4_src & 2); 57 + } 58 + 59 + perpll_src = readl(clk_mgr_base_addr + CLKMGR_PERPLL_SRC); 60 + if (streq(hwclk->init->name, SOCFPGA_MMC_CLK)) 61 + return perpll_src &= 0x3; 62 + if (streq(hwclk->init->name, SOCFPGA_NAND_CLK) || 63 + streq(hwclk->init->name, SOCFPGA_NAND_X_CLK)) 64 + return (perpll_src >> 2) & 3; 65 + 66 + /* QSPI clock */ 67 + return (perpll_src >> 4) & 3; 68 + 69 + } 70 + 71 + static int socfpga_clk_set_parent(struct clk_hw *hwclk, u8 parent) 72 + { 73 + u32 src_reg; 74 + 75 + if (streq(hwclk->init->name, SOCFPGA_L4_MP_CLK)) { 76 + src_reg = readl(clk_mgr_base_addr + CLKMGR_L4SRC); 77 + src_reg &= ~0x1; 78 + src_reg |= parent; 79 + writel(src_reg, clk_mgr_base_addr + CLKMGR_L4SRC); 80 + } else if (streq(hwclk->init->name, SOCFPGA_L4_SP_CLK)) { 81 + src_reg = readl(clk_mgr_base_addr + CLKMGR_L4SRC); 82 + src_reg &= ~0x2; 83 + src_reg |= (parent << 1); 84 + writel(src_reg, clk_mgr_base_addr + CLKMGR_L4SRC); 85 + } else { 86 + src_reg = readl(clk_mgr_base_addr + CLKMGR_PERPLL_SRC); 87 + if (streq(hwclk->init->name, SOCFPGA_MMC_CLK)) { 88 + src_reg &= ~0x3; 89 + src_reg |= parent; 90 + } else if (streq(hwclk->init->name, SOCFPGA_NAND_CLK) || 91 + streq(hwclk->init->name, SOCFPGA_NAND_X_CLK)) { 92 + src_reg &= ~0xC; 93 + src_reg |= (parent << 2); 94 + } else {/* QSPI clock */ 95 + src_reg &= ~0x30; 96 + src_reg |= (parent << 4); 97 + } 98 + writel(src_reg, clk_mgr_base_addr + CLKMGR_PERPLL_SRC); 99 + } 100 + 101 + return 0; 102 + } 103 + 104 + static unsigned long socfpga_clk_recalc_rate(struct clk_hw *hwclk, 105 + unsigned long parent_rate) 106 + { 107 + struct socfpga_gate_clk *socfpgaclk = to_socfpga_gate_clk(hwclk); 108 + u32 div = 1, val; 109 + 110 + if (socfpgaclk->fixed_div) 111 + div = socfpgaclk->fixed_div; 112 + else if (socfpgaclk->div_reg) { 113 + val = readl(socfpgaclk->div_reg) >> socfpgaclk->shift; 114 + val &= div_mask(socfpgaclk->width); 115 + /* Check for GPIO_DB_CLK by its offset */ 116 + if ((int) socfpgaclk->div_reg & 
SOCFPGA_GPIO_DB_CLK_OFFSET) 117 + div = val + 1; 118 + else 119 + div = (1 << val); 120 + } 121 + 122 + return parent_rate / div; 123 + } 124 + 125 + static int socfpga_clk_prepare(struct clk_hw *hwclk) 126 + { 127 + struct socfpga_gate_clk *socfpgaclk = to_socfpga_gate_clk(hwclk); 128 + struct regmap *sys_mgr_base_addr; 129 + int i; 130 + u32 hs_timing; 131 + u32 clk_phase[2]; 132 + 133 + if (socfpgaclk->clk_phase[0] || socfpgaclk->clk_phase[1]) { 134 + sys_mgr_base_addr = syscon_regmap_lookup_by_compatible("altr,sys-mgr"); 135 + if (IS_ERR(sys_mgr_base_addr)) { 136 + pr_err("%s: failed to find altr,sys-mgr regmap!\n", __func__); 137 + return -EINVAL; 138 + } 139 + 140 + for (i = 0; i < 2; i++) { 141 + switch (socfpgaclk->clk_phase[i]) { 142 + case 0: 143 + clk_phase[i] = 0; 144 + break; 145 + case 45: 146 + clk_phase[i] = 1; 147 + break; 148 + case 90: 149 + clk_phase[i] = 2; 150 + break; 151 + case 135: 152 + clk_phase[i] = 3; 153 + break; 154 + case 180: 155 + clk_phase[i] = 4; 156 + break; 157 + case 225: 158 + clk_phase[i] = 5; 159 + break; 160 + case 270: 161 + clk_phase[i] = 6; 162 + break; 163 + case 315: 164 + clk_phase[i] = 7; 165 + break; 166 + default: 167 + clk_phase[i] = 0; 168 + break; 169 + } 170 + } 171 + hs_timing = SYSMGR_SDMMC_CTRL_SET(clk_phase[0], clk_phase[1]); 172 + regmap_write(sys_mgr_base_addr, SYSMGR_SDMMCGRP_CTRL_OFFSET, 173 + hs_timing); 174 + } 175 + return 0; 176 + } 177 + 178 + static struct clk_ops gateclk_ops = { 179 + .prepare = socfpga_clk_prepare, 180 + .recalc_rate = socfpga_clk_recalc_rate, 181 + .get_parent = socfpga_clk_get_parent, 182 + .set_parent = socfpga_clk_set_parent, 183 + }; 184 + 185 + static void __init __socfpga_gate_init(struct device_node *node, 186 + const struct clk_ops *ops) 187 + { 188 + u32 clk_gate[2]; 189 + u32 div_reg[3]; 190 + u32 clk_phase[2]; 191 + u32 fixed_div; 192 + struct clk *clk; 193 + struct socfpga_gate_clk *socfpga_clk; 194 + const char *clk_name = node->name; 195 + const char 
*parent_name[SOCFPGA_MAX_PARENTS]; 196 + struct clk_init_data init; 197 + int rc; 198 + int i = 0; 199 + 200 + socfpga_clk = kzalloc(sizeof(*socfpga_clk), GFP_KERNEL); 201 + if (WARN_ON(!socfpga_clk)) 202 + return; 203 + 204 + rc = of_property_read_u32_array(node, "clk-gate", clk_gate, 2); 205 + if (rc) 206 + clk_gate[0] = 0; 207 + 208 + if (clk_gate[0]) { 209 + socfpga_clk->hw.reg = clk_mgr_base_addr + clk_gate[0]; 210 + socfpga_clk->hw.bit_idx = clk_gate[1]; 211 + 212 + gateclk_ops.enable = clk_gate_ops.enable; 213 + gateclk_ops.disable = clk_gate_ops.disable; 214 + } 215 + 216 + rc = of_property_read_u32(node, "fixed-divider", &fixed_div); 217 + if (rc) 218 + socfpga_clk->fixed_div = 0; 219 + else 220 + socfpga_clk->fixed_div = fixed_div; 221 + 222 + rc = of_property_read_u32_array(node, "div-reg", div_reg, 3); 223 + if (!rc) { 224 + socfpga_clk->div_reg = clk_mgr_base_addr + div_reg[0]; 225 + socfpga_clk->shift = div_reg[1]; 226 + socfpga_clk->width = div_reg[2]; 227 + } else { 228 + socfpga_clk->div_reg = 0; 229 + } 230 + 231 + rc = of_property_read_u32_array(node, "clk-phase", clk_phase, 2); 232 + if (!rc) { 233 + socfpga_clk->clk_phase[0] = clk_phase[0]; 234 + socfpga_clk->clk_phase[1] = clk_phase[1]; 235 + } 236 + 237 + of_property_read_string(node, "clock-output-names", &clk_name); 238 + 239 + init.name = clk_name; 240 + init.ops = ops; 241 + init.flags = 0; 242 + while (i < SOCFPGA_MAX_PARENTS && (parent_name[i] = 243 + of_clk_get_parent_name(node, i)) != NULL) 244 + i++; 245 + 246 + init.parent_names = parent_name; 247 + init.num_parents = i; 248 + socfpga_clk->hw.hw.init = &init; 249 + 250 + clk = clk_register(NULL, &socfpga_clk->hw.hw); 251 + if (WARN_ON(IS_ERR(clk))) { 252 + kfree(socfpga_clk); 253 + return; 254 + } 255 + rc = of_clk_add_provider(node, of_clk_src_simple_get, clk); 256 + if (WARN_ON(rc)) 257 + return; 258 + } 259 + 260 + void __init socfpga_gate_init(struct device_node *node) 261 + { 262 + __socfpga_gate_init(node, &gateclk_ops); 263 + 
}
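The SDMMC path in clk-gate.c above maps a clock phase given in degrees to a 3-bit selector (45° steps) and packs the sample/drive selectors into one System Manager register value via SYSMGR_SDMMC_CTRL_SET. A condensed sketch that collapses the driver's switch statement into arithmetic, under the assumption that the selector is simply degrees/45 for supported values:

```c
#include <stdint.h>

/* Same packing macro as the driver above */
#define SYSMGR_SDMMC_CTRL_SET(smplsel, drvsel) \
	((((smplsel) & 0x7) << 3) | (((drvsel) & 0x7) << 0))

/* Map a phase in degrees to the 3-bit selector; anything that is not
 * a multiple of 45 below 360 falls back to 0, matching the driver's
 * default case. */
uint32_t phase_to_sel(unsigned int degrees)
{
	if (degrees < 360 && degrees % 45 == 0)
		return degrees / 45;
	return 0;
}
```

For example, a "clk-phase" property of <90 45> yields selectors 2 and 1 and an hs_timing value of 0x11.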
+94
drivers/clk/socfpga/clk-periph.c
+/*
+ * Copyright 2011-2012 Calxeda, Inc.
+ * Copyright (C) 2012-2013 Altera Corporation <www.altera.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * Based from clk-highbank.c
+ *
+ */
+#include <linux/clk.h>
+#include <linux/clkdev.h>
+#include <linux/clk-provider.h>
+#include <linux/io.h>
+#include <linux/of.h>
+
+#include "clk.h"
+
+#define to_socfpga_periph_clk(p) container_of(p, struct socfpga_periph_clk, hw.hw)
+
+static unsigned long clk_periclk_recalc_rate(struct clk_hw *hwclk,
+					     unsigned long parent_rate)
+{
+	struct socfpga_periph_clk *socfpgaclk = to_socfpga_periph_clk(hwclk);
+	u32 div;
+
+	if (socfpgaclk->fixed_div)
+		div = socfpgaclk->fixed_div;
+	else
+		div = ((readl(socfpgaclk->hw.reg) & 0x1ff) + 1);
+
+	return parent_rate / div;
+}
+
+static const struct clk_ops periclk_ops = {
+	.recalc_rate = clk_periclk_recalc_rate,
+};
+
+static __init void __socfpga_periph_init(struct device_node *node,
+	const struct clk_ops *ops)
+{
+	u32 reg;
+	struct clk *clk;
+	struct socfpga_periph_clk *periph_clk;
+	const char *clk_name = node->name;
+	const char *parent_name;
+	struct clk_init_data init;
+	int rc;
+	u32 fixed_div;
+
+	of_property_read_u32(node, "reg", &reg);
+
+	periph_clk = kzalloc(sizeof(*periph_clk), GFP_KERNEL);
+	if (WARN_ON(!periph_clk))
+		return;
+
+	periph_clk->hw.reg = clk_mgr_base_addr + reg;
+
+	rc = of_property_read_u32(node, "fixed-divider", &fixed_div);
+	if (rc)
+		periph_clk->fixed_div = 0;
+	else
+		periph_clk->fixed_div = fixed_div;
+
+	of_property_read_string(node, "clock-output-names", &clk_name);
+
+	init.name = clk_name;
+	init.ops = ops;
+	init.flags = 0;
+	parent_name = of_clk_get_parent_name(node, 0);
+	init.parent_names = &parent_name;
+	init.num_parents = 1;
+
+	periph_clk->hw.hw.init = &init;
+
+	clk = clk_register(NULL, &periph_clk->hw.hw);
+	if (WARN_ON(IS_ERR(clk))) {
+		kfree(periph_clk);
+		return;
+	}
+	rc = of_clk_add_provider(node, of_clk_src_simple_get, clk);
+}
+
+void __init socfpga_periph_init(struct device_node *node)
+{
+	__socfpga_periph_init(node, &periclk_ops);
+}
+131
drivers/clk/socfpga/clk-pll.c
+/*
+ * Copyright 2011-2012 Calxeda, Inc.
+ * Copyright (C) 2012-2013 Altera Corporation <www.altera.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * Based from clk-highbank.c
+ *
+ */
+#include <linux/clk.h>
+#include <linux/clkdev.h>
+#include <linux/clk-provider.h>
+#include <linux/io.h>
+#include <linux/of.h>
+
+#include "clk.h"
+
+/* Clock bypass bits */
+#define MAINPLL_BYPASS		(1<<0)
+#define SDRAMPLL_BYPASS		(1<<1)
+#define SDRAMPLL_SRC_BYPASS	(1<<2)
+#define PERPLL_BYPASS		(1<<3)
+#define PERPLL_SRC_BYPASS	(1<<4)
+
+#define SOCFPGA_PLL_BG_PWRDWN	0
+#define SOCFPGA_PLL_EXT_ENA	1
+#define SOCFPGA_PLL_PWR_DOWN	2
+#define SOCFPGA_PLL_DIVF_MASK	0x0000FFF8
+#define SOCFPGA_PLL_DIVF_SHIFT	3
+#define SOCFPGA_PLL_DIVQ_MASK	0x003F0000
+#define SOCFPGA_PLL_DIVQ_SHIFT	16
+
+#define CLK_MGR_PLL_CLK_SRC_SHIFT	22
+#define CLK_MGR_PLL_CLK_SRC_MASK	0x3
+
+#define to_socfpga_clk(p) container_of(p, struct socfpga_pll, hw.hw)
+
+static unsigned long clk_pll_recalc_rate(struct clk_hw *hwclk,
+					 unsigned long parent_rate)
+{
+	struct socfpga_pll *socfpgaclk = to_socfpga_clk(hwclk);
+	unsigned long divf, divq, reg;
+	unsigned long long vco_freq;
+	unsigned long bypass;
+
+	reg = readl(socfpgaclk->hw.reg);
+	bypass = readl(clk_mgr_base_addr + CLKMGR_BYPASS);
+	if (bypass & MAINPLL_BYPASS)
+		return parent_rate;
+
+	divf = (reg & SOCFPGA_PLL_DIVF_MASK) >> SOCFPGA_PLL_DIVF_SHIFT;
+	divq = (reg & SOCFPGA_PLL_DIVQ_MASK) >> SOCFPGA_PLL_DIVQ_SHIFT;
+	vco_freq = (unsigned long long)parent_rate * (divf + 1);
+	do_div(vco_freq, (1 + divq));
+	return (unsigned long)vco_freq;
+}
+
+static u8 clk_pll_get_parent(struct clk_hw *hwclk)
+{
+	u32 pll_src;
+	struct socfpga_pll *socfpgaclk = to_socfpga_clk(hwclk);
+
+	pll_src = readl(socfpgaclk->hw.reg);
+	return (pll_src >> CLK_MGR_PLL_CLK_SRC_SHIFT) &
+		CLK_MGR_PLL_CLK_SRC_MASK;
+}
+
+static struct clk_ops clk_pll_ops = {
+	.recalc_rate = clk_pll_recalc_rate,
+	.get_parent = clk_pll_get_parent,
+};
+
+static __init struct clk *__socfpga_pll_init(struct device_node *node,
+	const struct clk_ops *ops)
+{
+	u32 reg;
+	struct clk *clk;
+	struct socfpga_pll *pll_clk;
+	const char *clk_name = node->name;
+	const char *parent_name[SOCFPGA_MAX_PARENTS];
+	struct clk_init_data init;
+	int rc;
+	int i = 0;
+
+	of_property_read_u32(node, "reg", &reg);
+
+	pll_clk = kzalloc(sizeof(*pll_clk), GFP_KERNEL);
+	if (WARN_ON(!pll_clk))
+		return NULL;
+
+	pll_clk->hw.reg = clk_mgr_base_addr + reg;
+
+	of_property_read_string(node, "clock-output-names", &clk_name);
+
+	init.name = clk_name;
+	init.ops = ops;
+	init.flags = 0;
+
+	while (i < SOCFPGA_MAX_PARENTS && (parent_name[i] =
+			of_clk_get_parent_name(node, i)) != NULL)
+		i++;
+
+	init.num_parents = i;
+	init.parent_names = parent_name;
+	pll_clk->hw.hw.init = &init;
+
+	pll_clk->hw.bit_idx = SOCFPGA_PLL_EXT_ENA;
+	clk_pll_ops.enable = clk_gate_ops.enable;
+	clk_pll_ops.disable = clk_gate_ops.disable;
+
+	clk = clk_register(NULL, &pll_clk->hw.hw);
+	if (WARN_ON(IS_ERR(clk))) {
+		kfree(pll_clk);
+		return NULL;
+	}
+	rc = of_clk_add_provider(node, of_clk_src_simple_get, clk);
+	return clk;
+}
+
+void __init socfpga_pll_init(struct device_node *node)
+{
+	__socfpga_pll_init(node, &clk_pll_ops);
+}
+12 -314
drivers/clk/socfpga/clk.c
 #include <linux/clk-provider.h>
 #include <linux/io.h>
 #include <linux/of.h>
+#include <linux/of_address.h>
 
-/* Clock Manager offsets */
-#define CLKMGR_CTRL	0x0
-#define CLKMGR_BYPASS	0x4
-#define CLKMGR_L4SRC	0x70
-#define CLKMGR_PERPLL_SRC	0xAC
+#include "clk.h"
 
-/* Clock bypass bits */
-#define MAINPLL_BYPASS		(1<<0)
-#define SDRAMPLL_BYPASS		(1<<1)
-#define SDRAMPLL_SRC_BYPASS	(1<<2)
-#define PERPLL_BYPASS		(1<<3)
-#define PERPLL_SRC_BYPASS	(1<<4)
+void __iomem *clk_mgr_base_addr;
 
-#define SOCFPGA_PLL_BG_PWRDWN	0
-#define SOCFPGA_PLL_EXT_ENA	1
-#define SOCFPGA_PLL_PWR_DOWN	2
-#define SOCFPGA_PLL_DIVF_MASK	0x0000FFF8
-#define SOCFPGA_PLL_DIVF_SHIFT	3
-#define SOCFPGA_PLL_DIVQ_MASK	0x003F0000
-#define SOCFPGA_PLL_DIVQ_SHIFT	16
-#define SOCFGPA_MAX_PARENTS	3
-
-#define SOCFPGA_L4_MP_CLK	"l4_mp_clk"
-#define SOCFPGA_L4_SP_CLK	"l4_sp_clk"
-#define SOCFPGA_NAND_CLK	"nand_clk"
-#define SOCFPGA_NAND_X_CLK	"nand_x_clk"
-#define SOCFPGA_MMC_CLK		"sdmmc_clk"
-#define SOCFPGA_DB_CLK		"gpio_db_clk"
-
-#define div_mask(width)	((1 << (width)) - 1)
-#define streq(a, b) (strcmp((a), (b)) == 0)
-
-extern void __iomem *clk_mgr_base_addr;
-
-struct socfpga_clk {
-	struct clk_gate hw;
-	char *parent_name;
-	char *clk_name;
-	u32 fixed_div;
-	void __iomem *div_reg;
-	u32 width;	/* only valid if div_reg != 0 */
-	u32 shift;	/* only valid if div_reg != 0 */
-};
-#define to_socfpga_clk(p) container_of(p, struct socfpga_clk, hw.hw)
-
-static unsigned long clk_pll_recalc_rate(struct clk_hw *hwclk,
-					 unsigned long parent_rate)
-{
-	struct socfpga_clk *socfpgaclk = to_socfpga_clk(hwclk);
-	unsigned long divf, divq, vco_freq, reg;
-	unsigned long bypass;
-
-	reg = readl(socfpgaclk->hw.reg);
-	bypass = readl(clk_mgr_base_addr + CLKMGR_BYPASS);
-	if (bypass & MAINPLL_BYPASS)
-		return parent_rate;
-
-	divf = (reg & SOCFPGA_PLL_DIVF_MASK) >> SOCFPGA_PLL_DIVF_SHIFT;
-	divq = (reg & SOCFPGA_PLL_DIVQ_MASK) >> SOCFPGA_PLL_DIVQ_SHIFT;
-	vco_freq = parent_rate * (divf + 1);
-	return vco_freq / (1 + divq);
-}
-
-
-static struct clk_ops clk_pll_ops = {
-	.recalc_rate = clk_pll_recalc_rate,
+static const struct of_device_id socfpga_child_clocks[] __initconst = {
+	{ .compatible = "altr,socfpga-pll-clock", socfpga_pll_init, },
+	{ .compatible = "altr,socfpga-perip-clk", socfpga_periph_init, },
+	{ .compatible = "altr,socfpga-gate-clk", socfpga_gate_init, },
+	{},
 };
 
-static unsigned long clk_periclk_recalc_rate(struct clk_hw *hwclk,
-					     unsigned long parent_rate)
+static void __init socfpga_clkmgr_init(struct device_node *node)
 {
-	struct socfpga_clk *socfpgaclk = to_socfpga_clk(hwclk);
-	u32 div;
-
-	if (socfpgaclk->fixed_div)
-		div = socfpgaclk->fixed_div;
-	else
-		div = ((readl(socfpgaclk->hw.reg) & 0x1ff) + 1);
-
-	return parent_rate / div;
+	clk_mgr_base_addr = of_iomap(node, 0);
+	of_clk_init(socfpga_child_clocks);
 }
+CLK_OF_DECLARE(socfpga_mgr, "altr,clk-mgr", socfpga_clkmgr_init);
 
-static const struct clk_ops periclk_ops = {
-	.recalc_rate = clk_periclk_recalc_rate,
-};
-
-static __init struct clk *socfpga_clk_init(struct device_node *node,
-	const struct clk_ops *ops)
-{
-	u32 reg;
-	struct clk *clk;
-	struct socfpga_clk *socfpga_clk;
-	const char *clk_name = node->name;
-	const char *parent_name;
-	struct clk_init_data init;
-	int rc;
-	u32 fixed_div;
-
-	of_property_read_u32(node, "reg", &reg);
-
-	socfpga_clk = kzalloc(sizeof(*socfpga_clk), GFP_KERNEL);
-	if (WARN_ON(!socfpga_clk))
-		return NULL;
-
-	socfpga_clk->hw.reg = clk_mgr_base_addr + reg;
-
-	rc = of_property_read_u32(node, "fixed-divider", &fixed_div);
-	if (rc)
-		socfpga_clk->fixed_div = 0;
-	else
-		socfpga_clk->fixed_div = fixed_div;
-
-	of_property_read_string(node, "clock-output-names", &clk_name);
-
-	init.name = clk_name;
-	init.ops = ops;
-	init.flags = 0;
-	parent_name = of_clk_get_parent_name(node, 0);
-	init.parent_names = &parent_name;
-	init.num_parents = 1;
-
-	socfpga_clk->hw.hw.init = &init;
-
-	if (streq(clk_name, "main_pll") ||
-		streq(clk_name, "periph_pll") ||
-		streq(clk_name, "sdram_pll")) {
-		socfpga_clk->hw.bit_idx = SOCFPGA_PLL_EXT_ENA;
-		clk_pll_ops.enable = clk_gate_ops.enable;
-		clk_pll_ops.disable = clk_gate_ops.disable;
-	}
-
-	clk = clk_register(NULL, &socfpga_clk->hw.hw);
-	if (WARN_ON(IS_ERR(clk))) {
-		kfree(socfpga_clk);
-		return NULL;
-	}
-	rc = of_clk_add_provider(node, of_clk_src_simple_get, clk);
-	return clk;
-}
-
-static u8 socfpga_clk_get_parent(struct clk_hw *hwclk)
-{
-	u32 l4_src;
-	u32 perpll_src;
-
-	if (streq(hwclk->init->name, SOCFPGA_L4_MP_CLK)) {
-		l4_src = readl(clk_mgr_base_addr + CLKMGR_L4SRC);
-		return l4_src &= 0x1;
-	}
-	if (streq(hwclk->init->name, SOCFPGA_L4_SP_CLK)) {
-		l4_src = readl(clk_mgr_base_addr + CLKMGR_L4SRC);
-		return !!(l4_src & 2);
-	}
-
-	perpll_src = readl(clk_mgr_base_addr + CLKMGR_PERPLL_SRC);
-	if (streq(hwclk->init->name, SOCFPGA_MMC_CLK))
-		return perpll_src &= 0x3;
-	if (streq(hwclk->init->name, SOCFPGA_NAND_CLK) ||
-		streq(hwclk->init->name, SOCFPGA_NAND_X_CLK))
-		return (perpll_src >> 2) & 3;
-
-	/* QSPI clock */
-	return (perpll_src >> 4) & 3;
-
-}
-
-static int socfpga_clk_set_parent(struct clk_hw *hwclk, u8 parent)
-{
-	u32 src_reg;
-
-	if (streq(hwclk->init->name, SOCFPGA_L4_MP_CLK)) {
-		src_reg = readl(clk_mgr_base_addr + CLKMGR_L4SRC);
-		src_reg &= ~0x1;
-		src_reg |= parent;
-		writel(src_reg, clk_mgr_base_addr + CLKMGR_L4SRC);
-	} else if (streq(hwclk->init->name, SOCFPGA_L4_SP_CLK)) {
-		src_reg = readl(clk_mgr_base_addr + CLKMGR_L4SRC);
-		src_reg &= ~0x2;
-		src_reg |= (parent << 1);
-		writel(src_reg, clk_mgr_base_addr + CLKMGR_L4SRC);
-	} else {
-		src_reg = readl(clk_mgr_base_addr + CLKMGR_PERPLL_SRC);
-		if (streq(hwclk->init->name, SOCFPGA_MMC_CLK)) {
-			src_reg &= ~0x3;
-			src_reg |= parent;
-		} else if (streq(hwclk->init->name, SOCFPGA_NAND_CLK) ||
-			streq(hwclk->init->name, SOCFPGA_NAND_X_CLK)) {
-			src_reg &= ~0xC;
-			src_reg |= (parent << 2);
-		} else {/* QSPI clock */
-			src_reg &= ~0x30;
-			src_reg |= (parent << 4);
-		}
-		writel(src_reg, clk_mgr_base_addr + CLKMGR_PERPLL_SRC);
-	}
-
-	return 0;
-}
-
-static unsigned long socfpga_clk_recalc_rate(struct clk_hw *hwclk,
-	unsigned long parent_rate)
-{
-	struct socfpga_clk *socfpgaclk = to_socfpga_clk(hwclk);
-	u32 div = 1, val;
-
-	if (socfpgaclk->fixed_div)
-		div = socfpgaclk->fixed_div;
-	else if (socfpgaclk->div_reg) {
-		val = readl(socfpgaclk->div_reg) >> socfpgaclk->shift;
-		val &= div_mask(socfpgaclk->width);
-		if (streq(hwclk->init->name, SOCFPGA_DB_CLK))
-			div = val + 1;
-		else
-			div = (1 << val);
-	}
-
-	return parent_rate / div;
-}
-
-static struct clk_ops gateclk_ops = {
-	.recalc_rate = socfpga_clk_recalc_rate,
-	.get_parent = socfpga_clk_get_parent,
-	.set_parent = socfpga_clk_set_parent,
-};
-
-static void __init socfpga_gate_clk_init(struct device_node *node,
-	const struct clk_ops *ops)
-{
-	u32 clk_gate[2];
-	u32 div_reg[3];
-	u32 fixed_div;
-	struct clk *clk;
-	struct socfpga_clk *socfpga_clk;
-	const char *clk_name = node->name;
-	const char *parent_name[SOCFGPA_MAX_PARENTS];
-	struct clk_init_data init;
-	int rc;
-	int i = 0;
-
-	socfpga_clk = kzalloc(sizeof(*socfpga_clk), GFP_KERNEL);
-	if (WARN_ON(!socfpga_clk))
-		return;
-
-	rc = of_property_read_u32_array(node, "clk-gate", clk_gate, 2);
-	if (rc)
-		clk_gate[0] = 0;
-
-	if (clk_gate[0]) {
-		socfpga_clk->hw.reg = clk_mgr_base_addr + clk_gate[0];
-		socfpga_clk->hw.bit_idx = clk_gate[1];
-
-		gateclk_ops.enable = clk_gate_ops.enable;
-		gateclk_ops.disable = clk_gate_ops.disable;
-	}
-
-	rc = of_property_read_u32(node, "fixed-divider", &fixed_div);
-	if (rc)
-		socfpga_clk->fixed_div = 0;
-	else
-		socfpga_clk->fixed_div = fixed_div;
-
-	rc = of_property_read_u32_array(node, "div-reg", div_reg, 3);
-	if (!rc) {
-		socfpga_clk->div_reg = clk_mgr_base_addr + div_reg[0];
-		socfpga_clk->shift = div_reg[1];
-		socfpga_clk->width = div_reg[2];
-	} else {
-		socfpga_clk->div_reg = NULL;
-	}
-
-	of_property_read_string(node, "clock-output-names", &clk_name);
-
-	init.name = clk_name;
-	init.ops = ops;
-	init.flags = 0;
-	while (i < SOCFGPA_MAX_PARENTS && (parent_name[i] =
-			of_clk_get_parent_name(node, i)) != NULL)
-		i++;
-
-	init.parent_names = parent_name;
-	init.num_parents = i;
-	socfpga_clk->hw.hw.init = &init;
-
-	clk = clk_register(NULL, &socfpga_clk->hw.hw);
-	if (WARN_ON(IS_ERR(clk))) {
-		kfree(socfpga_clk);
-		return;
-	}
-	rc = of_clk_add_provider(node, of_clk_src_simple_get, clk);
-	if (WARN_ON(rc))
-		return;
-}
-
-static void __init socfpga_pll_init(struct device_node *node)
-{
-	socfpga_clk_init(node, &clk_pll_ops);
-}
-CLK_OF_DECLARE(socfpga_pll, "altr,socfpga-pll-clock", socfpga_pll_init);
-
-static void __init socfpga_periph_init(struct device_node *node)
-{
-	socfpga_clk_init(node, &periclk_ops);
-}
-CLK_OF_DECLARE(socfpga_periph, "altr,socfpga-perip-clk", socfpga_periph_init);
-
-static void __init socfpga_gate_init(struct device_node *node)
-{
-	socfpga_gate_clk_init(node, &gateclk_ops);
-}
-CLK_OF_DECLARE(socfpga_gate, "altr,socfpga-gate-clk", socfpga_gate_init);
-
-void __init socfpga_init_clocks(void)
-{
-	struct clk *clk;
-	int ret;
-
-	clk = clk_register_fixed_factor(NULL, "smp_twd", "mpuclk", 0, 1, 4);
-	ret = clk_register_clkdev(clk, NULL, "smp_twd");
-	if (ret)
-		pr_err("smp_twd alias not registered\n");
-}
+57
drivers/clk/socfpga/clk.h
+/*
+ * Copyright (c) 2013, Steffen Trumtrar <s.trumtrar@pengutronix.de>
+ *
+ * based on drivers/clk/tegra/clk.h
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ */
+
+#ifndef __SOCFPGA_CLK_H
+#define __SOCFPGA_CLK_H
+
+#include <linux/clk-provider.h>
+#include <linux/clkdev.h>
+
+/* Clock Manager offsets */
+#define CLKMGR_CTRL		0x0
+#define CLKMGR_BYPASS		0x4
+#define CLKMGR_L4SRC		0x70
+#define CLKMGR_PERPLL_SRC	0xAC
+
+#define SOCFPGA_MAX_PARENTS	3
+
+extern void __iomem *clk_mgr_base_addr;
+
+void __init socfpga_pll_init(struct device_node *node);
+void __init socfpga_periph_init(struct device_node *node);
+void __init socfpga_gate_init(struct device_node *node);
+
+struct socfpga_pll {
+	struct clk_gate	hw;
+};
+
+struct socfpga_gate_clk {
+	struct clk_gate hw;
+	char *parent_name;
+	u32 fixed_div;
+	void __iomem *div_reg;
+	u32 width;	/* only valid if div_reg != 0 */
+	u32 shift;	/* only valid if div_reg != 0 */
+	u32 clk_phase[2];
+};
+
+struct socfpga_periph_clk {
+	struct clk_gate hw;
+	char *parent_name;
+	u32 fixed_div;
+};
+
+#endif /* SOCFPGA_CLK_H */
+1
drivers/clk/st/Makefile
+obj-y += clkgen-mux.o clkgen-pll.o clkgen-fsyn.o
+1039
drivers/clk/st/clkgen-fsyn.c
+/*
+ * Copyright (C) 2014 STMicroelectronics R&D Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+/*
+ * Authors:
+ * Stephen Gallimore <stephen.gallimore@st.com>,
+ * Pankaj Dev <pankaj.dev@st.com>.
+ */
+
+#include <linux/slab.h>
+#include <linux/of_address.h>
+#include <linux/clk-provider.h>
+
+#include "clkgen.h"
+
+/*
+ * Maximum input clock to the PLL before we divide it down by 2
+ * although in reality in actual systems this has never been seen to
+ * be used.
+ */
+#define QUADFS_NDIV_THRESHOLD 30000000
+
+#define PLL_BW_GOODREF   (0L)
+#define PLL_BW_VBADREF   (1L)
+#define PLL_BW_BADREF    (2L)
+#define PLL_BW_VGOODREF  (3L)
+
+#define QUADFS_MAX_CHAN 4
+
+struct stm_fs {
+	unsigned long ndiv;
+	unsigned long mdiv;
+	unsigned long pe;
+	unsigned long sdiv;
+	unsigned long nsdiv;
+};
+
+static struct stm_fs fs216c65_rtbl[] = {
+	{ .mdiv = 0x1f, .pe = 0x0,    .sdiv = 0x7, .nsdiv = 0 },	/* 312.5 Khz */
+	{ .mdiv = 0x17, .pe = 0x25ed, .sdiv = 0x1, .nsdiv = 0 },	/* 27    MHz */
+	{ .mdiv = 0x1a, .pe = 0x7b36, .sdiv = 0x2, .nsdiv = 1 },	/* 36.87 MHz */
+	{ .mdiv = 0x13, .pe = 0x0,    .sdiv = 0x2, .nsdiv = 1 },	/* 48    MHz */
+	{ .mdiv = 0x11, .pe = 0x1c72, .sdiv = 0x1, .nsdiv = 1 },	/* 108   MHz */
+};
+
+static struct stm_fs fs432c65_rtbl[] = {
+	{ .mdiv = 0x1f, .pe = 0x0,    .sdiv = 0x7, .nsdiv = 0 },	/* 625   Khz */
+	{ .mdiv = 0x11, .pe = 0x1c72, .sdiv = 0x2, .nsdiv = 1 },	/* 108   MHz */
+	{ .mdiv = 0x19, .pe = 0x121a, .sdiv = 0x0, .nsdiv = 1 },	/* 297   MHz */
+};
+
+static struct stm_fs fs660c32_rtbl[] = {
+	{ .mdiv = 0x01, .pe = 0x2aaa, .sdiv = 0x8, .nsdiv = 0 },	/* 600   KHz */
+	{ .mdiv = 0x02, .pe = 0x3d33, .sdiv = 0x0, .nsdiv = 0 },	/* 148.5 Mhz */
+	{ .mdiv = 0x13, .pe = 0x5bcc, .sdiv = 0x0, .nsdiv = 1 },	/* 297   Mhz */
+	{ .mdiv = 0x0e, .pe = 0x1025, .sdiv = 0x0, .nsdiv = 1 },	/* 333   Mhz */
+	{ .mdiv = 0x0b, .pe = 0x715f, .sdiv = 0x0, .nsdiv = 1 },	/* 350   Mhz */
+};
+
+struct clkgen_quadfs_data {
+	bool reset_present;
+	bool bwfilter_present;
+	bool lockstatus_present;
+	bool nsdiv_present;
+	struct clkgen_field ndiv;
+	struct clkgen_field ref_bw;
+	struct clkgen_field nreset;
+	struct clkgen_field npda;
+	struct clkgen_field lock_status;
+
+	struct clkgen_field nsb[QUADFS_MAX_CHAN];
+	struct clkgen_field en[QUADFS_MAX_CHAN];
+	struct clkgen_field mdiv[QUADFS_MAX_CHAN];
+	struct clkgen_field pe[QUADFS_MAX_CHAN];
+	struct clkgen_field sdiv[QUADFS_MAX_CHAN];
+	struct clkgen_field nsdiv[QUADFS_MAX_CHAN];
+
+	const struct clk_ops *pll_ops;
+	struct stm_fs *rtbl;
+	u8 rtbl_cnt;
+	int (*get_rate)(unsigned long, struct stm_fs *,
+			unsigned long *);
+};
+
+static const struct clk_ops st_quadfs_pll_c65_ops;
+static const struct clk_ops st_quadfs_pll_c32_ops;
+static const struct clk_ops st_quadfs_fs216c65_ops;
+static const struct clk_ops st_quadfs_fs432c65_ops;
+static const struct clk_ops st_quadfs_fs660c32_ops;
+
+static int clk_fs216c65_get_rate(unsigned long, struct stm_fs *,
+		unsigned long *);
+static int clk_fs432c65_get_rate(unsigned long, struct stm_fs *,
+		unsigned long *);
+static int clk_fs660c32_dig_get_rate(unsigned long, struct stm_fs *,
+		unsigned long *);
+
+/*
+ * Values for all of the standalone instances of this clock
+ * generator found in STiH415 and STiH416 SYSCFG register banks. Note
+ * that the individual channel standby control bits (nsb) are in the
+ * first register along with the PLL control bits.
+ */
+static struct clkgen_quadfs_data st_fs216c65_416 = {
+	/* 416 specific */
+	.npda	= CLKGEN_FIELD(0x0, 0x1, 14),
+	.nsb	= { CLKGEN_FIELD(0x0, 0x1, 10),
+		    CLKGEN_FIELD(0x0, 0x1, 11),
+		    CLKGEN_FIELD(0x0, 0x1, 12),
+		    CLKGEN_FIELD(0x0, 0x1, 13) },
+	.nsdiv_present = true,
+	.nsdiv	= { CLKGEN_FIELD(0x0, 0x1, 18),
+		    CLKGEN_FIELD(0x0, 0x1, 19),
+		    CLKGEN_FIELD(0x0, 0x1, 20),
+		    CLKGEN_FIELD(0x0, 0x1, 21) },
+	.mdiv	= { CLKGEN_FIELD(0x4, 0x1f, 0),
+		    CLKGEN_FIELD(0x14, 0x1f, 0),
+		    CLKGEN_FIELD(0x24, 0x1f, 0),
+		    CLKGEN_FIELD(0x34, 0x1f, 0) },
+	.en	= { CLKGEN_FIELD(0x10, 0x1, 0),
+		    CLKGEN_FIELD(0x20, 0x1, 0),
+		    CLKGEN_FIELD(0x30, 0x1, 0),
+		    CLKGEN_FIELD(0x40, 0x1, 0) },
+	.ndiv	= CLKGEN_FIELD(0x0, 0x1, 15),
+	.bwfilter_present = true,
+	.ref_bw = CLKGEN_FIELD(0x0, 0x3, 16),
+	.pe	= { CLKGEN_FIELD(0x8, 0xffff, 0),
+		    CLKGEN_FIELD(0x18, 0xffff, 0),
+		    CLKGEN_FIELD(0x28, 0xffff, 0),
+		    CLKGEN_FIELD(0x38, 0xffff, 0) },
+	.sdiv	= { CLKGEN_FIELD(0xC, 0x7, 0),
+		    CLKGEN_FIELD(0x1C, 0x7, 0),
+		    CLKGEN_FIELD(0x2C, 0x7, 0),
+		    CLKGEN_FIELD(0x3C, 0x7, 0) },
+	.pll_ops	= &st_quadfs_pll_c65_ops,
+	.rtbl		= fs216c65_rtbl,
+	.rtbl_cnt	= ARRAY_SIZE(fs216c65_rtbl),
+	.get_rate	= clk_fs216c65_get_rate,
+};
+
+static struct clkgen_quadfs_data st_fs432c65_416 = {
+	.npda	= CLKGEN_FIELD(0x0, 0x1, 14),
+	.nsb	= { CLKGEN_FIELD(0x0, 0x1, 10),
+		    CLKGEN_FIELD(0x0, 0x1, 11),
+		    CLKGEN_FIELD(0x0, 0x1, 12),
+		    CLKGEN_FIELD(0x0, 0x1, 13) },
+	.nsdiv_present = true,
+	.nsdiv	= { CLKGEN_FIELD(0x0, 0x1, 18),
+		    CLKGEN_FIELD(0x0, 0x1, 19),
+		    CLKGEN_FIELD(0x0, 0x1, 20),
+		    CLKGEN_FIELD(0x0, 0x1, 21) },
+	.mdiv	= { CLKGEN_FIELD(0x4, 0x1f, 0),
+		    CLKGEN_FIELD(0x14, 0x1f, 0),
+		    CLKGEN_FIELD(0x24, 0x1f, 0),
+		    CLKGEN_FIELD(0x34, 0x1f, 0) },
+	.en	= { CLKGEN_FIELD(0x10, 0x1, 0),
+		    CLKGEN_FIELD(0x20, 0x1, 0),
+		    CLKGEN_FIELD(0x30, 0x1, 0),
+		    CLKGEN_FIELD(0x40, 0x1, 0) },
+	.ndiv	= CLKGEN_FIELD(0x0, 0x1, 15),
+	.bwfilter_present = true,
+	.ref_bw = CLKGEN_FIELD(0x0, 0x3, 16),
+	.pe	= { CLKGEN_FIELD(0x8, 0xffff, 0),
+		    CLKGEN_FIELD(0x18, 0xffff, 0),
+		    CLKGEN_FIELD(0x28, 0xffff, 0),
+		    CLKGEN_FIELD(0x38, 0xffff, 0) },
+	.sdiv	= { CLKGEN_FIELD(0xC, 0x7, 0),
+		    CLKGEN_FIELD(0x1C, 0x7, 0),
+		    CLKGEN_FIELD(0x2C, 0x7, 0),
+		    CLKGEN_FIELD(0x3C, 0x7, 0) },
+	.pll_ops	= &st_quadfs_pll_c65_ops,
+	.rtbl		= fs432c65_rtbl,
+	.rtbl_cnt	= ARRAY_SIZE(fs432c65_rtbl),
+	.get_rate	= clk_fs432c65_get_rate,
+};
+
+static struct clkgen_quadfs_data st_fs660c32_E_416 = {
+	.npda	= CLKGEN_FIELD(0x0, 0x1, 14),
+	.nsb	= { CLKGEN_FIELD(0x0, 0x1, 10),
+		    CLKGEN_FIELD(0x0, 0x1, 11),
+		    CLKGEN_FIELD(0x0, 0x1, 12),
+		    CLKGEN_FIELD(0x0, 0x1, 13) },
+	.nsdiv_present = true,
+	.nsdiv	= { CLKGEN_FIELD(0x0, 0x1, 18),
+		    CLKGEN_FIELD(0x0, 0x1, 19),
+		    CLKGEN_FIELD(0x0, 0x1, 20),
+		    CLKGEN_FIELD(0x0, 0x1, 21) },
+	.mdiv	= { CLKGEN_FIELD(0x4, 0x1f, 0),
+		    CLKGEN_FIELD(0x14, 0x1f, 0),
+		    CLKGEN_FIELD(0x24, 0x1f, 0),
+		    CLKGEN_FIELD(0x34, 0x1f, 0) },
+	.en	= { CLKGEN_FIELD(0x10, 0x1, 0),
+		    CLKGEN_FIELD(0x20, 0x1, 0),
+		    CLKGEN_FIELD(0x30, 0x1, 0),
+		    CLKGEN_FIELD(0x40, 0x1, 0) },
+	.ndiv	= CLKGEN_FIELD(0x0, 0x7, 15),
+	.pe	= { CLKGEN_FIELD(0x8, 0x7fff, 0),
+		    CLKGEN_FIELD(0x18, 0x7fff, 0),
+		    CLKGEN_FIELD(0x28, 0x7fff, 0),
+		    CLKGEN_FIELD(0x38, 0x7fff, 0) },
+	.sdiv	= { CLKGEN_FIELD(0xC, 0xf, 0),
+		    CLKGEN_FIELD(0x1C, 0xf, 0),
+		    CLKGEN_FIELD(0x2C, 0xf, 0),
+		    CLKGEN_FIELD(0x3C, 0xf, 0) },
+	.lockstatus_present = true,
+	.lock_status = CLKGEN_FIELD(0xAC, 0x1, 0),
+	.pll_ops	= &st_quadfs_pll_c32_ops,
+	.rtbl		= fs660c32_rtbl,
+	.rtbl_cnt	= ARRAY_SIZE(fs660c32_rtbl),
+	.get_rate	= clk_fs660c32_dig_get_rate,
+};
+
+static struct clkgen_quadfs_data st_fs660c32_F_416 = {
+	.npda	= CLKGEN_FIELD(0x0, 0x1, 14),
+	.nsb	= { CLKGEN_FIELD(0x0, 0x1, 10),
+		    CLKGEN_FIELD(0x0, 0x1, 11),
+		    CLKGEN_FIELD(0x0, 0x1, 12),
+		    CLKGEN_FIELD(0x0, 0x1, 13) },
+	.nsdiv_present = true,
+	.nsdiv	= { CLKGEN_FIELD(0x0, 0x1, 18),
+		    CLKGEN_FIELD(0x0, 0x1, 19),
+		    CLKGEN_FIELD(0x0, 0x1, 20),
+		    CLKGEN_FIELD(0x0, 0x1, 21) },
+	.mdiv	= { CLKGEN_FIELD(0x4, 0x1f, 0),
+		    CLKGEN_FIELD(0x14, 0x1f, 0),
+		    CLKGEN_FIELD(0x24, 0x1f, 0),
+		    CLKGEN_FIELD(0x34, 0x1f, 0) },
+	.en	= { CLKGEN_FIELD(0x10, 0x1, 0),
+		    CLKGEN_FIELD(0x20, 0x1, 0),
+		    CLKGEN_FIELD(0x30, 0x1, 0),
+		    CLKGEN_FIELD(0x40, 0x1, 0) },
+	.ndiv	= CLKGEN_FIELD(0x0, 0x7, 15),
+	.pe	= { CLKGEN_FIELD(0x8, 0x7fff, 0),
+		    CLKGEN_FIELD(0x18, 0x7fff, 0),
+		    CLKGEN_FIELD(0x28, 0x7fff, 0),
+		    CLKGEN_FIELD(0x38, 0x7fff, 0) },
+	.sdiv	= { CLKGEN_FIELD(0xC, 0xf, 0),
+		    CLKGEN_FIELD(0x1C, 0xf, 0),
+		    CLKGEN_FIELD(0x2C, 0xf, 0),
+		    CLKGEN_FIELD(0x3C, 0xf, 0) },
+	.lockstatus_present = true,
+	.lock_status = CLKGEN_FIELD(0xEC, 0x1, 0),
+	.pll_ops	= &st_quadfs_pll_c32_ops,
+	.rtbl		= fs660c32_rtbl,
+	.rtbl_cnt	= ARRAY_SIZE(fs660c32_rtbl),
+	.get_rate	= clk_fs660c32_dig_get_rate,
+};
+
+/**
+ * DOC: A Frequency Synthesizer that multiples its input clock by a fixed factor
+ *
+ * Traits of this clock:
+ * prepare - clk_(un)prepare only ensures parent is (un)prepared
+ * enable - clk_enable and clk_disable are functional & control the Fsyn
+ * rate - inherits rate from parent. set_rate/round_rate/recalc_rate
+ * parent - fixed parent.  No clk_set_parent support
+ */
+
+/**
+ * struct st_clk_quadfs_pll - A pll which outputs a fixed multiplier of
+ *                            its parent clock, found inside a type of
+ *                            ST quad channel frequency synthesizer block
+ *
+ * @hw: handle between common and hardware-specific interfaces.
+ * @ndiv: regmap field for the ndiv control.
+ * @regs_base: base address of the configuration registers.
+ * @lock: spinlock.
+ *
+ */
+struct st_clk_quadfs_pll {
+	struct clk_hw	hw;
+	void __iomem	*regs_base;
+	spinlock_t	*lock;
+	struct clkgen_quadfs_data *data;
+	u32	ndiv;
+};
+
+#define to_quadfs_pll(_hw) container_of(_hw, struct st_clk_quadfs_pll, hw)
+
+static int quadfs_pll_enable(struct clk_hw *hw)
+{
+	struct st_clk_quadfs_pll *pll = to_quadfs_pll(hw);
+	unsigned long flags = 0, timeout = jiffies + msecs_to_jiffies(10);
+
+	if (pll->lock)
+		spin_lock_irqsave(pll->lock, flags);
+
+	/*
+	 * Bring block out of reset if we have reset control.
+	 */
+	if (pll->data->reset_present)
+		CLKGEN_WRITE(pll, nreset, 1);
+
+	/*
+	 * Use a fixed input clock noise bandwidth filter for the moment
+	 */
+	if (pll->data->bwfilter_present)
+		CLKGEN_WRITE(pll, ref_bw, PLL_BW_GOODREF);
+
+	CLKGEN_WRITE(pll, ndiv, pll->ndiv);
+
+	/*
+	 * Power up the PLL
+	 */
+	CLKGEN_WRITE(pll, npda, 1);
+
+	if (pll->lock)
+		spin_unlock_irqrestore(pll->lock, flags);
+
+	if (pll->data->lockstatus_present)
+		while (!CLKGEN_READ(pll, lock_status)) {
+			if (time_after(jiffies, timeout))
+				return -ETIMEDOUT;
+			cpu_relax();
+		}
+
+	return 0;
+}
+
+static void quadfs_pll_disable(struct clk_hw *hw)
+{
+	struct st_clk_quadfs_pll *pll = to_quadfs_pll(hw);
+	unsigned long flags = 0;
+
+	if (pll->lock)
+		spin_lock_irqsave(pll->lock, flags);
+
+	/*
+	 * Powerdown the PLL and then put block into soft reset if we have
+	 * reset control.
+	 */
+	CLKGEN_WRITE(pll, npda, 0);
+
+	if (pll->data->reset_present)
+		CLKGEN_WRITE(pll, nreset, 0);
+
+	if (pll->lock)
+		spin_unlock_irqrestore(pll->lock, flags);
+}
+
+static int quadfs_pll_is_enabled(struct clk_hw *hw)
+{
+	struct st_clk_quadfs_pll *pll = to_quadfs_pll(hw);
+	u32 npda = CLKGEN_READ(pll, npda);
+
+	return !!npda;
+}
+
+int clk_fs660c32_vco_get_rate(unsigned long input, struct stm_fs *fs,
+			      unsigned long *rate)
+{
+	unsigned long nd = fs->ndiv + 16;	/* ndiv value */
+
+	*rate = input * nd;
+
+	return 0;
+}
+
+static unsigned long quadfs_pll_fs660c32_recalc_rate(struct clk_hw *hw,
+					unsigned long parent_rate)
+{
+	struct st_clk_quadfs_pll *pll = to_quadfs_pll(hw);
+	unsigned long rate = 0;
+	struct stm_fs params;
+
+	params.ndiv = CLKGEN_READ(pll, ndiv);
+	if (clk_fs660c32_vco_get_rate(parent_rate, &params, &rate))
+		pr_err("%s:%s error calculating rate\n",
+		       __clk_get_name(hw->clk), __func__);
+
+	pll->ndiv = params.ndiv;
+
+	return rate;
+}
+
+int clk_fs660c32_vco_get_params(unsigned long input,
+				unsigned long output, struct stm_fs *fs)
+{
+	/* Formula
+	   VCO frequency = (fin x ndiv) / pdiv
+	   ndiv = VCOfreq * pdiv / fin
+	   */
+	unsigned long pdiv = 1, n;
+
+	/* Output clock range: 384Mhz to 660Mhz */
+	if (output < 384000000 || output > 660000000)
+		return -EINVAL;
+
+	if (input > 40000000)
+		/* This means that PDIV would be 2 instead of 1.
+		   Not supported today. */
+		return -EINVAL;
+
+	input /= 1000;
+	output /= 1000;
+
+	n = output * pdiv / input;
+	if (n < 16)
+		n = 16;
+	fs->ndiv = n - 16;	/* Converting formula value to reg value */
+
+	return 0;
+}
+
+static long quadfs_pll_fs660c32_round_rate(struct clk_hw *hw, unsigned long rate,
+					   unsigned long *prate)
+{
+	struct stm_fs params;
+
+	if (!clk_fs660c32_vco_get_params(*prate, rate, &params))
+		clk_fs660c32_vco_get_rate(*prate, &params, &rate);
+
+	pr_debug("%s: %s new rate %ld [sdiv=0x%x,md=0x%x,pe=0x%x,nsdiv3=%u]\n",
+		 __func__, __clk_get_name(hw->clk),
+		 rate, (unsigned int)params.sdiv,
+		 (unsigned int)params.mdiv,
+		 (unsigned int)params.pe, (unsigned int)params.nsdiv);
+
+	return rate;
+}
+
+static int quadfs_pll_fs660c32_set_rate(struct clk_hw *hw, unsigned long rate,
+				unsigned long parent_rate)
+{
+	struct st_clk_quadfs_pll *pll = to_quadfs_pll(hw);
+	struct stm_fs params;
+	long hwrate = 0;
+	unsigned long flags = 0;
+
+	if (!rate || !parent_rate)
+		return -EINVAL;
+
+	if (!clk_fs660c32_vco_get_params(parent_rate, rate, &params))
+		clk_fs660c32_vco_get_rate(parent_rate, &params, &hwrate);
+
+	pr_debug("%s: %s new rate %ld [ndiv=0x%x]\n",
+		 __func__, __clk_get_name(hw->clk),
+		 hwrate, (unsigned int)params.ndiv);
+
+	if (!hwrate)
+		return -EINVAL;
+
+	pll->ndiv = params.ndiv;
+
+	if (pll->lock)
+		spin_lock_irqsave(pll->lock, flags);
+
+	CLKGEN_WRITE(pll, ndiv, pll->ndiv);
+
+	if (pll->lock)
+		spin_unlock_irqrestore(pll->lock, flags);
+
+	return 0;
+}
+
+static const struct clk_ops st_quadfs_pll_c65_ops = {
+	.enable		= quadfs_pll_enable,
+	.disable	= quadfs_pll_disable,
+	.is_enabled	= quadfs_pll_is_enabled,
+};
+
+static const struct clk_ops st_quadfs_pll_c32_ops = {
.enable = quadfs_pll_enable, 470 + .disable = quadfs_pll_disable, 471 + .is_enabled = quadfs_pll_is_enabled, 472 + .recalc_rate = quadfs_pll_fs660c32_recalc_rate, 473 + .round_rate = quadfs_pll_fs660c32_round_rate, 474 + .set_rate = quadfs_pll_fs660c32_set_rate, 475 + }; 476 + 477 + static struct clk * __init st_clk_register_quadfs_pll( 478 + const char *name, const char *parent_name, 479 + struct clkgen_quadfs_data *quadfs, void __iomem *reg, 480 + spinlock_t *lock) 481 + { 482 + struct st_clk_quadfs_pll *pll; 483 + struct clk *clk; 484 + struct clk_init_data init; 485 + 486 + /* 487 + * Sanity check required pointers. 488 + */ 489 + if (WARN_ON(!name || !parent_name)) 490 + return ERR_PTR(-EINVAL); 491 + 492 + pll = kzalloc(sizeof(*pll), GFP_KERNEL); 493 + if (!pll) 494 + return ERR_PTR(-ENOMEM); 495 + 496 + init.name = name; 497 + init.ops = quadfs->pll_ops; 498 + init.flags = CLK_IS_BASIC; 499 + init.parent_names = &parent_name; 500 + init.num_parents = 1; 501 + 502 + pll->data = quadfs; 503 + pll->regs_base = reg; 504 + pll->lock = lock; 505 + pll->hw.init = &init; 506 + 507 + clk = clk_register(NULL, &pll->hw); 508 + 509 + if (IS_ERR(clk)) 510 + kfree(pll); 511 + 512 + return clk; 513 + } 514 + 515 + /** 516 + * DOC: A digital frequency synthesizer 517 + * 518 + * Traits of this clock: 519 + * prepare - clk_(un)prepare only ensures parent is (un)prepared 520 + * enable - clk_enable and clk_disable are functional 521 + * rate - set rate is functional 522 + * parent - fixed parent. No clk_set_parent support 523 + */ 524 + 525 + /** 526 + * struct st_clk_quadfs_fsynth - One clock output from a four channel digital 527 + * frequency synthesizer (fsynth) block. 528 + * 529 + * @hw: handle between common and hardware-specific interfaces 530 + * 531 + * @nsb: regmap field in the output control register for the digital 532 + * standby of this fsynth channel. This control is active low so 533 + * the channel is in standby when the control bit is cleared. 
534 + * 535 + * @nsdiv: regmap field in the output control register for 536 + * for the optional divide by 3 of this fsynth channel. This control 537 + * is active low so the divide by 3 is active when the control bit is 538 + * cleared and the divide is bypassed when the bit is set. 539 + */ 540 + struct st_clk_quadfs_fsynth { 541 + struct clk_hw hw; 542 + void __iomem *regs_base; 543 + spinlock_t *lock; 544 + struct clkgen_quadfs_data *data; 545 + 546 + u32 chan; 547 + /* 548 + * Cached hardware values from set_rate so we can program the 549 + * hardware in enable. There are two reasons for this: 550 + * 551 + * 1. The registers may not be writable until the parent has been 552 + * enabled. 553 + * 554 + * 2. It restores the clock rate when a driver does an enable 555 + * on PM restore, after a suspend to RAM has lost the hardware 556 + * setup. 557 + */ 558 + u32 md; 559 + u32 pe; 560 + u32 sdiv; 561 + u32 nsdiv; 562 + }; 563 + 564 + #define to_quadfs_fsynth(_hw) \ 565 + container_of(_hw, struct st_clk_quadfs_fsynth, hw) 566 + 567 + static void quadfs_fsynth_program_enable(struct st_clk_quadfs_fsynth *fs) 568 + { 569 + /* 570 + * Pulse the program enable register lsb to make the hardware take 571 + * notice of the new md/pe values with a glitchless transition. 572 + */ 573 + CLKGEN_WRITE(fs, en[fs->chan], 1); 574 + CLKGEN_WRITE(fs, en[fs->chan], 0); 575 + } 576 + 577 + static void quadfs_fsynth_program_rate(struct st_clk_quadfs_fsynth *fs) 578 + { 579 + unsigned long flags = 0; 580 + 581 + /* 582 + * Ensure the md/pe parameters are ignored while we are 583 + * reprogramming them so we can get a glitchless change 584 + * when fine tuning the speed of a running clock. 
585 + */ 586 + CLKGEN_WRITE(fs, en[fs->chan], 0); 587 + 588 + CLKGEN_WRITE(fs, mdiv[fs->chan], fs->md); 589 + CLKGEN_WRITE(fs, pe[fs->chan], fs->pe); 590 + CLKGEN_WRITE(fs, sdiv[fs->chan], fs->sdiv); 591 + 592 + if (fs->lock) 593 + spin_lock_irqsave(fs->lock, flags); 594 + 595 + if (fs->data->nsdiv_present) 596 + CLKGEN_WRITE(fs, nsdiv[fs->chan], fs->nsdiv); 597 + 598 + if (fs->lock) 599 + spin_unlock_irqrestore(fs->lock, flags); 600 + } 601 + 602 + static int quadfs_fsynth_enable(struct clk_hw *hw) 603 + { 604 + struct st_clk_quadfs_fsynth *fs = to_quadfs_fsynth(hw); 605 + unsigned long flags = 0; 606 + 607 + pr_debug("%s: %s\n", __func__, __clk_get_name(hw->clk)); 608 + 609 + quadfs_fsynth_program_rate(fs); 610 + 611 + if (fs->lock) 612 + spin_lock_irqsave(fs->lock, flags); 613 + 614 + CLKGEN_WRITE(fs, nsb[fs->chan], 1); 615 + 616 + if (fs->lock) 617 + spin_unlock_irqrestore(fs->lock, flags); 618 + 619 + quadfs_fsynth_program_enable(fs); 620 + 621 + return 0; 622 + } 623 + 624 + static void quadfs_fsynth_disable(struct clk_hw *hw) 625 + { 626 + struct st_clk_quadfs_fsynth *fs = to_quadfs_fsynth(hw); 627 + unsigned long flags = 0; 628 + 629 + pr_debug("%s: %s\n", __func__, __clk_get_name(hw->clk)); 630 + 631 + if (fs->lock) 632 + spin_lock_irqsave(fs->lock, flags); 633 + 634 + CLKGEN_WRITE(fs, nsb[fs->chan], 0); 635 + 636 + if (fs->lock) 637 + spin_unlock_irqrestore(fs->lock, flags); 638 + } 639 + 640 + static int quadfs_fsynth_is_enabled(struct clk_hw *hw) 641 + { 642 + struct st_clk_quadfs_fsynth *fs = to_quadfs_fsynth(hw); 643 + u32 nsb = CLKGEN_READ(fs, nsb[fs->chan]); 644 + 645 + pr_debug("%s: %s enable bit = 0x%x\n", 646 + __func__, __clk_get_name(hw->clk), nsb); 647 + 648 + return !!nsb; 649 + } 650 + 651 + #define P15 (uint64_t)(1 << 15) 652 + 653 + static int clk_fs216c65_get_rate(unsigned long input, struct stm_fs *fs, 654 + unsigned long *rate) 655 + { 656 + uint64_t res; 657 + unsigned long ns; 658 + unsigned long nd = 8; /* ndiv stuck at 0 => val = 8 
*/ 659 + unsigned long s; 660 + long m; 661 + 662 + m = fs->mdiv - 32; 663 + s = 1 << (fs->sdiv + 1); 664 + ns = (fs->nsdiv ? 1 : 3); 665 + 666 + res = (uint64_t)(s * ns * P15 * (uint64_t)(m + 33)); 667 + res = res - (s * ns * fs->pe); 668 + *rate = div64_u64(P15 * nd * input * 32, res); 669 + 670 + return 0; 671 + } 672 + 673 + static int clk_fs432c65_get_rate(unsigned long input, struct stm_fs *fs, 674 + unsigned long *rate) 675 + { 676 + uint64_t res; 677 + unsigned long nd = 16; /* ndiv value; stuck at 0 (30Mhz input) */ 678 + long m; 679 + unsigned long sd; 680 + unsigned long ns; 681 + 682 + m = fs->mdiv - 32; 683 + sd = 1 << (fs->sdiv + 1); 684 + ns = (fs->nsdiv ? 1 : 3); 685 + 686 + res = (uint64_t)(sd * ns * P15 * (uint64_t)(m + 33)); 687 + res = res - (sd * ns * fs->pe); 688 + *rate = div64_u64(P15 * nd * input * 32, res); 689 + 690 + return 0; 691 + } 692 + 693 + #define P20 (uint64_t)(1 << 20) 694 + 695 + static int clk_fs660c32_dig_get_rate(unsigned long input, 696 + struct stm_fs *fs, unsigned long *rate) 697 + { 698 + unsigned long s = (1 << fs->sdiv); 699 + unsigned long ns; 700 + uint64_t res; 701 + 702 + /* 703 + * 'nsdiv' is a register value ('BIN') which is translated 704 + * to a decimal value according to following rules. 705 + * 706 + * nsdiv ns.dec 707 + * 0 3 708 + * 1 1 709 + */ 710 + ns = (fs->nsdiv == 1) ? 
1 : 3; 711 + 712 + res = (P20 * (32 + fs->mdiv) + 32 * fs->pe) * s * ns; 713 + *rate = (unsigned long)div64_u64(input * P20 * 32, res); 714 + 715 + return 0; 716 + } 717 + 718 + static int quadfs_fsynt_get_hw_value_for_recalc(struct st_clk_quadfs_fsynth *fs, 719 + struct stm_fs *params) 720 + { 721 + /* 722 + * Get the initial hardware values for recalc_rate 723 + */ 724 + params->mdiv = CLKGEN_READ(fs, mdiv[fs->chan]); 725 + params->pe = CLKGEN_READ(fs, pe[fs->chan]); 726 + params->sdiv = CLKGEN_READ(fs, sdiv[fs->chan]); 727 + 728 + if (fs->data->nsdiv_present) 729 + params->nsdiv = CLKGEN_READ(fs, nsdiv[fs->chan]); 730 + else 731 + params->nsdiv = 1; 732 + 733 + /* 734 + * If All are NULL then assume no clock rate is programmed. 735 + */ 736 + if (!params->mdiv && !params->pe && !params->sdiv) 737 + return 1; 738 + 739 + fs->md = params->mdiv; 740 + fs->pe = params->pe; 741 + fs->sdiv = params->sdiv; 742 + fs->nsdiv = params->nsdiv; 743 + 744 + return 0; 745 + } 746 + 747 + static long quadfs_find_best_rate(struct clk_hw *hw, unsigned long drate, 748 + unsigned long prate, struct stm_fs *params) 749 + { 750 + struct st_clk_quadfs_fsynth *fs = to_quadfs_fsynth(hw); 751 + int (*clk_fs_get_rate)(unsigned long , 752 + struct stm_fs *, unsigned long *); 753 + struct stm_fs prev_params; 754 + unsigned long prev_rate, rate = 0; 755 + unsigned long diff_rate, prev_diff_rate = ~0; 756 + int index; 757 + 758 + clk_fs_get_rate = fs->data->get_rate; 759 + 760 + for (index = 0; index < fs->data->rtbl_cnt; index++) { 761 + prev_rate = rate; 762 + 763 + *params = fs->data->rtbl[index]; 764 + prev_params = *params; 765 + 766 + clk_fs_get_rate(prate, &fs->data->rtbl[index], &rate); 767 + 768 + diff_rate = abs(drate - rate); 769 + 770 + if (diff_rate > prev_diff_rate) { 771 + rate = prev_rate; 772 + *params = prev_params; 773 + break; 774 + } 775 + 776 + prev_diff_rate = diff_rate; 777 + 778 + if (drate == rate) 779 + return rate; 780 + } 781 + 782 + 783 + if (index == 
fs->data->rtbl_cnt) 784 + *params = prev_params; 785 + 786 + return rate; 787 + } 788 + 789 + static unsigned long quadfs_recalc_rate(struct clk_hw *hw, 790 + unsigned long parent_rate) 791 + { 792 + struct st_clk_quadfs_fsynth *fs = to_quadfs_fsynth(hw); 793 + unsigned long rate = 0; 794 + struct stm_fs params; 795 + int (*clk_fs_get_rate)(unsigned long , 796 + struct stm_fs *, unsigned long *); 797 + 798 + clk_fs_get_rate = fs->data->get_rate; 799 + 800 + if (quadfs_fsynt_get_hw_value_for_recalc(fs, &params)) 801 + return 0; 802 + 803 + if (clk_fs_get_rate(parent_rate, &params, &rate)) { 804 + pr_err("%s:%s error calculating rate\n", 805 + __clk_get_name(hw->clk), __func__); 806 + } 807 + 808 + pr_debug("%s:%s rate %lu\n", __clk_get_name(hw->clk), __func__, rate); 809 + 810 + return rate; 811 + } 812 + 813 + static long quadfs_round_rate(struct clk_hw *hw, unsigned long rate, 814 + unsigned long *prate) 815 + { 816 + struct stm_fs params; 817 + 818 + rate = quadfs_find_best_rate(hw, rate, *prate, &params); 819 + 820 + pr_debug("%s: %s new rate %ld [sdiv=0x%x,md=0x%x,pe=0x%x,nsdiv3=%u]\n", 821 + __func__, __clk_get_name(hw->clk), 822 + rate, (unsigned int)params.sdiv, (unsigned int)params.mdiv, 823 + (unsigned int)params.pe, (unsigned int)params.nsdiv); 824 + 825 + return rate; 826 + } 827 + 828 + 829 + static void quadfs_program_and_enable(struct st_clk_quadfs_fsynth *fs, 830 + struct stm_fs *params) 831 + { 832 + fs->md = params->mdiv; 833 + fs->pe = params->pe; 834 + fs->sdiv = params->sdiv; 835 + fs->nsdiv = params->nsdiv; 836 + 837 + /* 838 + * In some integrations you can only change the fsynth programming when 839 + * the parent entity containing it is enabled. 
840 + */ 841 + quadfs_fsynth_program_rate(fs); 842 + quadfs_fsynth_program_enable(fs); 843 + } 844 + 845 + static int quadfs_set_rate(struct clk_hw *hw, unsigned long rate, 846 + unsigned long parent_rate) 847 + { 848 + struct st_clk_quadfs_fsynth *fs = to_quadfs_fsynth(hw); 849 + struct stm_fs params; 850 + long hwrate; 851 + int uninitialized_var(i); 852 + 853 + if (!rate || !parent_rate) 854 + return -EINVAL; 855 + 856 + memset(&params, 0, sizeof(struct stm_fs)); 857 + 858 + hwrate = quadfs_find_best_rate(hw, rate, parent_rate, &params); 859 + if (!hwrate) 860 + return -EINVAL; 861 + 862 + quadfs_program_and_enable(fs, &params); 863 + 864 + return 0; 865 + } 866 + 867 + 868 + 869 + static const struct clk_ops st_quadfs_ops = { 870 + .enable = quadfs_fsynth_enable, 871 + .disable = quadfs_fsynth_disable, 872 + .is_enabled = quadfs_fsynth_is_enabled, 873 + .round_rate = quadfs_round_rate, 874 + .set_rate = quadfs_set_rate, 875 + .recalc_rate = quadfs_recalc_rate, 876 + }; 877 + 878 + static struct clk * __init st_clk_register_quadfs_fsynth( 879 + const char *name, const char *parent_name, 880 + struct clkgen_quadfs_data *quadfs, void __iomem *reg, u32 chan, 881 + spinlock_t *lock) 882 + { 883 + struct st_clk_quadfs_fsynth *fs; 884 + struct clk *clk; 885 + struct clk_init_data init; 886 + 887 + /* 888 + * Sanity check required pointers, note that nsdiv3 is optional. 
889 + */ 890 + if (WARN_ON(!name || !parent_name)) 891 + return ERR_PTR(-EINVAL); 892 + 893 + fs = kzalloc(sizeof(*fs), GFP_KERNEL); 894 + if (!fs) 895 + return ERR_PTR(-ENOMEM); 896 + 897 + init.name = name; 898 + init.ops = &st_quadfs_ops; 899 + init.flags = CLK_GET_RATE_NOCACHE | CLK_IS_BASIC; 900 + init.parent_names = &parent_name; 901 + init.num_parents = 1; 902 + 903 + fs->data = quadfs; 904 + fs->regs_base = reg; 905 + fs->chan = chan; 906 + fs->lock = lock; 907 + fs->hw.init = &init; 908 + 909 + clk = clk_register(NULL, &fs->hw); 910 + 911 + if (IS_ERR(clk)) 912 + kfree(fs); 913 + 914 + return clk; 915 + } 916 + 917 + static struct of_device_id quadfs_of_match[] = { 918 + { 919 + .compatible = "st,stih416-quadfs216", 920 + .data = (void *)&st_fs216c65_416 921 + }, 922 + { 923 + .compatible = "st,stih416-quadfs432", 924 + .data = (void *)&st_fs432c65_416 925 + }, 926 + { 927 + .compatible = "st,stih416-quadfs660-E", 928 + .data = (void *)&st_fs660c32_E_416 929 + }, 930 + { 931 + .compatible = "st,stih416-quadfs660-F", 932 + .data = (void *)&st_fs660c32_F_416 933 + }, 934 + {} 935 + }; 936 + 937 + static void __init st_of_create_quadfs_fsynths( 938 + struct device_node *np, const char *pll_name, 939 + struct clkgen_quadfs_data *quadfs, void __iomem *reg, 940 + spinlock_t *lock) 941 + { 942 + struct clk_onecell_data *clk_data; 943 + int fschan; 944 + 945 + clk_data = kzalloc(sizeof(*clk_data), GFP_KERNEL); 946 + if (!clk_data) 947 + return; 948 + 949 + clk_data->clk_num = QUADFS_MAX_CHAN; 950 + clk_data->clks = kzalloc(QUADFS_MAX_CHAN * sizeof(struct clk *), 951 + GFP_KERNEL); 952 + 953 + if (!clk_data->clks) { 954 + kfree(clk_data); 955 + return; 956 + } 957 + 958 + for (fschan = 0; fschan < QUADFS_MAX_CHAN; fschan++) { 959 + struct clk *clk; 960 + const char *clk_name; 961 + 962 + if (of_property_read_string_index(np, "clock-output-names", 963 + fschan, &clk_name)) { 964 + break; 965 + } 966 + 967 + /* 968 + * If we read an empty clock name then the channel 
is unused 969 + */ 970 + if (*clk_name == '\0') 971 + continue; 972 + 973 + clk = st_clk_register_quadfs_fsynth(clk_name, pll_name, 974 + quadfs, reg, fschan, lock); 975 + 976 + /* 977 + * If there was an error registering this clock output, clean 978 + * up and move on to the next one. 979 + */ 980 + if (!IS_ERR(clk)) { 981 + clk_data->clks[fschan] = clk; 982 + pr_debug("%s: parent %s rate %u\n", 983 + __clk_get_name(clk), 984 + __clk_get_name(clk_get_parent(clk)), 985 + (unsigned int)clk_get_rate(clk)); 986 + } 987 + } 988 + 989 + of_clk_add_provider(np, of_clk_src_onecell_get, clk_data); 990 + } 991 + 992 + static void __init st_of_quadfs_setup(struct device_node *np) 993 + { 994 + const struct of_device_id *match; 995 + struct clk *clk; 996 + const char *pll_name, *clk_parent_name; 997 + void __iomem *reg; 998 + spinlock_t *lock; 999 + 1000 + match = of_match_node(quadfs_of_match, np); 1001 + if (WARN_ON(!match)) 1002 + return; 1003 + 1004 + reg = of_iomap(np, 0); 1005 + if (!reg) 1006 + return; 1007 + 1008 + clk_parent_name = of_clk_get_parent_name(np, 0); 1009 + if (!clk_parent_name) 1010 + return; 1011 + 1012 + pll_name = kasprintf(GFP_KERNEL, "%s.pll", np->name); 1013 + if (!pll_name) 1014 + return; 1015 + 1016 + lock = kzalloc(sizeof(*lock), GFP_KERNEL); 1017 + if (!lock) 1018 + goto err_exit; 1019 + 1020 + spin_lock_init(lock); 1021 + 1022 + clk = st_clk_register_quadfs_pll(pll_name, clk_parent_name, 1023 + (struct clkgen_quadfs_data *) match->data, reg, lock); 1024 + if (IS_ERR(clk)) 1025 + goto err_exit; 1026 + else 1027 + pr_debug("%s: parent %s rate %u\n", 1028 + __clk_get_name(clk), 1029 + __clk_get_name(clk_get_parent(clk)), 1030 + (unsigned int)clk_get_rate(clk)); 1031 + 1032 + st_of_create_quadfs_fsynths(np, pll_name, 1033 + (struct clkgen_quadfs_data *)match->data, 1034 + reg, lock); 1035 + 1036 + err_exit: 1037 + kfree(pll_name); /* No longer need local copy of the PLL name */ 1038 + } 1039 + CLK_OF_DECLARE(quadfs, "st,quadfs", 
st_of_quadfs_setup);
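The fs660c32 VCO helpers in the diff above reduce to simple integer arithmetic: `ndiv` is stored in the register biased by -16, and the output is `fin * (ndiv + 16)` with `pdiv` fixed at 1. The following is a minimal standalone C model of that math (not code from the patch; the `fs660c32_`-prefixed names are hypothetical and the `struct stm_fs` plumbing is omitted):

```c
#include <errno.h>

/*
 * Model of clk_fs660c32_vco_get_params(): pick the register ndiv value
 * for a requested VCO output. The VCO range is 384-660 MHz and only
 * pdiv == 1 (input <= 40 MHz) is handled, as in the driver.
 */
static int fs660c32_vco_get_params(unsigned long input, unsigned long output,
				   unsigned long *ndiv)
{
	unsigned long pdiv = 1, n;

	if (output < 384000000UL || output > 660000000UL)
		return -EINVAL;	/* outside the VCO range */

	if (input > 40000000UL)
		return -EINVAL;	/* would need pdiv == 2, not supported */

	input /= 1000;		/* work in kHz to avoid overflow */
	output /= 1000;

	n = output * pdiv / input;	/* integer divide, rounds down */
	if (n < 16)
		n = 16;			/* hardware minimum multiplier */
	*ndiv = n - 16;			/* register value is biased by -16 */

	return 0;
}

/* Model of clk_fs660c32_vco_get_rate(): fout = fin * (ndiv + 16). */
static unsigned long fs660c32_vco_get_rate(unsigned long input,
					   unsigned long ndiv)
{
	return input * (ndiv + 16);
}
```

For example, with a 20 MHz input and a 600 MHz target the model picks ndiv = 14 (n = 30), and recomputing the rate from that register value lands exactly back on 600 MHz.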
+820
drivers/clk/st/clkgen-mux.c
··· 1 + /*
2 +  * clkgen-mux.c: ST GEN-MUX Clock driver
3 +  *
4 +  * Copyright (C) 2014 STMicroelectronics (R&D) Limited
5 +  *
6 +  * Authors: Stephen Gallimore <stephen.gallimore@st.com>
7 +  *	    Pankaj Dev <pankaj.dev@st.com>
8 +  *
9 +  * This program is free software; you can redistribute it and/or modify
10 +  * it under the terms of the GNU General Public License as published by
11 +  * the Free Software Foundation; either version 2 of the License, or
12 +  * (at your option) any later version.
13 +  *
14 +  */
15 +
16 + #include <linux/slab.h>
17 + #include <linux/of_address.h>
18 + #include <linux/clk-provider.h>
19 +
20 + static DEFINE_SPINLOCK(clkgena_divmux_lock);
21 + static DEFINE_SPINLOCK(clkgenf_lock);
22 +
23 + static const char ** __init clkgen_mux_get_parents(struct device_node *np,
24 +		int *num_parents)
25 + {
26 +	const char **parents;
27 +	int nparents, i;
28 +
29 +	nparents = of_count_phandle_with_args(np, "clocks", "#clock-cells");
30 +	if (WARN_ON(nparents <= 0))
31 +		return ERR_PTR(-EINVAL);
32 +
33 +	parents = kzalloc(nparents * sizeof(const char *), GFP_KERNEL);
34 +	if (!parents)
35 +		return ERR_PTR(-ENOMEM);
36 +
37 +	for (i = 0; i < nparents; i++)
38 +		parents[i] = of_clk_get_parent_name(np, i);
39 +
40 +	*num_parents = nparents;
41 +	return parents;
42 + }
43 +
44 + /**
45 +  * DOC: Clock mux with a programmable divider on each of its three inputs.
46 +  * The mux has an input setting which effectively gates its output.
47 +  *
48 +  * Traits of this clock:
49 +  * prepare - clk_(un)prepare only ensures parent is (un)prepared
50 +  * enable - clk_enable and clk_disable are functional & control gating
51 +  * rate - set rate is supported
52 +  * parent - set/get parent
53 +  */
54 +
55 + #define NUM_INPUTS 3
56 +
57 + struct clkgena_divmux {
58 +	struct clk_hw hw;
59 +	/* Subclassed mux and divider structures */
60 +	struct clk_mux mux;
61 +	struct clk_divider div[NUM_INPUTS];
62 +	/* Enable/running feedback register bits for each input */
63 +	void __iomem *feedback_reg[NUM_INPUTS];
64 +	int feedback_bit_idx;
65 +
66 +	u8 muxsel;
67 + };
68 +
69 + #define to_clkgena_divmux(_hw) container_of(_hw, struct clkgena_divmux, hw)
70 +
71 + struct clkgena_divmux_data {
72 +	int num_outputs;
73 +	int mux_offset;
74 +	int mux_offset2;
75 +	int mux_start_bit;
76 +	int div_offsets[NUM_INPUTS];
77 +	int fb_offsets[NUM_INPUTS];
78 +	int fb_start_bit_idx;
79 + };
80 +
81 + #define CKGAX_CLKOPSRC_SWITCH_OFF 0x3
82 +
83 + static int clkgena_divmux_is_running(struct clkgena_divmux *mux)
84 + {
85 +	u32 regval = readl(mux->feedback_reg[mux->muxsel]);
86 +	u32 running = regval & BIT(mux->feedback_bit_idx);
87 +	return !!running;
88 + }
89 +
90 + static int clkgena_divmux_enable(struct clk_hw *hw)
91 + {
92 +	struct clkgena_divmux *genamux = to_clkgena_divmux(hw);
93 +	struct clk_hw *mux_hw = &genamux->mux.hw;
94 +	unsigned long timeout;
95 +	int ret = 0;
96 +
97 +	mux_hw->clk = hw->clk;
98 +
99 +	ret = clk_mux_ops.set_parent(mux_hw, genamux->muxsel);
100 +	if (ret)
101 +		return ret;
102 +
103 +	timeout = jiffies + msecs_to_jiffies(10);
104 +
105 +	while (!clkgena_divmux_is_running(genamux)) {
106 +		if (time_after(jiffies, timeout))
107 +			return -ETIMEDOUT;
108 +		cpu_relax();
109 +	}
110 +
111 +	return 0;
112 + }
113 +
114 + static void clkgena_divmux_disable(struct clk_hw *hw)
115 + {
116 +	struct clkgena_divmux *genamux = to_clkgena_divmux(hw);
117 +	struct clk_hw *mux_hw = &genamux->mux.hw;
118 +
119 +	mux_hw->clk = hw->clk;
120 +
121 +	clk_mux_ops.set_parent(mux_hw, CKGAX_CLKOPSRC_SWITCH_OFF);
122 + }
123 +
124 + static int clkgena_divmux_is_enabled(struct clk_hw *hw)
125 + {
126 +	struct clkgena_divmux *genamux = to_clkgena_divmux(hw);
127 +	struct clk_hw *mux_hw = &genamux->mux.hw;
128 +
129 +	mux_hw->clk = hw->clk;
130 +
131 +	return (s8)clk_mux_ops.get_parent(mux_hw) > 0;
132 + }
133 +
134 + u8 clkgena_divmux_get_parent(struct clk_hw *hw)
135 + {
136 +	struct clkgena_divmux *genamux = to_clkgena_divmux(hw);
137 +	struct clk_hw *mux_hw = &genamux->mux.hw;
138 +
139 +	mux_hw->clk = hw->clk;
140 +
141 +	genamux->muxsel = clk_mux_ops.get_parent(mux_hw);
142 +	if ((s8)genamux->muxsel < 0) {
143 +		pr_debug("%s: %s: Invalid parent, setting to default.\n",
144 +			 __func__, __clk_get_name(hw->clk));
145 +		genamux->muxsel = 0;
146 +	}
147 +
148 +	return genamux->muxsel;
149 + }
150 +
151 + static int clkgena_divmux_set_parent(struct clk_hw *hw, u8 index)
152 + {
153 +	struct clkgena_divmux *genamux = to_clkgena_divmux(hw);
154 +
155 +	if (index >= CKGAX_CLKOPSRC_SWITCH_OFF)
156 +		return -EINVAL;
157 +
158 +	genamux->muxsel = index;
159 +
160 +	/*
161 +	 * If the mux is already enabled, call enable directly to set the
162 +	 * new mux position and wait for it to start running again. Otherwise
163 +	 * do nothing.
164 +	 */
165 +	if (clkgena_divmux_is_enabled(hw))
166 +		clkgena_divmux_enable(hw);
167 +
168 +	return 0;
169 + }
170 +
171 + unsigned long clkgena_divmux_recalc_rate(struct clk_hw *hw,
172 +		unsigned long parent_rate)
173 + {
174 +	struct clkgena_divmux *genamux = to_clkgena_divmux(hw);
175 +	struct clk_hw *div_hw = &genamux->div[genamux->muxsel].hw;
176 +
177 +	div_hw->clk = hw->clk;
178 +
179 +	return clk_divider_ops.recalc_rate(div_hw, parent_rate);
180 + }
181 +
182 + static int clkgena_divmux_set_rate(struct clk_hw *hw, unsigned long rate,
183 +		unsigned long parent_rate)
184 + {
185 +	struct clkgena_divmux *genamux = to_clkgena_divmux(hw);
186 +	struct clk_hw *div_hw = &genamux->div[genamux->muxsel].hw;
187 +
188 +	div_hw->clk = hw->clk;
189 +
190 +	return clk_divider_ops.set_rate(div_hw, rate, parent_rate);
191 + }
192 +
193 + static long clkgena_divmux_round_rate(struct clk_hw *hw, unsigned long rate,
194 +		unsigned long *prate)
195 + {
196 +	struct clkgena_divmux *genamux = to_clkgena_divmux(hw);
197 +	struct clk_hw *div_hw = &genamux->div[genamux->muxsel].hw;
198 +
199 +	div_hw->clk = hw->clk;
200 +
201 +	return clk_divider_ops.round_rate(div_hw, rate, prate);
202 + }
203 +
204 + static const struct clk_ops clkgena_divmux_ops = {
205 +	.enable = clkgena_divmux_enable,
206 +	.disable = clkgena_divmux_disable,
207 +	.is_enabled = clkgena_divmux_is_enabled,
208 +	.get_parent = clkgena_divmux_get_parent,
209 +	.set_parent = clkgena_divmux_set_parent,
210 +	.round_rate = clkgena_divmux_round_rate,
211 +	.recalc_rate = clkgena_divmux_recalc_rate,
212 +	.set_rate = clkgena_divmux_set_rate,
213 + };
214 +
215 + /**
216 +  * clk_register_genamux - register a genamux clock with the clock framework
217 +  */
218 + struct clk *clk_register_genamux(const char *name,
219 +		const char **parent_names, u8 num_parents,
220 +		void __iomem *reg,
221 +		const struct clkgena_divmux_data *muxdata,
222 +		u32 idx)
223 + {
224 +	/*
225 +	 * Fixed constants across all ClockgenA variants
226 +	 */
227 +	const int mux_width = 2;
228 +	const int divider_width = 5;
229 +	struct clkgena_divmux *genamux;
230 +	struct clk *clk;
231 +	struct clk_init_data init;
232 +	int i;
233 +
234 +	genamux = kzalloc(sizeof(*genamux), GFP_KERNEL);
235 +	if (!genamux)
236 +		return ERR_PTR(-ENOMEM);
237 +
238 +	init.name = name;
239 +	init.ops = &clkgena_divmux_ops;
240 +	init.flags = CLK_IS_BASIC;
241 +	init.parent_names = parent_names;
242 +	init.num_parents = num_parents;
243 +
244 +	genamux->mux.lock = &clkgena_divmux_lock;
245 +	genamux->mux.mask = BIT(mux_width) - 1;
246 +	genamux->mux.shift = muxdata->mux_start_bit + (idx * mux_width);
247 +	if (genamux->mux.shift > 31) {
248 +		/*
249 +		 * We have spilled into the second mux register so
250 +		 * adjust the register address and the bit shift accordingly
251 +		 */
252 +		genamux->mux.reg = reg + muxdata->mux_offset2;
253 +		genamux->mux.shift -= 32;
254 +	} else {
255 +		genamux->mux.reg = reg + muxdata->mux_offset;
256 +	}
257 +
258 +	for (i = 0; i < NUM_INPUTS; i++) {
259 +		/*
260 +		 * Divider config for each input
261 +		 */
262 +		void __iomem *divbase = reg + muxdata->div_offsets[i];
263 +		genamux->div[i].width = divider_width;
264 +		genamux->div[i].reg = divbase + (idx * sizeof(u32));
265 +
266 +		/*
267 +		 * Mux enabled/running feedback register for each input.
268 +		 */
269 +		genamux->feedback_reg[i] = reg + muxdata->fb_offsets[i];
270 +	}
271 +
272 +	genamux->feedback_bit_idx = muxdata->fb_start_bit_idx + idx;
273 +	genamux->hw.init = &init;
274 +
275 +	clk = clk_register(NULL, &genamux->hw);
276 +	if (IS_ERR(clk)) {
277 +		kfree(genamux);
278 +		goto err;
279 +	}
280 +
281 +	pr_debug("%s: parent %s rate %lu\n",
282 +		 __clk_get_name(clk),
283 +		 __clk_get_name(clk_get_parent(clk)),
284 +		 clk_get_rate(clk));
285 + err:
286 +	return clk;
287 + }
288 +
289 + static struct clkgena_divmux_data st_divmux_c65hs = {
290 +	.num_outputs = 4,
291 +	.mux_offset = 0x14,
292 +	.mux_start_bit = 0,
293 +	.div_offsets = { 0x800, 0x900, 0xb00 },
294 +	.fb_offsets = { 0x18, 0x1c, 0x20 },
295 +	.fb_start_bit_idx = 0,
296 + };
297 +
298 + static struct clkgena_divmux_data st_divmux_c65ls = {
299 +	.num_outputs = 14,
300 +	.mux_offset = 0x14,
301 +	.mux_offset2 = 0x24,
302 +	.mux_start_bit = 8,
303 +	.div_offsets = { 0x810, 0xa10, 0xb10 },
304 +	.fb_offsets = { 0x18, 0x1c, 0x20 },
305 +	.fb_start_bit_idx = 4,
306 + };
307 +
308 + static struct clkgena_divmux_data st_divmux_c32odf0 = {
309 +	.num_outputs = 8,
310 +	.mux_offset = 0x1c,
311 +	.mux_start_bit = 0,
312 +	.div_offsets = { 0x800, 0x900, 0xa60 },
313 +	.fb_offsets = { 0x2c, 0x24, 0x28 },
314 +	.fb_start_bit_idx = 0,
315 + };
316 +
317 + static struct clkgena_divmux_data st_divmux_c32odf1 = {
318 +	.num_outputs = 8,
319 +	.mux_offset = 0x1c,
320 +	.mux_start_bit = 16,
321 +	.div_offsets = { 0x820, 0x980, 0xa80 },
322 +	.fb_offsets = { 0x2c, 0x24, 0x28 },
323 +	.fb_start_bit_idx = 8,
324 + };
325 +
326 + static struct clkgena_divmux_data st_divmux_c32odf2 = {
327 +	.num_outputs = 8,
328 +	.mux_offset = 0x20,
329 +	.mux_start_bit = 0,
330 +	.div_offsets = { 0x840, 0xa20, 0xb10 },
331 +	.fb_offsets = { 0x2c, 0x24, 0x28 },
332 +	.fb_start_bit_idx = 16,
333 + };
334 +
335 + static struct clkgena_divmux_data st_divmux_c32odf3 = {
336 +	.num_outputs = 8,
337 +	.mux_offset = 0x20,
338 +	.mux_start_bit = 16,
339 +	.div_offsets = { 0x860, 0xa40, 0xb30 },
340 +	.fb_offsets = { 0x2c, 0x24, 0x28 },
341 +	.fb_start_bit_idx = 24,
342 + };
343 +
344 + static struct of_device_id clkgena_divmux_of_match[] = {
345 +	{
346 +		.compatible = "st,clkgena-divmux-c65-hs",
347 +		.data = &st_divmux_c65hs,
348 +	},
349 +	{
350 +		.compatible = "st,clkgena-divmux-c65-ls",
351 +		.data = &st_divmux_c65ls,
352 +	},
353 +	{
354 +		.compatible = "st,clkgena-divmux-c32-odf0",
355 +		.data = &st_divmux_c32odf0,
356 +	},
357 +	{
358 +		.compatible = "st,clkgena-divmux-c32-odf1",
359 +		.data = &st_divmux_c32odf1,
360 +	},
361 +	{
362 +		.compatible = "st,clkgena-divmux-c32-odf2",
363 +		.data = &st_divmux_c32odf2,
364 +	},
365 +	{
366 +		.compatible = "st,clkgena-divmux-c32-odf3",
367 +		.data = &st_divmux_c32odf3,
368 +	},
369 +	{}
370 + };
371 +
372 + static void __iomem * __init clkgen_get_register_base(
373 +		struct device_node *np)
374 + {
375 +	struct device_node *pnode;
376 +	void __iomem *reg = NULL;
377 +
378 +	pnode = of_get_parent(np);
379 +	if (!pnode)
380 +		return NULL;
381 +
382 +	reg = of_iomap(pnode, 0);
383 +
384 +	of_node_put(pnode);
385 +	return reg;
386 + }
387 +
388 + void __init st_of_clkgena_divmux_setup(struct device_node *np)
389 + {
390 +	const struct of_device_id *match;
391 +	const struct clkgena_divmux_data *data;
392 +	struct clk_onecell_data *clk_data;
393 +	void __iomem *reg;
394 +	const char **parents;
395 +	int num_parents = 0, i;
396 +
397 +	match = of_match_node(clkgena_divmux_of_match, np);
398 +	if (WARN_ON(!match))
399 +		return;
400 +
401 +	data = (struct clkgena_divmux_data *)match->data;
402 +
403 +	reg = clkgen_get_register_base(np);
404 +	if (!reg)
405 +		return;
406 +
407 +	parents = clkgen_mux_get_parents(np, &num_parents);
408 +	if (IS_ERR(parents))
409 +		return;
410 +
411 +	clk_data = kzalloc(sizeof(*clk_data), GFP_KERNEL);
412 +	if (!clk_data)
413 +		goto err;
414 +
415 +	clk_data->clk_num = data->num_outputs;
416 +	clk_data->clks = kzalloc(clk_data->clk_num * sizeof(struct clk *),
417 +				 GFP_KERNEL);
418 +
419 +	if (!clk_data->clks)
420 +		goto err;
421 +
422 +	for (i = 0; i < clk_data->clk_num; i++) {
423 +		struct clk *clk;
424 +		const char *clk_name;
425 +
426 +		if (of_property_read_string_index(np, "clock-output-names",
427 +						  i, &clk_name))
428 +			break;
429 +
430 +		/*
431 +		 * If we read an empty clock name then the output is unused
432 +		 */
433 +		if (*clk_name == '\0')
434 +			continue;
435 +
436 +		clk = clk_register_genamux(clk_name, parents, num_parents,
437 +					   reg, data, i);
438 +
439 +		if (IS_ERR(clk))
440 +			goto err;
441 +
442 +		clk_data->clks[i] = clk;
443 +	}
444 +
445 +	kfree(parents);
446 +
447 +	of_clk_add_provider(np, of_clk_src_onecell_get, clk_data);
448 +	return;
449 + err:
450 +	if (clk_data)
451 +		kfree(clk_data->clks);
452 +
453 +	kfree(clk_data);
454 +	kfree(parents);
455 + }
456 + CLK_OF_DECLARE(clkgenadivmux, "st,clkgena-divmux", st_of_clkgena_divmux_setup);
457 +
458 + struct clkgena_prediv_data {
459 +	u32 offset;
460 +	u8 shift;
461 +	struct clk_div_table *table;
462 + };
463 +
464 + static struct clk_div_table prediv_table16[] = {
465 +	{ .val = 0, .div = 1 },
466 +	{ .val = 1, .div = 16 },
467 +	{ .div = 0 },
468 + };
469 +
470 + static struct clkgena_prediv_data prediv_c65_data = {
471 +	.offset = 0x4c,
472 +	.shift = 31,
473 +	.table = prediv_table16,
474 + };
475 +
476 + static struct clkgena_prediv_data prediv_c32_data = {
477 +	.offset = 0x50,
478 +	.shift = 1,
479 +	.table = prediv_table16,
480 + };
481 +
482 + static struct of_device_id clkgena_prediv_of_match[] = {
483 +	{ .compatible = "st,clkgena-prediv-c65", .data = &prediv_c65_data },
484 +	{ .compatible = "st,clkgena-prediv-c32", .data = &prediv_c32_data },
485 +	{}
486 + };
487 +
488 + void __init st_of_clkgena_prediv_setup(struct device_node *np)
489 + {
490 +	const struct of_device_id *match;
491 +	void __iomem *reg;
492 +	const char *parent_name, *clk_name;
493 +	struct clk *clk;
494 +	struct clkgena_prediv_data *data;
495 +
496 +	match = of_match_node(clkgena_prediv_of_match, np);
497 +	if (!match) {
498 +		pr_err("%s: No matching data\n", __func__);
499 +		return;
500 +	}
501 +
502 +	data = (struct clkgena_prediv_data *)match->data;
503 +
504 +	reg = clkgen_get_register_base(np);
505 +	if (!reg)
506 +		return;
507 +
508 +	parent_name = of_clk_get_parent_name(np, 0);
509 +	if (!parent_name)
510 +		return;
511 +
512 +	if (of_property_read_string_index(np, "clock-output-names",
513 +					  0, &clk_name))
514 +		return;
515 +
516 +	clk = clk_register_divider_table(NULL, clk_name, parent_name, 0,
517 +					 reg + data->offset, data->shift, 1,
518 +					 0, data->table, NULL);
519 +	if (IS_ERR(clk))
520 +		return;
521 +
522 +	of_clk_add_provider(np, of_clk_src_simple_get, clk);
523 +	pr_debug("%s: parent %s rate %u\n",
524 +		 __clk_get_name(clk),
525 +		 __clk_get_name(clk_get_parent(clk)),
526 +		 (unsigned int)clk_get_rate(clk));
527 +
528 +	return;
529 + }
530 + CLK_OF_DECLARE(clkgenaprediv, "st,clkgena-prediv", st_of_clkgena_prediv_setup);
531 +
532 + struct clkgen_mux_data {
533 +	u32 offset;
534 +	u8 shift;
535 +	u8 width;
536 +	spinlock_t *lock;
537 +	unsigned long clk_flags;
538 +	u8 mux_flags;
539 + };
540 +
541 + static struct clkgen_mux_data clkgen_mux_c_vcc_hd_416 = {
542 +	.offset = 0,
543 +	.shift = 0,
544 +	.width = 1,
545 + };
546 +
547 + static struct clkgen_mux_data clkgen_mux_f_vcc_fvdp_416 = {
548 +	.offset = 0,
549 +	.shift = 0,
550 +	.width = 1,
551 + };
552 +
553 + static struct clkgen_mux_data clkgen_mux_f_vcc_hva_416 = {
554 +	.offset = 0,
555 +	.shift = 0,
556 +	.width = 1,
557 + };
558 +
559 + static struct clkgen_mux_data clkgen_mux_f_vcc_hd_416 = {
560 +	.offset = 0,
561 +	.shift = 16,
562 +	.width = 1,
563 +	.lock = &clkgenf_lock,
564 + };
565 +
566 + static struct clkgen_mux_data clkgen_mux_c_vcc_sd_416 = {
567 +	.offset = 0,
568 +	.shift = 17,
569 +	.width = 1,
570 +	.lock = &clkgenf_lock,
571 + };
572 +
573 + static struct clkgen_mux_data stih415_a9_mux_data = {
574 +	.offset = 0,
575 +	.shift = 1,
576 +	.width = 2,
577 + };
578 + static struct clkgen_mux_data stih416_a9_mux_data = {
579 +	.offset = 0,
580 +	.shift = 0,
581 +	.width = 2,
582 + };
583 +
584 + static struct of_device_id mux_of_match[] = {
585 +	{
586 +		.compatible = "st,stih416-clkgenc-vcc-hd",
587 +		.data = &clkgen_mux_c_vcc_hd_416,
588 +	},
589 +	{
590 +		.compatible = "st,stih416-clkgenf-vcc-fvdp",
591 +		.data = &clkgen_mux_f_vcc_fvdp_416,
592 +	},
593 +	{
594 +		.compatible = "st,stih416-clkgenf-vcc-hva",
595 +		.data = &clkgen_mux_f_vcc_hva_416,
596 +	},
597 +	{
598 +		.compatible = "st,stih416-clkgenf-vcc-hd",
599 +		.data = &clkgen_mux_f_vcc_hd_416,
600 +	},
601 +	{
602 +		.compatible = "st,stih416-clkgenf-vcc-sd",
603 +		.data = &clkgen_mux_c_vcc_sd_416,
604 +	},
605 +	{
606 +		.compatible = "st,stih415-clkgen-a9-mux",
607 +		.data = &stih415_a9_mux_data,
608 +	},
609 +	{
610 +		.compatible = "st,stih416-clkgen-a9-mux",
611 +		.data = &stih416_a9_mux_data,
612 +	},
613 +	{}
614 + };
615 +
616 + void __init st_of_clkgen_mux_setup(struct device_node *np)
617 + {
618 +	const struct of_device_id *match;
619 +	struct clk *clk;
620 +	void __iomem *reg;
621 +	const char **parents;
622 +	int num_parents;
623 +	struct clkgen_mux_data *data;
624 +
625 +	match = of_match_node(mux_of_match, np);
626 +	if (!match) {
627 +		pr_err("%s: No matching data\n", __func__);
628 +		return;
629 +	}
630 +
631 +	data = (struct clkgen_mux_data *)match->data;
632 +
633 +	reg = of_iomap(np, 0);
634 +	if (!reg) {
635 +		pr_err("%s: Failed to get base address\n", __func__);
636 +		return;
637 +	}
638 +
639 +	parents = clkgen_mux_get_parents(np, &num_parents);
640 +	if (IS_ERR(parents)) {
641 +		pr_err("%s: Failed to get parents (%ld)\n",
642 +		       __func__, PTR_ERR(parents));
643 +		return;
644 +	}
645 +
646 +	clk = clk_register_mux(NULL, np->name, parents, num_parents,
647 +			       data->clk_flags | CLK_SET_RATE_PARENT,
648 +			       reg + data->offset,
649 +			       data->shift, data->width,
data->mux_flags, 650 + data->lock); 651 + if (IS_ERR(clk)) 652 + goto err; 653 + 654 + pr_debug("%s: parent %s rate %u\n", 655 + __clk_get_name(clk), 656 + __clk_get_name(clk_get_parent(clk)), 657 + (unsigned int)clk_get_rate(clk)); 658 + 659 + of_clk_add_provider(np, of_clk_src_simple_get, clk); 660 + 661 + err: 662 + kfree(parents); 663 + 664 + return; 665 + } 666 + CLK_OF_DECLARE(clkgen_mux, "st,clkgen-mux", st_of_clkgen_mux_setup); 667 + 668 + #define VCC_MAX_CHANNELS 16 669 + 670 + #define VCC_GATE_OFFSET 0x0 671 + #define VCC_MUX_OFFSET 0x4 672 + #define VCC_DIV_OFFSET 0x8 673 + 674 + struct clkgen_vcc_data { 675 + spinlock_t *lock; 676 + unsigned long clk_flags; 677 + }; 678 + 679 + static struct clkgen_vcc_data st_clkgenc_vcc_416 = { 680 + .clk_flags = CLK_SET_RATE_PARENT, 681 + }; 682 + 683 + static struct clkgen_vcc_data st_clkgenf_vcc_416 = { 684 + .lock = &clkgenf_lock, 685 + }; 686 + 687 + static struct of_device_id vcc_of_match[] = { 688 + { .compatible = "st,stih416-clkgenc", .data = &st_clkgenc_vcc_416 }, 689 + { .compatible = "st,stih416-clkgenf", .data = &st_clkgenf_vcc_416 }, 690 + {} 691 + }; 692 + 693 + void __init st_of_clkgen_vcc_setup(struct device_node *np) 694 + { 695 + const struct of_device_id *match; 696 + void __iomem *reg; 697 + const char **parents; 698 + int num_parents, i; 699 + struct clk_onecell_data *clk_data; 700 + struct clkgen_vcc_data *data; 701 + 702 + match = of_match_node(vcc_of_match, np); 703 + if (WARN_ON(!match)) 704 + return; 705 + data = (struct clkgen_vcc_data *)match->data; 706 + 707 + reg = of_iomap(np, 0); 708 + if (!reg) 709 + return; 710 + 711 + parents = clkgen_mux_get_parents(np, &num_parents); 712 + if (IS_ERR(parents)) 713 + return; 714 + 715 + clk_data = kzalloc(sizeof(*clk_data), GFP_KERNEL); 716 + if (!clk_data) 717 + goto err; 718 + 719 + clk_data->clk_num = VCC_MAX_CHANNELS; 720 + clk_data->clks = kzalloc(clk_data->clk_num * sizeof(struct clk *), 721 + GFP_KERNEL); 722 + 723 + if (!clk_data->clks) 724 
+ goto err; 725 + 726 + for (i = 0; i < clk_data->clk_num; i++) { 727 + struct clk *clk; 728 + const char *clk_name; 729 + struct clk_gate *gate; 730 + struct clk_divider *div; 731 + struct clk_mux *mux; 732 + 733 + if (of_property_read_string_index(np, "clock-output-names", 734 + i, &clk_name)) 735 + break; 736 + 737 + /* 738 + * If we read an empty clock name then the output is unused 739 + */ 740 + if (*clk_name == '\0') 741 + continue; 742 + 743 + gate = kzalloc(sizeof(struct clk_gate), GFP_KERNEL); 744 + if (!gate) 745 + break; 746 + 747 + div = kzalloc(sizeof(struct clk_divider), GFP_KERNEL); 748 + if (!div) { 749 + kfree(gate); 750 + break; 751 + } 752 + 753 + mux = kzalloc(sizeof(struct clk_mux), GFP_KERNEL); 754 + if (!mux) { 755 + kfree(gate); 756 + kfree(div); 757 + break; 758 + } 759 + 760 + gate->reg = reg + VCC_GATE_OFFSET; 761 + gate->bit_idx = i; 762 + gate->flags = CLK_GATE_SET_TO_DISABLE; 763 + gate->lock = data->lock; 764 + 765 + div->reg = reg + VCC_DIV_OFFSET; 766 + div->shift = 2 * i; 767 + div->width = 2; 768 + div->flags = CLK_DIVIDER_POWER_OF_TWO; 769 + 770 + mux->reg = reg + VCC_MUX_OFFSET; 771 + mux->shift = 2 * i; 772 + mux->mask = 0x3; 773 + 774 + clk = clk_register_composite(NULL, clk_name, parents, 775 + num_parents, 776 + &mux->hw, &clk_mux_ops, 777 + &div->hw, &clk_divider_ops, 778 + &gate->hw, &clk_gate_ops, 779 + data->clk_flags); 780 + if (IS_ERR(clk)) { 781 + kfree(gate); 782 + kfree(div); 783 + kfree(mux); 784 + goto err; 785 + } 786 + 787 + pr_debug("%s: parent %s rate %u\n", 788 + __clk_get_name(clk), 789 + __clk_get_name(clk_get_parent(clk)), 790 + (unsigned int)clk_get_rate(clk)); 791 + 792 + clk_data->clks[i] = clk; 793 + } 794 + 795 + kfree(parents); 796 + 797 + of_clk_add_provider(np, of_clk_src_onecell_get, clk_data); 798 + return; 799 + 800 + err: 801 + for (i = 0; clk_data && clk_data->clks && i < clk_data->clk_num; i++) { 802 + struct clk_composite *composite; 803 + 804 + if (!clk_data->clks[i]) 805 + continue; 806 + 807 + composite = 
container_of(__clk_get_hw(clk_data->clks[i]), 808 + struct clk_composite, hw); 809 + kfree(container_of(composite->gate_hw, struct clk_gate, hw)); 810 + kfree(container_of(composite->rate_hw, struct clk_divider, hw)); 811 + kfree(container_of(composite->mux_hw, struct clk_mux, hw)); 812 + } 813 + 814 + if (clk_data) 815 + kfree(clk_data->clks); 816 + 817 + kfree(clk_data); 818 + kfree(parents); 819 + } 820 + CLK_OF_DECLARE(clkgen_vcc, "st,clkgen-vcc", st_of_clkgen_vcc_setup);
+698
drivers/clk/st/clkgen-pll.c
··· 1 + /* 2 + * Copyright (C) 2014 STMicroelectronics (R&D) Limited 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License as published by 6 + * the Free Software Foundation; either version 2 of the License, or 7 + * (at your option) any later version. 8 + * 9 + */ 10 + 11 + /* 12 + * Authors: 13 + * Stephen Gallimore <stephen.gallimore@st.com>, 14 + * Pankaj Dev <pankaj.dev@st.com>. 15 + */ 16 + 17 + #include <linux/slab.h> 18 + #include <linux/of_address.h> 19 + #include <linux/clk-provider.h> 20 + 21 + #include "clkgen.h" 22 + 23 + static DEFINE_SPINLOCK(clkgena_c32_odf_lock); 24 + 25 + /* 26 + * Common PLL configuration register bits for PLL800 and PLL1600 C65 27 + */ 28 + #define C65_MDIV_PLL800_MASK (0xff) 29 + #define C65_MDIV_PLL1600_MASK (0x7) 30 + #define C65_NDIV_MASK (0xff) 31 + #define C65_PDIV_MASK (0x7) 32 + 33 + /* 34 + * PLL configuration register bits for PLL3200 C32 35 + */ 36 + #define C32_NDIV_MASK (0xff) 37 + #define C32_IDF_MASK (0x7) 38 + #define C32_ODF_MASK (0x3f) 39 + #define C32_LDF_MASK (0x7f) 40 + 41 + #define C32_MAX_ODFS (4) 42 + 43 + struct clkgen_pll_data { 44 + struct clkgen_field pdn_status; 45 + struct clkgen_field locked_status; 46 + struct clkgen_field mdiv; 47 + struct clkgen_field ndiv; 48 + struct clkgen_field pdiv; 49 + struct clkgen_field idf; 50 + struct clkgen_field ldf; 51 + unsigned int num_odfs; 52 + struct clkgen_field odf[C32_MAX_ODFS]; 53 + struct clkgen_field odf_gate[C32_MAX_ODFS]; 54 + const struct clk_ops *ops; 55 + }; 56 + 57 + static const struct clk_ops st_pll1600c65_ops; 58 + static const struct clk_ops st_pll800c65_ops; 59 + static const struct clk_ops stm_pll3200c32_ops; 60 + static const struct clk_ops st_pll1200c32_ops; 61 + 62 + static struct clkgen_pll_data st_pll1600c65_ax = { 63 + .pdn_status = CLKGEN_FIELD(0x0, 0x1, 19), 64 + .locked_status = CLKGEN_FIELD(0x0, 0x1, 31), 65 + .mdiv = CLKGEN_FIELD(0x0, 
C65_MDIV_PLL1600_MASK, 0), 66 + .ndiv = CLKGEN_FIELD(0x0, C65_NDIV_MASK, 8), 67 + .ops = &st_pll1600c65_ops 68 + }; 69 + 70 + static struct clkgen_pll_data st_pll800c65_ax = { 71 + .pdn_status = CLKGEN_FIELD(0x0, 0x1, 19), 72 + .locked_status = CLKGEN_FIELD(0x0, 0x1, 31), 73 + .mdiv = CLKGEN_FIELD(0x0, C65_MDIV_PLL800_MASK, 0), 74 + .ndiv = CLKGEN_FIELD(0x0, C65_NDIV_MASK, 8), 75 + .pdiv = CLKGEN_FIELD(0x0, C65_PDIV_MASK, 16), 76 + .ops = &st_pll800c65_ops 77 + }; 78 + 79 + static struct clkgen_pll_data st_pll3200c32_a1x_0 = { 80 + .pdn_status = CLKGEN_FIELD(0x0, 0x1, 31), 81 + .locked_status = CLKGEN_FIELD(0x4, 0x1, 31), 82 + .ndiv = CLKGEN_FIELD(0x0, C32_NDIV_MASK, 0x0), 83 + .idf = CLKGEN_FIELD(0x4, C32_IDF_MASK, 0x0), 84 + .num_odfs = 4, 85 + .odf = { CLKGEN_FIELD(0x54, C32_ODF_MASK, 4), 86 + CLKGEN_FIELD(0x54, C32_ODF_MASK, 10), 87 + CLKGEN_FIELD(0x54, C32_ODF_MASK, 16), 88 + CLKGEN_FIELD(0x54, C32_ODF_MASK, 22) }, 89 + .odf_gate = { CLKGEN_FIELD(0x54, 0x1, 0), 90 + CLKGEN_FIELD(0x54, 0x1, 1), 91 + CLKGEN_FIELD(0x54, 0x1, 2), 92 + CLKGEN_FIELD(0x54, 0x1, 3) }, 93 + .ops = &stm_pll3200c32_ops, 94 + }; 95 + 96 + static struct clkgen_pll_data st_pll3200c32_a1x_1 = { 97 + .pdn_status = CLKGEN_FIELD(0xC, 0x1, 31), 98 + .locked_status = CLKGEN_FIELD(0x10, 0x1, 31), 99 + .ndiv = CLKGEN_FIELD(0xC, C32_NDIV_MASK, 0x0), 100 + .idf = CLKGEN_FIELD(0x10, C32_IDF_MASK, 0x0), 101 + .num_odfs = 4, 102 + .odf = { CLKGEN_FIELD(0x58, C32_ODF_MASK, 4), 103 + CLKGEN_FIELD(0x58, C32_ODF_MASK, 10), 104 + CLKGEN_FIELD(0x58, C32_ODF_MASK, 16), 105 + CLKGEN_FIELD(0x58, C32_ODF_MASK, 22) }, 106 + .odf_gate = { CLKGEN_FIELD(0x58, 0x1, 0), 107 + CLKGEN_FIELD(0x58, 0x1, 1), 108 + CLKGEN_FIELD(0x58, 0x1, 2), 109 + CLKGEN_FIELD(0x58, 0x1, 3) }, 110 + .ops = &stm_pll3200c32_ops, 111 + }; 112 + 113 + /* 415 specific */ 114 + static struct clkgen_pll_data st_pll3200c32_a9_415 = { 115 + .pdn_status = CLKGEN_FIELD(0x0, 0x1, 0), 116 + .locked_status = CLKGEN_FIELD(0x6C, 0x1, 0), 117 + .ndiv = 
CLKGEN_FIELD(0x0, C32_NDIV_MASK, 9), 118 + .idf = CLKGEN_FIELD(0x0, C32_IDF_MASK, 22), 119 + .num_odfs = 1, 120 + .odf = { CLKGEN_FIELD(0x0, C32_ODF_MASK, 3) }, 121 + .odf_gate = { CLKGEN_FIELD(0x0, 0x1, 28) }, 122 + .ops = &stm_pll3200c32_ops, 123 + }; 124 + 125 + static struct clkgen_pll_data st_pll3200c32_ddr_415 = { 126 + .pdn_status = CLKGEN_FIELD(0x0, 0x1, 0), 127 + .locked_status = CLKGEN_FIELD(0x100, 0x1, 0), 128 + .ndiv = CLKGEN_FIELD(0x8, C32_NDIV_MASK, 0), 129 + .idf = CLKGEN_FIELD(0x0, C32_IDF_MASK, 25), 130 + .num_odfs = 2, 131 + .odf = { CLKGEN_FIELD(0x8, C32_ODF_MASK, 8), 132 + CLKGEN_FIELD(0x8, C32_ODF_MASK, 14) }, 133 + .odf_gate = { CLKGEN_FIELD(0x4, 0x1, 28), 134 + CLKGEN_FIELD(0x4, 0x1, 29) }, 135 + .ops = &stm_pll3200c32_ops, 136 + }; 137 + 138 + static struct clkgen_pll_data st_pll1200c32_gpu_415 = { 139 + .pdn_status = CLKGEN_FIELD(0x144, 0x1, 3), 140 + .locked_status = CLKGEN_FIELD(0x168, 0x1, 0), 141 + .ldf = CLKGEN_FIELD(0x0, C32_LDF_MASK, 3), 142 + .idf = CLKGEN_FIELD(0x0, C32_IDF_MASK, 0), 143 + .num_odfs = 0, 144 + .odf = { CLKGEN_FIELD(0x0, C32_ODF_MASK, 10) }, 145 + .ops = &st_pll1200c32_ops, 146 + }; 147 + 148 + /* 416 specific */ 149 + static struct clkgen_pll_data st_pll3200c32_a9_416 = { 150 + .pdn_status = CLKGEN_FIELD(0x0, 0x1, 0), 151 + .locked_status = CLKGEN_FIELD(0x6C, 0x1, 0), 152 + .ndiv = CLKGEN_FIELD(0x8, C32_NDIV_MASK, 0), 153 + .idf = CLKGEN_FIELD(0x0, C32_IDF_MASK, 25), 154 + .num_odfs = 1, 155 + .odf = { CLKGEN_FIELD(0x8, C32_ODF_MASK, 8) }, 156 + .odf_gate = { CLKGEN_FIELD(0x4, 0x1, 28) }, 157 + .ops = &stm_pll3200c32_ops, 158 + }; 159 + 160 + static struct clkgen_pll_data st_pll3200c32_ddr_416 = { 161 + .pdn_status = CLKGEN_FIELD(0x0, 0x1, 0), 162 + .locked_status = CLKGEN_FIELD(0x10C, 0x1, 0), 163 + .ndiv = CLKGEN_FIELD(0x8, C32_NDIV_MASK, 0), 164 + .idf = CLKGEN_FIELD(0x0, C32_IDF_MASK, 25), 165 + .num_odfs = 2, 166 + .odf = { CLKGEN_FIELD(0x8, C32_ODF_MASK, 8), 167 + CLKGEN_FIELD(0x8, C32_ODF_MASK, 14) }, 168 + 
.odf_gate = { CLKGEN_FIELD(0x4, 0x1, 28), 169 + CLKGEN_FIELD(0x4, 0x1, 29) }, 170 + .ops = &stm_pll3200c32_ops, 171 + }; 172 + 173 + static struct clkgen_pll_data st_pll1200c32_gpu_416 = { 174 + .pdn_status = CLKGEN_FIELD(0x8E4, 0x1, 3), 175 + .locked_status = CLKGEN_FIELD(0x90C, 0x1, 0), 176 + .ldf = CLKGEN_FIELD(0x0, C32_LDF_MASK, 3), 177 + .idf = CLKGEN_FIELD(0x0, C32_IDF_MASK, 0), 178 + .num_odfs = 0, 179 + .odf = { CLKGEN_FIELD(0x0, C32_ODF_MASK, 10) }, 180 + .ops = &st_pll1200c32_ops, 181 + }; 182 + 183 + /** 184 + * DOC: Clock Generated by PLL, rate set and enabled by bootloader 185 + * 186 + * Traits of this clock: 187 + * prepare - clk_(un)prepare only ensures parent is (un)prepared 188 + * enable - clk_enable/disable only ensures parent is enabled 189 + * rate - rate is fixed. No clk_set_rate support 190 + * parent - fixed parent. No clk_set_parent support 191 + */ 192 + 193 + /** 194 + * PLL clock that is integrated in the ClockGenA instances on the STiH415 195 + * and STiH416. 196 + * 197 + * @hw: handle between common and hardware-specific interfaces. 198 + * @data: PLL configuration data. 199 + * @regs_base: base of the PLL configuration register(s). 
200 + * 201 + */ 202 + struct clkgen_pll { 203 + struct clk_hw hw; 204 + struct clkgen_pll_data *data; 205 + void __iomem *regs_base; 206 + }; 207 + 208 + #define to_clkgen_pll(_hw) container_of(_hw, struct clkgen_pll, hw) 209 + 210 + static int clkgen_pll_is_locked(struct clk_hw *hw) 211 + { 212 + struct clkgen_pll *pll = to_clkgen_pll(hw); 213 + u32 locked = CLKGEN_READ(pll, locked_status); 214 + 215 + return !!locked; 216 + } 217 + 218 + static int clkgen_pll_is_enabled(struct clk_hw *hw) 219 + { 220 + struct clkgen_pll *pll = to_clkgen_pll(hw); 221 + u32 poweroff = CLKGEN_READ(pll, pdn_status); 222 + return !poweroff; 223 + } 224 + 225 + unsigned long recalc_stm_pll800c65(struct clk_hw *hw, 226 + unsigned long parent_rate) 227 + { 228 + struct clkgen_pll *pll = to_clkgen_pll(hw); 229 + unsigned long mdiv, ndiv, pdiv; 230 + unsigned long rate; 231 + uint64_t res; 232 + 233 + if (!clkgen_pll_is_enabled(hw) || !clkgen_pll_is_locked(hw)) 234 + return 0; 235 + 236 + pdiv = CLKGEN_READ(pll, pdiv); 237 + mdiv = CLKGEN_READ(pll, mdiv); 238 + ndiv = CLKGEN_READ(pll, ndiv); 239 + 240 + if (!mdiv) 241 + mdiv++; /* mdiv=0 or 1 => MDIV=1 */ 242 + 243 + res = (uint64_t)2 * (uint64_t)parent_rate * (uint64_t)ndiv; 244 + rate = (unsigned long)div64_u64(res, mdiv * (1 << pdiv)); 245 + 246 + pr_debug("%s:%s rate %lu\n", __clk_get_name(hw->clk), __func__, rate); 247 + 248 + return rate; 249 + 250 + } 251 + 252 + unsigned long recalc_stm_pll1600c65(struct clk_hw *hw, 253 + unsigned long parent_rate) 254 + { 255 + struct clkgen_pll *pll = to_clkgen_pll(hw); 256 + unsigned long mdiv, ndiv; 257 + unsigned long rate; 258 + 259 + if (!clkgen_pll_is_enabled(hw) || !clkgen_pll_is_locked(hw)) 260 + return 0; 261 + 262 + mdiv = CLKGEN_READ(pll, mdiv); 263 + ndiv = CLKGEN_READ(pll, ndiv); 264 + 265 + if (!mdiv) 266 + mdiv = 1; 267 + 268 + /* Note: input is divided by 1000 to avoid overflow */ 269 + rate = ((2 * (parent_rate / 1000) * ndiv) / mdiv) * 1000; 270 + 271 + pr_debug("%s:%s rate 
%lu\n", __clk_get_name(hw->clk), __func__, rate); 272 + 273 + return rate; 274 + } 275 + 276 + unsigned long recalc_stm_pll3200c32(struct clk_hw *hw, 277 + unsigned long parent_rate) 278 + { 279 + struct clkgen_pll *pll = to_clkgen_pll(hw); 280 + unsigned long ndiv, idf; 281 + unsigned long rate = 0; 282 + 283 + if (!clkgen_pll_is_enabled(hw) || !clkgen_pll_is_locked(hw)) 284 + return 0; 285 + 286 + ndiv = CLKGEN_READ(pll, ndiv); 287 + idf = CLKGEN_READ(pll, idf); 288 + 289 + if (idf) 290 + /* Note: input is divided to avoid overflow */ 291 + rate = ((2 * (parent_rate/1000) * ndiv) / idf) * 1000; 292 + 293 + pr_debug("%s:%s rate %lu\n", __clk_get_name(hw->clk), __func__, rate); 294 + 295 + return rate; 296 + } 297 + 298 + unsigned long recalc_stm_pll1200c32(struct clk_hw *hw, 299 + unsigned long parent_rate) 300 + { 301 + struct clkgen_pll *pll = to_clkgen_pll(hw); 302 + unsigned long odf, ldf, idf; 303 + unsigned long rate; 304 + 305 + if (!clkgen_pll_is_enabled(hw) || !clkgen_pll_is_locked(hw)) 306 + return 0; 307 + 308 + odf = CLKGEN_READ(pll, odf[0]); 309 + ldf = CLKGEN_READ(pll, ldf); 310 + idf = CLKGEN_READ(pll, idf); 311 + 312 + if (!idf) /* idf==0 means 1 */ 313 + idf = 1; 314 + if (!odf) /* odf==0 means 1 */ 315 + odf = 1; 316 + 317 + /* Note: input is divided by 1000 to avoid overflow */ 318 + rate = (((parent_rate / 1000) * ldf) / (odf * idf)) * 1000; 319 + 320 + pr_debug("%s:%s rate %lu\n", __clk_get_name(hw->clk), __func__, rate); 321 + 322 + return rate; 323 + } 324 + 325 + static const struct clk_ops st_pll1600c65_ops = { 326 + .is_enabled = clkgen_pll_is_enabled, 327 + .recalc_rate = recalc_stm_pll1600c65, 328 + }; 329 + 330 + static const struct clk_ops st_pll800c65_ops = { 331 + .is_enabled = clkgen_pll_is_enabled, 332 + .recalc_rate = recalc_stm_pll800c65, 333 + }; 334 + 335 + static const struct clk_ops stm_pll3200c32_ops = { 336 + .is_enabled = clkgen_pll_is_enabled, 337 + .recalc_rate = recalc_stm_pll3200c32, 338 + }; 339 + 340 + static const 
struct clk_ops st_pll1200c32_ops = { 341 + .is_enabled = clkgen_pll_is_enabled, 342 + .recalc_rate = recalc_stm_pll1200c32, 343 + }; 344 + 345 + static struct clk * __init clkgen_pll_register(const char *parent_name, 346 + struct clkgen_pll_data *pll_data, 347 + void __iomem *reg, 348 + const char *clk_name) 349 + { 350 + struct clkgen_pll *pll; 351 + struct clk *clk; 352 + struct clk_init_data init; 353 + 354 + pll = kzalloc(sizeof(*pll), GFP_KERNEL); 355 + if (!pll) 356 + return ERR_PTR(-ENOMEM); 357 + 358 + init.name = clk_name; 359 + init.ops = pll_data->ops; 360 + 361 + init.flags = CLK_IS_BASIC; 362 + init.parent_names = &parent_name; 363 + init.num_parents = 1; 364 + 365 + pll->data = pll_data; 366 + pll->regs_base = reg; 367 + pll->hw.init = &init; 368 + 369 + clk = clk_register(NULL, &pll->hw); 370 + if (IS_ERR(clk)) { 371 + kfree(pll); 372 + return clk; 373 + } 374 + 375 + pr_debug("%s: parent %s rate %lu\n", 376 + __clk_get_name(clk), 377 + __clk_get_name(clk_get_parent(clk)), 378 + clk_get_rate(clk)); 379 + 380 + return clk; 381 + } 382 + 383 + static struct clk * __init clkgen_c65_lsdiv_register(const char *parent_name, 384 + const char *clk_name) 385 + { 386 + struct clk *clk; 387 + 388 + clk = clk_register_fixed_factor(NULL, clk_name, parent_name, 0, 1, 2); 389 + if (IS_ERR(clk)) 390 + return clk; 391 + 392 + pr_debug("%s: parent %s rate %lu\n", 393 + __clk_get_name(clk), 394 + __clk_get_name(clk_get_parent(clk)), 395 + clk_get_rate(clk)); 396 + return clk; 397 + } 398 + 399 + static void __iomem * __init clkgen_get_register_base( 400 + struct device_node *np) 401 + { 402 + struct device_node *pnode; 403 + void __iomem *reg = NULL; 404 + 405 + pnode = of_get_parent(np); 406 + if (!pnode) 407 + return NULL; 408 + 409 + reg = of_iomap(pnode, 0); 410 + 411 + of_node_put(pnode); 412 + return reg; 413 + } 414 + 415 + #define CLKGENAx_PLL0_OFFSET 0x0 416 + #define CLKGENAx_PLL1_OFFSET 0x4 417 + 418 + static void __init clkgena_c65_pll_setup(struct 
device_node *np) 419 + { 420 + const int num_pll_outputs = 3; 421 + struct clk_onecell_data *clk_data; 422 + const char *parent_name; 423 + void __iomem *reg; 424 + const char *clk_name; 425 + 426 + parent_name = of_clk_get_parent_name(np, 0); 427 + if (!parent_name) 428 + return; 429 + 430 + reg = clkgen_get_register_base(np); 431 + if (!reg) 432 + return; 433 + 434 + clk_data = kzalloc(sizeof(*clk_data), GFP_KERNEL); 435 + if (!clk_data) 436 + return; 437 + 438 + clk_data->clk_num = num_pll_outputs; 439 + clk_data->clks = kzalloc(clk_data->clk_num * sizeof(struct clk *), 440 + GFP_KERNEL); 441 + 442 + if (!clk_data->clks) 443 + goto err; 444 + 445 + if (of_property_read_string_index(np, "clock-output-names", 446 + 0, &clk_name)) 447 + goto err; 448 + 449 + /* 450 + * PLL0 HS (high speed) output 451 + */ 452 + clk_data->clks[0] = clkgen_pll_register(parent_name, 453 + &st_pll1600c65_ax, 454 + reg + CLKGENAx_PLL0_OFFSET, 455 + clk_name); 456 + 457 + if (IS_ERR(clk_data->clks[0])) 458 + goto err; 459 + 460 + if (of_property_read_string_index(np, "clock-output-names", 461 + 1, &clk_name)) 462 + goto err; 463 + 464 + /* 465 + * PLL0 LS (low speed) output, which is a fixed divide by 2 of the 466 + * high speed output. 
467 + */ 468 + clk_data->clks[1] = clkgen_c65_lsdiv_register(__clk_get_name 469 + (clk_data->clks[0]), 470 + clk_name); 471 + 472 + if (IS_ERR(clk_data->clks[1])) 473 + goto err; 474 + 475 + if (of_property_read_string_index(np, "clock-output-names", 476 + 2, &clk_name)) 477 + goto err; 478 + 479 + /* 480 + * PLL1 output 481 + */ 482 + clk_data->clks[2] = clkgen_pll_register(parent_name, 483 + &st_pll800c65_ax, 484 + reg + CLKGENAx_PLL1_OFFSET, 485 + clk_name); 486 + 487 + if (IS_ERR(clk_data->clks[2])) 488 + goto err; 489 + 490 + of_clk_add_provider(np, of_clk_src_onecell_get, clk_data); 491 + return; 492 + 493 + err: 494 + kfree(clk_data->clks); 495 + kfree(clk_data); 496 + } 497 + CLK_OF_DECLARE(clkgena_c65_plls, 498 + "st,clkgena-plls-c65", clkgena_c65_pll_setup); 499 + 500 + static struct clk * __init clkgen_odf_register(const char *parent_name, 501 + void __iomem *reg, 502 + struct clkgen_pll_data *pll_data, 503 + int odf, 504 + spinlock_t *odf_lock, 505 + const char *odf_name) 506 + { 507 + struct clk *clk; 508 + unsigned long flags; 509 + struct clk_gate *gate; 510 + struct clk_divider *div; 511 + 512 + flags = CLK_GET_RATE_NOCACHE | CLK_SET_RATE_GATE; 513 + 514 + gate = kzalloc(sizeof(*gate), GFP_KERNEL); 515 + if (!gate) 516 + return ERR_PTR(-ENOMEM); 517 + 518 + gate->flags = CLK_GATE_SET_TO_DISABLE; 519 + gate->reg = reg + pll_data->odf_gate[odf].offset; 520 + gate->bit_idx = pll_data->odf_gate[odf].shift; 521 + gate->lock = odf_lock; 522 + 523 + div = kzalloc(sizeof(*div), GFP_KERNEL); 524 + if (!div) 525 + return ERR_PTR(-ENOMEM); 526 + 527 + div->flags = CLK_DIVIDER_ONE_BASED | CLK_DIVIDER_ALLOW_ZERO; 528 + div->reg = reg + pll_data->odf[odf].offset; 529 + div->shift = pll_data->odf[odf].shift; 530 + div->width = fls(pll_data->odf[odf].mask); 531 + div->lock = odf_lock; 532 + 533 + clk = clk_register_composite(NULL, odf_name, &parent_name, 1, 534 + NULL, NULL, 535 + &div->hw, &clk_divider_ops, 536 + &gate->hw, &clk_gate_ops, 537 + flags); 538 + if 
(IS_ERR(clk)) 539 + return clk; 540 + 541 + pr_debug("%s: parent %s rate %lu\n", 542 + __clk_get_name(clk), 543 + __clk_get_name(clk_get_parent(clk)), 544 + clk_get_rate(clk)); 545 + return clk; 546 + } 547 + 548 + static struct of_device_id c32_pll_of_match[] = { 549 + { 550 + .compatible = "st,plls-c32-a1x-0", 551 + .data = &st_pll3200c32_a1x_0, 552 + }, 553 + { 554 + .compatible = "st,plls-c32-a1x-1", 555 + .data = &st_pll3200c32_a1x_1, 556 + }, 557 + { 558 + .compatible = "st,stih415-plls-c32-a9", 559 + .data = &st_pll3200c32_a9_415, 560 + }, 561 + { 562 + .compatible = "st,stih415-plls-c32-ddr", 563 + .data = &st_pll3200c32_ddr_415, 564 + }, 565 + { 566 + .compatible = "st,stih416-plls-c32-a9", 567 + .data = &st_pll3200c32_a9_416, 568 + }, 569 + { 570 + .compatible = "st,stih416-plls-c32-ddr", 571 + .data = &st_pll3200c32_ddr_416, 572 + }, 573 + {} 574 + }; 575 + 576 + static void __init clkgen_c32_pll_setup(struct device_node *np) 577 + { 578 + const struct of_device_id *match; 579 + struct clk *clk; 580 + const char *parent_name, *pll_name; 581 + void __iomem *pll_base; 582 + int num_odfs, odf; 583 + struct clk_onecell_data *clk_data; 584 + struct clkgen_pll_data *data; 585 + 586 + match = of_match_node(c32_pll_of_match, np); 587 + if (!match) { 588 + pr_err("%s: No matching data\n", __func__); 589 + return; 590 + } 591 + 592 + data = (struct clkgen_pll_data *) match->data; 593 + 594 + parent_name = of_clk_get_parent_name(np, 0); 595 + if (!parent_name) 596 + return; 597 + 598 + pll_base = clkgen_get_register_base(np); 599 + if (!pll_base) 600 + return; 601 + 602 + clk = clkgen_pll_register(parent_name, data, pll_base, np->name); 603 + if (IS_ERR(clk)) 604 + return; 605 + 606 + pll_name = __clk_get_name(clk); 607 + 608 + num_odfs = data->num_odfs; 609 + 610 + clk_data = kzalloc(sizeof(*clk_data), GFP_KERNEL); 611 + if (!clk_data) 612 + return; 613 + 614 + clk_data->clk_num = num_odfs; 615 + clk_data->clks = kzalloc(clk_data->clk_num * sizeof(struct clk *), 
616 + GFP_KERNEL); 617 + 618 + if (!clk_data->clks) 619 + goto err; 620 + 621 + for (odf = 0; odf < num_odfs; odf++) { 622 + struct clk *clk; 623 + const char *clk_name; 624 + 625 + if (of_property_read_string_index(np, "clock-output-names", 626 + odf, &clk_name)) 627 + return; 628 + 629 + clk = clkgen_odf_register(pll_name, pll_base, data, 630 + odf, &clkgena_c32_odf_lock, clk_name); 631 + if (IS_ERR(clk)) 632 + goto err; 633 + 634 + clk_data->clks[odf] = clk; 635 + } 636 + 637 + of_clk_add_provider(np, of_clk_src_onecell_get, clk_data); 638 + return; 639 + 640 + err: 641 + kfree(pll_name); 642 + kfree(clk_data->clks); 643 + kfree(clk_data); 644 + } 645 + CLK_OF_DECLARE(clkgen_c32_pll, "st,clkgen-plls-c32", clkgen_c32_pll_setup); 646 + 647 + static struct of_device_id c32_gpu_pll_of_match[] = { 648 + { 649 + .compatible = "st,stih415-gpu-pll-c32", 650 + .data = &st_pll1200c32_gpu_415, 651 + }, 652 + { 653 + .compatible = "st,stih416-gpu-pll-c32", 654 + .data = &st_pll1200c32_gpu_416, 655 + }, 656 + }; 657 + 658 + static void __init clkgengpu_c32_pll_setup(struct device_node *np) 659 + { 660 + const struct of_device_id *match; 661 + struct clk *clk; 662 + const char *parent_name; 663 + void __iomem *reg; 664 + const char *clk_name; 665 + struct clkgen_pll_data *data; 666 + 667 + match = of_match_node(c32_gpu_pll_of_match, np); 668 + if (!match) { 669 + pr_err("%s: No matching data\n", __func__); 670 + return; 671 + } 672 + 673 + data = (struct clkgen_pll_data *)match->data; 674 + 675 + parent_name = of_clk_get_parent_name(np, 0); 676 + if (!parent_name) 677 + return; 678 + 679 + reg = clkgen_get_register_base(np); 680 + if (!reg) 681 + return; 682 + 683 + if (of_property_read_string_index(np, "clock-output-names", 684 + 0, &clk_name)) 685 + return; 686 + 687 + /* 688 + * PLL 1200MHz output 689 + */ 690 + clk = clkgen_pll_register(parent_name, data, reg, clk_name); 691 + 692 + if (!IS_ERR(clk)) 693 + of_clk_add_provider(np, of_clk_src_simple_get, clk); 694 + 695 + 
return; 696 + } 697 + CLK_OF_DECLARE(clkgengpu_c32_pll, 698 + "st,clkgengpu-pll-c32", clkgengpu_c32_pll_setup);
+48
drivers/clk/st/clkgen.h
··· 1 + /************************************************************************ 2 + File : Clock H/w specific Information 3 + 4 + Author: Pankaj Dev <pankaj.dev@st.com> 5 + 6 + Copyright (C) 2014 STMicroelectronics 7 + ************************************************************************/ 8 + 9 + #ifndef __CLKGEN_INFO_H 10 + #define __CLKGEN_INFO_H 11 + 12 + struct clkgen_field { 13 + unsigned int offset; 14 + unsigned int mask; 15 + unsigned int shift; 16 + }; 17 + 18 + static inline unsigned long clkgen_read(void __iomem *base, 19 + struct clkgen_field *field) 20 + { 21 + return (readl(base + field->offset) >> field->shift) & field->mask; 22 + } 23 + 24 + 25 + static inline void clkgen_write(void __iomem *base, struct clkgen_field *field, 26 + unsigned long val) 27 + { 28 + writel((readl(base + field->offset) & 29 + ~(field->mask << field->shift)) | (val << field->shift), 30 + base + field->offset); 31 + 32 + return; 33 + } 34 + 35 + #define CLKGEN_FIELD(_offset, _mask, _shift) { \ 36 + .offset = _offset, \ 37 + .mask = _mask, \ 38 + .shift = _shift, \ 39 + } 40 + 41 + #define CLKGEN_READ(pll, field) clkgen_read(pll->regs_base, \ 42 + &pll->data->field) 43 + 44 + #define CLKGEN_WRITE(pll, field, val) clkgen_write(pll->regs_base, \ 45 + &pll->data->field, val) 46 + 47 + #endif /*__CLKGEN_INFO_H*/ 48 +
+276 -29
drivers/clk/sunxi/clk-sunxi.c
··· 18 18 #include <linux/clkdev.h> 19 19 #include <linux/of.h> 20 20 #include <linux/of_address.h> 21 + #include <linux/reset-controller.h> 21 22 22 23 #include "clk-factors.h" 23 24 ··· 52 51 if (!gate) 53 52 goto err_free_fixed; 54 53 54 + of_property_read_string(node, "clock-output-names", &clk_name); 55 + 55 56 /* set up gate and fixed rate properties */ 56 57 gate->reg = of_iomap(node, 0); 57 58 gate->bit_idx = SUNXI_OSC24M_GATE; ··· 80 77 err_free_fixed: 81 78 kfree(fixed); 82 79 } 83 - CLK_OF_DECLARE(sun4i_osc, "allwinner,sun4i-osc-clk", sun4i_osc_clk_setup); 80 + CLK_OF_DECLARE(sun4i_osc, "allwinner,sun4i-a10-osc-clk", sun4i_osc_clk_setup); 84 81 85 82 86 83 ··· 252 249 *n = DIV_ROUND_UP(div, (*k+1)); 253 250 } 254 251 252 + /** 253 + * sun6i_a31_get_pll6_factors() - calculates n, k factors for A31 PLL6 254 + * PLL6 rate is calculated as follows 255 + * rate = parent_rate * n * (k + 1) / 2 256 + * parent_rate is always 24Mhz 257 + */ 255 258 259 + static void sun6i_a31_get_pll6_factors(u32 *freq, u32 parent_rate, 260 + u8 *n, u8 *k, u8 *m, u8 *p) 261 + { 262 + u8 div; 263 + 264 + /* 265 + * We always have 24MHz / 2, so we can just say that our 266 + * parent clock is 12MHz. 267 + */ 268 + parent_rate = parent_rate / 2; 269 + 270 + /* Normalize value to a parent_rate multiple (24M / 2) */ 271 + div = *freq / parent_rate; 272 + *freq = parent_rate * div; 273 + 274 + /* we were called to round the frequency, we can now return */ 275 + if (n == NULL) 276 + return; 277 + 278 + *k = div / 32; 279 + if (*k > 3) 280 + *k = 3; 281 + 282 + *n = DIV_ROUND_UP(div, (*k+1)); 283 + } 256 284 257 285 /** 258 286 * sun4i_get_apb1_factors() - calculates m, p factors for APB1 ··· 299 265 if (parent_rate < *freq) 300 266 *freq = parent_rate; 301 267 302 - parent_rate = (parent_rate + (*freq - 1)) / *freq; 268 + parent_rate = DIV_ROUND_UP(parent_rate, *freq); 303 269 304 270 /* Invalid rate! 
*/ 305 271 if (parent_rate > 32) ··· 330 296 331 297 /** 332 298 * sun4i_get_mod0_factors() - calculates m, n factors for MOD0-style clocks 333 - * MMC rate is calculated as follows 299 + * MOD0 rate is calculated as follows 334 300 * rate = (parent_rate >> p) / (m + 1); 335 301 */ 336 302 ··· 344 310 if (*freq > parent_rate) 345 311 *freq = parent_rate; 346 312 347 - div = parent_rate / *freq; 313 + div = DIV_ROUND_UP(parent_rate, *freq); 348 314 349 315 if (div < 16) 350 316 calcp = 0; ··· 385 351 if (*freq > parent_rate) 386 352 *freq = parent_rate; 387 353 388 - div = parent_rate / *freq; 354 + div = DIV_ROUND_UP(parent_rate, *freq); 389 355 390 356 if (div < 32) 391 357 calcp = 0; ··· 411 377 412 378 413 379 /** 380 + * sun7i_a20_gmac_clk_setup - Setup function for A20/A31 GMAC clock module 381 + * 382 + * This clock looks something like this 383 + * ________________________ 384 + * MII TX clock from PHY >-----|___________ _________|----> to GMAC core 385 + * GMAC Int. RGMII TX clk >----|___________\__/__gate---|----> to PHY 386 + * Ext. 125MHz RGMII TX clk >--|__divider__/ | 387 + * |________________________| 388 + * 389 + * The external 125 MHz reference is optional, i.e. GMAC can use its 390 + * internal TX clock just fine. The A31 GMAC clock module does not have 391 + * the divider controls for the external reference. 392 + * 393 + * To keep it simple, let the GMAC use either the MII TX clock for MII mode, 394 + * and its internal TX clock for GMII and RGMII modes. The GMAC driver should 395 + * select the appropriate source and gate/ungate the output to the PHY. 396 + * 397 + * Only the GMAC should use this clock. Altering the clock so that it doesn't 398 + * match the GMAC's operation parameters will result in the GMAC not being 399 + * able to send traffic out. The GMAC driver should set the clock rate and 400 + * enable/disable this clock to configure the required state. The clock 401 + * driver then responds by auto-reparenting the clock. 
402 + */ 403 + 404 + #define SUN7I_A20_GMAC_GPIT 2 405 + #define SUN7I_A20_GMAC_MASK 0x3 406 + #define SUN7I_A20_GMAC_PARENTS 2 407 + 408 + static void __init sun7i_a20_gmac_clk_setup(struct device_node *node) 409 + { 410 + struct clk *clk; 411 + struct clk_mux *mux; 412 + struct clk_gate *gate; 413 + const char *clk_name = node->name; 414 + const char *parents[SUN7I_A20_GMAC_PARENTS]; 415 + void *reg; 416 + 417 + if (of_property_read_string(node, "clock-output-names", &clk_name)) 418 + return; 419 + 420 + /* allocate mux and gate clock structs */ 421 + mux = kzalloc(sizeof(struct clk_mux), GFP_KERNEL); 422 + if (!mux) 423 + return; 424 + 425 + gate = kzalloc(sizeof(struct clk_gate), GFP_KERNEL); 426 + if (!gate) 427 + goto free_mux; 428 + 429 + /* gmac clock requires exactly 2 parents */ 430 + parents[0] = of_clk_get_parent_name(node, 0); 431 + parents[1] = of_clk_get_parent_name(node, 1); 432 + if (!parents[0] || !parents[1]) 433 + goto free_gate; 434 + 435 + reg = of_iomap(node, 0); 436 + if (!reg) 437 + goto free_gate; 438 + 439 + /* set up gate and fixed rate properties */ 440 + gate->reg = reg; 441 + gate->bit_idx = SUN7I_A20_GMAC_GPIT; 442 + gate->lock = &clk_lock; 443 + mux->reg = reg; 444 + mux->mask = SUN7I_A20_GMAC_MASK; 445 + mux->flags = CLK_MUX_INDEX_BIT; 446 + mux->lock = &clk_lock; 447 + 448 + clk = clk_register_composite(NULL, clk_name, 449 + parents, SUN7I_A20_GMAC_PARENTS, 450 + &mux->hw, &clk_mux_ops, 451 + NULL, NULL, 452 + &gate->hw, &clk_gate_ops, 453 + 0); 454 + 455 + if (IS_ERR(clk)) 456 + goto iounmap_reg; 457 + 458 + of_clk_add_provider(node, of_clk_src_simple_get, clk); 459 + clk_register_clkdev(clk, clk_name, NULL); 460 + 461 + return; 462 + 463 + iounmap_reg: 464 + iounmap(reg); 465 + free_gate: 466 + kfree(gate); 467 + free_mux: 468 + kfree(mux); 469 + } 470 + CLK_OF_DECLARE(sun7i_a20_gmac, "allwinner,sun7i-a20-gmac-clk", 471 + sun7i_a20_gmac_clk_setup); 472 + 473 + 474 + 475 + /** 414 476 * sunxi_factors_clk_setup() - Setup function 
for factor clocks 415 477 */ 416 478 ··· 517 387 int mux; 518 388 struct clk_factors_config *table; 519 389 void (*getter) (u32 *rate, u32 parent_rate, u8 *n, u8 *k, u8 *m, u8 *p); 390 + const char *name; 520 391 }; 521 392 522 393 static struct clk_factors_config sun4i_pll1_config = { ··· 542 411 543 412 static struct clk_factors_config sun4i_pll5_config = { 544 413 .nshift = 8, 414 + .nwidth = 5, 415 + .kshift = 4, 416 + .kwidth = 2, 417 + }; 418 + 419 + static struct clk_factors_config sun6i_a31_pll6_config = { 420 + .nshift = 8, 545 421 .nwidth = 5, 546 422 .kshift = 4, 547 423 .kwidth = 2, ··· 589 451 .getter = sun6i_a31_get_pll1_factors, 590 452 }; 591 453 454 + static const struct factors_data sun7i_a20_pll4_data __initconst = { 455 + .enable = 31, 456 + .table = &sun4i_pll5_config, 457 + .getter = sun4i_get_pll5_factors, 458 + }; 459 + 592 460 static const struct factors_data sun4i_pll5_data __initconst = { 593 461 .enable = 31, 594 462 .table = &sun4i_pll5_config, 595 463 .getter = sun4i_get_pll5_factors, 464 + .name = "pll5", 465 + }; 466 + 467 + static const struct factors_data sun4i_pll6_data __initconst = { 468 + .enable = 31, 469 + .table = &sun4i_pll5_config, 470 + .getter = sun4i_get_pll5_factors, 471 + .name = "pll6", 472 + }; 473 + 474 + static const struct factors_data sun6i_a31_pll6_data __initconst = { 475 + .enable = 31, 476 + .table = &sun6i_a31_pll6_config, 477 + .getter = sun6i_a31_get_pll6_factors, 596 478 }; 597 479 598 480 static const struct factors_data sun4i_apb1_data __initconst = { ··· 655 497 (parents[i] = of_clk_get_parent_name(node, i)) != NULL) 656 498 i++; 657 499 658 - /* Nodes should be providing the name via clock-output-names 659 - * but originally our dts didn't, and so we used node->name. 
660 - * The new, better nodes look like clk@deadbeef, so we pull the 661 - * name just in this case */ 662 - if (!strcmp("clk", clk_name)) { 663 - of_property_read_string_index(node, "clock-output-names", 664 - 0, &clk_name); 665 - } 500 + /* 501 + * some factor clocks, such as pll5 and pll6, may have multiple 502 + * outputs, and have their name designated in factors_data 503 + */ 504 + if (data->name) 505 + clk_name = data->name; 506 + else 507 + of_property_read_string(node, "clock-output-names", &clk_name); 666 508 667 509 factors = kzalloc(sizeof(struct clk_factors), GFP_KERNEL); 668 510 if (!factors) ··· 759 601 (parents[i] = of_clk_get_parent_name(node, i)) != NULL) 760 602 i++; 761 603 604 + of_property_read_string(node, "clock-output-names", &clk_name); 605 + 762 606 clk = clk_register_mux(NULL, clk_name, parents, i, 763 607 CLK_SET_RATE_NO_REPARENT, reg, 764 608 data->shift, SUNXI_MUX_GATE_WIDTH, ··· 820 660 821 661 clk_parent = of_clk_get_parent_name(node, 0); 822 662 663 + of_property_read_string(node, "clock-output-names", &clk_name); 664 + 823 665 clk = clk_register_divider(NULL, clk_name, clk_parent, 0, 824 666 reg, data->shift, data->width, 825 667 data->pow ? CLK_DIVIDER_POWER_OF_TWO : 0, ··· 835 673 836 674 837 675 /** 676 + * sunxi_gates_reset... 
- reset bits in leaf gate clk registers handling 677 + */ 678 + 679 + struct gates_reset_data { 680 + void __iomem *reg; 681 + spinlock_t *lock; 682 + struct reset_controller_dev rcdev; 683 + }; 684 + 685 + static int sunxi_gates_reset_assert(struct reset_controller_dev *rcdev, 686 + unsigned long id) 687 + { 688 + struct gates_reset_data *data = container_of(rcdev, 689 + struct gates_reset_data, 690 + rcdev); 691 + unsigned long flags; 692 + u32 reg; 693 + 694 + spin_lock_irqsave(data->lock, flags); 695 + 696 + reg = readl(data->reg); 697 + writel(reg & ~BIT(id), data->reg); 698 + 699 + spin_unlock_irqrestore(data->lock, flags); 700 + 701 + return 0; 702 + } 703 + 704 + static int sunxi_gates_reset_deassert(struct reset_controller_dev *rcdev, 705 + unsigned long id) 706 + { 707 + struct gates_reset_data *data = container_of(rcdev, 708 + struct gates_reset_data, 709 + rcdev); 710 + unsigned long flags; 711 + u32 reg; 712 + 713 + spin_lock_irqsave(data->lock, flags); 714 + 715 + reg = readl(data->reg); 716 + writel(reg | BIT(id), data->reg); 717 + 718 + spin_unlock_irqrestore(data->lock, flags); 719 + 720 + return 0; 721 + } 722 + 723 + static struct reset_control_ops sunxi_gates_reset_ops = { 724 + .assert = sunxi_gates_reset_assert, 725 + .deassert = sunxi_gates_reset_deassert, 726 + }; 727 + 728 + /** 838 729 * sunxi_gates_clk_setup() - Setup function for leaf gates on clocks 839 730 */ 840 731 ··· 895 680 896 681 struct gates_data { 897 682 DECLARE_BITMAP(mask, SUNXI_GATES_MAX_SIZE); 683 + u32 reset_mask; 898 684 }; 899 685 900 686 static const struct gates_data sun4i_axi_gates_data __initconst = { ··· 962 746 .mask = { 0xff80ff }, 963 747 }; 964 748 749 + static const struct gates_data sun4i_a10_usb_gates_data __initconst = { 750 + .mask = {0x1C0}, 751 + .reset_mask = 0x07, 752 + }; 753 + 754 + static const struct gates_data sun5i_a13_usb_gates_data __initconst = { 755 + .mask = {0x140}, 756 + .reset_mask = 0x03, 757 + }; 758 + 965 759 static void __init 
sunxi_gates_clk_setup(struct device_node *node, 966 760 struct gates_data *data) 967 761 { 968 762 struct clk_onecell_data *clk_data; 763 + struct gates_reset_data *reset_data; 969 764 const char *clk_parent; 970 765 const char *clk_name; 971 766 void *reg; ··· 1020 793 clk_data->clk_num = i; 1021 794 1022 795 of_clk_add_provider(node, of_clk_src_onecell_get, clk_data); 796 + 797 + /* Register a reset controller for gates with reset bits */ 798 + if (data->reset_mask == 0) 799 + return; 800 + 801 + reset_data = kzalloc(sizeof(*reset_data), GFP_KERNEL); 802 + if (!reset_data) 803 + return; 804 + 805 + reset_data->reg = reg; 806 + reset_data->lock = &clk_lock; 807 + reset_data->rcdev.nr_resets = __fls(data->reset_mask) + 1; 808 + reset_data->rcdev.ops = &sunxi_gates_reset_ops; 809 + reset_data->rcdev.of_node = node; 810 + reset_controller_register(&reset_data->rcdev); 1023 811 } 1024 812 1025 813 ··· 1074 832 }; 1075 833 1076 834 static const struct divs_data pll6_divs_data __initconst = { 1077 - .factors = &sun4i_pll5_data, 835 + .factors = &sun4i_pll6_data, 1078 836 .div = { 1079 837 { .shift = 0, .table = pll6_sata_tbl, .gate = 14 }, /* M, SATA */ 1080 838 { .fixed = 2 }, /* P, other */ ··· 1096 854 struct divs_data *data) 1097 855 { 1098 856 struct clk_onecell_data *clk_data; 1099 - const char *parent = node->name; 857 + const char *parent; 1100 858 const char *clk_name; 1101 859 struct clk **clks, *pclk; 1102 860 struct clk_hw *gate_hw, *rate_hw; ··· 1110 868 1111 869 /* Set up factor clock that we will be dividing */ 1112 870 pclk = sunxi_factors_clk_setup(node, data->factors); 871 + parent = __clk_get_name(pclk); 1113 872 1114 873 reg = of_iomap(node, 0); 1115 874 ··· 1213 970 1214 971 /* Matches for factors clocks */ 1215 972 static const struct of_device_id clk_factors_match[] __initconst = { 1216 - {.compatible = "allwinner,sun4i-pll1-clk", .data = &sun4i_pll1_data,}, 973 + {.compatible = "allwinner,sun4i-a10-pll1-clk", .data = &sun4i_pll1_data,}, 1217 974 
{.compatible = "allwinner,sun6i-a31-pll1-clk", .data = &sun6i_a31_pll1_data,}, 1218 - {.compatible = "allwinner,sun4i-apb1-clk", .data = &sun4i_apb1_data,}, 1219 - {.compatible = "allwinner,sun4i-mod0-clk", .data = &sun4i_mod0_data,}, 975 + {.compatible = "allwinner,sun7i-a20-pll4-clk", .data = &sun7i_a20_pll4_data,}, 976 + {.compatible = "allwinner,sun6i-a31-pll6-clk", .data = &sun6i_a31_pll6_data,}, 977 + {.compatible = "allwinner,sun4i-a10-apb1-clk", .data = &sun4i_apb1_data,}, 978 + {.compatible = "allwinner,sun4i-a10-mod0-clk", .data = &sun4i_mod0_data,}, 1220 979 {.compatible = "allwinner,sun7i-a20-out-clk", .data = &sun7i_a20_out_data,}, 1221 980 {} 1222 981 }; 1223 982 1224 983 /* Matches for divider clocks */ 1225 984 static const struct of_device_id clk_div_match[] __initconst = { 1226 - {.compatible = "allwinner,sun4i-axi-clk", .data = &sun4i_axi_data,}, 1227 - {.compatible = "allwinner,sun4i-ahb-clk", .data = &sun4i_ahb_data,}, 1228 - {.compatible = "allwinner,sun4i-apb0-clk", .data = &sun4i_apb0_data,}, 985 + {.compatible = "allwinner,sun4i-a10-axi-clk", .data = &sun4i_axi_data,}, 986 + {.compatible = "allwinner,sun4i-a10-ahb-clk", .data = &sun4i_ahb_data,}, 987 + {.compatible = "allwinner,sun4i-a10-apb0-clk", .data = &sun4i_apb0_data,}, 1229 988 {.compatible = "allwinner,sun6i-a31-apb2-div-clk", .data = &sun6i_a31_apb2_div_data,}, 1230 989 {} 1231 990 }; 1232 991 1233 992 /* Matches for divided outputs */ 1234 993 static const struct of_device_id clk_divs_match[] __initconst = { 1235 - {.compatible = "allwinner,sun4i-pll5-clk", .data = &pll5_divs_data,}, 1236 - {.compatible = "allwinner,sun4i-pll6-clk", .data = &pll6_divs_data,}, 994 + {.compatible = "allwinner,sun4i-a10-pll5-clk", .data = &pll5_divs_data,}, 995 + {.compatible = "allwinner,sun4i-a10-pll6-clk", .data = &pll6_divs_data,}, 1237 996 {} 1238 997 }; 1239 998 1240 999 /* Matches for mux clocks */ 1241 1000 static const struct of_device_id clk_mux_match[] __initconst = { 1242 - {.compatible = 
"allwinner,sun4i-cpu-clk", .data = &sun4i_cpu_mux_data,}, 1243 - {.compatible = "allwinner,sun4i-apb1-mux-clk", .data = &sun4i_apb1_mux_data,}, 1001 + {.compatible = "allwinner,sun4i-a10-cpu-clk", .data = &sun4i_cpu_mux_data,}, 1002 + {.compatible = "allwinner,sun4i-a10-apb1-mux-clk", .data = &sun4i_apb1_mux_data,}, 1244 1003 {.compatible = "allwinner,sun6i-a31-ahb1-mux-clk", .data = &sun6i_a31_ahb1_mux_data,}, 1245 1004 {} 1246 1005 }; 1247 1006 1248 1007 /* Matches for gate clocks */ 1249 1008 static const struct of_device_id clk_gates_match[] __initconst = { 1250 - {.compatible = "allwinner,sun4i-axi-gates-clk", .data = &sun4i_axi_gates_data,}, 1251 - {.compatible = "allwinner,sun4i-ahb-gates-clk", .data = &sun4i_ahb_gates_data,}, 1009 + {.compatible = "allwinner,sun4i-a10-axi-gates-clk", .data = &sun4i_axi_gates_data,}, 1010 + {.compatible = "allwinner,sun4i-a10-ahb-gates-clk", .data = &sun4i_ahb_gates_data,}, 1252 1011 {.compatible = "allwinner,sun5i-a10s-ahb-gates-clk", .data = &sun5i_a10s_ahb_gates_data,}, 1253 1012 {.compatible = "allwinner,sun5i-a13-ahb-gates-clk", .data = &sun5i_a13_ahb_gates_data,}, 1254 1013 {.compatible = "allwinner,sun6i-a31-ahb1-gates-clk", .data = &sun6i_a31_ahb1_gates_data,}, 1255 1014 {.compatible = "allwinner,sun7i-a20-ahb-gates-clk", .data = &sun7i_a20_ahb_gates_data,}, 1256 - {.compatible = "allwinner,sun4i-apb0-gates-clk", .data = &sun4i_apb0_gates_data,}, 1015 + {.compatible = "allwinner,sun4i-a10-apb0-gates-clk", .data = &sun4i_apb0_gates_data,}, 1257 1016 {.compatible = "allwinner,sun5i-a10s-apb0-gates-clk", .data = &sun5i_a10s_apb0_gates_data,}, 1258 1017 {.compatible = "allwinner,sun5i-a13-apb0-gates-clk", .data = &sun5i_a13_apb0_gates_data,}, 1259 1018 {.compatible = "allwinner,sun7i-a20-apb0-gates-clk", .data = &sun7i_a20_apb0_gates_data,}, 1260 - {.compatible = "allwinner,sun4i-apb1-gates-clk", .data = &sun4i_apb1_gates_data,}, 1019 + {.compatible = "allwinner,sun4i-a10-apb1-gates-clk", .data = 
&sun4i_apb1_gates_data,}, 1261 1020 {.compatible = "allwinner,sun5i-a10s-apb1-gates-clk", .data = &sun5i_a10s_apb1_gates_data,}, 1262 1021 {.compatible = "allwinner,sun5i-a13-apb1-gates-clk", .data = &sun5i_a13_apb1_gates_data,}, 1263 1022 {.compatible = "allwinner,sun6i-a31-apb1-gates-clk", .data = &sun6i_a31_apb1_gates_data,}, 1264 1023 {.compatible = "allwinner,sun7i-a20-apb1-gates-clk", .data = &sun7i_a20_apb1_gates_data,}, 1265 1024 {.compatible = "allwinner,sun6i-a31-apb2-gates-clk", .data = &sun6i_a31_apb2_gates_data,}, 1025 + {.compatible = "allwinner,sun4i-a10-usb-clk", .data = &sun4i_a10_usb_gates_data,}, 1026 + {.compatible = "allwinner,sun5i-a13-usb-clk", .data = &sun5i_a13_usb_gates_data,}, 1266 1027 {} 1267 1028 }; 1268 1029
+1 -1
drivers/clk/tegra/clk-periph.c
··· 130 130 .disable = clk_periph_disable, 131 131 }; 132 132 133 - const struct clk_ops tegra_clk_periph_no_gate_ops = { 133 + static const struct clk_ops tegra_clk_periph_no_gate_ops = { 134 134 .get_parent = clk_periph_get_parent, 135 135 .set_parent = clk_periph_set_parent, 136 136 .recalc_rate = clk_periph_recalc_rate,
-1
drivers/clk/ti/clk-33xx.c
··· 34 34 DT_CLK(NULL, "dpll_core_m5_ck", "dpll_core_m5_ck"), 35 35 DT_CLK(NULL, "dpll_core_m6_ck", "dpll_core_m6_ck"), 36 36 DT_CLK(NULL, "dpll_mpu_ck", "dpll_mpu_ck"), 37 - DT_CLK("cpu0", NULL, "dpll_mpu_ck"), 38 37 DT_CLK(NULL, "dpll_mpu_m2_ck", "dpll_mpu_m2_ck"), 39 38 DT_CLK(NULL, "dpll_ddr_ck", "dpll_ddr_ck"), 40 39 DT_CLK(NULL, "dpll_ddr_m2_ck", "dpll_ddr_m2_ck"),
+4 -4
drivers/clk/ti/divider.c
··· 112 112 return parent_rate; 113 113 } 114 114 115 - return parent_rate / div; 115 + return DIV_ROUND_UP(parent_rate, div); 116 116 } 117 117 118 118 /* ··· 182 182 } 183 183 parent_rate = __clk_round_rate(__clk_get_parent(hw->clk), 184 184 MULT_ROUND_UP(rate, i)); 185 - now = parent_rate / i; 185 + now = DIV_ROUND_UP(parent_rate, i); 186 186 if (now <= rate && now > best) { 187 187 bestdiv = i; 188 188 best = now; ··· 205 205 int div; 206 206 div = ti_clk_divider_bestdiv(hw, rate, prate); 207 207 208 - return *prate / div; 208 + return DIV_ROUND_UP(*prate, div); 209 209 } 210 210 211 211 static int ti_clk_divider_set_rate(struct clk_hw *hw, unsigned long rate, ··· 216 216 unsigned long flags = 0; 217 217 u32 val; 218 218 219 - div = parent_rate / rate; 219 + div = DIV_ROUND_UP(parent_rate, rate); 220 220 value = _get_val(divider, div); 221 221 222 222 if (value > div_mask(divider))
+2 -1
drivers/clk/ux500/u8500_of_clk.c
··· 29 29 #define PRCC_KCLK_STORE(clk, base, bit) \ 30 30 prcc_kclk[(base * PRCC_PERIPHS_PER_CLUSTER) + bit] = clk 31 31 32 - struct clk *ux500_twocell_get(struct of_phandle_args *clkspec, void *data) 32 + static struct clk *ux500_twocell_get(struct of_phandle_args *clkspec, 33 + void *data) 33 34 { 34 35 struct clk **clk_data = data; 35 36 unsigned int base, bit;
+2 -2
drivers/clk/zynq/clkc.c
··· 149 149 clks[fclk] = clk_register_gate(NULL, clk_name, 150 150 div1_name, CLK_SET_RATE_PARENT, fclk_gate_reg, 151 151 0, CLK_GATE_SET_TO_DISABLE, fclk_gate_lock); 152 - enable_reg = readl(fclk_gate_reg) & 1; 152 + enable_reg = clk_readl(fclk_gate_reg) & 1; 153 153 if (enable && !enable_reg) { 154 154 if (clk_prepare_enable(clks[fclk])) 155 155 pr_warn("%s: FCLK%u enable failed\n", __func__, ··· 278 278 SLCR_IOPLL_CTRL, 4, 1, 0, &iopll_lock); 279 279 280 280 /* CPU clocks */ 281 - tmp = readl(SLCR_621_TRUE) & 1; 281 + tmp = clk_readl(SLCR_621_TRUE) & 1; 282 282 clk = clk_register_mux(NULL, "cpu_mux", cpu_parents, 4, 283 283 CLK_SET_RATE_NO_REPARENT, SLCR_ARM_CLK_CTRL, 4, 2, 0, 284 284 &armclk_lock);
+9 -9
drivers/clk/zynq/pll.c
··· 90 90 * makes probably sense to redundantly save fbdiv in the struct 91 91 * zynq_pll to save the IO access. 92 92 */ 93 - fbdiv = (readl(clk->pll_ctrl) & PLLCTRL_FBDIV_MASK) >> 93 + fbdiv = (clk_readl(clk->pll_ctrl) & PLLCTRL_FBDIV_MASK) >> 94 94 PLLCTRL_FBDIV_SHIFT; 95 95 96 96 return parent_rate * fbdiv; ··· 112 112 113 113 spin_lock_irqsave(clk->lock, flags); 114 114 115 - reg = readl(clk->pll_ctrl); 115 + reg = clk_readl(clk->pll_ctrl); 116 116 117 117 spin_unlock_irqrestore(clk->lock, flags); 118 118 ··· 138 138 /* Power up PLL and wait for lock */ 139 139 spin_lock_irqsave(clk->lock, flags); 140 140 141 - reg = readl(clk->pll_ctrl); 141 + reg = clk_readl(clk->pll_ctrl); 142 142 reg &= ~(PLLCTRL_RESET_MASK | PLLCTRL_PWRDWN_MASK); 143 - writel(reg, clk->pll_ctrl); 144 - while (!(readl(clk->pll_status) & (1 << clk->lockbit))) 143 + clk_writel(reg, clk->pll_ctrl); 144 + while (!(clk_readl(clk->pll_status) & (1 << clk->lockbit))) 145 145 ; 146 146 147 147 spin_unlock_irqrestore(clk->lock, flags); ··· 168 168 /* shut down PLL */ 169 169 spin_lock_irqsave(clk->lock, flags); 170 170 171 - reg = readl(clk->pll_ctrl); 171 + reg = clk_readl(clk->pll_ctrl); 172 172 reg |= PLLCTRL_RESET_MASK | PLLCTRL_PWRDWN_MASK; 173 - writel(reg, clk->pll_ctrl); 173 + clk_writel(reg, clk->pll_ctrl); 174 174 175 175 spin_unlock_irqrestore(clk->lock, flags); 176 176 } ··· 225 225 226 226 spin_lock_irqsave(pll->lock, flags); 227 227 228 - reg = readl(pll->pll_ctrl); 228 + reg = clk_readl(pll->pll_ctrl); 229 229 reg &= ~PLLCTRL_BPQUAL_MASK; 230 - writel(reg, pll->pll_ctrl); 230 + clk_writel(reg, pll->pll_ctrl); 231 231 232 232 spin_unlock_irqrestore(pll->lock, flags); 233 233
+5
include/dt-bindings/clock/hi3620-clock.h
··· 147 147 #define HI3620_MMC_CLK3 217 148 148 #define HI3620_MCU_CLK 218 149 149 150 + #define HI3620_SD_CIUCLK 0 151 + #define HI3620_MMC_CIUCLK1 1 152 + #define HI3620_MMC_CIUCLK2 2 153 + #define HI3620_MMC_CIUCLK3 3 154 + 150 155 #define HI3620_NR_CLKS 219 151 156 152 157 #endif /* __DTS_HI3620_CLOCK_H */
+35
include/dt-bindings/clock/hip04-clock.h
··· 1 + /* 2 + * Copyright (c) 2013-2014 Hisilicon Limited. 3 + * Copyright (c) 2013-2014 Linaro Limited. 4 + * 5 + * Author: Haojian Zhuang <haojian.zhuang@linaro.org> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License as published by 9 + * the Free Software Foundation; either version 2 of the License, or 10 + * (at your option) any later version. 11 + * 12 + * This program is distributed in the hope that it will be useful, 13 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 + * GNU General Public License for more details. 16 + * 17 + * You should have received a copy of the GNU General Public License along 18 + * with this program; if not, write to the Free Software Foundation, Inc., 19 + * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 20 + * 21 + */ 22 + 23 + #ifndef __DTS_HIP04_CLOCK_H 24 + #define __DTS_HIP04_CLOCK_H 25 + 26 + #define HIP04_NONE_CLOCK 0 27 + 28 + /* fixed rate & fixed factor clocks */ 29 + #define HIP04_OSC50M 1 30 + #define HIP04_CLK_50M 2 31 + #define HIP04_CLK_168M 3 32 + 33 + #define HIP04_NR_CLKS 64 34 + 35 + #endif /* __DTS_HIP04_CLOCK_H */
+8
include/linux/clk-provider.h
··· 32 32 #define CLK_GET_ACCURACY_NOCACHE BIT(8) /* do not use the cached clk accuracy */ 33 33 34 34 struct clk_hw; 35 + struct dentry; 35 36 36 37 /** 37 38 * struct clk_ops - Callback operations for hardware clocks; these are to ··· 128 127 * separately via calls to .set_parent and .set_rate. 129 128 * Returns 0 on success, -EERROR otherwise. 130 129 * 130 + * @debug_init: Set up type-specific debugfs entries for this clock. This 131 + * is called once, after the debugfs directory entry for this 132 + * clock has been created. The dentry pointer representing that 133 + * directory is provided as an argument. Called with 134 + * prepare_lock held. Returns 0 on success, -EERROR otherwise. 135 + * 131 136 * 132 137 * The clk_enable/clk_disable and clk_prepare/clk_unprepare pairs allow 133 138 * implementations to split any work between atomic (enable) and sleepable ··· 172 165 unsigned long (*recalc_accuracy)(struct clk_hw *hw, 173 166 unsigned long parent_accuracy); 174 167 void (*init)(struct clk_hw *hw); 168 + int (*debug_init)(struct clk_hw *hw, struct dentry *dentry); 175 169 }; 176 170 177 171 /**
+14
include/linux/clk.h
··· 78 78 unsigned long new_rate; 79 79 }; 80 80 81 + /** 82 + * clk_notifier_register: register a clock rate-change notifier callback 83 + * @clk: clock whose rate we are interested in 84 + * @nb: notifier block with callback function pointer 85 + * 86 + * ProTip: debugging across notifier chains can be frustrating. Make sure that 87 + * your notifier callback function prints a nice big warning in case of 88 + * failure. 89 + */ 81 90 int clk_notifier_register(struct clk *clk, struct notifier_block *nb); 82 91 92 + /** 93 + * clk_notifier_unregister: unregister a clock rate-change notifier callback 94 + * @clk: clock whose rate we are no longer interested in 95 + * @nb: notifier block which will be unregistered 96 + */ 83 97 int clk_notifier_unregister(struct clk *clk, struct notifier_block *nb); 84 98 85 99 /**