Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drivers-for-3.16' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc into next

Pull ARM SoC driver changes from Olof Johansson:
"SoC-near driver changes that we're merging through our tree. Mostly
because they depend on other changes we have staged, but in some cases
because the driver maintainers preferred that we did it this way.

This contains a largeish cleanup series of the omap_l3_noc bus driver,
cpuidle rework for Exynos, some reset driver conversions and a long
branch of TI EDMA fixes and cleanups, with more to come next release.

The TI EDMA cleanups are a shared branch with the dmaengine tree, with
a handful of Davinci-specific fixes on top.

After discussion at last year's KS (and some more on the mailing
lists), we are here adding a drivers/soc directory. The purpose of
this is to keep per-vendor shared code that's needed by different
drivers but that doesn't fit into the MFD (nor drivers/platform)
model. We expect to keep merging contents for this hierarchy through
arm-soc so we can keep an eye on what the vendors keep adding here,
and avoid making it a free-for-all to shove in crazy stuff"

* tag 'drivers-for-3.16' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (101 commits)
cpufreq: exynos: Fix driver compilation with ARCH_MULTIPLATFORM
tty: serial: msm: Remove direct access to GSBI
power: reset: keystone-reset: introduce keystone reset driver
Documentation: dt: add bindings for keystone pll control controller
Documentation: dt: add bindings for keystone reset driver
soc: qcom: fix of_device_id table
ARM: EXYNOS: Fix kernel panic when unplugging CPU1 on exynos
ARM: EXYNOS: Move the driver to drivers/cpuidle directory
ARM: EXYNOS: Cleanup all unneeded headers from cpuidle.c
ARM: EXYNOS: Pass the AFTR callback to the platform_data
ARM: EXYNOS: Move S5P_CHECK_SLEEP into pm.c
ARM: EXYNOS: Move the power sequence call in the cpu_pm notifier
ARM: EXYNOS: Move the AFTR state function into pm.c
ARM: EXYNOS: Encapsulate the AFTR code into a function
ARM: EXYNOS: Disable cpuidle for exynos5440
ARM: EXYNOS: Encapsulate boot vector code into a function for cpuidle
ARM: EXYNOS: Pass wakeup mask parameter to function for cpuidle
ARM: EXYNOS: Remove ifdef for scu_enable in pm
ARM: EXYNOS: Move scu_enable in the cpu_pm notifier
ARM: EXYNOS: Use the cpu_pm notifier for pm
...

+2591 -1048
+8
Documentation/ABI/testing/sysfs-platform-brcmstb-gisb-arb
···
+ What:		/sys/devices/../../gisb_arb_timeout
+ Date:		May 2014
+ KernelVersion:	3.17
+ Contact:	Florian Fainelli <f.fainelli@gmail.com>
+ Description:
+		Returns the currently configured raw timeout value of the
+		Broadcom Set Top Box internal GISB bus arbiter. Minimum value
+		is 1, and maximum value is 0xffffffff.
+2
Documentation/devicetree/bindings/arm/omap/l3-noc.txt
···
  Required properties:
  - compatible : Should be "ti,omap3-l3-smx" for OMAP3 family
                 Should be "ti,omap4-l3-noc" for OMAP4 family
+                Should be "ti,dra7-l3-noc" for DRA7 family
+                Should be "ti,am4372-l3-noc" for AM43 family
  - reg: Contains L3 register address range for each noc domain.
  - ti,hwmods: "l3_main_1", ... One hwmod for each noc domain.
+30
Documentation/devicetree/bindings/bus/brcm,gisb-arb.txt
···
+ Broadcom GISB bus Arbiter controller
+
+ Required properties:
+
+ - compatible: should be "brcm,gisb-arb"
+ - reg: specifies the base physical address and size of the registers
+ - interrupt-parent: specifies the phandle to the parent interrupt controller
+   this arbiter gets interrupt line from
+ - interrupts: specifies the two interrupts (timeout and TEA) to be used from
+   the parent interrupt controller
+
+ Optional properties:
+
+ - brcm,gisb-arb-master-mask: 32-bits wide bitmask used to specify which GISB
+   masters are valid at the system level
+ - brcm,gisb-arb-master-names: string list of the literal name of the GISB
+   masters. Should match the number of bits set in brcm,gisb-arb-master-mask
+   and the order in which they appear
+
+ Example:
+
+ gisb-arb@f0400000 {
+	compatible = "brcm,gisb-arb";
+	reg = <0xf0400000 0x800>;
+	interrupts = <0>, <2>;
+	interrupt-parent = <&sun_l2_intc>;
+
+	brcm,gisb-arb-master-mask = <0x7>;
+	brcm,gisb-arb-master-names = "bsp_0", "scpu_0", "cpu_0";
+ };
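The consistency rule the binding states above (the name list must match the number of bits set in the master mask) can be sketched as a small check. This is an illustrative Python model only, using the values from the example node; it is not part of the kernel driver:

```python
# Check implied by the brcm,gisb-arb binding: the number of strings in
# brcm,gisb-arb-master-names should equal the popcount of
# brcm,gisb-arb-master-mask.

def masters_consistent(mask: int, names: list) -> bool:
    return bin(mask).count("1") == len(names)

# Values taken from the example node in the binding document.
print(masters_consistent(0x7, ["bsp_0", "scpu_0", "cpu_0"]))  # True
print(masters_consistent(0x7, ["bsp_0", "scpu_0"]))           # False
```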
+20
Documentation/devicetree/bindings/clock/ti-keystone-pllctrl.txt
···
+ * Device tree bindings for Texas Instruments keystone pll controller
+
+ The main pll controller used to drive the C66x CorePacs, the switch fabric,
+ and a majority of the peripheral clocks (all but the ARM CorePacs, DDR3 and
+ the NETCP modules) requires a PLL Controller to manage the various clock
+ divisions, gating, and synchronization.
+
+ Required properties:
+
+ - compatible: "ti,keystone-pllctrl", "syscon"
+
+ - reg: contains offset/length value for pll controller
+   registers space.
+
+ Example:
+
+ pllctrl: pll-controller@0x02310000 {
+	compatible = "ti,keystone-pllctrl", "syscon";
+	reg = <0x02310000 0x200>;
+ };
+7 -6
Documentation/devicetree/bindings/dma/ti-edma.txt
···
 
  Required properties:
  - compatible : "ti,edma3"
- - ti,edma-regions: Number of regions
- - ti,edma-slots: Number of slots
  - #dma-cells: Should be set to <1>
    Clients should use a single channel number per DMA request.
- - dma-channels: Specify total DMA channels per CC
  - reg: Memory map for accessing module
  - interrupt-parent: Interrupt controller the interrupt is routed through
  - interrupts: Exactly 3 interrupts need to be specified in the order:
···
  - ti,hwmods: Name of the hwmods associated to the EDMA
  - ti,edma-xbar-event-map: Crossbar event to channel map
 
+ Deprecated properties:
+ Listed here in case one wants to boot an old kernel with new DTB. These
+ properties might need to be added to the new DTS files.
+ - ti,edma-regions: Number of regions
+ - ti,edma-slots: Number of slots
+ - dma-channels: Specify total DMA channels per CC
+
  Example:
 
  edma: edma@49000000 {
···
	compatible = "ti,edma3";
	ti,hwmods = "tpcc", "tptc0", "tptc1", "tptc2";
	#dma-cells = <1>;
-	dma-channels = <64>;
-	ti,edma-regions = <4>;
-	ti,edma-slots = <256>;
	ti,edma-xbar-event-map = /bits/ 16 <1 12
					    2 13>;
  };
+27 -5
Documentation/devicetree/bindings/memory-controllers/mvebu-devbus.txt
···
  Required properties:
 
- - compatible: Currently only Armada 370/XP SoC are supported,
-               with this compatible string:
+ - compatible: Armada 370/XP SoC are supported using the
+               "marvell,mvebu-devbus" compatible string.
 
-		marvell,mvebu-devbus
+               Orion5x SoC are supported using the
+               "marvell,orion-devbus" compatible string.
 
  - reg: A resource specifier for the register space.
         This is the base address of a chip select within
···
         integer values for each chip-select line in use:
         0 <physical address of mapping> <size>
 
- Mandatory timing properties for child nodes:
+ Optional properties:
+
+ - devbus,keep-config	This property can optionally be used to keep
+			using the timing parameters set by the
+			bootloader. It makes all the timing properties
+			described below unused.
+
+ Timing properties for child nodes:
 
  Read parameters:
···
	drive the AD bus after the completion of a device read.
	This prevents contentions on the Device Bus after a read
	cycle from a slow device.
+	Mandatory, except if devbus,keep-config is used.
 
- - devbus,bus-width: Defines the bus width (e.g. <16>)
+ - devbus,bus-width: Defines the bus width, in bits (e.g. <16>).
+	Mandatory, except if devbus,keep-config is used.
 
  - devbus,badr-skew-ps: Defines the time delay from A[2:0] toggle,
	to read data sample. This parameter is useful for
	synchronous pipelined devices, where the address
	precedes the read data by one or two cycles.
+	Mandatory, except if devbus,keep-config is used.
 
  - devbus,acc-first-ps: Defines the time delay from the negation of
	ALE[0] to the cycle that the first read data is sampled
	by the controller.
+	Mandatory, except if devbus,keep-config is used.
 
  - devbus,acc-next-ps: Defines the time delay between the cycle that
	samples data N and the cycle that samples data N+1
	(in burst accesses).
+	Mandatory, except if devbus,keep-config is used.
 
  - devbus,rd-setup-ps: Defines the time delay between DEV_CSn assertion to
	DEV_OEn assertion. If set to 0 (default),
···
	This parameter has no effect on <acc-first-ps> parameter
	(no effect on first data sample). Set <rd-setup-ps>
	to a value smaller than <acc-first-ps>.
+	Mandatory for "marvell,mvebu-devbus" compatible string,
+	except if devbus,keep-config is used.
 
  - devbus,rd-hold-ps: Defines the time between the last data sample to the
	de-assertion of DEV_CSn. If set to 0 (default),
···
	last data sampled. Also this parameter has no
	effect on <turn-off-ps> parameter.
	Set <rd-hold-ps> to a value smaller than <turn-off-ps>.
+	Mandatory for "marvell,mvebu-devbus" compatible string,
+	except if devbus,keep-config is used.
 
  Write parameters:
 
  - devbus,ale-wr-ps: Defines the time delay from the ALE[0] negation cycle
	to the DEV_WEn assertion.
+	Mandatory.
 
  - devbus,wr-low-ps: Defines the time during which DEV_WEn is active.
	A[2:0] and Data are kept valid as long as DEV_WEn
	is active. This parameter defines the setup time of
	address and data to DEV_WEn rise.
+	Mandatory.
 
  - devbus,wr-high-ps: Defines the time during which DEV_WEn is kept
	inactive (high) between data beats of a burst write.
···
	<wr-high-ps> - <tick> ps.
	This parameter defines the hold time of address and
	data after DEV_WEn rise.
+	Mandatory.
 
  - devbus,sync-enable: Synchronous device enable.
	1: True
	0: False
+	Mandatory for "marvell,mvebu-devbus" compatible string,
+	except if devbus,keep-config is used.
···
  An example for an Armada XP GP board, with a 16 MiB NOR device as child
  is shown below. Note that the Device Bus driver is in charge of allocating
+67
Documentation/devicetree/bindings/power/reset/keystone-reset.txt
···
+ * Device tree bindings for Texas Instruments keystone reset
+
+ This node is intended to allow SoC reset in case of software reset
+ of selected watchdogs.
+
+ The Keystone SoCs can contain up to 4 watchdog timers to reset
+ SoC. Each watchdog timer event input is connected to the Reset Mux
+ block. The Reset Mux block can be configured to cause reset or not.
+
+ Additionally, a soft or hard reset can be configured.
+
+ Required properties:
+
+ - compatible:		ti,keystone-reset
+
+ - ti,syscon-pll:	phandle/offset pair. The phandle to syscon used to
+			access pll controller registers and the offset to use
+			reset control registers.
+
+ - ti,syscon-dev:	phandle/offset pair. The phandle to syscon used to
+			access device state control registers and the offset
+			in order to use mux block registers for all watchdogs.
+
+ Optional properties:
+
+ - ti,soft-reset:	Boolean option indicating soft reset.
+			By default hard reset is used.
+
+ - ti,wdt-list:		List of WDTs that can cause SoC reset. It is not
+			related to the WDT driver; it is only needed to enable
+			a SoC reset triggered by one of the WDTs. The list is
+			in the format: <0>, <2>; entries can appear in any
+			order and range from 0 to 3, as keystone can contain
+			up to 4 SoC reset watchdogs.
+
+ Example 1:
+ Configure keystone reset so that a software reset or WDT0 issues a
+ hard reset of the SoC.
+
+ pllctrl: pll-controller@02310000 {
+	compatible = "ti,keystone-pllctrl", "syscon";
+	reg = <0x02310000 0x200>;
+ };
+
+ devctrl: device-state-control@02620000 {
+	compatible = "ti,keystone-devctrl", "syscon";
+	reg = <0x02620000 0x1000>;
+ };
+
+ rstctrl: reset-controller {
+	compatible = "ti,keystone-reset";
+	ti,syscon-pll = <&pllctrl 0xe4>;
+	ti,syscon-dev = <&devctrl 0x328>;
+	ti,wdt-list = <0>;
+ };
+
+ Example 2:
+ Configure keystone reset so that a software reset, WDT0, or WDT2 issues
+ a soft reset of the SoC.
+
+ rstctrl: reset-controller {
+	compatible = "ti,keystone-reset";
+	ti,syscon-pll = <&pllctrl 0xe4>;
+	ti,syscon-dev = <&devctrl 0x328>;
+	ti,wdt-list = <0>, <2>;
+	ti,soft-reset;
+ };
+21
Documentation/devicetree/bindings/reset/allwinner,sunxi-clock-reset.txt
···
+ Allwinner sunxi Peripheral Reset Controller
+ ===========================================
+
+ Please also refer to reset.txt in this directory for common reset
+ controller binding usage.
+
+ Required properties:
+ - compatible: Should be one of the following:
+   "allwinner,sun6i-a31-ahb1-reset"
+   "allwinner,sun6i-a31-clock-reset"
+ - reg: should be register base and length as documented in the
+   datasheet
+ - #reset-cells: 1, see below
+
+ example:
+
+ ahb1_rst: reset@01c202c0 {
+	#reset-cells = <1>;
+	compatible = "allwinner,sun6i-a31-ahb1-reset";
+	reg = <0x01c202c0 0xc>;
+ };
-3
arch/arm/boot/dts/am33xx.dtsi
···
		<0x44e10f90 0x40>;
	interrupts = <12 13 14>;
	#dma-cells = <1>;
-	dma-channels = <64>;
-	ti,edma-regions = <4>;
-	ti,edma-slots = <256>;
  };
 
  gpio0: gpio@44e07000 {
-3
arch/arm/boot/dts/am4372.dtsi
···
		<GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>,
		<GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
	#dma-cells = <1>;
-	dma-channels = <64>;
-	ti,edma-regions = <4>;
-	ti,edma-slots = <256>;
  };
 
  uart0: serial@44e09000 {
+96 -101
arch/arm/common/edma.c
···
 #define PARM_OFFSET(param_no)	(EDMA_PARM + ((param_no) << 5))
 
 #define EDMA_DCHMAP	0x0100  /* 64 registers */
-#define CHMAP_EXIST	BIT(24)
+
+/* CCCFG register */
+#define GET_NUM_DMACH(x)	(x & 0x7) /* bits 0-2 */
+#define GET_NUM_PAENTRY(x)	((x & 0x7000) >> 12) /* bits 12-14 */
+#define GET_NUM_EVQUE(x)	((x & 0x70000) >> 16) /* bits 16-18 */
+#define GET_NUM_REGN(x)		((x & 0x300000) >> 20) /* bits 20-21 */
+#define CHMAP_EXIST		BIT(24)
 
 #define EDMA_MAX_DMACH		64
 #define EDMA_MAX_PARAMENTRY	512
···
	unsigned	num_region;
	unsigned	num_slots;
	unsigned	num_tc;
-	unsigned	num_cc;
	enum dma_event_q	default_queue;
 
	/* list of channels with no event trigger; terminated by "-1" */
···
	queue_no &= 7;
	edma_modify_array(ctlr, EDMA_DMAQNUM, (ch_no >> 3),
			~(0x7 << bit), queue_no << bit);
-}
-
-static void __init map_queue_tc(unsigned ctlr, int queue_no, int tc_no)
-{
-	int bit = queue_no * 4;
-	edma_modify(ctlr, EDMA_QUETCMAP, ~(0x7 << bit), ((tc_no & 0x7) << bit));
 }
 
 static void __init assign_priority_to_queue(unsigned ctlr, int queue_no,
···
 EXPORT_SYMBOL(edma_set_dest);
 
 /**
- * edma_get_position - returns the current transfer points
+ * edma_get_position - returns the current transfer point
  * @slot: parameter RAM slot being examined
- * @src: pointer to source port position
- * @dst: pointer to destination port position
+ * @dst: true selects the dest position, false the source
  *
- * Returns current source and destination addresses for a particular
- * parameter RAM slot. Its channel should not be active when this is called.
+ * Returns the position of the current active slot
  */
-void edma_get_position(unsigned slot, dma_addr_t *src, dma_addr_t *dst)
+dma_addr_t edma_get_position(unsigned slot, bool dst)
 {
-	struct edmacc_param temp;
-	unsigned ctlr;
+	u32 offs, ctlr = EDMA_CTLR(slot);
 
-	ctlr = EDMA_CTLR(slot);
	slot = EDMA_CHAN_SLOT(slot);
 
-	edma_read_slot(EDMA_CTLR_CHAN(ctlr, slot), &temp);
-	if (src != NULL)
-		*src = temp.src;
-	if (dst != NULL)
-		*dst = temp.dst;
+	offs = PARM_OFFSET(slot);
+	offs += dst ? PARM_DST : PARM_SRC;
+
+	return edma_read(ctlr, offs);
 }
-EXPORT_SYMBOL(edma_get_position);
 
 /**
  * edma_set_src_index - configure DMA source address indexing
···
 }
 EXPORT_SYMBOL(edma_clear_event);
 
+static int edma_setup_from_hw(struct device *dev, struct edma_soc_info *pdata,
+			      struct edma *edma_cc)
+{
+	int i;
+	u32 value, cccfg;
+	s8 (*queue_priority_map)[2];
+
+	/* Decode the eDMA3 configuration from CCCFG register */
+	cccfg = edma_read(0, EDMA_CCCFG);
+
+	value = GET_NUM_REGN(cccfg);
+	edma_cc->num_region = BIT(value);
+
+	value = GET_NUM_DMACH(cccfg);
+	edma_cc->num_channels = BIT(value + 1);
+
+	value = GET_NUM_PAENTRY(cccfg);
+	edma_cc->num_slots = BIT(value + 4);
+
+	value = GET_NUM_EVQUE(cccfg);
+	edma_cc->num_tc = value + 1;
+
+	dev_dbg(dev, "eDMA3 HW configuration (cccfg: 0x%08x):\n", cccfg);
+	dev_dbg(dev, "num_region: %u\n", edma_cc->num_region);
+	dev_dbg(dev, "num_channel: %u\n", edma_cc->num_channels);
+	dev_dbg(dev, "num_slot: %u\n", edma_cc->num_slots);
+	dev_dbg(dev, "num_tc: %u\n", edma_cc->num_tc);
+
+	/* Nothing needs to be done if queue priority is provided */
+	if (pdata->queue_priority_mapping)
+		return 0;
+
+	/*
+	 * Configure TC/queue priority as follows:
+	 * Q0 - priority 0
+	 * Q1 - priority 1
+	 * Q2 - priority 2
+	 * ...
+	 * The meaning of priority numbers: 0 highest priority, 7 lowest
+	 * priority. So Q0 is the highest priority queue and the last queue has
+	 * the lowest priority.
+	 */
+	queue_priority_map = devm_kzalloc(dev,
+					  (edma_cc->num_tc + 1) * sizeof(s8),
+					  GFP_KERNEL);
+	if (!queue_priority_map)
+		return -ENOMEM;
+
+	for (i = 0; i < edma_cc->num_tc; i++) {
+		queue_priority_map[i][0] = i;
+		queue_priority_map[i][1] = i;
+	}
+	queue_priority_map[i][0] = -1;
+	queue_priority_map[i][1] = -1;
+
+	pdata->queue_priority_mapping = queue_priority_map;
+	pdata->default_queue = 0;
+
+	return 0;
+}
+
 #if IS_ENABLED(CONFIG_OF) && IS_ENABLED(CONFIG_DMADEVICES)
 
 static int edma_xbar_event_map(struct device *dev, struct device_node *node,
···
				struct device_node *node,
				struct edma_soc_info *pdata)
 {
-	int ret = 0, i;
-	u32 value;
+	int ret = 0;
	struct property *prop;
	size_t sz;
	struct edma_rsv_info *rsv_info;
-	s8 (*queue_tc_map)[2], (*queue_priority_map)[2];
-
-	memset(pdata, 0, sizeof(struct edma_soc_info));
-
-	ret = of_property_read_u32(node, "dma-channels", &value);
-	if (ret < 0)
-		return ret;
-	pdata->n_channel = value;
-
-	ret = of_property_read_u32(node, "ti,edma-regions", &value);
-	if (ret < 0)
-		return ret;
-	pdata->n_region = value;
-
-	ret = of_property_read_u32(node, "ti,edma-slots", &value);
-	if (ret < 0)
-		return ret;
-	pdata->n_slot = value;
-
-	pdata->n_cc = 1;
 
	rsv_info = devm_kzalloc(dev, sizeof(struct edma_rsv_info), GFP_KERNEL);
	if (!rsv_info)
		return -ENOMEM;
	pdata->rsv = rsv_info;
-
-	queue_tc_map = devm_kzalloc(dev, 8*sizeof(s8), GFP_KERNEL);
-	if (!queue_tc_map)
-		return -ENOMEM;
-
-	for (i = 0; i < 3; i++) {
-		queue_tc_map[i][0] = i;
-		queue_tc_map[i][1] = i;
-	}
-	queue_tc_map[i][0] = -1;
-	queue_tc_map[i][1] = -1;
-
-	pdata->queue_tc_mapping = queue_tc_map;
-
-	queue_priority_map = devm_kzalloc(dev, 8*sizeof(s8), GFP_KERNEL);
-	if (!queue_priority_map)
-		return -ENOMEM;
-
-	for (i = 0; i < 3; i++) {
-		queue_priority_map[i][0] = i;
-		queue_priority_map[i][1] = i;
-	}
-	queue_priority_map[i][0] = -1;
-	queue_priority_map[i][1] = -1;
-
-	pdata->queue_priority_mapping = queue_priority_map;
-
-	pdata->default_queue = 0;
 
	prop = of_find_property(node, "ti,edma-xbar-event-map", &sz);
	if (prop)
···
		return ERR_PTR(ret);
 
	dma_cap_set(DMA_SLAVE, edma_filter_info.dma_cap);
+	dma_cap_set(DMA_CYCLIC, edma_filter_info.dma_cap);
	of_dma_controller_register(dev->of_node, of_dma_simple_xlate,
				   &edma_filter_info);
···
	struct edma_soc_info **info = pdev->dev.platform_data;
	struct edma_soc_info *ninfo[EDMA_MAX_CC] = {NULL};
	s8 (*queue_priority_mapping)[2];
-	s8 (*queue_tc_mapping)[2];
	int i, j, off, ln, found = 0;
	int status = -1;
	const s16 (*rsv_chans)[2];
···
	struct resource *r[EDMA_MAX_CC] = {NULL};
	struct resource res[EDMA_MAX_CC];
	char res_name[10];
-	char irq_name[10];
	struct device_node *node = pdev->dev.of_node;
	struct device *dev = &pdev->dev;
	int ret;
···
		if (!edma_cc[j])
			return -ENOMEM;
 
-		edma_cc[j]->num_channels = min_t(unsigned, info[j]->n_channel,
-						 EDMA_MAX_DMACH);
-		edma_cc[j]->num_slots = min_t(unsigned, info[j]->n_slot,
-					      EDMA_MAX_PARAMENTRY);
-		edma_cc[j]->num_cc = min_t(unsigned, info[j]->n_cc,
-					   EDMA_MAX_CC);
+		/* Get eDMA3 configuration from IP */
+		ret = edma_setup_from_hw(dev, info[j], edma_cc[j]);
+		if (ret)
+			return ret;
 
		edma_cc[j]->default_queue = info[j]->default_queue;
···
		if (node) {
			irq[j] = irq_of_parse_and_map(node, 0);
+			err_irq[j] = irq_of_parse_and_map(node, 2);
		} else {
+			char irq_name[10];
+
			sprintf(irq_name, "edma%d", j);
			irq[j] = platform_get_irq_byname(pdev, irq_name);
+
+			sprintf(irq_name, "edma%d_err", j);
+			err_irq[j] = platform_get_irq_byname(pdev, irq_name);
		}
		edma_cc[j]->irq_res_start = irq[j];
-		status = devm_request_irq(&pdev->dev, irq[j],
-					  dma_irq_handler, 0, "edma",
-					  &pdev->dev);
+		edma_cc[j]->irq_res_end = err_irq[j];
+
+		status = devm_request_irq(dev, irq[j], dma_irq_handler, 0,
+					  "edma", dev);
		if (status < 0) {
			dev_dbg(&pdev->dev,
				"devm_request_irq %d failed --> %d\n",
···
			return status;
		}
 
-		if (node) {
-			err_irq[j] = irq_of_parse_and_map(node, 2);
-		} else {
-			sprintf(irq_name, "edma%d_err", j);
-			err_irq[j] = platform_get_irq_byname(pdev, irq_name);
-		}
-		edma_cc[j]->irq_res_end = err_irq[j];
-		status = devm_request_irq(&pdev->dev, err_irq[j],
-					  dma_ccerr_handler, 0,
-					  "edma_error", &pdev->dev);
+		status = devm_request_irq(dev, err_irq[j], dma_ccerr_handler, 0,
+					  "edma_error", dev);
		if (status < 0) {
			dev_dbg(&pdev->dev,
				"devm_request_irq %d failed --> %d\n",
···
		for (i = 0; i < edma_cc[j]->num_channels; i++)
			map_dmach_queue(j, i, info[j]->default_queue);
 
-		queue_tc_mapping = info[j]->queue_tc_mapping;
		queue_priority_mapping = info[j]->queue_priority_mapping;
-
-		/* Event queue to TC mapping */
-		for (i = 0; queue_tc_mapping[i][0] != -1; i++)
-			map_queue_tc(j, queue_tc_mapping[i][0],
-				     queue_tc_mapping[i][1]);
 
		/* Event queue priority mapping */
		for (i = 0; queue_priority_mapping[i][0] != -1; i++)
···
		if (edma_read(j, EDMA_CCCFG) & CHMAP_EXIST)
			map_dmach_param(j);
 
-		for (i = 0; i < info[j]->n_region; i++) {
+		for (i = 0; i < edma_cc[j]->num_region; i++) {
			edma_write_array2(j, EDMA_DRAE, i, 0, 0x0);
			edma_write_array2(j, EDMA_DRAE, i, 1, 0x0);
			edma_write_array(j, EDMA_QRAE, i, 0x0);
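The bitfield arithmetic in the new `edma_setup_from_hw()` (decoding channel, slot, queue, and region counts from the CCCFG register) can be modeled in a few lines. This is a Python sketch of the decode only, mirroring the `GET_NUM_*` macros and `BIT()` shifts from the patch; the register value below is a made-up example, not a value read from hardware:

```python
# Model of the eDMA3 CCCFG decode added in edma_setup_from_hw().
# Field positions mirror the GET_NUM_* macros in the patch.

def decode_cccfg(cccfg: int) -> dict:
    num_dmach = cccfg & 0x7                # bits 0-2, encoded channel count
    num_paentry = (cccfg & 0x7000) >> 12   # bits 12-14, encoded PaRAM count
    num_evque = (cccfg & 0x70000) >> 16    # bits 16-18, event queues minus 1
    num_regn = (cccfg & 0x300000) >> 20    # bits 20-21, encoded region count
    return {
        "num_region": 1 << num_regn,           # BIT(value)
        "num_channels": 1 << (num_dmach + 1),  # BIT(value + 1)
        "num_slots": 1 << (num_paentry + 4),   # BIT(value + 4)
        "num_tc": num_evque + 1,               # value + 1
    }

# Hypothetical CCCFG encoding: 64 channels, 256 slots, 4 TCs, 4 regions
# (the configuration the old am33xx.dtsi properties spelled out by hand).
cccfg = 5 | (4 << 12) | (3 << 16) | (2 << 20)
print(decode_cccfg(cccfg))
```

This is why the patch can drop `dma-channels`, `ti,edma-regions`, and `ti,edma-slots` from the DTS files: the same numbers are recoverable from the IP itself.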
-31
arch/arm/mach-davinci/devices-da8xx.c
···
	}
 };
 
-static s8 da8xx_queue_tc_mapping[][2] = {
-	/* {event queue no, TC no} */
-	{0, 0},
-	{1, 1},
-	{-1, -1}
-};
-
 static s8 da8xx_queue_priority_mapping[][2] = {
	/* {event queue no, Priority} */
	{0, 3},
	{1, 7},
	{-1, -1}
 };
-
-static s8 da850_queue_tc_mapping[][2] = {
-	/* {event queue no, TC no} */
-	{0, 0},
-	{-1, -1}
-};
···
 };
 
 static struct edma_soc_info da830_edma_cc0_info = {
-	.n_channel		= 32,
-	.n_region		= 4,
-	.n_slot			= 128,
-	.n_tc			= 2,
-	.n_cc			= 1,
-	.queue_tc_mapping	= da8xx_queue_tc_mapping,
	.queue_priority_mapping	= da8xx_queue_priority_mapping,
	.default_queue		= EVENTQ_1,
 };
···
 static struct edma_soc_info da850_edma_cc_info[] = {
	{
-		.n_channel		= 32,
-		.n_region		= 4,
-		.n_slot			= 128,
-		.n_tc			= 2,
-		.n_cc			= 1,
-		.queue_tc_mapping	= da8xx_queue_tc_mapping,
		.queue_priority_mapping	= da8xx_queue_priority_mapping,
		.default_queue		= EVENTQ_1,
	},
	{
-		.n_channel		= 32,
-		.n_region		= 4,
-		.n_slot			= 128,
-		.n_tc			= 1,
-		.n_cc			= 1,
-		.queue_tc_mapping	= da850_queue_tc_mapping,
		.queue_priority_mapping	= da850_queue_priority_mapping,
		.default_queue		= EVENTQ_0,
	},
-14
arch/arm/mach-davinci/dm355.c
···
 /*----------------------------------------------------------------------*/
 
 static s8
-queue_tc_mapping[][2] = {
-	/* {event queue no, TC no} */
-	{0, 0},
-	{1, 1},
-	{-1, -1},
-};
-
-static s8
 queue_priority_mapping[][2] = {
	/* {event queue no, Priority} */
	{0, 3},
···
 };
 
 static struct edma_soc_info edma_cc0_info = {
-	.n_channel		= 64,
-	.n_region		= 4,
-	.n_slot			= 128,
-	.n_tc			= 2,
-	.n_cc			= 1,
-	.queue_tc_mapping	= queue_tc_mapping,
	.queue_priority_mapping	= queue_priority_mapping,
	.default_queue		= EVENTQ_1,
 };
-16
arch/arm/mach-davinci/dm365.c
···
 
 /* Four Transfer Controllers on DM365 */
 static s8
-dm365_queue_tc_mapping[][2] = {
-	/* {event queue no, TC no} */
-	{0, 0},
-	{1, 1},
-	{2, 2},
-	{3, 3},
-	{-1, -1},
-};
-
-static s8
 dm365_queue_priority_mapping[][2] = {
	/* {event queue no, Priority} */
	{0, 7},
···
 };
 
 static struct edma_soc_info edma_cc0_info = {
-	.n_channel		= 64,
-	.n_region		= 4,
-	.n_slot			= 256,
-	.n_tc			= 4,
-	.n_cc			= 1,
-	.queue_tc_mapping	= dm365_queue_tc_mapping,
	.queue_priority_mapping	= dm365_queue_priority_mapping,
	.default_queue		= EVENTQ_3,
 };
-14
arch/arm/mach-davinci/dm644x.c
···
 /*----------------------------------------------------------------------*/
 
 static s8
-queue_tc_mapping[][2] = {
-	/* {event queue no, TC no} */
-	{0, 0},
-	{1, 1},
-	{-1, -1},
-};
-
-static s8
 queue_priority_mapping[][2] = {
	/* {event queue no, Priority} */
	{0, 3},
···
 };
 
 static struct edma_soc_info edma_cc0_info = {
-	.n_channel		= 64,
-	.n_region		= 4,
-	.n_slot			= 128,
-	.n_tc			= 2,
-	.n_cc			= 1,
-	.queue_tc_mapping	= queue_tc_mapping,
	.queue_priority_mapping	= queue_priority_mapping,
	.default_queue		= EVENTQ_1,
 };
-16
arch/arm/mach-davinci/dm646x.c
···
 
 /* Four Transfer Controllers on DM646x */
 static s8
-dm646x_queue_tc_mapping[][2] = {
-	/* {event queue no, TC no} */
-	{0, 0},
-	{1, 1},
-	{2, 2},
-	{3, 3},
-	{-1, -1},
-};
-
-static s8
 dm646x_queue_priority_mapping[][2] = {
	/* {event queue no, Priority} */
	{0, 4},
···
 };
 
 static struct edma_soc_info edma_cc0_info = {
-	.n_channel		= 64,
-	.n_region		= 6,	/* 0-1, 4-7 */
-	.n_slot			= 512,
-	.n_tc			= 4,
-	.n_cc			= 1,
-	.queue_tc_mapping	= dm646x_queue_tc_mapping,
	.queue_priority_mapping	= dm646x_queue_priority_mapping,
	.default_queue		= EVENTQ_1,
 };
-1
arch/arm/mach-exynos/Makefile
···
 
 obj-$(CONFIG_PM_SLEEP)		+= pm.o sleep.o
 obj-$(CONFIG_PM_GENERIC_DOMAINS) += pm_domains.o
-obj-$(CONFIG_CPU_IDLE)		+= cpuidle.o
 
 obj-$(CONFIG_SMP)		+= platsmp.o headsmp.o
+2
arch/arm/mach-exynos/common.h
···
 
 struct map_desc;
 extern void __iomem *sysram_ns_base_addr;
+extern void __iomem *sysram_base_addr;
 void exynos_init_io(void);
 void exynos_restart(enum reboot_mode mode, const char *cmd);
 void exynos_cpuidle_init(void);
···
 extern void exynos_cluster_power_down(int cluster);
 extern void exynos_cluster_power_up(int cluster);
 extern int exynos_cluster_power_state(int cluster);
+extern void exynos_enter_aftr(void);
 
 extern void s5p_init_cpu(void __iomem *cpuid_addr);
 extern unsigned int samsung_rev(void);
-255
arch/arm/mach-exynos/cpuidle.c
···
-/* linux/arch/arm/mach-exynos4/cpuidle.c
- *
- * Copyright (c) 2011 Samsung Electronics Co., Ltd.
- *		http://www.samsung.com
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#include <linux/kernel.h>
-#include <linux/init.h>
-#include <linux/cpuidle.h>
-#include <linux/cpu_pm.h>
-#include <linux/io.h>
-#include <linux/export.h>
-#include <linux/module.h>
-#include <linux/time.h>
-#include <linux/platform_device.h>
-
-#include <asm/proc-fns.h>
-#include <asm/smp_scu.h>
-#include <asm/suspend.h>
-#include <asm/unified.h>
-#include <asm/cpuidle.h>
-
-#include <plat/pm.h>
-
-#include <mach/map.h>
-
-#include "common.h"
-#include "regs-pmu.h"
-
-#define REG_DIRECTGO_ADDR	(samsung_rev() == EXYNOS4210_REV_1_1 ? \
-			S5P_INFORM7 : (samsung_rev() == EXYNOS4210_REV_1_0 ? \
-			(S5P_VA_SYSRAM + 0x24) : S5P_INFORM0))
-#define REG_DIRECTGO_FLAG	(samsung_rev() == EXYNOS4210_REV_1_1 ? \
-			S5P_INFORM6 : (samsung_rev() == EXYNOS4210_REV_1_0 ? \
-			(S5P_VA_SYSRAM + 0x20) : S5P_INFORM1))
-
-#define S5P_CHECK_AFTR	0xFCBA0D10
-
-#define EXYNOS5_PWR_CTRL1	(S5P_VA_CMU + 0x01020)
-#define EXYNOS5_PWR_CTRL2	(S5P_VA_CMU + 0x01024)
-
-#define PWR_CTRL1_CORE2_DOWN_RATIO	(7 << 28)
-#define PWR_CTRL1_CORE1_DOWN_RATIO	(7 << 16)
-#define PWR_CTRL1_DIV2_DOWN_EN		(1 << 9)
-#define PWR_CTRL1_DIV1_DOWN_EN		(1 << 8)
-#define PWR_CTRL1_USE_CORE1_WFE		(1 << 5)
-#define PWR_CTRL1_USE_CORE0_WFE		(1 << 4)
-#define PWR_CTRL1_USE_CORE1_WFI		(1 << 1)
-#define PWR_CTRL1_USE_CORE0_WFI		(1 << 0)
-
-#define PWR_CTRL2_DIV2_UP_EN		(1 << 25)
-#define PWR_CTRL2_DIV1_UP_EN		(1 << 24)
-#define PWR_CTRL2_DUR_STANDBY2_VAL	(1 << 16)
-#define PWR_CTRL2_DUR_STANDBY1_VAL	(1 << 8)
-#define PWR_CTRL2_CORE2_UP_RATIO	(1 << 4)
-#define PWR_CTRL2_CORE1_UP_RATIO	(1 << 0)
-
-static int exynos4_enter_lowpower(struct cpuidle_device *dev,
-				struct cpuidle_driver *drv,
-				int index);
-
-static DEFINE_PER_CPU(struct cpuidle_device, exynos4_cpuidle_device);
-
-static struct cpuidle_driver exynos4_idle_driver = {
-	.name			= "exynos4_idle",
-	.owner			= THIS_MODULE,
-	.states = {
-		[0] = ARM_CPUIDLE_WFI_STATE,
-		[1] = {
-			.enter			= exynos4_enter_lowpower,
-			.exit_latency		= 300,
-			.target_residency	= 100000,
-			.flags			= CPUIDLE_FLAG_TIME_VALID,
-			.name			= "C1",
-			.desc			= "ARM power down",
-		},
-	},
-	.state_count = 2,
-	.safe_state_index = 0,
-};
-
-/* Ext-GIC nIRQ/nFIQ is the only wakeup source in AFTR */
-static void exynos4_set_wakeupmask(void)
-{
-	__raw_writel(0x0000ff3e, S5P_WAKEUP_MASK);
-}
-
-static unsigned int g_pwr_ctrl, g_diag_reg;
-
-static void save_cpu_arch_register(void)
-{
-	/*read power control register*/
-	asm("mrc p15, 0, %0, c15, c0, 0" : "=r"(g_pwr_ctrl) : : "cc");
-	/*read diagnostic register*/
-	asm("mrc p15, 0, %0, c15, c0, 1" : "=r"(g_diag_reg) : : "cc");
-	return;
-}
-
-static void restore_cpu_arch_register(void)
-{
-	/*write power control register*/
-	asm("mcr p15, 0, %0, c15, c0, 0" : : "r"(g_pwr_ctrl) : "cc");
-	/*write diagnostic register*/
-	asm("mcr p15, 0, %0, c15, c0, 1" : : "r"(g_diag_reg) : "cc");
-	return;
-}
-
-static int idle_finisher(unsigned long flags)
-{
-	cpu_do_idle();
-	return 1;
-}
-
-static int exynos4_enter_core0_aftr(struct cpuidle_device *dev,
-				struct cpuidle_driver *drv,
-				int index)
-{
-	unsigned long tmp;
-
-	exynos4_set_wakeupmask();
-
-	/* Set value of power down register for aftr mode */
-	exynos_sys_powerdown_conf(SYS_AFTR);
-
-	__raw_writel(virt_to_phys(exynos_cpu_resume), REG_DIRECTGO_ADDR);
-	__raw_writel(S5P_CHECK_AFTR, REG_DIRECTGO_FLAG);
-
-	save_cpu_arch_register();
-
-	/* Setting Central Sequence Register for power down mode */
-	tmp = __raw_readl(S5P_CENTRAL_SEQ_CONFIGURATION);
-	tmp &= ~S5P_CENTRAL_LOWPWR_CFG;
-	__raw_writel(tmp, S5P_CENTRAL_SEQ_CONFIGURATION);
-
-	cpu_pm_enter();
-	cpu_suspend(0, idle_finisher);
-
-#ifdef CONFIG_SMP
-	if (!soc_is_exynos5250())
-		scu_enable(S5P_VA_SCU);
-#endif
-	cpu_pm_exit();
-
-	restore_cpu_arch_register();
-
-	/*
-	 * If PMU failed while entering sleep mode, WFI will be
-	 * ignored by PMU and then exiting cpu_do_idle().
-	 * S5P_CENTRAL_LOWPWR_CFG bit will not be set automatically
-	 * in this situation.
-	 */
-	tmp = __raw_readl(S5P_CENTRAL_SEQ_CONFIGURATION);
-	if (!(tmp & S5P_CENTRAL_LOWPWR_CFG)) {
-		tmp |= S5P_CENTRAL_LOWPWR_CFG;
-		__raw_writel(tmp, S5P_CENTRAL_SEQ_CONFIGURATION);
-	}
-
-	/* Clear wakeup state register */
-	__raw_writel(0x0, S5P_WAKEUP_STAT);
-
-	return index;
-}
-
-static int exynos4_enter_lowpower(struct cpuidle_device *dev,
-				struct cpuidle_driver *drv,
-				int index)
-{
-	int new_index = index;
-
-	/* AFTR can only be entered when cores other than CPU0 are offline */
-	if (num_online_cpus() > 1 || dev->cpu != 0)
-		new_index = drv->safe_state_index;
-
-	if (new_index == 0)
-		return arm_cpuidle_simple_enter(dev, drv, new_index);
-	else
-		return exynos4_enter_core0_aftr(dev, drv, new_index);
-}
-
-static void __init exynos5_core_down_clk(void)
-{
-	unsigned int tmp;
-
-	/*
-	 * Enable arm clock down (in idle) and set arm divider
-	 * ratios in WFI/WFE state.
-	 */
-	tmp = PWR_CTRL1_CORE2_DOWN_RATIO | \
-	      PWR_CTRL1_CORE1_DOWN_RATIO | \
-	      PWR_CTRL1_DIV2_DOWN_EN	 | \
-	      PWR_CTRL1_DIV1_DOWN_EN	 | \
-	      PWR_CTRL1_USE_CORE1_WFE	 | \
-	      PWR_CTRL1_USE_CORE0_WFE	 | \
-	      PWR_CTRL1_USE_CORE1_WFI	 | \
-	      PWR_CTRL1_USE_CORE0_WFI;
-	__raw_writel(tmp, EXYNOS5_PWR_CTRL1);
-
-	/*
-	 * Enable arm clock up (on exiting idle). Set arm divider
-	 * ratios when not in idle along with the standby duration
-	 * ratios.
206 - */ 207 - tmp = PWR_CTRL2_DIV2_UP_EN | \ 208 - PWR_CTRL2_DIV1_UP_EN | \ 209 - PWR_CTRL2_DUR_STANDBY2_VAL | \ 210 - PWR_CTRL2_DUR_STANDBY1_VAL | \ 211 - PWR_CTRL2_CORE2_UP_RATIO | \ 212 - PWR_CTRL2_CORE1_UP_RATIO; 213 - __raw_writel(tmp, EXYNOS5_PWR_CTRL2); 214 - } 215 - 216 - static int exynos_cpuidle_probe(struct platform_device *pdev) 217 - { 218 - int cpu_id, ret; 219 - struct cpuidle_device *device; 220 - 221 - if (soc_is_exynos5250()) 222 - exynos5_core_down_clk(); 223 - 224 - if (soc_is_exynos5440()) 225 - exynos4_idle_driver.state_count = 1; 226 - 227 - ret = cpuidle_register_driver(&exynos4_idle_driver); 228 - if (ret) { 229 - dev_err(&pdev->dev, "failed to register cpuidle driver\n"); 230 - return ret; 231 - } 232 - 233 - for_each_online_cpu(cpu_id) { 234 - device = &per_cpu(exynos4_cpuidle_device, cpu_id); 235 - device->cpu = cpu_id; 236 - 237 - ret = cpuidle_register_device(device); 238 - if (ret) { 239 - dev_err(&pdev->dev, "failed to register cpuidle device\n"); 240 - return ret; 241 - } 242 - } 243 - 244 - return 0; 245 - } 246 - 247 - static struct platform_driver exynos_cpuidle_driver = { 248 - .probe = exynos_cpuidle_probe, 249 - .driver = { 250 - .name = "exynos_cpuidle", 251 - .owner = THIS_MODULE, 252 - }, 253 - }; 254 - 255 - module_platform_driver(exynos_cpuidle_driver);
+6 -2
arch/arm/mach-exynos/exynos.c
··· 169 169 } 170 170 171 171 static struct platform_device exynos_cpuidle = { 172 - .name = "exynos_cpuidle", 173 - .id = -1, 172 + .name = "exynos_cpuidle", 173 + .dev.platform_data = exynos_enter_aftr, 174 + .id = -1, 174 175 }; 175 176 176 177 void __init exynos_cpuidle_init(void) 177 178 { 179 + if (soc_is_exynos5440()) 180 + return; 181 + 178 182 platform_device_register(&exynos_cpuidle); 179 183 } 180 184
+1 -1
arch/arm/mach-exynos/platsmp.c
··· 32 32 33 33 extern void exynos4_secondary_startup(void); 34 34 35 - static void __iomem *sysram_base_addr; 35 + void __iomem *sysram_base_addr; 36 36 void __iomem *sysram_ns_base_addr; 37 37 38 38 static void __init exynos_smp_prepare_sysram(void)
+122 -28
arch/arm/mach-exynos/pm.c
··· 16 16 #include <linux/init.h> 17 17 #include <linux/suspend.h> 18 18 #include <linux/syscore_ops.h> 19 + #include <linux/cpu_pm.h> 19 20 #include <linux/io.h> 20 21 #include <linux/irqchip/arm-gic.h> 21 22 #include <linux/err.h> ··· 166 165 S5P_CORE_LOCAL_PWR_EN); 167 166 } 168 167 168 + #define EXYNOS_BOOT_VECTOR_ADDR (samsung_rev() == EXYNOS4210_REV_1_1 ? \ 169 + S5P_INFORM7 : (samsung_rev() == EXYNOS4210_REV_1_0 ? \ 170 + (sysram_base_addr + 0x24) : S5P_INFORM0)) 171 + #define EXYNOS_BOOT_VECTOR_FLAG (samsung_rev() == EXYNOS4210_REV_1_1 ? \ 172 + S5P_INFORM6 : (samsung_rev() == EXYNOS4210_REV_1_0 ? \ 173 + (sysram_base_addr + 0x20) : S5P_INFORM1)) 174 + 175 + #define S5P_CHECK_AFTR 0xFCBA0D10 176 + #define S5P_CHECK_SLEEP 0x00000BAD 177 + 178 + /* Ext-GIC nIRQ/nFIQ is the only wakeup source in AFTR */ 179 + static void exynos_set_wakeupmask(long mask) 180 + { 181 + __raw_writel(mask, S5P_WAKEUP_MASK); 182 + } 183 + 184 + static void exynos_cpu_set_boot_vector(long flags) 185 + { 186 + __raw_writel(virt_to_phys(exynos_cpu_resume), EXYNOS_BOOT_VECTOR_ADDR); 187 + __raw_writel(flags, EXYNOS_BOOT_VECTOR_FLAG); 188 + } 189 + 190 + void exynos_enter_aftr(void) 191 + { 192 + exynos_set_wakeupmask(0x0000ff3e); 193 + exynos_cpu_set_boot_vector(S5P_CHECK_AFTR); 194 + /* Set value of power down register for aftr mode */ 195 + exynos_sys_powerdown_conf(SYS_AFTR); 196 + } 197 + 169 198 /* For Cortex-A9 Diagnostic and Power control register */ 170 199 static unsigned int save_arm_register[2]; 200 + 201 + static void exynos_cpu_save_register(void) 202 + { 203 + unsigned long tmp; 204 + 205 + /* Save Power control register */ 206 + asm ("mrc p15, 0, %0, c15, c0, 0" 207 + : "=r" (tmp) : : "cc"); 208 + 209 + save_arm_register[0] = tmp; 210 + 211 + /* Save Diagnostic register */ 212 + asm ("mrc p15, 0, %0, c15, c0, 1" 213 + : "=r" (tmp) : : "cc"); 214 + 215 + save_arm_register[1] = tmp; 216 + } 217 + 218 + static void exynos_cpu_restore_register(void) 219 + { 220 + unsigned 
long tmp; 221 + 222 + /* Restore Power control register */ 223 + tmp = save_arm_register[0]; 224 + 225 + asm volatile ("mcr p15, 0, %0, c15, c0, 0" 226 + : : "r" (tmp) 227 + : "cc"); 228 + 229 + /* Restore Diagnostic register */ 230 + tmp = save_arm_register[1]; 231 + 232 + asm volatile ("mcr p15, 0, %0, c15, c0, 1" 233 + : : "r" (tmp) 234 + : "cc"); 235 + } 171 236 172 237 static int exynos_cpu_suspend(unsigned long arg) 173 238 { ··· 279 212 __raw_writel(virt_to_phys(exynos_cpu_resume), S5P_INFORM0); 280 213 } 281 214 282 - static int exynos_pm_suspend(void) 215 + static void exynos_pm_central_suspend(void) 283 216 { 284 217 unsigned long tmp; 285 218 286 219 /* Setting Central Sequence Register for power down mode */ 287 - 288 220 tmp = __raw_readl(S5P_CENTRAL_SEQ_CONFIGURATION); 289 221 tmp &= ~S5P_CENTRAL_LOWPWR_CFG; 290 222 __raw_writel(tmp, S5P_CENTRAL_SEQ_CONFIGURATION); 223 + } 224 + 225 + static int exynos_pm_suspend(void) 226 + { 227 + unsigned long tmp; 228 + 229 + exynos_pm_central_suspend(); 291 230 292 231 /* Setting SEQ_OPTION register */ 293 232 294 233 tmp = (S5P_USE_STANDBY_WFI0 | S5P_USE_STANDBY_WFE0); 295 234 __raw_writel(tmp, S5P_CENTRAL_SEQ_OPTION); 296 235 297 - if (!soc_is_exynos5250()) { 298 - /* Save Power control register */ 299 - asm ("mrc p15, 0, %0, c15, c0, 0" 300 - : "=r" (tmp) : : "cc"); 301 - save_arm_register[0] = tmp; 302 - 303 - /* Save Diagnostic register */ 304 - asm ("mrc p15, 0, %0, c15, c0, 1" 305 - : "=r" (tmp) : : "cc"); 306 - save_arm_register[1] = tmp; 307 - } 236 + if (!soc_is_exynos5250()) 237 + exynos_cpu_save_register(); 308 238 309 239 return 0; 310 240 } 311 241 312 - static void exynos_pm_resume(void) 242 + static int exynos_pm_central_resume(void) 313 243 { 314 244 unsigned long tmp; 315 245 ··· 323 259 /* clear the wakeup state register */ 324 260 __raw_writel(0x0, S5P_WAKEUP_STAT); 325 261 /* No need to perform below restore code */ 326 - goto early_wakeup; 262 + return -1; 327 263 } 328 - if 
(!soc_is_exynos5250()) { 329 - /* Restore Power control register */ 330 - tmp = save_arm_register[0]; 331 - asm volatile ("mcr p15, 0, %0, c15, c0, 0" 332 - : : "r" (tmp) 333 - : "cc"); 334 264 335 - /* Restore Diagnostic register */ 336 - tmp = save_arm_register[1]; 337 - asm volatile ("mcr p15, 0, %0, c15, c0, 1" 338 - : : "r" (tmp) 339 - : "cc"); 340 - } 265 + return 0; 266 + } 267 + 268 + static void exynos_pm_resume(void) 269 + { 270 + if (exynos_pm_central_resume()) 271 + goto early_wakeup; 272 + 273 + if (!soc_is_exynos5250()) 274 + exynos_cpu_restore_register(); 341 275 342 276 /* For release retention */ 343 277 ··· 353 291 354 292 s3c_pm_do_restore_core(exynos_core_save, ARRAY_SIZE(exynos_core_save)); 355 293 356 - if (IS_ENABLED(CONFIG_SMP) && !soc_is_exynos5250()) 294 + if (!soc_is_exynos5250()) 357 295 scu_enable(S5P_VA_SCU); 358 296 359 297 early_wakeup: ··· 431 369 .valid = suspend_valid_only_mem, 432 370 }; 433 371 372 + static int exynos_cpu_pm_notifier(struct notifier_block *self, 373 + unsigned long cmd, void *v) 374 + { 375 + int cpu = smp_processor_id(); 376 + 377 + switch (cmd) { 378 + case CPU_PM_ENTER: 379 + if (cpu == 0) { 380 + exynos_pm_central_suspend(); 381 + exynos_cpu_save_register(); 382 + } 383 + break; 384 + 385 + case CPU_PM_EXIT: 386 + if (cpu == 0) { 387 + if (!soc_is_exynos5250()) 388 + scu_enable(S5P_VA_SCU); 389 + exynos_cpu_restore_register(); 390 + exynos_pm_central_resume(); 391 + } 392 + break; 393 + } 394 + 395 + return NOTIFY_OK; 396 + } 397 + 398 + static struct notifier_block exynos_cpu_pm_notifier_block = { 399 + .notifier_call = exynos_cpu_pm_notifier, 400 + }; 401 + 434 402 void __init exynos_pm_init(void) 435 403 { 436 404 u32 tmp; 405 + 406 + cpu_pm_register_notifier(&exynos_cpu_pm_notifier_block); 437 407 438 408 /* Platform-specific GIC callback */ 439 409 gic_arch_extn.irq_set_wake = exynos_irq_set_wake;
-2
arch/arm/mach-exynos/regs-pmu.h
··· 129 129 #define S5P_CORE_LOCAL_PWR_EN 0x3 130 130 #define S5P_INT_LOCAL_PWR_EN 0x7 131 131 132 - #define S5P_CHECK_SLEEP 0x00000BAD 133 - 134 132 /* Only for EXYNOS4210 */ 135 133 #define S5P_CMU_CLKSTOP_LCD1_LOWPWR S5P_PMUREG(0x1154) 136 134 #define S5P_CMU_RESET_LCD1_LOWPWR S5P_PMUREG(0x1174)
+2
drivers/Kconfig
··· 132 132 133 133 source "drivers/platform/Kconfig" 134 134 135 + source "drivers/soc/Kconfig" 136 + 135 137 source "drivers/clk/Kconfig" 136 138 137 139 source "drivers/hwspinlock/Kconfig"
+3
drivers/Makefile
··· 33 33 # really early. 34 34 obj-$(CONFIG_DMADEVICES) += dma/ 35 35 36 + # SOC specific infrastructure drivers. 37 + obj-y += soc/ 38 + 36 39 obj-$(CONFIG_VIRTIO) += virtio/ 37 40 obj-$(CONFIG_XEN) += xen/ 38 41
+8
drivers/bus/Kconfig
··· 4 4 5 5 menu "Bus devices" 6 6 7 + config BRCMSTB_GISB_ARB 8 + bool "Broadcom STB GISB bus arbiter" 9 + depends on ARM 10 + help 11 + Driver for the Broadcom Set Top Box System-on-a-chip internal bus 12 + arbiter. This driver provides timeout and target abort error handling 13 + and internal bus master decoding. 14 + 7 15 config IMX_WEIM 8 16 bool "Freescale EIM DRIVER" 9 17 depends on ARCH_MXC
+1
drivers/bus/Makefile
··· 2 2 # Makefile for the bus drivers. 3 3 # 4 4 5 + obj-$(CONFIG_BRCMSTB_GISB_ARB) += brcmstb_gisb.o 5 6 obj-$(CONFIG_IMX_WEIM) += imx-weim.o 6 7 obj-$(CONFIG_MVEBU_MBUS) += mvebu-mbus.o 7 8 obj-$(CONFIG_OMAP_OCP2SCP) += omap-ocp2scp.o
+289
drivers/bus/brcmstb_gisb.c
··· 1 + /* 2 + * Copyright (C) 2014 Broadcom Corporation 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2 as 6 + * published by the Free Software Foundation. 7 + * 8 + * This program is distributed in the hope that it will be useful, 9 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 10 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 + * GNU General Public License for more details. 12 + */ 13 + 14 + #include <linux/init.h> 15 + #include <linux/types.h> 16 + #include <linux/module.h> 17 + #include <linux/platform_device.h> 18 + #include <linux/interrupt.h> 19 + #include <linux/sysfs.h> 20 + #include <linux/io.h> 21 + #include <linux/string.h> 22 + #include <linux/device.h> 23 + #include <linux/list.h> 24 + #include <linux/of.h> 25 + #include <linux/bitops.h> 26 + 27 + #include <asm/bug.h> 28 + #include <asm/signal.h> 29 + 30 + #define ARB_TIMER 0x008 31 + #define ARB_ERR_CAP_CLR 0x7e4 32 + #define ARB_ERR_CAP_CLEAR (1 << 0) 33 + #define ARB_ERR_CAP_HI_ADDR 0x7e8 34 + #define ARB_ERR_CAP_ADDR 0x7ec 35 + #define ARB_ERR_CAP_DATA 0x7f0 36 + #define ARB_ERR_CAP_STATUS 0x7f4 37 + #define ARB_ERR_CAP_STATUS_TIMEOUT (1 << 12) 38 + #define ARB_ERR_CAP_STATUS_TEA (1 << 11) 39 + #define ARB_ERR_CAP_STATUS_BS_SHIFT (1 << 2) 40 + #define ARB_ERR_CAP_STATUS_BS_MASK 0x3c 41 + #define ARB_ERR_CAP_STATUS_WRITE (1 << 1) 42 + #define ARB_ERR_CAP_STATUS_VALID (1 << 0) 43 + #define ARB_ERR_CAP_MASTER 0x7f8 44 + 45 + struct brcmstb_gisb_arb_device { 46 + void __iomem *base; 47 + struct mutex lock; 48 + struct list_head next; 49 + u32 valid_mask; 50 + const char *master_names[sizeof(u32) * BITS_PER_BYTE]; 51 + }; 52 + 53 + static LIST_HEAD(brcmstb_gisb_arb_device_list); 54 + 55 + static ssize_t gisb_arb_get_timeout(struct device *dev, 56 + struct device_attribute *attr, 57 + char *buf) 58 + { 59 + struct platform_device *pdev = 
to_platform_device(dev); 60 + struct brcmstb_gisb_arb_device *gdev = platform_get_drvdata(pdev); 61 + u32 timeout; 62 + 63 + mutex_lock(&gdev->lock); 64 + timeout = ioread32(gdev->base + ARB_TIMER); 65 + mutex_unlock(&gdev->lock); 66 + 67 + return sprintf(buf, "%d", timeout); 68 + } 69 + 70 + static ssize_t gisb_arb_set_timeout(struct device *dev, 71 + struct device_attribute *attr, 72 + const char *buf, size_t count) 73 + { 74 + struct platform_device *pdev = to_platform_device(dev); 75 + struct brcmstb_gisb_arb_device *gdev = platform_get_drvdata(pdev); 76 + int val, ret; 77 + 78 + ret = kstrtoint(buf, 10, &val); 79 + if (ret < 0) 80 + return ret; 81 + 82 + if (val == 0 || val >= 0xffffffff) 83 + return -EINVAL; 84 + 85 + mutex_lock(&gdev->lock); 86 + iowrite32(val, gdev->base + ARB_TIMER); 87 + mutex_unlock(&gdev->lock); 88 + 89 + return count; 90 + } 91 + 92 + static const char * 93 + brcmstb_gisb_master_to_str(struct brcmstb_gisb_arb_device *gdev, 94 + u32 masters) 95 + { 96 + u32 mask = gdev->valid_mask & masters; 97 + 98 + if (hweight_long(mask) != 1) 99 + return NULL; 100 + 101 + return gdev->master_names[ffs(mask) - 1]; 102 + } 103 + 104 + static int brcmstb_gisb_arb_decode_addr(struct brcmstb_gisb_arb_device *gdev, 105 + const char *reason) 106 + { 107 + u32 cap_status; 108 + unsigned long arb_addr; 109 + u32 master; 110 + const char *m_name; 111 + char m_fmt[11]; 112 + 113 + cap_status = ioread32(gdev->base + ARB_ERR_CAP_STATUS); 114 + 115 + /* Invalid captured address, bail out */ 116 + if (!(cap_status & ARB_ERR_CAP_STATUS_VALID)) 117 + return 1; 118 + 119 + /* Read the address and master */ 120 + arb_addr = ioread32(gdev->base + ARB_ERR_CAP_ADDR) & 0xffffffff; 121 + #if (IS_ENABLED(CONFIG_PHYS_ADDR_T_64BIT)) 122 + arb_addr |= (u64)ioread32(gdev->base + ARB_ERR_CAP_HI_ADDR) << 32; 123 + #endif 124 + master = ioread32(gdev->base + ARB_ERR_CAP_MASTER); 125 + 126 + m_name = brcmstb_gisb_master_to_str(gdev, master); 127 + if (!m_name) { 128 + 
snprintf(m_fmt, sizeof(m_fmt), "0x%08x", master); 129 + m_name = m_fmt; 130 + } 131 + 132 + pr_crit("%s: %s at 0x%lx [%c %s], core: %s\n", 133 + __func__, reason, arb_addr, 134 + cap_status & ARB_ERR_CAP_STATUS_WRITE ? 'W' : 'R', 135 + cap_status & ARB_ERR_CAP_STATUS_TIMEOUT ? "timeout" : "", 136 + m_name); 137 + 138 + /* clear the GISB error */ 139 + iowrite32(ARB_ERR_CAP_CLEAR, gdev->base + ARB_ERR_CAP_CLR); 140 + 141 + return 0; 142 + } 143 + 144 + static int brcmstb_bus_error_handler(unsigned long addr, unsigned int fsr, 145 + struct pt_regs *regs) 146 + { 147 + int ret = 0; 148 + struct brcmstb_gisb_arb_device *gdev; 149 + 150 + /* iterate over each GISB arb registered handlers */ 151 + list_for_each_entry(gdev, &brcmstb_gisb_arb_device_list, next) 152 + ret |= brcmstb_gisb_arb_decode_addr(gdev, "bus error"); 153 + /* 154 + * If it was an imprecise abort, then we need to correct the 155 + * return address to be _after_ the instruction. 156 + */ 157 + if (fsr & (1 << 10)) 158 + regs->ARM_pc += 4; 159 + 160 + return ret; 161 + } 162 + 163 + void __init brcmstb_hook_fault_code(void) 164 + { 165 + hook_fault_code(22, brcmstb_bus_error_handler, SIGBUS, 0, 166 + "imprecise external abort"); 167 + } 168 + 169 + static irqreturn_t brcmstb_gisb_timeout_handler(int irq, void *dev_id) 170 + { 171 + brcmstb_gisb_arb_decode_addr(dev_id, "timeout"); 172 + 173 + return IRQ_HANDLED; 174 + } 175 + 176 + static irqreturn_t brcmstb_gisb_tea_handler(int irq, void *dev_id) 177 + { 178 + brcmstb_gisb_arb_decode_addr(dev_id, "target abort"); 179 + 180 + return IRQ_HANDLED; 181 + } 182 + 183 + static DEVICE_ATTR(gisb_arb_timeout, S_IWUSR | S_IRUGO, 184 + gisb_arb_get_timeout, gisb_arb_set_timeout); 185 + 186 + static struct attribute *gisb_arb_sysfs_attrs[] = { 187 + &dev_attr_gisb_arb_timeout.attr, 188 + NULL, 189 + }; 190 + 191 + static struct attribute_group gisb_arb_sysfs_attr_group = { 192 + .attrs = gisb_arb_sysfs_attrs, 193 + }; 194 + 195 + static int 
brcmstb_gisb_arb_probe(struct platform_device *pdev) 196 + { 197 + struct device_node *dn = pdev->dev.of_node; 198 + struct brcmstb_gisb_arb_device *gdev; 199 + struct resource *r; 200 + int err, timeout_irq, tea_irq; 201 + unsigned int num_masters, j = 0; 202 + int i, first, last; 203 + 204 + r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 205 + timeout_irq = platform_get_irq(pdev, 0); 206 + tea_irq = platform_get_irq(pdev, 1); 207 + 208 + gdev = devm_kzalloc(&pdev->dev, sizeof(*gdev), GFP_KERNEL); 209 + if (!gdev) 210 + return -ENOMEM; 211 + 212 + mutex_init(&gdev->lock); 213 + INIT_LIST_HEAD(&gdev->next); 214 + 215 + gdev->base = devm_request_and_ioremap(&pdev->dev, r); 216 + if (!gdev->base) 217 + return -ENOMEM; 218 + 219 + err = devm_request_irq(&pdev->dev, timeout_irq, 220 + brcmstb_gisb_timeout_handler, 0, pdev->name, 221 + gdev); 222 + if (err < 0) 223 + return err; 224 + 225 + err = devm_request_irq(&pdev->dev, tea_irq, 226 + brcmstb_gisb_tea_handler, 0, pdev->name, 227 + gdev); 228 + if (err < 0) 229 + return err; 230 + 231 + /* If we do not have a valid mask, assume all masters are enabled */ 232 + if (of_property_read_u32(dn, "brcm,gisb-arb-master-mask", 233 + &gdev->valid_mask)) 234 + gdev->valid_mask = 0xffffffff; 235 + 236 + /* Proceed with reading the literal names if we agree on the 237 + * number of masters 238 + */ 239 + num_masters = of_property_count_strings(dn, 240 + "brcm,gisb-arb-master-names"); 241 + if (hweight_long(gdev->valid_mask) == num_masters) { 242 + first = ffs(gdev->valid_mask) - 1; 243 + last = fls(gdev->valid_mask) - 1; 244 + 245 + for (i = first; i < last; i++) { 246 + if (!(gdev->valid_mask & BIT(i))) 247 + continue; 248 + 249 + of_property_read_string_index(dn, 250 + "brcm,gisb-arb-master-names", j, 251 + &gdev->master_names[i]); 252 + j++; 253 + } 254 + } 255 + 256 + err = sysfs_create_group(&pdev->dev.kobj, &gisb_arb_sysfs_attr_group); 257 + if (err) 258 + return err; 259 + 260 + platform_set_drvdata(pdev, gdev); 261
+ 262 + list_add_tail(&gdev->next, &brcmstb_gisb_arb_device_list); 263 + 264 + dev_info(&pdev->dev, "registered mem: %p, irqs: %d, %d\n", 265 + gdev->base, timeout_irq, tea_irq); 266 + 267 + return 0; 268 + } 269 + 270 + static const struct of_device_id brcmstb_gisb_arb_of_match[] = { 271 + { .compatible = "brcm,gisb-arb" }, 272 + { }, 273 + }; 274 + 275 + static struct platform_driver brcmstb_gisb_arb_driver = { 276 + .probe = brcmstb_gisb_arb_probe, 277 + .driver = { 278 + .name = "brcm-gisb-arb", 279 + .owner = THIS_MODULE, 280 + .of_match_table = brcmstb_gisb_arb_of_match, 281 + }, 282 + }; 283 + 284 + static int __init brcm_gisb_driver_init(void) 285 + { 286 + return platform_driver_register(&brcmstb_gisb_arb_driver); 287 + } 288 + 289 + module_init(brcm_gisb_driver_init);
+227 -175
drivers/bus/omap_l3_noc.c
··· 1 1 /* 2 - * OMAP4XXX L3 Interconnect error handling driver 2 + * OMAP L3 Interconnect error handling driver 3 3 * 4 - * Copyright (C) 2011 Texas Corporation 4 + * Copyright (C) 2011-2014 Texas Instruments Incorporated - http://www.ti.com/ 5 5 * Santosh Shilimkar <santosh.shilimkar@ti.com> 6 6 * Sricharan <r.sricharan@ti.com> 7 7 * 8 8 * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License as published by 10 - * the Free Software Foundation; either version 2 of the License, or 11 - * (at your option) any later version. 9 + * it under the terms of the GNU General Public License version 2 as 10 + * published by the Free Software Foundation. 12 11 * 13 - * This program is distributed in the hope that it will be useful, 14 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 15 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * This program is distributed "as is" WITHOUT ANY WARRANTY of any 13 + * kind, whether express or implied; without even the implied warranty 14 + * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 15 * GNU General Public License for more details. 17 - * 18 - * You should have received a copy of the GNU General Public License 19 - * along with this program; if not, write to the Free Software 20 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 21 - * USA 22 16 */ 23 - #include <linux/module.h> 24 17 #include <linux/init.h> 25 - #include <linux/io.h> 26 - #include <linux/platform_device.h> 27 18 #include <linux/interrupt.h> 19 + #include <linux/io.h> 28 20 #include <linux/kernel.h> 21 + #include <linux/module.h> 22 + #include <linux/of_device.h> 23 + #include <linux/of.h> 24 + #include <linux/platform_device.h> 29 25 #include <linux/slab.h> 30 26 31 27 #include "omap_l3_noc.h" 32 28 33 - /* 34 - * Interrupt Handler for L3 error detection. 
35 - * 1) Identify the L3 clockdomain partition to which the error belongs to. 36 - * 2) Identify the slave where the error information is logged 37 - * 3) Print the logged information. 38 - * 4) Add dump stack to provide kernel trace. 29 + /** 30 + * l3_handle_target() - Handle Target specific parse and reporting 31 + * @l3: pointer to l3 struct 32 + * @base: base address of clkdm 33 + * @flag_mux: flagmux corresponding to the event 34 + * @err_src: error source index of the slave (target) 39 35 * 40 - * Two Types of errors : 36 + * This does the second part of the error interrupt handling: 37 + * 3) Parse in the slave information 38 + * 4) Print the logged information. 39 + * 5) Add dump stack to provide kernel trace. 40 + * 6) Clear the source if known. 41 + * 42 + * This handles two types of errors: 41 43 * 1) Custom errors in L3 : 42 44 * Target like DMM/FW/EMIF generates SRESP=ERR error 43 45 * 2) Standard L3 error: ··· 55 53 * can be trapped as well. But the trapping is implemented as part 56 54 * secure software and hence need not be implemented here. 
57 55 */ 56 + static int l3_handle_target(struct omap_l3 *l3, void __iomem *base, 57 + struct l3_flagmux_data *flag_mux, int err_src) 58 + { 59 + int k; 60 + u32 std_err_main, clear, masterid; 61 + u8 op_code, m_req_info; 62 + void __iomem *l3_targ_base; 63 + void __iomem *l3_targ_stderr, *l3_targ_slvofslsb, *l3_targ_mstaddr; 64 + void __iomem *l3_targ_hdr, *l3_targ_info; 65 + struct l3_target_data *l3_targ_inst; 66 + struct l3_masters_data *master; 67 + char *target_name, *master_name = "UN IDENTIFIED"; 68 + char *err_description; 69 + char err_string[30] = { 0 }; 70 + char info_string[60] = { 0 }; 71 + 72 + /* We do NOT expect err_src to go out of bounds */ 73 + BUG_ON(err_src > MAX_CLKDM_TARGETS); 74 + 75 + if (err_src < flag_mux->num_targ_data) { 76 + l3_targ_inst = &flag_mux->l3_targ[err_src]; 77 + target_name = l3_targ_inst->name; 78 + l3_targ_base = base + l3_targ_inst->offset; 79 + } else { 80 + target_name = L3_TARGET_NOT_SUPPORTED; 81 + } 82 + 83 + if (target_name == L3_TARGET_NOT_SUPPORTED) 84 + return -ENODEV; 85 + 86 + /* Read the stderrlog_main_source from clk domain */ 87 + l3_targ_stderr = l3_targ_base + L3_TARG_STDERRLOG_MAIN; 88 + l3_targ_slvofslsb = l3_targ_base + L3_TARG_STDERRLOG_SLVOFSLSB; 89 + 90 + std_err_main = readl_relaxed(l3_targ_stderr); 91 + 92 + switch (std_err_main & CUSTOM_ERROR) { 93 + case STANDARD_ERROR: 94 + err_description = "Standard"; 95 + snprintf(err_string, sizeof(err_string), 96 + ": At Address: 0x%08X ", 97 + readl_relaxed(l3_targ_slvofslsb)); 98 + 99 + l3_targ_mstaddr = l3_targ_base + L3_TARG_STDERRLOG_MSTADDR; 100 + l3_targ_hdr = l3_targ_base + L3_TARG_STDERRLOG_HDR; 101 + l3_targ_info = l3_targ_base + L3_TARG_STDERRLOG_INFO; 102 + break; 103 + 104 + case CUSTOM_ERROR: 105 + err_description = "Custom"; 106 + 107 + l3_targ_mstaddr = l3_targ_base + 108 + L3_TARG_STDERRLOG_CINFO_MSTADDR; 109 + l3_targ_hdr = l3_targ_base + L3_TARG_STDERRLOG_CINFO_OPCODE; 110 + l3_targ_info = l3_targ_base + L3_TARG_STDERRLOG_CINFO_INFO; 111
+ break; 112 + 113 + default: 114 + /* Nothing to be handled here as of now */ 115 + return 0; 116 + } 117 + 118 + /* STDERRLOG_MSTADDR Stores the NTTP master address. */ 119 + masterid = (readl_relaxed(l3_targ_mstaddr) & 120 + l3->mst_addr_mask) >> __ffs(l3->mst_addr_mask); 121 + 122 + for (k = 0, master = l3->l3_masters; k < l3->num_masters; 123 + k++, master++) { 124 + if (masterid == master->id) { 125 + master_name = master->name; 126 + break; 127 + } 128 + } 129 + 130 + op_code = readl_relaxed(l3_targ_hdr) & 0x7; 131 + 132 + m_req_info = readl_relaxed(l3_targ_info) & 0xF; 133 + snprintf(info_string, sizeof(info_string), 134 + ": %s in %s mode during %s access", 135 + (m_req_info & BIT(0)) ? "Opcode Fetch" : "Data Access", 136 + (m_req_info & BIT(1)) ? "Supervisor" : "User", 137 + (m_req_info & BIT(3)) ? "Debug" : "Functional"); 138 + 139 + WARN(true, 140 + "%s:L3 %s Error: MASTER %s TARGET %s (%s)%s%s\n", 141 + dev_name(l3->dev), 142 + err_description, 143 + master_name, target_name, 144 + l3_transaction_type[op_code], 145 + err_string, info_string); 146 + 147 + /* clear the std error log*/ 148 + clear = std_err_main | CLEAR_STDERR_LOG; 149 + writel_relaxed(clear, l3_targ_stderr); 150 + 151 + return 0; 152 + } 153 + 154 + /** 155 + * l3_interrupt_handler() - interrupt handler for l3 events 156 + * @irq: irq number 157 + * @_l3: pointer to l3 structure 158 + * 159 + * Interrupt Handler for L3 error detection. 160 + * 1) Identify the L3 clockdomain partition to which the error belongs to. 161 + * 2) Identify the slave where the error information is logged 162 + * ... handle the slave event.. 163 + * 7) if the slave is unknown, mask out the slave. 
164 + */ 58 165 static irqreturn_t l3_interrupt_handler(int irq, void *_l3) 59 166 { 60 - 61 - struct omap4_l3 *l3 = _l3; 62 - int inttype, i, k; 167 + struct omap_l3 *l3 = _l3; 168 + int inttype, i, ret; 63 169 int err_src = 0; 64 - u32 std_err_main, err_reg, clear, masterid; 65 - void __iomem *base, *l3_targ_base; 66 - char *target_name, *master_name = "UN IDENTIFIED"; 170 + u32 err_reg, mask_val; 171 + void __iomem *base, *mask_reg; 172 + struct l3_flagmux_data *flag_mux; 67 173 68 174 /* Get the Type of interrupt */ 69 175 inttype = irq == l3->app_irq ? L3_APPLICATION_ERROR : L3_DEBUG_ERROR; 70 176 71 - for (i = 0; i < L3_MODULES; i++) { 177 + for (i = 0; i < l3->num_modules; i++) { 72 178 /* 73 179 * Read the regerr register of the clock domain 74 180 * to determine the source 75 181 */ 76 182 base = l3->l3_base[i]; 77 - err_reg = __raw_readl(base + l3_flagmux[i] + 78 - + L3_FLAGMUX_REGERR0 + (inttype << 3)); 183 + flag_mux = l3->l3_flagmux[i]; 184 + err_reg = readl_relaxed(base + flag_mux->offset + 185 + L3_FLAGMUX_REGERR0 + (inttype << 3)); 186 + 187 + err_reg &= ~(inttype ? 
flag_mux->mask_app_bits : 188 + flag_mux->mask_dbg_bits); 79 189 80 190 /* Get the corresponding error and analyse */ 81 191 if (err_reg) { 82 192 /* Identify the source from control status register */ 83 193 err_src = __ffs(err_reg); 84 194 85 - /* Read the stderrlog_main_source from clk domain */ 86 - l3_targ_base = base + *(l3_targ[i] + err_src); 87 - std_err_main = __raw_readl(l3_targ_base + 88 - L3_TARG_STDERRLOG_MAIN); 89 - masterid = __raw_readl(l3_targ_base + 90 - L3_TARG_STDERRLOG_MSTADDR); 195 + ret = l3_handle_target(l3, base, flag_mux, err_src); 91 196 92 - switch (std_err_main & CUSTOM_ERROR) { 93 - case STANDARD_ERROR: 94 - target_name = 95 - l3_targ_inst_name[i][err_src]; 96 - WARN(true, "L3 standard error: TARGET:%s at address 0x%x\n", 97 - target_name, 98 - __raw_readl(l3_targ_base + 99 - L3_TARG_STDERRLOG_SLVOFSLSB)); 100 - /* clear the std error log*/ 101 - clear = std_err_main | CLEAR_STDERR_LOG; 102 - writel(clear, l3_targ_base + 103 - L3_TARG_STDERRLOG_MAIN); 104 - break; 197 + /* 198 + * Certain platforms may have "undocumented" status 199 + * pending on boot. So don't generate a severe warning 200 + * here. Just mask it off to prevent the error from 201 + * recurring and locking up the system. 202 + */ 203 + if (ret) { 204 + dev_err(l3->dev, 205 + "L3 %s error: target %d mod:%d %s\n", 206 + inttype ?
"debug" : "application", 207 + err_src, i, "(unclearable)"); 105 208 106 - case CUSTOM_ERROR: 107 - target_name = 108 - l3_targ_inst_name[i][err_src]; 109 - for (k = 0; k < NUM_OF_L3_MASTERS; k++) { 110 - if (masterid == l3_masters[k].id) 111 - master_name = 112 - l3_masters[k].name; 113 - } 114 - WARN(true, "L3 custom error: MASTER:%s TARGET:%s\n", 115 - master_name, target_name); 116 - /* clear the std error log*/ 117 - clear = std_err_main | CLEAR_STDERR_LOG; 118 - writel(clear, l3_targ_base + 119 - L3_TARG_STDERRLOG_MAIN); 120 - break; 209 + mask_reg = base + flag_mux->offset + 210 + L3_FLAGMUX_MASK0 + (inttype << 3); 211 + mask_val = readl_relaxed(mask_reg); 212 + mask_val &= ~(1 << err_src); 213 + writel_relaxed(mask_val, mask_reg); 121 214 122 - default: 123 - /* Nothing to be handled here as of now */ 124 - break; 215 + /* Mark these bits as to be ignored */ 216 + if (inttype) 217 + flag_mux->mask_app_bits |= 1 << err_src; 218 + else 219 + flag_mux->mask_dbg_bits |= 1 << err_src; 125 220 } 126 - /* Error found so break the for loop */ 127 - break; 221 + 222 + /* Error found so break the for loop */ 223 + break; 128 224 } 129 225 } 130 226 return IRQ_HANDLED; 131 227 } 132 228 133 - static int omap4_l3_probe(struct platform_device *pdev) 134 - { 135 - static struct omap4_l3 *l3; 136 - struct resource *res; 137 - int ret; 229 + static const struct of_device_id l3_noc_match[] = { 230 + {.compatible = "ti,omap4-l3-noc", .data = &omap_l3_data}, 231 + {.compatible = "ti,dra7-l3-noc", .data = &dra_l3_data}, 232 + {.compatible = "ti,am4372-l3-noc", .data = &am4372_l3_data}, 233 + {}, 234 + }; 235 + MODULE_DEVICE_TABLE(of, l3_noc_match); 138 236 139 - l3 = kzalloc(sizeof(*l3), GFP_KERNEL); 237 + static int omap_l3_probe(struct platform_device *pdev) 238 + { 239 + const struct of_device_id *of_id; 240 + static struct omap_l3 *l3; 241 + int ret, i, res_idx; 242 + 243 + of_id = of_match_device(l3_noc_match, &pdev->dev); 244 + if (!of_id) { 245 + dev_err(&pdev->dev, "OF 
data missing\n"); 246 + return -EINVAL; 247 + } 248 + 249 + l3 = devm_kzalloc(&pdev->dev, sizeof(*l3), GFP_KERNEL); 140 250 if (!l3) 141 251 return -ENOMEM; 142 252 253 + memcpy(l3, of_id->data, sizeof(*l3)); 254 + l3->dev = &pdev->dev; 143 255 platform_set_drvdata(pdev, l3); 144 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 145 - if (!res) { 146 - dev_err(&pdev->dev, "couldn't find resource 0\n"); 147 - ret = -ENODEV; 148 - goto err0; 149 - } 150 256 151 - l3->l3_base[0] = ioremap(res->start, resource_size(res)); 152 - if (!l3->l3_base[0]) { 153 - dev_err(&pdev->dev, "ioremap failed\n"); 154 - ret = -ENOMEM; 155 - goto err0; 156 - } 257 + /* Get mem resources */ 258 + for (i = 0, res_idx = 0; i < l3->num_modules; i++) { 259 + struct resource *res; 157 260 158 - res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 159 - if (!res) { 160 - dev_err(&pdev->dev, "couldn't find resource 1\n"); 161 - ret = -ENODEV; 162 - goto err1; 163 - } 164 - 165 - l3->l3_base[1] = ioremap(res->start, resource_size(res)); 166 - if (!l3->l3_base[1]) { 167 - dev_err(&pdev->dev, "ioremap failed\n"); 168 - ret = -ENOMEM; 169 - goto err1; 170 - } 171 - 172 - res = platform_get_resource(pdev, IORESOURCE_MEM, 2); 173 - if (!res) { 174 - dev_err(&pdev->dev, "couldn't find resource 2\n"); 175 - ret = -ENODEV; 176 - goto err2; 177 - } 178 - 179 - l3->l3_base[2] = ioremap(res->start, resource_size(res)); 180 - if (!l3->l3_base[2]) { 181 - dev_err(&pdev->dev, "ioremap failed\n"); 182 - ret = -ENOMEM; 183 - goto err2; 261 + if (l3->l3_base[i] == L3_BASE_IS_SUBMODULE) { 262 + /* First entry cannot be submodule */ 263 + BUG_ON(i == 0); 264 + l3->l3_base[i] = l3->l3_base[i - 1]; 265 + continue; 266 + } 267 + res = platform_get_resource(pdev, IORESOURCE_MEM, res_idx); 268 + l3->l3_base[i] = devm_ioremap_resource(&pdev->dev, res); 269 + if (IS_ERR(l3->l3_base[i])) { 270 + dev_err(l3->dev, "ioremap %d failed\n", i); 271 + return PTR_ERR(l3->l3_base[i]); 272 + } 273 + res_idx++; 184 274 } 185 
275 186 276 /* 187 277 * Setup interrupt Handlers 188 278 */ 189 279 l3->debug_irq = platform_get_irq(pdev, 0); 190 - ret = request_irq(l3->debug_irq, 191 - l3_interrupt_handler, 192 - IRQF_DISABLED, "l3-dbg-irq", l3); 280 + ret = devm_request_irq(l3->dev, l3->debug_irq, l3_interrupt_handler, 281 + IRQF_DISABLED, "l3-dbg-irq", l3); 193 282 if (ret) { 194 - pr_crit("L3: request_irq failed to register for 0x%x\n", 195 - l3->debug_irq); 196 - goto err3; 283 + dev_err(l3->dev, "request_irq failed for %d\n", 284 + l3->debug_irq); 285 + return ret; 197 286 } 198 287 199 288 l3->app_irq = platform_get_irq(pdev, 1); 200 - ret = request_irq(l3->app_irq, 201 - l3_interrupt_handler, 202 - IRQF_DISABLED, "l3-app-irq", l3); 203 - if (ret) { 204 - pr_crit("L3: request_irq failed to register for 0x%x\n", 205 - l3->app_irq); 206 - goto err4; 207 - } 289 + ret = devm_request_irq(l3->dev, l3->app_irq, l3_interrupt_handler, 290 + IRQF_DISABLED, "l3-app-irq", l3); 291 + if (ret) 292 + dev_err(l3->dev, "request_irq failed for %d\n", l3->app_irq); 208 293 209 - return 0; 210 - 211 - err4: 212 - free_irq(l3->debug_irq, l3); 213 - err3: 214 - iounmap(l3->l3_base[2]); 215 - err2: 216 - iounmap(l3->l3_base[1]); 217 - err1: 218 - iounmap(l3->l3_base[0]); 219 - err0: 220 - kfree(l3); 221 294 return ret; 222 295 } 223 296 224 - static int omap4_l3_remove(struct platform_device *pdev) 225 - { 226 - struct omap4_l3 *l3 = platform_get_drvdata(pdev); 227 - 228 - free_irq(l3->app_irq, l3); 229 - free_irq(l3->debug_irq, l3); 230 - iounmap(l3->l3_base[0]); 231 - iounmap(l3->l3_base[1]); 232 - iounmap(l3->l3_base[2]); 233 - kfree(l3); 234 - 235 - return 0; 236 - } 237 - 238 - #if defined(CONFIG_OF) 239 - static const struct of_device_id l3_noc_match[] = { 240 - {.compatible = "ti,omap4-l3-noc", }, 241 - {}, 242 - }; 243 - MODULE_DEVICE_TABLE(of, l3_noc_match); 244 - #else 245 - #define l3_noc_match NULL 246 - #endif 247 - 248 - static struct platform_driver omap4_l3_driver = { 249 - .probe = 
omap4_l3_probe, 250 - .remove = omap4_l3_remove, 297 + static struct platform_driver omap_l3_driver = { 298 + .probe = omap_l3_probe, 251 299 .driver = { 252 300 .name = "omap_l3_noc", 253 301 .owner = THIS_MODULE, 254 - .of_match_table = l3_noc_match, 302 + .of_match_table = of_match_ptr(l3_noc_match), 255 303 }, 256 304 }; 257 305 258 - static int __init omap4_l3_init(void) 306 + static int __init omap_l3_init(void) 259 307 { 260 - return platform_driver_register(&omap4_l3_driver); 308 + return platform_driver_register(&omap_l3_driver); 261 309 } 262 - postcore_initcall_sync(omap4_l3_init); 310 + postcore_initcall_sync(omap_l3_init); 263 311 264 - static void __exit omap4_l3_exit(void) 312 + static void __exit omap_l3_exit(void) 265 313 { 266 - platform_driver_unregister(&omap4_l3_driver); 314 + platform_driver_unregister(&omap_l3_driver); 267 315 } 268 - module_exit(omap4_l3_exit); 316 + module_exit(omap_l3_exit);
drivers/bus/omap_l3_noc.h | +417 -118
··· 1 1 /* 2 - * OMAP4XXX L3 Interconnect error handling driver header 2 + * OMAP L3 Interconnect error handling driver header 3 3 * 4 - * Copyright (C) 2011 Texas Corporation 4 + * Copyright (C) 2011-2014 Texas Instruments Incorporated - http://www.ti.com/ 5 5 * Santosh Shilimkar <santosh.shilimkar@ti.com> 6 6 * sricharan <r.sricharan@ti.com> 7 7 * 8 8 * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License as published by 10 - * the Free Software Foundation; either version 2 of the License, or 11 - * (at your option) any later version. 9 + * it under the terms of the GNU General Public License version 2 as 10 + * published by the Free Software Foundation. 12 11 * 13 - * This program is distributed in the hope that it will be useful, 14 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 15 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * This program is distributed "as is" WITHOUT ANY WARRANTY of any 13 + * kind, whether express or implied; without even the implied warranty 14 + * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 15 * GNU General Public License for more details. 
17 - * 18 - * You should have received a copy of the GNU General Public License 19 - * along with this program; if not, write to the Free Software 20 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 21 - * USA 22 16 */ 23 - #ifndef __ARCH_ARM_MACH_OMAP2_L3_INTERCONNECT_3XXX_H 24 - #define __ARCH_ARM_MACH_OMAP2_L3_INTERCONNECT_3XXX_H 17 + #ifndef __OMAP_L3_NOC_H 18 + #define __OMAP_L3_NOC_H 25 19 26 - #define L3_MODULES 3 20 + #define MAX_L3_MODULES 3 21 + #define MAX_CLKDM_TARGETS 31 22 + 27 23 #define CLEAR_STDERR_LOG (1 << 31) 28 24 #define CUSTOM_ERROR 0x2 29 25 #define STANDARD_ERROR 0x0 ··· 29 33 30 34 /* L3 TARG register offsets */ 31 35 #define L3_TARG_STDERRLOG_MAIN 0x48 36 + #define L3_TARG_STDERRLOG_HDR 0x4c 37 + #define L3_TARG_STDERRLOG_MSTADDR 0x50 38 + #define L3_TARG_STDERRLOG_INFO 0x58 32 39 #define L3_TARG_STDERRLOG_SLVOFSLSB 0x5c 33 - #define L3_TARG_STDERRLOG_MSTADDR 0x68 40 + #define L3_TARG_STDERRLOG_CINFO_INFO 0x64 41 + #define L3_TARG_STDERRLOG_CINFO_MSTADDR 0x68 42 + #define L3_TARG_STDERRLOG_CINFO_OPCODE 0x6c 34 43 #define L3_FLAGMUX_REGERR0 0xc 44 + #define L3_FLAGMUX_MASK0 0x8 35 45 36 - #define NUM_OF_L3_MASTERS (sizeof(l3_masters)/sizeof(l3_masters[0])) 46 + #define L3_TARGET_NOT_SUPPORTED NULL 37 47 38 - static u32 l3_flagmux[L3_MODULES] = { 39 - 0x500, 40 - 0x1000, 41 - 0X0200 48 + #define L3_BASE_IS_SUBMODULE ((void __iomem *)(1 << 0)) 49 + 50 + static const char * const l3_transaction_type[] = { 51 + /* 0 0 0 */ "Idle", 52 + /* 0 0 1 */ "Write", 53 + /* 0 1 0 */ "Read", 54 + /* 0 1 1 */ "ReadEx", 55 + /* 1 0 0 */ "Read Link", 56 + /* 1 0 1 */ "Write Non-Posted", 57 + /* 1 1 0 */ "Write Conditional", 58 + /* 1 1 1 */ "Write Broadcast", 42 59 }; 43 60 44 - /* L3 Target standard Error register offsets */ 45 - static u32 l3_targ_inst_clk1[] = { 46 - 0x100, /* DMM1 */ 47 - 0x200, /* DMM2 */ 48 - 0x300, /* ABE */ 49 - 0x400, /* L4CFG */ 50 - 0x600, /* CLK2 PWR DISC */ 51 - 0x0, /* Host CLK1 */ 52 - 0x900 /* L4 
Wakeup */ 53 - }; 54 - 55 - static u32 l3_targ_inst_clk2[] = { 56 - 0x500, /* CORTEX M3 */ 57 - 0x300, /* DSS */ 58 - 0x100, /* GPMC */ 59 - 0x400, /* ISS */ 60 - 0x700, /* IVAHD */ 61 - 0xD00, /* missing in TRM corresponds to AES1*/ 62 - 0x900, /* L4 PER0*/ 63 - 0x200, /* OCMRAM */ 64 - 0x100, /* missing in TRM corresponds to GPMC sERROR*/ 65 - 0x600, /* SGX */ 66 - 0x800, /* SL2 */ 67 - 0x1600, /* C2C */ 68 - 0x1100, /* missing in TRM corresponds PWR DISC CLK1*/ 69 - 0xF00, /* missing in TRM corrsponds to SHA1*/ 70 - 0xE00, /* missing in TRM corresponds to AES2*/ 71 - 0xC00, /* L4 PER3 */ 72 - 0xA00, /* L4 PER1*/ 73 - 0xB00, /* L4 PER2*/ 74 - 0x0, /* HOST CLK2 */ 75 - 0x1800, /* CAL */ 76 - 0x1700 /* LLI */ 77 - }; 78 - 79 - static u32 l3_targ_inst_clk3[] = { 80 - 0x0100 /* EMUSS */, 81 - 0x0300, /* DEBUGSS_CT_TBR */ 82 - 0x0 /* HOST CLK3 */ 83 - }; 84 - 85 - static struct l3_masters_data { 61 + /** 62 + * struct l3_masters_data - L3 Master information 63 + * @id: ID of the L3 Master 64 + * @name: master name 65 + */ 66 + struct l3_masters_data { 86 67 u32 id; 87 - char name[10]; 88 - } l3_masters[] = { 68 + char *name; 69 + }; 70 + 71 + /** 72 + * struct l3_target_data - L3 Target information 73 + * @offset: Offset from base for L3 Target 74 + * @name: Target name 75 + * 76 + * Target information is organized indexed by bit field definitions. 77 + */ 78 + struct l3_target_data { 79 + u32 offset; 80 + char *name; 81 + }; 82 + 83 + /** 84 + * struct l3_flagmux_data - Flag Mux information 85 + * @offset: offset from base for flagmux register 86 + * @l3_targ: array indexed by flagmux index (bit offset) pointing to the 87 + * target data. 
unsupported ones are marked with 88 + * L3_TARGET_NOT_SUPPORTED 89 + * @num_targ_data: number of entries in target data 90 + * @mask_app_bits: ignore these from raw application irq status 91 + * @mask_dbg_bits: ignore these from raw debug irq status 92 + */ 93 + struct l3_flagmux_data { 94 + u32 offset; 95 + struct l3_target_data *l3_targ; 96 + u8 num_targ_data; 97 + u32 mask_app_bits; 98 + u32 mask_dbg_bits; 99 + }; 100 + 101 + 102 + /** 103 + * struct omap_l3 - Description of data relevant for L3 bus. 104 + * @dev: device representing the bus (populated runtime) 105 + * @l3_base: base addresses of modules (populated runtime if 0) 106 + * if set to L3_BASE_IS_SUBMODULE, then uses previous 107 + * module index as the base address 108 + * @l3_flag_mux: array containing flag mux data per module 109 + * offset from corresponding module base indexed per 110 + * module. 111 + * @num_modules: number of clock domains / modules. 112 + * @l3_masters: array pointing to master data containing name and register 113 + * offset for the master. 
114 + * @num_master: number of masters 115 + * @mst_addr_mask: Mask representing MSTADDR information of NTTP packet 116 + * @debug_irq: irq number of the debug interrupt (populated runtime) 117 + * @app_irq: irq number of the application interrupt (populated runtime) 118 + */ 119 + struct omap_l3 { 120 + struct device *dev; 121 + 122 + void __iomem *l3_base[MAX_L3_MODULES]; 123 + struct l3_flagmux_data **l3_flagmux; 124 + int num_modules; 125 + 126 + struct l3_masters_data *l3_masters; 127 + int num_masters; 128 + u32 mst_addr_mask; 129 + 130 + int debug_irq; 131 + int app_irq; 132 + }; 133 + 134 + static struct l3_target_data omap_l3_target_data_clk1[] = { 135 + {0x100, "DMM1",}, 136 + {0x200, "DMM2",}, 137 + {0x300, "ABE",}, 138 + {0x400, "L4CFG",}, 139 + {0x600, "CLK2PWRDISC",}, 140 + {0x0, "HOSTCLK1",}, 141 + {0x900, "L4WAKEUP",}, 142 + }; 143 + 144 + static struct l3_flagmux_data omap_l3_flagmux_clk1 = { 145 + .offset = 0x500, 146 + .l3_targ = omap_l3_target_data_clk1, 147 + .num_targ_data = ARRAY_SIZE(omap_l3_target_data_clk1), 148 + }; 149 + 150 + 151 + static struct l3_target_data omap_l3_target_data_clk2[] = { 152 + {0x500, "CORTEXM3",}, 153 + {0x300, "DSS",}, 154 + {0x100, "GPMC",}, 155 + {0x400, "ISS",}, 156 + {0x700, "IVAHD",}, 157 + {0xD00, "AES1",}, 158 + {0x900, "L4PER0",}, 159 + {0x200, "OCMRAM",}, 160 + {0x100, "GPMCsERROR",}, 161 + {0x600, "SGX",}, 162 + {0x800, "SL2",}, 163 + {0x1600, "C2C",}, 164 + {0x1100, "PWRDISCCLK1",}, 165 + {0xF00, "SHA1",}, 166 + {0xE00, "AES2",}, 167 + {0xC00, "L4PER3",}, 168 + {0xA00, "L4PER1",}, 169 + {0xB00, "L4PER2",}, 170 + {0x0, "HOSTCLK2",}, 171 + {0x1800, "CAL",}, 172 + {0x1700, "LLI",}, 173 + }; 174 + 175 + static struct l3_flagmux_data omap_l3_flagmux_clk2 = { 176 + .offset = 0x1000, 177 + .l3_targ = omap_l3_target_data_clk2, 178 + .num_targ_data = ARRAY_SIZE(omap_l3_target_data_clk2), 179 + }; 180 + 181 + 182 + static struct l3_target_data omap_l3_target_data_clk3[] = { 183 + {0x0100, "EMUSS",}, 184 + {0x0300, 
"DEBUG SOURCE",}, 185 + {0x0, "HOST CLK3",}, 186 + }; 187 + 188 + static struct l3_flagmux_data omap_l3_flagmux_clk3 = { 189 + .offset = 0x0200, 190 + .l3_targ = omap_l3_target_data_clk3, 191 + .num_targ_data = ARRAY_SIZE(omap_l3_target_data_clk3), 192 + }; 193 + 194 + static struct l3_masters_data omap_l3_masters[] = { 89 195 { 0x0 , "MPU"}, 90 196 { 0x10, "CS_ADP"}, 91 197 { 0x14, "xxx"}, ··· 215 117 { 0xC8, "USBHOSTFS"} 216 118 }; 217 119 218 - static char *l3_targ_inst_name[L3_MODULES][21] = { 219 - { 220 - "DMM1", 221 - "DMM2", 222 - "ABE", 223 - "L4CFG", 224 - "CLK2 PWR DISC", 225 - "HOST CLK1", 226 - "L4 WAKEUP" 227 - }, 228 - { 229 - "CORTEX M3" , 230 - "DSS ", 231 - "GPMC ", 232 - "ISS ", 233 - "IVAHD ", 234 - "AES1", 235 - "L4 PER0", 236 - "OCMRAM ", 237 - "GPMC sERROR", 238 - "SGX ", 239 - "SL2 ", 240 - "C2C ", 241 - "PWR DISC CLK1", 242 - "SHA1", 243 - "AES2", 244 - "L4 PER3", 245 - "L4 PER1", 246 - "L4 PER2", 247 - "HOST CLK2", 248 - "CAL", 249 - "LLI" 250 - }, 251 - { 252 - "EMUSS", 253 - "DEBUG SOURCE", 254 - "HOST CLK3" 255 - }, 120 + static struct l3_flagmux_data *omap_l3_flagmux[] = { 121 + &omap_l3_flagmux_clk1, 122 + &omap_l3_flagmux_clk2, 123 + &omap_l3_flagmux_clk3, 256 124 }; 257 125 258 - static u32 *l3_targ[L3_MODULES] = { 259 - l3_targ_inst_clk1, 260 - l3_targ_inst_clk2, 261 - l3_targ_inst_clk3, 126 + static const struct omap_l3 omap_l3_data = { 127 + .l3_flagmux = omap_l3_flagmux, 128 + .num_modules = ARRAY_SIZE(omap_l3_flagmux), 129 + .l3_masters = omap_l3_masters, 130 + .num_masters = ARRAY_SIZE(omap_l3_masters), 131 + /* The 6 MSBs of register field used to distinguish initiator */ 132 + .mst_addr_mask = 0xFC, 262 133 }; 263 134 264 - struct omap4_l3 { 265 - struct device *dev; 266 - struct clk *ick; 267 - 268 - /* memory base */ 269 - void __iomem *l3_base[L3_MODULES]; 270 - 271 - int debug_irq; 272 - int app_irq; 135 + /* DRA7 data */ 136 + static struct l3_target_data dra_l3_target_data_clk1[] = { 137 + {0x2a00, "AES1",}, 138 + 
{0x0200, "DMM_P1",}, 139 + {0x0600, "DSP2_SDMA",}, 140 + {0x0b00, "EVE2",}, 141 + {0x1300, "DMM_P2",}, 142 + {0x2c00, "AES2",}, 143 + {0x0300, "DSP1_SDMA",}, 144 + {0x0a00, "EVE1",}, 145 + {0x0c00, "EVE3",}, 146 + {0x0d00, "EVE4",}, 147 + {0x2900, "DSS",}, 148 + {0x0100, "GPMC",}, 149 + {0x3700, "PCIE1",}, 150 + {0x1600, "IVA_CONFIG",}, 151 + {0x1800, "IVA_SL2IF",}, 152 + {0x0500, "L4_CFG",}, 153 + {0x1d00, "L4_WKUP",}, 154 + {0x3800, "PCIE2",}, 155 + {0x3300, "SHA2_1",}, 156 + {0x1200, "GPU",}, 157 + {0x1000, "IPU1",}, 158 + {0x1100, "IPU2",}, 159 + {0x2000, "TPCC_EDMA",}, 160 + {0x2e00, "TPTC1_EDMA",}, 161 + {0x2b00, "TPTC2_EDMA",}, 162 + {0x0700, "VCP1",}, 163 + {0x2500, "L4_PER2_P3",}, 164 + {0x0e00, "L4_PER3_P3",}, 165 + {0x2200, "MMU1",}, 166 + {0x1400, "PRUSS1",}, 167 + {0x1500, "PRUSS2"}, 168 + {0x0800, "VCP1",}, 273 169 }; 274 - #endif 170 + 171 + static struct l3_flagmux_data dra_l3_flagmux_clk1 = { 172 + .offset = 0x803500, 173 + .l3_targ = dra_l3_target_data_clk1, 174 + .num_targ_data = ARRAY_SIZE(dra_l3_target_data_clk1), 175 + }; 176 + 177 + static struct l3_target_data dra_l3_target_data_clk2[] = { 178 + {0x0, "HOST CLK1",}, 179 + {0x0, "HOST CLK2",}, 180 + {0xdead, L3_TARGET_NOT_SUPPORTED,}, 181 + {0x3400, "SHA2_2",}, 182 + {0x0900, "BB2D",}, 183 + {0xdead, L3_TARGET_NOT_SUPPORTED,}, 184 + {0x2100, "L4_PER1_P3",}, 185 + {0x1c00, "L4_PER1_P1",}, 186 + {0x1f00, "L4_PER1_P2",}, 187 + {0x2300, "L4_PER2_P1",}, 188 + {0x2400, "L4_PER2_P2",}, 189 + {0x2600, "L4_PER3_P1",}, 190 + {0x2700, "L4_PER3_P2",}, 191 + {0x2f00, "MCASP1",}, 192 + {0x3000, "MCASP2",}, 193 + {0x3100, "MCASP3",}, 194 + {0x2800, "MMU2",}, 195 + {0x0f00, "OCMC_RAM1",}, 196 + {0x1700, "OCMC_RAM2",}, 197 + {0x1900, "OCMC_RAM3",}, 198 + {0x1e00, "OCMC_ROM",}, 199 + {0x3900, "QSPI",}, 200 + }; 201 + 202 + static struct l3_flagmux_data dra_l3_flagmux_clk2 = { 203 + .offset = 0x803600, 204 + .l3_targ = dra_l3_target_data_clk2, 205 + .num_targ_data = ARRAY_SIZE(dra_l3_target_data_clk2), 206 + }; 
207 + 208 + static struct l3_target_data dra_l3_target_data_clk3[] = { 209 + {0x0100, "L3_INSTR"}, 210 + {0x0300, "DEBUGSS_CT_TBR"}, 211 + {0x0, "HOST CLK3"}, 212 + }; 213 + 214 + static struct l3_flagmux_data dra_l3_flagmux_clk3 = { 215 + .offset = 0x200, 216 + .l3_targ = dra_l3_target_data_clk3, 217 + .num_targ_data = ARRAY_SIZE(dra_l3_target_data_clk3), 218 + }; 219 + 220 + static struct l3_masters_data dra_l3_masters[] = { 221 + { 0x0, "MPU" }, 222 + { 0x4, "CS_DAP" }, 223 + { 0x5, "IEEE1500_2_OCP" }, 224 + { 0x8, "DSP1_MDMA" }, 225 + { 0x9, "DSP1_CFG" }, 226 + { 0xA, "DSP1_DMA" }, 227 + { 0xB, "DSP2_MDMA" }, 228 + { 0xC, "DSP2_CFG" }, 229 + { 0xD, "DSP2_DMA" }, 230 + { 0xE, "IVA" }, 231 + { 0x10, "EVE1_P1" }, 232 + { 0x11, "EVE2_P1" }, 233 + { 0x12, "EVE3_P1" }, 234 + { 0x13, "EVE4_P1" }, 235 + { 0x14, "PRUSS1 PRU1" }, 236 + { 0x15, "PRUSS1 PRU2" }, 237 + { 0x16, "PRUSS2 PRU1" }, 238 + { 0x17, "PRUSS2 PRU2" }, 239 + { 0x18, "IPU1" }, 240 + { 0x19, "IPU2" }, 241 + { 0x1A, "SDMA" }, 242 + { 0x1B, "CDMA" }, 243 + { 0x1C, "TC1_EDMA" }, 244 + { 0x1D, "TC2_EDMA" }, 245 + { 0x20, "DSS" }, 246 + { 0x21, "MMU1" }, 247 + { 0x22, "PCIE1" }, 248 + { 0x23, "MMU2" }, 249 + { 0x24, "VIP1" }, 250 + { 0x25, "VIP2" }, 251 + { 0x26, "VIP3" }, 252 + { 0x27, "VPE" }, 253 + { 0x28, "GPU_P1" }, 254 + { 0x29, "BB2D" }, 255 + { 0x29, "GPU_P2" }, 256 + { 0x2B, "GMAC_SW" }, 257 + { 0x2C, "USB3" }, 258 + { 0x2D, "USB2_SS" }, 259 + { 0x2E, "USB2_ULPI_SS1" }, 260 + { 0x2F, "USB2_ULPI_SS2" }, 261 + { 0x30, "CSI2_1" }, 262 + { 0x31, "CSI2_2" }, 263 + { 0x33, "SATA" }, 264 + { 0x34, "EVE1_P2" }, 265 + { 0x35, "EVE2_P2" }, 266 + { 0x36, "EVE3_P2" }, 267 + { 0x37, "EVE4_P2" } 268 + }; 269 + 270 + static struct l3_flagmux_data *dra_l3_flagmux[] = { 271 + &dra_l3_flagmux_clk1, 272 + &dra_l3_flagmux_clk2, 273 + &dra_l3_flagmux_clk3, 274 + }; 275 + 276 + static const struct omap_l3 dra_l3_data = { 277 + .l3_base = { [1] = L3_BASE_IS_SUBMODULE }, 278 + .l3_flagmux = dra_l3_flagmux, 279 + 
.num_modules = ARRAY_SIZE(dra_l3_flagmux), 280 + .l3_masters = dra_l3_masters, 281 + .num_masters = ARRAY_SIZE(dra_l3_masters), 282 + /* The 6 MSBs of register field used to distinguish initiator */ 283 + .mst_addr_mask = 0xFC, 284 + }; 285 + 286 + /* AM4372 data */ 287 + static struct l3_target_data am4372_l3_target_data_200f[] = { 288 + {0xf00, "EMIF",}, 289 + {0x1200, "DES",}, 290 + {0x400, "OCMCRAM",}, 291 + {0x700, "TPTC0",}, 292 + {0x800, "TPTC1",}, 293 + {0x900, "TPTC2"}, 294 + {0xb00, "TPCC",}, 295 + {0xd00, "DEBUGSS",}, 296 + {0xdead, L3_TARGET_NOT_SUPPORTED,}, 297 + {0x200, "SHA",}, 298 + {0xc00, "SGX530",}, 299 + {0x500, "AES0",}, 300 + {0xa00, "L4_FAST",}, 301 + {0x300, "MPUSS_L2_RAM",}, 302 + {0x100, "ICSS",}, 303 + }; 304 + 305 + static struct l3_flagmux_data am4372_l3_flagmux_200f = { 306 + .offset = 0x1000, 307 + .l3_targ = am4372_l3_target_data_200f, 308 + .num_targ_data = ARRAY_SIZE(am4372_l3_target_data_200f), 309 + }; 310 + 311 + static struct l3_target_data am4372_l3_target_data_100s[] = { 312 + {0x100, "L4_PER_0",}, 313 + {0x200, "L4_PER_1",}, 314 + {0x300, "L4_PER_2",}, 315 + {0x400, "L4_PER_3",}, 316 + {0x800, "McASP0",}, 317 + {0x900, "McASP1",}, 318 + {0xC00, "MMCHS2",}, 319 + {0x700, "GPMC",}, 320 + {0xD00, "L4_FW",}, 321 + {0xdead, L3_TARGET_NOT_SUPPORTED,}, 322 + {0x500, "ADCTSC",}, 323 + {0xE00, "L4_WKUP",}, 324 + {0xA00, "MAG_CARD",}, 325 + }; 326 + 327 + static struct l3_flagmux_data am4372_l3_flagmux_100s = { 328 + .offset = 0x600, 329 + .l3_targ = am4372_l3_target_data_100s, 330 + .num_targ_data = ARRAY_SIZE(am4372_l3_target_data_100s), 331 + }; 332 + 333 + static struct l3_masters_data am4372_l3_masters[] = { 334 + { 0x0, "M1 (128-bit)"}, 335 + { 0x1, "M2 (64-bit)"}, 336 + { 0x4, "DAP"}, 337 + { 0x5, "P1500"}, 338 + { 0xC, "ICSS0"}, 339 + { 0xD, "ICSS1"}, 340 + { 0x14, "Wakeup Processor"}, 341 + { 0x18, "TPTC0 Read"}, 342 + { 0x19, "TPTC0 Write"}, 343 + { 0x1A, "TPTC1 Read"}, 344 + { 0x1B, "TPTC1 Write"}, 345 + { 0x1C, "TPTC2 
Read"}, 346 + { 0x1D, "TPTC2 Write"}, 347 + { 0x20, "SGX530"}, 348 + { 0x21, "OCP WP Traffic Probe"}, 349 + { 0x22, "OCP WP DMA Profiling"}, 350 + { 0x23, "OCP WP Event Trace"}, 351 + { 0x25, "DSS"}, 352 + { 0x28, "Crypto DMA RD"}, 353 + { 0x29, "Crypto DMA WR"}, 354 + { 0x2C, "VPFE0"}, 355 + { 0x2D, "VPFE1"}, 356 + { 0x30, "GEMAC"}, 357 + { 0x34, "USB0 RD"}, 358 + { 0x35, "USB0 WR"}, 359 + { 0x36, "USB1 RD"}, 360 + { 0x37, "USB1 WR"}, 361 + }; 362 + 363 + static struct l3_flagmux_data *am4372_l3_flagmux[] = { 364 + &am4372_l3_flagmux_200f, 365 + &am4372_l3_flagmux_100s, 366 + }; 367 + 368 + static const struct omap_l3 am4372_l3_data = { 369 + .l3_flagmux = am4372_l3_flagmux, 370 + .num_modules = ARRAY_SIZE(am4372_l3_flagmux), 371 + .l3_masters = am4372_l3_masters, 372 + .num_masters = ARRAY_SIZE(am4372_l3_masters), 373 + /* All 6 bits of register field used to distinguish initiator */ 374 + .mst_addr_mask = 0x3F, 375 + }; 376 + 377 + #endif /* __OMAP_L3_NOC_H */
drivers/clk/samsung/clk-exynos5250.c | +42
··· 24 24 #define APLL_CON0 0x100 25 25 #define SRC_CPU 0x200 26 26 #define DIV_CPU0 0x500 27 + #define PWR_CTRL1 0x1020 28 + #define PWR_CTRL2 0x1024 27 29 #define MPLL_LOCK 0x4000 28 30 #define MPLL_CON0 0x4100 29 31 #define SRC_CORE1 0x4204 ··· 86 84 #define SRC_CDREX 0x20200 87 85 #define PLL_DIV2_SEL 0x20a24 88 86 87 + /*Below definitions are used for PWR_CTRL settings*/ 88 + #define PWR_CTRL1_CORE2_DOWN_RATIO (7 << 28) 89 + #define PWR_CTRL1_CORE1_DOWN_RATIO (7 << 16) 90 + #define PWR_CTRL1_DIV2_DOWN_EN (1 << 9) 91 + #define PWR_CTRL1_DIV1_DOWN_EN (1 << 8) 92 + #define PWR_CTRL1_USE_CORE1_WFE (1 << 5) 93 + #define PWR_CTRL1_USE_CORE0_WFE (1 << 4) 94 + #define PWR_CTRL1_USE_CORE1_WFI (1 << 1) 95 + #define PWR_CTRL1_USE_CORE0_WFI (1 << 0) 96 + 97 + #define PWR_CTRL2_DIV2_UP_EN (1 << 25) 98 + #define PWR_CTRL2_DIV1_UP_EN (1 << 24) 99 + #define PWR_CTRL2_DUR_STANDBY2_VAL (1 << 16) 100 + #define PWR_CTRL2_DUR_STANDBY1_VAL (1 << 8) 101 + #define PWR_CTRL2_CORE2_UP_RATIO (1 << 4) 102 + #define PWR_CTRL2_CORE1_UP_RATIO (1 << 0) 103 + 89 104 /* list of PLLs to be registered */ 90 105 enum exynos5250_plls { 91 106 apll, mpll, cpll, epll, vpll, gpll, bpll, ··· 121 102 static unsigned long exynos5250_clk_regs[] __initdata = { 122 103 SRC_CPU, 123 104 DIV_CPU0, 105 + PWR_CTRL1, 106 + PWR_CTRL2, 124 107 SRC_CORE1, 125 108 SRC_TOP0, 126 109 SRC_TOP1, ··· 757 736 static void __init exynos5250_clk_init(struct device_node *np) 758 737 { 759 738 struct samsung_clk_provider *ctx; 739 + unsigned int tmp; 760 740 761 741 if (np) { 762 742 reg_base = of_iomap(np, 0); ··· 797 775 ARRAY_SIZE(exynos5250_div_clks)); 798 776 samsung_clk_register_gate(ctx, exynos5250_gate_clks, 799 777 ARRAY_SIZE(exynos5250_gate_clks)); 778 + 779 + /* 780 + * Enable arm clock down (in idle) and set arm divider 781 + * ratios in WFI/WFE state. 
782 + */ 783 + tmp = (PWR_CTRL1_CORE2_DOWN_RATIO | PWR_CTRL1_CORE1_DOWN_RATIO | 784 + PWR_CTRL1_DIV2_DOWN_EN | PWR_CTRL1_DIV1_DOWN_EN | 785 + PWR_CTRL1_USE_CORE1_WFE | PWR_CTRL1_USE_CORE0_WFE | 786 + PWR_CTRL1_USE_CORE1_WFI | PWR_CTRL1_USE_CORE0_WFI); 787 + __raw_writel(tmp, reg_base + PWR_CTRL1); 788 + 789 + /* 790 + * Enable arm clock up (on exiting idle). Set arm divider 791 + * ratios when not in idle along with the standby duration 792 + * ratios. 793 + */ 794 + tmp = (PWR_CTRL2_DIV2_UP_EN | PWR_CTRL2_DIV1_UP_EN | 795 + PWR_CTRL2_DUR_STANDBY2_VAL | PWR_CTRL2_DUR_STANDBY1_VAL | 796 + PWR_CTRL2_CORE2_UP_RATIO | PWR_CTRL2_CORE1_UP_RATIO); 797 + __raw_writel(tmp, reg_base + PWR_CTRL2); 800 798 801 799 exynos5250_clk_sleep_init(); 802 800
drivers/clocksource/exynos_mct.c | +8
··· 24 24 #include <linux/of_irq.h> 25 25 #include <linux/of_address.h> 26 26 #include <linux/clocksource.h> 27 + #include <linux/sched_clock.h> 27 28 28 29 #define EXYNOS4_MCTREG(x) (x) 29 30 #define EXYNOS4_MCT_G_CNT_L EXYNOS4_MCTREG(0x100) ··· 193 192 .resume = exynos4_frc_resume, 194 193 }; 195 194 195 + static u64 notrace exynos4_read_sched_clock(void) 196 + { 197 + return exynos4_frc_read(&mct_frc); 198 + } 199 + 196 200 static void __init exynos4_clocksource_init(void) 197 201 { 198 202 exynos4_mct_frc_start(0, 0); 199 203 200 204 if (clocksource_register_hz(&mct_frc, clk_rate)) 201 205 panic("%s: can't register clocksource\n", mct_frc.name); 206 + 207 + sched_clock_register(exynos4_read_sched_clock, 64, clk_rate); 202 208 } 203 209 204 210 static void exynos4_mct_comp0_stop(void)
drivers/cpufreq/Kconfig.arm | +3 -3
··· 30 30 31 31 config ARM_EXYNOS4210_CPUFREQ 32 32 bool "SAMSUNG EXYNOS4210" 33 - depends on CPU_EXYNOS4210 && !ARCH_MULTIPLATFORM 33 + depends on CPU_EXYNOS4210 34 34 default y 35 35 select ARM_EXYNOS_CPUFREQ 36 36 help ··· 41 41 42 42 config ARM_EXYNOS4X12_CPUFREQ 43 43 bool "SAMSUNG EXYNOS4x12" 44 - depends on (SOC_EXYNOS4212 || SOC_EXYNOS4412) && !ARCH_MULTIPLATFORM 44 + depends on SOC_EXYNOS4212 || SOC_EXYNOS4412 45 45 default y 46 46 select ARM_EXYNOS_CPUFREQ 47 47 help ··· 52 52 53 53 config ARM_EXYNOS5250_CPUFREQ 54 54 bool "SAMSUNG EXYNOS5250" 55 - depends on SOC_EXYNOS5250 && !ARCH_MULTIPLATFORM 55 + depends on SOC_EXYNOS5250 56 56 default y 57 57 select ARM_EXYNOS_CPUFREQ 58 58 help
drivers/cpufreq/exynos-cpufreq.c | -2
··· 19 19 #include <linux/platform_device.h> 20 20 #include <linux/of.h> 21 21 22 - #include <plat/cpu.h> 23 - 24 22 #include "exynos-cpufreq.h" 25 23 26 24 static struct exynos_dvfs_info *exynos_info;
drivers/cpufreq/exynos-cpufreq.h | +14 -16
··· 49 49 struct cpufreq_frequency_table *freq_table; 50 50 void (*set_freq)(unsigned int, unsigned int); 51 51 bool (*need_apll_change)(unsigned int, unsigned int); 52 + void __iomem *cmu_regs; 52 53 }; 53 54 54 55 #ifdef CONFIG_ARM_EXYNOS4210_CPUFREQ ··· 77 76 } 78 77 #endif 79 78 80 - #include <plat/cpu.h> 81 - #include <mach/map.h> 79 + #define EXYNOS4_CLKSRC_CPU 0x14200 80 + #define EXYNOS4_CLKMUX_STATCPU 0x14400 82 81 83 - #define EXYNOS4_CLKSRC_CPU (S5P_VA_CMU + 0x14200) 84 - #define EXYNOS4_CLKMUX_STATCPU (S5P_VA_CMU + 0x14400) 85 - 86 - #define EXYNOS4_CLKDIV_CPU (S5P_VA_CMU + 0x14500) 87 - #define EXYNOS4_CLKDIV_CPU1 (S5P_VA_CMU + 0x14504) 88 - #define EXYNOS4_CLKDIV_STATCPU (S5P_VA_CMU + 0x14600) 89 - #define EXYNOS4_CLKDIV_STATCPU1 (S5P_VA_CMU + 0x14604) 82 + #define EXYNOS4_CLKDIV_CPU 0x14500 83 + #define EXYNOS4_CLKDIV_CPU1 0x14504 84 + #define EXYNOS4_CLKDIV_STATCPU 0x14600 85 + #define EXYNOS4_CLKDIV_STATCPU1 0x14604 90 86 91 87 #define EXYNOS4_CLKSRC_CPU_MUXCORE_SHIFT (16) 92 88 #define EXYNOS4_CLKMUX_STATCPU_MUXCORE_MASK (0x7 << EXYNOS4_CLKSRC_CPU_MUXCORE_SHIFT) 93 89 94 - #define EXYNOS5_APLL_LOCK (S5P_VA_CMU + 0x00000) 95 - #define EXYNOS5_APLL_CON0 (S5P_VA_CMU + 0x00100) 96 - #define EXYNOS5_CLKMUX_STATCPU (S5P_VA_CMU + 0x00400) 97 - #define EXYNOS5_CLKDIV_CPU0 (S5P_VA_CMU + 0x00500) 98 - #define EXYNOS5_CLKDIV_CPU1 (S5P_VA_CMU + 0x00504) 99 - #define EXYNOS5_CLKDIV_STATCPU0 (S5P_VA_CMU + 0x00600) 100 - #define EXYNOS5_CLKDIV_STATCPU1 (S5P_VA_CMU + 0x00604) 90 + #define EXYNOS5_APLL_LOCK 0x00000 91 + #define EXYNOS5_APLL_CON0 0x00100 92 + #define EXYNOS5_CLKMUX_STATCPU 0x00400 93 + #define EXYNOS5_CLKDIV_CPU0 0x00500 94 + #define EXYNOS5_CLKDIV_CPU1 0x00504 95 + #define EXYNOS5_CLKDIV_STATCPU0 0x00600 96 + #define EXYNOS5_CLKDIV_STATCPU1 0x00604
drivers/cpufreq/exynos4210-cpufreq.c | +33 -6
··· 16 16 #include <linux/io.h> 17 17 #include <linux/slab.h> 18 18 #include <linux/cpufreq.h> 19 + #include <linux/of.h> 20 + #include <linux/of_address.h> 19 21 20 22 #include "exynos-cpufreq.h" 21 23 ··· 25 23 static struct clk *moutcore; 26 24 static struct clk *mout_mpll; 27 25 static struct clk *mout_apll; 26 + static struct exynos_dvfs_info *cpufreq; 28 27 29 28 static unsigned int exynos4210_volt_table[] = { 30 29 1250000, 1150000, 1050000, 975000, 950000, ··· 63 60 64 61 tmp = apll_freq_4210[div_index].clk_div_cpu0; 65 62 66 - __raw_writel(tmp, EXYNOS4_CLKDIV_CPU); 63 + __raw_writel(tmp, cpufreq->cmu_regs + EXYNOS4_CLKDIV_CPU); 67 64 68 65 do { 69 - tmp = __raw_readl(EXYNOS4_CLKDIV_STATCPU); 66 + tmp = __raw_readl(cpufreq->cmu_regs + EXYNOS4_CLKDIV_STATCPU); 70 67 } while (tmp & 0x1111111); 71 68 72 69 /* Change Divider - CPU1 */ 73 70 74 71 tmp = apll_freq_4210[div_index].clk_div_cpu1; 75 72 76 - __raw_writel(tmp, EXYNOS4_CLKDIV_CPU1); 73 + __raw_writel(tmp, cpufreq->cmu_regs + EXYNOS4_CLKDIV_CPU1); 77 74 78 75 do { 79 - tmp = __raw_readl(EXYNOS4_CLKDIV_STATCPU1); 76 + tmp = __raw_readl(cpufreq->cmu_regs + EXYNOS4_CLKDIV_STATCPU1); 80 77 } while (tmp & 0x11); 81 78 } 82 79 ··· 88 85 clk_set_parent(moutcore, mout_mpll); 89 86 90 87 do { 91 - tmp = (__raw_readl(EXYNOS4_CLKMUX_STATCPU) 88 + tmp = (__raw_readl(cpufreq->cmu_regs + EXYNOS4_CLKMUX_STATCPU) 92 89 >> EXYNOS4_CLKSRC_CPU_MUXCORE_SHIFT); 93 90 tmp &= 0x7; 94 91 } while (tmp != 0x2); ··· 99 96 clk_set_parent(moutcore, mout_apll); 100 97 101 98 do { 102 - tmp = __raw_readl(EXYNOS4_CLKMUX_STATCPU); 99 + tmp = __raw_readl(cpufreq->cmu_regs + EXYNOS4_CLKMUX_STATCPU); 103 100 tmp &= EXYNOS4_CLKMUX_STATCPU_MUXCORE_MASK; 104 101 } while (tmp != (0x1 << EXYNOS4_CLKSRC_CPU_MUXCORE_SHIFT)); 105 102 } ··· 118 115 119 116 int exynos4210_cpufreq_init(struct exynos_dvfs_info *info) 120 117 { 118 + struct device_node *np; 121 119 unsigned long rate; 120 + 121 + /* 122 + * HACK: This is a temporary workaround to get 
access to clock 123 + * controller registers directly and remove static mappings and 124 + * dependencies on platform headers. It is necessary to enable 125 + * Exynos multi-platform support and will be removed together with 126 + * this whole driver as soon as Exynos gets migrated to use 127 + * cpufreq-cpu0 driver. 128 + */ 129 + np = of_find_compatible_node(NULL, NULL, "samsung,exynos4210-clock"); 130 + if (!np) { 131 + pr_err("%s: failed to find clock controller DT node\n", 132 + __func__); 133 + return -ENODEV; 134 + } 135 + 136 + info->cmu_regs = of_iomap(np, 0); 137 + if (!info->cmu_regs) { 138 + pr_err("%s: failed to map CMU registers\n", __func__); 139 + return -EFAULT; 140 + } 122 141 123 142 cpu_clk = clk_get(NULL, "armclk"); 124 143 if (IS_ERR(cpu_clk)) ··· 167 142 info->volt_table = exynos4210_volt_table; 168 143 info->freq_table = exynos4210_freq_table; 169 144 info->set_freq = exynos4210_set_frequency; 145 + 146 + cpufreq = info; 170 147 171 148 return 0; 172 149
drivers/cpufreq/exynos4x12-cpufreq.c | +34 -6
···
 #include <linux/io.h>
 #include <linux/slab.h>
 #include <linux/cpufreq.h>
+#include <linux/of.h>
+#include <linux/of_address.h>

 #include "exynos-cpufreq.h"
···
 static struct clk *moutcore;
 static struct clk *mout_mpll;
 static struct clk *mout_apll;
+static struct exynos_dvfs_info *cpufreq;

 static unsigned int exynos4x12_volt_table[] = {
	1350000, 1287500, 1250000, 1187500, 1137500, 1087500, 1037500,
···
	tmp = apll_freq_4x12[div_index].clk_div_cpu0;

-	__raw_writel(tmp, EXYNOS4_CLKDIV_CPU);
+	__raw_writel(tmp, cpufreq->cmu_regs + EXYNOS4_CLKDIV_CPU);

-	while (__raw_readl(EXYNOS4_CLKDIV_STATCPU) & 0x11111111)
+	while (__raw_readl(cpufreq->cmu_regs + EXYNOS4_CLKDIV_STATCPU)
+	       & 0x11111111)
		cpu_relax();

	/* Change Divider - CPU1 */
	tmp = apll_freq_4x12[div_index].clk_div_cpu1;

-	__raw_writel(tmp, EXYNOS4_CLKDIV_CPU1);
+	__raw_writel(tmp, cpufreq->cmu_regs + EXYNOS4_CLKDIV_CPU1);

	do {
		cpu_relax();
-		tmp = __raw_readl(EXYNOS4_CLKDIV_STATCPU1);
+		tmp = __raw_readl(cpufreq->cmu_regs + EXYNOS4_CLKDIV_STATCPU1);
	} while (tmp != 0x0);
 }
···
	do {
		cpu_relax();
-		tmp = (__raw_readl(EXYNOS4_CLKMUX_STATCPU)
+		tmp = (__raw_readl(cpufreq->cmu_regs + EXYNOS4_CLKMUX_STATCPU)
			>> EXYNOS4_CLKSRC_CPU_MUXCORE_SHIFT);
		tmp &= 0x7;
	} while (tmp != 0x2);
···
	do {
		cpu_relax();
-		tmp = __raw_readl(EXYNOS4_CLKMUX_STATCPU);
+		tmp = __raw_readl(cpufreq->cmu_regs + EXYNOS4_CLKMUX_STATCPU);
		tmp &= EXYNOS4_CLKMUX_STATCPU_MUXCORE_MASK;
	} while (tmp != (0x1 << EXYNOS4_CLKSRC_CPU_MUXCORE_SHIFT));
 }
···
 int exynos4x12_cpufreq_init(struct exynos_dvfs_info *info)
 {
+	struct device_node *np;
	unsigned long rate;
+
+	/*
+	 * HACK: This is a temporary workaround to get access to clock
+	 * controller registers directly and remove static mappings and
+	 * dependencies on platform headers. It is necessary to enable
+	 * Exynos multi-platform support and will be removed together with
+	 * this whole driver as soon as Exynos gets migrated to use
+	 * cpufreq-cpu0 driver.
+	 */
+	np = of_find_compatible_node(NULL, NULL, "samsung,exynos4412-clock");
+	if (!np) {
+		pr_err("%s: failed to find clock controller DT node\n",
+			__func__);
+		return -ENODEV;
+	}
+
+	info->cmu_regs = of_iomap(np, 0);
+	if (!info->cmu_regs) {
+		pr_err("%s: failed to map CMU registers\n", __func__);
+		return -EFAULT;
+	}

	cpu_clk = clk_get(NULL, "armclk");
	if (IS_ERR(cpu_clk))
···
	info->volt_table = exynos4x12_volt_table;
	info->freq_table = exynos4x12_freq_table;
	info->set_freq = exynos4x12_set_frequency;
+
+	cpufreq = info;

	return 0;
+35 -8
drivers/cpufreq/exynos5250-cpufreq.c
···
 #include <linux/io.h>
 #include <linux/slab.h>
 #include <linux/cpufreq.h>
-
-#include <mach/map.h>
+#include <linux/of.h>
+#include <linux/of_address.h>

 #include "exynos-cpufreq.h"
···
 static struct clk *moutcore;
 static struct clk *mout_mpll;
 static struct clk *mout_apll;
+static struct exynos_dvfs_info *cpufreq;

 static unsigned int exynos5250_volt_table[] = {
	1300000, 1250000, 1225000, 1200000, 1150000,
···
	tmp = apll_freq_5250[div_index].clk_div_cpu0;

-	__raw_writel(tmp, EXYNOS5_CLKDIV_CPU0);
+	__raw_writel(tmp, cpufreq->cmu_regs + EXYNOS5_CLKDIV_CPU0);

-	while (__raw_readl(EXYNOS5_CLKDIV_STATCPU0) & 0x11111111)
+	while (__raw_readl(cpufreq->cmu_regs + EXYNOS5_CLKDIV_STATCPU0)
+	       & 0x11111111)
		cpu_relax();

	/* Change Divider - CPU1 */
	tmp = apll_freq_5250[div_index].clk_div_cpu1;

-	__raw_writel(tmp, EXYNOS5_CLKDIV_CPU1);
+	__raw_writel(tmp, cpufreq->cmu_regs + EXYNOS5_CLKDIV_CPU1);

-	while (__raw_readl(EXYNOS5_CLKDIV_STATCPU1) & 0x11)
+	while (__raw_readl(cpufreq->cmu_regs + EXYNOS5_CLKDIV_STATCPU1) & 0x11)
		cpu_relax();
 }
···
	do {
		cpu_relax();
-		tmp = (__raw_readl(EXYNOS5_CLKMUX_STATCPU) >> 16);
+		tmp = (__raw_readl(cpufreq->cmu_regs + EXYNOS5_CLKMUX_STATCPU)
+			>> 16);
		tmp &= 0x7;
	} while (tmp != 0x2);
···
	do {
		cpu_relax();
-		tmp = __raw_readl(EXYNOS5_CLKMUX_STATCPU);
+		tmp = __raw_readl(cpufreq->cmu_regs + EXYNOS5_CLKMUX_STATCPU);
		tmp &= (0x7 << 16);
	} while (tmp != (0x1 << 16));
 }
···
 int exynos5250_cpufreq_init(struct exynos_dvfs_info *info)
 {
+	struct device_node *np;
	unsigned long rate;
+
+	/*
+	 * HACK: This is a temporary workaround to get access to clock
+	 * controller registers directly and remove static mappings and
+	 * dependencies on platform headers. It is necessary to enable
+	 * Exynos multi-platform support and will be removed together with
+	 * this whole driver as soon as Exynos gets migrated to use
+	 * cpufreq-cpu0 driver.
+	 */
+	np = of_find_compatible_node(NULL, NULL, "samsung,exynos5250-clock");
+	if (!np) {
+		pr_err("%s: failed to find clock controller DT node\n",
+			__func__);
+		return -ENODEV;
+	}
+
+	info->cmu_regs = of_iomap(np, 0);
+	if (!info->cmu_regs) {
+		pr_err("%s: failed to map CMU registers\n", __func__);
+		return -EFAULT;
+	}

	cpu_clk = clk_get(NULL, "armclk");
	if (IS_ERR(cpu_clk))
···
	info->volt_table = exynos5250_volt_table;
	info->freq_table = exynos5250_freq_table;
	info->set_freq = exynos5250_set_frequency;
+
+	cpufreq = info;

	return 0;
+6
drivers/cpuidle/Kconfig.arm
···
	depends on ARCH_AT91
	help
	  Select this to enable cpuidle for AT91 processors
+
+config ARM_EXYNOS_CPUIDLE
+	bool "Cpu Idle Driver for the Exynos processors"
+	depends on ARCH_EXYNOS
+	help
+	  Select this to enable cpuidle for Exynos processors
+1
drivers/cpuidle/Makefile
···
 obj-$(CONFIG_ARM_ZYNQ_CPUIDLE)		+= cpuidle-zynq.o
 obj-$(CONFIG_ARM_U8500_CPUIDLE)		+= cpuidle-ux500.o
 obj-$(CONFIG_ARM_AT91_CPUIDLE)		+= cpuidle-at91.o
+obj-$(CONFIG_ARM_EXYNOS_CPUIDLE)	+= cpuidle-exynos.o

 ###############################################################################
 # POWERPC drivers
+99
drivers/cpuidle/cpuidle-exynos.c
···
+/* linux/arch/arm/mach-exynos/cpuidle.c
+ *
+ * Copyright (c) 2011 Samsung Electronics Co., Ltd.
+ *		http://www.samsung.com
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/cpuidle.h>
+#include <linux/cpu_pm.h>
+#include <linux/export.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+
+#include <asm/proc-fns.h>
+#include <asm/suspend.h>
+#include <asm/cpuidle.h>
+
+static void (*exynos_enter_aftr)(void);
+
+static int idle_finisher(unsigned long flags)
+{
+	exynos_enter_aftr();
+	cpu_do_idle();
+
+	return 1;
+}
+
+static int exynos_enter_core0_aftr(struct cpuidle_device *dev,
+				struct cpuidle_driver *drv,
+				int index)
+{
+	cpu_pm_enter();
+	cpu_suspend(0, idle_finisher);
+	cpu_pm_exit();
+
+	return index;
+}
+
+static int exynos_enter_lowpower(struct cpuidle_device *dev,
+				struct cpuidle_driver *drv,
+				int index)
+{
+	int new_index = index;
+
+	/* AFTR can only be entered when cores other than CPU0 are offline */
+	if (num_online_cpus() > 1 || dev->cpu != 0)
+		new_index = drv->safe_state_index;
+
+	if (new_index == 0)
+		return arm_cpuidle_simple_enter(dev, drv, new_index);
+	else
+		return exynos_enter_core0_aftr(dev, drv, new_index);
+}
+
+static struct cpuidle_driver exynos_idle_driver = {
+	.name			= "exynos_idle",
+	.owner			= THIS_MODULE,
+	.states = {
+		[0] = ARM_CPUIDLE_WFI_STATE,
+		[1] = {
+			.enter			= exynos_enter_lowpower,
+			.exit_latency		= 300,
+			.target_residency	= 100000,
+			.flags			= CPUIDLE_FLAG_TIME_VALID,
+			.name			= "C1",
+			.desc			= "ARM power down",
+		},
+	},
+	.state_count = 2,
+	.safe_state_index = 0,
+};
+
+static int exynos_cpuidle_probe(struct platform_device *pdev)
+{
+	int ret;
+
+	exynos_enter_aftr = (void *)(pdev->dev.platform_data);
+
+	ret = cpuidle_register(&exynos_idle_driver, NULL);
+	if (ret) {
+		dev_err(&pdev->dev, "failed to register cpuidle driver\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static struct platform_driver exynos_cpuidle_driver = {
+	.probe	= exynos_cpuidle_probe,
+	.driver = {
+		.name = "exynos_cpuidle",
+		.owner = THIS_MODULE,
+	},
+};
+
+module_platform_driver(exynos_cpuidle_driver);
+258 -73
drivers/dma/edma.c
···
 #define EDMA_MAX_SLOTS		MAX_NR_SG
 #define EDMA_DESCRIPTORS	16

+struct edma_pset {
+	u32				len;
+	dma_addr_t			addr;
+	struct edmacc_param		param;
+};
+
 struct edma_desc {
	struct virt_dma_desc		vdesc;
	struct list_head		node;
+	enum dma_transfer_direction	direction;
	int				cyclic;
	int				absync;
	int				pset_nr;
+	struct edma_chan		*echan;
	int				processed;
-	struct edmacc_param		pset[0];
+
+	/*
+	 * The following 4 elements are used for residue accounting.
+	 *
+	 * - processed_stat: the number of SG elements we have traversed
+	 * so far to cover accounting. This is updated directly to processed
+	 * during edma_callback and is always <= processed, because processed
+	 * refers to the number of pending transfer (programmed to EDMA
+	 * controller), where as processed_stat tracks number of transfers
+	 * accounted for so far.
+	 *
+	 * - residue: The amount of bytes we have left to transfer for this desc
+	 *
+	 * - residue_stat: The residue in bytes of data we have covered
+	 * so far for accounting. This is updated directly to residue
+	 * during callbacks to keep it current.
+	 *
+	 * - sg_len: Tracks the length of the current intermediate transfer,
+	 * this is required to update the residue during intermediate transfer
+	 * completion callback.
+	 */
+	int				processed_stat;
+	u32				sg_len;
+	u32				residue;
+	u32				residue_stat;
+
+	struct edma_pset		pset[0];
 };

 struct edma_cc;
···
	/* Find out how many left */
	left = edesc->pset_nr - edesc->processed;
	nslots = min(MAX_NR_SG, left);
+	edesc->sg_len = 0;

	/* Write descriptor PaRAM set(s) */
	for (i = 0; i < nslots; i++) {
		j = i + edesc->processed;
-		edma_write_slot(echan->slot[i], &edesc->pset[j]);
-		dev_dbg(echan->vchan.chan.device->dev,
+		edma_write_slot(echan->slot[i], &edesc->pset[j].param);
+		edesc->sg_len += edesc->pset[j].len;
+		dev_vdbg(echan->vchan.chan.device->dev,
			"\n pset[%d]:\n"
			"  chnum\t%d\n"
			"  slot\t%d\n"
···
			"  cidx\t%08x\n"
			"  lkrld\t%08x\n",
			j, echan->ch_num, echan->slot[i],
-			edesc->pset[j].opt,
-			edesc->pset[j].src,
-			edesc->pset[j].dst,
-			edesc->pset[j].a_b_cnt,
-			edesc->pset[j].ccnt,
-			edesc->pset[j].src_dst_bidx,
-			edesc->pset[j].src_dst_cidx,
-			edesc->pset[j].link_bcntrld);
+			edesc->pset[j].param.opt,
+			edesc->pset[j].param.src,
+			edesc->pset[j].param.dst,
+			edesc->pset[j].param.a_b_cnt,
+			edesc->pset[j].param.ccnt,
+			edesc->pset[j].param.src_dst_bidx,
+			edesc->pset[j].param.src_dst_cidx,
+			edesc->pset[j].param.link_bcntrld);
		/* Link to the previous slot if not the last set */
		if (i != (nslots - 1))
			edma_link(echan->slot[i], echan->slot[i+1]);
···
	}

	if (edesc->processed <= MAX_NR_SG) {
-		dev_dbg(dev, "first transfer starting %d\n", echan->ch_num);
+		dev_dbg(dev, "first transfer starting on channel %d\n",
+			echan->ch_num);
		edma_start(echan->ch_num);
	} else {
		dev_dbg(dev, "chan: %d: completed %d elements, resuming\n",
···
	 * MAX_NR_SG
	 */
	if (echan->missed) {
-		dev_dbg(dev, "missed event in execute detected\n");
+		dev_dbg(dev, "missed event on channel %d\n", echan->ch_num);
		edma_clean_channel(echan->ch_num);
		edma_stop(echan->ch_num);
		edma_start(echan->ch_num);
···
	return 0;
 }

+static int edma_dma_pause(struct edma_chan *echan)
+{
+	/* Pause/Resume only allowed with cyclic mode */
+	if (!echan->edesc->cyclic)
+		return -EINVAL;
+
+	edma_pause(echan->ch_num);
+	return 0;
+}
+
+static int edma_dma_resume(struct edma_chan *echan)
+{
+	/* Pause/Resume only allowed with cyclic mode */
+	if (!echan->edesc->cyclic)
+		return -EINVAL;
+
+	edma_resume(echan->ch_num);
+	return 0;
+}
+
 static int edma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
			unsigned long arg)
 {
···
		config = (struct dma_slave_config *)arg;
		ret = edma_slave_config(echan, config);
		break;
+	case DMA_PAUSE:
+		ret = edma_dma_pause(echan);
+		break;
+
+	case DMA_RESUME:
+		ret = edma_dma_resume(echan);
+		break;
+
	default:
		ret = -ENOSYS;
	}
···
 * @dma_length: Total length of the DMA transfer
 * @direction: Direction of the transfer
 */
-static int edma_config_pset(struct dma_chan *chan, struct edmacc_param *pset,
+static int edma_config_pset(struct dma_chan *chan, struct edma_pset *epset,
			dma_addr_t src_addr, dma_addr_t dst_addr, u32 burst,
			enum dma_slave_buswidth dev_width, unsigned int dma_length,
			enum dma_transfer_direction direction)
 {
	struct edma_chan *echan = to_edma_chan(chan);
	struct device *dev = chan->device->dev;
+	struct edmacc_param *param = &epset->param;
	int acnt, bcnt, ccnt, cidx;
	int src_bidx, dst_bidx, src_cidx, dst_cidx;
	int absync;

	acnt = dev_width;
+
+	/* src/dst_maxburst == 0 is the same case as src/dst_maxburst == 1 */
+	if (!burst)
+		burst = 1;
	/*
	 * If the maxburst is equal to the fifo width, use
	 * A-synced transfers. This allows for large contiguous
···
		cidx = acnt * bcnt;
	}

+	epset->len = dma_length;
+
	if (direction == DMA_MEM_TO_DEV) {
		src_bidx = acnt;
		src_cidx = cidx;
		dst_bidx = 0;
		dst_cidx = 0;
+		epset->addr = src_addr;
	} else if (direction == DMA_DEV_TO_MEM)  {
		src_bidx = 0;
		src_cidx = 0;
+		dst_bidx = acnt;
+		dst_cidx = cidx;
+		epset->addr = dst_addr;
+	} else if (direction == DMA_MEM_TO_MEM)  {
+		src_bidx = acnt;
+		src_cidx = cidx;
		dst_bidx = acnt;
		dst_cidx = cidx;
	} else {
···
		return -EINVAL;
	}

-	pset->opt = EDMA_TCC(EDMA_CHAN_SLOT(echan->ch_num));
+	param->opt = EDMA_TCC(EDMA_CHAN_SLOT(echan->ch_num));
	/* Configure A or AB synchronized transfers */
	if (absync)
-		pset->opt |= SYNCDIM;
+		param->opt |= SYNCDIM;

-	pset->src = src_addr;
-	pset->dst = dst_addr;
+	param->src = src_addr;
+	param->dst = dst_addr;

-	pset->src_dst_bidx = (dst_bidx << 16) | src_bidx;
-	pset->src_dst_cidx = (dst_cidx << 16) | src_cidx;
+	param->src_dst_bidx = (dst_bidx << 16) | src_bidx;
+	param->src_dst_cidx = (dst_cidx << 16) | src_cidx;

-	pset->a_b_cnt = bcnt << 16 | acnt;
-	pset->ccnt = ccnt;
+	param->a_b_cnt = bcnt << 16 | acnt;
+	param->ccnt = ccnt;
	/*
	 * Only time when (bcntrld) auto reload is required is for
	 * A-sync case, and in this case, a requirement of reload value
	 * of SZ_64K-1 only is assured. 'link' is initially set to NULL
	 * and then later will be populated by edma_execute.
	 */
-	pset->link_bcntrld = 0xffffffff;
+	param->link_bcntrld = 0xffffffff;
	return absync;
 }
···
		dev_width = echan->cfg.dst_addr_width;
		burst = echan->cfg.dst_maxburst;
	} else {
-		dev_err(dev, "%s: bad direction?\n", __func__);
+		dev_err(dev, "%s: bad direction: %d\n", __func__, direction);
		return NULL;
	}

	if (dev_width == DMA_SLAVE_BUSWIDTH_UNDEFINED) {
-		dev_err(dev, "Undefined slave buswidth\n");
+		dev_err(dev, "%s: Undefined slave buswidth\n", __func__);
		return NULL;
	}

	edesc = kzalloc(sizeof(*edesc) + sg_len *
		sizeof(edesc->pset[0]), GFP_ATOMIC);
	if (!edesc) {
-		dev_dbg(dev, "Failed to allocate a descriptor\n");
+		dev_err(dev, "%s: Failed to allocate a descriptor\n", __func__);
		return NULL;
	}

	edesc->pset_nr = sg_len;
+	edesc->residue = 0;
+	edesc->direction = direction;
+	edesc->echan = echan;

	/* Allocate a PaRAM slot, if needed */
	nslots = min_t(unsigned, MAX_NR_SG, sg_len);
···
					EDMA_SLOT_ANY);
			if (echan->slot[i] < 0) {
				kfree(edesc);
-				dev_err(dev, "Failed to allocate slot\n");
+				dev_err(dev, "%s: Failed to allocate slot\n",
+					__func__);
				return NULL;
			}
		}
···
		}

		edesc->absync = ret;
+		edesc->residue += sg_dma_len(sg);

		/* If this is the last in a current SG set of transactions,
		   enable interrupts so that next set is processed */
		if (!((i+1) % MAX_NR_SG))
-			edesc->pset[i].opt |= TCINTEN;
+			edesc->pset[i].param.opt |= TCINTEN;

		/* If this is the last set, enable completion interrupt flag */
		if (i == sg_len - 1)
-			edesc->pset[i].opt |= TCINTEN;
+			edesc->pset[i].param.opt |= TCINTEN;
	}
+	edesc->residue_stat = edesc->residue;
+
+	return vchan_tx_prep(&echan->vchan, &edesc->vdesc, tx_flags);
+}
+
+struct dma_async_tx_descriptor *edma_prep_dma_memcpy(
+	struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
+	size_t len, unsigned long tx_flags)
+{
+	int ret;
+	struct edma_desc *edesc;
+	struct device *dev = chan->device->dev;
+	struct edma_chan *echan = to_edma_chan(chan);
+
+	if (unlikely(!echan || !len))
+		return NULL;
+
+	edesc = kzalloc(sizeof(*edesc) + sizeof(edesc->pset[0]), GFP_ATOMIC);
+	if (!edesc) {
+		dev_dbg(dev, "Failed to allocate a descriptor\n");
+		return NULL;
+	}
+
+	edesc->pset_nr = 1;
+
+	ret = edma_config_pset(chan, &edesc->pset[0], src, dest, 1,
+			       DMA_SLAVE_BUSWIDTH_4_BYTES, len, DMA_MEM_TO_MEM);
+	if (ret < 0)
+		return NULL;
+
+	edesc->absync = ret;
+
+	/*
+	 * Enable intermediate transfer chaining to re-trigger channel
+	 * on completion of every TR, and enable transfer-completion
+	 * interrupt on completion of the whole transfer.
+	 */
+	edesc->pset[0].param.opt |= ITCCHEN;
+	edesc->pset[0].param.opt |= TCINTEN;

	return vchan_tx_prep(&echan->vchan, &edesc->vdesc, tx_flags);
 }
···
		dev_width = echan->cfg.dst_addr_width;
		burst = echan->cfg.dst_maxburst;
	} else {
-		dev_err(dev, "%s: bad direction?\n", __func__);
+		dev_err(dev, "%s: bad direction: %d\n", __func__, direction);
		return NULL;
	}

	if (dev_width == DMA_SLAVE_BUSWIDTH_UNDEFINED) {
-		dev_err(dev, "Undefined slave buswidth\n");
+		dev_err(dev, "%s: Undefined slave buswidth\n", __func__);
		return NULL;
	}
···
	edesc = kzalloc(sizeof(*edesc) + nslots *
		sizeof(edesc->pset[0]), GFP_ATOMIC);
	if (!edesc) {
-		dev_dbg(dev, "Failed to allocate a descriptor\n");
+		dev_err(dev, "%s: Failed to allocate a descriptor\n", __func__);
		return NULL;
	}

	edesc->cyclic = 1;
	edesc->pset_nr = nslots;
+	edesc->residue = edesc->residue_stat = buf_len;
+	edesc->direction = direction;
+	edesc->echan = echan;

-	dev_dbg(dev, "%s: nslots=%d\n", __func__, nslots);
-	dev_dbg(dev, "%s: period_len=%d\n", __func__, period_len);
-	dev_dbg(dev, "%s: buf_len=%d\n", __func__, buf_len);
+	dev_dbg(dev, "%s: channel=%d nslots=%d period_len=%zu buf_len=%zu\n",
+		__func__, echan->ch_num, nslots, period_len, buf_len);

	for (i = 0; i < nslots; i++) {
		/* Allocate a PaRAM slot, if needed */
···
					EDMA_SLOT_ANY);
			if (echan->slot[i] < 0) {
				kfree(edesc);
-				dev_err(dev, "Failed to allocate slot\n");
+				dev_err(dev, "%s: Failed to allocate slot\n",
+					__func__);
				return NULL;
			}
		}
···
		else
			src_addr += period_len;

-		dev_dbg(dev, "%s: Configure period %d of buf:\n", __func__, i);
-		dev_dbg(dev,
+		dev_vdbg(dev, "%s: Configure period %d of buf:\n", __func__, i);
+		dev_vdbg(dev,
			"\n pset[%d]:\n"
			"  chnum\t%d\n"
			"  slot\t%d\n"
···
			"  cidx\t%08x\n"
			"  lkrld\t%08x\n",
			i, echan->ch_num, echan->slot[i],
-			edesc->pset[i].opt,
-			edesc->pset[i].src,
-			edesc->pset[i].dst,
-			edesc->pset[i].a_b_cnt,
-			edesc->pset[i].ccnt,
-			edesc->pset[i].src_dst_bidx,
-			edesc->pset[i].src_dst_cidx,
-			edesc->pset[i].link_bcntrld);
+			edesc->pset[i].param.opt,
+			edesc->pset[i].param.src,
+			edesc->pset[i].param.dst,
+			edesc->pset[i].param.a_b_cnt,
+			edesc->pset[i].param.ccnt,
+			edesc->pset[i].param.src_dst_bidx,
+			edesc->pset[i].param.src_dst_cidx,
+			edesc->pset[i].param.link_bcntrld);

		edesc->absync = ret;
···
		 * Enable interrupts for every period because callback
		 * has to be called for every period.
		 */
-		edesc->pset[i].opt |= TCINTEN;
+		edesc->pset[i].param.opt |= TCINTEN;
	}

	return vchan_tx_prep(&echan->vchan, &edesc->vdesc, tx_flags);
···
	struct edma_chan *echan = data;
	struct device *dev = echan->vchan.chan.device->dev;
	struct edma_desc *edesc;
-	unsigned long flags;
	struct edmacc_param p;

	edesc = echan->edesc;
···
	switch (ch_status) {
	case EDMA_DMA_COMPLETE:
-		spin_lock_irqsave(&echan->vchan.lock, flags);
+		spin_lock(&echan->vchan.lock);

		if (edesc) {
			if (edesc->cyclic) {
				vchan_cyclic_callback(&edesc->vdesc);
			} else if (edesc->processed == edesc->pset_nr) {
				dev_dbg(dev, "Transfer complete, stopping channel %d\n", ch_num);
+				edesc->residue = 0;
				edma_stop(echan->ch_num);
				vchan_cookie_complete(&edesc->vdesc);
				edma_execute(echan);
			} else {
				dev_dbg(dev, "Intermediate transfer complete on channel %d\n", ch_num);
+
+				/* Update statistics for tx_status */
+				edesc->residue -= edesc->sg_len;
+				edesc->residue_stat = edesc->residue;
+				edesc->processed_stat = edesc->processed;
+
				edma_execute(echan);
			}
		}

-		spin_unlock_irqrestore(&echan->vchan.lock, flags);
+		spin_unlock(&echan->vchan.lock);

		break;
	case EDMA_DMA_CC_ERROR:
-		spin_lock_irqsave(&echan->vchan.lock, flags);
+		spin_lock(&echan->vchan.lock);

		edma_read_slot(EDMA_CHAN_SLOT(echan->slot[0]), &p);
···
			edma_trigger_channel(echan->ch_num);
		}

-		spin_unlock_irqrestore(&echan->vchan.lock, flags);
+		spin_unlock(&echan->vchan.lock);

		break;
	default:
···
	echan->alloced = true;
	echan->slot[0] = echan->ch_num;

-	dev_dbg(dev, "allocated channel for %u:%u\n",
+	dev_dbg(dev, "allocated channel %d for %u:%u\n", echan->ch_num,
		EDMA_CTLR(echan->ch_num), EDMA_CHAN_SLOT(echan->ch_num));

	return 0;
···
	spin_unlock_irqrestore(&echan->vchan.lock, flags);
 }

-static size_t edma_desc_size(struct edma_desc *edesc)
+static u32 edma_residue(struct edma_desc *edesc)
 {
+	bool dst = edesc->direction == DMA_DEV_TO_MEM;
+	struct edma_pset *pset = edesc->pset;
+	dma_addr_t done, pos;
	int i;
-	size_t size;

-	if (edesc->absync)
-		for (size = i = 0; i < edesc->pset_nr; i++)
-			size += (edesc->pset[i].a_b_cnt & 0xffff) *
-				(edesc->pset[i].a_b_cnt >> 16) *
-				edesc->pset[i].ccnt;
-	else
-		size = (edesc->pset[0].a_b_cnt & 0xffff) *
-			(edesc->pset[0].a_b_cnt >> 16) +
-			(edesc->pset[0].a_b_cnt & 0xffff) *
-			(SZ_64K - 1) * edesc->pset[0].ccnt;
+	/*
+	 * We always read the dst/src position from the first RamPar
+	 * pset. That's the one which is active now.
+	 */
+	pos = edma_get_position(edesc->echan->slot[0], dst);

-	return size;
+	/*
+	 * Cyclic is simple. Just subtract pset[0].addr from pos.
+	 *
+	 * We never update edesc->residue in the cyclic case, so we
+	 * can tell the remaining room to the end of the circular
+	 * buffer.
+	 */
+	if (edesc->cyclic) {
+		done = pos - pset->addr;
+		edesc->residue_stat = edesc->residue - done;
+		return edesc->residue_stat;
+	}
+
+	/*
+	 * For SG operation we catch up with the last processed
+	 * status.
+	 */
+	pset += edesc->processed_stat;
+
+	for (i = edesc->processed_stat; i < edesc->processed; i++, pset++) {
+		/*
+		 * If we are inside this pset address range, we know
+		 * this is the active one. Get the current delta and
+		 * stop walking the psets.
+		 */
+		if (pos >= pset->addr && pos < pset->addr + pset->len)
+			return edesc->residue_stat - (pos - pset->addr);
+
+		/* Otherwise mark it done and update residue_stat. */
+		edesc->processed_stat++;
+		edesc->residue_stat -= pset->len;
+	}
+	return edesc->residue_stat;
 }

 /* Check request completion status */
···
		return ret;

	spin_lock_irqsave(&echan->vchan.lock, flags);
-	vdesc = vchan_find_desc(&echan->vchan, cookie);
-	if (vdesc) {
-		txstate->residue = edma_desc_size(to_edma_desc(&vdesc->tx));
-	} else if (echan->edesc && echan->edesc->vdesc.tx.cookie == cookie) {
-		struct edma_desc *edesc = echan->edesc;
-		txstate->residue = edma_desc_size(edesc);
-	}
+	if (echan->edesc && echan->edesc->vdesc.tx.cookie == cookie)
+		txstate->residue = edma_residue(echan->edesc);
+	else if ((vdesc = vchan_find_desc(&echan->vchan, cookie)))
+		txstate->residue = to_edma_desc(&vdesc->tx)->residue;
	spin_unlock_irqrestore(&echan->vchan.lock, flags);

	return ret;
···
	}
 }

+#define EDMA_DMA_BUSWIDTHS	(BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \
+				 BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \
+				 BIT(DMA_SLAVE_BUSWIDTH_4_BYTES))
+
+static int edma_dma_device_slave_caps(struct dma_chan *dchan,
+				      struct dma_slave_caps *caps)
+{
+	caps->src_addr_widths = EDMA_DMA_BUSWIDTHS;
+	caps->dstn_addr_widths = EDMA_DMA_BUSWIDTHS;
+	caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+	caps->cmd_pause = true;
+	caps->cmd_terminate = true;
+	caps->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
+
+	return 0;
+}
+
 static void edma_dma_init(struct edma_cc *ecc, struct dma_device *dma,
			  struct device *dev)
 {
	dma->device_prep_slave_sg = edma_prep_slave_sg;
	dma->device_prep_dma_cyclic = edma_prep_dma_cyclic;
+	dma->device_prep_dma_memcpy = edma_prep_dma_memcpy;
	dma->device_alloc_chan_resources = edma_alloc_chan_resources;
	dma->device_free_chan_resources = edma_free_chan_resources;
	dma->device_issue_pending = edma_issue_pending;
	dma->device_tx_status = edma_tx_status;
	dma->device_control = edma_control;
+	dma->device_slave_caps = edma_dma_device_slave_caps;
	dma->dev = dev;
+
+	/*
+	 * code using dma memcpy must make sure alignment of
+	 * length is at dma->copy_align boundary.
+	 */
+	dma->copy_align = DMA_SLAVE_BUSWIDTH_4_BYTES;

	INIT_LIST_HEAD(&dma->channels);
 }
···
	dma_cap_zero(ecc->dma_slave.cap_mask);
	dma_cap_set(DMA_SLAVE, ecc->dma_slave.cap_mask);
+	dma_cap_set(DMA_CYCLIC, ecc->dma_slave.cap_mask);
+	dma_cap_set(DMA_MEMCPY, ecc->dma_slave.cap_mask);

	edma_dma_init(ecc, &ecc->dma_slave, &pdev->dev);
+150 -71
drivers/memory/mvebu-devbus.c
···
 * Marvell EBU SoC Device Bus Controller
 * (memory controller for NOR/NAND/SRAM/FPGA devices)
 *
- * Copyright (C) 2013 Marvell
+ * Copyright (C) 2013-2014 Marvell
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
···
 #include <linux/platform_device.h>

 /* Register definitions */
-#define DEV_WIDTH_BIT		30
-#define BADR_SKEW_BIT		28
-#define RD_HOLD_BIT		23
-#define ACC_NEXT_BIT		17
-#define RD_SETUP_BIT		12
-#define ACC_FIRST_BIT		6
+#define ARMADA_DEV_WIDTH_SHIFT		30
+#define ARMADA_BADR_SKEW_SHIFT		28
+#define ARMADA_RD_HOLD_SHIFT		23
+#define ARMADA_ACC_NEXT_SHIFT		17
+#define ARMADA_RD_SETUP_SHIFT		12
+#define ARMADA_ACC_FIRST_SHIFT		6

-#define SYNC_ENABLE_BIT		24
-#define WR_HIGH_BIT		16
-#define WR_LOW_BIT		8
+#define ARMADA_SYNC_ENABLE_SHIFT	24
+#define ARMADA_WR_HIGH_SHIFT		16
+#define ARMADA_WR_LOW_SHIFT		8

-#define READ_PARAM_OFFSET	0x0
-#define WRITE_PARAM_OFFSET	0x4
+#define ARMADA_READ_PARAM_OFFSET	0x0
+#define ARMADA_WRITE_PARAM_OFFSET	0x4
+
+#define ORION_RESERVED			(0x2 << 30)
+#define ORION_BADR_SKEW_SHIFT		28
+#define ORION_WR_HIGH_EXT_BIT		BIT(27)
+#define ORION_WR_HIGH_EXT_MASK		0x8
+#define ORION_WR_LOW_EXT_BIT		BIT(26)
+#define ORION_WR_LOW_EXT_MASK		0x8
+#define ORION_ALE_WR_EXT_BIT		BIT(25)
+#define ORION_ALE_WR_EXT_MASK		0x8
+#define ORION_ACC_NEXT_EXT_BIT		BIT(24)
+#define ORION_ACC_NEXT_EXT_MASK		0x10
+#define ORION_ACC_FIRST_EXT_BIT		BIT(23)
+#define ORION_ACC_FIRST_EXT_MASK	0x10
+#define ORION_TURN_OFF_EXT_BIT		BIT(22)
+#define ORION_TURN_OFF_EXT_MASK		0x8
+#define ORION_DEV_WIDTH_SHIFT		20
+#define ORION_WR_HIGH_SHIFT		17
+#define ORION_WR_HIGH_MASK		0x7
+#define ORION_WR_LOW_SHIFT		14
+#define ORION_WR_LOW_MASK		0x7
+#define ORION_ALE_WR_SHIFT		11
+#define ORION_ALE_WR_MASK		0x7
+#define ORION_ACC_NEXT_SHIFT		7
+#define ORION_ACC_NEXT_MASK		0xF
+#define ORION_ACC_FIRST_SHIFT		3
+#define ORION_ACC_FIRST_MASK		0xF
+#define ORION_TURN_OFF_SHIFT		0
+#define ORION_TURN_OFF_MASK		0x7

 struct devbus_read_params {
	u32 bus_width;
···
	return 0;
 }

-static int devbus_set_timing_params(struct devbus *devbus,
-				    struct device_node *node)
+static int devbus_get_timing_params(struct devbus *devbus,
+				    struct device_node *node,
+				    struct devbus_read_params *r,
+				    struct devbus_write_params *w)
 {
-	struct devbus_read_params r;
-	struct devbus_write_params w;
-	u32 value;
	int err;

-	dev_dbg(devbus->dev, "Setting timing parameter, tick is %lu ps\n",
-		devbus->tick_ps);
-
-	/* Get read timings */
-	err = of_property_read_u32(node, "devbus,bus-width", &r.bus_width);
+	err = of_property_read_u32(node, "devbus,bus-width", &r->bus_width);
	if (err < 0) {
		dev_err(devbus->dev,
			"%s has no 'devbus,bus-width' property\n",
···
	 * The bus width is encoded into the register as 0 for 8 bits,
	 * and 1 for 16 bits, so we do the necessary conversion here.
	 */
-	if (r.bus_width == 8)
-		r.bus_width = 0;
-	else if (r.bus_width == 16)
-		r.bus_width = 1;
+	if (r->bus_width == 8)
+		r->bus_width = 0;
+	else if (r->bus_width == 16)
+		r->bus_width = 1;
	else {
-		dev_err(devbus->dev, "invalid bus width %d\n", r.bus_width);
+		dev_err(devbus->dev, "invalid bus width %d\n", r->bus_width);
		return -EINVAL;
	}

	err = get_timing_param_ps(devbus, node, "devbus,badr-skew-ps",
-				  &r.badr_skew);
+				  &r->badr_skew);
	if (err < 0)
		return err;

	err = get_timing_param_ps(devbus, node, "devbus,turn-off-ps",
-				  &r.turn_off);
+				  &r->turn_off);
	if (err < 0)
		return err;

	err = get_timing_param_ps(devbus, node, "devbus,acc-first-ps",
-				  &r.acc_first);
+				  &r->acc_first);
	if (err < 0)
		return err;

	err = get_timing_param_ps(devbus, node, "devbus,acc-next-ps",
-				  &r.acc_next);
+				  &r->acc_next);
	if (err < 0)
		return err;

-	err = get_timing_param_ps(devbus, node, "devbus,rd-setup-ps",
-				  &r.rd_setup);
-	if (err < 0)
-		return err;
+	if (of_device_is_compatible(devbus->dev->of_node, "marvell,mvebu-devbus")) {
+		err = get_timing_param_ps(devbus, node, "devbus,rd-setup-ps",
+					  &r->rd_setup);
+		if (err < 0)
+			return err;

-	err = get_timing_param_ps(devbus, node, "devbus,rd-hold-ps",
-				  &r.rd_hold);
-	if (err < 0)
-		return err;
+		err = get_timing_param_ps(devbus, node, "devbus,rd-hold-ps",
+					  &r->rd_hold);
+		if (err < 0)
+			return err;

-	/* Get write timings */
-	err = of_property_read_u32(node, "devbus,sync-enable",
-				   &w.sync_enable);
-	if (err < 0) {
-		dev_err(devbus->dev,
-			"%s has no 'devbus,sync-enable' property\n",
-			node->full_name);
-		return err;
+		err = of_property_read_u32(node, "devbus,sync-enable",
+					   &w->sync_enable);
+		if (err < 0) {
+			dev_err(devbus->dev,
+				"%s has no 'devbus,sync-enable' property\n",
+				node->full_name);
+			return err;
+		}
	}

	err = get_timing_param_ps(devbus, node, "devbus,ale-wr-ps",
-				  &w.ale_wr);
+				  &w->ale_wr);
	if (err < 0)
		return err;

	err = get_timing_param_ps(devbus, node, "devbus,wr-low-ps",
-				  &w.wr_low);
+				  &w->wr_low);
	if (err < 0)
		return err;

	err = get_timing_param_ps(devbus, node, "devbus,wr-high-ps",
-				  &w.wr_high);
+				  &w->wr_high);
	if (err < 0)
		return err;

+	return 0;
+}
+
+static void devbus_orion_set_timing_params(struct devbus *devbus,
+					  struct device_node *node,
+					  struct devbus_read_params *r,
+					  struct devbus_write_params *w)
+{
+	u32 value;
+
+	/*
+	 * The hardware designers found it would be a good idea to
+	 * split most of the values in the register into two fields:
+	 * one containing all the low-order bits, and another one
+	 * containing just the high-order bit. For all of those
+	 * fields, we have to split the value into these two parts.
+	 */
+	value =	(r->turn_off & ORION_TURN_OFF_MASK) << ORION_TURN_OFF_SHIFT |
+		(r->acc_first & ORION_ACC_FIRST_MASK) << ORION_ACC_FIRST_SHIFT |
+		(r->acc_next & ORION_ACC_NEXT_MASK) << ORION_ACC_NEXT_SHIFT |
+		(w->ale_wr & ORION_ALE_WR_MASK) << ORION_ALE_WR_SHIFT |
+		(w->wr_low & ORION_WR_LOW_MASK) << ORION_WR_LOW_SHIFT |
+		(w->wr_high & ORION_WR_HIGH_MASK) << ORION_WR_HIGH_SHIFT |
+		r->bus_width << ORION_DEV_WIDTH_SHIFT |
+		((r->turn_off & ORION_TURN_OFF_EXT_MASK) ? ORION_TURN_OFF_EXT_BIT : 0) |
+		((r->acc_first & ORION_ACC_FIRST_EXT_MASK) ? ORION_ACC_FIRST_EXT_BIT : 0) |
+		((r->acc_next & ORION_ACC_NEXT_EXT_MASK) ? ORION_ACC_NEXT_EXT_BIT : 0) |
+		((w->ale_wr & ORION_ALE_WR_EXT_MASK) ? ORION_ALE_WR_EXT_BIT : 0) |
+		((w->wr_low & ORION_WR_LOW_EXT_MASK) ? ORION_WR_LOW_EXT_BIT : 0) |
+		((w->wr_high & ORION_WR_HIGH_EXT_MASK) ? ORION_WR_HIGH_EXT_BIT : 0) |
+		(r->badr_skew << ORION_BADR_SKEW_SHIFT) |
+		ORION_RESERVED;
+
+	writel(value, devbus->base);
+}
+
+static void devbus_armada_set_timing_params(struct devbus *devbus,
+					  struct device_node *node,
+					  struct devbus_read_params *r,
+					  struct devbus_write_params *w)
+{
+	u32 value;
+
	/* Set read timings */
-	value = r.bus_width << DEV_WIDTH_BIT |
-		r.badr_skew << BADR_SKEW_BIT |
-		r.rd_hold   << RD_HOLD_BIT   |
-		r.acc_next  << ACC_NEXT_BIT  |
-		r.rd_setup  << RD_SETUP_BIT  |
-		r.acc_first << ACC_FIRST_BIT |
-		r.turn_off;
+	value = r->bus_width << ARMADA_DEV_WIDTH_SHIFT |
+		r->badr_skew << ARMADA_BADR_SKEW_SHIFT |
+		r->rd_hold   << ARMADA_RD_HOLD_SHIFT   |
+		r->acc_next  << ARMADA_ACC_NEXT_SHIFT  |
+		r->rd_setup  << ARMADA_RD_SETUP_SHIFT  |
+		r->acc_first << ARMADA_ACC_FIRST_SHIFT |
+		r->turn_off;

	dev_dbg(devbus->dev, "read parameters register 0x%p = 0x%x\n",
-		devbus->base + READ_PARAM_OFFSET,
+		devbus->base + ARMADA_READ_PARAM_OFFSET,
		value);

-	writel(value, devbus->base + READ_PARAM_OFFSET);
+	writel(value, devbus->base + ARMADA_READ_PARAM_OFFSET);

	/* Set write timings */
-	value = w.sync_enable << SYNC_ENABLE_BIT |
-		w.wr_low      << WR_LOW_BIT      |
-		w.wr_high     << WR_HIGH_BIT     |
-		w.ale_wr;
+	value = w->sync_enable << ARMADA_SYNC_ENABLE_SHIFT |
+		w->wr_low      << ARMADA_WR_LOW_SHIFT      |
+		w->wr_high     << ARMADA_WR_HIGH_SHIFT     |
+		w->ale_wr;

	dev_dbg(devbus->dev, "write parameters register: 0x%p = 0x%x\n",
-		devbus->base + WRITE_PARAM_OFFSET,
+		devbus->base + ARMADA_WRITE_PARAM_OFFSET,
		value);

-	writel(value, devbus->base + WRITE_PARAM_OFFSET);
-
-	return 0;
+
writel(value, devbus->base + ARMADA_WRITE_PARAM_OFFSET); 231 250 } 232 251 233 252 static int mvebu_devbus_probe(struct platform_device *pdev) 234 253 { 235 254 struct device *dev = &pdev->dev; 236 255 struct device_node *node = pdev->dev.of_node; 256 + struct devbus_read_params r; 257 + struct devbus_write_params w; 237 258 struct devbus *devbus; 238 259 struct resource *res; 239 260 struct clk *clk; ··· 307 240 rate = clk_get_rate(clk) / 1000; 308 241 devbus->tick_ps = 1000000000 / rate; 309 242 310 - /* Read the device tree node and set the new timing parameters */ 311 - err = devbus_set_timing_params(devbus, node); 312 - if (err < 0) 313 - return err; 243 + dev_dbg(devbus->dev, "Setting timing parameter, tick is %lu ps\n", 244 + devbus->tick_ps); 245 + 246 + if (!of_property_read_bool(node, "devbus,keep-config")) { 247 + /* Read the Device Tree node */ 248 + err = devbus_get_timing_params(devbus, node, &r, &w); 249 + if (err < 0) 250 + return err; 251 + 252 + /* Set the new timing parameters */ 253 + if (of_device_is_compatible(node, "marvell,orion-devbus")) 254 + devbus_orion_set_timing_params(devbus, node, &r, &w); 255 + else 256 + devbus_armada_set_timing_params(devbus, node, &r, &w); 257 + } 314 258 315 259 /* 316 260 * We need to create a child device explicitly from here to ··· 337 259 338 260 static const struct of_device_id mvebu_devbus_of_match[] = { 339 261 { .compatible = "marvell,mvebu-devbus" }, 262 + { .compatible = "marvell,orion-devbus" }, 340 263 {}, 341 264 }; 342 265 MODULE_DEVICE_TABLE(of, mvebu_devbus_of_match);
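The "split most of the values in the register into two fields" scheme that devbus_orion_set_timing_params() has to deal with can be sketched in isolation. This is only an illustration of the technique: the shift, mask, and extension-bit positions below are invented for the sketch and do not reproduce the real ORION_* register layout.

```c
#include <stdint.h>

/* A hypothetical 5-bit timing value split the Orion way: the four
 * low-order bits live in a shifted field, while the high-order bit
 * goes into a separate "extension" bit elsewhere in the register.
 * All positions here are assumptions made for this sketch. */
#define FIELD_SHIFT	3
#define FIELD_MASK	0xFu		/* low-order bits of the value */
#define FIELD_EXT_MASK	0x10u		/* the value's high-order bit */
#define FIELD_EXT_BIT	(1u << 24)	/* assumed register position */

static inline uint32_t pack_split_field(uint32_t v)
{
	/* low bits into the main field, high bit into the extension bit */
	return ((v & FIELD_MASK) << FIELD_SHIFT) |
	       ((v & FIELD_EXT_MASK) ? FIELD_EXT_BIT : 0);
}
```

A value that fits in four bits only touches the main field; a five-bit value additionally sets the extension bit, which is why the driver has to test each `*_EXT_MASK` separately.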
drivers/power/reset/Kconfig | +15
···
 	  Instead they restart, and u-boot holds the SoC until the
 	  user presses a key. u-boot then boots into Linux.
 
+config POWER_RESET_SUN6I
+	bool "Allwinner A31 SoC reset driver"
+	depends on ARCH_SUNXI
+	depends on POWER_RESET
+	help
+	  Reboot support for the Allwinner A31 SoCs.
+
 config POWER_RESET_VEXPRESS
 	bool "ARM Versatile Express power-off and reset driver"
 	depends on ARM || ARM64
···
 	depends on POWER_RESET
 	help
 	  Reboot support for the APM SoC X-Gene Eval boards.
+
+config POWER_RESET_KEYSTONE
+	bool "Keystone reset driver"
+	depends on ARCH_KEYSTONE
+	select MFD_SYSCON
+	help
+	  Reboot support for the KEYSTONE SoCs.
drivers/power/reset/Makefile | +2
···
 obj-$(CONFIG_POWER_RESET_MSM) += msm-poweroff.o
 obj-$(CONFIG_POWER_RESET_QNAP) += qnap-poweroff.o
 obj-$(CONFIG_POWER_RESET_RESTART) += restart-poweroff.o
+obj-$(CONFIG_POWER_RESET_SUN6I) += sun6i-reboot.o
 obj-$(CONFIG_POWER_RESET_VEXPRESS) += vexpress-poweroff.o
 obj-$(CONFIG_POWER_RESET_XGENE) += xgene-reboot.o
+obj-$(CONFIG_POWER_RESET_KEYSTONE) += keystone-reset.o
drivers/power/reset/keystone-reset.c | +166
···
+/*
+ * TI keystone reboot driver
+ *
+ * Copyright (C) 2014 Texas Instruments Incorporated. http://www.ti.com/
+ *
+ * Author: Ivan Khoronzhuk <ivan.khoronzhuk@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/reboot.h>
+#include <linux/regmap.h>
+#include <asm/system_misc.h>
+#include <linux/mfd/syscon.h>
+#include <linux/of_platform.h>
+
+#define RSTYPE_RG	0x0
+#define RSCTRL_RG	0x4
+#define RSCFG_RG	0x8
+#define RSISO_RG	0xc
+
+#define RSCTRL_KEY_MASK		0x0000ffff
+#define RSCTRL_RESET_MASK	BIT(16)
+#define RSCTRL_KEY		0x5a69
+
+#define RSMUX_OMODE_MASK	0xe
+#define RSMUX_OMODE_RESET_ON	0xa
+#define RSMUX_OMODE_RESET_OFF	0x0
+#define RSMUX_LOCK_MASK		0x1
+#define RSMUX_LOCK_SET		0x1
+
+#define RSCFG_RSTYPE_SOFT	0x300f
+#define RSCFG_RSTYPE_HARD	0x0
+
+#define WDT_MUX_NUMBER		0x4
+
+static int rspll_offset;
+static struct regmap *pllctrl_regs;
+
+/**
+ * rsctrl_enable_rspll_write - enable access to RSCTRL, RSCFG
+ * To be able to access the RSCTRL and RSCFG registers,
+ * we have to write a key first.
+ */
+static inline int rsctrl_enable_rspll_write(void)
+{
+	return regmap_update_bits(pllctrl_regs, rspll_offset + RSCTRL_RG,
+				  RSCTRL_KEY_MASK, RSCTRL_KEY);
+}
+
+static void rsctrl_restart(enum reboot_mode mode, const char *cmd)
+{
+	/* enable write access to RSTCTRL */
+	rsctrl_enable_rspll_write();
+
+	/* reset the SOC */
+	regmap_update_bits(pllctrl_regs, rspll_offset + RSCTRL_RG,
+			   RSCTRL_RESET_MASK, 0);
+}
+
+static struct of_device_id rsctrl_of_match[] = {
+	{.compatible = "ti,keystone-reset", },
+	{},
+};
+
+static int rsctrl_probe(struct platform_device *pdev)
+{
+	int i;
+	int ret;
+	u32 val;
+	unsigned int rg;
+	u32 rsmux_offset;
+	struct regmap *devctrl_regs;
+	struct device *dev = &pdev->dev;
+	struct device_node *np = dev->of_node;
+
+	if (!np)
+		return -ENODEV;
+
+	/* get regmaps */
+	pllctrl_regs = syscon_regmap_lookup_by_phandle(np, "ti,syscon-pll");
+	if (IS_ERR(pllctrl_regs))
+		return PTR_ERR(pllctrl_regs);
+
+	devctrl_regs = syscon_regmap_lookup_by_phandle(np, "ti,syscon-dev");
+	if (IS_ERR(devctrl_regs))
+		return PTR_ERR(devctrl_regs);
+
+	ret = of_property_read_u32_index(np, "ti,syscon-pll", 1, &rspll_offset);
+	if (ret) {
+		dev_err(dev, "couldn't read the reset pll offset!\n");
+		return -EINVAL;
+	}
+
+	ret = of_property_read_u32_index(np, "ti,syscon-dev", 1, &rsmux_offset);
+	if (ret) {
+		dev_err(dev, "couldn't read the rsmux offset!\n");
+		return -EINVAL;
+	}
+
+	/* set soft/hard reset */
+	val = of_property_read_bool(np, "ti,soft-reset");
+	val = val ? RSCFG_RSTYPE_SOFT : RSCFG_RSTYPE_HARD;
+
+	ret = rsctrl_enable_rspll_write();
+	if (ret)
+		return ret;
+
+	ret = regmap_write(pllctrl_regs, rspll_offset + RSCFG_RG, val);
+	if (ret)
+		return ret;
+
+	arm_pm_restart = rsctrl_restart;
+
+	/* disable a reset isolation for all module clocks */
+	ret = regmap_write(pllctrl_regs, rspll_offset + RSISO_RG, 0);
+	if (ret)
+		return ret;
+
+	/* enable a reset for watchdogs from wdt-list */
+	for (i = 0; i < WDT_MUX_NUMBER; i++) {
+		ret = of_property_read_u32_index(np, "ti,wdt-list", i, &val);
+		if (ret == -EOVERFLOW && !i) {
+			dev_err(dev, "ti,wdt-list property has to contain at least one entry\n");
+			return -EINVAL;
+		} else if (ret) {
+			break;
+		}
+
+		if (val >= WDT_MUX_NUMBER) {
+			dev_err(dev, "ti,wdt-list property can contain only numbers < 4\n");
+			return -EINVAL;
+		}
+
+		rg = rsmux_offset + val * 4;
+
+		ret = regmap_update_bits(devctrl_regs, rg, RSMUX_OMODE_MASK,
+					 RSMUX_OMODE_RESET_ON |
+					 RSMUX_LOCK_SET);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static struct platform_driver rsctrl_driver = {
+	.probe = rsctrl_probe,
+	.driver = {
+		.owner = THIS_MODULE,
+		.name = KBUILD_MODNAME,
+		.of_match_table = rsctrl_of_match,
+	},
+};
+module_platform_driver(rsctrl_driver);
+
+MODULE_AUTHOR("Ivan Khoronzhuk <ivan.khoronzhuk@ti.com>");
+MODULE_DESCRIPTION("Texas Instruments keystone reset driver");
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS("platform:" KBUILD_MODNAME);
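The driver above unlocks RSCTRL by writing RSCTRL_KEY into the register's low 16 bits through regmap_update_bits(). The read-modify-write that call performs reduces to a simple mask operation, modeled here standalone (a sketch of the semantics, not the kernel's regmap implementation):

```c
#include <stdint.h>

#define RSCTRL_KEY_MASK	0x0000ffff
#define RSCTRL_KEY	0x5a69

/* Model of what regmap_update_bits() does to a register value:
 * only the bits selected by mask are replaced by val; every other
 * bit of the register is preserved. */
static inline uint32_t update_bits(uint32_t reg, uint32_t mask, uint32_t val)
{
	return (reg & ~mask) | (val & mask);
}
```

So writing the key touches only bits [15:0], leaving the RESET bit and the rest of RSCTRL untouched until the subsequent update that clears RSCTRL_RESET_MASK actually triggers the reboot.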
drivers/power/reset/sun6i-reboot.c | +85
···
+/*
+ * Allwinner A31 SoCs reset code
+ *
+ * Copyright (C) 2012-2014 Maxime Ripard
+ *
+ * Maxime Ripard <maxime.ripard@free-electrons.com>
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2.  This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+#include <linux/delay.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+#include <linux/reboot.h>
+
+#include <asm/system_misc.h>
+
+#define SUN6I_WATCHDOG1_IRQ_REG		0x00
+#define SUN6I_WATCHDOG1_CTRL_REG	0x10
+#define SUN6I_WATCHDOG1_CTRL_RESTART		BIT(0)
+#define SUN6I_WATCHDOG1_CONFIG_REG	0x14
+#define SUN6I_WATCHDOG1_CONFIG_RESTART		BIT(0)
+#define SUN6I_WATCHDOG1_CONFIG_IRQ		BIT(1)
+#define SUN6I_WATCHDOG1_MODE_REG	0x18
+#define SUN6I_WATCHDOG1_MODE_ENABLE		BIT(0)
+
+static void __iomem *wdt_base;
+
+static void sun6i_wdt_restart(enum reboot_mode mode, const char *cmd)
+{
+	if (!wdt_base)
+		return;
+
+	/* Disable interrupts */
+	writel(0, wdt_base + SUN6I_WATCHDOG1_IRQ_REG);
+
+	/* We want to disable the IRQ and just reset the whole system */
+	writel(SUN6I_WATCHDOG1_CONFIG_RESTART,
+	       wdt_base + SUN6I_WATCHDOG1_CONFIG_REG);
+
+	/* Enable timer. The default and lowest interval value is 0.5s */
+	writel(SUN6I_WATCHDOG1_MODE_ENABLE,
+	       wdt_base + SUN6I_WATCHDOG1_MODE_REG);
+
+	/* Restart the watchdog. */
+	writel(SUN6I_WATCHDOG1_CTRL_RESTART,
+	       wdt_base + SUN6I_WATCHDOG1_CTRL_REG);
+
+	while (1) {
+		mdelay(5);
+		writel(SUN6I_WATCHDOG1_MODE_ENABLE,
+		       wdt_base + SUN6I_WATCHDOG1_MODE_REG);
+	}
+}
+
+static int sun6i_reboot_probe(struct platform_device *pdev)
+{
+	wdt_base = of_iomap(pdev->dev.of_node, 0);
+	if (!wdt_base) {
+		WARN(1, "failed to map watchdog base address");
+		return -ENODEV;
+	}
+
+	arm_pm_restart = sun6i_wdt_restart;
+
+	return 0;
+}
+
+static struct of_device_id sun6i_reboot_of_match[] = {
+	{ .compatible = "allwinner,sun6i-a31-wdt" },
+	{}
+};
+
+static struct platform_driver sun6i_reboot_driver = {
+	.probe = sun6i_reboot_probe,
+	.driver = {
+		.name = "sun6i-reboot",
+		.of_match_table = sun6i_reboot_of_match,
+	},
+};
+module_platform_driver(sun6i_reboot_driver);
drivers/reset/Makefile | +1
···
 obj-$(CONFIG_RESET_CONTROLLER) += core.o
+obj-$(CONFIG_ARCH_SOCFPGA) += reset-socfpga.o
 obj-$(CONFIG_ARCH_SUNXI) += reset-sunxi.o
 obj-$(CONFIG_ARCH_STI) += sti/
drivers/reset/reset-socfpga.c | +146
···
+/*
+ * Copyright 2014 Steffen Trumtrar <s.trumtrar@pengutronix.de>
+ *
+ * based on
+ * Allwinner SoCs Reset Controller driver
+ *
+ * Copyright 2013 Maxime Ripard
+ *
+ * Maxime Ripard <maxime.ripard@free-electrons.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/err.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/reset-controller.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+
+#define NR_BANKS	4
+#define OFFSET_MODRST	0x10
+
+struct socfpga_reset_data {
+	spinlock_t			lock;
+	void __iomem			*membase;
+	struct reset_controller_dev	rcdev;
+};
+
+static int socfpga_reset_assert(struct reset_controller_dev *rcdev,
+				unsigned long id)
+{
+	struct socfpga_reset_data *data = container_of(rcdev,
+						       struct socfpga_reset_data,
+						       rcdev);
+	int bank = id / BITS_PER_LONG;
+	int offset = id % BITS_PER_LONG;
+	unsigned long flags;
+	u32 reg;
+
+	spin_lock_irqsave(&data->lock, flags);
+
+	reg = readl(data->membase + OFFSET_MODRST + (bank * NR_BANKS));
+	writel(reg | BIT(offset), data->membase + OFFSET_MODRST +
+	       (bank * NR_BANKS));
+	spin_unlock_irqrestore(&data->lock, flags);
+
+	return 0;
+}
+
+static int socfpga_reset_deassert(struct reset_controller_dev *rcdev,
+				  unsigned long id)
+{
+	struct socfpga_reset_data *data = container_of(rcdev,
+						       struct socfpga_reset_data,
+						       rcdev);
+
+	int bank = id / BITS_PER_LONG;
+	int offset = id % BITS_PER_LONG;
+	unsigned long flags;
+	u32 reg;
+
+	spin_lock_irqsave(&data->lock, flags);
+
+	reg = readl(data->membase + OFFSET_MODRST + (bank * NR_BANKS));
+	writel(reg & ~BIT(offset), data->membase + OFFSET_MODRST +
+	       (bank * NR_BANKS));
+
+	spin_unlock_irqrestore(&data->lock, flags);
+
+	return 0;
+}
+
+static struct reset_control_ops socfpga_reset_ops = {
+	.assert		= socfpga_reset_assert,
+	.deassert	= socfpga_reset_deassert,
+};
+
+static int socfpga_reset_probe(struct platform_device *pdev)
+{
+	struct socfpga_reset_data *data;
+	struct resource *res;
+
+	/*
+	 * The binding was mainlined without the required property.
+	 * Do not continue, when we encounter an old DT.
+	 */
+	if (!of_find_property(pdev->dev.of_node, "#reset-cells", NULL)) {
+		dev_err(&pdev->dev, "%s missing #reset-cells property\n",
+			pdev->dev.of_node->full_name);
+		return -EINVAL;
+	}
+
+	data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	data->membase = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(data->membase))
+		return PTR_ERR(data->membase);
+
+	spin_lock_init(&data->lock);
+
+	data->rcdev.owner = THIS_MODULE;
+	data->rcdev.nr_resets = NR_BANKS * BITS_PER_LONG;
+	data->rcdev.ops = &socfpga_reset_ops;
+	data->rcdev.of_node = pdev->dev.of_node;
+	reset_controller_register(&data->rcdev);
+
+	return 0;
+}
+
+static int socfpga_reset_remove(struct platform_device *pdev)
+{
+	struct socfpga_reset_data *data = platform_get_drvdata(pdev);
+
+	reset_controller_unregister(&data->rcdev);
+
+	return 0;
+}
+
+static const struct of_device_id socfpga_reset_dt_ids[] = {
+	{ .compatible = "altr,rst-mgr", },
+	{ /* sentinel */ },
+};
+
+static struct platform_driver socfpga_reset_driver = {
+	.probe	= socfpga_reset_probe,
+	.remove	= socfpga_reset_remove,
+	.driver = {
+		.name		= "socfpga-reset",
+		.owner		= THIS_MODULE,
+		.of_match_table	= socfpga_reset_dt_ids,
+	},
+};
+module_platform_driver(socfpga_reset_driver);
+
+MODULE_AUTHOR("Steffen Trumtrar <s.trumtrar@pengutronix.de>");
+MODULE_DESCRIPTION("Socfpga Reset Controller Driver");
+MODULE_LICENSE("GPL");
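The assert/deassert callbacks above decode a flat reset id into a bank register plus a bit within it. That mapping can be checked standalone (assuming 32 reset lines per bank and the 4-byte register stride the driver uses; the driver itself divides by BITS_PER_LONG):

```c
#include <stdint.h>

#define OFFSET_MODRST	0x10
#define BANK_STRIDE	4	/* bytes between consecutive MODRST registers */
#define BITS_PER_BANK	32	/* reset lines controlled by one register */

/* Byte offset of the MODRST register that holds reset line `id`. */
static inline unsigned int modrst_offset(unsigned long id)
{
	return OFFSET_MODRST + (id / BITS_PER_BANK) * BANK_STRIDE;
}

/* Mask of that line's bit inside its bank register. */
static inline uint32_t modrst_mask(unsigned long id)
{
	return 1u << (id % BITS_PER_BANK);
}
```

For example, reset id 35 lands in the second bank (offset 0x14) at bit 3, which is exactly the register the driver read-modify-writes under its spinlock.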
drivers/reset/reset-sunxi.c | +18 -3
···
 
 static int sunxi_reset_probe(struct platform_device *pdev)
 {
-	return sunxi_reset_init(pdev->dev.of_node);
+	struct sunxi_reset_data *data;
+	struct resource *res;
+
+	data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	data->membase = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(data->membase))
+		return PTR_ERR(data->membase);
+
+	data->rcdev.owner = THIS_MODULE;
+	data->rcdev.nr_resets = resource_size(res) * 32;
+	data->rcdev.ops = &sunxi_reset_ops;
+	data->rcdev.of_node = pdev->dev.of_node;
+
+	return reset_controller_register(&data->rcdev);
 }
 
 static int sunxi_reset_remove(struct platform_device *pdev)
···
 	struct sunxi_reset_data *data = platform_get_drvdata(pdev);
 
 	reset_controller_unregister(&data->rcdev);
-	iounmap(data->membase);
-	kfree(data);
 
 	return 0;
 }
drivers/soc/Kconfig | +5
···
+menu "SOC (System On Chip) specific Drivers"
+
+source "drivers/soc/qcom/Kconfig"
+
+endmenu
drivers/soc/Makefile | +5
···
+#
+# Makefile for the Linux Kernel SOC specific device drivers.
+#
+
+obj-$(CONFIG_ARCH_QCOM)	+= qcom/
drivers/soc/qcom/Kconfig | +11
···
+#
+# QCOM Soc drivers
+#
+config QCOM_GSBI
+	tristate "QCOM General Serial Bus Interface"
+	depends on ARCH_QCOM
+	help
+	  Say y here to enable GSBI support.  The GSBI provides control
+	  functions for connecting the underlying serial UART, SPI, and I2C
+	  devices to the output pins.
drivers/soc/qcom/Makefile | +1
···
+obj-$(CONFIG_QCOM_GSBI)	+= qcom_gsbi.o
drivers/soc/qcom/qcom_gsbi.c | +85
···
+/*
+ * Copyright (c) 2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/clk.h>
+#include <linux/err.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+
+#define GSBI_CTRL_REG		0x0000
+#define GSBI_PROTOCOL_SHIFT	4
+
+static int gsbi_probe(struct platform_device *pdev)
+{
+	struct device_node *node = pdev->dev.of_node;
+	struct resource *res;
+	void __iomem *base;
+	struct clk *hclk;
+	u32 mode, crci = 0;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(base))
+		return PTR_ERR(base);
+
+	if (of_property_read_u32(node, "qcom,mode", &mode)) {
+		dev_err(&pdev->dev, "missing mode configuration\n");
+		return -EINVAL;
+	}
+
+	/* not required, so default to 0 if not present */
+	of_property_read_u32(node, "qcom,crci", &crci);
+
+	dev_info(&pdev->dev, "GSBI port protocol: %d crci: %d\n", mode, crci);
+
+	hclk = devm_clk_get(&pdev->dev, "iface");
+	if (IS_ERR(hclk))
+		return PTR_ERR(hclk);
+
+	clk_prepare_enable(hclk);
+
+	writel_relaxed((mode << GSBI_PROTOCOL_SHIFT) | crci,
+		       base + GSBI_CTRL_REG);
+
+	/* make sure the gsbi control write is not reordered */
+	wmb();
+
+	clk_disable_unprepare(hclk);
+
+	return of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev);
+}
+
+static const struct of_device_id gsbi_dt_match[] = {
+	{ .compatible = "qcom,gsbi-v1.0.0", },
+	{ },
+};
+
+MODULE_DEVICE_TABLE(of, gsbi_dt_match);
+
+static struct platform_driver gsbi_driver = {
+	.driver = {
+		.name		= "gsbi",
+		.owner		= THIS_MODULE,
+		.of_match_table	= gsbi_dt_match,
+	},
+	.probe = gsbi_probe,
+};
+
+module_platform_driver(gsbi_driver);
+
+MODULE_AUTHOR("Andy Gross <agross@codeaurora.org>");
+MODULE_DESCRIPTION("QCOM GSBI driver");
+MODULE_LICENSE("GPL v2");
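gsbi_probe() packs the DT-provided protocol mode and CRCI value into a single control word before writing it to GSBI_CTRL_REG. The packing itself is pure bit arithmetic and can be checked standalone (same GSBI_PROTOCOL_SHIFT as above; the mode/crci values below are arbitrary examples, not real protocol codes):

```c
#include <stdint.h>

#define GSBI_PROTOCOL_SHIFT	4

/* Control word written to GSBI_CTRL_REG: the protocol mode sits in
 * the bits above GSBI_PROTOCOL_SHIFT, the CRCI selection in the
 * low bits. */
static inline uint32_t gsbi_ctrl_word(uint32_t mode, uint32_t crci)
{
	return (mode << GSBI_PROTOCOL_SHIFT) | crci;
}
```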
drivers/tty/serial/msm_serial.c | +2 -46
···
 	struct clk		*clk;
 	struct clk		*pclk;
 	unsigned int		imr;
-	void __iomem		*gsbi_base;
 	int			is_uartdm;
 	unsigned int		old_snap_state;
 };
···
 static void msm_release_port(struct uart_port *port)
 {
 	struct platform_device *pdev = to_platform_device(port->dev);
-	struct msm_port *msm_port = UART_TO_MSM(port);
 	struct resource *uart_resource;
-	struct resource *gsbi_resource;
 	resource_size_t size;
 
 	uart_resource = platform_get_resource(pdev, IORESOURCE_MEM, 0);
···
 	release_mem_region(port->mapbase, size);
 	iounmap(port->membase);
 	port->membase = NULL;
-
-	if (msm_port->gsbi_base) {
-		writel_relaxed(GSBI_PROTOCOL_IDLE,
-			       msm_port->gsbi_base + GSBI_CONTROL);
-
-		gsbi_resource = platform_get_resource(pdev, IORESOURCE_MEM, 1);
-		if (unlikely(!gsbi_resource))
-			return;
-
-		size = resource_size(gsbi_resource);
-		release_mem_region(gsbi_resource->start, size);
-		iounmap(msm_port->gsbi_base);
-		msm_port->gsbi_base = NULL;
-	}
 }
 
 static int msm_request_port(struct uart_port *port)
 {
-	struct msm_port *msm_port = UART_TO_MSM(port);
 	struct platform_device *pdev = to_platform_device(port->dev);
 	struct resource *uart_resource;
-	struct resource *gsbi_resource;
 	resource_size_t size;
 	int ret;
 
···
 		goto fail_release_port;
 	}
 
-	gsbi_resource = platform_get_resource(pdev, IORESOURCE_MEM, 1);
-	/* Is this a GSBI-based port? */
-	if (gsbi_resource) {
-		size = resource_size(gsbi_resource);
-
-		if (!request_mem_region(gsbi_resource->start, size,
-					"msm_serial")) {
-			ret = -EBUSY;
-			goto fail_release_port_membase;
-		}
-
-		msm_port->gsbi_base = ioremap(gsbi_resource->start, size);
-		if (!msm_port->gsbi_base) {
-			ret = -EBUSY;
-			goto fail_release_gsbi;
-		}
-	}
-
 	return 0;
 
-fail_release_gsbi:
-	release_mem_region(gsbi_resource->start, size);
-fail_release_port_membase:
-	iounmap(port->membase);
 fail_release_port:
 	release_mem_region(port->mapbase, size);
 	return ret;
···
 
 static void msm_config_port(struct uart_port *port, int flags)
 {
-	struct msm_port *msm_port = UART_TO_MSM(port);
 	int ret;
 	if (flags & UART_CONFIG_TYPE) {
 		port->type = PORT_MSM;
···
 		if (ret)
 			return;
 	}
-	if (msm_port->gsbi_base)
-		writel_relaxed(GSBI_PROTOCOL_UART,
-			       msm_port->gsbi_base + GSBI_CONTROL);
 }
 
 static int msm_verify_port(struct uart_port *port, struct serial_struct *ser)
···
 
 static struct platform_driver msm_platform_driver = {
 	.remove = msm_serial_remove,
+	.probe = msm_serial_probe,
 	.driver = {
 		.name = "msm_serial",
 		.owner = THIS_MODULE,
···
 	if (unlikely(ret))
 		return ret;
 
-	ret = platform_driver_probe(&msm_platform_driver, msm_serial_probe);
+	ret = platform_driver_register(&msm_platform_driver);
 	if (unlikely(ret))
 		uart_unregister_driver(&msm_uart_driver);
 
drivers/tty/serial/msm_serial.h | -5
···
 #define UART_ISR		0x0014
 #define UART_ISR_TX_READY	(1 << 7)
 
-#define GSBI_CONTROL		0x0
-#define GSBI_PROTOCOL_CODE	0x30
-#define GSBI_PROTOCOL_UART	0x40
-#define GSBI_PROTOCOL_IDLE	0x0
-
 #define UARTDM_RXFS		0x50
 #define UARTDM_RXFS_BUF_SHIFT	0x7
 #define UARTDM_RXFS_BUF_MASK	0x7
include/linux/platform_data/edma.h | +10 -18
···
 
 /* PaRAM slots are laid out like this */
 struct edmacc_param {
-	unsigned int opt;
-	unsigned int src;
-	unsigned int a_b_cnt;
-	unsigned int dst;
-	unsigned int src_dst_bidx;
-	unsigned int link_bcntrld;
-	unsigned int src_dst_cidx;
-	unsigned int ccnt;
-};
+	u32 opt;
+	u32 src;
+	u32 a_b_cnt;
+	u32 dst;
+	u32 src_dst_bidx;
+	u32 link_bcntrld;
+	u32 src_dst_cidx;
+	u32 ccnt;
+} __packed;
 
 /* fields in edmacc_param.opt */
 #define SAM		BIT(0)
···
 			 enum address_mode mode, enum fifo_width);
 void edma_set_dest(unsigned slot, dma_addr_t dest_port,
 		   enum address_mode mode, enum fifo_width);
-void edma_get_position(unsigned slot, dma_addr_t *src, dma_addr_t *dst);
+dma_addr_t edma_get_position(unsigned slot, bool dst);
 void edma_set_src_index(unsigned slot, s16 src_bidx, s16 src_cidx);
 void edma_set_dest_index(unsigned slot, s16 dest_bidx, s16 dest_cidx);
 void edma_set_transfer_params(unsigned slot, u16 acnt, u16 bcnt, u16 ccnt,
···
 
 /* platform_data for EDMA driver */
 struct edma_soc_info {
-
-	/* how many dma resources of each type */
-	unsigned n_channel;
-	unsigned n_region;
-	unsigned n_slot;
-	unsigned n_tc;
-	unsigned n_cc;
 	/*
 	 * Default queue is expected to be a low-priority queue.
 	 * This way, long transfers on the default queue started
···
 	/* Resource reservation for other cores */
 	struct edma_rsv_info *rsv;
 
-	s8	(*queue_tc_mapping)[2];
 	s8	(*queue_priority_mapping)[2];
 	const s16	(*xbar_chans)[2];
 };
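The switch from `unsigned int` to the fixed-width `u32` (plus `__packed`) in the hunk above pins a PaRAM slot to exactly eight 32-bit words, 32 bytes, matching the EDMA hardware layout regardless of the host compiler. A userspace mirror of the struct makes that size check explicit (with `uint32_t` standing in for the kernel's `u32`):

```c
#include <stdint.h>

/* Userspace mirror of the kernel's edmacc_param: eight fixed-width
 * 32-bit words, so the slot is exactly 32 bytes no matter what the
 * compiler's notion of `unsigned int` is. */
struct edmacc_param {
	uint32_t opt;
	uint32_t src;
	uint32_t a_b_cnt;
	uint32_t dst;
	uint32_t src_dst_bidx;
	uint32_t link_bcntrld;
	uint32_t src_dst_cidx;
	uint32_t ccnt;
} __attribute__((packed));
```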