Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'fpga-dfl-for-5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/mdf/linux-fpga into char-misc-next

Moritz writes:

FPGA DFL Changes for 5.4

This pull request contains the FPGA DFL changes for 5.4

- The first three patches are cleanup patches making use of dev_groups and
making the init callback optional.
- One patch adds userclock sysfs entries that are DFL specific
- One patch exposes AFU port disable/enable functions
- One patch adds error reporting
- One patch adds AFU SignalTap support
- One patch adds FME global error reporting
- The final patch is a documentation patch that describes the
virtualization interfaces

This patchset requires the 'dev_groups_all_drivers' tag from driver
core for the dev_groups refactoring as well as the DFL changes already
in char-misc-next.

Signed-off-by: Moritz Fischer <mdf@kernel.org>

* tag 'fpga-dfl-for-5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/mdf/linux-fpga:
Documentation: fpga: dfl: add descriptions for virtualization and new interfaces.
fpga: dfl: fme: add global error reporting support
fpga: dfl: afu: add STP (SignalTap) support
fpga: dfl: afu: add error reporting support.
fpga: dfl: afu: expose __afu_port_enable/disable function.
fpga: dfl: afu: add userclock sysfs interfaces.
fpga: dfl: afu: convert platform_driver to use dev_groups
fpga: dfl: fme: convert platform_driver to use dev_groups
fpga: dfl: make init callback optional
driver core: add dev_groups to all drivers

+1062 -78
+62
Documentation/ABI/testing/sysfs-platform-dfl-fme
··· 44 44 this FPGA belongs to, only valid for integrated solution. 45 45 User only needs this information, in case standard numa node 46 46 can't provide correct information. 47 + 48 + What: /sys/bus/platform/devices/dfl-fme.0/errors/pcie0_errors 49 + Date: August 2019 50 + KernelVersion: 5.4 51 + Contact: Wu Hao <hao.wu@intel.com> 52 + Description: Read-Write. Read this file for errors detected on pcie0 link. 53 + Write this file to clear errors logged in pcie0_errors. Write 54 + fails with -EINVAL if input parsing fails or input error code 55 + doesn't match. 56 + 57 + What: /sys/bus/platform/devices/dfl-fme.0/errors/pcie1_errors 58 + Date: August 2019 59 + KernelVersion: 5.4 60 + Contact: Wu Hao <hao.wu@intel.com> 61 + Description: Read-Write. Read this file for errors detected on pcie1 link. 62 + Write this file to clear errors logged in pcie1_errors. Write 63 + fails with -EINVAL if input parsing fails or input error code 64 + doesn't match. 65 + 66 + What: /sys/bus/platform/devices/dfl-fme.0/errors/nonfatal_errors 67 + Date: August 2019 68 + KernelVersion: 5.4 69 + Contact: Wu Hao <hao.wu@intel.com> 70 + Description: Read-only. It returns non-fatal errors detected. 71 + 72 + What: /sys/bus/platform/devices/dfl-fme.0/errors/catfatal_errors 73 + Date: August 2019 74 + KernelVersion: 5.4 75 + Contact: Wu Hao <hao.wu@intel.com> 76 + Description: Read-only. It returns catastrophic and fatal errors detected. 77 + 78 + What: /sys/bus/platform/devices/dfl-fme.0/errors/inject_errors 79 + Date: August 2019 80 + KernelVersion: 5.4 81 + Contact: Wu Hao <hao.wu@intel.com> 82 + Description: Read-Write. Read this file to check errors injected. Write this 83 + file to inject errors for testing purpose. Write fails with 84 + -EINVAL if input parsing fails or input inject error code isn't 85 + supported. 
86 + 87 + What: /sys/bus/platform/devices/dfl-fme.0/errors/fme_errors 88 + Date: August 2019 89 + KernelVersion: 5.4 90 + Contact: Wu Hao <hao.wu@intel.com> 91 + Description: Read-Write. Read this file to get errors detected on FME. 92 + Write this file to clear errors logged in fme_errors. Write 93 + fails with -EINVAL if input parsing fails or input error code 94 + doesn't match. 95 + 96 + What: /sys/bus/platform/devices/dfl-fme.0/errors/first_error 97 + Date: August 2019 98 + KernelVersion: 5.4 99 + Contact: Wu Hao <hao.wu@intel.com> 100 + Description: Read-only. Read this file to get the first error detected by 101 + hardware. 102 + 103 + What: /sys/bus/platform/devices/dfl-fme.0/errors/next_error 104 + Date: August 2019 105 + KernelVersion: 5.4 106 + Contact: Wu Hao <hao.wu@intel.com> 107 + Description: Read-only. Read this file to get the second error detected by 108 + hardware.
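As a worked example of the clear-on-write protocol described in the ABI entries above, the sketch below reads one of the errors files and writes the exact logged value back, which is what the driver expects before it clears the register. This is only a sketch: the helper names (`parse_hex64`, `clear_errors`) are ours, error handling is minimal, and the dfl-fme.0 sysfs paths are assumed from the ABI text.

```c
/* Sketch: clear logged DFL FME/port errors via the documented sysfs files.
 * The driver compares the written value with the live error register and
 * returns -EINVAL on mismatch, so read-then-write-back is the safe way to
 * clear without racing a newly logged error. Helper names are illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <errno.h>

/* Parse the "0x..." value printed by the errors files (base 0 accepts 0x). */
static uint64_t parse_hex64(const char *s)
{
	return (uint64_t)strtoull(s, NULL, 0);
}

/* Read the currently logged value, then echo the same value back to clear. */
static int clear_errors(const char *path)
{
	char buf[32];
	FILE *f = fopen(path, "r");

	if (!f)
		return -errno;
	if (!fgets(buf, sizeof(buf), f)) {
		fclose(f);
		return -EIO;
	}
	fclose(f);

	f = fopen(path, "w");
	if (!f)
		return -errno;
	fprintf(f, "0x%llx\n", (unsigned long long)parse_hex64(buf));
	fclose(f);
	return 0;
}
```

Usage would be, e.g., `clear_errors("/sys/bus/platform/devices/dfl-fme.0/errors/pcie0_errors")` on a system where the device exists.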
+53
Documentation/ABI/testing/sysfs-platform-dfl-port
··· 46 46 Description: Read-write. Read or set AFU latency tolerance reporting value. 47 47 Set ltr to 1 if the AFU can tolerate latency >= 40us or set it 48 48 to 0 if it is latency sensitive. 49 + 50 + What: /sys/bus/platform/devices/dfl-port.0/userclk_freqcmd 51 + Date: August 2019 52 + KernelVersion: 5.4 53 + Contact: Wu Hao <hao.wu@intel.com> 54 + Description: Write-only. User writes command to this interface to set 55 + userclock to AFU. 56 + 57 + What: /sys/bus/platform/devices/dfl-port.0/userclk_freqsts 58 + Date: August 2019 59 + KernelVersion: 5.4 60 + Contact: Wu Hao <hao.wu@intel.com> 61 + Description: Read-only. Read this file to get the status of issued command 62 + to userclk_freqcmd. 63 + 64 + What: /sys/bus/platform/devices/dfl-port.0/userclk_freqcntrcmd 65 + Date: August 2019 66 + KernelVersion: 5.4 67 + Contact: Wu Hao <hao.wu@intel.com> 68 + Description: Write-only. User writes command to this interface to set 69 + userclock counter. 70 + 71 + What: /sys/bus/platform/devices/dfl-port.0/userclk_freqcntrsts 72 + Date: August 2019 73 + KernelVersion: 5.4 74 + Contact: Wu Hao <hao.wu@intel.com> 75 + Description: Read-only. Read this file to get the status of issued command 76 + to userclk_freqcntrcmd. 77 + 78 + What: /sys/bus/platform/devices/dfl-port.0/errors/errors 79 + Date: August 2019 80 + KernelVersion: 5.4 81 + Contact: Wu Hao <hao.wu@intel.com> 82 + Description: Read-Write. Read this file to get errors detected on port and 83 + Accelerated Function Unit (AFU). Write error code to this file 84 + to clear errors. Write fails with -EINVAL if input parsing 85 + fails or input error code doesn't match. Write fails with 86 + -EBUSY or -ETIMEDOUT if error can't be cleared because hardware 87 + is in low power state (-EBUSY) or not responding (-ETIMEDOUT). 88 + 89 + What: /sys/bus/platform/devices/dfl-port.0/errors/first_error 90 + Date: August 2019 91 + KernelVersion: 5.4 92 + Contact: Wu Hao <hao.wu@intel.com> 93 + Description: Read-only.
Read this file to get the first error detected by 94 + hardware. 95 + 96 + What: /sys/bus/platform/devices/dfl-port.0/errors/first_malformed_req 97 + Date: August 2019 98 + KernelVersion: 5.4 99 + Contact: Wu Hao <hao.wu@intel.com> 100 + Description: Read-only. Read this file to get the first malformed request 101 + captured by hardware.
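The first_malformed_req entry above prints one 128-bit value as 32 hex digits: "0x", the high 64 bits (req1), then the low 64 bits (req0), matching the `0x%016llx%016llx` format in dfl-afu-error.c further down. A hedged sketch of splitting the string back into the two 64-bit register halves (helper name is ours):

```c
/* Split first_malformed_req output ("0x" + 32 hex digits) into the two
 * 64-bit halves that the hardware captures (req1 = high, req0 = low). */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

static int parse_malformed_req(const char *s, uint64_t *req1, uint64_t *req0)
{
	char half[17];

	/* Expect "0x" followed by exactly 32 hex digits (trailing \n allowed). */
	if (strncmp(s, "0x", 2) != 0 || strlen(s) < 34)
		return -1;

	memcpy(half, s + 2, 16);	/* high 16 hex digits -> req1 */
	half[16] = '\0';
	*req1 = strtoull(half, NULL, 16);

	memcpy(half, s + 18, 16);	/* low 16 hex digits -> req0 */
	half[16] = '\0';
	*req0 = strtoull(half, NULL, 16);
	return 0;
}
```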
+105
Documentation/fpga/dfl.rst
··· 87 87 - Get driver API version (DFL_FPGA_GET_API_VERSION) 88 88 - Check for extensions (DFL_FPGA_CHECK_EXTENSION) 89 89 - Program bitstream (DFL_FPGA_FME_PORT_PR) 90 + - Assign port to PF (DFL_FPGA_FME_PORT_ASSIGN) 91 + - Release port from PF (DFL_FPGA_FME_PORT_RELEASE) 90 92 91 93 More functions are exposed through sysfs 92 94 (/sys/class/fpga_region/regionX/dfl-fme.n/): ··· 103 101 Read number of ports (ports_num) 104 102 one FPGA device may have more than one port, this sysfs interface indicates 105 103 how many ports the FPGA device has. 104 + 105 + Global error reporting management (errors/) 106 + error reporting sysfs interfaces allow users to read errors detected by the 107 + hardware, and clear the logged errors. 106 108 107 109 108 110 FIU - PORT ··· 148 142 149 143 Read Accelerator GUID (afu_id) 150 144 afu_id indicates which PR bitstream is programmed to this AFU. 145 + 146 + Error reporting (errors/) 147 + error reporting sysfs interfaces allow users to read port/afu errors 148 + detected by the hardware, and clear the logged errors. 151 149 152 150 153 151 DFL Framework Overview ··· 227 217 the compat_id exposed by the target FPGA region. This check is usually done by 228 218 userspace before calling the reconfiguration IOCTL. 229 219 220 + 221 + FPGA virtualization - PCIe SRIOV 222 + ================================ 223 + This section describes the virtualization support on DFL based FPGA device to 224 + enable accessing an accelerator from applications running in a virtual machine 225 + (VM). This section only describes the PCIe based FPGA device with SRIOV support.
226 + 227 + Features supported by the particular FPGA device are exposed through Device 228 + Feature Lists, as illustrated below: 229 + 230 + :: 231 + 232 + +-------------------------------+ +-------------+ 233 + | PF | | VF | 234 + +-------------------------------+ +-------------+ 235 + ^ ^ ^ ^ 236 + | | | | 237 + +-----|------------|---------|--------------|-------+ 238 + | | | | | | 239 + | +-----+ +-------+ +-------+ +-------+ | 240 + | | FME | | Port0 | | Port1 | | Port2 | | 241 + | +-----+ +-------+ +-------+ +-------+ | 242 + | ^ ^ ^ | 243 + | | | | | 244 + | +-------+ +------+ +-------+ | 245 + | | AFU | | AFU | | AFU | | 246 + | +-------+ +------+ +-------+ | 247 + | | 248 + | DFL based FPGA PCIe Device | 249 + +---------------------------------------------------+ 250 + 251 + FME is always accessed through the physical function (PF). 252 + 253 + Ports (and related AFUs) are accessed via PF by default, but could be exposed 254 + through virtual function (VF) devices via PCIe SRIOV. Each VF only contains 255 + 1 Port and 1 AFU for isolation. Users could assign individual VFs (accelerators) 256 + created via PCIe SRIOV interface, to virtual machines. 
257 + 258 + The driver organization in the virtualization case is illustrated below: 259 + :: 260 + 261 + +-------++------++------+ | 262 + | FME || FME || FME | | 263 + | FPGA || FPGA || FPGA | | 264 + |Manager||Bridge||Region| | 265 + +-------++------++------+ | 266 + +-----------------------+ +--------+ | +--------+ 267 + | FME | | AFU | | | AFU | 268 + | Module | | Module | | | Module | 269 + +-----------------------+ +--------+ | +--------+ 270 + +-----------------------+ | +-----------------------+ 271 + | FPGA Container Device | | | FPGA Container Device | 272 + | (FPGA Base Region) | | | (FPGA Base Region) | 273 + +-----------------------+ | +-----------------------+ 274 + +------------------+ | +------------------+ 275 + | FPGA PCIE Module | | Virtual | FPGA PCIE Module | 276 + +------------------+ Host | Machine +------------------+ 277 + -------------------------------------- | ------------------------------ 278 + +---------------+ | +---------------+ 279 + | PCI PF Device | | | PCI VF Device | 280 + +---------------+ | +---------------+ 281 + 282 + The FPGA PCIe device driver is always loaded first once an FPGA PCIe PF or VF 283 + device is detected. It: 284 + 285 + * Finishes enumeration on both FPGA PCIe PF and VF device using common 286 + interfaces from DFL framework. 287 + * Supports SRIOV. 288 + 289 + The FME device driver plays a management role in this driver architecture; it 290 + provides ioctls to release a Port from the PF and assign a Port back to the PF. 291 + After a port is released from the PF, it is safe to expose this port through a VF 292 + via the PCIe SRIOV sysfs interface. 293 + 294 + To enable accessing an accelerator from applications running in a VM, the 295 + respective AFU's port needs to be assigned to a VF using the following steps: 296 + 297 + #. The PF owns all AFU ports by default. Any port that needs to be 298 + reassigned to a VF must first be released through the 299 + DFL_FPGA_FME_PORT_RELEASE ioctl on the FME device. 300 + 301 + #.
Once N ports are released from the PF, the user can use the command below 302 + to enable SRIOV and VFs. Each VF owns only one Port with AFU. 303 + 304 + :: 305 + 306 + echo N > $PCI_DEVICE_PATH/sriov_numvfs 307 + 308 + #. Pass through the VFs to VMs 309 + 310 + #. The AFU under VF is accessible from applications in VM (using the 311 + same driver inside the VF). 312 + 313 + Note that an FME can't be assigned to a VF, thus PR and other management 314 + functions are only available via the PF. 230 315 231 316 Device enumeration 232 317 ==================
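The VF-enable step in the sequence above is a plain sysfs write; a small C wrapper around the `echo N > $PCI_DEVICE_PATH/sriov_numvfs` shell command might look like the sketch below. The helper name is ours, and any PCI device path passed to it (e.g. `/sys/bus/pci/devices/0000:5e:00.0`) is a hypothetical example.

```c
/* Sketch: enable N SR-IOV VFs on a PCI device by writing to its
 * sriov_numvfs attribute, equivalent to:
 *     echo N > $PCI_DEVICE_PATH/sriov_numvfs
 * The write only succeeds on real hardware if N ports were first released
 * from the PF via DFL_FPGA_FME_PORT_RELEASE. */
#include <stdio.h>
#include <errno.h>

static int sriov_set_numvfs(const char *pci_dev_path, int numvfs)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), "%s/sriov_numvfs", pci_dev_path);

	f = fopen(path, "w");
	if (!f)
		return -errno;
	if (fprintf(f, "%d\n", numvfs) < 0) {
		fclose(f);
		return -EIO;
	}
	fclose(f);
	return 0;
}
```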
+14
drivers/base/dd.c
··· 554 554 goto probe_failed; 555 555 } 556 556 557 + if (device_add_groups(dev, drv->dev_groups)) { 558 + dev_err(dev, "device_add_groups() failed\n"); 559 + goto dev_groups_failed; 560 + } 561 + 557 562 if (test_remove) { 558 563 test_remove = false; 564 + 565 + device_remove_groups(dev, drv->dev_groups); 559 566 560 567 if (dev->bus->remove) 561 568 dev->bus->remove(dev); ··· 591 584 drv->bus->name, __func__, dev_name(dev), drv->name); 592 585 goto done; 593 586 587 + dev_groups_failed: 588 + if (dev->bus->remove) 589 + dev->bus->remove(dev); 590 + else if (drv->remove) 591 + drv->remove(dev); 594 592 probe_failed: 595 593 if (dev->bus) 596 594 blocking_notifier_call_chain(&dev->bus->p->bus_notifier, ··· 1125 1113 dev); 1126 1114 1127 1115 pm_runtime_put_sync(dev); 1116 + 1117 + device_remove_groups(dev, drv->dev_groups); 1128 1118 1129 1119 if (dev->bus && dev->bus->remove) 1130 1120 dev->bus->remove(dev);
+2 -1
drivers/fpga/Makefile
··· 39 39 obj-$(CONFIG_FPGA_DFL_FME_REGION) += dfl-fme-region.o 40 40 obj-$(CONFIG_FPGA_DFL_AFU) += dfl-afu.o 41 41 42 - dfl-fme-objs := dfl-fme-main.o dfl-fme-pr.o 42 + dfl-fme-objs := dfl-fme-main.o dfl-fme-pr.o dfl-fme-error.o 43 43 dfl-afu-objs := dfl-afu-main.o dfl-afu-region.o dfl-afu-dma-region.o 44 + dfl-afu-objs += dfl-afu-error.o 44 45 45 46 # Drivers for FPGAs which implement DFL 46 47 obj-$(CONFIG_FPGA_DFL_PCI) += dfl-pci.o
+230
drivers/fpga/dfl-afu-error.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Driver for FPGA Accelerated Function Unit (AFU) Error Reporting 4 + * 5 + * Copyright 2019 Intel Corporation, Inc. 6 + * 7 + * Authors: 8 + * Wu Hao <hao.wu@linux.intel.com> 9 + * Xiao Guangrong <guangrong.xiao@linux.intel.com> 10 + * Joseph Grecco <joe.grecco@intel.com> 11 + * Enno Luebbers <enno.luebbers@intel.com> 12 + * Tim Whisonant <tim.whisonant@intel.com> 13 + * Ananda Ravuri <ananda.ravuri@intel.com> 14 + * Mitchel Henry <henry.mitchel@intel.com> 15 + */ 16 + 17 + #include <linux/uaccess.h> 18 + 19 + #include "dfl-afu.h" 20 + 21 + #define PORT_ERROR_MASK 0x8 22 + #define PORT_ERROR 0x10 23 + #define PORT_FIRST_ERROR 0x18 24 + #define PORT_MALFORMED_REQ0 0x20 25 + #define PORT_MALFORMED_REQ1 0x28 26 + 27 + #define ERROR_MASK GENMASK_ULL(63, 0) 28 + 29 + /* mask or unmask port errors by the error mask register. */ 30 + static void __afu_port_err_mask(struct device *dev, bool mask) 31 + { 32 + void __iomem *base; 33 + 34 + base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_ERROR); 35 + 36 + writeq(mask ? ERROR_MASK : 0, base + PORT_ERROR_MASK); 37 + } 38 + 39 + static void afu_port_err_mask(struct device *dev, bool mask) 40 + { 41 + struct dfl_feature_platform_data *pdata = dev_get_platdata(dev); 42 + 43 + mutex_lock(&pdata->lock); 44 + __afu_port_err_mask(dev, mask); 45 + mutex_unlock(&pdata->lock); 46 + } 47 + 48 + /* clear port errors. 
*/ 49 + static int afu_port_err_clear(struct device *dev, u64 err) 50 + { 51 + struct dfl_feature_platform_data *pdata = dev_get_platdata(dev); 52 + struct platform_device *pdev = to_platform_device(dev); 53 + void __iomem *base_err, *base_hdr; 54 + int ret = -EBUSY; 55 + u64 v; 56 + 57 + base_err = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_ERROR); 58 + base_hdr = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER); 59 + 60 + mutex_lock(&pdata->lock); 61 + 62 + /* 63 + * clear Port Errors 64 + * 65 + * - Check for AP6 State 66 + * - Halt Port by keeping Port in reset 67 + * - Set PORT Error mask to all 1 to mask errors 68 + * - Clear all errors 69 + * - Set Port mask to all 0 to enable errors 70 + * - All errors start capturing new errors 71 + * - Enable Port by pulling the port out of reset 72 + */ 73 + 74 + /* if device is still in AP6 power state, can not clear any error. */ 75 + v = readq(base_hdr + PORT_HDR_STS); 76 + if (FIELD_GET(PORT_STS_PWR_STATE, v) == PORT_STS_PWR_STATE_AP6) { 77 + dev_err(dev, "Could not clear errors, device in AP6 state.\n"); 78 + goto done; 79 + } 80 + 81 + /* Halt Port by keeping Port in reset */ 82 + ret = __afu_port_disable(pdev); 83 + if (ret) 84 + goto done; 85 + 86 + /* Mask all errors */ 87 + __afu_port_err_mask(dev, true); 88 + 89 + /* Clear errors if err input matches with current port errors.*/ 90 + v = readq(base_err + PORT_ERROR); 91 + 92 + if (v == err) { 93 + writeq(v, base_err + PORT_ERROR); 94 + 95 + v = readq(base_err + PORT_FIRST_ERROR); 96 + writeq(v, base_err + PORT_FIRST_ERROR); 97 + } else { 98 + ret = -EINVAL; 99 + } 100 + 101 + /* Clear mask */ 102 + __afu_port_err_mask(dev, false); 103 + 104 + /* Enable the Port by clear the reset */ 105 + __afu_port_enable(pdev); 106 + 107 + done: 108 + mutex_unlock(&pdata->lock); 109 + return ret; 110 + } 111 + 112 + static ssize_t errors_show(struct device *dev, struct device_attribute *attr, 113 + char *buf) 114 + { 115 + struct dfl_feature_platform_data 
*pdata = dev_get_platdata(dev); 116 + void __iomem *base; 117 + u64 error; 118 + 119 + base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_ERROR); 120 + 121 + mutex_lock(&pdata->lock); 122 + error = readq(base + PORT_ERROR); 123 + mutex_unlock(&pdata->lock); 124 + 125 + return sprintf(buf, "0x%llx\n", (unsigned long long)error); 126 + } 127 + 128 + static ssize_t errors_store(struct device *dev, struct device_attribute *attr, 129 + const char *buff, size_t count) 130 + { 131 + u64 value; 132 + int ret; 133 + 134 + if (kstrtou64(buff, 0, &value)) 135 + return -EINVAL; 136 + 137 + ret = afu_port_err_clear(dev, value); 138 + 139 + return ret ? ret : count; 140 + } 141 + static DEVICE_ATTR_RW(errors); 142 + 143 + static ssize_t first_error_show(struct device *dev, 144 + struct device_attribute *attr, char *buf) 145 + { 146 + struct dfl_feature_platform_data *pdata = dev_get_platdata(dev); 147 + void __iomem *base; 148 + u64 error; 149 + 150 + base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_ERROR); 151 + 152 + mutex_lock(&pdata->lock); 153 + error = readq(base + PORT_FIRST_ERROR); 154 + mutex_unlock(&pdata->lock); 155 + 156 + return sprintf(buf, "0x%llx\n", (unsigned long long)error); 157 + } 158 + static DEVICE_ATTR_RO(first_error); 159 + 160 + static ssize_t first_malformed_req_show(struct device *dev, 161 + struct device_attribute *attr, 162 + char *buf) 163 + { 164 + struct dfl_feature_platform_data *pdata = dev_get_platdata(dev); 165 + void __iomem *base; 166 + u64 req0, req1; 167 + 168 + base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_ERROR); 169 + 170 + mutex_lock(&pdata->lock); 171 + req0 = readq(base + PORT_MALFORMED_REQ0); 172 + req1 = readq(base + PORT_MALFORMED_REQ1); 173 + mutex_unlock(&pdata->lock); 174 + 175 + return sprintf(buf, "0x%016llx%016llx\n", 176 + (unsigned long long)req1, (unsigned long long)req0); 177 + } 178 + static DEVICE_ATTR_RO(first_malformed_req); 179 + 180 + static struct attribute *port_err_attrs[] = { 181 + 
&dev_attr_errors.attr, 182 + &dev_attr_first_error.attr, 183 + &dev_attr_first_malformed_req.attr, 184 + NULL, 185 + }; 186 + 187 + static umode_t port_err_attrs_visible(struct kobject *kobj, 188 + struct attribute *attr, int n) 189 + { 190 + struct device *dev = kobj_to_dev(kobj); 191 + 192 + /* 193 + * sysfs entries are visible only if related private feature is 194 + * enumerated. 195 + */ 196 + if (!dfl_get_feature_by_id(dev, PORT_FEATURE_ID_ERROR)) 197 + return 0; 198 + 199 + return attr->mode; 200 + } 201 + 202 + const struct attribute_group port_err_group = { 203 + .name = "errors", 204 + .attrs = port_err_attrs, 205 + .is_visible = port_err_attrs_visible, 206 + }; 207 + 208 + static int port_err_init(struct platform_device *pdev, 209 + struct dfl_feature *feature) 210 + { 211 + afu_port_err_mask(&pdev->dev, false); 212 + 213 + return 0; 214 + } 215 + 216 + static void port_err_uinit(struct platform_device *pdev, 217 + struct dfl_feature *feature) 218 + { 219 + afu_port_err_mask(&pdev->dev, true); 220 + } 221 + 222 + const struct dfl_feature_id port_err_id_table[] = { 223 + {.id = PORT_FEATURE_ID_ERROR,}, 224 + {0,} 225 + }; 226 + 227 + const struct dfl_feature_ops port_err_ops = { 228 + .init = port_err_init, 229 + .uinit = port_err_uinit, 230 + };
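The clear path in afu_port_err_clear() above follows a mask/compare/clear/unmask sequence against the PORT_ERROR registers. The userspace model below mirrors that logic so the -EINVAL-on-mismatch behavior can be seen in isolation; GENMASK_ULL is kernel-internal and is redeclared here purely for illustration, and the model simplifies away the reset handling and the first-error register.

```c
/* Simplified model of afu_port_err_clear(): mask all errors, clear only if
 * the caller's value matches the live register (RW1C), then unmask. */
#include <stdint.h>
#include <errno.h>

/* Userspace redeclaration of the kernel macro, for illustration only. */
#define GENMASK_ULL(h, l) \
	(((~0ULL) << (l)) & (~0ULL >> (63 - (h))))
#define ERROR_MASK GENMASK_ULL(63, 0)

/* Simulated PORT_ERROR / PORT_ERROR_MASK registers. */
struct port_err_regs {
	uint64_t error;
	uint64_t error_mask;
};

static int port_err_clear_model(struct port_err_regs *r, uint64_t err)
{
	int ret = 0;

	r->error_mask = ERROR_MASK;	/* mask everything while clearing  */
	if (r->error == err)
		r->error = 0;		/* write-back of exact value clears */
	else
		ret = -EINVAL;		/* stale or mismatched input        */
	r->error_mask = 0;		/* re-enable error capture          */
	return ret;
}
```

The compare-before-clear is the design point: it guarantees userspace never clears an error bit it has not observed.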
+192 -46
drivers/fpga/dfl-afu-main.c
··· 22 22 #include "dfl-afu.h" 23 23 24 24 /** 25 - * port_enable - enable a port 25 + * __afu_port_enable - enable a port by clear reset 26 26 * @pdev: port platform device. 27 27 * 28 28 * Enable Port by clear the port soft reset bit, which is set by default. 29 29 * The AFU is unable to respond to any MMIO access while in reset. 30 - * port_enable function should only be used after port_disable function. 30 + * __afu_port_enable function should only be used after __afu_port_disable 31 + * function. 32 + * 33 + * The caller needs to hold lock for protection. 31 34 */ 32 - static void port_enable(struct platform_device *pdev) 35 + void __afu_port_enable(struct platform_device *pdev) 33 36 { 34 37 struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev); 35 38 void __iomem *base; ··· 55 52 #define RST_POLL_TIMEOUT 1000 /* us */ 56 53 57 54 /** 58 - * port_disable - disable a port 55 + * __afu_port_disable - disable a port by hold reset 59 56 * @pdev: port platform device. 60 57 * 61 - * Disable Port by setting the port soft reset bit, it puts the port into 62 - * reset. 58 + * Disable Port by setting the port soft reset bit, it puts the port into reset. 59 + * 60 + * The caller needs to hold lock for protection. 
63 61 */ 64 - static int port_disable(struct platform_device *pdev) 62 + int __afu_port_disable(struct platform_device *pdev) 65 63 { 66 64 struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev); 67 65 void __iomem *base; ··· 108 104 { 109 105 int ret; 110 106 111 - ret = port_disable(pdev); 107 + ret = __afu_port_disable(pdev); 112 108 if (!ret) 113 - port_enable(pdev); 109 + __afu_port_enable(pdev); 114 110 115 111 return ret; 116 112 } ··· 278 274 } 279 275 static DEVICE_ATTR_RO(power_state); 280 276 277 + static ssize_t 278 + userclk_freqcmd_store(struct device *dev, struct device_attribute *attr, 279 + const char *buf, size_t count) 280 + { 281 + struct dfl_feature_platform_data *pdata = dev_get_platdata(dev); 282 + u64 userclk_freq_cmd; 283 + void __iomem *base; 284 + 285 + if (kstrtou64(buf, 0, &userclk_freq_cmd)) 286 + return -EINVAL; 287 + 288 + base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER); 289 + 290 + mutex_lock(&pdata->lock); 291 + writeq(userclk_freq_cmd, base + PORT_HDR_USRCLK_CMD0); 292 + mutex_unlock(&pdata->lock); 293 + 294 + return count; 295 + } 296 + static DEVICE_ATTR_WO(userclk_freqcmd); 297 + 298 + static ssize_t 299 + userclk_freqcntrcmd_store(struct device *dev, struct device_attribute *attr, 300 + const char *buf, size_t count) 301 + { 302 + struct dfl_feature_platform_data *pdata = dev_get_platdata(dev); 303 + u64 userclk_freqcntr_cmd; 304 + void __iomem *base; 305 + 306 + if (kstrtou64(buf, 0, &userclk_freqcntr_cmd)) 307 + return -EINVAL; 308 + 309 + base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER); 310 + 311 + mutex_lock(&pdata->lock); 312 + writeq(userclk_freqcntr_cmd, base + PORT_HDR_USRCLK_CMD1); 313 + mutex_unlock(&pdata->lock); 314 + 315 + return count; 316 + } 317 + static DEVICE_ATTR_WO(userclk_freqcntrcmd); 318 + 319 + static ssize_t 320 + userclk_freqsts_show(struct device *dev, struct device_attribute *attr, 321 + char *buf) 322 + { 323 + struct dfl_feature_platform_data 
*pdata = dev_get_platdata(dev); 324 + u64 userclk_freqsts; 325 + void __iomem *base; 326 + 327 + base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER); 328 + 329 + mutex_lock(&pdata->lock); 330 + userclk_freqsts = readq(base + PORT_HDR_USRCLK_STS0); 331 + mutex_unlock(&pdata->lock); 332 + 333 + return sprintf(buf, "0x%llx\n", (unsigned long long)userclk_freqsts); 334 + } 335 + static DEVICE_ATTR_RO(userclk_freqsts); 336 + 337 + static ssize_t 338 + userclk_freqcntrsts_show(struct device *dev, struct device_attribute *attr, 339 + char *buf) 340 + { 341 + struct dfl_feature_platform_data *pdata = dev_get_platdata(dev); 342 + u64 userclk_freqcntrsts; 343 + void __iomem *base; 344 + 345 + base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER); 346 + 347 + mutex_lock(&pdata->lock); 348 + userclk_freqcntrsts = readq(base + PORT_HDR_USRCLK_STS1); 349 + mutex_unlock(&pdata->lock); 350 + 351 + return sprintf(buf, "0x%llx\n", 352 + (unsigned long long)userclk_freqcntrsts); 353 + } 354 + static DEVICE_ATTR_RO(userclk_freqcntrsts); 355 + 281 356 static struct attribute *port_hdr_attrs[] = { 282 357 &dev_attr_id.attr, 283 358 &dev_attr_ltr.attr, 284 359 &dev_attr_ap1_event.attr, 285 360 &dev_attr_ap2_event.attr, 286 361 &dev_attr_power_state.attr, 362 + &dev_attr_userclk_freqcmd.attr, 363 + &dev_attr_userclk_freqcntrcmd.attr, 364 + &dev_attr_userclk_freqsts.attr, 365 + &dev_attr_userclk_freqcntrsts.attr, 287 366 NULL, 288 367 }; 289 - ATTRIBUTE_GROUPS(port_hdr); 368 + 369 + static umode_t port_hdr_attrs_visible(struct kobject *kobj, 370 + struct attribute *attr, int n) 371 + { 372 + struct device *dev = kobj_to_dev(kobj); 373 + umode_t mode = attr->mode; 374 + void __iomem *base; 375 + 376 + base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER); 377 + 378 + if (dfl_feature_revision(base) > 0) { 379 + /* 380 + * userclk sysfs interfaces are only visible in case port 381 + * revision is 0, as hardware with revision >0 doesn't 382 + * support this. 
383 + */ 384 + if (attr == &dev_attr_userclk_freqcmd.attr || 385 + attr == &dev_attr_userclk_freqcntrcmd.attr || 386 + attr == &dev_attr_userclk_freqsts.attr || 387 + attr == &dev_attr_userclk_freqcntrsts.attr) 388 + mode = 0; 389 + } 390 + 391 + return mode; 392 + } 393 + 394 + static const struct attribute_group port_hdr_group = { 395 + .attrs = port_hdr_attrs, 396 + .is_visible = port_hdr_attrs_visible, 397 + }; 290 398 291 399 static int port_hdr_init(struct platform_device *pdev, 292 400 struct dfl_feature *feature) 293 401 { 294 - dev_dbg(&pdev->dev, "PORT HDR Init.\n"); 295 - 296 402 port_reset(pdev); 297 403 298 - return device_add_groups(&pdev->dev, port_hdr_groups); 299 - } 300 - 301 - static void port_hdr_uinit(struct platform_device *pdev, 302 - struct dfl_feature *feature) 303 - { 304 - dev_dbg(&pdev->dev, "PORT HDR UInit.\n"); 305 - 306 - device_remove_groups(&pdev->dev, port_hdr_groups); 404 + return 0; 307 405 } 308 406 309 407 static long ··· 436 330 437 331 static const struct dfl_feature_ops port_hdr_ops = { 438 332 .init = port_hdr_init, 439 - .uinit = port_hdr_uinit, 440 333 .ioctl = port_hdr_ioctl, 441 334 }; 442 335 ··· 466 361 &dev_attr_afu_id.attr, 467 362 NULL 468 363 }; 469 - ATTRIBUTE_GROUPS(port_afu); 364 + 365 + static umode_t port_afu_attrs_visible(struct kobject *kobj, 366 + struct attribute *attr, int n) 367 + { 368 + struct device *dev = kobj_to_dev(kobj); 369 + 370 + /* 371 + * sysfs entries are visible only if related private feature is 372 + * enumerated. 
373 + */ 374 + if (!dfl_get_feature_by_id(dev, PORT_FEATURE_ID_AFU)) 375 + return 0; 376 + 377 + return attr->mode; 378 + } 379 + 380 + static const struct attribute_group port_afu_group = { 381 + .attrs = port_afu_attrs, 382 + .is_visible = port_afu_attrs_visible, 383 + }; 470 384 471 385 static int port_afu_init(struct platform_device *pdev, 472 386 struct dfl_feature *feature) 473 387 { 474 388 struct resource *res = &pdev->resource[feature->resource_index]; 475 - int ret; 476 389 477 - dev_dbg(&pdev->dev, "PORT AFU Init.\n"); 478 - 479 - ret = afu_mmio_region_add(dev_get_platdata(&pdev->dev), 480 - DFL_PORT_REGION_INDEX_AFU, resource_size(res), 481 - res->start, DFL_PORT_REGION_READ | 482 - DFL_PORT_REGION_WRITE | DFL_PORT_REGION_MMAP); 483 - if (ret) 484 - return ret; 485 - 486 - return device_add_groups(&pdev->dev, port_afu_groups); 487 - } 488 - 489 - static void port_afu_uinit(struct platform_device *pdev, 490 - struct dfl_feature *feature) 491 - { 492 - dev_dbg(&pdev->dev, "PORT AFU UInit.\n"); 493 - 494 - device_remove_groups(&pdev->dev, port_afu_groups); 390 + return afu_mmio_region_add(dev_get_platdata(&pdev->dev), 391 + DFL_PORT_REGION_INDEX_AFU, 392 + resource_size(res), res->start, 393 + DFL_PORT_REGION_MMAP | DFL_PORT_REGION_READ | 394 + DFL_PORT_REGION_WRITE); 495 395 } 496 396 497 397 static const struct dfl_feature_id port_afu_id_table[] = { ··· 506 396 507 397 static const struct dfl_feature_ops port_afu_ops = { 508 398 .init = port_afu_init, 509 - .uinit = port_afu_uinit, 399 + }; 400 + 401 + static int port_stp_init(struct platform_device *pdev, 402 + struct dfl_feature *feature) 403 + { 404 + struct resource *res = &pdev->resource[feature->resource_index]; 405 + 406 + return afu_mmio_region_add(dev_get_platdata(&pdev->dev), 407 + DFL_PORT_REGION_INDEX_STP, 408 + resource_size(res), res->start, 409 + DFL_PORT_REGION_MMAP | DFL_PORT_REGION_READ | 410 + DFL_PORT_REGION_WRITE); 411 + } 412 + 413 + static const struct dfl_feature_id 
port_stp_id_table[] = { 414 + {.id = PORT_FEATURE_ID_STP,}, 415 + {0,} 416 + }; 417 + 418 + static const struct dfl_feature_ops port_stp_ops = { 419 + .init = port_stp_init, 510 420 }; 511 421 512 422 static struct dfl_feature_driver port_feature_drvs[] = { ··· 537 407 { 538 408 .id_table = port_afu_id_table, 539 409 .ops = &port_afu_ops, 410 + }, 411 + { 412 + .id_table = port_err_id_table, 413 + .ops = &port_err_ops, 414 + }, 415 + { 416 + .id_table = port_stp_id_table, 417 + .ops = &port_stp_ops, 540 418 }, 541 419 { 542 420 .ops = NULL, ··· 832 694 833 695 mutex_lock(&pdata->lock); 834 696 if (enable) 835 - port_enable(pdev); 697 + __afu_port_enable(pdev); 836 698 else 837 - ret = port_disable(pdev); 699 + ret = __afu_port_disable(pdev); 838 700 mutex_unlock(&pdata->lock); 839 701 840 702 return ret; ··· 886 748 return 0; 887 749 } 888 750 751 + static const struct attribute_group *afu_dev_groups[] = { 752 + &port_hdr_group, 753 + &port_afu_group, 754 + &port_err_group, 755 + NULL 756 + }; 757 + 889 758 static struct platform_driver afu_driver = { 890 759 .driver = { 891 - .name = DFL_FPGA_FEATURE_DEV_PORT, 760 + .name = DFL_FPGA_FEATURE_DEV_PORT, 761 + .dev_groups = afu_dev_groups, 892 762 }, 893 763 .probe = afu_probe, 894 764 .remove = afu_remove,
+9
drivers/fpga/dfl-afu.h
··· 79 79 struct dfl_feature_platform_data *pdata; 80 80 }; 81 81 82 + /* hold pdata->lock when call __afu_port_enable/disable */ 83 + void __afu_port_enable(struct platform_device *pdev); 84 + int __afu_port_disable(struct platform_device *pdev); 85 + 82 86 void afu_mmio_region_init(struct dfl_feature_platform_data *pdata); 83 87 int afu_mmio_region_add(struct dfl_feature_platform_data *pdata, 84 88 u32 region_index, u64 region_size, u64 phys, u32 flags); ··· 101 97 struct dfl_afu_dma_region * 102 98 afu_dma_region_find(struct dfl_feature_platform_data *pdata, 103 99 u64 iova, u64 size); 100 + 101 + extern const struct dfl_feature_ops port_err_ops; 102 + extern const struct dfl_feature_id port_err_id_table[]; 103 + extern const struct attribute_group port_err_group; 104 + 104 105 #endif /* __DFL_AFU_H */
+359
drivers/fpga/dfl-fme-error.c
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Driver for FPGA Management Engine Error Management
+ *
+ * Copyright 2019 Intel Corporation, Inc.
+ *
+ * Authors:
+ *   Kang Luwei <luwei.kang@intel.com>
+ *   Xiao Guangrong <guangrong.xiao@linux.intel.com>
+ *   Wu Hao <hao.wu@intel.com>
+ *   Joseph Grecco <joe.grecco@intel.com>
+ *   Enno Luebbers <enno.luebbers@intel.com>
+ *   Tim Whisonant <tim.whisonant@intel.com>
+ *   Ananda Ravuri <ananda.ravuri@intel.com>
+ *   Mitchel, Henry <henry.mitchel@intel.com>
+ */
+
+#include <linux/uaccess.h>
+
+#include "dfl.h"
+#include "dfl-fme.h"
+
+#define FME_ERROR_MASK		0x8
+#define FME_ERROR		0x10
+#define MBP_ERROR		BIT_ULL(6)
+#define PCIE0_ERROR_MASK	0x18
+#define PCIE0_ERROR		0x20
+#define PCIE1_ERROR_MASK	0x28
+#define PCIE1_ERROR		0x30
+#define FME_FIRST_ERROR		0x38
+#define FME_NEXT_ERROR		0x40
+#define RAS_NONFAT_ERROR_MASK	0x48
+#define RAS_NONFAT_ERROR	0x50
+#define RAS_CATFAT_ERROR_MASK	0x58
+#define RAS_CATFAT_ERROR	0x60
+#define RAS_ERROR_INJECT	0x68
+#define INJECT_ERROR_MASK	GENMASK_ULL(2, 0)
+
+#define ERROR_MASK		GENMASK_ULL(63, 0)
+
+static ssize_t pcie0_errors_show(struct device *dev,
+				 struct device_attribute *attr, char *buf)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	u64 value;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	mutex_lock(&pdata->lock);
+	value = readq(base + PCIE0_ERROR);
+	mutex_unlock(&pdata->lock);
+
+	return sprintf(buf, "0x%llx\n", (unsigned long long)value);
+}
+
+static ssize_t pcie0_errors_store(struct device *dev,
+				  struct device_attribute *attr,
+				  const char *buf, size_t count)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	int ret = 0;
+	u64 v, val;
+
+	if (kstrtou64(buf, 0, &val))
+		return -EINVAL;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	mutex_lock(&pdata->lock);
+	writeq(GENMASK_ULL(63, 0), base + PCIE0_ERROR_MASK);
+
+	v = readq(base + PCIE0_ERROR);
+	if (val == v)
+		writeq(v, base + PCIE0_ERROR);
+	else
+		ret = -EINVAL;
+
+	writeq(0ULL, base + PCIE0_ERROR_MASK);
+	mutex_unlock(&pdata->lock);
+	return ret ? ret : count;
+}
+static DEVICE_ATTR_RW(pcie0_errors);
+
+static ssize_t pcie1_errors_show(struct device *dev,
+				 struct device_attribute *attr, char *buf)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	u64 value;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	mutex_lock(&pdata->lock);
+	value = readq(base + PCIE1_ERROR);
+	mutex_unlock(&pdata->lock);
+
+	return sprintf(buf, "0x%llx\n", (unsigned long long)value);
+}
+
+static ssize_t pcie1_errors_store(struct device *dev,
+				  struct device_attribute *attr,
+				  const char *buf, size_t count)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	int ret = 0;
+	u64 v, val;
+
+	if (kstrtou64(buf, 0, &val))
+		return -EINVAL;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	mutex_lock(&pdata->lock);
+	writeq(GENMASK_ULL(63, 0), base + PCIE1_ERROR_MASK);
+
+	v = readq(base + PCIE1_ERROR);
+	if (val == v)
+		writeq(v, base + PCIE1_ERROR);
+	else
+		ret = -EINVAL;
+
+	writeq(0ULL, base + PCIE1_ERROR_MASK);
+	mutex_unlock(&pdata->lock);
+	return ret ? ret : count;
+}
+static DEVICE_ATTR_RW(pcie1_errors);
+
+static ssize_t nonfatal_errors_show(struct device *dev,
+				    struct device_attribute *attr, char *buf)
+{
+	void __iomem *base;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	return sprintf(buf, "0x%llx\n",
+		       (unsigned long long)readq(base + RAS_NONFAT_ERROR));
+}
+static DEVICE_ATTR_RO(nonfatal_errors);
+
+static ssize_t catfatal_errors_show(struct device *dev,
+				    struct device_attribute *attr, char *buf)
+{
+	void __iomem *base;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	return sprintf(buf, "0x%llx\n",
+		       (unsigned long long)readq(base + RAS_CATFAT_ERROR));
+}
+static DEVICE_ATTR_RO(catfatal_errors);
+
+static ssize_t inject_errors_show(struct device *dev,
+				  struct device_attribute *attr, char *buf)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	u64 v;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	mutex_lock(&pdata->lock);
+	v = readq(base + RAS_ERROR_INJECT);
+	mutex_unlock(&pdata->lock);
+
+	return sprintf(buf, "0x%llx\n",
+		       (unsigned long long)FIELD_GET(INJECT_ERROR_MASK, v));
+}
+
+static ssize_t inject_errors_store(struct device *dev,
+				   struct device_attribute *attr,
+				   const char *buf, size_t count)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	u8 inject_error;
+	u64 v;
+
+	if (kstrtou8(buf, 0, &inject_error))
+		return -EINVAL;
+
+	if (inject_error & ~INJECT_ERROR_MASK)
+		return -EINVAL;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	mutex_lock(&pdata->lock);
+	v = readq(base + RAS_ERROR_INJECT);
+	v &= ~INJECT_ERROR_MASK;
+	v |= FIELD_PREP(INJECT_ERROR_MASK, inject_error);
+	writeq(v, base + RAS_ERROR_INJECT);
+	mutex_unlock(&pdata->lock);
+
+	return count;
+}
+static DEVICE_ATTR_RW(inject_errors);
+
+static ssize_t fme_errors_show(struct device *dev,
+			       struct device_attribute *attr, char *buf)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	u64 value;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	mutex_lock(&pdata->lock);
+	value = readq(base + FME_ERROR);
+	mutex_unlock(&pdata->lock);
+
+	return sprintf(buf, "0x%llx\n", (unsigned long long)value);
+}
+
+static ssize_t fme_errors_store(struct device *dev,
+				struct device_attribute *attr,
+				const char *buf, size_t count)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	u64 v, val;
+	int ret = 0;
+
+	if (kstrtou64(buf, 0, &val))
+		return -EINVAL;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	mutex_lock(&pdata->lock);
+	writeq(GENMASK_ULL(63, 0), base + FME_ERROR_MASK);
+
+	v = readq(base + FME_ERROR);
+	if (val == v)
+		writeq(v, base + FME_ERROR);
+	else
+		ret = -EINVAL;
+
+	/* Workaround: disable MBP_ERROR if feature revision is 0 */
+	writeq(dfl_feature_revision(base) ? 0ULL : MBP_ERROR,
+	       base + FME_ERROR_MASK);
+	mutex_unlock(&pdata->lock);
+	return ret ? ret : count;
+}
+static DEVICE_ATTR_RW(fme_errors);
+
+static ssize_t first_error_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	u64 value;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	mutex_lock(&pdata->lock);
+	value = readq(base + FME_FIRST_ERROR);
+	mutex_unlock(&pdata->lock);
+
+	return sprintf(buf, "0x%llx\n", (unsigned long long)value);
+}
+static DEVICE_ATTR_RO(first_error);
+
+static ssize_t next_error_show(struct device *dev,
+			       struct device_attribute *attr, char *buf)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	u64 value;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	mutex_lock(&pdata->lock);
+	value = readq(base + FME_NEXT_ERROR);
+	mutex_unlock(&pdata->lock);
+
+	return sprintf(buf, "0x%llx\n", (unsigned long long)value);
+}
+static DEVICE_ATTR_RO(next_error);
+
+static struct attribute *fme_global_err_attrs[] = {
+	&dev_attr_pcie0_errors.attr,
+	&dev_attr_pcie1_errors.attr,
+	&dev_attr_nonfatal_errors.attr,
+	&dev_attr_catfatal_errors.attr,
+	&dev_attr_inject_errors.attr,
+	&dev_attr_fme_errors.attr,
+	&dev_attr_first_error.attr,
+	&dev_attr_next_error.attr,
+	NULL,
+};
+
+static umode_t fme_global_err_attrs_visible(struct kobject *kobj,
+					    struct attribute *attr, int n)
+{
+	struct device *dev = kobj_to_dev(kobj);
+
+	/*
+	 * sysfs entries are visible only if related private feature is
+	 * enumerated.
+	 */
+	if (!dfl_get_feature_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR))
+		return 0;
+
+	return attr->mode;
+}
+
+const struct attribute_group fme_global_err_group = {
+	.name       = "errors",
+	.attrs      = fme_global_err_attrs,
+	.is_visible = fme_global_err_attrs_visible,
+};
+
+static void fme_err_mask(struct device *dev, bool mask)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	mutex_lock(&pdata->lock);
+
+	/* Workaround: keep MBP_ERROR always masked if revision is 0 */
+	if (dfl_feature_revision(base))
+		writeq(mask ? ERROR_MASK : 0, base + FME_ERROR_MASK);
+	else
+		writeq(mask ? ERROR_MASK : MBP_ERROR, base + FME_ERROR_MASK);
+
+	writeq(mask ? ERROR_MASK : 0, base + PCIE0_ERROR_MASK);
+	writeq(mask ? ERROR_MASK : 0, base + PCIE1_ERROR_MASK);
+	writeq(mask ? ERROR_MASK : 0, base + RAS_NONFAT_ERROR_MASK);
+	writeq(mask ? ERROR_MASK : 0, base + RAS_CATFAT_ERROR_MASK);
+
+	mutex_unlock(&pdata->lock);
+}
+
+static int fme_global_err_init(struct platform_device *pdev,
+			       struct dfl_feature *feature)
+{
+	fme_err_mask(&pdev->dev, false);
+
+	return 0;
+}
+
+static void fme_global_err_uinit(struct platform_device *pdev,
+				 struct dfl_feature *feature)
+{
+	fme_err_mask(&pdev->dev, true);
+}
+
+const struct dfl_feature_id fme_global_err_id_table[] = {
+	{.id = FME_FEATURE_ID_GLOBAL_ERR,},
+	{0,}
+};
+
+const struct dfl_feature_ops fme_global_err_ops = {
+	.init = fme_global_err_init,
+	.uinit = fme_global_err_uinit,
+};
+15 -27
drivers/fpga/dfl-fme-main.c
···
 	&dev_attr_socket_id.attr,
 	NULL,
 };
-ATTRIBUTE_GROUPS(fme_hdr);

-static int fme_hdr_init(struct platform_device *pdev,
-			struct dfl_feature *feature)
-{
-	void __iomem *base = feature->ioaddr;
-	int ret;
-
-	dev_dbg(&pdev->dev, "FME HDR Init.\n");
-	dev_dbg(&pdev->dev, "FME cap %llx.\n",
-		(unsigned long long)readq(base + FME_HDR_CAP));
-
-	ret = device_add_groups(&pdev->dev, fme_hdr_groups);
-	if (ret)
-		return ret;
-
-	return 0;
-}
-
-static void fme_hdr_uinit(struct platform_device *pdev,
-			  struct dfl_feature *feature)
-{
-	dev_dbg(&pdev->dev, "FME HDR UInit.\n");
-	device_remove_groups(&pdev->dev, fme_hdr_groups);
-}
+static const struct attribute_group fme_hdr_group = {
+	.attrs = fme_hdr_attrs,
+};

 static long fme_hdr_ioctl_release_port(struct dfl_feature_platform_data *pdata,
 				       unsigned long arg)
···
 };

 static const struct dfl_feature_ops fme_hdr_ops = {
-	.init = fme_hdr_init,
-	.uinit = fme_hdr_uinit,
 	.ioctl = fme_hdr_ioctl,
 };
···
 	{
 		.id_table = fme_pr_mgmt_id_table,
 		.ops = &fme_pr_mgmt_ops,
+	},
+	{
+		.id_table = fme_global_err_id_table,
+		.ops = &fme_global_err_ops,
 	},
 	{
 		.ops = NULL,
···
 	return 0;
 }

+static const struct attribute_group *fme_dev_groups[] = {
+	&fme_hdr_group,
+	&fme_global_err_group,
+	NULL
+};
+
 static struct platform_driver fme_driver = {
 	.driver	= {
-		.name = DFL_FPGA_FEATURE_DEV_FME,
+		.name       = DFL_FPGA_FEATURE_DEV_FME,
+		.dev_groups = fme_dev_groups,
 	},
 	.probe   = fme_probe,
 	.remove  = fme_remove,
+3
drivers/fpga/dfl-fme.h
···

 extern const struct dfl_feature_ops fme_pr_mgmt_ops;
 extern const struct dfl_feature_id fme_pr_mgmt_id_table[];
+extern const struct dfl_feature_ops fme_global_err_ops;
+extern const struct dfl_feature_id fme_global_err_id_table[];
+extern const struct attribute_group fme_global_err_group;

 #endif /* __DFL_FME_H */
+6 -4
drivers/fpga/dfl.c
···
 				  struct dfl_feature *feature,
 				  struct dfl_feature_driver *drv)
 {
-	int ret;
+	int ret = 0;

-	ret = drv->ops->init(pdev, feature);
-	if (ret)
-		return ret;
+	if (drv->ops->init) {
+		ret = drv->ops->init(pdev, feature);
+		if (ret)
+			return ret;
+	}

 	feature->ops = drv->ops;
+9
drivers/fpga/dfl.h
···
 #define PORT_HDR_CAP		0x30
 #define PORT_HDR_CTRL		0x38
 #define PORT_HDR_STS		0x40
+#define PORT_HDR_USRCLK_CMD0	0x50
+#define PORT_HDR_USRCLK_CMD1	0x58
+#define PORT_HDR_USRCLK_STS0	0x60
+#define PORT_HDR_USRCLK_STS1	0x68

 /* Port Capability Register Bitfield */
 #define PORT_CAP_PORT_NUM	GENMASK_ULL(1, 0)	/* ID of this port */
···

 	return (FIELD_GET(DFH_TYPE, v) == DFH_TYPE_FIU) &&
 		(FIELD_GET(DFH_ID, v) == DFH_ID_FIU_PORT);
+}
+
+static inline u8 dfl_feature_revision(void __iomem *base)
+{
+	return (u8)FIELD_GET(DFH_REVISION, readq(base + DFH));
 }

 /**
+3
include/linux/device.h
···
  * @resume:	Called to bring a device from sleep mode.
  * @groups:	Default attributes that get created by the driver core
  *		automatically.
+ * @dev_groups:	Additional attributes attached to the device instance once
+ *		it is bound to the driver.
  * @pm:		Power management operations of the device which matched
  *		this driver.
  * @coredump:	Called when sysfs entry is written to. The device driver
···
 	int (*suspend) (struct device *dev, pm_message_t state);
 	int (*resume) (struct device *dev);
 	const struct attribute_group **groups;
+	const struct attribute_group **dev_groups;

 	const struct dev_pm_ops *pm;
 	void (*coredump) (struct device *dev);