Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'edac_updates_for_v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras

Pull EDAC updates from Borislav Petkov:
 "Somewhat busier than usual this cycle:

   - Add support for AST2400 and AST2600 hw to aspeed_edac (Troy Lee)

   - Remove an orphaned mv64x60_edac driver. Good riddance (Michael
     Ellerman)

   - Add a new igen6 driver for Intel client SoCs with an integrated
     memory controller and using in-band ECC (Qiuxu Zhuo and Tony Luck)

   - The usual smattering of fixes and cleanups all over"

* tag 'edac_updates_for_v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras:
EDAC/mv64x60: Remove orphan mv64x60 driver
EDAC/aspeed: Add support for AST2400 and AST2600
ARM: dts: aspeed: Add AST2600 EDAC into common devicetree
dt-bindings: edac: aspeed-sdram-edac: Add ast2400/ast2600 support
EDAC/amd64: Fix PCI component registration
EDAC/igen6: ecclog_llist can be static
EDAC/i10nm: Add Intel Sapphire Rapids server support
EDAC: Add DDR5 new memory type
EDAC/i10nm: Use readl() to access MMIO registers
MAINTAINERS: Add entry for Intel IGEN6 EDAC driver
EDAC/igen6: Add debugfs interface for Intel client SoC EDAC driver
EDAC/igen6: Add EDAC driver for Intel client SoCs using IBECC
EDAC/synopsys: Return the correct value in mc_probe()
MAINTAINERS: Clean up the F: entries for some EDAC drivers
EDAC: Add three new memory types
EDAC: Fix some kernel-doc markups
EDAC: Do not issue useless debug statements in the polling routine
EDAC/amd64: Remove unneeded breaks

+1119 -1080
+6 -3
Documentation/devicetree/bindings/edac/aspeed-sdram-edac.txt
@@ -1,6 +1,6 @@
-Aspeed AST2500 SoC EDAC node
+Aspeed BMC SoC EDAC node
 
-The Aspeed AST2500 SoC supports DDR3 and DDR4 memory with and without ECC (error
+The Aspeed BMC SoC supports DDR3 and DDR4 memory with and without ECC (error
 correction check).
 
 The memory controller supports SECDED (single bit error correction, double bit
@@ -11,7 +11,10 @@
 
 
 Required properties:
-- compatible: should be "aspeed,ast2500-sdram-edac"
+- compatible: should be one of
+  - "aspeed,ast2400-sdram-edac"
+  - "aspeed,ast2500-sdram-edac"
+  - "aspeed,ast2600-sdram-edac"
- reg: sdram controller register set should be <0x1e6e0000 0x174>
- interrupts: should be AVIC interrupt #0
+9 -2
MAINTAINERS
@@ -2485,7 +2485,7 @@
 ARM/SOCFPGA EDAC SUPPORT
 M:	Dinh Nguyen <dinguyen@kernel.org>
 S:	Maintained
-F:	drivers/edac/altera_edac.
+F:	drivers/edac/altera_edac.[ch]
 
 ARM/SPREADTRUM SoC SUPPORT
 M:	Orson Zhai <orsonzhai@gmail.com>
@@ -6370,6 +6370,13 @@
 S:	Maintained
 F:	drivers/edac/ie31200_edac.c
 
+EDAC-IGEN6
+M:	Tony Luck <tony.luck@intel.com>
+R:	Qiuxu Zhuo <qiuxu.zhuo@intel.com>
+L:	linux-edac@vger.kernel.org
+S:	Maintained
+F:	drivers/edac/igen6_edac.c
+
 EDAC-MPC85XX
 M:	Johannes Thumshirn <morbidrsa@gmail.com>
 L:	linux-edac@vger.kernel.org
@@ -6426,7 +6419,7 @@
 M:	Tony Luck <tony.luck@intel.com>
 L:	linux-edac@vger.kernel.org
 S:	Maintained
-F:	drivers/edac/skx_*.c
+F:	drivers/edac/skx_*.[ch]
 
 EDAC-TI
 M:	Tero Kristo <t-kristo@ti.com>
+6
arch/arm/boot/dts/aspeed-g6.dtsi
@@ -69,6 +69,12 @@
 			always-on;
 		};
 
+		edac: sdram@1e6e0000 {
+			compatible = "aspeed,ast2600-sdram-edac", "syscon";
+			reg = <0x1e6e0000 0x174>;
+			interrupts = <GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>;
+		};
+
 	ahb {
 		compatible = "simple-bus";
 		#address-cells = <1>;
+12 -10
drivers/edac/Kconfig
@@ -269,6 +269,15 @@
 	  first used on the Apollo Lake platform and Denverton
 	  micro-server but may appear on others in the future.
 
+config EDAC_IGEN6
+	tristate "Intel client SoC Integrated MC"
+	depends on PCI && X86_64 && PCI_MMCONFIG && ARCH_HAVE_NMI_SAFE_CMPXCHG
+	help
+	  Support for error detection and correction on the Intel
+	  client SoC Integrated Memory Controller using In-Band ECC IP.
+	  This In-Band ECC is first used on the Elkhart Lake SoC but
+	  may appear on others in the future.
+
 config EDAC_MPC85XX
 	bool "Freescale MPC83xx / MPC85xx"
 	depends on FSL_SOC && EDAC=y
@@ -291,13 +282,6 @@
 	help
 	  Support for error detection and correction on Freescale memory
 	  controllers on Layerscape SoCs.
-
-config EDAC_MV64X60
-	tristate "Marvell MV64x60"
-	depends on MV64X60
-	help
-	  Support for error detection and correction on the Marvell
-	  MV64360 and MV64460 chipsets.
 
 config EDAC_PASEMI
 	tristate "PA Semi PWRficient"
@@ -517,10 +515,10 @@
 	  health, you should probably say 'Y' here.
 
 config EDAC_ASPEED
-	tristate "Aspeed AST 2500 SoC"
-	depends on MACH_ASPEED_G5
+	tristate "Aspeed AST BMC SoC"
+	depends on ARCH_ASPEED
 	help
-	  Support for error detection and correction on the Aspeed AST 2500 SoC.
+	  Support for error detection and correction on the Aspeed AST BMC SoC.
 
 	  First, ECC must be configured in the bootloader. Then, this driver
 	  will expose error counters via the EDAC kernel framework.
+1 -1
drivers/edac/Makefile
@@ -32,6 +32,7 @@
 obj-$(CONFIG_EDAC_I7CORE)		+= i7core_edac.o
 obj-$(CONFIG_EDAC_SBRIDGE)		+= sb_edac.o
 obj-$(CONFIG_EDAC_PND2)			+= pnd2_edac.o
+obj-$(CONFIG_EDAC_IGEN6)		+= igen6_edac.o
 obj-$(CONFIG_EDAC_E7XXX)		+= e7xxx_edac.o
 obj-$(CONFIG_EDAC_E752X)		+= e752x_edac.o
 obj-$(CONFIG_EDAC_I82443BXGX)		+= i82443bxgx_edac.o
@@ -65,7 +64,6 @@
 i10nm_edac-y			:= skx_common.o i10nm_base.o
 obj-$(CONFIG_EDAC_I10NM)	+= i10nm_edac.o
 
-obj-$(CONFIG_EDAC_MV64X60)	+= mv64x60_edac.o
 obj-$(CONFIG_EDAC_CELL)		+= cell_edac.o
 obj-$(CONFIG_EDAC_PPC4XX)	+= ppc4xx_edac.o
 obj-$(CONFIG_EDAC_AMD8111)	+= amd8111_edac.o
+14 -20
drivers/edac/amd64_edac.c
@@ -18,6 +18,9 @@
 /* Per-node stuff */
 static struct ecc_settings **ecc_stngs;
 
+/* Device for the PCI component */
+static struct device *pci_ctl_dev;
+
 /*
  * Valid scrub rates for the K8 hardware memory scrubber. We map the scrubbing
  * bandwidth to a valid bit pattern. The 'set' operation finds the 'matching-
@@ -2464,14 +2461,11 @@
 		case 0x20:
 		case 0x21:
 			return 0;
-			break;
 		case 0x22:
 		case 0x23:
 			return 1;
-			break;
 		default:
 			return err_sym >> 4;
-			break;
 		}
 	/* x8 symbols */
 	else
@@ -2478,17 +2478,12 @@
 			WARN(1, KERN_ERR "Invalid error symbol: 0x%x\n",
 					  err_sym);
 			return -1;
-			break;
-
 		case 0x11:
 			return 0;
-			break;
 		case 0x12:
 			return 1;
-			break;
 		default:
 			return err_sym >> 3;
-			break;
 		}
 	return -1;
 }
@@ -2678,6 +2683,9 @@
 		return -ENODEV;
 	}
 
+	if (!pci_ctl_dev)
+		pci_ctl_dev = &pvt->F0->dev;
+
 	edac_dbg(1, "F0: %s\n", pci_name(pvt->F0));
 	edac_dbg(1, "F3: %s\n", pci_name(pvt->F3));
 	edac_dbg(1, "F6: %s\n", pci_name(pvt->F6));
@@ -2704,6 +2706,9 @@
 		amd64_err("F2 not found: device 0x%x (broken BIOS?)\n", pci_id2);
 		return -ENODEV;
 	}
+
+	if (!pci_ctl_dev)
+		pci_ctl_dev = &pvt->F2->dev;
 
 	edac_dbg(1, "F1: %s\n", pci_name(pvt->F1));
 	edac_dbg(1, "F2: %s\n", pci_name(pvt->F2));
@@ -3624,21 +3623,10 @@
 
 static void setup_pci_device(void)
 {
-	struct mem_ctl_info *mci;
-	struct amd64_pvt *pvt;
-
 	if (pci_ctl)
 		return;
 
-	mci = edac_mc_find(0);
-	if (!mci)
-		return;
-
-	pvt = mci->pvt_info;
-	if (pvt->umc)
-		pci_ctl = edac_pci_create_generic_ctl(&pvt->F0->dev, EDAC_MOD_STR);
-	else
-		pci_ctl = edac_pci_create_generic_ctl(&pvt->F2->dev, EDAC_MOD_STR);
+	pci_ctl = edac_pci_create_generic_ctl(pci_ctl_dev, EDAC_MOD_STR);
 	if (!pci_ctl) {
 		pr_warn("%s(): Unable to create PCI control\n", __func__);
 		pr_warn("%s(): PCI error report via EDAC not set\n", __func__);
@@ -3706,6 +3716,8 @@
 	return 0;
 
 err_pci:
+	pci_ctl_dev = NULL;
+
 	msrs_free(msrs);
 	msrs = NULL;
 
@@ -3736,6 +3744,8 @@
 
 	kfree(ecc_stngs);
 	ecc_stngs = NULL;
+
+	pci_ctl_dev = NULL;
 
 	msrs_free(msrs);
 	msrs = NULL;
-1
drivers/edac/amd76x_edac.c
@@ -179,7 +179,6 @@
 static void amd76x_check(struct mem_ctl_info *mci)
 {
 	struct amd76x_error_info info;
-	edac_dbg(3, "\n");
 	amd76x_get_error_info(mci, &info);
 	amd76x_process_error_info(mci, &info, 1);
 }
+5 -2
drivers/edac/aspeed_edac.c
@@ -239,7 +239,7 @@
 	int rc;
 
 	/* retrieve info about physical memory from device tree */
-	np = of_find_node_by_path("/memory");
+	np = of_find_node_by_name(NULL, "memory");
 	if (!np) {
 		dev_err(mci->pdev, "dt: missing /memory node\n");
 		return -ENODEV;
@@ -375,10 +375,13 @@
 
 
 static const struct of_device_id aspeed_of_match[] = {
+	{ .compatible = "aspeed,ast2400-sdram-edac" },
 	{ .compatible = "aspeed,ast2500-sdram-edac" },
+	{ .compatible = "aspeed,ast2600-sdram-edac" },
 	{},
 };
 
+MODULE_DEVICE_TABLE(of, aspeed_of_match);
 
 static struct platform_driver aspeed_driver = {
 	.driver = {
@@ -395,5 +392,5 @@
 
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Stefan Schaeckeler <sschaeck@cisco.com>");
-MODULE_DESCRIPTION("Aspeed AST2500 EDAC driver");
+MODULE_DESCRIPTION("Aspeed BMC SoC EDAC driver");
 MODULE_VERSION("1.0");
-1
drivers/edac/e752x_edac.c
@@ -980,7 +980,6 @@
 {
 	struct e752x_error_info info;
 
-	edac_dbg(3, "\n");
 	e752x_get_error_info(mci, &info);
 	e752x_process_error_info(mci, &info, 1);
 }
-1
drivers/edac/e7xxx_edac.c
@@ -333,7 +333,6 @@
 {
 	struct e7xxx_error_info info;
 
-	edac_dbg(3, "\n");
 	e7xxx_get_error_info(mci, &info);
 	e7xxx_process_error_info(mci, &info, 1);
 }
+5 -6
drivers/edac/edac_device.h
@@ -258,7 +258,7 @@
 extern void edac_device_free_ctl_info(struct edac_device_ctl_info *ctl_info);
 
 /**
- * edac_device_add_device: Insert the 'edac_dev' structure into the
+ * edac_device_add_device - Insert the 'edac_dev' structure into the
  * edac_device global list and create sysfs entries associated with
  * edac_device structure.
  *
@@ -271,9 +271,8 @@
 extern int edac_device_add_device(struct edac_device_ctl_info *edac_dev);
 
 /**
- * edac_device_del_device:
- *	Remove sysfs entries for specified edac_device structure and
- *	then remove edac_device structure from global list
+ * edac_device_del_device - Remove sysfs entries for specified edac_device
+ *	structure and then remove edac_device structure from global list
 *
 * @dev:
 *	Pointer to struct &device representing the edac device
@@ -285,7 +286,7 @@
 extern struct edac_device_ctl_info *edac_device_del_device(struct device *dev);
 
 /**
- * Log correctable errors.
+ * edac_device_handle_ce_count - Log correctable errors.
 *
 * @edac_dev: pointer to struct &edac_device_ctl_info
 * @inst_nr: number of the instance where the CE error happened
@@ -298,7 +299,7 @@
 		const char *msg);
 
 /**
- * Log uncorrectable errors.
+ * edac_device_handle_ue_count - Log uncorrectable errors.
 *
 * @edac_dev: pointer to struct &edac_device_ctl_info
 * @inst_nr: number of the instance where the CE error happened
+4
drivers/edac/edac_mc.c
@@ -158,10 +158,14 @@
 	[MEM_DDR3] = "Unbuffered-DDR3",
 	[MEM_RDDR3] = "Registered-DDR3",
 	[MEM_LRDDR3] = "Load-Reduced-DDR3-RAM",
+	[MEM_LPDDR3] = "Low-Power-DDR3-RAM",
 	[MEM_DDR4] = "Unbuffered-DDR4",
 	[MEM_RDDR4] = "Registered-DDR4",
+	[MEM_LPDDR4] = "Low-Power-DDR4-RAM",
 	[MEM_LRDDR4] = "Load-Reduced-DDR4-RAM",
+	[MEM_DDR5] = "Unbuffered-DDR5",
 	[MEM_NVDIMM] = "Non-volatile-RAM",
+	[MEM_WIO2] = "Wide-IO-2",
 };
 EXPORT_SYMBOL_GPL(edac_mem_types);
 
+29 -10
drivers/edac/i10nm_base.c
@@ -6,27 +6,32 @@
  */
 
 #include <linux/kernel.h>
+#include <linux/io.h>
 #include <asm/cpu_device_id.h>
 #include <asm/intel-family.h>
 #include <asm/mce.h>
 #include "edac_module.h"
 #include "skx_common.h"
 
-#define I10NM_REVISION	"v0.0.3"
+#define I10NM_REVISION	"v0.0.4"
 #define EDAC_MOD_STR	"i10nm_edac"
 
 /* Debug macros */
 #define i10nm_printk(level, fmt, arg...)	\
 	edac_printk(level, "i10nm", fmt, ##arg)
 
-#define I10NM_GET_SCK_BAR(d, reg)		\
+#define I10NM_GET_SCK_BAR(d, reg)	\
 	pci_read_config_dword((d)->uracu, 0xd0, &(reg))
 #define I10NM_GET_IMC_BAR(d, i, reg)	\
 	pci_read_config_dword((d)->uracu, 0xd8 + (i) * 4, &(reg))
 #define I10NM_GET_DIMMMTR(m, i, j)	\
-	(*(u32 *)((m)->mbase + 0x2080c + (i) * 0x4000 + (j) * 4))
+	readl((m)->mbase + 0x2080c + (i) * (m)->chan_mmio_sz + (j) * 4)
 #define I10NM_GET_MCDDRTCFG(m, i, j)	\
-	(*(u32 *)((m)->mbase + 0x20970 + (i) * 0x4000 + (j) * 4))
+	readl((m)->mbase + 0x20970 + (i) * (m)->chan_mmio_sz + (j) * 4)
+#define I10NM_GET_MCMTR(m, i)		\
+	readl((m)->mbase + 0x20ef8 + (i) * (m)->chan_mmio_sz)
+#define I10NM_GET_AMAP(m, i)		\
+	readl((m)->mbase + 0x20814 + (i) * (m)->chan_mmio_sz)
 
 #define I10NM_GET_SCK_MMIO_BASE(reg)	(GET_BITFIELD(reg, 0, 28) << 23)
 #define I10NM_GET_IMC_MMIO_OFFSET(reg)	(GET_BITFIELD(reg, 0, 10) << 12)
@@ -131,12 +126,22 @@
 	.type			= I10NM,
 	.decs_did		= 0x3452,
 	.busno_cfg_offset	= 0xcc,
+	.ddr_chan_mmio_sz	= 0x4000,
 };
 
 static struct res_config i10nm_cfg1 = {
 	.type			= I10NM,
 	.decs_did		= 0x3452,
 	.busno_cfg_offset	= 0xd0,
+	.ddr_chan_mmio_sz	= 0x4000,
+};
+
+static struct res_config spr_cfg = {
+	.type			= SPR,
+	.decs_did		= 0x3252,
+	.busno_cfg_offset	= 0xd0,
+	.ddr_chan_mmio_sz	= 0x8000,
+	.support_ddr5		= true,
 };
 
 static const struct x86_cpu_id i10nm_cpuids[] = {
@@ -155,6 +140,7 @@
 	X86_MATCH_INTEL_FAM6_MODEL_STEPPINGS(ICELAKE_X, X86_STEPPINGS(0x0, 0x3), &i10nm_cfg0),
 	X86_MATCH_INTEL_FAM6_MODEL_STEPPINGS(ICELAKE_X, X86_STEPPINGS(0x4, 0xf), &i10nm_cfg1),
 	X86_MATCH_INTEL_FAM6_MODEL_STEPPINGS(ICELAKE_D, X86_STEPPINGS(0x0, 0xf), &i10nm_cfg1),
+	X86_MATCH_INTEL_FAM6_MODEL_STEPPINGS(SAPPHIRERAPIDS_X, X86_STEPPINGS(0x0, 0xf), &spr_cfg),
 	{}
 };
 MODULE_DEVICE_TABLE(x86cpu, i10nm_cpuids);
@@ -164,18 +148,19 @@
 {
 	u32 mcmtr;
 
-	mcmtr = *(u32 *)(imc->mbase + 0x20ef8 + chan * 0x4000);
+	mcmtr = I10NM_GET_MCMTR(imc, chan);
 	edac_dbg(1, "ch%d mcmtr reg %x\n", chan, mcmtr);
 
 	return !!GET_BITFIELD(mcmtr, 2, 2);
 }
 
-static int i10nm_get_dimm_config(struct mem_ctl_info *mci)
+static int i10nm_get_dimm_config(struct mem_ctl_info *mci,
+				 struct res_config *cfg)
 {
 	struct skx_pvt *pvt = mci->pvt_info;
 	struct skx_imc *imc = pvt->imc;
+	u32 mtr, amap, mcddrtcfg;
 	struct dimm_info *dimm;
-	u32 mtr, mcddrtcfg;
 	int i, j, ndimms;
 
 	for (i = 0; i < I10NM_NUM_CHANNELS; i++) {
@@ -184,6 +167,7 @@
 			continue;
 
 		ndimms = 0;
+		amap = I10NM_GET_AMAP(imc, i);
 		for (j = 0; j < I10NM_NUM_DIMMS; j++) {
 			dimm = edac_get_dimm(mci, i, j, 0);
 			mtr = I10NM_GET_DIMMMTR(imc, i, j);
@@ -193,8 +175,8 @@
 				 mtr, mcddrtcfg, imc->mc, i, j);
 
 			if (IS_DIMM_PRESENT(mtr))
-				ndimms += skx_get_dimm_info(mtr, 0, 0, dimm,
-							    imc, i, j);
+				ndimms += skx_get_dimm_info(mtr, 0, amap, dimm,
+							    imc, i, j, cfg);
 			else if (IS_NVDIMM_PRESENT(mcddrtcfg, j))
 				ndimms += skx_get_nvdimm_info(dimm, imc, i, j,
 							      EDAC_MOD_STR);
@@ -318,10 +300,11 @@
 			d->imc[i].lmc = i;
 			d->imc[i].src_id  = src_id;
 			d->imc[i].node_id = node_id;
+			d->imc[i].chan_mmio_sz = cfg->ddr_chan_mmio_sz;
 
 			rc = skx_register_mci(&d->imc[i], d->imc[i].mdev,
 					      "Intel_10nm Socket", EDAC_MOD_STR,
-					      i10nm_get_dimm_config);
+					      i10nm_get_dimm_config, cfg);
 			if (rc < 0)
 				goto fail;
 		}
-1
drivers/edac/i3000_edac.c
@@ -273,7 +273,6 @@
 {
 	struct i3000_error_info info;
 
-	edac_dbg(1, "MC%d\n", mci->mc_idx);
 	i3000_get_error_info(mci, &info);
 	i3000_process_error_info(mci, &info, 1);
 }
-1
drivers/edac/i3200_edac.c
@@ -253,7 +253,6 @@
 {
 	struct i3200_error_info info;
 
-	edac_dbg(1, "MC%d\n", mci->mc_idx);
 	i3200_get_and_clear_error_info(mci, &info);
 	i3200_process_error_info(mci, &info);
 }
+1 -1
drivers/edac/i5000_edac.c
@@ -765,7 +765,7 @@
 static void i5000_check_error(struct mem_ctl_info *mci)
 {
 	struct i5000_error_info info;
-	edac_dbg(4, "MC%d\n", mci->mc_idx);
+
 	i5000_get_error_info(mci, &info);
 	i5000_process_error_info(mci, &info, 1);
 }
+1 -1
drivers/edac/i5400_edac.c
@@ -686,7 +686,7 @@
 static void i5400_check_error(struct mem_ctl_info *mci)
 {
 	struct i5400_error_info info;
-	edac_dbg(4, "MC%d\n", mci->mc_idx);
+
 	i5400_get_error_info(mci, &info);
 	i5400_process_error_info(mci, &info);
 }
-1
drivers/edac/i82443bxgx_edac.c
@@ -176,7 +176,6 @@
 {
 	struct i82443bxgx_edacmc_error_info info;
 
-	edac_dbg(1, "MC%d\n", mci->mc_idx);
 	i82443bxgx_edacmc_get_error_info(mci, &info);
 	i82443bxgx_edacmc_process_error_info(mci, &info, 1);
 }
-1
drivers/edac/i82860_edac.c
@@ -135,7 +135,6 @@
 {
 	struct i82860_error_info info;
 
-	edac_dbg(1, "MC%d\n", mci->mc_idx);
 	i82860_get_error_info(mci, &info);
 	i82860_process_error_info(mci, &info, 1);
 }
-1
drivers/edac/i82875p_edac.c
@@ -262,7 +262,6 @@
 {
 	struct i82875p_error_info info;
 
-	edac_dbg(1, "MC%d\n", mci->mc_idx);
 	i82875p_get_error_info(mci, &info);
 	i82875p_process_error_info(mci, &info, 1);
 }
-1
drivers/edac/i82975x_edac.c
@@ -330,7 +330,6 @@
 {
 	struct i82975x_error_info info;
 
-	edac_dbg(1, "MC%d\n", mci->mc_idx);
 	i82975x_get_error_info(mci, &info);
 	i82975x_process_error_info(mci, &info, 1);
 }
-1
drivers/edac/ie31200_edac.c
@@ -333,7 +333,6 @@
 {
 	struct ie31200_error_info info;
 
-	edac_dbg(1, "MC%d\n", mci->mc_idx);
 	ie31200_get_and_clear_error_info(mci, &info);
 	ie31200_process_error_info(mci, &info);
 }
+977
drivers/edac/igen6_edac.c
@@ -0,0 +1,977 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Driver for Intel client SoC with integrated memory controller using IBECC
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ * The In-Band ECC (IBECC) IP provides ECC protection to all or specific
+ * regions of the physical memory space. It's used for memory controllers
+ * that don't support the out-of-band ECC which often needs an additional
+ * storage device to each channel for storing ECC data.
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+#include <linux/irq_work.h>
+#include <linux/llist.h>
+#include <linux/genalloc.h>
+#include <linux/edac.h>
+#include <linux/bits.h>
+#include <linux/io.h>
+#include <asm/mach_traps.h>
+#include <asm/nmi.h>
+
+#include "edac_mc.h"
+#include "edac_module.h"
+
+#define IGEN6_REVISION	"v2.4"
+
+#define EDAC_MOD_STR	"igen6_edac"
+#define IGEN6_NMI_NAME	"igen6_ibecc"
+
+/* Debug macros */
+#define igen6_printk(level, fmt, arg...)		\
+	edac_printk(level, "igen6", fmt, ##arg)
+
+#define igen6_mc_printk(mci, level, fmt, arg...)	\
+	edac_mc_chipset_printk(mci, level, "igen6", fmt, ##arg)
+
+#define GET_BITFIELD(v, lo, hi) (((v) & GENMASK_ULL(hi, lo)) >> (lo))
+
+#define NUM_IMC		1 /* Max memory controllers */
+#define NUM_CHANNELS	2 /* Max channels */
+#define NUM_DIMMS	2 /* Max DIMMs per channel */
+
+#define _4GB		BIT_ULL(32)
+
+/* Size of physical memory */
+#define TOM_OFFSET	0xa0
+/* Top of low usable DRAM */
+#define TOLUD_OFFSET	0xbc
+/* Capability register C */
+#define CAPID_C_OFFSET	0xec
+#define CAPID_C_IBECC	BIT(15)
+
+/* Error Status */
+#define ERRSTS_OFFSET	0xc8
+#define ERRSTS_CE	BIT_ULL(6)
+#define ERRSTS_UE	BIT_ULL(7)
+
+/* Error Command */
+#define ERRCMD_OFFSET	0xca
+#define ERRCMD_CE	BIT_ULL(6)
+#define ERRCMD_UE	BIT_ULL(7)
+
+/* IBECC MMIO base address */
+#define IBECC_BASE		(res_cfg->ibecc_base)
+#define IBECC_ACTIVATE_OFFSET	IBECC_BASE
+#define IBECC_ACTIVATE_EN	BIT(0)
+
+/* IBECC error log */
+#define ECC_ERROR_LOG_OFFSET	(IBECC_BASE + 0x170)
+#define ECC_ERROR_LOG_CE	BIT_ULL(62)
+#define ECC_ERROR_LOG_UE	BIT_ULL(63)
+#define ECC_ERROR_LOG_ADDR_SHIFT	5
+#define ECC_ERROR_LOG_ADDR(v)	GET_BITFIELD(v, 5, 38)
+#define ECC_ERROR_LOG_SYND(v)	GET_BITFIELD(v, 46, 61)
+
+/* Host MMIO base address */
+#define MCHBAR_OFFSET	0x48
+#define MCHBAR_EN	BIT_ULL(0)
+#define MCHBAR_BASE(v)	(GET_BITFIELD(v, 16, 38) << 16)
+#define MCHBAR_SIZE	0x10000
+
+/* Parameters for the channel decode stage */
+#define MAD_INTER_CHANNEL_OFFSET	0x5000
+#define MAD_INTER_CHANNEL_DDR_TYPE(v)	GET_BITFIELD(v, 0, 2)
+#define MAD_INTER_CHANNEL_ECHM(v)	GET_BITFIELD(v, 3, 3)
+#define MAD_INTER_CHANNEL_CH_L_MAP(v)	GET_BITFIELD(v, 4, 4)
+#define MAD_INTER_CHANNEL_CH_S_SIZE(v)	((u64)GET_BITFIELD(v, 12, 19) << 29)
+
+/* Parameters for DRAM decode stage */
+#define MAD_INTRA_CH0_OFFSET		0x5004
+#define MAD_INTRA_CH_DIMM_L_MAP(v)	GET_BITFIELD(v, 0, 0)
+
+/* DIMM characteristics */
+#define MAD_DIMM_CH0_OFFSET		0x500c
+#define MAD_DIMM_CH_DIMM_L_SIZE(v)	((u64)GET_BITFIELD(v, 0, 6) << 29)
+#define MAD_DIMM_CH_DLW(v)		GET_BITFIELD(v, 7, 8)
+#define MAD_DIMM_CH_DIMM_S_SIZE(v)	((u64)GET_BITFIELD(v, 16, 22) << 29)
+#define MAD_DIMM_CH_DSW(v)		GET_BITFIELD(v, 24, 25)
+
+/* Hash for channel selection */
+#define CHANNEL_HASH_OFFSET		0X5024
+/* Hash for enhanced channel selection */
+#define CHANNEL_EHASH_OFFSET		0X5028
+#define CHANNEL_HASH_MASK(v)		(GET_BITFIELD(v, 6, 19) << 6)
+#define CHANNEL_HASH_LSB_MASK_BIT(v)	GET_BITFIELD(v, 24, 26)
+#define CHANNEL_HASH_MODE(v)		GET_BITFIELD(v, 28, 28)
+
+static struct res_config {
+	int num_imc;
+	u32 ibecc_base;
+	bool (*ibecc_available)(struct pci_dev *pdev);
+	/* Convert error address logged in IBECC to system physical address */
+	u64 (*err_addr_to_sys_addr)(u64 eaddr);
+	/* Convert error address logged in IBECC to integrated memory controller address */
+	u64 (*err_addr_to_imc_addr)(u64 eaddr);
+} *res_cfg;
+
+struct igen6_imc {
+	int mc;
+	struct mem_ctl_info *mci;
+	struct pci_dev *pdev;
+	struct device dev;
+	void __iomem *window;
+	u64 ch_s_size;
+	int ch_l_map;
+	u64 dimm_s_size[NUM_CHANNELS];
+	u64 dimm_l_size[NUM_CHANNELS];
+	int dimm_l_map[NUM_CHANNELS];
+};
+
+static struct igen6_pvt {
+	struct igen6_imc imc[NUM_IMC];
+} *igen6_pvt;
+
+/* The top of low usable DRAM */
+static u32 igen6_tolud;
+/* The size of physical memory */
+static u64 igen6_tom;
+
+struct decoded_addr {
+	int mc;
+	u64 imc_addr;
+	u64 sys_addr;
+	int channel_idx;
+	u64 channel_addr;
+	int sub_channel_idx;
+	u64 sub_channel_addr;
+};
+
+struct ecclog_node {
+	struct llist_node llnode;
+	int mc;
+	u64 ecclog;
+};
+
+/*
+ * In the NMI handler, the driver uses the lock-less memory allocator
+ * to allocate memory to store the IBECC error logs and links the logs
+ * to the lock-less list. Delay printk() and the work of error reporting
+ * to EDAC core in a worker.
+ */
+#define ECCLOG_POOL_SIZE	PAGE_SIZE
+static LLIST_HEAD(ecclog_llist);
+static struct gen_pool *ecclog_pool;
+static char ecclog_buf[ECCLOG_POOL_SIZE];
+static struct irq_work ecclog_irq_work;
+static struct work_struct ecclog_work;
+
+/* Compute die IDs for Elkhart Lake with IBECC */
+#define DID_EHL_SKU5	0x4514
+#define DID_EHL_SKU6	0x4528
+#define DID_EHL_SKU7	0x452a
+#define DID_EHL_SKU8	0x4516
+#define DID_EHL_SKU9	0x452c
+#define DID_EHL_SKU10	0x452e
+#define DID_EHL_SKU11	0x4532
+#define DID_EHL_SKU12	0x4518
+#define DID_EHL_SKU13	0x451a
+#define DID_EHL_SKU14	0x4534
+#define DID_EHL_SKU15	0x4536
+
+static bool ehl_ibecc_available(struct pci_dev *pdev)
+{
+	u32 v;
+
+	if (pci_read_config_dword(pdev, CAPID_C_OFFSET, &v))
+		return false;
+
+	return !!(CAPID_C_IBECC & v);
+}
+
+static u64 ehl_err_addr_to_sys_addr(u64 eaddr)
+{
+	return eaddr;
+}
+
+static u64 ehl_err_addr_to_imc_addr(u64 eaddr)
+{
+	if (eaddr < igen6_tolud)
+		return eaddr;
+
+	if (igen6_tom <= _4GB)
+		return eaddr + igen6_tolud - _4GB;
+
+	if (eaddr < _4GB)
+		return eaddr + igen6_tolud - igen6_tom;
+
+	return eaddr;
+}
+
+static struct res_config ehl_cfg = {
+	.num_imc		= 1,
+	.ibecc_base		= 0xdc00,
+	.ibecc_available	= ehl_ibecc_available,
+	.err_addr_to_sys_addr	= ehl_err_addr_to_sys_addr,
+	.err_addr_to_imc_addr	= ehl_err_addr_to_imc_addr,
+};
+
+static const struct pci_device_id igen6_pci_tbl[] = {
+	{ PCI_VDEVICE(INTEL, DID_EHL_SKU5), (kernel_ulong_t)&ehl_cfg },
+	{ PCI_VDEVICE(INTEL, DID_EHL_SKU6), (kernel_ulong_t)&ehl_cfg },
+	{ PCI_VDEVICE(INTEL, DID_EHL_SKU7), (kernel_ulong_t)&ehl_cfg },
+	{ PCI_VDEVICE(INTEL, DID_EHL_SKU8), (kernel_ulong_t)&ehl_cfg },
+	{ PCI_VDEVICE(INTEL, DID_EHL_SKU9), (kernel_ulong_t)&ehl_cfg },
+	{ PCI_VDEVICE(INTEL, DID_EHL_SKU10), (kernel_ulong_t)&ehl_cfg },
+	{ PCI_VDEVICE(INTEL, DID_EHL_SKU11), (kernel_ulong_t)&ehl_cfg },
+	{ PCI_VDEVICE(INTEL, DID_EHL_SKU12), (kernel_ulong_t)&ehl_cfg },
+	{ PCI_VDEVICE(INTEL, DID_EHL_SKU13), (kernel_ulong_t)&ehl_cfg },
+	{ PCI_VDEVICE(INTEL, DID_EHL_SKU14), (kernel_ulong_t)&ehl_cfg },
+	{ PCI_VDEVICE(INTEL, DID_EHL_SKU15), (kernel_ulong_t)&ehl_cfg },
+	{ },
+};
+MODULE_DEVICE_TABLE(pci, igen6_pci_tbl);
+
+static enum dev_type get_width(int dimm_l, u32 mad_dimm)
+{
+	u32 w = dimm_l ? MAD_DIMM_CH_DLW(mad_dimm) :
+			 MAD_DIMM_CH_DSW(mad_dimm);
+
+	switch (w) {
+	case 0:
+		return DEV_X8;
+	case 1:
+		return DEV_X16;
+	case 2:
+		return DEV_X32;
+	default:
+		return DEV_UNKNOWN;
+	}
+}
+
+static enum mem_type get_memory_type(u32 mad_inter)
+{
+	u32 t = MAD_INTER_CHANNEL_DDR_TYPE(mad_inter);
+
+	switch (t) {
+	case 0:
+		return MEM_DDR4;
+	case 1:
+		return MEM_DDR3;
+	case 2:
+		return MEM_LPDDR3;
+	case 3:
+		return MEM_LPDDR4;
+	case 4:
+		return MEM_WIO2;
+	default:
+		return MEM_UNKNOWN;
+	}
+}
+
+static int decode_chan_idx(u64 addr, u64 mask, int intlv_bit)
+{
+	u64 hash_addr = addr & mask, hash = 0;
+	u64 intlv = (addr >> intlv_bit) & 1;
+	int i;
+
+	for (i = 6; i < 20; i++)
+		hash ^= (hash_addr >> i) & 1;
+
+	return (int)hash ^ intlv;
+}
+
+static u64 decode_channel_addr(u64 addr, int intlv_bit)
+{
+	u64 channel_addr;
+
+	/* Remove the interleave bit and shift upper part down to fill gap */
+	channel_addr  = GET_BITFIELD(addr, intlv_bit + 1, 63) << intlv_bit;
+	channel_addr |= GET_BITFIELD(addr, 0, intlv_bit - 1);
+
+	return channel_addr;
+}
+
+static void decode_addr(u64 addr, u32 hash, u64 s_size, int l_map,
+			int *idx, u64 *sub_addr)
+{
+	int intlv_bit = CHANNEL_HASH_LSB_MASK_BIT(hash) + 6;
+
+	if (addr > 2 * s_size) {
+		*sub_addr = addr - s_size;
+		*idx = l_map;
+		return;
+	}
+
+	if (CHANNEL_HASH_MODE(hash)) {
+		*sub_addr = decode_channel_addr(addr, intlv_bit);
+		*idx = decode_chan_idx(addr, CHANNEL_HASH_MASK(hash), intlv_bit);
+	} else {
+		*sub_addr = decode_channel_addr(addr, 6);
+		*idx = GET_BITFIELD(addr, 6, 6);
+	}
+}
+
+static int igen6_decode(struct decoded_addr *res)
+{
+	struct igen6_imc *imc = &igen6_pvt->imc[res->mc];
+	u64 addr = res->imc_addr, sub_addr, s_size;
+	int idx, l_map;
+	u32 hash;
+
+	if (addr >= igen6_tom) {
+		edac_dbg(0, "Address 0x%llx out of range\n", addr);
+		return -EINVAL;
+	}
+
+	/* Decode channel */
+	hash = readl(imc->window + CHANNEL_HASH_OFFSET);
+	s_size = imc->ch_s_size;
+	l_map  = imc->ch_l_map;
+	decode_addr(addr, hash, s_size, l_map, &idx, &sub_addr);
+	res->channel_idx  = idx;
+	res->channel_addr = sub_addr;
+
+	/* Decode sub-channel/DIMM */
+	hash = readl(imc->window + CHANNEL_EHASH_OFFSET);
+	s_size = imc->dimm_s_size[idx];
+	l_map  = imc->dimm_l_map[idx];
+	decode_addr(res->channel_addr, hash, s_size, l_map, &idx, &sub_addr);
+	res->sub_channel_idx  = idx;
+	res->sub_channel_addr = sub_addr;
+
+	return 0;
+}
+
+static void igen6_output_error(struct decoded_addr *res,
+			       struct mem_ctl_info *mci, u64 ecclog)
+{
+	enum hw_event_mc_err_type type = ecclog & ECC_ERROR_LOG_UE ?
+					 HW_EVENT_ERR_UNCORRECTED :
+					 HW_EVENT_ERR_CORRECTED;
+
+	edac_mc_handle_error(type, mci, 1,
+			     res->sys_addr >> PAGE_SHIFT,
+			     res->sys_addr & ~PAGE_MASK,
+			     ECC_ERROR_LOG_SYND(ecclog),
+			     res->channel_idx, res->sub_channel_idx,
+			     -1, "", "");
+}
+
+static struct gen_pool *ecclog_gen_pool_create(void)
+{
+	struct gen_pool *pool;
+
+	pool = gen_pool_create(ilog2(sizeof(struct ecclog_node)), -1);
+	if (!pool)
+		return NULL;
+
+	if (gen_pool_add(pool, (unsigned long)ecclog_buf, ECCLOG_POOL_SIZE, -1)) {
+		gen_pool_destroy(pool);
+		return NULL;
+	}
+
+	return pool;
+}
+
+static int ecclog_gen_pool_add(int mc, u64 ecclog)
+{
+	struct ecclog_node *node;
+
+	node = (void *)gen_pool_alloc(ecclog_pool, sizeof(*node));
+	if (!node)
+		return -ENOMEM;
+
+	node->mc = mc;
+	node->ecclog = ecclog;
+	llist_add(&node->llnode, &ecclog_llist);
+
+	return 0;
+}
+
+/*
+ * Either the memory-mapped I/O status register ECC_ERROR_LOG or the PCI
+ * configuration space status register ERRSTS can indicate whether a
+ * correctable error or an uncorrectable error occurred. We only use the
+ * ECC_ERROR_LOG register to check error type, but need to clear both
+ * registers to enable future error events.
+ */
+static u64 ecclog_read_and_clear(struct igen6_imc *imc)
+{
+	u64 ecclog = readq(imc->window + ECC_ERROR_LOG_OFFSET);
+
+	if (ecclog & (ECC_ERROR_LOG_CE | ECC_ERROR_LOG_UE)) {
+		/* Clear CE/UE bits by writing 1s */
+		writeq(ecclog, imc->window + ECC_ERROR_LOG_OFFSET);
+		return ecclog;
+	}
+
+	return 0;
+}
+
+static void errsts_clear(struct igen6_imc *imc)
+{
+	u16 errsts;
+
+	if (pci_read_config_word(imc->pdev, ERRSTS_OFFSET, &errsts)) {
+		igen6_printk(KERN_ERR, "Failed to read ERRSTS\n");
+		return;
+	}
+
+	/* Clear CE/UE bits by writing 1s */
+	if (errsts & (ERRSTS_CE | ERRSTS_UE))
+		pci_write_config_word(imc->pdev, ERRSTS_OFFSET, errsts);
+}
+
+static int errcmd_enable_error_reporting(bool enable)
+{
+	struct igen6_imc *imc = &igen6_pvt->imc[0];
+	u16 errcmd;
+	int rc;
+
+	rc = pci_read_config_word(imc->pdev, ERRCMD_OFFSET, &errcmd);
+	if (rc)
+		return rc;
+
+	if (enable)
+		errcmd |= ERRCMD_CE | ERRSTS_UE;
+	else
+		errcmd &= ~(ERRCMD_CE | ERRSTS_UE);
+
+	rc = pci_write_config_word(imc->pdev, ERRCMD_OFFSET, errcmd);
+	if (rc)
+		return rc;
+
+	return 0;
+}
+
+static int ecclog_handler(void)
+{
+	struct igen6_imc *imc;
+	int i, n = 0;
+	u64 ecclog;
+
+	for (i = 0; i < res_cfg->num_imc; i++) {
+		imc = &igen6_pvt->imc[i];
+
+		/* errsts_clear() isn't NMI-safe. Delay it in the IRQ context */
+
+		ecclog = ecclog_read_and_clear(imc);
+		if (!ecclog)
+			continue;
+
+		if (!ecclog_gen_pool_add(i, ecclog))
+			irq_work_queue(&ecclog_irq_work);
+
+		n++;
+	}
+
+	return n;
+}
+
+static void ecclog_work_cb(struct work_struct *work)
+{
+	struct ecclog_node *node, *tmp;
+	struct mem_ctl_info *mci;
+	struct llist_node *head;
+	struct decoded_addr res;
+	u64 eaddr;
+
+	head = llist_del_all(&ecclog_llist);
+	if (!head)
+		return;
+
+	llist_for_each_entry_safe(node, tmp, head, llnode) {
+		memset(&res, 0, sizeof(res));
+		eaddr = ECC_ERROR_LOG_ADDR(node->ecclog) <<
+			ECC_ERROR_LOG_ADDR_SHIFT;
+		res.mc	     = node->mc;
+		res.sys_addr = res_cfg->err_addr_to_sys_addr(eaddr);
+		res.imc_addr = res_cfg->err_addr_to_imc_addr(eaddr);
+
+		mci = igen6_pvt->imc[res.mc].mci;
+
+		edac_dbg(2, "MC %d, ecclog = 0x%llx\n", node->mc, node->ecclog);
+		igen6_mc_printk(mci, KERN_DEBUG, "HANDLING IBECC MEMORY ERROR\n");
+		igen6_mc_printk(mci, KERN_DEBUG, "ADDR 0x%llx ", res.sys_addr);
+
+		if (!igen6_decode(&res))
+			igen6_output_error(&res, mci, node->ecclog);
+
+		gen_pool_free(ecclog_pool, (unsigned long)node, sizeof(*node));
+	}
+}
+
+static void ecclog_irq_work_cb(struct irq_work *irq_work)
+{
+	int i;
+
+	for (i = 0; i < res_cfg->num_imc; i++)
+		errsts_clear(&igen6_pvt->imc[i]);
+
+	if (!llist_empty(&ecclog_llist))
+		schedule_work(&ecclog_work);
+}
+
+static int ecclog_nmi_handler(unsigned int cmd, struct pt_regs *regs)
+{
+	unsigned char reason;
+
+	if (!ecclog_handler())
+		return NMI_DONE;
+
+	/*
+	 * Both In-Band ECC correctable error and uncorrectable error are
+	 * reported by SERR# NMI. The NMI generic code (see pci_serr_error())
+	 * doesn't clear the bit NMI_REASON_CLEAR_SERR (in port 0x61) to
+	 * re-enable the SERR# NMI after NMI handling. So clear this bit here
+	 * to re-enable SERR# NMI for receiving future In-Band ECC errors.
+	 */
+	reason  = x86_platform.get_nmi_reason() & NMI_REASON_CLEAR_MASK;
+	reason |= NMI_REASON_CLEAR_SERR;
+	outb(reason, NMI_REASON_PORT);
+	reason &= ~NMI_REASON_CLEAR_SERR;
+	outb(reason, NMI_REASON_PORT);
+
+	return NMI_HANDLED;
+}
+
+static bool igen6_check_ecc(struct igen6_imc *imc)
+{
+	u32 activate = readl(imc->window + IBECC_ACTIVATE_OFFSET);
+
+	return !!(activate & IBECC_ACTIVATE_EN);
+}
+
+static int igen6_get_dimm_config(struct mem_ctl_info *mci)
+{
+	struct igen6_imc *imc = mci->pvt_info;
+	u32 mad_inter, mad_intra, mad_dimm;
+	int i, j, ndimms, mc = imc->mc;
+	struct dimm_info *dimm;
+	enum mem_type mtype;
+	enum dev_type dtype;
+	u64 dsize;
+	bool ecc;
+
+	edac_dbg(2, "\n");
+
+	mad_inter = readl(imc->window + MAD_INTER_CHANNEL_OFFSET);
+	mtype = get_memory_type(mad_inter);
+	ecc = igen6_check_ecc(imc);
+	imc->ch_s_size = MAD_INTER_CHANNEL_CH_S_SIZE(mad_inter);
+	imc->ch_l_map  = MAD_INTER_CHANNEL_CH_L_MAP(mad_inter);
+
+	for (i = 0; i < NUM_CHANNELS; i++) {
+		mad_intra = readl(imc->window + MAD_INTRA_CH0_OFFSET + i * 4);
+		mad_dimm  = readl(imc->window + MAD_DIMM_CH0_OFFSET + i * 4);
+
+		imc->dimm_l_size[i] = MAD_DIMM_CH_DIMM_L_SIZE(mad_dimm);
+		imc->dimm_s_size[i] = MAD_DIMM_CH_DIMM_S_SIZE(mad_dimm);
+		imc->dimm_l_map[i]  = MAD_INTRA_CH_DIMM_L_MAP(mad_intra);
+		ndimms = 0;
+
+		for (j = 0; j < NUM_DIMMS; j++) {
+			dimm = edac_get_dimm(mci, i, j, 0);
+
+			if (j ^ imc->dimm_l_map[i]) {
+				dtype = get_width(0, mad_dimm);
+				dsize = imc->dimm_s_size[i];
+			} else {
+				dtype = get_width(1,
mad_dimm); 586 + dsize = imc->dimm_l_size[i]; 587 + } 588 + 589 + if (!dsize) 590 + continue; 591 + 592 + dimm->grain = 64; 593 + dimm->mtype = mtype; 594 + dimm->dtype = dtype; 595 + dimm->nr_pages = MiB_TO_PAGES(dsize >> 20); 596 + dimm->edac_mode = EDAC_SECDED; 597 + snprintf(dimm->label, sizeof(dimm->label), 598 + "MC#%d_Chan#%d_DIMM#%d", mc, i, j); 599 + edac_dbg(0, "MC %d, Channel %d, DIMM %d, Size %llu MiB (%u pages)\n", 600 + mc, i, j, dsize >> 20, dimm->nr_pages); 601 + 602 + ndimms++; 603 + } 604 + 605 + if (ndimms && !ecc) { 606 + igen6_printk(KERN_ERR, "MC%d In-Band ECC is disabled\n", mc); 607 + return -ENODEV; 608 + } 609 + } 610 + 611 + return 0; 612 + } 613 + 614 + #ifdef CONFIG_EDAC_DEBUG 615 + /* Top of upper usable DRAM */ 616 + static u64 igen6_touud; 617 + #define TOUUD_OFFSET 0xa8 618 + 619 + static void igen6_reg_dump(struct igen6_imc *imc) 620 + { 621 + int i; 622 + 623 + edac_dbg(2, "CHANNEL_HASH : 0x%x\n", 624 + readl(imc->window + CHANNEL_HASH_OFFSET)); 625 + edac_dbg(2, "CHANNEL_EHASH : 0x%x\n", 626 + readl(imc->window + CHANNEL_EHASH_OFFSET)); 627 + edac_dbg(2, "MAD_INTER_CHANNEL: 0x%x\n", 628 + readl(imc->window + MAD_INTER_CHANNEL_OFFSET)); 629 + edac_dbg(2, "ECC_ERROR_LOG : 0x%llx\n", 630 + readq(imc->window + ECC_ERROR_LOG_OFFSET)); 631 + 632 + for (i = 0; i < NUM_CHANNELS; i++) { 633 + edac_dbg(2, "MAD_INTRA_CH%d : 0x%x\n", i, 634 + readl(imc->window + MAD_INTRA_CH0_OFFSET + i * 4)); 635 + edac_dbg(2, "MAD_DIMM_CH%d : 0x%x\n", i, 636 + readl(imc->window + MAD_DIMM_CH0_OFFSET + i * 4)); 637 + } 638 + edac_dbg(2, "TOLUD : 0x%x", igen6_tolud); 639 + edac_dbg(2, "TOUUD : 0x%llx", igen6_touud); 640 + edac_dbg(2, "TOM : 0x%llx", igen6_tom); 641 + } 642 + 643 + static struct dentry *igen6_test; 644 + 645 + static int debugfs_u64_set(void *data, u64 val) 646 + { 647 + u64 ecclog; 648 + 649 + if ((val >= igen6_tolud && val < _4GB) || val >= igen6_touud) { 650 + edac_dbg(0, "Address 0x%llx out of range\n", val); 651 + return 0; 652 + } 653 + 
654 + pr_warn_once("Fake error to 0x%llx injected via debugfs\n", val); 655 + 656 + val >>= ECC_ERROR_LOG_ADDR_SHIFT; 657 + ecclog = (val << ECC_ERROR_LOG_ADDR_SHIFT) | ECC_ERROR_LOG_CE; 658 + 659 + if (!ecclog_gen_pool_add(0, ecclog)) 660 + irq_work_queue(&ecclog_irq_work); 661 + 662 + return 0; 663 + } 664 + DEFINE_SIMPLE_ATTRIBUTE(fops_u64_wo, NULL, debugfs_u64_set, "%llu\n"); 665 + 666 + static void igen6_debug_setup(void) 667 + { 668 + igen6_test = edac_debugfs_create_dir("igen6_test"); 669 + if (!igen6_test) 670 + return; 671 + 672 + if (!edac_debugfs_create_file("addr", 0200, igen6_test, 673 + NULL, &fops_u64_wo)) { 674 + debugfs_remove(igen6_test); 675 + igen6_test = NULL; 676 + } 677 + } 678 + 679 + static void igen6_debug_teardown(void) 680 + { 681 + debugfs_remove_recursive(igen6_test); 682 + } 683 + #else 684 + static void igen6_reg_dump(struct igen6_imc *imc) {} 685 + static void igen6_debug_setup(void) {} 686 + static void igen6_debug_teardown(void) {} 687 + #endif 688 + 689 + static int igen6_pci_setup(struct pci_dev *pdev, u64 *mchbar) 690 + { 691 + union { 692 + u64 v; 693 + struct { 694 + u32 v_lo; 695 + u32 v_hi; 696 + }; 697 + } u; 698 + 699 + edac_dbg(2, "\n"); 700 + 701 + if (!res_cfg->ibecc_available(pdev)) { 702 + edac_dbg(2, "No In-Band ECC IP\n"); 703 + goto fail; 704 + } 705 + 706 + if (pci_read_config_dword(pdev, TOLUD_OFFSET, &igen6_tolud)) { 707 + igen6_printk(KERN_ERR, "Failed to read TOLUD\n"); 708 + goto fail; 709 + } 710 + 711 + igen6_tolud &= GENMASK(31, 20); 712 + 713 + if (pci_read_config_dword(pdev, TOM_OFFSET, &u.v_lo)) { 714 + igen6_printk(KERN_ERR, "Failed to read lower TOM\n"); 715 + goto fail; 716 + } 717 + 718 + if (pci_read_config_dword(pdev, TOM_OFFSET + 4, &u.v_hi)) { 719 + igen6_printk(KERN_ERR, "Failed to read upper TOM\n"); 720 + goto fail; 721 + } 722 + 723 + igen6_tom = u.v & GENMASK_ULL(38, 20); 724 + 725 + if (pci_read_config_dword(pdev, MCHBAR_OFFSET, &u.v_lo)) { 726 + igen6_printk(KERN_ERR, "Failed to read 
lower MCHBAR\n"); 727 + goto fail; 728 + } 729 + 730 + if (pci_read_config_dword(pdev, MCHBAR_OFFSET + 4, &u.v_hi)) { 731 + igen6_printk(KERN_ERR, "Failed to read upper MCHBAR\n"); 732 + goto fail; 733 + } 734 + 735 + if (!(u.v & MCHBAR_EN)) { 736 + igen6_printk(KERN_ERR, "MCHBAR is disabled\n"); 737 + goto fail; 738 + } 739 + 740 + *mchbar = MCHBAR_BASE(u.v); 741 + 742 + #ifdef CONFIG_EDAC_DEBUG 743 + if (pci_read_config_dword(pdev, TOUUD_OFFSET, &u.v_lo)) 744 + edac_dbg(2, "Failed to read lower TOUUD\n"); 745 + else if (pci_read_config_dword(pdev, TOUUD_OFFSET + 4, &u.v_hi)) 746 + edac_dbg(2, "Failed to read upper TOUUD\n"); 747 + else 748 + igen6_touud = u.v & GENMASK_ULL(38, 20); 749 + #endif 750 + 751 + return 0; 752 + fail: 753 + return -ENODEV; 754 + } 755 + 756 + static int igen6_register_mci(int mc, u64 mchbar, struct pci_dev *pdev) 757 + { 758 + struct edac_mc_layer layers[2]; 759 + struct mem_ctl_info *mci; 760 + struct igen6_imc *imc; 761 + void __iomem *window; 762 + int rc; 763 + 764 + edac_dbg(2, "\n"); 765 + 766 + mchbar += mc * MCHBAR_SIZE; 767 + window = ioremap(mchbar, MCHBAR_SIZE); 768 + if (!window) { 769 + igen6_printk(KERN_ERR, "Failed to ioremap 0x%llx\n", mchbar); 770 + return -ENODEV; 771 + } 772 + 773 + layers[0].type = EDAC_MC_LAYER_CHANNEL; 774 + layers[0].size = NUM_CHANNELS; 775 + layers[0].is_virt_csrow = false; 776 + layers[1].type = EDAC_MC_LAYER_SLOT; 777 + layers[1].size = NUM_DIMMS; 778 + layers[1].is_virt_csrow = true; 779 + 780 + mci = edac_mc_alloc(mc, ARRAY_SIZE(layers), layers, 0); 781 + if (!mci) { 782 + rc = -ENOMEM; 783 + goto fail; 784 + } 785 + 786 + mci->ctl_name = kasprintf(GFP_KERNEL, "Intel_client_SoC MC#%d", mc); 787 + if (!mci->ctl_name) { 788 + rc = -ENOMEM; 789 + goto fail2; 790 + } 791 + 792 + mci->mtype_cap = MEM_FLAG_LPDDR4 | MEM_FLAG_DDR4; 793 + mci->edac_ctl_cap = EDAC_FLAG_SECDED; 794 + mci->edac_cap = EDAC_FLAG_SECDED; 795 + mci->mod_name = EDAC_MOD_STR; 796 + mci->dev_name = pci_name(pdev); 797 + 
mci->pvt_info = &igen6_pvt->imc[mc]; 798 + 799 + imc = mci->pvt_info; 800 + device_initialize(&imc->dev); 801 + /* 802 + * The EDAC core uses mci->pdev (a pointer to struct device) as 803 + * the memory controller ID. The client SoCs attach one or more 804 + * memory controllers to a single pci_dev (one pci_dev->dev 805 + * can serve multiple memory controllers). 806 + * 807 + * To make mci->pdev unique, assign pci_dev->dev to mci->pdev 808 + * for the first memory controller and assign a unique imc->dev 809 + * to mci->pdev for each subsequent memory controller. 810 + */ 811 + mci->pdev = mc ? &imc->dev : &pdev->dev; 812 + imc->mc = mc; 813 + imc->pdev = pdev; 814 + imc->window = window; 815 + 816 + igen6_reg_dump(imc); 817 + 818 + rc = igen6_get_dimm_config(mci); 819 + if (rc) 820 + goto fail3; 821 + 822 + rc = edac_mc_add_mc(mci); 823 + if (rc) { 824 + igen6_printk(KERN_ERR, "Failed to register mci#%d\n", mc); 825 + goto fail3; 826 + } 827 + 828 + imc->mci = mci; 829 + return 0; 830 + fail3: 831 + kfree(mci->ctl_name); 832 + fail2: 833 + edac_mc_free(mci); 834 + fail: 835 + iounmap(window); 836 + return rc; 837 + } 838 + 839 + static void igen6_unregister_mcis(void) 840 + { 841 + struct mem_ctl_info *mci; 842 + struct igen6_imc *imc; 843 + int i; 844 + 845 + edac_dbg(2, "\n"); 846 + 847 + for (i = 0; i < res_cfg->num_imc; i++) { 848 + imc = &igen6_pvt->imc[i]; 849 + mci = imc->mci; 850 + if (!mci) 851 + continue; 852 + 853 + edac_mc_del_mc(mci->pdev); 854 + kfree(mci->ctl_name); 855 + edac_mc_free(mci); 856 + iounmap(imc->window); 857 + } 858 + } 859 + 860 + static int igen6_probe(struct pci_dev *pdev, const struct pci_device_id *ent) 861 + { 862 + u64 mchbar; 863 + int i, rc; 864 + 865 + edac_dbg(2, "\n"); 866 + 867 + igen6_pvt = kzalloc(sizeof(*igen6_pvt), GFP_KERNEL); 868 + if (!igen6_pvt) 869 + return -ENOMEM; 870 + 871 + res_cfg = (struct res_config *)ent->driver_data; 872 + 873 + rc = igen6_pci_setup(pdev, &mchbar); 874 + if (rc) 875 + goto fail; 876 + 877 + for
(i = 0; i < res_cfg->num_imc; i++) { 878 + rc = igen6_register_mci(i, mchbar, pdev); 879 + if (rc) 880 + goto fail2; 881 + } 882 + 883 + ecclog_pool = ecclog_gen_pool_create(); 884 + if (!ecclog_pool) { 885 + rc = -ENOMEM; 886 + goto fail2; 887 + } 888 + 889 + INIT_WORK(&ecclog_work, ecclog_work_cb); 890 + init_irq_work(&ecclog_irq_work, ecclog_irq_work_cb); 891 + 892 + /* Check if any pending errors before registering the NMI handler */ 893 + ecclog_handler(); 894 + 895 + rc = register_nmi_handler(NMI_SERR, ecclog_nmi_handler, 896 + 0, IGEN6_NMI_NAME); 897 + if (rc) { 898 + igen6_printk(KERN_ERR, "Failed to register NMI handler\n"); 899 + goto fail3; 900 + } 901 + 902 + /* Enable error reporting */ 903 + rc = errcmd_enable_error_reporting(true); 904 + if (rc) { 905 + igen6_printk(KERN_ERR, "Failed to enable error reporting\n"); 906 + goto fail4; 907 + } 908 + 909 + igen6_debug_setup(); 910 + return 0; 911 + fail4: 912 + unregister_nmi_handler(NMI_SERR, IGEN6_NMI_NAME); 913 + fail3: 914 + gen_pool_destroy(ecclog_pool); 915 + fail2: 916 + igen6_unregister_mcis(); 917 + fail: 918 + kfree(igen6_pvt); 919 + return rc; 920 + } 921 + 922 + static void igen6_remove(struct pci_dev *pdev) 923 + { 924 + edac_dbg(2, "\n"); 925 + 926 + igen6_debug_teardown(); 927 + errcmd_enable_error_reporting(false); 928 + unregister_nmi_handler(NMI_SERR, IGEN6_NMI_NAME); 929 + irq_work_sync(&ecclog_irq_work); 930 + flush_work(&ecclog_work); 931 + gen_pool_destroy(ecclog_pool); 932 + igen6_unregister_mcis(); 933 + kfree(igen6_pvt); 934 + } 935 + 936 + static struct pci_driver igen6_driver = { 937 + .name = EDAC_MOD_STR, 938 + .probe = igen6_probe, 939 + .remove = igen6_remove, 940 + .id_table = igen6_pci_tbl, 941 + }; 942 + 943 + static int __init igen6_init(void) 944 + { 945 + const char *owner; 946 + int rc; 947 + 948 + edac_dbg(2, "\n"); 949 + 950 + owner = edac_get_owner(); 951 + if (owner && strncmp(owner, EDAC_MOD_STR, sizeof(EDAC_MOD_STR))) 952 + return -ENODEV; 953 + 954 + 
edac_op_state = EDAC_OPSTATE_NMI; 955 + 956 + rc = pci_register_driver(&igen6_driver); 957 + if (rc) 958 + return rc; 959 + 960 + igen6_printk(KERN_INFO, "%s\n", IGEN6_REVISION); 961 + 962 + return 0; 963 + } 964 + 965 + static void __exit igen6_exit(void) 966 + { 967 + edac_dbg(2, "\n"); 968 + 969 + pci_unregister_driver(&igen6_driver); 970 + } 971 + 972 + module_init(igen6_init); 973 + module_exit(igen6_exit); 974 + 975 + MODULE_LICENSE("GPL v2"); 976 + MODULE_AUTHOR("Qiuxu Zhuo"); 977 + MODULE_DESCRIPTION("MC Driver for Intel client SoC using In-Band ECC");
-883
drivers/edac/mv64x60_edac.c
··· 1 - /* 2 - * Marvell MV64x60 Memory Controller kernel module for PPC platforms 3 - * 4 - * Author: Dave Jiang <djiang@mvista.com> 5 - * 6 - * 2006-2007 (c) MontaVista Software, Inc. This file is licensed under 7 - * the terms of the GNU General Public License version 2. This program 8 - * is licensed "as is" without any warranty of any kind, whether express 9 - * or implied. 10 - * 11 - */ 12 - 13 - #include <linux/module.h> 14 - #include <linux/init.h> 15 - #include <linux/interrupt.h> 16 - #include <linux/io.h> 17 - #include <linux/edac.h> 18 - #include <linux/gfp.h> 19 - 20 - #include "edac_module.h" 21 - #include "mv64x60_edac.h" 22 - 23 - static const char *mv64x60_ctl_name = "MV64x60"; 24 - static int edac_dev_idx; 25 - static int edac_pci_idx; 26 - static int edac_mc_idx; 27 - 28 - /*********************** PCI err device **********************************/ 29 - #ifdef CONFIG_PCI 30 - static void mv64x60_pci_check(struct edac_pci_ctl_info *pci) 31 - { 32 - struct mv64x60_pci_pdata *pdata = pci->pvt_info; 33 - u32 cause; 34 - 35 - cause = readl(pdata->pci_vbase + MV64X60_PCI_ERROR_CAUSE); 36 - if (!cause) 37 - return; 38 - 39 - printk(KERN_ERR "Error in PCI %d Interface\n", pdata->pci_hose); 40 - printk(KERN_ERR "Cause register: 0x%08x\n", cause); 41 - printk(KERN_ERR "Address Low: 0x%08x\n", 42 - readl(pdata->pci_vbase + MV64X60_PCI_ERROR_ADDR_LO)); 43 - printk(KERN_ERR "Address High: 0x%08x\n", 44 - readl(pdata->pci_vbase + MV64X60_PCI_ERROR_ADDR_HI)); 45 - printk(KERN_ERR "Attribute: 0x%08x\n", 46 - readl(pdata->pci_vbase + MV64X60_PCI_ERROR_ATTR)); 47 - printk(KERN_ERR "Command: 0x%08x\n", 48 - readl(pdata->pci_vbase + MV64X60_PCI_ERROR_CMD)); 49 - writel(~cause, pdata->pci_vbase + MV64X60_PCI_ERROR_CAUSE); 50 - 51 - if (cause & MV64X60_PCI_PE_MASK) 52 - edac_pci_handle_pe(pci, pci->ctl_name); 53 - 54 - if (!(cause & MV64X60_PCI_PE_MASK)) 55 - edac_pci_handle_npe(pci, pci->ctl_name); 56 - } 57 - 58 - static irqreturn_t mv64x60_pci_isr(int irq, void 
*dev_id) 59 - { 60 - struct edac_pci_ctl_info *pci = dev_id; 61 - struct mv64x60_pci_pdata *pdata = pci->pvt_info; 62 - u32 val; 63 - 64 - val = readl(pdata->pci_vbase + MV64X60_PCI_ERROR_CAUSE); 65 - if (!val) 66 - return IRQ_NONE; 67 - 68 - mv64x60_pci_check(pci); 69 - 70 - return IRQ_HANDLED; 71 - } 72 - 73 - /* 74 - * Bit 0 of MV64x60_PCIx_ERR_MASK does not exist on the 64360 and because of 75 - * errata FEr-#11 and FEr-##16 for the 64460, it should be 0 on that chip as 76 - * well. IOW, don't set bit 0. 77 - */ 78 - 79 - /* Erratum FEr PCI-#16: clear bit 0 of PCI SERRn Mask reg. */ 80 - static int __init mv64x60_pci_fixup(struct platform_device *pdev) 81 - { 82 - struct resource *r; 83 - void __iomem *pci_serr; 84 - 85 - r = platform_get_resource(pdev, IORESOURCE_MEM, 1); 86 - if (!r) { 87 - printk(KERN_ERR "%s: Unable to get resource for " 88 - "PCI err regs\n", __func__); 89 - return -ENOENT; 90 - } 91 - 92 - pci_serr = ioremap(r->start, resource_size(r)); 93 - if (!pci_serr) 94 - return -ENOMEM; 95 - 96 - writel(readl(pci_serr) & ~0x1, pci_serr); 97 - iounmap(pci_serr); 98 - 99 - return 0; 100 - } 101 - 102 - static int mv64x60_pci_err_probe(struct platform_device *pdev) 103 - { 104 - struct edac_pci_ctl_info *pci; 105 - struct mv64x60_pci_pdata *pdata; 106 - struct resource *r; 107 - int res = 0; 108 - 109 - if (!devres_open_group(&pdev->dev, mv64x60_pci_err_probe, GFP_KERNEL)) 110 - return -ENOMEM; 111 - 112 - pci = edac_pci_alloc_ctl_info(sizeof(*pdata), "mv64x60_pci_err"); 113 - if (!pci) 114 - return -ENOMEM; 115 - 116 - pdata = pci->pvt_info; 117 - 118 - pdata->pci_hose = pdev->id; 119 - pdata->name = "mv64x60_pci_err"; 120 - platform_set_drvdata(pdev, pci); 121 - pci->dev = &pdev->dev; 122 - pci->dev_name = dev_name(&pdev->dev); 123 - pci->mod_name = EDAC_MOD_STR; 124 - pci->ctl_name = pdata->name; 125 - 126 - if (edac_op_state == EDAC_OPSTATE_POLL) 127 - pci->edac_check = mv64x60_pci_check; 128 - 129 - pdata->edac_idx = edac_pci_idx++; 130 - 131 - r 
= platform_get_resource(pdev, IORESOURCE_MEM, 0); 132 - if (!r) { 133 - printk(KERN_ERR "%s: Unable to get resource for " 134 - "PCI err regs\n", __func__); 135 - res = -ENOENT; 136 - goto err; 137 - } 138 - 139 - if (!devm_request_mem_region(&pdev->dev, 140 - r->start, 141 - resource_size(r), 142 - pdata->name)) { 143 - printk(KERN_ERR "%s: Error while requesting mem region\n", 144 - __func__); 145 - res = -EBUSY; 146 - goto err; 147 - } 148 - 149 - pdata->pci_vbase = devm_ioremap(&pdev->dev, 150 - r->start, 151 - resource_size(r)); 152 - if (!pdata->pci_vbase) { 153 - printk(KERN_ERR "%s: Unable to setup PCI err regs\n", __func__); 154 - res = -ENOMEM; 155 - goto err; 156 - } 157 - 158 - res = mv64x60_pci_fixup(pdev); 159 - if (res < 0) { 160 - printk(KERN_ERR "%s: PCI fixup failed\n", __func__); 161 - goto err; 162 - } 163 - 164 - writel(0, pdata->pci_vbase + MV64X60_PCI_ERROR_CAUSE); 165 - writel(0, pdata->pci_vbase + MV64X60_PCI_ERROR_MASK); 166 - writel(MV64X60_PCIx_ERR_MASK_VAL, 167 - pdata->pci_vbase + MV64X60_PCI_ERROR_MASK); 168 - 169 - if (edac_pci_add_device(pci, pdata->edac_idx) > 0) { 170 - edac_dbg(3, "failed edac_pci_add_device()\n"); 171 - goto err; 172 - } 173 - 174 - if (edac_op_state == EDAC_OPSTATE_INT) { 175 - pdata->irq = platform_get_irq(pdev, 0); 176 - res = devm_request_irq(&pdev->dev, 177 - pdata->irq, 178 - mv64x60_pci_isr, 179 - 0, 180 - "[EDAC] PCI err", 181 - pci); 182 - if (res < 0) { 183 - printk(KERN_ERR "%s: Unable to request irq %d for " 184 - "MV64x60 PCI ERR\n", __func__, pdata->irq); 185 - res = -ENODEV; 186 - goto err2; 187 - } 188 - printk(KERN_INFO EDAC_MOD_STR " acquired irq %d for PCI Err\n", 189 - pdata->irq); 190 - } 191 - 192 - devres_remove_group(&pdev->dev, mv64x60_pci_err_probe); 193 - 194 - /* get this far and it's successful */ 195 - edac_dbg(3, "success\n"); 196 - 197 - return 0; 198 - 199 - err2: 200 - edac_pci_del_device(&pdev->dev); 201 - err: 202 - edac_pci_free_ctl_info(pci); 203 - 
devres_release_group(&pdev->dev, mv64x60_pci_err_probe); 204 - return res; 205 - } 206 - 207 - static int mv64x60_pci_err_remove(struct platform_device *pdev) 208 - { 209 - struct edac_pci_ctl_info *pci = platform_get_drvdata(pdev); 210 - 211 - edac_dbg(0, "\n"); 212 - 213 - edac_pci_del_device(&pdev->dev); 214 - 215 - edac_pci_free_ctl_info(pci); 216 - 217 - return 0; 218 - } 219 - 220 - static struct platform_driver mv64x60_pci_err_driver = { 221 - .probe = mv64x60_pci_err_probe, 222 - .remove = mv64x60_pci_err_remove, 223 - .driver = { 224 - .name = "mv64x60_pci_err", 225 - } 226 - }; 227 - 228 - #endif /* CONFIG_PCI */ 229 - 230 - /*********************** SRAM err device **********************************/ 231 - static void mv64x60_sram_check(struct edac_device_ctl_info *edac_dev) 232 - { 233 - struct mv64x60_sram_pdata *pdata = edac_dev->pvt_info; 234 - u32 cause; 235 - 236 - cause = readl(pdata->sram_vbase + MV64X60_SRAM_ERR_CAUSE); 237 - if (!cause) 238 - return; 239 - 240 - printk(KERN_ERR "Error in internal SRAM\n"); 241 - printk(KERN_ERR "Cause register: 0x%08x\n", cause); 242 - printk(KERN_ERR "Address Low: 0x%08x\n", 243 - readl(pdata->sram_vbase + MV64X60_SRAM_ERR_ADDR_LO)); 244 - printk(KERN_ERR "Address High: 0x%08x\n", 245 - readl(pdata->sram_vbase + MV64X60_SRAM_ERR_ADDR_HI)); 246 - printk(KERN_ERR "Data Low: 0x%08x\n", 247 - readl(pdata->sram_vbase + MV64X60_SRAM_ERR_DATA_LO)); 248 - printk(KERN_ERR "Data High: 0x%08x\n", 249 - readl(pdata->sram_vbase + MV64X60_SRAM_ERR_DATA_HI)); 250 - printk(KERN_ERR "Parity: 0x%08x\n", 251 - readl(pdata->sram_vbase + MV64X60_SRAM_ERR_PARITY)); 252 - writel(0, pdata->sram_vbase + MV64X60_SRAM_ERR_CAUSE); 253 - 254 - edac_device_handle_ue(edac_dev, 0, 0, edac_dev->ctl_name); 255 - } 256 - 257 - static irqreturn_t mv64x60_sram_isr(int irq, void *dev_id) 258 - { 259 - struct edac_device_ctl_info *edac_dev = dev_id; 260 - struct mv64x60_sram_pdata *pdata = edac_dev->pvt_info; 261 - u32 cause; 262 - 263 - cause = 
readl(pdata->sram_vbase + MV64X60_SRAM_ERR_CAUSE); 264 - if (!cause) 265 - return IRQ_NONE; 266 - 267 - mv64x60_sram_check(edac_dev); 268 - 269 - return IRQ_HANDLED; 270 - } 271 - 272 - static int mv64x60_sram_err_probe(struct platform_device *pdev) 273 - { 274 - struct edac_device_ctl_info *edac_dev; 275 - struct mv64x60_sram_pdata *pdata; 276 - struct resource *r; 277 - int res = 0; 278 - 279 - if (!devres_open_group(&pdev->dev, mv64x60_sram_err_probe, GFP_KERNEL)) 280 - return -ENOMEM; 281 - 282 - edac_dev = edac_device_alloc_ctl_info(sizeof(*pdata), 283 - "sram", 1, NULL, 0, 0, NULL, 0, 284 - edac_dev_idx); 285 - if (!edac_dev) { 286 - devres_release_group(&pdev->dev, mv64x60_sram_err_probe); 287 - return -ENOMEM; 288 - } 289 - 290 - pdata = edac_dev->pvt_info; 291 - pdata->name = "mv64x60_sram_err"; 292 - edac_dev->dev = &pdev->dev; 293 - platform_set_drvdata(pdev, edac_dev); 294 - edac_dev->dev_name = dev_name(&pdev->dev); 295 - 296 - r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 297 - if (!r) { 298 - printk(KERN_ERR "%s: Unable to get resource for " 299 - "SRAM err regs\n", __func__); 300 - res = -ENOENT; 301 - goto err; 302 - } 303 - 304 - if (!devm_request_mem_region(&pdev->dev, 305 - r->start, 306 - resource_size(r), 307 - pdata->name)) { 308 - printk(KERN_ERR "%s: Error while request mem region\n", 309 - __func__); 310 - res = -EBUSY; 311 - goto err; 312 - } 313 - 314 - pdata->sram_vbase = devm_ioremap(&pdev->dev, 315 - r->start, 316 - resource_size(r)); 317 - if (!pdata->sram_vbase) { 318 - printk(KERN_ERR "%s: Unable to setup SRAM err regs\n", 319 - __func__); 320 - res = -ENOMEM; 321 - goto err; 322 - } 323 - 324 - /* setup SRAM err registers */ 325 - writel(0, pdata->sram_vbase + MV64X60_SRAM_ERR_CAUSE); 326 - 327 - edac_dev->mod_name = EDAC_MOD_STR; 328 - edac_dev->ctl_name = pdata->name; 329 - 330 - if (edac_op_state == EDAC_OPSTATE_POLL) 331 - edac_dev->edac_check = mv64x60_sram_check; 332 - 333 - pdata->edac_idx = edac_dev_idx++; 334 - 335 
- if (edac_device_add_device(edac_dev) > 0) { 336 - edac_dbg(3, "failed edac_device_add_device()\n"); 337 - goto err; 338 - } 339 - 340 - if (edac_op_state == EDAC_OPSTATE_INT) { 341 - pdata->irq = platform_get_irq(pdev, 0); 342 - res = devm_request_irq(&pdev->dev, 343 - pdata->irq, 344 - mv64x60_sram_isr, 345 - 0, 346 - "[EDAC] SRAM err", 347 - edac_dev); 348 - if (res < 0) { 349 - printk(KERN_ERR 350 - "%s: Unable to request irq %d for " 351 - "MV64x60 SRAM ERR\n", __func__, pdata->irq); 352 - res = -ENODEV; 353 - goto err2; 354 - } 355 - 356 - printk(KERN_INFO EDAC_MOD_STR " acquired irq %d for SRAM Err\n", 357 - pdata->irq); 358 - } 359 - 360 - devres_remove_group(&pdev->dev, mv64x60_sram_err_probe); 361 - 362 - /* get this far and it's successful */ 363 - edac_dbg(3, "success\n"); 364 - 365 - return 0; 366 - 367 - err2: 368 - edac_device_del_device(&pdev->dev); 369 - err: 370 - devres_release_group(&pdev->dev, mv64x60_sram_err_probe); 371 - edac_device_free_ctl_info(edac_dev); 372 - return res; 373 - } 374 - 375 - static int mv64x60_sram_err_remove(struct platform_device *pdev) 376 - { 377 - struct edac_device_ctl_info *edac_dev = platform_get_drvdata(pdev); 378 - 379 - edac_dbg(0, "\n"); 380 - 381 - edac_device_del_device(&pdev->dev); 382 - edac_device_free_ctl_info(edac_dev); 383 - 384 - return 0; 385 - } 386 - 387 - static struct platform_driver mv64x60_sram_err_driver = { 388 - .probe = mv64x60_sram_err_probe, 389 - .remove = mv64x60_sram_err_remove, 390 - .driver = { 391 - .name = "mv64x60_sram_err", 392 - } 393 - }; 394 - 395 - /*********************** CPU err device **********************************/ 396 - static void mv64x60_cpu_check(struct edac_device_ctl_info *edac_dev) 397 - { 398 - struct mv64x60_cpu_pdata *pdata = edac_dev->pvt_info; 399 - u32 cause; 400 - 401 - cause = readl(pdata->cpu_vbase[1] + MV64x60_CPU_ERR_CAUSE) & 402 - MV64x60_CPU_CAUSE_MASK; 403 - if (!cause) 404 - return; 405 - 406 - printk(KERN_ERR "Error on CPU interface\n"); 407 - 
printk(KERN_ERR "Cause register: 0x%08x\n", cause); 408 - printk(KERN_ERR "Address Low: 0x%08x\n", 409 - readl(pdata->cpu_vbase[0] + MV64x60_CPU_ERR_ADDR_LO)); 410 - printk(KERN_ERR "Address High: 0x%08x\n", 411 - readl(pdata->cpu_vbase[0] + MV64x60_CPU_ERR_ADDR_HI)); 412 - printk(KERN_ERR "Data Low: 0x%08x\n", 413 - readl(pdata->cpu_vbase[1] + MV64x60_CPU_ERR_DATA_LO)); 414 - printk(KERN_ERR "Data High: 0x%08x\n", 415 - readl(pdata->cpu_vbase[1] + MV64x60_CPU_ERR_DATA_HI)); 416 - printk(KERN_ERR "Parity: 0x%08x\n", 417 - readl(pdata->cpu_vbase[1] + MV64x60_CPU_ERR_PARITY)); 418 - writel(0, pdata->cpu_vbase[1] + MV64x60_CPU_ERR_CAUSE); 419 - 420 - edac_device_handle_ue(edac_dev, 0, 0, edac_dev->ctl_name); 421 - } 422 - 423 - static irqreturn_t mv64x60_cpu_isr(int irq, void *dev_id) 424 - { 425 - struct edac_device_ctl_info *edac_dev = dev_id; 426 - struct mv64x60_cpu_pdata *pdata = edac_dev->pvt_info; 427 - u32 cause; 428 - 429 - cause = readl(pdata->cpu_vbase[1] + MV64x60_CPU_ERR_CAUSE) & 430 - MV64x60_CPU_CAUSE_MASK; 431 - if (!cause) 432 - return IRQ_NONE; 433 - 434 - mv64x60_cpu_check(edac_dev); 435 - 436 - return IRQ_HANDLED; 437 - } 438 - 439 - static int mv64x60_cpu_err_probe(struct platform_device *pdev) 440 - { 441 - struct edac_device_ctl_info *edac_dev; 442 - struct resource *r; 443 - struct mv64x60_cpu_pdata *pdata; 444 - int res = 0; 445 - 446 - if (!devres_open_group(&pdev->dev, mv64x60_cpu_err_probe, GFP_KERNEL)) 447 - return -ENOMEM; 448 - 449 - edac_dev = edac_device_alloc_ctl_info(sizeof(*pdata), 450 - "cpu", 1, NULL, 0, 0, NULL, 0, 451 - edac_dev_idx); 452 - if (!edac_dev) { 453 - devres_release_group(&pdev->dev, mv64x60_cpu_err_probe); 454 - return -ENOMEM; 455 - } 456 - 457 - pdata = edac_dev->pvt_info; 458 - pdata->name = "mv64x60_cpu_err"; 459 - edac_dev->dev = &pdev->dev; 460 - platform_set_drvdata(pdev, edac_dev); 461 - edac_dev->dev_name = dev_name(&pdev->dev); 462 - 463 - r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 464 - if (!r) { 
465 - printk(KERN_ERR "%s: Unable to get resource for " 466 - "CPU err regs\n", __func__); 467 - res = -ENOENT; 468 - goto err; 469 - } 470 - 471 - if (!devm_request_mem_region(&pdev->dev, 472 - r->start, 473 - resource_size(r), 474 - pdata->name)) { 475 - printk(KERN_ERR "%s: Error while requesting mem region\n", 476 - __func__); 477 - res = -EBUSY; 478 - goto err; 479 - } 480 - 481 - pdata->cpu_vbase[0] = devm_ioremap(&pdev->dev, 482 - r->start, 483 - resource_size(r)); 484 - if (!pdata->cpu_vbase[0]) { 485 - printk(KERN_ERR "%s: Unable to setup CPU err regs\n", __func__); 486 - res = -ENOMEM; 487 - goto err; 488 - } 489 - 490 - r = platform_get_resource(pdev, IORESOURCE_MEM, 1); 491 - if (!r) { 492 - printk(KERN_ERR "%s: Unable to get resource for " 493 - "CPU err regs\n", __func__); 494 - res = -ENOENT; 495 - goto err; 496 - } 497 - 498 - if (!devm_request_mem_region(&pdev->dev, 499 - r->start, 500 - resource_size(r), 501 - pdata->name)) { 502 - printk(KERN_ERR "%s: Error while requesting mem region\n", 503 - __func__); 504 - res = -EBUSY; 505 - goto err; 506 - } 507 - 508 - pdata->cpu_vbase[1] = devm_ioremap(&pdev->dev, 509 - r->start, 510 - resource_size(r)); 511 - if (!pdata->cpu_vbase[1]) { 512 - printk(KERN_ERR "%s: Unable to setup CPU err regs\n", __func__); 513 - res = -ENOMEM; 514 - goto err; 515 - } 516 - 517 - /* setup CPU err registers */ 518 - writel(0, pdata->cpu_vbase[1] + MV64x60_CPU_ERR_CAUSE); 519 - writel(0, pdata->cpu_vbase[1] + MV64x60_CPU_ERR_MASK); 520 - writel(0x000000ff, pdata->cpu_vbase[1] + MV64x60_CPU_ERR_MASK); 521 - 522 - edac_dev->mod_name = EDAC_MOD_STR; 523 - edac_dev->ctl_name = pdata->name; 524 - if (edac_op_state == EDAC_OPSTATE_POLL) 525 - edac_dev->edac_check = mv64x60_cpu_check; 526 - 527 - pdata->edac_idx = edac_dev_idx++; 528 - 529 - if (edac_device_add_device(edac_dev) > 0) { 530 - edac_dbg(3, "failed edac_device_add_device()\n"); 531 - goto err; 532 - } 533 - 534 - if (edac_op_state == EDAC_OPSTATE_INT) { 535 - 
pdata->irq = platform_get_irq(pdev, 0); 536 - res = devm_request_irq(&pdev->dev, 537 - pdata->irq, 538 - mv64x60_cpu_isr, 539 - 0, 540 - "[EDAC] CPU err", 541 - edac_dev); 542 - if (res < 0) { 543 - printk(KERN_ERR 544 - "%s: Unable to request irq %d for MV64x60 " 545 - "CPU ERR\n", __func__, pdata->irq); 546 - res = -ENODEV; 547 - goto err2; 548 - } 549 - 550 - printk(KERN_INFO EDAC_MOD_STR 551 - " acquired irq %d for CPU Err\n", pdata->irq); 552 - } 553 - 554 - devres_remove_group(&pdev->dev, mv64x60_cpu_err_probe); 555 - 556 - /* get this far and it's successful */ 557 - edac_dbg(3, "success\n"); 558 - 559 - return 0; 560 - 561 - err2: 562 - edac_device_del_device(&pdev->dev); 563 - err: 564 - devres_release_group(&pdev->dev, mv64x60_cpu_err_probe); 565 - edac_device_free_ctl_info(edac_dev); 566 - return res; 567 - } 568 - 569 - static int mv64x60_cpu_err_remove(struct platform_device *pdev) 570 - { 571 - struct edac_device_ctl_info *edac_dev = platform_get_drvdata(pdev); 572 - 573 - edac_dbg(0, "\n"); 574 - 575 - edac_device_del_device(&pdev->dev); 576 - edac_device_free_ctl_info(edac_dev); 577 - return 0; 578 - } 579 - 580 - static struct platform_driver mv64x60_cpu_err_driver = { 581 - .probe = mv64x60_cpu_err_probe, 582 - .remove = mv64x60_cpu_err_remove, 583 - .driver = { 584 - .name = "mv64x60_cpu_err", 585 - } 586 - }; 587 - 588 - /*********************** DRAM err device **********************************/ 589 - 590 - static void mv64x60_mc_check(struct mem_ctl_info *mci) 591 - { 592 - struct mv64x60_mc_pdata *pdata = mci->pvt_info; 593 - u32 reg; 594 - u32 err_addr; 595 - u32 sdram_ecc; 596 - u32 comp_ecc; 597 - u32 syndrome; 598 - 599 - reg = readl(pdata->mc_vbase + MV64X60_SDRAM_ERR_ADDR); 600 - if (!reg) 601 - return; 602 - 603 - err_addr = reg & ~0x3; 604 - sdram_ecc = readl(pdata->mc_vbase + MV64X60_SDRAM_ERR_ECC_RCVD); 605 - comp_ecc = readl(pdata->mc_vbase + MV64X60_SDRAM_ERR_ECC_CALC); 606 - syndrome = sdram_ecc ^ comp_ecc; 607 - 608 - /* first 
bit clear in ECC Err Reg, 1 bit error, correctable by HW */ 609 - if (!(reg & 0x1)) 610 - edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1, 611 - err_addr >> PAGE_SHIFT, 612 - err_addr & PAGE_MASK, syndrome, 613 - 0, 0, -1, 614 - mci->ctl_name, ""); 615 - else /* 2 bit error, UE */ 616 - edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1, 617 - err_addr >> PAGE_SHIFT, 618 - err_addr & PAGE_MASK, 0, 619 - 0, 0, -1, 620 - mci->ctl_name, ""); 621 - 622 - /* clear the error */ 623 - writel(0, pdata->mc_vbase + MV64X60_SDRAM_ERR_ADDR); 624 - } 625 - 626 - static irqreturn_t mv64x60_mc_isr(int irq, void *dev_id) 627 - { 628 - struct mem_ctl_info *mci = dev_id; 629 - struct mv64x60_mc_pdata *pdata = mci->pvt_info; 630 - u32 reg; 631 - 632 - reg = readl(pdata->mc_vbase + MV64X60_SDRAM_ERR_ADDR); 633 - if (!reg) 634 - return IRQ_NONE; 635 - 636 - /* writing 0's to the ECC err addr in check function clears irq */ 637 - mv64x60_mc_check(mci); 638 - 639 - return IRQ_HANDLED; 640 - } 641 - 642 - static void get_total_mem(struct mv64x60_mc_pdata *pdata) 643 - { 644 - struct device_node *np = NULL; 645 - const unsigned int *reg; 646 - 647 - np = of_find_node_by_type(NULL, "memory"); 648 - if (!np) 649 - return; 650 - 651 - reg = of_get_property(np, "reg", NULL); 652 - 653 - pdata->total_mem = reg[1]; 654 - } 655 - 656 - static void mv64x60_init_csrows(struct mem_ctl_info *mci, 657 - struct mv64x60_mc_pdata *pdata) 658 - { 659 - struct csrow_info *csrow; 660 - struct dimm_info *dimm; 661 - 662 - u32 devtype; 663 - u32 ctl; 664 - 665 - get_total_mem(pdata); 666 - 667 - ctl = readl(pdata->mc_vbase + MV64X60_SDRAM_CONFIG); 668 - 669 - csrow = mci->csrows[0]; 670 - dimm = csrow->channels[0]->dimm; 671 - 672 - dimm->nr_pages = pdata->total_mem >> PAGE_SHIFT; 673 - dimm->grain = 8; 674 - 675 - dimm->mtype = (ctl & MV64X60_SDRAM_REGISTERED) ? 
MEM_RDDR : MEM_DDR; 676 - 677 - devtype = (ctl >> 20) & 0x3; 678 - switch (devtype) { 679 - case 0x0: 680 - dimm->dtype = DEV_X32; 681 - break; 682 - case 0x2: /* could be X8 too, but no way to tell */ 683 - dimm->dtype = DEV_X16; 684 - break; 685 - case 0x3: 686 - dimm->dtype = DEV_X4; 687 - break; 688 - default: 689 - dimm->dtype = DEV_UNKNOWN; 690 - break; 691 - } 692 - 693 - dimm->edac_mode = EDAC_SECDED; 694 - } 695 - 696 - static int mv64x60_mc_err_probe(struct platform_device *pdev) 697 - { 698 - struct mem_ctl_info *mci; 699 - struct edac_mc_layer layers[2]; 700 - struct mv64x60_mc_pdata *pdata; 701 - struct resource *r; 702 - u32 ctl; 703 - int res = 0; 704 - 705 - if (!devres_open_group(&pdev->dev, mv64x60_mc_err_probe, GFP_KERNEL)) 706 - return -ENOMEM; 707 - 708 - layers[0].type = EDAC_MC_LAYER_CHIP_SELECT; 709 - layers[0].size = 1; 710 - layers[0].is_virt_csrow = true; 711 - layers[1].type = EDAC_MC_LAYER_CHANNEL; 712 - layers[1].size = 1; 713 - layers[1].is_virt_csrow = false; 714 - mci = edac_mc_alloc(edac_mc_idx, ARRAY_SIZE(layers), layers, 715 - sizeof(struct mv64x60_mc_pdata)); 716 - if (!mci) { 717 - printk(KERN_ERR "%s: No memory for CPU err\n", __func__); 718 - devres_release_group(&pdev->dev, mv64x60_mc_err_probe); 719 - return -ENOMEM; 720 - } 721 - 722 - pdata = mci->pvt_info; 723 - mci->pdev = &pdev->dev; 724 - platform_set_drvdata(pdev, mci); 725 - pdata->name = "mv64x60_mc_err"; 726 - mci->dev_name = dev_name(&pdev->dev); 727 - pdata->edac_idx = edac_mc_idx++; 728 - 729 - r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 730 - if (!r) { 731 - printk(KERN_ERR "%s: Unable to get resource for " 732 - "MC err regs\n", __func__); 733 - res = -ENOENT; 734 - goto err; 735 - } 736 - 737 - if (!devm_request_mem_region(&pdev->dev, 738 - r->start, 739 - resource_size(r), 740 - pdata->name)) { 741 - printk(KERN_ERR "%s: Error while requesting mem region\n", 742 - __func__); 743 - res = -EBUSY; 744 - goto err; 745 - } 746 - 747 - pdata->mc_vbase = 
devm_ioremap(&pdev->dev, 748 - r->start, 749 - resource_size(r)); 750 - if (!pdata->mc_vbase) { 751 - printk(KERN_ERR "%s: Unable to setup MC err regs\n", __func__); 752 - res = -ENOMEM; 753 - goto err; 754 - } 755 - 756 - ctl = readl(pdata->mc_vbase + MV64X60_SDRAM_CONFIG); 757 - if (!(ctl & MV64X60_SDRAM_ECC)) { 758 - /* Non-ECC RAM? */ 759 - printk(KERN_WARNING "%s: No ECC DIMMs discovered\n", __func__); 760 - res = -ENODEV; 761 - goto err; 762 - } 763 - 764 - edac_dbg(3, "init mci\n"); 765 - mci->mtype_cap = MEM_FLAG_RDDR | MEM_FLAG_DDR; 766 - mci->edac_ctl_cap = EDAC_FLAG_NONE | EDAC_FLAG_SECDED; 767 - mci->edac_cap = EDAC_FLAG_SECDED; 768 - mci->mod_name = EDAC_MOD_STR; 769 - mci->ctl_name = mv64x60_ctl_name; 770 - 771 - if (edac_op_state == EDAC_OPSTATE_POLL) 772 - mci->edac_check = mv64x60_mc_check; 773 - 774 - mci->ctl_page_to_phys = NULL; 775 - 776 - mci->scrub_mode = SCRUB_SW_SRC; 777 - 778 - mv64x60_init_csrows(mci, pdata); 779 - 780 - /* setup MC registers */ 781 - writel(0, pdata->mc_vbase + MV64X60_SDRAM_ERR_ADDR); 782 - ctl = readl(pdata->mc_vbase + MV64X60_SDRAM_ERR_ECC_CNTL); 783 - ctl = (ctl & 0xff00ffff) | 0x10000; 784 - writel(ctl, pdata->mc_vbase + MV64X60_SDRAM_ERR_ECC_CNTL); 785 - 786 - res = edac_mc_add_mc(mci); 787 - if (res) { 788 - edac_dbg(3, "failed edac_mc_add_mc()\n"); 789 - goto err; 790 - } 791 - 792 - if (edac_op_state == EDAC_OPSTATE_INT) { 793 - /* acquire interrupt that reports errors */ 794 - pdata->irq = platform_get_irq(pdev, 0); 795 - res = devm_request_irq(&pdev->dev, 796 - pdata->irq, 797 - mv64x60_mc_isr, 798 - 0, 799 - "[EDAC] MC err", 800 - mci); 801 - if (res < 0) { 802 - printk(KERN_ERR "%s: Unable to request irq %d for " 803 - "MV64x60 DRAM ERR\n", __func__, pdata->irq); 804 - res = -ENODEV; 805 - goto err2; 806 - } 807 - 808 - printk(KERN_INFO EDAC_MOD_STR " acquired irq %d for MC Err\n", 809 - pdata->irq); 810 - } 811 - 812 - /* get this far and it's successful */ 813 - edac_dbg(3, "success\n"); 814 - 815 - return 
0; 816 - 817 - err2: 818 - edac_mc_del_mc(&pdev->dev); 819 - err: 820 - devres_release_group(&pdev->dev, mv64x60_mc_err_probe); 821 - edac_mc_free(mci); 822 - return res; 823 - } 824 - 825 - static int mv64x60_mc_err_remove(struct platform_device *pdev) 826 - { 827 - struct mem_ctl_info *mci = platform_get_drvdata(pdev); 828 - 829 - edac_dbg(0, "\n"); 830 - 831 - edac_mc_del_mc(&pdev->dev); 832 - edac_mc_free(mci); 833 - return 0; 834 - } 835 - 836 - static struct platform_driver mv64x60_mc_err_driver = { 837 - .probe = mv64x60_mc_err_probe, 838 - .remove = mv64x60_mc_err_remove, 839 - .driver = { 840 - .name = "mv64x60_mc_err", 841 - } 842 - }; 843 - 844 - static struct platform_driver * const drivers[] = { 845 - &mv64x60_mc_err_driver, 846 - &mv64x60_cpu_err_driver, 847 - &mv64x60_sram_err_driver, 848 - #ifdef CONFIG_PCI 849 - &mv64x60_pci_err_driver, 850 - #endif 851 - }; 852 - 853 - static int __init mv64x60_edac_init(void) 854 - { 855 - 856 - printk(KERN_INFO "Marvell MV64x60 EDAC driver " MV64x60_REVISION "\n"); 857 - printk(KERN_INFO "\t(C) 2006-2007 MontaVista Software\n"); 858 - 859 - /* make sure error reporting method is sane */ 860 - switch (edac_op_state) { 861 - case EDAC_OPSTATE_POLL: 862 - case EDAC_OPSTATE_INT: 863 - break; 864 - default: 865 - edac_op_state = EDAC_OPSTATE_INT; 866 - break; 867 - } 868 - 869 - return platform_register_drivers(drivers, ARRAY_SIZE(drivers)); 870 - } 871 - module_init(mv64x60_edac_init); 872 - 873 - static void __exit mv64x60_edac_exit(void) 874 - { 875 - platform_unregister_drivers(drivers, ARRAY_SIZE(drivers)); 876 - } 877 - module_exit(mv64x60_edac_exit); 878 - 879 - MODULE_LICENSE("GPL"); 880 - MODULE_AUTHOR("Montavista Software, Inc."); 881 - module_param(edac_op_state, int, 0444); 882 - MODULE_PARM_DESC(edac_op_state, 883 - "EDAC Error Reporting state: 0=Poll, 2=Interrupt");
-114
drivers/edac/mv64x60_edac.h
··· 1 - /* 2 - * EDAC defs for Marvell MV64x60 bridge chip 3 - * 4 - * Author: Dave Jiang <djiang@mvista.com> 5 - * 6 - * 2007 (c) MontaVista Software, Inc. This file is licensed under 7 - * the terms of the GNU General Public License version 2. This program 8 - * is licensed "as is" without any warranty of any kind, whether express 9 - * or implied. 10 - * 11 - */ 12 - #ifndef _MV64X60_EDAC_H_ 13 - #define _MV64X60_EDAC_H_ 14 - 15 - #define MV64x60_REVISION " Ver: 2.0.0" 16 - #define EDAC_MOD_STR "MV64x60_edac" 17 - 18 - #define mv64x60_printk(level, fmt, arg...) \ 19 - edac_printk(level, "MV64x60", fmt, ##arg) 20 - 21 - #define mv64x60_mc_printk(mci, level, fmt, arg...) \ 22 - edac_mc_chipset_printk(mci, level, "MV64x60", fmt, ##arg) 23 - 24 - /* CPU Error Report Registers */ 25 - #define MV64x60_CPU_ERR_ADDR_LO 0x00 /* 0x0070 */ 26 - #define MV64x60_CPU_ERR_ADDR_HI 0x08 /* 0x0078 */ 27 - #define MV64x60_CPU_ERR_DATA_LO 0x00 /* 0x0128 */ 28 - #define MV64x60_CPU_ERR_DATA_HI 0x08 /* 0x0130 */ 29 - #define MV64x60_CPU_ERR_PARITY 0x10 /* 0x0138 */ 30 - #define MV64x60_CPU_ERR_CAUSE 0x18 /* 0x0140 */ 31 - #define MV64x60_CPU_ERR_MASK 0x20 /* 0x0148 */ 32 - 33 - #define MV64x60_CPU_CAUSE_MASK 0x07ffffff 34 - 35 - /* SRAM Error Report Registers */ 36 - #define MV64X60_SRAM_ERR_CAUSE 0x08 /* 0x0388 */ 37 - #define MV64X60_SRAM_ERR_ADDR_LO 0x10 /* 0x0390 */ 38 - #define MV64X60_SRAM_ERR_ADDR_HI 0x78 /* 0x03f8 */ 39 - #define MV64X60_SRAM_ERR_DATA_LO 0x18 /* 0x0398 */ 40 - #define MV64X60_SRAM_ERR_DATA_HI 0x20 /* 0x03a0 */ 41 - #define MV64X60_SRAM_ERR_PARITY 0x28 /* 0x03a8 */ 42 - 43 - /* SDRAM Controller Registers */ 44 - #define MV64X60_SDRAM_CONFIG 0x00 /* 0x1400 */ 45 - #define MV64X60_SDRAM_ERR_DATA_HI 0x40 /* 0x1440 */ 46 - #define MV64X60_SDRAM_ERR_DATA_LO 0x44 /* 0x1444 */ 47 - #define MV64X60_SDRAM_ERR_ECC_RCVD 0x48 /* 0x1448 */ 48 - #define MV64X60_SDRAM_ERR_ECC_CALC 0x4c /* 0x144c */ 49 - #define MV64X60_SDRAM_ERR_ADDR 0x50 /* 0x1450 */ 50 - #define 
MV64X60_SDRAM_ERR_ECC_CNTL 0x54 /* 0x1454 */ 51 - #define MV64X60_SDRAM_ERR_ECC_ERR_CNT 0x58 /* 0x1458 */ 52 - 53 - #define MV64X60_SDRAM_REGISTERED 0x20000 54 - #define MV64X60_SDRAM_ECC 0x40000 55 - 56 - #ifdef CONFIG_PCI 57 - /* 58 - * Bit 0 of MV64x60_PCIx_ERR_MASK does not exist on the 64360 and because of 59 - * errata FEr-#11 and FEr-##16 for the 64460, it should be 0 on that chip as 60 - * well. IOW, don't set bit 0. 61 - */ 62 - #define MV64X60_PCIx_ERR_MASK_VAL 0x00a50c24 63 - 64 - /* Register offsets from PCIx error address low register */ 65 - #define MV64X60_PCI_ERROR_ADDR_LO 0x00 66 - #define MV64X60_PCI_ERROR_ADDR_HI 0x04 67 - #define MV64X60_PCI_ERROR_ATTR 0x08 68 - #define MV64X60_PCI_ERROR_CMD 0x10 69 - #define MV64X60_PCI_ERROR_CAUSE 0x18 70 - #define MV64X60_PCI_ERROR_MASK 0x1c 71 - 72 - #define MV64X60_PCI_ERR_SWrPerr 0x0002 73 - #define MV64X60_PCI_ERR_SRdPerr 0x0004 74 - #define MV64X60_PCI_ERR_MWrPerr 0x0020 75 - #define MV64X60_PCI_ERR_MRdPerr 0x0040 76 - 77 - #define MV64X60_PCI_PE_MASK (MV64X60_PCI_ERR_SWrPerr | \ 78 - MV64X60_PCI_ERR_SRdPerr | \ 79 - MV64X60_PCI_ERR_MWrPerr | \ 80 - MV64X60_PCI_ERR_MRdPerr) 81 - 82 - struct mv64x60_pci_pdata { 83 - int pci_hose; 84 - void __iomem *pci_vbase; 85 - char *name; 86 - int irq; 87 - int edac_idx; 88 - }; 89 - 90 - #endif /* CONFIG_PCI */ 91 - 92 - struct mv64x60_mc_pdata { 93 - void __iomem *mc_vbase; 94 - int total_mem; 95 - char *name; 96 - int irq; 97 - int edac_idx; 98 - }; 99 - 100 - struct mv64x60_cpu_pdata { 101 - void __iomem *cpu_vbase[2]; 102 - char *name; 103 - int irq; 104 - int edac_idx; 105 - }; 106 - 107 - struct mv64x60_sram_pdata { 108 - void __iomem *sram_vbase; 109 - char *name; 110 - int irq; 111 - int edac_idx; 112 - }; 113 - 114 - #endif
-1
drivers/edac/r82600_edac.c
··· 204 204 { 205 205 struct r82600_error_info info; 206 206 207 - edac_dbg(1, "MC%d\n", mci->mc_idx); 208 207 r82600_get_error_info(mci, &info); 209 208 r82600_process_error_info(mci, &info, 1); 210 209 }
+3 -3
drivers/edac/skx_base.c
··· 174 174 return !!GET_BITFIELD(mcmtr, 2, 2); 175 175 } 176 176 177 - static int skx_get_dimm_config(struct mem_ctl_info *mci) 177 + static int skx_get_dimm_config(struct mem_ctl_info *mci, struct res_config *cfg) 178 178 { 179 179 struct skx_pvt *pvt = mci->pvt_info; 180 180 u32 mtr, mcmtr, amap, mcddrtcfg; ··· 195 195 pci_read_config_dword(imc->chan[i].cdev, 196 196 0x80 + 4 * j, &mtr); 197 197 if (IS_DIMM_PRESENT(mtr)) { 198 - ndimms += skx_get_dimm_info(mtr, mcmtr, amap, dimm, imc, i, j); 198 + ndimms += skx_get_dimm_info(mtr, mcmtr, amap, dimm, imc, i, j, cfg); 199 199 } else if (IS_NVDIMM_PRESENT(mcddrtcfg, j)) { 200 200 ndimms += skx_get_nvdimm_info(dimm, imc, i, j, 201 201 EDAC_MOD_STR); ··· 702 702 d->imc[i].node_id = node_id; 703 703 rc = skx_register_mci(&d->imc[i], d->imc[i].chan[0].cdev, 704 704 "Skylake Socket", EDAC_MOD_STR, 705 - skx_get_dimm_config); 705 + skx_get_dimm_config, cfg); 706 706 if (rc < 0) 707 707 goto fail; 708 708 }
+18 -5
drivers/edac/skx_common.c
··· 304 304 #define numcol(reg) skx_get_dimm_attr(reg, 0, 1, 10, 0, 2, "cols") 305 305 306 306 int skx_get_dimm_info(u32 mtr, u32 mcmtr, u32 amap, struct dimm_info *dimm, 307 - struct skx_imc *imc, int chan, int dimmno) 307 + struct skx_imc *imc, int chan, int dimmno, 308 + struct res_config *cfg) 308 309 { 309 - int banks = 16, ranks, rows, cols, npages; 310 + int banks, ranks, rows, cols, npages; 311 + enum mem_type mtype; 310 312 u64 size; 311 313 312 314 ranks = numrank(mtr); 313 315 rows = numrow(mtr); 314 316 cols = numcol(mtr); 317 + 318 + if (cfg->support_ddr5 && (amap & 0x8)) { 319 + banks = 32; 320 + mtype = MEM_DDR5; 321 + } else { 322 + banks = 16; 323 + mtype = MEM_DDR4; 324 + } 315 325 316 326 /* 317 327 * Compute size in 8-byte (2^3) words, then shift to MiB (2^20) ··· 342 332 dimm->nr_pages = npages; 343 333 dimm->grain = 32; 344 334 dimm->dtype = get_width(mtr); 345 - dimm->mtype = MEM_DDR4; 335 + dimm->mtype = mtype; 346 336 dimm->edac_mode = EDAC_SECDED; /* likely better than this */ 347 337 snprintf(dimm->label, sizeof(dimm->label), "CPU_SrcID#%u_MC#%u_Chan#%u_DIMM#%u", 348 338 imc->src_id, imc->lmc, chan, dimmno); ··· 400 390 401 391 int skx_register_mci(struct skx_imc *imc, struct pci_dev *pdev, 402 392 const char *ctl_name, const char *mod_str, 403 - get_dimm_config_f get_dimm_config) 393 + get_dimm_config_f get_dimm_config, 394 + struct res_config *cfg) 404 395 { 405 396 struct mem_ctl_info *mci; 406 397 struct edac_mc_layer layers[2]; ··· 436 425 } 437 426 438 427 mci->mtype_cap = MEM_FLAG_DDR4 | MEM_FLAG_NVDIMM; 428 + if (cfg->support_ddr5) 429 + mci->mtype_cap |= MEM_FLAG_DDR5; 439 430 mci->edac_ctl_cap = EDAC_FLAG_NONE; 440 431 mci->edac_cap = EDAC_FLAG_NONE; 441 432 mci->mod_name = mod_str; 442 433 mci->dev_name = pci_name(pdev); 443 434 mci->ctl_page_to_phys = NULL; 444 435 445 - rc = get_dimm_config(mci); 436 + rc = get_dimm_config(mci, cfg); 446 437 if (rc < 0) 447 438 goto fail; 448 439
+12 -4
drivers/edac/skx_common.h
··· 59 59 struct mem_ctl_info *mci; 60 60 struct pci_dev *mdev; /* for i10nm CPU */ 61 61 void __iomem *mbase; /* for i10nm CPU */ 62 + int chan_mmio_sz; /* for i10nm CPU */ 62 63 u8 mc; /* system wide mc# */ 63 64 u8 lmc; /* socket relative mc# */ 64 65 u8 src_id, node_id; ··· 83 82 84 83 enum type { 85 84 SKX, 86 - I10NM 85 + I10NM, 86 + SPR 87 87 }; 88 88 89 89 enum { ··· 120 118 unsigned int decs_did; 121 119 /* Default bus number configuration register offset */ 122 120 int busno_cfg_offset; 121 + /* Per DDR channel memory-mapped I/O size */ 122 + int ddr_chan_mmio_sz; 123 + bool support_ddr5; 123 124 }; 124 125 125 - typedef int (*get_dimm_config_f)(struct mem_ctl_info *mci); 126 + typedef int (*get_dimm_config_f)(struct mem_ctl_info *mci, 127 + struct res_config *cfg); 126 128 typedef bool (*skx_decode_f)(struct decoded_addr *res); 127 129 typedef void (*skx_show_retry_log_f)(struct decoded_addr *res, char *msg, int len); 128 130 ··· 142 136 int skx_get_hi_lo(unsigned int did, int off[], u64 *tolm, u64 *tohm); 143 137 144 138 int skx_get_dimm_info(u32 mtr, u32 mcmtr, u32 amap, struct dimm_info *dimm, 145 - struct skx_imc *imc, int chan, int dimmno); 139 + struct skx_imc *imc, int chan, int dimmno, 140 + struct res_config *cfg); 146 141 147 142 int skx_get_nvdimm_info(struct dimm_info *dimm, struct skx_imc *imc, 148 143 int chan, int dimmno, const char *mod_str); 149 144 150 145 int skx_register_mci(struct skx_imc *imc, struct pci_dev *pdev, 151 146 const char *ctl_name, const char *mod_str, 152 - get_dimm_config_f get_dimm_config); 147 + get_dimm_config_f get_dimm_config, 148 + struct res_config *cfg); 153 149 154 150 int skx_mce_check_error(struct notifier_block *nb, unsigned long val, 155 151 void *data);
+2 -1
drivers/edac/synopsys_edac.c
··· 1344 1344 1345 1345 #ifdef CONFIG_EDAC_DEBUG 1346 1346 if (priv->p_data->quirks & DDR_ECC_DATA_POISON_SUPPORT) { 1347 - if (edac_create_sysfs_attributes(mci)) { 1347 + rc = edac_create_sysfs_attributes(mci); 1348 + if (rc) { 1348 1349 edac_printk(KERN_ERR, EDAC_MC, 1349 1350 "Failed to create sysfs entries\n"); 1350 1351 goto free_edac_mc;
-1
drivers/edac/x38_edac.c
··· 238 238 { 239 239 struct x38_error_info info; 240 240 241 - edac_dbg(1, "MC%d\n", mci->mc_idx); 242 241 x38_get_and_clear_error_info(mci, &info); 243 242 x38_process_error_info(mci, &info); 244 243 }
+14 -2
include/linux/edac.h
··· 175 175 * @MEM_RDDR3: Registered DDR3 RAM 176 176 * This is a variant of the DDR3 memories. 177 177 * @MEM_LRDDR3: Load-Reduced DDR3 memory. 178 + * @MEM_LPDDR3: Low-Power DDR3 memory. 178 179 * @MEM_DDR4: Unbuffered DDR4 RAM 179 180 * @MEM_RDDR4: Registered DDR4 RAM 180 181 * This is a variant of the DDR4 memories. 181 182 * @MEM_LRDDR4: Load-Reduced DDR4 memory. 183 + * @MEM_LPDDR4: Low-Power DDR4 memory. 184 + * @MEM_DDR5: Unbuffered DDR5 RAM 182 185 * @MEM_NVDIMM: Non-volatile RAM 186 + * @MEM_WIO2: Wide I/O 2. 183 187 */ 184 188 enum mem_type { 185 189 MEM_EMPTY = 0, ··· 204 200 MEM_DDR3, 205 201 MEM_RDDR3, 206 202 MEM_LRDDR3, 203 + MEM_LPDDR3, 207 204 MEM_DDR4, 208 205 MEM_RDDR4, 209 206 MEM_LRDDR4, 207 + MEM_LPDDR4, 208 + MEM_DDR5, 210 209 MEM_NVDIMM, 210 + MEM_WIO2, 211 211 }; 212 212 213 213 #define MEM_FLAG_EMPTY BIT(MEM_EMPTY) ··· 231 223 #define MEM_FLAG_XDR BIT(MEM_XDR) 232 224 #define MEM_FLAG_DDR3 BIT(MEM_DDR3) 233 225 #define MEM_FLAG_RDDR3 BIT(MEM_RDDR3) 226 + #define MEM_FLAG_LPDDR3 BIT(MEM_LPDDR3) 234 227 #define MEM_FLAG_DDR4 BIT(MEM_DDR4) 235 228 #define MEM_FLAG_RDDR4 BIT(MEM_RDDR4) 236 229 #define MEM_FLAG_LRDDR4 BIT(MEM_LRDDR4) 230 + #define MEM_FLAG_LPDDR4 BIT(MEM_LPDDR4) 231 + #define MEM_FLAG_DDR5 BIT(MEM_DDR5) 237 232 #define MEM_FLAG_NVDIMM BIT(MEM_NVDIMM) 233 + #define MEM_FLAG_WIO2 BIT(MEM_WIO2) 238 234 239 235 /** 240 - * enum edac-type - Error Detection and Correction capabilities and mode 236 + * enum edac_type - Error Detection and Correction capabilities and mode 241 237 * @EDAC_UNKNOWN: Unknown if ECC is available 242 238 * @EDAC_NONE: Doesn't support ECC 243 239 * @EDAC_RESERVED: Reserved ECC type ··· 321 309 #define OP_OFFLINE 0x300 322 310 323 311 /** 324 - * enum edac_mc_layer - memory controller hierarchy layer 312 + * enum edac_mc_layer_type - memory controller hierarchy layer 325 313 * 326 314 * @EDAC_MC_LAYER_BRANCH: memory layer is named "branch" 327 315 * @EDAC_MC_LAYER_CHANNEL: memory layer is named "channel"