Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

PCI-Express Non-Transparent Bridge Support

A PCI-Express non-transparent bridge (NTB) is a point-to-point PCIe bus
connecting two systems, providing electrical isolation between the two
subsystems. A non-transparent bridge is functionally similar to a transparent
bridge, except that both sides of the bridge have their own independent address
domains. The host on one side of the bridge does not have visibility into the
complete memory or I/O space on the other side. To communicate across the
non-transparent bridge, each NTB endpoint has one or more apertures exposed to
the local system. Writes to these apertures are mirrored to memory on the
remote system. Communication can also occur through doorbell registers, which
initiate interrupts in the alternate domain, and scratch-pad registers, which
are accessible from both sides.

The NTB device driver is needed to configure these memory windows, doorbell,
and scratch-pad registers, and to use them in such a way that they form a
viable communication channel to the remote system. ntb_hw.[ch] determines the
usage model (NTB to NTB or NTB to Root Port) and abstracts away the underlying
hardware to provide access and a common interface to the doorbell registers,
scratch pads, and memory windows. These hardware interfaces are exported so
that other, non-mainlined kernel drivers can access them. ntb_transport.[ch]
uses the interfaces exported by ntb_hw.[ch] to set up one or more communication
channels and provide a reliable way of transferring data from one side to the
other, which it then exports so that "client" drivers can access them. These
client drivers provide a standard kernel interface (e.g., an Ethernet device)
on top of NTB, so that Linux can transfer data from one system to the other in
a standard way.

Signed-off-by: Jon Mason <jon.mason@intel.com>
Reviewed-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Authored by Jon Mason, committed by Greg Kroah-Hartman
fce8a7bb ea8a83a4
+3012 lines added in total

MAINTAINERS (+6)
@@ -5394,6 +5394,12 @@
 F:	Documentation/scsi/NinjaSCSI.txt
 F:	drivers/scsi/nsp32*
 
+NTB DRIVER
+M:	Jon Mason <jon.mason@intel.com>
+S:	Supported
+F:	drivers/ntb/
+F:	include/linux/ntb.h
+
 NTFS FILESYSTEM
 M:	Anton Altaparmakov <anton@tuxera.com>
 L:	linux-ntfs-dev@lists.sourceforge.net
drivers/Kconfig (+2)
@@ -150,6 +150,8 @@
 
 source "drivers/iio/Kconfig"
 
+source "drivers/ntb/Kconfig"
+
 source "drivers/vme/Kconfig"
 
 source "drivers/pwm/Kconfig"
drivers/Makefile (+1)
@@ -146,3 +146,4 @@
 obj-$(CONFIG_IIO)		+= iio/
 obj-$(CONFIG_VME_BUS)		+= vme/
 obj-$(CONFIG_IPACK_BUS)		+= ipack/
+obj-$(CONFIG_NTB)		+= ntb/
drivers/ntb/Kconfig (+13, new file)
@@ -0,0 +1,13 @@
+config NTB
+	tristate "Intel Non-Transparent Bridge support"
+	depends on PCI
+	depends on X86
+	help
+	 The PCI-E Non-transparent bridge hardware is a point-to-point PCI-E bus
+	 connecting 2 systems.  When configured, writes to the device's PCI
+	 mapped memory will be mirrored to a buffer on the remote system.  The
+	 ntb Linux driver uses this point-to-point communication as a method to
+	 transfer data from one system to the other.
+
+	 If unsure, say N.
+
drivers/ntb/Makefile (+3, new file)
@@ -0,0 +1,3 @@
+obj-$(CONFIG_NTB) += ntb.o
+
+ntb-objs := ntb_hw.o ntb_transport.o
drivers/ntb/ntb_hw.c (+1157, new file)
··· 1 + /* 2 + * This file is provided under a dual BSD/GPLv2 license. When using or 3 + * redistributing this file, you may do so under either license. 4 + * 5 + * GPL LICENSE SUMMARY 6 + * 7 + * Copyright(c) 2012 Intel Corporation. All rights reserved. 8 + * 9 + * This program is free software; you can redistribute it and/or modify 10 + * it under the terms of version 2 of the GNU General Public License as 11 + * published by the Free Software Foundation. 12 + * 13 + * BSD LICENSE 14 + * 15 + * Copyright(c) 2012 Intel Corporation. All rights reserved. 16 + * 17 + * Redistribution and use in source and binary forms, with or without 18 + * modification, are permitted provided that the following conditions 19 + * are met: 20 + * 21 + * * Redistributions of source code must retain the above copyright 22 + * notice, this list of conditions and the following disclaimer. 23 + * * Redistributions in binary form must reproduce the above copy 24 + * notice, this list of conditions and the following disclaimer in 25 + * the documentation and/or other materials provided with the 26 + * distribution. 27 + * * Neither the name of Intel Corporation nor the names of its 28 + * contributors may be used to endorse or promote products derived 29 + * from this software without specific prior written permission. 30 + * 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 36 + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 37 + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 38 + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 39 + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 40 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 41 + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 42 + * 43 + * Intel PCIe NTB Linux driver 44 + * 45 + * Contact Information: 46 + * Jon Mason <jon.mason@intel.com> 47 + */ 48 + #include <linux/debugfs.h> 49 + #include <linux/init.h> 50 + #include <linux/interrupt.h> 51 + #include <linux/module.h> 52 + #include <linux/pci.h> 53 + #include <linux/slab.h> 54 + #include "ntb_hw.h" 55 + #include "ntb_regs.h" 56 + 57 + #define NTB_NAME "Intel(R) PCI-E Non-Transparent Bridge Driver" 58 + #define NTB_VER "0.24" 59 + 60 + MODULE_DESCRIPTION(NTB_NAME); 61 + MODULE_VERSION(NTB_VER); 62 + MODULE_LICENSE("Dual BSD/GPL"); 63 + MODULE_AUTHOR("Intel Corporation"); 64 + 65 + enum { 66 + NTB_CONN_CLASSIC = 0, 67 + NTB_CONN_B2B, 68 + NTB_CONN_RP, 69 + }; 70 + 71 + enum { 72 + NTB_DEV_USD = 0, 73 + NTB_DEV_DSD, 74 + }; 75 + 76 + enum { 77 + SNB_HW = 0, 78 + BWD_HW, 79 + }; 80 + 81 + /* Translate memory window 0,1 to BAR 2,4 */ 82 + #define MW_TO_BAR(mw) (mw * 2 + 2) 83 + 84 + static DEFINE_PCI_DEVICE_TABLE(ntb_pci_tbl) = { 85 + {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_NTB_B2B_BWD)}, 86 + {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_NTB_B2B_JSF)}, 87 + {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_NTB_CLASSIC_JSF)}, 88 + {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_NTB_RP_JSF)}, 89 + {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_NTB_RP_SNB)}, 90 + {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_NTB_B2B_SNB)}, 91 + {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_NTB_CLASSIC_SNB)}, 92 + {0} 93 + }; 94 + 
MODULE_DEVICE_TABLE(pci, ntb_pci_tbl); 95 + 96 + /** 97 + * ntb_register_event_callback() - register event callback 98 + * @ndev: pointer to ntb_device instance 99 + * @func: callback function to register 100 + * 101 + * This function registers a callback for any HW driver events such as link 102 + * up/down, power management notices and etc. 103 + * 104 + * RETURNS: An appropriate -ERRNO error value on error, or zero for success. 105 + */ 106 + int ntb_register_event_callback(struct ntb_device *ndev, 107 + void (*func)(void *handle, unsigned int event)) 108 + { 109 + if (ndev->event_cb) 110 + return -EINVAL; 111 + 112 + ndev->event_cb = func; 113 + 114 + return 0; 115 + } 116 + 117 + /** 118 + * ntb_unregister_event_callback() - unregisters the event callback 119 + * @ndev: pointer to ntb_device instance 120 + * 121 + * This function unregisters the existing callback from transport 122 + */ 123 + void ntb_unregister_event_callback(struct ntb_device *ndev) 124 + { 125 + ndev->event_cb = NULL; 126 + } 127 + 128 + /** 129 + * ntb_register_db_callback() - register a callback for doorbell interrupt 130 + * @ndev: pointer to ntb_device instance 131 + * @idx: doorbell index to register callback, zero based 132 + * @func: callback function to register 133 + * 134 + * This function registers a callback function for the doorbell interrupt 135 + * on the primary side. The function will unmask the doorbell as well to 136 + * allow interrupt. 137 + * 138 + * RETURNS: An appropriate -ERRNO error value on error, or zero for success. 
139 + */ 140 + int ntb_register_db_callback(struct ntb_device *ndev, unsigned int idx, 141 + void *data, void (*func)(void *data, int db_num)) 142 + { 143 + unsigned long mask; 144 + 145 + if (idx >= ndev->max_cbs || ndev->db_cb[idx].callback) { 146 + dev_warn(&ndev->pdev->dev, "Invalid Index.\n"); 147 + return -EINVAL; 148 + } 149 + 150 + ndev->db_cb[idx].callback = func; 151 + ndev->db_cb[idx].data = data; 152 + 153 + /* unmask interrupt */ 154 + mask = readw(ndev->reg_ofs.pdb_mask); 155 + clear_bit(idx * ndev->bits_per_vector, &mask); 156 + writew(mask, ndev->reg_ofs.pdb_mask); 157 + 158 + return 0; 159 + } 160 + 161 + /** 162 + * ntb_unregister_db_callback() - unregister a callback for doorbell interrupt 163 + * @ndev: pointer to ntb_device instance 164 + * @idx: doorbell index to register callback, zero based 165 + * 166 + * This function unregisters a callback function for the doorbell interrupt 167 + * on the primary side. The function will also mask the said doorbell. 168 + */ 169 + void ntb_unregister_db_callback(struct ntb_device *ndev, unsigned int idx) 170 + { 171 + unsigned long mask; 172 + 173 + if (idx >= ndev->max_cbs || !ndev->db_cb[idx].callback) 174 + return; 175 + 176 + mask = readw(ndev->reg_ofs.pdb_mask); 177 + set_bit(idx * ndev->bits_per_vector, &mask); 178 + writew(mask, ndev->reg_ofs.pdb_mask); 179 + 180 + ndev->db_cb[idx].callback = NULL; 181 + } 182 + 183 + /** 184 + * ntb_find_transport() - find the transport pointer 185 + * @transport: pointer to pci device 186 + * 187 + * Given the pci device pointer, return the transport pointer passed in when 188 + * the transport attached when it was inited. 189 + * 190 + * RETURNS: pointer to transport. 
191 + */ 192 + void *ntb_find_transport(struct pci_dev *pdev) 193 + { 194 + struct ntb_device *ndev = pci_get_drvdata(pdev); 195 + return ndev->ntb_transport; 196 + } 197 + 198 + /** 199 + * ntb_register_transport() - Register NTB transport with NTB HW driver 200 + * @transport: transport identifier 201 + * 202 + * This function allows a transport to reserve the hardware driver for 203 + * NTB usage. 204 + * 205 + * RETURNS: pointer to ntb_device, NULL on error. 206 + */ 207 + struct ntb_device *ntb_register_transport(struct pci_dev *pdev, void *transport) 208 + { 209 + struct ntb_device *ndev = pci_get_drvdata(pdev); 210 + 211 + if (ndev->ntb_transport) 212 + return NULL; 213 + 214 + ndev->ntb_transport = transport; 215 + return ndev; 216 + } 217 + 218 + /** 219 + * ntb_unregister_transport() - Unregister the transport with the NTB HW driver 220 + * @ndev - ntb_device of the transport to be freed 221 + * 222 + * This function unregisters the transport from the HW driver and performs any 223 + * necessary cleanups. 224 + */ 225 + void ntb_unregister_transport(struct ntb_device *ndev) 226 + { 227 + int i; 228 + 229 + if (!ndev->ntb_transport) 230 + return; 231 + 232 + for (i = 0; i < ndev->max_cbs; i++) 233 + ntb_unregister_db_callback(ndev, i); 234 + 235 + ntb_unregister_event_callback(ndev); 236 + ndev->ntb_transport = NULL; 237 + } 238 + 239 + /** 240 + * ntb_get_max_spads() - get the total scratch regs usable 241 + * @ndev: pointer to ntb_device instance 242 + * 243 + * This function returns the max 32bit scratchpad registers usable by the 244 + * upper layer. 
245 + * 246 + * RETURNS: total number of scratch pad registers available 247 + */ 248 + int ntb_get_max_spads(struct ntb_device *ndev) 249 + { 250 + return ndev->limits.max_spads; 251 + } 252 + 253 + /** 254 + * ntb_write_local_spad() - write to the secondary scratchpad register 255 + * @ndev: pointer to ntb_device instance 256 + * @idx: index to the scratchpad register, 0 based 257 + * @val: the data value to put into the register 258 + * 259 + * This function allows writing of a 32bit value to the indexed scratchpad 260 + * register. This writes over the data mirrored to the local scratchpad register 261 + * by the remote system. 262 + * 263 + * RETURNS: An appropriate -ERRNO error value on error, or zero for success. 264 + */ 265 + int ntb_write_local_spad(struct ntb_device *ndev, unsigned int idx, u32 val) 266 + { 267 + if (idx >= ndev->limits.max_spads) 268 + return -EINVAL; 269 + 270 + dev_dbg(&ndev->pdev->dev, "Writing %x to local scratch pad index %d\n", 271 + val, idx); 272 + writel(val, ndev->reg_ofs.spad_read + idx * 4); 273 + 274 + return 0; 275 + } 276 + 277 + /** 278 + * ntb_read_local_spad() - read from the primary scratchpad register 279 + * @ndev: pointer to ntb_device instance 280 + * @idx: index to scratchpad register, 0 based 281 + * @val: pointer to 32bit integer for storing the register value 282 + * 283 + * This function allows reading of the 32bit scratchpad register on 284 + * the primary (internal) side. This allows the local system to read data 285 + * written and mirrored to the scratchpad register by the remote system. 286 + * 287 + * RETURNS: An appropriate -ERRNO error value on error, or zero for success. 
288 + */ 289 + int ntb_read_local_spad(struct ntb_device *ndev, unsigned int idx, u32 *val) 290 + { 291 + if (idx >= ndev->limits.max_spads) 292 + return -EINVAL; 293 + 294 + *val = readl(ndev->reg_ofs.spad_write + idx * 4); 295 + dev_dbg(&ndev->pdev->dev, 296 + "Reading %x from local scratch pad index %d\n", *val, idx); 297 + 298 + return 0; 299 + } 300 + 301 + /** 302 + * ntb_write_remote_spad() - write to the secondary scratchpad register 303 + * @ndev: pointer to ntb_device instance 304 + * @idx: index to the scratchpad register, 0 based 305 + * @val: the data value to put into the register 306 + * 307 + * This function allows writing of a 32bit value to the indexed scratchpad 308 + * register. The register resides on the secondary (external) side. This allows 309 + * the local system to write data to be mirrored to the remote system's 310 + * scratchpad register. 311 + * 312 + * RETURNS: An appropriate -ERRNO error value on error, or zero for success. 313 + */ 314 + int ntb_write_remote_spad(struct ntb_device *ndev, unsigned int idx, u32 val) 315 + { 316 + if (idx >= ndev->limits.max_spads) 317 + return -EINVAL; 318 + 319 + dev_dbg(&ndev->pdev->dev, "Writing %x to remote scratch pad index %d\n", 320 + val, idx); 321 + writel(val, ndev->reg_ofs.spad_write + idx * 4); 322 + 323 + return 0; 324 + } 325 + 326 + /** 327 + * ntb_read_remote_spad() - read from the primary scratchpad register 328 + * @ndev: pointer to ntb_device instance 329 + * @idx: index to scratchpad register, 0 based 330 + * @val: pointer to 32bit integer for storing the register value 331 + * 332 + * This function allows reading of the 32bit scratchpad register on 333 + * the primary (internal) side. This allows the local system to read the data 334 + * it wrote to be mirrored on the remote system. 335 + * 336 + * RETURNS: An appropriate -ERRNO error value on error, or zero for success.
337 + */ 338 + int ntb_read_remote_spad(struct ntb_device *ndev, unsigned int idx, u32 *val) 339 + { 340 + if (idx >= ndev->limits.max_spads) 341 + return -EINVAL; 342 + 343 + *val = readl(ndev->reg_ofs.spad_read + idx * 4); 344 + dev_dbg(&ndev->pdev->dev, 345 + "Reading %x from remote scratch pad index %d\n", *val, idx); 346 + 347 + return 0; 348 + } 349 + 350 + /** 351 + * ntb_get_mw_vbase() - get virtual addr for the NTB memory window 352 + * @ndev: pointer to ntb_device instance 353 + * @mw: memory window number 354 + * 355 + * This function provides the base virtual address of the memory window 356 + * specified. 357 + * 358 + * RETURNS: pointer to virtual address, or NULL on error. 359 + */ 360 + void *ntb_get_mw_vbase(struct ntb_device *ndev, unsigned int mw) 361 + { 362 + if (mw > NTB_NUM_MW) 363 + return NULL; 364 + 365 + return ndev->mw[mw].vbase; 366 + } 367 + 368 + /** 369 + * ntb_get_mw_size() - return size of NTB memory window 370 + * @ndev: pointer to ntb_device instance 371 + * @mw: memory window number 372 + * 373 + * This function provides the physical size of the memory window specified 374 + * 375 + * RETURNS: the size of the memory window or zero on error 376 + */ 377 + resource_size_t ntb_get_mw_size(struct ntb_device *ndev, unsigned int mw) 378 + { 379 + if (mw > NTB_NUM_MW) 380 + return 0; 381 + 382 + return ndev->mw[mw].bar_sz; 383 + } 384 + 385 + /** 386 + * ntb_set_mw_addr - set the memory window address 387 + * @ndev: pointer to ntb_device instance 388 + * @mw: memory window number 389 + * @addr: base address for data 390 + * 391 + * This function sets the base physical address of the memory window. This 392 + * memory address is where data from the remote system will be transferred into 393 + * or out of depending on how the transport is configured.
394 + */ 395 + void ntb_set_mw_addr(struct ntb_device *ndev, unsigned int mw, u64 addr) 396 + { 397 + if (mw > NTB_NUM_MW) 398 + return; 399 + 400 + dev_dbg(&ndev->pdev->dev, "Writing addr %Lx to BAR %d\n", addr, 401 + MW_TO_BAR(mw)); 402 + 403 + ndev->mw[mw].phys_addr = addr; 404 + 405 + switch (MW_TO_BAR(mw)) { 406 + case NTB_BAR_23: 407 + writeq(addr, ndev->reg_ofs.sbar2_xlat); 408 + break; 409 + case NTB_BAR_45: 410 + writeq(addr, ndev->reg_ofs.sbar4_xlat); 411 + break; 412 + } 413 + } 414 + 415 + /** 416 + * ntb_ring_sdb() - Set the doorbell on the secondary/external side 417 + * @ndev: pointer to ntb_device instance 418 + * @db: doorbell to ring 419 + * 420 + * This function allows triggering of a doorbell on the secondary/external 421 + * side that will initiate an interrupt on the remote host 422 + * 423 + * RETURNS: An appropriate -ERRNO error value on error, or zero for success. 424 + */ 425 + void ntb_ring_sdb(struct ntb_device *ndev, unsigned int db) 426 + { 427 + dev_dbg(&ndev->pdev->dev, "%s: ringing doorbell %d\n", __func__, db); 428 + 429 + if (ndev->hw_type == BWD_HW) 430 + writeq((u64) 1 << db, ndev->reg_ofs.sdb); 431 + else 432 + writew(((1 << ndev->bits_per_vector) - 1) << 433 + (db * ndev->bits_per_vector), ndev->reg_ofs.sdb); 434 + } 435 + 436 + static void ntb_link_event(struct ntb_device *ndev, int link_state) 437 + { 438 + unsigned int event; 439 + 440 + if (ndev->link_status == link_state) 441 + return; 442 + 443 + if (link_state == NTB_LINK_UP) { 444 + u16 status; 445 + 446 + dev_info(&ndev->pdev->dev, "Link Up\n"); 447 + ndev->link_status = NTB_LINK_UP; 448 + event = NTB_EVENT_HW_LINK_UP; 449 + 450 + if (ndev->hw_type == BWD_HW) 451 + status = readw(ndev->reg_ofs.lnk_stat); 452 + else { 453 + int rc = pci_read_config_word(ndev->pdev, 454 + SNB_LINK_STATUS_OFFSET, 455 + &status); 456 + if (rc) 457 + return; 458 + } 459 + dev_info(&ndev->pdev->dev, "Link Width %d, Link Speed %d\n", 460 + (status & NTB_LINK_WIDTH_MASK) >> 4, 461 + (status & 
NTB_LINK_SPEED_MASK)); 462 + } else { 463 + dev_info(&ndev->pdev->dev, "Link Down\n"); 464 + ndev->link_status = NTB_LINK_DOWN; 465 + event = NTB_EVENT_HW_LINK_DOWN; 466 + } 467 + 468 + /* notify the upper layer if we have an event change */ 469 + if (ndev->event_cb) 470 + ndev->event_cb(ndev->ntb_transport, event); 471 + } 472 + 473 + static int ntb_link_status(struct ntb_device *ndev) 474 + { 475 + int link_state; 476 + 477 + if (ndev->hw_type == BWD_HW) { 478 + u32 ntb_cntl; 479 + 480 + ntb_cntl = readl(ndev->reg_ofs.lnk_cntl); 481 + if (ntb_cntl & BWD_CNTL_LINK_DOWN) 482 + link_state = NTB_LINK_DOWN; 483 + else 484 + link_state = NTB_LINK_UP; 485 + } else { 486 + u16 status; 487 + int rc; 488 + 489 + rc = pci_read_config_word(ndev->pdev, SNB_LINK_STATUS_OFFSET, 490 + &status); 491 + if (rc) 492 + return rc; 493 + 494 + if (status & NTB_LINK_STATUS_ACTIVE) 495 + link_state = NTB_LINK_UP; 496 + else 497 + link_state = NTB_LINK_DOWN; 498 + } 499 + 500 + ntb_link_event(ndev, link_state); 501 + 502 + return 0; 503 + } 504 + 505 + /* BWD doesn't have link status interrupt, poll on that platform */ 506 + static void bwd_link_poll(struct work_struct *work) 507 + { 508 + struct ntb_device *ndev = container_of(work, struct ntb_device, 509 + hb_timer.work); 510 + unsigned long ts = jiffies; 511 + 512 + /* If we haven't gotten an interrupt in a while, check the BWD link 513 + * status bit 514 + */ 515 + if (ts > ndev->last_ts + NTB_HB_TIMEOUT) { 516 + int rc = ntb_link_status(ndev); 517 + if (rc) 518 + dev_err(&ndev->pdev->dev, 519 + "Error determining link status\n"); 520 + } 521 + 522 + schedule_delayed_work(&ndev->hb_timer, NTB_HB_TIMEOUT); 523 + } 524 + 525 + static int ntb_xeon_setup(struct ntb_device *ndev) 526 + { 527 + int rc; 528 + u8 val; 529 + 530 + ndev->hw_type = SNB_HW; 531 + 532 + rc = pci_read_config_byte(ndev->pdev, NTB_PPD_OFFSET, &val); 533 + if (rc) 534 + return rc; 535 + 536 + switch (val & SNB_PPD_CONN_TYPE) { 537 + case NTB_CONN_B2B: 538 + 
ndev->conn_type = NTB_CONN_B2B; 539 + break; 540 + case NTB_CONN_CLASSIC: 541 + case NTB_CONN_RP: 542 + default: 543 + dev_err(&ndev->pdev->dev, "Only B2B supported at this time\n"); 544 + return -EINVAL; 545 + } 546 + 547 + if (val & SNB_PPD_DEV_TYPE) 548 + ndev->dev_type = NTB_DEV_DSD; 549 + else 550 + ndev->dev_type = NTB_DEV_USD; 551 + 552 + ndev->reg_ofs.pdb = ndev->reg_base + SNB_PDOORBELL_OFFSET; 553 + ndev->reg_ofs.pdb_mask = ndev->reg_base + SNB_PDBMSK_OFFSET; 554 + ndev->reg_ofs.sbar2_xlat = ndev->reg_base + SNB_SBAR2XLAT_OFFSET; 555 + ndev->reg_ofs.sbar4_xlat = ndev->reg_base + SNB_SBAR4XLAT_OFFSET; 556 + ndev->reg_ofs.lnk_cntl = ndev->reg_base + SNB_NTBCNTL_OFFSET; 557 + ndev->reg_ofs.lnk_stat = ndev->reg_base + SNB_LINK_STATUS_OFFSET; 558 + ndev->reg_ofs.spad_read = ndev->reg_base + SNB_SPAD_OFFSET; 559 + ndev->reg_ofs.spci_cmd = ndev->reg_base + SNB_PCICMD_OFFSET; 560 + 561 + if (ndev->conn_type == NTB_CONN_B2B) { 562 + ndev->reg_ofs.sdb = ndev->reg_base + SNB_B2B_DOORBELL_OFFSET; 563 + ndev->reg_ofs.spad_write = ndev->reg_base + SNB_B2B_SPAD_OFFSET; 564 + ndev->limits.max_spads = SNB_MAX_SPADS; 565 + } else { 566 + ndev->reg_ofs.sdb = ndev->reg_base + SNB_SDOORBELL_OFFSET; 567 + ndev->reg_ofs.spad_write = ndev->reg_base + SNB_SPAD_OFFSET; 568 + ndev->limits.max_spads = SNB_MAX_COMPAT_SPADS; 569 + } 570 + 571 + ndev->limits.max_db_bits = SNB_MAX_DB_BITS; 572 + ndev->limits.msix_cnt = SNB_MSIX_CNT; 573 + ndev->bits_per_vector = SNB_DB_BITS_PER_VEC; 574 + 575 + return 0; 576 + } 577 + 578 + static int ntb_bwd_setup(struct ntb_device *ndev) 579 + { 580 + int rc; 581 + u32 val; 582 + 583 + ndev->hw_type = BWD_HW; 584 + 585 + rc = pci_read_config_dword(ndev->pdev, NTB_PPD_OFFSET, &val); 586 + if (rc) 587 + return rc; 588 + 589 + switch ((val & BWD_PPD_CONN_TYPE) >> 8) { 590 + case NTB_CONN_B2B: 591 + ndev->conn_type = NTB_CONN_B2B; 592 + break; 593 + case NTB_CONN_RP: 594 + default: 595 + dev_err(&ndev->pdev->dev, "Only B2B supported at this time\n"); 596 
+ return -EINVAL; 597 + } 598 + 599 + if (val & BWD_PPD_DEV_TYPE) 600 + ndev->dev_type = NTB_DEV_DSD; 601 + else 602 + ndev->dev_type = NTB_DEV_USD; 603 + 604 + /* Initiate PCI-E link training */ 605 + rc = pci_write_config_dword(ndev->pdev, NTB_PPD_OFFSET, 606 + val | BWD_PPD_INIT_LINK); 607 + if (rc) 608 + return rc; 609 + 610 + ndev->reg_ofs.pdb = ndev->reg_base + BWD_PDOORBELL_OFFSET; 611 + ndev->reg_ofs.pdb_mask = ndev->reg_base + BWD_PDBMSK_OFFSET; 612 + ndev->reg_ofs.sbar2_xlat = ndev->reg_base + BWD_SBAR2XLAT_OFFSET; 613 + ndev->reg_ofs.sbar4_xlat = ndev->reg_base + BWD_SBAR4XLAT_OFFSET; 614 + ndev->reg_ofs.lnk_cntl = ndev->reg_base + BWD_NTBCNTL_OFFSET; 615 + ndev->reg_ofs.lnk_stat = ndev->reg_base + BWD_LINK_STATUS_OFFSET; 616 + ndev->reg_ofs.spad_read = ndev->reg_base + BWD_SPAD_OFFSET; 617 + ndev->reg_ofs.spci_cmd = ndev->reg_base + BWD_PCICMD_OFFSET; 618 + 619 + if (ndev->conn_type == NTB_CONN_B2B) { 620 + ndev->reg_ofs.sdb = ndev->reg_base + BWD_B2B_DOORBELL_OFFSET; 621 + ndev->reg_ofs.spad_write = ndev->reg_base + BWD_B2B_SPAD_OFFSET; 622 + ndev->limits.max_spads = BWD_MAX_SPADS; 623 + } else { 624 + ndev->reg_ofs.sdb = ndev->reg_base + BWD_PDOORBELL_OFFSET; 625 + ndev->reg_ofs.spad_write = ndev->reg_base + BWD_SPAD_OFFSET; 626 + ndev->limits.max_spads = BWD_MAX_COMPAT_SPADS; 627 + } 628 + 629 + ndev->limits.max_db_bits = BWD_MAX_DB_BITS; 630 + ndev->limits.msix_cnt = BWD_MSIX_CNT; 631 + ndev->bits_per_vector = BWD_DB_BITS_PER_VEC; 632 + 633 + /* Since bwd doesn't have a link interrupt, setup a poll timer */ 634 + INIT_DELAYED_WORK(&ndev->hb_timer, bwd_link_poll); 635 + schedule_delayed_work(&ndev->hb_timer, NTB_HB_TIMEOUT); 636 + 637 + return 0; 638 + } 639 + 640 + static int __devinit ntb_device_setup(struct ntb_device *ndev) 641 + { 642 + int rc; 643 + 644 + switch (ndev->pdev->device) { 645 + case PCI_DEVICE_ID_INTEL_NTB_2ND_SNB: 646 + case PCI_DEVICE_ID_INTEL_NTB_RP_JSF: 647 + case PCI_DEVICE_ID_INTEL_NTB_RP_SNB: 648 + case 
PCI_DEVICE_ID_INTEL_NTB_CLASSIC_JSF: 649 + case PCI_DEVICE_ID_INTEL_NTB_CLASSIC_SNB: 650 + case PCI_DEVICE_ID_INTEL_NTB_B2B_JSF: 651 + case PCI_DEVICE_ID_INTEL_NTB_B2B_SNB: 652 + rc = ntb_xeon_setup(ndev); 653 + break; 654 + case PCI_DEVICE_ID_INTEL_NTB_B2B_BWD: 655 + rc = ntb_bwd_setup(ndev); 656 + break; 657 + default: 658 + rc = -ENODEV; 659 + } 660 + 661 + /* Enable Bus Master and Memory Space on the secondary side */ 662 + writew(PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER, ndev->reg_ofs.spci_cmd); 663 + 664 + return rc; 665 + } 666 + 667 + static void ntb_device_free(struct ntb_device *ndev) 668 + { 669 + if (ndev->hw_type == BWD_HW) 670 + cancel_delayed_work_sync(&ndev->hb_timer); 671 + } 672 + 673 + static irqreturn_t bwd_callback_msix_irq(int irq, void *data) 674 + { 675 + struct ntb_db_cb *db_cb = data; 676 + struct ntb_device *ndev = db_cb->ndev; 677 + 678 + dev_dbg(&ndev->pdev->dev, "MSI-X irq %d received for DB %d\n", irq, 679 + db_cb->db_num); 680 + 681 + if (db_cb->callback) 682 + db_cb->callback(db_cb->data, db_cb->db_num); 683 + 684 + /* No need to check for the specific HB irq, any interrupt means 685 + * we're connected. 686 + */ 687 + ndev->last_ts = jiffies; 688 + 689 + writeq((u64) 1 << db_cb->db_num, ndev->reg_ofs.pdb); 690 + 691 + return IRQ_HANDLED; 692 + } 693 + 694 + static irqreturn_t xeon_callback_msix_irq(int irq, void *data) 695 + { 696 + struct ntb_db_cb *db_cb = data; 697 + struct ntb_device *ndev = db_cb->ndev; 698 + 699 + dev_dbg(&ndev->pdev->dev, "MSI-X irq %d received for DB %d\n", irq, 700 + db_cb->db_num); 701 + 702 + if (db_cb->callback) 703 + db_cb->callback(db_cb->data, db_cb->db_num); 704 + 705 + /* On Sandybridge, there are 16 bits in the interrupt register 706 + * but only 4 vectors. So, 5 bits are assigned to the first 3 707 + * vectors, with the 4th having a single bit for link 708 + * interrupts. 
709 + */ 710 + writew(((1 << ndev->bits_per_vector) - 1) << 711 + (db_cb->db_num * ndev->bits_per_vector), ndev->reg_ofs.pdb); 712 + 713 + return IRQ_HANDLED; 714 + } 715 + 716 + /* Since we do not have a HW doorbell in BWD, this is only used in JF/JT */ 717 + static irqreturn_t xeon_event_msix_irq(int irq, void *dev) 718 + { 719 + struct ntb_device *ndev = dev; 720 + int rc; 721 + 722 + dev_dbg(&ndev->pdev->dev, "MSI-X irq %d received for Events\n", irq); 723 + 724 + rc = ntb_link_status(ndev); 725 + if (rc) 726 + dev_err(&ndev->pdev->dev, "Error determining link status\n"); 727 + 728 + /* bit 15 is always the link bit */ 729 + writew(1 << ndev->limits.max_db_bits, ndev->reg_ofs.pdb); 730 + 731 + return IRQ_HANDLED; 732 + } 733 + 734 + static irqreturn_t ntb_interrupt(int irq, void *dev) 735 + { 736 + struct ntb_device *ndev = dev; 737 + unsigned int i = 0; 738 + 739 + if (ndev->hw_type == BWD_HW) { 740 + u64 pdb = readq(ndev->reg_ofs.pdb); 741 + 742 + dev_dbg(&ndev->pdev->dev, "irq %d - pdb = %Lx\n", irq, pdb); 743 + 744 + while (pdb) { 745 + i = __ffs(pdb); 746 + pdb &= pdb - 1; 747 + bwd_callback_msix_irq(irq, &ndev->db_cb[i]); 748 + } 749 + } else { 750 + u16 pdb = readw(ndev->reg_ofs.pdb); 751 + 752 + dev_dbg(&ndev->pdev->dev, "irq %d - pdb = %x sdb %x\n", irq, 753 + pdb, readw(ndev->reg_ofs.sdb)); 754 + 755 + if (pdb & SNB_DB_HW_LINK) { 756 + xeon_event_msix_irq(irq, dev); 757 + pdb &= ~SNB_DB_HW_LINK; 758 + } 759 + 760 + while (pdb) { 761 + i = __ffs(pdb); 762 + pdb &= pdb - 1; 763 + xeon_callback_msix_irq(irq, &ndev->db_cb[i]); 764 + } 765 + } 766 + 767 + return IRQ_HANDLED; 768 + } 769 + 770 + static int ntb_setup_msix(struct ntb_device *ndev) 771 + { 772 + struct pci_dev *pdev = ndev->pdev; 773 + struct msix_entry *msix; 774 + int msix_entries; 775 + int rc, i, pos; 776 + u16 val; 777 + 778 + pos = pci_find_capability(pdev, PCI_CAP_ID_MSIX); 779 + if (!pos) { 780 + rc = -EIO; 781 + goto err; 782 + } 783 + 784 + rc = pci_read_config_word(pdev, pos + 
					  PCI_MSIX_FLAGS, &val);
	if (rc)
		goto err;

	msix_entries = msix_table_size(val);
	if (msix_entries > ndev->limits.msix_cnt) {
		rc = -EINVAL;
		goto err;
	}

	ndev->msix_entries = kmalloc(sizeof(struct msix_entry) * msix_entries,
				     GFP_KERNEL);
	if (!ndev->msix_entries) {
		rc = -ENOMEM;
		goto err;
	}

	for (i = 0; i < msix_entries; i++)
		ndev->msix_entries[i].entry = i;

	rc = pci_enable_msix(pdev, ndev->msix_entries, msix_entries);
	if (rc < 0)
		goto err1;
	if (rc > 0) {
		/* On SNB, the link interrupt is always tied to 4th vector.  If
		 * we can't get all 4, then we can't use MSI-X.
		 */
		if (ndev->hw_type != BWD_HW) {
			rc = -EIO;
			goto err1;
		}

		dev_warn(&pdev->dev,
			 "Only %d MSI-X vectors.  Limiting the number of queues to that number.\n",
			 rc);
		msix_entries = rc;
	}

	for (i = 0; i < msix_entries; i++) {
		msix = &ndev->msix_entries[i];
		WARN_ON(!msix->vector);

		/* Use the last MSI-X vector for Link status */
		if (ndev->hw_type == BWD_HW) {
			rc = request_irq(msix->vector, bwd_callback_msix_irq, 0,
					 "ntb-callback-msix", &ndev->db_cb[i]);
			if (rc)
				goto err2;
		} else {
			if (i == msix_entries - 1) {
				rc = request_irq(msix->vector,
						 xeon_event_msix_irq, 0,
						 "ntb-event-msix", ndev);
				if (rc)
					goto err2;
			} else {
				rc = request_irq(msix->vector,
						 xeon_callback_msix_irq, 0,
						 "ntb-callback-msix",
						 &ndev->db_cb[i]);
				if (rc)
					goto err2;
			}
		}
	}

	ndev->num_msix = msix_entries;
	if (ndev->hw_type == BWD_HW)
		ndev->max_cbs = msix_entries;
	else
		ndev->max_cbs = msix_entries - 1;

	return 0;

err2:
	while (--i >= 0) {
		msix = &ndev->msix_entries[i];
		if (ndev->hw_type != BWD_HW && i == ndev->num_msix - 1)
			free_irq(msix->vector, ndev);
		else
			free_irq(msix->vector, &ndev->db_cb[i]);
	}
	pci_disable_msix(pdev);
err1:
	kfree(ndev->msix_entries);
	dev_err(&pdev->dev, "Error allocating MSI-X interrupt\n");
err:
	ndev->num_msix = 0;
	return rc;
}

static int ntb_setup_msi(struct ntb_device *ndev)
{
	struct pci_dev *pdev = ndev->pdev;
	int rc;

	rc = pci_enable_msi(pdev);
	if (rc)
		return rc;

	rc = request_irq(pdev->irq, ntb_interrupt, 0, "ntb-msi", ndev);
	if (rc) {
		pci_disable_msi(pdev);
		dev_err(&pdev->dev, "Error allocating MSI interrupt\n");
		return rc;
	}

	return 0;
}

static int ntb_setup_intx(struct ntb_device *ndev)
{
	struct pci_dev *pdev = ndev->pdev;
	int rc;

	pci_msi_off(pdev);

	/* Verify intx is enabled */
	pci_intx(pdev, 1);

	rc = request_irq(pdev->irq, ntb_interrupt, IRQF_SHARED, "ntb-intx",
			 ndev);
	if (rc)
		return rc;

	return 0;
}

static int __devinit ntb_setup_interrupts(struct ntb_device *ndev)
{
	int rc;

	/* On BWD, disable all interrupts.  On SNB, disable all but Link
	 * Interrupt.  The rest will be unmasked as callbacks are registered.
	 */
	if (ndev->hw_type == BWD_HW)
		writeq(~0, ndev->reg_ofs.pdb_mask);
	else
		writew(~(1 << ndev->limits.max_db_bits),
		       ndev->reg_ofs.pdb_mask);

	rc = ntb_setup_msix(ndev);
	if (!rc)
		goto done;

	ndev->bits_per_vector = 1;
	ndev->max_cbs = ndev->limits.max_db_bits;

	rc = ntb_setup_msi(ndev);
	if (!rc)
		goto done;

	rc = ntb_setup_intx(ndev);
	if (rc) {
		dev_err(&ndev->pdev->dev, "no usable interrupts\n");
		return rc;
	}

done:
	return 0;
}

static void __devexit ntb_free_interrupts(struct ntb_device *ndev)
{
	struct pci_dev *pdev = ndev->pdev;

	/* mask interrupts */
	if (ndev->hw_type == BWD_HW)
		writeq(~0, ndev->reg_ofs.pdb_mask);
	else
		writew(~0, ndev->reg_ofs.pdb_mask);

	if (ndev->num_msix) {
		struct msix_entry *msix;
		u32 i;

		for (i = 0; i < ndev->num_msix; i++) {
			msix = &ndev->msix_entries[i];
			if (ndev->hw_type != BWD_HW && i == ndev->num_msix - 1)
				free_irq(msix->vector, ndev);
			else
				free_irq(msix->vector, &ndev->db_cb[i]);
		}
		pci_disable_msix(pdev);
	} else {
		free_irq(pdev->irq, ndev);

		if (pci_dev_msi_enabled(pdev))
			pci_disable_msi(pdev);
	}
}

static int __devinit ntb_create_callbacks(struct ntb_device *ndev)
{
	int i;

	/* Chicken-egg issue.  We won't know how many callbacks are necessary
	 * until we see how many MSI-X vectors we get, but these pointers need
	 * to be passed into the MSI-X register function.  So, we allocate the
	 * max, knowing that they might not all be used, to work around this.
	 */
	ndev->db_cb = kcalloc(ndev->limits.max_db_bits,
			      sizeof(struct ntb_db_cb),
			      GFP_KERNEL);
	if (!ndev->db_cb)
		return -ENOMEM;

	for (i = 0; i < ndev->limits.max_db_bits; i++) {
		ndev->db_cb[i].db_num = i;
		ndev->db_cb[i].ndev = ndev;
	}

	return 0;
}

static void ntb_free_callbacks(struct ntb_device *ndev)
{
	int i;

	for (i = 0; i < ndev->limits.max_db_bits; i++)
		ntb_unregister_db_callback(ndev, i);

	kfree(ndev->db_cb);
}

static int __devinit
ntb_pci_probe(struct pci_dev *pdev,
	      __attribute__((unused)) const struct pci_device_id *id)
{
	struct ntb_device *ndev;
	int rc, i;

	ndev = kzalloc(sizeof(struct ntb_device), GFP_KERNEL);
	if (!ndev)
		return -ENOMEM;

	ndev->pdev = pdev;
	ndev->link_status = NTB_LINK_DOWN;
	pci_set_drvdata(pdev, ndev);

	rc = pci_enable_device(pdev);
	if (rc)
		goto err;

	pci_set_master(ndev->pdev);

	rc = pci_request_selected_regions(pdev, NTB_BAR_MASK, KBUILD_MODNAME);
	if (rc)
		goto err1;

	ndev->reg_base = pci_ioremap_bar(pdev, NTB_BAR_MMIO);
	if (!ndev->reg_base) {
		dev_warn(&pdev->dev, "Cannot remap BAR 0\n");
		rc = -EIO;
		goto err2;
	}

	for (i = 0; i < NTB_NUM_MW; i++) {
		ndev->mw[i].bar_sz = pci_resource_len(pdev, MW_TO_BAR(i));
		ndev->mw[i].vbase =
		    ioremap_wc(pci_resource_start(pdev, MW_TO_BAR(i)),
			       ndev->mw[i].bar_sz);
		dev_info(&pdev->dev, "MW %d size %d\n", i,
			 (u32) pci_resource_len(pdev, MW_TO_BAR(i)));
		if (!ndev->mw[i].vbase) {
			dev_warn(&pdev->dev, "Cannot remap BAR %d\n",
				 MW_TO_BAR(i));
			rc = -EIO;
			goto err3;
		}
	}

	rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
	if (rc) {
		rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
		if (rc)
			goto err3;

		dev_warn(&pdev->dev, "Cannot DMA highmem\n");
	}

	rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
	if (rc) {
		rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
		if (rc)
			goto err3;

		dev_warn(&pdev->dev, "Cannot DMA consistent highmem\n");
	}

	rc = ntb_device_setup(ndev);
	if (rc)
		goto err3;

	rc = ntb_create_callbacks(ndev);
	if (rc)
		goto err4;

	rc = ntb_setup_interrupts(ndev);
	if (rc)
		goto err5;

	/* The scratchpad registers keep the values between rmmod/insmod,
	 * blast them now
	 */
	for (i = 0; i < ndev->limits.max_spads; i++) {
		ntb_write_local_spad(ndev, i, 0);
		ntb_write_remote_spad(ndev, i, 0);
	}

	rc = ntb_transport_init(pdev);
	if (rc)
		goto err6;

	/* Let's bring the NTB link up */
	writel(NTB_CNTL_BAR23_SNOOP | NTB_CNTL_BAR45_SNOOP,
	       ndev->reg_ofs.lnk_cntl);

	return 0;

err6:
	ntb_free_interrupts(ndev);
err5:
	ntb_free_callbacks(ndev);
err4:
	ntb_device_free(ndev);
err3:
	for (i--; i >= 0; i--)
		iounmap(ndev->mw[i].vbase);
	iounmap(ndev->reg_base);
err2:
	pci_release_selected_regions(pdev, NTB_BAR_MASK);
err1:
	pci_disable_device(pdev);
err:
	kfree(ndev);

	dev_err(&pdev->dev, "Error loading %s module\n", KBUILD_MODNAME);
	return rc;
}

static void __devexit ntb_pci_remove(struct pci_dev *pdev)
{
	struct ntb_device *ndev = pci_get_drvdata(pdev);
	int i;
	u32 ntb_cntl;

	/* Bring NTB link down */
	ntb_cntl = readl(ndev->reg_ofs.lnk_cntl);
	ntb_cntl |= NTB_LINK_DISABLE;
	writel(ntb_cntl, ndev->reg_ofs.lnk_cntl);

	ntb_transport_free(ndev->ntb_transport);

	ntb_free_interrupts(ndev);
	ntb_free_callbacks(ndev);
	ntb_device_free(ndev);

	for (i = 0; i < NTB_NUM_MW; i++)
		iounmap(ndev->mw[i].vbase);

	iounmap(ndev->reg_base);
	pci_release_selected_regions(pdev, NTB_BAR_MASK);
	pci_disable_device(pdev);
	kfree(ndev);
}

static struct pci_driver ntb_pci_driver = {
	.name = KBUILD_MODNAME,
	.id_table = ntb_pci_tbl,
	.probe = ntb_pci_probe,
	.remove = __devexit_p(ntb_pci_remove),
};
module_pci_driver(ntb_pci_driver);
drivers/ntb/ntb_hw.h (+181 lines)
/*
 * This file is provided under a dual BSD/GPLv2 license.  When using or
 * redistributing this file, you may do so under either license.
 *
 * GPL LICENSE SUMMARY
 *
 * Copyright(c) 2012 Intel Corporation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of version 2 of the GNU General Public License as
 * published by the Free Software Foundation.
 *
 * BSD LICENSE
 *
 * Copyright(c) 2012 Intel Corporation. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *
 *   * Redistributions of source code must retain the above copyright
 *     notice, this list of conditions and the following disclaimer.
 *   * Redistributions in binary form must reproduce the above copyright
 *     notice, this list of conditions and the following disclaimer in
 *     the documentation and/or other materials provided with the
 *     distribution.
 *   * Neither the name of Intel Corporation nor the names of its
 *     contributors may be used to endorse or promote products derived
 *     from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * Intel PCIe NTB Linux driver
 *
 * Contact Information:
 * Jon Mason <jon.mason@intel.com>
 */

#define PCI_DEVICE_ID_INTEL_NTB_B2B_JSF		0x3725
#define PCI_DEVICE_ID_INTEL_NTB_CLASSIC_JSF	0x3726
#define PCI_DEVICE_ID_INTEL_NTB_RP_JSF		0x3727
#define PCI_DEVICE_ID_INTEL_NTB_RP_SNB		0x3C08
#define PCI_DEVICE_ID_INTEL_NTB_B2B_SNB		0x3C0D
#define PCI_DEVICE_ID_INTEL_NTB_CLASSIC_SNB	0x3C0E
#define PCI_DEVICE_ID_INTEL_NTB_2ND_SNB		0x3C0F
#define PCI_DEVICE_ID_INTEL_NTB_B2B_BWD		0x0C4E

#define msix_table_size(control)	((control & PCI_MSIX_FLAGS_QSIZE) + 1)

#define NTB_BAR_MMIO		0
#define NTB_BAR_23		2
#define NTB_BAR_45		4
#define NTB_BAR_MASK		((1 << NTB_BAR_MMIO) | (1 << NTB_BAR_23) |\
				 (1 << NTB_BAR_45))

#define NTB_LINK_DOWN		0
#define NTB_LINK_UP		1

#define NTB_HB_TIMEOUT		msecs_to_jiffies(1000)

#define NTB_NUM_MW		2

enum ntb_hw_event {
	NTB_EVENT_SW_EVENT0 = 0,
	NTB_EVENT_SW_EVENT1,
	NTB_EVENT_SW_EVENT2,
	NTB_EVENT_HW_ERROR,
	NTB_EVENT_HW_LINK_UP,
	NTB_EVENT_HW_LINK_DOWN,
};

struct ntb_mw {
	dma_addr_t phys_addr;
	void __iomem *vbase;
	resource_size_t bar_sz;
};

struct ntb_db_cb {
	void (*callback) (void *data, int db_num);
	unsigned int db_num;
	void *data;
	struct ntb_device *ndev;
};

struct ntb_device {
	struct pci_dev *pdev;
	struct msix_entry *msix_entries;
	void __iomem *reg_base;
	struct ntb_mw mw[NTB_NUM_MW];
	struct {
		unsigned int max_spads;
		unsigned int max_db_bits;
		unsigned int msix_cnt;
	} limits;
	struct {
		void __iomem *pdb;
		void __iomem *pdb_mask;
		void __iomem *sdb;
		void __iomem *sbar2_xlat;
		void __iomem *sbar4_xlat;
		void __iomem *spad_write;
		void __iomem *spad_read;
		void __iomem *lnk_cntl;
		void __iomem *lnk_stat;
		void __iomem *spci_cmd;
	} reg_ofs;
	struct ntb_transport *ntb_transport;
	void (*event_cb)(void *handle, enum ntb_hw_event event);

	struct ntb_db_cb *db_cb;
	unsigned char hw_type;
	unsigned char conn_type;
	unsigned char dev_type;
	unsigned char num_msix;
	unsigned char bits_per_vector;
	unsigned char max_cbs;
	unsigned char link_status;
	struct delayed_work hb_timer;
	unsigned long last_ts;
};

/**
 * ntb_hw_link_status() - return the hardware link status
 * @ndev: pointer to ntb_device instance
 *
 * Returns true if the hardware is connected to the remote system
 *
 * RETURNS: true or false based on the hardware link state
 */
static inline bool ntb_hw_link_status(struct ntb_device *ndev)
{
	return ndev->link_status == NTB_LINK_UP;
}

/**
 * ntb_query_pdev() - return the pci_dev pointer
 * @ndev: pointer to ntb_device instance
 *
 * Given the ntb pointer, return the pci_dev pointer for the NTB hardware
 * device
 *
 * RETURNS: a pointer to the ntb pci_dev
 */
static inline struct pci_dev *ntb_query_pdev(struct ntb_device *ndev)
{
	return ndev->pdev;
}

struct ntb_device *ntb_register_transport(struct pci_dev *pdev,
					  void *transport);
void ntb_unregister_transport(struct ntb_device *ndev);
void ntb_set_mw_addr(struct ntb_device *ndev, unsigned int mw, u64 addr);
int ntb_register_db_callback(struct ntb_device *ndev, unsigned int idx,
			     void *data, void (*db_cb_func) (void *data,
							     int db_num));
void ntb_unregister_db_callback(struct ntb_device *ndev, unsigned int idx);
int ntb_register_event_callback(struct ntb_device *ndev,
				void (*event_cb_func) (void *handle,
						       unsigned int event));
void ntb_unregister_event_callback(struct ntb_device *ndev);
int ntb_get_max_spads(struct ntb_device *ndev);
int ntb_write_local_spad(struct ntb_device *ndev, unsigned int idx, u32 val);
int ntb_read_local_spad(struct ntb_device *ndev, unsigned int idx, u32 *val);
int ntb_write_remote_spad(struct ntb_device *ndev, unsigned int idx, u32 val);
int ntb_read_remote_spad(struct ntb_device *ndev, unsigned int idx, u32 *val);
void *ntb_get_mw_vbase(struct ntb_device *ndev, unsigned int mw);
resource_size_t ntb_get_mw_size(struct ntb_device *ndev, unsigned int mw);
void ntb_ring_sdb(struct ntb_device *ndev, unsigned int idx);
void *ntb_find_transport(struct pci_dev *pdev);

int ntb_transport_init(struct pci_dev *pdev);
void ntb_transport_free(void *transport);
drivers/ntb/ntb_regs.h (+139 lines)
/*
 * This file is provided under a dual BSD/GPLv2 license.  When using or
 * redistributing this file, you may do so under either license.
 *
 * GPL LICENSE SUMMARY
 *
 * Copyright(c) 2012 Intel Corporation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of version 2 of the GNU General Public License as
 * published by the Free Software Foundation.
 *
 * BSD LICENSE
 *
 * Copyright(c) 2012 Intel Corporation. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *
 *   * Redistributions of source code must retain the above copyright
 *     notice, this list of conditions and the following disclaimer.
 *   * Redistributions in binary form must reproduce the above copyright
 *     notice, this list of conditions and the following disclaimer in
 *     the documentation and/or other materials provided with the
 *     distribution.
 *   * Neither the name of Intel Corporation nor the names of its
 *     contributors may be used to endorse or promote products derived
 *     from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * Intel PCIe NTB Linux driver
 *
 * Contact Information:
 * Jon Mason <jon.mason@intel.com>
 */

#define NTB_LINK_ENABLE		0x0000
#define NTB_LINK_DISABLE	0x0002
#define NTB_LINK_STATUS_ACTIVE	0x2000
#define NTB_LINK_SPEED_MASK	0x000f
#define NTB_LINK_WIDTH_MASK	0x03f0

#define SNB_MSIX_CNT		4
#define SNB_MAX_SPADS		16
#define SNB_MAX_COMPAT_SPADS	8
/* Reserve the uppermost bit for link interrupt */
#define SNB_MAX_DB_BITS		15
#define SNB_DB_BITS_PER_VEC	5

#define SNB_DB_HW_LINK		0x8000

#define SNB_PCICMD_OFFSET	0x0504
#define SNB_DEVCTRL_OFFSET	0x0598
#define SNB_LINK_STATUS_OFFSET	0x01A2

#define SNB_PBAR2LMT_OFFSET	0x0000
#define SNB_PBAR4LMT_OFFSET	0x0008
#define SNB_PBAR2XLAT_OFFSET	0x0010
#define SNB_PBAR4XLAT_OFFSET	0x0018
#define SNB_SBAR2LMT_OFFSET	0x0020
#define SNB_SBAR4LMT_OFFSET	0x0028
#define SNB_SBAR2XLAT_OFFSET	0x0030
#define SNB_SBAR4XLAT_OFFSET	0x0038
#define SNB_SBAR0BASE_OFFSET	0x0040
#define SNB_SBAR2BASE_OFFSET	0x0048
#define SNB_SBAR4BASE_OFFSET	0x0050
#define SNB_NTBCNTL_OFFSET	0x0058
#define SNB_SBDF_OFFSET		0x005C
#define SNB_PDOORBELL_OFFSET	0x0060
#define SNB_PDBMSK_OFFSET	0x0062
#define SNB_SDOORBELL_OFFSET	0x0064
#define SNB_SDBMSK_OFFSET	0x0066
#define SNB_USMEMMISS		0x0070
#define SNB_SPAD_OFFSET		0x0080
#define SNB_SPADSEMA4_OFFSET	0x00c0
#define SNB_WCCNTRL_OFFSET	0x00e0
#define SNB_B2B_SPAD_OFFSET	0x0100
#define SNB_B2B_DOORBELL_OFFSET	0x0140
#define SNB_B2B_XLAT_OFFSET	0x0144

#define BWD_MSIX_CNT		34
#define BWD_MAX_SPADS		16
#define BWD_MAX_COMPAT_SPADS	16
#define BWD_MAX_DB_BITS		34
#define BWD_DB_BITS_PER_VEC	1

#define BWD_PCICMD_OFFSET	0xb004
#define BWD_MBAR23_OFFSET	0xb018
#define BWD_MBAR45_OFFSET	0xb020
#define BWD_DEVCTRL_OFFSET	0xb048
#define BWD_LINK_STATUS_OFFSET	0xb052

#define BWD_SBAR2XLAT_OFFSET	0x0008
#define BWD_SBAR4XLAT_OFFSET	0x0010
#define BWD_PDOORBELL_OFFSET	0x0020
#define BWD_PDBMSK_OFFSET	0x0028
#define BWD_NTBCNTL_OFFSET	0x0060
#define BWD_EBDF_OFFSET		0x0064
#define BWD_SPAD_OFFSET		0x0080
#define BWD_SPADSEMA_OFFSET	0x00c0
#define BWD_STKYSPAD_OFFSET	0x00c4
#define BWD_PBAR2XLAT_OFFSET	0x8008
#define BWD_PBAR4XLAT_OFFSET	0x8010
#define BWD_B2B_DOORBELL_OFFSET	0x8020
#define BWD_B2B_SPAD_OFFSET	0x8080
#define BWD_B2B_SPADSEMA_OFFSET	0x80c0
#define BWD_B2B_STKYSPAD_OFFSET	0x80c4

#define NTB_CNTL_BAR23_SNOOP	(1 << 2)
#define NTB_CNTL_BAR45_SNOOP	(1 << 6)
#define BWD_CNTL_LINK_DOWN	(1 << 16)

#define NTB_PPD_OFFSET		0x00D4
#define SNB_PPD_CONN_TYPE	0x0003
#define SNB_PPD_DEV_TYPE	0x0010
#define BWD_PPD_INIT_LINK	0x0008
#define BWD_PPD_CONN_TYPE	0x0300
#define BWD_PPD_DEV_TYPE	0x1000

#define BWD_PBAR2XLAT_USD_ADDR	0x0000004000000000
#define BWD_PBAR4XLAT_USD_ADDR	0x0000008000000000
#define BWD_MBAR23_USD_ADDR	0x000000410000000C
#define BWD_MBAR45_USD_ADDR	0x000000810000000C
#define BWD_PBAR2XLAT_DSD_ADDR	0x0000004100000000
#define BWD_PBAR4XLAT_DSD_ADDR	0x0000008100000000
#define BWD_MBAR23_DSD_ADDR	0x000000400000000C
#define BWD_MBAR45_DSD_ADDR	0x000000800000000C
drivers/ntb/ntb_transport.c (+1427 lines)
/*
 * This file is provided under a dual BSD/GPLv2 license.  When using or
 * redistributing this file, you may do so under either license.
 *
 * GPL LICENSE SUMMARY
 *
 * Copyright(c) 2012 Intel Corporation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of version 2 of the GNU General Public License as
 * published by the Free Software Foundation.
 *
 * BSD LICENSE
 *
 * Copyright(c) 2012 Intel Corporation. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *
 *   * Redistributions of source code must retain the above copyright
 *     notice, this list of conditions and the following disclaimer.
 *   * Redistributions in binary form must reproduce the above copyright
 *     notice, this list of conditions and the following disclaimer in
 *     the documentation and/or other materials provided with the
 *     distribution.
 *   * Neither the name of Intel Corporation nor the names of its
 *     contributors may be used to endorse or promote products derived
 *     from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * Intel PCIe NTB Linux driver
 *
 * Contact Information:
 * Jon Mason <jon.mason@intel.com>
 */
#include <linux/debugfs.h>
#include <linux/delay.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/export.h>
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/ntb.h>
#include "ntb_hw.h"

#define NTB_TRANSPORT_VERSION	1

static int transport_mtu = 0x401E;
module_param(transport_mtu, uint, 0644);
MODULE_PARM_DESC(transport_mtu, "Maximum size of NTB transport packets");

static unsigned char max_num_clients = 2;
module_param(max_num_clients, byte, 0644);
MODULE_PARM_DESC(max_num_clients, "Maximum number of NTB transport clients");

struct ntb_queue_entry {
	/* ntb_queue list reference */
	struct list_head entry;
	/* pointers to data to be transferred */
	void *cb_data;
	void *buf;
	unsigned int len;
	unsigned int flags;
};

struct ntb_transport_qp {
	struct ntb_transport *transport;
	struct ntb_device *ndev;
	void *cb_data;

	bool client_ready;
	bool qp_link;
	u8 qp_num;	/* Only 64 QP's are allowed.  0-63 */

	void (*tx_handler) (struct ntb_transport_qp *qp, void *qp_data,
			    void *data, int len);
	struct list_head tx_free_q;
	spinlock_t ntb_tx_free_q_lock;
	void *tx_mw_begin;
	void *tx_mw_end;
	void *tx_offset;

	void (*rx_handler) (struct ntb_transport_qp *qp, void *qp_data,
			    void *data, int len);
	struct tasklet_struct rx_work;
	struct list_head rx_pend_q;
	struct list_head rx_free_q;
	spinlock_t ntb_rx_pend_q_lock;
	spinlock_t ntb_rx_free_q_lock;
	void *rx_buff_begin;
	void *rx_buff_end;
	void *rx_offset;

	void (*event_handler) (void *data, int status);
	struct delayed_work link_work;

	struct dentry *debugfs_dir;
	struct dentry *debugfs_stats;

	/* Stats */
	u64 rx_bytes;
	u64 rx_pkts;
	u64 rx_ring_empty;
	u64 rx_err_no_buf;
	u64 rx_err_oflow;
	u64 rx_err_ver;
	u64 tx_bytes;
	u64 tx_pkts;
	u64 tx_ring_full;
};

struct ntb_transport_mw {
	size_t size;
	void *virt_addr;
	dma_addr_t dma_addr;
};

struct ntb_transport_client_dev {
	struct list_head entry;
	struct device dev;
};

struct ntb_transport {
	struct list_head entry;
	struct list_head client_devs;

	struct ntb_device *ndev;
	struct ntb_transport_mw mw[NTB_NUM_MW];
	struct ntb_transport_qp *qps;
	unsigned int max_qps;
	unsigned long qp_bitmap;
	bool transport_link;
	struct delayed_work link_work;
	struct dentry *debugfs_dir;
};

enum {
	DESC_DONE_FLAG = 1 << 0,
	LINK_DOWN_FLAG = 1 << 1,
};

struct ntb_payload_header {
	u64 ver;
	unsigned int len;
	unsigned int flags;
};

enum {
	VERSION = 0,
	MW0_SZ,
	MW1_SZ,
	NUM_QPS,
	QP_LINKS,
	MAX_SPAD,
};

#define QP_TO_MW(qp)		((qp) % NTB_NUM_MW)
#define NTB_QP_DEF_NUM_ENTRIES	100
#define NTB_LINK_DOWN_TIMEOUT	10

static int ntb_match_bus(struct device *dev, struct device_driver *drv)
{
	return !strncmp(dev_name(dev), drv->name, strlen(drv->name));
}

static int ntb_client_probe(struct device *dev)
{
	const struct ntb_client *drv = container_of(dev->driver,
						    struct ntb_client, driver);
	struct pci_dev *pdev = container_of(dev->parent, struct pci_dev, dev);
	int rc = -EINVAL;

	get_device(dev);
	if (drv && drv->probe)
		rc = drv->probe(pdev);
	if (rc)
		put_device(dev);

	return rc;
}

static int ntb_client_remove(struct device *dev)
{
	const struct ntb_client *drv = container_of(dev->driver,
						    struct ntb_client, driver);
	struct pci_dev *pdev = container_of(dev->parent, struct pci_dev, dev);

	if (drv && drv->remove)
		drv->remove(pdev);

	put_device(dev);

	return 0;
}

struct bus_type ntb_bus_type = {
	.name = "ntb_bus",
	.match = ntb_match_bus,
	.probe = ntb_client_probe,
	.remove = ntb_client_remove,
};

static LIST_HEAD(ntb_transport_list);

static int __devinit ntb_bus_init(struct ntb_transport *nt)
{
	if (list_empty(&ntb_transport_list)) {
		int rc = bus_register(&ntb_bus_type);
		if (rc)
			return rc;
	}

	list_add(&nt->entry, &ntb_transport_list);

	return 0;
}

static void __devexit ntb_bus_remove(struct ntb_transport *nt)
{
	struct ntb_transport_client_dev *client_dev, *cd;

	list_for_each_entry_safe(client_dev, cd, &nt->client_devs, entry) {
		dev_err(client_dev->dev.parent, "%s still attached to bus, removing\n",
			dev_name(&client_dev->dev));
		list_del(&client_dev->entry);
		device_unregister(&client_dev->dev);
	}

	list_del(&nt->entry);

	if (list_empty(&ntb_transport_list))
		bus_unregister(&ntb_bus_type);
}

static void ntb_client_release(struct device *dev)
{
	struct ntb_transport_client_dev *client_dev;
	client_dev = container_of(dev, struct ntb_transport_client_dev, dev);

	kfree(client_dev);
}

/**
 * ntb_unregister_client_dev - Unregister NTB client device
 * @device_name: Name of NTB client device
 *
 * Unregister an NTB client device with the NTB transport layer
 */
void ntb_unregister_client_dev(char *device_name)
{
	struct ntb_transport_client_dev *client, *cd;
	struct ntb_transport *nt;

	list_for_each_entry(nt, &ntb_transport_list, entry)
		list_for_each_entry_safe(client, cd, &nt->client_devs, entry)
			if (!strncmp(dev_name(&client->dev), device_name,
				     strlen(device_name))) {
				list_del(&client->entry);
				device_unregister(&client->dev);
			}
}
EXPORT_SYMBOL_GPL(ntb_unregister_client_dev);

/**
 * ntb_register_client_dev - Register NTB client device
 * @device_name: Name of NTB client device
 *
 * Register an NTB client device with the NTB transport layer
 */
int ntb_register_client_dev(char *device_name)
{
	struct ntb_transport_client_dev *client_dev;
	struct ntb_transport *nt;
	int rc;

	list_for_each_entry(nt, &ntb_transport_list, entry) {
		struct device *dev;

		client_dev = kzalloc(sizeof(struct ntb_transport_client_dev),
				     GFP_KERNEL);
		if (!client_dev) {
			rc = -ENOMEM;
			goto err;
		}

		dev = &client_dev->dev;

		/* setup and register client devices */
		dev_set_name(dev, "%s", device_name);
		dev->bus = &ntb_bus_type;
		dev->release = ntb_client_release;
		dev->parent = &ntb_query_pdev(nt->ndev)->dev;

		rc = device_register(dev);
		if (rc) {
			kfree(client_dev);
			goto err;
		}

		list_add_tail(&client_dev->entry, &nt->client_devs);
	}

	return 0;

err:
	ntb_unregister_client_dev(device_name);

	return rc;
}
EXPORT_SYMBOL_GPL(ntb_register_client_dev);

/**
 * ntb_register_client - Register NTB client driver
 * @drv: NTB client driver to be registered
 *
 * Register an NTB client driver with the NTB transport layer
 *
 * RETURNS: An appropriate -ERRNO error value on error, or zero for success.
 */
int ntb_register_client(struct ntb_client *drv)
{
	drv->driver.bus = &ntb_bus_type;

	return driver_register(&drv->driver);
}
EXPORT_SYMBOL_GPL(ntb_register_client);

/**
 * ntb_unregister_client - Unregister NTB client driver
 * @drv: NTB client driver to be unregistered
 *
 * Unregister an NTB client driver with the NTB transport layer
 */
void ntb_unregister_client(struct ntb_client *drv)
{
	driver_unregister(&drv->driver);
}
EXPORT_SYMBOL_GPL(ntb_unregister_client);

static int debugfs_open(struct inode *inode, struct file *filp)
{
	filp->private_data = inode->i_private;
	return 0;
}

static ssize_t debugfs_read(struct file *filp, char __user *ubuf, size_t count,
			    loff_t *offp)
{
	struct ntb_transport_qp *qp;
	char buf[1024];
	ssize_t ret, out_offset, out_count;

	out_count = 1024;

	qp = filp->private_data;
	out_offset = 0;
	out_offset += snprintf(buf + out_offset, out_count - out_offset,
			       "NTB QP stats\n");
	out_offset += snprintf(buf + out_offset, out_count - out_offset,
			       "rx_bytes - \t%llu\n", qp->rx_bytes);
	out_offset += snprintf(buf + out_offset, out_count - out_offset,
			       "rx_pkts - \t%llu\n", qp->rx_pkts);
	out_offset += snprintf(buf + out_offset, out_count - out_offset,
			       "rx_ring_empty - %llu\n", qp->rx_ring_empty);
	out_offset += snprintf(buf + out_offset, out_count - out_offset,
			       "rx_err_no_buf - %llu\n", qp->rx_err_no_buf);
	out_offset += snprintf(buf + out_offset, out_count - out_offset,
			       "rx_err_oflow - \t%llu\n", qp->rx_err_oflow);
	out_offset += snprintf(buf + out_offset, out_count - out_offset,
			       "rx_err_ver - \t%llu\n", qp->rx_err_ver);
	out_offset += snprintf(buf + out_offset, out_count - out_offset,
			       "rx_buff_begin - %p\n", qp->rx_buff_begin);
	out_offset += snprintf(buf + out_offset, out_count - out_offset,
			       "rx_offset - \t%p\n", qp->rx_offset);
	out_offset += snprintf(buf + out_offset, out_count - out_offset,
			       "rx_buff_end - \t%p\n", qp->rx_buff_end);

	out_offset += snprintf(buf + out_offset, out_count - out_offset,
			       "tx_bytes - \t%llu\n", qp->tx_bytes);
	out_offset += snprintf(buf + out_offset, out_count - out_offset,
			       "tx_pkts - \t%llu\n", qp->tx_pkts);
	out_offset += snprintf(buf + out_offset, out_count - out_offset,
			       "tx_ring_full - \t%llu\n", qp->tx_ring_full);
	out_offset += snprintf(buf + out_offset, out_count - out_offset,
			       "tx_mw_begin - \t%p\n", qp->tx_mw_begin);
	out_offset += snprintf(buf + out_offset, out_count - out_offset,
			       "tx_offset - \t%p\n", qp->tx_offset);
	out_offset += snprintf(buf + out_offset, out_count - out_offset,
			       "tx_mw_end - \t%p\n", qp->tx_mw_end);

	out_offset += snprintf(buf + out_offset, out_count - out_offset,
			       "QP Link %s\n", (qp->qp_link == NTB_LINK_UP) ?
			       "Up" : "Down");

	ret = simple_read_from_buffer(ubuf, count, offp, buf, out_offset);
	return ret;
}

static const struct file_operations ntb_qp_debugfs_stats = {
	.owner = THIS_MODULE,
	.open = debugfs_open,
	.read = debugfs_read,
};

static void ntb_list_add(spinlock_t *lock, struct list_head *entry,
			 struct list_head *list)
{
	unsigned long flags;

	spin_lock_irqsave(lock, flags);
	list_add_tail(entry, list);
	spin_unlock_irqrestore(lock, flags);
}

static struct ntb_queue_entry *ntb_list_rm(spinlock_t *lock,
					   struct list_head *list)
{
	struct ntb_queue_entry *entry;
	unsigned long flags;

	spin_lock_irqsave(lock, flags);
	if (list_empty(list)) {
		entry = NULL;
		goto out;
	}
	entry = list_first_entry(list, struct ntb_queue_entry, entry);
	list_del(&entry->entry);
out:
	spin_unlock_irqrestore(lock, flags);

	return entry;
}

static void ntb_transport_setup_qp_mw(struct ntb_transport *nt,
				      unsigned int qp_num)
{
	struct ntb_transport_qp *qp = &nt->qps[qp_num];
	unsigned int size, num_qps_mw;
	u8 mw_num = QP_TO_MW(qp_num);

	WARN_ON(nt->mw[mw_num].virt_addr == 0);

	if (nt->max_qps % NTB_NUM_MW && !mw_num)
		num_qps_mw = nt->max_qps / NTB_NUM_MW +
			     (nt->max_qps % NTB_NUM_MW - mw_num);
	else
		num_qps_mw = nt->max_qps / NTB_NUM_MW;

	size = nt->mw[mw_num].size / num_qps_mw;

	qp->rx_buff_begin = nt->mw[mw_num].virt_addr +
			    (qp_num / NTB_NUM_MW * size);
	qp->rx_buff_end = qp->rx_buff_begin + size;
	qp->rx_offset = qp->rx_buff_begin;

	qp->tx_mw_begin = ntb_get_mw_vbase(nt->ndev, mw_num) +
			  (qp_num / NTB_NUM_MW * size);
	qp->tx_mw_end = qp->tx_mw_begin + size;
	qp->tx_offset = qp->tx_mw_begin;

	qp->rx_pkts = 0;
	qp->tx_pkts = 0;
}

static int ntb_set_mw(struct ntb_transport *nt, int num_mw, unsigned int size)
{
	struct ntb_transport_mw *mw = &nt->mw[num_mw];
	struct pci_dev *pdev = ntb_query_pdev(nt->ndev);
	void *offset;

	/* Alloc memory for receiving data.  Must be 4k aligned */
	mw->size = ALIGN(size, 4096);

	mw->virt_addr = dma_alloc_coherent(&pdev->dev, mw->size, &mw->dma_addr,
					   GFP_KERNEL);
	if (!mw->virt_addr) {
		dev_err(&pdev->dev, "Unable to allocate MW buffer of size %d\n",
			(int) mw->size);
		return -ENOMEM;
	}

	/* setup the hdr offsets with 0's */
	for (offset = mw->virt_addr + transport_mtu -
		      sizeof(struct ntb_payload_header);
	     offset < mw->virt_addr + size; offset += transport_mtu)
		memset(offset, 0, sizeof(struct ntb_payload_header));

	/* Notify HW the memory location of the receive buffer */
	ntb_set_mw_addr(nt->ndev, num_mw, mw->dma_addr);

	return 0;
}

static void ntb_qp_link_down(struct ntb_transport_qp *qp)
{
	struct ntb_transport *nt = qp->transport;
	struct pci_dev *pdev = ntb_query_pdev(nt->ndev);

	if (qp->qp_link == NTB_LINK_DOWN) {
		cancel_delayed_work_sync(&qp->link_work);
		return;
	}

	if (qp->event_handler)
		qp->event_handler(qp->cb_data, NTB_LINK_DOWN);

	dev_info(&pdev->dev, "qp %d: Link Down\n", qp->qp_num);
	qp->qp_link = NTB_LINK_DOWN;

	if (nt->transport_link == NTB_LINK_UP)
		schedule_delayed_work(&qp->link_work,
				      msecs_to_jiffies(NTB_LINK_DOWN_TIMEOUT));
}

static void ntb_transport_conn_down(struct ntb_transport *nt)
{
	int i;

	if (nt->transport_link == NTB_LINK_DOWN)
		cancel_delayed_work_sync(&nt->link_work);
	else
		nt->transport_link = NTB_LINK_DOWN;

	/* Pass along the info to any clients */
	for (i = 0; i < nt->max_qps; i++)
		if (!test_bit(i, &nt->qp_bitmap))
			ntb_qp_link_down(&nt->qps[i]);

	/* The scratchpad registers keep the values if the remote side
	 * goes down, blast them now to give them a sane value the next
	 * time they are accessed
	 */
	for (i = 0; i < MAX_SPAD; i++)
		ntb_write_local_spad(nt->ndev, i, 0);
}

static void ntb_transport_event_callback(void *data, enum ntb_hw_event event)
{
	struct ntb_transport *nt = data;

	switch (event) {
	case NTB_EVENT_HW_LINK_UP:
		schedule_delayed_work(&nt->link_work, 0);
		break;
	case NTB_EVENT_HW_LINK_DOWN:
		ntb_transport_conn_down(nt);
		break;
	default:
		BUG();
	}
}

static void ntb_transport_link_work(struct work_struct *work)
{
	struct ntb_transport *nt = container_of(work, struct ntb_transport,
						link_work.work);
	struct ntb_device *ndev = nt->ndev;
	struct pci_dev *pdev = ntb_query_pdev(ndev);
	u32 val;
	int rc, i;

	/* send the local info */
	rc = ntb_write_remote_spad(ndev, VERSION, NTB_TRANSPORT_VERSION);
	if (rc) {
		dev_err(&pdev->dev, "Error writing %x to remote spad %d\n",
			0, VERSION);
		goto out;
	}

	rc = ntb_write_remote_spad(ndev, MW0_SZ,
ntb_get_mw_size(ndev, 0)); 588 + if (rc) { 589 + dev_err(&pdev->dev, "Error writing %x to remote spad %d\n", 590 + (u32) ntb_get_mw_size(ndev, 0), MW0_SZ); 591 + goto out; 592 + } 593 + 594 + rc = ntb_write_remote_spad(ndev, MW1_SZ, ntb_get_mw_size(ndev, 1)); 595 + if (rc) { 596 + dev_err(&pdev->dev, "Error writing %x to remote spad %d\n", 597 + (u32) ntb_get_mw_size(ndev, 1), MW1_SZ); 598 + goto out; 599 + } 600 + 601 + rc = ntb_write_remote_spad(ndev, NUM_QPS, nt->max_qps); 602 + if (rc) { 603 + dev_err(&pdev->dev, "Error writing %x to remote spad %d\n", 604 + nt->max_qps, NUM_QPS); 605 + goto out; 606 + } 607 + 608 + rc = ntb_read_local_spad(nt->ndev, QP_LINKS, &val); 609 + if (rc) { 610 + dev_err(&pdev->dev, "Error reading spad %d\n", QP_LINKS); 611 + goto out; 612 + } 613 + 614 + rc = ntb_write_remote_spad(ndev, QP_LINKS, val); 615 + if (rc) { 616 + dev_err(&pdev->dev, "Error writing %x to remote spad %d\n", 617 + val, QP_LINKS); 618 + goto out; 619 + } 620 + 621 + /* Query the remote side for its info */ 622 + rc = ntb_read_remote_spad(ndev, VERSION, &val); 623 + if (rc) { 624 + dev_err(&pdev->dev, "Error reading remote spad %d\n", VERSION); 625 + goto out; 626 + } 627 + 628 + if (val != NTB_TRANSPORT_VERSION) 629 + goto out; 630 + dev_dbg(&pdev->dev, "Remote version = %d\n", val); 631 + 632 + rc = ntb_read_remote_spad(ndev, NUM_QPS, &val); 633 + if (rc) { 634 + dev_err(&pdev->dev, "Error reading remote spad %d\n", NUM_QPS); 635 + goto out; 636 + } 637 + 638 + if (val != nt->max_qps) 639 + goto out; 640 + dev_dbg(&pdev->dev, "Remote max number of qps = %d\n", val); 641 + 642 + rc = ntb_read_remote_spad(ndev, MW0_SZ, &val); 643 + if (rc) { 644 + dev_err(&pdev->dev, "Error reading remote spad %d\n", MW0_SZ); 645 + goto out; 646 + } 647 + 648 + if (!val) 649 + goto out; 650 + dev_dbg(&pdev->dev, "Remote MW0 size = %d\n", val); 651 + 652 + rc = ntb_set_mw(nt, 0, val); 653 + if (rc) 654 + goto out; 655 + 656 + rc = ntb_read_remote_spad(ndev, MW1_SZ, &val); 657 + 
if (rc) { 658 + dev_err(&pdev->dev, "Error reading remote spad %d\n", MW1_SZ); 659 + goto out; 660 + } 661 + 662 + if (!val) 663 + goto out; 664 + dev_dbg(&pdev->dev, "Remote MW1 size = %d\n", val); 665 + 666 + rc = ntb_set_mw(nt, 1, val); 667 + if (rc) 668 + goto out; 669 + 670 + nt->transport_link = NTB_LINK_UP; 671 + 672 + for (i = 0; i < nt->max_qps; i++) { 673 + struct ntb_transport_qp *qp = &nt->qps[i]; 674 + 675 + ntb_transport_setup_qp_mw(nt, i); 676 + 677 + if (qp->client_ready == NTB_LINK_UP) 678 + schedule_delayed_work(&qp->link_work, 0); 679 + } 680 + 681 + return; 682 + 683 + out: 684 + if (ntb_hw_link_status(ndev)) 685 + schedule_delayed_work(&nt->link_work, 686 + msecs_to_jiffies(NTB_LINK_DOWN_TIMEOUT)); 687 + } 688 + 689 + static void ntb_qp_link_work(struct work_struct *work) 690 + { 691 + struct ntb_transport_qp *qp = container_of(work, 692 + struct ntb_transport_qp, 693 + link_work.work); 694 + struct pci_dev *pdev = ntb_query_pdev(qp->ndev); 695 + struct ntb_transport *nt = qp->transport; 696 + int rc, val; 697 + 698 + WARN_ON(nt->transport_link != NTB_LINK_UP); 699 + 700 + rc = ntb_read_local_spad(nt->ndev, QP_LINKS, &val); 701 + if (rc) { 702 + dev_err(&pdev->dev, "Error reading spad %d\n", QP_LINKS); 703 + return; 704 + } 705 + 706 + rc = ntb_write_remote_spad(nt->ndev, QP_LINKS, val | 1 << qp->qp_num); 707 + if (rc) 708 + dev_err(&pdev->dev, "Error writing %x to remote spad %d\n", 709 + val | 1 << qp->qp_num, QP_LINKS); 710 + 711 + /* query remote spad for qp ready bits */ 712 + rc = ntb_read_remote_spad(nt->ndev, QP_LINKS, &val); 713 + if (rc) 714 + dev_err(&pdev->dev, "Error reading remote spad %d\n", QP_LINKS); 715 + 716 + dev_dbg(&pdev->dev, "Remote QP link status = %x\n", val); 717 + 718 + /* See if the remote side is up */ 719 + if (1 << qp->qp_num & val) { 720 + qp->qp_link = NTB_LINK_UP; 721 + 722 + dev_info(&pdev->dev, "qp %d: Link Up\n", qp->qp_num); 723 + if (qp->event_handler) 724 + qp->event_handler(qp->cb_data, NTB_LINK_UP); 
725 + } else if (nt->transport_link == NTB_LINK_UP) 726 + schedule_delayed_work(&qp->link_work, 727 + msecs_to_jiffies(NTB_LINK_DOWN_TIMEOUT)); 728 + } 729 + 730 + static void ntb_transport_init_queue(struct ntb_transport *nt, 731 + unsigned int qp_num) 732 + { 733 + struct ntb_transport_qp *qp; 734 + 735 + qp = &nt->qps[qp_num]; 736 + qp->qp_num = qp_num; 737 + qp->transport = nt; 738 + qp->ndev = nt->ndev; 739 + qp->qp_link = NTB_LINK_DOWN; 740 + qp->client_ready = NTB_LINK_DOWN; 741 + qp->event_handler = NULL; 742 + 743 + if (nt->debugfs_dir) { 744 + char debugfs_name[8]; 745 + 746 + snprintf(debugfs_name, sizeof(debugfs_name), "qp%d", qp_num); 747 + qp->debugfs_dir = debugfs_create_dir(debugfs_name, 748 + nt->debugfs_dir); 749 + 750 + qp->debugfs_stats = debugfs_create_file("stats", S_IRUSR, 751 + qp->debugfs_dir, qp, 752 + &ntb_qp_debugfs_stats); 753 + } 754 + 755 + INIT_DELAYED_WORK(&qp->link_work, ntb_qp_link_work); 756 + 757 + spin_lock_init(&qp->ntb_rx_pend_q_lock); 758 + spin_lock_init(&qp->ntb_rx_free_q_lock); 759 + spin_lock_init(&qp->ntb_tx_free_q_lock); 760 + 761 + INIT_LIST_HEAD(&qp->rx_pend_q); 762 + INIT_LIST_HEAD(&qp->rx_free_q); 763 + INIT_LIST_HEAD(&qp->tx_free_q); 764 + } 765 + 766 + int ntb_transport_init(struct pci_dev *pdev) 767 + { 768 + struct ntb_transport *nt; 769 + int rc, i; 770 + 771 + nt = kzalloc(sizeof(struct ntb_transport), GFP_KERNEL); 772 + if (!nt) 773 + return -ENOMEM; 774 + 775 + if (debugfs_initialized()) 776 + nt->debugfs_dir = debugfs_create_dir(KBUILD_MODNAME, NULL); 777 + else 778 + nt->debugfs_dir = NULL; 779 + 780 + nt->ndev = ntb_register_transport(pdev, nt); 781 + if (!nt->ndev) { 782 + rc = -EIO; 783 + goto err; 784 + } 785 + 786 + nt->max_qps = min(nt->ndev->max_cbs, max_num_clients); 787 + 788 + nt->qps = kcalloc(nt->max_qps, sizeof(struct ntb_transport_qp), 789 + GFP_KERNEL); 790 + if (!nt->qps) { 791 + rc = -ENOMEM; 792 + goto err1; 793 + } 794 + 795 + nt->qp_bitmap = ((u64) 1 << nt->max_qps) - 1; 796 + 797 + for (i = 0; i <
nt->max_qps; i++) 798 + ntb_transport_init_queue(nt, i); 799 + 800 + INIT_DELAYED_WORK(&nt->link_work, ntb_transport_link_work); 801 + 802 + rc = ntb_register_event_callback(nt->ndev, 803 + ntb_transport_event_callback); 804 + if (rc) 805 + goto err2; 806 + 807 + INIT_LIST_HEAD(&nt->client_devs); 808 + rc = ntb_bus_init(nt); 809 + if (rc) 810 + goto err3; 811 + 812 + if (ntb_hw_link_status(nt->ndev)) 813 + schedule_delayed_work(&nt->link_work, 0); 814 + 815 + return 0; 816 + 817 + err3: 818 + ntb_unregister_event_callback(nt->ndev); 819 + err2: 820 + kfree(nt->qps); 821 + err1: 822 + ntb_unregister_transport(nt->ndev); 823 + err: 824 + debugfs_remove_recursive(nt->debugfs_dir); 825 + kfree(nt); 826 + return rc; 827 + } 828 + 829 + void ntb_transport_free(void *transport) 830 + { 831 + struct ntb_transport *nt = transport; 832 + struct pci_dev *pdev; 833 + int i; 834 + 835 + nt->transport_link = NTB_LINK_DOWN; 836 + 837 + /* verify that all the qp's are freed */ 838 + for (i = 0; i < nt->max_qps; i++) 839 + if (!test_bit(i, &nt->qp_bitmap)) 840 + ntb_transport_free_queue(&nt->qps[i]); 841 + 842 + ntb_bus_remove(nt); 843 + 844 + cancel_delayed_work_sync(&nt->link_work); 845 + 846 + debugfs_remove_recursive(nt->debugfs_dir); 847 + 848 + ntb_unregister_event_callback(nt->ndev); 849 + 850 + pdev = ntb_query_pdev(nt->ndev); 851 + 852 + for (i = 0; i < NTB_NUM_MW; i++) 853 + if (nt->mw[i].virt_addr) 854 + dma_free_coherent(&pdev->dev, nt->mw[i].size, 855 + nt->mw[i].virt_addr, 856 + nt->mw[i].dma_addr); 857 + 858 + kfree(nt->qps); 859 + ntb_unregister_transport(nt->ndev); 860 + kfree(nt); 861 + } 862 + 863 + static void ntb_rx_copy_task(struct ntb_transport_qp *qp, 864 + struct ntb_queue_entry *entry, void *offset) 865 + { 866 + 867 + struct ntb_payload_header *hdr; 868 + 869 + BUG_ON(offset < qp->rx_buff_begin || 870 + offset + transport_mtu >= qp->rx_buff_end); 871 + 872 + hdr = offset + transport_mtu - sizeof(struct ntb_payload_header); 873 + entry->len = hdr->len; 874 
+ 875 + memcpy(entry->buf, offset, entry->len); 876 + 877 + /* Ensure that the data is fully copied out before clearing the flag */ 878 + wmb(); 879 + hdr->flags = 0; 880 + 881 + if (qp->rx_handler && qp->client_ready == NTB_LINK_UP) 882 + qp->rx_handler(qp, qp->cb_data, entry->cb_data, entry->len); 883 + 884 + ntb_list_add(&qp->ntb_rx_free_q_lock, &entry->entry, &qp->rx_free_q); 885 + } 886 + 887 + static int ntb_process_rxc(struct ntb_transport_qp *qp) 888 + { 889 + struct ntb_payload_header *hdr; 890 + struct ntb_queue_entry *entry; 891 + void *offset; 892 + 893 + entry = ntb_list_rm(&qp->ntb_rx_pend_q_lock, &qp->rx_pend_q); 894 + if (!entry) { 895 + hdr = qp->rx_offset + transport_mtu - 896 + sizeof(struct ntb_payload_header); 897 + dev_dbg(&ntb_query_pdev(qp->ndev)->dev, 898 + "no buffer - HDR ver %llu, len %d, flags %x\n", 899 + hdr->ver, hdr->len, hdr->flags); 900 + qp->rx_err_no_buf++; 901 + return -ENOMEM; 902 + } 903 + 904 + offset = qp->rx_offset; 905 + hdr = offset + transport_mtu - sizeof(struct ntb_payload_header); 906 + 907 + if (!(hdr->flags & DESC_DONE_FLAG)) { 908 + ntb_list_add(&qp->ntb_rx_pend_q_lock, &entry->entry, 909 + &qp->rx_pend_q); 910 + qp->rx_ring_empty++; 911 + return -EAGAIN; 912 + } 913 + 914 + if (hdr->ver != qp->rx_pkts) { 915 + dev_dbg(&ntb_query_pdev(qp->ndev)->dev, 916 + "qp %d: version mismatch, expected %llu - got %llu\n", 917 + qp->qp_num, qp->rx_pkts, hdr->ver); 918 + ntb_list_add(&qp->ntb_rx_pend_q_lock, &entry->entry, 919 + &qp->rx_pend_q); 920 + qp->rx_err_ver++; 921 + return -EIO; 922 + } 923 + 924 + if (hdr->flags & LINK_DOWN_FLAG) { 925 + ntb_qp_link_down(qp); 926 + 927 + ntb_list_add(&qp->ntb_rx_pend_q_lock, &entry->entry, 928 + &qp->rx_pend_q); 929 + 930 + /* Ensure that the data is fully copied out before clearing the 931 + * done flag 932 + */ 933 + wmb(); 934 + hdr->flags = 0; 935 + goto out; 936 + } 937 + 938 + dev_dbg(&ntb_query_pdev(qp->ndev)->dev, 939 + "rx offset %p, ver %llu - %d payload received, buf size %d\n",
940 + qp->rx_offset, hdr->ver, hdr->len, entry->len); 941 + 942 + if (hdr->len <= entry->len) 943 + ntb_rx_copy_task(qp, entry, offset); 944 + else { 945 + ntb_list_add(&qp->ntb_rx_pend_q_lock, &entry->entry, 946 + &qp->rx_pend_q); 947 + 948 + /* Ensure that the data is fully copied out before clearing the 949 + * done flag 950 + */ 951 + wmb(); 952 + hdr->flags = 0; 953 + qp->rx_err_oflow++; 954 + dev_dbg(&ntb_query_pdev(qp->ndev)->dev, 955 + "RX overflow! Wanted %d got %d\n", 956 + hdr->len, entry->len); 957 + } 958 + 959 + qp->rx_bytes += hdr->len; 960 + qp->rx_pkts++; 961 + 962 + out: 963 + qp->rx_offset += transport_mtu; 964 + if (qp->rx_offset + transport_mtu >= qp->rx_buff_end) 965 + qp->rx_offset = qp->rx_buff_begin; 966 + 967 + return 0; 968 + } 969 + 970 + static void ntb_transport_rx(unsigned long data) 971 + { 972 + struct ntb_transport_qp *qp = (struct ntb_transport_qp *)data; 973 + int rc; 974 + 975 + do { 976 + rc = ntb_process_rxc(qp); 977 + } while (!rc); 978 + } 979 + 980 + static void ntb_transport_rxc_db(void *data, int db_num) 981 + { 982 + struct ntb_transport_qp *qp = data; 983 + 984 + dev_dbg(&ntb_query_pdev(qp->ndev)->dev, "%s: doorbell %d received\n", 985 + __func__, db_num); 986 + 987 + tasklet_schedule(&qp->rx_work); 988 + } 989 + 990 + static void ntb_tx_copy_task(struct ntb_transport_qp *qp, 991 + struct ntb_queue_entry *entry, 992 + void *offset) 993 + { 994 + struct ntb_payload_header *hdr; 995 + 996 + BUG_ON(offset < qp->tx_mw_begin || 997 + offset + transport_mtu >= qp->tx_mw_end); 998 + 999 + memcpy_toio(offset, entry->buf, entry->len); 1000 + 1001 + hdr = offset + transport_mtu - sizeof(struct ntb_payload_header); 1002 + hdr->len = entry->len; 1003 + hdr->ver = qp->tx_pkts; 1004 + 1005 + /* Ensure that the data is fully copied out before setting the flag */ 1006 + mmiowb(); 1007 + hdr->flags = entry->flags | DESC_DONE_FLAG; 1008 + 1009 + ntb_ring_sdb(qp->ndev, qp->qp_num); 1010 + 1011 + /* The entry length can only be zero if the 
packet is intended to be a 1012 + * "link down" or similar. Since no payload is being sent in these 1013 + * cases, there is nothing to add to the completion queue. 1014 + */ 1015 + if (entry->len > 0) { 1016 + qp->tx_bytes += entry->len; 1017 + 1018 + if (qp->tx_handler) 1019 + qp->tx_handler(qp, qp->cb_data, entry->cb_data, 1020 + entry->len); 1021 + } 1022 + 1023 + ntb_list_add(&qp->ntb_tx_free_q_lock, &entry->entry, &qp->tx_free_q); 1024 + } 1025 + 1026 + static int ntb_process_tx(struct ntb_transport_qp *qp, 1027 + struct ntb_queue_entry *entry) 1028 + { 1029 + struct ntb_payload_header *hdr; 1030 + void *offset; 1031 + 1032 + offset = qp->tx_offset; 1033 + hdr = offset + transport_mtu - sizeof(struct ntb_payload_header); 1034 + 1035 + dev_dbg(&ntb_query_pdev(qp->ndev)->dev, "%lld - offset %p, tx %p, entry len %d flags %x buff %p\n", 1036 + qp->tx_pkts, offset, qp->tx_offset, entry->len, entry->flags, 1037 + entry->buf); 1038 + if (hdr->flags) { 1039 + qp->tx_ring_full++; 1040 + return -EAGAIN; 1041 + } 1042 + 1043 + if (entry->len > transport_mtu - sizeof(struct ntb_payload_header)) { 1044 + if (qp->tx_handler) 1045 + qp->tx_handler(qp, qp->cb_data, NULL, -EIO); 1046 + 1047 + ntb_list_add(&qp->ntb_tx_free_q_lock, &entry->entry, 1048 + &qp->tx_free_q); 1049 + return 0; 1050 + } 1051 + 1052 + ntb_tx_copy_task(qp, entry, offset); 1053 + 1054 + qp->tx_offset += transport_mtu; 1055 + if (qp->tx_offset + transport_mtu >= qp->tx_mw_end) 1056 + qp->tx_offset = qp->tx_mw_begin; 1057 + 1058 + qp->tx_pkts++; 1059 + 1060 + return 0; 1061 + } 1062 + 1063 + static void ntb_send_link_down(struct ntb_transport_qp *qp) 1064 + { 1065 + struct pci_dev *pdev = ntb_query_pdev(qp->ndev); 1066 + struct ntb_queue_entry *entry; 1067 + int i, rc; 1068 + 1069 + if (qp->qp_link == NTB_LINK_DOWN) 1070 + return; 1071 + 1072 + qp->qp_link = NTB_LINK_DOWN; 1073 + dev_info(&pdev->dev, "qp %d: Link Down\n", qp->qp_num); 1074 + 1075 + for (i = 0; i < NTB_LINK_DOWN_TIMEOUT; i++) { 1076 + entry
= ntb_list_rm(&qp->ntb_tx_free_q_lock, 1077 + &qp->tx_free_q); 1078 + if (entry) 1079 + break; 1080 + msleep(100); 1081 + } 1082 + 1083 + if (!entry) 1084 + return; 1085 + 1086 + entry->cb_data = NULL; 1087 + entry->buf = NULL; 1088 + entry->len = 0; 1089 + entry->flags = LINK_DOWN_FLAG; 1090 + 1091 + rc = ntb_process_tx(qp, entry); 1092 + if (rc) 1093 + dev_err(&pdev->dev, "ntb: QP%d unable to send linkdown msg\n", 1094 + qp->qp_num); 1095 + } 1096 + 1097 + /** 1098 + * ntb_transport_create_queue - Create a new NTB transport layer queue 1099 + * @rx_handler: receive callback function 1100 + * @tx_handler: transmit callback function 1101 + * @event_handler: event callback function 1102 + * 1103 + * Create a new NTB transport layer queue and provide the queue with a callback 1104 + * routine for both transmit and receive. The receive callback routine will be 1105 + * used to pass up data when the transport has received it on the queue. The 1106 + * transmit callback routine will be called when the transport has completed the 1107 + * transmission of the data on the queue and the data is ready to be freed. 1108 + * 1109 + * RETURNS: pointer to newly created ntb_queue, NULL on error. 
1110 + */ 1111 + struct ntb_transport_qp * 1112 + ntb_transport_create_queue(void *data, struct pci_dev *pdev, 1113 + const struct ntb_queue_handlers *handlers) 1114 + { 1115 + struct ntb_queue_entry *entry; 1116 + struct ntb_transport_qp *qp; 1117 + struct ntb_transport *nt; 1118 + unsigned int free_queue; 1119 + int rc, i; 1120 + 1121 + nt = ntb_find_transport(pdev); 1122 + if (!nt) 1123 + goto err; 1124 + 1125 + free_queue = ffs(nt->qp_bitmap); 1126 + if (!free_queue) 1127 + goto err; 1128 + 1129 + /* decrement free_queue to make it zero based */ 1130 + free_queue--; 1131 + 1132 + clear_bit(free_queue, &nt->qp_bitmap); 1133 + 1134 + qp = &nt->qps[free_queue]; 1135 + qp->cb_data = data; 1136 + qp->rx_handler = handlers->rx_handler; 1137 + qp->tx_handler = handlers->tx_handler; 1138 + qp->event_handler = handlers->event_handler; 1139 + 1140 + for (i = 0; i < NTB_QP_DEF_NUM_ENTRIES; i++) { 1141 + entry = kzalloc(sizeof(struct ntb_queue_entry), GFP_ATOMIC); 1142 + if (!entry) 1143 + goto err1; 1144 + 1145 + ntb_list_add(&qp->ntb_rx_free_q_lock, &entry->entry, 1146 + &qp->rx_free_q); 1147 + } 1148 + 1149 + for (i = 0; i < NTB_QP_DEF_NUM_ENTRIES; i++) { 1150 + entry = kzalloc(sizeof(struct ntb_queue_entry), GFP_ATOMIC); 1151 + if (!entry) 1152 + goto err2; 1153 + 1154 + ntb_list_add(&qp->ntb_tx_free_q_lock, &entry->entry, 1155 + &qp->tx_free_q); 1156 + } 1157 + 1158 + tasklet_init(&qp->rx_work, ntb_transport_rx, (unsigned long) qp); 1159 + 1160 + rc = ntb_register_db_callback(qp->ndev, free_queue, qp, 1161 + ntb_transport_rxc_db); 1162 + if (rc) 1163 + goto err3; 1164 + 1165 + dev_info(&pdev->dev, "NTB Transport QP %d created\n", qp->qp_num); 1166 + 1167 + return qp; 1168 + 1169 + err3: 1170 + tasklet_disable(&qp->rx_work); 1171 + err2: 1172 + while ((entry = 1173 + ntb_list_rm(&qp->ntb_tx_free_q_lock, &qp->tx_free_q))) 1174 + kfree(entry); 1175 + err1: 1176 + while ((entry = 1177 + ntb_list_rm(&qp->ntb_rx_free_q_lock, &qp->rx_free_q))) 1178 + kfree(entry); 1179 + 
set_bit(free_queue, &nt->qp_bitmap); 1180 + err: 1181 + return NULL; 1182 + } 1183 + EXPORT_SYMBOL_GPL(ntb_transport_create_queue); 1184 + 1185 + /** 1186 + * ntb_transport_free_queue - Frees NTB transport queue 1187 + * @qp: NTB queue to be freed 1188 + * 1189 + * Frees NTB transport queue 1190 + */ 1191 + void ntb_transport_free_queue(struct ntb_transport_qp *qp) 1192 + { 1193 + struct pci_dev *pdev; 1194 + struct ntb_queue_entry *entry; 1195 + 1196 + if (!qp) 1197 + return; 1198 + 1199 + pdev = ntb_query_pdev(qp->ndev); 1200 + cancel_delayed_work_sync(&qp->link_work); 1201 + ntb_unregister_db_callback(qp->ndev, qp->qp_num); 1202 + tasklet_disable(&qp->rx_work); 1203 + 1204 + while ((entry = 1205 + ntb_list_rm(&qp->ntb_rx_free_q_lock, &qp->rx_free_q))) 1206 + kfree(entry); 1207 + 1208 + while ((entry = 1209 + ntb_list_rm(&qp->ntb_rx_pend_q_lock, &qp->rx_pend_q))) { 1210 + dev_warn(&pdev->dev, "Freeing item from a non-empty queue\n"); 1211 + kfree(entry); 1212 + } 1213 + 1214 + while ((entry = 1215 + ntb_list_rm(&qp->ntb_tx_free_q_lock, &qp->tx_free_q))) 1216 + kfree(entry); 1217 + 1218 + set_bit(qp->qp_num, &qp->transport->qp_bitmap); 1219 + 1220 + dev_info(&pdev->dev, "NTB Transport QP %d freed\n", qp->qp_num); 1221 + } 1222 + EXPORT_SYMBOL_GPL(ntb_transport_free_queue); 1223 + 1224 + /** 1225 + * ntb_transport_rx_remove - Dequeues enqueued rx packet 1226 + * @qp: NTB transport layer queue the buffer is dequeued from 1227 + * @len: pointer to variable to write the dequeued buffer's length to 1228 + * 1229 + * Dequeues unused buffers from receive queue. Should only be used during 1230 + * shutdown of qp. 1231 + * 1232 + * RETURNS: a pointer to the dequeued buffer, or NULL on error.
1233 + */ 1234 + void *ntb_transport_rx_remove(struct ntb_transport_qp *qp, unsigned int *len) 1235 + { 1236 + struct ntb_queue_entry *entry; 1237 + void *buf; 1238 + 1239 + if (!qp || qp->client_ready == NTB_LINK_UP) 1240 + return NULL; 1241 + 1242 + entry = ntb_list_rm(&qp->ntb_rx_pend_q_lock, &qp->rx_pend_q); 1243 + if (!entry) 1244 + return NULL; 1245 + 1246 + buf = entry->cb_data; 1247 + *len = entry->len; 1248 + 1249 + ntb_list_add(&qp->ntb_rx_free_q_lock, &entry->entry, 1250 + &qp->rx_free_q); 1251 + 1252 + return buf; 1253 + } 1254 + EXPORT_SYMBOL_GPL(ntb_transport_rx_remove); 1255 + 1256 + /** 1257 + * ntb_transport_rx_enqueue - Enqueue a new NTB queue entry 1258 + * @qp: NTB transport layer queue the entry is to be enqueued on 1259 + * @cb: per buffer pointer for callback function to use 1260 + * @data: pointer to data buffer that incoming packets will be copied into 1261 + * @len: length of the data buffer 1262 + * 1263 + * Enqueue a new receive buffer onto the transport queue into which an NTB 1264 + * payload can be received. 1265 + * 1266 + * RETURNS: An appropriate -ERRNO error value on error, or zero for success.
1267 + */ 1268 + int ntb_transport_rx_enqueue(struct ntb_transport_qp *qp, void *cb, void *data, 1269 + unsigned int len) 1270 + { 1271 + struct ntb_queue_entry *entry; 1272 + 1273 + if (!qp) 1274 + return -EINVAL; 1275 + 1276 + entry = ntb_list_rm(&qp->ntb_rx_free_q_lock, &qp->rx_free_q); 1277 + if (!entry) 1278 + return -ENOMEM; 1279 + 1280 + entry->cb_data = cb; 1281 + entry->buf = data; 1282 + entry->len = len; 1283 + 1284 + ntb_list_add(&qp->ntb_rx_pend_q_lock, &entry->entry, 1285 + &qp->rx_pend_q); 1286 + 1287 + return 0; 1288 + } 1289 + EXPORT_SYMBOL_GPL(ntb_transport_rx_enqueue); 1290 + 1291 + /** 1292 + * ntb_transport_tx_enqueue - Enqueue a new NTB queue entry 1293 + * @qp: NTB transport layer queue the entry is to be enqueued on 1294 + * @cb: per buffer pointer for callback function to use 1295 + * @data: pointer to data buffer that will be sent 1296 + * @len: length of the data buffer 1297 + * 1298 + * Enqueue a new transmit buffer onto the transport queue from which an NTB 1299 + * payload will be transmitted. This assumes that a lock is being held to 1300 + * serialize access to the qp. 1301 + * 1302 + * RETURNS: An appropriate -ERRNO error value on error, or zero for success.
1303 + */ 1304 + int ntb_transport_tx_enqueue(struct ntb_transport_qp *qp, void *cb, void *data, 1305 + unsigned int len) 1306 + { 1307 + struct ntb_queue_entry *entry; 1308 + int rc; 1309 + 1310 + if (!qp || qp->qp_link != NTB_LINK_UP || !len) 1311 + return -EINVAL; 1312 + 1313 + entry = ntb_list_rm(&qp->ntb_tx_free_q_lock, &qp->tx_free_q); 1314 + if (!entry) 1315 + return -ENOMEM; 1316 + 1317 + entry->cb_data = cb; 1318 + entry->buf = data; 1319 + entry->len = len; 1320 + entry->flags = 0; 1321 + 1322 + rc = ntb_process_tx(qp, entry); 1323 + if (rc) 1324 + ntb_list_add(&qp->ntb_tx_free_q_lock, &entry->entry, 1325 + &qp->tx_free_q); 1326 + 1327 + return rc; 1328 + } 1329 + EXPORT_SYMBOL_GPL(ntb_transport_tx_enqueue); 1330 + 1331 + /** 1332 + * ntb_transport_link_up - Notify NTB transport of client readiness to use queue 1333 + * @qp: NTB transport layer queue to be enabled 1334 + * 1335 + * Notify NTB transport layer of client readiness to use queue 1336 + */ 1337 + void ntb_transport_link_up(struct ntb_transport_qp *qp) 1338 + { 1339 + if (!qp) 1340 + return; 1341 + 1342 + qp->client_ready = NTB_LINK_UP; 1343 + 1344 + if (qp->transport->transport_link == NTB_LINK_UP) 1345 + schedule_delayed_work(&qp->link_work, 0); 1346 + } 1347 + EXPORT_SYMBOL_GPL(ntb_transport_link_up); 1348 + 1349 + /** 1350 + * ntb_transport_link_down - Notify NTB transport to no longer enqueue data 1351 + * @qp: NTB transport layer queue to be disabled 1352 + * 1353 + * Notify NTB transport layer of client's desire to no longer receive data on 1354 + * transport queue specified. It is the client's responsibility to ensure all 1355 + * entries on queue are purged or otherwise handled appropriately.
1356 + */ 1357 + void ntb_transport_link_down(struct ntb_transport_qp *qp) 1358 + { 1359 + struct pci_dev *pdev; 1360 + int rc, val; 1361 + 1362 + if (!qp) 1363 + return; 1364 + 1365 + pdev = ntb_query_pdev(qp->ndev); 1366 + qp->client_ready = NTB_LINK_DOWN; 1367 + rc = ntb_read_local_spad(qp->ndev, QP_LINKS, &val); 1368 + if (rc) { 1369 + dev_err(&pdev->dev, "Error reading spad %d\n", QP_LINKS); 1370 + return; 1371 + } 1372 + 1373 + rc = ntb_write_remote_spad(qp->ndev, QP_LINKS, 1374 + val & ~(1 << qp->qp_num)); 1375 + if (rc) 1376 + dev_err(&pdev->dev, "Error writing %x to remote spad %d\n", 1377 + val & ~(1 << qp->qp_num), QP_LINKS); 1378 + 1379 + if (qp->qp_link == NTB_LINK_UP) 1380 + ntb_send_link_down(qp); 1381 + else 1382 + cancel_delayed_work_sync(&qp->link_work); 1383 + } 1384 + EXPORT_SYMBOL_GPL(ntb_transport_link_down); 1385 + 1386 + /** 1387 + * ntb_transport_link_query - Query transport link state 1388 + * @qp: NTB transport layer queue to be queried 1389 + * 1390 + * Query connectivity to the remote system of the NTB transport queue 1391 + * 1392 + * RETURNS: true for link up or false for link down 1393 + */ 1394 + bool ntb_transport_link_query(struct ntb_transport_qp *qp) 1395 + { 1396 + return qp->qp_link == NTB_LINK_UP; 1397 + } 1398 + EXPORT_SYMBOL_GPL(ntb_transport_link_query); 1399 + 1400 + /** 1401 + * ntb_transport_qp_num - Query the qp number 1402 + * @qp: NTB transport layer queue to be queried 1403 + * 1404 + * Query qp number of the NTB transport queue 1405 + * 1406 + * RETURNS: a zero based number specifying the qp number 1407 + */ 1408 + unsigned char ntb_transport_qp_num(struct ntb_transport_qp *qp) 1409 + { 1410 + return qp->qp_num; 1411 + } 1412 + EXPORT_SYMBOL_GPL(ntb_transport_qp_num); 1413 + 1414 + /** 1415 + * ntb_transport_max_size - Query the max payload size of a qp 1416 + * @qp: NTB transport layer queue to be queried 1417 + * 1418 + * Query the maximum payload size permissible on the given qp 1419 + * 1420 + * RETURNS: the max
payload size of a qp 1421 + */ 1422 + unsigned int 1423 + ntb_transport_max_size(__attribute__((unused)) struct ntb_transport_qp *qp) 1424 + { 1425 + return transport_mtu - sizeof(struct ntb_payload_header); 1426 + } 1427 + EXPORT_SYMBOL_GPL(ntb_transport_max_size);
include/linux/ntb.h
··· 1 + /* 2 + * This file is provided under a dual BSD/GPLv2 license. When using or 3 + * redistributing this file, you may do so under either license. 4 + * 5 + * GPL LICENSE SUMMARY 6 + * 7 + * Copyright(c) 2012 Intel Corporation. All rights reserved. 8 + * 9 + * This program is free software; you can redistribute it and/or modify 10 + * it under the terms of version 2 of the GNU General Public License as 11 + * published by the Free Software Foundation. 12 + * 13 + * BSD LICENSE 14 + * 15 + * Copyright(c) 2012 Intel Corporation. All rights reserved. 16 + * 17 + * Redistribution and use in source and binary forms, with or without 18 + * modification, are permitted provided that the following conditions 19 + * are met: 20 + * 21 + * * Redistributions of source code must retain the above copyright 22 + * notice, this list of conditions and the following disclaimer. 23 + * * Redistributions in binary form must reproduce the above copy 24 + * notice, this list of conditions and the following disclaimer in 25 + * the documentation and/or other materials provided with the 26 + * distribution. 27 + * * Neither the name of Intel Corporation nor the names of its 28 + * contributors may be used to endorse or promote products derived 29 + * from this software without specific prior written permission. 30 + * 31 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 32 + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 33 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 34 + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 35 + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 36 + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 37 + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 38 + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 39 + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 40 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 41 + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 42 + * 43 + * Intel PCIe NTB Linux driver 44 + * 45 + * Contact Information: 46 + * Jon Mason <jon.mason@intel.com> 47 + */ 48 + 49 + struct ntb_transport_qp; 50 + 51 + struct ntb_client { 52 + struct device_driver driver; 53 + int (*probe) (struct pci_dev *pdev); 54 + void (*remove) (struct pci_dev *pdev); 55 + }; 56 + 57 + int ntb_register_client(struct ntb_client *drvr); 58 + void ntb_unregister_client(struct ntb_client *drvr); 59 + int ntb_register_client_dev(char *device_name); 60 + void ntb_unregister_client_dev(char *device_name); 61 + 62 + struct ntb_queue_handlers { 63 + void (*rx_handler) (struct ntb_transport_qp *qp, void *qp_data, 64 + void *data, int len); 65 + void (*tx_handler) (struct ntb_transport_qp *qp, void *qp_data, 66 + void *data, int len); 67 + void (*event_handler) (void *data, int status); 68 + }; 69 + 70 + unsigned char ntb_transport_qp_num(struct ntb_transport_qp *qp); 71 + unsigned int ntb_transport_max_size(struct ntb_transport_qp *qp); 72 + struct ntb_transport_qp * 73 + ntb_transport_create_queue(void *data, struct pci_dev *pdev, 74 + const struct ntb_queue_handlers *handlers); 75 + void ntb_transport_free_queue(struct ntb_transport_qp *qp); 76 + int ntb_transport_rx_enqueue(struct ntb_transport_qp *qp, void *cb, void *data, 77 + unsigned int len); 78 + int ntb_transport_tx_enqueue(struct ntb_transport_qp *qp, void *cb, void *data, 79 + unsigned int len); 80 + void 
*ntb_transport_rx_remove(struct ntb_transport_qp *qp, unsigned int *len); 81 + void ntb_transport_link_up(struct ntb_transport_qp *qp); 82 + void ntb_transport_link_down(struct ntb_transport_qp *qp); 83 + bool ntb_transport_link_query(struct ntb_transport_qp *qp);