
Merge branch 'dibs-direct-internal-buffer-sharing'

Alexandra Winter says:

====================
dibs - Direct Internal Buffer Sharing

This series introduces a generic abstraction of existing components like:
- the s390-specific ISM device (Internal Shared Memory),
- the SMC-D loopback mechanism (Shared Memory Communication - Direct),
- the client interface of the SMC-D module to the transport devices.
This generic shim layer can be extended with more devices, more clients,
and more features in the future.

This layer is called 'dibs' for Direct Internal Buffer Sharing based on the
common scheme that these mechanisms enable controlled sharing of memory
buffers within some containing entity such as a hypervisor or a Linux
instance.

Benefits:
- Cleaner separation of ISM and SMC-D functionality.
- Simpler and fewer module dependencies.
- Clear interface definition.
- Extendable for future devices and clients.

An overview was given at the Netdev 0x19 conference, recordings and slides
are available [1].

Background / Status quo:
------------------------
Currently s390 hardware provides virtual PCI ISM devices (Internal Shared
Memory). Their driver is in drivers/s390/net/ism_drv.c. The main user is
SMC-D (net/smc). The ism driver offers a client interface so other
users/protocols can also use these devices, but it is still heavily
intermingled with the smc code: the ism module cannot be used without
the smc module, which feels artificial.

There is ongoing work to extend the ISM concept of shared buffers that can
be accessed directly by another instance on the same hardware: [2] proposed
a loopback interface (ism_lo) that can be used on non-s390 architectures
(e.g. between containers or to test SMC-D). A minimal implementation went
upstream with [3]; ism_lo is currently part of the smc protocol and
rather hidden.

[4] proposed a virtio definition of ism (ism_virtio) that can be used
between kvm guests.

We will shortly send an RFC for a dibs client that uses dibs as a
transport for TTY.

Concept:
--------
Create a shim layer in net/dibs that contains common definitions and code
for all dibs devices and all dibs clients. Any device or client module only
needs to depend on this dibs layer module and any device or client code
only needs to include the definitions in include/linux/dibs.h.

The name dibs was chosen to clearly distinguish it from the existing s390
ism devices. And to emphasize that it is not about sharing whole memory
regions with anybody, but dedicating single buffers for another system.

Implementation:
---------------
The end result of this series is: A dibs shim layer with
- One dibs client: smc-d
- Two dibs device drivers: ism and dibs-loopback
- Everything prepared to add more clients and more device drivers.

Patches 1-2 fix some issues that were found along the way. They make
sense on their own, but also enable a better-structured dibs series.

There are three components that exist today:
a) smc module (especially SMC-D functionality, which is an ism client today)
b) ism device driver (supports multiple ism clients today)
c) smc-loopback (integrated with smc today)
In order to preserve existing functionality at each step, these are not
moved to the dibs layer component by component; instead:
- the dibs layer is established in parallel to existing code [patches 3-6]
- then some service functions are moved to the dibs layer [patches 7-12]
- the actual data movement is moved to the dibs layer [patch 13]
- and last, event handling is moved to the dibs layer [patch 14]

Future:
-------
Items that are not part of this patchset but can be added later:
- dynamically add or remove dibs_loopback. That will allow for simple
testing of add_dev()/del_dev().
- handle_irq(): Call clients without interrupt context, e.g. using
threaded interrupts. I left this for a follow-on because it includes
conceptual changes for the smcd receive code.
- Any improvements of locking scopes. I mainly moved some of the
existing locks to the dibs layer. I have the feeling there is room for
improvement.
- The device drivers should not loop through the client array
- dibs_dev_op.*_dmb() functions reveal unnecessary details of the
internal dmb struct to the clients
- Check whether client calls to dibs_dev_ops should be replaced by
interface functions that provide additional value
- Check whether device driver calls to dibs_client_ops should be replaced
by interface functions that provide additional value.

Link: [1] https://netdevconf.info/0x19/sessions/talk/communication-via-internal-shared-memory-ism-time-to-open-up.html
Link: [2] https://lore.kernel.org/netdev/1695568613-125057-1-git-send-email-guwen@linux.alibaba.com/
Link: [3] https://lore.kernel.org/linux-kernel//20240428060738.60843-1-guwen@linux.alibaba.com/
Link: [4] https://groups.oasis-open.org/communities/community-home/digestviewer/viewthread?GroupId=3973&MessageKey=c060ecf9-ea1a-49a2-9827-c92f0e6447b2&CommunityKey=2f26be99-3aa1-48f6-93a5-018dce262226&hlmlt=VT
====================

Link: https://patch.msgid.link/20250918110500.1731261-1-wintera@linux.ibm.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

+1645 -1085
+7 -2
MAINTAINERS
···
 S:	Maintained
 F:	drivers/gpio/gpio-gpio-mm.c

+DIBS (DIRECT INTERNAL BUFFER SHARING)
+M:	Alexandra Winter <wintera@linux.ibm.com>
+L:	netdev@vger.kernel.org
+S:	Supported
+F:	drivers/dibs/
+F:	include/linux/dibs.h
+
 DIGITEQ AUTOMOTIVE MGB4 V4L2 DRIVER
 M:	Martin Tuma <martin.tuma@digiteqautomotive.com>
 L:	linux-media@vger.kernel.org
···
 F:	include/linux/hippidevice.h
 F:	include/linux/if_*
 F:	include/linux/inetdevice.h
-F:	include/linux/ism.h
 F:	include/linux/netdev*
 F:	include/linux/platform_data/wiznet.h
 F:	include/uapi/linux/cn_proc.h
···
 L:	netdev@vger.kernel.org
 S:	Supported
 F:	drivers/s390/net/
-F:	include/linux/ism.h

 S390 PCI SUBSYSTEM
 M:	Niklas Schnelle <schnelle@linux.ibm.com>
+3 -1
arch/s390/configs/debug_defconfig
···
 CONFIG_UNIX_DIAG=m
 CONFIG_XFRM_USER=m
 CONFIG_NET_KEY=m
+CONFIG_DIBS=y
+CONFIG_DIBS_LO=y
+CONFIG_SMC=m
 CONFIG_SMC_DIAG=m
-CONFIG_SMC_LO=y
 CONFIG_INET=y
 CONFIG_IP_MULTICAST=y
 CONFIG_IP_ADVANCED_ROUTER=y
+3 -1
arch/s390/configs/defconfig
···
 CONFIG_UNIX_DIAG=m
 CONFIG_XFRM_USER=m
 CONFIG_NET_KEY=m
+CONFIG_DIBS=y
+CONFIG_DIBS_LO=y
+CONFIG_SMC=m
 CONFIG_SMC_DIAG=m
-CONFIG_SMC_LO=y
 CONFIG_INET=y
 CONFIG_IP_MULTICAST=y
 CONFIG_IP_ADVANCED_ROUTER=y
+1
drivers/Makefile
···
 obj-$(CONFIG_CDX_BUS) += cdx/
 obj-$(CONFIG_DPLL) += dpll/

+obj-$(CONFIG_DIBS) += dibs/
 obj-$(CONFIG_S390) += s390/
+23
drivers/dibs/Kconfig
···
+# SPDX-License-Identifier: GPL-2.0
+config DIBS
+	tristate "DIBS support"
+	default n
+	help
+	  Direct Internal Buffer Sharing (DIBS)
+	  A communication method that uses common physical (internal) memory
+	  for synchronous direct access into a remote buffer.
+
+	  Select this option to provide the abstraction layer between
+	  dibs devices and dibs clients like the SMC protocol.
+	  The module name is dibs.
+
+config DIBS_LO
+	bool "intra-OS shortcut with dibs loopback"
+	depends on DIBS
+	default n
+	help
+	  DIBS_LO enables the creation of a software-emulated dibs device
+	  named lo which can be used for transferring data when communication
+	  occurs within the same OS. This helps in convenient testing of
+	  dibs clients, since dibs loopback is independent of architecture or
+	  hardware.
+8
drivers/dibs/Makefile
···
+# SPDX-License-Identifier: GPL-2.0
+#
+# DIBS class module
+#
+
+dibs-y += dibs_main.o
+obj-$(CONFIG_DIBS) += dibs.o
+dibs-$(CONFIG_DIBS_LO) += dibs_loopback.o
+356
drivers/dibs/dibs_loopback.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Functions for dibs loopback/loopback-ism device. 4 + * 5 + * Copyright (c) 2024, Alibaba Inc. 6 + * 7 + * Author: Wen Gu <guwen@linux.alibaba.com> 8 + * Tony Lu <tonylu@linux.alibaba.com> 9 + * 10 + */ 11 + 12 + #include <linux/bitops.h> 13 + #include <linux/device.h> 14 + #include <linux/dibs.h> 15 + #include <linux/slab.h> 16 + #include <linux/spinlock.h> 17 + #include <linux/types.h> 18 + 19 + #include "dibs_loopback.h" 20 + 21 + #define DIBS_LO_SUPPORT_NOCOPY 0x1 22 + #define DIBS_DMA_ADDR_INVALID (~(dma_addr_t)0) 23 + 24 + static const char dibs_lo_dev_name[] = "lo"; 25 + /* global loopback device */ 26 + static struct dibs_lo_dev *lo_dev; 27 + 28 + static u16 dibs_lo_get_fabric_id(struct dibs_dev *dibs) 29 + { 30 + return DIBS_LOOPBACK_FABRIC; 31 + } 32 + 33 + static int dibs_lo_query_rgid(struct dibs_dev *dibs, const uuid_t *rgid, 34 + u32 vid_valid, u32 vid) 35 + { 36 + /* rgid should be the same as lgid */ 37 + if (!uuid_equal(rgid, &dibs->gid)) 38 + return -ENETUNREACH; 39 + return 0; 40 + } 41 + 42 + static int dibs_lo_max_dmbs(void) 43 + { 44 + return DIBS_LO_MAX_DMBS; 45 + } 46 + 47 + static int dibs_lo_register_dmb(struct dibs_dev *dibs, struct dibs_dmb *dmb, 48 + struct dibs_client *client) 49 + { 50 + struct dibs_lo_dmb_node *dmb_node, *tmp_node; 51 + struct dibs_lo_dev *ldev; 52 + unsigned long flags; 53 + int sba_idx, rc; 54 + 55 + ldev = dibs->drv_priv; 56 + sba_idx = dmb->idx; 57 + /* check space for new dmb */ 58 + for_each_clear_bit(sba_idx, ldev->sba_idx_mask, DIBS_LO_MAX_DMBS) { 59 + if (!test_and_set_bit(sba_idx, ldev->sba_idx_mask)) 60 + break; 61 + } 62 + if (sba_idx == DIBS_LO_MAX_DMBS) 63 + return -ENOSPC; 64 + 65 + dmb_node = kzalloc(sizeof(*dmb_node), GFP_KERNEL); 66 + if (!dmb_node) { 67 + rc = -ENOMEM; 68 + goto err_bit; 69 + } 70 + 71 + dmb_node->sba_idx = sba_idx; 72 + dmb_node->len = dmb->dmb_len; 73 + dmb_node->cpu_addr = kzalloc(dmb_node->len, GFP_KERNEL | 74 + 
__GFP_NOWARN | __GFP_NORETRY | 75 + __GFP_NOMEMALLOC); 76 + if (!dmb_node->cpu_addr) { 77 + rc = -ENOMEM; 78 + goto err_node; 79 + } 80 + dmb_node->dma_addr = DIBS_DMA_ADDR_INVALID; 81 + refcount_set(&dmb_node->refcnt, 1); 82 + 83 + again: 84 + /* add new dmb into hash table */ 85 + get_random_bytes(&dmb_node->token, sizeof(dmb_node->token)); 86 + write_lock_bh(&ldev->dmb_ht_lock); 87 + hash_for_each_possible(ldev->dmb_ht, tmp_node, list, dmb_node->token) { 88 + if (tmp_node->token == dmb_node->token) { 89 + write_unlock_bh(&ldev->dmb_ht_lock); 90 + goto again; 91 + } 92 + } 93 + hash_add(ldev->dmb_ht, &dmb_node->list, dmb_node->token); 94 + write_unlock_bh(&ldev->dmb_ht_lock); 95 + atomic_inc(&ldev->dmb_cnt); 96 + 97 + dmb->idx = dmb_node->sba_idx; 98 + dmb->dmb_tok = dmb_node->token; 99 + dmb->cpu_addr = dmb_node->cpu_addr; 100 + dmb->dma_addr = dmb_node->dma_addr; 101 + dmb->dmb_len = dmb_node->len; 102 + 103 + spin_lock_irqsave(&dibs->lock, flags); 104 + dibs->dmb_clientid_arr[sba_idx] = client->id; 105 + spin_unlock_irqrestore(&dibs->lock, flags); 106 + 107 + return 0; 108 + 109 + err_node: 110 + kfree(dmb_node); 111 + err_bit: 112 + clear_bit(sba_idx, ldev->sba_idx_mask); 113 + return rc; 114 + } 115 + 116 + static void __dibs_lo_unregister_dmb(struct dibs_lo_dev *ldev, 117 + struct dibs_lo_dmb_node *dmb_node) 118 + { 119 + /* remove dmb from hash table */ 120 + write_lock_bh(&ldev->dmb_ht_lock); 121 + hash_del(&dmb_node->list); 122 + write_unlock_bh(&ldev->dmb_ht_lock); 123 + 124 + clear_bit(dmb_node->sba_idx, ldev->sba_idx_mask); 125 + kfree(dmb_node->cpu_addr); 126 + kfree(dmb_node); 127 + 128 + if (atomic_dec_and_test(&ldev->dmb_cnt)) 129 + wake_up(&ldev->ldev_release); 130 + } 131 + 132 + static int dibs_lo_unregister_dmb(struct dibs_dev *dibs, struct dibs_dmb *dmb) 133 + { 134 + struct dibs_lo_dmb_node *dmb_node = NULL, *tmp_node; 135 + struct dibs_lo_dev *ldev; 136 + unsigned long flags; 137 + 138 + ldev = dibs->drv_priv; 139 + 140 + /* find dmb from 
hash table */ 141 + read_lock_bh(&ldev->dmb_ht_lock); 142 + hash_for_each_possible(ldev->dmb_ht, tmp_node, list, dmb->dmb_tok) { 143 + if (tmp_node->token == dmb->dmb_tok) { 144 + dmb_node = tmp_node; 145 + break; 146 + } 147 + } 148 + read_unlock_bh(&ldev->dmb_ht_lock); 149 + if (!dmb_node) 150 + return -EINVAL; 151 + 152 + if (refcount_dec_and_test(&dmb_node->refcnt)) { 153 + spin_lock_irqsave(&dibs->lock, flags); 154 + dibs->dmb_clientid_arr[dmb_node->sba_idx] = NO_DIBS_CLIENT; 155 + spin_unlock_irqrestore(&dibs->lock, flags); 156 + 157 + __dibs_lo_unregister_dmb(ldev, dmb_node); 158 + } 159 + return 0; 160 + } 161 + 162 + static int dibs_lo_support_dmb_nocopy(struct dibs_dev *dibs) 163 + { 164 + return DIBS_LO_SUPPORT_NOCOPY; 165 + } 166 + 167 + static int dibs_lo_attach_dmb(struct dibs_dev *dibs, struct dibs_dmb *dmb) 168 + { 169 + struct dibs_lo_dmb_node *dmb_node = NULL, *tmp_node; 170 + struct dibs_lo_dev *ldev; 171 + 172 + ldev = dibs->drv_priv; 173 + 174 + /* find dmb_node according to dmb->dmb_tok */ 175 + read_lock_bh(&ldev->dmb_ht_lock); 176 + hash_for_each_possible(ldev->dmb_ht, tmp_node, list, dmb->dmb_tok) { 177 + if (tmp_node->token == dmb->dmb_tok) { 178 + dmb_node = tmp_node; 179 + break; 180 + } 181 + } 182 + if (!dmb_node) { 183 + read_unlock_bh(&ldev->dmb_ht_lock); 184 + return -EINVAL; 185 + } 186 + read_unlock_bh(&ldev->dmb_ht_lock); 187 + 188 + if (!refcount_inc_not_zero(&dmb_node->refcnt)) 189 + /* the dmb is being unregistered, but has 190 + * not been removed from the hash table. 
191 + */ 192 + return -EINVAL; 193 + 194 + /* provide dmb information */ 195 + dmb->idx = dmb_node->sba_idx; 196 + dmb->dmb_tok = dmb_node->token; 197 + dmb->cpu_addr = dmb_node->cpu_addr; 198 + dmb->dma_addr = dmb_node->dma_addr; 199 + dmb->dmb_len = dmb_node->len; 200 + return 0; 201 + } 202 + 203 + static int dibs_lo_detach_dmb(struct dibs_dev *dibs, u64 token) 204 + { 205 + struct dibs_lo_dmb_node *dmb_node = NULL, *tmp_node; 206 + struct dibs_lo_dev *ldev; 207 + 208 + ldev = dibs->drv_priv; 209 + 210 + /* find dmb_node according to dmb->dmb_tok */ 211 + read_lock_bh(&ldev->dmb_ht_lock); 212 + hash_for_each_possible(ldev->dmb_ht, tmp_node, list, token) { 213 + if (tmp_node->token == token) { 214 + dmb_node = tmp_node; 215 + break; 216 + } 217 + } 218 + if (!dmb_node) { 219 + read_unlock_bh(&ldev->dmb_ht_lock); 220 + return -EINVAL; 221 + } 222 + read_unlock_bh(&ldev->dmb_ht_lock); 223 + 224 + if (refcount_dec_and_test(&dmb_node->refcnt)) 225 + __dibs_lo_unregister_dmb(ldev, dmb_node); 226 + return 0; 227 + } 228 + 229 + static int dibs_lo_move_data(struct dibs_dev *dibs, u64 dmb_tok, 230 + unsigned int idx, bool sf, unsigned int offset, 231 + void *data, unsigned int size) 232 + { 233 + struct dibs_lo_dmb_node *rmb_node = NULL, *tmp_node; 234 + struct dibs_lo_dev *ldev; 235 + u16 s_mask; 236 + u8 client_id; 237 + u32 sba_idx; 238 + 239 + ldev = dibs->drv_priv; 240 + 241 + read_lock_bh(&ldev->dmb_ht_lock); 242 + hash_for_each_possible(ldev->dmb_ht, tmp_node, list, dmb_tok) { 243 + if (tmp_node->token == dmb_tok) { 244 + rmb_node = tmp_node; 245 + break; 246 + } 247 + } 248 + if (!rmb_node) { 249 + read_unlock_bh(&ldev->dmb_ht_lock); 250 + return -EINVAL; 251 + } 252 + memcpy((char *)rmb_node->cpu_addr + offset, data, size); 253 + sba_idx = rmb_node->sba_idx; 254 + read_unlock_bh(&ldev->dmb_ht_lock); 255 + 256 + if (!sf) 257 + return 0; 258 + 259 + spin_lock(&dibs->lock); 260 + client_id = dibs->dmb_clientid_arr[sba_idx]; 261 + s_mask = ror16(0x1000, idx); 262 + 
if (likely(client_id != NO_DIBS_CLIENT && dibs->subs[client_id])) 263 + dibs->subs[client_id]->ops->handle_irq(dibs, sba_idx, s_mask); 264 + spin_unlock(&dibs->lock); 265 + 266 + return 0; 267 + } 268 + 269 + static const struct dibs_dev_ops dibs_lo_ops = { 270 + .get_fabric_id = dibs_lo_get_fabric_id, 271 + .query_remote_gid = dibs_lo_query_rgid, 272 + .max_dmbs = dibs_lo_max_dmbs, 273 + .register_dmb = dibs_lo_register_dmb, 274 + .unregister_dmb = dibs_lo_unregister_dmb, 275 + .move_data = dibs_lo_move_data, 276 + .support_mmapped_rdmb = dibs_lo_support_dmb_nocopy, 277 + .attach_dmb = dibs_lo_attach_dmb, 278 + .detach_dmb = dibs_lo_detach_dmb, 279 + }; 280 + 281 + static void dibs_lo_dev_init(struct dibs_lo_dev *ldev) 282 + { 283 + rwlock_init(&ldev->dmb_ht_lock); 284 + hash_init(ldev->dmb_ht); 285 + atomic_set(&ldev->dmb_cnt, 0); 286 + init_waitqueue_head(&ldev->ldev_release); 287 + } 288 + 289 + static void dibs_lo_dev_exit(struct dibs_lo_dev *ldev) 290 + { 291 + if (atomic_read(&ldev->dmb_cnt)) 292 + wait_event(ldev->ldev_release, !atomic_read(&ldev->dmb_cnt)); 293 + } 294 + 295 + static int dibs_lo_dev_probe(void) 296 + { 297 + struct dibs_lo_dev *ldev; 298 + struct dibs_dev *dibs; 299 + int ret; 300 + 301 + ldev = kzalloc(sizeof(*ldev), GFP_KERNEL); 302 + if (!ldev) 303 + return -ENOMEM; 304 + 305 + dibs = dibs_dev_alloc(); 306 + if (!dibs) { 307 + kfree(ldev); 308 + return -ENOMEM; 309 + } 310 + 311 + ldev->dibs = dibs; 312 + dibs->drv_priv = ldev; 313 + dibs_lo_dev_init(ldev); 314 + uuid_gen(&dibs->gid); 315 + dibs->ops = &dibs_lo_ops; 316 + 317 + dibs->dev.parent = NULL; 318 + dev_set_name(&dibs->dev, "%s", dibs_lo_dev_name); 319 + 320 + ret = dibs_dev_add(dibs); 321 + if (ret) 322 + goto err_reg; 323 + lo_dev = ldev; 324 + return 0; 325 + 326 + err_reg: 327 + kfree(dibs->dmb_clientid_arr); 328 + /* pairs with dibs_dev_alloc() */ 329 + put_device(&dibs->dev); 330 + kfree(ldev); 331 + 332 + return ret; 333 + } 334 + 335 + static void 
dibs_lo_dev_remove(void) 336 + { 337 + if (!lo_dev) 338 + return; 339 + 340 + dibs_dev_del(lo_dev->dibs); 341 + dibs_lo_dev_exit(lo_dev); 342 + /* pairs with dibs_dev_alloc() */ 343 + put_device(&lo_dev->dibs->dev); 344 + kfree(lo_dev); 345 + lo_dev = NULL; 346 + } 347 + 348 + int dibs_loopback_init(void) 349 + { 350 + return dibs_lo_dev_probe(); 351 + } 352 + 353 + void dibs_loopback_exit(void) 354 + { 355 + dibs_lo_dev_remove(); 356 + }
+57
drivers/dibs/dibs_loopback.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * dibs loopback (aka loopback-ism) device structure definitions.
+ *
+ * Copyright (c) 2024, Alibaba Inc.
+ *
+ * Author: Wen Gu <guwen@linux.alibaba.com>
+ *         Tony Lu <tonylu@linux.alibaba.com>
+ *
+ */
+
+#ifndef _DIBS_LOOPBACK_H
+#define _DIBS_LOOPBACK_H
+
+#include <linux/dibs.h>
+#include <linux/hashtable.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+#include <linux/wait.h>
+
+#if IS_ENABLED(CONFIG_DIBS_LO)
+#define DIBS_LO_DMBS_HASH_BITS	12
+#define DIBS_LO_MAX_DMBS	5000
+
+struct dibs_lo_dmb_node {
+	struct hlist_node list;
+	u64 token;
+	u32 len;
+	u32 sba_idx;
+	void *cpu_addr;
+	dma_addr_t dma_addr;
+	refcount_t refcnt;
+};
+
+struct dibs_lo_dev {
+	struct dibs_dev *dibs;
+	atomic_t dmb_cnt;
+	rwlock_t dmb_ht_lock;
+	DECLARE_BITMAP(sba_idx_mask, DIBS_LO_MAX_DMBS);
+	DECLARE_HASHTABLE(dmb_ht, DIBS_LO_DMBS_HASH_BITS);
+	wait_queue_head_t ldev_release;
+};
+
+int dibs_loopback_init(void);
+void dibs_loopback_exit(void);
+#else
+static inline int dibs_loopback_init(void)
+{
+	return 0;
+}
+
+static inline void dibs_loopback_exit(void)
+{
+}
+#endif
+
+#endif /* _DIBS_LOOPBACK_H */
+278
drivers/dibs/dibs_main.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * DIBS - Direct Internal Buffer Sharing 4 + * 5 + * Implementation of the DIBS class module 6 + * 7 + * Copyright IBM Corp. 2025 8 + */ 9 + #define KMSG_COMPONENT "dibs" 10 + #define pr_fmt(fmt) KMSG_COMPONENT ": " fmt 11 + 12 + #include <linux/module.h> 13 + #include <linux/types.h> 14 + #include <linux/slab.h> 15 + #include <linux/err.h> 16 + #include <linux/dibs.h> 17 + 18 + #include "dibs_loopback.h" 19 + 20 + MODULE_DESCRIPTION("Direct Internal Buffer Sharing class"); 21 + MODULE_LICENSE("GPL"); 22 + 23 + static struct class *dibs_class; 24 + 25 + /* use an array rather a list for fast mapping: */ 26 + static struct dibs_client *clients[MAX_DIBS_CLIENTS]; 27 + static u8 max_client; 28 + static DEFINE_MUTEX(clients_lock); 29 + struct dibs_dev_list { 30 + struct list_head list; 31 + struct mutex mutex; /* protects dibs device list */ 32 + }; 33 + 34 + static struct dibs_dev_list dibs_dev_list = { 35 + .list = LIST_HEAD_INIT(dibs_dev_list.list), 36 + .mutex = __MUTEX_INITIALIZER(dibs_dev_list.mutex), 37 + }; 38 + 39 + static void dibs_setup_forwarding(struct dibs_client *client, 40 + struct dibs_dev *dibs) 41 + { 42 + unsigned long flags; 43 + 44 + spin_lock_irqsave(&dibs->lock, flags); 45 + dibs->subs[client->id] = client; 46 + spin_unlock_irqrestore(&dibs->lock, flags); 47 + } 48 + 49 + int dibs_register_client(struct dibs_client *client) 50 + { 51 + struct dibs_dev *dibs; 52 + int i, rc = -ENOSPC; 53 + 54 + mutex_lock(&dibs_dev_list.mutex); 55 + mutex_lock(&clients_lock); 56 + for (i = 0; i < MAX_DIBS_CLIENTS; ++i) { 57 + if (!clients[i]) { 58 + clients[i] = client; 59 + client->id = i; 60 + if (i == max_client) 61 + max_client++; 62 + rc = 0; 63 + break; 64 + } 65 + } 66 + mutex_unlock(&clients_lock); 67 + 68 + if (i < MAX_DIBS_CLIENTS) { 69 + /* initialize with all devices that we got so far */ 70 + list_for_each_entry(dibs, &dibs_dev_list.list, list) { 71 + dibs->priv[i] = NULL; 72 + 
client->ops->add_dev(dibs); 73 + dibs_setup_forwarding(client, dibs); 74 + } 75 + } 76 + mutex_unlock(&dibs_dev_list.mutex); 77 + 78 + return rc; 79 + } 80 + EXPORT_SYMBOL_GPL(dibs_register_client); 81 + 82 + int dibs_unregister_client(struct dibs_client *client) 83 + { 84 + struct dibs_dev *dibs; 85 + unsigned long flags; 86 + int max_dmbs; 87 + int rc = 0; 88 + 89 + mutex_lock(&dibs_dev_list.mutex); 90 + list_for_each_entry(dibs, &dibs_dev_list.list, list) { 91 + spin_lock_irqsave(&dibs->lock, flags); 92 + max_dmbs = dibs->ops->max_dmbs(); 93 + for (int i = 0; i < max_dmbs; ++i) { 94 + if (dibs->dmb_clientid_arr[i] == client->id) { 95 + WARN(1, "%s: attempt to unregister '%s' with registered dmb(s)\n", 96 + __func__, client->name); 97 + rc = -EBUSY; 98 + goto err_reg_dmb; 99 + } 100 + } 101 + /* Stop forwarding IRQs and events */ 102 + dibs->subs[client->id] = NULL; 103 + spin_unlock_irqrestore(&dibs->lock, flags); 104 + clients[client->id]->ops->del_dev(dibs); 105 + dibs->priv[client->id] = NULL; 106 + } 107 + 108 + mutex_lock(&clients_lock); 109 + clients[client->id] = NULL; 110 + if (client->id + 1 == max_client) 111 + max_client--; 112 + mutex_unlock(&clients_lock); 113 + 114 + mutex_unlock(&dibs_dev_list.mutex); 115 + return rc; 116 + 117 + err_reg_dmb: 118 + spin_unlock_irqrestore(&dibs->lock, flags); 119 + mutex_unlock(&dibs_dev_list.mutex); 120 + return rc; 121 + } 122 + EXPORT_SYMBOL_GPL(dibs_unregister_client); 123 + 124 + static void dibs_dev_release(struct device *dev) 125 + { 126 + struct dibs_dev *dibs; 127 + 128 + dibs = container_of(dev, struct dibs_dev, dev); 129 + 130 + kfree(dibs); 131 + } 132 + 133 + struct dibs_dev *dibs_dev_alloc(void) 134 + { 135 + struct dibs_dev *dibs; 136 + 137 + dibs = kzalloc(sizeof(*dibs), GFP_KERNEL); 138 + if (!dibs) 139 + return dibs; 140 + dibs->dev.release = dibs_dev_release; 141 + dibs->dev.class = dibs_class; 142 + device_initialize(&dibs->dev); 143 + 144 + return dibs; 145 + } 146 + 
EXPORT_SYMBOL_GPL(dibs_dev_alloc); 147 + 148 + static ssize_t gid_show(struct device *dev, struct device_attribute *attr, 149 + char *buf) 150 + { 151 + struct dibs_dev *dibs; 152 + 153 + dibs = container_of(dev, struct dibs_dev, dev); 154 + 155 + return sysfs_emit(buf, "%pUb\n", &dibs->gid); 156 + } 157 + static DEVICE_ATTR_RO(gid); 158 + 159 + static ssize_t fabric_id_show(struct device *dev, struct device_attribute *attr, 160 + char *buf) 161 + { 162 + struct dibs_dev *dibs; 163 + u16 fabric_id; 164 + 165 + dibs = container_of(dev, struct dibs_dev, dev); 166 + fabric_id = dibs->ops->get_fabric_id(dibs); 167 + 168 + return sysfs_emit(buf, "0x%04x\n", fabric_id); 169 + } 170 + static DEVICE_ATTR_RO(fabric_id); 171 + 172 + static struct attribute *dibs_dev_attrs[] = { 173 + &dev_attr_gid.attr, 174 + &dev_attr_fabric_id.attr, 175 + NULL, 176 + }; 177 + 178 + static const struct attribute_group dibs_dev_attr_group = { 179 + .attrs = dibs_dev_attrs, 180 + }; 181 + 182 + int dibs_dev_add(struct dibs_dev *dibs) 183 + { 184 + int max_dmbs; 185 + int i, ret; 186 + 187 + max_dmbs = dibs->ops->max_dmbs(); 188 + spin_lock_init(&dibs->lock); 189 + dibs->dmb_clientid_arr = kzalloc(max_dmbs, GFP_KERNEL); 190 + if (!dibs->dmb_clientid_arr) 191 + return -ENOMEM; 192 + memset(dibs->dmb_clientid_arr, NO_DIBS_CLIENT, max_dmbs); 193 + 194 + ret = device_add(&dibs->dev); 195 + if (ret) 196 + goto free_client_arr; 197 + 198 + ret = sysfs_create_group(&dibs->dev.kobj, &dibs_dev_attr_group); 199 + if (ret) { 200 + dev_err(&dibs->dev, "sysfs_create_group failed for dibs_dev\n"); 201 + goto err_device_del; 202 + } 203 + mutex_lock(&dibs_dev_list.mutex); 204 + mutex_lock(&clients_lock); 205 + for (i = 0; i < max_client; ++i) { 206 + if (clients[i]) { 207 + clients[i]->ops->add_dev(dibs); 208 + dibs_setup_forwarding(clients[i], dibs); 209 + } 210 + } 211 + mutex_unlock(&clients_lock); 212 + list_add(&dibs->list, &dibs_dev_list.list); 213 + mutex_unlock(&dibs_dev_list.mutex); 214 + 215 + 
return 0; 216 + 217 + err_device_del: 218 + device_del(&dibs->dev); 219 + free_client_arr: 220 + kfree(dibs->dmb_clientid_arr); 221 + return ret; 222 + 223 + } 224 + EXPORT_SYMBOL_GPL(dibs_dev_add); 225 + 226 + void dibs_dev_del(struct dibs_dev *dibs) 227 + { 228 + unsigned long flags; 229 + int i; 230 + 231 + sysfs_remove_group(&dibs->dev.kobj, &dibs_dev_attr_group); 232 + 233 + spin_lock_irqsave(&dibs->lock, flags); 234 + for (i = 0; i < MAX_DIBS_CLIENTS; ++i) 235 + dibs->subs[i] = NULL; 236 + spin_unlock_irqrestore(&dibs->lock, flags); 237 + 238 + mutex_lock(&dibs_dev_list.mutex); 239 + mutex_lock(&clients_lock); 240 + for (i = 0; i < max_client; ++i) { 241 + if (clients[i]) 242 + clients[i]->ops->del_dev(dibs); 243 + } 244 + mutex_unlock(&clients_lock); 245 + list_del_init(&dibs->list); 246 + mutex_unlock(&dibs_dev_list.mutex); 247 + 248 + device_del(&dibs->dev); 249 + kfree(dibs->dmb_clientid_arr); 250 + } 251 + EXPORT_SYMBOL_GPL(dibs_dev_del); 252 + 253 + static int __init dibs_init(void) 254 + { 255 + int rc; 256 + 257 + memset(clients, 0, sizeof(clients)); 258 + max_client = 0; 259 + 260 + dibs_class = class_create("dibs"); 261 + if (IS_ERR(&dibs_class)) 262 + return PTR_ERR(&dibs_class); 263 + 264 + rc = dibs_loopback_init(); 265 + if (rc) 266 + pr_err("%s fails with %d\n", __func__, rc); 267 + 268 + return rc; 269 + } 270 + 271 + static void __exit dibs_exit(void) 272 + { 273 + dibs_loopback_exit(); 274 + class_destroy(dibs_class); 275 + } 276 + 277 + module_init(dibs_init); 278 + module_exit(dibs_exit);
+1 -2
drivers/s390/net/Kconfig
···

 config ISM
 	tristate "Support for ISM vPCI Adapter"
-	depends on PCI
-	imply SMC
+	depends on PCI && DIBS
 	default n
 	help
 	  Select this option if you want to use the Internal Shared Memory
+51 -2
drivers/s390/net/ism.h
···
 #include <linux/spinlock.h>
 #include <linux/types.h>
 #include <linux/pci.h>
-#include <linux/ism.h>
-#include <net/smc.h>
+#include <linux/dibs.h>
 #include <asm/pci_insn.h>

 #define UTIL_STR_LEN	16
+#define ISM_ERROR	0xFFFF
+
+#define ISM_NR_DMBS	1920

 /*
  * Do not use the first word of the DMB bits to ensure 8 byte aligned access.
···
 #define ISM_SIGNAL_IEQ	0xE
 #define ISM_UNREG_SBA	0x11
 #define ISM_UNREG_IEQ	0x12
+
+enum ism_event_type {
+	ISM_EVENT_BUF = 0x00,
+	ISM_EVENT_DEV = 0x01,
+	ISM_EVENT_SWR = 0x02
+};
+
+enum ism_event_code {
+	ISM_BUF_DMB_UNREGISTERED = 0x04,
+	ISM_BUF_USING_ISM_DEV_DISABLED = 0x08,
+	ISM_BUF_OWNING_ISM_DEV_IN_ERR_STATE = 0x02,
+	ISM_BUF_USING_ISM_DEV_IN_ERR_STATE = 0x03,
+	ISM_BUF_VLAN_MISMATCH_WITH_OWNER = 0x05,
+	ISM_BUF_VLAN_MISMATCH_WITH_USER = 0x06,
+	ISM_DEV_GID_DISABLED = 0x07,
+	ISM_DEV_GID_ERR_STATE = 0x01
+};

 struct ism_req_hdr {
 	u32 cmd;
···
 	} response;
 } __aligned(16);

+/* ISM-vPCI devices provide 64 Bit GIDs
+ * Map them to ISM UUID GIDs like this:
+ * _________________________________________
+ * | 64 Bit ISM-vPCI GID | 00000000_00000000 |
+ * -----------------------------------------
+ * This will be interpreted as a UUID variant, that is reserved
+ * for NCS backward compatibility. So it will not collide with
+ * proper UUIDs.
+ */
 union ism_read_gid {
 	struct {
 		struct ism_req_hdr hdr;
···
 	u64 : 64;
 };

+struct ism_event {
+	u32 type;
+	u32 code;
+	u64 tok;
+	u64 time;
+	u64 info;
+};
+
 struct ism_eq {
 	struct ism_eq_header header;
 	struct ism_event entry[15];
···
 	u32 dmb_bits[ISM_NR_DMBS / 32];
 	u32 reserved[3];
 	u16 dmbe_mask[ISM_NR_DMBS];
+};
+
+struct ism_dev {
+	spinlock_t cmd_lock; /* serializes cmds */
+	struct dibs_dev *dibs;
+	struct pci_dev *pdev;
+	struct ism_sba *sba;
+	dma_addr_t sba_dma_addr;
+	DECLARE_BITMAP(sba_bitmap, ISM_NR_DMBS);
+
+	struct ism_eq *ieq;
+	dma_addr_t ieq_dma_addr;
+	int ieq_idx;
 };

 #define ISM_CREATE_REQ(dmb, idx, sf, offset) \
+207 -366
drivers/s390/net/ism_drv.c
··· 31 31 32 32 static debug_info_t *ism_debug_info; 33 33 34 - #define NO_CLIENT 0xff /* must be >= MAX_CLIENTS */ 35 - static struct ism_client *clients[MAX_CLIENTS]; /* use an array rather than */ 36 - /* a list for fast mapping */ 37 - static u8 max_client; 38 - static DEFINE_MUTEX(clients_lock); 39 - static bool ism_v2_capable; 40 - struct ism_dev_list { 41 - struct list_head list; 42 - struct mutex mutex; /* protects ism device list */ 43 - }; 44 - 45 - static struct ism_dev_list ism_dev_list = { 46 - .list = LIST_HEAD_INIT(ism_dev_list.list), 47 - .mutex = __MUTEX_INITIALIZER(ism_dev_list.mutex), 48 - }; 49 - 50 - static void ism_setup_forwarding(struct ism_client *client, struct ism_dev *ism) 51 - { 52 - unsigned long flags; 53 - 54 - spin_lock_irqsave(&ism->lock, flags); 55 - ism->subs[client->id] = client; 56 - spin_unlock_irqrestore(&ism->lock, flags); 57 - } 58 - 59 - int ism_register_client(struct ism_client *client) 60 - { 61 - struct ism_dev *ism; 62 - int i, rc = -ENOSPC; 63 - 64 - mutex_lock(&ism_dev_list.mutex); 65 - mutex_lock(&clients_lock); 66 - for (i = 0; i < MAX_CLIENTS; ++i) { 67 - if (!clients[i]) { 68 - clients[i] = client; 69 - client->id = i; 70 - if (i == max_client) 71 - max_client++; 72 - rc = 0; 73 - break; 74 - } 75 - } 76 - mutex_unlock(&clients_lock); 77 - 78 - if (i < MAX_CLIENTS) { 79 - /* initialize with all devices that we got so far */ 80 - list_for_each_entry(ism, &ism_dev_list.list, list) { 81 - ism->priv[i] = NULL; 82 - client->add(ism); 83 - ism_setup_forwarding(client, ism); 84 - } 85 - } 86 - mutex_unlock(&ism_dev_list.mutex); 87 - 88 - return rc; 89 - } 90 - EXPORT_SYMBOL_GPL(ism_register_client); 91 - 92 - int ism_unregister_client(struct ism_client *client) 93 - { 94 - struct ism_dev *ism; 95 - unsigned long flags; 96 - int rc = 0; 97 - 98 - mutex_lock(&ism_dev_list.mutex); 99 - list_for_each_entry(ism, &ism_dev_list.list, list) { 100 - spin_lock_irqsave(&ism->lock, flags); 101 - /* Stop forwarding IRQs and events 
*/ 102 - ism->subs[client->id] = NULL; 103 - for (int i = 0; i < ISM_NR_DMBS; ++i) { 104 - if (ism->sba_client_arr[i] == client->id) { 105 - WARN(1, "%s: attempt to unregister '%s' with registered dmb(s)\n", 106 - __func__, client->name); 107 - rc = -EBUSY; 108 - goto err_reg_dmb; 109 - } 110 - } 111 - spin_unlock_irqrestore(&ism->lock, flags); 112 - } 113 - mutex_unlock(&ism_dev_list.mutex); 114 - 115 - mutex_lock(&clients_lock); 116 - clients[client->id] = NULL; 117 - if (client->id + 1 == max_client) 118 - max_client--; 119 - mutex_unlock(&clients_lock); 120 - return rc; 121 - 122 - err_reg_dmb: 123 - spin_unlock_irqrestore(&ism->lock, flags); 124 - mutex_unlock(&ism_dev_list.mutex); 125 - return rc; 126 - } 127 - EXPORT_SYMBOL_GPL(ism_unregister_client); 128 - 129 34 static int ism_cmd(struct ism_dev *ism, void *cmd) 130 35 { 131 36 struct ism_req_hdr *req = cmd; ··· 178 273 return 0; 179 274 } 180 275 181 - static int ism_read_local_gid(struct ism_dev *ism) 276 + static int ism_read_local_gid(struct dibs_dev *dibs) 182 277 { 278 + struct ism_dev *ism = dibs->drv_priv; 183 279 union ism_read_gid cmd; 184 280 int ret; 185 281 ··· 192 286 if (ret) 193 287 goto out; 194 288 195 - ism->local_gid = cmd.response.gid; 289 + memset(&dibs->gid, 0, sizeof(dibs->gid)); 290 + memcpy(&dibs->gid, &cmd.response.gid, sizeof(cmd.response.gid)); 196 291 out: 197 292 return ret; 198 293 } 199 294 200 - static void ism_free_dmb(struct ism_dev *ism, struct ism_dmb *dmb) 295 + static int ism_query_rgid(struct dibs_dev *dibs, const uuid_t *rgid, 296 + u32 vid_valid, u32 vid) 201 297 { 202 - clear_bit(dmb->sba_idx, ism->sba_bitmap); 298 + struct ism_dev *ism = dibs->drv_priv; 299 + union ism_query_rgid cmd; 300 + 301 + memset(&cmd, 0, sizeof(cmd)); 302 + cmd.request.hdr.cmd = ISM_QUERY_RGID; 303 + cmd.request.hdr.len = sizeof(cmd.request); 304 + 305 + memcpy(&cmd.request.rgid, rgid, sizeof(cmd.request.rgid)); 306 + cmd.request.vlan_valid = vid_valid; 307 + cmd.request.vlan_id = vid; 
308 + 309 + return ism_cmd(ism, &cmd); 310 + } 311 + 312 + static int ism_max_dmbs(void) 313 + { 314 + return ISM_NR_DMBS; 315 + } 316 + 317 + static void ism_free_dmb(struct ism_dev *ism, struct dibs_dmb *dmb) 318 + { 319 + clear_bit(dmb->idx, ism->sba_bitmap); 203 320 dma_unmap_page(&ism->pdev->dev, dmb->dma_addr, dmb->dmb_len, 204 321 DMA_FROM_DEVICE); 205 322 folio_put(virt_to_folio(dmb->cpu_addr)); 206 323 } 207 324 208 - static int ism_alloc_dmb(struct ism_dev *ism, struct ism_dmb *dmb) 325 + static int ism_alloc_dmb(struct ism_dev *ism, struct dibs_dmb *dmb) 209 326 { 210 327 struct folio *folio; 211 328 unsigned long bit; ··· 237 308 if (PAGE_ALIGN(dmb->dmb_len) > dma_get_max_seg_size(&ism->pdev->dev)) 238 309 return -EINVAL; 239 310 240 - if (!dmb->sba_idx) { 311 + if (!dmb->idx) { 241 312 bit = find_next_zero_bit(ism->sba_bitmap, ISM_NR_DMBS, 242 313 ISM_DMB_BIT_OFFSET); 243 314 if (bit == ISM_NR_DMBS) 244 315 return -ENOSPC; 245 316 246 - dmb->sba_idx = bit; 317 + dmb->idx = bit; 247 318 } 248 - if (dmb->sba_idx < ISM_DMB_BIT_OFFSET || 249 - test_and_set_bit(dmb->sba_idx, ism->sba_bitmap)) 319 + if (dmb->idx < ISM_DMB_BIT_OFFSET || 320 + test_and_set_bit(dmb->idx, ism->sba_bitmap)) 250 321 return -EINVAL; 251 322 252 323 folio = folio_alloc(GFP_KERNEL | __GFP_NOWARN | __GFP_NOMEMALLOC | ··· 271 342 out_free: 272 343 kfree(dmb->cpu_addr); 273 344 out_bit: 274 - clear_bit(dmb->sba_idx, ism->sba_bitmap); 345 + clear_bit(dmb->idx, ism->sba_bitmap); 275 346 return rc; 276 347 } 277 348 278 - int ism_register_dmb(struct ism_dev *ism, struct ism_dmb *dmb, 279 - struct ism_client *client) 349 + static int ism_register_dmb(struct dibs_dev *dibs, struct dibs_dmb *dmb, 350 + struct dibs_client *client) 280 351 { 352 + struct ism_dev *ism = dibs->drv_priv; 281 353 union ism_reg_dmb cmd; 282 354 unsigned long flags; 283 355 int ret; ··· 293 363 294 364 cmd.request.dmb = dmb->dma_addr; 295 365 cmd.request.dmb_len = dmb->dmb_len; 296 - cmd.request.sba_idx = 
dmb->sba_idx; 366 + cmd.request.sba_idx = dmb->idx; 297 367 cmd.request.vlan_valid = dmb->vlan_valid; 298 368 cmd.request.vlan_id = dmb->vlan_id; 299 - cmd.request.rgid = dmb->rgid; 369 + memcpy(&cmd.request.rgid, &dmb->rgid, sizeof(u64)); 300 370 301 371 ret = ism_cmd(ism, &cmd); 302 372 if (ret) { ··· 304 374 goto out; 305 375 } 306 376 dmb->dmb_tok = cmd.response.dmb_tok; 307 - spin_lock_irqsave(&ism->lock, flags); 308 - ism->sba_client_arr[dmb->sba_idx - ISM_DMB_BIT_OFFSET] = client->id; 309 - spin_unlock_irqrestore(&ism->lock, flags); 377 + spin_lock_irqsave(&dibs->lock, flags); 378 + dibs->dmb_clientid_arr[dmb->idx - ISM_DMB_BIT_OFFSET] = client->id; 379 + spin_unlock_irqrestore(&dibs->lock, flags); 310 380 out: 311 381 return ret; 312 382 } 313 - EXPORT_SYMBOL_GPL(ism_register_dmb); 314 383 315 - int ism_unregister_dmb(struct ism_dev *ism, struct ism_dmb *dmb) 384 + static int ism_unregister_dmb(struct dibs_dev *dibs, struct dibs_dmb *dmb) 316 385 { 386 + struct ism_dev *ism = dibs->drv_priv; 317 387 union ism_unreg_dmb cmd; 318 388 unsigned long flags; 319 389 int ret; ··· 324 394 325 395 cmd.request.dmb_tok = dmb->dmb_tok; 326 396 327 - spin_lock_irqsave(&ism->lock, flags); 328 - ism->sba_client_arr[dmb->sba_idx - ISM_DMB_BIT_OFFSET] = NO_CLIENT; 329 - spin_unlock_irqrestore(&ism->lock, flags); 397 + spin_lock_irqsave(&dibs->lock, flags); 398 + dibs->dmb_clientid_arr[dmb->idx - ISM_DMB_BIT_OFFSET] = NO_DIBS_CLIENT; 399 + spin_unlock_irqrestore(&dibs->lock, flags); 330 400 331 401 ret = ism_cmd(ism, &cmd); 332 402 if (ret && ret != ISM_ERROR) ··· 336 406 out: 337 407 return ret; 338 408 } 339 - EXPORT_SYMBOL_GPL(ism_unregister_dmb); 340 409 341 - static int ism_add_vlan_id(struct ism_dev *ism, u64 vlan_id) 410 + static int ism_add_vlan_id(struct dibs_dev *dibs, u64 vlan_id) 342 411 { 412 + struct ism_dev *ism = dibs->drv_priv; 343 413 union ism_set_vlan_id cmd; 344 414 345 415 memset(&cmd, 0, sizeof(cmd)); ··· 351 421 return ism_cmd(ism, &cmd); 352 422 } 
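The DMB index handling visible in ism_alloc_dmb() above (find_next_zero_bit(), test_and_set_bit(), the reserved range below ISM_DMB_BIT_OFFSET) can be sketched in plain userspace C. This is an illustrative model, not the driver code: alloc_dmb_idx(), free_dmb_idx() and the byte-per-bit array are stand-ins for the kernel bitmap ops, and the error returns stand in for -ENOSPC/-EINVAL.

```c
/* Userspace sketch (not the kernel code) of the DMB index allocation
 * scheme: indices below a reserved offset are never handed out, a caller
 * may request a specific index, and a bitmap guards against double
 * allocation. Names and sizes are illustrative.
 */
#include <stdint.h>

#define NR_DMBS        1920   /* like ISM_NR_DMBS in the driver */
#define DMB_BIT_OFFSET 4      /* first usable index, like ISM_DMB_BIT_OFFSET */

static uint8_t bitmap[NR_DMBS]; /* 0 = free, 1 = in use; stand-in for sba_bitmap */

/* Return the allocated index, or -1 on error (no space / invalid request). */
static int alloc_dmb_idx(unsigned int requested)
{
	unsigned int bit = requested;

	if (!bit) {      /* no preference: scan for the first free index */
		for (bit = DMB_BIT_OFFSET; bit < NR_DMBS; bit++)
			if (!bitmap[bit])
				break;
		if (bit == NR_DMBS)
			return -1;      /* -ENOSPC in the driver */
	}
	if (bit < DMB_BIT_OFFSET || bit >= NR_DMBS || bitmap[bit])
		return -1;              /* -EINVAL in the driver */
	bitmap[bit] = 1;                /* like test_and_set_bit() */
	return (int)bit;
}

static void free_dmb_idx(unsigned int bit)
{
	bitmap[bit] = 0;                /* like clear_bit() */
}
```

As in the driver, a requested index of zero means "pick any", which is why index 0 itself can never be a valid DMB index and the low indices are reserved.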
353 423 354 - static int ism_del_vlan_id(struct ism_dev *ism, u64 vlan_id) 424 + static int ism_del_vlan_id(struct dibs_dev *dibs, u64 vlan_id) 355 425 { 426 + struct ism_dev *ism = dibs->drv_priv; 356 427 union ism_set_vlan_id cmd; 357 428 358 429 memset(&cmd, 0, sizeof(cmd)); ··· 365 434 return ism_cmd(ism, &cmd); 366 435 } 367 436 437 + static int ism_signal_ieq(struct dibs_dev *dibs, const uuid_t *rgid, 438 + u32 trigger_irq, u32 event_code, u64 info) 439 + { 440 + struct ism_dev *ism = dibs->drv_priv; 441 + union ism_sig_ieq cmd; 442 + 443 + memset(&cmd, 0, sizeof(cmd)); 444 + cmd.request.hdr.cmd = ISM_SIGNAL_IEQ; 445 + cmd.request.hdr.len = sizeof(cmd.request); 446 + 447 + memcpy(&cmd.request.rgid, rgid, sizeof(cmd.request.rgid)); 448 + cmd.request.trigger_irq = trigger_irq; 449 + cmd.request.event_code = event_code; 450 + cmd.request.info = info; 451 + 452 + return ism_cmd(ism, &cmd); 453 + } 454 + 368 455 static unsigned int max_bytes(unsigned int start, unsigned int len, 369 456 unsigned int boundary) 370 457 { 371 458 return min(boundary - (start & (boundary - 1)), len); 372 459 } 373 460 374 - int ism_move(struct ism_dev *ism, u64 dmb_tok, unsigned int idx, bool sf, 375 - unsigned int offset, void *data, unsigned int size) 461 + static int ism_move(struct dibs_dev *dibs, u64 dmb_tok, unsigned int idx, 462 + bool sf, unsigned int offset, void *data, 463 + unsigned int size) 376 464 { 465 + struct ism_dev *ism = dibs->drv_priv; 377 466 unsigned int bytes; 378 467 u64 dmb_req; 379 468 int ret; ··· 414 463 415 464 return 0; 416 465 } 417 - EXPORT_SYMBOL_GPL(ism_move); 466 + 467 + static u16 ism_get_chid(struct dibs_dev *dibs) 468 + { 469 + struct ism_dev *ism = dibs->drv_priv; 470 + 471 + if (!ism || !ism->pdev) 472 + return 0; 473 + 474 + return to_zpci(ism->pdev)->pchid; 475 + } 476 + 477 + static int ism_match_event_type(u32 s390_event_type) 478 + { 479 + switch (s390_event_type) { 480 + case ISM_EVENT_BUF: 481 + return DIBS_BUF_EVENT; 482 + case 
ISM_EVENT_DEV: 483 + return DIBS_DEV_EVENT; 484 + case ISM_EVENT_SWR: 485 + return DIBS_SW_EVENT; 486 + default: 487 + return DIBS_OTHER_TYPE; 488 + } 489 + } 490 + 491 + static int ism_match_event_subtype(u32 s390_event_subtype) 492 + { 493 + switch (s390_event_subtype) { 494 + case ISM_BUF_DMB_UNREGISTERED: 495 + return DIBS_BUF_UNREGISTERED; 496 + case ISM_DEV_GID_DISABLED: 497 + return DIBS_DEV_DISABLED; 498 + case ISM_DEV_GID_ERR_STATE: 499 + return DIBS_DEV_ERR_STATE; 500 + default: 501 + return DIBS_OTHER_SUBTYPE; 502 + } 503 + } 418 504 419 505 static void ism_handle_event(struct ism_dev *ism) 420 506 { 507 + struct dibs_dev *dibs = ism->dibs; 508 + struct dibs_event event; 421 509 struct ism_event *entry; 422 - struct ism_client *clt; 510 + struct dibs_client *clt; 423 511 int i; 424 512 425 513 while ((ism->ieq_idx + 1) != READ_ONCE(ism->ieq->header.idx)) { 426 - if (++(ism->ieq_idx) == ARRAY_SIZE(ism->ieq->entry)) 514 + if (++ism->ieq_idx == ARRAY_SIZE(ism->ieq->entry)) 427 515 ism->ieq_idx = 0; 428 516 429 517 entry = &ism->ieq->entry[ism->ieq_idx]; 430 518 debug_event(ism_debug_info, 2, entry, sizeof(*entry)); 431 - for (i = 0; i < max_client; ++i) { 432 - clt = ism->subs[i]; 519 + __memset(&event, 0, sizeof(event)); 520 + event.type = ism_match_event_type(entry->type); 521 + if (event.type == DIBS_SW_EVENT) 522 + event.subtype = entry->code; 523 + else 524 + event.subtype = ism_match_event_subtype(entry->code); 525 + event.time = entry->time; 526 + event.data = entry->info; 527 + switch (event.type) { 528 + case DIBS_BUF_EVENT: 529 + event.buffer_tok = entry->tok; 530 + break; 531 + case DIBS_DEV_EVENT: 532 + case DIBS_SW_EVENT: 533 + memcpy(&event.gid, &entry->tok, sizeof(u64)); 534 + } 535 + for (i = 0; i < MAX_DIBS_CLIENTS; ++i) { 536 + clt = dibs->subs[i]; 433 537 if (clt) 434 - clt->handle_event(ism, entry); 538 + clt->ops->handle_event(dibs, &event); 435 539 } 436 540 } 437 541 } ··· 495 489 { 496 490 struct ism_dev *ism = data; 497 491 unsigned 
long bit, end; 492 + struct dibs_dev *dibs; 498 493 unsigned long *bv; 499 494 u16 dmbemask; 500 495 u8 client_id; 501 496 497 + dibs = ism->dibs; 498 + 502 499 bv = (void *) &ism->sba->dmb_bits[ISM_DMB_WORD_OFFSET]; 503 500 end = sizeof(ism->sba->dmb_bits) * BITS_PER_BYTE - ISM_DMB_BIT_OFFSET; 504 501 505 - spin_lock(&ism->lock); 502 + spin_lock(&dibs->lock); 506 503 ism->sba->s = 0; 507 504 barrier(); 508 505 for (bit = 0;;) { ··· 517 508 dmbemask = ism->sba->dmbe_mask[bit + ISM_DMB_BIT_OFFSET]; 518 509 ism->sba->dmbe_mask[bit + ISM_DMB_BIT_OFFSET] = 0; 519 510 barrier(); 520 - client_id = ism->sba_client_arr[bit]; 521 - if (unlikely(client_id == NO_CLIENT || !ism->subs[client_id])) 511 + client_id = dibs->dmb_clientid_arr[bit]; 512 + if (unlikely(client_id == NO_DIBS_CLIENT || 513 + !dibs->subs[client_id])) 522 514 continue; 523 - ism->subs[client_id]->handle_irq(ism, bit + ISM_DMB_BIT_OFFSET, dmbemask); 515 + dibs->subs[client_id]->ops->handle_irq(dibs, 516 + bit + ISM_DMB_BIT_OFFSET, 517 + dmbemask); 524 518 } 525 519 526 520 if (ism->sba->e) { ··· 531 519 barrier(); 532 520 ism_handle_event(ism); 533 521 } 534 - spin_unlock(&ism->lock); 522 + spin_unlock(&dibs->lock); 535 523 return IRQ_HANDLED; 536 524 } 525 + 526 + static const struct dibs_dev_ops ism_ops = { 527 + .get_fabric_id = ism_get_chid, 528 + .query_remote_gid = ism_query_rgid, 529 + .max_dmbs = ism_max_dmbs, 530 + .register_dmb = ism_register_dmb, 531 + .unregister_dmb = ism_unregister_dmb, 532 + .move_data = ism_move, 533 + .add_vlan_id = ism_add_vlan_id, 534 + .del_vlan_id = ism_del_vlan_id, 535 + .signal_event = ism_signal_ieq, 536 + }; 537 537 538 538 static int ism_dev_init(struct ism_dev *ism) 539 539 { 540 540 struct pci_dev *pdev = ism->pdev; 541 - int i, ret; 541 + int ret; 542 542 543 543 ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI); 544 544 if (ret <= 0) 545 545 goto out; 546 546 547 - ism->sba_client_arr = kzalloc(ISM_NR_DMBS, GFP_KERNEL); 548 - if (!ism->sba_client_arr) 549 - 
goto free_vectors; 550 - memset(ism->sba_client_arr, NO_CLIENT, ISM_NR_DMBS); 551 - 552 547 ret = request_irq(pci_irq_vector(pdev, 0), ism_handle_irq, 0, 553 548 pci_name(pdev), ism); 554 549 if (ret) 555 - goto free_client_arr; 550 + goto free_vectors; 556 551 557 552 ret = register_sba(ism); 558 553 if (ret) ··· 569 550 if (ret) 570 551 goto unreg_sba; 571 552 572 - ret = ism_read_local_gid(ism); 573 - if (ret) 574 - goto unreg_ieq; 575 - 576 - if (!ism_add_vlan_id(ism, ISM_RESERVED_VLANID)) 577 - /* hardware is V2 capable */ 578 - ism_v2_capable = true; 579 - else 580 - ism_v2_capable = false; 581 - 582 - mutex_lock(&ism_dev_list.mutex); 583 - mutex_lock(&clients_lock); 584 - for (i = 0; i < max_client; ++i) { 585 - if (clients[i]) { 586 - clients[i]->add(ism); 587 - ism_setup_forwarding(clients[i], ism); 588 - } 589 - } 590 - mutex_unlock(&clients_lock); 591 - 592 - list_add(&ism->list, &ism_dev_list.list); 593 - mutex_unlock(&ism_dev_list.mutex); 594 - 595 553 query_info(ism); 596 554 return 0; 597 555 598 - unreg_ieq: 599 - unregister_ieq(ism); 600 556 unreg_sba: 601 557 unregister_sba(ism); 602 558 free_irq: 603 559 free_irq(pci_irq_vector(pdev, 0), ism); 604 - free_client_arr: 605 - kfree(ism->sba_client_arr); 606 560 free_vectors: 607 561 pci_free_irq_vectors(pdev); 608 562 out: 609 563 return ret; 610 564 } 611 565 612 - static void ism_dev_release(struct device *dev) 566 + static void ism_dev_exit(struct ism_dev *ism) 613 567 { 614 - struct ism_dev *ism; 568 + struct pci_dev *pdev = ism->pdev; 615 569 616 - ism = container_of(dev, struct ism_dev, dev); 617 - 618 - kfree(ism); 570 + unregister_ieq(ism); 571 + unregister_sba(ism); 572 + free_irq(pci_irq_vector(pdev, 0), ism); 573 + pci_free_irq_vectors(pdev); 619 574 } 620 575 621 576 static int ism_probe(struct pci_dev *pdev, const struct pci_device_id *id) 622 577 { 578 + struct dibs_dev *dibs; 579 + struct zpci_dev *zdev; 623 580 struct ism_dev *ism; 624 581 int ret; 625 582 ··· 603 608 if (!ism) 604 
609 return -ENOMEM; 605 610 606 - spin_lock_init(&ism->lock); 607 611 spin_lock_init(&ism->cmd_lock); 608 612 dev_set_drvdata(&pdev->dev, ism); 609 613 ism->pdev = pdev; 610 - ism->dev.parent = &pdev->dev; 611 - ism->dev.release = ism_dev_release; 612 - device_initialize(&ism->dev); 613 - dev_set_name(&ism->dev, "%s", dev_name(&pdev->dev)); 614 - ret = device_add(&ism->dev); 615 - if (ret) 616 - goto err_dev; 617 614 618 615 ret = pci_enable_device_mem(pdev); 619 616 if (ret) 620 - goto err; 617 + goto err_dev; 621 618 622 619 ret = pci_request_mem_regions(pdev, DRV_NAME); 623 620 if (ret) ··· 623 636 dma_set_max_seg_size(&pdev->dev, SZ_1M); 624 637 pci_set_master(pdev); 625 638 639 + dibs = dibs_dev_alloc(); 640 + if (!dibs) { 641 + ret = -ENOMEM; 642 + goto err_resource; 643 + } 644 + /* set this up before we enable interrupts */ 645 + ism->dibs = dibs; 646 + dibs->drv_priv = ism; 647 + dibs->ops = &ism_ops; 648 + 649 + /* enable ism device, but any interrupts and events will be ignored 650 + * before dibs_dev_add() adds it to any clients. 651 + */ 626 652 ret = ism_dev_init(ism); 627 653 if (ret) 628 - goto err_resource; 654 + goto err_dibs; 655 + 656 + /* after ism_dev_init() we can call ism function to set gid */ 657 + ret = ism_read_local_gid(dibs); 658 + if (ret) 659 + goto err_ism; 660 + 661 + dibs->dev.parent = &pdev->dev; 662 + 663 + zdev = to_zpci(pdev); 664 + dev_set_name(&dibs->dev, "ism%x", zdev->uid ? 
zdev->uid : zdev->fid); 665 + 666 + ret = dibs_dev_add(dibs); 667 + if (ret) 668 + goto err_ism; 629 669 630 670 return 0; 631 671 672 + err_ism: 673 + ism_dev_exit(ism); 674 + err_dibs: 675 + /* pairs with dibs_dev_alloc() */ 676 + put_device(&dibs->dev); 632 677 err_resource: 633 678 pci_release_mem_regions(pdev); 634 679 err_disable: 635 680 pci_disable_device(pdev); 636 - err: 637 - device_del(&ism->dev); 638 681 err_dev: 639 682 dev_set_drvdata(&pdev->dev, NULL); 640 - put_device(&ism->dev); 683 + kfree(ism); 641 684 642 685 return ret; 643 - } 644 - 645 - static void ism_dev_exit(struct ism_dev *ism) 646 - { 647 - struct pci_dev *pdev = ism->pdev; 648 - unsigned long flags; 649 - int i; 650 - 651 - spin_lock_irqsave(&ism->lock, flags); 652 - for (i = 0; i < max_client; ++i) 653 - ism->subs[i] = NULL; 654 - spin_unlock_irqrestore(&ism->lock, flags); 655 - 656 - mutex_lock(&ism_dev_list.mutex); 657 - mutex_lock(&clients_lock); 658 - for (i = 0; i < max_client; ++i) { 659 - if (clients[i]) 660 - clients[i]->remove(ism); 661 - } 662 - mutex_unlock(&clients_lock); 663 - 664 - if (ism_v2_capable) 665 - ism_del_vlan_id(ism, ISM_RESERVED_VLANID); 666 - unregister_ieq(ism); 667 - unregister_sba(ism); 668 - free_irq(pci_irq_vector(pdev, 0), ism); 669 - kfree(ism->sba_client_arr); 670 - pci_free_irq_vectors(pdev); 671 - list_del_init(&ism->list); 672 - mutex_unlock(&ism_dev_list.mutex); 673 686 } 674 687 675 688 static void ism_remove(struct pci_dev *pdev) 676 689 { 677 690 struct ism_dev *ism = dev_get_drvdata(&pdev->dev); 691 + struct dibs_dev *dibs = ism->dibs; 678 692 693 + dibs_dev_del(dibs); 679 694 ism_dev_exit(ism); 695 + /* pairs with dibs_dev_alloc() */ 696 + put_device(&dibs->dev); 680 697 681 698 pci_release_mem_regions(pdev); 682 699 pci_disable_device(pdev); 683 - device_del(&ism->dev); 684 700 dev_set_drvdata(&pdev->dev, NULL); 685 - put_device(&ism->dev); 701 + kfree(ism); 686 702 } 687 703 688 704 static struct pci_driver ism_driver = { ··· 703 713 if 
(!ism_debug_info) 704 714 return -ENODEV; 705 715 706 - memset(clients, 0, sizeof(clients)); 707 - max_client = 0; 708 716 debug_register_view(ism_debug_info, &debug_hex_ascii_view); 709 717 ret = pci_register_driver(&ism_driver); 710 718 if (ret) ··· 719 731 720 732 module_init(ism_init); 721 733 module_exit(ism_exit); 722 - 723 - /*************************** SMC-D Implementation *****************************/ 724 - 725 - #if IS_ENABLED(CONFIG_SMC) 726 - static int ism_query_rgid(struct ism_dev *ism, u64 rgid, u32 vid_valid, 727 - u32 vid) 728 - { 729 - union ism_query_rgid cmd; 730 - 731 - memset(&cmd, 0, sizeof(cmd)); 732 - cmd.request.hdr.cmd = ISM_QUERY_RGID; 733 - cmd.request.hdr.len = sizeof(cmd.request); 734 - 735 - cmd.request.rgid = rgid; 736 - cmd.request.vlan_valid = vid_valid; 737 - cmd.request.vlan_id = vid; 738 - 739 - return ism_cmd(ism, &cmd); 740 - } 741 - 742 - static int smcd_query_rgid(struct smcd_dev *smcd, struct smcd_gid *rgid, 743 - u32 vid_valid, u32 vid) 744 - { 745 - return ism_query_rgid(smcd->priv, rgid->gid, vid_valid, vid); 746 - } 747 - 748 - static int smcd_register_dmb(struct smcd_dev *smcd, struct smcd_dmb *dmb, 749 - void *client) 750 - { 751 - return ism_register_dmb(smcd->priv, (struct ism_dmb *)dmb, client); 752 - } 753 - 754 - static int smcd_unregister_dmb(struct smcd_dev *smcd, struct smcd_dmb *dmb) 755 - { 756 - return ism_unregister_dmb(smcd->priv, (struct ism_dmb *)dmb); 757 - } 758 - 759 - static int smcd_add_vlan_id(struct smcd_dev *smcd, u64 vlan_id) 760 - { 761 - return ism_add_vlan_id(smcd->priv, vlan_id); 762 - } 763 - 764 - static int smcd_del_vlan_id(struct smcd_dev *smcd, u64 vlan_id) 765 - { 766 - return ism_del_vlan_id(smcd->priv, vlan_id); 767 - } 768 - 769 - static int smcd_set_vlan_required(struct smcd_dev *smcd) 770 - { 771 - return ism_cmd_simple(smcd->priv, ISM_SET_VLAN); 772 - } 773 - 774 - static int smcd_reset_vlan_required(struct smcd_dev *smcd) 775 - { 776 - return ism_cmd_simple(smcd->priv, 
ISM_RESET_VLAN); 777 - } 778 - 779 - static int ism_signal_ieq(struct ism_dev *ism, u64 rgid, u32 trigger_irq, 780 - u32 event_code, u64 info) 781 - { 782 - union ism_sig_ieq cmd; 783 - 784 - memset(&cmd, 0, sizeof(cmd)); 785 - cmd.request.hdr.cmd = ISM_SIGNAL_IEQ; 786 - cmd.request.hdr.len = sizeof(cmd.request); 787 - 788 - cmd.request.rgid = rgid; 789 - cmd.request.trigger_irq = trigger_irq; 790 - cmd.request.event_code = event_code; 791 - cmd.request.info = info; 792 - 793 - return ism_cmd(ism, &cmd); 794 - } 795 - 796 - static int smcd_signal_ieq(struct smcd_dev *smcd, struct smcd_gid *rgid, 797 - u32 trigger_irq, u32 event_code, u64 info) 798 - { 799 - return ism_signal_ieq(smcd->priv, rgid->gid, 800 - trigger_irq, event_code, info); 801 - } 802 - 803 - static int smcd_move(struct smcd_dev *smcd, u64 dmb_tok, unsigned int idx, 804 - bool sf, unsigned int offset, void *data, 805 - unsigned int size) 806 - { 807 - return ism_move(smcd->priv, dmb_tok, idx, sf, offset, data, size); 808 - } 809 - 810 - static int smcd_supports_v2(void) 811 - { 812 - return ism_v2_capable; 813 - } 814 - 815 - static u64 ism_get_local_gid(struct ism_dev *ism) 816 - { 817 - return ism->local_gid; 818 - } 819 - 820 - static void smcd_get_local_gid(struct smcd_dev *smcd, 821 - struct smcd_gid *smcd_gid) 822 - { 823 - smcd_gid->gid = ism_get_local_gid(smcd->priv); 824 - smcd_gid->gid_ext = 0; 825 - } 826 - 827 - static u16 ism_get_chid(struct ism_dev *ism) 828 - { 829 - if (!ism || !ism->pdev) 830 - return 0; 831 - 832 - return to_zpci(ism->pdev)->pchid; 833 - } 834 - 835 - static u16 smcd_get_chid(struct smcd_dev *smcd) 836 - { 837 - return ism_get_chid(smcd->priv); 838 - } 839 - 840 - static inline struct device *smcd_get_dev(struct smcd_dev *dev) 841 - { 842 - struct ism_dev *ism = dev->priv; 843 - 844 - return &ism->dev; 845 - } 846 - 847 - static const struct smcd_ops ism_ops = { 848 - .query_remote_gid = smcd_query_rgid, 849 - .register_dmb = smcd_register_dmb, 850 - 
.unregister_dmb = smcd_unregister_dmb, 851 - .add_vlan_id = smcd_add_vlan_id, 852 - .del_vlan_id = smcd_del_vlan_id, 853 - .set_vlan_required = smcd_set_vlan_required, 854 - .reset_vlan_required = smcd_reset_vlan_required, 855 - .signal_event = smcd_signal_ieq, 856 - .move_data = smcd_move, 857 - .supports_v2 = smcd_supports_v2, 858 - .get_local_gid = smcd_get_local_gid, 859 - .get_chid = smcd_get_chid, 860 - .get_dev = smcd_get_dev, 861 - }; 862 - 863 - const struct smcd_ops *ism_get_smcd_ops(void) 864 - { 865 - return &ism_ops; 866 - } 867 - EXPORT_SYMBOL_GPL(ism_get_smcd_ops); 868 - #endif
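The removed section above consisted largely of one-line smcd_* wrappers around exported ism_* symbols. The dibs layer replaces that pattern with an ops table that clients call through, as seen where the diff fills `ism_ops` into `dibs->ops`, so drivers no longer need to export entry points. A minimal userspace sketch of this indirection, using illustrative demo_* names rather than the real dibs types:

```c
/* Sketch of the ops-table indirection that replaces the removed
 * EXPORT_SYMBOL_GPL() entry points and smcd_* wrappers: the driver fills
 * a const ops struct, and callers go through dev->ops instead of linking
 * against driver symbols. All names here are illustrative.
 */
#include <stdint.h>

struct demo_dev;

struct demo_dev_ops {
	uint16_t (*get_fabric_id)(struct demo_dev *dev);
	int (*move_data)(struct demo_dev *dev, uint64_t dmb_tok,
			 const void *data, unsigned int size);
};

struct demo_dev {
	const struct demo_dev_ops *ops;
	void *drv_priv;               /* driver-private state, like dibs_dev */
	uint16_t fabric_id;
	uint64_t bytes_moved;
};

/* "driver" side: static implementations, reachable only via the ops table */
static uint16_t demo_get_fabric_id(struct demo_dev *dev)
{
	return dev->fabric_id;
}

static int demo_move_data(struct demo_dev *dev, uint64_t dmb_tok,
			  const void *data, unsigned int size)
{
	(void)dmb_tok;
	(void)data;
	dev->bytes_moved += size;     /* stand-in for the real data move */
	return 0;
}

static const struct demo_dev_ops demo_ops = {
	.get_fabric_id = demo_get_fabric_id,
	.move_data = demo_move_data,
};
```

The payoff is the one named in the cover letter: the driver's functions can all become static, and the module dependency between transport driver and protocol client disappears.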
+464
include/linux/dibs.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */
2 + /*
3 + * Direct Internal Buffer Sharing
4 + *
5 + * Definitions for the DIBS module
6 + *
7 + * Copyright IBM Corp. 2025
8 + */
9 + #ifndef _DIBS_H
10 + #define _DIBS_H
11 +
12 + #include <linux/device.h>
13 + #include <linux/uuid.h>
14 +
15 + /* DIBS - Direct Internal Buffer Sharing - concept
16 + * -----------------------------------------------
17 + * In the case of multiple systems sharing the same hardware, dibs fabrics can
18 + * provide dibs devices to these systems. The systems use dibs devices of the
19 + * same fabric to communicate via dmbs (Direct Memory Buffers). Each dmb has
20 + * exactly one owning local dibs device and one remote using dibs device that
21 + * is authorized to write into this dmb. This access control is provided by the
22 + * dibs fabric.
23 + *
24 + * Because the access to the dmb is based on access to physical memory, it is
25 + * lossless and synchronous. The remote devices can directly access any offset
26 + * of the dmb.
27 + *
28 + * Dibs fabrics, dibs devices and dmbs are identified by tokens and ids.
29 + * Dibs fabric id is unique within the same hardware (with the exception of the
30 + * dibs loopback fabric), dmb token is unique within the same fabric, dibs
31 + * device gids are guaranteed to be unique within the same fabric and
32 + * statistically likely to be globally unique. The exchange of these tokens and
33 + * ids between the systems is not part of the dibs concept.
34 + *
35 + * The dibs layer provides an abstraction between dibs device drivers and dibs
36 + * clients.
37 + */
38 +
39 + /* DMB - Direct Memory Buffer
40 + * --------------------------
41 + * A dibs client provides a dmb as input buffer for a local receiving
42 + * dibs device for exactly one (remote) sending dibs device. Only this
43 + * sending device can send data into this dmb using move_data(). Sender
44 + * and receiver can be the same device. A dmb belongs to exactly one client.
45 + */
46 + struct dibs_dmb {
47 + /* tok - Token for this dmb
48 + * Used by remote and local devices and clients to address this dmb.
49 + * Provided by dibs fabric. Unique per dibs fabric.
50 + */
51 + u64 dmb_tok;
52 + /* rgid - GID of designated remote sending device */
53 + uuid_t rgid;
54 + /* cpu_addr - buffer address */
55 + void *cpu_addr;
56 + /* len - buffer length */
57 + u32 dmb_len;
58 + /* idx - Index of this DMB on this receiving device */
59 + u32 idx;
60 + /* VLAN support (deprecated)
61 + * In order to write into a vlan-tagged dmb, the remote device needs
62 + * to belong to this vlan
63 + */
64 + u32 vlan_valid;
65 + u32 vlan_id;
66 + /* optional, used by device driver */
67 + dma_addr_t dma_addr;
68 + };
69 +
70 + /* DIBS events
71 + * -----------
72 + * Dibs devices can optionally notify dibs clients about events that happened
73 + * in the fabric or at the remote device or remote dmb.
74 + */
75 + enum dibs_event_type {
76 + /* Buffer event, e.g. a remote dmb was unregistered */
77 + DIBS_BUF_EVENT,
78 + /* Device event, e.g. a remote dibs device was disabled */
79 + DIBS_DEV_EVENT,
80 + /* Software event, a dibs client can send an event signal to a
81 + * remote dibs device.
82 + */
83 + DIBS_SW_EVENT,
84 + DIBS_OTHER_TYPE };
85 +
86 + enum dibs_event_subtype {
87 + DIBS_BUF_UNREGISTERED,
88 + DIBS_DEV_DISABLED,
89 + DIBS_DEV_ERR_STATE,
90 + DIBS_OTHER_SUBTYPE
91 + };
92 +
93 + struct dibs_event {
94 + u32 type;
95 + u32 subtype;
96 + /* uuid_null if invalid */
97 + uuid_t gid;
98 + /* zero if invalid */
99 + u64 buffer_tok;
100 + u64 time;
101 + /* additional data or zero */
102 + u64 data;
103 + };
104 +
105 + struct dibs_dev;
106 +
107 + /* DIBS client
108 + * -----------
109 + */
110 + #define MAX_DIBS_CLIENTS 8
111 + #define NO_DIBS_CLIENT 0xff
112 + /* All dibs clients have access to all dibs devices.
113 + * A dibs client provides the following functions to be called by dibs layer or
114 + * dibs device drivers:
115 + */
116 + struct dibs_client_ops {
117 + /**
118 + * add_dev() - add a dibs device
119 + * @dev: device that was added
120 + *
121 + * Will be called during dibs_register_client() for all existing
122 + * dibs devices and whenever a new dibs device is registered.
123 + * dev is usable until dibs_client_ops.del_dev() is called.
124 + * *dev is protected by device refcounting.
125 + */
126 + void (*add_dev)(struct dibs_dev *dev);
127 + /**
128 + * del_dev() - remove a dibs device
129 + * @dev: device to be removed
130 + *
131 + * Will be called whenever a dibs device is unregistered, and
132 + * during dibs_unregister_client() for all existing
133 + * dibs devices.
134 + * The device has already stopped initiative for this client:
135 + * No new handlers will be started.
136 + * The device is no longer usable by this client after this call.
137 + */
138 + void (*del_dev)(struct dibs_dev *dev);
139 + /**
140 + * handle_irq() - Handle signaling for a DMB
141 + * @dev: device that owns the dmb
142 + * @idx: Index of the dmb that got signalled
143 + * @dmbemask: signaling mask of the dmb
144 + *
145 + * Handle signaling for a dmb that was registered by this client
146 + * for this device.
147 + * The dibs device can coalesce multiple signaling triggers into a
148 + * single call of handle_irq(). dmbemask can be used to indicate
149 + * different kinds of triggers.
150 + *
151 + * Context: Called in IRQ context by dibs device driver
152 + */
153 + void (*handle_irq)(struct dibs_dev *dev, unsigned int idx,
154 + u16 dmbemask);
155 + /**
156 + * handle_event() - Handle control information sent by device
157 + * @dev: device reporting the event
158 + * @event: dibs event structure
159 + *
160 + * Context: Called in IRQ context by dibs device driver
161 + */
162 + void (*handle_event)(struct dibs_dev *dev,
163 + const struct dibs_event *event);
164 + };
165 +
166 + struct dibs_client {
167 + /* client name for logging and debugging purposes */
168 + const char *name;
169 + const struct dibs_client_ops *ops;
170 + /* client index - provided and used by dibs layer */
171 + u8 id;
172 + };
173 +
174 + /* Functions to be called by dibs clients:
175 + */
176 + /**
177 + * dibs_register_client() - register a client with dibs layer
178 + * @client: this client
179 + *
180 + * Will call client->ops->add_dev() for all existing dibs devices.
181 + * Return: zero on success.
182 + */
183 + int dibs_register_client(struct dibs_client *client);
184 + /**
185 + * dibs_unregister_client() - unregister a client with dibs layer
186 + * @client: this client
187 + *
188 + * Will call client->ops->del_dev() for all existing dibs devices.
189 + * Return: zero on success.
190 + */
191 + int dibs_unregister_client(struct dibs_client *client);
192 +
193 + /* dibs clients can call dibs device ops. */
194 +
195 + /* DIBS devices
196 + * ------------
197 + */
198 +
199 + /* Defined fabric id / CHID for all loopback devices:
200 + * All dibs loopback devices report this fabric id. In this case devices with
201 + * the same fabric id can NOT communicate via dibs. Only loopback devices with
202 + * the same dibs device gid can communicate (=same device with itself).
203 + */
204 + #define DIBS_LOOPBACK_FABRIC 0xFFFF
205 +
206 + /* A dibs device provides the following functions to be called by dibs clients.
207 + * They are mandatory, unless marked 'optional'.
208 + */
209 + struct dibs_dev_ops {
210 + /**
211 + * get_fabric_id()
212 + * @dev: local dibs device
213 + *
214 + * Only devices on the same dibs fabric can communicate. Fabric_id is
215 + * unique inside the same HW system. Use fabric_id for fast negative
216 + * checks, but only query_remote_gid() can give a reliable positive
217 + * answer:
218 + * Different fabric_id: dibs is not possible
219 + * Same fabric_id: dibs may be possible or not
220 + * (e.g. different HW systems)
221 + * EXCEPTION: DIBS_LOOPBACK_FABRIC denotes an ism_loopback device
222 + * that can only communicate with itself. Use dibs_dev.gid
223 + * or query_remote_gid() to determine whether sender and
224 + * receiver use the same ism_loopback device.
225 + * Return: 2 byte dibs fabric id
226 + */
227 + u16 (*get_fabric_id)(struct dibs_dev *dev);
228 + /**
229 + * query_remote_gid()
230 + * @dev: local dibs device
231 + * @rgid: gid of remote dibs device
232 + * @vid_valid: if zero, vid will be ignored;
233 + * deprecated, ignored if device does not support vlan
234 + * @vid: VLAN id; deprecated, ignored if device does not support vlan
235 + *
236 + * Query whether a remote dibs device is reachable via this local device
237 + * and this vlan id.
238 + * Return: 0 if remote gid is reachable.
239 + */
240 + int (*query_remote_gid)(struct dibs_dev *dev, const uuid_t *rgid,
241 + u32 vid_valid, u32 vid);
242 + /**
243 + * max_dmbs()
244 + * Return: Max number of DMBs that can be registered for this kind of
245 + * dibs_dev
246 + */
247 + int (*max_dmbs)(void);
248 + /**
249 + * register_dmb() - allocate and register a dmb
250 + * @dev: dibs device
251 + * @dmb: dmb struct to be registered
252 + * @client: dibs client
253 + *
254 + *
255 + * The following fields of dmb must provide valid input:
256 + * @rgid: gid of remote user device
257 + * @dmb_len: buffer length
258 + * @idx: Optionally: requested idx (if non-zero)
259 + * @vlan_valid: if zero, vlan_id will be ignored;
260 + * deprecated, ignored if device does not support vlan
261 + * @vlan_id: deprecated, ignored if device does not support vlan
262 + * Upon return in addition the following fields will be valid:
263 + * @dmb_tok: for usage by remote and local devices and clients
264 + * @cpu_addr: allocated buffer
265 + * @idx: dmb index, unique per dibs device
266 + * @dma_addr: to be used by device driver, if applicable
267 + *
268 + * Allocate a dmb buffer and register it with this device and for this
269 + * client.
270 + * Return: zero on success
271 + */
272 + int (*register_dmb)(struct dibs_dev *dev, struct dibs_dmb *dmb,
273 + struct dibs_client *client);
274 + /**
275 + * unregister_dmb() - unregister and free a dmb
276 + * @dev: dibs device
277 + * @dmb: dmb struct to be unregistered
278 + * The following fields of dmb must provide valid input:
279 + * @dmb_tok
280 + * @cpu_addr
281 + * @idx
282 + *
283 + * Free dmb.cpu_addr and unregister the dmb from this device.
284 + * Return: zero on success
285 + */
286 + int (*unregister_dmb)(struct dibs_dev *dev, struct dibs_dmb *dmb);
287 + /**
288 + * move_data() - write into a remote dmb
289 + * @dev: Local sending dibs device
290 + * @dmb_tok: Token of the remote dmb
291 + * @idx: signaling index in dmbemask
292 + * @sf: signaling flag;
293 + * if true, idx will be turned on at target dmbemask mask
294 + * and target device will be signaled.
295 + * @offset: offset within target dmb
296 + * @data: pointer to data to be sent
297 + * @size: length of data to be sent, can be zero.
298 + *
299 + * Use dev to write data of size at offset into a remote dmb
300 + * identified by dmb_tok. Data is moved synchronously, *data can
301 + * be freed when this function returns.
302 + *
303 + * If signaling flag (sf) is true, bit number idx will be turned
304 + * on in the dmbemask mask when handle_irq() is called at the remote
305 + * dibs client that owns the target dmb. The target device may choose
306 + * to coalesce the signaling triggers of multiple move_data() calls
307 + * to the same target dmb into a single handle_irq() call.
308 + * Return: zero on success
309 + */
310 + int (*move_data)(struct dibs_dev *dev, u64 dmb_tok, unsigned int idx,
311 + bool sf, unsigned int offset, void *data,
312 + unsigned int size);
313 + /**
314 + * add_vlan_id() - add dibs device to vlan (optional, deprecated)
315 + * @dev: dibs device
316 + * @vlan_id: vlan id
317 + *
318 + * In order to write into a vlan-tagged dmb, the remote device needs
319 + * to belong to this vlan. A device can belong to more than 1 vlan.
320 + * Any device can access an untagged dmb.
321 + * Deprecated, only supported for backwards compatibility.
322 + * Return: zero on success 323 + */ 324 + int (*add_vlan_id)(struct dibs_dev *dev, u64 vlan_id); 325 + /** 326 + * del_vlan_id() - remove dibs device from vlan (optional, deprecated) 327 + * @dev: dibs device 328 + * @vlan_id: vlan id 329 + * Return: zero on success 330 + */ 331 + int (*del_vlan_id)(struct dibs_dev *dev, u64 vlan_id); 332 + /** 333 + * signal_event() - trigger an event at a remote dibs device (optional) 334 + * @dev: local dibs device 335 + * @rgid: gid of remote dibs device 336 + * trigger_irq: zero: notification may be coalesced with other events 337 + * non-zero: notify immediately 338 + * @subtype: 4 byte event code, meaning is defined by dibs client 339 + * @data: 8 bytes of additional information, 340 + * meaning is defined by dibs client 341 + * 342 + * dibs devices can offer support for sending a control event of type 343 + * EVENT_SWR to a remote dibs device. 344 + * NOTE: handle_event() will be called for all registered dibs clients 345 + * at the remote device. 346 + * Return: zero on success 347 + */ 348 + int (*signal_event)(struct dibs_dev *dev, const uuid_t *rgid, 349 + u32 trigger_irq, u32 event_code, u64 info); 350 + /** 351 + * support_mmapped_rdmb() - can this device provide memory mapped 352 + * remote dmbs? (optional) 353 + * @dev: dibs device 354 + * 355 + * A dibs device can provide a kernel address + length, that represent 356 + * a remote target dmb (like MMIO). Alternatively to calling 357 + * move_data(), a dibs client can write into such a ghost-send-buffer 358 + * (= to this kernel address) and the data will automatically 359 + * immediately appear in the target dmb, even without calling 360 + * move_data(). 361 + * 362 + * Either all 3 function pointers for support_dmb_nocopy(), 363 + * attach_dmb() and detach_dmb() are defined, or all of them must 364 + * be NULL. 365 + * 366 + * Return: non-zero, if memory mapped remote dmbs are supported. 
367 + */ 368 + int (*support_mmapped_rdmb)(struct dibs_dev *dev); 369 + /** 370 + * attach_dmb() - attach local memory to a remote dmb 371 + * @dev: Local sending ism device 372 + * @dmb: all other parameters are passed in the form of a 373 + * dmb struct 374 + * TODO: (THIS IS CONFUSING, should be changed) 375 + * dmb_tok: (in) Token of the remote dmb, we want to attach to 376 + * cpu_addr: (out) MMIO address 377 + * dma_addr: (out) MMIO address (if applicable, invalid otherwise) 378 + * dmb_len: (out) length of local MMIO region, 379 + * equal to length of remote DMB. 380 + * sba_idx: (out) index of remote dmb (NOT HELPFUL, should be removed) 381 + * 382 + * Provides a memory address to the sender that can be used to 383 + * directly write into the remote dmb. 384 + * Memory is available until detach_dmb is called 385 + * 386 + * Return: Zero upon success, Error code otherwise 387 + */ 388 + int (*attach_dmb)(struct dibs_dev *dev, struct dibs_dmb *dmb); 389 + /** 390 + * detach_dmb() - Detach the ghost buffer from a remote dmb 391 + * @dev: ism device 392 + * @token: dmb token of the remote dmb 393 + * 394 + * No need to free cpu_addr. 
395 + * 396 + * Return: Zero upon success, Error code otherwise 397 + */ 398 + int (*detach_dmb)(struct dibs_dev *dev, u64 token); 399 + }; 400 + 401 + struct dibs_dev { 402 + struct list_head list; 403 + struct device dev; 404 + /* To be filled by device driver, before calling dibs_dev_add(): */ 405 + const struct dibs_dev_ops *ops; 406 + uuid_t gid; 407 + /* priv pointer for device driver */ 408 + void *drv_priv; 409 + 410 + /* priv pointer per client; for client usage only */ 411 + void *priv[MAX_DIBS_CLIENTS]; 412 + 413 + /* get this lock before accessing any of the fields below */ 414 + spinlock_t lock; 415 + /* array of client ids indexed by dmb idx; 416 + * can be used as indices into priv and subs arrays 417 + */ 418 + u8 *dmb_clientid_arr; 419 + /* Sparse array of all ISM clients */ 420 + struct dibs_client *subs[MAX_DIBS_CLIENTS]; 421 + }; 422 + 423 + static inline void dibs_set_priv(struct dibs_dev *dev, 424 + struct dibs_client *client, void *priv) 425 + { 426 + dev->priv[client->id] = priv; 427 + } 428 + 429 + static inline void *dibs_get_priv(struct dibs_dev *dev, 430 + struct dibs_client *client) 431 + { 432 + return dev->priv[client->id]; 433 + } 434 + 435 + /* ------- End of client-only functions ----------- */ 436 + 437 + /* Functions to be called by dibs device drivers: 438 + */ 439 + /** 440 + * dibs_dev_alloc() - allocate and reference device structure 441 + * 442 + * The following fields will be valid upon successful return: dev 443 + * NOTE: Use put_device(dibs_get_dev(@dibs)) to give up your reference instead 444 + * of freeing @dibs @dev directly once you have successfully called this 445 + * function. 446 + * Return: Pointer to dibs device structure 447 + */ 448 + struct dibs_dev *dibs_dev_alloc(void); 449 + /** 450 + * dibs_dev_add() - register with dibs layer and all clients 451 + * @dibs: dibs device 452 + * 453 + * The following fields must be valid upon entry: dev, ops, drv_priv 454 + * All fields will be valid upon successful return. 
455 + * Return: zero on success 456 + */ 457 + int dibs_dev_add(struct dibs_dev *dibs); 458 + /** 459 + * dibs_dev_del() - unregister from dibs layer and all clients 460 + * @dibs: dibs device 461 + */ 462 + void dibs_dev_del(struct dibs_dev *dibs); 463 + 464 + #endif /* _DIBS_H */
+1 -27
include/linux/ism.h
···
 11  11
 12  12 #include <linux/workqueue.h>
 13  13
 14   - struct ism_dmb {
 15   - 	u64 dmb_tok;
 16   - 	u64 rgid;
 17   - 	u32 dmb_len;
 18   - 	u32 sba_idx;
 19   - 	u32 vlan_valid;
 20   - 	u32 vlan_id;
 21   - 	void *cpu_addr;
 22   - 	dma_addr_t dma_addr;
 23   - };
 24   -
 25  14 /* Unless we gain unexpected popularity, this limit should hold for a while */
 26  15 #define MAX_CLIENTS		8
 27  16 #define ISM_NR_DMBS		1920
···
 19  30 	spinlock_t lock; /* protects the ism device */
 20  31 	spinlock_t cmd_lock; /* serializes cmds */
 21  32 	struct list_head list;
     33 + 	struct dibs_dev *dibs;
 22  34 	struct pci_dev *pdev;
 23  35
 24  36 	struct ism_sba *sba;
 25  37 	dma_addr_t sba_dma_addr;
 26  38 	DECLARE_BITMAP(sba_bitmap, ISM_NR_DMBS);
 27   - 	u8 *sba_client_arr;	/* entries are indices into 'clients' array */
 28  39 	void *priv[MAX_CLIENTS];
 29  40
 30  41 	struct ism_eq *ieq;
 31  42 	dma_addr_t ieq_dma_addr;
 32  43
 33   - 	struct device dev;
 34   - 	u64 local_gid;
 35  44 	int ieq_idx;
 36  45
 37  46 	struct ism_client *subs[MAX_CLIENTS];
···
 45  58
 46  59 struct ism_client {
 47  60 	const char *name;
 48   - 	void (*add)(struct ism_dev *dev);
 49   - 	void (*remove)(struct ism_dev *dev);
 50  61 	void (*handle_event)(struct ism_dev *dev, struct ism_event *event);
 51   - 	/* Parameter dmbemask contains a bit vector with updated DMBEs, if sent
 52   - 	 * via ism_move_data(). Callback function must handle all active bits
 53   - 	 * indicated by dmbemask.
 54   - 	 */
 55   - 	void (*handle_irq)(struct ism_dev *dev, unsigned int bit, u16 dmbemask);
 56  62 	/* Private area - don't touch! */
 57  63 	u8 id;
 58  64 };
···
 61  81 				void *priv) {
 62  82 	dev->priv[client->id] = priv;
 63  83 }
 64   -
 65   - int ism_register_dmb(struct ism_dev *dev, struct ism_dmb *dmb,
 66   - 		     struct ism_client *client);
 67   - int ism_unregister_dmb(struct ism_dev *dev, struct ism_dmb *dmb);
 68   - int ism_move(struct ism_dev *dev, u64 dmb_tok, unsigned int idx, bool sf,
 69   - 	     unsigned int offset, void *data, unsigned int size);
 70   -
 71  85 const struct smcd_ops *ism_get_smcd_ops(void);
 72  86
+2 -49
include/net/smc.h
···
 15  15 #include <linux/spinlock.h>
 16  16 #include <linux/types.h>
 17  17 #include <linux/wait.h>
 18   - #include "linux/ism.h"
     18 + #include <linux/dibs.h>
 19  19
 20  20 struct sock;
 21  21
···
 27  27 };
 28  28
 29  29 /* SMCD/ISM device driver interface */
 30   - struct smcd_dmb {
 31   - 	u64 dmb_tok;
 32   - 	u64 rgid;
 33   - 	u32 dmb_len;
 34   - 	u32 sba_idx;
 35   - 	u32 vlan_valid;
 36   - 	u32 vlan_id;
 37   - 	void *cpu_addr;
 38   - 	dma_addr_t dma_addr;
 39   - };
 40   -
 41   - #define ISM_EVENT_DMB	0
 42   - #define ISM_EVENT_GID	1
 43   - #define ISM_EVENT_SWR	2
 44   -
 45  30 #define ISM_RESERVED_VLANID	0x1FFF
 46   -
 47   - #define ISM_ERROR	0xFFFF
 48   -
 49   - struct smcd_dev;
 50  31
 51  32 struct smcd_gid {
 52  33 	u64 gid;
 53  34 	u64 gid_ext;
 54  35 };
 55  36
 56   - struct smcd_ops {
 57   - 	int (*query_remote_gid)(struct smcd_dev *dev, struct smcd_gid *rgid,
 58   - 				u32 vid_valid, u32 vid);
 59   - 	int (*register_dmb)(struct smcd_dev *dev, struct smcd_dmb *dmb,
 60   - 			    void *client);
 61   - 	int (*unregister_dmb)(struct smcd_dev *dev, struct smcd_dmb *dmb);
 62   - 	int (*move_data)(struct smcd_dev *dev, u64 dmb_tok, unsigned int idx,
 63   - 			 bool sf, unsigned int offset, void *data,
 64   - 			 unsigned int size);
 65   - 	int (*supports_v2)(void);
 66   - 	void (*get_local_gid)(struct smcd_dev *dev, struct smcd_gid *gid);
 67   - 	u16 (*get_chid)(struct smcd_dev *dev);
 68   - 	struct device* (*get_dev)(struct smcd_dev *dev);
 69   -
 70   - 	/* optional operations */
 71   - 	int (*add_vlan_id)(struct smcd_dev *dev, u64 vlan_id);
 72   - 	int (*del_vlan_id)(struct smcd_dev *dev, u64 vlan_id);
 73   - 	int (*set_vlan_required)(struct smcd_dev *dev);
 74   - 	int (*reset_vlan_required)(struct smcd_dev *dev);
 75   - 	int (*signal_event)(struct smcd_dev *dev, struct smcd_gid *rgid,
 76   - 			    u32 trigger_irq, u32 event_code, u64 info);
 77   - 	int (*support_dmb_nocopy)(struct smcd_dev *dev);
 78   - 	int (*attach_dmb)(struct smcd_dev *dev, struct smcd_dmb *dmb);
 79   - 	int (*detach_dmb)(struct smcd_dev *dev, u64 token);
 80   - };
 81   -
 82  37 struct smcd_dev {
 83   - 	const struct smcd_ops *ops;
 84   - 	void *priv;
 85   - 	void *client;
     38 + 	struct dibs_dev *dibs;
 86  39 	struct list_head list;
 87  40 	spinlock_t lock;
 88  41 	struct smc_connection **conn;
+1
net/Kconfig
···
 88  88 source "net/xfrm/Kconfig"
 89  89 source "net/iucv/Kconfig"
 90  90 source "net/smc/Kconfig"
     91 + source "drivers/dibs/Kconfig"
 91  92 source "net/xdp/Kconfig"
 92  93
 93  94 config NET_HANDSHAKE
+1 -15
net/smc/Kconfig
···
  1   1 # SPDX-License-Identifier: GPL-2.0-only
  2   2 config SMC
  3   3 	tristate "SMC socket protocol family"
  4    - 	depends on INET && INFINIBAND
  5    - 	depends on m || ISM != m
      4 + 	depends on INET && INFINIBAND && DIBS
  6   5 	help
  7   6 	  SMC-R provides a "sockets over RDMA" solution making use of
  8   7 	  RDMA over Converged Ethernet (RoCE) technology to upgrade
···
 19  20 	  smcss.
 20  21
 21  22 	  if unsure, say Y.
 22    -
 23    - config SMC_LO
 24    - 	bool "SMC intra-OS shortcut with loopback-ism"
 25    - 	depends on SMC
 26    - 	default n
 27    - 	help
 28    - 	  SMC_LO enables the creation of an Emulated-ISM device named
 29    - 	  loopback-ism in SMC and makes use of it for transferring data
 30    - 	  when communication occurs within the same OS. This helps in
 31    - 	  convenient testing of SMC-D since loopback-ism is independent
 32    - 	  of architecture or hardware.
 33    -
 34    - 	  if unsure, say N.
-1
net/smc/Makefile
···
  6   6 smc-y += smc_cdc.o smc_tx.o smc_rx.o smc_close.o smc_ism.o smc_netlink.o smc_stats.o
  7   7 smc-y += smc_tracepoint.o smc_inet.o
  8   8 smc-$(CONFIG_SYSCTL) += smc_sysctl.o
  9    - smc-$(CONFIG_SMC_LO) += smc_loopback.o
+1 -11
net/smc/af_smc.c
···
   57   57 #include "smc_stats.h"
   58   58 #include "smc_tracepoint.h"
   59   59 #include "smc_sysctl.h"
   60    - #include "smc_loopback.h"
   61   60 #include "smc_inet.h"
   62   61
   63   62 static DEFINE_MUTEX(smc_server_lgr_pending);	/* serialize link group
···
 3590 3591 		goto out_sock;
 3591 3592 	}
 3592 3593
 3593    - 	rc = smc_loopback_init();
 3594    - 	if (rc) {
 3595    - 		pr_err("%s: smc_loopback_init fails with %d\n", __func__, rc);
 3596    - 		goto out_ib;
 3597    - 	}
 3598    -
 3599 3594 	rc = tcp_register_ulp(&smc_ulp_ops);
 3600 3595 	if (rc) {
 3601 3596 		pr_err("%s: tcp_ulp_register fails with %d\n", __func__, rc);
 3602    - 		goto out_lo;
      3597 + 		goto out_ib;
 3603 3598 	}
 3604 3599 	rc = smc_inet_init();
 3605 3600 	if (rc) {
···
 3604 3611 	return 0;
 3605 3612 out_ulp:
 3606 3613 	tcp_unregister_ulp(&smc_ulp_ops);
 3607    - out_lo:
 3608    - 	smc_loopback_exit();
 3609 3614 out_ib:
 3610 3615 	smc_ib_unregister_client();
 3611 3616 out_sock:
···
 3642 3651 	tcp_unregister_ulp(&smc_ulp_ops);
 3643 3652 	sock_unregister(PF_SMC);
 3644 3653 	smc_core_exit();
 3645    - 	smc_loopback_exit();
 3646 3654 	smc_ib_unregister_client();
 3647 3655 	smc_ism_exit();
 3648 3656 	destroy_workqueue(smc_close_wq);
+3 -3
net/smc/smc_clc.c
···
  916  916 	/* add SMC-D specifics */
  917  917 	if (ini->ism_dev[0]) {
  918  918 		smcd = ini->ism_dev[0];
  919    - 		smcd->ops->get_local_gid(smcd, &smcd_gid);
       919 + 		copy_to_smcdgid(&smcd_gid, &smcd->dibs->gid);
  920  920 		pclc_smcd->ism.gid = htonll(smcd_gid.gid);
  921  921 		pclc_smcd->ism.chid =
  922  922 			htons(smc_ism_get_chid(ini->ism_dev[0]));
···
  966  966 	if (ini->ism_offered_cnt) {
  967  967 		for (i = 1; i <= ini->ism_offered_cnt; i++) {
  968  968 			smcd = ini->ism_dev[i];
  969    - 			smcd->ops->get_local_gid(smcd, &smcd_gid);
       969 + 			copy_to_smcdgid(&smcd_gid, &smcd->dibs->gid);
  970  970 			gidchids[entry].chid =
  971  971 				htons(smc_ism_get_chid(ini->ism_dev[i]));
  972  972 			gidchids[entry].gid = htonll(smcd_gid.gid);
···
 1059 1059 		/* SMC-D specific settings */
 1060 1060 		memcpy(clc->hdr.eyecatcher, SMCD_EYECATCHER,
 1061 1061 		       sizeof(SMCD_EYECATCHER));
 1062    - 		smcd->ops->get_local_gid(smcd, &smcd_gid);
      1062 + 		copy_to_smcdgid(&smcd_gid, &smcd->dibs->gid);
 1063 1063 		clc->hdr.typev1 = SMC_TYPE_D;
 1064 1064 		clc->d0.gid = htonll(smcd_gid.gid);
 1065 1065 		clc->d0.token = htonll(conn->rmb_desc->token);
+3 -3
net/smc/smc_core.c
···
  555  555
  556  556 	if (nla_put_u32(skb, SMC_NLA_LGR_D_ID, *((u32 *)&lgr->id)))
  557  557 		goto errattr;
  558    - 	smcd->ops->get_local_gid(smcd, &smcd_gid);
       558 + 	copy_to_smcdgid(&smcd_gid, &smcd->dibs->gid);
  559  559 	if (nla_put_u64_64bit(skb, SMC_NLA_LGR_D_GID,
  560  560 			      smcd_gid.gid, SMC_NLA_LGR_D_PAD))
  561  561 		goto errattr;
···
  924  924 	if (ini->is_smcd) {
  925  925 		/* SMC-D specific settings */
  926  926 		smcd = ini->ism_dev[ini->ism_selected];
  927    - 		get_device(smcd->ops->get_dev(smcd));
       927 + 		get_device(&smcd->dibs->dev);
  928  928 		lgr->peer_gid.gid =
  929  929 			ini->ism_peer_gid[ini->ism_selected].gid;
  930  930 		lgr->peer_gid.gid_ext =
···
 1474 1474 	destroy_workqueue(lgr->tx_wq);
 1475 1475 	if (lgr->is_smcd) {
 1476 1476 		smc_ism_put_vlan(lgr->smcd, lgr->vlan_id);
 1477    - 		put_device(lgr->smcd->ops->get_dev(lgr->smcd));
      1477 + 		put_device(&lgr->smcd->dibs->dev);
 1478 1478 	}
 1479 1479 	smc_lgr_put(lgr); /* theoretically last lgr_put */
 1480 1480 }
+5
net/smc/smc_core.h
···
  13  13 #define _SMC_CORE_H
  14  14
  15  15 #include <linux/atomic.h>
      16 + #include <linux/types.h>
  16  17 #include <linux/smc.h>
  17  18 #include <linux/pci.h>
  18  19 #include <rdma/ib_verbs.h>
···
 222 221 				/* virtually contiguous */
 223 222 	};
 224 223 	struct { /* SMC-D */
     224 + 		/* SMC-D tx buffer */
     225 + 		bool is_attached;
     226 + 			/* no need for explicit writes */
     227 + 		/* SMC-D rx buffer: */
 225 228 		unsigned short sba_idx;
 226 229 			/* SBA index number */
 227 230 		u64 token;
+1 -1
net/smc/smc_diag.c
···
 175 175 	dinfo.linkid = *((u32 *)conn->lgr->id);
 176 176 	dinfo.peer_gid = conn->lgr->peer_gid.gid;
 177 177 	dinfo.peer_gid_ext = conn->lgr->peer_gid.gid_ext;
 178   - 	smcd->ops->get_local_gid(smcd, &smcd_gid);
     178 + 	copy_to_smcdgid(&smcd_gid, &smcd->dibs->gid);
 179 179 	dinfo.my_gid = smcd_gid.gid;
 180 180 	dinfo.my_gid_ext = smcd_gid.gid_ext;
 181 181 	dinfo.token = conn->rmb_desc->token;
+119 -105
net/smc/smc_ism.c
···
  17  17 #include "smc_ism.h"
  18  18 #include "smc_pnet.h"
  19  19 #include "smc_netlink.h"
  20    - #include "linux/ism.h"
       20 + #include "linux/dibs.h"
  21  21
  22  22 struct smcd_dev_list smcd_dev_list = {
  23  23 	.list = LIST_HEAD_INIT(smcd_dev_list.list),
···
  27  27 static bool smc_ism_v2_capable;
  28  28 static u8 smc_ism_v2_system_eid[SMC_MAX_EID_LEN];
  29  29
  30    - #if IS_ENABLED(CONFIG_ISM)
  31    - static void smcd_register_dev(struct ism_dev *ism);
  32    - static void smcd_unregister_dev(struct ism_dev *ism);
  33    - static void smcd_handle_event(struct ism_dev *ism, struct ism_event *event);
  34    - static void smcd_handle_irq(struct ism_dev *ism, unsigned int dmbno,
       30 + static void smcd_register_dev(struct dibs_dev *dibs);
       31 + static void smcd_unregister_dev(struct dibs_dev *dibs);
       32 + static void smcd_handle_event(struct dibs_dev *dibs,
       33 + 			      const struct dibs_event *event);
       34 + static void smcd_handle_irq(struct dibs_dev *dibs, unsigned int dmbno,
  35  35 			    u16 dmbemask);
  36  36
  37    - static struct ism_client smc_ism_client = {
  38    - 	.name = "SMC-D",
  39    - 	.add = smcd_register_dev,
  40    - 	.remove = smcd_unregister_dev,
       37 + static struct dibs_client_ops smc_client_ops = {
       38 + 	.add_dev = smcd_register_dev,
       39 + 	.del_dev = smcd_unregister_dev,
  41  40 	.handle_event = smcd_handle_event,
  42  41 	.handle_irq = smcd_handle_irq,
  43  42 };
  44    - #endif
       43 +
       44 + static struct dibs_client smc_dibs_client = {
       45 + 	.name = "SMC-D",
       46 + 	.ops = &smc_client_ops,
       47 + };
  45  48
  46  49 static void smc_ism_create_system_eid(void)
  47  50 {
···
  71  68 int smc_ism_cantalk(struct smcd_gid *peer_gid, unsigned short vlan_id,
  72  69 		    struct smcd_dev *smcd)
  73  70 {
  74    - 	return smcd->ops->query_remote_gid(smcd, peer_gid, vlan_id ? 1 : 0,
  75    - 					   vlan_id);
       71 + 	struct dibs_dev *dibs = smcd->dibs;
       72 + 	uuid_t ism_rgid;
       73 +
       74 + 	copy_to_dibsgid(&ism_rgid, peer_gid);
       75 + 	return dibs->ops->query_remote_gid(dibs, &ism_rgid, vlan_id ? 1 : 0,
       76 + 					   vlan_id);
  76  77 }
  77  78
  78  79 void smc_ism_get_system_eid(u8 **eid)
···
  89  82
  90  83 u16 smc_ism_get_chid(struct smcd_dev *smcd)
  91  84 {
  92    - 	return smcd->ops->get_chid(smcd);
       85 + 	return smcd->dibs->ops->get_fabric_id(smcd->dibs);
  93  86 }
  94  87
  95  88 /* HW supports ISM V2 and thus System EID is defined */
···
 138 131
 139 132 	if (!vlanid)			/* No valid vlan id */
 140 133 		return -EINVAL;
 141   - 	if (!smcd->ops->add_vlan_id)
      134 + 	if (!smcd->dibs->ops->add_vlan_id)
 142 135 		return -EOPNOTSUPP;
 143 136
 144 137 	/* create new vlan entry, in case we need it */
···
 161 154 	/* no existing entry found.
 162 155 	 * add new entry to device; might fail, e.g., if HW limit reached
 163 156 	 */
 164   - 	if (smcd->ops->add_vlan_id(smcd, vlanid)) {
      157 + 	if (smcd->dibs->ops->add_vlan_id(smcd->dibs, vlanid)) {
 165 158 		kfree(new_vlan);
 166 159 		rc = -EIO;
 167 160 		goto out;
···
 185 178
 186 179 	if (!vlanid)			/* No valid vlan id */
 187 180 		return -EINVAL;
 188   - 	if (!smcd->ops->del_vlan_id)
      181 + 	if (!smcd->dibs->ops->del_vlan_id)
 189 182 		return -EOPNOTSUPP;
 190 183
 191 184 	spin_lock_irqsave(&smcd->lock, flags);
···
 203 196 	}
 204 197
 205 198 	/* Found and the last reference just gone */
 206   - 	if (smcd->ops->del_vlan_id(smcd, vlanid))
      199 + 	if (smcd->dibs->ops->del_vlan_id(smcd->dibs, vlanid))
 207 200 		rc = -EIO;
 208 201 	list_del(&vlan->list);
 209 202 	kfree(vlan);
···
 212 205 	return rc;
 213 206 }
 214 207
 215   - int smc_ism_unregister_dmb(struct smcd_dev *smcd, struct smc_buf_desc *dmb_desc)
      208 + void smc_ism_unregister_dmb(struct smcd_dev *smcd,
      209 + 			    struct smc_buf_desc *dmb_desc)
 216 210 {
 217   - 	struct smcd_dmb dmb;
 218   - 	int rc = 0;
      211 + 	struct dibs_dmb dmb;
 219 212
 220 213 	if (!dmb_desc->dma_addr)
 221   - 		return rc;
      214 + 		return;
 222 215
 223 216 	memset(&dmb, 0, sizeof(dmb));
 224 217 	dmb.dmb_tok = dmb_desc->token;
 225   - 	dmb.sba_idx = dmb_desc->sba_idx;
      218 + 	dmb.idx = dmb_desc->sba_idx;
 226 219 	dmb.cpu_addr = dmb_desc->cpu_addr;
 227 220 	dmb.dma_addr = dmb_desc->dma_addr;
 228 221 	dmb.dmb_len = dmb_desc->len;
 229   - 	rc = smcd->ops->unregister_dmb(smcd, &dmb);
 230   - 	if (!rc || rc == ISM_ERROR) {
 231   - 		dmb_desc->cpu_addr = NULL;
 232   - 		dmb_desc->dma_addr = 0;
 233   - 	}
 234 222
 235   - 	return rc;
      223 + 	smcd->dibs->ops->unregister_dmb(smcd->dibs, &dmb);
      224 +
      225 + 	return;
 236 226 }
 237 227
 238 228 int smc_ism_register_dmb(struct smc_link_group *lgr, int dmb_len,
 239 229 			 struct smc_buf_desc *dmb_desc)
 240 230 {
 241   - 	struct smcd_dmb dmb;
      231 + 	struct dibs_dev *dibs;
      232 + 	struct dibs_dmb dmb;
 242 233 	int rc;
 243 234
 244 235 	memset(&dmb, 0, sizeof(dmb));
 245 236 	dmb.dmb_len = dmb_len;
 246   - 	dmb.sba_idx = dmb_desc->sba_idx;
      237 + 	dmb.idx = dmb_desc->sba_idx;
 247 238 	dmb.vlan_id = lgr->vlan_id;
 248   - 	dmb.rgid = lgr->peer_gid.gid;
 249   - 	rc = lgr->smcd->ops->register_dmb(lgr->smcd, &dmb, lgr->smcd->client);
      239 + 	copy_to_dibsgid(&dmb.rgid, &lgr->peer_gid);
      240 +
      241 + 	dibs = lgr->smcd->dibs;
      242 + 	rc = dibs->ops->register_dmb(dibs, &dmb, &smc_dibs_client);
 250 243 	if (!rc) {
 251   - 		dmb_desc->sba_idx = dmb.sba_idx;
      244 + 		dmb_desc->sba_idx = dmb.idx;
 252 245 		dmb_desc->token = dmb.dmb_tok;
 253 246 		dmb_desc->cpu_addr = dmb.cpu_addr;
 254 247 		dmb_desc->dma_addr = dmb.dma_addr;
···
 263 256 	 * merging sndbuf with peer DMB to avoid
 264 257 	 * data copies between them.
 265 258 	 */
 266   - 	return (smcd->ops->support_dmb_nocopy &&
 267   - 		smcd->ops->support_dmb_nocopy(smcd));
      259 + 	return (smcd->dibs->ops->support_mmapped_rdmb &&
      260 + 		smcd->dibs->ops->support_mmapped_rdmb(smcd->dibs));
 268 261 }
 269 262
 270 263 int smc_ism_attach_dmb(struct smcd_dev *dev, u64 token,
 271 264 		       struct smc_buf_desc *dmb_desc)
 272 265 {
 273   - 	struct smcd_dmb dmb;
      266 + 	struct dibs_dmb dmb;
 274 267 	int rc = 0;
 275 268
 276   - 	if (!dev->ops->attach_dmb)
      269 + 	if (!dev->dibs->ops->attach_dmb)
 277 270 		return -EINVAL;
 278 271
 279 272 	memset(&dmb, 0, sizeof(dmb));
 280 273 	dmb.dmb_tok = token;
 281   - 	rc = dev->ops->attach_dmb(dev, &dmb);
      274 + 	rc = dev->dibs->ops->attach_dmb(dev->dibs, &dmb);
 282 275 	if (!rc) {
 283   - 		dmb_desc->sba_idx = dmb.sba_idx;
      276 + 		dmb_desc->sba_idx = dmb.idx;
 284 277 		dmb_desc->token = dmb.dmb_tok;
 285 278 		dmb_desc->cpu_addr = dmb.cpu_addr;
 286 279 		dmb_desc->dma_addr = dmb.dma_addr;
 287 280 		dmb_desc->len = dmb.dmb_len;
      281 + 		dmb_desc->is_attached = true;
 288 282 	}
 289 283 	return rc;
 290 284 }
 291 285
 292 286 int smc_ism_detach_dmb(struct smcd_dev *dev, u64 token)
 293 287 {
 294   - 	if (!dev->ops->detach_dmb)
      288 + 	if (!dev->dibs->ops->detach_dmb)
 295 289 		return -EINVAL;
 296 290
 297   - 	return dev->ops->detach_dmb(dev, token);
      291 + 	return dev->dibs->ops->detach_dmb(dev->dibs, token);
 298 292 }
 299 293
 300 294 static int smc_nl_handle_smcd_dev(struct smcd_dev *smcd,
···
 305 297 	char smc_pnet[SMC_MAX_PNETID_LEN + 1];
 306 298 	struct smc_pci_dev smc_pci_dev;
 307 299 	struct nlattr *port_attrs;
      300 + 	struct dibs_dev *dibs;
 308 301 	struct nlattr *attrs;
 309   - 	struct ism_dev *ism;
 310 302 	int use_cnt = 0;
 311 303 	void *nlh;
 312 304
 313   - 	ism = smcd->priv;
      305 + 	dibs = smcd->dibs;
 314 306 	nlh = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
 315 307 			  &smc_gen_nl_family, NLM_F_MULTI,
 316 308 			  SMC_NETLINK_GET_DEV_SMCD);
···
 325 317 	if (nla_put_u8(skb, SMC_NLA_DEV_IS_CRIT, use_cnt > 0))
 326 318 		goto errattr;
 327 319 	memset(&smc_pci_dev, 0, sizeof(smc_pci_dev));
 328   - 	smc_set_pci_values(to_pci_dev(ism->dev.parent), &smc_pci_dev);
      320 + 	smc_set_pci_values(to_pci_dev(dibs->dev.parent), &smc_pci_dev);
 329 321 	if (nla_put_u32(skb, SMC_NLA_DEV_PCI_FID, smc_pci_dev.pci_fid))
 330 322 		goto errattr;
 331 323 	if (nla_put_u16(skb, SMC_NLA_DEV_PCI_CHID, smc_pci_dev.pci_pchid))
···
 375 367 	list_for_each_entry(smcd, &dev_list->list, list) {
 376 368 		if (num < snum)
 377 369 			goto next;
 378   - 		if (smc_ism_is_loopback(smcd))
      370 + 		if (smc_ism_is_loopback(smcd->dibs))
 379 371 			goto next;
 380 372 		if (smc_nl_handle_smcd_dev(smcd, skb, cb))
 381 373 			goto errout;
···
 393 385 	return skb->len;
 394 386 }
 395 387
 396   - #if IS_ENABLED(CONFIG_ISM)
 397 388 struct smc_ism_event_work {
 398 389 	struct work_struct work;
 399 390 	struct smcd_dev *smcd;
 400   - 	struct ism_event event;
      391 + 	struct dibs_event event;
 401 392 };
 402 393
 403 394 #define ISM_EVENT_REQUEST		0x0001
···
 416 409
 417 410 static void smcd_handle_sw_event(struct smc_ism_event_work *wrk)
 418 411 {
 419   - 	struct smcd_gid peer_gid = { .gid = wrk->event.tok,
 420   - 				     .gid_ext = 0 };
      412 + 	struct dibs_dev *dibs = wrk->smcd->dibs;
 421 413 	union smcd_sw_event_info ev_info;
      414 + 	struct smcd_gid peer_gid;
      415 + 	uuid_t ism_rgid;
 422 416
 423   - 	ev_info.info = wrk->event.info;
 424   - 	switch (wrk->event.code) {
      417 + 	copy_to_smcdgid(&peer_gid, &wrk->event.gid);
      418 + 	ev_info.info = wrk->event.data;
      419 + 	switch (wrk->event.subtype) {
 425 420 	case ISM_EVENT_CODE_SHUTDOWN:	/* Peer shut down DMBs */
 426 421 		smc_smcd_terminate(wrk->smcd, &peer_gid, ev_info.vlan_id);
 427 422 		break;
 428 423 	case ISM_EVENT_CODE_TESTLINK:	/* Activity timer */
 429 424 		if (ev_info.code == ISM_EVENT_REQUEST &&
 430   - 		    wrk->smcd->ops->signal_event) {
      425 + 		    dibs->ops->signal_event) {
 431 426 			ev_info.code = ISM_EVENT_RESPONSE;
 432   - 			wrk->smcd->ops->signal_event(wrk->smcd,
 433   - 						     &peer_gid,
 434   - 						     ISM_EVENT_REQUEST_IR,
 435   - 						     ISM_EVENT_CODE_TESTLINK,
 436   - 						     ev_info.info);
 437   - 		}
      427 + 			copy_to_dibsgid(&ism_rgid, &peer_gid);
      428 + 			dibs->ops->signal_event(dibs, &ism_rgid,
      429 + 						ISM_EVENT_REQUEST_IR,
      430 + 						ISM_EVENT_CODE_TESTLINK,
      431 + 						ev_info.info);
      432 + 		}
 438 433 		break;
 439 434 	}
 440 435 }
···
 446 437 {
 447 438 	struct smc_ism_event_work *wrk =
 448 439 		container_of(work, struct smc_ism_event_work, work);
 449   - 	struct smcd_gid smcd_gid = { .gid = wrk->event.tok,
 450   - 				     .gid_ext = 0 };
      440 + 	struct smcd_gid smcd_gid;
      441 +
      442 + 	copy_to_smcdgid(&smcd_gid, &wrk->event.gid);
 451 443
 452 444 	switch (wrk->event.type) {
 453   - 	case ISM_EVENT_GID:	/* GID event, token is peer GID */
      445 + 	case DIBS_DEV_EVENT:	/* GID event, token is peer GID */
 454 446 		smc_smcd_terminate(wrk->smcd, &smcd_gid, VLAN_VID_MASK);
 455 447 		break;
 456   - 	case ISM_EVENT_DMB:
      448 + 	case DIBS_BUF_EVENT:
 457 449 		break;
 458   - 	case ISM_EVENT_SWR:	/* Software defined event */
      450 + 	case DIBS_SW_EVENT:	/* Software defined event */
 459 451 		smcd_handle_sw_event(wrk);
 460 452 		break;
 461 453 	}
 462 454 	kfree(wrk);
 463 455 }
 464 456
 465   - static struct smcd_dev *smcd_alloc_dev(struct device *parent, const char *name,
 466   - 				       const struct smcd_ops *ops, int max_dmbs)
      457 + static struct smcd_dev *smcd_alloc_dev(const char *name, int max_dmbs)
 467 458 {
 468 459 	struct smcd_dev *smcd;
 469 460
 470   - 	smcd = devm_kzalloc(parent, sizeof(*smcd), GFP_KERNEL);
      461 + 	smcd = kzalloc(sizeof(*smcd), GFP_KERNEL);
 471 462 	if (!smcd)
 472 463 		return NULL;
 473   - 	smcd->conn = devm_kcalloc(parent, max_dmbs,
 474   - 				  sizeof(struct smc_connection *), GFP_KERNEL);
      464 + 	smcd->conn = kcalloc(max_dmbs, sizeof(struct smc_connection *),
      465 + 			     GFP_KERNEL);
 475 466 	if (!smcd->conn)
 476   - 		return NULL;
      467 + 		goto free_smcd;
 477 468
 478 469 	smcd->event_wq = alloc_ordered_workqueue("ism_evt_wq-%s)",
 479 470 						 WQ_MEM_RECLAIM, name);
 480 471 	if (!smcd->event_wq)
 481   - 		return NULL;
 482   -
 483   - 	smcd->ops = ops;
      472 + 		goto free_conn;
 484 473
 485 474 	spin_lock_init(&smcd->lock);
 486 475 	spin_lock_init(&smcd->lgr_lock);
···
 486 479 	INIT_LIST_HEAD(&smcd->lgr_list);
 487 480 	init_waitqueue_head(&smcd->lgrs_deleted);
 488 481 	return smcd;
      482 +
      483 + free_conn:
      484 + 	kfree(smcd->conn);
      485 + free_smcd:
      486 + 	kfree(smcd);
      487 + 	return NULL;
 489 488 }
 490 489
 491   - static void smcd_register_dev(struct ism_dev *ism)
      490 + static void smcd_register_dev(struct dibs_dev *dibs)
 492 491 {
 493   - 	const struct smcd_ops *ops = ism_get_smcd_ops();
 494 492 	struct smcd_dev *smcd, *fentry;
      493 + 	int max_dmbs;
 495 494
 496   - 	if (!ops)
 497   - 		return;
      495 + 	max_dmbs = dibs->ops->max_dmbs();
 498 496
 499   - 	smcd = smcd_alloc_dev(&ism->pdev->dev, dev_name(&ism->pdev->dev), ops,
 500   - 			      ISM_NR_DMBS);
      497 + 	smcd = smcd_alloc_dev(dev_name(&dibs->dev), max_dmbs);
 501 498 	if (!smcd)
 502 499 		return;
 503   - 	smcd->priv = ism;
 504   - 	smcd->client = &smc_ism_client;
 505   - 	ism_set_priv(ism, &smc_ism_client, smcd);
 506   - 	if (smc_pnetid_by_dev_port(&ism->pdev->dev, 0, smcd->pnetid))
      500 +
      501 + 	smcd->dibs = dibs;
      502 + 	dibs_set_priv(dibs, &smc_dibs_client, smcd);
      503 +
      504 + 	if (smc_pnetid_by_dev_port(dibs->dev.parent, 0, smcd->pnetid))
 507 505 		smc_pnetid_by_table_smcd(smcd);
 508 506
 509   - 	if (smcd->ops->supports_v2())
      507 + 	if (smc_ism_is_loopback(dibs) ||
      508 + 	    (dibs->ops->add_vlan_id &&
      509 + 	     !dibs->ops->add_vlan_id(dibs, ISM_RESERVED_VLANID))) {
 510 510 		smc_ism_set_v2_capable();
      511 + 	}
      512 +
 511 513 	mutex_lock(&smcd_dev_list.mutex);
 512 514 	/* sort list:
 513 515 	 * - devices without pnetid before devices with pnetid;
···
 525 509 	if (!smcd->pnetid[0]) {
 526 510 		fentry = list_first_entry_or_null(&smcd_dev_list.list,
 527 511 						  struct smcd_dev, list);
 528   - 		if (fentry && smc_ism_is_loopback(fentry))
      512 + 		if (fentry && smc_ism_is_loopback(fentry->dibs))
 529 513 			list_add(&smcd->list, &fentry->list);
 530 514 		else
 531 515 			list_add(&smcd->list, &smcd_dev_list.list);
···
 536 520
 537 521 	if (smc_pnet_is_pnetid_set(smcd->pnetid))
 538 522 		pr_warn_ratelimited("smc: adding smcd device %s with pnetid %.16s%s\n",
 539   - 				    dev_name(&ism->dev), smcd->pnetid,
      523 + 				    dev_name(&dibs->dev), smcd->pnetid,
 540 524 				    smcd->pnetid_by_user ?
 541 525 					" (user defined)" :
 542 526 					"");
 543 527 	else
 544 528 		pr_warn_ratelimited("smc: adding smcd device %s without pnetid\n",
 545   - 				    dev_name(&ism->dev));
      529 + 				    dev_name(&dibs->dev));
 546 530 	return;
 547 531 }
 548 532
 549   - static void smcd_unregister_dev(struct ism_dev *ism)
      533 + static void smcd_unregister_dev(struct dibs_dev *dibs)
 550 534 {
 551   - 	struct smcd_dev *smcd = ism_get_priv(ism, &smc_ism_client);
      535 + 	struct smcd_dev *smcd = dibs_get_priv(dibs, &smc_dibs_client);
 552 536
 553 537 	pr_warn_ratelimited("smc: removing smcd device %s\n",
 554   - 			    dev_name(&ism->dev));
      538 + 			    dev_name(&dibs->dev));
 555 539 	smcd->going_away = 1;
 556 540 	smc_smcd_terminate_all(smcd);
 557 541 	mutex_lock(&smcd_dev_list.mutex);
 558 542 	list_del_init(&smcd->list);
 559 543 	mutex_unlock(&smcd_dev_list.mutex);
 560 544 	destroy_workqueue(smcd->event_wq);
      545 + 	kfree(smcd->conn);
      546 + 	kfree(smcd);
 561 547 }
 562 548
 563 549 /* SMCD Device event handler. Called from ISM device interrupt handler.
···
 573 555  * Context:
 574 556  * - Function called in IRQ context from ISM device driver event handler.
 575 557  */
 576   - static void smcd_handle_event(struct ism_dev *ism, struct ism_event *event)
      558 + static void smcd_handle_event(struct dibs_dev *dibs,
      559 + 			      const struct dibs_event *event)
 577 560 {
 578   - 	struct smcd_dev *smcd = ism_get_priv(ism, &smc_ism_client);
      561 + 	struct smcd_dev *smcd = dibs_get_priv(dibs, &smc_dibs_client);
 579 562 	struct smc_ism_event_work *wrk;
 580 563
 581 564 	if (smcd->going_away)
···
 598 579  * Context:
 599 580  * - Function called in IRQ context from ISM device driver IRQ handler.
 600 581  */
 601   - static void smcd_handle_irq(struct ism_dev *ism, unsigned int dmbno,
      582 + static void smcd_handle_irq(struct dibs_dev *dibs, unsigned int dmbno,
 602 583 			    u16 dmbemask)
 603 584 {
 604   - 	struct smcd_dev *smcd = ism_get_priv(ism, &smc_ism_client);
      585 + 	struct smcd_dev *smcd = dibs_get_priv(dibs, &smc_dibs_client);
 605 586 	struct smc_connection *conn = NULL;
 606 587 	unsigned long flags;
 607 588
···
 611 592 		tasklet_schedule(&conn->rx_tsklet);
 612 593 	spin_unlock_irqrestore(&smcd->lock, flags);
 613 594 }
 614   - #endif
 615 595
 616 596 int smc_ism_signal_shutdown(struct smc_link_group *lgr)
 617 597 {
 618 598 	int rc = 0;
 619   - #if IS_ENABLED(CONFIG_ISM)
 620 599 	union smcd_sw_event_info ev_info;
      600 + 	uuid_t ism_rgid;
 621 601
 622 602 	if (lgr->peer_shutdown)
 623 603 		return 0;
 624   - 	if (!lgr->smcd->ops->signal_event)
      604 + 	if (!lgr->smcd->dibs->ops->signal_event)
 625 605 		return 0;
 626 606
 627 607 	memcpy(ev_info.uid, lgr->id, SMC_LGR_ID_SIZE);
 628 608 	ev_info.vlan_id = lgr->vlan_id;
 629 609 	ev_info.code = ISM_EVENT_REQUEST;
 630   - 	rc = lgr->smcd->ops->signal_event(lgr->smcd, &lgr->peer_gid,
      610 + 	copy_to_dibsgid(&ism_rgid, &lgr->peer_gid);
      611 + 	rc = lgr->smcd->dibs->ops->signal_event(lgr->smcd->dibs, &ism_rgid,
 631 612 					  ISM_EVENT_REQUEST_IR,
 632 613 					  ISM_EVENT_CODE_SHUTDOWN,
 633 614 					  ev_info.info);
 634   - #endif
 635 615 	return rc;
 636 616 }
 637 617
···
 641 623 	smc_ism_v2_capable = false;
 642 624 	smc_ism_create_system_eid();
 643 625
 644   - #if IS_ENABLED(CONFIG_ISM)
 645   - 	rc = ism_register_client(&smc_ism_client);
 646   - #endif
      626 + 	rc = dibs_register_client(&smc_dibs_client);
 647 627 	return rc;
 648 628 }
 649 629
 650 630 void smc_ism_exit(void)
 651 631 {
 652   - #if IS_ENABLED(CONFIG_ISM)
 653   - 	ism_unregister_client(&smc_ism_client);
 654   - #endif
      632 + 	dibs_unregister_client(&smc_dibs_client);
 655 633 }
+31 -5
net/smc/smc_ism.h
···
 #include <linux/uio.h>
 #include <linux/types.h>
 #include <linux/mutex.h>
+#include <linux/dibs.h>

 #include "smc.h"
···
 int smc_ism_put_vlan(struct smcd_dev *dev, unsigned short vlan_id);
 int smc_ism_register_dmb(struct smc_link_group *lgr, int buf_size,
                          struct smc_buf_desc *dmb_desc);
-int smc_ism_unregister_dmb(struct smcd_dev *dev, struct smc_buf_desc *dmb_desc);
+void smc_ism_unregister_dmb(struct smcd_dev *dev,
+                            struct smc_buf_desc *dmb_desc);
 bool smc_ism_support_dmb_nocopy(struct smcd_dev *smcd);
 int smc_ism_attach_dmb(struct smcd_dev *dev, u64 token,
                        struct smc_buf_desc *dmb_desc);
···
 {
         int rc;

-        rc = smcd->ops->move_data(smcd, dmb_tok, idx, sf, offset, data, len);
+        rc = smcd->dibs->ops->move_data(smcd->dibs, dmb_tok, idx, sf, offset,
+                                        data, len);
+
         return rc < 0 ? rc : 0;
 }
···
 static inline bool smc_ism_is_emulated(struct smcd_dev *smcd)
 {
-        u16 chid = smcd->ops->get_chid(smcd);
+        u16 chid = smcd->dibs->ops->get_fabric_id(smcd->dibs);

         return __smc_ism_is_emulated(chid);
 }

-static inline bool smc_ism_is_loopback(struct smcd_dev *smcd)
+static inline bool smc_ism_is_loopback(struct dibs_dev *dibs)
 {
-        return (smcd->ops->get_chid(smcd) == 0xFFFF);
+        return (dibs->ops->get_fabric_id(dibs) == DIBS_LOOPBACK_FABRIC);
+}
+
+static inline void copy_to_smcdgid(struct smcd_gid *sgid, uuid_t *dibs_gid)
+{
+        __be64 temp;
+
+        memcpy(&temp, dibs_gid, sizeof(sgid->gid));
+        sgid->gid = ntohll(temp);
+        memcpy(&temp, (uint8_t *)dibs_gid + sizeof(sgid->gid),
+               sizeof(sgid->gid_ext));
+        sgid->gid_ext = ntohll(temp);
+}
+
+static inline void copy_to_dibsgid(uuid_t *dibs_gid, struct smcd_gid *sgid)
+{
+        __be64 temp;
+
+        temp = htonll(sgid->gid);
+        memcpy(dibs_gid, &temp, sizeof(sgid->gid));
+
+        temp = htonll(sgid->gid_ext);
+        memcpy((uint8_t *)dibs_gid + sizeof(sgid->gid), &temp,
+               sizeof(sgid->gid_ext));
 }

 #endif
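The new `copy_to_smcdgid()`/`copy_to_dibsgid()` helpers above map a 16-byte dibs GID onto the two big-endian 64-bit halves of an SMC-D GID. A minimal userspace sketch of that byte-order round trip (illustrative names only, not the kernel code, and with portable byte shuffling in place of `ntohll`/`htonll`):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Sketch of struct smcd_gid: two host-order u64 halves. */
struct sketch_smcd_gid {
        uint64_t gid;     /* first 8 bytes of the dibs GID */
        uint64_t gid_ext; /* last 8 bytes of the dibs GID */
};

/* Portable stand-ins for ntohll()/htonll(). */
static uint64_t be64_to_host(const uint8_t *p)
{
        uint64_t v = 0;

        for (int i = 0; i < 8; i++)
                v = (v << 8) | p[i];
        return v;
}

static void host_to_be64(uint64_t v, uint8_t *p)
{
        for (int i = 7; i >= 0; i--) {
                p[i] = v & 0xff;
                v >>= 8;
        }
}

/* 16-byte dibs GID -> two host-order u64 halves (copy_to_smcdgid). */
static void sketch_to_smcdgid(struct sketch_smcd_gid *sgid,
                              const uint8_t dibs_gid[16])
{
        sgid->gid = be64_to_host(dibs_gid);
        sgid->gid_ext = be64_to_host(dibs_gid + 8);
}

/* Two host-order u64 halves -> 16-byte dibs GID (copy_to_dibsgid). */
static void sketch_to_dibsgid(uint8_t dibs_gid[16],
                              const struct sketch_smcd_gid *sgid)
{
        host_to_be64(sgid->gid, dibs_gid);
        host_to_be64(sgid->gid_ext, dibs_gid + 8);
}
```

Converting in one direction and back reproduces the original 16 bytes regardless of host endianness, which is the property the SMC code relies on when it hands GIDs to the dibs layer.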
-421
net/smc/smc_loopback.c
···
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Shared Memory Communications Direct over loopback-ism device.
- *
- * Functions for loopback-ism device.
- *
- * Copyright (c) 2024, Alibaba Inc.
- *
- * Author: Wen Gu <guwen@linux.alibaba.com>
- *         Tony Lu <tonylu@linux.alibaba.com>
- *
- */
-
-#include <linux/device.h>
-#include <linux/types.h>
-#include <net/smc.h>
-
-#include "smc_cdc.h"
-#include "smc_ism.h"
-#include "smc_loopback.h"
-
-#define SMC_LO_V2_CAPABLE       0x1 /* loopback-ism acts as ISMv2 */
-#define SMC_LO_SUPPORT_NOCOPY   0x1
-#define SMC_DMA_ADDR_INVALID    (~(dma_addr_t)0)
-
-static const char smc_lo_dev_name[] = "loopback-ism";
-static struct smc_lo_dev *lo_dev;
-
-static void smc_lo_generate_ids(struct smc_lo_dev *ldev)
-{
-        struct smcd_gid *lgid = &ldev->local_gid;
-        uuid_t uuid;
-
-        uuid_gen(&uuid);
-        memcpy(&lgid->gid, &uuid, sizeof(lgid->gid));
-        memcpy(&lgid->gid_ext, (u8 *)&uuid + sizeof(lgid->gid),
-               sizeof(lgid->gid_ext));
-
-        ldev->chid = SMC_LO_RESERVED_CHID;
-}
-
-static int smc_lo_query_rgid(struct smcd_dev *smcd, struct smcd_gid *rgid,
-                             u32 vid_valid, u32 vid)
-{
-        struct smc_lo_dev *ldev = smcd->priv;
-
-        /* rgid should be the same as lgid */
-        if (!ldev || rgid->gid != ldev->local_gid.gid ||
-            rgid->gid_ext != ldev->local_gid.gid_ext)
-                return -ENETUNREACH;
-        return 0;
-}
-
-static int smc_lo_register_dmb(struct smcd_dev *smcd, struct smcd_dmb *dmb,
-                               void *client_priv)
-{
-        struct smc_lo_dmb_node *dmb_node, *tmp_node;
-        struct smc_lo_dev *ldev = smcd->priv;
-        int sba_idx, rc;
-
-        /* check space for new dmb */
-        for_each_clear_bit(sba_idx, ldev->sba_idx_mask, SMC_LO_MAX_DMBS) {
-                if (!test_and_set_bit(sba_idx, ldev->sba_idx_mask))
-                        break;
-        }
-        if (sba_idx == SMC_LO_MAX_DMBS)
-                return -ENOSPC;
-
-        dmb_node = kzalloc(sizeof(*dmb_node), GFP_KERNEL);
-        if (!dmb_node) {
-                rc = -ENOMEM;
-                goto err_bit;
-        }
-
-        dmb_node->sba_idx = sba_idx;
-        dmb_node->len = dmb->dmb_len;
-        dmb_node->cpu_addr = kzalloc(dmb_node->len, GFP_KERNEL |
-                                     __GFP_NOWARN | __GFP_NORETRY |
-                                     __GFP_NOMEMALLOC);
-        if (!dmb_node->cpu_addr) {
-                rc = -ENOMEM;
-                goto err_node;
-        }
-        dmb_node->dma_addr = SMC_DMA_ADDR_INVALID;
-        refcount_set(&dmb_node->refcnt, 1);
-
-again:
-        /* add new dmb into hash table */
-        get_random_bytes(&dmb_node->token, sizeof(dmb_node->token));
-        write_lock_bh(&ldev->dmb_ht_lock);
-        hash_for_each_possible(ldev->dmb_ht, tmp_node, list, dmb_node->token) {
-                if (tmp_node->token == dmb_node->token) {
-                        write_unlock_bh(&ldev->dmb_ht_lock);
-                        goto again;
-                }
-        }
-        hash_add(ldev->dmb_ht, &dmb_node->list, dmb_node->token);
-        write_unlock_bh(&ldev->dmb_ht_lock);
-        atomic_inc(&ldev->dmb_cnt);
-
-        dmb->sba_idx = dmb_node->sba_idx;
-        dmb->dmb_tok = dmb_node->token;
-        dmb->cpu_addr = dmb_node->cpu_addr;
-        dmb->dma_addr = dmb_node->dma_addr;
-        dmb->dmb_len = dmb_node->len;
-
-        return 0;
-
-err_node:
-        kfree(dmb_node);
-err_bit:
-        clear_bit(sba_idx, ldev->sba_idx_mask);
-        return rc;
-}
-
-static void __smc_lo_unregister_dmb(struct smc_lo_dev *ldev,
-                                    struct smc_lo_dmb_node *dmb_node)
-{
-        /* remove dmb from hash table */
-        write_lock_bh(&ldev->dmb_ht_lock);
-        hash_del(&dmb_node->list);
-        write_unlock_bh(&ldev->dmb_ht_lock);
-
-        clear_bit(dmb_node->sba_idx, ldev->sba_idx_mask);
-        kvfree(dmb_node->cpu_addr);
-        kfree(dmb_node);
-
-        if (atomic_dec_and_test(&ldev->dmb_cnt))
-                wake_up(&ldev->ldev_release);
-}
-
-static int smc_lo_unregister_dmb(struct smcd_dev *smcd, struct smcd_dmb *dmb)
-{
-        struct smc_lo_dmb_node *dmb_node = NULL, *tmp_node;
-        struct smc_lo_dev *ldev = smcd->priv;
-
-        /* find dmb from hash table */
-        read_lock_bh(&ldev->dmb_ht_lock);
-        hash_for_each_possible(ldev->dmb_ht, tmp_node, list, dmb->dmb_tok) {
-                if (tmp_node->token == dmb->dmb_tok) {
-                        dmb_node = tmp_node;
-                        break;
-                }
-        }
-        if (!dmb_node) {
-                read_unlock_bh(&ldev->dmb_ht_lock);
-                return -EINVAL;
-        }
-        read_unlock_bh(&ldev->dmb_ht_lock);
-
-        if (refcount_dec_and_test(&dmb_node->refcnt))
-                __smc_lo_unregister_dmb(ldev, dmb_node);
-        return 0;
-}
-
-static int smc_lo_support_dmb_nocopy(struct smcd_dev *smcd)
-{
-        return SMC_LO_SUPPORT_NOCOPY;
-}
-
-static int smc_lo_attach_dmb(struct smcd_dev *smcd, struct smcd_dmb *dmb)
-{
-        struct smc_lo_dmb_node *dmb_node = NULL, *tmp_node;
-        struct smc_lo_dev *ldev = smcd->priv;
-
-        /* find dmb_node according to dmb->dmb_tok */
-        read_lock_bh(&ldev->dmb_ht_lock);
-        hash_for_each_possible(ldev->dmb_ht, tmp_node, list, dmb->dmb_tok) {
-                if (tmp_node->token == dmb->dmb_tok) {
-                        dmb_node = tmp_node;
-                        break;
-                }
-        }
-        if (!dmb_node) {
-                read_unlock_bh(&ldev->dmb_ht_lock);
-                return -EINVAL;
-        }
-        read_unlock_bh(&ldev->dmb_ht_lock);
-
-        if (!refcount_inc_not_zero(&dmb_node->refcnt))
-                /* the dmb is being unregistered, but has
-                 * not been removed from the hash table.
-                 */
-                return -EINVAL;
-
-        /* provide dmb information */
-        dmb->sba_idx = dmb_node->sba_idx;
-        dmb->dmb_tok = dmb_node->token;
-        dmb->cpu_addr = dmb_node->cpu_addr;
-        dmb->dma_addr = dmb_node->dma_addr;
-        dmb->dmb_len = dmb_node->len;
-        return 0;
-}
-
-static int smc_lo_detach_dmb(struct smcd_dev *smcd, u64 token)
-{
-        struct smc_lo_dmb_node *dmb_node = NULL, *tmp_node;
-        struct smc_lo_dev *ldev = smcd->priv;
-
-        /* find dmb_node according to dmb->dmb_tok */
-        read_lock_bh(&ldev->dmb_ht_lock);
-        hash_for_each_possible(ldev->dmb_ht, tmp_node, list, token) {
-                if (tmp_node->token == token) {
-                        dmb_node = tmp_node;
-                        break;
-                }
-        }
-        if (!dmb_node) {
-                read_unlock_bh(&ldev->dmb_ht_lock);
-                return -EINVAL;
-        }
-        read_unlock_bh(&ldev->dmb_ht_lock);
-
-        if (refcount_dec_and_test(&dmb_node->refcnt))
-                __smc_lo_unregister_dmb(ldev, dmb_node);
-        return 0;
-}
-
-static int smc_lo_move_data(struct smcd_dev *smcd, u64 dmb_tok,
-                            unsigned int idx, bool sf, unsigned int offset,
-                            void *data, unsigned int size)
-{
-        struct smc_lo_dmb_node *rmb_node = NULL, *tmp_node;
-        struct smc_lo_dev *ldev = smcd->priv;
-        struct smc_connection *conn;
-
-        if (!sf)
-                /* since sndbuf is merged with peer DMB, there is
-                 * no need to copy data from sndbuf to peer DMB.
-                 */
-                return 0;
-
-        read_lock_bh(&ldev->dmb_ht_lock);
-        hash_for_each_possible(ldev->dmb_ht, tmp_node, list, dmb_tok) {
-                if (tmp_node->token == dmb_tok) {
-                        rmb_node = tmp_node;
-                        break;
-                }
-        }
-        if (!rmb_node) {
-                read_unlock_bh(&ldev->dmb_ht_lock);
-                return -EINVAL;
-        }
-        memcpy((char *)rmb_node->cpu_addr + offset, data, size);
-        read_unlock_bh(&ldev->dmb_ht_lock);
-
-        conn = smcd->conn[rmb_node->sba_idx];
-        if (!conn || conn->killed)
-                return -EPIPE;
-        tasklet_schedule(&conn->rx_tsklet);
-        return 0;
-}
-
-static void smc_lo_get_local_gid(struct smcd_dev *smcd,
-                                 struct smcd_gid *smcd_gid)
-{
-        struct smc_lo_dev *ldev = smcd->priv;
-
-        smcd_gid->gid = ldev->local_gid.gid;
-        smcd_gid->gid_ext = ldev->local_gid.gid_ext;
-}
-
-static u16 smc_lo_get_chid(struct smcd_dev *smcd)
-{
-        return ((struct smc_lo_dev *)smcd->priv)->chid;
-}
-
-static struct device *smc_lo_get_dev(struct smcd_dev *smcd)
-{
-        return &((struct smc_lo_dev *)smcd->priv)->dev;
-}
-
-static const struct smcd_ops lo_ops = {
-        .query_remote_gid = smc_lo_query_rgid,
-        .register_dmb = smc_lo_register_dmb,
-        .unregister_dmb = smc_lo_unregister_dmb,
-        .support_dmb_nocopy = smc_lo_support_dmb_nocopy,
-        .attach_dmb = smc_lo_attach_dmb,
-        .detach_dmb = smc_lo_detach_dmb,
-        .add_vlan_id = NULL,
-        .del_vlan_id = NULL,
-        .set_vlan_required = NULL,
-        .reset_vlan_required = NULL,
-        .signal_event = NULL,
-        .move_data = smc_lo_move_data,
-        .get_local_gid = smc_lo_get_local_gid,
-        .get_chid = smc_lo_get_chid,
-        .get_dev = smc_lo_get_dev,
-};
-
-static struct smcd_dev *smcd_lo_alloc_dev(const struct smcd_ops *ops,
-                                          int max_dmbs)
-{
-        struct smcd_dev *smcd;
-
-        smcd = kzalloc(sizeof(*smcd), GFP_KERNEL);
-        if (!smcd)
-                return NULL;
-
-        smcd->conn = kcalloc(max_dmbs, sizeof(struct smc_connection *),
-                             GFP_KERNEL);
-        if (!smcd->conn)
-                goto out_smcd;
-
-        smcd->ops = ops;
-
-        spin_lock_init(&smcd->lock);
-        spin_lock_init(&smcd->lgr_lock);
-        INIT_LIST_HEAD(&smcd->vlan);
-        INIT_LIST_HEAD(&smcd->lgr_list);
-        init_waitqueue_head(&smcd->lgrs_deleted);
-        return smcd;
-
-out_smcd:
-        kfree(smcd);
-        return NULL;
-}
-
-static int smcd_lo_register_dev(struct smc_lo_dev *ldev)
-{
-        struct smcd_dev *smcd;
-
-        smcd = smcd_lo_alloc_dev(&lo_ops, SMC_LO_MAX_DMBS);
-        if (!smcd)
-                return -ENOMEM;
-        ldev->smcd = smcd;
-        smcd->priv = ldev;
-        smc_ism_set_v2_capable();
-        mutex_lock(&smcd_dev_list.mutex);
-        list_add(&smcd->list, &smcd_dev_list.list);
-        mutex_unlock(&smcd_dev_list.mutex);
-        pr_warn_ratelimited("smc: adding smcd device %s\n",
-                            dev_name(&ldev->dev));
-        return 0;
-}
-
-static void smcd_lo_unregister_dev(struct smc_lo_dev *ldev)
-{
-        struct smcd_dev *smcd = ldev->smcd;
-
-        pr_warn_ratelimited("smc: removing smcd device %s\n",
-                            dev_name(&ldev->dev));
-        smcd->going_away = 1;
-        smc_smcd_terminate_all(smcd);
-        mutex_lock(&smcd_dev_list.mutex);
-        list_del_init(&smcd->list);
-        mutex_unlock(&smcd_dev_list.mutex);
-        kfree(smcd->conn);
-        kfree(smcd);
-}
-
-static int smc_lo_dev_init(struct smc_lo_dev *ldev)
-{
-        smc_lo_generate_ids(ldev);
-        rwlock_init(&ldev->dmb_ht_lock);
-        hash_init(ldev->dmb_ht);
-        atomic_set(&ldev->dmb_cnt, 0);
-        init_waitqueue_head(&ldev->ldev_release);
-
-        return smcd_lo_register_dev(ldev);
-}
-
-static void smc_lo_dev_exit(struct smc_lo_dev *ldev)
-{
-        smcd_lo_unregister_dev(ldev);
-        if (atomic_read(&ldev->dmb_cnt))
-                wait_event(ldev->ldev_release, !atomic_read(&ldev->dmb_cnt));
-}
-
-static void smc_lo_dev_release(struct device *dev)
-{
-        struct smc_lo_dev *ldev =
-                container_of(dev, struct smc_lo_dev, dev);
-
-        kfree(ldev);
-}
-
-static int smc_lo_dev_probe(void)
-{
-        struct smc_lo_dev *ldev;
-        int ret;
-
-        ldev = kzalloc(sizeof(*ldev), GFP_KERNEL);
-        if (!ldev)
-                return -ENOMEM;
-
-        ldev->dev.parent = NULL;
-        ldev->dev.release = smc_lo_dev_release;
-        device_initialize(&ldev->dev);
-        dev_set_name(&ldev->dev, smc_lo_dev_name);
-
-        ret = smc_lo_dev_init(ldev);
-        if (ret)
-                goto free_dev;
-
-        lo_dev = ldev; /* global loopback device */
-        return 0;
-
-free_dev:
-        put_device(&ldev->dev);
-        return ret;
-}
-
-static void smc_lo_dev_remove(void)
-{
-        if (!lo_dev)
-                return;
-
-        smc_lo_dev_exit(lo_dev);
-        put_device(&lo_dev->dev); /* device_initialize in smc_lo_dev_probe */
-}
-
-int smc_loopback_init(void)
-{
-        return smc_lo_dev_probe();
-}
-
-void smc_loopback_exit(void)
-{
-        smc_lo_dev_remove();
-}
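The removed `smc_lo_register_dmb()` identified each DMB by a random 64-bit token: it drew fresh random bytes and redrew whenever the candidate collided with a token already in the hash table. A minimal userspace sketch of that retry-on-collision scheme (linear table instead of a kernel hashtable, `rand()` instead of `get_random_bytes()`, no locking; all names illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define SKETCH_MAX_DMBS 64

static uint64_t sketch_tokens[SKETCH_MAX_DMBS];
static int sketch_ntokens;

/* Linear scan standing in for hash_for_each_possible(). */
static int sketch_token_in_use(uint64_t tok)
{
        for (int i = 0; i < sketch_ntokens; i++)
                if (sketch_tokens[i] == tok)
                        return 1;
        return 0;
}

/* Returns a new unique nonzero token, or 0 if the table is full. */
static uint64_t sketch_alloc_token(void)
{
        uint64_t tok;

        if (sketch_ntokens == SKETCH_MAX_DMBS)
                return 0;
again:
        tok = ((uint64_t)rand() << 32) | (uint64_t)rand();
        if (!tok || sketch_token_in_use(tok))
                goto again; /* same retry pattern as the removed kernel code */
        sketch_tokens[sketch_ntokens++] = tok;
        return tok;
}
```

In the kernel version the draw-check-insert sequence ran under a write lock on the DMB hash table, so a colliding token could never be inserted twice; the sketch elides that because it is single-threaded.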
-60
net/smc/smc_loopback.h
···
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Shared Memory Communications Direct over loopback-ism device.
- *
- * SMC-D loopback-ism device structure definitions.
- *
- * Copyright (c) 2024, Alibaba Inc.
- *
- * Author: Wen Gu <guwen@linux.alibaba.com>
- *         Tony Lu <tonylu@linux.alibaba.com>
- *
- */
-
-#ifndef _SMC_LOOPBACK_H
-#define _SMC_LOOPBACK_H
-
-#include <linux/device.h>
-#include <net/smc.h>
-
-#if IS_ENABLED(CONFIG_SMC_LO)
-#define SMC_LO_MAX_DMBS         5000
-#define SMC_LO_DMBS_HASH_BITS   12
-#define SMC_LO_RESERVED_CHID    0xFFFF
-
-struct smc_lo_dmb_node {
-        struct hlist_node list;
-        u64 token;
-        u32 len;
-        u32 sba_idx;
-        void *cpu_addr;
-        dma_addr_t dma_addr;
-        refcount_t refcnt;
-};
-
-struct smc_lo_dev {
-        struct smcd_dev *smcd;
-        struct device dev;
-        u16 chid;
-        struct smcd_gid local_gid;
-        atomic_t dmb_cnt;
-        rwlock_t dmb_ht_lock;
-        DECLARE_BITMAP(sba_idx_mask, SMC_LO_MAX_DMBS);
-        DECLARE_HASHTABLE(dmb_ht, SMC_LO_DMBS_HASH_BITS);
-        wait_queue_head_t ldev_release;
-};
-
-int smc_loopback_init(void);
-void smc_loopback_exit(void);
-#else
-static inline int smc_loopback_init(void)
-{
-        return 0;
-}
-
-static inline void smc_loopback_exit(void)
-{
-}
-#endif
-
-#endif /* _SMC_LOOPBACK_H */
+15 -10
net/smc/smc_pnet.c
···
                         pr_warn_ratelimited("smc: smcd device %s "
                                             "erased user defined pnetid "
                                             "%.16s\n",
-                                            dev_name(smcd->ops->get_dev(smcd)),
+                                            dev_name(&smcd->dibs->dev),
                                             smcd->pnetid);
                         memset(smcd->pnetid, 0, SMC_MAX_PNETID_LEN);
                         smcd->pnetid_by_user = false;
···
         mutex_lock(&smcd_dev_list.mutex);
         list_for_each_entry(smcd_dev, &smcd_dev_list.list, list) {
-                if (!strncmp(dev_name(smcd_dev->ops->get_dev(smcd_dev)),
-                             smcd_name, IB_DEVICE_NAME_MAX - 1))
+                if (!strncmp(dev_name(&smcd_dev->dibs->dev), smcd_name,
+                             IB_DEVICE_NAME_MAX - 1) ||
+                    (smcd_dev->dibs->dev.parent &&
+                     !strncmp(dev_name(smcd_dev->dibs->dev.parent), smcd_name,
+                              IB_DEVICE_NAME_MAX - 1)))
                         goto out;
         }
         smcd_dev = NULL;
···
         bool smcddev_applied = true;
         bool ibdev_applied = true;
         struct smcd_dev *smcd;
-        struct device *dev;
         bool new_ibdev;

         /* try to apply the pnetid to active devices */
···
         if (smcd) {
                 smcddev_applied = smc_pnet_apply_smcd(smcd, pnet_name);
                 if (smcddev_applied) {
-                        dev = smcd->ops->get_dev(smcd);
-                        pr_warn_ratelimited("smc: smcd device %s "
-                                            "applied user defined pnetid "
-                                            "%.16s\n", dev_name(dev),
+                        pr_warn_ratelimited("smc: smcd device %s applied user defined pnetid %.16s\n",
+                                            dev_name(&smcd->dibs->dev),
                                             smcd->pnetid);
                 }
         }
···
  */
 int smc_pnetid_by_table_smcd(struct smcd_dev *smcddev)
 {
-        const char *ib_name = dev_name(smcddev->ops->get_dev(smcddev));
         struct smc_pnettable *pnettable;
         struct smc_pnetentry *tmp_pe;
         struct smc_net *sn;
···
         mutex_lock(&pnettable->lock);
         list_for_each_entry(tmp_pe, &pnettable->pnetlist, list) {
                 if (tmp_pe->type == SMC_PNET_IB &&
-                    !strncmp(tmp_pe->ib_name, ib_name, IB_DEVICE_NAME_MAX)) {
+                    (!strncmp(tmp_pe->ib_name,
+                              dev_name(&smcddev->dibs->dev),
+                              sizeof(tmp_pe->ib_name)) ||
+                     (smcddev->dibs->dev.parent &&
+                      !strncmp(tmp_pe->ib_name,
+                               dev_name(smcddev->dibs->dev.parent),
+                               sizeof(tmp_pe->ib_name))))) {
                         smc_pnet_apply_smcd(smcddev, tmp_pe->pnet_name);
                         rc = 0;
                         break;
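After the dibs conversion, the pnetid lookups above accept a match on either the dibs device name or, when present, the name of its parent (bus) device; the loopback device has no parent, so only the first comparison can hit there. A userspace sketch of that widened match predicate, with plain strings standing in for `struct device` (names and the length limit are illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define SKETCH_NAME_MAX 64

/* Match "wanted" against the device name or its parent's name.
 * parent_name may be NULL (e.g. for the loopback dibs device).
 */
static int sketch_smcd_match(const char *dev_name, const char *parent_name,
                             const char *wanted)
{
        if (!strncmp(dev_name, wanted, SKETCH_NAME_MAX - 1))
                return 1;
        if (parent_name && !strncmp(parent_name, wanted, SKETCH_NAME_MAX - 1))
                return 1;
        return 0;
}
```

This mirrors the structure of the new checks in `smc_pnet.c`: one comparison against `dev_name(&...->dibs->dev)` and a parent-guarded comparison against `dev_name(...->dibs->dev.parent)`.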
+3
net/smc/smc_tx.c
···
         int srcchunk, dstchunk;
         int rc;

+        if (conn->sndbuf_desc->is_attached)
+                return 0;
+
         for (dstchunk = 0; dstchunk < 2; dstchunk++) {
                 for (srcchunk = 0; srcchunk < 2; srcchunk++) {
                         void *data = conn->sndbuf_desc->cpu_addr + src_off;
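The three lines added to `net/smc/smc_tx.c` short-circuit the copy loop when the send buffer is directly attached to the peer's DMB (the loopback nocopy case): the peer already sees the data, so there is nothing to move. A minimal userspace model of that early return (simplified structs, not the kernel's `smc_buf_desc`):

```c
#include <assert.h>
#include <string.h>

struct sketch_buf {
        int is_attached; /* sndbuf merged with the peer's DMB? */
        char data[16];
};

/* Returns the number of bytes actually copied into the peer DMB. */
static int sketch_tx_writes(const struct sketch_buf *sndbuf, char *peer_dmb,
                            int len)
{
        if (sndbuf->is_attached)
                return 0; /* nocopy: peer reads the shared buffer directly */
        memcpy(peer_dmb, sndbuf->data, len);
        return len;
}
```

The point of the guard is purely to skip redundant work; correctness is unchanged because an attached send buffer and the peer DMB are the same memory.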