Merge tag '6.16-rc-part2-smb3-client-fixes' of git://git.samba.org/sfrench/cifs-2.6

Pull more smb client updates from Steve French:

- multichannel/reconnect fixes

- move smbdirect (smb over RDMA) defines to fs/smb/common so they can
  be used more broadly in the future, and add documentation explaining
  how to set up smbdirect mounts (a sketch of the new shared layout
  follows this list)

- update email address for Paulo
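
A quick sketch of what the smbdirect define move buys: transport limits
that used to be loose fields on the client's struct smbd_connection now
live in a shared struct smbdirect_socket_parameters under fs/smb/common,
so every consumer reads them the same way. The helper below is
hypothetical - only the header path, the struct names, and the
max_read_write_size field come from the diffs that follow:

  #include "../common/smbdirect/smbdirect_socket.h"

  /* hypothetical consumer: negotiated limits are read through the
   * common socket's parameters, e.g. via &server->smbd_conn->socket
   * as in the cifs_debug.c hunk below */
  static unsigned int smbd_io_limit(const struct smbdirect_socket *sc)
  {
          const struct smbdirect_socket_parameters *sp = &sc->parameters;

          return sp->max_read_write_size;
  }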

* tag '6.16-rc-part2-smb3-client-fixes' of git://git.samba.org/sfrench/cifs-2.6:
cifs: update internal version number
MAINTAINERS, mailmap: Update Paulo Alcantara's email address
cifs: add documentation for smbdirect setup
cifs: do not disable interface polling on failure
cifs: serialize other channels when query server interfaces is pending
cifs: deal with the channel loading lag while picking channels
smb: client: make use of common smbdirect_socket_parameters
smb: smbdirect: introduce smbdirect_socket_parameters
smb: client: make use of common smbdirect_socket
smb: smbdirect: add smbdirect_socket.h
smb: client: make use of common smbdirect.h
smb: smbdirect: add smbdirect.h with public structures
smb: client: make use of common smbdirect_pdu.h
smb: smbdirect: add smbdirect_pdu.h with protocol definitions

+532 -287
+6
.mailmap
··· 602 602 Paul Mackerras <paulus@ozlabs.org> <paulus@au1.ibm.com> 603 603 Paul Moore <paul@paul-moore.com> <paul.moore@hp.com> 604 604 Paul Moore <paul@paul-moore.com> <pmoore@redhat.com> 605 + Paulo Alcantara <pc@manguebit.org> <pcacjr@zytor.com> 606 + Paulo Alcantara <pc@manguebit.org> <paulo@paulo.ac> 607 + Paulo Alcantara <pc@manguebit.org> <pc@cjr.nz> 608 + Paulo Alcantara <pc@manguebit.org> <palcantara@suse.de> 609 + Paulo Alcantara <pc@manguebit.org> <palcantara@suse.com> 610 + Paulo Alcantara <pc@manguebit.org> <pc@manguebit.com> 605 611 Pavankumar Kondeti <quic_pkondeti@quicinc.com> <pkondeti@codeaurora.org> 606 612 Peter A Jonsson <pj@ludd.ltu.se> 607 613 Peter Oruba <peter.oruba@amd.com>
+1
Documentation/filesystems/smb/index.rst
··· 8 8 9 9 ksmbd 10 10 cifsroot 11 + smbdirect
+103
Documentation/filesystems/smb/smbdirect.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + =========================== 4 + SMB Direct - SMB3 over RDMA 5 + =========================== 6 + 7 + This document describes how to set up the Linux SMB client and server to 8 + use RDMA. 9 + 10 + Overview 11 + ======== 12 + The Linux SMB kernel client supports SMB Direct, which is a transport 13 + scheme for SMB3 that uses RDMA (Remote Direct Memory Access) to provide 14 + high throughput and low latencies by bypassing the traditional TCP/IP 15 + stack. 16 + SMB Direct on the Linux SMB client can be tested against KSMBD - a 17 + kernel-space SMB server. 18 + 19 + Installation 20 + ============= 21 + - Install an RDMA device. As long as the RDMA device driver is supported 22 + by the kernel, it should work. This includes both software emulators (soft 23 + RoCE, soft iWARP) and hardware devices (InfiniBand, RoCE, iWARP). 24 + 25 + - Install a kernel with SMB Direct support. The first kernel release to 26 + support SMB Direct on both the client and server side is 5.15. Therefore, 27 + a distribution compatible with kernel 5.15 or later is required. 28 + 29 + - Install cifs-utils, which provides the `mount.cifs` command to mount SMB 30 + shares. 31 + 32 + - Configure the RDMA stack 33 + 34 + Make sure that your kernel configuration has RDMA support enabled. Under 35 + Device Drivers -> Infiniband support, update the kernel configuration to 36 + enable Infiniband support. 37 + 38 + Enable the appropriate IB HCA support or iWARP adapter support, 39 + depending on your hardware. 40 + 41 + If you are using InfiniBand, enable IP-over-InfiniBand support. 42 + 43 + For soft RDMA, enable either the soft iWARP (`RDMA_SIW`) or soft RoCE 44 + (`RDMA_RXE`) module. Install the `iproute2` package and use the 45 + `rdma link add` command to load the module and create an 46 + RDMA interface. 47 + 48 + e.g. if your local ethernet interface is `eth0`, you can use: 49 + 50 + .. code-block:: bash 51 + 52 + sudo rdma link add siw0 type siw netdev eth0 53 + 54 + - Enable SMB Direct support for both the server and the client in the kernel 55 + configuration. 56 + 57 + Server Setup 58 + 59 + .. code-block:: text 60 + 61 + Network File Systems ---> 62 + <M> SMB3 server support 63 + [*] Support for SMB Direct protocol 64 + 65 + Client Setup 66 + 67 + .. code-block:: text 68 + 69 + Network File Systems ---> 70 + <M> SMB3 and CIFS support (advanced network filesystem) 71 + [*] SMB Direct support 72 + 73 + - Build and install the kernel. SMB Direct support will be enabled in the 74 + cifs.ko and ksmbd.ko modules. 75 + 76 + Setup and Usage 77 + ================ 78 + 79 + - Set up and start a KSMBD server as described in the `KSMBD documentation 80 + <https://www.kernel.org/doc/Documentation/filesystems/smb/ksmbd.rst>`_. 81 + Also add the "server multi channel support = yes" parameter to ksmbd.conf. 82 + 83 + - On the client, mount the share with `rdma` mount option to use SMB Direct 84 + (specify a SMB version 3.0 or higher using `vers`). 85 + 86 + For example: 87 + 88 + .. code-block:: bash 89 + 90 + mount -t cifs //server/share /mnt/point -o vers=3.1.1,rdma 91 + 92 + - To verify that the mount is using SMB Direct, you can check dmesg for the 93 + following log line after mounting: 94 + 95 + .. code-block:: text 96 + 97 + CIFS: VFS: RDMA transport established 98 + 99 + Or, verify `rdma` mount option for the share in `/proc/mounts`: 100 + 101 + .. code-block:: bash 102 + 103 + cat /proc/mounts | grep cifs
+2 -2
MAINTAINERS
··· 5986 5986 COMMON INTERNET FILE SYSTEM CLIENT (CIFS and SMB3) 5987 5987 M: Steve French <sfrench@samba.org> 5988 5988 M: Steve French <smfrench@gmail.com> 5989 - R: Paulo Alcantara <pc@manguebit.com> (DFS, global name space) 5989 + R: Paulo Alcantara <pc@manguebit.org> (DFS, global name space) 5990 5990 R: Ronnie Sahlberg <ronniesahlberg@gmail.com> (directory leases, sparse files) 5991 5991 R: Shyam Prasad N <sprasad@microsoft.com> (multichannel) 5992 5992 R: Tom Talpey <tom@talpey.com> (RDMA, smbdirect) ··· 9280 9280 9281 9281 FILESYSTEMS [NETFS LIBRARY] 9282 9282 M: David Howells <dhowells@redhat.com> 9283 - M: Paulo Alcantara <pc@manguebit.com> 9283 + M: Paulo Alcantara <pc@manguebit.org> 9284 9284 L: netfs@lists.linux.dev 9285 9285 L: linux-fsdevel@vger.kernel.org 9286 9286 S: Supported
+14 -9
fs/smb/client/cifs_debug.c
··· 362 362 c = 0; 363 363 spin_lock(&cifs_tcp_ses_lock); 364 364 list_for_each_entry(server, &cifs_tcp_ses_list, tcp_ses_list) { 365 + #ifdef CONFIG_CIFS_SMB_DIRECT 366 + struct smbdirect_socket_parameters *sp; 367 + #endif 368 + 365 369 /* channel info will be printed as a part of sessions below */ 366 370 if (SERVER_IS_CHAN(server)) 367 371 continue; ··· 387 383 seq_printf(m, "\nSMBDirect transport not available"); 388 384 goto skip_rdma; 389 385 } 386 + sp = &server->smbd_conn->socket.parameters; 390 387 391 388 seq_printf(m, "\nSMBDirect (in hex) protocol version: %x " 392 389 "transport status: %x", 393 390 server->smbd_conn->protocol, 394 - server->smbd_conn->transport_status); 391 + server->smbd_conn->socket.status); 395 392 seq_printf(m, "\nConn receive_credit_max: %x " 396 393 "send_credit_target: %x max_send_size: %x", 397 - server->smbd_conn->receive_credit_max, 398 - server->smbd_conn->send_credit_target, 399 - server->smbd_conn->max_send_size); 394 + sp->recv_credit_max, 395 + sp->send_credit_target, 396 + sp->max_send_size); 400 397 seq_printf(m, "\nConn max_fragmented_recv_size: %x " 401 398 "max_fragmented_send_size: %x max_receive_size:%x", 402 - server->smbd_conn->max_fragmented_recv_size, 403 - server->smbd_conn->max_fragmented_send_size, 404 - server->smbd_conn->max_receive_size); 399 + sp->max_fragmented_recv_size, 400 + sp->max_fragmented_send_size, 401 + sp->max_recv_size); 405 402 seq_printf(m, "\nConn keep_alive_interval: %x " 406 403 "max_readwrite_size: %x rdma_readwrite_threshold: %x", 407 - server->smbd_conn->keep_alive_interval, 408 - server->smbd_conn->max_readwrite_size, 404 + sp->keepalive_interval_msec * 1000, 405 + sp->max_read_write_size, 409 406 server->smbd_conn->rdma_readwrite_threshold); 410 407 seq_printf(m, "\nDebug count_get_receive_buffer: %x " 411 408 "count_put_receive_buffer: %x count_send_empty: %x",
+2 -2
fs/smb/client/cifsfs.h
··· 145 145 #endif /* CONFIG_CIFS_NFSD_EXPORT */ 146 146 147 147 /* when changing internal version - update following two lines at same time */ 148 - #define SMB3_PRODUCT_BUILD 54 149 - #define CIFS_VERSION "2.54" 148 + #define SMB3_PRODUCT_BUILD 55 149 + #define CIFS_VERSION "2.55" 150 150 #endif /* _CIFSFS_H */
+1
fs/smb/client/cifsglob.h
··· 1085 1085 }; 1086 1086 1087 1087 #define CIFS_SES_FLAG_SCALE_CHANNELS (0x1) 1088 + #define CIFS_SES_FLAGS_PENDING_QUERY_INTERFACES (0x2) 1088 1089 1089 1090 /* 1090 1091 * Session structure. One of these for each uid session with a particular host
+1 -5
fs/smb/client/connect.c
··· 116 116 rc = server->ops->query_server_interfaces(xid, tcon, false); 117 117 free_xid(xid); 118 118 119 - if (rc) { 120 - if (rc == -EOPNOTSUPP) 121 - return; 122 - 119 + if (rc) 123 120 cifs_dbg(FYI, "%s: failed to query server interfaces: %d\n", 124 121 __func__, rc); 125 - } 126 122 127 123 queue_delayed_work(cifsiod_wq, &tcon->query_interfaces, 128 124 (SMB_INTERFACE_POLL_INTERVAL * HZ));
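
The connect.c hunk reads more clearly flattened out: the -EOPNOTSUPP
early return is gone, the error path only logs, and the requeue is
unconditional, so one failed query can no longer switch interface
polling off for good. A condensed sketch of the worker body after the
change (the enclosing function is outside the hunk, so its exact shape
here is inferred):

  rc = server->ops->query_server_interfaces(xid, tcon, false);
  free_xid(xid);
  if (rc)
          cifs_dbg(FYI, "%s: failed to query server interfaces: %d\n",
                   __func__, rc);

  /* rearm the poll even on failure, including -EOPNOTSUPP */
  queue_delayed_work(cifsiod_wq, &tcon->query_interfaces,
                     (SMB_INTERFACE_POLL_INTERVAL * HZ));
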
+10 -4
fs/smb/client/smb2ops.c
··· 504 504 wsize = min_t(unsigned int, wsize, server->max_write); 505 505 #ifdef CONFIG_CIFS_SMB_DIRECT 506 506 if (server->rdma) { 507 + struct smbdirect_socket_parameters *sp = 508 + &server->smbd_conn->socket.parameters; 509 + 507 510 if (server->sign) 508 511 /* 509 512 * Account for SMB2 data transfer packet header and ··· 514 511 */ 515 512 wsize = min_t(unsigned int, 516 513 wsize, 517 - server->smbd_conn->max_fragmented_send_size - 514 + sp->max_fragmented_send_size - 518 515 SMB2_READWRITE_PDU_HEADER_SIZE - 519 516 sizeof(struct smb2_transform_hdr)); 520 517 else 521 518 wsize = min_t(unsigned int, 522 - wsize, server->smbd_conn->max_readwrite_size); 519 + wsize, sp->max_read_write_size); 523 520 } 524 521 #endif 525 522 if (!(server->capabilities & SMB2_GLOBAL_CAP_LARGE_MTU)) ··· 555 552 rsize = min_t(unsigned int, rsize, server->max_read); 556 553 #ifdef CONFIG_CIFS_SMB_DIRECT 557 554 if (server->rdma) { 555 + struct smbdirect_socket_parameters *sp = 556 + &server->smbd_conn->socket.parameters; 557 + 558 558 if (server->sign) 559 559 /* 560 560 * Account for SMB2 data transfer packet header and ··· 565 559 */ 566 560 rsize = min_t(unsigned int, 567 561 rsize, 568 - server->smbd_conn->max_fragmented_recv_size - 562 + sp->max_fragmented_recv_size - 569 563 SMB2_READWRITE_PDU_HEADER_SIZE - 570 564 sizeof(struct smb2_transform_hdr)); 571 565 else 572 566 rsize = min_t(unsigned int, 573 - rsize, server->smbd_conn->max_readwrite_size); 567 + rsize, sp->max_read_write_size); 574 568 } 575 569 #endif 576 570
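
The signing branch in the smb2ops.c hunks exists because signed traffic
wraps each read/write in a transform header, and both that header and
the SMB2 read/write PDU header consume part of the negotiated
fragmented-send limit. A hypothetical consolidation of the arithmetic,
written for the write side (the read side mirrors it with
max_fragmented_recv_size); no such helper exists in the merged code:

  static unsigned int smbd_wsize_limit(
          const struct smbdirect_socket_parameters *sp, bool sign)
  {
          if (sign)
                  return sp->max_fragmented_send_size -
                         SMB2_READWRITE_PDU_HEADER_SIZE -
                         sizeof(struct smb2_transform_hdr);
          return sp->max_read_write_size;
  }
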
+32 -18
fs/smb/client/smb2pdu.c
··· 36 36 #include "smb2glob.h" 37 37 #include "cifspdu.h" 38 38 #include "cifs_spnego.h" 39 + #include "../common/smbdirect/smbdirect.h" 39 40 #include "smbdirect.h" 40 41 #include "trace.h" 41 42 #ifdef CONFIG_CIFS_DFS_UPCALL ··· 412 411 if (!rc && 413 412 (server->capabilities & SMB2_GLOBAL_CAP_MULTI_CHANNEL) && 414 413 server->ops->query_server_interfaces) { 415 - mutex_unlock(&ses->session_mutex); 416 - 417 414 /* 418 - * query server network interfaces, in case they change 415 + * query server network interfaces, in case they change. 416 + * Also mark the session as pending this update while the query 417 + * is in progress. This will be used to avoid calling 418 + * smb2_reconnect recursively. 419 419 */ 420 + ses->flags |= CIFS_SES_FLAGS_PENDING_QUERY_INTERFACES; 420 421 xid = get_xid(); 421 422 rc = server->ops->query_server_interfaces(xid, tcon, false); 422 423 free_xid(xid); 424 + ses->flags &= ~CIFS_SES_FLAGS_PENDING_QUERY_INTERFACES; 425 + 426 + /* regardless of rc value, setup polling */ 427 + queue_delayed_work(cifsiod_wq, &tcon->query_interfaces, 428 + (SMB_INTERFACE_POLL_INTERVAL * HZ)); 429 + 430 + mutex_unlock(&ses->session_mutex); 423 431 424 432 if (rc == -EOPNOTSUPP && ses->chan_count > 1) { 425 433 /* ··· 448 438 if (ses->chan_max > ses->chan_count && 449 439 ses->iface_count && 450 440 !SERVER_IS_CHAN(server)) { 451 - if (ses->chan_count == 1) { 441 + if (ses->chan_count == 1) 452 442 cifs_server_dbg(VFS, "supports multichannel now\n"); 453 - queue_delayed_work(cifsiod_wq, &tcon->query_interfaces, 454 - (SMB_INTERFACE_POLL_INTERVAL * HZ)); 455 - } 456 443 457 444 cifs_try_adding_channels(ses); 458 445 } ··· 567 560 struct TCP_Server_Info *server, 568 561 void **request_buf, unsigned int *total_len) 569 562 { 570 - /* Skip reconnect only for FSCTL_VALIDATE_NEGOTIATE_INFO IOCTLs */ 571 - if (opcode == FSCTL_VALIDATE_NEGOTIATE_INFO) { 563 + /* 564 + * Skip reconnect in one of the following cases: 565 + * 1. For FSCTL_VALIDATE_NEGOTIATE_INFO IOCTLs 566 + * 2. 
For FSCTL_QUERY_NETWORK_INTERFACE_INFO IOCTL when called from 567 + * smb2_reconnect (indicated by CIFS_SES_FLAG_SCALE_CHANNELS ses flag) 568 + */ 569 + if (opcode == FSCTL_VALIDATE_NEGOTIATE_INFO || 570 + (opcode == FSCTL_QUERY_NETWORK_INTERFACE_INFO && 571 + (tcon->ses->flags & CIFS_SES_FLAGS_PENDING_QUERY_INTERFACES))) 572 572 return __smb2_plain_req_init(SMB2_IOCTL, tcon, server, 573 573 request_buf, total_len); 574 - } 574 + 575 575 return smb2_plain_req_init(SMB2_IOCTL, tcon, server, 576 576 request_buf, total_len); 577 577 } ··· 4463 4449 #ifdef CONFIG_CIFS_SMB_DIRECT 4464 4450 /* 4465 4451 * If we want to do a RDMA write, fill in and append 4466 - * smbd_buffer_descriptor_v1 to the end of read request 4452 + * smbdirect_buffer_descriptor_v1 to the end of read request 4467 4453 */ 4468 4454 if (rdata && smb3_use_rdma_offload(io_parms)) { 4469 - struct smbd_buffer_descriptor_v1 *v1; 4455 + struct smbdirect_buffer_descriptor_v1 *v1; 4470 4456 bool need_invalidate = server->dialect == SMB30_PROT_ID; 4471 4457 4472 4458 rdata->mr = smbd_register_mr(server->smbd_conn, &rdata->subreq.io_iter, ··· 4480 4466 req->ReadChannelInfoOffset = 4481 4467 cpu_to_le16(offsetof(struct smb2_read_req, Buffer)); 4482 4468 req->ReadChannelInfoLength = 4483 - cpu_to_le16(sizeof(struct smbd_buffer_descriptor_v1)); 4484 - v1 = (struct smbd_buffer_descriptor_v1 *) &req->Buffer[0]; 4469 + cpu_to_le16(sizeof(struct smbdirect_buffer_descriptor_v1)); 4470 + v1 = (struct smbdirect_buffer_descriptor_v1 *) &req->Buffer[0]; 4485 4471 v1->offset = cpu_to_le64(rdata->mr->mr->iova); 4486 4472 v1->token = cpu_to_le32(rdata->mr->mr->rkey); 4487 4473 v1->length = cpu_to_le32(rdata->mr->mr->length); ··· 4989 4975 #ifdef CONFIG_CIFS_SMB_DIRECT 4990 4976 /* 4991 4977 * If we want to do a server RDMA read, fill in and append 4992 - * smbd_buffer_descriptor_v1 to the end of write request 4978 + * smbdirect_buffer_descriptor_v1 to the end of write request 4993 4979 */ 4994 4980 if (smb3_use_rdma_offload(io_parms)) { 4995 - struct smbd_buffer_descriptor_v1 *v1; 4981 + struct smbdirect_buffer_descriptor_v1 *v1; 4996 4982 bool need_invalidate = server->dialect == SMB30_PROT_ID; 4997 4983 4998 4984 wdata->mr = smbd_register_mr(server->smbd_conn, &wdata->subreq.io_iter, ··· 5011 4997 req->WriteChannelInfoOffset = 5012 4998 cpu_to_le16(offsetof(struct smb2_write_req, Buffer)); 5013 4999 req->WriteChannelInfoLength = 5014 - cpu_to_le16(sizeof(struct smbd_buffer_descriptor_v1)); 5015 - v1 = (struct smbd_buffer_descriptor_v1 *) &req->Buffer[0]; 5000 + cpu_to_le16(sizeof(struct smbdirect_buffer_descriptor_v1)); 5001 + v1 = (struct smbdirect_buffer_descriptor_v1 *) &req->Buffer[0]; 5016 5002 v1->offset = cpu_to_le64(wdata->mr->mr->iova); 5017 5003 v1->token = cpu_to_le32(wdata->mr->mr->rkey); 5018 5004 v1->length = cpu_to_le32(wdata->mr->mr->length);
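
Two pieces in the smb2pdu.c hunks cooperate to serialize the interface
query: smb2_reconnect brackets the query with a per-session "pending"
flag (under the session mutex), and the IOCTL request-init helper
consults that flag so the FSCTL_QUERY_NETWORK_INTERFACE_INFO issued
from inside reconnect skips the reconnect path instead of recursing
into smb2_reconnect. Condensed from the hunks above:

  /* in smb2_reconnect(), around the query */
  ses->flags |= CIFS_SES_FLAGS_PENDING_QUERY_INTERFACES;
  rc = server->ops->query_server_interfaces(xid, tcon, false);
  ses->flags &= ~CIFS_SES_FLAGS_PENDING_QUERY_INTERFACES;

  /* in the IOCTL request-init helper */
  if (opcode == FSCTL_VALIDATE_NEGOTIATE_INFO ||
      (opcode == FSCTL_QUERY_NETWORK_INTERFACE_INFO &&
       (tcon->ses->flags & CIFS_SES_FLAGS_PENDING_QUERY_INTERFACES)))
          return __smb2_plain_req_init(SMB2_IOCTL, tcon, server,
                                       request_buf, total_len);
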
+213 -176
fs/smb/client/smbdirect.c
··· 7 7 #include <linux/module.h> 8 8 #include <linux/highmem.h> 9 9 #include <linux/folio_queue.h> 10 + #include "../common/smbdirect/smbdirect_pdu.h" 10 11 #include "smbdirect.h" 11 12 #include "cifs_debug.h" 12 13 #include "cifsproto.h" ··· 50 49 }; 51 50 static ssize_t smb_extract_iter_to_rdma(struct iov_iter *iter, size_t len, 52 51 struct smb_extract_to_rdma *rdma); 53 - 54 - /* SMBD version number */ 55 - #define SMBD_V1 0x0100 56 52 57 53 /* Port numbers for SMBD transport */ 58 54 #define SMB_PORT 445 ··· 163 165 { 164 166 struct smbd_connection *info = 165 167 container_of(work, struct smbd_connection, disconnect_work); 168 + struct smbdirect_socket *sc = &info->socket; 166 169 167 - if (info->transport_status == SMBD_CONNECTED) { 168 - info->transport_status = SMBD_DISCONNECTING; 169 - rdma_disconnect(info->id); 170 + if (sc->status == SMBDIRECT_SOCKET_CONNECTED) { 171 + sc->status = SMBDIRECT_SOCKET_DISCONNECTING; 172 + rdma_disconnect(sc->rdma.cm_id); 170 173 } 171 174 } 172 175 ··· 181 182 struct rdma_cm_id *id, struct rdma_cm_event *event) 182 183 { 183 184 struct smbd_connection *info = id->context; 185 + struct smbdirect_socket *sc = &info->socket; 184 186 185 187 log_rdma_event(INFO, "event=%d status=%d\n", 186 188 event->event, event->status); ··· 205 205 206 206 case RDMA_CM_EVENT_ESTABLISHED: 207 207 log_rdma_event(INFO, "connected event=%d\n", event->event); 208 - info->transport_status = SMBD_CONNECTED; 208 + sc->status = SMBDIRECT_SOCKET_CONNECTED; 209 209 wake_up_interruptible(&info->conn_wait); 210 210 break; 211 211 ··· 213 213 case RDMA_CM_EVENT_UNREACHABLE: 214 214 case RDMA_CM_EVENT_REJECTED: 215 215 log_rdma_event(INFO, "connecting failed event=%d\n", event->event); 216 - info->transport_status = SMBD_DISCONNECTED; 216 + sc->status = SMBDIRECT_SOCKET_DISCONNECTED; 217 217 wake_up_interruptible(&info->conn_wait); 218 218 break; 219 219 220 220 case RDMA_CM_EVENT_DEVICE_REMOVAL: 221 221 case RDMA_CM_EVENT_DISCONNECTED: 222 222 /* This happens when we fail the negotiation */ 223 - if (info->transport_status == SMBD_NEGOTIATE_FAILED) { 224 - info->transport_status = SMBD_DISCONNECTED; 223 + if (sc->status == SMBDIRECT_SOCKET_NEGOTIATE_FAILED) { 224 + sc->status = SMBDIRECT_SOCKET_DISCONNECTED; 225 225 wake_up(&info->conn_wait); 226 226 break; 227 227 } 228 228 229 - info->transport_status = SMBD_DISCONNECTED; 229 + sc->status = SMBDIRECT_SOCKET_DISCONNECTED; 230 230 wake_up_interruptible(&info->disconn_wait); 231 231 wake_up_interruptible(&info->wait_reassembly_queue); 232 232 wake_up_interruptible_all(&info->wait_send_queue); ··· 275 275 int i; 276 276 struct smbd_request *request = 277 277 container_of(wc->wr_cqe, struct smbd_request, cqe); 278 + struct smbd_connection *info = request->info; 279 + struct smbdirect_socket *sc = &info->socket; 278 280 279 281 log_rdma_send(INFO, "smbd_request 0x%p completed wc->status=%d\n", 280 282 request, wc->status); ··· 288 286 } 289 287 290 288 for (i = 0; i < request->num_sge; i++) 291 - ib_dma_unmap_single(request->info->id->device, 289 + ib_dma_unmap_single(sc->ib.dev, 292 290 request->sge[i].addr, 293 291 request->sge[i].length, 294 292 DMA_TO_DEVICE); ··· 301 299 mempool_free(request, request->info->request_mempool); 302 300 } 303 301 304 - static void dump_smbd_negotiate_resp(struct smbd_negotiate_resp *resp) 302 + static void dump_smbdirect_negotiate_resp(struct smbdirect_negotiate_resp *resp) 305 303 { 306 304 log_rdma_event(INFO, "resp message min_version %u max_version %u negotiated_version %u credits_requested %u 
credits_granted %u status %u max_readwrite_size %u preferred_send_size %u max_receive_size %u max_fragmented_size %u\n", 307 305 resp->min_version, resp->max_version, ··· 320 318 struct smbd_response *response, int packet_length) 321 319 { 322 320 struct smbd_connection *info = response->info; 323 - struct smbd_negotiate_resp *packet = smbd_response_payload(response); 321 + struct smbdirect_socket *sc = &info->socket; 322 + struct smbdirect_socket_parameters *sp = &sc->parameters; 323 + struct smbdirect_negotiate_resp *packet = smbd_response_payload(response); 324 324 325 - if (packet_length < sizeof(struct smbd_negotiate_resp)) { 325 + if (packet_length < sizeof(struct smbdirect_negotiate_resp)) { 326 326 log_rdma_event(ERR, 327 327 "error: packet_length=%d\n", packet_length); 328 328 return false; 329 329 } 330 330 331 - if (le16_to_cpu(packet->negotiated_version) != SMBD_V1) { 331 + if (le16_to_cpu(packet->negotiated_version) != SMBDIRECT_V1) { 332 332 log_rdma_event(ERR, "error: negotiated_version=%x\n", 333 333 le16_to_cpu(packet->negotiated_version)); 334 334 return false; ··· 351 347 352 348 atomic_set(&info->receive_credits, 0); 353 349 354 - if (le32_to_cpu(packet->preferred_send_size) > info->max_receive_size) { 350 + if (le32_to_cpu(packet->preferred_send_size) > sp->max_recv_size) { 355 351 log_rdma_event(ERR, "error: preferred_send_size=%d\n", 356 352 le32_to_cpu(packet->preferred_send_size)); 357 353 return false; 358 354 } 359 - info->max_receive_size = le32_to_cpu(packet->preferred_send_size); 355 + sp->max_recv_size = le32_to_cpu(packet->preferred_send_size); 360 356 361 357 if (le32_to_cpu(packet->max_receive_size) < SMBD_MIN_RECEIVE_SIZE) { 362 358 log_rdma_event(ERR, "error: max_receive_size=%d\n", 363 359 le32_to_cpu(packet->max_receive_size)); 364 360 return false; 365 361 } 366 - info->max_send_size = min_t(int, info->max_send_size, 367 - le32_to_cpu(packet->max_receive_size)); 362 + sp->max_send_size = min_t(u32, sp->max_send_size, 363 + le32_to_cpu(packet->max_receive_size)); 368 364 369 365 if (le32_to_cpu(packet->max_fragmented_size) < 370 366 SMBD_MIN_FRAGMENTED_SIZE) { ··· 372 368 le32_to_cpu(packet->max_fragmented_size)); 373 369 return false; 374 370 } 375 - info->max_fragmented_send_size = 371 + sp->max_fragmented_send_size = 376 372 le32_to_cpu(packet->max_fragmented_size); 377 373 info->rdma_readwrite_threshold = 378 - rdma_readwrite_threshold > info->max_fragmented_send_size ? 379 - info->max_fragmented_send_size : 374 + rdma_readwrite_threshold > sp->max_fragmented_send_size ? 
375 + sp->max_fragmented_send_size : 380 376 rdma_readwrite_threshold; 381 377 382 378 383 - info->max_readwrite_size = min_t(u32, 379 + sp->max_read_write_size = min_t(u32, 384 380 le32_to_cpu(packet->max_readwrite_size), 385 381 info->max_frmr_depth * PAGE_SIZE); 386 - info->max_frmr_depth = info->max_readwrite_size / PAGE_SIZE; 382 + info->max_frmr_depth = sp->max_read_write_size / PAGE_SIZE; 387 383 388 384 return true; 389 385 } ··· 397 393 struct smbd_connection *info = 398 394 container_of(work, struct smbd_connection, 399 395 post_send_credits_work); 396 + struct smbdirect_socket *sc = &info->socket; 400 397 401 - if (info->transport_status != SMBD_CONNECTED) { 398 + if (sc->status != SMBDIRECT_SOCKET_CONNECTED) { 402 399 wake_up(&info->wait_receive_queues); 403 400 return; 404 401 } ··· 453 448 /* Called from softirq, when recv is done */ 454 449 static void recv_done(struct ib_cq *cq, struct ib_wc *wc) 455 450 { 456 - struct smbd_data_transfer *data_transfer; 451 + struct smbdirect_data_transfer *data_transfer; 457 452 struct smbd_response *response = 458 453 container_of(wc->wr_cqe, struct smbd_response, cqe); 459 454 struct smbd_connection *info = response->info; ··· 479 474 switch (response->type) { 480 475 /* SMBD negotiation response */ 481 476 case SMBD_NEGOTIATE_RESP: 482 - dump_smbd_negotiate_resp(smbd_response_payload(response)); 477 + dump_smbdirect_negotiate_resp(smbd_response_payload(response)); 483 478 info->full_packet_received = true; 484 479 info->negotiate_done = 485 480 process_negotiation_response(response, wc->byte_len); ··· 536 531 /* Send a KEEP_ALIVE response right away if requested */ 537 532 info->keep_alive_requested = KEEP_ALIVE_NONE; 538 533 if (le16_to_cpu(data_transfer->flags) & 539 - SMB_DIRECT_RESPONSE_REQUESTED) { 534 + SMBDIRECT_FLAG_RESPONSE_REQUESTED) { 540 535 info->keep_alive_requested = KEEP_ALIVE_PENDING; 541 536 } 542 537 ··· 640 635 struct smbd_connection *info, 641 636 struct sockaddr *dstaddr, int port) 642 637 { 638 + struct smbdirect_socket *sc = &info->socket; 643 639 int rc; 644 640 645 - info->id = smbd_create_id(info, dstaddr, port); 646 - if (IS_ERR(info->id)) { 647 - rc = PTR_ERR(info->id); 641 + sc->rdma.cm_id = smbd_create_id(info, dstaddr, port); 642 + if (IS_ERR(sc->rdma.cm_id)) { 643 + rc = PTR_ERR(sc->rdma.cm_id); 648 644 goto out1; 649 645 } 646 + sc->ib.dev = sc->rdma.cm_id->device; 650 647 651 - if (!frwr_is_supported(&info->id->device->attrs)) { 648 + if (!frwr_is_supported(&sc->ib.dev->attrs)) { 652 649 log_rdma_event(ERR, "Fast Registration Work Requests (FRWR) is not supported\n"); 653 650 log_rdma_event(ERR, "Device capability flags = %llx max_fast_reg_page_list_len = %u\n", 654 - info->id->device->attrs.device_cap_flags, 655 - info->id->device->attrs.max_fast_reg_page_list_len); 651 + sc->ib.dev->attrs.device_cap_flags, 652 + sc->ib.dev->attrs.max_fast_reg_page_list_len); 656 653 rc = -EPROTONOSUPPORT; 657 654 goto out2; 658 655 } 659 656 info->max_frmr_depth = min_t(int, 660 657 smbd_max_frmr_depth, 661 - info->id->device->attrs.max_fast_reg_page_list_len); 658 + sc->ib.dev->attrs.max_fast_reg_page_list_len); 662 659 info->mr_type = IB_MR_TYPE_MEM_REG; 663 - if (info->id->device->attrs.kernel_cap_flags & IBK_SG_GAPS_REG) 660 + if (sc->ib.dev->attrs.kernel_cap_flags & IBK_SG_GAPS_REG) 664 661 info->mr_type = IB_MR_TYPE_SG_GAPS; 665 662 666 - info->pd = ib_alloc_pd(info->id->device, 0); 667 - if (IS_ERR(info->pd)) { 668 - rc = PTR_ERR(info->pd); 663 + sc->ib.pd = ib_alloc_pd(sc->ib.dev, 0); 664 + if 
(IS_ERR(sc->ib.pd)) { 665 + rc = PTR_ERR(sc->ib.pd); 669 666 log_rdma_event(ERR, "ib_alloc_pd() returned %d\n", rc); 670 667 goto out2; 671 668 } ··· 675 668 return 0; 676 669 677 670 out2: 678 - rdma_destroy_id(info->id); 679 - info->id = NULL; 671 + rdma_destroy_id(sc->rdma.cm_id); 672 + sc->rdma.cm_id = NULL; 680 673 681 674 out1: 682 675 return rc; ··· 690 683 */ 691 684 static int smbd_post_send_negotiate_req(struct smbd_connection *info) 692 685 { 686 + struct smbdirect_socket *sc = &info->socket; 687 + struct smbdirect_socket_parameters *sp = &sc->parameters; 693 688 struct ib_send_wr send_wr; 694 689 int rc = -ENOMEM; 695 690 struct smbd_request *request; 696 - struct smbd_negotiate_req *packet; 691 + struct smbdirect_negotiate_req *packet; 697 692 698 693 request = mempool_alloc(info->request_mempool, GFP_KERNEL); 699 694 if (!request) ··· 704 695 request->info = info; 705 696 706 697 packet = smbd_request_payload(request); 707 - packet->min_version = cpu_to_le16(SMBD_V1); 708 - packet->max_version = cpu_to_le16(SMBD_V1); 698 + packet->min_version = cpu_to_le16(SMBDIRECT_V1); 699 + packet->max_version = cpu_to_le16(SMBDIRECT_V1); 709 700 packet->reserved = 0; 710 - packet->credits_requested = cpu_to_le16(info->send_credit_target); 711 - packet->preferred_send_size = cpu_to_le32(info->max_send_size); 712 - packet->max_receive_size = cpu_to_le32(info->max_receive_size); 701 + packet->credits_requested = cpu_to_le16(sp->send_credit_target); 702 + packet->preferred_send_size = cpu_to_le32(sp->max_send_size); 703 + packet->max_receive_size = cpu_to_le32(sp->max_recv_size); 713 704 packet->max_fragmented_size = 714 - cpu_to_le32(info->max_fragmented_recv_size); 705 + cpu_to_le32(sp->max_fragmented_recv_size); 715 706 716 707 request->num_sge = 1; 717 708 request->sge[0].addr = ib_dma_map_single( 718 - info->id->device, (void *)packet, 709 + sc->ib.dev, (void *)packet, 719 710 sizeof(*packet), DMA_TO_DEVICE); 720 - if (ib_dma_mapping_error(info->id->device, request->sge[0].addr)) { 711 + if (ib_dma_mapping_error(sc->ib.dev, request->sge[0].addr)) { 721 712 rc = -EIO; 722 713 goto dma_mapping_failed; 723 714 } 724 715 725 716 request->sge[0].length = sizeof(*packet); 726 - request->sge[0].lkey = info->pd->local_dma_lkey; 717 + request->sge[0].lkey = sc->ib.pd->local_dma_lkey; 727 718 728 719 ib_dma_sync_single_for_device( 729 - info->id->device, request->sge[0].addr, 720 + sc->ib.dev, request->sge[0].addr, 730 721 request->sge[0].length, DMA_TO_DEVICE); 731 722 732 723 request->cqe.done = send_done; ··· 743 734 request->sge[0].length, request->sge[0].lkey); 744 735 745 736 atomic_inc(&info->send_pending); 746 - rc = ib_post_send(info->id->qp, &send_wr, NULL); 737 + rc = ib_post_send(sc->ib.qp, &send_wr, NULL); 747 738 if (!rc) 748 739 return 0; 749 740 750 741 /* if we reach here, post send failed */ 751 742 log_rdma_send(ERR, "ib_post_send failed rc=%d\n", rc); 752 743 atomic_dec(&info->send_pending); 753 - ib_dma_unmap_single(info->id->device, request->sge[0].addr, 744 + ib_dma_unmap_single(sc->ib.dev, request->sge[0].addr, 754 745 request->sge[0].length, DMA_TO_DEVICE); 755 746 756 747 smbd_disconnect_rdma_connection(info); ··· 783 774 /* 784 775 * Check if we need to send a KEEP_ALIVE message 785 776 * The idle connection timer triggers a KEEP_ALIVE message when expires 786 - * SMB_DIRECT_RESPONSE_REQUESTED is set in the message flag to have peer send 777 + * SMBDIRECT_FLAG_RESPONSE_REQUESTED is set in the message flag to have peer send 787 778 * back a response. 
788 779 * return value: 789 - * 1 if SMB_DIRECT_RESPONSE_REQUESTED needs to be set 780 + * 1 if SMBDIRECT_FLAG_RESPONSE_REQUESTED needs to be set 790 781 * 0: otherwise 791 782 */ 792 783 static int manage_keep_alive_before_sending(struct smbd_connection *info) ··· 802 793 static int smbd_post_send(struct smbd_connection *info, 803 794 struct smbd_request *request) 804 795 { 796 + struct smbdirect_socket *sc = &info->socket; 797 + struct smbdirect_socket_parameters *sp = &sc->parameters; 805 798 struct ib_send_wr send_wr; 806 799 int rc, i; 807 800 ··· 812 801 "rdma_request sge[%d] addr=0x%llx length=%u\n", 813 802 i, request->sge[i].addr, request->sge[i].length); 814 803 ib_dma_sync_single_for_device( 815 - info->id->device, 804 + sc->ib.dev, 816 805 request->sge[i].addr, 817 806 request->sge[i].length, 818 807 DMA_TO_DEVICE); ··· 827 816 send_wr.opcode = IB_WR_SEND; 828 817 send_wr.send_flags = IB_SEND_SIGNALED; 829 818 830 - rc = ib_post_send(info->id->qp, &send_wr, NULL); 819 + rc = ib_post_send(sc->ib.qp, &send_wr, NULL); 831 820 if (rc) { 832 821 log_rdma_send(ERR, "ib_post_send failed rc=%d\n", rc); 833 822 smbd_disconnect_rdma_connection(info); ··· 835 824 } else 836 825 /* Reset timer for idle connection after packet is sent */ 837 826 mod_delayed_work(info->workqueue, &info->idle_timer_work, 838 - info->keep_alive_interval*HZ); 827 + msecs_to_jiffies(sp->keepalive_interval_msec)); 839 828 840 829 return rc; 841 830 } ··· 844 833 struct iov_iter *iter, 845 834 int *_remaining_data_length) 846 835 { 836 + struct smbdirect_socket *sc = &info->socket; 837 + struct smbdirect_socket_parameters *sp = &sc->parameters; 847 838 int i, rc; 848 839 int header_length; 849 840 int data_length; 850 841 struct smbd_request *request; 851 - struct smbd_data_transfer *packet; 842 + struct smbdirect_data_transfer *packet; 852 843 int new_credits = 0; 853 844 854 845 wait_credit: 855 846 /* Wait for send credits. 
A SMBD packet needs one credit */ 856 847 rc = wait_event_interruptible(info->wait_send_queue, 857 848 atomic_read(&info->send_credits) > 0 || 858 - info->transport_status != SMBD_CONNECTED); 849 + sc->status != SMBDIRECT_SOCKET_CONNECTED); 859 850 if (rc) 860 851 goto err_wait_credit; 861 852 862 - if (info->transport_status != SMBD_CONNECTED) { 853 + if (sc->status != SMBDIRECT_SOCKET_CONNECTED) { 863 854 log_outgoing(ERR, "disconnected not sending on wait_credit\n"); 864 855 rc = -EAGAIN; 865 856 goto err_wait_credit; ··· 873 860 874 861 wait_send_queue: 875 862 wait_event(info->wait_post_send, 876 - atomic_read(&info->send_pending) < info->send_credit_target || 877 - info->transport_status != SMBD_CONNECTED); 863 + atomic_read(&info->send_pending) < sp->send_credit_target || 864 + sc->status != SMBDIRECT_SOCKET_CONNECTED); 878 865 879 - if (info->transport_status != SMBD_CONNECTED) { 866 + if (sc->status != SMBDIRECT_SOCKET_CONNECTED) { 880 867 log_outgoing(ERR, "disconnected not sending on wait_send_queue\n"); 881 868 rc = -EAGAIN; 882 869 goto err_wait_send_queue; 883 870 } 884 871 885 872 if (unlikely(atomic_inc_return(&info->send_pending) > 886 - info->send_credit_target)) { 873 + sp->send_credit_target)) { 887 874 atomic_dec(&info->send_pending); 888 875 goto wait_send_queue; 889 876 } ··· 903 890 .nr_sge = 1, 904 891 .max_sge = SMBDIRECT_MAX_SEND_SGE, 905 892 .sge = request->sge, 906 - .device = info->id->device, 907 - .local_dma_lkey = info->pd->local_dma_lkey, 893 + .device = sc->ib.dev, 894 + .local_dma_lkey = sc->ib.pd->local_dma_lkey, 908 895 .direction = DMA_TO_DEVICE, 909 896 }; 910 897 ··· 922 909 923 910 /* Fill in the packet header */ 924 911 packet = smbd_request_payload(request); 925 - packet->credits_requested = cpu_to_le16(info->send_credit_target); 912 + packet->credits_requested = cpu_to_le16(sp->send_credit_target); 926 913 927 914 new_credits = manage_credits_prior_sending(info); 928 915 atomic_add(new_credits, &info->receive_credits); ··· 932 919 933 920 packet->flags = 0; 934 921 if (manage_keep_alive_before_sending(info)) 935 - packet->flags |= cpu_to_le16(SMB_DIRECT_RESPONSE_REQUESTED); 922 + packet->flags |= cpu_to_le16(SMBDIRECT_FLAG_RESPONSE_REQUESTED); 936 923 937 924 packet->reserved = 0; 938 925 if (!data_length) ··· 951 938 le32_to_cpu(packet->remaining_data_length)); 952 939 953 940 /* Map the packet to DMA */ 954 - header_length = sizeof(struct smbd_data_transfer); 941 + header_length = sizeof(struct smbdirect_data_transfer); 955 942 /* If this is a packet without payload, don't send padding */ 956 943 if (!data_length) 957 - header_length = offsetof(struct smbd_data_transfer, padding); 944 + header_length = offsetof(struct smbdirect_data_transfer, padding); 958 945 959 - request->sge[0].addr = ib_dma_map_single(info->id->device, 946 + request->sge[0].addr = ib_dma_map_single(sc->ib.dev, 960 947 (void *)packet, 961 948 header_length, 962 949 DMA_TO_DEVICE); 963 - if (ib_dma_mapping_error(info->id->device, request->sge[0].addr)) { 950 + if (ib_dma_mapping_error(sc->ib.dev, request->sge[0].addr)) { 964 951 rc = -EIO; 965 952 request->sge[0].addr = 0; 966 953 goto err_dma; 967 954 } 968 955 969 956 request->sge[0].length = header_length; 970 - request->sge[0].lkey = info->pd->local_dma_lkey; 957 + request->sge[0].lkey = sc->ib.pd->local_dma_lkey; 971 958 972 959 rc = smbd_post_send(info, request); 973 960 if (!rc) ··· 976 963 err_dma: 977 964 for (i = 0; i < request->num_sge; i++) 978 965 if (request->sge[i].addr) 979 - 
ib_dma_unmap_single(info->id->device, 966 + ib_dma_unmap_single(sc->ib.dev, 980 967 request->sge[i].addr, 981 968 request->sge[i].length, 982 969 DMA_TO_DEVICE); ··· 1021 1008 static int smbd_post_recv( 1022 1009 struct smbd_connection *info, struct smbd_response *response) 1023 1010 { 1011 + struct smbdirect_socket *sc = &info->socket; 1012 + struct smbdirect_socket_parameters *sp = &sc->parameters; 1024 1013 struct ib_recv_wr recv_wr; 1025 1014 int rc = -EIO; 1026 1015 1027 1016 response->sge.addr = ib_dma_map_single( 1028 - info->id->device, response->packet, 1029 - info->max_receive_size, DMA_FROM_DEVICE); 1030 - if (ib_dma_mapping_error(info->id->device, response->sge.addr)) 1017 + sc->ib.dev, response->packet, 1018 + sp->max_recv_size, DMA_FROM_DEVICE); 1019 + if (ib_dma_mapping_error(sc->ib.dev, response->sge.addr)) 1031 1020 return rc; 1032 1021 1033 - response->sge.length = info->max_receive_size; 1034 - response->sge.lkey = info->pd->local_dma_lkey; 1022 + response->sge.length = sp->max_recv_size; 1023 + response->sge.lkey = sc->ib.pd->local_dma_lkey; 1035 1024 1036 1025 response->cqe.done = recv_done; 1037 1026 ··· 1042 1027 recv_wr.sg_list = &response->sge; 1043 1028 recv_wr.num_sge = 1; 1044 1029 1045 - rc = ib_post_recv(info->id->qp, &recv_wr, NULL); 1030 + rc = ib_post_recv(sc->ib.qp, &recv_wr, NULL); 1046 1031 if (rc) { 1047 - ib_dma_unmap_single(info->id->device, response->sge.addr, 1032 + ib_dma_unmap_single(sc->ib.dev, response->sge.addr, 1048 1033 response->sge.length, DMA_FROM_DEVICE); 1049 1034 smbd_disconnect_rdma_connection(info); 1050 1035 log_rdma_recv(ERR, "ib_post_recv failed rc=%d\n", rc); ··· 1202 1187 static void put_receive_buffer( 1203 1188 struct smbd_connection *info, struct smbd_response *response) 1204 1189 { 1190 + struct smbdirect_socket *sc = &info->socket; 1205 1191 unsigned long flags; 1206 1192 1207 - ib_dma_unmap_single(info->id->device, response->sge.addr, 1193 + ib_dma_unmap_single(sc->ib.dev, response->sge.addr, 1208 1194 response->sge.length, DMA_FROM_DEVICE); 1209 1195 1210 1196 spin_lock_irqsave(&info->receive_queue_lock, flags); ··· 1280 1264 struct smbd_connection *info = container_of( 1281 1265 work, struct smbd_connection, 1282 1266 idle_timer_work.work); 1267 + struct smbdirect_socket *sc = &info->socket; 1268 + struct smbdirect_socket_parameters *sp = &sc->parameters; 1283 1269 1284 1270 if (info->keep_alive_requested != KEEP_ALIVE_NONE) { 1285 1271 log_keep_alive(ERR, ··· 1296 1278 1297 1279 /* Setup the next idle timeout work */ 1298 1280 queue_delayed_work(info->workqueue, &info->idle_timer_work, 1299 - info->keep_alive_interval*HZ); 1281 + msecs_to_jiffies(sp->keepalive_interval_msec)); 1300 1282 } 1301 1283 1302 1284 /* ··· 1307 1289 void smbd_destroy(struct TCP_Server_Info *server) 1308 1290 { 1309 1291 struct smbd_connection *info = server->smbd_conn; 1292 + struct smbdirect_socket *sc; 1293 + struct smbdirect_socket_parameters *sp; 1310 1294 struct smbd_response *response; 1311 1295 unsigned long flags; 1312 1296 ··· 1316 1296 log_rdma_event(INFO, "rdma session already destroyed\n"); 1317 1297 return; 1318 1298 } 1299 + sc = &info->socket; 1300 + sp = &sc->parameters; 1319 1301 1320 1302 log_rdma_event(INFO, "destroying rdma session\n"); 1321 - if (info->transport_status != SMBD_DISCONNECTED) { 1322 - rdma_disconnect(server->smbd_conn->id); 1303 + if (sc->status != SMBDIRECT_SOCKET_DISCONNECTED) { 1304 + rdma_disconnect(sc->rdma.cm_id); 1323 1305 log_rdma_event(INFO, "wait for transport being disconnected\n"); 1324 1306 
wait_event_interruptible( 1325 1307 info->disconn_wait, 1326 - info->transport_status == SMBD_DISCONNECTED); 1308 + sc->status == SMBDIRECT_SOCKET_DISCONNECTED); 1327 1309 } 1328 1310 1329 1311 log_rdma_event(INFO, "destroying qp\n"); 1330 - ib_drain_qp(info->id->qp); 1331 - rdma_destroy_qp(info->id); 1312 + ib_drain_qp(sc->ib.qp); 1313 + rdma_destroy_qp(sc->rdma.cm_id); 1314 + sc->ib.qp = NULL; 1332 1315 1333 1316 log_rdma_event(INFO, "cancelling idle timer\n"); 1334 1317 cancel_delayed_work_sync(&info->idle_timer_work); ··· 1359 1336 log_rdma_event(INFO, "free receive buffers\n"); 1360 1337 wait_event(info->wait_receive_queues, 1361 1338 info->count_receive_queue + info->count_empty_packet_queue 1362 - == info->receive_credit_max); 1339 + == sp->recv_credit_max); 1363 1340 destroy_receive_buffers(info); 1364 1341 1365 1342 /* ··· 1378 1355 } 1379 1356 destroy_mr_list(info); 1380 1357 1381 - ib_free_cq(info->send_cq); 1382 - ib_free_cq(info->recv_cq); 1383 - ib_dealloc_pd(info->pd); 1384 - rdma_destroy_id(info->id); 1358 + ib_free_cq(sc->ib.send_cq); 1359 + ib_free_cq(sc->ib.recv_cq); 1360 + ib_dealloc_pd(sc->ib.pd); 1361 + rdma_destroy_id(sc->rdma.cm_id); 1385 1362 1386 1363 /* free mempools */ 1387 1364 mempool_destroy(info->request_mempool); ··· 1390 1367 mempool_destroy(info->response_mempool); 1391 1368 kmem_cache_destroy(info->response_cache); 1392 1369 1393 - info->transport_status = SMBD_DESTROYED; 1370 + sc->status = SMBDIRECT_SOCKET_DESTROYED; 1394 1371 1395 1372 destroy_workqueue(info->workqueue); 1396 1373 log_rdma_event(INFO, "rdma session destroyed\n"); ··· 1415 1392 * This is possible if transport is disconnected and we haven't received 1416 1393 * notification from RDMA, but upper layer has detected timeout 1417 1394 */ 1418 - if (server->smbd_conn->transport_status == SMBD_CONNECTED) { 1395 + if (server->smbd_conn->socket.status == SMBDIRECT_SOCKET_CONNECTED) { 1419 1396 log_rdma_event(INFO, "disconnecting transport\n"); 1420 1397 smbd_destroy(server); 1421 1398 } ··· 1447 1424 #define MAX_NAME_LEN 80 1448 1425 static int allocate_caches_and_workqueue(struct smbd_connection *info) 1449 1426 { 1427 + struct smbdirect_socket *sc = &info->socket; 1428 + struct smbdirect_socket_parameters *sp = &sc->parameters; 1450 1429 char name[MAX_NAME_LEN]; 1451 1430 int rc; 1452 1431 ··· 1457 1432 kmem_cache_create( 1458 1433 name, 1459 1434 sizeof(struct smbd_request) + 1460 - sizeof(struct smbd_data_transfer), 1435 + sizeof(struct smbdirect_data_transfer), 1461 1436 0, SLAB_HWCACHE_ALIGN, NULL); 1462 1437 if (!info->request_cache) 1463 1438 return -ENOMEM; 1464 1439 1465 1440 info->request_mempool = 1466 - mempool_create(info->send_credit_target, mempool_alloc_slab, 1441 + mempool_create(sp->send_credit_target, mempool_alloc_slab, 1467 1442 mempool_free_slab, info->request_cache); 1468 1443 if (!info->request_mempool) 1469 1444 goto out1; ··· 1473 1448 kmem_cache_create( 1474 1449 name, 1475 1450 sizeof(struct smbd_response) + 1476 - info->max_receive_size, 1451 + sp->max_recv_size, 1477 1452 0, SLAB_HWCACHE_ALIGN, NULL); 1478 1453 if (!info->response_cache) 1479 1454 goto out2; 1480 1455 1481 1456 info->response_mempool = 1482 - mempool_create(info->receive_credit_max, mempool_alloc_slab, 1457 + mempool_create(sp->recv_credit_max, mempool_alloc_slab, 1483 1458 mempool_free_slab, info->response_cache); 1484 1459 if (!info->response_mempool) 1485 1460 goto out3; ··· 1489 1464 if (!info->workqueue) 1490 1465 goto out4; 1491 1466 1492 - rc = allocate_receive_buffers(info, 
info->receive_credit_max); 1467 + rc = allocate_receive_buffers(info, sp->recv_credit_max); 1493 1468 if (rc) { 1494 1469 log_rdma_event(ERR, "failed to allocate receive buffers\n"); 1495 1470 goto out5; ··· 1516 1491 { 1517 1492 int rc; 1518 1493 struct smbd_connection *info; 1494 + struct smbdirect_socket *sc; 1495 + struct smbdirect_socket_parameters *sp; 1519 1496 struct rdma_conn_param conn_param; 1520 1497 struct ib_qp_init_attr qp_attr; 1521 1498 struct sockaddr_in *addr_in = (struct sockaddr_in *) dstaddr; ··· 1527 1500 info = kzalloc(sizeof(struct smbd_connection), GFP_KERNEL); 1528 1501 if (!info) 1529 1502 return NULL; 1503 + sc = &info->socket; 1504 + sp = &sc->parameters; 1530 1505 1531 - info->transport_status = SMBD_CONNECTING; 1506 + sc->status = SMBDIRECT_SOCKET_CONNECTING; 1532 1507 rc = smbd_ia_open(info, dstaddr, port); 1533 1508 if (rc) { 1534 1509 log_rdma_event(INFO, "smbd_ia_open rc=%d\n", rc); 1535 1510 goto create_id_failed; 1536 1511 } 1537 1512 1538 - if (smbd_send_credit_target > info->id->device->attrs.max_cqe || 1539 - smbd_send_credit_target > info->id->device->attrs.max_qp_wr) { 1513 + if (smbd_send_credit_target > sc->ib.dev->attrs.max_cqe || 1514 + smbd_send_credit_target > sc->ib.dev->attrs.max_qp_wr) { 1540 1515 log_rdma_event(ERR, "consider lowering send_credit_target = %d. Possible CQE overrun, device reporting max_cqe %d max_qp_wr %d\n", 1541 1516 smbd_send_credit_target, 1542 - info->id->device->attrs.max_cqe, 1543 - info->id->device->attrs.max_qp_wr); 1517 + sc->ib.dev->attrs.max_cqe, 1518 + sc->ib.dev->attrs.max_qp_wr); 1544 1519 goto config_failed; 1545 1520 } 1546 1521 1547 - if (smbd_receive_credit_max > info->id->device->attrs.max_cqe || 1548 - smbd_receive_credit_max > info->id->device->attrs.max_qp_wr) { 1522 + if (smbd_receive_credit_max > sc->ib.dev->attrs.max_cqe || 1523 + smbd_receive_credit_max > sc->ib.dev->attrs.max_qp_wr) { 1549 1524 log_rdma_event(ERR, "consider lowering receive_credit_max = %d. 
Possible CQE overrun, device reporting max_cqe %d max_qp_wr %d\n", 1550 1525 smbd_receive_credit_max, 1551 - info->id->device->attrs.max_cqe, 1552 - info->id->device->attrs.max_qp_wr); 1526 + sc->ib.dev->attrs.max_cqe, 1527 + sc->ib.dev->attrs.max_qp_wr); 1553 1528 goto config_failed; 1554 1529 } 1555 1530 1556 - info->receive_credit_max = smbd_receive_credit_max; 1557 - info->send_credit_target = smbd_send_credit_target; 1558 - info->max_send_size = smbd_max_send_size; 1559 - info->max_fragmented_recv_size = smbd_max_fragmented_recv_size; 1560 - info->max_receive_size = smbd_max_receive_size; 1561 - info->keep_alive_interval = smbd_keep_alive_interval; 1531 + sp->recv_credit_max = smbd_receive_credit_max; 1532 + sp->send_credit_target = smbd_send_credit_target; 1533 + sp->max_send_size = smbd_max_send_size; 1534 + sp->max_fragmented_recv_size = smbd_max_fragmented_recv_size; 1535 + sp->max_recv_size = smbd_max_receive_size; 1536 + sp->keepalive_interval_msec = smbd_keep_alive_interval * 1000; 1562 1537 1563 - if (info->id->device->attrs.max_send_sge < SMBDIRECT_MAX_SEND_SGE || 1564 - info->id->device->attrs.max_recv_sge < SMBDIRECT_MAX_RECV_SGE) { 1538 + if (sc->ib.dev->attrs.max_send_sge < SMBDIRECT_MAX_SEND_SGE || 1539 + sc->ib.dev->attrs.max_recv_sge < SMBDIRECT_MAX_RECV_SGE) { 1565 1540 log_rdma_event(ERR, 1566 1541 "device %.*s max_send_sge/max_recv_sge = %d/%d too small\n", 1567 1542 IB_DEVICE_NAME_MAX, 1568 - info->id->device->name, 1569 - info->id->device->attrs.max_send_sge, 1570 - info->id->device->attrs.max_recv_sge); 1543 + sc->ib.dev->name, 1544 + sc->ib.dev->attrs.max_send_sge, 1545 + sc->ib.dev->attrs.max_recv_sge); 1571 1546 goto config_failed; 1572 1547 } 1573 1548 1574 - info->send_cq = NULL; 1575 - info->recv_cq = NULL; 1576 - info->send_cq = 1577 - ib_alloc_cq_any(info->id->device, info, 1578 - info->send_credit_target, IB_POLL_SOFTIRQ); 1579 - if (IS_ERR(info->send_cq)) { 1580 - info->send_cq = NULL; 1549 + sc->ib.send_cq = 1550 + ib_alloc_cq_any(sc->ib.dev, info, 1551 + sp->send_credit_target, IB_POLL_SOFTIRQ); 1552 + if (IS_ERR(sc->ib.send_cq)) { 1553 + sc->ib.send_cq = NULL; 1581 1554 goto alloc_cq_failed; 1582 1555 } 1583 1556 1584 - info->recv_cq = 1585 - ib_alloc_cq_any(info->id->device, info, 1586 - info->receive_credit_max, IB_POLL_SOFTIRQ); 1587 - if (IS_ERR(info->recv_cq)) { 1588 - info->recv_cq = NULL; 1557 + sc->ib.recv_cq = 1558 + ib_alloc_cq_any(sc->ib.dev, info, 1559 + sp->recv_credit_max, IB_POLL_SOFTIRQ); 1560 + if (IS_ERR(sc->ib.recv_cq)) { 1561 + sc->ib.recv_cq = NULL; 1589 1562 goto alloc_cq_failed; 1590 1563 } 1591 1564 1592 1565 memset(&qp_attr, 0, sizeof(qp_attr)); 1593 1566 qp_attr.event_handler = smbd_qp_async_error_upcall; 1594 1567 qp_attr.qp_context = info; 1595 - qp_attr.cap.max_send_wr = info->send_credit_target; 1596 - qp_attr.cap.max_recv_wr = info->receive_credit_max; 1568 + qp_attr.cap.max_send_wr = sp->send_credit_target; 1569 + qp_attr.cap.max_recv_wr = sp->recv_credit_max; 1597 1570 qp_attr.cap.max_send_sge = SMBDIRECT_MAX_SEND_SGE; 1598 1571 qp_attr.cap.max_recv_sge = SMBDIRECT_MAX_RECV_SGE; 1599 1572 qp_attr.cap.max_inline_data = 0; 1600 1573 qp_attr.sq_sig_type = IB_SIGNAL_REQ_WR; 1601 1574 qp_attr.qp_type = IB_QPT_RC; 1602 - qp_attr.send_cq = info->send_cq; 1603 - qp_attr.recv_cq = info->recv_cq; 1575 + qp_attr.send_cq = sc->ib.send_cq; 1576 + qp_attr.recv_cq = sc->ib.recv_cq; 1604 1577 qp_attr.port_num = ~0; 1605 1578 1606 - rc = rdma_create_qp(info->id, info->pd, &qp_attr); 1579 + rc = rdma_create_qp(sc->rdma.cm_id, 
sc->ib.pd, &qp_attr); 1607 1580 if (rc) { 1608 1581 log_rdma_event(ERR, "rdma_create_qp failed %i\n", rc); 1609 1582 goto create_qp_failed; 1610 1583 } 1584 + sc->ib.qp = sc->rdma.cm_id->qp; 1611 1585 1612 1586 memset(&conn_param, 0, sizeof(conn_param)); 1613 1587 conn_param.initiator_depth = 0; 1614 1588 1615 1589 conn_param.responder_resources = 1616 - min(info->id->device->attrs.max_qp_rd_atom, 1590 + min(sc->ib.dev->attrs.max_qp_rd_atom, 1617 1591 SMBD_CM_RESPONDER_RESOURCES); 1618 1592 info->responder_resources = conn_param.responder_resources; 1619 1593 log_rdma_mr(INFO, "responder_resources=%d\n", 1620 1594 info->responder_resources); 1621 1595 1622 1596 /* Need to send IRD/ORD in private data for iWARP */ 1623 - info->id->device->ops.get_port_immutable( 1624 - info->id->device, info->id->port_num, &port_immutable); 1597 + sc->ib.dev->ops.get_port_immutable( 1598 + sc->ib.dev, sc->rdma.cm_id->port_num, &port_immutable); 1625 1599 if (port_immutable.core_cap_flags & RDMA_CORE_PORT_IWARP) { 1626 1600 ird_ord_hdr[0] = info->responder_resources; 1627 1601 ird_ord_hdr[1] = 1; ··· 1643 1615 init_waitqueue_head(&info->conn_wait); 1644 1616 init_waitqueue_head(&info->disconn_wait); 1645 1617 init_waitqueue_head(&info->wait_reassembly_queue); 1646 - rc = rdma_connect(info->id, &conn_param); 1618 + rc = rdma_connect(sc->rdma.cm_id, &conn_param); 1647 1619 if (rc) { 1648 1620 log_rdma_event(ERR, "rdma_connect() failed with %i\n", rc); 1649 1621 goto rdma_connect_failed; 1650 1622 } 1651 1623 1652 1624 wait_event_interruptible( 1653 - info->conn_wait, info->transport_status != SMBD_CONNECTING); 1625 + info->conn_wait, sc->status != SMBDIRECT_SOCKET_CONNECTING); 1654 1626 1655 - if (info->transport_status != SMBD_CONNECTED) { 1627 + if (sc->status != SMBDIRECT_SOCKET_CONNECTED) { 1656 1628 log_rdma_event(ERR, "rdma_connect failed port=%d\n", port); 1657 1629 goto rdma_connect_failed; 1658 1630 } ··· 1668 1640 init_waitqueue_head(&info->wait_send_queue); 1669 1641 INIT_DELAYED_WORK(&info->idle_timer_work, idle_connection_timer); 1670 1642 queue_delayed_work(info->workqueue, &info->idle_timer_work, 1671 - info->keep_alive_interval*HZ); 1643 + msecs_to_jiffies(sp->keepalive_interval_msec)); 1672 1644 1673 1645 init_waitqueue_head(&info->wait_send_pending); 1674 1646 atomic_set(&info->send_pending, 0); ··· 1703 1675 negotiation_failed: 1704 1676 cancel_delayed_work_sync(&info->idle_timer_work); 1705 1677 destroy_caches_and_workqueue(info); 1706 - info->transport_status = SMBD_NEGOTIATE_FAILED; 1678 + sc->status = SMBDIRECT_SOCKET_NEGOTIATE_FAILED; 1707 1679 init_waitqueue_head(&info->conn_wait); 1708 - rdma_disconnect(info->id); 1680 + rdma_disconnect(sc->rdma.cm_id); 1709 1681 wait_event(info->conn_wait, 1710 - info->transport_status == SMBD_DISCONNECTED); 1682 + sc->status == SMBDIRECT_SOCKET_DISCONNECTED); 1711 1683 1712 1684 allocate_cache_failed: 1713 1685 rdma_connect_failed: 1714 - rdma_destroy_qp(info->id); 1686 + rdma_destroy_qp(sc->rdma.cm_id); 1715 1687 1716 1688 create_qp_failed: 1717 1689 alloc_cq_failed: 1718 - if (info->send_cq) 1719 - ib_free_cq(info->send_cq); 1720 - if (info->recv_cq) 1721 - ib_free_cq(info->recv_cq); 1690 + if (sc->ib.send_cq) 1691 + ib_free_cq(sc->ib.send_cq); 1692 + if (sc->ib.recv_cq) 1693 + ib_free_cq(sc->ib.recv_cq); 1722 1694 1723 1695 config_failed: 1724 - ib_dealloc_pd(info->pd); 1725 - rdma_destroy_id(info->id); 1696 + ib_dealloc_pd(sc->ib.pd); 1697 + rdma_destroy_id(sc->rdma.cm_id); 1726 1698 1727 1699 create_id_failed: 1728 1700 kfree(info); ··· 1762 
1734 static int smbd_recv_buf(struct smbd_connection *info, char *buf, 1763 1735 unsigned int size) 1764 1736 { 1737 + struct smbdirect_socket *sc = &info->socket; 1765 1738 struct smbd_response *response; 1766 - struct smbd_data_transfer *data_transfer; 1739 + struct smbdirect_data_transfer *data_transfer; 1767 1740 int to_copy, to_read, data_read, offset; 1768 1741 u32 data_length, remaining_data_length, data_offset; 1769 1742 int rc; ··· 1877 1848 rc = wait_event_interruptible( 1878 1849 info->wait_reassembly_queue, 1879 1850 info->reassembly_data_length >= size || 1880 - info->transport_status != SMBD_CONNECTED); 1851 + sc->status != SMBDIRECT_SOCKET_CONNECTED); 1881 1852 /* Don't return any data if interrupted */ 1882 1853 if (rc) 1883 1854 return rc; 1884 1855 1885 - if (info->transport_status != SMBD_CONNECTED) { 1856 + if (sc->status != SMBDIRECT_SOCKET_CONNECTED) { 1886 1857 log_read(ERR, "disconnected\n"); 1887 1858 return -ECONNABORTED; 1888 1859 } ··· 1900 1871 struct page *page, unsigned int page_offset, 1901 1872 unsigned int to_read) 1902 1873 { 1874 + struct smbdirect_socket *sc = &info->socket; 1903 1875 int ret; 1904 1876 char *to_address; 1905 1877 void *page_address; ··· 1909 1879 ret = wait_event_interruptible( 1910 1880 info->wait_reassembly_queue, 1911 1881 info->reassembly_data_length >= to_read || 1912 - info->transport_status != SMBD_CONNECTED); 1882 + sc->status != SMBDIRECT_SOCKET_CONNECTED); 1913 1883 if (ret) 1914 1884 return ret; 1915 1885 ··· 1984 1954 int num_rqst, struct smb_rqst *rqst_array) 1985 1955 { 1986 1956 struct smbd_connection *info = server->smbd_conn; 1957 + struct smbdirect_socket *sc = &info->socket; 1958 + struct smbdirect_socket_parameters *sp = &sc->parameters; 1987 1959 struct smb_rqst *rqst; 1988 1960 struct iov_iter iter; 1989 1961 unsigned int remaining_data_length, klen; 1990 1962 int rc, i, rqst_idx; 1991 1963 1992 - if (info->transport_status != SMBD_CONNECTED) 1964 + if (sc->status != SMBDIRECT_SOCKET_CONNECTED) 1993 1965 return -EAGAIN; 1994 1966 1995 1967 /* ··· 2003 1971 for (i = 0; i < num_rqst; i++) 2004 1972 remaining_data_length += smb_rqst_len(server, &rqst_array[i]); 2005 1973 2006 - if (unlikely(remaining_data_length > info->max_fragmented_send_size)) { 1974 + if (unlikely(remaining_data_length > sp->max_fragmented_send_size)) { 2007 1975 /* assertion: payload never exceeds negotiated maximum */ 2008 1976 log_write(ERR, "payload size %d > max size %d\n", 2009 - remaining_data_length, info->max_fragmented_send_size); 1977 + remaining_data_length, sp->max_fragmented_send_size); 2010 1978 return -EINVAL; 2011 1979 } 2012 1980 ··· 2085 2053 { 2086 2054 struct smbd_connection *info = 2087 2055 container_of(work, struct smbd_connection, mr_recovery_work); 2056 + struct smbdirect_socket *sc = &info->socket; 2088 2057 struct smbd_mr *smbdirect_mr; 2089 2058 int rc; 2090 2059 ··· 2103 2070 } 2104 2071 2105 2072 smbdirect_mr->mr = ib_alloc_mr( 2106 - info->pd, info->mr_type, 2073 + sc->ib.pd, info->mr_type, 2107 2074 info->max_frmr_depth); 2108 2075 if (IS_ERR(smbdirect_mr->mr)) { 2109 2076 log_rdma_mr(ERR, "ib_alloc_mr failed mr_type=%x max_frmr_depth=%x\n", ··· 2132 2099 2133 2100 static void destroy_mr_list(struct smbd_connection *info) 2134 2101 { 2102 + struct smbdirect_socket *sc = &info->socket; 2135 2103 struct smbd_mr *mr, *tmp; 2136 2104 2137 2105 cancel_work_sync(&info->mr_recovery_work); 2138 2106 list_for_each_entry_safe(mr, tmp, &info->mr_list, list) { 2139 2107 if (mr->state == MR_INVALIDATED) 2140 - 
ib_dma_unmap_sg(info->id->device, mr->sgt.sgl, 2108 + ib_dma_unmap_sg(sc->ib.dev, mr->sgt.sgl, 2141 2109 mr->sgt.nents, mr->dir); 2142 2110 ib_dereg_mr(mr->mr); 2143 2111 kfree(mr->sgt.sgl); ··· 2155 2121 */ 2156 2122 static int allocate_mr_list(struct smbd_connection *info) 2157 2123 { 2124 + struct smbdirect_socket *sc = &info->socket; 2158 2125 int i; 2159 2126 struct smbd_mr *smbdirect_mr, *tmp; 2160 2127 ··· 2171 2136 smbdirect_mr = kzalloc(sizeof(*smbdirect_mr), GFP_KERNEL); 2172 2137 if (!smbdirect_mr) 2173 2138 goto cleanup_entries; 2174 - smbdirect_mr->mr = ib_alloc_mr(info->pd, info->mr_type, 2139 + smbdirect_mr->mr = ib_alloc_mr(sc->ib.pd, info->mr_type, 2175 2140 info->max_frmr_depth); 2176 2141 if (IS_ERR(smbdirect_mr->mr)) { 2177 2142 log_rdma_mr(ERR, "ib_alloc_mr failed mr_type=%x max_frmr_depth=%x\n", ··· 2216 2181 */ 2217 2182 static struct smbd_mr *get_mr(struct smbd_connection *info) 2218 2183 { 2184 + struct smbdirect_socket *sc = &info->socket; 2219 2185 struct smbd_mr *ret; 2220 2186 int rc; 2221 2187 again: 2222 2188 rc = wait_event_interruptible(info->wait_mr, 2223 2189 atomic_read(&info->mr_ready_count) || 2224 - info->transport_status != SMBD_CONNECTED); 2190 + sc->status != SMBDIRECT_SOCKET_CONNECTED); 2225 2191 if (rc) { 2226 2192 log_rdma_mr(ERR, "wait_event_interruptible rc=%x\n", rc); 2227 2193 return NULL; 2228 2194 } 2229 2195 2230 - if (info->transport_status != SMBD_CONNECTED) { 2231 - log_rdma_mr(ERR, "info->transport_status=%x\n", 2232 - info->transport_status); 2196 + if (sc->status != SMBDIRECT_SOCKET_CONNECTED) { 2197 + log_rdma_mr(ERR, "sc->status=%x\n", sc->status); 2233 2198 return NULL; 2234 2199 } 2235 2200 ··· 2282 2247 struct iov_iter *iter, 2283 2248 bool writing, bool need_invalidate) 2284 2249 { 2250 + struct smbdirect_socket *sc = &info->socket; 2285 2251 struct smbd_mr *smbdirect_mr; 2286 2252 int rc, num_pages; 2287 2253 enum dma_data_direction dir; ··· 2312 2276 num_pages, iov_iter_count(iter), info->max_frmr_depth); 2313 2277 smbd_iter_to_mr(info, iter, &smbdirect_mr->sgt, info->max_frmr_depth); 2314 2278 2315 - rc = ib_dma_map_sg(info->id->device, smbdirect_mr->sgt.sgl, 2279 + rc = ib_dma_map_sg(sc->ib.dev, smbdirect_mr->sgt.sgl, 2316 2280 smbdirect_mr->sgt.nents, dir); 2317 2281 if (!rc) { 2318 2282 log_rdma_mr(ERR, "ib_dma_map_sg num_pages=%x dir=%x rc=%x\n", ··· 2348 2312 * on IB_WR_REG_MR. 
Hardware enforces a barrier and order of execution 2349 2313 * on the next ib_post_send when we actually send I/O to remote peer 2350 2314 */ 2351 - rc = ib_post_send(info->id->qp, &reg_wr->wr, NULL); 2315 + rc = ib_post_send(sc->ib.qp, &reg_wr->wr, NULL); 2352 2316 if (!rc) 2353 2317 return smbdirect_mr; 2354 2318 ··· 2357 2321 2358 2322 /* If all failed, attempt to recover this MR by setting it MR_ERROR*/ 2359 2323 map_mr_error: 2360 - ib_dma_unmap_sg(info->id->device, smbdirect_mr->sgt.sgl, 2324 + ib_dma_unmap_sg(sc->ib.dev, smbdirect_mr->sgt.sgl, 2361 2325 smbdirect_mr->sgt.nents, smbdirect_mr->dir); 2362 2326 2363 2327 dma_map_error: ··· 2395 2359 { 2396 2360 struct ib_send_wr *wr; 2397 2361 struct smbd_connection *info = smbdirect_mr->conn; 2362 + struct smbdirect_socket *sc = &info->socket; 2398 2363 int rc = 0; 2399 2364 2400 2365 if (smbdirect_mr->need_invalidate) { ··· 2409 2372 wr->send_flags = IB_SEND_SIGNALED; 2410 2373 2411 2374 init_completion(&smbdirect_mr->invalidate_done); 2412 - rc = ib_post_send(info->id->qp, wr, NULL); 2375 + rc = ib_post_send(sc->ib.qp, wr, NULL); 2413 2376 if (rc) { 2414 2377 log_rdma_mr(ERR, "ib_post_send failed rc=%x\n", rc); 2415 2378 smbd_disconnect_rdma_connection(info); ··· 2426 2389 2427 2390 if (smbdirect_mr->state == MR_INVALIDATED) { 2428 2391 ib_dma_unmap_sg( 2429 - info->id->device, smbdirect_mr->sgt.sgl, 2392 + sc->ib.dev, smbdirect_mr->sgt.sgl, 2430 2393 smbdirect_mr->sgt.nents, 2431 2394 smbdirect_mr->dir); 2432 2395 smbdirect_mr->state = MR_READY;
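
Most of the smbdirect.c churn is mechanical: connection state and
negotiated limits move from loose members of struct smbd_connection
onto the common socket, and the keepalive interval switches from
seconds to milliseconds. The two recurring patterns, with names taken
from the before/after columns above:

  /* info->transport_status becomes sc->status:
   *   SMBD_CONNECTING       -> SMBDIRECT_SOCKET_CONNECTING
   *   SMBD_CONNECTED        -> SMBDIRECT_SOCKET_CONNECTED
   *   SMBD_DISCONNECTING    -> SMBDIRECT_SOCKET_DISCONNECTING
   *   SMBD_DISCONNECTED     -> SMBDIRECT_SOCKET_DISCONNECTED
   *   SMBD_NEGOTIATE_FAILED -> SMBDIRECT_SOCKET_NEGOTIATE_FAILED
   *   SMBD_DESTROYED        -> SMBDIRECT_SOCKET_DESTROYED
   */

  /* timer arming now converts explicitly instead of interval*HZ */
  queue_delayed_work(info->workqueue, &info->idle_timer_work,
                     msecs_to_jiffies(sp->keepalive_interval_msec));
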
+5 -64
fs/smb/client/smbdirect.h
···	15	15	#include <rdma/rdma_cm.h>
16	16	#include <linux/mempool.h>
17	17	
	18	+	#include "../common/smbdirect/smbdirect.h"
	19	+	#include "../common/smbdirect/smbdirect_socket.h"
	20	+	
18	21	extern int rdma_readwrite_threshold;
19	22	extern int smbd_max_frmr_depth;
20	23	extern int smbd_keep_alive_interval;
···	53	50	 * 5. mempools for allocating packets
54	51	 */
55	52	struct smbd_connection {
56	-		enum smbd_connection_status transport_status;
	53	+		struct smbdirect_socket socket;
57	54	
58	-		/* RDMA related */
59	-		struct rdma_cm_id *id;
60	-		struct ib_qp_init_attr qp_attr;
61	-		struct ib_pd *pd;
62	-		struct ib_cq *send_cq, *recv_cq;
63	-		struct ib_device_attr dev_attr;
64	55		int ri_rc;
65	56		struct completion ri_done;
66	57		wait_queue_head_t conn_wait;
···	69	72		spinlock_t lock_new_credits_offered;
70	73		int new_credits_offered;
71	74	
72	-		/* Connection parameters defined in [MS-SMBD] 3.1.1.1 */
73	-		int receive_credit_max;
74	-		int send_credit_target;
75	-		int max_send_size;
76	-		int max_fragmented_recv_size;
77	-		int max_fragmented_send_size;
78	-		int max_receive_size;
79	-		int keep_alive_interval;
80	-		int max_readwrite_size;
	75	+		/* dynamic connection parameters defined in [MS-SMBD] 3.1.1.1 */
81	76		enum keep_alive_status keep_alive_requested;
82	77		int protocol;
83	78		atomic_t send_credits;
···	165	176		SMBD_NEGOTIATE_RESP,
166	177		SMBD_TRANSFER_DATA,
167	178	};
168	-	
169	-	#define SMB_DIRECT_RESPONSE_REQUESTED 0x0001
170	-	
171	-	/* SMBD negotiation request packet [MS-SMBD] 2.2.1 */
172	-	struct smbd_negotiate_req {
173	-		__le16 min_version;
174	-		__le16 max_version;
175	-		__le16 reserved;
176	-		__le16 credits_requested;
177	-		__le32 preferred_send_size;
178	-		__le32 max_receive_size;
179	-		__le32 max_fragmented_size;
180	-	} __packed;
181	-	
182	-	/* SMBD negotiation response packet [MS-SMBD] 2.2.2 */
183	-	struct smbd_negotiate_resp {
184	-		__le16 min_version;
185	-		__le16 max_version;
186	-		__le16 negotiated_version;
187	-		__le16 reserved;
188	-		__le16 credits_requested;
189	-		__le16 credits_granted;
190	-		__le32 status;
191	-		__le32 max_readwrite_size;
192	-		__le32 preferred_send_size;
193	-		__le32 max_receive_size;
194	-		__le32 max_fragmented_size;
195	-	} __packed;
196	-	
197	-	/* SMBD data transfer packet with payload [MS-SMBD] 2.2.3 */
198	-	struct smbd_data_transfer {
199	-		__le16 credits_requested;
200	-		__le16 credits_granted;
201	-		__le16 flags;
202	-		__le16 reserved;
203	-		__le32 remaining_data_length;
204	-		__le32 data_offset;
205	-		__le32 data_length;
206	-		__le32 padding;
207	-		__u8 buffer[];
208	-	} __packed;
209	-	
210	-	/* The packet fields for a registered RDMA buffer */
211	-	struct smbd_buffer_descriptor_v1 {
212	-		__le64 offset;
213	-		__le32 token;
214	-		__le32 length;
215	-	} __packed;
216	179	
217	180	/* Maximum number of SGEs used by smbdirect.c in any send work request */
218	181	#define SMBDIRECT_MAX_SEND_SGE	6
+7 -7
fs/smb/client/transport.c
···	1018	1018		uint index = 0;
1019	1019		unsigned int min_in_flight = UINT_MAX, max_in_flight = 0;
1020	1020		struct TCP_Server_Info *server = NULL;
1021	-		int i;
	1021	+		int i, start, cur;
1022	1022	
1023	1023		if (!ses)
1024	1024			return NULL;
1025	1025	
1026	1026		spin_lock(&ses->chan_lock);
	1027	+		start = atomic_inc_return(&ses->chan_seq);
1027	1028		for (i = 0; i < ses->chan_count; i++) {
1028	-			server = ses->chans[i].server;
	1029	+			cur = (start + i) % ses->chan_count;
	1030	+			server = ses->chans[cur].server;
1029	1031			if (!server || server->terminate)
1030	1032				continue;
1031	1033	
···	1044	1042		 */
1045	1043			if (server->in_flight < min_in_flight) {
1046	1044				min_in_flight = server->in_flight;
1047	-				index = i;
	1045	+				index = cur;
1048	1046			}
1049	1047			if (server->in_flight > max_in_flight)
1050	1048				max_in_flight = server->in_flight;
1051	1049		}
1052	1050	
1053	1051		/* if all channels are equally loaded, fall back to round-robin */
1054	-		if (min_in_flight == max_in_flight) {
1055	-			index = (uint)atomic_inc_return(&ses->chan_seq);
1056	-			index %= ses->chan_count;
1057	-		}
	1052	+		if (min_in_flight == max_in_flight)
	1053	+			index = (uint)start % ses->chan_count;
1058	1054	
1059	1055		server = ses->chans[index].server;
1060	1056		spin_unlock(&ses->chan_lock);
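This transport.c change is the "channel loading lag" fix from the tag summary: the least-loaded scan now starts at a rotating offset derived from chan_seq, so a channel whose in_flight counter has not yet caught up with reality (for example, one that was just reconnected) is not picked by every caller at once, and ties are no longer always resolved in favor of the lowest-numbered channel; plain round-robin remains the fallback when all channels look equally loaded. A self-contained userspace sketch of the same selection logic (hypothetical struct chan and pick_channel, not the kernel code):

#include <limits.h>
#include <stddef.h>

/* Hypothetical per-channel state; stands in for ses->chans[i].server. */
struct chan {
	unsigned int in_flight;	/* requests currently outstanding */
	int usable;		/* 0 if terminated / not yet established */
};

/*
 * Pick the least-loaded usable channel, scanning from a rotating
 * start offset; if all channels are equally loaded, behave as
 * plain round-robin.
 */
static size_t pick_channel(const struct chan *chans, size_t n,
			   unsigned int *seq)
{
	unsigned int min_if = UINT_MAX, max_if = 0;
	unsigned int start = (*seq)++;
	size_t index = 0;

	for (size_t i = 0; i < n; i++) {
		size_t cur = (start + i) % n;

		if (!chans[cur].usable)
			continue;
		if (chans[cur].in_flight < min_if) {
			min_if = chans[cur].in_flight;
			index = cur;
		}
		if (chans[cur].in_flight > max_if)
			max_if = chans[cur].in_flight;
	}

	/* all channels equally loaded: fall back to round-robin */
	if (min_if == max_if)
		index = start % n;

	return index;
}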
+37
fs/smb/common/smbdirect/smbdirect.h
···	1	+	/* SPDX-License-Identifier: GPL-2.0-or-later */
	2	+	/*
	3	+	 * Copyright (C) 2017, Microsoft Corporation.
	4	+	 * Copyright (C) 2018, LG Electronics.
	5	+	 */
	6	+	
	7	+	#ifndef __FS_SMB_COMMON_SMBDIRECT_SMBDIRECT_H__
	8	+	#define __FS_SMB_COMMON_SMBDIRECT_SMBDIRECT_H__
	9	+	
	10	+	/* SMB-DIRECT buffer descriptor V1 structure [MS-SMBD] 2.2.3.1 */
	11	+	struct smbdirect_buffer_descriptor_v1 {
	12	+		__le64 offset;
	13	+		__le32 token;
	14	+		__le32 length;
	15	+	} __packed;
	16	+	
	17	+	/*
	18	+	 * Connection parameters mostly from [MS-SMBD] 3.1.1.1
	19	+	 *
	20	+	 * These are setup and negotiated at the beginning of a
	21	+	 * connection and remain constant unless explicitly changed.
	22	+	 *
	23	+	 * Some values are important for the upper layer.
	24	+	 */
	25	+	struct smbdirect_socket_parameters {
	26	+		__u16 recv_credit_max;
	27	+		__u16 send_credit_target;
	28	+		__u32 max_send_size;
	29	+		__u32 max_fragmented_send_size;
	30	+		__u32 max_recv_size;
	31	+		__u32 max_fragmented_recv_size;
	32	+		__u32 max_read_write_size;
	33	+		__u32 keepalive_interval_msec;
	34	+		__u32 keepalive_timeout_msec;
	35	+	} __packed;
	36	+	
	37	+	#endif /* __FS_SMB_COMMON_SMBDIRECT_SMBDIRECT_H__ */
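For context, here is one way a transport might seed these parameters before the negotiate exchange. The helper name is hypothetical and the values are illustrative only (they roughly mirror the client's historical module-parameter defaults); in practice they come from module parameters and are then overwritten by the peer's negotiate response:

/*
 * Illustrative sketch only, not from the patch: seed the shared
 * parameter block with pre-negotiation defaults.
 */
static void smbdirect_init_default_params(struct smbdirect_socket_parameters *sp)
{
	sp->recv_credit_max = 255;
	sp->send_credit_target = 255;
	sp->max_send_size = 1364;
	sp->max_recv_size = 8192;
	sp->max_fragmented_recv_size = 1024 * 1024;
	sp->keepalive_interval_msec = 120 * 1000;
}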
+55
fs/smb/common/smbdirect/smbdirect_pdu.h
···	1	+	/* SPDX-License-Identifier: GPL-2.0-or-later */
	2	+	/*
	3	+	 * Copyright (c) 2017 Stefan Metzmacher
	4	+	 */
	5	+	
	6	+	#ifndef __FS_SMB_COMMON_SMBDIRECT_SMBDIRECT_PDU_H__
	7	+	#define __FS_SMB_COMMON_SMBDIRECT_SMBDIRECT_PDU_H__
	8	+	
	9	+	#define SMBDIRECT_V1 0x0100
	10	+	
	11	+	/* SMBD negotiation request packet [MS-SMBD] 2.2.1 */
	12	+	struct smbdirect_negotiate_req {
	13	+		__le16 min_version;
	14	+		__le16 max_version;
	15	+		__le16 reserved;
	16	+		__le16 credits_requested;
	17	+		__le32 preferred_send_size;
	18	+		__le32 max_receive_size;
	19	+		__le32 max_fragmented_size;
	20	+	} __packed;
	21	+	
	22	+	/* SMBD negotiation response packet [MS-SMBD] 2.2.2 */
	23	+	struct smbdirect_negotiate_resp {
	24	+		__le16 min_version;
	25	+		__le16 max_version;
	26	+		__le16 negotiated_version;
	27	+		__le16 reserved;
	28	+		__le16 credits_requested;
	29	+		__le16 credits_granted;
	30	+		__le32 status;
	31	+		__le32 max_readwrite_size;
	32	+		__le32 preferred_send_size;
	33	+		__le32 max_receive_size;
	34	+		__le32 max_fragmented_size;
	35	+	} __packed;
	36	+	
	37	+	#define SMBDIRECT_DATA_MIN_HDR_SIZE 0x14
	38	+	#define SMBDIRECT_DATA_OFFSET       0x18
	39	+	
	40	+	#define SMBDIRECT_FLAG_RESPONSE_REQUESTED 0x0001
	41	+	
	42	+	/* SMBD data transfer packet with payload [MS-SMBD] 2.2.3 */
	43	+	struct smbdirect_data_transfer {
	44	+		__le16 credits_requested;
	45	+		__le16 credits_granted;
	46	+		__le16 flags;
	47	+		__le16 reserved;
	48	+		__le32 remaining_data_length;
	49	+		__le32 data_offset;
	50	+		__le32 data_length;
	51	+		__le32 padding;
	52	+		__u8 buffer[];
	53	+	} __packed;
	54	+	
	55	+	#endif /* __FS_SMB_COMMON_SMBDIRECT_SMBDIRECT_PDU_H__ */
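These PDU structures make the on-the-wire format explicit: all fields are little-endian regardless of host byte order. As an illustration of how a sender would fill the negotiate request from its socket parameters, a hedged kernel-context sketch (the helper name is hypothetical; cpu_to_le16()/cpu_to_le32() are the standard byte-order helpers):

/*
 * Sketch, not from the patch: encode a negotiate request using the
 * shared PDU definitions and the connection's parameter block.
 */
static void build_negotiate_req(struct smbdirect_negotiate_req *req,
				const struct smbdirect_socket_parameters *sp)
{
	req->min_version = cpu_to_le16(SMBDIRECT_V1);
	req->max_version = cpu_to_le16(SMBDIRECT_V1);
	req->reserved = 0;
	req->credits_requested = cpu_to_le16(sp->send_credit_target);
	req->preferred_send_size = cpu_to_le32(sp->max_send_size);
	req->max_receive_size = cpu_to_le32(sp->max_recv_size);
	req->max_fragmented_size = cpu_to_le32(sp->max_fragmented_recv_size);
}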
+43
fs/smb/common/smbdirect/smbdirect_socket.h
···	1	+	/* SPDX-License-Identifier: GPL-2.0-or-later */
	2	+	/*
	3	+	 * Copyright (c) 2025 Stefan Metzmacher
	4	+	 */
	5	+	
	6	+	#ifndef __FS_SMB_COMMON_SMBDIRECT_SMBDIRECT_SOCKET_H__
	7	+	#define __FS_SMB_COMMON_SMBDIRECT_SMBDIRECT_SOCKET_H__
	8	+	
	9	+	enum smbdirect_socket_status {
	10	+		SMBDIRECT_SOCKET_CREATED,
	11	+		SMBDIRECT_SOCKET_CONNECTING,
	12	+		SMBDIRECT_SOCKET_CONNECTED,
	13	+		SMBDIRECT_SOCKET_NEGOTIATE_FAILED,
	14	+		SMBDIRECT_SOCKET_DISCONNECTING,
	15	+		SMBDIRECT_SOCKET_DISCONNECTED,
	16	+		SMBDIRECT_SOCKET_DESTROYED
	17	+	};
	18	+	
	19	+	struct smbdirect_socket {
	20	+		enum smbdirect_socket_status status;
	21	+	
	22	+		/* RDMA related */
	23	+		struct {
	24	+			struct rdma_cm_id *cm_id;
	25	+		} rdma;
	26	+	
	27	+		/* IB verbs related */
	28	+		struct {
	29	+			struct ib_pd *pd;
	30	+			struct ib_cq *send_cq;
	31	+			struct ib_cq *recv_cq;
	32	+	
	33	+			/*
	34	+			 * shortcuts for rdma.cm_id->{qp,device}
	35	+			 */
	36	+			struct ib_qp *qp;
	37	+			struct ib_device *dev;
	38	+		} ib;
	39	+	
	40	+		struct smbdirect_socket_parameters parameters;
	41	+	};
	42	+	
	43	+	#endif /* __FS_SMB_COMMON_SMBDIRECT_SMBDIRECT_SOCKET_H__ */
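The ib.qp and ib.dev members are documented as shortcuts for rdma.cm_id->{qp,device}, so whichever layer owns the socket has to keep them coherent when the queue pair is created. A hedged sketch of that wiring (the helper is hypothetical, not from the patch; rdma_create_qp() and the cm_id->qp/device members are the standard RDMA CM API):

/*
 * Illustrative only: create the QP on the CM id and refresh the
 * ib.{qp,dev} shortcuts so later code can use sc->ib directly.
 */
static int smbdirect_socket_bind_qp(struct smbdirect_socket *sc,
				    struct ib_qp_init_attr *attr)
{
	int rc = rdma_create_qp(sc->rdma.cm_id, sc->ib.pd, attr);

	if (rc)
		return rc;
	sc->ib.qp = sc->rdma.cm_id->qp;
	sc->ib.dev = sc->rdma.cm_id->device;
	return 0;
}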