
rxrpc: Don't expose skbs to in-kernel users [ver #2]

Don't expose skbs to in-kernel users, such as the AFS filesystem, but
instead provide a notification hook that indicates that a call needs
attention and another that indicates that there's a new call to be
collected.

This makes the following possibilities more achievable:

(1) Call refcounting can be made simpler if skbs don't hold refs to calls.

(2) skbs referring to non-data events can be freed much sooner rather than
being queued for AFS to pick up, since rxrpc_kernel_recv_data will be
able to consult the call state.

(3) We can shortcut the receive phase when a call is remotely aborted
because we don't have to go through all the packets to get to the one
cancelling the operation.

(4) It makes it easier to do encryption/decryption directly between AFS's
buffers and sk_buffs.

(5) Encryption/decryption can more easily be done in AFS's thread
contexts - usually that of the userspace process that issued a syscall
- rather than in one of rxrpc's background threads on a workqueue.

(6) AFS will be able to wait synchronously on a call inside AF_RXRPC.

To make this work, the following interface function has been added:

	int rxrpc_kernel_recv_data(struct socket *sock, struct rxrpc_call *call,
				   void *buffer, size_t bufsize, size_t *_offset,
				   bool want_more, u32 *_abort_code);

This is the recvmsg equivalent. It allows the caller to find out about the
state of a specific call and to transfer received data into a buffer
piecemeal.

afs_extract_data() and rxrpc_kernel_recv_data() now do all the extraction
logic between them. They don't wait synchronously yet because the socket
lock needs to be dealt with.

Five interface functions have been removed:

rxrpc_kernel_is_data_last()
rxrpc_kernel_get_abort_code()
rxrpc_kernel_get_error_number()
rxrpc_kernel_free_skb()
rxrpc_kernel_data_consumed()

As a temporary hack, sk_buffs going to an in-kernel call are queued on the
rxrpc_call struct (->knlrecv_queue) rather than being handed over to the
in-kernel user. To process the queue internally, a temporary function,
temp_deliver_data() has been added. This will be replaced with common code
between the rxrpc_recvmsg() path and the rxrpc_kernel_recv_data() path in a
future patch.

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Authored by David Howells, committed by David S. Miller
d001648e 95ac3994

+566 -587
+31 -41
Documentation/networking/rxrpc.txt
···
      The msg must not specify a destination address, control data or any flags
      other than MSG_MORE.  len is the total amount of data to transmit.
 
+ (*) Receive data from a call.
+
+	int rxrpc_kernel_recv_data(struct socket *sock,
+				   struct rxrpc_call *call,
+				   void *buf,
+				   size_t size,
+				   size_t *_offset,
+				   bool want_more,
+				   u32 *_abort)
+
+     This is used to receive data from either the reply part of a client call
+     or the request part of a service call.  buf and size specify how much
+     data is desired and where to store it.  *_offset is added on to buf and
+     subtracted from size internally; the amount copied into the buffer is
+     added to *_offset before returning.
+
+     want_more should be true if further data will be required after this is
+     satisfied and false if this is the last item of the receive phase.
+
+     There are three normal returns: 0 if the buffer was filled and want_more
+     was true; 1 if the buffer was filled, the last DATA packet has been
+     emptied and want_more was false; and -EAGAIN if the function needs to be
+     called again.
+
+     If the last DATA packet is processed but the buffer contains less than
+     the amount requested, EBADMSG is returned.  If want_more wasn't set, but
+     more data was available, EMSGSIZE is returned.
+
+     If a remote ABORT is detected, the abort code received will be stored in
+     *_abort and ECONNABORTED will be returned.
+
  (*) Abort a call.
 
	void rxrpc_kernel_abort_call(struct socket *sock,
···
      a BUSY message.  -ENODATA is returned if there were no incoming calls.
      Other errors may be returned if the call had been aborted (-ECONNABORTED)
      or had timed out (-ETIME).
-
- (*) Record the delivery of a data message.
-
-	void rxrpc_kernel_data_consumed(struct rxrpc_call *call,
-					struct sk_buff *skb);
-
-     This is used to record a data message as having been consumed and to
-     update the ACK state for the call.  The message must still be passed to
-     rxrpc_kernel_free_skb() for disposal by the caller.
-
- (*) Free a message.
-
-	void rxrpc_kernel_free_skb(struct sk_buff *skb);
-
-     This is used to free a non-DATA socket buffer intercepted from an AF_RXRPC
-     socket.
-
- (*) Determine if a data message is the last one on a call.
-
-	bool rxrpc_kernel_is_data_last(struct sk_buff *skb);
-
-     This is used to determine if a socket buffer holds the last data message
-     to be received for a call (true will be returned if it does, false
-     if not).
-
-     The data message will be part of the reply on a client call and the
-     request on an incoming call.  In the latter case there will be more
-     messages, but in the former case there will not.
-
- (*) Get the abort code from an abort message.
-
-	u32 rxrpc_kernel_get_abort_code(struct sk_buff *skb);
-
-     This is used to extract the abort code from a remote abort message.
-
- (*) Get the error number from a local or network error message.
-
-	int rxrpc_kernel_get_error_number(struct sk_buff *skb);
-
-     This is used to extract the error number from a message indicating either
-     a local error occurred or a network error occurred.
 
  (*) Allocate a null key for doing anonymous security.
 
+77 -65
fs/afs/cmservice.c
···
 #include "internal.h"
 #include "afs_cm.h"
 
-static int afs_deliver_cb_init_call_back_state(struct afs_call *,
-					       struct sk_buff *, bool);
-static int afs_deliver_cb_init_call_back_state3(struct afs_call *,
-						struct sk_buff *, bool);
-static int afs_deliver_cb_probe(struct afs_call *, struct sk_buff *, bool);
-static int afs_deliver_cb_callback(struct afs_call *, struct sk_buff *, bool);
-static int afs_deliver_cb_probe_uuid(struct afs_call *, struct sk_buff *, bool);
-static int afs_deliver_cb_tell_me_about_yourself(struct afs_call *,
-						 struct sk_buff *, bool);
+static int afs_deliver_cb_init_call_back_state(struct afs_call *);
+static int afs_deliver_cb_init_call_back_state3(struct afs_call *);
+static int afs_deliver_cb_probe(struct afs_call *);
+static int afs_deliver_cb_callback(struct afs_call *);
+static int afs_deliver_cb_probe_uuid(struct afs_call *);
+static int afs_deliver_cb_tell_me_about_yourself(struct afs_call *);
 static void afs_cm_destructor(struct afs_call *);
 
 /*
···
	 * received.  The step number here must match the final number in
	 * afs_deliver_cb_callback().
	 */
-	if (call->unmarshall == 6) {
+	if (call->unmarshall == 5) {
		ASSERT(call->server && call->count && call->request);
		afs_break_callbacks(call->server, call->count, call->request);
	}
···
 /*
  * deliver request data to a CB.CallBack call
  */
-static int afs_deliver_cb_callback(struct afs_call *call, struct sk_buff *skb,
-				   bool last)
+static int afs_deliver_cb_callback(struct afs_call *call)
 {
	struct sockaddr_rxrpc srx;
	struct afs_callback *cb;
···
	u32 tmp;
	int ret, loop;
 
-	_enter("{%u},{%u},%d", call->unmarshall, skb->len, last);
+	_enter("{%u}", call->unmarshall);
 
	switch (call->unmarshall) {
	case 0:
···
	/* extract the FID array and its count in two steps */
	case 1:
		_debug("extract FID count");
-		ret = afs_extract_data(call, skb, last, &call->tmp, 4);
+		ret = afs_extract_data(call, &call->tmp, 4, true);
		if (ret < 0)
			return ret;
···
 
	case 2:
		_debug("extract FID array");
-		ret = afs_extract_data(call, skb, last, call->buffer,
-				       call->count * 3 * 4);
+		ret = afs_extract_data(call, call->buffer,
+				       call->count * 3 * 4, true);
		if (ret < 0)
			return ret;
···
	/* extract the callback array and its count in two steps */
	case 3:
		_debug("extract CB count");
-		ret = afs_extract_data(call, skb, last, &call->tmp, 4);
+		ret = afs_extract_data(call, &call->tmp, 4, true);
		if (ret < 0)
			return ret;
···
			return -EBADMSG;
		call->offset = 0;
		call->unmarshall++;
-		if (tmp == 0)
-			goto empty_cb_array;
 
	case 4:
		_debug("extract CB array");
-		ret = afs_extract_data(call, skb, last, call->request,
-				       call->count * 3 * 4);
+		ret = afs_extract_data(call, call->buffer,
+				       call->count * 3 * 4, false);
		if (ret < 0)
			return ret;
···
			cb->type = ntohl(*bp++);
		}
 
-	empty_cb_array:
		call->offset = 0;
		call->unmarshall++;
-
-	case 5:
-		ret = afs_data_complete(call, skb, last);
-		if (ret < 0)
-			return ret;
 
		/* Record that the message was unmarshalled successfully so
		 * that the call destructor can know do the callback breaking
···
		 * updated also.
		 */
		call->unmarshall++;
-	case 6:
+	case 5:
		break;
	}
···
 /*
  * deliver request data to a CB.InitCallBackState call
  */
-static int afs_deliver_cb_init_call_back_state(struct afs_call *call,
-					       struct sk_buff *skb,
-					       bool last)
+static int afs_deliver_cb_init_call_back_state(struct afs_call *call)
 {
	struct sockaddr_rxrpc srx;
	struct afs_server *server;
	int ret;
 
-	_enter(",{%u},%d", skb->len, last);
+	_enter("");
 
	rxrpc_kernel_get_peer(afs_socket, call->rxcall, &srx);
 
-	ret = afs_data_complete(call, skb, last);
+	ret = afs_extract_data(call, NULL, 0, false);
	if (ret < 0)
		return ret;
···
 /*
  * deliver request data to a CB.InitCallBackState3 call
  */
-static int afs_deliver_cb_init_call_back_state3(struct afs_call *call,
-						struct sk_buff *skb,
-						bool last)
+static int afs_deliver_cb_init_call_back_state3(struct afs_call *call)
 {
	struct sockaddr_rxrpc srx;
	struct afs_server *server;
+	struct afs_uuid *r;
+	unsigned loop;
+	__be32 *b;
+	int ret;
 
-	_enter(",{%u},%d", skb->len, last);
+	_enter("");
 
	rxrpc_kernel_get_peer(afs_socket, call->rxcall, &srx);
 
-	/* There are some arguments that we ignore */
-	afs_data_consumed(call, skb);
-	if (!last)
-		return -EAGAIN;
+	_enter("{%u}", call->unmarshall);
+
+	switch (call->unmarshall) {
+	case 0:
+		call->offset = 0;
+		call->buffer = kmalloc(11 * sizeof(__be32), GFP_KERNEL);
+		if (!call->buffer)
+			return -ENOMEM;
+		call->unmarshall++;
+
+	case 1:
+		_debug("extract UUID");
+		ret = afs_extract_data(call, call->buffer,
+				       11 * sizeof(__be32), false);
+		switch (ret) {
+		case 0:		break;
+		case -EAGAIN:	return 0;
+		default:	return ret;
+		}
+
+		_debug("unmarshall UUID");
+		call->request = kmalloc(sizeof(struct afs_uuid), GFP_KERNEL);
+		if (!call->request)
+			return -ENOMEM;
+
+		b = call->buffer;
+		r = call->request;
+		r->time_low			= ntohl(b[0]);
+		r->time_mid			= ntohl(b[1]);
+		r->time_hi_and_version		= ntohl(b[2]);
+		r->clock_seq_hi_and_reserved	= ntohl(b[3]);
+		r->clock_seq_low		= ntohl(b[4]);
+
+		for (loop = 0; loop < 6; loop++)
+			r->node[loop] = ntohl(b[loop + 5]);
+
+		call->offset = 0;
+		call->unmarshall++;
+
+	case 2:
+		break;
+	}
 
	/* no unmarshalling required */
	call->state = AFS_CALL_REPLYING;
···
 /*
  * deliver request data to a CB.Probe call
  */
-static int afs_deliver_cb_probe(struct afs_call *call, struct sk_buff *skb,
-				bool last)
+static int afs_deliver_cb_probe(struct afs_call *call)
 {
	int ret;
 
-	_enter(",{%u},%d", skb->len, last);
+	_enter("");
 
-	ret = afs_data_complete(call, skb, last);
+	ret = afs_extract_data(call, NULL, 0, false);
	if (ret < 0)
		return ret;
···
 /*
  * deliver request data to a CB.ProbeUuid call
  */
-static int afs_deliver_cb_probe_uuid(struct afs_call *call, struct sk_buff *skb,
-				     bool last)
+static int afs_deliver_cb_probe_uuid(struct afs_call *call)
 {
	struct afs_uuid *r;
	unsigned loop;
	__be32 *b;
	int ret;
 
-	_enter("{%u},{%u},%d", call->unmarshall, skb->len, last);
-
-	ret = afs_data_complete(call, skb, last);
-	if (ret < 0)
-		return ret;
+	_enter("{%u}", call->unmarshall);
 
	switch (call->unmarshall) {
	case 0:
···
 
	case 1:
		_debug("extract UUID");
-		ret = afs_extract_data(call, skb, last, call->buffer,
-				       11 * sizeof(__be32));
+		ret = afs_extract_data(call, call->buffer,
+				       11 * sizeof(__be32), false);
		switch (ret) {
		case 0:		break;
		case -EAGAIN:	return 0;
···
		call->unmarshall++;
 
	case 2:
-		_debug("trailer");
-		if (skb->len != 0)
-			return -EBADMSG;
		break;
	}
-
-	ret = afs_data_complete(call, skb, last);
-	if (ret < 0)
-		return ret;
 
	call->state = AFS_CALL_REPLYING;
···
 /*
  * deliver request data to a CB.TellMeAboutYourself call
  */
-static int afs_deliver_cb_tell_me_about_yourself(struct afs_call *call,
-						 struct sk_buff *skb, bool last)
+static int afs_deliver_cb_tell_me_about_yourself(struct afs_call *call)
 {
	int ret;
 
-	_enter(",{%u},%d", skb->len, last);
+	_enter("");
 
-	ret = afs_data_complete(call, skb, last);
+	ret = afs_extract_data(call, NULL, 0, false);
	if (ret < 0)
		return ret;
 
+62 -86
fs/afs/fsclient.c
···
 /*
  * deliver reply data to an FS.FetchStatus
  */
-static int afs_deliver_fs_fetch_status(struct afs_call *call,
-				       struct sk_buff *skb, bool last)
+static int afs_deliver_fs_fetch_status(struct afs_call *call)
 {
	struct afs_vnode *vnode = call->reply;
	const __be32 *bp;
	int ret;
 
-	_enter(",,%u", last);
+	_enter("");
 
-	ret = afs_transfer_reply(call, skb, last);
+	ret = afs_transfer_reply(call);
	if (ret < 0)
		return ret;
···
 /*
  * deliver reply data to an FS.FetchData
  */
-static int afs_deliver_fs_fetch_data(struct afs_call *call,
-				     struct sk_buff *skb, bool last)
+static int afs_deliver_fs_fetch_data(struct afs_call *call)
 {
	struct afs_vnode *vnode = call->reply;
	const __be32 *bp;
···
	void *buffer;
	int ret;
 
-	_enter("{%u},{%u},%d", call->unmarshall, skb->len, last);
+	_enter("{%u}", call->unmarshall);
 
	switch (call->unmarshall) {
	case 0:
···
	 * client) */
	case 1:
		_debug("extract data length (MSW)");
-		ret = afs_extract_data(call, skb, last, &call->tmp, 4);
+		ret = afs_extract_data(call, &call->tmp, 4, true);
		if (ret < 0)
			return ret;
···
	/* extract the returned data length */
	case 2:
		_debug("extract data length");
-		ret = afs_extract_data(call, skb, last, &call->tmp, 4);
+		ret = afs_extract_data(call, &call->tmp, 4, true);
		if (ret < 0)
			return ret;
···
		_debug("extract data");
		if (call->count > 0) {
			page = call->reply3;
-			buffer = kmap_atomic(page);
-			ret = afs_extract_data(call, skb, last, buffer,
-					       call->count);
-			kunmap_atomic(buffer);
+			buffer = kmap(page);
+			ret = afs_extract_data(call, buffer,
+					       call->count, true);
+			kunmap(buffer);
			if (ret < 0)
				return ret;
		}
···
	/* extract the metadata */
	case 4:
-		ret = afs_extract_data(call, skb, last, call->buffer,
-				       (21 + 3 + 6) * 4);
+		ret = afs_extract_data(call, call->buffer,
+				       (21 + 3 + 6) * 4, false);
		if (ret < 0)
			return ret;
···
		call->unmarshall++;
 
	case 5:
-		ret = afs_data_complete(call, skb, last);
-		if (ret < 0)
-			return ret;
		break;
	}
 
	if (call->count < PAGE_SIZE) {
		_debug("clear");
		page = call->reply3;
-		buffer = kmap_atomic(page);
+		buffer = kmap(page);
		memset(buffer + call->count, 0, PAGE_SIZE - call->count);
-		kunmap_atomic(buffer);
+		kunmap(buffer);
	}
 
	_leave(" = 0 [done]");
···
 /*
  * deliver reply data to an FS.GiveUpCallBacks
  */
-static int afs_deliver_fs_give_up_callbacks(struct afs_call *call,
-					    struct sk_buff *skb, bool last)
+static int afs_deliver_fs_give_up_callbacks(struct afs_call *call)
 {
-	_enter(",{%u},%d", skb->len, last);
+	_enter("");
 
	/* shouldn't be any reply data */
-	return afs_data_complete(call, skb, last);
+	return afs_extract_data(call, NULL, 0, false);
 }
 
 /*
···
 /*
  * deliver reply data to an FS.CreateFile or an FS.MakeDir
  */
-static int afs_deliver_fs_create_vnode(struct afs_call *call,
-				       struct sk_buff *skb, bool last)
+static int afs_deliver_fs_create_vnode(struct afs_call *call)
 {
	struct afs_vnode *vnode = call->reply;
	const __be32 *bp;
	int ret;
 
-	_enter("{%u},{%u},%d", call->unmarshall, skb->len, last);
+	_enter("{%u}", call->unmarshall);
 
-	ret = afs_transfer_reply(call, skb, last);
+	ret = afs_transfer_reply(call);
	if (ret < 0)
		return ret;
···
 /*
  * deliver reply data to an FS.RemoveFile or FS.RemoveDir
  */
-static int afs_deliver_fs_remove(struct afs_call *call,
-				 struct sk_buff *skb, bool last)
+static int afs_deliver_fs_remove(struct afs_call *call)
 {
	struct afs_vnode *vnode = call->reply;
	const __be32 *bp;
	int ret;
 
-	_enter("{%u},{%u},%d", call->unmarshall, skb->len, last);
+	_enter("{%u}", call->unmarshall);
 
-	ret = afs_transfer_reply(call, skb, last);
+	ret = afs_transfer_reply(call);
	if (ret < 0)
		return ret;
···
 /*
  * deliver reply data to an FS.Link
  */
-static int afs_deliver_fs_link(struct afs_call *call,
-			       struct sk_buff *skb, bool last)
+static int afs_deliver_fs_link(struct afs_call *call)
 {
	struct afs_vnode *dvnode = call->reply, *vnode = call->reply2;
	const __be32 *bp;
	int ret;
 
-	_enter("{%u},{%u},%d", call->unmarshall, skb->len, last);
+	_enter("{%u}", call->unmarshall);
 
-	ret = afs_transfer_reply(call, skb, last);
+	ret = afs_transfer_reply(call);
	if (ret < 0)
		return ret;
···
 /*
  * deliver reply data to an FS.Symlink
  */
-static int afs_deliver_fs_symlink(struct afs_call *call,
-				  struct sk_buff *skb, bool last)
+static int afs_deliver_fs_symlink(struct afs_call *call)
 {
	struct afs_vnode *vnode = call->reply;
	const __be32 *bp;
	int ret;
 
-	_enter("{%u},{%u},%d", call->unmarshall, skb->len, last);
+	_enter("{%u}", call->unmarshall);
 
-	ret = afs_transfer_reply(call, skb, last);
+	ret = afs_transfer_reply(call);
	if (ret < 0)
		return ret;
···
 /*
  * deliver reply data to an FS.Rename
  */
-static int afs_deliver_fs_rename(struct afs_call *call,
-				 struct sk_buff *skb, bool last)
+static int afs_deliver_fs_rename(struct afs_call *call)
 {
	struct afs_vnode *orig_dvnode = call->reply, *new_dvnode = call->reply2;
	const __be32 *bp;
	int ret;
 
-	_enter("{%u},{%u},%d", call->unmarshall, skb->len, last);
+	_enter("{%u}", call->unmarshall);
 
-	ret = afs_transfer_reply(call, skb, last);
+	ret = afs_transfer_reply(call);
	if (ret < 0)
		return ret;
···
 /*
  * deliver reply data to an FS.StoreData
  */
-static int afs_deliver_fs_store_data(struct afs_call *call,
-				     struct sk_buff *skb, bool last)
+static int afs_deliver_fs_store_data(struct afs_call *call)
 {
	struct afs_vnode *vnode = call->reply;
	const __be32 *bp;
	int ret;
 
-	_enter(",,%u", last);
+	_enter("");
 
-	ret = afs_transfer_reply(call, skb, last);
+	ret = afs_transfer_reply(call);
	if (ret < 0)
		return ret;
···
 /*
  * deliver reply data to an FS.StoreStatus
  */
-static int afs_deliver_fs_store_status(struct afs_call *call,
-				       struct sk_buff *skb, bool last)
+static int afs_deliver_fs_store_status(struct afs_call *call)
 {
	afs_dataversion_t *store_version;
	struct afs_vnode *vnode = call->reply;
	const __be32 *bp;
	int ret;
 
-	_enter(",,%u", last);
+	_enter("");
 
-	ret = afs_transfer_reply(call, skb, last);
+	ret = afs_transfer_reply(call);
	if (ret < 0)
		return ret;
···
 /*
  * deliver reply data to an FS.GetVolumeStatus
  */
-static int afs_deliver_fs_get_volume_status(struct afs_call *call,
-					    struct sk_buff *skb, bool last)
+static int afs_deliver_fs_get_volume_status(struct afs_call *call)
 {
	const __be32 *bp;
	char *p;
	int ret;
 
-	_enter("{%u},{%u},%d", call->unmarshall, skb->len, last);
+	_enter("{%u}", call->unmarshall);
 
	switch (call->unmarshall) {
	case 0:
···
	/* extract the returned status record */
	case 1:
		_debug("extract status");
-		ret = afs_extract_data(call, skb, last, call->buffer,
-				       12 * 4);
+		ret = afs_extract_data(call, call->buffer,
+				       12 * 4, true);
		if (ret < 0)
			return ret;
···
	/* extract the volume name length */
	case 2:
-		ret = afs_extract_data(call, skb, last, &call->tmp, 4);
+		ret = afs_extract_data(call, &call->tmp, 4, true);
		if (ret < 0)
			return ret;
···
	case 3:
		_debug("extract volname");
		if (call->count > 0) {
-			ret = afs_extract_data(call, skb, last, call->reply3,
-					       call->count);
+			ret = afs_extract_data(call, call->reply3,
+					       call->count, true);
			if (ret < 0)
				return ret;
		}
···
		call->count = 4 - (call->count & 3);
 
	case 4:
-		ret = afs_extract_data(call, skb, last, call->buffer,
-				       call->count);
+		ret = afs_extract_data(call, call->buffer,
+				       call->count, true);
		if (ret < 0)
			return ret;
···
	/* extract the offline message length */
	case 5:
-		ret = afs_extract_data(call, skb, last, &call->tmp, 4);
+		ret = afs_extract_data(call, &call->tmp, 4, true);
		if (ret < 0)
			return ret;
···
	case 6:
		_debug("extract offline");
		if (call->count > 0) {
-			ret = afs_extract_data(call, skb, last, call->reply3,
-					       call->count);
+			ret = afs_extract_data(call, call->reply3,
+					       call->count, true);
			if (ret < 0)
				return ret;
		}
···
		call->count = 4 - (call->count & 3);
 
	case 7:
-		ret = afs_extract_data(call, skb, last, call->buffer,
-				       call->count);
+		ret = afs_extract_data(call, call->buffer,
+				       call->count, true);
		if (ret < 0)
			return ret;
···
	/* extract the message of the day length */
	case 8:
-		ret = afs_extract_data(call, skb, last, &call->tmp, 4);
+		ret = afs_extract_data(call, &call->tmp, 4, true);
		if (ret < 0)
			return ret;
···
	case 9:
		_debug("extract motd");
		if (call->count > 0) {
-			ret = afs_extract_data(call, skb, last, call->reply3,
-					       call->count);
+			ret = afs_extract_data(call, call->reply3,
+					       call->count, true);
			if (ret < 0)
				return ret;
		}
···
		call->unmarshall++;
 
		/* extract the message of the day padding */
-		if ((call->count & 3) == 0) {
-			call->unmarshall++;
-			goto no_motd_padding;
-		}
-		call->count = 4 - (call->count & 3);
+		call->count = (4 - (call->count & 3)) & 3;
 
	case 10:
-		ret = afs_extract_data(call, skb, last, call->buffer,
-				       call->count);
+		ret = afs_extract_data(call, call->buffer,
+				       call->count, false);
		if (ret < 0)
			return ret;
 
		call->offset = 0;
		call->unmarshall++;
-	no_motd_padding:
-
	case 11:
-		ret = afs_data_complete(call, skb, last);
-		if (ret < 0)
-			return ret;
		break;
	}
···
 /*
  * deliver reply data to an FS.SetLock, FS.ExtendLock or FS.ReleaseLock
  */
-static int afs_deliver_fs_xxxx_lock(struct afs_call *call,
-				    struct sk_buff *skb, bool last)
+static int afs_deliver_fs_xxxx_lock(struct afs_call *call)
 {
	const __be32 *bp;
	int ret;
 
-	_enter("{%u},{%u},%d", call->unmarshall, skb->len, last);
+	_enter("{%u}", call->unmarshall);
 
-	ret = afs_transfer_reply(call, skb, last);
+	ret = afs_transfer_reply(call);
	if (ret < 0)
		return ret;
 
+9 -24
fs/afs/internal.h
···
 #include <linux/kernel.h>
 #include <linux/fs.h>
 #include <linux/pagemap.h>
-#include <linux/skbuff.h>
 #include <linux/rxrpc.h>
 #include <linux/key.h>
 #include <linux/workqueue.h>
···
  */
 struct afs_wait_mode {
	/* RxRPC received message notification */
-	void (*rx_wakeup)(struct afs_call *call);
+	rxrpc_notify_rx_t notify_rx;
 
	/* synchronous call waiter and call dispatched notification */
	int (*wait)(struct afs_call *call);
···
	const struct afs_call_type *type;	/* type of call */
	const struct afs_wait_mode *wait_mode;	/* completion wait mode */
	wait_queue_head_t	waitq;		/* processes awaiting completion */
-	void (*async_workfn)(struct afs_call *call); /* asynchronous work function */
	struct work_struct	async_work;	/* asynchronous work processor */
	struct work_struct	work;		/* actual work processor */
-	struct sk_buff_head	rx_queue;	/* received packets */
	struct rxrpc_call	*rxcall;	/* RxRPC call handle */
	struct key		*key;		/* security for this call */
	struct afs_server	*server;	/* server affected by incoming CM call */
···
	void			*reply4;	/* reply buffer (fourth part) */
	pgoff_t			first;		/* first page in mapping to deal with */
	pgoff_t			last;		/* last page in mapping to deal with */
+	size_t			offset;		/* offset into received data store */
	enum {					/* call state */
		AFS_CALL_REQUESTING,	/* request is being sent for outgoing call */
		AFS_CALL_AWAIT_REPLY,	/* awaiting reply to outgoing call */
···
		AFS_CALL_AWAIT_REQUEST,	/* awaiting request data on incoming call */
		AFS_CALL_REPLYING,	/* replying to incoming call */
		AFS_CALL_AWAIT_ACK,	/* awaiting final ACK of incoming call */
-		AFS_CALL_COMPLETE,	/* successfully completed */
-		AFS_CALL_BUSY,		/* server was busy */
-		AFS_CALL_ABORTED,	/* call was aborted */
-		AFS_CALL_ERROR,		/* call failed due to error */
+		AFS_CALL_COMPLETE,	/* Completed or failed */
	} state;
	int			error;		/* error code */
+	u32			abort_code;	/* Remote abort ID or 0 */
	unsigned		request_size;	/* size of request data */
	unsigned		reply_max;	/* maximum size of reply */
-	unsigned		reply_size;	/* current size of reply */
	unsigned		first_offset;	/* offset into mapping[first] */
	unsigned		last_to;	/* amount of mapping[last] */
-	unsigned		offset;		/* offset into received data store */
	unsigned char		unmarshall;	/* unmarshalling phase */
	bool			incoming;	/* T if incoming call */
	bool			send_pages;	/* T if data from mapping should be sent */
+	bool			need_attention;	/* T if RxRPC poked us */
	u16			service_id;	/* RxRPC service ID to call */
	__be16			port;		/* target UDP port */
	__be32			operation_ID;	/* operation ID for an incoming call */
···
	/* deliver request or reply data to an call
	 * - returning an error will cause the call to be aborted
	 */
-	int (*deliver)(struct afs_call *call, struct sk_buff *skb,
-		       bool last);
+	int (*deliver)(struct afs_call *call);
 
	/* map an abort code to an error number */
	int (*abort_to_error)(u32 abort_code);
···
 extern int afs_open_socket(void);
 extern void afs_close_socket(void);
-extern void afs_data_consumed(struct afs_call *, struct sk_buff *);
 extern int afs_make_call(struct in_addr *, struct afs_call *, gfp_t,
			 const struct afs_wait_mode *);
 extern struct afs_call *afs_alloc_flat_call(const struct afs_call_type *,
					    size_t, size_t);
 extern void afs_flat_call_destructor(struct afs_call *);
-extern int afs_transfer_reply(struct afs_call *, struct sk_buff *, bool);
 extern void afs_send_empty_reply(struct afs_call *);
 extern void afs_send_simple_reply(struct afs_call *, const void *, size_t);
-extern int afs_extract_data(struct afs_call *, struct sk_buff *, bool, void *,
-			    size_t);
+extern int afs_extract_data(struct afs_call *, void *, size_t, bool);
 
-static inline int afs_data_complete(struct afs_call *call, struct sk_buff *skb,
-				    bool last)
+static inline int afs_transfer_reply(struct afs_call *call)
 {
-	if (skb->len > 0)
-		return -EBADMSG;
-	afs_data_consumed(call, skb);
-	if (!last)
-		return -EAGAIN;
-	return 0;
+	return afs_extract_data(call, call->buffer, call->reply_max, false);
 }
 
 /*
+157 -280
fs/afs/rxrpc.c
···
 struct socket *afs_socket; /* my RxRPC socket */
 static struct workqueue_struct *afs_async_calls;
 static atomic_t afs_outstanding_calls;
-static atomic_t afs_outstanding_skbs;
 
-static void afs_wake_up_call_waiter(struct afs_call *);
+static void afs_free_call(struct afs_call *);
+static void afs_wake_up_call_waiter(struct sock *, struct rxrpc_call *, unsigned long);
 static int afs_wait_for_call_to_complete(struct afs_call *);
-static void afs_wake_up_async_call(struct afs_call *);
+static void afs_wake_up_async_call(struct sock *, struct rxrpc_call *, unsigned long);
 static int afs_dont_wait_for_call_to_complete(struct afs_call *);
-static void afs_process_async_call(struct afs_call *);
-static void afs_rx_interceptor(struct sock *, unsigned long, struct sk_buff *);
-static int afs_deliver_cm_op_id(struct afs_call *, struct sk_buff *, bool);
+static void afs_process_async_call(struct work_struct *);
+static void afs_rx_new_call(struct sock *);
+static int afs_deliver_cm_op_id(struct afs_call *);
 
 /* synchronous call management */
 const struct afs_wait_mode afs_sync_call = {
-	.rx_wakeup	= afs_wake_up_call_waiter,
+	.notify_rx	= afs_wake_up_call_waiter,
	.wait		= afs_wait_for_call_to_complete,
 };
 
 /* asynchronous call management */
 const struct afs_wait_mode afs_async_call = {
-	.rx_wakeup	= afs_wake_up_async_call,
+	.notify_rx	= afs_wake_up_async_call,
	.wait		= afs_dont_wait_for_call_to_complete,
 };
 
 /* asynchronous incoming call management */
 static const struct afs_wait_mode afs_async_incoming_call = {
-	.rx_wakeup	= afs_wake_up_async_call,
+	.notify_rx	= afs_wake_up_async_call,
 };
 
 /* asynchronous incoming call initial processing */
···
 static void afs_collect_incoming_call(struct work_struct *);
 
-static struct sk_buff_head afs_incoming_calls;
 static DECLARE_WORK(afs_collect_incoming_call_work, afs_collect_incoming_call);
-
-static void afs_async_workfn(struct work_struct *work)
-{
-	struct afs_call *call = container_of(work, struct afs_call, async_work);
-
-	call->async_workfn(call);
-}
 
 static int afs_wait_atomic_t(atomic_t *p)
 {
···
	int ret;
 
	_enter("");
-
-	skb_queue_head_init(&afs_incoming_calls);
 
	ret = -ENOMEM;
	afs_async_calls = create_singlethread_workqueue("kafsd");
···
	if (ret < 0)
		goto error_2;
 
+	rxrpc_kernel_new_call_notification(socket, afs_rx_new_call);
+
	ret = kernel_listen(socket, INT_MAX);
	if (ret < 0)
		goto error_2;
-
-	rxrpc_kernel_intercept_rx_messages(socket, afs_rx_interceptor);
 
	afs_socket = socket;
	_leave(" = 0");
···
 {
	_enter("");
 
+	_debug("outstanding %u", atomic_read(&afs_outstanding_calls));
	wait_on_atomic_t(&afs_outstanding_calls, afs_wait_atomic_t,
			 TASK_UNINTERRUPTIBLE);
	_debug("no outstanding calls");
 
+	flush_workqueue(afs_async_calls);
	sock_release(afs_socket);
 
	_debug("dework");
	destroy_workqueue(afs_async_calls);
-
-	ASSERTCMP(atomic_read(&afs_outstanding_skbs), ==, 0);
	_leave("");
-}
-
-/*
- * Note that the data in a socket buffer is now consumed.
- */
-void afs_data_consumed(struct afs_call *call, struct sk_buff *skb)
-{
-	if (!skb) {
-		_debug("DLVR NULL [%d]", atomic_read(&afs_outstanding_skbs));
-		dump_stack();
-	} else {
-		_debug("DLVR %p{%u} [%d]",
-		       skb, skb->mark, atomic_read(&afs_outstanding_skbs));
-		rxrpc_kernel_data_consumed(call->rxcall, skb);
-	}
-}
-
-/*
- * free a socket buffer
- */
-static void afs_free_skb(struct sk_buff *skb)
-{
-	if (!skb) {
-		_debug("FREE NULL [%d]", atomic_read(&afs_outstanding_skbs));
-		dump_stack();
-	} else {
-		_debug("FREE %p{%u} [%d]",
-		       skb, skb->mark, atomic_read(&afs_outstanding_skbs));
-		if (atomic_dec_return(&afs_outstanding_skbs) == -1)
-			BUG();
-		rxrpc_kernel_free_skb(skb);
-	}
 }
 
 /*
···
	ASSERTCMP(call->rxcall, ==, NULL);
	ASSERT(!work_pending(&call->async_work));
-	ASSERT(skb_queue_empty(&call->rx_queue));
	ASSERT(call->type->name != NULL);
 
	kfree(call->request);
···
 /*
  * allocate a call with flat request and reply buffers
  */
 struct afs_call *afs_alloc_flat_call(const struct afs_call_type *type,
-				     size_t request_size, size_t reply_size)
+				     size_t request_size, size_t reply_max)
 {
	struct afs_call *call;
···
	call->type = type;
	call->request_size = request_size;
-	call->reply_max = reply_size;
+	call->reply_max = reply_max;
 
	if (request_size) {
		call->request = kmalloc(request_size, GFP_NOFS);
···
			goto nomem_free;
	}
 
-	if (reply_size) {
-		call->buffer = kmalloc(reply_size, GFP_NOFS);
+	if (reply_max) {
+		call->buffer = kmalloc(reply_max, GFP_NOFS);
		if (!call->buffer)
			goto nomem_free;
	}
 
	init_waitqueue_head(&call->waitq);
-	skb_queue_head_init(&call->rx_queue);
	return call;
219 261 nomem_free: ··· 310 354 struct msghdr msg; 311 355 struct kvec iov[1]; 312 356 int ret; 313 - struct sk_buff *skb; 314 357 315 358 _enter("%x,{%d},", addr->s_addr, ntohs(call->port)); 316 359 ··· 321 366 atomic_read(&afs_outstanding_calls)); 322 367 323 368 call->wait_mode = wait_mode; 324 - call->async_workfn = afs_process_async_call; 325 - INIT_WORK(&call->async_work, afs_async_workfn); 369 + INIT_WORK(&call->async_work, afs_process_async_call); 326 370 327 371 memset(&srx, 0, sizeof(srx)); 328 372 srx.srx_family = AF_RXRPC; ··· 334 380 335 381 /* create a call */ 336 382 rxcall = rxrpc_kernel_begin_call(afs_socket, &srx, call->key, 337 - (unsigned long) call, gfp); 383 + (unsigned long) call, gfp, 384 + wait_mode->notify_rx); 338 385 call->key = NULL; 339 386 if (IS_ERR(rxcall)) { 340 387 ret = PTR_ERR(rxcall); ··· 378 423 379 424 error_do_abort: 380 425 rxrpc_kernel_abort_call(afs_socket, rxcall, RX_USER_ABORT); 381 - while ((skb = skb_dequeue(&call->rx_queue))) 382 - afs_free_skb(skb); 383 426 error_kill_call: 384 427 afs_end_call(call); 385 428 _leave(" = %d", ret); ··· 385 432 } 386 433 387 434 /* 388 - * Handles intercepted messages that were arriving in the socket's Rx queue. 389 - * 390 - * Called from the AF_RXRPC call processor in waitqueue process context. For 391 - * each call, it is guaranteed this will be called in order of packet to be 392 - * delivered. 
393 - */ 394 - static void afs_rx_interceptor(struct sock *sk, unsigned long user_call_ID, 395 - struct sk_buff *skb) 396 - { 397 - struct afs_call *call = (struct afs_call *) user_call_ID; 398 - 399 - _enter("%p,,%u", call, skb->mark); 400 - 401 - _debug("ICPT %p{%u} [%d]", 402 - skb, skb->mark, atomic_read(&afs_outstanding_skbs)); 403 - 404 - ASSERTCMP(sk, ==, afs_socket->sk); 405 - atomic_inc(&afs_outstanding_skbs); 406 - 407 - if (!call) { 408 - /* its an incoming call for our callback service */ 409 - skb_queue_tail(&afs_incoming_calls, skb); 410 - queue_work(afs_wq, &afs_collect_incoming_call_work); 411 - } else { 412 - /* route the messages directly to the appropriate call */ 413 - skb_queue_tail(&call->rx_queue, skb); 414 - call->wait_mode->rx_wakeup(call); 415 - } 416 - 417 - _leave(""); 418 - } 419 - 420 - /* 421 435 * deliver messages to a call 422 436 */ 423 437 static void afs_deliver_to_call(struct afs_call *call) 424 438 { 425 - struct sk_buff *skb; 426 - bool last; 427 439 u32 abort_code; 428 440 int ret; 429 441 430 - _enter(""); 442 + _enter("%s", call->type->name); 431 443 432 - while ((call->state == AFS_CALL_AWAIT_REPLY || 433 - call->state == AFS_CALL_AWAIT_OP_ID || 434 - call->state == AFS_CALL_AWAIT_REQUEST || 435 - call->state == AFS_CALL_AWAIT_ACK) && 436 - (skb = skb_dequeue(&call->rx_queue))) { 437 - switch (skb->mark) { 438 - case RXRPC_SKB_MARK_DATA: 439 - _debug("Rcv DATA"); 440 - last = rxrpc_kernel_is_data_last(skb); 441 - ret = call->type->deliver(call, skb, last); 442 - switch (ret) { 443 - case -EAGAIN: 444 - if (last) { 445 - _debug("short data"); 446 - goto unmarshal_error; 447 - } 448 - break; 449 - case 0: 450 - ASSERT(last); 451 - if (call->state == AFS_CALL_AWAIT_REPLY) 452 - call->state = AFS_CALL_COMPLETE; 453 - break; 454 - case -ENOTCONN: 455 - abort_code = RX_CALL_DEAD; 456 - goto do_abort; 457 - case -ENOTSUPP: 458 - abort_code = RX_INVALID_OPERATION; 459 - goto do_abort; 460 - default: 461 - unmarshal_error: 462 - 
abort_code = RXGEN_CC_UNMARSHAL; 463 - if (call->state != AFS_CALL_AWAIT_REPLY) 464 - abort_code = RXGEN_SS_UNMARSHAL; 465 - do_abort: 466 - rxrpc_kernel_abort_call(afs_socket, 467 - call->rxcall, 468 - abort_code); 469 - call->error = ret; 470 - call->state = AFS_CALL_ERROR; 471 - break; 444 + while (call->state == AFS_CALL_AWAIT_REPLY || 445 + call->state == AFS_CALL_AWAIT_OP_ID || 446 + call->state == AFS_CALL_AWAIT_REQUEST || 447 + call->state == AFS_CALL_AWAIT_ACK 448 + ) { 449 + if (call->state == AFS_CALL_AWAIT_ACK) { 450 + size_t offset = 0; 451 + ret = rxrpc_kernel_recv_data(afs_socket, call->rxcall, 452 + NULL, 0, &offset, false, 453 + &call->abort_code); 454 + if (ret == -EINPROGRESS || ret == -EAGAIN) 455 + return; 456 + if (ret == 1) { 457 + call->state = AFS_CALL_COMPLETE; 458 + goto done; 472 459 } 473 - break; 474 - case RXRPC_SKB_MARK_FINAL_ACK: 475 - _debug("Rcv ACK"); 476 - call->state = AFS_CALL_COMPLETE; 477 - break; 478 - case RXRPC_SKB_MARK_BUSY: 479 - _debug("Rcv BUSY"); 480 - call->error = -EBUSY; 481 - call->state = AFS_CALL_BUSY; 482 - break; 483 - case RXRPC_SKB_MARK_REMOTE_ABORT: 484 - abort_code = rxrpc_kernel_get_abort_code(skb); 485 - call->error = call->type->abort_to_error(abort_code); 486 - call->state = AFS_CALL_ABORTED; 487 - _debug("Rcv ABORT %u -> %d", abort_code, call->error); 488 - break; 489 - case RXRPC_SKB_MARK_LOCAL_ABORT: 490 - abort_code = rxrpc_kernel_get_abort_code(skb); 491 - call->error = call->type->abort_to_error(abort_code); 492 - call->state = AFS_CALL_ABORTED; 493 - _debug("Loc ABORT %u -> %d", abort_code, call->error); 494 - break; 495 - case RXRPC_SKB_MARK_NET_ERROR: 496 - call->error = -rxrpc_kernel_get_error_number(skb); 497 - call->state = AFS_CALL_ERROR; 498 - _debug("Rcv NET ERROR %d", call->error); 499 - break; 500 - case RXRPC_SKB_MARK_LOCAL_ERROR: 501 - call->error = -rxrpc_kernel_get_error_number(skb); 502 - call->state = AFS_CALL_ERROR; 503 - _debug("Rcv LOCAL ERROR %d", call->error); 504 - break; 
505 - default: 506 - BUG(); 507 - break; 460 + return; 508 461 } 509 462 510 - afs_free_skb(skb); 463 + ret = call->type->deliver(call); 464 + switch (ret) { 465 + case 0: 466 + if (call->state == AFS_CALL_AWAIT_REPLY) 467 + call->state = AFS_CALL_COMPLETE; 468 + goto done; 469 + case -EINPROGRESS: 470 + case -EAGAIN: 471 + goto out; 472 + case -ENOTCONN: 473 + abort_code = RX_CALL_DEAD; 474 + rxrpc_kernel_abort_call(afs_socket, call->rxcall, 475 + abort_code); 476 + goto do_abort; 477 + case -ENOTSUPP: 478 + abort_code = RX_INVALID_OPERATION; 479 + rxrpc_kernel_abort_call(afs_socket, call->rxcall, 480 + abort_code); 481 + goto do_abort; 482 + case -ENODATA: 483 + case -EBADMSG: 484 + case -EMSGSIZE: 485 + default: 486 + abort_code = RXGEN_CC_UNMARSHAL; 487 + if (call->state != AFS_CALL_AWAIT_REPLY) 488 + abort_code = RXGEN_SS_UNMARSHAL; 489 + rxrpc_kernel_abort_call(afs_socket, call->rxcall, 490 + abort_code); 491 + goto do_abort; 492 + } 511 493 } 512 494 513 - /* make sure the queue is empty if the call is done with (we might have 514 - * aborted the call early because of an unmarshalling error) */ 515 - if (call->state >= AFS_CALL_COMPLETE) { 516 - while ((skb = skb_dequeue(&call->rx_queue))) 517 - afs_free_skb(skb); 518 - if (call->incoming) 519 - afs_end_call(call); 520 - } 521 - 495 + done: 496 + if (call->state == AFS_CALL_COMPLETE && call->incoming) 497 + afs_end_call(call); 498 + out: 522 499 _leave(""); 500 + return; 501 + 502 + do_abort: 503 + call->error = ret; 504 + call->state = AFS_CALL_COMPLETE; 505 + goto done; 523 506 } 524 507 525 508 /* ··· 463 574 */ 464 575 static int afs_wait_for_call_to_complete(struct afs_call *call) 465 576 { 466 - struct sk_buff *skb; 467 577 int ret; 468 578 469 579 DECLARE_WAITQUEUE(myself, current); ··· 474 586 set_current_state(TASK_INTERRUPTIBLE); 475 587 476 588 /* deliver any messages that are in the queue */ 477 - if (!skb_queue_empty(&call->rx_queue)) { 589 + if (call->state < AFS_CALL_COMPLETE && 
call->need_attention) { 590 + call->need_attention = false; 478 591 __set_current_state(TASK_RUNNING); 479 592 afs_deliver_to_call(call); 480 593 continue; 481 594 } 482 595 483 596 ret = call->error; 484 - if (call->state >= AFS_CALL_COMPLETE) 597 + if (call->state == AFS_CALL_COMPLETE) 485 598 break; 486 599 ret = -EINTR; 487 600 if (signal_pending(current)) ··· 496 607 /* kill the call */ 497 608 if (call->state < AFS_CALL_COMPLETE) { 498 609 _debug("call incomplete"); 499 - rxrpc_kernel_abort_call(afs_socket, call->rxcall, RX_CALL_DEAD); 500 - while ((skb = skb_dequeue(&call->rx_queue))) 501 - afs_free_skb(skb); 610 + rxrpc_kernel_abort_call(afs_socket, call->rxcall, 611 + RX_CALL_DEAD); 502 612 } 503 613 504 614 _debug("call complete"); ··· 509 621 /* 510 622 * wake up a waiting call 511 623 */ 512 - static void afs_wake_up_call_waiter(struct afs_call *call) 624 + static void afs_wake_up_call_waiter(struct sock *sk, struct rxrpc_call *rxcall, 625 + unsigned long call_user_ID) 513 626 { 627 + struct afs_call *call = (struct afs_call *)call_user_ID; 628 + 629 + call->need_attention = true; 514 630 wake_up(&call->waitq); 515 631 } 516 632 517 633 /* 518 634 * wake up an asynchronous call 519 635 */ 520 - static void afs_wake_up_async_call(struct afs_call *call) 636 + static void afs_wake_up_async_call(struct sock *sk, struct rxrpc_call *rxcall, 637 + unsigned long call_user_ID) 521 638 { 522 - _enter(""); 639 + struct afs_call *call = (struct afs_call *)call_user_ID; 640 + 641 + call->need_attention = true; 523 642 queue_work(afs_async_calls, &call->async_work); 524 643 } 525 644 ··· 544 649 /* 545 650 * delete an asynchronous call 546 651 */ 547 - static void afs_delete_async_call(struct afs_call *call) 652 + static void afs_delete_async_call(struct work_struct *work) 548 653 { 654 + struct afs_call *call = container_of(work, struct afs_call, async_work); 655 + 549 656 _enter(""); 550 657 551 658 afs_free_call(call); ··· 557 660 558 661 /* 559 662 * perform 
processing on an asynchronous call 560 - * - on a multiple-thread workqueue this work item may try to run on several 561 - * CPUs at the same time 562 663 */ 563 - static void afs_process_async_call(struct afs_call *call) 664 + static void afs_process_async_call(struct work_struct *work) 564 665 { 666 + struct afs_call *call = container_of(work, struct afs_call, async_work); 667 + 565 668 _enter(""); 566 669 567 - if (!skb_queue_empty(&call->rx_queue)) 670 + if (call->state < AFS_CALL_COMPLETE && call->need_attention) { 671 + call->need_attention = false; 568 672 afs_deliver_to_call(call); 673 + } 569 674 570 - if (call->state >= AFS_CALL_COMPLETE && call->wait_mode) { 675 + if (call->state == AFS_CALL_COMPLETE && call->wait_mode) { 571 676 if (call->wait_mode->async_complete) 572 677 call->wait_mode->async_complete(call->reply, 573 678 call->error); ··· 580 681 581 682 /* we can't just delete the call because the work item may be 582 683 * queued */ 583 - call->async_workfn = afs_delete_async_call; 684 + call->async_work.func = afs_delete_async_call; 584 685 queue_work(afs_async_calls, &call->async_work); 585 686 } 586 687 587 688 _leave(""); 588 - } 589 - 590 - /* 591 - * Empty a socket buffer into a flat reply buffer. 
592 - */ 593 - int afs_transfer_reply(struct afs_call *call, struct sk_buff *skb, bool last) 594 - { 595 - size_t len = skb->len; 596 - 597 - if (len > call->reply_max - call->reply_size) { 598 - _leave(" = -EBADMSG [%zu > %u]", 599 - len, call->reply_max - call->reply_size); 600 - return -EBADMSG; 601 - } 602 - 603 - if (len > 0) { 604 - if (skb_copy_bits(skb, 0, call->buffer + call->reply_size, 605 - len) < 0) 606 - BUG(); 607 - call->reply_size += len; 608 - } 609 - 610 - afs_data_consumed(call, skb); 611 - if (!last) 612 - return -EAGAIN; 613 - 614 - if (call->reply_size != call->reply_max) { 615 - _leave(" = -EBADMSG [%u != %u]", 616 - call->reply_size, call->reply_max); 617 - return -EBADMSG; 618 - } 619 - return 0; 620 689 } 621 690 622 691 /* ··· 594 727 { 595 728 struct rxrpc_call *rxcall; 596 729 struct afs_call *call = NULL; 597 - struct sk_buff *skb; 598 730 599 - while ((skb = skb_dequeue(&afs_incoming_calls))) { 600 - _debug("new call"); 731 + _enter(""); 601 732 602 - /* don't need the notification */ 603 - afs_free_skb(skb); 604 - 733 + do { 605 734 if (!call) { 606 735 call = kzalloc(sizeof(struct afs_call), GFP_KERNEL); 607 736 if (!call) { ··· 605 742 return; 606 743 } 607 744 608 - call->async_workfn = afs_process_async_call; 609 - INIT_WORK(&call->async_work, afs_async_workfn); 745 + INIT_WORK(&call->async_work, afs_process_async_call); 610 746 call->wait_mode = &afs_async_incoming_call; 611 747 call->type = &afs_RXCMxxxx; 612 748 init_waitqueue_head(&call->waitq); 613 - skb_queue_head_init(&call->rx_queue); 614 749 call->state = AFS_CALL_AWAIT_OP_ID; 615 750 616 751 _debug("CALL %p{%s} [%d]", ··· 618 757 } 619 758 620 759 rxcall = rxrpc_kernel_accept_call(afs_socket, 621 - (unsigned long) call); 760 + (unsigned long)call, 761 + afs_wake_up_async_call); 622 762 if (!IS_ERR(rxcall)) { 623 763 call->rxcall = rxcall; 764 + call->need_attention = true; 765 + queue_work(afs_async_calls, &call->async_work); 624 766 call = NULL; 625 767 } 626 - } 768 
+ } while (!call); 627 769 628 770 if (call) 629 771 afs_free_call(call); 630 772 } 631 773 632 774 /* 775 + * Notification of an incoming call. 776 + */ 777 + static void afs_rx_new_call(struct sock *sk) 778 + { 779 + queue_work(afs_wq, &afs_collect_incoming_call_work); 780 + } 781 + 782 + /* 633 783 * Grab the operation ID from an incoming cache manager call. The socket 634 784 * buffer is discarded on error or if we don't yet have sufficient data. 635 785 */ 636 - static int afs_deliver_cm_op_id(struct afs_call *call, struct sk_buff *skb, 637 - bool last) 786 + static int afs_deliver_cm_op_id(struct afs_call *call) 638 787 { 639 - size_t len = skb->len; 640 - void *oibuf = (void *) &call->operation_ID; 788 + int ret; 641 789 642 - _enter("{%u},{%zu},%d", call->offset, len, last); 790 + _enter("{%zu}", call->offset); 643 791 644 792 ASSERTCMP(call->offset, <, 4); 645 793 646 794 /* the operation ID forms the first four bytes of the request data */ 647 - len = min_t(size_t, len, 4 - call->offset); 648 - if (skb_copy_bits(skb, 0, oibuf + call->offset, len) < 0) 649 - BUG(); 650 - if (!pskb_pull(skb, len)) 651 - BUG(); 652 - call->offset += len; 653 - 654 - if (call->offset < 4) { 655 - afs_data_consumed(call, skb); 656 - _leave(" = -EAGAIN"); 657 - return -EAGAIN; 658 - } 795 + ret = afs_extract_data(call, &call->operation_ID, 4, true); 796 + if (ret < 0) 797 + return ret; 659 798 660 799 call->state = AFS_CALL_AWAIT_REQUEST; 800 + call->offset = 0; 661 801 662 802 /* ask the cache manager to route the call (it'll change the call type 663 803 * if successful) */ ··· 667 805 668 806 /* pass responsibility for the remainer of this message off to the 669 807 * cache manager op */ 670 - return call->type->deliver(call, skb, last); 808 + return call->type->deliver(call); 671 809 } 672 810 673 811 /* ··· 743 881 /* 744 882 * Extract a piece of data from the received data socket buffers. 
745 883 */ 746 - int afs_extract_data(struct afs_call *call, struct sk_buff *skb, 747 - bool last, void *buf, size_t count) 884 + int afs_extract_data(struct afs_call *call, void *buf, size_t count, 885 + bool want_more) 748 886 { 749 - size_t len = skb->len; 887 + int ret; 750 888 751 - _enter("{%u},{%zu},%d,,%zu", call->offset, len, last, count); 889 + _enter("{%s,%zu},,%zu,%d", 890 + call->type->name, call->offset, count, want_more); 752 891 753 - ASSERTCMP(call->offset, <, count); 892 + ASSERTCMP(call->offset, <=, count); 754 893 755 - len = min_t(size_t, len, count - call->offset); 756 - if (skb_copy_bits(skb, 0, buf + call->offset, len) < 0 || 757 - !pskb_pull(skb, len)) 758 - BUG(); 759 - call->offset += len; 894 + ret = rxrpc_kernel_recv_data(afs_socket, call->rxcall, 895 + buf, count, &call->offset, 896 + want_more, &call->abort_code); 897 + if (ret == 0 || ret == -EAGAIN) 898 + return ret; 760 899 761 - if (call->offset < count) { 762 - afs_data_consumed(call, skb); 763 - _leave(" = -EAGAIN"); 764 - return -EAGAIN; 900 + if (ret == 1) { 901 + switch (call->state) { 902 + case AFS_CALL_AWAIT_REPLY: 903 + call->state = AFS_CALL_COMPLETE; 904 + break; 905 + case AFS_CALL_AWAIT_REQUEST: 906 + call->state = AFS_CALL_REPLYING; 907 + break; 908 + default: 909 + break; 910 + } 911 + return 0; 765 912 } 766 - return 0; 913 + 914 + if (ret == -ECONNABORTED) 915 + call->error = call->type->abort_to_error(call->abort_code); 916 + else 917 + call->error = ret; 918 + call->state = AFS_CALL_COMPLETE; 919 + return ret; 767 920 }
fs/afs/vlclient.c | +3 -4
···
 /*
  * deliver reply data to a VL.GetEntryByXXX call
  */
-static int afs_deliver_vl_get_entry_by_xxx(struct afs_call *call,
-					   struct sk_buff *skb, bool last)
+static int afs_deliver_vl_get_entry_by_xxx(struct afs_call *call)
 {
 	struct afs_cache_vlocation *entry;
 	__be32 *bp;
 	u32 tmp;
 	int loop, ret;
 
-	_enter(",,%u", last);
+	_enter("");
 
-	ret = afs_transfer_reply(call, skb, last);
+	ret = afs_transfer_reply(call);
 	if (ret < 0)
 		return ret;
 
include/net/af_rxrpc.h | +11 -24
···
 #ifndef _NET_RXRPC_H
 #define _NET_RXRPC_H
 
-#include <linux/skbuff.h>
 #include <linux/rxrpc.h>
 
 struct key;
···
 struct socket;
 struct rxrpc_call;
 
-/*
- * the mark applied to socket buffers that may be intercepted
- */
-enum rxrpc_skb_mark {
-	RXRPC_SKB_MARK_DATA,		/* data message */
-	RXRPC_SKB_MARK_FINAL_ACK,	/* final ACK received message */
-	RXRPC_SKB_MARK_BUSY,		/* server busy message */
-	RXRPC_SKB_MARK_REMOTE_ABORT,	/* remote abort message */
-	RXRPC_SKB_MARK_LOCAL_ABORT,	/* local abort message */
-	RXRPC_SKB_MARK_NET_ERROR,	/* network error message */
-	RXRPC_SKB_MARK_LOCAL_ERROR,	/* local error message */
-	RXRPC_SKB_MARK_NEW_CALL,	/* local error message */
-};
+typedef void (*rxrpc_notify_rx_t)(struct sock *, struct rxrpc_call *,
+				  unsigned long);
+typedef void (*rxrpc_notify_new_call_t)(struct sock *);
 
-typedef void (*rxrpc_interceptor_t)(struct sock *, unsigned long,
-				    struct sk_buff *);
-void rxrpc_kernel_intercept_rx_messages(struct socket *, rxrpc_interceptor_t);
+void rxrpc_kernel_new_call_notification(struct socket *,
+					rxrpc_notify_new_call_t);
 struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *,
 					   struct sockaddr_rxrpc *,
 					   struct key *,
 					   unsigned long,
-					   gfp_t);
+					   gfp_t,
+					   rxrpc_notify_rx_t);
 int rxrpc_kernel_send_data(struct socket *, struct rxrpc_call *,
 			   struct msghdr *, size_t);
-void rxrpc_kernel_data_consumed(struct rxrpc_call *, struct sk_buff *);
+int rxrpc_kernel_recv_data(struct socket *, struct rxrpc_call *,
+			   void *, size_t, size_t *, bool, u32 *);
 void rxrpc_kernel_abort_call(struct socket *, struct rxrpc_call *, u32);
 void rxrpc_kernel_end_call(struct socket *, struct rxrpc_call *);
-bool rxrpc_kernel_is_data_last(struct sk_buff *);
-u32 rxrpc_kernel_get_abort_code(struct sk_buff *);
-int rxrpc_kernel_get_error_number(struct sk_buff *);
-void rxrpc_kernel_free_skb(struct sk_buff *);
-struct rxrpc_call *rxrpc_kernel_accept_call(struct socket *, unsigned long);
+struct rxrpc_call *rxrpc_kernel_accept_call(struct socket *, unsigned long,
+					    rxrpc_notify_rx_t);
 int rxrpc_kernel_reject_call(struct socket *);
 void rxrpc_kernel_get_peer(struct socket *, struct rxrpc_call *,
 			   struct sockaddr_rxrpc *);
net/rxrpc/af_rxrpc.c | +15 -14
···
  * @srx: The address of the peer to contact
  * @key: The security context to use (defaults to socket setting)
  * @user_call_ID: The ID to use
+ * @gfp: The allocation constraints
+ * @notify_rx: Where to send notifications instead of socket queue
  *
  * Allow a kernel service to begin a call on the nominated socket.  This just
  * sets up all the internal tracking structures and allocates connection and
···
 					   struct sockaddr_rxrpc *srx,
 					   struct key *key,
 					   unsigned long user_call_ID,
-					   gfp_t gfp)
+					   gfp_t gfp,
+					   rxrpc_notify_rx_t notify_rx)
 {
 	struct rxrpc_conn_parameters cp;
 	struct rxrpc_call *call;
···
 	cp.exclusive = false;
 	cp.service_id = srx->srx_service;
 	call = rxrpc_new_client_call(rx, &cp, srx, user_call_ID, gfp);
+	if (!IS_ERR(call))
+		call->notify_rx = notify_rx;
 
 	release_sock(&rx->sk);
 	_leave(" = %p", call);
···
 {
 	_enter("%d{%d}", call->debug_id, atomic_read(&call->usage));
 	rxrpc_remove_user_ID(rxrpc_sk(sock->sk), call);
+	rxrpc_purge_queue(&call->knlrecv_queue);
 	rxrpc_put_call(call);
 }
 EXPORT_SYMBOL(rxrpc_kernel_end_call);
 
 /**
- * rxrpc_kernel_intercept_rx_messages - Intercept received RxRPC messages
+ * rxrpc_kernel_new_call_notification - Get notifications of new calls
  * @sock: The socket to intercept received messages on
- * @interceptor: The function to pass the messages to
+ * @notify_new_call: Function to be called when new calls appear
  *
- * Allow a kernel service to intercept messages heading for the Rx queue on an
- * RxRPC socket.  They get passed to the specified function instead.
- * @interceptor should free the socket buffers it is given.  @interceptor is
- * called with the socket receive queue spinlock held and softirqs disabled -
- * this ensures that the messages will be delivered in the right order.
+ * Allow a kernel service to be given notifications about new calls.
  */
-void rxrpc_kernel_intercept_rx_messages(struct socket *sock,
-					rxrpc_interceptor_t interceptor)
+void rxrpc_kernel_new_call_notification(
+	struct socket *sock,
+	rxrpc_notify_new_call_t notify_new_call)
 {
 	struct rxrpc_sock *rx = rxrpc_sk(sock->sk);
 
-	_enter("");
-	rx->interceptor = interceptor;
+	rx->notify_new_call = notify_new_call;
 }
-
-EXPORT_SYMBOL(rxrpc_kernel_intercept_rx_messages);
+EXPORT_SYMBOL(rxrpc_kernel_new_call_notification);
 
 /*
  * connect an RxRPC socket
net/rxrpc/ar-internal.h | +21 -2
···
 struct rxrpc_connection;
 
 /*
+ * Mark applied to socket buffers.
+ */
+enum rxrpc_skb_mark {
+	RXRPC_SKB_MARK_DATA,		/* data message */
+	RXRPC_SKB_MARK_FINAL_ACK,	/* final ACK received message */
+	RXRPC_SKB_MARK_BUSY,		/* server busy message */
+	RXRPC_SKB_MARK_REMOTE_ABORT,	/* remote abort message */
+	RXRPC_SKB_MARK_LOCAL_ABORT,	/* local abort message */
+	RXRPC_SKB_MARK_NET_ERROR,	/* network error message */
+	RXRPC_SKB_MARK_LOCAL_ERROR,	/* local error message */
+	RXRPC_SKB_MARK_NEW_CALL,	/* local error message */
+};
+
+/*
  * sk_state for RxRPC sockets
  */
 enum {
···
 struct rxrpc_sock {
 	/* WARNING: sk has to be the first member */
 	struct sock		sk;
-	rxrpc_interceptor_t	interceptor;	/* kernel service Rx interceptor function */
+	rxrpc_notify_new_call_t	notify_new_call; /* Func to notify of new call */
 	struct rxrpc_local	*local;		/* local endpoint */
 	struct list_head	listen_link;	/* link in the local endpoint's listen list */
 	struct list_head	secureq;	/* calls awaiting connection security clearance */
···
 	RXRPC_CALL_EXPECT_OOS,		/* expect out of sequence packets */
 	RXRPC_CALL_IS_SERVICE,		/* Call is service call */
 	RXRPC_CALL_EXPOSED,		/* The call was exposed to the world */
+	RXRPC_CALL_RX_NO_MORE,		/* Don't indicate MSG_MORE from recvmsg() */
 };
 
 /*
···
 	struct timer_list	resend_timer;	/* Tx resend timer */
 	struct work_struct	destroyer;	/* call destroyer */
 	struct work_struct	processor;	/* packet processor and ACK generator */
+	rxrpc_notify_rx_t	notify_rx;	/* kernel service Rx notification function */
 	struct list_head	link;		/* link in master call list */
 	struct list_head	chan_wait_link;	/* Link in conn->waiting_calls */
 	struct hlist_node	error_link;	/* link in error distribution list */
···
 	struct rb_node		sock_node;	/* node in socket call tree */
 	struct sk_buff_head	rx_queue;	/* received packets */
 	struct sk_buff_head	rx_oos_queue;	/* packets received out of sequence */
+	struct sk_buff_head	knlrecv_queue;	/* Queue for kernel_recv [TODO: replace this] */
 	struct sk_buff		*tx_pending;	/* Tx socket buffer being filled */
 	wait_queue_head_t	waitq;		/* Wait queue for channel or Tx */
 	__be32			crypto_buf[2];	/* Temporary packet crypto buffer */
···
  * call_accept.c
  */
 void rxrpc_accept_incoming_calls(struct rxrpc_local *);
-struct rxrpc_call *rxrpc_accept_call(struct rxrpc_sock *, unsigned long);
+struct rxrpc_call *rxrpc_accept_call(struct rxrpc_sock *, unsigned long,
+				     rxrpc_notify_rx_t);
 int rxrpc_reject_call(struct rxrpc_sock *);
 
 /*
···
 /*
  * skbuff.c
  */
+void rxrpc_kernel_data_consumed(struct rxrpc_call *, struct sk_buff *);
 void rxrpc_packet_destructor(struct sk_buff *);
 void rxrpc_new_skb(struct sk_buff *);
 void rxrpc_see_skb(struct sk_buff *);
net/rxrpc/call_accept.c | +9 -4
···
  * - assign the user call ID to the call at the front of the queue
  */
 struct rxrpc_call *rxrpc_accept_call(struct rxrpc_sock *rx,
-				     unsigned long user_call_ID)
+				     unsigned long user_call_ID,
+				     rxrpc_notify_rx_t notify_rx)
 {
 	struct rxrpc_call *call;
 	struct rb_node *parent, **pp;
···
 	}
 
 	/* formalise the acceptance */
+	call->notify_rx = notify_rx;
 	call->user_call_ID = user_call_ID;
 	rb_link_node(&call->sock_node, parent, pp);
 	rb_insert_color(&call->sock_node, &rx->calls);
···
  * rxrpc_kernel_accept_call - Allow a kernel service to accept an incoming call
  * @sock: The socket on which the impending call is waiting
  * @user_call_ID: The tag to attach to the call
+ * @notify_rx: Where to send notifications instead of socket queue
  *
  * Allow a kernel service to accept an incoming call, assuming the incoming
- * call is still valid.
+ * call is still valid.  The caller should immediately trigger their own
+ * notification as there must be data waiting.
  */
 struct rxrpc_call *rxrpc_kernel_accept_call(struct socket *sock,
-					    unsigned long user_call_ID)
+					    unsigned long user_call_ID,
+					    rxrpc_notify_rx_t notify_rx)
 {
 	struct rxrpc_call *call;
 
 	_enter(",%lx", user_call_ID);
-	call = rxrpc_accept_call(rxrpc_sk(sock->sk), user_call_ID);
+	call = rxrpc_accept_call(rxrpc_sk(sock->sk), user_call_ID, notify_rx);
 	_leave(" = %p", call);
 	return call;
 }
net/rxrpc/call_object.c | +3 -2
···
 	INIT_LIST_HEAD(&call->accept_link);
 	skb_queue_head_init(&call->rx_queue);
 	skb_queue_head_init(&call->rx_oos_queue);
+	skb_queue_head_init(&call->knlrecv_queue);
 	init_waitqueue_head(&call->waitq);
 	spin_lock_init(&call->lock);
 	rwlock_init(&call->state_lock);
···
 			spin_lock_bh(&call->lock);
 		}
 		spin_unlock_bh(&call->lock);
-
-		ASSERTCMP(call->state, !=, RXRPC_CALL_COMPLETE);
 	}
 
 	del_timer_sync(&call->resend_timer);
···
 	struct rxrpc_call *call = container_of(rcu, struct rxrpc_call, rcu);
 
 	rxrpc_purge_queue(&call->rx_queue);
+	rxrpc_purge_queue(&call->knlrecv_queue);
 	rxrpc_put_peer(call->peer);
 	kmem_cache_free(rxrpc_call_jar, call);
 }
···
 
 	rxrpc_purge_queue(&call->rx_queue);
 	ASSERT(skb_queue_empty(&call->rx_oos_queue));
+	rxrpc_purge_queue(&call->knlrecv_queue);
 	sock_put(&call->socket->sk);
 	call_rcu(&call->rcu, rxrpc_rcu_destroy_call);
 }
net/rxrpc/conn_event.c | -1
···
 	case RXRPC_PACKET_TYPE_DATA:
 	case RXRPC_PACKET_TYPE_ACK:
 		rxrpc_conn_retransmit_call(conn, skb);
-		rxrpc_free_skb(skb);
 		return 0;
 
 	case RXRPC_PACKET_TYPE_ABORT:
net/rxrpc/input.c | +8 -2
···
 	}
 
 	/* allow interception by a kernel service */
-	if (rx->interceptor) {
-		rx->interceptor(sk, call->user_call_ID, skb);
+	if (skb->mark == RXRPC_SKB_MARK_NEW_CALL &&
+	    rx->notify_new_call) {
 		spin_unlock_bh(&sk->sk_receive_queue.lock);
+		skb_queue_tail(&call->knlrecv_queue, skb);
+		rx->notify_new_call(&rx->sk);
+	} else if (call->notify_rx) {
+		spin_unlock_bh(&sk->sk_receive_queue.lock);
+		skb_queue_tail(&call->knlrecv_queue, skb);
+		call->notify_rx(&rx->sk, call, call->user_call_ID);
 	} else {
 		_net("post skb %p", skb);
 		__skb_queue_tail(&sk->sk_receive_queue, skb);
+1 -1
net/rxrpc/output.c
@@ -190,7 +190,7 @@
 	if (cmd == RXRPC_CMD_ACCEPT) {
 		if (rx->sk.sk_state != RXRPC_SERVER_LISTENING)
 			return -EINVAL;
-		call = rxrpc_accept_call(rx, user_call_ID);
+		call = rxrpc_accept_call(rx, user_call_ID, NULL);
 		if (IS_ERR(call))
 			return PTR_ERR(call);
 		rxrpc_put_call(call);
+159 -36
net/rxrpc/recvmsg.c
@@ -369,55 +369,178 @@
 
 }
 
-/**
- * rxrpc_kernel_is_data_last - Determine if data message is last one
- * @skb: Message holding data
+/*
+ * Deliver messages to a call.  This keeps processing packets until the buffer
+ * is filled and we find either more DATA (returns 0) or the end of the DATA
+ * (returns 1).  If more packets are required, it returns -EAGAIN.
  *
- * Determine if data message is last one for the parent call.
+ * TODO: Note that this is hacked in at the moment and will be replaced.
  */
-bool rxrpc_kernel_is_data_last(struct sk_buff *skb)
+static int temp_deliver_data(struct socket *sock, struct rxrpc_call *call,
+			     struct iov_iter *iter, size_t size,
+			     size_t *_offset)
 {
-	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
+	struct rxrpc_skb_priv *sp;
+	struct sk_buff *skb;
+	size_t remain;
+	int ret, copy;
 
-	ASSERTCMP(skb->mark, ==, RXRPC_SKB_MARK_DATA);
+	_enter("%d", call->debug_id);
 
-	return sp->hdr.flags & RXRPC_LAST_PACKET;
-}
+next:
+	local_bh_disable();
+	skb = skb_dequeue(&call->knlrecv_queue);
+	local_bh_enable();
+	if (!skb) {
+		if (test_bit(RXRPC_CALL_RX_NO_MORE, &call->flags))
+			return 1;
+		_leave(" = -EAGAIN [empty]");
+		return -EAGAIN;
+	}
 
-EXPORT_SYMBOL(rxrpc_kernel_is_data_last);
-
-/**
- * rxrpc_kernel_get_abort_code - Get the abort code from an RxRPC abort message
- * @skb: Message indicating an abort
- *
- * Get the abort code from an RxRPC abort message.
- */
-u32 rxrpc_kernel_get_abort_code(struct sk_buff *skb)
-{
-	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
+	sp = rxrpc_skb(skb);
+	_debug("dequeued %p %u/%zu", skb, sp->offset, size);
 
 	switch (skb->mark) {
-	case RXRPC_SKB_MARK_REMOTE_ABORT:
-	case RXRPC_SKB_MARK_LOCAL_ABORT:
-		return sp->call->abort_code;
-	default:
-		BUG();
-	}
-}
+	case RXRPC_SKB_MARK_DATA:
+		remain = size - *_offset;
+		if (remain > 0) {
+			copy = skb->len - sp->offset;
+			if (copy > remain)
+				copy = remain;
+			ret = skb_copy_datagram_iter(skb, sp->offset, iter,
+						     copy);
+			if (ret < 0)
+				goto requeue_and_leave;
 
-EXPORT_SYMBOL(rxrpc_kernel_get_abort_code);
+			/* handle piecemeal consumption of data packets */
+			sp->offset += copy;
+			*_offset += copy;
+		}
+
+		if (sp->offset < skb->len)
+			goto partially_used_skb;
+
+		/* We consumed the whole packet */
+		ASSERTCMP(sp->offset, ==, skb->len);
+		if (sp->hdr.flags & RXRPC_LAST_PACKET)
+			set_bit(RXRPC_CALL_RX_NO_MORE, &call->flags);
+		rxrpc_kernel_data_consumed(call, skb);
+		rxrpc_free_skb(skb);
+		goto next;
+
+	default:
+		rxrpc_free_skb(skb);
+		goto next;
+	}
+
+partially_used_skb:
+	ASSERTCMP(*_offset, ==, size);
+	ret = 0;
+requeue_and_leave:
+	skb_queue_head(&call->knlrecv_queue, skb);
+	return ret;
+}
 
 /**
- * rxrpc_kernel_get_error - Get the error number from an RxRPC error message
- * @skb: Message indicating an error
+ * rxrpc_kernel_recv_data - Allow a kernel service to receive data/info
+ * @sock: The socket that the call exists on
+ * @call: The call to send data through
+ * @buf: The buffer to receive into
+ * @size: The size of the buffer, including data already read
+ * @_offset: The running offset into the buffer.
+ * @want_more: True if more data is expected to be read
+ * @_abort: Where the abort code is stored if -ECONNABORTED is returned
  *
- * Get the error number from an RxRPC error message.
+ * Allow a kernel service to receive data and pick up information about the
+ * state of a call.  Returns 0 if got what was asked for and there's more
+ * available, 1 if we got what was asked for and we're at the end of the data
+ * and -EAGAIN if we need more data.
+ *
+ * Note that we may return -EAGAIN to drain empty packets at the end of the
+ * data, even if we've already copied over the requested data.
+ *
+ * This function adds the amount it transfers to *_offset, so this should be
+ * precleared as appropriate.  Note that the amount remaining in the buffer is
+ * taken to be size - *_offset.
+ *
+ * *_abort should also be initialised to 0.
  */
-int rxrpc_kernel_get_error_number(struct sk_buff *skb)
+int rxrpc_kernel_recv_data(struct socket *sock, struct rxrpc_call *call,
+			   void *buf, size_t size, size_t *_offset,
+			   bool want_more, u32 *_abort)
 {
-	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
+	struct iov_iter iter;
+	struct kvec iov;
+	int ret;
 
-	return sp->error;
+	_enter("{%d,%s},%zu,%d",
+	       call->debug_id, rxrpc_call_states[call->state], size, want_more);
+
+	ASSERTCMP(*_offset, <=, size);
+	ASSERTCMP(call->state, !=, RXRPC_CALL_SERVER_ACCEPTING);
+
+	iov.iov_base = buf + *_offset;
+	iov.iov_len = size - *_offset;
+	iov_iter_kvec(&iter, ITER_KVEC | READ, &iov, 1, size - *_offset);
+
+	lock_sock(sock->sk);
+
+	switch (call->state) {
+	case RXRPC_CALL_CLIENT_RECV_REPLY:
+	case RXRPC_CALL_SERVER_RECV_REQUEST:
+	case RXRPC_CALL_SERVER_ACK_REQUEST:
+		ret = temp_deliver_data(sock, call, &iter, size, _offset);
+		if (ret < 0)
+			goto out;
+
+		/* We can only reach here with a partially full buffer if we
+		 * have reached the end of the data.  We must otherwise have a
+		 * full buffer or have been given -EAGAIN.
+		 */
+		if (ret == 1) {
+			if (*_offset < size)
+				goto short_data;
+			if (!want_more)
+				goto read_phase_complete;
+			ret = 0;
+			goto out;
+		}
+
+		if (!want_more)
+			goto excess_data;
+		goto out;
+
+	case RXRPC_CALL_COMPLETE:
+		goto call_complete;
+
+	default:
+		*_offset = 0;
+		ret = -EINPROGRESS;
+		goto out;
+	}
+
+read_phase_complete:
+	ret = 1;
+out:
+	release_sock(sock->sk);
+	_leave(" = %d [%zu,%d]", ret, *_offset, *_abort);
+	return ret;
+
+short_data:
+	ret = -EBADMSG;
+	goto out;
+excess_data:
+	ret = -EMSGSIZE;
+	goto out;
+call_complete:
+	*_abort = call->abort_code;
+	ret = call->error;
+	if (call->completion == RXRPC_CALL_SUCCEEDED) {
+		ret = 1;
+		if (size > 0)
+			ret = -ECONNRESET;
+	}
+	goto out;
 }
-
-EXPORT_SYMBOL(rxrpc_kernel_get_error_number);
+EXPORT_SYMBOL(rxrpc_kernel_recv_data);
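The return-value contract documented on rxrpc_kernel_recv_data() (0: got data, more follows; 1: got data and hit the end; -EAGAIN: need more packets; -EBADMSG/-EMSGSIZE for short/excess data) can be sketched as a userspace mock of the piecemeal copy-and-requeue loop in temp_deliver_data(). Everything below (mock_pkt, mock_call, mock_recv_data) is an illustrative stand-in, not kernel API: no sk_buffs, locking, or abort handling.

```c
/* Userspace mock of the receive contract, for illustration only: mock_pkt,
 * mock_call and mock_recv_data() are hypothetical stand-ins, not kernel API.
 */
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

struct mock_pkt {
	const char	*data;
	size_t		len;
	bool		last;		/* models RXRPC_LAST_PACKET */
};

struct mock_call {
	struct mock_pkt	*pkts;		/* stands in for call->knlrecv_queue */
	size_t		npkts;
	size_t		head;		/* next packet to dequeue */
	size_t		pkt_offset;	/* models sp->offset within that packet */
};

/* Fill buf up to size, consuming packets piecemeal.  Returns 0 if the buffer
 * filled with data still queued, 1 at the end of the data, -EAGAIN if the
 * queue runs dry, -EBADMSG for short data and -EMSGSIZE for excess data.
 */
static int mock_recv_data(struct mock_call *call, char *buf, size_t size,
			  size_t *_offset, bool want_more)
{
	bool hit_last = false;

	while (*_offset < size && !hit_last) {
		struct mock_pkt *pkt;
		size_t copy;

		if (call->head == call->npkts)
			return -EAGAIN;	/* queue empty: wait for notification */
		pkt = &call->pkts[call->head];
		copy = pkt->len - call->pkt_offset;
		if (copy > size - *_offset)
			copy = size - *_offset;
		memcpy(buf + *_offset, pkt->data + call->pkt_offset, copy);
		call->pkt_offset += copy;
		*_offset += copy;
		if (call->pkt_offset == pkt->len) {
			call->pkt_offset = 0;	/* whole packet consumed */
			call->head++;
			if (pkt->last)
				hit_last = true;
		}
	}
	if (hit_last) {
		if (*_offset < size)
			return -EBADMSG;	/* short_data */
		return want_more ? 0 : 1;
	}
	if (!want_more)
		return -EMSGSIZE;		/* excess_data */
	return 0;
}
```

A caller that knows the exact reply size would loop on this with want_more false until it gets 1, treating -EAGAIN as "sleep until the notify_rx hook fires" - which is the synchronous-wait pattern the interface is heading towards.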
-1
net/rxrpc/skbuff.c
@@ -127,7 +127,6 @@
 	call->rx_data_recv = sp->hdr.seq;
 	rxrpc_hard_ACK_data(call, skb);
 }
-EXPORT_SYMBOL(rxrpc_kernel_data_consumed);
 
 /*
  * Destroy a packet that has an RxRPC control buffer