Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'for-next' of git://git.samba.org/sfrench/cifs-2.6

Pull CIFS updates from Steve French:
"The most visible change in this set is the additional of multi-credit
support for SMB2/SMB3 which dramatically improves the large file i/o
performance for these dialects and significantly increases the maximum
i/o size used on the wire for SMB2/SMB3.

Also reconnection behavior after network failure is improved"

* 'for-next' of git://git.samba.org/sfrench/cifs-2.6: (35 commits)
Add worker function to set allocation size
[CIFS] Fix incorrect hex vs. decimal in some debug print statements
update CIFS TODO list
Add Pavel to contributor list in cifs AUTHORS file
Update cifs version
CIFS: Fix STATUS_CANNOT_DELETE error mapping for SMB2
CIFS: Optimize readpages in a short read case on reconnects
CIFS: Optimize cifs_user_read() in a short read case on reconnects
CIFS: Improve indentation in cifs_user_read()
CIFS: Fix possible buffer corruption in cifs_user_read()
CIFS: Count got bytes in read_into_pages()
CIFS: Use separate var for the number of bytes got in async read
CIFS: Indicate reconnect with ECONNABORTED error code
CIFS: Use multicredits for SMB 2.1/3 reads
CIFS: Fix rsize usage for sync read
CIFS: Fix rsize usage in user read
CIFS: Separate page reading from user read
CIFS: Fix rsize usage in readpages
CIFS: Separate page search from readpages
CIFS: Use multicredits for SMB 2.1/3 writes
...

+1758 -900
+1
Documentation/filesystems/cifs/AUTHORS
···
 Igor Mammedov (DFS support)
 Jeff Layton (many, many fixes, as well as great work on the cifs Kerberos code)
 Scott Lovenberg
+Pavel Shilovsky (for great work adding SMB2 support, and various SMB3 features)

 Test case and Bug Report contributors
 -------------------------------------
+36 -61
Documentation/filesystems/cifs/TODO
···
-Version 1.53 May 20, 2008
+Version 2.03 August 1, 2014

 A Partial List of Missing Features
 ==================================
···
 for visible, important contributions to this module. Here
 is a partial list of the known problems and missing features:

-a) Support for SecurityDescriptors(Windows/CIFS ACLs) for chmod/chgrp/chown
-so that these operations can be supported to Windows servers
+a) SMB3 (and SMB3.02) missing optional features:
+   - RDMA
+   - multichannel (started)
+   - directory leases (improved metadata caching)
+   - T10 copy offload (copy chunk is only mechanism supported)
+   - encrypted shares

-b) Mapping POSIX ACLs (and eventually NFSv4 ACLs) to CIFS
-SecurityDescriptors
+b) improved sparse file support

-c) Better pam/winbind integration (e.g. to handle uid mapping
-better)
-
-d) Cleanup now unneeded SessSetup code in
-fs/cifs/connect.c and add back in NTLMSSP code if any servers
-need it
-
-e) fix NTLMv2 signing when two mounts with different users to same
-server.
-
-f) Directory entry caching relies on a 1 second timer, rather than
+c) Directory entry caching relies on a 1 second timer, rather than
 using FindNotify or equivalent. - (started)

-g) quota support (needs minor kernel change since quota calls
+d) quota support (needs minor kernel change since quota calls
 to make it to network filesystems or deviceless filesystems)

-h) investigate sync behavior (including syncpage) and check
-for proper behavior of intr/nointr
-
-i) improve support for very old servers (OS/2 and Win9x for example)
+e) improve support for very old servers (OS/2 and Win9x for example)
 Including support for changing the time remotely (utimes command).

-j) hook lower into the sockets api (as NFS/SunRPC does) to avoid the
+f) hook lower into the sockets api (as NFS/SunRPC does) to avoid the
 extra copy in/out of the socket buffers in some cases.

-k) Better optimize open (and pathbased setfilesize) to reduce the
+g) Better optimize open (and pathbased setfilesize) to reduce the
 oplock breaks coming from windows srv. Piggyback identical file
 opens on top of each other by incrementing reference count rather
 than resending (helps reduce server resource utilization and avoid
 spurious oplock breaks).

-l) Improve performance of readpages by sending more than one read
-at a time when 8 pages or more are requested. In conjuntion
-add support for async_cifs_readpages.
-
-m) Add support for storing symlink info to Windows servers
+h) Add support for storing symlink info to Windows servers
 in the Extended Attribute format their SFU clients would recognize.

-n) Finish fcntl D_NOTIFY support so kde and gnome file list windows
+i) Finish inotify support so kde and gnome file list windows
 will autorefresh (partially complete by Asser). Needs minor kernel
 vfs change to support removing D_NOTIFY on a file.

-o) Add GUI tool to configure /proc/fs/cifs settings and for display of
+j) Add GUI tool to configure /proc/fs/cifs settings and for display of
 the CIFS statistics (started)

-p) implement support for security and trusted categories of xattrs
+k) implement support for security and trusted categories of xattrs
 (requires minor protocol extension) to enable better support for SELINUX

-q) Implement O_DIRECT flag on open (already supported on mount)
+l) Implement O_DIRECT flag on open (already supported on mount)

-r) Create UID mapping facility so server UIDs can be mapped on a per
+m) Create UID mapping facility so server UIDs can be mapped on a per
 mount or a per server basis to client UIDs or nobody if no mapping
 exists. This is helpful when Unix extensions are negotiated to
 allow better permission checking when UIDs differ on the server
···
 standard for asking the server for the corresponding name of a
 particular uid.

-s) Add support for CIFS Unix and also the newer POSIX extensions to the
-server side for Samba 4.
+n) DOS attrs - returned as pseudo-xattr in Samba format (check VFAT and NTFS for this too)

-t) In support for OS/2 (LANMAN 1.2 and LANMAN2.1 based SMB servers)
-need to add ability to set time to server (utimes command)
+o) mount check for unmatched uids

-u) DOS attrs - returned as pseudo-xattr in Samba format (check VFAT and NTFS for this too)
+p) Add support for new vfs entry point for fallocate

-v) mount check for unmatched uids
+q) Add tools to take advantage of cifs/smb3 specific ioctls and features
+such as "CopyChunk" (fast server side file copy)

-w) Add support for new vfs entry point for fallocate
+r) encrypted file support

-x) Fix Samba 3 server to handle Linux kernel aio so dbench with lots of
-processes can proceed better in parallel (on the server)
+s) improved stats gathering, tools (perhaps integration with nfsometer?)

-y) Fix Samba 3 to handle reads/writes over 127K (and remove the cifs mount
-restriction of wsize max being 127K)
+t) allow setting more NTFS/SMB3 file attributes remotely (currently limited to compressed
+file attribute via chflags)

-KNOWN BUGS (updated April 24, 2007)
+u) mount helper GUI (to simplify the various configuration options on mount)
+
+
+KNOWN BUGS
 ====================================
 See http://bugzilla.samba.org - search on product "CifsVFS" for
-current bug list.
+current bug list. Also check http://bugzilla.kernel.org (Product = File System, Component = CIFS)

 1) existing symbolic links (Windows reparse points) are recognized but
 can not be created remotely. They are implemented for Samba and those that
···
 overly restrict the pathnames.

 2) follow_link and readdir code does not follow dfs junctions
 but recognizes them
-3) create of new files to FAT partitions on Windows servers can
-succeed but still return access denied (appears to be Windows
-server not cifs client problem) and has not been reproduced recently.
-NTFS partitions do not have this problem.
-4) Unix/POSIX capabilities are reset after reconnection, and affect
-a few fields in the tree connection but we do do not know which
-superblocks to apply these changes to. We should probably walk
-the list of superblocks to set these. Also need to check the
-flags on the second mount to the same share, and see if we
-can do the same trick that NFS does to remount duplicate shares.

 Misc testing to do
 ==================
 1) check out max path names and max path name components against various server
 types. Try nested symlinks (8 deep). Return max path name in stat -f information

-2) Modify file portion of ltp so it can run against a mounted network
-share and run it against cifs vfs in automated fashion.
+2) Improve xfstest's cifs enablement and adapt xfstests where needed to test
+cifs better

 3) Additional performance testing and optimization using iozone and similar -
 there are some easy changes that can be done to parallelize sequential writes,
 and when signing is disabled to request larger read sizes (larger than
 negotiated size) and send larger write sizes to modern servers.

-4) More exhaustively test against less common servers. More testing
-against Windows 9x, Windows ME servers.
-
+4) More exhaustively test against less common servers
+1 -1
fs/cifs/cifs_debug.c
···
                   tcon->nativeFileSystem);
    }
    seq_printf(m, "DevInfo: 0x%x Attributes: 0x%x"
-           "\n\tPathComponentMax: %d Status: 0x%d",
+           "\n\tPathComponentMax: %d Status: %d",
            le32_to_cpu(tcon->fsDevInfo.DeviceCharacteristics),
            le32_to_cpu(tcon->fsAttrInfo.Attributes),
            le32_to_cpu(tcon->fsAttrInfo.MaxPathNameComponentLength),
+1 -1
fs/cifs/cifsfs.h
···
 extern const struct export_operations cifs_export_ops;
 #endif /* CONFIG_CIFS_NFSD_EXPORT */

-#define CIFS_VERSION   "2.03"
+#define CIFS_VERSION   "2.04"
 #endif /* _CIFSFS_H */
+19
fs/cifs/cifsglob.h
···
                const struct cifs_fid *, u32 *);
    int (*set_acl)(struct cifs_ntsd *, __u32, struct inode *, const char *,
            int);
+   /* writepages retry size */
+   unsigned int (*wp_retry_size)(struct inode *);
+   /* get mtu credits */
+   int (*wait_mtu_credits)(struct TCP_Server_Info *, unsigned int,
+               unsigned int *, unsigned int *);
 };

 struct smb_version_values {
···
        const int optype)
 {
    server->ops->add_credits(server, add, optype);
+}
+
+static inline void
+add_credits_and_wake_if(struct TCP_Server_Info *server, const unsigned int add,
+           const int optype)
+{
+   if (add) {
+       server->ops->add_credits(server, add, optype);
+       wake_up(&server->request_q);
+   }
 }

 static inline void
···
    struct address_space        *mapping;
    __u64               offset;
    unsigned int            bytes;
+   unsigned int            got_bytes;
    pid_t               pid;
    int             result;
    struct work_struct      work;
···
    struct kvec         iov;
    unsigned int            pagesz;
    unsigned int            tailsz;
+   unsigned int            credits;
    unsigned int            nr_pages;
    struct page         *pages[];
 };
···
    int             result;
    unsigned int            pagesz;
    unsigned int            tailsz;
+   unsigned int            credits;
    unsigned int            nr_pages;
    struct page         *pages[];
 };
···
 #define   CIFS_OBREAK_OP   0x0100    /* oplock break request */
 #define   CIFS_NEG_OP      0x0200    /* negotiate request */
 #define   CIFS_OP_MASK     0x0380    /* mask request type */
+#define   CIFS_HAS_CREDITS 0x0400    /* already has credits */

 /* Security Flags: indicate type of session setup needed */
 #define   CIFSSEC_MAY_SIGN      0x00001
+4
fs/cifs/cifsproto.h
···
 extern void cifs_buf_release(void *);
 extern struct smb_hdr *cifs_small_buf_get(void);
 extern void cifs_small_buf_release(void *);
+extern void free_rsp_buf(int, void *);
 extern void cifs_rqst_page_to_kvec(struct smb_rqst *rqst, unsigned int idx,
                struct kvec *iov);
 extern int smb_send(struct TCP_Server_Info *, struct smb_hdr *,
···
                struct smb_rqst *);
 extern int cifs_check_receive(struct mid_q_entry *mid,
            struct TCP_Server_Info *server, bool log_error);
+extern int cifs_wait_mtu_credits(struct TCP_Server_Info *server,
+                unsigned int size, unsigned int *num,
+                unsigned int *credits);
 extern int SendReceive2(const unsigned int /* xid */ , struct cifs_ses *,
            struct kvec *, int /* nvec to send */,
            int * /* type of buf returned */ , const int flags);
+80 -39
fs/cifs/cifssmb.c
···
    if (rc)
        goto out;

-   /*
-    * FIXME: check if wsize needs updated due to negotiated smb buffer
-    * size shrinking
-    */
    atomic_inc(&tconInfoReconnectCount);

    /* tell server Unix caps we support */
···
        return length;

    server->total_read += length;
-   rdata->bytes = length;

    cifs_dbg(FYI, "total_read=%u buflen=%u remaining=%u\n",
         server->total_read, buflen, data_len);
···
             rc);
    }
    /* FIXME: should this be counted toward the initiating task? */
-   task_io_account_read(rdata->bytes);
-   cifs_stats_bytes_read(tcon, rdata->bytes);
+   task_io_account_read(rdata->got_bytes);
+   cifs_stats_bytes_read(tcon, rdata->got_bytes);
    break;
 case MID_REQUEST_SUBMITTED:
 case MID_RETRY_NEEDED:
    rdata->result = -EAGAIN;
+   if (server->sign && rdata->got_bytes)
+       /* reset bytes number since we can not check a sign */
+       rdata->got_bytes = 0;
+   /* FIXME: should this be counted toward the initiating task? */
+   task_io_account_read(rdata->got_bytes);
+   cifs_stats_bytes_read(tcon, rdata->got_bytes);
    break;
 default:
    rdata->result = -EIO;
···
    /* cifs_small_buf_release(pSMB); */ /* Freed earlier now in SendReceive2 */
    if (*buf) {
-       if (resp_buf_type == CIFS_SMALL_BUFFER)
-           cifs_small_buf_release(iov[0].iov_base);
-       else if (resp_buf_type == CIFS_LARGE_BUFFER)
-           cifs_buf_release(iov[0].iov_base);
+       free_rsp_buf(resp_buf_type, iov[0].iov_base);
    } else if (resp_buf_type != CIFS_NO_BUFFER) {
        /* return buffer to caller to free */
        *buf = iov[0].iov_base;
···
 static void
 cifs_writev_requeue(struct cifs_writedata *wdata)
 {
-   int i, rc;
+   int i, rc = 0;
    struct inode *inode = wdata->cfile->dentry->d_inode;
    struct TCP_Server_Info *server;
+   unsigned int rest_len;

-   for (i = 0; i < wdata->nr_pages; i++) {
-       lock_page(wdata->pages[i]);
-       clear_page_dirty_for_io(wdata->pages[i]);
-   }
-
+   server = tlink_tcon(wdata->cfile->tlink)->ses->server;
+   i = 0;
+   rest_len = wdata->bytes;
    do {
-       server = tlink_tcon(wdata->cfile->tlink)->ses->server;
-       rc = server->ops->async_writev(wdata, cifs_writedata_release);
-   } while (rc == -EAGAIN);
+       struct cifs_writedata *wdata2;
+       unsigned int j, nr_pages, wsize, tailsz, cur_len;

-   for (i = 0; i < wdata->nr_pages; i++) {
-       unlock_page(wdata->pages[i]);
-       if (rc != 0) {
-           SetPageError(wdata->pages[i]);
-           end_page_writeback(wdata->pages[i]);
-           page_cache_release(wdata->pages[i]);
+       wsize = server->ops->wp_retry_size(inode);
+       if (wsize < rest_len) {
+           nr_pages = wsize / PAGE_CACHE_SIZE;
+           if (!nr_pages) {
+               rc = -ENOTSUPP;
+               break;
+           }
+           cur_len = nr_pages * PAGE_CACHE_SIZE;
+           tailsz = PAGE_CACHE_SIZE;
+       } else {
+           nr_pages = DIV_ROUND_UP(rest_len, PAGE_CACHE_SIZE);
+           cur_len = rest_len;
+           tailsz = rest_len - (nr_pages - 1) * PAGE_CACHE_SIZE;
        }
-   }
+
+       wdata2 = cifs_writedata_alloc(nr_pages, cifs_writev_complete);
+       if (!wdata2) {
+           rc = -ENOMEM;
+           break;
+       }
+
+       for (j = 0; j < nr_pages; j++) {
+           wdata2->pages[j] = wdata->pages[i + j];
+           lock_page(wdata2->pages[j]);
+           clear_page_dirty_for_io(wdata2->pages[j]);
+       }
+
+       wdata2->sync_mode = wdata->sync_mode;
+       wdata2->nr_pages = nr_pages;
+       wdata2->offset = page_offset(wdata2->pages[0]);
+       wdata2->pagesz = PAGE_CACHE_SIZE;
+       wdata2->tailsz = tailsz;
+       wdata2->bytes = cur_len;
+
+       wdata2->cfile = find_writable_file(CIFS_I(inode), false);
+       if (!wdata2->cfile) {
+           cifs_dbg(VFS, "No writable handles for inode\n");
+           rc = -EBADF;
+           break;
+       }
+       wdata2->pid = wdata2->cfile->pid;
+       rc = server->ops->async_writev(wdata2, cifs_writedata_release);
+
+       for (j = 0; j < nr_pages; j++) {
+           unlock_page(wdata2->pages[j]);
+           if (rc != 0 && rc != -EAGAIN) {
+               SetPageError(wdata2->pages[j]);
+               end_page_writeback(wdata2->pages[j]);
+               page_cache_release(wdata2->pages[j]);
+           }
+       }
+
+       if (rc) {
+           kref_put(&wdata2->refcount, cifs_writedata_release);
+           if (rc == -EAGAIN)
+               continue;
+           break;
+       }
+
+       rest_len -= cur_len;
+       i += nr_pages;
+   } while (i < wdata->nr_pages);

    mapping_set_error(inode->i_mapping, rc);
    kref_put(&wdata->refcount, cifs_writedata_release);
···
    }

    /* cifs_small_buf_release(pSMB); */ /* Freed earlier now in SendReceive2 */
-   if (resp_buf_type == CIFS_SMALL_BUFFER)
-       cifs_small_buf_release(iov[0].iov_base);
-   else if (resp_buf_type == CIFS_LARGE_BUFFER)
-       cifs_buf_release(iov[0].iov_base);
+   free_rsp_buf(resp_buf_type, iov[0].iov_base);

    /* Note: On -EAGAIN error only caller can retry on handle based calls
       since file handle passed in no longer valid */
···
    if (pSMB)
        cifs_small_buf_release(pSMB);

-   if (resp_buf_type == CIFS_SMALL_BUFFER)
-       cifs_small_buf_release(iov[0].iov_base);
-   else if (resp_buf_type == CIFS_LARGE_BUFFER)
-       cifs_buf_release(iov[0].iov_base);
+   free_rsp_buf(resp_buf_type, iov[0].iov_base);

    /* Note: On -EAGAIN error only caller can retry on handle based calls
       since file handle passed in no longer valid */
···
        }
    }
 qsec_out:
-   if (buf_type == CIFS_SMALL_BUFFER)
-       cifs_small_buf_release(iov[0].iov_base);
-   else if (buf_type == CIFS_LARGE_BUFFER)
-       cifs_buf_release(iov[0].iov_base);
+   free_rsp_buf(buf_type, iov[0].iov_base);
    /* cifs_small_buf_release(pSMB); */ /* Freed earlier now in SendReceive2 */
    return rc;
 }
+4 -4
fs/cifs/connect.c
···
        try_to_freeze();

        if (server_unresponsive(server)) {
-           total_read = -EAGAIN;
+           total_read = -ECONNABORTED;
            break;
        }
···
            break;
        } else if (server->tcpStatus == CifsNeedReconnect) {
            cifs_reconnect(server);
-           total_read = -EAGAIN;
+           total_read = -ECONNABORTED;
            break;
        } else if (length == -ERESTARTSYS ||
               length == -EAGAIN ||
···
            cifs_dbg(FYI, "Received no data or error: expecting %d\n"
                 "got %d", to_read, length);
            cifs_reconnect(server);
-           total_read = -EAGAIN;
+           total_read = -ECONNABORTED;
            break;
        }
    }
···
        cifs_dbg(VFS, "SMB response too long (%u bytes)\n", pdu_length);
        cifs_reconnect(server);
        wake_up(&server->response_q);
-       return -EAGAIN;
+       return -ECONNABORTED;
    }

    /* switch to large buffer if too big for a small one */
+552 -358
fs/cifs/file.c
···
            break;
        }

-       len = min((size_t)cifs_sb->wsize,
-             write_size - total_written);
+       len = min(server->ops->wp_retry_size(dentry->d_inode),
+             (unsigned int)write_size - total_written);
        /* iov[0] is reserved for smb header */
        iov[1].iov_base = (char *)write_data + total_written;
        iov[1].iov_len = len;
···
    return rc;
 }

+static struct cifs_writedata *
+wdata_alloc_and_fillpages(pgoff_t tofind, struct address_space *mapping,
+             pgoff_t end, pgoff_t *index,
+             unsigned int *found_pages)
+{
+   unsigned int nr_pages;
+   struct page **pages;
+   struct cifs_writedata *wdata;
+
+   wdata = cifs_writedata_alloc((unsigned int)tofind,
+                    cifs_writev_complete);
+   if (!wdata)
+       return NULL;
+
+   /*
+    * find_get_pages_tag seems to return a max of 256 on each
+    * iteration, so we must call it several times in order to
+    * fill the array or the wsize is effectively limited to
+    * 256 * PAGE_CACHE_SIZE.
+    */
+   *found_pages = 0;
+   pages = wdata->pages;
+   do {
+       nr_pages = find_get_pages_tag(mapping, index,
+                         PAGECACHE_TAG_DIRTY, tofind,
+                         pages);
+       *found_pages += nr_pages;
+       tofind -= nr_pages;
+       pages += nr_pages;
+   } while (nr_pages && tofind && *index <= end);
+
+   return wdata;
+}
+
+static unsigned int
+wdata_prepare_pages(struct cifs_writedata *wdata, unsigned int found_pages,
+           struct address_space *mapping,
+           struct writeback_control *wbc,
+           pgoff_t end, pgoff_t *index, pgoff_t *next, bool *done)
+{
+   unsigned int nr_pages = 0, i;
+   struct page *page;
+
+   for (i = 0; i < found_pages; i++) {
+       page = wdata->pages[i];
+       /*
+        * At this point we hold neither mapping->tree_lock nor
+        * lock on the page itself: the page may be truncated or
+        * invalidated (changing page->mapping to NULL), or even
+        * swizzled back from swapper_space to tmpfs file
+        * mapping
+        */
+
+       if (nr_pages == 0)
+           lock_page(page);
+       else if (!trylock_page(page))
+           break;
+
+       if (unlikely(page->mapping != mapping)) {
+           unlock_page(page);
+           break;
+       }
+
+       if (!wbc->range_cyclic && page->index > end) {
+           *done = true;
+           unlock_page(page);
+           break;
+       }
+
+       if (*next && (page->index != *next)) {
+           /* Not next consecutive page */
+           unlock_page(page);
+           break;
+       }
+
+       if (wbc->sync_mode != WB_SYNC_NONE)
+           wait_on_page_writeback(page);
+
+       if (PageWriteback(page) ||
+           !clear_page_dirty_for_io(page)) {
+           unlock_page(page);
+           break;
+       }
+
+       /*
+        * This actually clears the dirty bit in the radix tree.
+        * See cifs_writepage() for more commentary.
+        */
+       set_page_writeback(page);
+       if (page_offset(page) >= i_size_read(mapping->host)) {
+           *done = true;
+           unlock_page(page);
+           end_page_writeback(page);
+           break;
+       }
+
+       wdata->pages[i] = page;
+       *next = page->index + 1;
+       ++nr_pages;
+   }
+
+   /* reset index to refind any pages skipped */
+   if (nr_pages == 0)
+       *index = wdata->pages[0]->index + 1;
+
+   /* put any pages we aren't going to use */
+   for (i = nr_pages; i < found_pages; i++) {
+       page_cache_release(wdata->pages[i]);
+       wdata->pages[i] = NULL;
+   }
+
+   return nr_pages;
+}
+
+static int
+wdata_send_pages(struct cifs_writedata *wdata, unsigned int nr_pages,
+        struct address_space *mapping, struct writeback_control *wbc)
+{
+   int rc = 0;
+   struct TCP_Server_Info *server;
+   unsigned int i;
+
+   wdata->sync_mode = wbc->sync_mode;
+   wdata->nr_pages = nr_pages;
+   wdata->offset = page_offset(wdata->pages[0]);
+   wdata->pagesz = PAGE_CACHE_SIZE;
+   wdata->tailsz = min(i_size_read(mapping->host) -
+           page_offset(wdata->pages[nr_pages - 1]),
+           (loff_t)PAGE_CACHE_SIZE);
+   wdata->bytes = ((nr_pages - 1) * PAGE_CACHE_SIZE) + wdata->tailsz;
+
+   if (wdata->cfile != NULL)
+       cifsFileInfo_put(wdata->cfile);
+   wdata->cfile = find_writable_file(CIFS_I(mapping->host), false);
+   if (!wdata->cfile) {
+       cifs_dbg(VFS, "No writable handles for inode\n");
+       rc = -EBADF;
+   } else {
+       wdata->pid = wdata->cfile->pid;
+       server = tlink_tcon(wdata->cfile->tlink)->ses->server;
+       rc = server->ops->async_writev(wdata, cifs_writedata_release);
+   }
+
+   for (i = 0; i < nr_pages; ++i)
+       unlock_page(wdata->pages[i]);
+
+   return rc;
+}
+
 static int cifs_writepages(struct address_space *mapping,
               struct writeback_control *wbc)
 {
    struct cifs_sb_info *cifs_sb = CIFS_SB(mapping->host->i_sb);
+   struct TCP_Server_Info *server;
    bool done = false, scanned = false, range_whole = false;
    pgoff_t end, index;
    struct cifs_writedata *wdata;
-   struct TCP_Server_Info *server;
-   struct page *page;
    int rc = 0;

    /*
···
        range_whole = true;
        scanned = true;
    }
+   server = cifs_sb_master_tcon(cifs_sb)->ses->server;
 retry:
    while (!done && index <= end) {
-       unsigned int i, nr_pages, found_pages;
-       pgoff_t next = 0, tofind;
-       struct page **pages;
+       unsigned int i, nr_pages, found_pages, wsize, credits;
+       pgoff_t next = 0, tofind, saved_index = index;

-       tofind = min((cifs_sb->wsize / PAGE_CACHE_SIZE) - 1,
-               end - index) + 1;
+       rc = server->ops->wait_mtu_credits(server, cifs_sb->wsize,
+                          &wsize, &credits);
+       if (rc)
+           break;

-       wdata = cifs_writedata_alloc((unsigned int)tofind,
-                        cifs_writev_complete);
+       tofind = min((wsize / PAGE_CACHE_SIZE) - 1, end - index) + 1;
+
+       wdata = wdata_alloc_and_fillpages(tofind, mapping, end, &index,
+                         &found_pages);
        if (!wdata) {
            rc = -ENOMEM;
+           add_credits_and_wake_if(server, credits, 0);
            break;
        }
-
-       /*
-        * find_get_pages_tag seems to return a max of 256 on each
-        * iteration, so we must call it several times in order to
-        * fill the array or the wsize is effectively limited to
-        * 256 * PAGE_CACHE_SIZE.
-        */
-       found_pages = 0;
-       pages = wdata->pages;
-       do {
-           nr_pages = find_get_pages_tag(mapping, &index,
-                             PAGECACHE_TAG_DIRTY,
-                             tofind, pages);
-           found_pages += nr_pages;
-           tofind -= nr_pages;
-           pages += nr_pages;
-       } while (nr_pages && tofind && index <= end);

        if (found_pages == 0) {
            kref_put(&wdata->refcount, cifs_writedata_release);
+           add_credits_and_wake_if(server, credits, 0);
            break;
        }

-       nr_pages = 0;
-       for (i = 0; i < found_pages; i++) {
-           page = wdata->pages[i];
-           /*
-            * At this point we hold neither mapping->tree_lock nor
-            * lock on the page itself: the page may be truncated or
-            * invalidated (changing page->mapping to NULL), or even
-            * swizzled back from swapper_space to tmpfs file
-            * mapping
-            */
-
-           if (nr_pages == 0)
-               lock_page(page);
-           else if (!trylock_page(page))
-               break;
-
-           if (unlikely(page->mapping != mapping)) {
-               unlock_page(page);
-               break;
-           }
-
-           if (!wbc->range_cyclic && page->index > end) {
-               done = true;
-               unlock_page(page);
-               break;
-           }
-
-           if (next && (page->index != next)) {
-               /* Not next consecutive page */
-               unlock_page(page);
-               break;
-           }
-
-           if (wbc->sync_mode != WB_SYNC_NONE)
-               wait_on_page_writeback(page);
-
-           if (PageWriteback(page) ||
-               !clear_page_dirty_for_io(page)) {
-               unlock_page(page);
-               break;
-           }
-
-           /*
-            * This actually clears the dirty bit in the radix tree.
-            * See cifs_writepage() for more commentary.
-            */
-           set_page_writeback(page);
-
-           if (page_offset(page) >= i_size_read(mapping->host)) {
-               done = true;
-               unlock_page(page);
-               end_page_writeback(page);
-               break;
-           }
-
-           wdata->pages[i] = page;
-           next = page->index + 1;
-           ++nr_pages;
-       }
-
-       /* reset index to refind any pages skipped */
-       if (nr_pages == 0)
-           index = wdata->pages[0]->index + 1;
-
-       /* put any pages we aren't going to use */
-       for (i = nr_pages; i < found_pages; i++) {
-           page_cache_release(wdata->pages[i]);
-           wdata->pages[i] = NULL;
-       }
+       nr_pages = wdata_prepare_pages(wdata, found_pages, mapping, wbc,
+                          end, &index, &next, &done);

        /* nothing to write? */
        if (nr_pages == 0) {
            kref_put(&wdata->refcount, cifs_writedata_release);
+           add_credits_and_wake_if(server, credits, 0);
            continue;
        }

-       wdata->sync_mode = wbc->sync_mode;
-       wdata->nr_pages = nr_pages;
-       wdata->offset = page_offset(wdata->pages[0]);
-       wdata->pagesz = PAGE_CACHE_SIZE;
-       wdata->tailsz =
-           min(i_size_read(mapping->host) -
-           page_offset(wdata->pages[nr_pages - 1]),
-           (loff_t)PAGE_CACHE_SIZE);
-       wdata->bytes = ((nr_pages - 1) * PAGE_CACHE_SIZE) +
-           wdata->tailsz;
+       wdata->credits = credits;

-       do {
-           if (wdata->cfile != NULL)
-               cifsFileInfo_put(wdata->cfile);
-           wdata->cfile = find_writable_file(CIFS_I(mapping->host),
-                             false);
-           if (!wdata->cfile) {
-               cifs_dbg(VFS, "No writable handles for inode\n");
-               rc = -EBADF;
-               break;
-           }
-           wdata->pid = wdata->cfile->pid;
-           server = tlink_tcon(wdata->cfile->tlink)->ses->server;
-           rc = server->ops->async_writev(wdata,
-                           cifs_writedata_release);
-       } while (wbc->sync_mode == WB_SYNC_ALL && rc == -EAGAIN);
-
-       for (i = 0; i < nr_pages; ++i)
-           unlock_page(wdata->pages[i]);
+       rc = wdata_send_pages(wdata, nr_pages, mapping, wbc);

        /* send failure -- clean up the mess */
        if (rc != 0) {
+           add_credits_and_wake_if(server, wdata->credits, 0);
            for (i = 0; i < nr_pages; ++i) {
                if (rc == -EAGAIN)
                    redirty_page_for_writepage(wbc,
···
            mapping_set_error(mapping, rc);
        }
        kref_put(&wdata->refcount, cifs_writedata_release);
+
+       if (wbc->sync_mode == WB_SYNC_ALL && rc == -EAGAIN) {
+           index = saved_index;
+           continue;
+       }

        wbc->nr_to_write -= nr_pages;
        if (wbc->nr_to_write <= 0)
···
    kref_put(&wdata->refcount, cifs_uncached_writedata_release);
 }

-/* attempt to send write to server, retry on any -EAGAIN errors */
 static int
-cifs_uncached_retry_writev(struct cifs_writedata *wdata)
+wdata_fill_from_iovec(struct cifs_writedata *wdata, struct iov_iter *from,
+             size_t *len, unsigned long *num_pages)
 {
-   int rc;
+   size_t save_len, copied, bytes, cur_len = *len;
+   unsigned long i, nr_pages = *num_pages;
+
+   save_len = cur_len;
+   for (i = 0; i < nr_pages; i++) {
+       bytes = min_t(const size_t, cur_len, PAGE_SIZE);
+       copied = copy_page_from_iter(wdata->pages[i], 0, bytes, from);
+       cur_len -= copied;
+       /*
+        * If we didn't copy as much as we expected, then that
+        * may mean we trod into an unmapped area. Stop copying
+        * at that point. On the next pass through the big
+        * loop, we'll likely end up getting a zero-length
+        * write and bailing out of it.
+        */
+       if (copied < bytes)
+           break;
+   }
+   cur_len = save_len - cur_len;
+   *len = cur_len;
+
+   /*
+    * If we have no data to send, then that probably means that
+    * the copy above failed altogether. That's most likely because
+    * the address in the iovec was bogus. Return -EFAULT and let
+    * the caller free anything we allocated and bail out.
+    */
+   if (!cur_len)
+       return -EFAULT;
+
+   /*
+    * i + 1 now represents the number of pages we actually used in
+    * the copy phase above.
+    */
+   *num_pages = i + 1;
+   return 0;
+}
+
+static int
+cifs_write_from_iter(loff_t offset, size_t len, struct iov_iter *from,
+            struct cifsFileInfo *open_file,
+            struct cifs_sb_info *cifs_sb, struct list_head *wdata_list)
+{
+   int rc = 0;
+   size_t cur_len;
+   unsigned long nr_pages, num_pages, i;
+   struct cifs_writedata *wdata;
+   struct iov_iter saved_from;
+   loff_t saved_offset = offset;
+   pid_t pid;
    struct TCP_Server_Info *server;

-   server = tlink_tcon(wdata->cfile->tlink)->ses->server;
+   if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD)
+       pid = open_file->pid;
+   else
+       pid = current->tgid;
+
+   server = tlink_tcon(open_file->tlink)->ses->server;
+   memcpy(&saved_from, from, sizeof(struct iov_iter));

    do {
-       if (wdata->cfile->invalidHandle) {
-           rc = cifs_reopen_file(wdata->cfile, false);
-           if (rc != 0)
-               continue;
+       unsigned int wsize, credits;
+
+       rc = server->ops->wait_mtu_credits(server, cifs_sb->wsize,
+                          &wsize, &credits);
+       if (rc)
+           break;
+
+       nr_pages = get_numpages(wsize, len, &cur_len);
+       wdata = cifs_writedata_alloc(nr_pages,
+                        cifs_uncached_writev_complete);
+       if (!wdata) {
+           rc = -ENOMEM;
+           add_credits_and_wake_if(server, credits, 0);
+           break;
        }
-       rc = server->ops->async_writev(wdata,
-                      cifs_uncached_writedata_release);
-   } while (rc == -EAGAIN);
+
+       rc = cifs_write_allocate_pages(wdata->pages, nr_pages);
+       if (rc) {
+           kfree(wdata);
+           add_credits_and_wake_if(server, credits, 0);
+           break;
+       }
+
+       num_pages = nr_pages;
+       rc = wdata_fill_from_iovec(wdata, from, &cur_len, &num_pages);
+       if (rc) {
+           for (i = 0; i < nr_pages; i++)
+               put_page(wdata->pages[i]);
+           kfree(wdata);
+           add_credits_and_wake_if(server, credits, 0);
+           break;
+       }
+
+       /*
+        * Bring nr_pages down to the number of pages we actually used,
+        * and free any pages that we didn't use.
+        */
+       for ( ; nr_pages > num_pages; nr_pages--)
+           put_page(wdata->pages[nr_pages - 1]);
+
+       wdata->sync_mode = WB_SYNC_ALL;
+       wdata->nr_pages = nr_pages;
+       wdata->offset = (__u64)offset;
+       wdata->cfile = cifsFileInfo_get(open_file);
+       wdata->pid = pid;
+       wdata->bytes = cur_len;
+       wdata->pagesz = PAGE_SIZE;
+       wdata->tailsz = cur_len - ((nr_pages - 1) * PAGE_SIZE);
+       wdata->credits = credits;
+
+       if (!wdata->cfile->invalidHandle ||
+           !cifs_reopen_file(wdata->cfile, false))
+           rc = server->ops->async_writev(wdata,
+                   cifs_uncached_writedata_release);
+       if (rc) {
+           add_credits_and_wake_if(server, wdata->credits, 0);
+           kref_put(&wdata->refcount,
+                cifs_uncached_writedata_release);
+           if (rc == -EAGAIN) {
+               memcpy(from, &saved_from,
+                      sizeof(struct iov_iter));
+               iov_iter_advance(from, offset - saved_offset);
+               continue;
+           }
+           break;
+       }
+
+       list_add_tail(&wdata->list, wdata_list);
+       offset += cur_len;
+       len -= cur_len;
+   } while (len > 0);

    return rc;
 }
···
 static ssize_t
 cifs_iovec_write(struct file *file, struct iov_iter *from, loff_t *poffset)
 {
-   unsigned long nr_pages, i;
-   size_t bytes, copied, len, cur_len;
+   size_t len;
    ssize_t total_written = 0;
-   loff_t offset;
    struct cifsFileInfo
*open_file; 2564 2393 struct cifs_tcon *tcon; 2565 2394 struct cifs_sb_info *cifs_sb; 2566 2395 struct cifs_writedata *wdata, *tmp; 2567 2396 struct list_head wdata_list; 2397 + struct iov_iter saved_from; 2568 2398 int rc; 2569 - pid_t pid; 2570 2399 2571 2400 len = iov_iter_count(from); 2572 2401 rc = generic_write_checks(file, poffset, &len, 0); ··· 2584 2417 if (!tcon->ses->server->ops->async_writev) 2585 2418 return -ENOSYS; 2586 2419 2587 - offset = *poffset; 2420 + memcpy(&saved_from, from, sizeof(struct iov_iter)); 2588 2421 2589 - if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD) 2590 - pid = open_file->pid; 2591 - else 2592 - pid = current->tgid; 2593 - 2594 - do { 2595 - size_t save_len; 2596 - 2597 - nr_pages = get_numpages(cifs_sb->wsize, len, &cur_len); 2598 - wdata = cifs_writedata_alloc(nr_pages, 2599 - cifs_uncached_writev_complete); 2600 - if (!wdata) { 2601 - rc = -ENOMEM; 2602 - break; 2603 - } 2604 - 2605 - rc = cifs_write_allocate_pages(wdata->pages, nr_pages); 2606 - if (rc) { 2607 - kfree(wdata); 2608 - break; 2609 - } 2610 - 2611 - save_len = cur_len; 2612 - for (i = 0; i < nr_pages; i++) { 2613 - bytes = min_t(size_t, cur_len, PAGE_SIZE); 2614 - copied = copy_page_from_iter(wdata->pages[i], 0, bytes, 2615 - from); 2616 - cur_len -= copied; 2617 - /* 2618 - * If we didn't copy as much as we expected, then that 2619 - * may mean we trod into an unmapped area. Stop copying 2620 - * at that point. On the next pass through the big 2621 - * loop, we'll likely end up getting a zero-length 2622 - * write and bailing out of it. 2623 - */ 2624 - if (copied < bytes) 2625 - break; 2626 - } 2627 - cur_len = save_len - cur_len; 2628 - 2629 - /* 2630 - * If we have no data to send, then that probably means that 2631 - * the copy above failed altogether. That's most likely because 2632 - * the address in the iovec was bogus. Set the rc to -EFAULT, 2633 - * free anything we allocated and bail out. 
2634 - */ 2635 - if (!cur_len) { 2636 - for (i = 0; i < nr_pages; i++) 2637 - put_page(wdata->pages[i]); 2638 - kfree(wdata); 2639 - rc = -EFAULT; 2640 - break; 2641 - } 2642 - 2643 - /* 2644 - * i + 1 now represents the number of pages we actually used in 2645 - * the copy phase above. Bring nr_pages down to that, and free 2646 - * any pages that we didn't use. 2647 - */ 2648 - for ( ; nr_pages > i + 1; nr_pages--) 2649 - put_page(wdata->pages[nr_pages - 1]); 2650 - 2651 - wdata->sync_mode = WB_SYNC_ALL; 2652 - wdata->nr_pages = nr_pages; 2653 - wdata->offset = (__u64)offset; 2654 - wdata->cfile = cifsFileInfo_get(open_file); 2655 - wdata->pid = pid; 2656 - wdata->bytes = cur_len; 2657 - wdata->pagesz = PAGE_SIZE; 2658 - wdata->tailsz = cur_len - ((nr_pages - 1) * PAGE_SIZE); 2659 - rc = cifs_uncached_retry_writev(wdata); 2660 - if (rc) { 2661 - kref_put(&wdata->refcount, 2662 - cifs_uncached_writedata_release); 2663 - break; 2664 - } 2665 - 2666 - list_add_tail(&wdata->list, &wdata_list); 2667 - offset += cur_len; 2668 - len -= cur_len; 2669 - } while (len > 0); 2422 + rc = cifs_write_from_iter(*poffset, len, from, open_file, cifs_sb, 2423 + &wdata_list); 2670 2424 2671 2425 /* 2672 2426 * If at least one write was successfully sent, then discard any rc ··· 2617 2529 2618 2530 /* resend call if it's a retryable error */ 2619 2531 if (rc == -EAGAIN) { 2620 - rc = cifs_uncached_retry_writev(wdata); 2532 + struct list_head tmp_list; 2533 + struct iov_iter tmp_from; 2534 + 2535 + INIT_LIST_HEAD(&tmp_list); 2536 + list_del_init(&wdata->list); 2537 + 2538 + memcpy(&tmp_from, &saved_from, 2539 + sizeof(struct iov_iter)); 2540 + iov_iter_advance(&tmp_from, 2541 + wdata->offset - *poffset); 2542 + 2543 + rc = cifs_write_from_iter(wdata->offset, 2544 + wdata->bytes, &tmp_from, 2545 + open_file, cifs_sb, &tmp_list); 2546 + 2547 + list_splice(&tmp_list, &wdata_list); 2548 + 2549 + kref_put(&wdata->refcount, 2550 + cifs_uncached_writedata_release); 2621 2551 goto 
restart_loop; 2622 2552 } 2623 2553 } ··· 2828 2722 cifs_readdata_release(refcount); 2829 2723 } 2830 2724 2831 - static int 2832 - cifs_retry_async_readv(struct cifs_readdata *rdata) 2833 - { 2834 - int rc; 2835 - struct TCP_Server_Info *server; 2836 - 2837 - server = tlink_tcon(rdata->cfile->tlink)->ses->server; 2838 - 2839 - do { 2840 - if (rdata->cfile->invalidHandle) { 2841 - rc = cifs_reopen_file(rdata->cfile, true); 2842 - if (rc != 0) 2843 - continue; 2844 - } 2845 - rc = server->ops->async_readv(rdata); 2846 - } while (rc == -EAGAIN); 2847 - 2848 - return rc; 2849 - } 2850 - 2851 2725 /** 2852 2726 * cifs_readdata_to_iov - copy data from pages in response to an iovec 2853 2727 * @rdata: the readdata response with list of pages holding data ··· 2840 2754 static int 2841 2755 cifs_readdata_to_iov(struct cifs_readdata *rdata, struct iov_iter *iter) 2842 2756 { 2843 - size_t remaining = rdata->bytes; 2757 + size_t remaining = rdata->got_bytes; 2844 2758 unsigned int i; 2845 2759 2846 2760 for (i = 0; i < rdata->nr_pages; i++) { ··· 2868 2782 cifs_uncached_read_into_pages(struct TCP_Server_Info *server, 2869 2783 struct cifs_readdata *rdata, unsigned int len) 2870 2784 { 2871 - int total_read = 0, result = 0; 2785 + int result = 0; 2872 2786 unsigned int i; 2873 2787 unsigned int nr_pages = rdata->nr_pages; 2874 2788 struct kvec iov; 2875 2789 2790 + rdata->got_bytes = 0; 2876 2791 rdata->tailsz = PAGE_SIZE; 2877 2792 for (i = 0; i < nr_pages; i++) { 2878 2793 struct page *page = rdata->pages[i]; ··· 2907 2820 if (result < 0) 2908 2821 break; 2909 2822 2910 - total_read += result; 2823 + rdata->got_bytes += result; 2911 2824 } 2912 2825 2913 - return total_read > 0 ? total_read : result; 2826 + return rdata->got_bytes > 0 && result != -ECONNABORTED ? 
2827 + rdata->got_bytes : result; 2914 2828 } 2915 2829 2916 - ssize_t cifs_user_readv(struct kiocb *iocb, struct iov_iter *to) 2830 + static int 2831 + cifs_send_async_read(loff_t offset, size_t len, struct cifsFileInfo *open_file, 2832 + struct cifs_sb_info *cifs_sb, struct list_head *rdata_list) 2917 2833 { 2918 - struct file *file = iocb->ki_filp; 2919 - ssize_t rc; 2920 - size_t len, cur_len; 2921 - ssize_t total_read = 0; 2922 - loff_t offset = iocb->ki_pos; 2923 - unsigned int npages; 2924 - struct cifs_sb_info *cifs_sb; 2925 - struct cifs_tcon *tcon; 2926 - struct cifsFileInfo *open_file; 2927 - struct cifs_readdata *rdata, *tmp; 2928 - struct list_head rdata_list; 2834 + struct cifs_readdata *rdata; 2835 + unsigned int npages, rsize, credits; 2836 + size_t cur_len; 2837 + int rc; 2929 2838 pid_t pid; 2839 + struct TCP_Server_Info *server; 2930 2840 2931 - len = iov_iter_count(to); 2932 - if (!len) 2933 - return 0; 2934 - 2935 - INIT_LIST_HEAD(&rdata_list); 2936 - cifs_sb = CIFS_SB(file->f_path.dentry->d_sb); 2937 - open_file = file->private_data; 2938 - tcon = tlink_tcon(open_file->tlink); 2939 - 2940 - if (!tcon->ses->server->ops->async_readv) 2941 - return -ENOSYS; 2841 + server = tlink_tcon(open_file->tlink)->ses->server; 2942 2842 2943 2843 if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD) 2944 2844 pid = open_file->pid; 2945 2845 else 2946 2846 pid = current->tgid; 2947 2847 2948 - if ((file->f_flags & O_ACCMODE) == O_WRONLY) 2949 - cifs_dbg(FYI, "attempting read on write only file instance\n"); 2950 - 2951 2848 do { 2952 - cur_len = min_t(const size_t, len - total_read, cifs_sb->rsize); 2849 + rc = server->ops->wait_mtu_credits(server, cifs_sb->rsize, 2850 + &rsize, &credits); 2851 + if (rc) 2852 + break; 2853 + 2854 + cur_len = min_t(const size_t, len, rsize); 2953 2855 npages = DIV_ROUND_UP(cur_len, PAGE_SIZE); 2954 2856 2955 2857 /* allocate a readdata struct */ 2956 2858 rdata = cifs_readdata_alloc(npages, 2957 2859 
cifs_uncached_readv_complete); 2958 2860 if (!rdata) { 2861 + add_credits_and_wake_if(server, credits, 0); 2959 2862 rc = -ENOMEM; 2960 2863 break; 2961 2864 } ··· 2961 2884 rdata->pid = pid; 2962 2885 rdata->pagesz = PAGE_SIZE; 2963 2886 rdata->read_into_pages = cifs_uncached_read_into_pages; 2887 + rdata->credits = credits; 2964 2888 2965 - rc = cifs_retry_async_readv(rdata); 2889 + if (!rdata->cfile->invalidHandle || 2890 + !cifs_reopen_file(rdata->cfile, true)) 2891 + rc = server->ops->async_readv(rdata); 2966 2892 error: 2967 2893 if (rc) { 2894 + add_credits_and_wake_if(server, rdata->credits, 0); 2968 2895 kref_put(&rdata->refcount, 2969 2896 cifs_uncached_readdata_release); 2897 + if (rc == -EAGAIN) 2898 + continue; 2970 2899 break; 2971 2900 } 2972 2901 2973 - list_add_tail(&rdata->list, &rdata_list); 2902 + list_add_tail(&rdata->list, rdata_list); 2974 2903 offset += cur_len; 2975 2904 len -= cur_len; 2976 2905 } while (len > 0); 2906 + 2907 + return rc; 2908 + } 2909 + 2910 + ssize_t cifs_user_readv(struct kiocb *iocb, struct iov_iter *to) 2911 + { 2912 + struct file *file = iocb->ki_filp; 2913 + ssize_t rc; 2914 + size_t len; 2915 + ssize_t total_read = 0; 2916 + loff_t offset = iocb->ki_pos; 2917 + struct cifs_sb_info *cifs_sb; 2918 + struct cifs_tcon *tcon; 2919 + struct cifsFileInfo *open_file; 2920 + struct cifs_readdata *rdata, *tmp; 2921 + struct list_head rdata_list; 2922 + 2923 + len = iov_iter_count(to); 2924 + if (!len) 2925 + return 0; 2926 + 2927 + INIT_LIST_HEAD(&rdata_list); 2928 + cifs_sb = CIFS_SB(file->f_path.dentry->d_sb); 2929 + open_file = file->private_data; 2930 + tcon = tlink_tcon(open_file->tlink); 2931 + 2932 + if (!tcon->ses->server->ops->async_readv) 2933 + return -ENOSYS; 2934 + 2935 + if ((file->f_flags & O_ACCMODE) == O_WRONLY) 2936 + cifs_dbg(FYI, "attempting read on write only file instance\n"); 2937 + 2938 + rc = cifs_send_async_read(offset, len, open_file, cifs_sb, &rdata_list); 2977 2939 2978 2940 /* if at least one 
read request send succeeded, then reset rc */ 2979 2941 if (!list_empty(&rdata_list)) ··· 3020 2904 3021 2905 len = iov_iter_count(to); 3022 2906 /* the loop below should proceed in the order of increasing offsets */ 2907 + again: 3023 2908 list_for_each_entry_safe(rdata, tmp, &rdata_list, list) { 3024 - again: 3025 2909 if (!rc) { 3026 2910 /* FIXME: freezable sleep too? */ 3027 2911 rc = wait_for_completion_killable(&rdata->done); 3028 2912 if (rc) 3029 2913 rc = -EINTR; 3030 - else if (rdata->result) { 3031 - rc = rdata->result; 2914 + else if (rdata->result == -EAGAIN) { 3032 2915 /* resend call if it's a retryable error */ 3033 - if (rc == -EAGAIN) { 3034 - rc = cifs_retry_async_readv(rdata); 3035 - goto again; 3036 - } 3037 - } else { 3038 - rc = cifs_readdata_to_iov(rdata, to); 3039 - } 2916 + struct list_head tmp_list; 2917 + unsigned int got_bytes = rdata->got_bytes; 3040 2918 2919 + list_del_init(&rdata->list); 2920 + INIT_LIST_HEAD(&tmp_list); 2921 + 2922 + /* 2923 + * Got a part of data and then reconnect has 2924 + * happened -- fill the buffer and continue 2925 + * reading. 
2926 + */ 2927 + if (got_bytes && got_bytes < rdata->bytes) { 2928 + rc = cifs_readdata_to_iov(rdata, to); 2929 + if (rc) { 2930 + kref_put(&rdata->refcount, 2931 + cifs_uncached_readdata_release); 2932 + continue; 2933 + } 2934 + } 2935 + 2936 + rc = cifs_send_async_read( 2937 + rdata->offset + got_bytes, 2938 + rdata->bytes - got_bytes, 2939 + rdata->cfile, cifs_sb, 2940 + &tmp_list); 2941 + 2942 + list_splice(&tmp_list, &rdata_list); 2943 + 2944 + kref_put(&rdata->refcount, 2945 + cifs_uncached_readdata_release); 2946 + goto again; 2947 + } else if (rdata->result) 2948 + rc = rdata->result; 2949 + else 2950 + rc = cifs_readdata_to_iov(rdata, to); 2951 + 2952 + /* if there was a short read -- discard anything left */ 2953 + if (rdata->got_bytes && rdata->got_bytes < rdata->bytes) 2954 + rc = -ENODATA; 3041 2955 } 3042 2956 list_del_init(&rdata->list); 3043 2957 kref_put(&rdata->refcount, cifs_uncached_readdata_release); ··· 3176 3030 3177 3031 for (total_read = 0, cur_offset = read_data; read_size > total_read; 3178 3032 total_read += bytes_read, cur_offset += bytes_read) { 3179 - current_read_size = min_t(uint, read_size - total_read, rsize); 3180 - /* 3181 - * For windows me and 9x we do not want to request more than it 3182 - * negotiated since it will refuse the read then. 3183 - */ 3184 - if ((tcon->ses) && !(tcon->ses->capabilities & 3033 + do { 3034 + current_read_size = min_t(uint, read_size - total_read, 3035 + rsize); 3036 + /* 3037 + * For windows me and 9x we do not want to request more 3038 + * than it negotiated since it will refuse the read 3039 + * then. 
3040 + */ 3041 + if ((tcon->ses) && !(tcon->ses->capabilities & 3185 3042 tcon->ses->server->vals->cap_large_files)) { 3186 - current_read_size = min_t(uint, current_read_size, 3187 - CIFSMaxBufSize); 3188 - } 3189 - rc = -EAGAIN; 3190 - while (rc == -EAGAIN) { 3043 + current_read_size = min_t(uint, 3044 + current_read_size, CIFSMaxBufSize); 3045 + } 3191 3046 if (open_file->invalidHandle) { 3192 3047 rc = cifs_reopen_file(open_file, true); 3193 3048 if (rc != 0) ··· 3201 3054 rc = server->ops->sync_read(xid, open_file, &io_parms, 3202 3055 &bytes_read, &cur_offset, 3203 3056 &buf_type); 3204 - } 3057 + } while (rc == -EAGAIN); 3058 + 3205 3059 if (rc || (bytes_read == 0)) { 3206 3060 if (total_read) { 3207 3061 break; ··· 3281 3133 static void 3282 3134 cifs_readv_complete(struct work_struct *work) 3283 3135 { 3284 - unsigned int i; 3136 + unsigned int i, got_bytes; 3285 3137 struct cifs_readdata *rdata = container_of(work, 3286 3138 struct cifs_readdata, work); 3287 3139 3140 + got_bytes = rdata->got_bytes; 3288 3141 for (i = 0; i < rdata->nr_pages; i++) { 3289 3142 struct page *page = rdata->pages[i]; 3290 3143 3291 3144 lru_cache_add_file(page); 3292 3145 3293 - if (rdata->result == 0) { 3146 + if (rdata->result == 0 || 3147 + (rdata->result == -EAGAIN && got_bytes)) { 3294 3148 flush_dcache_page(page); 3295 3149 SetPageUptodate(page); 3296 3150 } 3297 3151 3298 3152 unlock_page(page); 3299 3153 3300 - if (rdata->result == 0) 3154 + if (rdata->result == 0 || 3155 + (rdata->result == -EAGAIN && got_bytes)) 3301 3156 cifs_readpage_to_fscache(rdata->mapping->host, page); 3157 + 3158 + got_bytes -= min_t(unsigned int, PAGE_CACHE_SIZE, got_bytes); 3302 3159 3303 3160 page_cache_release(page); 3304 3161 rdata->pages[i] = NULL; ··· 3315 3162 cifs_readpages_read_into_pages(struct TCP_Server_Info *server, 3316 3163 struct cifs_readdata *rdata, unsigned int len) 3317 3164 { 3318 - int total_read = 0, result = 0; 3165 + int result = 0; 3319 3166 unsigned int i; 3320 3167 
u64 eof; 3321 3168 pgoff_t eof_index; ··· 3327 3174 eof_index = eof ? (eof - 1) >> PAGE_CACHE_SHIFT : 0; 3328 3175 cifs_dbg(FYI, "eof=%llu eof_index=%lu\n", eof, eof_index); 3329 3176 3177 + rdata->got_bytes = 0; 3330 3178 rdata->tailsz = PAGE_CACHE_SIZE; 3331 3179 for (i = 0; i < nr_pages; i++) { 3332 3180 struct page *page = rdata->pages[i]; ··· 3382 3228 if (result < 0) 3383 3229 break; 3384 3230 3385 - total_read += result; 3231 + rdata->got_bytes += result; 3386 3232 } 3387 3233 3388 - return total_read > 0 ? total_read : result; 3234 + return rdata->got_bytes > 0 && result != -ECONNABORTED ? 3235 + rdata->got_bytes : result; 3236 + } 3237 + 3238 + static int 3239 + readpages_get_pages(struct address_space *mapping, struct list_head *page_list, 3240 + unsigned int rsize, struct list_head *tmplist, 3241 + unsigned int *nr_pages, loff_t *offset, unsigned int *bytes) 3242 + { 3243 + struct page *page, *tpage; 3244 + unsigned int expected_index; 3245 + int rc; 3246 + 3247 + INIT_LIST_HEAD(tmplist); 3248 + 3249 + page = list_entry(page_list->prev, struct page, lru); 3250 + 3251 + /* 3252 + * Lock the page and put it in the cache. Since no one else 3253 + * should have access to this page, we're safe to simply set 3254 + * PG_locked without checking it first. 3255 + */ 3256 + __set_page_locked(page); 3257 + rc = add_to_page_cache_locked(page, mapping, 3258 + page->index, GFP_KERNEL); 3259 + 3260 + /* give up if we can't stick it in the cache */ 3261 + if (rc) { 3262 + __clear_page_locked(page); 3263 + return rc; 3264 + } 3265 + 3266 + /* move first page to the tmplist */ 3267 + *offset = (loff_t)page->index << PAGE_CACHE_SHIFT; 3268 + *bytes = PAGE_CACHE_SIZE; 3269 + *nr_pages = 1; 3270 + list_move_tail(&page->lru, tmplist); 3271 + 3272 + /* now try and add more pages onto the request */ 3273 + expected_index = page->index + 1; 3274 + list_for_each_entry_safe_reverse(page, tpage, page_list, lru) { 3275 + /* discontinuity ? 
*/ 3276 + if (page->index != expected_index) 3277 + break; 3278 + 3279 + /* would this page push the read over the rsize? */ 3280 + if (*bytes + PAGE_CACHE_SIZE > rsize) 3281 + break; 3282 + 3283 + __set_page_locked(page); 3284 + if (add_to_page_cache_locked(page, mapping, page->index, 3285 + GFP_KERNEL)) { 3286 + __clear_page_locked(page); 3287 + break; 3288 + } 3289 + list_move_tail(&page->lru, tmplist); 3290 + (*bytes) += PAGE_CACHE_SIZE; 3291 + expected_index++; 3292 + (*nr_pages)++; 3293 + } 3294 + return rc; 3389 3295 } 3390 3296 3391 3297 static int cifs_readpages(struct file *file, struct address_space *mapping, ··· 3455 3241 struct list_head tmplist; 3456 3242 struct cifsFileInfo *open_file = file->private_data; 3457 3243 struct cifs_sb_info *cifs_sb = CIFS_SB(file->f_path.dentry->d_sb); 3458 - unsigned int rsize = cifs_sb->rsize; 3244 + struct TCP_Server_Info *server; 3459 3245 pid_t pid; 3460 - 3461 - /* 3462 - * Give up immediately if rsize is too small to read an entire page. 3463 - * The VFS will fall back to readpage. We should never reach this 3464 - * point however since we set ra_pages to 0 when the rsize is smaller 3465 - * than a cache page. 3466 - */ 3467 - if (unlikely(rsize < PAGE_CACHE_SIZE)) 3468 - return 0; 3469 3246 3470 3247 /* 3471 3248 * Reads as many pages as possible from fscache. Returns -ENOBUFS ··· 3476 3271 pid = current->tgid; 3477 3272 3478 3273 rc = 0; 3479 - INIT_LIST_HEAD(&tmplist); 3274 + server = tlink_tcon(open_file->tlink)->ses->server; 3480 3275 3481 3276 cifs_dbg(FYI, "%s: file=%p mapping=%p num_pages=%u\n", 3482 3277 __func__, file, mapping, num_pages); ··· 3493 3288 * the rdata->pages, then we want them in increasing order. 
3494 3289 */ 3495 3290 while (!list_empty(page_list)) { 3496 - unsigned int i; 3497 - unsigned int bytes = PAGE_CACHE_SIZE; 3498 - unsigned int expected_index; 3499 - unsigned int nr_pages = 1; 3291 + unsigned int i, nr_pages, bytes, rsize; 3500 3292 loff_t offset; 3501 3293 struct page *page, *tpage; 3502 3294 struct cifs_readdata *rdata; 3295 + unsigned credits; 3503 3296 3504 - page = list_entry(page_list->prev, struct page, lru); 3297 + rc = server->ops->wait_mtu_credits(server, cifs_sb->rsize, 3298 + &rsize, &credits); 3299 + if (rc) 3300 + break; 3505 3301 3506 3302 /* 3507 - * Lock the page and put it in the cache. Since no one else 3508 - * should have access to this page, we're safe to simply set 3509 - * PG_locked without checking it first. 3303 + * Give up immediately if rsize is too small to read an entire 3304 + * page. The VFS will fall back to readpage. We should never 3305 + * reach this point however since we set ra_pages to 0 when the 3306 + * rsize is smaller than a cache page. 3510 3307 */ 3511 - __set_page_locked(page); 3512 - rc = add_to_page_cache_locked(page, mapping, 3513 - page->index, GFP_KERNEL); 3514 - 3515 - /* give up if we can't stick it in the cache */ 3516 - if (rc) { 3517 - __clear_page_locked(page); 3518 - break; 3308 + if (unlikely(rsize < PAGE_CACHE_SIZE)) { 3309 + add_credits_and_wake_if(server, credits, 0); 3310 + return 0; 3519 3311 } 3520 3312 3521 - /* move first page to the tmplist */ 3522 - offset = (loff_t)page->index << PAGE_CACHE_SHIFT; 3523 - list_move_tail(&page->lru, &tmplist); 3524 - 3525 - /* now try and add more pages onto the request */ 3526 - expected_index = page->index + 1; 3527 - list_for_each_entry_safe_reverse(page, tpage, page_list, lru) { 3528 - /* discontinuity ? */ 3529 - if (page->index != expected_index) 3530 - break; 3531 - 3532 - /* would this page push the read over the rsize? 
*/ 3533 - if (bytes + PAGE_CACHE_SIZE > rsize) 3534 - break; 3535 - 3536 - __set_page_locked(page); 3537 - if (add_to_page_cache_locked(page, mapping, 3538 - page->index, GFP_KERNEL)) { 3539 - __clear_page_locked(page); 3540 - break; 3541 - } 3542 - list_move_tail(&page->lru, &tmplist); 3543 - bytes += PAGE_CACHE_SIZE; 3544 - expected_index++; 3545 - nr_pages++; 3313 + rc = readpages_get_pages(mapping, page_list, rsize, &tmplist, 3314 + &nr_pages, &offset, &bytes); 3315 + if (rc) { 3316 + add_credits_and_wake_if(server, credits, 0); 3317 + break; 3546 3318 } 3547 3319 3548 3320 rdata = cifs_readdata_alloc(nr_pages, cifs_readv_complete); ··· 3532 3350 page_cache_release(page); 3533 3351 } 3534 3352 rc = -ENOMEM; 3353 + add_credits_and_wake_if(server, credits, 0); 3535 3354 break; 3536 3355 } 3537 3356 ··· 3543 3360 rdata->pid = pid; 3544 3361 rdata->pagesz = PAGE_CACHE_SIZE; 3545 3362 rdata->read_into_pages = cifs_readpages_read_into_pages; 3363 + rdata->credits = credits; 3546 3364 3547 3365 list_for_each_entry_safe(page, tpage, &tmplist, lru) { 3548 3366 list_del(&page->lru); 3549 3367 rdata->pages[rdata->nr_pages++] = page; 3550 3368 } 3551 3369 3552 - rc = cifs_retry_async_readv(rdata); 3553 - if (rc != 0) { 3370 + if (!rdata->cfile->invalidHandle || 3371 + !cifs_reopen_file(rdata->cfile, true)) 3372 + rc = server->ops->async_readv(rdata); 3373 + if (rc) { 3374 + add_credits_and_wake_if(server, rdata->credits, 0); 3554 3375 for (i = 0; i < rdata->nr_pages; i++) { 3555 3376 page = rdata->pages[i]; 3556 3377 lru_cache_add_file(page); 3557 3378 unlock_page(page); 3558 3379 page_cache_release(page); 3380 + if (rc == -EAGAIN) 3381 + list_add_tail(&page->lru, &tmplist); 3559 3382 } 3560 3383 kref_put(&rdata->refcount, cifs_readdata_release); 3384 + if (rc == -EAGAIN) { 3385 + /* Re-add pages to the page_list and retry */ 3386 + list_splice(&tmplist, page_list); 3387 + continue; 3388 + } 3561 3389 break; 3562 3390 } 3563 3391
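The pattern that recurs through the fs/cifs/file.c hunks above — reserve send credits sized to the wire I/O via `wait_mtu_credits()` before building a request, attach them to the `wdata`/`rdata`, and hand them back with `add_credits_and_wake_if()` on every failure path — can be sketched in miniature. This is an illustrative model only: `credit_pool`, `pool_acquire`, and `pool_release` are names invented here for the sketch, not kernel APIs, and the accounting is deliberately simplified (no locking, no waking of waiters).

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of SMB2/SMB3 credit accounting: a request must hold
 * credits proportional to its on-the-wire size, and every error path
 * must return them, or the pool drains and later I/O stalls. */
struct credit_pool {
	unsigned int available;   /* credits the server has granted */
	unsigned int per_credit;  /* bytes one credit covers on the wire */
};

/* Reserve credits for an I/O of up to 'want' bytes. Returns the number
 * of credits taken and clamps *size to what the grant actually covers,
 * the way wait_mtu_credits() clamps wsize/rsize in the diff above. */
static unsigned int
pool_acquire(struct credit_pool *p, size_t want, size_t *size)
{
	unsigned int need =
		(unsigned int)((want + p->per_credit - 1) / p->per_credit);

	if (need > p->available)
		need = p->available;	/* take what we can get */
	p->available -= need;
	*size = (size_t)need * p->per_credit;
	return need;
}

/* Stand-in for add_credits_and_wake_if(): give credits back on
 * failure (or when the request completes). */
static void pool_release(struct credit_pool *p, unsigned int credits)
{
	p->available += credits;
}
```

The point of routing every `break`/`continue` in the diff through `add_credits_and_wake_if()` is exactly the invariant this sketch makes testable: after any sequence of acquire/release pairs, the pool balance is restored.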
+11 -2
fs/cifs/misc.c
··· 226 226 return; 227 227 } 228 228 229 + void 230 + free_rsp_buf(int resp_buftype, void *rsp) 231 + { 232 + if (resp_buftype == CIFS_SMALL_BUFFER) 233 + cifs_small_buf_release(rsp); 234 + else if (resp_buftype == CIFS_LARGE_BUFFER) 235 + cifs_buf_release(rsp); 236 + } 237 + 229 238 /* NB: MID can not be set if treeCon not passed in, in that 230 239 case it is responsbility of caller to set the mid */ 231 240 void ··· 423 414 return true; 424 415 } 425 416 if (pSMBr->hdr.Status.CifsError) { 426 - cifs_dbg(FYI, "notify err 0x%d\n", 417 + cifs_dbg(FYI, "notify err 0x%x\n", 427 418 pSMBr->hdr.Status.CifsError); 428 419 return true; 429 420 } ··· 450 441 if (pSMB->hdr.WordCount != 8) 451 442 return false; 452 443 453 - cifs_dbg(FYI, "oplock type 0x%d level 0x%d\n", 444 + cifs_dbg(FYI, "oplock type 0x%x level 0x%x\n", 454 445 pSMB->LockType, pSMB->OplockLevel); 455 446 if (!(pSMB->LockType & LOCKING_ANDX_OPLOCK_RELEASE)) 456 447 return false;
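The `"0x%d"` → `"0x%x"` change in the misc.c hunk above (the "[CIFS] Fix incorrect hex vs. decimal" commit from the shortlog) is easy to gloss over: prefixing a *decimal* conversion with a literal `0x` prints a misleading value for anything 10 or larger. A minimal standalone demonstration, using plain `snprintf` as a stand-in for `cifs_dbg` (the helper name `fmt_both` is invented here for illustration):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format a value both the old (wrong) and new (fixed) way. With
 * level 26, "0x%d" yields "0x26", which a reader decodes as decimal
 * 38; "0x%x" yields the correct "0x1a". */
static void
fmt_both(unsigned int level, char *wrong, char *right, size_t n)
{
	snprintf(wrong, n, "0x%d", level);  /* old: decimal dressed as hex */
	snprintf(right, n, "0x%x", level);  /* fixed: actually hex */
}
```

The new `free_rsp_buf()` helper in the same hunk is a plain consolidation: callers previously open-coded the small-vs-large buffer release choice at every response-handling exit.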
+877 -389
fs/cifs/sess.c
··· 520 520 } 521 521 } 522 522 523 - int 524 - CIFS_SessSetup(const unsigned int xid, struct cifs_ses *ses, 525 - const struct nls_table *nls_cp) 526 - { 527 - int rc = 0; 528 - int wct; 529 - struct smb_hdr *smb_buf; 530 - char *bcc_ptr; 531 - char *str_area; 532 - SESSION_SETUP_ANDX *pSMB; 533 - __u32 capabilities; 534 - __u16 count; 535 - int resp_buf_type; 523 + struct sess_data { 524 + unsigned int xid; 525 + struct cifs_ses *ses; 526 + struct nls_table *nls_cp; 527 + void (*func)(struct sess_data *); 528 + int result; 529 + 530 + /* we will send the SMB in three pieces: 531 + * a fixed length beginning part, an optional 532 + * SPNEGO blob (which can be zero length), and a 533 + * last part which will include the strings 534 + * and rest of bcc area. This allows us to avoid 535 + * a large buffer 17K allocation 536 + */ 537 + int buf0_type; 536 538 struct kvec iov[3]; 537 - enum securityEnum type; 538 - __u16 action, bytes_remaining; 539 - struct key *spnego_key = NULL; 540 - __le32 phase = NtLmNegotiate; /* NTLMSSP, if needed, is multistage */ 541 - u16 blob_len; 542 - char *ntlmsspblob = NULL; 539 + }; 543 540 544 - if (ses == NULL) { 545 - WARN(1, "%s: ses == NULL!", __func__); 546 - return -EINVAL; 547 - } 548 - 549 - type = select_sectype(ses->server, ses->sectype); 550 - cifs_dbg(FYI, "sess setup type %d\n", type); 551 - if (type == Unspecified) { 552 - cifs_dbg(VFS, 553 - "Unable to select appropriate authentication method!"); 554 - return -EINVAL; 555 - } 556 - 557 - if (type == RawNTLMSSP) { 558 - /* if memory allocation is successful, caller of this function 559 - * frees it. 
560 - */ 561 - ses->ntlmssp = kmalloc(sizeof(struct ntlmssp_auth), GFP_KERNEL); 562 - if (!ses->ntlmssp) 563 - return -ENOMEM; 564 - ses->ntlmssp->sesskey_per_smbsess = false; 565 - 566 - } 567 - 568 - ssetup_ntlmssp_authenticate: 569 - if (phase == NtLmChallenge) 570 - phase = NtLmAuthenticate; /* if ntlmssp, now final phase */ 571 - 572 - if (type == LANMAN) { 573 - #ifndef CONFIG_CIFS_WEAK_PW_HASH 574 - /* LANMAN and plaintext are less secure and off by default. 575 - So we make this explicitly be turned on in kconfig (in the 576 - build) and turned on at runtime (changed from the default) 577 - in proc/fs/cifs or via mount parm. Unfortunately this is 578 - needed for old Win (e.g. Win95), some obscure NAS and OS/2 */ 579 - return -EOPNOTSUPP; 580 - #endif 581 - wct = 10; /* lanman 2 style sessionsetup */ 582 - } else if ((type == NTLM) || (type == NTLMv2)) { 583 - /* For NTLMv2 failures eventually may need to retry NTLM */ 584 - wct = 13; /* old style NTLM sessionsetup */ 585 - } else /* same size: negotiate or auth, NTLMSSP or extended security */ 586 - wct = 12; 541 + static int 542 + sess_alloc_buffer(struct sess_data *sess_data, int wct) 543 + { 544 + int rc; 545 + struct cifs_ses *ses = sess_data->ses; 546 + struct smb_hdr *smb_buf; 587 547 588 548 rc = small_smb_init_no_tc(SMB_COM_SESSION_SETUP_ANDX, wct, ses, 589 - (void **)&smb_buf); 549 + (void **)&smb_buf); 550 + 590 551 if (rc) 591 552 return rc; 592 553 593 - pSMB = (SESSION_SETUP_ANDX *)smb_buf; 594 - 595 - capabilities = cifs_ssetup_hdr(ses, pSMB); 596 - 597 - /* we will send the SMB in three pieces: 598 - a fixed length beginning part, an optional 599 - SPNEGO blob (which can be zero length), and a 600 - last part which will include the strings 601 - and rest of bcc area. 
This allows us to avoid 602 - a large buffer 17K allocation */ 603 - iov[0].iov_base = (char *)pSMB; 604 - iov[0].iov_len = be32_to_cpu(smb_buf->smb_buf_length) + 4; 605 - 606 - /* setting this here allows the code at the end of the function 607 - to free the request buffer if there's an error */ 608 - resp_buf_type = CIFS_SMALL_BUFFER; 554 + sess_data->iov[0].iov_base = (char *)smb_buf; 555 + sess_data->iov[0].iov_len = be32_to_cpu(smb_buf->smb_buf_length) + 4; 556 + /* 557 + * This variable will be used to clear the buffer 558 + * allocated above in case of any error in the calling function. 559 + */ 560 + sess_data->buf0_type = CIFS_SMALL_BUFFER; 609 561 610 562 /* 2000 big enough to fit max user, domain, NOS name etc. */ 611 - str_area = kmalloc(2000, GFP_KERNEL); 612 - if (str_area == NULL) { 563 + sess_data->iov[2].iov_base = kmalloc(2000, GFP_KERNEL); 564 + if (!sess_data->iov[2].iov_base) { 613 565 rc = -ENOMEM; 614 - goto ssetup_exit; 615 - } 616 - bcc_ptr = str_area; 617 - 618 - iov[1].iov_base = NULL; 619 - iov[1].iov_len = 0; 620 - 621 - if (type == LANMAN) { 622 - #ifdef CONFIG_CIFS_WEAK_PW_HASH 623 - char lnm_session_key[CIFS_AUTH_RESP_SIZE]; 624 - 625 - pSMB->req.hdr.Flags2 &= ~SMBFLG2_UNICODE; 626 - 627 - /* no capabilities flags in old lanman negotiation */ 628 - 629 - pSMB->old_req.PasswordLength = cpu_to_le16(CIFS_AUTH_RESP_SIZE); 630 - 631 - /* Calculate hash with password and copy into bcc_ptr. 632 - * Encryption Key (stored as in cryptkey) gets used if the 633 - * security mode bit in Negottiate Protocol response states 634 - * to use challenge/response method (i.e. Password bit is 1). 635 - */ 636 - 637 - rc = calc_lanman_hash(ses->password, ses->server->cryptkey, 638 - ses->server->sec_mode & SECMODE_PW_ENCRYPT ? 
639 - true : false, lnm_session_key); 640 - 641 - memcpy(bcc_ptr, (char *)lnm_session_key, CIFS_AUTH_RESP_SIZE); 642 - bcc_ptr += CIFS_AUTH_RESP_SIZE; 643 - 644 - /* can not sign if LANMAN negotiated so no need 645 - to calculate signing key? but what if server 646 - changed to do higher than lanman dialect and 647 - we reconnected would we ever calc signing_key? */ 648 - 649 - cifs_dbg(FYI, "Negotiating LANMAN setting up strings\n"); 650 - /* Unicode not allowed for LANMAN dialects */ 651 - ascii_ssetup_strings(&bcc_ptr, ses, nls_cp); 652 - #endif 653 - } else if (type == NTLM) { 654 - pSMB->req_no_secext.Capabilities = cpu_to_le32(capabilities); 655 - pSMB->req_no_secext.CaseInsensitivePasswordLength = 656 - cpu_to_le16(CIFS_AUTH_RESP_SIZE); 657 - pSMB->req_no_secext.CaseSensitivePasswordLength = 658 - cpu_to_le16(CIFS_AUTH_RESP_SIZE); 659 - 660 - /* calculate ntlm response and session key */ 661 - rc = setup_ntlm_response(ses, nls_cp); 662 - if (rc) { 663 - cifs_dbg(VFS, "Error %d during NTLM authentication\n", 664 - rc); 665 - goto ssetup_exit; 666 - } 667 - 668 - /* copy ntlm response */ 669 - memcpy(bcc_ptr, ses->auth_key.response + CIFS_SESS_KEY_SIZE, 670 - CIFS_AUTH_RESP_SIZE); 671 - bcc_ptr += CIFS_AUTH_RESP_SIZE; 672 - memcpy(bcc_ptr, ses->auth_key.response + CIFS_SESS_KEY_SIZE, 673 - CIFS_AUTH_RESP_SIZE); 674 - bcc_ptr += CIFS_AUTH_RESP_SIZE; 675 - 676 - if (ses->capabilities & CAP_UNICODE) { 677 - /* unicode strings must be word aligned */ 678 - if (iov[0].iov_len % 2) { 679 - *bcc_ptr = 0; 680 - bcc_ptr++; 681 - } 682 - unicode_ssetup_strings(&bcc_ptr, ses, nls_cp); 683 - } else 684 - ascii_ssetup_strings(&bcc_ptr, ses, nls_cp); 685 - } else if (type == NTLMv2) { 686 - pSMB->req_no_secext.Capabilities = cpu_to_le32(capabilities); 687 - 688 - /* LM2 password would be here if we supported it */ 689 - pSMB->req_no_secext.CaseInsensitivePasswordLength = 0; 690 - 691 - /* calculate nlmv2 response and session key */ 692 - rc = setup_ntlmv2_rsp(ses, nls_cp); 
693 - if (rc) { 694 - cifs_dbg(VFS, "Error %d during NTLMv2 authentication\n", 695 - rc); 696 - goto ssetup_exit; 697 - } 698 - memcpy(bcc_ptr, ses->auth_key.response + CIFS_SESS_KEY_SIZE, 699 - ses->auth_key.len - CIFS_SESS_KEY_SIZE); 700 - bcc_ptr += ses->auth_key.len - CIFS_SESS_KEY_SIZE; 701 - 702 - /* set case sensitive password length after tilen may get 703 - * assigned, tilen is 0 otherwise. 704 - */ 705 - pSMB->req_no_secext.CaseSensitivePasswordLength = 706 - cpu_to_le16(ses->auth_key.len - CIFS_SESS_KEY_SIZE); 707 - 708 - if (ses->capabilities & CAP_UNICODE) { 709 - if (iov[0].iov_len % 2) { 710 - *bcc_ptr = 0; 711 - bcc_ptr++; 712 - } 713 - unicode_ssetup_strings(&bcc_ptr, ses, nls_cp); 714 - } else 715 - ascii_ssetup_strings(&bcc_ptr, ses, nls_cp); 716 - } else if (type == Kerberos) { 717 - #ifdef CONFIG_CIFS_UPCALL 718 - struct cifs_spnego_msg *msg; 719 - 720 - spnego_key = cifs_get_spnego_key(ses); 721 - if (IS_ERR(spnego_key)) { 722 - rc = PTR_ERR(spnego_key); 723 - spnego_key = NULL; 724 - goto ssetup_exit; 725 - } 726 - 727 - msg = spnego_key->payload.data; 728 - /* check version field to make sure that cifs.upcall is 729 - sending us a response in an expected form */ 730 - if (msg->version != CIFS_SPNEGO_UPCALL_VERSION) { 731 - cifs_dbg(VFS, "incorrect version of cifs.upcall " 732 - "expected %d but got %d)", 733 - CIFS_SPNEGO_UPCALL_VERSION, msg->version); 734 - rc = -EKEYREJECTED; 735 - goto ssetup_exit; 736 - } 737 - 738 - ses->auth_key.response = kmemdup(msg->data, msg->sesskey_len, 739 - GFP_KERNEL); 740 - if (!ses->auth_key.response) { 741 - cifs_dbg(VFS, 742 - "Kerberos can't allocate (%u bytes) memory", 743 - msg->sesskey_len); 744 - rc = -ENOMEM; 745 - goto ssetup_exit; 746 - } 747 - ses->auth_key.len = msg->sesskey_len; 748 - 749 - pSMB->req.hdr.Flags2 |= SMBFLG2_EXT_SEC; 750 - capabilities |= CAP_EXTENDED_SECURITY; 751 - pSMB->req.Capabilities = cpu_to_le32(capabilities); 752 - iov[1].iov_base = msg->data + msg->sesskey_len; 753 - 
iov[1].iov_len = msg->secblob_len; 754 - pSMB->req.SecurityBlobLength = cpu_to_le16(iov[1].iov_len); 755 - 756 - if (ses->capabilities & CAP_UNICODE) { 757 - /* unicode strings must be word aligned */ 758 - if ((iov[0].iov_len + iov[1].iov_len) % 2) { 759 - *bcc_ptr = 0; 760 - bcc_ptr++; 761 - } 762 - unicode_oslm_strings(&bcc_ptr, nls_cp); 763 - unicode_domain_string(&bcc_ptr, ses, nls_cp); 764 - } else 765 - /* BB: is this right? */ 766 - ascii_ssetup_strings(&bcc_ptr, ses, nls_cp); 767 - #else /* ! CONFIG_CIFS_UPCALL */ 768 - cifs_dbg(VFS, "Kerberos negotiated but upcall support disabled!\n"); 769 - rc = -ENOSYS; 770 - goto ssetup_exit; 771 - #endif /* CONFIG_CIFS_UPCALL */ 772 - } else if (type == RawNTLMSSP) { 773 - if ((pSMB->req.hdr.Flags2 & SMBFLG2_UNICODE) == 0) { 774 - cifs_dbg(VFS, "NTLMSSP requires Unicode support\n"); 775 - rc = -ENOSYS; 776 - goto ssetup_exit; 777 - } 778 - 779 - cifs_dbg(FYI, "ntlmssp session setup phase %d\n", phase); 780 - pSMB->req.hdr.Flags2 |= SMBFLG2_EXT_SEC; 781 - capabilities |= CAP_EXTENDED_SECURITY; 782 - pSMB->req.Capabilities |= cpu_to_le32(capabilities); 783 - switch(phase) { 784 - case NtLmNegotiate: 785 - build_ntlmssp_negotiate_blob( 786 - pSMB->req.SecurityBlob, ses); 787 - iov[1].iov_len = sizeof(NEGOTIATE_MESSAGE); 788 - iov[1].iov_base = pSMB->req.SecurityBlob; 789 - pSMB->req.SecurityBlobLength = 790 - cpu_to_le16(sizeof(NEGOTIATE_MESSAGE)); 791 - break; 792 - case NtLmAuthenticate: 793 - /* 794 - * 5 is an empirical value, large enough to hold 795 - * authenticate message plus max 10 of av paris, 796 - * domain, user, workstation names, flags, etc. 
797 - */ 798 - ntlmsspblob = kzalloc( 799 - 5*sizeof(struct _AUTHENTICATE_MESSAGE), 800 - GFP_KERNEL); 801 - if (!ntlmsspblob) { 802 - rc = -ENOMEM; 803 - goto ssetup_exit; 804 - } 805 - 806 - rc = build_ntlmssp_auth_blob(ntlmsspblob, 807 - &blob_len, ses, nls_cp); 808 - if (rc) 809 - goto ssetup_exit; 810 - iov[1].iov_len = blob_len; 811 - iov[1].iov_base = ntlmsspblob; 812 - pSMB->req.SecurityBlobLength = cpu_to_le16(blob_len); 813 - /* 814 - * Make sure that we tell the server that we are using 815 - * the uid that it just gave us back on the response 816 - * (challenge) 817 - */ 818 - smb_buf->Uid = ses->Suid; 819 - break; 820 - default: 821 - cifs_dbg(VFS, "invalid phase %d\n", phase); 822 - rc = -ENOSYS; 823 - goto ssetup_exit; 824 - } 825 - /* unicode strings must be word aligned */ 826 - if ((iov[0].iov_len + iov[1].iov_len) % 2) { 827 - *bcc_ptr = 0; 828 - bcc_ptr++; 829 - } 830 - unicode_oslm_strings(&bcc_ptr, nls_cp); 831 - } else { 832 - cifs_dbg(VFS, "secType %d not supported!\n", type); 833 - rc = -ENOSYS; 834 - goto ssetup_exit; 566 + goto out_free_smb_buf; 835 567 } 836 568 837 - iov[2].iov_base = str_area; 838 - iov[2].iov_len = (long) bcc_ptr - (long) str_area; 569 + return 0; 839 570 840 - count = iov[1].iov_len + iov[2].iov_len; 571 + out_free_smb_buf: 572 + kfree(smb_buf); 573 + sess_data->iov[0].iov_base = NULL; 574 + sess_data->iov[0].iov_len = 0; 575 + sess_data->buf0_type = CIFS_NO_BUFFER; 576 + return rc; 577 + } 578 + 579 + static void 580 + sess_free_buffer(struct sess_data *sess_data) 581 + { 582 + 583 + free_rsp_buf(sess_data->buf0_type, sess_data->iov[0].iov_base); 584 + sess_data->buf0_type = CIFS_NO_BUFFER; 585 + kfree(sess_data->iov[2].iov_base); 586 + } 587 + 588 + static int 589 + sess_establish_session(struct sess_data *sess_data) 590 + { 591 + struct cifs_ses *ses = sess_data->ses; 592 + 593 + mutex_lock(&ses->server->srv_mutex); 594 + if (!ses->server->session_estab) { 595 + if (ses->server->sign) { 596 + 
ses->server->session_key.response = 597 + kmemdup(ses->auth_key.response, 598 + ses->auth_key.len, GFP_KERNEL); 599 + if (!ses->server->session_key.response) { 600 + mutex_unlock(&ses->server->srv_mutex); 601 + return -ENOMEM; 602 + } 603 + ses->server->session_key.len = 604 + ses->auth_key.len; 605 + } 606 + ses->server->sequence_number = 0x2; 607 + ses->server->session_estab = true; 608 + } 609 + mutex_unlock(&ses->server->srv_mutex); 610 + 611 + cifs_dbg(FYI, "CIFS session established successfully\n"); 612 + spin_lock(&GlobalMid_Lock); 613 + ses->status = CifsGood; 614 + ses->need_reconnect = false; 615 + spin_unlock(&GlobalMid_Lock); 616 + 617 + return 0; 618 + } 619 + 620 + static int 621 + sess_sendreceive(struct sess_data *sess_data) 622 + { 623 + int rc; 624 + struct smb_hdr *smb_buf = (struct smb_hdr *) sess_data->iov[0].iov_base; 625 + __u16 count; 626 + 627 + count = sess_data->iov[1].iov_len + sess_data->iov[2].iov_len; 841 628 smb_buf->smb_buf_length = 842 629 cpu_to_be32(be32_to_cpu(smb_buf->smb_buf_length) + count); 843 - 844 630 put_bcc(count, smb_buf); 845 631 846 - rc = SendReceive2(xid, ses, iov, 3 /* num_iovecs */, &resp_buf_type, 632 + rc = SendReceive2(sess_data->xid, sess_data->ses, 633 + sess_data->iov, 3 /* num_iovecs */, 634 + &sess_data->buf0_type, 847 635 CIFS_LOG_ERROR); 848 - /* SMB request buf freed in SendReceive2 */ 849 636 850 - pSMB = (SESSION_SETUP_ANDX *)iov[0].iov_base; 851 - smb_buf = (struct smb_hdr *)iov[0].iov_base; 637 + return rc; 638 + } 852 639 853 - if ((type == RawNTLMSSP) && (resp_buf_type != CIFS_NO_BUFFER) && 854 - (smb_buf->Status.CifsError == 855 - cpu_to_le32(NT_STATUS_MORE_PROCESSING_REQUIRED))) { 856 - if (phase != NtLmNegotiate) { 857 - cifs_dbg(VFS, "Unexpected more processing error\n"); 858 - goto ssetup_exit; 859 - } 860 - /* NTLMSSP Negotiate sent now processing challenge (response) */ 861 - phase = NtLmChallenge; /* process ntlmssp challenge */ 862 - rc = 0; /* MORE_PROC rc is not an error here, but 
expected */ 863 - } 640 + /* 641 + * LANMAN and plaintext are less secure and off by default. 642 + * So we make this explicitly be turned on in kconfig (in the 643 + * build) and turned on at runtime (changed from the default) 644 + * in proc/fs/cifs or via mount parm. Unfortunately this is 645 + * needed for old Win (e.g. Win95), some obscure NAS and OS/2 646 + */ 647 + #ifdef CONFIG_CIFS_WEAK_PW_HASH 648 + static void 649 + sess_auth_lanman(struct sess_data *sess_data) 650 + { 651 + int rc = 0; 652 + struct smb_hdr *smb_buf; 653 + SESSION_SETUP_ANDX *pSMB; 654 + char *bcc_ptr; 655 + struct cifs_ses *ses = sess_data->ses; 656 + char lnm_session_key[CIFS_AUTH_RESP_SIZE]; 657 + __u32 capabilities; 658 + __u16 bytes_remaining; 659 + 660 + /* lanman 2 style sessionsetup */ 661 + /* wct = 10 */ 662 + rc = sess_alloc_buffer(sess_data, 10); 864 663 if (rc) 865 - goto ssetup_exit; 664 + goto out; 866 665 867 - if ((smb_buf->WordCount != 3) && (smb_buf->WordCount != 4)) { 666 + pSMB = (SESSION_SETUP_ANDX *)sess_data->iov[0].iov_base; 667 + bcc_ptr = sess_data->iov[2].iov_base; 668 + capabilities = cifs_ssetup_hdr(ses, pSMB); 669 + 670 + pSMB->req.hdr.Flags2 &= ~SMBFLG2_UNICODE; 671 + 672 + /* no capabilities flags in old lanman negotiation */ 673 + pSMB->old_req.PasswordLength = cpu_to_le16(CIFS_AUTH_RESP_SIZE); 674 + 675 + /* Calculate hash with password and copy into bcc_ptr. 676 + * Encryption Key (stored as in cryptkey) gets used if the 677 + * security mode bit in Negotiate Protocol response states 678 + * to use challenge/response method (i.e. Password bit is 1). 679 + */ 680 + rc = calc_lanman_hash(ses->password, ses->server->cryptkey, 681 + ses->server->sec_mode & SECMODE_PW_ENCRYPT ? 682 + true : false, lnm_session_key); 683 + 684 + memcpy(bcc_ptr, (char *)lnm_session_key, CIFS_AUTH_RESP_SIZE); 685 + bcc_ptr += CIFS_AUTH_RESP_SIZE; 686 + 687 + /* 688 + * cannot sign if LANMAN negotiated so no need 689 + * to calculate signing key?
but what if server 690 + * changed to do higher than lanman dialect and 691 + * we reconnected would we ever calc signing_key? 692 + */ 693 + 694 + cifs_dbg(FYI, "Negotiating LANMAN setting up strings\n"); 695 + /* Unicode not allowed for LANMAN dialects */ 696 + ascii_ssetup_strings(&bcc_ptr, ses, sess_data->nls_cp); 697 + 698 + sess_data->iov[2].iov_len = (long) bcc_ptr - 699 + (long) sess_data->iov[2].iov_base; 700 + 701 + rc = sess_sendreceive(sess_data); 702 + if (rc) 703 + goto out; 704 + 705 + pSMB = (SESSION_SETUP_ANDX *)sess_data->iov[0].iov_base; 706 + smb_buf = (struct smb_hdr *)sess_data->iov[0].iov_base; 707 + 708 + /* lanman response has a word count of 3 */ 709 + if (smb_buf->WordCount != 3) { 868 710 rc = -EIO; 869 711 cifs_dbg(VFS, "bad word count %d\n", smb_buf->WordCount); 870 - goto ssetup_exit; 712 + goto out; 871 713 } 872 - action = le16_to_cpu(pSMB->resp.Action); 873 - if (action & GUEST_LOGIN) 714 + 715 + if (le16_to_cpu(pSMB->resp.Action) & GUEST_LOGIN) 874 716 cifs_dbg(FYI, "Guest login\n"); /* BB mark SesInfo struct? 
*/ 717 + 875 718 ses->Suid = smb_buf->Uid; /* UID left in wire format (le) */ 876 719 cifs_dbg(FYI, "UID = %llu\n", ses->Suid); 877 - /* response can have either 3 or 4 word count - Samba sends 3 */ 878 - /* and lanman response is 3 */ 720 + 879 721 bytes_remaining = get_bcc(smb_buf); 880 722 bcc_ptr = pByteArea(smb_buf); 881 - 882 - if (smb_buf->WordCount == 4) { 883 - blob_len = le16_to_cpu(pSMB->resp.SecurityBlobLength); 884 - if (blob_len > bytes_remaining) { 885 - cifs_dbg(VFS, "bad security blob length %d\n", 886 - blob_len); 887 - rc = -EINVAL; 888 - goto ssetup_exit; 889 - } 890 - if (phase == NtLmChallenge) { 891 - rc = decode_ntlmssp_challenge(bcc_ptr, blob_len, ses); 892 - /* now goto beginning for ntlmssp authenticate phase */ 893 - if (rc) 894 - goto ssetup_exit; 895 - } 896 - bcc_ptr += blob_len; 897 - bytes_remaining -= blob_len; 898 - } 899 723 900 724 /* BB check if Unicode and decode strings */ 901 725 if (bytes_remaining == 0) { ··· 730 906 ++bcc_ptr; 731 907 --bytes_remaining; 732 908 } 733 - decode_unicode_ssetup(&bcc_ptr, bytes_remaining, ses, nls_cp); 909 + decode_unicode_ssetup(&bcc_ptr, bytes_remaining, ses, 910 + sess_data->nls_cp); 734 911 } else { 735 - decode_ascii_ssetup(&bcc_ptr, bytes_remaining, ses, nls_cp); 912 + decode_ascii_ssetup(&bcc_ptr, bytes_remaining, ses, 913 + sess_data->nls_cp); 736 914 } 737 915 738 - ssetup_exit: 739 - if (spnego_key) { 740 - key_invalidate(spnego_key); 741 - key_put(spnego_key); 742 - } 743 - kfree(str_area); 744 - kfree(ntlmsspblob); 745 - ntlmsspblob = NULL; 746 - if (resp_buf_type == CIFS_SMALL_BUFFER) { 747 - cifs_dbg(FYI, "ssetup freeing small buf %p\n", iov[0].iov_base); 748 - cifs_small_buf_release(iov[0].iov_base); 749 - } else if (resp_buf_type == CIFS_LARGE_BUFFER) 750 - cifs_buf_release(iov[0].iov_base); 916 + rc = sess_establish_session(sess_data); 917 + out: 918 + sess_data->result = rc; 919 + sess_data->func = NULL; 920 + sess_free_buffer(sess_data); 921 + } 751 922 752 - /* if ntlmssp, 
and negotiate succeeded, proceed to authenticate phase */ 753 - if ((phase == NtLmChallenge) && (rc == 0)) 754 - goto ssetup_ntlmssp_authenticate; 923 + #else 924 + 925 + static void 926 + sess_auth_lanman(struct sess_data *sess_data) 927 + { 928 + sess_data->result = -EOPNOTSUPP; 929 + sess_data->func = NULL; 930 + } 931 + #endif 932 + 933 + static void 934 + sess_auth_ntlm(struct sess_data *sess_data) 935 + { 936 + int rc = 0; 937 + struct smb_hdr *smb_buf; 938 + SESSION_SETUP_ANDX *pSMB; 939 + char *bcc_ptr; 940 + struct cifs_ses *ses = sess_data->ses; 941 + __u32 capabilities; 942 + __u16 bytes_remaining; 943 + 944 + /* old style NTLM sessionsetup */ 945 + /* wct = 13 */ 946 + rc = sess_alloc_buffer(sess_data, 13); 947 + if (rc) 948 + goto out; 949 + 950 + pSMB = (SESSION_SETUP_ANDX *)sess_data->iov[0].iov_base; 951 + bcc_ptr = sess_data->iov[2].iov_base; 952 + capabilities = cifs_ssetup_hdr(ses, pSMB); 953 + 954 + pSMB->req_no_secext.Capabilities = cpu_to_le32(capabilities); 955 + pSMB->req_no_secext.CaseInsensitivePasswordLength = 956 + cpu_to_le16(CIFS_AUTH_RESP_SIZE); 957 + pSMB->req_no_secext.CaseSensitivePasswordLength = 958 + cpu_to_le16(CIFS_AUTH_RESP_SIZE); 959 + 960 + /* calculate ntlm response and session key */ 961 + rc = setup_ntlm_response(ses, sess_data->nls_cp); 962 + if (rc) { 963 + cifs_dbg(VFS, "Error %d during NTLM authentication\n", 964 + rc); 965 + goto out; 966 + } 967 + 968 + /* copy ntlm response */ 969 + memcpy(bcc_ptr, ses->auth_key.response + CIFS_SESS_KEY_SIZE, 970 + CIFS_AUTH_RESP_SIZE); 971 + bcc_ptr += CIFS_AUTH_RESP_SIZE; 972 + memcpy(bcc_ptr, ses->auth_key.response + CIFS_SESS_KEY_SIZE, 973 + CIFS_AUTH_RESP_SIZE); 974 + bcc_ptr += CIFS_AUTH_RESP_SIZE; 975 + 976 + if (ses->capabilities & CAP_UNICODE) { 977 + /* unicode strings must be word aligned */ 978 + if (sess_data->iov[0].iov_len % 2) { 979 + *bcc_ptr = 0; 980 + bcc_ptr++; 981 + } 982 + unicode_ssetup_strings(&bcc_ptr, ses, sess_data->nls_cp); 983 + } else { 984 + 
ascii_ssetup_strings(&bcc_ptr, ses, sess_data->nls_cp); 985 + } 986 + 987 + 988 + sess_data->iov[2].iov_len = (long) bcc_ptr - 989 + (long) sess_data->iov[2].iov_base; 990 + 991 + rc = sess_sendreceive(sess_data); 992 + if (rc) 993 + goto out; 994 + 995 + pSMB = (SESSION_SETUP_ANDX *)sess_data->iov[0].iov_base; 996 + smb_buf = (struct smb_hdr *)sess_data->iov[0].iov_base; 997 + 998 + if (smb_buf->WordCount != 3) { 999 + rc = -EIO; 1000 + cifs_dbg(VFS, "bad word count %d\n", smb_buf->WordCount); 1001 + goto out; 1002 + } 1003 + 1004 + if (le16_to_cpu(pSMB->resp.Action) & GUEST_LOGIN) 1005 + cifs_dbg(FYI, "Guest login\n"); /* BB mark SesInfo struct? */ 1006 + 1007 + ses->Suid = smb_buf->Uid; /* UID left in wire format (le) */ 1008 + cifs_dbg(FYI, "UID = %llu\n", ses->Suid); 1009 + 1010 + bytes_remaining = get_bcc(smb_buf); 1011 + bcc_ptr = pByteArea(smb_buf); 1012 + 1013 + /* BB check if Unicode and decode strings */ 1014 + if (bytes_remaining == 0) { 1015 + /* no string area to decode, do nothing */ 1016 + } else if (smb_buf->Flags2 & SMBFLG2_UNICODE) { 1017 + /* unicode string area must be word-aligned */ 1018 + if (((unsigned long) bcc_ptr - (unsigned long) smb_buf) % 2) { 1019 + ++bcc_ptr; 1020 + --bytes_remaining; 1021 + } 1022 + decode_unicode_ssetup(&bcc_ptr, bytes_remaining, ses, 1023 + sess_data->nls_cp); 1024 + } else { 1025 + decode_ascii_ssetup(&bcc_ptr, bytes_remaining, ses, 1026 + sess_data->nls_cp); 1027 + } 1028 + 1029 + rc = sess_establish_session(sess_data); 1030 + out: 1031 + sess_data->result = rc; 1032 + sess_data->func = NULL; 1033 + sess_free_buffer(sess_data); 1034 + kfree(ses->auth_key.response); 1035 + ses->auth_key.response = NULL; 1036 + } 1037 + 1038 + static void 1039 + sess_auth_ntlmv2(struct sess_data *sess_data) 1040 + { 1041 + int rc = 0; 1042 + struct smb_hdr *smb_buf; 1043 + SESSION_SETUP_ANDX *pSMB; 1044 + char *bcc_ptr; 1045 + struct cifs_ses *ses = sess_data->ses; 1046 + __u32 capabilities; 1047 + __u16 bytes_remaining; 1048 + 
1049 + /* old style NTLMv2 sessionsetup */ 1050 + /* wct = 13 */ 1051 + rc = sess_alloc_buffer(sess_data, 13); 1052 + if (rc) 1053 + goto out; 1054 + 1055 + pSMB = (SESSION_SETUP_ANDX *)sess_data->iov[0].iov_base; 1056 + bcc_ptr = sess_data->iov[2].iov_base; 1057 + capabilities = cifs_ssetup_hdr(ses, pSMB); 1058 + 1059 + pSMB->req_no_secext.Capabilities = cpu_to_le32(capabilities); 1060 + 1061 + /* LM2 password would be here if we supported it */ 1062 + pSMB->req_no_secext.CaseInsensitivePasswordLength = 0; 1063 + 1064 + /* calculate ntlmv2 response and session key */ 1065 + rc = setup_ntlmv2_rsp(ses, sess_data->nls_cp); 1066 + if (rc) { 1067 + cifs_dbg(VFS, "Error %d during NTLMv2 authentication\n", rc); 1068 + goto out; 1069 + } 1070 + 1071 + memcpy(bcc_ptr, ses->auth_key.response + CIFS_SESS_KEY_SIZE, 1072 + ses->auth_key.len - CIFS_SESS_KEY_SIZE); 1073 + bcc_ptr += ses->auth_key.len - CIFS_SESS_KEY_SIZE; 1074 + 1075 + /* set case sensitive password length after tilen may get 1076 + * assigned, tilen is 0 otherwise.
1077 + */ 1078 + pSMB->req_no_secext.CaseSensitivePasswordLength = 1079 + cpu_to_le16(ses->auth_key.len - CIFS_SESS_KEY_SIZE); 1080 + 1081 + if (ses->capabilities & CAP_UNICODE) { 1082 + if (sess_data->iov[0].iov_len % 2) { 1083 + *bcc_ptr = 0; 1084 + bcc_ptr++; 1085 + } 1086 + unicode_ssetup_strings(&bcc_ptr, ses, sess_data->nls_cp); 1087 + } else { 1088 + ascii_ssetup_strings(&bcc_ptr, ses, sess_data->nls_cp); 1089 + } 1090 + 1091 + 1092 + sess_data->iov[2].iov_len = (long) bcc_ptr - 1093 + (long) sess_data->iov[2].iov_base; 1094 + 1095 + rc = sess_sendreceive(sess_data); 1096 + if (rc) 1097 + goto out; 1098 + 1099 + pSMB = (SESSION_SETUP_ANDX *)sess_data->iov[0].iov_base; 1100 + smb_buf = (struct smb_hdr *)sess_data->iov[0].iov_base; 1101 + 1102 + if (smb_buf->WordCount != 3) { 1103 + rc = -EIO; 1104 + cifs_dbg(VFS, "bad word count %d\n", smb_buf->WordCount); 1105 + goto out; 1106 + } 1107 + 1108 + if (le16_to_cpu(pSMB->resp.Action) & GUEST_LOGIN) 1109 + cifs_dbg(FYI, "Guest login\n"); /* BB mark SesInfo struct? 
*/ 1110 + 1111 + ses->Suid = smb_buf->Uid; /* UID left in wire format (le) */ 1112 + cifs_dbg(FYI, "UID = %llu\n", ses->Suid); 1113 + 1114 + bytes_remaining = get_bcc(smb_buf); 1115 + bcc_ptr = pByteArea(smb_buf); 1116 + 1117 + /* BB check if Unicode and decode strings */ 1118 + if (bytes_remaining == 0) { 1119 + /* no string area to decode, do nothing */ 1120 + } else if (smb_buf->Flags2 & SMBFLG2_UNICODE) { 1121 + /* unicode string area must be word-aligned */ 1122 + if (((unsigned long) bcc_ptr - (unsigned long) smb_buf) % 2) { 1123 + ++bcc_ptr; 1124 + --bytes_remaining; 1125 + } 1126 + decode_unicode_ssetup(&bcc_ptr, bytes_remaining, ses, 1127 + sess_data->nls_cp); 1128 + } else { 1129 + decode_ascii_ssetup(&bcc_ptr, bytes_remaining, ses, 1130 + sess_data->nls_cp); 1131 + } 1132 + 1133 + rc = sess_establish_session(sess_data); 1134 + out: 1135 + sess_data->result = rc; 1136 + sess_data->func = NULL; 1137 + sess_free_buffer(sess_data); 1138 + kfree(ses->auth_key.response); 1139 + ses->auth_key.response = NULL; 1140 + } 1141 + 1142 + #ifdef CONFIG_CIFS_UPCALL 1143 + static void 1144 + sess_auth_kerberos(struct sess_data *sess_data) 1145 + { 1146 + int rc = 0; 1147 + struct smb_hdr *smb_buf; 1148 + SESSION_SETUP_ANDX *pSMB; 1149 + char *bcc_ptr; 1150 + struct cifs_ses *ses = sess_data->ses; 1151 + __u32 capabilities; 1152 + __u16 bytes_remaining; 1153 + struct key *spnego_key = NULL; 1154 + struct cifs_spnego_msg *msg; 1155 + u16 blob_len; 1156 + 1157 + /* extended security */ 1158 + /* wct = 12 */ 1159 + rc = sess_alloc_buffer(sess_data, 12); 1160 + if (rc) 1161 + goto out; 1162 + 1163 + pSMB = (SESSION_SETUP_ANDX *)sess_data->iov[0].iov_base; 1164 + bcc_ptr = sess_data->iov[2].iov_base; 1165 + capabilities = cifs_ssetup_hdr(ses, pSMB); 1166 + 1167 + spnego_key = cifs_get_spnego_key(ses); 1168 + if (IS_ERR(spnego_key)) { 1169 + rc = PTR_ERR(spnego_key); 1170 + spnego_key = NULL; 1171 + goto out; 1172 + } 1173 + 1174 + msg = spnego_key->payload.data; 1175 + /* 
1176 + * check version field to make sure that cifs.upcall is 1177 + * sending us a response in an expected form 1178 + */ 1179 + if (msg->version != CIFS_SPNEGO_UPCALL_VERSION) { 1180 + cifs_dbg(VFS, 1181 + "incorrect version of cifs.upcall (expected %d but got %d)", 1182 + CIFS_SPNEGO_UPCALL_VERSION, msg->version); 1183 + rc = -EKEYREJECTED; 1184 + goto out_put_spnego_key; 1185 + } 1186 + 1187 + ses->auth_key.response = kmemdup(msg->data, msg->sesskey_len, 1188 + GFP_KERNEL); 1189 + if (!ses->auth_key.response) { 1190 + cifs_dbg(VFS, "Kerberos can't allocate (%u bytes) memory", 1191 + msg->sesskey_len); 1192 + rc = -ENOMEM; 1193 + goto out_put_spnego_key; 1194 + } 1195 + ses->auth_key.len = msg->sesskey_len; 1196 + 1197 + pSMB->req.hdr.Flags2 |= SMBFLG2_EXT_SEC; 1198 + capabilities |= CAP_EXTENDED_SECURITY; 1199 + pSMB->req.Capabilities = cpu_to_le32(capabilities); 1200 + sess_data->iov[1].iov_base = msg->data + msg->sesskey_len; 1201 + sess_data->iov[1].iov_len = msg->secblob_len; 1202 + pSMB->req.SecurityBlobLength = cpu_to_le16(sess_data->iov[1].iov_len); 1203 + 1204 + if (ses->capabilities & CAP_UNICODE) { 1205 + /* unicode strings must be word aligned */ 1206 + if ((sess_data->iov[0].iov_len 1207 + + sess_data->iov[1].iov_len) % 2) { 1208 + *bcc_ptr = 0; 1209 + bcc_ptr++; 1210 + } 1211 + unicode_oslm_strings(&bcc_ptr, sess_data->nls_cp); 1212 + unicode_domain_string(&bcc_ptr, ses, sess_data->nls_cp); 1213 + } else { 1214 + /* BB: is this right? 
*/ 1215 + ascii_ssetup_strings(&bcc_ptr, ses, sess_data->nls_cp); 1216 + } 1217 + 1218 + sess_data->iov[2].iov_len = (long) bcc_ptr - 1219 + (long) sess_data->iov[2].iov_base; 1220 + 1221 + rc = sess_sendreceive(sess_data); 1222 + if (rc) 1223 + goto out_put_spnego_key; 1224 + 1225 + pSMB = (SESSION_SETUP_ANDX *)sess_data->iov[0].iov_base; 1226 + smb_buf = (struct smb_hdr *)sess_data->iov[0].iov_base; 1227 + 1228 + if (smb_buf->WordCount != 4) { 1229 + rc = -EIO; 1230 + cifs_dbg(VFS, "bad word count %d\n", smb_buf->WordCount); 1231 + goto out_put_spnego_key; 1232 + } 1233 + 1234 + if (le16_to_cpu(pSMB->resp.Action) & GUEST_LOGIN) 1235 + cifs_dbg(FYI, "Guest login\n"); /* BB mark SesInfo struct? */ 1236 + 1237 + ses->Suid = smb_buf->Uid; /* UID left in wire format (le) */ 1238 + cifs_dbg(FYI, "UID = %llu\n", ses->Suid); 1239 + 1240 + bytes_remaining = get_bcc(smb_buf); 1241 + bcc_ptr = pByteArea(smb_buf); 1242 + 1243 + blob_len = le16_to_cpu(pSMB->resp.SecurityBlobLength); 1244 + if (blob_len > bytes_remaining) { 1245 + cifs_dbg(VFS, "bad security blob length %d\n", 1246 + blob_len); 1247 + rc = -EINVAL; 1248 + goto out_put_spnego_key; 1249 + } 1250 + bcc_ptr += blob_len; 1251 + bytes_remaining -= blob_len; 1252 + 1253 + /* BB check if Unicode and decode strings */ 1254 + if (bytes_remaining == 0) { 1255 + /* no string area to decode, do nothing */ 1256 + } else if (smb_buf->Flags2 & SMBFLG2_UNICODE) { 1257 + /* unicode string area must be word-aligned */ 1258 + if (((unsigned long) bcc_ptr - (unsigned long) smb_buf) % 2) { 1259 + ++bcc_ptr; 1260 + --bytes_remaining; 1261 + } 1262 + decode_unicode_ssetup(&bcc_ptr, bytes_remaining, ses, 1263 + sess_data->nls_cp); 1264 + } else { 1265 + decode_ascii_ssetup(&bcc_ptr, bytes_remaining, ses, 1266 + sess_data->nls_cp); 1267 + } 1268 + 1269 + rc = sess_establish_session(sess_data); 1270 + out_put_spnego_key: 1271 + key_invalidate(spnego_key); 1272 + key_put(spnego_key); 1273 + out: 1274 + sess_data->result = rc; 1275 + 
sess_data->func = NULL; 1276 + sess_free_buffer(sess_data); 1277 + kfree(ses->auth_key.response); 1278 + ses->auth_key.response = NULL; 1279 + } 1280 + 1281 + #else 1282 + 1283 + static void 1284 + sess_auth_kerberos(struct sess_data *sess_data) 1285 + { 1286 + cifs_dbg(VFS, "Kerberos negotiated but upcall support disabled!\n"); 1287 + sess_data->result = -ENOSYS; 1288 + sess_data->func = NULL; 1289 + } 1290 + #endif /* ! CONFIG_CIFS_UPCALL */ 1291 + 1292 + /* 1293 + * The required kvec buffers have to be allocated before calling this 1294 + * function. 1295 + */ 1296 + static int 1297 + _sess_auth_rawntlmssp_assemble_req(struct sess_data *sess_data) 1298 + { 1299 + struct smb_hdr *smb_buf; 1300 + SESSION_SETUP_ANDX *pSMB; 1301 + struct cifs_ses *ses = sess_data->ses; 1302 + __u32 capabilities; 1303 + char *bcc_ptr; 1304 + 1305 + pSMB = (SESSION_SETUP_ANDX *)sess_data->iov[0].iov_base; 1306 + smb_buf = (struct smb_hdr *)pSMB; 1307 + 1308 + capabilities = cifs_ssetup_hdr(ses, pSMB); 1309 + if ((pSMB->req.hdr.Flags2 & SMBFLG2_UNICODE) == 0) { 1310 + cifs_dbg(VFS, "NTLMSSP requires Unicode support\n"); 1311 + return -ENOSYS; 1312 + } 1313 + 1314 + pSMB->req.hdr.Flags2 |= SMBFLG2_EXT_SEC; 1315 + capabilities |= CAP_EXTENDED_SECURITY; 1316 + pSMB->req.Capabilities |= cpu_to_le32(capabilities); 1317 + 1318 + bcc_ptr = sess_data->iov[2].iov_base; 1319 + /* unicode strings must be word aligned */ 1320 + if ((sess_data->iov[0].iov_len + sess_data->iov[1].iov_len) % 2) { 1321 + *bcc_ptr = 0; 1322 + bcc_ptr++; 1323 + } 1324 + unicode_oslm_strings(&bcc_ptr, sess_data->nls_cp); 1325 + 1326 + sess_data->iov[2].iov_len = (long) bcc_ptr - 1327 + (long) sess_data->iov[2].iov_base; 1328 + 1329 + return 0; 1330 + } 1331 + 1332 + static void 1333 + sess_auth_rawntlmssp_authenticate(struct sess_data *sess_data); 1334 + 1335 + static void 1336 + sess_auth_rawntlmssp_negotiate(struct sess_data *sess_data) 1337 + { 1338 + int rc; 1339 + struct smb_hdr *smb_buf; 1340 + SESSION_SETUP_ANDX 
*pSMB; 1341 + struct cifs_ses *ses = sess_data->ses; 1342 + __u16 bytes_remaining; 1343 + char *bcc_ptr; 1344 + u16 blob_len; 1345 + 1346 + cifs_dbg(FYI, "rawntlmssp session setup negotiate phase\n"); 1347 + 1348 + /* 1349 + * if memory allocation is successful, caller of this function 1350 + * frees it. 1351 + */ 1352 + ses->ntlmssp = kmalloc(sizeof(struct ntlmssp_auth), GFP_KERNEL); 1353 + if (!ses->ntlmssp) { 1354 + rc = -ENOMEM; 1355 + goto out; 1356 + } 1357 + ses->ntlmssp->sesskey_per_smbsess = false; 1358 + 1359 + /* wct = 12 */ 1360 + rc = sess_alloc_buffer(sess_data, 12); 1361 + if (rc) 1362 + goto out; 1363 + 1364 + pSMB = (SESSION_SETUP_ANDX *)sess_data->iov[0].iov_base; 1365 + 1366 + /* Build security blob before we assemble the request */ 1367 + build_ntlmssp_negotiate_blob(pSMB->req.SecurityBlob, ses); 1368 + sess_data->iov[1].iov_len = sizeof(NEGOTIATE_MESSAGE); 1369 + sess_data->iov[1].iov_base = pSMB->req.SecurityBlob; 1370 + pSMB->req.SecurityBlobLength = cpu_to_le16(sizeof(NEGOTIATE_MESSAGE)); 1371 + 1372 + rc = _sess_auth_rawntlmssp_assemble_req(sess_data); 1373 + if (rc) 1374 + goto out; 1375 + 1376 + rc = sess_sendreceive(sess_data); 1377 + 1378 + pSMB = (SESSION_SETUP_ANDX *)sess_data->iov[0].iov_base; 1379 + smb_buf = (struct smb_hdr *)sess_data->iov[0].iov_base; 1380 + 1381 + /* If true, rc here is expected and not an error */ 1382 + if (sess_data->buf0_type != CIFS_NO_BUFFER && 1383 + smb_buf->Status.CifsError == 1384 + cpu_to_le32(NT_STATUS_MORE_PROCESSING_REQUIRED)) 1385 + rc = 0; 1386 + 1387 + if (rc) 1388 + goto out; 1389 + 1390 + cifs_dbg(FYI, "rawntlmssp session setup challenge phase\n"); 1391 + 1392 + if (smb_buf->WordCount != 4) { 1393 + rc = -EIO; 1394 + cifs_dbg(VFS, "bad word count %d\n", smb_buf->WordCount); 1395 + goto out; 1396 + } 1397 + 1398 + ses->Suid = smb_buf->Uid; /* UID left in wire format (le) */ 1399 + cifs_dbg(FYI, "UID = %llu\n", ses->Suid); 1400 + 1401 + bytes_remaining = get_bcc(smb_buf); 1402 + bcc_ptr = 
pByteArea(smb_buf); 1403 + 1404 + blob_len = le16_to_cpu(pSMB->resp.SecurityBlobLength); 1405 + if (blob_len > bytes_remaining) { 1406 + cifs_dbg(VFS, "bad security blob length %d\n", 1407 + blob_len); 1408 + rc = -EINVAL; 1409 + goto out; 1410 + } 1411 + 1412 + rc = decode_ntlmssp_challenge(bcc_ptr, blob_len, ses); 1413 + out: 1414 + sess_free_buffer(sess_data); 755 1415 756 1416 if (!rc) { 757 - mutex_lock(&ses->server->srv_mutex); 758 - if (!ses->server->session_estab) { 759 - if (ses->server->sign) { 760 - ses->server->session_key.response = 761 - kmemdup(ses->auth_key.response, 762 - ses->auth_key.len, GFP_KERNEL); 763 - if (!ses->server->session_key.response) { 764 - rc = -ENOMEM; 765 - mutex_unlock(&ses->server->srv_mutex); 766 - goto keycp_exit; 767 - } 768 - ses->server->session_key.len = 769 - ses->auth_key.len; 770 - } 771 - ses->server->sequence_number = 0x2; 772 - ses->server->session_estab = true; 773 - } 774 - mutex_unlock(&ses->server->srv_mutex); 775 - 776 - cifs_dbg(FYI, "CIFS session established successfully\n"); 777 - spin_lock(&GlobalMid_Lock); 778 - ses->status = CifsGood; 779 - ses->need_reconnect = false; 780 - spin_unlock(&GlobalMid_Lock); 1417 + sess_data->func = sess_auth_rawntlmssp_authenticate; 1418 + return; 781 1419 } 782 1420 783 - keycp_exit: 1421 + /* Else error. 
Cleanup */ 784 1422 kfree(ses->auth_key.response); 785 1423 ses->auth_key.response = NULL; 786 1424 kfree(ses->ntlmssp); 1425 + ses->ntlmssp = NULL; 787 1426 1427 + sess_data->func = NULL; 1428 + sess_data->result = rc; 1429 + } 1430 + 1431 + static void 1432 + sess_auth_rawntlmssp_authenticate(struct sess_data *sess_data) 1433 + { 1434 + int rc; 1435 + struct smb_hdr *smb_buf; 1436 + SESSION_SETUP_ANDX *pSMB; 1437 + struct cifs_ses *ses = sess_data->ses; 1438 + __u16 bytes_remaining; 1439 + char *bcc_ptr; 1440 + char *ntlmsspblob = NULL; 1441 + u16 blob_len; 1442 + 1443 + cifs_dbg(FYI, "rawntlmssp session setup authenticate phase\n"); 1444 + 1445 + /* wct = 12 */ 1446 + rc = sess_alloc_buffer(sess_data, 12); 1447 + if (rc) 1448 + goto out; 1449 + 1450 + /* Build security blob before we assemble the request */ 1451 + pSMB = (SESSION_SETUP_ANDX *)sess_data->iov[0].iov_base; 1452 + smb_buf = (struct smb_hdr *)pSMB; 1453 + /* 1454 + * 5 is an empirical value, large enough to hold 1455 + * authenticate message plus max 10 of av pairs, 1456 + * domain, user, workstation names, flags, etc.
1457 + */ 1458 + ntlmsspblob = kzalloc(5*sizeof(struct _AUTHENTICATE_MESSAGE), 1459 + GFP_KERNEL); 1460 + if (!ntlmsspblob) { 1461 + rc = -ENOMEM; 1462 + goto out; 1463 + } 1464 + 1465 + rc = build_ntlmssp_auth_blob(ntlmsspblob, 1466 + &blob_len, ses, sess_data->nls_cp); 1467 + if (rc) 1468 + goto out_free_ntlmsspblob; 1469 + sess_data->iov[1].iov_len = blob_len; 1470 + sess_data->iov[1].iov_base = ntlmsspblob; 1471 + pSMB->req.SecurityBlobLength = cpu_to_le16(blob_len); 1472 + /* 1473 + * Make sure that we tell the server that we are using 1474 + * the uid that it just gave us back on the response 1475 + * (challenge) 1476 + */ 1477 + smb_buf->Uid = ses->Suid; 1478 + 1479 + rc = _sess_auth_rawntlmssp_assemble_req(sess_data); 1480 + if (rc) 1481 + goto out_free_ntlmsspblob; 1482 + 1483 + rc = sess_sendreceive(sess_data); 1484 + if (rc) 1485 + goto out_free_ntlmsspblob; 1486 + 1487 + pSMB = (SESSION_SETUP_ANDX *)sess_data->iov[0].iov_base; 1488 + smb_buf = (struct smb_hdr *)sess_data->iov[0].iov_base; 1489 + if (smb_buf->WordCount != 4) { 1490 + rc = -EIO; 1491 + cifs_dbg(VFS, "bad word count %d\n", smb_buf->WordCount); 1492 + goto out_free_ntlmsspblob; 1493 + } 1494 + 1495 + if (le16_to_cpu(pSMB->resp.Action) & GUEST_LOGIN) 1496 + cifs_dbg(FYI, "Guest login\n"); /* BB mark SesInfo struct? 
*/ 1497 + 1498 + bytes_remaining = get_bcc(smb_buf); 1499 + bcc_ptr = pByteArea(smb_buf); 1500 + blob_len = le16_to_cpu(pSMB->resp.SecurityBlobLength); 1501 + if (blob_len > bytes_remaining) { 1502 + cifs_dbg(VFS, "bad security blob length %d\n", 1503 + blob_len); 1504 + rc = -EINVAL; 1505 + goto out_free_ntlmsspblob; 1506 + } 1507 + bcc_ptr += blob_len; 1508 + bytes_remaining -= blob_len; 1509 + 1510 + 1511 + /* BB check if Unicode and decode strings */ 1512 + if (bytes_remaining == 0) { 1513 + /* no string area to decode, do nothing */ 1514 + } else if (smb_buf->Flags2 & SMBFLG2_UNICODE) { 1515 + /* unicode string area must be word-aligned */ 1516 + if (((unsigned long) bcc_ptr - (unsigned long) smb_buf) % 2) { 1517 + ++bcc_ptr; 1518 + --bytes_remaining; 1519 + } 1520 + decode_unicode_ssetup(&bcc_ptr, bytes_remaining, ses, 1521 + sess_data->nls_cp); 1522 + } else { 1523 + decode_ascii_ssetup(&bcc_ptr, bytes_remaining, ses, 1524 + sess_data->nls_cp); 1525 + } 1526 + 1527 + out_free_ntlmsspblob: 1528 + kfree(ntlmsspblob); 1529 + out: 1530 + sess_free_buffer(sess_data); 1531 + 1532 + if (!rc) 1533 + rc = sess_establish_session(sess_data); 1534 + 1535 + /* Cleanup */ 1536 + kfree(ses->auth_key.response); 1537 + ses->auth_key.response = NULL; 1538 + kfree(ses->ntlmssp); 1539 + ses->ntlmssp = NULL; 1540 + 1541 + sess_data->func = NULL; 1542 + sess_data->result = rc; 1543 + } 1544 + 1545 + static int select_sec(struct cifs_ses *ses, struct sess_data *sess_data) 1546 + { 1547 + int type; 1548 + 1549 + type = select_sectype(ses->server, ses->sectype); 1550 + cifs_dbg(FYI, "sess setup type %d\n", type); 1551 + if (type == Unspecified) { 1552 + cifs_dbg(VFS, 1553 + "Unable to select appropriate authentication method!"); 1554 + return -EINVAL; 1555 + } 1556 + 1557 + switch (type) { 1558 + case LANMAN: 1559 + /* LANMAN and plaintext are less secure and off by default. 
1560 + * So we make this explicitly be turned on in kconfig (in the 1561 + * build) and turned on at runtime (changed from the default) 1562 + * in proc/fs/cifs or via mount parm. Unfortunately this is 1563 + * needed for old Win (e.g. Win95), some obscure NAS and OS/2 */ 1564 + #ifdef CONFIG_CIFS_WEAK_PW_HASH 1565 + sess_data->func = sess_auth_lanman; 1566 + break; 1567 + #else 1568 + return -EOPNOTSUPP; 1569 + #endif 1570 + case NTLM: 1571 + sess_data->func = sess_auth_ntlm; 1572 + break; 1573 + case NTLMv2: 1574 + sess_data->func = sess_auth_ntlmv2; 1575 + break; 1576 + case Kerberos: 1577 + #ifdef CONFIG_CIFS_UPCALL 1578 + sess_data->func = sess_auth_kerberos; 1579 + break; 1580 + #else 1581 + cifs_dbg(VFS, "Kerberos negotiated but upcall support disabled!\n"); 1582 + return -ENOSYS; 1583 + break; 1584 + #endif /* CONFIG_CIFS_UPCALL */ 1585 + case RawNTLMSSP: 1586 + sess_data->func = sess_auth_rawntlmssp_negotiate; 1587 + break; 1588 + default: 1589 + cifs_dbg(VFS, "secType %d not supported!\n", type); 1590 + return -ENOSYS; 1591 + } 1592 + 1593 + return 0; 1594 + } 1595 + 1596 + int CIFS_SessSetup(const unsigned int xid, struct cifs_ses *ses, 1597 + const struct nls_table *nls_cp) 1598 + { 1599 + int rc = 0; 1600 + struct sess_data *sess_data; 1601 + 1602 + if (ses == NULL) { 1603 + WARN(1, "%s: ses == NULL!", __func__); 1604 + return -EINVAL; 1605 + } 1606 + 1607 + sess_data = kzalloc(sizeof(struct sess_data), GFP_KERNEL); 1608 + if (!sess_data) 1609 + return -ENOMEM; 1610 + 1611 + rc = select_sec(ses, sess_data); 1612 + if (rc) 1613 + goto out; 1614 + 1615 + sess_data->xid = xid; 1616 + sess_data->ses = ses; 1617 + sess_data->buf0_type = CIFS_NO_BUFFER; 1618 + sess_data->nls_cp = (struct nls_table *) nls_cp; 1619 + 1620 + while (sess_data->func) 1621 + sess_data->func(sess_data); 1622 + 1623 + /* Store result before we free sess_data */ 1624 + rc = sess_data->result; 1625 + 1626 + out: 1627 + kfree(sess_data); 788 1628 return rc; 789 1629 }
+8
fs/cifs/smb1ops.c
··· 1009 1009 return oplock == OPLOCK_READ; 1010 1010 } 1011 1011 1012 + static unsigned int 1013 + cifs_wp_retry_size(struct inode *inode) 1014 + { 1015 + return CIFS_SB(inode->i_sb)->wsize; 1016 + } 1017 + 1012 1018 struct smb_version_operations smb1_operations = { 1013 1019 .send_cancel = send_nt_cancel, 1014 1020 .compare_fids = cifs_compare_fids, ··· 1025 1019 .set_credits = cifs_set_credits, 1026 1020 .get_credits_field = cifs_get_credits_field, 1027 1021 .get_credits = cifs_get_credits, 1022 + .wait_mtu_credits = cifs_wait_mtu_credits, 1028 1023 .get_next_mid = cifs_get_next_mid, 1029 1024 .read_data_offset = cifs_read_data_offset, 1030 1025 .read_data_length = cifs_read_data_length, ··· 1085 1078 .query_mf_symlink = cifs_query_mf_symlink, 1086 1079 .create_mf_symlink = cifs_create_mf_symlink, 1087 1080 .is_read_op = cifs_is_read_op, 1081 + .wp_retry_size = cifs_wp_retry_size, 1088 1082 #ifdef CONFIG_CIFS_XATTR 1089 1083 .query_all_EAs = CIFSSMBQAllEAs, 1090 1084 .set_EA = CIFSSMBSetEA,
+1 -1
fs/cifs/smb2inode.c
··· 91 91 case SMB2_OP_SET_EOF: 92 92 tmprc = SMB2_set_eof(xid, tcon, fid.persistent_fid, 93 93 fid.volatile_fid, current->tgid, 94 - (__le64 *)data); 94 + (__le64 *)data, false); 95 95 break; 96 96 case SMB2_OP_SET_INFO: 97 97 tmprc = SMB2_set_info(xid, tcon, fid.persistent_fid,
+1 -1
fs/cifs/smb2maperror.c
··· 605 605 {STATUS_MAPPED_FILE_SIZE_ZERO, -EIO, "STATUS_MAPPED_FILE_SIZE_ZERO"}, 606 606 {STATUS_TOO_MANY_OPENED_FILES, -EMFILE, "STATUS_TOO_MANY_OPENED_FILES"}, 607 607 {STATUS_CANCELLED, -EIO, "STATUS_CANCELLED"}, 608 - {STATUS_CANNOT_DELETE, -EIO, "STATUS_CANNOT_DELETE"}, 608 + {STATUS_CANNOT_DELETE, -EACCES, "STATUS_CANNOT_DELETE"}, 609 609 {STATUS_INVALID_COMPUTER_NAME, -EIO, "STATUS_INVALID_COMPUTER_NAME"}, 610 610 {STATUS_FILE_DELETED, -EIO, "STATUS_FILE_DELETED"}, 611 611 {STATUS_SPECIAL_ACCOUNT, -EIO, "STATUS_SPECIAL_ACCOUNT"},
+3 -3
fs/cifs/smb2misc.c
··· 437 437 continue; 438 438 439 439 cifs_dbg(FYI, "found in the open list\n"); 440 - cifs_dbg(FYI, "lease key match, lease break 0x%d\n", 440 + cifs_dbg(FYI, "lease key match, lease break 0x%x\n", 441 441 le32_to_cpu(rsp->NewLeaseState)); 442 442 443 443 server->ops->set_oplock_level(cinode, lease_state, 0, NULL); ··· 467 467 } 468 468 469 469 cifs_dbg(FYI, "found in the pending open list\n"); 470 - cifs_dbg(FYI, "lease key match, lease break 0x%d\n", 470 + cifs_dbg(FYI, "lease key match, lease break 0x%x\n", 471 471 le32_to_cpu(rsp->NewLeaseState)); 472 472 473 473 open->oplock = lease_state; ··· 546 546 return false; 547 547 } 548 548 549 - cifs_dbg(FYI, "oplock level 0x%d\n", rsp->OplockLevel); 549 + cifs_dbg(FYI, "oplock level 0x%x\n", rsp->OplockLevel); 550 550 551 551 /* look up tcon based on tid & uid */ 552 552 spin_lock(&cifs_tcp_ses_lock);
+68 -5
fs/cifs/smb2ops.c
··· 19 19 20 20 #include <linux/pagemap.h> 21 21 #include <linux/vfs.h> 22 + #include <linux/falloc.h> 22 23 #include "cifsglob.h" 23 24 #include "smb2pdu.h" 24 25 #include "smb2proto.h" ··· 113 112 return le16_to_cpu(((struct smb2_hdr *)mid->resp_buf)->CreditRequest); 114 113 } 115 114 115 + static int 116 + smb2_wait_mtu_credits(struct TCP_Server_Info *server, unsigned int size, 117 + unsigned int *num, unsigned int *credits) 118 + { 119 + int rc = 0; 120 + unsigned int scredits; 121 + 122 + spin_lock(&server->req_lock); 123 + while (1) { 124 + if (server->credits <= 0) { 125 + spin_unlock(&server->req_lock); 126 + cifs_num_waiters_inc(server); 127 + rc = wait_event_killable(server->request_q, 128 + has_credits(server, &server->credits)); 129 + cifs_num_waiters_dec(server); 130 + if (rc) 131 + return rc; 132 + spin_lock(&server->req_lock); 133 + } else { 134 + if (server->tcpStatus == CifsExiting) { 135 + spin_unlock(&server->req_lock); 136 + return -ENOENT; 137 + } 138 + 139 + scredits = server->credits; 140 + /* can deadlock with reopen */ 141 + if (scredits == 1) { 142 + *num = SMB2_MAX_BUFFER_SIZE; 143 + *credits = 0; 144 + break; 145 + } 146 + 147 + /* leave one credit for a possible reopen */ 148 + scredits--; 149 + *num = min_t(unsigned int, size, 150 + scredits * SMB2_MAX_BUFFER_SIZE); 151 + 152 + *credits = DIV_ROUND_UP(*num, SMB2_MAX_BUFFER_SIZE); 153 + server->credits -= *credits; 154 + server->in_flight++; 155 + break; 156 + } 157 + } 158 + spin_unlock(&server->req_lock); 159 + return rc; 160 + } 161 + 116 162 static __u64 117 163 smb2_get_next_mid(struct TCP_Server_Info *server) 118 164 { ··· 230 182 /* start with specified wsize, or default */ 231 183 wsize = volume_info->wsize ? 
volume_info->wsize : CIFS_DEFAULT_IOSIZE; 232 184 wsize = min_t(unsigned int, wsize, server->max_write); 233 - /* set it to the maximum buffer size value we can send with 1 credit */ 234 - wsize = min_t(unsigned int, wsize, SMB2_MAX_BUFFER_SIZE); 185 + 186 + if (!(server->capabilities & SMB2_GLOBAL_CAP_LARGE_MTU)) 187 + wsize = min_t(unsigned int, wsize, SMB2_MAX_BUFFER_SIZE); 235 188 236 189 return wsize; 237 190 } ··· 246 197 /* start with specified rsize, or default */ 247 198 rsize = volume_info->rsize ? volume_info->rsize : CIFS_DEFAULT_IOSIZE; 248 199 rsize = min_t(unsigned int, rsize, server->max_read); 249 - /* set it to the maximum buffer size value we can send with 1 credit */ 250 - rsize = min_t(unsigned int, rsize, SMB2_MAX_BUFFER_SIZE); 200 + 201 + if (!(server->capabilities & SMB2_GLOBAL_CAP_LARGE_MTU)) 202 + rsize = min_t(unsigned int, rsize, SMB2_MAX_BUFFER_SIZE); 251 203 252 204 return rsize; 253 205 } ··· 737 687 { 738 688 __le64 eof = cpu_to_le64(size); 739 689 return SMB2_set_eof(xid, tcon, cfile->fid.persistent_fid, 740 - cfile->fid.volatile_fid, cfile->pid, &eof); 690 + cfile->fid.volatile_fid, cfile->pid, &eof, false); 741 691 } 742 692 743 693 static int ··· 1154 1104 return le32_to_cpu(lc->lcontext.LeaseState); 1155 1105 } 1156 1106 1107 + static unsigned int 1108 + smb2_wp_retry_size(struct inode *inode) 1109 + { 1110 + return min_t(unsigned int, CIFS_SB(inode->i_sb)->wsize, 1111 + SMB2_MAX_BUFFER_SIZE); 1112 + } 1113 + 1157 1114 struct smb_version_operations smb20_operations = { 1158 1115 .compare_fids = smb2_compare_fids, 1159 1116 .setup_request = smb2_setup_request, ··· 1170 1113 .set_credits = smb2_set_credits, 1171 1114 .get_credits_field = smb2_get_credits_field, 1172 1115 .get_credits = smb2_get_credits, 1116 + .wait_mtu_credits = cifs_wait_mtu_credits, 1173 1117 .get_next_mid = smb2_get_next_mid, 1174 1118 .read_data_offset = smb2_read_data_offset, 1175 1119 .read_data_length = smb2_read_data_length, ··· 1235 1177 
.create_lease_buf = smb2_create_lease_buf, 1236 1178 .parse_lease_buf = smb2_parse_lease_buf, 1237 1179 .clone_range = smb2_clone_range, 1180 + .wp_retry_size = smb2_wp_retry_size, 1238 1181 }; 1239 1182 1240 1183 struct smb_version_operations smb21_operations = { ··· 1247 1188 .set_credits = smb2_set_credits, 1248 1189 .get_credits_field = smb2_get_credits_field, 1249 1190 .get_credits = smb2_get_credits, 1191 + .wait_mtu_credits = smb2_wait_mtu_credits, 1250 1192 .get_next_mid = smb2_get_next_mid, 1251 1193 .read_data_offset = smb2_read_data_offset, 1252 1194 .read_data_length = smb2_read_data_length, ··· 1312 1252 .create_lease_buf = smb2_create_lease_buf, 1313 1253 .parse_lease_buf = smb2_parse_lease_buf, 1314 1254 .clone_range = smb2_clone_range, 1255 + .wp_retry_size = smb2_wp_retry_size, 1315 1256 }; 1316 1257 1317 1258 struct smb_version_operations smb30_operations = { ··· 1324 1263 .set_credits = smb2_set_credits, 1325 1264 .get_credits_field = smb2_get_credits_field, 1326 1265 .get_credits = smb2_get_credits, 1266 + .wait_mtu_credits = smb2_wait_mtu_credits, 1327 1267 .get_next_mid = smb2_get_next_mid, 1328 1268 .read_data_offset = smb2_read_data_offset, 1329 1269 .read_data_length = smb2_read_data_length, ··· 1392 1330 .parse_lease_buf = smb3_parse_lease_buf, 1393 1331 .clone_range = smb2_clone_range, 1394 1332 .validate_negotiate = smb3_validate_negotiate, 1333 + .wp_retry_size = smb2_wp_retry_size, 1395 1334 }; 1396 1335 1397 1336 struct smb_version_values smb20_values = {
+67 -27
fs/cifs/smb2pdu.c
··· 108 108 if (!tcon) 109 109 goto out; 110 110 111 - /* BB FIXME when we do write > 64K add +1 for every 64K in req or rsp */ 112 111 /* GLOBAL_CAP_LARGE_MTU will only be set if dialect > SMB2.02 */ 113 112 /* See sections 2.2.4 and 3.2.4.1.5 of MS-SMB2 */ 114 113 if ((tcon->ses) && ··· 244 245 if (rc) 245 246 goto out; 246 247 atomic_inc(&tconInfoReconnectCount); 247 - /* 248 - * BB FIXME add code to check if wsize needs update due to negotiated 249 - * smb buffer size shrinking. 250 - */ 251 248 out: 252 249 /* 253 250 * Check if handle based operation so we know whether we can continue ··· 303 308 304 309 return rc; 305 310 } 306 - 307 - static void 308 - free_rsp_buf(int resp_buftype, void *rsp) 309 - { 310 - if (resp_buftype == CIFS_SMALL_BUFFER) 311 - cifs_small_buf_release(rsp); 312 - else if (resp_buftype == CIFS_LARGE_BUFFER) 313 - cifs_buf_release(rsp); 314 - } 315 - 316 311 317 312 /* 318 313 * ··· 1723 1738 rc); 1724 1739 } 1725 1740 /* FIXME: should this be counted toward the initiating task? */ 1726 - task_io_account_read(rdata->bytes); 1727 - cifs_stats_bytes_read(tcon, rdata->bytes); 1741 + task_io_account_read(rdata->got_bytes); 1742 + cifs_stats_bytes_read(tcon, rdata->got_bytes); 1728 1743 break; 1729 1744 case MID_REQUEST_SUBMITTED: 1730 1745 case MID_RETRY_NEEDED: 1731 1746 rdata->result = -EAGAIN; 1747 + if (server->sign && rdata->got_bytes) 1748 + /* reset bytes number since we can not check a sign */ 1749 + rdata->got_bytes = 0; 1750 + /* FIXME: should this be counted toward the initiating task? 
*/ 1751 + task_io_account_read(rdata->got_bytes); 1752 + cifs_stats_bytes_read(tcon, rdata->got_bytes); 1732 1753 break; 1733 1754 default: 1734 1755 if (rdata->result != -ENODATA) ··· 1753 1762 int 1754 1763 smb2_async_readv(struct cifs_readdata *rdata) 1755 1764 { 1756 - int rc; 1765 + int rc, flags = 0; 1757 1766 struct smb2_hdr *buf; 1758 1767 struct cifs_io_parms io_parms; 1759 1768 struct smb_rqst rqst = { .rq_iov = &rdata->iov, 1760 1769 .rq_nvec = 1 }; 1770 + struct TCP_Server_Info *server; 1761 1771 1762 1772 cifs_dbg(FYI, "%s: offset=%llu bytes=%u\n", 1763 1773 __func__, rdata->offset, rdata->bytes); ··· 1769 1777 io_parms.persistent_fid = rdata->cfile->fid.persistent_fid; 1770 1778 io_parms.volatile_fid = rdata->cfile->fid.volatile_fid; 1771 1779 io_parms.pid = rdata->pid; 1780 + 1781 + server = io_parms.tcon->ses->server; 1782 + 1772 1783 rc = smb2_new_read_req(&rdata->iov, &io_parms, 0, 0); 1773 - if (rc) 1784 + if (rc) { 1785 + if (rc == -EAGAIN && rdata->credits) { 1786 + /* credits was reset by reconnect */ 1787 + rdata->credits = 0; 1788 + /* reduce in_flight value since we won't send the req */ 1789 + spin_lock(&server->req_lock); 1790 + server->in_flight--; 1791 + spin_unlock(&server->req_lock); 1792 + } 1774 1793 return rc; 1794 + } 1775 1795 1776 1796 buf = (struct smb2_hdr *)rdata->iov.iov_base; 1777 1797 /* 4 for rfc1002 length field */ 1778 1798 rdata->iov.iov_len = get_rfc1002_length(rdata->iov.iov_base) + 4; 1779 1799 1800 + if (rdata->credits) { 1801 + buf->CreditCharge = cpu_to_le16(DIV_ROUND_UP(rdata->bytes, 1802 + SMB2_MAX_BUFFER_SIZE)); 1803 + spin_lock(&server->req_lock); 1804 + server->credits += rdata->credits - 1805 + le16_to_cpu(buf->CreditCharge); 1806 + spin_unlock(&server->req_lock); 1807 + wake_up(&server->request_q); 1808 + flags = CIFS_HAS_CREDITS; 1809 + } 1810 + 1780 1811 kref_get(&rdata->refcount); 1781 1812 rc = cifs_call_async(io_parms.tcon->ses->server, &rqst, 1782 1813 cifs_readv_receive, smb2_readv_callback, 1783 - 
rdata, 0); 1814 + rdata, flags); 1784 1815 if (rc) { 1785 1816 kref_put(&rdata->refcount, cifs_readdata_release); 1786 1817 cifs_stats_fail_inc(io_parms.tcon, SMB2_READ_HE); ··· 1921 1906 smb2_async_writev(struct cifs_writedata *wdata, 1922 1907 void (*release)(struct kref *kref)) 1923 1908 { 1924 - int rc = -EACCES; 1909 + int rc = -EACCES, flags = 0; 1925 1910 struct smb2_write_req *req = NULL; 1926 1911 struct cifs_tcon *tcon = tlink_tcon(wdata->cfile->tlink); 1912 + struct TCP_Server_Info *server = tcon->ses->server; 1927 1913 struct kvec iov; 1928 1914 struct smb_rqst rqst; 1929 1915 1930 1916 rc = small_smb2_init(SMB2_WRITE, tcon, (void **) &req); 1931 - if (rc) 1917 + if (rc) { 1918 + if (rc == -EAGAIN && wdata->credits) { 1919 + /* credits was reset by reconnect */ 1920 + wdata->credits = 0; 1921 + /* reduce in_flight value since we won't send the req */ 1922 + spin_lock(&server->req_lock); 1923 + server->in_flight--; 1924 + spin_unlock(&server->req_lock); 1925 + } 1932 1926 goto async_writev_out; 1927 + } 1933 1928 1934 1929 req->hdr.ProcessId = cpu_to_le32(wdata->cfile->pid); 1935 1930 ··· 1972 1947 1973 1948 inc_rfc1001_len(&req->hdr, wdata->bytes - 1 /* Buffer */); 1974 1949 1950 + if (wdata->credits) { 1951 + req->hdr.CreditCharge = cpu_to_le16(DIV_ROUND_UP(wdata->bytes, 1952 + SMB2_MAX_BUFFER_SIZE)); 1953 + spin_lock(&server->req_lock); 1954 + server->credits += wdata->credits - 1955 + le16_to_cpu(req->hdr.CreditCharge); 1956 + spin_unlock(&server->req_lock); 1957 + wake_up(&server->request_q); 1958 + flags = CIFS_HAS_CREDITS; 1959 + } 1960 + 1975 1961 kref_get(&wdata->refcount); 1976 - rc = cifs_call_async(tcon->ses->server, &rqst, NULL, 1977 - smb2_writev_callback, wdata, 0); 1962 + rc = cifs_call_async(server, &rqst, NULL, smb2_writev_callback, wdata, 1963 + flags); 1978 1964 1979 1965 if (rc) { 1980 1966 kref_put(&wdata->refcount, release); ··· 2361 2325 2362 2326 int 2363 2327 SMB2_set_eof(const unsigned int xid, struct cifs_tcon *tcon, u64 
persistent_fid, 2364 - u64 volatile_fid, u32 pid, __le64 *eof) 2328 + u64 volatile_fid, u32 pid, __le64 *eof, bool is_falloc) 2365 2329 { 2366 2330 struct smb2_file_eof_info info; 2367 2331 void *data; ··· 2372 2336 data = &info; 2373 2337 size = sizeof(struct smb2_file_eof_info); 2374 2338 2375 - return send_set_info(xid, tcon, persistent_fid, volatile_fid, pid, 2376 - FILE_END_OF_FILE_INFORMATION, 1, &data, &size); 2339 + if (is_falloc) 2340 + return send_set_info(xid, tcon, persistent_fid, volatile_fid, 2341 + pid, FILE_ALLOCATION_INFORMATION, 1, &data, &size); 2342 + else 2343 + return send_set_info(xid, tcon, persistent_fid, volatile_fid, 2344 + pid, FILE_END_OF_FILE_INFORMATION, 1, &data, &size); 2377 2345 } 2378 2346 2379 2347 int
+1 -1
fs/cifs/smb2proto.h
··· 139 139 __le16 *target_file); 140 140 extern int SMB2_set_eof(const unsigned int xid, struct cifs_tcon *tcon, 141 141 u64 persistent_fid, u64 volatile_fid, u32 pid, 142 - __le64 *eof); 142 + __le64 *eof, bool is_fallocate); 143 143 extern int SMB2_set_info(const unsigned int xid, struct cifs_tcon *tcon, 144 144 u64 persistent_fid, u64 volatile_fid, 145 145 FILE_BASIC_INFO *buf);
+5
fs/cifs/smb2transport.c
··· 466 466 static inline void 467 467 smb2_seq_num_into_buf(struct TCP_Server_Info *server, struct smb2_hdr *hdr) 468 468 { 469 + unsigned int i, num = le16_to_cpu(hdr->CreditCharge); 470 + 469 471 hdr->MessageId = get_next_mid64(server); 472 + /* skip message numbers according to CreditCharge field */ 473 + for (i = 1; i < num; i++) 474 + get_next_mid(server); 470 475 } 471 476 472 477 static struct mid_q_entry *
+18 -7
fs/cifs/transport.c
··· 448 448 return wait_for_free_credits(server, timeout, val); 449 449 } 450 450 451 + int 452 + cifs_wait_mtu_credits(struct TCP_Server_Info *server, unsigned int size, 453 + unsigned int *num, unsigned int *credits) 454 + { 455 + *num = size; 456 + *credits = 0; 457 + return 0; 458 + } 459 + 451 460 static int allocate_mid(struct cifs_ses *ses, struct smb_hdr *in_buf, 452 461 struct mid_q_entry **ppmidQ) 453 462 { ··· 540 531 { 541 532 int rc, timeout, optype; 542 533 struct mid_q_entry *mid; 534 + unsigned int credits = 0; 543 535 544 536 timeout = flags & CIFS_TIMEOUT_MASK; 545 537 optype = flags & CIFS_OP_MASK; 546 538 547 - rc = wait_for_free_request(server, timeout, optype); 548 - if (rc) 549 - return rc; 539 + if ((flags & CIFS_HAS_CREDITS) == 0) { 540 + rc = wait_for_free_request(server, timeout, optype); 541 + if (rc) 542 + return rc; 543 + credits = 1; 544 + } 550 545 551 546 mutex_lock(&server->srv_mutex); 552 547 mid = server->ops->setup_async_request(server, rqst); 553 548 if (IS_ERR(mid)) { 554 549 mutex_unlock(&server->srv_mutex); 555 - add_credits(server, 1, optype); 556 - wake_up(&server->request_q); 550 + add_credits_and_wake_if(server, credits, optype); 557 551 return PTR_ERR(mid); 558 552 } 559 553 ··· 584 572 return 0; 585 573 586 574 cifs_delete_mid(mid); 587 - add_credits(server, 1, optype); 588 - wake_up(&server->request_q); 575 + add_credits_and_wake_if(server, credits, optype); 589 576 return rc; 590 577 } 591 578