Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

SUNRPC: Use gssproxy upcall for server RPCGSS authentication.

The main advantage of this new upcall mechanism is that it can handle
big tickets as seen in Kerberos implementations where tickets carry
authorization data, such as the MS-PAC buffer with Active Directory or the
POSIX Authorization Data being discussed in the IETF krb-wg working group.

The gssproxy program is used to perform the accept_sec_context call on the
kernel's behalf. The code is also changed to pass the input buffer straight
to the upcall mechanism, to avoid allocating and copying many pages, as
tokens can be as big as 64KiB (and potentially bigger in the future).

Signed-off-by: Simo Sorce <simo@redhat.com>
[bfields: containerization, negotiation api]
Signed-off-by: J. Bruce Fields <bfields@redhat.com>

authored by Simo Sorce, committed by J. Bruce Fields
030d794b 1d658336

+436 -9
+2
Documentation/filesystems/nfs/00-INDEX
···
    - introduction to the caching mechanisms in the sunrpc layer.
  idmapper.txt
    - information for configuring request-keys to be used by idmapper
+ knfsd-rpcgss.txt
+   - Information on GSS authentication support in the NFS Server
+91
Documentation/filesystems/nfs/rpc-server-gss.txt
···
+ rpcsec_gss support for kernel RPC servers
+ =========================================
+ 
+ This document gives references to the standards and protocols used to
+ implement RPCGSS authentication in kernel RPC servers such as the NFS
+ server and the NFS client's NFSv4.0 callback server.  (But note that
+ NFSv4.1 and higher don't require the client to act as a server for the
+ purposes of authentication.)
+ 
+ RPCGSS is specified in a few IETF documents:
+  - RFC2203 v1: http://tools.ietf.org/rfc/rfc2203.txt
+  - RFC5403 v2: http://tools.ietf.org/rfc/rfc5403.txt
+ and there is a third version being proposed:
+  - http://tools.ietf.org/id/draft-williams-rpcsecgssv3.txt
+    (at draft 02 at the time of writing)
+ 
+ Background
+ ----------
+ 
+ The RPCGSS authentication method describes a way to perform GSSAPI
+ authentication for NFS.  Although GSSAPI is itself completely mechanism
+ agnostic, in many cases only the KRB5 mechanism is supported by NFS
+ implementations.
+ 
+ The Linux kernel, at the moment, supports only the KRB5 mechanism, and
+ depends on GSSAPI extensions that are KRB5 specific.
+ 
+ GSSAPI is a complex library, and implementing it completely in kernel is
+ unwarranted.  However GSSAPI operations are fundamentally separable into
+ two parts:
+ - initial context establishment
+ - integrity/privacy protection (signing and encrypting of individual
+   packets)
+ 
+ The former is more complex and policy-dependent, but less
+ performance-sensitive.  The latter is simpler and needs to be very fast.
+ 
+ Therefore, we perform per-packet integrity and privacy protection in the
+ kernel, but leave the initial context establishment to userspace.  We
+ need upcalls to request userspace to perform context establishment.
+ 
+ NFS Server Legacy Upcall Mechanism
+ ----------------------------------
+ 
+ The classic upcall mechanism uses a custom text-based upcall protocol
+ to talk to a custom daemon called rpc.svcgssd that is provided by the
+ nfs-utils package.
+ 
+ This upcall mechanism has two limitations:
+ 
+ A) It can handle tokens that are no bigger than 2KiB.
+ 
+ In some Kerberos deployments GSSAPI tokens can be quite big, up to and
+ beyond 64KiB in size, due to various authorization extensions attached
+ to the Kerberos tickets that need to be sent through the GSS layer in
+ order to perform context establishment.
+ 
+ B) It does not properly handle creds where the user is a member of more
+ than a few thousand groups (the current hard limit in the kernel is 65K
+ groups) due to limitations on the size of the buffer that can be sent
+ back to the kernel (4KiB).
+ 
+ NFS Server New RPC Upcall Mechanism
+ -----------------------------------
+ 
+ The newer upcall mechanism uses RPC over a unix socket to a userspace
+ daemon called gss-proxy.
+ 
+ The gss_proxy RPC protocol is currently documented here:
+ 
+   https://fedorahosted.org/gss-proxy/wiki/ProtocolDocumentation
+ 
+ This upcall mechanism uses the kernel rpc client and connects to the
+ gss-proxy userspace program over a regular unix socket.  The gss-proxy
+ protocol does not suffer from the size limitations of the legacy
+ protocol.
+ 
+ Negotiating Upcall Mechanisms
+ -----------------------------
+ 
+ To provide backward compatibility, the kernel defaults to using the
+ legacy mechanism.  To switch to the new mechanism, gss-proxy must bind
+ to /var/run/gssproxy.sock and then write "1" to
+ /proc/net/rpc/use-gss-proxy.  If gss-proxy dies, it must repeat both
+ steps.
+ 
+ Once the upcall mechanism is chosen, it cannot be changed.  To prevent
+ locking into the legacy mechanism, the above steps must be performed
+ before starting nfsd.  Whoever starts nfsd can guarantee this by reading
+ from /proc/net/rpc/use-gss-proxy and checking that it contains a
+ "1" -- the read will block until gss-proxy has done its write to the
+ file.
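The negotiation described above boils down to a small tri-state value per network namespace: -1 (undecided), 0 (legacy upcall), 1 (gss-proxy). A minimal user-space sketch of that logic, with hypothetical `model_*` names mirroring the kernel's `set_gss_proxy()`/`use_gss_proxy()` (the kernel guards the state with a spinlock, omitted here):

```c
#include <assert.h>
#include <errno.h>

/* Tri-state mirroring sn->use_gss_proxy: -1 = undecided,
 * 0 = legacy upcall, 1 = gss-proxy.  Hypothetical user-space model. */
static int use_gss_proxy_state = -1;

/* Choose a mechanism; once decided, it cannot be changed. */
static int model_set_gss_proxy(int type)
{
	if (use_gss_proxy_state == -1 || use_gss_proxy_state == type) {
		use_gss_proxy_state = type;
		return 0;
	}
	return -EBUSY;
}

/* The first RPC_GSS_PROC_INIT request locks in the legacy default. */
static int model_use_gss_proxy(void)
{
	if (use_gss_proxy_state == -1)
		use_gss_proxy_state = 0;
	return use_gss_proxy_state;
}
```

In this model, gss-proxy writing "1" corresponds to `model_set_gss_proxy(1)`; if nfsd has already handled an init request, the attempt fails with -EBUSY, which is why gss-proxy must register before nfsd starts.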
+2
net/sunrpc/auth_gss/gss_rpc_upcall.c
···
  {
  	mutex_init(&sn->gssp_lock);
  	sn->gssp_clnt = NULL;
+ 	init_waitqueue_head(&sn->gssp_wq);
  }
  
  int set_gssp_clnt(struct net *net)
···
  		sn->gssp_clnt = clnt;
  	}
  	mutex_unlock(&sn->gssp_lock);
+ 	wake_up(&sn->gssp_wq);
  	return ret;
  }
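The `gssp_wq` waitqueue added above lets readers of `use-gss-proxy` sleep until gss-proxy registers its client. A user-space analogue of that wait/wake pattern, sketched with a pthread condition variable (hypothetical `model_*` names; the kernel uses `wait_event_interruptible()` and `wake_up()`):

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t gssp_mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t gssp_cond = PTHREAD_COND_INITIALIZER;
static bool gssp_registered;

/* Analogue of set_gssp_clnt() calling wake_up(&sn->gssp_wq). */
static void model_wake_gssp(void)
{
	pthread_mutex_lock(&gssp_mtx);
	gssp_registered = true;
	pthread_cond_broadcast(&gssp_cond);
	pthread_mutex_unlock(&gssp_mtx);
}

/* Analogue of wait_for_gss_proxy(): block until registration. */
static void model_wait_for_gssp(void)
{
	pthread_mutex_lock(&gssp_mtx);
	while (!gssp_registered)
		pthread_cond_wait(&gssp_cond, &gssp_mtx);
	pthread_mutex_unlock(&gssp_mtx);
}
```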
+338 -9
net/sunrpc/auth_gss/svcauth_gss.c
···
  #include <linux/sunrpc/svcauth.h>
  #include <linux/sunrpc/svcauth_gss.h>
  #include <linux/sunrpc/cache.h>
+ #include "gss_rpc_upcall.h"
  
- #include "../netns.h"
  
  #ifdef RPC_DEBUG
  # define RPCDBG_FACILITY	RPCDBG_AUTH
···
  }
  
  static inline int
- gss_read_verf(struct rpc_gss_wire_cred *gc,
- 	      struct kvec *argv, __be32 *authp,
- 	      struct xdr_netobj *in_handle,
- 	      struct xdr_netobj *in_token)
+ gss_read_common_verf(struct rpc_gss_wire_cred *gc,
+ 		     struct kvec *argv, __be32 *authp,
+ 		     struct xdr_netobj *in_handle)
  {
- 	struct xdr_netobj tmpobj;
- 
  	/* Read the verifier; should be NULL: */
  	*authp = rpc_autherr_badverf;
  	if (argv->iov_len < 2 * 4)
···
  	if (dup_netobj(in_handle, &gc->gc_ctx))
  		return SVC_CLOSE;
  	*authp = rpc_autherr_badverf;
+ 
+ 	return 0;
+ }
+ 
+ static inline int
+ gss_read_verf(struct rpc_gss_wire_cred *gc,
+ 	      struct kvec *argv, __be32 *authp,
+ 	      struct xdr_netobj *in_handle,
+ 	      struct xdr_netobj *in_token)
+ {
+ 	struct xdr_netobj tmpobj;
+ 	int res;
+ 
+ 	res = gss_read_common_verf(gc, argv, authp, in_handle);
+ 	if (res)
+ 		return res;
+ 
  	if (svc_safe_getnetobj(argv, &tmpobj)) {
  		kfree(in_handle->data);
  		return SVC_DENIED;
···
  		kfree(in_handle->data);
  		return SVC_CLOSE;
  	}
+ 
+ 	return 0;
+ }
+ 
+ /* Ok this is really heavily depending on a set of semantics in
+  * how rqstp is set up by svc_recv and pages laid down by the
+  * server when reading a request. We are basically guaranteed that
+  * the token lays all down linearly across a set of pages, starting
+  * at iov_base in rq_arg.head[0] which happens to be the first of a
+  * set of pages stored in rq_pages[].
+  * rq_arg.head[0].iov_base will provide us the page_base to pass
+  * to the upcall.
+  */
+ static inline int
+ gss_read_proxy_verf(struct svc_rqst *rqstp,
+ 		    struct rpc_gss_wire_cred *gc, __be32 *authp,
+ 		    struct xdr_netobj *in_handle,
+ 		    struct gssp_in_token *in_token)
+ {
+ 	struct kvec *argv = &rqstp->rq_arg.head[0];
+ 	u32 inlen;
+ 	int res;
+ 
+ 	res = gss_read_common_verf(gc, argv, authp, in_handle);
+ 	if (res)
+ 		return res;
+ 
+ 	inlen = svc_getnl(argv);
+ 	if (inlen > (argv->iov_len + rqstp->rq_arg.page_len))
+ 		return SVC_DENIED;
+ 
+ 	in_token->pages = rqstp->rq_pages;
+ 	in_token->page_base = (ulong)argv->iov_base & ~PAGE_MASK;
+ 	in_token->page_len = inlen;
  
  	return 0;
  }
···
   * the upcall results are available, write the verifier and result.
   * Otherwise, drop the request pending an answer to the upcall.
   */
- static int svcauth_gss_handle_init(struct svc_rqst *rqstp,
+ static int svcauth_gss_legacy_init(struct svc_rqst *rqstp,
  		struct rpc_gss_wire_cred *gc, __be32 *authp)
  {
  	struct kvec *argv = &rqstp->rq_arg.head[0];
···
  	cache_put(&rsip->h, sn->rsi_cache);
  	return ret;
  }
+ 
+ static int gss_proxy_save_rsc(struct cache_detail *cd,
+ 			      struct gssp_upcall_data *ud,
+ 			      uint64_t *handle)
+ {
+ 	struct rsc rsci, *rscp = NULL;
+ 	static atomic64_t ctxhctr;
+ 	long long ctxh;
+ 	struct gss_api_mech *gm = NULL;
+ 	time_t expiry;
+ 	int status = -EINVAL;
+ 
+ 	memset(&rsci, 0, sizeof(rsci));
+ 	/* context handle */
+ 	status = -ENOMEM;
+ 	/* the handle needs to be just a unique id,
+ 	 * use a static counter */
+ 	ctxh = atomic64_inc_return(&ctxhctr);
+ 
+ 	/* make a copy for the caller */
+ 	*handle = ctxh;
+ 
+ 	/* make a copy for the rsc cache */
+ 	if (dup_to_netobj(&rsci.handle, (char *)handle, sizeof(uint64_t)))
+ 		goto out;
+ 	rscp = rsc_lookup(cd, &rsci);
+ 	if (!rscp)
+ 		goto out;
+ 
+ 	/* creds */
+ 	if (!ud->found_creds) {
+ 		/* userspace seem buggy, we should always get at least a
+ 		 * mapping to nobody */
+ 		dprintk("RPC:       No creds found, marking Negative!\n");
+ 		set_bit(CACHE_NEGATIVE, &rsci.h.flags);
+ 	} else {
+ 
+ 		/* steal creds */
+ 		rsci.cred = ud->creds;
+ 		memset(&ud->creds, 0, sizeof(struct svc_cred));
+ 
+ 		status = -EOPNOTSUPP;
+ 		/* get mech handle from OID */
+ 		gm = gss_mech_get_by_OID(&ud->mech_oid);
+ 		if (!gm)
+ 			goto out;
+ 
+ 		status = -EINVAL;
+ 		/* mech-specific data: */
+ 		status = gss_import_sec_context(ud->out_handle.data,
+ 						ud->out_handle.len,
+ 						gm, &rsci.mechctx,
+ 						&expiry, GFP_KERNEL);
+ 		if (status)
+ 			goto out;
+ 	}
+ 
+ 	rsci.h.expiry_time = expiry;
+ 	rscp = rsc_update(cd, &rsci, rscp);
+ 	status = 0;
+ out:
+ 	gss_mech_put(gm);
+ 	rsc_free(&rsci);
+ 	if (rscp)
+ 		cache_put(&rscp->h, cd);
+ 	else
+ 		status = -ENOMEM;
+ 	return status;
+ }
+ 
+ static int svcauth_gss_proxy_init(struct svc_rqst *rqstp,
+ 		struct rpc_gss_wire_cred *gc, __be32 *authp)
+ {
+ 	struct kvec *resv = &rqstp->rq_res.head[0];
+ 	struct xdr_netobj cli_handle;
+ 	struct gssp_upcall_data ud;
+ 	uint64_t handle;
+ 	int status;
+ 	int ret;
+ 	struct net *net = rqstp->rq_xprt->xpt_net;
+ 	struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);
+ 
+ 	memset(&ud, 0, sizeof(ud));
+ 	ret = gss_read_proxy_verf(rqstp, gc, authp,
+ 				  &ud.in_handle, &ud.in_token);
+ 	if (ret)
+ 		return ret;
+ 
+ 	ret = SVC_CLOSE;
+ 
+ 	/* Perform synchronous upcall to gss-proxy */
+ 	status = gssp_accept_sec_context_upcall(net, &ud);
+ 	if (status)
+ 		goto out;
+ 
+ 	dprintk("RPC:       svcauth_gss: gss major status = %d\n",
+ 			ud.major_status);
+ 
+ 	switch (ud.major_status) {
+ 	case GSS_S_CONTINUE_NEEDED:
+ 		cli_handle = ud.out_handle;
+ 		break;
+ 	case GSS_S_COMPLETE:
+ 		status = gss_proxy_save_rsc(sn->rsc_cache, &ud, &handle);
+ 		if (status)
+ 			goto out;
+ 		cli_handle.data = (u8 *)&handle;
+ 		cli_handle.len = sizeof(handle);
+ 		break;
+ 	default:
+ 		ret = SVC_CLOSE;
+ 		goto out;
+ 	}
+ 
+ 	/* Got an answer to the upcall; use it: */
+ 	if (gss_write_init_verf(sn->rsc_cache, rqstp,
+ 				&cli_handle, &ud.major_status))
+ 		goto out;
+ 	if (gss_write_resv(resv, PAGE_SIZE,
+ 			   &cli_handle, &ud.out_token,
+ 			   ud.major_status, ud.minor_status))
+ 		goto out;
+ 
+ 	ret = SVC_COMPLETE;
+ out:
+ 	gssp_free_upcall_data(&ud);
+ 	return ret;
+ }
+ 
+ DEFINE_SPINLOCK(use_gssp_lock);
+ 
+ static bool use_gss_proxy(struct net *net)
+ {
+ 	struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);
+ 
+ 	if (sn->use_gss_proxy != -1)
+ 		return sn->use_gss_proxy;
+ 	spin_lock(&use_gssp_lock);
+ 	/*
+ 	 * If you wanted gss-proxy, you should have said so before
+ 	 * starting to accept requests:
+ 	 */
+ 	sn->use_gss_proxy = 0;
+ 	spin_unlock(&use_gssp_lock);
+ 	return 0;
+ }
+ 
+ static bool set_gss_proxy(struct net *net, int type)
+ {
+ 	struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);
+ 	int ret = 0;
+ 
+ 	WARN_ON_ONCE(type != 0 && type != 1);
+ 	spin_lock(&use_gssp_lock);
+ 	if (sn->use_gss_proxy == -1 || sn->use_gss_proxy == type)
+ 		sn->use_gss_proxy = type;
+ 	else
+ 		ret = -EBUSY;
+ 	spin_unlock(&use_gssp_lock);
+ 	wake_up(&sn->gssp_wq);
+ 	return ret;
+ }
+ 
+ static inline bool gssp_ready(struct sunrpc_net *sn)
+ {
+ 	switch (sn->use_gss_proxy) {
+ 	case -1:
+ 		return false;
+ 	case 0:
+ 		return true;
+ 	case 1:
+ 		return sn->gssp_clnt;
+ 	}
+ 	WARN_ON_ONCE(1);
+ 	return false;
+ }
+ 
+ static int wait_for_gss_proxy(struct net *net)
+ {
+ 	struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);
+ 
+ 	return wait_event_interruptible(sn->gssp_wq, gssp_ready(sn));
+ }
+ 
+ #ifdef CONFIG_PROC_FS
+ 
+ static ssize_t write_gssp(struct file *file, const char __user *buf,
+ 			  size_t count, loff_t *ppos)
+ {
+ 	struct net *net = PDE(file->f_path.dentry->d_inode)->data;
+ 	char tbuf[20];
+ 	unsigned long i;
+ 	int res;
+ 
+ 	if (*ppos || count > sizeof(tbuf)-1)
+ 		return -EINVAL;
+ 	if (copy_from_user(tbuf, buf, count))
+ 		return -EFAULT;
+ 
+ 	tbuf[count] = 0;
+ 	res = kstrtoul(tbuf, 0, &i);
+ 	if (res)
+ 		return res;
+ 	if (i != 1)
+ 		return -EINVAL;
+ 	res = set_gss_proxy(net, 1);
+ 	if (res)
+ 		return res;
+ 	res = set_gssp_clnt(net);
+ 	if (res)
+ 		return res;
+ 	return count;
+ }
+ 
+ static ssize_t read_gssp(struct file *file, char __user *buf,
+ 			 size_t count, loff_t *ppos)
+ {
+ 	struct net *net = PDE(file->f_path.dentry->d_inode)->data;
+ 	unsigned long p = *ppos;
+ 	char tbuf[10];
+ 	size_t len;
+ 	int ret;
+ 
+ 	ret = wait_for_gss_proxy(net);
+ 	if (ret)
+ 		return ret;
+ 
+ 	snprintf(tbuf, sizeof(tbuf), "%d\n", use_gss_proxy(net));
+ 	len = strlen(tbuf);
+ 	if (p >= len)
+ 		return 0;
+ 	len -= p;
+ 	if (len > count)
+ 		len = count;
+ 	if (copy_to_user(buf, (void *)(tbuf+p), len))
+ 		return -EFAULT;
+ 	*ppos += len;
+ 	return len;
+ }
+ 
+ static const struct file_operations use_gss_proxy_ops = {
+ 	.open = nonseekable_open,
+ 	.write = write_gssp,
+ 	.read = read_gssp,
+ };
+ 
+ static int create_use_gss_proxy_proc_entry(struct net *net)
+ {
+ 	struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);
+ 	struct proc_dir_entry **p = &sn->use_gssp_proc;
+ 
+ 	sn->use_gss_proxy = -1;
+ 	*p = proc_create_data("use-gss-proxy", S_IFREG|S_IRUSR|S_IWUSR,
+ 			      sn->proc_net_rpc,
+ 			      &use_gss_proxy_ops, net);
+ 	if (!*p)
+ 		return -ENOMEM;
+ 	init_gssp_clnt(sn);
+ 	return 0;
+ }
+ 
+ static void destroy_use_gss_proxy_proc_entry(struct net *net)
+ {
+ 	struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);
+ 
+ 	if (sn->use_gssp_proc) {
+ 		remove_proc_entry("use-gss-proxy", sn->proc_net_rpc);
+ 		clear_gssp_clnt(sn);
+ 	}
+ }
+ 
+ #endif /* CONFIG_PROC_FS */
  
  /*
   * Accept an rpcsec packet.
···
  	switch (gc->gc_proc) {
  	case RPC_GSS_PROC_INIT:
  	case RPC_GSS_PROC_CONTINUE_INIT:
- 		return svcauth_gss_handle_init(rqstp, gc, authp);
+ 		if (use_gss_proxy(SVC_NET(rqstp)))
+ 			return svcauth_gss_proxy_init(rqstp, gc, authp);
+ 		else
+ 			return svcauth_gss_legacy_init(rqstp, gc, authp);
  	case RPC_GSS_PROC_DATA:
  	case RPC_GSS_PROC_DESTROY:
  		/* Look up the context, and check the verifier: */
···
  	rv = rsi_cache_create_net(net);
  	if (rv)
  		goto out1;
+ 	rv = create_use_gss_proxy_proc_entry(net);
+ 	if (rv)
+ 		goto out2;
  	return 0;
+ out2:
+ 	destroy_use_gss_proxy_proc_entry(net);
  out1:
  	rsc_cache_destroy_net(net);
  	return rv;
···
  void
  gss_svc_shutdown_net(struct net *net)
  {
+ 	destroy_use_gss_proxy_proc_entry(net);
  	rsi_cache_destroy_net(net);
  	rsc_cache_destroy_net(net);
  }
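The zero-copy hand-off in gss_read_proxy_verf() relies on the token lying linearly across rq_pages[]: only the offset of iov_base within its first page (`addr & ~PAGE_MASK`) and the token length are passed to the upcall, not a copy of the data. A small sketch of that offset arithmetic, assuming a 4KiB page size (`MODEL_*` names are illustrative, not the kernel's):

```c
#include <assert.h>

#define MODEL_PAGE_SIZE 4096UL
#define MODEL_PAGE_MASK (~(MODEL_PAGE_SIZE - 1))

/* Offset of an address within its page, as computed for
 * in_token->page_base above: addr & ~PAGE_MASK. */
static unsigned long page_base_of(unsigned long addr)
{
	return addr & ~MODEL_PAGE_MASK;
}
```

Because PAGE_MASK clears the low bits, `~PAGE_MASK` keeps exactly the within-page offset, which is all the upcall needs alongside the page pointers.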
+3
net/sunrpc/netns.h
···
  	unsigned int rpcb_users;
  
  	struct mutex gssp_lock;
+ 	wait_queue_head_t gssp_wq;
  	struct rpc_clnt *gssp_clnt;
+ 	int use_gss_proxy;
+ 	struct proc_dir_entry *use_gssp_proc;
  };
  
  extern int sunrpc_net_id;
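As write_gssp() above shows, the only accepted write to /proc/net/rpc/use-gss-proxy is a string that kstrtoul() parses to the value 1. A hypothetical user-space model of that validation, using strtoul() in place of kstrtoul() (which likewise uses base 0 and tolerates a single trailing newline):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Model of write_gssp()'s input check; illustrative helper name. */
static int model_write_gssp(const char *buf)
{
	char *end;
	unsigned long i;

	errno = 0;
	i = strtoul(buf, &end, 0);
	if (errno || end == buf)
		return -EINVAL;
	if (*end == '\n')	/* kstrtoul tolerates one trailing newline */
		end++;
	if (*end != '\0' || i != 1)
		return -EINVAL;
	return 0;
}
```

Any other value (including "0") is rejected, since the proc file exists only to opt in to gss-proxy; opting out is simply never writing to it.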