// SPDX-License-Identifier: GPL-2.0
/*
 * Neil Brown <neilb@cse.unsw.edu.au>
 * J. Bruce Fields <bfields@umich.edu>
 * Andy Adamson <andros@umich.edu>
 * Dug Song <dugsong@monkey.org>
 *
 * RPCSEC_GSS server authentication.
 * This implements RPCSEC_GSS as defined in rfc2203 (rpcsec_gss) and rfc2078
 * (gssapi)
 *
 * RPCSEC_GSS involves three stages:
 *  1/ context creation
 *  2/ data exchange
 *  3/ context destruction
 *
 * Context creation is handled largely by upcalls to user-space.
 *  In particular, GSS_Accept_sec_context is handled by an upcall.
 * Data exchange is handled entirely within the kernel.
 *  In particular, GSS_GetMIC, GSS_VerifyMIC, GSS_Seal, GSS_Unseal are in-kernel.
 * Context destruction is handled in-kernel.
 *  GSS_Delete_sec_context is in-kernel.
 *
 * Context creation is initiated by an RPCSEC_GSS_INIT request arriving.
 * The context handle and gss_token are used as a key into the rpcsec_init cache.
 * The content of this cache includes some of the outputs of GSS_Accept_sec_context,
 * being major_status, minor_status, context_handle, reply_token.
 * These are sent back to the client.
 * Sequence window management is handled by the kernel. The window size is currently
 * a compile-time constant.
 *
 * When user-space is happy that a context is established, it places an entry
 * in the rpcsec_context cache. The key for this cache is the context_handle.
 * The content includes:
 *  uid/gidlist - for determining access rights
 *  mechanism type
 *  mechanism specific information, such as a key
 */

#include <linux/slab.h>
#include <linux/types.h>
#include <linux/module.h>
#include <linux/pagemap.h>
#include <linux/user_namespace.h>

#include <linux/sunrpc/auth_gss.h>
#include <linux/sunrpc/gss_err.h>
#include <linux/sunrpc/svcauth.h>
#include <linux/sunrpc/svcauth_gss.h>
#include <linux/sunrpc/cache.h>
#include <linux/sunrpc/gss_krb5.h>

#include <trace/events/rpcgss.h>

#include "gss_rpc_upcall.h"

/*
 * Unfortunately there isn't a maximum checksum size exported via the
 * GSS API. Manufacture one based on GSS mechanisms supported by this
 * implementation.
 */
#define GSS_MAX_CKSUMSIZE (GSS_KRB5_TOK_HDR_LEN + GSS_KRB5_MAX_CKSUM_LEN)

/*
 * This value may be increased in the future to accommodate other
 * usage of the scratch buffer.
 */
#define GSS_SCRATCH_SIZE GSS_MAX_CKSUMSIZE

struct gss_svc_data {
	/* decoded gss client cred: */
	struct rpc_gss_wire_cred	clcred;
	/* save a pointer to the beginning of the encoded verifier,
	 * for use in encryption/checksumming in svcauth_gss_release: */
	__be32				*verf_start;
	struct rsc			*rsci;

	/* for temporary results */
	u8				gsd_scratch[GSS_SCRATCH_SIZE];
};

/* The rpcsec_init cache is used for mapping RPCSEC_GSS_{,CONT_}INIT requests
 * into replies.
 *
 * Key is context handle (\x if empty) and gss_token.
 * Content is major_status minor_status (integers) context_handle, reply_token.
 *
 */

static int netobj_equal(struct xdr_netobj *a, struct xdr_netobj *b)
{
	return a->len == b->len && 0 == memcmp(a->data, b->data, a->len);
}

#define	RSI_HASHBITS	6
#define	RSI_HASHMAX	(1<<RSI_HASHBITS)

struct rsi {
	struct cache_head	h;
	struct xdr_netobj	in_handle, in_token;
	struct xdr_netobj	out_handle, out_token;
	int			major_status, minor_status;
	struct rcu_head		rcu_head;
};

static struct rsi *rsi_update(struct cache_detail *cd, struct rsi *new, struct rsi *old);
static struct rsi *rsi_lookup(struct cache_detail *cd, struct rsi *item);

static void rsi_free(struct rsi *rsii)
{
	kfree(rsii->in_handle.data);
	kfree(rsii->in_token.data);
	kfree(rsii->out_handle.data);
	kfree(rsii->out_token.data);
}

static void rsi_free_rcu(struct rcu_head *head)
{
	struct rsi *rsii = container_of(head, struct rsi, rcu_head);

	rsi_free(rsii);
	kfree(rsii);
}

static void rsi_put(struct kref *ref)
{
	struct rsi *rsii = container_of(ref, struct rsi, h.ref);

	call_rcu(&rsii->rcu_head, rsi_free_rcu);
}

static inline int rsi_hash(struct rsi *item)
{
	return hash_mem(item->in_handle.data, item->in_handle.len, RSI_HASHBITS)
	     ^ hash_mem(item->in_token.data, item->in_token.len, RSI_HASHBITS);
}

static int rsi_match(struct cache_head *a, struct cache_head *b)
{
	struct rsi *item = container_of(a, struct rsi, h);
	struct rsi *tmp = container_of(b, struct rsi, h);

	return netobj_equal(&item->in_handle, &tmp->in_handle) &&
	       netobj_equal(&item->in_token, &tmp->in_token);
}

static int dup_to_netobj(struct xdr_netobj *dst, char *src, int len)
{
	dst->len = len;
	dst->data = (len ? kmemdup(src, len, GFP_KERNEL) : NULL);
	if (len && !dst->data)
		return -ENOMEM;
	return 0;
}

static inline int dup_netobj(struct xdr_netobj *dst, struct xdr_netobj *src)
{
	return dup_to_netobj(dst, src->data, src->len);
}

static void rsi_init(struct cache_head *cnew, struct cache_head *citem)
{
	struct rsi *new = container_of(cnew, struct rsi, h);
	struct rsi *item = container_of(citem, struct rsi, h);

	new->out_handle.data = NULL;
	new->out_handle.len = 0;
	new->out_token.data = NULL;
	new->out_token.len = 0;
	new->in_handle.len = item->in_handle.len;
	item->in_handle.len = 0;
	new->in_token.len = item->in_token.len;
	item->in_token.len = 0;
	new->in_handle.data = item->in_handle.data;
	item->in_handle.data = NULL;
	new->in_token.data = item->in_token.data;
	item->in_token.data = NULL;
}

static void update_rsi(struct cache_head *cnew, struct cache_head *citem)
{
	struct rsi *new = container_of(cnew, struct rsi, h);
	struct rsi *item = container_of(citem, struct rsi, h);

	BUG_ON(new->out_handle.data || new->out_token.data);
	new->out_handle.len = item->out_handle.len;
	item->out_handle.len = 0;
	new->out_token.len = item->out_token.len;
	item->out_token.len = 0;
	new->out_handle.data = item->out_handle.data;
	item->out_handle.data = NULL;
	new->out_token.data = item->out_token.data;
	item->out_token.data = NULL;

	new->major_status = item->major_status;
	new->minor_status = item->minor_status;
}

static struct cache_head *rsi_alloc(void)
{
	struct rsi *rsii = kmalloc(sizeof(*rsii), GFP_KERNEL);
	if (rsii)
		return &rsii->h;
	else
		return NULL;
}

static int rsi_upcall(struct cache_detail *cd, struct cache_head *h)
{
	return sunrpc_cache_pipe_upcall_timeout(cd, h);
}
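The xdr_netobj helpers above (netobj_equal, dup_to_netobj) can be modelled in plain userspace C. This is only a sketch under stated assumptions: malloc/memcpy stand in for the kernel's kmemdup, and -1 stands in for -ENOMEM; the struct layout mirrors xdr_netobj but everything here is illustrative, not the kernel implementation.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Userspace model of struct xdr_netobj: a counted byte string. */
struct xdr_netobj {
	unsigned int len;
	unsigned char *data;
};

/* Equal iff lengths match and the byte contents match. */
static int netobj_equal(const struct xdr_netobj *a, const struct xdr_netobj *b)
{
	return a->len == b->len && 0 == memcmp(a->data, b->data, a->len);
}

/* Deep-copy len bytes of src into dst; a zero-length netobj keeps NULL data. */
static int dup_to_netobj(struct xdr_netobj *dst, const char *src, unsigned int len)
{
	dst->len = len;
	dst->data = len ? malloc(len) : NULL;
	if (len && !dst->data)
		return -1;	/* stands in for -ENOMEM */
	if (len)
		memcpy(dst->data, src, len);
	return 0;
}
```

The cache code relies on exactly these semantics: dup_to_netobj gives each cache entry its own copy of the handle/token bytes, and netobj_equal is what rsi_match/rsc_match use to compare keys.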

static void rsi_request(struct cache_detail *cd,
			struct cache_head *h,
			char **bpp, int *blen)
{
	struct rsi *rsii = container_of(h, struct rsi, h);

	qword_addhex(bpp, blen, rsii->in_handle.data, rsii->in_handle.len);
	qword_addhex(bpp, blen, rsii->in_token.data, rsii->in_token.len);
	(*bpp)[-1] = '\n';
	WARN_ONCE(*blen < 0,
		  "RPCSEC/GSS credential too large - please use gssproxy\n");
}

static int rsi_parse(struct cache_detail *cd,
		     char *mesg, int mlen)
{
	/* context token expiry major minor context token */
	char *buf = mesg;
	char *ep;
	int len;
	struct rsi rsii, *rsip = NULL;
	time64_t expiry;
	int status = -EINVAL;

	memset(&rsii, 0, sizeof(rsii));
	/* handle */
	len = qword_get(&mesg, buf, mlen);
	if (len < 0)
		goto out;
	status = -ENOMEM;
	if (dup_to_netobj(&rsii.in_handle, buf, len))
		goto out;

	/* token */
	len = qword_get(&mesg, buf, mlen);
	status = -EINVAL;
	if (len < 0)
		goto out;
	status = -ENOMEM;
	if (dup_to_netobj(&rsii.in_token, buf, len))
		goto out;

	rsip = rsi_lookup(cd, &rsii);
	if (!rsip)
		goto out;

	rsii.h.flags = 0;
	/* expiry */
	expiry = get_expiry(&mesg);
	status = -EINVAL;
	if (expiry == 0)
		goto out;

	/* major/minor */
	len = qword_get(&mesg, buf, mlen);
	if (len <= 0)
		goto out;
	rsii.major_status = simple_strtoul(buf, &ep, 10);
	if (*ep)
		goto out;
	len = qword_get(&mesg, buf, mlen);
	if (len <= 0)
		goto out;
	rsii.minor_status = simple_strtoul(buf, &ep, 10);
	if (*ep)
		goto out;

	/* out_handle */
	len = qword_get(&mesg, buf, mlen);
	if (len < 0)
		goto out;
	status = -ENOMEM;
	if (dup_to_netobj(&rsii.out_handle, buf, len))
		goto out;

	/* out_token */
	len = qword_get(&mesg, buf, mlen);
	status = -EINVAL;
	if (len < 0)
		goto out;
	status = -ENOMEM;
	if (dup_to_netobj(&rsii.out_token, buf, len))
		goto out;
	rsii.h.expiry_time = expiry;
	rsip = rsi_update(cd, &rsii, rsip);
	status = 0;
out:
	rsi_free(&rsii);
	if (rsip)
		cache_put(&rsip->h, cd);
	else
		status = -ENOMEM;
	return status;
}

static const struct cache_detail rsi_cache_template = {
	.owner		= THIS_MODULE,
	.hash_size	= RSI_HASHMAX,
	.name		= "auth.rpcsec.init",
	.cache_put	= rsi_put,
	.cache_upcall	= rsi_upcall,
	.cache_request	= rsi_request,
	.cache_parse	= rsi_parse,
	.match		= rsi_match,
	.init		= rsi_init,
	.update		= update_rsi,
	.alloc		= rsi_alloc,
};

static struct rsi *rsi_lookup(struct cache_detail *cd, struct rsi *item)
{
	struct cache_head *ch;
	int hash = rsi_hash(item);

	ch = sunrpc_cache_lookup_rcu(cd, &item->h, hash);
	if (ch)
		return container_of(ch, struct rsi, h);
	else
		return NULL;
}

static struct rsi *rsi_update(struct cache_detail *cd, struct rsi *new, struct rsi *old)
{
	struct cache_head *ch;
	int hash = rsi_hash(new);

	ch = sunrpc_cache_update(cd, &new->h,
				 &old->h, hash);
	if (ch)
		return container_of(ch, struct rsi, h);
	else
		return NULL;
}


/*
 * The rpcsec_context cache is used to store a context that is
 * used in data exchange.
 * The key is a context handle. The content is:
 *  uid, gidlist, mechanism, service-set, mech-specific-data
 */

#define	RSC_HASHBITS	10
#define	RSC_HASHMAX	(1<<RSC_HASHBITS)

#define GSS_SEQ_WIN	128

struct gss_svc_seq_data {
	/* highest seq number seen so far: */
	u32			sd_max;
	/* for i such that sd_max-GSS_SEQ_WIN < i <= sd_max, the i-th bit of
	 * sd_win is nonzero iff sequence number i has been seen already: */
	unsigned long		sd_win[GSS_SEQ_WIN/BITS_PER_LONG];
	spinlock_t		sd_lock;
};

struct rsc {
	struct cache_head	h;
	struct xdr_netobj	handle;
	struct svc_cred		cred;
	struct gss_svc_seq_data	seqdata;
	struct gss_ctx		*mechctx;
	struct rcu_head		rcu_head;
};

static struct rsc *rsc_update(struct cache_detail *cd, struct rsc *new, struct rsc *old);
static struct rsc *rsc_lookup(struct cache_detail *cd, struct rsc *item);

static void rsc_free(struct rsc *rsci)
{
	kfree(rsci->handle.data);
	if (rsci->mechctx)
		gss_delete_sec_context(&rsci->mechctx);
	free_svc_cred(&rsci->cred);
}

static void rsc_free_rcu(struct rcu_head *head)
{
	struct rsc *rsci = container_of(head, struct rsc, rcu_head);

	kfree(rsci->handle.data);
	kfree(rsci);
}

static void rsc_put(struct kref *ref)
{
	struct rsc *rsci = container_of(ref, struct rsc, h.ref);

	if (rsci->mechctx)
		gss_delete_sec_context(&rsci->mechctx);
	free_svc_cred(&rsci->cred);
	call_rcu(&rsci->rcu_head, rsc_free_rcu);
}

static inline int
rsc_hash(struct rsc *rsci)
{
	return hash_mem(rsci->handle.data, rsci->handle.len, RSC_HASHBITS);
}

static int
rsc_match(struct cache_head *a, struct cache_head *b)
{
	struct rsc *new = container_of(a, struct rsc, h);
	struct rsc *tmp = container_of(b, struct rsc, h);

	return netobj_equal(&new->handle, &tmp->handle);
}

static void
rsc_init(struct cache_head *cnew, struct cache_head *ctmp)
{
	struct rsc *new = container_of(cnew, struct rsc, h);
	struct rsc *tmp = container_of(ctmp, struct rsc, h);

	new->handle.len = tmp->handle.len;
	tmp->handle.len = 0;
	new->handle.data = tmp->handle.data;
	tmp->handle.data = NULL;
	new->mechctx = NULL;
	init_svc_cred(&new->cred);
}

static void
update_rsc(struct cache_head *cnew, struct cache_head *ctmp)
{
	struct rsc *new = container_of(cnew, struct rsc, h);
	struct rsc *tmp = container_of(ctmp, struct rsc, h);

	new->mechctx = tmp->mechctx;
	tmp->mechctx = NULL;
	memset(&new->seqdata, 0, sizeof(new->seqdata));
	spin_lock_init(&new->seqdata.sd_lock);
	new->cred = tmp->cred;
	init_svc_cred(&tmp->cred);
}

static struct cache_head *
rsc_alloc(void)
{
	struct rsc *rsci = kmalloc(sizeof(*rsci), GFP_KERNEL);
	if (rsci)
		return &rsci->h;
	else
		return NULL;
}

static int rsc_upcall(struct cache_detail *cd, struct cache_head *h)
{
	return -EINVAL;
}

static int rsc_parse(struct cache_detail *cd,
		     char *mesg, int mlen)
{
	/* contexthandle expiry [ uid gid N <n gids> mechname ...mechdata... ] */
	char *buf = mesg;
	int id;
	int len, rv;
	struct rsc rsci, *rscp = NULL;
	time64_t expiry;
	int status = -EINVAL;
	struct gss_api_mech *gm = NULL;

	memset(&rsci, 0, sizeof(rsci));
	/* context handle */
	len = qword_get(&mesg, buf, mlen);
	if (len < 0)
		goto out;
	status = -ENOMEM;
	if (dup_to_netobj(&rsci.handle, buf, len))
		goto out;

	rsci.h.flags = 0;
	/* expiry */
	expiry = get_expiry(&mesg);
	status = -EINVAL;
	if (expiry == 0)
		goto out;

	rscp = rsc_lookup(cd, &rsci);
	if (!rscp)
		goto out;

	/* uid, or NEGATIVE */
	rv = get_int(&mesg, &id);
	if (rv == -EINVAL)
		goto out;
	if (rv == -ENOENT)
		set_bit(CACHE_NEGATIVE, &rsci.h.flags);
	else {
		int N, i;

		/*
		 * NOTE: we skip uid_valid()/gid_valid() checks here:
		 * instead, -1 id's are later mapped to the
		 * (export-specific) anonymous id by nfsd_setuser.
		 *
		 * (But supplementary gid's get no such special
		 * treatment so are checked for validity here.)
		 */
		/* uid */
		rsci.cred.cr_uid = make_kuid(current_user_ns(), id);

		/* gid */
		if (get_int(&mesg, &id))
			goto out;
		rsci.cred.cr_gid = make_kgid(current_user_ns(), id);

		/* number of additional gid's */
		if (get_int(&mesg, &N))
			goto out;
		if (N < 0 || N > NGROUPS_MAX)
			goto out;
		status = -ENOMEM;
		rsci.cred.cr_group_info = groups_alloc(N);
		if (rsci.cred.cr_group_info == NULL)
			goto out;

		/* gid's */
		status = -EINVAL;
		for (i = 0; i < N; i++) {
			kgid_t kgid;
			if (get_int(&mesg, &id))
				goto out;
			kgid = make_kgid(current_user_ns(), id);
			if (!gid_valid(kgid))
				goto out;
			rsci.cred.cr_group_info->gid[i] = kgid;
		}
		groups_sort(rsci.cred.cr_group_info);

		/* mech name */
		len = qword_get(&mesg, buf, mlen);
		if (len < 0)
			goto out;
		gm = rsci.cred.cr_gss_mech = gss_mech_get_by_name(buf);
		status = -EOPNOTSUPP;
		if (!gm)
			goto out;

		status = -EINVAL;
		/* mech-specific data: */
		len = qword_get(&mesg, buf, mlen);
		if (len < 0)
			goto out;
		status = gss_import_sec_context(buf, len, gm, &rsci.mechctx,
						NULL, GFP_KERNEL);
		if (status)
			goto out;

		/* get client name */
		len = qword_get(&mesg, buf, mlen);
		if (len > 0) {
			rsci.cred.cr_principal = kstrdup(buf, GFP_KERNEL);
			if (!rsci.cred.cr_principal) {
				status = -ENOMEM;
				goto out;
			}
		}

	}
	rsci.h.expiry_time = expiry;
	rscp = rsc_update(cd, &rsci, rscp);
	status = 0;
out:
	rsc_free(&rsci);
	if (rscp)
		cache_put(&rscp->h, cd);
	else
		status = -ENOMEM;
	return status;
}

static const struct cache_detail rsc_cache_template = {
	.owner		= THIS_MODULE,
	.hash_size	= RSC_HASHMAX,
	.name		= "auth.rpcsec.context",
	.cache_put	= rsc_put,
	.cache_upcall	= rsc_upcall,
	.cache_parse	= rsc_parse,
	.match		= rsc_match,
	.init		= rsc_init,
	.update		= update_rsc,
	.alloc		= rsc_alloc,
};

static struct rsc *rsc_lookup(struct cache_detail *cd, struct rsc *item)
{
	struct cache_head *ch;
	int hash = rsc_hash(item);

	ch = sunrpc_cache_lookup_rcu(cd, &item->h, hash);
	if (ch)
		return container_of(ch, struct rsc, h);
	else
		return NULL;
}

static struct rsc *rsc_update(struct cache_detail *cd, struct rsc *new, struct rsc *old)
{
	struct cache_head *ch;
	int hash = rsc_hash(new);

	ch = sunrpc_cache_update(cd, &new->h,
				 &old->h, hash);
	if (ch)
		return container_of(ch, struct rsc, h);
	else
		return NULL;
}

static struct rsc *
gss_svc_searchbyctx(struct cache_detail *cd, struct xdr_netobj *handle)
{
	struct rsc rsci;
	struct rsc *found;

	memset(&rsci, 0, sizeof(rsci));
	if (dup_to_netobj(&rsci.handle, handle->data, handle->len))
		return NULL;
	found = rsc_lookup(cd, &rsci);
	rsc_free(&rsci);
	if (!found)
		return NULL;
	if (cache_check(cd, &found->h, NULL))
		return NULL;
	return found;
}

/**
 * gss_check_seq_num - GSS sequence number window check
 * @rqstp: RPC Call to use when reporting errors
 * @rsci: cached GSS context state (updated on return)
 * @seq_num: sequence number to check
 *
 * Implements sequence number algorithm as specified in
 * RFC 2203, Section 5.3.3.1. "Context Management".
 *
 * Return values:
 *   %true: @rqstp's GSS sequence number is inside the window
 *   %false: @rqstp's GSS sequence number is outside the window
 */
static bool gss_check_seq_num(const struct svc_rqst *rqstp, struct rsc *rsci,
			      u32 seq_num)
{
	struct gss_svc_seq_data *sd = &rsci->seqdata;
	bool result = false;

	spin_lock(&sd->sd_lock);
	if (seq_num > sd->sd_max) {
		if (seq_num >= sd->sd_max + GSS_SEQ_WIN) {
			memset(sd->sd_win, 0, sizeof(sd->sd_win));
			sd->sd_max = seq_num;
		} else while (sd->sd_max < seq_num) {
			sd->sd_max++;
			__clear_bit(sd->sd_max % GSS_SEQ_WIN, sd->sd_win);
		}
		__set_bit(seq_num % GSS_SEQ_WIN, sd->sd_win);
		goto ok;
	} else if (seq_num + GSS_SEQ_WIN <= sd->sd_max) {
		goto toolow;
	}
	if (__test_and_set_bit(seq_num % GSS_SEQ_WIN, sd->sd_win))
		goto alreadyseen;

ok:
	result = true;
out:
	spin_unlock(&sd->sd_lock);
	return result;

toolow:
	trace_rpcgss_svc_seqno_low(rqstp, seq_num,
				   sd->sd_max - GSS_SEQ_WIN,
				   sd->sd_max);
	goto out;
alreadyseen:
	trace_rpcgss_svc_seqno_seen(rqstp, seq_num);
	goto out;
}
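The window logic above can be exercised outside the kernel. The following is a minimal userland sketch of the same RFC 2203 sliding-window algorithm, not the kernel code itself: it uses a plain byte array instead of the kernel bitmap helpers, drops the spinlock and tracepoints, and `SEQ_WIN` is an arbitrary stand-in for the kernel's `GSS_SEQ_WIN`.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define SEQ_WIN 128	/* hypothetical window size; GSS_SEQ_WIN may differ */

struct seq_window {
	uint32_t max;			/* highest sequence number seen */
	unsigned char win[SEQ_WIN];	/* one slot per seqno inside the window */
};

/* Returns true if seq_num is fresh and inside the window, false otherwise. */
static bool seq_check(struct seq_window *sd, uint32_t seq_num)
{
	if (seq_num > sd->max) {
		if (seq_num >= sd->max + SEQ_WIN) {
			/* Jumped past the whole window: reset it. */
			memset(sd->win, 0, sizeof(sd->win));
			sd->max = seq_num;
		} else {
			/* Slide forward, clearing the vacated slots. */
			while (sd->max < seq_num) {
				sd->max++;
				sd->win[sd->max % SEQ_WIN] = 0;
			}
		}
		sd->win[seq_num % SEQ_WIN] = 1;
		return true;
	}
	if (seq_num + SEQ_WIN <= sd->max)
		return false;	/* below the window: drop */
	if (sd->win[seq_num % SEQ_WIN])
		return false;	/* replay: already seen */
	sd->win[seq_num % SEQ_WIN] = 1;
	return true;
}
```

Note the three failure modes mirror the kernel's labels: a seqno below the window (`toolow`), a replayed seqno (`alreadyseen`), and everything else is accepted, sliding or resetting the window as needed.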

static inline u32 round_up_to_quad(u32 i)
{
	return (i + 3) & ~3;
}
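XDR requires every data item to occupy a multiple of four bytes, so opaque fields are padded up to the next quad boundary. The bit trick above can be checked in plain C; this standalone copy is for illustration only:

```c
#include <stdint.h>

/* Round i up to the next multiple of 4: add 3, then clear the low two bits. */
static inline uint32_t round_up_to_quad(uint32_t i)
{
	return (i + 3) & ~3u;
}
```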

static inline int
svc_safe_putnetobj(struct kvec *resv, struct xdr_netobj *o)
{
	u8 *p;

	if (resv->iov_len + 4 > PAGE_SIZE)
		return -1;
	svc_putnl(resv, o->len);
	p = resv->iov_base + resv->iov_len;
	resv->iov_len += round_up_to_quad(o->len);
	if (resv->iov_len > PAGE_SIZE)
		return -1;
	memcpy(p, o->data, o->len);
	memset(p + o->len, 0, round_up_to_quad(o->len) - o->len);
	return 0;
}
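The same encoding — a 4-byte big-endian length, the raw bytes, then zero padding to a quad boundary — can be sketched in userland. This is an illustrative analogue of `svc_safe_putnetobj()`, not kernel code; `put_netobj` and its flat-buffer signature are invented for the example:

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/*
 * Append a length-prefixed, quad-padded opaque to buf.
 * Returns the new used length, or -1 if it would overflow cap.
 */
static int put_netobj(unsigned char *buf, size_t used, size_t cap,
		      const unsigned char *data, uint32_t len)
{
	uint32_t padded = (len + 3) & ~3u;
	uint32_t be = htonl(len);

	if (used + 4 + padded > cap)
		return -1;
	memcpy(buf + used, &be, 4);
	memcpy(buf + used + 4, data, len);
	memset(buf + used + 4 + len, 0, padded - len);
	return (int)(used + 4 + padded);
}
```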

/*
 * Decode and verify a Call's verifier field. For RPC_AUTH_GSS Calls,
 * the body of this field contains a variable length checksum.
 *
 * GSS-specific auth_stat values are mandated by RFC 2203 Section
 * 5.3.3.3.
 */
static int
svcauth_gss_verify_header(struct svc_rqst *rqstp, struct rsc *rsci,
			  __be32 *rpcstart, struct rpc_gss_wire_cred *gc)
{
	struct xdr_stream *xdr = &rqstp->rq_arg_stream;
	struct gss_ctx *ctx_id = rsci->mechctx;
	u32 flavor, maj_stat;
	struct xdr_buf rpchdr;
	struct xdr_netobj checksum;
	struct kvec iov;

	/*
	 * Compute the checksum of the incoming Call from the
	 * XID field to credential field:
	 */
	iov.iov_base = rpcstart;
	iov.iov_len = (u8 *)xdr->p - (u8 *)rpcstart;
	xdr_buf_from_iov(&iov, &rpchdr);

	/* Call's verf field: */
	if (xdr_stream_decode_opaque_auth(xdr, &flavor,
					  (void **)&checksum.data,
					  &checksum.len) < 0) {
		rqstp->rq_auth_stat = rpc_autherr_badverf;
		return SVC_DENIED;
	}
	if (flavor != RPC_AUTH_GSS) {
		rqstp->rq_auth_stat = rpc_autherr_badverf;
		return SVC_DENIED;
	}

	if (rqstp->rq_deferred)
		return SVC_OK;
	maj_stat = gss_verify_mic(ctx_id, &rpchdr, &checksum);
	if (maj_stat != GSS_S_COMPLETE) {
		trace_rpcgss_svc_mic(rqstp, maj_stat);
		rqstp->rq_auth_stat = rpcsec_gsserr_credproblem;
		return SVC_DENIED;
	}

	if (gc->gc_seq > MAXSEQ) {
		trace_rpcgss_svc_seqno_large(rqstp, gc->gc_seq);
		rqstp->rq_auth_stat = rpcsec_gsserr_ctxproblem;
		return SVC_DENIED;
	}
	if (!gss_check_seq_num(rqstp, rsci, gc->gc_seq))
		return SVC_DROP;
	return SVC_OK;
}

static int
gss_write_null_verf(struct svc_rqst *rqstp)
{
	__be32 *p;

	svc_putnl(rqstp->rq_res.head, RPC_AUTH_NULL);
	p = rqstp->rq_res.head->iov_base + rqstp->rq_res.head->iov_len;
	/* don't really need to check if head->iov_len > PAGE_SIZE ... */
	*p++ = 0;
	if (!xdr_ressize_check(rqstp, p))
		return -1;
	return 0;
}

static int
gss_write_verf(struct svc_rqst *rqstp, struct gss_ctx *ctx_id, u32 seq)
{
	__be32 *xdr_seq;
	u32 maj_stat;
	struct xdr_buf verf_data;
	struct xdr_netobj mic;
	__be32 *p;
	struct kvec iov;
	int err = -1;

	svc_putnl(rqstp->rq_res.head, RPC_AUTH_GSS);
	xdr_seq = kmalloc(4, GFP_KERNEL);
	if (!xdr_seq)
		return -ENOMEM;
	*xdr_seq = htonl(seq);

	iov.iov_base = xdr_seq;
	iov.iov_len = 4;
	xdr_buf_from_iov(&iov, &verf_data);
	p = rqstp->rq_res.head->iov_base + rqstp->rq_res.head->iov_len;
	mic.data = (u8 *)(p + 1);
	maj_stat = gss_get_mic(ctx_id, &verf_data, &mic);
	if (maj_stat != GSS_S_COMPLETE)
		goto out;
	*p++ = htonl(mic.len);
	memset((u8 *)p + mic.len, 0, round_up_to_quad(mic.len) - mic.len);
	p += XDR_QUADLEN(mic.len);
	if (!xdr_ressize_check(rqstp, p))
		goto out;
	err = 0;
out:
	kfree(xdr_seq);
	return err;
}

struct gss_domain {
	struct auth_domain	h;
	u32			pseudoflavor;
};

static struct auth_domain *
find_gss_auth_domain(struct gss_ctx *ctx, u32 svc)
{
	char *name;

	name = gss_service_to_auth_domain_name(ctx->mech_type, svc);
	if (!name)
		return NULL;
	return auth_domain_find(name);
}

static struct auth_ops svcauthops_gss;

u32 svcauth_gss_flavor(struct auth_domain *dom)
{
	struct gss_domain *gd = container_of(dom, struct gss_domain, h);

	return gd->pseudoflavor;
}

EXPORT_SYMBOL_GPL(svcauth_gss_flavor);

struct auth_domain *
svcauth_gss_register_pseudoflavor(u32 pseudoflavor, char *name)
{
	struct gss_domain	*new;
	struct auth_domain	*test;
	int			stat = -ENOMEM;

	new = kmalloc(sizeof(*new), GFP_KERNEL);
	if (!new)
		goto out;
	kref_init(&new->h.ref);
	new->h.name = kstrdup(name, GFP_KERNEL);
	if (!new->h.name)
		goto out_free_dom;
	new->h.flavour = &svcauthops_gss;
	new->pseudoflavor = pseudoflavor;

	test = auth_domain_lookup(name, &new->h);
	if (test != &new->h) {
		pr_warn("svc: duplicate registration of gss pseudo flavour %s.\n",
			name);
		stat = -EADDRINUSE;
		auth_domain_put(test);
		goto out_free_name;
	}
	return test;

out_free_name:
	kfree(new->h.name);
out_free_dom:
	kfree(new);
out:
	return ERR_PTR(stat);
}
EXPORT_SYMBOL_GPL(svcauth_gss_register_pseudoflavor);

/*
 * RFC 2203, Section 5.3.2.2
 *
 *	struct rpc_gss_integ_data {
 *		opaque databody_integ<>;
 *		opaque checksum<>;
 *	};
 *
 *	struct rpc_gss_data_t {
 *		unsigned int seq_num;
 *		proc_req_arg_t arg;
 *	};
 */
static noinline_for_stack int
svcauth_gss_unwrap_integ(struct svc_rqst *rqstp, u32 seq, struct gss_ctx *ctx)
{
	struct gss_svc_data *gsd = rqstp->rq_auth_data;
	struct xdr_stream *xdr = &rqstp->rq_arg_stream;
	u32 len, offset, seq_num, maj_stat;
	struct xdr_buf *buf = xdr->buf;
	struct xdr_buf databody_integ;
	struct xdr_netobj checksum;

	/* NFS READ normally uses splice to send data in-place. However
	 * the data in cache can change after the reply's MIC is computed
	 * but before the RPC reply is sent. To prevent the client from
	 * rejecting the server-computed MIC in this somewhat rare case,
	 * do not use splice with the GSS integrity service.
	 */
	clear_bit(RQ_SPLICE_OK, &rqstp->rq_flags);

	/* Did we already verify the signature on the original pass through? */
	if (rqstp->rq_deferred)
		return 0;

	if (xdr_stream_decode_u32(xdr, &len) < 0)
		goto unwrap_failed;
	if (len & 3)
		goto unwrap_failed;
	offset = xdr_stream_pos(xdr);
	if (xdr_buf_subsegment(buf, &databody_integ, offset, len))
		goto unwrap_failed;

	/*
	 * The xdr_stream now points to the @seq_num field. The next
	 * XDR data item is the @arg field, which contains the clear
	 * text RPC program payload. The checksum, which follows the
	 * @arg field, is located and decoded without updating the
	 * xdr_stream.
	 */

	offset += len;
	if (xdr_decode_word(buf, offset, &checksum.len))
		goto unwrap_failed;
	if (checksum.len > sizeof(gsd->gsd_scratch))
		goto unwrap_failed;
	checksum.data = gsd->gsd_scratch;
	if (read_bytes_from_xdr_buf(buf, offset + XDR_UNIT, checksum.data,
				    checksum.len))
		goto unwrap_failed;

	maj_stat = gss_verify_mic(ctx, &databody_integ, &checksum);
	if (maj_stat != GSS_S_COMPLETE)
		goto bad_mic;

	/* The received seqno is protected by the checksum. */
	if (xdr_stream_decode_u32(xdr, &seq_num) < 0)
		goto unwrap_failed;
	if (seq_num != seq)
		goto bad_seqno;

	xdr_truncate_decode(xdr, XDR_UNIT + checksum.len);
	return 0;

unwrap_failed:
	trace_rpcgss_svc_unwrap_failed(rqstp);
	return -EINVAL;
bad_seqno:
	trace_rpcgss_svc_seqno_bad(rqstp, seq, seq_num);
	return -EINVAL;
bad_mic:
	trace_rpcgss_svc_mic(rqstp, maj_stat);
	return -EINVAL;
}

/*
 * RFC 2203, Section 5.3.2.3
 *
 *	struct rpc_gss_priv_data {
 *		opaque databody_priv<>
 *	};
 *
 *	struct rpc_gss_data_t {
 *		unsigned int seq_num;
 *		proc_req_arg_t arg;
 *	};
 */
static noinline_for_stack int
svcauth_gss_unwrap_priv(struct svc_rqst *rqstp, u32 seq, struct gss_ctx *ctx)
{
	struct xdr_stream *xdr = &rqstp->rq_arg_stream;
	u32 len, maj_stat, seq_num, offset;
	struct xdr_buf *buf = xdr->buf;
	unsigned int saved_len;

	clear_bit(RQ_SPLICE_OK, &rqstp->rq_flags);

	if (xdr_stream_decode_u32(xdr, &len) < 0)
		goto unwrap_failed;
	if (rqstp->rq_deferred) {
		/* Already decrypted last time through! The sequence number
		 * check at out_seq is unnecessary but harmless: */
		goto out_seq;
	}
	if (len > xdr_stream_remaining(xdr))
		goto unwrap_failed;
	offset = xdr_stream_pos(xdr);

	saved_len = buf->len;
	maj_stat = gss_unwrap(ctx, offset, offset + len, buf);
	if (maj_stat != GSS_S_COMPLETE)
		goto bad_unwrap;
	xdr->nwords -= XDR_QUADLEN(saved_len - buf->len);

out_seq:
	/* gss_unwrap() decrypted the sequence number. */
	if (xdr_stream_decode_u32(xdr, &seq_num) < 0)
		goto unwrap_failed;
	if (seq_num != seq)
		goto bad_seqno;
	return 0;

unwrap_failed:
	trace_rpcgss_svc_unwrap_failed(rqstp);
	return -EINVAL;
bad_seqno:
	trace_rpcgss_svc_seqno_bad(rqstp, seq, seq_num);
	return -EINVAL;
bad_unwrap:
	trace_rpcgss_svc_unwrap(rqstp, maj_stat);
	return -EINVAL;
}

static int
svcauth_gss_set_client(struct svc_rqst *rqstp)
{
	struct gss_svc_data *svcdata = rqstp->rq_auth_data;
	struct rsc *rsci = svcdata->rsci;
	struct rpc_gss_wire_cred *gc = &svcdata->clcred;
	int stat;

	rqstp->rq_auth_stat = rpc_autherr_badcred;

	/*
	 * A gss export can be specified either by:
	 * 	export	*(sec=krb5,rw)
	 * or by
	 * 	export gss/krb5(rw)
	 * The latter is deprecated; but for backwards compatibility reasons
	 * the nfsd code will still fall back on trying it if the former
	 * doesn't work; so we try to make both available to nfsd, below.
	 */
	rqstp->rq_gssclient = find_gss_auth_domain(rsci->mechctx, gc->gc_svc);
	if (rqstp->rq_gssclient == NULL)
		return SVC_DENIED;
	stat = svcauth_unix_set_client(rqstp);
	if (stat == SVC_DROP || stat == SVC_CLOSE)
		return stat;

	rqstp->rq_auth_stat = rpc_auth_ok;
	return SVC_OK;
}

static inline int
gss_write_init_verf(struct cache_detail *cd, struct svc_rqst *rqstp,
		struct xdr_netobj *out_handle, int *major_status)
{
	struct rsc *rsci;
	int        rc;

	if (*major_status != GSS_S_COMPLETE)
		return gss_write_null_verf(rqstp);
	rsci = gss_svc_searchbyctx(cd, out_handle);
	if (rsci == NULL) {
		*major_status = GSS_S_NO_CONTEXT;
		return gss_write_null_verf(rqstp);
	}
	rc = gss_write_verf(rqstp, rsci->mechctx, GSS_SEQ_WIN);
	cache_put(&rsci->h, cd);
	return rc;
}

static void gss_free_in_token_pages(struct gssp_in_token *in_token)
{
	u32 inlen;
	int i;

	i = 0;
	inlen = in_token->page_len;
	while (inlen) {
		if (in_token->pages[i])
			put_page(in_token->pages[i]);
		inlen -= inlen > PAGE_SIZE ? PAGE_SIZE : inlen;
		i++;
	}

	kfree(in_token->pages);
	in_token->pages = NULL;
}

static int gss_read_proxy_verf(struct svc_rqst *rqstp,
			       struct rpc_gss_wire_cred *gc,
			       struct xdr_netobj *in_handle,
			       struct gssp_in_token *in_token)
{
	struct xdr_stream *xdr = &rqstp->rq_arg_stream;
	unsigned int length, pgto_offs, pgfrom_offs;
	int pages, i, pgto, pgfrom;
	size_t to_offs, from_offs;
	u32 inlen;

	if (dup_netobj(in_handle, &gc->gc_ctx))
		return SVC_CLOSE;

	/*
	 *  RFC 2203 Section 5.2.2
	 *
	 *	struct rpc_gss_init_arg {
	 *		opaque gss_token<>;
	 *	};
	 */
	if (xdr_stream_decode_u32(xdr, &inlen) < 0)
		goto out_denied_free;
	if (inlen > xdr_stream_remaining(xdr))
		goto out_denied_free;

	pages = DIV_ROUND_UP(inlen, PAGE_SIZE);
	in_token->pages = kcalloc(pages, sizeof(struct page *), GFP_KERNEL);
	if (!in_token->pages)
		goto out_denied_free;
	in_token->page_base = 0;
	in_token->page_len = inlen;
	for (i = 0; i < pages; i++) {
		in_token->pages[i] = alloc_page(GFP_KERNEL);
		if (!in_token->pages[i]) {
			gss_free_in_token_pages(in_token);
			goto out_denied_free;
		}
	}

	length = min_t(unsigned int, inlen, (char *)xdr->end - (char *)xdr->p);
	memcpy(page_address(in_token->pages[0]), xdr->p, length);
	inlen -= length;

	to_offs = length;
	from_offs = rqstp->rq_arg.page_base;
	while (inlen) {
		pgto = to_offs >> PAGE_SHIFT;
		pgfrom = from_offs >> PAGE_SHIFT;
		pgto_offs = to_offs & ~PAGE_MASK;
		pgfrom_offs = from_offs & ~PAGE_MASK;

		length = min_t(unsigned int, inlen,
			       min_t(unsigned int, PAGE_SIZE - pgto_offs,
				     PAGE_SIZE - pgfrom_offs));
		memcpy(page_address(in_token->pages[pgto]) + pgto_offs,
		       page_address(rqstp->rq_arg.pages[pgfrom]) + pgfrom_offs,
		       length);

		to_offs += length;
		from_offs += length;
		inlen -= length;
	}
	return 0;

out_denied_free:
	kfree(in_handle->data);
	return SVC_DENIED;
}
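The copy loop in gss_read_proxy_verf() advances by whichever page boundary (source or destination) is nearer, since the two sides need not share alignment. Here is a userland sketch of that pattern with a toy page size; `paged_copy`, `PG_SIZE`, and the flat `char **` page arrays are invented for illustration and are not kernel APIs:

```c
#include <stddef.h>
#include <string.h>

#define PG_SIZE 8	/* toy page size; the kernel uses PAGE_SIZE */

/*
 * Copy len bytes between two paged buffers whose offsets need not be
 * aligned to each other. Each iteration copies up to the nearer of the
 * two page boundaries, or the bytes remaining, whichever is smallest.
 */
static void paged_copy(char **to_pages, size_t to_offs,
		       char **from_pages, size_t from_offs, size_t len)
{
	while (len) {
		size_t pgto = to_offs / PG_SIZE, pgfrom = from_offs / PG_SIZE;
		size_t to_in = to_offs % PG_SIZE, from_in = from_offs % PG_SIZE;
		size_t chunk = PG_SIZE - to_in;

		if (PG_SIZE - from_in < chunk)
			chunk = PG_SIZE - from_in;
		if (len < chunk)
			chunk = len;
		memcpy(to_pages[pgto] + to_in, from_pages[pgfrom] + from_in,
		       chunk);
		to_offs += chunk;
		from_offs += chunk;
		len -= chunk;
	}
}
```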

static inline int
gss_write_resv(struct kvec *resv, size_t size_limit,
	       struct xdr_netobj *out_handle, struct xdr_netobj *out_token,
	       int major_status, int minor_status)
{
	if (resv->iov_len + 4 > size_limit)
		return -1;
	svc_putnl(resv, RPC_SUCCESS);
	if (svc_safe_putnetobj(resv, out_handle))
		return -1;
	if (resv->iov_len + 3 * 4 > size_limit)
		return -1;
	svc_putnl(resv, major_status);
	svc_putnl(resv, minor_status);
	svc_putnl(resv, GSS_SEQ_WIN);
	if (svc_safe_putnetobj(resv, out_token))
		return -1;
	return 0;
}

/*
 * Having read the cred already and found we're in the context
 * initiation case, read the verifier and initiate (or check the results
 * of) upcalls to userspace for help with context initiation. If
 * the upcall results are available, write the verifier and result.
 * Otherwise, drop the request pending an answer to the upcall.
 */
static int
svcauth_gss_legacy_init(struct svc_rqst *rqstp,
			struct rpc_gss_wire_cred *gc)
{
	struct xdr_stream *xdr = &rqstp->rq_arg_stream;
	struct kvec *resv = &rqstp->rq_res.head[0];
	struct rsi *rsip, rsikey;
	__be32 *p;
	u32 len;
	int ret;
	struct sunrpc_net *sn = net_generic(SVC_NET(rqstp), sunrpc_net_id);

	memset(&rsikey, 0, sizeof(rsikey));
	if (dup_netobj(&rsikey.in_handle, &gc->gc_ctx))
		return SVC_CLOSE;

	/*
	 *  RFC 2203 Section 5.2.2
	 *
	 *	struct rpc_gss_init_arg {
	 *		opaque gss_token<>;
	 *	};
	 */
	if (xdr_stream_decode_u32(xdr, &len) < 0) {
		kfree(rsikey.in_handle.data);
		return SVC_DENIED;
	}
	p = xdr_inline_decode(xdr, len);
	if (!p) {
		kfree(rsikey.in_handle.data);
		return SVC_DENIED;
	}
	rsikey.in_token.data = kmalloc(len, GFP_KERNEL);
	if (ZERO_OR_NULL_PTR(rsikey.in_token.data)) {
		kfree(rsikey.in_handle.data);
		return SVC_CLOSE;
	}
	memcpy(rsikey.in_token.data, p, len);
	rsikey.in_token.len = len;

	/* Perform upcall, or find upcall result: */
	rsip = rsi_lookup(sn->rsi_cache, &rsikey);
	rsi_free(&rsikey);
	if (!rsip)
		return SVC_CLOSE;
	if (cache_check(sn->rsi_cache, &rsip->h, &rqstp->rq_chandle) < 0)
		/* No upcall result: */
		return SVC_CLOSE;

	ret = SVC_CLOSE;
	/* Got an answer to the upcall; use it: */
	if (gss_write_init_verf(sn->rsc_cache, rqstp,
				&rsip->out_handle, &rsip->major_status))
		goto out;
	if (gss_write_resv(resv, PAGE_SIZE,
			   &rsip->out_handle, &rsip->out_token,
			   rsip->major_status, rsip->minor_status))
		goto out;

	ret = SVC_COMPLETE;
out:
	cache_put(&rsip->h, sn->rsi_cache);
	return ret;
}

static int gss_proxy_save_rsc(struct cache_detail *cd,
				struct gssp_upcall_data *ud,
				uint64_t *handle)
{
	struct rsc rsci, *rscp = NULL;
	static atomic64_t ctxhctr;
	long long ctxh;
	struct gss_api_mech *gm = NULL;
	time64_t expiry;
	int status;

	memset(&rsci, 0, sizeof(rsci));
	/* context handle */
	status = -ENOMEM;
	/* the handle needs to be just a unique id,
	 * use a static counter */
	ctxh = atomic64_inc_return(&ctxhctr);

	/* make a copy for the caller */
	*handle = ctxh;

	/* make a copy for the rsc cache */
	if (dup_to_netobj(&rsci.handle, (char *)handle, sizeof(uint64_t)))
		goto out;
	rscp = rsc_lookup(cd, &rsci);
	if (!rscp)
		goto out;

	/* creds */
	if (!ud->found_creds) {
		/* userspace seems buggy; we should always get at least a
		 * mapping to nobody */
		goto out;
	} else {
		struct timespec64 boot;

		/* steal creds */
		rsci.cred = ud->creds;
		memset(&ud->creds, 0, sizeof(struct svc_cred));

		status = -EOPNOTSUPP;
		/* get mech handle from OID */
		gm = gss_mech_get_by_OID(&ud->mech_oid);
		if (!gm)
			goto out;
		rsci.cred.cr_gss_mech = gm;

		status = -EINVAL;
		/* mech-specific data: */
		status = gss_import_sec_context(ud->out_handle.data,
						ud->out_handle.len,
						gm, &rsci.mechctx,
						&expiry, GFP_KERNEL);
		if (status)
			goto out;

		/* cache expiry times are measured against boot time */
		getboottime64(&boot);
		expiry -= boot.tv_sec;
	}

	rsci.h.expiry_time = expiry;
	rscp = rsc_update(cd, &rsci, rscp);
	status = 0;
out:
	rsc_free(&rsci);
	if (rscp)
		cache_put(&rscp->h, cd);
	else
		status = -ENOMEM;
	return status;
}

static int svcauth_gss_proxy_init(struct svc_rqst *rqstp,
				  struct rpc_gss_wire_cred *gc)
{
	struct kvec *resv = &rqstp->rq_res.head[0];
	struct xdr_netobj cli_handle;
	struct gssp_upcall_data ud;
	uint64_t handle;
	int status;
	int ret;
	struct net *net = SVC_NET(rqstp);
	struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);

	memset(&ud, 0, sizeof(ud));
	ret = gss_read_proxy_verf(rqstp, gc, &ud.in_handle, &ud.in_token);
	if (ret)
		return ret;

	ret = SVC_CLOSE;

	/* Perform synchronous upcall to gss-proxy */
	status = gssp_accept_sec_context_upcall(net, &ud);
	if (status)
		goto out;

	trace_rpcgss_svc_accept_upcall(rqstp, ud.major_status, ud.minor_status);

	switch (ud.major_status) {
	case GSS_S_CONTINUE_NEEDED:
		cli_handle = ud.out_handle;
		break;
	case GSS_S_COMPLETE:
		status = gss_proxy_save_rsc(sn->rsc_cache, &ud, &handle);
		if (status)
			goto out;
		cli_handle.data = (u8 *)&handle;
		cli_handle.len = sizeof(handle);
		break;
	default:
		goto out;
	}

	/* Got an answer to the upcall; use it: */
	if (gss_write_init_verf(sn->rsc_cache, rqstp,
				&cli_handle, &ud.major_status))
		goto out;
	if (gss_write_resv(resv, PAGE_SIZE,
			   &cli_handle, &ud.out_token,
			   ud.major_status, ud.minor_status))
		goto out;

	ret = SVC_COMPLETE;
out:
	gss_free_in_token_pages(&ud.in_token);
	gssp_free_upcall_data(&ud);
	return ret;
}

/*
 * Try to set the sn->use_gss_proxy variable to a new value. We only allow
 * it to be changed if it's currently undefined (-1). If it's any other value
 * then return -EBUSY unless the type wouldn't have changed anyway.
 */
static int set_gss_proxy(struct net *net, int type)
{
	struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);
	int ret;

	WARN_ON_ONCE(type != 0 && type != 1);
	ret = cmpxchg(&sn->use_gss_proxy, -1, type);
	if (ret != -1 && ret != type)
		return -EBUSY;
	return 0;
}

static bool use_gss_proxy(struct net *net)
{
	struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);

	/* If use_gss_proxy is still undefined, then try to disable it */
	if (sn->use_gss_proxy == -1)
		set_gss_proxy(net, 0);
	return sn->use_gss_proxy;
}

static noinline_for_stack int
svcauth_gss_proc_init(struct svc_rqst *rqstp, struct rpc_gss_wire_cred *gc)
{
	struct xdr_stream *xdr = &rqstp->rq_arg_stream;
	u32 flavor, len;
	void *body;

	/* Call's verf field: */
	if (xdr_stream_decode_opaque_auth(xdr, &flavor, &body, &len) < 0)
		return SVC_GARBAGE;
	if (flavor != RPC_AUTH_NULL || len != 0) {
		rqstp->rq_auth_stat = rpc_autherr_badverf;
		return SVC_DENIED;
	}

	if (gc->gc_proc == RPC_GSS_PROC_INIT && gc->gc_ctx.len != 0) {
		rqstp->rq_auth_stat = rpc_autherr_badcred;
		return SVC_DENIED;
	}

	if (!use_gss_proxy(SVC_NET(rqstp)))
		return svcauth_gss_legacy_init(rqstp, gc);
	return svcauth_gss_proxy_init(rqstp, gc);
}

#ifdef CONFIG_PROC_FS

static ssize_t write_gssp(struct file *file, const char __user *buf,
			  size_t count, loff_t *ppos)
{
	struct net *net = pde_data(file_inode(file));
	char tbuf[20];
	unsigned long i;
	int res;

	if (*ppos || count > sizeof(tbuf) - 1)
		return -EINVAL;
	if (copy_from_user(tbuf, buf, count))
		return -EFAULT;

	tbuf[count] = 0;
	res = kstrtoul(tbuf, 0, &i);
	if (res)
		return res;
	if (i != 1)
		return -EINVAL;
	res = set_gssp_clnt(net);
	if (res)
		return res;
	res = set_gss_proxy(net, 1);
	if (res)
		return res;
	return count;
}

static ssize_t read_gssp(struct file *file, char __user *buf,
			 size_t count, loff_t *ppos)
{
	struct net *net = pde_data(file_inode(file));
	struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);
	unsigned long p = *ppos;
	char tbuf[10];
	size_t len;

	snprintf(tbuf, sizeof(tbuf), "%d\n", sn->use_gss_proxy);
	len = strlen(tbuf);
	if (p >= len)
		return 0;
	len -= p;
	if (len > count)
		len = count;
	if (copy_to_user(buf, (void *)(tbuf + p), len))
		return -EFAULT;
	*ppos += len;
	return len;
}

static const struct proc_ops use_gss_proxy_proc_ops = {
	.proc_open	= nonseekable_open,
	.proc_write	= write_gssp,
	.proc_read	= read_gssp,
};

static int create_use_gss_proxy_proc_entry(struct net *net)
{
	struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);
	struct proc_dir_entry **p = &sn->use_gssp_proc;

	sn->use_gss_proxy = -1;
	*p = proc_create_data("use-gss-proxy", S_IFREG | 0600,
			      sn->proc_net_rpc,
			      &use_gss_proxy_proc_ops, net);
	if (!*p)
		return -ENOMEM;
	init_gssp_clnt(sn);
	return 0;
}

static void destroy_use_gss_proxy_proc_entry(struct net *net)
{
	struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);

	if (sn->use_gssp_proc) {
		remove_proc_entry("use-gss-proxy", sn->proc_net_rpc);
		clear_gssp_clnt(sn);
	}
}

#else /* CONFIG_PROC_FS */

static int create_use_gss_proxy_proc_entry(struct net *net)
{
	return 0;
}

static void destroy_use_gss_proxy_proc_entry(struct net *net) {}

#endif /* CONFIG_PROC_FS */

/*
 * The Call's credential body should contain a struct rpc_gss_cred_t.
 *
 * RFC 2203 Section 5
 *
 *	struct rpc_gss_cred_t {
 *		union switch (unsigned int version) {
 *		case RPCSEC_GSS_VERS_1:
 *			struct {
 *				rpc_gss_proc_t gss_proc;
 *				unsigned int seq_num;
 *				rpc_gss_service_t service;
 *				opaque handle<>;
 *			} rpc_gss_cred_vers_1_t;
 *		}
 *	};
 */
static bool
svcauth_gss_decode_credbody(struct xdr_stream *xdr,
			    struct rpc_gss_wire_cred *gc,
			    __be32 **rpcstart)
{
	ssize_t handle_len;
	u32 body_len;
	__be32 *p;

	p = xdr_inline_decode(xdr, XDR_UNIT);
	if (!p)
		return false;
	/*
	 * start of rpc packet is 7 u32's back from here:
	 * xid direction rpcversion prog vers proc flavour
	 */
	*rpcstart = p - 7;
	body_len = be32_to_cpup(p);
	if (body_len > RPC_MAX_AUTH_SIZE)
		return false;

	/* struct rpc_gss_cred_t */
	if (xdr_stream_decode_u32(xdr, &gc->gc_v) < 0)
		return false;
	if (xdr_stream_decode_u32(xdr, &gc->gc_proc) < 0)
		return false;
	if (xdr_stream_decode_u32(xdr, &gc->gc_seq) < 0)
		return false;
	if (xdr_stream_decode_u32(xdr, &gc->gc_svc) < 0)
		return false;
	handle_len = xdr_stream_decode_opaque_inline(xdr,
						     (void **)&gc->gc_ctx.data,
						     body_len);
	if (handle_len < 0)
		return false;
	if (body_len != XDR_UNIT * 5 + xdr_align_size(handle_len))
		return false;

	gc->gc_ctx.len = handle_len;
	return true;
}

/**
 * svcauth_gss_accept - Decode and validate incoming RPC_AUTH_GSS credential
 * @rqstp: RPC transaction
 *
 * Return values:
 *   %SVC_OK: Success
 *   %SVC_COMPLETE: GSS context lifetime event
 *   %SVC_DENIED: Credential or verifier is not valid
 *   %SVC_GARBAGE: Failed to decode credential or verifier
 *   %SVC_CLOSE: Temporary failure
 *
 * The rqstp->rq_auth_stat field is also set (see RFCs 2203 and 5531).
 */
static int
svcauth_gss_accept(struct svc_rqst *rqstp)
{
	struct kvec *resv = &rqstp->rq_res.head[0];
	struct gss_svc_data *svcdata = rqstp->rq_auth_data;
	__be32 *rpcstart;
	struct rpc_gss_wire_cred *gc;
	struct rsc *rsci = NULL;
	__be32 *reject_stat = resv->iov_base + resv->iov_len;
	int ret;
	struct sunrpc_net *sn = net_generic(SVC_NET(rqstp), sunrpc_net_id);

	rqstp->rq_auth_stat = rpc_autherr_badcred;
	if (!svcdata)
		svcdata = kmalloc(sizeof(*svcdata), GFP_KERNEL);
	if (!svcdata)
		goto auth_err;
	rqstp->rq_auth_data = svcdata;
	svcdata->verf_start = NULL;
	svcdata->rsci = NULL;
	gc = &svcdata->clcred;

	if (!svcauth_gss_decode_credbody(&rqstp->rq_arg_stream, gc, &rpcstart))
		goto auth_err;
	if (gc->gc_v != RPC_GSS_VERSION)
		goto auth_err;

	switch (gc->gc_proc) {
	case RPC_GSS_PROC_INIT:
	case RPC_GSS_PROC_CONTINUE_INIT:
		if (rqstp->rq_proc != 0)
			goto auth_err;
		return svcauth_gss_proc_init(rqstp, gc);
	case RPC_GSS_PROC_DESTROY:
		if (rqstp->rq_proc != 0)
			goto auth_err;
		fallthrough;
	case RPC_GSS_PROC_DATA:
		rqstp->rq_auth_stat = rpcsec_gsserr_credproblem;
		rsci = gss_svc_searchbyctx(sn->rsc_cache, &gc->gc_ctx);
		if (!rsci)
			goto auth_err;
		switch (svcauth_gss_verify_header(rqstp, rsci, rpcstart, gc)) {
		case SVC_OK:
			break;
		case SVC_DENIED:
			goto auth_err;
		case SVC_DROP:
			goto drop;
		}
		break;
	default:
		if (rqstp->rq_proc != 0)
			goto auth_err;
		rqstp->rq_auth_stat = rpc_autherr_rejectedcred;
		goto auth_err;
	}

	/* now act upon the command: */
	switch (gc->gc_proc) {
	case RPC_GSS_PROC_DESTROY:
		if (gss_write_verf(rqstp, rsci->mechctx, gc->gc_seq))
			goto auth_err;
		/* Delete the entry from the cache_list and call cache_put */
		sunrpc_cache_unhash(sn->rsc_cache, &rsci->h);
		if (resv->iov_len + 4 > PAGE_SIZE)
			goto drop;
		svc_putnl(resv, RPC_SUCCESS);
		goto complete;
	case RPC_GSS_PROC_DATA:
		rqstp->rq_auth_stat = rpcsec_gsserr_ctxproblem;
		svcdata->verf_start = resv->iov_base + resv->iov_len;
		if (gss_write_verf(rqstp, rsci->mechctx, gc->gc_seq))
			goto auth_err;
		rqstp->rq_cred = rsci->cred;
		get_group_info(rsci->cred.cr_group_info);
		rqstp->rq_auth_stat = rpc_autherr_badcred;
		switch (gc->gc_svc) {
		case RPC_GSS_SVC_NONE:
			break;
		case RPC_GSS_SVC_INTEGRITY:
			/* placeholders for length and seq. number: */
			svc_putnl(resv, 0);
			svc_putnl(resv, 0);
			if (svcauth_gss_unwrap_integ(rqstp, gc->gc_seq,
						     rsci->mechctx))
				goto garbage_args;
			rqstp->rq_auth_slack = RPC_MAX_AUTH_SIZE;
			break;
		case RPC_GSS_SVC_PRIVACY:
			/* placeholders for length and seq. number: */
			svc_putnl(resv, 0);
			svc_putnl(resv, 0);
			if (svcauth_gss_unwrap_priv(rqstp, gc->gc_seq,
						    rsci->mechctx))
				goto garbage_args;
			rqstp->rq_auth_slack = RPC_MAX_AUTH_SIZE * 2;
			break;
		default:
			goto auth_err;
		}
		svcdata->rsci = rsci;
		cache_get(&rsci->h);
		rqstp->rq_cred.cr_flavor = gss_svc_to_pseudoflavor(
					rsci->mechctx->mech_type,
					GSS_C_QOP_DEFAULT,
					gc->gc_svc);
		ret = SVC_OK;
		trace_rpcgss_svc_authenticate(rqstp, gc);
		goto out;
	}
garbage_args:
	ret = SVC_GARBAGE;
	goto out;
auth_err:
	/* Restore write pointer to its original value: */
	xdr_ressize_check(rqstp, reject_stat);
	ret = SVC_DENIED;
	goto out;
complete:
	ret = SVC_COMPLETE;
	goto out;
drop:
	ret = SVC_CLOSE;
out:
	if (rsci)
		cache_put(&rsci->h, sn->rsc_cache);
	return ret;
}

static __be32 *
svcauth_gss_prepare_to_wrap(struct xdr_buf *resbuf, struct gss_svc_data *gsd)
{
	__be32 *p;
	u32 verf_len;

	p = gsd->verf_start;
	gsd->verf_start = NULL;

	/* If the reply stat is nonzero, don't wrap: */
	if (*(p - 1) != rpc_success)
		return NULL;
	/* Skip the verifier: */
	p += 1;
	verf_len = ntohl(*p++);
	p += XDR_QUADLEN(verf_len);
	/* move accept_stat to right place: */
	memcpy(p, p + 2, 4);
	/* Also don't wrap if the accept stat is nonzero: */
	if (*p != rpc_success) {
		resbuf->head[0].iov_len -= 2 * 4;
		return NULL;
	}
	p++;
	return p;
}

static inline int
svcauth_gss_wrap_resp_integ(struct svc_rqst *rqstp)
{
	struct gss_svc_data *gsd = (struct gss_svc_data *)rqstp->rq_auth_data;
	struct rpc_gss_wire_cred *gc = &gsd->clcred;
	struct xdr_buf *resbuf = &rqstp->rq_res;
	struct xdr_buf integ_buf;
	struct xdr_netobj mic;
	struct kvec *resv;
	__be32 *p;
	int integ_offset, integ_len;
	int stat = -EINVAL;

	p = svcauth_gss_prepare_to_wrap(resbuf, gsd);
	if (p == NULL)
		goto out;
	integ_offset = (u8 *)(p + 1) - (u8 *)resbuf->head[0].iov_base;
	integ_len = resbuf->len - integ_offset;
	if (integ_len & 3)
		goto out;
	*p++ = htonl(integ_len);
	*p++ = htonl(gc->gc_seq);
	if (xdr_buf_subsegment(resbuf, &integ_buf, integ_offset, integ_len)) {
		WARN_ON_ONCE(1);
		goto out_err;
	}
	if (resbuf->tail[0].iov_base == NULL) {
		if (resbuf->head[0].iov_len + RPC_MAX_AUTH_SIZE > PAGE_SIZE)
			goto out_err;
		resbuf->tail[0].iov_base = resbuf->head[0].iov_base
						+ resbuf->head[0].iov_len;
		resbuf->tail[0].iov_len = 0;
	}
	resv = &resbuf->tail[0];
	mic.data = (u8 *)resv->iov_base + resv->iov_len + 4;
	if (gss_get_mic(gsd->rsci->mechctx, &integ_buf, &mic))
		goto out_err;
	svc_putnl(resv, mic.len);
	memset(mic.data + mic.len, 0,
	       round_up_to_quad(mic.len) - mic.len);
	resv->iov_len += XDR_QUADLEN(mic.len) << 2;
	/* not strictly required: */
	resbuf->len += XDR_QUADLEN(mic.len) << 2;
	if (resv->iov_len > PAGE_SIZE)
		goto out_err;
out:
	stat = 0;
out_err:
	return stat;
}

static inline int
svcauth_gss_wrap_resp_priv(struct svc_rqst *rqstp)
{
	struct gss_svc_data *gsd = (struct gss_svc_data *)rqstp->rq_auth_data;
	struct rpc_gss_wire_cred *gc = &gsd->clcred;
	struct xdr_buf *resbuf = &rqstp->rq_res;
	struct page **inpages = NULL;
	__be32 *p, *len;
	int offset;
	int pad;

	p = svcauth_gss_prepare_to_wrap(resbuf, gsd);
	if (p == NULL)
		return 0;
	len = p++;
	offset = (u8 *)p - (u8 *)resbuf->head[0].iov_base;
	*p++ = htonl(gc->gc_seq);
	inpages = resbuf->pages;
	/* XXX: Would be better to write some xdr helper functions for
	 * nfs{2,3,4}xdr.c that place the data right, instead of copying: */

	/*
	 * If there is currently tail data, make sure there is
	 * room for the head, tail, and 2 * RPC_MAX_AUTH_SIZE in
	 * the page, and move the current tail data such that
	 * there is RPC_MAX_AUTH_SIZE slack space available in
	 * both the head and tail.
	 */
	if (resbuf->tail[0].iov_base) {
		if (resbuf->tail[0].iov_base >=
		    resbuf->head[0].iov_base + PAGE_SIZE)
			return -EINVAL;
		if (resbuf->tail[0].iov_base < resbuf->head[0].iov_base)
			return -EINVAL;
		if (resbuf->tail[0].iov_len + resbuf->head[0].iov_len
				+ 2 * RPC_MAX_AUTH_SIZE > PAGE_SIZE)
			return -ENOMEM;
		memmove(resbuf->tail[0].iov_base + RPC_MAX_AUTH_SIZE,
			resbuf->tail[0].iov_base,
			resbuf->tail[0].iov_len);
		resbuf->tail[0].iov_base += RPC_MAX_AUTH_SIZE;
	}
	/*
	 * If there is no current tail data, make sure there is
	 * room for the head data, and 2 * RPC_MAX_AUTH_SIZE in the
	 * allotted page, and set up tail information such that there
	 * is RPC_MAX_AUTH_SIZE slack space available in both the
	 * head and tail.
	 */
	if (resbuf->tail[0].iov_base == NULL) {
		if (resbuf->head[0].iov_len + 2 * RPC_MAX_AUTH_SIZE > PAGE_SIZE)
			return -ENOMEM;
		resbuf->tail[0].iov_base = resbuf->head[0].iov_base
			+ resbuf->head[0].iov_len + RPC_MAX_AUTH_SIZE;
		resbuf->tail[0].iov_len = 0;
	}
	if (gss_wrap(gsd->rsci->mechctx, offset, resbuf, inpages))
		return -ENOMEM;
	*len = htonl(resbuf->len - offset);
	pad = 3 - ((resbuf->len - offset - 1) & 3);
	p = (__be32 *)(resbuf->tail[0].iov_base + resbuf->tail[0].iov_len);
	memset(p, 0, pad);
	resbuf->tail[0].iov_len += pad;
	resbuf->len += pad;
	return 0;
}

/**
 * svcauth_gss_release - Wrap payload and release resources
 * @rqstp: RPC transaction context
 *
 * Return values:
 *   %0: the Reply is ready to be sent
 *   %-ENOMEM: failed to allocate memory
 *   %-EINVAL: encoding error
 *
 * XXX: These return values do not match the return values documented
 * for the auth_ops ->release method in linux/sunrpc/svcauth.h.
 */
static int
svcauth_gss_release(struct svc_rqst *rqstp)
{
	struct sunrpc_net *sn = net_generic(SVC_NET(rqstp), sunrpc_net_id);
	struct gss_svc_data *gsd = rqstp->rq_auth_data;
	struct rpc_gss_wire_cred *gc;
	int stat;

	if (!gsd)
		goto out;
	gc = &gsd->clcred;
	if (gc->gc_proc != RPC_GSS_PROC_DATA)
		goto out;
	/* Release can be called twice, but we only wrap once. */
	if (gsd->verf_start == NULL)
		goto out;

	switch (gc->gc_svc) {
	case RPC_GSS_SVC_NONE:
		break;
	case RPC_GSS_SVC_INTEGRITY:
		stat = svcauth_gss_wrap_resp_integ(rqstp);
		if (stat)
			goto out_err;
		break;
	case RPC_GSS_SVC_PRIVACY:
		stat = svcauth_gss_wrap_resp_priv(rqstp);
		if (stat)
			goto out_err;
		break;
	/*
	 * For any other gc_svc value, svcauth_gss_accept() already set
	 * the auth_error appropriately; just fall through:
	 */
	}

out:
	stat = 0;
out_err:
	if (rqstp->rq_client)
		auth_domain_put(rqstp->rq_client);
	rqstp->rq_client = NULL;
	if (rqstp->rq_gssclient)
		auth_domain_put(rqstp->rq_gssclient);
	rqstp->rq_gssclient = NULL;
	if (rqstp->rq_cred.cr_group_info)
		put_group_info(rqstp->rq_cred.cr_group_info);
	rqstp->rq_cred.cr_group_info = NULL;
	if (gsd && gsd->rsci) {
		cache_put(&gsd->rsci->h, sn->rsc_cache);
		gsd->rsci = NULL;
	}
	return stat;
}

static void
svcauth_gss_domain_release_rcu(struct rcu_head *head)
{
	struct auth_domain *dom = container_of(head, struct auth_domain, rcu_head);
	struct gss_domain *gd = container_of(dom, struct gss_domain, h);

	kfree(dom->name);
	kfree(gd);
}

static void
svcauth_gss_domain_release(struct auth_domain *dom)
{
	call_rcu(&dom->rcu_head, svcauth_gss_domain_release_rcu);
}

static struct auth_ops svcauthops_gss = {
	.name		= "rpcsec_gss",
	.owner		= THIS_MODULE,
	.flavour	= RPC_AUTH_GSS,
	.accept		= svcauth_gss_accept,
	.release	= svcauth_gss_release,
	.domain_release	= svcauth_gss_domain_release,
	.set_client	= svcauth_gss_set_client,
};

static int rsi_cache_create_net(struct net *net)
{
	struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);
	struct cache_detail *cd;
	int err;

	cd = cache_create_net(&rsi_cache_template, net);
	if (IS_ERR(cd))
		return PTR_ERR(cd);
	err = cache_register_net(cd, net);
	if (err) {
		cache_destroy_net(cd, net);
		return err;
	}
	sn->rsi_cache = cd;
	return 0;
}

static void rsi_cache_destroy_net(struct net *net)
{
	struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);
	struct cache_detail *cd = sn->rsi_cache;

	sn->rsi_cache = NULL;
	cache_purge(cd);
	cache_unregister_net(cd, net);
	cache_destroy_net(cd, net);
}

static int rsc_cache_create_net(struct net *net)
{
	struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);
	struct cache_detail *cd;
	int err;

	cd = cache_create_net(&rsc_cache_template, net);
	if (IS_ERR(cd))
		return PTR_ERR(cd);
	err = cache_register_net(cd, net);
	if (err) {
		cache_destroy_net(cd, net);
		return err;
	}
	sn->rsc_cache = cd;
	return 0;
}

static void rsc_cache_destroy_net(struct net *net)
{
	struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);
	struct cache_detail *cd = sn->rsc_cache;

	sn->rsc_cache = NULL;
	cache_purge(cd);
	cache_unregister_net(cd, net);
	cache_destroy_net(cd, net);
}

int
gss_svc_init_net(struct net *net)
{
	int rv;

	rv = rsc_cache_create_net(net);
	if (rv)
		return rv;
	rv = rsi_cache_create_net(net);
	if (rv)
		goto out1;
	rv = create_use_gss_proxy_proc_entry(net);
	if (rv)
		goto out2;
	return 0;
out2:
	rsi_cache_destroy_net(net);
out1:
	rsc_cache_destroy_net(net);
	return rv;
}

void
gss_svc_shutdown_net(struct net *net)
{
	destroy_use_gss_proxy_proc_entry(net);
	rsi_cache_destroy_net(net);
	rsc_cache_destroy_net(net);
}

int
gss_svc_init(void)
{
	return svc_auth_register(RPC_AUTH_GSS, &svcauthops_gss);
}

void
gss_svc_shutdown(void)
{
	svc_auth_unregister(RPC_AUTH_GSS);
}