/* RxRPC individual remote procedure call handling
 *
 * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
 * Written by David Howells (dhowells@redhat.com)
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version
 * 2 of the License, or (at your option) any later version.
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/slab.h>
#include <linux/module.h>
#include <linux/circ_buf.h>
#include <linux/spinlock_types.h>
#include <net/sock.h>
#include <net/af_rxrpc.h>
#include "ar-internal.h"

/*
 * Maximum lifetime of a call (in jiffies).
 */
unsigned int rxrpc_max_call_lifetime = 60 * HZ;

const char *const rxrpc_call_states[NR__RXRPC_CALL_STATES] = {
	[RXRPC_CALL_UNINITIALISED]		= "Uninit ",
	[RXRPC_CALL_CLIENT_AWAIT_CONN]		= "ClWtConn",
	[RXRPC_CALL_CLIENT_SEND_REQUEST]	= "ClSndReq",
	[RXRPC_CALL_CLIENT_AWAIT_REPLY]		= "ClAwtRpl",
	[RXRPC_CALL_CLIENT_RECV_REPLY]		= "ClRcvRpl",
	[RXRPC_CALL_CLIENT_FINAL_ACK]		= "ClFnlACK",
	[RXRPC_CALL_SERVER_PREALLOC]		= "SvPrealc",
	[RXRPC_CALL_SERVER_SECURING]		= "SvSecure",
	[RXRPC_CALL_SERVER_ACCEPTING]		= "SvAccept",
	[RXRPC_CALL_SERVER_RECV_REQUEST]	= "SvRcvReq",
	[RXRPC_CALL_SERVER_ACK_REQUEST]		= "SvAckReq",
	[RXRPC_CALL_SERVER_SEND_REPLY]		= "SvSndRpl",
	[RXRPC_CALL_SERVER_AWAIT_ACK]		= "SvAwtACK",
	[RXRPC_CALL_COMPLETE]			= "Complete",
};

const char *const rxrpc_call_completions[NR__RXRPC_CALL_COMPLETIONS] = {
	[RXRPC_CALL_SUCCEEDED]			= "Complete",
	[RXRPC_CALL_SERVER_BUSY]		= "SvBusy ",
	[RXRPC_CALL_REMOTELY_ABORTED]		= "RmtAbort",
	[RXRPC_CALL_LOCALLY_ABORTED]		= "LocAbort",
	[RXRPC_CALL_LOCAL_ERROR]		= "LocError",
	[RXRPC_CALL_NETWORK_ERROR]		= "NetError",
};

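/* Short tags used to label the call reference-tracking tracepoints
 * (see the trace_rxrpc_call() calls below), one per rxrpc_call_trace op.
 */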
const char rxrpc_call_traces[rxrpc_call__nr_trace][4] = {
	[rxrpc_call_new_client]		= "NWc",
	[rxrpc_call_new_service]	= "NWs",
	[rxrpc_call_queued]		= "QUE",
	[rxrpc_call_queued_ref]		= "QUR",
	[rxrpc_call_seen]		= "SEE",
	[rxrpc_call_got]		= "GOT",
	[rxrpc_call_got_skb]		= "Gsk",
	[rxrpc_call_got_userid]		= "Gus",
	[rxrpc_call_put]		= "PUT",
	[rxrpc_call_put_skb]		= "Psk",
	[rxrpc_call_put_userid]		= "Pus",
	[rxrpc_call_put_noqueue]	= "PNQ",
};

struct kmem_cache *rxrpc_call_jar;
LIST_HEAD(rxrpc_calls);
DEFINE_RWLOCK(rxrpc_call_lock);

static void rxrpc_call_life_expired(unsigned long _call);
static void rxrpc_ack_time_expired(unsigned long _call);
static void rxrpc_resend_time_expired(unsigned long _call);

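/* Note: a successful lookup below takes a reference on the returned call
 * (rxrpc_call_got) which the caller is responsible for putting.
 */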
/*
 * find an extant server call
 * - called in process context with IRQs enabled
 */
struct rxrpc_call *rxrpc_find_call_by_user_ID(struct rxrpc_sock *rx,
					      unsigned long user_call_ID)
{
	struct rxrpc_call *call;
	struct rb_node *p;

	_enter("%p,%lx", rx, user_call_ID);

	read_lock(&rx->call_lock);

	p = rx->calls.rb_node;
	while (p) {
		call = rb_entry(p, struct rxrpc_call, sock_node);

		if (user_call_ID < call->user_call_ID)
			p = p->rb_left;
		else if (user_call_ID > call->user_call_ID)
			p = p->rb_right;
		else
			goto found_extant_call;
	}

	read_unlock(&rx->call_lock);
	_leave(" = NULL");
	return NULL;

found_extant_call:
	rxrpc_get_call(call, rxrpc_call_got);
	read_unlock(&rx->call_lock);
	_leave(" = %p [%d]", call, atomic_read(&call->usage));
	return call;
}

/*
 * allocate a new call
 */
struct rxrpc_call *rxrpc_alloc_call(gfp_t gfp)
{
	struct rxrpc_call *call;

	call = kmem_cache_zalloc(rxrpc_call_jar, gfp);
	if (!call)
		return NULL;

	call->acks_winsz = 16;
	call->acks_window = kmalloc(call->acks_winsz * sizeof(unsigned long),
				    gfp);
	if (!call->acks_window) {
		kmem_cache_free(rxrpc_call_jar, call);
		return NULL;
	}

	setup_timer(&call->lifetimer, &rxrpc_call_life_expired,
		    (unsigned long) call);
	setup_timer(&call->ack_timer, &rxrpc_ack_time_expired,
		    (unsigned long) call);
	setup_timer(&call->resend_timer, &rxrpc_resend_time_expired,
		    (unsigned long) call);
	INIT_WORK(&call->processor, &rxrpc_process_call);
	INIT_LIST_HEAD(&call->link);
	INIT_LIST_HEAD(&call->chan_wait_link);
	INIT_LIST_HEAD(&call->accept_link);
	skb_queue_head_init(&call->rx_queue);
	skb_queue_head_init(&call->rx_oos_queue);
	skb_queue_head_init(&call->knlrecv_queue);
	init_waitqueue_head(&call->waitq);
	spin_lock_init(&call->lock);
	rwlock_init(&call->state_lock);
	atomic_set(&call->usage, 1);
	call->debug_id = atomic_inc_return(&rxrpc_debug_id);

	memset(&call->sock_node, 0xed, sizeof(call->sock_node));

	call->rx_data_expect = 1;
	call->rx_data_eaten = 0;
	call->rx_first_oos = 0;
	call->ackr_win_top = call->rx_data_eaten + 1 + rxrpc_rx_window_size;
	call->creation_jif = jiffies;
	return call;
}

/*
 * Allocate a new client call.
 */
static struct rxrpc_call *rxrpc_alloc_client_call(struct rxrpc_sock *rx,
						  struct sockaddr_rxrpc *srx,
						  gfp_t gfp)
{
	struct rxrpc_call *call;

	_enter("");

	ASSERT(rx->local != NULL);

	call = rxrpc_alloc_call(gfp);
	if (!call)
		return ERR_PTR(-ENOMEM);
	call->state = RXRPC_CALL_CLIENT_AWAIT_CONN;
	call->rx_data_post = 1;
	call->service_id = srx->srx_service;
	rcu_assign_pointer(call->socket, rx);

	_leave(" = %p", call);
	return call;
}

/*
 * Begin client call.
 */
static int rxrpc_begin_client_call(struct rxrpc_call *call,
				   struct rxrpc_conn_parameters *cp,
				   struct sockaddr_rxrpc *srx,
				   gfp_t gfp)
{
	int ret;

	/* Set up or get a connection record and set the protocol parameters,
	 * including channel number and call ID.
	 */
	ret = rxrpc_connect_call(call, cp, srx, gfp);
	if (ret < 0)
		return ret;

	spin_lock(&call->conn->params.peer->lock);
	hlist_add_head(&call->error_link, &call->conn->params.peer->error_targets);
	spin_unlock(&call->conn->params.peer->lock);

	call->lifetimer.expires = jiffies + rxrpc_max_call_lifetime;
	add_timer(&call->lifetimer);
	return 0;
}

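/* Note: the new call is published in the socket's user_call_ID tree before
 * its connection is set up, so the error path below must unpublish it again.
 */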
/*
 * set up a call for the given data
 * - called in process context with IRQs enabled
 */
struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
					 struct rxrpc_conn_parameters *cp,
					 struct sockaddr_rxrpc *srx,
					 unsigned long user_call_ID,
					 gfp_t gfp)
{
	struct rxrpc_call *call, *xcall;
	struct rb_node *parent, **pp;
	const void *here = __builtin_return_address(0);
	int ret;

	_enter("%p,%lx", rx, user_call_ID);

	call = rxrpc_alloc_client_call(rx, srx, gfp);
	if (IS_ERR(call)) {
		_leave(" = %ld", PTR_ERR(call));
		return call;
	}

	trace_rxrpc_call(call, 0, atomic_read(&call->usage), here,
			 (const void *)user_call_ID);

	/* Publish the call, even though it is incompletely set up as yet */
	call->user_call_ID = user_call_ID;
	__set_bit(RXRPC_CALL_HAS_USERID, &call->flags);

	write_lock(&rx->call_lock);

	pp = &rx->calls.rb_node;
	parent = NULL;
	while (*pp) {
		parent = *pp;
		xcall = rb_entry(parent, struct rxrpc_call, sock_node);

		if (user_call_ID < xcall->user_call_ID)
			pp = &(*pp)->rb_left;
		else if (user_call_ID > xcall->user_call_ID)
			pp = &(*pp)->rb_right;
		else
			goto found_user_ID_now_present;
	}

	rxrpc_get_call(call, rxrpc_call_got_userid);
	rb_link_node(&call->sock_node, parent, pp);
	rb_insert_color(&call->sock_node, &rx->calls);
	write_unlock(&rx->call_lock);

	write_lock_bh(&rxrpc_call_lock);
	list_add_tail(&call->link, &rxrpc_calls);
	write_unlock_bh(&rxrpc_call_lock);

	ret = rxrpc_begin_client_call(call, cp, srx, gfp);
	if (ret < 0)
		goto error;

	_net("CALL new %d on CONN %d", call->debug_id, call->conn->debug_id);

	_leave(" = %p [new]", call);
	return call;

error:
	write_lock(&rx->call_lock);
	rb_erase(&call->sock_node, &rx->calls);
	write_unlock(&rx->call_lock);
	rxrpc_put_call(call, rxrpc_call_put_userid);

	write_lock_bh(&rxrpc_call_lock);
	list_del_init(&call->link);
	write_unlock_bh(&rxrpc_call_lock);

error_out:
	__rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR,
				    RX_CALL_DEAD, ret);
	set_bit(RXRPC_CALL_RELEASED, &call->flags);
	rxrpc_put_call(call, rxrpc_call_put);
	_leave(" = %d", ret);
	return ERR_PTR(ret);

	/* We unexpectedly found the user ID in the list after taking
	 * the call_lock.  This shouldn't happen unless the user races
	 * with itself and tries to add the same user ID twice at the
	 * same time in different threads.
	 */
found_user_ID_now_present:
	write_unlock(&rx->call_lock);
	ret = -EEXIST;
	goto error_out;
}

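/* Note: the target channel is selected by the low bits of the packet's
 * connection ID; each channel carries at most one call at a time.
 */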
|
|
|
|
|
|
|
|
/*
|
|
|
|
* set up an incoming call
|
|
|
|
* - called in process context with IRQs enabled
|
|
|
|
*/
|
|
|
|
struct rxrpc_call *rxrpc_incoming_call(struct rxrpc_sock *rx,
|
|
|
|
struct rxrpc_connection *conn,
|
2016-06-16 12:31:07 +00:00
|
|
|
struct sk_buff *skb)
|
2007-04-26 22:48:28 +00:00
|
|
|
{
|
2016-06-16 12:31:07 +00:00
|
|
|
struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
|
2007-04-26 22:48:28 +00:00
|
|
|
struct rxrpc_call *call, *candidate;
|
2016-08-30 08:49:29 +00:00
|
|
|
const void *here = __builtin_return_address(0);
|
rxrpc: Call channels should have separate call number spaces
Each channel on a connection has a separate, independent number space from
which to allocate callNumber values. It is entirely possible, for example,
to have a connection with four active calls, each with call number 1.
Note that the callNumber values for any particular channel don't have to
start at 1, but they are supposed to increment monotonically for that
channel from a client's perspective and may not be reused once the call
number is transmitted (until the epoch cycles all the way back round).
Currently, however, call numbers are allocated on a per-connection basis
and, further, are held in an rb-tree. The rb-tree is redundant as the four
channel pointers in the rxrpc_connection struct are entirely capable of
pointing to all the calls currently in progress on a connection.
To this end, make the following changes:
(1) Handle call number allocation independently per channel.
(2) Get rid of the conn->calls rb-tree. This is overkill as a connection
may have a maximum of four calls in progress at any one time. Use the
pointers in the channels[] array instead, indexed by the channel
number from the packet.
(3) For each channel, save the result of the last call that was in
progress on that channel in conn->channels[] so that the final ACK or
ABORT packet can be replayed if necessary. Any call earlier than that
is just ignored. If we've seen the next call number in a packet, the
last one is most definitely defunct.
(4) When generating a RESPONSE packet for a connection, the call number
counter for each channel must be included in it.
(5) When parsing a RESPONSE packet for a connection, the call number
counters contained therein should be used to set the minimum expected
call numbers on each channel.
To do in future commits:
(1) Replay terminal packets based on the last call stored in
conn->channels[].
(2) Connections should be retired before the callNumber space on any
channel runs out.
(3) A server is expected to disregard or reject any new incoming call that
has a call number less than the current call number counter. The call
number counter for that channel must be advanced to the new call
number.
Note that the server cannot just require that the next call that it
sees on a channel be exactly the call number counter + 1 because then
there's a scenario that could cause a problem: The client transmits a
packet to initiate a connection, the network goes out, the server
sends an ACK (which gets lost), the client sends an ABORT (which also
gets lost); the network then reconnects, the client then reuses the
call number for the next call (it doesn't know the server already saw
the call number), but the server thinks it already has the first
packet of this call (it doesn't know that the client doesn't know that
it saw the call number the first time).
Signed-off-by: David Howells <dhowells@redhat.com>
2016-06-27 13:39:44 +00:00
|
|
|
u32 call_id, chan;
|
2007-04-26 22:48:28 +00:00
|
|
|
|
2016-04-07 16:23:37 +00:00
|
|
|
_enter(",%d", conn->debug_id);
|
2007-04-26 22:48:28 +00:00
|
|
|
|
|
|
|
ASSERT(rx != NULL);
|
|
|
|
|
2016-04-07 16:23:37 +00:00
|
|
|
candidate = rxrpc_alloc_call(GFP_NOIO);
|
2007-04-26 22:48:28 +00:00
|
|
|
if (!candidate)
|
|
|
|
return ERR_PTR(-EBUSY);
|
|
|
|
|
2016-09-07 13:34:21 +00:00
|
|
|
trace_rxrpc_call(candidate, rxrpc_call_new_service,
|
2016-09-08 10:10:12 +00:00
|
|
|
atomic_read(&candidate->usage), here, NULL);
|
2016-08-30 08:49:29 +00:00
|
|
|
|
rxrpc: Call channels should have separate call number spaces
Each channel on a connection has a separate, independent number space from
which to allocate callNumber values. It is entirely possible, for example,
to have a connection with four active calls, each with call number 1.
Note that the callNumber values for any particular channel don't have to
start at 1, but they are supposed to increment monotonically for that
channel from a client's perspective and may not be reused once the call
number is transmitted (until the epoch cycles all the way back round).
Currently, however, call numbers are allocated on a per-connection basis
and, further, are held in an rb-tree. The rb-tree is redundant as the four
channel pointers in the rxrpc_connection struct are entirely capable of
pointing to all the calls currently in progress on a connection.
To this end, make the following changes:
(1) Handle call number allocation independently per channel.
(2) Get rid of the conn->calls rb-tree. This is overkill as a connection
may have a maximum of four calls in progress at any one time. Use the
pointers in the channels[] array instead, indexed by the channel
number from the packet.
(3) For each channel, save the result of the last call that was in
progress on that channel in conn->channels[] so that the final ACK or
ABORT packet can be replayed if necessary. Any call earlier than that
is just ignored. If we've seen the next call number in a packet, the
last one is most definitely defunct.
(4) When generating a RESPONSE packet for a connection, the call number
counter for each channel must be included in it.
(5) When parsing a RESPONSE packet for a connection, the call number
counters contained therein should be used to set the minimum expected
call numbers on each channel.
To do in future commits:
(1) Replay terminal packets based on the last call stored in
conn->channels[].
(2) Connections should be retired before the callNumber space on any
channel runs out.
(3) A server is expected to disregard or reject any new incoming call that
has a call number less than the current call number counter. The call
number counter for that channel must be advanced to the new call
number.
Note that the server cannot just require that the next call that it
sees on a channel be exactly the call number counter + 1 because then
there's a scenario that could cause a problem: The client transmits a
packet to initiate a connection, the network goes out, the server
sends an ACK (which gets lost), the client sends an ABORT (which also
gets lost); the network then reconnects, the client then reuses the
call number for the next call (it doesn't know the server already saw
the call number), but the server thinks it already has the first
packet of this call (it doesn't know that the client doesn't know that
it saw the call number the first time).
Signed-off-by: David Howells <dhowells@redhat.com>
2016-06-27 13:39:44 +00:00
|
|
|
chan = sp->hdr.cid & RXRPC_CHANNELMASK;
|
2016-06-16 12:31:07 +00:00
|
|
|
candidate->conn = conn;
|
2016-08-24 13:31:43 +00:00
|
|
|
candidate->peer = conn->params.peer;
|
2016-06-16 12:31:07 +00:00
|
|
|
candidate->cid = sp->hdr.cid;
|
|
|
|
candidate->call_id = sp->hdr.callNumber;
|
2016-09-07 14:19:25 +00:00
|
|
|
candidate->security_ix = sp->hdr.securityIndex;
|
2016-06-16 12:31:07 +00:00
|
|
|
candidate->rx_data_post = 0;
|
|
|
|
candidate->state = RXRPC_CALL_SERVER_ACCEPTING;
|
2016-08-23 14:27:24 +00:00
|
|
|
candidate->flags |= (1 << RXRPC_CALL_IS_SERVICE);
|
2007-04-26 22:48:28 +00:00
|
|
|
if (conn->security_ix > 0)
|
|
|
|
candidate->state = RXRPC_CALL_SERVER_SECURING;
|
2016-09-07 08:19:31 +00:00
|
|
|
rcu_assign_pointer(candidate->socket, rx);
|
2007-04-26 22:48:28 +00:00
|
|
|
|
rxrpc: Call channels should have separate call number spaces
Each channel on a connection has a separate, independent number space from
which to allocate callNumber values. It is entirely possible, for example,
to have a connection with four active calls, each with call number 1.
Note that the callNumber values for any particular channel don't have to
start at 1, but they are supposed to increment monotonically for that
channel from a client's perspective and may not be reused once the call
number is transmitted (until the epoch cycles all the way back round).
Currently, however, call numbers are allocated on a per-connection basis
and, further, are held in an rb-tree. The rb-tree is redundant as the four
channel pointers in the rxrpc_connection struct are entirely capable of
pointing to all the calls currently in progress on a connection.
To this end, make the following changes:
(1) Handle call number allocation independently per channel.
(2) Get rid of the conn->calls rb-tree. This is overkill as a connection
may have a maximum of four calls in progress at any one time. Use the
pointers in the channels[] array instead, indexed by the channel
number from the packet.
(3) For each channel, save the result of the last call that was in
progress on that channel in conn->channels[] so that the final ACK or
ABORT packet can be replayed if necessary. Any call earlier than that
is just ignored. If we've seen the next call number in a packet, the
last one is most definitely defunct.
(4) When generating a RESPONSE packet for a connection, the call number
counter for each channel must be included in it.
(5) When parsing a RESPONSE packet for a connection, the call number
counters contained therein should be used to set the minimum expected
call numbers on each channel.
To do in future commits:
(1) Replay terminal packets based on the last call stored in
conn->channels[].
(2) Connections should be retired before the callNumber space on any
channel runs out.
(3) A server is expected to disregard or reject any new incoming call that
has a call number less than the current call number counter. The call
number counter for that channel must be advanced to the new call
number.
Note that the server cannot just require that the next call that it
sees on a channel be exactly the call number counter + 1 because then
there's a scenario that could cause a problem: The client transmits a
packet to initiate a connection, the network goes out, the server
sends an ACK (which gets lost), the client sends an ABORT (which also
gets lost); the network then reconnects, the client then reuses the
call number for the next call (it doesn't know the server already saw
the call number), but the server thinks it already has the first
packet of this call (it doesn't know that the client doesn't know that
it saw the call number the first time).
Signed-off-by: David Howells <dhowells@redhat.com>
2016-06-27 13:39:44 +00:00
|
|
|
spin_lock(&conn->channel_lock);
|
2007-04-26 22:48:28 +00:00
|
|
|
|
|
|
|
/* set the channel for this call */
|
rxrpc: Call channels should have separate call number spaces
Each channel on a connection has a separate, independent number space from
which to allocate callNumber values. It is entirely possible, for example,
to have a connection with four active calls, each with call number 1.
Note that the callNumber values for any particular channel don't have to
start at 1, but they are supposed to increment monotonically for that
channel from a client's perspective and may not be reused once the call
number is transmitted (until the epoch cycles all the way back round).
Currently, however, call numbers are allocated on a per-connection basis
and, further, are held in an rb-tree. The rb-tree is redundant as the four
channel pointers in the rxrpc_connection struct are entirely capable of
pointing to all the calls currently in progress on a connection.
To this end, make the following changes:
(1) Handle call number allocation independently per channel.
(2) Get rid of the conn->calls rb-tree. This is overkill as a connection
may have a maximum of four calls in progress at any one time. Use the
pointers in the channels[] array instead, indexed by the channel
number from the packet.
(3) For each channel, save the result of the last call that was in
progress on that channel in conn->channels[] so that the final ACK or
ABORT packet can be replayed if necessary. Any call earlier than that
is just ignored. If we've seen the next call number in a packet, the
last one is most definitely defunct.
(4) When generating a RESPONSE packet for a connection, the call number
counter for each channel must be included in it.
(5) When parsing a RESPONSE packet for a connection, the call number
counters contained therein should be used to set the minimum expected
call numbers on each channel.
To do in future commits:
(1) Replay terminal packets based on the last call stored in
conn->channels[].
(2) Connections should be retired before the callNumber space on any
channel runs out.
(3) A server is expected to disregard or reject any new incoming call that
has a call number less than the current call number counter. The call
number counter for that channel must be advanced to the new call
number.
Note that the server cannot just require that the next call that it
sees on a channel be exactly the call number counter + 1 because then
there's a scenario that could cause a problem: The client transmits a
packet to initiate a connection, the network goes out, the server
sends an ACK (which gets lost), the client sends an ABORT (which also
gets lost); the network then reconnects, the client then reuses the
call number for the next call (it doesn't know the server already saw
the call number), but the server thinks it already has the first
packet of this call (it doesn't know that the client doesn't know that
it saw the call number the first time).
Signed-off-by: David Howells <dhowells@redhat.com>
2016-06-27 13:39:44 +00:00
|
|
|
call = rcu_dereference_protected(conn->channels[chan].call,
|
|
|
|
lockdep_is_held(&conn->channel_lock));
|
|
|
|
|
2016-08-23 14:27:24 +00:00
|
|
|
_debug("channel[%u] is %p", candidate->cid & RXRPC_CHANNELMASK, call);
|
2016-06-16 12:31:07 +00:00
|
|
|
if (call && call->call_id == sp->hdr.callNumber) {
|
2007-04-26 22:48:28 +00:00
|
|
|
/* already set; must've been a duplicate packet */
|
|
|
|
_debug("extant call [%d]", call->state);
|
|
|
|
ASSERTCMP(call->conn, ==, conn);
|
|
|
|
|
|
|
|
read_lock(&call->state_lock);
|
|
|
|
switch (call->state) {
|
|
|
|
case RXRPC_CALL_LOCALLY_ABORTED:
|
2016-03-04 15:53:46 +00:00
|
|
|
if (!test_and_set_bit(RXRPC_CALL_EV_ABORT, &call->events))
|
[AF_RXRPC]: Add an interface to the AF_RXRPC module for the AFS filesystem to use
Add an interface to the AF_RXRPC module so that the AFS filesystem module can
more easily make use of the services available. AFS still opens a socket but
then uses the action functions in lieu of sendmsg() and registers an intercept
functions to grab messages before they're queued on the socket Rx queue.
This permits AFS (or whatever) to:
(1) Avoid the overhead of using the recvmsg() call.
(2) Use different keys directly on individual client calls on one socket
rather than having to open a whole slew of sockets, one for each key it
might want to use.
(3) Avoid calling request_key() at the point of issue of a call or opening of
a socket. This is done instead by AFS at the point of open(), unlink() or
other VFS operation and the key handed through.
(4) Request the use of something other than GFP_KERNEL to allocate memory.
Furthermore:
(*) The socket buffer markings used by RxRPC are made available for AFS so
that it can interpret the cooked RxRPC messages itself.
(*) rxgen (un)marshalling abort codes are made available.
The following documentation for the kernel interface is added to
Documentation/networking/rxrpc.txt:
=========================
AF_RXRPC KERNEL INTERFACE
=========================
The AF_RXRPC module also provides an interface for use by in-kernel utilities
such as the AFS filesystem. This permits such a utility to:
(1) Use different keys directly on individual client calls on one socket
rather than having to open a whole slew of sockets, one for each key it
might want to use.
(2) Avoid having RxRPC call request_key() at the point of issue of a call or
opening of a socket. Instead the utility is responsible for requesting a
key at the appropriate point. AFS, for instance, would do this during VFS
operations such as open() or unlink(). The key is then handed through
when the call is initiated.
(3) Request the use of something other than GFP_KERNEL to allocate memory.
(4) Avoid the overhead of using the recvmsg() call. RxRPC messages can be
intercepted before they get put into the socket Rx queue and the socket
buffers manipulated directly.
To use the RxRPC facility, a kernel utility must still open an AF_RXRPC socket,
bind an addess as appropriate and listen if it's to be a server socket, but
then it passes this to the kernel interface functions.
The kernel interface functions are as follows:
(*) Begin a new client call.
struct rxrpc_call *
rxrpc_kernel_begin_call(struct socket *sock,
struct sockaddr_rxrpc *srx,
struct key *key,
unsigned long user_call_ID,
gfp_t gfp);
This allocates the infrastructure to make a new RxRPC call and assigns
call and connection numbers. The call will be made on the UDP port that
the socket is bound to. The call will go to the destination address of a
connected client socket unless an alternative is supplied (srx is
non-NULL).
If a key is supplied then this will be used to secure the call instead of
the key bound to the socket with the RXRPC_SECURITY_KEY sockopt. Calls
secured in this way will still share connections if at all possible.
The user_call_ID is equivalent to that supplied to sendmsg() in the
control data buffer. It is entirely feasible to use this to point to a
kernel data structure.
If this function is successful, an opaque reference to the RxRPC call is
returned. The caller now holds a reference on this and it must be
properly ended.
(*) End a client call.
void rxrpc_kernel_end_call(struct rxrpc_call *call);
This is used to end a previously begun call. The user_call_ID is expunged
from AF_RXRPC's knowledge and will not be seen again in association with
the specified call.
(*) Send data through a call.
int rxrpc_kernel_send_data(struct rxrpc_call *call, struct msghdr *msg,
size_t len);
This is used to supply either the request part of a client call or the
reply part of a server call. msg.msg_iovlen and msg.msg_iov specify the
data buffers to be used. msg_iov may not be NULL and must point
exclusively to in-kernel virtual addresses. msg.msg_flags may be given
MSG_MORE if there will be subsequent data sends for this call.
The msg must not specify a destination address, control data or any flags
other than MSG_MORE. len is the total amount of data to transmit.
(*) Abort a call.
void rxrpc_kernel_abort_call(struct rxrpc_call *call, u32 abort_code);
This is used to abort a call if it's still in an abortable state. The
abort code specified will be placed in the ABORT message sent.
(*) Intercept received RxRPC messages.
typedef void (*rxrpc_interceptor_t)(struct sock *sk,
unsigned long user_call_ID,
struct sk_buff *skb);
void
rxrpc_kernel_intercept_rx_messages(struct socket *sock,
rxrpc_interceptor_t interceptor);
This installs an interceptor function on the specified AF_RXRPC socket.
All messages that would otherwise wind up in the socket's Rx queue are
then diverted to this function. Note that care must be taken to process
the messages in the right order to maintain DATA message sequentiality.
The interceptor function itself is provided with the address of the socket
and handling the incoming message, the ID assigned by the kernel utility
to the call and the socket buffer containing the message.
The skb->mark field indicates the type of message:
MARK MEANING
=============================== =======================================
RXRPC_SKB_MARK_DATA Data message
RXRPC_SKB_MARK_FINAL_ACK Final ACK received for an incoming call
RXRPC_SKB_MARK_BUSY Client call rejected as server busy
RXRPC_SKB_MARK_REMOTE_ABORT Call aborted by peer
RXRPC_SKB_MARK_NET_ERROR Network error detected
RXRPC_SKB_MARK_LOCAL_ERROR Local error encountered
RXRPC_SKB_MARK_NEW_CALL New incoming call awaiting acceptance
The remote abort message can be probed with rxrpc_kernel_get_abort_code().
The two error messages can be probed with rxrpc_kernel_get_error_number().
A new call can be accepted with rxrpc_kernel_accept_call().
Data messages can have their contents extracted with the usual bunch of
socket buffer manipulation functions. A data message can be determined to
be the last one in a sequence with rxrpc_kernel_is_data_last(). When a
data message has been used up, rxrpc_kernel_data_delivered() should be
called on it..
Non-data messages should be handled to rxrpc_kernel_free_skb() to dispose
of. It is possible to get extra refs on all types of message for later
freeing, but this may pin the state of a call until the message is finally
freed.
(*) Accept an incoming call.
struct rxrpc_call *
rxrpc_kernel_accept_call(struct socket *sock,
unsigned long user_call_ID);
This is used to accept an incoming call and to assign it a call ID. This
function is similar to rxrpc_kernel_begin_call() and calls accepted must
be ended in the same way.
If this function is successful, an opaque reference to the RxRPC call is
returned. The caller now holds a reference on this and it must be
properly ended.
(*) Reject an incoming call.
int rxrpc_kernel_reject_call(struct socket *sock);
This is used to reject the first incoming call on the socket's queue with
a BUSY message. -ENODATA is returned if there were no incoming calls.
Other errors may be returned if the call had been aborted (-ECONNABORTED)
or had timed out (-ETIME).
(*) Record the delivery of a data message and free it.
void rxrpc_kernel_data_delivered(struct sk_buff *skb);
This is used to record a data message as having been delivered and to
update the ACK state for the call. The socket buffer will be freed.
(*) Free a message.
void rxrpc_kernel_free_skb(struct sk_buff *skb);
This is used to free a non-DATA socket buffer intercepted from an AF_RXRPC
socket.
(*) Determine if a data message is the last one on a call.
bool rxrpc_kernel_is_data_last(struct sk_buff *skb);
This is used to determine if a socket buffer holds the last data message
to be received for a call (true will be returned if it does, false
if not).
The data message will be part of the reply on a client call and the
request on an incoming call. In the latter case there will be more
messages, but in the former case there will not.
(*) Get the abort code from an abort message.
u32 rxrpc_kernel_get_abort_code(struct sk_buff *skb);
This is used to extract the abort code from a remote abort message.
(*) Get the error number from a local or network error message.
int rxrpc_kernel_get_error_number(struct sk_buff *skb);
This is used to extract the error number from a message indicating either
a local error occurred or a network error occurred.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-26 22:50:17 +00:00
|
|
|
rxrpc_queue_call(call);
|
2007-04-26 22:48:28 +00:00
|
|
|
case RXRPC_CALL_REMOTELY_ABORTED:
|
|
|
|
read_unlock(&call->state_lock);
|
|
|
|
goto aborted_call;
|
|
|
|
default:
|
2016-09-07 13:34:21 +00:00
|
|
|
rxrpc_get_call(call, rxrpc_call_got);
|
2007-04-26 22:48:28 +00:00
|
|
|
read_unlock(&call->state_lock);
|
|
|
|
goto extant_call;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (call) {
|
|
|
|
/* it seems the channel is still in use from the previous call
|
|
|
|
* - ditch the old binding if its call is now complete */
|
|
|
|
_debug("CALL: %u { %s }",
|
|
|
|
call->debug_id, rxrpc_call_states[call->state]);
|
|
|
|
|
2016-08-30 08:49:28 +00:00
|
|
|
if (call->state == RXRPC_CALL_COMPLETE) {
|
rxrpc: Improve management and caching of client connection objects
Improve the management and caching of client rxrpc connection objects.
From this point, client connections will be managed separately from service
connections because AF_RXRPC controls the creation and re-use of client
connections but doesn't have that luxury with service connections.
Further, there will be limits on the numbers of client connections that may
be live on a machine. No direct restriction will be placed on the number
of client calls, excepting that each client connection can support a
maximum of four concurrent calls.
Note that, for a number of reasons, we don't want to simply discard a
client connection as soon as the last call is apparently finished:
(1) Security is negotiated per-connection and the context is then shared
between all calls on that connection. The context can be negotiated
again if the connection lapses, but that involves holding up calls
whilst at least two packets are exchanged and various crypto bits are
performed - so we'd ideally like to cache it for a little while at
least.
(2) If a packet goes astray, we will need to retransmit a final ACK or
ABORT packet. To make this work, we need to keep around the
connection details for a little while.
(3) The locally held structures represent some amount of setup time, to be
weighed against their occupation of memory when idle.
To this end, the client connection cache is managed by a state machine on
each connection. There are five states:
(1) INACTIVE - The connection is not held in any list and may not have
been exposed to the world. If it has been previously exposed, it was
discarded from the idle list after expiring.
(2) WAITING - The connection is waiting for the number of client conns to
drop below the maximum capacity. Calls may be in progress upon it
from when it was active and got culled.
The connection is on the rxrpc_waiting_client_conns list which is kept
in to-be-granted order. Culled conns with waiters go to the back of
the queue just like new conns.
(3) ACTIVE - The connection has at least one call in progress upon it, it
may freely grant available channels to new calls and calls may be
waiting on it for channels to become available.
The connection is on the rxrpc_active_client_conns list which is kept
in activation order for culling purposes.
(4) CULLED - The connection got summarily culled to try and free up
capacity. Calls currently in progress on the connection are allowed
to continue, but new calls will have to wait. There can be no waiters
in this state - the conn would have to go to the WAITING state
instead.
(5) IDLE - The connection has no calls in progress upon it and must have
been exposed to the world (ie. the EXPOSED flag must be set). When it
expires, the EXPOSED flag is cleared and the connection transitions to
the INACTIVE state.
The connection is on the rxrpc_idle_client_conns list which is kept in
order of how soon they'll expire.
A connection in the ACTIVE or CULLED state must have at least one active
call upon it; if in the WAITING state it may have active calls upon it;
other states may not have active calls.
As long as a connection remains active and doesn't get culled, it may
continue to process calls - even if there are connections on the wait
queue. This simplifies things a bit and reduces the amount of checking we
need do.
There are a couple flags of relevance to the cache:
(1) EXPOSED - The connection ID got exposed to the world. If this flag is
set, an extra ref is added to the connection preventing it from being
reaped when it has no calls outstanding. This flag is cleared and the
ref dropped when a conn is discarded from the idle list.
(2) DONT_REUSE - The connection should be discarded as soon as possible and
should not be reused.
This commit also provides a number of new settings:
(*) /proc/net/rxrpc/max_client_conns
The maximum number of live client connections. Above this number, new
connections get added to the wait list and must wait for an active
conn to be culled. Culled connections can be reused, but they will go
to the back of the wait list and have to wait.
(*) /proc/net/rxrpc/reap_client_conns
If the number of desired connections exceeds the maximum above, the
active connection list will be culled until there are only this many
left in it.
(*) /proc/net/rxrpc/idle_conn_expiry
The normal expiry time for a client connection, provided there are
fewer than reap_client_conns of them around.
(*) /proc/net/rxrpc/idle_conn_fast_expiry
The expedited expiry time, used when there are more than
reap_client_conns of them around.
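For illustration only, tunables like these are commonly wired up through a
sysctl table along the following lines; the variable names, defaults and
registration details here are assumptions, not taken from this patch:

static unsigned int rxrpc_max_client_conns = 1000;	/* assumed default */
static unsigned int rxrpc_reap_client_conns = 900;	/* assumed default */

static struct ctl_table example_client_conn_sysctls[] = {
	{
		.procname	= "max_client_conns",
		.data		= &rxrpc_max_client_conns,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "reap_client_conns",
		.data		= &rxrpc_reap_client_conns,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{ }
};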
Note that I combined the Tx wait queue with the channel grant wait queue to
save space as only one of these should be in use at once.
Note also that, for the moment, the service connection cache still uses the
old connection management code.
Signed-off-by: David Howells <dhowells@redhat.com>
2016-08-24 06:30:52 +00:00
|
|
|
__rxrpc_disconnect_call(conn, call);
|
2007-04-26 22:48:28 +00:00
|
|
|
} else {
|
rxrpc: Call channels should have separate call number spaces
Each channel on a connection has a separate, independent number space from
which to allocate callNumber values. It is entirely possible, for example,
to have a connection with four active calls, each with call number 1.
Note that the callNumber values for any particular channel don't have to
start at 1, but they are supposed to increment monotonically for that
channel from a client's perspective and may not be reused once the call
number is transmitted (until the epoch cycles all the way back round).
Currently, however, call numbers are allocated on a per-connection basis
and, further, are held in an rb-tree. The rb-tree is redundant as the four
channel pointers in the rxrpc_connection struct are entirely capable of
pointing to all the calls currently in progress on a connection.
To this end, make the following changes:
(1) Handle call number allocation independently per channel.
(2) Get rid of the conn->calls rb-tree. This is overkill as a connection
may have a maximum of four calls in progress at any one time. Use the
pointers in the channels[] array instead, indexed by the channel
number from the packet.
(3) For each channel, save the result of the last call that was in
progress on that channel in conn->channels[] so that the final ACK or
ABORT packet can be replayed if necessary. Any call earlier than that
is just ignored. If we've seen the next call number in a packet, the
last one is most definitely defunct.
(4) When generating a RESPONSE packet for a connection, the call number
counter for each channel must be included in it.
(5) When parsing a RESPONSE packet for a connection, the call number
counters contained therein should be used to set the minimum expected
call numbers on each channel.
To do in future commits:
(1) Replay terminal packets based on the last call stored in
conn->channels[].
(2) Connections should be retired before the callNumber space on any
channel runs out.
(3) A server is expected to disregard or reject any new incoming call that
has a call number less than the current call number counter. The call
number counter for that channel must be advanced to the new call
number.
    Note that the server cannot simply require that the next call it sees
    on a channel be exactly the call number counter + 1, because that opens
    up a problem scenario: the client transmits a packet to initiate a
    connection, the network goes down, the server sends an ACK (which gets
    lost) and the client sends an ABORT (which also gets lost); when the
    network comes back, the client reuses the call number for its next call
    (it doesn't know the server already saw that number), but the server
    thinks it already has the first packet of this call (it doesn't know
    that the client never learned that the number had been seen).
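A condensed sketch of the per-channel check described here, mirroring the
test and counter update that appear in the code further down (the wrapper
function itself is hypothetical):

static bool example_call_id_is_new(struct rxrpc_connection *conn,
				   unsigned int chan, u32 call_id)
{
	/* Anything at or below the recorded counter is an old or terminated
	 * call and is handled (or ignored) at the connection level. */
	if (call_id <= conn->channels[chan].call_counter)
		return false;

	/* A new call: advance the channel's counter so that any earlier call
	 * on this channel is implicitly treated as defunct. */
	conn->channels[chan].call_counter = call_id;
	return true;
}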
Signed-off-by: David Howells <dhowells@redhat.com>
2016-06-27 13:39:44 +00:00
|
|
|
spin_unlock(&conn->channel_lock);
|
2007-04-26 22:48:28 +00:00
|
|
|
kmem_cache_free(rxrpc_call_jar, candidate);
|
|
|
|
_leave(" = -EBUSY");
|
|
|
|
return ERR_PTR(-EBUSY);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/* check the call number isn't duplicate */
|
|
|
|
_debug("check dup");
|
2016-06-16 12:31:07 +00:00
|
|
|
call_id = sp->hdr.callNumber;
|
rxrpc: Call channels should have separate call number spaces
2016-06-27 13:39:44 +00:00
|
|
|
|
|
|
|
/* We just ignore calls prior to the current call ID. Terminated calls
|
|
|
|
* are handled via the connection.
|
|
|
|
*/
|
|
|
|
if (call_id <= conn->channels[chan].call_counter)
|
|
|
|
goto old_call; /* TODO: Just drop packet */
|
2007-04-26 22:48:28 +00:00
|
|
|
|
2016-09-08 10:10:12 +00:00
|
|
|
/* Temporary: Mirror the backlog prealloc ref (TODO: use prealloc) */
|
|
|
|
rxrpc_get_call(candidate, rxrpc_call_got);
|
|
|
|
|
2007-04-26 22:48:28 +00:00
|
|
|
/* make the call available */
|
|
|
|
_debug("new call");
|
|
|
|
call = candidate;
|
|
|
|
candidate = NULL;
|
rxrpc: Call channels should have separate call number spaces
2016-06-27 13:39:44 +00:00
|
|
|
conn->channels[chan].call_counter = call_id;
|
|
|
|
rcu_assign_pointer(conn->channels[chan].call, call);
|
2016-04-04 13:00:38 +00:00
|
|
|
rxrpc_get_connection(conn);
|
2016-08-24 13:31:43 +00:00
|
|
|
rxrpc_get_peer(call->peer);
|
rxrpc: Call channels should have separate call number spaces
2016-06-27 13:39:44 +00:00
|
|
|
spin_unlock(&conn->channel_lock);
|
2007-04-26 22:48:28 +00:00
|
|
|
|
2016-04-04 13:00:36 +00:00
|
|
|
spin_lock(&conn->params.peer->lock);
|
|
|
|
hlist_add_head(&call->error_link, &conn->params.peer->error_targets);
|
|
|
|
spin_unlock(&conn->params.peer->lock);
|
2007-04-26 22:48:28 +00:00
|
|
|
|
|
|
|
write_lock_bh(&rxrpc_call_lock);
|
|
|
|
list_add_tail(&call->link, &rxrpc_calls);
|
|
|
|
write_unlock_bh(&rxrpc_call_lock);
|
|
|
|
|
2016-04-04 13:00:36 +00:00
|
|
|
call->service_id = conn->params.service_id;
|
2014-03-03 23:04:45 +00:00
|
|
|
|
2007-04-26 22:48:28 +00:00
|
|
|
_net("CALL incoming %d on CONN %d", call->debug_id, call->conn->debug_id);
|
|
|
|
|
2014-02-07 18:58:44 +00:00
|
|
|
call->lifetimer.expires = jiffies + rxrpc_max_call_lifetime;
|
2007-04-26 22:48:28 +00:00
|
|
|
add_timer(&call->lifetimer);
|
|
|
|
_leave(" = %p {%d} [new]", call, call->debug_id);
|
|
|
|
return call;
|
|
|
|
|
|
|
|
extant_call:
|
rxrpc: Call channels should have separate call number spaces
2016-06-27 13:39:44 +00:00
|
|
|
spin_unlock(&conn->channel_lock);
|
2007-04-26 22:48:28 +00:00
|
|
|
kmem_cache_free(rxrpc_call_jar, candidate);
|
|
|
|
_leave(" = %p {%d} [extant]", call, call ? call->debug_id : -1);
|
|
|
|
return call;
|
|
|
|
|
|
|
|
aborted_call:
|
rxrpc: Call channels should have separate call number spaces
2016-06-27 13:39:44 +00:00
|
|
|
spin_unlock(&conn->channel_lock);
|
2007-04-26 22:48:28 +00:00
|
|
|
kmem_cache_free(rxrpc_call_jar, candidate);
|
|
|
|
_leave(" = -ECONNABORTED");
|
|
|
|
return ERR_PTR(-ECONNABORTED);
|
|
|
|
|
|
|
|
old_call:
|
rxrpc: Call channels should have separate call number spaces
2016-06-27 13:39:44 +00:00
|
|
|
spin_unlock(&conn->channel_lock);
|
2007-04-26 22:48:28 +00:00
|
|
|
kmem_cache_free(rxrpc_call_jar, candidate);
|
|
|
|
_leave(" = -ECONNRESET [old]");
|
|
|
|
return ERR_PTR(-ECONNRESET);
|
|
|
|
}
|
|
|
|
|
2016-09-07 08:19:31 +00:00
|
|
|
/*
|
|
|
|
* Queue a call's work processor, getting a ref to pass to the work queue.
|
|
|
|
*/
|
|
|
|
bool rxrpc_queue_call(struct rxrpc_call *call)
|
|
|
|
{
|
|
|
|
const void *here = __builtin_return_address(0);
|
|
|
|
int n = __atomic_add_unless(&call->usage, 1, 0);
|
|
|
|
if (n == 0)
|
|
|
|
return false;
|
|
|
|
if (rxrpc_queue_work(&call->processor))
|
2016-09-08 10:10:12 +00:00
|
|
|
trace_rxrpc_call(call, rxrpc_call_queued, n + 1, here, NULL);
|
2016-09-07 08:19:31 +00:00
|
|
|
else
|
|
|
|
rxrpc_put_call(call, rxrpc_call_put_noqueue);
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Queue a call's work processor, passing the callers ref to the work queue.
|
|
|
|
*/
|
|
|
|
bool __rxrpc_queue_call(struct rxrpc_call *call)
|
|
|
|
{
|
|
|
|
const void *here = __builtin_return_address(0);
|
|
|
|
int n = atomic_read(&call->usage);
|
|
|
|
ASSERTCMP(n, >=, 1);
|
|
|
|
if (rxrpc_queue_work(&call->processor))
|
2016-09-08 10:10:12 +00:00
|
|
|
trace_rxrpc_call(call, rxrpc_call_queued_ref, n, here, NULL);
|
2016-09-07 08:19:31 +00:00
|
|
|
else
|
|
|
|
rxrpc_put_call(call, rxrpc_call_put_noqueue);
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
2016-08-30 08:49:29 +00:00
|
|
|
/*
|
|
|
|
* Note the re-emergence of a call.
|
|
|
|
*/
|
|
|
|
void rxrpc_see_call(struct rxrpc_call *call)
|
|
|
|
{
|
|
|
|
const void *here = __builtin_return_address(0);
|
|
|
|
if (call) {
|
|
|
|
int n = atomic_read(&call->usage);
|
|
|
|
|
2016-09-08 10:10:12 +00:00
|
|
|
trace_rxrpc_call(call, rxrpc_call_seen, n, here, NULL);
|
2016-08-30 08:49:29 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Note the addition of a ref on a call.
|
|
|
|
*/
|
2016-09-07 13:34:21 +00:00
|
|
|
void rxrpc_get_call(struct rxrpc_call *call, enum rxrpc_call_trace op)
|
2016-08-30 08:49:29 +00:00
|
|
|
{
|
|
|
|
const void *here = __builtin_return_address(0);
|
|
|
|
int n = atomic_inc_return(&call->usage);
|
|
|
|
|
2016-09-08 10:10:12 +00:00
|
|
|
trace_rxrpc_call(call, op, n, here, NULL);
|
2016-08-30 08:49:29 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Note the addition of a ref on a call for a socket buffer.
|
|
|
|
*/
|
|
|
|
void rxrpc_get_call_for_skb(struct rxrpc_call *call, struct sk_buff *skb)
|
|
|
|
{
|
|
|
|
const void *here = __builtin_return_address(0);
|
|
|
|
int n = atomic_inc_return(&call->usage);
|
|
|
|
|
2016-09-08 10:10:12 +00:00
|
|
|
trace_rxrpc_call(call, rxrpc_call_got_skb, n, here, skb);
|
2016-08-30 08:49:29 +00:00
|
|
|
}
|
|
|
|
|
2007-04-26 22:48:28 +00:00
|
|
|
/*
|
|
|
|
* detach a call from a socket and set up for release
|
|
|
|
*/
|
2016-09-07 08:19:31 +00:00
|
|
|
void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call)
|
2007-04-26 22:48:28 +00:00
|
|
|
{
|
|
|
|
_enter("{%d,%d,%d,%d}",
|
|
|
|
call->debug_id, atomic_read(&call->usage),
|
|
|
|
atomic_read(&call->ackr_not_idle),
|
|
|
|
call->rx_first_oos);
|
|
|
|
|
2016-08-30 08:49:29 +00:00
|
|
|
rxrpc_see_call(call);
|
|
|
|
|
2007-04-26 22:48:28 +00:00
|
|
|
spin_lock_bh(&call->lock);
|
|
|
|
if (test_and_set_bit(RXRPC_CALL_RELEASED, &call->flags))
|
|
|
|
BUG();
|
|
|
|
spin_unlock_bh(&call->lock);
|
|
|
|
|
|
|
|
/* dissociate from the socket
|
|
|
|
* - the socket's ref on the call is passed to the death timer
|
|
|
|
*/
|
2016-09-07 08:19:31 +00:00
|
|
|
_debug("RELEASE CALL %p (%d)", call, call->debug_id);
|
2007-04-26 22:48:28 +00:00
|
|
|
|
2016-09-07 13:45:26 +00:00
|
|
|
if (call->peer) {
|
|
|
|
spin_lock(&call->peer->lock);
|
|
|
|
hlist_del_init(&call->error_link);
|
|
|
|
spin_unlock(&call->peer->lock);
|
|
|
|
}
|
2016-04-04 13:00:38 +00:00
|
|
|
|
2007-04-26 22:48:28 +00:00
|
|
|
write_lock_bh(&rx->call_lock);
|
|
|
|
if (!list_empty(&call->accept_link)) {
|
|
|
|
_debug("unlinking once-pending call %p { e=%lx f=%lx }",
|
|
|
|
call, call->events, call->flags);
|
|
|
|
ASSERT(!test_bit(RXRPC_CALL_HAS_USERID, &call->flags));
|
|
|
|
list_del_init(&call->accept_link);
|
|
|
|
sk_acceptq_removed(&rx->sk);
|
|
|
|
} else if (test_bit(RXRPC_CALL_HAS_USERID, &call->flags)) {
|
|
|
|
rb_erase(&call->sock_node, &rx->calls);
|
|
|
|
memset(&call->sock_node, 0xdd, sizeof(call->sock_node));
|
|
|
|
clear_bit(RXRPC_CALL_HAS_USERID, &call->flags);
|
2016-09-07 08:19:31 +00:00
|
|
|
rxrpc_put_call(call, rxrpc_call_put_userid);
|
2007-04-26 22:48:28 +00:00
|
|
|
}
|
|
|
|
write_unlock_bh(&rx->call_lock);
|
|
|
|
|
|
|
|
/* free up the channel for reuse */
|
2016-09-07 08:19:31 +00:00
|
|
|
if (call->state == RXRPC_CALL_CLIENT_FINAL_ACK) {
|
|
|
|
clear_bit(RXRPC_CALL_EV_ACK_FINAL, &call->events);
|
|
|
|
rxrpc_send_call_packet(call, RXRPC_PACKET_TYPE_ACK);
|
|
|
|
rxrpc_call_completed(call);
|
|
|
|
} else {
|
|
|
|
write_lock_bh(&call->state_lock);
|
|
|
|
|
|
|
|
if (call->state < RXRPC_CALL_COMPLETE) {
|
|
|
|
_debug("+++ ABORTING STATE %d +++\n", call->state);
|
2016-09-06 21:19:51 +00:00
|
|
|
__rxrpc_abort_call("SKT", call, 0, RX_CALL_DEAD, ECONNRESET);
|
2016-09-07 08:19:31 +00:00
|
|
|
clear_bit(RXRPC_CALL_EV_ACK_FINAL, &call->events);
|
|
|
|
rxrpc_send_call_packet(call, RXRPC_PACKET_TYPE_ABORT);
|
|
|
|
}
|
[AF_RXRPC]: Add an interface to the AF_RXRPC module for the AFS filesystem to use
Add an interface to the AF_RXRPC module so that the AFS filesystem module can
more easily make use of the services available. AFS still opens a socket but
then uses the action functions in lieu of sendmsg() and registers an intercept
functions to grab messages before they're queued on the socket Rx queue.
This permits AFS (or whatever) to:
(1) Avoid the overhead of using the recvmsg() call.
(2) Use different keys directly on individual client calls on one socket
rather than having to open a whole slew of sockets, one for each key it
might want to use.
(3) Avoid calling request_key() at the point of issue of a call or opening of
a socket. This is done instead by AFS at the point of open(), unlink() or
other VFS operation and the key handed through.
(4) Request the use of something other than GFP_KERNEL to allocate memory.
Furthermore:
(*) The socket buffer markings used by RxRPC are made available for AFS so
that it can interpret the cooked RxRPC messages itself.
(*) rxgen (un)marshalling abort codes are made available.
The following documentation for the kernel interface is added to
Documentation/networking/rxrpc.txt:
=========================
AF_RXRPC KERNEL INTERFACE
=========================
The AF_RXRPC module also provides an interface for use by in-kernel utilities
such as the AFS filesystem. This permits such a utility to:
(1) Use different keys directly on individual client calls on one socket
rather than having to open a whole slew of sockets, one for each key it
might want to use.
(2) Avoid having RxRPC call request_key() at the point of issue of a call or
opening of a socket. Instead the utility is responsible for requesting a
key at the appropriate point. AFS, for instance, would do this during VFS
operations such as open() or unlink(). The key is then handed through
when the call is initiated.
(3) Request the use of something other than GFP_KERNEL to allocate memory.
(4) Avoid the overhead of using the recvmsg() call. RxRPC messages can be
intercepted before they get put into the socket Rx queue and the socket
buffers manipulated directly.
To use the RxRPC facility, a kernel utility must still open an AF_RXRPC socket,
bind an address as appropriate and listen if it's to be a server socket, but
then it passes this to the kernel interface functions.
The kernel interface functions are as follows:
(*) Begin a new client call.
struct rxrpc_call *
rxrpc_kernel_begin_call(struct socket *sock,
struct sockaddr_rxrpc *srx,
struct key *key,
unsigned long user_call_ID,
gfp_t gfp);
This allocates the infrastructure to make a new RxRPC call and assigns
call and connection numbers. The call will be made on the UDP port that
the socket is bound to. The call will go to the destination address of a
connected client socket unless an alternative is supplied (srx is
non-NULL).
If a key is supplied then this will be used to secure the call instead of
the key bound to the socket with the RXRPC_SECURITY_KEY sockopt. Calls
secured in this way will still share connections if at all possible.
The user_call_ID is equivalent to that supplied to sendmsg() in the
control data buffer. It is entirely feasible to use this to point to a
kernel data structure.
If this function is successful, an opaque reference to the RxRPC call is
returned. The caller now holds a reference on this and it must be
properly ended.
(*) End a client call.
void rxrpc_kernel_end_call(struct rxrpc_call *call);
This is used to end a previously begun call. The user_call_ID is expunged
from AF_RXRPC's knowledge and will not be seen again in association with
the specified call.
(*) Send data through a call.
int rxrpc_kernel_send_data(struct rxrpc_call *call, struct msghdr *msg,
size_t len);
This is used to supply either the request part of a client call or the
reply part of a server call. msg.msg_iovlen and msg.msg_iov specify the
data buffers to be used. msg_iov may not be NULL and must point
exclusively to in-kernel virtual addresses. msg.msg_flags may be given
MSG_MORE if there will be subsequent data sends for this call.
The msg must not specify a destination address, control data or any flags
other than MSG_MORE. len is the total amount of data to transmit.
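Taken together with rxrpc_kernel_begin_call() and rxrpc_kernel_end_call()
above, a minimal usage sketch might look as follows; it is based only on the
prototypes quoted here, and the ERR_PTR failure convention, user_call_ID
value and buffer set-up are assumptions:

static int example_issue_call(struct socket *sock,
			      struct sockaddr_rxrpc *srx,
			      struct key *key,
			      void *request, size_t len)
{
	struct rxrpc_call *call;
	struct msghdr msg;
	struct iovec iov;
	int ret;

	/* 0x1 is an arbitrary user_call_ID chosen for this sketch */
	call = rxrpc_kernel_begin_call(sock, srx, key, 0x1, GFP_KERNEL);
	if (IS_ERR(call))
		return PTR_ERR(call);	/* assumed failure convention */

	iov.iov_base = request;		/* in-kernel buffer, as required */
	iov.iov_len = len;
	memset(&msg, 0, sizeof(msg));
	msg.msg_iov = &iov;		/* fields as described above */
	msg.msg_iovlen = 1;
	msg.msg_flags = 0;		/* no MSG_MORE: this is all the data */

	ret = rxrpc_kernel_send_data(call, &msg, len);

	rxrpc_kernel_end_call(call);
	return ret;
}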
(*) Abort a call.
void rxrpc_kernel_abort_call(struct rxrpc_call *call, u32 abort_code);
This is used to abort a call if it's still in an abortable state. The
abort code specified will be placed in the ABORT message sent.
(*) Intercept received RxRPC messages.
typedef void (*rxrpc_interceptor_t)(struct sock *sk,
unsigned long user_call_ID,
struct sk_buff *skb);
void
rxrpc_kernel_intercept_rx_messages(struct socket *sock,
rxrpc_interceptor_t interceptor);
This installs an interceptor function on the specified AF_RXRPC socket.
All messages that would otherwise wind up in the socket's Rx queue are
then diverted to this function. Note that care must be taken to process
the messages in the right order to maintain DATA message sequentiality.
The interceptor function itself is provided with the address of the socket
that is handling the incoming message, the ID assigned by the kernel utility
to the call, and the socket buffer containing the message.
The skb->mark field indicates the type of message:
MARK MEANING
=============================== =======================================
RXRPC_SKB_MARK_DATA Data message
RXRPC_SKB_MARK_FINAL_ACK Final ACK received for an incoming call
RXRPC_SKB_MARK_BUSY Client call rejected as server busy
RXRPC_SKB_MARK_REMOTE_ABORT Call aborted by peer
RXRPC_SKB_MARK_NET_ERROR Network error detected
RXRPC_SKB_MARK_LOCAL_ERROR Local error encountered
RXRPC_SKB_MARK_NEW_CALL New incoming call awaiting acceptance
The remote abort message can be probed with rxrpc_kernel_get_abort_code().
The two error messages can be probed with rxrpc_kernel_get_error_number().
A new call can be accepted with rxrpc_kernel_accept_call().
Data messages can have their contents extracted with the usual bunch of
socket buffer manipulation functions. A data message can be determined to
be the last one in a sequence with rxrpc_kernel_is_data_last(). When a
data message has been used up, rxrpc_kernel_data_delivered() should be
called on it.
Non-data messages should be handed to rxrpc_kernel_free_skb() to dispose
of. It is possible to get extra refs on all types of message for later
freeing, but this may pin the state of a call until the message is finally
freed.
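For illustration, a minimal interceptor might dispatch on skb->mark using the
values tabulated above. This is a sketch only; a real user must also keep
DATA messages in sequence and accept or reject new calls as described:

static void example_rx_interceptor(struct sock *sk, unsigned long user_call_ID,
				   struct sk_buff *skb)
{
	switch (skb->mark) {
	case RXRPC_SKB_MARK_DATA:
		/* A real user would extract the payload here, keeping DATA
		 * messages in order, then record delivery (which also frees
		 * the skb and updates the ACK state): */
		rxrpc_kernel_data_delivered(skb);
		break;
	case RXRPC_SKB_MARK_REMOTE_ABORT:
		pr_warn("call %lx aborted by peer: %u\n", user_call_ID,
			rxrpc_kernel_get_abort_code(skb));
		rxrpc_kernel_free_skb(skb);
		break;
	default:
		/* Other non-data marks are simply disposed of in this sketch. */
		rxrpc_kernel_free_skb(skb);
		break;
	}
}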
(*) Accept an incoming call.
struct rxrpc_call *
rxrpc_kernel_accept_call(struct socket *sock,
unsigned long user_call_ID);
This is used to accept an incoming call and to assign it a call ID. This
function is similar to rxrpc_kernel_begin_call() and calls accepted must
be ended in the same way.
If this function is successful, an opaque reference to the RxRPC call is
returned. The caller now holds a reference on this and it must be
properly ended.
(*) Reject an incoming call.
int rxrpc_kernel_reject_call(struct socket *sock);
This is used to reject the first incoming call on the socket's queue with
a BUSY message. -ENODATA is returned if there were no incoming calls.
Other errors may be returned if the call had been aborted (-ECONNABORTED)
or had timed out (-ETIME).
(*) Record the delivery of a data message and free it.
void rxrpc_kernel_data_delivered(struct sk_buff *skb);
This is used to record a data message as having been delivered and to
update the ACK state for the call. The socket buffer will be freed.
(*) Free a message.
void rxrpc_kernel_free_skb(struct sk_buff *skb);
This is used to free a non-DATA socket buffer intercepted from an AF_RXRPC
socket.
(*) Determine if a data message is the last one on a call.
bool rxrpc_kernel_is_data_last(struct sk_buff *skb);
This is used to determine if a socket buffer holds the last data message
to be received for a call (true will be returned if it does, false
if not).
The data message will be part of the reply on a client call and the
request on an incoming call. In the latter case there will be more
messages, but in the former case there will not.
(*) Get the abort code from an abort message.
u32 rxrpc_kernel_get_abort_code(struct sk_buff *skb);
This is used to extract the abort code from a remote abort message.
(*) Get the error number from a local or network error message.
int rxrpc_kernel_get_error_number(struct sk_buff *skb);
This is used to extract the error number from a message indicating either
a local error occurred or a network error occurred.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-26 22:50:17 +00:00
|
|
|
|
2016-09-07 08:19:31 +00:00
|
|
|
write_unlock_bh(&call->state_lock);
|
2007-04-26 22:48:28 +00:00
|
|
|
}
|
|
|
|
|
2016-09-07 08:19:31 +00:00
|
|
|
if (call->conn)
|
|
|
|
rxrpc_disconnect_call(call);
|
2016-04-04 13:00:38 +00:00
|
|
|
|
[AF_RXRPC]: Add an interface to the AF_RXRPC module for the AFS filesystem to use
2007-04-26 22:50:17 +00:00
|
|
|
/* clean up the Rx queue */
|
2007-04-26 22:48:28 +00:00
|
|
|
if (!skb_queue_empty(&call->rx_queue) ||
|
|
|
|
!skb_queue_empty(&call->rx_oos_queue)) {
|
|
|
|
struct rxrpc_skb_priv *sp;
|
|
|
|
struct sk_buff *skb;
|
|
|
|
|
|
|
|
_debug("purge Rx queues");
|
|
|
|
|
|
|
|
spin_lock_bh(&call->lock);
|
|
|
|
while ((skb = skb_dequeue(&call->rx_queue)) ||
|
|
|
|
(skb = skb_dequeue(&call->rx_oos_queue))) {
|
|
|
|
spin_unlock_bh(&call->lock);
|
|
|
|
|
2016-08-08 10:13:45 +00:00
|
|
|
sp = rxrpc_skb(skb);
|
2007-04-26 22:48:28 +00:00
|
|
|
_debug("- zap %s %%%u #%u",
|
|
|
|
rxrpc_pkts[sp->hdr.type],
|
2016-03-04 15:53:46 +00:00
|
|
|
sp->hdr.serial, sp->hdr.seq);
|
2007-04-26 22:48:28 +00:00
|
|
|
rxrpc_free_skb(skb);
|
|
|
|
spin_lock_bh(&call->lock);
|
|
|
|
}
|
|
|
|
spin_unlock_bh(&call->lock);
|
|
|
|
}
|
2016-09-07 08:19:31 +00:00
|
|
|
rxrpc_purge_queue(&call->knlrecv_queue);
|
2007-04-26 22:48:28 +00:00
|
|
|
|
|
|
|
del_timer_sync(&call->resend_timer);
|
|
|
|
del_timer_sync(&call->ack_timer);
|
|
|
|
del_timer_sync(&call->lifetimer);
|
|
|
|
|
2016-09-08 10:10:12 +00:00
|
|
|
/* We have to release the prealloc backlog ref */
|
|
|
|
if (rxrpc_is_service_call(call))
|
|
|
|
rxrpc_put_call(call, rxrpc_call_put);
|
2007-04-26 22:48:28 +00:00
|
|
|
_leave("");
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* release all the calls associated with a socket
|
|
|
|
*/
|
|
|
|
void rxrpc_release_calls_on_socket(struct rxrpc_sock *rx)
|
|
|
|
{
|
|
|
|
struct rxrpc_call *call;
|
|
|
|
struct rb_node *p;
|
|
|
|
|
|
|
|
_enter("%p", rx);
|
|
|
|
|
|
|
|
read_lock_bh(&rx->call_lock);
|
|
|
|
|
|
|
|
/* kill the not-yet-accepted incoming calls */
|
|
|
|
list_for_each_entry(call, &rx->secureq, accept_link) {
|
2016-09-07 08:19:31 +00:00
|
|
|
rxrpc_release_call(rx, call);
|
2007-04-26 22:48:28 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
list_for_each_entry(call, &rx->acceptq, accept_link) {
|
2016-09-07 08:19:31 +00:00
|
|
|
rxrpc_release_call(rx, call);
|
2007-04-26 22:48:28 +00:00
|
|
|
}
|
|
|
|
|
2016-08-23 14:27:24 +00:00
|
|
|
/* mark all the calls as no longer wanting incoming packets */
|
|
|
|
for (p = rb_first(&rx->calls); p; p = rb_next(p)) {
|
|
|
|
call = rb_entry(p, struct rxrpc_call, sock_node);
|
2016-09-07 08:19:31 +00:00
|
|
|
rxrpc_release_call(rx, call);
|
2016-08-23 14:27:24 +00:00
|
|
|
}
|
|
|
|
|
2007-04-26 22:48:28 +00:00
|
|
|
read_unlock_bh(&rx->call_lock);
|
|
|
|
_leave("");
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* release a call
|
|
|
|
*/
|
2016-09-07 13:34:21 +00:00
|
|
|
void rxrpc_put_call(struct rxrpc_call *call, enum rxrpc_call_trace op)
|
2007-04-26 22:48:28 +00:00
|
|
|
{
|
2016-08-30 08:49:29 +00:00
|
|
|
const void *here = __builtin_return_address(0);
|
2016-09-08 10:10:12 +00:00
|
|
|
int n;
|
2007-04-26 22:48:28 +00:00
|
|
|
|
2016-08-30 08:49:29 +00:00
|
|
|
ASSERT(call != NULL);
|
2007-04-26 22:48:28 +00:00
|
|
|
|
2016-08-30 08:49:29 +00:00
|
|
|
n = atomic_dec_return(&call->usage);
|
2016-09-08 10:10:12 +00:00
|
|
|
trace_rxrpc_call(call, op, n, here, NULL);
|
2016-08-30 08:49:29 +00:00
|
|
|
ASSERTCMP(n, >=, 0);
|
|
|
|
if (n == 0) {
|
|
|
|
_debug("call %d dead", call->debug_id);
|
2016-09-07 08:19:31 +00:00
|
|
|
rxrpc_cleanup_call(call);
|
2016-08-30 08:49:29 +00:00
|
|
|
}
|
|
|
|
}
|
2007-04-26 22:48:28 +00:00
|
|
|
|
2016-08-30 08:49:29 +00:00
|
|
|
/*
|
|
|
|
* Release a call ref held by a socket buffer.
|
|
|
|
*/
|
|
|
|
void rxrpc_put_call_for_skb(struct rxrpc_call *call, struct sk_buff *skb)
|
|
|
|
{
|
|
|
|
const void *here = __builtin_return_address(0);
|
2016-09-08 10:10:12 +00:00
|
|
|
int n;
|
2016-08-30 08:49:29 +00:00
|
|
|
|
|
|
|
n = atomic_dec_return(&call->usage);
|
2016-09-08 10:10:12 +00:00
|
|
|
trace_rxrpc_call(call, rxrpc_call_put_skb, n, here, skb);
|
2016-08-30 08:49:29 +00:00
|
|
|
ASSERTCMP(n, >=, 0);
|
|
|
|
if (n == 0) {
|
2007-04-26 22:48:28 +00:00
|
|
|
_debug("call %d dead", call->debug_id);
|
2016-09-07 08:19:31 +00:00
|
|
|
rxrpc_cleanup_call(call);
|
2007-04-26 22:48:28 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2016-06-27 16:11:19 +00:00
|
|
|
/*
|
|
|
|
* Final call destruction under RCU.
|
|
|
|
*/
|
|
|
|
static void rxrpc_rcu_destroy_call(struct rcu_head *rcu)
|
|
|
|
{
|
|
|
|
struct rxrpc_call *call = container_of(rcu, struct rxrpc_call, rcu);
|
|
|
|
|
|
|
|
rxrpc_purge_queue(&call->rx_queue);
|
rxrpc: Don't expose skbs to in-kernel users [ver #2]
Don't expose skbs to in-kernel users, such as the AFS filesystem, but
instead provide a notification hook that indicates that a call needs
attention and another that indicates that there's a new call to be
collected.
This makes the following possibilities more achievable:
(1) Call refcounting can be made simpler if skbs don't hold refs to calls.
(2) skbs referring to non-data events will be able to be freed much sooner
rather than being queued for AFS to pick up as rxrpc_kernel_recv_data
will be able to consult the call state.
(3) We can shortcut the receive phase when a call is remotely aborted
because we don't have to go through all the packets to get to the one
cancelling the operation.
(4) It makes it easier to do encryption/decryption directly between AFS's
buffers and sk_buffs.
(5) Encryption/decryption can more easily be done in AFS's thread
contexts - usually that of the userspace process that issued a syscall
- rather than in one of rxrpc's background threads on a workqueue.
(6) AFS will be able to wait synchronously on a call inside AF_RXRPC.
To make this work, the following interface function has been added:
int rxrpc_kernel_recv_data(
struct socket *sock, struct rxrpc_call *call,
void *buffer, size_t bufsize, size_t *_offset,
bool want_more, u32 *_abort_code);
This is the recvmsg equivalent. It allows the caller to find out about the
state of a specific call and to transfer received data into a buffer
piecemeal.
afs_extract_data() and rxrpc_kernel_recv_data() now do all the extraction
logic between them. They don't wait synchronously yet because the socket
lock needs to be dealt with.
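For illustration, a hedged sketch of calling this function follows; the
return-code handling, warning message and buffer management are assumptions
and not taken from this patch:

static int example_read_reply(struct socket *sock, struct rxrpc_call *call,
			      void *buf, size_t size)
{
	size_t offset = 0;
	u32 abort_code = 0;
	int ret;

	/* want_more is false: we expect this to be the final data of the call */
	ret = rxrpc_kernel_recv_data(sock, call, buf, size, &offset,
				     false, &abort_code);
	if (ret < 0)
		pr_warn("recv_data failed %d (abort code %u)\n", ret, abort_code);
	return ret;
}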
Five interface functions have been removed:
rxrpc_kernel_is_data_last()
rxrpc_kernel_get_abort_code()
rxrpc_kernel_get_error_number()
rxrpc_kernel_free_skb()
rxrpc_kernel_data_consumed()
As a temporary hack, sk_buffs going to an in-kernel call are queued on the
rxrpc_call struct (->knlrecv_queue) rather than being handed over to the
in-kernel user. To process the queue internally, a temporary function,
temp_deliver_data(), has been added. This will be replaced with common code
between the rxrpc_recvmsg() path and the kernel_rxrpc_recv_data() path in a
future patch.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-08-30 19:42:14 +00:00
|
|
|
rxrpc_purge_queue(&call->knlrecv_queue);
|
2016-08-24 13:31:43 +00:00
|
|
|
rxrpc_put_peer(call->peer);
|
2016-06-27 16:11:19 +00:00
|
|
|
kmem_cache_free(rxrpc_call_jar, call);
|
|
|
|
}
|
|
|
|
|
2007-04-26 22:48:28 +00:00
|
|
|
/*
|
|
|
|
* clean up a call
|
|
|
|
*/
|
2016-09-08 10:10:12 +00:00
|
|
|
void rxrpc_cleanup_call(struct rxrpc_call *call)
|
2007-04-26 22:48:28 +00:00
|
|
|
{
|
|
|
|
_net("DESTROY CALL %d", call->debug_id);
|
|
|
|
|
2016-09-07 08:19:31 +00:00
|
|
|
write_lock_bh(&rxrpc_call_lock);
|
|
|
|
list_del_init(&call->link);
|
|
|
|
write_unlock_bh(&rxrpc_call_lock);
|
2007-04-26 22:48:28 +00:00
|
|
|
|
|
|
|
memset(&call->sock_node, 0xcd, sizeof(call->sock_node));
|
|
|
|
|
|
|
|
del_timer_sync(&call->lifetimer);
|
|
|
|
del_timer_sync(&call->ack_timer);
|
|
|
|
del_timer_sync(&call->resend_timer);
|
|
|
|
|
2016-09-07 08:19:31 +00:00
|
|
|
ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE);
|
2007-04-26 22:48:28 +00:00
|
|
|
ASSERT(test_bit(RXRPC_CALL_RELEASED, &call->flags));
|
2016-09-07 08:19:31 +00:00
|
|
|
ASSERT(!work_pending(&call->processor));
|
2016-04-04 13:00:38 +00:00
|
|
|
ASSERTCMP(call->conn, ==, NULL);
|
2007-04-26 22:48:28 +00:00
|
|
|
|
|
|
|
if (call->acks_window) {
|
|
|
|
_debug("kill Tx window %d",
|
|
|
|
CIRC_CNT(call->acks_head, call->acks_tail,
|
|
|
|
call->acks_winsz));
|
|
|
|
smp_mb();
|
|
|
|
while (CIRC_CNT(call->acks_head, call->acks_tail,
|
|
|
|
call->acks_winsz) > 0) {
|
|
|
|
struct rxrpc_skb_priv *sp;
|
|
|
|
unsigned long _skb;
|
|
|
|
|
|
|
|
_skb = call->acks_window[call->acks_tail] & ~1;
|
2016-03-04 15:53:46 +00:00
|
|
|
sp = rxrpc_skb((struct sk_buff *)_skb);
|
|
|
|
_debug("+++ clear Tx %u", sp->hdr.seq);
|
|
|
|
rxrpc_free_skb((struct sk_buff *)_skb);
|
2007-04-26 22:48:28 +00:00
|
|
|
call->acks_tail =
|
|
|
|
(call->acks_tail + 1) & (call->acks_winsz - 1);
|
|
|
|
}
|
|
|
|
|
|
|
|
kfree(call->acks_window);
|
|
|
|
}
|
|
|
|
|
|
|
|
rxrpc_free_skb(call->tx_pending);
|
|
|
|
|
|
|
|
rxrpc_purge_queue(&call->rx_queue);
|
|
|
|
ASSERT(skb_queue_empty(&call->rx_oos_queue));
|
rxrpc: Don't expose skbs to in-kernel users [ver #2]
Don't expose skbs to in-kernel users, such as the AFS filesystem, but
instead provide a notification hook the indicates that a call needs
attention and another that indicates that there's a new call to be
collected.
This makes the following possibilities more achievable:
(1) Call refcounting can be made simpler if skbs don't hold refs to calls.
(2) skbs referring to non-data events will be able to be freed much sooner
rather than being queued for AFS to pick up as rxrpc_kernel_recv_data
will be able to consult the call state.
(3) We can shortcut the receive phase when a call is remotely aborted
because we don't have to go through all the packets to get to the one
cancelling the operation.
(4) It makes it easier to do encryption/decryption directly between AFS's
buffers and sk_buffs.
(5) Encryption/decryption can more easily be done in the AFS's thread
contexts - usually that of the userspace process that issued a syscall
- rather than in one of rxrpc's background threads on a workqueue.
(6) AFS will be able to wait synchronously on a call inside AF_RXRPC.
To make this work, the following interface function has been added:
int rxrpc_kernel_recv_data(
struct socket *sock, struct rxrpc_call *call,
void *buffer, size_t bufsize, size_t *_offset,
bool want_more, u32 *_abort_code);
This is the recvmsg equivalent. It allows the caller to find out about the
state of a specific call and to transfer received data into a buffer
piecemeal.
afs_extract_data() and rxrpc_kernel_recv_data() now do all the extraction
logic between them. They don't wait synchronously yet because the socket
lock needs to be dealt with.
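Purely as an illustration (not part of this patch), the following is a
minimal, hedged sketch of how an in-kernel caller might invoke the new
function; the example_recv_reply() name, the single-shot calling pattern,
the error handling and the assumption that the declaration lives in
<net/af_rxrpc.h> are all illustrative assumptions:

#include <linux/net.h>
#include <linux/printk.h>
#include <linux/types.h>
#include <net/af_rxrpc.h>

/*
 * Hypothetical caller: copy a fixed-size reply out of a call.  The precise
 * return-value conventions aren't spelled out here, so only the sign of
 * the result is interpreted.
 */
static int example_recv_reply(struct socket *sock, struct rxrpc_call *call,
			      void *reply, size_t reply_size)
{
	size_t offset = 0;	/* how much of the reply buffer is filled */
	u32 abort_code = 0;
	int ret;

	/* want_more == false: the whole reply should fit in reply_size */
	ret = rxrpc_kernel_recv_data(sock, call, reply, reply_size, &offset,
				     false, &abort_code);
	if (ret < 0)
		pr_warn("recv_data failed: %d (abort code %u)\n",
			ret, abort_code);
	return ret;
}

A real user such as afs_extract_data() would presumably call a helper of
this sort repeatedly, as the notification hook reports that more data has
arrived for the call.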
Five interface functions have been removed:
rxrpc_kernel_is_data_last()
rxrpc_kernel_get_abort_code()
rxrpc_kernel_get_error_number()
rxrpc_kernel_free_skb()
rxrpc_kernel_data_consumed()
As a temporary hack, sk_buffs going to an in-kernel call are queued on the
rxrpc_call struct (->knlrecv_queue) rather than being handed over to the
in-kernel user. To process the queue internally, a temporary function,
temp_deliver_data() has been added. This will be replaced with common code
between the rxrpc_recvmsg() path and the rxrpc_kernel_recv_data() path in a
future patch.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-08-30 19:42:14 +00:00
|
|
|
rxrpc_purge_queue(&call->knlrecv_queue);
|
2016-06-27 16:11:19 +00:00
|
|
|
call_rcu(&call->rcu, rxrpc_rcu_destroy_call);
|
2007-04-26 22:48:28 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
2016-09-07 08:19:31 +00:00
|
|
|
* Make sure that all calls are gone.
|
2007-04-26 22:48:28 +00:00
|
|
|
*/
|
|
|
|
void __exit rxrpc_destroy_all_calls(void)
|
|
|
|
{
|
|
|
|
struct rxrpc_call *call;
|
|
|
|
|
|
|
|
_enter("");
|
2016-09-07 08:19:31 +00:00
|
|
|
|
|
|
|
if (list_empty(&rxrpc_calls))
|
|
|
|
return;
|
|
|
|
|
2007-04-26 22:48:28 +00:00
|
|
|
write_lock_bh(&rxrpc_call_lock);
|
|
|
|
|
|
|
|
while (!list_empty(&rxrpc_calls)) {
|
|
|
|
call = list_entry(rxrpc_calls.next, struct rxrpc_call, link);
|
|
|
|
_debug("Zapping call %p", call);
|
|
|
|
|
2016-08-30 08:49:29 +00:00
|
|
|
rxrpc_see_call(call);
|
2007-04-26 22:48:28 +00:00
|
|
|
list_del_init(&call->link);
|
|
|
|
|
2016-09-07 08:19:31 +00:00
|
|
|
pr_err("Call %p still in use (%d,%d,%s,%lx,%lx)!\n",
|
|
|
|
call, atomic_read(&call->usage),
|
|
|
|
atomic_read(&call->ackr_not_idle),
|
|
|
|
rxrpc_call_states[call->state],
|
|
|
|
call->flags, call->events);
|
|
|
|
if (!skb_queue_empty(&call->rx_queue))
|
|
|
|
pr_err("Rx queue occupied\n");
|
|
|
|
if (!skb_queue_empty(&call->rx_oos_queue))
|
|
|
|
pr_err("OOS queue occupied\n");
|
2007-04-26 22:48:28 +00:00
|
|
|
|
|
|
|
write_unlock_bh(&rxrpc_call_lock);
|
|
|
|
cond_resched();
|
|
|
|
write_lock_bh(&rxrpc_call_lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
write_unlock_bh(&rxrpc_call_lock);
|
|
|
|
_leave("");
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* handle call lifetime being exceeded
|
|
|
|
*/
|
|
|
|
static void rxrpc_call_life_expired(unsigned long _call)
|
|
|
|
{
|
|
|
|
struct rxrpc_call *call = (struct rxrpc_call *) _call;
|
|
|
|
|
2016-08-30 08:49:28 +00:00
|
|
|
_enter("{%d}", call->debug_id);
|
|
|
|
|
2016-08-30 08:49:29 +00:00
|
|
|
rxrpc_see_call(call);
|
2007-04-26 22:48:28 +00:00
|
|
|
if (call->state >= RXRPC_CALL_COMPLETE)
|
|
|
|
return;
|
|
|
|
|
2016-08-30 08:49:28 +00:00
|
|
|
set_bit(RXRPC_CALL_EV_LIFE_TIMER, &call->events);
|
|
|
|
rxrpc_queue_call(call);
|
2007-04-26 22:48:28 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* handle resend timer expiry
|
2010-08-04 02:34:17 +00:00
|
|
|
* - may not take call->state_lock as this can deadlock against del_timer_sync()
|
2007-04-26 22:48:28 +00:00
|
|
|
*/
|
|
|
|
static void rxrpc_resend_time_expired(unsigned long _call)
|
|
|
|
{
|
|
|
|
struct rxrpc_call *call = (struct rxrpc_call *) _call;
|
|
|
|
|
|
|
|
_enter("{%d}", call->debug_id);
|
|
|
|
|
2016-08-30 08:49:29 +00:00
|
|
|
rxrpc_see_call(call);
|
2007-04-26 22:48:28 +00:00
|
|
|
if (call->state >= RXRPC_CALL_COMPLETE)
|
|
|
|
return;
|
|
|
|
|
|
|
|
clear_bit(RXRPC_CALL_RUN_RTIMER, &call->flags);
|
2016-03-04 15:53:46 +00:00
|
|
|
if (!test_and_set_bit(RXRPC_CALL_EV_RESEND_TIMER, &call->events))
|
[AF_RXRPC]: Add an interface to the AF_RXRPC module for the AFS filesystem to use
Add an interface to the AF_RXRPC module so that the AFS filesystem module can
more easily make use of the services available. AFS still opens a socket but
then uses the action functions in lieu of sendmsg() and registers an intercept
function to grab messages before they're queued on the socket Rx queue.
This permits AFS (or whatever) to:
(1) Avoid the overhead of using the recvmsg() call.
(2) Use different keys directly on individual client calls on one socket
rather than having to open a whole slew of sockets, one for each key it
might want to use.
(3) Avoid calling request_key() at the point of issue of a call or opening of
a socket. This is done instead by AFS at the point of open(), unlink() or
other VFS operation and the key handed through.
(4) Request the use of something other than GFP_KERNEL to allocate memory.
Furthermore:
(*) The socket buffer markings used by RxRPC are made available for AFS so
that it can interpret the cooked RxRPC messages itself.
(*) rxgen (un)marshalling abort codes are made available.
The following documentation for the kernel interface is added to
Documentation/networking/rxrpc.txt:
=========================
AF_RXRPC KERNEL INTERFACE
=========================
The AF_RXRPC module also provides an interface for use by in-kernel utilities
such as the AFS filesystem. This permits such a utility to:
(1) Use different keys directly on individual client calls on one socket
rather than having to open a whole slew of sockets, one for each key it
might want to use.
(2) Avoid having RxRPC call request_key() at the point of issue of a call or
opening of a socket. Instead the utility is responsible for requesting a
key at the appropriate point. AFS, for instance, would do this during VFS
operations such as open() or unlink(). The key is then handed through
when the call is initiated.
(3) Request the use of something other than GFP_KERNEL to allocate memory.
(4) Avoid the overhead of using the recvmsg() call. RxRPC messages can be
intercepted before they get put into the socket Rx queue and the socket
buffers manipulated directly.
To use the RxRPC facility, a kernel utility must still open an AF_RXRPC socket,
bind an address as appropriate and listen if it's to be a server socket, but
then it passes this to the kernel interface functions.
The kernel interface functions are as follows (an illustrative sketch that
puts several of them together appears after this list):
(*) Begin a new client call.
struct rxrpc_call *
rxrpc_kernel_begin_call(struct socket *sock,
struct sockaddr_rxrpc *srx,
struct key *key,
unsigned long user_call_ID,
gfp_t gfp);
This allocates the infrastructure to make a new RxRPC call and assigns
call and connection numbers. The call will be made on the UDP port that
the socket is bound to. The call will go to the destination address of a
connected client socket unless an alternative is supplied (srx is
non-NULL).
If a key is supplied then this will be used to secure the call instead of
the key bound to the socket with the RXRPC_SECURITY_KEY sockopt. Calls
secured in this way will still share connections if at all possible.
The user_call_ID is equivalent to that supplied to sendmsg() in the
control data buffer. It is entirely feasible to use this to point to a
kernel data structure.
If this function is successful, an opaque reference to the RxRPC call is
returned. The caller now holds a reference on this and it must be
properly ended.
(*) End a client call.
void rxrpc_kernel_end_call(struct rxrpc_call *call);
This is used to end a previously begun call. The user_call_ID is expunged
from AF_RXRPC's knowledge and will not be seen again in association with
the specified call.
(*) Send data through a call.
int rxrpc_kernel_send_data(struct rxrpc_call *call, struct msghdr *msg,
size_t len);
This is used to supply either the request part of a client call or the
reply part of a server call. msg.msg_iovlen and msg.msg_iov specify the
data buffers to be used. msg_iov may not be NULL and must point
exclusively to in-kernel virtual addresses. msg.msg_flags may be given
MSG_MORE if there will be subsequent data sends for this call.
The msg must not specify a destination address, control data or any flags
other than MSG_MORE. len is the total amount of data to transmit.
(*) Abort a call.
void rxrpc_kernel_abort_call(struct rxrpc_call *call, u32 abort_code);
This is used to abort a call if it's still in an abortable state. The
abort code specified will be placed in the ABORT message sent.
(*) Intercept received RxRPC messages.
typedef void (*rxrpc_interceptor_t)(struct sock *sk,
unsigned long user_call_ID,
struct sk_buff *skb);
void
rxrpc_kernel_intercept_rx_messages(struct socket *sock,
rxrpc_interceptor_t interceptor);
This installs an interceptor function on the specified AF_RXRPC socket.
All messages that would otherwise wind up in the socket's Rx queue are
then diverted to this function. Note that care must be taken to process
the messages in the right order to maintain DATA message sequentiality.
The interceptor function itself is provided with the address of the socket
that's handling the incoming message, the ID assigned by the kernel utility
to the call, and the socket buffer containing the message.
The skb->mark field indicates the type of message:
MARK                            MEANING
=============================== =======================================
RXRPC_SKB_MARK_DATA             Data message
RXRPC_SKB_MARK_FINAL_ACK        Final ACK received for an incoming call
RXRPC_SKB_MARK_BUSY             Client call rejected as server busy
RXRPC_SKB_MARK_REMOTE_ABORT     Call aborted by peer
RXRPC_SKB_MARK_NET_ERROR        Network error detected
RXRPC_SKB_MARK_LOCAL_ERROR      Local error encountered
RXRPC_SKB_MARK_NEW_CALL         New incoming call awaiting acceptance
The remote abort message can be probed with rxrpc_kernel_get_abort_code().
The two error messages can be probed with rxrpc_kernel_get_error_number().
A new call can be accepted with rxrpc_kernel_accept_call().
Data messages can have their contents extracted with the usual bunch of
socket buffer manipulation functions. A data message can be determined to
be the last one in a sequence with rxrpc_kernel_is_data_last(). When a
data message has been used up, rxrpc_kernel_data_delivered() should be
called on it.
Non-data messages should be handed to rxrpc_kernel_free_skb() for
disposal. It is possible to get extra refs on all types of message for later
freeing, but this may pin the state of a call until the message is finally
freed.
(*) Accept an incoming call.
struct rxrpc_call *
rxrpc_kernel_accept_call(struct socket *sock,
unsigned long user_call_ID);
This is used to accept an incoming call and to assign it a call ID. This
function is similar to rxrpc_kernel_begin_call() and calls accepted must
be ended in the same way.
If this function is successful, an opaque reference to the RxRPC call is
returned. The caller now holds a reference on this and it must be
properly ended.
(*) Reject an incoming call.
int rxrpc_kernel_reject_call(struct socket *sock);
This is used to reject the first incoming call on the socket's queue with
a BUSY message. -ENODATA is returned if there were no incoming calls.
Other errors may be returned if the call had been aborted (-ECONNABORTED)
or had timed out (-ETIME).
(*) Record the delivery of a data message and free it.
void rxrpc_kernel_data_delivered(struct sk_buff *skb);
This is used to record a data message as having been delivered and to
update the ACK state for the call. The socket buffer will be freed.
(*) Free a message.
void rxrpc_kernel_free_skb(struct sk_buff *skb);
This is used to free a non-DATA socket buffer intercepted from an AF_RXRPC
socket.
(*) Determine if a data message is the last one on a call.
bool rxrpc_kernel_is_data_last(struct sk_buff *skb);
This is used to determine if a socket buffer holds the last data message
to be received for a call (true will be returned if it does, false
if not).
The data message will be part of the reply on a client call and the
request on an incoming call. In the latter case there will be more
messages, but in the former case there will not.
(*) Get the abort code from an abort message.
u32 rxrpc_kernel_get_abort_code(struct sk_buff *skb);
This is used to extract the abort code from a remote abort message.
(*) Get the error number from a local or network error message.
int rxrpc_kernel_get_error_number(struct sk_buff *skb);
This is used to extract the error number from a message indicating either
a local error occurred or a network error occurred.
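To tie the functions above together, here is a hedged, illustrative sketch
of an in-kernel user; it is not code from this patch. The example_* names
are hypothetical, key and address setup is omitted (a connected socket and
the key bound to it are assumed), the msghdr follows the msg_iov/msg_iovlen
convention described above, and the ERR_PTR error convention and the numeric
abort code are assumptions:

#include <linux/err.h>
#include <linux/gfp.h>
#include <linux/net.h>
#include <linux/printk.h>
#include <linux/skbuff.h>
#include <linux/string.h>
#include <linux/uio.h>
#include <net/af_rxrpc.h>

/*
 * Hypothetical interceptor: messages that would otherwise land on the
 * socket's Rx queue are dispatched on skb->mark.  user_call_ID is whatever
 * value was passed to rxrpc_kernel_begin_call() for this call.
 */
static void example_rx_interceptor(struct sock *sk,
				   unsigned long user_call_ID,
				   struct sk_buff *skb)
{
	switch (skb->mark) {
	case RXRPC_SKB_MARK_DATA:
		/* ... copy the payload out of the skb here ... */
		rxrpc_kernel_data_delivered(skb); /* updates ACK state, frees skb */
		break;
	case RXRPC_SKB_MARK_REMOTE_ABORT:
		pr_warn("call %lx aborted by peer, code %u\n", user_call_ID,
			rxrpc_kernel_get_abort_code(skb));
		rxrpc_kernel_free_skb(skb);
		break;
	default:
		/* busy, final ACK, local/network errors, new calls, ... */
		rxrpc_kernel_free_skb(skb);
		break;
	}
}

/*
 * Hypothetical client call on an already-bound, connected AF_RXRPC socket:
 * begin the call, send the request from a single kernel buffer, end the call.
 */
static int example_client_call(struct socket *sock, void *request, size_t len)
{
	struct rxrpc_call *call;
	struct msghdr msg;
	struct iovec iov;
	int ret;

	rxrpc_kernel_intercept_rx_messages(sock, example_rx_interceptor);

	/* NULL srx: send to the connected socket's destination.
	 * NULL key: fall back to the key bound with RXRPC_SECURITY_KEY.
	 * (The ERR_PTR error convention is an assumption.) */
	call = rxrpc_kernel_begin_call(sock, NULL, NULL,
				       (unsigned long)request, GFP_KERNEL);
	if (IS_ERR(call))
		return PTR_ERR(call);

	memset(&msg, 0, sizeof(msg));
	iov.iov_base = request;		/* must be an in-kernel address */
	iov.iov_len = len;
	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_flags = 0;		/* no MSG_MORE: this is all the data */

	ret = rxrpc_kernel_send_data(call, &msg, len);
	if (ret < 0)
		rxrpc_kernel_abort_call(call, 1 /* arbitrary example abort code */);

	/* ... the interceptor above collects the reply asynchronously ... */

	rxrpc_kernel_end_call(call);
	return ret;
}

A real user would additionally have to handle RXRPC_SKB_MARK_NEW_CALL by
calling rxrpc_kernel_accept_call() or rxrpc_kernel_reject_call(), as
described above.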
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-26 22:50:17 +00:00
|
|
|
rxrpc_queue_call(call);
|
2007-04-26 22:48:28 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* handle ACK timer expiry
|
|
|
|
*/
|
|
|
|
static void rxrpc_ack_time_expired(unsigned long _call)
|
|
|
|
{
|
|
|
|
struct rxrpc_call *call = (struct rxrpc_call *) _call;
|
|
|
|
|
|
|
|
_enter("{%d}", call->debug_id);
|
|
|
|
|
2016-08-30 08:49:29 +00:00
|
|
|
rxrpc_see_call(call);
|
2007-04-26 22:48:28 +00:00
|
|
|
if (call->state >= RXRPC_CALL_COMPLETE)
|
|
|
|
return;
|
|
|
|
|
2016-08-30 08:49:28 +00:00
|
|
|
if (!test_and_set_bit(RXRPC_CALL_EV_ACK, &call->events))
|
2007-04-26 22:50:17 +00:00
|
|
|
rxrpc_queue_call(call);
|
2007-04-26 22:48:28 +00:00
|
|
|
}
|