// SPDX-License-Identifier: GPL-2.0-or-later
/* RxRPC virtual connection handler, common bits.
 *
 * Copyright (C) 2007, 2016 Red Hat, Inc. All Rights Reserved.
 * Written by David Howells (dhowells@redhat.com)
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/net.h>
#include <linux/skbuff.h>
#include "ar-internal.h"

/*
 * Time till a connection expires after last use (in seconds).
 */
unsigned int __read_mostly rxrpc_connection_expiry = 10 * 60;
unsigned int __read_mostly rxrpc_closed_conn_expiry = 10;

static void rxrpc_clean_up_connection(struct work_struct *work);
static void rxrpc_set_service_reap_timer(struct rxrpc_net *rxnet,
                                         unsigned long reap_at);
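
/*
 * Poke a connection, i.e. queue it for attention by the I/O thread.  A ref is
 * taken on the connection if it isn't already on the local endpoint's
 * attention queue.
 */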
void rxrpc_poke_conn(struct rxrpc_connection *conn, enum rxrpc_conn_trace why)
{
        struct rxrpc_local *local = conn->local;
        bool busy;

        if (WARN_ON_ONCE(!local))
                return;

        spin_lock_bh(&local->lock);
        busy = !list_empty(&conn->attend_link);
        if (!busy) {
                rxrpc_get_connection(conn, why);
                list_add_tail(&conn->attend_link, &local->conn_attend_q);
        }
        spin_unlock_bh(&local->lock);
        rxrpc_wake_up_io_thread(local);
}
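
/*
 * The connection timer has expired: poke the connection so that the I/O
 * thread gives it attention.
 */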
static void rxrpc_connection_timer(struct timer_list *timer)
{
        struct rxrpc_connection *conn =
                container_of(timer, struct rxrpc_connection, timer);

        rxrpc_poke_conn(conn, rxrpc_conn_get_poke_timer);
}

/*
 * allocate a new connection
 */
struct rxrpc_connection *rxrpc_alloc_connection(struct rxrpc_net *rxnet,
                                                gfp_t gfp)
{
        struct rxrpc_connection *conn;

        _enter("");

        conn = kzalloc(sizeof(struct rxrpc_connection), gfp);
        if (conn) {
                INIT_LIST_HEAD(&conn->cache_link);
                timer_setup(&conn->timer, &rxrpc_connection_timer, 0);
                INIT_WORK(&conn->processor, rxrpc_process_connection);
                INIT_WORK(&conn->destructor, rxrpc_clean_up_connection);
                INIT_LIST_HEAD(&conn->proc_link);
                INIT_LIST_HEAD(&conn->link);
                mutex_init(&conn->security_lock);
                skb_queue_head_init(&conn->rx_queue);
                conn->rxnet = rxnet;
                conn->security = &rxrpc_no_security;
                spin_lock_init(&conn->state_lock);
                conn->debug_id = atomic_inc_return(&rxrpc_debug_id);
                conn->idle_timestamp = jiffies;
        }

        _leave(" = %p{%d}", conn, conn ? conn->debug_id : 0);
        return conn;
}

/*
 * Look up a client connection in the cache by connection ID, then verify that
 * the epoch and peer address match the packet.
 *
 * If successful, a pointer to the connection is returned, but no ref is
 * taken.  NULL is returned if there is no match.
 *
 * The caller must be holding the RCU read lock.
 */
struct rxrpc_connection *rxrpc_find_client_connection_rcu(struct rxrpc_local *local,
                                                           struct sockaddr_rxrpc *srx,
                                                           struct sk_buff *skb)
{
        struct rxrpc_connection *conn;
        struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
        struct rxrpc_peer *peer;

        _enter(",%x", sp->hdr.cid & RXRPC_CIDMASK);

        /* Look up client connections by connection ID alone as their
         * IDs are unique for this machine.
         */
        conn = idr_find(&local->conn_ids, sp->hdr.cid >> RXRPC_CIDSHIFT);
        if (!conn || refcount_read(&conn->ref) == 0) {
                _debug("no conn");
                goto not_found;
        }

        if (conn->proto.epoch != sp->hdr.epoch ||
            conn->local != local)
                goto not_found;

        peer = conn->peer;
        switch (srx->transport.family) {
        case AF_INET:
                if (peer->srx.transport.sin.sin_port !=
                    srx->transport.sin.sin_port ||
                    peer->srx.transport.sin.sin_addr.s_addr !=
                    srx->transport.sin.sin_addr.s_addr)
                        goto not_found;
                break;
#ifdef CONFIG_AF_RXRPC_IPV6
        case AF_INET6:
                if (peer->srx.transport.sin6.sin6_port !=
                    srx->transport.sin6.sin6_port ||
                    memcmp(&peer->srx.transport.sin6.sin6_addr,
                           &srx->transport.sin6.sin6_addr,
                           sizeof(struct in6_addr)) != 0)
                        goto not_found;
                break;
#endif
        default:
                BUG();
        }

        _leave(" = %p", conn);
        return conn;

not_found:
        _leave(" = NULL");
        return NULL;
}
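
/* Illustrative sketch only (not a caller in this file): since no ref is taken
 * by the lookup, a caller is expected to do something along the lines of:
 *
 *      rcu_read_lock();
 *      conn = rxrpc_find_client_connection_rcu(local, &srx, skb);
 *      if (conn)
 *              ... take a ref on the connection before using it ...
 *      rcu_read_unlock();
 */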

/*
 * Disconnect a call and clear any channel it occupies when that call
 * terminates.  The caller must hold the channel_lock and must release the
 * call's ref on the connection.
 */
void __rxrpc_disconnect_call(struct rxrpc_connection *conn,
                             struct rxrpc_call *call)
{
        struct rxrpc_channel *chan =
                &conn->channels[call->cid & RXRPC_CHANNELMASK];

        _enter("%d,%x", conn->debug_id, call->cid);

        if (chan->call == call) {
                /* Save the result of the call so that we can repeat it if necessary
                 * through the channel, whilst disposing of the actual call record.
                 */
                trace_rxrpc_disconnect_call(call);
                switch (call->completion) {
                case RXRPC_CALL_SUCCEEDED:
                        chan->last_seq = call->rx_highest_seq;
                        chan->last_type = RXRPC_PACKET_TYPE_ACK;
                        break;
                case RXRPC_CALL_LOCALLY_ABORTED:
                        chan->last_abort = call->abort_code;
                        chan->last_type = RXRPC_PACKET_TYPE_ABORT;
                        break;
                default:
                        chan->last_abort = RX_CALL_DEAD;
                        chan->last_type = RXRPC_PACKET_TYPE_ABORT;
                        break;
                }

                chan->last_call = chan->call_id;
                chan->call_id = chan->call_counter;
                chan->call = NULL;
        }

        _leave("");
}

/*
 * Disconnect a call and clear any channel it occupies when that call
 * terminates.
 */
void rxrpc_disconnect_call(struct rxrpc_call *call)
{
        struct rxrpc_connection *conn = call->conn;

        set_bit(RXRPC_CALL_DISCONNECTED, &call->flags);
        rxrpc_see_call(call, rxrpc_call_see_disconnected);

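        /* Save the call's slow-start threshold back to the peer so that
         * subsequent calls to this peer can start from it.
         */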
        call->peer->cong_ssthresh = call->cong_ssthresh;

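        /* Detach the call from the peer's error-distribution list. */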
        if (!hlist_unhashed(&call->error_link)) {
                spin_lock(&call->peer->lock);
                hlist_del_init(&call->error_link);
                spin_unlock(&call->peer->lock);
        }

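        /* Client calls are dismantled through their connection bundle;
         * service calls detach directly from the connection and may arm the
         * service connection reaper once the connection has no active users.
         */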
        if (rxrpc_is_client_call(call)) {
                rxrpc_disconnect_client_call(call->bundle, call);
        } else {
                __rxrpc_disconnect_call(conn, call);
                conn->idle_timestamp = jiffies;
                if (atomic_dec_and_test(&conn->active))
                        rxrpc_set_service_reap_timer(conn->rxnet,
                                                     jiffies + rxrpc_connection_expiry * HZ);
        }

        rxrpc_put_call(call, rxrpc_call_put_io_thread);
}

/*
 * Queue a connection's work processor, getting a ref to pass to the work
 * queue.
 */
void rxrpc_queue_conn(struct rxrpc_connection *conn, enum rxrpc_conn_trace why)
{
        if (atomic_read(&conn->active) >= 0 &&
            rxrpc_queue_work(&conn->processor))
                rxrpc_see_connection(conn, why);
}

/*
 * Note the re-emergence of a connection.
 */
void rxrpc_see_connection(struct rxrpc_connection *conn,
                          enum rxrpc_conn_trace why)
{
        if (conn) {
                int r = refcount_read(&conn->ref);

                trace_rxrpc_conn(conn->debug_id, r, why);
        }
}

/*
 * Get a ref on a connection.
 */
struct rxrpc_connection *rxrpc_get_connection(struct rxrpc_connection *conn,
                                              enum rxrpc_conn_trace why)
{
        int r;

        __refcount_inc(&conn->ref, &r);
        trace_rxrpc_conn(conn->debug_id, r + 1, why);
        return conn;
}

rxrpc: Rewrite the client connection manager
Rewrite the rxrpc client connection manager so that it can support multiple
connections for a given security key to a peer. The following changes are
made:
(1) For each open socket, the code currently maintains an rbtree with the
connections placed into it, keyed by communications parameters. This
is tricky to maintain as connections can be culled from the tree or
replaced within it. Connections can require replacement for a number
of reasons, e.g. their IDs span too great a range for the IDR data
type to represent efficiently, the call ID numbers on that conn would
overflow or the conn got aborted.
This is changed so that there's now a connection bundle object placed
in the tree, keyed on the same parameters. The bundle, however, does
not need to be replaced.
(2) An rxrpc_bundle object can now manage the available channels for a set
of parallel connections. The lock that manages this is moved there
from the rxrpc_connection struct (channel_lock).
(3) There's a dummy bundle for all incoming connections to share so that
they have a channel_lock too. It might be better to give each
incoming connection its own bundle. This bundle is not needed to
manage which channels incoming calls are made on because that's
solely at the whim of the client.
(4) The restrictions on how many client connections are around are
removed. Instead, a previous patch limits the number of client calls
that can be allocated. Ordinarily, client connections are reaped
after 2 minutes on the idle queue, but when more than a certain number
of connections are in existence, the reaper starts reaping them after
2s of idleness instead to get the numbers back down (a sketch of this
policy follows below).
It could also be made such that new call allocations are forced to
wait until the number of outstanding connections subsides.
Signed-off-by: David Howells <dhowells@redhat.com>
2020-07-01 10:15:32 +00:00
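A minimal sketch of the two-speed reaping policy described in point (4) above, assuming kernel context. The threshold and both expiry values are made-up stand-ins for the real tunables, not values taken from the code:

#include <linux/jiffies.h>

/* Illustrative only: pick the idle expiry period for a client connection.
 * Numbers here are hypothetical placeholders for the actual sysctls.
 */
static unsigned long example_client_conn_expiry(unsigned int nr_client_conns)
{
        const unsigned int reap_threshold = 900;        /* hypothetical */
        unsigned long slow_expiry = 2 * 60 * HZ;        /* ~2 minutes of idleness */
        unsigned long fast_expiry = 2 * HZ;             /* ~2 seconds of idleness */

        return nr_client_conns > reap_threshold ? fast_expiry : slow_expiry;
}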

/*
 * Try to get a ref on a connection.
 */
struct rxrpc_connection *
rxrpc_get_connection_maybe(struct rxrpc_connection *conn,
                           enum rxrpc_conn_trace why)
{
        int r;

        if (conn) {
                if (__refcount_inc_not_zero(&conn->ref, &r))
                        trace_rxrpc_conn(conn->debug_id, r + 1, why);
                else
                        conn = NULL;
        }
        return conn;
}
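As a usage sketch (not code from this file): a typical caller looks the connection up under RCU, upgrades to a real reference only if the refcount is still non-zero, and balances the reference later with rxrpc_put_connection(). Both example_lookup_and_ref() and example_find_conn_rcu() below are hypothetical names introduced only for illustration:

/* Assumes this file's usual headers for the rxrpc types. */

/* Hypothetical RCU-safe lookup; not a real rxrpc function. */
extern struct rxrpc_connection *example_find_conn_rcu(struct rxrpc_peer *peer,
                                                      u32 cid);

static struct rxrpc_connection *example_lookup_and_ref(struct rxrpc_peer *peer,
                                                       u32 cid,
                                                       enum rxrpc_conn_trace why)
{
        struct rxrpc_connection *conn;

        rcu_read_lock();
        conn = example_find_conn_rcu(peer, cid);        /* may return NULL */
        conn = rxrpc_get_connection_maybe(conn, why);
        rcu_read_unlock();

        /* Caller must balance a non-NULL result with rxrpc_put_connection(). */
        return conn;
}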

/*
 * Set the service connection reap timer.
 */
static void rxrpc_set_service_reap_timer(struct rxrpc_net *rxnet,
                                         unsigned long reap_at)
{
        if (rxnet->live)
                timer_reduce(&rxnet->service_conn_reap_timer, reap_at);
}

/*
 * destroy a virtual connection
 */
static void rxrpc_rcu_free_connection(struct rcu_head *rcu)
{
        struct rxrpc_connection *conn =
                container_of(rcu, struct rxrpc_connection, rcu);
        struct rxrpc_net *rxnet = conn->rxnet;

        _enter("{%d,u=%d}", conn->debug_id, refcount_read(&conn->ref));

        trace_rxrpc_conn(conn->debug_id, refcount_read(&conn->ref),
                         rxrpc_conn_free);
        kfree(conn);

        if (atomic_dec_and_test(&rxnet->nr_conns))
                wake_up_var(&rxnet->nr_conns);
}

/*
 * Clean up a dead connection.
 */
static void rxrpc_clean_up_connection(struct work_struct *work)
{
        struct rxrpc_connection *conn =
                container_of(work, struct rxrpc_connection, destructor);
        struct rxrpc_net *rxnet = conn->rxnet;

        ASSERT(!conn->channels[0].call &&
               !conn->channels[1].call &&
               !conn->channels[2].call &&
               !conn->channels[3].call);
        ASSERT(list_empty(&conn->cache_link));

        del_timer_sync(&conn->timer);
        cancel_work_sync(&conn->processor); /* Processing may restart the timer */
        del_timer_sync(&conn->timer);

        write_lock(&rxnet->conn_lock);
        list_del_init(&conn->proc_link);
        write_unlock(&rxnet->conn_lock);

        rxrpc_purge_queue(&conn->rx_queue);

        rxrpc_kill_client_conn(conn);

        conn->security->clear(conn);
        key_put(conn->key);
        rxrpc_put_bundle(conn->bundle, rxrpc_bundle_put_conn);
        rxrpc_put_peer(conn->peer, rxrpc_peer_put_conn);
        rxrpc_put_local(conn->local, rxrpc_local_put_kill_conn);

        /* Drain the Rx queue.  Note that even though we've unpublished, an
         * incoming packet could still be being added to our Rx queue, so we
         * will need to drain it again in the RCU cleanup handler.
         */
        rxrpc_purge_queue(&conn->rx_queue);

        call_rcu(&conn->rcu, rxrpc_rcu_free_connection);
}

/*
 * Drop a ref on a connection.
 */
void rxrpc_put_connection(struct rxrpc_connection *conn,
                          enum rxrpc_conn_trace why)
{
        unsigned int debug_id;
        bool dead;
        int r;

        if (!conn)
                return;

        debug_id = conn->debug_id;
        dead = __refcount_dec_and_test(&conn->ref, &r);
        trace_rxrpc_conn(debug_id, r - 1, why);
        if (dead) {
                del_timer(&conn->timer);
                cancel_work(&conn->processor);

                if (in_softirq() || work_busy(&conn->processor) ||
                    timer_pending(&conn->timer))
                        /* Can't use the rxrpc workqueue as we need to cancel/flush
                         * something that may be running/waiting there.
                         */
                        schedule_work(&conn->destructor);
                else
                        rxrpc_clean_up_connection(&conn->destructor);
        }
}

/*
 * reap dead service connections
 */
void rxrpc_service_connection_reaper(struct work_struct *work)
{
        struct rxrpc_connection *conn, *_p;
        struct rxrpc_net *rxnet =
                container_of(work, struct rxrpc_net, service_conn_reaper);
        unsigned long expire_at, earliest, idle_timestamp, now;
        int active;
        LIST_HEAD(graveyard);

        _enter("");

        now = jiffies;
        earliest = now + MAX_JIFFY_OFFSET;

        write_lock(&rxnet->conn_lock);
        list_for_each_entry_safe(conn, _p, &rxnet->service_conns, link) {
                ASSERTCMP(atomic_read(&conn->active), >=, 0);
                if (likely(atomic_read(&conn->active) > 0))
                        continue;
                if (conn->state == RXRPC_CONN_SERVICE_PREALLOC)
                        continue;

                if (rxnet->live && !conn->local->dead) {
                        idle_timestamp = READ_ONCE(conn->idle_timestamp);
                        expire_at = idle_timestamp + rxrpc_connection_expiry * HZ;
                        if (conn->local->service_closed)
                                expire_at = idle_timestamp + rxrpc_closed_conn_expiry * HZ;

                        _debug("reap CONN %d { a=%d,t=%ld }",
                               conn->debug_id, atomic_read(&conn->active),
                               (long)expire_at - (long)now);

                        if (time_before(now, expire_at)) {
                                if (time_before(expire_at, earliest))
                                        earliest = expire_at;
                                continue;
                        }
                }

                /* The activity count sits at 0 whilst the conn is unused on
                 * the list; we reduce that to -1 to make the conn unavailable.
                 */
                active = 0;
                if (!atomic_try_cmpxchg(&conn->active, &active, -1))
                        continue;
                rxrpc_see_connection(conn, rxrpc_conn_see_reap_service);

                if (rxrpc_conn_is_client(conn))
                        BUG();
                else
                        rxrpc_unpublish_service_conn(conn);

                list_move_tail(&conn->link, &graveyard);
        }
        write_unlock(&rxnet->conn_lock);

        if (earliest != now + MAX_JIFFY_OFFSET) {
                _debug("reschedule reaper %ld", (long)earliest - (long)now);
                ASSERT(time_after(earliest, now));
                rxrpc_set_service_reap_timer(rxnet, earliest);
        }

        while (!list_empty(&graveyard)) {
                conn = list_entry(graveyard.next, struct rxrpc_connection,
                                  link);
                list_del_init(&conn->link);

                ASSERTCMP(atomic_read(&conn->active), ==, -1);
                rxrpc_put_connection(conn, rxrpc_conn_put_service_reaped);
        }

        _leave("");
}
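The reaper's claim step above (flipping conn->active from 0 to -1) is a small compare-and-swap idiom. A self-contained sketch of the same pattern, using only the generic atomic API and a hypothetical helper name:

#include <linux/atomic.h>
#include <linux/types.h>

/* Illustrative only: claim an object for teardown iff nobody is using it.
 * An activity count of 0 means "idle on the list"; -1 means "claimed and
 * unavailable", mirroring the transition the reaper makes on conn->active.
 */
static bool example_claim_for_reaping(atomic_t *active)
{
        int idle = 0;

        /* Succeeds only if *active is still 0; on failure 'idle' is updated
         * to the value actually observed and the object is left alone.
         */
        return atomic_try_cmpxchg(active, &idle, -1);
}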

/*
 * preemptively destroy all the service connection records rather than
 * waiting for them to time out
 */
void rxrpc_destroy_all_connections(struct rxrpc_net *rxnet)
{
        struct rxrpc_connection *conn, *_p;
        bool leak = false;

        _enter("");

        atomic_dec(&rxnet->nr_conns);

        del_timer_sync(&rxnet->service_conn_reap_timer);
        rxrpc_queue_work(&rxnet->service_conn_reaper);
        flush_workqueue(rxrpc_workqueue);

        write_lock(&rxnet->conn_lock);
        list_for_each_entry_safe(conn, _p, &rxnet->service_conns, link) {
                pr_err("AF_RXRPC: Leaked conn %p {%d}\n",
                       conn, refcount_read(&conn->ref));
                leak = true;
        }
        write_unlock(&rxnet->conn_lock);
        BUG_ON(leak);

        ASSERT(list_empty(&rxnet->conn_proc_list));

        /* We need to wait for the connections to be destroyed by RCU as they
         * pin things that we still need to get rid of.
         */
        wait_var_event(&rxnet->nr_conns, !atomic_read(&rxnet->nr_conns));

        _leave("");
}
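The nr_conns accounting above pairs wake_up_var() in the RCU free path with wait_var_event() here. A minimal sketch of that idiom in isolation, operating on a hypothetical counter standing in for rxnet->nr_conns:

#include <linux/atomic.h>
#include <linux/wait_bit.h>

/* Illustrative only: the put side wakes any waiter once the count hits
 * zero; the teardown side sleeps until that happens.
 */
static void example_put_object(atomic_t *nr_objects)
{
        if (atomic_dec_and_test(nr_objects))
                wake_up_var(nr_objects);
}

static void example_wait_for_all_objects(atomic_t *nr_objects)
{
        /* The condition is re-evaluated after every wake-up. */
        wait_var_event(nr_objects, !atomic_read(nr_objects));
}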