The addition of VLAN support caused a possible use of uninitialized
data if we encounter a zero TCA_FLOWER_KEY_ETH_TYPE key, as pointed
out by "gcc -Wmaybe-uninitialized":
net/sched/cls_flower.c: In function 'fl_change':
net/sched/cls_flower.c:366:22: error: 'ethertype' may be used uninitialized in this function [-Werror=maybe-uninitialized]
This changes the code to only set the ethertype field if it
is nonzero, as the code did before the VLAN patch.
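A minimal sketch of the resulting pattern (field and helper names are
illustrative rather than the exact cls_flower code):

    __be16 ethertype = 0;

    if (tb[TCA_FLOWER_KEY_ETH_TYPE])
        ethertype = nla_get_be16(tb[TCA_FLOWER_KEY_ETH_TYPE]);

    if (ethertype) {
        if (ethertype == htons(ETH_P_8021Q)) {
            /* parse the VLAN keys and the encapsulated ethertype instead */
        } else {
            /* only touch the match fields for a nonzero key */
            key->basic.n_proto = ethertype;
            mask->basic.n_proto = cpu_to_be16(~0);
        }
    }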
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 9399ae9a6c ("net_sched: flower: Add vlan support")
Cc: Hadar Hen Zion <hadarh@mellanox.com>
Cc: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When TCP operates in lossy environments (between 1 and 10% packet
loss), many SACK blocks can be exchanged, and I noticed we could
drop them on busy senders, if these SACK blocks have to be queued
into the socket backlog.
While the main cause is the poor performance of RACK/SACK processing,
we can try to avoid these drops of valuable information that can lead to
spurious timeouts and retransmits.
The cause of the drops is skb->truesize overestimation, caused by:
- drivers allocating ~2048 (or more) bytes as a fragment to hold an
Ethernet frame.
- various pskb_may_pull() calls bringing the headers into skb->head
might have pulled all the frame content, but skb->truesize could
not be lowered, as the stack has no idea of each fragment's truesize.
The backlog drops are also more visible on bidirectional flows, since
their sk_rmem_alloc can be quite big.
Let's add some room for the backlog, as only the socket owner
can selectively take action to lower memory needs, like collapsing
receive queues or partial ofo pruning.
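A rough sketch of the idea - the exact amount of extra room is
illustrative, not a recommendation:

    /* allow the backlog to grow a bit past the usual rcvbuf/sndbuf budget,
     * since only the socket owner can collapse queues or prune the ofo queue
     */
    u32 limit = sk->sk_rcvbuf + sk->sk_sndbuf + 64 * 1024;

    if (unlikely(sk_add_backlog(sk, skb, limit))) {
        __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPBACKLOGDROP);
        kfree_skb(skb);
    }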
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
kcm and strparser need to work with any type of stream socket, not just
TCP. Eliminate references to TCP and call the generic proto_ops functions
read_sock and peek_len instead. Also, in strp_init, check that the socket
supports the proto_ops read_sock and peek_len.
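The check in strp_init amounts to something like the following sketch
(csk is the lower stream socket handed to strp_init; the error codes are
illustrative):

    static int strp_check_lower_sock(struct sock *csk)
    {
        struct socket *csock = csk->sk_socket;

        if (!csock)
            return -EINVAL;

        /* strparser now goes through the generic proto_ops callbacks, so
         * the underlying socket must actually provide them
         */
        if (!csock->ops->read_sock || !csock->ops->peek_len)
            return -EAFNOSUPPORT;

        return 0;
    }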
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In inet_stream_ops we set read_sock to tcp_read_sock and peek_len to
tcp_peek_len (which is just a stub function that calls tcp_inq).
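The stub is essentially just (sketch):

    int tcp_peek_len(struct socket *sock)
    {
        return tcp_inq(sock->sk);
    }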
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When using replicast a UDP bearer can have an arbitrary number of
remote IP addresses associated with it. This means we cannot simply
add all remote IP addresses to an existing bearer data message, as it
might fill the message, leaving us with a truncated message that we
can't safely resume. To handle this we introduce a new netlink
command, TIPC_NL_UDP_GET_REMOTEIP. This command is intended to be
called when the bearer data message has the
TIPC_NLA_UDP_MULTI_REMOTEIP flag set, indicating that there is more
than one remote IP address (replicast).
Signed-off-by: Richard Alpe <richard.alpe@ericsson.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add UDP bearer options to netlink bearer get message. This is used by
the tipc user space tool to display UDP options.
The UDP bearer information is passed using either a sockaddr_in or a
sockaddr_in6 struct. This means the user space receiver should first
store the retrieved data in a struct large enough for either
(sockaddr_storage) before casting it to the proper IP version type.
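In user space, that pattern looks roughly like the following sketch (the
attribute payload is assumed to be the raw sockaddr described above):

    #include <string.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    /* copy the kernel-provided address into a struct big enough for both
     * families, then dispatch on ss_family before casting
     */
    static void handle_udp_bearer_addr(const void *data, size_t len)
    {
        struct sockaddr_storage ss;

        if (len > sizeof(ss))
            return;
        memcpy(&ss, data, len);

        if (ss.ss_family == AF_INET) {
            const struct sockaddr_in *sin = (const struct sockaddr_in *)&ss;
            /* use sin->sin_addr and ntohs(sin->sin_port) */
            (void)sin;
        } else if (ss.ss_family == AF_INET6) {
            const struct sockaddr_in6 *sin6 = (const struct sockaddr_in6 *)&ss;
            /* use sin6->sin6_addr and ntohs(sin6->sin6_port) */
            (void)sin6;
        }
    }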
Signed-off-by: Richard Alpe <richard.alpe@ericsson.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Automatically learn UDP remote IP addresses of communicating peers by
looking at the source IP address of incoming TIPC link configuration
messages (neighbor discovery).
This makes configuration slightly easier and removes the problematic
scenario where a node receives directly addressed neighbor discovery
messages sent using replicast, which the node cannot "reply" to using
multicast, leaving the link FSM in a limbo state.
Signed-off-by: Richard Alpe <richard.alpe@ericsson.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch introduces UDP replicast, a concept where we emulate
multicast by sending multiple unicast messages to configured peers.
The purpose of replicast is mainly to be able to use TIPC in cloud
environments where IP multicast is disabled. Emulating multicast with
replicated unicast is costly, as we have to copy each skb and send the
copies individually.
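Conceptually, the replicast send path becomes a loop along these lines
(struct udp_bearer, udp_replicast and tipc_udp_xmit are illustrative
names, not the exact TIPC symbols):

    /* emulate multicast by unicasting a copy of the skb to every peer on
     * the bearer's remote address list
     */
    static int tipc_udp_replicast(struct udp_bearer *ub, struct sk_buff *skb)
    {
        struct udp_replicast *rcast;
        int err = 0;

        list_for_each_entry_rcu(rcast, &ub->rcast_list, list) {
            struct sk_buff *_skb = pskb_copy(skb, GFP_ATOMIC);

            if (!_skb) {
                err = -ENOMEM;
                break;
            }
            /* one unicast transmission per configured remote address */
            tipc_udp_xmit(ub, _skb, &rcast->addr);
        }
        consume_skb(skb);
        return err;
    }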
Signed-off-by: Richard Alpe <richard.alpe@ericsson.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add a function to check whether a TIPC UDP media address is a multicast
address or not. This is a purely cosmetic change.
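A sketch of such a helper (the media address struct layout shown is an
assumption):

    static bool tipc_udp_is_mcast_addr(struct udp_media_addr *addr)
    {
        if (ntohs(addr->proto) == ETH_P_IP)
            return ipv4_is_multicast(addr->ipv4.s_addr);
        else
            return ipv6_addr_is_multicast(&addr->ipv6);
    }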
Signed-off-by: Richard Alpe <richard.alpe@ericsson.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Split the UDP send function into two. One callback that prepares the
skb and one transmit function that sends the skb. This will come in
handy in later patches, when we introduce UDP replicast.
Signed-off-by: Richard Alpe <richard.alpe@ericsson.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Split the UDP netlink parse function so that it only parses one
netlink attribute at a time. This makes the parse function more
generic and allows future UDP API functions to use it for parsing.
Signed-off-by: Richard Alpe <richard.alpe@ericsson.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Johan Hedberg says:
====================
pull request: bluetooth 2016-08-25
Here are a couple of important Bluetooth fixes for the 4.8 kernel:
- Memory leak fix for HCI requests
- Fix sk_filter handling with L2CAP
- Fix sock_recvmsg behavior when MSG_TRUNC is not set
Please let me know if there are any issues pulling. Thanks.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
switchdev_port_fwd_mark_set() is used to set the 'offload_fwd_mark' of
port netdevs so that packets being flooded by the device won't be
flooded twice.
It works by assigning a unique identifier (the ifindex of the first
bridge port) to bridge ports sharing the same parent ID. This prevents
packets from being flooded twice by the same switch, but will flood
packets through bridge ports belonging to a different switch.
This method is problematic when stacked devices are taken into account,
such as VLANs. In such cases, a physical port netdev can have upper
devices being members in two different bridges, thus requiring two
different 'offload_fwd_mark's to be configured on the port netdev, which
is impossible.
The main problem is that packet and netdev marking is performed at the
physical netdev level, whereas flooding occurs between bridge ports,
which are not necessarily port netdevs.
Instead, packet and netdev marking should really be done in the bridge
driver with the switch driver only telling it which packets it already
forwarded. The bridge driver will mark such packets using the mark
assigned to the ingress bridge port and will prevent the packet from
being forwarded through any bridge port sharing the same mark (i.e.
having the same parent ID).
Remove the current switchdev 'offload_fwd_mark' implementation and
instead implement the proposed method. In addition, make rocker - the
sole user of the mark - use the proposed method.
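The flood decision in the bridge then reduces to a check of this form
(names are illustrative of the proposed scheme, not necessarily the final
identifiers):

    /* the mark of the ingress bridge port was recorded when the packet
     * arrived with the "already forwarded" indication from the driver;
     * don't flood it back out through ports carrying the same mark
     */
    static bool nbp_flood_allowed(const struct net_bridge_port *p,
                                  int ingress_fwd_mark)
    {
        return !ingress_fwd_mark || p->offload_fwd_mark != ingress_fwd_mark;
    }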
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
switchdev_port_same_parent_id() currently expects port netdevs, but we
need it to support stacked devices in the next patch, so drop the
NO_RECURSE flag.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently in process_backlog(), the process_queue dequeuing is
performed with local IRQ disabled, to protect against
flush_backlog(), which runs in hard IRQ context.
This patch moves the flush operation to a work queue and runs the
callback with bottom half disabled to protect the process_queue
against dequeuing.
Since process_queue is now always manipulated in bottom half context,
the IRQ disable/enable pair around the dequeue operation is removed.
To keep the flush time as low as possible, the flush
works are scheduled on all online CPUs simultaneously, using the
high-priority workqueue and statically allocated, per-CPU
work structs.
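In outline, the scheduling side looks something like this sketch:

    static DEFINE_PER_CPU(struct work_struct, flush_works);

    /* kick the flush on every online cpu at once, then wait for them all;
     * each work_struct is assumed to have been INIT_WORK()ed with the
     * backlog flush callback at init time
     */
    static void flush_all_backlogs(void)
    {
        unsigned int cpu;

        get_online_cpus();

        for_each_online_cpu(cpu)
            queue_work_on(cpu, system_highpri_wq,
                          per_cpu_ptr(&flush_works, cpu));

        for_each_online_cpu(cpu)
            flush_work(per_cpu_ptr(&flush_works, cpu));

        put_online_cpus();
    }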
Overall, this change increases the time required to destroy a device
in exchange for a slight improvement in packet reinjection performance.
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When I added support for exporting the vlan entry flags via xstats, I
forgot to add support for the pvid, since it is matched manually. Check
whether the entry matches the vlan_group's pvid and set the flag accordingly.
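The check itself boils down to a couple of lines (v is the vlan entry and
vg its vlan_group; names are illustrative):

    struct bridge_vlan_xstats vxi;

    memset(&vxi, 0, sizeof(vxi));
    vxi.vid = v->vid;
    vxi.flags = v->flags;
    /* the pvid is matched manually, it is not stored in the entry's flags */
    if (v->vid == br_get_pvid(vg))
        vxi.flags |= BRIDGE_VLAN_INFO_PVID;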
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Should qdisc_alloc() fail, we must release the module refcount
we got right before.
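The fix is essentially a module_put() in the failure path, along these
lines (a sketch of the path after the default qdisc module was looked up):

    sch = qdisc_alloc(dev_queue, ops);
    if (IS_ERR(sch)) {
        /* drop the module reference taken when the ops were looked up */
        module_put(ops->owner);
        return NULL;
    }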
Fixes: 6da7c8fcbc ("qdisc: allow setting default queuing discipline")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add an SNMP counter for drops caused by MD5 mismatches.
The current syslog message might help, but a counter is more precise
and helps with monitoring.
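At the drop site this becomes a one-line increment (the counter name
below is illustrative):

    /* count the MD5 mismatch instead of relying on the syslog message alone */
    __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPMD5FAILURE);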
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
TCP MD5 mismatches do increment the sk_drops counter in all states but
SYN_RECV.
This is very unlikely to happen in the real world, but worth adding
to help diagnostics.
We increase the parent (listener) sk_drops.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix enable_mcast() to return a negative error code in its error
handling path, and release the UDP socket when necessary.
Fixes: d0f91938be ("tipc: add ip/udp media type")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Similar to bt_sock_recvmsg(), MSG_TRUNC shall be checked using the
original flags, not msg_flags.
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Commit b5f34f9420 attempted to introduce proper handling for MSG_TRUNC,
but recv() and its variants should still behave like read() if no flag is
passed. Because the code may set MSG_TRUNC in msg->msg_flags, that field
shall not be used for the check, as doing so can make the code behave as
if MSG_TRUNC were always set. Instead, this changes the code to use the
flags parameter, which contains the original flags.
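In other words, the check in the recvmsg path becomes (sketch):

    /* use the flags passed in by the caller; msg->msg_flags may already
     * have had MSG_TRUNC set by the code above
     */
    if (flags & MSG_TRUNC)
        copied = skblen;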
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
This allows a privileged process to filter by socket mark when
dumping sockets via INET_DIAG_BY_FAMILY. This is useful on
systems that use mark-based routing such as Android.
The ability to filter socket marks requires CAP_NET_ADMIN, which
is consistent with other privileged operations allowed by the
SOCK_DIAG interface such as the ability to destroy sockets and
the ability to inspect BPF filters attached to packet sockets.
Tested: https://android-review.googlesource.com/261350
Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
Acked-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This simplifies the code a bit and also allows inet_diag_bc_audit
to send to userspace an error that isn't EINVAL.
Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
Acked-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Now that the dsa_switch_driver structure contains only function pointers
as it is supposed to, rename it to the more appropriate dsa_switch_ops,
uniformly to any other operations structure in the kernel.
No functional changes here, basically just the result of something like:
s/dsa_switch_driver *drv/dsa_switch_ops *ops/g
However keep the {un,}register_switch_driver functions and their
dsa_switch_drivers list as is, since they represent the -- likely to be
deprecated soon -- legacy DSA registration framework.
In the meantime, also fix the following checks from checkpatch.pl to
make it happy with this patch:
CHECK: Comparison to NULL could be written "!ops"
#403: FILE: net/dsa/dsa.c:470:
+ if (ops == NULL) {
CHECK: Comparison to NULL could be written "ds->ops->get_strings"
#773: FILE: net/dsa/slave.c:697:
+ if (ds->ops->get_strings != NULL)
CHECK: Comparison to NULL could be written "ds->ops->get_ethtool_stats"
#824: FILE: net/dsa/slave.c:785:
+ if (ds->ops->get_ethtool_stats != NULL)
CHECK: Comparison to NULL could be written "ds->ops->get_sset_count"
#835: FILE: net/dsa/slave.c:798:
+ if (ds->ops->get_sset_count != NULL)
total: 0 errors, 0 warnings, 4 checks, 784 lines checked
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'rxrpc-rewrite-20160824-2' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs
David Howells says:
====================
rxrpc: Add better client conn management strategy
These two patches add a better client connection management strategy. They
need to be applied on top of the just-posted fixes.
(1) Duplicate the connection list and separate out procfs iteration from
garbage collection. This is necessary for the next patch as with that
client connections no longer appear on a single list and may not
appear on a list at all, and we really don't want them exposed to the
old garbage collector.
(Note that client conns aren't left dangling; they're also in a tree
rooted in the local endpoint so that they can be found by a user
wanting to make a new client call. Service conns do not appear in
this tree.)
(2) Implement a better lifetime management and garbage collection strategy
for client connections.
In this, a client connection can be in one of five cache states
(inactive, waiting, active, culled and idle). Limits are set on the
number of client conns that may be active at any one time, and users
are made to wait if they want to start a new call when there isn't capacity
available.
To make capacity available, active and idle connections can be culled,
after a short delay (to allow for retransmission). The delay is
reduced if the capacity exceeds a tunable threshold.
If there is spare capacity, client conns are permitted to hang around
a fair bit longer (tunable) so as to allow reuse of negotiated
security contexts.
After this patch, the client conn strategy is separate from that of
service conns (which continues to use the old code for the moment).
This difference in strategy is because the client side retains control
over when it allows a connection to become active, whereas the service
side has no control over when it sees a new connection or a new call
on an old connection.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'rxrpc-rewrite-20160824-1' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs
David Howells says:
====================
rxrpc: More fixes
Here are a couple of fix patches:
(1) Fix the conn-based retransmission patch posted yesterday. This breaks
if it actually has to retransmit. However, it seems the likelihood of
this happening is really low, despite the server I'm testing against
being located >3000 miles away, and some of the time it's handled
in the call background processor before we manage to disconnect the
call - which is why I didn't spot it.
(2) /proc/net/rxrpc_calls can cause a crash if accessed whilst a call is
being torn down. The window of opportunity is pretty small, however,
as calls don't stay in this state for long.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
During an audit for sk_filter(), we found that rx_busy_skb handling
in l2cap_sock_recv_cb() and l2cap_sock_recvmsg() does not quite work as
intended.
The assumption from commit e328140fda ("Bluetooth: Use event-driven
approach for handling ERTM receive buffer") is that errors returned
from sock_queue_rcv_skb() are due to receive buffer shortage. However,
nothing should prevent doing a setsockopt() with SO_ATTACH_FILTER on
the socket, that could drop some of the incoming skbs when handled in
sock_queue_rcv_skb().
In that case sock_queue_rcv_skb() will return -EPERM, propagated
from sk_filter(), and in L2CAP_MODE_ERTM mode the wrong assumption was
that we failed due to the receive buffer being full. From that point
onwards, with the to-be-dropped skb held in rx_busy_skb, we cannot make
any forward progress, as rx_busy_skb is never cleared from
l2cap_sock_recvmsg(): the filter drop verdict keeps coming from
sk_filter() over and over.
Meanwhile, in l2cap_sock_recv_cb() all new incoming skbs are being
dropped due to rx_busy_skb being occupied.
Instead, just use __sock_queue_rcv_skb(), where an error really indicates
a receive buffer issue. Split out the sk_filter() call: keep it at queuing
time only for non-segmented modes, since by that point an ERTM skb has
already been through the ERTM state machine and has been acked, so dropping
it is not allowed. Instead, for ERTM and streaming mode, call sk_filter() in
l2cap_data_rcv() so the packet can be dropped before the state machine sees it.
Fixes: e328140fda ("Bluetooth: Use event-driven approach for handling ERTM receive buffer")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
In hci_req_sync_complete the event skb is referenced in hdev->req_skb.
It is used (via hci_req_run_skb) from either __hci_cmd_sync_ev(), which
passes the skb to the caller, or __hci_req_sync(), which leaks it.
unreferenced object 0xffff880005339a00 (size 256):
comm "kworker/u3:1", pid 1011, jiffies 4294671976 (age 107.389s)
backtrace:
[<ffffffff818d89d9>] kmemleak_alloc+0x49/0xa0
[<ffffffff8116bba8>] kmem_cache_alloc+0x128/0x180
[<ffffffff8167c1df>] skb_clone+0x4f/0xa0
[<ffffffff817aa351>] hci_event_packet+0xc1/0x3290
[<ffffffff8179a57b>] hci_rx_work+0x18b/0x360
[<ffffffff810692ea>] process_one_work+0x14a/0x440
[<ffffffff81069623>] worker_thread+0x43/0x4d0
[<ffffffff8106ead4>] kthread+0xc4/0xe0
[<ffffffff818dd38f>] ret_from_fork+0x1f/0x40
[<ffffffffffffffff>] 0xffffffffffffffff
Signed-off-by: Frédéric Dalleau <frederic.dalleau@collabora.co.uk>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Improve the management and caching of client rxrpc connection objects.
From this point, client connections will be managed separately from service
connections because AF_RXRPC controls the creation and re-use of client
connections but doesn't have that luxury with service connections.
Further, there will be limits on the numbers of client connections that may
be live on a machine. No direct restriction will be placed on the number
of client calls, excepting that each client connection can support a
maximum of four concurrent calls.
Note that, for a number of reasons, we don't want to simply discard a
client connection as soon as the last call is apparently finished:
(1) Security is negotiated per-connection and the context is then shared
between all calls on that connection. The context can be negotiated
again if the connection lapses, but that involves holding up calls
whilst at least two packets are exchanged and various crypto bits are
performed - so we'd ideally like to cache it for a little while at
least.
(2) If a packet goes astray, we will need to retransmit a final ACK or
ABORT packet. To make this work, we need to keep around the
connection details for a little while.
(3) The locally held structures represent some amount of setup time, to be
weighed against their occupation of memory when idle.
To this end, the client connection cache is managed by a state machine on
each connection. There are five states:
(1) INACTIVE - The connection is not held in any list and may not have
been exposed to the world. If it has been previously exposed, it was
discarded from the idle list after expiring.
(2) WAITING - The connection is waiting for the number of client conns to
drop below the maximum capacity. Calls may be in progress upon it
from when it was active and got culled.
The connection is on the rxrpc_waiting_client_conns list which is kept
in to-be-granted order. Culled conns with waiters go to the back of
the queue just like new conns.
(3) ACTIVE - The connection has at least one call in progress upon it, it
may freely grant available channels to new calls and calls may be
waiting on it for channels to become available.
The connection is on the rxrpc_active_client_conns list which is kept
in activation order for culling purposes.
(4) CULLED - The connection got summarily culled to try and free up
capacity. Calls currently in progress on the connection are allowed
to continue, but new calls will have to wait. There can be no waiters
in this state - the conn would have to go to the WAITING state
instead.
(5) IDLE - The connection has no calls in progress upon it and must have
been exposed to the world (ie. the EXPOSED flag must be set). When it
expires, the EXPOSED flag is cleared and the connection transitions to
the INACTIVE state.
The connection is on the rxrpc_idle_client_conns list which is kept in
order of how soon they'll expire.
A connection in the ACTIVE or CULLED state must have at least one active
call upon it; if in the WAITING state it may have active calls upon it;
other states may not have active calls.
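The five states above map naturally onto a per-connection enum, something
like this (identifiers are illustrative):

    enum rxrpc_conn_cache_state {
        RXRPC_CONN_CLIENT_INACTIVE,  /* not on any list, may never have been exposed */
        RXRPC_CONN_CLIENT_WAITING,   /* waiting for the active count to drop */
        RXRPC_CONN_CLIENT_ACTIVE,    /* on the active list, granting channels */
        RXRPC_CONN_CLIENT_CULLED,    /* culled; existing calls may finish, no new ones */
        RXRPC_CONN_CLIENT_IDLE,      /* on the idle list, awaiting expiry or reuse */
    };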
As long as a connection remains active and doesn't get culled, it may
continue to process calls - even if there are connections on the wait
queue. This simplifies things a bit and reduces the amount of checking we
need to do.
There are a couple of flags of relevance to the cache:
(1) EXPOSED - The connection ID got exposed to the world. If this flag is
set, an extra ref is added to the connection preventing it from being
reaped when it has no calls outstanding. This flag is cleared and the
ref dropped when a conn is discarded from the idle list.
(2) DONT_REUSE - The connection should be discarded as soon as possible and
should not be reused.
This commit also provides a number of new settings:
(*) /proc/net/rxrpc/max_client_conns
The maximum number of live client connections. Above this number, new
connections get added to the wait list and must wait for an active
conn to be culled. Culled connections can be reused, but they will go
to the back of the wait list and have to wait.
(*) /proc/net/rxrpc/reap_client_conns
If the number of desired connections exceeds the maximum above, the
active connection list will be culled until there are only this many
left in it.
(*) /proc/net/rxrpc/idle_conn_expiry
The normal expiry time for a client connection, provided there are
fewer than reap_client_conns of them around.
(*) /proc/net/rxrpc/idle_conn_fast_expiry
The expedited expiry time, used when there are more than
reap_client_conns of them around.
Note that I combined the Tx wait queue with the channel grant wait queue to
save space as only one of these should be in use at once.
Note also that, for the moment, the service connection cache still uses the
old connection management code.
Signed-off-by: David Howells <dhowells@redhat.com>
The main connection list is used for two independent purposes: primarily it
is used to find connections to reap and secondarily it is used to list
connections in procfs.
Split the procfs list out from the reap list. This allows us to stop using
the reap list for client connections when they acquire a separate
management strategy from service connections.
The client connections will not be on a single management list, and sometimes
won't be on a management list at all. This doesn't leave them floating,
however, as they will also be on an rb-tree rooted on the socket so that the
socket can find them to dispatch calls.
Signed-off-by: David Howells <dhowells@redhat.com>
Make /proc/net/rxrpc_calls safer by stashing a copy of the peer pointer in
the rxrpc_call struct and checking in the show routine that the peer
pointer, the socket pointer and the local pointer obtained from the socket
pointer aren't NULL before we use them.
Signed-off-by: David Howells <dhowells@redhat.com>
If a duplicate packet comes in for a call that has just completed on a
connection's channel then there will be an oops in the data_ready handler
because it tries to examine the connection struct via a call struct (which
we don't have - the pointer is unset).
Since the connection struct pointer is available to us, go direct instead.
Also, the ACK packet to be retransmitted needs three octets of padding
between the soft ack list and the ackinfo.
Fixes: 18bfeba50d ("rxrpc: Perform terminal call ACK/ABORT retransmission from conn processor")
Signed-off-by: David Howells <dhowells@redhat.com>
We no longer use this handler, so we can delete it.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Now that RCU lookups of IPv6 TCP sockets no longer dereference pinet6,
we do not need tcp_v6_clear_sk() anymore.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Since we no longer use SLAB_DESTROY_BY_RCU for UDP,
we no longer need the sk_prot_clear_portaddr_nulls() helper.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Now that RCU lookups of IPv6 UDP sockets no longer dereference the
pinet6 field, we can get rid of the udp_v6_clear_sk() helper.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This implements SOCK_DESTROY for UDP sockets similar to what was done
for TCP with commit c1e64e298b ("net: diag: Support destroying TCP
sockets.") A process with a UDP socket targeted for destroy is awakened
and recvmsg fails with ECONNABORTED.
Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
inet_diag_find_one_icsk takes a reference to a socket that is not
released if sock_diag_destroy returns an error. Fix by changing
tcp_diag_destroy to manage the refcnt for all cases and remove
the sock_put calls from tcp_abort.
Fixes: c1e64e298b ("net: diag: Support destroying TCP sockets")
Reported-by: Lorenzo Colitti <lorenzo@google.com>
Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use kfree_skb() instead of kfree() to free sk_buff.
Fixes: 0d051bf93c ("tipc: make bearer packet filtering generic")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
After commit ca065d0cf8 ("udp: no longer use SLAB_DESTROY_BY_RCU")
we do not need this special allocation mode anymore, even if it is
harmless.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The function sctp_diag_dump_one() currently performs a memcpy()
of 64 bytes from a 16 byte field into another 16 byte field. Fix
this by using the correct size: obtain it with sizeof instead of
a hard-coded constant.
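In other words (field names illustrative):

    /* before: memcpy(&req.id, &rep.id, 64); overruns both 16-byte fields */
    memcpy(&req.id, &rep.id, sizeof(rep.id));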
Fixes: 8f840e47f1 ("sctp: add the sctp_diag.c file")
Signed-off-by: Lance Richardson <lrichard@redhat.com>
Reviewed-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'rxrpc-rewrite-20160823-2' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs
David Howells says:
====================
rxrpc: Miscellaneous improvements
Here are some improvements that are part of the AF_RXRPC rewrite. They
need to be applied on top of the just posted cleanups.
(1) Set the connection expiry when the connection becomes idle, i.e. when
its last currently active call completes, rather than each time put is
called.
This means that the connection isn't held open by retransmissions,
pings and duplicate packets. Future patches will limit the number of
live connections that the kernel will support, so making sure that old
connections don't overstay their welcome is necessary.
(2) Calculate packet serial skew in the UDP data_ready callback rather
than in the call processor on a work queue. Deferring it like this
causes the skew to be elevated by further packets coming in before we
get to make the calculation.
(3) Move retransmission of the terminal ACK or ABORT packet for a
connection to the connection processor, using the terminal state
cached in the rxrpc_connection struct. This means that once last_call
is set in a channel to the current call's ID, no more packets will be
routed to that rxrpc_call struct.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'rxrpc-rewrite-20160823-1' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs
David Howells says:
====================
rxrpc: Cleanups
Here are some cleanups for the AF_RXRPC rewrite:
(1) Remove some unused bits.
(2) Call releasing on socket closure is now done in the order in which
calls progress through the phases so that we don't miss a call
actively moving between lists.
(3) The rxrpc_call struct's channel number field is redundant and replaced
with accesses to the masked off cid field instead.
(4) Use a tracepoint for socket buffer accounting rather than printks.
Unfortunately, since this would require currently non-existent
arch-specific help to divine the current instruction location, the
accounting functions are moved out of line so that
__builtin_return_address() can be used.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Since the features bit field has bits for internal-only use as well,
the kernel may export an RTAX_FEATURES attribute with a zero value,
which is pointless.
Fix this by making sure the attribute is added only if the exported
value is non-zero.
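The metrics dump then only emits the attribute when the exported (masked)
value is non-zero, roughly like this sketch:

    /* user_features has the internal-only bits already masked off; skip
     * the attribute entirely when nothing user-visible remains
     */
    if (user_features &&
        nla_put_u32(skb, RTAX_FEATURES, user_features))
        goto nla_put_failure;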
Signed-off-by: Phil Sutter <phil@nwl.cc>
Signed-off-by: David S. Miller <davem@davemloft.net>
TFO_SERVER_WO_SOCKOPT2 was intended for debugging purposes during
Fast Open development. Remove this config option and also
update/clean-up the documentation of the Fast Open sysctl.
Reported-by: Piotr Jurkiewicz <piotr.jerzy.jurkiewicz@gmail.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use PPP_ALLSTATIONS, PPP_UI, and SEND_SHUTDOWN instead of the literal
values 0xff, 0x03, and 2.
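For example, a sketch of the kind of substitution involved (ph is an
illustrative PPP header pointer):

    /* before: ph->address = 0xff; ph->control = 0x03; */
    ph->address = PPP_ALLSTATIONS;  /* all stations address, 0xff */
    ph->control = PPP_UI;           /* unnumbered information, 0x03 */

    /* before: if (sk->sk_shutdown & 2) */
    if (sk->sk_shutdown & SEND_SHUTDOWN)
        goto out;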
Signed-off-by: Gao Feng <fgao@ikuai8.com>
Acked-by: Guillaume Nault <g.nault@alphalink.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
Laura tracked a poll() [and friends] regression caused by commit
e6afc8ace6 ("udp: remove headers from UDP packets before queueing")
udp_poll() needs to know if there is a valid packet in receive queue,
even if its payload length is 0.
Change first_packet_length() to return a signed int, and use -1
as the indication of an empty queue.
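In udp_poll(), the "false positive" check can then use the sign of the
return value, roughly as in this sketch:

    /* -1 now means "receive queue empty"; a queued zero-length datagram
     * returns 0 and therefore still reports the socket as readable
     */
    if ((mask & POLLRDNORM) && !(sk->sk_shutdown & RCV_SHUTDOWN) &&
        first_packet_length(sk) == -1)
        mask &= ~(POLLIN | POLLRDNORM);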
Fixes: e6afc8ace6 ("udp: remove headers from UDP packets before queueing")
Reported-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>