Commit Graph

21 Commits

Author SHA1 Message Date
David Howells
0d12f8a402 rxrpc: Keep the skb private record of the Rx header in host byte order
Currently, a copy of the Rx packet header is kept in the sk_buff private
data so that we can advance the pointer into the buffer, potentially
discarding the original.  At the moment, this copy is held in network byte
order, but this means we're doing a lot of unnecessary translations.

The reasons it was done this way are that we need the values in network
byte order occasionally and we can use the copy, slightly modified, as part
of an iov array when sending an ack or an abort packet.

On review, however, it seems more reasonable to keep the copy in host byte
order and to make up a new header when we want to send another packet.

To this end, rename the original header struct to rxrpc_wire_header (with
BE fields) and institute a variant called rxrpc_host_header that has host
order fields.  Change the struct in the sk_buff private data into an
rxrpc_host_header and translate the values when filling it in.

This further allows us to keep the values held in various structures in
host byte order rather than network byte order, and allows the removal of
some fields that are byteswapped duplicates.

Signed-off-by: David Howells <dhowells@redhat.com>
2016-03-04 15:53:46 +00:00
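
A minimal sketch of the wire/host header split described in 0d12f8a402
above, with a trimmed field set and an illustrative translation helper; the
authoritative layout is the one in net/rxrpc, not this example:

	struct rxrpc_wire_header {	/* on-the-wire form, big-endian fields */
		__be32	epoch;
		__be32	cid;
		__be32	callNumber;
		__be32	seq;
		__be32	serial;
		u8	type;
		u8	flags;
		__be16	serviceId;
	} __packed;

	struct rxrpc_host_header {	/* sk_buff private copy, host byte order */
		u32	epoch;
		u32	cid;
		u32	callNumber;
		u32	seq;
		u32	serial;
		u8	type;
		u8	flags;
		u16	serviceId;
	};

	static void rxrpc_wire_to_host(struct rxrpc_host_header *hdr,
				       const struct rxrpc_wire_header *whdr)
	{
		/* translate once on receive; everything downstream then works
		 * in host byte order */
		hdr->epoch	= ntohl(whdr->epoch);
		hdr->cid	= ntohl(whdr->cid);
		hdr->callNumber	= ntohl(whdr->callNumber);
		hdr->seq	= ntohl(whdr->seq);
		hdr->serial	= ntohl(whdr->serial);
		hdr->type	= whdr->type;
		hdr->flags	= whdr->flags;
		hdr->serviceId	= ntohs(whdr->serviceId);
	}
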
David Howells
4c198ad17a rxrpc: Rename call events to begin RXRPC_CALL_EV_
Rename the call event constants to begin with RXRPC_CALL_EV_ to distinguish
them from the flags.

Signed-off-by: David Howells <dhowells@redhat.com>
2016-03-04 15:53:46 +00:00
David Howells
5b8848d149 rxrpc: Convert call flag and event numbers into enums
Convert call flag and event numbers into enums and move their definitions
outside of the struct.

Also move the call state enum outside of the struct and add an extra
element to count the number of states.

Signed-off-by: David Howells <dhowells@redhat.com>
2016-03-04 15:53:46 +00:00
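
An illustrative shape of the two changes above (4c198ad17a and 5b8848d149);
the constant names are an abbreviated subset rather than the full af_rxrpc
set:

	enum rxrpc_call_flag {
		RXRPC_CALL_RELEASED,		/* call has been released */
		RXRPC_CALL_TERMINAL_MSG,	/* call has queued a terminal message */
	};

	enum rxrpc_call_event {			/* events now carry an _EV_ infix */
		RXRPC_CALL_EV_RCVD_ACKALL,
		RXRPC_CALL_EV_RCVD_BUSY,
		RXRPC_CALL_EV_RESEND_TIMER,
	};

	enum rxrpc_call_state {			/* moved outside the struct */
		RXRPC_CALL_CLIENT_SEND_REQUEST,
		RXRPC_CALL_CLIENT_AWAIT_REPLY,
		RXRPC_CALL_COMPLETE,
		NR__RXRPC_CALL_STATES		/* extra element counting the states */
	};
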
David Howells
33c40e242c rxrpc: Correctly handle ack at end of client call transmit phase
Normally, the transmit phase of a client call is implicitly ack'd by the
reception of the first data packet of the response being received.
However, if a security negotiation happens, the transmit phase, if it is
entirely contained in a single packet, may get an ack packet in response
and then may get aborted due to security negotiation failure.

Because the client has shifted state to RXRPC_CALL_CLIENT_AWAIT_REPLY due
to having transmitted all the data, the code that handles processing of the
received ack packet doesn't note the hard ack of the data packet.

The following abort packet in the case of security negotiation failure then
incurs an assertion failure when it tries to drain the Tx queue because the
hard ack state is out of sync (hard ack means the packets have been
processed and can be discarded by the sender; a soft ack means that the
packets are received but could still be discarded and rerequested by the
receiver).

To fix this, we should record the hard ack we received for the ack packet.

The assertion failure looks like:

	RxRPC: Assertion failed
	1 <= 0 is false
	0x1 <= 0x0 is false
	------------[ cut here ]------------
	kernel BUG at ../net/rxrpc/ar-ack.c:431!
	...
	RIP: 0010:[<ffffffffa006857b>]  [<ffffffffa006857b>] rxrpc_rotate_tx_window+0xbc/0x131 [af_rxrpc]
	...

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-11-24 17:14:50 -05:00
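
A hedged sketch of the fix described in 33c40e242c: when an ACK arrives in
the client's await-reply state, still note the hard-ack point it carries so
the Tx queue bookkeeping stays in sync.  rxrpc_rotate_tx_window() is the
helper named in the trace above; the wrapper around it is illustrative.

	static void rxrpc_note_hard_ack(struct rxrpc_call *call,
					const struct rxrpc_ackpacket *ack)
	{
		u32 hard = ntohl(ack->firstPacket);	/* first un-acked DATA packet */

		/* everything before firstPacket has been hard-acked, so the
		 * sender may discard it from the Tx queue */
		if (hard > 0)
			rxrpc_rotate_tx_window(call, hard - 1);
	}
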
Florian Westphal
765dd3bb44 rxrpc: don't multiply with HZ twice
rxrpc_resend_timeout has an initial value of 4 * HZ; use it as-is.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-03-01 13:40:23 -05:00
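
In code terms the fix amounts to something like this (the wrapper function
name is illustrative): the timeout already holds a jiffies value, so it must
not be rescaled.

	static void rxrpc_arm_resend_timer(struct rxrpc_call *call)
	{
		/* rxrpc_resend_timeout starts out as 4 * HZ, i.e. it already
		 * holds a jiffies value, so don't multiply by HZ again */
		mod_timer(&call->resend_timer, jiffies + rxrpc_resend_timeout);
	}
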
Florian Westphal
c03ae533a9 rxrpc: terminate retrans loop when sending of skb fails
Due to a typo, 'stop' is never set to true.  The intent seems to be to stop
attempting to retransmit further packets once sendmsg returns an error.

This change is based on code inspection only.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-03-01 13:40:23 -05:00
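
A hypothetical sketch of the intended loop shape for c03ae533a9; the symbol
names (rxrpc_tx_lookup() in particular) are illustrative, not the exact ones
in the retransmit code, the point being only that 'stop' is actually set.

	static void rxrpc_resend_sketch(struct rxrpc_call *call, u32 first, u32 last)
	{
		bool stop = false;
		u32 seq;

		for (seq = first; seq <= last && !stop; seq++) {
			struct sk_buff *txb = rxrpc_tx_lookup(call, seq);

			if (!txb)
				continue;
			if (rxrpc_send_packet(call->conn->trans, txb) < 0)
				stop = true;	/* give up retransmitting for now */
		}
	}
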
David Howells
817913d8cd af_rxrpc: Expose more RxRPC parameters via sysctls
Expose RxRPC parameters via sysctls to control the Rx window size, the Rx MTU
maximum size and the number of packets that can be glued into a jumbo packet.

More info added to Documentation/networking/rxrpc.txt.

Signed-off-by: David Howells <dhowells@redhat.com>
2014-02-26 17:25:07 +00:00
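
A minimal sketch of how such knobs are typically wired up with a ctl_table;
the variable names, defaults and limits here are illustrative and the real
table lives in net/rxrpc.

	static unsigned int rxrpc_rx_window_size = 32;	/* illustrative default */
	static unsigned int rxrpc_rx_mtu = 5692;	/* illustrative default */
	static unsigned int zero = 0, n_65535 = 65535;

	static struct ctl_table rxrpc_sysctl_table[] = {
		{
			.procname	= "rx_window_size",
			.data		= &rxrpc_rx_window_size,
			.maxlen		= sizeof(unsigned int),
			.mode		= 0644,
			.proc_handler	= proc_dointvec_minmax,
			.extra1		= &zero,
			.extra2		= &n_65535,
		},
		{
			.procname	= "rx_mtu",
			.data		= &rxrpc_rx_mtu,
			.maxlen		= sizeof(unsigned int),
			.mode		= 0644,
			.proc_handler	= proc_dointvec_minmax,
			.extra1		= &zero,
			.extra2		= &n_65535,
		},
		{ }
	};
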
David Howells
9823f39a17 af_rxrpc: Improve ACK production
Improve ACK production by the following means:

 (1) Don't send an ACK_REQUESTED ack immediately even if the RXRPC_MORE_PACKETS
     flag isn't set on a data packet that also has RXRPC_REQUEST_ACK set.

     MORE_PACKETS just means that the sender has emptied its Tx data buffer.
     More data will be forthcoming unless RXRPC_LAST_PACKET is also flagged.

     It is possible to see runs of DATA packets with MORE_PACKETS unset that
     aren't waiting for an ACK.

     It is therefore better to wait a moment to see if one ACK can cover
     several packets.

 (2) Don't send an ACK_IDLE ack immediately unless we're responding to the
     terminal data packet of a call.

     Whilst sending an ACK_IDLE mid-call serves to let the other side know
     that we won't be asking it to resend certain Tx buffers and that it can
     discard them, spamming it with loads of acks just because we've
     temporarily run out of data merely distracts it.

 (3) Raise the ACK_IDLE ack generation timeout to half a second rather than a
     single jiffy.  Just because we haven't been given more data immediately
     doesn't mean that more isn't forthcoming.  The other side may be busily
     finding the data to send to us.

Signed-off-by: David Howells <dhowells@redhat.com>
2014-02-26 17:25:07 +00:00
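
A hedged sketch of the deferral policy described in 9823f39a17;
rxrpc_propose_ack() stands in for whatever the real ACK-proposal helper is
called, and only the policy, not the symbols, is meant to be accurate.

	static void rxrpc_consider_ack(struct rxrpc_call *call, struct sk_buff *skb)
	{
		struct rxrpc_skb_priv *sp = rxrpc_skb(skb);

		if (sp->hdr.flags & RXRPC_LAST_PACKET) {
			/* terminal DATA packet: acknowledge at once */
			rxrpc_propose_ack(call, RXRPC_ACK_IDLE,
					  sp->hdr.serial, true);
		} else if (sp->hdr.flags & RXRPC_REQUEST_ACK) {
			/* requested ack: defer briefly so one ACK can cover
			 * several DATA packets */
			rxrpc_propose_ack(call, RXRPC_ACK_REQUESTED,
					  sp->hdr.serial, false);
		}
	}
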
David Howells
5873c0834f af_rxrpc: Add sysctls for configuring RxRPC parameters
Add sysctls for configuring RxRPC protocol handling, specifically controls on
delays before ack generation, the delay before resending a packet, the maximum
lifetime of a call and the expiration times of calls, connections and
transports that haven't been recently used.

More info added in Documentation/networking/rxrpc.txt.

Signed-off-by: David Howells <dhowells@redhat.com>
2014-02-26 17:25:06 +00:00
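
Registering such a table usually looks something like the sketch below; the
table symbol is the illustrative one from the earlier sysctl sketch and the
function names are made up for the example.

	static struct ctl_table_header *rxrpc_sysctl_header;

	int __init rxrpc_sysctl_init(void)
	{
		rxrpc_sysctl_header = register_net_sysctl(&init_net, "net/rxrpc",
							  rxrpc_sysctl_table);
		return rxrpc_sysctl_header ? 0 : -ENOMEM;
	}

	void rxrpc_sysctl_exit(void)
	{
		if (rxrpc_sysctl_header)
			unregister_net_sysctl_table(rxrpc_sysctl_header);
	}
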
Dan Carpenter
08d4d217df rxrpc: out of bound read in debug code
Smatch complains because we are using an untrusted index into the
rxrpc_acks[] array.  It's just a read and it's only in the debug code,
but it's simple enough to add a check and fix it.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-01-21 17:02:52 -08:00
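
The shape of such a fix, as a sketch: clamp the untrusted index before using
it to look up a name.  The strings mirror the RXRPC_ACK_* reasons but the
wrapper function is illustrative.

	static const char *rxrpc_ack_name(u8 reason)
	{
		static const char *const rxrpc_acks[] = {
			"---", "REQ", "DUP", "OOS", "WIN", "MEM", "PNG", "PNR",
			"DLY", "IDL", "-?-"
		};

		if (reason >= ARRAY_SIZE(rxrpc_acks))
			reason = ARRAY_SIZE(rxrpc_acks) - 1;	/* map junk to "-?-" */
		return rxrpc_acks[reason];
	}
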
Eric Dumazet
95c9617472 net: cleanup unsigned to unsigned int
Use of "unsigned int" is preferred to bare "unsigned" in net tree.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-04-15 12:44:40 -04:00
Rusty Russell
3db1cd5c05 net: fix assignment of 0/1 to bool variables.
DaveM said:
   Please, this kind of stuff rots forever and not using bool properly
   drives me crazy.

Joe Perches <joe@perches.com> gave me the spatch script:

	@@
	bool b;
	@@
	-b = 0
	+b = false
	@@
	bool b;
	@@
	-b = 1
	+b = true

I merely installed coccinelle, read the documentation and took credit.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-12-19 22:27:29 -05:00
David S. Miller
504f284a71 rxrpc: Kill set but unused variable 'sp' in rxrpc_rotate_tx_window()
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-05-19 18:34:37 -04:00
David Howells
3b5bac2bde RxRPC: Fix a potential deadlock between the call resend_timer and state_lock
RxRPC can potentially deadlock as rxrpc_resend_time_expired() wants to get
call->state_lock so that it can alter the state of an RxRPC call.  However, its
caller (call_timer_fn()) has an apparent lock on the timer struct.

The problem is that rxrpc_resend_time_expired() isn't permitted to lock
call->state_lock as this could cause a deadlock against rxrpc_send_abort() as
that takes state_lock and then attempts to delete the resend timer by calling
del_timer_sync().

The deadlock can occur because del_timer_sync() will sit there forever waiting
for rxrpc_resend_time_expired() to return, but the latter may then wait for
call->state_lock, which rxrpc_send_abort() holds around del_timer_sync()...

This leads to a warning appearing in the kernel log that looks something like
the attached.

It should be sufficient to simply dispense with the locks.  It doesn't matter
if we set the resend timer expired event bit and queue the event processor
whilst we're changing state to one where the resend timer is irrelevant as the
event can just be ignored by the processor thereafter.

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.35-rc3-cachefs+ #115
-------------------------------------------------------
swapper/0 is trying to acquire lock:
 (&call->state_lock){++--..}, at: [<ffffffffa00200d4>] rxrpc_resend_time_expired+0x56/0x96 [af_rxrpc]

but task is already holding lock:
 (&call->resend_timer){+.-...}, at: [<ffffffff8103b675>] run_timer_softirq+0x182/0x2a5

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&call->resend_timer){+.-...}:
       [<ffffffff810560bc>] __lock_acquire+0x889/0x8fa
       [<ffffffff81056184>] lock_acquire+0x57/0x6d
       [<ffffffff8103bb9c>] del_timer_sync+0x3c/0x86
       [<ffffffffa002bb7a>] rxrpc_send_abort+0x50/0x97 [af_rxrpc]
       [<ffffffffa002bdd9>] rxrpc_kernel_abort_call+0xa1/0xdd [af_rxrpc]
       [<ffffffffa0061588>] afs_deliver_to_call+0x129/0x368 [kafs]
       [<ffffffffa006181b>] afs_process_async_call+0x54/0xff [kafs]
       [<ffffffff8104261d>] worker_thread+0x1ef/0x2e2
       [<ffffffff81045f47>] kthread+0x7a/0x82
       [<ffffffff81002cd4>] kernel_thread_helper+0x4/0x10

-> #0 (&call->state_lock){++--..}:
       [<ffffffff81055237>] validate_chain+0x727/0xd23
       [<ffffffff810560bc>] __lock_acquire+0x889/0x8fa
       [<ffffffff81056184>] lock_acquire+0x57/0x6d
       [<ffffffff813e6b69>] _raw_read_lock_bh+0x34/0x43
       [<ffffffffa00200d4>] rxrpc_resend_time_expired+0x56/0x96 [af_rxrpc]
       [<ffffffff8103b6e6>] run_timer_softirq+0x1f3/0x2a5
       [<ffffffff81036828>] __do_softirq+0xa2/0x13e
       [<ffffffff81002dcc>] call_softirq+0x1c/0x28
       [<ffffffff810049f0>] do_softirq+0x38/0x80
       [<ffffffff810361a2>] irq_exit+0x45/0x47
       [<ffffffff81018fb3>] smp_apic_timer_interrupt+0x88/0x96
       [<ffffffff81002893>] apic_timer_interrupt+0x13/0x20
       [<ffffffff810011ac>] cpu_idle+0x4d/0x83
       [<ffffffff813e06f3>] start_secondary+0x1bd/0x1c1

other info that might help us debug this:

1 lock held by swapper/0:
 #0:  (&call->resend_timer){+.-...}, at: [<ffffffff8103b675>] run_timer_softirq+0x182/0x2a5

stack backtrace:
Pid: 0, comm: swapper Not tainted 2.6.35-rc3-cachefs+ #115
Call Trace:
 <IRQ>  [<ffffffff81054414>] print_circular_bug+0xae/0xbd
 [<ffffffff81055237>] validate_chain+0x727/0xd23
 [<ffffffff810560bc>] __lock_acquire+0x889/0x8fa
 [<ffffffff810539a7>] ? mark_lock+0x42f/0x51f
 [<ffffffff81056184>] lock_acquire+0x57/0x6d
 [<ffffffffa00200d4>] ? rxrpc_resend_time_expired+0x56/0x96 [af_rxrpc]
 [<ffffffff813e6b69>] _raw_read_lock_bh+0x34/0x43
 [<ffffffffa00200d4>] ? rxrpc_resend_time_expired+0x56/0x96 [af_rxrpc]
 [<ffffffffa00200d4>] rxrpc_resend_time_expired+0x56/0x96 [af_rxrpc]
 [<ffffffff8103b6e6>] run_timer_softirq+0x1f3/0x2a5
 [<ffffffff8103b675>] ? run_timer_softirq+0x182/0x2a5
 [<ffffffffa002007e>] ? rxrpc_resend_time_expired+0x0/0x96 [af_rxrpc]
 [<ffffffff810367ef>] ? __do_softirq+0x69/0x13e
 [<ffffffff81036828>] __do_softirq+0xa2/0x13e
 [<ffffffff81002dcc>] call_softirq+0x1c/0x28
 [<ffffffff810049f0>] do_softirq+0x38/0x80
 [<ffffffff810361a2>] irq_exit+0x45/0x47
 [<ffffffff81018fb3>] smp_apic_timer_interrupt+0x88/0x96
 [<ffffffff81002893>] apic_timer_interrupt+0x13/0x20
 <EOI>  [<ffffffff81049de1>] ? __atomic_notifier_call_chain+0x0/0x86
 [<ffffffff8100955b>] ? mwait_idle+0x6e/0x78
 [<ffffffff81009552>] ? mwait_idle+0x65/0x78
 [<ffffffff810011ac>] cpu_idle+0x4d/0x83
 [<ffffffff813e06f3>] start_secondary+0x1bd/0x1c1

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-08-04 21:53:16 -07:00
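
A hedged sketch of the lock-free timer handler shape described in
3b5bac2bde; the event bit and queueing helper names are illustrative of that
era's code rather than exact.

	static void rxrpc_resend_time_expired(unsigned long _call)
	{
		struct rxrpc_call *call = (struct rxrpc_call *)_call;

		/* No call->state_lock here: just note the event and let the
		 * event processor ignore it if the call has since moved to a
		 * state where resending is irrelevant. */
		if (call->state < RXRPC_CALL_COMPLETE) {
			set_bit(RXRPC_CALL_RESEND_TIMER, &call->events);
			rxrpc_queue_call(call);
		}
	}
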
Tejun Heo
5a0e3ad6af include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files.  percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.

percpu.h -> slab.h dependency is about to be removed.  Prepare for
this change by updating users of gfp and slab facilities to include those
headers directly instead of assuming their availability.  As this
conversion needs to touch a large number of source files, the following
script is used as the basis of conversion.

  http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following.

* Scan files for gfp and slab usages and update the includes such that
  only the necessary includes are there, i.e. gfp.h if only gfp is used,
  slab.h if slab is used.

* When the script inserts a new include, it looks at the include
  blocks and tries to place the new include so that its order conforms
  to its surroundings.  It's put in the include block which contains
  core kernel includes, in the same order that the rest are ordered -
  alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
  doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly
  because the file doesn't have a fitting include block), it prints out
  an error message indicating which .h file needs to be added to the
  file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly
   over 4000 files, deleting around 700 includes and adding ~480 gfp.h
   and ~3000 slab.h inclusions.  The script emitted errors for ~400
   files.

2. Each error was manually checked.  Some didn't need the inclusion,
   some needed manual addition while adding it to implementation .h or
   embedding .c file was more appropriate for others.  This step added
   inclusions to around 150 files.

3. The script was run again and the output was compared to the edits
   from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed.
   e.g. lib/decompress_*.c used malloc/free() wrappers around slab
   APIs requiring slab.h to be added manually.

5. The script was run on all .h files but without automatically
   editing them as sprinkling gfp.h and slab.h inclusions around .h
   files could easily lead to inclusion dependency hell.  Most gfp.h
   inclusion directives were ignored as stuff from gfp.h was usually
   widely available and often used in preprocessor macros.  Each
   slab.h inclusion directive was examined and added manually as
   necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures
   were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
   distributed build env didn't work with gcov compiles) and a few
   more options had to be turned off depending on archs to make things
   build (like ipr on powerpc/64 which failed due to missing writeq).

   * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as
   a separate patch and serve as bisection point.

Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable on most builds of
the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
2010-03-30 22:02:32 +09:00
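
For an individual file the conversion amounts to something like this sketch
(the helper function is made up): include what the code actually uses
instead of relying on percpu.h dragging slab.h in.

	#include <linux/gfp.h>
	#include <linux/slab.h>

	static void *rxrpc_alloc_blob(size_t len)
	{
		/* kmalloc()/GFP_KERNEL now require slab.h/gfp.h directly */
		return kmalloc(len, GFP_KERNEL);
	}
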
David Howells
4e36a95e59 RxRPC: Use uX/sX rather than uintX_t/intX_t types
Use uX/sX rather than uintX_t/intX_t types for consistency.

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-09-16 00:01:13 -07:00
Jan Engelhardt
36cbd3dcc1 net: mark read-only arrays as const
String literals are constant, and usually, we can also tag the array
of pointers const too, moving it to the .rodata section.

Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-08-05 10:42:58 -07:00
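
The pattern, shown with an illustrative packet-name table: making both the
pointers and the pointed-to strings const lets the whole array live in
.rodata.

	static const char *const rxrpc_pkts[] = {
		"?00", "DATA", "ACK", "BUSY", "ABORT", "ACKALL",
		"CHALL", "RESP", "DEBUG",
	};
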
Julia Lawall
163e3cb7da net/rxrpc: Use BUG_ON
if (...) BUG(); should be replaced with BUG_ON(...) when the test has no
side-effects, to allow a definition of BUG_ON() that drops the code completely.

The semantic patch that makes this change is as follows:
(http://www.emn.fr/x-info/coccinelle/)

// <smpl>
@ disable unlikely @ expression E,f; @@

(
  if (<... f(...) ...>) { BUG(); }
|
- if (unlikely(E)) { BUG(); }
+ BUG_ON(E);
)

@@ expression E,f; @@

(
  if (<... f(...) ...>) { BUG(); }
|
- if (E) { BUG(); }
+ BUG_ON(E);
)
// </smpl>

Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-02-17 18:42:03 -08:00
David Howells
224711df5c [AF_RXRPC]: Sort out MTU handling.
Sort out the MTU determination and handling in AF_RXRPC:

 (1) If it's present, parse the additional information supplied by the peer at
     the end of the ACK packet (struct ackinfo) to determine the MTU sizes
     that the peer is willing to support.

 (2) Initialise the MTU size to that peer from the kernel's routing records.

 (3) Send ACKs rather than ACKALLs as the former carry the additional info,
     and the latter do not.

 (4) Declare the interface MTU size in outgoing ACKs as a maximum amount of
     data that can be stuffed into an RxRPC packet without it having to be
     fragmented on its way in through this computer's NIC.

 (5) If sendmsg() is given MSG_MORE then it should allocate an skb of the
     maximum size rather than one just big enough for the data it's got left
     to process, on the theory that there is more data to come that it can
     append to that packet.

     This means, for example, that if AFS does a large StoreData op, all the
     packets barring the last will be filled to the maximum unfragmented size.

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-05-04 12:41:11 -07:00
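
A hedged sketch of point (1) of 224711df5c: the trailing ackinfo carries the
peer's MTU advice, which is folded into the peer record.  The field names
follow the rx protocol, but the helper and the peer field are illustrative.

	struct rxrpc_ackinfo {
		__be32	rxMTU;		/* max. receive MTU the peer will accept */
		__be32	maxMTU;		/* max. interface MTU at the peer */
		__be32	rwind;		/* Rx window size */
		__be32	jumbo_max;	/* max. packets per jumbogram */
	};

	static void rxrpc_extract_ackinfo(struct rxrpc_peer *peer,
					  const struct rxrpc_ackinfo *info)
	{
		u32 mtu = min(ntohl(info->rxMTU), ntohl(info->maxMTU));

		if (mtu < peer->maxdata)
			peer->maxdata = mtu;	/* shrink to what the peer accepts */
	}
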
David Howells
651350d10f [AF_RXRPC]: Add an interface to the AF_RXRPC module for the AFS filesystem to use
Add an interface to the AF_RXRPC module so that the AFS filesystem module can
more easily make use of the services available.  AFS still opens a socket but
then uses the action functions in lieu of sendmsg() and registers an intercept
function to grab messages before they're queued on the socket Rx queue.

This permits AFS (or whatever) to:

 (1) Avoid the overhead of using the recvmsg() call.

 (2) Use different keys directly on individual client calls on one socket
     rather than having to open a whole slew of sockets, one for each key it
     might want to use.

 (3) Avoid calling request_key() at the point of issue of a call or opening of
     a socket.  This is done instead by AFS at the point of open(), unlink() or
     other VFS operation, with the key handed through.

 (4) Request the use of something other than GFP_KERNEL to allocate memory.

Furthermore:

 (*) The socket buffer markings used by RxRPC are made available for AFS so
     that it can interpret the cooked RxRPC messages itself.

 (*) rxgen (un)marshalling abort codes are made available.


The following documentation for the kernel interface is added to
Documentation/networking/rxrpc.txt:

=========================
AF_RXRPC KERNEL INTERFACE
=========================

The AF_RXRPC module also provides an interface for use by in-kernel utilities
such as the AFS filesystem.  This permits such a utility to:

 (1) Use different keys directly on individual client calls on one socket
     rather than having to open a whole slew of sockets, one for each key it
     might want to use.

 (2) Avoid having RxRPC call request_key() at the point of issue of a call or
     opening of a socket.  Instead the utility is responsible for requesting a
     key at the appropriate point.  AFS, for instance, would do this during VFS
     operations such as open() or unlink().  The key is then handed through
     when the call is initiated.

 (3) Request the use of something other than GFP_KERNEL to allocate memory.

 (4) Avoid the overhead of using the recvmsg() call.  RxRPC messages can be
     intercepted before they get put into the socket Rx queue and the socket
     buffers manipulated directly.

To use the RxRPC facility, a kernel utility must still open an AF_RXRPC socket,
bind an address as appropriate and listen if it's to be a server socket, but
then it passes this to the kernel interface functions.

The kernel interface functions are as follows:

 (*) Begin a new client call.

	struct rxrpc_call *
	rxrpc_kernel_begin_call(struct socket *sock,
				struct sockaddr_rxrpc *srx,
				struct key *key,
				unsigned long user_call_ID,
				gfp_t gfp);

     This allocates the infrastructure to make a new RxRPC call and assigns
     call and connection numbers.  The call will be made on the UDP port that
     the socket is bound to.  The call will go to the destination address of a
     connected client socket unless an alternative is supplied (srx is
     non-NULL).

     If a key is supplied then this will be used to secure the call instead of
     the key bound to the socket with the RXRPC_SECURITY_KEY sockopt.  Calls
     secured in this way will still share connections if at all possible.

     The user_call_ID is equivalent to that supplied to sendmsg() in the
     control data buffer.  It is entirely feasible to use this to point to a
     kernel data structure.

     If this function is successful, an opaque reference to the RxRPC call is
     returned.  The caller now holds a reference on this and it must be
     properly ended.

 (*) End a client call.

	void rxrpc_kernel_end_call(struct rxrpc_call *call);

     This is used to end a previously begun call.  The user_call_ID is expunged
     from AF_RXRPC's knowledge and will not be seen again in association with
     the specified call.

 (*) Send data through a call.

	int rxrpc_kernel_send_data(struct rxrpc_call *call, struct msghdr *msg,
				   size_t len);

     This is used to supply either the request part of a client call or the
     reply part of a server call.  msg.msg_iovlen and msg.msg_iov specify the
     data buffers to be used.  msg_iov may not be NULL and must point
     exclusively to in-kernel virtual addresses.  msg.msg_flags may be given
     MSG_MORE if there will be subsequent data sends for this call.

     The msg must not specify a destination address, control data or any flags
     other than MSG_MORE.  len is the total amount of data to transmit.

 (*) Abort a call.

	void rxrpc_kernel_abort_call(struct rxrpc_call *call, u32 abort_code);

     This is used to abort a call if it's still in an abortable state.  The
     abort code specified will be placed in the ABORT message sent.

 (*) Intercept received RxRPC messages.

	typedef void (*rxrpc_interceptor_t)(struct sock *sk,
					    unsigned long user_call_ID,
					    struct sk_buff *skb);

	void
	rxrpc_kernel_intercept_rx_messages(struct socket *sock,
					   rxrpc_interceptor_t interceptor);

     This installs an interceptor function on the specified AF_RXRPC socket.
     All messages that would otherwise wind up in the socket's Rx queue are
     then diverted to this function.  Note that care must be taken to process
     the messages in the right order to maintain DATA message sequentiality.

     The interceptor function itself is provided with the address of the
     socket that is handling the incoming message, the ID assigned by the
     kernel utility to the call and the socket buffer containing the message.

     The skb->mark field indicates the type of message:

	MARK				MEANING
	===============================	=======================================
	RXRPC_SKB_MARK_DATA		Data message
	RXRPC_SKB_MARK_FINAL_ACK	Final ACK received for an incoming call
	RXRPC_SKB_MARK_BUSY		Client call rejected as server busy
	RXRPC_SKB_MARK_REMOTE_ABORT	Call aborted by peer
	RXRPC_SKB_MARK_NET_ERROR	Network error detected
	RXRPC_SKB_MARK_LOCAL_ERROR	Local error encountered
	RXRPC_SKB_MARK_NEW_CALL		New incoming call awaiting acceptance

     The remote abort message can be probed with rxrpc_kernel_get_abort_code().
     The two error messages can be probed with rxrpc_kernel_get_error_number().
     A new call can be accepted with rxrpc_kernel_accept_call().

     Data messages can have their contents extracted with the usual bunch of
     socket buffer manipulation functions.  A data message can be determined to
     be the last one in a sequence with rxrpc_kernel_is_data_last().  When a
     data message has been used up, rxrpc_kernel_data_delivered() should be
     called on it.

     Non-data messages should be handed to rxrpc_kernel_free_skb() for
     disposal.  It is possible to get extra refs on all types of message for
     later
     freeing, but this may pin the state of a call until the message is finally
     freed.

 (*) Accept an incoming call.

	struct rxrpc_call *
	rxrpc_kernel_accept_call(struct socket *sock,
				 unsigned long user_call_ID);

     This is used to accept an incoming call and to assign it a call ID.  This
     function is similar to rxrpc_kernel_begin_call() and calls accepted must
     be ended in the same way.

     If this function is successful, an opaque reference to the RxRPC call is
     returned.  The caller now holds a reference on this and it must be
     properly ended.

 (*) Reject an incoming call.

	int rxrpc_kernel_reject_call(struct socket *sock);

     This is used to reject the first incoming call on the socket's queue with
     a BUSY message.  -ENODATA is returned if there were no incoming calls.
     Other errors may be returned if the call had been aborted (-ECONNABORTED)
     or had timed out (-ETIME).

 (*) Record the delivery of a data message and free it.

	void rxrpc_kernel_data_delivered(struct sk_buff *skb);

     This is used to record a data message as having been delivered and to
     update the ACK state for the call.  The socket buffer will be freed.

 (*) Free a message.

	void rxrpc_kernel_free_skb(struct sk_buff *skb);

     This is used to free a non-DATA socket buffer intercepted from an AF_RXRPC
     socket.

 (*) Determine if a data message is the last one on a call.

	bool rxrpc_kernel_is_data_last(struct sk_buff *skb);

     This is used to determine if a socket buffer holds the last data message
     to be received for a call (true will be returned if it does, false
     if not).

     The data message will be part of the reply on a client call and the
     request on an incoming call.  In the latter case there will be more
     messages, but in the former case there will not.

 (*) Get the abort code from an abort message.

	u32 rxrpc_kernel_get_abort_code(struct sk_buff *skb);

     This is used to extract the abort code from a remote abort message.

 (*) Get the error number from a local or network error message.

	int rxrpc_kernel_get_error_number(struct sk_buff *skb);

     This is used to extract the error number from a message indicating either
     a local error occurred or a network error occurred.

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-26 15:50:17 -07:00
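
Putting the calls documented above together, a kernel-side client exchange
might look like the sketch below; error handling is abbreviated, the abort
code is arbitrary and the wrapper function itself is purely illustrative.

	static int example_rxrpc_request(struct socket *sock,
					 struct sockaddr_rxrpc *srx,
					 struct key *key,
					 struct msghdr *msg, size_t len)
	{
		struct rxrpc_call *call;
		int ret;

		call = rxrpc_kernel_begin_call(sock, srx, key, 0x1, GFP_KERNEL);
		if (IS_ERR(call))
			return PTR_ERR(call);

		ret = rxrpc_kernel_send_data(call, msg, len);
		if (ret < 0)
			rxrpc_kernel_abort_call(call, 1 /* illustrative abort code */);

		rxrpc_kernel_end_call(call);	/* always drop our reference */
		return ret;
	}
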
David Howells
17926a7932 [AF_RXRPC]: Provide secure RxRPC sockets for use by userspace and kernel both
Provide AF_RXRPC sockets that can be used to talk to AFS servers, or serve
answers to AFS clients.  KerberosIV security is fully supported.  The patches
and some example test programs can be found in:

	http://people.redhat.com/~dhowells/rxrpc/

This will eventually replace the old implementation of kernel-only RxRPC
currently resident in net/rxrpc/.

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-26 15:48:28 -07:00
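
From userspace, opening and binding an AF_RXRPC client socket looks roughly
like the sketch below; the sockaddr_rxrpc layout is taken from linux/rxrpc.h
on a kernel with this support, so treat the details as an assumption to
check against your headers.

	#include <stdio.h>
	#include <string.h>
	#include <sys/socket.h>
	#include <netinet/in.h>
	#include <linux/rxrpc.h>

	#ifndef AF_RXRPC
	#define AF_RXRPC 33	/* assumed value; older libcs may lack it */
	#endif

	int main(void)
	{
		struct sockaddr_rxrpc srx;
		int fd = socket(AF_RXRPC, SOCK_DGRAM, PF_INET);

		if (fd < 0) {
			perror("socket");
			return 1;
		}

		memset(&srx, 0, sizeof(srx));
		srx.srx_family			= AF_RXRPC;
		srx.srx_service			= 0;	/* 0 = client-only socket */
		srx.transport_type		= SOCK_DGRAM;
		srx.transport_len		= sizeof(srx.transport.sin);
		srx.transport.sin.sin_family	= AF_INET;
		srx.transport.sin.sin_port	= 0;	/* let the kernel pick */

		if (bind(fd, (struct sockaddr *)&srx, sizeof(srx)) < 0) {
			perror("bind");
			return 1;
		}
		return 0;
	}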