Commit Graph

110 Commits

Author SHA1 Message Date
Pavel Shilovsky
7c9421e1a9 CIFS: Change mid_q_entry structure fields
to be protocol-agnostic and large enough to hold both CIFS
and SMB2 values.

Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru>
2012-03-23 14:28:03 -04:00
Pavel Shilovsky
d4e4854fd1 CIFS: Separate protocol-specific code from demultiplex code
Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru>
2012-03-23 14:28:02 -04:00
Pavel Shilovsky
792af7b05b CIFS: Separate protocol-specific code from transport routines
so that we can use these functions for SMB2.

Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru>
2012-03-23 14:28:02 -04:00
Pavel Shilovsky
bc205ed19b CIFS: Prepare credits code for a slot reservation
that is essential for CIFS/SMB/SMB2 oplock breaks and SMB2 echoes.

Reviewed-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2012-03-21 11:35:36 -05:00
Pavel Shilovsky
5bc594982f CIFS: Make wait_for_free_request killable
to let us kill the process if it hangs waiting for a credit when
the session is down and echo is disabled.

Reviewed-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2012-03-21 11:35:32 -05:00
Pavel Shilovsky
2d86dbc970 CIFS: Introduce credit-based flow control
and send no more requests at once than the current credits value
allows. For SMB/CIFS it's trivial: the value is incremented when any
message is received and decremented when one is sent.

Reviewed-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2012-03-21 11:35:03 -05:00
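
A minimal sketch of the credit accounting the commit above describes, using illustrative names (struct srv_credits, take_credit, add_credit); the real cifs code keeps the counter in the server structure and pairs it with the request wait queue used by wait_for_free_request:

        #include <linux/spinlock.h>
        #include <linux/wait.h>

        /* Illustrative only: a per-server credit count guarded by a spinlock. */
        struct srv_credits {
                spinlock_t lock;
                unsigned int credits;           /* requests we may still send */
                wait_queue_head_t request_q;    /* senders sleep here when 0  */
        };

        /* Before sending a request: consume one credit, waiting if none left. */
        static void take_credit(struct srv_credits *c)
        {
                spin_lock(&c->lock);
                while (c->credits == 0) {
                        spin_unlock(&c->lock);
                        wait_event(c->request_q, c->credits > 0);
                        spin_lock(&c->lock);
                }
                c->credits--;
                spin_unlock(&c->lock);
        }

        /* On receiving any response: give the credit back and wake a sender. */
        static void add_credit(struct srv_credits *c)
        {
                spin_lock(&c->lock);
                c->credits++;
                spin_unlock(&c->lock);
                wake_up(&c->request_q);
        }
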
Pavel Shilovsky
fc40f9cf82 CIFS: Simplify inFlight logic
by making it an unsigned integer and protecting access to it with
the req_lock from the server structure.

Reviewed-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2012-03-21 11:27:35 -05:00
Pavel Shilovsky
10b9b98e41 CIFS: Respect negotiated MaxMpxCount
Some servers set this value lower than the hardcoded 50, and we
lost the connection when we exceeded that limit. Fix this by
respecting the negotiated value and not sending more simultaneous
requests than the server allows.

Cc: stable@kernel.org
Reviewed-by: Jeff Layton <jlayton@samba.org>
Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru>
Signed-off-by: Steve French <stevef@smf-gateway.(none)>
2012-03-20 10:17:40 -05:00
Jeff Layton
f06ac72e92 cifs, freezer: add wait_event_freezekillable and have cifs use it
CIFS currently uses wait_event_killable to put tasks to sleep while
they await replies from the server. That function though does not
allow the freezer to run. In many cases, the network interface may
be going down anyway, in which case the reply will never come. The
client then ends up blocking the computer from suspending.

Fix this by adding a new wait_event_freezable variant --
wait_event_freezekillable. The idea is to combine the behavior of
wait_event_killable and wait_event_freezable -- put the task to
sleep and only allow it to be awoken by fatal signals, but also
allow the freezer to do its job.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
2011-10-19 15:30:40 -04:00
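
One way such a macro can be put together (a sketch, not necessarily the exact upstream definition): sleep killably, but treat a wakeup caused by the freezer as a cue to freeze and retry rather than to return.

        #include <linux/wait.h>
        #include <linux/freezer.h>

        /* Sketch: killable wait that also lets the freezer do its job. */
        #define wait_event_freezekillable(wq, condition)                   \
        ({                                                                 \
                int __ret;                                                 \
                do {                                                       \
                        /* also wake when the freezer wants this task */   \
                        __ret = wait_event_killable(wq,                    \
                                    (condition) || freezing(current));     \
                        if (__ret && !freezing(current))                   \
                                break;  /* fatal signal: give up */        \
                        else if (!(condition))                             \
                                __ret = -ERESTARTSYS;                      \
                } while (try_to_freeze());  /* freeze, then re-check */    \
                __ret;                                                     \
        })
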
Jeff Layton
44d22d846f cifs: add a callback function to receive the rest of the frame
In order to handle larger SMBs for readpages and other calls, we want
to be able to read into a preallocated set of buffers. Rather than
changing all of the existing code to preallocate buffers however, we
instead add a receive callback function to the MID.

cifsd will call this function once the mid_q_entry has been identified
in order to receive the rest of the SMB. If the mid can't be identified
or the receive pointer is unset, then the standard 3rd phase receive
function will be called.

Reviewed-and-Tested-by: Pavel Shilovsky <piastry@etersoft.ru>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
2011-10-19 15:29:49 -04:00
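
A sketch of the dispatch this adds, with illustrative names: the mid carries an optional receive function pointer, and the demultiplex thread falls back to the standard third-phase receive when it is unset.

        #include <linux/types.h>

        struct TCP_Server_Info;                 /* opaque in this sketch */

        struct mid_q_entry {
                __u16 mid;
                /* optional: reads the rest of the frame into caller-supplied
                 * buffers (e.g. the page list prepared by readpages) */
                int (*receive)(struct TCP_Server_Info *server,
                               struct mid_q_entry *mid);
                /* ... remaining fields elided ... */
        };

        int standard_receive(struct TCP_Server_Info *server,
                             struct mid_q_entry *mid);  /* default 3rd phase */

        /* Called by cifsd once the incoming header has been matched to a mid. */
        static int receive_rest_of_frame(struct TCP_Server_Info *server,
                                         struct mid_q_entry *mid)
        {
                if (mid && mid->receive)
                        return mid->receive(server, mid);
                return standard_receive(server, mid);
        }
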
Jeff Layton
826a95e4a3 cifs: consolidate signature generating code
We have two versions of signature generating code. A vectorized and
non-vectorized version. Eliminate a large chunk of cut-and-paste
code by turning the non-vectorized version into a wrapper around the
vectorized one.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <smfrench@gmail.com>
2011-10-12 23:41:41 -05:00
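
The wrapper pattern it describes, sketched with simplified, illustrative prototypes: the flat-buffer variant simply wraps its buffer in a one-element kvec and hands it to the vectorized routine.

        #include <linux/uio.h>
        #include <linux/types.h>

        /* Illustrative prototypes, not the exact cifs signatures. */
        int sign_smb_vec(struct kvec *iov, int n_vec,
                         struct TCP_Server_Info *server, __u32 *pseq_number);

        int sign_smb(struct smb_hdr *cifs_pdu, struct TCP_Server_Info *server,
                     __u32 *pseq_number)
        {
                struct kvec iov;

                /* whole on-the-wire frame: 4-byte RFC1001 length + SMB */
                iov.iov_base = cifs_pdu;
                iov.iov_len = be32_to_cpu(cifs_pdu->smb_buf_length) + 4;

                return sign_smb_vec(&iov, 1, server, pseq_number);
        }
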
Steve French
789e666123 [CIFS] Cleanup use of CONFIG_CIFS_STATS2 ifdef to make transport routines more readable
Christoph had requested that the stats related code (in
CONFIG_CIFS_STATS2) be moved into helpers to make code flow more
readable.   This patch should help.   For example the following
section from transport.c

                       spin_unlock(&GlobalMid_Lock);
                       atomic_inc(&ses->server->num_waiters);
                       wait_event(ses->server->request_q,
                                  atomic_read(&ses->server->inFlight)
                                    < cifs_max_pending);
                       atomic_dec(&ses->server->num_waiters);
                       spin_lock(&GlobalMid_Lock);

becomes simpler (with the patch below):
                       spin_unlock(&GlobalMid_Lock);
                       cifs_num_waiters_inc(server);
                       wait_event(server->request_q,
                                  atomic_read(&server->inFlight)
                                    < cifs_max_pending);
                       cifs_num_waiters_dec(server);
                       spin_lock(&GlobalMid_Lock);

Reviewed-by: Jeff Layton <jlayton@redhat.com>
CC: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Reviewed-by: Pavel Shilovsky <piastry@etersoft.ru>
2011-08-11 18:23:45 +00:00
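
One plausible shape for those helpers: trivial inlines that update the counter when CONFIG_CIFS_STATS2 is enabled and compile to nothing otherwise.

        #ifdef CONFIG_CIFS_STATS2
        static inline void cifs_num_waiters_inc(struct TCP_Server_Info *server)
        {
                atomic_inc(&server->num_waiters);
        }

        static inline void cifs_num_waiters_dec(struct TCP_Server_Info *server)
        {
                atomic_dec(&server->num_waiters);
        }
        #else
        /* stats disabled: keep callers free of #ifdef clutter */
        static inline void cifs_num_waiters_inc(struct TCP_Server_Info *server) {}
        static inline void cifs_num_waiters_dec(struct TCP_Server_Info *server) {}
        #endif
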
Pavel Shilovsky
0193e07226 CIFS: Fix missing a decrement of inFlight value
if we failed on getting mid entry in cifs_call_async.

Cc: stable@kernel.org
Signed-off-by: Pavel Shilovsky <piastryyy@gmail.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-08-03 19:42:12 +00:00
Steve French
96daf2b091 [CIFS] Rename three structures to avoid camel case
secMode to sec_mode
and
cifsTconInfo to cifs_tcon
and
cifsSesInfo to cifs_ses

Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-05-27 04:34:02 +00:00
Jeff Layton
3c1105df69 cifs: don't call mid_q_entry->callback under the Global_MidLock (try #5)
Minor revision to the last version of this patch -- the only difference
is the fix to the cFYI statement in cifs_reconnect.

Holding the spinlock while we call this function means that it can't
sleep, which really limits what it can do. Taking it out from under
the spinlock also means less contention for this global lock.

Change the semantics such that the Global_MidLock is not held when
the callback is called. To do this requires that we take extra care
not to have sync_mid_result remove the mid from the list when the
mid is in a state where that has already happened. This prevents
list corruption when the mid is sitting on a private list for
reconnect or when cifsd is coming down.

Reviewed-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com>
Reviewed-by: Pavel Shilovsky <piastry@etersoft.ru>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-05-24 03:11:33 +00:00
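
The usual pattern for getting out from under the lock, sketched with illustrative names: detach the affected mids onto a private list while holding GlobalMid_Lock, drop the lock, then walk the private list and invoke each callback.

        #include <linux/list.h>
        #include <linux/spinlock.h>

        static void fail_pending_mids(struct TCP_Server_Info *server)
        {
                struct mid_q_entry *mid, *tmp;
                LIST_HEAD(retry_list);

                spin_lock(&GlobalMid_Lock);
                list_for_each_entry_safe(mid, tmp, &server->pending_mid_q, qhead) {
                        mid->midState = MID_RETRY_NEEDED;
                        list_move(&mid->qhead, &retry_list);
                }
                spin_unlock(&GlobalMid_Lock);

                /* callbacks may sleep or take other locks now */
                list_for_each_entry_safe(mid, tmp, &retry_list, qhead) {
                        list_del_init(&mid->qhead);
                        mid->callback(mid);
                }
        }
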
Jeff Layton
59ffd84141 cifs: add ignore_pend flag to cifs_call_async
The current code always ignores the max_pending limit. Have it instead
only optionally ignore the pending limit. For CIFSSMBEcho, we need to
ignore it to make sure they always can go out. For async reads, writes
and potentially other calls, we need to respect it.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-05-23 02:59:16 +00:00
Jeff Layton
fcc31cb6f1 cifs: make cifs_send_async take a kvec array
We'll need this for async writes, so convert the call to take a kvec
array. CIFSSMBEcho is changed to put a kvec on the stack and pass
in the SMB buffer using that.

Reviewed-by: Pavel Shilovsky <piastry@etersoft.ru>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-05-23 02:58:26 +00:00
Jeff Layton
2c8f981d93 cifs: consolidate SendReceive response checks
Further consolidate the SendReceive code by moving the checks run over
the packet into a separate function that all the SendReceive variants
can call.

We can also eliminate the check for a receive_len that's too big or too
small. cifs_demultiplex_thread already checks that and disconnects the
socket if that occurs, while setting the midStatus to MALFORMED. It'll
never call this code if that's the case.

Finally do a little cleanup. Use "goto out" on errors so that the flow
of code in the normal case is more evident. Also switch the logErr
variable in map_smb_to_linux_error to a bool.

Reviewed-by: Pavel Shilovsky <piastry@etersoft.ru>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-05-23 02:58:24 +00:00
Jeff Layton
820a803ffa cifs: keep BCC in little-endian format
This is the same patch as originally posted, just with some merge
conflicts fixed up...

Currently, the ByteCount is usually converted to host-endian on receive.
This is confusing however, as we need to keep two sets of routines for
accessing it, and keep track of when to use each routine. Munging
received packets like this also limits when the signature can be
calculated.

Simplify the code by keeping the received ByteCount in little-endian
format. This allows us to eliminate a set of routines for accessing it
and we can now drop the *_le suffixes from the accessor functions since
that's now implied.

While we're at it, switch all of the places that read the ByteCount
directly to use the get_bcc inline which should also clean up some
unaligned accesses.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-05-19 14:10:53 +00:00
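
With the received ByteCount kept in its on-the-wire (little-endian) form, a single pair of unaligned-safe accessors is enough; roughly (helper names illustrative):

        #include <asm/unaligned.h>

        /* The byte count sits just past the variable-length word area. */
        static inline void *bcc_ptr(struct smb_hdr *hdr)
        {
                return (unsigned char *)hdr + sizeof(struct smb_hdr) +
                        2 * hdr->WordCount;
        }

        static inline __u16 get_bcc(struct smb_hdr *hdr)
        {
                /* stored little-endian; convert only on read */
                return get_unaligned_le16(bcc_ptr(hdr));
        }

        static inline void put_bcc(__u16 count, struct smb_hdr *hdr)
        {
                put_unaligned_le16(count, bcc_ptr(hdr));
        }
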
Steve French
be8e3b0044 consistently use smb_buf_length as be32 for cifs (try 3)
There is one big-endian field in the cifs protocol, the RFC1001
length, which cifs code (unlike the smb2 code) had been handling as
u32 until the last possible moment, when it was converted to be32 (its
native form) before being sent on the wire. To remove the last sparse
endian warning, and to make this consistent with the smb2
implementation (which always treats the fields in their native size
and endianness), convert all uses of smb_buf_length to be32.

This version incorporates Christoph's comment about using
be32_add_cpu, and fixes a typo in the second version of the patch.

Signed-off-by: Steve French <sfrench@us.ibm.com>
Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-05-19 14:10:51 +00:00
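
Keeping the field in wire format means conversions happen only at the boundaries, and in-place arithmetic can use be32_add_cpu(); a small sketch with illustrative helper names:

        #include <asm/byteorder.h>

        /* smb_buf_length is declared __be32 and stays big-endian in memory. */
        static void grow_frame(struct smb_hdr *hdr, unsigned int extra_bytes)
        {
                /* adjust the RFC1001 length without a cpu-order round trip */
                be32_add_cpu(&hdr->smb_buf_length, extra_bytes);
        }

        static unsigned int frame_length(struct smb_hdr *hdr)
        {
                /* convert only where a host-order value is really needed */
                return be32_to_cpu(hdr->smb_buf_length) + 4;
        }
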
Jeff Layton
71823baff1 cifs: don't always drop malformed replies on the floor (try #3)
Slight revision to this patch...use min_t() instead of conditional
assignment. Also, remove the FIXME comment and replace it with the
explanation that Steve gave earlier.

After receiving a packet, we currently check the header. If it's no
good, then we toss it out and continue the loop, leaving the caller
waiting on that response.

In cases where the packet has length inconsistencies, but the MID is
valid, this leads to unneeded delays. That's especially problematic now
that the client waits indefinitely for responses.

Instead, don't immediately discard the packet if checkSMB fails. Try to
find a matching mid_q_entry, mark it as having a malformed response and
issue the callback.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-02-11 03:59:12 +00:00
Jeff Layton
e3f0dadb2b cifs: enable signing flag in SMB header when server has it on
cifs_sign_smb only generates a signature if the correct Flags2 bit is
set. Make sure that it gets set correctly if we're sending an async
call.

This patch fixes:

    https://bugzilla.kernel.org/show_bug.cgi?id=28142

Reported-and-Tested-by: JG <jg@cms.ac>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-02-04 20:19:57 +00:00
Jeff Layton
d804d41d16 cifs: don't pop a printk when sending on a socket is interrupted
If we kill the process while it's sending on a socket then the
kernel_sendmsg will return -EINTR. This is normal. No need to spam the
ring buffer with this info.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-01-31 04:32:21 +00:00
Jeff Layton
2db7c58155 cifs: send an NT_CANCEL request when a process is signalled
Use the new send_nt_cancel function to send an NT_CANCEL when the
process is delivered a fatal signal. This is a "best effort" enterprise
however, so don't bother to check the return code. There's nothing we
can reasonably do if it fails anyway.

Reviewed-by: Pavel Shilovsky <piastryyy@gmail.com>
Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-01-31 04:24:38 +00:00
Jeff Layton
1be912dde7 cifs: handle cancelled requests better
Currently, when a request is cancelled via signal, we delete the mid
immediately. If the request was already transmitted however, the client
is still likely to receive a response. When it does, it won't recognize
it however and will pop a printk.

It's also a little dangerous to just delete the mid entry like this. We
may end up reusing that mid. If we do then we could potentially get the
response from the first request confused with the later one.

Prevent the reuse of mids by marking them as cancelled and keeping them
on the pending_mid_q list. If the reply comes in, we'll delete it from
the list then. If it never comes, then we'll delete it at reconnect
or when cifsd comes down.

Reviewed-by: Pavel Shilovsky <piastryyy@gmail.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-01-31 04:23:31 +00:00
Jeff Layton
690c522fa5 cifs: use get/put_unaligned functions to access ByteCount
It's possible that when we access the ByteCount that the alignment
will be off. Most CPUs deal with that transparently, but there's
usually some performance impact. Some CPUs raise an exception on
unaligned accesses.

Fix this by accessing the byte count using the get_unaligned and
put_unaligned inlined functions. While we're at it, fix the types
of some of the variables that receive the return values of these
functions.

Acked-by: Pavel Shilovsky <piastryyy@gmail.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-01-20 21:46:29 +00:00
Jeff Layton
76dcc26f1d cifs: mangle existing header for SMB_COM_NT_CANCEL
The NT_CANCEL command looks just like the original command, except for a
few small differences. The send_nt_cancel function however currently takes
a tcon, which we don't have in SendReceive and SendReceive2.

Instead of "respinning" the entire header for an NT_CANCEL, just mangle
the existing header by replacing just the fields we need. This means we
don't need a tcon and allows us to call it from other places.

Reviewed-by: Pavel Shilovsky <piastryyy@gmail.com>
Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-01-20 18:08:36 +00:00
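
Mangling the existing header means only a few fields change while the MID stays untouched, which is what lets the server match the cancel to the outstanding request; a sketch (helper names illustrative, length bookkeeping elided):

        /* Reuse the already-built request buffer as an NT_CANCEL. */
        static void mangle_into_nt_cancel(struct smb_hdr *in_buf)
        {
                in_buf->Command = SMB_COM_NT_CANCEL;
                in_buf->WordCount = 0;          /* header only, no parameters */
                put_bcc(0, in_buf);             /* ... and no data bytes      */
                /* in_buf->Mid is deliberately left alone so the server can
                 * tell which outstanding request is being cancelled */
        }
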
Jeff Layton
7749981ec3 cifs: remove code for setting timeouts on requests
Since we don't time out individual requests anymore, remove the code
that we used to use for setting timeouts on different requests.

Reviewed-by: Pavel Shilovsky <piastryyy@gmail.com>
Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-01-20 18:07:55 +00:00
Jeff Layton
766fdbb57f cifs: add ability to send an echo request
Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-01-20 17:46:44 +00:00
Jeff Layton
a6827c184e cifs: add cifs_call_async
Add a function that will send a request, and set up the mid for an
async reply.

Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-01-20 17:46:04 +00:00
Jeff Layton
2b84a36c55 cifs: allow for different handling of received response
In order to incorporate async requests, we need to allow for a more
general way to do things on receive, rather than just waking up a
process.

Turn the task pointer in the mid_q_entry into a callback function and a
generic data pointer. When a response comes in, or the socket is
reconnected, cifsd can call the callback function in order to wake up
the process.

The default is to just wake up the current process which should mean no
change in behavior for existing code.

Also, clean up the locking in cifs_reconnect. There doesn't seem to be
any need to hold both the srv_mutex and GlobalMid_Lock when walking the
list of mids.

Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-01-20 17:43:59 +00:00
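
The shape of that change, sketched with illustrative field and helper names: the mid gains a callback plus an opaque data pointer, and the default callback simply wakes the task that is waiting synchronously, so existing callers behave as before.

        #include <linux/list.h>
        #include <linux/sched.h>

        struct mid_q_entry;
        typedef void (mid_callback_t)(struct mid_q_entry *mid);

        struct mid_q_entry {
                struct list_head qhead;
                mid_callback_t *callback;       /* run on response/reconnect */
                void *callback_data;            /* e.g. the sleeping task    */
                /* ... remaining fields elided ... */
        };

        /* Default callback: preserve the old "wake up the waiter" behaviour. */
        static void wake_up_task(struct mid_q_entry *mid)
        {
                wake_up_process(mid->callback_data);
        }
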
Jeff Layton
74dd92a881 cifs: clean up sync_mid_result
Make it use a switch statement based on the value of midStatus. If
the resp_buf is set, then midStatus is MID_RESPONSE_RECEIVED as well.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-01-20 17:41:02 +00:00
Jeff Layton
dad255b182 cifs: don't reconnect server when we don't get a response
We only want to force a reconnect to the server under very limited and
specific circumstances. Now that we have processes waiting indefinitely
for responses, we shouldn't reach this point unless a reconnect is
already in process. Thus, there's no reason to re-mark the server for
reconnect here.

Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
Reviewed-by:  Pavel Shilovsky <piastryyy@gmail.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-01-20 17:08:50 +00:00
Jeff Layton
0ade640e9c cifs: wait indefinitely for responses
The client should not be timing out on individual SMB requests. Too much
of the state between client and server is tied to the state of the
socket. If we time out requests and issue spurious disconnects then that
compromises data integrity.

Instead of doing this complicated dance where we try to decide how long
to wait for a response for particular requests, have the client instead
wait indefinitely for a response. Also, use a TASK_KILLABLE sleep here
so that fatal signals will break out of this waiting.

Later patches will add support for detecting dead peers and forcing
reconnects based on that.

Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
Reviewed-by:  Pavel Shilovsky <piastryyy@gmail.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-01-20 17:07:49 +00:00
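
In code terms this is just a killable, untimed sleep on the response queue; roughly:

        /* Sketch: block until cifsd changes the mid state or a fatal signal
         * arrives.  Deliberately no timeout. */
        static int wait_for_response(struct TCP_Server_Info *server,
                                     struct mid_q_entry *mid)
        {
                int error;

                error = wait_event_killable(server->response_q,
                                mid->midState != MID_REQUEST_SUBMITTED);
                if (error < 0)
                        return -ERESTARTSYS;    /* fatal signal broke the wait */

                return 0;
        }
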
Jeff Layton
053d503445 cifs: move mid result processing into common function
Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
Reviewed-by: Pavel Shilovsky <piastryyy@gmail.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-01-19 17:52:45 +00:00
Jeff Layton
ddc8cf8fc7 cifs: move locked sections out of DeleteMidQEntry and AllocMidQEntry
In later patches, we're going to need to have finer-grained control
over the addition and removal of these structs from the pending_mid_q
and we'll need to be able to call the destructor while holding the
spinlock. Move the locked sections out of both routines and into
the callers. Fix up current callers of DeleteMidQEntry to call a new
routine that dequeues the entry and then destroys it.

Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
Reviewed-by: Pavel Shilovsky <piastryyy@gmail.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-01-19 17:52:42 +00:00
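
The resulting split, sketched: the lock is held only around the list manipulation, and the destructor runs outside it (helper name illustrative).

        /* Remove the mid from pending_mid_q, then free it outside the lock. */
        static void dequeue_and_delete_mid(struct mid_q_entry *mid)
        {
                spin_lock(&GlobalMid_Lock);
                list_del(&mid->qhead);
                spin_unlock(&GlobalMid_Lock);

                DeleteMidQEntry(mid);   /* may free buffers; no lock held */
        }
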
Jeff Layton
8097531a5c cifs: clean up accesses to midCount
It's an atomic_t and the code accesses the "counter" field in it directly
instead of using atomic_read(). It also is sometimes accessed under a
spinlock and sometimes not. Move it out of the spinlock since we don't need
belt-and-suspenders for something that's just informational.

Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
Reviewed-by: Pavel Shilovsky <piastryyy@gmail.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-01-19 17:52:38 +00:00
Jeff Layton
c5797a945c cifs: make wait_for_free_request take a TCP_Server_Info pointer
The cifsSesInfo pointer is only used to get at the server.

Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
Reviewed-by:  Pavel Shilovsky <piastryyy@gmail.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-01-19 17:52:30 +00:00
Pavel Shilovsky
a9f1b85e5b CIFS: Simplify ipv*_connect functions into one (try #4)
Make connect logic more ip-protocol independent and move RFC1001 stuff into
a separate function. Also replace union addr in TCP_Server_Info structure
with sockaddr_storage.

Signed-off-by: Pavel Shilovsky <piastryyy@gmail.com>
Reviewed-and-Tested-by: Jeff Layton <jlayton@samba.org>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2011-01-06 19:07:53 +00:00
Shirish Pargaonkar
21e733930b NTLM auth and sign - Allocate session key/client response dynamically
Start calculating the auth response within a session.  Move/add pertinent
data structures like session key, server challenge and ntlmv2_hash into
a session structure.  We should do the calculations within a session
before copying the session key and response over to server data
structures because a session setup can fail.

Only after the very first smb session succeeds is its session key
copied to become the session key of the smb connection.  This key stays
with the smb connection throughout its life.
sequence_number within the server is set to 0x2.

The authentication Message Authentication Key (mak), which consists
of the session key followed by the client response within structure
session_key, is now dynamic.  Every authentication type allocates
key + response sized memory within its session structure and later
either assigns or frees it once the client response is sent and if the
session's session key becomes the connection's session key.

ntlm/ntlmi authentication functions are rearranged.  A function
named setup_ntlm_resp(), similar to setup_ntlmv2_resp(), replaces
function cifs_calculate_session_key().

The value of CIFS_SESS_KEY_SIZE is changed to 16 to reflect the byte size
of the key it holds.

Reviewed-by: Jeff Layton <jlayton@samba.org>
Signed-off-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2010-10-26 18:20:10 +00:00
Shirish Pargaonkar
5f98ca9afb cifs NTLMv2/NTLMSSP Change variable name mac_key to session key to reflect the key it holds
Change the name of the variable mac_key to session_key.
The reason mac_key was renamed to session_key is that this structure does
not hold a message authentication code; it holds the session key (for
ntlmv2, ntlmv1, etc.).  The mac is generated as a signature in the
cifs_calc* functions.

Signed-off-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2010-09-29 19:04:29 +00:00
Steve French
c8e56f1f4f Revert "[CIFS] Fix ntlmv2 auth with ntlmssp"
This reverts commit 9fbc590860.

The change to kernel crypto and the fixes to the ntlmv2 and ntlmssp
series introduced a regression.  Deferring this patch series
to 2.6.37 until Shirish fixes it.

Signed-off-by: Steve French <sfrench@us.ibm.com>
Acked-by: Jeff Layton <jlayton@redhat.com>
CC: Shirish Pargaonkar <shirishp@us.ibm.com>
2010-09-08 21:10:58 +00:00
Steve French
9fbc590860 [CIFS] Fix ntlmv2 auth with ntlmssp
Use ntlmv2 as the authentication mechanism within ntlmssp
instead of ntlmv1.
Parse the type 2 response in the ntlmssp negotiation to pluck the
AV pairs and use them to calculate the ntlmv2 response token.
Also, take the domain name from the server's type 2 ntlmssp response
and use that (netbios) domain name in the calculation of the
response.

Enable cifs/smb signing using rc4 and md5.

Changed name of the structure mac_key to session_key to reflect
the type of key it holds.

Use kernel crypto_shash_* APIs instead of the equivalent cifs functions.

Signed-off-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2010-08-20 20:42:26 +00:00
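
Using the kernel crypto_shash_* API for the HMAC-MD5 pieces looks roughly like the following generic sketch (not the actual cifs routine):

        #include <crypto/hash.h>
        #include <linux/err.h>
        #include <linux/slab.h>

        /* Sketch: HMAC-MD5 over one buffer using crypto_shash_*. */
        static int hmac_md5_digest(const u8 *key, unsigned int keylen,
                                   const u8 *data, unsigned int len, u8 *out)
        {
                struct crypto_shash *tfm;
                struct shash_desc *desc;
                int rc;

                tfm = crypto_alloc_shash("hmac(md5)", 0, 0);
                if (IS_ERR(tfm))
                        return PTR_ERR(tfm);

                rc = crypto_shash_setkey(tfm, key, keylen);
                if (rc)
                        goto out_free_tfm;

                desc = kzalloc(sizeof(*desc) + crypto_shash_descsize(tfm),
                               GFP_KERNEL);
                if (!desc) {
                        rc = -ENOMEM;
                        goto out_free_tfm;
                }
                desc->tfm = tfm;

                /* init + update + final in one call */
                rc = crypto_shash_digest(desc, data, len, out);

                kfree(desc);
        out_free_tfm:
                crypto_free_shash(tfm);
                return rc;
        }
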
Steve French
bdfae149c5 [CIFS] Remove unused cifs_oplock_cachep
CC: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2010-05-06 00:38:16 +00:00
Joe Perches
b6b38f704a [CIFS] Neaten cERROR and cFYI macros, reduce text space
Neaten the cERROR and cFYI macros and reduce text space by ~2.5K.

Convert '__FILE__ ": " fmt' to '"%s: " fmt, __FILE__' to save text space
Surround macros with do {} while (0)
Add parentheses to macros
Turn the macro containing an assignment into a statement-expression macro
Remove now-unnecessary parentheses from cFYI and cERROR uses

defconfig with CIFS support old
$ size fs/cifs/built-in.o
   text	   data	    bss	    dec	    hex	filename
 156012	   1760	    148	 157920	  268e0	fs/cifs/built-in.o

defconfig with CIFS support new
$ size fs/cifs/built-in.o
   text	   data	    bss	    dec	    hex	filename
 153508	   1760	    148	 155416	  25f18	fs/cifs/built-in.o

allyesconfig old:
$ size fs/cifs/built-in.o
   text	   data	    bss	    dec	    hex	filename
 309138	   3864	  74824	 387826	  5eaf2	fs/cifs/built-in.o

allyesconfig new
$ size fs/cifs/built-in.o
   text	   data	    bss	    dec	    hex	filename
 305655	   3864	  74824	 384343	  5dd57	fs/cifs/built-in.o

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2010-04-21 03:50:45 +00:00
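
A simplified sketch of the neatened form (the real macros also depend on the CIFS debug configuration):

        #include <linux/kernel.h>

        /* Simplified sketch of the do { } while (0) form */
        #define cFYI(set, fmt, ...)                                        \
        do {                                                               \
                if (set)                                                   \
                        printk(KERN_DEBUG "%s: " fmt "\n",                 \
                               __FILE__, ##__VA_ARGS__);                   \
        } while (0)

        #define cERROR(set, fmt, ...)                                      \
        do {                                                               \
                if (set)                                                   \
                        printk(KERN_ERR "%s: " fmt "\n",                   \
                               __FILE__, ##__VA_ARGS__);                   \
        } while (0)
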
Tejun Heo
5a0e3ad6af include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files.  percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.

percpu.h -> slab.h dependency is about to be removed.  Prepare for
this change by updating users of gfp and slab facilities to include those
headers directly instead of assuming availability.  As this conversion
needs to touch a large number of source files, the following script is
used as the basis of conversion.

  http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following.

* Scan files for gfp and slab usages and update includes such that
  only the necessary includes are there.  ie. if only gfp is used,
  gfp.h, if slab is used, slab.h.

* When the script inserts a new include, it looks at the include
  blocks and tries to put the new include such that its order conforms
  to its surroundings.  It's put in the include block which contains
  core kernel includes, in the same order that the rest are ordered -
  alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
  doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly
  because the file doesn't have fitting include block), it prints out
  an error message indicating which .h file needs to be added to the
  file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly
   over 4000 files, deleting around 700 includes and adding ~480 gfp.h
   and ~3000 slab.h inclusions.  The script emitted errors for ~400
   files.

2. Each error was manually checked.  Some didn't need the inclusion,
   some needed manual addition while adding it to implementation .h or
   embedding .c file was more appropriate for others.  This step added
   inclusions to around 150 files.

3. The script was run again and the output was compared to the edits
   from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed.
   e.g. lib/decompress_*.c used malloc/free() wrappers around slab
   APIs requiring slab.h to be added manually.

5. The script was run on all .h files but without automatically
   editing them as sprinkling gfp.h and slab.h inclusions around .h
   files could easily lead to inclusion dependency hell.  Most gfp.h
   inclusion directives were ignored as stuff from gfp.h was usually
   widely available and often used in preprocessor macros.  Each
   slab.h inclusion directive was examined and added manually as
   necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures
   were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
   distributed build env didn't work with gcov compiles) and a few
   more options had to be turned off depending on archs to make things
   build (like ipr on powerpc/64 which failed due to missing writeq).

   * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as
   a separate patch and serve as bisection point.

Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
   headers which should be easily discoverable on most builds of
the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
2010-03-30 22:02:32 +09:00
Jeff Layton
3bc303c254 cifs: convert oplock breaks to use slow_work facility (try #4)
This is the fourth respin of the patch to convert oplock breaks to
use the slow_work facility.

A customer of ours was testing a backport of one of the earlier
patchsets, and hit a "Busy inodes after umount..." problem. An oplock
break job had raced with a umount, and the superblock got torn down and
its memory reused. When the oplock break job tried to dereference the
inode->i_sb, the kernel oopsed.

This patchset has the oplock break job hold an inode and vfsmount
reference until the oplock break completes.  With this, there should be
no need to take a tcon reference (the vfsmount implicitly holds one
already).

Currently, when an oplock break comes in there's a chance that the
oplock break job won't occur if the allocation of the oplock_q_entry
fails. There are also some rather nasty races in the allocation and
handling of these structs.

Rather than allocating oplock queue entries when an oplock break comes
in, add a few extra fields to the cifsFileInfo struct. Get rid of the
dedicated cifs_oplock_thread as well and queue the oplock break job to
the slow_work thread pool.

This approach also has the advantage that the oplock break jobs can
potentially run in parallel rather than be serialized like they are
today.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2009-09-24 18:33:18 +00:00
Jeff Layton
1b49c55661 cifs: protect GlobalOplock_Q with its own spinlock
Right now, the GlobalOplock_Q is protected by the GlobalMid_Lock. That
lock is also used for completely unrelated purposes (mostly for managing
the global mid queue). Give the list its own dedicated spinlock
(cifs_oplock_lock) and rename the list to cifs_oplock_list to
eliminate the camel-case.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2009-09-01 22:25:29 +00:00
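
Giving the list its own lock is just a matter of pairing a dedicated spinlock with the renamed list; a sketch (helper and field names illustrative):

        #include <linux/list.h>
        #include <linux/spinlock.h>

        /* the renamed list, no longer protected by GlobalMid_Lock */
        static LIST_HEAD(cifs_oplock_list);
        static DEFINE_SPINLOCK(cifs_oplock_lock);

        static void queue_oplock_break(struct oplock_q_entry *entry)
        {
                spin_lock(&cifs_oplock_lock);
                list_add_tail(&entry->qhead, &cifs_oplock_list);
                spin_unlock(&cifs_oplock_lock);
        }
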
Steve French
da505c386c [CIFS] Make socket retry timeouts consistent between blocking and nonblocking cases
We have used approximately 15 second timeouts on nonblocking sends in
the past, and also a 15 second SMB timeout (waiting for server
responses, for most request types).  Now that we can do blocking tcp
sends, make the blocking send timeout approximately the same (15 seconds).

Signed-off-by: Steve French <sfrench@us.ibm.com>
2009-01-29 03:32:13 +00:00
Jeff Layton
0496e02d87 cifs: turn smb_send into a wrapper around smb_sendv

Rename smb_send2 to smb_sendv to make it consistent with kernel naming
conventions for functions that take a vector.

There's no need to have 2 functions to handle sending SMB calls. Turn
smb_send into a wrapper around smb_sendv. This also allows us to
properly mark the socket as needing to be reconnected when there's a
partial send from smb_send.

Also, in practice we always use the address and noblocksnd flag
that's attached to the TCP_Server_Info. There's no need to pass
them in as separate args to smb_sendv.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Acked-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
2009-01-29 03:32:12 +00:00
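
The wrapper this describes is essentially a one-element-kvec shim; roughly:

        #include <linux/uio.h>

        int smb_sendv(struct TCP_Server_Info *server, struct kvec *iov, int n_vec);

        /* Flat-buffer send: wrap the buffer and defer to the vectorized path. */
        int smb_send(struct TCP_Server_Info *server, struct smb_hdr *smb_buffer,
                     unsigned int smb_buf_length)
        {
                struct kvec iov;

                iov.iov_base = smb_buffer;
                iov.iov_len = smb_buf_length + 4;  /* include RFC1001 header */

                return smb_sendv(server, &iov, 1);
        }
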