This simplifies the process of timing out messages. We
keep an LRU list of the messages that are currently in flight. If a
timeout has passed, we reset the osd connection so that the
messages will be retransmitted. This is a failsafe in case
we hit some sort of problem sending messages out to the OSD.
Normally, we'll get notification via an updated osdmap if
there are problems.
If a request is older than the keepalive timeout, send a
keepalive to ensure we detect any breaks in the TCP connection.
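A rough sketch of the idea (the request struct, struct osd_connection,
reset_osd_connection() and send_keepalive() are hypothetical stand-ins, not
the actual osd_client code): the periodic tick only ever needs to look at
the oldest entry on the LRU.

    struct req {
            struct list_head lru_item;      /* oldest-first LRU of in-flight requests */
            unsigned long sent_stamp;       /* jiffies when the request was sent */
    };

    static void timeout_tick(struct list_head *lru, struct osd_connection *con,
                             unsigned long timeout, unsigned long keepalive)
    {
            struct req *oldest;

            if (list_empty(lru))
                    return;
            oldest = list_first_entry(lru, struct req, lru_item);
            if (time_after(jiffies, oldest->sent_stamp + timeout))
                    reset_osd_connection(con);      /* in-flight messages get resent */
            else if (time_after(jiffies, oldest->sent_stamp + keepalive))
                    send_keepalive(con);            /* detect a dead TCP connection */
    }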
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
flush_dirty_caps() used to loop over the first entry of the cap_dirty
list on the assumption that after calling ceph_check_caps() it would
be removed from the list. This isn't true for caps that are being
migrated between MDSs, where we've received the EXPORT but not the IMPORT.
Instead, do a safe list iteration, and pin the next inode on the list via
the CEPH_I_NOFLUSH flag.
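The iteration then looks roughly like this (a simplified sketch; the
cap_dirty locking and i_lock handling are elided):

    list_for_each_entry_safe(ci, next, &mdsc->cap_dirty, i_dirty_item) {
            if (&next->i_dirty_item != &mdsc->cap_dirty)
                    next->i_ceph_flags |= CEPH_I_NOFLUSH;   /* pin the next inode */
            ceph_check_caps(ci, CHECK_CAPS_FLUSH, NULL);    /* may drop locks */
            if (&next->i_dirty_item != &mdsc->cap_dirty)
                    next->i_ceph_flags &= ~CEPH_I_NOFLUSH;  /* unpin */
    }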
Signed-off-by: Sage Weil <sage@newdream.net>
We should include caps that are mid-migration (we've received the EXPORT,
but not the IMPORT) in the issued caps set.
Signed-off-by: Sage Weil <sage@newdream.net>
Verify the file is actually open for the given caps when we are
waiting for caps. This ensures we will wake up and return EBADF
if another thread closes the file out from under us.
Note that EBADF is also the correct return code from write(2)
when called on a file handle opened for reading (although the
vfs should catch that).
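The added check amounts to something like this in the caps wait path (a
sketch; the exact helper and locking are approximate):

    /* no open file wants the caps we need => the fd was closed under us */
    if ((__ceph_caps_file_wanted(ci) & need) == 0)
            return -EBADF;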
Signed-off-by: Sage Weil <sage@newdream.net>
We didn't set the front length correctly. When messages used
the message pool we ended up with the conservative max (4 KB), and
the rest of the time with a slightly less conservative estimate. Even
though the OSD ignores the extra data, set it to the right value to avoid
sending extra data over the network.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
Reset msg front len when a message is returned to the pool: the caller
may have changed it.
BUG if we try to send a message with a hdr.front_len that doesn't match
the front iov.
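Concretely, something like the following (a sketch; front_max is an
approximate name for the message's allocated front size):

    /* when a message goes back to the msgpool: */
    msg->front.iov_len = msg->front_max;
    msg->hdr.front_len = cpu_to_le32(msg->front_max);

    /* in the send path, catch any mismatch early: */
    BUG_ON(le32_to_cpu(msg->hdr.front_len) != msg->front.iov_len);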
Signed-off-by: Sage Weil <sage@newdream.net>
This was simply broken. Apparently at some point we thought about putting
the snaptrace in the middle section, but didn't.
Signed-off-by: Sage Weil <sage@newdream.net>
Clear LOSSYTX bit, so that if/when we reconnect, said reconnect
will retry on failure.
Clear _PENDING bits too, to avoid polluting subsequent
connection state.
Drop unused REGISTERED bit.
Signed-off-by: Sage Weil <sage@newdream.net>
Move any out_sent messages to out_queue _before_ checking if
out_queue is empty and going to STANDBY, or else we may drop
something that was never acked.
And clean up the code a bit (less goto).
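In other words, the ordering is essentially (sketch; the state bit name is
approximate):

    /* requeue anything that was sent but never acked... */
    list_splice_init(&con->out_sent, &con->out_queue);

    /* ...and only then decide whether there is nothing left to do */
    if (list_empty(&con->out_queue))
            set_bit(STANDBY, &con->state);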
Signed-off-by: Sage Weil <sage@newdream.net>
This fixes an ABBA lock inversion, as the ->invalidate_authorizer()
op may need to take a lock (or even call back into the
messenger).
Signed-off-by: Sage Weil <sage@newdream.net>
The tid is in the message header, not body. Broken since 6df058c0.
No need to look at next mds session; just mark the request and be done.
(The old error path was broken too, but now it's gone.)
Signed-off-by: Sage Weil <sage@newdream.net>
Verify the mds session is currently registered before handling
incoming messages. Clean up message handlers to pull mds out
of session->s_mds instead of the less trustworthy src field.
Clean up con_{get,put} debug output.
Signed-off-by: Sage Weil <sage@newdream.net>
The destroy_inode path needs no inode locks since there are no
inode references. Update __ceph_remove_cap comment to reflect
that it is called without cap->session->s_mutex in this case.
Signed-off-by: Sage Weil <sage@newdream.net>
There is no state in local vars that requires us to loop after temporarily
dropping i_lock.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
Instead of truncating the whole range of pages, we skip those
pages that are dirty or in the middle of writeback. Those pages
will be cleared later when the writeback completes.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
This page should have been removed earlier when the cache cap was
revoked, but a writeback was in flight, so it was skipped. We truncate
it here just as the writeback finishes, while it's still locked.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
We need to know whether any pages were left behind, not the return value
(the total number of pages invalidated). Look at the mapping to see
whether we were successful or not.
Move it all into a helper to simplify the two callers.
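A minimal sketch of such a helper (hypothetical name), judging success by
the mapping rather than by the return value:

    static int try_invalidate_mapping(struct inode *inode)
    {
            invalidate_mapping_pages(&inode->i_data, 0, -1);
            if (inode->i_data.nrpages == 0)
                    return 0;       /* everything was dropped */
            return -1;              /* some pages were left behind */
    }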
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
Since we can now create and destroy pg pools, the pool ids will be sparse,
and an array no longer makes sense for looking up by pool id. Use an
rbtree instead.
The OSDMap encoding also no longer has a max pool count (previously used to
allocate the array). There is a new pool_max, which is the largest pool id
we've ever used, although we don't actually need it in the client.
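Lookup by pool id then becomes a standard rbtree walk, roughly like this
(struct and field names are approximate):

    struct pg_pool_info {
            struct rb_node node;
            int id;
            /* ... pool data ... */
    };

    static struct pg_pool_info *lookup_pool(struct rb_root *root, int id)
    {
            struct rb_node *n = root->rb_node;

            while (n) {
                    struct pg_pool_info *p =
                            rb_entry(n, struct pg_pool_info, node);

                    if (id < p->id)
                            n = n->rb_left;
                    else if (id > p->id)
                            n = n->rb_right;
                    else
                            return p;
            }
            return NULL;
    }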
Signed-off-by: Sage Weil <sage@newdream.net>
We need to be able to iterate over all caps on a session with a
possibly slow callback on each cap. To allow this, we used to
prevent cap reordering while we were iterating. However, we were
not safe from races with removal: removing the 'next' cap would
invalidate the next pointer from list_for_each_entry_safe and cause
a lockup or similar badness.
Instead, we keep an iterator pointer in the session pointing to
the current cap. As before, we avoid reordering. For removal,
if the cap isn't the current cap we are iterating over, we are
fine. If it is, we clear cap->ci (to mark the cap as pending
removal) but leave it in the session list. In iterate_caps, we
can safely finish removal and get the next cap pointer.
While we're at it, clean up put_cap to not take a cap reservation
context, as it was never used.
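On the removal side the logic is roughly (a sketch; session locking and
field names are approximate):

    if (session->s_cap_iterator == cap) {
            /* the iterator is parked on this cap: defer the list removal
             * and let the cap iteration finish it */
            cap->ci = NULL;
    } else {
            list_del_init(&cap->session_caps);
            session->s_nr_caps--;
    }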
Signed-off-by: Sage Weil <sage@newdream.net>
Use a global counter for the minimum number of allocated caps instead of
hard coding a check against readdir_max. This takes into account multiple
client instances, and avoids examining the superblock mount options when a
cap is dropped.
Signed-off-by: Sage Weil <sage@newdream.net>
Call __validate_auth() under monc->mutex, and use helper for
initial hello so that the pending_auth flag is set. This fixes
possible races in which we have an authentication request (hello
or otherwise) pending and send another one. In particular, with
auth_none, we _never_ want to call ceph_build_auth() from
__validate_auth(), since the ->build_request() method is NULL.
Signed-off-by: Sage Weil <sage@newdream.net>
An rbtree is lighter weight, particularly given we will generally have
very few in-flight statfs requests.
Signed-off-by: Sage Weil <sage@newdream.net>
Switch from radix tree to rbtree for snap realms. This is much more
appropriate given that realm keys are few and far between.
Signed-off-by: Sage Weil <sage@newdream.net>
The rbtree is a more appropriate data structure than a radix_tree. It
avoids extra memory usage and simplifies the code.
It also fixes a bug where the debugfs 'mdsc' file wasn't including the
most recent mds request.
Signed-off-by: Sage Weil <sage@newdream.net>
This ensures that if/when we reopen the connection, we can requeue work on
the connection immediately, without waiting for an old timer to expire.
Queue new delayed work inside con->mutex to avoid any race.
This fixes problems with clients failing to reconnect to the MDS due to
the client_reconnect message arriving too late (due to waiting for an old
delayed work timeout to expire).
Signed-off-by: Sage Weil <sage@newdream.net>
Fix the messenger to allow a ceph_con_open() during the fault callback.
Previously the work wasn't getting queued on the connection because the
fault path avoids requeueing work (normally spurious). Loop on reopening by
checking for the OPENING state bit.
This fixes OSD reconnects when a TCP connection drops.
Signed-off-by: Sage Weil <sage@newdream.net>
A single osd connection fault (e.g. tcp disconnect) wasn't
reopening the connection, which causes all current and future
requests for that osd to hang.
Signed-off-by: Sage Weil <sage@newdream.net>
The test was backwards from commit b3d1dbbd: keep the message if the
connection _isn't_ lossy. This allows the client to continue when the
TCP connection drops for some reason (network glitch) but both ends
survive.
Signed-off-by: Sage Weil <sage@newdream.net>
We were invalidating mapping pages when dropping FILE_CACHE in
__send_cap(). But ceph_check_caps attempts to invalidate already, and
also checks for success, so we should never get to this point.
Signed-off-by: Sage Weil <sage@newdream.net>
If a sync read gets a short result from the OSD, it may need to do a
getattr to see if it is short due to reaching end-of-file. The getattr
was being done while holding a reference to FILE_RD, which can lead to
a deadlock if the MDS is revoking that capability bit and can't process
the getattr until it does.
We fix this by setting a flag if EOF size validation is needed, and doing
the getattr in ceph_aio_read, after the RD cap ref is dropped. If the
read needs to be continued, we loop and continue traversing the file.
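The check after the cap reference has been dropped looks roughly like this
(a sketch; the surrounding read loop, helper signatures, and the cap
re-acquisition on retry are omitted or approximate):

    /* the sync read came back short and the FILE_RD ref is already dropped */
    if (checkeof && ret >= 0) {
            int statret = ceph_do_getattr(inode, CEPH_STAT_CAP_SIZE);

            /* if the file extends past where the read stopped, it was not
             * a real EOF: loop, re-take cap refs, and keep reading */
            if (statret == 0 && pos < i_size_read(inode))
                    retry = true;
    }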
Signed-off-by: Sage Weil <sage@newdream.net>
Try to invalidate pages in ceph_check_caps() if FILE_CACHE is being
revoked. If we fail, queue an immediate async invalidate if FILE_CACHE
is being revoked. (If it's not being revoked, we just queue the caps
for evaluation later, as per the old behavior.)
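The shape of the change in ceph_check_caps() is roughly (a sketch; the
helper names are hypothetical):

    if (revoking & CEPH_CAP_FILE_CACHE) {
            if (try_invalidate(inode) == 0) {
                    /* pages are gone; FILE_CACHE can be released now */
            } else {
                    /* couldn't do it synchronously (pages locked, etc.):
                     * kick an async invalidate and re-check caps after */
                    queue_async_invalidate(inode);
            }
    }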
Signed-off-by: Sage Weil <sage@newdream.net>
In the cases where we do either a sync read or a sync write, we
need to make sure that everything in the page cache is flushed.
In the case of a sync write we invalidate the relevant pages,
so that subsequent reads and writes reflect the newly written data.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
A truncation should occur when either we have the
specified caps for the file, or (in cases where we are
not the only ones referencing the file) when it is mapped
or when it is opened. The latter two cases were not
handled.
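The check therefore needs to cover all three cases, something like this
(a sketch; the cap and open-count helpers are hypothetical):

    if (holds_truncate_caps(ci) ||              /* we have the specified caps */
        mapping_mapped(inode->i_mapping) ||     /* the file is mmapped */
        file_is_open(ci))                       /* the file is open locally */
            queue_vmtruncate(inode);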
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
Originally ceph_page_mkwrite called ceph_write_begin, hoping that
the returned locked page would be the page that it was requested
to mkwrite. The relevant part has been factored out, and
ceph_page_mkwrite now locks the right page itself.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>
Zeroing of holes was not done correctly: page_off was miscalculated, and
zeroing the tail did not adjust the 'read' value to include the zeroed
portion.
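The fix amounts to roughly this when padding out the tail of a short read
(a sketch; the surrounding variables come from the read loop and the names
are approximate):

    if (read < len) {
            /* page_off is the offset within this page, not the file offset */
            size_t zlen = min_t(size_t, len - read, PAGE_CACHE_SIZE - page_off);

            zero_user_segment(page, page_off, page_off + zlen);
            read += zlen;           /* count the zeroed hole toward 'read' */
    }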
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Sage Weil <sage@newdream.net>