vfs-6.8.netfs

-----BEGIN PGP SIGNATURE-----
 
 iHUEABYKAB0WIQRAhzRXHqcMeLMyaSiRxhvAZXjcogUCZabMrQAKCRCRxhvAZXjc
 ovnUAQDgCOonb1tjtTvC8s8IMDUEoaVYZI91KVfsZQSJYN1sdQD+KfJmX1BhJnWG
 l0cEffGfnWGXMZkZqDgLPHUIPzFrmws=
 =1b3j
 -----END PGP SIGNATURE-----

Merge tag 'vfs-6.8.netfs' of gitolite.kernel.org:pub/scm/linux/kernel/git/vfs/vfs

Pull netfs updates from Christian Brauner:
 "This extends the netfs helper library that network filesystems can use
  to replace their own implementations. Both afs and 9p are ported. cifs
  is ready as well but the patches are way bigger and will be routed
  separately once this is merged. That will remove lots of code as well.

  The overall goal is to get high-level I/O and knowledge of the page
  cache out of the filesystem drivers. This includes knowledge about
  the existence of pages and folios.

  The pull request converts afs and 9p. This removes about 800 lines of
  code from afs and 300 from 9p. For 9p it is now possible to do writes
  in chunks larger than a page. Additionally, multipage folio support
  can be turned on for 9p. Separate patches exist for cifs removing
  another 2000+ lines. I've included detailed information in the
  individual pulls I took.

  Summary:

   - Add NFS-style (and Ceph-style) locking around DIO vs buffered I/O
     calls to prevent these from happening at the same time.

   - Support for direct and unbuffered I/O (see the sketch after this
     summary).

   - Support for write-through caching in the page cache.

   - O_*SYNC and RWF_*SYNC writes use write-through rather than writing
     to the page cache and then flushing afterwards.

   - Support for write-streaming.

   - Support for write grouping.

   - Skip reads for which the server could only return zeros or EOF.

   - The fscache module is now part of the netfs library and the
     corresponding maintainer entry is updated.

   - Some helpers from the fscache subsystem are renamed to mark them as
     belonging to the netfs library.

   - Follow-up fixes for the netfs library.

   - Follow-up fixes for the 9p conversion"
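
As a rough illustration of the direct/unbuffered I/O and write-through items
above, this is the shape a converted filesystem's ->read_iter and ->write_iter
take after this series, modeled on the 9p and afs changes further down in this
diff. The myfs_* names and the IOCB_DIRECT test are placeholders for
illustration; the netfs_* helpers are the ones this series adds or exports.

#include <linux/fs.h>
#include <linux/netfs.h>

/* Sketch only: route direct I/O to the unbuffered helpers and everything
 * else to the buffered helpers instead of open-coding page-cache I/O.
 */
ssize_t myfs_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
{
        if (iocb->ki_flags & IOCB_DIRECT)
                return netfs_unbuffered_read_iter(iocb, to);
        return netfs_file_read_iter(iocb, to);
}

ssize_t myfs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
        if (iocb->ki_flags & IOCB_DIRECT)
                return netfs_unbuffered_write_iter(iocb, from);
        /* Buffered path; per the summary above, O_*SYNC/RWF_*SYNC writes
         * are handled write-through inside the helper rather than being
         * written back and flushed afterwards.
         */
        return netfs_file_write_iter(iocb, from);
}

9p makes the same decision on its own P9L_DIRECT/P9L_NOWRITECACHE fid flags
rather than on IOCB_DIRECT (see the fs/9p file hunks below), and afs points
.write_iter straight at netfs_file_write_iter.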

* tag 'vfs-6.8.netfs' of gitolite.kernel.org:pub/scm/linux/kernel/git/vfs/vfs: (50 commits)
  netfs: Fix wrong #ifdef hiding wait
  cachefiles: Fix signed/unsigned mixup
  netfs: Fix the loop that unmarks folios after writing to the cache
  netfs: Fix interaction between write-streaming and cachefiles culling
  netfs: Count DIO writes
  netfs: Mark netfs_unbuffered_write_iter_locked() static
  netfs: Fix proc/fs/fscache symlink to point to "netfs" not "../netfs"
  netfs: Rearrange netfs_io_subrequest to put request pointer first
  9p: Use length of data written to the server in preference to error
  9p: Do a couple of cleanups
  9p: Fix initialisation of netfs_inode for 9p
  cachefiles: Fix __cachefiles_prepare_write()
  9p: Use netfslib read/write_iter
  afs: Use the netfs write helpers
  netfs: Export the netfs_sreq tracepoint
  netfs: Optimise away reads above the point at which there can be no data
  netfs: Implement a write-through caching option
  netfs: Provide a launder_folio implementation
  netfs: Provide a writepages implementation
  netfs, cachefiles: Pass upper bound length to allow expansion
  ...
commit 16df6e07d6
Linus Torvalds 2024-01-19 09:10:23 -08:00
74 changed files with 4164 additions and 2186 deletions


@ -295,7 +295,6 @@ through which it can issue requests and negotiate::
struct netfs_request_ops { struct netfs_request_ops {
void (*init_request)(struct netfs_io_request *rreq, struct file *file); void (*init_request)(struct netfs_io_request *rreq, struct file *file);
void (*free_request)(struct netfs_io_request *rreq); void (*free_request)(struct netfs_io_request *rreq);
int (*begin_cache_operation)(struct netfs_io_request *rreq);
void (*expand_readahead)(struct netfs_io_request *rreq); void (*expand_readahead)(struct netfs_io_request *rreq);
bool (*clamp_length)(struct netfs_io_subrequest *subreq); bool (*clamp_length)(struct netfs_io_subrequest *subreq);
void (*issue_read)(struct netfs_io_subrequest *subreq); void (*issue_read)(struct netfs_io_subrequest *subreq);
@ -317,20 +316,6 @@ The operations are as follows:
[Optional] This is called as the request is being deallocated so that the [Optional] This is called as the request is being deallocated so that the
filesystem can clean up any state it has attached there. filesystem can clean up any state it has attached there.
* ``begin_cache_operation()``
[Optional] This is called to ask the network filesystem to call into the
cache (if present) to initialise the caching state for this read. The netfs
library module cannot access the cache directly, so the cache should call
something like fscache_begin_read_operation() to do this.
The cache gets to store its state in ->cache_resources and must set a table
of operations of its own there (though of a different type).
This should return 0 on success and an error code otherwise. If an error is
reported, the operation may proceed anyway, just without local caching (only
out of memory and interruption errors cause failure here).
* ``expand_readahead()`` * ``expand_readahead()``
[Optional] This is called to allow the filesystem to expand the size of a [Optional] This is called to allow the filesystem to expand the size of a
@ -460,14 +445,14 @@ When implementing a local cache to be used by the read helpers, two things are
required: some way for the network filesystem to initialise the caching for a required: some way for the network filesystem to initialise the caching for a
read request and a table of operations for the helpers to call. read request and a table of operations for the helpers to call.
The network filesystem's ->begin_cache_operation() method is called to set up a To begin a cache operation on an fscache object, the following function is
cache and this must call into the cache to do the work. If using fscache, for called::
example, the cache would call::
int fscache_begin_read_operation(struct netfs_io_request *rreq, int fscache_begin_read_operation(struct netfs_io_request *rreq,
struct fscache_cookie *cookie); struct fscache_cookie *cookie);
passing in the request pointer and the cookie corresponding to the file. passing in the request pointer and the cookie corresponding to the file. This
fills in the cache resources mentioned below.
The netfs_io_request object contains a place for the cache to hang its The netfs_io_request object contains a place for the cache to hang its
state:: state::
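
The practical effect of the documentation change above is that the
filesystem's ops table no longer carries ->begin_cache_operation(): with
fscache folded into the netfs module, the netfs core calls
fscache_begin_read_operation() itself. A minimal sketch of a post-conversion
table, modeled on the v9fs_req_ops and afs_req_ops tables further down in
this diff (the myfs_* hooks stand in for the filesystem's own
implementations):

#include <linux/netfs.h>

/* Filesystem-side hooks; the names are placeholders for illustration. */
int myfs_init_request(struct netfs_io_request *rreq, struct file *file);
void myfs_free_request(struct netfs_io_request *rreq);
void myfs_issue_read(struct netfs_io_subrequest *subreq);
void myfs_create_write_requests(struct netfs_io_request *wreq,
                                loff_t start, size_t len);

/* Note there is no .begin_cache_operation any more: the netfs core talks
 * to fscache directly when a cookie is attached to the netfs_inode.
 */
const struct netfs_request_ops myfs_req_ops = {
        .init_request           = myfs_init_request,
        .free_request           = myfs_free_request,
        .issue_read             = myfs_issue_read,
        .create_write_requests  = myfs_create_write_requests,
};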


@ -8214,6 +8214,19 @@ S: Supported
F: fs/iomap/ F: fs/iomap/
F: include/linux/iomap.h F: include/linux/iomap.h
FILESYSTEMS [NETFS LIBRARY]
M: David Howells <dhowells@redhat.com>
L: linux-cachefs@redhat.com (moderated for non-subscribers)
L: linux-fsdevel@vger.kernel.org
S: Supported
F: Documentation/filesystems/caching/
F: Documentation/filesystems/netfs_library.rst
F: fs/netfs/
F: include/linux/fscache*.h
F: include/linux/netfs.h
F: include/trace/events/fscache.h
F: include/trace/events/netfs.h
FILESYSTEMS [STACKABLE] FILESYSTEMS [STACKABLE]
M: Miklos Szeredi <miklos@szeredi.hu> M: Miklos Szeredi <miklos@szeredi.hu>
M: Amir Goldstein <amir73il@gmail.com> M: Amir Goldstein <amir73il@gmail.com>
@ -8659,14 +8672,6 @@ F: Documentation/power/freezing-of-tasks.rst
F: include/linux/freezer.h F: include/linux/freezer.h
F: kernel/freezer.c F: kernel/freezer.c
FS-CACHE: LOCAL CACHING FOR NETWORK FILESYSTEMS
M: David Howells <dhowells@redhat.com>
L: linux-cachefs@redhat.com (moderated for non-subscribers)
S: Supported
F: Documentation/filesystems/caching/
F: fs/fscache/
F: include/linux/fscache*.h
FSCRYPT: FILE SYSTEM LEVEL ENCRYPTION SUPPORT FSCRYPT: FILE SYSTEM LEVEL ENCRYPTION SUPPORT
M: Eric Biggers <ebiggers@kernel.org> M: Eric Biggers <ebiggers@kernel.org>
M: Theodore Y. Ts'o <tytso@mit.edu> M: Theodore Y. Ts'o <tytso@mit.edu>


@ -138,7 +138,8 @@ CONFIG_PWM_MXS=y
CONFIG_NVMEM_MXS_OCOTP=y CONFIG_NVMEM_MXS_OCOTP=y
CONFIG_EXT4_FS=y CONFIG_EXT4_FS=y
# CONFIG_DNOTIFY is not set # CONFIG_DNOTIFY is not set
CONFIG_FSCACHE=m CONFIG_NETFS_SUPPORT=m
CONFIG_FSCACHE=y
CONFIG_FSCACHE_STATS=y CONFIG_FSCACHE_STATS=y
CONFIG_CACHEFILES=m CONFIG_CACHEFILES=m
CONFIG_VFAT_FS=y CONFIG_VFAT_FS=y


@ -34,7 +34,8 @@ CONFIG_GENERIC_PHY=y
CONFIG_EXT4_FS=y CONFIG_EXT4_FS=y
CONFIG_FANOTIFY=y CONFIG_FANOTIFY=y
CONFIG_QUOTA=y CONFIG_QUOTA=y
CONFIG_FSCACHE=m CONFIG_NETFS_SUPPORT=m
CONFIG_FSCACHE=y
CONFIG_FSCACHE_STATS=y CONFIG_FSCACHE_STATS=y
CONFIG_CACHEFILES=m CONFIG_CACHEFILES=m
CONFIG_MSDOS_FS=y CONFIG_MSDOS_FS=y


@ -287,7 +287,8 @@ CONFIG_BTRFS_FS_POSIX_ACL=y
CONFIG_QUOTA_NETLINK_INTERFACE=y CONFIG_QUOTA_NETLINK_INTERFACE=y
CONFIG_FUSE_FS=m CONFIG_FUSE_FS=m
CONFIG_CUSE=m CONFIG_CUSE=m
CONFIG_FSCACHE=m CONFIG_NETFS_SUPPORT=m
CONFIG_FSCACHE=y
CONFIG_FSCACHE_STATS=y CONFIG_FSCACHE_STATS=y
CONFIG_CACHEFILES=m CONFIG_CACHEFILES=m
CONFIG_PROC_KCORE=y CONFIG_PROC_KCORE=y


@ -238,7 +238,8 @@ CONFIG_BTRFS_FS=m
CONFIG_QUOTA=y CONFIG_QUOTA=y
CONFIG_QFMT_V2=m CONFIG_QFMT_V2=m
CONFIG_AUTOFS_FS=m CONFIG_AUTOFS_FS=m
CONFIG_FSCACHE=m CONFIG_NETFS_SUPPORT=m
CONFIG_FSCACHE=y
CONFIG_CACHEFILES=m CONFIG_CACHEFILES=m
CONFIG_ISO9660_FS=m CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y CONFIG_JOLIET=y


@ -356,7 +356,8 @@ CONFIG_QFMT_V2=m
CONFIG_AUTOFS_FS=y CONFIG_AUTOFS_FS=y
CONFIG_FUSE_FS=m CONFIG_FUSE_FS=m
CONFIG_VIRTIO_FS=m CONFIG_VIRTIO_FS=m
CONFIG_FSCACHE=m CONFIG_NETFS_SUPPORT=m
CONFIG_FSCACHE=y
CONFIG_ISO9660_FS=m CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y CONFIG_JOLIET=y
CONFIG_MSDOS_FS=m CONFIG_MSDOS_FS=m


@ -68,7 +68,8 @@ CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y CONFIG_EXT4_FS_SECURITY=y
CONFIG_AUTOFS_FS=m CONFIG_AUTOFS_FS=m
CONFIG_FUSE_FS=m CONFIG_FUSE_FS=m
CONFIG_FSCACHE=m CONFIG_NETFS_SUPPORT=m
CONFIG_FSCACHE=y
CONFIG_ISO9660_FS=m CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y CONFIG_JOLIET=y
CONFIG_ZISOFS=y CONFIG_ZISOFS=y


@ -637,8 +637,9 @@ CONFIG_FUSE_FS=y
CONFIG_CUSE=m CONFIG_CUSE=m
CONFIG_VIRTIO_FS=m CONFIG_VIRTIO_FS=m
CONFIG_OVERLAY_FS=m CONFIG_OVERLAY_FS=m
CONFIG_NETFS_SUPPORT=m
CONFIG_NETFS_STATS=y CONFIG_NETFS_STATS=y
CONFIG_FSCACHE=m CONFIG_FSCACHE=y
CONFIG_CACHEFILES=m CONFIG_CACHEFILES=m
CONFIG_ISO9660_FS=y CONFIG_ISO9660_FS=y
CONFIG_JOLIET=y CONFIG_JOLIET=y


@ -622,8 +622,9 @@ CONFIG_FUSE_FS=y
CONFIG_CUSE=m CONFIG_CUSE=m
CONFIG_VIRTIO_FS=m CONFIG_VIRTIO_FS=m
CONFIG_OVERLAY_FS=m CONFIG_OVERLAY_FS=m
CONFIG_NETFS_SUPPORT=m
CONFIG_NETFS_STATS=y CONFIG_NETFS_STATS=y
CONFIG_FSCACHE=m CONFIG_FSCACHE=y
CONFIG_CACHEFILES=m CONFIG_CACHEFILES=m
CONFIG_ISO9660_FS=y CONFIG_ISO9660_FS=y
CONFIG_JOLIET=y CONFIG_JOLIET=y


@ -171,7 +171,8 @@ CONFIG_BTRFS_FS=y
CONFIG_AUTOFS_FS=m CONFIG_AUTOFS_FS=m
CONFIG_FUSE_FS=y CONFIG_FUSE_FS=y
CONFIG_CUSE=m CONFIG_CUSE=m
CONFIG_FSCACHE=m CONFIG_NETFS_SUPPORT=m
CONFIG_FSCACHE=y
CONFIG_CACHEFILES=m CONFIG_CACHEFILES=m
CONFIG_ISO9660_FS=m CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y CONFIG_JOLIET=y


@ -42,6 +42,7 @@ struct inode *v9fs_alloc_inode(struct super_block *sb);
void v9fs_free_inode(struct inode *inode); void v9fs_free_inode(struct inode *inode);
struct inode *v9fs_get_inode(struct super_block *sb, umode_t mode, struct inode *v9fs_get_inode(struct super_block *sb, umode_t mode,
dev_t rdev); dev_t rdev);
void v9fs_set_netfs_context(struct inode *inode);
int v9fs_init_inode(struct v9fs_session_info *v9ses, int v9fs_init_inode(struct v9fs_session_info *v9ses,
struct inode *inode, umode_t mode, dev_t rdev); struct inode *inode, umode_t mode, dev_t rdev);
void v9fs_evict_inode(struct inode *inode); void v9fs_evict_inode(struct inode *inode);


@ -19,12 +19,45 @@
#include <linux/netfs.h> #include <linux/netfs.h>
#include <net/9p/9p.h> #include <net/9p/9p.h>
#include <net/9p/client.h> #include <net/9p/client.h>
#include <trace/events/netfs.h>
#include "v9fs.h" #include "v9fs.h"
#include "v9fs_vfs.h" #include "v9fs_vfs.h"
#include "cache.h" #include "cache.h"
#include "fid.h" #include "fid.h"
static void v9fs_upload_to_server(struct netfs_io_subrequest *subreq)
{
struct p9_fid *fid = subreq->rreq->netfs_priv;
int err, len;
trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
len = p9_client_write(fid, subreq->start, &subreq->io_iter, &err);
netfs_write_subrequest_terminated(subreq, len ?: err, false);
}
static void v9fs_upload_to_server_worker(struct work_struct *work)
{
struct netfs_io_subrequest *subreq =
container_of(work, struct netfs_io_subrequest, work);
v9fs_upload_to_server(subreq);
}
/*
* Set up write requests for a writeback slice. We need to add a write request
* for each write we want to make.
*/
static void v9fs_create_write_requests(struct netfs_io_request *wreq, loff_t start, size_t len)
{
struct netfs_io_subrequest *subreq;
subreq = netfs_create_write_request(wreq, NETFS_UPLOAD_TO_SERVER,
start, len, v9fs_upload_to_server_worker);
if (subreq)
netfs_queue_write_request(subreq);
}
/** /**
* v9fs_issue_read - Issue a read from 9P * v9fs_issue_read - Issue a read from 9P
* @subreq: The read to make * @subreq: The read to make
@ -33,14 +66,10 @@ static void v9fs_issue_read(struct netfs_io_subrequest *subreq)
{ {
struct netfs_io_request *rreq = subreq->rreq; struct netfs_io_request *rreq = subreq->rreq;
struct p9_fid *fid = rreq->netfs_priv; struct p9_fid *fid = rreq->netfs_priv;
struct iov_iter to;
loff_t pos = subreq->start + subreq->transferred;
size_t len = subreq->len - subreq->transferred;
int total, err; int total, err;
iov_iter_xarray(&to, ITER_DEST, &rreq->mapping->i_pages, pos, len); total = p9_client_read(fid, subreq->start + subreq->transferred,
&subreq->io_iter, &err);
total = p9_client_read(fid, pos, &to, &err);
/* if we just extended the file size, any portion not in /* if we just extended the file size, any portion not in
* cache won't be on server and is zeroes */ * cache won't be on server and is zeroes */
@ -50,25 +79,42 @@ static void v9fs_issue_read(struct netfs_io_subrequest *subreq)
} }
/** /**
* v9fs_init_request - Initialise a read request * v9fs_init_request - Initialise a request
* @rreq: The read request * @rreq: The read request
* @file: The file being read from * @file: The file being read from
*/ */
static int v9fs_init_request(struct netfs_io_request *rreq, struct file *file) static int v9fs_init_request(struct netfs_io_request *rreq, struct file *file)
{ {
struct p9_fid *fid = file->private_data; struct p9_fid *fid;
bool writing = (rreq->origin == NETFS_READ_FOR_WRITE ||
rreq->origin == NETFS_WRITEBACK ||
rreq->origin == NETFS_WRITETHROUGH ||
rreq->origin == NETFS_LAUNDER_WRITE ||
rreq->origin == NETFS_UNBUFFERED_WRITE ||
rreq->origin == NETFS_DIO_WRITE);
BUG_ON(!fid); if (file) {
fid = file->private_data;
if (!fid)
goto no_fid;
p9_fid_get(fid);
} else {
fid = v9fs_fid_find_inode(rreq->inode, writing, INVALID_UID, true);
if (!fid)
goto no_fid;
}
/* we might need to read from a fid that was opened write-only /* we might need to read from a fid that was opened write-only
* for read-modify-write of page cache, use the writeback fid * for read-modify-write of page cache, use the writeback fid
* for that */ * for that */
WARN_ON(rreq->origin == NETFS_READ_FOR_WRITE && WARN_ON(rreq->origin == NETFS_READ_FOR_WRITE && !(fid->mode & P9_ORDWR));
!(fid->mode & P9_ORDWR));
p9_fid_get(fid);
rreq->netfs_priv = fid; rreq->netfs_priv = fid;
return 0; return 0;
no_fid:
WARN_ONCE(1, "folio expected an open fid inode->i_ino=%lx\n",
rreq->inode->i_ino);
return -EINVAL;
} }
/** /**
@ -82,281 +128,20 @@ static void v9fs_free_request(struct netfs_io_request *rreq)
p9_fid_put(fid); p9_fid_put(fid);
} }
/**
* v9fs_begin_cache_operation - Begin a cache operation for a read
* @rreq: The read request
*/
static int v9fs_begin_cache_operation(struct netfs_io_request *rreq)
{
#ifdef CONFIG_9P_FSCACHE
struct fscache_cookie *cookie = v9fs_inode_cookie(V9FS_I(rreq->inode));
return fscache_begin_read_operation(&rreq->cache_resources, cookie);
#else
return -ENOBUFS;
#endif
}
const struct netfs_request_ops v9fs_req_ops = { const struct netfs_request_ops v9fs_req_ops = {
.init_request = v9fs_init_request, .init_request = v9fs_init_request,
.free_request = v9fs_free_request, .free_request = v9fs_free_request,
.begin_cache_operation = v9fs_begin_cache_operation,
.issue_read = v9fs_issue_read, .issue_read = v9fs_issue_read,
.create_write_requests = v9fs_create_write_requests,
}; };
/**
* v9fs_release_folio - release the private state associated with a folio
* @folio: The folio to be released
* @gfp: The caller's allocation restrictions
*
* Returns true if the page can be released, false otherwise.
*/
static bool v9fs_release_folio(struct folio *folio, gfp_t gfp)
{
if (folio_test_private(folio))
return false;
#ifdef CONFIG_9P_FSCACHE
if (folio_test_fscache(folio)) {
if (current_is_kswapd() || !(gfp & __GFP_FS))
return false;
folio_wait_fscache(folio);
}
fscache_note_page_release(v9fs_inode_cookie(V9FS_I(folio_inode(folio))));
#endif
return true;
}
static void v9fs_invalidate_folio(struct folio *folio, size_t offset,
size_t length)
{
folio_wait_fscache(folio);
}
#ifdef CONFIG_9P_FSCACHE
static void v9fs_write_to_cache_done(void *priv, ssize_t transferred_or_error,
bool was_async)
{
struct v9fs_inode *v9inode = priv;
__le32 version;
if (IS_ERR_VALUE(transferred_or_error) &&
transferred_or_error != -ENOBUFS) {
version = cpu_to_le32(v9inode->qid.version);
fscache_invalidate(v9fs_inode_cookie(v9inode), &version,
i_size_read(&v9inode->netfs.inode), 0);
}
}
#endif
static int v9fs_vfs_write_folio_locked(struct folio *folio)
{
struct inode *inode = folio_inode(folio);
loff_t start = folio_pos(folio);
loff_t i_size = i_size_read(inode);
struct iov_iter from;
size_t len = folio_size(folio);
struct p9_fid *writeback_fid;
int err;
struct v9fs_inode __maybe_unused *v9inode = V9FS_I(inode);
struct fscache_cookie __maybe_unused *cookie = v9fs_inode_cookie(v9inode);
if (start >= i_size)
return 0; /* Simultaneous truncation occurred */
len = min_t(loff_t, i_size - start, len);
iov_iter_xarray(&from, ITER_SOURCE, &folio_mapping(folio)->i_pages, start, len);
writeback_fid = v9fs_fid_find_inode(inode, true, INVALID_UID, true);
if (!writeback_fid) {
WARN_ONCE(1, "folio expected an open fid inode->i_private=%p\n",
inode->i_private);
return -EINVAL;
}
folio_wait_fscache(folio);
folio_start_writeback(folio);
p9_client_write(writeback_fid, start, &from, &err);
#ifdef CONFIG_9P_FSCACHE
if (err == 0 &&
fscache_cookie_enabled(cookie) &&
test_bit(FSCACHE_COOKIE_IS_CACHING, &cookie->flags)) {
folio_start_fscache(folio);
fscache_write_to_cache(v9fs_inode_cookie(v9inode),
folio_mapping(folio), start, len, i_size,
v9fs_write_to_cache_done, v9inode,
true);
}
#endif
folio_end_writeback(folio);
p9_fid_put(writeback_fid);
return err;
}
static int v9fs_vfs_writepage(struct page *page, struct writeback_control *wbc)
{
struct folio *folio = page_folio(page);
int retval;
p9_debug(P9_DEBUG_VFS, "folio %p\n", folio);
retval = v9fs_vfs_write_folio_locked(folio);
if (retval < 0) {
if (retval == -EAGAIN) {
folio_redirty_for_writepage(wbc, folio);
retval = 0;
} else {
mapping_set_error(folio_mapping(folio), retval);
}
} else
retval = 0;
folio_unlock(folio);
return retval;
}
static int v9fs_launder_folio(struct folio *folio)
{
int retval;
if (folio_clear_dirty_for_io(folio)) {
retval = v9fs_vfs_write_folio_locked(folio);
if (retval)
return retval;
}
folio_wait_fscache(folio);
return 0;
}
/**
* v9fs_direct_IO - 9P address space operation for direct I/O
* @iocb: target I/O control block
* @iter: The data/buffer to use
*
* The presence of v9fs_direct_IO() in the address space ops vector
* allowes open() O_DIRECT flags which would have failed otherwise.
*
* In the non-cached mode, we shunt off direct read and write requests before
* the VFS gets them, so this method should never be called.
*
* Direct IO is not 'yet' supported in the cached mode. Hence when
* this routine is called through generic_file_aio_read(), the read/write fails
* with an error.
*
*/
static ssize_t
v9fs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
{
struct file *file = iocb->ki_filp;
loff_t pos = iocb->ki_pos;
ssize_t n;
int err = 0;
if (iov_iter_rw(iter) == WRITE) {
n = p9_client_write(file->private_data, pos, iter, &err);
if (n) {
struct inode *inode = file_inode(file);
loff_t i_size = i_size_read(inode);
if (pos + n > i_size)
inode_add_bytes(inode, pos + n - i_size);
}
} else {
n = p9_client_read(file->private_data, pos, iter, &err);
}
return n ? n : err;
}
static int v9fs_write_begin(struct file *filp, struct address_space *mapping,
loff_t pos, unsigned int len,
struct page **subpagep, void **fsdata)
{
int retval;
struct folio *folio;
struct v9fs_inode *v9inode = V9FS_I(mapping->host);
p9_debug(P9_DEBUG_VFS, "filp %p, mapping %p\n", filp, mapping);
/* Prefetch area to be written into the cache if we're caching this
* file. We need to do this before we get a lock on the page in case
* there's more than one writer competing for the same cache block.
*/
retval = netfs_write_begin(&v9inode->netfs, filp, mapping, pos, len, &folio, fsdata);
if (retval < 0)
return retval;
*subpagep = &folio->page;
return retval;
}
static int v9fs_write_end(struct file *filp, struct address_space *mapping,
loff_t pos, unsigned int len, unsigned int copied,
struct page *subpage, void *fsdata)
{
loff_t last_pos = pos + copied;
struct folio *folio = page_folio(subpage);
struct inode *inode = mapping->host;
p9_debug(P9_DEBUG_VFS, "filp %p, mapping %p\n", filp, mapping);
if (!folio_test_uptodate(folio)) {
if (unlikely(copied < len)) {
copied = 0;
goto out;
}
folio_mark_uptodate(folio);
}
/*
* No need to use i_size_read() here, the i_size
* cannot change under us because we hold the i_mutex.
*/
if (last_pos > inode->i_size) {
inode_add_bytes(inode, last_pos - inode->i_size);
i_size_write(inode, last_pos);
#ifdef CONFIG_9P_FSCACHE
fscache_update_cookie(v9fs_inode_cookie(V9FS_I(inode)), NULL,
&last_pos);
#endif
}
folio_mark_dirty(folio);
out:
folio_unlock(folio);
folio_put(folio);
return copied;
}
#ifdef CONFIG_9P_FSCACHE
/*
* Mark a page as having been made dirty and thus needing writeback. We also
* need to pin the cache object to write back to.
*/
static bool v9fs_dirty_folio(struct address_space *mapping, struct folio *folio)
{
struct v9fs_inode *v9inode = V9FS_I(mapping->host);
return fscache_dirty_folio(mapping, folio, v9fs_inode_cookie(v9inode));
}
#else
#define v9fs_dirty_folio filemap_dirty_folio
#endif
const struct address_space_operations v9fs_addr_operations = { const struct address_space_operations v9fs_addr_operations = {
.read_folio = netfs_read_folio, .read_folio = netfs_read_folio,
.readahead = netfs_readahead, .readahead = netfs_readahead,
.dirty_folio = v9fs_dirty_folio, .dirty_folio = netfs_dirty_folio,
.writepage = v9fs_vfs_writepage, .release_folio = netfs_release_folio,
.write_begin = v9fs_write_begin, .invalidate_folio = netfs_invalidate_folio,
.write_end = v9fs_write_end, .launder_folio = netfs_launder_folio,
.release_folio = v9fs_release_folio, .direct_IO = noop_direct_IO,
.invalidate_folio = v9fs_invalidate_folio, .writepages = netfs_writepages,
.launder_folio = v9fs_launder_folio,
.direct_IO = v9fs_direct_IO,
}; };
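
Since the two-column rendering above interleaves the old and new tables, here
is the 9p address_space_operations table as it stands after this patch,
reconstructed from the new-side column of the hunk: almost every operation is
now a netfs library helper, and direct_IO is the generic noop because direct
I/O is shunted off in ->read_iter/->write_iter instead.

/* fs/9p address space operations after this series */
const struct address_space_operations v9fs_addr_operations = {
        .read_folio             = netfs_read_folio,
        .readahead              = netfs_readahead,
        .dirty_folio            = netfs_dirty_folio,
        .release_folio          = netfs_release_folio,
        .invalidate_folio       = netfs_invalidate_folio,
        .launder_folio          = netfs_launder_folio,
        .direct_IO              = noop_direct_IO,
        .writepages             = netfs_writepages,
};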


@ -353,25 +353,15 @@ static ssize_t
v9fs_file_read_iter(struct kiocb *iocb, struct iov_iter *to) v9fs_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
{ {
struct p9_fid *fid = iocb->ki_filp->private_data; struct p9_fid *fid = iocb->ki_filp->private_data;
int ret, err = 0;
p9_debug(P9_DEBUG_VFS, "fid %d count %zu offset %lld\n", p9_debug(P9_DEBUG_VFS, "fid %d count %zu offset %lld\n",
fid->fid, iov_iter_count(to), iocb->ki_pos); fid->fid, iov_iter_count(to), iocb->ki_pos);
if (!(fid->mode & P9L_DIRECT)) { if (fid->mode & P9L_DIRECT)
p9_debug(P9_DEBUG_VFS, "(cached)\n"); return netfs_unbuffered_read_iter(iocb, to);
return generic_file_read_iter(iocb, to);
}
if (iocb->ki_filp->f_flags & O_NONBLOCK) p9_debug(P9_DEBUG_VFS, "(cached)\n");
ret = p9_client_read_once(fid, iocb->ki_pos, to, &err); return netfs_file_read_iter(iocb, to);
else
ret = p9_client_read(fid, iocb->ki_pos, to, &err);
if (!ret)
return err;
iocb->ki_pos += ret;
return ret;
} }
/* /*
@ -407,46 +397,14 @@ v9fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
{ {
struct file *file = iocb->ki_filp; struct file *file = iocb->ki_filp;
struct p9_fid *fid = file->private_data; struct p9_fid *fid = file->private_data;
ssize_t retval;
loff_t origin;
int err = 0;
p9_debug(P9_DEBUG_VFS, "fid %d\n", fid->fid); p9_debug(P9_DEBUG_VFS, "fid %d\n", fid->fid);
if (!(fid->mode & (P9L_DIRECT | P9L_NOWRITECACHE))) { if (fid->mode & (P9L_DIRECT | P9L_NOWRITECACHE))
p9_debug(P9_DEBUG_CACHE, "(cached)\n"); return netfs_unbuffered_write_iter(iocb, from);
return generic_file_write_iter(iocb, from);
}
retval = generic_write_checks(iocb, from); p9_debug(P9_DEBUG_CACHE, "(cached)\n");
if (retval <= 0) return netfs_file_write_iter(iocb, from);
return retval;
origin = iocb->ki_pos;
retval = p9_client_write(file->private_data, iocb->ki_pos, from, &err);
if (retval > 0) {
struct inode *inode = file_inode(file);
loff_t i_size;
unsigned long pg_start, pg_end;
pg_start = origin >> PAGE_SHIFT;
pg_end = (origin + retval - 1) >> PAGE_SHIFT;
if (inode->i_mapping && inode->i_mapping->nrpages)
invalidate_inode_pages2_range(inode->i_mapping,
pg_start, pg_end);
iocb->ki_pos += retval;
i_size = i_size_read(inode);
if (iocb->ki_pos > i_size) {
inode_add_bytes(inode, iocb->ki_pos - i_size);
/*
* Need to serialize against i_size_write() in
* v9fs_stat2inode()
*/
v9fs_i_size_write(inode, iocb->ki_pos);
}
return retval;
}
return err;
} }
static int v9fs_file_fsync(struct file *filp, loff_t start, loff_t end, static int v9fs_file_fsync(struct file *filp, loff_t start, loff_t end,
@ -519,36 +477,7 @@ v9fs_file_mmap(struct file *filp, struct vm_area_struct *vma)
static vm_fault_t static vm_fault_t
v9fs_vm_page_mkwrite(struct vm_fault *vmf) v9fs_vm_page_mkwrite(struct vm_fault *vmf)
{ {
struct folio *folio = page_folio(vmf->page); return netfs_page_mkwrite(vmf, NULL);
struct file *filp = vmf->vma->vm_file;
struct inode *inode = file_inode(filp);
p9_debug(P9_DEBUG_VFS, "folio %p fid %lx\n",
folio, (unsigned long)filp->private_data);
/* Wait for the page to be written to the cache before we allow it to
* be modified. We then assume the entire page will need writing back.
*/
#ifdef CONFIG_9P_FSCACHE
if (folio_test_fscache(folio) &&
folio_wait_fscache_killable(folio) < 0)
return VM_FAULT_NOPAGE;
#endif
/* Update file times before taking page lock */
file_update_time(filp);
if (folio_lock_killable(folio) < 0)
return VM_FAULT_RETRY;
if (folio_mapping(folio) != inode->i_mapping)
goto out_unlock;
folio_wait_stable(folio);
return VM_FAULT_LOCKED;
out_unlock:
folio_unlock(folio);
return VM_FAULT_NOPAGE;
} }
static void v9fs_mmap_vm_close(struct vm_area_struct *vma) static void v9fs_mmap_vm_close(struct vm_area_struct *vma)


@ -246,10 +246,10 @@ void v9fs_free_inode(struct inode *inode)
/* /*
* Set parameters for the netfs library * Set parameters for the netfs library
*/ */
static void v9fs_set_netfs_context(struct inode *inode) void v9fs_set_netfs_context(struct inode *inode)
{ {
struct v9fs_inode *v9inode = V9FS_I(inode); struct v9fs_inode *v9inode = V9FS_I(inode);
netfs_inode_init(&v9inode->netfs, &v9fs_req_ops); netfs_inode_init(&v9inode->netfs, &v9fs_req_ops, true);
} }
int v9fs_init_inode(struct v9fs_session_info *v9ses, int v9fs_init_inode(struct v9fs_session_info *v9ses,
@ -326,8 +326,6 @@ int v9fs_init_inode(struct v9fs_session_info *v9ses,
err = -EINVAL; err = -EINVAL;
goto error; goto error;
} }
v9fs_set_netfs_context(inode);
error: error:
return err; return err;
@ -359,6 +357,7 @@ struct inode *v9fs_get_inode(struct super_block *sb, umode_t mode, dev_t rdev)
iput(inode); iput(inode);
return ERR_PTR(err); return ERR_PTR(err);
} }
v9fs_set_netfs_context(inode);
return inode; return inode;
} }
@ -374,11 +373,8 @@ void v9fs_evict_inode(struct inode *inode)
truncate_inode_pages_final(&inode->i_data); truncate_inode_pages_final(&inode->i_data);
#ifdef CONFIG_9P_FSCACHE
version = cpu_to_le32(v9inode->qid.version); version = cpu_to_le32(v9inode->qid.version);
fscache_clear_inode_writeback(v9fs_inode_cookie(v9inode), inode, netfs_clear_inode_writeback(inode, &version);
&version);
#endif
clear_inode(inode); clear_inode(inode);
filemap_fdatawrite(&inode->i_data); filemap_fdatawrite(&inode->i_data);
@ -464,6 +460,7 @@ static struct inode *v9fs_qid_iget(struct super_block *sb,
goto error; goto error;
v9fs_stat2inode(st, inode, sb, 0); v9fs_stat2inode(st, inode, sb, 0);
v9fs_set_netfs_context(inode);
v9fs_cache_inode_get_cookie(inode); v9fs_cache_inode_get_cookie(inode);
unlock_new_inode(inode); unlock_new_inode(inode);
return inode; return inode;
@ -1113,7 +1110,7 @@ static int v9fs_vfs_setattr(struct mnt_idmap *idmap,
if ((iattr->ia_valid & ATTR_SIZE) && if ((iattr->ia_valid & ATTR_SIZE) &&
iattr->ia_size != i_size_read(inode)) { iattr->ia_size != i_size_read(inode)) {
truncate_setsize(inode, iattr->ia_size); truncate_setsize(inode, iattr->ia_size);
truncate_pagecache(inode, iattr->ia_size); netfs_resize_file(netfs_inode(inode), iattr->ia_size, true);
#ifdef CONFIG_9P_FSCACHE #ifdef CONFIG_9P_FSCACHE
if (v9ses->cache & CACHE_FSCACHE) { if (v9ses->cache & CACHE_FSCACHE) {
@ -1181,6 +1178,7 @@ v9fs_stat2inode(struct p9_wstat *stat, struct inode *inode,
mode |= inode->i_mode & ~S_IALLUGO; mode |= inode->i_mode & ~S_IALLUGO;
inode->i_mode = mode; inode->i_mode = mode;
v9inode->netfs.remote_i_size = stat->length;
if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE)) if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE))
v9fs_i_size_write(inode, stat->length); v9fs_i_size_write(inode, stat->length);
/* not real number of blocks, but 512 byte ones ... */ /* not real number of blocks, but 512 byte ones ... */


@ -128,6 +128,7 @@ static struct inode *v9fs_qid_iget_dotl(struct super_block *sb,
goto error; goto error;
v9fs_stat2inode_dotl(st, inode, 0); v9fs_stat2inode_dotl(st, inode, 0);
v9fs_set_netfs_context(inode);
v9fs_cache_inode_get_cookie(inode); v9fs_cache_inode_get_cookie(inode);
retval = v9fs_get_acl(inode, fid); retval = v9fs_get_acl(inode, fid);
if (retval) if (retval)
@ -598,7 +599,7 @@ int v9fs_vfs_setattr_dotl(struct mnt_idmap *idmap,
if ((iattr->ia_valid & ATTR_SIZE) && iattr->ia_size != if ((iattr->ia_valid & ATTR_SIZE) && iattr->ia_size !=
i_size_read(inode)) { i_size_read(inode)) {
truncate_setsize(inode, iattr->ia_size); truncate_setsize(inode, iattr->ia_size);
truncate_pagecache(inode, iattr->ia_size); netfs_resize_file(netfs_inode(inode), iattr->ia_size, true);
#ifdef CONFIG_9P_FSCACHE #ifdef CONFIG_9P_FSCACHE
if (v9ses->cache & CACHE_FSCACHE) if (v9ses->cache & CACHE_FSCACHE)
@ -655,6 +656,7 @@ v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode,
mode |= inode->i_mode & ~S_IALLUGO; mode |= inode->i_mode & ~S_IALLUGO;
inode->i_mode = mode; inode->i_mode = mode;
v9inode->netfs.remote_i_size = stat->st_size;
if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE)) if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE))
v9fs_i_size_write(inode, stat->st_size); v9fs_i_size_write(inode, stat->st_size);
inode->i_blocks = stat->st_blocks; inode->i_blocks = stat->st_blocks;
@ -683,8 +685,10 @@ v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode,
inode->i_mode = mode; inode->i_mode = mode;
} }
if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE) && if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE) &&
stat->st_result_mask & P9_STATS_SIZE) stat->st_result_mask & P9_STATS_SIZE) {
v9inode->netfs.remote_i_size = stat->st_size;
v9fs_i_size_write(inode, stat->st_size); v9fs_i_size_write(inode, stat->st_size);
}
if (stat->st_result_mask & P9_STATS_BLOCKS) if (stat->st_result_mask & P9_STATS_BLOCKS)
inode->i_blocks = stat->st_blocks; inode->i_blocks = stat->st_blocks;
} }


@ -289,31 +289,21 @@ static int v9fs_drop_inode(struct inode *inode)
static int v9fs_write_inode(struct inode *inode, static int v9fs_write_inode(struct inode *inode,
struct writeback_control *wbc) struct writeback_control *wbc)
{ {
struct v9fs_inode *v9inode;
/* /*
* send an fsync request to server irrespective of * send an fsync request to server irrespective of
* wbc->sync_mode. * wbc->sync_mode.
*/ */
p9_debug(P9_DEBUG_VFS, "%s: inode %p\n", __func__, inode); p9_debug(P9_DEBUG_VFS, "%s: inode %p\n", __func__, inode);
return netfs_unpin_writeback(inode, wbc);
v9inode = V9FS_I(inode);
fscache_unpin_writeback(wbc, v9fs_inode_cookie(v9inode));
return 0;
} }
static int v9fs_write_inode_dotl(struct inode *inode, static int v9fs_write_inode_dotl(struct inode *inode,
struct writeback_control *wbc) struct writeback_control *wbc)
{ {
struct v9fs_inode *v9inode;
v9inode = V9FS_I(inode);
p9_debug(P9_DEBUG_VFS, "%s: inode %p\n", __func__, inode); p9_debug(P9_DEBUG_VFS, "%s: inode %p\n", __func__, inode);
fscache_unpin_writeback(wbc, v9fs_inode_cookie(v9inode)); return netfs_unpin_writeback(inode, wbc);
return 0;
} }
static const struct super_operations v9fs_super_ops = { static const struct super_operations v9fs_super_ops = {


@ -144,7 +144,6 @@ source "fs/overlayfs/Kconfig"
menu "Caches" menu "Caches"
source "fs/netfs/Kconfig" source "fs/netfs/Kconfig"
source "fs/fscache/Kconfig"
source "fs/cachefiles/Kconfig" source "fs/cachefiles/Kconfig"
endmenu endmenu


@ -61,7 +61,6 @@ obj-$(CONFIG_DLM) += dlm/
# Do not add any filesystems before this line # Do not add any filesystems before this line
obj-$(CONFIG_NETFS_SUPPORT) += netfs/ obj-$(CONFIG_NETFS_SUPPORT) += netfs/
obj-$(CONFIG_FSCACHE) += fscache/
obj-$(CONFIG_REISERFS_FS) += reiserfs/ obj-$(CONFIG_REISERFS_FS) += reiserfs/
obj-$(CONFIG_EXT4_FS) += ext4/ obj-$(CONFIG_EXT4_FS) += ext4/
# We place ext4 before ext2 so that clean ext3 root fs's do NOT mount using the # We place ext4 before ext2 so that clean ext3 root fs's do NOT mount using the


@ -76,7 +76,7 @@ struct inode *afs_iget_pseudo_dir(struct super_block *sb, bool root)
/* there shouldn't be an existing inode */ /* there shouldn't be an existing inode */
BUG_ON(!(inode->i_state & I_NEW)); BUG_ON(!(inode->i_state & I_NEW));
netfs_inode_init(&vnode->netfs, NULL); netfs_inode_init(&vnode->netfs, NULL, false);
inode->i_size = 0; inode->i_size = 0;
inode->i_mode = S_IFDIR | S_IRUGO | S_IXUGO; inode->i_mode = S_IFDIR | S_IRUGO | S_IXUGO;
if (root) { if (root) {


@ -20,9 +20,6 @@
static int afs_file_mmap(struct file *file, struct vm_area_struct *vma); static int afs_file_mmap(struct file *file, struct vm_area_struct *vma);
static int afs_symlink_read_folio(struct file *file, struct folio *folio); static int afs_symlink_read_folio(struct file *file, struct folio *folio);
static void afs_invalidate_folio(struct folio *folio, size_t offset,
size_t length);
static bool afs_release_folio(struct folio *folio, gfp_t gfp_flags);
static ssize_t afs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter); static ssize_t afs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter);
static ssize_t afs_file_splice_read(struct file *in, loff_t *ppos, static ssize_t afs_file_splice_read(struct file *in, loff_t *ppos,
@ -37,7 +34,7 @@ const struct file_operations afs_file_operations = {
.release = afs_release, .release = afs_release,
.llseek = generic_file_llseek, .llseek = generic_file_llseek,
.read_iter = afs_file_read_iter, .read_iter = afs_file_read_iter,
.write_iter = afs_file_write, .write_iter = netfs_file_write_iter,
.mmap = afs_file_mmap, .mmap = afs_file_mmap,
.splice_read = afs_file_splice_read, .splice_read = afs_file_splice_read,
.splice_write = iter_file_splice_write, .splice_write = iter_file_splice_write,
@ -53,22 +50,21 @@ const struct inode_operations afs_file_inode_operations = {
}; };
const struct address_space_operations afs_file_aops = { const struct address_space_operations afs_file_aops = {
.direct_IO = noop_direct_IO,
.read_folio = netfs_read_folio, .read_folio = netfs_read_folio,
.readahead = netfs_readahead, .readahead = netfs_readahead,
.dirty_folio = afs_dirty_folio, .dirty_folio = netfs_dirty_folio,
.launder_folio = afs_launder_folio, .launder_folio = netfs_launder_folio,
.release_folio = afs_release_folio, .release_folio = netfs_release_folio,
.invalidate_folio = afs_invalidate_folio, .invalidate_folio = netfs_invalidate_folio,
.write_begin = afs_write_begin,
.write_end = afs_write_end,
.writepages = afs_writepages,
.migrate_folio = filemap_migrate_folio, .migrate_folio = filemap_migrate_folio,
.writepages = afs_writepages,
}; };
const struct address_space_operations afs_symlink_aops = { const struct address_space_operations afs_symlink_aops = {
.read_folio = afs_symlink_read_folio, .read_folio = afs_symlink_read_folio,
.release_folio = afs_release_folio, .release_folio = netfs_release_folio,
.invalidate_folio = afs_invalidate_folio, .invalidate_folio = netfs_invalidate_folio,
.migrate_folio = filemap_migrate_folio, .migrate_folio = filemap_migrate_folio,
}; };
@ -323,11 +319,7 @@ static void afs_issue_read(struct netfs_io_subrequest *subreq)
fsreq->len = subreq->len - subreq->transferred; fsreq->len = subreq->len - subreq->transferred;
fsreq->key = key_get(subreq->rreq->netfs_priv); fsreq->key = key_get(subreq->rreq->netfs_priv);
fsreq->vnode = vnode; fsreq->vnode = vnode;
fsreq->iter = &fsreq->def_iter; fsreq->iter = &subreq->io_iter;
iov_iter_xarray(&fsreq->def_iter, ITER_DEST,
&fsreq->vnode->netfs.inode.i_mapping->i_pages,
fsreq->pos, fsreq->len);
afs_fetch_data(fsreq->vnode, fsreq); afs_fetch_data(fsreq->vnode, fsreq);
afs_put_read(fsreq); afs_put_read(fsreq);
@ -359,22 +351,13 @@ static int afs_symlink_read_folio(struct file *file, struct folio *folio)
static int afs_init_request(struct netfs_io_request *rreq, struct file *file) static int afs_init_request(struct netfs_io_request *rreq, struct file *file)
{ {
rreq->netfs_priv = key_get(afs_file_key(file)); if (file)
rreq->netfs_priv = key_get(afs_file_key(file));
rreq->rsize = 256 * 1024;
rreq->wsize = 256 * 1024;
return 0; return 0;
} }
static int afs_begin_cache_operation(struct netfs_io_request *rreq)
{
#ifdef CONFIG_AFS_FSCACHE
struct afs_vnode *vnode = AFS_FS_I(rreq->inode);
return fscache_begin_read_operation(&rreq->cache_resources,
afs_vnode_cache(vnode));
#else
return -ENOBUFS;
#endif
}
static int afs_check_write_begin(struct file *file, loff_t pos, unsigned len, static int afs_check_write_begin(struct file *file, loff_t pos, unsigned len,
struct folio **foliop, void **_fsdata) struct folio **foliop, void **_fsdata)
{ {
@ -388,129 +371,38 @@ static void afs_free_request(struct netfs_io_request *rreq)
key_put(rreq->netfs_priv); key_put(rreq->netfs_priv);
} }
static void afs_update_i_size(struct inode *inode, loff_t new_i_size)
{
struct afs_vnode *vnode = AFS_FS_I(inode);
loff_t i_size;
write_seqlock(&vnode->cb_lock);
i_size = i_size_read(&vnode->netfs.inode);
if (new_i_size > i_size) {
i_size_write(&vnode->netfs.inode, new_i_size);
inode_set_bytes(&vnode->netfs.inode, new_i_size);
}
write_sequnlock(&vnode->cb_lock);
fscache_update_cookie(afs_vnode_cache(vnode), NULL, &new_i_size);
}
static void afs_netfs_invalidate_cache(struct netfs_io_request *wreq)
{
struct afs_vnode *vnode = AFS_FS_I(wreq->inode);
afs_invalidate_cache(vnode, 0);
}
const struct netfs_request_ops afs_req_ops = { const struct netfs_request_ops afs_req_ops = {
.init_request = afs_init_request, .init_request = afs_init_request,
.free_request = afs_free_request, .free_request = afs_free_request,
.begin_cache_operation = afs_begin_cache_operation,
.check_write_begin = afs_check_write_begin, .check_write_begin = afs_check_write_begin,
.issue_read = afs_issue_read, .issue_read = afs_issue_read,
.update_i_size = afs_update_i_size,
.invalidate_cache = afs_netfs_invalidate_cache,
.create_write_requests = afs_create_write_requests,
}; };
int afs_write_inode(struct inode *inode, struct writeback_control *wbc)
{
fscache_unpin_writeback(wbc, afs_vnode_cache(AFS_FS_I(inode)));
return 0;
}
/*
* Adjust the dirty region of the page on truncation or full invalidation,
* getting rid of the markers altogether if the region is entirely invalidated.
*/
static void afs_invalidate_dirty(struct folio *folio, size_t offset,
size_t length)
{
struct afs_vnode *vnode = AFS_FS_I(folio_inode(folio));
unsigned long priv;
unsigned int f, t, end = offset + length;
priv = (unsigned long)folio_get_private(folio);
/* we clean up only if the entire page is being invalidated */
if (offset == 0 && length == folio_size(folio))
goto full_invalidate;
/* If the page was dirtied by page_mkwrite(), the PTE stays writable
* and we don't get another notification to tell us to expand it
* again.
*/
if (afs_is_folio_dirty_mmapped(priv))
return;
/* We may need to shorten the dirty region */
f = afs_folio_dirty_from(folio, priv);
t = afs_folio_dirty_to(folio, priv);
if (t <= offset || f >= end)
return; /* Doesn't overlap */
if (f < offset && t > end)
return; /* Splits the dirty region - just absorb it */
if (f >= offset && t <= end)
goto undirty;
if (f < offset)
t = offset;
else
f = end;
if (f == t)
goto undirty;
priv = afs_folio_dirty(folio, f, t);
folio_change_private(folio, (void *)priv);
trace_afs_folio_dirty(vnode, tracepoint_string("trunc"), folio);
return;
undirty:
trace_afs_folio_dirty(vnode, tracepoint_string("undirty"), folio);
folio_clear_dirty_for_io(folio);
full_invalidate:
trace_afs_folio_dirty(vnode, tracepoint_string("inval"), folio);
folio_detach_private(folio);
}
/*
* invalidate part or all of a page
* - release a page and clean up its private data if offset is 0 (indicating
* the entire page)
*/
static void afs_invalidate_folio(struct folio *folio, size_t offset,
size_t length)
{
_enter("{%lu},%zu,%zu", folio->index, offset, length);
BUG_ON(!folio_test_locked(folio));
if (folio_get_private(folio))
afs_invalidate_dirty(folio, offset, length);
folio_wait_fscache(folio);
_leave("");
}
/*
* release a page and clean up its private state if it's not busy
* - return true if the page can now be released, false if not
*/
static bool afs_release_folio(struct folio *folio, gfp_t gfp)
{
struct afs_vnode *vnode = AFS_FS_I(folio_inode(folio));
_enter("{{%llx:%llu}[%lu],%lx},%x",
vnode->fid.vid, vnode->fid.vnode, folio_index(folio), folio->flags,
gfp);
/* deny if folio is being written to the cache and the caller hasn't
* elected to wait */
#ifdef CONFIG_AFS_FSCACHE
if (folio_test_fscache(folio)) {
if (current_is_kswapd() || !(gfp & __GFP_FS))
return false;
folio_wait_fscache(folio);
}
fscache_note_page_release(afs_vnode_cache(vnode));
#endif
if (folio_test_private(folio)) {
trace_afs_folio_dirty(vnode, tracepoint_string("rel"), folio);
folio_detach_private(folio);
}
/* Indicate that the folio can be released */
_leave(" = T");
return true;
}
static void afs_add_open_mmap(struct afs_vnode *vnode) static void afs_add_open_mmap(struct afs_vnode *vnode)
{ {
if (atomic_inc_return(&vnode->cb_nr_mmap) == 1) { if (atomic_inc_return(&vnode->cb_nr_mmap) == 1) {
@ -576,28 +468,39 @@ static vm_fault_t afs_vm_map_pages(struct vm_fault *vmf, pgoff_t start_pgoff, pg
static ssize_t afs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter) static ssize_t afs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
{ {
struct afs_vnode *vnode = AFS_FS_I(file_inode(iocb->ki_filp)); struct inode *inode = file_inode(iocb->ki_filp);
struct afs_vnode *vnode = AFS_FS_I(inode);
struct afs_file *af = iocb->ki_filp->private_data; struct afs_file *af = iocb->ki_filp->private_data;
int ret; ssize_t ret;
ret = afs_validate(vnode, af->key); if (iocb->ki_flags & IOCB_DIRECT)
return netfs_unbuffered_read_iter(iocb, iter);
ret = netfs_start_io_read(inode);
if (ret < 0) if (ret < 0)
return ret; return ret;
ret = afs_validate(vnode, af->key);
return generic_file_read_iter(iocb, iter); if (ret == 0)
ret = filemap_read(iocb, iter, 0);
netfs_end_io_read(inode);
return ret;
} }
static ssize_t afs_file_splice_read(struct file *in, loff_t *ppos, static ssize_t afs_file_splice_read(struct file *in, loff_t *ppos,
struct pipe_inode_info *pipe, struct pipe_inode_info *pipe,
size_t len, unsigned int flags) size_t len, unsigned int flags)
{ {
struct afs_vnode *vnode = AFS_FS_I(file_inode(in)); struct inode *inode = file_inode(in);
struct afs_vnode *vnode = AFS_FS_I(inode);
struct afs_file *af = in->private_data; struct afs_file *af = in->private_data;
int ret; ssize_t ret;
ret = afs_validate(vnode, af->key); ret = netfs_start_io_read(inode);
if (ret < 0) if (ret < 0)
return ret; return ret;
ret = afs_validate(vnode, af->key);
return filemap_splice_read(in, ppos, pipe, len, flags); if (ret == 0)
ret = filemap_splice_read(in, ppos, pipe, len, flags);
netfs_end_io_read(inode);
return ret;
} }
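
This hunk is where the "NFS-style locking around DIO vs buffered I/O" item
from the pull summary lands for afs: buffered reads are bracketed by
netfs_start_io_read()/netfs_end_io_read() so they serialise against concurrent
direct I/O, with afs_validate() run under the same bracket. A distilled sketch
of the pattern, based on the new-side column above (the function name is a
placeholder; afs_validate() is the afs-specific step, shown as a comment):

#include <linux/fs.h>
#include <linux/netfs.h>

ssize_t buffered_read_excluding_dio(struct kiocb *iocb, struct iov_iter *iter)
{
        struct inode *inode = file_inode(iocb->ki_filp);
        ssize_t ret;

        if (iocb->ki_flags & IOCB_DIRECT)
                return netfs_unbuffered_read_iter(iocb, iter);

        ret = netfs_start_io_read(inode);       /* excludes in-flight DIO */
        if (ret < 0)
                return ret;
        /* ret = afs_validate(vnode, af->key);     filesystem-specific step */
        ret = filemap_read(iocb, iter, 0);      /* ordinary buffered read */
        netfs_end_io_read(inode);
        return ret;
}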


@ -58,7 +58,7 @@ static noinline void dump_vnode(struct afs_vnode *vnode, struct afs_vnode *paren
*/ */
static void afs_set_netfs_context(struct afs_vnode *vnode) static void afs_set_netfs_context(struct afs_vnode *vnode)
{ {
netfs_inode_init(&vnode->netfs, &afs_req_ops); netfs_inode_init(&vnode->netfs, &afs_req_ops, true);
} }
/* /*
@ -166,6 +166,7 @@ static void afs_apply_status(struct afs_operation *op,
struct inode *inode = &vnode->netfs.inode; struct inode *inode = &vnode->netfs.inode;
struct timespec64 t; struct timespec64 t;
umode_t mode; umode_t mode;
bool unexpected_jump = false;
bool data_changed = false; bool data_changed = false;
bool change_size = vp->set_size; bool change_size = vp->set_size;
@ -230,6 +231,7 @@ static void afs_apply_status(struct afs_operation *op,
} }
change_size = true; change_size = true;
data_changed = true; data_changed = true;
unexpected_jump = true;
} else if (vnode->status.type == AFS_FTYPE_DIR) { } else if (vnode->status.type == AFS_FTYPE_DIR) {
/* Expected directory change is handled elsewhere so /* Expected directory change is handled elsewhere so
* that we can locally edit the directory and save on a * that we can locally edit the directory and save on a
@ -249,8 +251,10 @@ static void afs_apply_status(struct afs_operation *op,
* what's on the server. * what's on the server.
*/ */
vnode->netfs.remote_i_size = status->size; vnode->netfs.remote_i_size = status->size;
if (change_size) { if (change_size || status->size > i_size_read(inode)) {
afs_set_i_size(vnode, status->size); afs_set_i_size(vnode, status->size);
if (unexpected_jump)
vnode->netfs.zero_point = status->size;
inode_set_ctime_to_ts(inode, t); inode_set_ctime_to_ts(inode, t);
inode_set_atime_to_ts(inode, t); inode_set_atime_to_ts(inode, t);
} }
@ -647,7 +651,7 @@ void afs_evict_inode(struct inode *inode)
truncate_inode_pages_final(&inode->i_data); truncate_inode_pages_final(&inode->i_data);
afs_set_cache_aux(vnode, &aux); afs_set_cache_aux(vnode, &aux);
fscache_clear_inode_writeback(afs_vnode_cache(vnode), inode, &aux); netfs_clear_inode_writeback(inode, &aux);
clear_inode(inode); clear_inode(inode);
while (!list_empty(&vnode->wb_keys)) { while (!list_empty(&vnode->wb_keys)) {
@ -689,17 +693,17 @@ static void afs_setattr_success(struct afs_operation *op)
static void afs_setattr_edit_file(struct afs_operation *op) static void afs_setattr_edit_file(struct afs_operation *op)
{ {
struct afs_vnode_param *vp = &op->file[0]; struct afs_vnode_param *vp = &op->file[0];
struct inode *inode = &vp->vnode->netfs.inode; struct afs_vnode *vnode = vp->vnode;
if (op->setattr.attr->ia_valid & ATTR_SIZE) { if (op->setattr.attr->ia_valid & ATTR_SIZE) {
loff_t size = op->setattr.attr->ia_size; loff_t size = op->setattr.attr->ia_size;
loff_t i_size = op->setattr.old_i_size; loff_t i_size = op->setattr.old_i_size;
if (size < i_size) if (size != i_size) {
truncate_pagecache(inode, size); truncate_setsize(&vnode->netfs.inode, size);
if (size != i_size) netfs_resize_file(&vnode->netfs, size, true);
fscache_resize_cookie(afs_vnode_cache(vp->vnode), fscache_resize_cookie(afs_vnode_cache(vnode), size);
vp->scb.status.size); }
} }
} }
@ -767,11 +771,11 @@ int afs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
*/ */
if (!(attr->ia_valid & (supported & ~ATTR_SIZE & ~ATTR_MTIME)) && if (!(attr->ia_valid & (supported & ~ATTR_SIZE & ~ATTR_MTIME)) &&
attr->ia_size < i_size && attr->ia_size < i_size &&
attr->ia_size > vnode->status.size) { attr->ia_size > vnode->netfs.remote_i_size) {
truncate_pagecache(inode, attr->ia_size); truncate_setsize(inode, attr->ia_size);
netfs_resize_file(&vnode->netfs, size, false);
fscache_resize_cookie(afs_vnode_cache(vnode), fscache_resize_cookie(afs_vnode_cache(vnode),
attr->ia_size); attr->ia_size);
i_size_write(inode, attr->ia_size);
ret = 0; ret = 0;
goto out_unlock; goto out_unlock;
} }


@ -985,62 +985,6 @@ static inline void afs_invalidate_cache(struct afs_vnode *vnode, unsigned int fl
i_size_read(&vnode->netfs.inode), flags); i_size_read(&vnode->netfs.inode), flags);
} }
/*
* We use folio->private to hold the amount of the folio that we've written to,
* splitting the field into two parts. However, we need to represent a range
* 0...FOLIO_SIZE, so we reduce the resolution if the size of the folio
* exceeds what we can encode.
*/
#ifdef CONFIG_64BIT
#define __AFS_FOLIO_PRIV_MASK 0x7fffffffUL
#define __AFS_FOLIO_PRIV_SHIFT 32
#define __AFS_FOLIO_PRIV_MMAPPED 0x80000000UL
#else
#define __AFS_FOLIO_PRIV_MASK 0x7fffUL
#define __AFS_FOLIO_PRIV_SHIFT 16
#define __AFS_FOLIO_PRIV_MMAPPED 0x8000UL
#endif
static inline unsigned int afs_folio_dirty_resolution(struct folio *folio)
{
int shift = folio_shift(folio) - (__AFS_FOLIO_PRIV_SHIFT - 1);
return (shift > 0) ? shift : 0;
}
static inline size_t afs_folio_dirty_from(struct folio *folio, unsigned long priv)
{
unsigned long x = priv & __AFS_FOLIO_PRIV_MASK;
/* The lower bound is inclusive */
return x << afs_folio_dirty_resolution(folio);
}
static inline size_t afs_folio_dirty_to(struct folio *folio, unsigned long priv)
{
unsigned long x = (priv >> __AFS_FOLIO_PRIV_SHIFT) & __AFS_FOLIO_PRIV_MASK;
/* The upper bound is immediately beyond the region */
return (x + 1) << afs_folio_dirty_resolution(folio);
}
static inline unsigned long afs_folio_dirty(struct folio *folio, size_t from, size_t to)
{
unsigned int res = afs_folio_dirty_resolution(folio);
from >>= res;
to = (to - 1) >> res;
return (to << __AFS_FOLIO_PRIV_SHIFT) | from;
}
static inline unsigned long afs_folio_dirty_mmapped(unsigned long priv)
{
return priv | __AFS_FOLIO_PRIV_MMAPPED;
}
static inline bool afs_is_folio_dirty_mmapped(unsigned long priv)
{
return priv & __AFS_FOLIO_PRIV_MMAPPED;
}
#include <trace/events/afs.h> #include <trace/events/afs.h>
/*****************************************************************************/ /*****************************************************************************/
@ -1167,7 +1111,6 @@ extern int afs_release(struct inode *, struct file *);
extern int afs_fetch_data(struct afs_vnode *, struct afs_read *); extern int afs_fetch_data(struct afs_vnode *, struct afs_read *);
extern struct afs_read *afs_alloc_read(gfp_t); extern struct afs_read *afs_alloc_read(gfp_t);
extern void afs_put_read(struct afs_read *); extern void afs_put_read(struct afs_read *);
extern int afs_write_inode(struct inode *, struct writeback_control *);
static inline struct afs_read *afs_get_read(struct afs_read *req) static inline struct afs_read *afs_get_read(struct afs_read *req)
{ {
@ -1658,24 +1601,11 @@ extern int afs_check_volume_status(struct afs_volume *, struct afs_operation *);
/* /*
* write.c * write.c
*/ */
#ifdef CONFIG_AFS_FSCACHE
bool afs_dirty_folio(struct address_space *, struct folio *);
#else
#define afs_dirty_folio filemap_dirty_folio
#endif
extern int afs_write_begin(struct file *file, struct address_space *mapping,
loff_t pos, unsigned len,
struct page **pagep, void **fsdata);
extern int afs_write_end(struct file *file, struct address_space *mapping,
loff_t pos, unsigned len, unsigned copied,
struct page *page, void *fsdata);
extern int afs_writepage(struct page *, struct writeback_control *);
extern int afs_writepages(struct address_space *, struct writeback_control *); extern int afs_writepages(struct address_space *, struct writeback_control *);
extern ssize_t afs_file_write(struct kiocb *, struct iov_iter *);
extern int afs_fsync(struct file *, loff_t, loff_t, int); extern int afs_fsync(struct file *, loff_t, loff_t, int);
extern vm_fault_t afs_page_mkwrite(struct vm_fault *vmf); extern vm_fault_t afs_page_mkwrite(struct vm_fault *vmf);
extern void afs_prune_wb_keys(struct afs_vnode *); extern void afs_prune_wb_keys(struct afs_vnode *);
int afs_launder_folio(struct folio *); void afs_create_write_requests(struct netfs_io_request *wreq, loff_t start, size_t len);
/* /*
* xattr.c * xattr.c


@ -55,7 +55,7 @@ int afs_net_id;
static const struct super_operations afs_super_ops = { static const struct super_operations afs_super_ops = {
.statfs = afs_statfs, .statfs = afs_statfs,
.alloc_inode = afs_alloc_inode, .alloc_inode = afs_alloc_inode,
.write_inode = afs_write_inode, .write_inode = netfs_unpin_writeback,
.drop_inode = afs_drop_inode, .drop_inode = afs_drop_inode,
.destroy_inode = afs_destroy_inode, .destroy_inode = afs_destroy_inode,
.free_inode = afs_free_inode, .free_inode = afs_free_inode,


@ -12,309 +12,17 @@
#include <linux/writeback.h> #include <linux/writeback.h>
#include <linux/pagevec.h> #include <linux/pagevec.h>
#include <linux/netfs.h> #include <linux/netfs.h>
#include <trace/events/netfs.h>
#include "internal.h" #include "internal.h"
static int afs_writepages_region(struct address_space *mapping,
struct writeback_control *wbc,
loff_t start, loff_t end, loff_t *_next,
bool max_one_loop);
static void afs_write_to_cache(struct afs_vnode *vnode, loff_t start, size_t len,
loff_t i_size, bool caching);
#ifdef CONFIG_AFS_FSCACHE
/*
* Mark a page as having been made dirty and thus needing writeback. We also
* need to pin the cache object to write back to.
*/
bool afs_dirty_folio(struct address_space *mapping, struct folio *folio)
{
return fscache_dirty_folio(mapping, folio,
afs_vnode_cache(AFS_FS_I(mapping->host)));
}
static void afs_folio_start_fscache(bool caching, struct folio *folio)
{
if (caching)
folio_start_fscache(folio);
}
#else
static void afs_folio_start_fscache(bool caching, struct folio *folio)
{
}
#endif
/*
* Flush out a conflicting write. This may extend the write to the surrounding
* pages if also dirty and contiguous to the conflicting region..
*/
static int afs_flush_conflicting_write(struct address_space *mapping,
struct folio *folio)
{
struct writeback_control wbc = {
.sync_mode = WB_SYNC_ALL,
.nr_to_write = LONG_MAX,
.range_start = folio_pos(folio),
.range_end = LLONG_MAX,
};
loff_t next;
return afs_writepages_region(mapping, &wbc, folio_pos(folio), LLONG_MAX,
&next, true);
}
/*
* prepare to perform part of a write to a page
*/
int afs_write_begin(struct file *file, struct address_space *mapping,
loff_t pos, unsigned len,
struct page **_page, void **fsdata)
{
struct afs_vnode *vnode = AFS_FS_I(file_inode(file));
struct folio *folio;
unsigned long priv;
unsigned f, from;
unsigned t, to;
pgoff_t index;
int ret;
_enter("{%llx:%llu},%llx,%x",
vnode->fid.vid, vnode->fid.vnode, pos, len);
/* Prefetch area to be written into the cache if we're caching this
* file. We need to do this before we get a lock on the page in case
* there's more than one writer competing for the same cache block.
*/
ret = netfs_write_begin(&vnode->netfs, file, mapping, pos, len, &folio, fsdata);
if (ret < 0)
return ret;
index = folio_index(folio);
from = pos - index * PAGE_SIZE;
to = from + len;
try_again:
/* See if this page is already partially written in a way that we can
* merge the new write with.
*/
if (folio_test_private(folio)) {
priv = (unsigned long)folio_get_private(folio);
f = afs_folio_dirty_from(folio, priv);
t = afs_folio_dirty_to(folio, priv);
ASSERTCMP(f, <=, t);
if (folio_test_writeback(folio)) {
trace_afs_folio_dirty(vnode, tracepoint_string("alrdy"), folio);
folio_unlock(folio);
goto wait_for_writeback;
}
/* If the file is being filled locally, allow inter-write
* spaces to be merged into writes. If it's not, only write
* back what the user gives us.
*/
if (!test_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags) &&
(to < f || from > t))
goto flush_conflicting_write;
}
*_page = folio_file_page(folio, pos / PAGE_SIZE);
_leave(" = 0");
return 0;
/* The previous write and this write aren't adjacent or overlapping, so
* flush the page out.
*/
flush_conflicting_write:
trace_afs_folio_dirty(vnode, tracepoint_string("confl"), folio);
folio_unlock(folio);
ret = afs_flush_conflicting_write(mapping, folio);
if (ret < 0)
goto error;
wait_for_writeback:
ret = folio_wait_writeback_killable(folio);
if (ret < 0)
goto error;
ret = folio_lock_killable(folio);
if (ret < 0)
goto error;
goto try_again;
error:
folio_put(folio);
_leave(" = %d", ret);
return ret;
}
/*
* finalise part of a write to a page
*/
int afs_write_end(struct file *file, struct address_space *mapping,
loff_t pos, unsigned len, unsigned copied,
struct page *subpage, void *fsdata)
{
struct folio *folio = page_folio(subpage);
struct afs_vnode *vnode = AFS_FS_I(file_inode(file));
unsigned long priv;
unsigned int f, from = offset_in_folio(folio, pos);
unsigned int t, to = from + copied;
loff_t i_size, write_end_pos;
_enter("{%llx:%llu},{%lx}",
vnode->fid.vid, vnode->fid.vnode, folio_index(folio));
if (!folio_test_uptodate(folio)) {
if (copied < len) {
copied = 0;
goto out;
}
folio_mark_uptodate(folio);
}
if (copied == 0)
goto out;
write_end_pos = pos + copied;
i_size = i_size_read(&vnode->netfs.inode);
if (write_end_pos > i_size) {
write_seqlock(&vnode->cb_lock);
i_size = i_size_read(&vnode->netfs.inode);
if (write_end_pos > i_size)
afs_set_i_size(vnode, write_end_pos);
write_sequnlock(&vnode->cb_lock);
fscache_update_cookie(afs_vnode_cache(vnode), NULL, &write_end_pos);
}
if (folio_test_private(folio)) {
priv = (unsigned long)folio_get_private(folio);
f = afs_folio_dirty_from(folio, priv);
t = afs_folio_dirty_to(folio, priv);
if (from < f)
f = from;
if (to > t)
t = to;
priv = afs_folio_dirty(folio, f, t);
folio_change_private(folio, (void *)priv);
trace_afs_folio_dirty(vnode, tracepoint_string("dirty+"), folio);
} else {
priv = afs_folio_dirty(folio, from, to);
folio_attach_private(folio, (void *)priv);
trace_afs_folio_dirty(vnode, tracepoint_string("dirty"), folio);
}
if (folio_mark_dirty(folio))
_debug("dirtied %lx", folio_index(folio));
out:
folio_unlock(folio);
folio_put(folio);
return copied;
}
/*
* kill all the pages in the given range
*/
static void afs_kill_pages(struct address_space *mapping,
loff_t start, loff_t len)
{
struct afs_vnode *vnode = AFS_FS_I(mapping->host);
struct folio *folio;
pgoff_t index = start / PAGE_SIZE;
pgoff_t last = (start + len - 1) / PAGE_SIZE, next;
_enter("{%llx:%llu},%llx @%llx",
vnode->fid.vid, vnode->fid.vnode, len, start);
do {
_debug("kill %lx (to %lx)", index, last);
folio = filemap_get_folio(mapping, index);
if (IS_ERR(folio)) {
next = index + 1;
continue;
}
next = folio_next_index(folio);
folio_clear_uptodate(folio);
folio_end_writeback(folio);
folio_lock(folio);
generic_error_remove_folio(mapping, folio);
folio_unlock(folio);
folio_put(folio);
} while (index = next, index <= last);
_leave("");
}
/*
* Redirty all the pages in a given range.
*/
static void afs_redirty_pages(struct writeback_control *wbc,
struct address_space *mapping,
loff_t start, loff_t len)
{
struct afs_vnode *vnode = AFS_FS_I(mapping->host);
struct folio *folio;
pgoff_t index = start / PAGE_SIZE;
pgoff_t last = (start + len - 1) / PAGE_SIZE, next;
_enter("{%llx:%llu},%llx @%llx",
vnode->fid.vid, vnode->fid.vnode, len, start);
do {
_debug("redirty %llx @%llx", len, start);
folio = filemap_get_folio(mapping, index);
if (IS_ERR(folio)) {
next = index + 1;
continue;
}
next = index + folio_nr_pages(folio);
folio_redirty_for_writepage(wbc, folio);
folio_end_writeback(folio);
folio_put(folio);
} while (index = next, index <= last);
_leave("");
}
/*
 * completion of write to server
 */
static void afs_pages_written_back(struct afs_vnode *vnode, loff_t start, unsigned int len)
{
	struct address_space *mapping = vnode->netfs.inode.i_mapping;
	struct folio *folio;
	pgoff_t end;

	XA_STATE(xas, &mapping->i_pages, start / PAGE_SIZE);

	_enter("{%llx:%llu},{%x @%llx}",
	       vnode->fid.vid, vnode->fid.vnode, len, start);

	rcu_read_lock();

	end = (start + len - 1) / PAGE_SIZE;
	xas_for_each(&xas, folio, end) {
		if (!folio_test_writeback(folio)) {
			kdebug("bad %x @%llx page %lx %lx",
			       len, start, folio_index(folio), end);
			ASSERT(folio_test_writeback(folio));
		}

		trace_afs_folio_dirty(vnode, tracepoint_string("clear"), folio);
		folio_detach_private(folio);
		folio_end_writeback(folio);
	}

	rcu_read_unlock();

	afs_prune_wb_keys(vnode);
	_leave("");
}

@ -451,363 +159,53 @@ try_next_key:
	return afs_put_operation(op);
}
/*
* Extend the region to be written back to include subsequent contiguously
* dirty pages if possible, but don't sleep while doing so.
*
* If this page holds new content, then we can include filler zeros in the
* writeback.
*/
static void afs_extend_writeback(struct address_space *mapping,
struct afs_vnode *vnode,
long *_count,
loff_t start,
loff_t max_len,
bool new_content,
bool caching,
unsigned int *_len)
{
	struct folio_batch fbatch;
struct folio *folio;
unsigned long priv;
unsigned int psize, filler = 0;
unsigned int f, t;
loff_t len = *_len;
pgoff_t index = (start + len) / PAGE_SIZE;
bool stop = true;
unsigned int i;
XA_STATE(xas, &mapping->i_pages, index);
folio_batch_init(&fbatch);
do {
/* Firstly, we gather up a batch of contiguous dirty pages
* under the RCU read lock - but we can't clear the dirty flags
* there if any of those pages are mapped.
*/
rcu_read_lock();
xas_for_each(&xas, folio, ULONG_MAX) {
stop = true;
if (xas_retry(&xas, folio))
continue;
if (xa_is_value(folio))
break;
if (folio_index(folio) != index)
break;
if (!folio_try_get_rcu(folio)) {
xas_reset(&xas);
continue;
}
/* Has the page moved or been split? */
if (unlikely(folio != xas_reload(&xas))) {
folio_put(folio);
break;
}
if (!folio_trylock(folio)) {
folio_put(folio);
break;
}
if (!folio_test_dirty(folio) ||
folio_test_writeback(folio) ||
folio_test_fscache(folio)) {
folio_unlock(folio);
folio_put(folio);
break;
}
psize = folio_size(folio);
priv = (unsigned long)folio_get_private(folio);
f = afs_folio_dirty_from(folio, priv);
t = afs_folio_dirty_to(folio, priv);
if (f != 0 && !new_content) {
folio_unlock(folio);
folio_put(folio);
break;
}
len += filler + t;
filler = psize - t;
if (len >= max_len || *_count <= 0)
stop = true;
else if (t == psize || new_content)
stop = false;
index += folio_nr_pages(folio);
if (!folio_batch_add(&fbatch, folio))
break;
if (stop)
break;
}
if (!stop)
xas_pause(&xas);
rcu_read_unlock();
/* Now, if we obtained any folios, we can shift them to being
* writable and mark them for caching.
*/
if (!folio_batch_count(&fbatch))
break;
for (i = 0; i < folio_batch_count(&fbatch); i++) {
folio = fbatch.folios[i];
trace_afs_folio_dirty(vnode, tracepoint_string("store+"), folio);
if (!folio_clear_dirty_for_io(folio))
BUG();
folio_start_writeback(folio);
afs_folio_start_fscache(caching, folio);
*_count -= folio_nr_pages(folio);
folio_unlock(folio);
}
folio_batch_release(&fbatch);
cond_resched();
} while (!stop);
*_len = len;
}
/*
* Synchronously write back the locked page and any subsequent non-locked dirty
* pages.
*/
static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
struct writeback_control *wbc,
struct folio *folio,
loff_t start, loff_t end)
{
struct afs_vnode *vnode = AFS_FS_I(mapping->host);
struct iov_iter iter;
unsigned long priv;
unsigned int offset, to, len, max_len;
loff_t i_size = i_size_read(&vnode->netfs.inode);
bool new_content = test_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags);
bool caching = fscache_cookie_enabled(afs_vnode_cache(vnode));
long count = wbc->nr_to_write;
int ret;
_enter(",%lx,%llx-%llx", folio_index(folio), start, end);
folio_start_writeback(folio);
afs_folio_start_fscache(caching, folio);
count -= folio_nr_pages(folio);
/* Find all consecutive lockable dirty pages that have contiguous
* written regions, stopping when we find a page that is not
* immediately lockable, is not dirty or is missing, or we reach the
* end of the range.
*/
priv = (unsigned long)folio_get_private(folio);
offset = afs_folio_dirty_from(folio, priv);
to = afs_folio_dirty_to(folio, priv);
trace_afs_folio_dirty(vnode, tracepoint_string("store"), folio);
len = to - offset;
start += offset;
if (start < i_size) {
/* Trim the write to the EOF; the extra data is ignored. Also
* put an upper limit on the size of a single storedata op.
*/
max_len = 65536 * 4096;
max_len = min_t(unsigned long long, max_len, end - start + 1);
max_len = min_t(unsigned long long, max_len, i_size - start);
if (len < max_len &&
(to == folio_size(folio) || new_content))
afs_extend_writeback(mapping, vnode, &count,
start, max_len, new_content,
caching, &len);
len = min_t(loff_t, len, max_len);
}
/* We now have a contiguous set of dirty pages, each with writeback
* set; the first page is still locked at this point, but all the rest
* have been unlocked.
*/
folio_unlock(folio);
if (start < i_size) {
_debug("write back %x @%llx [%llx]", len, start, i_size);
/* Speculatively write to the cache. We have to fix this up
* later if the store fails.
*/
afs_write_to_cache(vnode, start, len, i_size, caching);
iov_iter_xarray(&iter, ITER_SOURCE, &mapping->i_pages, start, len);
ret = afs_store_data(vnode, &iter, start, false);
} else {
_debug("write discard %x @%llx [%llx]", len, start, i_size);
/* The dirty region was entirely beyond the EOF. */
fscache_clear_page_bits(mapping, start, len, caching);
afs_pages_written_back(vnode, start, len);
ret = 0;
}
switch (ret) {
case 0:
wbc->nr_to_write = count;
ret = len;
break;
default:
pr_notice("kAFS: Unexpected error from FS.StoreData %d\n", ret);
fallthrough;
case -EACCES:
case -EPERM:
case -ENOKEY:
case -EKEYEXPIRED:
case -EKEYREJECTED:
case -EKEYREVOKED:
case -ENETRESET:
afs_redirty_pages(wbc, mapping, start, len);
mapping_set_error(mapping, ret);
break;
case -EDQUOT:
case -ENOSPC:
afs_redirty_pages(wbc, mapping, start, len);
mapping_set_error(mapping, -ENOSPC);
break;
case -EROFS:
case -EIO:
case -EREMOTEIO:
case -EFBIG:
case -ENOENT:
case -ENOMEDIUM:
case -ENXIO:
trace_afs_file_error(vnode, ret, afs_file_error_writeback_fail);
afs_kill_pages(mapping, start, len);
mapping_set_error(mapping, ret);
break;
}
_leave(" = %d", ret);
return ret;
}
/*
* write a region of pages back to the server
*/
static int afs_writepages_region(struct address_space *mapping,
struct writeback_control *wbc,
loff_t start, loff_t end, loff_t *_next,
bool max_one_loop)
{
struct folio *folio;
struct folio_batch fbatch;
	ssize_t ret;
	unsigned int i;
	int n, skips = 0;

	_enter("%llx,%llx,", start, end);

	folio_batch_init(&fbatch);
	do {
		pgoff_t index = start / PAGE_SIZE;

		n = filemap_get_folios_tag(mapping, &index, end / PAGE_SIZE,
					   PAGECACHE_TAG_DIRTY, &fbatch);

		if (!n)
			break;
		for (i = 0; i < n; i++) {
			folio = fbatch.folios[i];
			start = folio_pos(folio); /* May regress with THPs */

			_debug("wback %lx", folio_index(folio));

			/* At this point we hold neither the i_pages lock nor the
* page lock: the page may be truncated or invalidated
* (changing page->mapping to NULL), or even swizzled
* back from swapper_space to tmpfs file mapping
*/
try_again:
if (wbc->sync_mode != WB_SYNC_NONE) {
ret = folio_lock_killable(folio);
if (ret < 0) {
folio_batch_release(&fbatch);
return ret;
}
} else {
if (!folio_trylock(folio))
continue;
}
			if (folio->mapping != mapping ||
			    !folio_test_dirty(folio)) {
				start += folio_size(folio);
				folio_unlock(folio);
continue;
}
if (folio_test_writeback(folio) ||
folio_test_fscache(folio)) {
folio_unlock(folio);
if (wbc->sync_mode != WB_SYNC_NONE) {
folio_wait_writeback(folio);
#ifdef CONFIG_AFS_FSCACHE
folio_wait_fscache(folio);
#endif
goto try_again;
}
start += folio_size(folio);
if (wbc->sync_mode == WB_SYNC_NONE) {
if (skips >= 5 || need_resched()) {
*_next = start;
folio_batch_release(&fbatch);
_leave(" = 0 [%llx]", *_next);
return 0;
}
skips++;
}
continue;
}
if (!folio_clear_dirty_for_io(folio))
BUG();
ret = afs_write_back_from_locked_folio(mapping, wbc,
folio, start, end);
if (ret < 0) {
_leave(" = %zd", ret);
folio_batch_release(&fbatch);
return ret;
}
start += ret;
}
folio_batch_release(&fbatch);
cond_resched();
} while (wbc->nr_to_write > 0);
*_next = start;
_leave(" = 0 [%llx]", *_next);
return 0;
}

static void afs_upload_to_server(struct netfs_io_subrequest *subreq)
{
	struct afs_vnode *vnode = AFS_FS_I(subreq->rreq->inode);
	ssize_t ret;

	_enter("%x[%x],%zx",
	       subreq->rreq->debug_id, subreq->debug_index, subreq->io_iter.count);

	trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
	ret = afs_store_data(vnode, &subreq->io_iter, subreq->start,
			     subreq->rreq->origin == NETFS_LAUNDER_WRITE);
	netfs_write_subrequest_terminated(subreq, ret < 0 ? ret : subreq->len,
					  false);
}

static void afs_upload_to_server_worker(struct work_struct *work)
{
	struct netfs_io_subrequest *subreq =
		container_of(work, struct netfs_io_subrequest, work);

	afs_upload_to_server(subreq);
}

/*
 * Set up write requests for a writeback slice. We need to add a write request
 * for each write we want to make.
 */
void afs_create_write_requests(struct netfs_io_request *wreq, loff_t start, size_t len)
{
	struct netfs_io_subrequest *subreq;

	_enter("%x,%llx-%llx", wreq->debug_id, start, start + len);

	subreq = netfs_create_write_request(wreq, NETFS_UPLOAD_TO_SERVER,
					    start, len, afs_upload_to_server_worker);
	if (subreq)
		netfs_queue_write_request(subreq);
}

/*
 * write some of the pending data back to the server
 */
int afs_writepages(struct address_space *mapping, struct writeback_control *wbc)
{
	struct afs_vnode *vnode = AFS_FS_I(mapping->host);
	loff_t start, next;
	int ret;

	_enter("");

	/* We have to be careful as we can end up racing with setattr()
	 * truncating the pagecache since the caller doesn't take a lock here
	 * to prevent it.

@ -817,68 +215,11 @@ int afs_writepages(struct address_space *mapping,
	else if (!down_read_trylock(&vnode->validate_lock))
		return 0;
	ret = netfs_writepages(mapping, wbc);

	if (wbc->range_cyclic) {
start = mapping->writeback_index * PAGE_SIZE;
ret = afs_writepages_region(mapping, wbc, start, LLONG_MAX,
&next, false);
if (ret == 0) {
mapping->writeback_index = next / PAGE_SIZE;
if (start > 0 && wbc->nr_to_write > 0) {
ret = afs_writepages_region(mapping, wbc, 0,
start, &next, false);
if (ret == 0)
mapping->writeback_index =
next / PAGE_SIZE;
}
}
} else if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX) {
ret = afs_writepages_region(mapping, wbc, 0, LLONG_MAX,
&next, false);
if (wbc->nr_to_write > 0 && ret == 0)
mapping->writeback_index = next / PAGE_SIZE;
} else {
ret = afs_writepages_region(mapping, wbc,
wbc->range_start, wbc->range_end,
&next, false);
}
	up_read(&vnode->validate_lock);
	_leave(" = %d", ret);
	return ret;
}
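To make the new shape of the afs writeback path easier to see, here is a rough sketch of the same pattern for a hypothetical netfs client; myfs_send_data() and the other myfs_* names are invented for illustration and error handling is trimmed, so treat this as a sketch of the interfaces used above rather than code from this series:

static void myfs_upload_worker(struct work_struct *work)
{
	struct netfs_io_subrequest *subreq =
		container_of(work, struct netfs_io_subrequest, work);
	ssize_t ret;

	/* Push the subrequest's iterator to the server, then tell netfslib
	 * how much was written (or hand it the error).
	 */
	ret = myfs_send_data(subreq->rreq->inode, &subreq->io_iter, subreq->start);
	netfs_write_subrequest_terminated(subreq, ret < 0 ? ret : subreq->len, false);
}

void myfs_create_write_requests(struct netfs_io_request *wreq, loff_t start, size_t len)
{
	struct netfs_io_subrequest *subreq;

	/* One upload subrequest per slice; netfslib drives the folio handling. */
	subreq = netfs_create_write_request(wreq, NETFS_UPLOAD_TO_SERVER,
					    start, len, myfs_upload_worker);
	if (subreq)
		netfs_queue_write_request(subreq);
}

int myfs_writepages(struct address_space *mapping, struct writeback_control *wbc)
{
	/* All the folio walking, batching and redirtying now lives in netfslib. */
	return netfs_writepages(mapping, wbc);
}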
/*
* write to an AFS file
*/
ssize_t afs_file_write(struct kiocb *iocb, struct iov_iter *from)
{
struct afs_vnode *vnode = AFS_FS_I(file_inode(iocb->ki_filp));
struct afs_file *af = iocb->ki_filp->private_data;
ssize_t result;
size_t count = iov_iter_count(from);
_enter("{%llx:%llu},{%zu},",
vnode->fid.vid, vnode->fid.vnode, count);
if (IS_SWAPFILE(&vnode->netfs.inode)) {
printk(KERN_INFO
"AFS: Attempt to write to active swap file!\n");
return -EBUSY;
}
if (!count)
return 0;
result = afs_validate(vnode, af->key);
if (result < 0)
return result;
result = generic_file_write_iter(iocb, from);
_leave(" = %zd", result);
return result;
}
/*
 * flush any dirty pages for this process, and check for write errors.
 * - the return status from this call provides a reliable indication of

@ -907,59 +248,11 @@ int afs_fsync(struct file *file, loff_t start, loff_t end, int datasync)
 */
vm_fault_t afs_page_mkwrite(struct vm_fault *vmf)
{
	struct folio *folio = page_folio(vmf->page);
	struct file *file = vmf->vma->vm_file;
struct inode *inode = file_inode(file);
struct afs_vnode *vnode = AFS_FS_I(inode);
struct afs_file *af = file->private_data;
unsigned long priv;
vm_fault_t ret = VM_FAULT_RETRY;
_enter("{{%llx:%llu}},{%lx}", vnode->fid.vid, vnode->fid.vnode, folio_index(folio)); if (afs_validate(AFS_FS_I(file_inode(file)), afs_file_key(file)) < 0)
return VM_FAULT_SIGBUS;
afs_validate(vnode, af->key); return netfs_page_mkwrite(vmf, NULL);
sb_start_pagefault(inode->i_sb);
/* Wait for the page to be written to the cache before we allow it to
* be modified. We then assume the entire page will need writing back.
*/
#ifdef CONFIG_AFS_FSCACHE
if (folio_test_fscache(folio) &&
folio_wait_fscache_killable(folio) < 0)
goto out;
#endif
if (folio_wait_writeback_killable(folio))
goto out;
if (folio_lock_killable(folio) < 0)
goto out;
/* We mustn't change folio->private until writeback is complete as that
* details the portion of the page we need to write back and we might
* need to redirty the page if there's a problem.
*/
if (folio_wait_writeback_killable(folio) < 0) {
folio_unlock(folio);
goto out;
}
priv = afs_folio_dirty(folio, 0, folio_size(folio));
priv = afs_folio_dirty_mmapped(priv);
if (folio_test_private(folio)) {
folio_change_private(folio, (void *)priv);
trace_afs_folio_dirty(vnode, tracepoint_string("mkwrite+"), folio);
} else {
folio_attach_private(folio, (void *)priv);
trace_afs_folio_dirty(vnode, tracepoint_string("mkwrite"), folio);
}
file_update_time(file);
ret = VM_FAULT_LOCKED;
out:
sb_end_pagefault(inode->i_sb);
return ret;
	if (afs_validate(AFS_FS_I(file_inode(file)), afs_file_key(file)) < 0)
		return VM_FAULT_SIGBUS;
	return netfs_page_mkwrite(vmf, NULL);
}
/*

@ -989,64 +282,3 @@ void afs_prune_wb_keys(struct afs_vnode *vnode)
		afs_put_wb_key(wbk);
	}
}
/*
* Clean up a page during invalidation.
*/
int afs_launder_folio(struct folio *folio)
{
struct afs_vnode *vnode = AFS_FS_I(folio_inode(folio));
struct iov_iter iter;
struct bio_vec bv;
unsigned long priv;
unsigned int f, t;
int ret = 0;
_enter("{%lx}", folio->index);
priv = (unsigned long)folio_get_private(folio);
if (folio_clear_dirty_for_io(folio)) {
f = 0;
t = folio_size(folio);
if (folio_test_private(folio)) {
f = afs_folio_dirty_from(folio, priv);
t = afs_folio_dirty_to(folio, priv);
}
bvec_set_folio(&bv, folio, t - f, f);
iov_iter_bvec(&iter, ITER_SOURCE, &bv, 1, bv.bv_len);
trace_afs_folio_dirty(vnode, tracepoint_string("launder"), folio);
ret = afs_store_data(vnode, &iter, folio_pos(folio) + f, true);
}
trace_afs_folio_dirty(vnode, tracepoint_string("laundered"), folio);
folio_detach_private(folio);
folio_wait_fscache(folio);
return ret;
}
/*
* Deal with the completion of writing the data to the cache.
*/
static void afs_write_to_cache_done(void *priv, ssize_t transferred_or_error,
bool was_async)
{
struct afs_vnode *vnode = priv;
if (IS_ERR_VALUE(transferred_or_error) &&
transferred_or_error != -ENOBUFS)
afs_invalidate_cache(vnode, 0);
}
/*
* Save the write to the cache also.
*/
static void afs_write_to_cache(struct afs_vnode *vnode,
loff_t start, size_t len, loff_t i_size,
bool caching)
{
fscache_write_to_cache(afs_vnode_cache(vnode),
vnode->netfs.inode.i_mapping, start, len, i_size,
afs_write_to_cache_done, vnode, caching);
}


@ -2,7 +2,7 @@
config CACHEFILES
	tristate "Filesystem caching on files"
	depends on FSCACHE && BLOCK
	depends on NETFS_SUPPORT && FSCACHE && BLOCK
	help
	  This permits use of a mounted filesystem as a cache for other
	  filesystems - primarily networking filesystems - thus allowing fast


@ -246,7 +246,7 @@ extern bool cachefiles_begin_operation(struct netfs_cache_resources *cres,
				       enum fscache_want_state want_state);
extern int __cachefiles_prepare_write(struct cachefiles_object *object,
				      struct file *file,
				      loff_t *_start, size_t *_len,
				      loff_t *_start, size_t *_len, size_t upper_len,
				      bool no_space_allocated_yet);
extern int __cachefiles_write(struct cachefiles_object *object,
			      struct file *file,


@ -517,18 +517,26 @@ cachefiles_prepare_ondemand_read(struct netfs_cache_resources *cres,
 */
int __cachefiles_prepare_write(struct cachefiles_object *object,
			       struct file *file,
			       loff_t *_start, size_t *_len,
			       loff_t *_start, size_t *_len, size_t upper_len,
			       bool no_space_allocated_yet)
{
	struct cachefiles_cache *cache = object->volume->cache;
	loff_t start = *_start, pos;
	size_t len = *_len, down;
	size_t len = *_len;
	int ret;

	/* Round to DIO size */
	down = start - round_down(start, PAGE_SIZE);
	*_start = start - down;
	*_len = round_up(down + len, PAGE_SIZE);
	start = round_down(*_start, PAGE_SIZE);
	if (start != *_start || *_len > upper_len) {
		/* Probably asked to cache a streaming write written into the
		 * pagecache when the cookie was temporarily out of service to
		 * culling.
		 */
		fscache_count_dio_misfit();
		return -ENOBUFS;
	}
	*_len = round_up(len, PAGE_SIZE);

	/* We need to work out whether there's sufficient disk space to perform
	 * the write - but we can skip that check if we have space already

@ -539,7 +547,7 @@ int __cachefiles_prepare_write(struct cachefiles_object *object,
	pos = cachefiles_inject_read_error();
	if (pos == 0)
		pos = vfs_llseek(file, *_start, SEEK_DATA);
		pos = vfs_llseek(file, start, SEEK_DATA);
	if (pos < 0 && pos >= (loff_t)-MAX_ERRNO) {
		if (pos == -ENXIO)
			goto check_space; /* Unallocated tail */

@ -547,7 +555,7 @@ int __cachefiles_prepare_write(struct cachefiles_object *object,
					  cachefiles_trace_seek_error);
		return pos;
	}
	if ((u64)pos >= (u64)*_start + *_len)
	if ((u64)pos >= (u64)start + *_len)
		goto check_space; /* Unallocated region */

	/* We have a block that's at least partially filled - if we're low on

@ -560,13 +568,13 @@ int __cachefiles_prepare_write(struct cachefiles_object *object,
	pos = cachefiles_inject_read_error();
	if (pos == 0)
		pos = vfs_llseek(file, *_start, SEEK_HOLE);
		pos = vfs_llseek(file, start, SEEK_HOLE);
	if (pos < 0 && pos >= (loff_t)-MAX_ERRNO) {
		trace_cachefiles_io_error(object, file_inode(file), pos,
					  cachefiles_trace_seek_error);
		return pos;
	}
	if ((u64)pos >= (u64)*_start + *_len)
	if ((u64)pos >= (u64)start + *_len)
		return 0; /* Fully allocated */

	/* Partially allocated, but insufficient space: cull. */

@ -574,7 +582,7 @@ int __cachefiles_prepare_write(struct cachefiles_object *object,
	ret = cachefiles_inject_remove_error();
	if (ret == 0)
		ret = vfs_fallocate(file, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
				    *_start, *_len);
				    start, *_len);
	if (ret < 0) {
		trace_cachefiles_io_error(object, file_inode(file), ret,
					  cachefiles_trace_fallocate_error);

@ -591,8 +599,8 @@ check_space:
}

static int cachefiles_prepare_write(struct netfs_cache_resources *cres,
				    loff_t *_start, size_t *_len, loff_t i_size,
				    bool no_space_allocated_yet)
				    loff_t *_start, size_t *_len, size_t upper_len,
				    loff_t i_size, bool no_space_allocated_yet)
{
	struct cachefiles_object *object = cachefiles_cres_object(cres);
	struct cachefiles_cache *cache = object->volume->cache;

@ -608,7 +616,7 @@ static int cachefiles_prepare_write(struct netfs_cache_resources *cres,
	cachefiles_begin_secure(cache, &saved_cred);
	ret = __cachefiles_prepare_write(object, cachefiles_cres_file(cres),
					 _start, _len,
					 _start, _len, upper_len,
					 no_space_allocated_yet);
	cachefiles_end_secure(cache, saved_cred);
	return ret;
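To make the new DIO-misfit check above concrete, here is a small, self-contained illustration of the arithmetic in plain userspace C; round_down_to()/round_up_to() are open-coded stand-ins for the kernel macros, and -1 stands in for the -ENOBUFS that __cachefiles_prepare_write() now returns after fscache_count_dio_misfit():

#include <stdio.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL

static unsigned long round_down_to(unsigned long x, unsigned long a) { return x & ~(a - 1); }
static unsigned long round_up_to(unsigned long x, unsigned long a) { return (x + a - 1) & ~(a - 1); }

/* Mirrors the check at the top of __cachefiles_prepare_write(): the write must
 * already be page-aligned and must not need to grow beyond upper_len.
 */
static int prepare_write_check(unsigned long start, size_t len, size_t upper_len)
{
	if (round_down_to(start, PAGE_SIZE) != start || len > upper_len)
		return -1;		/* -ENOBUFS in the kernel */
	printf("start=%lu len=%zu -> cache write of %lu bytes\n",
	       start, len, round_up_to(len, PAGE_SIZE));
	return 0;
}

int main(void)
{
	prepare_write_check(8192, 1000, 4096);	/* OK: len rounds up to 4096 */
	prepare_write_check(8300, 1000, 4096);	/* misfit: start not page-aligned */
	prepare_write_check(8192, 5000, 4096);	/* misfit: would exceed upper_len */
	return 0;
}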


@ -50,7 +50,7 @@ static ssize_t cachefiles_ondemand_fd_write_iter(struct kiocb *kiocb,
		return -ENOBUFS;

	cachefiles_begin_secure(cache, &saved_cred);
	ret = __cachefiles_prepare_write(object, file, &pos, &len, true);
	ret = __cachefiles_prepare_write(object, file, &pos, &len, len, true);
	cachefiles_end_secure(cache, saved_cred);
	if (ret < 0)
		return ret;


@ -159,27 +159,7 @@ static void ceph_invalidate_folio(struct folio *folio, size_t offset,
		ceph_put_snap_context(snapc);
	}

	folio_wait_fscache(folio);
	netfs_invalidate_folio(folio, offset, length);
}

static bool ceph_release_folio(struct folio *folio, gfp_t gfp)
{
	struct inode *inode = folio->mapping->host;
	struct ceph_client *cl = ceph_inode_to_client(inode);

	doutc(cl, "%llx.%llx idx %lu (%sdirty)\n", ceph_vinop(inode),
	      folio->index, folio_test_dirty(folio) ? "" : "not ");

	if (folio_test_private(folio))
		return false;

	if (folio_test_fscache(folio)) {
		if (current_is_kswapd() || !(gfp & __GFP_FS))
			return false;
		folio_wait_fscache(folio);
	}
	ceph_fscache_note_page_release(inode);
	return true;
}

static void ceph_netfs_expand_readahead(struct netfs_io_request *rreq)

@ -509,7 +489,6 @@ static void ceph_netfs_free_request(struct netfs_io_request *rreq)
const struct netfs_request_ops ceph_netfs_ops = {
	.init_request		= ceph_init_request,
	.free_request		= ceph_netfs_free_request,
	.begin_cache_operation	= ceph_begin_cache_operation,
	.issue_read		= ceph_netfs_issue_read,
	.expand_readahead	= ceph_netfs_expand_readahead,
	.clamp_length		= ceph_netfs_clamp_length,

@ -1586,7 +1565,7 @@ const struct address_space_operations ceph_aops = {
	.write_end		= ceph_write_end,
	.dirty_folio		= ceph_dirty_folio,
	.invalidate_folio	= ceph_invalidate_folio,
	.release_folio		= ceph_release_folio,
	.release_folio		= netfs_release_folio,
	.direct_IO		= noop_direct_IO,
};


@ -43,38 +43,19 @@ static inline void ceph_fscache_resize(struct inode *inode, loff_t to)
	}
}

static inline void ceph_fscache_unpin_writeback(struct inode *inode,
static inline int ceph_fscache_unpin_writeback(struct inode *inode,
						struct writeback_control *wbc)
{
	fscache_unpin_writeback(wbc, ceph_fscache_cookie(ceph_inode(inode)));
	return netfs_unpin_writeback(inode, wbc);
}

static inline int ceph_fscache_dirty_folio(struct address_space *mapping,
					   struct folio *folio)
{
	struct ceph_inode_info *ci = ceph_inode(mapping->host);

	return fscache_dirty_folio(mapping, folio, ceph_fscache_cookie(ci));
}
#define ceph_fscache_dirty_folio netfs_dirty_folio

static inline int ceph_begin_cache_operation(struct netfs_io_request *rreq)
{
	struct fscache_cookie *cookie = ceph_fscache_cookie(ceph_inode(rreq->inode));

	return fscache_begin_read_operation(&rreq->cache_resources, cookie);
}

static inline bool ceph_is_cache_enabled(struct inode *inode)
{
	return fscache_cookie_enabled(ceph_fscache_cookie(ceph_inode(inode)));
}

static inline void ceph_fscache_note_page_release(struct inode *inode)
{
	struct ceph_inode_info *ci = ceph_inode(inode);

	fscache_note_page_release(ceph_fscache_cookie(ci));
}
#else /* CONFIG_CEPH_FSCACHE */
static inline int ceph_fscache_register_fs(struct ceph_fs_client* fsc,
					   struct fs_context *fc)

@ -119,30 +100,18 @@ static inline void ceph_fscache_resize(struct inode *inode, loff_t to)
{
}

static inline void ceph_fscache_unpin_writeback(struct inode *inode,
static inline int ceph_fscache_unpin_writeback(struct inode *inode,
						struct writeback_control *wbc)
{
	return 0;
}

static inline int ceph_fscache_dirty_folio(struct address_space *mapping,
					   struct folio *folio)
{
	return filemap_dirty_folio(mapping, folio);
}
#define ceph_fscache_dirty_folio filemap_dirty_folio

static inline bool ceph_is_cache_enabled(struct inode *inode)
{
	return false;
}

static inline int ceph_begin_cache_operation(struct netfs_io_request *rreq)
{
	return -ENOBUFS;
}

static inline void ceph_fscache_note_page_release(struct inode *inode)
{
}
#endif /* CONFIG_CEPH_FSCACHE */

#endif


@ -574,7 +574,7 @@ struct inode *ceph_alloc_inode(struct super_block *sb)
	doutc(fsc->client, "%p\n", &ci->netfs.inode);

	/* Set parameters for the netfs library */
	netfs_inode_init(&ci->netfs, &ceph_netfs_ops);
	netfs_inode_init(&ci->netfs, &ceph_netfs_ops, false);

	spin_lock_init(&ci->i_ceph_lock);

@ -694,7 +694,7 @@ void ceph_evict_inode(struct inode *inode)
	percpu_counter_dec(&mdsc->metric.total_inodes);

	truncate_inode_pages_final(&inode->i_data);
	if (inode->i_state & I_PINNING_FSCACHE_WB)
	if (inode->i_state & I_PINNING_NETFS_WB)
		ceph_fscache_unuse_cookie(inode, true);
	clear_inode(inode);


@ -114,8 +114,11 @@ config EROFS_FS_ZIP_DEFLATE
config EROFS_FS_ONDEMAND
	bool "EROFS fscache-based on-demand read support"
	depends on CACHEFILES_ONDEMAND && (EROFS_FS=m && FSCACHE || EROFS_FS=y && FSCACHE=y)
	default n
	depends on EROFS_FS
	select NETFS_SUPPORT
	select FSCACHE
	select CACHEFILES
	select CACHEFILES_ONDEMAND
	help
	  This permits EROFS to use fscache-backed data blobs with on-demand
	  read support.


@ -1675,11 +1675,11 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
	if (mapping_tagged(mapping, PAGECACHE_TAG_DIRTY))
		inode->i_state |= I_DIRTY_PAGES;
	else if (unlikely(inode->i_state & I_PINNING_FSCACHE_WB)) {
	else if (unlikely(inode->i_state & I_PINNING_NETFS_WB)) {
		if (!(inode->i_state & I_DIRTY_PAGES)) {
			inode->i_state &= ~I_PINNING_FSCACHE_WB;
			inode->i_state &= ~I_PINNING_NETFS_WB;
			wbc->unpinned_fscache_wb = true;
			wbc->unpinned_netfs_wb = true;
			dirty |= I_PINNING_FSCACHE_WB; /* Cause write_inode */
			dirty |= I_PINNING_NETFS_WB; /* Cause write_inode */
		}
	}

@ -1691,7 +1691,7 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
		if (ret == 0)
			ret = err;
	}
	wbc->unpinned_fscache_wb = false;
	wbc->unpinned_netfs_wb = false;
	trace_writeback_single_inode(inode, wbc, nr_to_write);
	return ret;
}


@ -1,40 +0,0 @@
# SPDX-License-Identifier: GPL-2.0-only
config FSCACHE
tristate "General filesystem local caching manager"
select NETFS_SUPPORT
help
This option enables a generic filesystem caching manager that can be
used by various network and other filesystems to cache data locally.
Different sorts of caches can be plugged in, depending on the
resources available.
See Documentation/filesystems/caching/fscache.rst for more information.
config FSCACHE_STATS
bool "Gather statistical information on local caching"
depends on FSCACHE && PROC_FS
select NETFS_STATS
help
This option causes statistical information to be gathered on local
caching and exported through file:
/proc/fs/fscache/stats
The gathering of statistics adds a certain amount of overhead to
execution as there are a quite a few stats gathered, and on a
multi-CPU system these may be on cachelines that keep bouncing
between CPUs. On the other hand, the stats are very useful for
debugging purposes. Saying 'Y' here is recommended.
See Documentation/filesystems/caching/fscache.rst for more information.
config FSCACHE_DEBUG
bool "Debug FS-Cache"
depends on FSCACHE
help
This permits debugging to be dynamically enabled in the local caching
management module. If this is set, the debugging output may be
enabled by setting bits in /sys/modules/fscache/parameter/debug.
See Documentation/filesystems/caching/fscache.rst for more information.


@ -1,16 +0,0 @@
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for general filesystem caching code
#
fscache-y := \
cache.o \
cookie.o \
io.o \
main.o \
volume.o
fscache-$(CONFIG_PROC_FS) += proc.o
fscache-$(CONFIG_FSCACHE_STATS) += stats.o
obj-$(CONFIG_FSCACHE) := fscache.o


@ -1,277 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/* Internal definitions for FS-Cache
*
* Copyright (C) 2021 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*/
#ifdef pr_fmt
#undef pr_fmt
#endif
#define pr_fmt(fmt) "FS-Cache: " fmt
#include <linux/slab.h>
#include <linux/fscache-cache.h>
#include <trace/events/fscache.h>
#include <linux/sched.h>
#include <linux/seq_file.h>
/*
* cache.c
*/
#ifdef CONFIG_PROC_FS
extern const struct seq_operations fscache_caches_seq_ops;
#endif
bool fscache_begin_cache_access(struct fscache_cache *cache, enum fscache_access_trace why);
void fscache_end_cache_access(struct fscache_cache *cache, enum fscache_access_trace why);
struct fscache_cache *fscache_lookup_cache(const char *name, bool is_cache);
void fscache_put_cache(struct fscache_cache *cache, enum fscache_cache_trace where);
static inline enum fscache_cache_state fscache_cache_state(const struct fscache_cache *cache)
{
return smp_load_acquire(&cache->state);
}
static inline bool fscache_cache_is_live(const struct fscache_cache *cache)
{
return fscache_cache_state(cache) == FSCACHE_CACHE_IS_ACTIVE;
}
static inline void fscache_set_cache_state(struct fscache_cache *cache,
enum fscache_cache_state new_state)
{
smp_store_release(&cache->state, new_state);
}
static inline bool fscache_set_cache_state_maybe(struct fscache_cache *cache,
enum fscache_cache_state old_state,
enum fscache_cache_state new_state)
{
return try_cmpxchg_release(&cache->state, &old_state, new_state);
}
/*
* cookie.c
*/
extern struct kmem_cache *fscache_cookie_jar;
#ifdef CONFIG_PROC_FS
extern const struct seq_operations fscache_cookies_seq_ops;
#endif
extern struct timer_list fscache_cookie_lru_timer;
extern void fscache_print_cookie(struct fscache_cookie *cookie, char prefix);
extern bool fscache_begin_cookie_access(struct fscache_cookie *cookie,
enum fscache_access_trace why);
static inline void fscache_see_cookie(struct fscache_cookie *cookie,
enum fscache_cookie_trace where)
{
trace_fscache_cookie(cookie->debug_id, refcount_read(&cookie->ref),
where);
}
/*
* main.c
*/
extern unsigned fscache_debug;
extern unsigned int fscache_hash(unsigned int salt, const void *data, size_t len);
/*
* proc.c
*/
#ifdef CONFIG_PROC_FS
extern int __init fscache_proc_init(void);
extern void fscache_proc_cleanup(void);
#else
#define fscache_proc_init() (0)
#define fscache_proc_cleanup() do {} while (0)
#endif
/*
* stats.c
*/
#ifdef CONFIG_FSCACHE_STATS
extern atomic_t fscache_n_volumes;
extern atomic_t fscache_n_volumes_collision;
extern atomic_t fscache_n_volumes_nomem;
extern atomic_t fscache_n_cookies;
extern atomic_t fscache_n_cookies_lru;
extern atomic_t fscache_n_cookies_lru_expired;
extern atomic_t fscache_n_cookies_lru_removed;
extern atomic_t fscache_n_cookies_lru_dropped;
extern atomic_t fscache_n_acquires;
extern atomic_t fscache_n_acquires_ok;
extern atomic_t fscache_n_acquires_oom;
extern atomic_t fscache_n_invalidates;
extern atomic_t fscache_n_relinquishes;
extern atomic_t fscache_n_relinquishes_retire;
extern atomic_t fscache_n_relinquishes_dropped;
extern atomic_t fscache_n_resizes;
extern atomic_t fscache_n_resizes_null;
static inline void fscache_stat(atomic_t *stat)
{
atomic_inc(stat);
}
static inline void fscache_stat_d(atomic_t *stat)
{
atomic_dec(stat);
}
#define __fscache_stat(stat) (stat)
int fscache_stats_show(struct seq_file *m, void *v);
#else
#define __fscache_stat(stat) (NULL)
#define fscache_stat(stat) do {} while (0)
#define fscache_stat_d(stat) do {} while (0)
#endif
/*
* volume.c
*/
#ifdef CONFIG_PROC_FS
extern const struct seq_operations fscache_volumes_seq_ops;
#endif
struct fscache_volume *fscache_get_volume(struct fscache_volume *volume,
enum fscache_volume_trace where);
void fscache_put_volume(struct fscache_volume *volume,
enum fscache_volume_trace where);
bool fscache_begin_volume_access(struct fscache_volume *volume,
struct fscache_cookie *cookie,
enum fscache_access_trace why);
void fscache_create_volume(struct fscache_volume *volume, bool wait);
/*****************************************************************************/
/*
* debug tracing
*/
#define dbgprintk(FMT, ...) \
printk("[%-6.6s] "FMT"\n", current->comm, ##__VA_ARGS__)
#define kenter(FMT, ...) dbgprintk("==> %s("FMT")", __func__, ##__VA_ARGS__)
#define kleave(FMT, ...) dbgprintk("<== %s()"FMT"", __func__, ##__VA_ARGS__)
#define kdebug(FMT, ...) dbgprintk(FMT, ##__VA_ARGS__)
#define kjournal(FMT, ...) no_printk(FMT, ##__VA_ARGS__)
#ifdef __KDEBUG
#define _enter(FMT, ...) kenter(FMT, ##__VA_ARGS__)
#define _leave(FMT, ...) kleave(FMT, ##__VA_ARGS__)
#define _debug(FMT, ...) kdebug(FMT, ##__VA_ARGS__)
#elif defined(CONFIG_FSCACHE_DEBUG)
#define _enter(FMT, ...) \
do { \
if (__do_kdebug(ENTER)) \
kenter(FMT, ##__VA_ARGS__); \
} while (0)
#define _leave(FMT, ...) \
do { \
if (__do_kdebug(LEAVE)) \
kleave(FMT, ##__VA_ARGS__); \
} while (0)
#define _debug(FMT, ...) \
do { \
if (__do_kdebug(DEBUG)) \
kdebug(FMT, ##__VA_ARGS__); \
} while (0)
#else
#define _enter(FMT, ...) no_printk("==> %s("FMT")", __func__, ##__VA_ARGS__)
#define _leave(FMT, ...) no_printk("<== %s()"FMT"", __func__, ##__VA_ARGS__)
#define _debug(FMT, ...) no_printk(FMT, ##__VA_ARGS__)
#endif
/*
* determine whether a particular optional debugging point should be logged
* - we need to go through three steps to persuade cpp to correctly join the
* shorthand in FSCACHE_DEBUG_LEVEL with its prefix
*/
#define ____do_kdebug(LEVEL, POINT) \
unlikely((fscache_debug & \
(FSCACHE_POINT_##POINT << (FSCACHE_DEBUG_ ## LEVEL * 3))))
#define ___do_kdebug(LEVEL, POINT) \
____do_kdebug(LEVEL, POINT)
#define __do_kdebug(POINT) \
___do_kdebug(FSCACHE_DEBUG_LEVEL, POINT)
#define FSCACHE_DEBUG_CACHE 0
#define FSCACHE_DEBUG_COOKIE 1
#define FSCACHE_DEBUG_OBJECT 2
#define FSCACHE_DEBUG_OPERATION 3
#define FSCACHE_POINT_ENTER 1
#define FSCACHE_POINT_LEAVE 2
#define FSCACHE_POINT_DEBUG 4
#ifndef FSCACHE_DEBUG_LEVEL
#define FSCACHE_DEBUG_LEVEL CACHE
#endif
/*
* assertions
*/
#if 1 /* defined(__KDEBUGALL) */
#define ASSERT(X) \
do { \
if (unlikely(!(X))) { \
pr_err("\n"); \
pr_err("Assertion failed\n"); \
BUG(); \
} \
} while (0)
#define ASSERTCMP(X, OP, Y) \
do { \
if (unlikely(!((X) OP (Y)))) { \
pr_err("\n"); \
pr_err("Assertion failed\n"); \
pr_err("%lx " #OP " %lx is false\n", \
(unsigned long)(X), (unsigned long)(Y)); \
BUG(); \
} \
} while (0)
#define ASSERTIF(C, X) \
do { \
if (unlikely((C) && !(X))) { \
pr_err("\n"); \
pr_err("Assertion failed\n"); \
BUG(); \
} \
} while (0)
#define ASSERTIFCMP(C, X, OP, Y) \
do { \
if (unlikely((C) && !((X) OP (Y)))) { \
pr_err("\n"); \
pr_err("Assertion failed\n"); \
pr_err("%lx " #OP " %lx is false\n", \
(unsigned long)(X), (unsigned long)(Y)); \
BUG(); \
} \
} while (0)
#else
#define ASSERT(X) do {} while (0)
#define ASSERTCMP(X, OP, Y) do {} while (0)
#define ASSERTIF(C, X) do {} while (0)
#define ASSERTIFCMP(C, X, OP, Y) do {} while (0)
#endif /* assert or not */


@ -21,3 +21,42 @@ config NETFS_STATS
	  multi-CPU system these may be on cachelines that keep bouncing
	  between CPUs. On the other hand, the stats are very useful for
	  debugging purposes. Saying 'Y' here is recommended.
config FSCACHE
bool "General filesystem local caching manager"
depends on NETFS_SUPPORT
help
This option enables a generic filesystem caching manager that can be
used by various network and other filesystems to cache data locally.
Different sorts of caches can be plugged in, depending on the
resources available.
See Documentation/filesystems/caching/fscache.rst for more information.
config FSCACHE_STATS
bool "Gather statistical information on local caching"
depends on FSCACHE && PROC_FS
select NETFS_STATS
help
This option causes statistical information to be gathered on local
caching and exported through file:
/proc/fs/fscache/stats
The gathering of statistics adds a certain amount of overhead to
execution as there are a quite a few stats gathered, and on a
multi-CPU system these may be on cachelines that keep bouncing
between CPUs. On the other hand, the stats are very useful for
debugging purposes. Saying 'Y' here is recommended.
See Documentation/filesystems/caching/fscache.rst for more information.
config FSCACHE_DEBUG
bool "Debug FS-Cache"
depends on FSCACHE
help
This permits debugging to be dynamically enabled in the local caching
management module. If this is set, the debugging output may be
enabled by setting bits in /sys/modules/fscache/parameter/debug.
See Documentation/filesystems/caching/fscache.rst for more information.


@ -2,11 +2,29 @@
netfs-y := \
	buffered_read.o \
	buffered_write.o \
	direct_read.o \
	direct_write.o \
	io.o \
	iterator.o \
	locking.o \
	main.o \
	objects.o
	misc.o \
	objects.o \
	output.o

netfs-$(CONFIG_NETFS_STATS) += stats.o

obj-$(CONFIG_NETFS_SUPPORT) := netfs.o
netfs-$(CONFIG_FSCACHE) += \
	fscache_cache.o \
	fscache_cookie.o \
	fscache_io.o \
	fscache_main.o \
	fscache_volume.o

ifeq ($(CONFIG_PROC_FS),y)
netfs-$(CONFIG_FSCACHE) += fscache_proc.o
endif

netfs-$(CONFIG_FSCACHE_STATS) += fscache_stats.o

obj-$(CONFIG_NETFS_SUPPORT) += netfs.o


@ -16,6 +16,7 @@
void netfs_rreq_unlock_folios(struct netfs_io_request *rreq)
{
	struct netfs_io_subrequest *subreq;
	struct netfs_folio *finfo;
	struct folio *folio;
	pgoff_t start_page = rreq->start / PAGE_SIZE;
	pgoff_t last_page = ((rreq->start + rreq->len) / PAGE_SIZE) - 1;

@ -63,6 +64,7 @@ void netfs_rreq_unlock_folios(struct netfs_io_request *rreq)
			break;
		}

		if (!folio_started && test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) {
			trace_netfs_folio(folio, netfs_folio_trace_copy_to_cache);
			folio_start_fscache(folio);
			folio_started = true;
		}
@ -86,6 +88,15 @@ void netfs_rreq_unlock_folios(struct netfs_io_request *rreq)
		if (!pg_failed) {
			flush_dcache_folio(folio);
finfo = netfs_folio_info(folio);
if (finfo) {
trace_netfs_folio(folio, netfs_folio_trace_filled_gaps);
if (finfo->netfs_group)
folio_change_private(folio, finfo->netfs_group);
else
folio_detach_private(folio);
kfree(finfo);
}
			folio_mark_uptodate(folio);
		}
@ -147,6 +158,15 @@ static void netfs_rreq_expand(struct netfs_io_request *rreq,
	}
}
/*
* Begin an operation, and fetch the stored zero point value from the cookie if
* available.
*/
static int netfs_begin_cache_read(struct netfs_io_request *rreq, struct netfs_inode *ctx)
{
return fscache_begin_read_operation(&rreq->cache_resources, netfs_i_cookie(ctx));
}
/**
 * netfs_readahead - Helper to manage a read request
 * @ractl: The description of the readahead request

@ -180,11 +200,9 @@ void netfs_readahead(struct readahead_control *ractl)
	if (IS_ERR(rreq))
		return;

	if (ctx->ops->begin_cache_operation) {
		ret = ctx->ops->begin_cache_operation(rreq);
		if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS)
			goto cleanup_free;
	}
	ret = netfs_begin_cache_read(rreq, ctx);
	if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS)
		goto cleanup_free;

	netfs_stat(&netfs_n_rh_readahead);
	trace_netfs_read(rreq, readahead_pos(ractl), readahead_length(ractl),

@ -192,6 +210,10 @@ void netfs_readahead(struct readahead_control *ractl)
	netfs_rreq_expand(rreq, ractl);
/* Set up the output buffer */
iov_iter_xarray(&rreq->iter, ITER_DEST, &ractl->mapping->i_pages,
rreq->start, rreq->len);
	/* Drop the refs on the folios here rather than in the cache or
	 * filesystem. The locks will be dropped in netfs_rreq_unlock().
	 */

@ -199,6 +221,7 @@ void netfs_readahead(struct readahead_control *ractl)
		;

	netfs_begin_read(rreq, false);
	netfs_put_request(rreq, false, netfs_rreq_trace_put_return);
	return;

cleanup_free:

@ -226,6 +249,7 @@ int netfs_read_folio(struct file *file, struct folio *folio)
	struct address_space *mapping = folio_file_mapping(folio);
	struct netfs_io_request *rreq;
	struct netfs_inode *ctx = netfs_inode(mapping->host);
	struct folio *sink = NULL;
	int ret;

	_enter("%lx", folio_index(folio));

@ -238,15 +262,64 @@ int netfs_read_folio(struct file *file, struct folio *folio)
		goto alloc_error;
	}

	if (ctx->ops->begin_cache_operation) {
		ret = ctx->ops->begin_cache_operation(rreq);
		if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS)
			goto discard;
	}
	ret = netfs_begin_cache_read(rreq, ctx);
	if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS)
		goto discard;

	netfs_stat(&netfs_n_rh_readpage);
	trace_netfs_read(rreq, rreq->start, rreq->len, netfs_read_trace_readpage);
	return netfs_begin_read(rreq, true);
/* Set up the output buffer */
if (folio_test_dirty(folio)) {
/* Handle someone trying to read from an unflushed streaming
* write. We fiddle the buffer so that a gap at the beginning
* and/or a gap at the end get copied to, but the middle is
* discarded.
*/
struct netfs_folio *finfo = netfs_folio_info(folio);
struct bio_vec *bvec;
unsigned int from = finfo->dirty_offset;
unsigned int to = from + finfo->dirty_len;
unsigned int off = 0, i = 0;
size_t flen = folio_size(folio);
size_t nr_bvec = flen / PAGE_SIZE + 2;
size_t part;
ret = -ENOMEM;
bvec = kmalloc_array(nr_bvec, sizeof(*bvec), GFP_KERNEL);
if (!bvec)
goto discard;
sink = folio_alloc(GFP_KERNEL, 0);
if (!sink)
goto discard;
trace_netfs_folio(folio, netfs_folio_trace_read_gaps);
rreq->direct_bv = bvec;
rreq->direct_bv_count = nr_bvec;
if (from > 0) {
bvec_set_folio(&bvec[i++], folio, from, 0);
off = from;
}
while (off < to) {
part = min_t(size_t, to - off, PAGE_SIZE);
bvec_set_folio(&bvec[i++], sink, part, 0);
off += part;
}
if (to < flen)
bvec_set_folio(&bvec[i++], folio, flen - to, to);
iov_iter_bvec(&rreq->iter, ITER_DEST, bvec, i, rreq->len);
} else {
iov_iter_xarray(&rreq->iter, ITER_DEST, &mapping->i_pages,
rreq->start, rreq->len);
}
ret = netfs_begin_read(rreq, true);
if (sink)
folio_put(sink);
netfs_put_request(rreq, false, netfs_rreq_trace_put_return);
return ret < 0 ? ret : 0;
discard:
	netfs_put_request(rreq, false, netfs_rreq_trace_put_discard);

@ -390,11 +463,9 @@ retry:
	rreq->no_unlock_folio = folio_index(folio);
	__set_bit(NETFS_RREQ_NO_UNLOCK_FOLIO, &rreq->flags);

	if (ctx->ops->begin_cache_operation) {
		ret = ctx->ops->begin_cache_operation(rreq);
		if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS)
			goto error_put;
	}
	ret = netfs_begin_cache_read(rreq, ctx);
	if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS)
		goto error_put;

	netfs_stat(&netfs_n_rh_write_begin);
	trace_netfs_read(rreq, pos, len, netfs_read_trace_write_begin);

@ -405,6 +476,10 @@ retry:
	ractl._nr_pages = folio_nr_pages(folio);
	netfs_rreq_expand(rreq, &ractl);

	/* Set up the output buffer */
	iov_iter_xarray(&rreq->iter, ITER_DEST, &mapping->i_pages,
			rreq->start, rreq->len);

	/* We hold the folio locks, so we can drop the references */
	folio_get(folio);
	while (readahead_folio(&ractl))

@ -413,6 +488,7 @@ retry:
	ret = netfs_begin_read(rreq, true);
	if (ret < 0)
		goto error;
	netfs_put_request(rreq, false, netfs_rreq_trace_put_return);

have_folio:
	ret = folio_wait_fscache_killable(folio);

@ -434,3 +510,124 @@ error:
	return ret;
}
EXPORT_SYMBOL(netfs_write_begin);
/*
* Preload the data into a page we're proposing to write into.
*/
int netfs_prefetch_for_write(struct file *file, struct folio *folio,
size_t offset, size_t len)
{
struct netfs_io_request *rreq;
struct address_space *mapping = folio_file_mapping(folio);
struct netfs_inode *ctx = netfs_inode(mapping->host);
unsigned long long start = folio_pos(folio);
size_t flen = folio_size(folio);
int ret;
_enter("%zx @%llx", flen, start);
ret = -ENOMEM;
rreq = netfs_alloc_request(mapping, file, start, flen,
NETFS_READ_FOR_WRITE);
if (IS_ERR(rreq)) {
ret = PTR_ERR(rreq);
goto error;
}
rreq->no_unlock_folio = folio_index(folio);
__set_bit(NETFS_RREQ_NO_UNLOCK_FOLIO, &rreq->flags);
ret = netfs_begin_cache_read(rreq, ctx);
if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS)
goto error_put;
netfs_stat(&netfs_n_rh_write_begin);
trace_netfs_read(rreq, start, flen, netfs_read_trace_prefetch_for_write);
/* Set up the output buffer */
iov_iter_xarray(&rreq->iter, ITER_DEST, &mapping->i_pages,
rreq->start, rreq->len);
ret = netfs_begin_read(rreq, true);
netfs_put_request(rreq, false, netfs_rreq_trace_put_return);
return ret;
error_put:
netfs_put_request(rreq, false, netfs_rreq_trace_put_discard);
error:
_leave(" = %d", ret);
return ret;
}
/**
* netfs_buffered_read_iter - Filesystem buffered I/O read routine
* @iocb: kernel I/O control block
* @iter: destination for the data read
*
* This is the ->read_iter() routine for all filesystems that can use the page
* cache directly.
*
* The IOCB_NOWAIT flag in iocb->ki_flags indicates that -EAGAIN shall be
* returned when no data can be read without waiting for I/O requests to
* complete; it doesn't prevent readahead.
*
* The IOCB_NOIO flag in iocb->ki_flags indicates that no new I/O requests
* shall be made for the read or for readahead. When no data can be read,
* -EAGAIN shall be returned. When readahead would be triggered, a partial,
* possibly empty read shall be returned.
*
* Return:
* * number of bytes copied, even for partial reads
* * negative error code (or 0 if IOCB_NOIO) if nothing was read
*/
ssize_t netfs_buffered_read_iter(struct kiocb *iocb, struct iov_iter *iter)
{
struct inode *inode = file_inode(iocb->ki_filp);
struct netfs_inode *ictx = netfs_inode(inode);
ssize_t ret;
if (WARN_ON_ONCE((iocb->ki_flags & IOCB_DIRECT) ||
test_bit(NETFS_ICTX_UNBUFFERED, &ictx->flags)))
return -EINVAL;
ret = netfs_start_io_read(inode);
if (ret == 0) {
ret = filemap_read(iocb, iter, 0);
netfs_end_io_read(inode);
}
return ret;
}
EXPORT_SYMBOL(netfs_buffered_read_iter);
/**
* netfs_file_read_iter - Generic filesystem read routine
* @iocb: kernel I/O control block
* @iter: destination for the data read
*
* This is the ->read_iter() routine for all filesystems that can use the page
* cache directly.
*
* The IOCB_NOWAIT flag in iocb->ki_flags indicates that -EAGAIN shall be
* returned when no data can be read without waiting for I/O requests to
* complete; it doesn't prevent readahead.
*
* The IOCB_NOIO flag in iocb->ki_flags indicates that no new I/O requests
* shall be made for the read or for readahead. When no data can be read,
* -EAGAIN shall be returned. When readahead would be triggered, a partial,
* possibly empty read shall be returned.
*
* Return:
* * number of bytes copied, even for partial reads
* * negative error code (or 0 if IOCB_NOIO) if nothing was read
*/
ssize_t netfs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
{
struct netfs_inode *ictx = netfs_inode(iocb->ki_filp->f_mapping->host);
if ((iocb->ki_flags & IOCB_DIRECT) ||
test_bit(NETFS_ICTX_UNBUFFERED, &ictx->flags))
return netfs_unbuffered_read_iter(iocb, iter);
return netfs_buffered_read_iter(iocb, iter);
}
EXPORT_SYMBOL(netfs_file_read_iter);
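For orientation, a filesystem converted to netfslib can either call these exported helpers from its own ->read_iter() or point its file_operations at them directly; the afs and 9p conversions in this series do the equivalent. The sketch below is illustrative only, with "myfs" and its helper functions invented names rather than anything from this patch set:

static const struct file_operations myfs_file_operations = {
	.open		= myfs_open,			/* hypothetical */
	.release	= myfs_release,			/* hypothetical */
	.llseek		= generic_file_llseek,
	.read_iter	= netfs_file_read_iter,		/* buffered or unbuffered/DIO read via netfslib */
	.write_iter	= myfs_file_write,		/* hypothetical; would wrap the netfs write helpers */
	.mmap		= myfs_file_mmap,		/* hypothetical */
	.fsync		= myfs_fsync,			/* hypothetical */
};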

fs/netfs/buffered_write.c: new file, 1253 lines (diff omitted here as too large).

fs/netfs/direct_read.c: new file, 125 lines.

@ -0,0 +1,125 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/* Direct I/O support.
*
* Copyright (C) 2023 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*/
#include <linux/export.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/slab.h>
#include <linux/uio.h>
#include <linux/sched/mm.h>
#include <linux/task_io_accounting_ops.h>
#include <linux/netfs.h>
#include "internal.h"
/**
* netfs_unbuffered_read_iter_locked - Perform an unbuffered or direct I/O read
* @iocb: The I/O control descriptor describing the read
* @iter: The output buffer (also specifies read length)
*
* Perform an unbuffered I/O or direct I/O from the file in @iocb to the
* output buffer. No use is made of the pagecache.
*
* The caller must hold any appropriate locks.
*/
static ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_iter *iter)
{
struct netfs_io_request *rreq;
ssize_t ret;
size_t orig_count = iov_iter_count(iter);
bool async = !is_sync_kiocb(iocb);
_enter("");
if (!orig_count)
return 0; /* Don't update atime */
ret = kiocb_write_and_wait(iocb, orig_count);
if (ret < 0)
return ret;
file_accessed(iocb->ki_filp);
rreq = netfs_alloc_request(iocb->ki_filp->f_mapping, iocb->ki_filp,
iocb->ki_pos, orig_count,
NETFS_DIO_READ);
if (IS_ERR(rreq))
return PTR_ERR(rreq);
netfs_stat(&netfs_n_rh_dio_read);
trace_netfs_read(rreq, rreq->start, rreq->len, netfs_read_trace_dio_read);
/* If this is an async op, we have to keep track of the destination
* buffer for ourselves as the caller's iterator will be trashed when
* we return.
*
* In such a case, extract an iterator to represent as much of the the
* output buffer as we can manage. Note that the extraction might not
* be able to allocate a sufficiently large bvec array and may shorten
* the request.
*/
if (user_backed_iter(iter)) {
ret = netfs_extract_user_iter(iter, rreq->len, &rreq->iter, 0);
if (ret < 0)
goto out;
rreq->direct_bv = (struct bio_vec *)rreq->iter.bvec;
rreq->direct_bv_count = ret;
rreq->direct_bv_unpin = iov_iter_extract_will_pin(iter);
rreq->len = iov_iter_count(&rreq->iter);
} else {
rreq->iter = *iter;
rreq->len = orig_count;
rreq->direct_bv_unpin = false;
iov_iter_advance(iter, orig_count);
}
// TODO: Set up bounce buffer if needed
if (async)
rreq->iocb = iocb;
ret = netfs_begin_read(rreq, is_sync_kiocb(iocb));
if (ret < 0)
goto out; /* May be -EIOCBQUEUED */
if (!async) {
// TODO: Copy from bounce buffer
iocb->ki_pos += rreq->transferred;
ret = rreq->transferred;
}
out:
netfs_put_request(rreq, false, netfs_rreq_trace_put_return);
if (ret > 0)
orig_count -= ret;
if (ret != -EIOCBQUEUED)
iov_iter_revert(iter, orig_count - iov_iter_count(iter));
return ret;
}
/**
* netfs_unbuffered_read_iter - Perform an unbuffered or direct I/O read
* @iocb: The I/O control descriptor describing the read
* @iter: The output buffer (also specifies read length)
*
* Perform an unbuffered I/O or direct I/O from the file in @iocb to the
* output buffer. No use is made of the pagecache.
*/
ssize_t netfs_unbuffered_read_iter(struct kiocb *iocb, struct iov_iter *iter)
{
struct inode *inode = file_inode(iocb->ki_filp);
ssize_t ret;
if (!iter->count)
return 0; /* Don't update atime */
ret = netfs_start_io_direct(inode);
if (ret == 0) {
ret = netfs_unbuffered_read_iter_locked(iocb, iter);
netfs_end_io_direct(inode);
}
return ret;
}
EXPORT_SYMBOL(netfs_unbuffered_read_iter);

fs/netfs/direct_write.c: new file, 171 lines

@ -0,0 +1,171 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/* Unbuffered and direct write support.
*
* Copyright (C) 2023 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*/
#include <linux/export.h>
#include <linux/uio.h>
#include "internal.h"
static void netfs_cleanup_dio_write(struct netfs_io_request *wreq)
{
struct inode *inode = wreq->inode;
unsigned long long end = wreq->start + wreq->len;
if (!wreq->error &&
i_size_read(inode) < end) {
if (wreq->netfs_ops->update_i_size)
wreq->netfs_ops->update_i_size(inode, end);
else
i_size_write(inode, end);
}
}
/*
* Perform an unbuffered write where we may have to do an RMW operation on an
* encrypted file. This can also be used for direct I/O writes.
*/
static ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *iter,
struct netfs_group *netfs_group)
{
struct netfs_io_request *wreq;
unsigned long long start = iocb->ki_pos;
unsigned long long end = start + iov_iter_count(iter);
ssize_t ret, n;
bool async = !is_sync_kiocb(iocb);
_enter("");
/* We're going to need a bounce buffer if what we transmit is going to
* be different in some way to the source buffer, e.g. because it gets
* encrypted/compressed or because it needs expanding to a block size.
*/
// TODO
_debug("uw %llx-%llx", start, end);
wreq = netfs_alloc_request(iocb->ki_filp->f_mapping, iocb->ki_filp,
start, end - start,
iocb->ki_flags & IOCB_DIRECT ?
NETFS_DIO_WRITE : NETFS_UNBUFFERED_WRITE);
if (IS_ERR(wreq))
return PTR_ERR(wreq);
{
/* If this is an async op and we're not using a bounce buffer,
* we have to save the source buffer as the iterator is only
* good until we return. In such a case, extract an iterator
* to represent as much of the output buffer as we can
* manage. Note that the extraction might not be able to
* allocate a sufficiently large bvec array and may shorten the
* request.
*/
if (async || user_backed_iter(iter)) {
n = netfs_extract_user_iter(iter, wreq->len, &wreq->iter, 0);
if (n < 0) {
ret = n;
goto out;
}
wreq->direct_bv = (struct bio_vec *)wreq->iter.bvec;
wreq->direct_bv_count = n;
wreq->direct_bv_unpin = iov_iter_extract_will_pin(iter);
wreq->len = iov_iter_count(&wreq->iter);
} else {
wreq->iter = *iter;
}
wreq->io_iter = wreq->iter;
}
/* Copy the data into the bounce buffer and encrypt it. */
// TODO
/* Dispatch the write. */
__set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags);
if (async)
wreq->iocb = iocb;
wreq->cleanup = netfs_cleanup_dio_write;
ret = netfs_begin_write(wreq, is_sync_kiocb(iocb),
iocb->ki_flags & IOCB_DIRECT ?
netfs_write_trace_dio_write :
netfs_write_trace_unbuffered_write);
if (ret < 0) {
_debug("begin = %zd", ret);
goto out;
}
if (!async) {
trace_netfs_rreq(wreq, netfs_rreq_trace_wait_ip);
wait_on_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS,
TASK_UNINTERRUPTIBLE);
ret = wreq->error;
_debug("waited = %zd", ret);
if (ret == 0) {
ret = wreq->transferred;
iocb->ki_pos += ret;
}
} else {
ret = -EIOCBQUEUED;
}
out:
netfs_put_request(wreq, false, netfs_rreq_trace_put_return);
return ret;
}
/**
* netfs_unbuffered_write_iter - Unbuffered write to a file
* @iocb: IO state structure
* @from: iov_iter with data to write
*
* Do an unbuffered write to a file, writing the data directly to the server
* and not lodging the data in the pagecache.
*
* Return:
* Negative error code if no data has been written at all or
* vfs_fsync_range() failed for a synchronous write
* * Number of bytes written, even for truncated writes
*/
ssize_t netfs_unbuffered_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
struct file *file = iocb->ki_filp;
struct inode *inode = file->f_mapping->host;
struct netfs_inode *ictx = netfs_inode(inode);
unsigned long long end;
ssize_t ret;
_enter("%llx,%zx,%llx", iocb->ki_pos, iov_iter_count(from), i_size_read(inode));
trace_netfs_write_iter(iocb, from);
netfs_stat(&netfs_n_rh_dio_write);
ret = netfs_start_io_direct(inode);
if (ret < 0)
return ret;
ret = generic_write_checks(iocb, from);
if (ret < 0)
goto out;
ret = file_remove_privs(file);
if (ret < 0)
goto out;
ret = file_update_time(file);
if (ret < 0)
goto out;
ret = kiocb_invalidate_pages(iocb, iov_iter_count(from));
if (ret < 0)
goto out;
end = iocb->ki_pos + iov_iter_count(from);
if (end > ictx->zero_point)
ictx->zero_point = end;
fscache_invalidate(netfs_i_cookie(ictx), NULL, i_size_read(inode),
FSCACHE_INVAL_DIO_WRITE);
ret = netfs_unbuffered_write_iter_locked(iocb, from, NULL);
out:
netfs_end_io_direct(inode);
return ret;
}
EXPORT_SYMBOL(netfs_unbuffered_write_iter);
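A hedged sketch of the matching write-side dispatch a filesystem might add, mirroring netfs_file_read_iter() earlier in this series; myfs_buffered_write_iter() is a hypothetical placeholder for the filesystem's buffered path.

static ssize_t myfs_buffered_write_iter(struct kiocb *iocb, struct iov_iter *from); /* hypothetical */

static ssize_t myfs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
	struct netfs_inode *ictx = netfs_inode(file_inode(iocb->ki_filp));

	/* Route O_DIRECT and unbuffered inodes through the netfs DIO helper. */
	if ((iocb->ki_flags & IOCB_DIRECT) ||
	    test_bit(NETFS_ICTX_UNBUFFERED, &ictx->flags))
		return netfs_unbuffered_write_iter(iocb, from);

	return myfs_buffered_write_iter(iocb, from);
}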


@ -0,0 +1,14 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/* Internal definitions for FS-Cache
*
* Copyright (C) 2021 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*/
#include "internal.h"
#ifdef pr_fmt
#undef pr_fmt
#endif
#define pr_fmt(fmt) "FS-Cache: " fmt


@ -158,46 +158,6 @@ int __fscache_begin_write_operation(struct netfs_cache_resources *cres,
} }
EXPORT_SYMBOL(__fscache_begin_write_operation); EXPORT_SYMBOL(__fscache_begin_write_operation);
/**
* fscache_dirty_folio - Mark folio dirty and pin a cache object for writeback
* @mapping: The mapping the folio belongs to.
* @folio: The folio being dirtied.
* @cookie: The cookie referring to the cache object
*
* Set the dirty flag on a folio and pin an in-use cache object in memory
* so that writeback can later write to it. This is intended
* to be called from the filesystem's ->dirty_folio() method.
*
* Return: true if the dirty flag was set on the folio, false otherwise.
*/
bool fscache_dirty_folio(struct address_space *mapping, struct folio *folio,
struct fscache_cookie *cookie)
{
struct inode *inode = mapping->host;
bool need_use = false;
_enter("");
if (!filemap_dirty_folio(mapping, folio))
return false;
if (!fscache_cookie_valid(cookie))
return true;
if (!(inode->i_state & I_PINNING_FSCACHE_WB)) {
spin_lock(&inode->i_lock);
if (!(inode->i_state & I_PINNING_FSCACHE_WB)) {
inode->i_state |= I_PINNING_FSCACHE_WB;
need_use = true;
}
spin_unlock(&inode->i_lock);
if (need_use)
fscache_use_cookie(cookie, true);
}
return true;
}
EXPORT_SYMBOL(fscache_dirty_folio);
struct fscache_write_request { struct fscache_write_request {
struct netfs_cache_resources cache_resources; struct netfs_cache_resources cache_resources;
struct address_space *mapping; struct address_space *mapping;
@ -277,7 +237,7 @@ void __fscache_write_to_cache(struct fscache_cookie *cookie,
fscache_access_io_write) < 0) fscache_access_io_write) < 0)
goto abandon_free; goto abandon_free;
ret = cres->ops->prepare_write(cres, &start, &len, i_size, false); ret = cres->ops->prepare_write(cres, &start, &len, len, i_size, false);
if (ret < 0) if (ret < 0)
goto abandon_end; goto abandon_end;


@ -8,18 +8,9 @@
#define FSCACHE_DEBUG_LEVEL CACHE #define FSCACHE_DEBUG_LEVEL CACHE
#include <linux/module.h> #include <linux/module.h>
#include <linux/init.h> #include <linux/init.h>
#define CREATE_TRACE_POINTS
#include "internal.h" #include "internal.h"
#define CREATE_TRACE_POINTS
MODULE_DESCRIPTION("FS Cache Manager"); #include <trace/events/fscache.h>
MODULE_AUTHOR("Red Hat, Inc.");
MODULE_LICENSE("GPL");
unsigned fscache_debug;
module_param_named(debug, fscache_debug, uint,
S_IWUSR | S_IRUGO);
MODULE_PARM_DESC(fscache_debug,
"FS-Cache debugging mask");
EXPORT_TRACEPOINT_SYMBOL(fscache_access_cache); EXPORT_TRACEPOINT_SYMBOL(fscache_access_cache);
EXPORT_TRACEPOINT_SYMBOL(fscache_access_volume); EXPORT_TRACEPOINT_SYMBOL(fscache_access_volume);
@ -71,7 +62,7 @@ unsigned int fscache_hash(unsigned int salt, const void *data, size_t len)
/* /*
* initialise the fs caching module * initialise the fs caching module
*/ */
static int __init fscache_init(void) int __init fscache_init(void)
{ {
int ret = -ENOMEM; int ret = -ENOMEM;
@ -92,7 +83,7 @@ static int __init fscache_init(void)
goto error_cookie_jar; goto error_cookie_jar;
} }
pr_notice("Loaded\n"); pr_notice("FS-Cache loaded\n");
return 0; return 0;
error_cookie_jar: error_cookie_jar:
@ -103,19 +94,15 @@ error_wq:
return ret; return ret;
} }
fs_initcall(fscache_init);
/* /*
* clean up on module removal * clean up on module removal
*/ */
static void __exit fscache_exit(void) void __exit fscache_exit(void)
{ {
_enter(""); _enter("");
kmem_cache_destroy(fscache_cookie_jar); kmem_cache_destroy(fscache_cookie_jar);
fscache_proc_cleanup(); fscache_proc_cleanup();
destroy_workqueue(fscache_wq); destroy_workqueue(fscache_wq);
pr_notice("Unloaded\n"); pr_notice("FS-Cache unloaded\n");
} }
module_exit(fscache_exit);


@ -12,41 +12,34 @@
#include "internal.h" #include "internal.h"
/* /*
* initialise the /proc/fs/fscache/ directory * Add files to /proc/fs/netfs/.
*/ */
int __init fscache_proc_init(void) int __init fscache_proc_init(void)
{ {
if (!proc_mkdir("fs/fscache", NULL)) if (!proc_symlink("fs/fscache", NULL, "netfs"))
goto error_dir; goto error_sym;
if (!proc_create_seq("fs/fscache/caches", S_IFREG | 0444, NULL, if (!proc_create_seq("fs/netfs/caches", S_IFREG | 0444, NULL,
&fscache_caches_seq_ops)) &fscache_caches_seq_ops))
goto error; goto error;
if (!proc_create_seq("fs/fscache/volumes", S_IFREG | 0444, NULL, if (!proc_create_seq("fs/netfs/volumes", S_IFREG | 0444, NULL,
&fscache_volumes_seq_ops)) &fscache_volumes_seq_ops))
goto error; goto error;
if (!proc_create_seq("fs/fscache/cookies", S_IFREG | 0444, NULL, if (!proc_create_seq("fs/netfs/cookies", S_IFREG | 0444, NULL,
&fscache_cookies_seq_ops)) &fscache_cookies_seq_ops))
goto error; goto error;
#ifdef CONFIG_FSCACHE_STATS
if (!proc_create_single("fs/fscache/stats", S_IFREG | 0444, NULL,
fscache_stats_show))
goto error;
#endif
return 0; return 0;
error: error:
remove_proc_entry("fs/fscache", NULL); remove_proc_entry("fs/fscache", NULL);
error_dir: error_sym:
return -ENOMEM; return -ENOMEM;
} }
/* /*
* clean up the /proc/fs/fscache/ directory * Clean up the /proc/fs/fscache symlink.
*/ */
void fscache_proc_cleanup(void) void fscache_proc_cleanup(void)
{ {


@ -48,13 +48,15 @@ atomic_t fscache_n_no_create_space;
EXPORT_SYMBOL(fscache_n_no_create_space); EXPORT_SYMBOL(fscache_n_no_create_space);
atomic_t fscache_n_culled; atomic_t fscache_n_culled;
EXPORT_SYMBOL(fscache_n_culled); EXPORT_SYMBOL(fscache_n_culled);
atomic_t fscache_n_dio_misfit;
EXPORT_SYMBOL(fscache_n_dio_misfit);
/* /*
* display the general statistics * display the general statistics
*/ */
int fscache_stats_show(struct seq_file *m, void *v) int fscache_stats_show(struct seq_file *m)
{ {
seq_puts(m, "FS-Cache statistics\n"); seq_puts(m, "-- FS-Cache statistics --\n");
seq_printf(m, "Cookies: n=%d v=%d vcol=%u voom=%u\n", seq_printf(m, "Cookies: n=%d v=%d vcol=%u voom=%u\n",
atomic_read(&fscache_n_cookies), atomic_read(&fscache_n_cookies),
atomic_read(&fscache_n_volumes), atomic_read(&fscache_n_volumes),
@ -93,10 +95,9 @@ int fscache_stats_show(struct seq_file *m, void *v)
atomic_read(&fscache_n_no_create_space), atomic_read(&fscache_n_no_create_space),
atomic_read(&fscache_n_culled)); atomic_read(&fscache_n_culled));
seq_printf(m, "IO : rd=%u wr=%u\n", seq_printf(m, "IO : rd=%u wr=%u mis=%u\n",
atomic_read(&fscache_n_read), atomic_read(&fscache_n_read),
atomic_read(&fscache_n_write)); atomic_read(&fscache_n_write),
atomic_read(&fscache_n_dio_misfit));
netfs_stats_show(m);
return 0; return 0;
} }


@ -5,9 +5,13 @@
* Written by David Howells (dhowells@redhat.com) * Written by David Howells (dhowells@redhat.com)
*/ */
#include <linux/slab.h>
#include <linux/seq_file.h>
#include <linux/netfs.h> #include <linux/netfs.h>
#include <linux/fscache.h> #include <linux/fscache.h>
#include <linux/fscache-cache.h>
#include <trace/events/netfs.h> #include <trace/events/netfs.h>
#include <trace/events/fscache.h>
#ifdef pr_fmt #ifdef pr_fmt
#undef pr_fmt #undef pr_fmt
@ -19,6 +23,8 @@
* buffered_read.c * buffered_read.c
*/ */
void netfs_rreq_unlock_folios(struct netfs_io_request *rreq); void netfs_rreq_unlock_folios(struct netfs_io_request *rreq);
int netfs_prefetch_for_write(struct file *file, struct folio *folio,
size_t offset, size_t len);
/* /*
* io.c * io.c
@ -29,6 +35,41 @@ int netfs_begin_read(struct netfs_io_request *rreq, bool sync);
* main.c * main.c
*/ */
extern unsigned int netfs_debug; extern unsigned int netfs_debug;
extern struct list_head netfs_io_requests;
extern spinlock_t netfs_proc_lock;
#ifdef CONFIG_PROC_FS
static inline void netfs_proc_add_rreq(struct netfs_io_request *rreq)
{
spin_lock(&netfs_proc_lock);
list_add_tail_rcu(&rreq->proc_link, &netfs_io_requests);
spin_unlock(&netfs_proc_lock);
}
static inline void netfs_proc_del_rreq(struct netfs_io_request *rreq)
{
if (!list_empty(&rreq->proc_link)) {
spin_lock(&netfs_proc_lock);
list_del_rcu(&rreq->proc_link);
spin_unlock(&netfs_proc_lock);
}
}
#else
static inline void netfs_proc_add_rreq(struct netfs_io_request *rreq) {}
static inline void netfs_proc_del_rreq(struct netfs_io_request *rreq) {}
#endif
/*
* misc.c
*/
#define NETFS_FLAG_PUT_MARK BIT(0)
#define NETFS_FLAG_PAGECACHE_MARK BIT(1)
int netfs_xa_store_and_mark(struct xarray *xa, unsigned long index,
struct folio *folio, unsigned int flags,
gfp_t gfp_mask);
int netfs_add_folios_to_buffer(struct xarray *buffer,
struct address_space *mapping,
pgoff_t index, pgoff_t to, gfp_t gfp_mask);
void netfs_clear_buffer(struct xarray *buffer);
/* /*
* objects.c * objects.c
@ -49,10 +90,21 @@ static inline void netfs_see_request(struct netfs_io_request *rreq,
trace_netfs_rreq_ref(rreq->debug_id, refcount_read(&rreq->ref), what); trace_netfs_rreq_ref(rreq->debug_id, refcount_read(&rreq->ref), what);
} }
/*
* output.c
*/
int netfs_begin_write(struct netfs_io_request *wreq, bool may_wait,
enum netfs_write_trace what);
struct netfs_io_request *netfs_begin_writethrough(struct kiocb *iocb, size_t len);
int netfs_advance_writethrough(struct netfs_io_request *wreq, size_t copied, bool to_page_end);
int netfs_end_writethrough(struct netfs_io_request *wreq, struct kiocb *iocb);
/* /*
* stats.c * stats.c
*/ */
#ifdef CONFIG_NETFS_STATS #ifdef CONFIG_NETFS_STATS
extern atomic_t netfs_n_rh_dio_read;
extern atomic_t netfs_n_rh_dio_write;
extern atomic_t netfs_n_rh_readahead; extern atomic_t netfs_n_rh_readahead;
extern atomic_t netfs_n_rh_readpage; extern atomic_t netfs_n_rh_readpage;
extern atomic_t netfs_n_rh_rreq; extern atomic_t netfs_n_rh_rreq;
@ -71,7 +123,15 @@ extern atomic_t netfs_n_rh_write_begin;
extern atomic_t netfs_n_rh_write_done; extern atomic_t netfs_n_rh_write_done;
extern atomic_t netfs_n_rh_write_failed; extern atomic_t netfs_n_rh_write_failed;
extern atomic_t netfs_n_rh_write_zskip; extern atomic_t netfs_n_rh_write_zskip;
extern atomic_t netfs_n_wh_wstream_conflict;
extern atomic_t netfs_n_wh_upload;
extern atomic_t netfs_n_wh_upload_done;
extern atomic_t netfs_n_wh_upload_failed;
extern atomic_t netfs_n_wh_write;
extern atomic_t netfs_n_wh_write_done;
extern atomic_t netfs_n_wh_write_failed;
int netfs_stats_show(struct seq_file *m, void *v);
static inline void netfs_stat(atomic_t *stat) static inline void netfs_stat(atomic_t *stat)
{ {
@ -103,6 +163,176 @@ static inline bool netfs_is_cache_enabled(struct netfs_inode *ctx)
#endif #endif
} }
/*
* Get a ref on a netfs group attached to a dirty page (e.g. a ceph snap).
*/
static inline struct netfs_group *netfs_get_group(struct netfs_group *netfs_group)
{
if (netfs_group)
refcount_inc(&netfs_group->ref);
return netfs_group;
}
/*
* Dispose of a netfs group attached to a dirty page (e.g. a ceph snap).
*/
static inline void netfs_put_group(struct netfs_group *netfs_group)
{
if (netfs_group && refcount_dec_and_test(&netfs_group->ref))
netfs_group->free(netfs_group);
}
/*
* Dispose of a netfs group attached to a dirty page (e.g. a ceph snap).
*/
static inline void netfs_put_group_many(struct netfs_group *netfs_group, int nr)
{
if (netfs_group && refcount_sub_and_test(nr, &netfs_group->ref))
netfs_group->free(netfs_group);
}
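/* Illustrative sketch (not part of this patch): a filesystem-defined write
 * group that the get/put helpers above could manage.  Only the ->ref and
 * ->free members exercised above are assumed from struct netfs_group; the
 * myfs_* wrapper, its snap_id payload and the allocator are hypothetical.
 */
struct myfs_snap_group {
	struct netfs_group	netfs;		/* provides ->ref and ->free */
	u64			snap_id;	/* made-up per-group payload */
};

static void myfs_free_snap_group(struct netfs_group *ng)
{
	kfree(container_of(ng, struct myfs_snap_group, netfs));
}

static struct netfs_group *myfs_alloc_snap_group(u64 snap_id)
{
	struct myfs_snap_group *g = kzalloc(sizeof(*g), GFP_KERNEL);

	if (!g)
		return NULL;
	refcount_set(&g->netfs.ref, 1);
	g->netfs.free = myfs_free_snap_group;
	g->snap_id = snap_id;
	return &g->netfs;
}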
/*
* fscache-cache.c
*/
#ifdef CONFIG_PROC_FS
extern const struct seq_operations fscache_caches_seq_ops;
#endif
bool fscache_begin_cache_access(struct fscache_cache *cache, enum fscache_access_trace why);
void fscache_end_cache_access(struct fscache_cache *cache, enum fscache_access_trace why);
struct fscache_cache *fscache_lookup_cache(const char *name, bool is_cache);
void fscache_put_cache(struct fscache_cache *cache, enum fscache_cache_trace where);
static inline enum fscache_cache_state fscache_cache_state(const struct fscache_cache *cache)
{
return smp_load_acquire(&cache->state);
}
static inline bool fscache_cache_is_live(const struct fscache_cache *cache)
{
return fscache_cache_state(cache) == FSCACHE_CACHE_IS_ACTIVE;
}
static inline void fscache_set_cache_state(struct fscache_cache *cache,
enum fscache_cache_state new_state)
{
smp_store_release(&cache->state, new_state);
}
static inline bool fscache_set_cache_state_maybe(struct fscache_cache *cache,
enum fscache_cache_state old_state,
enum fscache_cache_state new_state)
{
return try_cmpxchg_release(&cache->state, &old_state, new_state);
}
/*
* fscache-cookie.c
*/
extern struct kmem_cache *fscache_cookie_jar;
#ifdef CONFIG_PROC_FS
extern const struct seq_operations fscache_cookies_seq_ops;
#endif
extern struct timer_list fscache_cookie_lru_timer;
extern void fscache_print_cookie(struct fscache_cookie *cookie, char prefix);
extern bool fscache_begin_cookie_access(struct fscache_cookie *cookie,
enum fscache_access_trace why);
static inline void fscache_see_cookie(struct fscache_cookie *cookie,
enum fscache_cookie_trace where)
{
trace_fscache_cookie(cookie->debug_id, refcount_read(&cookie->ref),
where);
}
/*
* fscache-main.c
*/
extern unsigned int fscache_hash(unsigned int salt, const void *data, size_t len);
#ifdef CONFIG_FSCACHE
int __init fscache_init(void);
void __exit fscache_exit(void);
#else
static inline int fscache_init(void) { return 0; }
static inline void fscache_exit(void) {}
#endif
/*
* fscache-proc.c
*/
#ifdef CONFIG_PROC_FS
extern int __init fscache_proc_init(void);
extern void fscache_proc_cleanup(void);
#else
#define fscache_proc_init() (0)
#define fscache_proc_cleanup() do {} while (0)
#endif
/*
* fscache-stats.c
*/
#ifdef CONFIG_FSCACHE_STATS
extern atomic_t fscache_n_volumes;
extern atomic_t fscache_n_volumes_collision;
extern atomic_t fscache_n_volumes_nomem;
extern atomic_t fscache_n_cookies;
extern atomic_t fscache_n_cookies_lru;
extern atomic_t fscache_n_cookies_lru_expired;
extern atomic_t fscache_n_cookies_lru_removed;
extern atomic_t fscache_n_cookies_lru_dropped;
extern atomic_t fscache_n_acquires;
extern atomic_t fscache_n_acquires_ok;
extern atomic_t fscache_n_acquires_oom;
extern atomic_t fscache_n_invalidates;
extern atomic_t fscache_n_relinquishes;
extern atomic_t fscache_n_relinquishes_retire;
extern atomic_t fscache_n_relinquishes_dropped;
extern atomic_t fscache_n_resizes;
extern atomic_t fscache_n_resizes_null;
static inline void fscache_stat(atomic_t *stat)
{
atomic_inc(stat);
}
static inline void fscache_stat_d(atomic_t *stat)
{
atomic_dec(stat);
}
#define __fscache_stat(stat) (stat)
int fscache_stats_show(struct seq_file *m);
#else
#define __fscache_stat(stat) (NULL)
#define fscache_stat(stat) do {} while (0)
#define fscache_stat_d(stat) do {} while (0)
static inline int fscache_stats_show(struct seq_file *m) { return 0; }
#endif
/*
* fscache-volume.c
*/
#ifdef CONFIG_PROC_FS
extern const struct seq_operations fscache_volumes_seq_ops;
#endif
struct fscache_volume *fscache_get_volume(struct fscache_volume *volume,
enum fscache_volume_trace where);
void fscache_put_volume(struct fscache_volume *volume,
enum fscache_volume_trace where);
bool fscache_begin_volume_access(struct fscache_volume *volume,
struct fscache_cookie *cookie,
enum fscache_access_trace why);
void fscache_create_volume(struct fscache_volume *volume, bool wait);
/*****************************************************************************/ /*****************************************************************************/
/* /*
* debug tracing * debug tracing
@ -143,3 +373,57 @@ do { \
#define _leave(FMT, ...) no_printk("<== %s()"FMT"", __func__, ##__VA_ARGS__) #define _leave(FMT, ...) no_printk("<== %s()"FMT"", __func__, ##__VA_ARGS__)
#define _debug(FMT, ...) no_printk(FMT, ##__VA_ARGS__) #define _debug(FMT, ...) no_printk(FMT, ##__VA_ARGS__)
#endif #endif
/*
* assertions
*/
#if 1 /* defined(__KDEBUGALL) */
#define ASSERT(X) \
do { \
if (unlikely(!(X))) { \
pr_err("\n"); \
pr_err("Assertion failed\n"); \
BUG(); \
} \
} while (0)
#define ASSERTCMP(X, OP, Y) \
do { \
if (unlikely(!((X) OP (Y)))) { \
pr_err("\n"); \
pr_err("Assertion failed\n"); \
pr_err("%lx " #OP " %lx is false\n", \
(unsigned long)(X), (unsigned long)(Y)); \
BUG(); \
} \
} while (0)
#define ASSERTIF(C, X) \
do { \
if (unlikely((C) && !(X))) { \
pr_err("\n"); \
pr_err("Assertion failed\n"); \
BUG(); \
} \
} while (0)
#define ASSERTIFCMP(C, X, OP, Y) \
do { \
if (unlikely((C) && !((X) OP (Y)))) { \
pr_err("\n"); \
pr_err("Assertion failed\n"); \
pr_err("%lx " #OP " %lx is false\n", \
(unsigned long)(X), (unsigned long)(Y)); \
BUG(); \
} \
} while (0)
#else
#define ASSERT(X) do {} while (0)
#define ASSERTCMP(X, OP, Y) do {} while (0)
#define ASSERTIF(C, X) do {} while (0)
#define ASSERTIFCMP(C, X, OP, Y) do {} while (0)
#endif /* assert or not */


@ -21,12 +21,7 @@
*/ */
static void netfs_clear_unread(struct netfs_io_subrequest *subreq) static void netfs_clear_unread(struct netfs_io_subrequest *subreq)
{ {
struct iov_iter iter; iov_iter_zero(iov_iter_count(&subreq->io_iter), &subreq->io_iter);
iov_iter_xarray(&iter, ITER_DEST, &subreq->rreq->mapping->i_pages,
subreq->start + subreq->transferred,
subreq->len - subreq->transferred);
iov_iter_zero(iov_iter_count(&iter), &iter);
} }
static void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error, static void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error,
@ -46,14 +41,9 @@ static void netfs_read_from_cache(struct netfs_io_request *rreq,
enum netfs_read_from_hole read_hole) enum netfs_read_from_hole read_hole)
{ {
struct netfs_cache_resources *cres = &rreq->cache_resources; struct netfs_cache_resources *cres = &rreq->cache_resources;
struct iov_iter iter;
netfs_stat(&netfs_n_rh_read); netfs_stat(&netfs_n_rh_read);
iov_iter_xarray(&iter, ITER_DEST, &rreq->mapping->i_pages, cres->ops->read(cres, subreq->start, &subreq->io_iter, read_hole,
subreq->start + subreq->transferred,
subreq->len - subreq->transferred);
cres->ops->read(cres, subreq->start, &iter, read_hole,
netfs_cache_read_terminated, subreq); netfs_cache_read_terminated, subreq);
} }
@ -88,6 +78,13 @@ static void netfs_read_from_server(struct netfs_io_request *rreq,
struct netfs_io_subrequest *subreq) struct netfs_io_subrequest *subreq)
{ {
netfs_stat(&netfs_n_rh_download); netfs_stat(&netfs_n_rh_download);
if (rreq->origin != NETFS_DIO_READ &&
iov_iter_count(&subreq->io_iter) != subreq->len - subreq->transferred)
pr_warn("R=%08x[%u] ITER PRE-MISMATCH %zx != %zx-%zx %lx\n",
rreq->debug_id, subreq->debug_index,
iov_iter_count(&subreq->io_iter), subreq->len,
subreq->transferred, subreq->flags);
rreq->netfs_ops->issue_read(subreq); rreq->netfs_ops->issue_read(subreq);
} }
@ -129,7 +126,8 @@ static void netfs_rreq_unmark_after_write(struct netfs_io_request *rreq,
*/ */
if (have_unlocked && folio_index(folio) <= unlocked) if (have_unlocked && folio_index(folio) <= unlocked)
continue; continue;
unlocked = folio_index(folio); unlocked = folio_next_index(folio) - 1;
trace_netfs_folio(folio, netfs_folio_trace_end_copy);
folio_end_fscache(folio); folio_end_fscache(folio);
have_unlocked = true; have_unlocked = true;
} }
@ -201,7 +199,7 @@ static void netfs_rreq_do_write_to_cache(struct netfs_io_request *rreq)
} }
ret = cres->ops->prepare_write(cres, &subreq->start, &subreq->len, ret = cres->ops->prepare_write(cres, &subreq->start, &subreq->len,
rreq->i_size, true); subreq->len, rreq->i_size, true);
if (ret < 0) { if (ret < 0) {
trace_netfs_failure(rreq, subreq, ret, netfs_fail_prepare_write); trace_netfs_failure(rreq, subreq, ret, netfs_fail_prepare_write);
trace_netfs_sreq(subreq, netfs_sreq_trace_write_skip); trace_netfs_sreq(subreq, netfs_sreq_trace_write_skip);
@ -259,6 +257,30 @@ static void netfs_rreq_short_read(struct netfs_io_request *rreq,
netfs_read_from_server(rreq, subreq); netfs_read_from_server(rreq, subreq);
} }
/*
* Reset the subrequest iterator prior to resubmission.
*/
static void netfs_reset_subreq_iter(struct netfs_io_request *rreq,
struct netfs_io_subrequest *subreq)
{
size_t remaining = subreq->len - subreq->transferred;
size_t count = iov_iter_count(&subreq->io_iter);
if (count == remaining)
return;
_debug("R=%08x[%u] ITER RESUB-MISMATCH %zx != %zx-%zx-%llx %x\n",
rreq->debug_id, subreq->debug_index,
iov_iter_count(&subreq->io_iter), subreq->transferred,
subreq->len, rreq->i_size,
subreq->io_iter.iter_type);
if (count < remaining)
iov_iter_revert(&subreq->io_iter, remaining - count);
else
iov_iter_advance(&subreq->io_iter, count - remaining);
}
/* /*
* Resubmit any short or failed operations. Returns true if we got the rreq * Resubmit any short or failed operations. Returns true if we got the rreq
* ref back. * ref back.
@ -287,6 +309,7 @@ static bool netfs_rreq_perform_resubmissions(struct netfs_io_request *rreq)
trace_netfs_sreq(subreq, netfs_sreq_trace_download_instead); trace_netfs_sreq(subreq, netfs_sreq_trace_download_instead);
netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
atomic_inc(&rreq->nr_outstanding); atomic_inc(&rreq->nr_outstanding);
netfs_reset_subreq_iter(rreq, subreq);
netfs_read_from_server(rreq, subreq); netfs_read_from_server(rreq, subreq);
} else if (test_bit(NETFS_SREQ_SHORT_IO, &subreq->flags)) { } else if (test_bit(NETFS_SREQ_SHORT_IO, &subreq->flags)) {
netfs_rreq_short_read(rreq, subreq); netfs_rreq_short_read(rreq, subreq);
@ -320,6 +343,43 @@ static void netfs_rreq_is_still_valid(struct netfs_io_request *rreq)
} }
} }
/*
* Determine how much we can admit to having read from a DIO read.
*/
static void netfs_rreq_assess_dio(struct netfs_io_request *rreq)
{
struct netfs_io_subrequest *subreq;
unsigned int i;
size_t transferred = 0;
for (i = 0; i < rreq->direct_bv_count; i++)
flush_dcache_page(rreq->direct_bv[i].bv_page);
list_for_each_entry(subreq, &rreq->subrequests, rreq_link) {
if (subreq->error || subreq->transferred == 0)
break;
transferred += subreq->transferred;
if (subreq->transferred < subreq->len)
break;
}
for (i = 0; i < rreq->direct_bv_count; i++)
flush_dcache_page(rreq->direct_bv[i].bv_page);
rreq->transferred = transferred;
task_io_account_read(transferred);
if (rreq->iocb) {
rreq->iocb->ki_pos += transferred;
if (rreq->iocb->ki_complete)
rreq->iocb->ki_complete(
rreq->iocb, rreq->error ? rreq->error : transferred);
}
if (rreq->netfs_ops->done)
rreq->netfs_ops->done(rreq);
inode_dio_end(rreq->inode);
}
/* /*
* Assess the state of a read request and decide what to do next. * Assess the state of a read request and decide what to do next.
* *
@ -340,8 +400,12 @@ again:
return; return;
} }
netfs_rreq_unlock_folios(rreq); if (rreq->origin != NETFS_DIO_READ)
netfs_rreq_unlock_folios(rreq);
else
netfs_rreq_assess_dio(rreq);
trace_netfs_rreq(rreq, netfs_rreq_trace_wake_ip);
clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &rreq->flags); clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
wake_up_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS); wake_up_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS);
@ -399,9 +463,9 @@ void netfs_subreq_terminated(struct netfs_io_subrequest *subreq,
struct netfs_io_request *rreq = subreq->rreq; struct netfs_io_request *rreq = subreq->rreq;
int u; int u;
_enter("[%u]{%llx,%lx},%zd", _enter("R=%x[%x]{%llx,%lx},%zd",
subreq->debug_index, subreq->start, subreq->flags, rreq->debug_id, subreq->debug_index,
transferred_or_error); subreq->start, subreq->flags, transferred_or_error);
switch (subreq->source) { switch (subreq->source) {
case NETFS_READ_FROM_CACHE: case NETFS_READ_FROM_CACHE:
@ -501,15 +565,20 @@ static enum netfs_io_source netfs_cache_prepare_read(struct netfs_io_subrequest
*/ */
static enum netfs_io_source static enum netfs_io_source
netfs_rreq_prepare_read(struct netfs_io_request *rreq, netfs_rreq_prepare_read(struct netfs_io_request *rreq,
struct netfs_io_subrequest *subreq) struct netfs_io_subrequest *subreq,
struct iov_iter *io_iter)
{ {
enum netfs_io_source source; enum netfs_io_source source = NETFS_DOWNLOAD_FROM_SERVER;
struct netfs_inode *ictx = netfs_inode(rreq->inode);
size_t lsize;
_enter("%llx-%llx,%llx", subreq->start, subreq->start + subreq->len, rreq->i_size); _enter("%llx-%llx,%llx", subreq->start, subreq->start + subreq->len, rreq->i_size);
source = netfs_cache_prepare_read(subreq, rreq->i_size); if (rreq->origin != NETFS_DIO_READ) {
if (source == NETFS_INVALID_READ) source = netfs_cache_prepare_read(subreq, rreq->i_size);
goto out; if (source == NETFS_INVALID_READ)
goto out;
}
if (source == NETFS_DOWNLOAD_FROM_SERVER) { if (source == NETFS_DOWNLOAD_FROM_SERVER) {
/* Call out to the netfs to let it shrink the request to fit /* Call out to the netfs to let it shrink the request to fit
@ -518,19 +587,52 @@ netfs_rreq_prepare_read(struct netfs_io_request *rreq,
* to make serial calls, it can indicate a short read and then * to make serial calls, it can indicate a short read and then
* we will call it again. * we will call it again.
*/ */
if (rreq->origin != NETFS_DIO_READ) {
if (subreq->start >= ictx->zero_point) {
source = NETFS_FILL_WITH_ZEROES;
goto set;
}
if (subreq->len > ictx->zero_point - subreq->start)
subreq->len = ictx->zero_point - subreq->start;
}
if (subreq->len > rreq->i_size - subreq->start) if (subreq->len > rreq->i_size - subreq->start)
subreq->len = rreq->i_size - subreq->start; subreq->len = rreq->i_size - subreq->start;
if (rreq->rsize && subreq->len > rreq->rsize)
subreq->len = rreq->rsize;
if (rreq->netfs_ops->clamp_length && if (rreq->netfs_ops->clamp_length &&
!rreq->netfs_ops->clamp_length(subreq)) { !rreq->netfs_ops->clamp_length(subreq)) {
source = NETFS_INVALID_READ; source = NETFS_INVALID_READ;
goto out; goto out;
} }
if (subreq->max_nr_segs) {
lsize = netfs_limit_iter(io_iter, 0, subreq->len,
subreq->max_nr_segs);
if (subreq->len > lsize) {
subreq->len = lsize;
trace_netfs_sreq(subreq, netfs_sreq_trace_limited);
}
}
} }
if (WARN_ON(subreq->len == 0)) set:
source = NETFS_INVALID_READ; if (subreq->len > rreq->len)
pr_warn("R=%08x[%u] SREQ>RREQ %zx > %zx\n",
rreq->debug_id, subreq->debug_index,
subreq->len, rreq->len);
if (WARN_ON(subreq->len == 0)) {
source = NETFS_INVALID_READ;
goto out;
}
subreq->source = source;
trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
subreq->io_iter = *io_iter;
iov_iter_truncate(&subreq->io_iter, subreq->len);
iov_iter_advance(io_iter, subreq->len);
out: out:
subreq->source = source; subreq->source = source;
trace_netfs_sreq(subreq, netfs_sreq_trace_prepare); trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
@ -541,6 +643,7 @@ out:
* Slice off a piece of a read request and submit an I/O request for it. * Slice off a piece of a read request and submit an I/O request for it.
*/ */
static bool netfs_rreq_submit_slice(struct netfs_io_request *rreq, static bool netfs_rreq_submit_slice(struct netfs_io_request *rreq,
struct iov_iter *io_iter,
unsigned int *_debug_index) unsigned int *_debug_index)
{ {
struct netfs_io_subrequest *subreq; struct netfs_io_subrequest *subreq;
@ -552,7 +655,7 @@ static bool netfs_rreq_submit_slice(struct netfs_io_request *rreq,
subreq->debug_index = (*_debug_index)++; subreq->debug_index = (*_debug_index)++;
subreq->start = rreq->start + rreq->submitted; subreq->start = rreq->start + rreq->submitted;
subreq->len = rreq->len - rreq->submitted; subreq->len = io_iter->count;
_debug("slice %llx,%zx,%zx", subreq->start, subreq->len, rreq->submitted); _debug("slice %llx,%zx,%zx", subreq->start, subreq->len, rreq->submitted);
list_add_tail(&subreq->rreq_link, &rreq->subrequests); list_add_tail(&subreq->rreq_link, &rreq->subrequests);
@ -565,7 +668,7 @@ static bool netfs_rreq_submit_slice(struct netfs_io_request *rreq,
* (the starts must coincide), in which case, we go around the loop * (the starts must coincide), in which case, we go around the loop
* again and ask it to download the next piece. * again and ask it to download the next piece.
*/ */
source = netfs_rreq_prepare_read(rreq, subreq); source = netfs_rreq_prepare_read(rreq, subreq, io_iter);
if (source == NETFS_INVALID_READ) if (source == NETFS_INVALID_READ)
goto subreq_failed; goto subreq_failed;
@ -603,6 +706,7 @@ subreq_failed:
*/ */
int netfs_begin_read(struct netfs_io_request *rreq, bool sync) int netfs_begin_read(struct netfs_io_request *rreq, bool sync)
{ {
struct iov_iter io_iter;
unsigned int debug_index = 0; unsigned int debug_index = 0;
int ret; int ret;
@ -611,50 +715,71 @@ int netfs_begin_read(struct netfs_io_request *rreq, bool sync)
if (rreq->len == 0) { if (rreq->len == 0) {
pr_err("Zero-sized read [R=%x]\n", rreq->debug_id); pr_err("Zero-sized read [R=%x]\n", rreq->debug_id);
netfs_put_request(rreq, false, netfs_rreq_trace_put_zero_len);
return -EIO; return -EIO;
} }
INIT_WORK(&rreq->work, netfs_rreq_work); if (rreq->origin == NETFS_DIO_READ)
inode_dio_begin(rreq->inode);
if (sync) // TODO: Use bounce buffer if requested
netfs_get_request(rreq, netfs_rreq_trace_get_hold); rreq->io_iter = rreq->iter;
INIT_WORK(&rreq->work, netfs_rreq_work);
/* Chop the read into slices according to what the cache and the netfs /* Chop the read into slices according to what the cache and the netfs
* want and submit each one. * want and submit each one.
*/ */
netfs_get_request(rreq, netfs_rreq_trace_get_for_outstanding);
atomic_set(&rreq->nr_outstanding, 1); atomic_set(&rreq->nr_outstanding, 1);
io_iter = rreq->io_iter;
do { do {
if (!netfs_rreq_submit_slice(rreq, &debug_index)) _debug("submit %llx + %zx >= %llx",
rreq->start, rreq->submitted, rreq->i_size);
if (rreq->origin == NETFS_DIO_READ &&
rreq->start + rreq->submitted >= rreq->i_size)
break;
if (!netfs_rreq_submit_slice(rreq, &io_iter, &debug_index))
break;
if (test_bit(NETFS_RREQ_BLOCKED, &rreq->flags) &&
test_bit(NETFS_RREQ_NONBLOCK, &rreq->flags))
break; break;
} while (rreq->submitted < rreq->len); } while (rreq->submitted < rreq->len);
if (!rreq->submitted) {
netfs_put_request(rreq, false, netfs_rreq_trace_put_no_submit);
ret = 0;
goto out;
}
if (sync) { if (sync) {
/* Keep nr_outstanding incremented so that the ref always belongs to /* Keep nr_outstanding incremented so that the ref always
* us, and the service code isn't punted off to a random thread pool to * belongs to us, and the service code isn't punted off to a
* process. * random thread pool to process. Note that this might start
* further work, such as writing to the cache.
*/ */
for (;;) { wait_var_event(&rreq->nr_outstanding,
wait_var_event(&rreq->nr_outstanding, atomic_read(&rreq->nr_outstanding) == 1);
atomic_read(&rreq->nr_outstanding) == 1); if (atomic_dec_and_test(&rreq->nr_outstanding))
netfs_rreq_assess(rreq, false); netfs_rreq_assess(rreq, false);
if (!test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags))
break; trace_netfs_rreq(rreq, netfs_rreq_trace_wait_ip);
cond_resched(); wait_on_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS,
} TASK_UNINTERRUPTIBLE);
ret = rreq->error; ret = rreq->error;
if (ret == 0 && rreq->submitted < rreq->len) { if (ret == 0 && rreq->submitted < rreq->len &&
rreq->origin != NETFS_DIO_READ) {
trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read); trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read);
ret = -EIO; ret = -EIO;
} }
netfs_put_request(rreq, false, netfs_rreq_trace_put_hold);
} else { } else {
/* If we decrement nr_outstanding to 0, the ref belongs to us. */ /* If we decrement nr_outstanding to 0, the ref belongs to us. */
if (atomic_dec_and_test(&rreq->nr_outstanding)) if (atomic_dec_and_test(&rreq->nr_outstanding))
netfs_rreq_assess(rreq, false); netfs_rreq_assess(rreq, false);
ret = 0; ret = -EIOCBQUEUED;
} }
out:
return ret; return ret;
} }


@ -101,3 +101,100 @@ ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len,
return npages; return npages;
} }
EXPORT_SYMBOL_GPL(netfs_extract_user_iter); EXPORT_SYMBOL_GPL(netfs_extract_user_iter);
/*
* Select the span of a bvec iterator we're going to use. Limit it by both maximum
* size and maximum number of segments. Returns the size of the span in bytes.
*/
static size_t netfs_limit_bvec(const struct iov_iter *iter, size_t start_offset,
size_t max_size, size_t max_segs)
{
const struct bio_vec *bvecs = iter->bvec;
unsigned int nbv = iter->nr_segs, ix = 0, nsegs = 0;
size_t len, span = 0, n = iter->count;
size_t skip = iter->iov_offset + start_offset;
if (WARN_ON(!iov_iter_is_bvec(iter)) ||
WARN_ON(start_offset > n) ||
n == 0)
return 0;
while (n && ix < nbv && skip) {
len = bvecs[ix].bv_len;
if (skip < len)
break;
skip -= len;
n -= len;
ix++;
}
while (n && ix < nbv) {
len = min3(n, bvecs[ix].bv_len - skip, max_size);
span += len;
nsegs++;
ix++;
if (span >= max_size || nsegs >= max_segs)
break;
skip = 0;
n -= len;
}
return min(span, max_size);
}
/*
* Select the span of an xarray iterator we're going to use. Limit it by both
* maximum size and maximum number of segments. It is assumed that segments
* can be larger than a page in size, provided they're physically contiguous.
* Returns the size of the span in bytes.
*/
static size_t netfs_limit_xarray(const struct iov_iter *iter, size_t start_offset,
size_t max_size, size_t max_segs)
{
struct folio *folio;
unsigned int nsegs = 0;
loff_t pos = iter->xarray_start + iter->iov_offset;
pgoff_t index = pos / PAGE_SIZE;
size_t span = 0, n = iter->count;
XA_STATE(xas, iter->xarray, index);
if (WARN_ON(!iov_iter_is_xarray(iter)) ||
WARN_ON(start_offset > n) ||
n == 0)
return 0;
max_size = min(max_size, n - start_offset);
rcu_read_lock();
xas_for_each(&xas, folio, ULONG_MAX) {
size_t offset, flen, len;
if (xas_retry(&xas, folio))
continue;
if (WARN_ON(xa_is_value(folio)))
break;
if (WARN_ON(folio_test_hugetlb(folio)))
break;
flen = folio_size(folio);
offset = offset_in_folio(folio, pos);
len = min(max_size, flen - offset);
span += len;
nsegs++;
if (span >= max_size || nsegs >= max_segs)
break;
}
rcu_read_unlock();
return min(span, max_size);
}
size_t netfs_limit_iter(const struct iov_iter *iter, size_t start_offset,
size_t max_size, size_t max_segs)
{
if (iov_iter_is_bvec(iter))
return netfs_limit_bvec(iter, start_offset, max_size, max_segs);
if (iov_iter_is_xarray(iter))
return netfs_limit_xarray(iter, start_offset, max_size, max_segs);
BUG();
}
EXPORT_SYMBOL(netfs_limit_iter);

fs/netfs/locking.c: new file, 216 lines

@ -0,0 +1,216 @@
// SPDX-License-Identifier: GPL-2.0
/*
* I/O and data path helper functionality.
*
* Borrowed from NFS Copyright (c) 2016 Trond Myklebust
*/
#include <linux/kernel.h>
#include <linux/netfs.h>
#include "internal.h"
/*
* inode_dio_wait_interruptible - wait for outstanding DIO requests to finish
* @inode: inode to wait for
*
* Waits for all pending direct I/O requests to finish so that we can
* proceed with a truncate or equivalent operation.
*
* Must be called under a lock that serializes taking new references
* to i_dio_count, usually by inode->i_mutex.
*/
static int inode_dio_wait_interruptible(struct inode *inode)
{
if (!atomic_read(&inode->i_dio_count))
return 0;
wait_queue_head_t *wq = bit_waitqueue(&inode->i_state, __I_DIO_WAKEUP);
DEFINE_WAIT_BIT(q, &inode->i_state, __I_DIO_WAKEUP);
for (;;) {
prepare_to_wait(wq, &q.wq_entry, TASK_INTERRUPTIBLE);
if (!atomic_read(&inode->i_dio_count))
break;
if (signal_pending(current))
break;
schedule();
}
finish_wait(wq, &q.wq_entry);
return atomic_read(&inode->i_dio_count) ? -ERESTARTSYS : 0;
}
/* Call with exclusively locked inode->i_rwsem */
static int netfs_block_o_direct(struct netfs_inode *ictx)
{
if (!test_bit(NETFS_ICTX_ODIRECT, &ictx->flags))
return 0;
clear_bit(NETFS_ICTX_ODIRECT, &ictx->flags);
return inode_dio_wait_interruptible(&ictx->inode);
}
/**
* netfs_start_io_read - declare the file is being used for buffered reads
* @inode: file inode
*
* Declare that a buffered read operation is about to start, and ensure
* that we block all direct I/O.
* On exit, the function ensures that the NETFS_ICTX_ODIRECT flag is unset,
* and holds a shared lock on inode->i_rwsem to ensure that the flag
* cannot be changed.
* In practice, this means that buffered read operations are allowed to
* execute in parallel, thanks to the shared lock, whereas direct I/O
* operations need to wait to grab an exclusive lock in order to set
* NETFS_ICTX_ODIRECT.
* Note that buffered writes and truncates both take a write lock on
* inode->i_rwsem, meaning that those are serialised w.r.t. the reads.
*/
int netfs_start_io_read(struct inode *inode)
__acquires(inode->i_rwsem)
{
struct netfs_inode *ictx = netfs_inode(inode);
/* Be an optimist! */
if (down_read_interruptible(&inode->i_rwsem) < 0)
return -ERESTARTSYS;
if (test_bit(NETFS_ICTX_ODIRECT, &ictx->flags) == 0)
return 0;
up_read(&inode->i_rwsem);
/* Slow path.... */
if (down_write_killable(&inode->i_rwsem) < 0)
return -ERESTARTSYS;
if (netfs_block_o_direct(ictx) < 0) {
up_write(&inode->i_rwsem);
return -ERESTARTSYS;
}
downgrade_write(&inode->i_rwsem);
return 0;
}
EXPORT_SYMBOL(netfs_start_io_read);
/**
* netfs_end_io_read - declare that the buffered read operation is done
* @inode: file inode
*
* Declare that a buffered read operation is done, and release the shared
* lock on inode->i_rwsem.
*/
void netfs_end_io_read(struct inode *inode)
__releases(inode->i_rwsem)
{
up_read(&inode->i_rwsem);
}
EXPORT_SYMBOL(netfs_end_io_read);
/**
* netfs_start_io_write - declare the file is being used for buffered writes
* @inode: file inode
*
* Declare that a buffered write operation is about to start, and ensure
* that we block all direct I/O.
*/
int netfs_start_io_write(struct inode *inode)
__acquires(inode->i_rwsem)
{
struct netfs_inode *ictx = netfs_inode(inode);
if (down_write_killable(&inode->i_rwsem) < 0)
return -ERESTARTSYS;
if (netfs_block_o_direct(ictx) < 0) {
up_write(&inode->i_rwsem);
return -ERESTARTSYS;
}
return 0;
}
EXPORT_SYMBOL(netfs_start_io_write);
/**
* netfs_end_io_write - declare that the buffered write operation is done
* @inode: file inode
*
* Declare that a buffered write operation is done, and release the
* lock on inode->i_rwsem.
*/
void netfs_end_io_write(struct inode *inode)
__releases(inode->i_rwsem)
{
up_write(&inode->i_rwsem);
}
EXPORT_SYMBOL(netfs_end_io_write);
/* Call with exclusively locked inode->i_rwsem */
static int netfs_block_buffered(struct inode *inode)
{
struct netfs_inode *ictx = netfs_inode(inode);
int ret;
if (!test_bit(NETFS_ICTX_ODIRECT, &ictx->flags)) {
set_bit(NETFS_ICTX_ODIRECT, &ictx->flags);
if (inode->i_mapping->nrpages != 0) {
unmap_mapping_range(inode->i_mapping, 0, 0, 0);
ret = filemap_fdatawait(inode->i_mapping);
if (ret < 0) {
clear_bit(NETFS_ICTX_ODIRECT, &ictx->flags);
return ret;
}
}
}
return 0;
}
/**
* netfs_start_io_direct - declare the file is being used for direct i/o
* @inode: file inode
*
* Declare that a direct I/O operation is about to start, and ensure
* that we block all buffered I/O.
* On exit, the function ensures that the NETFS_ICTX_ODIRECT flag is set,
* and holds a shared lock on inode->i_rwsem to ensure that the flag
* cannot be changed.
* In practice, this means that direct I/O operations are allowed to
* execute in parallel, thanks to the shared lock, whereas buffered I/O
* operations need to wait to grab an exclusive lock in order to clear
* NETFS_ICTX_ODIRECT.
* Note that buffered writes and truncates both take a write lock on
* inode->i_rwsem, meaning that those are serialised w.r.t. O_DIRECT.
*/
int netfs_start_io_direct(struct inode *inode)
__acquires(inode->i_rwsem)
{
struct netfs_inode *ictx = netfs_inode(inode);
int ret;
/* Be an optimist! */
if (down_read_interruptible(&inode->i_rwsem) < 0)
return -ERESTARTSYS;
if (test_bit(NETFS_ICTX_ODIRECT, &ictx->flags) != 0)
return 0;
up_read(&inode->i_rwsem);
/* Slow path.... */
if (down_write_killable(&inode->i_rwsem) < 0)
return -ERESTARTSYS;
ret = netfs_block_buffered(inode);
if (ret < 0) {
up_write(&inode->i_rwsem);
return ret;
}
downgrade_write(&inode->i_rwsem);
return 0;
}
EXPORT_SYMBOL(netfs_start_io_direct);
/**
* netfs_end_io_direct - declare that the direct i/o operation is done
* @inode: file inode
*
* Declare that a direct I/O operation is done, and release the shared
* lock on inode->i_rwsem.
*/
void netfs_end_io_direct(struct inode *inode)
__releases(inode->i_rwsem)
{
up_read(&inode->i_rwsem);
}
EXPORT_SYMBOL(netfs_end_io_direct);
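For completeness, a hedged sketch of the write-side bracketing these helpers are intended for; only the netfs_start_io_write()/netfs_end_io_write() pairing is taken from the code above, and myfs_perform_buffered_write() is a hypothetical placeholder.

static ssize_t myfs_perform_buffered_write(struct kiocb *iocb, struct iov_iter *from); /* hypothetical */

static ssize_t myfs_buffered_write(struct kiocb *iocb, struct iov_iter *from)
{
	struct inode *inode = file_inode(iocb->ki_filp);
	ssize_t ret;

	ret = netfs_start_io_write(inode);	/* excludes O_DIRECT I/O */
	if (ret < 0)
		return ret;

	ret = myfs_perform_buffered_write(iocb, from);

	netfs_end_io_write(inode);
	return ret;
}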


@ -7,6 +7,8 @@
#include <linux/module.h> #include <linux/module.h>
#include <linux/export.h> #include <linux/export.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include "internal.h" #include "internal.h"
#define CREATE_TRACE_POINTS #define CREATE_TRACE_POINTS
#include <trace/events/netfs.h> #include <trace/events/netfs.h>
@ -15,6 +17,113 @@ MODULE_DESCRIPTION("Network fs support");
MODULE_AUTHOR("Red Hat, Inc."); MODULE_AUTHOR("Red Hat, Inc.");
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
EXPORT_TRACEPOINT_SYMBOL(netfs_sreq);
unsigned netfs_debug; unsigned netfs_debug;
module_param_named(debug, netfs_debug, uint, S_IWUSR | S_IRUGO); module_param_named(debug, netfs_debug, uint, S_IWUSR | S_IRUGO);
MODULE_PARM_DESC(netfs_debug, "Netfs support debugging mask"); MODULE_PARM_DESC(netfs_debug, "Netfs support debugging mask");
#ifdef CONFIG_PROC_FS
LIST_HEAD(netfs_io_requests);
DEFINE_SPINLOCK(netfs_proc_lock);
static const char *netfs_origins[nr__netfs_io_origin] = {
[NETFS_READAHEAD] = "RA",
[NETFS_READPAGE] = "RP",
[NETFS_READ_FOR_WRITE] = "RW",
[NETFS_WRITEBACK] = "WB",
[NETFS_WRITETHROUGH] = "WT",
[NETFS_LAUNDER_WRITE] = "LW",
[NETFS_UNBUFFERED_WRITE] = "UW",
[NETFS_DIO_READ] = "DR",
[NETFS_DIO_WRITE] = "DW",
};
/*
* Generate a list of I/O requests in /proc/fs/netfs/requests
*/
static int netfs_requests_seq_show(struct seq_file *m, void *v)
{
struct netfs_io_request *rreq;
if (v == &netfs_io_requests) {
seq_puts(m,
"REQUEST OR REF FL ERR OPS COVERAGE\n"
"======== == === == ==== === =========\n"
);
return 0;
}
rreq = list_entry(v, struct netfs_io_request, proc_link);
seq_printf(m,
"%08x %s %3d %2lx %4d %3d @%04llx %zx/%zx",
rreq->debug_id,
netfs_origins[rreq->origin],
refcount_read(&rreq->ref),
rreq->flags,
rreq->error,
atomic_read(&rreq->nr_outstanding),
rreq->start, rreq->submitted, rreq->len);
seq_putc(m, '\n');
return 0;
}
static void *netfs_requests_seq_start(struct seq_file *m, loff_t *_pos)
__acquires(rcu)
{
rcu_read_lock();
return seq_list_start_head(&netfs_io_requests, *_pos);
}
static void *netfs_requests_seq_next(struct seq_file *m, void *v, loff_t *_pos)
{
return seq_list_next(v, &netfs_io_requests, _pos);
}
static void netfs_requests_seq_stop(struct seq_file *m, void *v)
__releases(rcu)
{
rcu_read_unlock();
}
static const struct seq_operations netfs_requests_seq_ops = {
.start = netfs_requests_seq_start,
.next = netfs_requests_seq_next,
.stop = netfs_requests_seq_stop,
.show = netfs_requests_seq_show,
};
#endif /* CONFIG_PROC_FS */
static int __init netfs_init(void)
{
int ret = -ENOMEM;
if (!proc_mkdir("fs/netfs", NULL))
goto error;
if (!proc_create_seq("fs/netfs/requests", S_IFREG | 0444, NULL,
&netfs_requests_seq_ops))
goto error_proc;
#ifdef CONFIG_FSCACHE_STATS
if (!proc_create_single("fs/netfs/stats", S_IFREG | 0444, NULL,
netfs_stats_show))
goto error_proc;
#endif
ret = fscache_init();
if (ret < 0)
goto error_proc;
return 0;
error_proc:
remove_proc_entry("fs/netfs", NULL);
error:
return ret;
}
fs_initcall(netfs_init);
static void __exit netfs_exit(void)
{
fscache_exit();
remove_proc_entry("fs/netfs", NULL);
}
module_exit(netfs_exit);

fs/netfs/misc.c: new file, 260 lines

@ -0,0 +1,260 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Miscellaneous routines.
*
* Copyright (C) 2023 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*/
#include <linux/swap.h>
#include "internal.h"
/*
* Attach a folio to the buffer and maybe set marks on it to say that we need
* to put the folio later and twiddle the pagecache flags.
*/
int netfs_xa_store_and_mark(struct xarray *xa, unsigned long index,
struct folio *folio, unsigned int flags,
gfp_t gfp_mask)
{
XA_STATE_ORDER(xas, xa, index, folio_order(folio));
retry:
xas_lock(&xas);
for (;;) {
xas_store(&xas, folio);
if (!xas_error(&xas))
break;
xas_unlock(&xas);
if (!xas_nomem(&xas, gfp_mask))
return xas_error(&xas);
goto retry;
}
if (flags & NETFS_FLAG_PUT_MARK)
xas_set_mark(&xas, NETFS_BUF_PUT_MARK);
if (flags & NETFS_FLAG_PAGECACHE_MARK)
xas_set_mark(&xas, NETFS_BUF_PAGECACHE_MARK);
xas_unlock(&xas);
return xas_error(&xas);
}
/*
* Create the specified range of folios in the buffer attached to the read
* request. The folios are marked with NETFS_BUF_PUT_MARK so that we know that
* these need freeing later.
*/
int netfs_add_folios_to_buffer(struct xarray *buffer,
struct address_space *mapping,
pgoff_t index, pgoff_t to, gfp_t gfp_mask)
{
struct folio *folio;
int ret;
if (to + 1 == index) /* Page range is inclusive */
return 0;
do {
/* TODO: Figure out what order folio can be allocated here */
folio = filemap_alloc_folio(readahead_gfp_mask(mapping), 0);
if (!folio)
return -ENOMEM;
folio->index = index;
ret = netfs_xa_store_and_mark(buffer, index, folio,
NETFS_FLAG_PUT_MARK, gfp_mask);
if (ret < 0) {
folio_put(folio);
return ret;
}
index += folio_nr_pages(folio);
} while (index <= to && index != 0);
return 0;
}
/*
* Clear an xarray buffer, putting a ref on the folios that have
* NETFS_BUF_PUT_MARK set.
*/
void netfs_clear_buffer(struct xarray *buffer)
{
struct folio *folio;
XA_STATE(xas, buffer, 0);
rcu_read_lock();
xas_for_each_marked(&xas, folio, ULONG_MAX, NETFS_BUF_PUT_MARK) {
folio_put(folio);
}
rcu_read_unlock();
xa_destroy(buffer);
}
/**
* netfs_dirty_folio - Mark folio dirty and pin a cache object for writeback
* @mapping: The mapping the folio belongs to.
* @folio: The folio being dirtied.
*
* Set the dirty flag on a folio and pin an in-use cache object in memory so
* that writeback can later write to it. This is intended to be called from
* the filesystem's ->dirty_folio() method.
*
* Return: true if the dirty flag was set on the folio, false otherwise.
*/
bool netfs_dirty_folio(struct address_space *mapping, struct folio *folio)
{
struct inode *inode = mapping->host;
struct netfs_inode *ictx = netfs_inode(inode);
struct fscache_cookie *cookie = netfs_i_cookie(ictx);
bool need_use = false;
_enter("");
if (!filemap_dirty_folio(mapping, folio))
return false;
if (!fscache_cookie_valid(cookie))
return true;
if (!(inode->i_state & I_PINNING_NETFS_WB)) {
spin_lock(&inode->i_lock);
if (!(inode->i_state & I_PINNING_NETFS_WB)) {
inode->i_state |= I_PINNING_NETFS_WB;
need_use = true;
}
spin_unlock(&inode->i_lock);
if (need_use)
fscache_use_cookie(cookie, true);
}
return true;
}
EXPORT_SYMBOL(netfs_dirty_folio);
/**
* netfs_unpin_writeback - Unpin writeback resources
* @inode: The inode on which the cookie resides
* @wbc: The writeback control
*
* Unpin the writeback resources pinned by netfs_dirty_folio(). This is
* intended to be called as/by the netfs's ->write_inode() method.
*/
int netfs_unpin_writeback(struct inode *inode, struct writeback_control *wbc)
{
struct fscache_cookie *cookie = netfs_i_cookie(netfs_inode(inode));
if (wbc->unpinned_netfs_wb)
fscache_unuse_cookie(cookie, NULL, NULL);
return 0;
}
EXPORT_SYMBOL(netfs_unpin_writeback);
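/* Illustrative use (not part of this patch): a filesystem's ->write_inode()
 * can simply defer to netfs_unpin_writeback(); the myfs_ name is hypothetical.
 */
static int myfs_write_inode(struct inode *inode, struct writeback_control *wbc)
{
	return netfs_unpin_writeback(inode, wbc);
}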
/**
* netfs_clear_inode_writeback - Clear writeback resources pinned by an inode
* @inode: The inode to clean up
* @aux: Auxiliary data to apply to the inode
*
* Clear any writeback resources held by an inode when the inode is evicted.
* This must be called before clear_inode() is called.
*/
void netfs_clear_inode_writeback(struct inode *inode, const void *aux)
{
struct fscache_cookie *cookie = netfs_i_cookie(netfs_inode(inode));
if (inode->i_state & I_PINNING_NETFS_WB) {
loff_t i_size = i_size_read(inode);
fscache_unuse_cookie(cookie, aux, &i_size);
}
}
EXPORT_SYMBOL(netfs_clear_inode_writeback);
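/* Illustrative use (not part of this patch): call netfs_clear_inode_writeback()
 * from ->evict_inode() before clear_inode().  The myfs_ name is hypothetical
 * and the NULL aux assumes the filesystem has no coherency data to pass.
 */
static void myfs_evict_inode(struct inode *inode)
{
	truncate_inode_pages_final(&inode->i_data);
	netfs_clear_inode_writeback(inode, NULL);
	clear_inode(inode);
}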
/**
* netfs_invalidate_folio - Invalidate or partially invalidate a folio
* @folio: Folio proposed for release
* @offset: Offset of the invalidated region
* @length: Length of the invalidated region
*
* Invalidate part or all of a folio for a network filesystem. The folio will
* be removed afterwards if the invalidated region covers the entire folio.
*/
void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length)
{
struct netfs_folio *finfo = NULL;
size_t flen = folio_size(folio);
_enter("{%lx},%zx,%zx", folio_index(folio), offset, length);
folio_wait_fscache(folio);
if (!folio_test_private(folio))
return;
finfo = netfs_folio_info(folio);
if (offset == 0 && length >= flen)
goto erase_completely;
if (finfo) {
/* We have a partially uptodate page from a streaming write. */
unsigned int fstart = finfo->dirty_offset;
unsigned int fend = fstart + finfo->dirty_len;
unsigned int end = offset + length;
if (offset >= fend)
return;
if (end <= fstart)
return;
if (offset <= fstart && end >= fend)
goto erase_completely;
if (offset <= fstart && end > fstart)
goto reduce_len;
if (offset > fstart && end >= fend)
goto move_start;
/* A partial write was split. The caller has already zeroed
* it, so just absorb the hole.
*/
}
return;
erase_completely:
netfs_put_group(netfs_folio_group(folio));
folio_detach_private(folio);
folio_clear_uptodate(folio);
kfree(finfo);
return;
reduce_len:
finfo->dirty_len = offset + length - finfo->dirty_offset;
return;
move_start:
finfo->dirty_len -= offset - finfo->dirty_offset;
finfo->dirty_offset = offset;
}
EXPORT_SYMBOL(netfs_invalidate_folio);
/**
* netfs_release_folio - Try to release a folio
* @folio: Folio proposed for release
* @gfp: Flags qualifying the release
*
* Request release of a folio and clean up its private state if it's not busy.
* Returns true if the folio can now be released, false if not
*/
bool netfs_release_folio(struct folio *folio, gfp_t gfp)
{
struct netfs_inode *ctx = netfs_inode(folio_inode(folio));
unsigned long long end;
end = folio_pos(folio) + folio_size(folio);
if (end > ctx->zero_point)
ctx->zero_point = end;
if (folio_test_private(folio))
return false;
if (folio_test_fscache(folio)) {
if (current_is_kswapd() || !(gfp & __GFP_FS))
return false;
folio_wait_fscache(folio);
}
fscache_note_page_release(netfs_i_cookie(ctx));
return true;
}
EXPORT_SYMBOL(netfs_release_folio);
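A hedged sketch of how the folio helpers above might be wired into a filesystem's address_space_operations; the myfs_ name is hypothetical and the other methods a real filesystem needs are omitted.

static const struct address_space_operations myfs_aops = {
	.dirty_folio		= netfs_dirty_folio,
	.invalidate_folio	= netfs_invalidate_folio,
	.release_folio		= netfs_release_folio,
	/* ->read_folio, ->readahead, ->writepages etc. omitted for brevity */
};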


@ -20,14 +20,20 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
struct inode *inode = file ? file_inode(file) : mapping->host; struct inode *inode = file ? file_inode(file) : mapping->host;
struct netfs_inode *ctx = netfs_inode(inode); struct netfs_inode *ctx = netfs_inode(inode);
struct netfs_io_request *rreq; struct netfs_io_request *rreq;
bool is_unbuffered = (origin == NETFS_UNBUFFERED_WRITE ||
origin == NETFS_DIO_READ ||
origin == NETFS_DIO_WRITE);
bool cached = !is_unbuffered && netfs_is_cache_enabled(ctx);
int ret; int ret;
rreq = kzalloc(sizeof(struct netfs_io_request), GFP_KERNEL); rreq = kzalloc(ctx->ops->io_request_size ?: sizeof(struct netfs_io_request),
GFP_KERNEL);
if (!rreq) if (!rreq)
return ERR_PTR(-ENOMEM); return ERR_PTR(-ENOMEM);
rreq->start = start; rreq->start = start;
rreq->len = len; rreq->len = len;
rreq->upper_len = len;
rreq->origin = origin; rreq->origin = origin;
rreq->netfs_ops = ctx->ops; rreq->netfs_ops = ctx->ops;
rreq->mapping = mapping; rreq->mapping = mapping;
@ -35,8 +41,14 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
rreq->i_size = i_size_read(inode); rreq->i_size = i_size_read(inode);
rreq->debug_id = atomic_inc_return(&debug_ids); rreq->debug_id = atomic_inc_return(&debug_ids);
INIT_LIST_HEAD(&rreq->subrequests); INIT_LIST_HEAD(&rreq->subrequests);
INIT_WORK(&rreq->work, NULL);
refcount_set(&rreq->ref, 1); refcount_set(&rreq->ref, 1);
__set_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags); __set_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
if (cached)
__set_bit(NETFS_RREQ_WRITE_TO_CACHE, &rreq->flags);
if (file && file->f_flags & O_NONBLOCK)
__set_bit(NETFS_RREQ_NONBLOCK, &rreq->flags);
if (rreq->netfs_ops->init_request) { if (rreq->netfs_ops->init_request) {
ret = rreq->netfs_ops->init_request(rreq, file); ret = rreq->netfs_ops->init_request(rreq, file);
if (ret < 0) { if (ret < 0) {
@ -45,6 +57,8 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
} }
} }
trace_netfs_rreq_ref(rreq->debug_id, 1, netfs_rreq_trace_new);
netfs_proc_add_rreq(rreq);
netfs_stat(&netfs_n_rh_rreq); netfs_stat(&netfs_n_rh_rreq);
return rreq; return rreq;
} }
@ -74,33 +88,47 @@ static void netfs_free_request(struct work_struct *work)
{ {
struct netfs_io_request *rreq = struct netfs_io_request *rreq =
container_of(work, struct netfs_io_request, work); container_of(work, struct netfs_io_request, work);
unsigned int i;
trace_netfs_rreq(rreq, netfs_rreq_trace_free); trace_netfs_rreq(rreq, netfs_rreq_trace_free);
netfs_proc_del_rreq(rreq);
netfs_clear_subrequests(rreq, false); netfs_clear_subrequests(rreq, false);
if (rreq->netfs_ops->free_request) if (rreq->netfs_ops->free_request)
rreq->netfs_ops->free_request(rreq); rreq->netfs_ops->free_request(rreq);
if (rreq->cache_resources.ops) if (rreq->cache_resources.ops)
rreq->cache_resources.ops->end_operation(&rreq->cache_resources); rreq->cache_resources.ops->end_operation(&rreq->cache_resources);
kfree(rreq); if (rreq->direct_bv) {
for (i = 0; i < rreq->direct_bv_count; i++) {
if (rreq->direct_bv[i].bv_page) {
if (rreq->direct_bv_unpin)
unpin_user_page(rreq->direct_bv[i].bv_page);
}
}
kvfree(rreq->direct_bv);
}
kfree_rcu(rreq, rcu);
netfs_stat_d(&netfs_n_rh_rreq); netfs_stat_d(&netfs_n_rh_rreq);
} }
void netfs_put_request(struct netfs_io_request *rreq, bool was_async, void netfs_put_request(struct netfs_io_request *rreq, bool was_async,
enum netfs_rreq_ref_trace what) enum netfs_rreq_ref_trace what)
{ {
unsigned int debug_id = rreq->debug_id; unsigned int debug_id;
bool dead; bool dead;
int r; int r;
dead = __refcount_dec_and_test(&rreq->ref, &r); if (rreq) {
trace_netfs_rreq_ref(debug_id, r - 1, what); debug_id = rreq->debug_id;
if (dead) { dead = __refcount_dec_and_test(&rreq->ref, &r);
if (was_async) { trace_netfs_rreq_ref(debug_id, r - 1, what);
rreq->work.func = netfs_free_request; if (dead) {
if (!queue_work(system_unbound_wq, &rreq->work)) if (was_async) {
BUG(); rreq->work.func = netfs_free_request;
} else { if (!queue_work(system_unbound_wq, &rreq->work))
netfs_free_request(&rreq->work); BUG();
} else {
netfs_free_request(&rreq->work);
}
} }
} }
} }
@ -112,8 +140,11 @@ struct netfs_io_subrequest *netfs_alloc_subrequest(struct netfs_io_request *rreq
{ {
struct netfs_io_subrequest *subreq; struct netfs_io_subrequest *subreq;
subreq = kzalloc(sizeof(struct netfs_io_subrequest), GFP_KERNEL); subreq = kzalloc(rreq->netfs_ops->io_subrequest_size ?:
sizeof(struct netfs_io_subrequest),
GFP_KERNEL);
if (subreq) { if (subreq) {
INIT_WORK(&subreq->work, NULL);
INIT_LIST_HEAD(&subreq->rreq_link); INIT_LIST_HEAD(&subreq->rreq_link);
refcount_set(&subreq->ref, 2); refcount_set(&subreq->ref, 2);
subreq->rreq = rreq; subreq->rreq = rreq;
@ -140,6 +171,8 @@ static void netfs_free_subrequest(struct netfs_io_subrequest *subreq,
struct netfs_io_request *rreq = subreq->rreq; struct netfs_io_request *rreq = subreq->rreq;
trace_netfs_sreq(subreq, netfs_sreq_trace_free); trace_netfs_sreq(subreq, netfs_sreq_trace_free);
if (rreq->netfs_ops->free_subrequest)
rreq->netfs_ops->free_subrequest(subreq);
kfree(subreq); kfree(subreq);
netfs_stat_d(&netfs_n_rh_sreq); netfs_stat_d(&netfs_n_rh_sreq);
netfs_put_request(rreq, was_async, netfs_rreq_trace_put_subreq); netfs_put_request(rreq, was_async, netfs_rreq_trace_put_subreq);

fs/netfs/output.c (new file, 478 lines added)

@ -0,0 +1,478 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Network filesystem high-level write support.
*
* Copyright (C) 2023 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*/
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/slab.h>
#include <linux/writeback.h>
#include <linux/pagevec.h>
#include "internal.h"
/**
* netfs_create_write_request - Create a write operation.
* @wreq: The write request this is storing from.
* @dest: The destination type
* @start: Start of the region this write will modify
* @len: Length of the modification
* @worker: The worker function to handle the write(s)
*
* Allocate a write operation, set it up and add it to the list on a write
* request.
*/
struct netfs_io_subrequest *netfs_create_write_request(struct netfs_io_request *wreq,
enum netfs_io_source dest,
loff_t start, size_t len,
work_func_t worker)
{
struct netfs_io_subrequest *subreq;
subreq = netfs_alloc_subrequest(wreq);
if (subreq) {
INIT_WORK(&subreq->work, worker);
subreq->source = dest;
subreq->start = start;
subreq->len = len;
subreq->debug_index = wreq->subreq_counter++;
switch (subreq->source) {
case NETFS_UPLOAD_TO_SERVER:
netfs_stat(&netfs_n_wh_upload);
break;
case NETFS_WRITE_TO_CACHE:
netfs_stat(&netfs_n_wh_write);
break;
default:
BUG();
}
subreq->io_iter = wreq->io_iter;
iov_iter_advance(&subreq->io_iter, subreq->start - wreq->start);
iov_iter_truncate(&subreq->io_iter, subreq->len);
trace_netfs_sreq_ref(wreq->debug_id, subreq->debug_index,
refcount_read(&subreq->ref),
netfs_sreq_trace_new);
atomic_inc(&wreq->nr_outstanding);
list_add_tail(&subreq->rreq_link, &wreq->subrequests);
trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
}
return subreq;
}
EXPORT_SYMBOL(netfs_create_write_request);
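
For context, a sketch of the filesystem side: a ->create_write_requests() implementation (loosely modelled on the afs conversion in this series) creates an upload subrequest and queues it; the worker performs the I/O and reports the result via netfs_write_subrequest_terminated(). myfs_issue_write() is a hypothetical stand-in for the filesystem's actual server write.

static void myfs_upload_worker(struct work_struct *work)
{
	struct netfs_io_subrequest *subreq =
		container_of(work, struct netfs_io_subrequest, work);
	ssize_t ret;

	/* Send subreq->io_iter, covering [start, start + len), to the server. */
	ret = myfs_issue_write(subreq);
	netfs_write_subrequest_terminated(subreq,
					  ret < 0 ? ret : subreq->len, false);
}

static void myfs_create_write_requests(struct netfs_io_request *wreq,
				       loff_t start, size_t len)
{
	struct netfs_io_subrequest *subreq;

	subreq = netfs_create_write_request(wreq, NETFS_UPLOAD_TO_SERVER,
					    start, len, myfs_upload_worker);
	if (subreq)
		netfs_queue_write_request(subreq);
}
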
/*
* Process a completed write request once all the component operations have
* been completed.
*/
static void netfs_write_terminated(struct netfs_io_request *wreq, bool was_async)
{
struct netfs_io_subrequest *subreq;
struct netfs_inode *ctx = netfs_inode(wreq->inode);
size_t transferred = 0;
_enter("R=%x[]", wreq->debug_id);
trace_netfs_rreq(wreq, netfs_rreq_trace_write_done);
list_for_each_entry(subreq, &wreq->subrequests, rreq_link) {
if (subreq->error || subreq->transferred == 0)
break;
transferred += subreq->transferred;
if (subreq->transferred < subreq->len)
break;
}
wreq->transferred = transferred;
list_for_each_entry(subreq, &wreq->subrequests, rreq_link) {
if (!subreq->error)
continue;
switch (subreq->source) {
case NETFS_UPLOAD_TO_SERVER:
/* Depending on the type of failure, this may prevent
* writeback completion unless we're in disconnected
* mode.
*/
if (!wreq->error)
wreq->error = subreq->error;
break;
case NETFS_WRITE_TO_CACHE:
/* Failure doesn't prevent writeback completion unless
* we're in disconnected mode.
*/
if (subreq->error != -ENOBUFS)
ctx->ops->invalidate_cache(wreq);
break;
default:
WARN_ON_ONCE(1);
if (!wreq->error)
wreq->error = -EIO;
return;
}
}
wreq->cleanup(wreq);
if (wreq->origin == NETFS_DIO_WRITE &&
wreq->mapping->nrpages) {
pgoff_t first = wreq->start >> PAGE_SHIFT;
pgoff_t last = (wreq->start + wreq->transferred - 1) >> PAGE_SHIFT;
invalidate_inode_pages2_range(wreq->mapping, first, last);
}
if (wreq->origin == NETFS_DIO_WRITE)
inode_dio_end(wreq->inode);
_debug("finished");
trace_netfs_rreq(wreq, netfs_rreq_trace_wake_ip);
clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &wreq->flags);
wake_up_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS);
if (wreq->iocb) {
wreq->iocb->ki_pos += transferred;
if (wreq->iocb->ki_complete)
wreq->iocb->ki_complete(
wreq->iocb, wreq->error ? wreq->error : transferred);
}
netfs_clear_subrequests(wreq, was_async);
netfs_put_request(wreq, was_async, netfs_rreq_trace_put_complete);
}
/*
* Deal with the completion of writing the data to the cache.
*/
void netfs_write_subrequest_terminated(void *_op, ssize_t transferred_or_error,
bool was_async)
{
struct netfs_io_subrequest *subreq = _op;
struct netfs_io_request *wreq = subreq->rreq;
unsigned int u;
_enter("%x[%x] %zd", wreq->debug_id, subreq->debug_index, transferred_or_error);
switch (subreq->source) {
case NETFS_UPLOAD_TO_SERVER:
netfs_stat(&netfs_n_wh_upload_done);
break;
case NETFS_WRITE_TO_CACHE:
netfs_stat(&netfs_n_wh_write_done);
break;
case NETFS_INVALID_WRITE:
break;
default:
BUG();
}
if (IS_ERR_VALUE(transferred_or_error)) {
subreq->error = transferred_or_error;
trace_netfs_failure(wreq, subreq, transferred_or_error,
netfs_fail_write);
goto failed;
}
if (WARN(transferred_or_error > subreq->len - subreq->transferred,
"Subreq excess write: R%x[%x] %zd > %zu - %zu",
wreq->debug_id, subreq->debug_index,
transferred_or_error, subreq->len, subreq->transferred))
transferred_or_error = subreq->len - subreq->transferred;
subreq->error = 0;
subreq->transferred += transferred_or_error;
if (iov_iter_count(&subreq->io_iter) != subreq->len - subreq->transferred)
pr_warn("R=%08x[%u] ITER POST-MISMATCH %zx != %zx-%zx %x\n",
wreq->debug_id, subreq->debug_index,
iov_iter_count(&subreq->io_iter), subreq->len,
subreq->transferred, subreq->io_iter.iter_type);
if (subreq->transferred < subreq->len)
goto incomplete;
__clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags);
out:
trace_netfs_sreq(subreq, netfs_sreq_trace_terminated);
/* If we decrement nr_outstanding to 0, the ref belongs to us. */
u = atomic_dec_return(&wreq->nr_outstanding);
if (u == 0)
netfs_write_terminated(wreq, was_async);
else if (u == 1)
wake_up_var(&wreq->nr_outstanding);
netfs_put_subrequest(subreq, was_async, netfs_sreq_trace_put_terminated);
return;
incomplete:
if (transferred_or_error == 0) {
if (__test_and_set_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags)) {
subreq->error = -ENODATA;
goto failed;
}
} else {
__clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags);
}
__set_bit(NETFS_SREQ_SHORT_IO, &subreq->flags);
set_bit(NETFS_RREQ_INCOMPLETE_IO, &wreq->flags);
goto out;
failed:
switch (subreq->source) {
case NETFS_WRITE_TO_CACHE:
netfs_stat(&netfs_n_wh_write_failed);
set_bit(NETFS_RREQ_INCOMPLETE_IO, &wreq->flags);
break;
case NETFS_UPLOAD_TO_SERVER:
netfs_stat(&netfs_n_wh_upload_failed);
set_bit(NETFS_RREQ_FAILED, &wreq->flags);
wreq->error = subreq->error;
break;
default:
break;
}
goto out;
}
EXPORT_SYMBOL(netfs_write_subrequest_terminated);
static void netfs_write_to_cache_op(struct netfs_io_subrequest *subreq)
{
struct netfs_io_request *wreq = subreq->rreq;
struct netfs_cache_resources *cres = &wreq->cache_resources;
trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
cres->ops->write(cres, subreq->start, &subreq->io_iter,
netfs_write_subrequest_terminated, subreq);
}
static void netfs_write_to_cache_op_worker(struct work_struct *work)
{
struct netfs_io_subrequest *subreq =
container_of(work, struct netfs_io_subrequest, work);
netfs_write_to_cache_op(subreq);
}
/**
* netfs_queue_write_request - Queue a write request for attention
* @subreq: The write request to be queued
*
* Queue the specified write request for processing by a worker thread. We
* pass the caller's ref on the request to the worker thread.
*/
void netfs_queue_write_request(struct netfs_io_subrequest *subreq)
{
if (!queue_work(system_unbound_wq, &subreq->work))
netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_wip);
}
EXPORT_SYMBOL(netfs_queue_write_request);
/*
* Set up a op for writing to the cache.
*/
static void netfs_set_up_write_to_cache(struct netfs_io_request *wreq)
{
struct netfs_cache_resources *cres = &wreq->cache_resources;
struct netfs_io_subrequest *subreq;
struct netfs_inode *ctx = netfs_inode(wreq->inode);
struct fscache_cookie *cookie = netfs_i_cookie(ctx);
loff_t start = wreq->start;
size_t len = wreq->len;
int ret;
if (!fscache_cookie_enabled(cookie)) {
clear_bit(NETFS_RREQ_WRITE_TO_CACHE, &wreq->flags);
return;
}
_debug("write to cache");
ret = fscache_begin_write_operation(cres, cookie);
if (ret < 0)
return;
ret = cres->ops->prepare_write(cres, &start, &len, wreq->upper_len,
i_size_read(wreq->inode), true);
if (ret < 0)
return;
subreq = netfs_create_write_request(wreq, NETFS_WRITE_TO_CACHE, start, len,
netfs_write_to_cache_op_worker);
if (!subreq)
return;
netfs_write_to_cache_op(subreq);
}
/*
* Begin the process of writing out a chunk of data.
*
* We are given a write request that holds a series of dirty regions and
* (partially) covers a sequence of folios, all of which are present. The
* pages must have been marked as writeback as appropriate.
*
* We need to perform the following steps:
*
* (1) If encrypting, create an output buffer and encrypt each block of the
* data into it, otherwise the output buffer will point to the original
* folios.
*
* (2) If the data is to be cached, set up a write op for the entire output
* buffer to the cache, if the cache wants to accept it.
*
* (3) If the data is to be uploaded (ie. not merely cached):
*
* (a) If the data is to be compressed, create a compression buffer and
* compress the data into it.
*
* (b) For each destination we want to upload to, set up write ops to write
* to that destination. We may need multiple writes if the data is not
* contiguous or the span exceeds wsize for a server.
*/
int netfs_begin_write(struct netfs_io_request *wreq, bool may_wait,
enum netfs_write_trace what)
{
struct netfs_inode *ctx = netfs_inode(wreq->inode);
_enter("R=%x %llx-%llx f=%lx",
wreq->debug_id, wreq->start, wreq->start + wreq->len - 1,
wreq->flags);
trace_netfs_write(wreq, what);
if (wreq->len == 0 || wreq->iter.count == 0) {
pr_err("Zero-sized write [R=%x]\n", wreq->debug_id);
return -EIO;
}
if (wreq->origin == NETFS_DIO_WRITE)
inode_dio_begin(wreq->inode);
wreq->io_iter = wreq->iter;
/* ->outstanding > 0 carries a ref */
netfs_get_request(wreq, netfs_rreq_trace_get_for_outstanding);
atomic_set(&wreq->nr_outstanding, 1);
/* Start the encryption/compression going. We can do that in the
* background whilst we generate a list of write ops that we want to
* perform.
*/
// TODO: Encrypt or compress the region as appropriate
/* We need to write all of the region to the cache */
if (test_bit(NETFS_RREQ_WRITE_TO_CACHE, &wreq->flags))
netfs_set_up_write_to_cache(wreq);
/* However, we don't necessarily write all of the region to the server.
* Caching of reads is being managed this way also.
*/
if (test_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags))
ctx->ops->create_write_requests(wreq, wreq->start, wreq->len);
if (atomic_dec_and_test(&wreq->nr_outstanding))
netfs_write_terminated(wreq, false);
if (!may_wait)
return -EIOCBQUEUED;
wait_on_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS,
TASK_UNINTERRUPTIBLE);
return wreq->error;
}
/*
* Begin a write operation for writing through the pagecache.
*/
struct netfs_io_request *netfs_begin_writethrough(struct kiocb *iocb, size_t len)
{
struct netfs_io_request *wreq;
struct file *file = iocb->ki_filp;
wreq = netfs_alloc_request(file->f_mapping, file, iocb->ki_pos, len,
NETFS_WRITETHROUGH);
if (IS_ERR(wreq))
return wreq;
trace_netfs_write(wreq, netfs_write_trace_writethrough);
__set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags);
iov_iter_xarray(&wreq->iter, ITER_SOURCE, &wreq->mapping->i_pages, wreq->start, 0);
wreq->io_iter = wreq->iter;
/* ->outstanding > 0 carries a ref */
netfs_get_request(wreq, netfs_rreq_trace_get_for_outstanding);
atomic_set(&wreq->nr_outstanding, 1);
return wreq;
}
static void netfs_submit_writethrough(struct netfs_io_request *wreq, bool final)
{
struct netfs_inode *ictx = netfs_inode(wreq->inode);
unsigned long long start;
size_t len;
if (!test_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags))
return;
start = wreq->start + wreq->submitted;
len = wreq->iter.count - wreq->submitted;
if (!final) {
len /= wreq->wsize; /* Round to number of maximum packets */
len *= wreq->wsize;
}
ictx->ops->create_write_requests(wreq, start, len);
wreq->submitted += len;
}
/*
* Advance the state of the write operation used when writing through the
* pagecache. Data has been copied into the pagecache that we need to append
* to the request. If we've added more than wsize then we need to create a new
* subrequest.
*/
int netfs_advance_writethrough(struct netfs_io_request *wreq, size_t copied, bool to_page_end)
{
_enter("ic=%zu sb=%zu ws=%u cp=%zu tp=%u",
wreq->iter.count, wreq->submitted, wreq->wsize, copied, to_page_end);
wreq->iter.count += copied;
wreq->io_iter.count += copied;
if (to_page_end && wreq->io_iter.count - wreq->submitted >= wreq->wsize)
netfs_submit_writethrough(wreq, false);
return wreq->error;
}
/*
* End a write operation used when writing through the pagecache.
*/
int netfs_end_writethrough(struct netfs_io_request *wreq, struct kiocb *iocb)
{
int ret = -EIOCBQUEUED;
_enter("ic=%zu sb=%zu ws=%u",
wreq->iter.count, wreq->submitted, wreq->wsize);
if (wreq->submitted < wreq->io_iter.count)
netfs_submit_writethrough(wreq, true);
if (atomic_dec_and_test(&wreq->nr_outstanding))
netfs_write_terminated(wreq, false);
if (is_sync_kiocb(iocb)) {
wait_on_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS,
TASK_UNINTERRUPTIBLE);
ret = wreq->error;
}
netfs_put_request(wreq, false, netfs_rreq_trace_put_return);
return ret;
}
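
Roughly, this is the sequence netfs_perform_write() drives when write-through is in effect (O_*SYNC or NETFS_ICTX_WRITETHROUGH): begin a request, advance it as data is copied into the pagecache, then end it and wait if the iocb is synchronous. These three are netfslib-internal helpers, so filesystems reach them only through netfs_perform_write(); the much-simplified sketch below elides the folio copy.

static int writethrough_flow_sketch(struct kiocb *iocb, struct iov_iter *from)
{
	struct netfs_io_request *wreq;
	int ret = 0;

	wreq = netfs_begin_writethrough(iocb, iov_iter_count(from));
	if (IS_ERR(wreq))
		return PTR_ERR(wreq);

	while (iov_iter_count(from) && ret >= 0) {
		size_t copied = min_t(size_t, iov_iter_count(from), PAGE_SIZE);

		/* ...copy 'copied' bytes into a pagecache folio here... */
		iov_iter_advance(from, copied);
		ret = netfs_advance_writethrough(wreq, copied, true);
	}

	/* Returns 0/-error for sync iocbs, -EIOCBQUEUED otherwise;
	 * netfs_perform_write() itself returns the bytes copied.
	 */
	return netfs_end_writethrough(wreq, iocb);
}
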


@ -9,6 +9,8 @@
#include <linux/seq_file.h> #include <linux/seq_file.h>
#include "internal.h" #include "internal.h"
atomic_t netfs_n_rh_dio_read;
atomic_t netfs_n_rh_dio_write;
atomic_t netfs_n_rh_readahead; atomic_t netfs_n_rh_readahead;
atomic_t netfs_n_rh_readpage; atomic_t netfs_n_rh_readpage;
atomic_t netfs_n_rh_rreq; atomic_t netfs_n_rh_rreq;
@ -27,32 +29,48 @@ atomic_t netfs_n_rh_write_begin;
atomic_t netfs_n_rh_write_done; atomic_t netfs_n_rh_write_done;
atomic_t netfs_n_rh_write_failed; atomic_t netfs_n_rh_write_failed;
atomic_t netfs_n_rh_write_zskip; atomic_t netfs_n_rh_write_zskip;
atomic_t netfs_n_wh_wstream_conflict;
atomic_t netfs_n_wh_upload;
atomic_t netfs_n_wh_upload_done;
atomic_t netfs_n_wh_upload_failed;
atomic_t netfs_n_wh_write;
atomic_t netfs_n_wh_write_done;
atomic_t netfs_n_wh_write_failed;
void netfs_stats_show(struct seq_file *m) int netfs_stats_show(struct seq_file *m, void *v)
{ {
seq_printf(m, "RdHelp : RA=%u RP=%u WB=%u WBZ=%u rr=%u sr=%u\n", seq_printf(m, "Netfs : DR=%u DW=%u RA=%u RP=%u WB=%u WBZ=%u\n",
atomic_read(&netfs_n_rh_dio_read),
atomic_read(&netfs_n_rh_dio_write),
atomic_read(&netfs_n_rh_readahead), atomic_read(&netfs_n_rh_readahead),
atomic_read(&netfs_n_rh_readpage), atomic_read(&netfs_n_rh_readpage),
atomic_read(&netfs_n_rh_write_begin), atomic_read(&netfs_n_rh_write_begin),
atomic_read(&netfs_n_rh_write_zskip), atomic_read(&netfs_n_rh_write_zskip));
atomic_read(&netfs_n_rh_rreq), seq_printf(m, "Netfs : ZR=%u sh=%u sk=%u\n",
atomic_read(&netfs_n_rh_sreq));
seq_printf(m, "RdHelp : ZR=%u sh=%u sk=%u\n",
atomic_read(&netfs_n_rh_zero), atomic_read(&netfs_n_rh_zero),
atomic_read(&netfs_n_rh_short_read), atomic_read(&netfs_n_rh_short_read),
atomic_read(&netfs_n_rh_write_zskip)); atomic_read(&netfs_n_rh_write_zskip));
seq_printf(m, "RdHelp : DL=%u ds=%u df=%u di=%u\n", seq_printf(m, "Netfs : DL=%u ds=%u df=%u di=%u\n",
atomic_read(&netfs_n_rh_download), atomic_read(&netfs_n_rh_download),
atomic_read(&netfs_n_rh_download_done), atomic_read(&netfs_n_rh_download_done),
atomic_read(&netfs_n_rh_download_failed), atomic_read(&netfs_n_rh_download_failed),
atomic_read(&netfs_n_rh_download_instead)); atomic_read(&netfs_n_rh_download_instead));
seq_printf(m, "RdHelp : RD=%u rs=%u rf=%u\n", seq_printf(m, "Netfs : RD=%u rs=%u rf=%u\n",
atomic_read(&netfs_n_rh_read), atomic_read(&netfs_n_rh_read),
atomic_read(&netfs_n_rh_read_done), atomic_read(&netfs_n_rh_read_done),
atomic_read(&netfs_n_rh_read_failed)); atomic_read(&netfs_n_rh_read_failed));
seq_printf(m, "RdHelp : WR=%u ws=%u wf=%u\n", seq_printf(m, "Netfs : UL=%u us=%u uf=%u\n",
atomic_read(&netfs_n_rh_write), atomic_read(&netfs_n_wh_upload),
atomic_read(&netfs_n_rh_write_done), atomic_read(&netfs_n_wh_upload_done),
atomic_read(&netfs_n_rh_write_failed)); atomic_read(&netfs_n_wh_upload_failed));
seq_printf(m, "Netfs : WR=%u ws=%u wf=%u\n",
atomic_read(&netfs_n_wh_write),
atomic_read(&netfs_n_wh_write_done),
atomic_read(&netfs_n_wh_write_failed));
seq_printf(m, "Netfs : rr=%u sr=%u wsc=%u\n",
atomic_read(&netfs_n_rh_rreq),
atomic_read(&netfs_n_rh_sreq),
atomic_read(&netfs_n_wh_wstream_conflict));
return fscache_stats_show(m);
} }
EXPORT_SYMBOL(netfs_stats_show); EXPORT_SYMBOL(netfs_stats_show);
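
With the int-returning signature, the aggregate stats (which now chain into fscache_stats_show()) can be exposed through a single-show procfs file. One plausible wiring, assuming the standard proc_create_single() helper; the init-function name is hypothetical:

static int __init netfs_stats_proc_sketch(void)
{
	if (!proc_mkdir("fs/netfs", NULL))
		return -ENOMEM;
	if (!proc_create_single("fs/netfs/stats", S_IFREG | 0444, NULL,
				netfs_stats_show))
		return -ENOMEM;
	return 0;
}
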


@ -169,8 +169,8 @@ config ROOT_NFS
config NFS_FSCACHE config NFS_FSCACHE
bool "Provide NFS client caching support" bool "Provide NFS client caching support"
depends on NFS_FS=m && FSCACHE || NFS_FS=y && FSCACHE=y depends on NFS_FS=m && NETFS_SUPPORT || NFS_FS=y && NETFS_SUPPORT=y
select NETFS_SUPPORT select FSCACHE
help help
Say Y here if you want NFS data to be cached locally on disc through Say Y here if you want NFS data to be cached locally on disc through
the general filesystem cache manager the general filesystem cache manager


@ -274,12 +274,6 @@ static void nfs_netfs_free_request(struct netfs_io_request *rreq)
put_nfs_open_context(rreq->netfs_priv); put_nfs_open_context(rreq->netfs_priv);
} }
static inline int nfs_netfs_begin_cache_operation(struct netfs_io_request *rreq)
{
return fscache_begin_read_operation(&rreq->cache_resources,
netfs_i_cookie(netfs_inode(rreq->inode)));
}
static struct nfs_netfs_io_data *nfs_netfs_alloc(struct netfs_io_subrequest *sreq) static struct nfs_netfs_io_data *nfs_netfs_alloc(struct netfs_io_subrequest *sreq)
{ {
struct nfs_netfs_io_data *netfs; struct nfs_netfs_io_data *netfs;
@ -387,7 +381,6 @@ void nfs_netfs_read_completion(struct nfs_pgio_header *hdr)
const struct netfs_request_ops nfs_netfs_ops = { const struct netfs_request_ops nfs_netfs_ops = {
.init_request = nfs_netfs_init_request, .init_request = nfs_netfs_init_request,
.free_request = nfs_netfs_free_request, .free_request = nfs_netfs_free_request,
.begin_cache_operation = nfs_netfs_begin_cache_operation,
.issue_read = nfs_netfs_issue_read, .issue_read = nfs_netfs_issue_read,
.clamp_length = nfs_netfs_clamp_length .clamp_length = nfs_netfs_clamp_length
}; };


@ -80,7 +80,7 @@ static inline void nfs_netfs_put(struct nfs_netfs_io_data *netfs)
} }
static inline void nfs_netfs_inode_init(struct nfs_inode *nfsi) static inline void nfs_netfs_inode_init(struct nfs_inode *nfsi)
{ {
netfs_inode_init(&nfsi->netfs, &nfs_netfs_ops); netfs_inode_init(&nfsi->netfs, &nfs_netfs_ops, false);
} }
extern void nfs_netfs_initiate_read(struct nfs_pgio_header *hdr); extern void nfs_netfs_initiate_read(struct nfs_pgio_header *hdr);
extern void nfs_netfs_read_completion(struct nfs_pgio_header *hdr); extern void nfs_netfs_read_completion(struct nfs_pgio_header *hdr);


@ -430,7 +430,7 @@ static void
cifs_evict_inode(struct inode *inode) cifs_evict_inode(struct inode *inode)
{ {
truncate_inode_pages_final(&inode->i_data); truncate_inode_pages_final(&inode->i_data);
if (inode->i_state & I_PINNING_FSCACHE_WB) if (inode->i_state & I_PINNING_NETFS_WB)
cifs_fscache_unuse_inode_cookie(inode, true); cifs_fscache_unuse_inode_cookie(inode, true);
cifs_fscache_release_inode_cookie(inode); cifs_fscache_release_inode_cookie(inode);
clear_inode(inode); clear_inode(inode);
@ -793,8 +793,7 @@ static int cifs_show_stats(struct seq_file *s, struct dentry *root)
static int cifs_write_inode(struct inode *inode, struct writeback_control *wbc) static int cifs_write_inode(struct inode *inode, struct writeback_control *wbc)
{ {
fscache_unpin_writeback(wbc, cifs_inode_cookie(inode)); return netfs_unpin_writeback(inode, wbc);
return 0;
} }
static int cifs_drop_inode(struct inode *inode) static int cifs_drop_inode(struct inode *inode)
@ -1222,7 +1221,7 @@ static int cifs_precopy_set_eof(struct inode *src_inode, struct cifsInodeInfo *s
if (rc < 0) if (rc < 0)
goto set_failed; goto set_failed;
netfs_resize_file(&src_cifsi->netfs, src_end); netfs_resize_file(&src_cifsi->netfs, src_end, true);
fscache_resize_cookie(cifs_inode_cookie(src_inode), src_end); fscache_resize_cookie(cifs_inode_cookie(src_inode), src_end);
return 0; return 0;
@ -1353,7 +1352,7 @@ static loff_t cifs_remap_file_range(struct file *src_file, loff_t off,
smb_file_src, smb_file_target, off, len, destoff); smb_file_src, smb_file_target, off, len, destoff);
if (rc == 0 && new_size > i_size_read(target_inode)) { if (rc == 0 && new_size > i_size_read(target_inode)) {
truncate_setsize(target_inode, new_size); truncate_setsize(target_inode, new_size);
netfs_resize_file(&target_cifsi->netfs, new_size); netfs_resize_file(&target_cifsi->netfs, new_size, true);
fscache_resize_cookie(cifs_inode_cookie(target_inode), fscache_resize_cookie(cifs_inode_cookie(target_inode),
new_size); new_size);
} }


@ -5043,27 +5043,13 @@ static void cifs_swap_deactivate(struct file *file)
/* do we need to unpin (or unlock) the file */ /* do we need to unpin (or unlock) the file */
} }
/*
* Mark a page as having been made dirty and thus needing writeback. We also
* need to pin the cache object to write back to.
*/
#ifdef CONFIG_CIFS_FSCACHE
static bool cifs_dirty_folio(struct address_space *mapping, struct folio *folio)
{
return fscache_dirty_folio(mapping, folio,
cifs_inode_cookie(mapping->host));
}
#else
#define cifs_dirty_folio filemap_dirty_folio
#endif
const struct address_space_operations cifs_addr_ops = { const struct address_space_operations cifs_addr_ops = {
.read_folio = cifs_read_folio, .read_folio = cifs_read_folio,
.readahead = cifs_readahead, .readahead = cifs_readahead,
.writepages = cifs_writepages, .writepages = cifs_writepages,
.write_begin = cifs_write_begin, .write_begin = cifs_write_begin,
.write_end = cifs_write_end, .write_end = cifs_write_end,
.dirty_folio = cifs_dirty_folio, .dirty_folio = netfs_dirty_folio,
.release_folio = cifs_release_folio, .release_folio = cifs_release_folio,
.direct_IO = cifs_direct_io, .direct_IO = cifs_direct_io,
.invalidate_folio = cifs_invalidate_folio, .invalidate_folio = cifs_invalidate_folio,
@ -5087,7 +5073,7 @@ const struct address_space_operations cifs_addr_ops_smallbuf = {
.writepages = cifs_writepages, .writepages = cifs_writepages,
.write_begin = cifs_write_begin, .write_begin = cifs_write_begin,
.write_end = cifs_write_end, .write_end = cifs_write_end,
.dirty_folio = cifs_dirty_folio, .dirty_folio = netfs_dirty_folio,
.release_folio = cifs_release_folio, .release_folio = cifs_release_folio,
.invalidate_folio = cifs_invalidate_folio, .invalidate_folio = cifs_invalidate_folio,
.launder_folio = cifs_launder_folio, .launder_folio = cifs_launder_folio,


@ -180,7 +180,7 @@ static int fscache_fallback_write_pages(struct inode *inode, loff_t start, size_
if (ret < 0) if (ret < 0)
return ret; return ret;
ret = cres.ops->prepare_write(&cres, &start, &len, i_size_read(inode), ret = cres.ops->prepare_write(&cres, &start, &len, len, i_size_read(inode),
no_space_allocated_yet); no_space_allocated_yet);
if (ret == 0) if (ret == 0)
ret = fscache_write(&cres, start, &iter, NULL, NULL); ret = fscache_write(&cres, start, &iter, NULL, NULL);


@ -2371,7 +2371,7 @@ static inline void kiocb_clone(struct kiocb *kiocb, struct kiocb *kiocb_src,
#define I_CREATING (1 << 15) #define I_CREATING (1 << 15)
#define I_DONTCACHE (1 << 16) #define I_DONTCACHE (1 << 16)
#define I_SYNC_QUEUED (1 << 17) #define I_SYNC_QUEUED (1 << 17)
#define I_PINNING_FSCACHE_WB (1 << 18) #define I_PINNING_NETFS_WB (1 << 18)
#define I_DIRTY_INODE (I_DIRTY_SYNC | I_DIRTY_DATASYNC) #define I_DIRTY_INODE (I_DIRTY_SYNC | I_DIRTY_DATASYNC)
#define I_DIRTY (I_DIRTY_INODE | I_DIRTY_PAGES) #define I_DIRTY (I_DIRTY_INODE | I_DIRTY_PAGES)


@ -189,17 +189,20 @@ extern atomic_t fscache_n_write;
extern atomic_t fscache_n_no_write_space; extern atomic_t fscache_n_no_write_space;
extern atomic_t fscache_n_no_create_space; extern atomic_t fscache_n_no_create_space;
extern atomic_t fscache_n_culled; extern atomic_t fscache_n_culled;
extern atomic_t fscache_n_dio_misfit;
#define fscache_count_read() atomic_inc(&fscache_n_read) #define fscache_count_read() atomic_inc(&fscache_n_read)
#define fscache_count_write() atomic_inc(&fscache_n_write) #define fscache_count_write() atomic_inc(&fscache_n_write)
#define fscache_count_no_write_space() atomic_inc(&fscache_n_no_write_space) #define fscache_count_no_write_space() atomic_inc(&fscache_n_no_write_space)
#define fscache_count_no_create_space() atomic_inc(&fscache_n_no_create_space) #define fscache_count_no_create_space() atomic_inc(&fscache_n_no_create_space)
#define fscache_count_culled() atomic_inc(&fscache_n_culled) #define fscache_count_culled() atomic_inc(&fscache_n_culled)
#define fscache_count_dio_misfit() atomic_inc(&fscache_n_dio_misfit)
#else #else
#define fscache_count_read() do {} while(0) #define fscache_count_read() do {} while(0)
#define fscache_count_write() do {} while(0) #define fscache_count_write() do {} while(0)
#define fscache_count_no_write_space() do {} while(0) #define fscache_count_no_write_space() do {} while(0)
#define fscache_count_no_create_space() do {} while(0) #define fscache_count_no_create_space() do {} while(0)
#define fscache_count_culled() do {} while(0) #define fscache_count_culled() do {} while(0)
#define fscache_count_dio_misfit() do {} while(0)
#endif #endif
#endif /* _LINUX_FSCACHE_CACHE_H */ #endif /* _LINUX_FSCACHE_CACHE_H */


@ -437,9 +437,6 @@ const struct netfs_cache_ops *fscache_operation_valid(const struct netfs_cache_r
* indicates the cache resources to which the operation state should be * indicates the cache resources to which the operation state should be
* attached; @cookie indicates the cache object that will be accessed. * attached; @cookie indicates the cache object that will be accessed.
* *
* This is intended to be called from the ->begin_cache_operation() netfs lib
* operation as implemented by the network filesystem.
*
* @cres->inval_counter is set from @cookie->inval_counter for comparison at * @cres->inval_counter is set from @cookie->inval_counter for comparison at
* the end of the operation. This allows invalidation during the operation to * the end of the operation. This allows invalidation during the operation to
* be detected by the caller. * be detected by the caller.
@ -629,48 +626,6 @@ static inline void fscache_write_to_cache(struct fscache_cookie *cookie,
} }
#if __fscache_available
bool fscache_dirty_folio(struct address_space *mapping, struct folio *folio,
struct fscache_cookie *cookie);
#else
#define fscache_dirty_folio(MAPPING, FOLIO, COOKIE) \
filemap_dirty_folio(MAPPING, FOLIO)
#endif
/**
* fscache_unpin_writeback - Unpin writeback resources
* @wbc: The writeback control
* @cookie: The cookie referring to the cache object
*
* Unpin the writeback resources pinned by fscache_dirty_folio(). This is
* intended to be called by the netfs's ->write_inode() method.
*/
static inline void fscache_unpin_writeback(struct writeback_control *wbc,
struct fscache_cookie *cookie)
{
if (wbc->unpinned_fscache_wb)
fscache_unuse_cookie(cookie, NULL, NULL);
}
/**
* fscache_clear_inode_writeback - Clear writeback resources pinned by an inode
* @cookie: The cookie referring to the cache object
* @inode: The inode to clean up
* @aux: Auxiliary data to apply to the inode
*
* Clear any writeback resources held by an inode when the inode is evicted.
* This must be called before clear_inode() is called.
*/
static inline void fscache_clear_inode_writeback(struct fscache_cookie *cookie,
struct inode *inode,
const void *aux)
{
if (inode->i_state & I_PINNING_FSCACHE_WB) {
loff_t i_size = i_size_read(inode);
fscache_unuse_cookie(cookie, aux, &i_size);
}
}
/** /**
* fscache_note_page_release - Note that a netfs page got released * fscache_note_page_release - Note that a netfs page got released
* @cookie: The cookie corresponding to the file * @cookie: The cookie corresponding to the file


@ -109,11 +109,18 @@ static inline int wait_on_page_fscache_killable(struct page *page)
return folio_wait_private_2_killable(page_folio(page)); return folio_wait_private_2_killable(page_folio(page));
} }
/* Marks used on xarray-based buffers */
#define NETFS_BUF_PUT_MARK XA_MARK_0 /* - Page needs putting */
#define NETFS_BUF_PAGECACHE_MARK XA_MARK_1 /* - Page needs wb/dirty flag wrangling */
enum netfs_io_source { enum netfs_io_source {
NETFS_FILL_WITH_ZEROES, NETFS_FILL_WITH_ZEROES,
NETFS_DOWNLOAD_FROM_SERVER, NETFS_DOWNLOAD_FROM_SERVER,
NETFS_READ_FROM_CACHE, NETFS_READ_FROM_CACHE,
NETFS_INVALID_READ, NETFS_INVALID_READ,
NETFS_UPLOAD_TO_SERVER,
NETFS_WRITE_TO_CACHE,
NETFS_INVALID_WRITE,
} __mode(byte); } __mode(byte);
typedef void (*netfs_io_terminated_t)(void *priv, ssize_t transferred_or_error, typedef void (*netfs_io_terminated_t)(void *priv, ssize_t transferred_or_error,
@ -129,8 +136,56 @@ struct netfs_inode {
struct fscache_cookie *cache; struct fscache_cookie *cache;
#endif #endif
loff_t remote_i_size; /* Size of the remote file */ loff_t remote_i_size; /* Size of the remote file */
loff_t zero_point; /* Size after which we assume there's no data
* on the server */
unsigned long flags;
#define NETFS_ICTX_ODIRECT 0 /* The file has DIO in progress */
#define NETFS_ICTX_UNBUFFERED 1 /* I/O should not use the pagecache */
#define NETFS_ICTX_WRITETHROUGH 2 /* Write-through caching */
#define NETFS_ICTX_NO_WRITE_STREAMING 3 /* Don't engage in write-streaming */
}; };
/*
* A netfs group - for instance a ceph snap. This is marked on dirty pages and
* pages marked with a group must be flushed before they can be written under
* the domain of another group.
*/
struct netfs_group {
refcount_t ref;
void (*free)(struct netfs_group *netfs_group);
};
/*
* Information about a dirty page (attached only if necessary).
* folio->private
*/
struct netfs_folio {
struct netfs_group *netfs_group; /* Filesystem's grouping marker (or NULL). */
unsigned int dirty_offset; /* Write-streaming dirty data offset */
unsigned int dirty_len; /* Write-streaming dirty data length */
};
#define NETFS_FOLIO_INFO 0x1UL /* OR'd with folio->private. */
static inline struct netfs_folio *netfs_folio_info(struct folio *folio)
{
void *priv = folio_get_private(folio);
if ((unsigned long)priv & NETFS_FOLIO_INFO)
return (struct netfs_folio *)((unsigned long)priv & ~NETFS_FOLIO_INFO);
return NULL;
}
static inline struct netfs_group *netfs_folio_group(struct folio *folio)
{
struct netfs_folio *finfo;
void *priv = folio_get_private(folio);
finfo = netfs_folio_info(folio);
if (finfo)
return finfo->netfs_group;
return priv;
}
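
A netfs_group is opaque to the library beyond its refcount and ->free(); the filesystem embeds it in whatever object defines a write-ordering domain (a ceph snap context, for instance) and passes it to netfs_perform_write() or netfs_page_mkwrite(). A hypothetical sketch:

struct myfs_snap_group {
	struct netfs_group	group;	/* refcounted via group.ref */
	u64			snap_id;
};

static void myfs_free_snap_group(struct netfs_group *ng)
{
	kfree(container_of(ng, struct myfs_snap_group, group));
}

static struct netfs_group *myfs_alloc_snap_group(u64 snap_id)
{
	struct myfs_snap_group *sg = kzalloc(sizeof(*sg), GFP_KERNEL);

	if (!sg)
		return NULL;
	refcount_set(&sg->group.ref, 1);
	sg->group.free = myfs_free_snap_group;
	sg->snap_id = snap_id;
	return &sg->group;	/* pass to netfs_perform_write() et al. */
}
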
/* /*
* Resources required to do operations on a cache. * Resources required to do operations on a cache.
*/ */
@ -143,17 +198,24 @@ struct netfs_cache_resources {
}; };
/* /*
* Descriptor for a single component subrequest. * Descriptor for a single component subrequest. Each operation represents an
* individual read/write from/to a server, a cache, a journal, etc..
*
* The buffer iterator is persistent for the life of the subrequest struct and
* the pages it points to can be relied on to exist for the duration.
*/ */
struct netfs_io_subrequest { struct netfs_io_subrequest {
struct netfs_io_request *rreq; /* Supervising I/O request */ struct netfs_io_request *rreq; /* Supervising I/O request */
struct work_struct work;
struct list_head rreq_link; /* Link in rreq->subrequests */ struct list_head rreq_link; /* Link in rreq->subrequests */
struct iov_iter io_iter; /* Iterator for this subrequest */
loff_t start; /* Where to start the I/O */ loff_t start; /* Where to start the I/O */
size_t len; /* Size of the I/O */ size_t len; /* Size of the I/O */
size_t transferred; /* Amount of data transferred */ size_t transferred; /* Amount of data transferred */
refcount_t ref; refcount_t ref;
short error; /* 0 or error that occurred */ short error; /* 0 or error that occurred */
unsigned short debug_index; /* Index in list (for debugging output) */ unsigned short debug_index; /* Index in list (for debugging output) */
unsigned int max_nr_segs; /* 0 or max number of segments in an iterator */
enum netfs_io_source source; /* Where to read from/write to */ enum netfs_io_source source; /* Where to read from/write to */
unsigned long flags; unsigned long flags;
#define NETFS_SREQ_COPY_TO_CACHE 0 /* Set if should copy the data to the cache */ #define NETFS_SREQ_COPY_TO_CACHE 0 /* Set if should copy the data to the cache */
@ -168,6 +230,13 @@ enum netfs_io_origin {
NETFS_READAHEAD, /* This read was triggered by readahead */ NETFS_READAHEAD, /* This read was triggered by readahead */
NETFS_READPAGE, /* This read is a synchronous read */ NETFS_READPAGE, /* This read is a synchronous read */
NETFS_READ_FOR_WRITE, /* This read is to prepare a write */ NETFS_READ_FOR_WRITE, /* This read is to prepare a write */
NETFS_WRITEBACK, /* This write was triggered by writepages */
NETFS_WRITETHROUGH, /* This write was made by netfs_perform_write() */
NETFS_LAUNDER_WRITE, /* This is triggered by ->launder_folio() */
NETFS_UNBUFFERED_WRITE, /* This is an unbuffered write */
NETFS_DIO_READ, /* This is a direct I/O read */
NETFS_DIO_WRITE, /* This is a direct I/O write */
nr__netfs_io_origin
} __mode(byte); } __mode(byte);
/* /*
@ -175,19 +244,34 @@ enum netfs_io_origin {
* operations to a variety of data stores and then stitch the result together. * operations to a variety of data stores and then stitch the result together.
*/ */
struct netfs_io_request { struct netfs_io_request {
struct work_struct work; union {
struct work_struct work;
struct rcu_head rcu;
};
struct inode *inode; /* The file being accessed */ struct inode *inode; /* The file being accessed */
struct address_space *mapping; /* The mapping being accessed */ struct address_space *mapping; /* The mapping being accessed */
struct kiocb *iocb; /* AIO completion vector */
struct netfs_cache_resources cache_resources; struct netfs_cache_resources cache_resources;
struct list_head proc_link; /* Link in netfs_iorequests */
struct list_head subrequests; /* Contributory I/O operations */ struct list_head subrequests; /* Contributory I/O operations */
struct iov_iter iter; /* Unencrypted-side iterator */
struct iov_iter io_iter; /* I/O (Encrypted-side) iterator */
void *netfs_priv; /* Private data for the netfs */ void *netfs_priv; /* Private data for the netfs */
struct bio_vec *direct_bv; /* DIO buffer list (when handling iovec-iter) */
unsigned int direct_bv_count; /* Number of elements in direct_bv[] */
unsigned int debug_id; unsigned int debug_id;
unsigned int rsize; /* Maximum read size (0 for none) */
unsigned int wsize; /* Maximum write size (0 for none) */
unsigned int subreq_counter; /* Next subreq->debug_index */
atomic_t nr_outstanding; /* Number of ops in progress */ atomic_t nr_outstanding; /* Number of ops in progress */
atomic_t nr_copy_ops; /* Number of copy-to-cache ops in progress */ atomic_t nr_copy_ops; /* Number of copy-to-cache ops in progress */
size_t submitted; /* Amount submitted for I/O so far */ size_t submitted; /* Amount submitted for I/O so far */
size_t len; /* Length of the request */ size_t len; /* Length of the request */
size_t upper_len; /* Length can be extended to here */
size_t transferred; /* Amount to be indicated as transferred */
short error; /* 0 or error that occurred */ short error; /* 0 or error that occurred */
enum netfs_io_origin origin; /* Origin of the request */ enum netfs_io_origin origin; /* Origin of the request */
bool direct_bv_unpin; /* T if direct_bv[] must be unpinned */
loff_t i_size; /* Size of the file */ loff_t i_size; /* Size of the file */
loff_t start; /* Start position */ loff_t start; /* Start position */
pgoff_t no_unlock_folio; /* Don't unlock this folio after read */ pgoff_t no_unlock_folio; /* Don't unlock this folio after read */
@ -199,17 +283,25 @@ struct netfs_io_request {
#define NETFS_RREQ_DONT_UNLOCK_FOLIOS 3 /* Don't unlock the folios on completion */ #define NETFS_RREQ_DONT_UNLOCK_FOLIOS 3 /* Don't unlock the folios on completion */
#define NETFS_RREQ_FAILED 4 /* The request failed */ #define NETFS_RREQ_FAILED 4 /* The request failed */
#define NETFS_RREQ_IN_PROGRESS 5 /* Unlocked when the request completes */ #define NETFS_RREQ_IN_PROGRESS 5 /* Unlocked when the request completes */
#define NETFS_RREQ_WRITE_TO_CACHE 7 /* Need to write to the cache */
#define NETFS_RREQ_UPLOAD_TO_SERVER 8 /* Need to write to the server */
#define NETFS_RREQ_NONBLOCK 9 /* Don't block if possible (O_NONBLOCK) */
#define NETFS_RREQ_BLOCKED 10 /* We blocked */
const struct netfs_request_ops *netfs_ops; const struct netfs_request_ops *netfs_ops;
void (*cleanup)(struct netfs_io_request *req);
}; };
/* /*
* Operations the network filesystem can/must provide to the helpers. * Operations the network filesystem can/must provide to the helpers.
*/ */
struct netfs_request_ops { struct netfs_request_ops {
unsigned int io_request_size; /* Alloc size for netfs_io_request struct */
unsigned int io_subrequest_size; /* Alloc size for netfs_io_subrequest struct */
int (*init_request)(struct netfs_io_request *rreq, struct file *file); int (*init_request)(struct netfs_io_request *rreq, struct file *file);
void (*free_request)(struct netfs_io_request *rreq); void (*free_request)(struct netfs_io_request *rreq);
int (*begin_cache_operation)(struct netfs_io_request *rreq); void (*free_subrequest)(struct netfs_io_subrequest *rreq);
/* Read request handling */
void (*expand_readahead)(struct netfs_io_request *rreq); void (*expand_readahead)(struct netfs_io_request *rreq);
bool (*clamp_length)(struct netfs_io_subrequest *subreq); bool (*clamp_length)(struct netfs_io_subrequest *subreq);
void (*issue_read)(struct netfs_io_subrequest *subreq); void (*issue_read)(struct netfs_io_subrequest *subreq);
@ -217,6 +309,14 @@ struct netfs_request_ops {
int (*check_write_begin)(struct file *file, loff_t pos, unsigned len, int (*check_write_begin)(struct file *file, loff_t pos, unsigned len,
struct folio **foliop, void **_fsdata); struct folio **foliop, void **_fsdata);
void (*done)(struct netfs_io_request *rreq); void (*done)(struct netfs_io_request *rreq);
/* Modification handling */
void (*update_i_size)(struct inode *inode, loff_t i_size);
/* Write request handling */
void (*create_write_requests)(struct netfs_io_request *wreq,
loff_t start, size_t len);
void (*invalidate_cache)(struct netfs_io_request *wreq);
}; };
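
An example ops table using the new fields, patterned on the afs/9p conversions; all "myfs_*" symbols are hypothetical. The io_request_size/io_subrequest_size fields let netfslib allocate the filesystem's wrapper structures in one go, and ->begin_cache_operation() is gone because the library now starts cache operations itself.

static const struct netfs_request_ops myfs_netfs_ops = {
	.io_request_size	= sizeof(struct myfs_io_request),
	.io_subrequest_size	= sizeof(struct myfs_io_subrequest),
	.init_request		= myfs_init_request,
	.free_request		= myfs_free_request,
	.free_subrequest	= myfs_free_subrequest,
	.expand_readahead	= myfs_expand_readahead,
	.issue_read		= myfs_issue_read,
	.update_i_size		= myfs_update_i_size,
	.create_write_requests	= myfs_create_write_requests,
	.invalidate_cache	= myfs_invalidate_cache,
};
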
/* /*
@ -229,8 +329,7 @@ enum netfs_read_from_hole {
}; };
/* /*
* Table of operations for access to a cache. This is obtained by * Table of operations for access to a cache.
* rreq->ops->begin_cache_operation().
*/ */
struct netfs_cache_ops { struct netfs_cache_ops {
/* End an operation */ /* End an operation */
@ -265,8 +364,8 @@ struct netfs_cache_ops {
* actually do. * actually do.
*/ */
int (*prepare_write)(struct netfs_cache_resources *cres, int (*prepare_write)(struct netfs_cache_resources *cres,
loff_t *_start, size_t *_len, loff_t i_size, loff_t *_start, size_t *_len, size_t upper_len,
bool no_space_allocated_yet); loff_t i_size, bool no_space_allocated_yet);
/* Prepare an on-demand read operation, shortening it to a cached/uncached /* Prepare an on-demand read operation, shortening it to a cached/uncached
* boundary as appropriate. * boundary as appropriate.
@ -284,22 +383,62 @@ struct netfs_cache_ops {
loff_t *_data_start, size_t *_data_len); loff_t *_data_start, size_t *_data_len);
}; };
/* High-level read API. */
ssize_t netfs_unbuffered_read_iter(struct kiocb *iocb, struct iov_iter *iter);
ssize_t netfs_buffered_read_iter(struct kiocb *iocb, struct iov_iter *iter);
ssize_t netfs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter);
/* High-level write API */
ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
struct netfs_group *netfs_group);
ssize_t netfs_buffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *from,
struct netfs_group *netfs_group);
ssize_t netfs_unbuffered_write_iter(struct kiocb *iocb, struct iov_iter *from);
ssize_t netfs_file_write_iter(struct kiocb *iocb, struct iov_iter *from);
/* Address operations API */
struct readahead_control; struct readahead_control;
void netfs_readahead(struct readahead_control *); void netfs_readahead(struct readahead_control *);
int netfs_read_folio(struct file *, struct folio *); int netfs_read_folio(struct file *, struct folio *);
int netfs_write_begin(struct netfs_inode *, struct file *, int netfs_write_begin(struct netfs_inode *, struct file *,
struct address_space *, loff_t pos, unsigned int len, struct address_space *, loff_t pos, unsigned int len,
struct folio **, void **fsdata); struct folio **, void **fsdata);
int netfs_writepages(struct address_space *mapping,
struct writeback_control *wbc);
bool netfs_dirty_folio(struct address_space *mapping, struct folio *folio);
int netfs_unpin_writeback(struct inode *inode, struct writeback_control *wbc);
void netfs_clear_inode_writeback(struct inode *inode, const void *aux);
void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length);
bool netfs_release_folio(struct folio *folio, gfp_t gfp);
int netfs_launder_folio(struct folio *folio);
/* VMA operations API. */
vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_group);
/* (Sub)request management API. */
void netfs_subreq_terminated(struct netfs_io_subrequest *, ssize_t, bool); void netfs_subreq_terminated(struct netfs_io_subrequest *, ssize_t, bool);
void netfs_get_subrequest(struct netfs_io_subrequest *subreq, void netfs_get_subrequest(struct netfs_io_subrequest *subreq,
enum netfs_sreq_ref_trace what); enum netfs_sreq_ref_trace what);
void netfs_put_subrequest(struct netfs_io_subrequest *subreq, void netfs_put_subrequest(struct netfs_io_subrequest *subreq,
bool was_async, enum netfs_sreq_ref_trace what); bool was_async, enum netfs_sreq_ref_trace what);
void netfs_stats_show(struct seq_file *);
ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len, ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len,
struct iov_iter *new, struct iov_iter *new,
iov_iter_extraction_t extraction_flags); iov_iter_extraction_t extraction_flags);
size_t netfs_limit_iter(const struct iov_iter *iter, size_t start_offset,
size_t max_size, size_t max_segs);
struct netfs_io_subrequest *netfs_create_write_request(
struct netfs_io_request *wreq, enum netfs_io_source dest,
loff_t start, size_t len, work_func_t worker);
void netfs_write_subrequest_terminated(void *_op, ssize_t transferred_or_error,
bool was_async);
void netfs_queue_write_request(struct netfs_io_subrequest *subreq);
int netfs_start_io_read(struct inode *inode);
void netfs_end_io_read(struct inode *inode);
int netfs_start_io_write(struct inode *inode);
void netfs_end_io_write(struct inode *inode);
int netfs_start_io_direct(struct inode *inode);
void netfs_end_io_direct(struct inode *inode);
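
The high-level read/write entry points above let a filesystem's file_operations delegate buffered vs direct I/O handling (including the new DIO-vs-buffered exclusion) to the library. A hypothetical wiring; myfs_open and myfs_fsync stand in for fs-specific pieces:

static const struct file_operations myfs_file_ops = {
	.owner		= THIS_MODULE,
	.open		= myfs_open,
	.llseek		= generic_file_llseek,
	.read_iter	= netfs_file_read_iter,
	.write_iter	= netfs_file_write_iter,
	.mmap		= generic_file_mmap,
	.splice_read	= filemap_splice_read,
	.splice_write	= iter_file_splice_write,
	.fsync		= myfs_fsync,
};
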
/** /**
* netfs_inode - Get the netfs inode context from the inode * netfs_inode - Get the netfs inode context from the inode
@ -317,30 +456,44 @@ static inline struct netfs_inode *netfs_inode(struct inode *inode)
* netfs_inode_init - Initialise a netfslib inode context * netfs_inode_init - Initialise a netfslib inode context
* @ctx: The netfs inode to initialise * @ctx: The netfs inode to initialise
* @ops: The netfs's operations list * @ops: The netfs's operations list
* @use_zero_point: True to use the zero_point read optimisation
* *
* Initialise the netfs library context struct. This is expected to follow on * Initialise the netfs library context struct. This is expected to follow on
* directly from the VFS inode struct. * directly from the VFS inode struct.
*/ */
static inline void netfs_inode_init(struct netfs_inode *ctx, static inline void netfs_inode_init(struct netfs_inode *ctx,
const struct netfs_request_ops *ops) const struct netfs_request_ops *ops,
bool use_zero_point)
{ {
ctx->ops = ops; ctx->ops = ops;
ctx->remote_i_size = i_size_read(&ctx->inode); ctx->remote_i_size = i_size_read(&ctx->inode);
ctx->zero_point = LLONG_MAX;
ctx->flags = 0;
#if IS_ENABLED(CONFIG_FSCACHE) #if IS_ENABLED(CONFIG_FSCACHE)
ctx->cache = NULL; ctx->cache = NULL;
#endif #endif
/* ->releasepage() drives zero_point */
if (use_zero_point) {
ctx->zero_point = ctx->remote_i_size;
mapping_set_release_always(ctx->inode.i_mapping);
}
} }
/** /**
* netfs_resize_file - Note that a file got resized * netfs_resize_file - Note that a file got resized
* @ctx: The netfs inode being resized * @ctx: The netfs inode being resized
* @new_i_size: The new file size * @new_i_size: The new file size
* @changed_on_server: The change was applied to the server
* *
* Inform the netfs lib that a file got resized so that it can adjust its state. * Inform the netfs lib that a file got resized so that it can adjust its state.
*/ */
static inline void netfs_resize_file(struct netfs_inode *ctx, loff_t new_i_size) static inline void netfs_resize_file(struct netfs_inode *ctx, loff_t new_i_size,
bool changed_on_server)
{ {
ctx->remote_i_size = new_i_size; if (changed_on_server)
ctx->remote_i_size = new_i_size;
if (new_i_size < ctx->zero_point)
ctx->zero_point = new_i_size;
} }
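
Putting the context helpers together: a filesystem initialises the netfs context when it builds an inode (optionally opting into the zero_point optimisation and write-through caching) and calls netfs_resize_file() when the size changes. The "myfs" names, including the policy check, are hypothetical; myfs_netfs_ops is the request-ops table sketched earlier.

static void myfs_init_netfs_context(struct inode *inode)
{
	struct netfs_inode *nctx = netfs_inode(inode);

	netfs_inode_init(nctx, &myfs_netfs_ops, true);	/* use zero_point */
	if (myfs_wants_writethrough(inode))		/* hypothetical policy */
		__set_bit(NETFS_ICTX_WRITETHROUGH, &nctx->flags);
}

static void myfs_truncate(struct inode *inode, loff_t new_size)
{
	truncate_setsize(inode, new_size);
	/* The new size has been applied on the server too. */
	netfs_resize_file(netfs_inode(inode), new_size, true);
}
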
/** /**


@ -60,7 +60,7 @@ struct writeback_control {
unsigned for_reclaim:1; /* Invoked from the page allocator */ unsigned for_reclaim:1; /* Invoked from the page allocator */
unsigned range_cyclic:1; /* range_start is cyclic */ unsigned range_cyclic:1; /* range_start is cyclic */
unsigned for_sync:1; /* sync(2) WB_SYNC_ALL writeback */ unsigned for_sync:1; /* sync(2) WB_SYNC_ALL writeback */
unsigned unpinned_fscache_wb:1; /* Cleared I_PINNING_FSCACHE_WB */ unsigned unpinned_netfs_wb:1; /* Cleared I_PINNING_NETFS_WB */
/* /*
* When writeback IOs are bounced through async layers, only the * When writeback IOs are bounced through async layers, only the


@ -902,37 +902,6 @@ TRACE_EVENT(afs_dir_check_failed,
__entry->vnode, __entry->off, __entry->i_size) __entry->vnode, __entry->off, __entry->i_size)
); );
TRACE_EVENT(afs_folio_dirty,
TP_PROTO(struct afs_vnode *vnode, const char *where, struct folio *folio),
TP_ARGS(vnode, where, folio),
TP_STRUCT__entry(
__field(struct afs_vnode *, vnode)
__field(const char *, where)
__field(pgoff_t, index)
__field(unsigned long, from)
__field(unsigned long, to)
),
TP_fast_assign(
unsigned long priv = (unsigned long)folio_get_private(folio);
__entry->vnode = vnode;
__entry->where = where;
__entry->index = folio_index(folio);
__entry->from = afs_folio_dirty_from(folio, priv);
__entry->to = afs_folio_dirty_to(folio, priv);
__entry->to |= (afs_is_folio_dirty_mmapped(priv) ?
(1UL << (BITS_PER_LONG - 1)) : 0);
),
TP_printk("vn=%p %lx %s %lx-%lx%s",
__entry->vnode, __entry->index, __entry->where,
__entry->from,
__entry->to & ~(1UL << (BITS_PER_LONG - 1)),
__entry->to & (1UL << (BITS_PER_LONG - 1)) ? " M" : "")
);
TRACE_EVENT(afs_call_state, TRACE_EVENT(afs_call_state,
TP_PROTO(struct afs_call *call, TP_PROTO(struct afs_call *call,
enum afs_call_state from, enum afs_call_state from,


@ -16,34 +16,57 @@
* Define enums for tracing information. * Define enums for tracing information.
*/ */
#define netfs_read_traces \ #define netfs_read_traces \
EM(netfs_read_trace_dio_read, "DIO-READ ") \
EM(netfs_read_trace_expanded, "EXPANDED ") \ EM(netfs_read_trace_expanded, "EXPANDED ") \
EM(netfs_read_trace_readahead, "READAHEAD") \ EM(netfs_read_trace_readahead, "READAHEAD") \
EM(netfs_read_trace_readpage, "READPAGE ") \ EM(netfs_read_trace_readpage, "READPAGE ") \
EM(netfs_read_trace_prefetch_for_write, "PREFETCHW") \
E_(netfs_read_trace_write_begin, "WRITEBEGN") E_(netfs_read_trace_write_begin, "WRITEBEGN")
#define netfs_write_traces \
EM(netfs_write_trace_dio_write, "DIO-WRITE") \
EM(netfs_write_trace_launder, "LAUNDER ") \
EM(netfs_write_trace_unbuffered_write, "UNB-WRITE") \
EM(netfs_write_trace_writeback, "WRITEBACK") \
E_(netfs_write_trace_writethrough, "WRITETHRU")
#define netfs_rreq_origins \ #define netfs_rreq_origins \
EM(NETFS_READAHEAD, "RA") \ EM(NETFS_READAHEAD, "RA") \
EM(NETFS_READPAGE, "RP") \ EM(NETFS_READPAGE, "RP") \
E_(NETFS_READ_FOR_WRITE, "RW") EM(NETFS_READ_FOR_WRITE, "RW") \
EM(NETFS_WRITEBACK, "WB") \
EM(NETFS_WRITETHROUGH, "WT") \
EM(NETFS_LAUNDER_WRITE, "LW") \
EM(NETFS_UNBUFFERED_WRITE, "UW") \
EM(NETFS_DIO_READ, "DR") \
E_(NETFS_DIO_WRITE, "DW")
#define netfs_rreq_traces \ #define netfs_rreq_traces \
EM(netfs_rreq_trace_assess, "ASSESS ") \ EM(netfs_rreq_trace_assess, "ASSESS ") \
EM(netfs_rreq_trace_copy, "COPY ") \ EM(netfs_rreq_trace_copy, "COPY ") \
EM(netfs_rreq_trace_done, "DONE ") \ EM(netfs_rreq_trace_done, "DONE ") \
EM(netfs_rreq_trace_free, "FREE ") \ EM(netfs_rreq_trace_free, "FREE ") \
EM(netfs_rreq_trace_redirty, "REDIRTY") \
EM(netfs_rreq_trace_resubmit, "RESUBMT") \ EM(netfs_rreq_trace_resubmit, "RESUBMT") \
EM(netfs_rreq_trace_unlock, "UNLOCK ") \ EM(netfs_rreq_trace_unlock, "UNLOCK ") \
E_(netfs_rreq_trace_unmark, "UNMARK ") EM(netfs_rreq_trace_unmark, "UNMARK ") \
EM(netfs_rreq_trace_wait_ip, "WAIT-IP") \
EM(netfs_rreq_trace_wake_ip, "WAKE-IP") \
E_(netfs_rreq_trace_write_done, "WR-DONE")
#define netfs_sreq_sources \ #define netfs_sreq_sources \
EM(NETFS_FILL_WITH_ZEROES, "ZERO") \ EM(NETFS_FILL_WITH_ZEROES, "ZERO") \
EM(NETFS_DOWNLOAD_FROM_SERVER, "DOWN") \ EM(NETFS_DOWNLOAD_FROM_SERVER, "DOWN") \
EM(NETFS_READ_FROM_CACHE, "READ") \ EM(NETFS_READ_FROM_CACHE, "READ") \
E_(NETFS_INVALID_READ, "INVL") \ EM(NETFS_INVALID_READ, "INVL") \
EM(NETFS_UPLOAD_TO_SERVER, "UPLD") \
EM(NETFS_WRITE_TO_CACHE, "WRIT") \
E_(NETFS_INVALID_WRITE, "INVL")
#define netfs_sreq_traces \ #define netfs_sreq_traces \
EM(netfs_sreq_trace_download_instead, "RDOWN") \ EM(netfs_sreq_trace_download_instead, "RDOWN") \
EM(netfs_sreq_trace_free, "FREE ") \ EM(netfs_sreq_trace_free, "FREE ") \
EM(netfs_sreq_trace_limited, "LIMIT") \
EM(netfs_sreq_trace_prepare, "PREP ") \ EM(netfs_sreq_trace_prepare, "PREP ") \
EM(netfs_sreq_trace_resubmit_short, "SHORT") \ EM(netfs_sreq_trace_resubmit_short, "SHORT") \
EM(netfs_sreq_trace_submit, "SUBMT") \ EM(netfs_sreq_trace_submit, "SUBMT") \
@ -55,19 +78,24 @@
#define netfs_failures \ #define netfs_failures \
EM(netfs_fail_check_write_begin, "check-write-begin") \ EM(netfs_fail_check_write_begin, "check-write-begin") \
EM(netfs_fail_copy_to_cache, "copy-to-cache") \ EM(netfs_fail_copy_to_cache, "copy-to-cache") \
EM(netfs_fail_dio_read_short, "dio-read-short") \
EM(netfs_fail_dio_read_zero, "dio-read-zero") \
EM(netfs_fail_read, "read") \ EM(netfs_fail_read, "read") \
EM(netfs_fail_short_read, "short-read") \ EM(netfs_fail_short_read, "short-read") \
E_(netfs_fail_prepare_write, "prep-write") EM(netfs_fail_prepare_write, "prep-write") \
E_(netfs_fail_write, "write")
#define netfs_rreq_ref_traces \ #define netfs_rreq_ref_traces \
EM(netfs_rreq_trace_get_hold, "GET HOLD ") \ EM(netfs_rreq_trace_get_for_outstanding,"GET OUTSTND") \
EM(netfs_rreq_trace_get_subreq, "GET SUBREQ ") \ EM(netfs_rreq_trace_get_subreq, "GET SUBREQ ") \
EM(netfs_rreq_trace_put_complete, "PUT COMPLT ") \ EM(netfs_rreq_trace_put_complete, "PUT COMPLT ") \
EM(netfs_rreq_trace_put_discard, "PUT DISCARD") \ EM(netfs_rreq_trace_put_discard, "PUT DISCARD") \
EM(netfs_rreq_trace_put_failed, "PUT FAILED ") \ EM(netfs_rreq_trace_put_failed, "PUT FAILED ") \
EM(netfs_rreq_trace_put_hold, "PUT HOLD ") \ EM(netfs_rreq_trace_put_no_submit, "PUT NO-SUBM") \
EM(netfs_rreq_trace_put_return, "PUT RETURN ") \
EM(netfs_rreq_trace_put_subreq, "PUT SUBREQ ") \ EM(netfs_rreq_trace_put_subreq, "PUT SUBREQ ") \
EM(netfs_rreq_trace_put_zero_len, "PUT ZEROLEN") \ EM(netfs_rreq_trace_put_work, "PUT WORK ") \
EM(netfs_rreq_trace_see_work, "SEE WORK ") \
E_(netfs_rreq_trace_new, "NEW ") E_(netfs_rreq_trace_new, "NEW ")
#define netfs_sreq_ref_traces \ #define netfs_sreq_ref_traces \
@@ -76,11 +104,44 @@
 EM(netfs_sreq_trace_get_short_read, "GET SHORTRD") \
 EM(netfs_sreq_trace_new, "NEW ") \
 EM(netfs_sreq_trace_put_clear, "PUT CLEAR ") \
+EM(netfs_sreq_trace_put_discard, "PUT DISCARD") \
 EM(netfs_sreq_trace_put_failed, "PUT FAILED ") \
 EM(netfs_sreq_trace_put_merged, "PUT MERGED ") \
 EM(netfs_sreq_trace_put_no_copy, "PUT NO COPY") \
+EM(netfs_sreq_trace_put_wip, "PUT WIP ") \
+EM(netfs_sreq_trace_put_work, "PUT WORK ") \
 E_(netfs_sreq_trace_put_terminated, "PUT TERM ")
+
+#define netfs_folio_traces \
+/* The first few correspond to enum netfs_how_to_modify */ \
+EM(netfs_folio_is_uptodate, "mod-uptodate") \
+EM(netfs_just_prefetch, "mod-prefetch") \
+EM(netfs_whole_folio_modify, "mod-whole-f") \
+EM(netfs_modify_and_clear, "mod-n-clear") \
+EM(netfs_streaming_write, "mod-streamw") \
+EM(netfs_streaming_write_cont, "mod-streamw+") \
+EM(netfs_flush_content, "flush") \
+EM(netfs_streaming_filled_page, "mod-streamw-f") \
+EM(netfs_streaming_cont_filled_page, "mod-streamw-f+") \
+/* The rest are for writeback */ \
+EM(netfs_folio_trace_clear, "clear") \
+EM(netfs_folio_trace_clear_s, "clear-s") \
+EM(netfs_folio_trace_clear_g, "clear-g") \
+EM(netfs_folio_trace_copy_to_cache, "copy") \
+EM(netfs_folio_trace_end_copy, "end-copy") \
+EM(netfs_folio_trace_filled_gaps, "filled-gaps") \
+EM(netfs_folio_trace_kill, "kill") \
+EM(netfs_folio_trace_launder, "launder") \
+EM(netfs_folio_trace_mkwrite, "mkwrite") \
+EM(netfs_folio_trace_mkwrite_plus, "mkwrite+") \
+EM(netfs_folio_trace_read_gaps, "read-gaps") \
+EM(netfs_folio_trace_redirty, "redirty") \
+EM(netfs_folio_trace_redirtied, "redirtied") \
+EM(netfs_folio_trace_store, "store") \
+EM(netfs_folio_trace_store_plus, "store+") \
+EM(netfs_folio_trace_wthru, "wthru") \
+E_(netfs_folio_trace_wthru_plus, "wthru+")

 #ifndef __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY
 #define __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY
@@ -90,11 +151,13 @@
 #define E_(a, b) a

 enum netfs_read_trace { netfs_read_traces } __mode(byte);
+enum netfs_write_trace { netfs_write_traces } __mode(byte);
 enum netfs_rreq_trace { netfs_rreq_traces } __mode(byte);
 enum netfs_sreq_trace { netfs_sreq_traces } __mode(byte);
 enum netfs_failure { netfs_failures } __mode(byte);
 enum netfs_rreq_ref_trace { netfs_rreq_ref_traces } __mode(byte);
 enum netfs_sreq_ref_trace { netfs_sreq_ref_traces } __mode(byte);
+enum netfs_folio_trace { netfs_folio_traces } __mode(byte);

 #endif
@@ -107,6 +170,7 @@ enum netfs_sreq_ref_trace { netfs_sreq_ref_traces } __mode(byte);
 #define E_(a, b) TRACE_DEFINE_ENUM(a);

 netfs_read_traces;
+netfs_write_traces;
 netfs_rreq_origins;
 netfs_rreq_traces;
 netfs_sreq_sources;
@@ -114,6 +178,7 @@ netfs_sreq_traces;
 netfs_failures;
 netfs_rreq_ref_traces;
 netfs_sreq_ref_traces;
+netfs_folio_traces;

 /*
  * Now redefine the EM() and E_() macros to map the enums to the strings that
@@ -314,6 +379,82 @@ TRACE_EVENT(netfs_sreq_ref,
 __entry->ref)
 );

+TRACE_EVENT(netfs_folio,
+ TP_PROTO(struct folio *folio, enum netfs_folio_trace why),
+
+ TP_ARGS(folio, why),
+
+ TP_STRUCT__entry(
+  __field(ino_t, ino)
+  __field(pgoff_t, index)
+  __field(unsigned int, nr)
+  __field(enum netfs_folio_trace, why)
+  ),
+
+ TP_fast_assign(
+  __entry->ino = folio->mapping->host->i_ino;
+  __entry->why = why;
+  __entry->index = folio_index(folio);
+  __entry->nr = folio_nr_pages(folio);
+  ),
+
+ TP_printk("i=%05lx ix=%05lx-%05lx %s",
+  __entry->ino, __entry->index, __entry->index + __entry->nr - 1,
+  __print_symbolic(__entry->why, netfs_folio_traces))
+ );
+
+TRACE_EVENT(netfs_write_iter,
+ TP_PROTO(const struct kiocb *iocb, const struct iov_iter *from),
+
+ TP_ARGS(iocb, from),
+
+ TP_STRUCT__entry(
+  __field(unsigned long long, start)
+  __field(size_t, len)
+  __field(unsigned int, flags)
+  ),
+
+ TP_fast_assign(
+  __entry->start = iocb->ki_pos;
+  __entry->len = iov_iter_count(from);
+  __entry->flags = iocb->ki_flags;
+  ),
+
+ TP_printk("WRITE-ITER s=%llx l=%zx f=%x",
+  __entry->start, __entry->len, __entry->flags)
+ );
+
+TRACE_EVENT(netfs_write,
+ TP_PROTO(const struct netfs_io_request *wreq,
+   enum netfs_write_trace what),
+
+ TP_ARGS(wreq, what),
+
+ TP_STRUCT__entry(
+  __field(unsigned int, wreq)
+  __field(unsigned int, cookie)
+  __field(enum netfs_write_trace, what)
+  __field(unsigned long long, start)
+  __field(size_t, len)
+  ),
+
+ TP_fast_assign(
+  struct netfs_inode *__ctx = netfs_inode(wreq->inode);
+  struct fscache_cookie *__cookie = netfs_i_cookie(__ctx);
+  __entry->wreq = wreq->debug_id;
+  __entry->cookie = __cookie ? __cookie->debug_id : 0;
+  __entry->what = what;
+  __entry->start = wreq->start;
+  __entry->len = wreq->len;
+  ),
+
+ TP_printk("R=%08x %s c=%08x by=%llx-%llx",
+  __entry->wreq,
+  __print_symbolic(__entry->what, netfs_write_traces),
+  __entry->cookie,
+  __entry->start, __entry->start + __entry->len - 1)
+ );
+
 #undef EM
 #undef E_
 #endif /* _TRACE_NETFS_H */
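Each TRACE_EVENT() above also generates a corresponding trace_<name>() helper that callers invoke, and the EM()/E_() tables are expanded more than once: into C enums, into TRACE_DEFINE_ENUM() invocations, and into the symbol/string pairs consumed by __print_symbolic(). As a rough illustration only (the example_* functions below are hypothetical and not part of this merge; the tracepoint names and argument lists follow directly from the TRACE_EVENT() definitions above), a module-side caller might emit two of the new events like this:

/* Sketch, not from the patch. Exactly one .c file in the module must define
 * CREATE_TRACE_POINTS before including the header so the tracepoint bodies
 * are instantiated; other users just include it.
 */
#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/uio.h>
#include <trace/events/netfs.h>

/* Hypothetical example: note that a folio is being committed to storage. */
static void example_note_folio_stored(struct folio *folio)
{
	trace_netfs_folio(folio, netfs_folio_trace_store);
}

/* Hypothetical example: log the range and flags of an incoming write before
 * handing it to the buffered or unbuffered write path.
 */
static ssize_t example_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
	trace_netfs_write_iter(iocb, from);
	return 0;	/* a real implementation would perform the write here */
}

At runtime the events would then appear under /sys/kernel/tracing/events/ (in the "netfs" group, assuming that is the TRACE_SYSTEM for this header) and can be enabled individually or as a group.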


@@ -2688,6 +2688,7 @@ int kiocb_write_and_wait(struct kiocb *iocb, size_t count)
 	return filemap_write_and_wait_range(mapping, pos, end);
 }
+EXPORT_SYMBOL_GPL(kiocb_write_and_wait);

 int kiocb_invalidate_pages(struct kiocb *iocb, size_t count)
 {
@@ -2715,6 +2716,7 @@ int kiocb_invalidate_pages(struct kiocb *iocb, size_t count)
 	return invalidate_inode_pages2_range(mapping, pos >> PAGE_SHIFT,
 					     end >> PAGE_SHIFT);
 }
+EXPORT_SYMBOL_GPL(kiocb_invalidate_pages);

 /**
  * generic_file_read_iter - generic filesystem read routine
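The two EXPORT_SYMBOL_GPL() additions in the hunks above make kiocb_write_and_wait() and kiocb_invalidate_pages() callable from modules, presumably so the modular netfs library can keep the page cache coherent around direct/unbuffered I/O. A minimal sketch under that assumption (the example_* functions are hypothetical; only the two exported helpers and iov_iter_count() are existing kernel APIs):

#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/uio.h>

/* Hypothetical direct read: push any dirty page-cache data for the range out
 * before reading around the cache, so the server copy is current.
 */
static ssize_t example_dio_read(struct kiocb *iocb, struct iov_iter *to)
{
	ssize_t ret = kiocb_write_and_wait(iocb, iov_iter_count(to));

	if (ret < 0)
		return ret;
	/* ... read directly from the server into 'to' here ... */
	return 0;
}

/* Hypothetical direct write: invalidate cached pages overlapping the range
 * (the helper ends in invalidate_inode_pages2_range(), per the hunk above)
 * so that later buffered reads refetch the newly written data.
 */
static ssize_t example_dio_write(struct kiocb *iocb, struct iov_iter *from)
{
	size_t count = iov_iter_count(from);
	ssize_t ret = kiocb_invalidate_pages(iocb, count);

	if (ret < 0)
		return ret;
	/* ... send the data in 'from' directly to the server here ... */
	return count;
}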