Direct Access for files
-----------------------

Motivation
----------

The page cache is usually used to buffer reads and writes to files.
It is also used to provide the pages which are mapped into userspace
by a call to mmap.

For block devices that are memory-like, the page cache pages would be
unnecessary copies of the original storage. The DAX code removes the
extra copy by performing reads and writes directly to the storage device.
For file mappings, the storage device is mapped directly into userspace.
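
The application-visible interface is just mmap(2); no DAX-specific calls
are needed. A minimal userspace sketch (the path and the 4096-byte length
are assumptions; on a DAX mount the stores land on the medium, while on any
other filesystem they dirty page cache pages):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Store a string into a file through a shared mapping.  With DAX the
 * stores hit the storage medium itself; without it they go through
 * page cache pages.  The application code is identical either way. */
static int write_via_mmap(const char *path, const char *msg)
{
        int fd = open(path, O_RDWR | O_CREAT, 0644);
        if (fd < 0)
                return -1;
        if (ftruncate(fd, 4096) < 0) {
                close(fd);
                return -1;
        }

        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
                close(fd);
                return -1;
        }

        strcpy(p, msg);          /* a plain store, no write() syscall */
        msync(p, 4096, MS_SYNC); /* flush for durability */
        munmap(p, 4096);
        close(fd);
        return 0;
}
```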

Usage
-----

If you have a block device which supports DAX, you can make a filesystem
on it as usual. The DAX code currently only supports files with a block
size equal to your kernel's PAGE_SIZE, so you may need to specify a block
size when creating the filesystem. When mounting it, use the "-o dax"
option on the command line or add 'dax' to the options in /etc/fstab.
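
As a concrete sketch (the device name and mount point are hypothetical,
and mkfs.ext4 stands in for whichever DAX-capable filesystem you use):

```shell
# Create a filesystem with a block size matching a 4 KiB PAGE_SIZE,
# then mount it with DAX enabled.  /dev/pmem0 is a hypothetical
# DAX-capable device; substitute your own.
mkfs.ext4 -b 4096 /dev/pmem0
mount -o dax /dev/pmem0 /mnt/dax

# Or make it persistent with an /etc/fstab entry:
# /dev/pmem0  /mnt/dax  ext4  defaults,dax  0  2
```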

Implementation Tips for Block Driver Writers
--------------------------------------------

To support DAX in your block driver, implement the 'direct_access'
block device operation. It is used to translate the sector number
(expressed in units of 512-byte sectors) to a page frame number (pfn)
that identifies the physical page for the memory. It also returns a
kernel virtual address that can be used to access the memory.

The direct_access method takes a 'size' parameter that indicates the
number of bytes being requested. The function should return the number
of bytes that can be contiguously accessed at that offset. It may also
return a negative errno if an error occurs.
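
The contract can be sketched in userspace (this is NOT kernel code: the
'device' below is a flat buffer and dev_direct_access() is a hypothetical
stand-in for the real block device operation, but the translation and
return-value semantics follow the description above):

```c
#include <stddef.h>

#define SECTOR_SIZE 512
#define DEV_SECTORS 1024                /* 512 KiB toy device */

static char dev_mem[DEV_SECTORS * SECTOR_SIZE];

/*
 * Model of the direct_access contract: translate a 512-byte sector
 * number into an address (and a pfn-like page index) and report how
 * many bytes are contiguously accessible there, or a negative value
 * on error (standing in for a negative errno).
 */
static long dev_direct_access(unsigned long sector, size_t size,
                              void **kaddr, unsigned long *pfn)
{
        (void)size;     /* callers may ask for less than is available */
        if (sector >= DEV_SECTORS)
                return -1;      /* out of range */
        *kaddr = dev_mem + sector * SECTOR_SIZE;
        *pfn = (unsigned long)*kaddr >> 12;     /* page containing kaddr */
        /* The backing memory is flat here, so everything up to the end
         * of the device is one contiguous run. */
        return (long)((DEV_SECTORS - sector) * SECTOR_SIZE);
}
```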

In order to support this method, the storage must be byte-accessible by
the CPU at all times. If your device uses paging techniques to expose
a large amount of memory through a smaller window, then you cannot
implement direct_access. Equally, if your device can occasionally
stall the CPU for an extended period, you should also not attempt to
implement direct_access.

These block devices may be used for inspiration:
- brd: RAM backed block device driver
- dcssblk: s390 dcss block device driver
- pmem: NVDIMM persistent memory driver

Implementation Tips for Filesystem Writers
------------------------------------------

Filesystem support consists of
- adding support to mark inodes as being DAX by setting the S_DAX flag in
  i_flags
- implementing ->read_iter and ->write_iter operations which use dax_iomap_rw()
  when inode has S_DAX flag set
- implementing an mmap file operation for DAX files which sets the
  VM_MIXEDMAP and VM_HUGEPAGE flags on the VMA, and setting the vm_ops to
  include handlers for fault, pmd_fault, page_mkwrite, pfn_mkwrite. These
  handlers should probably call dax_iomap_fault() passing the appropriate
  fault size and iomap operations.
- calling iomap_zero_range() passing appropriate iomap operations instead of
  block_truncate_page() for DAX files
- ensuring that there is sufficient locking between reads, writes,
  truncates and page faults

The iomap handlers for allocating blocks must make sure that allocated blocks
are zeroed out and converted to written extents before being returned to avoid
exposure of uninitialized data through mmap.

These filesystems may be used for inspiration:
- ext2: see Documentation/filesystems/ext2.txt
- ext4: see Documentation/filesystems/ext4/
- xfs: see Documentation/admin-guide/xfs.rst

Handling Media Errors
---------------------

The libnvdimm subsystem stores a record of known media error locations for
each pmem block device (in gendisk->badblocks). If we fault at such a
location, or one with a latent error not yet discovered, the application can
expect to receive a SIGBUS. Libnvdimm also allows clearing of these errors by
simply writing the affected sectors (through the pmem driver, and if the
underlying NVDIMM supports the clear_poison DSM defined by ACPI).

Since DAX IO normally doesn't go through the driver/bio path, applications or
sysadmins have an option to restore the lost data from a prior backup or from
built-in redundancy in the following ways:

1. Delete the affected file, and restore from a backup (sysadmin route):
   This will free the filesystem blocks that were being used by the file,
   and the next time they're allocated, they will be zeroed first, which
   happens through the driver, and will clear bad sectors.

2. Truncate or hole-punch the part of the file that has a bad-block (at least
   an entire aligned sector has to be hole-punched, but not necessarily an
   entire filesystem block).
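
Option 2 maps to fallocate(2) with FALLOC_FL_PUNCH_HOLE. A sketch, assuming
512-byte sectors and a filesystem that supports hole punching
(punch_bad_range() is an illustrative helper, not a kernel API):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/falloc.h>

/*
 * Deallocate the sectors covering a known-bad byte range so that the
 * next allocation rezeroes them through the driver, clearing the error.
 * The range is rounded out to 512-byte sector boundaries, since at
 * least whole aligned sectors must be punched.
 */
static int punch_bad_range(int fd, off_t bad_off, off_t bad_len)
{
        off_t start = bad_off & ~(off_t)511;                    /* round down */
        off_t end = (bad_off + bad_len + 511) & ~(off_t)511;    /* round up */

        return fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                         start, end - start);
}
```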

These are the two basic paths that allow DAX filesystems to continue operating
in the presence of media errors. More robust error recovery mechanisms can be
built on top of this in the future, for example, involving redundancy/mirroring
provided at the block layer through DM, or additionally, at the filesystem
level. These would have to rely on the above two tenets, that error clearing
can happen either by sending an IO through the driver, or zeroing (also through
the driver).


Shortcomings
------------

Even if the kernel or its modules are stored on a filesystem that supports
DAX on a block device that supports DAX, they will still be copied into RAM.

The DAX code does not work correctly on architectures which have virtually
mapped caches such as ARM, MIPS and SPARC.

Calling get_user_pages() on a range of user memory that has been mmapped
from a DAX file will fail when there are no 'struct page' structures to
describe those pages. This problem has been addressed in some device
drivers by adding optional struct page support for pages under the control
of the driver (see CONFIG_NVDIMM_PFN in drivers/nvdimm for an example of
how to do this). In the non-struct-page case, O_DIRECT reads/writes to
those memory ranges from a non-DAX file will fail (note that O_DIRECT
reads/writes _of a DAX file_ do work; it is the memory that is being
accessed that is key here). Other things that will not work in the
non-struct-page case include RDMA, sendfile() and splice().