// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * eCryptfs: Linux filesystem encryption layer
 *
 * Copyright (C) 2007 International Business Machines Corp.
 *   Author(s): Michael A. Halcrow <mahalcro@us.ibm.com>
 */

#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/sched/signal.h>

#include "ecryptfs_kernel.h"

/**
 * ecryptfs_write_lower
 * @ecryptfs_inode: The eCryptfs inode
 * @data: Data to write
 * @offset: Byte offset in the lower file to which to write the data
 * @size: Number of bytes from @data to write at @offset in the lower
 *        file
 *
 * Write data to the lower file.
 *
 * Returns bytes written on success; less than zero on error
 */
int ecryptfs_write_lower(struct inode *ecryptfs_inode, char *data,
			 loff_t offset, size_t size)
{
	struct file *lower_file;
	ssize_t rc;

	lower_file = ecryptfs_inode_to_private(ecryptfs_inode)->lower_file;
	if (!lower_file)
		return -EIO;
	rc = kernel_write(lower_file, data, size, &offset);
	mark_inode_dirty_sync(ecryptfs_inode);
	return rc;
}
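
/*
 * Hypothetical usage sketch (not part of the original file): a caller that
 * already holds an eCryptfs inode whose lower file has been opened could
 * push a small buffer straight to the lower file like this:
 *
 *	char buf[8] = "example";
 *	int rc = ecryptfs_write_lower(ecryptfs_inode, buf, 0, sizeof(buf));
 *
 *	if (rc < 0)
 *		printk(KERN_ERR "lower write failed; rc = [%d]\n", rc);
 *
 * Note that the data lands in the lower file as-is; encryption happens in
 * ecryptfs_encrypt_page(), not here.
 */
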
/**
 * ecryptfs_write_lower_page_segment
 * @ecryptfs_inode: The eCryptfs inode
 * @page_for_lower: The page containing the data to be written to the
 *                  lower file
 * @offset_in_page: The offset in the @page_for_lower from which to
 *                  start writing the data
 * @size: The amount of data from @page_for_lower to write to the
 *        lower file
 *
 * Determines the byte offset in the file for the given page and
 * offset within the page, maps the page, and makes the call to write
 * the contents of @page_for_lower to the lower inode.
 *
 * Returns zero on success; non-zero otherwise
 */
int ecryptfs_write_lower_page_segment(struct inode *ecryptfs_inode,
				      struct page *page_for_lower,
				      size_t offset_in_page, size_t size)
{
	char *virt;
	loff_t offset;
	int rc;

	offset = ((((loff_t)page_for_lower->index) << PAGE_SHIFT)
		  + offset_in_page);
	virt = kmap_local_page(page_for_lower);
	rc = ecryptfs_write_lower(ecryptfs_inode, virt, offset, size);
	if (rc > 0)
		rc = 0;
	kunmap_local(virt);
	return rc;
}
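
/*
 * Worked example (illustrative, not from the original file): with 4 KiB
 * pages, a page_for_lower->index of 3 and an offset_in_page of 100 map to
 * the lower-file byte offset (3 << PAGE_SHIFT) + 100 = 12288 + 100 = 12388,
 * which is where ecryptfs_write_lower() starts writing the mapped data.
 */
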
/**
 * ecryptfs_write
 * @ecryptfs_inode: The eCryptfs file into which to write
 * @data: Virtual address where data to write is located
 * @offset: Offset in the eCryptfs file at which to begin writing the
 *          data from @data
 * @size: The number of bytes to write from @data
 *
 * Write an arbitrary amount of data to an arbitrary location in the
 * eCryptfs inode page cache. This is done on a page-by-page, and then
 * by an extent-by-extent, basis; individual extents are encrypted and
 * written to the lower page cache (via VFS writes). This function
 * takes care of all the address translation to locations in the lower
 * filesystem; it also handles truncate events, writing out zeros
 * where necessary.
 *
 * Returns zero on success; non-zero otherwise
 */
int ecryptfs_write(struct inode *ecryptfs_inode, char *data, loff_t offset,
		   size_t size)
{
	struct page *ecryptfs_page;
	struct ecryptfs_crypt_stat *crypt_stat;
	char *ecryptfs_page_virt;
	loff_t ecryptfs_file_size = i_size_read(ecryptfs_inode);
	loff_t data_offset = 0;
	loff_t pos;
	int rc = 0;

	crypt_stat = &ecryptfs_inode_to_private(ecryptfs_inode)->crypt_stat;
	/*
	 * if we are writing beyond current size, then start pos
	 * at the current size - we'll fill in zeros from there.
	 */
	if (offset > ecryptfs_file_size)
		pos = ecryptfs_file_size;
	else
		pos = offset;
	while (pos < (offset + size)) {
		pgoff_t ecryptfs_page_idx = (pos >> PAGE_SHIFT);
		size_t start_offset_in_page = (pos & ~PAGE_MASK);
		size_t num_bytes = (PAGE_SIZE - start_offset_in_page);
		loff_t total_remaining_bytes = ((offset + size) - pos);

		if (fatal_signal_pending(current)) {
			rc = -EINTR;
			break;
		}

		if (num_bytes > total_remaining_bytes)
			num_bytes = total_remaining_bytes;
		if (pos < offset) {
			/* remaining zeros to write, up to destination offset */
			loff_t total_remaining_zeros = (offset - pos);

			if (num_bytes > total_remaining_zeros)
				num_bytes = total_remaining_zeros;
		}
		ecryptfs_page = ecryptfs_get_locked_page(ecryptfs_inode,
							 ecryptfs_page_idx);
		if (IS_ERR(ecryptfs_page)) {
			rc = PTR_ERR(ecryptfs_page);
			printk(KERN_ERR "%s: Error getting page at "
			       "index [%ld] from eCryptfs inode "
			       "mapping; rc = [%d]\n", __func__,
			       ecryptfs_page_idx, rc);
			goto out;
		}
		ecryptfs_page_virt = kmap_local_page(ecryptfs_page);

		/*
		 * pos: where we're now writing, offset: where the request was
		 * If current pos is before request, we are filling zeros
		 * If we are at or beyond request, we are writing the *data*
		 * If we're in a fresh page beyond eof, zero it in either case
		 */
		if (pos < offset || !start_offset_in_page) {
			/* We are extending past the previous end of the file.
			 * Fill in zero values to the end of the page */
			memset(((char *)ecryptfs_page_virt
				+ start_offset_in_page), 0,
			       PAGE_SIZE - start_offset_in_page);
		}

		/* pos >= offset, we are now writing the data request */
		if (pos >= offset) {
			memcpy(((char *)ecryptfs_page_virt
				+ start_offset_in_page),
			       (data + data_offset), num_bytes);
			data_offset += num_bytes;
		}
		kunmap_local(ecryptfs_page_virt);
		flush_dcache_page(ecryptfs_page);
		SetPageUptodate(ecryptfs_page);
		unlock_page(ecryptfs_page);
		if (crypt_stat->flags & ECRYPTFS_ENCRYPTED)
			rc = ecryptfs_encrypt_page(ecryptfs_page);
		else
			rc = ecryptfs_write_lower_page_segment(ecryptfs_inode,
						ecryptfs_page,
						start_offset_in_page,
						data_offset);
		put_page(ecryptfs_page);
		if (rc) {
			printk(KERN_ERR "%s: Error encrypting "
			       "page; rc = [%d]\n", __func__, rc);
			goto out;
		}
		pos += num_bytes;
	}
	if (pos > ecryptfs_file_size) {
		i_size_write(ecryptfs_inode, pos);
		if (crypt_stat->flags & ECRYPTFS_ENCRYPTED) {
			int rc2;

			rc2 = ecryptfs_write_inode_size_to_metadata(
								ecryptfs_inode);
			if (rc2) {
				printk(KERN_ERR "Problem with "
				       "ecryptfs_write_inode_size_to_metadata; "
				       "rc = [%d]\n", rc2);
				if (!rc)
					rc = rc2;
				goto out;
			}
		}
	}
out:
	return rc;
}
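
/*
 * Hypothetical usage sketch (not part of the original file): writing past
 * the current end of file. The loop above first zero-fills the pages
 * between the old i_size and @offset, then copies the caller's bytes.
 *
 *	loff_t old_size = i_size_read(ecryptfs_inode);
 *	char tag[] = "DATA";
 *	int rc = ecryptfs_write(ecryptfs_inode, tag, old_size + 1024,
 *				sizeof(tag) - 1);
 *
 *	if (rc)
 *		printk(KERN_ERR "ecryptfs_write failed; rc = [%d]\n", rc);
 */
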
/**
 * ecryptfs_read_lower
 * @data: The read data is stored here by this function
 * @offset: Byte offset in the lower file from which to read the data
 * @size: Number of bytes to read from @offset of the lower file and
 *        store into @data
 * @ecryptfs_inode: The eCryptfs inode
 *
 * Read @size bytes of data at byte offset @offset from the lower
 * inode into memory location @data.
 *
 * Returns bytes read on success; 0 on EOF; less than zero on error
 */
int ecryptfs_read_lower(char *data, loff_t offset, size_t size,
			struct inode *ecryptfs_inode)
{
	struct file *lower_file;

	lower_file = ecryptfs_inode_to_private(ecryptfs_inode)->lower_file;
	if (!lower_file)
		return -EIO;
	return kernel_read(lower_file, data, size, &offset);
}
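
/*
 * Hypothetical usage sketch (not part of the original file): reading the
 * first bytes of the lower (still encrypted) file, e.g. to inspect the
 * eCryptfs file header:
 *
 *	char header[16];
 *	int rc = ecryptfs_read_lower(header, 0, sizeof(header),
 *				     ecryptfs_inode);
 *
 *	if (rc < 0)
 *		printk(KERN_ERR "lower read failed; rc = [%d]\n", rc);
 */
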
/**
 * ecryptfs_read_lower_page_segment
 * @page_for_ecryptfs: The page into which data for eCryptfs will be
 *                     written
 * @page_index: Index of @page_for_ecryptfs in the eCryptfs page cache,
 *              used to compute the lower file offset to read from
 * @offset_in_page: Offset in @page_for_ecryptfs from which to start
 *                  writing
 * @size: The number of bytes to write into @page_for_ecryptfs
 * @ecryptfs_inode: The eCryptfs inode
 *
 * Determines the byte offset in the file for the given page and
 * offset within the page, maps the page, and makes the call to read
 * the contents of @page_for_ecryptfs from the lower inode.
 *
 * Returns zero on success; non-zero otherwise
 */
int ecryptfs_read_lower_page_segment(struct page *page_for_ecryptfs,
				     pgoff_t page_index,
				     size_t offset_in_page, size_t size,
				     struct inode *ecryptfs_inode)
{
	char *virt;
	loff_t offset;
	int rc;

	offset = ((((loff_t)page_index) << PAGE_SHIFT) + offset_in_page);
	virt = kmap_local_page(page_for_ecryptfs);
	rc = ecryptfs_read_lower(virt, offset, size, ecryptfs_inode);
	if (rc > 0)
		rc = 0;
	kunmap_local(virt);
	flush_dcache_page(page_for_ecryptfs);
	return rc;
}
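
/*
 * Hypothetical usage sketch (not part of the original file): a read-side
 * caller that wants a whole page cache page filled from the lower file
 * (useful when the lower data needs no decryption) could do:
 *
 *	rc = ecryptfs_read_lower_page_segment(page, page->index, 0,
 *					      PAGE_SIZE, ecryptfs_inode);
 */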