scsi: st: Convert get_user_pages() --> pin_user_pages()

This code was using get_user_pages*(), in a "Case 1" scenario (Direct IO),
using the categorization from [1]. That means that it's time to convert the
get_user_pages*() + put_page() calls to pin_user_pages*() +
unpin_user_pages() calls.
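
As a rough sketch of that pairing (using the kernel's pin_user_pages_fast()
and unpin_user_pages(); the locals here simply mirror the st.c variables in
the diff below, so treat it as an illustration rather than the exact driver
code):

    res = pin_user_pages_fast(uaddr, nr_pages,
                              rw == READ ? FOLL_WRITE : 0, pages);
    if (res < nr_pages)
            goto out_unmap;

    /* ... hand "pages" to the block layer for the Direct IO transfer ... */

 out_unmap:
    if (res > 0) {
            unpin_user_pages(pages, res);   /* replaces the put_page() loop */
            res = 0;
    }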

There is some helpful background in [2]: basically, this is a small part of
fixing a long-standing disconnect between pinning pages and file systems'
use of those pages.

Note that this effectively changes the code's behavior as well: it now
ultimately calls set_page_dirty_lock() instead of SetPageDirty(). This is
probably more accurate.

As Christoph Hellwig put it, "set_page_dirty() is only safe if we are
dealing with a file backed page where we have reference on the inode it
hangs off." [3]
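
On the unmap side, the dirty-and-release step collapses into a single
helper; a minimal sketch with the same assumed locals, relying on the fact
that unpin_user_pages_dirty_lock() calls set_page_dirty_lock() on each page
when its make_dirty argument is true:

    /* Mark the pages dirty (via set_page_dirty_lock()) only if the
     * device actually wrote into them, then drop the pins.
     */
    unpin_user_pages_dirty_lock(pages, nr_pages, dirtied);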

Also, this deletes one of the two FIXME comments (about refcounting),
because there is nothing wrong with the refcounting at this point.

[1] Documentation/core-api/pin_user_pages.rst

[2] "Explicit pinning of user-space pages":
    https://lwn.net/Articles/807108/

[3] https://lore.kernel.org/r/20190723153640.GB720@lst.de

Link: https://lore.kernel.org/r/20200526182709.99599-1-jhubbard@nvidia.com
Cc: "Kai Mäkisara (Kolumbus)" <kai.makisara@kolumbus.fi>
Cc: Bart Van Assche <bvanassche@acm.org>
Cc: James E.J. Bottomley <jejb@linux.ibm.com>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: linux-scsi@vger.kernel.org
Acked-by: Kai Mäkisara <kai.makisara@kolumbus.fi>
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>

@@ -4921,7 +4921,7 @@ static int sgl_map_user_pages(struct st_buffer *STbp,
 	unsigned long end = (uaddr + count + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	unsigned long start = uaddr >> PAGE_SHIFT;
 	const int nr_pages = end - start;
-	int res, i, j;
+	int res, i;
 	struct page **pages;
 	struct rq_map_data *mdata = &STbp->map_data;
@@ -4943,7 +4943,7 @@ static int sgl_map_user_pages(struct st_buffer *STbp,
 	/* Try to fault in all of the necessary pages */
 	/* rw==READ means read from drive, write into memory area */
-	res = get_user_pages_fast(uaddr, nr_pages, rw == READ ? FOLL_WRITE : 0,
+	res = pin_user_pages_fast(uaddr, nr_pages, rw == READ ? FOLL_WRITE : 0,
 			  pages);
 	/* Errors and no page mapped should return here */
@@ -4963,8 +4963,7 @@ static int sgl_map_user_pages(struct st_buffer *STbp,
 	return nr_pages;
 out_unmap:
 	if (res > 0) {
-		for (j=0; j < res; j++)
-			put_page(pages[j]);
+		unpin_user_pages(pages, res);
 		res = 0;
 	}
 	kfree(pages);
@@ -4976,18 +4975,9 @@ static int sgl_map_user_pages(struct st_buffer *STbp,
 static int sgl_unmap_user_pages(struct st_buffer *STbp,
 				const unsigned int nr_pages, int dirtied)
 {
-	int i;
+	/* FIXME: cache flush missing for rw==READ */
+	unpin_user_pages_dirty_lock(STbp->mapped_pages, nr_pages, dirtied);
-	for (i=0; i < nr_pages; i++) {
-		struct page *page = STbp->mapped_pages[i];
-
-		if (dirtied)
-			SetPageDirty(page);
-		/* FIXME: cache flush missing for rw==READ
-		 * FIXME: call the correct reference counting function
-		 */
-		put_page(page);
-	}
 	kfree(STbp->mapped_pages);
 	STbp->mapped_pages = NULL;