RDMA/umem: Add a schedule point in ib_umem_get()
Mapping as little as 64GB can take more than 10 seconds, triggering issues on kernels with CONFIG_PREEMPT_NONE=y.

ib_umem_get() already splits the work into 2MB units on x86_64; adding a cond_resched() in the long-lasting loop is enough to solve the issue.

Note that sg_alloc_table() can still use more than 100 ms, which is also problematic. This might be addressed later in ib_umem_add_sg_table(), adding new blocks in sgl on demand.

Link: https://lore.kernel.org/r/20200730015755.1827498-1-edumazet@google.com
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
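For illustration, here is a minimal sketch of the pattern the fix relies on: a long-running pinning loop that inserts an explicit scheduling point per batch, which on a non-preemptible kernel is the only opportunity the scheduler gets inside the loop. The helper pin_range_sketch() and its parameters are hypothetical and only mirror the loop shape shown in the diff below; they are not part of the commit.

#include <linux/kernel.h>	/* min_t() */
#include <linux/mm.h>		/* pin_user_pages_fast(), PAGE_SIZE */
#include <linux/sched.h>	/* cond_resched() */

/*
 * Hypothetical helper (not kernel API) showing the shape of the fixed
 * loop: pin a user range in batches of at most
 * PAGE_SIZE / sizeof(struct page *) pages (512 pages, i.e. 2MB of
 * memory per iteration on x86_64), yielding the CPU between batches.
 */
static long pin_range_sketch(unsigned long cur_base, unsigned long npages,
			     unsigned int gup_flags, struct page **page_list)
{
	long pinned = 0;
	long ret;

	while (npages) {
		/*
		 * Explicit scheduling point: without it, a large mapping
		 * can monopolize this CPU for seconds when
		 * CONFIG_PREEMPT_NONE=y.
		 */
		cond_resched();

		ret = pin_user_pages_fast(cur_base,
					  min_t(unsigned long, npages,
						PAGE_SIZE /
						sizeof(struct page *)),
					  gup_flags, page_list);
		if (ret < 0)
			return ret;

		cur_base += ret * PAGE_SIZE;
		npages -= ret;
		pinned += ret;
	}

	return pinned;
}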
parent 395f2e8fd3
commit 928da37a22
@@ -261,6 +261,7 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 	sg = umem->sg_head.sgl;
 
 	while (npages) {
+		cond_resched();
 		ret = pin_user_pages_fast(cur_base,
 					  min_t(unsigned long, npages,
 						PAGE_SIZE /