iommufd: Use atomic_long_try_cmpxchg() in incr_user_locked_vm()

Use atomic_long_try_cmpxchg() instead of
atomic_long_cmpxchg(*ptr, old, new) != old in incr_user_locked_vm().
On x86, CMPXCHG returns success in the ZF flag, so this change saves a
compare after cmpxchg (and the related move instruction in front of cmpxchg).

Also, atomic_long_try_cmpxchg() implicitly assigns the old value of *ptr
to "old" when the cmpxchg fails, so there is no need to re-read the
value in the loop.
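
For illustration only, below is a minimal userspace sketch of the same
pattern using C11 atomics: atomic_compare_exchange_weak() has the same
try-semantics (boolean result, and on failure the current value is
written back into the expected operand). The names locked_vm,
lock_limit and incr_locked_vm are hypothetical and merely mirror the
kernel code; this is not the iommufd implementation.

/* Userspace sketch of the try_cmpxchg-style retry loop (C11 atomics). */
#include <stdatomic.h>
#include <stdio.h>

static atomic_long locked_vm;

static int incr_locked_vm(long npages, long lock_limit)
{
	long cur_pages, new_pages;

	cur_pages = atomic_load(&locked_vm);
	do {
		new_pages = cur_pages + npages;
		if (new_pages > lock_limit)
			return -1;	/* would exceed the limit */
		/*
		 * On failure, cur_pages is updated with the latest value
		 * of locked_vm, so the loop retries without an extra load.
		 */
	} while (!atomic_compare_exchange_weak(&locked_vm, &cur_pages,
					       new_pages));
	return 0;
}

int main(void)
{
	if (incr_locked_vm(16, 1024) == 0)
		printf("locked_vm is now %ld\n", atomic_load(&locked_vm));
	return 0;
}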

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Will Deacon <will@kernel.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240522082729.971123-3-ubizjak@gmail.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

@@ -809,13 +809,14 @@ static int incr_user_locked_vm(struct iopt_pages *pages, unsigned long npages)
 	lock_limit = task_rlimit(pages->source_task, RLIMIT_MEMLOCK) >>
 		     PAGE_SHIFT;
+
+	cur_pages = atomic_long_read(&pages->source_user->locked_vm);
 	do {
-		cur_pages = atomic_long_read(&pages->source_user->locked_vm);
 		new_pages = cur_pages + npages;
 		if (new_pages > lock_limit)
 			return -ENOMEM;
-	} while (atomic_long_cmpxchg(&pages->source_user->locked_vm, cur_pages,
-				     new_pages) != cur_pages);
+	} while (!atomic_long_try_cmpxchg(&pages->source_user->locked_vm,
+					  &cur_pages, new_pages));
 	return 0;
 }