Merge branch 'akpm' (patches from Andrew)

Merge second patch-bomb from Andrew Morton:

 - most of the rest of MM

 - procfs

 - lib/ updates

 - printk updates

 - bitops infrastructure tweaks

 - checkpatch updates

 - nilfs2 update

 - signals

 - various other misc bits: coredump, seqfile, kexec, pidns, zlib, ipc,
   dma-debug, dma-mapping, ...

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (102 commits)
  ipc,msg: drop dst nil validation in copy_msg
  include/linux/zutil.h: fix usage example of zlib_adler32()
  panic: release stale console lock to always get the logbuf printed out
  dma-debug: check nents in dma_sync_sg*
  dma-mapping: tidy up dma_parms default handling
  pidns: fix set/getpriority and ioprio_set/get in PRIO_USER mode
  kexec: use file name as the output message prefix
  fs, seqfile: always allow oom killer
  seq_file: reuse string_escape_str()
  fs/seq_file: use seq_* helpers in seq_hex_dump()
  coredump: change zap_threads() and zap_process() to use for_each_thread()
  coredump: ensure all coredumping tasks have SIGNAL_GROUP_COREDUMP
  signal: remove jffs2_garbage_collect_thread()->allow_signal(SIGCONT)
  signal: introduce kernel_signal_stop() to fix jffs2_garbage_collect_thread()
  signal: turn dequeue_signal_lock() into kernel_dequeue_signal()
  signals: kill block_all_signals() and unblock_all_signals()
  nilfs2: fix gcc uninitialized-variable warnings in powerpc build
  nilfs2: fix gcc unused-but-set-variable warnings
  MAINTAINERS: nilfs2: add header file for tracing
  nilfs2: add tracepoints for analyzing reading and writing metadata files
  ...
Linus Torvalds 2015-11-07 14:32:45 -08:00
commit ad804a0b2a
211 changed files with 2308 additions and 1642 deletions

@@ -23,6 +23,10 @@ Example:
 Reminder: sizeof() result is of type size_t.
+The kernel's printf does not support %n. For obvious reasons, floating
+point formats (%e, %f, %g, %a) are also not recognized. Use of any
+unsupported specifier or length qualifier results in a WARN and early
+return from vsnprintf.
 Raw pointer value SHOULD be printed with %p. The kernel supports
 the following extended format specifiers for pointer types:
@@ -119,6 +123,7 @@ Raw buffer as an escaped string:
 If field width is omitted the 1 byte only will be escaped.
 Raw buffer as a hex string:
 	%*ph	00 01 02  ...  3f
 	%*phC	00:01:02: ... :3f
 	%*phD	00-01-02- ... -3f
@@ -234,6 +239,7 @@ UUID/GUID addresses:
 Passed by reference.
 dentry names:
 	%pd{,2,3,4}
 	%pD{,2,3,4}
@@ -256,6 +262,8 @@ struct va_format:
 	va_list *va;
 };
+Implements a "recursive vsnprintf".
 Do not use this feature without some mechanism to verify the
 correctness of the format string and va_list arguments.
@@ -284,6 +292,27 @@ bitmap and its derivatives such as cpumask and nodemask:
 Passed by reference.
+Network device features:
+	%pNF	0x000000000000c000
+	For printing netdev_features_t.
+	Passed by reference.
+Command from struct task_struct
+	%pT	ls
+	For printing executable name excluding path from struct
+	task_struct.
+	Passed by reference.
+If you add other %p extensions, please extend lib/test_printf.c with
+one or more test cases, if at all feasible.
 Thank you for your cooperation and attention.
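
A quick illustration of the %*ph hex-string extension documented above (a minimal sketch; the buffer name and length are made up for the example):

    #include <linux/printk.h>
    #include <linux/types.h>

    static void dump_hdr(const u8 *hdr, int len)
    {
            /*
             * The '*' field width is the number of bytes to print; the
             * %ph family is intended for small buffers (at most 64 bytes).
             */
            pr_debug("header bytes: %*ph\n", len, hdr);
    }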


@@ -1,12 +1,14 @@
 Started Jan 2000 by Kanoj Sarcar <kanoj@sgi.com>
-Memory balancing is needed for non __GFP_WAIT as well as for non
-__GFP_IO allocations.
-There are two reasons to be requesting non __GFP_WAIT allocations:
-the caller can not sleep (typically intr context), or does not want
-to incur cost overheads of page stealing and possible swap io for
-whatever reasons.
+Memory balancing is needed for !__GFP_ATOMIC and !__GFP_KSWAPD_RECLAIM as
+well as for non __GFP_IO allocations.
+The first reason why a caller may avoid reclaim is that the caller can not
+sleep due to holding a spinlock or is in interrupt context. The second may
+be that the caller is willing to fail the allocation without incurring the
+overhead of page reclaim. This may happen for opportunistic high-order
+allocation requests that have order-0 fallback options. In such cases,
+the caller may also wish to avoid waking kswapd.
 __GFP_IO allocation requests are made to prevent file system deadlocks.
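
Most of the call-site conversions in the hunks below follow from this rework: the old __GFP_WAIT flag is split into __GFP_DIRECT_RECLAIM and __GFP_KSWAPD_RECLAIM, and open-coded "gfp & __GFP_WAIT" tests become gfpflags_allow_blocking(). A simplified sketch of what the new helpers boil down to (the real definitions live in include/linux/gfp.h):

    /* simplified: the union of both reclaim modes replaces __GFP_WAIT */
    #define __GFP_RECLAIM	(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM)

    static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
    {
            /* only direct reclaim implies that the caller may sleep */
            return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
    }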


@@ -54,8 +54,8 @@ everything required is done by pgtable_page_ctor() and pgtable_page_dtor(),
 which must be called on PTE table allocation / freeing.
 Make sure the architecture doesn't use slab allocator for page table
-allocation: slab uses page->slab_cache and page->first_page for its pages.
-These fields share storage with page->ptl.
+allocation: slab uses page->slab_cache for its pages.
+This field shares storage with page->ptl.
 PMD split lock only makes sense if you have more than two page table
 levels.


@@ -4209,7 +4209,10 @@ L: linux-kernel@vger.kernel.org
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/chanwoo/extcon.git
 S: Maintained
 F: drivers/extcon/
+F: include/linux/extcon/
+F: include/linux/extcon.h
 F: Documentation/extcon/
+F: Documentation/devicetree/bindings/extcon/
 EXYNOS DP DRIVER
 M: Jingoo Han <jingoohan1@gmail.com>
@@ -7490,6 +7493,7 @@ S: Supported
 F: Documentation/filesystems/nilfs2.txt
 F: fs/nilfs2/
 F: include/linux/nilfs2_fs.h
+F: include/trace/events/nilfs2.h
 NINJA SCSI-3 / NINJA SCSI-32Bi (16bit/CardBus) PCMCIA SCSI HOST ADAPTER DRIVER
 M: YOKOTA Hiroshi <yokota@netlab.is.tsukuba.ac.jp>


@ -651,12 +651,12 @@ static void *__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
if (nommu()) if (nommu())
addr = __alloc_simple_buffer(dev, size, gfp, &page); addr = __alloc_simple_buffer(dev, size, gfp, &page);
else if (dev_get_cma_area(dev) && (gfp & __GFP_WAIT)) else if (dev_get_cma_area(dev) && (gfp & __GFP_DIRECT_RECLAIM))
addr = __alloc_from_contiguous(dev, size, prot, &page, addr = __alloc_from_contiguous(dev, size, prot, &page,
caller, want_vaddr); caller, want_vaddr);
else if (is_coherent) else if (is_coherent)
addr = __alloc_simple_buffer(dev, size, gfp, &page); addr = __alloc_simple_buffer(dev, size, gfp, &page);
else if (!(gfp & __GFP_WAIT)) else if (!gfpflags_allow_blocking(gfp))
addr = __alloc_from_pool(size, &page); addr = __alloc_from_pool(size, &page);
else else
addr = __alloc_remap_buffer(dev, size, gfp, prot, &page, addr = __alloc_remap_buffer(dev, size, gfp, prot, &page,
@ -1363,7 +1363,7 @@ static void *arm_iommu_alloc_attrs(struct device *dev, size_t size,
*handle = DMA_ERROR_CODE; *handle = DMA_ERROR_CODE;
size = PAGE_ALIGN(size); size = PAGE_ALIGN(size);
if (!(gfp & __GFP_WAIT)) if (!gfpflags_allow_blocking(gfp))
return __iommu_alloc_atomic(dev, size, handle); return __iommu_alloc_atomic(dev, size, handle);
/* /*


@ -25,7 +25,7 @@
unsigned long xen_get_swiotlb_free_pages(unsigned int order) unsigned long xen_get_swiotlb_free_pages(unsigned int order)
{ {
struct memblock_region *reg; struct memblock_region *reg;
gfp_t flags = __GFP_NOWARN; gfp_t flags = __GFP_NOWARN|__GFP_KSWAPD_RECLAIM;
for_each_memblock(memory, reg) { for_each_memblock(memory, reg) {
if (reg->base < (phys_addr_t)0xffffffff) { if (reg->base < (phys_addr_t)0xffffffff) {


@ -100,7 +100,7 @@ static void *__dma_alloc_coherent(struct device *dev, size_t size,
if (IS_ENABLED(CONFIG_ZONE_DMA) && if (IS_ENABLED(CONFIG_ZONE_DMA) &&
dev->coherent_dma_mask <= DMA_BIT_MASK(32)) dev->coherent_dma_mask <= DMA_BIT_MASK(32))
flags |= GFP_DMA; flags |= GFP_DMA;
if (dev_get_cma_area(dev) && (flags & __GFP_WAIT)) { if (dev_get_cma_area(dev) && gfpflags_allow_blocking(flags)) {
struct page *page; struct page *page;
void *addr; void *addr;
@ -148,7 +148,7 @@ static void *__dma_alloc(struct device *dev, size_t size,
size = PAGE_ALIGN(size); size = PAGE_ALIGN(size);
if (!coherent && !(flags & __GFP_WAIT)) { if (!coherent && !gfpflags_allow_blocking(flags)) {
struct page *page = NULL; struct page *page = NULL;
void *addr = __alloc_from_pool(size, &page, flags); void *addr = __alloc_from_pool(size, &page, flags);


@ -159,7 +159,7 @@ static int lookup_prev_stack_frame(unsigned long fp, unsigned long pc,
/* Sign extend */ /* Sign extend */
regcache[dest] = regcache[dest] =
((((s64)(u64)op >> 10) & 0xffff) << 54) >> 54; sign_extend64((((u64)op >> 10) & 0xffff), 9);
break; break;
case (0xd0 >> 2): /* addi */ case (0xd0 >> 2): /* addi */
case (0xd4 >> 2): /* addi.l */ case (0xd4 >> 2): /* addi.l */


@ -101,7 +101,7 @@ static int generate_and_check_address(struct pt_regs *regs,
if (displacement_not_indexed) { if (displacement_not_indexed) {
__s64 displacement; __s64 displacement;
displacement = (opcode >> 10) & 0x3ff; displacement = (opcode >> 10) & 0x3ff;
displacement = ((displacement << 54) >> 54); /* sign extend */ displacement = sign_extend64(displacement, 9);
addr = (__u64)((__s64)base_address + (displacement << width_shift)); addr = (__u64)((__s64)base_address + (displacement << width_shift));
} else { } else {
__u64 offset; __u64 offset;


@ -163,10 +163,9 @@ again:
goto again; goto again;
delta = now - prev; delta = now - prev;
if (unlikely(event->hw.event_base == MSR_SMI_COUNT)) { if (unlikely(event->hw.event_base == MSR_SMI_COUNT))
delta <<= 32; delta = sign_extend64(delta, 31);
delta >>= 32; /* sign extend */
}
local64_add(now - prev, &event->count); local64_add(now - prev, &event->count);
} }
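
The three hunks above replace open-coded shift-left/shift-right pairs with sign_extend64() from this series' bitops updates. Roughly, the helper is (a sketch of the include/linux/bitops.h addition):

    static inline __s64 sign_extend64(__u64 value, int index)
    {
            __u8 shift = 63 - index;

            /* move the sign bit (bit 'index') up to bit 63, then arithmetic-shift back */
            return (__s64)(value << shift) >> shift;
    }

So sign_extend64(delta, 31) matches the old "delta <<= 32; delta >>= 32;" pair, and sign_extend64(x, 9) matches the old "<< 54 >> 54" idiom in the sh64 hunks.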


@ -90,7 +90,7 @@ void *dma_generic_alloc_coherent(struct device *dev, size_t size,
again: again:
page = NULL; page = NULL;
/* CMA can be used only in the context which permits sleeping */ /* CMA can be used only in the context which permits sleeping */
if (flag & __GFP_WAIT) { if (gfpflags_allow_blocking(flag)) {
page = dma_alloc_from_contiguous(dev, count, get_order(size)); page = dma_alloc_from_contiguous(dev, count, get_order(size));
if (page && page_to_phys(page) + size > dma_mask) { if (page && page_to_phys(page) + size > dma_mask) {
dma_release_from_contiguous(dev, page, count); dma_release_from_contiguous(dev, page, count);


@ -169,7 +169,6 @@ CONFIG_FLATMEM_MANUAL=y
# CONFIG_SPARSEMEM_MANUAL is not set # CONFIG_SPARSEMEM_MANUAL is not set
CONFIG_FLATMEM=y CONFIG_FLATMEM=y
CONFIG_FLAT_NODE_MEM_MAP=y CONFIG_FLAT_NODE_MEM_MAP=y
CONFIG_PAGEFLAGS_EXTENDED=y
CONFIG_SPLIT_PTLOCK_CPUS=4 CONFIG_SPLIT_PTLOCK_CPUS=4
# CONFIG_PHYS_ADDR_T_64BIT is not set # CONFIG_PHYS_ADDR_T_64BIT is not set
CONFIG_ZONE_DMA_FLAG=1 CONFIG_ZONE_DMA_FLAG=1


@ -211,7 +211,7 @@ fallback:
bvl = mempool_alloc(pool, gfp_mask); bvl = mempool_alloc(pool, gfp_mask);
} else { } else {
struct biovec_slab *bvs = bvec_slabs + *idx; struct biovec_slab *bvs = bvec_slabs + *idx;
gfp_t __gfp_mask = gfp_mask & ~(__GFP_WAIT | __GFP_IO); gfp_t __gfp_mask = gfp_mask & ~(__GFP_DIRECT_RECLAIM | __GFP_IO);
/* /*
* Make this allocation restricted and don't dump info on * Make this allocation restricted and don't dump info on
@ -221,11 +221,11 @@ fallback:
__gfp_mask |= __GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_NOWARN; __gfp_mask |= __GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_NOWARN;
/* /*
* Try a slab allocation. If this fails and __GFP_WAIT * Try a slab allocation. If this fails and __GFP_DIRECT_RECLAIM
* is set, retry with the 1-entry mempool * is set, retry with the 1-entry mempool
*/ */
bvl = kmem_cache_alloc(bvs->slab, __gfp_mask); bvl = kmem_cache_alloc(bvs->slab, __gfp_mask);
if (unlikely(!bvl && (gfp_mask & __GFP_WAIT))) { if (unlikely(!bvl && (gfp_mask & __GFP_DIRECT_RECLAIM))) {
*idx = BIOVEC_MAX_IDX; *idx = BIOVEC_MAX_IDX;
goto fallback; goto fallback;
} }
@ -395,12 +395,12 @@ static void punt_bios_to_rescuer(struct bio_set *bs)
* If @bs is NULL, uses kmalloc() to allocate the bio; else the allocation is * If @bs is NULL, uses kmalloc() to allocate the bio; else the allocation is
* backed by the @bs's mempool. * backed by the @bs's mempool.
* *
* When @bs is not NULL, if %__GFP_WAIT is set then bio_alloc will always be * When @bs is not NULL, if %__GFP_DIRECT_RECLAIM is set then bio_alloc will
* able to allocate a bio. This is due to the mempool guarantees. To make this * always be able to allocate a bio. This is due to the mempool guarantees.
* work, callers must never allocate more than 1 bio at a time from this pool. * To make this work, callers must never allocate more than 1 bio at a time
* Callers that need to allocate more than 1 bio must always submit the * from this pool. Callers that need to allocate more than 1 bio must always
* previously allocated bio for IO before attempting to allocate a new one. * submit the previously allocated bio for IO before attempting to allocate
* Failure to do so can cause deadlocks under memory pressure. * a new one. Failure to do so can cause deadlocks under memory pressure.
* *
* Note that when running under generic_make_request() (i.e. any block * Note that when running under generic_make_request() (i.e. any block
* driver), bios are not submitted until after you return - see the code in * driver), bios are not submitted until after you return - see the code in
@ -459,13 +459,13 @@ struct bio *bio_alloc_bioset(gfp_t gfp_mask, int nr_iovecs, struct bio_set *bs)
* We solve this, and guarantee forward progress, with a rescuer * We solve this, and guarantee forward progress, with a rescuer
* workqueue per bio_set. If we go to allocate and there are * workqueue per bio_set. If we go to allocate and there are
* bios on current->bio_list, we first try the allocation * bios on current->bio_list, we first try the allocation
* without __GFP_WAIT; if that fails, we punt those bios we * without __GFP_DIRECT_RECLAIM; if that fails, we punt those
* would be blocking to the rescuer workqueue before we retry * bios we would be blocking to the rescuer workqueue before
* with the original gfp_flags. * we retry with the original gfp_flags.
*/ */
if (current->bio_list && !bio_list_empty(current->bio_list)) if (current->bio_list && !bio_list_empty(current->bio_list))
gfp_mask &= ~__GFP_WAIT; gfp_mask &= ~__GFP_DIRECT_RECLAIM;
p = mempool_alloc(bs->bio_pool, gfp_mask); p = mempool_alloc(bs->bio_pool, gfp_mask);
if (!p && gfp_mask != saved_gfp) { if (!p && gfp_mask != saved_gfp) {


@ -638,7 +638,7 @@ int blk_queue_enter(struct request_queue *q, gfp_t gfp)
if (percpu_ref_tryget_live(&q->q_usage_counter)) if (percpu_ref_tryget_live(&q->q_usage_counter))
return 0; return 0;
if (!(gfp & __GFP_WAIT)) if (!gfpflags_allow_blocking(gfp))
return -EBUSY; return -EBUSY;
ret = wait_event_interruptible(q->mq_freeze_wq, ret = wait_event_interruptible(q->mq_freeze_wq,
@ -1206,8 +1206,8 @@ rq_starved:
* @bio: bio to allocate request for (can be %NULL) * @bio: bio to allocate request for (can be %NULL)
* @gfp_mask: allocation mask * @gfp_mask: allocation mask
* *
* Get a free request from @q. If %__GFP_WAIT is set in @gfp_mask, this * Get a free request from @q. If %__GFP_DIRECT_RECLAIM is set in @gfp_mask,
* function keeps retrying under memory pressure and fails iff @q is dead. * this function keeps retrying under memory pressure and fails iff @q is dead.
* *
* Must be called with @q->queue_lock held and, * Must be called with @q->queue_lock held and,
* Returns ERR_PTR on failure, with @q->queue_lock held. * Returns ERR_PTR on failure, with @q->queue_lock held.
@ -1227,7 +1227,7 @@ retry:
if (!IS_ERR(rq)) if (!IS_ERR(rq))
return rq; return rq;
if (!(gfp_mask & __GFP_WAIT) || unlikely(blk_queue_dying(q))) { if (!gfpflags_allow_blocking(gfp_mask) || unlikely(blk_queue_dying(q))) {
blk_put_rl(rl); blk_put_rl(rl);
return rq; return rq;
} }
@ -1305,11 +1305,11 @@ EXPORT_SYMBOL(blk_get_request);
* BUG. * BUG.
* *
* WARNING: When allocating/cloning a bio-chain, careful consideration should be * WARNING: When allocating/cloning a bio-chain, careful consideration should be
* given to how you allocate bios. In particular, you cannot use __GFP_WAIT for * given to how you allocate bios. In particular, you cannot use
* anything but the first bio in the chain. Otherwise you risk waiting for IO * __GFP_DIRECT_RECLAIM for anything but the first bio in the chain. Otherwise
* completion of a bio that hasn't been submitted yet, thus resulting in a * you risk waiting for IO completion of a bio that hasn't been submitted yet,
* deadlock. Alternatively bios should be allocated using bio_kmalloc() instead * thus resulting in a deadlock. Alternatively bios should be allocated using
* of bio_alloc(), as that avoids the mempool deadlock. * bio_kmalloc() instead of bio_alloc(), as that avoids the mempool deadlock.
* If possible a big IO should be split into smaller parts when allocation * If possible a big IO should be split into smaller parts when allocation
* fails. Partial allocation should not be an error, or you risk a live-lock. * fails. Partial allocation should not be an error, or you risk a live-lock.
*/ */
@ -2038,7 +2038,7 @@ void generic_make_request(struct bio *bio)
do { do {
struct request_queue *q = bdev_get_queue(bio->bi_bdev); struct request_queue *q = bdev_get_queue(bio->bi_bdev);
if (likely(blk_queue_enter(q, __GFP_WAIT) == 0)) { if (likely(blk_queue_enter(q, __GFP_DIRECT_RECLAIM) == 0)) {
q->make_request_fn(q, bio); q->make_request_fn(q, bio);


@ -289,7 +289,7 @@ struct io_context *get_task_io_context(struct task_struct *task,
{ {
struct io_context *ioc; struct io_context *ioc;
might_sleep_if(gfp_flags & __GFP_WAIT); might_sleep_if(gfpflags_allow_blocking(gfp_flags));
do { do {
task_lock(task); task_lock(task);


@ -268,7 +268,7 @@ static int bt_get(struct blk_mq_alloc_data *data,
if (tag != -1) if (tag != -1)
return tag; return tag;
if (!(data->gfp & __GFP_WAIT)) if (!gfpflags_allow_blocking(data->gfp))
return -1; return -1;
bs = bt_wait_ptr(bt, hctx); bs = bt_wait_ptr(bt, hctx);


@ -244,11 +244,11 @@ struct request *blk_mq_alloc_request(struct request_queue *q, int rw, gfp_t gfp,
ctx = blk_mq_get_ctx(q); ctx = blk_mq_get_ctx(q);
hctx = q->mq_ops->map_queue(q, ctx->cpu); hctx = q->mq_ops->map_queue(q, ctx->cpu);
blk_mq_set_alloc_data(&alloc_data, q, gfp & ~__GFP_WAIT, blk_mq_set_alloc_data(&alloc_data, q, gfp & ~__GFP_DIRECT_RECLAIM,
reserved, ctx, hctx); reserved, ctx, hctx);
rq = __blk_mq_alloc_request(&alloc_data, rw); rq = __blk_mq_alloc_request(&alloc_data, rw);
if (!rq && (gfp & __GFP_WAIT)) { if (!rq && (gfp & __GFP_DIRECT_RECLAIM)) {
__blk_mq_run_hw_queue(hctx); __blk_mq_run_hw_queue(hctx);
blk_mq_put_ctx(ctx); blk_mq_put_ctx(ctx);
@ -1186,7 +1186,7 @@ static struct request *blk_mq_map_request(struct request_queue *q,
ctx = blk_mq_get_ctx(q); ctx = blk_mq_get_ctx(q);
hctx = q->mq_ops->map_queue(q, ctx->cpu); hctx = q->mq_ops->map_queue(q, ctx->cpu);
blk_mq_set_alloc_data(&alloc_data, q, blk_mq_set_alloc_data(&alloc_data, q,
__GFP_WAIT|GFP_ATOMIC, false, ctx, hctx); __GFP_RECLAIM|__GFP_HIGH, false, ctx, hctx);
rq = __blk_mq_alloc_request(&alloc_data, rw); rq = __blk_mq_alloc_request(&alloc_data, rw);
ctx = alloc_data.ctx; ctx = alloc_data.ctx;
hctx = alloc_data.hctx; hctx = alloc_data.hctx;


@ -123,7 +123,8 @@ SYSCALL_DEFINE3(ioprio_set, int, which, int, who, int, ioprio)
break; break;
do_each_thread(g, p) { do_each_thread(g, p) {
if (!uid_eq(task_uid(p), uid)) if (!uid_eq(task_uid(p), uid) ||
!task_pid_vnr(p))
continue; continue;
ret = set_task_ioprio(p, ioprio); ret = set_task_ioprio(p, ioprio);
if (ret) if (ret)
@ -220,7 +221,8 @@ SYSCALL_DEFINE2(ioprio_get, int, which, int, who)
break; break;
do_each_thread(g, p) { do_each_thread(g, p) {
if (!uid_eq(task_uid(p), user->uid)) if (!uid_eq(task_uid(p), user->uid) ||
!task_pid_vnr(p))
continue; continue;
tmpio = get_task_ioprio(p); tmpio = get_task_ioprio(p);
if (tmpio < 0) if (tmpio < 0)


@ -444,7 +444,7 @@ int sg_scsi_ioctl(struct request_queue *q, struct gendisk *disk, fmode_t mode,
} }
rq = blk_get_request(q, in_len ? WRITE : READ, __GFP_WAIT); rq = blk_get_request(q, in_len ? WRITE : READ, __GFP_RECLAIM);
if (IS_ERR(rq)) { if (IS_ERR(rq)) {
err = PTR_ERR(rq); err = PTR_ERR(rq);
goto error_free_buffer; goto error_free_buffer;
@ -495,7 +495,7 @@ int sg_scsi_ioctl(struct request_queue *q, struct gendisk *disk, fmode_t mode,
break; break;
} }
if (bytes && blk_rq_map_kern(q, rq, buffer, bytes, __GFP_WAIT)) { if (bytes && blk_rq_map_kern(q, rq, buffer, bytes, __GFP_RECLAIM)) {
err = DRIVER_ERROR << 24; err = DRIVER_ERROR << 24;
goto error; goto error;
} }
@ -536,7 +536,7 @@ static int __blk_send_generic(struct request_queue *q, struct gendisk *bd_disk,
struct request *rq; struct request *rq;
int err; int err;
rq = blk_get_request(q, WRITE, __GFP_WAIT); rq = blk_get_request(q, WRITE, __GFP_RECLAIM);
if (IS_ERR(rq)) if (IS_ERR(rq))
return PTR_ERR(rq); return PTR_ERR(rq);
blk_rq_set_block_pc(rq); blk_rq_set_block_pc(rq);


@ -1007,7 +1007,7 @@ static void bm_page_io_async(struct drbd_bm_aio_ctx *ctx, int page_nr) __must_ho
bm_set_page_unchanged(b->bm_pages[page_nr]); bm_set_page_unchanged(b->bm_pages[page_nr]);
if (ctx->flags & BM_AIO_COPY_PAGES) { if (ctx->flags & BM_AIO_COPY_PAGES) {
page = mempool_alloc(drbd_md_io_page_pool, __GFP_HIGHMEM|__GFP_WAIT); page = mempool_alloc(drbd_md_io_page_pool, __GFP_HIGHMEM|__GFP_RECLAIM);
copy_highpage(page, b->bm_pages[page_nr]); copy_highpage(page, b->bm_pages[page_nr]);
bm_store_page_idx(page, page_nr); bm_store_page_idx(page, page_nr);
} else } else


@ -357,7 +357,8 @@ drbd_alloc_peer_req(struct drbd_peer_device *peer_device, u64 id, sector_t secto
} }
if (has_payload && data_size) { if (has_payload && data_size) {
page = drbd_alloc_pages(peer_device, nr_pages, (gfp_mask & __GFP_WAIT)); page = drbd_alloc_pages(peer_device, nr_pages,
gfpflags_allow_blocking(gfp_mask));
if (!page) if (!page)
goto fail; goto fail;
} }


@ -173,7 +173,7 @@ static struct mtip_cmd *mtip_get_int_command(struct driver_data *dd)
{ {
struct request *rq; struct request *rq;
rq = blk_mq_alloc_request(dd->queue, 0, __GFP_WAIT, true); rq = blk_mq_alloc_request(dd->queue, 0, __GFP_RECLAIM, true);
return blk_mq_rq_to_pdu(rq); return blk_mq_rq_to_pdu(rq);
} }


@ -444,9 +444,7 @@ static int nbd_thread_recv(struct nbd_device *nbd)
spin_unlock_irqrestore(&nbd->tasks_lock, flags); spin_unlock_irqrestore(&nbd->tasks_lock, flags);
if (signal_pending(current)) { if (signal_pending(current)) {
siginfo_t info; ret = kernel_dequeue_signal(NULL);
ret = dequeue_signal_lock(current, &current->blocked, &info);
dev_warn(nbd_to_dev(nbd), "pid %d, %s, got signal %d\n", dev_warn(nbd_to_dev(nbd), "pid %d, %s, got signal %d\n",
task_pid_nr(current), current->comm, ret); task_pid_nr(current), current->comm, ret);
mutex_lock(&nbd->tx_lock); mutex_lock(&nbd->tx_lock);
@ -560,11 +558,8 @@ static int nbd_thread_send(void *data)
!list_empty(&nbd->waiting_queue)); !list_empty(&nbd->waiting_queue));
if (signal_pending(current)) { if (signal_pending(current)) {
siginfo_t info; int ret = kernel_dequeue_signal(NULL);
int ret;
ret = dequeue_signal_lock(current, &current->blocked,
&info);
dev_warn(nbd_to_dev(nbd), "pid %d, %s, got signal %d\n", dev_warn(nbd_to_dev(nbd), "pid %d, %s, got signal %d\n",
task_pid_nr(current), current->comm, ret); task_pid_nr(current), current->comm, ret);
mutex_lock(&nbd->tx_lock); mutex_lock(&nbd->tx_lock);
@ -592,10 +587,8 @@ static int nbd_thread_send(void *data)
spin_unlock_irqrestore(&nbd->tasks_lock, flags); spin_unlock_irqrestore(&nbd->tasks_lock, flags);
/* Clear maybe pending signals */ /* Clear maybe pending signals */
if (signal_pending(current)) { if (signal_pending(current))
siginfo_t info; kernel_dequeue_signal(NULL);
dequeue_signal_lock(current, &current->blocked, &info);
}
return 0; return 0;
} }


@ -271,7 +271,7 @@ static struct bio *bio_chain_clone(struct bio *old_chain, gfp_t gfpmask)
goto err_out; goto err_out;
tmp->bi_bdev = NULL; tmp->bi_bdev = NULL;
gfpmask &= ~__GFP_WAIT; gfpmask &= ~__GFP_DIRECT_RECLAIM;
tmp->bi_next = NULL; tmp->bi_next = NULL;
if (!new_chain) if (!new_chain)


@ -723,7 +723,7 @@ static int pd_special_command(struct pd_unit *disk,
struct request *rq; struct request *rq;
int err = 0; int err = 0;
rq = blk_get_request(disk->gd->queue, READ, __GFP_WAIT); rq = blk_get_request(disk->gd->queue, READ, __GFP_RECLAIM);
if (IS_ERR(rq)) if (IS_ERR(rq))
return PTR_ERR(rq); return PTR_ERR(rq);


@ -704,14 +704,14 @@ static int pkt_generic_packet(struct pktcdvd_device *pd, struct packet_command *
int ret = 0; int ret = 0;
rq = blk_get_request(q, (cgc->data_direction == CGC_DATA_WRITE) ? rq = blk_get_request(q, (cgc->data_direction == CGC_DATA_WRITE) ?
WRITE : READ, __GFP_WAIT); WRITE : READ, __GFP_RECLAIM);
if (IS_ERR(rq)) if (IS_ERR(rq))
return PTR_ERR(rq); return PTR_ERR(rq);
blk_rq_set_block_pc(rq); blk_rq_set_block_pc(rq);
if (cgc->buflen) { if (cgc->buflen) {
ret = blk_rq_map_kern(q, rq, cgc->buffer, cgc->buflen, ret = blk_rq_map_kern(q, rq, cgc->buffer, cgc->buflen,
__GFP_WAIT); __GFP_RECLAIM);
if (ret) if (ret)
goto out; goto out;
} }


@ -106,7 +106,7 @@ static void zram_set_obj_size(struct zram_meta *meta,
meta->table[index].value = (flags << ZRAM_FLAG_SHIFT) | size; meta->table[index].value = (flags << ZRAM_FLAG_SHIFT) | size;
} }
static inline int is_partial_io(struct bio_vec *bvec) static inline bool is_partial_io(struct bio_vec *bvec)
{ {
return bvec->bv_len != PAGE_SIZE; return bvec->bv_len != PAGE_SIZE;
} }
@ -114,25 +114,25 @@ static inline int is_partial_io(struct bio_vec *bvec)
/* /*
* Check if request is within bounds and aligned on zram logical blocks. * Check if request is within bounds and aligned on zram logical blocks.
*/ */
static inline int valid_io_request(struct zram *zram, static inline bool valid_io_request(struct zram *zram,
sector_t start, unsigned int size) sector_t start, unsigned int size)
{ {
u64 end, bound; u64 end, bound;
/* unaligned request */ /* unaligned request */
if (unlikely(start & (ZRAM_SECTOR_PER_LOGICAL_BLOCK - 1))) if (unlikely(start & (ZRAM_SECTOR_PER_LOGICAL_BLOCK - 1)))
return 0; return false;
if (unlikely(size & (ZRAM_LOGICAL_BLOCK_SIZE - 1))) if (unlikely(size & (ZRAM_LOGICAL_BLOCK_SIZE - 1)))
return 0; return false;
end = start + (size >> SECTOR_SHIFT); end = start + (size >> SECTOR_SHIFT);
bound = zram->disksize >> SECTOR_SHIFT; bound = zram->disksize >> SECTOR_SHIFT;
/* out of range range */ /* out of range range */
if (unlikely(start >= bound || end > bound || start > end)) if (unlikely(start >= bound || end > bound || start > end))
return 0; return false;
/* I/O request is valid */ /* I/O request is valid */
return 1; return true;
} }
static void update_position(u32 *index, int *offset, struct bio_vec *bvec) static void update_position(u32 *index, int *offset, struct bio_vec *bvec)
@ -157,7 +157,7 @@ static inline void update_used_max(struct zram *zram,
} while (old_max != cur_max); } while (old_max != cur_max);
} }
static int page_zero_filled(void *ptr) static bool page_zero_filled(void *ptr)
{ {
unsigned int pos; unsigned int pos;
unsigned long *page; unsigned long *page;
@ -166,10 +166,10 @@ static int page_zero_filled(void *ptr)
for (pos = 0; pos != PAGE_SIZE / sizeof(*page); pos++) { for (pos = 0; pos != PAGE_SIZE / sizeof(*page); pos++) {
if (page[pos]) if (page[pos])
return 0; return false;
} }
return 1; return true;
} }
static void handle_zero_page(struct bio_vec *bvec) static void handle_zero_page(struct bio_vec *bvec)
@ -365,6 +365,9 @@ static ssize_t comp_algorithm_store(struct device *dev,
struct zram *zram = dev_to_zram(dev); struct zram *zram = dev_to_zram(dev);
size_t sz; size_t sz;
if (!zcomp_available_algorithm(buf))
return -EINVAL;
down_write(&zram->init_lock); down_write(&zram->init_lock);
if (init_done(zram)) { if (init_done(zram)) {
up_write(&zram->init_lock); up_write(&zram->init_lock);
@ -378,9 +381,6 @@ static ssize_t comp_algorithm_store(struct device *dev,
if (sz > 0 && zram->compressor[sz - 1] == '\n') if (sz > 0 && zram->compressor[sz - 1] == '\n')
zram->compressor[sz - 1] = 0x00; zram->compressor[sz - 1] = 0x00;
if (!zcomp_available_algorithm(zram->compressor))
len = -EINVAL;
up_write(&zram->init_lock); up_write(&zram->init_lock);
return len; return len;
} }
@ -726,14 +726,14 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
} }
alloced_pages = zs_get_total_pages(meta->mem_pool); alloced_pages = zs_get_total_pages(meta->mem_pool);
update_used_max(zram, alloced_pages);
if (zram->limit_pages && alloced_pages > zram->limit_pages) { if (zram->limit_pages && alloced_pages > zram->limit_pages) {
zs_free(meta->mem_pool, handle); zs_free(meta->mem_pool, handle);
ret = -ENOMEM; ret = -ENOMEM;
goto out; goto out;
} }
update_used_max(zram, alloced_pages);
cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_WO); cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_WO);
if ((clen == PAGE_SIZE) && !is_partial_io(bvec)) { if ((clen == PAGE_SIZE) && !is_partial_io(bvec)) {


@ -124,7 +124,8 @@ int cn_netlink_send_mult(struct cn_msg *msg, u16 len, u32 portid, u32 __group,
if (group) if (group)
return netlink_broadcast(dev->nls, skb, portid, group, return netlink_broadcast(dev->nls, skb, portid, group,
gfp_mask); gfp_mask);
return netlink_unicast(dev->nls, skb, portid, !(gfp_mask&__GFP_WAIT)); return netlink_unicast(dev->nls, skb, portid,
!gfpflags_allow_blocking(gfp_mask));
} }
EXPORT_SYMBOL_GPL(cn_netlink_send_mult); EXPORT_SYMBOL_GPL(cn_netlink_send_mult);


@ -486,7 +486,7 @@ static int ioctl_get_info(struct client *client, union ioctl_arg *arg)
static int add_client_resource(struct client *client, static int add_client_resource(struct client *client,
struct client_resource *resource, gfp_t gfp_mask) struct client_resource *resource, gfp_t gfp_mask)
{ {
bool preload = !!(gfp_mask & __GFP_WAIT); bool preload = gfpflags_allow_blocking(gfp_mask);
unsigned long flags; unsigned long flags;
int ret; int ret;


@ -491,7 +491,7 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj)
* __GFP_DMA32 to be set in mapping_gfp_mask(inode->i_mapping) * __GFP_DMA32 to be set in mapping_gfp_mask(inode->i_mapping)
* so shmem can relocate pages during swapin if required. * so shmem can relocate pages during swapin if required.
*/ */
BUG_ON((mapping_gfp_mask(mapping) & __GFP_DMA32) && BUG_ON(mapping_gfp_constraint(mapping, __GFP_DMA32) &&
(page_to_pfn(p) >= 0x00100000UL)); (page_to_pfn(p) >= 0x00100000UL));
} }


@ -38,8 +38,6 @@
#include "drm_legacy.h" #include "drm_legacy.h"
#include "drm_internal.h" #include "drm_internal.h"
static int drm_notifier(void *priv);
static int drm_lock_take(struct drm_lock_data *lock_data, unsigned int context); static int drm_lock_take(struct drm_lock_data *lock_data, unsigned int context);
/** /**
@ -118,14 +116,8 @@ int drm_legacy_lock(struct drm_device *dev, void *data,
* really probably not the correct answer but lets us debug xkb * really probably not the correct answer but lets us debug xkb
* xserver for now */ * xserver for now */
if (!file_priv->is_master) { if (!file_priv->is_master) {
sigemptyset(&dev->sigmask);
sigaddset(&dev->sigmask, SIGSTOP);
sigaddset(&dev->sigmask, SIGTSTP);
sigaddset(&dev->sigmask, SIGTTIN);
sigaddset(&dev->sigmask, SIGTTOU);
dev->sigdata.context = lock->context; dev->sigdata.context = lock->context;
dev->sigdata.lock = master->lock.hw_lock; dev->sigdata.lock = master->lock.hw_lock;
block_all_signals(drm_notifier, dev, &dev->sigmask);
} }
if (dev->driver->dma_quiescent && (lock->flags & _DRM_LOCK_QUIESCENT)) if (dev->driver->dma_quiescent && (lock->flags & _DRM_LOCK_QUIESCENT))
@ -169,7 +161,6 @@ int drm_legacy_unlock(struct drm_device *dev, void *data, struct drm_file *file_
/* FIXME: Should really bail out here. */ /* FIXME: Should really bail out here. */
} }
unblock_all_signals();
return 0; return 0;
} }
@ -287,38 +278,6 @@ int drm_legacy_lock_free(struct drm_lock_data *lock_data, unsigned int context)
return 0; return 0;
} }
/**
* If we get here, it means that the process has called DRM_IOCTL_LOCK
* without calling DRM_IOCTL_UNLOCK.
*
* If the lock is not held, then let the signal proceed as usual. If the lock
* is held, then set the contended flag and keep the signal blocked.
*
* \param priv pointer to a drm_device structure.
* \return one if the signal should be delivered normally, or zero if the
* signal should be blocked.
*/
static int drm_notifier(void *priv)
{
struct drm_device *dev = priv;
struct drm_hw_lock *lock = dev->sigdata.lock;
unsigned int old, new, prev;
/* Allow signal delivery if lock isn't held */
if (!lock || !_DRM_LOCK_IS_HELD(lock->lock)
|| _DRM_LOCKING_CONTEXT(lock->lock) != dev->sigdata.context)
return 1;
/* Otherwise, set flag to force call to
drmUnlock */
do {
old = lock->lock;
new = old | _DRM_LOCK_CONT;
prev = cmpxchg(&lock->lock, old, new);
} while (prev != old);
return 0;
}
/** /**
* This function returns immediately and takes the hw lock * This function returns immediately and takes the hw lock
* with the kernel context if it is free, otherwise it gets the highest priority when and if * with the kernel context if it is free, otherwise it gets the highest priority when and if


@ -2214,9 +2214,8 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
* Fail silently without starting the shrinker * Fail silently without starting the shrinker
*/ */
mapping = file_inode(obj->base.filp)->i_mapping; mapping = file_inode(obj->base.filp)->i_mapping;
gfp = mapping_gfp_mask(mapping); gfp = mapping_gfp_constraint(mapping, ~(__GFP_IO | __GFP_RECLAIM));
gfp |= __GFP_NORETRY | __GFP_NOWARN | __GFP_NO_KSWAPD; gfp |= __GFP_NORETRY | __GFP_NOWARN;
gfp &= ~(__GFP_IO | __GFP_WAIT);
sg = st->sgl; sg = st->sgl;
st->nents = 0; st->nents = 0;
for (i = 0; i < page_count; i++) { for (i = 0; i < page_count; i++) {


@ -92,7 +92,7 @@ int ide_queue_pc_tail(ide_drive_t *drive, struct gendisk *disk,
struct request *rq; struct request *rq;
int error; int error;
rq = blk_get_request(drive->queue, READ, __GFP_WAIT); rq = blk_get_request(drive->queue, READ, __GFP_RECLAIM);
rq->cmd_type = REQ_TYPE_DRV_PRIV; rq->cmd_type = REQ_TYPE_DRV_PRIV;
rq->special = (char *)pc; rq->special = (char *)pc;


@ -441,7 +441,7 @@ int ide_cd_queue_pc(ide_drive_t *drive, const unsigned char *cmd,
struct request *rq; struct request *rq;
int error; int error;
rq = blk_get_request(drive->queue, write, __GFP_WAIT); rq = blk_get_request(drive->queue, write, __GFP_RECLAIM);
memcpy(rq->cmd, cmd, BLK_MAX_CDB); memcpy(rq->cmd, cmd, BLK_MAX_CDB);
rq->cmd_type = REQ_TYPE_ATA_PC; rq->cmd_type = REQ_TYPE_ATA_PC;


@ -303,7 +303,7 @@ int ide_cdrom_reset(struct cdrom_device_info *cdi)
struct request *rq; struct request *rq;
int ret; int ret;
rq = blk_get_request(drive->queue, READ, __GFP_WAIT); rq = blk_get_request(drive->queue, READ, __GFP_RECLAIM);
rq->cmd_type = REQ_TYPE_DRV_PRIV; rq->cmd_type = REQ_TYPE_DRV_PRIV;
rq->cmd_flags = REQ_QUIET; rq->cmd_flags = REQ_QUIET;
ret = blk_execute_rq(drive->queue, cd->disk, rq, 0); ret = blk_execute_rq(drive->queue, cd->disk, rq, 0);


@ -165,7 +165,7 @@ int ide_devset_execute(ide_drive_t *drive, const struct ide_devset *setting,
if (!(setting->flags & DS_SYNC)) if (!(setting->flags & DS_SYNC))
return setting->set(drive, arg); return setting->set(drive, arg);
rq = blk_get_request(q, READ, __GFP_WAIT); rq = blk_get_request(q, READ, __GFP_RECLAIM);
rq->cmd_type = REQ_TYPE_DRV_PRIV; rq->cmd_type = REQ_TYPE_DRV_PRIV;
rq->cmd_len = 5; rq->cmd_len = 5;
rq->cmd[0] = REQ_DEVSET_EXEC; rq->cmd[0] = REQ_DEVSET_EXEC;


@ -477,7 +477,7 @@ static int set_multcount(ide_drive_t *drive, int arg)
if (drive->special_flags & IDE_SFLAG_SET_MULTMODE) if (drive->special_flags & IDE_SFLAG_SET_MULTMODE)
return -EBUSY; return -EBUSY;
rq = blk_get_request(drive->queue, READ, __GFP_WAIT); rq = blk_get_request(drive->queue, READ, __GFP_RECLAIM);
rq->cmd_type = REQ_TYPE_ATA_TASKFILE; rq->cmd_type = REQ_TYPE_ATA_TASKFILE;
drive->mult_req = arg; drive->mult_req = arg;


@ -125,7 +125,7 @@ static int ide_cmd_ioctl(ide_drive_t *drive, unsigned long arg)
if (NULL == (void *) arg) { if (NULL == (void *) arg) {
struct request *rq; struct request *rq;
rq = blk_get_request(drive->queue, READ, __GFP_WAIT); rq = blk_get_request(drive->queue, READ, __GFP_RECLAIM);
rq->cmd_type = REQ_TYPE_ATA_TASKFILE; rq->cmd_type = REQ_TYPE_ATA_TASKFILE;
err = blk_execute_rq(drive->queue, NULL, rq, 0); err = blk_execute_rq(drive->queue, NULL, rq, 0);
blk_put_request(rq); blk_put_request(rq);
@ -221,7 +221,7 @@ static int generic_drive_reset(ide_drive_t *drive)
struct request *rq; struct request *rq;
int ret = 0; int ret = 0;
rq = blk_get_request(drive->queue, READ, __GFP_WAIT); rq = blk_get_request(drive->queue, READ, __GFP_RECLAIM);
rq->cmd_type = REQ_TYPE_DRV_PRIV; rq->cmd_type = REQ_TYPE_DRV_PRIV;
rq->cmd_len = 1; rq->cmd_len = 1;
rq->cmd[0] = REQ_DRIVE_RESET; rq->cmd[0] = REQ_DRIVE_RESET;


@ -31,7 +31,7 @@ static void issue_park_cmd(ide_drive_t *drive, unsigned long timeout)
} }
spin_unlock_irq(&hwif->lock); spin_unlock_irq(&hwif->lock);
rq = blk_get_request(q, READ, __GFP_WAIT); rq = blk_get_request(q, READ, __GFP_RECLAIM);
rq->cmd[0] = REQ_PARK_HEADS; rq->cmd[0] = REQ_PARK_HEADS;
rq->cmd_len = 1; rq->cmd_len = 1;
rq->cmd_type = REQ_TYPE_DRV_PRIV; rq->cmd_type = REQ_TYPE_DRV_PRIV;


@ -18,7 +18,7 @@ int generic_ide_suspend(struct device *dev, pm_message_t mesg)
} }
memset(&rqpm, 0, sizeof(rqpm)); memset(&rqpm, 0, sizeof(rqpm));
rq = blk_get_request(drive->queue, READ, __GFP_WAIT); rq = blk_get_request(drive->queue, READ, __GFP_RECLAIM);
rq->cmd_type = REQ_TYPE_ATA_PM_SUSPEND; rq->cmd_type = REQ_TYPE_ATA_PM_SUSPEND;
rq->special = &rqpm; rq->special = &rqpm;
rqpm.pm_step = IDE_PM_START_SUSPEND; rqpm.pm_step = IDE_PM_START_SUSPEND;
@ -88,7 +88,7 @@ int generic_ide_resume(struct device *dev)
} }
memset(&rqpm, 0, sizeof(rqpm)); memset(&rqpm, 0, sizeof(rqpm));
rq = blk_get_request(drive->queue, READ, __GFP_WAIT); rq = blk_get_request(drive->queue, READ, __GFP_RECLAIM);
rq->cmd_type = REQ_TYPE_ATA_PM_RESUME; rq->cmd_type = REQ_TYPE_ATA_PM_RESUME;
rq->cmd_flags |= REQ_PREEMPT; rq->cmd_flags |= REQ_PREEMPT;
rq->special = &rqpm; rq->special = &rqpm;


@ -852,7 +852,7 @@ static int idetape_queue_rw_tail(ide_drive_t *drive, int cmd, int size)
BUG_ON(cmd != REQ_IDETAPE_READ && cmd != REQ_IDETAPE_WRITE); BUG_ON(cmd != REQ_IDETAPE_READ && cmd != REQ_IDETAPE_WRITE);
BUG_ON(size < 0 || size % tape->blk_size); BUG_ON(size < 0 || size % tape->blk_size);
rq = blk_get_request(drive->queue, READ, __GFP_WAIT); rq = blk_get_request(drive->queue, READ, __GFP_RECLAIM);
rq->cmd_type = REQ_TYPE_DRV_PRIV; rq->cmd_type = REQ_TYPE_DRV_PRIV;
rq->cmd[13] = cmd; rq->cmd[13] = cmd;
rq->rq_disk = tape->disk; rq->rq_disk = tape->disk;
@ -860,7 +860,7 @@ static int idetape_queue_rw_tail(ide_drive_t *drive, int cmd, int size)
if (size) { if (size) {
ret = blk_rq_map_kern(drive->queue, rq, tape->buf, size, ret = blk_rq_map_kern(drive->queue, rq, tape->buf, size,
__GFP_WAIT); __GFP_RECLAIM);
if (ret) if (ret)
goto out_put; goto out_put;
} }


@ -430,7 +430,7 @@ int ide_raw_taskfile(ide_drive_t *drive, struct ide_cmd *cmd, u8 *buf,
int error; int error;
int rw = !(cmd->tf_flags & IDE_TFLAG_WRITE) ? READ : WRITE; int rw = !(cmd->tf_flags & IDE_TFLAG_WRITE) ? READ : WRITE;
rq = blk_get_request(drive->queue, rw, __GFP_WAIT); rq = blk_get_request(drive->queue, rw, __GFP_RECLAIM);
rq->cmd_type = REQ_TYPE_ATA_TASKFILE; rq->cmd_type = REQ_TYPE_ATA_TASKFILE;
/* /*
@ -441,7 +441,7 @@ int ide_raw_taskfile(ide_drive_t *drive, struct ide_cmd *cmd, u8 *buf,
*/ */
if (nsect) { if (nsect) {
error = blk_rq_map_kern(drive->queue, rq, buf, error = blk_rq_map_kern(drive->queue, rq, buf,
nsect * SECTOR_SIZE, __GFP_WAIT); nsect * SECTOR_SIZE, __GFP_RECLAIM);
if (error) if (error)
goto put_req; goto put_req;
} }


@ -1086,7 +1086,7 @@ static void init_mad(struct ib_sa_mad *mad, struct ib_mad_agent *agent)
static int send_mad(struct ib_sa_query *query, int timeout_ms, gfp_t gfp_mask) static int send_mad(struct ib_sa_query *query, int timeout_ms, gfp_t gfp_mask)
{ {
bool preload = !!(gfp_mask & __GFP_WAIT); bool preload = gfpflags_allow_blocking(gfp_mask);
unsigned long flags; unsigned long flags;
int ret, id; int ret, id;


@ -1680,7 +1680,7 @@ int qib_setup_eagerbufs(struct qib_ctxtdata *rcd)
* heavy filesystem activity makes these fail, and we can * heavy filesystem activity makes these fail, and we can
* use compound pages. * use compound pages.
*/ */
gfp_flags = __GFP_WAIT | __GFP_IO | __GFP_COMP; gfp_flags = __GFP_RECLAIM | __GFP_IO | __GFP_COMP;
egrcnt = rcd->rcvegrcnt; egrcnt = rcd->rcvegrcnt;
egroff = rcd->rcvegr_tid_base; egroff = rcd->rcvegr_tid_base;


@ -2668,7 +2668,7 @@ static void *alloc_coherent(struct device *dev, size_t size,
page = alloc_pages(flag | __GFP_NOWARN, get_order(size)); page = alloc_pages(flag | __GFP_NOWARN, get_order(size));
if (!page) { if (!page) {
if (!(flag & __GFP_WAIT)) if (!gfpflags_allow_blocking(flag))
return NULL; return NULL;
page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT, page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,


@ -3647,7 +3647,7 @@ static void *intel_alloc_coherent(struct device *dev, size_t size,
flags |= GFP_DMA32; flags |= GFP_DMA32;
} }
if (flags & __GFP_WAIT) { if (gfpflags_allow_blocking(flags)) {
unsigned int count = size >> PAGE_SHIFT; unsigned int count = size >> PAGE_SHIFT;
page = dma_alloc_from_contiguous(dev, count, order); page = dma_alloc_from_contiguous(dev, count, order);


@ -994,7 +994,7 @@ static struct bio *crypt_alloc_buffer(struct dm_crypt_io *io, unsigned size)
struct bio_vec *bvec; struct bio_vec *bvec;
retry: retry:
if (unlikely(gfp_mask & __GFP_WAIT)) if (unlikely(gfp_mask & __GFP_DIRECT_RECLAIM))
mutex_lock(&cc->bio_alloc_lock); mutex_lock(&cc->bio_alloc_lock);
clone = bio_alloc_bioset(GFP_NOIO, nr_iovecs, cc->bs); clone = bio_alloc_bioset(GFP_NOIO, nr_iovecs, cc->bs);
@ -1010,7 +1010,7 @@ retry:
if (!page) { if (!page) {
crypt_free_buffer_pages(cc, clone); crypt_free_buffer_pages(cc, clone);
bio_put(clone); bio_put(clone);
gfp_mask |= __GFP_WAIT; gfp_mask |= __GFP_DIRECT_RECLAIM;
goto retry; goto retry;
} }
@ -1027,7 +1027,7 @@ retry:
} }
return_clone: return_clone:
if (unlikely(gfp_mask & __GFP_WAIT)) if (unlikely(gfp_mask & __GFP_DIRECT_RECLAIM))
mutex_unlock(&cc->bio_alloc_lock); mutex_unlock(&cc->bio_alloc_lock);
return clone; return clone;


@ -244,7 +244,7 @@ static int kcopyd_get_pages(struct dm_kcopyd_client *kc,
*pages = NULL; *pages = NULL;
do { do {
pl = alloc_pl(__GFP_NOWARN | __GFP_NORETRY); pl = alloc_pl(__GFP_NOWARN | __GFP_NORETRY | __GFP_KSWAPD_RECLAIM);
if (unlikely(!pl)) { if (unlikely(!pl)) {
/* Use reserved pages */ /* Use reserved pages */
pl = kc->pages; pl = kc->pages;


@ -1297,7 +1297,7 @@ static struct solo_enc_dev *solo_enc_alloc(struct solo_dev *solo_dev,
solo_enc->vidq.ops = &solo_enc_video_qops; solo_enc->vidq.ops = &solo_enc_video_qops;
solo_enc->vidq.mem_ops = &vb2_dma_sg_memops; solo_enc->vidq.mem_ops = &vb2_dma_sg_memops;
solo_enc->vidq.drv_priv = solo_enc; solo_enc->vidq.drv_priv = solo_enc;
solo_enc->vidq.gfp_flags = __GFP_DMA32; solo_enc->vidq.gfp_flags = __GFP_DMA32 | __GFP_KSWAPD_RECLAIM;
solo_enc->vidq.timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC; solo_enc->vidq.timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
solo_enc->vidq.buf_struct_size = sizeof(struct solo_vb2_buf); solo_enc->vidq.buf_struct_size = sizeof(struct solo_vb2_buf);
solo_enc->vidq.lock = &solo_enc->lock; solo_enc->vidq.lock = &solo_enc->lock;


@ -678,7 +678,7 @@ int solo_v4l2_init(struct solo_dev *solo_dev, unsigned nr)
solo_dev->vidq.mem_ops = &vb2_dma_contig_memops; solo_dev->vidq.mem_ops = &vb2_dma_contig_memops;
solo_dev->vidq.drv_priv = solo_dev; solo_dev->vidq.drv_priv = solo_dev;
solo_dev->vidq.timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC; solo_dev->vidq.timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
solo_dev->vidq.gfp_flags = __GFP_DMA32; solo_dev->vidq.gfp_flags = __GFP_DMA32 | __GFP_KSWAPD_RECLAIM;
solo_dev->vidq.buf_struct_size = sizeof(struct solo_vb2_buf); solo_dev->vidq.buf_struct_size = sizeof(struct solo_vb2_buf);
solo_dev->vidq.lock = &solo_dev->lock; solo_dev->vidq.lock = &solo_dev->lock;
ret = vb2_queue_init(&solo_dev->vidq); ret = vb2_queue_init(&solo_dev->vidq);


@ -979,7 +979,7 @@ int tw68_video_init2(struct tw68_dev *dev, int video_nr)
dev->vidq.ops = &tw68_video_qops; dev->vidq.ops = &tw68_video_qops;
dev->vidq.mem_ops = &vb2_dma_sg_memops; dev->vidq.mem_ops = &vb2_dma_sg_memops;
dev->vidq.drv_priv = dev; dev->vidq.drv_priv = dev;
dev->vidq.gfp_flags = __GFP_DMA32; dev->vidq.gfp_flags = __GFP_DMA32 | __GFP_KSWAPD_RECLAIM;
dev->vidq.buf_struct_size = sizeof(struct tw68_buf); dev->vidq.buf_struct_size = sizeof(struct tw68_buf);
dev->vidq.lock = &dev->lock; dev->vidq.lock = &dev->lock;
dev->vidq.min_buffers_needed = 2; dev->vidq.min_buffers_needed = 2;


@ -75,7 +75,7 @@ MODULE_LICENSE("GPL");
/* /*
* Use __GFP_HIGHMEM to allow pages from HIGHMEM zone. We don't * Use __GFP_HIGHMEM to allow pages from HIGHMEM zone. We don't
* allow wait (__GFP_WAIT) for NOSLEEP page allocations. Use * allow wait (__GFP_RECLAIM) for NOSLEEP page allocations. Use
* __GFP_NOWARN, to suppress page allocation failure warnings. * __GFP_NOWARN, to suppress page allocation failure warnings.
*/ */
#define VMW_PAGE_ALLOC_NOSLEEP (__GFP_HIGHMEM|__GFP_NOWARN) #define VMW_PAGE_ALLOC_NOSLEEP (__GFP_HIGHMEM|__GFP_NOWARN)


@ -1216,8 +1216,7 @@ EXPORT_SYMBOL_GPL(mtd_writev);
*/ */
void *mtd_kmalloc_up_to(const struct mtd_info *mtd, size_t *size) void *mtd_kmalloc_up_to(const struct mtd_info *mtd, size_t *size)
{ {
gfp_t flags = __GFP_NOWARN | __GFP_WAIT | gfp_t flags = __GFP_NOWARN | __GFP_DIRECT_RECLAIM | __GFP_NORETRY;
__GFP_NORETRY | __GFP_NO_KSWAPD;
size_t min_alloc = max_t(size_t, mtd->writesize, PAGE_SIZE); size_t min_alloc = max_t(size_t, mtd->writesize, PAGE_SIZE);
void *kbuf; void *kbuf;


@ -691,7 +691,7 @@ static void *bnx2x_frag_alloc(const struct bnx2x_fastpath *fp, gfp_t gfp_mask)
{ {
if (fp->rx_frag_size) { if (fp->rx_frag_size) {
/* GFP_KERNEL allocations are used only during initialization */ /* GFP_KERNEL allocations are used only during initialization */
if (unlikely(gfp_mask & __GFP_WAIT)) if (unlikely(gfpflags_allow_blocking(gfp_mask)))
return (void *)__get_free_page(gfp_mask); return (void *)__get_free_page(gfp_mask);
return netdev_alloc_frag(fp->rx_frag_size); return netdev_alloc_frag(fp->rx_frag_size);


@ -1025,11 +1025,13 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
req->special = (void *)0; req->special = (void *)0;
if (buffer && bufflen) { if (buffer && bufflen) {
ret = blk_rq_map_kern(q, req, buffer, bufflen, __GFP_WAIT); ret = blk_rq_map_kern(q, req, buffer, bufflen,
__GFP_DIRECT_RECLAIM);
if (ret) if (ret)
goto out; goto out;
} else if (ubuffer && bufflen) { } else if (ubuffer && bufflen) {
ret = blk_rq_map_user(q, req, NULL, ubuffer, bufflen, __GFP_WAIT); ret = blk_rq_map_user(q, req, NULL, ubuffer, bufflen,
__GFP_DIRECT_RECLAIM);
if (ret) if (ret)
goto out; goto out;
bio = req->bio; bio = req->bio;


@ -1970,7 +1970,7 @@ static void scsi_eh_lock_door(struct scsi_device *sdev)
struct request *req; struct request *req;
/* /*
* blk_get_request with GFP_KERNEL (__GFP_WAIT) sleeps until a * blk_get_request with GFP_KERNEL (__GFP_RECLAIM) sleeps until a
* request becomes available * request becomes available
*/ */
req = blk_get_request(sdev->request_queue, READ, GFP_KERNEL); req = blk_get_request(sdev->request_queue, READ, GFP_KERNEL);


@ -222,13 +222,13 @@ int scsi_execute(struct scsi_device *sdev, const unsigned char *cmd,
int write = (data_direction == DMA_TO_DEVICE); int write = (data_direction == DMA_TO_DEVICE);
int ret = DRIVER_ERROR << 24; int ret = DRIVER_ERROR << 24;
req = blk_get_request(sdev->request_queue, write, __GFP_WAIT); req = blk_get_request(sdev->request_queue, write, __GFP_RECLAIM);
if (IS_ERR(req)) if (IS_ERR(req))
return ret; return ret;
blk_rq_set_block_pc(req); blk_rq_set_block_pc(req);
if (bufflen && blk_rq_map_kern(sdev->request_queue, req, if (bufflen && blk_rq_map_kern(sdev->request_queue, req,
buffer, bufflen, __GFP_WAIT)) buffer, bufflen, __GFP_RECLAIM))
goto out; goto out;
req->cmd_len = COMMAND_SIZE(cmd[0]); req->cmd_len = COMMAND_SIZE(cmd[0]);


@ -27,7 +27,7 @@
#include "ion_priv.h" #include "ion_priv.h"
static gfp_t high_order_gfp_flags = (GFP_HIGHUSER | __GFP_ZERO | __GFP_NOWARN | static gfp_t high_order_gfp_flags = (GFP_HIGHUSER | __GFP_ZERO | __GFP_NOWARN |
__GFP_NORETRY) & ~__GFP_WAIT; __GFP_NORETRY) & ~__GFP_DIRECT_RECLAIM;
static gfp_t low_order_gfp_flags = (GFP_HIGHUSER | __GFP_ZERO | __GFP_NOWARN); static gfp_t low_order_gfp_flags = (GFP_HIGHUSER | __GFP_ZERO | __GFP_NOWARN);
static const unsigned int orders[] = {8, 4, 0}; static const unsigned int orders[] = {8, 4, 0};
static const int num_orders = ARRAY_SIZE(orders); static const int num_orders = ARRAY_SIZE(orders);


@ -95,7 +95,7 @@ do { \
do { \ do { \
LASSERT(!in_interrupt() || \ LASSERT(!in_interrupt() || \
((size) <= LIBCFS_VMALLOC_SIZE && \ ((size) <= LIBCFS_VMALLOC_SIZE && \
((mask) & __GFP_WAIT) == 0)); \ !gfpflags_allow_blocking(mask))); \
} while (0) } while (0)
#define LIBCFS_ALLOC_POST(ptr, size) \ #define LIBCFS_ALLOC_POST(ptr, size) \


@ -1245,7 +1245,7 @@ lnet_new_rtrbuf(lnet_rtrbufpool_t *rbp, int cpt)
for (i = 0; i < npages; i++) { for (i = 0; i < npages; i++) {
page = alloc_pages_node( page = alloc_pages_node(
cfs_cpt_spread_node(lnet_cpt_table(), cpt), cfs_cpt_spread_node(lnet_cpt_table(), cpt),
__GFP_ZERO | GFP_IOFS, 0); GFP_KERNEL | __GFP_ZERO, 0);
if (page == NULL) { if (page == NULL) {
while (--i >= 0) while (--i >= 0)
__free_page(rb->rb_kiov[i].kiov_page); __free_page(rb->rb_kiov[i].kiov_page);


@ -860,7 +860,7 @@ lstcon_testrpc_prep(lstcon_node_t *nd, int transop, unsigned feats,
bulk->bk_iovs[i].kiov_offset = 0; bulk->bk_iovs[i].kiov_offset = 0;
bulk->bk_iovs[i].kiov_len = len; bulk->bk_iovs[i].kiov_len = len;
bulk->bk_iovs[i].kiov_page = bulk->bk_iovs[i].kiov_page =
alloc_page(GFP_IOFS); alloc_page(GFP_KERNEL);
if (bulk->bk_iovs[i].kiov_page == NULL) { if (bulk->bk_iovs[i].kiov_page == NULL) {
lstcon_rpc_put(*crpc); lstcon_rpc_put(*crpc);


@ -146,7 +146,7 @@ srpc_alloc_bulk(int cpt, unsigned bulk_npg, unsigned bulk_len, int sink)
int nob; int nob;
pg = alloc_pages_node(cfs_cpt_spread_node(lnet_cpt_table(), cpt), pg = alloc_pages_node(cfs_cpt_spread_node(lnet_cpt_table(), cpt),
GFP_IOFS, 0); GFP_KERNEL, 0);
if (pg == NULL) { if (pg == NULL) {
CERROR("Can't allocate page %d of %d\n", i, bulk_npg); CERROR("Can't allocate page %d of %d\n", i, bulk_npg);
srpc_free_bulk(bk); srpc_free_bulk(bk);


@ -319,7 +319,7 @@ static int libcfs_ioctl(struct cfs_psdev_file *pfile, unsigned long cmd, void *a
struct libcfs_ioctl_data *data; struct libcfs_ioctl_data *data;
int err = 0; int err = 0;
LIBCFS_ALLOC_GFP(buf, 1024, GFP_IOFS); LIBCFS_ALLOC_GFP(buf, 1024, GFP_KERNEL);
if (buf == NULL) if (buf == NULL)
return -ENOMEM; return -ENOMEM;


@ -810,7 +810,7 @@ int cfs_trace_allocate_string_buffer(char **str, int nob)
if (nob > 2 * PAGE_CACHE_SIZE) /* string must be "sensible" */ if (nob > 2 * PAGE_CACHE_SIZE) /* string must be "sensible" */
return -EINVAL; return -EINVAL;
*str = kmalloc(nob, GFP_IOFS | __GFP_ZERO); *str = kmalloc(nob, GFP_KERNEL | __GFP_ZERO);
if (*str == NULL) if (*str == NULL)
return -ENOMEM; return -ENOMEM;


@ -82,7 +82,7 @@ static struct hlist_head *alloc_rmtperm_hash(void)
struct hlist_head *hash; struct hlist_head *hash;
int i; int i;
hash = kmem_cache_alloc(ll_rmtperm_hash_cachep, GFP_IOFS | __GFP_ZERO); hash = kmem_cache_alloc(ll_rmtperm_hash_cachep, GFP_NOFS | __GFP_ZERO);
if (!hash) if (!hash)
return NULL; return NULL;


@ -1112,7 +1112,7 @@ static int mgc_apply_recover_logs(struct obd_device *mgc,
LASSERT(cfg->cfg_instance != NULL); LASSERT(cfg->cfg_instance != NULL);
LASSERT(cfg->cfg_sb == cfg->cfg_instance); LASSERT(cfg->cfg_sb == cfg->cfg_instance);
inst = kzalloc(PAGE_CACHE_SIZE, GFP_NOFS); inst = kzalloc(PAGE_CACHE_SIZE, GFP_KERNEL);
if (!inst) if (!inst)
return -ENOMEM; return -ENOMEM;
@ -1308,14 +1308,14 @@ static int mgc_process_recover_log(struct obd_device *obd,
if (cfg->cfg_last_idx == 0) /* the first time */ if (cfg->cfg_last_idx == 0) /* the first time */
nrpages = CONFIG_READ_NRPAGES_INIT; nrpages = CONFIG_READ_NRPAGES_INIT;
pages = kcalloc(nrpages, sizeof(*pages), GFP_NOFS); pages = kcalloc(nrpages, sizeof(*pages), GFP_KERNEL);
if (pages == NULL) { if (pages == NULL) {
rc = -ENOMEM; rc = -ENOMEM;
goto out; goto out;
} }
for (i = 0; i < nrpages; i++) { for (i = 0; i < nrpages; i++) {
pages[i] = alloc_page(GFP_IOFS); pages[i] = alloc_page(GFP_KERNEL);
if (pages[i] == NULL) { if (pages[i] == NULL) {
rc = -ENOMEM; rc = -ENOMEM;
goto out; goto out;
@ -1466,7 +1466,7 @@ static int mgc_process_cfg_log(struct obd_device *mgc,
if (cld->cld_cfg.cfg_sb) if (cld->cld_cfg.cfg_sb)
lsi = s2lsi(cld->cld_cfg.cfg_sb); lsi = s2lsi(cld->cld_cfg.cfg_sb);
env = kzalloc(sizeof(*env), GFP_NOFS); env = kzalloc(sizeof(*env), GFP_KERNEL);
if (!env) if (!env)
return -ENOMEM; return -ENOMEM;


@ -1562,7 +1562,7 @@ static int echo_client_kbrw(struct echo_device *ed, int rw, struct obdo *oa,
(oa->o_valid & OBD_MD_FLFLAGS) != 0 && (oa->o_valid & OBD_MD_FLFLAGS) != 0 &&
(oa->o_flags & OBD_FL_DEBUG_CHECK) != 0); (oa->o_flags & OBD_FL_DEBUG_CHECK) != 0);
gfp_mask = ((ostid_id(&oa->o_oi) & 2) == 0) ? GFP_IOFS : GFP_HIGHUSER; gfp_mask = ((ostid_id(&oa->o_oi) & 2) == 0) ? GFP_KERNEL : GFP_HIGHUSER;
LASSERT(rw == OBD_BRW_WRITE || rw == OBD_BRW_READ); LASSERT(rw == OBD_BRW_WRITE || rw == OBD_BRW_READ);
LASSERT(lsm != NULL); LASSERT(lsm != NULL);


@ -346,7 +346,7 @@ static struct osc_extent *osc_extent_alloc(struct osc_object *obj)
{ {
struct osc_extent *ext; struct osc_extent *ext;
ext = kmem_cache_alloc(osc_extent_kmem, GFP_IOFS | __GFP_ZERO); ext = kmem_cache_alloc(osc_extent_kmem, GFP_NOFS | __GFP_ZERO);
if (ext == NULL) if (ext == NULL)
return NULL; return NULL;


@ -1560,7 +1560,7 @@ int hfi1_setup_eagerbufs(struct hfi1_ctxtdata *rcd)
* heavy filesystem activity makes these fail, and we can * heavy filesystem activity makes these fail, and we can
* use compound pages. * use compound pages.
*/ */
gfp_flags = __GFP_WAIT | __GFP_IO | __GFP_COMP; gfp_flags = __GFP_RECLAIM | __GFP_IO | __GFP_COMP;
/* /*
* The minimum size of the eager buffers is a groups of MTU-sized * The minimum size of the eager buffers is a groups of MTU-sized


@ -905,7 +905,7 @@ static int ipath_create_user_egr(struct ipath_portdata *pd)
* heavy filesystem activity makes these fail, and we can * heavy filesystem activity makes these fail, and we can
* use compound pages. * use compound pages.
*/ */
gfp_flags = __GFP_WAIT | __GFP_IO | __GFP_COMP; gfp_flags = __GFP_RECLAIM | __GFP_IO | __GFP_COMP;
egrcnt = dd->ipath_rcvegrcnt; egrcnt = dd->ipath_rcvegrcnt;
/* TID number offset for this port */ /* TID number offset for this port */


@ -2345,7 +2345,6 @@ static void fsg_disable(struct usb_function *f)
static void handle_exception(struct fsg_common *common) static void handle_exception(struct fsg_common *common)
{ {
siginfo_t info;
int i; int i;
struct fsg_buffhd *bh; struct fsg_buffhd *bh;
enum fsg_state old_state; enum fsg_state old_state;
@ -2357,8 +2356,7 @@ static void handle_exception(struct fsg_common *common)
* into a high-priority EXIT exception. * into a high-priority EXIT exception.
*/ */
for (;;) { for (;;) {
int sig = dequeue_signal_lock(current, &current->blocked, &info); int sig = kernel_dequeue_signal(NULL);
if (!sig) if (!sig)
break; break;
if (sig != SIGUSR1) { if (sig != SIGUSR1) {

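kernel_dequeue_signal() is the kthread helper added elsewhere in this series; it folds the siglock handling and the &current->blocked argument into one call, which is why the siginfo_t local above can be dropped. Its shape is roughly the following (paraphrased from the signal.h addition, shown here only for context):

static inline int kernel_dequeue_signal(siginfo_t *info)
{
	struct task_struct *tsk = current;
	siginfo_t __info;
	int ret;

	spin_lock_irq(&tsk->sighand->siglock);
	ret = dequeue_signal(tsk, &tsk->blocked, info ?: &__info);
	spin_unlock_irq(&tsk->sighand->siglock);

	return ret;
}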

@ -2244,7 +2244,7 @@ static int u132_urb_enqueue(struct usb_hcd *hcd, struct urb *urb,
{ {
struct u132 *u132 = hcd_to_u132(hcd); struct u132 *u132 = hcd_to_u132(hcd);
if (irqs_disabled()) { if (irqs_disabled()) {
if (__GFP_WAIT & mem_flags) { if (gfpflags_allow_blocking(mem_flags)) {
printk(KERN_ERR "invalid context for function that might sleep\n"); printk(KERN_ERR "invalid context for function that might sleep\n");
return -EINVAL; return -EINVAL;
} }

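gfpflags_allow_blocking() replaces open-coded __GFP_WAIT tests such as the one above; a sketch of its definition (paraphrased from include/linux/gfp.h):

static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
{
	return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
}

The u132 check therefore keeps its old meaning: reject a mask that claims the caller may sleep while interrupts are disabled.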

@ -99,7 +99,7 @@ static int vmlfb_alloc_vram_area(struct vram_area *va, unsigned max_order,
* below the first 16MB. * below the first 16MB.
*/ */
flags = __GFP_DMA | __GFP_HIGH; flags = __GFP_DMA | __GFP_HIGH | __GFP_KSWAPD_RECLAIM;
va->logical = va->logical =
__get_free_pages(flags, --max_order); __get_free_pages(flags, --max_order);
} while (va->logical == 0 && max_order > min_order); } while (va->logical == 0 && max_order > min_order);


@ -482,13 +482,12 @@ static noinline int add_ra_bio_pages(struct inode *inode,
goto next; goto next;
} }
page = __page_cache_alloc(mapping_gfp_mask(mapping) & page = __page_cache_alloc(mapping_gfp_constraint(mapping,
~__GFP_FS); ~__GFP_FS));
if (!page) if (!page)
break; break;
if (add_to_page_cache_lru(page, mapping, pg_index, GFP_NOFS)) { if (add_to_page_cache_lru(page, mapping, pg_index, GFP_NOFS)) {
page_cache_release(page); page_cache_release(page);
goto next; goto next;
} }


@ -3367,7 +3367,7 @@ static inline bool btrfs_mixed_space_info(struct btrfs_space_info *space_info)
static inline gfp_t btrfs_alloc_write_mask(struct address_space *mapping) static inline gfp_t btrfs_alloc_write_mask(struct address_space *mapping)
{ {
return mapping_gfp_mask(mapping) & ~__GFP_FS; return mapping_gfp_constraint(mapping, ~__GFP_FS);
} }
/* extent-tree.c */ /* extent-tree.c */

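mapping_gfp_constraint() is the new wrapper these call sites switch to; it simply masks the mapping's gfp mask, so mapping_gfp_constraint(mapping, ~__GFP_FS) reads the same as the old mapping_gfp_mask(mapping) & ~__GFP_FS. A sketch of the pagemap.h definition, for reference:

static inline gfp_t mapping_gfp_constraint(struct address_space *mapping,
					   gfp_t gfp_mask)
{
	return mapping_gfp_mask(mapping) & gfp_mask;
}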

@ -2575,7 +2575,7 @@ int open_ctree(struct super_block *sb,
fs_info->commit_interval = BTRFS_DEFAULT_COMMIT_INTERVAL; fs_info->commit_interval = BTRFS_DEFAULT_COMMIT_INTERVAL;
fs_info->avg_delayed_ref_runtime = NSEC_PER_SEC >> 6; /* div by 64 */ fs_info->avg_delayed_ref_runtime = NSEC_PER_SEC >> 6; /* div by 64 */
/* readahead state */ /* readahead state */
INIT_RADIX_TREE(&fs_info->reada_tree, GFP_NOFS & ~__GFP_WAIT); INIT_RADIX_TREE(&fs_info->reada_tree, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
spin_lock_init(&fs_info->reada_lock); spin_lock_init(&fs_info->reada_lock);
fs_info->thread_pool_size = min_t(unsigned long, fs_info->thread_pool_size = min_t(unsigned long,


@ -616,7 +616,7 @@ static int __clear_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
if (bits & (EXTENT_IOBITS | EXTENT_BOUNDARY)) if (bits & (EXTENT_IOBITS | EXTENT_BOUNDARY))
clear = 1; clear = 1;
again: again:
if (!prealloc && (mask & __GFP_WAIT)) { if (!prealloc && gfpflags_allow_blocking(mask)) {
/* /*
* Don't care for allocation failure here because we might end * Don't care for allocation failure here because we might end
* up not needing the pre-allocated extent state at all, which * up not needing the pre-allocated extent state at all, which
@ -741,7 +741,7 @@ search_again:
if (start > end) if (start > end)
goto out; goto out;
spin_unlock(&tree->lock); spin_unlock(&tree->lock);
if (mask & __GFP_WAIT) if (gfpflags_allow_blocking(mask))
cond_resched(); cond_resched();
goto again; goto again;
} }
@ -874,7 +874,7 @@ __set_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
bits |= EXTENT_FIRST_DELALLOC; bits |= EXTENT_FIRST_DELALLOC;
again: again:
if (!prealloc && (mask & __GFP_WAIT)) { if (!prealloc && gfpflags_allow_blocking(mask)) {
prealloc = alloc_extent_state(mask); prealloc = alloc_extent_state(mask);
BUG_ON(!prealloc); BUG_ON(!prealloc);
} }
@ -1052,7 +1052,7 @@ search_again:
if (start > end) if (start > end)
goto out; goto out;
spin_unlock(&tree->lock); spin_unlock(&tree->lock);
if (mask & __GFP_WAIT) if (gfpflags_allow_blocking(mask))
cond_resched(); cond_resched();
goto again; goto again;
} }
@ -1100,7 +1100,7 @@ int convert_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
btrfs_debug_check_extent_io_range(tree, start, end); btrfs_debug_check_extent_io_range(tree, start, end);
again: again:
if (!prealloc && (mask & __GFP_WAIT)) { if (!prealloc && gfpflags_allow_blocking(mask)) {
/* /*
* Best effort, don't worry if extent state allocation fails * Best effort, don't worry if extent state allocation fails
* here for the first iteration. We might have a cached state * here for the first iteration. We might have a cached state
@ -1278,7 +1278,7 @@ search_again:
if (start > end) if (start > end)
goto out; goto out;
spin_unlock(&tree->lock); spin_unlock(&tree->lock);
if (mask & __GFP_WAIT) if (gfpflags_allow_blocking(mask))
cond_resched(); cond_resched();
first_iteration = false; first_iteration = false;
goto again; goto again;
@ -4386,7 +4386,7 @@ int try_release_extent_mapping(struct extent_map_tree *map,
u64 start = page_offset(page); u64 start = page_offset(page);
u64 end = start + PAGE_CACHE_SIZE - 1; u64 end = start + PAGE_CACHE_SIZE - 1;
if ((mask & __GFP_WAIT) && if (gfpflags_allow_blocking(mask) &&
page->mapping->host->i_size > 16 * 1024 * 1024) { page->mapping->host->i_size > 16 * 1024 * 1024) {
u64 len; u64 len;
while (start <= end) { while (start <= end) {


@ -85,8 +85,8 @@ static struct inode *__lookup_free_space_inode(struct btrfs_root *root,
} }
mapping_set_gfp_mask(inode->i_mapping, mapping_set_gfp_mask(inode->i_mapping,
mapping_gfp_mask(inode->i_mapping) & mapping_gfp_constraint(inode->i_mapping,
~(__GFP_FS | __GFP_HIGHMEM)); ~(__GFP_FS | __GFP_HIGHMEM)));
return inode; return inode;
} }


@ -232,8 +232,8 @@ static struct btrfs_device *__alloc_device(void)
spin_lock_init(&dev->reada_lock); spin_lock_init(&dev->reada_lock);
atomic_set(&dev->reada_in_flight, 0); atomic_set(&dev->reada_in_flight, 0);
atomic_set(&dev->dev_stats_ccnt, 0); atomic_set(&dev->dev_stats_ccnt, 0);
INIT_RADIX_TREE(&dev->reada_zones, GFP_NOFS & ~__GFP_WAIT); INIT_RADIX_TREE(&dev->reada_zones, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
INIT_RADIX_TREE(&dev->reada_extents, GFP_NOFS & ~__GFP_WAIT); INIT_RADIX_TREE(&dev->reada_extents, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
return dev; return dev;
} }


@ -999,7 +999,7 @@ grow_dev_page(struct block_device *bdev, sector_t block,
int ret = 0; /* Will call free_more_memory() */ int ret = 0; /* Will call free_more_memory() */
gfp_t gfp_mask; gfp_t gfp_mask;
gfp_mask = (mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS) | gfp; gfp_mask = mapping_gfp_constraint(inode->i_mapping, ~__GFP_FS) | gfp;
/* /*
* XXX: __getblk_slow() can not really deal with failure and * XXX: __getblk_slow() can not really deal with failure and


@ -30,7 +30,7 @@ extern unsigned cachefiles_debug;
#define CACHEFILES_DEBUG_KLEAVE 2 #define CACHEFILES_DEBUG_KLEAVE 2
#define CACHEFILES_DEBUG_KDEBUG 4 #define CACHEFILES_DEBUG_KDEBUG 4
#define cachefiles_gfp (__GFP_WAIT | __GFP_NORETRY | __GFP_NOMEMALLOC) #define cachefiles_gfp (__GFP_RECLAIM | __GFP_NORETRY | __GFP_NOMEMALLOC)
/* /*
* node records * node records


@ -1283,8 +1283,8 @@ static int ceph_filemap_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
int ret1; int ret1;
struct address_space *mapping = inode->i_mapping; struct address_space *mapping = inode->i_mapping;
struct page *page = find_or_create_page(mapping, 0, struct page *page = find_or_create_page(mapping, 0,
mapping_gfp_mask(mapping) & mapping_gfp_constraint(mapping,
~__GFP_FS); ~__GFP_FS));
if (!page) { if (!page) {
ret = VM_FAULT_OOM; ret = VM_FAULT_OOM;
goto out; goto out;
@ -1428,7 +1428,8 @@ void ceph_fill_inline_data(struct inode *inode, struct page *locked_page,
if (i_size_read(inode) == 0) if (i_size_read(inode) == 0)
return; return;
page = find_or_create_page(mapping, 0, mapping_gfp_mask(mapping) & ~__GFP_FS); page = find_or_create_page(mapping, 0, mapping_gfp_constraint(mapping, ~__GFP_FS));
if (!page) if (!page)
return; return;
if (PageUptodate(page)) { if (PageUptodate(page)) {


@ -3380,7 +3380,7 @@ readpages_get_pages(struct address_space *mapping, struct list_head *page_list,
struct page *page, *tpage; struct page *page, *tpage;
unsigned int expected_index; unsigned int expected_index;
int rc; int rc;
gfp_t gfp = GFP_KERNEL & mapping_gfp_mask(mapping); gfp_t gfp = mapping_gfp_constraint(mapping, GFP_KERNEL);
INIT_LIST_HEAD(tmplist); INIT_LIST_HEAD(tmplist);


@ -280,23 +280,24 @@ out:
return ispipe; return ispipe;
} }
static int zap_process(struct task_struct *start, int exit_code) static int zap_process(struct task_struct *start, int exit_code, int flags)
{ {
struct task_struct *t; struct task_struct *t;
int nr = 0; int nr = 0;
/* ignore all signals except SIGKILL, see prepare_signal() */
start->signal->flags = SIGNAL_GROUP_COREDUMP | flags;
start->signal->group_exit_code = exit_code; start->signal->group_exit_code = exit_code;
start->signal->group_stop_count = 0; start->signal->group_stop_count = 0;
t = start; for_each_thread(start, t) {
do {
task_clear_jobctl_pending(t, JOBCTL_PENDING_MASK); task_clear_jobctl_pending(t, JOBCTL_PENDING_MASK);
if (t != current && t->mm) { if (t != current && t->mm) {
sigaddset(&t->pending.signal, SIGKILL); sigaddset(&t->pending.signal, SIGKILL);
signal_wake_up(t, 1); signal_wake_up(t, 1);
nr++; nr++;
} }
} while_each_thread(start, t); }
return nr; return nr;
} }
@ -311,10 +312,8 @@ static int zap_threads(struct task_struct *tsk, struct mm_struct *mm,
spin_lock_irq(&tsk->sighand->siglock); spin_lock_irq(&tsk->sighand->siglock);
if (!signal_group_exit(tsk->signal)) { if (!signal_group_exit(tsk->signal)) {
mm->core_state = core_state; mm->core_state = core_state;
nr = zap_process(tsk, exit_code);
tsk->signal->group_exit_task = tsk; tsk->signal->group_exit_task = tsk;
/* ignore all signals except SIGKILL, see prepare_signal() */ nr = zap_process(tsk, exit_code, 0);
tsk->signal->flags = SIGNAL_GROUP_COREDUMP;
clear_tsk_thread_flag(tsk, TIF_SIGPENDING); clear_tsk_thread_flag(tsk, TIF_SIGPENDING);
} }
spin_unlock_irq(&tsk->sighand->siglock); spin_unlock_irq(&tsk->sighand->siglock);
@ -360,18 +359,18 @@ static int zap_threads(struct task_struct *tsk, struct mm_struct *mm,
continue; continue;
if (g->flags & PF_KTHREAD) if (g->flags & PF_KTHREAD)
continue; continue;
p = g;
do { for_each_thread(g, p) {
if (p->mm) { if (unlikely(!p->mm))
continue;
if (unlikely(p->mm == mm)) { if (unlikely(p->mm == mm)) {
lock_task_sighand(p, &flags); lock_task_sighand(p, &flags);
nr += zap_process(p, exit_code); nr += zap_process(p, exit_code, SIGNAL_GROUP_EXIT);
p->signal->flags = SIGNAL_GROUP_EXIT;
unlock_task_sighand(p, &flags); unlock_task_sighand(p, &flags);
} }
break; break;
} }
} while_each_thread(g, p);
} }
rcu_read_unlock(); rcu_read_unlock();
done: done:

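for_each_thread() walks the signal_struct thread list and is meant to be safe even if the starting task exits mid-walk, which while_each_thread() was not. A minimal usage sketch (hypothetical helper, only to show the locking expectation):

/* Count the live threads of @task; walk the list under rcu_read_lock()
 * (or with the task's sighand lock held, as zap_process() does). */
static int count_threads(struct task_struct *task)
{
	struct task_struct *t;
	int nr = 0;

	rcu_read_lock();
	for_each_thread(task, t)
		nr++;
	rcu_read_unlock();

	return nr;
}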

@ -361,7 +361,7 @@ dio_bio_alloc(struct dio *dio, struct dio_submit *sdio,
/* /*
* bio_alloc() is guaranteed to return a bio when called with * bio_alloc() is guaranteed to return a bio when called with
* __GFP_WAIT and we request a valid number of vectors. * __GFP_RECLAIM and we request a valid number of vectors.
*/ */
bio = bio_alloc(GFP_KERNEL, nr_vecs); bio = bio_alloc(GFP_KERNEL, nr_vecs);

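The reworded comment relies on bio_alloc()'s mempool backing: with a mask that allows direct reclaim and a valid number of vectors, the call may block but will not return NULL, so no error path is needed. Illustration only (not from this hunk; bdev and nr_vecs are assumed to be in scope):

bio = bio_alloc(GFP_KERNEL, nr_vecs);	/* backed by fs_bio_set's mempool */
bio->bi_bdev = bdev;			/* safe: bio cannot be NULL here */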

@ -3386,7 +3386,7 @@ static int __ext4_block_zero_page_range(handle_t *handle,
int err = 0; int err = 0;
page = find_or_create_page(mapping, from >> PAGE_CACHE_SHIFT, page = find_or_create_page(mapping, from >> PAGE_CACHE_SHIFT,
mapping_gfp_mask(mapping) & ~__GFP_FS); mapping_gfp_constraint(mapping, ~__GFP_FS));
if (!page) if (!page)
return -ENOMEM; return -ENOMEM;


@ -166,7 +166,7 @@ int ext4_mpage_readpages(struct address_space *mapping,
page = list_entry(pages->prev, struct page, lru); page = list_entry(pages->prev, struct page, lru);
list_del(&page->lru); list_del(&page->lru);
if (add_to_page_cache_lru(page, mapping, page->index, if (add_to_page_cache_lru(page, mapping, page->index,
GFP_KERNEL & mapping_gfp_mask(mapping))) mapping_gfp_constraint(mapping, GFP_KERNEL)))
goto next_page; goto next_page;
} }


@ -1061,7 +1061,7 @@ static int bdev_try_to_free_page(struct super_block *sb, struct page *page,
return 0; return 0;
if (journal) if (journal)
return jbd2_journal_try_to_free_buffers(journal, page, return jbd2_journal_try_to_free_buffers(journal, page,
wait & ~__GFP_WAIT); wait & ~__GFP_DIRECT_RECLAIM);
return try_to_free_buffers(page); return try_to_free_buffers(page);
} }


@ -111,7 +111,7 @@ struct fscache_cookie *__fscache_acquire_cookie(
/* radix tree insertion won't use the preallocation pool unless it's /* radix tree insertion won't use the preallocation pool unless it's
* told it may not wait */ * told it may not wait */
INIT_RADIX_TREE(&cookie->stores, GFP_NOFS & ~__GFP_WAIT); INIT_RADIX_TREE(&cookie->stores, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
switch (cookie->def->type) { switch (cookie->def->type) {
case FSCACHE_COOKIE_TYPE_INDEX: case FSCACHE_COOKIE_TYPE_INDEX:

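The comment above refers to the radix tree preload convention: when the tree's gfp mask forbids direct reclaim, insertions draw nodes from the per-CPU preload pool rather than allocating. A minimal sketch of that calling pattern (illustrative only; the lock name is made up):

ret = radix_tree_preload(GFP_NOFS);	/* may sleep, fills the pool */
if (ret)
	return ret;

spin_lock(&cookie->lock);
ret = radix_tree_insert(&cookie->stores, page->index, page);
spin_unlock(&cookie->lock);

radix_tree_preload_end();		/* re-enables preemption */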

@ -58,7 +58,7 @@ bool release_page_wait_timeout(struct fscache_cookie *cookie, struct page *page)
/* /*
* decide whether a page can be released, possibly by cancelling a store to it * decide whether a page can be released, possibly by cancelling a store to it
* - we're allowed to sleep if __GFP_WAIT is flagged * - we're allowed to sleep if __GFP_DIRECT_RECLAIM is flagged
*/ */
bool __fscache_maybe_release_page(struct fscache_cookie *cookie, bool __fscache_maybe_release_page(struct fscache_cookie *cookie,
struct page *page, struct page *page,
@ -122,7 +122,7 @@ page_busy:
* allocator as the work threads writing to the cache may all end up * allocator as the work threads writing to the cache may all end up
* sleeping on memory allocation, so we may need to impose a timeout * sleeping on memory allocation, so we may need to impose a timeout
* too. */ * too. */
if (!(gfp & __GFP_WAIT) || !(gfp & __GFP_FS)) { if (!(gfp & __GFP_DIRECT_RECLAIM) || !(gfp & __GFP_FS)) {
fscache_stat(&fscache_n_store_vmscan_busy); fscache_stat(&fscache_n_store_vmscan_busy);
return false; return false;
} }
@ -132,7 +132,7 @@ page_busy:
_debug("fscache writeout timeout page: %p{%lx}", _debug("fscache writeout timeout page: %p{%lx}",
page, page->index); page, page->index);
gfp &= ~__GFP_WAIT; gfp &= ~__GFP_DIRECT_RECLAIM;
goto try_again; goto try_again;
} }
EXPORT_SYMBOL(__fscache_maybe_release_page); EXPORT_SYMBOL(__fscache_maybe_release_page);


@ -1937,8 +1937,8 @@ out:
* @journal: journal for operation * @journal: journal for operation
* @page: to try and free * @page: to try and free
* @gfp_mask: we use the mask to detect how hard should we try to release * @gfp_mask: we use the mask to detect how hard should we try to release
* buffers. If __GFP_WAIT and __GFP_FS is set, we wait for commit code to * buffers. If __GFP_DIRECT_RECLAIM and __GFP_FS is set, we wait for commit
* release the buffers. * code to release the buffers.
* *
* *
* For all the buffers on this page, * For all the buffers on this page,


@ -80,7 +80,6 @@ static int jffs2_garbage_collect_thread(void *_c)
siginitset(&hupmask, sigmask(SIGHUP)); siginitset(&hupmask, sigmask(SIGHUP));
allow_signal(SIGKILL); allow_signal(SIGKILL);
allow_signal(SIGSTOP); allow_signal(SIGSTOP);
allow_signal(SIGCONT);
allow_signal(SIGHUP); allow_signal(SIGHUP);
c->gc_task = current; c->gc_task = current;
@ -121,20 +120,18 @@ static int jffs2_garbage_collect_thread(void *_c)
/* Put_super will send a SIGKILL and then wait on the sem. /* Put_super will send a SIGKILL and then wait on the sem.
*/ */
while (signal_pending(current) || freezing(current)) { while (signal_pending(current) || freezing(current)) {
siginfo_t info;
unsigned long signr; unsigned long signr;
if (try_to_freeze()) if (try_to_freeze())
goto again; goto again;
signr = dequeue_signal_lock(current, &current->blocked, &info); signr = kernel_dequeue_signal(NULL);
switch(signr) { switch(signr) {
case SIGSTOP: case SIGSTOP:
jffs2_dbg(1, "%s(): SIGSTOP received\n", jffs2_dbg(1, "%s(): SIGSTOP received\n",
__func__); __func__);
set_current_state(TASK_STOPPED); kernel_signal_stop();
schedule();
break; break;
case SIGKILL: case SIGKILL:

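kernel_signal_stop() packages the TASK_STOPPED transition under the siglock so it cannot race with an incoming wakeup, which is what the removed set_current_state()/schedule() pair depended on ordering for. Roughly (paraphrased from the signal.h addition, for context only):

static inline void kernel_signal_stop(void)
{
	spin_lock_irq(&current->sighand->siglock);
	if (current->jobctl & JOBCTL_STOP_DEQUEUED)
		__set_current_state(TASK_STOPPED);
	spin_unlock_irq(&current->sighand->siglock);

	schedule();
}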

@ -1264,7 +1264,7 @@ int jffs2_dataflash_setup(struct jffs2_sb_info *c) {
if ((c->flash_size % c->sector_size) != 0) { if ((c->flash_size % c->sector_size) != 0) {
c->flash_size = (c->flash_size / c->sector_size) * c->sector_size; c->flash_size = (c->flash_size / c->sector_size) * c->sector_size;
pr_warn("flash size adjusted to %dKiB\n", c->flash_size); pr_warn("flash size adjusted to %dKiB\n", c->flash_size);
}; }
c->wbuf_ofs = 0xFFFFFFFF; c->wbuf_ofs = 0xFFFFFFFF;
c->wbuf = kmalloc(c->wbuf_pagesize, GFP_KERNEL); c->wbuf = kmalloc(c->wbuf_pagesize, GFP_KERNEL);


@ -57,7 +57,7 @@ static struct page *get_mapping_page(struct super_block *sb, pgoff_t index,
filler_t *filler = super->s_devops->readpage; filler_t *filler = super->s_devops->readpage;
struct page *page; struct page *page;
BUG_ON(mapping_gfp_mask(mapping) & __GFP_FS); BUG_ON(mapping_gfp_constraint(mapping, __GFP_FS));
if (use_filler) if (use_filler)
page = read_cache_page(mapping, index, filler, sb); page = read_cache_page(mapping, index, filler, sb);
else { else {


@ -361,7 +361,7 @@ mpage_readpages(struct address_space *mapping, struct list_head *pages,
sector_t last_block_in_bio = 0; sector_t last_block_in_bio = 0;
struct buffer_head map_bh; struct buffer_head map_bh;
unsigned long first_logical_block = 0; unsigned long first_logical_block = 0;
gfp_t gfp = GFP_KERNEL & mapping_gfp_mask(mapping); gfp_t gfp = mapping_gfp_constraint(mapping, GFP_KERNEL);
map_bh.b_state = 0; map_bh.b_state = 0;
map_bh.b_size = 0; map_bh.b_size = 0;
@ -397,7 +397,7 @@ int mpage_readpage(struct page *page, get_block_t get_block)
sector_t last_block_in_bio = 0; sector_t last_block_in_bio = 0;
struct buffer_head map_bh; struct buffer_head map_bh;
unsigned long first_logical_block = 0; unsigned long first_logical_block = 0;
gfp_t gfp = GFP_KERNEL & mapping_gfp_mask(page->mapping); gfp_t gfp = mapping_gfp_constraint(page->mapping, GFP_KERNEL);
map_bh.b_state = 0; map_bh.b_state = 0;
map_bh.b_size = 0; map_bh.b_size = 0;


@ -4604,7 +4604,7 @@ EXPORT_SYMBOL(__page_symlink);
int page_symlink(struct inode *inode, const char *symname, int len) int page_symlink(struct inode *inode, const char *symname, int len)
{ {
return __page_symlink(inode, symname, len, return __page_symlink(inode, symname, len,
!(mapping_gfp_mask(inode->i_mapping) & __GFP_FS)); !mapping_gfp_constraint(inode->i_mapping, __GFP_FS));
} }
EXPORT_SYMBOL(page_symlink); EXPORT_SYMBOL(page_symlink);


@ -473,8 +473,8 @@ static int nfs_release_page(struct page *page, gfp_t gfp)
dfprintk(PAGECACHE, "NFS: release_page(%p)\n", page); dfprintk(PAGECACHE, "NFS: release_page(%p)\n", page);
/* Always try to initiate a 'commit' if relevant, but only /* Always try to initiate a 'commit' if relevant, but only
* wait for it if __GFP_WAIT is set. Even then, only wait 1 * wait for it if the caller allows blocking. Even then,
* second and only if the 'bdi' is not congested. * only wait 1 second and only if the 'bdi' is not congested.
* Waiting indefinitely can cause deadlocks when the NFS * Waiting indefinitely can cause deadlocks when the NFS
* server is on this machine, when a new TCP connection is * server is on this machine, when a new TCP connection is
* needed and in other rare cases. There is no particular * needed and in other rare cases. There is no particular
@ -484,7 +484,7 @@ static int nfs_release_page(struct page *page, gfp_t gfp)
if (mapping) { if (mapping) {
struct nfs_server *nfss = NFS_SERVER(mapping->host); struct nfs_server *nfss = NFS_SERVER(mapping->host);
nfs_commit_inode(mapping->host, 0); nfs_commit_inode(mapping->host, 0);
if ((gfp & __GFP_WAIT) && if (gfpflags_allow_blocking(gfp) &&
!bdi_write_congested(&nfss->backing_dev_info)) { !bdi_write_congested(&nfss->backing_dev_info)) {
wait_on_page_bit_killable_timeout(page, PG_private, wait_on_page_bit_killable_timeout(page, PG_private,
HZ); HZ);


@ -133,38 +133,38 @@ nilfs_palloc_bitmap_blkoff(const struct inode *inode, unsigned long group)
/** /**
* nilfs_palloc_group_desc_nfrees - get the number of free entries in a group * nilfs_palloc_group_desc_nfrees - get the number of free entries in a group
* @inode: inode of metadata file using this allocator
* @group: group number
* @desc: pointer to descriptor structure for the group * @desc: pointer to descriptor structure for the group
* @lock: spin lock protecting @desc
*/ */
static unsigned long static unsigned long
nilfs_palloc_group_desc_nfrees(struct inode *inode, unsigned long group, nilfs_palloc_group_desc_nfrees(const struct nilfs_palloc_group_desc *desc,
const struct nilfs_palloc_group_desc *desc) spinlock_t *lock)
{ {
unsigned long nfree; unsigned long nfree;
spin_lock(nilfs_mdt_bgl_lock(inode, group)); spin_lock(lock);
nfree = le32_to_cpu(desc->pg_nfrees); nfree = le32_to_cpu(desc->pg_nfrees);
spin_unlock(nilfs_mdt_bgl_lock(inode, group)); spin_unlock(lock);
return nfree; return nfree;
} }
/** /**
* nilfs_palloc_group_desc_add_entries - adjust count of free entries * nilfs_palloc_group_desc_add_entries - adjust count of free entries
* @inode: inode of metadata file using this allocator
* @group: group number
* @desc: pointer to descriptor structure for the group * @desc: pointer to descriptor structure for the group
* @lock: spin lock protecting @desc
* @n: delta to be added * @n: delta to be added
*/ */
static void static u32
nilfs_palloc_group_desc_add_entries(struct inode *inode, unsigned long group, struct nilfs_palloc_group_desc *desc, u32 n) nilfs_palloc_group_desc_add_entries(struct nilfs_palloc_group_desc *desc, spinlock_t *lock, u32 n)
{ {
spin_lock(nilfs_mdt_bgl_lock(inode, group)); u32 nfree;
spin_lock(lock);
le32_add_cpu(&desc->pg_nfrees, n); le32_add_cpu(&desc->pg_nfrees, n);
spin_unlock(nilfs_mdt_bgl_lock(inode, group)); nfree = le32_to_cpu(desc->pg_nfrees);
spin_unlock(lock);
return nfree;
} }
/** /**
@ -239,6 +239,26 @@ static int nilfs_palloc_get_block(struct inode *inode, unsigned long blkoff,
return ret; return ret;
} }
/**
* nilfs_palloc_delete_block - delete a block on the persistent allocator file
* @inode: inode of metadata file using this allocator
* @blkoff: block offset
* @prev: nilfs_bh_assoc struct of the last used buffer
* @lock: spin lock protecting @prev
*/
static int nilfs_palloc_delete_block(struct inode *inode, unsigned long blkoff,
struct nilfs_bh_assoc *prev,
spinlock_t *lock)
{
spin_lock(lock);
if (prev->bh && blkoff == prev->blkoff) {
brelse(prev->bh);
prev->bh = NULL;
}
spin_unlock(lock);
return nilfs_mdt_delete_block(inode, blkoff);
}
/** /**
* nilfs_palloc_get_desc_block - get buffer head of a group descriptor block * nilfs_palloc_get_desc_block - get buffer head of a group descriptor block
* @inode: inode of metadata file using this allocator * @inode: inode of metadata file using this allocator
@ -277,6 +297,22 @@ static int nilfs_palloc_get_bitmap_block(struct inode *inode,
&cache->prev_bitmap, &cache->lock); &cache->prev_bitmap, &cache->lock);
} }
/**
* nilfs_palloc_delete_bitmap_block - delete a bitmap block
* @inode: inode of metadata file using this allocator
* @group: group number
*/
static int nilfs_palloc_delete_bitmap_block(struct inode *inode,
unsigned long group)
{
struct nilfs_palloc_cache *cache = NILFS_MDT(inode)->mi_palloc_cache;
return nilfs_palloc_delete_block(inode,
nilfs_palloc_bitmap_blkoff(inode,
group),
&cache->prev_bitmap, &cache->lock);
}
/** /**
* nilfs_palloc_get_entry_block - get buffer head of an entry block * nilfs_palloc_get_entry_block - get buffer head of an entry block
* @inode: inode of metadata file using this allocator * @inode: inode of metadata file using this allocator
@ -295,6 +331,20 @@ int nilfs_palloc_get_entry_block(struct inode *inode, __u64 nr,
&cache->prev_entry, &cache->lock); &cache->prev_entry, &cache->lock);
} }
/**
* nilfs_palloc_delete_entry_block - delete an entry block
* @inode: inode of metadata file using this allocator
* @nr: serial number of the entry
*/
static int nilfs_palloc_delete_entry_block(struct inode *inode, __u64 nr)
{
struct nilfs_palloc_cache *cache = NILFS_MDT(inode)->mi_palloc_cache;
return nilfs_palloc_delete_block(inode,
nilfs_palloc_entry_blkoff(inode, nr),
&cache->prev_entry, &cache->lock);
}
/** /**
* nilfs_palloc_block_get_group_desc - get kernel address of a group descriptor * nilfs_palloc_block_get_group_desc - get kernel address of a group descriptor
* @inode: inode of metadata file using this allocator * @inode: inode of metadata file using this allocator
@ -332,51 +382,40 @@ void *nilfs_palloc_block_get_entry(const struct inode *inode, __u64 nr,
/** /**
* nilfs_palloc_find_available_slot - find available slot in a group * nilfs_palloc_find_available_slot - find available slot in a group
* @inode: inode of metadata file using this allocator
* @group: group number
* @target: offset number of an entry in the group (start point)
* @bitmap: bitmap of the group * @bitmap: bitmap of the group
* @target: offset number of an entry in the group (start point)
* @bsize: size in bits * @bsize: size in bits
* @lock: spin lock protecting @bitmap
*/ */
static int nilfs_palloc_find_available_slot(struct inode *inode, static int nilfs_palloc_find_available_slot(unsigned char *bitmap,
unsigned long group,
unsigned long target, unsigned long target,
unsigned char *bitmap, unsigned bsize,
int bsize) spinlock_t *lock)
{ {
int curr, pos, end, i; int pos, end = bsize;
if (target > 0) { if (likely(target < bsize)) {
end = (target + BITS_PER_LONG - 1) & ~(BITS_PER_LONG - 1); pos = target;
if (end > bsize) do {
end = bsize; pos = nilfs_find_next_zero_bit(bitmap, end, pos);
pos = nilfs_find_next_zero_bit(bitmap, end, target); if (pos >= end)
if (pos < end && break;
!nilfs_set_bit_atomic( if (!nilfs_set_bit_atomic(lock, pos, bitmap))
nilfs_mdt_bgl_lock(inode, group), pos, bitmap))
return pos; return pos;
} else } while (++pos < end);
end = 0;
end = target;
}
for (i = 0, curr = end;
i < bsize;
i += BITS_PER_LONG, curr += BITS_PER_LONG) {
/* wrap around */ /* wrap around */
if (curr >= bsize) for (pos = 0; pos < end; pos++) {
curr = 0; pos = nilfs_find_next_zero_bit(bitmap, end, pos);
while (*((unsigned long *)bitmap + curr / BITS_PER_LONG) if (pos >= end)
!= ~0UL) { break;
end = curr + BITS_PER_LONG; if (!nilfs_set_bit_atomic(lock, pos, bitmap))
if (end > bsize)
end = bsize;
pos = nilfs_find_next_zero_bit(bitmap, end, curr);
if ((pos < end) &&
!nilfs_set_bit_atomic(
nilfs_mdt_bgl_lock(inode, group), pos,
bitmap))
return pos; return pos;
} }
}
return -ENOSPC; return -ENOSPC;
} }
@ -475,15 +514,15 @@ int nilfs_palloc_prepare_alloc_entry(struct inode *inode,
void *desc_kaddr, *bitmap_kaddr; void *desc_kaddr, *bitmap_kaddr;
unsigned long group, maxgroup, ngroups; unsigned long group, maxgroup, ngroups;
unsigned long group_offset, maxgroup_offset; unsigned long group_offset, maxgroup_offset;
unsigned long n, entries_per_group, groups_per_desc_block; unsigned long n, entries_per_group;
unsigned long i, j; unsigned long i, j;
spinlock_t *lock;
int pos, ret; int pos, ret;
ngroups = nilfs_palloc_groups_count(inode); ngroups = nilfs_palloc_groups_count(inode);
maxgroup = ngroups - 1; maxgroup = ngroups - 1;
group = nilfs_palloc_group(inode, req->pr_entry_nr, &group_offset); group = nilfs_palloc_group(inode, req->pr_entry_nr, &group_offset);
entries_per_group = nilfs_palloc_entries_per_group(inode); entries_per_group = nilfs_palloc_entries_per_group(inode);
groups_per_desc_block = nilfs_palloc_groups_per_desc_block(inode);
for (i = 0; i < ngroups; i += n) { for (i = 0; i < ngroups; i += n) {
if (group >= ngroups) { if (group >= ngroups) {
@ -501,8 +540,8 @@ int nilfs_palloc_prepare_alloc_entry(struct inode *inode,
n = nilfs_palloc_rest_groups_in_desc_block(inode, group, n = nilfs_palloc_rest_groups_in_desc_block(inode, group,
maxgroup); maxgroup);
for (j = 0; j < n; j++, desc++, group++) { for (j = 0; j < n; j++, desc++, group++) {
if (nilfs_palloc_group_desc_nfrees(inode, group, desc) lock = nilfs_mdt_bgl_lock(inode, group);
> 0) { if (nilfs_palloc_group_desc_nfrees(desc, lock) > 0) {
ret = nilfs_palloc_get_bitmap_block( ret = nilfs_palloc_get_bitmap_block(
inode, group, 1, &bitmap_bh); inode, group, 1, &bitmap_bh);
if (ret < 0) if (ret < 0)
@ -510,12 +549,12 @@ int nilfs_palloc_prepare_alloc_entry(struct inode *inode,
bitmap_kaddr = kmap(bitmap_bh->b_page); bitmap_kaddr = kmap(bitmap_bh->b_page);
bitmap = bitmap_kaddr + bh_offset(bitmap_bh); bitmap = bitmap_kaddr + bh_offset(bitmap_bh);
pos = nilfs_palloc_find_available_slot( pos = nilfs_palloc_find_available_slot(
inode, group, group_offset, bitmap, bitmap, group_offset,
entries_per_group); entries_per_group, lock);
if (pos >= 0) { if (pos >= 0) {
/* found a free entry */ /* found a free entry */
nilfs_palloc_group_desc_add_entries( nilfs_palloc_group_desc_add_entries(
inode, group, desc, -1); desc, lock, -1);
req->pr_entry_nr = req->pr_entry_nr =
entries_per_group * group + pos; entries_per_group * group + pos;
kunmap(desc_bh->b_page); kunmap(desc_bh->b_page);
@ -573,6 +612,7 @@ void nilfs_palloc_commit_free_entry(struct inode *inode,
unsigned long group, group_offset; unsigned long group, group_offset;
unsigned char *bitmap; unsigned char *bitmap;
void *desc_kaddr, *bitmap_kaddr; void *desc_kaddr, *bitmap_kaddr;
spinlock_t *lock;
group = nilfs_palloc_group(inode, req->pr_entry_nr, &group_offset); group = nilfs_palloc_group(inode, req->pr_entry_nr, &group_offset);
desc_kaddr = kmap(req->pr_desc_bh->b_page); desc_kaddr = kmap(req->pr_desc_bh->b_page);
@ -580,13 +620,15 @@ void nilfs_palloc_commit_free_entry(struct inode *inode,
req->pr_desc_bh, desc_kaddr); req->pr_desc_bh, desc_kaddr);
bitmap_kaddr = kmap(req->pr_bitmap_bh->b_page); bitmap_kaddr = kmap(req->pr_bitmap_bh->b_page);
bitmap = bitmap_kaddr + bh_offset(req->pr_bitmap_bh); bitmap = bitmap_kaddr + bh_offset(req->pr_bitmap_bh);
lock = nilfs_mdt_bgl_lock(inode, group);
if (!nilfs_clear_bit_atomic(nilfs_mdt_bgl_lock(inode, group), if (!nilfs_clear_bit_atomic(lock, group_offset, bitmap))
group_offset, bitmap)) nilfs_warning(inode->i_sb, __func__,
printk(KERN_WARNING "%s: entry number %llu already freed\n", "entry number %llu already freed: ino=%lu\n",
__func__, (unsigned long long)req->pr_entry_nr); (unsigned long long)req->pr_entry_nr,
(unsigned long)inode->i_ino);
else else
nilfs_palloc_group_desc_add_entries(inode, group, desc, 1); nilfs_palloc_group_desc_add_entries(desc, lock, 1);
kunmap(req->pr_bitmap_bh->b_page); kunmap(req->pr_bitmap_bh->b_page);
kunmap(req->pr_desc_bh->b_page); kunmap(req->pr_desc_bh->b_page);
@ -611,6 +653,7 @@ void nilfs_palloc_abort_alloc_entry(struct inode *inode,
void *desc_kaddr, *bitmap_kaddr; void *desc_kaddr, *bitmap_kaddr;
unsigned char *bitmap; unsigned char *bitmap;
unsigned long group, group_offset; unsigned long group, group_offset;
spinlock_t *lock;
group = nilfs_palloc_group(inode, req->pr_entry_nr, &group_offset); group = nilfs_palloc_group(inode, req->pr_entry_nr, &group_offset);
desc_kaddr = kmap(req->pr_desc_bh->b_page); desc_kaddr = kmap(req->pr_desc_bh->b_page);
@ -618,12 +661,15 @@ void nilfs_palloc_abort_alloc_entry(struct inode *inode,
req->pr_desc_bh, desc_kaddr); req->pr_desc_bh, desc_kaddr);
bitmap_kaddr = kmap(req->pr_bitmap_bh->b_page); bitmap_kaddr = kmap(req->pr_bitmap_bh->b_page);
bitmap = bitmap_kaddr + bh_offset(req->pr_bitmap_bh); bitmap = bitmap_kaddr + bh_offset(req->pr_bitmap_bh);
if (!nilfs_clear_bit_atomic(nilfs_mdt_bgl_lock(inode, group), lock = nilfs_mdt_bgl_lock(inode, group);
group_offset, bitmap))
printk(KERN_WARNING "%s: entry number %llu already freed\n", if (!nilfs_clear_bit_atomic(lock, group_offset, bitmap))
__func__, (unsigned long long)req->pr_entry_nr); nilfs_warning(inode->i_sb, __func__,
"entry number %llu already freed: ino=%lu\n",
(unsigned long long)req->pr_entry_nr,
(unsigned long)inode->i_ino);
else else
nilfs_palloc_group_desc_add_entries(inode, group, desc, 1); nilfs_palloc_group_desc_add_entries(desc, lock, 1);
kunmap(req->pr_bitmap_bh->b_page); kunmap(req->pr_bitmap_bh->b_page);
kunmap(req->pr_desc_bh->b_page); kunmap(req->pr_desc_bh->b_page);
@ -679,22 +725,6 @@ void nilfs_palloc_abort_free_entry(struct inode *inode,
req->pr_desc_bh = NULL; req->pr_desc_bh = NULL;
} }
/**
* nilfs_palloc_group_is_in - judge if an entry is in a group
* @inode: inode of metadata file using this allocator
* @group: group number
* @nr: serial number of the entry (e.g. inode number)
*/
static int
nilfs_palloc_group_is_in(struct inode *inode, unsigned long group, __u64 nr)
{
__u64 first, last;
first = group * nilfs_palloc_entries_per_group(inode);
last = first + nilfs_palloc_entries_per_group(inode) - 1;
return (nr >= first) && (nr <= last);
}
/** /**
* nilfs_palloc_freev - deallocate a set of persistent objects * nilfs_palloc_freev - deallocate a set of persistent objects
* @inode: inode of metadata file using this allocator * @inode: inode of metadata file using this allocator
@ -708,9 +738,18 @@ int nilfs_palloc_freev(struct inode *inode, __u64 *entry_nrs, size_t nitems)
unsigned char *bitmap; unsigned char *bitmap;
void *desc_kaddr, *bitmap_kaddr; void *desc_kaddr, *bitmap_kaddr;
unsigned long group, group_offset; unsigned long group, group_offset;
int i, j, n, ret; __u64 group_min_nr, last_nrs[8];
const unsigned long epg = nilfs_palloc_entries_per_group(inode);
const unsigned epb = NILFS_MDT(inode)->mi_entries_per_block;
unsigned entry_start, end, pos;
spinlock_t *lock;
int i, j, k, ret;
u32 nfree;
for (i = 0; i < nitems; i = j) { for (i = 0; i < nitems; i = j) {
int change_group = false;
int nempties = 0, n = 0;
group = nilfs_palloc_group(inode, entry_nrs[i], &group_offset); group = nilfs_palloc_group(inode, entry_nrs[i], &group_offset);
ret = nilfs_palloc_get_desc_block(inode, group, 0, &desc_bh); ret = nilfs_palloc_get_desc_block(inode, group, 0, &desc_bh);
if (ret < 0) if (ret < 0)
@ -721,38 +760,89 @@ int nilfs_palloc_freev(struct inode *inode, __u64 *entry_nrs, size_t nitems)
brelse(desc_bh); brelse(desc_bh);
return ret; return ret;
} }
desc_kaddr = kmap(desc_bh->b_page);
desc = nilfs_palloc_block_get_group_desc( /* Get the first entry number of the group */
inode, group, desc_bh, desc_kaddr); group_min_nr = (__u64)group * epg;
bitmap_kaddr = kmap(bitmap_bh->b_page); bitmap_kaddr = kmap(bitmap_bh->b_page);
bitmap = bitmap_kaddr + bh_offset(bitmap_bh); bitmap = bitmap_kaddr + bh_offset(bitmap_bh);
for (j = i, n = 0; lock = nilfs_mdt_bgl_lock(inode, group);
(j < nitems) && nilfs_palloc_group_is_in(inode, group,
entry_nrs[j]); j = i;
j++) { entry_start = rounddown(group_offset, epb);
nilfs_palloc_group(inode, entry_nrs[j], &group_offset); do {
if (!nilfs_clear_bit_atomic( if (!nilfs_clear_bit_atomic(lock, group_offset,
nilfs_mdt_bgl_lock(inode, group), bitmap)) {
group_offset, bitmap)) { nilfs_warning(inode->i_sb, __func__,
printk(KERN_WARNING "entry number %llu already freed: ino=%lu\n",
"%s: entry number %llu already freed\n", (unsigned long long)entry_nrs[j],
__func__, (unsigned long)inode->i_ino);
(unsigned long long)entry_nrs[j]);
} else { } else {
n++; n++;
} }
j++;
if (j >= nitems || entry_nrs[j] < group_min_nr ||
entry_nrs[j] >= group_min_nr + epg) {
change_group = true;
} else {
group_offset = entry_nrs[j] - group_min_nr;
if (group_offset >= entry_start &&
group_offset < entry_start + epb) {
/* This entry is in the same block */
continue;
} }
nilfs_palloc_group_desc_add_entries(inode, group, desc, n); }
/* Test if the entry block is empty or not */
end = entry_start + epb;
pos = nilfs_find_next_bit(bitmap, end, entry_start);
if (pos >= end) {
last_nrs[nempties++] = entry_nrs[j - 1];
if (nempties >= ARRAY_SIZE(last_nrs))
break;
}
if (change_group)
break;
/* Go on to the next entry block */
entry_start = rounddown(group_offset, epb);
} while (true);
kunmap(bitmap_bh->b_page); kunmap(bitmap_bh->b_page);
kunmap(desc_bh->b_page);
mark_buffer_dirty(desc_bh);
mark_buffer_dirty(bitmap_bh); mark_buffer_dirty(bitmap_bh);
nilfs_mdt_mark_dirty(inode);
brelse(bitmap_bh); brelse(bitmap_bh);
for (k = 0; k < nempties; k++) {
ret = nilfs_palloc_delete_entry_block(inode,
last_nrs[k]);
if (ret && ret != -ENOENT) {
nilfs_warning(inode->i_sb, __func__,
"failed to delete block of entry %llu: ino=%lu, err=%d\n",
(unsigned long long)last_nrs[k],
(unsigned long)inode->i_ino, ret);
}
}
desc_kaddr = kmap_atomic(desc_bh->b_page);
desc = nilfs_palloc_block_get_group_desc(
inode, group, desc_bh, desc_kaddr);
nfree = nilfs_palloc_group_desc_add_entries(desc, lock, n);
kunmap_atomic(desc_kaddr);
mark_buffer_dirty(desc_bh);
nilfs_mdt_mark_dirty(inode);
brelse(desc_bh); brelse(desc_bh);
if (nfree == nilfs_palloc_entries_per_group(inode)) {
ret = nilfs_palloc_delete_bitmap_block(inode, group);
if (ret && ret != -ENOENT) {
nilfs_warning(inode->i_sb, __func__,
"failed to delete bitmap block of group %lu: ino=%lu, err=%d\n",
group,
(unsigned long)inode->i_ino, ret);
}
}
} }
return 0; return 0;
} }

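The rewritten nilfs_palloc_find_available_slot() above replaces the word-by-word scan with a two-pass wrap-around search over the group bitmap. Its control flow is roughly the following standalone sketch (helper name and exact signature are illustrative; nilfs_find_next_zero_bit and nilfs_set_bit_atomic are the file's own wrappers):

static int find_and_claim_slot(unsigned char *bitmap, unsigned long target,
			       unsigned long bsize, spinlock_t *lock)
{
	unsigned long pos;

	/* First pass: scan [target, bsize). */
	for (pos = target; pos < bsize; pos++) {
		pos = nilfs_find_next_zero_bit(bitmap, bsize, pos);
		if (pos >= bsize)
			break;
		/* Claim the bit atomically; if a racing allocator got it
		 * first, continue from the next bit. */
		if (!nilfs_set_bit_atomic(lock, pos, bitmap))
			return pos;
	}

	/* Second pass: wrap around and scan [0, target). */
	for (pos = 0; pos < target; pos++) {
		pos = nilfs_find_next_zero_bit(bitmap, target, pos);
		if (pos >= target)
			break;
		if (!nilfs_set_bit_atomic(lock, pos, bitmap))
			return pos;
	}

	return -ENOSPC;
}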

@ -77,6 +77,7 @@ int nilfs_palloc_freev(struct inode *, __u64 *, size_t);
#define nilfs_set_bit_atomic ext2_set_bit_atomic #define nilfs_set_bit_atomic ext2_set_bit_atomic
#define nilfs_clear_bit_atomic ext2_clear_bit_atomic #define nilfs_clear_bit_atomic ext2_clear_bit_atomic
#define nilfs_find_next_zero_bit find_next_zero_bit_le #define nilfs_find_next_zero_bit find_next_zero_bit_le
#define nilfs_find_next_bit find_next_bit_le
/** /**
* struct nilfs_bh_assoc - block offset and buffer head association * struct nilfs_bh_assoc - block offset and buffer head association


@ -919,8 +919,6 @@ static void nilfs_btree_split(struct nilfs_bmap *btree,
int level, __u64 *keyp, __u64 *ptrp) int level, __u64 *keyp, __u64 *ptrp)
{ {
struct nilfs_btree_node *node, *right; struct nilfs_btree_node *node, *right;
__u64 newkey;
__u64 newptr;
int nchildren, n, move, ncblk; int nchildren, n, move, ncblk;
node = nilfs_btree_get_nonroot_node(path, level); node = nilfs_btree_get_nonroot_node(path, level);
@ -942,9 +940,6 @@ static void nilfs_btree_split(struct nilfs_bmap *btree,
if (!buffer_dirty(path[level].bp_sib_bh)) if (!buffer_dirty(path[level].bp_sib_bh))
mark_buffer_dirty(path[level].bp_sib_bh); mark_buffer_dirty(path[level].bp_sib_bh);
newkey = nilfs_btree_node_get_key(right, 0);
newptr = path[level].bp_newreq.bpr_ptr;
if (move) { if (move) {
path[level].bp_index -= nilfs_btree_node_get_nchildren(node); path[level].bp_index -= nilfs_btree_node_get_nchildren(node);
nilfs_btree_node_insert(right, path[level].bp_index, nilfs_btree_node_insert(right, path[level].bp_index,
@ -1856,7 +1851,7 @@ int nilfs_btree_convert_and_insert(struct nilfs_bmap *btree,
__u64 key, __u64 ptr, __u64 key, __u64 ptr,
const __u64 *keys, const __u64 *ptrs, int n) const __u64 *keys, const __u64 *ptrs, int n)
{ {
struct buffer_head *bh; struct buffer_head *bh = NULL;
union nilfs_bmap_ptr_req dreq, nreq, *di, *ni; union nilfs_bmap_ptr_req dreq, nreq, *di, *ni;
struct nilfs_bmap_stats stats; struct nilfs_bmap_stats stats;
int ret; int ret;

Some files were not shown because too many files have changed in this diff.