// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * Digital Audio (PCM) abstract layer
 * Copyright (c) by Jaroslav Kysela <perex@perex.cz>
 */

#include <linux/io.h>
#include <linux/time.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/moduleparam.h>
#include <linux/export.h>
#include <sound/core.h>
#include <sound/pcm.h>
#include <sound/info.h>
#include <sound/initval.h>
#include "pcm_local.h"

static int preallocate_dma = 1;
module_param(preallocate_dma, int, 0444);
MODULE_PARM_DESC(preallocate_dma, "Preallocate DMA memory when the PCM devices are initialized.");

static int maximum_substreams = 4;
module_param(maximum_substreams, int, 0444);
MODULE_PARM_DESC(maximum_substreams, "Maximum substreams with preallocated DMA memory.");

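/* lower bound for the size-halving fallback loop in preallocate_pcm_pages() */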
static const size_t snd_minimum_buffer = 16384;

static unsigned long max_alloc_per_card = 32UL * 1024UL * 1024UL;
module_param(max_alloc_per_card, ulong, 0644);
MODULE_PARM_DESC(max_alloc_per_card, "Max total allocation bytes per card.");

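/*
 * Accounting of the total PCM buffer bytes allocated per card, used to
 * enforce the max_alloc_per_card limit above; __update_allocated_size()
 * must be called with card->memory_mutex held.
 */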
static void __update_allocated_size(struct snd_card *card, ssize_t bytes)
{
        card->total_pcm_alloc_bytes += bytes;
}

static void update_allocated_size(struct snd_card *card, ssize_t bytes)
{
        guard(mutex)(&card->memory_mutex);
        __update_allocated_size(card, bytes);
}

static void decrease_allocated_size(struct snd_card *card, size_t bytes)
{
        guard(mutex)(&card->memory_mutex);
        WARN_ON(card->total_pcm_alloc_bytes < bytes);
        __update_allocated_size(card, -(ssize_t)bytes);
}
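
/*
 * Allocate DMA pages for the given stream direction, reserving the size
 * against the per-card limit first and correcting the accounting afterwards.
 */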
static int do_alloc_pages(struct snd_card *card, int type, struct device *dev,
                          int str, size_t size, struct snd_dma_buffer *dmab)
{
        enum dma_data_direction dir;
        int err;

        /* check and reserve the requested size */
        scoped_guard(mutex, &card->memory_mutex) {
                if (max_alloc_per_card &&
                    card->total_pcm_alloc_bytes + size > max_alloc_per_card)
                        return -ENOMEM;
                __update_allocated_size(card, size);
        }

        if (str == SNDRV_PCM_STREAM_PLAYBACK)
                dir = DMA_TO_DEVICE;
        else
                dir = DMA_FROM_DEVICE;
        err = snd_dma_alloc_dir_pages(type, dev, dir, size, dmab);
        if (!err) {
                /* the actual allocation size might be bigger than requested,
                 * and we need to correct the accounting
                 */
                if (dmab->bytes != size)
                        update_allocated_size(card, dmab->bytes - size);
        } else {
                /* take the reservation back on allocation failure */
                decrease_allocated_size(card, size);
        }
        return err;
}
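
/* free a buffer allocated via do_alloc_pages() and release its accounting */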
static void do_free_pages(struct snd_card *card, struct snd_dma_buffer *dmab)
{
        if (!dmab->area)
                return;
        decrease_allocated_size(card, dmab->bytes);
        snd_dma_free_pages(dmab);
        dmab->area = NULL;
}

/*
 * try to allocate as large pages as possible.
 * stores the resulting size in substream->dma_buffer.bytes.
 *
 * the minimum size is snd_minimum_buffer; it should be a power of 2.
 */
static int preallocate_pcm_pages(struct snd_pcm_substream *substream,
                                 size_t size, bool no_fallback)
{
        struct snd_dma_buffer *dmab = &substream->dma_buffer;
        struct snd_card *card = substream->pcm->card;
        size_t orig_size = size;
        int err;

        do {
                err = do_alloc_pages(card, dmab->dev.type, dmab->dev.dev,
                                     substream->stream, size, dmab);
                if (err != -ENOMEM)
                        return err;
                if (no_fallback)
                        break;
                size >>= 1;     /* halve the size and retry */
        } while (size >= snd_minimum_buffer);
        dmab->bytes = 0; /* signal the allocation failure */
        pr_warn("ALSA pcmC%dD%d%c,%d:%s: cannot preallocate for size %zu\n",
                substream->pcm->card->number, substream->pcm->device,
                substream->stream ? 'c' : 'p', substream->number,
                substream->pcm->name, orig_size);
        return -ENOMEM;
}

/**
 * snd_pcm_lib_preallocate_free - release the preallocated buffer of the specified substream.
 * @substream: the pcm substream instance
 *
 * Releases the pre-allocated buffer of the given substream.
 */
void snd_pcm_lib_preallocate_free(struct snd_pcm_substream *substream)
{
        do_free_pages(substream->pcm->card, &substream->dma_buffer);
}

/**
 * snd_pcm_lib_preallocate_free_for_all - release all pre-allocated buffers on the pcm
 * @pcm: the pcm instance
 *
 * Releases all the pre-allocated buffers on the given pcm.
 */
void snd_pcm_lib_preallocate_free_for_all(struct snd_pcm *pcm)
{
        struct snd_pcm_substream *substream;
        int stream;

        for_each_pcm_substream(pcm, stream, substream)
                snd_pcm_lib_preallocate_free(substream);
}
EXPORT_SYMBOL(snd_pcm_lib_preallocate_free_for_all);

#ifdef CONFIG_SND_VERBOSE_PROCFS
/*
 * read callback for prealloc proc file
 *
 * prints the current allocated size in kB.
 */
static void snd_pcm_lib_preallocate_proc_read(struct snd_info_entry *entry,
                                              struct snd_info_buffer *buffer)
{
        struct snd_pcm_substream *substream = entry->private_data;
        snd_iprintf(buffer, "%lu\n", (unsigned long) substream->dma_buffer.bytes / 1024);
}

/*
 * read callback for prealloc_max proc file
 *
 * prints the maximum allowed size in kB.
 */
static void snd_pcm_lib_preallocate_max_proc_read(struct snd_info_entry *entry,
                                                  struct snd_info_buffer *buffer)
{
        struct snd_pcm_substream *substream = entry->private_data;
        snd_iprintf(buffer, "%lu\n", (unsigned long) substream->dma_max / 1024);
}

/*
 * write callback for prealloc proc file
 *
 * accepts the preallocation size in kB.
 */
static void snd_pcm_lib_preallocate_proc_write(struct snd_info_entry *entry,
                                               struct snd_info_buffer *buffer)
{
        struct snd_pcm_substream *substream = entry->private_data;
        struct snd_card *card = substream->pcm->card;
        char line[64], str[64];
        unsigned long size;
        struct snd_dma_buffer new_dmab;

        guard(mutex)(&substream->pcm->open_mutex);
        if (substream->runtime) {
                buffer->error = -EBUSY;
                return;
        }
        if (!snd_info_get_line(buffer, line, sizeof(line))) {
                snd_info_get_str(str, line, sizeof(str));
                buffer->error = kstrtoul(str, 10, &size);
                if (buffer->error != 0)
                        return;
                size *= 1024;
                if ((size != 0 && size < 8192) || size > substream->dma_max) {
                        buffer->error = -EINVAL;
                        return;
                }
                if (substream->dma_buffer.bytes == size)
                        return;
                memset(&new_dmab, 0, sizeof(new_dmab));
                new_dmab.dev = substream->dma_buffer.dev;
                if (size > 0) {
                        if (do_alloc_pages(card,
                                           substream->dma_buffer.dev.type,
                                           substream->dma_buffer.dev.dev,
                                           substream->stream,
                                           size, &new_dmab) < 0) {
                                buffer->error = -ENOMEM;
                                pr_debug("ALSA pcmC%dD%d%c,%d:%s: cannot preallocate for size %lu\n",
                                         substream->pcm->card->number, substream->pcm->device,
                                         substream->stream ? 'c' : 'p', substream->number,
                                         substream->pcm->name, size);
                                return;
                        }
                        substream->buffer_bytes_max = size;
                } else {
                        substream->buffer_bytes_max = UINT_MAX;
                }
                if (substream->dma_buffer.area)
                        do_free_pages(card, &substream->dma_buffer);
                substream->dma_buffer = new_dmab;
        } else {
                buffer->error = -EINVAL;
        }
}
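
/* create the "prealloc" and "prealloc_max" proc entries for the substream */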
static inline void preallocate_info_init(struct snd_pcm_substream *substream)
{
        struct snd_info_entry *entry;

        entry = snd_info_create_card_entry(substream->pcm->card, "prealloc",
                                           substream->proc_root);
        if (entry) {
                snd_info_set_text_ops(entry, substream,
                                      snd_pcm_lib_preallocate_proc_read);
                entry->c.text.write = snd_pcm_lib_preallocate_proc_write;
                entry->mode |= 0200;
        }
        entry = snd_info_create_card_entry(substream->pcm->card, "prealloc_max",
                                           substream->proc_root);
        if (entry)
                snd_info_set_text_ops(entry, substream,
                                      snd_pcm_lib_preallocate_max_proc_read);
}

#else /* !CONFIG_SND_VERBOSE_PROCFS */
static inline void preallocate_info_init(struct snd_pcm_substream *substream)
{
}
#endif /* CONFIG_SND_VERBOSE_PROCFS */

/*
 * pre-allocate the buffer and create a proc file for the substream
 */
static int preallocate_pages(struct snd_pcm_substream *substream,
                             int type, struct device *data,
                             size_t size, size_t max, bool managed)
{
        int err;

        if (snd_BUG_ON(substream->dma_buffer.dev.type))
                return -EINVAL;

        substream->dma_buffer.dev.type = type;
        substream->dma_buffer.dev.dev = data;

        if (size > 0) {
                if (!max) {
                        /* no fallback to a smaller size; report -ENOMEM as well */
                        err = preallocate_pcm_pages(substream, size, true);
                        if (err < 0)
                                return err;
                } else if (preallocate_dma &&
                           substream->number < maximum_substreams) {
                        err = preallocate_pcm_pages(substream, size, false);
                        if (err < 0 && err != -ENOMEM)
                                return err;
                }
        }

        if (substream->dma_buffer.bytes > 0)
                substream->buffer_bytes_max = substream->dma_buffer.bytes;
        substream->dma_max = max;
        if (max > 0)
                preallocate_info_init(substream);
        if (managed)
                substream->managed_buffer_alloc = 1;
        return 0;
}
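
/* apply the preallocation set-up to every substream of the given pcm */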
static int preallocate_pages_for_all(struct snd_pcm *pcm, int type,
                                     void *data, size_t size, size_t max,
                                     bool managed)
{
        struct snd_pcm_substream *substream;
        int stream, err;

        for_each_pcm_substream(pcm, stream, substream) {
                err = preallocate_pages(substream, type, data, size, max, managed);
                if (err < 0)
                        return err;
        }
        return 0;
}
|
ALSA: pcm: Introduce managed buffer allocation mode
This patch adds the support for the feature to automatically allocate
and free PCM buffers, so called "managed buffer allocation" mode.
It's set up via new PCM helpers, snd_pcm_set_managed_buffer() and
snd_pcm_set_managed_buffer_all(), both of which correspond to the
existing preallocator helpers, snd_pcm_lib_preallocate_pages() and
snd_pcm_lib_preallocate_pages_for_all(). When the new helper is used,
it not only performs the pre-allocation of buffers, but also it
manages to call snd_pcm_lib_malloc_pages() before the PCM hw_params
ops and snd_lib_pcm_free() after the PCM hw_free ops inside PCM core,
respectively. This allows drivers to drop the explicit calls of the
memory allocation / release functions, and it will be a good amount of
code reduction in the end of this patch series.
When the PCM substream is set to the managed buffer allocation mode,
the managed_buffer_alloc flag is set in the substream object. Since
some drivers want to know when a buffer is newly allocated or
re-allocated at hw_params callback (e.g. want to set up the additional
stuff for the given buffer only at allocation time), now PCM core
turns on buffer_changed flag when the buffer has changed.
The standard conversions to use the new API will be straightforward:
- Replace snd_pcm_lib_preallocate*() calls with the corresponding
snd_pcm_set_managed_buffer*(); the arguments should be unchanged
- Drop superfluous snd_pcm_lib_malloc() and snd_pcm_lib_free() calls;
the check of snd_pcm_lib_malloc() returns should be replaced with
the check of runtime->buffer_changed flag.
- If hw_params or hw_free becomes empty, drop them from PCM ops
Link: https://lore.kernel.org/r/20191117085308.23915-2-tiwai@suse.de
Signed-off-by: Takashi Iwai <tiwai@suse.de>
2019-11-17 08:53:01 +00:00
|
|
|
}

/**
 * snd_pcm_lib_preallocate_pages - pre-allocation for the given DMA type
 * @substream: the pcm substream instance
 * @type: DMA type (SNDRV_DMA_TYPE_*)
 * @data: DMA type dependent data
 * @size: the requested pre-allocation size in bytes
 * @max: the max. allowed pre-allocation size
 *
 * Do pre-allocation for the given DMA buffer type.
 */
void snd_pcm_lib_preallocate_pages(struct snd_pcm_substream *substream,
				   int type, struct device *data,
				   size_t size, size_t max)
{
	preallocate_pages(substream, type, data, size, max, false);
}
EXPORT_SYMBOL(snd_pcm_lib_preallocate_pages);
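
/*
 * Example (illustrative sketch only, not part of this file): a driver using
 * the legacy, non-managed mode typically calls this helper for each substream
 * while building its PCM object.  The device pointer and the 64 kB / 1 MB
 * sizes below are made-up placeholder values.
 *
 *	static void example_preallocate_one(struct snd_pcm_substream *substream,
 *					    struct device *dev)
 *	{
 *		// reserve 64 kB now, allow re-allocation up to 1 MB at hw_params
 *		snd_pcm_lib_preallocate_pages(substream, SNDRV_DMA_TYPE_DEV,
 *					      dev, 64 * 1024, 1024 * 1024);
 *	}
 */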

/**
 * snd_pcm_lib_preallocate_pages_for_all - pre-allocation for continuous memory type (all substreams)
 * @pcm: the pcm instance
 * @type: DMA type (SNDRV_DMA_TYPE_*)
 * @data: DMA type dependent data
 * @size: the requested pre-allocation size in bytes
 * @max: the max. allowed pre-allocation size
 *
 * Do pre-allocation to all substreams of the given pcm for the
 * specified DMA type.
 */
void snd_pcm_lib_preallocate_pages_for_all(struct snd_pcm *pcm,
					   int type, void *data,
					   size_t size, size_t max)
{
	preallocate_pages_for_all(pcm, type, data, size, max, false);
}
EXPORT_SYMBOL(snd_pcm_lib_preallocate_pages_for_all);
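
/*
 * Example (illustrative sketch): the all-substreams variant is usually called
 * once right after snd_pcm_new(), so every playback and capture substream
 * gets the same preallocation policy.  "chip->dev" is a hypothetical device
 * pointer owned by the driver.
 *
 *	snd_pcm_lib_preallocate_pages_for_all(pcm, SNDRV_DMA_TYPE_DEV,
 *					      chip->dev,
 *					      64 * 1024, 1024 * 1024);
 */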

/**
 * snd_pcm_set_managed_buffer - set up buffer management for a substream
 * @substream: the pcm substream instance
 * @type: DMA type (SNDRV_DMA_TYPE_*)
 * @data: DMA type dependent data
 * @size: the requested pre-allocation size in bytes
 * @max: the max. allowed pre-allocation size
 *
 * Do pre-allocation for the given DMA buffer type, and set the managed
 * buffer allocation mode to the given substream.
 * In this mode, PCM core allocates a buffer automatically before the PCM
 * hw_params ops call and releases it after the PCM hw_free ops call, so that
 * the driver doesn't need to invoke the allocation and the release explicitly
 * in its callbacks.
 * When a buffer is actually allocated before the PCM hw_params call, the
 * runtime buffer_changed flag is turned on, so that drivers can adjust their
 * h/w parameters accordingly.
 *
 * When @size is non-zero and @max is zero, this tries to allocate only the
 * exact buffer size without fallback, and may return -ENOMEM; this is the
 * behavior of snd_pcm_set_fixed_buffer().
 * Otherwise, the function tries to allocate smaller chunks if the allocation
 * fails.
 *
 * When both @size and @max are zero, the function only sets up the buffer
 * for later dynamic allocations.  It's typically used for buffers with the
 * SNDRV_DMA_TYPE_VMALLOC type.
 *
 * Return: zero if successful, or a negative error code
 */
int snd_pcm_set_managed_buffer(struct snd_pcm_substream *substream, int type,
			       struct device *data, size_t size, size_t max)
{
	return preallocate_pages(substream, type, data, size, max, true);
}
EXPORT_SYMBOL(snd_pcm_set_managed_buffer);
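
/*
 * Example (illustrative sketch): in the managed mode the driver no longer
 * calls snd_pcm_lib_malloc_pages()/snd_pcm_lib_free_pages() itself; it only
 * reacts to runtime->buffer_changed in its hw_params callback when hardware
 * needs to be reprogrammed for a newly (re-)allocated buffer.  The names
 * example_* and "chip" are hypothetical.
 *
 *	// at construction time, per substream:
 *	snd_pcm_set_managed_buffer(substream, SNDRV_DMA_TYPE_DEV, chip->dev,
 *				   64 * 1024, 1024 * 1024);
 *
 *	// in the driver's hw_params op (buffer already allocated by PCM core):
 *	static int example_hw_params(struct snd_pcm_substream *substream,
 *				     struct snd_pcm_hw_params *params)
 *	{
 *		if (substream->runtime->buffer_changed)
 *			example_setup_dma_descriptors(substream);
 *		return 0;
 *	}
 */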

/**
 * snd_pcm_set_managed_buffer_all - set up buffer management for all substreams
 * @pcm: the pcm instance
 * @type: DMA type (SNDRV_DMA_TYPE_*)
 * @data: DMA type dependent data
 * @size: the requested pre-allocation size in bytes
 * @max: the max. allowed pre-allocation size
 *
 * Do pre-allocation to all substreams of the given pcm for the specified DMA
 * type and size, and set the managed_buffer_alloc flag to each substream.
 *
 * Return: zero if successful, or a negative error code
 */
int snd_pcm_set_managed_buffer_all(struct snd_pcm *pcm, int type,
				   struct device *data,
				   size_t size, size_t max)
{
	return preallocate_pages_for_all(pcm, type, data, size, max, true);
}
EXPORT_SYMBOL(snd_pcm_set_managed_buffer_all);
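
/*
 * Example (illustrative sketch): a converted driver can do the whole buffer
 * setup with a single call after creating the PCM; the sizes and the "chip"
 * pointer are placeholder assumptions.
 *
 *	snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_DEV, chip->dev,
 *				       64 * 1024, 1024 * 1024);
 *
 * Passing 0 for both @size and @max defers the allocation entirely, as
 * described above (typical for SNDRV_DMA_TYPE_VMALLOC buffers).
 */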

/**
 * snd_pcm_lib_malloc_pages - allocate the DMA buffer
 * @substream: the substream to allocate the DMA buffer to
 * @size: the requested buffer size in bytes
 *
 * Allocates the DMA buffer on the BUS type given earlier to
 * snd_pcm_lib_preallocate_xxx_pages().
 *
 * Return: 1 if the buffer is changed, 0 if not changed, or a negative
 * code on failure.
 */
int snd_pcm_lib_malloc_pages(struct snd_pcm_substream *substream, size_t size)
{
	struct snd_card *card;
	struct snd_pcm_runtime *runtime;
	struct snd_dma_buffer *dmab = NULL;

	if (PCM_RUNTIME_CHECK(substream))
		return -EINVAL;
	if (snd_BUG_ON(substream->dma_buffer.dev.type ==
		       SNDRV_DMA_TYPE_UNKNOWN))
		return -EINVAL;
	runtime = substream->runtime;
	card = substream->pcm->card;

	if (runtime->dma_buffer_p) {
		/* perhaps we could free the large DMA memory region here to
		 * save some space, but keeping the existing buffer is the
		 * cheaper solution
		 */
		if (runtime->dma_buffer_p->bytes >= size) {
			runtime->dma_bytes = size;
			return 0;	/* ok, do not change */
		}
		snd_pcm_lib_free_pages(substream);
	}
	if (substream->dma_buffer.area != NULL &&
	    substream->dma_buffer.bytes >= size) {
		dmab = &substream->dma_buffer; /* use the pre-allocated buffer */
	} else {
		/* dma_max == 0 means a fixed-size preallocation */
		if (substream->dma_buffer.area && !substream->dma_max)
			return -ENOMEM;
		dmab = kzalloc(sizeof(*dmab), GFP_KERNEL);
		if (!dmab)
			return -ENOMEM;
		dmab->dev = substream->dma_buffer.dev;
		if (do_alloc_pages(card,
				   substream->dma_buffer.dev.type,
				   substream->dma_buffer.dev.dev,
				   substream->stream,
				   size, dmab) < 0) {
			kfree(dmab);
			pr_debug("ALSA pcmC%dD%d%c,%d:%s: cannot preallocate for size %zu\n",
				 substream->pcm->card->number, substream->pcm->device,
				 substream->stream ? 'c' : 'p', substream->number,
				 substream->pcm->name, size);
			return -ENOMEM;
		}
	}
	snd_pcm_set_runtime_buffer(substream, dmab);
	runtime->dma_bytes = size;
	return 1;			/* area was changed */
}
EXPORT_SYMBOL(snd_pcm_lib_malloc_pages);
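
/*
 * Example (illustrative sketch): in the legacy, non-managed mode a driver
 * allocates the buffer itself from its hw_params callback and uses the
 * positive return value to detect that the buffer actually changed.  The
 * example_* names are hypothetical.
 *
 *	static int example_hw_params(struct snd_pcm_substream *substream,
 *				     struct snd_pcm_hw_params *params)
 *	{
 *		int changed;
 *
 *		changed = snd_pcm_lib_malloc_pages(substream,
 *						   params_buffer_bytes(params));
 *		if (changed < 0)
 *			return changed;
 *		if (changed > 0)
 *			example_setup_dma_descriptors(substream);
 *		return 0;
 *	}
 */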

/**
 * snd_pcm_lib_free_pages - release the allocated DMA buffer.
 * @substream: the substream to release the DMA buffer
 *
 * Releases the DMA buffer allocated via snd_pcm_lib_malloc_pages().
 *
 * Return: Zero if successful, or a negative error code on failure.
 */
int snd_pcm_lib_free_pages(struct snd_pcm_substream *substream)
{
	struct snd_pcm_runtime *runtime;

	if (PCM_RUNTIME_CHECK(substream))
		return -EINVAL;
	runtime = substream->runtime;
	if (runtime->dma_area == NULL)
		return 0;
	if (runtime->dma_buffer_p != &substream->dma_buffer) {
		struct snd_card *card = substream->pcm->card;

		/* it's a newly allocated buffer; release it now */
		do_free_pages(card, runtime->dma_buffer_p);
		kfree(runtime->dma_buffer_p);
	}
	snd_pcm_set_runtime_buffer(substream, NULL);
	return 0;
}
EXPORT_SYMBOL(snd_pcm_lib_free_pages);
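
/*
 * Example (illustrative sketch): the matching hw_free callback in the legacy
 * mode simply hands the buffer back; with managed buffers (see
 * snd_pcm_set_managed_buffer()) this callback can often be dropped entirely.
 *
 *	static int example_hw_free(struct snd_pcm_substream *substream)
 *	{
 *		return snd_pcm_lib_free_pages(substream);
 *	}
 */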