The recent conversion to the automatic kfree() cleanup forgot to mark a
variable with __free(kfree), leading to memory leaks. Fix it.
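For illustration, the shape of the missed annotation looks roughly like
this (hypothetical function name, not the exact code in the control
core):

#include <linux/slab.h>
#include <linux/cleanup.h>
#include <linux/errno.h>
#include <sound/control.h>

static int hypothetical_elem_query(void)
{
        struct snd_ctl_elem_info *info __free(kfree) =
                kzalloc(sizeof(*info), GFP_KERNEL);

        if (!info)
                return -ENOMEM;
        /* without the __free(kfree) annotation above, every return
         * path after the allocation would leak 'info' */
        return 0;
}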
Fixes: 1052d98822 ("ALSA: control: Use automatic cleanup of kfree()")
Reported-by: Mirsad Todorovac <mirsad.todorovac@alu.unizg.hr>
Closes: https://lore.kernel.org/r/c1e2ef3c-164f-4840-9b1c-f7ca07ca422a@alu.unizg.hr
Message-ID: <20240320062722.31325-1-tiwai@suse.de>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Clang prior to 17.0.0 has a bug in the asm goto jump scope analysis it
performs to ensure that no variables with the cleanup attribute are
skipped by an indirect jump. Instead of only checking the scope of each
label that is
a possible target of each asm goto statement, it checks the scope of
every label, which can cause an error when a variable with the cleanup
attribute is used between two asm goto statements with different scopes,
even if they have completely different label targets:
sound/core/hwdep.c:273:8: error: cannot jump from this asm goto statement to one of its possible targets
if (get_user(device, (int __user *)arg))
^
arch/powerpc/include/asm/uaccess.h:295:5: note: expanded from macro 'get_user'
__get_user(x, _gu_addr) : \
^
arch/powerpc/include/asm/uaccess.h:283:2: note: expanded from macro '__get_user'
__get_user_size_allowed(__gu_val, __gu_addr, __gu_size, __gu_err); \
^
arch/powerpc/include/asm/uaccess.h:199:3: note: expanded from macro '__get_user_size_allowed'
__get_user_size_goto(x, ptr, size, __gus_failed); \
^
arch/powerpc/include/asm/uaccess.h:187:10: note: expanded from macro '__get_user_size_goto'
case 1: __get_user_asm_goto(x, (u8 __user *)ptr, label, "lbz"); break; \
^
arch/powerpc/include/asm/uaccess.h:158:2: note: expanded from macro '__get_user_asm_goto'
asm_volatile_goto( \
^
include/linux/compiler_types.h:366:33: note: expanded from macro 'asm_volatile_goto'
#define asm_volatile_goto(x...) asm goto(x)
^
sound/core/hwdep.c:291:9: note: possible target of asm goto statement
if (put_user(device, (int __user *)arg))
^
arch/powerpc/include/asm/uaccess.h:66:5: note: expanded from macro 'put_user'
__put_user(x, _pu_addr) : -EFAULT; \
^
arch/powerpc/include/asm/uaccess.h:52:9: note: expanded from macro '__put_user'
\
^
sound/core/hwdep.c:276:4: note: jump bypasses initialization of variable with __attribute__((cleanup))
scoped_guard(mutex, &register_mutex) {
^
include/linux/cleanup.h:169:20: note: expanded from macro 'scoped_guard'
for (CLASS(_name, scope)(args), \
To avoid this issue, move the put_user() call out of the scoped_guard()
scope, which allows the asm goto scope analysis to see that the variable
with the cleanup attribute will never be skipped by the asm goto
statements.
There should be no functional change because prior to the refactoring,
put_user() was not called under register_mutex, so this call does not
even need to be in the scoped_guard() in the first place.
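A simplified sketch of the resulting shape (not the exact hwdep.c code;
the mutex here is a local stand-in for the real register_mutex):

#include <linux/cleanup.h>
#include <linux/mutex.h>
#include <linux/uaccess.h>

static DEFINE_MUTEX(register_mutex);

static int hypothetical_next_device(void __user *arg)
{
        int device;

        if (get_user(device, (int __user *)arg))
                return -EFAULT;

        scoped_guard(mutex, &register_mutex) {
                /* ... look up the next device and update 'device' ... */
        }

        /* put_user() now sits outside the guarded scope, so the asm goto
         * statements it expands to can no longer appear to jump past the
         * guard's cleanup variable */
        if (put_user(device, (int __user *)arg))
                return -EFAULT;
        return 0;
}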
Fixes: e6684d08cc ("ALSA: hwdep: Use guard() for locking")
Closes: https://github.com/ClangBuiltLinux/linux/issues/2003
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Link: https://lore.kernel.org/r/20240301-fix-snd-hwdep-guard-v1-1-6aab033f3f83@kernel.org
Signed-off-by: Takashi Iwai <tiwai@suse.de>
We can simplify the code gracefully with the new guard() macro and co
for automatic cleanup of locks.
A couple of functions that use snd_card_ref() and *_unref() are also
cleaned up with a defined class.
Only code refactoring, and no functional changes.
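As a hedged illustration of both pieces, with simplified names (the
actual class definition in the patch may differ):

#include <linux/cleanup.h>
#include <linux/mutex.h>
#include <sound/core.h>

/* pair snd_card_ref() with snd_card_unref() in a cleanup class */
DEFINE_CLASS(card_ref, struct snd_card *,
             if (_T) snd_card_unref(_T),        /* runs at scope exit */
             snd_card_ref(cardnum),             /* runs at declaration */
             int cardnum)

static DEFINE_MUTEX(example_mutex);

static int hypothetical_card_op(int cardnum)
{
        CLASS(card_ref, card)(cardnum);         /* unref'd automatically */

        if (!card)
                return -ENODEV;

        guard(mutex)(&example_mutex);           /* unlocked at scope exit */
        /* ... the code that used to sit between lock and unlock ... */
        return 0;
}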
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-25-tiwai@suse.de
The setup_mutex in PCM oss code can be simplified with guard().
(params_lock is tough and not trivial to convert, though.)
Only code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-24-tiwai@suse.de
Define guard() usage for PCM stream locking and use it in appropriate
places.
The pair of snd_pcm_stream_lock() and snd_pcm_stream_unlock() can now
be expressed with guard(pcm_stream_lock).
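Roughly, the guard is defined and used like this (the real definition
lives in the PCM header and may differ in detail):

DEFINE_GUARD(pcm_stream_lock, struct snd_pcm_substream *,
             snd_pcm_stream_lock(_T), snd_pcm_stream_unlock(_T))

and a caller then becomes:

#include <sound/pcm.h>

static void hypothetical_update(struct snd_pcm_substream *substream)
{
        guard(pcm_stream_lock)(substream);      /* unlocked at scope exit */
        /* ... code that used to sit between lock and unlock ... */
}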
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-23-tiwai@suse.de
We can simplify the code gracefully with the new guard() macro and co
for automatic cleanup of locks.
Only code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-22-tiwai@suse.de
We can simplify the code gracefully with the new guard() macro and co
for automatic cleanup of locks.
Only code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-21-tiwai@suse.de
We can simplify the code gracefully with the new guard() macro and co
for automatic cleanup of locks.
Only code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-20-tiwai@suse.de
We can simplify the code gracefully with the new guard() macro and co
for automatic cleanup of locks.
Only code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-19-tiwai@suse.de
We can simplify the code gracefully with the new guard() macro and co
for automatic cleanup of locks.
Only code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-18-tiwai@suse.de
We can simplify the code gracefully with the new guard() macro and co
for automatic cleanup of locks.
Only code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-17-tiwai@suse.de
We can simplify the code gracefully with the new guard() macro and co
for automatic cleanup of locks.
Only code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-16-tiwai@suse.de
We can simplify the code gracefully with the new guard() macro and co
for automatic cleanup of locks.
Only code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-15-tiwai@suse.de
We can simplify the code gracefully with the new guard() macro and co
for automatic cleanup of locks.
Only code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-14-tiwai@suse.de
We can simplify the code gracefully with the new guard() macro and co
for automatic cleanup of locks.
Only code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-13-tiwai@suse.de
We can simplify the code gracefully with the new guard() macro and co
for automatic cleanup of locks.
Only code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-12-tiwai@suse.de
We can simplify the code gracefully with the new guard() macro and co
for automatic cleanup of locks.
Only code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-11-tiwai@suse.de
We can simplify the code gracefully with the new guard() macro and co
for automatic cleanup of locks.
There are a few remaining explicit mutex and spinlock calls; those are
the places where temporary unlock/relocking happens, which guard()
doesn't cover well yet.
Only code refactoring, and no functional changes.
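The kind of pattern guard() cannot express yet looks like this
(illustrative code only, not taken from the patch):

#include <linux/mutex.h>

static void hypothetical_process(struct mutex *lock, void (*slow_op)(void))
{
        mutex_lock(lock);
        /* ... first half of the work under the lock ... */
        mutex_unlock(lock);     /* temporary unlock: slow_op() must not
                                 * run with the lock held */
        slow_op();
        mutex_lock(lock);       /* relock and finish */
        /* ... second half of the work ... */
        mutex_unlock(lock);
}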
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-10-tiwai@suse.de
We can simplify the code gracefully with the new guard() macro and co
for automatic cleanup of locks.
The lops calls under multiple rwsems are factored out as a simple
macro, so that it can be called easily from snd_ctl_dev_register()
and snd_ctl_dev_disconnect().
There are a few remaining explicit rwsem and spinlock calls; those are
the places where the lock downgrade happens or where temporary
unlock/relocking happens, which guard() doesn't cover well yet.
Only code refactoring, and no functional changes.
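For illustration, such a factored-out helper could look roughly like
this (hypothetical macro name and shape; the actual code in control.c
may differ):

#include <linux/cleanup.h>
#include <linux/rwsem.h>

#define snd_ctl_call_layer_ops(card, ops_block...)                      \
        do {                                                            \
                guard(rwsem_read)(&(card)->controls_rwsem);             \
                guard(rwsem_read)(&snd_ctl_layer_rwsem);                \
                ops_block;                                              \
        } while (0)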
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-9-tiwai@suse.de
We can simplify the code gracefully with the new guard() macro and co
for automatic cleanup of locks.
Only code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-8-tiwai@suse.de
We can simplify the code gracefully with the new guard() macro and co
for automatic cleanup of locks.
Only code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-7-tiwai@suse.de
We can simplify the code gracefully with the new guard() macro and co
for automatic cleanup of locks.
There are still a few remaining explicit mutex_lock/unlock calls; those
are in places where we do a temporary unlock/relock, which doesn't fit
well with guard() so far.
Only code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-6-tiwai@suse.de
We can simplify the code gracefully with the new guard() macro and co
for automatic cleanup of locks.
Only code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-5-tiwai@suse.de
We can simplify the code gracefully with the new guard() macro and co
for automatic cleanup of locks.
To make the changes easier, some functions widen the scope of
register_mutex, but this shouldn't have any real impact on performance.
Also, one code block was factored out as a function so that guard()
can be applied cleanly without much indentation.
There are still a few remaining explicit spin_lock/unlock calls; those
are in places where we do a temporary unlock/relock, which doesn't fit
well with guard() so far.
Only code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-4-tiwai@suse.de
We can simplify the code gracefully with the new guard() macro and co
for automatic cleanup of locks.
Explicit mutex_lock/unlock calls remain only in
snd_compress_wait_for_drain(), which does a temporary unlock/relock.
Only code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-3-tiwai@suse.de
We can simplify the code gracefully with the new guard() macro and co
for automatic cleanup of locks.
Only code refactoring, and no functional changes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240227085306.9764-2-tiwai@suse.de
There are common patterns where a temporary buffer is allocated and
freed at the exit, and those can be simplified with the recent cleanup
mechanism via __free(kfree).
No functional changes, only code refactoring.
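A minimal sketch of the pattern, with made-up names: the buffer is
released automatically when it goes out of scope, so early returns no
longer need an explicit kfree().

#include <linux/slab.h>
#include <linux/cleanup.h>
#include <linux/uaccess.h>

static int hypothetical_read_op(void __user *dst, size_t size)
{
        void *buf __free(kfree) = kzalloc(size, GFP_KERNEL);

        if (!buf)
                return -ENOMEM;
        /* ... fill buf ... */
        if (copy_to_user(dst, buf, size))
                return -EFAULT;         /* freed automatically here too */
        return 0;
}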
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240223084241.3361-5-tiwai@suse.de
There are common patterns where a temporary buffer is allocated and
freed at the exit, and those can be simplified with the recent cleanup
mechanism via __free(kfree).
No functional changes, only code refactoring.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240223084241.3361-4-tiwai@suse.de
Now we have a nice definition of CLASS(fd) that can be applied as a
cleanup for the fdget()/fdput() pairs in snd_pcm_link().
No functional changes, only code refactoring.
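Roughly, the converted pattern looks like this (not the full
snd_pcm_link() body; the struct fd accessors shown match the API of
that time):

#include <linux/file.h>

static int hypothetical_fd_user(int fd)
{
        CLASS(fd, f)(fd);       /* fdget() now, fdput() at scope exit */

        if (!f.file)
                return -EBADF;
        /* ... operate on f.file ... */
        return 0;
}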
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240223084241.3361-2-tiwai@suse.de
There are common patterns where a temporary buffer is allocated and
freed at the exit, and those can be simplified with the recent cleanup
mechanism via __free(kfree).
No functional changes, only code refactoring.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240222111509.28390-10-tiwai@suse.de
There are common patterns where a temporary buffer is allocated and
freed at the exit, and those can be simplified with the recent cleanup
mechanism via __free(kfree).
No functional changes, only code refactoring.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240222111509.28390-9-tiwai@suse.de
There are common patterns where a temporary buffer is allocated and
freed at the exit, and those can be simplified with the recent cleanup
mechanism via __free(kfree).
No functional changes, only code refactoring.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240222111509.28390-8-tiwai@suse.de
There are common patterns where a temporary buffer is allocated and
freed at the exit, and those can be simplified with the recent cleanup
mechanism via __free(kfree).
No functional changes, only code refactoring.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240222111509.28390-7-tiwai@suse.de
There are common patterns where a temporary buffer is allocated and
freed at the exit, and those can be simplified with the recent cleanup
mechanism via __free(kfree).
No functional changes, only code refactoring.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240222111509.28390-6-tiwai@suse.de
There are common patterns where a temporary buffer is allocated and
freed at the exit, and those can be simplified with the recent cleanup
mechanism via __free(kfree).
No functional changes, only code refactoring.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240222111509.28390-5-tiwai@suse.de
There are common patterns where a temporary buffer is allocated and
freed at the exit, and those can be simplified with the recent cleanup
mechanism via __free(kfree).
A caveat is that some allocations are memdup_user() and they return an
error pointer instead of NULL. Those need special care, and the value
has to be cleared with no_free_ptr() at the allocation error path.
Other than that, the conversions are straightforward.
No functional changes, only code refactoring.
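For reference, the memdup_user() case then looks roughly like this
(made-up function name):

#include <linux/string.h>
#include <linux/slab.h>
#include <linux/cleanup.h>
#include <linux/err.h>

static int hypothetical_ioctl(void __user *arg, size_t size)
{
        void *data __free(kfree) = memdup_user(arg, size);

        if (IS_ERR(data))
                /* neutralize the cleanup so kfree() is not called on an
                 * error pointer */
                return PTR_ERR(no_free_ptr(data));
        /* ... use data; freed automatically on return ... */
        return 0;
}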
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240222111509.28390-4-tiwai@suse.de
There are common patterns where a temporary buffer is allocated and
freed at the exit, and those can be simplified with the recent cleanup
mechanism via __free(kfree).
A caveat is that some allocations are memdup_user() and they return an
error pointer instead of NULL. Those need special care, and the value
has to be cleared with no_free_ptr() at the allocation error path.
Other than that, the conversions are straightforward.
No functional changes, only code refactoring.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240222111509.28390-3-tiwai@suse.de
There are common patterns where a temporary buffer is allocated and
freed at the exit, and those can be simplified with the recent cleanup
mechanism via __free(kfree).
A caveat is that some allocations are memdup_user() and they return an
error pointer instead of NULL. Those need special care, and the value
has to be cleared with no_free_ptr() at the allocation error path.
Other than that, the conversions are straightforward.
No functional changes, only code refactoring.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20240222111509.28390-2-tiwai@suse.de
Return the used most significant bits based on the sample bit-width
rather than the whole physical sample word size. The starting bit offset
is defined in the format itself.
The behaviour is not changed for 32-bit formats like S32_LE. But with
this change, an msbits value of 24 instead of 32 is returned for 24-bit
formats like S24_LE etc.
Also, commit 2112aa0349 ("ALSA: pcm: Introduce MSBITS subformat interface")
compares the sample bit-width rather than the physical sample bit-width
to reset the MSBITS_MAX bit in the subformat bitmask.
Probably no applications use the msbits value for formats other than
S32_LE/U32_LE, because no drivers reduce the msbits value for other
formats (with the msb offset) at the moment.
For sanity, increase the PCM protocol version, letting user space detect
the changed behaviour.
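A user-space check of the new behaviour could look like this (alsa-lib
sketch; error handling trimmed, and whether S24_LE is accepted depends
on the device):

#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
        snd_pcm_t *pcm;
        snd_pcm_hw_params_t *hw;

        if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
                return 1;
        snd_pcm_hw_params_alloca(&hw);
        snd_pcm_hw_params_any(pcm, hw);
        if (snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S24_LE) == 0 &&
            snd_pcm_hw_params(pcm, hw) == 0)
                /* 24 with this change; 32 with older kernels */
                printf("msbits: %d\n", snd_pcm_hw_params_get_sbits(hw));
        snd_pcm_close(pcm);
        return 0;
}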
Signed-off-by: Jaroslav Kysela <perex@perex.cz>
Link: https://lore.kernel.org/r/20240222173649.1447549-1-perex@perex.cz
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Both snd_seq_prioq_remove_events() and snd_seq_prioq_leave() have a
very similar loop for removing events. Unify them with a callback for
code simplification.
Only code refactoring, and no functional changes.
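The unified walker could look roughly like this (hypothetical helper
and callback names; the real code in seq_prioq.c may differ in detail):

#include <linux/types.h>
#include <linux/spinlock.h>
#include "seq_prioq.h"
#include "seq_memory.h"

typedef bool (*prioq_match_fn)(struct snd_seq_event_cell *cell, void *arg);

static void prioq_remove_matching(struct snd_seq_prioq *f,
                                  prioq_match_fn match, void *arg)
{
        struct snd_seq_event_cell *cell, *next, *prev = NULL;
        struct snd_seq_event_cell *freelist = NULL;
        unsigned long flags;

        spin_lock_irqsave(&f->lock, flags);
        for (cell = f->head; cell; cell = next) {
                next = cell->next;
                if (!match(cell, arg)) {
                        prev = cell;
                        continue;
                }
                /* unlink the matched cell from the queue */
                if (prev)
                        prev->next = next;
                else
                        f->head = next;
                if (f->tail == cell)
                        f->tail = prev;
                f->cells--;
                /* defer freeing until the queue lock is dropped */
                cell->next = freelist;
                freelist = cell;
        }
        spin_unlock_irqrestore(&f->lock, flags);

        while (freelist) {
                cell = freelist;
                freelist = cell->next;
                snd_seq_cell_free(cell);
        }
}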
Link: https://lore.kernel.org/r/20240222132152.29063-1-tiwai@suse.de
Signed-off-by: Takashi Iwai <tiwai@suse.de>
We forgot to remove the line for snd-rtctimer from Makefile while
dropping the functionality. Get rid of the stale line.
Fixes: 34ce71a96d ("ALSA: timer: remove legacy rtctimer")
Link: https://lore.kernel.org/r/20240221092156.28695-1-tiwai@suse.de
Signed-off-by: Takashi Iwai <tiwai@suse.de>
In modern C versions, 'bool' is a keyword that cannot be used as
a variable name, so rename this instance to something else, and
change the variable's type to bool instead.
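For illustration only (a hypothetical struct, not the code touched by
this patch):

#include <linux/types.h>

struct hypothetical_state {
        /* was "int bool;", which newer C standards reject because bool
         * is now a keyword */
        bool active;
};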
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20240216130211.3828455-1-arnd@kernel.org
Signed-off-by: Takashi Iwai <tiwai@suse.de>