Commit Graph

982081 Commits

Marc Zyngier
a0db197f53 gpiolib: cdev: Flag invalid GPIOs as used
When reporting the state of a GPIO to userspace, we never check
for the actual validity of the line, meaning we report invalid
lines as being usable. A subsequent request will fail though,
which is an inconsistent behaviour from a userspace perspective.

Instead, let's check for the validity of the line and report it
as used if it is invalid. This allows a tool such as gpioinfo
to report something sensible:

gpiochip3 - 4 lines:
	line   0:      unnamed       unused   input  active-high
	line   1:      unnamed       kernel   input  active-high [used]
	line   2:      unnamed       kernel   input  active-high [used]
	line   3:      unnamed       unused   input  active-high

In this example, lines 1 and 2 are invalid, and cannot be used by
userspace.
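
A minimal sketch of the idea, assuming the gpiochip_line_is_valid() helper
and the v1 line-info flags (variable names follow the surrounding line-info
code rather than the verbatim patch):

    if (!gpiochip_line_is_valid(gc, info->offset))
            info->flags |= GPIOLINE_FLAG_KERNEL;  /* shown as [used] by gpioinfo */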

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Link: https://lore.kernel.org/r/20201204164739.781812-2-maz@kernel.org
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2020-12-09 10:23:04 +01:00
Anant Thazhemadam
e51d68e76d fs: quota: fix array-index-out-of-bounds bug by passing correct argument to vfs_cleanup_quota_inode()
When dquot_resume() was last updated, the argument that got passed
to vfs_cleanup_quota_inode was incorrectly set.

If type = -1 and dquot_load_quota_sb() returns a negative value,
then vfs_cleanup_quota_inode() gets called with -1 passed as an
argument, and this leads to an array-index-out-of-bounds bug.

Fix this issue by correctly passing the arguments.
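
A simplified sketch of the fix (not the verbatim kernel code): dquot_resume()
walks the quota types with 'cnt', so the cleanup on failure must use 'cnt'
rather than the caller-supplied 'type', which may legitimately be -1:

    for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
            if (type != -1 && cnt != type)
                    continue;
            /* ... reload the quota file for this type ... */
            ret = dquot_load_quota_sb(sb, cnt, dqopt->files[cnt], flags);
            if (ret < 0)
                    vfs_cleanup_quota_inode(sb, cnt);  /* was: 'type', i.e. -1 */
    }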

Fixes: ae45f07d47 ("quota: Simplify dquot_resume()")
Link: https://lore.kernel.org/r/20201208194338.7064-1-anant.thazhemadam@gmail.com
Reported-by: syzbot+2643e825238d7aabb37f@syzkaller.appspotmail.com
Tested-by: syzbot+2643e825238d7aabb37f@syzkaller.appspotmail.com
CC: stable@vger.kernel.org
Signed-off-by: Anant Thazhemadam <anant.thazhemadam@gmail.com>
Signed-off-by: Jan Kara <jack@suse.cz>
2020-12-09 10:07:10 +01:00
Jessica Yu
38dc717e97 module: delay kobject uevent until after module init call
Apparently there has been a longstanding race between udev/systemd and
the module loader. Currently, the module loader sends a uevent right
after sysfs initialization, but before the module calls its init
function. However, some udev rules expect that the module has
initialized already upon receiving the uevent.

This race has been triggered recently (see link in references) in some
systemd mount unit files. For instance, the configfs module creates the
/sys/kernel/config mount point in its init function, however the module
loader issues the uevent before this happens. sys-kernel-config.mount
expects to be able to mount /sys/kernel/config upon receipt of the
module loading uevent, but if the configfs module has not called its
init function yet, then this directory will not exist and the mount unit
fails. A similar situation exists for sys-fs-fuse-connections.mount, as
the fuse sysfs mount point is created during the fuse module's init
function. If udev is faster than module initialization then the mount
unit would fail in a similar fashion.

To fix this race, delay the module KOBJ_ADD uevent until after the
module has finished calling its init routine.
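
A rough sketch of the resulting ordering in do_init_module() (simplified,
not the exact upstream diff):

    /* Run the module's init function first... */
    if (mod->init)
            ret = do_one_initcall(mod->init);
    /* (error handling omitted for brevity) */

    /* ...and only then announce the module to userspace. */
    kobject_uevent(&mod->mkobj.kobj, KOBJ_ADD);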

References: https://github.com/systemd/systemd/issues/17586
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tested-By: Nicolas Morey-Chaisemartin <nmoreychaisemartin@suse.com>
Signed-off-by: Jessica Yu <jeyu@kernel.org>
2020-12-09 09:42:47 +01:00
Chunyan Zhang
263ade7166 gpio: eic-sprd: break loop when getting NULL device resource
The EIC controller has a varying number of banks on different Spreadtrum
SoCs, and each bank has its own base address. The loop in the driver that
fetches these base addresses should break as soon as the resource returned
by platform_get_resource() is NULL, since all later ones would be NULL as
well if the loop continued.
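
A sketch of the fixed probe loop (simplified; field and variable names
follow gpio-eic-sprd.c but are not quoted verbatim):

    struct resource *res;
    int i;

    for (i = 0; i < SPRD_EIC_MAX_BANK; i++) {
            res = platform_get_resource(pdev, IORESOURCE_MEM, i);
            if (!res)
                    break;  /* later banks cannot have a valid resource either */

            sprd_eic->base[i] = devm_ioremap_resource(&pdev->dev, res);
            if (IS_ERR(sprd_eic->base[i]))
                    return PTR_ERR(sprd_eic->base[i]);
    }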

Fixes: 25518e024e ("gpio: Add Spreadtrum EIC driver support")
Signed-off-by: Chunyan Zhang <chunyan.zhang@unisoc.com>
Link: https://lore.kernel.org/r/20201209055106.840100-1-zhang.lyra@gmail.com
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2020-12-09 09:41:49 +01:00
Andy Lutomirski
e45cdc71d1 membarrier: Execute SYNC_CORE on the calling thread
membarrier()'s MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE is documented as
syncing the core on all sibling threads but not necessarily the calling
thread.  This behavior is fundamentally buggy and cannot be used safely.

Suppose a user program has two threads.  Thread A is on CPU 0 and thread B
is on CPU 1.  Thread A modifies some text and calls
membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE).

Then thread B executes the modified code.  At any point after membarrier()
decides which CPUs to target, thread A could be preempted and replaced by
thread B on CPU 0.  This could even happen on exit from the membarrier()
syscall.  If this happens, thread B will end up running on CPU 0 without
having synced.

In principle, this could be fixed by arranging for the scheduler to issue
sync_core_before_usermode() whenever switching between two threads in the
same mm if there is any possibility of a concurrent membarrier() call, but
this would have considerable overhead.  Instead, make membarrier() sync the
calling CPU as well.

As an optimization, this avoids an extra smp_mb() in the default
barrier-only mode and an extra rseq preempt on the caller.
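
A userspace usage sketch of the scenario above (illustrative only, error
handling omitted); with this change the calling CPU is serialized too, so
thread A may also jump into the freshly written code right after the
syscall returns:

    #include <linux/membarrier.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* once, at startup */
    syscall(__NR_membarrier,
            MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE, 0);

    /* after thread A has modified the code that thread B will execute */
    syscall(__NR_membarrier, MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE, 0);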

Fixes: 70216e18e5 ("membarrier: Provide core serializing command, *_SYNC_CORE")
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://lore.kernel.org/r/250ded637696d490c69bef1877148db86066881c.1607058304.git.luto@kernel.org
2020-12-09 09:37:43 +01:00
Andy Lutomirski
758c9373d8 membarrier: Explicitly sync remote cores when SYNC_CORE is requested
membarrier() does not explicitly sync_core() remote CPUs; instead, it
relies on the assumption that an IPI will result in a core sync.  On x86,
this may be true in practice, but it's not architecturally reliable.  In
particular, the SDM and APM do not appear to guarantee that interrupt
delivery is serializing.  While IRET does serialize, IPI return can
schedule, thereby switching to another task in the same mm that was
sleeping in a syscall.  The new task could then SYSRET back to usermode
without ever executing IRET.

Make this more robust by explicitly calling sync_core_before_usermode()
on remote cores.  (This also helps people who search the kernel tree for
instances of sync_core() and sync_core_before_usermode() -- one might be
surprised that the core membarrier code doesn't currently show up in
such a search.)
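
A minimal sketch of what the IPI handler is expected to do (illustrative,
not the verbatim upstream code):

    static void ipi_sync_core(void *info)
    {
            /* Order prior memory accesses against the sync... */
            smp_mb();
            /* ...and serialize the core instead of relying on IRET. */
            sync_core_before_usermode();
    }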

Fixes: 70216e18e5 ("membarrier: Provide core serializing command, *_SYNC_CORE")
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/776b448d5f7bd6b12690707f5ed67bcda7f1d427.1607058304.git.luto@kernel.org
2020-12-09 09:37:43 +01:00
Andy Lutomirski
2ecedd7569 membarrier: Add an actual barrier before rseq_preempt()
It seems that most RSEQ membarrier users will expect any stores done before
the membarrier() syscall to be visible to the target task(s).  While this
is extremely likely to be true in practice, nothing actually guarantees it
by a strict reading of the x86 manuals.  Rather than providing this
guarantee by accident and potentially causing a problem down the road, just
add an explicit barrier.
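
The shape of the change, roughly (an illustrative sketch rather than the
exact diff): put a full barrier in the RSEQ IPI handler before flagging the
restart:

    static void ipi_rseq(void *info)
    {
            /* Make the caller's prior stores visible before restarting
             * the target task's rseq critical section. */
            smp_mb();
            rseq_preempt(current);
    }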

Fixes: 70216e18e5 ("membarrier: Provide core serializing command, *_SYNC_CORE")
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/d3e7197e034fa4852afcf370ca49c30496e58e40.1607058304.git.luto@kernel.org
2020-12-09 09:37:43 +01:00
Andy Lutomirski
a493d1ca1a x86/membarrier: Get rid of a dubious optimization
sync_core_before_usermode() had an incorrect optimization.  If the kernel
returns from an interrupt, it can get to usermode without IRET. It just has
to schedule to a different task in the same mm and do SYSRET.  Fortunately,
there were no callers of sync_core_before_usermode() that could have had
in_irq() or in_nmi() equal to true, because it's only ever called from the
scheduler.

While at it, clarify a related comment.
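
Roughly the resulting shape of the helper after dropping the
in_irq()/in_nmi() shortcut (a simplified sketch, not the verbatim code):

    static inline void sync_core_before_usermode(void)
    {
            /* With PTI, we unconditionally serialize before running user code. */
            if (static_cpu_has(X86_FEATURE_PTI))
                    return;

            /* We may return to usermode via SYSRET after rescheduling, so do
             * not assume IRET will serialize for us; just sync now. */
            sync_core();
    }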

Fixes: 70216e18e5 ("membarrier: Provide core serializing command, *_SYNC_CORE")
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/5afc7632be1422f91eaf7611aaaa1b5b8580a086.1607058304.git.luto@kernel.org
2020-12-09 09:37:42 +01:00
Chris Chiu
7e41352847 ALSA: hda/realtek - Enable headset mic of ASUS Q524UQK with ALC255
The ASUS laptop Q524UQK with the ALC255 codec can't detect the headset
microphone until the ALC255_FIXUP_ASUS_MIC_NO_PRESENCE quirk is applied.
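
The fix follows the usual quirk-table pattern in patch_realtek.c; the
subsystem ID below is a placeholder for illustration, not the Q524UQK's
real ID:

    SND_PCI_QUIRK(0x1043, 0x0000 /* placeholder SSID */, "ASUS Q524UQK",
                  ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),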

Signed-off-by: Chris Chiu <chiu@endlessos.org>
Signed-off-by: Jian-Hong Pan <jhp@endlessos.org>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20201209045730.9972-1-chiu@endlessos.org
Signed-off-by: Takashi Iwai <tiwai@suse.de>
2020-12-09 09:26:52 +01:00
Andy Shevchenko
e8873c0afd pinctrl: intel: Actually disable Tx and Rx buffers on GPIO request
The buffers (input and output) mistakenly become enabled together for a short
period of time during a GPIO request. This is problematic because, contrary to
the original intent of commit af7e3eeb84
("pinctrl: intel: Disable input and output buffer when switching to GPIO")
to disable them, the value driven on the pin, which might be used as an IRQ
line, brings the firmware of some touchpads into an awkward state that needs a
full power off to recover. Fix this, as stated in the culprit commit, by
actually disabling the buffers.
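
An illustrative sketch of the approach (bit names as used by
pinctrl-intel.c, operating on the pad's PADCFG0 register pointer; not the
exact upstream diff): keep both buffers disabled within a single
read-modify-write so they are never enabled at the same time:

    u32 value = readl(padcfg0);

    /* Disable the input and output buffers together, in one write. */
    value |= PADCFG0_GPIORXDIS | PADCFG0_GPIOTXDIS;
    writel(value, padcfg0);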

Fixes: af7e3eeb84 ("pinctrl: intel: Disable input and output buffer when switching to GPIO")
BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=210497
Reported-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Tested-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
Tested-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Link: https://lore.kernel.org/r/20201208182403.40435-1-andriy.shevchenko@linux.intel.com
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2020-12-09 09:08:18 +01:00
Ard Biesheuvel
54649911f3 efi: stub: get rid of efi_get_max_fdt_addr()
Now that ARM started following the example of arm64 and RISC-V, and
no longer imposes any restrictions on the placement of the FDT in
memory at boot, we no longer need per-arch implementations of
efi_get_max_fdt_addr() to factor out the differences. So get rid of
it.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Atish Patra <atish.patra@wdc.com>
Link: https://lore.kernel.org/r/20201029134901.9773-1-ardb@kernel.org
2020-12-09 08:37:27 +01:00
Heinrich Schuchardt
ff20661bb5 efi/efi_test: read RuntimeServicesSupported
Since the UEFI 2.8A specification, UEFI-enabled firmware provides a
configuration table, EFI_RT_PROPERTIES_TABLE, which indicates which runtime
services are enabled. The EFI stub reads this table and saves the value of
the field RuntimeServicesSupported internally.

The Firmware Test Suite requires the value to determine if UEFI runtime
services are correctly implemented.

With this patch, an IOCTL call is provided to read the value of the field
RuntimeServicesSupported, e.g.:

    #define EFI_RUNTIME_GET_SUPPORTED_MASK \
            _IOR('p', 0x0C, unsigned int)
    unsigned int mask;
    int fd, ret;
    fd = open("/dev/efi_test", O_RDWR);
    ret = ioctl(fd, EFI_RUNTIME_GET_SUPPORTED_MASK, &mask);

Signed-off-by: Heinrich Schuchardt <xypron.glpk@gmx.de>
Link: https://lore.kernel.org/r/20201127192051.1430-1-xypron.glpk@gmx.de
Acked-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Ivan Hu <ivan.hu@canonical.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-12-09 08:37:27 +01:00
Ard Biesheuvel
c0249238fe efi: arm: reduce minimum alignment of uncompressed kernel
Now that we reduced the minimum relative alignment between PHYS_OFFSET
and PAGE_OFFSET to 2 MiB, we can take this into account when allocating
memory for the decompressed kernel when booting via EFI. This minimizes
the amount of unusable memory we may end up with due to the base of DRAM
being occupied by firmware.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-12-09 08:37:27 +01:00
Ard Biesheuvel
4dbe44fb53 efi: capsule: clean scatter-gather entries from the D-cache
Scatter-gather lists passed to UpdateCapsule() should be cleaned
from the D-cache to ensure that they are visible to the CPU after a
warm reboot before the MMU is enabled. On ARM and arm64 systems, this
implies a D-cache clean by virtual address to the point of coherency.

However, due to the fact that the firmware itself is not able to map
physical addresses back to virtual addresses when running under the OS,
this must be done by the caller.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-12-09 08:37:27 +01:00
Ping-Ke Shih
3f79e54159 rtlwifi: rtl8192de: fix ofdm power compensation
ofdm_index[] is used to indicate how much power compensation is needed for
the current thermal value. For an internal PA module or the 2.4G band, the
min_index is different from other cases.

This issue was originally reported by Dan. He found that the size of
ofdm_index[] is 2, but the access index 'i' may be equal to 2 if 'rf' is 2
in the 'is2t' case.

In fact, the chunk of code was added in the wrong place, so move it back to
the proper place; this fixes both the power compensation and the buffer
overflow.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Link: https://lore.kernel.org/r/20201207031903.7599-1-pkshih@realtek.com
2020-12-09 09:22:37 +02:00
Mathy Vanhoef
8a71f34bb2 ath9k_htc: adhere to the DONT_REORDER transmit flag
Ensure that frames with the fixed-order flag are not reordered
relative to each other. This is accomplished by transmitting them
using a fixed priority independent of their QoS field.
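
A sketch of the idea (illustrative, not the verbatim driver change),
assuming the mac80211 IEEE80211_TX_CTRL_DONT_REORDER control flag:

    struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
    u8 priority;

    if (info->control.flags & IEEE80211_TX_CTRL_DONT_REORDER)
            priority = 0;                    /* one fixed priority */
    else
            priority = skb->priority & 0x7;  /* usual QoS-derived priority */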

Signed-off-by: Mathy Vanhoef <Mathy.Vanhoef@kuleuven.be>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Link: https://lore.kernel.org/r/20201104061823.197407-6-Mathy.Vanhoef@kuleuven.be
2020-12-09 09:05:20 +02:00
Pradeep Kumar Chitrapu
743b9065fe ath11k: mesh: add support for 256 bitmap in blockack frames in 11ax
Currently, the 256 bitmap in blockack frames is set only for AP mode.
Fix this to set it whenever the beacon has changed and has HE capability,
thereby supporting mesh as well.

Tested-on: IPQ8074 hw2.0 AHB WLAN.HK.2.1.0.1-01228-QCAHKSWPL_SILICONZ-1

Signed-off-by: Pradeep Kumar Chitrapu <pradeepc@codeaurora.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Link: https://lore.kernel.org/r/20201020183111.25458-5-pradeepc@codeaurora.org
2020-12-09 09:04:45 +02:00
Pradeep Kumar Chitrapu
bd88815060 ath11k: support TXOP duration based RTS threshold
The HE operation IE in beacons is constructed based on userspace params,
which the firmware might not be aware of. This causes the firmware not to
configure the TXOP duration based RTS threshold, which could cause a
mismatch in behaviour with respect to what is being advertised in beacons.
This patch sends the HE operation IE fetched from the beacon to the
firmware over the WMI interface for configuration.

Tested-on: IPQ8074 hw2.0 AHB WLAN.HK.2.1.0.1-01228-QCAHKSWPL_SILICONZ-1

Signed-off-by: Pradeep Kumar Chitrapu <pradeepc@codeaurora.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Link: https://lore.kernel.org/r/20201020183111.25458-4-pradeepc@codeaurora.org
2020-12-09 09:04:42 +02:00
Pradeep Kumar Chitrapu
f00490432b ath11k: fix incorrect wmi param for configuring HE operation
WMI_VDEV_PARAM_TX_OFDMA_CPLEN has been added with an incorrect enum
value. Fix the enum so that it represents the WMI param for configuring
the HE operation field.

This is applicable to firmwares under
https://github.com/kvalo/ath11k-firmware/tree/master/IPQ8074/hw2.0
https://github.com/kvalo/ath11k-firmware/tree/master/IPQ6018/hw1.0
https://github.com/kvalo/ath11k-firmware/tree/master/QCN9074/hw1.0

Tested-on: IPQ8074 hw2.0 AHB WLAN.HK.2.1.0.1-01228-QCAHKSWPL_SILICONZ-1

Signed-off-by: Pradeep Kumar Chitrapu <pradeepc@codeaurora.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Link: https://lore.kernel.org/r/20201020183111.25458-3-pradeepc@codeaurora.org
2020-12-09 09:04:39 +02:00
Dan Carpenter
59ec8e2fa5 ath11k: unlock on error path in ath11k_mac_op_add_interface()
These error paths need to drop the &ar->conf_mutex before returning.

Fixes: 690ace20ff ("ath11k: peer delete synchronization with firmware")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Link: https://lore.kernel.org/r/X85sVGVP/0XvlrEJ@mwanda
2020-12-09 09:04:07 +02:00
Gustavo A. R. Silva
b6041e1a30 wcn36xx: Fix fall-through warnings for Clang
In preparation to enable -Wimplicit-fallthrough for Clang, fix a
warning by replacing a /* fall through */ comment with the new
pseudo-keyword macro fallthrough; instead of letting the code fall
through to the next case.

Notice that Clang doesn't recognize /* fall through */ comments as
implicit fall-through markings.
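
The pattern of the change, with hypothetical case labels:

    switch (state) {
    case STATE_A:
            handle_a();
            fallthrough;    /* was a "fall through" comment */
    case STATE_B:
            handle_b();
            break;
    }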

Link: https://github.com/KSPP/linux/issues/115
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Link: https://lore.kernel.org/r/f932c887e013767cbdabfdddd671086e8ae63193.1605896060.git.gustavoars@kernel.org
2020-12-09 09:01:32 +02:00
Gustavo A. R. Silva
e2cb111654 carl9170: Fix fall-through warnings for Clang
In preparation to enable -Wimplicit-fallthrough for Clang, fix a warning
by explicitly adding a break statement instead of letting the code fall
through to the next case.

Link: https://github.com/KSPP/linux/issues/115
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Link: https://lore.kernel.org/r/04257418814755f081fa0ac14a61b01328cdc4ed.1605896059.git.gustavoars@kernel.org
2020-12-09 09:01:29 +02:00
Gustavo A. R. Silva
e64fa6d92a ath5k: Fix fall-through warnings for Clang
In preparation to enable -Wimplicit-fallthrough for Clang, fix a warning
by explicitly adding a break statement instead of letting the code fall
through to the next case.

Link: https://github.com/KSPP/linux/issues/115
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Link: https://lore.kernel.org/r/e127232621c4de340509047a11d98093958303c5.1605896059.git.gustavoars@kernel.org
2020-12-09 09:01:26 +02:00
Anilkumar Kolli
61a57e51e4 ath11k: fix rmmod failure if qmi sequence fails
The QMI sequence fails if the caldata file is not available.
It is observed that 'rmmod ath11k' fails if a QMI message fails.
With this patch, rmmod/insmod works.

Logs:
Direct firmware load for IPQ8074/caldata.bin failed with error -2
Falling back to user helper
qmi failed to load CAL: IPQ8074/caldata.bin
qmi failed to load board data file:-11

Tested-on: IPQ8074 hw2.0 AHB WLAN.HK.2.4.0.1-00009-QCAHKSWPL_SILICONZ-1
Tested-on: QCA6390 hw2.0 PCI WLAN.HST.1.0.1-01699-QCAHSTSWPLZ_V2_TO_X86-1

Signed-off-by: Anilkumar Kolli <akolli@codeaurora.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Link: https://lore.kernel.org/r/1606916215-24643-1-git-send-email-akolli@codeaurora.org
2020-12-09 09:00:29 +02:00
Christophe Leroy
613df979da powerpc/8xx: DEBUG_PAGEALLOC doesn't require an ITLB miss exception handler
Since commit e611939fc8 ("powerpc/mm: Ensure change_page_attr()
doesn't invalidate pinned TLBs"), pinned TLBs are no longer
invalidated by __kernel_map_pages() when CONFIG_DEBUG_PAGEALLOC is
selected.

Remove the dependency on CONFIG_DEBUG_PAGEALLOC.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e796c5fcb5898de827c803cf1ab8ba1d7a5d4b76.1606231483.git.christophe.leroy@csgroup.eu
2020-12-09 17:01:59 +11:00
Christophe Leroy
ad3ed15cd0 powerpc/process: Remove target specific __set_dabr()
The target-specific __set_dabr() variants are simple functions that can be
inlined directly into set_dabr(), using IS_ENABLED() instead of #ifdef.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c10b263668e137236c71d76648b03cf2cd1ee66f.1607076733.git.christophe.leroy@csgroup.eu
2020-12-09 17:01:40 +11:00
Christophe Leroy
1e78f723d6 powerpc/8xx: Fix early debug when SMC1 is relocated
When SMC1 is relocated and early debug is selected, the
board hangs in ppc_md.setup_arch(). This is because once
the microcode has been loaded and SMC1 relocated, early
debug writes go into the weeds.

To allow smooth continuation, the SMC1 parameter RAM set up
by the bootloader has to be copied into the new location.

Fixes: 43db76f418 ("powerpc/8xx: Add microcode patch to move SMC parameter RAM.")
Cc: stable@vger.kernel.org
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/b2f71f39eca543f1e4ec06596f09a8b12235c701.1607076683.git.christophe.leroy@csgroup.eu
2020-12-09 17:00:54 +11:00
Christophe Leroy
1b03e71ff6 powerpc/32s: Handle PROTFAULT in hash_page() also for CONFIG_PPC_KUAP
On hash 32 bits, handling minor protection faults like unsetting the
dirty flag is heavy if done from the normal page_fault processing,
because it implies a software hash table lookup for flushing the entry,
and then a DSI is taken anyway to add the entry back.

When KUAP was implemented, as explained in commit a68c31fc01
("powerpc/32s: Implement Kernel Userspace Access Protection"),
protection faults were diverted from hash_page() because
hash_page() was not able to identify a KUAP fault.

Implement KUAP verification in hash_page(), by clearing write
permission when the access is a kernel access and Ks is 1.
This works regardless of the address because kernel segments always
have Ks set to 0 while user segments have Ks set to 0 only
when kernel write to userspace is granted.

Then protection faults can be handled by hash_page() even for KUAP.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8a4ffe4798e9ea32aaaccdf85e411bb1beed3500.1605542955.git.christophe.leroy@csgroup.eu
2020-12-09 16:59:46 +11:00
Viresh Kumar
2c07b0fd9b Merge branch 'opp/empty' into opp/linux-next 2020-12-09 11:24:12 +05:30
Nicola Mazzucato
6ee70e8c34 opp: of: Allow empty opp-table with opp-shared
The OPP binding now allows an empty OPP table combined with opp-shared to
still describe that devices share v/f lines.

When initialising an empty OPP table, allow such a case by:
- treating such conditions as warnings in place of errors
- not failing on an empty table

Signed-off-by: Nicola Mazzucato <nicola.mazzucato@arm.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2020-12-09 11:23:46 +05:30
Nicola Mazzucato
24b3c963c0 dt-bindings: opp: Allow empty OPP tables
Currently the optional property opp-shared is used within an OPP table
to tell that a set of devices share their clock/voltage lines (and the
OPP points).
It is therefore possible to use an empty OPP table to convey only that
information, which is useful in situations where the OPP points are
provided via other means (hardware, firmware, etc.).

Update the documentation to describe this additional case and provide an
example.

Signed-off-by: Nicola Mazzucato <nicola.mazzucato@arm.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2020-12-09 11:22:12 +05:30
Viresh Kumar
e91e3d902b media: venus: dev_pm_opp_put_*() accepts NULL argument
The dev_pm_opp_put_*() APIs now accept a NULL opp_table pointer, so there
is no need for us to carry the extra checks. Drop them.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2020-12-09 11:21:12 +05:30
Viresh Kumar
b66ba5b593 drm/panfrost: dev_pm_opp_put_*() accepts NULL argument
The dev_pm_opp_put_*() APIs now accept a NULL opp_table pointer, so there
is no need for us to carry the extra checks. Drop them.

Reviewed-by: Steven Price <steven.price@arm.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2020-12-09 11:21:12 +05:30
Viresh Kumar
72ba9e226f drm/lima: dev_pm_opp_put_*() accepts NULL argument
The dev_pm_opp_put_*() APIs now accept a NULL opp_table pointer, so there
is no need for us to carry the extra checks. Drop them.

Reviewed-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2020-12-09 11:21:12 +05:30
Viresh Kumar
8145687283 PM / devfreq: exynos: dev_pm_opp_put_*() accepts NULL argument
The dev_pm_opp_put_*() APIs now accept a NULL opp_table pointer, so there
is no need for us to carry the extra checks. Drop them.

Acked-by: Chanwoo Choi <cw00.choi@samsung.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2020-12-09 11:21:12 +05:30
Viresh Kumar
2ff8fe13ac cpufreq: qcom-cpufreq-nvmem: dev_pm_opp_put_*() accepts NULL argument
The dev_pm_opp_put_*() APIs now accept a NULL opp_table pointer, so there
is no need for us to carry the extra checks. Drop them.

Reviewed-by: Ilia Lin <ilia.lin@kernel.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2020-12-09 11:21:12 +05:30
Viresh Kumar
5f6ffb8d8f cpufreq: dt: dev_pm_opp_put_regulators() accepts NULL argument
The dev_pm_opp_put_*() APIs now accept a NULL opp_table pointer, so there
is no need for us to carry the extra checks. Drop them.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2020-12-09 11:21:11 +05:30
Viresh Kumar
c7bf8758c9 opp: Allow dev_pm_opp_put_*() APIs to accept NULL opp_table
This allows the callers to drop the unnecessary checks.
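
The caller-side effect, sketched with dev_pm_opp_put_regulators() as an
example:

    /* Before: callers had to guard the put themselves. */
    if (opp_table)
            dev_pm_opp_put_regulators(opp_table);

    /* After: the NULL check lives in the OPP core, so this is enough. */
    dev_pm_opp_put_regulators(opp_table);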

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2020-12-09 11:21:11 +05:30
Viresh Kumar
e77dcb0b73 opp: Don't create an OPP table from dev_pm_opp_get_opp_table()
It has been found that some users (like cpufreq-dt and others on LKML)
have abused the helper dev_pm_opp_get_opp_table() to create the OPP
table instead of just finding it, which is the wrong thing to do. This
routine was meant for the OPP core's internal use and exposed the whole
functionality by mistake.

Change the scope of dev_pm_opp_get_opp_table() to only finding the
table. The internal helpers _opp_get_opp_table*() are thus renamed to
_add_opp_table*(), dev_pm_opp_get_opp_table_indexed() is removed (as we
don't need the index field for finding the OPP table) and so the only
user, genpd, is updated.

Note that the prototype of _add_opp_table() was already left in opp.h by
mistake when it was removed earlier and so we weren't required to add it
now.

Acked-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2020-12-09 11:21:11 +05:30
Viresh Kumar
873c9851eb cpufreq: dt: Don't (ab)use dev_pm_opp_get_opp_table() to create OPP table
Initially, the helper dev_pm_opp_get_opp_table() was meant only for the
OPP core's internal use (it tries to find an existing OPP table and, if it
doesn't find one, allocates the OPP table).

Sometime back, the cpufreq-dt driver started using it to make sure all
the relevant resources required by the OPP core are available earlier
during the initialization process to properly propagate -EPROBE_DEFER.

It worked but it also abused the API to create an OPP table, which
should be created with the help of other helpers provided by the OPP
core.

The OPP core will be updated in a later commit to limit the scope of
dev_pm_opp_get_opp_table() to only finding an existing OPP table and not
create one. This commit updates the cpufreq-dt driver before that
happens.

Now the cpufreq-dt driver creates the OPP and cpufreq tables for all the
CPUs from the driver's init callback itself.

Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2020-12-09 11:21:11 +05:30
Viresh Kumar
cf1fac943c opp: Reduce the size of critical section in _opp_kref_release()
There is a lot of stuff here which can be done outside of the
opp_table->lock, so do that. This helps avoid a circular lockdep
dependency around debugfs.
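
A simplified illustration of the approach (not the exact upstream code):
keep only the list manipulation under the lock and do the heavier teardown
outside of it:

    mutex_lock(&opp_table->lock);
    list_del(&opp->node);
    mutex_unlock(&opp_table->lock);

    /* Notifier, debugfs and freeing no longer run under opp_table->lock. */
    blocking_notifier_call_chain(&opp_table->head, OPP_EVENT_REMOVE, opp);
    opp_debug_remove_one(opp);
    kfree(opp);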

Reported-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2020-12-09 11:21:11 +05:30
Christophe Leroy
44e9754d63 powerpc/32s: Make support for 603 and 604+ selectable
book3s/32 has two main families:
- CPUs with 603 cores that don't have a HASH PTE table and
perform SW TLB loading.
- Other CPUs based on 604+ cores that have a HASH PTE table.

This leads to some complex logic and additional code to
support both. This makes sense for distribution kernels
that aim at running on any CPU, but when you are fine-tuning
a kernel for an embedded 603-based board you don't need all
the HASH logic.

Allow selection of support for each family, in order to opt
out of unneeded parts of the code. At least one must be selected.

Note that some of the CPUs supporting HASH also support SW TLB
loading; however, this is not supported by the Linux kernel for
the time being, because they do not have alternate registers in
the TLB miss exception handlers.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8dde0cdb629a71abc29b0d85a52a86e920376cb6.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09 16:48:59 +11:00
Christophe Leroy
ad510e37e4 powerpc/32s: Regroup 603 based CPUs in cputable
In order to selectively build the kernel for 603 SW TLB handling,
regroup all 603 based CPUs together.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/45065263fdb9f5cc2a2d210ec2a762ac8bf5b2bc.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09 16:48:59 +11:00
Christophe Leroy
a6a50d8495 powerpc/32s: Remove CONFIG_PPC_BOOK3S_6xx
As the 601 is gone, CONFIG_PPC_BOOK3S_6xx and CONFIG_PPC_BOOK3S_32
are redundant.

Remove CONFIG_PPC_BOOK3S_6xx.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f18c16af37f6f77b577bed8d9e12831b695617ae.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09 16:48:59 +11:00
Christophe Leroy
068fdba10e powerpc/32s: Move early_mmu_init() into mmu.c
early_mmu_init() is independent of MMU type and not
directly linked to tlb handling.

In a following patch, tlb.c will be restricted to HASH mmu.

Move early_mmu_init() into mmu.c which is common.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e51b5e2fe6bca623b33116403043d3a1b5eaf826.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09 16:46:56 +11:00
Christophe Leroy
80007a17fc powerpc/32s: Inline flush_hash_entry()
flush_hash_entry() is a simple function calling
flush_hash_pages() if it's a hash MMU or doing nothing otherwise.

Inline it.

And use it also in __ptep_test_and_clear_young().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/9af895be7d4b404d40e749a2659552fd138e62c4.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09 16:46:56 +11:00
Christophe Leroy
ef08d95546 powerpc/32s: Inline tlb_flush()
On book3s/32, tlb_flush() does nothing when the CPU has a hash table;
otherwise it calls _tlbia().

Inline it.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/ebc933d1c530a19ef3cf7983f6ae94814f6e92ac.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09 16:46:56 +11:00
Christophe Leroy
91ec450f8d powerpc/32s: Split and inline flush_range()
flush_range() handles both the MMU_FTR_HPTE_TABLE case and
the other case.

The non MMU_FTR_HPTE_TABLE case is trivial as it is only a call
to _tlbie()/_tlbia() which is not worth a dedicated function.

Make flush_range() hash-specific and call it from tlbflush.h based
on mmu_has_feature(MMU_FTR_HPTE_TABLE).

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/132ab19aae52abc8e06ab524ec86d4229b5b9c3d.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09 16:46:56 +11:00
Christophe Leroy
1e83396f29 powerpc/32s: Inline flush_tlb_range() and flush_tlb_kernel_range()
flush_tlb_range() and flush_tlb_kernel_range() are trivial calls to
flush_range().

Make flush_range() global and inline flush_tlb_range()
and flush_tlb_kernel_range().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c7029a78e78709ad9272d7a44260e06b649169b2.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09 16:46:55 +11:00
Christophe Leroy
fd1b4b7f51 powerpc/32s: Split and inline flush_tlb_mm() and flush_tlb_page()
flush_tlb_mm() and flush_tlb_page() handle both the MMU_FTR_HPTE_TABLE
case and the other case.

The non MMU_FTR_HPTE_TABLE case is trivial as it is only a call
to _tlbie()/_tlbia() which is not worth a dedicated function.

Make flush_tlb_mm() and flush_tlb_page() hash-specific and call
them from tlbflush.h based on mmu_has_feature(MMU_FTR_HPTE_TABLE).
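
The dispatch ends up looking roughly like this in tlbflush.h (a sketch; the
hash-specific helper name is assumed here for illustration):

    static inline void flush_tlb_mm(struct mm_struct *mm)
    {
            if (mmu_has_feature(MMU_FTR_HPTE_TABLE))
                    hash_flush_tlb_mm(mm);  /* hash-specific implementation */
            else
                    _tlbia();               /* trivial non-hash case */
    }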

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/11e932ded41ba6d9b251d89b7afa33cc060d3aa4.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09 16:46:55 +11:00