Merge remote-tracking branch 'torvalds/master' into perf/core

To pick up fixes sent via perf/urgent and in the BPF tools/ directories.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Arnaldo Carvalho de Melo 2021-03-29 10:39:10 -03:00
commit b0a752d43b
1301 changed files with 11277 additions and 6798 deletions

@@ -36,6 +36,7 @@ Andrew Morton <akpm@linux-foundation.org>
 Andrew Murray <amurray@thegoodpenguin.co.uk> <amurray@embedded-bits.co.uk>
 Andrew Murray <amurray@thegoodpenguin.co.uk> <andrew.murray@arm.com>
 Andrew Vasquez <andrew.vasquez@qlogic.com>
+Andrey Konovalov <andreyknvl@gmail.com> <andreyknvl@google.com>
 Andrey Ryabinin <ryabinin.a.a@gmail.com> <a.ryabinin@samsung.com>
 Andrey Ryabinin <ryabinin.a.a@gmail.com> <aryabinin@virtuozzo.com>
 Andy Adamson <andros@citi.umich.edu>
@@ -65,6 +66,8 @@ Changbin Du <changbin.du@intel.com> <changbin.du@gmail.com>
 Changbin Du <changbin.du@intel.com> <changbin.du@intel.com>
 Chao Yu <chao@kernel.org> <chao2.yu@samsung.com>
 Chao Yu <chao@kernel.org> <yuchao0@huawei.com>
+Chris Chiu <chris.chiu@canonical.com> <chiu@endlessm.com>
+Chris Chiu <chris.chiu@canonical.com> <chiu@endlessos.org>
 Christophe Ricard <christophe.ricard@gmail.com>
 Christoph Hellwig <hch@lst.de>
 Corey Minyard <minyard@acm.org>

@@ -33,7 +33,7 @@ Contact: xfs@oss.sgi.com
 Description:
 		The current state of the log write grant head. It
 		represents the total log reservation of all currently
-		oustanding transactions, including regrants due to
+		outstanding transactions, including regrants due to
 		rolling transactions. The grant head is exported in
 		"cycle:bytes" format.
 Users:		xfstests

@@ -17,12 +17,12 @@ For ACPI on arm64, tables also fall into the following categories:
 
        - Recommended: BERT, EINJ, ERST, HEST, PCCT, SSDT
 
-       - Optional: BGRT, CPEP, CSRT, DBG2, DRTM, ECDT, FACS, FPDT, IORT,
-         MCHI, MPST, MSCT, NFIT, PMTT, RASF, SBST, SLIT, SPMI, SRAT, STAO,
-         TCPA, TPM2, UEFI, XENV
+       - Optional: BGRT, CPEP, CSRT, DBG2, DRTM, ECDT, FACS, FPDT, IBFT,
+         IORT, MCHI, MPST, MSCT, NFIT, PMTT, RASF, SBST, SLIT, SPMI, SRAT,
+         STAO, TCPA, TPM2, UEFI, XENV
 
-       - Not supported: BOOT, DBGP, DMAR, ETDT, HPET, IBFT, IVRS, LPIT,
-         MSDM, OEMx, PSDT, RSDT, SLIC, WAET, WDAT, WDRT, WPBT
+       - Not supported: BOOT, DBGP, DMAR, ETDT, HPET, IVRS, LPIT, MSDM, OEMx,
+         PSDT, RSDT, SLIC, WAET, WDAT, WDRT, WPBT
 
 ====== ========================================================================
 Table  Usage for ARMv8 Linux

@@ -130,6 +130,9 @@ stable kernels.
 | Marvell        | ARM-MMU-500     | #582743         | N/A                         |
 +----------------+-----------------+-----------------+-----------------------------+
 +----------------+-----------------+-----------------+-----------------------------+
+| NVIDIA         | Carmel Core     | N/A             | NVIDIA_CARMEL_CNP_ERRATUM   |
++----------------+-----------------+-----------------+-----------------------------+
++----------------+-----------------+-----------------+-----------------------------+
 | Freescale/NXP  | LS2080A/LS1043A | A-008585        | FSL_ERRATUM_A008585         |
 +----------------+-----------------+-----------------+-----------------------------+
 +----------------+-----------------+-----------------+-----------------------------+

@@ -23,6 +23,7 @@ properties:
       - enum:
           - ingenic,jz4775-intc
           - ingenic,jz4770-intc
+          - ingenic,jz4760b-intc
       - const: ingenic,jz4760-intc
   - items:
       - const: ingenic,x1000-intc

@@ -21,6 +21,10 @@ properties:
       - fsl,vf610-spdif
       - fsl,imx6sx-spdif
       - fsl,imx8qm-spdif
+      - fsl,imx8qxp-spdif
+      - fsl,imx8mq-spdif
+      - fsl,imx8mm-spdif
+      - fsl,imx8mn-spdif
 
   reg:
     maxItems: 1

@@ -613,6 +613,27 @@ Some of these date from the very introduction of KMS in 2008 ...
 
 Level: Intermediate
 
+Remove automatic page mapping from dma-buf importing
+----------------------------------------------------
+
+When importing dma-bufs, the dma-buf and PRIME frameworks automatically map
+imported pages into the importer's DMA area. drm_gem_prime_fd_to_handle() and
+drm_gem_prime_handle_to_fd() require that importers call dma_buf_attach()
+even if they never do actual device DMA, but only CPU access through
+dma_buf_vmap(). This is a problem for USB devices, which do not support DMA
+operations.
+
+To fix the issue, automatic page mappings should be removed from the
+buffer-sharing code. Fixing this is a bit more involved, since the import/export
+cache is also tied to &drm_gem_object.import_attach. Meanwhile we paper over
+this problem for USB devices by fishing out the USB host controller device, as
+long as that supports DMA. Otherwise importing can still needlessly fail.
+
+Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
+
+Level: Advanced
+
 Better Testing
 ==============

@@ -1988,7 +1988,7 @@ netif_carrier.
 	If use_carrier is 0, then the MII monitor will first query the
 	device's (via ioctl) MII registers and check the link state. If that
 	request fails (not just that it returns carrier down), then the MII
-	monitor will make an ethtool ETHOOL_GLINK request to attempt to obtain
+	monitor will make an ethtool ETHTOOL_GLINK request to attempt to obtain
 	the same information. If both methods fail (i.e., the driver either
 	does not support or had some error in processing both the MII register
 	and ethtool requests), then the MII monitor will assume the link is

@@ -267,7 +267,7 @@ DATA PATH
 Tx
 --
 
-end_start_xmit() is called by the stack. This function does the following:
+ena_start_xmit() is called by the stack. This function does the following:
 
 - Maps data buffers (skb->data and frags).
 - Populates ena_buf for the push buffer (if the driver and device are

@@ -52,7 +52,7 @@ purposes as a standard complementary tool. The system's view from
 ``devlink-dpipe`` should change according to the changes done by the
 standard configuration tools.
 
-For example, its quiet common to implement Access Control Lists (ACL)
+For example, its quite common to implement Access Control Lists (ACL)
 using Ternary Content Addressable Memory (TCAM). The TCAM memory can be
 divided into TCAM regions. Complex TC filters can have multiple rules with
 different priorities and different lookup keys. On the other hand hardware

@@ -151,7 +151,7 @@ representor netdevice.
 -------------
 A subfunction devlink port is created but it is not active yet. That means the
 entities are created on devlink side, the e-switch port representor is created,
-but the subfunction device itself it not created. A user might use e-switch port
+but the subfunction device itself is not created. A user might use e-switch port
 representor to do settings, putting it into bridge, adding TC rules, etc. A user
 might as well configure the hardware address (such as MAC address) of the
 subfunction while subfunction is inactive.
@@ -173,7 +173,7 @@ Terms and Definitions
    * - Term
      - Definitions
    * - ``PCI device``
-     - A physical PCI device having one or more PCI bus consists of one or
+     - A physical PCI device having one or more PCI buses consists of one or
        more PCI controllers.
    * - ``PCI controller``
      - A controller consists of potentially multiple physical functions,

@@ -142,73 +142,13 @@ Please send incremental versions on top of what has been merged in order to fix
 the patches the way they would look like if your latest patch series was to be
 merged.
 
-How can I tell what patches are queued up for backporting to the various stable releases?
------------------------------------------------------------------------------------------
-Normally Greg Kroah-Hartman collects stable commits himself, but for
-networking, Dave collects up patches he deems critical for the
-networking subsystem, and then hands them off to Greg.
-
-There is a patchworks queue that you can see here:
-
-  https://patchwork.kernel.org/bundle/netdev/stable/?state=*
-
-It contains the patches which Dave has selected, but not yet handed off
-to Greg. If Greg already has the patch, then it will be here:
-
-  https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git
-
-A quick way to find whether the patch is in this stable-queue is to
-simply clone the repo, and then git grep the mainline commit ID, e.g.
-::
-
-  stable-queue$ git grep -l 284041ef21fdf2e
-  releases/3.0.84/ipv6-fix-possible-crashes-in-ip6_cork_release.patch
-  releases/3.4.51/ipv6-fix-possible-crashes-in-ip6_cork_release.patch
-  releases/3.9.8/ipv6-fix-possible-crashes-in-ip6_cork_release.patch
-  stable/stable-queue$
-
-I see a network patch and I think it should be backported to stable. Should I request it via stable@vger.kernel.org like the references in the kernel's Documentation/process/stable-kernel-rules.rst file say?
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-No, not for networking. Check the stable queues as per above first
-to see if it is already queued. If not, then send a mail to netdev,
-listing the upstream commit ID and why you think it should be a stable
-candidate.
-
-Before you jump to go do the above, do note that the normal stable rules
-in :ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`
-still apply. So you need to explicitly indicate why it is a critical
-fix and exactly what users are impacted. In addition, you need to
-convince yourself that you *really* think it has been overlooked,
-vs. having been considered and rejected.
-
-Generally speaking, the longer it has had a chance to "soak" in
-mainline, the better the odds that it is an OK candidate for stable. So
-scrambling to request a commit be added the day after it appears should
-be avoided.
-
-I have created a network patch and I think it should be backported to stable. Should I add a Cc: stable@vger.kernel.org like the references in the kernel's Documentation/ directory say?
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-No. See above answer. In short, if you think it really belongs in
-stable, then ensure you write a decent commit log that describes who
-gets impacted by the bug fix and how it manifests itself, and when the
-bug was introduced. If you do that properly, then the commit will get
-handled appropriately and most likely get put in the patchworks stable
-queue if it really warrants it.
-
-If you think there is some valid information relating to it being in
-stable that does *not* belong in the commit log, then use the three dash
-marker line as described in
-:ref:`Documentation/process/submitting-patches.rst <the_canonical_patch_format>`
-to temporarily embed that information into the patch that you send.
-
-Are all networking bug fixes backported to all stable releases?
+Are there special rules regarding stable submissions on netdev?
 ---------------------------------------------------------------
-Due to capacity, Dave could only take care of the backports for the
-last two stable releases. For earlier stable releases, each stable
-branch maintainer is supposed to take care of them. If you find any
-patch is missing from an earlier stable branch, please notify
-stable@vger.kernel.org with either a commit ID or a formal patch
-backported, and CC Dave and other relevant networking developers.
+While it used to be the case that netdev submissions were not supposed
+to carry explicit ``CC: stable@vger.kernel.org`` tags that is no longer
+the case today. Please follow the standard stable rules in
+:ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`,
+and make sure you include appropriate Fixes tags!
 
 Is the comment style convention different for the networking content?
 ---------------------------------------------------------------------

@@ -50,7 +50,7 @@ Callbacks to implement
 
 The NIC driver offering ipsec offload will need to implement these
 callbacks to make the offload available to the network stack's
-XFRM subsytem. Additionally, the feature bits NETIF_F_HW_ESP and
+XFRM subsystem. Additionally, the feature bits NETIF_F_HW_ESP and
 NETIF_F_HW_ESP_TX_CSUM will signal the availability of the offload.

@@ -35,12 +35,6 @@ Rules on what kind of patches are accepted, and which ones are not, into the
 Procedure for submitting patches to the -stable tree
 ----------------------------------------------------
 
-- If the patch covers files in net/ or drivers/net please follow netdev stable
-  submission guidelines as described in
-  :ref:`Documentation/networking/netdev-FAQ.rst <netdev-FAQ>`
-  after first checking the stable networking queue at
-  https://patchwork.kernel.org/bundle/netdev/stable/?state=*
-  to ensure the requested patch is not already queued up.
 - Security patches should not be handled (solely) by the -stable review
   process but should follow the procedures in
   :ref:`Documentation/admin-guide/security-bugs.rst <securitybugs>`.

View File

@@ -250,11 +250,6 @@ should also read
 :ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`
 in addition to this file.
 
-Note, however, that some subsystem maintainers want to come to their own
-conclusions on which patches should go to the stable trees. The networking
-maintainer, in particular, would rather not see individual developers
-adding lines like the above to their patches.
-
 If changes affect userland-kernel interfaces, please send the MAN-PAGES
 maintainer (as listed in the MAINTAINERS file) a man-pages patch, or at
 least a notification of the change, so that some information makes its way

@@ -182,6 +182,9 @@ is dependent on the CPU capability and the kernel configuration. The limit can
 be retrieved using KVM_CAP_ARM_VM_IPA_SIZE of the KVM_CHECK_EXTENSION
 ioctl() at run-time.
 
+Creation of the VM will fail if the requested IPA size (whether it is
+implicit or explicit) is unsupported on the host.
+
 Please note that configuring the IPA size does not affect the capability
 exposed by the guest CPUs in ID_AA64MMFR0_EL1[PARange]. It only affects
 size of the address translated by the stage2 level (guest physical to
@@ -1492,7 +1495,8 @@ Fails if any VCPU has already been created.
 
 Define which vcpu is the Bootstrap Processor (BSP). Values are the same
 as the vcpu id in KVM_CREATE_VCPU. If this ioctl is not called, the default
-is vcpu 0.
+is vcpu 0. This ioctl has to be called before vcpu creation,
+otherwise it will return EBUSY error.
 
 4.42 KVM_GET_XSAVE
 
@@ -4803,8 +4807,10 @@ If an MSR access is not permitted through the filtering, it generates a
 allows user space to deflect and potentially handle various MSR accesses
 into user space.
 
-If a vCPU is in running state while this ioctl is invoked, the vCPU may
-experience inconsistent filtering behavior on MSR accesses.
+Note, invoking this ioctl with a vCPU is running is inherently racy. However,
+KVM does guarantee that vCPUs will see either the previous filter or the new
+filter, e.g. MSRs with identical settings in both the old and new filter will
+have deterministic behavior.
 
 4.127 KVM_XEN_HVM_SET_ATTR
 --------------------------

@@ -261,8 +261,8 @@ ABI/API
 L: linux-api@vger.kernel.org
 F: include/linux/syscalls.h
 F: kernel/sys_ni.c
-F: include/uapi/
-F: arch/*/include/uapi/
+X: include/uapi/
+X: arch/*/include/uapi/
 
 ABIT UGURU 1,2 HARDWARE MONITOR DRIVER
 M: Hans de Goede <hdegoede@redhat.com>
@@ -1181,7 +1181,7 @@ M: Joel Fernandes <joel@joelfernandes.org>
 M: Christian Brauner <christian@brauner.io>
 M: Hridya Valsaraju <hridya@google.com>
 M: Suren Baghdasaryan <surenb@google.com>
-L: devel@driverdev.osuosl.org
+L: linux-kernel@vger.kernel.org
 S: Supported
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
 F: drivers/android/
@@ -2489,7 +2489,7 @@ N: sc27xx
 N: sc2731
 
 ARM/STI ARCHITECTURE
-M: Patrice Chotard <patrice.chotard@st.com>
+M: Patrice Chotard <patrice.chotard@foss.st.com>
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 W: http://www.stlinux.com
@@ -2522,7 +2522,7 @@ F: include/linux/remoteproc/st_slim_rproc.h
 
 ARM/STM32 ARCHITECTURE
 M: Maxime Coquelin <mcoquelin.stm32@gmail.com>
-M: Alexandre Torgue <alexandre.torgue@st.com>
+M: Alexandre Torgue <alexandre.torgue@foss.st.com>
 L: linux-stm32@st-md-mailman.stormreply.com (moderated for non-subscribers)
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
@@ -3115,7 +3115,7 @@ C: irc://irc.oftc.net/bcache
 F: drivers/md/bcache/
 
 BDISP ST MEDIA DRIVER
-M: Fabien Dessenne <fabien.dessenne@st.com>
+M: Fabien Dessenne <fabien.dessenne@foss.st.com>
 L: linux-media@vger.kernel.org
 S: Supported
 W: https://linuxtv.org
@@ -3675,7 +3675,7 @@ M: bcm-kernel-feedback-list@broadcom.com
 L: linux-pm@vger.kernel.org
 S: Maintained
 T: git git://github.com/broadcom/stblinux.git
-F: drivers/soc/bcm/bcm-pmb.c
+F: drivers/soc/bcm/bcm63xx/bcm-pmb.c
 F: include/dt-bindings/soc/bcm-pmb.h
 
 BROADCOM SPECIFIC AMBA DRIVER (BCMA)
@@ -5080,7 +5080,7 @@ S: Maintained
 F: drivers/platform/x86/dell/dell-wmi.c
 
 DELTA ST MEDIA DRIVER
-M: Hugues Fruchet <hugues.fruchet@st.com>
+M: Hugues Fruchet <hugues.fruchet@foss.st.com>
 L: linux-media@vger.kernel.org
 S: Supported
 W: https://linuxtv.org
@@ -5835,7 +5835,7 @@ M: David Airlie <airlied@linux.ie>
 M: Daniel Vetter <daniel@ffwll.ch>
 L: dri-devel@lists.freedesktop.org
 S: Maintained
-B: https://bugs.freedesktop.org/
+B: https://gitlab.freedesktop.org/drm
 C: irc://chat.freenode.net/dri-devel
 T: git git://anongit.freedesktop.org/drm/drm
 F: Documentation/devicetree/bindings/display/
@@ -6006,7 +6006,6 @@ F: drivers/gpu/drm/rockchip/
 
 DRM DRIVERS FOR STI
 M: Benjamin Gaignard <benjamin.gaignard@linaro.org>
-M: Vincent Abriou <vincent.abriou@st.com>
 L: dri-devel@lists.freedesktop.org
 S: Maintained
 T: git git://anongit.freedesktop.org/drm/drm-misc
@@ -6014,10 +6013,9 @@ F: Documentation/devicetree/bindings/display/st,stih4xx.txt
 F: drivers/gpu/drm/sti
 
 DRM DRIVERS FOR STM
-M: Yannick Fertre <yannick.fertre@st.com>
-M: Philippe Cornu <philippe.cornu@st.com>
+M: Yannick Fertre <yannick.fertre@foss.st.com>
+M: Philippe Cornu <philippe.cornu@foss.st.com>
 M: Benjamin Gaignard <benjamin.gaignard@linaro.org>
-M: Vincent Abriou <vincent.abriou@st.com>
 L: dri-devel@lists.freedesktop.org
 S: Maintained
 T: git git://anongit.freedesktop.org/drm/drm-misc
@@ -8116,7 +8114,6 @@ F: drivers/crypto/hisilicon/sec2/sec_main.c
 
 HISILICON STAGING DRIVERS FOR HIKEY 960/970
 M: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
-L: devel@driverdev.osuosl.org
 S: Maintained
 F: drivers/staging/hikey9xx/
@@ -8231,7 +8228,7 @@ F: include/linux/hugetlb.h
 F: mm/hugetlb.c
 
 HVA ST MEDIA DRIVER
-M: Jean-Christophe Trotin <jean-christophe.trotin@st.com>
+M: Jean-Christophe Trotin <jean-christophe.trotin@foss.st.com>
 L: linux-media@vger.kernel.org
 S: Supported
 W: https://linuxtv.org
@@ -8521,6 +8518,7 @@ IBM Power SRIOV Virtual NIC Device Driver
 M: Dany Madden <drt@linux.ibm.com>
 M: Lijun Pan <ljp@linux.ibm.com>
 M: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
+R: Thomas Falcon <tlfalcon@linux.ibm.com>
 L: netdev@vger.kernel.org
 S: Supported
 F: drivers/net/ethernet/ibm/ibmvnic.*
@@ -10030,7 +10028,6 @@ F: scripts/leaking_addresses.pl
 
 LED SUBSYSTEM
 M: Pavel Machek <pavel@ucw.cz>
-R: Dan Murphy <dmurphy@ti.com>
 L: linux-leds@vger.kernel.org
 S: Maintained
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/pavel/linux-leds.git
@@ -10716,7 +10713,8 @@ F: drivers/net/ethernet/marvell/mvpp2/
 
 MARVELL MWIFIEX WIRELESS DRIVER
 M: Amitkumar Karwar <amitkarwar@gmail.com>
-M: Ganapathi Bhat <ganapathi.bhat@nxp.com>
+M: Ganapathi Bhat <ganapathi017@gmail.com>
+M: Sharvari Harisangam <sharvari.harisangam@nxp.com>
 M: Xinming Hu <huxinming820@gmail.com>
 L: linux-wireless@vger.kernel.org
 S: Maintained
@@ -10905,7 +10903,6 @@ T: git git://linuxtv.org/media_tree.git
 F: drivers/media/radio/radio-maxiradio*
 
 MCAN MMIO DEVICE DRIVER
-M: Dan Murphy <dmurphy@ti.com>
 M: Pankaj Sharma <pankj.sharma@samsung.com>
 L: linux-can@vger.kernel.org
 S: Maintained
@@ -11166,7 +11163,7 @@ T: git git://linuxtv.org/media_tree.git
 F: drivers/media/dvb-frontends/stv6111*
 
 MEDIA DRIVERS FOR STM32 - DCMI
-M: Hugues Fruchet <hugues.fruchet@st.com>
+M: Hugues Fruchet <hugues.fruchet@foss.st.com>
 L: linux-media@vger.kernel.org
 S: Supported
 T: git git://linuxtv.org/media_tree.git
@@ -12537,7 +12534,7 @@ NETWORKING [MPTCP]
 M: Mat Martineau <mathew.j.martineau@linux.intel.com>
 M: Matthieu Baerts <matthieu.baerts@tessares.net>
 L: netdev@vger.kernel.org
-L: mptcp@lists.01.org
+L: mptcp@lists.linux.dev
 S: Maintained
 W: https://github.com/multipath-tcp/mptcp_net-next/wiki
 B: https://github.com/multipath-tcp/mptcp_net-next/issues
@@ -14710,15 +14707,11 @@ F: drivers/net/ethernet/qlogic/qlcnic/
 QLOGIC QLGE 10Gb ETHERNET DRIVER
 M: Manish Chopra <manishc@marvell.com>
 M: GR-Linux-NIC-Dev@marvell.com
-L: netdev@vger.kernel.org
-S: Supported
-F: drivers/staging/qlge/
-
-QLOGIC QLGE 10Gb ETHERNET DRIVER
 M: Coiby Xu <coiby.xu@gmail.com>
 L: netdev@vger.kernel.org
-S: Maintained
+S: Supported
 F: Documentation/networking/device_drivers/qlogic/qlge.rst
+F: drivers/staging/qlge/
 
 QM1D1B0004 MEDIA DRIVER
 M: Akihiro Tsukada <tskd08@gmail.com>
@@ -16888,8 +16881,10 @@ F: tools/spi/
 
 SPIDERNET NETWORK DRIVER for CELL
 M: Ishizaki Kou <kou.ishizaki@toshiba.co.jp>
+M: Geoff Levand <geoff@infradead.org>
 L: netdev@vger.kernel.org
-S: Supported
+L: linuxppc-dev@lists.ozlabs.org
+S: Maintained
 F: Documentation/networking/device_drivers/ethernet/toshiba/spider_net.rst
 F: drivers/net/ethernet/toshiba/spider_net*
@@ -16943,7 +16938,8 @@ F: Documentation/devicetree/bindings/media/i2c/st,st-mipid02.txt
 F: drivers/media/i2c/st-mipid02.c
 
 ST STM32 I2C/SMBUS DRIVER
-M: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
+M: Pierre-Yves MORDRET <pierre-yves.mordret@foss.st.com>
+M: Alain Volmat <alain.volmat@foss.st.com>
 L: linux-i2c@vger.kernel.org
 S: Maintained
 F: drivers/i2c/busses/i2c-stm32*
@@ -17041,7 +17037,7 @@ F: drivers/staging/vt665?/
 
 STAGING SUBSYSTEM
 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-L: devel@driverdev.osuosl.org
+L: linux-staging@lists.linux.dev
 S: Supported
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
 F: drivers/staging/
@@ -17068,7 +17064,7 @@ F: kernel/jump_label.c
 F: kernel/static_call.c
 
 STI AUDIO (ASoC) DRIVERS
-M: Arnaud Pouliquen <arnaud.pouliquen@st.com>
+M: Arnaud Pouliquen <arnaud.pouliquen@foss.st.com>
 L: alsa-devel@alsa-project.org (moderated for non-subscribers)
 S: Maintained
 F: Documentation/devicetree/bindings/sound/st,sti-asoc-card.txt
@@ -17088,15 +17084,15 @@ T: git git://linuxtv.org/media_tree.git
 F: drivers/media/usb/stk1160/
 
 STM32 AUDIO (ASoC) DRIVERS
-M: Olivier Moysan <olivier.moysan@st.com>
-M: Arnaud Pouliquen <arnaud.pouliquen@st.com>
+M: Olivier Moysan <olivier.moysan@foss.st.com>
+M: Arnaud Pouliquen <arnaud.pouliquen@foss.st.com>
 L: alsa-devel@alsa-project.org (moderated for non-subscribers)
 S: Maintained
 F: Documentation/devicetree/bindings/iio/adc/st,stm32-*.yaml
 F: sound/soc/stm/
 
 STM32 TIMER/LPTIMER DRIVERS
-M: Fabrice Gasnier <fabrice.gasnier@st.com>
+M: Fabrice Gasnier <fabrice.gasnier@foss.st.com>
 S: Maintained
 F: Documentation/ABI/testing/*timer-stm32
 F: Documentation/devicetree/bindings/*/*stm32-*timer*
@@ -17106,7 +17102,7 @@ F: include/linux/*/stm32-*tim*
 
 STMMAC ETHERNET DRIVER
 M: Giuseppe Cavallaro <peppe.cavallaro@st.com>
-M: Alexandre Torgue <alexandre.torgue@st.com>
+M: Alexandre Torgue <alexandre.torgue@foss.st.com>
 M: Jose Abreu <joabreu@synopsys.com>
 L: netdev@vger.kernel.org
 S: Supported
@@ -17848,7 +17844,6 @@ S: Maintained
 F: drivers/thermal/ti-soc-thermal/
 
 TI BQ27XXX POWER SUPPLY DRIVER
-R: Dan Murphy <dmurphy@ti.com>
 F: drivers/power/supply/bq27xxx_battery.c
 F: drivers/power/supply/bq27xxx_battery_i2c.c
 F: include/linux/power/bq27xxx_battery.h
@@ -17983,7 +17978,6 @@ S: Odd Fixes
 F: sound/soc/codecs/tas571x*
 
 TI TCAN4X5X DEVICE DRIVER
-M: Dan Murphy <dmurphy@ti.com>
 L: linux-can@vger.kernel.org
 S: Maintained
 F: Documentation/devicetree/bindings/net/can/tcan4x5x.txt
@@ -19136,7 +19130,7 @@ VME SUBSYSTEM
 M: Martyn Welch <martyn@welchs.me.uk>
 M: Manohar Vanga <manohar.vanga@gmail.com>
 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-L: devel@driverdev.osuosl.org
+L: linux-kernel@vger.kernel.org
 S: Maintained
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git
 F: Documentation/driver-api/vme.rst
@@ -19167,7 +19161,7 @@ S: Maintained
 F: drivers/infiniband/hw/vmw_pvrdma/
 
 VMware PVSCSI driver
-M: Jim Gill <jgill@vmware.com>
+M: Vishal Bhakta <vbhakta@vmware.com>
 M: VMware PV-Drivers <pv-drivers@vmware.com>
 L: linux-scsi@vger.kernel.org
 S: Maintained

@@ -2,7 +2,7 @@
 VERSION = 5
 PATCHLEVEL = 12
 SUBLEVEL = 0
-EXTRAVERSION = -rc2
+EXTRAVERSION = -rc5
 NAME = Frozen Wasteland
 
 # *DOCUMENTATION*
@@ -264,7 +264,8 @@ no-dot-config-targets := $(clean-targets) \
 			 $(version_h) headers headers_% archheaders archscripts \
 			 %asm-generic kernelversion %src-pkg dt_binding_check \
 			 outputmakefile
-no-sync-config-targets := $(no-dot-config-targets) %install kernelrelease
+no-sync-config-targets := $(no-dot-config-targets) %install kernelrelease \
+			  image_name
 single-targets := %.a %.i %.ko %.lds %.ll %.lst %.mod %.o %.s %.symtypes %/
 config-build :=
@@ -478,6 +479,7 @@ USERINCLUDE := \
 		-I$(objtree)/arch/$(SRCARCH)/include/generated/uapi \
 		-I$(srctree)/include/uapi \
 		-I$(objtree)/include/generated/uapi \
+		-include $(srctree)/include/linux/compiler-version.h \
 		-include $(srctree)/include/linux/kconfig.h
 
 # Use LINUXINCLUDE when you must reference the include/ directory.

@@ -632,13 +632,12 @@ config HAS_LTO_CLANG
 	def_bool y
 	# Clang >= 11: https://github.com/ClangBuiltLinux/linux/issues/510
 	depends on CC_IS_CLANG && CLANG_VERSION >= 110000 && LD_IS_LLD
-	depends on $(success,test $(LLVM) -eq 1)
 	depends on $(success,test $(LLVM_IAS) -eq 1)
 	depends on $(success,$(NM) --help | head -n 1 | grep -qi llvm)
 	depends on $(success,$(AR) --help | head -n 1 | grep -qi llvm)
 	depends on ARCH_SUPPORTS_LTO_CLANG
 	depends on !FTRACE_MCOUNT_USE_RECORDMCOUNT
-	depends on !KASAN
+	depends on !KASAN || KASAN_HW_TAGS
 	depends on !GCOV_KERNEL
 	help
 	  The compiler and Kconfig options support building with Clang's

@@ -348,6 +348,7 @@ config ARCH_EP93XX
 	select ARM_AMBA
 	imply ARM_PATCH_PHYS_VIRT
 	select ARM_VIC
+	select GENERIC_IRQ_MULTI_HANDLER
 	select AUTO_ZRELADDR
 	select CLKDEV_LOOKUP
 	select CLKSRC_MMIO

@@ -40,6 +40,9 @@
 		ethernet1 = &cpsw_emac1;
 		spi0 = &spi0;
 		spi1 = &spi1;
+		mmc0 = &mmc1;
+		mmc1 = &mmc2;
+		mmc2 = &mmc3;
 	};
 
 	cpus {

@@ -334,14 +334,6 @@
 };
 
 &pinctrl {
-	atmel,mux-mask = <
-			 /*	A	B	C	*/
-			 0xFFFFFE7F 0xC0E0397F 0xEF00019D	/* pioA */
-			 0x03FFFFFF 0x02FC7E68 0x00780000	/* pioB */
-			 0xffffffff 0xF83FFFFF 0xB800F3FC	/* pioC */
-			 0x003FFFFF 0x003F8000 0x00000000	/* pioD */
-			 >;
-
 	adc {
 		pinctrl_adc_default: adc_default {
 			atmel,pins = <AT91_PIOB 15 AT91_PERIPH_A AT91_PINCTRL_NONE>;

@@ -84,8 +84,8 @@
 	pinctrl-0 = <&pinctrl_macb0_default>;
 	phy-mode = "rmii";
 
-	ethernet-phy@0 {
-		reg = <0x0>;
+	ethernet-phy@7 {
+		reg = <0x7>;
 		interrupt-parent = <&pioA>;
 		interrupts = <PIN_PD31 IRQ_TYPE_LEVEL_LOW>;
 		pinctrl-names = "default";

@@ -210,9 +210,6 @@
 			micrel,led-mode = <1>;
 			clocks = <&clks IMX6UL_CLK_ENET_REF>;
 			clock-names = "rmii-ref";
-			reset-gpios = <&gpio_spi 1 GPIO_ACTIVE_LOW>;
-			reset-assert-us = <10000>;
-			reset-deassert-us = <100>;
 		};
@@ -222,9 +219,6 @@
 			micrel,led-mode = <1>;
 			clocks = <&clks IMX6UL_CLK_ENET2_REF>;
 			clock-names = "rmii-ref";
-			reset-gpios = <&gpio_spi 2 GPIO_ACTIVE_LOW>;
-			reset-assert-us = <10000>;
-			reset-deassert-us = <100>;
 		};
 	};
 };
@@ -243,6 +237,22 @@
 	status = "okay";
 };
 
+&gpio_spi {
+	eth0-phy-hog {
+		gpio-hog;
+		gpios = <1 GPIO_ACTIVE_HIGH>;
+		output-high;
+		line-name = "eth0-phy";
+	};
+
+	eth1-phy-hog {
+		gpio-hog;
+		gpios = <2 GPIO_ACTIVE_HIGH>;
+		output-high;
+		line-name = "eth1-phy";
+	};
+};
+
 &i2c1 {
 	clock-frequency = <100000>;
 	pinctrl-names = "default";

@@ -14,5 +14,6 @@
 };
 
 &gpmi {
+	fsl,use-minimum-ecc;
 	status = "okay";
 };

@@ -606,6 +606,15 @@
 			compatible = "microchip,sam9x60-pinctrl", "atmel,at91sam9x5-pinctrl", "atmel,at91rm9200-pinctrl", "simple-bus";
 			ranges = <0xfffff400 0xfffff400 0x800>;
 
+			/* mux-mask corresponding to sam9x60 SoC in TFBGA228L package */
+			atmel,mux-mask = <
+					 /*	A	B	C	*/
+					 0xffffffff 0xffe03fff 0xef00019d	/* pioA */
+					 0x03ffffff 0x02fc7e7f 0x00780000	/* pioB */
+					 0xffffffff 0xffffffff 0xf83fffff	/* pioC */
+					 0x003fffff 0x003f8000 0x00000000	/* pioD */
+					 >;
+
 			pioA: gpio@fffff400 {
 				compatible = "microchip,sam9x60-gpio", "atmel,at91sam9x5-gpio", "atmel,at91rm9200-gpio";
 				reg = <0xfffff400 0x200>;

@@ -7,6 +7,7 @@
 #include <linux/module.h>
 #include <linux/irq.h>
 #include <linux/irqdomain.h>
+#include <linux/irqchip.h>
 #include <linux/io.h>
 #include <linux/of.h>
 #include <linux/of_address.h>
@@ -162,7 +163,7 @@ static void __exception_irq_entry avic_handle_irq(struct pt_regs *regs)
  * interrupts. It registers the interrupt enable and disable functions
  * to the kernel for each interrupt source.
  */
-void __init mxc_init_irq(void __iomem *irqbase)
+static void __init mxc_init_irq(void __iomem *irqbase)
 {
 	struct device_node *np;
 	int irq_base;
@@ -220,3 +221,16 @@ void __init mxc_init_irq(void __iomem *irqbase)
 	printk(KERN_INFO "MXC IRQ initialized\n");
 }
+
+static int __init imx_avic_init(struct device_node *node,
+				struct device_node *parent)
+{
+	void __iomem *avic_base;
+
+	avic_base = of_iomap(node, 0);
+	BUG_ON(!avic_base);
+	mxc_init_irq(avic_base);
+	return 0;
+}
+
+IRQCHIP_DECLARE(imx_avic, "fsl,avic", imx_avic_init);

@@ -22,7 +22,6 @@ void mx35_map_io(void);
 void imx21_init_early(void);
 void imx31_init_early(void);
 void imx35_init_early(void);
-void mxc_init_irq(void __iomem *);
 void mx31_init_irq(void);
 void mx35_init_irq(void);
 void mxc_set_cpu_type(unsigned int type);

@@ -17,16 +17,6 @@ static void __init imx1_init_early(void)
 	mxc_set_cpu_type(MXC_CPU_MX1);
 }
 
-static void __init imx1_init_irq(void)
-{
-	void __iomem *avic_addr;
-
-	avic_addr = ioremap(MX1_AVIC_ADDR, SZ_4K);
-	WARN_ON(!avic_addr);
-
-	mxc_init_irq(avic_addr);
-}
-
 static const char * const imx1_dt_board_compat[] __initconst = {
 	"fsl,imx1",
 	NULL
@@ -34,7 +24,6 @@ static const char * const imx1_dt_board_compat[] __initconst = {
 DT_MACHINE_START(IMX1_DT, "Freescale i.MX1 (Device Tree Support)")
 	.init_early = imx1_init_early,
-	.init_irq = imx1_init_irq,
 	.dt_compat = imx1_dt_board_compat,
 	.restart = mxc_restart,
 MACHINE_END

@@ -22,17 +22,6 @@ static void __init imx25_dt_init(void)
 	imx_aips_allow_unprivileged_access("fsl,imx25-aips");
 }
 
-static void __init mx25_init_irq(void)
-{
-	struct device_node *np;
-	void __iomem *avic_base;
-
-	np = of_find_compatible_node(NULL, NULL, "fsl,avic");
-	avic_base = of_iomap(np, 0);
-	BUG_ON(!avic_base);
-	mxc_init_irq(avic_base);
-}
-
 static const char * const imx25_dt_board_compat[] __initconst = {
 	"fsl,imx25",
 	NULL
@@ -42,6 +31,5 @@ DT_MACHINE_START(IMX25_DT, "Freescale i.MX25 (Device Tree Support)")
 	.init_early = imx25_init_early,
 	.init_machine = imx25_dt_init,
 	.init_late = imx25_pm_init,
-	.init_irq = mx25_init_irq,
 	.dt_compat = imx25_dt_board_compat,
 MACHINE_END

@@ -56,17 +56,6 @@ static void __init imx27_init_early(void)
 	mxc_set_cpu_type(MXC_CPU_MX27);
 }
 
-static void __init mx27_init_irq(void)
-{
-	void __iomem *avic_base;
-	struct device_node *np;
-
-	np = of_find_compatible_node(NULL, NULL, "fsl,avic");
-	avic_base = of_iomap(np, 0);
-	BUG_ON(!avic_base);
-	mxc_init_irq(avic_base);
-}
-
 static const char * const imx27_dt_board_compat[] __initconst = {
 	"fsl,imx27",
 	NULL
@@ -75,7 +64,6 @@ static const char * const imx27_dt_board_compat[] __initconst = {
 DT_MACHINE_START(IMX27_DT, "Freescale i.MX27 (Device Tree Support)")
 	.map_io = mx27_map_io,
 	.init_early = imx27_init_early,
-	.init_irq = mx27_init_irq,
 	.init_late = imx27_pm_init,
 	.dt_compat = imx27_dt_board_compat,
 MACHINE_END

@@ -14,6 +14,5 @@ static const char * const imx31_dt_board_compat[] __initconst = {
 
 DT_MACHINE_START(IMX31_DT, "Freescale i.MX31 (Device Tree Support)")
 	.map_io = mx31_map_io,
 	.init_early = imx31_init_early,
-	.init_irq = mx31_init_irq,
 	.dt_compat = imx31_dt_board_compat,
 MACHINE_END

@@ -27,6 +27,5 @@ DT_MACHINE_START(IMX35_DT, "Freescale i.MX35 (Device Tree Support)")
 	.l2c_aux_mask = ~0,
 	.map_io = mx35_map_io,
 	.init_early = imx35_init_early,
-	.init_irq = mx35_init_irq,
 	.dt_compat = imx35_dt_board_compat,
 MACHINE_END

@@ -109,18 +109,6 @@ void __init imx31_init_early(void)
 	mx3_ccm_base = of_iomap(np, 0);
 	BUG_ON(!mx3_ccm_base);
 }
 
-void __init mx31_init_irq(void)
-{
-	void __iomem *avic_base;
-	struct device_node *np;
-
-	np = of_find_compatible_node(NULL, NULL, "fsl,imx31-avic");
-	avic_base = of_iomap(np, 0);
-	BUG_ON(!avic_base);
-	mxc_init_irq(avic_base);
-}
-
 #endif /* ifdef CONFIG_SOC_IMX31 */
 
 #ifdef CONFIG_SOC_IMX35
@@ -158,16 +146,4 @@ void __init imx35_init_early(void)
 	mx3_ccm_base = of_iomap(np, 0);
 	BUG_ON(!mx3_ccm_base);
 }
-
-void __init mx35_init_irq(void)
-{
-	void __iomem *avic_base;
-	struct device_node *np;
-
-	np = of_find_compatible_node(NULL, NULL, "fsl,imx35-avic");
-	avic_base = of_iomap(np, 0);
-	BUG_ON(!avic_base);
-	mxc_init_irq(avic_base);
-}
 #endif /* ifdef CONFIG_SOC_IMX35 */

@@ -88,34 +88,26 @@ static void __init sr_set_nvalues(struct omap_volt_data *volt_data,
 
 extern struct omap_sr_data omap_sr_pdata[];
 
-static int __init sr_dev_init(struct omap_hwmod *oh, void *user)
+static int __init sr_init_by_name(const char *name, const char *voltdm)
 {
 	struct omap_sr_data *sr_data = NULL;
 	struct omap_volt_data *volt_data;
-	struct omap_smartreflex_dev_attr *sr_dev_attr;
 	static int i;
 
-	if (!strncmp(oh->name, "smartreflex_mpu_iva", 20) ||
-	    !strncmp(oh->name, "smartreflex_mpu", 16))
+	if (!strncmp(name, "smartreflex_mpu_iva", 20) ||
+	    !strncmp(name, "smartreflex_mpu", 16))
 		sr_data = &omap_sr_pdata[OMAP_SR_MPU];
-	else if (!strncmp(oh->name, "smartreflex_core", 17))
+	else if (!strncmp(name, "smartreflex_core", 17))
 		sr_data = &omap_sr_pdata[OMAP_SR_CORE];
-	else if (!strncmp(oh->name, "smartreflex_iva", 16))
+	else if (!strncmp(name, "smartreflex_iva", 16))
 		sr_data = &omap_sr_pdata[OMAP_SR_IVA];
 
 	if (!sr_data) {
-		pr_err("%s: Unknown instance %s\n", __func__, oh->name);
+		pr_err("%s: Unknown instance %s\n", __func__, name);
 		return -EINVAL;
 	}
 
-	sr_dev_attr = (struct omap_smartreflex_dev_attr *)oh->dev_attr;
-	if (!sr_dev_attr || !sr_dev_attr->sensor_voltdm_name) {
-		pr_err("%s: No voltage domain specified for %s. Cannot initialize\n",
-		       __func__, oh->name);
-		goto exit;
-	}
-
-	sr_data->name = oh->name;
+	sr_data->name = name;
 	if (cpu_is_omap343x())
 		sr_data->ip_type = 1;
 	else
@@ -136,10 +128,10 @@ static int __init sr_dev_init(struct omap_hwmod *oh, void *user)
 		}
 	}
 
-	sr_data->voltdm = voltdm_lookup(sr_dev_attr->sensor_voltdm_name);
+	sr_data->voltdm = voltdm_lookup(voltdm);
 	if (!sr_data->voltdm) {
 		pr_err("%s: Unable to get voltage domain pointer for VDD %s\n",
-			__func__, sr_dev_attr->sensor_voltdm_name);
+			__func__, voltdm);
 		goto exit;
 	}
@@ -160,6 +152,20 @@ exit:
 	return 0;
 }
 
+static int __init sr_dev_init(struct omap_hwmod *oh, void *user)
+{
+	struct omap_smartreflex_dev_attr *sr_dev_attr;
+
+	sr_dev_attr = (struct omap_smartreflex_dev_attr *)oh->dev_attr;
+	if (!sr_dev_attr || !sr_dev_attr->sensor_voltdm_name) {
+		pr_err("%s: No voltage domain specified for %s. Cannot initialize\n",
+		       __func__, oh->name);
+		return 0;
+	}
+
+	return sr_init_by_name(oh->name, sr_dev_attr->sensor_voltdm_name);
+}
+
 /*
  * API to be called from board files to enable smartreflex
  * autocompensation at init.
@@ -169,7 +175,42 @@ void __init omap_enable_smartreflex_on_init(void)
 	sr_enable_on_init = true;
 }
 
+static const char * const omap4_sr_instances[] = {
+	"mpu",
+	"iva",
+	"core",
+};
+
+static const char * const dra7_sr_instances[] = {
+	"mpu",
+	"core",
+};
+
 int __init omap_devinit_smartreflex(void)
 {
+	const char * const *sr_inst;
+	int i, nr_sr = 0;
+
+	if (soc_is_omap44xx()) {
+		sr_inst = omap4_sr_instances;
+		nr_sr = ARRAY_SIZE(omap4_sr_instances);
+	} else if (soc_is_dra7xx()) {
+		sr_inst = dra7_sr_instances;
+		nr_sr = ARRAY_SIZE(dra7_sr_instances);
+	}
+
+	if (nr_sr) {
+		const char *name, *voltdm;
+
+		for (i = 0; i < nr_sr; i++) {
+			name = kasprintf(GFP_KERNEL, "smartreflex_%s", sr_inst[i]);
+			voltdm = sr_inst[i];
+			sr_init_by_name(name, voltdm);
+		}
+
+		return 0;
+	}
+
 	return omap_hwmod_for_each_by_class("smartreflex", sr_dev_init, NULL);
 }

@@ -11,6 +11,7 @@
 
 #include <xen/xen.h>
 #include <xen/interface/memory.h>
+#include <xen/grant_table.h>
 #include <xen/page.h>
 #include <xen/swiotlb-xen.h>
@@ -109,7 +110,7 @@ int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
 		map_ops[i].status = GNTST_general_error;
 		unmap.host_addr = map_ops[i].host_addr,
 		unmap.handle = map_ops[i].handle;
-		map_ops[i].handle = ~0;
+		map_ops[i].handle = INVALID_GRANT_HANDLE;
 		if (map_ops[i].flags & GNTMAP_device_map)
 			unmap.dev_bus_addr = map_ops[i].dev_bus_addr;
 		else
@@ -130,7 +131,6 @@ int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
 
 	return 0;
 }
-EXPORT_SYMBOL_GPL(set_foreign_p2m_mapping);
 
 int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
 			      struct gnttab_unmap_grant_ref *kunmap_ops,
@@ -145,7 +145,6 @@ int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
 
 	return 0;
 }
-EXPORT_SYMBOL_GPL(clear_foreign_p2m_mapping);
 
 bool __set_phys_to_machine_multi(unsigned long pfn,
 			unsigned long mfn, unsigned long nr_pages)

@@ -810,6 +810,16 @@ config QCOM_FALKOR_ERRATUM_E1041
 
 	  If unsure, say Y.
 
+config NVIDIA_CARMEL_CNP_ERRATUM
+	bool "NVIDIA Carmel CNP: CNP on Carmel semantically different than ARM cores"
+	default y
+	help
+	  If CNP is enabled on Carmel cores, non-sharable TLBIs on a core will not
+	  invalidate shared TLB entries installed by a different core, as it would
+	  on standard ARM cores.
+
+	  If unsure, say Y.
+
 config SOCIONEXT_SYNQUACER_PREITS
 	bool "Socionext Synquacer: Workaround for GICv3 pre-ITS"
 	default y
@@ -1055,8 +1065,6 @@ config HW_PERF_EVENTS
 config SYS_SUPPORTS_HUGETLBFS
 	def_bool y
 
-config ARCH_WANT_HUGE_PMD_SHARE
-
 config ARCH_HAS_CACHE_LINE_SIZE
 	def_bool y
@@ -1157,8 +1165,8 @@ config XEN
 
 config FORCE_MAX_ZONEORDER
 	int
-	default "14" if (ARM64_64K_PAGES && TRANSPARENT_HUGEPAGE)
-	default "12" if (ARM64_16K_PAGES && TRANSPARENT_HUGEPAGE)
+	default "14" if ARM64_64K_PAGES
+	default "12" if ARM64_16K_PAGES
 	default "11"
 	help
 	  The kernel memory allocator divides physically contiguous memory
@@ -1855,12 +1863,6 @@ config CMDLINE_FROM_BOOTLOADER
 	  the boot loader doesn't provide any, the default kernel command
 	  string provided in CMDLINE will be used.
 
-config CMDLINE_EXTEND
-	bool "Extend bootloader kernel arguments"
-	help
-	  The command-line arguments provided by the boot loader will be
-	  appended to the default kernel command string.
-
 config CMDLINE_FORCE
 	bool "Always use the default kernel command string"
 	help

@@ -198,6 +198,7 @@
 			ranges = <0x0 0x00 0x1700000 0x100000>;
 			reg = <0x00 0x1700000 0x0 0x100000>;
 			interrupts = <GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>;
+			dma-coherent;
 
 			sec_jr0: jr@10000 {
 				compatible = "fsl,sec-v5.4-job-ring",

@@ -348,6 +348,7 @@
 			ranges = <0x0 0x00 0x1700000 0x100000>;
 			reg = <0x00 0x1700000 0x0 0x100000>;
 			interrupts = <0 75 0x4>;
+			dma-coherent;
 
 			sec_jr0: jr@10000 {
 				compatible = "fsl,sec-v5.4-job-ring",

@@ -354,6 +354,7 @@
 			ranges = <0x0 0x00 0x1700000 0x100000>;
 			reg = <0x00 0x1700000 0x0 0x100000>;
 			interrupts = <GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>;
+			dma-coherent;
 
 			sec_jr0: jr@10000 {
 				compatible = "fsl,sec-v5.4-job-ring",

@@ -35,7 +35,7 @@
 
 &i2c2 {
 	clock-frequency = <400000>;
-	pinctrl-names = "default";
+	pinctrl-names = "default", "gpio";
 	pinctrl-0 = <&pinctrl_i2c2>;
 	pinctrl-1 = <&pinctrl_i2c2_gpio>;
 	sda-gpios = <&gpio5 17 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;

@@ -67,7 +67,7 @@
 
 &i2c1 {
 	clock-frequency = <400000>;
-	pinctrl-names = "default";
+	pinctrl-names = "default", "gpio";
 	pinctrl-0 = <&pinctrl_i2c1>;
 	pinctrl-1 = <&pinctrl_i2c1_gpio>;
 	sda-gpios = <&gpio5 15 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;

@@ -37,7 +37,7 @@ static inline __sum16 ip_fast_csum(const void *iph, unsigned int ihl)
 	} while (--n > 0);
 
 	sum += ((sum >> 32) | (sum << 32));
-	return csum_fold((__force u32)(sum >> 32));
+	return csum_fold((__force __wsum)(sum >> 32));
 }
 #define ip_fast_csum ip_fast_csum

@@ -66,7 +66,8 @@
 #define ARM64_WORKAROUND_1508412 58
 #define ARM64_HAS_LDAPR 59
 #define ARM64_KVM_PROTECTED_MODE 60
+#define ARM64_WORKAROUND_NVIDIA_CARMEL_CNP 61
 
-#define ARM64_NCAPS 61
+#define ARM64_NCAPS 62
 
 #endif /* __ASM_CPUCAPS_H */

View File

@@ -47,10 +47,10 @@
 #define __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context 2
 #define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa 3
 #define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid 4
-#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_local_vmid 5
+#define __KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context 5
 #define __KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff 6
 #define __KVM_HOST_SMCCC_FUNC___kvm_enable_ssbs 7
-#define __KVM_HOST_SMCCC_FUNC___vgic_v3_get_ich_vtr_el2 8
+#define __KVM_HOST_SMCCC_FUNC___vgic_v3_get_gic_config 8
 #define __KVM_HOST_SMCCC_FUNC___vgic_v3_read_vmcr 9
 #define __KVM_HOST_SMCCC_FUNC___vgic_v3_write_vmcr 10
 #define __KVM_HOST_SMCCC_FUNC___vgic_v3_init_lrs 11
@@ -183,16 +183,16 @@ DECLARE_KVM_HYP_SYM(__bp_harden_hyp_vecs);
 #define __bp_harden_hyp_vecs CHOOSE_HYP_SYM(__bp_harden_hyp_vecs)
 extern void __kvm_flush_vm_context(void);
+extern void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa,
 int level);
 extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu);
-extern void __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu);
 extern void __kvm_timer_set_cntvoff(u64 cntvoff);
 extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
-extern u64 __vgic_v3_get_ich_vtr_el2(void);
+extern u64 __vgic_v3_get_gic_config(void);
 extern u64 __vgic_v3_read_vmcr(void);
 extern void __vgic_v3_write_vmcr(u32 vmcr);
 extern void __vgic_v3_init_lrs(void);
@@ -83,6 +83,11 @@ void sysreg_restore_guest_state_vhe(struct kvm_cpu_context *ctxt);
 void __debug_switch_to_guest(struct kvm_vcpu *vcpu);
 void __debug_switch_to_host(struct kvm_vcpu *vcpu);
+#ifdef __KVM_NVHE_HYPERVISOR__
+void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu);
+void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu);
+#endif
 void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
 void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
@@ -97,7 +102,8 @@ bool kvm_host_psci_handler(struct kvm_cpu_context *host_ctxt);
 void __noreturn hyp_panic(void);
 #ifdef __KVM_NVHE_HYPERVISOR__
-void __noreturn __hyp_do_panic(bool restore_host, u64 spsr, u64 elr, u64 par);
+void __noreturn __hyp_do_panic(struct kvm_cpu_context *host_ctxt, u64 spsr,
+u64 elr, u64 par);
 #endif
 #endif /* __ARM64_KVM_HYP_H__ */
@@ -328,6 +328,11 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define ARCH_PFN_OFFSET ((unsigned long)PHYS_PFN_OFFSET)
 #if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
+#define page_to_virt(x) ({ \
+__typeof__(x) __page = x; \
+void *__addr = __va(page_to_phys(__page)); \
+(void *)__tag_set((const void *)__addr, page_kasan_tag(__page));\
+})
 #define virt_to_page(x) pfn_to_page(virt_to_pfn(x))
 #else
 #define page_to_virt(x) ({ \
@@ -63,23 +63,6 @@ static inline void cpu_switch_mm(pgd_t *pgd, struct mm_struct *mm)
 extern u64 idmap_t0sz;
 extern u64 idmap_ptrs_per_pgd;
-static inline bool __cpu_uses_extended_idmap(void)
-{
-if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52))
-return false;
-return unlikely(idmap_t0sz != TCR_T0SZ(VA_BITS));
-}
-/*
- * True if the extended ID map requires an extra level of translation table
- * to be configured.
- */
-static inline bool __cpu_uses_extended_idmap_level(void)
-{
-return ARM64_HW_PGTABLE_LEVELS(64 - idmap_t0sz) > CONFIG_PGTABLE_LEVELS;
-}
 /*
 * Ensure TCR.T0SZ is set to the provided value.
 */
@@ -66,7 +66,6 @@ extern bool arm64_use_ng_mappings;
 #define _PAGE_DEFAULT (_PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))
 #define PAGE_KERNEL __pgprot(PROT_NORMAL)
-#define PAGE_KERNEL_TAGGED __pgprot(PROT_NORMAL_TAGGED)
 #define PAGE_KERNEL_RO __pgprot((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY)
 #define PAGE_KERNEL_ROX __pgprot((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY)
 #define PAGE_KERNEL_EXEC __pgprot(PROT_NORMAL & ~PTE_PXN)
@@ -486,6 +486,9 @@ static inline pmd_t pmd_mkdevmap(pmd_t pmd)
 __pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_NORMAL_NC) | PTE_PXN | PTE_UXN)
 #define pgprot_device(prot) \
 __pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_DEVICE_nGnRE) | PTE_PXN | PTE_UXN)
+#define pgprot_tagged(prot) \
+__pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_NORMAL_TAGGED))
+#define pgprot_mhp pgprot_tagged
 /*
 * DMA allocations for non-coherent devices use what the Arm architecture calls
 * "Normal non-cacheable" memory, which permits speculation, unaligned accesses
@@ -251,6 +251,8 @@ unsigned long get_wchan(struct task_struct *p);
 extern struct task_struct *cpu_switch_to(struct task_struct *prev,
 struct task_struct *next);
+asmlinkage void arm64_preempt_schedule_irq(void);
 #define task_pt_regs(p) \
 ((struct pt_regs *)(THREAD_SIZE + task_stack_page(p)) - 1)
@@ -796,6 +796,11 @@
 #define ID_AA64MMFR0_PARANGE_48 0x5
 #define ID_AA64MMFR0_PARANGE_52 0x6
+#define ID_AA64MMFR0_TGRAN_2_SUPPORTED_DEFAULT 0x0
+#define ID_AA64MMFR0_TGRAN_2_SUPPORTED_NONE 0x1
+#define ID_AA64MMFR0_TGRAN_2_SUPPORTED_MIN 0x2
+#define ID_AA64MMFR0_TGRAN_2_SUPPORTED_MAX 0x7
 #ifdef CONFIG_ARM64_PA_BITS_52
 #define ID_AA64MMFR0_PARANGE_MAX ID_AA64MMFR0_PARANGE_52
 #else
@@ -962,13 +967,16 @@
 #if defined(CONFIG_ARM64_4K_PAGES)
 #define ID_AA64MMFR0_TGRAN_SHIFT ID_AA64MMFR0_TGRAN4_SHIFT
-#define ID_AA64MMFR0_TGRAN_SUPPORTED ID_AA64MMFR0_TGRAN4_SUPPORTED
+#define ID_AA64MMFR0_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_TGRAN4_SUPPORTED
+#define ID_AA64MMFR0_TGRAN_SUPPORTED_MAX 0x7
 #elif defined(CONFIG_ARM64_16K_PAGES)
 #define ID_AA64MMFR0_TGRAN_SHIFT ID_AA64MMFR0_TGRAN16_SHIFT
-#define ID_AA64MMFR0_TGRAN_SUPPORTED ID_AA64MMFR0_TGRAN16_SUPPORTED
+#define ID_AA64MMFR0_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_TGRAN16_SUPPORTED
+#define ID_AA64MMFR0_TGRAN_SUPPORTED_MAX 0xF
 #elif defined(CONFIG_ARM64_64K_PAGES)
 #define ID_AA64MMFR0_TGRAN_SHIFT ID_AA64MMFR0_TGRAN64_SHIFT
-#define ID_AA64MMFR0_TGRAN_SUPPORTED ID_AA64MMFR0_TGRAN64_SUPPORTED
+#define ID_AA64MMFR0_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_TGRAN64_SUPPORTED
+#define ID_AA64MMFR0_TGRAN_SUPPORTED_MAX 0x7
 #endif
 #define MVFR2_FPMISC_SHIFT 4
@@ -55,6 +55,8 @@ void arch_setup_new_exec(void);
 #define arch_setup_new_exec arch_setup_new_exec
 void arch_release_task_struct(struct task_struct *tsk);
+int arch_dup_task_struct(struct task_struct *dst,
+struct task_struct *src);
 #endif
@@ -525,6 +525,14 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 0, 0,
 1, 0),
 },
+#endif
+#ifdef CONFIG_NVIDIA_CARMEL_CNP_ERRATUM
+{
+/* NVIDIA Carmel */
+.desc = "NVIDIA Carmel CNP erratum",
+.capability = ARM64_WORKAROUND_NVIDIA_CARMEL_CNP,
+ERRATA_MIDR_ALL_VERSIONS(MIDR_NVIDIA_CARMEL),
+},
 #endif
 {
 }
@@ -1324,6 +1324,9 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
 if (is_kdump_kernel())
 return false;
+if (cpus_have_const_cap(ARM64_WORKAROUND_NVIDIA_CARMEL_CNP))
+return false;
 return has_cpuid_feature(entry, scope);
 }
@@ -353,7 +353,7 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
 * with the CLIDR_EL1 fields to avoid triggering false warnings
 * when there is a mismatch across the CPUs. Keep track of the
 * effective value of the CTR_EL0 in our internal records for
- * acurate sanity check and feature enablement.
+ * accurate sanity check and feature enablement.
 */
 info->reg_ctr = read_cpuid_effective_cachetype();
 info->reg_dczid = read_cpuid(DCZID_EL0);
@@ -64,5 +64,7 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
 ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
 {
 memcpy(buf, phys_to_virt((phys_addr_t)*ppos), count);
+*ppos += count;
 return count;
 }
@@ -319,7 +319,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 */
 adrp x5, __idmap_text_end
 clz x5, x5
-cmp x5, TCR_T0SZ(VA_BITS) // default T0SZ small enough?
+cmp x5, TCR_T0SZ(VA_BITS_MIN) // default T0SZ small enough?
 b.ge 1f // .. then skip VA range extension
 adr_l x6, idmap_t0sz
@@ -655,8 +655,10 @@ SYM_FUNC_END(__secondary_too_slow)
 SYM_FUNC_START(__enable_mmu)
 mrs x2, ID_AA64MMFR0_EL1
 ubfx x2, x2, #ID_AA64MMFR0_TGRAN_SHIFT, 4
-cmp x2, #ID_AA64MMFR0_TGRAN_SUPPORTED
-b.ne __no_granule_support
+cmp x2, #ID_AA64MMFR0_TGRAN_SUPPORTED_MIN
+b.lt __no_granule_support
+cmp x2, #ID_AA64MMFR0_TGRAN_SUPPORTED_MAX
+b.gt __no_granule_support
 update_early_cpu_boot_status 0, x2, x3
 adrp x2, idmap_pg_dir
 phys_to_ttbr x1, x1
@@ -163,33 +163,36 @@ static __init void __parse_cmdline(const char *cmdline, bool parse_aliases)
 } while (1);
 }
-static __init void parse_cmdline(void)
+static __init const u8 *get_bootargs_cmdline(void)
 {
-if (!IS_ENABLED(CONFIG_CMDLINE_FORCE)) {
 const u8 *prop;
 void *fdt;
 int node;
 fdt = get_early_fdt_ptr();
 if (!fdt)
-goto out;
+return NULL;
 node = fdt_path_offset(fdt, "/chosen");
 if (node < 0)
-goto out;
+return NULL;
 prop = fdt_getprop(fdt, node, "bootargs", NULL);
 if (!prop)
-goto out;
+return NULL;
-__parse_cmdline(prop, true);
-if (!IS_ENABLED(CONFIG_CMDLINE_EXTEND))
-return;
+return strlen(prop) ? prop : NULL;
 }
-out:
+static __init void parse_cmdline(void)
+{
+const u8 *prop = get_bootargs_cmdline();
+if (IS_ENABLED(CONFIG_CMDLINE_FORCE) || !prop)
 __parse_cmdline(CONFIG_CMDLINE, true);
+if (!IS_ENABLED(CONFIG_CMDLINE_FORCE) && prop)
+__parse_cmdline(prop, true);
 }
 /* Keep checkers quiet */
@@ -101,6 +101,9 @@ KVM_NVHE_ALIAS(__stop___kvm_ex_table);
 /* Array containing bases of nVHE per-CPU memory regions. */
 KVM_NVHE_ALIAS(kvm_arm_hyp_percpu_base);
+/* PMU available static key */
+KVM_NVHE_ALIAS(kvm_arm_pmu_available);
 #endif /* CONFIG_KVM */
 #endif /* __ARM64_KERNEL_IMAGE_VARS_H */
@@ -460,7 +460,7 @@ static inline int armv8pmu_counter_has_overflowed(u32 pmnc, int idx)
 return pmnc & BIT(ARMV8_IDX_TO_COUNTER(idx));
 }
-static inline u32 armv8pmu_read_evcntr(int idx)
+static inline u64 armv8pmu_read_evcntr(int idx)
 {
 u32 counter = ARMV8_IDX_TO_COUNTER(idx);
@@ -57,6 +57,8 @@
 #include <asm/processor.h>
 #include <asm/pointer_auth.h>
 #include <asm/stacktrace.h>
+#include <asm/switch_to.h>
+#include <asm/system_misc.h>
 #if defined(CONFIG_STACKPROTECTOR) && !defined(CONFIG_STACKPROTECTOR_PER_TASK)
 #include <linux/stackprotector.h>
@@ -194,8 +194,9 @@ void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
 #ifdef CONFIG_STACKTRACE
-void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
-struct task_struct *task, struct pt_regs *regs)
+noinline void arch_stack_walk(stack_trace_consume_fn consume_entry,
+void *cookie, struct task_struct *task,
+struct pt_regs *regs)
 {
 struct stackframe frame;
@@ -203,8 +204,8 @@ void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
 start_backtrace(&frame, regs->regs[29], regs->pc);
 else if (task == current)
 start_backtrace(&frame,
-(unsigned long)__builtin_frame_address(0),
-(unsigned long)arch_stack_walk);
+(unsigned long)__builtin_frame_address(1),
+(unsigned long)__builtin_return_address(0));
 else
 start_backtrace(&frame, thread_saved_fp(task),
 thread_saved_pc(task));
@@ -385,11 +385,16 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 last_ran = this_cpu_ptr(mmu->last_vcpu_ran);
 /*
+ * We guarantee that both TLBs and I-cache are private to each
+ * vcpu. If detecting that a vcpu from the same VM has
+ * previously run on the same physical CPU, call into the
+ * hypervisor code to nuke the relevant contexts.
+ *
 * We might get preempted before the vCPU actually runs, but
 * over-invalidation doesn't affect correctness.
 */
 if (*last_ran != vcpu->vcpu_id) {
-kvm_call_hyp(__kvm_tlb_flush_local_vmid, mmu);
+kvm_call_hyp(__kvm_flush_cpu_context, mmu);
 *last_ran = vcpu->vcpu_id;
 }
@@ -85,8 +85,10 @@ SYM_INNER_LABEL(__guest_exit_panic, SYM_L_GLOBAL)
 // If the hyp context is loaded, go straight to hyp_panic
 get_loaded_vcpu x0, x1
-cbz x0, hyp_panic
+cbnz x0, 1f
+b hyp_panic
+1:
 // The hyp context is saved so make sure it is restored to allow
 // hyp_panic to run at hyp and, subsequently, panic to run in the host.
 // This makes use of __guest_exit to avoid duplication but sets the
@@ -94,7 +96,7 @@ SYM_INNER_LABEL(__guest_exit_panic, SYM_L_GLOBAL)
 // current state is saved to the guest context but it will only be
 // accurate if the guest had been completely restored.
 adr_this_cpu x0, kvm_hyp_ctxt, x1
-adr x1, hyp_panic
+adr_l x1, hyp_panic
 str x1, [x0, #CPU_XREG_OFFSET(30)]
 get_vcpu_ptr x1, x0
@@ -146,7 +148,7 @@ SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
 // Now restore the hyp regs
 restore_callee_saved_regs x2
-set_loaded_vcpu xzr, x1, x2
+set_loaded_vcpu xzr, x2, x3
 alternative_if ARM64_HAS_RAS_EXTN
 // If we have the RAS extensions we can consume a pending error
@@ -90,14 +90,17 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
 * counter, which could make a PMXEVCNTR_EL0 access UNDEF at
 * EL1 instead of being trapped to EL2.
 */
+if (kvm_arm_support_pmu_v3()) {
 write_sysreg(0, pmselr_el0);
 write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
+}
 write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
 }
 static inline void __deactivate_traps_common(void)
 {
 write_sysreg(0, hstr_el2);
+if (kvm_arm_support_pmu_v3())
 write_sysreg(0, pmuserenr_el0);
 }
@@ -58,16 +58,24 @@ static void __debug_restore_spe(u64 pmscr_el1)
 write_sysreg_s(pmscr_el1, SYS_PMSCR_EL1);
 }
-void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
+void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu)
 {
 /* Disable and flush SPE data generation */
 __debug_save_spe(&vcpu->arch.host_debug_state.pmscr_el1);
+}
+void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
+{
 __debug_switch_to_guest_common(vcpu);
 }
+void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu)
+{
+__debug_restore_spe(vcpu->arch.host_debug_state.pmscr_el1);
+}
 void __debug_switch_to_host(struct kvm_vcpu *vcpu)
 {
-__debug_restore_spe(vcpu->arch.host_debug_state.pmscr_el1);
 __debug_switch_to_host_common(vcpu);
 }
@@ -71,7 +71,8 @@ SYM_FUNC_START(__host_enter)
 SYM_FUNC_END(__host_enter)
 /*
- * void __noreturn __hyp_do_panic(bool restore_host, u64 spsr, u64 elr, u64 par);
+ * void __noreturn __hyp_do_panic(struct kvm_cpu_context *host_ctxt, u64 spsr,
+ * u64 elr, u64 par);
 */
 SYM_FUNC_START(__hyp_do_panic)
 /* Prepare and exit to the host's panic funciton. */
@@ -82,9 +83,11 @@ SYM_FUNC_START(__hyp_do_panic)
 hyp_kimg_va lr, x6
 msr elr_el2, lr
-/* Set the panic format string. Use the, now free, LR as scratch. */
-ldr lr, =__hyp_panic_string
-hyp_kimg_va lr, x6
+mov x29, x0
+/* Load the format string into x0 and arguments into x1-7 */
+ldr x0, =__hyp_panic_string
+hyp_kimg_va x0, x6
 /* Load the format arguments into x1-7. */
 mov x6, x3
@@ -94,9 +97,7 @@ SYM_FUNC_START(__hyp_do_panic)
 mrs x5, hpfar_el2
 /* Enter the host, conditionally restoring the host context. */
-cmp x0, xzr
-mov x0, lr
-b.eq __host_enter_without_restoring
+cbz x29, __host_enter_without_restoring
 b __host_enter_for_panic
 SYM_FUNC_END(__hyp_do_panic)
@@ -46,11 +46,11 @@ static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
 __kvm_tlb_flush_vmid(kern_hyp_va(mmu));
 }
-static void handle___kvm_tlb_flush_local_vmid(struct kvm_cpu_context *host_ctxt)
+static void handle___kvm_flush_cpu_context(struct kvm_cpu_context *host_ctxt)
 {
 DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
-__kvm_tlb_flush_local_vmid(kern_hyp_va(mmu));
+__kvm_flush_cpu_context(kern_hyp_va(mmu));
 }
 static void handle___kvm_timer_set_cntvoff(struct kvm_cpu_context *host_ctxt)
@@ -67,9 +67,9 @@ static void handle___kvm_enable_ssbs(struct kvm_cpu_context *host_ctxt)
 write_sysreg_el2(tmp, SYS_SCTLR);
 }
-static void handle___vgic_v3_get_ich_vtr_el2(struct kvm_cpu_context *host_ctxt)
+static void handle___vgic_v3_get_gic_config(struct kvm_cpu_context *host_ctxt)
 {
-cpu_reg(host_ctxt, 1) = __vgic_v3_get_ich_vtr_el2();
+cpu_reg(host_ctxt, 1) = __vgic_v3_get_gic_config();
 }
 static void handle___vgic_v3_read_vmcr(struct kvm_cpu_context *host_ctxt)
@@ -115,10 +115,10 @@ static const hcall_t host_hcall[] = {
 HANDLE_FUNC(__kvm_flush_vm_context),
 HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
 HANDLE_FUNC(__kvm_tlb_flush_vmid),
-HANDLE_FUNC(__kvm_tlb_flush_local_vmid),
+HANDLE_FUNC(__kvm_flush_cpu_context),
 HANDLE_FUNC(__kvm_timer_set_cntvoff),
 HANDLE_FUNC(__kvm_enable_ssbs),
-HANDLE_FUNC(__vgic_v3_get_ich_vtr_el2),
+HANDLE_FUNC(__vgic_v3_get_gic_config),
 HANDLE_FUNC(__vgic_v3_read_vmcr),
 HANDLE_FUNC(__vgic_v3_write_vmcr),
 HANDLE_FUNC(__vgic_v3_init_lrs),
@@ -192,6 +192,14 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 pmu_switch_needed = __pmu_switch_to_guest(host_ctxt);
 __sysreg_save_state_nvhe(host_ctxt);
+/*
+ * We must flush and disable the SPE buffer for nVHE, as
+ * the translation regime(EL1&0) is going to be loaded with
+ * that of the guest. And we must do this before we change the
+ * translation regime to EL2 (via MDCR_EL2_E2PB == 0) and
+ * before we load guest Stage1.
+ */
+__debug_save_host_buffers_nvhe(vcpu);
 __adjust_pc(vcpu);
@@ -234,11 +242,12 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED)
 __fpsimd_save_fpexc32(vcpu);
+__debug_switch_to_host(vcpu);
 /*
 * This must come after restoring the host sysregs, since a non-VHE
 * system may enable SPE here and make use of the TTBRs.
 */
-__debug_switch_to_host(vcpu);
+__debug_restore_host_buffers_nvhe(vcpu);
 if (pmu_switch_needed)
 __pmu_switch_to_host(host_ctxt);
@@ -257,7 +266,6 @@ void __noreturn hyp_panic(void)
 u64 spsr = read_sysreg_el2(SYS_SPSR);
 u64 elr = read_sysreg_el2(SYS_ELR);
 u64 par = read_sysreg_par();
-bool restore_host = true;
 struct kvm_cpu_context *host_ctxt;
 struct kvm_vcpu *vcpu;
@@ -271,7 +279,7 @@ void __noreturn hyp_panic(void)
 __sysreg_restore_state_nvhe(host_ctxt);
 }
-__hyp_do_panic(restore_host, spsr, elr, par);
+__hyp_do_panic(host_ctxt, spsr, elr, par);
 unreachable();
 }
@@ -123,7 +123,7 @@ void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 __tlb_switch_to_host(&cxt);
 }
-void __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu)
+void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu)
 {
 struct tlb_inv_context cxt;
@@ -131,6 +131,7 @@ void __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu)
 __tlb_switch_to_guest(mmu, &cxt);
 __tlbi(vmalle1);
+asm volatile("ic iallu");
 dsb(nsh);
 isb();
@@ -223,6 +223,7 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
 goto out;
 if (!table) {
+data->addr = ALIGN_DOWN(data->addr, kvm_granule_size(level));
 data->addr += kvm_granule_size(level);
 goto out;
 }
@@ -405,9 +405,45 @@ void __vgic_v3_init_lrs(void)
 __gic_v3_set_lr(0, i);
 }
-u64 __vgic_v3_get_ich_vtr_el2(void)
+/*
+ * Return the GIC CPU configuration:
+ * - [31:0] ICH_VTR_EL2
+ * - [62:32] RES0
+ * - [63] MMIO (GICv2) capable
+ */
+u64 __vgic_v3_get_gic_config(void)
 {
-return read_gicreg(ICH_VTR_EL2);
+u64 val, sre = read_gicreg(ICC_SRE_EL1);
+unsigned long flags = 0;
+/*
+ * To check whether we have a MMIO-based (GICv2 compatible)
+ * CPU interface, we need to disable the system register
+ * view. To do that safely, we have to prevent any interrupt
+ * from firing (which would be deadly).
+ *
+ * Note that this only makes sense on VHE, as interrupts are
+ * already masked for nVHE as part of the exception entry to
+ * EL2.
+ */
+if (has_vhe())
+flags = local_daif_save();
+write_gicreg(0, ICC_SRE_EL1);
+isb();
+val = read_gicreg(ICC_SRE_EL1);
+write_gicreg(sre, ICC_SRE_EL1);
+isb();
+if (has_vhe())
+local_daif_restore(flags);
+val = (val & ICC_SRE_EL1_SRE) ? 0 : (1ULL << 63);
+val |= read_gicreg(ICH_VTR_EL2);
+return val;
 }
 u64 __vgic_v3_read_vmcr(void)
@@ -127,7 +127,7 @@ void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 __tlb_switch_to_host(&cxt);
 }
-void __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu)
+void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu)
 {
 struct tlb_inv_context cxt;
@@ -135,6 +135,7 @@
 __tlb_switch_to_guest(mmu, &cxt);
 __tlbi(vmalle1);
+asm volatile("ic iallu");
 dsb(nsh);
 isb();
@@ -1312,8 +1312,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 * Prevent userspace from creating a memory region outside of the IPA
 * space addressable by the KVM guest IPA space.
 */
-if (memslot->base_gfn + memslot->npages >=
-(kvm_phys_size(kvm) >> PAGE_SHIFT))
+if ((memslot->base_gfn + memslot->npages) > (kvm_phys_size(kvm) >> PAGE_SHIFT))
 return -EFAULT;
 mmap_read_lock(current->mm);
@@ -11,6 +11,8 @@
 #include <asm/kvm_emulate.h>
+DEFINE_STATIC_KEY_FALSE(kvm_arm_pmu_available);
 static int kvm_is_in_guest(void)
 {
 return kvm_get_running_vcpu() != NULL;
@@ -48,6 +50,14 @@ static struct perf_guest_info_callbacks kvm_guest_cbs = {
 int kvm_perf_init(void)
 {
+/*
+ * Check if HW_PERF_EVENTS are supported by checking the number of
+ * hardware performance counters. This could ensure the presence of
+ * a physical PMU and CONFIG_PERF_EVENT is selected.
+ */
+if (IS_ENABLED(CONFIG_ARM_PMU) && perf_num_counters() > 0)
+static_branch_enable(&kvm_arm_pmu_available);
 return perf_register_guest_info_callbacks(&kvm_guest_cbs);
 }
@@ -823,16 +823,6 @@ u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
 return val & mask;
 }
-bool kvm_arm_support_pmu_v3(void)
-{
-/*
- * Check if HW_PERF_EVENTS are supported by checking the number of
- * hardware performance counters. This could ensure the presence of
- * a physical PMU and CONFIG_PERF_EVENT is selected.
- */
-return (perf_num_counters() > 0);
-}
 int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu)
 {
 if (!kvm_vcpu_has_pmu(vcpu))
@@ -311,23 +311,24 @@ int kvm_set_ipa_limit(void)
 }
 switch (cpuid_feature_extract_unsigned_field(mmfr0, tgran_2)) {
-default:
-case 1:
+case ID_AA64MMFR0_TGRAN_2_SUPPORTED_NONE:
 kvm_err("PAGE_SIZE not supported at Stage-2, giving up\n");
 return -EINVAL;
-case 0:
+case ID_AA64MMFR0_TGRAN_2_SUPPORTED_DEFAULT:
 kvm_debug("PAGE_SIZE supported at Stage-2 (default)\n");
 break;
-case 2:
+case ID_AA64MMFR0_TGRAN_2_SUPPORTED_MIN ... ID_AA64MMFR0_TGRAN_2_SUPPORTED_MAX:
 kvm_debug("PAGE_SIZE supported at Stage-2 (advertised)\n");
 break;
+default:
+kvm_err("Unsupported value for TGRAN_2, giving up\n");
+return -EINVAL;
 }
 kvm_ipa_limit = id_aa64mmfr0_parange_to_phys_shift(parange);
-WARN(kvm_ipa_limit < KVM_PHYS_SHIFT,
-"KVM IPA Size Limit (%d bits) is smaller than default size\n",
-kvm_ipa_limit);
-kvm_info("IPA Size Limit: %d bits\n", kvm_ipa_limit);
+kvm_info("IPA Size Limit: %d bits%s\n", kvm_ipa_limit,
+((kvm_ipa_limit < KVM_PHYS_SHIFT) ?
+" (Reduced IPA size, limited VM/VMM compatibility)" : ""));
 return 0;
 }
@@ -356,6 +357,11 @@ int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
 return -EINVAL;
 } else {
 phys_shift = KVM_PHYS_SHIFT;
+if (phys_shift > kvm_ipa_limit) {
+pr_warn_once("%s using unsupported default IPA limit, upgrade your VMM\n",
+current->comm);
+return -EINVAL;
+}
 }
 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
@@ -574,9 +574,13 @@ early_param("kvm-arm.vgic_v4_enable", early_gicv4_enable);
 */
 int vgic_v3_probe(const struct gic_kvm_info *info)
 {
-u32 ich_vtr_el2 = kvm_call_hyp_ret(__vgic_v3_get_ich_vtr_el2);
+u64 ich_vtr_el2 = kvm_call_hyp_ret(__vgic_v3_get_gic_config);
+bool has_v2;
 int ret;
+has_v2 = ich_vtr_el2 >> 63;
+ich_vtr_el2 = (u32)ich_vtr_el2;
 /*
 * The ListRegs field is 5 bits, but there is an architectural
 * maximum of 16 list registers. Just ignore bit 4...
@@ -594,13 +598,15 @@ int vgic_v3_probe(const struct gic_kvm_info *info)
 gicv4_enable ? "en" : "dis");
 }
+kvm_vgic_global_state.vcpu_base = 0;
 if (!info->vcpu.start) {
 kvm_info("GICv3: no GICV resource entry\n");
-kvm_vgic_global_state.vcpu_base = 0;
+} else if (!has_v2) {
+pr_warn(FW_BUG "CPU interface incapable of MMIO access\n");
 } else if (!PAGE_ALIGNED(info->vcpu.start)) {
 pr_warn("GICV physical address 0x%llx not page aligned\n",
 (unsigned long long)info->vcpu.start);
-kvm_vgic_global_state.vcpu_base = 0;
 } else {
 kvm_vgic_global_state.vcpu_base = info->vcpu.start;
 kvm_vgic_global_state.can_emulate_gicv2 = true;
@@ -219,17 +219,40 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
 int pfn_valid(unsigned long pfn)
 {
-phys_addr_t addr = pfn << PAGE_SHIFT;
+phys_addr_t addr = PFN_PHYS(pfn);
-if ((addr >> PAGE_SHIFT) != pfn)
+/*
+ * Ensure the upper PAGE_SHIFT bits are clear in the
+ * pfn. Else it might lead to false positives when
+ * some of the upper bits are set, but the lower bits
+ * match a valid pfn.
+ */
+if (PHYS_PFN(addr) != pfn)
 return 0;
 #ifdef CONFIG_SPARSEMEM
+{
+struct mem_section *ms;
 if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
 return 0;
-if (!valid_section(__pfn_to_section(pfn)))
+ms = __pfn_to_section(pfn);
+if (!valid_section(ms))
 return 0;
+/*
+ * ZONE_DEVICE memory does not have the memblock entries.
+ * memblock_is_map_memory() check for ZONE_DEVICE based
+ * addresses will always fail. Even the normal hotplugged
+ * memory will never have MEMBLOCK_NOMAP flag set in their
+ * memblock entries. Skip memblock search for all non early
+ * memory sections covering all of hotplug memory including
+ * both normal and ZONE_DEVICE based.
+ */
+if (!early_section(ms))
+return pfn_section_valid(ms, pfn);
+}
 #endif
 return memblock_is_map_memory(addr);
 }
@@ -40,7 +40,7 @@
 #define NO_BLOCK_MAPPINGS BIT(0)
 #define NO_CONT_MAPPINGS BIT(1)
-u64 idmap_t0sz = TCR_T0SZ(VA_BITS);
+u64 idmap_t0sz = TCR_T0SZ(VA_BITS_MIN);
 u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
 u64 __section(".mmuoff.data.write") vabits_actual;
@@ -512,7 +512,8 @@ static void __init map_mem(pgd_t *pgdp)
 * if MTE is present. Otherwise, it has the same attributes as
 * PAGE_KERNEL.
 */
-__map_memblock(pgdp, start, end, PAGE_KERNEL_TAGGED, flags);
+__map_memblock(pgdp, start, end, pgprot_tagged(PAGE_KERNEL),
+flags);
 }
 /*
@@ -1447,6 +1448,22 @@ static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
 struct range arch_get_mappable_range(void)
 {
 struct range mhp_range;
+u64 start_linear_pa = __pa(_PAGE_OFFSET(vabits_actual));
+u64 end_linear_pa = __pa(PAGE_END - 1);
+if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
+/*
+ * Check for a wrap, it is possible because of randomized linear
+ * mapping the start physical address is actually bigger than
+ * the end physical address. In this case set start to zero
+ * because [0, end_linear_pa] range must still be able to cover
+ * all addressable physical addresses.
+ */
+if (start_linear_pa > end_linear_pa)
+start_linear_pa = 0;
+}
+WARN_ON(start_linear_pa > end_linear_pa);
 /*
 * Linear mapping region is the range [PAGE_OFFSET..(PAGE_END - 1)]
@@ -1454,8 +1471,9 @@ struct range arch_get_mappable_range(void)
 * range which can be mapped inside this linear mapping range, must
 * also be derived from its end points.
 */
-mhp_range.start = __pa(_PAGE_OFFSET(vabits_actual));
-mhp_range.end = __pa(PAGE_END - 1);
+mhp_range.start = start_linear_pa;
+mhp_range.end = end_linear_pa;
 return mhp_range;
 }
@@ -9,7 +9,7 @@ int arch_check_ftrace_location(struct kprobe *p)
 return 0;
 }
-/* Ftrace callback handler for kprobes -- called under preepmt disabed */
+/* Ftrace callback handler for kprobes -- called under preepmt disabled */
 void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
 struct ftrace_ops *ops, struct ftrace_regs *fregs)
 {
@@ -32,7 +32,7 @@ static inline void syscall_rollback(struct task_struct *task,
 static inline long syscall_get_error(struct task_struct *task,
 struct pt_regs *regs)
 {
-return regs->r10 == -1 ? regs->r8:0;
+return regs->r10 == -1 ? -regs->r8:0;
 }
 static inline long syscall_get_return_value(struct task_struct *task,
@@ -59,7 +59,7 @@ show_##name(struct device *dev, struct device_attribute *attr, \
 char *buf) \
 { \
 u32 cpu=dev->id; \
-return sprintf(buf, "%lx\n", name[cpu]); \
+return sprintf(buf, "%llx\n", name[cpu]); \
 }
 #define store(name) \
@@ -86,9 +86,9 @@ store_call_start(struct device *dev, struct device_attribute *attr,
 #ifdef ERR_INJ_DEBUG
 printk(KERN_DEBUG "pal_mc_err_inject for cpu%d:\n", cpu);
-printk(KERN_DEBUG "err_type_info=%lx,\n", err_type_info[cpu]);
+printk(KERN_DEBUG "err_type_info=%llx,\n", err_type_info[cpu]);
-printk(KERN_DEBUG "err_struct_info=%lx,\n", err_struct_info[cpu]);
+printk(KERN_DEBUG "err_struct_info=%llx,\n", err_struct_info[cpu]);
-printk(KERN_DEBUG "err_data_buffer=%lx, %lx, %lx.\n",
+printk(KERN_DEBUG "err_data_buffer=%llx, %llx, %llx.\n",
 err_data_buffer[cpu].data1,
 err_data_buffer[cpu].data2,
 err_data_buffer[cpu].data3);
@@ -117,8 +117,8 @@ store_call_start(struct device *dev, struct device_attribute *attr,
 #ifdef ERR_INJ_DEBUG
 printk(KERN_DEBUG "Returns: status=%d,\n", (int)status[cpu]);
-printk(KERN_DEBUG "capabilities=%lx,\n", capabilities[cpu]);
+printk(KERN_DEBUG "capabilities=%llx,\n", capabilities[cpu]);
-printk(KERN_DEBUG "resources=%lx\n", resources[cpu]);
+printk(KERN_DEBUG "resources=%llx\n", resources[cpu]);
 #endif
 return size;
 }
@@ -131,7 +131,7 @@ show_virtual_to_phys(struct device *dev, struct device_attribute *attr,
 char *buf)
 {
 unsigned int cpu=dev->id;
-return sprintf(buf, "%lx\n", phys_addr[cpu]);
+return sprintf(buf, "%llx\n", phys_addr[cpu]);
 }
 static ssize_t
@@ -145,7 +145,7 @@ store_virtual_to_phys(struct device *dev, struct device_attribute *attr,
 ret = get_user_pages_fast(virt_addr, 1, FOLL_WRITE, NULL);
 if (ret<=0) {
 #ifdef ERR_INJ_DEBUG
-printk("Virtual address %lx is not existing.\n",virt_addr);
+printk("Virtual address %llx is not existing.\n", virt_addr);
 #endif
 return -EINVAL;
 }
@@ -163,7 +163,7 @@ show_err_data_buffer(struct device *dev,
 {
 unsigned int cpu=dev->id;
-return sprintf(buf, "%lx, %lx, %lx\n",
+return sprintf(buf, "%llx, %llx, %llx\n",
 err_data_buffer[cpu].data1,
 err_data_buffer[cpu].data2,
 err_data_buffer[cpu].data3);
@@ -178,13 +178,13 @@ store_err_data_buffer(struct device *dev,
 int ret;
 #ifdef ERR_INJ_DEBUG
-printk("write err_data_buffer=[%lx,%lx,%lx] on cpu%d\n",
+printk("write err_data_buffer=[%llx,%llx,%llx] on cpu%d\n",
 err_data_buffer[cpu].data1,
 err_data_buffer[cpu].data2,
 err_data_buffer[cpu].data3,
 cpu);
 #endif
-ret=sscanf(buf, "%lx, %lx, %lx",
+ret = sscanf(buf, "%llx, %llx, %llx",
 &err_data_buffer[cpu].data1,
 &err_data_buffer[cpu].data2,
 &err_data_buffer[cpu].data3);
@@ -1824,7 +1824,7 @@ ia64_mca_cpu_init(void *cpu_data)
 data = mca_bootmem();
 first_time = 0;
 } else
-data = (void *)__get_free_pages(GFP_KERNEL,
+data = (void *)__get_free_pages(GFP_ATOMIC,
 get_order(sz));
 if (!data)
 panic("Could not allocate MCA memory for cpu %d\n",
@@ -2013,27 +2013,39 @@ static void syscall_get_set_args_cb(struct unw_frame_info *info, void *data)
 {
 struct syscall_get_set_args *args = data;
 struct pt_regs *pt = args->regs;
-unsigned long *krbs, cfm, ndirty;
+unsigned long *krbs, cfm, ndirty, nlocals, nouts;
 int i, count;
 if (unw_unwind_to_user(info) < 0)
 return;
+/*
+ * We get here via a few paths:
+ * - break instruction: cfm is shared with caller.
+ * syscall args are in out= regs, locals are non-empty.
+ * - epsinstruction: cfm is set by br.call
+ * locals don't exist.
+ *
+ * For both cases argguments are reachable in cfm.sof - cfm.sol.
+ * CFM: [ ... | sor: 17..14 | sol : 13..7 | sof : 6..0 ]
+ */
 cfm = pt->cr_ifs;
+nlocals = (cfm >> 7) & 0x7f; /* aka sol */
+nouts = (cfm & 0x7f) - nlocals; /* aka sof - sol */
 krbs = (unsigned long *)info->task + IA64_RBS_OFFSET/8;
 ndirty = ia64_rse_num_regs(krbs, krbs + (pt->loadrs >> 19));
 count = 0;
 if (in_syscall(pt))
-count = min_t(int, args->n, cfm & 0x7f);
+count = min_t(int, args->n, nouts);
+/* Iterate over outs. */
 for (i = 0; i < count; i++) {
+int j = ndirty + nlocals + i + args->i;
 if (args->rw)
-*ia64_rse_skip_regs(krbs, ndirty + i + args->i) =
-args->args[i];
+*ia64_rse_skip_regs(krbs, j) = args->args[i];
 else
-args->args[i] = *ia64_rse_skip_regs(krbs,
-ndirty + i + args->i);
+args->args[i] = *ia64_rse_skip_regs(krbs, j);
 }
 if (!args->rw) {
@@ -14,6 +14,7 @@
 #include <asm/addrspace.h>
 #include <asm/unaligned.h>
+#include <asm-generic/vmlinux.lds.h>
 /*
 * These two variables specify the free mem region
@@ -120,6 +121,13 @@ void decompress_kernel(unsigned long boot_heap_start)
 /* last four bytes is always image size in little endian */
 image_size = get_unaligned_le32((void *)&__image_end - 4);
+/* The device tree's address must be properly aligned */
+image_size = ALIGN(image_size, STRUCT_ALIGNMENT);
+puts("Copy device tree to address ");
+puthex(VMLINUX_LOAD_ADDRESS_ULL + image_size);
+puts("\n");
 /* copy dtb to where the booted kernel will expect it */
 memcpy((void *)VMLINUX_LOAD_ADDRESS_ULL + image_size,
 __appended_dtb, dtb_size);
@@ -12,8 +12,8 @@ AFLAGS_chacha-core.o += -O2 # needed to fill branch delay slots
 obj-$(CONFIG_CRYPTO_POLY1305_MIPS) += poly1305-mips.o
 poly1305-mips-y := poly1305-core.o poly1305-glue.o
-perlasm-flavour-$(CONFIG_CPU_MIPS32) := o32
-perlasm-flavour-$(CONFIG_CPU_MIPS64) := 64
+perlasm-flavour-$(CONFIG_32BIT) := o32
+perlasm-flavour-$(CONFIG_64BIT) := 64
 quiet_cmd_perlasm = PERLASM $@
 cmd_perlasm = $(PERL) $(<) $(perlasm-flavour-y) $(@)
@@ -24,8 +24,11 @@ extern void (*board_ebase_setup)(void);
 extern void (*board_cache_error_setup)(void);
 extern int register_nmi_notifier(struct notifier_block *nb);
+extern void reserve_exception_space(phys_addr_t addr, unsigned long size);
 extern char except_vec_nmi[];
+#define VECTORSPACING 0x100 /* for EI/VI mode */
 #define nmi_notifier(fn, pri) \
 ({ \
 static struct notifier_block fn##_nb = { \
@@ -26,6 +26,7 @@
 #include <asm/elf.h>
 #include <asm/pgtable-bits.h>
 #include <asm/spram.h>
+#include <asm/traps.h>
 #include <linux/uaccess.h>
 #include "fpu-probe.h"
@@ -1628,6 +1629,7 @@ static inline void cpu_probe_broadcom(struct cpuinfo_mips *c, unsigned int cpu)
 c->cputype = CPU_BMIPS3300;
 __cpu_name[cpu] = "Broadcom BMIPS3300";
 set_elf_platform(cpu, "bmips3300");
+reserve_exception_space(0x400, VECTORSPACING * 64);
 break;
 case PRID_IMP_BMIPS43XX: {
 int rev = c->processor_id & PRID_REV_MASK;
@@ -1638,6 +1640,7 @@
 __cpu_name[cpu] = "Broadcom BMIPS4380";
 set_elf_platform(cpu, "bmips4380");
 c->options |= MIPS_CPU_RIXI;
+reserve_exception_space(0x400, VECTORSPACING * 64);
 } else {
 c->cputype = CPU_BMIPS4350;
 __cpu_name[cpu] = "Broadcom BMIPS4350";
@@ -1654,6 +1657,7 @@
 __cpu_name[cpu] = "Broadcom BMIPS5000";
 set_elf_platform(cpu, "bmips5000");
 c->options |= MIPS_CPU_ULRI | MIPS_CPU_RIXI;
+reserve_exception_space(0x1000, VECTORSPACING * 64);
 break;
 }
 }
@@ -2133,6 +2137,8 @@ void cpu_probe(void)
 if (cpu == 0)
 __ua_limit = ~((1ull << cpu_vmbits) - 1);
 #endif
+reserve_exception_space(0, 0x1000);
 }
 void cpu_report(void)
@@ -21,6 +21,7 @@
 #include <asm/fpu.h>
 #include <asm/mipsregs.h>
 #include <asm/elf.h>
+#include <asm/traps.h>
 #include "fpu-probe.h"
@@ -158,6 +159,8 @@ void cpu_probe(void)
 cpu_set_fpu_opts(c);
 else
 cpu_set_nofpu_opts(c);
+reserve_exception_space(0, 0x400);
 }
 void cpu_report(void)
@@ -2009,13 +2009,16 @@ void __noreturn nmi_exception_handler(struct pt_regs *regs)
 nmi_exit();
 }
-#define VECTORSPACING 0x100 /* for EI/VI mode */
 unsigned long ebase;
 EXPORT_SYMBOL_GPL(ebase);
 unsigned long exception_handlers[32];
 unsigned long vi_handlers[64];
+void reserve_exception_space(phys_addr_t addr, unsigned long size)
+{
+memblock_reserve(addr, size);
+}
 void __init *set_except_vector(int n, void *addr)
 {
 unsigned long handler = (unsigned long) addr;
@@ -2367,10 +2370,7 @@ void __init trap_init(void)
 if (!cpu_has_mips_r2_r6) {
 ebase = CAC_BASE;
-ebase_pa = virt_to_phys((void *)ebase);
 vec_size = 0x400;
-memblock_reserve(ebase_pa, vec_size);
 } else {
 if (cpu_has_veic || cpu_has_vint)
 vec_size = 0x200 + VECTORSPACING*64;
@@ -145,6 +145,7 @@
 }
 #ifdef CONFIG_MIPS_ELF_APPENDED_DTB
+STRUCT_ALIGN();
 .appended_dtb : AT(ADDR(.appended_dtb) - LOAD_OFFSET) {
 *(.appended_dtb)
 KEEP(*(.appended_dtb))
@@ -172,6 +173,11 @@
 #endif
 #ifdef CONFIG_MIPS_RAW_APPENDED_DTB
+.fill : {
+FILL(0);
+BYTE(0);
+STRUCT_ALIGN();
+}
 __appended_dtb = .;
 /* leave space for appended DTB */
 . += 0x100000;
@@ -73,9 +73,10 @@ void __patch_exception(int exc, unsigned long addr);
 #endif
 #define OP_RT_RA_MASK 0xffff0000UL
-#define LIS_R2 0x3c020000UL
-#define ADDIS_R2_R12 0x3c4c0000UL
-#define ADDI_R2_R2 0x38420000UL
+#define LIS_R2 (PPC_INST_ADDIS | __PPC_RT(R2))
+#define ADDIS_R2_R12 (PPC_INST_ADDIS | __PPC_RT(R2) | __PPC_RA(R12))
+#define ADDI_R2_R2 (PPC_INST_ADDI | __PPC_RT(R2) | __PPC_RA(R2))
 static inline unsigned long ppc_function_entry(void *func)
 {
@@ -7,7 +7,7 @@
 #include <linux/bug.h>
 #include <asm/cputable.h>
-static inline bool early_cpu_has_feature(unsigned long feature)
+static __always_inline bool early_cpu_has_feature(unsigned long feature)
 {
 return !!((CPU_FTRS_ALWAYS & feature) ||
 (CPU_FTRS_POSSIBLE & cur_cpu_spec->cpu_features & feature));
@@ -46,7 +46,7 @@ static __always_inline bool cpu_has_feature(unsigned long feature)
 return static_branch_likely(&cpu_feature_keys[i]);
 }
 #else
-static inline bool cpu_has_feature(unsigned long feature)
+static __always_inline bool cpu_has_feature(unsigned long feature)
 {
 return early_cpu_has_feature(feature);
 }
@@ -410,7 +410,6 @@ DECLARE_INTERRUPT_HANDLER(altivec_assist_exception);
 DECLARE_INTERRUPT_HANDLER(CacheLockingException);
 DECLARE_INTERRUPT_HANDLER(SPEFloatingPointException);
 DECLARE_INTERRUPT_HANDLER(SPEFloatingPointRoundException);
-DECLARE_INTERRUPT_HANDLER(unrecoverable_exception);
 DECLARE_INTERRUPT_HANDLER(WatchdogException);
 DECLARE_INTERRUPT_HANDLER(kernel_bad_stack);
@@ -437,6 +436,8 @@ DECLARE_INTERRUPT_HANDLER_NMI(hmi_exception_realmode);
 DECLARE_INTERRUPT_HANDLER_ASYNC(TAUException);
+void unrecoverable_exception(struct pt_regs *regs);
 void replay_system_reset(void);
 void replay_soft_interrupts(void);
@@ -195,7 +195,7 @@ static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
 #define TRAP_FLAGS_MASK 0x11
 #define TRAP(regs) ((regs)->trap & ~TRAP_FLAGS_MASK)
 #define FULL_REGS(regs) (((regs)->trap & 1) == 0)
-#define SET_FULL_REGS(regs) ((regs)->trap |= 1)
+#define SET_FULL_REGS(regs) ((regs)->trap &= ~1)
 #endif
 #define CHECK_FULL_REGS(regs) BUG_ON(!FULL_REGS(regs))
 #define NV_REG_POISON 0xdeadbeefdeadbeefUL
@@ -210,7 +210,7 @@ static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
 #define TRAP_FLAGS_MASK 0x1F
 #define TRAP(regs) ((regs)->trap & ~TRAP_FLAGS_MASK)
 #define FULL_REGS(regs) (((regs)->trap & 1) == 0)
-#define SET_FULL_REGS(regs) ((regs)->trap |= 1)
+#define SET_FULL_REGS(regs) ((regs)->trap &= ~1)
 #define IS_CRITICAL_EXC(regs) (((regs)->trap & 2) != 0)
 #define IS_MCHECK_EXC(regs) (((regs)->trap & 4) != 0)
 #define IS_DEBUG_EXC(regs) (((regs)->trap & 8) != 0)
@@ -71,6 +71,16 @@
 {
 msr_check_and_clear(MSR_FP|MSR_VEC|MSR_VSX);
 }
+#else
+static inline void enable_kernel_vsx(void)
+{
+BUILD_BUG();
+}
+static inline void disable_kernel_vsx(void)
+{
+BUILD_BUG();
+}
 #endif
 #ifdef CONFIG_SPE
@@ -466,7 +466,7 @@ DEFINE_FIXED_SYMBOL(\name\()_common_real)
 ld r10,PACAKMSR(r13) /* get MSR value for kernel */
 /* MSR[RI] is clear iff using SRR regs */
-.if IHSRR == EXC_HV_OR_STD
+.if IHSRR_IF_HVMODE
 BEGIN_FTR_SECTION
 xori r10,r10,MSR_RI
 END_FTR_SECTION_IFCLR(CPU_FTR_HVMODE)
@@ -436,7 +436,6 @@ again:
 return ret;
 }
-void unrecoverable_exception(struct pt_regs *regs);
 void preempt_schedule_irq(void);
 notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsigned long msr)
Some files were not shown because too many files have changed in this diff