Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski 2023-05-25 20:56:19 -07:00
commit d6f1e0bfe5
345 changed files with 2949 additions and 1565 deletions


@ -215,12 +215,14 @@ again.
reduce the compile time enormously, especially if you are running a
universal kernel from a commodity Linux distribution.
There is a catch: the make target 'localmodconfig' will disable kernel
features you have not directly or indirectly through some program utilized
since you booted the system. You can reduce or nearly eliminate that risk by
using tricks outlined in the reference section; for quick testing purposes
that risk is often negligible, but it is an aspect you want to keep in mind
in case your kernel behaves oddly.
There is a catch: 'localmodconfig' is likely to disable kernel features you
did not use since you booted your Linux -- like drivers for currently
disconnected peripherals or a virtualization software you haven't used yet.
You can reduce or nearly eliminate that risk with tricks the reference
section outlines; but when building a kernel just for quick testing purposes
it is often negligible if such features are missing. But you should keep that
aspect in mind when using a kernel built with this make target, as it might
be the reason why something you only use occasionally stopped working.
[:ref:`details<configuration>`]
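One common way to shrink that risk (a sketch of the usual trick, not a quote
from the reference section; the snapshot path is arbitrary) is to capture an
lsmod snapshot while every device and program you care about is in use and
feed it to the make target::

    # take the snapshot while all relevant hardware/software is active
    lsmod > /tmp/lsmod.snapshot
    # let localmodconfig trim the configuration based on that snapshot
    make LSMOD=/tmp/lsmod.snapshot localmodconfig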
@ -271,6 +273,9 @@ again.
does nothing at all; in that case you have to manually install your kernel,
as outlined in the reference section.
If you are running an immutable Linux distribution, check its documentation
and the web to find out how to install your own kernel there.
[:ref:`details<install>`]
.. _another_sbs:
@ -291,29 +296,29 @@ again.
version you care about, as git otherwise might retrieve the entire commit
history::
git fetch --shallow-exclude=v6.1 origin
git fetch --shallow-exclude=v6.0 origin
If you modified the sources (for example by applying a patch), you now need
to discard those modifications; that's because git otherwise will not be able
to switch to the sources of another version due to potential conflicting
changes::
Now switch to the version you are interested in -- but be aware the command
used here will discard any modifications you performed, as they would
conflict with the sources you want to checkout::
git reset --hard
Now checkout the version you are interested in, as explained above::
git checkout --detach origin/master
git checkout --force --detach origin/master
At this point you might want to patch the sources again or set/modify a build
tag, as explained earlier; afterwards adjust the build configuration to the
new codebase and build your next kernel::
tag, as explained earlier. Afterwards adjust the build configuration to the
new codebase using olddefconfig, which will now adjust the configuration file
you prepared earlier using localmodconfig (~/linux/.config) for your next
kernel::
# reminder: if you want to apply patches, do it at this point
# reminder: you might want to update your build tag at this point
make olddefconfig
Now build your kernel::
make -j $(nproc --all)
Install the kernel as outlined above::
Afterwards install the kernel as outlined above::
command -v installkernel && sudo make modules_install install
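Pieced together from the updated lines of this hunk, the revised
switch-and-rebuild sequence reads roughly as follows (a sketch assuming the
sources live in ~/linux/ and v6.0 is the oldest release you care about)::

    git fetch --shallow-exclude=v6.0 origin
    git checkout --force --detach origin/master
    # reminder: apply patches or update your build tag at this point
    make olddefconfig
    make -j $(nproc --all)
    command -v installkernel && sudo make modules_install install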
@ -584,11 +589,11 @@ versions and individual commits at hand at any time::
curl -L \
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/clone.bundle \
-o linux-stable.git.bundle
git clone clone.bundle ~/linux/
git clone linux-stable.git.bundle ~/linux/
rm linux-stable.git.bundle
cd ~/linux/
git remote set-url origin
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
git remote set-url origin \
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
git fetch origin
git checkout --detach origin/master
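With the corrected bundle name and the added line continuation, the sequence
this hunk ends up with reads as one block::

    curl -L \
      https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/clone.bundle \
      -o linux-stable.git.bundle
    git clone linux-stable.git.bundle ~/linux/
    rm linux-stable.git.bundle
    cd ~/linux/
    git remote set-url origin \
      https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
    git fetch origin
    git checkout --detach origin/master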


@ -1,8 +1,8 @@
.. SPDX-License-Identifier: GPL-2.0
=====
cdrom
=====
======
CD-ROM
======
.. toctree::
:maxdepth: 1


@ -32,7 +32,7 @@ properties:
maxItems: 1
iommus:
maxItems: 1
maxItems: 4
power-domains:
maxItems: 1


@ -82,6 +82,18 @@ properties:
Indicates if the DSI controller is driving a panel which needs
2 DSI links.
qcom,master-dsi:
type: boolean
description: |
Indicates if the DSI controller is the master DSI controller when
qcom,dual-dsi-mode enabled.
qcom,sync-dual-dsi:
type: boolean
description: |
Indicates if the DSI controller needs to sync the other DSI controller
with MIPI DCS commands when qcom,dual-dsi-mode enabled.
assigned-clocks:
minItems: 2
maxItems: 4


@ -55,7 +55,9 @@ properties:
description: TDM TX current sense time slot.
'#sound-dai-cells':
const: 1
# The codec has a single DAI, the #sound-dai-cells=<1>; case is left in for backward
# compatibility but is deprecated.
enum: [0, 1]
required:
- compatible
@ -72,7 +74,7 @@ examples:
codec: codec@4c {
compatible = "ti,tas2562";
reg = <0x4c>;
#sound-dai-cells = <1>;
#sound-dai-cells = <0>;
interrupt-parent = <&gpio1>;
interrupts = <14>;
shutdown-gpios = <&gpio1 15 0>;


@ -57,7 +57,9 @@ properties:
- 1 # Falling edge
'#sound-dai-cells':
const: 1
# The codec has a single DAI, the #sound-dai-cells=<1>; case is left in for backward
# compatibility but is deprecated.
enum: [0, 1]
required:
- compatible
@ -74,7 +76,7 @@ examples:
codec: codec@41 {
compatible = "ti,tas2770";
reg = <0x41>;
#sound-dai-cells = <1>;
#sound-dai-cells = <0>;
interrupt-parent = <&gpio1>;
interrupts = <14>;
reset-gpio = <&gpio1 15 0>;


@ -50,7 +50,9 @@ properties:
description: TDM TX voltage sense time slot.
'#sound-dai-cells':
const: 1
# The codec has a single DAI, the #sound-dai-cells=<1>; case is left in for backward
# compatibility but is deprecated.
enum: [0, 1]
required:
- compatible
@ -67,7 +69,7 @@ examples:
codec: codec@38 {
compatible = "ti,tas2764";
reg = <0x38>;
#sound-dai-cells = <1>;
#sound-dai-cells = <0>;
interrupt-parent = <&gpio1>;
interrupts = <14>;
reset-gpios = <&gpio1 15 0>;


@ -8,7 +8,7 @@ Required properties:
"ti,tlv320aic32x6" TLV320AIC3206, TLV320AIC3256
"ti,tas2505" TAS2505, TAS2521
- reg: I2C slave address
- supply-*: Required supply regulators are:
- *-supply: Required supply regulators are:
"iov" - digital IO power supply
"ldoin" - LDO power supply
"dv" - Digital core power supply


@ -6,8 +6,7 @@ Ramfs, rootfs and initramfs
October 17, 2005
Rob Landley <rob@landley.net>
=============================
:Author: Rob Landley <rob@landley.net>
What is ramfs?
--------------


@ -147,6 +147,7 @@ replicas continue to be exactly same.
3) Setting mount states
-----------------------
The mount command (util-linux package) can be used to set mount
states::
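The commands this line introduces are not part of the hunk; as a hedged
illustration, setting one of the mount states with util-linux looks like::

    mount --make-shared /mnt    # other states: --make-slave, --make-private, --make-unbindable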
@ -612,6 +613,7 @@ replicas continue to be exactly same.
6) Quiz
-------
A. What is the result of the following command sequence?
@ -673,6 +675,7 @@ replicas continue to be exactly same.
/mnt/1/test be?
7) FAQ
------
Q1. Why is bind mount needed? How is it different from symbolic links?
symbolic links can get stale if the destination mount gets
@ -841,6 +844,7 @@ replicas continue to be exactly same.
tmp usr tmp usr tmp usr
8) Implementation
-----------------
8A) Datastructure


@ -1,7 +1,7 @@
.. SPDX-License-Identifier: GPL-2.0
====
fpga
FPGA
====
.. toctree::


@ -1,7 +1,7 @@
.. SPDX-License-Identifier: GPL-2.0
=======
locking
Locking
=======
.. toctree::


@ -1,7 +1,7 @@
.. SPDX-License-Identifier: GPL-2.0
======
pcmcia
PCMCIA
======
.. toctree::


@ -551,7 +551,6 @@ These are the steps:
* IOMMU_SUPPORT
* S390
* ZCRYPT
* S390_AP_IOMMU
* VFIO
* KVM


@ -1,5 +1,5 @@
=================================
brief tutorial on CRC computation
Brief tutorial on CRC computation
=================================
A CRC is a long-division remainder. You add the CRC to the message,


@ -1,7 +1,7 @@
.. SPDX-License-Identifier: GPL-2.0
======
timers
Timers
======
.. toctree::


@ -1677,10 +1677,7 @@ F: drivers/power/reset/arm-versatile-reboot.c
F: drivers/soc/versatile/
ARM KOMEDA DRM-KMS DRIVER
M: James (Qian) Wang <james.qian.wang@arm.com>
M: Liviu Dudau <liviu.dudau@arm.com>
M: Mihail Atanassov <mihail.atanassov@arm.com>
L: Mali DP Maintainers <malidp@foss.arm.com>
S: Supported
T: git git://anongit.freedesktop.org/drm/drm-misc
F: Documentation/devicetree/bindings/display/arm,komeda.yaml
@ -1701,8 +1698,6 @@ F: include/uapi/drm/panfrost_drm.h
ARM MALI-DP DRM DRIVER
M: Liviu Dudau <liviu.dudau@arm.com>
M: Brian Starkey <brian.starkey@arm.com>
L: Mali DP Maintainers <malidp@foss.arm.com>
S: Supported
T: git git://anongit.freedesktop.org/drm/drm-misc
F: Documentation/devicetree/bindings/display/arm,malidp.yaml
@ -4914,7 +4909,6 @@ F: drivers/media/cec/i2c/ch7322.c
CIRRUS LOGIC AUDIO CODEC DRIVERS
M: James Schulman <james.schulman@cirrus.com>
M: David Rhodes <david.rhodes@cirrus.com>
M: Lucas Tanure <tanureal@opensource.cirrus.com>
M: Richard Fitzgerald <rf@opensource.cirrus.com>
L: alsa-devel@alsa-project.org (moderated for non-subscribers)
L: patches@opensource.cirrus.com
@ -6012,7 +6006,7 @@ W: http://www.dialog-semiconductor.com/products
F: Documentation/devicetree/bindings/input/da90??-onkey.txt
F: Documentation/devicetree/bindings/input/dlg,da72??.txt
F: Documentation/devicetree/bindings/mfd/da90*.txt
F: Documentation/devicetree/bindings/mfd/da90*.yaml
F: Documentation/devicetree/bindings/mfd/dlg,da90*.yaml
F: Documentation/devicetree/bindings/regulator/da92*.txt
F: Documentation/devicetree/bindings/regulator/dlg,da9*.yaml
F: Documentation/devicetree/bindings/regulator/slg51000.txt
@ -18585,10 +18579,9 @@ F: Documentation/admin-guide/LSM/SafeSetID.rst
F: security/safesetid/
SAMSUNG AUDIO (ASoC) DRIVERS
M: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
M: Sylwester Nawrocki <s.nawrocki@samsung.com>
L: alsa-devel@alsa-project.org (moderated for non-subscribers)
S: Supported
S: Maintained
B: mailto:linux-samsung-soc@vger.kernel.org
F: Documentation/devicetree/bindings/sound/samsung*
F: sound/soc/samsung/
@ -18716,7 +18709,6 @@ F: include/dt-bindings/clock/samsung,*.h
F: include/linux/clk/samsung.h
SAMSUNG SPI DRIVERS
M: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
M: Andi Shyti <andi.shyti@kernel.org>
L: linux-spi@vger.kernel.org
L: linux-samsung-soc@vger.kernel.org


@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 4
SUBLEVEL = 0
EXTRAVERSION = -rc2
EXTRAVERSION = -rc3
NAME = Hurr durr I'ma ninja sloth
# *DOCUMENTATION*


@ -92,7 +92,7 @@
#define RETURN_READ_PMEVCNTRN(n) \
return read_sysreg(PMEVCNTR##n)
static unsigned long read_pmevcntrn(int n)
static inline unsigned long read_pmevcntrn(int n)
{
PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
return 0;
@ -100,14 +100,14 @@ static unsigned long read_pmevcntrn(int n)
#define WRITE_PMEVCNTRN(n) \
write_sysreg(val, PMEVCNTR##n)
static void write_pmevcntrn(int n, unsigned long val)
static inline void write_pmevcntrn(int n, unsigned long val)
{
PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
}
#define WRITE_PMEVTYPERN(n) \
write_sysreg(val, PMEVTYPER##n)
static void write_pmevtypern(int n, unsigned long val)
static inline void write_pmevtypern(int n, unsigned long val)
{
PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
}


@ -13,7 +13,7 @@
#define RETURN_READ_PMEVCNTRN(n) \
return read_sysreg(pmevcntr##n##_el0)
static unsigned long read_pmevcntrn(int n)
static inline unsigned long read_pmevcntrn(int n)
{
PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
return 0;
@ -21,14 +21,14 @@ static unsigned long read_pmevcntrn(int n)
#define WRITE_PMEVCNTRN(n) \
write_sysreg(val, pmevcntr##n##_el0)
static void write_pmevcntrn(int n, unsigned long val)
static inline void write_pmevcntrn(int n, unsigned long val)
{
PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
}
#define WRITE_PMEVTYPERN(n) \
write_sysreg(val, pmevtyper##n##_el0)
static void write_pmevtypern(int n, unsigned long val)
static inline void write_pmevtypern(int n, unsigned long val)
{
PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
}


@ -126,6 +126,10 @@
#define APPLE_CPU_PART_M1_FIRESTORM_MAX 0x029
#define APPLE_CPU_PART_M2_BLIZZARD 0x032
#define APPLE_CPU_PART_M2_AVALANCHE 0x033
#define APPLE_CPU_PART_M2_BLIZZARD_PRO 0x034
#define APPLE_CPU_PART_M2_AVALANCHE_PRO 0x035
#define APPLE_CPU_PART_M2_BLIZZARD_MAX 0x038
#define APPLE_CPU_PART_M2_AVALANCHE_MAX 0x039
#define AMPERE_CPU_PART_AMPERE1 0xAC3
@ -181,6 +185,10 @@
#define MIDR_APPLE_M1_FIRESTORM_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_FIRESTORM_MAX)
#define MIDR_APPLE_M2_BLIZZARD MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_BLIZZARD)
#define MIDR_APPLE_M2_AVALANCHE MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_AVALANCHE)
#define MIDR_APPLE_M2_BLIZZARD_PRO MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_BLIZZARD_PRO)
#define MIDR_APPLE_M2_AVALANCHE_PRO MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_AVALANCHE_PRO)
#define MIDR_APPLE_M2_BLIZZARD_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_BLIZZARD_MAX)
#define MIDR_APPLE_M2_AVALANCHE_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_AVALANCHE_MAX)
#define MIDR_AMPERE1 MIDR_CPU_MODEL(ARM_CPU_IMP_AMPERE, AMPERE_CPU_PART_AMPERE1)
/* Fujitsu Erratum 010001 affects A64FX 1.0 and 1.1, (v0r0 and v1r0) */


@ -209,6 +209,7 @@ struct kvm_pgtable_visit_ctx {
kvm_pte_t old;
void *arg;
struct kvm_pgtable_mm_ops *mm_ops;
u64 start;
u64 addr;
u64 end;
u32 level;


@ -66,13 +66,10 @@ void mte_sync_tags(pte_t old_pte, pte_t pte)
return;
/* if PG_mte_tagged is set, tags have already been initialised */
for (i = 0; i < nr_pages; i++, page++) {
if (!page_mte_tagged(page)) {
for (i = 0; i < nr_pages; i++, page++)
if (!page_mte_tagged(page))
mte_sync_page_tags(page, old_pte, check_swap,
pte_is_tagged);
set_page_mte_tagged(page);
}
}
/* ensure the tags are visible before the PTE is set */
smp_wmb();


@ -288,7 +288,7 @@ static int aarch32_alloc_kuser_vdso_page(void)
memcpy((void *)(vdso_page + 0x1000 - kuser_sz), __kuser_helper_start,
kuser_sz);
aarch32_vectors_page = virt_to_page(vdso_page);
aarch32_vectors_page = virt_to_page((void *)vdso_page);
return 0;
}


@ -81,26 +81,34 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
fpsimd_kvm_prepare();
/*
* We will check TIF_FOREIGN_FPSTATE just before entering the
* guest in kvm_arch_vcpu_ctxflush_fp() and override this to
* FP_STATE_FREE if the flag set.
*/
vcpu->arch.fp_state = FP_STATE_HOST_OWNED;
vcpu_clear_flag(vcpu, HOST_SVE_ENABLED);
if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
vcpu_set_flag(vcpu, HOST_SVE_ENABLED);
/*
* We don't currently support SME guests but if we leave
* things in streaming mode then when the guest starts running
* FPSIMD or SVE code it may generate SME traps so as a
* special case if we are in streaming mode we force the host
* state to be saved now and exit streaming mode so that we
* don't have to handle any SME traps for valid guest
* operations. Do this for ZA as well for now for simplicity.
*/
if (system_supports_sme()) {
vcpu_clear_flag(vcpu, HOST_SME_ENABLED);
if (read_sysreg(cpacr_el1) & CPACR_EL1_SMEN_EL0EN)
vcpu_set_flag(vcpu, HOST_SME_ENABLED);
/*
* If PSTATE.SM is enabled then save any pending FP
* state and disable PSTATE.SM. If we leave PSTATE.SM
* enabled and the guest does not enable SME via
* CPACR_EL1.SMEN then operations that should be valid
* may generate SME traps from EL1 to EL1 which we
* can't intercept and which would confuse the guest.
*
* Do the same for PSTATE.ZA in the case where there
* is state in the registers which has not already
* been saved, this is very unlikely to happen.
*/
if (read_sysreg_s(SYS_SVCR) & (SVCR_SM_MASK | SVCR_ZA_MASK)) {
vcpu->arch.fp_state = FP_STATE_FREE;
fpsimd_save_and_flush_cpu_state();


@ -177,9 +177,17 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
sve_guest = vcpu_has_sve(vcpu);
esr_ec = kvm_vcpu_trap_get_class(vcpu);
/* Don't handle SVE traps for non-SVE vcpus here: */
if (!sve_guest && esr_ec != ESR_ELx_EC_FP_ASIMD)
/* Only handle traps the vCPU can support here: */
switch (esr_ec) {
case ESR_ELx_EC_FP_ASIMD:
break;
case ESR_ELx_EC_SVE:
if (!sve_guest)
return false;
break;
default:
return false;
}
/* Valid trap. Switch the context: */


@ -58,8 +58,9 @@
struct kvm_pgtable_walk_data {
struct kvm_pgtable_walker *walker;
const u64 start;
u64 addr;
u64 end;
const u64 end;
};
static bool kvm_phys_is_valid(u64 phys)
@ -201,6 +202,7 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
.old = READ_ONCE(*ptep),
.arg = data->walker->arg,
.mm_ops = mm_ops,
.start = data->start,
.addr = data->addr,
.end = data->end,
.level = level,
@ -293,6 +295,7 @@ int kvm_pgtable_walk(struct kvm_pgtable *pgt, u64 addr, u64 size,
struct kvm_pgtable_walker *walker)
{
struct kvm_pgtable_walk_data walk_data = {
.start = ALIGN_DOWN(addr, PAGE_SIZE),
.addr = ALIGN_DOWN(addr, PAGE_SIZE),
.end = PAGE_ALIGN(walk_data.addr + size),
.walker = walker,
@ -349,7 +352,7 @@ int kvm_pgtable_get_leaf(struct kvm_pgtable *pgt, u64 addr,
}
struct hyp_map_data {
u64 phys;
const u64 phys;
kvm_pte_t attr;
};
@ -407,13 +410,12 @@ enum kvm_pgtable_prot kvm_pgtable_hyp_pte_prot(kvm_pte_t pte)
static bool hyp_map_walker_try_leaf(const struct kvm_pgtable_visit_ctx *ctx,
struct hyp_map_data *data)
{
u64 phys = data->phys + (ctx->addr - ctx->start);
kvm_pte_t new;
u64 granule = kvm_granule_size(ctx->level), phys = data->phys;
if (!kvm_block_mapping_supported(ctx, phys))
return false;
data->phys += granule;
new = kvm_init_valid_leaf_pte(phys, data->attr, ctx->level);
if (ctx->old == new)
return true;
@ -576,7 +578,7 @@ void kvm_pgtable_hyp_destroy(struct kvm_pgtable *pgt)
}
struct stage2_map_data {
u64 phys;
const u64 phys;
kvm_pte_t attr;
u8 owner_id;
@ -794,20 +796,43 @@ static bool stage2_pte_executable(kvm_pte_t pte)
return !(pte & KVM_PTE_LEAF_ATTR_HI_S2_XN);
}
static u64 stage2_map_walker_phys_addr(const struct kvm_pgtable_visit_ctx *ctx,
const struct stage2_map_data *data)
{
u64 phys = data->phys;
/*
* Stage-2 walks to update ownership data are communicated to the map
* walker using an invalid PA. Avoid offsetting an already invalid PA,
* which could overflow and make the address valid again.
*/
if (!kvm_phys_is_valid(phys))
return phys;
/*
* Otherwise, work out the correct PA based on how far the walk has
* gotten.
*/
return phys + (ctx->addr - ctx->start);
}
static bool stage2_leaf_mapping_allowed(const struct kvm_pgtable_visit_ctx *ctx,
struct stage2_map_data *data)
{
u64 phys = stage2_map_walker_phys_addr(ctx, data);
if (data->force_pte && (ctx->level < (KVM_PGTABLE_MAX_LEVELS - 1)))
return false;
return kvm_block_mapping_supported(ctx, data->phys);
return kvm_block_mapping_supported(ctx, phys);
}
static int stage2_map_walker_try_leaf(const struct kvm_pgtable_visit_ctx *ctx,
struct stage2_map_data *data)
{
kvm_pte_t new;
u64 granule = kvm_granule_size(ctx->level), phys = data->phys;
u64 phys = stage2_map_walker_phys_addr(ctx, data);
u64 granule = kvm_granule_size(ctx->level);
struct kvm_pgtable *pgt = data->mmu->pgt;
struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
@ -841,8 +866,6 @@ static int stage2_map_walker_try_leaf(const struct kvm_pgtable_visit_ctx *ctx,
stage2_make_pte(ctx, new);
if (kvm_phys_is_valid(phys))
data->phys += granule;
return 0;
}

View File

@ -204,7 +204,7 @@ void kvm_inject_size_fault(struct kvm_vcpu *vcpu)
* Size Fault at level 0, as if exceeding PARange.
*
* Non-LPAE guests will only get the external abort, as there
* is no way to to describe the ASF.
* is no way to describe the ASF.
*/
if (vcpu_el1_is_32bit(vcpu) &&
!(vcpu_read_sys_reg(vcpu, TCR_EL1) & TTBCR_EAE))


@ -616,6 +616,10 @@ static const struct midr_range broken_seis[] = {
MIDR_ALL_VERSIONS(MIDR_APPLE_M1_FIRESTORM_MAX),
MIDR_ALL_VERSIONS(MIDR_APPLE_M2_BLIZZARD),
MIDR_ALL_VERSIONS(MIDR_APPLE_M2_AVALANCHE),
MIDR_ALL_VERSIONS(MIDR_APPLE_M2_BLIZZARD_PRO),
MIDR_ALL_VERSIONS(MIDR_APPLE_M2_AVALANCHE_PRO),
MIDR_ALL_VERSIONS(MIDR_APPLE_M2_BLIZZARD_MAX),
MIDR_ALL_VERSIONS(MIDR_APPLE_M2_AVALANCHE_MAX),
{},
};


@ -47,7 +47,7 @@ static void flush_context(void)
int cpu;
u64 vmid;
bitmap_clear(vmid_map, 0, NUM_USER_VMIDS);
bitmap_zero(vmid_map, NUM_USER_VMIDS);
for_each_possible_cpu(cpu) {
vmid = atomic64_xchg_relaxed(&per_cpu(active_vmids, cpu), 0);
@ -182,8 +182,7 @@ int __init kvm_arm_vmid_alloc_init(void)
*/
WARN_ON(NUM_USER_VMIDS - 1 <= num_possible_cpus());
atomic64_set(&vmid_generation, VMID_FIRST_VERSION);
vmid_map = kcalloc(BITS_TO_LONGS(NUM_USER_VMIDS),
sizeof(*vmid_map), GFP_KERNEL);
vmid_map = bitmap_zalloc(NUM_USER_VMIDS, GFP_KERNEL);
if (!vmid_map)
return -ENOMEM;
@ -192,5 +191,5 @@ int __init kvm_arm_vmid_alloc_init(void)
void __init kvm_arm_vmid_alloc_free(void)
{
kfree(vmid_map);
bitmap_free(vmid_map);
}


@ -21,9 +21,10 @@ void copy_highpage(struct page *to, struct page *from)
copy_page(kto, kfrom);
if (kasan_hw_tags_enabled())
page_kasan_tag_reset(to);
if (system_supports_mte() && page_mte_tagged(from)) {
if (kasan_hw_tags_enabled())
page_kasan_tag_reset(to);
/* It's a new page, shouldn't have been tagged yet */
WARN_ON_ONCE(!try_page_mte_tagging(to));
mte_copy_page_tags(kto, kfrom);


@ -480,8 +480,8 @@ static void do_bad_area(unsigned long far, unsigned long esr,
}
}
#define VM_FAULT_BADMAP 0x010000
#define VM_FAULT_BADACCESS 0x020000
#define VM_FAULT_BADMAP ((__force vm_fault_t)0x010000)
#define VM_FAULT_BADACCESS ((__force vm_fault_t)0x020000)
static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
unsigned int mm_flags, unsigned long vm_flags,


@ -858,11 +858,17 @@ static inline int rt_setup_ucontext(struct ucontext __user *uc, struct pt_regs *
}
static inline void __user *
get_sigframe(struct ksignal *ksig, size_t frame_size)
get_sigframe(struct ksignal *ksig, struct pt_regs *tregs, size_t frame_size)
{
unsigned long usp = sigsp(rdusp(), ksig);
unsigned long gap = 0;
return (void __user *)((usp - frame_size) & -8UL);
if (CPU_IS_020_OR_030 && tregs->format == 0xb) {
/* USP is unreliable so use worst-case value */
gap = 256;
}
return (void __user *)((usp - gap - frame_size) & -8UL);
}
static int setup_frame(struct ksignal *ksig, sigset_t *set,
@ -880,7 +886,7 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,
return -EFAULT;
}
frame = get_sigframe(ksig, sizeof(*frame) + fsize);
frame = get_sigframe(ksig, tregs, sizeof(*frame) + fsize);
if (fsize)
err |= copy_to_user (frame + 1, regs + 1, fsize);
@ -952,7 +958,7 @@ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
return -EFAULT;
}
frame = get_sigframe(ksig, sizeof(*frame));
frame = get_sigframe(ksig, tregs, sizeof(*frame));
if (fsize)
err |= copy_to_user (&frame->uc.uc_extra, regs + 1, fsize);


@ -34,8 +34,6 @@ endif
BOOTCFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
-fno-strict-aliasing -O2 -msoft-float -mno-altivec -mno-vsx \
$(call cc-option,-mno-prefixed) $(call cc-option,-mno-pcrel) \
$(call cc-option,-mno-mma) \
$(call cc-option,-mno-spe) $(call cc-option,-mspe=no) \
-pipe -fomit-frame-pointer -fno-builtin -fPIC -nostdinc \
$(LINUXINCLUDE)
@ -71,6 +69,10 @@ BOOTAFLAGS := -D__ASSEMBLY__ $(BOOTCFLAGS) -nostdinc
BOOTARFLAGS := -crD
BOOTCFLAGS += $(call cc-option,-mno-prefixed) \
$(call cc-option,-mno-pcrel) \
$(call cc-option,-mno-mma)
ifdef CONFIG_CC_IS_CLANG
BOOTCFLAGS += $(CLANG_FLAGS)
BOOTAFLAGS += $(CLANG_FLAGS)


@ -96,7 +96,7 @@ config CRYPTO_AES_PPC_SPE
config CRYPTO_AES_GCM_P10
tristate "Stitched AES/GCM acceleration support on P10 or later CPU (PPC)"
depends on PPC64 && CPU_LITTLE_ENDIAN
depends on PPC64 && CPU_LITTLE_ENDIAN && VSX
select CRYPTO_LIB_AES
select CRYPTO_ALGAPI
select CRYPTO_AEAD


@ -205,7 +205,6 @@ extern void iommu_register_group(struct iommu_table_group *table_group,
int pci_domain_number, unsigned long pe_num);
extern int iommu_add_device(struct iommu_table_group *table_group,
struct device *dev);
extern void iommu_del_device(struct device *dev);
extern long iommu_tce_xchg(struct mm_struct *mm, struct iommu_table *tbl,
unsigned long entry, unsigned long *hpa,
enum dma_data_direction *direction);
@ -229,10 +228,6 @@ static inline int iommu_add_device(struct iommu_table_group *table_group,
{
return 0;
}
static inline void iommu_del_device(struct device *dev)
{
}
#endif /* !CONFIG_IOMMU_API */
u64 dma_iommu_get_required_mask(struct device *dev);


@ -144,7 +144,7 @@ static bool dma_iommu_bypass_supported(struct device *dev, u64 mask)
/* We support DMA to/from any memory page via the iommu */
int dma_iommu_dma_supported(struct device *dev, u64 mask)
{
struct iommu_table *tbl = get_iommu_table_base(dev);
struct iommu_table *tbl;
if (dev_is_pci(dev) && dma_iommu_bypass_supported(dev, mask)) {
/*
@ -162,6 +162,8 @@ int dma_iommu_dma_supported(struct device *dev, u64 mask)
return 1;
}
tbl = get_iommu_table_base(dev);
if (!tbl) {
dev_err(dev, "Warning: IOMMU dma not supported: mask 0x%08llx, table unavailable\n", mask);
return 0;


@ -518,7 +518,7 @@ int ppc_iommu_map_sg(struct device *dev, struct iommu_table *tbl,
/* Convert entry to a dma_addr_t */
entry += tbl->it_offset;
dma_addr = entry << tbl->it_page_shift;
dma_addr |= (s->offset & ~IOMMU_PAGE_MASK(tbl));
dma_addr |= (vaddr & ~IOMMU_PAGE_MASK(tbl));
DBG(" - %lu pages, entry: %lx, dma_addr: %lx\n",
npages, entry, dma_addr);
@ -905,6 +905,7 @@ void *iommu_alloc_coherent(struct device *dev, struct iommu_table *tbl,
unsigned int order;
unsigned int nio_pages, io_order;
struct page *page;
int tcesize = (1 << tbl->it_page_shift);
size = PAGE_ALIGN(size);
order = get_order(size);
@ -931,7 +932,8 @@ void *iommu_alloc_coherent(struct device *dev, struct iommu_table *tbl,
memset(ret, 0, size);
/* Set up tces to cover the allocated range */
nio_pages = size >> tbl->it_page_shift;
nio_pages = IOMMU_PAGE_ALIGN(size, tbl) >> tbl->it_page_shift;
io_order = get_iommu_order(size, tbl);
mapping = iommu_alloc(dev, tbl, ret, nio_pages, DMA_BIDIRECTIONAL,
mask >> tbl->it_page_shift, io_order, 0);
@ -939,7 +941,8 @@ void *iommu_alloc_coherent(struct device *dev, struct iommu_table *tbl,
free_pages((unsigned long)ret, order);
return NULL;
}
*dma_handle = mapping;
*dma_handle = mapping | ((u64)ret & (tcesize - 1));
return ret;
}
@ -950,7 +953,7 @@ void iommu_free_coherent(struct iommu_table *tbl, size_t size,
unsigned int nio_pages;
size = PAGE_ALIGN(size);
nio_pages = size >> tbl->it_page_shift;
nio_pages = IOMMU_PAGE_ALIGN(size, tbl) >> tbl->it_page_shift;
iommu_free(tbl, dma_handle, nio_pages);
size = PAGE_ALIGN(size);
free_pages((unsigned long)vaddr, get_order(size));
@ -1168,23 +1171,6 @@ int iommu_add_device(struct iommu_table_group *table_group, struct device *dev)
}
EXPORT_SYMBOL_GPL(iommu_add_device);
void iommu_del_device(struct device *dev)
{
/*
* Some devices might not have IOMMU table and group
* and we needn't detach them from the associated
* IOMMU groups
*/
if (!device_iommu_mapped(dev)) {
pr_debug("iommu_tce: skipping device %s with no tbl\n",
dev_name(dev));
return;
}
iommu_group_remove_device(dev);
}
EXPORT_SYMBOL_GPL(iommu_del_device);
/*
* A simple iommu_table_group_ops which only allows reusing the existing
* iommu_table. This handles VFIO for POWER7 or the nested KVM.

View File

@ -93,11 +93,12 @@ static int process_ISA_OF_ranges(struct device_node *isa_node,
}
inval_range:
if (!phb_io_base_phys) {
if (phb_io_base_phys) {
pr_err("no ISA IO ranges or unexpected isa range, mapping 64k\n");
remap_isa_base(phb_io_base_phys, 0x10000);
return 0;
}
return 0;
return -EINVAL;
}


@ -1040,8 +1040,8 @@ void radix__ptep_set_access_flags(struct vm_area_struct *vma, pte_t *ptep,
pte_t entry, unsigned long address, int psize)
{
struct mm_struct *mm = vma->vm_mm;
unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_ACCESSED |
_PAGE_RW | _PAGE_EXEC);
unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_SOFT_DIRTY |
_PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC);
unsigned long change = pte_val(entry) ^ pte_val(*ptep);
/*


@ -101,6 +101,8 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
bpf_hdr = jit_data->header;
proglen = jit_data->proglen;
extra_pass = true;
/* During extra pass, ensure index is reset before repopulating extable entries */
cgctx.exentry_idx = 0;
goto skip_init_ctx;
}


@ -265,6 +265,7 @@ config CPM2
config FSL_ULI1575
bool "ULI1575 PCIe south bridge support"
depends on FSL_SOC_BOOKE || PPC_86xx
depends on PCI
select FSL_PCI
select GENERIC_ISA_DMA
help


@ -865,28 +865,3 @@ void __init pnv_pci_init(void)
/* Configure IOMMU DMA hooks */
set_pci_dma_ops(&dma_iommu_ops);
}
static int pnv_tce_iommu_bus_notifier(struct notifier_block *nb,
unsigned long action, void *data)
{
struct device *dev = data;
switch (action) {
case BUS_NOTIFY_DEL_DEVICE:
iommu_del_device(dev);
return 0;
default:
return 0;
}
}
static struct notifier_block pnv_tce_iommu_bus_nb = {
.notifier_call = pnv_tce_iommu_bus_notifier,
};
static int __init pnv_tce_iommu_bus_notifier_init(void)
{
bus_register_notifier(&pci_bus_type, &pnv_tce_iommu_bus_nb);
return 0;
}
machine_subsys_initcall_sync(powernv, pnv_tce_iommu_bus_notifier_init);


@ -91,19 +91,24 @@ static struct iommu_table_group *iommu_pseries_alloc_group(int node)
static void iommu_pseries_free_group(struct iommu_table_group *table_group,
const char *node_name)
{
struct iommu_table *tbl;
if (!table_group)
return;
tbl = table_group->tables[0];
#ifdef CONFIG_IOMMU_API
if (table_group->group) {
iommu_group_put(table_group->group);
BUG_ON(table_group->group);
}
#endif
iommu_tce_table_put(tbl);
/* Default DMA window table is at index 0, while DDW at 1. SR-IOV
* adapters only have table on index 1.
*/
if (table_group->tables[0])
iommu_tce_table_put(table_group->tables[0]);
if (table_group->tables[1])
iommu_tce_table_put(table_group->tables[1]);
kfree(table_group);
}
@ -1695,31 +1700,6 @@ static int __init disable_multitce(char *str)
__setup("multitce=", disable_multitce);
static int tce_iommu_bus_notifier(struct notifier_block *nb,
unsigned long action, void *data)
{
struct device *dev = data;
switch (action) {
case BUS_NOTIFY_DEL_DEVICE:
iommu_del_device(dev);
return 0;
default:
return 0;
}
}
static struct notifier_block tce_iommu_bus_nb = {
.notifier_call = tce_iommu_bus_notifier,
};
static int __init tce_iommu_bus_notifier_init(void)
{
bus_register_notifier(&pci_bus_type, &tce_iommu_bus_nb);
return 0;
}
machine_subsys_initcall_sync(pseries, tce_iommu_bus_notifier_init);
#ifdef CONFIG_SPAPR_TCE_IOMMU
struct iommu_group *pSeries_pci_device_group(struct pci_controller *hose,
struct pci_dev *pdev)


@ -4,3 +4,5 @@ obj-$(CONFIG_RETHOOK) += rethook.o rethook_trampoline.o
obj-$(CONFIG_KPROBES_ON_FTRACE) += ftrace.o
obj-$(CONFIG_UPROBES) += uprobes.o decode-insn.o simulate-insn.o
CFLAGS_REMOVE_simulate-insn.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_rethook.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_rethook_trampoline.o = $(CC_FLAGS_FTRACE)


@ -469,19 +469,11 @@ config SCHED_SMT
config SCHED_MC
def_bool n
config SCHED_BOOK
def_bool n
config SCHED_DRAWER
def_bool n
config SCHED_TOPOLOGY
def_bool y
prompt "Topology scheduler support"
select SCHED_SMT
select SCHED_MC
select SCHED_BOOK
select SCHED_DRAWER
help
Topology scheduler support improves the CPU scheduler's decision
making when dealing with machines that have multi-threading,
@ -716,7 +708,6 @@ config EADM_SCH
config VFIO_CCW
def_tristate n
prompt "Support for VFIO-CCW subchannels"
depends on S390_CCW_IOMMU
depends on VFIO
select VFIO_MDEV
help
@ -728,7 +719,7 @@ config VFIO_CCW
config VFIO_AP
def_tristate n
prompt "VFIO support for AP devices"
depends on S390_AP_IOMMU && KVM
depends on KVM
depends on VFIO
depends on ZCRYPT
select VFIO_MDEV


@ -591,8 +591,6 @@ CONFIG_VIRTIO_BALLOON=m
CONFIG_VIRTIO_INPUT=y
CONFIG_VHOST_NET=m
CONFIG_VHOST_VSOCK=m
CONFIG_S390_CCW_IOMMU=y
CONFIG_S390_AP_IOMMU=y
CONFIG_EXT4_FS=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
@ -703,6 +701,7 @@ CONFIG_IMA_DEFAULT_HASH_SHA256=y
CONFIG_IMA_WRITE_POLICY=y
CONFIG_IMA_APPRAISE=y
CONFIG_LSM="yama,loadpin,safesetid,integrity,selinux,smack,tomoyo,apparmor"
CONFIG_INIT_STACK_NONE=y
CONFIG_CRYPTO_USER=m
# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
CONFIG_CRYPTO_PCRYPT=m


@ -580,8 +580,6 @@ CONFIG_VIRTIO_BALLOON=m
CONFIG_VIRTIO_INPUT=y
CONFIG_VHOST_NET=m
CONFIG_VHOST_VSOCK=m
CONFIG_S390_CCW_IOMMU=y
CONFIG_S390_AP_IOMMU=y
CONFIG_EXT4_FS=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
@ -686,6 +684,7 @@ CONFIG_IMA_DEFAULT_HASH_SHA256=y
CONFIG_IMA_WRITE_POLICY=y
CONFIG_IMA_APPRAISE=y
CONFIG_LSM="yama,loadpin,safesetid,integrity,selinux,smack,tomoyo,apparmor"
CONFIG_INIT_STACK_NONE=y
CONFIG_CRYPTO_FIPS=y
CONFIG_CRYPTO_USER=m
# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set


@ -67,6 +67,7 @@ CONFIG_ZFCP=y
# CONFIG_MISC_FILESYSTEMS is not set
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_LSM="yama,loadpin,safesetid,integrity"
CONFIG_INIT_STACK_NONE=y
# CONFIG_ZLIB_DFLTCC is not set
CONFIG_XZ_DEC_MICROLZMA=y
CONFIG_PRINTK_TIME=y


@ -82,7 +82,7 @@ void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src,
* it cannot handle a block of data or less, but otherwise
* it can handle data of arbitrary size
*/
if (bytes <= CHACHA_BLOCK_SIZE || nrounds != 20)
if (bytes <= CHACHA_BLOCK_SIZE || nrounds != 20 || !MACHINE_HAS_VX)
chacha_crypt_generic(state, dst, src, bytes, nrounds);
else
chacha20_crypt_s390(state, dst, src, bytes,


@ -112,7 +112,7 @@ struct compat_statfs64 {
u32 f_namelen;
u32 f_frsize;
u32 f_flags;
u32 f_spare[4];
u32 f_spare[5];
};
/*


@ -30,7 +30,7 @@ struct statfs {
unsigned int f_namelen;
unsigned int f_frsize;
unsigned int f_flags;
unsigned int f_spare[4];
unsigned int f_spare[5];
};
struct statfs64 {
@ -45,7 +45,7 @@ struct statfs64 {
unsigned int f_namelen;
unsigned int f_frsize;
unsigned int f_flags;
unsigned int f_spare[4];
unsigned int f_spare[5];
};
#endif


@ -10,6 +10,7 @@ CFLAGS_REMOVE_ftrace.o = $(CC_FLAGS_FTRACE)
# Do not trace early setup code
CFLAGS_REMOVE_early.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_rethook.o = $(CC_FLAGS_FTRACE)
endif


@ -1935,14 +1935,13 @@ static struct shutdown_action __refdata dump_action = {
static void dump_reipl_run(struct shutdown_trigger *trigger)
{
unsigned long ipib = (unsigned long) reipl_block_actual;
struct lowcore *abs_lc;
unsigned int csum;
csum = (__force unsigned int)
csum_partial(reipl_block_actual, reipl_block_actual->hdr.len, 0);
abs_lc = get_abs_lowcore();
abs_lc->ipib = ipib;
abs_lc->ipib = __pa(reipl_block_actual);
abs_lc->ipib_checksum = csum;
put_abs_lowcore(abs_lc);
dump_run(trigger);


@ -95,7 +95,7 @@ out:
static void cpu_thread_map(cpumask_t *dst, unsigned int cpu)
{
static cpumask_t mask;
int i;
unsigned int max_cpu;
cpumask_clear(&mask);
if (!cpumask_test_cpu(cpu, &cpu_setup_mask))
@ -104,9 +104,10 @@ static void cpu_thread_map(cpumask_t *dst, unsigned int cpu)
if (topology_mode != TOPOLOGY_MODE_HW)
goto out;
cpu -= cpu % (smp_cpu_mtid + 1);
for (i = 0; i <= smp_cpu_mtid; i++) {
if (cpumask_test_cpu(cpu + i, &cpu_setup_mask))
cpumask_set_cpu(cpu + i, &mask);
max_cpu = min(cpu + smp_cpu_mtid, nr_cpu_ids - 1);
for (; cpu <= max_cpu; cpu++) {
if (cpumask_test_cpu(cpu, &cpu_setup_mask))
cpumask_set_cpu(cpu, &mask);
}
out:
cpumask_copy(dst, &mask);
@ -123,25 +124,26 @@ static void add_cpus_to_mask(struct topology_core *tl_core,
unsigned int core;
for_each_set_bit(core, &tl_core->mask, TOPOLOGY_CORE_BITS) {
unsigned int rcore;
int lcpu, i;
unsigned int max_cpu, rcore;
int cpu;
rcore = TOPOLOGY_CORE_BITS - 1 - core + tl_core->origin;
lcpu = smp_find_processor_id(rcore << smp_cpu_mt_shift);
if (lcpu < 0)
cpu = smp_find_processor_id(rcore << smp_cpu_mt_shift);
if (cpu < 0)
continue;
for (i = 0; i <= smp_cpu_mtid; i++) {
topo = &cpu_topology[lcpu + i];
max_cpu = min(cpu + smp_cpu_mtid, nr_cpu_ids - 1);
for (; cpu <= max_cpu; cpu++) {
topo = &cpu_topology[cpu];
topo->drawer_id = drawer->id;
topo->book_id = book->id;
topo->socket_id = socket->id;
topo->core_id = rcore;
topo->thread_id = lcpu + i;
topo->thread_id = cpu;
topo->dedicated = tl_core->d;
cpumask_set_cpu(lcpu + i, &drawer->mask);
cpumask_set_cpu(lcpu + i, &book->mask);
cpumask_set_cpu(lcpu + i, &socket->mask);
smp_cpu_set_polarization(lcpu + i, tl_core->pp);
cpumask_set_cpu(cpu, &drawer->mask);
cpumask_set_cpu(cpu, &book->mask);
cpumask_set_cpu(cpu, &socket->mask);
smp_cpu_set_polarization(cpu, tl_core->pp);
}
}
}


@ -16,7 +16,8 @@ mconsole-objs := mconsole_kern.o mconsole_user.o
hostaudio-objs := hostaudio_kern.o
ubd-objs := ubd_kern.o ubd_user.o
port-objs := port_kern.o port_user.o
harddog-objs := harddog_kern.o harddog_user.o
harddog-objs := harddog_kern.o
harddog-builtin-$(CONFIG_UML_WATCHDOG) := harddog_user.o harddog_user_exp.o
rtc-objs := rtc_kern.o rtc_user.o
LDFLAGS_pcap.o = $(shell $(CC) $(KBUILD_CFLAGS) -print-file-name=libpcap.a)
@ -60,6 +61,7 @@ obj-$(CONFIG_PTY_CHAN) += pty.o
obj-$(CONFIG_TTY_CHAN) += tty.o
obj-$(CONFIG_XTERM_CHAN) += xterm.o xterm_kern.o
obj-$(CONFIG_UML_WATCHDOG) += harddog.o
obj-y += $(harddog-builtin-y) $(harddog-builtin-m)
obj-$(CONFIG_BLK_DEV_COW_COMMON) += cow_user.o
obj-$(CONFIG_UML_RANDOM) += random.o
obj-$(CONFIG_VIRTIO_UML) += virtio_uml.o


@ -0,0 +1,9 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef UM_WATCHDOG_H
#define UM_WATCHDOG_H
int start_watchdog(int *in_fd_ret, int *out_fd_ret, char *sock);
void stop_watchdog(int in_fd, int out_fd);
int ping_watchdog(int fd);
#endif /* UM_WATCHDOG_H */


@ -47,6 +47,7 @@
#include <linux/spinlock.h>
#include <linux/uaccess.h>
#include "mconsole.h"
#include "harddog.h"
MODULE_LICENSE("GPL");
@ -60,8 +61,6 @@ static int harddog_out_fd = -1;
* Allow only one person to hold it open
*/
extern int start_watchdog(int *in_fd_ret, int *out_fd_ret, char *sock);
static int harddog_open(struct inode *inode, struct file *file)
{
int err = -EBUSY;
@ -92,8 +91,6 @@ err:
return err;
}
extern void stop_watchdog(int in_fd, int out_fd);
static int harddog_release(struct inode *inode, struct file *file)
{
/*
@ -112,8 +109,6 @@ static int harddog_release(struct inode *inode, struct file *file)
return 0;
}
extern int ping_watchdog(int fd);
static ssize_t harddog_write(struct file *file, const char __user *data, size_t len,
loff_t *ppos)
{


@ -7,6 +7,7 @@
#include <unistd.h>
#include <errno.h>
#include <os.h>
#include "harddog.h"
struct dog_data {
int stdin_fd;


@ -0,0 +1,9 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/export.h>
#include "harddog.h"
#if IS_MODULE(CONFIG_UML_WATCHDOG)
EXPORT_SYMBOL(start_watchdog);
EXPORT_SYMBOL(stop_watchdog);
EXPORT_SYMBOL(ping_watchdog);
#endif


@ -13,7 +13,9 @@
#include <linux/bitops.h>
#include <linux/bug.h>
#include <linux/types.h>
#include <uapi/asm/vmx.h>
#include <asm/vmxfeatures.h>


@ -17,6 +17,7 @@ CFLAGS_REMOVE_ftrace.o = -pg
CFLAGS_REMOVE_early_printk.o = -pg
CFLAGS_REMOVE_head64.o = -pg
CFLAGS_REMOVE_sev.o = -pg
CFLAGS_REMOVE_rethook.o = -pg
endif
KASAN_SANITIZE_head$(BITS).o := n


@ -253,7 +253,6 @@ static void __kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu, struct kvm_cpuid_e
int nent)
{
struct kvm_cpuid_entry2 *best;
u64 guest_supported_xcr0 = cpuid_get_supported_xcr0(entries, nent);
best = cpuid_entry2_find(entries, nent, 1, KVM_CPUID_INDEX_NOT_SIGNIFICANT);
if (best) {
@ -292,21 +291,6 @@ static void __kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu, struct kvm_cpuid_e
vcpu->arch.ia32_misc_enable_msr &
MSR_IA32_MISC_ENABLE_MWAIT);
}
/*
* Bits 127:0 of the allowed SECS.ATTRIBUTES (CPUID.0x12.0x1) enumerate
* the supported XSAVE Feature Request Mask (XFRM), i.e. the enclave's
* requested XCR0 value. The enclave's XFRM must be a subset of XCRO
* at the time of EENTER, thus adjust the allowed XFRM by the guest's
* supported XCR0. Similar to XCR0 handling, FP and SSE are forced to
* '1' even on CPUs that don't support XSAVE.
*/
best = cpuid_entry2_find(entries, nent, 0x12, 0x1);
if (best) {
best->ecx &= guest_supported_xcr0 & 0xffffffff;
best->edx &= guest_supported_xcr0 >> 32;
best->ecx |= XFEATURE_MASK_FPSSE;
}
}
void kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu)


@ -170,12 +170,19 @@ static int __handle_encls_ecreate(struct kvm_vcpu *vcpu,
return 1;
}
/* Enforce CPUID restrictions on MISCSELECT, ATTRIBUTES and XFRM. */
/*
* Enforce CPUID restrictions on MISCSELECT, ATTRIBUTES and XFRM. Note
* that the allowed XFRM (XFeature Request Mask) isn't strictly bound
* by the supported XCR0. FP+SSE *must* be set in XFRM, even if XSAVE
* is unsupported, i.e. even if XCR0 itself is completely unsupported.
*/
if ((u32)miscselect & ~sgx_12_0->ebx ||
(u32)attributes & ~sgx_12_1->eax ||
(u32)(attributes >> 32) & ~sgx_12_1->ebx ||
(u32)xfrm & ~sgx_12_1->ecx ||
(u32)(xfrm >> 32) & ~sgx_12_1->edx) {
(u32)(xfrm >> 32) & ~sgx_12_1->edx ||
xfrm & ~(vcpu->arch.guest_supported_xcr0 | XFEATURE_MASK_FPSSE) ||
(xfrm & XFEATURE_MASK_FPSSE) != XFEATURE_MASK_FPSSE) {
kvm_inject_gp(vcpu, 0);
return 1;
}


@ -1446,7 +1446,7 @@ static const u32 msrs_to_save_base[] = {
#endif
MSR_IA32_TSC, MSR_IA32_CR_PAT, MSR_VM_HSAVE_PA,
MSR_IA32_FEAT_CTL, MSR_IA32_BNDCFGS, MSR_TSC_AUX,
MSR_IA32_SPEC_CTRL,
MSR_IA32_SPEC_CTRL, MSR_IA32_TSX_CTRL,
MSR_IA32_RTIT_CTL, MSR_IA32_RTIT_STATUS, MSR_IA32_RTIT_CR3_MATCH,
MSR_IA32_RTIT_OUTPUT_BASE, MSR_IA32_RTIT_OUTPUT_MASK,
MSR_IA32_RTIT_ADDR0_A, MSR_IA32_RTIT_ADDR0_B,
@ -7155,6 +7155,10 @@ static void kvm_probe_msr_to_save(u32 msr_index)
if (!kvm_cpu_cap_has(X86_FEATURE_XFD))
return;
break;
case MSR_IA32_TSX_CTRL:
if (!(kvm_get_arch_capabilities() & ARCH_CAP_TSX_CTRL_MSR))
return;
break;
default:
break;
}


@ -9,6 +9,7 @@
#include <linux/sched/task.h>
#include <asm/set_memory.h>
#include <asm/cpu_device_id.h>
#include <asm/e820/api.h>
#include <asm/init.h>
#include <asm/page.h>
@ -261,6 +262,24 @@ static void __init probe_page_size_mask(void)
}
}
#define INTEL_MATCH(_model) { .vendor = X86_VENDOR_INTEL, \
.family = 6, \
.model = _model, \
}
/*
* INVLPG may not properly flush Global entries
* on these CPUs when PCIDs are enabled.
*/
static const struct x86_cpu_id invlpg_miss_ids[] = {
INTEL_MATCH(INTEL_FAM6_ALDERLAKE ),
INTEL_MATCH(INTEL_FAM6_ALDERLAKE_L ),
INTEL_MATCH(INTEL_FAM6_ALDERLAKE_N ),
INTEL_MATCH(INTEL_FAM6_RAPTORLAKE ),
INTEL_MATCH(INTEL_FAM6_RAPTORLAKE_P),
INTEL_MATCH(INTEL_FAM6_RAPTORLAKE_S),
{}
};
static void setup_pcid(void)
{
if (!IS_ENABLED(CONFIG_X86_64))
@ -269,6 +288,12 @@ static void setup_pcid(void)
if (!boot_cpu_has(X86_FEATURE_PCID))
return;
if (x86_match_cpu(invlpg_miss_ids)) {
pr_info("Incomplete global flushes, disabling PCID");
setup_clear_cpu_cap(X86_FEATURE_PCID);
return;
}
if (boot_cpu_has(X86_FEATURE_PGE)) {
/*
* This can't be cr4_set_bits_and_update_boot() -- the


@ -343,7 +343,19 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,
struct rt_sigframe *frame;
int err = 0, sig = ksig->sig;
unsigned long sp, ra, tp, ps;
unsigned long handler = (unsigned long)ksig->ka.sa.sa_handler;
unsigned long handler_fdpic_GOT = 0;
unsigned int base;
bool fdpic = IS_ENABLED(CONFIG_BINFMT_ELF_FDPIC) &&
(current->personality & FDPIC_FUNCPTRS);
if (fdpic) {
unsigned long __user *fdpic_func_desc =
(unsigned long __user *)handler;
if (__get_user(handler, &fdpic_func_desc[0]) ||
__get_user(handler_fdpic_GOT, &fdpic_func_desc[1]))
return -EFAULT;
}
sp = regs->areg[1];
@ -373,20 +385,26 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,
err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
if (ksig->ka.sa.sa_flags & SA_RESTORER) {
ra = (unsigned long)ksig->ka.sa.sa_restorer;
if (fdpic) {
unsigned long __user *fdpic_func_desc =
(unsigned long __user *)ksig->ka.sa.sa_restorer;
err |= __get_user(ra, fdpic_func_desc);
} else {
ra = (unsigned long)ksig->ka.sa.sa_restorer;
}
} else {
/* Create sys_rt_sigreturn syscall in stack frame */
err |= gen_return_code(frame->retcode);
if (err) {
return -EFAULT;
}
ra = (unsigned long) frame->retcode;
}
/*
if (err)
return -EFAULT;
/*
* Create signal handler execution context.
* Return context not modified until this point.
*/
@ -394,8 +412,7 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,
/* Set up registers for signal handler; preserve the threadptr */
tp = regs->threadptr;
ps = regs->ps;
start_thread(regs, (unsigned long) ksig->ka.sa.sa_handler,
(unsigned long) frame);
start_thread(regs, handler, (unsigned long)frame);
/* Set up a stack frame for a call4 if userspace uses windowed ABI */
if (ps & PS_WOE_MASK) {
@ -413,6 +430,8 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,
regs->areg[base + 4] = (unsigned long) &frame->uc;
regs->threadptr = tp;
regs->ps = ps;
if (fdpic)
regs->areg[base + 11] = handler_fdpic_GOT;
pr_debug("SIG rt deliver (%s:%d): signal=%d sp=%p pc=%08lx\n",
current->comm, current->pid, sig, frame, regs->pc);


@ -56,6 +56,8 @@ EXPORT_SYMBOL(empty_zero_page);
*/
extern long long __ashrdi3(long long, int);
extern long long __ashldi3(long long, int);
extern long long __bswapdi2(long long);
extern int __bswapsi2(int);
extern long long __lshrdi3(long long, int);
extern int __divsi3(int, int);
extern int __modsi3(int, int);
@ -66,6 +68,8 @@ extern unsigned long long __umulsidi3(unsigned int, unsigned int);
EXPORT_SYMBOL(__ashldi3);
EXPORT_SYMBOL(__ashrdi3);
EXPORT_SYMBOL(__bswapdi2);
EXPORT_SYMBOL(__bswapsi2);
EXPORT_SYMBOL(__lshrdi3);
EXPORT_SYMBOL(__divsi3);
EXPORT_SYMBOL(__modsi3);


@ -4,7 +4,7 @@
#
lib-y += memcopy.o memset.o checksum.o \
ashldi3.o ashrdi3.o lshrdi3.o \
ashldi3.o ashrdi3.o bswapdi2.o bswapsi2.o lshrdi3.o \
divsi3.o udivsi3.o modsi3.o umodsi3.o mulsi3.o umulsidi3.o \
usercopy.o strncpy_user.o strnlen_user.o
lib-$(CONFIG_PCI) += pci-auto.o


@ -0,0 +1,21 @@
/* SPDX-License-Identifier: GPL-2.0-or-later WITH GCC-exception-2.0 */
#include <linux/linkage.h>
#include <asm/asmmacro.h>
#include <asm/core.h>
ENTRY(__bswapdi2)
abi_entry_default
ssai 8
srli a4, a2, 16
src a4, a4, a2
src a4, a4, a4
src a4, a2, a4
srli a2, a3, 16
src a2, a2, a3
src a2, a2, a2
src a2, a3, a2
mov a3, a4
abi_ret_default
ENDPROC(__bswapdi2)


@ -0,0 +1,16 @@
/* SPDX-License-Identifier: GPL-2.0-or-later WITH GCC-exception-2.0 */
#include <linux/linkage.h>
#include <asm/asmmacro.h>
#include <asm/core.h>
ENTRY(__bswapsi2)
abi_entry_default
ssai 8
srli a3, a2, 16
src a3, a3, a2
src a3, a3, a3
src a2, a2, a3
abi_ret_default
ENDPROC(__bswapsi2)


@ -678,6 +678,16 @@ static long blkdev_fallocate(struct file *file, int mode, loff_t start,
return error;
}
static int blkdev_mmap(struct file *file, struct vm_area_struct *vma)
{
struct inode *bd_inode = bdev_file_inode(file);
if (bdev_read_only(I_BDEV(bd_inode)))
return generic_file_readonly_mmap(file, vma);
return generic_file_mmap(file, vma);
}
const struct file_operations def_blk_fops = {
.open = blkdev_open,
.release = blkdev_close,
@ -685,7 +695,7 @@ const struct file_operations def_blk_fops = {
.read_iter = blkdev_read_iter,
.write_iter = blkdev_write_iter,
.iopoll = iocb_bio_iopoll,
.mmap = generic_file_mmap,
.mmap = blkdev_mmap,
.fsync = blkdev_fsync,
.unlocked_ioctl = blkdev_ioctl,
#ifdef CONFIG_COMPAT


@ -516,6 +516,17 @@ static const struct dmi_system_id maingear_laptop[] = {
{ }
};
static const struct dmi_system_id lg_laptop[] = {
{
.ident = "LG Electronics 17U70P",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "LG Electronics"),
DMI_MATCH(DMI_BOARD_NAME, "17U70P"),
},
},
{ }
};
struct irq_override_cmp {
const struct dmi_system_id *system;
unsigned char irq;
@ -532,6 +543,7 @@ static const struct irq_override_cmp override_table[] = {
{ lenovo_laptop, 10, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, true },
{ tongfang_gm_rg, 1, ACPI_EDGE_SENSITIVE, ACPI_ACTIVE_LOW, 1, true },
{ maingear_laptop, 1, ACPI_EDGE_SENSITIVE, ACPI_ACTIVE_LOW, 1, true },
{ lg_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, false },
};
static bool acpi_dev_irq_override(u32 gsi, u8 triggering, u8 polarity,


@ -320,6 +320,7 @@ void class_dev_iter_init(struct class_dev_iter *iter, const struct class *class,
start_knode = &start->p->knode_class;
klist_iter_init_node(&sp->klist_devices, &iter->ki, start_knode);
iter->type = type;
iter->sp = sp;
}
EXPORT_SYMBOL_GPL(class_dev_iter_init);
@ -361,6 +362,7 @@ EXPORT_SYMBOL_GPL(class_dev_iter_next);
void class_dev_iter_exit(struct class_dev_iter *iter)
{
klist_iter_exit(&iter->ki);
subsys_put(iter->sp);
}
EXPORT_SYMBOL_GPL(class_dev_iter_exit);


@ -1120,6 +1120,11 @@ static inline bool ublk_queue_ready(struct ublk_queue *ubq)
return ubq->nr_io_ready == ubq->q_depth;
}
static void ublk_cmd_cancel_cb(struct io_uring_cmd *cmd, unsigned issue_flags)
{
io_uring_cmd_done(cmd, UBLK_IO_RES_ABORT, 0, issue_flags);
}
static void ublk_cancel_queue(struct ublk_queue *ubq)
{
int i;
@ -1131,8 +1136,8 @@ static void ublk_cancel_queue(struct ublk_queue *ubq)
struct ublk_io *io = &ubq->ios[i];
if (io->flags & UBLK_IO_FLAG_ACTIVE)
io_uring_cmd_done(io->cmd, UBLK_IO_RES_ABORT, 0,
IO_URING_F_UNLOCKED);
io_uring_cmd_complete_in_task(io->cmd,
ublk_cmd_cancel_cb);
}
/* all io commands are canceled */


@ -138,6 +138,13 @@ static const struct dmi_system_id tpm_tis_dmi_table[] = {
DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad L490"),
},
},
{
.callback = tpm_tis_disable_irq,
.ident = "UPX-TGL",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "AAEON"),
},
},
{}
};


@ -975,7 +975,7 @@ static int __init acpi_cpufreq_probe(struct platform_device *pdev)
/* don't keep reloading if cpufreq_driver exists */
if (cpufreq_get_current_driver())
return -EEXIST;
return -ENODEV;
pr_debug("%s\n", __func__);


@ -583,7 +583,7 @@ static int __init pcc_cpufreq_probe(struct platform_device *pdev)
/* Skip initialization if another cpufreq driver is there. */
if (cpufreq_get_current_driver())
return -EEXIST;
return -ENODEV;
if (acpi_disabled)
return -ENODEV;


@ -582,7 +582,8 @@ void amdgpu_fence_driver_hw_fini(struct amdgpu_device *adev)
if (r)
amdgpu_fence_driver_force_completion(ring);
if (ring->fence_drv.irq_src)
if (!drm_dev_is_unplugged(adev_to_drm(adev)) &&
ring->fence_drv.irq_src)
amdgpu_irq_put(adev, ring->fence_drv.irq_src,
ring->fence_drv.irq_type);


@ -8152,8 +8152,14 @@ static int gfx_v10_0_set_powergating_state(void *handle,
case IP_VERSION(10, 3, 3):
case IP_VERSION(10, 3, 6):
case IP_VERSION(10, 3, 7):
if (!enable)
amdgpu_gfx_off_ctrl(adev, false);
gfx_v10_cntl_pg(adev, enable);
amdgpu_gfx_off_ctrl(adev, enable);
if (enable)
amdgpu_gfx_off_ctrl(adev, true);
break;
default:
break;


@ -4667,24 +4667,27 @@ static uint64_t gfx_v11_0_get_gpu_clock_counter(struct amdgpu_device *adev)
uint64_t clock;
uint64_t clock_counter_lo, clock_counter_hi_pre, clock_counter_hi_after;
amdgpu_gfx_off_ctrl(adev, false);
mutex_lock(&adev->gfx.gpu_clock_mutex);
if (amdgpu_sriov_vf(adev)) {
amdgpu_gfx_off_ctrl(adev, false);
mutex_lock(&adev->gfx.gpu_clock_mutex);
clock_counter_hi_pre = (uint64_t)RREG32_SOC15(GC, 0, regCP_MES_MTIME_HI);
clock_counter_lo = (uint64_t)RREG32_SOC15(GC, 0, regCP_MES_MTIME_LO);
clock_counter_hi_after = (uint64_t)RREG32_SOC15(GC, 0, regCP_MES_MTIME_HI);
if (clock_counter_hi_pre != clock_counter_hi_after)
clock_counter_lo = (uint64_t)RREG32_SOC15(GC, 0, regCP_MES_MTIME_LO);
mutex_unlock(&adev->gfx.gpu_clock_mutex);
amdgpu_gfx_off_ctrl(adev, true);
} else {
preempt_disable();
clock_counter_hi_pre = (uint64_t)RREG32_SOC15(SMUIO, 0, regGOLDEN_TSC_COUNT_UPPER);
clock_counter_lo = (uint64_t)RREG32_SOC15(SMUIO, 0, regGOLDEN_TSC_COUNT_LOWER);
clock_counter_hi_after = (uint64_t)RREG32_SOC15(SMUIO, 0, regGOLDEN_TSC_COUNT_UPPER);
if (clock_counter_hi_pre != clock_counter_hi_after)
clock_counter_lo = (uint64_t)RREG32_SOC15(SMUIO, 0, regGOLDEN_TSC_COUNT_LOWER);
preempt_enable();
}
clock = clock_counter_lo | (clock_counter_hi_after << 32ULL);
mutex_unlock(&adev->gfx.gpu_clock_mutex);
amdgpu_gfx_off_ctrl(adev, true);
return clock;
}
@ -5150,8 +5153,14 @@ static int gfx_v11_0_set_powergating_state(void *handle,
break;
case IP_VERSION(11, 0, 1):
case IP_VERSION(11, 0, 4):
if (!enable)
amdgpu_gfx_off_ctrl(adev, false);
gfx_v11_cntl_pg(adev, enable);
amdgpu_gfx_off_ctrl(adev, enable);
if (enable)
amdgpu_gfx_off_ctrl(adev, true);
break;
default:
break;


@ -4003,30 +4003,25 @@ static uint64_t gfx_v9_0_get_gpu_clock_counter(struct amdgpu_device *adev)
clock = clock_lo | (clock_hi << 32ULL);
break;
case IP_VERSION(9, 1, 0):
preempt_disable();
clock_hi = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_UPPER_Raven);
clock_lo = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_LOWER_Raven);
hi_check = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_UPPER_Raven);
/* The PWR TSC clock frequency is 100MHz, which sets 32-bit carry over
* roughly every 42 seconds.
*/
if (hi_check != clock_hi) {
clock_lo = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_LOWER_Raven);
clock_hi = hi_check;
}
preempt_enable();
clock = clock_lo | (clock_hi << 32ULL);
break;
case IP_VERSION(9, 2, 2):
preempt_disable();
clock_hi = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_UPPER_Raven2);
clock_lo = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_LOWER_Raven2);
hi_check = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_UPPER_Raven2);
/* The PWR TSC clock frequency is 100MHz, which sets 32-bit carry over
* roughly every 42 seconds.
*/
if (hi_check != clock_hi) {
if (adev->rev_id >= 0x8) {
clock_hi = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_UPPER_Raven2);
clock_lo = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_LOWER_Raven2);
hi_check = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_UPPER_Raven2);
} else {
clock_hi = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_UPPER_Raven);
clock_lo = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_LOWER_Raven);
hi_check = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_UPPER_Raven);
}
/* The PWR TSC clock frequency is 100MHz, which sets 32-bit carry over
* roughly every 42 seconds.
*/
if (hi_check != clock_hi) {
if (adev->rev_id >= 0x8)
clock_lo = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_LOWER_Raven2);
else
clock_lo = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_LOWER_Raven);
clock_hi = hi_check;
}
preempt_enable();


@ -31,6 +31,8 @@
#include "umc_v8_10.h"
#include "athub/athub_3_0_0_sh_mask.h"
#include "athub/athub_3_0_0_offset.h"
#include "dcn/dcn_3_2_0_offset.h"
#include "dcn/dcn_3_2_0_sh_mask.h"
#include "oss/osssys_6_0_0_offset.h"
#include "ivsrcid/vmc/irqsrcs_vmc_1_0.h"
#include "navi10_enum.h"
@ -546,7 +548,24 @@ static void gmc_v11_0_get_vm_pte(struct amdgpu_device *adev,
static unsigned gmc_v11_0_get_vbios_fb_size(struct amdgpu_device *adev)
{
return 0;
u32 d1vga_control = RREG32_SOC15(DCE, 0, regD1VGA_CONTROL);
unsigned size;
if (REG_GET_FIELD(d1vga_control, D1VGA_CONTROL, D1VGA_MODE_ENABLE)) {
size = AMDGPU_VBIOS_VGA_ALLOCATION;
} else {
u32 viewport;
u32 pitch;
viewport = RREG32_SOC15(DCE, 0, regHUBP0_DCSURF_PRI_VIEWPORT_DIMENSION);
pitch = RREG32_SOC15(DCE, 0, regHUBPREQ0_DCSURF_SURFACE_PITCH);
size = (REG_GET_FIELD(viewport,
HUBP0_DCSURF_PRI_VIEWPORT_DIMENSION, PRI_VIEWPORT_HEIGHT) *
REG_GET_FIELD(pitch, HUBPREQ0_DCSURF_SURFACE_PITCH, PITCH) *
4);
}
return size;
}
static const struct amdgpu_gmc_funcs gmc_v11_0_gmc_funcs = {


@ -359,5 +359,8 @@ bool link_validate_dpia_bandwidth(const struct dc_stream_state *stream, const un
link[i] = stream[i].link;
bw_needed[i] = dc_bandwidth_in_kbps_from_timing(&stream[i].timing);
}
ret = dpia_validate_usb4_bw(link, bw_needed, num_streams);
return ret;
}


@ -733,6 +733,24 @@ static int smu_late_init(void *handle)
return ret;
}
/*
* Explicitly notify PMFW the power mode the system in. Since
* the PMFW may boot the ASIC with a different mode.
* For those supporting ACDC switch via gpio, PMFW will
* handle the switch automatically. Driver involvement
* is unnecessary.
*/
if (!smu->dc_controlled_by_gpio) {
ret = smu_set_power_source(smu,
adev->pm.ac_power ? SMU_POWER_SOURCE_AC :
SMU_POWER_SOURCE_DC);
if (ret) {
dev_err(adev->dev, "Failed to switch to %s mode!\n",
adev->pm.ac_power ? "AC" : "DC");
return ret;
}
}
if ((adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 1)) ||
(adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 3)))
return 0;


@ -3413,26 +3413,8 @@ static int navi10_post_smu_init(struct smu_context *smu)
return 0;
ret = navi10_run_umc_cdr_workaround(smu);
if (ret) {
if (ret)
dev_err(adev->dev, "Failed to apply umc cdr workaround!\n");
return ret;
}
if (!smu->dc_controlled_by_gpio) {
/*
* For Navi1X, manually switch it to AC mode as PMFW
* may boot it with DC mode.
*/
ret = smu_v11_0_set_power_source(smu,
adev->pm.ac_power ?
SMU_POWER_SOURCE_AC :
SMU_POWER_SOURCE_DC);
if (ret) {
dev_err(adev->dev, "Failed to switch to %s mode!\n",
adev->pm.ac_power ? "AC" : "DC");
return ret;
}
}
return ret;
}

View File

@ -1770,6 +1770,7 @@ static const struct pptable_funcs smu_v13_0_7_ppt_funcs = {
.enable_mgpu_fan_boost = smu_v13_0_7_enable_mgpu_fan_boost,
.get_power_limit = smu_v13_0_7_get_power_limit,
.set_power_limit = smu_v13_0_set_power_limit,
.set_power_source = smu_v13_0_set_power_source,
.get_power_profile_mode = smu_v13_0_7_get_power_profile_mode,
.set_power_profile_mode = smu_v13_0_7_set_power_profile_mode,
.set_tool_table_location = smu_v13_0_set_tool_table_location,

View File

@ -34,11 +34,11 @@ static inline int exynos_g2d_exec_ioctl(struct drm_device *dev, void *data,
return -ENODEV;
}
int g2d_open(struct drm_device *drm_dev, struct drm_file *file)
static inline int g2d_open(struct drm_device *drm_dev, struct drm_file *file)
{
return 0;
}
void g2d_close(struct drm_device *drm_dev, struct drm_file *file)
static inline void g2d_close(struct drm_device *drm_dev, struct drm_file *file)
{ }
#endif

View File

@ -204,8 +204,6 @@ bool intel_hdcp2_capable(struct intel_connector *connector)
struct intel_digital_port *dig_port = intel_attached_dig_port(connector);
struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
struct intel_hdcp *hdcp = &connector->hdcp;
struct intel_gt *gt = dev_priv->media_gt;
struct intel_gsc_uc *gsc = &gt->uc.gsc;
bool capable = false;
/* I915 support for HDCP2.2 */
@ -213,9 +211,13 @@ bool intel_hdcp2_capable(struct intel_connector *connector)
return false;
/* If MTL+ make sure gsc is loaded and proxy is setup */
if (intel_hdcp_gsc_cs_required(dev_priv))
if (!intel_uc_fw_is_running(&gsc->fw))
if (intel_hdcp_gsc_cs_required(dev_priv)) {
struct intel_gt *gt = dev_priv->media_gt;
struct intel_gsc_uc *gsc = gt ? &gt->uc.gsc : NULL;
if (!gsc || !intel_uc_fw_is_running(&gsc->fw))
return false;
}
/* MEI/GSC interface is solid depending on which is used */
mutex_lock(&dev_priv->display.hdcp.comp_mutex);

View File

@ -98,17 +98,17 @@ static const struct dpu_sspp_cfg msm8998_sspp[] = {
static const struct dpu_lm_cfg msm8998_lm[] = {
LM_BLK("lm_0", LM_0, 0x44000, MIXER_MSM8998_MASK,
&msm8998_lm_sblk, PINGPONG_0, LM_2, DSPP_0),
&msm8998_lm_sblk, PINGPONG_0, LM_1, DSPP_0),
LM_BLK("lm_1", LM_1, 0x45000, MIXER_MSM8998_MASK,
&msm8998_lm_sblk, PINGPONG_1, LM_5, DSPP_1),
&msm8998_lm_sblk, PINGPONG_1, LM_0, DSPP_1),
LM_BLK("lm_2", LM_2, 0x46000, MIXER_MSM8998_MASK,
&msm8998_lm_sblk, PINGPONG_2, LM_0, 0),
&msm8998_lm_sblk, PINGPONG_2, LM_5, 0),
LM_BLK("lm_3", LM_3, 0x47000, MIXER_MSM8998_MASK,
&msm8998_lm_sblk, PINGPONG_MAX, 0, 0),
LM_BLK("lm_4", LM_4, 0x48000, MIXER_MSM8998_MASK,
&msm8998_lm_sblk, PINGPONG_MAX, 0, 0),
LM_BLK("lm_5", LM_5, 0x49000, MIXER_MSM8998_MASK,
&msm8998_lm_sblk, PINGPONG_3, LM_1, 0),
&msm8998_lm_sblk, PINGPONG_3, LM_2, 0),
};
static const struct dpu_pingpong_cfg msm8998_pp[] = {
@ -134,10 +134,10 @@ static const struct dpu_dspp_cfg msm8998_dspp[] = {
};
static const struct dpu_intf_cfg msm8998_intf[] = {
INTF_BLK("intf_0", INTF_0, 0x6a000, 0x280, INTF_DP, 0, 25, INTF_SDM845_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
INTF_BLK("intf_1", INTF_1, 0x6a800, 0x280, INTF_DSI, 0, 25, INTF_SDM845_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
INTF_BLK("intf_2", INTF_2, 0x6b000, 0x280, INTF_DSI, 1, 25, INTF_SDM845_MASK, MDP_SSPP_TOP0_INTR, 28, 29),
INTF_BLK("intf_3", INTF_3, 0x6b800, 0x280, INTF_HDMI, 0, 25, INTF_SDM845_MASK, MDP_SSPP_TOP0_INTR, 30, 31),
INTF_BLK("intf_0", INTF_0, 0x6a000, 0x280, INTF_DP, 0, 21, INTF_SDM845_MASK, MDP_SSPP_TOP0_INTR, 24, 25),
INTF_BLK("intf_1", INTF_1, 0x6a800, 0x280, INTF_DSI, 0, 21, INTF_SDM845_MASK, MDP_SSPP_TOP0_INTR, 26, 27),
INTF_BLK("intf_2", INTF_2, 0x6b000, 0x280, INTF_DSI, 1, 21, INTF_SDM845_MASK, MDP_SSPP_TOP0_INTR, 28, 29),
INTF_BLK("intf_3", INTF_3, 0x6b800, 0x280, INTF_HDMI, 0, 21, INTF_SDM845_MASK, MDP_SSPP_TOP0_INTR, 30, 31),
};
static const struct dpu_perf_cfg msm8998_perf_data = {

View File

@ -128,10 +128,10 @@ static const struct dpu_dspp_cfg sm8150_dspp[] = {
};
static const struct dpu_pingpong_cfg sm8150_pp[] = {
PP_BLK_TE("pingpong_0", PINGPONG_0, 0x70000, MERGE_3D_0, sdm845_pp_sblk_te,
PP_BLK("pingpong_0", PINGPONG_0, 0x70000, MERGE_3D_0, sdm845_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12)),
PP_BLK_TE("pingpong_1", PINGPONG_1, 0x70800, MERGE_3D_0, sdm845_pp_sblk_te,
PP_BLK("pingpong_1", PINGPONG_1, 0x70800, MERGE_3D_0, sdm845_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9),
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 13)),
PP_BLK("pingpong_2", PINGPONG_2, 0x71000, MERGE_3D_1, sdm845_pp_sblk,

View File

@ -116,10 +116,10 @@ static const struct dpu_lm_cfg sc8180x_lm[] = {
};
static const struct dpu_pingpong_cfg sc8180x_pp[] = {
PP_BLK_TE("pingpong_0", PINGPONG_0, 0x70000, MERGE_3D_0, sdm845_pp_sblk_te,
PP_BLK("pingpong_0", PINGPONG_0, 0x70000, MERGE_3D_0, sdm845_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12)),
PP_BLK_TE("pingpong_1", PINGPONG_1, 0x70800, MERGE_3D_0, sdm845_pp_sblk_te,
PP_BLK("pingpong_1", PINGPONG_1, 0x70800, MERGE_3D_0, sdm845_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9),
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 13)),
PP_BLK("pingpong_2", PINGPONG_2, 0x71000, MERGE_3D_1, sdm845_pp_sblk,

View File

@ -129,10 +129,10 @@ static const struct dpu_dspp_cfg sm8250_dspp[] = {
};
static const struct dpu_pingpong_cfg sm8250_pp[] = {
PP_BLK_TE("pingpong_0", PINGPONG_0, 0x70000, MERGE_3D_0, sdm845_pp_sblk_te,
PP_BLK("pingpong_0", PINGPONG_0, 0x70000, MERGE_3D_0, sdm845_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12)),
PP_BLK_TE("pingpong_1", PINGPONG_1, 0x70800, MERGE_3D_0, sdm845_pp_sblk_te,
PP_BLK("pingpong_1", PINGPONG_1, 0x70800, MERGE_3D_0, sdm845_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9),
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 13)),
PP_BLK("pingpong_2", PINGPONG_2, 0x71000, MERGE_3D_1, sdm845_pp_sblk,

View File

@ -80,8 +80,8 @@ static const struct dpu_dspp_cfg sc7180_dspp[] = {
};
static const struct dpu_pingpong_cfg sc7180_pp[] = {
PP_BLK_TE("pingpong_0", PINGPONG_0, 0x70000, 0, sdm845_pp_sblk_te, -1, -1),
PP_BLK_TE("pingpong_1", PINGPONG_1, 0x70800, 0, sdm845_pp_sblk_te, -1, -1),
PP_BLK("pingpong_0", PINGPONG_0, 0x70000, 0, sdm845_pp_sblk, -1, -1),
PP_BLK("pingpong_1", PINGPONG_1, 0x70800, 0, sdm845_pp_sblk, -1, -1),
};
static const struct dpu_intf_cfg sc7180_intf[] = {

View File

@ -122,7 +122,6 @@ const struct dpu_mdss_cfg dpu_sm6115_cfg = {
.mdss_irqs = BIT(MDP_SSPP_TOP0_INTR) | \
BIT(MDP_SSPP_TOP0_INTR2) | \
BIT(MDP_SSPP_TOP0_HIST_INTR) | \
BIT(MDP_INTF0_INTR) | \
BIT(MDP_INTF1_INTR),
};

View File

@ -112,7 +112,6 @@ const struct dpu_mdss_cfg dpu_qcm2290_cfg = {
.mdss_irqs = BIT(MDP_SSPP_TOP0_INTR) | \
BIT(MDP_SSPP_TOP0_INTR2) | \
BIT(MDP_SSPP_TOP0_HIST_INTR) | \
BIT(MDP_INTF0_INTR) | \
BIT(MDP_INTF1_INTR),
};

View File

@ -127,22 +127,22 @@ static const struct dpu_dspp_cfg sm8350_dspp[] = {
};
static const struct dpu_pingpong_cfg sm8350_pp[] = {
PP_BLK_TE("pingpong_0", PINGPONG_0, 0x69000, MERGE_3D_0, sdm845_pp_sblk_te,
PP_BLK_DITHER("pingpong_0", PINGPONG_0, 0x69000, MERGE_3D_0, sc7280_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12)),
PP_BLK_TE("pingpong_1", PINGPONG_1, 0x6a000, MERGE_3D_0, sdm845_pp_sblk_te,
PP_BLK_DITHER("pingpong_1", PINGPONG_1, 0x6a000, MERGE_3D_0, sc7280_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9),
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 13)),
PP_BLK("pingpong_2", PINGPONG_2, 0x6b000, MERGE_3D_1, sdm845_pp_sblk,
PP_BLK_DITHER("pingpong_2", PINGPONG_2, 0x6b000, MERGE_3D_1, sc7280_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 10),
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 14)),
PP_BLK("pingpong_3", PINGPONG_3, 0x6c000, MERGE_3D_1, sdm845_pp_sblk,
PP_BLK_DITHER("pingpong_3", PINGPONG_3, 0x6c000, MERGE_3D_1, sc7280_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 11),
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 15)),
PP_BLK("pingpong_4", PINGPONG_4, 0x6d000, MERGE_3D_2, sdm845_pp_sblk,
PP_BLK_DITHER("pingpong_4", PINGPONG_4, 0x6d000, MERGE_3D_2, sc7280_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 30),
-1),
PP_BLK("pingpong_5", PINGPONG_5, 0x6e000, MERGE_3D_2, sdm845_pp_sblk,
PP_BLK_DITHER("pingpong_5", PINGPONG_5, 0x6e000, MERGE_3D_2, sc7280_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 31),
-1),
};

View File

@ -87,10 +87,10 @@ static const struct dpu_dspp_cfg sc7280_dspp[] = {
};
static const struct dpu_pingpong_cfg sc7280_pp[] = {
PP_BLK("pingpong_0", PINGPONG_0, 0x69000, 0, sc7280_pp_sblk, -1, -1),
PP_BLK("pingpong_1", PINGPONG_1, 0x6a000, 0, sc7280_pp_sblk, -1, -1),
PP_BLK("pingpong_2", PINGPONG_2, 0x6b000, 0, sc7280_pp_sblk, -1, -1),
PP_BLK("pingpong_3", PINGPONG_3, 0x6c000, 0, sc7280_pp_sblk, -1, -1),
PP_BLK_DITHER("pingpong_0", PINGPONG_0, 0x69000, 0, sc7280_pp_sblk, -1, -1),
PP_BLK_DITHER("pingpong_1", PINGPONG_1, 0x6a000, 0, sc7280_pp_sblk, -1, -1),
PP_BLK_DITHER("pingpong_2", PINGPONG_2, 0x6b000, 0, sc7280_pp_sblk, -1, -1),
PP_BLK_DITHER("pingpong_3", PINGPONG_3, 0x6c000, 0, sc7280_pp_sblk, -1, -1),
};
static const struct dpu_intf_cfg sc7280_intf[] = {

View File

@ -121,18 +121,18 @@ static const struct dpu_dspp_cfg sc8280xp_dspp[] = {
};
static const struct dpu_pingpong_cfg sc8280xp_pp[] = {
PP_BLK_TE("pingpong_0", PINGPONG_0, 0x69000, MERGE_3D_0, sdm845_pp_sblk_te,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8), -1),
PP_BLK_TE("pingpong_1", PINGPONG_1, 0x6a000, MERGE_3D_0, sdm845_pp_sblk_te,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9), -1),
PP_BLK_TE("pingpong_2", PINGPONG_2, 0x6b000, MERGE_3D_1, sdm845_pp_sblk_te,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 10), -1),
PP_BLK_TE("pingpong_3", PINGPONG_3, 0x6c000, MERGE_3D_1, sdm845_pp_sblk_te,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 11), -1),
PP_BLK_TE("pingpong_4", PINGPONG_4, 0x6d000, MERGE_3D_2, sdm845_pp_sblk_te,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 30), -1),
PP_BLK_TE("pingpong_5", PINGPONG_5, 0x6e000, MERGE_3D_2, sdm845_pp_sblk_te,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 31), -1),
PP_BLK_DITHER("pingpong_0", PINGPONG_0, 0x69000, MERGE_3D_0, sc7280_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8), -1),
PP_BLK_DITHER("pingpong_1", PINGPONG_1, 0x6a000, MERGE_3D_0, sc7280_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9), -1),
PP_BLK_DITHER("pingpong_2", PINGPONG_2, 0x6b000, MERGE_3D_1, sc7280_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 10), -1),
PP_BLK_DITHER("pingpong_3", PINGPONG_3, 0x6c000, MERGE_3D_1, sc7280_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 11), -1),
PP_BLK_DITHER("pingpong_4", PINGPONG_4, 0x6d000, MERGE_3D_2, sc7280_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 30), -1),
PP_BLK_DITHER("pingpong_5", PINGPONG_5, 0x6e000, MERGE_3D_2, sc7280_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 31), -1),
};
static const struct dpu_merge_3d_cfg sc8280xp_merge_3d[] = {

View File

@ -128,28 +128,28 @@ static const struct dpu_dspp_cfg sm8450_dspp[] = {
};
/* FIXME: interrupts */
static const struct dpu_pingpong_cfg sm8450_pp[] = {
PP_BLK_TE("pingpong_0", PINGPONG_0, 0x69000, MERGE_3D_0, sdm845_pp_sblk_te,
PP_BLK_DITHER("pingpong_0", PINGPONG_0, 0x69000, MERGE_3D_0, sc7280_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12)),
PP_BLK_TE("pingpong_1", PINGPONG_1, 0x6a000, MERGE_3D_0, sdm845_pp_sblk_te,
PP_BLK_DITHER("pingpong_1", PINGPONG_1, 0x6a000, MERGE_3D_0, sc7280_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9),
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 13)),
PP_BLK("pingpong_2", PINGPONG_2, 0x6b000, MERGE_3D_1, sdm845_pp_sblk,
PP_BLK_DITHER("pingpong_2", PINGPONG_2, 0x6b000, MERGE_3D_1, sc7280_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 10),
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 14)),
PP_BLK("pingpong_3", PINGPONG_3, 0x6c000, MERGE_3D_1, sdm845_pp_sblk,
PP_BLK_DITHER("pingpong_3", PINGPONG_3, 0x6c000, MERGE_3D_1, sc7280_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 11),
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 15)),
PP_BLK("pingpong_4", PINGPONG_4, 0x6d000, MERGE_3D_2, sdm845_pp_sblk,
PP_BLK_DITHER("pingpong_4", PINGPONG_4, 0x6d000, MERGE_3D_2, sc7280_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 30),
-1),
PP_BLK("pingpong_5", PINGPONG_5, 0x6e000, MERGE_3D_2, sdm845_pp_sblk,
PP_BLK_DITHER("pingpong_5", PINGPONG_5, 0x6e000, MERGE_3D_2, sc7280_pp_sblk,
DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 31),
-1),
PP_BLK("pingpong_6", PINGPONG_6, 0x65800, MERGE_3D_3, sdm845_pp_sblk,
PP_BLK_DITHER("pingpong_6", PINGPONG_6, 0x65800, MERGE_3D_3, sc7280_pp_sblk,
-1,
-1),
PP_BLK("pingpong_7", PINGPONG_7, 0x65c00, MERGE_3D_3, sdm845_pp_sblk,
PP_BLK_DITHER("pingpong_7", PINGPONG_7, 0x65c00, MERGE_3D_3, sc7280_pp_sblk,
-1,
-1),
};

Some files were not shown because too many files have changed in this diff.