KVM: s390: features and fixes for 4.9

- lazy enablement of runtime instrumentation
- up to 255 CPUs for nested guests
- rework of machine check delivery
- cleanups/fixes
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.14 (GNU/Linux)
 
 iQIcBAABAgAGBQJX0VHHAAoJEBF7vIC1phx88QwP/ir7L15bHPLUtdqwQn95yjzK
 wlMTSekOrvbXImEKMh7nizMN76WI2nee1NvRe3pNf6Uc0Ntwiyzr7d2wQwX6L8Fn
 D+28Crx3v/X3wGSLWdCf/17tuOUjVVMLhRHAIX4K+bl88L6NoMN2lsuXSAq7Fp7I
 /CiXTGjHnF2eVL41e6p5oIRJvNaIfX3DlvRgfYc3TVrJum4tXcfCCMGWaXASarIW
 g/q/q7s3iNbaLPrgb5rhQ2XY6puTTrot4whW7hXWsNWEW0r3MuwjOtxoy1VCMBeI
 5PiCSqByYf7c6O5hqu4VClhalB76q43LQLY55/WLnuljJfru7Koiy9zxpnE0zTii
 VnesQZv7emI1HYd4UbZkwo1LdE0o67I1dWucQx2yrqRpRvx6K/0VvvWH/Z4Y8N1Z
 B6DQdvn0tRAnpFxsJLZ51WaEwtZuaXYirXwE6wMDcxhbmCorXvPTUdu4mLPhMkp7
 nLfbA8fyXeU7HwzH90v/dvKY38jU6O0yzFAKHySowlSb6Kzmbuhym5Hgd2N91BUK
 bbE4uW4mOjUpByGNGKcOJ+5typN2hfotriayWwgKKiROYQR19HpFNR4VFiE4Gwgy
 yGxkbDgr11tHZ9flEMJT1c7k6sDDY+Dg2xgc1AxQQ9aZCXeH4Fe70uvgQzrTKjX3
 XRER04mHV5yXpze7lumg
 =99TB
 -----END PGP SIGNATURE-----

Merge tag 'kvm-s390-next-4.9-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD

KVM: s390: features and fixes for 4.9

- lazy enablement of runtime instrumentation
- up to 255 CPUs for nested guests
- rework of machine check delivery
- cleanups/fixes
Merged by Paolo Bonzini on 2016-09-08 15:35:44 +02:00, commit 6f90f1d1d2: 138 changed files with 2460 additions and 651 deletions.


@ -131,7 +131,7 @@ pygments_style = 'sphinx'
todo_include_todos = False
primary_domain = 'C'
highlight_language = 'C'
highlight_language = 'guess'
# -- Options for HTML output ----------------------------------------------


@ -19,5 +19,5 @@ enhancements. It can monitor up to 4 voltages, 16 temperatures and
implemented in this driver.
Specification of the chip can be found here:
ftp:///pub/Mainboard-OEM-Sales/Services/Software&Tools/Linux_SystemMonitoring&Watchdog&GPIO/BMC-Teutates_Specification_V1.21.pdf
ftp:///pub/Mainboard-OEM-Sales/Services/Software&Tools/Linux_SystemMonitoring&Watchdog&GPIO/Fujitsu_mainboards-1-Sensors_HowTo-en-US.pdf
ftp://ftp.ts.fujitsu.com/pub/Mainboard-OEM-Sales/Services/Software&Tools/Linux_SystemMonitoring&Watchdog&GPIO/BMC-Teutates_Specification_V1.21.pdf
ftp://ftp.ts.fujitsu.com/pub/Mainboard-OEM-Sales/Services/Software&Tools/Linux_SystemMonitoring&Watchdog&GPIO/Fujitsu_mainboards-1-Sensors_HowTo-en-US.pdf


@ -366,8 +366,6 @@ Domain`_ references.
Cross-referencing from reStructuredText
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. highlight:: none
To cross-reference the functions and types defined in the kernel-doc comments
from reStructuredText documents, please use the `Sphinx C Domain`_
references. For example::
@ -390,8 +388,6 @@ For further details, please refer to the `Sphinx C Domain`_ documentation.
Function documentation
----------------------
.. highlight:: c
The general format of a function and function-like macro kernel-doc comment is (the skeleton below uses the usual placeholder names)::
/**
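 * function_name() - Brief description of function.
 * @arg1: Describe the first argument.
 * @arg2: Describe the second argument.
 *
 * A longer description, possibly spanning several paragraphs, with more
 * discussion of the function.
 *
 * Return: Describe the return value of function_name.
 */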
@ -572,8 +568,6 @@ DocBook XML [DEPRECATED]
Converting DocBook to Sphinx
----------------------------
.. highlight:: none
Over time, we expect all of the documents under ``Documentation/DocBook`` to be
converted to Sphinx and reStructuredText. For most DocBook XML documents, a good
enough solution is to use the simple ``Documentation/sphinx/tmplcvt`` script,


@ -164,7 +164,32 @@ load n/2 modules more and try again.
Again, if you find the offending module(s), they must be unloaded every time
before hibernation, and please report the problem with them.
c) Advanced debugging
c) Using the "test_resume" hibernation option
/sys/power/disk generally tells the kernel what to do after creating a
hibernation image. One of the available options is "test_resume" which
causes the just created image to be used for immediate restoration. Namely,
after doing:
# echo test_resume > /sys/power/disk
# echo disk > /sys/power/state
a hibernation image will be created and a resume from it will be triggered
immediately without involving the platform firmware in any way.
That test can be used to check if failures to resume from hibernation are
related to bad interactions with the platform firmware. That is, if the above
works every time, but resume from actual hibernation does not work or is
unreliable, the platform firmware may be responsible for the failures.
On architectures and platforms that support using different kernels to restore
hibernation images (that is, the kernel used to read the image from storage and
load it into memory is different from the one included in the image) or support
kernel address space randomization, it also can be used to check if failures
to resume may be related to the differences between the restore and image
kernels.
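
For reference, the sequence above can also be driven from a small user-space
helper. This is a minimal sketch, assuming sysfs is mounted at /sys and the
process is allowed to write these files (the helper name is illustrative):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write a short string to a sysfs attribute; return 0 on success. */
static int sysfs_write(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);
	ssize_t n;

	if (fd < 0) {
		perror(path);
		return -1;
	}
	n = write(fd, val, strlen(val));
	close(fd);
	return n < 0 ? -1 : 0;
}

int main(void)
{
	/* Select the test_resume mode, then trigger hibernation. */
	if (sysfs_write("/sys/power/disk", "test_resume"))
		return 1;
	if (sysfs_write("/sys/power/state", "disk"))
		return 1;
	return 0;
}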
d) Advanced debugging
In case that hibernation does not work on your system even in the minimal
configuration and compiling more drivers as modules is not practical or some


@ -1,75 +1,76 @@
Power Management Interface
Power Management Interface for System Sleep
Copyright (c) 2016 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The power management subsystem provides a unified sysfs interface to
userspace, regardless of what architecture or platform one is
running. The interface exists in /sys/power/ directory (assuming sysfs
is mounted at /sys).
The power management subsystem provides userspace with a unified sysfs interface
for system sleep regardless of the underlying system architecture or platform.
The interface is located in the /sys/power/ directory (assuming that sysfs is
mounted at /sys).
/sys/power/state controls system power state. Reading from this file
returns what states are supported, which is hard-coded to 'freeze',
'standby' (Power-On Suspend), 'mem' (Suspend-to-RAM), and 'disk'
(Suspend-to-Disk).
/sys/power/state is the system sleep state control file.
Writing to this file one of those strings causes the system to
transition into that state. Please see the file
Documentation/power/states.txt for a description of each of those
states.
Reading from it returns a list of supported sleep states, encoded as:
'freeze' (Suspend-to-Idle)
'standby' (Power-On Suspend)
'mem' (Suspend-to-RAM)
'disk' (Suspend-to-Disk)
/sys/power/disk controls the operating mode of the suspend-to-disk
mechanism. Suspend-to-disk can be handled in several ways. We have a
few options for putting the system to sleep - using the platform driver
(e.g. ACPI or other suspend_ops), powering off the system or rebooting the
system (for testing).
Suspend-to-Idle is always supported. Suspend-to-Disk is always supported
too, as long as the kernel has been configured to support hibernation at all
(i.e. CONFIG_HIBERNATION is set in the kernel configuration file). Support
for Suspend-to-RAM and Power-On Suspend depends on the capabilities of the
platform.
Additionally, /sys/power/disk can be used to turn on one of the two testing
modes of the suspend-to-disk mechanism: 'testproc' or 'test'. If the
suspend-to-disk mechanism is in the 'testproc' mode, writing 'disk' to
/sys/power/state will cause the kernel to disable nonboot CPUs and freeze
tasks, wait for 5 seconds, unfreeze tasks and enable nonboot CPUs. If it is
in the 'test' mode, writing 'disk' to /sys/power/state will cause the kernel
to disable nonboot CPUs and freeze tasks, shrink memory, suspend devices, wait
for 5 seconds, resume devices, unfreeze tasks and enable nonboot CPUs. Then,
we are able to look in the log messages and work out, for example, which code
is being slow and which device drivers are misbehaving.
If one of the strings listed in /sys/power/state is written to it, the system
will attempt to transition into the corresponding sleep state. Refer to
Documentation/power/states.txt for a description of each of those states.
Reading from this file will display all supported modes and the currently
selected one in brackets, for example
/sys/power/disk controls the operating mode of hibernation (Suspend-to-Disk).
Specifically, it tells the kernel what to do after creating a hibernation image.
[shutdown] reboot test testproc
Reading from it returns a list of supported options encoded as:
Writing to this file will accept one of
'platform' (put the system into sleep using a platform-provided method)
'shutdown' (shut the system down)
'reboot' (reboot the system)
'suspend' (trigger a Suspend-to-RAM transition)
'test_resume' (resume-after-hibernation test mode)
'platform' (only if the platform supports it)
'shutdown'
'reboot'
'testproc'
'test'
The currently selected option is printed in square brackets.
/sys/power/image_size controls the size of the image created by
the suspend-to-disk mechanism. It can be written a string
representing a non-negative integer that will be used as an upper
limit of the image size, in bytes. The suspend-to-disk mechanism will
do its best to ensure the image size will not exceed that number. However,
if this turns out to be impossible, it will try to suspend anyway using the
smallest image possible. In particular, if "0" is written to this file, the
suspend image will be as small as possible.
The 'platform' option is only available if the platform provides a special
mechanism to put the system to sleep after creating a hibernation image (ACPI
does that, for example). The 'suspend' option is available if Suspend-to-RAM
is supported. Refer to Documentation/power/basic_pm_debugging.txt for the
description of the 'test_resume' option.
Reading from this file will display the current image size limit, which
is set to 2/5 of available RAM by default.
To select an option, write the string representing it to /sys/power/disk.
/sys/power/pm_trace controls the code which saves the last PM event point in
the RTC across reboots, so that you can debug a machine that just hangs
during suspend (or more commonly, during resume). Namely, the RTC is only
used to save the last PM event point if this file contains '1'. Initially it
contains '0' which may be changed to '1' by writing a string representing a
nonzero integer into it.
/sys/power/image_size controls the size of hibernation images.
To use this debugging feature you should attempt to suspend the machine, then
reboot it and run
It can be written a string representing a non-negative integer that will be
used as a best-effort upper limit of the image size, in bytes. The hibernation
core will do its best to ensure that the image size will not exceed that number.
However, if that turns out to be impossible to achieve, a hibernation image will
still be created and its size will be as small as possible. In particular,
writing '0' to this file will enforce hibernation images to be as small as
possible.
dmesg -s 1000000 | grep 'hash matches'
Reading from this file returns the current image size limit, which is set to
around 2/5 of available RAM by default.
CAUTION: Using it will cause your machine's real-time (CMOS) clock to be
set to a random invalid time after a resume.
/sys/power/pm_trace controls the PM trace mechanism saving the last suspend
or resume event point in the RTC across reboots.
It helps to debug hard lockups or reboots due to device driver failures that
occur during system suspend or resume (which is more common) more effectively.
If /sys/power/pm_trace contains '1', the fingerprint of each suspend/resume
event point in turn will be stored in the RTC memory (overwriting the actual
RTC information), so it will survive a system crash if one occurs right after
storing it and it can be used later to identify the driver that caused the crash
to happen (see Documentation/power/s2ram.txt for more information).
Initially it contains '0' which may be changed to '1' by writing a string
representing a nonzero integer into it.
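
Because the currently selected option is printed in square brackets, user
space can recover it by scanning the read buffer. A minimal sketch, assuming
sysfs is mounted at /sys:

#include <stdio.h>
#include <string.h>

int main(void)
{
	char buf[256];
	char *lb, *rb;
	FILE *f = fopen("/sys/power/disk", "r");

	if (!f || !fgets(buf, sizeof(buf), f)) {
		perror("/sys/power/disk");
		return 1;
	}
	fclose(f);

	/* The selected option is the token enclosed in '[' and ']'. */
	lb = strchr(buf, '[');
	rb = lb ? strchr(lb, ']') : NULL;
	if (lb && rb) {
		*rb = '\0';
		printf("current hibernation mode: %s\n", lb + 1);
	}
	return 0;
}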


@ -42,11 +42,12 @@
caption a.headerlink { opacity: 0; }
caption a.headerlink:hover { opacity: 1; }
/* inline literal: drop the borderbox and red color */
/* inline literal: drop the borderbox, padding and red color */
code, .rst-content tt, .rst-content code {
color: inherit;
border: none;
padding: unset;
background: inherit;
font-size: 85%;
}


@ -4525,6 +4525,12 @@ L: linux-edac@vger.kernel.org
S: Maintained
F: drivers/edac/sb_edac.c
EDAC-SKYLAKE
M: Tony Luck <tony.luck@intel.com>
L: linux-edac@vger.kernel.org
S: Maintained
F: drivers/edac/skx_edac.c
EDAC-XGENE
APPLIED MICRO (APM) X-GENE SOC EDAC
M: Loc Ho <lho@apm.com>


@ -1,7 +1,7 @@
VERSION = 4
PATCHLEVEL = 8
SUBLEVEL = 0
EXTRAVERSION = -rc2
EXTRAVERSION = -rc3
NAME = Psychotic Stoned Sheep
# *DOCUMENTATION*


@ -295,6 +295,7 @@ __und_svc_fault:
bl __und_fault
__und_svc_finish:
get_thread_info tsk
ldr r5, [sp, #S_PSR] @ Get SVC cpsr
svc_exit r5 @ return from exception
UNWIND(.fnend )


@ -271,6 +271,12 @@ static int __init imx_gpc_init(struct device_node *node,
for (i = 0; i < IMR_NUM; i++)
writel_relaxed(~0, gpc_base + GPC_IMR1 + i * 4);
/*
* Clear the OF_POPULATED flag set in of_irq_init so that
* later the GPC power domain driver will not be skipped.
*/
of_node_clear_flag(node, OF_POPULATED);
return 0;
}
IRQCHIP_DECLARE(imx_gpc, "fsl,imx6q-gpc", imx_gpc_init);


@ -728,7 +728,8 @@ static void *__init late_alloc(unsigned long sz)
{
void *ptr = (void *)__get_free_pages(PGALLOC_GFP, get_order(sz));
BUG_ON(!ptr);
if (!ptr || !pgtable_page_ctor(virt_to_page(ptr)))
BUG();
return ptr;
}
@ -1155,10 +1156,19 @@ void __init sanity_check_meminfo(void)
{
phys_addr_t memblock_limit = 0;
int highmem = 0;
phys_addr_t vmalloc_limit = __pa(vmalloc_min - 1) + 1;
u64 vmalloc_limit;
struct memblock_region *reg;
bool should_use_highmem = false;
/*
* Let's use our own (unoptimized) equivalent of __pa() that is
* not affected by wrap-arounds when sizeof(phys_addr_t) == 4.
* The result is used as the upper bound on physical memory address
* and may itself be outside the valid range for which phys_addr_t
* and therefore __pa() is defined.
*/
vmalloc_limit = (u64)(uintptr_t)vmalloc_min - PAGE_OFFSET + PHYS_OFFSET;
for_each_memblock(memory, reg) {
phys_addr_t block_start = reg->base;
phys_addr_t block_end = reg->base + reg->size;
@ -1183,10 +1193,11 @@ void __init sanity_check_meminfo(void)
if (reg->size > size_limit) {
phys_addr_t overlap_size = reg->size - size_limit;
pr_notice("Truncating RAM at %pa-%pa to -%pa",
&block_start, &block_end, &vmalloc_limit);
memblock_remove(vmalloc_limit, overlap_size);
pr_notice("Truncating RAM at %pa-%pa",
&block_start, &block_end);
block_end = vmalloc_limit;
pr_cont(" to -%pa", &block_end);
memblock_remove(vmalloc_limit, overlap_size);
should_use_highmem = true;
}
}


@ -101,12 +101,20 @@ ENTRY(cpu_resume)
bl el2_setup // if in EL2 drop to EL1 cleanly
/* enable the MMU early - so we can access sleep_save_stash by va */
adr_l lr, __enable_mmu /* __cpu_setup will return here */
ldr x27, =_cpu_resume /* __enable_mmu will branch here */
adr_l x27, _resume_switched /* __enable_mmu will branch here */
adrp x25, idmap_pg_dir
adrp x26, swapper_pg_dir
b __cpu_setup
ENDPROC(cpu_resume)
.pushsection ".idmap.text", "ax"
_resume_switched:
ldr x8, =_cpu_resume
br x8
ENDPROC(_resume_switched)
.ltorg
.popsection
ENTRY(_cpu_resume)
mrs x1, mpidr_el1
adrp x8, mpidr_hash


@ -242,7 +242,7 @@ static void note_page(struct pg_state *st, unsigned long addr, unsigned level,
static void walk_pte(struct pg_state *st, pmd_t *pmd, unsigned long start)
{
pte_t *pte = pte_offset_kernel(pmd, 0);
pte_t *pte = pte_offset_kernel(pmd, 0UL);
unsigned long addr;
unsigned i;
@ -254,7 +254,7 @@ static void walk_pte(struct pg_state *st, pmd_t *pmd, unsigned long start)
static void walk_pmd(struct pg_state *st, pud_t *pud, unsigned long start)
{
pmd_t *pmd = pmd_offset(pud, 0);
pmd_t *pmd = pmd_offset(pud, 0UL);
unsigned long addr;
unsigned i;
@ -271,7 +271,7 @@ static void walk_pmd(struct pg_state *st, pud_t *pud, unsigned long start)
static void walk_pud(struct pg_state *st, pgd_t *pgd, unsigned long start)
{
pud_t *pud = pud_offset(pgd, 0);
pud_t *pud = pud_offset(pgd, 0UL);
unsigned long addr;
unsigned i;


@ -23,6 +23,8 @@
#include <linux/module.h>
#include <linux/of.h>
#include <asm/acpi.h>
struct pglist_data *node_data[MAX_NUMNODES] __read_mostly;
EXPORT_SYMBOL(node_data);
nodemask_t numa_nodes_parsed __initdata;


@ -97,10 +97,10 @@
#define ENOTCONN 235 /* Transport endpoint is not connected */
#define ESHUTDOWN 236 /* Cannot send after transport endpoint shutdown */
#define ETOOMANYREFS 237 /* Too many references: cannot splice */
#define EREFUSED ECONNREFUSED /* for HP's NFS apparently */
#define ETIMEDOUT 238 /* Connection timed out */
#define ECONNREFUSED 239 /* Connection refused */
#define EREMOTERELEASE 240 /* Remote peer released connection */
#define EREFUSED ECONNREFUSED /* for HP's NFS apparently */
#define EREMOTERELEASE 240 /* Remote peer released connection */
#define EHOSTDOWN 241 /* Host is down */
#define EHOSTUNREACH 242 /* No route to host */


@ -51,8 +51,6 @@ EXPORT_SYMBOL(_parisc_requires_coherency);
DEFINE_PER_CPU(struct cpuinfo_parisc, cpu_data);
extern int update_cr16_clocksource(void); /* from time.c */
/*
** PARISC CPU driver - claim "device" and initialize CPU data structures.
**
@ -228,12 +226,6 @@ static int processor_probe(struct parisc_device *dev)
}
#endif
/* If we've registered more than one cpu,
* we'll use the jiffies clocksource since cr16
* is not synchronized between CPUs.
*/
update_cr16_clocksource();
return 0;
}


@ -221,18 +221,6 @@ static struct clocksource clocksource_cr16 = {
.flags = CLOCK_SOURCE_IS_CONTINUOUS,
};
int update_cr16_clocksource(void)
{
/* since the cr16 cycle counters are not synchronized across CPUs,
we'll check if we should switch to a safe clocksource: */
if (clocksource_cr16.rating != 0 && num_online_cpus() > 1) {
clocksource_change_rating(&clocksource_cr16, 0);
return 1;
}
return 0;
}
void __init start_cpu_itimer(void)
{
unsigned int cpu = smp_processor_id();


@ -55,4 +55,28 @@ static struct facility_def facility_defs[] = {
-1 /* END */
}
},
{
.name = "FACILITIES_KVM",
.bits = (int[]){
0, /* N3 instructions */
1, /* z/Arch mode installed */
2, /* z/Arch mode active */
3, /* DAT-enhancement */
4, /* idte segment table */
5, /* idte region table */
6, /* ASN-and-LX reuse */
7, /* stfle */
8, /* enhanced-DAT 1 */
9, /* sense-running-status */
10, /* conditional sske */
13, /* ipte-range */
14, /* nonquiescing key-setting */
73, /* transactional execution */
75, /* access-exception-fetch/store indication */
76, /* msa extension 3 */
77, /* msa extension 4 */
78, /* enhanced-DAT 2 */
-1 /* END */
}
},
};
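
These facility numbers map into the STFLE bit list using the s390 convention
that facility 0 is the leftmost bit of the first 64-bit word. A small sketch
of that mapping (helper and variable names are illustrative); with the
FACILITIES_KVM numbers above it reproduces the previously hard-coded mask
words 0xffe6000000000000 and 0x005e000000000000 replaced later in this
series:

#include <stdint.h>
#include <stdio.h>

/* Set STFLE facility number 'nr': facility 0 is the MSB of word 0. */
static void set_facility_bit(uint64_t *list, int nr)
{
	list[nr / 64] |= 1ULL << (63 - (nr % 64));
}

int main(void)
{
	static const int kvm_bits[] = {
		0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 13, 14, 73, 75, 76, 77, 78,
	};
	uint64_t fac[2] = { 0, 0 };
	unsigned int i;

	for (i = 0; i < sizeof(kvm_bits) / sizeof(kvm_bits[0]); i++)
		set_facility_bit(fac, kvm_bits[i]);
	printf("%#018llx %#018llx\n",
	       (unsigned long long)fac[0], (unsigned long long)fac[1]);
	return 0;
}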


@ -28,7 +28,7 @@
#define KVM_S390_BSCA_CPU_SLOTS 64
#define KVM_S390_ESCA_CPU_SLOTS 248
#define KVM_MAX_VCPUS KVM_S390_ESCA_CPU_SLOTS
#define KVM_MAX_VCPUS 255
#define KVM_USER_MEM_SLOTS 32
/*


@ -125,6 +125,7 @@ int main(void)
OFFSET(__LC_STFL_FAC_LIST, lowcore, stfl_fac_list);
OFFSET(__LC_STFLE_FAC_LIST, lowcore, stfle_fac_list);
OFFSET(__LC_MCCK_CODE, lowcore, mcck_interruption_code);
OFFSET(__LC_EXT_DAMAGE_CODE, lowcore, external_damage_code);
OFFSET(__LC_MCCK_FAIL_STOR_ADDR, lowcore, failing_storage_address);
OFFSET(__LC_LAST_BREAK, lowcore, breaking_event_addr);
OFFSET(__LC_RST_OLD_PSW, lowcore, restart_old_psw);


@ -495,6 +495,18 @@ static int trans_exc(struct kvm_vcpu *vcpu, int code, unsigned long gva,
tec = (struct trans_exc_code_bits *)&pgm->trans_exc_code;
switch (code) {
case PGM_PROTECTION:
switch (prot) {
case PROT_TYPE_ALC:
tec->b60 = 1;
/* FALL THROUGH */
case PROT_TYPE_DAT:
tec->b61 = 1;
break;
default: /* LA and KEYC set b61 to 0, other params undefined */
return code;
}
/* FALL THROUGH */
case PGM_ASCE_TYPE:
case PGM_PAGE_TRANSLATION:
case PGM_REGION_FIRST_TRANS:
@ -504,8 +516,7 @@ static int trans_exc(struct kvm_vcpu *vcpu, int code, unsigned long gva,
/*
* op_access_id only applies to MOVE_PAGE -> set bit 61
* exc_access_id has to be set to 0 for some instructions. Both
* cases have to be handled by the caller. We can always store
* exc_access_id, as it is undefined for non-ar cases.
* cases have to be handled by the caller.
*/
tec->addr = gva >> PAGE_SHIFT;
tec->fsi = mode == GACC_STORE ? FSI_STORE : FSI_FETCH;
@ -516,25 +527,13 @@ static int trans_exc(struct kvm_vcpu *vcpu, int code, unsigned long gva,
case PGM_ASTE_VALIDITY:
case PGM_ASTE_SEQUENCE:
case PGM_EXTENDED_AUTHORITY:
/*
* We can always store exc_access_id, as it is
* undefined for non-ar cases. It is undefined for
* most DAT protection exceptions.
*/
pgm->exc_access_id = ar;
break;
case PGM_PROTECTION:
switch (prot) {
case PROT_TYPE_ALC:
tec->b60 = 1;
/* FALL THROUGH */
case PROT_TYPE_DAT:
tec->b61 = 1;
tec->addr = gva >> PAGE_SHIFT;
tec->fsi = mode == GACC_STORE ? FSI_STORE : FSI_FETCH;
tec->as = psw_bits(vcpu->arch.sie_block->gpsw).as;
/* exc_access_id is undefined for most cases */
pgm->exc_access_id = ar;
break;
default: /* LA and KEYC set b61 to 0, other params undefined */
break;
}
break;
}
return code;
}


@ -206,7 +206,7 @@ static int __import_wp_info(struct kvm_vcpu *vcpu,
int kvm_s390_import_bp_data(struct kvm_vcpu *vcpu,
struct kvm_guest_debug *dbg)
{
int ret = 0, nr_wp = 0, nr_bp = 0, i, size;
int ret = 0, nr_wp = 0, nr_bp = 0, i;
struct kvm_hw_breakpoint *bp_data = NULL;
struct kvm_hw_wp_info_arch *wp_info = NULL;
struct kvm_hw_bp_info_arch *bp_info = NULL;
@ -216,17 +216,10 @@ int kvm_s390_import_bp_data(struct kvm_vcpu *vcpu,
else if (dbg->arch.nr_hw_bp > MAX_BP_COUNT)
return -EINVAL;
size = dbg->arch.nr_hw_bp * sizeof(struct kvm_hw_breakpoint);
bp_data = kmalloc(size, GFP_KERNEL);
if (!bp_data) {
ret = -ENOMEM;
goto error;
}
if (copy_from_user(bp_data, dbg->arch.hw_bp, size)) {
ret = -EFAULT;
goto error;
}
bp_data = memdup_user(dbg->arch.hw_bp,
sizeof(*bp_data) * dbg->arch.nr_hw_bp);
if (IS_ERR(bp_data))
return PTR_ERR(bp_data);
for (i = 0; i < dbg->arch.nr_hw_bp; i++) {
switch (bp_data[i].type) {
@ -241,17 +234,19 @@ int kvm_s390_import_bp_data(struct kvm_vcpu *vcpu,
}
}
size = nr_wp * sizeof(struct kvm_hw_wp_info_arch);
if (size > 0) {
wp_info = kmalloc(size, GFP_KERNEL);
if (nr_wp > 0) {
wp_info = kmalloc_array(nr_wp,
sizeof(*wp_info),
GFP_KERNEL);
if (!wp_info) {
ret = -ENOMEM;
goto error;
}
}
size = nr_bp * sizeof(struct kvm_hw_bp_info_arch);
if (size > 0) {
bp_info = kmalloc(size, GFP_KERNEL);
if (nr_bp > 0) {
bp_info = kmalloc_array(nr_bp,
sizeof(*bp_info),
GFP_KERNEL);
if (!bp_info) {
ret = -ENOMEM;
goto error;
@ -382,14 +377,20 @@ void kvm_s390_prepare_debug_exit(struct kvm_vcpu *vcpu)
vcpu->guest_debug &= ~KVM_GUESTDBG_EXIT_PENDING;
}
#define PER_CODE_MASK (PER_EVENT_MASK >> 24)
#define PER_CODE_BRANCH (PER_EVENT_BRANCH >> 24)
#define PER_CODE_IFETCH (PER_EVENT_IFETCH >> 24)
#define PER_CODE_STORE (PER_EVENT_STORE >> 24)
#define PER_CODE_STORE_REAL (PER_EVENT_STORE_REAL >> 24)
#define per_bp_event(code) \
(code & (PER_EVENT_IFETCH | PER_EVENT_BRANCH))
(code & (PER_CODE_IFETCH | PER_CODE_BRANCH))
#define per_write_wp_event(code) \
(code & (PER_EVENT_STORE | PER_EVENT_STORE_REAL))
(code & (PER_CODE_STORE | PER_CODE_STORE_REAL))
static int debug_exit_required(struct kvm_vcpu *vcpu)
{
u32 perc = (vcpu->arch.sie_block->perc << 24);
u8 perc = vcpu->arch.sie_block->perc;
struct kvm_debug_exit_arch *debug_exit = &vcpu->run->debug.arch;
struct kvm_hw_wp_info_arch *wp_info = NULL;
struct kvm_hw_bp_info_arch *bp_info = NULL;
@ -444,7 +445,7 @@ int kvm_s390_handle_per_ifetch_icpt(struct kvm_vcpu *vcpu)
const u8 ilen = kvm_s390_get_ilen(vcpu);
struct kvm_s390_pgm_info pgm_info = {
.code = PGM_PER,
.per_code = PER_EVENT_IFETCH >> 24,
.per_code = PER_CODE_IFETCH,
.per_address = __rewind_psw(vcpu->arch.sie_block->gpsw, ilen),
};
@ -458,33 +459,33 @@ int kvm_s390_handle_per_ifetch_icpt(struct kvm_vcpu *vcpu)
static void filter_guest_per_event(struct kvm_vcpu *vcpu)
{
u32 perc = vcpu->arch.sie_block->perc << 24;
const u8 perc = vcpu->arch.sie_block->perc;
u64 peraddr = vcpu->arch.sie_block->peraddr;
u64 addr = vcpu->arch.sie_block->gpsw.addr;
u64 cr9 = vcpu->arch.sie_block->gcr[9];
u64 cr10 = vcpu->arch.sie_block->gcr[10];
u64 cr11 = vcpu->arch.sie_block->gcr[11];
/* filter all events, demanded by the guest */
u32 guest_perc = perc & cr9 & PER_EVENT_MASK;
u8 guest_perc = perc & (cr9 >> 24) & PER_CODE_MASK;
if (!guest_per_enabled(vcpu))
guest_perc = 0;
/* filter "successful-branching" events */
if (guest_perc & PER_EVENT_BRANCH &&
if (guest_perc & PER_CODE_BRANCH &&
cr9 & PER_CONTROL_BRANCH_ADDRESS &&
!in_addr_range(addr, cr10, cr11))
guest_perc &= ~PER_EVENT_BRANCH;
guest_perc &= ~PER_CODE_BRANCH;
/* filter "instruction-fetching" events */
if (guest_perc & PER_EVENT_IFETCH &&
if (guest_perc & PER_CODE_IFETCH &&
!in_addr_range(peraddr, cr10, cr11))
guest_perc &= ~PER_EVENT_IFETCH;
guest_perc &= ~PER_CODE_IFETCH;
/* All other PER events will be given to the guest */
/* TODO: Check altered address/address space */
vcpu->arch.sie_block->perc = guest_perc >> 24;
vcpu->arch.sie_block->perc = guest_perc;
if (!guest_perc)
vcpu->arch.sie_block->iprcc &= ~PGM_PER;


@ -29,6 +29,7 @@ static const intercept_handler_t instruction_handlers[256] = {
[0x01] = kvm_s390_handle_01,
[0x82] = kvm_s390_handle_lpsw,
[0x83] = kvm_s390_handle_diag,
[0xaa] = kvm_s390_handle_aa,
[0xae] = kvm_s390_handle_sigp,
[0xb2] = kvm_s390_handle_b2,
[0xb6] = kvm_s390_handle_stctl,


@ -24,6 +24,8 @@
#include <asm/sclp.h>
#include <asm/isc.h>
#include <asm/gmap.h>
#include <asm/switch_to.h>
#include <asm/nmi.h>
#include "kvm-s390.h"
#include "gaccess.h"
#include "trace-s390.h"
@ -40,6 +42,7 @@ static int sca_ext_call_pending(struct kvm_vcpu *vcpu, int *src_id)
if (!(atomic_read(&vcpu->arch.sie_block->cpuflags) & CPUSTAT_ECALL_PEND))
return 0;
BUG_ON(!kvm_s390_use_sca_entries());
read_lock(&vcpu->kvm->arch.sca_lock);
if (vcpu->kvm->arch.use_esca) {
struct esca_block *sca = vcpu->kvm->arch.sca;
@ -68,6 +71,7 @@ static int sca_inject_ext_call(struct kvm_vcpu *vcpu, int src_id)
{
int expect, rc;
BUG_ON(!kvm_s390_use_sca_entries());
read_lock(&vcpu->kvm->arch.sca_lock);
if (vcpu->kvm->arch.use_esca) {
struct esca_block *sca = vcpu->kvm->arch.sca;
@ -109,6 +113,8 @@ static void sca_clear_ext_call(struct kvm_vcpu *vcpu)
struct kvm_s390_local_interrupt *li = &vcpu->arch.local_int;
int rc, expect;
if (!kvm_s390_use_sca_entries())
return;
atomic_andnot(CPUSTAT_ECALL_PEND, li->cpuflags);
read_lock(&vcpu->kvm->arch.sca_lock);
if (vcpu->kvm->arch.use_esca) {
@ -400,12 +406,78 @@ static int __must_check __deliver_pfault_init(struct kvm_vcpu *vcpu)
return rc ? -EFAULT : 0;
}
static int __write_machine_check(struct kvm_vcpu *vcpu,
struct kvm_s390_mchk_info *mchk)
{
unsigned long ext_sa_addr;
freg_t fprs[NUM_FPRS];
union mci mci;
int rc;
mci.val = mchk->mcic;
/* take care of lazy register loading via vcpu load/put */
save_fpu_regs();
save_access_regs(vcpu->run->s.regs.acrs);
/* Extended save area */
rc = read_guest_lc(vcpu, __LC_VX_SAVE_AREA_ADDR, &ext_sa_addr,
sizeof(unsigned long));
/* Only bits 0-53 are used for address formation */
ext_sa_addr &= ~0x3ffUL;
if (!rc && mci.vr && ext_sa_addr && test_kvm_facility(vcpu->kvm, 129)) {
if (write_guest_abs(vcpu, ext_sa_addr, vcpu->run->s.regs.vrs,
512))
mci.vr = 0;
} else {
mci.vr = 0;
}
/* General interruption information */
rc |= put_guest_lc(vcpu, 1, (u8 __user *) __LC_AR_MODE_ID);
rc |= write_guest_lc(vcpu, __LC_MCK_OLD_PSW,
&vcpu->arch.sie_block->gpsw, sizeof(psw_t));
rc |= read_guest_lc(vcpu, __LC_MCK_NEW_PSW,
&vcpu->arch.sie_block->gpsw, sizeof(psw_t));
rc |= put_guest_lc(vcpu, mci.val, (u64 __user *) __LC_MCCK_CODE);
/* Register-save areas */
if (MACHINE_HAS_VX) {
convert_vx_to_fp(fprs, (__vector128 *) vcpu->run->s.regs.vrs);
rc |= write_guest_lc(vcpu, __LC_FPREGS_SAVE_AREA, fprs, 128);
} else {
rc |= write_guest_lc(vcpu, __LC_FPREGS_SAVE_AREA,
vcpu->run->s.regs.fprs, 128);
}
rc |= write_guest_lc(vcpu, __LC_GPREGS_SAVE_AREA,
vcpu->run->s.regs.gprs, 128);
rc |= put_guest_lc(vcpu, current->thread.fpu.fpc,
(u32 __user *) __LC_FP_CREG_SAVE_AREA);
rc |= put_guest_lc(vcpu, vcpu->arch.sie_block->todpr,
(u32 __user *) __LC_TOD_PROGREG_SAVE_AREA);
rc |= put_guest_lc(vcpu, kvm_s390_get_cpu_timer(vcpu),
(u64 __user *) __LC_CPU_TIMER_SAVE_AREA);
rc |= put_guest_lc(vcpu, vcpu->arch.sie_block->ckc >> 8,
(u64 __user *) __LC_CLOCK_COMP_SAVE_AREA);
rc |= write_guest_lc(vcpu, __LC_AREGS_SAVE_AREA,
&vcpu->run->s.regs.acrs, 64);
rc |= write_guest_lc(vcpu, __LC_CREGS_SAVE_AREA,
&vcpu->arch.sie_block->gcr, 128);
/* Extended interruption information */
rc |= put_guest_lc(vcpu, mchk->ext_damage_code,
(u32 __user *) __LC_EXT_DAMAGE_CODE);
rc |= put_guest_lc(vcpu, mchk->failing_storage_address,
(u64 __user *) __LC_MCCK_FAIL_STOR_ADDR);
rc |= write_guest_lc(vcpu, __LC_PSW_SAVE_AREA, &mchk->fixed_logout,
sizeof(mchk->fixed_logout));
return rc ? -EFAULT : 0;
}
static int __must_check __deliver_machine_check(struct kvm_vcpu *vcpu)
{
struct kvm_s390_float_interrupt *fi = &vcpu->kvm->arch.float_int;
struct kvm_s390_local_interrupt *li = &vcpu->arch.local_int;
struct kvm_s390_mchk_info mchk = {};
unsigned long adtl_status_addr;
int deliver = 0;
int rc = 0;
@ -446,29 +518,9 @@ static int __must_check __deliver_machine_check(struct kvm_vcpu *vcpu)
trace_kvm_s390_deliver_interrupt(vcpu->vcpu_id,
KVM_S390_MCHK,
mchk.cr14, mchk.mcic);
rc = kvm_s390_vcpu_store_status(vcpu,
KVM_S390_STORE_STATUS_PREFIXED);
rc |= read_guest_lc(vcpu, __LC_VX_SAVE_AREA_ADDR,
&adtl_status_addr,
sizeof(unsigned long));
rc |= kvm_s390_vcpu_store_adtl_status(vcpu,
adtl_status_addr);
rc |= put_guest_lc(vcpu, mchk.mcic,
(u64 __user *) __LC_MCCK_CODE);
rc |= put_guest_lc(vcpu, mchk.failing_storage_address,
(u64 __user *) __LC_MCCK_FAIL_STOR_ADDR);
rc |= write_guest_lc(vcpu, __LC_PSW_SAVE_AREA,
&mchk.fixed_logout,
sizeof(mchk.fixed_logout));
rc |= write_guest_lc(vcpu, __LC_MCK_OLD_PSW,
&vcpu->arch.sie_block->gpsw,
sizeof(psw_t));
rc |= read_guest_lc(vcpu, __LC_MCK_NEW_PSW,
&vcpu->arch.sie_block->gpsw,
sizeof(psw_t));
rc = __write_machine_check(vcpu, &mchk);
}
return rc ? -EFAULT : 0;
return rc;
}
static int __must_check __deliver_restart(struct kvm_vcpu *vcpu)


@ -132,10 +132,7 @@ module_param(nested, int, S_IRUGO);
MODULE_PARM_DESC(nested, "Nested virtualization support");
/* upper facilities limit for kvm */
unsigned long kvm_s390_fac_list_mask[16] = {
0xffe6000000000000UL,
0x005e000000000000UL,
};
unsigned long kvm_s390_fac_list_mask[16] = { FACILITIES_KVM };
unsigned long kvm_s390_fac_list_mask_size(void)
{
@ -376,7 +373,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_NR_VCPUS:
case KVM_CAP_MAX_VCPUS:
r = KVM_S390_BSCA_CPU_SLOTS;
if (sclp.has_esca && sclp.has_64bscao)
if (!kvm_s390_use_sca_entries())
r = KVM_MAX_VCPUS;
else if (sclp.has_esca && sclp.has_64bscao)
r = KVM_S390_ESCA_CPU_SLOTS;
break;
case KVM_CAP_NR_MEMSLOTS:
@ -1553,6 +1552,8 @@ static int __kvm_ucontrol_vcpu_init(struct kvm_vcpu *vcpu)
static void sca_del_vcpu(struct kvm_vcpu *vcpu)
{
if (!kvm_s390_use_sca_entries())
return;
read_lock(&vcpu->kvm->arch.sca_lock);
if (vcpu->kvm->arch.use_esca) {
struct esca_block *sca = vcpu->kvm->arch.sca;
@ -1570,6 +1571,13 @@ static void sca_del_vcpu(struct kvm_vcpu *vcpu)
static void sca_add_vcpu(struct kvm_vcpu *vcpu)
{
if (!kvm_s390_use_sca_entries()) {
struct bsca_block *sca = vcpu->kvm->arch.sca;
/* we still need the basic sca for the ipte control */
vcpu->arch.sie_block->scaoh = (__u32)(((__u64)sca) >> 32);
vcpu->arch.sie_block->scaol = (__u32)(__u64)sca;
}
read_lock(&vcpu->kvm->arch.sca_lock);
if (vcpu->kvm->arch.use_esca) {
struct esca_block *sca = vcpu->kvm->arch.sca;
@ -1650,6 +1658,11 @@ static int sca_can_add_vcpu(struct kvm *kvm, unsigned int id)
{
int rc;
if (!kvm_s390_use_sca_entries()) {
if (id < KVM_MAX_VCPUS)
return true;
return false;
}
if (id < KVM_S390_BSCA_CPU_SLOTS)
return true;
if (!sclp.has_esca || !sclp.has_64bscao)
@ -1938,8 +1951,6 @@ int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
vcpu->arch.sie_block->eca |= 1;
if (sclp.has_sigpif)
vcpu->arch.sie_block->eca |= 0x10000000U;
if (test_kvm_facility(vcpu->kvm, 64))
vcpu->arch.sie_block->ecb3 |= 0x01;
if (test_kvm_facility(vcpu->kvm, 129)) {
vcpu->arch.sie_block->eca |= 0x00020000;
vcpu->arch.sie_block->ecd |= 0x20000000;
@ -2694,6 +2705,19 @@ static void sync_regs(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
if (vcpu->arch.pfault_token == KVM_S390_PFAULT_TOKEN_INVALID)
kvm_clear_async_pf_completion_queue(vcpu);
}
/*
* If userspace sets the riccb (e.g. after migration) to a valid state,
* we should enable RI here instead of doing the lazy enablement.
*/
if ((kvm_run->kvm_dirty_regs & KVM_SYNC_RICCB) &&
test_kvm_facility(vcpu->kvm, 64)) {
struct runtime_instr_cb *riccb =
(struct runtime_instr_cb *) &kvm_run->s.regs.riccb;
if (riccb->valid)
vcpu->arch.sie_block->ecb3 |= 0x01;
}
kvm_run->kvm_dirty_regs = 0;
}
@ -2837,38 +2861,6 @@ int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr)
return kvm_s390_store_status_unloaded(vcpu, addr);
}
/*
* store additional status at address
*/
int kvm_s390_store_adtl_status_unloaded(struct kvm_vcpu *vcpu,
unsigned long gpa)
{
/* Only bits 0-53 are used for address formation */
if (!(gpa & ~0x3ff))
return 0;
return write_guest_abs(vcpu, gpa & ~0x3ff,
(void *)&vcpu->run->s.regs.vrs, 512);
}
int kvm_s390_vcpu_store_adtl_status(struct kvm_vcpu *vcpu, unsigned long addr)
{
if (!test_kvm_facility(vcpu->kvm, 129))
return 0;
/*
* The guest VXRS are in the host VXRs due to the lazy
* copying in vcpu load/put. We can simply call save_fpu_regs()
* to save the current register state because we are in the
* middle of a load/put cycle.
*
* Let's update our copies before we save it into the save area.
*/
save_fpu_regs();
return kvm_s390_store_adtl_status_unloaded(vcpu, addr);
}
static void __disable_ibs_on_vcpu(struct kvm_vcpu *vcpu)
{
kvm_check_request(KVM_REQ_ENABLE_IBS, vcpu);


@ -20,6 +20,7 @@
#include <linux/kvm_host.h>
#include <asm/facility.h>
#include <asm/processor.h>
#include <asm/sclp.h>
typedef int (*intercept_handler_t)(struct kvm_vcpu *vcpu);
@ -245,6 +246,7 @@ static inline void kvm_s390_retry_instr(struct kvm_vcpu *vcpu)
/* implemented in priv.c */
int is_valid_psw(psw_t *psw);
int kvm_s390_handle_aa(struct kvm_vcpu *vcpu);
int kvm_s390_handle_b2(struct kvm_vcpu *vcpu);
int kvm_s390_handle_e5(struct kvm_vcpu *vcpu);
int kvm_s390_handle_01(struct kvm_vcpu *vcpu);
@ -273,10 +275,7 @@ int handle_sthyi(struct kvm_vcpu *vcpu);
void kvm_s390_set_tod_clock(struct kvm *kvm, u64 tod);
long kvm_arch_fault_in_page(struct kvm_vcpu *vcpu, gpa_t gpa, int writable);
int kvm_s390_store_status_unloaded(struct kvm_vcpu *vcpu, unsigned long addr);
int kvm_s390_store_adtl_status_unloaded(struct kvm_vcpu *vcpu,
unsigned long addr);
int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr);
int kvm_s390_vcpu_store_adtl_status(struct kvm_vcpu *vcpu, unsigned long addr);
void kvm_s390_vcpu_start(struct kvm_vcpu *vcpu);
void kvm_s390_vcpu_stop(struct kvm_vcpu *vcpu);
void kvm_s390_vcpu_block(struct kvm_vcpu *vcpu);
@ -389,4 +388,13 @@ static inline union ipte_control *kvm_s390_get_ipte_control(struct kvm *kvm)
return &sca->ipte_control;
}
static inline int kvm_s390_use_sca_entries(void)
{
/*
* Without SIGP interpretation, only SRS interpretation (if available)
* might use the entries. By not setting the entries and keeping them
* invalid, hardware will not access them but intercept.
*/
return sclp.has_sigpif;
}
#endif


@ -32,6 +32,24 @@
#include "kvm-s390.h"
#include "trace.h"
static int handle_ri(struct kvm_vcpu *vcpu)
{
if (test_kvm_facility(vcpu->kvm, 64)) {
vcpu->arch.sie_block->ecb3 |= 0x01;
kvm_s390_retry_instr(vcpu);
return 0;
} else
return kvm_s390_inject_program_int(vcpu, PGM_OPERATION);
}
int kvm_s390_handle_aa(struct kvm_vcpu *vcpu)
{
if ((vcpu->arch.sie_block->ipa & 0xf) <= 4)
return handle_ri(vcpu);
else
return -EOPNOTSUPP;
}
/* Handle SCK (SET CLOCK) interception */
static int handle_set_clock(struct kvm_vcpu *vcpu)
{
@ -1093,6 +1111,9 @@ static int handle_stctg(struct kvm_vcpu *vcpu)
static const intercept_handler_t eb_handlers[256] = {
[0x2f] = handle_lctlg,
[0x25] = handle_stctg,
[0x60] = handle_ri,
[0x61] = handle_ri,
[0x62] = handle_ri,
};
int kvm_s390_handle_eb(struct kvm_vcpu *vcpu)


@ -355,6 +355,7 @@ void load_ucode_amd_ap(void)
unsigned int cpu = smp_processor_id();
struct equiv_cpu_entry *eq;
struct microcode_amd *mc;
u8 *cont = container;
u32 rev, eax;
u16 eq_id;
@ -371,8 +372,11 @@ void load_ucode_amd_ap(void)
if (check_current_patch_level(&rev, false))
return;
/* Add CONFIG_RANDOMIZE_MEMORY offset. */
cont += PAGE_OFFSET - __PAGE_OFFSET_BASE;
eax = cpuid_eax(0x00000001);
eq = (struct equiv_cpu_entry *)(container + CONTAINER_HDR_SZ);
eq = (struct equiv_cpu_entry *)(cont + CONTAINER_HDR_SZ);
eq_id = find_equiv_id(eq, eax);
if (!eq_id)
@ -434,6 +438,9 @@ int __init save_microcode_in_initrd_amd(void)
else
container = cont_va;
/* Add CONFIG_RANDOMIZE_MEMORY offset. */
container += PAGE_OFFSET - __PAGE_OFFSET_BASE;
eax = cpuid_eax(0x00000001);
eax = ((eax >> 8) & 0xf) + ((eax >> 20) & 0xff);


@ -100,10 +100,11 @@ EXPORT_PER_CPU_SYMBOL(cpu_info);
/* Logical package management. We might want to allocate that dynamically */
static int *physical_to_logical_pkg __read_mostly;
static unsigned long *physical_package_map __read_mostly;
static unsigned long *logical_package_map __read_mostly;
static unsigned int max_physical_pkg_id __read_mostly;
unsigned int __max_logical_packages __read_mostly;
EXPORT_SYMBOL(__max_logical_packages);
static unsigned int logical_packages __read_mostly;
static bool logical_packages_frozen __read_mostly;
/* Maximum number of SMT threads on any online core */
int __max_smt_threads __read_mostly;
@ -277,14 +278,14 @@ int topology_update_package_map(unsigned int apicid, unsigned int cpu)
if (test_and_set_bit(pkg, physical_package_map))
goto found;
new = find_first_zero_bit(logical_package_map, __max_logical_packages);
if (new >= __max_logical_packages) {
if (logical_packages_frozen) {
physical_to_logical_pkg[pkg] = -1;
pr_warn("APIC(%x) Package %u exceeds logical package map\n",
pr_warn("APIC(%x) Package %u exceeds logical package max\n",
apicid, pkg);
return -ENOSPC;
}
set_bit(new, logical_package_map);
new = logical_packages++;
pr_info("APIC(%x) Converting physical %u to logical package %u\n",
apicid, pkg, new);
physical_to_logical_pkg[pkg] = new;
@ -341,6 +342,7 @@ static void __init smp_init_package_map(void)
}
__max_logical_packages = DIV_ROUND_UP(total_cpus, ncpus);
logical_packages = 0;
/*
* Possibly larger than what we need as the number of apic ids per
@ -352,10 +354,6 @@ static void __init smp_init_package_map(void)
memset(physical_to_logical_pkg, 0xff, size);
size = BITS_TO_LONGS(max_physical_pkg_id) * sizeof(unsigned long);
physical_package_map = kzalloc(size, GFP_KERNEL);
size = BITS_TO_LONGS(__max_logical_packages) * sizeof(unsigned long);
logical_package_map = kzalloc(size, GFP_KERNEL);
pr_info("Max logical packages: %u\n", __max_logical_packages);
for_each_present_cpu(cpu) {
unsigned int apicid = apic->cpu_present_to_apicid(cpu);
@ -369,6 +367,15 @@ static void __init smp_init_package_map(void)
set_cpu_possible(cpu, false);
set_cpu_present(cpu, false);
}
if (logical_packages > __max_logical_packages) {
pr_warn("Detected more packages (%u), then computed by BIOS data (%u).\n",
logical_packages, __max_logical_packages);
logical_packages_frozen = true;
__max_logical_packages = logical_packages;
}
pr_info("Max logical packages: %u\n", __max_logical_packages);
}
void __init smp_store_boot_cpu_info(void)


@ -113,7 +113,7 @@ static int set_up_temporary_mappings(void)
return result;
}
temp_level4_pgt = (unsigned long)pgd - __PAGE_OFFSET;
temp_level4_pgt = __pa(pgd);
return 0;
}


@ -66,10 +66,10 @@ static void kona_timer_disable_and_clear(void __iomem *base)
}
static void
static int
kona_timer_get_counter(void __iomem *timer_base, uint32_t *msw, uint32_t *lsw)
{
int loop_limit = 4;
int loop_limit = 3;
/*
* Read 64-bit free running counter
@ -83,18 +83,19 @@ kona_timer_get_counter(void __iomem *timer_base, uint32_t *msw, uint32_t *lsw)
* if new hi-word is equal to previously read hi-word then stop.
*/
while (--loop_limit) {
do {
*msw = readl(timer_base + KONA_GPTIMER_STCHI_OFFSET);
*lsw = readl(timer_base + KONA_GPTIMER_STCLO_OFFSET);
if (*msw == readl(timer_base + KONA_GPTIMER_STCHI_OFFSET))
break;
}
} while (--loop_limit);
if (!loop_limit) {
pr_err("bcm_kona_timer: getting counter failed.\n");
pr_err(" Timer will be impacted\n");
return -ETIMEDOUT;
}
return;
return 0;
}
static int kona_timer_set_next_event(unsigned long clc,
@ -112,8 +113,11 @@ static int kona_timer_set_next_event(unsigned long clc,
uint32_t lsw, msw;
uint32_t reg;
int ret;
kona_timer_get_counter(timers.tmr_regs, &msw, &lsw);
ret = kona_timer_get_counter(timers.tmr_regs, &msw, &lsw);
if (ret)
return ret;
/* Load the "next" event tick value */
writel(lsw + clc, timers.tmr_regs + KONA_GPTIMER_STCM0_OFFSET);
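
The retry loop above is the standard way to read a free-running 64-bit
counter through two 32-bit registers: read the high word, then the low word,
then re-read the high word and retry if it changed (a carry occurred in
between). A generic sketch of the technique, with illustrative names:

#include <stdint.h>

/* Read a 64-bit counter exposed as two 32-bit MMIO words. Returns 0 and
 * stores the value on success, -1 if the high word keeps changing. */
static int read_counter64(volatile uint32_t *hi_reg,
			  volatile uint32_t *lo_reg, uint64_t *val)
{
	int loop_limit = 3;

	do {
		uint32_t hi = *hi_reg;
		uint32_t lo = *lo_reg;

		if (hi == *hi_reg) {	/* no carry between the two reads */
			*val = ((uint64_t)hi << 32) | lo;
			return 0;
		}
	} while (--loop_limit);

	return -1;
}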


@ -164,7 +164,7 @@ void __init gic_clocksource_init(unsigned int frequency)
gic_start_count();
}
static void __init gic_clocksource_of_init(struct device_node *node)
static int __init gic_clocksource_of_init(struct device_node *node)
{
struct clk *clk;
int ret;


@ -338,7 +338,6 @@ static int __init armada_xp_timer_init(struct device_node *np)
struct clk *clk = of_clk_get_by_name(np, "fixed");
int ret;
clk = of_clk_get(np, 0);
if (IS_ERR(clk)) {
pr_err("Failed to get clock");
return PTR_ERR(clk);


@ -251,6 +251,14 @@ config EDAC_SBRIDGE
Support for error detection and correction the Intel
Sandy Bridge, Ivy Bridge and Haswell Integrated Memory Controllers.
config EDAC_SKX
tristate "Intel Skylake server Integrated MC"
depends on EDAC_MM_EDAC && PCI && X86_64 && X86_MCE_INTEL
depends on PCI_MMCONFIG
help
Support for error detection and correction on the Intel
Skylake server Integrated Memory Controllers.
config EDAC_MPC85XX
tristate "Freescale MPC83xx / MPC85xx"
depends on EDAC_MM_EDAC && FSL_SOC


@ -31,6 +31,7 @@ obj-$(CONFIG_EDAC_I5400) += i5400_edac.o
obj-$(CONFIG_EDAC_I7300) += i7300_edac.o
obj-$(CONFIG_EDAC_I7CORE) += i7core_edac.o
obj-$(CONFIG_EDAC_SBRIDGE) += sb_edac.o
obj-$(CONFIG_EDAC_SKX) += skx_edac.o
obj-$(CONFIG_EDAC_E7XXX) += e7xxx_edac.o
obj-$(CONFIG_EDAC_E752X) += e752x_edac.o
obj-$(CONFIG_EDAC_I82443BXGX) += i82443bxgx_edac.o

drivers/edac/skx_edac.c: new file, 1121 lines (diff suppressed because it is too large).


@ -646,9 +646,9 @@ int amdgpu_gart_table_vram_pin(struct amdgpu_device *adev);
void amdgpu_gart_table_vram_unpin(struct amdgpu_device *adev);
int amdgpu_gart_init(struct amdgpu_device *adev);
void amdgpu_gart_fini(struct amdgpu_device *adev);
void amdgpu_gart_unbind(struct amdgpu_device *adev, unsigned offset,
void amdgpu_gart_unbind(struct amdgpu_device *adev, uint64_t offset,
int pages);
int amdgpu_gart_bind(struct amdgpu_device *adev, unsigned offset,
int amdgpu_gart_bind(struct amdgpu_device *adev, uint64_t offset,
int pages, struct page **pagelist,
dma_addr_t *dma_addr, uint32_t flags);


@ -200,16 +200,7 @@ static int amdgpu_atpx_validate(struct amdgpu_atpx *atpx)
atpx->is_hybrid = false;
if (valid_bits & ATPX_MS_HYBRID_GFX_SUPPORTED) {
printk("ATPX Hybrid Graphics\n");
#if 1
/* This is a temporary hack until the D3 cold support
* makes it upstream. The ATPX power_control method seems
* to still work on even if the system should be using
* the new standardized hybrid D3 cold ACPI interface.
*/
atpx->functions.power_cntl = true;
#else
atpx->functions.power_cntl = false;
#endif
atpx->is_hybrid = true;
}


@ -221,7 +221,7 @@ void amdgpu_gart_table_vram_free(struct amdgpu_device *adev)
* Unbinds the requested pages from the gart page table and
* replaces them with the dummy page (all asics).
*/
void amdgpu_gart_unbind(struct amdgpu_device *adev, unsigned offset,
void amdgpu_gart_unbind(struct amdgpu_device *adev, uint64_t offset,
int pages)
{
unsigned t;
@ -268,7 +268,7 @@ void amdgpu_gart_unbind(struct amdgpu_device *adev, unsigned offset,
* (all asics).
* Returns 0 for success, -EINVAL for failure.
*/
int amdgpu_gart_bind(struct amdgpu_device *adev, unsigned offset,
int amdgpu_gart_bind(struct amdgpu_device *adev, uint64_t offset,
int pages, struct page **pagelist, dma_addr_t *dma_addr,
uint32_t flags)
{


@ -1187,7 +1187,8 @@ int amdgpu_uvd_ring_test_ib(struct amdgpu_ring *ring, long timeout)
r = 0;
}
error:
fence_put(fence);
error:
return r;
}


@ -1535,7 +1535,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm)
r = amd_sched_entity_init(&ring->sched, &vm->entity,
rq, amdgpu_sched_jobs);
if (r)
return r;
goto err;
vm->page_directory_fence = NULL;
@ -1565,6 +1565,9 @@ error_free_page_directory:
error_free_sched_entity:
amd_sched_entity_fini(&ring->sched, &vm->entity);
err:
drm_free_large(vm->page_tables);
return r;
}


@ -184,7 +184,7 @@ u32 __iomem *kfd_get_kernel_doorbell(struct kfd_dev *kfd,
sizeof(u32)) + inx;
pr_debug("kfd: get kernel queue doorbell\n"
" doorbell offset == 0x%08d\n"
" doorbell offset == 0x%08X\n"
" kernel address == 0x%08lX\n",
*doorbell_off, (uintptr_t)(kfd->doorbell_kernel_ptr + inx));


@ -464,7 +464,7 @@ static bool drm_fb_helper_is_bound(struct drm_fb_helper *fb_helper)
/* Sometimes user space wants everything disabled, so don't steal the
* display if there's a master. */
if (lockless_dereference(dev->master))
if (READ_ONCE(dev->master))
return false;
drm_for_each_crtc(crtc, dev) {


@ -1333,8 +1333,6 @@ int etnaviv_gpu_submit(struct etnaviv_gpu *gpu,
if (ret < 0)
return ret;
mutex_lock(&gpu->lock);
/*
* TODO
*
@ -1348,16 +1346,18 @@ int etnaviv_gpu_submit(struct etnaviv_gpu *gpu,
if (unlikely(event == ~0U)) {
DRM_ERROR("no free event\n");
ret = -EBUSY;
goto out_unlock;
goto out_pm_put;
}
fence = etnaviv_gpu_fence_alloc(gpu);
if (!fence) {
event_free(gpu, event);
ret = -ENOMEM;
goto out_unlock;
goto out_pm_put;
}
mutex_lock(&gpu->lock);
gpu->event[event].fence = fence;
submit->fence = fence->seqno;
gpu->active_fence = submit->fence;
@ -1395,9 +1395,9 @@ int etnaviv_gpu_submit(struct etnaviv_gpu *gpu,
hangcheck_timer_reset(gpu);
ret = 0;
out_unlock:
mutex_unlock(&gpu->lock);
out_pm_put:
etnaviv_gpu_pm_put(gpu);
return ret;


@ -1854,6 +1854,7 @@ struct drm_i915_private {
enum modeset_restore modeset_restore;
struct mutex modeset_restore_lock;
struct drm_atomic_state *modeset_restore_state;
struct drm_modeset_acquire_ctx reset_ctx;
struct list_head vm_list; /* Global list of all address spaces */
struct i915_ggtt ggtt; /* VM representing the global address space */


@ -879,9 +879,12 @@ i915_gem_pread_ioctl(struct drm_device *dev, void *data,
ret = i915_gem_shmem_pread(dev, obj, args, file);
/* pread for non shmem backed objects */
if (ret == -EFAULT || ret == -ENODEV)
if (ret == -EFAULT || ret == -ENODEV) {
intel_runtime_pm_get(to_i915(dev));
ret = i915_gem_gtt_pread(dev, obj, args->size,
args->offset, args->data_ptr);
intel_runtime_pm_put(to_i915(dev));
}
out:
drm_gem_object_unreference(&obj->base);
@ -1306,7 +1309,7 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
* textures). Fallback to the shmem path in that case. */
}
if (ret == -EFAULT) {
if (ret == -EFAULT || ret == -ENOSPC) {
if (obj->phys_handle)
ret = i915_gem_phys_pwrite(obj, args, file);
else if (i915_gem_object_has_struct_page(obj))
@ -3169,6 +3172,8 @@ static void i915_gem_reset_engine_cleanup(struct intel_engine_cs *engine)
}
intel_ring_init_seqno(engine, engine->last_submitted_seqno);
engine->i915->gt.active_engines &= ~intel_engine_flag(engine);
}
void i915_gem_reset(struct drm_device *dev)
@ -3186,6 +3191,7 @@ void i915_gem_reset(struct drm_device *dev)
for_each_engine(engine, dev_priv)
i915_gem_reset_engine_cleanup(engine);
mod_delayed_work(dev_priv->wq, &dev_priv->gt.idle_work, 0);
i915_gem_context_reset(dev);


@ -2873,6 +2873,7 @@ void i915_ggtt_cleanup_hw(struct drm_device *dev)
struct i915_hw_ppgtt *ppgtt = dev_priv->mm.aliasing_ppgtt;
ppgtt->base.cleanup(&ppgtt->base);
kfree(ppgtt);
}
i915_gem_cleanup_stolen(dev);


@ -1536,6 +1536,7 @@ enum skl_disp_power_wells {
#define BALANCE_LEG_MASK(port) (7<<(8+3*(port)))
/* Balance leg disable bits */
#define BALANCE_LEG_DISABLE_SHIFT 23
#define BALANCE_LEG_DISABLE(port) (1 << (23 + (port)))
/*
* Fence registers


@ -600,6 +600,8 @@ static void i915_audio_component_codec_wake_override(struct device *dev,
if (!IS_SKYLAKE(dev_priv) && !IS_KABYLAKE(dev_priv))
return;
i915_audio_component_get_power(dev);
/*
* Enable/disable generating the codec wake signal, overriding the
* internal logic to generate the codec wake to controller.
@ -615,6 +617,8 @@ static void i915_audio_component_codec_wake_override(struct device *dev,
I915_WRITE(HSW_AUD_CHICKENBIT, tmp);
usleep_range(1000, 1500);
}
i915_audio_component_put_power(dev);
}
/* Get CDCLK in kHz */
@ -648,6 +652,7 @@ static int i915_audio_component_sync_audio_rate(struct device *dev,
!IS_HASWELL(dev_priv))
return 0;
i915_audio_component_get_power(dev);
mutex_lock(&dev_priv->av_mutex);
/* 1. get the pipe */
intel_encoder = dev_priv->dig_port_map[port];
@ -698,6 +703,7 @@ static int i915_audio_component_sync_audio_rate(struct device *dev,
unlock:
mutex_unlock(&dev_priv->av_mutex);
i915_audio_component_put_power(dev);
return err;
}


@ -145,7 +145,7 @@ static const struct ddi_buf_trans skl_ddi_translations_dp[] = {
static const struct ddi_buf_trans skl_u_ddi_translations_dp[] = {
{ 0x0000201B, 0x000000A2, 0x0 },
{ 0x00005012, 0x00000088, 0x0 },
{ 0x80007011, 0x000000CD, 0x0 },
{ 0x80007011, 0x000000CD, 0x1 },
{ 0x80009010, 0x000000C0, 0x1 },
{ 0x0000201B, 0x0000009D, 0x0 },
{ 0x80005012, 0x000000C0, 0x1 },
@ -158,7 +158,7 @@ static const struct ddi_buf_trans skl_u_ddi_translations_dp[] = {
static const struct ddi_buf_trans skl_y_ddi_translations_dp[] = {
{ 0x00000018, 0x000000A2, 0x0 },
{ 0x00005012, 0x00000088, 0x0 },
{ 0x80007011, 0x000000CD, 0x0 },
{ 0x80007011, 0x000000CD, 0x3 },
{ 0x80009010, 0x000000C0, 0x3 },
{ 0x00000018, 0x0000009D, 0x0 },
{ 0x80005012, 0x000000C0, 0x3 },
@ -388,6 +388,40 @@ skl_get_buf_trans_hdmi(struct drm_i915_private *dev_priv, int *n_entries)
}
}
static int intel_ddi_hdmi_level(struct drm_i915_private *dev_priv, enum port port)
{
int n_hdmi_entries;
int hdmi_level;
int hdmi_default_entry;
hdmi_level = dev_priv->vbt.ddi_port_info[port].hdmi_level_shift;
if (IS_BROXTON(dev_priv))
return hdmi_level;
if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) {
skl_get_buf_trans_hdmi(dev_priv, &n_hdmi_entries);
hdmi_default_entry = 8;
} else if (IS_BROADWELL(dev_priv)) {
n_hdmi_entries = ARRAY_SIZE(bdw_ddi_translations_hdmi);
hdmi_default_entry = 7;
} else if (IS_HASWELL(dev_priv)) {
n_hdmi_entries = ARRAY_SIZE(hsw_ddi_translations_hdmi);
hdmi_default_entry = 6;
} else {
WARN(1, "ddi translation table missing\n");
n_hdmi_entries = ARRAY_SIZE(bdw_ddi_translations_hdmi);
hdmi_default_entry = 7;
}
/* Choose a good default if VBT is badly populated */
if (hdmi_level == HDMI_LEVEL_SHIFT_UNKNOWN ||
hdmi_level >= n_hdmi_entries)
hdmi_level = hdmi_default_entry;
return hdmi_level;
}
/*
* Starting with Haswell, DDI port buffers must be programmed with correct
* values in advance. The buffer values are different for FDI and DP modes,
@ -399,7 +433,7 @@ void intel_prepare_ddi_buffer(struct intel_encoder *encoder)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
u32 iboost_bit = 0;
int i, n_hdmi_entries, n_dp_entries, n_edp_entries, hdmi_default_entry,
int i, n_hdmi_entries, n_dp_entries, n_edp_entries,
size;
int hdmi_level;
enum port port;
@ -410,7 +444,7 @@ void intel_prepare_ddi_buffer(struct intel_encoder *encoder)
const struct ddi_buf_trans *ddi_translations;
port = intel_ddi_get_encoder_port(encoder);
hdmi_level = dev_priv->vbt.ddi_port_info[port].hdmi_level_shift;
hdmi_level = intel_ddi_hdmi_level(dev_priv, port);
if (IS_BROXTON(dev_priv)) {
if (encoder->type != INTEL_OUTPUT_HDMI)
@ -430,7 +464,6 @@ void intel_prepare_ddi_buffer(struct intel_encoder *encoder)
skl_get_buf_trans_edp(dev_priv, &n_edp_entries);
ddi_translations_hdmi =
skl_get_buf_trans_hdmi(dev_priv, &n_hdmi_entries);
hdmi_default_entry = 8;
/* If we're boosting the current, set bit 31 of trans1 */
if (dev_priv->vbt.ddi_port_info[port].hdmi_boost_level ||
dev_priv->vbt.ddi_port_info[port].dp_boost_level)
@ -456,7 +489,6 @@ void intel_prepare_ddi_buffer(struct intel_encoder *encoder)
n_dp_entries = ARRAY_SIZE(bdw_ddi_translations_dp);
n_hdmi_entries = ARRAY_SIZE(bdw_ddi_translations_hdmi);
hdmi_default_entry = 7;
} else if (IS_HASWELL(dev_priv)) {
ddi_translations_fdi = hsw_ddi_translations_fdi;
ddi_translations_dp = hsw_ddi_translations_dp;
@ -464,7 +496,6 @@ void intel_prepare_ddi_buffer(struct intel_encoder *encoder)
ddi_translations_hdmi = hsw_ddi_translations_hdmi;
n_dp_entries = n_edp_entries = ARRAY_SIZE(hsw_ddi_translations_dp);
n_hdmi_entries = ARRAY_SIZE(hsw_ddi_translations_hdmi);
hdmi_default_entry = 6;
} else {
WARN(1, "ddi translation table missing\n");
ddi_translations_edp = bdw_ddi_translations_dp;
@ -474,7 +505,6 @@ void intel_prepare_ddi_buffer(struct intel_encoder *encoder)
n_edp_entries = ARRAY_SIZE(bdw_ddi_translations_edp);
n_dp_entries = ARRAY_SIZE(bdw_ddi_translations_dp);
n_hdmi_entries = ARRAY_SIZE(bdw_ddi_translations_hdmi);
hdmi_default_entry = 7;
}
switch (encoder->type) {
@ -505,11 +535,6 @@ void intel_prepare_ddi_buffer(struct intel_encoder *encoder)
if (encoder->type != INTEL_OUTPUT_HDMI)
return;
/* Choose a good default if VBT is badly populated */
if (hdmi_level == HDMI_LEVEL_SHIFT_UNKNOWN ||
hdmi_level >= n_hdmi_entries)
hdmi_level = hdmi_default_entry;
/* Entry 9 is for HDMI: */
I915_WRITE(DDI_BUF_TRANS_LO(port, i),
ddi_translations_hdmi[hdmi_level].trans1 | iboost_bit);
@ -1379,14 +1404,30 @@ void intel_ddi_disable_pipe_clock(struct intel_crtc *intel_crtc)
TRANS_CLK_SEL_DISABLED);
}
static void skl_ddi_set_iboost(struct drm_i915_private *dev_priv,
u32 level, enum port port, int type)
static void _skl_ddi_set_iboost(struct drm_i915_private *dev_priv,
enum port port, uint8_t iboost)
{
u32 tmp;
tmp = I915_READ(DISPIO_CR_TX_BMU_CR0);
tmp &= ~(BALANCE_LEG_MASK(port) | BALANCE_LEG_DISABLE(port));
if (iboost)
tmp |= iboost << BALANCE_LEG_SHIFT(port);
else
tmp |= BALANCE_LEG_DISABLE(port);
I915_WRITE(DISPIO_CR_TX_BMU_CR0, tmp);
}
static void skl_ddi_set_iboost(struct intel_encoder *encoder, u32 level)
{
struct intel_digital_port *intel_dig_port = enc_to_dig_port(&encoder->base);
struct drm_i915_private *dev_priv = to_i915(intel_dig_port->base.base.dev);
enum port port = intel_dig_port->port;
int type = encoder->type;
const struct ddi_buf_trans *ddi_translations;
uint8_t iboost;
uint8_t dp_iboost, hdmi_iboost;
int n_entries;
u32 reg;
/* VBT may override standard boost values */
dp_iboost = dev_priv->vbt.ddi_port_info[port].dp_boost_level;
@ -1428,16 +1469,10 @@ static void skl_ddi_set_iboost(struct drm_i915_private *dev_priv,
return;
}
reg = I915_READ(DISPIO_CR_TX_BMU_CR0);
reg &= ~BALANCE_LEG_MASK(port);
reg &= ~(1 << (BALANCE_LEG_DISABLE_SHIFT + port));
_skl_ddi_set_iboost(dev_priv, port, iboost);
if (iboost)
reg |= iboost << BALANCE_LEG_SHIFT(port);
else
reg |= 1 << (BALANCE_LEG_DISABLE_SHIFT + port);
I915_WRITE(DISPIO_CR_TX_BMU_CR0, reg);
if (port == PORT_A && intel_dig_port->max_lanes == 4)
_skl_ddi_set_iboost(dev_priv, PORT_E, iboost);
}
static void bxt_ddi_vswing_sequence(struct drm_i915_private *dev_priv,
@ -1568,7 +1603,7 @@ uint32_t ddi_signal_levels(struct intel_dp *intel_dp)
level = translate_signal_level(signal_levels);
if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv))
skl_ddi_set_iboost(dev_priv, level, port, encoder->type);
skl_ddi_set_iboost(encoder, level);
else if (IS_BROXTON(dev_priv))
bxt_ddi_vswing_sequence(dev_priv, level, port, encoder->type);
@ -1637,6 +1672,10 @@ static void intel_ddi_pre_enable(struct intel_encoder *intel_encoder)
intel_dp_stop_link_train(intel_dp);
} else if (type == INTEL_OUTPUT_HDMI) {
struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(encoder);
int level = intel_ddi_hdmi_level(dev_priv, port);
if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv))
skl_ddi_set_iboost(intel_encoder, level);
intel_hdmi->set_infoframes(encoder,
crtc->config->has_hdmi_sink,


@ -3093,40 +3093,110 @@ static void intel_update_primary_planes(struct drm_device *dev)
for_each_crtc(dev, crtc) {
struct intel_plane *plane = to_intel_plane(crtc->primary);
struct intel_plane_state *plane_state;
drm_modeset_lock_crtc(crtc, &plane->base);
plane_state = to_intel_plane_state(plane->base.state);
struct intel_plane_state *plane_state =
to_intel_plane_state(plane->base.state);
if (plane_state->visible)
plane->update_plane(&plane->base,
to_intel_crtc_state(crtc->state),
plane_state);
drm_modeset_unlock_crtc(crtc);
}
}
static int
__intel_display_resume(struct drm_device *dev,
struct drm_atomic_state *state)
{
struct drm_crtc_state *crtc_state;
struct drm_crtc *crtc;
int i, ret;
intel_modeset_setup_hw_state(dev);
i915_redisable_vga(dev);
if (!state)
return 0;
for_each_crtc_in_state(state, crtc, crtc_state, i) {
/*
* Force recalculation even if we restore
* current state. With fast modeset this may not result
* in a modeset when the state is compatible.
*/
crtc_state->mode_changed = true;
}
/* ignore any reset values/BIOS leftovers in the WM registers */
to_intel_atomic_state(state)->skip_intermediate_wm = true;
ret = drm_atomic_commit(state);
WARN_ON(ret == -EDEADLK);
return ret;
}
void intel_prepare_reset(struct drm_i915_private *dev_priv)
{
struct drm_device *dev = &dev_priv->drm;
struct drm_modeset_acquire_ctx *ctx = &dev_priv->reset_ctx;
struct drm_atomic_state *state;
int ret;
/* no reset support for gen2 */
if (IS_GEN2(dev_priv))
return;
/* reset doesn't touch the display */
/*
* Need mode_config.mutex so that we don't
* trample ongoing ->detect() and whatnot.
*/
mutex_lock(&dev->mode_config.mutex);
drm_modeset_acquire_init(ctx, 0);
while (1) {
ret = drm_modeset_lock_all_ctx(dev, ctx);
if (ret != -EDEADLK)
break;
drm_modeset_backoff(ctx);
}
/* reset doesn't touch the display, but flips might get nuked anyway, */
if (INTEL_GEN(dev_priv) >= 5 || IS_G4X(dev_priv))
return;
drm_modeset_lock_all(&dev_priv->drm);
/*
* Disabling the crtcs gracefully seems nicer. Also the
* g33 docs say we should at least disable all the planes.
*/
intel_display_suspend(&dev_priv->drm);
state = drm_atomic_helper_duplicate_state(dev, ctx);
if (IS_ERR(state)) {
ret = PTR_ERR(state);
state = NULL;
DRM_ERROR("Duplicating state failed with %i\n", ret);
goto err;
}
ret = drm_atomic_helper_disable_all(dev, ctx);
if (ret) {
DRM_ERROR("Suspending crtc's failed with %i\n", ret);
goto err;
}
dev_priv->modeset_restore_state = state;
state->acquire_ctx = ctx;
return;
err:
drm_atomic_state_free(state);
}
void intel_finish_reset(struct drm_i915_private *dev_priv)
{
struct drm_device *dev = &dev_priv->drm;
struct drm_modeset_acquire_ctx *ctx = &dev_priv->reset_ctx;
struct drm_atomic_state *state = dev_priv->modeset_restore_state;
int ret;
/*
* Flips in the rings will be nuked by the reset,
* so complete all pending flips so that user space
@@ -3138,6 +3208,8 @@ void intel_finish_reset(struct drm_i915_private *dev_priv)
if (IS_GEN2(dev_priv))
return;
dev_priv->modeset_restore_state = NULL;
/* reset doesn't touch the display */
if (INTEL_GEN(dev_priv) >= 5 || IS_G4X(dev_priv)) {
/*
@@ -3149,29 +3221,32 @@ void intel_finish_reset(struct drm_i915_private *dev_priv)
* FIXME: Atomic will make this obsolete since we won't schedule
* CS-based flips (which might get lost in gpu resets) any more.
*/
intel_update_primary_planes(&dev_priv->drm);
return;
intel_update_primary_planes(dev);
} else {
/*
* The display has been reset as well,
* so need a full re-initialization.
*/
intel_runtime_pm_disable_interrupts(dev_priv);
intel_runtime_pm_enable_interrupts(dev_priv);
intel_modeset_init_hw(dev);
spin_lock_irq(&dev_priv->irq_lock);
if (dev_priv->display.hpd_irq_setup)
dev_priv->display.hpd_irq_setup(dev_priv);
spin_unlock_irq(&dev_priv->irq_lock);
ret = __intel_display_resume(dev, state);
if (ret)
DRM_ERROR("Restoring old state failed with %i\n", ret);
intel_hpd_init(dev_priv);
}
/*
* The display has been reset as well,
* so need a full re-initialization.
*/
intel_runtime_pm_disable_interrupts(dev_priv);
intel_runtime_pm_enable_interrupts(dev_priv);
intel_modeset_init_hw(&dev_priv->drm);
spin_lock_irq(&dev_priv->irq_lock);
if (dev_priv->display.hpd_irq_setup)
dev_priv->display.hpd_irq_setup(dev_priv);
spin_unlock_irq(&dev_priv->irq_lock);
intel_display_resume(&dev_priv->drm);
intel_hpd_init(dev_priv);
drm_modeset_unlock_all(&dev_priv->drm);
drm_modeset_drop_locks(ctx);
drm_modeset_acquire_fini(ctx);
mutex_unlock(&dev->mode_config.mutex);
}
static bool intel_crtc_has_pending_flip(struct drm_crtc *crtc)
@@ -16156,9 +16231,10 @@ void intel_display_resume(struct drm_device *dev)
struct drm_atomic_state *state = dev_priv->modeset_restore_state;
struct drm_modeset_acquire_ctx ctx;
int ret;
bool setup = false;
dev_priv->modeset_restore_state = NULL;
if (state)
state->acquire_ctx = &ctx;
/*
* This is a kludge because with real atomic modeset mode_config.mutex
@@ -16169,43 +16245,17 @@ void intel_display_resume(struct drm_device *dev)
mutex_lock(&dev->mode_config.mutex);
drm_modeset_acquire_init(&ctx, 0);
retry:
ret = drm_modeset_lock_all_ctx(dev, &ctx);
while (1) {
ret = drm_modeset_lock_all_ctx(dev, &ctx);
if (ret != -EDEADLK)
break;
if (ret == 0 && !setup) {
setup = true;
intel_modeset_setup_hw_state(dev);
i915_redisable_vga(dev);
}
if (ret == 0 && state) {
struct drm_crtc_state *crtc_state;
struct drm_crtc *crtc;
int i;
state->acquire_ctx = &ctx;
/* ignore any reset values/BIOS leftovers in the WM registers */
to_intel_atomic_state(state)->skip_intermediate_wm = true;
for_each_crtc_in_state(state, crtc, crtc_state, i) {
/*
* Force recalculation even if we restore
* current state. With fast modeset this may not result
* in a modeset when the state is compatible.
*/
crtc_state->mode_changed = true;
}
ret = drm_atomic_commit(state);
}
if (ret == -EDEADLK) {
drm_modeset_backoff(&ctx);
goto retry;
}
if (!ret)
ret = __intel_display_resume(dev, state);
drm_modeset_drop_locks(&ctx);
drm_modeset_acquire_fini(&ctx);
mutex_unlock(&dev->mode_config.mutex);
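
The reset and resume paths above now share two building blocks: a
context-based lock-everything loop that backs off on deadlock, and
__intel_display_resume() to replay the duplicated atomic state. The locking
idiom, reduced to a sketch of the DRM calls used in the hunks:

    struct drm_modeset_acquire_ctx ctx;
    int ret;

    drm_modeset_acquire_init(&ctx, 0);
    while (1) {
        ret = drm_modeset_lock_all_ctx(dev, &ctx);
        if (ret != -EDEADLK)
            break;
        drm_modeset_backoff(&ctx);   /* drop held locks, wait, retry */
    }

    /* ... duplicate, disable or commit state under the locks ... */

    drm_modeset_drop_locks(&ctx);
    drm_modeset_acquire_fini(&ctx);

Hence the WARN_ON(ret == -EDEADLK) in __intel_display_resume(): once every
lock is held through the context, the commit is not expected to hit the
deadlock path again.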


@@ -1230,12 +1230,29 @@ static int intel_sanitize_fbc_option(struct drm_i915_private *dev_priv)
if (i915.enable_fbc >= 0)
return !!i915.enable_fbc;
if (!HAS_FBC(dev_priv))
return 0;
if (IS_BROADWELL(dev_priv))
return 1;
return 0;
}
static bool need_fbc_vtd_wa(struct drm_i915_private *dev_priv)
{
#ifdef CONFIG_INTEL_IOMMU
/* WaFbcTurnOffFbcWhenHyperVisorIsUsed:skl,bxt */
if (intel_iommu_gfx_mapped &&
(IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv))) {
DRM_INFO("Disabling framebuffer compression (FBC) to prevent screen flicker with VT-d enabled\n");
return true;
}
#endif
return false;
}
/**
* intel_fbc_init - Initialize FBC
* @dev_priv: the i915 device
@@ -1253,6 +1270,9 @@ void intel_fbc_init(struct drm_i915_private *dev_priv)
fbc->active = false;
fbc->work.scheduled = false;
if (need_fbc_vtd_wa(dev_priv))
mkwrite_device_info(dev_priv)->has_fbc = false;
i915.enable_fbc = intel_sanitize_fbc_option(dev_priv);
DRM_DEBUG_KMS("Sanitized enable_fbc value: %d\n", i915.enable_fbc);


@@ -3344,6 +3344,8 @@ static uint32_t skl_wm_method2(uint32_t pixel_rate, uint32_t pipe_htotal,
plane_bytes_per_line *= 4;
plane_blocks_per_line = DIV_ROUND_UP(plane_bytes_per_line, 512);
plane_blocks_per_line /= 4;
} else if (tiling == DRM_FORMAT_MOD_NONE) {
plane_blocks_per_line = DIV_ROUND_UP(plane_bytes_per_line, 512) + 1;
} else {
plane_blocks_per_line = DIV_ROUND_UP(plane_bytes_per_line, 512);
}
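
For reference, the watermark change above charges linear
(DRM_FORMAT_MOD_NONE) surfaces one extra 512-byte block per line. A toy
calculation with the same DIV_ROUND_UP arithmetic, using an assumed
4-bytes-per-pixel, 3840-pixel-wide plane:

    #include <stdio.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
        unsigned int plane_bytes_per_line = 3840 * 4;
        unsigned int tiled  = DIV_ROUND_UP(plane_bytes_per_line, 512);
        unsigned int linear = DIV_ROUND_UP(plane_bytes_per_line, 512) + 1;

        printf("tiled=%u blocks, linear=%u blocks\n", tiled, linear);
        return 0;
    }
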
@@ -6574,9 +6576,7 @@ void intel_init_gt_powersave(struct drm_i915_private *dev_priv)
void intel_cleanup_gt_powersave(struct drm_i915_private *dev_priv)
{
if (IS_CHERRYVIEW(dev_priv))
return;
else if (IS_VALLEYVIEW(dev_priv))
if (IS_VALLEYVIEW(dev_priv))
valleyview_cleanup_gt_powersave(dev_priv);
if (!i915.enable_rc6)


@@ -1178,8 +1178,8 @@ static int bxt_init_workarounds(struct intel_engine_cs *engine)
I915_WRITE(GEN8_L3SQCREG1, L3_GENERAL_PRIO_CREDITS(62) |
L3_HIGH_PRIO_CREDITS(2));
/* WaInsertDummyPushConstPs:bxt */
if (IS_BXT_REVID(dev_priv, 0, BXT_REVID_B0))
/* WaToEnableHwFixForPushConstHWBug:bxt */
if (IS_BXT_REVID(dev_priv, BXT_REVID_C0, REVID_FOREVER))
WA_SET_BIT_MASKED(COMMON_SLICE_CHICKEN2,
GEN8_SBE_DISABLE_REPLAY_BUF_OPTIMIZATION);
@@ -1222,8 +1222,8 @@ static int kbl_init_workarounds(struct intel_engine_cs *engine)
I915_WRITE(GEN8_L3SQCREG4, I915_READ(GEN8_L3SQCREG4) |
GEN8_LQSC_RO_PERF_DIS);
/* WaInsertDummyPushConstPs:kbl */
if (IS_KBL_REVID(dev_priv, 0, KBL_REVID_B0))
/* WaToEnableHwFixForPushConstHWBug:kbl */
if (IS_KBL_REVID(dev_priv, KBL_REVID_C0, REVID_FOREVER))
WA_SET_BIT_MASKED(COMMON_SLICE_CHICKEN2,
GEN8_SBE_DISABLE_REPLAY_BUF_OPTIMIZATION);


@@ -2,6 +2,9 @@ config DRM_MEDIATEK
tristate "DRM Support for Mediatek SoCs"
depends on DRM
depends on ARCH_MEDIATEK || (ARM && COMPILE_TEST)
depends on COMMON_CLK
depends on HAVE_ARM_SMCCC
depends on OF
select DRM_GEM_CMA_HELPER
select DRM_KMS_HELPER
select DRM_MIPI_DSI


@@ -198,16 +198,7 @@ static int radeon_atpx_validate(struct radeon_atpx *atpx)
atpx->is_hybrid = false;
if (valid_bits & ATPX_MS_HYBRID_GFX_SUPPORTED) {
printk("ATPX Hybrid Graphics\n");
#if 1
/* This is a temporary hack until the D3 cold support
* makes it upstream. The ATPX power_control method seems
* to still work even if the system should be using
* the new standardized hybrid D3 cold ACPI interface.
*/
atpx->functions.power_cntl = true;
#else
atpx->functions.power_cntl = false;
#endif
atpx->is_hybrid = true;
}


@@ -491,7 +491,7 @@ struct it87_sio_data {
struct it87_data {
const struct attribute_group *groups[7];
enum chips type;
u16 features;
u32 features;
u8 peci_mask;
u8 old_peci_mask;


@@ -38,6 +38,7 @@
#define AT91_I2C_TIMEOUT msecs_to_jiffies(100) /* transfer timeout */
#define AT91_I2C_DMA_THRESHOLD 8 /* enable DMA if transfer size is bigger than this threshold */
#define AUTOSUSPEND_TIMEOUT 2000
#define AT91_I2C_MAX_ALT_CMD_DATA_SIZE 256
/* AT91 TWI register definitions */
#define AT91_TWI_CR 0x0000 /* Control Register */
@@ -141,6 +142,7 @@ struct at91_twi_dev {
unsigned twi_cwgr_reg;
struct at91_twi_pdata *pdata;
bool use_dma;
bool use_alt_cmd;
bool recv_len_abort;
u32 fifo_size;
struct at91_twi_dma dma;
@@ -269,7 +271,7 @@ static void at91_twi_write_next_byte(struct at91_twi_dev *dev)
/* send stop when last byte has been written */
if (--dev->buf_len == 0)
if (!dev->pdata->has_alt_cmd)
if (!dev->use_alt_cmd)
at91_twi_write(dev, AT91_TWI_CR, AT91_TWI_STOP);
dev_dbg(dev->dev, "wrote 0x%x, to go %d\n", *dev->buf, dev->buf_len);
@@ -292,7 +294,7 @@ static void at91_twi_write_data_dma_callback(void *data)
* we just have to enable TXCOMP one.
*/
at91_twi_write(dev, AT91_TWI_IER, AT91_TWI_TXCOMP);
if (!dev->pdata->has_alt_cmd)
if (!dev->use_alt_cmd)
at91_twi_write(dev, AT91_TWI_CR, AT91_TWI_STOP);
}
@@ -410,7 +412,7 @@ static void at91_twi_read_next_byte(struct at91_twi_dev *dev)
}
/* send stop if second but last byte has been read */
if (!dev->pdata->has_alt_cmd && dev->buf_len == 1)
if (!dev->use_alt_cmd && dev->buf_len == 1)
at91_twi_write(dev, AT91_TWI_CR, AT91_TWI_STOP);
dev_dbg(dev->dev, "read 0x%x, to go %d\n", *dev->buf, dev->buf_len);
@@ -426,7 +428,7 @@ static void at91_twi_read_data_dma_callback(void *data)
dma_unmap_single(dev->dev, sg_dma_address(&dev->dma.sg[0]),
dev->buf_len, DMA_FROM_DEVICE);
if (!dev->pdata->has_alt_cmd) {
if (!dev->use_alt_cmd) {
/* The last two bytes have to be read without using dma */
dev->buf += dev->buf_len - 2;
dev->buf_len = 2;
@@ -443,7 +445,7 @@ static void at91_twi_read_data_dma(struct at91_twi_dev *dev)
struct dma_chan *chan_rx = dma->chan_rx;
size_t buf_len;
buf_len = (dev->pdata->has_alt_cmd) ? dev->buf_len : dev->buf_len - 2;
buf_len = (dev->use_alt_cmd) ? dev->buf_len : dev->buf_len - 2;
dma->direction = DMA_FROM_DEVICE;
/* Keep in mind that we won't use dma to read the last two bytes */
@@ -651,7 +653,7 @@ static int at91_do_twi_transfer(struct at91_twi_dev *dev)
unsigned start_flags = AT91_TWI_START;
/* if only one byte is to be read, immediately stop transfer */
if (!has_alt_cmd && dev->buf_len <= 1 &&
if (!dev->use_alt_cmd && dev->buf_len <= 1 &&
!(dev->msg->flags & I2C_M_RECV_LEN))
start_flags |= AT91_TWI_STOP;
at91_twi_write(dev, AT91_TWI_CR, start_flags);
@@ -745,7 +747,7 @@ static int at91_twi_xfer(struct i2c_adapter *adap, struct i2c_msg *msg, int num)
int ret;
unsigned int_addr_flag = 0;
struct i2c_msg *m_start = msg;
bool is_read, use_alt_cmd = false;
bool is_read;
dev_dbg(&adap->dev, "at91_xfer: processing %d messages:\n", num);
@@ -768,14 +770,16 @@ static int at91_twi_xfer(struct i2c_adapter *adap, struct i2c_msg *msg, int num)
at91_twi_write(dev, AT91_TWI_IADR, internal_address);
}
dev->use_alt_cmd = false;
is_read = (m_start->flags & I2C_M_RD);
if (dev->pdata->has_alt_cmd) {
if (m_start->len > 0) {
if (m_start->len > 0 &&
m_start->len < AT91_I2C_MAX_ALT_CMD_DATA_SIZE) {
at91_twi_write(dev, AT91_TWI_CR, AT91_TWI_ACMEN);
at91_twi_write(dev, AT91_TWI_ACR,
AT91_TWI_ACR_DATAL(m_start->len) |
((is_read) ? AT91_TWI_ACR_DIR : 0));
use_alt_cmd = true;
dev->use_alt_cmd = true;
} else {
at91_twi_write(dev, AT91_TWI_CR, AT91_TWI_ACMDIS);
}
@@ -784,7 +788,7 @@ static int at91_twi_xfer(struct i2c_adapter *adap, struct i2c_msg *msg, int num)
at91_twi_write(dev, AT91_TWI_MMR,
(m_start->addr << 16) |
int_addr_flag |
((!use_alt_cmd && is_read) ? AT91_TWI_MREAD : 0));
((!dev->use_alt_cmd && is_read) ? AT91_TWI_MREAD : 0));
dev->buf_len = m_start->len;
dev->buf = m_start->buf;


@@ -158,7 +158,7 @@ static irqreturn_t bcm_iproc_i2c_isr(int irq, void *data)
if (status & BIT(IS_M_START_BUSY_SHIFT)) {
iproc_i2c->xfer_is_done = 1;
complete_all(&iproc_i2c->done);
complete(&iproc_i2c->done);
}
writel(status, iproc_i2c->base + IS_OFFSET);
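
This ISR fix, repeated in the next several i2c drivers, swaps complete_all()
for complete(): complete_all() leaves the completion signalled until someone
calls reinit_completion(), so a later transfer's wait would return
immediately and race the hardware, while complete() wakes exactly one waiter
exactly once. A sketch of the intended flow (start_transfer() is a
hypothetical stand-in for the driver's submit path):

    DECLARE_COMPLETION_ONSTACK(done);

    start_transfer();                        /* IRQ ends with complete(&done) */
    wait_for_completion_timeout(&done, HZ);

    start_transfer();                        /* second transfer... */
    wait_for_completion_timeout(&done, HZ);  /* ...blocks properly again */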


@@ -229,7 +229,7 @@ static irqreturn_t bcm_kona_i2c_isr(int irq, void *devid)
dev->base + TXFCR_OFFSET);
writel(status & ~ISR_RESERVED_MASK, dev->base + ISR_OFFSET);
complete_all(&dev->done);
complete(&dev->done);
return IRQ_HANDLED;
}


@@ -228,7 +228,7 @@ static irqreturn_t brcmstb_i2c_isr(int irq, void *devid)
return IRQ_NONE;
brcmstb_i2c_enable_disable_irq(dev, INT_DISABLE);
complete_all(&dev->done);
complete(&dev->done);
dev_dbg(dev->device, "isr handled");
return IRQ_HANDLED;


@@ -215,7 +215,7 @@ static int ec_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg i2c_msgs[],
msg->outsize = request_len;
msg->insize = response_len;
result = cros_ec_cmd_xfer(bus->ec, msg);
result = cros_ec_cmd_xfer_status(bus->ec, msg);
if (result < 0) {
dev_err(dev, "Error transferring EC i2c message %d\n", result);
goto exit;


@@ -211,7 +211,7 @@ static void meson_i2c_stop(struct meson_i2c *i2c)
meson_i2c_add_token(i2c, TOKEN_STOP);
} else {
i2c->state = STATE_IDLE;
complete_all(&i2c->done);
complete(&i2c->done);
}
}
@@ -238,7 +238,7 @@ static irqreturn_t meson_i2c_irq(int irqno, void *dev_id)
dev_dbg(i2c->dev, "error bit set\n");
i2c->error = -ENXIO;
i2c->state = STATE_IDLE;
complete_all(&i2c->done);
complete(&i2c->done);
goto out;
}
@@ -269,7 +269,7 @@ static irqreturn_t meson_i2c_irq(int irqno, void *dev_id)
break;
case STATE_STOP:
i2c->state = STATE_IDLE;
complete_all(&i2c->done);
complete(&i2c->done);
break;
case STATE_IDLE:
break;


@@ -379,6 +379,7 @@ static int ocores_i2c_of_probe(struct platform_device *pdev,
if (!clock_frequency_present) {
dev_err(&pdev->dev,
"Missing required parameter 'opencores,ip-clock-frequency'\n");
clk_disable_unprepare(i2c->clk);
return -ENODEV;
}
i2c->ip_clock_khz = clock_frequency / 1000;
@@ -467,20 +468,21 @@ static int ocores_i2c_probe(struct platform_device *pdev)
default:
dev_err(&pdev->dev, "Unsupported I/O width (%d)\n",
i2c->reg_io_width);
return -EINVAL;
ret = -EINVAL;
goto err_clk;
}
}
ret = ocores_init(&pdev->dev, i2c);
if (ret)
return ret;
goto err_clk;
init_waitqueue_head(&i2c->wait);
ret = devm_request_irq(&pdev->dev, irq, ocores_isr, 0,
pdev->name, i2c);
if (ret) {
dev_err(&pdev->dev, "Cannot claim IRQ\n");
return ret;
goto err_clk;
}
/* hook up driver to tree */
@@ -494,7 +496,7 @@ static int ocores_i2c_probe(struct platform_device *pdev)
ret = i2c_add_adapter(&i2c->adap);
if (ret) {
dev_err(&pdev->dev, "Failed to add adapter\n");
return ret;
goto err_clk;
}
/* add in known devices to the bus */
@@ -504,6 +506,10 @@ static int ocores_i2c_probe(struct platform_device *pdev)
}
return 0;
err_clk:
clk_disable_unprepare(i2c->clk);
return ret;
}
static int ocores_i2c_remove(struct platform_device *pdev)
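
The ocores changes above converge on the usual goto-unwind shape: once the
clock is prepared, every later failure leaves through one label that
releases it. A sketch of that shape, with hypothetical step functions
standing in for the real probe work:

    static int probe_sketch(struct ocores_i2c *i2c)
    {
        int ret;

        ret = request_resources(i2c);    /* hypothetical step */
        if (ret)
            goto err_clk;

        ret = register_adapter(i2c);     /* hypothetical step */
        if (ret)
            goto err_clk;

        return 0;

    err_clk:
        clk_disable_unprepare(i2c->clk);
        return ret;
    }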


@@ -68,7 +68,7 @@ static int i2c_demux_activate_master(struct i2c_demux_pinctrl_priv *priv, u32 new_chan)
adap = of_find_i2c_adapter_by_node(priv->chan[new_chan].parent_np);
if (!adap) {
ret = -ENODEV;
goto err;
goto err_with_revert;
}
p = devm_pinctrl_get_select(adap->dev.parent, priv->bus_name);
@@ -103,6 +103,8 @@ static int i2c_demux_activate_master(struct i2c_demux_pinctrl_priv *priv, u32 new_chan)
err_with_put:
i2c_put_adapter(adap);
err_with_revert:
of_changeset_revert(&priv->chan[new_chan].chgset);
err:
dev_err(priv->dev, "failed to setup demux-adapter %d (%d)\n", new_chan, ret);
return ret;


@@ -181,7 +181,7 @@ struct crypt_config {
u8 key[0];
};
#define MIN_IOS 16
#define MIN_IOS 64
static void clone_init(struct dm_crypt_io *, struct bio *);
static void kcryptd_queue_crypt(struct dm_crypt_io *io);


@@ -191,7 +191,6 @@ struct raid_dev {
#define RT_FLAG_RS_BITMAP_LOADED 2
#define RT_FLAG_UPDATE_SBS 3
#define RT_FLAG_RESHAPE_RS 4
#define RT_FLAG_KEEP_RS_FROZEN 5
/* Array elements of 64 bit needed for rebuild/failed disk bits */
#define DISKS_ARRAY_ELEMS ((MAX_RAID_DEVICES + (sizeof(uint64_t) * 8 - 1)) / sizeof(uint64_t) / 8)
@@ -861,6 +860,9 @@ static int validate_region_size(struct raid_set *rs, unsigned long region_size)
{
unsigned long min_region_size = rs->ti->len / (1 << 21);
if (rs_is_raid0(rs))
return 0;
if (!region_size) {
/*
* Choose a reasonable default. All figures in sectors.
@@ -930,6 +932,8 @@ static int validate_raid_redundancy(struct raid_set *rs)
rebuild_cnt++;
switch (rs->raid_type->level) {
case 0:
break;
case 1:
if (rebuild_cnt >= rs->md.raid_disks)
goto too_many;
@@ -2335,6 +2339,13 @@ static int analyse_superblocks(struct dm_target *ti, struct raid_set *rs)
case 0:
break;
default:
/*
* We have to keep any raid0 data/metadata device pairs or
* the MD raid0 personality will fail to start the array.
*/
if (rs_is_raid0(rs))
continue;
dev = container_of(rdev, struct raid_dev, rdev);
if (dev->meta_dev)
dm_put_device(ti, dev->meta_dev);
@@ -2579,7 +2590,6 @@ static int rs_prepare_reshape(struct raid_set *rs)
} else {
/* Process raid1 without delta_disks */
mddev->raid_disks = rs->raid_disks;
set_bit(RT_FLAG_KEEP_RS_FROZEN, &rs->runtime_flags);
reshape = false;
}
} else {
@@ -2590,7 +2600,6 @@ static int rs_prepare_reshape(struct raid_set *rs)
if (reshape) {
set_bit(RT_FLAG_RESHAPE_RS, &rs->runtime_flags);
set_bit(RT_FLAG_UPDATE_SBS, &rs->runtime_flags);
set_bit(RT_FLAG_KEEP_RS_FROZEN, &rs->runtime_flags);
} else if (mddev->raid_disks < rs->raid_disks)
/* Create new superblocks and bitmaps, if any new disks */
set_bit(RT_FLAG_UPDATE_SBS, &rs->runtime_flags);
@@ -2902,7 +2911,6 @@ static int raid_ctr(struct dm_target *ti, unsigned int argc, char **argv)
goto bad;
set_bit(RT_FLAG_UPDATE_SBS, &rs->runtime_flags);
set_bit(RT_FLAG_KEEP_RS_FROZEN, &rs->runtime_flags);
/* Takeover ain't recovery, so disable recovery */
rs_setup_recovery(rs, MaxSector);
rs_set_new(rs);
@@ -3386,21 +3394,28 @@ static void raid_postsuspend(struct dm_target *ti)
{
struct raid_set *rs = ti->private;
if (test_and_clear_bit(RT_FLAG_RS_RESUMED, &rs->runtime_flags)) {
if (!rs->md.suspended)
mddev_suspend(&rs->md);
rs->md.ro = 1;
}
if (!rs->md.suspended)
mddev_suspend(&rs->md);
rs->md.ro = 1;
}
static void attempt_restore_of_faulty_devices(struct raid_set *rs)
{
int i;
uint64_t failed_devices, cleared_failed_devices = 0;
uint64_t cleared_failed_devices[DISKS_ARRAY_ELEMS];
unsigned long flags;
bool cleared = false;
struct dm_raid_superblock *sb;
struct mddev *mddev = &rs->md;
struct md_rdev *r;
/* RAID personalities have to provide hot add/remove methods or we need to bail out. */
if (!mddev->pers || !mddev->pers->hot_add_disk || !mddev->pers->hot_remove_disk)
return;
memset(cleared_failed_devices, 0, sizeof(cleared_failed_devices));
for (i = 0; i < rs->md.raid_disks; i++) {
r = &rs->dev[i].rdev;
if (test_bit(Faulty, &r->flags) && r->sb_page &&
@@ -3420,7 +3435,7 @@ static void attempt_restore_of_faulty_devices(struct raid_set *rs)
* ourselves.
*/
if ((r->raid_disk >= 0) &&
(r->mddev->pers->hot_remove_disk(r->mddev, r) != 0))
(mddev->pers->hot_remove_disk(mddev, r) != 0))
/* Failed to revive this device, try next */
continue;
@@ -3430,22 +3445,30 @@ static void attempt_restore_of_faulty_devices(struct raid_set *rs)
clear_bit(Faulty, &r->flags);
clear_bit(WriteErrorSeen, &r->flags);
clear_bit(In_sync, &r->flags);
if (r->mddev->pers->hot_add_disk(r->mddev, r)) {
if (mddev->pers->hot_add_disk(mddev, r)) {
r->raid_disk = -1;
r->saved_raid_disk = -1;
r->flags = flags;
} else {
r->recovery_offset = 0;
cleared_failed_devices |= 1 << i;
set_bit(i, (void *) cleared_failed_devices);
cleared = true;
}
}
}
if (cleared_failed_devices) {
/* If any failed devices could be cleared, update all sbs failed_devices bits */
if (cleared) {
uint64_t failed_devices[DISKS_ARRAY_ELEMS];
rdev_for_each(r, &rs->md) {
sb = page_address(r->sb_page);
failed_devices = le64_to_cpu(sb->failed_devices);
failed_devices &= ~cleared_failed_devices;
sb->failed_devices = cpu_to_le64(failed_devices);
sb_retrieve_failed_devices(sb, failed_devices);
for (i = 0; i < DISKS_ARRAY_ELEMS; i++)
failed_devices[i] &= ~cleared_failed_devices[i];
sb_update_failed_devices(sb, failed_devices);
}
}
}
@@ -3610,26 +3633,15 @@ static void raid_resume(struct dm_target *ti)
* devices are reachable again.
*/
attempt_restore_of_faulty_devices(rs);
} else {
mddev->ro = 0;
mddev->in_sync = 0;
/*
* When passing in flags to the ctr, we expect userspace
* to reset them because they made it to the superblocks
* and reload the mapping anyway.
*
* -> only unfreeze recovery in case of a table reload or
* we'll have a bogus recovery/reshape position
* retrieved from the superblock by the ctr because
* the ongoing recovery/reshape will change it after read.
*/
if (!test_bit(RT_FLAG_KEEP_RS_FROZEN, &rs->runtime_flags))
clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
if (mddev->suspended)
mddev_resume(mddev);
}
mddev->ro = 0;
mddev->in_sync = 0;
clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
if (mddev->suspended)
mddev_resume(mddev);
}
static struct target_type raid_target = {
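
The attempt_restore_of_faulty_devices() rework above replaces a single u64
mask built with '1 << i' by an array of words: that shift is done in int,
which is undefined once i reaches 31, and one word cannot cover more than
64 devices anyway. A standalone sketch of the array variant (MAX_DEVS is
illustrative):

    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    #define MAX_DEVS      128
    #define BITS_PER_WORD 64
    #define WORDS ((MAX_DEVS + BITS_PER_WORD - 1) / BITS_PER_WORD)

    static void set_dev_bit(uint64_t *map, unsigned int i)
    {
        map[i / BITS_PER_WORD] |= UINT64_C(1) << (i % BITS_PER_WORD);
    }

    int main(void)
    {
        uint64_t cleared[WORDS];

        memset(cleared, 0, sizeof(cleared));
        set_dev_bit(cleared, 100);           /* '1 << 100' would overflow */
        assert(cleared[1] == UINT64_C(1) << 36);
        return 0;
    }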


@@ -210,14 +210,17 @@ static struct dm_path *rr_select_path(struct path_selector *ps, size_t nr_bytes)
struct path_info *pi = NULL;
struct dm_path *current_path = NULL;
local_irq_save(flags);
current_path = *this_cpu_ptr(s->current_path);
if (current_path) {
percpu_counter_dec(&s->repeat_count);
if (percpu_counter_read_positive(&s->repeat_count) > 0)
if (percpu_counter_read_positive(&s->repeat_count) > 0) {
local_irq_restore(flags);
return current_path;
}
}
spin_lock_irqsave(&s->lock, flags);
spin_lock(&s->lock);
if (!list_empty(&s->valid_paths)) {
pi = list_entry(s->valid_paths.next, struct path_info, list);
list_move_tail(&pi->list, &s->valid_paths);


@@ -1631,8 +1631,7 @@ static int __of_parse_phandle_with_args(const struct device_node *np,
*/
err:
if (it.node)
of_node_put(it.node);
of_node_put(it.node);
return rc;
}
@@ -2343,20 +2342,13 @@ struct device_node *of_graph_get_endpoint_by_regs(
const struct device_node *parent, int port_reg, int reg)
{
struct of_endpoint endpoint;
struct device_node *node, *prev_node = NULL;
while (1) {
node = of_graph_get_next_endpoint(parent, prev_node);
of_node_put(prev_node);
if (!node)
break;
struct device_node *node = NULL;
for_each_endpoint_of_node(parent, node) {
of_graph_parse_endpoint(node, &endpoint);
if (((port_reg == -1) || (endpoint.port == port_reg)) &&
((reg == -1) || (endpoint.id == reg)))
return node;
prev_node = node;
}
return NULL;


@@ -517,7 +517,7 @@ static void *__unflatten_device_tree(const void *blob,
pr_warning("End of tree marker overwritten: %08x\n",
be32_to_cpup(mem + size));
if (detached) {
if (detached && mynodes) {
of_node_set_flag(*mynodes, OF_DETACHED);
pr_debug("unflattened tree is detached\n");
}


@@ -544,12 +544,15 @@ void __init of_irq_init(const struct of_device_id *matches)
list_del(&desc->list);
of_node_set_flag(desc->dev, OF_POPULATED);
pr_debug("of_irq_init: init %s (%p), parent %p\n",
desc->dev->full_name,
desc->dev, desc->interrupt_parent);
ret = desc->irq_init_cb(desc->dev,
desc->interrupt_parent);
if (ret) {
of_node_clear_flag(desc->dev, OF_POPULATED);
kfree(desc);
continue;
}
@@ -559,8 +562,6 @@ void __init of_irq_init(const struct of_device_id *matches)
* its children can get processed in a subsequent pass.
*/
list_add_tail(&desc->list, &intc_parent_list);
of_node_set_flag(desc->dev, OF_POPULATED);
}
/* Get the next pending parent that might have children */


@@ -497,6 +497,7 @@ int of_platform_default_populate(struct device_node *root,
}
EXPORT_SYMBOL_GPL(of_platform_default_populate);
#ifndef CONFIG_PPC
static int __init of_platform_default_populate_init(void)
{
struct device_node *node;
@@ -521,6 +522,7 @@ static int __init of_platform_default_populate_init(void)
return 0;
}
arch_initcall_sync(of_platform_default_populate_init);
#endif
static int of_platform_device_destroy(struct device *dev, void *data)
{


@@ -63,7 +63,7 @@ static int ioctl_send_fib(struct aac_dev * dev, void __user *arg)
struct fib *fibptr;
struct hw_fib * hw_fib = (struct hw_fib *)0;
dma_addr_t hw_fib_pa = (dma_addr_t)0LL;
unsigned size;
unsigned int size, osize;
int retval;
if (dev->in_reset) {
@@ -87,7 +87,8 @@ static int ioctl_send_fib(struct aac_dev * dev, void __user *arg)
* will not overrun the buffer when we copy the memory. Return
* an error if we would.
*/
size = le16_to_cpu(kfib->header.Size) + sizeof(struct aac_fibhdr);
osize = size = le16_to_cpu(kfib->header.Size) +
sizeof(struct aac_fibhdr);
if (size < le16_to_cpu(kfib->header.SenderSize))
size = le16_to_cpu(kfib->header.SenderSize);
if (size > dev->max_fib_size) {
@@ -118,6 +119,14 @@ static int ioctl_send_fib(struct aac_dev * dev, void __user *arg)
goto cleanup;
}
/* Sanity check the second copy */
if ((osize != le16_to_cpu(kfib->header.Size) +
sizeof(struct aac_fibhdr))
|| (size < le16_to_cpu(kfib->header.SenderSize))) {
retval = -EINVAL;
goto cleanup;
}
if (kfib->header.Command == cpu_to_le16(TakeABreakPt)) {
aac_adapter_interrupt(dev);
/*
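
The added check above closes a classic double-fetch: the FIB header is read
from user memory twice, once to size the buffer and again as part of the
payload copy, so the size must be re-validated after the second copy or a
concurrent writer can change it in between. The shape of the defense, with
read_user_size() and header_size() as hypothetical stand-ins:

    osize = size = read_user_size(arg);      /* first fetch */
    if (size > dev->max_fib_size)
        return -EINVAL;
    if (copy_from_user(kfib, arg, size))     /* second fetch */
        return -EFAULT;
    if (header_size(kfib) != osize)          /* changed underneath us? */
        return -EINVAL;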


@@ -2923,7 +2923,7 @@ static int fcoe_ctlr_vlan_recv(struct fcoe_ctlr *fip, struct sk_buff *skb)
mutex_unlock(&fip->ctlr_mutex);
drop:
kfree(skb);
kfree_skb(skb);
return rc;
}


@@ -5037,7 +5037,7 @@ static int megasas_init_fw(struct megasas_instance *instance)
/* Find first memory bar */
bar_list = pci_select_bars(instance->pdev, IORESOURCE_MEM);
instance->bar = find_first_bit(&bar_list, sizeof(unsigned long));
if (pci_request_selected_regions(instance->pdev, instance->bar,
if (pci_request_selected_regions(instance->pdev, 1<<instance->bar,
"megasas: LSI")) {
dev_printk(KERN_DEBUG, &instance->pdev->dev, "IO memory region busy!\n");
return -EBUSY;
@@ -5339,7 +5339,7 @@ fail_ready_state:
iounmap(instance->reg_set);
fail_ioremap:
pci_release_selected_regions(instance->pdev, instance->bar);
pci_release_selected_regions(instance->pdev, 1<<instance->bar);
return -EINVAL;
}
@@ -5360,7 +5360,7 @@ static void megasas_release_mfi(struct megasas_instance *instance)
iounmap(instance->reg_set);
pci_release_selected_regions(instance->pdev, instance->bar);
pci_release_selected_regions(instance->pdev, 1<<instance->bar);
}
/**
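
The megaraid fix here and in the next file is the same one-liner: the "bars"
argument of pci_request_selected_regions() and
pci_release_selected_regions() is a bitmask, while find_first_bit() returns
an index, hence the new '1 << instance->bar'. Sketched:

    unsigned long bar_list = pci_select_bars(pdev, IORESOURCE_MEM);
    int bar = find_first_bit(&bar_list, BITS_PER_LONG);      /* an index */

    pci_request_selected_regions(pdev, 1 << bar, "megasas: LSI");
    /* ... */
    pci_release_selected_regions(pdev, 1 << bar);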


@@ -2603,7 +2603,7 @@ megasas_release_fusion(struct megasas_instance *instance)
iounmap(instance->reg_set);
pci_release_selected_regions(instance->pdev, instance->bar);
pci_release_selected_regions(instance->pdev, 1<<instance->bar);
}
/**


@@ -2188,6 +2188,17 @@ mpt3sas_base_map_resources(struct MPT3SAS_ADAPTER *ioc)
} else
ioc->msix96_vector = 0;
if (ioc->is_warpdrive) {
ioc->reply_post_host_index[0] = (resource_size_t __iomem *)
&ioc->chip->ReplyPostHostIndex;
for (i = 1; i < ioc->cpu_msix_table_sz; i++)
ioc->reply_post_host_index[i] =
(resource_size_t __iomem *)
((u8 __iomem *)&ioc->chip->Doorbell + (0x4000 + ((i - 1)
* 4)));
}
list_for_each_entry(reply_q, &ioc->reply_queue_list, list)
pr_info(MPT3SAS_FMT "%s: IRQ %d\n",
reply_q->name, ((ioc->msix_enable) ? "PCI-MSI-X enabled" :
@@ -5280,17 +5291,6 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
if (r)
goto out_free_resources;
if (ioc->is_warpdrive) {
ioc->reply_post_host_index[0] = (resource_size_t __iomem *)
&ioc->chip->ReplyPostHostIndex;
for (i = 1; i < ioc->cpu_msix_table_sz; i++)
ioc->reply_post_host_index[i] =
(resource_size_t __iomem *)
((u8 __iomem *)&ioc->chip->Doorbell + (0x4000 + ((i - 1)
* 4)));
}
pci_set_drvdata(ioc->pdev, ioc->shost);
r = _base_get_ioc_facts(ioc, CAN_SLEEP);
if (r)


@@ -778,6 +778,8 @@ static void ses_intf_remove_enclosure(struct scsi_device *sdev)
if (!edev)
return;
enclosure_unregister(edev);
ses_dev = edev->scratch;
edev->scratch = NULL;
@@ -789,7 +791,6 @@ static void ses_intf_remove_enclosure(struct scsi_device *sdev)
kfree(edev->component[0].scratch);
put_device(&edev->edev);
enclosure_unregister(edev);
}
static void ses_intf_remove(struct device *cdev,


@@ -1354,7 +1354,6 @@ made_compressed_probe:
spin_lock_init(&acm->write_lock);
spin_lock_init(&acm->read_lock);
mutex_init(&acm->mutex);
acm->rx_endpoint = usb_rcvbulkpipe(usb_dev, epread->bEndpointAddress);
acm->is_int_ep = usb_endpoint_xfer_int(epread);
if (acm->is_int_ep)
acm->bInterval = epread->bInterval;
@@ -1394,14 +1393,14 @@ made_compressed_probe:
urb->transfer_dma = rb->dma;
if (acm->is_int_ep) {
usb_fill_int_urb(urb, acm->dev,
acm->rx_endpoint,
usb_rcvintpipe(usb_dev, epread->bEndpointAddress),
rb->base,
acm->readsize,
acm_read_bulk_callback, rb,
acm->bInterval);
} else {
usb_fill_bulk_urb(urb, acm->dev,
acm->rx_endpoint,
usb_rcvbulkpipe(usb_dev, epread->bEndpointAddress),
rb->base,
acm->readsize,
acm_read_bulk_callback, rb);


@@ -96,7 +96,6 @@ struct acm {
struct acm_rb read_buffers[ACM_NR];
struct acm_wb *putbuffer; /* for acm_tty_put_char() */
int rx_buflimit;
int rx_endpoint;
spinlock_t read_lock;
int write_used; /* number of non-empty write buffers */
int transmitting;


@@ -171,6 +171,31 @@ static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno,
ep, buffer, size);
}
static const unsigned short low_speed_maxpacket_maxes[4] = {
[USB_ENDPOINT_XFER_CONTROL] = 8,
[USB_ENDPOINT_XFER_ISOC] = 0,
[USB_ENDPOINT_XFER_BULK] = 0,
[USB_ENDPOINT_XFER_INT] = 8,
};
static const unsigned short full_speed_maxpacket_maxes[4] = {
[USB_ENDPOINT_XFER_CONTROL] = 64,
[USB_ENDPOINT_XFER_ISOC] = 1023,
[USB_ENDPOINT_XFER_BULK] = 64,
[USB_ENDPOINT_XFER_INT] = 64,
};
static const unsigned short high_speed_maxpacket_maxes[4] = {
[USB_ENDPOINT_XFER_CONTROL] = 64,
[USB_ENDPOINT_XFER_ISOC] = 1024,
[USB_ENDPOINT_XFER_BULK] = 512,
[USB_ENDPOINT_XFER_INT] = 1023,
};
static const unsigned short super_speed_maxpacket_maxes[4] = {
[USB_ENDPOINT_XFER_CONTROL] = 512,
[USB_ENDPOINT_XFER_ISOC] = 1024,
[USB_ENDPOINT_XFER_BULK] = 1024,
[USB_ENDPOINT_XFER_INT] = 1024,
};
static int usb_parse_endpoint(struct device *ddev, int cfgno, int inum,
int asnum, struct usb_host_interface *ifp, int num_ep,
unsigned char *buffer, int size)
@@ -179,6 +204,8 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno, int inum,
struct usb_endpoint_descriptor *d;
struct usb_host_endpoint *endpoint;
int n, i, j, retval;
unsigned int maxp;
const unsigned short *maxpacket_maxes;
d = (struct usb_endpoint_descriptor *) buffer;
buffer += d->bLength;
@@ -286,6 +313,42 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno, int inum,
endpoint->desc.wMaxPacketSize = cpu_to_le16(8);
}
/* Validate the wMaxPacketSize field */
maxp = usb_endpoint_maxp(&endpoint->desc);
/* Find the highest legal maxpacket size for this endpoint */
i = 0; /* additional transactions per microframe */
switch (to_usb_device(ddev)->speed) {
case USB_SPEED_LOW:
maxpacket_maxes = low_speed_maxpacket_maxes;
break;
case USB_SPEED_FULL:
maxpacket_maxes = full_speed_maxpacket_maxes;
break;
case USB_SPEED_HIGH:
/* Bits 12..11 are allowed only for HS periodic endpoints */
if (usb_endpoint_xfer_int(d) || usb_endpoint_xfer_isoc(d)) {
i = maxp & (BIT(12) | BIT(11));
maxp &= ~i;
}
/* fallthrough */
default:
maxpacket_maxes = high_speed_maxpacket_maxes;
break;
case USB_SPEED_SUPER:
case USB_SPEED_SUPER_PLUS:
maxpacket_maxes = super_speed_maxpacket_maxes;
break;
}
j = maxpacket_maxes[usb_endpoint_type(&endpoint->desc)];
if (maxp > j) {
dev_warn(ddev, "config %d interface %d altsetting %d endpoint 0x%X has invalid maxpacket %d, setting to %d\n",
cfgno, inum, asnum, d->bEndpointAddress, maxp, j);
maxp = j;
endpoint->desc.wMaxPacketSize = cpu_to_le16(i | maxp);
}
/*
* Some buggy high speed devices have bulk endpoints using
* maxpacket sizes other than 512. High speed HCDs may not
@@ -293,9 +356,6 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno, int inum,
*/
if (to_usb_device(ddev)->speed == USB_SPEED_HIGH
&& usb_endpoint_xfer_bulk(d)) {
unsigned maxp;
maxp = usb_endpoint_maxp(&endpoint->desc) & 0x07ff;
if (maxp != 512)
dev_warn(ddev, "config %d interface %d altsetting %d "
"bulk endpoint 0x%X has invalid maxpacket %d\n",

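The new validation above clamps wMaxPacketSize against a per-speed,
per-transfer-type table, after first splitting off bits 12..11, which on
high-speed periodic endpoints encode additional transactions per
microframe. A runnable sketch of that mask-clamp-recombine step:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t wmp   = (2u << 11) | 1100;  /* 2 extra transactions, bogus 1100 */
        uint16_t extra = wmp & (3u << 11);   /* keep bits 12..11 */
        uint16_t maxp  = wmp & ~(3u << 11);
        const uint16_t limit = 1023;         /* high-speed interrupt ceiling */

        if (maxp > limit)
            maxp = limit;                    /* warn + clamp in the real code */
        printf("%#06x\n", (unsigned int)(extra | maxp));
        return 0;
    }
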

@@ -241,7 +241,8 @@ static int usbdev_mmap(struct file *file, struct vm_area_struct *vma)
goto error_decrease_mem;
}
mem = usb_alloc_coherent(ps->dev, size, GFP_USER, &dma_handle);
mem = usb_alloc_coherent(ps->dev, size, GFP_USER | __GFP_NOWARN,
&dma_handle);
if (!mem) {
ret = -ENOMEM;
goto error_free_usbm;
@@ -2582,7 +2583,9 @@ static unsigned int usbdev_poll(struct file *file,
if (file->f_mode & FMODE_WRITE && !list_empty(&ps->async_completed))
mask |= POLLOUT | POLLWRNORM;
if (!connected(ps))
mask |= POLLERR | POLLHUP;
mask |= POLLHUP;
if (list_empty(&ps->list))
mask |= POLLERR;
return mask;
}


@@ -1052,14 +1052,11 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
/* Continue a partial initialization */
if (type == HUB_INIT2 || type == HUB_INIT3) {
device_lock(hub->intfdev);
device_lock(&hdev->dev);
/* Was the hub disconnected while we were waiting? */
if (hub->disconnected) {
device_unlock(hub->intfdev);
kref_put(&hub->kref, hub_release);
return;
}
if (hub->disconnected)
goto disconnected;
if (type == HUB_INIT2)
goto init2;
goto init3;
@@ -1262,7 +1259,7 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
queue_delayed_work(system_power_efficient_wq,
&hub->init_work,
msecs_to_jiffies(delay));
device_unlock(hub->intfdev);
device_unlock(&hdev->dev);
return; /* Continues at init3: below */
} else {
msleep(delay);
@@ -1281,12 +1278,12 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
/* Scan all ports that need attention */
kick_hub_wq(hub);
/* Allow autosuspend if it was suppressed */
if (type <= HUB_INIT3)
if (type == HUB_INIT2 || type == HUB_INIT3) {
/* Allow autosuspend if it was suppressed */
disconnected:
usb_autopm_put_interface_async(to_usb_interface(hub->intfdev));
if (type == HUB_INIT2 || type == HUB_INIT3)
device_unlock(hub->intfdev);
device_unlock(&hdev->dev);
}
kref_put(&hub->kref, hub_release);
}
@@ -1315,8 +1312,6 @@ static void hub_quiesce(struct usb_hub *hub, enum hub_quiescing_type type)
struct usb_device *hdev = hub->hdev;
int i;
cancel_delayed_work_sync(&hub->init_work);
/* hub_wq and related activity won't re-trigger */
hub->quiescing = 1;


@@ -61,6 +61,7 @@ static int dwc3_of_simple_probe(struct platform_device *pdev)
if (!simple->clks)
return -ENOMEM;
platform_set_drvdata(pdev, simple);
simple->dev = dev;
for (i = 0; i < simple->num_clocks; i++) {


@@ -37,6 +37,7 @@
#define PCI_DEVICE_ID_INTEL_BXT 0x0aaa
#define PCI_DEVICE_ID_INTEL_BXT_M 0x1aaa
#define PCI_DEVICE_ID_INTEL_APL 0x5aaa
#define PCI_DEVICE_ID_INTEL_KBP 0xa2b0
static const struct acpi_gpio_params reset_gpios = { 0, 0, false };
static const struct acpi_gpio_params cs_gpios = { 1, 0, false };
@@ -227,6 +228,7 @@ static const struct pci_device_id dwc3_pci_id_table[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BXT), },
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BXT_M), },
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_APL), },
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBP), },
{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_NL_USB), },
{ } /* Terminating Entry */
};


@@ -829,7 +829,7 @@ static void dwc3_prepare_one_trb(struct dwc3_ep *dep,
if (!req->request.no_interrupt && !chain)
trb->ctrl |= DWC3_TRB_CTRL_IOC | DWC3_TRB_CTRL_ISP_IMI;
if (last)
if (last && !usb_endpoint_xfer_isoc(dep->endpoint.desc))
trb->ctrl |= DWC3_TRB_CTRL_LST;
if (chain)
@@ -1955,7 +1955,8 @@ static void dwc3_gadget_free_endpoints(struct dwc3 *dwc)
static int __dwc3_cleanup_done_trbs(struct dwc3 *dwc, struct dwc3_ep *dep,
struct dwc3_request *req, struct dwc3_trb *trb,
const struct dwc3_event_depevt *event, int status)
const struct dwc3_event_depevt *event, int status,
int chain)
{
unsigned int count;
unsigned int s_pkt = 0;
@@ -1964,17 +1965,22 @@ static int __dwc3_cleanup_done_trbs(struct dwc3 *dwc, struct dwc3_ep *dep,
dep->queued_requests--;
trace_dwc3_complete_trb(dep, trb);
/*
* If we're in the middle of a series of chained TRBs and we
* receive a short transfer along the way, DWC3 will skip
* through all TRBs including the last TRB in the chain (the one
* where the CHN bit is zero). DWC3 will also avoid clearing the HWO
* bit and SW has to do it manually.
*
* We're going to do that here to avoid problems of HW trying
* to use bogus TRBs for transfers.
*/
if (chain && (trb->ctrl & DWC3_TRB_CTRL_HWO))
trb->ctrl &= ~DWC3_TRB_CTRL_HWO;
if ((trb->ctrl & DWC3_TRB_CTRL_HWO) && status != -ESHUTDOWN)
/*
* We continue despite the error. There is not much we
* can do. If we don't clean it up we loop forever. If
* we skip the TRB then it gets overwritten after a
* while since we use them in a ring buffer. A BUG()
* would help. Lets hope that if this occurs, someone
* fixes the root cause instead of looking away :)
*/
dev_err(dwc->dev, "%s's TRB (%p) still owned by HW\n",
dep->name, trb);
return 1;
count = trb->size & DWC3_TRB_SIZE_MASK;
if (dep->direction) {
@@ -2013,15 +2019,7 @@ static int __dwc3_cleanup_done_trbs(struct dwc3 *dwc, struct dwc3_ep *dep,
s_pkt = 1;
}
/*
* We assume here we will always receive the entire data block
* which we should receive. Meaning, if we program RX to
* receive 4K but we receive only 2K, we assume that's all we
* should receive and we simply bounce the request back to the
* gadget driver for further processing.
*/
req->request.actual += req->request.length - count;
if (s_pkt)
if (s_pkt && !chain)
return 1;
if ((event->status & DEPEVT_STATUS_LST) &&
(trb->ctrl & (DWC3_TRB_CTRL_LST |
@@ -2040,13 +2038,17 @@ static int dwc3_cleanup_done_reqs(struct dwc3 *dwc, struct dwc3_ep *dep,
struct dwc3_trb *trb;
unsigned int slot;
unsigned int i;
int count = 0;
int ret;
do {
int chain;
req = next_request(&dep->started_list);
if (WARN_ON_ONCE(!req))
return 1;
chain = req->request.num_mapped_sgs > 0;
i = 0;
do {
slot = req->first_trb_index + i;
@@ -2054,13 +2056,22 @@ static int dwc3_cleanup_done_reqs(struct dwc3 *dwc, struct dwc3_ep *dep,
slot++;
slot %= DWC3_TRB_NUM;
trb = &dep->trb_pool[slot];
count += trb->size & DWC3_TRB_SIZE_MASK;
ret = __dwc3_cleanup_done_trbs(dwc, dep, req, trb,
event, status);
event, status, chain);
if (ret)
break;
} while (++i < req->request.num_mapped_sgs);
/*
* We assume here we will always receive the entire data block
* which we should receive. Meaning, if we program RX to
* receive 4K but we receive only 2K, we assume that's all we
* should receive and we simply bounce the request back to the
* gadget driver for further processing.
*/
req->request.actual += req->request.length - count;
dwc3_gadget_giveback(dep, req, status);
if (ret)


@@ -1913,6 +1913,8 @@ unknown:
break;
case USB_RECIP_ENDPOINT:
if (!cdev->config)
break;
endp = ((w_index & 0x80) >> 3) | (w_index & 0x0f);
list_for_each_entry(f, &cdev->config->functions, list) {
if (test_bit(endp, f->endpoints))
@@ -2124,14 +2126,14 @@ int composite_os_desc_req_prepare(struct usb_composite_dev *cdev,
cdev->os_desc_req = usb_ep_alloc_request(ep0, GFP_KERNEL);
if (!cdev->os_desc_req) {
ret = PTR_ERR(cdev->os_desc_req);
ret = -ENOMEM;
goto end;
}
/* OS feature descriptor length <= 4kB */
cdev->os_desc_req->buf = kmalloc(4096, GFP_KERNEL);
if (!cdev->os_desc_req->buf) {
ret = PTR_ERR(cdev->os_desc_req->buf);
ret = -ENOMEM;
kfree(cdev->os_desc_req);
goto end;
}
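
Both error paths above had the same bug: kmalloc() and
usb_ep_alloc_request() return NULL on failure, not an ERR_PTR, and
PTR_ERR(NULL) evaluates to 0, so the old code reported success on an
allocation failure. A userspace demonstration of the trap:

    #include <stdio.h>

    /* same definition the kernel uses, reduced */
    static inline long PTR_ERR(const void *ptr) { return (long)ptr; }

    int main(void)
    {
        void *buf = NULL;    /* what kmalloc() returns on failure */
        printf("PTR_ERR(NULL) = %ld\n", PTR_ERR(buf));   /* prints 0 */
        return 0;
    }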


@@ -1490,7 +1490,9 @@ void unregister_gadget_item(struct config_item *item)
{
struct gadget_info *gi = to_gadget_info(item);
mutex_lock(&gi->lock);
unregister_gadget(gi);
mutex_unlock(&gi->lock);
}
EXPORT_SYMBOL_GPL(unregister_gadget_item);


@@ -680,6 +680,12 @@ static int rndis_reset_response(struct rndis_params *params,
{
rndis_reset_cmplt_type *resp;
rndis_resp_t *r;
u8 *xbuf;
u32 length;
/* drain the response queue */
while ((xbuf = rndis_get_next_response(params, &length)))
rndis_free_response(params, xbuf);
r = rndis_add_response(params, sizeof(rndis_reset_cmplt_type));
if (!r)


@@ -556,7 +556,8 @@ static netdev_tx_t eth_start_xmit(struct sk_buff *skb,
/* Multi frame CDC protocols may store the frame for
* later which is not a dropped frame.
*/
if (dev->port_usb->supports_multi_frame)
if (dev->port_usb &&
dev->port_usb->supports_multi_frame)
goto multiframe;
goto drop;
}


@@ -2023,7 +2023,7 @@ static int uvcg_streaming_class_allow_link(struct config_item *src,
if (!data) {
kfree(*class_array);
*class_array = NULL;
ret = PTR_ERR(data);
ret = -ENOMEM;
goto unlock;
}
cl_arr = *class_array;


@@ -542,7 +542,7 @@ static ssize_t ep_aio(struct kiocb *iocb,
*/
spin_lock_irq(&epdata->dev->lock);
value = -ENODEV;
if (unlikely(epdata->ep))
if (unlikely(epdata->ep == NULL))
goto fail;
req = usb_ep_alloc_request(epdata->ep, GFP_ATOMIC);
@@ -606,7 +606,7 @@ ep_read_iter(struct kiocb *iocb, struct iov_iter *to)
}
if (is_sync_kiocb(iocb)) {
value = ep_io(epdata, buf, len);
if (value >= 0 && copy_to_iter(buf, value, to))
if (value >= 0 && (copy_to_iter(buf, value, to) != value))
value = -EFAULT;
} else {
struct kiocb_priv *priv = kzalloc(sizeof *priv, GFP_KERNEL);
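
Two inversions are fixed above: the NULL check in ep_aio() tested
'epdata->ep' instead of 'epdata->ep == NULL', and the read path treated any
nonzero copy_to_iter() return as failure even though copy_to_iter() returns
the number of bytes copied rather than an error code. The corrected
pattern:

    if (value >= 0 && copy_to_iter(buf, value, to) != value)
        value = -EFAULT;    /* short copy: the userspace iovec faulted */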


@@ -1145,7 +1145,7 @@ int usb_add_gadget_udc_release(struct device *parent, struct usb_gadget *gadget,
if (ret != -EPROBE_DEFER)
list_del(&driver->pending);
if (ret)
goto err4;
goto err5;
break;
}
}
@@ -1154,6 +1154,9 @@ int usb_add_gadget_udc_release(struct device *parent, struct usb_gadget *gadget,
return 0;
err5:
device_del(&udc->dev);
err4:
list_del(&udc->list);
mutex_unlock(&udc_lock);


@@ -2053,7 +2053,7 @@ static void setup_received_handle(struct qe_udc *udc,
struct qe_ep *ep;
if (wValue != 0 || wLength != 0
|| pipe > USB_MAX_ENDPOINTS)
|| pipe >= USB_MAX_ENDPOINTS)
break;
ep = &udc->eps[pipe];


@@ -332,11 +332,11 @@ static void ehci_turn_off_all_ports(struct ehci_hcd *ehci)
int port = HCS_N_PORTS(ehci->hcs_params);
while (port--) {
ehci_writel(ehci, PORT_RWC_BITS,
&ehci->regs->port_status[port]);
spin_unlock_irq(&ehci->lock);
ehci_port_power(ehci, port, false);
spin_lock_irq(&ehci->lock);
ehci_writel(ehci, PORT_RWC_BITS,
&ehci->regs->port_status[port]);
}
}


@@ -1675,7 +1675,7 @@ max3421_gpout_set_value(struct usb_hcd *hcd, u8 pin_number, u8 value)
if (pin_number > 7)
return;
mask = 1u << pin_number;
mask = 1u << (pin_number % 4);
idx = pin_number / 4;
if (value)


@@ -386,6 +386,9 @@ static int xhci_stop_device(struct xhci_hcd *xhci, int slot_id, int suspend)
ret = 0;
virt_dev = xhci->devs[slot_id];
if (!virt_dev)
return -ENODEV;
cmd = xhci_alloc_command(xhci, false, true, GFP_NOIO);
if (!cmd) {
xhci_dbg(xhci, "Couldn't allocate command structure.\n");


@@ -314,11 +314,12 @@ static void xhci_pci_remove(struct pci_dev *dev)
usb_remove_hcd(xhci->shared_hcd);
usb_put_hcd(xhci->shared_hcd);
}
usb_hcd_pci_remove(dev);
/* Workaround for spurious wakeups at shutdown with HSW */
if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
pci_set_power_state(dev, PCI_D3hot);
usb_hcd_pci_remove(dev);
}
#ifdef CONFIG_PM


@@ -1334,12 +1334,6 @@ static void handle_cmd_completion(struct xhci_hcd *xhci,
cmd = list_entry(xhci->cmd_list.next, struct xhci_command, cmd_list);
if (cmd->command_trb != xhci->cmd_ring->dequeue) {
xhci_err(xhci,
"Command completion event does not match command\n");
return;
}
del_timer(&xhci->cmd_timer);
trace_xhci_cmd_completion(cmd_trb, (struct xhci_generic_trb *) event);
@@ -1351,6 +1345,13 @@ static void handle_cmd_completion(struct xhci_hcd *xhci,
xhci_handle_stopped_cmd_ring(xhci, cmd);
return;
}
if (cmd->command_trb != xhci->cmd_ring->dequeue) {
xhci_err(xhci,
"Command completion event does not match command\n");
return;
}
/*
* Host aborted the command ring, check if the current command was
* supposed to be aborted, otherwise continue normally.
@@ -3243,7 +3244,8 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
send_addr = addr;
/* Queue the TRBs, even if they are zero-length */
for (enqd_len = 0; enqd_len < full_len; enqd_len += trb_buff_len) {
for (enqd_len = 0; first_trb || enqd_len < full_len;
enqd_len += trb_buff_len) {
field = TRB_TYPE(TRB_NORMAL);
/* TRB buffer should not cross 64KB boundaries */


@@ -665,7 +665,7 @@ static ssize_t ftdi_elan_read(struct file *file, char __user *buffer,
{
char data[30 *3 + 4];
char *d = data;
int m = (sizeof(data) - 1) / 3;
int m = (sizeof(data) - 1) / 3 - 1;
int bytes_read = 0;
int retry_on_empty = 10;
int retry_on_timeout = 5;
@@ -1684,7 +1684,7 @@ wait:if (ftdi->disconnected > 0) {
int i = 0;
char data[30 *3 + 4];
char *d = data;
int m = (sizeof(data) - 1) / 3;
int m = (sizeof(data) - 1) / 3 - 1;
int l = 0;
struct u132_target *target = &ftdi->target[ed];
struct u132_command *command = &ftdi->command[
@@ -1876,7 +1876,7 @@ more:{
if (packet_bytes > 2) {
char diag[30 *3 + 4];
char *d = diag;
int m = (sizeof(diag) - 1) / 3;
int m = (sizeof(diag) - 1) / 3 - 1;
char *b = ftdi->bulk_in_buffer;
int bytes_read = 0;
diag[0] = 0;
@@ -2053,7 +2053,7 @@ static int ftdi_elan_synchronize(struct usb_ftdi *ftdi)
if (packet_bytes > 2) {
char diag[30 *3 + 4];
char *d = diag;
int m = (sizeof(diag) - 1) / 3;
int m = (sizeof(diag) - 1) / 3 - 1;
char *b = ftdi->bulk_in_buffer;
int bytes_read = 0;
unsigned char c = 0;
@@ -2155,7 +2155,7 @@ more:{
if (packet_bytes > 2) {
char diag[30 *3 + 4];
char *d = diag;
int m = (sizeof(diag) - 1) / 3;
int m = (sizeof(diag) - 1) / 3 - 1;
char *b = ftdi->bulk_in_buffer;
int bytes_read = 0;
diag[0] = 0;
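
The same off-by-one is fixed in five places in this driver: each dumped
byte takes three characters of the 'data'/'diag' buffers, and sizing the
group count as (sizeof(buf) - 1) / 3 left no headroom for the dump's
trailing bytes, so the last group could land past the end of the buffer.
Reserving one fewer group keeps every writer in bounds:

    char data[30 * 3 + 4];
    int m = (sizeof(data) - 1) / 3 - 1;   /* 30 groups, not 31 */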
