Merge tag 'v3.2' into staging/for_v3.3

* tag 'v3.2': (83 commits)
  Linux 3.2
  minixfs: misplaced checks lead to dentry leak
  ptrace: ensure JOBCTL_STOP_SIGMASK is not zero after detach
  ptrace: partially fix the do_wait(WEXITED) vs EXIT_DEAD->EXIT_ZOMBIE race
  Revert "rtc: Expire alarms after the time is set."
  [CIFS] default ntlmv2 for cifs mount delayed to 3.3
  cifs: fix bad buffer length check in coalesce_t2
  Revert "rtc: Disable the alarm in the hardware"
  hung_task: fix false positive during vfork
  security: Fix security_old_inode_init_security() when CONFIG_SECURITY is not set
  fix CAN MAINTAINERS SCM tree type
  mwifiex: fix crash during simultaneous scan and connect
  b43: fix regression in PIO case
  ath9k: Fix kernel panic in AR2427 in AP mode
  CAN MAINTAINERS update
  net: fsl: fec: fix build for mx23-only kernel
  sch_qfq: fix overflow in qfq_update_start()
  drm/radeon/kms/atom: fix possible segfault in pm setup
  gspca: Fix falling back to lower isoc alt settings
  futex: Fix uninterruptible loop due to gate_area
  ...
Author: Mauro Carvalho Chehab
Date:   2012-01-06 10:18:43 -02:00
Commit: b35009a9a7

88 changed files with 570 additions and 392 deletions


@@ -1100,6 +1100,15 @@ emulate them efficiently. The fields in each entry are defined as follows:
    eax, ebx, ecx, edx: the values returned by the cpuid instruction for
          this function/index combination
 
+The TSC deadline timer feature (CPUID leaf 1, ecx[24]) is always returned
+as false, since the feature depends on KVM_CREATE_IRQCHIP for local APIC
+support.  Instead it is reported via
+
+  ioctl(KVM_CHECK_EXTENSION, KVM_CAP_TSC_DEADLINE_TIMER)
+
+if that returns true and you use KVM_CREATE_IRQCHIP, or if you emulate the
+feature in userspace, then you can enable the feature for KVM_SET_CPUID2.
+
 4.47 KVM_PPC_GET_PVINFO
 
 Capability: KVM_CAP_PPC_GET_PVINFO
@@ -1151,6 +1160,13 @@ following flags are specified:
 	/* Depends on KVM_CAP_IOMMU */
 	#define KVM_DEV_ASSIGN_ENABLE_IOMMU	(1 << 0)
 
+The KVM_DEV_ASSIGN_ENABLE_IOMMU flag is a mandatory option to ensure
+isolation of the device.  Usages not specifying this flag are deprecated.
+
+Only PCI header type 0 devices with PCI BAR resources are supported by
+device assignment.  The user requesting this ioctl must have read/write
+access to the PCI sysfs resource files associated with the device.
+
 4.49 KVM_DEASSIGN_PCI_DEVICE
 
 Capability: KVM_CAP_DEVICE_DEASSIGNMENT


@@ -1698,11 +1698,9 @@ F: arch/x86/include/asm/tce.h
 
 CAN NETWORK LAYER
 M: Oliver Hartkopp <socketcan@hartkopp.net>
-M: Oliver Hartkopp <oliver.hartkopp@volkswagen.de>
-M: Urs Thuermann <urs.thuermann@volkswagen.de>
 L: linux-can@vger.kernel.org
-L: netdev@vger.kernel.org
-W: http://developer.berlios.de/projects/socketcan/
+W: http://gitorious.org/linux-can
+T: git git://gitorious.org/linux-can/linux-can-next.git
 S: Maintained
 F: net/can/
 F: include/linux/can.h
@@ -1713,9 +1711,10 @@ F: include/linux/can/gw.h
 
 CAN NETWORK DRIVERS
 M: Wolfgang Grandegger <wg@grandegger.com>
+M: Marc Kleine-Budde <mkl@pengutronix.de>
 L: linux-can@vger.kernel.org
-L: netdev@vger.kernel.org
-W: http://developer.berlios.de/projects/socketcan/
+W: http://gitorious.org/linux-can
+T: git git://gitorious.org/linux-can/linux-can-next.git
 S: Maintained
 F: drivers/net/can/
 F: include/linux/can/dev.h
@@ -2700,7 +2699,7 @@ FIREWIRE SUBSYSTEM
 M: Stefan Richter <stefanr@s5r6.in-berlin.de>
 L: linux1394-devel@lists.sourceforge.net
 W: http://ieee1394.wiki.kernel.org/
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394-2.6.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394.git
 S: Maintained
 F: drivers/firewire/
 F: include/linux/firewire*.h


@@ -1,7 +1,7 @@
 VERSION = 3
 PATCHLEVEL = 2
 SUBLEVEL = 0
-EXTRAVERSION = -rc7
+EXTRAVERSION =
 NAME = Saber-toothed Squirrel
 
 # *DOCUMENTATION*


@@ -1246,7 +1246,7 @@ config PL310_ERRATA_588369
 config ARM_ERRATA_720789
         bool "ARM errata: TLBIASIDIS and TLBIMVAIS operations can broadcast a faulty ASID"
-        depends on CPU_V7 && SMP
+        depends on CPU_V7
         help
           This option enables the workaround for the 720789 Cortex-A9 (prior to
           r2p0) erratum. A faulty ASID can be sent to the other CPUs for the
@@ -1282,7 +1282,7 @@ config ARM_ERRATA_743622
 config ARM_ERRATA_751472
         bool "ARM errata: Interrupted ICIALLUIS may prevent completion of broadcasted operation"
-        depends on CPU_V7 && SMP
+        depends on CPU_V7
         help
           This option enables the workaround for the 751472 Cortex-A9 (prior
           to r3p0) erratum. An interrupted ICIALLUIS operation may prevent the


@@ -221,17 +221,6 @@
  */
 #define MCODE_BUFF_PER_REQ	256
 
-/*
- * Mark a _pl330_req as free.
- * We do it by writing DMAEND as the first instruction
- * because no valid request is going to have DMAEND as
- * its first instruction to execute.
- */
-#define MARK_FREE(req)	do { \
-		_emit_END(0, (req)->mc_cpu); \
-		(req)->mc_len = 0; \
-	} while (0)
-
 /* If the _pl330_req is available to the client */
 #define IS_FREE(req)	(*((u8 *)((req)->mc_cpu)) == CMD_DMAEND)
@@ -301,8 +290,10 @@ struct pl330_thread {
         struct pl330_dmac *dmac;
         /* Only two at a time */
         struct _pl330_req req[2];
-        /* Index of the last submitted request */
+        /* Index of the last enqueued request */
         unsigned lstenq;
+        /* Index of the last submitted request or -1 if the DMA is stopped */
+        int req_running;
 };
 
 enum pl330_dmac_state {
@@ -778,6 +769,22 @@ static inline void _execute_DBGINSN(struct pl330_thread *thrd,
         writel(0, regs + DBGCMD);
 }
 
+/*
+ * Mark a _pl330_req as free.
+ * We do it by writing DMAEND as the first instruction
+ * because no valid request is going to have DMAEND as
+ * its first instruction to execute.
+ */
+static void mark_free(struct pl330_thread *thrd, int idx)
+{
+        struct _pl330_req *req = &thrd->req[idx];
+
+        _emit_END(0, req->mc_cpu);
+        req->mc_len = 0;
+
+        thrd->req_running = -1;
+}
+
 static inline u32 _state(struct pl330_thread *thrd)
 {
         void __iomem *regs = thrd->dmac->pinfo->base;
@@ -836,31 +843,6 @@ static inline u32 _state(struct pl330_thread *thrd)
         }
 }
 
-/* If the request 'req' of thread 'thrd' is currently active */
-static inline bool _req_active(struct pl330_thread *thrd,
-                struct _pl330_req *req)
-{
-        void __iomem *regs = thrd->dmac->pinfo->base;
-        u32 buf = req->mc_bus, pc = readl(regs + CPC(thrd->id));
-
-        if (IS_FREE(req))
-                return false;
-
-        return (pc >= buf && pc <= buf + req->mc_len) ? true : false;
-}
-
-/* Returns 0 if the thread is inactive, ID of active req + 1 otherwise */
-static inline unsigned _thrd_active(struct pl330_thread *thrd)
-{
-        if (_req_active(thrd, &thrd->req[0]))
-                return 1; /* First req active */
-
-        if (_req_active(thrd, &thrd->req[1]))
-                return 2; /* Second req active */
-
-        return 0;
-}
-
 static void _stop(struct pl330_thread *thrd)
 {
         void __iomem *regs = thrd->dmac->pinfo->base;
@@ -892,17 +874,22 @@ static bool _trigger(struct pl330_thread *thrd)
         struct _arg_GO go;
         unsigned ns;
         u8 insn[6] = {0, 0, 0, 0, 0, 0};
+        int idx;
 
         /* Return if already ACTIVE */
         if (_state(thrd) != PL330_STATE_STOPPED)
                 return true;
 
-        if (!IS_FREE(&thrd->req[1 - thrd->lstenq]))
-                req = &thrd->req[1 - thrd->lstenq];
-        else if (!IS_FREE(&thrd->req[thrd->lstenq]))
-                req = &thrd->req[thrd->lstenq];
-        else
-                req = NULL;
+        idx = 1 - thrd->lstenq;
+        if (!IS_FREE(&thrd->req[idx]))
+                req = &thrd->req[idx];
+        else {
+                idx = thrd->lstenq;
+                if (!IS_FREE(&thrd->req[idx]))
+                        req = &thrd->req[idx];
+                else
+                        req = NULL;
+        }
 
         /* Return if no request */
         if (!req || !req->r)
@@ -933,6 +920,8 @@ static bool _trigger(struct pl330_thread *thrd)
         /* Only manager can execute GO */
         _execute_DBGINSN(thrd, insn, true);
 
+        thrd->req_running = idx;
+
         return true;
 }
@@ -1382,8 +1371,8 @@ static void pl330_dotask(unsigned long data)
                         thrd->req[0].r = NULL;
                         thrd->req[1].r = NULL;
-                        MARK_FREE(&thrd->req[0]);
-                        MARK_FREE(&thrd->req[1]);
+                        mark_free(thrd, 0);
+                        mark_free(thrd, 1);
 
                         /* Clear the reset flag */
                         pl330->dmac_tbd.reset_chan &= ~(1 << i);
@@ -1461,14 +1450,12 @@ int pl330_update(const struct pl330_info *pi)
                 thrd = &pl330->channels[id];
 
-                active = _thrd_active(thrd);
-                if (!active) /* Aborted */
+                active = thrd->req_running;
+                if (active == -1) /* Aborted */
                         continue;
 
-                active -= 1;
-
                 rqdone = &thrd->req[active];
-                MARK_FREE(rqdone);
+                mark_free(thrd, active);
 
                 /* Get going again ASAP */
                 _start(thrd);
@@ -1509,7 +1496,7 @@ int pl330_chan_ctrl(void *ch_id, enum pl330_chan_op op)
         struct pl330_thread *thrd = ch_id;
         struct pl330_dmac *pl330;
         unsigned long flags;
-        int ret = 0, active;
+        int ret = 0, active = thrd->req_running;
 
         if (!thrd || thrd->free || thrd->dmac->state == DYING)
                 return -EINVAL;
@@ -1525,28 +1512,24 @@ int pl330_chan_ctrl(void *ch_id, enum pl330_chan_op op)
                 thrd->req[0].r = NULL;
                 thrd->req[1].r = NULL;
-                MARK_FREE(&thrd->req[0]);
-                MARK_FREE(&thrd->req[1]);
+                mark_free(thrd, 0);
+                mark_free(thrd, 1);
                 break;
 
         case PL330_OP_ABORT:
-                active = _thrd_active(thrd);
-
                 /* Make sure the channel is stopped */
                 _stop(thrd);
 
                 /* ABORT is only for the active req */
-                if (!active)
+                if (active == -1)
                         break;
 
-                active--;
-
                 thrd->req[active].r = NULL;
-                MARK_FREE(&thrd->req[active]);
+                mark_free(thrd, active);
 
                 /* Start the next */
         case PL330_OP_START:
-                if (!_thrd_active(thrd) && !_start(thrd))
+                if ((active == -1) && !_start(thrd))
                         ret = -EIO;
                 break;
@@ -1587,14 +1570,13 @@ int pl330_chan_status(void *ch_id, struct pl330_chanstatus *pstatus)
         else
                 pstatus->faulting = false;
 
-        active = _thrd_active(thrd);
-
-        if (!active) {
+        active = thrd->req_running;
+        if (active == -1) {
                 /* Indicate that the thread is not running */
                 pstatus->top_req = NULL;
                 pstatus->wait_req = NULL;
         } else {
-                active--;
                 pstatus->top_req = thrd->req[active].r;
                 pstatus->wait_req = !IS_FREE(&thrd->req[1 - active])
                                 ? thrd->req[1 - active].r : NULL;
@@ -1659,9 +1641,9 @@ void *pl330_request_channel(const struct pl330_info *pi)
                         thrd->free = false;
                         thrd->lstenq = 1;
                         thrd->req[0].r = NULL;
-                        MARK_FREE(&thrd->req[0]);
+                        mark_free(thrd, 0);
                         thrd->req[1].r = NULL;
-                        MARK_FREE(&thrd->req[1]);
+                        mark_free(thrd, 1);
                         break;
                 }
         }
@@ -1767,14 +1749,14 @@ static inline void _reset_thread(struct pl330_thread *thrd)
         thrd->req[0].mc_bus = pl330->mcode_bus
                                 + (thrd->id * pi->mcbufsz);
         thrd->req[0].r = NULL;
-        MARK_FREE(&thrd->req[0]);
+        mark_free(thrd, 0);
 
         thrd->req[1].mc_cpu = thrd->req[0].mc_cpu
                                 + pi->mcbufsz / 2;
         thrd->req[1].mc_bus = thrd->req[0].mc_bus
                                 + pi->mcbufsz / 2;
         thrd->req[1].r = NULL;
-        MARK_FREE(&thrd->req[1]);
+        mark_free(thrd, 1);
 }
 
 static int dmac_alloc_threads(struct pl330_dmac *pl330)


@@ -18,9 +18,10 @@ CONFIG_ARCH_MXC=y
 CONFIG_ARCH_IMX_V4_V5=y
 CONFIG_ARCH_MX1ADS=y
 CONFIG_MACH_SCB9328=y
+CONFIG_MACH_APF9328=y
 CONFIG_MACH_MX21ADS=y
 CONFIG_MACH_MX25_3DS=y
-CONFIG_MACH_EUKREA_CPUIMX25=y
+CONFIG_MACH_EUKREA_CPUIMX25SD=y
 CONFIG_MACH_MX27ADS=y
 CONFIG_MACH_PCM038=y
 CONFIG_MACH_CPUIMX27=y
@@ -72,17 +73,16 @@ CONFIG_MTD_CFI_GEOMETRY=y
 CONFIG_MTD_CFI_INTELEXT=y
 CONFIG_MTD_PHYSMAP=y
 CONFIG_MTD_NAND=y
+CONFIG_MTD_NAND_MXC=y
 CONFIG_MTD_UBI=y
 CONFIG_MISC_DEVICES=y
 CONFIG_EEPROM_AT24=y
 CONFIG_EEPROM_AT25=y
 CONFIG_NETDEVICES=y
-CONFIG_NET_ETHERNET=y
-CONFIG_SMC91X=y
 CONFIG_DM9000=y
+CONFIG_SMC91X=y
 CONFIG_SMC911X=y
-# CONFIG_NETDEV_1000 is not set
-# CONFIG_NETDEV_10000 is not set
+CONFIG_SMSC_PHY=y
 # CONFIG_INPUT_MOUSEDEV is not set
 CONFIG_INPUT_EVDEV=y
 # CONFIG_INPUT_KEYBOARD is not set
@@ -100,6 +100,7 @@ CONFIG_I2C_CHARDEV=y
 CONFIG_I2C_IMX=y
 CONFIG_SPI=y
 CONFIG_SPI_IMX=y
+CONFIG_SPI_SPIDEV=y
 CONFIG_W1=y
 CONFIG_W1_MASTER_MXC=y
 CONFIG_W1_SLAVE_THERM=y
@@ -139,6 +140,7 @@ CONFIG_MMC=y
 CONFIG_MMC_MXC=y
 CONFIG_NEW_LEDS=y
 CONFIG_LEDS_CLASS=y
+CONFIG_LEDS_GPIO=y
 CONFIG_LEDS_MC13783=y
 CONFIG_LEDS_TRIGGERS=y
 CONFIG_LEDS_TRIGGER_TIMER=y


@@ -110,11 +110,6 @@ static struct map_desc exynos4_iodesc[] __initdata = {
                 .pfn            = __phys_to_pfn(EXYNOS4_PA_DMC0),
                 .length         = SZ_4K,
                 .type           = MT_DEVICE,
-        }, {
-                .virtual        = (unsigned long)S5P_VA_SROMC,
-                .pfn            = __phys_to_pfn(EXYNOS4_PA_SROMC),
-                .length         = SZ_4K,
-                .type           = MT_DEVICE,
         }, {
                 .virtual        = (unsigned long)S3C_VA_USB_HSPHY,
                 .pfn            = __phys_to_pfn(EXYNOS4_PA_HSPHY),


@@ -132,7 +132,7 @@ config MACH_MX25_3DS
         select IMX_HAVE_PLATFORM_MXC_NAND
         select IMX_HAVE_PLATFORM_SDHCI_ESDHC_IMX
 
-config MACH_EUKREA_CPUIMX25
+config MACH_EUKREA_CPUIMX25SD
         bool "Support Eukrea CPUIMX25 Platform"
         select SOC_IMX25
         select IMX_HAVE_PLATFORM_FLEXCAN
@@ -148,7 +148,7 @@ config MACH_EUKREA_CPUIMX25
 choice
         prompt "Baseboard"
-        depends on MACH_EUKREA_CPUIMX25
+        depends on MACH_EUKREA_CPUIMX25SD
         default MACH_EUKREA_MBIMXSD25_BASEBOARD
 
 config MACH_EUKREA_MBIMXSD25_BASEBOARD
@@ -542,7 +542,7 @@ config MACH_MX35_3DS
           Include support for MX35PDK platform. This includes specific
           configurations for the board and its peripherals.
 
-config MACH_EUKREA_CPUIMX35
+config MACH_EUKREA_CPUIMX35SD
         bool "Support Eukrea CPUIMX35 Platform"
         select SOC_IMX35
         select IMX_HAVE_PLATFORM_FLEXCAN
@@ -560,7 +560,7 @@ config MACH_EUKREA_CPUIMX35
 choice
         prompt "Baseboard"
-        depends on MACH_EUKREA_CPUIMX35
+        depends on MACH_EUKREA_CPUIMX35SD
         default MACH_EUKREA_MBIMXSD35_BASEBOARD
 
 config MACH_EUKREA_MBIMXSD35_BASEBOARD


@@ -24,7 +24,7 @@ obj-$(CONFIG_MACH_MX21ADS) += mach-mx21ads.o
 
 # i.MX25 based machines
 obj-$(CONFIG_MACH_MX25_3DS) += mach-mx25_3ds.o
-obj-$(CONFIG_MACH_EUKREA_CPUIMX25) += mach-eukrea_cpuimx25.o
+obj-$(CONFIG_MACH_EUKREA_CPUIMX25SD) += mach-eukrea_cpuimx25.o
 obj-$(CONFIG_MACH_EUKREA_MBIMXSD25_BASEBOARD) += eukrea_mbimxsd25-baseboard.o
 
 # i.MX27 based machines
@@ -57,7 +57,7 @@ obj-$(CONFIG_MACH_BUG) += mach-bug.o
 
 # i.MX35 based machines
 obj-$(CONFIG_MACH_PCM043) += mach-pcm043.o
 obj-$(CONFIG_MACH_MX35_3DS) += mach-mx35_3ds.o
-obj-$(CONFIG_MACH_EUKREA_CPUIMX35) += mach-cpuimx35.o
+obj-$(CONFIG_MACH_EUKREA_CPUIMX35SD) += mach-cpuimx35.o
 obj-$(CONFIG_MACH_EUKREA_MBIMXSD35_BASEBOARD) += eukrea_mbimxsd35-baseboard.o
 obj-$(CONFIG_MACH_VPR200) += mach-vpr200.o


@@ -507,7 +507,7 @@ static struct clk_lookup lookups[] = {
 
 int __init mx35_clocks_init()
 {
-        unsigned int cgr2 = 3 << 26, cgr3 = 0;
+        unsigned int cgr2 = 3 << 26;
 
 #if defined(CONFIG_DEBUG_LL) && !defined(CONFIG_DEBUG_ICEDCC)
         cgr2 |= 3 << 16;
@@ -521,6 +521,12 @@ int __init mx35_clocks_init()
         __raw_writel((3 << 18), CCM_BASE + CCM_CGR0);
         __raw_writel((3 << 2) | (3 << 4) | (3 << 6) | (3 << 8) | (3 << 16),
                         CCM_BASE + CCM_CGR1);
+        __raw_writel(cgr2, CCM_BASE + CCM_CGR2);
+        __raw_writel(0, CCM_BASE + CCM_CGR3);
+
+        clk_enable(&iim_clk);
+        imx_print_silicon_rev("i.MX35", mx35_revision());
+        clk_disable(&iim_clk);
 
         /*
          * Check if we came up in internal boot mode. If yes, we need some
@@ -529,17 +535,11 @@ int __init mx35_clocks_init()
          */
         if (!(__raw_readl(CCM_BASE + CCM_RCSR) & (3 << 10))) {
                 /* Additionally turn on UART1, SCC, and IIM clocks */
-                cgr2 |= 3 << 16 | 3 << 4;
-                cgr3 |= 3 << 2;
+                clk_enable(&iim_clk);
+                clk_enable(&uart1_clk);
+                clk_enable(&scc_clk);
         }
 
-        __raw_writel(cgr2, CCM_BASE + CCM_CGR2);
-        __raw_writel(cgr3, CCM_BASE + CCM_CGR3);
-
-        clk_enable(&iim_clk);
-        imx_print_silicon_rev("i.MX35", mx35_revision());
-        clk_disable(&iim_clk);
-
 #ifdef CONFIG_MXC_USE_EPIT
         epit_timer_init(&epit1_clk,
                         MX35_IO_ADDRESS(MX35_EPIT1_BASE_ADDR), MX35_INT_EPIT1);


@@ -53,12 +53,18 @@ static const struct imxi2c_platform_data
         .bitrate = 100000,
 };
 
+#define TSC2007_IRQGPIO		IMX_GPIO_NR(3, 2)
+
+static int tsc2007_get_pendown_state(void)
+{
+        return !gpio_get_value(TSC2007_IRQGPIO);
+}
+
 static struct tsc2007_platform_data tsc2007_info = {
         .model = 2007,
         .x_plate_ohms = 180,
+        .get_pendown_state = tsc2007_get_pendown_state,
 };
 
-#define TSC2007_IRQGPIO		IMX_GPIO_NR(3, 2)
-
 static struct i2c_board_info eukrea_cpuimx35_i2c_devices[] = {
         {
                 I2C_BOARD_INFO("pcf8563", 0x51),


@@ -3247,18 +3247,14 @@ static __initdata struct omap_hwmod *omap3xxx_hwmods[] = {
 
 /* 3430ES1-only hwmods */
 static __initdata struct omap_hwmod *omap3430es1_hwmods[] = {
-        &omap3xxx_iva_hwmod,
         &omap3430es1_dss_core_hwmod,
-        &omap3xxx_mailbox_hwmod,
         NULL
 };
 
 /* 3430ES2+-only hwmods */
 static __initdata struct omap_hwmod *omap3430es2plus_hwmods[] = {
-        &omap3xxx_iva_hwmod,
         &omap3xxx_dss_core_hwmod,
         &omap3xxx_usbhsotg_hwmod,
-        &omap3xxx_mailbox_hwmod,
         NULL
 };


@@ -363,11 +363,13 @@ __v7_setup:
         orreq   r10, r10, #1 << 6               @ set bit #6
         mcreq   p15, 0, r10, c15, c0, 1         @ write diagnostic register
 #endif
-#ifdef CONFIG_ARM_ERRATA_751472
-        cmp     r6, #0x30                       @ present prior to r3p0
+#if defined(CONFIG_ARM_ERRATA_751472) && defined(CONFIG_SMP)
+        ALT_SMP(cmp r6, #0x30)                  @ present prior to r3p0
+        ALT_UP_B(1f)
         mrclt   p15, 0, r10, c15, c0, 1         @ read diagnostic register
         orrlt   r10, r10, #1 << 11              @ set bit #11
         mcrlt   p15, 0, r10, c15, c0, 1         @ write diagnostic register
+1:
 #endif
 
 3:      mov     r10, #0


@@ -116,7 +116,7 @@ int __init oprofile_arch_init(struct oprofile_operations *ops)
         return oprofile_perf_init(ops);
 }
 
-void __exit oprofile_arch_exit(void)
+void oprofile_arch_exit(void)
 {
         oprofile_perf_exit();
 }


@@ -98,7 +98,7 @@ static int mxc_set_target(struct cpufreq_policy *policy,
         return ret;
 }
 
-static int __init mxc_cpufreq_init(struct cpufreq_policy *policy)
+static int mxc_cpufreq_init(struct cpufreq_policy *policy)
 {
         int ret;
         int i;


@@ -98,6 +98,7 @@ static __inline__ void __arch_decomp_setup(unsigned long arch_id)
         case MACH_TYPE_PCM043:
         case MACH_TYPE_LILLY1131:
         case MACH_TYPE_VPR200:
+        case MACH_TYPE_EUKREA_CPUIMX35SD:
                 uart_base = MX3X_UART1_BASE_ADDR;
                 break;
         case MACH_TYPE_MAGX_ZN5:


@@ -77,6 +77,15 @@ int pwm_config(struct pwm_device *pwm, int duty_ns, int period_ns)
                 do_div(c, period_ns);
                 duty_cycles = c;
 
+                /*
+                 * according to imx pwm RM, the real period value should be
+                 * PERIOD value in PWMPR plus 2.
+                 */
+                if (period_cycles > 2)
+                        period_cycles -= 2;
+                else
+                        period_cycles = 0;
+
                 writel(duty_cycles, pwm->mmio_base + MX3_PWMSAR);
                 writel(period_cycles, pwm->mmio_base + MX3_PWMPR);


@@ -384,12 +384,16 @@ void __init orion_gpio_init(int gpio_base, int ngpio,
         struct orion_gpio_chip *ochip;
         struct irq_chip_generic *gc;
         struct irq_chip_type *ct;
+        char gc_label[16];
 
         if (orion_gpio_chip_count == ARRAY_SIZE(orion_gpio_chips))
                 return;
 
+        snprintf(gc_label, sizeof(gc_label), "orion_gpio%d",
+                orion_gpio_chip_count);
+
         ochip = orion_gpio_chips + orion_gpio_chip_count;
-        ochip->chip.label = "orion_gpio";
+        ochip->chip.label = kstrdup(gc_label, GFP_KERNEL);
         ochip->chip.request = orion_gpio_request;
         ochip->chip.direction_input = orion_gpio_direction_input;
         ochip->chip.get = orion_gpio_get;


@@ -202,14 +202,6 @@ extern int s3c_plltab_register(struct cpufreq_frequency_table *plls,
 extern struct s3c_cpufreq_config *s3c_cpufreq_getconfig(void);
 extern struct s3c_iotimings *s3c_cpufreq_getiotimings(void);
 
-extern void s3c2410_iotiming_debugfs(struct seq_file *seq,
-                                     struct s3c_cpufreq_config *cfg,
-                                     union s3c_iobank *iob);
-
-extern void s3c2412_iotiming_debugfs(struct seq_file *seq,
-                                     struct s3c_cpufreq_config *cfg,
-                                     union s3c_iobank *iob);
-
 #ifdef CONFIG_CPU_FREQ_S3C24XX_DEBUGFS
 #define s3c_cpufreq_debugfs_call(x) x
 #else
@@ -226,6 +218,10 @@ extern void s3c2410_cpufreq_setrefresh(struct s3c_cpufreq_config *cfg);
 extern void s3c2410_set_fvco(struct s3c_cpufreq_config *cfg);
 
 #ifdef CONFIG_S3C2410_IOTIMING
+extern void s3c2410_iotiming_debugfs(struct seq_file *seq,
+                                     struct s3c_cpufreq_config *cfg,
+                                     union s3c_iobank *iob);
+
 extern int s3c2410_iotiming_calc(struct s3c_cpufreq_config *cfg,
                                  struct s3c_iotimings *iot);
@@ -235,6 +231,7 @@ extern int s3c2410_iotiming_get(struct s3c_cpufreq_config *cfg,
 extern void s3c2410_iotiming_set(struct s3c_cpufreq_config *cfg,
                                  struct s3c_iotimings *iot);
 #else
+#define s3c2410_iotiming_debugfs NULL
 #define s3c2410_iotiming_calc NULL
 #define s3c2410_iotiming_get NULL
 #define s3c2410_iotiming_set NULL
@@ -242,8 +239,10 @@ extern void s3c2410_iotiming_set(struct s3c_cpufreq_config *cfg,
 /* S3C2412 compatible routines */
 
-extern int s3c2412_iotiming_get(struct s3c_cpufreq_config *cfg,
-                                struct s3c_iotimings *timings);
+#ifdef CONFIG_S3C2412_IOTIMING
+extern void s3c2412_iotiming_debugfs(struct seq_file *seq,
+                                     struct s3c_cpufreq_config *cfg,
+                                     union s3c_iobank *iob);
 
 extern int s3c2412_iotiming_get(struct s3c_cpufreq_config *cfg,
                                 struct s3c_iotimings *timings);
@@ -253,6 +252,12 @@ extern int s3c2412_iotiming_calc(struct s3c_cpufreq_config *cfg,
 extern void s3c2412_iotiming_set(struct s3c_cpufreq_config *cfg,
                                  struct s3c_iotimings *iot);
+#else
+#define s3c2412_iotiming_debugfs NULL
+#define s3c2412_iotiming_calc NULL
+#define s3c2412_iotiming_get NULL
+#define s3c2412_iotiming_set NULL
+#endif /* CONFIG_S3C2412_IOTIMING */
 
 #ifdef CONFIG_CPU_FREQ_S3C24XX_DEBUG
 #define s3c_freq_dbg(x...) printk(KERN_INFO x)


@@ -60,6 +60,7 @@ typedef u64 cputime64_t;
  */
 #define cputime_to_usecs(__ct)          ((__ct) / NSEC_PER_USEC)
 #define usecs_to_cputime(__usecs)       ((__usecs) * NSEC_PER_USEC)
+#define usecs_to_cputime64(__usecs)     usecs_to_cputime(__usecs)
 
 /*
  * Convert cputime <-> seconds


@@ -150,6 +150,8 @@ static inline cputime_t usecs_to_cputime(const unsigned long us)
         return ct;
 }
 
+#define usecs_to_cputime64(us)          usecs_to_cputime(us)
+
 /*
  * Convert cputime <-> seconds
  */


@@ -381,39 +381,6 @@ static inline bool kvmppc_critical_section(struct kvm_vcpu *vcpu)
 }
 #endif
 
-static inline unsigned long compute_tlbie_rb(unsigned long v, unsigned long r,
-                                             unsigned long pte_index)
-{
-        unsigned long rb, va_low;
-
-        rb = (v & ~0x7fUL) << 16;               /* AVA field */
-        va_low = pte_index >> 3;
-        if (v & HPTE_V_SECONDARY)
-                va_low = ~va_low;
-        /* xor vsid from AVA */
-        if (!(v & HPTE_V_1TB_SEG))
-                va_low ^= v >> 12;
-        else
-                va_low ^= v >> 24;
-        va_low &= 0x7ff;
-        if (v & HPTE_V_LARGE) {
-                rb |= 1;                        /* L field */
-                if (cpu_has_feature(CPU_FTR_ARCH_206) &&
-                    (r & 0xff000)) {
-                        /* non-16MB large page, must be 64k */
-                        /* (masks depend on page size) */
-                        rb |= 0x1000;           /* page encoding in LP field */
-                        rb |= (va_low & 0x7f) << 16; /* 7b of VA in AVA/LP field */
-                        rb |= (va_low & 0xfe);  /* AVAL field (P7 doesn't seem to care) */
-                }
-        } else {
-                /* 4kB page */
-                rb |= (va_low & 0x7ff) << 12;   /* remaining 11b of VA */
-        }
-        rb |= (v >> 54) & 0x300;                /* B field */
-        return rb;
-}
-
 /* Magic register values loaded into r3 and r4 before the 'sc' assembly
  * instruction for the OSI hypercalls */
 #define OSI_SC_MAGIC_R3 0x113724FA


@@ -29,4 +29,37 @@ static inline struct kvmppc_book3s_shadow_vcpu *to_svcpu(struct kvm_vcpu *vcpu)
 
 #define SPAPR_TCE_SHIFT 12
 
+static inline unsigned long compute_tlbie_rb(unsigned long v, unsigned long r,
+                                             unsigned long pte_index)
+{
+        unsigned long rb, va_low;
+
+        rb = (v & ~0x7fUL) << 16;               /* AVA field */
+        va_low = pte_index >> 3;
+        if (v & HPTE_V_SECONDARY)
+                va_low = ~va_low;
+        /* xor vsid from AVA */
+        if (!(v & HPTE_V_1TB_SEG))
+                va_low ^= v >> 12;
+        else
+                va_low ^= v >> 24;
+        va_low &= 0x7ff;
+        if (v & HPTE_V_LARGE) {
+                rb |= 1;                        /* L field */
+                if (cpu_has_feature(CPU_FTR_ARCH_206) &&
+                    (r & 0xff000)) {
+                        /* non-16MB large page, must be 64k */
+                        /* (masks depend on page size) */
+                        rb |= 0x1000;           /* page encoding in LP field */
+                        rb |= (va_low & 0x7f) << 16; /* 7b of VA in AVA/LP field */
+                        rb |= (va_low & 0xfe);  /* AVAL field (P7 doesn't seem to care) */
+                }
+        } else {
+                /* 4kB page */
+                rb |= (va_low & 0x7ff) << 12;   /* remaining 11b of VA */
+        }
+        rb |= (v >> 54) & 0x300;                /* B field */
+        return rb;
+}
+
 #endif /* __ASM_KVM_BOOK3S_64_H__ */


@@ -538,7 +538,7 @@ static void kvmppc_start_thread(struct kvm_vcpu *vcpu)
         tpaca->kvm_hstate.napping = 0;
         vcpu->cpu = vc->pcpu;
         smp_wmb();
-#ifdef CONFIG_PPC_ICP_NATIVE
+#if defined(CONFIG_PPC_ICP_NATIVE) && defined(CONFIG_SMP)
         if (vcpu->arch.ptid) {
                 tpaca->cpu_start = 0x80;
                 wmb();


@@ -658,10 +658,12 @@ program_interrupt:
                         ulong cmd = kvmppc_get_gpr(vcpu, 3);
                         int i;
 
+#ifdef CONFIG_KVM_BOOK3S_64_PR
                         if (kvmppc_h_pr(vcpu, cmd) == EMULATE_DONE) {
                                 r = RESUME_GUEST;
                                 break;
                         }
+#endif
 
                         run->papr_hcall.nr = cmd;
                         for (i = 0; i < 9; ++i) {

@@ -15,6 +15,7 @@
 #include <linux/kvm_host.h>
 #include <linux/slab.h>
 #include <linux/err.h>
+#include <linux/export.h>
 #include <asm/reg.h>
 #include <asm/cputable.h>

@@ -87,6 +87,8 @@ usecs_to_cputime(const unsigned int m)
 	return (cputime_t) m * 4096;
 }
+#define usecs_to_cputime64(m)	usecs_to_cputime(m)
 /*
  * Convert cputime to milliseconds and back.
  */

@@ -49,7 +49,7 @@ int __init oprofile_arch_init(struct oprofile_operations *ops)
 	return oprofile_perf_init(ops);
 }
-void __exit oprofile_arch_exit(void)
+void oprofile_arch_exit(void)
 {
 	oprofile_perf_exit();
 	kfree(sh_pmu_op_name);
@@ -60,5 +60,5 @@ int __init oprofile_arch_init(struct oprofile_operations *ops)
 	ops->backtrace = sh_backtrace;
 	return -ENODEV;
 }
-void __exit oprofile_arch_exit(void) {}
+void oprofile_arch_exit(void) {}
 #endif /* CONFIG_HW_PERF_EVENTS */

@@ -1169,7 +1169,7 @@ again:
 		 */
 		c = &unconstrained;
 	} else if (intel_try_alt_er(event, orig_idx)) {
-		raw_spin_unlock(&era->lock);
+		raw_spin_unlock_irqrestore(&era->lock, flags);
 		goto again;
 	}
 	raw_spin_unlock_irqrestore(&era->lock, flags);

@@ -338,11 +338,15 @@ static enum hrtimer_restart pit_timer_fn(struct hrtimer *data)
 	return HRTIMER_NORESTART;
 }
-static void create_pit_timer(struct kvm_kpit_state *ps, u32 val, int is_period)
+static void create_pit_timer(struct kvm *kvm, u32 val, int is_period)
 {
+	struct kvm_kpit_state *ps = &kvm->arch.vpit->pit_state;
 	struct kvm_timer *pt = &ps->pit_timer;
 	s64 interval;
+	if (!irqchip_in_kernel(kvm))
+		return;
 	interval = muldiv64(val, NSEC_PER_SEC, KVM_PIT_FREQ);
 	pr_debug("create pit timer, interval is %llu nsec\n", interval);
@@ -394,13 +398,13 @@ static void pit_load_count(struct kvm *kvm, int channel, u32 val)
 		/* FIXME: enhance mode 4 precision */
 	case 4:
 		if (!(ps->flags & KVM_PIT_FLAGS_HPET_LEGACY)) {
-			create_pit_timer(ps, val, 0);
+			create_pit_timer(kvm, val, 0);
 		}
 		break;
 	case 2:
 	case 3:
 		if (!(ps->flags & KVM_PIT_FLAGS_HPET_LEGACY)){
-			create_pit_timer(ps, val, 1);
+			create_pit_timer(kvm, val, 1);
 		}
 		break;
 	default:

@@ -602,7 +602,6 @@ static void update_cpuid(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpuid_entry2 *best;
 	struct kvm_lapic *apic = vcpu->arch.apic;
-	u32 timer_mode_mask;
 	best = kvm_find_cpuid_entry(vcpu, 1, 0);
 	if (!best)
@@ -615,15 +614,12 @@ static void update_cpuid(struct kvm_vcpu *vcpu)
 		best->ecx |= bit(X86_FEATURE_OSXSAVE);
 	}
-	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
-		best->function == 0x1) {
-		best->ecx |= bit(X86_FEATURE_TSC_DEADLINE_TIMER);
-		timer_mode_mask = 3 << 17;
-	} else
-		timer_mode_mask = 1 << 17;
-	if (apic)
-		apic->lapic_timer.timer_mode_mask = timer_mode_mask;
+	if (apic) {
+		if (best->ecx & bit(X86_FEATURE_TSC_DEADLINE_TIMER))
+			apic->lapic_timer.timer_mode_mask = 3 << 17;
+		else
+			apic->lapic_timer.timer_mode_mask = 1 << 17;
+	}
 }
 int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
@@ -2135,6 +2131,9 @@ int kvm_dev_ioctl_check_extension(long ext)
 	case KVM_CAP_TSC_CONTROL:
 		r = kvm_has_tsc_control;
 		break;
+	case KVM_CAP_TSC_DEADLINE_TIMER:
+		r = boot_cpu_has(X86_FEATURE_TSC_DEADLINE_TIMER);
+		break;
 	default:
 		r = 0;
 		break;

@@ -311,7 +311,7 @@ int blk_rq_map_kern(struct request_queue *q, struct request *rq, void *kbuf,
 	if (IS_ERR(bio))
 		return PTR_ERR(bio);
-	if (rq_data_dir(rq) == WRITE)
+	if (!reading)
 		bio->bi_rw |= REQ_WRITE;
 	if (do_copy)

@@ -282,18 +282,9 @@ EXPORT_SYMBOL(blk_queue_resize_tags);
 void blk_queue_end_tag(struct request_queue *q, struct request *rq)
 {
 	struct blk_queue_tag *bqt = q->queue_tags;
-	int tag = rq->tag;
-	BUG_ON(tag == -1);
-	if (unlikely(tag >= bqt->max_depth)) {
-		/*
-		 * This can happen after tag depth has been reduced.
-		 * But tag shouldn't be larger than real_max_depth.
-		 */
-		WARN_ON(tag >= bqt->real_max_depth);
-		return;
-	}
+	unsigned tag = rq->tag; /* negative tags invalid */
+	BUG_ON(tag >= bqt->real_max_depth);
 	list_del_init(&rq->queuelist);
 	rq->cmd_flags &= ~REQ_QUEUED;

@@ -1655,6 +1655,8 @@ cfq_merged_requests(struct request_queue *q, struct request *rq,
 		    struct request *next)
 {
 	struct cfq_queue *cfqq = RQ_CFQQ(rq);
+	struct cfq_data *cfqd = q->elevator->elevator_data;
 	/*
	 * reposition in fifo if next is older than rq
	 */
@@ -1669,6 +1671,16 @@ cfq_merged_requests(struct request_queue *q, struct request *rq,
 	cfq_remove_request(next);
 	cfq_blkiocg_update_io_merged_stats(&(RQ_CFQG(rq))->blkg,
 					rq_data_dir(next), rq_is_sync(next));
+	cfqq = RQ_CFQQ(next);
+	/*
+	 * all requests of this queue are merged to other queues, delete it
+	 * from the service tree. If it's the active_queue,
+	 * cfq_dispatch_requests() will choose to expire it or do idle
+	 */
+	if (cfq_cfqq_on_rr(cfqq) && RB_EMPTY_ROOT(&cfqq->sort_list) &&
+	    cfqq != cfqd->active_queue)
+		cfq_del_cfqq_rr(cfqd, cfqq);
 }
 static int cfq_allow_merge(struct request_queue *q, struct request *rq,

@@ -124,7 +124,7 @@ config MV_XOR
 config MX3_IPU
 	bool "MX3x Image Processing Unit support"
-	depends on ARCH_MX3
+	depends on SOC_IMX31 || SOC_IMX35
 	select DMA_ENGINE
 	default y
 	help
@@ -216,7 +216,7 @@ config PCH_DMA
 config IMX_SDMA
 	tristate "i.MX SDMA support"
-	depends on ARCH_MX25 || ARCH_MX3 || ARCH_MX5
+	depends on ARCH_MX25 || SOC_IMX31 || SOC_IMX35 || ARCH_MX5
 	select DMA_ENGINE
 	help
 	  Support the i.MX SDMA engine. This engine is integrated into

@@ -756,9 +756,9 @@ intel_enable_semaphores(struct drm_device *dev)
 	if (i915_semaphores >= 0)
 		return i915_semaphores;
-	/* Enable semaphores on SNB when IO remapping is off */
+	/* Disable semaphores on SNB */
 	if (INTEL_INFO(dev)->gen == 6)
-		return !intel_iommu_enabled;
+		return 0;
 	return 1;
 }

@@ -7922,13 +7922,11 @@ static bool intel_enable_rc6(struct drm_device *dev)
 		return 0;
 	/*
-	 * Enable rc6 on Sandybridge if DMA remapping is disabled
+	 * Disable rc6 on Sandybridge
 	 */
 	if (INTEL_INFO(dev)->gen == 6) {
-		DRM_DEBUG_DRIVER("Sandybridge: intel_iommu_enabled %s -- RC6 %sabled\n",
-				 intel_iommu_enabled ? "true" : "false",
-				 !intel_iommu_enabled ? "en" : "dis");
-		return !intel_iommu_enabled;
+		DRM_DEBUG_DRIVER("Sandybridge: RC6 disabled\n");
+		return 0;
 	}
 	DRM_DEBUG_DRIVER("RC6 enabled\n");
 	return 1;

@@ -3276,6 +3276,18 @@ int evergreen_init(struct radeon_device *rdev)
 			rdev->accel_working = false;
 		}
 	}
+	/* Don't start up if the MC ucode is missing on BTC parts.
+	 * The default clocks and voltages before the MC ucode
+	 * is loaded are not suffient for advanced operations.
+	 */
+	if (ASIC_IS_DCE5(rdev)) {
+		if (!rdev->mc_fw && !(rdev->flags & RADEON_IS_IGP)) {
+			DRM_ERROR("radeon: MC ucode required for NI+.\n");
+			return -EINVAL;
+		}
+	}
 	return 0;
 }

@@ -2560,7 +2560,11 @@ void radeon_atombios_get_power_modes(struct radeon_device *rdev)
 	rdev->pm.current_power_state_index = rdev->pm.default_power_state_index;
 	rdev->pm.current_clock_mode_index = 0;
-	rdev->pm.current_vddc = rdev->pm.power_state[rdev->pm.default_power_state_index].clock_info[0].voltage.voltage;
+	if (rdev->pm.default_power_state_index >= 0)
+		rdev->pm.current_vddc =
+			rdev->pm.power_state[rdev->pm.default_power_state_index].clock_info[0].voltage.voltage;
+	else
+		rdev->pm.current_vddc = 0;
 }
 void radeon_atom_set_clock_gating(struct radeon_device *rdev, int enable)

@@ -1093,7 +1093,6 @@ static struct drm_framebuffer *vmw_kms_fb_create(struct drm_device *dev,
 	struct vmw_surface *surface = NULL;
 	struct vmw_dma_buffer *bo = NULL;
 	struct ttm_base_object *user_obj;
-	u64 required_size;
 	int ret;
 	/**
@@ -1102,8 +1101,9 @@ static struct drm_framebuffer *vmw_kms_fb_create(struct drm_device *dev,
 	 * requested framebuffer.
 	 */
-	required_size = mode_cmd->pitch * mode_cmd->height;
-	if (unlikely(required_size > (u64) dev_priv->vram_size)) {
+	if (!vmw_kms_validate_mode_vram(dev_priv,
+					mode_cmd->pitch,
+					mode_cmd->height)) {
 		DRM_ERROR("VRAM size is too small for requested mode.\n");
 		return ERR_PTR(-ENOMEM);
 	}

@@ -2,7 +2,7 @@
  * Finger Sensing Pad PS/2 mouse driver.
  *
  * Copyright (C) 2005-2007 Asia Vital Components Co., Ltd.
- * Copyright (C) 2005-2010 Tai-hwa Liang, Sentelic Corporation.
+ * Copyright (C) 2005-2011 Tai-hwa Liang, Sentelic Corporation.
  *
  * This program is free software; you can redistribute it and/or
  * modify it under the terms of the GNU General Public License
@@ -162,7 +162,7 @@ static int fsp_reg_write(struct psmouse *psmouse, int reg_addr, int reg_val)
 	ps2_sendbyte(ps2dev, v, FSP_CMD_TIMEOUT2);
 	if (ps2_sendbyte(ps2dev, 0xf3, FSP_CMD_TIMEOUT) < 0)
-		return -1;
+		goto out;
 	if ((v = fsp_test_invert_cmd(reg_val)) != reg_val) {
 		/* inversion is required */
@@ -261,7 +261,7 @@ static int fsp_page_reg_write(struct psmouse *psmouse, int reg_val)
 	ps2_sendbyte(ps2dev, 0x88, FSP_CMD_TIMEOUT2);
 	if (ps2_sendbyte(ps2dev, 0xf3, FSP_CMD_TIMEOUT) < 0)
-		return -1;
+		goto out;
 	if ((v = fsp_test_invert_cmd(reg_val)) != reg_val) {
 		ps2_sendbyte(ps2dev, 0x47, FSP_CMD_TIMEOUT2);
@@ -309,7 +309,7 @@ static int fsp_get_buttons(struct psmouse *psmouse, int *btn)
 	};
 	int val;
-	if (fsp_reg_read(psmouse, FSP_REG_TMOD_STATUS1, &val) == -1)
+	if (fsp_reg_read(psmouse, FSP_REG_TMOD_STATUS, &val) == -1)
 		return -EIO;
 	*btn = buttons[(val & 0x30) >> 4];

@@ -2,7 +2,7 @@
  * Finger Sensing Pad PS/2 mouse driver.
  *
  * Copyright (C) 2005-2007 Asia Vital Components Co., Ltd.
- * Copyright (C) 2005-2009 Tai-hwa Liang, Sentelic Corporation.
+ * Copyright (C) 2005-2011 Tai-hwa Liang, Sentelic Corporation.
  *
  * This program is free software; you can redistribute it and/or
  * modify it under the terms of the GNU General Public License
@@ -33,6 +33,7 @@
 /* Finger-sensing Pad control registers */
 #define FSP_REG_SYSCTL1		0x10
 #define FSP_BIT_EN_REG_CLK	BIT(5)
+#define FSP_REG_TMOD_STATUS	0x20
 #define FSP_REG_OPC_QDOWN	0x31
 #define FSP_BIT_EN_OPC_TAG	BIT(7)
 #define FSP_REG_OPTZ_XLO	0x34

@@ -90,7 +90,7 @@ struct iommu_domain *iommu_domain_alloc(struct bus_type *bus)
 	if (bus == NULL || bus->iommu_ops == NULL)
 		return NULL;
-	domain = kmalloc(sizeof(*domain), GFP_KERNEL);
+	domain = kzalloc(sizeof(*domain), GFP_KERNEL);
 	if (!domain)
 		return NULL;

@@ -983,7 +983,7 @@ retry:
 			ret = -EIO;
 			goto out;
 		}
-		alt = ep_tb[--alt_idx].alt;
+		gspca_dev->alt = ep_tb[--alt_idx].alt;
 	}
 }
 out:

@@ -675,7 +675,8 @@ mmci_data_irq(struct mmci_host *host, struct mmc_data *data,
 	      unsigned int status)
 {
 	/* First check for errors */
-	if (status & (MCI_DATACRCFAIL|MCI_DATATIMEOUT|MCI_TXUNDERRUN|MCI_RXOVERRUN)) {
+	if (status & (MCI_DATACRCFAIL|MCI_DATATIMEOUT|MCI_STARTBITERR|
+		      MCI_TXUNDERRUN|MCI_RXOVERRUN)) {
 		u32 remain, success;
 		/* Terminate the DMA transfer */
@@ -754,8 +755,12 @@ mmci_cmd_irq(struct mmci_host *host, struct mmc_command *cmd,
 	}
 	if (!cmd->data || cmd->error) {
-		if (host->data)
+		if (host->data) {
+			/* Terminate the DMA transfer */
+			if (dma_inprogress(host))
+				mmci_dma_data_error(host);
 			mmci_stop_data(host);
+		}
 		mmci_request_end(host, cmd->mrq);
 	} else if (!(cmd->data->flags & MMC_DATA_READ)) {
 		mmci_start_data(host, cmd->data);
@@ -955,8 +960,9 @@ static irqreturn_t mmci_irq(int irq, void *dev_id)
 		dev_dbg(mmc_dev(host->mmc), "irq0 (data+cmd) %08x\n", status);
 		data = host->data;
-		if (status & (MCI_DATACRCFAIL|MCI_DATATIMEOUT|MCI_TXUNDERRUN|
-			      MCI_RXOVERRUN|MCI_DATAEND|MCI_DATABLOCKEND) && data)
+		if (status & (MCI_DATACRCFAIL|MCI_DATATIMEOUT|MCI_STARTBITERR|
+			      MCI_TXUNDERRUN|MCI_RXOVERRUN|MCI_DATAEND|
+			      MCI_DATABLOCKEND) && data)
 			mmci_data_irq(host, data, status);
 		cmd = host->cmd;

@@ -23,8 +23,8 @@ if NET_VENDOR_FREESCALE
 config FEC
 	bool "FEC ethernet controller (of ColdFire and some i.MX CPUs)"
 	depends on (M523x || M527x || M5272 || M528x || M520x || M532x || \
-		ARCH_MXC || ARCH_MXS)
-	default ARCH_MXC || ARCH_MXS if ARM
+		ARCH_MXC || SOC_IMX28)
+	default ARCH_MXC || SOC_IMX28 if ARM
 	select PHYLIB
 	---help---
 	  Say Y here if you want to use the built-in 10/100 Fast ethernet

@@ -2606,6 +2606,9 @@ static int skge_up(struct net_device *dev)
 	spin_unlock_irq(&hw->hw_lock);
 	napi_enable(&skge->napi);
+	skge_set_multicast(dev);
 	return 0;
 free_tx_ring:

@@ -147,6 +147,7 @@ void mlx4_en_destroy_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq *cq)
 	mlx4_free_hwq_res(mdev->dev, &cq->wqres, cq->buf_size);
 	if (priv->mdev->dev->caps.comp_pool && cq->vector)
 		mlx4_release_eq(priv->mdev->dev, cq->vector);
+	cq->vector = 0;
 	cq->buf_size = 0;
 	cq->buf = NULL;
 }

@@ -1843,6 +1843,9 @@ static void ath9k_sta_notify(struct ieee80211_hw *hw,
 	struct ath_softc *sc = hw->priv;
 	struct ath_node *an = (struct ath_node *) sta->drv_priv;
+	if (!(sc->sc_flags & SC_OP_TXAGGR))
+		return;
 	switch (cmd) {
 	case STA_NOTIFY_SLEEP:
 		an->sleeping = true;

@@ -617,9 +617,19 @@ static bool pio_rx_frame(struct b43_pio_rxqueue *q)
 	const char *err_msg = NULL;
 	struct b43_rxhdr_fw4 *rxhdr =
		(struct b43_rxhdr_fw4 *)wl->pio_scratchspace;
+	size_t rxhdr_size = sizeof(*rxhdr);
 	BUILD_BUG_ON(sizeof(wl->pio_scratchspace) < sizeof(*rxhdr));
-	memset(rxhdr, 0, sizeof(*rxhdr));
+	switch (dev->fw.hdr_format) {
+	case B43_FW_HDR_410:
+	case B43_FW_HDR_351:
+		rxhdr_size -= sizeof(rxhdr->format_598) -
+			sizeof(rxhdr->format_351);
+		break;
+	case B43_FW_HDR_598:
+		break;
+	}
+	memset(rxhdr, 0, rxhdr_size);
 	/* Check if we have data and wait for it to get ready. */
 	if (q->rev >= 8) {
@@ -657,11 +667,11 @@ data_ready:
 	/* Get the preamble (RX header) */
 	if (q->rev >= 8) {
-		b43_block_read(dev, rxhdr, sizeof(*rxhdr),
+		b43_block_read(dev, rxhdr, rxhdr_size,
			       q->mmio_base + B43_PIO8_RXDATA,
			       sizeof(u32));
 	} else {
-		b43_block_read(dev, rxhdr, sizeof(*rxhdr),
+		b43_block_read(dev, rxhdr, rxhdr_size,
			       q->mmio_base + B43_PIO_RXDATA,
			       sizeof(u16));
 	}

@@ -55,9 +55,14 @@ int mwifiex_wait_queue_complete(struct mwifiex_adapter *adapter)
 {
 	bool cancel_flag = false;
 	int status = adapter->cmd_wait_q.status;
-	struct cmd_ctrl_node *cmd_queued = adapter->cmd_queued;
+	struct cmd_ctrl_node *cmd_queued;
+	if (!adapter->cmd_queued)
+		return 0;
+	cmd_queued = adapter->cmd_queued;
 	adapter->cmd_queued = NULL;
 	dev_dbg(adapter->dev, "cmd pending\n");
 	atomic_inc(&adapter->cmd_pending);

@@ -314,7 +314,7 @@ static const struct of_dev_auxdata *of_dev_lookup(const struct of_dev_auxdata *l
 	if (!lookup)
 		return NULL;
-	for(; lookup->name != NULL; lookup++) {
+	for(; lookup->compatible != NULL; lookup++) {
 		if (!of_device_is_compatible(np, lookup->compatible))
			continue;
 		if (of_address_to_resource(np, 0, &res))

@@ -73,8 +73,6 @@ int rtc_set_time(struct rtc_device *rtc, struct rtc_time *tm)
 		err = -EINVAL;
 	mutex_unlock(&rtc->ops_lock);
-	/* A timer might have just expired */
-	schedule_work(&rtc->irqwork);
 	return err;
 }
 EXPORT_SYMBOL_GPL(rtc_set_time);
@@ -114,8 +112,6 @@ int rtc_set_mmss(struct rtc_device *rtc, unsigned long secs)
 		err = -EINVAL;
 	mutex_unlock(&rtc->ops_lock);
-	/* A timer might have just expired */
-	schedule_work(&rtc->irqwork);
 	return err;
 }
@@ -323,20 +319,6 @@ int rtc_read_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm)
 }
 EXPORT_SYMBOL_GPL(rtc_read_alarm);
-static int ___rtc_set_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm)
-{
-	int err;
-	if (!rtc->ops)
-		err = -ENODEV;
-	else if (!rtc->ops->set_alarm)
-		err = -EINVAL;
-	else
-		err = rtc->ops->set_alarm(rtc->dev.parent, alarm);
-	return err;
-}
 static int __rtc_set_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm)
 {
 	struct rtc_time tm;
@@ -360,7 +342,14 @@ static int __rtc_set_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm)
	 * over right here, before we set the alarm.
	 */
-	return ___rtc_set_alarm(rtc, alarm);
+	if (!rtc->ops)
+		err = -ENODEV;
+	else if (!rtc->ops->set_alarm)
+		err = -EINVAL;
+	else
+		err = rtc->ops->set_alarm(rtc->dev.parent, alarm);
+	return err;
 }
 int rtc_set_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm)
@@ -407,8 +396,6 @@ int rtc_initialize_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm)
 		timerqueue_add(&rtc->timerqueue, &rtc->aie_timer.node);
 	}
 	mutex_unlock(&rtc->ops_lock);
-	/* maybe that was in the past.*/
-	schedule_work(&rtc->irqwork);
 	return err;
 }
 EXPORT_SYMBOL_GPL(rtc_initialize_alarm);
@@ -776,20 +763,6 @@ static int rtc_timer_enqueue(struct rtc_device *rtc, struct rtc_timer *timer)
 	return 0;
 }
-static void rtc_alarm_disable(struct rtc_device *rtc)
-{
-	struct rtc_wkalrm alarm;
-	struct rtc_time tm;
-	__rtc_read_time(rtc, &tm);
-	alarm.time = rtc_ktime_to_tm(ktime_add(rtc_tm_to_ktime(tm),
-				     ktime_set(300, 0)));
-	alarm.enabled = 0;
-	___rtc_set_alarm(rtc, &alarm);
-}
 /**
  * rtc_timer_remove - Removes a rtc_timer from the rtc_device timerqueue
  * @rtc rtc device
@@ -811,10 +784,8 @@ static void rtc_timer_remove(struct rtc_device *rtc, struct rtc_timer *timer)
 		struct rtc_wkalrm alarm;
 		int err;
 		next = timerqueue_getnext(&rtc->timerqueue);
-		if (!next) {
-			rtc_alarm_disable(rtc);
+		if (!next)
 			return;
-		}
 		alarm.time = rtc_ktime_to_tm(next->expires);
 		alarm.enabled = 1;
 		err = __rtc_set_alarm(rtc, &alarm);
@@ -876,8 +847,7 @@ again:
 		err = __rtc_set_alarm(rtc, &alarm);
 		if (err == -ETIME)
 			goto again;
-	} else
-		rtc_alarm_disable(rtc);
+	}
 	mutex_unlock(&rtc->ops_lock);
 }

@@ -76,8 +76,6 @@ static int irq;
 static void __iomem *virtbase;
 static unsigned long coh901327_users;
 static unsigned long boot_status;
-static u16 wdogenablestore;
-static u16 irqmaskstore;
 static struct device *parent;
 /*
@@ -461,6 +459,10 @@ out:
 }
 #ifdef CONFIG_PM
+static u16 wdogenablestore;
+static u16 irqmaskstore;
 static int coh901327_suspend(struct platform_device *pdev, pm_message_t state)
 {
 	irqmaskstore = readw(virtbase + U300_WDOG_IMR) & 0x0001U;

@@ -231,6 +231,7 @@ static int __devinit cru_detect(unsigned long map_entry,
 	cmn_regs.u1.reax = CRU_BIOS_SIGNATURE_VALUE;
+	set_memory_x((unsigned long)bios32_entrypoint, (2 * PAGE_SIZE));
 	asminline_call(&cmn_regs, bios32_entrypoint);
 	if (cmn_regs.u1.ral != 0) {
@@ -248,8 +249,10 @@ static int __devinit cru_detect(unsigned long map_entry,
 	if ((physical_bios_base + physical_bios_offset)) {
 		cru_rom_addr =
			ioremap(cru_physical_address, cru_length);
-		if (cru_rom_addr)
+		if (cru_rom_addr) {
+			set_memory_x((unsigned long)cru_rom_addr, cru_length);
 			retval = 0;
+		}
 	}
 	printk(KERN_DEBUG "hpwdt: CRU Base Address: 0x%lx\n",

@@ -384,10 +384,10 @@ MODULE_PARM_DESC(nowayout,
	"Watchdog cannot be stopped once started (default="
				__MODULE_STRING(WATCHDOG_NOWAYOUT) ")");
-static int turn_SMI_watchdog_clear_off = 0;
+static int turn_SMI_watchdog_clear_off = 1;
 module_param(turn_SMI_watchdog_clear_off, int, 0);
 MODULE_PARM_DESC(turn_SMI_watchdog_clear_off,
-	"Turn off SMI clearing watchdog (default=0)");
+	"Turn off SMI clearing watchdog (depends on TCO-version)(default=1)");
 /*
  * Some TCO specific functions
@@ -813,7 +813,7 @@ static int __devinit iTCO_wdt_init(struct pci_dev *pdev,
 		ret = -EIO;
 		goto out_unmap;
 	}
-	if (turn_SMI_watchdog_clear_off) {
+	if (turn_SMI_watchdog_clear_off >= iTCO_wdt_private.iTCO_version) {
 		/* Bit 13: TCO_EN -> 0 = Disables TCO logic generating an SMI# */
 		val32 = inl(SMI_EN);
 		val32 &= 0xffffdfff;	/* Turn off SMI clearing watchdog */

@@ -351,7 +351,7 @@ static int __devexit sp805_wdt_remove(struct amba_device *adev)
 	return 0;
 }
-static struct amba_id sp805_wdt_ids[] __initdata = {
+static struct amba_id sp805_wdt_ids[] = {
 	{
		.id	= 0x00141805,
		.mask	= 0x00ffffff,

@@ -1094,42 +1094,19 @@ static int ceph_snapdir_d_revalidate(struct dentry *dentry,
 /*
  * Set/clear/test dir complete flag on the dir's dentry.
  */
-static struct dentry * __d_find_any_alias(struct inode *inode)
-{
-	struct dentry *alias;
-	if (list_empty(&inode->i_dentry))
-		return NULL;
-	alias = list_first_entry(&inode->i_dentry, struct dentry, d_alias);
-	return alias;
-}
 void ceph_dir_set_complete(struct inode *inode)
 {
-	struct dentry *dentry = __d_find_any_alias(inode);
-	if (dentry && ceph_dentry(dentry)) {
-		dout(" marking %p (%p) complete\n", inode, dentry);
-		set_bit(CEPH_D_COMPLETE, &ceph_dentry(dentry)->flags);
-	}
+	/* not yet implemented */
 }
 void ceph_dir_clear_complete(struct inode *inode)
 {
-	struct dentry *dentry = __d_find_any_alias(inode);
-	if (dentry && ceph_dentry(dentry)) {
-		dout(" marking %p (%p) NOT complete\n", inode, dentry);
-		clear_bit(CEPH_D_COMPLETE, &ceph_dentry(dentry)->flags);
-	}
+	/* not yet implemented */
 }
 bool ceph_dir_test_complete(struct inode *inode)
 {
-	struct dentry *dentry = __d_find_any_alias(inode);
-	if (dentry && ceph_dentry(dentry))
-		return test_bit(CEPH_D_COMPLETE, &ceph_dentry(dentry)->flags);
+	/* not yet implemented */
 	return false;
 }

@@ -282,7 +282,7 @@ static int coalesce_t2(struct smb_hdr *psecond, struct smb_hdr *pTargetSMB)
 	byte_count = be32_to_cpu(pTargetSMB->smb_buf_length);
 	byte_count += total_in_buf2;
 	/* don't allow buffer to overflow */
-	if (byte_count > CIFSMaxBufSize)
+	if (byte_count > CIFSMaxBufSize + MAX_CIFS_HDR_SIZE - 4)
 		return -ENOBUFS;
 	pTargetSMB->smb_buf_length = cpu_to_be32(byte_count);
@@ -2122,7 +2122,7 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb_vol *volume_info)
 		warned_on_ntlm = true;
 		cERROR(1, "default security mechanism requested. The default "
			"security mechanism will be upgraded from ntlm to "
-			"ntlmv2 in kernel release 3.2");
+			"ntlmv2 in kernel release 3.3");
 	}
 	ses->overrideSecFlg = volume_info->secFlg;

@@ -1205,6 +1205,8 @@ int __break_lease(struct inode *inode, unsigned int mode)
 	int want_write = (mode & O_ACCMODE) != O_RDONLY;
 	new_fl = lease_alloc(NULL, want_write ? F_WRLCK : F_RDLCK);
+	if (IS_ERR(new_fl))
+		return PTR_ERR(new_fl);
 	lock_flocks();
@@ -1221,12 +1223,6 @@ int __break_lease(struct inode *inode, unsigned int mode)
 		if (fl->fl_owner == current->files)
			i_have_this_lease = 1;
-	if (IS_ERR(new_fl) && !i_have_this_lease
-			&& ((mode & O_NONBLOCK) == 0)) {
-		error = PTR_ERR(new_fl);
-		goto out;
-	}
 	break_time = 0;
 	if (lease_break_time > 0) {
		break_time = jiffies + lease_break_time * HZ;
@@ -1284,8 +1280,7 @@ restart:
 out:
 	unlock_flocks();
-	if (!IS_ERR(new_fl))
-		locks_free_lock(new_fl);
+	locks_free_lock(new_fl);
 	return error;
 }

@@ -263,23 +263,6 @@ static int minix_fill_super(struct super_block *s, void *data, int silent)
 		goto out_no_root;
 	}
-	ret = -ENOMEM;
-	s->s_root = d_alloc_root(root_inode);
-	if (!s->s_root)
-		goto out_iput;
-	if (!(s->s_flags & MS_RDONLY)) {
-		if (sbi->s_version != MINIX_V3) /* s_state is now out from V3 sb */
-			ms->s_state &= ~MINIX_VALID_FS;
-		mark_buffer_dirty(bh);
-	}
-	if (!(sbi->s_mount_state & MINIX_VALID_FS))
-		printk("MINIX-fs: mounting unchecked file system, "
-			"running fsck is recommended\n");
-	else if (sbi->s_mount_state & MINIX_ERROR_FS)
-		printk("MINIX-fs: mounting file system with errors, "
-			"running fsck is recommended\n");
 	/* Apparently minix can create filesystems that allocate more blocks for
	 * the bitmaps than needed. We simply ignore that, but verify it didn't
	 * create one with not enough blocks and bail out if so.
@@ -300,6 +283,23 @@ static int minix_fill_super(struct super_block *s, void *data, int silent)
 		goto out_iput;
 	}
+	ret = -ENOMEM;
+	s->s_root = d_alloc_root(root_inode);
+	if (!s->s_root)
+		goto out_iput;
+	if (!(s->s_flags & MS_RDONLY)) {
+		if (sbi->s_version != MINIX_V3) /* s_state is now out from V3 sb */
+			ms->s_state &= ~MINIX_VALID_FS;
+		mark_buffer_dirty(bh);
+	}
+	if (!(sbi->s_mount_state & MINIX_VALID_FS))
+		printk("MINIX-fs: mounting unchecked file system, "
+			"running fsck is recommended\n");
+	else if (sbi->s_mount_state & MINIX_ERROR_FS)
+		printk("MINIX-fs: mounting file system with errors, "
+			"running fsck is recommended\n");
 	return 0;
 out_iput:

@@ -32,7 +32,7 @@ static cputime64_t get_idle_time(int cpu)
 		idle = kstat_cpu(cpu).cpustat.idle;
 		idle = cputime64_add(idle, arch_idle_time(cpu));
 	} else
-		idle = nsecs_to_jiffies64(1000 * idle_time);
+		idle = usecs_to_cputime64(idle_time);
 	return idle;
 }
@@ -46,7 +46,7 @@ static cputime64_t get_iowait_time(int cpu)
 		/* !NO_HZ so we can rely on cpustat.iowait */
 		iowait = kstat_cpu(cpu).cpustat.iowait;
 	else
-		iowait = nsecs_to_jiffies64(1000 * iowait_time);
+		iowait = usecs_to_cputime64(iowait_time);
 	return iowait;
 }

View File

@@ -868,27 +868,6 @@ xfs_fs_dirty_inode(
 	XFS_I(inode)->i_update_core = 1;
 }
 
-STATIC int
-xfs_log_inode(
-	struct xfs_inode	*ip)
-{
-	struct xfs_mount	*mp = ip->i_mount;
-	struct xfs_trans	*tp;
-	int			error;
-
-	tp = xfs_trans_alloc(mp, XFS_TRANS_FSYNC_TS);
-	error = xfs_trans_reserve(tp, 0, XFS_FSYNC_TS_LOG_RES(mp), 0, 0, 0);
-	if (error) {
-		xfs_trans_cancel(tp, 0);
-		return error;
-	}
-
-	xfs_ilock(ip, XFS_ILOCK_EXCL);
-	xfs_trans_ijoin(tp, ip, XFS_ILOCK_EXCL);
-	xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
-	return xfs_trans_commit(tp, 0);
-}
-
 STATIC int
 xfs_fs_write_inode(
 	struct inode		*inode,
@@ -902,10 +881,8 @@ xfs_fs_write_inode(
 	if (XFS_FORCED_SHUTDOWN(mp))
 		return -XFS_ERROR(EIO);
 
-	if (!ip->i_update_core)
-		return 0;
-
-	if (wbc->sync_mode == WB_SYNC_ALL) {
+	if (wbc->sync_mode == WB_SYNC_ALL || wbc->for_kupdate) {
 		/*
 		 * Make sure the inode has made it it into the log.  Instead
 		 * of forcing it all the way to stable storage using a
@@ -913,11 +890,14 @@ xfs_fs_write_inode(
 		 * ->sync_fs call do that for thus, which reduces the number
 		 * of synchronous log forces dramatically.
 		 */
-		error = xfs_log_inode(ip);
+		error = xfs_log_dirty_inode(ip, NULL, 0);
 		if (error)
 			goto out;
 		return 0;
 	} else {
+		if (!ip->i_update_core)
+			return 0;
+
 		/*
 		 * We make this non-blocking if the inode is contended, return
 		 * EAGAIN to indicate to the caller that they did not succeed.

View File

@@ -336,6 +336,32 @@ xfs_sync_fsdata(
 	return error;
 }
 
+int
+xfs_log_dirty_inode(
+	struct xfs_inode	*ip,
+	struct xfs_perag	*pag,
+	int			flags)
+{
+	struct xfs_mount	*mp = ip->i_mount;
+	struct xfs_trans	*tp;
+	int			error;
+
+	if (!ip->i_update_core)
+		return 0;
+
+	tp = xfs_trans_alloc(mp, XFS_TRANS_FSYNC_TS);
+	error = xfs_trans_reserve(tp, 0, XFS_FSYNC_TS_LOG_RES(mp), 0, 0, 0);
+	if (error) {
+		xfs_trans_cancel(tp, 0);
+		return error;
+	}
+
+	xfs_ilock(ip, XFS_ILOCK_EXCL);
+	xfs_trans_ijoin(tp, ip, XFS_ILOCK_EXCL);
+	xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
+	return xfs_trans_commit(tp, 0);
+}
+
 /*
  * When remounting a filesystem read-only or freezing the filesystem, we have
  * two phases to execute. This first phase is syncing the data before we
@@ -359,6 +385,16 @@ xfs_quiesce_data(
 {
 	int error, error2 = 0;
 
+	/*
+	 * Log all pending size and timestamp updates.  The vfs writeback
+	 * code is supposed to do this, but due to its overagressive
+	 * livelock detection it will skip inodes where appending writes
+	 * were written out in the first non-blocking sync phase if their
+	 * completion took long enough that it happened after taking the
+	 * timestamp for the cut-off in the blocking phase.
+	 */
+	xfs_inode_ag_iterator(mp, xfs_log_dirty_inode, 0);
+
 	xfs_qm_sync(mp, SYNC_TRYLOCK);
 	xfs_qm_sync(mp, SYNC_WAIT);

View File

@@ -34,6 +34,8 @@ void xfs_quiesce_attr(struct xfs_mount *mp);
 
 void xfs_flush_inodes(struct xfs_inode *ip);
 
+int xfs_log_dirty_inode(struct xfs_inode *ip, struct xfs_perag *pag, int flags);
+
 int xfs_reclaim_inodes(struct xfs_mount *mp, int mode);
 int xfs_reclaim_inodes_count(struct xfs_mount *mp);
 void xfs_reclaim_inodes_nr(struct xfs_mount *mp, int nr_to_scan);

View File

@@ -40,6 +40,7 @@ typedef u64 cputime64_t;
  */
 #define cputime_to_usecs(__ct)		jiffies_to_usecs(__ct)
 #define usecs_to_cputime(__msecs)	usecs_to_jiffies(__msecs)
+#define usecs_to_cputime64(__msecs)	nsecs_to_jiffies64((__msecs) * 1000)
 
 /*
  * Convert cputime to seconds and back.

View File

@@ -557,6 +557,7 @@ struct kvm_ppc_pvinfo {
 #define KVM_CAP_MAX_VCPUS 66       /* returns max vcpus per vm */
 #define KVM_CAP_PPC_PAPR 68
 #define KVM_CAP_S390_GMAP 71
+#define KVM_CAP_TSC_DEADLINE_TIMER 72
 
 #ifdef KVM_CAP_IRQ_ROUTING

View File

@@ -2056,7 +2056,7 @@ static inline int security_old_inode_init_security(struct inode *inode,
 							char **name, void **value,
 							size_t *len)
 {
-	return 0;
+	return -EOPNOTSUPP;
 }
 
 static inline int security_inode_create(struct inode *dir,

View File

@@ -1207,7 +1207,7 @@ extern void ip_vs_control_cleanup(void);
 extern struct ip_vs_dest *
 ip_vs_find_dest(struct net *net, int af, const union nf_inet_addr *daddr,
 		__be16 dport, const union nf_inet_addr *vaddr, __be16 vport,
-		__u16 protocol, __u32 fwmark);
+		__u16 protocol, __u32 fwmark, __u32 flags);
 extern struct ip_vs_dest *ip_vs_try_bind_dest(struct ip_vs_conn *cp);

View File

@@ -1540,8 +1540,15 @@ static int wait_consider_task(struct wait_opts *wo, int ptrace,
 	}
 
 	/* dead body doesn't have much to contribute */
-	if (p->exit_state == EXIT_DEAD)
+	if (unlikely(p->exit_state == EXIT_DEAD)) {
+		/*
+		 * But do not ignore this task until the tracer does
+		 * wait_task_zombie()->do_notify_parent().
+		 */
+		if (likely(!ptrace) && unlikely(ptrace_reparented(p)))
+			wo->notask_error = 0;
 		return 0;
+	}
 
 	/* slay zombie? */
 	if (p->exit_state == EXIT_ZOMBIE) {

View File

@@ -314,17 +314,29 @@ again:
 #endif
 
 	lock_page(page_head);
+
+	/*
+	 * If page_head->mapping is NULL, then it cannot be a PageAnon
+	 * page; but it might be the ZERO_PAGE or in the gate area or
+	 * in a special mapping (all cases which we are happy to fail);
+	 * or it may have been a good file page when get_user_pages_fast
+	 * found it, but truncated or holepunched or subjected to
+	 * invalidate_complete_page2 before we got the page lock (also
+	 * cases which we are happy to fail).  And we hold a reference,
+	 * so refcount care in invalidate_complete_page's remove_mapping
+	 * prevents drop_caches from setting mapping to NULL beneath us.
+	 *
+	 * The case we do have to guard against is when memory pressure made
+	 * shmem_writepage move it from filecache to swapcache beneath us:
+	 * an unlikely race, but we do need to retry for page_head->mapping.
+	 */
 	if (!page_head->mapping) {
+		int shmem_swizzled = PageSwapCache(page_head);
 		unlock_page(page_head);
 		put_page(page_head);
-		/*
-		 * ZERO_PAGE pages don't have a mapping. Avoid a busy loop
-		 * trying to find one. RW mapping would have COW'd (and thus
-		 * have a mapping) so this page is RO and won't ever change.
-		 */
-		if ((page_head == ZERO_PAGE(address)))
-			return -EFAULT;
-
-		goto again;
+		if (shmem_swizzled)
+			goto again;
+		return -EFAULT;
 	}
/* /*

View File

@@ -74,11 +74,17 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
 
 	/*
 	 * Ensure the task is not frozen.
-	 * Also, when a freshly created task is scheduled once, changes
-	 * its state to TASK_UNINTERRUPTIBLE without having ever been
-	 * switched out once, it musn't be checked.
+	 * Also, skip vfork and any other user process that freezer should skip.
 	 */
-	if (unlikely(t->flags & PF_FROZEN || !switch_count))
+	if (unlikely(t->flags & (PF_FROZEN | PF_FREEZER_SKIP)))
+		return;
+
+	/*
+	 * When a freshly created task is scheduled once, changes its state to
+	 * TASK_UNINTERRUPTIBLE without having ever been switched out once, it
+	 * musn't be checked.
+	 */
+	if (unlikely(!switch_count))
 		return;
 
 	if (switch_count != t->last_switch_count) {

View File

@@ -96,9 +96,20 @@ void __ptrace_unlink(struct task_struct *child)
 	 */
 	if (!(child->flags & PF_EXITING) &&
 	    (child->signal->flags & SIGNAL_STOP_STOPPED ||
-	     child->signal->group_stop_count))
+	     child->signal->group_stop_count)) {
 		child->jobctl |= JOBCTL_STOP_PENDING;
 
+		/*
+		 * This is only possible if this thread was cloned by the
+		 * traced task running in the stopped group, set the signal
+		 * for the future reports.
+		 * FIXME: we should change ptrace_init_task() to handle this
+		 * case.
+		 */
+		if (!(child->jobctl & JOBCTL_STOP_SIGMASK))
+			child->jobctl |= SIGSTOP;
+	}
+
 	/*
 	 * If transition to TASK_STOPPED is pending or in TASK_TRACED, kick
 	 * @child in the butt.  Note that @resume should be used iff @child

View File

@@ -1994,8 +1994,6 @@ static bool do_signal_stop(int signr)
 		 */
 		if (!(sig->flags & SIGNAL_STOP_STOPPED))
 			sig->group_exit_code = signr;
-		else
-			WARN_ON_ONCE(!current->ptrace);
 
 		sig->group_stop_count = 0;

View File

@@ -387,7 +387,6 @@ void clockevents_exchange_device(struct clock_event_device *old,
 	 * released list and do a notify add later.
 	 */
 	if (old) {
-		old->event_handler = clockevents_handle_noop;
 		clockevents_set_mode(old, CLOCK_EVT_MODE_UNUSED);
 		list_del(&old->list);
 		list_add(&old->list, &clockevents_released);

View File

@@ -901,7 +901,6 @@ retry:
 		h->resv_huge_pages += delta;
 		ret = 0;
 
-	spin_unlock(&hugetlb_lock);
 	/* Free the needed pages to the hugetlb pool */
 	list_for_each_entry_safe(page, tmp, &surplus_list, lru) {
 		if ((--needed) < 0)
@@ -915,6 +914,7 @@ retry:
 		VM_BUG_ON(page_count(page));
 		enqueue_huge_page(h, page);
 	}
+	spin_unlock(&hugetlb_lock);
 
 	/* Free unnecessary surplus pages to the buddy allocator */
 free:

View File

@@ -636,6 +636,7 @@ static int mbind_range(struct mm_struct *mm, unsigned long start,
 	struct vm_area_struct *prev;
 	struct vm_area_struct *vma;
 	int err = 0;
+	pgoff_t pgoff;
 	unsigned long vmstart;
 	unsigned long vmend;
 
@@ -643,13 +644,21 @@ static int mbind_range(struct mm_struct *mm, unsigned long start,
 	if (!vma || vma->vm_start > start)
 		return -EFAULT;
 
+	if (start > vma->vm_start)
+		prev = vma;
+
 	for (; vma && vma->vm_start < end; prev = vma, vma = next) {
 		next = vma->vm_next;
 		vmstart = max(start, vma->vm_start);
 		vmend   = min(end, vma->vm_end);
 
+		if (mpol_equal(vma_policy(vma), new_pol))
+			continue;
+
+		pgoff = vma->vm_pgoff +
+			((vmstart - vma->vm_start) >> PAGE_SHIFT);
 		prev = vma_merge(mm, prev, vmstart, vmend, vma->vm_flags,
-				  vma->anon_vma, vma->vm_file, vma->vm_pgoff,
+				  vma->anon_vma, vma->vm_file, pgoff,
 				  new_pol);
 		if (prev) {
 			vma = prev;

View File

@@ -613,7 +613,7 @@ static int hci_dev_do_close(struct hci_dev *hdev)
 	if (!test_bit(HCI_RAW, &hdev->flags)) {
 		set_bit(HCI_INIT, &hdev->flags);
 		__hci_request(hdev, hci_reset_req, 0,
-					msecs_to_jiffies(HCI_INIT_TIMEOUT));
+					msecs_to_jiffies(250));
 		clear_bit(HCI_INIT, &hdev->flags);
 	}

View File

@@ -616,7 +616,7 @@ struct ip_vs_dest *ip_vs_try_bind_dest(struct ip_vs_conn *cp)
 	if ((cp) && (!cp->dest)) {
 		dest = ip_vs_find_dest(ip_vs_conn_net(cp), cp->af, &cp->daddr,
 				       cp->dport, &cp->vaddr, cp->vport,
-				       cp->protocol, cp->fwmark);
+				       cp->protocol, cp->fwmark, cp->flags);
 		ip_vs_bind_dest(cp, dest);
 		return dest;
 	} else

View File

@@ -619,15 +619,21 @@ struct ip_vs_dest *ip_vs_find_dest(struct net *net, int af,
 				   const union nf_inet_addr *daddr,
 				   __be16 dport,
 				   const union nf_inet_addr *vaddr,
-				   __be16 vport, __u16 protocol, __u32 fwmark)
+				   __be16 vport, __u16 protocol, __u32 fwmark,
+				   __u32 flags)
 {
 	struct ip_vs_dest *dest;
 	struct ip_vs_service *svc;
+	__be16 port = dport;
 
 	svc = ip_vs_service_get(net, af, fwmark, protocol, vaddr, vport);
 	if (!svc)
 		return NULL;
-	dest = ip_vs_lookup_dest(svc, daddr, dport);
+	if (fwmark && (flags & IP_VS_CONN_F_FWD_MASK) != IP_VS_CONN_F_MASQ)
+		port = 0;
+	dest = ip_vs_lookup_dest(svc, daddr, port);
+	if (!dest)
+		dest = ip_vs_lookup_dest(svc, daddr, port ^ dport);
 	if (dest)
 		atomic_inc(&dest->refcnt);
 	ip_vs_service_put(svc);

View File

@@ -740,7 +740,7 @@ static void ip_vs_proc_conn(struct net *net, struct ip_vs_conn_param *param,
 		 * but still handled.
 		 */
 		dest = ip_vs_find_dest(net, type, daddr, dport, param->vaddr,
-				       param->vport, protocol, fwmark);
+				       param->vport, protocol, fwmark, flags);
 		/*  Set the approprite ativity flag */
 		if (protocol == IPPROTO_TCP) {

View File

@@ -135,7 +135,7 @@ nla_put_failure:
 static inline int
 ctnetlink_dump_timeout(struct sk_buff *skb, const struct nf_conn *ct)
 {
-	long timeout = (ct->timeout.expires - jiffies) / HZ;
+	long timeout = ((long)ct->timeout.expires - (long)jiffies) / HZ;
 
 	if (timeout < 0)
 		timeout = 0;
@@ -1358,12 +1358,15 @@ ctnetlink_create_conntrack(struct net *net, u16 zone,
 						    nf_ct_protonum(ct));
 		if (helper == NULL) {
 			rcu_read_unlock();
+			spin_unlock_bh(&nf_conntrack_lock);
 #ifdef CONFIG_MODULES
 			if (request_module("nfct-helper-%s", helpname) < 0) {
+				spin_lock_bh(&nf_conntrack_lock);
 				err = -EOPNOTSUPP;
 				goto err1;
 			}
 
+			spin_lock_bh(&nf_conntrack_lock);
 			rcu_read_lock();
 			helper = __nf_conntrack_helper_find(helpname,
 							    nf_ct_l3num(ct),
@@ -1638,7 +1641,7 @@ ctnetlink_exp_dump_expect(struct sk_buff *skb,
 			  const struct nf_conntrack_expect *exp)
 {
 	struct nf_conn *master = exp->master;
-	long timeout = (exp->timeout.expires - jiffies) / HZ;
+	long timeout = ((long)exp->timeout.expires - (long)jiffies) / HZ;
 	struct nf_conn_help *help;
 
 	if (timeout < 0)
@@ -1869,25 +1872,30 @@ ctnetlink_get_expect(struct sock *ctnl, struct sk_buff *skb,
 
 	err = -ENOMEM;
 	skb2 = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
-	if (skb2 == NULL)
+	if (skb2 == NULL) {
+		nf_ct_expect_put(exp);
 		goto out;
+	}
 
 	rcu_read_lock();
 	err = ctnetlink_exp_fill_info(skb2, NETLINK_CB(skb).pid,
 				      nlh->nlmsg_seq, IPCTNL_MSG_EXP_NEW, exp);
 	rcu_read_unlock();
+	nf_ct_expect_put(exp);
 	if (err <= 0)
 		goto free;
 
-	nf_ct_expect_put(exp);
-	return netlink_unicast(ctnl, skb2, NETLINK_CB(skb).pid, MSG_DONTWAIT);
+	err = netlink_unicast(ctnl, skb2, NETLINK_CB(skb).pid, MSG_DONTWAIT);
+	if (err < 0)
+		goto out;
+
+	return 0;
 
 free:
 	kfree_skb(skb2);
 out:
-	nf_ct_expect_put(exp);
-	return err;
+	/* this avoids a loop in nfnetlink. */
+	return err == -EAGAIN ? -ENOBUFS : err;
 }
 
 static int

View File

@@ -2448,8 +2448,12 @@ static int packet_do_bind(struct sock *sk, struct net_device *dev, __be16 protoc
 {
 	struct packet_sock *po = pkt_sk(sk);
 
-	if (po->fanout)
+	if (po->fanout) {
+		if (dev)
+			dev_put(dev);
 		return -EINVAL;
+	}
 
 	lock_sock(sk);

View File

@@ -488,7 +488,7 @@ static int get_dist_table(struct Qdisc *sch, const struct nlattr *attr)
 		return -EINVAL;
 
 	s = sizeof(struct disttable) + n * sizeof(s16);
-	d = kmalloc(s, GFP_KERNEL);
+	d = kmalloc(s, GFP_KERNEL | __GFP_NOWARN);
 	if (!d)
 		d = vmalloc(s);
 	if (!d)
@@ -501,9 +501,10 @@ static int get_dist_table(struct Qdisc *sch, const struct nlattr *attr)
 	root_lock = qdisc_root_sleeping_lock(sch);
 
 	spin_lock_bh(root_lock);
-	dist_free(q->delay_dist);
-	q->delay_dist = d;
+	swap(q->delay_dist, d);
 	spin_unlock_bh(root_lock);
+
+	dist_free(d);
 	return 0;
 }

View File

@@ -817,11 +817,11 @@ skip_unblock:
 static void qfq_update_start(struct qfq_sched *q, struct qfq_class *cl)
 {
 	unsigned long mask;
-	uint32_t limit, roundedF;
+	u64 limit, roundedF;
 	int slot_shift = cl->grp->slot_shift;
 
 	roundedF = qfq_round_down(cl->F, slot_shift);
-	limit = qfq_round_down(q->V, slot_shift) + (1UL << slot_shift);
+	limit = qfq_round_down(q->V, slot_shift) + (1ULL << slot_shift);
 
 	if (!qfq_gt(cl->F, q->V) || qfq_gt(roundedF, limit)) {
 		/* timestamp was stale */

View File

@@ -381,7 +381,7 @@ int security_old_inode_init_security(struct inode *inode, struct inode *dir,
 				     void **value, size_t *len)
 {
 	if (unlikely(IS_PRIVATE(inode)))
-		return 0;
+		return -EOPNOTSUPP;
 	return security_ops->inode_init_security(inode, dir, qstr, name, value,
 						 len);
 }

View File

@@ -235,6 +235,7 @@ static int wm8776_hw_params(struct snd_pcm_substream *substream,
 	switch (snd_pcm_format_width(params_format(params))) {
 	case 16:
 		iface = 0;
+		break;
 	case 20:
 		iface = 0x10;
 		break;

View File

@@ -17,6 +17,8 @@
 #include <linux/pci.h>
 #include <linux/interrupt.h>
 #include <linux/slab.h>
+#include <linux/namei.h>
+#include <linux/fs.h>
 #include "irq.h"
 
 static struct kvm_assigned_dev_kernel *kvm_find_assigned_dev(struct list_head *head,
@@ -480,12 +482,76 @@ out:
 	return r;
 }
 
+/*
+ * We want to test whether the caller has been granted permissions to
+ * use this device.  To be able to configure and control the device,
+ * the user needs access to PCI configuration space and BAR resources.
+ * These are accessed through PCI sysfs.  PCI config space is often
+ * passed to the process calling this ioctl via file descriptor, so we
+ * can't rely on access to that file.  We can check for permissions
+ * on each of the BAR resource files, which is a pretty clear
+ * indicator that the user has been granted access to the device.
+ */
+static int probe_sysfs_permissions(struct pci_dev *dev)
+{
+#ifdef CONFIG_SYSFS
+	int i;
+	bool bar_found = false;
+
+	for (i = PCI_STD_RESOURCES; i <= PCI_STD_RESOURCE_END; i++) {
+		char *kpath, *syspath;
+		struct path path;
+		struct inode *inode;
+		int r;
+
+		if (!pci_resource_len(dev, i))
+			continue;
+
+		kpath = kobject_get_path(&dev->dev.kobj, GFP_KERNEL);
+		if (!kpath)
+			return -ENOMEM;
+
+		/* Per sysfs-rules, sysfs is always at /sys */
+		syspath = kasprintf(GFP_KERNEL, "/sys%s/resource%d", kpath, i);
+		kfree(kpath);
+		if (!syspath)
+			return -ENOMEM;
+
+		r = kern_path(syspath, LOOKUP_FOLLOW, &path);
+		kfree(syspath);
+		if (r)
+			return r;
+
+		inode = path.dentry->d_inode;
+
+		r = inode_permission(inode, MAY_READ | MAY_WRITE | MAY_ACCESS);
+		path_put(&path);
+		if (r)
+			return r;
+
+		bar_found = true;
+	}
+
+	/* If no resources, probably something special */
+	if (!bar_found)
+		return -EPERM;
+
+	return 0;
+#else
+	return -EINVAL; /* No way to control the device without sysfs */
+#endif
+}
+
 static int kvm_vm_ioctl_assign_device(struct kvm *kvm,
 				      struct kvm_assigned_pci_dev *assigned_dev)
 {
 	int r = 0, idx;
 	struct kvm_assigned_dev_kernel *match;
 	struct pci_dev *dev;
+	u8 header_type;
+
+	if (!(assigned_dev->flags & KVM_DEV_ASSIGN_ENABLE_IOMMU))
+		return -EINVAL;
 
 	mutex_lock(&kvm->lock);
 	idx = srcu_read_lock(&kvm->srcu);
@@ -513,6 +579,18 @@ static int kvm_vm_ioctl_assign_device(struct kvm *kvm,
 		r = -EINVAL;
 		goto out_free;
 	}
+
+	/* Don't allow bridges to be assigned */
+	pci_read_config_byte(dev, PCI_HEADER_TYPE, &header_type);
+	if ((header_type & PCI_HEADER_TYPE) != PCI_HEADER_TYPE_NORMAL) {
+		r = -EPERM;
+		goto out_put;
+	}
+
+	r = probe_sysfs_permissions(dev);
+	if (r)
+		goto out_put;
+
 	if (pci_enable_device(dev)) {
 		printk(KERN_INFO "%s: Could not enable PCI device\n", __func__);
 		r = -EBUSY;
@@ -544,16 +622,14 @@ static int kvm_vm_ioctl_assign_device(struct kvm *kvm,
 
 	list_add(&match->list, &kvm->arch.assigned_dev_head);
 
-	if (assigned_dev->flags & KVM_DEV_ASSIGN_ENABLE_IOMMU) {
-		if (!kvm->arch.iommu_domain) {
-			r = kvm_iommu_map_guest(kvm);
-			if (r)
-				goto out_list_del;
-		}
-		r = kvm_assign_device(kvm, match);
+	if (!kvm->arch.iommu_domain) {
+		r = kvm_iommu_map_guest(kvm);
 		if (r)
 			goto out_list_del;
 	}
+
+	r = kvm_assign_device(kvm, match);
+	if (r)
+		goto out_list_del;
 
 out:
 	srcu_read_unlock(&kvm->srcu, idx);
@@ -593,8 +669,7 @@ static int kvm_vm_ioctl_deassign_device(struct kvm *kvm,
 		goto out;
 	}
 
-	if (match->flags & KVM_DEV_ASSIGN_ENABLE_IOMMU)
-		kvm_deassign_device(kvm, match);
+	kvm_deassign_device(kvm, match);
 
 	kvm_free_assigned_device(kvm, match);