Merge branch 'perf/urgent' into perf/core

Merge the latest fixes.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
commit 0066f3b93e
Ingo Molnar <mingo@kernel.org>, 2014-03-11 11:53:50 +01:00
450 changed files with 4478 additions and 2554 deletions

----- changed file -----

@@ -124,12 +124,11 @@ the default being 204800 sectors (or 100MB).
 Updating on-disk metadata
 -------------------------
-On-disk metadata is committed every time a REQ_SYNC or REQ_FUA bio is
-written. If no such requests are made then commits will occur every
-second. This means the cache behaves like a physical disk that has a
-write cache (the same is true of the thin-provisioning target). If
-power is lost you may lose some recent writes. The metadata should
-always be consistent in spite of any crash.
+On-disk metadata is committed every time a FLUSH or FUA bio is written.
+If no such requests are made then commits will occur every second. This
+means the cache behaves like a physical disk that has a volatile write
+cache. If power is lost you may lose some recent writes. The metadata
+should always be consistent in spite of any crash.

 The 'dirty' state for a cache block changes far too frequently for us
 to keep updating it on the fly. So we treat it as a hint. In normal

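The FLUSH/FUA behaviour described in the hunk above maps directly onto the block layer: code that needs the metadata commit to happen now can submit an empty flush to the mapped device. A minimal sketch against the block API of this kernel generation (the wrapper is hypothetical; blkdev_issue_flush() is the standard helper):

    #include <linux/blkdev.h>

    /*
     * Sketch: force an on-disk metadata commit on a dm-cache or dm-thin
     * device by issuing an empty FLUSH bio and waiting for it.  'bdev'
     * must be an already-opened struct block_device for the mapped device.
     */
    static int force_metadata_commit(struct block_device *bdev)
    {
        /* blkdev_issue_flush() submits an empty flush bio and waits. */
        return blkdev_issue_flush(bdev, GFP_KERNEL, NULL);
    }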
----- changed file -----

@@ -116,6 +116,35 @@ Resuming a device with a new table itself triggers an event so the
 userspace daemon can use this to detect a situation where a new table
 already exceeds the threshold.

+A low water mark for the metadata device is maintained in the kernel and
+will trigger a dm event if free space on the metadata device drops below
+it.
+
+Updating on-disk metadata
+-------------------------
+
+On-disk metadata is committed every time a FLUSH or FUA bio is written.
+If no such requests are made then commits will occur every second. This
+means the thin-provisioning target behaves like a physical disk that has
+a volatile write cache. If power is lost you may lose some recent
+writes. The metadata should always be consistent in spite of any crash.
+
+If data space is exhausted the pool will either error or queue IO
+according to the configuration (see: error_if_no_space). If metadata
+space is exhausted or a metadata operation fails: the pool will error IO
+until the pool is taken offline and repair is performed to 1) fix any
+potential inconsistencies and 2) clear the flag that imposes repair.
+Once the pool's metadata device is repaired it may be resized, which
+will allow the pool to return to normal operation. Note that if a pool
+is flagged as needing repair, the pool's data and metadata devices
+cannot be resized until repair is performed. It should also be noted
+that when the pool's metadata space is exhausted the current metadata
+transaction is aborted. Given that the pool will cache IO whose
+completion may have already been acknowledged to upper IO layers
+(e.g. filesystem) it is strongly suggested that consistency checks
+(e.g. fsck) be performed on those layers when repair of the pool is
+required.
+
 Thin provisioning
 -----------------
@@ -258,10 +287,9 @@ ii) Status
 	should register for the event and then check the target's status.

 	held metadata root:
-		The location, in sectors, of the metadata root that has been
+		The location, in blocks, of the metadata root that has been
 		'held' for userspace read access. '-' indicates there is no
-		held root. This feature is not yet implemented so '-' is
-		always returned.
+		held root.

 discard_passdown|no_discard_passdown
 	Whether or not discards are actually being passed down to the

----- changed file -----

@@ -21,9 +21,9 @@ Required Properties:
     must appear in the same order as the output clocks.
   - #clock-cells: Must be 1
   - clock-output-names: The name of the clocks as free-form strings
-  - renesas,indices: Indices of the gate clocks into the group (0 to 31)
+  - renesas,clock-indices: Indices of the gate clocks into the group (0 to 31)

-The clocks, clock-output-names and renesas,indices properties contain one
+The clocks, clock-output-names and renesas,clock-indices properties contain one
 entry per gate clock. The MSTP groups are sparsely populated. Unimplemented
 gate clocks must not be declared.

----- changed file -----

@@ -1,12 +1,16 @@
 * Freescale Smart Direct Memory Access (SDMA) Controller for i.MX

 Required properties:
-- compatible : Should be "fsl,imx31-sdma", "fsl,imx31-to1-sdma",
-  "fsl,imx31-to2-sdma", "fsl,imx35-sdma", "fsl,imx35-to1-sdma",
-  "fsl,imx35-to2-sdma", "fsl,imx51-sdma", "fsl,imx53-sdma" or
-  "fsl,imx6q-sdma". The -to variants should be preferred since they
-  allow to determnine the correct ROM script addresses needed for
-  the driver to work without additional firmware.
+- compatible : Should be one of
+      "fsl,imx25-sdma"
+      "fsl,imx31-sdma", "fsl,imx31-to1-sdma", "fsl,imx31-to2-sdma"
+      "fsl,imx35-sdma", "fsl,imx35-to1-sdma", "fsl,imx35-to2-sdma"
+      "fsl,imx51-sdma"
+      "fsl,imx53-sdma"
+      "fsl,imx6q-sdma"
+  The -to variants should be preferred since they allow to determine the
+  correct ROM script addresses needed for the driver to work without additional
+  firmware.
 - reg : Should contain SDMA registers location and length
 - interrupts : Should contain SDMA interrupt
 - #dma-cells : Must be <3>.

----- changed file -----

@@ -0,0 +1,22 @@
+* OpenCores MAC 10/100 Mbps
+
+Required properties:
+- compatible: Should be "opencores,ethoc".
+- reg: two memory regions (address and length),
+  first region is for the device registers and descriptor rings,
+  second is for the device packet memory.
+- interrupts: interrupt for the device.
+
+Optional properties:
+- clocks: phandle to refer to the clk used as per
+  Documentation/devicetree/bindings/clock/clock-bindings.txt
+
+Examples:
+
+	enet0: ethoc@fd030000 {
+		compatible = "opencores,ethoc";
+		reg = <0xfd030000 0x4000 0xfd800000 0x4000>;
+		interrupts = <1>;
+		local-mac-address = [00 50 c2 13 6f 00];
+		clocks = <&osc>;
+	};

----- changed file -----

@@ -1,4 +1,4 @@
-Broadcom Capri Pin Controller
+Broadcom BCM281xx Pin Controller

 This is a pin controller for the Broadcom BCM281xx SoC family, which includes
 BCM11130, BCM11140, BCM11351, BCM28145, and BCM28155 SoCs.
@@ -7,14 +7,14 @@ BCM11130, BCM11140, BCM11351, BCM28145, and BCM28155 SoCs.

 Required Properties:

-- compatible:	Must be "brcm,capri-pinctrl".
+- compatible:	Must be "brcm,bcm11351-pinctrl"
 - reg:		Base address of the PAD Controller register block and the size
 		of the block.

 For example, the following is the bare minimum node:

	pinctrl@35004800 {
-		compatible = "brcm,capri-pinctrl";
+		compatible = "brcm,bcm11351-pinctrl";
		reg = <0x35004800 0x430>;
	};
@@ -119,7 +119,7 @@ Optional Properties (for HDMI pins):
 Example:
	// pin controller node
	pinctrl@35004800 {
-		compatible = "brcm,capri-pinctrl";
+		compatible = "brcm,bcm11351-pinctrl";
		reg = <0x35004800 0x430>;

		// pin configuration node

----- changed file -----

@@ -554,12 +554,6 @@ solution for a couple of reasons:
 not specified in the struct can_frame and therefore it is only valid in
 CANFD_MTU sized CAN FD frames.

-As long as the payload length is <=8 the received CAN frames from CAN FD
-capable CAN devices can be received and read by legacy sockets too. When
-user-generated CAN FD frames have a payload length <=8 these can be send
-by legacy CAN network interfaces too. Sending CAN FD frames with payload
-length > 8 to a legacy CAN network interface returns an -EMSGSIZE error.
-
 Implementation hint for new CAN applications:

 To build a CAN FD aware application use struct canfd_frame as basic CAN

----- changed file -----

@@ -73,7 +73,8 @@ Descriptions of section entries:
 L: Mailing list that is relevant to this area
 W: Web-page with status/info
 Q: Patchwork web based patch tracking system site
-T: SCM tree type and location. Type is one of: git, hg, quilt, stgit, topgit.
+T: SCM tree type and location.
+   Type is one of: git, hg, quilt, stgit, topgit
 S: Status, one of the following:
    Supported: Someone is actually paid to look after this.
    Maintained: Someone actually looks after it.
@@ -473,7 +474,7 @@ F: net/rxrpc/af_rxrpc.c
 AGPGART DRIVER
 M: David Airlie <airlied@linux.ie>
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6.git
+T: git git://people.freedesktop.org/~airlied/linux (part of drm maint)
 S: Maintained
 F: drivers/char/agp/
 F: include/linux/agp*
@@ -1612,11 +1613,11 @@ S: Maintained
 F: drivers/net/wireless/atmel*

 ATTO EXPRESSSAS SAS/SATA RAID SCSI DRIVER
 M: Bradley Grove <linuxdrivers@attotech.com>
 L: linux-scsi@vger.kernel.org
 W: http://www.attotech.com
 S: Supported
 F: drivers/scsi/esas2r

 AUDIT SUBSYSTEM
 M: Eric Paris <eparis@redhat.com>
@@ -1737,6 +1738,7 @@ F: include/uapi/linux/bfs_fs.h
 BLACKFIN ARCHITECTURE
 M: Steven Miao <realmz6@gmail.com>
 L: adi-buildroot-devel@lists.sourceforge.net
+T: git git://git.code.sf.net/p/adi-linux/code
 W: http://blackfin.uclinux.org
 S: Supported
 F: arch/blackfin/
@@ -2159,7 +2161,7 @@ F: Documentation/zh_CN/
 CHIPIDEA USB HIGH SPEED DUAL ROLE CONTROLLER
 M: Peter Chen <Peter.Chen@freescale.com>
-T: git://github.com/hzpeterchen/linux-usb.git
+T: git git://github.com/hzpeterchen/linux-usb.git
 L: linux-usb@vger.kernel.org
 S: Maintained
 F: drivers/usb/chipidea/
@@ -2179,9 +2181,9 @@ S: Supported
 F: drivers/net/ethernet/cisco/enic/

 CISCO VIC LOW LATENCY NIC DRIVER
 M: Upinder Malhi <umalhi@cisco.com>
 S: Supported
 F: drivers/infiniband/hw/usnic

 CIRRUS LOGIC EP93XX ETHERNET DRIVER
 M: Hartley Sweeten <hsweeten@visionengravers.com>
@@ -2378,20 +2380,20 @@ F: drivers/cpufreq/arm_big_little.c
 F: drivers/cpufreq/arm_big_little_dt.c

 CPUIDLE DRIVER - ARM BIG LITTLE
 M: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
 M: Daniel Lezcano <daniel.lezcano@linaro.org>
 L: linux-pm@vger.kernel.org
 L: linux-arm-kernel@lists.infradead.org
-T: git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git
 S: Maintained
 F: drivers/cpuidle/cpuidle-big_little.c

 CPUIDLE DRIVERS
 M: Rafael J. Wysocki <rjw@rjwysocki.net>
 M: Daniel Lezcano <daniel.lezcano@linaro.org>
 L: linux-pm@vger.kernel.org
 S: Maintained
-T: git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git
 F: drivers/cpuidle/*
 F: include/linux/cpuidle.h
@@ -2458,9 +2460,9 @@ S: Maintained
 F: sound/pci/cs5535audio/

 CW1200 WLAN driver
 M: Solomon Peachy <pizza@shaftnet.org>
 S: Maintained
 F: drivers/net/wireless/cw1200/

 CX18 VIDEO4LINUX DRIVER
 M: Andy Walls <awalls@md.metrocast.net>
@@ -2848,12 +2850,22 @@ F: lib/kobj*
 DRM DRIVERS
 M: David Airlie <airlied@linux.ie>
 L: dri-devel@lists.freedesktop.org
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6.git
+T: git git://people.freedesktop.org/~airlied/linux
 S: Maintained
 F: drivers/gpu/drm/
 F: include/drm/
 F: include/uapi/drm/

+RADEON DRM DRIVERS
+M: Alex Deucher <alexander.deucher@amd.com>
+M: Christian König <christian.koenig@amd.com>
+L: dri-devel@lists.freedesktop.org
+T: git git://people.freedesktop.org/~agd5f/linux
+S: Supported
+F: drivers/gpu/drm/radeon/
+F: include/drm/radeon*
+F: include/uapi/drm/radeon*
+
 INTEL DRM DRIVERS (excluding Poulsbo, Moorestown and derivative chipsets)
 M: Daniel Vetter <daniel.vetter@ffwll.ch>
 M: Jani Nikula <jani.nikula@linux.intel.com>
@@ -3085,6 +3097,8 @@ F: fs/ecryptfs/
 EDAC-CORE
 M: Doug Thompson <dougthompson@xmission.com>
+M: Borislav Petkov <bp@alien8.de>
+M: Mauro Carvalho Chehab <m.chehab@samsung.com>
 L: linux-edac@vger.kernel.org
 W: bluesmoke.sourceforge.net
 S: Supported
@@ -4548,6 +4562,7 @@ F: Documentation/networking/ixgbevf.txt
 F: Documentation/networking/i40e.txt
 F: Documentation/networking/i40evf.txt
 F: drivers/net/ethernet/intel/
+F: drivers/net/ethernet/intel/*/

 INTEL-MID GPIO DRIVER
 M: David Cohen <david.a.cohen@linux.intel.com>
@@ -4904,7 +4919,7 @@ F: drivers/staging/ktap/
 KCONFIG
 M: "Yann E. MORIN" <yann.morin.1998@free.fr>
 L: linux-kbuild@vger.kernel.org
-T: git://gitorious.org/linux-kconfig/linux-kconfig
+T: git git://gitorious.org/linux-kconfig/linux-kconfig
 S: Maintained
 F: Documentation/kbuild/kconfig-language.txt
 F: scripts/kconfig/
@@ -5461,11 +5476,11 @@ S: Maintained
 F: drivers/media/tuners/m88ts2022*

 MA901 MASTERKIT USB FM RADIO DRIVER
 M: Alexey Klimov <klimov.linux@gmail.com>
 L: linux-media@vger.kernel.org
 T: git git://linuxtv.org/media_tree.git
 S: Maintained
 F: drivers/media/radio/radio-ma901.c

 MAC80211
 M: Johannes Berg <johannes@sipsolutions.net>
@@ -5501,6 +5516,11 @@ W: http://www.kernel.org/doc/man-pages
 L: linux-man@vger.kernel.org
 S: Maintained

+MARVELL ARMADA DRM SUPPORT
+M: Russell King <rmk+kernel@arm.linux.org.uk>
+S: Maintained
+F: drivers/gpu/drm/armada/
+
 MARVELL GIGABIT ETHERNET DRIVERS (skge/sky2)
 M: Mirko Lindner <mlindner@marvell.com>
 M: Stephen Hemminger <stephen@networkplumber.org>
@@ -5621,7 +5641,7 @@ F: drivers/scsi/megaraid/
 MELLANOX ETHERNET DRIVER (mlx4_en)
 M: Amir Vadai <amirv@mellanox.com>
 L: netdev@vger.kernel.org
 S: Supported
 W: http://www.mellanox.com
 Q: http://patchwork.ozlabs.org/project/netdev/list/
@@ -5662,7 +5682,7 @@ F: include/linux/mtd/
 F: include/uapi/mtd/

 MEN A21 WATCHDOG DRIVER
 M: Johannes Thumshirn <johannes.thumshirn@men.de>
 L: linux-watchdog@vger.kernel.org
 S: Supported
 F: drivers/watchdog/mena21_wdt.c
@@ -5718,20 +5738,20 @@ L: linux-rdma@vger.kernel.org
 W: http://www.mellanox.com
 Q: http://patchwork.ozlabs.org/project/netdev/list/
 Q: http://patchwork.kernel.org/project/linux-rdma/list/
-T: git://openfabrics.org/~eli/connect-ib.git
+T: git git://openfabrics.org/~eli/connect-ib.git
 S: Supported
 F: drivers/net/ethernet/mellanox/mlx5/core/
 F: include/linux/mlx5/

 Mellanox MLX5 IB driver
 M: Eli Cohen <eli@mellanox.com>
 L: linux-rdma@vger.kernel.org
 W: http://www.mellanox.com
 Q: http://patchwork.kernel.org/project/linux-rdma/list/
-T: git://openfabrics.org/~eli/connect-ib.git
+T: git git://openfabrics.org/~eli/connect-ib.git
 S: Supported
 F: include/linux/mlx5/
 F: drivers/infiniband/hw/mlx5/

 MODULE SUPPORT
 M: Rusty Russell <rusty@rustcorp.com.au>
@@ -6156,6 +6176,12 @@ S: Supported
 F: drivers/block/nvme*
 F: include/linux/nvme.h

+NXP TDA998X DRM DRIVER
+M: Russell King <rmk+kernel@arm.linux.org.uk>
+S: Supported
+F: drivers/gpu/drm/i2c/tda998x_drv.c
+F: include/drm/i2c/tda998x.h
+
 OMAP SUPPORT
 M: Tony Lindgren <tony@atomide.com>
 L: linux-omap@vger.kernel.org
@@ -8685,17 +8711,17 @@ S: Maintained
 F: drivers/media/radio/radio-raremono.c

 THERMAL
 M: Zhang Rui <rui.zhang@intel.com>
 M: Eduardo Valentin <eduardo.valentin@ti.com>
 L: linux-pm@vger.kernel.org
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/rzhang/linux.git
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/evalenti/linux-soc-thermal.git
 Q: https://patchwork.kernel.org/project/linux-pm/list/
 S: Supported
 F: drivers/thermal/
 F: include/linux/thermal.h
 F: include/linux/cpu_cooling.h
 F: Documentation/devicetree/bindings/thermal/

 THINGM BLINK(1) USB RGB LED DRIVER
 M: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
@@ -9797,7 +9823,7 @@ ZR36067 VIDEO FOR LINUX DRIVER
 L: mjpeg-users@lists.sourceforge.net
 L: linux-media@vger.kernel.org
 W: http://mjpeg.sourceforge.net/driver-zoran/
-T: Mercurial http://linuxtv.org/hg/v4l-dvb
+T: hg http://linuxtv.org/hg/v4l-dvb
 S: Odd Fixes
 F: drivers/media/pci/zoran/

----- changed file -----

@@ -1,7 +1,7 @@
 VERSION = 3
 PATCHLEVEL = 14
 SUBLEVEL = 0
-EXTRAVERSION = -rc4
+EXTRAVERSION = -rc6
 NAME = Shuffling Zombie Juror

 # *DOCUMENTATION*

----- changed file -----

@@ -282,7 +282,7 @@ static inline void __cache_line_loop(unsigned long paddr, unsigned long vaddr,
 #else
	/* if V-P const for loop, PTAG can be written once outside loop */
	if (full_page_op)
-		write_aux_reg(ARC_REG_DC_PTAG, paddr);
+		write_aux_reg(aux_tag, paddr);
 #endif

	while (num_lines-- > 0) {
@@ -296,7 +296,7 @@ static inline void __cache_line_loop(unsigned long paddr, unsigned long vaddr,
		write_aux_reg(aux_cmd, vaddr);
		vaddr += L1_CACHE_BYTES;
 #else
-		write_aux_reg(aux, paddr);
+		write_aux_reg(aux_cmd, paddr);
		paddr += L1_CACHE_BYTES;
 #endif
	}

----- changed file -----

@@ -1578,6 +1578,7 @@ config BL_SWITCHER_DUMMY_IF

 choice
	prompt "Memory split"
+	depends on MMU
	default VMSPLIT_3G
	help
	  Select the desired split between kernel and user memory.
@@ -1595,6 +1596,7 @@ endchoice

 config PAGE_OFFSET
	hex
+	default PHYS_OFFSET if !MMU
	default 0x40000000 if VMSPLIT_1G
	default 0x80000000 if VMSPLIT_2G
	default 0xC0000000
@@ -1903,6 +1905,7 @@ config XEN
	depends on ARM && AEABI && OF
	depends on CPU_V7 && !CPU_V6
	depends on !GENERIC_ATOMIC64
+	depends on MMU
	select ARM_PSCI
	select SWIOTLB_XEN
	select ARCH_DMA_ADDR_T_64BIT

----- changed file -----

@@ -1,4 +1,5 @@
 ashldi3.S
+bswapsdi2.S
 font.c
 lib1funcs.S
 hyp-stub.S

----- changed file -----

@@ -147,7 +147,7 @@
	};

	pinctrl@35004800 {
-		compatible = "brcm,capri-pinctrl";
+		compatible = "brcm,bcm11351-pinctrl";
		reg = <0x35004800 0x430>;
	};

----- changed file -----

@@ -612,7 +612,7 @@ clocks {
		compatible = "ti,keystone,psc-clock";
		clocks = <&chipclk13>;
		clock-output-names = "vcp-3";
-		reg = <0x0235000a8 0xb00>, <0x02350060 0x400>;
+		reg = <0x023500a8 0xb00>, <0x02350060 0x400>;
		reg-names = "control", "domain";
		domain-id = <24>;
	};

----- changed file -----

@@ -13,7 +13,7 @@

 / {
	model = "OMAP3 GTA04";
-	compatible = "ti,omap3-gta04", "ti,omap3";
+	compatible = "ti,omap3-gta04", "ti,omap36xx", "ti,omap3";

	cpus {
		cpu@0 {

----- changed file -----

@@ -14,7 +14,7 @@

 / {
	model = "IGEPv2 (TI OMAP AM/DM37x)";
-	compatible = "isee,omap3-igep0020", "ti,omap3";
+	compatible = "isee,omap3-igep0020", "ti,omap36xx", "ti,omap3";

	leds {
		pinctrl-names = "default";

----- changed file -----

@@ -13,7 +13,7 @@

 / {
	model = "IGEP COM MODULE (TI OMAP AM/DM37x)";
-	compatible = "isee,omap3-igep0030", "ti,omap3";
+	compatible = "isee,omap3-igep0030", "ti,omap36xx", "ti,omap3";

	leds {
		pinctrl-names = "default";

----- changed file -----

@@ -426,7 +426,7 @@
		};

		rtp: rtp@01c25000 {
-			compatible = "allwinner,sun4i-ts";
+			compatible = "allwinner,sun4i-a10-ts";
			reg = <0x01c25000 0x100>;
			interrupts = <29>;
		};

----- changed file -----

@@ -383,7 +383,7 @@
		};

		rtp: rtp@01c25000 {
-			compatible = "allwinner,sun4i-ts";
+			compatible = "allwinner,sun4i-a10-ts";
			reg = <0x01c25000 0x100>;
			interrupts = <29>;
		};

----- changed file -----

@@ -346,7 +346,7 @@
		};

		rtp: rtp@01c25000 {
-			compatible = "allwinner,sun4i-ts";
+			compatible = "allwinner,sun4i-a10-ts";
			reg = <0x01c25000 0x100>;
			interrupts = <29>;
		};

----- changed file -----

@@ -454,7 +454,7 @@
		rtc: rtc@01c20d00 {
			compatible = "allwinner,sun7i-a20-rtc";
			reg = <0x01c20d00 0x20>;
-			interrupts = <0 24 1>;
+			interrupts = <0 24 4>;
		};

		sid: eeprom@01c23800 {
@@ -463,7 +463,7 @@
		};

		rtp: rtp@01c25000 {
-			compatible = "allwinner,sun4i-ts";
+			compatible = "allwinner,sun4i-a10-ts";
			reg = <0x01c25000 0x100>;
			interrupts = <0 29 4>;
		};
@@ -596,10 +596,10 @@
		hstimer@01c60000 {
			compatible = "allwinner,sun7i-a20-hstimer";
			reg = <0x01c60000 0x1000>;
-			interrupts = <0 81 1>,
-				     <0 82 1>,
-				     <0 83 1>,
-				     <0 84 1>;
+			interrupts = <0 81 4>,
+				     <0 82 4>,
+				     <0 83 4>,
+				     <0 84 4>;
			clocks = <&ahb_gates 28>;
		};

----- changed file -----

@@ -204,7 +204,10 @@ CONFIG_MMC_BLOCK_MINORS=16
 CONFIG_MMC_SDHCI=y
 CONFIG_MMC_SDHCI_PLTFM=y
 CONFIG_MMC_SDHCI_TEGRA=y
+CONFIG_NEW_LEDS=y
+CONFIG_LEDS_CLASS=y
 CONFIG_LEDS_GPIO=y
+CONFIG_LEDS_TRIGGERS=y
 CONFIG_LEDS_TRIGGER_TIMER=y
 CONFIG_LEDS_TRIGGER_ONESHOT=y
 CONFIG_LEDS_TRIGGER_HEARTBEAT=y

----- changed file -----

@@ -30,14 +30,15 @@
 */
 #define UL(x) _AC(x, UL)

+/* PAGE_OFFSET - the virtual address of the start of the kernel image */
+#define PAGE_OFFSET		UL(CONFIG_PAGE_OFFSET)
+
 #ifdef CONFIG_MMU

 /*
- * PAGE_OFFSET - the virtual address of the start of the kernel image
 * TASK_SIZE - the maximum size of a user space task.
 * TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area
 */
-#define PAGE_OFFSET		UL(CONFIG_PAGE_OFFSET)
 #define TASK_SIZE		(UL(CONFIG_PAGE_OFFSET) - UL(SZ_16M))
 #define TASK_UNMAPPED_BASE	ALIGN(TASK_SIZE / 3, SZ_16M)
@@ -104,10 +105,6 @@
 #define END_MEM		(UL(CONFIG_DRAM_BASE) + CONFIG_DRAM_SIZE)
 #endif

-#ifndef PAGE_OFFSET
-#define PAGE_OFFSET		PLAT_PHYS_OFFSET
-#endif
-
 /*
 * The module can be at any place in ram in nommu mode.
 */

----- changed file -----

@@ -177,6 +177,18 @@ __lookup_processor_type_data:
	.long	__proc_info_end
	.size	__lookup_processor_type_data, . - __lookup_processor_type_data

+__error_lpae:
+#ifdef CONFIG_DEBUG_LL
+	adr	r0, str_lpae
+	bl	printascii
+	b	__error
+str_lpae: .asciz "\nError: Kernel with LPAE support, but CPU does not support LPAE.\n"
+#else
+	b	__error
+#endif
+	.align
+ENDPROC(__error_lpae)
+
 __error_p:
 #ifdef CONFIG_DEBUG_LL
	adr	r0, str_p1

----- changed file -----

@@ -102,7 +102,7 @@ ENTRY(stext)
	and	r3, r3, #0xf		@ extract VMSA support
	cmp	r3, #5			@ long-descriptor translation table format?
 THUMB(	it	lo )			@ force fixup-able long branch encoding
-	blo	__error_p		@ only classic page table format
+	blo	__error_lpae		@ only classic page table format
 #endif

 #ifndef CONFIG_XIP_KERNEL

----- changed file -----

@@ -878,7 +878,8 @@ static int hyp_init_cpu_pm_notifier(struct notifier_block *self,
				    unsigned long cmd,
				    void *v)
 {
-	if (cmd == CPU_PM_EXIT) {
+	if (cmd == CPU_PM_EXIT &&
+	    __hyp_get_vectors() == hyp_default_vectors) {
		cpu_init_hyp_mode(NULL);
		return NOTIFY_OK;
	}

----- changed file -----

@@ -220,6 +220,10 @@ after_vfp_restore:
 * in Hyp mode (see init_hyp_mode in arch/arm/kvm/arm.c).  Return values are
 * passed in r0 and r1.
 *
+ * A function pointer with a value of 0xffffffff has a special meaning,
+ * and is used to implement __hyp_get_vectors in the same way as in
+ * arch/arm/kernel/hyp_stub.S.
+ *
 * The calling convention follows the standard AAPCS:
 *   r0 - r3: caller save
 *   r12:     caller save
@@ -363,6 +367,11 @@ hyp_hvc:
 host_switch_to_hyp:
	pop	{r0, r1, r2}

+	/* Check for __hyp_get_vectors */
+	cmp	r0, #-1
+	mrceq	p15, 4, r0, c12, c0, 0	@ get HVBAR
+	beq	1f
+
	push	{lr}
	mrs	lr, SPSR
	push	{lr}
@@ -378,7 +387,7 @@ THUMB(	orr	lr, #1)
	pop	{lr}
	msr	SPSR_csxf, lr
	pop	{lr}
-	eret
+1:	eret

 guest_trap:
	load_vcpu	@ Load VCPU pointer to r0

----- changed file -----

@@ -433,7 +433,9 @@ static const struct clk_ops dpll4_m5x2_ck_ops = {
	.enable		= &omap2_dflt_clk_enable,
	.disable	= &omap2_dflt_clk_disable,
	.is_enabled	= &omap2_dflt_clk_is_enabled,
+	.set_rate	= &omap3_clkoutx2_set_rate,
	.recalc_rate	= &omap3_clkoutx2_recalc,
+	.round_rate	= &omap3_clkoutx2_round_rate,
 };

 static const struct clk_ops dpll4_m5x2_ck_3630_ops = {

----- changed file -----

@@ -23,6 +23,8 @@
 #include "prm.h"
 #include "clockdomain.h"

+#define MAX_CPUS	2
+
 /* Machine specific information */
 struct idle_statedata {
	u32 cpu_state;
@@ -48,11 +50,11 @@ static struct idle_statedata omap4_idle_data[] = {
	},
 };

-static struct powerdomain *mpu_pd, *cpu_pd[NR_CPUS];
-static struct clockdomain *cpu_clkdm[NR_CPUS];
+static struct powerdomain *mpu_pd, *cpu_pd[MAX_CPUS];
+static struct clockdomain *cpu_clkdm[MAX_CPUS];

 static atomic_t abort_barrier;
-static bool cpu_done[NR_CPUS];
+static bool cpu_done[MAX_CPUS];
 static struct idle_statedata *state_ptr = &omap4_idle_data[0];

 /* Private functions */

----- changed file -----

@@ -623,25 +623,12 @@ void omap3_dpll_deny_idle(struct clk_hw_omap *clk)

 /* Clock control for DPLL outputs */

-/**
- * omap3_clkoutx2_recalc - recalculate DPLL X2 output virtual clock rate
- * @clk: DPLL output struct clk
- *
- * Using parent clock DPLL data, look up DPLL state.  If locked, set our
- * rate to the dpll_clk * 2; otherwise, just use dpll_clk.
- */
-unsigned long omap3_clkoutx2_recalc(struct clk_hw *hw,
-				    unsigned long parent_rate)
+/* Find the parent DPLL for the given clkoutx2 clock */
+static struct clk_hw_omap *omap3_find_clkoutx2_dpll(struct clk_hw *hw)
 {
-	const struct dpll_data *dd;
-	unsigned long rate;
-	u32 v;
	struct clk_hw_omap *pclk = NULL;
	struct clk *parent;

-	if (!parent_rate)
-		return 0;
-
	/* Walk up the parents of clk, looking for a DPLL */
	do {
		do {
@@ -656,9 +643,35 @@ unsigned long omap3_clkoutx2_recalc(struct clk_hw *hw,
	/* clk does not have a DPLL as a parent?  error in the clock data */
	if (!pclk) {
		WARN_ON(1);
-		return 0;
+		return NULL;
	}

+	return pclk;
+}
+
+/**
+ * omap3_clkoutx2_recalc - recalculate DPLL X2 output virtual clock rate
+ * @clk: DPLL output struct clk
+ *
+ * Using parent clock DPLL data, look up DPLL state.  If locked, set our
+ * rate to the dpll_clk * 2; otherwise, just use dpll_clk.
+ */
+unsigned long omap3_clkoutx2_recalc(struct clk_hw *hw,
+				    unsigned long parent_rate)
+{
+	const struct dpll_data *dd;
+	unsigned long rate;
+	u32 v;
+	struct clk_hw_omap *pclk = NULL;
+
+	if (!parent_rate)
+		return 0;
+
+	pclk = omap3_find_clkoutx2_dpll(hw);
+	if (!pclk)
+		return 0;
+
	dd = pclk->dpll_data;

	WARN_ON(!dd->enable_mask);
@@ -672,6 +685,55 @@ unsigned long omap3_clkoutx2_recalc(struct clk_hw *hw,
	return rate;
 }

+int omap3_clkoutx2_set_rate(struct clk_hw *hw, unsigned long rate,
+					unsigned long parent_rate)
+{
+	return 0;
+}
+
+long omap3_clkoutx2_round_rate(struct clk_hw *hw, unsigned long rate,
+		unsigned long *prate)
+{
+	const struct dpll_data *dd;
+	u32 v;
+	struct clk_hw_omap *pclk = NULL;
+
+	if (!*prate)
+		return 0;
+
+	pclk = omap3_find_clkoutx2_dpll(hw);
+	if (!pclk)
+		return 0;
+
+	dd = pclk->dpll_data;
+
+	/* TYPE J does not have a clkoutx2 */
+	if (dd->flags & DPLL_J_TYPE) {
+		*prate = __clk_round_rate(__clk_get_parent(pclk->hw.clk), rate);
+		return *prate;
+	}
+
+	WARN_ON(!dd->enable_mask);
+
+	v = omap2_clk_readl(pclk, dd->control_reg) & dd->enable_mask;
+	v >>= __ffs(dd->enable_mask);
+
+	/* If in bypass, the rate is fixed to the bypass rate*/
+	if (v != OMAP3XXX_EN_DPLL_LOCKED)
+		return *prate;
+
+	if (__clk_get_flags(hw->clk) & CLK_SET_RATE_PARENT) {
+		unsigned long best_parent;
+
+		best_parent = (rate / 2);
+		*prate = __clk_round_rate(__clk_get_parent(hw->clk),
+				best_parent);
+	}
+
+	return *prate * 2;
+}
+
 /* OMAP3/4 non-CORE DPLL clkops */
 const struct clk_hw_omap_ops clkhwops_omap3_dpll = {
	.allow_idle	= omap3_dpll_allow_idle,

----- changed file -----

@@ -1947,29 +1947,31 @@ static int _ocp_softreset(struct omap_hwmod *oh)
		goto dis_opt_clks;

	_write_sysconfig(v, oh);
+
+	if (oh->class->sysc->srst_udelay)
+		udelay(oh->class->sysc->srst_udelay);
+
+	c = _wait_softreset_complete(oh);
+	if (c == MAX_MODULE_SOFTRESET_WAIT) {
+		pr_warning("omap_hwmod: %s: softreset failed (waited %d usec)\n",
+			   oh->name, MAX_MODULE_SOFTRESET_WAIT);
+		ret = -ETIMEDOUT;
+		goto dis_opt_clks;
+	} else {
+		pr_debug("omap_hwmod: %s: softreset in %d usec\n", oh->name, c);
+	}
+
	ret = _clear_softreset(oh, &v);
	if (ret)
		goto dis_opt_clks;

	_write_sysconfig(v, oh);

-	if (oh->class->sysc->srst_udelay)
-		udelay(oh->class->sysc->srst_udelay);
-
-	c = _wait_softreset_complete(oh);
-	if (c == MAX_MODULE_SOFTRESET_WAIT)
-		pr_warning("omap_hwmod: %s: softreset failed (waited %d usec)\n",
-			   oh->name, MAX_MODULE_SOFTRESET_WAIT);
-	else
-		pr_debug("omap_hwmod: %s: softreset in %d usec\n", oh->name, c);
-
	/*
	 * XXX add _HWMOD_STATE_WEDGED for modules that don't come back from
	 * _wait_target_ready() or _reset()
	 */
-	ret = (c == MAX_MODULE_SOFTRESET_WAIT) ? -ETIMEDOUT : 0;
-
 dis_opt_clks:
	if (oh->flags & HWMOD_CONTROL_OPT_CLKS_IN_RESET)
		_disable_optional_clocks(oh);

----- changed file -----

@@ -1365,11 +1365,10 @@ static struct omap_hwmod_class_sysconfig dra7xx_spinlock_sysc = {
	.rev_offs	= 0x0000,
	.sysc_offs	= 0x0010,
	.syss_offs	= 0x0014,
-	.sysc_flags	= (SYSC_HAS_AUTOIDLE | SYSC_HAS_CLOCKACTIVITY |
-			   SYSC_HAS_ENAWAKEUP | SYSC_HAS_SIDLEMODE |
-			   SYSC_HAS_SOFTRESET | SYSS_HAS_RESET_STATUS),
-	.idlemodes	= (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
-			   SIDLE_SMART_WKUP),
+	.sysc_flags	= (SYSC_HAS_AUTOIDLE | SYSC_HAS_ENAWAKEUP |
+			   SYSC_HAS_SIDLEMODE | SYSC_HAS_SOFTRESET |
+			   SYSS_HAS_RESET_STATUS),
+	.idlemodes	= (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART),
	.sysc_fields	= &omap_hwmod_sysc_type1,
 };

----- changed file -----

@@ -22,6 +22,8 @@
 #include "common-board-devices.h"
 #include "dss-common.h"
 #include "control.h"
+#include "omap-secure.h"
+#include "soc.h"

 struct pdata_init {
	const char *compatible;
@@ -169,6 +171,22 @@ static void __init am3517_evm_legacy_init(void)
	omap_ctrl_writel(v, AM35XX_CONTROL_IP_SW_RESET);
	omap_ctrl_readl(AM35XX_CONTROL_IP_SW_RESET); /* OCP barrier */
 }
+
+static void __init nokia_n900_legacy_init(void)
+{
+	hsmmc2_internal_input_clk();
+
+	if (omap_type() == OMAP2_DEVICE_TYPE_SEC) {
+		if (IS_ENABLED(CONFIG_ARM_ERRATA_430973)) {
+			pr_info("RX-51: Enabling ARM errata 430973 workaround\n");
+			/* set IBE to 1 */
+			rx51_secure_update_aux_cr(BIT(6), 0);
+		} else {
+			pr_warning("RX-51: Not enabling ARM errata 430973 workaround\n");
+			pr_warning("Thumb binaries may crash randomly without this workaround\n");
+		}
+	}
+}
 #endif /* CONFIG_ARCH_OMAP3 */

 #ifdef CONFIG_ARCH_OMAP4
@@ -239,6 +257,7 @@ struct of_dev_auxdata omap_auxdata_lookup[] __initdata = {
 #endif
 #ifdef CONFIG_ARCH_OMAP3
	OF_DEV_AUXDATA("ti,omap3-padconf", 0x48002030, "48002030.pinmux", &pcs_pdata),
+	OF_DEV_AUXDATA("ti,omap3-padconf", 0x480025a0, "480025a0.pinmux", &pcs_pdata),
	OF_DEV_AUXDATA("ti,omap3-padconf", 0x48002a00, "48002a00.pinmux", &pcs_pdata),
	/* Only on am3517 */
	OF_DEV_AUXDATA("ti,davinci_mdio", 0x5c030000, "davinci_mdio.0", NULL),
@@ -259,7 +278,7 @@ struct of_dev_auxdata omap_auxdata_lookup[] __initdata = {
 static struct pdata_init pdata_quirks[] __initdata = {
 #ifdef CONFIG_ARCH_OMAP3
	{ "compulab,omap3-sbc-t3730", omap3_sbc_t3730_legacy_init, },
-	{ "nokia,omap3-n900", hsmmc2_internal_input_clk, },
+	{ "nokia,omap3-n900", nokia_n900_legacy_init, },
	{ "nokia,omap3-n9", hsmmc2_internal_input_clk, },
	{ "nokia,omap3-n950", hsmmc2_internal_input_clk, },
	{ "isee,omap3-igep0020", omap3_igep0020_legacy_init, },

----- changed file -----

@@ -183,11 +183,11 @@ void omap4_prminst_global_warm_sw_reset(void)
					OMAP4_PRM_RSTCTRL_OFFSET);
	v |= OMAP4430_RST_GLOBAL_WARM_SW_MASK;
	omap4_prminst_write_inst_reg(v, OMAP4430_PRM_PARTITION,
-				 OMAP4430_PRM_DEVICE_INST,
+				 dev_inst,
				 OMAP4_PRM_RSTCTRL_OFFSET);

	/* OCP barrier */
	v = omap4_prminst_read_inst_reg(OMAP4430_PRM_PARTITION,
-				    OMAP4430_PRM_DEVICE_INST,
+				    dev_inst,
				    OMAP4_PRM_RSTCTRL_OFFSET);
 }

----- changed file -----

@@ -13,6 +13,8 @@
 #ifndef __ASM_ARCH_COLLIE_H
 #define __ASM_ARCH_COLLIE_H

+#include "hardware.h" /* Gives GPIO_MAX */
+
 extern void locomolcd_power(int on);

 #define COLLIE_SCOOP_GPIO_BASE	(GPIO_MAX + 1)

----- changed file -----

@@ -264,6 +264,9 @@ static void walk_pmd(struct pg_state *st, pud_t *pud, unsigned long start)
			note_page(st, addr, 3, pmd_val(*pmd));
		else
			walk_pte(st, pmd, addr);
+
+		if (SECTION_SIZE < PMD_SIZE && pmd_large(pmd[1]))
+			note_page(st, addr + SECTION_SIZE, 3, pmd_val(pmd[1]));
	}
 }

----- changed file -----

@@ -16,6 +16,8 @@
 #ifndef __ASM_PERCPU_H
 #define __ASM_PERCPU_H

+#ifdef CONFIG_SMP
+
 static inline void set_my_cpu_offset(unsigned long off)
 {
	asm volatile("msr tpidr_el1, %0" :: "r" (off) : "memory");
@@ -36,6 +38,12 @@ static inline unsigned long __my_cpu_offset(void)
 }
 #define __my_cpu_offset __my_cpu_offset()

+#else	/* !CONFIG_SMP */
+
+#define set_my_cpu_offset(x)	do { } while (0)
+
+#endif /* CONFIG_SMP */
+
 #include <asm-generic/percpu.h>

 #endif /* __ASM_PERCPU_H */

----- changed file -----

@@ -136,11 +136,11 @@ extern struct page *empty_zero_page;
 /*
 * The following only work if pte_present(). Undefined behaviour otherwise.
 */
-#define pte_present(pte)	(pte_val(pte) & (PTE_VALID | PTE_PROT_NONE))
-#define pte_dirty(pte)		(pte_val(pte) & PTE_DIRTY)
-#define pte_young(pte)		(pte_val(pte) & PTE_AF)
-#define pte_special(pte)	(pte_val(pte) & PTE_SPECIAL)
-#define pte_write(pte)		(pte_val(pte) & PTE_WRITE)
+#define pte_present(pte)	(!!(pte_val(pte) & (PTE_VALID | PTE_PROT_NONE)))
+#define pte_dirty(pte)		(!!(pte_val(pte) & PTE_DIRTY))
+#define pte_young(pte)		(!!(pte_val(pte) & PTE_AF))
+#define pte_special(pte)	(!!(pte_val(pte) & PTE_SPECIAL))
+#define pte_write(pte)		(!!(pte_val(pte) & PTE_WRITE))
 #define pte_exec(pte)		(!(pte_val(pte) & PTE_UXN))

 #define pte_valid_user(pte) \

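The `!!` added above is not cosmetic: these macros are frequently used where the result is narrowed to an int or stored in a bool, and a flag living above bit 31 would be silently truncated to zero. A standalone sketch of the hazard (illustrative values, not kernel code):

    #include <stdio.h>
    #include <stdint.h>

    #define PTE_WRITE (1ULL << 57)	/* illustrative high flag bit */

    int main(void)
    {
        uint64_t pte = PTE_WRITE;

        /* Narrowing to 32 bits drops bit 57: typically yields 0. */
        int wrong = (int)(pte & PTE_WRITE);

        /* '!!' collapses any non-zero value to exactly 1 first. */
        int right = !!(pte & PTE_WRITE);

        printf("wrong=%d right=%d\n", wrong, right);	/* wrong=0 right=1 */
        return 0;
    }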
----- changed file -----

@@ -48,7 +48,11 @@ int unwind_frame(struct stackframe *frame)

	frame->sp = fp + 0x10;
	frame->fp = *(unsigned long *)(fp);
-	frame->pc = *(unsigned long *)(fp + 8);
+	/*
+	 * -4 here because we care about the PC at time of bl,
+	 * not where the return will go.
+	 */
+	frame->pc = *(unsigned long *)(fp + 8) - 4;

	return 0;
 }

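For background on the `- 4` above: an AArch64 frame record stores the caller's fp at offset 0 and the saved lr at offset 8, and lr points at the instruction after the bl; with fixed 4-byte instructions, subtracting 4 recovers the call site itself. A hedged userspace illustration (not the kernel's struct stackframe):

    #include <stdint.h>

    /* An AArch64 frame record as laid out at a frame pointer. */
    struct frame_record {
        uint64_t fp;	/* caller's frame pointer, at fp + 0 */
        uint64_t lr;	/* saved return address,   at fp + 8 */
    };

    /* The saved lr is the return address, i.e. the instruction after the
     * bl; since every instruction is 4 bytes, lr - 4 is the bl itself. */
    static uint64_t call_site(const struct frame_record *rec)
    {
        return rec->lr - 4;
    }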
----- changed file -----

@@ -694,6 +694,24 @@ __hyp_panic_str:

	.align	2

+/*
+ * u64 kvm_call_hyp(void *hypfn, ...);
+ *
+ * This is not really a variadic function in the classic C-way and care must
+ * be taken when calling this to ensure parameters are passed in registers
+ * only, since the stack will change between the caller and the callee.
+ *
+ * Call the function with the first argument containing a pointer to the
+ * function you wish to call in Hyp mode, and subsequent arguments will be
+ * passed as x0, x1, and x2 (a maximum of 3 arguments in addition to the
+ * function pointer can be passed).  The function being called must be mapped
+ * in Hyp mode (see init_hyp_mode in arch/arm/kvm/arm.c).  Return values are
+ * passed in r0 and r1.
+ *
+ * A function pointer with a value of 0 has a special meaning, and is
+ * used to implement __hyp_get_vectors in the same way as in
+ * arch/arm64/kernel/hyp_stub.S.
+ */
 ENTRY(kvm_call_hyp)
	hvc	#0
	ret
@@ -737,7 +755,12 @@ el1_sync:			// Guest trapped into EL2
	pop	x2, x3
	pop	x0, x1

-	push	lr, xzr
+	/* Check for __hyp_get_vectors */
+	cbnz	x0, 1f
+	mrs	x0, vbar_el2
+	b	2f
+
+1:	push	lr, xzr

	/*
	 * Compute the function address in EL2, and shuffle the parameters.
@@ -750,7 +773,7 @@ el1_sync:			// Guest trapped into EL2
	blr	lr

	pop	lr, xzr
-	eret
+2:	eret

 el1_trap:
	/*

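The comment block above fully specifies kvm_call_hyp()'s convention; a sketch of how a caller would use the special null function pointer (the wrapper name is made up, the convention is quoted from the diff):

    typedef unsigned long long u64;

    /* Convention quoted from the comment above: the first argument is the
     * Hyp-mode function pointer; up to three more arguments travel in
     * registers x0-x2. */
    u64 kvm_call_hyp(void *hypfn, ...);

    /* Hypothetical wrapper: a null function pointer selects the special
     * case that returns the current vector base (vbar_el2) instead of
     * making a call. */
    static inline u64 my_hyp_get_vectors(void)
    {
        return kvm_call_hyp(NULL);
    }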
----- changed file -----

@@ -12,6 +12,7 @@
 #define _ASM_C6X_CACHE_H

 #include <linux/irqflags.h>
+#include <linux/init.h>

 /*
 * Cache line size

----- changed file -----

@@ -144,7 +144,7 @@ static inline int test_and_change_bit(int nr, volatile unsigned long *addr)
 * definition, which doesn't have the same semantics.  We don't want to
 * use -fno-builtin, so just hide the name ffs.
 */
-#define ffs kernel_ffs
+#define ffs(x) kernel_ffs(x)

 #include <asm-generic/bitops/fls.h>
 #include <asm-generic/bitops/__fls.h>

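The ia64 change above is a classic macro pitfall: the object-like form `#define ffs kernel_ffs` rewrites every occurrence of the identifier, including declarations of unrelated variables named ffs, while the function-like form expands only where ffs is invoked with parentheses. A standalone illustration (not kernel code):

    /* With the old '#define ffs kernel_ffs', the pointer declaration
     * below would be rewritten to 'kernel_ffs' and fail to compile.
     * The function-like form leaves it alone. */
    #define ffs(x) kernel_ffs(x)

    extern int kernel_ffs(int x);

    struct fs_struct;
    static struct fs_struct *ffs;	/* not expanded: no '(' follows */

    static int first_bit(int mask)
    {
        return ffs(mask);		/* expands to kernel_ffs(mask) */
    }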
----- changed file -----

@@ -98,7 +98,7 @@ static int uncached_add_chunk(struct uncached_pool *uc_pool, int nid)
	/* attempt to allocate a granule's worth of cached memory pages */

	page = alloc_pages_exact_node(nid,
-				GFP_KERNEL | __GFP_ZERO | GFP_THISNODE,
+				GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE,
				IA64_GRANULE_SHIFT-PAGE_SHIFT);
	if (!page) {
		mutex_unlock(&uc_pool->add_chunk_mutex);

----- changed file -----

@@ -200,10 +200,11 @@ static inline void __user *arch_compat_alloc_user_space(long len)

	/*
	 * We can't access below the stack pointer in the 32bit ABI and
-	 * can access 288 bytes in the 64bit ABI
+	 * can access 288 bytes in the 64bit big-endian ABI,
+	 * or 512 bytes with the new ELFv2 little-endian ABI.
	 */
	if (!is_32bit_task())
-		usp -= 288;
+		usp -= USER_REDZONE_SIZE;

	return (void __user *) (usp - len);
 }

----- changed file -----

@@ -816,8 +816,8 @@ int64_t opal_pci_next_error(uint64_t phb_id, uint64_t *first_frozen_pe,
 int64_t opal_pci_poll(uint64_t phb_id);
 int64_t opal_return_cpu(void);

-int64_t opal_xscom_read(uint32_t gcid, uint32_t pcb_addr, __be64 *val);
-int64_t opal_xscom_write(uint32_t gcid, uint32_t pcb_addr, uint64_t val);
+int64_t opal_xscom_read(uint32_t gcid, uint64_t pcb_addr, __be64 *val);
+int64_t opal_xscom_write(uint32_t gcid, uint64_t pcb_addr, uint64_t val);

 int64_t opal_lpc_write(uint32_t chip_id, enum OpalLPCAddressType addr_type,
		       uint32_t addr, uint32_t data, uint32_t sz);

----- changed file -----

@@ -28,11 +28,23 @@

 #ifdef __powerpc64__

+/*
+ * Size of redzone that userspace is allowed to use below the stack
+ * pointer.  This is 288 in the 64-bit big-endian ELF ABI, and 512 in
+ * the new ELFv2 little-endian ABI, so we allow the larger amount.
+ *
+ * For kernel code we allow a 288-byte redzone, in order to conserve
+ * kernel stack space; gcc currently only uses 288 bytes, and will
+ * hopefully allow explicit control of the redzone size in future.
+ */
+#define USER_REDZONE_SIZE	512
+#define KERNEL_REDZONE_SIZE	288
+
 #define STACK_FRAME_OVERHEAD	112	/* size of minimum stack frame */
 #define STACK_FRAME_LR_SAVE	2	/* Location of LR in stack frame */
 #define STACK_FRAME_REGS_MARKER	ASM_CONST(0x7265677368657265)
 #define STACK_INT_FRAME_SIZE	(sizeof(struct pt_regs) + \
-				 STACK_FRAME_OVERHEAD + 288)
+				 STACK_FRAME_OVERHEAD + KERNEL_REDZONE_SIZE)
 #define STACK_FRAME_MARKER	12

 /* Size of dummy stack frame allocated when calling signal handler. */
@@ -41,6 +53,8 @@

 #else /* __powerpc64__ */

+#define USER_REDZONE_SIZE	0
+#define KERNEL_REDZONE_SIZE	0
 #define STACK_FRAME_OVERHEAD	16	/* size of minimum stack frame */
 #define STACK_FRAME_LR_SAVE	1	/* Location of LR in stack frame */
 #define STACK_FRAME_REGS_MARKER	ASM_CONST(0x72656773)

----- changed file -----

@@ -98,17 +98,19 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
			size_t csize, unsigned long offset, int userbuf)
 {
	void  *vaddr;
+	phys_addr_t paddr;

	if (!csize)
		return 0;

	csize = min_t(size_t, csize, PAGE_SIZE);
+	paddr = pfn << PAGE_SHIFT;

-	if ((min_low_pfn < pfn) && (pfn < max_pfn)) {
-		vaddr = __va(pfn << PAGE_SHIFT);
+	if (memblock_is_region_memory(paddr, csize)) {
+		vaddr = __va(paddr);
		csize = copy_oldmem_vaddr(vaddr, buf, csize, offset, userbuf);
	} else {
-		vaddr = __ioremap(pfn << PAGE_SHIFT, PAGE_SIZE, 0);
+		vaddr = __ioremap(paddr, PAGE_SIZE, 0);
		csize = copy_oldmem_vaddr(vaddr, buf, csize, offset, userbuf);
		iounmap(vaddr);
	}

----- changed file -----

@@ -74,6 +74,7 @@ ftrace_modify_code(unsigned long ip, unsigned int old, unsigned int new)
 */
 static int test_24bit_addr(unsigned long ip, unsigned long addr)
 {
+	addr = ppc_function_entry((void *)addr);

	/* use the create_branch to verify that this offset can be branched */
	return create_branch((unsigned int *)ip, addr, 0);

----- changed file -----

@@ -1048,6 +1048,15 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
	flush_altivec_to_thread(src);
	flush_vsx_to_thread(src);
	flush_spe_to_thread(src);
+	/*
+	 * Flush TM state out so we can copy it.  __switch_to_tm() does this
+	 * flush but it removes the checkpointed state from the current CPU and
+	 * transitions the CPU out of TM mode.  Hence we need to call
+	 * tm_recheckpoint_new_task() (on the same task) to restore the
+	 * checkpointed state back and the TM mode.
+	 */
+	__switch_to_tm(src);
+	tm_recheckpoint_new_task(src);

	*dst = *src;

----- changed file -----

@@ -81,6 +81,7 @@ _GLOBAL(relocate)

 6:	blr

+.balign 8
 p_dyn:	.llong __dynamic_start - 0b
 p_rela:	.llong __rela_dyn_start - 0b
 p_st:	.llong _stext - 0b

----- changed file -----

@@ -65,8 +65,8 @@ struct rt_sigframe {
	struct siginfo __user *pinfo;
	void __user *puc;
	struct siginfo info;
-	/* 64 bit ABI allows for 288 bytes below sp before decrementing it. */
-	char abigap[288];
+	/* New 64 bit little-endian ABI allows redzone of 512 bytes below sp */
+	char abigap[USER_REDZONE_SIZE];
 } __attribute__ ((aligned (16)));

 static const char fmt32[] = KERN_INFO \

----- changed file -----

@@ -123,7 +123,8 @@ static int __init cbe_ptcal_enable_on_node(int nid, int order)

	area->nid = nid;
	area->order = order;
-	area->pages = alloc_pages_exact_node(area->nid, GFP_KERNEL|GFP_THISNODE,
+	area->pages = alloc_pages_exact_node(area->nid,
+					     GFP_KERNEL|__GFP_THISNODE,
					     area->order);

	if (!area->pages) {

----- changed file -----

@@ -114,6 +114,7 @@ DEFINE_SIMPLE_ATTRIBUTE(ioda_eeh_inbB_dbgfs_ops, ioda_eeh_inbB_dbgfs_get,
			ioda_eeh_inbB_dbgfs_set, "0x%llx\n");
 #endif /* CONFIG_DEBUG_FS */

+
 /**
 * ioda_eeh_post_init - Chip dependent post initialization
 * @hose: PCI controller
@@ -221,6 +222,22 @@ static int ioda_eeh_set_option(struct eeh_pe *pe, int option)
	return ret;
 }

+static void ioda_eeh_phb_diag(struct pci_controller *hose)
+{
+	struct pnv_phb *phb = hose->private_data;
+	long rc;
+
+	rc = opal_pci_get_phb_diag_data2(phb->opal_id, phb->diag.blob,
+					 PNV_PCI_DIAG_BUF_SIZE);
+	if (rc != OPAL_SUCCESS) {
+		pr_warning("%s: Failed to get diag-data for PHB#%x (%ld)\n",
+			    __func__, hose->global_number, rc);
+		return;
+	}
+
+	pnv_pci_dump_phb_diag_data(hose, phb->diag.blob);
+}
+
 /**
 * ioda_eeh_get_state - Retrieve the state of PE
 * @pe: EEH PE
@@ -272,6 +289,9 @@ static int ioda_eeh_get_state(struct eeh_pe *pe)
			result |= EEH_STATE_DMA_ACTIVE;
			result |= EEH_STATE_MMIO_ENABLED;
			result |= EEH_STATE_DMA_ENABLED;
+		} else if (!(pe->state & EEH_PE_ISOLATED)) {
+			eeh_pe_state_mark(pe, EEH_PE_ISOLATED);
+			ioda_eeh_phb_diag(hose);
		}

		return result;
@@ -315,6 +335,15 @@ static int ioda_eeh_get_state(struct eeh_pe *pe)
			   __func__, fstate, hose->global_number, pe_no);
	}

+	/* Dump PHB diag-data for frozen PE */
+	if (result != EEH_STATE_NOT_SUPPORT &&
+	    (result & (EEH_STATE_MMIO_ACTIVE | EEH_STATE_DMA_ACTIVE)) !=
+	    (EEH_STATE_MMIO_ACTIVE | EEH_STATE_DMA_ACTIVE) &&
+	    !(pe->state & EEH_PE_ISOLATED)) {
+		eeh_pe_state_mark(pe, EEH_PE_ISOLATED);
+		ioda_eeh_phb_diag(hose);
+	}
+
	return result;
 }
@@ -529,42 +558,6 @@ static int ioda_eeh_reset(struct eeh_pe *pe, int option)
	return ret;
 }

-/**
- * ioda_eeh_get_log - Retrieve error log
- * @pe: EEH PE
- * @severity: Severity level of the log
- * @drv_log: buffer to store the log
- * @len: space of the log buffer
- *
- * The function is used to retrieve error log from P7IOC.
- */
-static int ioda_eeh_get_log(struct eeh_pe *pe, int severity,
-			    char *drv_log, unsigned long len)
-{
-	s64 ret;
-	unsigned long flags;
-	struct pci_controller *hose = pe->phb;
-	struct pnv_phb *phb = hose->private_data;
-
-	spin_lock_irqsave(&phb->lock, flags);
-
-	ret = opal_pci_get_phb_diag_data2(phb->opal_id,
-			phb->diag.blob, PNV_PCI_DIAG_BUF_SIZE);
-	if (ret) {
-		spin_unlock_irqrestore(&phb->lock, flags);
-		pr_warning("%s: Can't get log for PHB#%x-PE#%x (%lld)\n",
-			   __func__, hose->global_number, pe->addr, ret);
-		return -EIO;
-	}
-
-	/* The PHB diag-data is always indicative */
-	pnv_pci_dump_phb_diag_data(hose, phb->diag.blob);
-
-	spin_unlock_irqrestore(&phb->lock, flags);
-
-	return 0;
-}
-
 /**
 * ioda_eeh_configure_bridge - Configure the PCI bridges for the indicated PE
 * @pe: EEH PE
@@ -646,22 +639,6 @@ static void ioda_eeh_hub_diag(struct pci_controller *hose)
	}
 }

-static void ioda_eeh_phb_diag(struct pci_controller *hose)
-{
-	struct pnv_phb *phb = hose->private_data;
-	long rc;
-
-	rc = opal_pci_get_phb_diag_data2(phb->opal_id, phb->diag.blob,
-					 PNV_PCI_DIAG_BUF_SIZE);
-	if (rc != OPAL_SUCCESS) {
-		pr_warning("%s: Failed to get diag-data for PHB#%x (%ld)\n",
-			    __func__, hose->global_number, rc);
-		return;
-	}
-
-	pnv_pci_dump_phb_diag_data(hose, phb->diag.blob);
-}
-
 static int ioda_eeh_get_phb_pe(struct pci_controller *hose,
			       struct eeh_pe **pe)
 {
@@ -834,6 +811,20 @@ static int ioda_eeh_next_error(struct eeh_pe **pe)
				__func__, err_type);
		}

+		/*
+		 * EEH core will try recover from fenced PHB or
+		 * frozen PE. In the time for frozen PE, EEH core
+		 * enable IO path for that before collecting logs,
+		 * but it ruins the site. So we have to dump the
+		 * log in advance here.
+		 */
+		if ((ret == EEH_NEXT_ERR_FROZEN_PE ||
+		    ret == EEH_NEXT_ERR_FENCED_PHB) &&
+		    !((*pe)->state & EEH_PE_ISOLATED)) {
+			eeh_pe_state_mark(*pe, EEH_PE_ISOLATED);
+			ioda_eeh_phb_diag(hose);
+		}
+
		/*
		 * If we have no errors on the specific PHB or only
		 * informative error there, we continue poking it.
@@ -852,7 +843,6 @@ struct pnv_eeh_ops ioda_eeh_ops = {
	.set_option		= ioda_eeh_set_option,
	.get_state		= ioda_eeh_get_state,
	.reset			= ioda_eeh_reset,
-	.get_log		= ioda_eeh_get_log,
	.configure_bridge	= ioda_eeh_configure_bridge,
	.next_error		= ioda_eeh_next_error
 };

View File

@ -71,11 +71,11 @@ static int opal_xscom_err_xlate(int64_t rc)
} }
} }
static u64 opal_scom_unmangle(u64 reg) static u64 opal_scom_unmangle(u64 addr)
{ {
/* /*
* XSCOM indirect addresses have the top bit set. Additionally * XSCOM indirect addresses have the top bit set. Additionally
* the reset of the top 3 nibbles is always 0. * the rest of the top 3 nibbles is always 0.
* *
* Because the debugfs interface uses signed offsets and shifts * Because the debugfs interface uses signed offsets and shifts
* the address left by 3, we basically cannot use the top 4 bits * the address left by 3, we basically cannot use the top 4 bits
@ -86,10 +86,13 @@ static u64 opal_scom_unmangle(u64 reg)
* conversion here. To leave room for further xscom address * conversion here. To leave room for further xscom address
* expansion, we only clear out the top byte * expansion, we only clear out the top byte
* *
* For in-kernel use, we also support the real indirect bit, so
* we test for any of the top 5 bits
*
*/ */
if (reg & (1ull << 59)) if (addr & (0x1full << 59))
reg = (reg & ~(0xffull << 56)) | (1ull << 63); addr = (addr & ~(0xffull << 56)) | (1ull << 63);
return reg; return addr;
} }
static int opal_scom_read(scom_map_t map, u64 reg, u64 *value) static int opal_scom_read(scom_map_t map, u64 reg, u64 *value)
@ -98,8 +101,8 @@ static int opal_scom_read(scom_map_t map, u64 reg, u64 *value)
int64_t rc; int64_t rc;
__be64 v; __be64 v;
reg = opal_scom_unmangle(reg); reg = opal_scom_unmangle(m->addr + reg);
rc = opal_xscom_read(m->chip, m->addr + reg, (__be64 *)__pa(&v)); rc = opal_xscom_read(m->chip, reg, (__be64 *)__pa(&v));
*value = be64_to_cpu(v); *value = be64_to_cpu(v);
return opal_xscom_err_xlate(rc); return opal_xscom_err_xlate(rc);
} }
@ -109,8 +112,8 @@ static int opal_scom_write(scom_map_t map, u64 reg, u64 value)
struct opal_scom_map *m = map; struct opal_scom_map *m = map;
int64_t rc; int64_t rc;
reg = opal_scom_unmangle(reg); reg = opal_scom_unmangle(m->addr + reg);
rc = opal_xscom_write(m->chip, m->addr + reg, value); rc = opal_xscom_write(m->chip, reg, value);
return opal_xscom_err_xlate(rc); return opal_xscom_err_xlate(rc);
} }

View File

@ -134,57 +134,72 @@ static void pnv_pci_dump_p7ioc_diag_data(struct pci_controller *hose,
pr_info("P7IOC PHB#%d Diag-data (Version: %d)\n\n", pr_info("P7IOC PHB#%d Diag-data (Version: %d)\n\n",
hose->global_number, common->version); hose->global_number, common->version);
pr_info(" brdgCtl: %08x\n", data->brdgCtl); if (data->brdgCtl)
pr_info(" brdgCtl: %08x\n",
pr_info(" portStatusReg: %08x\n", data->portStatusReg); data->brdgCtl);
pr_info(" rootCmplxStatus: %08x\n", data->rootCmplxStatus); if (data->portStatusReg || data->rootCmplxStatus ||
pr_info(" busAgentStatus: %08x\n", data->busAgentStatus); data->busAgentStatus)
pr_info(" UtlSts: %08x %08x %08x\n",
pr_info(" deviceStatus: %08x\n", data->deviceStatus); data->portStatusReg, data->rootCmplxStatus,
pr_info(" slotStatus: %08x\n", data->slotStatus); data->busAgentStatus);
pr_info(" linkStatus: %08x\n", data->linkStatus); if (data->deviceStatus || data->slotStatus ||
pr_info(" devCmdStatus: %08x\n", data->devCmdStatus); data->linkStatus || data->devCmdStatus ||
pr_info(" devSecStatus: %08x\n", data->devSecStatus); data->devSecStatus)
pr_info(" RootSts: %08x %08x %08x %08x %08x\n",
pr_info(" rootErrorStatus: %08x\n", data->rootErrorStatus); data->deviceStatus, data->slotStatus,
pr_info(" uncorrErrorStatus: %08x\n", data->uncorrErrorStatus); data->linkStatus, data->devCmdStatus,
pr_info(" corrErrorStatus: %08x\n", data->corrErrorStatus); data->devSecStatus);
pr_info(" tlpHdr1: %08x\n", data->tlpHdr1); if (data->rootErrorStatus || data->uncorrErrorStatus ||
pr_info(" tlpHdr2: %08x\n", data->tlpHdr2); data->corrErrorStatus)
pr_info(" tlpHdr3: %08x\n", data->tlpHdr3); pr_info(" RootErrSts: %08x %08x %08x\n",
pr_info(" tlpHdr4: %08x\n", data->tlpHdr4); data->rootErrorStatus, data->uncorrErrorStatus,
pr_info(" sourceId: %08x\n", data->sourceId); data->corrErrorStatus);
pr_info(" errorClass: %016llx\n", data->errorClass); if (data->tlpHdr1 || data->tlpHdr2 ||
pr_info(" correlator: %016llx\n", data->correlator); data->tlpHdr3 || data->tlpHdr4)
pr_info(" p7iocPlssr: %016llx\n", data->p7iocPlssr); pr_info(" RootErrLog: %08x %08x %08x %08x\n",
pr_info(" p7iocCsr: %016llx\n", data->p7iocCsr); data->tlpHdr1, data->tlpHdr2,
pr_info(" lemFir: %016llx\n", data->lemFir); data->tlpHdr3, data->tlpHdr4);
pr_info(" lemErrorMask: %016llx\n", data->lemErrorMask); if (data->sourceId || data->errorClass ||
pr_info(" lemWOF: %016llx\n", data->lemWOF); data->correlator)
pr_info(" phbErrorStatus: %016llx\n", data->phbErrorStatus); pr_info(" RootErrLog1: %08x %016llx %016llx\n",
pr_info(" phbFirstErrorStatus: %016llx\n", data->phbFirstErrorStatus); data->sourceId, data->errorClass,
pr_info(" phbErrorLog0: %016llx\n", data->phbErrorLog0); data->correlator);
pr_info(" phbErrorLog1: %016llx\n", data->phbErrorLog1); if (data->p7iocPlssr || data->p7iocCsr)
pr_info(" mmioErrorStatus: %016llx\n", data->mmioErrorStatus); pr_info(" PhbSts: %016llx %016llx\n",
pr_info(" mmioFirstErrorStatus: %016llx\n", data->mmioFirstErrorStatus); data->p7iocPlssr, data->p7iocCsr);
pr_info(" mmioErrorLog0: %016llx\n", data->mmioErrorLog0); if (data->lemFir || data->lemErrorMask ||
pr_info(" mmioErrorLog1: %016llx\n", data->mmioErrorLog1); data->lemWOF)
pr_info(" dma0ErrorStatus: %016llx\n", data->dma0ErrorStatus); pr_info(" Lem: %016llx %016llx %016llx\n",
pr_info(" dma0FirstErrorStatus: %016llx\n", data->dma0FirstErrorStatus); data->lemFir, data->lemErrorMask,
pr_info(" dma0ErrorLog0: %016llx\n", data->dma0ErrorLog0); data->lemWOF);
pr_info(" dma0ErrorLog1: %016llx\n", data->dma0ErrorLog1); if (data->phbErrorStatus || data->phbFirstErrorStatus ||
pr_info(" dma1ErrorStatus: %016llx\n", data->dma1ErrorStatus); data->phbErrorLog0 || data->phbErrorLog1)
pr_info(" dma1FirstErrorStatus: %016llx\n", data->dma1FirstErrorStatus); pr_info(" PhbErr: %016llx %016llx %016llx %016llx\n",
pr_info(" dma1ErrorLog0: %016llx\n", data->dma1ErrorLog0); data->phbErrorStatus, data->phbFirstErrorStatus,
pr_info(" dma1ErrorLog1: %016llx\n", data->dma1ErrorLog1); data->phbErrorLog0, data->phbErrorLog1);
if (data->mmioErrorStatus || data->mmioFirstErrorStatus ||
data->mmioErrorLog0 || data->mmioErrorLog1)
pr_info(" OutErr: %016llx %016llx %016llx %016llx\n",
data->mmioErrorStatus, data->mmioFirstErrorStatus,
data->mmioErrorLog0, data->mmioErrorLog1);
if (data->dma0ErrorStatus || data->dma0FirstErrorStatus ||
data->dma0ErrorLog0 || data->dma0ErrorLog1)
pr_info(" InAErr: %016llx %016llx %016llx %016llx\n",
data->dma0ErrorStatus, data->dma0FirstErrorStatus,
data->dma0ErrorLog0, data->dma0ErrorLog1);
if (data->dma1ErrorStatus || data->dma1FirstErrorStatus ||
data->dma1ErrorLog0 || data->dma1ErrorLog1)
pr_info(" InBErr: %016llx %016llx %016llx %016llx\n",
data->dma1ErrorStatus, data->dma1FirstErrorStatus,
data->dma1ErrorLog0, data->dma1ErrorLog1);
for (i = 0; i < OPAL_P7IOC_NUM_PEST_REGS; i++) { for (i = 0; i < OPAL_P7IOC_NUM_PEST_REGS; i++) {
if ((data->pestA[i] >> 63) == 0 && if ((data->pestA[i] >> 63) == 0 &&
(data->pestB[i] >> 63) == 0) (data->pestB[i] >> 63) == 0)
continue; continue;
pr_info(" PE[%3d] PESTA: %016llx\n", i, data->pestA[i]); pr_info(" PE[%3d] A/B: %016llx %016llx\n",
pr_info(" PESTB: %016llx\n", data->pestB[i]); i, data->pestA[i], data->pestB[i]);
} }
} }
@ -197,62 +212,77 @@ static void pnv_pci_dump_phb3_diag_data(struct pci_controller *hose,
data = (struct OpalIoPhb3ErrorData*)common; data = (struct OpalIoPhb3ErrorData*)common;
pr_info("PHB3 PHB#%d Diag-data (Version: %d)\n\n", pr_info("PHB3 PHB#%d Diag-data (Version: %d)\n\n",
hose->global_number, common->version); hose->global_number, common->version);
if (data->brdgCtl)
pr_info(" brdgCtl: %08x\n", data->brdgCtl); pr_info(" brdgCtl: %08x\n",
data->brdgCtl);
pr_info(" portStatusReg: %08x\n", data->portStatusReg); if (data->portStatusReg || data->rootCmplxStatus ||
pr_info(" rootCmplxStatus: %08x\n", data->rootCmplxStatus); data->busAgentStatus)
pr_info(" busAgentStatus: %08x\n", data->busAgentStatus); pr_info(" UtlSts: %08x %08x %08x\n",
data->portStatusReg, data->rootCmplxStatus,
pr_info(" deviceStatus: %08x\n", data->deviceStatus); data->busAgentStatus);
pr_info(" slotStatus: %08x\n", data->slotStatus); if (data->deviceStatus || data->slotStatus ||
pr_info(" linkStatus: %08x\n", data->linkStatus); data->linkStatus || data->devCmdStatus ||
pr_info(" devCmdStatus: %08x\n", data->devCmdStatus); data->devSecStatus)
pr_info(" devSecStatus: %08x\n", data->devSecStatus); pr_info(" RootSts: %08x %08x %08x %08x %08x\n",
data->deviceStatus, data->slotStatus,
pr_info(" rootErrorStatus: %08x\n", data->rootErrorStatus); data->linkStatus, data->devCmdStatus,
pr_info(" uncorrErrorStatus: %08x\n", data->uncorrErrorStatus); data->devSecStatus);
pr_info(" corrErrorStatus: %08x\n", data->corrErrorStatus); if (data->rootErrorStatus || data->uncorrErrorStatus ||
pr_info(" tlpHdr1: %08x\n", data->tlpHdr1); data->corrErrorStatus)
pr_info(" tlpHdr2: %08x\n", data->tlpHdr2); pr_info(" RootErrSts: %08x %08x %08x\n",
pr_info(" tlpHdr3: %08x\n", data->tlpHdr3); data->rootErrorStatus, data->uncorrErrorStatus,
pr_info(" tlpHdr4: %08x\n", data->tlpHdr4); data->corrErrorStatus);
pr_info(" sourceId: %08x\n", data->sourceId); if (data->tlpHdr1 || data->tlpHdr2 ||
pr_info(" errorClass: %016llx\n", data->errorClass); data->tlpHdr3 || data->tlpHdr4)
pr_info(" correlator: %016llx\n", data->correlator); pr_info(" RootErrLog: %08x %08x %08x %08x\n",
data->tlpHdr1, data->tlpHdr2,
pr_info(" nFir: %016llx\n", data->nFir); data->tlpHdr3, data->tlpHdr4);
pr_info(" nFirMask: %016llx\n", data->nFirMask); if (data->sourceId || data->errorClass ||
pr_info(" nFirWOF: %016llx\n", data->nFirWOF); data->correlator)
pr_info(" PhbPlssr: %016llx\n", data->phbPlssr); pr_info(" RootErrLog1: %08x %016llx %016llx\n",
pr_info(" PhbCsr: %016llx\n", data->phbCsr); data->sourceId, data->errorClass,
pr_info(" lemFir: %016llx\n", data->lemFir); data->correlator);
pr_info(" lemErrorMask: %016llx\n", data->lemErrorMask); if (data->nFir || data->nFirMask ||
pr_info(" lemWOF: %016llx\n", data->lemWOF); data->nFirWOF)
pr_info(" phbErrorStatus: %016llx\n", data->phbErrorStatus); pr_info(" nFir: %016llx %016llx %016llx\n",
pr_info(" phbFirstErrorStatus: %016llx\n", data->phbFirstErrorStatus); data->nFir, data->nFirMask,
pr_info(" phbErrorLog0: %016llx\n", data->phbErrorLog0); data->nFirWOF);
pr_info(" phbErrorLog1: %016llx\n", data->phbErrorLog1); if (data->phbPlssr || data->phbCsr)
pr_info(" mmioErrorStatus: %016llx\n", data->mmioErrorStatus); pr_info(" PhbSts: %016llx %016llx\n",
pr_info(" mmioFirstErrorStatus: %016llx\n", data->mmioFirstErrorStatus); data->phbPlssr, data->phbCsr);
pr_info(" mmioErrorLog0: %016llx\n", data->mmioErrorLog0); if (data->lemFir || data->lemErrorMask ||
pr_info(" mmioErrorLog1: %016llx\n", data->mmioErrorLog1); data->lemWOF)
pr_info(" dma0ErrorStatus: %016llx\n", data->dma0ErrorStatus); pr_info(" Lem: %016llx %016llx %016llx\n",
pr_info(" dma0FirstErrorStatus: %016llx\n", data->dma0FirstErrorStatus); data->lemFir, data->lemErrorMask,
pr_info(" dma0ErrorLog0: %016llx\n", data->dma0ErrorLog0); data->lemWOF);
pr_info(" dma0ErrorLog1: %016llx\n", data->dma0ErrorLog1); if (data->phbErrorStatus || data->phbFirstErrorStatus ||
pr_info(" dma1ErrorStatus: %016llx\n", data->dma1ErrorStatus); data->phbErrorLog0 || data->phbErrorLog1)
pr_info(" dma1FirstErrorStatus: %016llx\n", data->dma1FirstErrorStatus); pr_info(" PhbErr: %016llx %016llx %016llx %016llx\n",
pr_info(" dma1ErrorLog0: %016llx\n", data->dma1ErrorLog0); data->phbErrorStatus, data->phbFirstErrorStatus,
pr_info(" dma1ErrorLog1: %016llx\n", data->dma1ErrorLog1); data->phbErrorLog0, data->phbErrorLog1);
if (data->mmioErrorStatus || data->mmioFirstErrorStatus ||
data->mmioErrorLog0 || data->mmioErrorLog1)
pr_info(" OutErr: %016llx %016llx %016llx %016llx\n",
data->mmioErrorStatus, data->mmioFirstErrorStatus,
data->mmioErrorLog0, data->mmioErrorLog1);
if (data->dma0ErrorStatus || data->dma0FirstErrorStatus ||
data->dma0ErrorLog0 || data->dma0ErrorLog1)
pr_info(" InAErr: %016llx %016llx %016llx %016llx\n",
data->dma0ErrorStatus, data->dma0FirstErrorStatus,
data->dma0ErrorLog0, data->dma0ErrorLog1);
if (data->dma1ErrorStatus || data->dma1FirstErrorStatus ||
data->dma1ErrorLog0 || data->dma1ErrorLog1)
pr_info(" InBErr: %016llx %016llx %016llx %016llx\n",
data->dma1ErrorStatus, data->dma1FirstErrorStatus,
data->dma1ErrorLog0, data->dma1ErrorLog1);
for (i = 0; i < OPAL_PHB3_NUM_PEST_REGS; i++) { for (i = 0; i < OPAL_PHB3_NUM_PEST_REGS; i++) {
if ((data->pestA[i] >> 63) == 0 && if ((data->pestA[i] >> 63) == 0 &&
(data->pestB[i] >> 63) == 0) (data->pestB[i] >> 63) == 0)
continue; continue;
pr_info(" PE[%3d] PESTA: %016llx\n", i, data->pestA[i]); pr_info(" PE[%3d] A/B: %016llx %016llx\n",
pr_info(" PESTB: %016llx\n", data->pestB[i]); i, data->pestA[i], data->pestB[i]);
} }
} }

View File

@ -35,12 +35,7 @@
#include "offline_states.h" #include "offline_states.h"
/* This version can't take the spinlock, because it never returns */ /* This version can't take the spinlock, because it never returns */
static struct rtas_args rtas_stop_self_args = { static int rtas_stop_self_token = RTAS_UNKNOWN_SERVICE;
.token = RTAS_UNKNOWN_SERVICE,
.nargs = 0,
.nret = 1,
.rets = &rtas_stop_self_args.args[0],
};
static DEFINE_PER_CPU(enum cpu_state_vals, preferred_offline_state) = static DEFINE_PER_CPU(enum cpu_state_vals, preferred_offline_state) =
CPU_STATE_OFFLINE; CPU_STATE_OFFLINE;
@ -93,15 +88,20 @@ void set_default_offline_state(int cpu)
static void rtas_stop_self(void) static void rtas_stop_self(void)
{ {
struct rtas_args *args = &rtas_stop_self_args; struct rtas_args args = {
.token = cpu_to_be32(rtas_stop_self_token),
.nargs = 0,
.nret = 1,
.rets = &args.args[0],
};
local_irq_disable(); local_irq_disable();
BUG_ON(args->token == RTAS_UNKNOWN_SERVICE); BUG_ON(rtas_stop_self_token == RTAS_UNKNOWN_SERVICE);
printk("cpu %u (hwid %u) Ready to die...\n", printk("cpu %u (hwid %u) Ready to die...\n",
smp_processor_id(), hard_smp_processor_id()); smp_processor_id(), hard_smp_processor_id());
enter_rtas(__pa(args)); enter_rtas(__pa(&args));
panic("Alas, I survived.\n"); panic("Alas, I survived.\n");
} }
@ -392,10 +392,10 @@ static int __init pseries_cpu_hotplug_init(void)
} }
} }
rtas_stop_self_args.token = rtas_token("stop-self"); rtas_stop_self_token = rtas_token("stop-self");
qcss_tok = rtas_token("query-cpu-stopped-state"); qcss_tok = rtas_token("query-cpu-stopped-state");
if (rtas_stop_self_args.token == RTAS_UNKNOWN_SERVICE || if (rtas_stop_self_token == RTAS_UNKNOWN_SERVICE ||
qcss_tok == RTAS_UNKNOWN_SERVICE) { qcss_tok == RTAS_UNKNOWN_SERVICE) {
printk(KERN_INFO "CPU Hotplug not supported by firmware " printk(KERN_INFO "CPU Hotplug not supported by firmware "
"- disabling.\n"); "- disabling.\n");

View File

@ -18,7 +18,7 @@
#define SH_CACHE_ASSOC 8 #define SH_CACHE_ASSOC 8
#if defined(CONFIG_CPU_SUBTYPE_SH7619) #if defined(CONFIG_CPU_SUBTYPE_SH7619)
#define CCR 0xffffffec #define SH_CCR 0xffffffec
#define CCR_CACHE_CE 0x01 /* Cache enable */ #define CCR_CACHE_CE 0x01 /* Cache enable */
#define CCR_CACHE_WT 0x02 /* CCR[bit1=1,bit2=1] */ #define CCR_CACHE_WT 0x02 /* CCR[bit1=1,bit2=1] */

View File

@ -17,8 +17,8 @@
#define SH_CACHE_COMBINED 4 #define SH_CACHE_COMBINED 4
#define SH_CACHE_ASSOC 8 #define SH_CACHE_ASSOC 8
#define CCR 0xfffc1000 /* CCR1 */ #define SH_CCR 0xfffc1000 /* CCR1 */
#define CCR2 0xfffc1004 #define SH_CCR2 0xfffc1004
/* /*
* Most of the SH-2A CCR1 definitions resemble the SH-4 ones. All others not * Most of the SH-2A CCR1 definitions resemble the SH-4 ones. All others not

View File

@ -17,7 +17,7 @@
#define SH_CACHE_COMBINED 4 #define SH_CACHE_COMBINED 4
#define SH_CACHE_ASSOC 8 #define SH_CACHE_ASSOC 8
#define CCR 0xffffffec /* Address of Cache Control Register */ #define SH_CCR 0xffffffec /* Address of Cache Control Register */
#define CCR_CACHE_CE 0x01 /* Cache Enable */ #define CCR_CACHE_CE 0x01 /* Cache Enable */
#define CCR_CACHE_WT 0x02 /* Write-Through (for P0,U0,P3) (else writeback) */ #define CCR_CACHE_WT 0x02 /* Write-Through (for P0,U0,P3) (else writeback) */

View File

@ -17,7 +17,7 @@
#define SH_CACHE_COMBINED 4 #define SH_CACHE_COMBINED 4
#define SH_CACHE_ASSOC 8 #define SH_CACHE_ASSOC 8
#define CCR 0xff00001c /* Address of Cache Control Register */ #define SH_CCR 0xff00001c /* Address of Cache Control Register */
#define CCR_CACHE_OCE 0x0001 /* Operand Cache Enable */ #define CCR_CACHE_OCE 0x0001 /* Operand Cache Enable */
#define CCR_CACHE_WT 0x0002 /* Write-Through (for P0,U0,P3) (else writeback)*/ #define CCR_CACHE_WT 0x0002 /* Write-Through (for P0,U0,P3) (else writeback)*/
#define CCR_CACHE_CB 0x0004 /* Copy-Back (for P1) (else writethrough) */ #define CCR_CACHE_CB 0x0004 /* Copy-Back (for P1) (else writethrough) */

View File

@ -112,7 +112,7 @@ static void cache_init(void)
unsigned long ccr, flags; unsigned long ccr, flags;
jump_to_uncached(); jump_to_uncached();
ccr = __raw_readl(CCR); ccr = __raw_readl(SH_CCR);
/* /*
* At this point we don't know whether the cache is enabled or not - a * At this point we don't know whether the cache is enabled or not - a
@ -189,7 +189,7 @@ static void cache_init(void)
l2_cache_init(); l2_cache_init();
__raw_writel(flags, CCR); __raw_writel(flags, SH_CCR);
back_to_cached(); back_to_cached();
} }
#else #else

View File

@ -36,7 +36,7 @@ static int cache_seq_show(struct seq_file *file, void *iter)
*/ */
jump_to_uncached(); jump_to_uncached();
ccr = __raw_readl(CCR); ccr = __raw_readl(SH_CCR);
if ((ccr & CCR_CACHE_ENABLE) == 0) { if ((ccr & CCR_CACHE_ENABLE) == 0) {
back_to_cached(); back_to_cached();

View File

@ -63,9 +63,9 @@ static void sh2__flush_invalidate_region(void *start, int size)
local_irq_save(flags); local_irq_save(flags);
jump_to_uncached(); jump_to_uncached();
ccr = __raw_readl(CCR); ccr = __raw_readl(SH_CCR);
ccr |= CCR_CACHE_INVALIDATE; ccr |= CCR_CACHE_INVALIDATE;
__raw_writel(ccr, CCR); __raw_writel(ccr, SH_CCR);
back_to_cached(); back_to_cached();
local_irq_restore(flags); local_irq_restore(flags);

View File

@ -134,7 +134,8 @@ static void sh2a__flush_invalidate_region(void *start, int size)
/* If there are too many pages then just blow the cache */ /* If there are too many pages then just blow the cache */
if (((end - begin) >> PAGE_SHIFT) >= MAX_OCACHE_PAGES) { if (((end - begin) >> PAGE_SHIFT) >= MAX_OCACHE_PAGES) {
__raw_writel(__raw_readl(CCR) | CCR_OCACHE_INVALIDATE, CCR); __raw_writel(__raw_readl(SH_CCR) | CCR_OCACHE_INVALIDATE,
SH_CCR);
} else { } else {
for (v = begin; v < end; v += L1_CACHE_BYTES) for (v = begin; v < end; v += L1_CACHE_BYTES)
sh2a_invalidate_line(CACHE_OC_ADDRESS_ARRAY, v); sh2a_invalidate_line(CACHE_OC_ADDRESS_ARRAY, v);
@ -167,7 +168,8 @@ static void sh2a_flush_icache_range(void *args)
/* I-Cache invalidate */ /* I-Cache invalidate */
/* If there are too many pages then just blow the cache */ /* If there are too many pages then just blow the cache */
if (((end - start) >> PAGE_SHIFT) >= MAX_ICACHE_PAGES) { if (((end - start) >> PAGE_SHIFT) >= MAX_ICACHE_PAGES) {
__raw_writel(__raw_readl(CCR) | CCR_ICACHE_INVALIDATE, CCR); __raw_writel(__raw_readl(SH_CCR) | CCR_ICACHE_INVALIDATE,
SH_CCR);
} else { } else {
for (v = start; v < end; v += L1_CACHE_BYTES) for (v = start; v < end; v += L1_CACHE_BYTES)
sh2a_invalidate_line(CACHE_IC_ADDRESS_ARRAY, v); sh2a_invalidate_line(CACHE_IC_ADDRESS_ARRAY, v);

View File

@ -133,9 +133,9 @@ static void flush_icache_all(void)
jump_to_uncached(); jump_to_uncached();
/* Flush I-cache */ /* Flush I-cache */
ccr = __raw_readl(CCR); ccr = __raw_readl(SH_CCR);
ccr |= CCR_CACHE_ICI; ccr |= CCR_CACHE_ICI;
__raw_writel(ccr, CCR); __raw_writel(ccr, SH_CCR);
/* /*
* back_to_cached() will take care of the barrier for us, don't add * back_to_cached() will take care of the barrier for us, don't add

View File

@ -19,7 +19,7 @@ void __init shx3_cache_init(void)
{ {
unsigned int ccr; unsigned int ccr;
ccr = __raw_readl(CCR); ccr = __raw_readl(SH_CCR);
/* /*
* If we've got cache aliases, resolve them in hardware. * If we've got cache aliases, resolve them in hardware.
@ -40,5 +40,5 @@ void __init shx3_cache_init(void)
ccr |= CCR_CACHE_IBE; ccr |= CCR_CACHE_IBE;
#endif #endif
writel_uncached(ccr, CCR); writel_uncached(ccr, SH_CCR);
} }

View File

@ -285,8 +285,8 @@ void __init cpu_cache_init(void)
{ {
unsigned int cache_disabled = 0; unsigned int cache_disabled = 0;
#ifdef CCR #ifdef SH_CCR
cache_disabled = !(__raw_readl(CCR) & CCR_CACHE_ENABLE); cache_disabled = !(__raw_readl(SH_CCR) & CCR_CACHE_ENABLE);
#endif #endif
compute_alias(&boot_cpu_data.icache); compute_alias(&boot_cpu_data.icache);

View File

@ -111,7 +111,7 @@ struct mem_vector {
}; };
#define MEM_AVOID_MAX 5 #define MEM_AVOID_MAX 5
struct mem_vector mem_avoid[MEM_AVOID_MAX]; static struct mem_vector mem_avoid[MEM_AVOID_MAX];
static bool mem_contains(struct mem_vector *region, struct mem_vector *item) static bool mem_contains(struct mem_vector *region, struct mem_vector *item)
{ {
@ -180,7 +180,7 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size,
} }
/* Does this memory vector overlap a known avoided area? */ /* Does this memory vector overlap a known avoided area? */
bool mem_avoid_overlap(struct mem_vector *img) static bool mem_avoid_overlap(struct mem_vector *img)
{ {
int i; int i;
@ -192,8 +192,9 @@ bool mem_avoid_overlap(struct mem_vector *img)
return false; return false;
} }
unsigned long slots[CONFIG_RANDOMIZE_BASE_MAX_OFFSET / CONFIG_PHYSICAL_ALIGN]; static unsigned long slots[CONFIG_RANDOMIZE_BASE_MAX_OFFSET /
unsigned long slot_max = 0; CONFIG_PHYSICAL_ALIGN];
static unsigned long slot_max;
static void slots_append(unsigned long addr) static void slots_append(unsigned long addr)
{ {

View File

@ -134,6 +134,7 @@ extern void efi_setup_page_tables(void);
extern void __init old_map_region(efi_memory_desc_t *md); extern void __init old_map_region(efi_memory_desc_t *md);
extern void __init runtime_code_page_mkexec(void); extern void __init runtime_code_page_mkexec(void);
extern void __init efi_runtime_mkexec(void); extern void __init efi_runtime_mkexec(void);
extern void __init efi_apply_memmap_quirks(void);
struct efi_setup_data { struct efi_setup_data {
u64 fw_vendor; u64 fw_vendor;

View File

@ -544,6 +544,10 @@ ENDPROC(early_idt_handlers)
/* This is global to keep gas from relaxing the jumps */ /* This is global to keep gas from relaxing the jumps */
ENTRY(early_idt_handler) ENTRY(early_idt_handler)
cld cld
cmpl $2,(%esp) # X86_TRAP_NMI
je is_nmi # Ignore NMI
cmpl $2,%ss:early_recursion_flag cmpl $2,%ss:early_recursion_flag
je hlt_loop je hlt_loop
incl %ss:early_recursion_flag incl %ss:early_recursion_flag
@ -594,8 +598,9 @@ ex_entry:
pop %edx pop %edx
pop %ecx pop %ecx
pop %eax pop %eax
addl $8,%esp /* drop vector number and error code */
decl %ss:early_recursion_flag decl %ss:early_recursion_flag
is_nmi:
addl $8,%esp /* drop vector number and error code */
iret iret
ENDPROC(early_idt_handler) ENDPROC(early_idt_handler)

View File

@ -343,6 +343,9 @@ early_idt_handlers:
ENTRY(early_idt_handler) ENTRY(early_idt_handler)
cld cld
cmpl $2,(%rsp) # X86_TRAP_NMI
je is_nmi # Ignore NMI
cmpl $2,early_recursion_flag(%rip) cmpl $2,early_recursion_flag(%rip)
jz 1f jz 1f
incl early_recursion_flag(%rip) incl early_recursion_flag(%rip)
@ -405,8 +408,9 @@ ENTRY(early_idt_handler)
popq %rdx popq %rdx
popq %rcx popq %rcx
popq %rax popq %rax
addq $16,%rsp # drop vector number and error code
decl early_recursion_flag(%rip) decl early_recursion_flag(%rip)
is_nmi:
addq $16,%rsp # drop vector number and error code
INTERRUPT_RETURN INTERRUPT_RETURN
ENDPROC(early_idt_handler) ENDPROC(early_idt_handler)

View File

@ -279,5 +279,7 @@ void arch_crash_save_vmcoreinfo(void)
VMCOREINFO_SYMBOL(node_data); VMCOREINFO_SYMBOL(node_data);
VMCOREINFO_LENGTH(node_data, MAX_NUMNODES); VMCOREINFO_LENGTH(node_data, MAX_NUMNODES);
#endif #endif
vmcoreinfo_append_str("KERNELOFFSET=%lx\n",
(unsigned long)&_text - __START_KERNEL);
} }

View File

@ -1239,14 +1239,8 @@ void __init setup_arch(char **cmdline_p)
register_refined_jiffies(CLOCK_TICK_RATE); register_refined_jiffies(CLOCK_TICK_RATE);
#ifdef CONFIG_EFI #ifdef CONFIG_EFI
/* Once setup is done above, unmap the EFI memory map on if (efi_enabled(EFI_BOOT))
* mismatched firmware/kernel archtectures since there is no efi_apply_memmap_quirks();
* support for runtime services.
*/
if (efi_enabled(EFI_BOOT) && !efi_is_native()) {
pr_info("efi: Setup done, disabling due to 32/64-bit mismatch\n");
efi_unmap_memmap();
}
#endif #endif
} }

View File

@ -2672,6 +2672,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t v, int write,
break; break;
} }
drop_large_spte(vcpu, iterator.sptep);
if (!is_shadow_present_pte(*iterator.sptep)) { if (!is_shadow_present_pte(*iterator.sptep)) {
u64 base_addr = iterator.addr; u64 base_addr = iterator.addr;

View File

@ -6688,7 +6688,7 @@ static bool nested_vmx_exit_handled(struct kvm_vcpu *vcpu)
else if (is_page_fault(intr_info)) else if (is_page_fault(intr_info))
return enable_ept; return enable_ept;
else if (is_no_device(intr_info) && else if (is_no_device(intr_info) &&
!(nested_read_cr0(vmcs12) & X86_CR0_TS)) !(vmcs12->guest_cr0 & X86_CR0_TS))
return 0; return 0;
return vmcs12->exception_bitmap & return vmcs12->exception_bitmap &
(1u << (intr_info & INTR_INFO_VECTOR_MASK)); (1u << (intr_info & INTR_INFO_VECTOR_MASK));

View File

@ -6186,7 +6186,7 @@ static int complete_emulated_mmio(struct kvm_vcpu *vcpu)
frag->len -= len; frag->len -= len;
} }
if (vcpu->mmio_cur_fragment == vcpu->mmio_nr_fragments) { if (vcpu->mmio_cur_fragment >= vcpu->mmio_nr_fragments) {
vcpu->mmio_needed = 0; vcpu->mmio_needed = 0;
/* FIXME: return into emulator if single-stepping. */ /* FIXME: return into emulator if single-stepping. */

View File

@ -1020,13 +1020,17 @@ static inline bool smap_violation(int error_code, struct pt_regs *regs)
* This routine handles page faults. It determines the address, * This routine handles page faults. It determines the address,
* and the problem, and then passes it off to one of the appropriate * and the problem, and then passes it off to one of the appropriate
* routines. * routines.
*
* This function must have noinline because both callers
* {,trace_}do_page_fault() have notrace on. Having this an actual function
* guarantees there's a function trace entry.
*/ */
static void __kprobes static void __kprobes noinline
__do_page_fault(struct pt_regs *regs, unsigned long error_code) __do_page_fault(struct pt_regs *regs, unsigned long error_code,
unsigned long address)
{ {
struct vm_area_struct *vma; struct vm_area_struct *vma;
struct task_struct *tsk; struct task_struct *tsk;
unsigned long address;
struct mm_struct *mm; struct mm_struct *mm;
int fault; int fault;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE; unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
@ -1034,9 +1038,6 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code)
tsk = current; tsk = current;
mm = tsk->mm; mm = tsk->mm;
/* Get the faulting address: */
address = read_cr2();
/* /*
* Detect and handle instructions that would cause a page fault for * Detect and handle instructions that would cause a page fault for
* both a tracked kernel page and a userspace page. * both a tracked kernel page and a userspace page.
@ -1248,32 +1249,50 @@ good_area:
up_read(&mm->mmap_sem); up_read(&mm->mmap_sem);
} }
dotraplinkage void __kprobes dotraplinkage void __kprobes notrace
do_page_fault(struct pt_regs *regs, unsigned long error_code) do_page_fault(struct pt_regs *regs, unsigned long error_code)
{ {
unsigned long address = read_cr2(); /* Get the faulting address */
enum ctx_state prev_state; enum ctx_state prev_state;
/*
* We must have this function tagged with __kprobes, notrace and call
* read_cr2() before calling anything else. To avoid calling any kind
* of tracing machinery before we've observed the CR2 value.
*
* exception_{enter,exit}() contain all sorts of tracepoints.
*/
prev_state = exception_enter(); prev_state = exception_enter();
__do_page_fault(regs, error_code); __do_page_fault(regs, error_code, address);
exception_exit(prev_state); exception_exit(prev_state);
} }
static void trace_page_fault_entries(struct pt_regs *regs, #ifdef CONFIG_TRACING
static void trace_page_fault_entries(unsigned long address, struct pt_regs *regs,
unsigned long error_code) unsigned long error_code)
{ {
if (user_mode(regs)) if (user_mode(regs))
trace_page_fault_user(read_cr2(), regs, error_code); trace_page_fault_user(address, regs, error_code);
else else
trace_page_fault_kernel(read_cr2(), regs, error_code); trace_page_fault_kernel(address, regs, error_code);
} }
dotraplinkage void __kprobes dotraplinkage void __kprobes notrace
trace_do_page_fault(struct pt_regs *regs, unsigned long error_code) trace_do_page_fault(struct pt_regs *regs, unsigned long error_code)
{ {
/*
* The exception_enter and tracepoint processing could
* trigger another page faults (user space callchain
* reading) and destroy the original cr2 value, so read
* the faulting address now.
*/
unsigned long address = read_cr2();
enum ctx_state prev_state; enum ctx_state prev_state;
prev_state = exception_enter(); prev_state = exception_enter();
trace_page_fault_entries(regs, error_code); trace_page_fault_entries(address, regs, error_code);
__do_page_fault(regs, error_code); __do_page_fault(regs, error_code, address);
exception_exit(prev_state); exception_exit(prev_state);
} }
#endif /* CONFIG_TRACING */

View File

@ -52,6 +52,7 @@
#include <asm/tlbflush.h> #include <asm/tlbflush.h>
#include <asm/x86_init.h> #include <asm/x86_init.h>
#include <asm/rtc.h> #include <asm/rtc.h>
#include <asm/uv/uv.h>
#define EFI_DEBUG #define EFI_DEBUG
@ -1210,3 +1211,22 @@ static int __init parse_efi_cmdline(char *str)
return 0; return 0;
} }
early_param("efi", parse_efi_cmdline); early_param("efi", parse_efi_cmdline);
void __init efi_apply_memmap_quirks(void)
{
/*
* Once setup is done earlier, unmap the EFI memory map on mismatched
* firmware/kernel architectures since there is no support for runtime
* services.
*/
if (!efi_is_native()) {
pr_info("efi: Setup done, disabling due to 32/64-bit mismatch\n");
efi_unmap_memmap();
}
/*
* UV doesn't support the new EFI pagetable mapping yet.
*/
if (is_uv_system())
set_bit(EFI_OLD_MEMMAP, &x86_efi_facility);
}

View File

@ -65,7 +65,7 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
* be resued after dying flag is set * be resued after dying flag is set
*/ */
if (q->mq_ops) { if (q->mq_ops) {
blk_mq_insert_request(q, rq, at_head, true); blk_mq_insert_request(rq, at_head, true, false);
return; return;
} }

View File

@ -137,7 +137,7 @@ static void mq_flush_run(struct work_struct *work)
rq = container_of(work, struct request, mq_flush_work); rq = container_of(work, struct request, mq_flush_work);
memset(&rq->csd, 0, sizeof(rq->csd)); memset(&rq->csd, 0, sizeof(rq->csd));
blk_mq_run_request(rq, true, false); blk_mq_insert_request(rq, false, true, false);
} }
static bool blk_flush_queue_rq(struct request *rq) static bool blk_flush_queue_rq(struct request *rq)
@ -411,7 +411,7 @@ void blk_insert_flush(struct request *rq)
if ((policy & REQ_FSEQ_DATA) && if ((policy & REQ_FSEQ_DATA) &&
!(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) { !(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
if (q->mq_ops) { if (q->mq_ops) {
blk_mq_run_request(rq, false, true); blk_mq_insert_request(rq, false, false, true);
} else } else
list_add_tail(&rq->queuelist, &q->queue_head); list_add_tail(&rq->queuelist, &q->queue_head);
return; return;

View File

@ -11,7 +11,7 @@
#include "blk-mq.h" #include "blk-mq.h"
static LIST_HEAD(blk_mq_cpu_notify_list); static LIST_HEAD(blk_mq_cpu_notify_list);
static DEFINE_SPINLOCK(blk_mq_cpu_notify_lock); static DEFINE_RAW_SPINLOCK(blk_mq_cpu_notify_lock);
static int blk_mq_main_cpu_notify(struct notifier_block *self, static int blk_mq_main_cpu_notify(struct notifier_block *self,
unsigned long action, void *hcpu) unsigned long action, void *hcpu)
@ -19,12 +19,12 @@ static int blk_mq_main_cpu_notify(struct notifier_block *self,
unsigned int cpu = (unsigned long) hcpu; unsigned int cpu = (unsigned long) hcpu;
struct blk_mq_cpu_notifier *notify; struct blk_mq_cpu_notifier *notify;
spin_lock(&blk_mq_cpu_notify_lock); raw_spin_lock(&blk_mq_cpu_notify_lock);
list_for_each_entry(notify, &blk_mq_cpu_notify_list, list) list_for_each_entry(notify, &blk_mq_cpu_notify_list, list)
notify->notify(notify->data, action, cpu); notify->notify(notify->data, action, cpu);
spin_unlock(&blk_mq_cpu_notify_lock); raw_spin_unlock(&blk_mq_cpu_notify_lock);
return NOTIFY_OK; return NOTIFY_OK;
} }
@ -32,16 +32,16 @@ void blk_mq_register_cpu_notifier(struct blk_mq_cpu_notifier *notifier)
{ {
BUG_ON(!notifier->notify); BUG_ON(!notifier->notify);
spin_lock(&blk_mq_cpu_notify_lock); raw_spin_lock(&blk_mq_cpu_notify_lock);
list_add_tail(&notifier->list, &blk_mq_cpu_notify_list); list_add_tail(&notifier->list, &blk_mq_cpu_notify_list);
spin_unlock(&blk_mq_cpu_notify_lock); raw_spin_unlock(&blk_mq_cpu_notify_lock);
} }
void blk_mq_unregister_cpu_notifier(struct blk_mq_cpu_notifier *notifier) void blk_mq_unregister_cpu_notifier(struct blk_mq_cpu_notifier *notifier)
{ {
spin_lock(&blk_mq_cpu_notify_lock); raw_spin_lock(&blk_mq_cpu_notify_lock);
list_del(&notifier->list); list_del(&notifier->list);
spin_unlock(&blk_mq_cpu_notify_lock); raw_spin_unlock(&blk_mq_cpu_notify_lock);
} }
void blk_mq_init_cpu_notifier(struct blk_mq_cpu_notifier *notifier, void blk_mq_init_cpu_notifier(struct blk_mq_cpu_notifier *notifier,

View File

@ -73,8 +73,8 @@ static void blk_mq_hctx_mark_pending(struct blk_mq_hw_ctx *hctx,
set_bit(ctx->index_hw, hctx->ctx_map); set_bit(ctx->index_hw, hctx->ctx_map);
} }
static struct request *blk_mq_alloc_rq(struct blk_mq_hw_ctx *hctx, gfp_t gfp, static struct request *__blk_mq_alloc_request(struct blk_mq_hw_ctx *hctx,
bool reserved) gfp_t gfp, bool reserved)
{ {
struct request *rq; struct request *rq;
unsigned int tag; unsigned int tag;
@ -193,12 +193,6 @@ static void blk_mq_rq_ctx_init(struct request_queue *q, struct blk_mq_ctx *ctx,
ctx->rq_dispatched[rw_is_sync(rw_flags)]++; ctx->rq_dispatched[rw_is_sync(rw_flags)]++;
} }
static struct request *__blk_mq_alloc_request(struct blk_mq_hw_ctx *hctx,
gfp_t gfp, bool reserved)
{
return blk_mq_alloc_rq(hctx, gfp, reserved);
}
static struct request *blk_mq_alloc_request_pinned(struct request_queue *q, static struct request *blk_mq_alloc_request_pinned(struct request_queue *q,
int rw, gfp_t gfp, int rw, gfp_t gfp,
bool reserved) bool reserved)
@ -289,38 +283,10 @@ void blk_mq_free_request(struct request *rq)
__blk_mq_free_request(hctx, ctx, rq); __blk_mq_free_request(hctx, ctx, rq);
} }
static void blk_mq_bio_endio(struct request *rq, struct bio *bio, int error) bool blk_mq_end_io_partial(struct request *rq, int error, unsigned int nr_bytes)
{ {
if (error) if (blk_update_request(rq, error, blk_rq_bytes(rq)))
clear_bit(BIO_UPTODATE, &bio->bi_flags); return true;
else if (!test_bit(BIO_UPTODATE, &bio->bi_flags))
error = -EIO;
if (unlikely(rq->cmd_flags & REQ_QUIET))
set_bit(BIO_QUIET, &bio->bi_flags);
/* don't actually finish bio if it's part of flush sequence */
if (!(rq->cmd_flags & REQ_FLUSH_SEQ))
bio_endio(bio, error);
}
void blk_mq_end_io(struct request *rq, int error)
{
struct bio *bio = rq->bio;
unsigned int bytes = 0;
trace_block_rq_complete(rq->q, rq);
while (bio) {
struct bio *next = bio->bi_next;
bio->bi_next = NULL;
bytes += bio->bi_iter.bi_size;
blk_mq_bio_endio(rq, bio, error);
bio = next;
}
blk_account_io_completion(rq, bytes);
blk_account_io_done(rq); blk_account_io_done(rq);
@ -328,8 +294,9 @@ void blk_mq_end_io(struct request *rq, int error)
rq->end_io(rq, error); rq->end_io(rq, error);
else else
blk_mq_free_request(rq); blk_mq_free_request(rq);
return false;
} }
EXPORT_SYMBOL(blk_mq_end_io); EXPORT_SYMBOL(blk_mq_end_io_partial);
static void __blk_mq_complete_request_remote(void *data) static void __blk_mq_complete_request_remote(void *data)
{ {
@ -730,60 +697,27 @@ static void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx,
blk_mq_add_timer(rq); blk_mq_add_timer(rq);
} }
void blk_mq_insert_request(struct request_queue *q, struct request *rq, void blk_mq_insert_request(struct request *rq, bool at_head, bool run_queue,
bool at_head, bool run_queue) bool async)
{
struct blk_mq_hw_ctx *hctx;
struct blk_mq_ctx *ctx, *current_ctx;
ctx = rq->mq_ctx;
hctx = q->mq_ops->map_queue(q, ctx->cpu);
if (rq->cmd_flags & (REQ_FLUSH | REQ_FUA)) {
blk_insert_flush(rq);
} else {
current_ctx = blk_mq_get_ctx(q);
if (!cpu_online(ctx->cpu)) {
ctx = current_ctx;
hctx = q->mq_ops->map_queue(q, ctx->cpu);
rq->mq_ctx = ctx;
}
spin_lock(&ctx->lock);
__blk_mq_insert_request(hctx, rq, at_head);
spin_unlock(&ctx->lock);
blk_mq_put_ctx(current_ctx);
}
if (run_queue)
__blk_mq_run_hw_queue(hctx);
}
EXPORT_SYMBOL(blk_mq_insert_request);
/*
* This is a special version of blk_mq_insert_request to bypass FLUSH request
* check. Should only be used internally.
*/
void blk_mq_run_request(struct request *rq, bool run_queue, bool async)
{ {
struct request_queue *q = rq->q; struct request_queue *q = rq->q;
struct blk_mq_hw_ctx *hctx; struct blk_mq_hw_ctx *hctx;
struct blk_mq_ctx *ctx, *current_ctx; struct blk_mq_ctx *ctx = rq->mq_ctx, *current_ctx;
current_ctx = blk_mq_get_ctx(q); current_ctx = blk_mq_get_ctx(q);
if (!cpu_online(ctx->cpu))
rq->mq_ctx = ctx = current_ctx;
ctx = rq->mq_ctx;
if (!cpu_online(ctx->cpu)) {
ctx = current_ctx;
rq->mq_ctx = ctx;
}
hctx = q->mq_ops->map_queue(q, ctx->cpu); hctx = q->mq_ops->map_queue(q, ctx->cpu);
/* ctx->cpu might be offline */ if (rq->cmd_flags & (REQ_FLUSH | REQ_FUA) &&
spin_lock(&ctx->lock); !(rq->cmd_flags & (REQ_FLUSH_SEQ))) {
__blk_mq_insert_request(hctx, rq, false); blk_insert_flush(rq);
spin_unlock(&ctx->lock); } else {
spin_lock(&ctx->lock);
__blk_mq_insert_request(hctx, rq, at_head);
spin_unlock(&ctx->lock);
}
blk_mq_put_ctx(current_ctx); blk_mq_put_ctx(current_ctx);
@ -926,6 +860,8 @@ static void blk_mq_make_request(struct request_queue *q, struct bio *bio)
ctx = blk_mq_get_ctx(q); ctx = blk_mq_get_ctx(q);
hctx = q->mq_ops->map_queue(q, ctx->cpu); hctx = q->mq_ops->map_queue(q, ctx->cpu);
if (is_sync)
rw |= REQ_SYNC;
trace_block_getrq(q, bio, rw); trace_block_getrq(q, bio, rw);
rq = __blk_mq_alloc_request(hctx, GFP_ATOMIC, false); rq = __blk_mq_alloc_request(hctx, GFP_ATOMIC, false);
if (likely(rq)) if (likely(rq))

View File

@ -23,7 +23,6 @@ struct blk_mq_ctx {
}; };
void __blk_mq_complete_request(struct request *rq); void __blk_mq_complete_request(struct request *rq);
void blk_mq_run_request(struct request *rq, bool run_queue, bool async);
void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async); void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async);
void blk_mq_init_flush(struct request_queue *q); void blk_mq_init_flush(struct request_queue *q);
void blk_mq_drain_queue(struct request_queue *q); void blk_mq_drain_queue(struct request_queue *q);

View File

@ -67,6 +67,8 @@ enum ec_command {
#define ACPI_EC_DELAY 500 /* Wait 500ms max. during EC ops */ #define ACPI_EC_DELAY 500 /* Wait 500ms max. during EC ops */
#define ACPI_EC_UDELAY_GLK 1000 /* Wait 1ms max. to get global lock */ #define ACPI_EC_UDELAY_GLK 1000 /* Wait 1ms max. to get global lock */
#define ACPI_EC_MSI_UDELAY 550 /* Wait 550us for MSI EC */ #define ACPI_EC_MSI_UDELAY 550 /* Wait 550us for MSI EC */
#define ACPI_EC_CLEAR_MAX 100 /* Maximum number of events to query
* when trying to clear the EC */
enum { enum {
EC_FLAGS_QUERY_PENDING, /* Query is pending */ EC_FLAGS_QUERY_PENDING, /* Query is pending */
@ -116,6 +118,7 @@ EXPORT_SYMBOL(first_ec);
static int EC_FLAGS_MSI; /* Out-of-spec MSI controller */ static int EC_FLAGS_MSI; /* Out-of-spec MSI controller */
static int EC_FLAGS_VALIDATE_ECDT; /* ASUStec ECDTs need to be validated */ static int EC_FLAGS_VALIDATE_ECDT; /* ASUStec ECDTs need to be validated */
static int EC_FLAGS_SKIP_DSDT_SCAN; /* Not all BIOS survive early DSDT scan */ static int EC_FLAGS_SKIP_DSDT_SCAN; /* Not all BIOS survive early DSDT scan */
static int EC_FLAGS_CLEAR_ON_RESUME; /* Needs acpi_ec_clear() on boot/resume */
/* -------------------------------------------------------------------------- /* --------------------------------------------------------------------------
Transaction Management Transaction Management
@ -440,6 +443,29 @@ acpi_handle ec_get_handle(void)
EXPORT_SYMBOL(ec_get_handle); EXPORT_SYMBOL(ec_get_handle);
static int acpi_ec_query_unlocked(struct acpi_ec *ec, u8 *data);
/*
* Clears stale _Q events that might have accumulated in the EC.
* Run with locked ec mutex.
*/
static void acpi_ec_clear(struct acpi_ec *ec)
{
int i, status;
u8 value = 0;
for (i = 0; i < ACPI_EC_CLEAR_MAX; i++) {
status = acpi_ec_query_unlocked(ec, &value);
if (status || !value)
break;
}
if (unlikely(i == ACPI_EC_CLEAR_MAX))
pr_warn("Warning: Maximum of %d stale EC events cleared\n", i);
else
pr_info("%d stale EC events cleared\n", i);
}
void acpi_ec_block_transactions(void) void acpi_ec_block_transactions(void)
{ {
struct acpi_ec *ec = first_ec; struct acpi_ec *ec = first_ec;
@ -463,6 +489,10 @@ void acpi_ec_unblock_transactions(void)
mutex_lock(&ec->mutex); mutex_lock(&ec->mutex);
/* Allow transactions to be carried out again */ /* Allow transactions to be carried out again */
clear_bit(EC_FLAGS_BLOCKED, &ec->flags); clear_bit(EC_FLAGS_BLOCKED, &ec->flags);
if (EC_FLAGS_CLEAR_ON_RESUME)
acpi_ec_clear(ec);
mutex_unlock(&ec->mutex); mutex_unlock(&ec->mutex);
} }
@ -821,6 +851,13 @@ static int acpi_ec_add(struct acpi_device *device)
/* EC is fully operational, allow queries */ /* EC is fully operational, allow queries */
clear_bit(EC_FLAGS_QUERY_PENDING, &ec->flags); clear_bit(EC_FLAGS_QUERY_PENDING, &ec->flags);
/* Clear stale _Q events if hardware might require that */
if (EC_FLAGS_CLEAR_ON_RESUME) {
mutex_lock(&ec->mutex);
acpi_ec_clear(ec);
mutex_unlock(&ec->mutex);
}
return ret; return ret;
} }
@ -922,6 +959,30 @@ static int ec_enlarge_storm_threshold(const struct dmi_system_id *id)
return 0; return 0;
} }
/*
* On some hardware it is necessary to clear events accumulated by the EC during
* sleep. These ECs stop reporting GPEs until they are manually polled, if too
* many events are accumulated. (e.g. Samsung Series 5/9 notebooks)
*
* https://bugzilla.kernel.org/show_bug.cgi?id=44161
*
* Ideally, the EC should also be instructed NOT to accumulate events during
* sleep (which Windows seems to do somehow), but the interface to control this
* behaviour is not known at this time.
*
* Models known to be affected are Samsung 530Uxx/535Uxx/540Uxx/550Pxx/900Xxx,
* however it is very likely that other Samsung models are affected.
*
* On systems which don't accumulate _Q events during sleep, this extra check
* should be harmless.
*/
static int ec_clear_on_resume(const struct dmi_system_id *id)
{
pr_debug("Detected system needing EC poll on resume.\n");
EC_FLAGS_CLEAR_ON_RESUME = 1;
return 0;
}
static struct dmi_system_id ec_dmi_table[] __initdata = { static struct dmi_system_id ec_dmi_table[] __initdata = {
{ {
ec_skip_dsdt_scan, "Compal JFL92", { ec_skip_dsdt_scan, "Compal JFL92", {
@ -965,6 +1026,9 @@ static struct dmi_system_id ec_dmi_table[] __initdata = {
ec_validate_ecdt, "ASUS hardware", { ec_validate_ecdt, "ASUS hardware", {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTek Computer Inc."), DMI_MATCH(DMI_SYS_VENDOR, "ASUSTek Computer Inc."),
DMI_MATCH(DMI_PRODUCT_NAME, "L4R"),}, NULL}, DMI_MATCH(DMI_PRODUCT_NAME, "L4R"),}, NULL},
{
ec_clear_on_resume, "Samsung hardware", {
DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD.")}, NULL},
{}, {},
}; };

View File

@ -56,6 +56,12 @@ struct throttling_tstate {
int target_state; /* target T-state */ int target_state; /* target T-state */
}; };
struct acpi_processor_throttling_arg {
struct acpi_processor *pr;
int target_state;
bool force;
};
#define THROTTLING_PRECHANGE (1) #define THROTTLING_PRECHANGE (1)
#define THROTTLING_POSTCHANGE (2) #define THROTTLING_POSTCHANGE (2)
@ -1060,16 +1066,24 @@ static int acpi_processor_set_throttling_ptc(struct acpi_processor *pr,
return 0; return 0;
} }
static long acpi_processor_throttling_fn(void *data)
{
struct acpi_processor_throttling_arg *arg = data;
struct acpi_processor *pr = arg->pr;
return pr->throttling.acpi_processor_set_throttling(pr,
arg->target_state, arg->force);
}
int acpi_processor_set_throttling(struct acpi_processor *pr, int acpi_processor_set_throttling(struct acpi_processor *pr,
int state, bool force) int state, bool force)
{ {
cpumask_var_t saved_mask;
int ret = 0; int ret = 0;
unsigned int i; unsigned int i;
struct acpi_processor *match_pr; struct acpi_processor *match_pr;
struct acpi_processor_throttling *p_throttling; struct acpi_processor_throttling *p_throttling;
struct acpi_processor_throttling_arg arg;
struct throttling_tstate t_state; struct throttling_tstate t_state;
cpumask_var_t online_throttling_cpus;
if (!pr) if (!pr)
return -EINVAL; return -EINVAL;
@ -1080,14 +1094,6 @@ int acpi_processor_set_throttling(struct acpi_processor *pr,
if ((state < 0) || (state > (pr->throttling.state_count - 1))) if ((state < 0) || (state > (pr->throttling.state_count - 1)))
return -EINVAL; return -EINVAL;
if (!alloc_cpumask_var(&saved_mask, GFP_KERNEL))
return -ENOMEM;
if (!alloc_cpumask_var(&online_throttling_cpus, GFP_KERNEL)) {
free_cpumask_var(saved_mask);
return -ENOMEM;
}
if (cpu_is_offline(pr->id)) { if (cpu_is_offline(pr->id)) {
/* /*
* the cpu pointed by pr->id is offline. Unnecessary to change * the cpu pointed by pr->id is offline. Unnecessary to change
@ -1096,17 +1102,15 @@ int acpi_processor_set_throttling(struct acpi_processor *pr,
return -ENODEV; return -ENODEV;
} }
cpumask_copy(saved_mask, &current->cpus_allowed);
t_state.target_state = state; t_state.target_state = state;
p_throttling = &(pr->throttling); p_throttling = &(pr->throttling);
cpumask_and(online_throttling_cpus, cpu_online_mask,
p_throttling->shared_cpu_map);
/* /*
* The throttling notifier will be called for every * The throttling notifier will be called for every
* affected cpu in order to get one proper T-state. * affected cpu in order to get one proper T-state.
* The notifier event is THROTTLING_PRECHANGE. * The notifier event is THROTTLING_PRECHANGE.
*/ */
for_each_cpu(i, online_throttling_cpus) { for_each_cpu_and(i, cpu_online_mask, p_throttling->shared_cpu_map) {
t_state.cpu = i; t_state.cpu = i;
acpi_processor_throttling_notifier(THROTTLING_PRECHANGE, acpi_processor_throttling_notifier(THROTTLING_PRECHANGE,
&t_state); &t_state);
@ -1118,21 +1122,18 @@ int acpi_processor_set_throttling(struct acpi_processor *pr,
* it can be called only for the cpu pointed by pr. * it can be called only for the cpu pointed by pr.
*/ */
if (p_throttling->shared_type == DOMAIN_COORD_TYPE_SW_ANY) { if (p_throttling->shared_type == DOMAIN_COORD_TYPE_SW_ANY) {
/* FIXME: use work_on_cpu() */ arg.pr = pr;
if (set_cpus_allowed_ptr(current, cpumask_of(pr->id))) { arg.target_state = state;
/* Can't migrate to the pr->id CPU. Exit */ arg.force = force;
ret = -ENODEV; ret = work_on_cpu(pr->id, acpi_processor_throttling_fn, &arg);
goto exit;
}
ret = p_throttling->acpi_processor_set_throttling(pr,
t_state.target_state, force);
} else { } else {
/* /*
* When the T-state coordination is SW_ALL or HW_ALL, * When the T-state coordination is SW_ALL or HW_ALL,
* it is necessary to set T-state for every affected * it is necessary to set T-state for every affected
* cpus. * cpus.
*/ */
for_each_cpu(i, online_throttling_cpus) { for_each_cpu_and(i, cpu_online_mask,
p_throttling->shared_cpu_map) {
match_pr = per_cpu(processors, i); match_pr = per_cpu(processors, i);
/* /*
* If the pointer is invalid, we will report the * If the pointer is invalid, we will report the
@ -1153,13 +1154,12 @@ int acpi_processor_set_throttling(struct acpi_processor *pr,
"on CPU %d\n", i)); "on CPU %d\n", i));
continue; continue;
} }
t_state.cpu = i;
/* FIXME: use work_on_cpu() */ arg.pr = match_pr;
if (set_cpus_allowed_ptr(current, cpumask_of(i))) arg.target_state = state;
continue; arg.force = force;
ret = match_pr->throttling. ret = work_on_cpu(pr->id, acpi_processor_throttling_fn,
acpi_processor_set_throttling( &arg);
match_pr, t_state.target_state, force);
} }
} }
/* /*
@ -1168,17 +1168,12 @@ int acpi_processor_set_throttling(struct acpi_processor *pr,
* affected cpu to update the T-states. * affected cpu to update the T-states.
* The notifier event is THROTTLING_POSTCHANGE * The notifier event is THROTTLING_POSTCHANGE
*/ */
for_each_cpu(i, online_throttling_cpus) { for_each_cpu_and(i, cpu_online_mask, p_throttling->shared_cpu_map) {
t_state.cpu = i; t_state.cpu = i;
acpi_processor_throttling_notifier(THROTTLING_POSTCHANGE, acpi_processor_throttling_notifier(THROTTLING_POSTCHANGE,
&t_state); &t_state);
} }
/* restore the previous state */
/* FIXME: use work_on_cpu() */
set_cpus_allowed_ptr(current, saved_mask);
exit:
free_cpumask_var(online_throttling_cpus);
free_cpumask_var(saved_mask);
return ret; return ret;
} }

View File

@ -77,18 +77,24 @@ bool acpi_dev_resource_memory(struct acpi_resource *ares, struct resource *res)
switch (ares->type) { switch (ares->type) {
case ACPI_RESOURCE_TYPE_MEMORY24: case ACPI_RESOURCE_TYPE_MEMORY24:
memory24 = &ares->data.memory24; memory24 = &ares->data.memory24;
if (!memory24->address_length)
return false;
acpi_dev_get_memresource(res, memory24->minimum, acpi_dev_get_memresource(res, memory24->minimum,
memory24->address_length, memory24->address_length,
memory24->write_protect); memory24->write_protect);
break; break;
case ACPI_RESOURCE_TYPE_MEMORY32: case ACPI_RESOURCE_TYPE_MEMORY32:
memory32 = &ares->data.memory32; memory32 = &ares->data.memory32;
if (!memory32->address_length)
return false;
acpi_dev_get_memresource(res, memory32->minimum, acpi_dev_get_memresource(res, memory32->minimum,
memory32->address_length, memory32->address_length,
memory32->write_protect); memory32->write_protect);
break; break;
case ACPI_RESOURCE_TYPE_FIXED_MEMORY32: case ACPI_RESOURCE_TYPE_FIXED_MEMORY32:
fixed_memory32 = &ares->data.fixed_memory32; fixed_memory32 = &ares->data.fixed_memory32;
if (!fixed_memory32->address_length)
return false;
acpi_dev_get_memresource(res, fixed_memory32->address, acpi_dev_get_memresource(res, fixed_memory32->address,
fixed_memory32->address_length, fixed_memory32->address_length,
fixed_memory32->write_protect); fixed_memory32->write_protect);
@ -144,12 +150,16 @@ bool acpi_dev_resource_io(struct acpi_resource *ares, struct resource *res)
switch (ares->type) { switch (ares->type) {
case ACPI_RESOURCE_TYPE_IO: case ACPI_RESOURCE_TYPE_IO:
io = &ares->data.io; io = &ares->data.io;
if (!io->address_length)
return false;
acpi_dev_get_ioresource(res, io->minimum, acpi_dev_get_ioresource(res, io->minimum,
io->address_length, io->address_length,
io->io_decode); io->io_decode);
break; break;
case ACPI_RESOURCE_TYPE_FIXED_IO: case ACPI_RESOURCE_TYPE_FIXED_IO:
fixed_io = &ares->data.fixed_io; fixed_io = &ares->data.fixed_io;
if (!fixed_io->address_length)
return false;
acpi_dev_get_ioresource(res, fixed_io->address, acpi_dev_get_ioresource(res, fixed_io->address,
fixed_io->address_length, fixed_io->address_length,
ACPI_DECODE_10); ACPI_DECODE_10);

View File

@ -4175,6 +4175,7 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
/* Seagate Momentus SpinPoint M8 seem to have FPMDA_AA issues */ /* Seagate Momentus SpinPoint M8 seem to have FPMDA_AA issues */
{ "ST1000LM024 HN-M101MBB", "2AR10001", ATA_HORKAGE_BROKEN_FPDMA_AA }, { "ST1000LM024 HN-M101MBB", "2AR10001", ATA_HORKAGE_BROKEN_FPDMA_AA },
{ "ST1000LM024 HN-M101MBB", "2BA30001", ATA_HORKAGE_BROKEN_FPDMA_AA },
/* Blacklist entries taken from Silicon Image 3124/3132 /* Blacklist entries taken from Silicon Image 3124/3132
Windows driver .inf file - also several Linux problem reports */ Windows driver .inf file - also several Linux problem reports */
@ -4224,7 +4225,7 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
/* devices that don't properly handle queued TRIM commands */ /* devices that don't properly handle queued TRIM commands */
{ "Micron_M500*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, }, { "Micron_M500*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, },
{ "Crucial_CT???M500SSD1", NULL, ATA_HORKAGE_NO_NCQ_TRIM, }, { "Crucial_CT???M500SSD*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, },
/* /*
* Some WD SATA-I drives spin up and down erratically when the link * Some WD SATA-I drives spin up and down erratically when the link

View File

@ -1580,6 +1580,7 @@ static int fw_pm_notify(struct notifier_block *notify_block,
switch (mode) { switch (mode) {
case PM_HIBERNATION_PREPARE: case PM_HIBERNATION_PREPARE:
case PM_SUSPEND_PREPARE: case PM_SUSPEND_PREPARE:
case PM_RESTORE_PREPARE:
kill_requests_without_uevent(); kill_requests_without_uevent();
device_cache_fw_images(); device_cache_fw_images();
break; break;

View File

@ -874,7 +874,7 @@ bio_pageinc(struct bio *bio)
/* Non-zero page count for non-head members of /* Non-zero page count for non-head members of
* compound pages is no longer allowed by the kernel. * compound pages is no longer allowed by the kernel.
*/ */
page = compound_trans_head(bv.bv_page); page = compound_head(bv.bv_page);
atomic_inc(&page->_count); atomic_inc(&page->_count);
} }
} }
@ -887,7 +887,7 @@ bio_pagedec(struct bio *bio)
struct bvec_iter iter; struct bvec_iter iter;
bio_for_each_segment(bv, bio, iter) { bio_for_each_segment(bv, bio, iter) {
page = compound_trans_head(bv.bv_page); page = compound_head(bv.bv_page);
atomic_dec(&page->_count); atomic_dec(&page->_count);
} }
} }

View File

@ -53,7 +53,7 @@
#define MTIP_FTL_REBUILD_TIMEOUT_MS 2400000 #define MTIP_FTL_REBUILD_TIMEOUT_MS 2400000
/* unaligned IO handling */ /* unaligned IO handling */
#define MTIP_MAX_UNALIGNED_SLOTS 8 #define MTIP_MAX_UNALIGNED_SLOTS 2
/* Macro to extract the tag bit number from a tag value. */ /* Macro to extract the tag bit number from a tag value. */
#define MTIP_TAG_BIT(tag) (tag & 0x1F) #define MTIP_TAG_BIT(tag) (tag & 0x1F)

View File

@ -612,6 +612,8 @@ static ssize_t disksize_store(struct device *dev,
disksize = PAGE_ALIGN(disksize); disksize = PAGE_ALIGN(disksize);
meta = zram_meta_alloc(disksize); meta = zram_meta_alloc(disksize);
if (!meta)
return -ENOMEM;
down_write(&zram->init_lock); down_write(&zram->init_lock);
if (zram->init_done) { if (zram->init_done) {
up_write(&zram->init_lock); up_write(&zram->init_lock);

View File

@ -242,7 +242,7 @@ of_at91_clk_master_setup(struct device_node *np, struct at91_pmc *pmc,
irq = irq_of_parse_and_map(np, 0); irq = irq_of_parse_and_map(np, 0);
if (!irq) if (!irq)
return; goto out_free_characteristics;
clk = at91_clk_register_master(pmc, irq, name, num_parents, clk = at91_clk_register_master(pmc, irq, name, num_parents,
parent_names, layout, parent_names, layout,

View File

@ -494,6 +494,9 @@ static const struct file_operations nomadik_src_clk_debugfs_ops = {
static int __init nomadik_src_clk_init_debugfs(void) static int __init nomadik_src_clk_init_debugfs(void)
{ {
/* Vital for multiplatform */
if (!src_base)
return -ENODEV;
src_pcksr0_boot = readl(src_base + SRC_PCKSR0); src_pcksr0_boot = readl(src_base + SRC_PCKSR0);
src_pcksr1_boot = readl(src_base + SRC_PCKSR1); src_pcksr1_boot = readl(src_base + SRC_PCKSR1);
debugfs_create_file("nomadik-src-clk", S_IFREG | S_IRUGO, debugfs_create_file("nomadik-src-clk", S_IFREG | S_IRUGO,

View File

@@ -2226,24 +2226,25 @@ EXPORT_SYMBOL_GPL(devm_clk_unregister);
  */
 int __clk_get(struct clk *clk)
 {
-	if (clk && !try_module_get(clk->owner))
-		return 0;
+	if (clk) {
+		if (!try_module_get(clk->owner))
+			return 0;
 
-	kref_get(&clk->ref);
+		kref_get(&clk->ref);
+	}
 	return 1;
 }
 
 void __clk_put(struct clk *clk)
 {
-	if (WARN_ON_ONCE(IS_ERR(clk)))
+	if (!clk || WARN_ON_ONCE(IS_ERR(clk)))
 		return;
 
 	clk_prepare_lock();
 	kref_put(&clk->ref, __clk_release);
 	clk_prepare_unlock();
 
-	if (clk)
-		module_put(clk->owner);
+	module_put(clk->owner);
 }
 
 /*** clk rate change notifiers ***/
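The rework above makes a NULL clk, which the clk API treats as a valid "no clock" handle, a safe no-op in both helpers: __clk_get() used to fall through to kref_get() on a NULL pointer, and __clk_put() only tested clk for NULL after kref_put() had already dereferenced it. The NULL-tolerant get/put shape, reduced to a compilable userspace sketch:

#include <assert.h>
#include <stddef.h>

struct obj { int refcount; };

static int obj_get(struct obj *o)
{
	if (o)			/* NULL is a valid handle: succeed, touch nothing */
		o->refcount++;
	return 1;
}

static void obj_put(struct obj *o)
{
	if (!o)			/* check before any dereference */
		return;
	o->refcount--;
}

int main(void)
{
	struct obj o = { 1 };

	obj_get(NULL);		/* must not crash */
	obj_put(NULL);		/* must not crash */
	obj_get(&o);
	obj_put(&o);
	assert(o.refcount == 1);
	return 0;
}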


@@ -179,6 +179,7 @@ static struct clk *clk_register_psc(struct device *dev,
 	init.name = name;
 	init.ops = &clk_psc_ops;
+	init.flags = 0;
 	init.parent_names = (parent_name ? &parent_name : NULL);
 	init.num_parents = (parent_name ? 1 : 0);
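Here init is a stack-allocated struct clk_init_data, so any field left unassigned holds whatever was on the stack; the added line stops the clock framework from acting on garbage flag bits. A designated initializer avoids the whole bug class by zeroing every unnamed field, sketched below with an invented struct:

#include <stdio.h>

struct init_data {
	const char *name;
	unsigned long flags;
	int num_parents;
};

int main(void)
{
	/* Unnamed fields are implicitly zeroed: 'flags' is never stack garbage. */
	struct init_data init = {
		.name = "pll1",
		.num_parents = 1,
	};

	printf("flags = %lu\n", init.flags);	/* always 0 */
	return 0;
}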


@@ -141,13 +141,6 @@ static const struct coreclk_soc_desc a370_coreclks = {
 	.num_ratios = ARRAY_SIZE(a370_coreclk_ratios),
 };
 
-static void __init a370_coreclk_init(struct device_node *np)
-{
-	mvebu_coreclk_setup(np, &a370_coreclks);
-}
-CLK_OF_DECLARE(a370_core_clk, "marvell,armada-370-core-clock",
-	       a370_coreclk_init);
-
 /*
  * Clock Gating Control
  */
@@ -168,9 +161,15 @@ static const struct clk_gating_soc_desc a370_gating_desc[] __initconst = {
 	{ }
 };
 
-static void __init a370_clk_gating_init(struct device_node *np)
+static void __init a370_clk_init(struct device_node *np)
 {
-	mvebu_clk_gating_setup(np, a370_gating_desc);
+	struct device_node *cgnp =
+		of_find_compatible_node(NULL, NULL, "marvell,armada-370-gating-clock");
+
+	mvebu_coreclk_setup(np, &a370_coreclks);
+
+	if (cgnp)
+		mvebu_clk_gating_setup(cgnp, a370_gating_desc);
 }
-CLK_OF_DECLARE(a370_clk_gating, "marvell,armada-370-gating-clock",
-	       a370_clk_gating_init);
+CLK_OF_DECLARE(a370_clk, "marvell,armada-370-core-clock", a370_clk_init);
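This rework, repeated below for Armada XP, Dove and Kirkwood, collapses two independent CLK_OF_DECLARE() entries into one init hook: the gating clocks depend on the core clocks, so looking up the gating-clock node explicitly and initializing it from the core-clock hook enforces that ordering instead of leaving it to of_clk_init()'s traversal order. A hedged kernel-style sketch of the idiom (the compatible strings and the two setup helpers are placeholders, not real APIs):

#include <linux/clk-provider.h>
#include <linux/of.h>

static void __init example_clk_init(struct device_node *np)
{
	/* find the dependent gating-clock node ourselves */
	struct device_node *cgnp =
		of_find_compatible_node(NULL, NULL, "vendor,example-gating-clock");

	setup_core_clocks(np);			/* core clocks first, always */

	if (cgnp)				/* gates only if the node exists */
		setup_gating_clocks(cgnp);
}
CLK_OF_DECLARE(example_clk, "vendor,example-core-clock", example_clk_init);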


@@ -158,13 +158,6 @@ static const struct coreclk_soc_desc axp_coreclks = {
 	.num_ratios = ARRAY_SIZE(axp_coreclk_ratios),
 };
 
-static void __init axp_coreclk_init(struct device_node *np)
-{
-	mvebu_coreclk_setup(np, &axp_coreclks);
-}
-CLK_OF_DECLARE(axp_core_clk, "marvell,armada-xp-core-clock",
-	       axp_coreclk_init);
-
 /*
  * Clock Gating Control
  */
@@ -202,9 +195,14 @@ static const struct clk_gating_soc_desc axp_gating_desc[] __initconst = {
 	{ }
 };
 
-static void __init axp_clk_gating_init(struct device_node *np)
+static void __init axp_clk_init(struct device_node *np)
 {
-	mvebu_clk_gating_setup(np, axp_gating_desc);
+	struct device_node *cgnp =
+		of_find_compatible_node(NULL, NULL, "marvell,armada-xp-gating-clock");
+
+	mvebu_coreclk_setup(np, &axp_coreclks);
+
+	if (cgnp)
+		mvebu_clk_gating_setup(cgnp, axp_gating_desc);
 }
-CLK_OF_DECLARE(axp_clk_gating, "marvell,armada-xp-gating-clock",
-	       axp_clk_gating_init);
+CLK_OF_DECLARE(axp_clk, "marvell,armada-xp-core-clock", axp_clk_init);


@@ -154,12 +154,6 @@ static const struct coreclk_soc_desc dove_coreclks = {
 	.num_ratios = ARRAY_SIZE(dove_coreclk_ratios),
 };
 
-static void __init dove_coreclk_init(struct device_node *np)
-{
-	mvebu_coreclk_setup(np, &dove_coreclks);
-}
-CLK_OF_DECLARE(dove_core_clk, "marvell,dove-core-clock", dove_coreclk_init);
-
 /*
  * Clock Gating Control
  */
@@ -186,9 +180,14 @@ static const struct clk_gating_soc_desc dove_gating_desc[] __initconst = {
 	{ }
 };
 
-static void __init dove_clk_gating_init(struct device_node *np)
+static void __init dove_clk_init(struct device_node *np)
 {
-	mvebu_clk_gating_setup(np, dove_gating_desc);
+	struct device_node *cgnp =
+		of_find_compatible_node(NULL, NULL, "marvell,dove-gating-clock");
+
+	mvebu_coreclk_setup(np, &dove_coreclks);
+
+	if (cgnp)
+		mvebu_clk_gating_setup(cgnp, dove_gating_desc);
 }
-CLK_OF_DECLARE(dove_clk_gating, "marvell,dove-gating-clock",
-	       dove_clk_gating_init);
+CLK_OF_DECLARE(dove_clk, "marvell,dove-core-clock", dove_clk_init);


@@ -193,13 +193,6 @@ static const struct coreclk_soc_desc kirkwood_coreclks = {
 	.num_ratios = ARRAY_SIZE(kirkwood_coreclk_ratios),
 };
 
-static void __init kirkwood_coreclk_init(struct device_node *np)
-{
-	mvebu_coreclk_setup(np, &kirkwood_coreclks);
-}
-CLK_OF_DECLARE(kirkwood_core_clk, "marvell,kirkwood-core-clock",
-	       kirkwood_coreclk_init);
-
 static const struct coreclk_soc_desc mv88f6180_coreclks = {
 	.get_tclk_freq = kirkwood_get_tclk_freq,
 	.get_cpu_freq = mv88f6180_get_cpu_freq,
@@ -208,13 +201,6 @@ static const struct coreclk_soc_desc mv88f6180_coreclks = {
 	.num_ratios = ARRAY_SIZE(kirkwood_coreclk_ratios),
 };
 
-static void __init mv88f6180_coreclk_init(struct device_node *np)
-{
-	mvebu_coreclk_setup(np, &mv88f6180_coreclks);
-}
-CLK_OF_DECLARE(mv88f6180_core_clk, "marvell,mv88f6180-core-clock",
-	       mv88f6180_coreclk_init);
-
 /*
  * Clock Gating Control
  */
@@ -239,9 +225,21 @@ static const struct clk_gating_soc_desc kirkwood_gating_desc[] __initconst = {
 	{ }
 };
 
-static void __init kirkwood_clk_gating_init(struct device_node *np)
+static void __init kirkwood_clk_init(struct device_node *np)
 {
-	mvebu_clk_gating_setup(np, kirkwood_gating_desc);
+	struct device_node *cgnp =
+		of_find_compatible_node(NULL, NULL, "marvell,kirkwood-gating-clock");
+
+	if (of_device_is_compatible(np, "marvell,mv88f6180-core-clock"))
+		mvebu_coreclk_setup(np, &mv88f6180_coreclks);
+	else
+		mvebu_coreclk_setup(np, &kirkwood_coreclks);
+
+	if (cgnp)
+		mvebu_clk_gating_setup(cgnp, kirkwood_gating_desc);
 }
-CLK_OF_DECLARE(kirkwood_clk_gating, "marvell,kirkwood-gating-clock",
-	       kirkwood_clk_gating_init);
+CLK_OF_DECLARE(kirkwood_clk, "marvell,kirkwood-core-clock",
+	       kirkwood_clk_init);
+CLK_OF_DECLARE(mv88f6180_clk, "marvell,mv88f6180-core-clock",
+	       kirkwood_clk_init);


@@ -26,6 +26,8 @@ struct rcar_gen2_cpg {
 	void __iomem *reg;
 };
 
+#define CPG_FRQCRB			0x00000004
+#define CPG_FRQCRB_KICK			BIT(31)
 #define CPG_SDCKCR			0x00000074
 #define CPG_PLL0CR			0x000000d8
 #define CPG_FRQCRC			0x000000e0
@@ -45,6 +47,7 @@ struct rcar_gen2_cpg {
 struct cpg_z_clk {
 	struct clk_hw hw;
 	void __iomem *reg;
+	void __iomem *kick_reg;
 };
 
 #define to_z_clk(_hw)	container_of(_hw, struct cpg_z_clk, hw)
@@ -83,17 +86,45 @@ static int cpg_z_clk_set_rate(struct clk_hw *hw, unsigned long rate,
 {
 	struct cpg_z_clk *zclk = to_z_clk(hw);
 	unsigned int mult;
-	u32 val;
+	u32 val, kick;
+	unsigned int i;
 
 	mult = div_u64((u64)rate * 32, parent_rate);
 	mult = clamp(mult, 1U, 32U);
 
+	if (clk_readl(zclk->kick_reg) & CPG_FRQCRB_KICK)
+		return -EBUSY;
+
 	val = clk_readl(zclk->reg);
 	val &= ~CPG_FRQCRC_ZFC_MASK;
 	val |= (32 - mult) << CPG_FRQCRC_ZFC_SHIFT;
 	clk_writel(val, zclk->reg);
 
-	return 0;
+	/*
+	 * Set KICK bit in FRQCRB to update hardware setting and wait for
+	 * clock change completion.
+	 */
+	kick = clk_readl(zclk->kick_reg);
+	kick |= CPG_FRQCRB_KICK;
+	clk_writel(kick, zclk->kick_reg);
+
+	/*
+	 * Note: There is no HW information about the worst case latency.
+	 *
+	 * Using experimental measurements, it seems that no more than
+	 * ~10 iterations are needed, independently of the CPU rate.
+	 * Since this value might be dependent on external xtal rate, pll1
+	 * rate or even the other emulation clocks rate, use 1000 as a
+	 * "super" safe value.
+	 */
+	for (i = 1000; i; i--) {
+		if (!(clk_readl(zclk->kick_reg) & CPG_FRQCRB_KICK))
+			return 0;
+
+		cpu_relax();
+	}
+
+	return -ETIMEDOUT;
 }
 
 static const struct clk_ops cpg_z_clk_ops = {
@@ -120,6 +151,7 @@ static struct clk * __init cpg_z_clk_register(struct rcar_gen2_cpg *cpg)
 	init.num_parents = 1;
 
 	zclk->reg = cpg->reg + CPG_FRQCRC;
+	zclk->kick_reg = cpg->reg + CPG_FRQCRB;
 	zclk->hw.init = &init;
 
 	clk = clk_register(NULL, &zclk->hw);
@@ -186,7 +218,7 @@ rcar_gen2_cpg_register_clock(struct device_node *np, struct rcar_gen2_cpg *cpg,
 			     const char *name)
 {
 	const struct clk_div_table *table = NULL;
-	const char *parent_name = "main";
+	const char *parent_name;
 	unsigned int shift;
 	unsigned int mult = 1;
 	unsigned int div = 1;
@@ -201,23 +233,31 @@ rcar_gen2_cpg_register_clock(struct device_node *np, struct rcar_gen2_cpg *cpg,
 		 * the multiplier value.
 		 */
 		u32 value = clk_readl(cpg->reg + CPG_PLL0CR);
+		parent_name = "main";
 		mult = ((value >> 24) & ((1 << 7) - 1)) + 1;
 	} else if (!strcmp(name, "pll1")) {
+		parent_name = "main";
 		mult = config->pll1_mult / 2;
 	} else if (!strcmp(name, "pll3")) {
+		parent_name = "main";
 		mult = config->pll3_mult;
 	} else if (!strcmp(name, "lb")) {
+		parent_name = "pll1_div2";
 		div = cpg_mode & BIT(18) ? 36 : 24;
 	} else if (!strcmp(name, "qspi")) {
+		parent_name = "pll1_div2";
 		div = (cpg_mode & (BIT(3) | BIT(2) | BIT(1))) == BIT(2)
-		    ? 16 : 20;
+		    ? 8 : 10;
 	} else if (!strcmp(name, "sdh")) {
+		parent_name = "pll1_div2";
 		table = cpg_sdh_div_table;
 		shift = 8;
 	} else if (!strcmp(name, "sd0")) {
+		parent_name = "pll1_div2";
 		table = cpg_sd01_div_table;
 		shift = 4;
 	} else if (!strcmp(name, "sd1")) {
+		parent_name = "pll1_div2";
 		table = cpg_sd01_div_table;
 		shift = 0;
 	} else if (!strcmp(name, "z")) {

Some files were not shown because too many files have changed in this diff.