Commit Graph

12 Commits

Author SHA1 Message Date
Chris Metcalf
02b67e0954 tile PCI DMA: fix bug in non-page-aligned accessors
The code incorrectly masked with PAGE_OFFSET instead of
PAGE_SIZE-1.  This only matters when trying to do a
non-page-aligned DMA; it was noticed during code inspection.
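
A minimal sketch of the masking fix, using an illustrative helper name
rather than the actual accessor code:

    #include <linux/mm.h>      /* PAGE_SIZE */
    #include <linux/types.h>   /* dma_addr_t */

    /* Offset of a DMA address within its page: mask with PAGE_SIZE - 1;
     * PAGE_OFFSET is a virtual-address constant and gives wrong results. */
    static inline size_t dma_in_page_offset(dma_addr_t dma_addr)
    {
            return (size_t)(dma_addr & (PAGE_SIZE - 1));
    }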

Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
2013-08-06 12:56:04 -04:00
Chris Metcalf
dc7d5cf2ca tile PCI RC: add dma_get_required_mask()
The standard kernel function dma_get_required_mask() uses the
highest DRAM address to determine if 32-bit or 64-bit DMA addressing
is needed.  This only works on architectures that have direct mapping
between the PA and the PCI address space, i.e. those that don't have
I/O TLBs, or have an I/O TLB but choose to use direct mapping.  Neither
of these are true for tilegx.  Whether to use 64-bit DMA should depend
on the PCI device's capability only, not on the amount of DRAM
installed, so we now advertise a 64-bit DMA mask unconditionally.
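
A minimal sketch of the resulting behavior, written against the generic
dma_get_required_mask() override signature (an assumption about placement,
not the literal patch):

    #include <linux/device.h>
    #include <linux/dma-mapping.h>

    /* Advertise 64-bit DMA unconditionally: the IOMMU remaps PCI
     * addresses, so the amount of installed DRAM is not the deciding
     * factor on this architecture. */
    u64 dma_get_required_mask(struct device *dev)
    {
            return DMA_BIT_MASK(64);
    }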

Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
2013-08-06 12:56:00 -04:00
Chris Metcalf
9b6846cede tile PCI DMA: handle a NULL dev argument properly
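The commit body gives no detail; the snippet below only illustrates the
common kernel pattern for a NULL dev (fall back to a conservative 32-bit
mask) and is an assumption, not the actual change:

    #include <linux/device.h>
    #include <linux/dma-mapping.h>

    /* Hypothetical helper: treat a NULL dev as a 32-bit-only device. */
    static u64 coherent_mask_or_default(struct device *dev)
    {
            return dev ? dev->coherent_dma_mask : DMA_BIT_MASK(32);
    }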
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
2013-08-06 12:55:36 -04:00
Chris Metcalf
803c874abe tile: support LSI MEGARAID SAS HBA hybrid dma_ops
The LSI MEGARAID SAS HBA suffers from the problem where it can do
64-bit DMA to streaming buffers but not to consistent buffers.
In other words, 64-bit DMA is used for disk data transfers and 32-bit
DMA must be used for control message transfers. According to LSI,
the firmware is not fully functional yet. This change implements a
kind of hybrid dma_ops to support this.

Note that on most other platforms, the 64-bit DMA addressing space is the
same as the 32-bit DMA space and they overlap the physical memory space.
No special arrangement is needed to support this kind of mixed DMA
capability.  On TILE-Gx, the 64-bit DMA space is completely separate
from the 32-bit DMA space.  Due to the use of the IOMMU, the 64-bit DMA
space doesn't overlap the physical memory space.  On the other hand,
the 32-bit DMA space overlaps the physical memory space under 4GB.
The separate address spaces make it necessary to have separate dma_ops.
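
A heavily simplified sketch of such hybrid dma_ops; the ops-table names and
the current-kernel dma_map_ops signatures are assumptions for illustration:

    #include <linux/dma-map-ops.h>
    #include <linux/dma-mapping.h>

    /* Assumed to exist elsewhere: the 64-bit (IOMMU) and 32-bit ops. */
    extern const struct dma_map_ops tile_pci_dma_ops64;
    extern const struct dma_map_ops tile_pci_dma_ops32;

    /* Consistent (coherent) buffers must come from the 32-bit space. */
    static void *hybrid_alloc(struct device *dev, size_t size,
                              dma_addr_t *handle, gfp_t gfp,
                              unsigned long attrs)
    {
            return tile_pci_dma_ops32.alloc(dev, size, handle, gfp, attrs);
    }

    /* Streaming buffers may use the full 64-bit IOMMU-mapped space. */
    static dma_addr_t hybrid_map_page(struct device *dev, struct page *page,
                                      unsigned long offset, size_t size,
                                      enum dma_data_direction dir,
                                      unsigned long attrs)
    {
            return tile_pci_dma_ops64.map_page(dev, page, offset, size,
                                               dir, attrs);
    }

    static const struct dma_map_ops hybrid_pci_dma_ops = {
            .alloc    = hybrid_alloc,
            .map_page = hybrid_map_page,
            /* remaining callbacks would delegate to the 64-bit ops */
    };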

Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
2013-08-06 12:52:33 -04:00
Chris Metcalf
41bb38fc53 tile pci: enable IOMMU to support DMA for legacy devices
This change uses the TRIO IOMMU to map the PCI DMA space and physical
memory at different addresses.  We also now use the dma_mapping_ops
to provide support for non-PCI DMA, PCIe DMA (64-bit) and legacy PCI
DMA (32-bit).  We use the kernel's software I/O TLB framework
(i.e. bounce buffers) for the legacy 32-bit PCI device support since
there are a limited number of TLB entries in the IOMMU and it is
non-trivial to handle indexing, searching, matching, etc.  For 32-bit
devices the performance impact of bounce buffers should not be a concern.
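
A simplified sketch of the per-device routing this implies; the ops-table
names and the use of the generic set_dma_ops() helper are assumptions:

    #include <linux/dma-map-ops.h>
    #include <linux/dma-mapping.h>

    extern const struct dma_map_ops gx_pci_dma_map_ops;        /* IOMMU path */
    extern const struct dma_map_ops gx_legacy_pci_dma_map_ops; /* swiotlb path */

    /* Send 64-bit-capable devices through the IOMMU; let legacy 32-bit
     * devices bounce through the software I/O TLB instead. */
    static void tile_pci_choose_dma_ops(struct device *dev, u64 mask)
    {
            if (mask > DMA_BIT_MASK(32))
                    set_dma_ops(dev, &gx_pci_dma_map_ops);
            else
                    set_dma_ops(dev, &gx_legacy_pci_dma_map_ops);
    }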

Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
2012-07-18 16:40:17 -04:00
Chris Metcalf
eef015c8aa arch/tile: enable ZONE_DMA for tilegx
This is required for PCI root complex legacy support and USB OHCI root
complex support.  With this change tilegx now supports allocating memory
whose PA fits in 32 bits.
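
For illustration only, the kind of request ZONE_DMA makes possible (generic
GFP_DMA usage, not code from the patch):

    #include <linux/gfp.h>

    /* One page whose physical address fits in 32 bits, as needed by
     * legacy PCI and OHCI controllers behind the root complex. */
    static void *alloc_dma32_page(void)
    {
            return (void *)__get_free_page(GFP_KERNEL | GFP_DMA);
    }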

Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
2012-07-18 16:40:11 -04:00
Chris Metcalf
bbaa22c3a0 tilegx pci: support I/O to arbitrarily-cached pages
The tilegx PCI root complex support (currently only in linux-next)
is limited to pages that are homed and cached in the default manner,
i.e. "hash-for-home".  This change supports delivery of I/O data to
pages that are cached in other ways (locally on a particular core,
uncached, user-managed incoherent, etc.).

A large part of the change is supporting flushing pages from cache
on particular homes so that we can transition the data that we are
delivering to or from the device appropriately.  The new homecache_finv*
routines handle this.
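
A conceptual sketch only: homecache_finv_page() is one of the routines
named above, but the wrapper and its exact use here are assumptions:

    #include <linux/mm_types.h>
    #include <asm/homecache.h>   /* tile-specific homecache_finv_* */

    /* Before the device reads or writes a page homed on a particular
     * core (or uncached/incoherent), flush and invalidate the cached
     * copy at that home. */
    static void finv_page_for_dma(struct page *page)
    {
            homecache_finv_page(page);
    }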

Some changes to page_table_range_init() were also required to make
the fixmap code work correctly on tilegx; it hadn't been used there
before.

We also remove some stub mark_caches_evicted_*() routines that
were just no-ops anyway.

Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
2012-07-18 16:40:05 -04:00
Chris Metcalf
3989efb770 arch/tile: add a few #includes and an EXPORT to catch up with kernel changes.
The empty_zero_page[] export is required for ZERO_PAGE() module references.
The #includes are due to changes in implicit inclusion, and should of
course have been in the sources from the beginning.
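
The export itself follows the usual pattern; shown here as a sketch, with
the header choice an assumption:

    #include <linux/export.h>
    #include <asm/page.h>   /* assumed to declare empty_zero_page[] */

    /* Let modules that use ZERO_PAGE() resolve the shared zero page. */
    EXPORT_SYMBOL(empty_zero_page);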

Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
2011-12-03 15:31:41 -05:00
James Hogan
ef0aaf873e tile,mn10300: add device parameter to dma_cache_sync()
Since v2.6.20, "Pass struct dev pointer to dma_cache_sync()"
(d3fa72e455), dma_cache_sync() has taken a struct device pointer, but
this parameter appears to be missing from the tile and mn10300
implementations, so add it.
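
For reference, a sketch of the cross-architecture prototype the tile
version needs to match; the body is a deliberate placeholder, not the real
implementation:

    #include <linux/device.h>
    #include <linux/dma-direction.h>
    #include <linux/types.h>

    /* The generic API passes the device so implementations can make
     * per-device decisions. */
    static inline void dma_cache_sync(struct device *dev, void *vaddr,
                                      size_t size,
                                      enum dma_data_direction direction)
    {
            /* placeholder body only */
    }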

Signed-off-by: James Hogan <james.hogan@imgtec.com>
[cmetcalf@tilera.com: took only the "tile" portion as I don't maintain mn10300]
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
2011-05-04 14:41:36 -04:00
Chris Metcalf
76c567fbba arch/tile: support 4KB page size as well as 64KB
The Tilera architecture traditionally supports 64KB page sizes
to improve TLB utilization and improve performance when the
hardware is being used primarily to run a single application.

For more generic server scenarios, it can be beneficial to run
with 4KB page sizes, so this commit allows that to be specified
(by modifying the arch/tile/include/hv/pagesize.h header).

As part of this change, we also re-worked the PTE management
slightly so that PTE writes all go through a __set_pte() function
where we can do some additional validation.  The set_pte_order()
function was eliminated since the "order" argument wasn't being used.
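
An illustrative-only sketch of funneling PTE writes through one helper; the
validation shown is an assumption, not the actual check:

    #include <linux/bug.h>
    #include <asm/pgtable.h>   /* pte_t */

    /* Single choke point for PTE writes, so extra validation can live
     * in one place. */
    static void __set_pte(pte_t *ptep, pte_t pte)
    {
            BUG_ON(ptep == NULL);   /* example-only sanity check */
            *ptep = pte;
    }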

One bug uncovered was in the PCI DMA code, which wasn't properly
flushing the specified range.  This was benign with 64KB pages,
but with 4KB pages we were getting some larger flushes wrong.

The per-cpu memory reservation code also needed updating to
conform with the newer percpu stuff; before it always chose 64KB,
and that was always correct, but with 4KB granularity we now have
to pay closer attention and reserve the amount of memory that will
be requested when the percpu code starts allocating.

Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
2011-03-10 13:17:53 -05:00
Chris Metcalf
482e6f8466 arch/tile: Do not use GFP_KERNEL for dma_alloc_coherent().
Feedback from fujita.tomonori@lab.ntt.co.jp.
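
The line above gives no detail; the snippet below only illustrates the
general convention the subject line points at (honor the caller's gfp flags
rather than hardcoding GFP_KERNEL) and is an assumption, not the literal
patch:

    #include <linux/dma-mapping.h>
    #include <linux/gfp.h>
    #include <linux/mm.h>

    /* Hypothetical backend: use the gfp flags the caller supplied,
     * since dma_alloc_coherent() callers may be in atomic context. */
    static void *example_dma_alloc(struct device *dev, size_t size,
                                   dma_addr_t *dma_handle, gfp_t gfp)
    {
            struct page *pg = alloc_pages(gfp | __GFP_ZERO, get_order(size));

            if (!pg)
                    return NULL;
            *dma_handle = (dma_addr_t)page_to_pfn(pg) << PAGE_SHIFT;
            return page_address(pg);
    }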

Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
2010-06-05 10:26:55 -04:00
Chris Metcalf
867e359b97 arch/tile: core support for Tilera 32-bit chips.
This change is the core kernel support for TILEPro and TILE64 chips.
No driver support (except the console driver) is included yet.

This includes the relevant Linux headers in asm/; the low-level
"Tile architecture" headers in arch/, which are
shared with the hypervisor, etc., and are build-system agnostic;
and the relevant hypervisor headers in hv/.

Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Reviewed-by: Paul Mundt <lethal@linux-sh.org>
2010-06-04 17:11:18 -04:00